A company’s social media pages, profiles and accounts (and their followers and other connections) are generally considered valuable business assets. Recent court decisions illustrate the importance of clear policies and procedures to address ownership and appropriate use of business-related social media assets.

While most businesses recognize the importance of maintaining at least a minimal Internet presence, an increasing number of businesses are attempting to reach consumers where they congregate most: on social media. The benefits of maintaining an active social media presence include developing loyal relationships with customers, leveraging those relationships into quantifiable, networked campaigns, and refining your brand with niche audiences. Because of both the company resources spent developing these channels and their potential value and return, it is important to remember that social media accounts are company assets and should be protected accordingly through policies and procedures, as would any other company intellectual property.

While major brands often farm out social media management and content creation to marketing firms, small and medium-sized businesses often do not have this financial flexibility. Accordingly, chances are that a member of management or one of the employees takes on this role. In that case, with both personal and business interests in the same sphere, it is especially crucial to set clear expectations and boundaries around social media responsibilities in the workplace and the ownership of your business’ accounts and content.

As a recent case in Texas (In re CTLI, LLC, 2015 Bankr. LEXIS 1117 (Bankr. S.D. Tex. 2015)) makes clear, when it comes to social media, the line between personal and professional can be blurry, and when companies fail or partnerships falter, ownership of social media accounts can result in costly litigation.

The dispute in CTLI centered on ownership of the Facebook account for a firearms business. The account was run by one of the business’ owners, who posted a mixture of professional content promoting the business and personal content reflecting his interests, activities and opinions. When the business filed for Chapter 11 protection, the social media-savvy former owner refused to relinquish control of the Facebook account, claiming that the amount of time, goodwill and his own personality that he had invested in developing the account entitled him to ownership. The U.S. Bankruptcy Court ultimately disagreed and ruled that the account was property of the business, but not before wading through the thorny issues of personal privacy, contract interpretation relating to Facebook’s terms and conditions, and the separation of personal and business assets.

Some important lessons for your business to keep in mind:

  • Have a written Technology Use and Social Media Policy in place for all of your employees to read and sign. These policies should include parameters for appropriate uses of company technology, guidelines on how to discuss your company online and in social media (even when your employees are using their own personal accounts), and clear definitions concerning who owns what when it comes to devices and accounts.
  • While interacting with consumers can be great for business, consider prohibiting your social media managers from sending direct/private messages from your customer-facing business accounts. While you may permit employees to send personal emails from their work computers, this is very different from sending a personal message from your company’s branded Facebook or Twitter account.
  • Social media marketing blurs the line between personal and professional content. Something that you might consider to be a personal comment could be seen in court as an attempt to integrate your business’ brand with your target customers or your local community. Just because you are posting casual or personal items from your official business account does not mean that the accounts belong to you or your employees.
  • An effective social media manager may be able to generate hundreds or thousands of followers or fans for your business, but it is important for them to know that it is the business, and not the employee, that ultimately owns those accounts and the followers that go with them, no matter how much of themselves and their personality the employee has poured into developing the accounts.
  • Maintain a record of all of your social media account credentials, such as account names, user handles, and passwords. Employees should be prohibited from altering these credentials or using their own passwords. In the event that you need to remove an employee’s access, this will help you avoid being in the position of demanding passwords that the employee may also be using for private, personal accounts.

If you need help drafting an effective Technology Use or Social Media Policy for your business or simply have questions about the benefits and risks of leveraging social media to help your business grow, contact OlenderFeldman LLP.

The biggest privacy challenges affecting businesses today are regulatory scrutiny from government agencies, media coverage with unintended consequences, and privacy risks that are discovered during corporate transactions.

Rapidly growing eCommerce and technology companies typically focus on creating viable products and services, adapting business models and responding to challenges, and using data in new ways to glean valuable insights and advantages. They often achieve success by disrupting existing industry norms and flouting convention in an attempt to do things better, faster and more cost-effectively. In the tech world, this strategy is often a blueprint for success.  At the same time, this strategy also often raises privacy concerns from regulators and investors.  In fact, three of the biggest privacy challenges affecting businesses today are regulatory scrutiny from government agencies (and potentially, personal liability arising from such scrutiny), media coverage with unintended consequences, and privacy risks that are discovered during corporate transactions.

Regulatory Scrutiny Of Privacy Practices

Government regulators, led by the Federal Trade Commission (“FTC”), have taken an activist role in enforcing privacy protections.  The FTC often does so by utilizing its powers under the FTC Act, which enables the FTC to investigate and prosecute companies and individuals for “unfair or deceptive acts or practices.” Among the activities the FTC considers to fall under the “unfair or deceptive” umbrella are a company’s failure to honor its privacy promises, violations of consumers’ privacy rights, and failure to maintain reasonably adequate security for sensitive consumer information.

Though most of the FTC’s investigations are settled privately, those that do become public (usually as a result of a company refusing to cooperate voluntarily or disagreeing with the FTC on the proper resolution) are often instructive. For example, the FTC recently settled charges against Snapchat, the developer of a popular mobile messaging app.  The FTC accused Snapchat of deceiving consumers with promises about the disappearing nature of messages sent through the service, the amount of personal data Snapchat collected, and the security measures taken to protect that data from misuse and unauthorized disclosure.  Similarly, when Facebook acquired WhatsApp, another cross-platform mobile messaging app, the FTC explicitly warned both Facebook and WhatsApp that WhatsApp had made clear privacy promises to consumers, and that WhatsApp would be obligated to continue its current privacy practices ― even if such policies differ from those of Facebook ― or face FTC charges. The takeaway from the FTC’s recent investigations and enforcement actions is clear: (1) businesses should be very careful about the privacy representations that they make to consumers; (2) businesses should comply with the representations they make; and (3) businesses should take adequate measures to ensure the privacy and security of the personal information and other sensitive data that they obtain from consumers.

Sometimes officers and directors of businesses are named in an FTC action along with, or apart from, the company itself.  In such cases, the interests of the individuals and those of the companies often diverge as the various parties try to apportion blame internally.  In certain cases, companies and their officers are held jointly and severally liable for violations.  For example, the FTC sued Innovative Marketing Inc. and three of its owners/officers. A federal court found the business and the owners/officers jointly and severally liable for unfair and deceptive actions, and entered a judgment of $163 million against them all. The evolving world of regulatory enforcement actions reveals that traditional liability protections (i.e., acting through a corporate entity) do not necessarily shield owners, officers, and/or directors from personal liability when privacy violations are at issue. Officers and directors should keep in mind that knowledge of, or indifference to, an unfair or deceptive practice can put them squarely in the FTC’s crosshairs ― and that the “ostrich defense” of ignoring and avoiding such issues is unlikely to produce positive results.

Unintended Consequences of Publicity

Most businesses crave publicity as a means of building credibility and awareness for their products or services. However, businesses should keep in mind that being in the spotlight can also put the company on regulators’ radar screens, potentially resulting in additional scrutiny where none previously existed. One of our clients, for example, came out with an innovative service that allows consumers to utilize their personal information in unique ways, and received significant positive publicity as a result. Unfortunately, that publicity also caught the interest of a regulatory entity, which misunderstood some of our client’s statements about its service. We were able to clarify for the government, in an efficient and cost-effective manner, the service our client actually offered, demonstrating that no wrongdoing had occurred, and the inquiry was resolved to our client’s (and the government’s) satisfaction.  Nonetheless, the process itself resulted in substantial aggravation for our client, who was forced to focus on an investigation rather than on its business activities. Ultimately, the misunderstanding could have been avoided if the client had checked with us first, before speaking with reporters, to ensure the client’s talking points were appropriate.

Another, more public example occurred at Uber’s launch party in Chicago. Uber, the car service company that allows users to hail a ride using a mobile app, allegedly demonstrated a “God View” function for its guests which allowed the partygoers (including several journalists) to see, among other information, the name and real-time location of some of its customers (including some well-known individuals) in New York City – information which those customers did not know was being projected onto a large screen at a private party. The resulting publicity backlash was overwhelming. Senator Al Franken wrote Uber a letter demanding an explanation of Uber’s data collection practices and policies, and Uber was forced to retain a major law firm to independently audit its privacy practices and implement changes to its policies, including limiting the availability and use of the “God View.”

Experience has shown us that contrary to the old mantra, all publicity is not necessarily good publicity when it comes to the world of privacy.  Before moving forward with publicity or marketing for your business, consider incorporating a legal review into the planning to avoid any potentially adverse impact of such publicity.

Privacy Concerns Arising During A Corporate Transaction

Perhaps most importantly to company owners, the failure to proactively address privacy issues in connection with corporate transactions can cause significant repercussions, potentially destroying an entire deal.  Most major corporate transactions involve some degree of due diligence.  That due diligence, if properly performed by knowledgeable attorneys and businesspeople, will uncover any existing privacy risks (e.g., violations of privacy-related laws, insufficient privacy and data security measures, or compliance issues that become financially overwhelming).  If these issues were not already factored into the financial terms of the transaction or affirmatively addressed from the outset, the entire landscape of the transaction can change overnight once the issues are uncovered – with the worst case scenario being the collapse of the entire deal.  Therefore, it is critical that businesses contemplating a corporate transaction be prepared to address all relevant privacy issues upfront.  Such preparation should include an internal analysis of the business from a privacy-law perspective (i.e., determining which regulatory schemes apply, and whether the business is currently in compliance) and being prepared to provide quick responses to relevant inquiries, such as historical policies and procedures related to privacy and data security, diagrams of network/data flow, lists of third parties with whom data has been shared, representations and warranties made to data subjects, and descriptions of complaints, investigations, and litigation pertaining to privacy issues.

Privacy and data security issues can be particularly tricky depending on the nature of the data that is maintained by the company and the representations that the company has made with respect to such data.  Businesses are well-advised to prepare a due diligence checklist in preparation for any corporate transaction, which should include an assessment of the business’ compliance with applicable information privacy and data security laws, as well as any potential liabilities from deficiencies that are discovered.  Addressing these issues proactively will allow the business to be better prepared for the corporate transaction and mitigate any harm that might otherwise flow from problems that arise.

Safeway To Settle Allegations Of Privacy Breach

On December 31, 2014, the second-largest U.S. grocery chain, Safeway, was ordered to pay a $9.87 million penalty as part of a settlement with California prosecutors related to the improper dumping of hazardous waste, and the improper disposal of confidential pharmacy records containing protected health information in violation of California’s Confidentiality of Medical Information Act (“CMIA”).

This settlement comes after an investigation revealed that, for over seven years, hazardous materials, such as medicine and batteries, had been “routinely and systematically” sent to local landfills that were not equipped to receive such waste. Additionally, the investigation revealed that Safeway failed to protect confidential medical and health records of its pharmacy customers by disposing of records containing patients’ names, phone numbers, and addresses without shredding them, putting these customers at risk of identity theft.

Under this settlement agreement, while Safeway admits to no wrongdoing, it will pay (1) a $6.72 million civil penalty, (2) $2 million for supplemental environmental projects, and (3) $1.15 million in attorneys’ fees and costs. In addition, pursuant to the agreement, Safeway must maintain and enhance its customer record disposal program to ensure that customer medical information is disposed of in a manner that preserves the customer’s privacy and complies with CMIA.

“Today’s settlement marks a victory for our state’s environment as well as the security and privacy of confidential patient information throughout California,” said Alameda County District Attorney Nancy O’Malley. Alameda County Assistant District Attorney Kenneth Misfud said the case against Safeway spotlights the importance of healthcare entities, such as pharmacy chains and hospitals, properly shredding, or otherwise “making indecipherable,” patient and other consumer personal information prior to disposal.

However, despite the settlement, customers whose personal information was improperly disposed of will have a difficult time suing for a “pure” loss of privacy due to Safeway’s violation of CMIA. In Sutter Health v. Superior Court, the California Court of Appeal held that confidential information covered by CMIA must be “actually viewed” for the statutory penalty provisions of the law to apply. So, parties bringing claims under CMIA will now have to allege, and ultimately prove, that their confidential information (1) changed possession in an unauthorized manner, and (2) was actually viewed (or, presumably, used) by an unauthorized party.

The takeaway from Safeway’s settlement is to ensure that your customers are not put at risk of data breaches and identity theft, and to protect your company from the multimillion-dollar consequences that can result from failing to do so. If you have any questions about complying with privacy and health information laws, please feel free to contact one of our certified privacy attorneys at OlenderFeldman LLP.

On September 23, 2014, the New Jersey Supreme Court held in Atalese v. U.S. Legal Services Group that an arbitration clause in a consumer contract is not enforceable unless it clearly indicates that the plaintiff is giving up the right to go to court. Accordingly, any company doing business in New Jersey that uses an arbitration clause in its contracts, consumer or otherwise, must make sure that the clause states, in easy-to-read language: 1) the differences between litigation and arbitration, and 2) that the contracting party is being foreclosed from proceeding to court on any dispute arising from the contract.

The Atalese Decision

The plaintiff in Atalese contracted with U.S. Legal Services Group (USLSG) for debt adjustment services. The plaintiff filed suit in state court, alleging that USLSG violated two State consumer protection statutes, the New Jersey Consumer Fraud Act (CFA) and the Truth-in-Consumer Contract, Warranty and Notice Act (TCCWNA) by misrepresenting the scope of the services it would provide and its status as a licensed debt adjuster in New Jersey.

USLSG moved to compel arbitration, based on an arbitration clause in the parties’ agreement, which provided:

“Arbitration: In the event of any claim or dispute between Client and the USLSG related to this Agreement or related to any performance of any services related to this Agreement, the claim or dispute shall be submitted to binding arbitration upon the request of either party upon the service of that request on the other party. The parties shall agree on a single arbitrator to resolve the dispute. . . . Any decision of the arbitrator shall be final and may be entered into any judgment in any court of competent jurisdiction.”

The trial court granted USLSG’s motion to compel Plaintiff to arbitrate her dispute. Plaintiff appealed the trial court’s decision, arguing that the arbitration clause was unenforceable because it did not adequately notify plaintiff of her right to have her consumer claims tried before a jury. The New Jersey Appellate Division affirmed the trial court’s decision, but in a unanimous decision the New Jersey Supreme Court reversed, holding that the arbitration clause was unenforceable because it “did not clearly and unambiguously signal to plaintiff that she was surrendering her right to pursue her statutory claims in court.”

The Court reasoned that arbitration is essentially a waiver of rights which, to be effective, “requires a party to have full knowledge of his legal rights and intent to surrender those rights.” The Court criticized the arbitration clause at issue for not “explain[ing] what arbitration is,” how it “is different from a proceeding in a court of law,” and for not being “written in plain language that would be clear and understandable to the average consumer.”  The Court also recognized a countervailing State legislative policy, implicit in the enactment of the CFA and TCCWNA, that favors consumers seeking relief through courts of law.

In reaching this decision, the Court rejected the argument that consumers are sophisticated enough to understand that agreeing to resolve disputes in binding arbitration means they are forgoing their right to have disputes resolved in court.

The Court stressed that no “magic language” is required to make an arbitration clause enforceable. Rather, the clause must use “clear and unambiguous” language that, in a “general and sufficiently broad way,” explains that the plaintiff is giving up her right to bring her claims in court or to have a jury resolve the dispute. The Court did, however, provide the following broad guidance: generally, an enforceable consumer arbitration clause must (1) state what arbitration is, (2) explain how arbitration differs from a court proceeding, and (3) do so in language that is plain and understandable to the average consumer. The Court went on to cite the following arbitration provisions that New Jersey courts have previously upheld as enforceable:

  • The parties agree “to waive [the] right to a jury trial” and that “all disputes relating to [the plaintiff’s] employment . . . shall be decided by an arbitrator.”
  • “By agreeing to arbitration, the parties understand and agree that they are waiving their rights to maintain other available resolution processes, such as a court action or administrative proceeding, to settle their disputes.”
  • “Instead of suing in court, we each agree to settle disputes (except certain small claims) only by arbitration. The rules in arbitration are different.  There’s no judge or jury, and review is limited, but an arbitrator can award the same damages and relief, and must honor the same limitations stated in the agreement as a court would.”

Going Forward

Given the sweeping nature of Atalese, all New Jersey businesses using arbitration clauses in their agreements must revisit these clauses to make sure that they contain clear, easy-to-read language that: 1) explains the nature of arbitration proceedings and how they differ from judicial proceedings; and 2) expressly states the rights that are being waived or forfeited as a result of the agreement.

If you have any questions, please feel free to contact Howard A. Matalon, Esq. at 908-964-2424.

Effective March 1, 2015, many New Jersey employers will be prohibited from making inquiries into an applicant’s criminal record on employment applications. The following is a brief list of Frequently Asked Questions concerning the new Opportunity to Compete Act, commonly known as the “Ban the Box” law.

1. Does the law apply to all New Jersey employers? No.

The law only applies to employers with 15 or more employees who conduct business, employ persons or take applications for employment within the State of New Jersey.

2. Does the law prohibit employers from making any inquiry regarding an applicant’s criminal record at any point during the interview process? No.

The law only prohibits employers from making oral or written inquiries regarding an applicant’s criminal record during the initial employment application process, meaning the employment application itself. The law also prohibits employers from posting job advertisements stating that the employer will not consider any applicant who has been arrested or convicted of a crime.

3. What does the “initial application process” mean?

The “initial application process” begins when an applicant or employer first makes an inquiry about a prospective employment position, whether in writing or by other means. It is important to note that the process concludes when the employer has completed the first interview of the applicant.

4. What if the applicant voluntarily discloses information regarding his or her criminal background during the initial application process?

If the applicant makes such a voluntary disclosure, the employer is free to ask follow-up questions concerning the applicant’s criminal record. However, it is imperative that the employer document that the information was obtained as a result of a voluntary disclosure by the applicant.

5. What are the penalties associated with a violation of the new law?

The New Jersey Department of Labor can impose penalties of $1,000 for the first violation, $5,000 for the second violation, and $10,000 for each subsequent violation.

6. Do employers need to have a posting in the workplace regarding the new law?

There are no required postings.

7. What other States currently have similar “ban the box” legislation?

At present, 12 other states have adopted similar bans on criminal history inquiries during the initial application process: California, Colorado, Connecticut, Delaware, Hawaii, Illinois, Maryland, Massachusetts, Minnesota, Nebraska, New Mexico and Rhode Island. There are also a number of cities and counties (including New York City) that have passed similar legislation.

If you have any questions, please feel free to contact Howard A. Matalon, Esq. at 908-964-2424.

By: Aaron Krowne

A heated battle is currently raging over the general authority of federal regulators to police businesses’ privacy and data security practices. We are referring to the pending case of FTC v. Wyndham Worldwide Corp., which is being closely watched in the data security world. It pits, on one side, the Federal Trade Commission (“FTC”), with its general authority to prevent “unfair or deceptive trade practices,” against Wyndham Worldwide Corp. (“Wyndham”), a hotel chain owner that was recently hit by a series of high-profile data breaches resulting from hacking. The main question to be decided is: does the FTC’s general authority over “unfair or deceptive” practices translate into a discretionary (as opposed to regulatory) power over privacy and data security practices?

Background of the Case

On July 30, 2014, FTC v. Wyndham was accepted on appeal to the Third Circuit, after Wyndham failed in its attempt to have the case dismissed. Wyndham was granted an interlocutory appeal, meaning that the Circuit Court considered the issues Wyndham raised important enough to the outcome of the case to warrant hearing an appeal immediately, rather than waiting for a final judgment.

 The case stems from a series of data breaches in 2008 and 2009 resulting from the hacking of Wyndham computers. It is estimated that personal information of upwards of 600,000 Wyndham customers was stolen, resulting in over $10 million lost through fraud (i.e., credit card fraud).

The FTC filed suit against Wyndham for the breach under Section 5 of the FTC Act, alleging (1) that the breach was due to a number of inadequate security practices and policies, and was thus unfair to consumers; and (2) that this conduct was also deceptive, as it fell short of the assurances given in Wyndham’s privacy policy and its other disclosures to consumers.

The security inadequacies cited by the FTC present a virtual laundry-list of cringe-worthy data-security faux pas, including: failing to employ firewalls; permitting storage of payment card information in clear readable text; failing to make sure Wyndham-branded hotels implemented adequate information security policies and procedures prior to connecting their local computer networks to the corporate network of Wyndham Hotels and Resorts; permitting Wyndham-branded hotels to connect unsecure servers to the network; utilizing servers with outdated operating systems that could not receive security updates and thus could not remedy known vulnerabilities; permitting servers to have commonly-known default user IDs and passwords; failing to employ commonly-used methods to require user IDs and passwords that are difficult for hackers to guess; failing to adequately inventory computers connected to the network; failing to monitor the network for malware used in a previous intrusion; and failing to restrict third-party access.

Most people with basic knowledge of data security would agree that these alleged practices of Wyndham are highly disconcerting and do fall below commonly-accepted industry standards, and thus, anyone partaking in such practices should be exposed to legal liability for any damage that results from them. The novel development with this case is the FTC’s construction of such consumer-unfriendly practices as “unfair” under Section 5 of the FTC Act, which thus brings them under its purview for remedial and punitive action.

Wyndham resisted the FTC’s enforcement action by attempting to dismiss the case, arguing (1) that poor data security practices are not “unfair” under the FTC Act, and that (2) regardless, the FTC must make formal regulations outlining any data security practices to which its prosecutorial power applies, before filing suit.

Wyndham’s dismissal attempt based on these arguments was resoundingly rejected by the District Court. The Court’s primary rationale was, in effect, that Section 5’s “unfair and deceptive” enforcement power was intentionally written broadly, implying that the FTC has domain over any area of corporate practice significantly impacting consumers. Additionally, this broad drafting suggests that the power is largely discretionary, and that requiring it always to be reduced to detailed regulations in advance would defeat that discretion.

Addressing the “unfairness” question directly, the FTC argued (and the District Court agreed) that, in the data-security context, “reasonableness [of the practices] is the touchstone” for Section 5 enforcement, and that, particularly, “unreasonable data security practices are unfair.” As to defining unreasonable security practices, Wyndham advocated a strict “ascertainable certainty” standard (i.e., specific regulations set out in advance), but the District Court (again, siding with the FTC) shot back that “reasonableness provides ascertainable certainty to companies.” This argument seems almost circular and fails to define what exactly is “reasonable” in this context. But the District Court observed that in other areas of federal enforcement (e.g., the National Labor Relations Board and the Occupational Safety and Health Act), an unwritten “reasonableness” standard is routinely used in the prosecution of cases. Typically, in such cases, reference is made to prevailing industry standards and practices, which, as the District Court observed, Wyndham itself referenced in its privacy policy.

Fears & Concerns

The upshot of the case is that if the FTC’s assertion of the power to enforce “reasonable” data security practices is affirmed, all privacy and data security policies must be “reasonable.” This will in turn mean that such policies must not be “unfair” generally, and also not “deceptive” relative to companies’ privacy policies. In effect, the full force of federal law, policed by the FTC, will appear behind privacy and data security policies – albeit, in a very broad and hard to characterize way. This is in stark contrast to state privacy and data security laws (such as Delaware’s, California’s or Florida’s), which generally consist of more narrowly-tailored, statutorily-delimited proscriptions.

While consumers and consumer advocates will no doubt be heartened by the Court’s broad read on the FTC’s protective power in the area of privacy and data security, not surprisingly, there are fears from both businesses and legal observers about such a new legal regime. Some of these concerns include:

  • Having the FTC “lurking over the shoulders” of companies to “second guess” their privacy and security policies.
  • A situation where the FTC is, in effect, “victimizing the victim” – prosecuting companies after they’ve already been “punished” by the direct costs and public fallout of a data breach.
  • Lack of a true industry standard against which to define “reasonable” privacy and data security policies.
  • A “checklist culture” (as opposed to a risk-based data security approach) as the FTC’s de facto data security requirements develop through litigation.
  • A wave of class-action lawsuits emboldened by FTC “unfair and deceptive” suits.
  • Uncertainty: case-by-case consent orders that provide little or no guidance to non-parties.

These concerns are definitely real, but likely will not result in much (if any) push-back in Wyndham’s favor on appeal. That is because, while the FTC may not have asserted power over data security practices in the past (as Wyndham made sure to point out in its arguments), there is little in the FTC’s governing charter or relevant judicial history to prevent it from doing so now. Simply put, regulatory agencies can change their “minds,” including regarding what is in their regulatory purview – so long as the field in question is not explicitly beyond it. Given today’s reality of omnipresent social networks and sensitive, cloud-resident consumer data, we can hardly blame the FTC for re-evaluating its late-90s-era stance.

No Going Back

Uncle Sam is coming, in a clear move to regulate privacy and data security and protect consumers. As highlighted recently in the New York Attorney General’s report on data breaches, the pressure is only growing to do something about the problem of dramatically-increasing data breaches. As such, it was only a matter of time until the Federal Government responded to political pressure and “got into the game” already commenced by the states.

Thus, while the precise outcome of FTC v. Wyndham cannot be predicted, it is overwhelmingly likely that the FTC will “get what it wants” broadly speaking; either with the upholding of its asserted discretionary power, or instead, by being forced to pass more detailed regulations on privacy and data security.

Either way, this case should be a wake-up call to businesses, many of whom are in fact already covered by state laws relevant to privacy and data security, but who perhaps haven’t felt the inter-jurisdictional litigation risk was significant enough to ensure their policies and practices are compliant with those of the strictest states (such as California and Florida), or even with other nations’ laws (such as Canada’s).

The precise outcome of FTC v. Wyndham notwithstanding, the federal government will henceforth be looking more closely at all data breaches in the country – particularly major ones – and may be under pressure to act quickly and stringently in response to public outcry. But “smaller” breaches will most certainly be fair game as well; thus, small- and mid-sized businesses should take heed. That means getting in touch with a certified OlenderFeldman privacy and data security attorney to make sure your business’s policies and procedures genuinely protect you and your users and customers… and put you ahead of the blowing “Wynds of change” of federal regulation.

By: Aaron Krowne

On July 14, 2014, the New York Attorney General’s office (“NY AG”) released a seminal report on data breaches, entitled “Information Exposed: Historical Examination of Data Breaches in New York State” (the “Report”). The Report presents a wealth of eye-opening (and sobering) information on data breaches in New York and beyond. It is primarily based upon the NY AG’s own analysis of data breach reports received under the State’s data breach reporting law (NY General Business Law §899-aa) during the law’s first eight years, from 2006 through 2013. The Report also cites extensively to outside research, providing a national and international picture of data breaches. Its primary finding is that data breaches, somewhat unsurprisingly, are a rapidly growing problem.

A Growing Menace

The headline statistic of the Report is its finding that data breaches in or affecting New York tripled between 2006 and 2013. During this time frame, 22.8 million personal records of New Yorkers were exposed in nearly 5,000 breaches, affecting more than 3,000 businesses. The “worst” year was 2013, with 7.4 million records exposed, mainly due to the Target and LivingSocial “mega-breaches,” which the Report identified as a growing trend in their own right. Still, businesses of all sizes are affected and at risk.

The Report revealed that hacking was responsible for 43% of breaches and 64% of the total records exposed. Other major causes of breaches include “lost or stolen equipment or documentation” (accounting for 25% of breaches), “employee error” (21% of breaches), and “insider wrongdoing” (11% of breaches). It is thus important to note that the majority of breaches still originate internally. However, since 2009 – not coincidentally, the year that “crimeware” source code was released and began to proliferate – hacking has grown to become the dominant cause of breaches. Hacking was responsible for a whopping 96.4% of the New York records exposed in 2013 (again, largely due to the mega-breaches).

The Report notes that retail services and health care providers are “particularly” vulnerable to data breaches. The following breaks down the number of entities in a particular sector that suffered repeated data breaches: 54 “retail services” entities (a “favorite target of hackers”, per the Report), 31 “financial services” entities, 29 “health care” entities, 27 “banking” entities, and 20 “insurance” entities.

The Report also points out that these breach statistics are likely on the low side. One reason for this is that New York’s data breach law doesn’t cover all breaches. For example, if only one piece of information (out of the two required types: (1) a name, number, personal mark, or other identifier which can be used to identify such natural person, combined with (2) a social security number, government ID or license number, account number, or credit or debit card number along with security code) is compromised, the reporting requirement is not triggered. Yet, the compromise of even a single piece of data (e.g., a social security number) can have the same practical effect as a “breach” under the law, since it is still possible for there to be actual damage to the consumer (particularly if the breached information can be combined with complementary information obtained elsewhere). Further, the full impact of a specific reported breach may be unknown, leading the breach to be “underestimated.”

Real Costs: Answering To The Market

Though New York’s data breach law allows the AG to bring suits for actual damages and statutory penalties for failure to notify (notification to all affected consumers and the NY AG’s office, and, for large breaches, to consumer reporting agencies, is required), such awards are likely to be minor compared with the market impact and direct costs of a breach. The Report estimates that in 2013, breaches cost New York businesses $1.37 billion, based on a per-record cost estimate of $188 (breach cost estimates are from data breach research consultancy The Ponemon Institute). In 2014, this per-record estimate has already risen to $201, and the cost for hacked records is even higher than the average, at $277. The total average cost of a breach is currently $5.9 million, up from $5.4 million in 2013. These amounts represent only costs incurred by the businesses hit, including expenses such as investigation, communications, free consumer credit monitoring, and reformulation and implementation of data security measures. Costs to the consumers themselves are not included, so this is, once again, an underestimate.

These amounts also do not include market costs, for which the Target (2013) and Sony PlayStation (2011) mega-breaches are particularly sobering examples. Target experienced a 46% drop in quarterly profit in the wake of the massive breach of its customers’ data, and Sony estimates it lost over $1 billion. Both also suffered significant contemporaneous declines in their stock prices.

 Returning to direct costs, the fallout continues: on August 5, 2014, Target announced that the costs of the 2013 breach would exceed its previous estimates, coming in at nearly $150 million.

Recommended Practices

The Report’s banner recommendation, in the face of all the above, is to have an information security plan in place, especially given that 57% of breaches are primarily caused by “inside” issues (i.e., lost/stolen records, employee error, or wrongdoing) that directly implicate information security practices. An information security plan should specifically include:

  • a privacy policy;
  • restricted and controlled access to records;
  • monitoring systems for unauthorized access;
  • use of encryption, secure access to all devices, and non-internet connected storage;
  • uniform employee training programs;
  • reasonable data disposal practices (e.g., using disk wiping programs).

The Report is not the most optimistic regarding the prevention of hacking, but we would note that the efficacy of hacking can also be reduced by implementing an information security plan. For example, the implementation of encryption, and the training of employees to use it uniformly and properly, can be quite powerful.
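
By way of illustration only, and not as a substitute for a professionally designed security program, the kind of encryption at rest the Report recommends can be quite simple to adopt. The following Python sketch uses the widely available third-party “cryptography” package; the record contents and key-handling choices shown are hypothetical.

    from cryptography.fernet import Fernet

    # In practice, the key belongs in a secrets manager or hardware security
    # module, never stored alongside the data it protects.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    # Hypothetical customer record; the contents are invented for this example.
    record = b"Jane Doe | acct 1234 | DOB 01/01/1980"

    token = fernet.encrypt(record)      # the value that gets written to disk
    restored = fernet.decrypt(token)    # readable only by a holder of the key
    assert restored == record

If a lost or stolen laptop or backup drive holds only the encrypted tokens, the data remains unreadable without the key, which is one reason several of the statutes discussed in this post treat encrypted information differently.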

Whether the breach threat comes to you in the form of employee conduct or an outside hack attempt, don’t be caught wrong-footed without an adequate information security plan. A certified privacy attorney at OlenderFeldman can assist you with your business’s information security plan, whether you need to create one for the first time or simply need help ensuring that your current plan provides the maximum protection to your business.

By: Aaron Krowne

On July 1, 2014, Delaware enacted HB 295, which provides for the “safe destruction of records containing personal identifying information” (codified at Chapter 50C, Title 6, Subtitle II, of the Delaware Code). The law goes into effect January 1, 2015.

Overview of Delaware’s Data Destruction Law

In brief, the law requires a commercial entity to take reasonable steps to destroy or arrange for the destruction of consumers’ personal identifying information when this information is sought to be disposed of.

The core of this directive is to “take reasonable steps to destroy” the data. The law imposes no specific requirement, offering only a few suggestions such as shredding, erasing, and overwriting information, which creates some uncertainty as to what steps an entity must take in order to achieve compliance.

For purposes of this law, “commercial entity” (CE) is defined so as to cover almost any type of business entity except governmental entities (in contrast to, say, Florida’s law). Importantly, Delaware’s definition of a CE clearly includes charities and nonprofits.

The definition of personal identifying information (PII) is central to complying with the law. For purposes of this law, PII is defined as a consumer’s first name or first initial and last name, in combination with any one of the following: the individual’s social security number; passport number; driver’s license or state ID card number; insurance policy number; financial, bank, credit or debit account number; tax or payroll information; or confidential health care information. “Confidential health care information” is intentionally defined broadly so as to cover essentially a patient’s entire health care history.

The definition of PII also, importantly, excludes information that is encrypted, meaning, somewhat surprisingly, that encrypted information is deemed not to be “personal identifying information” under this law. This implies that, if any of the above listed data is encrypted, all of the consumer’s data may be retainable forever – even if judged no longer useful or relevant.

The definition of “consumer” in the law is also noteworthy, as it is defined so as to expressly exclude employees, and only covers individuals (not CEs) engaged in non-business transactions. Thus, rather surprisingly, an individual engaging in a transaction with a CE for their sole proprietorship business is not covered by the law.

Penalties and Enforcement

The law does not provide for any specific monetary damages in the case of “a record unreasonably disposed of.” But, it does provide a private right of action, whereby consumers may bring suit for an improper record disposal in case of actual damages – however, that violation must be reckless or intentional, not merely negligent. Additionally, and perhaps to greater effect, the Attorney General may bring either a lawsuit or an administrative action against a CE.

Who Is Not Affected?

The law expressly exempts entities covered by pre-existing pertinent regulations, such as all health-related companies, which are covered by the Health Insurance Portability and Accountability Act, as well as banks, financial institutions, and consumer reporting agencies. At this point it remains unclear as to whether CEs without Delaware customers are considered within the scope of this law, as this law is written so broadly that it does not narrow its scope to either Delaware CEs, or to non-Delaware CEs with Delaware customers. Therefore, if your business falls into either category, the safest option is to comply with the provisions of the law.

Implications and Questions

We have already seen above that this facially-simple law contains many hidden wrinkles and leaves some open questions. Some further elaborations and questions include:

  • What are “reasonable steps to destroy” PII? Examples are given, but the intent seems to be to leave the specifics up to the CE’s judgment – including dispatching the job to a third party.
  • The “when” of disposal: the law applies when the CE “seeks to permanently dispose of” the PII. Does, then, the CE judging the consumer information as being no longer useful or necessary count? Or must the CE make an express disposal decision for the law to apply? If it is the latter, can CEs forever-defer applicability of the law by simply never formally “disposing” of the information (perhaps expressly declaring that it is “always” useful)?
  • Responsibility for the information – the law applies to PII “within the custody or control” of the CE. When does access constitute “custody” or “control”? With social networks, “cloud” storage and services, and increasingly portable, “brokered” consumer information, this is likely to become an increasingly tested issue.

Given these considerable questions, as well as the major jurisdictional ambiguity discussed above (and additional ones included in the extended version of this post), potential CEs (Delaware entities, as well as entities who may have Delaware customers) should make sure they are well within the bounds of compliance with this law. The best course of action is to contact an experienced OlenderFeldman attorney, and make sure your privacy and data disposal policies place your business comfortably within compliance of Delaware’s new data destruction law.

By: Aaron Krowne

In a major recent case testing California’s medical information privacy law, the Confidentiality of Medical Information Act, or CMIA (California Civil Code § 56 et seq.), the California Court of Appeal, Third Appellate District, held in Sutter Health v. Superior Court on July 21, 2014 that confidential information covered by the law must be “actually viewed” for the statutory penalty provisions of the law to apply. The implication of this decision is that it just got harder for consumers to sue for a “pure” loss of privacy due to a data breach in California and possibly beyond.

Not So Strict

Previously, CMIA was assumed to be a strict liability statute, since even in the absence of actual damages, a covered party that “negligently released” confidential health information was still subject to a $1,000 nominal penalty. That is, if a covered health care provider or health service company negligently handled customer information, and that information was subsequently taken by a third party (e.g., through the theft of a computer or data device containing such information), that in itself triggered the $1,000 per-instance (and thus, per-customer record) penalty. There was no suggestion that the thief (or other recipient) of the confidential health information needed to see, or do anything with, such information. Indeed, plaintiffs had previously brought cases under such a “strict liability” theory and succeeded in the application of CMIA’s $1,000 penalty.

 Sutter Health turns that theory on its head, with dramatically different results for consumers and California health-related companies.

Sutter was looking at a potential $4 billion fine, stemming from the October 2011 theft of a computer from its offices containing 4 million unencrypted client records. Sutter’s computer was password-protected, but without encryption of the underlying data this measure is easily defeated. Security at the office was light, with no alarm or surveillance cameras. Believing this to be “negligent,” some affected Sutter customers sued under CMIA in a class action. Given the potential amount of the total fine, the stakes were high.

The Court not only ruled against the Sutter customers, but dismissed the case on demurrer, meaning that the Court determined that the case was deficient on the pleadings because the Plaintiffs “failed to state a cause of action.” The main reason, according to the Court, was that Plaintiffs failed to allege that an unauthorized person actually viewed the confidential information; therefore there was no breach of confidentiality, as required under CMIA. The Court elaborated that under CMIA “[t]he duty is to preserve confidentiality, and a breach of confidentiality is the injury protected against. Without an actual confidentiality breach there is no injury and therefore no negligence…”.

The Court also introduced the concept of possession, which is absent in CMIA itself, to delimit its new theory interpreting CMIA, saying: “[t]hat [because] records have changed possession even in an unauthorized manner does not [automatically] mean they have been exposed to the view of an unauthorized person.” So, plaintiffs bringing claims under CMIA will now have to allege, and ultimately prove, that their confidential information (1) changed possession in an unauthorized manner, and that (2) it was actually viewed (or, presumably, used) by an unauthorized party.

The Last Word?

This may not be the last word on CMIA, and certainly not the general issue of the burden of proof of harm in consumer data breaches. The problem is that it is extremely difficult to prove that anything nefarious has actually happened with sensitive consumer data post-breach, short of catching the perpetrator and getting a confession, or actually observing the act of utilization, or sale of the data to a third party. Even positive results detected through credit monitoring, such as attempts to use credit cards by unauthorized third parties, do not conclusively prove that a particular breach was the cause of such unauthorized access.

The Sutter court avers, in supporting its ruling, that we don’t actually know whether the thief in this case simply stole the computer, wiped the hard drive clean, and sold it as a used computer – in which case there would have been no violation of CMIA. Yet, logically, we can say the opposite may just as well have happened – retrieval of the customer data may very well have been the actual goal of the theft. In an environment where sensitive consumer records can fetch as much as $45 each (totaling $180 million for the Sutter customer data), it seems unwise to rely on the assumption that thieves will simply not bother to check for valuable information on stolen corporate computers and digital devices.

Indeed, the Sutter decision perhaps raises as many questions as answers on where to draw the line for a “breach of confidential information.” To wit: presumably, a hacker downloading unencrypted information would still qualify as a breach under the CMIA, so interpreted. But then, by what substantive rationale does the physical removal of a hard drive in this case not qualify? Additionally, how is it determined whether a party actually looked at the data, and precisely who looked at it?

Further, the final chapter on the Sutter breach may not yet be written – the data may still be (or turn out to have been) put to nefarious use, in which case, the court’s ruling will seem premature. Thus, there is likely to be some pushback to Sutter, to the extent that consumers do not accept the lack of punitive options in “open-ended” breaches of this nature, and lawmakers actually intend consumer data-handling negligence laws to have some “bite.”

Conclusion

Naively, it would seem under the Sutter Court’s interpretation, that companies dealing with consumer health information have a “blank check” to treat that information negligently – so long as the actual viewing (and presumably, use) of that information by unauthorized persons is a remote possibility. We would caution against this assumption. First, as above, there may be some pushback (judicially, legislatively, or in terms of public response) to Sutter’s strict requirement of proof of viewing of breached records. But more importantly, there is simply no guarantee that exposed information will not be released and be put to harmful use, and that sufficient proof of such will not surface for use in consumer lawsuits.

One basic lesson of Sutter is that, while the company dodged a bullet thanks to a court’s re-interpretation of a law, it (and its customers) would have been vastly safer had it simply utilized encryption. More broadly, Sutter should have had and implemented a better data security policy. Companies dealing with customers’ health information (in California and elsewhere) should take every possible precaution to secure this information.

Do not put your company and your customers at risk of a data breach. Contact a certified privacy attorney at OlenderFeldman to make sure your company’s data security policy provides coverage for all applicable health information laws.

By: Aaron Krowne

On June 20, 2014, Florida enacted SB 1524, the Florida Information Protection Act of 2014 (“FIPA”). The law updates Florida’s existing data breach law, creating one of the strongest laws in the nation protecting consumer personal data through the use of strict transparency requirements. FIPA applies to any entity with customers (or users) in Florida – so businesses with a national reach should take heed.

Overview of FIPA

FIPA requires any covered business to make notification of a data breach within 30 days of when the personal information of Florida residents is implicated in the breach. Additionally, FIPA requires the implementation of “reasonable measures” to protect and secure electronic data containing personal information (such as e-mail address/password combinations and medical information), including a data destruction requirement upon disposal of the data.

Be forewarned: The penalties provided under FIPA pack a strong punch. Failure to make the required notification can result in a fine of up to $1,000 a day for up to 30 days; a $50,000 fine for each 30-day period (or fraction thereof) thereafter; and, where notification is delayed beyond 180 days, up to $500,000 per breach. Violations are to be treated as “unfair or deceptive trade practices” under Florida law. Of note for businesses that utilize third-party data centers and data processors, covered entities may be held liable for these third-party agents’ violations of FIPA.
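
As a rough, hypothetical illustration of how the fine schedule described above compounds, consider the following Python sketch. It reflects only our simplified reading of the schedule (the function name is invented, and the statute speaks in terms of maximum fines), not the statutory text itself.

    import math

    def fipa_notification_fine(days_late: int) -> int:
        """Simplified, illustrative reading of FIPA's late-notification fines:
        up to $1,000 per day for the first 30 days, up to $50,000 for each
        subsequent 30-day period (or fraction thereof), and up to $500,000
        per breach once the delay exceeds 180 days."""
        if days_late <= 0:
            return 0
        if days_late > 180:
            return 500_000
        fine = 1_000 * min(days_late, 30)
        if days_late > 30:
            fine += 50_000 * math.ceil((days_late - 30) / 30)
        return fine

    # Example: notifying 100 days late -> $30,000 + 3 x $50,000 = $180,000
    print(fipa_notification_fine(100))

Even under this simplified reading, a delay of a few months quickly reaches six figures, which is why the notification timelines discussed below deserve close attention.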

While the potential fines for not following the breach notification protocols are steep, no private right of action exists under FIPA.

The Notification Requirement

Any covered business that discovers a breach must, generally, notify the affected individuals within 30 days of the discovery of the breach. The business must also notify the Florida Attorney General within 30 days if more than 500 Florida residents are affected.

However, if the cost of sending individual breach notifications is estimated to be over $250,000, or where over 500,000 customers are affected, businesses may satisfy their obligations under FIPA by notifying customers via a conspicuous web site posting and by running ads in the affected areas (as well as filing a report with the Florida AG’s office).

Where a covered business reasonably self-determines that there has been no harm to Florida residents, and therefore notifications are not required, it must document this determination in writing, and must provide such written determination to the Florida AG’s office within 30 days.

Finally, FIPA provides a strong incentive for businesses to encrypt their consumer data, as notification to affected individuals is not required if the personal information was encrypted.
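
Putting the notification rules above together, the following hypothetical sketch shows one way a covered business might triage its FIPA obligations. The parameter names and the branching are our own illustrative reading of the summary above; it is not legal advice and omits many statutory details.

```python
def fipa_notification_plan(fl_residents_affected, individual_notice_cost,
                           data_was_encrypted, no_harm_determined):
    """Illustrative triage of the FIPA notification duties summarized above."""
    if data_was_encrypted:
        # Individual notification is not required for encrypted personal information.
        return ["no individual notification required (data was encrypted)"]
    if no_harm_determined:
        return ["document the no-harm determination in writing",
                "provide the written determination to the Florida AG within 30 days"]
    steps = []
    if individual_notice_cost > 250_000 or fl_residents_affected > 500_000:
        steps.append("substitute notice: conspicuous web posting plus ads in affected areas")
        steps.append("file a report with the Florida AG's office")
    else:
        steps.append("notify affected individuals within 30 days of discovery")
    if fl_residents_affected > 500:
        steps.append("notify the Florida Attorney General within 30 days")
    return steps

# Example: a large breach where individual notice would be impracticable
print(fipa_notification_plan(600_000, 300_000, False, False))
```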

Implications and Responsibilities

One major takeaway of the FIPA responsibilities outlined above is the importance of formulating and writing down a data security policy. FIPA requires the implementation of “reasonable measures” to protect and secure personal information, implying that companies should already have such measures in place. Having a carefully crafted data security policy will also help covered businesses determine what harm, if any, has occurred after a breach and whether individual reporting is ultimately required.

For all of the above reasons, FIPA adds urgency to formulating a privacy and data security policy if your business does not have one – and, if it already has one, to making sure that it meets the FIPA requirements. Should you have any questions, do not hesitate to contact one of OlenderFeldman’s certified privacy attorneys to make sure your data security policy adequately responds to breaches as prescribed under FIPA.

Entrepreneurs often struggle with what they should and should not say to potential investors, especially given that investors often will refuse to sign a non-disclosure agreement (“N.D.A.”). Disclose too little information about your start-up or idea and you may fail to interest an investor. By the same token, disclose too much and you may expose yourself to an unacceptable level of risk.

Eileen Zimmerman wrote a fantastic article explaining why more start-ups are sharing ideas without legal protection, quoting OlenderFeldman’s Aaron Messing. While we highly recommend reading the whole article, we wanted to expand a bit on some of the topics Aaron spoke about in the article:

Even if a start-up manages to get a[ non-disclosure] agreement signed, it can be tough to enforce, said Aaron I. Messing, a lawyer with OlenderFeldman LLP in Summit, N.J. “It’s very hard to prove that you kept information confidential, and it was only disclosed under an N.D.A.,” said Mr. Messing, who represents both founders and investors. “And it can be expensive.”

One of the reasons the N.D.A. has disappeared in the context of start-up institutional capital is that an N.D.A. is only as valuable as a party’s willingness to enforce it. While it is true that institutional investors do not want to be bothered with keeping track of N.D.A.’s, it is equally true that most entrepreneurs are unwilling or unable to enforce a confidentiality agreement. In addition to the expense of litigation and the difficulty of proving that the information was kept confidential, very few entrepreneurs want to be known as someone who sues institutional investors.

Companies will need to disclose significant proprietary information about themselves to get to the point where an investor will want to sign a term sheet, but that level of information will generally be insufficient to enable someone else to duplicate the business. However, if a company’s market or product has a low barrier to entry, proprietary information doesn’t matter as much as execution. Where there is a barrier to entry involving certain forms of technology or an invention, an N.D.A. generally will be signed in connection with the due diligence process, where the level of disclosure goes beyond what would ordinarily need to be disclosed to explain what a company does.

One of the little known secrets about start-ups and investing is that, according to reputable studies, under 3% of early-stage start-ups receive investment from professional or institutional capital. The equation is simple: there are simply more ideas than good ideas, more good ideas than good businesses, and more good businesses than good investments. That equation also helps explain why investors often will refuse to sign an N.D.A. Given the likelihood that your start-up will not receive professional investment, when pitching to institutional capital, great care should be taken to vet the investors and determine what specifics are appropriate to be disclosed.

Mr. Messing advised making sure an investor did not have potential conflicts or overlapping investments. Reputable investors, he said, “have much to lose by stealing your idea.”

It is true that if an investor is in the business of stealing ideas, that investor is not going to be in business for very long. However, even the fact that you are pitching to reputable investors doesn’t mean that they will not disclose information they’ve learned from you to someone else, whether intentionally or (more likely) unintentionally, as individuals often simply forget the context in which they originally heard information. This is why it is exceptionally important to share information appropriately – that is, to disclose enough to convey what is unique and proprietary about the start-up without disclosing so much that someone could replicate the idea. In short, entrepreneurs should attempt to maintain the barrier to entry to the extent possible. In any event, when vetting a potential investor, referrals and word of mouth will often be the best indicators, as quality investors pay great attention to making sure they have referenceable contacts. Once a start-up has identified a suitable investor, it should typically reveal details over time so that it does not say too much too early. Start with a teaser and work your way towards an elevator pitch, followed, if appropriate, by an executive summary, a pitch deck and a business plan.

When discussing a start-up, founders should walk a fine line, conveying sufficient information about what is unique and proprietary, but not disclosing information that would let someone replicate the business. For example, said Mr. Messing, an entrepreneur could disclose “what an algorithm can do, but not the algorithm itself.”

An entrepreneur that develops unique technology must find a way to keep that technology proprietary. In order to do so, the entrepreneur needs to understand the difference between patentable subject matter, trade secrets (e.g., the Coca-Cola formula) and things that are otherwise unprotectable but that have special marketing angles or specific go-to-market strategies that may give the start-up a unique first-mover advantage. This is where it is most important for entrepreneurs to have qualified counsel: it is rare that an entrepreneur will, unaided, know what type of intellectual property the business has and understand what can and cannot be exposed. We routinely advise our start-ups on how to compartmentalize intellectual property so they understand what is protectable and what is not, and, to the extent the intellectual property is protectable, the best ways to protect it.

Of course, an N.D.A. takes on more importance in the due diligence/term sheet context, prior to consummating an investment, where a company will often need to disclose significant proprietary information at a level beyond what would ordinarily be shared simply to discuss what the company does. OlenderFeldman generally does not recommend entering into the due diligence process without an N.D.A., and has yet to hear of any situation where an institutional investor breached an N.D.A. in connection with a transaction (and certainly none that the Firm has dealt with).

John Hancock…Is That Really You?

All too often, documents such as contracts, wills or promissory notes are contested based on allegations of fraudulent or forged signatures. Indeed, our office once handled a two-week arbitration based solely on the issue of authentication of a signature on a contract. Fortunately, a quick, simple and inexpensive solution to prevent this problem is to have the document notarized by a notary public (“Notary”). A notarization, or a notarial act, is the process whereby a Notary assures and documents that: (1) the signer of the document appeared before the Notary, (2) the Notary identified the signer as the individual whose signature appears, and (3) the signer provided his or her signature willingly and was not coerced or under duress. Generally speaking, the party whose signature is being notarized must identify himself/herself, provide valid personal identification (e.g., a driver’s license), attest that the contents of the document are true, and attest that the provisions of the document will take effect exactly as drafted. Finally, the document must be signed in the presence of the Notary.

Why is Notarization Important?

A primary reason to have a document notarized is to deter fraud by providing an additional layer of verification that the document was signed by the individual whose name appears. In most jurisdictions, notarized documents are self-authenticating. A Notary can also certify a copy of a document as being an authentic copy of the original. For more information, please see our previous blog post regarding the enforceability of duplicate contracts. Ultimately, this means that the signers do not need to testify in court to verify the authenticity of their signatures. Thus, if there is ever a dispute as to the authenticity of a signature, significant time and money can be saved by avoiding testimony – which also eliminates the potential of a dispute over witness credibility (i.e., he said, she said).

How are Notaries Regulated?

Each state individually regulates and governs the conduct of Notaries. For specifics on New Jersey law, see the New Jersey Notary Public Manual, and for New York’s law, see the New York Notary Public Law. In most cases, a Notary can be held personally liable for his or her intentional or negligent acts or misconduct during the notarization process. For example, a Notary could be liable for damages or criminal penalties if he or she notarizes a signature which was not provided in the Notary’s presence or which the Notary knows is not authentic. A Notary is generally charged with the responsibility of going through a document to make sure that there are no alterations or blank spaces in the document prior to the notarization. The strict regulation of Notaries provides additional recourse for the aggrieved party, as the Notary could be held responsible for damages a party suffers as a direct result of the failure of the Notary to perform his or her responsibilities.

The Future of Notarization

As with most areas of the law, notarization is attempting to catch up with technology. Some states have authorized eNotarization, which is essentially the same as a paper notarization except that the document being notarized is in digital form, and the Notary certifies with an electronic signature. Depending on the state, the information in a Notary’s seal may be placed on the electronic document as a graphic image. Nevertheless, the same basic elements of traditional paper notarization remain, including specifically, the requirement for the signer to physically appear before the Notary. Recently, Virginia has taken eNotarization a step further and authorized webcam notarization, which means that the document is being notarized electronically and the signer does not need to physically appear before the Notary. However, a few states, including New Jersey, have issued public statements expressly banning webcam notarization and still require signers to physically appear before a Notary.

The bottom line: parties should consider backing up their “John Hancock” by notarizing their important documents. The low cost, typical accessibility of an authorized Notary, and simplicity of the process may make it worth the extra effort.

By: Aaron Krowne

In 2013, the California Legislature passed AB 370, an addition to California’s path-blazing 2003 online consumer privacy protection law, the California Online Privacy Protection Act (“CalOPPA”). AB 370 took effect January 1, 2014, and adds new requirements to CalOPPA pertaining to consumers’ use of Do-Not-Track (DNT) signals in their web browsers (all major web browsers now include this capability). CalOPPA applies to any website, online service, and mobile application that collects personally identifiable information from consumers residing in California (a “Covered Entity”).

While AB 370 does not mandate a particular response to a DNT signal, it does require two new disclosures that must be included in a Covered Entity’s privacy policy: (1) how the site operator responds to a DNT signal (or to other “similar mechanisms”); and (2) whether there are third parties performing online tracking on the Covered Entity’s site or service. As an alternative to the descriptive disclosure listed in (1), the Covered Entity may elect to provide a “clear and conspicuous link” in its privacy policy to a “choice program” which provides consumers a choice about tracking. The Covered Entity must clearly describe the effect of a particular choice (e.g., a web interface which allows users to disable the site’s tracking based on their browser’s DNT).
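
As a technical aside, the DNT signal itself is simply an HTTP request header (“DNT: 1”). The hypothetical sketch below shows one way a site operator might detect that header server-side, so that whatever response the operator describes in its privacy policy is actually applied in practice. It is illustrative only, assumes a plain WSGI application, and does not reflect any particular compliance approach.

```python
def app(environ, start_response):
    """Minimal WSGI app that branches on the browser's Do-Not-Track header.

    CalOPPA/AB 370 requires disclosure of how the operator responds to DNT,
    not any particular response; the behavior below is purely an example.
    """
    dnt_enabled = environ.get("HTTP_DNT") == "1"   # "DNT: 1" means the user opts out
    if dnt_enabled:
        body = b"Tracking disabled for this request."
        # e.g., skip setting analytics or advertising cookies here
    else:
        body = b"Standard analytics enabled."
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]

if __name__ == "__main__":
    from wsgiref.simple_server import make_server
    make_server("localhost", 8000, app).serve_forever()
```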

While this all might seem simple enough, as with many new laws it has raised many questions about specifics, particularly how to achieve compliance. As a result, on May 21, 2014, the California Attorney General’s Office (the “AG’s Office”) issued a set of new guidelines entitled “Making Your Privacy Practices Public” (the “New Guidelines”).

The New Guidelines

The New Guidelines regarding DNT specifically suggest that a Covered Entity:

  1. Make it easy for a consumer to find the section of the privacy policy in which the online tracking policy is described (e.g., by labeling it “How We Respond to Do Not Track Signals,” “Online Tracking” or “California Do Not Track Disclosures”).
  2. Provide a description of how it responds to a browser’s DNT signal (or to other similar mechanisms), rather than merely linking to a “choice program.”
  3. State whether third parties are or may be collecting personally identifiable information of consumers while they are on a Covered Entity’s website or using a Covered Entity’s service.

In general, when drafting a privacy policy that complies with CalOPPA the New Guidelines recommend that a Covered Entity:

  • Use plain, straightforward language, avoiding technical or legal jargon.
  • Use a format that makes the policy readable, such as a “layered” format (which first shows users a high-level summary of the full policy).
  • Explain its uses of personally identifiable information beyond what is necessary for fulfilling a customer transaction or for the basic functionality of the online service.
  • Whenever possible, provide a link to the privacy policies of third parties with whom it shares personally identifiable information.
  • Describe the choices a consumer has regarding the collection, use and sharing of his or her personal information.
  • Provide “just in time,” contextual privacy notifications when relevant (e.g., when registering, or when the information is about to be collected).

The above is merely an overview and summary of the New Guidelines and therefore does not represent legal advice for any specific scenario or set of facts. Please feel free to contact one of OlenderFeldman’s Internet privacy attorneys, using the link provided below for information and advice regarding particular circumstances.

The Consequences of Non-Compliance with CalOPPA

While the New Guidelines are just that – mere recommendations – CalOPPA has teeth, and the AG’s Office is moving actively on enforcement. For example, it has already sued Delta Airlines for failure to comply with CalOPPA. A Covered Entity’s privacy policy, despite being discretionary within the general bounds of CalOPPA and written by the Covered Entity itself, has the force of law – including penalties, as discussed below. Thus, a Covered Entity should think carefully about the contents of its privacy policy; over-promising could result in completely unnecessary legal liability, but under-disclosing could also result in avoidable litigation. Furthermore, liability under CalOPPA could arise purely because of miscommunication or inadequate communication between a Covered Entity’s engineers and its management or legal departments, or because of a failure to keep sufficiently apprised of what information third parties (e.g., advertising networks) are collecting.

CalOPPA provides a Covered Entity with a 30-day grace period to post or correct its privacy policy after being notified by the AG’s Office of a deficiency. If the Covered Entity has not remedied the defect by the expiration of the grace period, it can be found in violation for failing to comply with (1) the CalOPPA legal requirements for the policy, or (2) the provisions of the Covered Entity’s own posted policy. The failure may be either knowing and willful, or negligent and material. Penalties can amount to $2,500 per violation. As mentioned above, non-California entities may also be subject to CalOPPA, and it is likely that CalOPPA-based judicial orders will be enforced in any jurisdiction within the United States.

While the broad brushstrokes of CalOPPA and the new DNT requirements are simple, there are many potential pitfalls, and actual, complete real-world compliance is likely to be tricky to achieve. Pre-emptive privacy planning can help avoid the legal pitfalls, so if you have any questions or concerns we recommend you contact one of OlenderFeldman’s certified and experienced privacy attorneys.

By: Aaron Krowne

On July 1, 2014, the first provisions of the Canadian Anti-Spam Law (“CASL”) will come into effect. CASL is intended to address the e-mail “spam” problem – spam being undesired commercial electronic messages (“CEMs”) – by requiring that recipients of CEMs consent to their receipt, either expressly or implicitly. CASL covers the sending of CEMs to all Canadian persons, the unsolicited installation of computer programs, and the alteration of transmitted data by third parties (collectively, “Covered Acts”). If any of the Covered Acts are performed in a manner not compliant with CASL, the violating party may be subject to a monetary penalty of up to $1,000,000 for an individual and $10,000,000 for an organization (these amounts are in Canadian dollars; in recent years the Canadian dollar has been nearly equal in value to the U.S. dollar). The below is merely an overview and summary of CASL and therefore does not represent legal advice for any specific scenario or set of facts.

How to Send Compliant CEMs

The following is required in order for a party to send CASL-compliant CEMs (an illustrative sketch follows the list):

  1. Obtain consent from potential recipients, either explicitly or implicitly (see below for a more detailed explanation).
  2. Clearly disclose the purpose of the consent being obtained, and clearly indicate who is requesting the consent.
  3. Clearly disclose, in each message, who has sent it, and on whose behalf it has been sent.
  4. Provide working contact information for the party sending CEMs.
  5. Include an unsubscribe mechanism in each message sent.
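
Purely as an illustration of items 3 through 5, the hypothetical sketch below assembles a message footer carrying sender identification, contact information and an unsubscribe mechanism. The names and URL are invented for the example, and nothing here substitutes for a review of the actual CASL requirements.

```python
def casl_footer(sender, on_behalf_of, mailing_address, phone, unsubscribe_url):
    """Builds an illustrative CEM footer with identification, contact details
    and an unsubscribe mechanism, per the requirements summarized above."""
    lines = [
        f"This message was sent by {sender} on behalf of {on_behalf_of}.",
        f"Contact us: {mailing_address} | {phone}",
        f"To stop receiving these messages, unsubscribe here: {unsubscribe_url}",
    ]
    return "\n".join(lines)

# Hypothetical usage (all values invented for the example)
print(casl_footer("Example Marketing Inc.", "Example Retail Ltd.",
                  "123 Example St., Toronto, ON", "+1-555-0100",
                  "https://example.com/unsubscribe?id=abc123"))
```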

Consent can be implied and valid under CASL if the sender and recipient have a pre-existing business (or non-business) relationship, and in a limited number of other circumstances. This prior relationship must, generally, be based on actions within the last two years, except for the first 36 months of CASL (a transitional period during which the relationship can go back an unlimited amount of time). Critically, the burden is on the sender of CEMs to establish implied consent. If, for example, a Canadian recipient of CEMs wrongly files a complaint against a sender, and the sender has lost the business records that would establish valid implied consent, the sender may nevertheless be fined as if there was no consent at all.
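
Because the burden of establishing implied consent falls on the sender, record-keeping matters. The hypothetical sketch below shows one simple shape such a business record might take, with the two-year look-back applied mechanically; the field names are invented, and the transitional-period rules and precise statutory tests are deliberately omitted.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ConsentRecord:
    """Illustrative record a sender might keep to support implied consent."""
    recipient: str                 # e-mail address or other electronic address
    basis: str                     # e.g., "purchase", "inquiry", "membership"
    last_relationship_date: date   # most recent qualifying interaction

    def supports_implied_consent(self, on: date) -> bool:
        # Simplified: treats the relationship as current for two years.
        return on - self.last_relationship_date <= timedelta(days=365 * 2)

record = ConsentRecord("customer@example.ca", "purchase", date(2013, 9, 1))
print(record.supports_implied_consent(on=date(2014, 7, 1)))  # True under this sketch
```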

Express consent can also be inferred by the sender based on the actions or expressions of the recipient; however, in that case the burden of proof still remains on the sender.

E-mails, Voice Messages, and Text Messages Oh My!

CASL goes beyond e-mail, and applies to any “electronic communications.” This includes text, sound, voice or image messages; and those sent to an e-mail account, instant messaging account, voicemail, or any similar technology. Although this is beneficial in that it dissuades spammers who are increasingly exploiting these other forms of electronic communication, it creates a potential hazard in that unwitting individuals and organizations need to ensure that much, if not all, of their general communications are CASL compliant.

Privacy Concerns

As mentioned above, core requirements of CASL are that the purpose of the consented-to communications, as well as the identities of the sender (and any party on whose behalf it is acting), be disclosed. These foundational provisions clearly bear on and protect the privacy of recipients of CEMs. Additionally, Section 10(5) of CASL, which sets out the requirements for installing computer programs, provides that program installers must clearly disclose and describe (including any “reasonably foreseeable impact” of) any aspects of the program that do any of the following:

  • collect personal information stored on the computer;
  • interfere with the owner’s control of the system;
  • change preferences, settings or commands;
  • change or interfere with any data stored on the computer;
  • cause the computer to communicate with any other system or device without authorization; or
  • install any third-party program.

All of the above points touch on major privacy concerns of consumers, who have, in recent years, become frustrated not only with “spam” programs and exploits being placed on their computers (or smartphones) by nefarious actors, but also with legitimate companies installing programs. These installations occur both with and without consumers’ knowledge, and often unexpectedly collect personal or private information and transmit it to the company (or to third parties) – conduct that constitutes a Covered Act. These concerns play into consumers’ increasing preference to know how they are being “tracked” online, and their desire for the ability to disable such tracking.

But I Am Based In the United States

Because the law applies to anyone sending CEMs to Canadians, those outside of Canada who are (or might be) sending CEMs to Canadian persons are affected by CASL. Since American businesses and individuals send (commercial) e-mails to Canadians, they are logically subject to CASL. Thus, if American individuals or businesses do not comply with CASL, they could be subject to fines and/or legal action in Canada. In order to avoid violating CASL and being subject to penalties, American individuals and entities that send CEMs should ensure their solicitation policies are CASL compliant.

Additionally, CASL defines “commercial” very broadly, and includes all businesses, without regard to profit; as such, even nonprofits are included. While there is an exception for registered Canadian charities, American charities, 501(c)(3)’s and other tax-exempt organizations, somewhat counter-intuitively, are subject to CASL – as much as any for-profit business.

In the end, U.S. entities have nothing to lose by abiding by CASL, as the requirements CASL sets out are simply good digital-age consumer relationship management practice, and can be reasonably considered basic business ethics. Further, by complying, U.S. businesses and individuals have only the elimination of potential international legal hassles to gain. Additionally, in complying with CASL, American entities will also be addressing many current consumer concerns with online data privacy.

Next Steps

The best policy is to put privacy first. United States entities or individuals sending CEMs of any kind should review their privacy policies and compare their procedures and provisions with those required by CASL (as well as U.S. online privacy laws and those of other nations) to determine whether they are compliant. An experienced and certified OlenderFeldman attorney can assist with this process.

Nathan D. Marinoff, Esq., Joins the Firm

Nathan specializes in corporate law and regularly advises domestic and international companies, Boards of Directors and investors in matters of corporate governance, public and private capital markets, venture capital and private equity investments, mergers and acquisitions, joint ventures, bank financings and commercial licensing and employment agreements.

Nathan began his legal career as a law clerk to a federal judge, following which he spent over seven years in private practice with Skadden, Arps, Slate, Meagher & Flom LLP and Morgan, Lewis & Bockius LLP.   Thereafter, he served as Deputy General Counsel at Virgin Mobile USA, overseeing the company’s initial public offering and its merger with Sprint Nextel, and as Senior Director, Legal at a New York private equity firm with over $8 billion in assets, providing counsel to the firm and legal oversight to over 30 portfolio companies. He is deeply involved in the community and serves as a member of the Board of Directors for two charities, The Jewish Education Project and Friends of Firefighters.

Nathan can be reached at: nmarinoff@olenderfeldman.com | 908-964-2432

Effective immediately, all New Jersey employers are required to treat pregnancy as a protected characteristic under the New Jersey Law Against Discrimination (“NJLAD”), as well as to provide reasonable accommodations when a pregnant employee requests an accommodation based upon advice of her physician, unless it would cause an undue hardship to the employer. 

The purpose of this Client Alert is to address some of the Frequently Asked Questions we have received from our clients about the new amendment to the New Jersey Law Against Discrimination.

What types of reasonable accommodations must be afforded pregnant employees?

Reasonable accommodations include, among other things, bathroom breaks, breaks for increased water intake, periodic rest, assistance with manual labor, modified work schedules and temporary transfers to less strenuous or hazardous work.

What are the variables that determine whether a request for a reasonable accommodation would cause an undue hardship upon an employer? 

There are a number of factors that are evaluated under the NJLAD as to whether a reasonable accommodation actually causes an undue hardship, including, among other things, the size of the business, number of employees, type of operations, the composition of the work force, the nature and cost of the accommodation required, and whether the accommodation would require the employer to ignore or waive the employee’s essential job functions in order to provide the accommodation.

When is leave required?

Pregnant employees are entitled to paid or unpaid leave as a reasonable accommodation in the same manner provided to other employees not affected by pregnancy. So, for example, if the employer has a disability leave policy, that policy must be adhered to for any pregnant employee. We recommend that all employers consider implementing a disability leave policy, even if they are not required to provide leave under the Federal Family and Medical Leave Act (“FMLA”) or the New Jersey Family Leave Act (“NJFLA”) due to the size of their business. Such a policy can flexibly permit employers to provide reasonable accommodations while at the same time meeting their business needs and objectives.

For example, employers can create an unprotected disability leave policy (assuming they do not have 50 or more employees, in which case they must provide leave under the FMLA or NJFLA) that requires their employees to exhaust their sick, vacation and personal days (paid time off) as a condition of taking such leave. Where an employee requires additional time off beyond paid time off, the employee is placed on unpaid leave with no assurance of being returned to the position he or she held with the employer prior to taking such leave. The employee’s ability to return to work following the end of his or her disability leave can be evaluated based upon the employer’s business needs at the time the employee is in fact capable of returning to work.

Is a separate notice regarding reasonable accommodations or pregnancy discrimination required to be posted under the NJLAD?  

No.  The Division on Civil Rights requires employers to display the Division’s official poster in a place where it will be visible to employees and applicants.  We anticipate that the Division will amend its official poster and employers will be advised to display the new poster as soon as practicable thereafter.

Insider threats, hackers and cyber criminals are all after your data, and despite your best precautions, they may breach your systems. How should small and medium sized businesses prepare for a cyber incident or data breach?

Cyber attacks are becoming more frequent and more sophisticated, and can have devastating consequences. It is not enough for organizations to merely defend themselves against cyber security threats. Determined hackers have proven that, with enough commitment, planning and persistence, they will inevitably find a way to access an organization’s data. Organizations need to either develop cyber incident response plans or update existing disaster recovery plans in order to quickly mitigate the effects of a cyber attack and/or prevent and remediate a data breach. Small businesses are perhaps the most vulnerable organizations, as they are often unable to dedicate the necessary resources to protect themselves. Some studies have found that nearly 60% of small businesses close within six months of a cyber attack. Today, risk management requires that you plan ahead to prepare for, protect against and recover from a cyber attack.

Protect Against Internal Threats

First, most organizations focus their cyber security systems on external threats; as a result, they often fail to protect against internal threats, which by some estimates account for nearly 80% of security issues. Common insider threats include abuse of confidential or proprietary information and disruption of security measures and protocols. Because internal threats can result in just as much damage as an outside attack, it is essential that organizations protect themselves from threats posed by their own employees. Limiting access to information is the primary safeguard: businesses can best protect themselves by granting access to information, particularly sensitive data, on a need-to-know basis. Logging events and backing up information, along with educating employees on safe emailing and Internet practices, are all crucial to an organization’s protection against, and recovery from, a breach.
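
To make the need-to-know point concrete, the hypothetical sketch below gates access to sensitive records by role and writes an audit log entry for every attempt. The roles and record categories are invented for illustration; a real deployment would rely on its existing identity, access-management and logging infrastructure.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("access-audit")

# Invented example: which roles need to know which categories of data
NEED_TO_KNOW = {
    "payroll_records": {"hr_manager", "payroll_clerk"},
    "customer_health_data": {"compliance_officer"},
}

def access_record(user, role, record_type):
    """Allow access only on a need-to-know basis and log every attempt."""
    allowed = role in NEED_TO_KNOW.get(record_type, set())
    audit_log.info("%s | user=%s role=%s record=%s allowed=%s",
                   datetime.now(timezone.utc).isoformat(), user, role, record_type, allowed)
    return allowed

print(access_record("jdoe", "payroll_clerk", "payroll_records"))       # True
print(access_record("jdoe", "payroll_clerk", "customer_health_data"))  # False
```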

Involve Your Team In Attack Mitigation Plans

Next, just as every employee can pose a cyber security threat, every employee can, and should, be a part of the post-attack process. All departments, not just the IT team, should be trained on how to communicate with clients after a cyber attack, and be prepared to work with the legal team to address the repercussions of such an attack. The most effective cyber response plans are customized to the organization; they should involve all employees and identify each employee’s specific role in the organization’s cyber security.

Draft, Implement and Update Your Cyber Security Plans

Finally, cyber security, just like technology, evolves on a daily basis, making it crucial for an organization to predict and prevent potential attacks before they happen. Organizations need to be proactive in drafting, implementing and updating their cyber security plans. The best way for an organization to test its cyber security plan is to simulate a breach or conduct an internal audit, which will help identify strengths and weaknesses in the plan, as well as build confidence that, in the event of an actual cyber attack, the organization is fully prepared.

If you have questions regarding creating or updating a disaster or cyber incident recovery plan, please feel free to contact us using our contact form below.

Contact OlenderFeldman LLP

We would be happy to speak with you regarding your issue or concern. Please fill out the information below and an attorney will contact you shortly.


The drafting of the Framework was fueled by former Secretary of Homeland Security Janet Napolitano’s warning that a “cyber 9/11 could happen imminently.” In recent years, cyberattacks on this country’s financial industry and resource infrastructure, as well as on private banks and companies, have dramatically increased in frequency and severity.

The National Institute of Standards and Technology (“NIST”) recently released a draft of its Preliminary Cybersecurity Framework (the “Framework”), which was initially proposed in President Obama’s executive order from February 2013. The purpose of the Framework is to provide guidelines to companies on how to best protect their networks against hackers, and if necessary, quickly respond to cybersecurity breaches. Ultimately, the Framework seeks to turn today’s best cybersecurity practices into standard practice. The Framework seeks to encourage companies to prioritize cyberthreats in the same way they prioritize financial and safety risks. The Framework is divided into five core functions: identify, protect, detect, respond and recover. Companies are encouraged to identify the potential cyber risks they may face, establish protective measures against those threats and to create methods that will allow the company to efficiently detect, respond to and recover from cyberattacks if and when they occur. The Framework is intended to complement, not replace, any existing cybersecurity programs.
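
As a rough organizational aid only, the five core functions can be sketched as a simple profile that a company might fill in with its own activities. The entries below are invented placeholders, not Framework text, and are offered purely for illustration.

```python
# Hypothetical, simplified "profile" keyed on the Framework's five core functions.
framework_profile = {
    "identify": ["inventory systems and data", "assess cyber risks"],
    "protect":  ["access control", "employee security training", "encryption"],
    "detect":   ["log monitoring", "intrusion detection alerts"],
    "respond":  ["incident response plan", "breach notification procedures"],
    "recover":  ["backups and restore testing", "post-incident review"],
}

for function, activities in framework_profile.items():
    print(f"{function.title()}: {', '.join(activities)}")
```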

This Framework is applicable to public and private companies, particularly those that are vital to U.S. security, though its adoption is purely voluntary. Since the Framework is not mandatory, it is not legally binding. Nevertheless, it may raise the bar for in-house counsel by undercutting claims of plausible deniability. Historically, cybersecurity was an issue that CEOs left exclusively to IT workers; consequently, if a company suffered a cyberattack, CEOs were able to point the finger at the IT department. However, the implementation of the Framework will likely limit CEOs’ ability to shield themselves from liability. The Framework sets a minimum standard of care when it comes to cybersecurity, which means that board members and chief information officers (CIOs) will need to collaborate in order to ensure a company is in full compliance. Furthermore, adoption of the Framework could become a federal contracting requirement. Additional incentives being offered by the government to encourage adoption of the Framework include lower cybersecurity insurance rates, priority consideration for grants, and optional public recognition.

Overall, the pros of the Framework are that it is the result of a collaborative drafting approach and that it provides companies considerable flexibility when it comes to implementation. This flexibility is necessary, as cybersecurity concerns are often specific to an industry sector or business. Unfortunately, compliance with the Framework will likely generate an enormous amount of paperwork for CIOs, since the proposed rules provide minimal clarity as to priorities. The Framework also includes liability shields that many consider controversial, such as reduced tort liability, limited indemnity and the creation of a federal legal privilege that would preempt state disclosure requirements. The final Framework will be issued this month, so stay tuned.

The Original vs. The Copy – Does It Really Matter From An Evidentiary Perspective?

While there are many hurdles a business document needs to overcome in order to be admitted as evidence in court, there is one hurdle that many clients routinely inquire about – the legality and admissibility of digital image copies in lieu of original documents. While lawyers recognize this as a best evidence issue – a legal doctrine holding that an original piece of evidence is superior to a copy – for clients it is a question of whether they need to retain an original signed contract or can save space in their file cabinets and rely on a scanned copy on their hard drives. Although state laws concerning admissibility of evidence vary, states have generally adopted the language, in whole or in part, of the Uniform Rules of Evidence (“URE”) and/or the Uniform Photographic Copies of Business and Public Records as Evidence Act (“UPA”). For the purpose of this article, the differences between the URE and the UPA are not important or relevant. Accordingly, there is a nationwide consensus that a digital image copy can generally overcome a best evidence challenge and be admitted as the original document.

The fundamental basis for states’ admission of digital duplicates can be found in the URE, which allows copies that are established as business records to be admitted into evidence “to the same extent as the original.” Duplication is permitted by any technique that “accurately reproduces the original.” Similarly, under the UPA, duplicate records are admissible as the original, in judicial or administrative proceedings, provided that the duplicate was generated by a “process which accurately reproduces the original.” The UPA permits the destruction of original documents, unless preservation is required by law (e.g., wills, negotiable instruments and copyrights). Hence, the law permits the destruction of original documents subject to certain evidentiary requirements.

When read together and interpreted by the majority of states, the URE and the UPA allow duplicate copies to be given the same evidentiary weight as originals, so long as those copies are properly generated, maintained and authenticated. Therefore, clients are encouraged to adopt certain practices when copying their business documents:

  • The copies should be produced and relied upon during the regular course of business.
  • The business should have a written policy specifying the process of duplication, as well as where and how copies will be stored. This written policy should be made available to the business’s custodian(s) of records.
  • The business’s written policy should include a requirement that at least one witness be present at the time of duplication who would be available to testify under oath that the generated duplicate accurately and completely represents the original.
  • The business’s written policy should be subject to regular review in order to ensure the stated compliance procedures are satisfied.

Ultimately, clients should feel free to indulge their desire to “save the space” and dispose of an original contract, so long as the above duplication practices are adhered to and all other relevant evidentiary and legal requirements are satisfied. Clients should also be aware that, because the medium for storing electronic records must meet certain legal standards, their choice of hardware is critical when it comes to admissibility of a duplicated record. Given the variety of legal and technological nuances that need to be taken into consideration, when in doubt it is always best to seek the guidance of a qualified and experienced attorney to avoid any potential legal pitfalls. The above article reflects the national trend in the United States; to ensure that your business has complied with state- and/or country-specific regulations, it is once again best to contact a qualified and experienced attorney who practices in your jurisdiction.
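
For businesses that do adopt such a written duplication policy, part of the practical work is building an audit trail from which a custodian of records could later testify. The hypothetical sketch below records a cryptographic hash of the scanned copy together with who performed and witnessed the duplication; the field names and file path are invented, and a real policy should be drafted with counsel.

```python
import hashlib
from datetime import datetime, timezone

def record_duplication(copy_path, operator, witness):
    """Creates an illustrative audit-trail entry for a scanned business document."""
    with open(copy_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "file": copy_path,
        "sha256": digest,                      # fingerprint of the stored copy
        "duplicated_by": operator,
        "witnessed_by": witness,
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical usage (path and names invented for the example):
# entry = record_duplication("contracts/acme_2014_signed.pdf", "J. Smith", "A. Jones")
```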

JK! LOL! I Did Not Mean to Post That – California Now Requires That Children Be Provided With a “Cyber Eraser”

By Angelina Bruno-Metzger

Of the new cyber laws signed by California Governor Jerry Brown, by far the most publicized and debated has been bill SB568, which provides minors with greater cyber privacy rights. There are two main components of this new law: (1) it requires website operators and mobile application owners to allow minors to remove their postings, and (2) it places stronger restrictions on the types of products website operators can market and advertise to minors. The sentiment and policy initiative behind this new law is clearly well-intentioned: to allow minors – who are prone to posting rash and often emotionally charged content online without any awareness of, or concern for, the future implications of that decision – to remove the harmful or offending content, whether the regret comes five minutes later or years later.

The first part of this law, the “internet eraser,” applies to two main categories of web providers: (1) those that operate web sites, provide online services, or have mobile applications directed at minors, and (2) those same providers that have actual knowledge that a minor is using their site, service or mobile application. This eraser, however, does not require the website operator to delete the information from its server. Instead, an operator will be deemed to have complied with the removal requirement by simply ensuring that the content is no longer visible to other users. As with many laws, there are several notable exceptions, and this new internet eraser law is no different; there are multiple scenarios in which a web site operator is not under a removal obligation. Examples include posts made anonymously by minors and any content posted by a minor for which the minor received compensation (or other consideration). Additionally, only minors who are registered users of a site, service or application may seek to have their content removed.
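
Since the statute can be satisfied by making the content invisible to other users rather than by deleting it from the server, many operators will likely implement the eraser as a “soft delete.” The hypothetical sketch below illustrates the idea with a simple visibility flag; the class and field names are invented, and any real implementation would also need to account for the law’s exceptions and the registered-user limitation noted above.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_id: int
    author_is_minor: bool
    text: str
    hidden: bool = False    # soft-delete flag; the record itself is retained

class PostStore:
    def __init__(self):
        self._posts = {}
        self._next_id = 1

    def create(self, post):
        post_id = self._next_id
        self._posts[post_id] = post
        self._next_id += 1
        return post_id

    def request_removal(self, post_id, requester_id):
        """Honors a minor's removal request by hiding the post from other users."""
        post = self._posts.get(post_id)
        if post and post.author_id == requester_id and post.author_is_minor:
            post.hidden = True
            return True
        return False

    def visible_posts(self):
        return [p for p in self._posts.values() if not p.hidden]

store = PostStore()
pid = store.create(Post(author_id=42, author_is_minor=True, text="I regret this"))
store.request_removal(pid, requester_id=42)
print(len(store.visible_posts()))  # 0: hidden from other users, but still on the server
```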

The second part of this law involves the limitation of marketing and advertising of specified products to minors on websites and mobile devices. Predictably, those specified products include certain dietary supplements, permanent tattoos, alcohol, firearms, fireworks, lottery tickets and e-cigarettes. A website operator will be deemed to be in compliance with this new law if it has properly notified its advertising services that its site, service or application is directed towards minors. Essentially, if a company could not sell a product face-to-face to a minor, under this new law a company cannot solicit or sell that same product to a minor online.

This law will become effective on January 1, 2015, and legal experts from around the country are already debating whether it represents a direct collision of privacy law and the First Amendment. Additionally, as with all cyber laws, there remains an enormous amount of ambiguity to address. For example, does the person need to be a minor when requesting removal, or can an adult retroactively ask for removal of a posting made while a minor? Will this law apply to all websites in the country or just to those based in California? As currently written, the new law does not include a time frame within which the operator needs to delete the requested content. Moreover, the scope of the content to be deleted remains unclear, and there is no penalty for an operator that does not comply with a request.

Stay tuned to see how the implementation and enforcement of this law plays out. For now, review our prior postings about the best ways to navigate social media and the workplace, as well as the limitations of privacy on Facebook.