By: Aaron Krowne

On July 14, 2014, the New York Attorney General’s office (“NY AG”) released a seminal report on data breaches, entitled “Information Exposed: Historical Examination of Data Breaches in New York State” (the “Report”). The Report presents a wealth of eye-opening (and sobering) information on data breaches in New York and beyond. It is primarily based upon the NY AG’s own analysis of data breach reports received during the first eight years of reporting (2006 through 2013) under the State’s data breach reporting law (NY General Business Law §899-aa). The Report also cites extensively to outside research, providing a national and international picture of data breaches. Its primary finding is that data breaches, somewhat unsurprisingly, are a rapidly growing problem.

A Growing Menace

The headline statistic of the Report is its finding that data breaches in or affecting New York tripled between 2006 and 2013. During this time frame, 22.8 million personal records of New Yorkers were exposed in nearly 5,000 breaches, affecting more than 3,000 businesses. The “worst” year was 2013, with 7.4 million records exposed, mainly due to the Target and LivingSocial “mega-breaches.” While the Report warned that such mega-breaches appear to be a growing trend, businesses of all sizes are affected and at risk.

The Report revealed that hacking was responsible for 43% of breaches and 64% of the total records exposed. Other major causes of breaches include “lost or stolen equipment or documentation” (accounting for 25% of breaches), “employee error” (totaling 21% of breaches), and “insider wrongdoing” (tallying 11% of breaches). It is thus important to note that the majority of breaches still originate internally. However, since 2009, hacking has grown to become the dominant cause of breaches; not coincidentally, 2009 was the year that “crimeware” source code was released and began to proliferate. Hacking was responsible for a whopping 96.4% of the New York records exposed in 2013 (again, largely due to the mega-breaches).

The Report notes that retail services and health care providers are “particularly” vulnerable to data breaches. The following breaks down, by sector, the number of entities that suffered repeated data breaches: 54 “retail services” entities (a “favorite target of hackers,” per the Report), 31 “financial services” entities, 29 “health care” entities, 27 “banking” entities, and 20 “insurance” entities.

The Report also points out that these breach statistics are likely on the low side. One reason is that New York’s data breach law does not cover all breaches. The law’s reporting requirement is triggered only when two types of information are compromised together: (1) a name, number, personal mark, or other identifier that can be used to identify a natural person, combined with (2) a social security number, government ID or license number, account number, or credit or debit card number along with its security code. If only one piece of information is compromised, the reporting requirement is not triggered. Yet the compromise of even one piece of data (e.g., a social security number) can have the same effect as a “breach” under the law, since actual damage to the consumer is still possible (particularly if the breached information can be combined with complementary information obtained elsewhere). Further, the full impact of a specific reported breach may be unknown, leading to the breach being “underestimated.”
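
For illustration, here is a minimal sketch (ours, not the Report’s, and not legal advice) of the statute’s two-element trigger; the field names are hypothetical:

```python
# Hypothetical sketch of the two-element test in NY Gen. Bus. Law
# § 899-aa: reporting is triggered only when an identifier is
# compromised together with a sensitive data element.

SENSITIVE_ELEMENTS = {
    "ssn",                    # social security number
    "government_id",          # government ID or license number
    "account_number",         # account number
    "card_number_with_code",  # credit/debit card number plus security code
}

def reporting_triggered(compromised: set) -> bool:
    """Return True if both an identifier and a sensitive element leaked."""
    has_identifier = "name_or_identifier" in compromised
    has_sensitive = bool(compromised & SENSITIVE_ELEMENTS)
    return has_identifier and has_sensitive

# The gap the Report highlights: an SSN alone does not trigger the law,
# even though the consumer may suffer the same harm.
print(reporting_triggered({"ssn"}))                        # False
print(reporting_triggered({"name_or_identifier", "ssn"}))  # True
```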

Real Costs: Answering To The Market

Though New York’s data breach law allows the AG to bring suits for actual damages and statutory penalties for failure to notify (notification to all affected consumers and to the NY AG’s office is required, and for large breaches, to consumer reporting agencies as well), such awards are likely to be minor compared with the market impact and direct costs of a breach. The Report estimates that in 2013, breaches cost New York businesses $1.37 billion, based on a per-record cost estimate of $188 (breach cost estimates are from data breach research consultancy the Ponemon Institute). In 2014, this per-record estimate has already risen to $201. The cost for hacked records is even higher than the average, at $277. The total average cost of a breach is currently $5.9 million, up from $5.4 million in 2013. These amounts represent only costs incurred by the businesses hit, including expenses such as investigation, communications, free consumer credit monitoring, and reformulation and implementation of data security measures. Costs borne by the consumers themselves are not included, so this is, once again, an underestimate.
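
As a rough illustration of how these per-record figures scale (our arithmetic, using the Ponemon estimates cited above; the record count below is implied by the Report’s figures, not stated in it):

```python
# Back-of-the-envelope arithmetic with the Ponemon per-record estimates.
PER_RECORD_2013 = 188  # USD, average per record (2013)
PER_RECORD_2014 = 201  # USD, average per record (2014)
PER_RECORD_HACK = 277  # USD, per hacked record

# $1.37B total 2013 cost implies roughly 7.3 million records:
implied_records = 1.37e9 / PER_RECORD_2013
print(f"Implied 2013 records: {implied_records / 1e6:.1f} million")

# The same exposure at 2014 rates, or via hacking, costs materially more:
print(f"At 2014 rates: ${implied_records * PER_RECORD_2014 / 1e9:.2f} billion")
print(f"If all hacked: ${implied_records * PER_RECORD_HACK / 1e9:.2f} billion")
```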

These amounts also do not include market costs, for which the Target (2013) and Sony PlayStation (2011) mega-breaches are particularly sobering examples. Target experienced a 46% drop in quarterly profit in the wake of the massive breach of its customers’ data, and Sony estimates it lost over $1 billion. Both also suffered contemporaneous significant declines in their stock prices.

Returning to direct costs, the fallout continues: on August 5, 2014, Target announced that the costs of the 2013 breach would exceed its previous estimates, coming in at nearly $150 million.

Recommended Practices

The Report’s banner recommendation, in the face of all the above, is to have an information security plan in place, especially given that 57% of breaches are primarily caused by “inside” issues (i.e., lost/stolen records, employee error, or wrongdoing) that directly implicate information security practices. An information security plan should specifically include:

  • a privacy policy;
  • restricted and controlled access to records;
  • monitoring systems for unauthorized access;
  • use of encryption, secure access to all devices, and non-internet connected storage;
  • uniform employee training programs;
  • reasonable data disposal practices (e.g., using disk wiping programs).

The Report is not especially optimistic about preventing hacking, but we would note that hacking, or at least its efficacy, can also be reduced by implementing an information security plan. For example, implementing encryption, and training employees to use it uniformly and properly, can be quite powerful.
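
As a minimal illustration of the point (a sketch only, using the widely available Python “cryptography” package; a real deployment involves key management and policy decisions beyond its scope), encrypting records at rest means a stolen disk or laptop yields ciphertext rather than usable personal information:

```python
# Minimal sketch: encrypting a record at rest with symmetric encryption.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, keep this in a key-management system
f = Fernet(key)

record = b"Jane Doe,123-45-6789"  # hypothetical customer record
token = f.encrypt(record)         # this ciphertext is what sits on disk
print(token)                      # unreadable without the key

assert f.decrypt(token) == record  # recoverable only with the key
```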

Whether the breach threat comes to you in the form of employee conduct or an outside hack attempt, don’t be caught wrong-footed by not having an adequate information security plan. A certified privacy attorney at OlenderFeldman can assist you with your business’s information security plan, whether you need to create one for the first time, or simply need help ensuring that your current plan provides the maximum protection to your business.

By: Aaron Krowne

On July 1, 2014, Delaware enacted HB 295, which provides for the “safe destruction of records containing personal identifying information” (codified at Chapter 50C, Title 6, Subtitle II, of the Delaware Code). The law goes into effect January 1, 2015.

Overview of Delaware’s Data Destruction Law

In brief, the law requires a commercial entity to take reasonable steps to destroy, or arrange for the destruction of, consumers’ personal identifying information when the entity seeks to dispose of that information.

The core of this directive is to “take reasonable steps to destroy” the data. The law imposes no specific requirements, offering only a few suggestions such as shredding, erasing, or overwriting the information, which creates some uncertainty as to what steps an entity must take in order to achieve compliance.

For purposes of this law, “commercial entity” (CE) is defined so as to cover almost any type of business entity except governmental entities (in contrast to, say, Florida’s law). Importantly, Delaware’s definition of a CE clearly includes charities and nonprofits.

The definition of personal identifying information (PII) is central to complying with the law. For purposes of this law, PII is defined as a consumer’s first name or first initial and last name, in combination with one of the following: social security number; passport number; driver’s license or state ID card number; insurance policy number; financial, bank, credit, or debit account number; tax or payroll information; or confidential health care information. “Confidential health care information” is intentionally defined broadly so as to cover essentially a patient’s entire health care history.

The definition of PII also, importantly, excludes information that is encrypted, meaning, somewhat surprisingly, that encrypted information is deemed not to be “personal identifying information” under this law. This implies that, if the above-listed data is encrypted, all of the consumer’s data may be retainable forever, even if judged no longer useful or relevant.

The definition of “consumer” in the law is also noteworthy, as it is defined so as to expressly exclude employees, and only covers individuals (not CEs) engaged in non-business transactions. Thus, rather surprisingly, an individual engaging in a transaction with a CE on behalf of his or her sole proprietorship is not covered by the law.

Penalties and Enforcement

The law does not provide for any specific monetary damages in the case of “a record unreasonably disposed of.” However, it does provide a private right of action, whereby consumers who suffer actual damages may bring suit for an improper record disposal; the violation, though, must be reckless or intentional, not merely negligent. Additionally, and perhaps to greater effect, the Attorney General may bring either a lawsuit or an administrative action against a CE.

Who Is Not Affected?

The law expressly exempts entities covered by pre-existing pertinent regulations, such as health-related companies covered by the Health Insurance Portability and Accountability Act, as well as banks, financial institutions, and consumer reporting agencies. It remains unclear whether CEs without Delaware customers fall within the scope of the law, which is written so broadly that it does not limit itself either to Delaware CEs or to non-Delaware CEs with Delaware customers. Therefore, if your business falls into either category, the safest option is to comply with the provisions of the law.

Implications and Questions

We have already seen that this facially simple law contains many hidden wrinkles and leaves some open questions. Some further elaborations and questions include:

  • What are “reasonable steps to destroy” PII? Examples are given, but the intent seems to be to leave the specifics up to the CE’s judgment – including dispatching the job to a third party.
  • The “when” of disposal: the law applies when the CE “seeks to permanently dispose of” the PII. Does a CE’s judgment that consumer information is no longer useful or necessary count? Or must the CE make an express disposal decision for the law to apply? If the latter, can CEs forever defer applicability of the law by simply never formally “disposing” of the information (perhaps by expressly declaring that it is “always” useful)?
  • Responsibility for the information – the law applies to PII “within the custody or control” of the CE. When does access constitute “custody” or “control”? With social networks, “cloud” storage and services, and increasingly portable, “brokered” consumer information, this is likely to become an increasingly tested issue.

Given these considerable questions, as well as the major jurisdictional ambiguity discussed above (and additional ones included in the extended version of this post), potential CEs (Delaware entities, as well as entities who may have Delaware customers) should make sure they are well within the bounds of compliance with this law. The best course of action is to contact an experienced OlenderFeldman attorney, and make sure your privacy and data disposal policies place your business comfortably in compliance with Delaware’s new data destruction law.

By: Aaron Krowne

In a major recent case testing California’s medical information privacy law, the Confidentiality of Medical Information Act, or CMIA (California Civil Code § 56 et seq.), the Third District Court of Appeal held in Sutter Health v. Superior Court on July 21, 2014 that confidential information covered by the law must be “actually viewed” for the statutory penalty provisions of the law to apply. The implication of this decision is that it just got harder for consumers to sue for a “pure” loss of privacy due to a data breach in California and possibly beyond.

Not So Strict

Previously, CMIA was assumed to be a strict liability statute: in the absence of actual damages, a covered party that “negligently released” confidential health information was still subject to a $1,000 nominal penalty. That is, if a covered health care provider or health service company negligently handled customer information, and that information was subsequently taken by a third party (e.g., through theft of a computer or data device containing such information), that in itself triggered the $1,000 per-instance (and thus, per-customer-record) penalty. There was no suggestion that the thief (or other recipient) of the confidential health information needed to see, or do anything with, such information. Indeed, plaintiffs had previously brought cases under such a “strict liability” theory and succeeded in the application of CMIA’s $1,000 penalty.

Sutter Health turns that theory on its head, with dramatically different results for consumers and California health-related companies.

Sutter was looking at a potential $4 billion penalty, stemming from the October 2011 theft from its offices of a computer containing 4 million unencrypted client records. Sutter’s computer was password-protected, but without encryption of the underlying data that measure is easily defeated. Security at the office was light, with no alarm or surveillance cameras. Believing this to be “negligent,” some affected Sutter customers sued under CMIA in a class action. Given the potential amount of the total penalty, the stakes were high.

The Court not only ruled against the Sutter customers, but dismissed the case on demurrer, meaning that the Court determined that the case was deficient on the pleadings because the Plaintiffs “failed to state a cause of action.” The main reason, according to the Court, was that the Plaintiffs failed to allege that an unauthorized person actually viewed the confidential information; therefore there was no breach of confidentiality as required under CMIA. The Court elaborated that under CMIA “[t]he duty is to preserve confidentiality, and a breach of confidentiality is the injury protected against. Without an actual confidentiality breach there is no injury and therefore no negligence…”.

The Court also introduced the concept of possession, which is absent in CMIA itself, to delimit its new theory interpreting CMIA, saying: “[t]hat [because] records have changed possession even in an unauthorized manner does not [automatically] mean they have been exposed to the view of an unauthorized person.” So, plaintiffs bringing claims under CMIA will now have to allege, and ultimately prove, that their confidential information (1) changed possession in an unauthorized manner, and that (2) it was actually viewed (or, presumably, used) by an unauthorized party.

The Last Word?

This may not be the last word on CMIA, and it is certainly not the last word on the general issue of the burden of proving harm in consumer data breaches. The problem is that it is extremely difficult to prove that anything nefarious has actually happened with sensitive consumer data post-breach, short of catching the perpetrator and obtaining a confession, or actually observing the use of the data or its sale to a third party. Even positive results detected through credit monitoring, such as attempts by unauthorized third parties to use credit cards, do not conclusively prove that a particular breach was the cause of such unauthorized access.

The Sutter court avers, in support of its ruling, that we do not actually know whether the thief in this case simply stole the computer, wiped the hard drive clean, and sold it as a used computer, in which case there would be no violation of CMIA. Yet, logically, the opposite may just as well have happened: retrieval of the customer data may very well have been the actual goal of the theft. In an environment where sensitive consumer records can fetch as much as $45 each (totaling $180 million for the Sutter customer data), it seems unwise to rely on the assumption that thieves will simply not bother to check for valuable information on stolen corporate computers and digital devices.

Indeed, the Sutter decision perhaps raises as many questions as it answers about where to draw the line for a “breach of confidential information.” To wit: presumably, a hacker downloading unencrypted information would still qualify under the CMIA as so interpreted. But then, by what substantive rationale does the physical removal of a hard drive in this case not qualify? Additionally, how is it determined whether a party actually looked at the data, and precisely who looked at it?

Further, the final chapter on the Sutter breach may not yet be written – the data may still be (or turn out to have been) put to nefarious use, in which case, the court’s ruling will seem premature. Thus, there is likely to be some pushback to Sutter, to the extent that consumers do not accept the lack of punitive options in “open-ended” breaches of this nature, and lawmakers actually intend consumer data-handling negligence laws to have some “bite.”

Conclusion

Naively, it would seem under the Sutter Court’s interpretation, that companies dealing with consumer health information have a “blank check” to treat that information negligently – so long as the actual viewing (and presumably, use) of that information by unauthorized persons is a remote possibility. We would caution against this assumption. First, as above, there may be some pushback (judicially, legislatively, or in terms of public response) to Sutter’s strict requirement of proof of viewing of breached records. But more importantly, there is simply no guarantee that exposed information will not be released and be put to harmful use, and that sufficient proof of such will not surface for use in consumer lawsuits.

One basic lesson of Sutter is that, while the company dodged a bullet thanks to a court’s re-interpretation of a law, it (and its customers) would have been vastly safer had it simply utilized encryption. More broadly, Sutter should have had, and implemented, a better data security policy. Companies dealing with customers’ health information (in California and elsewhere) should take every possible precaution to secure this information.

Do not put your company and your customers at risk of data breaches. Contact a certified privacy attorney at OlenderFeldman to make sure your company’s data security policy provides coverage for all applicable health information laws.

By: Aaron Krowne

In this post we briefly introduce a key aspect of the right to privacy – the reasonable expectation of privacy (“REP”) – and discuss the impact of the recent US Supreme Court decisions in Riley v. California and US v. Wurie on it, with implications for digital information privacy.

A Game-Changer?

The Supreme Court’s recent ruling on June 25, 2014 in the paired cases Riley v. California and United States v. Wurie represents a major development on the REP front. These cases concerned two individuals who had been arrested for relatively minor infractions, yet ended up convicted of major crimes after police searched and made use of information found on their cell phones.

In Riley, a routine traffic stop led to a search of the vehicle, and ultimately a search of the cell phone of the driver, Riley. The search of Riley’s phone revealed information that led police to connect him to a recent gang shooting, for which he was convicted. In Wurie, the namesake defendant was witnessed in an apparent street drug sale and was subsequently arrested in a police sting, during which his cell phone was searched. Information on Wurie’s phone led police to his apartment, and when police entered and searched the apartment (Wurie’s girlfriend let them in despite the lack of a warrant), they found a large quantity of drugs, leading to a greatly elevated conviction for Wurie.

Few would argue that Riley and Wurie were not actually guilty of the greater crimes they were connected to based on the information found on their phone; the key question is whether the phone-gleaned evidence was admissible, or if it was inadmissible for being the result of an unreasonable search in violation of the Fourth Amendment. In the face of a shooting and major drug-dealing activity, the Supreme Court surprisingly (to some) said “no” – the evidence was not admissible. The Supreme Court declared that the warrantless searches of cell phones in these cases violated a person’s reasonable expectation of privacy, and that the state’s interest in law enforcement did not trump this expectation.

Much of the debate in these cases (given significant coverage in the opinion) dealt with whether the prevailing rule, from the precedent in US v. Robinson (1973), also applied to Riley and Wurie. In Robinson, the Court found that a search of physical items found on an arrestee (e.g., in a pocket) was permissible under the circumstances of an arrest, and items (or information) discovered this way would generally qualify as admissible evidence to support additional criminal charges. The precedent set in Robinson stood as the general rule, and thus the law of the land, for over 40 years. The prosecution in Riley and Wurie, simply following in this vein, argued that searches of a cell phone were not materially different from searches of physical on-person items during an arrest, and were thus similarly admissible. The Supreme Court vehemently disagreed with this rationale, countering famously:

“[t]hat is like saying a ride on horseback is materially indistinguishable from a flight to the moon. Both are ways of getting from point A to point B, but little else justifies lumping them together.”

It continued:

“Modern cell phones, as a category, implicate privacy concerns far beyond those implicated by the search of a cigarette pack, a wallet, or a purse. A conclusion that inspecting the contents of an arrestee’s pockets works no substantial additional intrusion on privacy beyond the arrest itself may make sense as applied to physical items, but any extension of that reasoning to digital data has to rest on its own bottom.”

The Supreme Court further explained:

“… The possible intrusion on privacy is not physically limited in the same way when it comes to cell phones… Even the most basic phones that sell for less than $20 might hold photographs, picture messages, text messages, Internet browsing history, a calendar, a thousand-entry phone book, and so on… The sum of an individual’s private life can be reconstructed through a thousand photographs labeled with dates, locations, and descriptions; the same cannot be said of a photograph or two of loved ones tucked into a wallet.”

In sum, the Supreme Court recognized that digital information on a cell phone is so personal and so extensive that there is a significant and reasonable expectation of privacy in that information, as compared to the mere physical items one might carry.

A Strong Signal

Because the decision in Riley was unanimous, the ruling sends an especially strong signal as to where the highest court in the land stands regarding REP. Of course, the facts of the cases were limited to warrantless searches of personal electronic devices by law enforcement during an arrest; but given the force of unanimity behind the decision and its extensive, clear rationale supporting digital privacy (just small portions of which were quoted above), the ruling will likely reverberate into other areas of digital information privacy. Moreover, absent the exigencies of an arrest, the state interest in breaching privacy is weaker still, implying that the Supreme Court’s rationale should apply even more strongly in general (non-arrest) situations. The net effect is that the ruling will likely expand the scope of what is considered REP for digital personal information relative to areas that have been in question in recent years.

Translation To Practice

The Supreme Court’s ruling in Riley provides strong support for a real REP in the voluminous personal information which is now “everywhere” online: in social networks, in “the cloud,” and in corporate databases. This in turn validates consumers’ and the general public’s concern over how that information is used, including attendant issues of:

  • how much information is collected, and when;
  • how the information is encoded/stored (e.g., anonymized or identifiable);
  • who has access to the information (internally or when shared with third parties);
  • under what circumstances it is accessed or shared;
  • what objective purposes (business or governmental) it can be used for;
  • if and when it is disposed of, and how this is carried out;
  • how data breaches are handled; and
  • how the above policies are formulated and communicated to users.

Of course, these concerns had already begun to find support in state laws and guidelines (e.g., California’s and Florida’s, as well as Canada’s and other countries’ laws affecting US online businesses). But now, the U.S. Supreme Court itself has given powerful voice to them, as well as form to the underlying REP principles, lending greater legitimacy. While this does not mean that every circumstance finds a REP in each trivial bit of consumer data, this development will only accelerate the implementation of laws respecting a REP in digital personal information.

For assistance in making sure your policies and practices respect your users’ REP, are compliant with current online privacy laws, and are positioned for the inevitable increase in online privacy laws, be sure to contact one of OlenderFeldman’s certified privacy attorneys today.

By: Aaron Krowne

On June 20, 2014, Florida enacted SB 1524, the Florida Information Protection Act of 2014 (“FIPA”). The law updates Florida’s existing data breach law, creating one of the nation’s strongest protections for consumer personal data through the use of strict transparency requirements. FIPA applies to any entity with customers (or users) in Florida – so businesses with a national reach should take heed.

Overview of FIPA

FIPA requires any covered business to provide notification within 30 days of a data breach implicating the personal information of Florida residents. Additionally, FIPA requires the implementation of “reasonable measures” to protect and secure electronic data containing personal information (such as e-mail address/password combinations and medical information), including a data destruction requirement upon disposal of the data.

Be forewarned: The penalties provided under FIPA pack a strong punch. Failure to make the required notification can result in a fine of up to $1,000 a day for up to 30 days; a $50,000 fine for each 30-day period (or fraction thereof) afterwards; and beyond 180 days, $500,000 per breach. Violations are to be treated as “unfair or deceptive trade practices” under Florida law. Of note for businesses that utilize third party data centers and data processors, covered entities may be held liable for these third party agents’ violations of FIPA.
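
For illustration, the penalty schedule can be sketched as a simple function (our reading of how the tiers stack, treating the amount past 180 days as a flat per-breach figure; this is an assumption, not legal advice):

```python
# Sketch of FIPA's late-notification penalty schedule as described above.
import math

def fipa_penalty(days_late: int) -> int:
    if days_late <= 0:
        return 0
    if days_late > 180:
        return 500_000                      # per-breach amount past 180 days
    if days_late <= 30:
        return 1_000 * days_late            # $1,000 per day, first 30 days
    extra_periods = math.ceil((days_late - 30) / 30)
    return 30_000 + 50_000 * extra_periods  # $50,000 per period (or fraction)

for d in (10, 30, 45, 181):
    print(d, fipa_penalty(d))  # 10000, 30000, 80000, 500000
```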

While the potential fines for not following the breach notification protocols are steep, no private right of action exists under FIPA.

The Notification Requirement

Any covered business that discovers a breach must generally notify the affected individuals within 30 days of discovery. The business must also notify the Florida Attorney General within 30 days if more than 500 Florida residents are affected.

However, if the cost of sending individual breach notifications is estimated to be over $250,000, or where over 500,000 customers are affected, businesses may satisfy their obligations under FIPA by notifying customers via a conspicuous web site posting and by running ads in the affected areas (as well as filing a report with the Florida AG’s office).

Where a covered business reasonably self-determines that there has been no harm to Florida residents, and therefore notifications are not required, it must document this determination in writing, and must provide such written determination to the Florida AG’s office within 30 days.

Finally, FIPA provides a strong incentive for businesses to encrypt their consumer data, as notification to affected individuals is not required if the personal information was encrypted.
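
Putting the notification rules above together, a simplified decision sketch (illustrative only; real determinations involve facts and judgment that a few lines of code cannot capture) might look like this:

```python
# Simplified sketch of FIPA's notification logic as summarized above.
def fipa_notice_plan(affected: int, est_notice_cost: float,
                     data_encrypted: bool) -> list:
    if data_encrypted:
        return ["No individual notice required (encryption safe harbor)."]
    steps = ["Notify affected individuals within 30 days."]
    if affected > 500:
        steps.append("Notify the Florida AG within 30 days.")
    if est_notice_cost > 250_000 or affected > 500_000:
        steps.append("Substitute notice permitted: conspicuous website "
                     "posting plus ads in affected areas, and a report "
                     "to the Florida AG's office.")
    return steps

for step in fipa_notice_plan(600_000, 300_000.0, False):
    print(step)
```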

Implications and Responsibilities

One major takeaway from the FIPA responsibilities outlined above is the importance of formulating and writing down a data security policy. FIPA requires the implementation of “reasonable measures” to protect and secure personal information, implying that companies should already have such measures formulated. Having a carefully crafted data security policy will also help covered businesses determine what harm, if any, has occurred after a breach and whether individual reporting is ultimately required.

For all of the above-cited reasons, FIPA adds urgency for a business to formulate a privacy and data security policy if it does not have one, and if it already has one, to make sure that it meets the FIPA requirements. Should you have any questions, do not hesitate to contact one of OlenderFeldman’s certified privacy attorneys to make sure your data security policy adequately responds to breaches as prescribed under FIPA.

By: Aaron Krowne

You may have heard quite a bit of buzz about “Google Glass” in the past few years – and if you aren’t already intimately familiar with it, you probably at least know it is a new technology from Google that involves a “computer and camera” grafted onto otherwise standard eyeglasses. Indeed, this basic picture is correct, and itself is enough to suggest some interesting (and to some, upsetting) legal and societal questions.

What is Google Glass?

Google Glass, first released in February 2013, is one of the first technologies for “augmented reality” aimed at the consumer market. Augmented reality (“AR”) seeks to take a “normal experience” of the world and add computerized feedback and controls, enabling various ways for users to “get more” out of whatever activity they are engaged in. Before the release of Google Glass, this type of technology was mostly limited to the military and other niche applications. AR is in contrast to typical, modal technology, which requires users to (at least intermittently) “take a break” from the outside world and focus exclusively on the technology (e.g., periodically checking your cell phone for mapping directions). It also stands in contrast to the better-known virtual reality, which seeks to simulate or entirely replace the outside world with a generated, “real-seeming” one (e.g., a flight simulator). AR technology isn’t entirely alien to consumers, either; a simple form which is not new to the consumer market is voice interaction on smartphones (e.g., Siri on the iPhone) – but Google Glass takes AR to another level.

Google Glass is indeed “glasses with a computer and camera” on them, but also, importantly, has a tiny “heads up display” (screen), viewable by the wearer on the periphery of one lens. The camera allows the wearer to capture the world in snapshots or video, but more transformatively, allows the computer to “see” the world through the wearer’s eyes and provide automated feedback about it. The main page of Google Glass’s web site gives a number of examples of how the device can be used, including: hands-free heads-up navigation (particularly useful in non-standard settings, such as cycling or other sports), overlaid instruction while training for sports (e.g., improving one’s golf swing by “seeing” the correct swing), real-time heads-up translations (e.g., for street signs in a foreign country), useful real-time lookup functions such as currency conversions, overlaid site and landmark information (e.g., names, history, and even ratings), and, simply, a hands-free version of more “conventional” functions such as phone calls, digital music playing, and instant messaging. This list only scratches the surface of what Google Glass can do – and surely there are countless other applications that have not yet been imagined.

Until April 2014, Google Glass was only available to software developers, but now it is available to the consumer market, where it sells for about $1,500. While it is tempting to write off such new, pricey technology as of interest mostly to “geeks,” the capabilities of Google Glass are so compelling that it is reasonable to expect that it (and possibly “clones” developed by other manufacturers) will enter considerably more widespread (if not mainstream) use in a few short years. After all, this is what happened with cell phones, and then smartphones, with historically-blinding speed. Here, we will endeavor not to be “blinded” by the legal implications of this new technology and provide a brief summary of the most commonly referenced societal and legal concerns implicated by Google Glass.

Safety Issues

An almost immediate “gut” concern with a technology that inserts itself into one’s field of view is that it might be distracting, and hence, unsafe in certain situations; most worryingly, while driving. It is often said that the law lags behind technology; but as many as eight states are already considering legislation that would restrict Google Glass (or future Glass-like devices) while driving. Indeed, the law has shown that it can respond quickly to new technology when it is coupled with acute public concern or outrage; consider the few short years between the nascent cell phone driving-distraction concerns of the mid-2000s and the now near-universal bans and restrictions on that sort of use.

More immediately, while some states do not have laws that explicitly mention technologies like Google Glass, their existing laws are written broadly enough (or have been interpreted so as) to cover banning Google Glass use while driving. For example, California’s “distracted driving” law (Cal. Vehicle Code S. 27602) covers “visually displaying [...] a video signal [...] visible to the driver,” which most certainly includes Google Glass. This interpretation of the California law was confirmed in the recent Abadie case, where the law was applied to a San Francisco driver who had been cited for driving while wearing Google Glass. However, luckily for the driver in Abadie, the ticket was dismissed on the grounds that actual use of Google Glass at the time could not be proven.

Are such safety concerns about distraction well-founded? Google and other defenders of Google Glass counter that Google Glass is specifically designed not to be distracting; the projected image is in the wearer’s periphery which eliminates the need to “look down” at some other more conventional device (such as a GPS unit or a smartphone).

The truth seems to be somewhere in the middle, implying a nuanced approach. As the Abadie case suggests, Google Glass might not always be in use, and therefore, not distracting. And per the above, Google Glass might even reduce distraction in certain contexts and for certain functions. Yet, nearly all states have “distraction” as a category on police traffic accident report forms. Therefore, whether or not laws are ultimately written specifically to cover Google Glass-type technology, its usage while driving has already given rise to new, manifest legal risks.

Privacy Issues

The ability to easily and ubiquitously record one’s surroundings naturally triggers privacy concerns. The most troubling is that Google Glass provides private citizens with the technology to surreptitiously record others. Such concerns could even find a legal basis in wiretapping laws currently on the books, such as the Federal Wiretap Act (18 USC S. 2511, et seq.) and its state analogues, some of which prohibit the recording of “oral” communications unless all parties consent. Such laws apply to Google Glass and any other wearable recording device, much as they do to non-wearable recording devices.

There are other privacy concerns relating to the sheer ubiquity of recording: e.g., worry for a “general loss of privacy” due to the transformation of commonplace situations and otherwise ephemeral actions and utterances into preserved, replayable, reviewable, and broadcastable/sharable media. An always-worn, potentially always-on device like Google Glass certainly seems to validate this concern and is itself sufficient to give rise to inflamed sentiments or outright conflict. Tech blogger Sarah Slocum learned this lesson the hard way when she was assaulted at a San Francisco bar in February 2014 for wearing her Google Glass inside the bar.

Further, Google Glass provides facial recognition capability, and combined with widespread “tagging” in photos uploaded to social media sites, this capability does seem to add heft (if not urgency) to the vague sense of disquiet. Specifically, Google Glass in combination with tagging would appear to make it exceedingly easy (if not effortless) for identities to be extracted from day-to-day scenes and linked to unwitting third parties’ online profiles – perhaps in places or scenarios they would not want family, employers or “the general public” to see.

Google Glass’s defenders would reply to the above concerns by pointing out that the device is hardly inconspicuous, so one cannot truly record with it “secretly.” Further adding to its obviousness, Google Glass displays a prominent blinking red light when it actually is recording. Additionally, Google Glass records for only approximately ten seconds by default, and its battery supports only about 45 minutes of total recording, making it significantly inferior to dedicated recording or surveillance devices, which have long been cheaply available to consumers.

But in the end, it is clear that Google Glass means more recording and photographing will be taking place in public. Further, the fact that “AI” capabilities like facial recognition are not only possible, but integral to Google Glass’s best intended use, suggests that the device will be “pushing the envelope” in ways that challenge people’s general expectation of privacy. This envelope-pushing is likely to generate lawsuits – as well as laws.

Piracy Concerns

Another notable area of concern is “piracy” (the distribution of copyrighted works in violation of a copyright). Because Google Glass can be worn all the time, record, and “see what the wearer sees,” it is inherently capable of easily capturing wearer-viewed media, such as movies, art exhibits, or musical performances, without the consent of the performer or copyright owner. For this very reason, consumer recording devices are often restricted or banned in such settings.

Of course, recording still happens – especially with today’s ubiquitous smartphones – but the worry is that if Google Glass is “generally” permitted, customer/viewer recording will be harder to control. This concern was embodied in a recent case where an Ohio man was kicked out of a movie theater and then questioned by Homeland Security personnel (specifically, Immigration and Customs Enforcement, which investigates piracy) for wearing Google Glass to a movie. The man was released without charges, as there was no indication the device was on. But despite the fact that this example had a happy ending for the wearer, such an interaction certainly amounts to more than a minor inconvenience for both the wearer and the business being patronized.

Law & Conventions

Some of the above concerns with Google Glass are likely to fade as social conventions develop and adapt around such devices. The idea of society needing to catch up with technology is not a new concept – as Google’s Glass “FAQ” specifically points out, when cameras first became available to consumers, they were initially banned from beaches and parks. This seems ridiculous (if not a little quaint) to us today, but it is important to note that even the legal implications of cameras have not “gone away.” Rather, a mixture of tolerance, usage conventions, non-governmental regulatory practices and laws evolved to deal with cameras (for example, intentionally taking a picture that violates the target’s reasonable expectation of privacy is still legally actionable, even if the photographer is in a public area). The same evolution is likely to happen with Google Glass. If you have questions about how Google Glass is being used by, or affecting, you or your employees, or have plans to use Google Glass (either personally or in the course of a business), do not hesitate to consult with one of OlenderFeldman’s experienced technology attorneys to discuss the potential legal risks and best practices.

John Hancock…Is That Really You?

All too often, documents such as contracts, wills, or promissory notes are contested based on allegations of fraudulent or forged signatures. Indeed, our office once handled a two-week arbitration based solely on the issue of authenticating a signature on a contract. Fortunately, a quick, simple, and inexpensive way to prevent this problem is to have the document notarized by a notary public (“Notary”). A notarization, or notarial act, is the process whereby a Notary assures and documents that: (1) the signer of the document appeared before the Notary, (2) the Notary identified the signer as the individual whose signature appears, and (3) the signer provided his or her signature willingly and was not coerced or under duress. Generally speaking, the party whose signature is being notarized must identify himself or herself, provide valid personal identification (e.g., a driver’s license), attest that the contents of the document are true, and attest that the provisions of the document will take effect exactly as drafted. Finally, the document must be signed in the presence of the Notary.

Why is Notarization Important?

A primary reason to have a document notarized is to deter fraud by providing an additional layer of verification that the document was signed by the individual whose name appears. In most jurisdictions, notarized documents are self-authenticating. A Notary can also certify a copy of a document as being an authentic copy of the original. For more information, please see our previous blog post regarding the enforceability of duplicate contracts. Ultimately, this means that the signers do not need to testify in court to verify the authenticity of their signatures. Thus, if there is ever a dispute as to the authenticity of a signature, significant time and money can be saved by avoiding testimony – which also eliminates the potential of a dispute over witness credibility (i.e., he said, she said).

How are Notaries Regulated?

Each state individually regulates and governs the conduct of Notaries. For specifics on New Jersey law, see the New Jersey Notary Public Manual, and for New York’s law, see the New York Notary Public Law. In most cases, a Notary can be held personally liable for his or her intentional or negligent acts or misconduct during the notarization process. For example, a Notary could be liable for damages or criminal penalties if he or she notarizes a signature which was not provided in the Notary’s presence or which the Notary knows is not authentic. A Notary is generally charged with the responsibility of going through a document to make sure that there are no alterations or blank spaces in the document prior to the notarization. The strict regulation of Notaries provides additional recourse for the aggrieved party, as the Notary could be held responsible for damages a party suffers as a direct result of the failure of the Notary to perform his or her responsibilities.

The Future of Notarization

As with most areas of the law, notarization is attempting to catch up with technology. Some states have authorized eNotarization, which is essentially the same as paper notarization except that the document being notarized is in digital form, and the Notary certifies it with an electronic signature. Depending on the state, the information in a Notary’s seal may be placed on the electronic document as a graphic image. Nevertheless, the same basic elements of traditional paper notarization remain, including, specifically, the requirement that the signer physically appear before the Notary. Recently, Virginia took eNotarization a step further and authorized webcam notarization, meaning the document is notarized electronically and the signer does not need to physically appear before the Notary. However, a few states, including New Jersey, have issued public statements expressly banning webcam notarization and still require signers to physically appear before a Notary.

The bottom line: parties should consider backing up their “John Hancock” by notarizing their important documents. The low cost, typical accessibility of an authorized Notary, and simplicity of the process may make it worth the extra effort.

By: Aaron Krowne

In 2013, the California Legislature passed AB 370, an amendment to California’s path-blazing 2003 online consumer privacy protection law, the California Online Privacy Protection Act (“CalOPPA”). AB 370 took effect January 1, 2014, and adds new requirements to CalOPPA pertaining to consumers’ use of Do-Not-Track (DNT) signals in their web browsers (all major web browsers now include this capability). CalOPPA applies to any website, online service, or mobile application that collects personally identifiable information from consumers residing in California (a “Covered Entity”).

While AB 370 does not mandate a particular response to a DNT signal, it does require two new disclosures that must be included in a Covered Entity’s privacy policy: (1) how the site operator responds to a DNT signal (or to other “similar mechanisms”); and (2) whether there are third parties performing online tracking on the Covered Entity’s site or service. As an alternative to the descriptive disclosure listed in (1), the Covered Entity may elect to provide a “clear and conspicuous link” in its privacy policy to a “choice program” which provides consumers a choice about tracking. The Covered Entity must clearly describe the effect of a particular choice (e.g., a web interface which allows users to disable the site’s tracking based on their browser’s DNT).
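
On the technical side, the DNT signal itself is just an HTTP header the browser sends with each request. Here is a minimal sketch of detecting it server-side (using Flask purely as an illustrative choice; CalOPPA does not prescribe any particular implementation, only disclosure of how you respond):

```python
# Minimal sketch: reading the browser's Do-Not-Track header server-side.
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    dnt = request.headers.get("DNT")  # "1" means the user opted out of tracking
    if dnt == "1":
        # Honor the signal however your privacy policy says you will,
        # e.g., by not setting third-party tracking cookies.
        return "Tracking disabled per your Do Not Track setting."
    return "Standard page (analytics enabled)."
```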

While this all might seem simple enough, as with many new laws, it has raised many questions about specifics, particularly how to achieve compliance. As a result, on May 21, 2014, the California Attorney General’s Office (the “AG’s Office”) issued a set of new guidelines entitled “Making Your Privacy Practices Public” (the “New Guidelines”).

The New Guidelines

The New Guidelines regarding DNT specifically suggest that a Covered Entity:

  1. Make it easy for a consumer to find the section of the privacy policy in which the online tracking policy is described (e.g., by labeling it “How We Respond to Do Not Track Signals,” “Online Tracking” or “California Do Not Track Disclosures”).
  2. Provide a description of how it responds to a browser’s DNT signal (or to other similar mechanisms), rather than merely linking to a “choice program.”
  3. State whether third parties are or may be collecting personally identifiable information of consumers while they are on a Covered Entity’s website or using a Covered Entity’s service.

In general, when drafting a privacy policy that complies with CalOPPA the New Guidelines recommend that a Covered Entity:

  • Use plain, straightforward language, avoiding technical or legal jargon.
  • Use a format that makes the policy readable, such as a “layered” format (which first shows users a high-level summary of the full policy).
  • Explain its uses of personally identifiable information beyond what is necessary for fulfilling a customer transaction or for the basic functionality of the online service.
  • Whenever possible, provide a link to the privacy policies of third parties with whom it shares personally identifiable information.
  • Describe the choices a consumer has regarding the collection, use and sharing of his or her personal information.
  • Provide “just in time,” contextual privacy notifications when relevant (e.g., when registering, or when the information is about to be collected).

The above is merely an overview and summary of the New Guidelines and therefore does not represent legal advice for any specific scenario or set of facts. Please feel free to contact one of OlenderFeldman’s Internet privacy attorneys, using the link provided below for information and advice regarding particular circumstances.

The Consequences of Non-Compliance with CalOPPA

While the New Guidelines are just that, mere recommendations, CalOPPA has teeth, and the AG’s Office is moving actively on enforcement; for example, it has already sued Delta Airlines for failure to comply with CalOPPA. A Covered Entity’s privacy policy, despite being discretionary within the general bounds of CalOPPA and written by the Covered Entity itself, has the force of law – including penalties, as discussed below. Thus, a Covered Entity should think carefully about the contents of its privacy policy: over-promising could result in completely unnecessary legal liability, but under-disclosing could also result in avoidable litigation. Furthermore, liability under CalOPPA could arise purely because of miscommunication or inadequate communication between a Covered Entity’s engineers and its management or legal departments, or because of a failure to keep sufficiently apprised of what information third parties (e.g., advertising networks) are collecting.

CalOPPA provides a Covered Entity with a 30-day grace period to post or correct its privacy policy after being notified by the AG’s Office of a deficiency. However, if the Covered Entity has not remedied the defect by the expiration of the grace period, it can be found in violation for failing to comply with: (1) the CalOPPA legal requirements for the policy, or (2) the provisions of the Covered Entity’s own site policy. The failure may be either knowing and willful, or negligent and material. Penalties can amount to $2,500 per violation. As mentioned above, non-California entities may also be subject to CalOPPA, and it is likely that CalOPPA-based judicial orders will be enforced in any jurisdiction within the United States.

While the broad brushstrokes of CalOPPA and the new DNT requirements are simple, there are many potential pitfalls, and actual, complete real-world compliance is likely to be tricky to achieve. Pre-emptive privacy planning can help avoid the legal pitfalls, so if you have any questions or concerns, we recommend you contact one of OlenderFeldman’s certified and experienced privacy attorneys.

By: Aaron Krowne

On July 1, 2014, the first provisions of the Canadian Anti-Spam Law (“CASL”) will come into effect. CASL is intended to address the e-mail “spam” problem (spam being undesired commercial electronic messages, or “CEMs”) by requiring that recipients of CEMs consent to their receipt, either expressly or implicitly. CASL covers the sending of CEMs to all Canadian persons, the unsolicited installation of computer programs, and the alteration of transmitted data by third parties (collectively, “Covered Acts”). If any of the Covered Acts are performed in a manner not compliant with CASL, the violating party may be subject to a monetary penalty of up to $1,000,000 for an individual and $10,000,000 for an organization (in Canadian dollars; in recent years, however, the Canadian dollar has been nearly equal in value to the U.S. dollar). The below is merely an overview and summary of CASL and therefore does not represent legal advice for any specific scenario or set of facts.

How to Send Compliant CEMs

The following is required in order for a party to send CASL-compliant CEMs (a sketch of a compliant message footer follows the list):

  1. Obtain consent from potential recipients, either explicitly or implicitly (see below for a more detailed explanation).
  2. Clearly disclose the purpose of the consent being obtained, and clearly indicate who is requesting the consent.
  3. Clearly disclose, in each message, who has sent it, and on whose behalf it has been sent.
  4. Provide working contact information for the party sending CEMs.
  5. Include an unsubscribe mechanism in each message sent.
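
By way of illustration, items 3 through 5 can be addressed with a standard message footer. Here is a hypothetical sketch (the names, address, and URL are illustrative assumptions, not CASL-mandated text):

```python
# Hypothetical sketch of a CEM footer covering sender identification,
# working contact information, and an unsubscribe mechanism.
def cem_footer(sender: str, on_behalf_of: str, address: str,
               phone: str, unsubscribe_url: str) -> str:
    return (
        f"This message was sent by {sender} on behalf of {on_behalf_of}.\n"
        f"Contact us: {address} | {phone}\n"
        f"To stop receiving these messages, visit: {unsubscribe_url}"
    )

print(cem_footer("Acme Marketing Inc.", "Acme Widgets Ltd.",
                 "123 Main St., Toronto, ON", "+1-555-0100",
                 "https://example.com/unsubscribe?id=123"))
```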

Consent can be implied and valid under CASL if the sender and recipient have a pre-existing business (or non-business) relationship, and in a limited number of other circumstances. This prior relationship must, generally, be based on actions within the last two years, except for the first 36 months of CASL (a transitional period during which the relationship can go back an unlimited amount of time). Critically, the burden is on the sender of CEMs to establish implied consent. If, for example, a Canadian recipient of CEMs wrongly files a complaint against a sender, and the sender has lost the business records that would establish valid implied consent, the sender may nevertheless be fined as if there were no consent at all.
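
A simplified sketch of these timing rules (our reading, approximating months as 30-day periods and years as 365 days; the actual CASL timing rules contain nuances this omits):

```python
# Simplified sketch of CASL's implied-consent timing rules described above.
from datetime import date, timedelta

CASL_EFFECTIVE = date(2014, 7, 1)
TRANSITION_END = CASL_EFFECTIVE + timedelta(days=36 * 30)  # ~36 months

def implied_consent_ok(last_transaction: date, today: date) -> bool:
    if today < TRANSITION_END:
        # Transitional period: a pre-existing relationship qualifies
        # no matter how far back the last transaction occurred.
        return last_transaction < today
    return (today - last_transaction) <= timedelta(days=2 * 365)

print(implied_consent_ok(date(2009, 5, 1), date(2015, 3, 1)))  # True
print(implied_consent_ok(date(2009, 5, 1), date(2018, 3, 1)))  # False
```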

Consent may also be inferred by the sender from the actions or expressions of the recipient; however, the burden of proving such consent remains on the sender.

E-mails, Voice Messages, and Text Messages, Oh My!

CASL goes beyond e-mail, and applies to any “electronic communications.” This includes text, sound, voice or image messages; and those sent to an e-mail account, instant messaging account, voicemail, or any similar technology. Although this is beneficial in that it dissuades spammers who are increasingly exploiting these other forms of electronic communication, it creates a potential hazard in that unwitting individuals and organizations need to ensure that much, if not all, of their general communications are CASL compliant.

Privacy Concerns

As mentioned above, core requirements of CASL are that the purpose of the consented-to communications, as well as the identities of the sender (and any party on whose behalf the sender is acting), be disclosed. These foundational provisions clearly bear on and protect the privacy of recipients of CEMs. Additionally, Section 10(5) of CASL, which outlines the requirements for installing computer programs, sets out that program installers must clearly give notice of and describe (including any “reasonably foreseeable impact” of) any aspects of the program that do any of the following:

  • collect personal information stored on the computer;
  • interfere with the owner’s control of the system;
  • change preferences, settings or commands;
  • change or interfere with any data stored on the computer;
  • cause the computer to communicate with any other system or device without authorization; or
  • install any third-party program.

All of the above points touch on major privacy concerns of consumers, who have, in recent years, become frustrated not only with “spam” programs and exploits being placed on their computers (or smartphones) by nefarious actors, but also with legitimate companies installing programs. These installations, whether known or unknown to consumers, may unexpectedly collect personal or private information and transmit it to the company (or to third parties), thereby committing Covered Acts. These concerns play into consumers’ increasing preference to know how they are being “tracked” online, and their desire for the ability to disable such tracking.

But I Am Based In the United States

Because the law applies to anyone sending CEMs to Canadians, those outside of Canada who are (or might be) sending CEMs to Canadian persons are affected by CASL. Since American businesses and individuals send (commercial) e-mails to Canadians, they are logically subject to CASL. Thus, if American individuals or businesses do not comply with CASL, they could be subject to fines and/or legal action in Canada. In order to avoid violating CASL and being subject to penalties, American individuals and entities that send CEMs should ensure their solicitation policies are CASL compliant.

Additionally, CASL defines “commercial” very broadly and without regard to profit, so even nonprofits are covered. While there is an exception for registered Canadian charities, American charities, 501(c)(3)s, and other tax-exempt organizations are, somewhat counter-intuitively, subject to CASL as much as any for-profit business.

In the end, U.S. entities have nothing to lose by abiding by CASL: its requirements are simply good digital-age customer relationship management practice and can reasonably be considered basic business ethics. By complying, U.S. businesses and individuals eliminate potential international legal hassles and, at the same time, address many current consumer concerns about online data privacy.

Next Steps

The best policy is to put privacy first. United States entities or individuals sending CEMs of any kind should review their privacy policies and compare their procedures and provisions with those required by CASL (as well as U.S. online privacy laws and those of other nations) to determine whether they are compliant. An experienced and certified OlenderFeldman attorney can assist with this process.

Nathan D. Marinoff, Esq. Joins the Firm

Nathan specializes in corporate law and regularly advises domestic and international companies, Boards of Directors and investors in matters of corporate governance, public and private capital markets, venture capital and private equity investments, mergers and acquisitions, joint ventures, bank financings and commercial licensing and employment agreements.

Nathan began his legal career as a law clerk to a federal judge, following which he spent over seven years in private practice with Skadden, Arps, Slate, Meagher & Flom LLP and Morgan, Lewis & Bockius LLP.   Thereafter, he served as Deputy General Counsel at Virgin Mobile USA, overseeing the company’s initial public offering and its merger with Sprint Nextel, and as Senior Director, Legal at a New York private equity firm with over $8 billion in assets, providing counsel to the firm and legal oversight to over 30 portfolio companies. He is deeply involved in the community and serves as a member of the Board of Directors for two charities, The Jewish Education Project and Friends of Firefighters.

Nathan can be reached at: nmarinoff@olenderfeldman.com | 908-964-2432

For the second year in a row, OlenderFeldman is proud to congratulate Christian Jensen on being named one of Super Lawyers’ Rising Stars. The New Jersey Rising Stars list is limited to lawyers who are 40 years old or less, or who have been in practice for 10 years or less, and comprises no more than 2.5% of the lawyers in the state.

Christian focuses his practice at OlenderFeldman in the areas of complex commercial litigation and intellectual property litigation, including business and consumer fraud, construction, and employment law. For more information about Christian, please click here.

Effective immediately, all New Jersey employers are required to treat pregnancy as a protected characteristic under the New Jersey Law Against Discrimination (“NJLAD”), as well as to provide reasonable accommodations when a pregnant employee requests an accommodation based upon advice of her physician, unless it would cause an undue hardship to the employer. 

The purpose of this Client Alert is to address some of the Frequently Asked Questions we have received from our clients about the new amendment to the New Jersey Law Against Discrimination.

What types of reasonable accommodations must be afforded pregnant employees?

Reasonable accommodations include, among other things, bathroom breaks, breaks for increased water intake, periodic rest, assistance with manual labor, modified work schedules and temporary transfers to less strenuous or hazardous work.

What are the variables that determine whether a request for a reasonable accommodation would cause an undue hardship upon an employer? 

A number of factors are evaluated under the NJLAD to determine whether a reasonable accommodation would actually cause an undue hardship, including, among other things, the size of the business, the number of employees, the type of operations, the composition of the workforce, the nature and cost of the accommodation required, and whether the accommodation would require the employer to ignore or waive the employee’s essential job functions.

When is leave required?

Pregnant employees are entitled to paid or unpaid leave as a reasonable accommodation in the same manner provided to other employees not affected by pregnancy. So, for example, if the employer has a disability leave policy, that policy must be adhered to for any pregnant employee. We recommend that all employers consider implementing a disability leave policy, even if they are not required to provide leave under the federal Family and Medical Leave Act (“FMLA”) or New Jersey Family Leave Act (“NJFLA”) due to the size of their business. Such a policy gives employers the flexibility to provide reasonable accommodations while meeting their business needs and objectives.

For example, employers can create an unprotected disability leave policy (assuming they do not have 50 or more employees, in which case they must provide leave under the FMLA or NJFLA) that requires their employees to exhaust their sick, vacation, and personal days (paid time off) as a condition of taking such leave. Where an employee requires additional time off beyond paid time off, the employee is placed on unpaid leave with no assurance of being returned to the position he or she held prior to taking such leave. The employee’s ability to return to work following the end of his or her disability leave can be evaluated based upon the employer’s business needs once the employee is in fact capable of returning to work.

Is a separate notice regarding reasonable accommodations or pregnancy discrimination required to be posted under the NJLAD?  

No.  The Division on Civil Rights requires employers to display the Division’s official poster in a place where it will be visible to employees and applicants.  We anticipate that the Division will amend its official poster and employers will be advised to display the new poster as soon as practicable thereafter.

OlenderFeldman LLP Data Protection and Privacy lawyers Michael Feldman and Jordan Kovnot will attend the International Association of Privacy Professionals (IAPP) Global Privacy Summit, to be held March 5-7 in Washington, D.C.

The event will feature thousands of privacy industry professionals participating in dozens of educational sessions covering FTC compliance, cloud computing, big data privacy, cybersecurity, data breach response, the NIST Cybersecurity Framework, COPPA, and more. If you would like to meet with Michael or Jordan, please send them an email or contact us using the contact form. We hope to see you there.

Insider threats, hackers, and cyber criminals are all after your data, and despite your best precautions, they may breach your systems. How should small and medium-sized businesses prepare for a cyber incident or data breach?

Cyber attacks are becoming more frequent and more sophisticated, and can have devastating consequences. It is not enough for organizations to merely defend themselves against cyber security threats: determined hackers have proven that, with enough commitment, planning, and persistence, they will inevitably find a way to access an organization’s data. Organizations need to either develop cyber incident response plans or update existing disaster recovery plans in order to quickly mitigate the effects of a cyber attack and/or prevent and remediate a data breach. Small businesses are perhaps the most vulnerable organizations, as they are often unable to dedicate the necessary resources to protect themselves. Some studies have found that nearly 60% of small businesses will close within six months following a cyber attack. Today, risk management requires that you plan ahead to prepare for, protect against, and recover from a cyber attack.

Protect Against Internal Threats

First, most organizations focus their cyber security systems on external threats, and as a result often fail to protect against internal threats, which by some estimates account for nearly 80% of security issues. Common insider threats include abuse of confidential or proprietary information and disruption of security measures and protocols. As internal threats can result in just as much damage as an outside attack, it is essential that organizations protect themselves from threats posed by their own employees. Limiting access to information is the primary defense: businesses are best protected when access to information, particularly sensitive data, is granted on a need-to-know basis. Logging events and backing up information, along with educating employees on safe emailing and Internet practices, are also crucial to an organization’s protection against, and recovery from, a breach. A minimal sketch of such a need-to-know check appears below.
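Here is that sketch in Python; the role names, data categories, and audit log format are our illustrative assumptions, not a prescribed implementation:

```python
# Minimal sketch (role names, data categories, and log format are
# illustrative assumptions). Access is granted only where a role's duties
# require the data, and every decision is logged for later investigation.
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="access_audit.log", level=logging.INFO)

# Map each role to the data categories it genuinely needs.
ROLE_PERMISSIONS = {
    "hr": {"employee_records"},
    "billing": {"payment_data"},
    "support": {"customer_contact"},
}

def can_access(role: str, category: str) -> bool:
    """Return True only when the role's duties require the data category."""
    allowed = category in ROLE_PERMISSIONS.get(role, set())
    # Log every decision so an investigation can reconstruct who saw what.
    logging.info("%s role=%s category=%s allowed=%s",
                 datetime.now(timezone.utc).isoformat(), role, category, allowed)
    return allowed

assert can_access("hr", "employee_records")       # need-to-know: permitted
assert not can_access("support", "payment_data")  # outside duties: denied
```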

Involve Your Team In Attack Mitigation Plans

Next, just as every employee can pose a cyber security threat, every employee can, and should, be a part of the post-attack process. All departments, not just the IT team, should be trained on how to communicate with clients after a cyber attack, and be prepared to work with the legal team to address the repercussions of such an attack. The most effective cyber response plans are customized to the organization, involve all employees, and identify each employee’s specific role in the organization’s cyber security.

Draft, Implement and Update Your Cyber Security Plans

Finally, cyber security, just like technology, evolves on a daily basis, making it crucial for an organization to predict and prevent potential attacks before they happen. Organizations need to be proactive in drafting, implementing, and updating their cyber security plans. The best way for an organization to test its cyber security plan is to simulate a breach or conduct an internal audit, which will help identify strengths and weaknesses in the plan and build confidence that, in the event of an actual cyber attack, the organization is fully prepared.

If you have questions regarding creating or updating a disaster or cyber incident recovery plan, please feel free to contact Aaron Messing at 908-624-6293 or via our contact form below.

Contact OlenderFeldman LLP

We would be happy to speak with you regarding your issue or concern. Please fill out the information below and an attorney will contact you shortly.

Collection of Location Data Enables Personalized Recommendations; Creates Privacy Concerns

Location data is becoming increasingly valuable to companies, which can use this information to build detailed profiles of individuals’ preferences and activities, including where they live, work, and shop. Location data can be collected from any Wi-Fi-enabled smartphone, which carries a persistent identifier that can be tracked without notifying the user. Companies can also determine which Wi-Fi networks a phone has logged into.

Although this information can enable companies to provide individually tailored services and products, many have raised concerns about the privacy implications of this type of tracking. For example, a company could infer that an individual has a medical condition based on trips to health care providers.  Additionally, companies are increasingly able to connect online and offline behaviors into a composite profile.

Please click here to see Aaron Messing’s interview with Fox News concerning location privacy.

Jordan Kovnot, an attorney with OlenderFeldman, LLP and adjunct professor at Fordham Law School, was quoted extensively in this article in Tablet Magazine about a new wave of Internet laws that target the perpetrators of so-called “revenge porn.”

The term “revenge porn” refers to the practice of maliciously posting sexually explicit images of an individual without their consent. The practice most commonly involves jilted former lovers who were either sent the images or actively participated in their creation (sometimes with the knowledge of the subjects, and sometimes secretly, using hidden cameras). After these relationships sour, the angry exes post the images online, often in tandem with links to the victims’ names, addresses, places of work, and social media accounts. In addition to humiliation and mental anguish, victims of revenge porn have been subsequently targeted by stalkers and extortionists who find their pictures and contact information online.

New Jersey’s invasion of privacy law prohibits making secret recordings of individuals engaged in sexual conduct. That law was used to prosecute a Rutgers student who surreptitiously recorded his roommate, Tyler Clementi, whose subsequent suicide brought national attention to the case. Last year California passed a law that criminalizes the posting of explicit photographs of an individual without his or her consent, though it is limited to instances in which the perpetrator was also the photographer. Recently a bill was put forth in the New York State Senate to outlaw the posting of revenge porn regardless of who created the images and regardless of whether they were created in secret. Such a law would go as far as to punish the unauthorized publication of so-called “selfies” (explicit self-portraits willingly shared by the photographer) where the publication was done with an intent to cause distress.

As Kovnot discusses in the interview, the images at the heart of these violations are often taken in the context of intimate, trusting relationships. As those relationships fall apart, angry, jealous, or spiteful individuals sometimes exploit those pictures and videos in order to inflict pain. Existing privacy laws often offer little help to victims, particularly in instances in which the victim willingly shared (or assisted in the creation of) the image; in those cases the victim is often deemed to have no expectation of privacy. These new laws are intended to help serve as deterrents and to provide victims with new avenues for relief.

The Original vs. The Copy – Does It Really Matter From An Evidentiary Perspective?

While there are many hurdles a business document must overcome in order to be admitted as evidence in court, there is one that many clients routinely ask about: the legality and admissibility of digital image copies in lieu of original documents. Lawyers recognize this as a best evidence issue (the legal doctrine that an original piece of evidence is superior to a copy); for clients, it is a question of whether they need to retain an original signed contract or can save space in their file cabinets and rely on a scanned copy on their hard drive. Although state laws concerning admissibility of evidence vary, states have generally adopted the language, in whole or in part, of the Uniform Rules of Evidence (“URE”) and/or the Uniform Photographic Copies of Business and Public Records as Evidence Act (“UPA”). For the purposes of this article the differences between the URE and the UPA are not important. Accordingly, there is a nationwide consensus that a digital image copy can generally overcome a best evidence challenge and be admitted as the original document.

The fundamental basis for states’ admission of digital duplicates can be found in the URE, which allows copies that are established as business records to be admitted into evidence “to the same extent as the original.” Duplication is permitted by any technique that “accurately reproduces the original.” Similarly, under the UPA, duplicate records are admissible as the original in judicial or administrative proceedings, provided that the duplicate was generated by a “process which accurately reproduces the original.” The UPA permits the destruction of original documents unless preservation is required by law (e.g., wills, negotiable instruments, and copyrights). Hence, the law permits the destruction of original documents subject to certain evidentiary requirements.

When read together and interpreted by the majority of states, the URE and the UPA allow duplicate copies to be given the same evidentiary weight as originals, so long as those copies are properly generated, maintained and authenticated. Therefore, clients are encouraged to adopt certain practices when copying their business documents:

  • The copies should be produced and relied upon during the regular course of business.
  • The business should have a written policy specifying the process of duplication, as well as where and how copies will be stored. This written policy should be made available to the business’s custodian(s) of records.
  • The business’s written policy should include a requirement that at least one witness be present at the time of duplication who would be available to testify under oath that the generated duplicate accurately and completely represents the original.
  • The business’s written policy should be subject to regular review in order to ensure the stated compliance procedures are satisfied.

Ultimately, clients should feel free to indulge their desire to “save the space” and dispose of an original contract, so long as the above duplication practices are adhered to and all other relevant evidentiary and legal requirements are satisfied. Clients should also be aware that, because the medium for storing electronic records must meet certain legal standards, their choice of hardware is critical when it comes to the admissibility of a duplicated record. Given the variety of legal and technological nuances involved, when in doubt it is always best to seek the guidance of a qualified and experienced attorney to avoid potential legal pitfalls. The above reflects the national trend in the United States; to ensure that your business complies with state- and/or country-specific regulations, it is best to contact a qualified and experienced attorney who practices in your jurisdiction.

JK! LOL! I Did Not Mean to Post That – California Now Requires That Children Be Provided With a “Cyber Eraser”

By Angelina Bruno-Metzger

Of the new cyber laws signed by California Governor Jerry Brown, by far the most publicized and debated has been bill SB568, which provides minors with greater cyber privacy rights. There are two main components of this new law: (1) it requires website operators and mobile application owners to allow minors to remove their postings, and (2) it places stronger restrictions on the types of products website operators can market and advertise to minors. The main sentiment and policy initiative behind this new law is clearly well-intentioned: to allow minors, who are prone to posting rash and often emotionally charged content online without any awareness or concern for the future implications of that decision, to remove the harmful or offending content, whether the regret comes five minutes later or years later.

The first part of this law, the “internet eraser,” applies to two main categories of web providers: (1) those that operate websites, provide online services, or offer mobile applications directed at minors, and (2) those same providers that have actual knowledge that a minor is using their site, service, or mobile application. This eraser, however, does not require the website operator to delete the information from its servers. Instead, an operator will be deemed to have complied with the removal requirement simply by ensuring that the content is no longer visible to other users (see the sketch below). As with many laws, there are several notable exceptions. Examples include posts made anonymously by minors and any content for which the minor received compensation (or other consideration); in addition, only minors who are registered users of a site, service, or application may seek to have their content removed.
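The “no longer visible” standard maps naturally onto what developers call a soft delete. A minimal sketch follows; the table and column names are our illustrative assumptions:

```python
# Minimal sketch (illustrative schema): the "eraser" can be satisfied by
# hiding content from other users rather than purging it from the server.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE posts (
    id INTEGER PRIMARY KEY,
    author_id INTEGER,
    body TEXT,
    hidden INTEGER DEFAULT 0)""")
conn.execute("INSERT INTO posts (author_id, body) VALUES (1, 'regretted post')")

def remove_post(post_id: int, requester_id: int) -> None:
    # Soft delete: flag the row as hidden; the record itself is retained.
    conn.execute("UPDATE posts SET hidden = 1 WHERE id = ? AND author_id = ?",
                 (post_id, requester_id))

def visible_posts() -> list:
    # Other users only ever see rows that are not flagged as hidden.
    return conn.execute("SELECT id, body FROM posts WHERE hidden = 0").fetchall()

remove_post(1, requester_id=1)
assert visible_posts() == []  # hidden from other users, still on the server
```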

The second part of this law limits the marketing and advertising of specified products to minors on websites and mobile devices. Predictably, those specified products include certain dietary supplements, permanent tattoos, alcohol, firearms, fireworks, lottery tickets, and e-cigarettes. A website operator will be deemed to be in compliance with this new law if it has properly notified its advertising services that its site, service, or application is directed toward minors. Essentially, if a company could not sell a product face-to-face to a minor, under this new law it cannot solicit or sell that same product to a minor online.

This law will become effective on January 1, 2015, and legal experts from around the country are already debating whether it represents a direct collision of privacy law and the First Amendment. Additionally, as with all cyber laws, an enormous amount of ambiguity remains. For example, does the person need to be a minor when requesting removal, or can an adult retroactively ask for removal of a posting made while a minor? Will this law apply to all websites in the country, or just to those based in California? As currently written, the law does not include a time frame within which the operator must delete the requested content. Moreover, the scope of the content to be deleted remains unclear, and there is no penalty for an operator that does not comply with a request.

Stay tuned to see how the implementation and enforcement of this law plays out. For now, review our prior postings about the best ways to navigate social media and the workplace, as well as the limitations of privacy on Facebook.


WARNING: Your Account Has Been Compromised – California Expands Existing Data Privacy Breach Law

By Angelina Bruno-Metzger

Governor Jerry Brown recently signed bill SB46 into law, amending California’s data breach notification law by expanding the definition of “personal information.” The current law requires alerts to be sent to consumers when a database has been breached in a way that could expose a consumer’s social security number, driver’s license number, credit card number(s), or medical/health insurance information. Under the new amendment, website operators will also be obligated to send out privacy notifications after the breach of a “user name or email address, in combination with a password or security question and answer that would permit access to an online account,” even when no other personal information has been breached. Currently, as with the new “Do Not Track” law, California is the only state whose breach notification statute covers breaches involving only the loss of a user name or email address.

This law will go into effect on January 1, 2014, and a company’s notification obligations differ depending on the type of personal data breached. When the security breach does not involve login credentials for an email account, the operator may notify affected customers through a “security breach electronic form.” This form directs the person whose personal information has been compromised to immediately change his or her password and security question(s) or answer(s), and to take appropriate precautionary measures with all other online accounts that use the same user name or email address and password. However, when the security breach does involve login credentials for an email account, the operator, logically, may not provide notification to that email address. Instead, the operator may provide “clear and conspicuous notice delivered to the resident online when the resident is connected to the online account from an IP address or online location from which the person or business knows the resident customarily accesses the account.”

As with the other recently passed cyber laws, the implications of this new data privacy breach law will likely be felt nationally and internationally, as almost every company that offers online personalized services requires consumers to create a username and password. There remains some uncertainty about exactly which businesses must abide by this new regulation, since not all companies can readily, if at all, confirm that affected users are California residents (sharing of home addresses is often optional); it is therefore best for businesses to follow the old “better safe than sorry” adage. The two best ways companies can come into compliance with this regulation are to: (1) ensure that all usernames, passwords, security questions, and answers are stored in an encrypted form (see the sketch below), and (2) update existing protocols, or create new internal protocols, consistent with this law’s reporting requirements.
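On point (1), the statute speaks of storage in “encrypted form”; in practice, passwords and security answers are typically stored as salted, one-way hashes rather than reversibly encrypted. A minimal sketch using only Python’s standard library (the iteration count and salt size are our illustrative choices):

```python
# Minimal sketch (iteration count and salt size are illustrative choices):
# store credentials as salted one-way hashes so a database theft does not
# expose usable passwords or security answers.
import hashlib
import hmac
import os

def hash_secret(secret: str) -> tuple:
    """Return (salt, digest) for a password or security answer."""
    salt = os.urandom(16)  # unique random salt per stored secret
    digest = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 600_000)
    return salt, digest

def verify_secret(secret: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_secret("correct horse battery staple")
assert verify_secret("correct horse battery staple", salt, digest)
assert not verify_secret("wrong guess", salt, digest)
```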

See OlenderFeldman LLP information privacy lawyer Aaron Messing’s predictions for the data privacy field in 2013, and compare them with this new data privacy breach law in California.

Sharing is Caring, but Not Always in the Case of Cookies – CA Governor Signs the Country’s First “Do Not Track” Disclosure Bill

By Angelina Bruno-Metzger

On September 27, 2013, bill AB370, now known as the “Do Not Track” disclosure law (“DNT”), was officially signed into law by Governor Jerry Brown. This law imposes new and additional disclosure requirements on commercial websites and online services that collect personally identifiable information (“PII”) about users. “Do Not Track” is an amendment to the California Online Privacy Protection Act (“CalOPPA”), which originally required websites, as well as mobile applications, to explicitly and conspicuously post their privacy policies. The posted privacy policy must include what categories of PII are being collected and what third parties will also have access to that information. Under this latest amendment, website operators (and mobile applications) must: (1) disclose and explain their privacy policies and how they respond to DNT signals, and (2) disclose applicable third-party data collection and use policies.

It should, however, be noted that this law does not explicitly prohibit tracking or affirmatively require a website operator to honor a consumer’s do-not-track request; it simply mandates that operators disclose their privacy policies. Additionally, the lack of a clear definition of “do not track” could prove equally problematic when it comes to enforcement, since the new law does not define what it is regulating. A clear definition will most likely emerge through enforcement and adjudication of the law, as well as policy statements.
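For context, the do-not-track signal itself is simply an HTTP request header (“DNT: 1”) sent by the browser. A minimal sketch of detecting it on the server side, assuming a Flask application (the route and response text are illustrative):

```python
# Minimal sketch (assumes Flask; the route and responses are illustrative).
# AB370 requires disclosing how a site responds to the signal, not honoring
# it; this handler merely detects the header so the site's posted policy
# can be applied.
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    dnt = request.headers.get("DNT")  # "1" when the user enabled do-not-track
    if dnt == "1":
        # Apply whatever response the posted privacy policy promises,
        # e.g., refrain from setting third-party tracking cookies.
        return "DNT signal received; handled per our posted policy."
    return "No DNT signal; standard practices per our posted policy."

if __name__ == "__main__":
    app.run()
```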

This “Do Not Track” law mandates that all companies have a complete technical understanding of their websites, as well as the third parties that are allowed to operate on the site, so that each company can fully disclose its data collection practices. While technically speaking this law would only require companies to make the disclosures to California residents, it will likely have a national, if not international, effect, as most companies usually do not craft different policies for specific states, and cannot know whether a user is a California resident. This new law will go into effect on January 1, 2014, and any operator that fails to provide the required disclosures will be given a warning and 30 days to comply or else be found in violation of the new law. Failure to comply, whether that failure is knowing and willful or negligent and material, could result in a $2,500 fine under California’s Unfair Competition Law.

Recently, California has been boldly breaking new ground in online data privacy, and the “Do Not Track” law is no exception: it is the first of its kind in the country. For a more complete understanding of what online tracking is and how it works, please see our previous post, Behavioral Advertising and “Do Not Track”: Navigating the Privacy Minefield.