Technology can impact the way we work, play, communicate and live, and “big data” analysis – the processing of large amounts of data in order to gain actionable insights – has the ability to radically alter society by identifying patterns and traits that would otherwise go undiscovered. This data, however, can raise significant privacy concerns in the context of a merger or acquisition.

Dun & Bradstreet interviewed us regarding various Tips for Customer Data Management During a Merger or Acquisition. We found the topic so interesting that we decided to expand on it a bit more.

As background, it is important to consider that there are three types of M&A transactions affecting data: stock transactions, mergers, and sales of assets. In a stock transaction, the owners of a company sell their stock to a new owner, but the entity itself remains intact. This means business as usual from the entity’s standpoint, and there are generally no data or confidentiality issues.

By contrast, in a merger (where the target is not the surviving entity) or in an asset transaction, the original entity itself goes away, which means all of the assets in that entity have to be transferred, and there is a change of legal title to those assets (including to any data) which can have legal implications. For example, if a party consents to the use of their data by OldCo, and OldCo sells all of its assets to NewCo, does that party’s consent to use data also transfer to NewCo?

In a merger, data needs to be appropriately assigned and transferred, which often has privacy implications. Companies generally have privacy policies explaining how they collect and use consumers’ personal information. These policies often contain language stating that the company will not give such information to any third party without the consumer’s consent. In such situations, the transfer of data must be done in accordance with the written commitments and representations made by that company (which may vary if different representations were made to different categories of individuals), and may require providing notice to, or obtaining consent from, consumers (which, depending on the scope of the notice or consent required, can be an arduous task).

Companies also generally maintain employee data and client data in addition to consumer data. This information needs to be handled in accordance with contractual obligations, as well as legal obligations. National and foreign laws may also regulate the transfer of certain information. For example, in transborder transactions, or for transactions involving multinational companies, it is extremely important to ensure that any transfer of data complies with the data privacy and transborder transfer obligations applicable in all of the relevant jurisdictions.

Obligations may arise even during the contemplation of a merger, or during the due diligence process, where laws may impact the ability of companies to disclose certain information and documentation. For example, in the United States, financial companies are required to comply with the Sarbanes-Oxley Act and the Gramm-Leach-Bliley Act, which govern the controls required to protect certain types of data, and companies in the health care and medical fields are often required to comply with the Health Insurance Portability and Accountability Act.

In the multinational / crossborder context, businesses may run into challenges posed by conflicting multi-jurisdictional data protection laws, which may prevent routine data flows (such as phone lists or other employee data) to countries that are deemed to have insufficient data protection laws, or require that centralized databases comply with the laws in multiple jurisdictions. Additionally, employee rights to access and amend data, as well as requirements to obtain consent before collection and limitations on maintenance of data may cause challenges as well.

So what should companies do when contemplating or navigating a merger or acquisition? First, companies should determine what information they have. Next, they must understand that information: the circumstances under which it was collected, and the rights and obligations they have with respect to it. Companies should then determine what ability they have to transfer the information, what consents or approvals are necessary to do so, and the potential impact of a transfer on the various stakeholders.

The bottom line? Any technology, and big data in particular, can be put to both good and bad uses. It is important that as companies gather data about individuals, that information be used in accordance with existing laws and regulations governing data use, as well as in a way that respects the privacy of the individuals to whom the data pertains.

Safeway To Settle Allegations Of Privacy Breach

On December 31, 2014, the second-largest U.S. grocery chain, Safeway, was ordered to pay a $9.87 million penalty as part of a settlement with California prosecutors related to the improper dumping of hazardous waste and the improper disposal of confidential pharmacy records containing protected health information in violation of California’s Confidentiality of Medical Information Act (“CMIA”).

This settlement comes after an investigation revealed that for over seven years hazardous materials, such as medicine and batteries, had been “routinely and systematically” sent to local landfills that were not equipped to receive such waste. Additionally, the investigation revealed that Safeway failed to protect the confidential medical and health records of its pharmacy customers by disposing of records containing patients’ names, phone numbers, and addresses without shredding them, putting these customers at risk of identity theft.

Under the settlement agreement, while Safeway admits no wrongdoing, it will pay (1) a $6.72 million civil penalty, (2) $2 million for supplemental environmental projects, and (3) $1.15 million in attorneys’ fees and costs. In addition, pursuant to the agreement, Safeway must maintain and enhance its customer record disposal program to ensure that customer medical information is disposed of in a manner that preserves customers’ privacy and complies with CMIA.

“Today’s settlement marks a victory for our state’s environment as well as the security and privacy of confidential patient information throughout California,” said Alameda County District Attorney Nancy O’Malley. Alameda County Assistant District Attorney Kenneth Misfud said the case against Safeway spotlights the importance of healthcare entities, such as pharmacy chains and hospitals, properly shredding, or otherwise “making indecipherable,” patient and other consumer personal information prior to disposal.

However, despite the settlement, customers whose personal information was improperly disposed of will have a difficult time suing for a “pure” loss of privacy due to Safeway’s violation of CMIA. In Sutter Health v. Superior Court, the California Court of Appeal held that confidential information covered by CMIA must be “actually viewed” for the statutory penalty provisions of the law to apply. So, parties bringing claims under CMIA will now have to allege, and ultimately prove, that their confidential information (1) changed possession in an unauthorized manner, and (2) was actually viewed (or, presumably, used) by an unauthorized party.

The takeaway from Safeway’s settlement: ensure that your customers are not at risk of data breaches and identity theft, and thereby protect your company from the multimillion-dollar consequences that improper disposal can bring. If you have any questions about complying with privacy and health information laws, please feel free to contact one of our certified privacy attorneys at OlenderFeldman LLP.

By: Aaron Krowne

In a major recent case testing California’s medical information privacy law, the Confidentiality of Medical Information Act, or CMIA (California Civil Code § 56 et seq.), the Third District Court of Appeal in Sutter Health v. Superior Court held on July 21, 2014 that confidential information covered by the law must be “actually viewed” for the statutory penalty provisions of the law to apply. The implication of this decision is that it just got harder for consumers to sue for a “pure” loss of privacy due to a data breach in California, and possibly beyond.

Not So Strict

Previously, CMIA was assumed to be a strict liability statute: even in the absence of actual damages, a covered party that “negligently released” confidential health information was still subject to a $1,000 nominal penalty. That is, if a covered health care provider or health service company negligently handled customer information, and that information was subsequently taken by a third party (e.g., through theft of a computer or data device containing such information), that in itself triggered the $1,000 per-instance (and thus, per-customer-record) penalty. There was no suggestion that the thief (or other recipient) of the confidential health information needed to see, or do anything with, such information. Indeed, plaintiffs had previously brought cases under such a “strict liability” theory and succeeded in the application of CMIA’s $1,000 penalty.

Sutter Health turns that theory on its head, with dramatically different results for consumers and California health-related companies.

Sutter was looking at a potential $4 billion penalty stemming from the October 2011 theft of a computer, containing 4 million unencrypted client records, from its offices. Sutter’s computer was password-protected, but without encryption of the underlying data that measure is easily defeated. Security at the office was light, with no alarm or surveillance cameras. Believing this to be “negligent,” some affected Sutter customers sued under CMIA in a class action. Given the potential size of the total penalty, the stakes were high.

The Court not only ruled against the Sutter customers, but dismissed the case on demurrer, meaning that the Court determined the case was deficient on the pleadings because the Plaintiffs “failed to state a cause of action.” The main reason, according to the Court, was that Plaintiffs failed to allege that an unauthorized person actually viewed the confidential information, and therefore there was no breach of confidentiality, as required under CMIA. The Court elaborated that under CMIA “[t]he duty is to preserve confidentiality, and a breach of confidentiality is the injury protected against. Without an actual confidentiality breach there is no injury and therefore no negligence…”.

The Court also introduced the concept of possession, which is absent in CMIA itself, to delimit its new theory interpreting CMIA, saying: “[t]hat [because] records have changed possession even in an unauthorized manner does not [automatically] mean they have been exposed to the view of an unauthorized person.” So, plaintiffs bringing claims under CMIA will now have to allege, and ultimately prove, that their confidential information (1) changed possession in an unauthorized manner, and that (2) it was actually viewed (or, presumably, used) by an unauthorized party.

The Last Word?

This may not be the last word on CMIA, and certainly not on the general issue of the burden of proof of harm in consumer data breaches. The problem is that it is extremely difficult to prove that anything nefarious has actually happened with sensitive consumer data post-breach, short of catching the perpetrator and getting a confession, or actually observing the use or sale of the data to a third party. Even positive results detected through credit monitoring, such as attempts by unauthorized third parties to use credit cards, do not conclusively prove that a particular breach was the cause of such unauthorized access.

The Sutter court avers, in supporting its ruling, that we do not actually know whether the thief in this case simply stole the computer, wiped the hard drive clean, and sold it as a used computer, in which case there would be no violation of CMIA. Yet, logically, the opposite may just as well have happened: retrieval of the customer data may very well have been the actual goal of the theft. In an environment where sensitive consumer records can fetch as much as $45 each (totaling $180 million for the Sutter customer data), it seems unwise to rely on the assumption that thieves will simply not bother to check for valuable information on stolen corporate computers and digital devices.

Indeed, the Sutter decision perhaps raises as many questions as it answers about where to draw the line for a “breach of confidential information.” To wit: presumably, a hacker downloading unencrypted information would still qualify under CMIA as so interpreted. But then, by what substantive rationale does the physical removal of a hard drive in this case not qualify? Additionally, how is it determined whether a party actually looked at the data, and precisely who looked at it?

Further, the final chapter on the Sutter breach may not yet be written: the data may still be (or may turn out to have been) put to nefarious use, in which case the court’s ruling will seem premature. Thus, there is likely to be some pushback against Sutter, to the extent that consumers do not accept the lack of punitive options in “open-ended” breaches of this nature, and lawmakers actually intend consumer data-handling negligence laws to have some “bite.”

Conclusion

Naively, it would seem under the Sutter Court’s interpretation that companies dealing with consumer health information have a “blank check” to treat that information negligently, so long as the actual viewing (and, presumably, use) of that information by unauthorized persons is a remote possibility. We would caution against this assumption. First, as noted above, there may be some pushback (judicial, legislative, or in terms of public response) to Sutter’s strict requirement of proof of viewing of breached records. But more importantly, there is simply no guarantee that exposed information will not be released and put to harmful use, or that sufficient proof of such use will not surface for use in consumer lawsuits.

One basic lesson of Sutter is that, while the company dodged a bullet thanks to a court’s re-interpretation of the law, it (and its customers) would have been vastly safer had it simply used encryption. More broadly, Sutter should have had, and implemented, a better data security policy. Companies dealing with customers’ health information (in California and elsewhere) should take every possible precaution to secure this information.

To avoid putting your company and your customers at risk of data breaches, contact a certified privacy attorney at OlenderFeldman to make sure your company’s data security policy addresses all applicable health information laws.

The Supreme Court of New Jersey held that individuals have a reasonable expectation of privacy in their cell phone location data under the NJ state constitution and that “cell-phone location information, which users must provide to receive service, can reveal a great deal of personal information about an individual.”

In a turn that is becoming less and less surprising given the trailblazing nature of the New Jersey Supreme Court, the Court recently ruled in State v. Thomas W. Earls that police must obtain search warrants before obtaining personal tracking information for alleged perpetrators from cell phone providers. While this ruling has obvious implications for law enforcement professionals, from a broader perspective the decision impacts, and most importantly protects, the privacy of individuals (and related businesses) who conduct business and their personal lives on cell phones throughout the nation. The decision underscores a continuing tension between government intrusion into personal privacy and the advancement of the digital age vis-à-vis the use of smartphones to conduct day-to-day business. While various states throughout the country have been toying with the idea of legislation that would require probable-cause warrants before access to cell phone data is granted, the New Jersey Supreme Court’s ruling puts New Jersey at the forefront of addressing this issue.

While the facts of the case are not specifically relevant, and pale in impact when compared to the implications of the decision, for completeness: the case involved burglaries in Middletown, New Jersey. In investigating the burglaries, law enforcement officials used data received from T-Mobile to track the stolen merchandise, including a cellular phone, which ultimately led to the arrest of Mr. Earls. In protecting the rights of Mr. Earls and overturning the decision of the lower courts, the New Jersey Supreme Court matter-of-factly ruled that individuals can and should “reasonably expect that their personal information will remain private” when entering into a contract with a cell phone carrier. In explicitly recognizing a constitutionally based right to privacy in the location of one’s cell phone, this decision builds on last year’s ruling by the United States Supreme Court in United States v. Jones, 565 U.S. 400 (2012). In that case, the United States Supreme Court held that the government’s attachment of a GPS device to a vehicle, and the use of that device and its data to monitor the vehicle’s movements, constitutes a search under the Fourth Amendment and, as such, is subject to its protections.

Given the capabilities of cell phones, the New Jersey Supreme Court declared that, in essence, a cell phone is a GPS device. In fact, the Court went as far as to say that using a cell phone for locational purposes can “be far more revealing than acquiring toll billing, bank, or Internet subscriber records. It is akin to using a tracking device and can function as a substitute for 24/7 surveillance without police having to confront the limits of their resources.” Interestingly, the Court’s decision also raises the possible impact of this data on access to protected health information (PHI). The suggestion is that cell phone tracking could theoretically be used to determine when, and from whom, a patient is receiving treatment, given that the location of a medical facility is easily discernible.

This is only the tip of the iceberg. While the Court does a reasonable job of spinning out possible scenarios in which the privacy of a cell phone owner could be harmed by warrantless intrusions, the Court speaks in a targeted and hypothetical manner. Though the Court tempers its ruling with an “emergency aid” exception to the warrant requirement, individuals who use cell phones for every aspect of their daily lives should recognize the possibility that this data may be mined. As cell phone (and data) use naturally increases, it will be crucial to place tight restrictions on third-party use of cell phone information: companies will likely try to monetize it, and any unauthorized use of this data implicates the rights and interests of privacy as a whole.

OlenderFeldman LLP has significant experience dealing with the privacy and business-related issues implicated in the decision discussed above. If you have any questions about the legal or practical implications of this case, please contact Christian Jensen, Esq. (cjensen@olenderfeldman.com) at (908) 964-2485.

Protected Health Information (PHI)

Protected Health Information Privacy Concerns are Rapidly Increasing

OlenderFeldman LLP contributed to the recently released report entitled, The Financial Impact of Breached Protected Health Information: A Business Case for Enhanced PHI Security, which can be downloaded for free at http://webstore.ansi.org/phi. As the press release correctly notes, protected health information (PHI) “is now more susceptible than ever to accidental or impermissible disclosure, loss, or theft. Health care organizations (providers, payers, and business associates) are not keeping pace with the growing risks of exposure as a result of electronic health record adoption, the increasing number of organizations handling PHI, and the growing rewards of PHI theft.”

The report provides a 5-step method for assessing security risks and evaluating the “at risk” value of an organization’s PHI, including estimating overall potential data breach costs, and sets out a methodology for determining the appropriate level of investment needed to strengthen privacy and security programs and reduce the probability of a breach occurring.