The “Internet of Things” is rapidly expanding, and most households now contain at least one physical object that automatically collects and exchanges data wirelessly. Manufacturers of smart devices need to ensure that security vulnerabilities and privacy concerns are addressed promptly in order to avoid the scrutiny of the Federal Trade Commission.

Law360 interviewed privacy experts, including our very own Mike Feldman, on ways for smart device makers to ensure that their information security and privacy practices meet industry standards. Mike recommended ensuring that companies maintain robust information security and privacy policies, that employees are trained to identify risks, and that they are prepared to handle data breaches and other disasters. As Mike noted:

Small companies used to believe that hackers wouldn’t be interested in their customers’ data, but experience has shown that this is no longer a safe assumption, Feldman said.

“Now almost every industry has been hacked,” Feldman said. “The defense that ‘I thought it wouldn’t happen to me’ isn’t really a defense.”

Read the whole article: 3 Ways Internet Of Things Makers Can Avoid The FTC’s Ire (subscription may be required)

Safeway To Settle Allegations Of Privacy Breach

On December 31, 2014, the second-largest U.S. grocery chain, Safeway, was ordered to pay a $9.87 million penalty as part of a settlement with California prosecutors related to the improper dumping of hazardous waste and the improper disposal of confidential pharmacy records containing protected health information in violation of California’s Confidentiality of Medical Information Act (“CMIA”).

This settlement comes after an investigation revealed that for over seven years hazardous materials, such as medicine and batteries, had been “routinely and systematically” sent to local landfills that were not equipped to receive such waste. Additionally, the investigation revealed that Safeway failed to protect the confidential medical and health records of its pharmacy customers by disposing of records containing patients’ names, phone numbers, and addresses without shredding them, putting these customers at risk of identity theft.

Under this settlement agreement, while Safeway admits to no wrongdoing, it will pay (1) a $6.72 million civil penalty, (2) $2 million for supplemental environmental projects, and (3) $1.15 million in attorneys’ fees and costs. In addition, pursuant to the agreement, Safeway must maintain and enhance its customer record disposal program to ensure that customer medical information is disposed of in a manner that preserves the customer’s privacy and complies with CIMA.

“Today’s settlement marks a victory for our state’s environment as well as the security and privacy of confidential patient information throughout California,” said Alameda County District Attorney Nancy O’Malley. Alameda County Assistant District Attorney Kenneth Misfud said the case against Safeway spotlights the importance of healthcare entities, such as pharmacy chains and hospitals, properly shredding, or otherwise “making indecipherable,” patient and other consumer personal information prior to disposal.

However, despite the settlement, customers whose personal information was improperly disposed of will have a difficult time suing for a “pure” loss of privacy due to Safeway’s violation of CMIA. In Sutter Health v. Superior Court, the California Court of Appeal held that confidential information covered by CMIA must be “actually viewed” for the statutory penalty provisions of the law to apply. So, parties bringing claims under CMIA will now have to allege, and ultimately prove, that their confidential information (1) changed possession in an unauthorized manner, and (2) was actually viewed (or, presumably, used) by an unauthorized party.

The takeaway from Safeway’s settlement is to ensure that your customers are not put at risk of data breaches and identity theft, thereby protecting your company from the million-dollar consequences that can result from failing to do so. If you have any questions about complying with privacy and health information laws, please feel free to contact one of our certified privacy attorneys at OlenderFeldman LLP.

By: Aaron Krowne

A heated battle is currently raging over the general province of federal regulators in businesses’ privacy and data security practices. We are referring to the pending case of FTC v. Wyndham Worldwide Corp., which is much-watched in the data security world. It pits, on one side, the Federal Trade Commission (“FTC”), with its general authority to prevent “unfair or deceptive trade practices,” against Wyndham Worldwide Corp. (“Wyndham”), a hotel chain owner that was recently hit by a series of high-profile data breaches. The main question to be decided is: does the FTC’s general anti-“unfair or deceptive” authority translate into a discretionary (as opposed to regulatory) power over privacy and data security practices?

Background of the Case

On July 30, 2014, FTC v. Wyndham was accepted on appeal to the Third Circuit, after Wyndham failed in its attempt to have the case dismissed. Wyndham was granted an interlocutory appeal, meaning that the Circuit Court considered the issues raised important enough to the outcome of the case to hear an appeal immediately.

The case stems from a series of data breaches in 2008 and 2009 resulting from the hacking of Wyndham computers. It is estimated that the personal information of upwards of 600,000 Wyndham customers was stolen, resulting in over $10 million lost to fraud (i.e., credit card fraud).

The FTC filed suit against Wyndham for the breach under Section 5 of the FTC Act, alleging (1) that the breach was due to a number of inadequate security practices and policies, and was thus unfair to consumers; and (2) that this conduct was also deceptive, as it fell short of the assurances given in Wyndham’s privacy policy and its other disclosures to consumers.

The security inadequacies cited by the FTC present a virtual laundry list of cringe-worthy data-security faux pas, including:

  • failing to employ firewalls;
  • permitting storage of payment card information in clear, readable text;
  • failing to ensure that Wyndham-branded hotels implemented adequate information security policies and procedures before connecting their local computer networks to the Wyndham Hotels and Resorts corporate network;
  • permitting Wyndham-branded hotels to connect unsecure servers to the network;
  • utilizing servers with outdated operating systems that could not receive security updates and thus could not remedy known vulnerabilities;
  • permitting servers to have commonly known default user IDs and passwords;
  • failing to employ commonly used methods to require user IDs and passwords that are difficult for hackers to guess;
  • failing to adequately inventory computers connected to the network;
  • failing to monitor the network for malware used in a previous intrusion; and
  • failing to restrict third-party access.

Most people with basic knowledge of data security would agree that these alleged practices are highly disconcerting and fall below commonly accepted industry standards, and thus that anyone engaging in such practices should be exposed to legal liability for any resulting damage. The novel development in this case is the FTC’s construction of such consumer-unfriendly practices as “unfair” under Section 5 of the FTC Act, which thus brings them under its purview for remedial and punitive action.

Wyndham resisted the FTC’s enforcement action by attempting to dismiss the case, arguing (1) that poor data security practices are not “unfair” under the FTC Act, and that (2) regardless, the FTC must make formal regulations outlining any data security practices to which its prosecutorial power applies, before filing suit.

Wyndham’s dismissal attempt based on these arguments was resoundingly rejected by the District Court. The Court’s primary rationale was, in effect, that the FTC Act, with Section 5’s “unfair and deceptive” enforcement power, was intentionally written broadly, implying that the FTC has domain over any area of corporate practice significantly impacting consumers. Additionally, this broad drafting implies that the power is largely discretionary, and that discretion would be defeated by requiring it always to be reduced to detailed regulations in advance.

Addressing the “unfairness” question directly, the FTC argued (and the District Court agreed) that, in the data-security context, “reasonableness [of the practices] is the touchstone” for Section 5 enforcement, and that, particularly, “unreasonable data security practices are unfair.” As to defining unreasonable security practices, Wyndham advocated a strict “ascertainable certainty” standard (i.e., specific regulations set out in advance), but the District Court (again, siding with the FTC) shot back that “reasonableness provides ascertainable certainty to companies.” This argument seems almost circular and fails to define what exactly is “reasonable” in this context. But the District Court observed that in other areas of federal enforcement (e.g., the National Labor Relations Board and the Occupational Safety and Health Act), an unwritten “reasonableness” standard is routinely used in the prosecution of cases. Typically, in such cases, reference is made to prevailing industry standards and practices, which, as the District Court observed, Wyndham itself referenced in its privacy policy.

Fears & Concerns

The upshot of the case is that if the FTC’s assertion of the power to enforce “reasonable” data security practices is affirmed, all privacy and data security policies must be “reasonable.” This will in turn mean that such policies must not be “unfair” generally, and also not “deceptive” relative to companies’ privacy policies. In effect, the full force of federal law, policed by the FTC, will stand behind privacy and data security policies, albeit in a very broad and hard-to-characterize way. This is in stark contrast to state privacy and data security laws (such as Delaware’s, California’s or Florida’s), which generally consist of more narrowly tailored, statutorily delimited proscriptions.

While consumers and consumer advocates will no doubt be heartened by the Court’s broad read on the FTC’s protective power in the area of privacy and data security, not surprisingly, there are fears from both businesses and legal observers about such a new legal regime. Some of these concerns include:

  • Having the FTC “lurking over the shoulders” of companies to “second guess” their privacy and security policies.
  • A situation where the FTC is, in effect, “victimizing the victim” – prosecuting companies after they’ve already been “punished” by the direct costs and public fallout of a data breach.
  • Lack of a true industry standard against which to define “reasonable” privacy and data security policies.
  • A “checklist culture” (as opposed to a risk-based data security approach) as the FTC’s de facto data security requirements develop through litigation.
  • A wave of class-action lawsuits emboldened by FTC “unfair and deceptive” suits.
  • Uncertainty: case-by-case consent orders that provide little or no guidance to non-parties.

These concerns are definitely real, but likely will not result in much (if any) push-back in Wyndham’s favor in the District Court. That is because, while the FTC may not have asserted power over data security practices in the past (as Wyndham made sure to point out in its arguments), there is little in the FTC’s governing charter or relevant judicial history to prevent it from doing so now. Simply put, regulatory agencies can change their “minds,” including regarding what is in their regulatory purview, so long as the field in question is not explicitly beyond it. Given today’s new reality of omnipresent social networks and sensitive, cloud-resident consumer data, we can hardly blame the FTC for re-evaluating its late-90s-era stance.

No Going Back

Uncle Sam is coming, in a clear move to regulate privacy and data security and protect consumers. As highlighted recently in the New York Attorney General’s report on data breaches, the pressure is only growing to do something about the problem of dramatically-increasing data breaches. As such, it was only a matter of time until the Federal Government responded to political pressure and “got into the game” already commenced by the states.

Thus, while the precise outcome of FTC v. Wyndham cannot be predicted, it is overwhelmingly likely that the FTC will “get what it wants” broadly speaking; either with the upholding of its asserted discretionary power, or instead, by being forced to pass more detailed regulations on privacy and data security.

Either way, this case should be a wake-up call to businesses, many of which are in fact already covered by state laws relevant to privacy and data security, but which perhaps haven’t felt the inter-jurisdictional litigation risk was significant enough to ensure their policies and practices comply with those of the strictest states (such as California and Florida) or even other nations (such as Canada).

The precise outcome of FTC v. Wyndham notwithstanding, the federal government will henceforth be looking more closely at all data breaches in the country, particularly major ones, and may be under pressure to act quickly and stringently in response to public outcry. But “smaller” breaches will most certainly be fair game too; thus, small- and mid-sized businesses should take heed as well. That means getting in touch with a certified OlenderFeldman privacy and data security attorney to make sure your business’s policies and procedures genuinely protect you and your users and customers… and put you ahead of the blowing “Wynds of change” of federal regulation.

By: Aaron Krowne

On July 14, 2014, the New York Attorney General’s office (“NY AG”) released a seminal report on data breaches, entitled “Information Exposed: Historical Examination of Data Breaches in New York State” (the “Report”). The Report presents a wealth of eye-opening (and sobering) information on data breaches in New York and beyond. It is primarily based upon the NY AG’s own analysis of data breach reports received during the first eight years (2006 through 2013) of the State’s data breach reporting law (NY General Business Law § 899-aa). The Report also cites extensively to outside research, providing a national and international picture of data breaches. Its primary finding is that data breaches, somewhat unsurprisingly, are a rapidly growing problem.

A Growing Menace

The headline statistic of the Report is its finding that data breaches in or affecting New York tripled between 2006 and 2013. During this time frame, 22.8 million personal records of New Yorkers were exposed in nearly 5,000 breaches, affecting more than 3,000 businesses. The “worst” year was 2013, with 7.4 million records exposed, mainly due to the Target and LivingSocial “mega-breaches.” However, while the Report warned that such mega-breaches appear to be a growing trend, businesses of all sizes are affected and at risk.

The Report revealed that hacking was responsible for 43% of breaches and 64% of the total records exposed. Other major causes of breaches include “lost or stolen equipment or documentation” (25% of breaches), “employee error” (21%), and “insider wrongdoing” (11%). It is thus important to note that the majority of breaches still originate internally. However, hacking has grown to become the dominant cause of breaches since 2009, which, not coincidentally, is the same year that “crimeware” source code was released and began to proliferate. Hacking was responsible for a whopping 96.4% of the New York records exposed in 2013 (again, largely due to the mega-breaches).

The Report notes that retail services and health care providers are “particularly” vulnerable to data breaches. The following breaks down the number of entities in a particular sector that suffered repeated data breaches: 54 “retail services” entities (a “favorite target of hackers”, per the Report), 31 “financial services” entities, 29 “health care” entities, 27 “banking” entities, and 20 “insurance” entities.

The Report also points out that these breach statistics are likely on the low side. One reason is that New York’s data breach law doesn’t cover all breaches. The reporting requirement is triggered only when both of two types of information are compromised: (1) a name, number, personal mark, or other identifier that can be used to identify a natural person, combined with (2) a social security number, government ID or license number, account number, or credit or debit card number along with its security code. If only one piece of information is compromised, no report is required. Yet the compromise of even one piece of data (e.g., a social security number) can have the same effect as a “breach” under the law, since actual damage to the consumer is still possible (particularly if the breached information can be combined with complementary information obtained elsewhere). Further, the full impact of a reported breach may be unknown, leading the breach to be underestimated.

Real Costs: Answering To The Market

Though New York’s data breach law allows the AG to bring suits for actual damages and statutory penalties for failure to notify (notification of all affected consumers and the NY AG’s office is required, and for large breaches, consumer reporting agencies as well), such awards are likely to be minor compared with the market impact and direct costs of a breach. The Report estimates that in 2013, breaches cost New York businesses $1.37 billion, based on a per-record cost estimate of $188 (breach cost estimates are from the data breach research consultancy the Ponemon Institute). In 2014, that per-record estimate has already risen to $201, and the cost per hacked record is higher still, at $277. The total average cost of a breach is currently $5.9 million, up from $5.4 million in 2013. These amounts represent only costs incurred by the businesses hit, including expenses such as investigation, communications, free consumer credit monitoring, and reformulation and implementation of data security measures. Costs borne by the consumers themselves are not included, so this is, once again, an underestimate.

These amounts also do not include market costs, for which the Target (2013) and Sony PlayStation (2011) mega-breaches are particularly sobering examples. Target experienced a 46% drop in quarterly profit in the wake of the massive breach of its customers’ data, and Sony estimates it lost over $1 billion. Both also suffered significant contemporaneous declines in their stock prices.

Returning to direct costs, the fallout continues: on August 5, 2014, Target announced that the costs of the 2013 breach would exceed its previous estimates, coming in at nearly $150 million.
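The per-record arithmetic behind these estimates is easy to sketch. The snippet below is illustrative only: it applies a naive linear model (records times per-record cost) using the Report's figures, whereas real cost studies treat mega-breaches separately.

```python
# Rough breach-cost arithmetic using the Report's per-record figures.
# Illustrative only: a naive linear model; actual cost studies
# exclude mega-breaches from per-record averages.

PER_RECORD_COST = {2013: 188, 2014: 201}  # USD, average across causes
HACKED_RECORD_COST_2014 = 277             # USD, hacking breaches only

def estimated_cost(records_exposed: int, year: int) -> int:
    """Estimate total breach cost as records x per-record cost."""
    return records_exposed * PER_RECORD_COST[year]

# ~7.4 million New York records exposed in 2013 gives roughly
# $1.39 billion, in the neighborhood of the Report's $1.37 billion.
print(estimated_cost(7_400_000, 2013))
```

The small gap between the naive product and the Report's $1.37 billion figure reflects the fact that published totals are modeled per incident, not derived by simple multiplication.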

Practices

The Report’s banner recommendation, in the face of all the above, is to have an information security plan in place, especially given that 57% of breaches are primarily caused by “inside” issues (i.e., lost/stolen records, employee error, or wrongdoing) that directly implicate information security practices. An information security plan should specifically include:

  • a privacy policy;
  • restricted and controlled access to records;
  • monitoring systems for unauthorized access;
  • use of encryption, secure access to all devices, and non-internet connected storage;
  • uniform employee training programs;
  • reasonable data disposal practices (e.g., using disk wiping programs).
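As a concrete illustration of the last item, “reasonable data disposal” for electronic records means actually destroying the data, not merely deleting the file. The hypothetical helper below (a sketch, not a substitute for a dedicated wiping tool) overwrites a file’s contents with random bytes before unlinking it:

```python
import os

def secure_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file with random bytes before removing it.

    Illustrative sketch only: on SSDs and journaling filesystems,
    overwriting in place does not guarantee the original blocks are
    destroyed; dedicated disk-wiping tools handle those cases.
    """
    length = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(length))  # replace contents with noise
            f.flush()
            os.fsync(f.fileno())         # push each pass to disk
    os.remove(path)
```

The same principle applies to paper records, where shredding (or otherwise “making indecipherable,” in the language used by prosecutors in the Safeway matter) is the analogue.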

The Report is not especially optimistic about preventing hacking, but we would note that hacking, or its efficacy, can also be reduced by implementing an information security plan. For example, implementing encryption, and training employees to use it uniformly and properly, can be quite powerful.

Whether the breach threat comes to you in the form of employee conduct or an outside hack attempt, don’t be caught wrong-footed without an adequate information security plan. A certified privacy attorney at OlenderFeldman can assist you with your business’s information security plan, whether you need to create one for the first time or simply need help ensuring that your current plan provides the maximum protection to your business.

By: Aaron Krowne

In a major recent case testing California’s medical information privacy law, the Confidentiality of Medical Information Act, or CMIA (California Civil Code § 56 et seq.), the Third District Court of Appeal held in Sutter Health v. Superior Court on July 21, 2014 that confidential information covered by the law must be “actually viewed” for the statutory penalty provisions of the law to apply. The implication of this decision is that it just got harder for consumers to sue for a “pure” loss of privacy due to a data breach, in California and possibly beyond.

Not So Strict

Previously, CMIA was assumed to be a strict liability statute: even in the absence of actual damages, a covered party that “negligently released” confidential health information was still subject to a $1,000 nominal penalty. That is, if a covered health care provider or health service company negligently handled customer information, and that information was subsequently taken by a third party (e.g., through theft of a computer or data device containing such information), that in itself triggered the $1,000 per-instance (and thus, per-customer-record) penalty. There was no suggestion that the thief (or other recipient) of the confidential health information needed to see, or do anything with, such information. Indeed, plaintiffs had previously brought cases under such a “strict liability” theory and succeeded in the application of CMIA’s $1,000 penalty.

Sutter Health turns that theory on its head, with dramatically different results for consumers and California health-related companies.

Sutter was looking at a potential $4 billion fine, stemming from the October 2011 theft of a computer from its offices containing 4 million unencrypted client records. Sutter’s computer was password-protected, but without encryption of the underlying data this measure is easily defeated. Security at the office was light, with no alarm or surveillance cameras. Believing this to be “negligent,” some affected Sutter customers sued under CMIA in a class action. Given the potential amount of the total fine, the stakes were high.

The Court not only ruled against the Sutter customers, but dismissed the case on demurrer, meaning that the Court determined the case was deficient on the pleadings because the Plaintiffs “failed to state a cause of action.” The main reason, according to the Court, was that Plaintiffs failed to allege that an unauthorized person actually viewed the confidential information; therefore there was no breach of confidentiality as required under CMIA. The Court elaborated that under CMIA “[t]he duty is to preserve confidentiality, and a breach of confidentiality is the injury protected against. Without an actual confidentiality breach there is no injury and therefore no negligence…”.

The Court also introduced the concept of possession, which is absent in CMIA itself, to delimit its new theory interpreting CMIA, saying: “[t]hat [because] records have changed possession even in an unauthorized manner does not [automatically] mean they have been exposed to the view of an unauthorized person.” So, plaintiffs bringing claims under CMIA will now have to allege, and ultimately prove, that their confidential information (1) changed possession in an unauthorized manner, and that (2) it was actually viewed (or, presumably, used) by an unauthorized party.

The Last Word?

This may not be the last word on CMIA, and certainly not on the general issue of the burden of proof of harm in consumer data breaches. The problem is that it is extremely difficult to prove that anything nefarious has actually happened with sensitive consumer data post-breach, short of catching the perpetrator and getting a confession, or actually observing the use or sale of the data by an unauthorized party. Even positive results detected through credit monitoring, such as attempts to use credit cards by unauthorized third parties, do not conclusively prove that a particular breach was the cause of such unauthorized access.

The Sutter court avers, in supporting its ruling, that we do not actually know whether the thief simply stole the computer, wiped the hard drive clean, and sold it as a used computer, in which case there was no violation of CMIA. Yet, logically, the opposite may just as well have happened: retrieval of the customer data may very well have been the actual goal of the theft. In an environment where sensitive consumer records can fetch as much as $45 each (totaling $180 million for the Sutter customer data), it seems unwise to rely on the assumption that thieves will simply not bother to check for valuable information on stolen corporate computers and digital devices.

Indeed, the Sutter decision perhaps raises as many questions as answers on where to draw the line for “breach of confidential information.” To wit: presumably, a hacker downloading unencrypted information would still qualify for this status under the CMIA, so interpreted. But then, by what substantive rationale does the physical removal of a hard drive in this case not qualify? Additionally, how is it determined whether a party actually looked at the data, and precisely who looked at it?

Further, the final chapter on the Sutter breach may not yet be written – the data may still be (or turn out to have been) put to nefarious use, in which case, the court’s ruling will seem premature. Thus, there is likely to be some pushback to Sutter, to the extent that consumers do not accept the lack of punitive options in “open-ended” breaches of this nature, and lawmakers actually intend consumer data-handling negligence laws to have some “bite.”

Conclusion

Naively, it would seem under the Sutter Court’s interpretation that companies dealing with consumer health information have a “blank check” to treat that information negligently, so long as the actual viewing (and, presumably, use) of that information by unauthorized persons is a remote possibility. We would caution against this assumption. First, as above, there may be some pushback (judicially, legislatively, or in terms of public response) to Sutter’s strict requirement of proof of viewing of breached records. But more importantly, there is simply no guarantee that exposed information will not be released and put to harmful use, and that sufficient proof of such will not surface for use in consumer lawsuits.

One basic lesson of Sutter is that, while the company dodged a bullet thanks to a court’s re-interpretation of a law, it (and its customers) would have been vastly safer had it simply utilized encryption. More broadly, Sutter should have had, and implemented, a better data security policy. Companies dealing with customers’ health information (in California and elsewhere) should take every possible precaution to secure this information.

Do not put your company and your customers at risk of data breaches. Contact a certified privacy attorney at OlenderFeldman to make sure your company’s data security policy provides coverage for all applicable health information laws.

By: Aaron Krowne

On June 20, 2014, the Florida legislature passed SB 1524, the Florida Information Protection Act of 2014 (“FIPA”). The law updates Florida’s existing data breach law, creating one of the strongest laws in the nation protecting consumer personal data through the use of strict transparency requirements. FIPA applies to any entity with customers (or users) in Florida – so businesses with a national reach should take heed.

Overview of FIPA

FIPA requires any covered business to give notice of a data breach implicating the personal information of Florida residents within 30 days of discovering the breach. Additionally, FIPA requires the implementation of “reasonable measures” to protect and secure electronic data containing personal information (such as e-mail address/password combinations and medical information), including a data destruction requirement upon disposal of the data.

Be forewarned: the penalties provided under FIPA pack a strong punch. Failure to make the required notification can result in a fine of up to $1,000 a day for up to 30 days; a $50,000 fine for each subsequent 30-day period (or fraction thereof); and, beyond 180 days, up to $500,000 per breach. Violations are to be treated as “unfair or deceptive trade practices” under Florida law. Of note for businesses that utilize third-party data centers and data processors: covered entities may be held liable for these third-party agents’ violations of FIPA.
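The tiered structure can be made concrete with a short sketch. This is one plausible reading of the tiers described above, expressed as a hypothetical calculator; it is illustrative only and not legal advice, and a real analysis must consult the statute’s caps and qualifications.

```python
def fipa_late_notice_penalty(days_late: int) -> int:
    """Estimate the civil penalty for late FIPA breach notification.

    One plausible reading of the tiers: $1,000 per day for the first
    30 days, $50,000 per 30-day period (or fraction thereof) after
    that, and $500,000 per breach once the violation passes 180 days.
    Hypothetical sketch only -- not legal advice.
    """
    if days_late <= 0:
        return 0
    if days_late <= 30:
        return 1_000 * days_late
    if days_late <= 180:
        late_periods = -(-(days_late - 30) // 30)  # ceiling division
        return 30_000 + 50_000 * late_periods
    return 500_000

# 45 days late: $30,000 for the first 30 days plus one $50,000 period.
print(fipa_late_notice_penalty(45))
```

Under this reading, the penalty escalates sharply once the initial 30-day window closes, which is the practical argument for having a notification process ready before any breach occurs.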

While the potential fines for not following the breach notification protocols are steep, no private right of action exists under FIPA.

The Notification Requirement

Any covered business that discovers a breach must, generally, notify the affected individuals within 30 days of the discovery of the breach. The business must also notify the Florida Attorney General within 30 days if more than 500 Florida residents are affected.

However, if the cost of sending individual breach notifications is estimated to be over $250,000, or where over 500,000 customers are affected, businesses may satisfy their obligations under FIPA by notifying customers via a conspicuous web site posting and by running ads in the affected areas (as well as filing a report with the Florida AG’s office).

Where a covered business reasonably self-determines that there has been no harm to Florida residents, and therefore notifications are not required, it must document this determination in writing, and must provide such written determination to the Florida AG’s office within 30 days.

Finally, FIPA provides a strong incentive for businesses to encrypt their consumer data, as notification to affected individuals is not required if the personal information was encrypted.

Implications and Responsibilities

One major takeaway from the FIPA responsibilities outlined above is the importance of formulating and writing down a data security policy. FIPA requires the implementation of “reasonable measures” to protect and secure personal information, implying that companies should already have such measures formulated. Having a carefully crafted data security policy will also help covered businesses determine what harm, if any, has occurred after a breach and whether individual reporting is ultimately required.

For all of the above-cited reasons, FIPA adds urgency to a business formulating a privacy and data security policy if it does not have one – and if it already has one, making sure that it meets the FIPA requirements. Should you have any questions do not hesitate to contact one of OlenderFeldman’s certified privacy attorneys to make sure your data security policy adequately responds to breaches as prescribed under FIPA.

Insider threats, hackers and cyber criminals are all after your data, and despite your best precautions, they may breach your systems. How should small and medium sized businesses prepare for a cyber incident or data breach?

Cyber attacks are becoming more frequent and more sophisticated, and can have devastating consequences. It is not enough for organizations to merely defend themselves against cyber security threats. Determined hackers have proven that, with enough commitment, planning and persistence, they will inevitably find a way to access an organization’s data. Organizations need to either develop cyber incident response plans or update existing disaster recovery plans in order to quickly mitigate the effects of a cyber attack and/or prevent and remediate a data breach. Small businesses are perhaps the most vulnerable organizations, as they are often unable to dedicate the necessary resources to protect themselves. Some studies have found that nearly 60% of small businesses close within six months of a cyber attack. Today, risk management requires that you plan ahead to prepare for, protect against and recover from a cyber attack.

Protect Against Internal Threats

First, most organizations focus their cyber security systems on external threats, and as a result they often fail to protect against internal threats, which by some estimates account for nearly 80% of security issues. Common insider threats include abuse of confidential or proprietary information and disruption of security measures and protocols. As internal threats can result in just as much damage as an outside attack, it is essential that organizations protect themselves from threats posed by their own employees. The primary safeguard is limiting access to information, particularly sensitive data, on a need-to-know basis. Logging events and backing up information, along with educating employees on safe emailing and Internet practices, are all crucial to an organization’s protection against, and recovery from, a breach.
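The need-to-know principle described above can be sketched in code. This is an illustrative sketch only; the roles, resources, and policy table below are hypothetical and not drawn from any particular product:

```python
# Minimal need-to-know access control sketch. Role and resource names
# are invented for illustration.
ACCESS_POLICY = {
    "payroll_records": {"hr_manager", "payroll_admin"},
    "customer_pii": {"support_lead", "privacy_officer"},
    "marketing_assets": {"marketing", "support_lead"},
}

def can_access(role: str, resource: str) -> bool:
    """Grant access only if the role is explicitly on the need-to-know list."""
    return role in ACCESS_POLICY.get(resource, set())

def audit_log(role: str, resource: str, granted: bool) -> str:
    """Record every access attempt so unauthorized use can be detected later."""
    return f"role={role} resource={resource} granted={granted}"
```

The key design choice is default-deny: a role absent from the policy table gets no access, rather than access being removed case by case.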

Involve Your Team In Attack Mitigation Plans

Next, just as every employee can pose a cyber security threat, every employee can, and should, be a part of the post-attack process. All departments, not just the IT team, should be trained on how to communicate with clients after a cyber attack, and be prepared to work with the legal team to address the repercussions of such an attack. The most effective cyber response plans are customized to the organization; they should involve all employees and identify each employee’s specific role in the organization’s cyber security.

Draft, Implement and Update Your Cyber Security Plans

Finally, cyber security, like the technology it protects, evolves on a daily basis, making it crucial for an organization to predict and prevent potential attacks before they happen. Organizations need to be proactive in drafting, implementing and updating their cyber security plans. The best way for an organization to test its cyber security plan is to simulate a breach or conduct an internal audit, which will help identify strengths and weaknesses in the plan, as well as build confidence that the organization is fully prepared in the event of an actual cyber attack.

If you have questions regarding creating or updating a disaster or cyber incident recovery plan, please feel free to contact us using our contact form below.

Contact OlenderFeldman LLP

We would be happy to speak with you regarding your issue or concern. Please fill out the information below and an attorney will contact you shortly.


The National Institute of Standards and Technology (“NIST”) recently released a draft of its Preliminary Cybersecurity Framework (the “Framework”), which was initially proposed in President Obama’s executive order from February 2013. The drafting of the Framework was fueled by former Secretary of Homeland Security Janet Napolitano’s warning that a “cyber 9/11 could happen imminently.” In recent years, cyberattacks on this country’s financial industry and resource infrastructure, as well as on private banks and companies, have dramatically increased in frequency and severity.

The purpose of the Framework is to provide guidelines to companies on how best to protect their networks against hackers and, if necessary, quickly respond to cybersecurity breaches. Ultimately, the Framework seeks to turn today’s best cybersecurity practices into standard practice, and to encourage companies to prioritize cyberthreats the same way they prioritize financial and safety risks. The Framework is divided into five core functions: identify, protect, detect, respond and recover. Companies are encouraged to identify the potential cyber risks they may face, establish protective measures against those threats, and create methods that will allow them to efficiently detect, respond to and recover from cyberattacks if and when they occur. The Framework is intended to complement, not replace, any existing cybersecurity programs.

This Framework is applicable to public and private companies, particularly those that are vital to U.S. security, though its adoption is purely voluntary. Since the Framework is not mandatory, it is not legally binding. Nevertheless, it may raise the bar for in-house counsel by undercutting claims of plausible deniability. Historically, cybersecurity was an issue that CEOs left exclusively to IT workers; consequently, if a company suffered a cyberattack, CEOs were able to point the finger at the IT department. The implementation of the Framework, however, will likely limit CEOs’ ability to shield themselves from liability. The Framework sets a minimum standard of care when it comes to cybersecurity, which means that board members and chief information officers (CIOs) will need to collaborate in order to ensure a company is in full compliance. Furthermore, adoption of the Framework could become a federal contracting requirement. Additional incentives being offered by the government to encourage adoption include lower cybersecurity insurance rates, priority consideration for grants, and optional public recognition.

Overall, the pros of the Framework are that it is the result of a collaborative drafting process and that it gives companies considerable flexibility in implementation. This flexibility is necessary, as cybersecurity is often industry- and business-specific. Unfortunately, compliance with the Framework will likely generate an enormous amount of paperwork for CIOs, since the proposed rules provide minimal clarity as to priorities. The Framework also includes liability shields that many consider controversial, such as reduced tort liability, limited indemnity and the creation of a federal legal privilege that would preempt state disclosure requirements. The final Framework will be issued this month, so stay tuned.

WARNING: Your Account Has Been Compromised – California Expands Existing Data Privacy Breach Law

By Angelina Bruno-Metzger

Governor Jerry Brown recently signed bill SB46 into law, which amends California’s data breach notification law by expanding the definition of “personal information.” The current law requires alerts to be sent to consumers when a database has been breached in a way that could expose a consumer’s social security number, driver’s license number, credit card number(s), or medical/health insurance information. Under the new amendment, website operators will also be obligated to send privacy notifications after the breach of a “user name or email address, in combination with a password or security question and answer that would permit access to an online account,” even when no other personal information has been compromised. As with the new “Do Not Track” law, California is currently the only state whose breach notification statute covers breaches involving solely a user name or email address.

This law will go into effect on January 1, 2014, and a company’s notification obligations under it differ depending on the type of personal data that has been breached. When the security breach does not involve login credentials for an email account, the operator may notify affected customers through a “security breach electronic form.” This form directs the person whose personal information has been compromised to immediately change his or her password and security questions or answers, and to take appropriate precautionary measures with all other online accounts that use the same user name or email address and password. However, when the security breach does involve login credentials for an email account, the operator logically may not provide notification to that email address. Instead, the operator may provide “clear and conspicuous notice delivered to the resident online when the resident is connected to the online account from an IP address or online location from which the person or business knows the resident customarily accesses the account.”

As with the other recently passed cyber laws, the implications of this new data privacy breach law will likely be felt nationally and internationally, as almost every company that offers online personalized services requires consumers to create a username and password. There remains some uncertainty about exactly which businesses must abide by the new regulation, since not all companies can readily confirm which affected users are California residents (sharing a home address is often optional), so it is best for businesses to abide by the old “better safe than sorry” adage. The two best ways companies can come into compliance are to: (1) ensure that all usernames, passwords, security questions and answers are stored in an encrypted form, and (2) update existing protocols, or create new internal protocols, consistent with the law’s reporting requirements.
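On the first compliance step, storing credentials so that a breach does not expose them directly is typically achieved with salted one-way hashing rather than reversible encryption. A minimal sketch using Python’s standard library (function names are our own, not from the statute or any vendor):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted PBKDF2-SHA256 hash. Only the salt and the hash are
    stored; the plaintext password is never written to disk."""
    salt = salt or os.urandom(16)          # 16 random bytes per credential
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored_digest)
```

Security answers can be protected the same way; the per-credential salt ensures that two users with the same password do not produce the same stored value.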

See OlenderFeldman LLP’s predictions for what should happen in 2013 within the data privacy field and compare it with this new data privacy breach law in California.

In honor of Data Privacy Day, Cyber Data Risk Managers asked top industry experts their thoughts on what they think, feel and should happen in 2013 as it pertains to Data Privacy, Information Security and Cyber Insurance and what steps can be taken to mitigate risk.

Cyber Data Risk Managers asked many top privacy and data security experts, including Dr. Larry Ponemon, Rick Kam, Richard Santalesa and Bruce Schneier, their thoughts on what to expect in 2013. OlenderFeldman LLP contributed the following quote:

2012 was notable for several high-profile breaches of major companies, including LinkedIn, Yahoo!, and Zappos, among others. As businesses move more confidential and sensitive data to the cloud (especially in the aftermath of Hurricane Sandy’s devastation and the havoc it wreaked on businesses with locally-based servers), data security obligations are of paramount importance. Businesses should expect more notable data breaches, more class-action lawsuits, and federal legislation concerning data breach obligations in 2013.

To protect themselves, business should: (i) require that cloud providers and other third-party vendors provide them with a written information security plan containing appropriate administrative, technical and physical security measures to safeguard their valuable information; and (ii) ensure compliance with those obligations by drafting appropriate contractual provisions that delineate indemnification and data breach remediation obligations, among others. In particular, when using smaller providers, businesses should consider requiring that the providers be insured, so that they will be able to satisfy their indemnification and remediation obligations in the event of a breach.

Give the 2013 Data Privacy, Information Security and Cyber Insurance Trends report a read.

 

If your password looks something like “123456,” you might want to change it.

By Alice Cheng

Late Wednesday evening, hackers successfully breached Yahoo! security and published a list of unencrypted emails and passwords, exposing the login information of more than 450,000 Yahoo! users. The hackers, who call themselves the D33D Company, explained that they obtained the passwords by using an SQL injection vulnerability—a technique that is often used to make online databases cough up information. The method has been employed in other high-profile hacks, including those of Sony and, more recently, LinkedIn.
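To illustrate the class of vulnerability involved: an SQL injection succeeds when user input is pasted directly into a query string, and is defeated by parameterized queries. The table and data below are invented for illustration:

```python
import sqlite3

# Toy database with two users (hypothetical data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice@example.com', 'h1')")
conn.execute("INSERT INTO users VALUES ('bob@example.com', 'h2')")

def find_user_unsafe(conn, email):
    # VULNERABLE: attacker-controlled input is concatenated into the SQL text,
    # so an input like "' OR '1'='1" rewrites the query and dumps every row.
    query = "SELECT * FROM users WHERE email = '" + email + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, email):
    # SAFE: the ? placeholder passes the value separately from the SQL text,
    # so the input can never be interpreted as SQL.
    return conn.execute("SELECT * FROM users WHERE email = ?", (email,)).fetchall()
```

With the injection string `' OR '1'='1`, the unsafe version returns the entire table, while the safe version returns nothing.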

However, unlike other malicious attacks, the D33D hackers claim that they only had good intentions: “We hope that the parties responsible for managing the security of this subdomain will take this as a wake-up call, and not as a threat.”

The attempted wake-up call is apparently much needed, though often ignored. An analysis of the exposed Yahoo! passwords revealed that a large number were incredibly weak: popular passwords in the set ranged from sequential numbers to, simply, “password.”

In a statement, Yahoo! apologized and stated that notifications will be sent out to all affected users. The company also urged users to change their passwords regularly.

If you are a Yahoo! user, you may want to change your account password, as well as the passwords of any accounts with similar login credentials. It will also be well worth your time to heed the wake-up call and incorporate better password practices. Use a different password for each site, and create long passwords that include a mix of upper- and lower-case letters, numbers, and symbols. To keep things simple, password management software (such as LastPass and KeePass) is available to help keep track of the complex passwords you create.
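The advice above can be automated. A small sketch using Python’s `secrets` module to generate a random password mixing the recommended character classes (the length and symbol set are arbitrary choices, not requirements):

```python
import secrets
import string

SYMBOLS = "!@#$%^&*"

def generate_password(length=16):
    """Generate a random password containing at least one lowercase letter,
    one uppercase letter, one digit, and one symbol."""
    alphabet = string.ascii_letters + string.digits + SYMBOLS
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until every character class is represented.
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw) and any(c in SYMBOLS for c in pw)):
            return pw
```

Using `secrets` rather than `random` matters here: `random` is predictable and unsuitable for security-sensitive values.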

Protect Against Data Breaches

All companies, big and small, are at risk for data breaches, and most have legal obligations with respect to the integrity and confidentiality of certain information in their possession. Information privacy and security is essential to protect your business, safeguard your customers’ privacy, and secure your company’s vital information.

 

Recently, hackers gained access to Yahoo’s databases, exposing over 450,000 usernames and passwords to Yahoo, Gmail, AOL, Hotmail, Comcast, MSN, SBC Global, Verizon, BellSouth and Live.com accounts. This breach comes on the heels of a breach of over 6.5 million LinkedIn user passwords. With these embarrassing breaches, and the widespread revelation of their inadequate information security practices, Yahoo and LinkedIn were added to the rapidly growing list of large companies who have suffered massive data breaches in recent years.

While breaches at large companies like Yahoo and LinkedIn make the headlines, small businesses are equally at risk, and must take appropriate measures to keep their information safe. Aaron Messing, an information privacy attorney with OlenderFeldman LLP, notes that most business networks are accessible from any computer in the world and are therefore potentially vulnerable to threats from individuals who do not need physical access to them. A recent report by Verizon found that nearly three-quarters of breaches in the last year involved small businesses. In fact, small business owners may be the most vulnerable to data breaches, as they are able to devote the least resources to information security and privacy measures. Studies have found that the average cost of small business breaches is $194 per record breached, a figure that includes expenses such as detecting and reporting the breach, notifying and assisting affected customers, and reimbursing customers for actual losses. Notably, these expenses do not include the cost of potential lawsuits, public embarrassment, and loss of customer goodwill, which are common consequences of weak information security and poorly managed data breaches. For a large business, a data breach might be painful. For a small business, it can be a death sentence.

LinkedIn presents a good example of these additional costs. It is currently facing a $5 million class action lawsuit related to the data breach. The lawsuit does not allege any specific breaches of cybersecurity laws, but instead alleges that LinkedIn violated its own stated privacy policy. Businesses of all sizes should be very careful about the representations they make on their websites, as what is written in a website terms of use or privacy policy could have serious legal implications.

Proactive security and privacy planning is always better than reactive measures. “While there is no sure-fire way to completely avoid the risk of data breaches,” says Messing, “steps can be taken, both before and after a breach, to minimize risk and expense.” To preserve confidential communications and obtain advice on the legal issues specific to your company, consult with privacy attorneys about your particular requirements. OlenderFeldman recommends the following general principles as a first step toward securing your business.

First, consider drafting a detailed information security policy and a privacy policy tailored to your company’s specific needs and threats, which will guide the implementation of appropriate security measures. A privacy policy is complementary to the information security policy, and sets the standards for collection, processing, storage, use and disclosure of confidential or personal information about individuals or entities, as well as prevention of unauthorized access, use or disclosure. Your policies should plan for proactive crisis management in the event of a security incident, enabling coordinated execution of remedial actions. Most companies have legal obligations with respect to the integrity and confidentiality of certain information in their possession, and your company should have and enforce policies that reflect the philosophy and strategy of its management regarding information security.

Second, although external breaches by hackers gain the most publicity, the vast majority of data breaches are internal. Accordingly, physical security is one of the most important concerns for small businesses. Informal or non-existent attitudes and practices with regard to security often create temptations, and a relatively safe environment, for an opportunist within to gain improper or unauthorized access to your company’s sensitive information. Mitigating this risk requires limiting access to company resources, particularly sensitive data, to those who need it. Theft or damage of system hardware or paper files presents a great risk of business interruption and loss of confidential or personal information. Similarly, unauthorized access, use, or disclosure, whether intentional or unintentional, puts individuals at risk for identity theft, which may cause monetary liability and reputational damage to your company.

Third, be vigilant about protecting your information. Even if your company develops a secure network, failure to properly monitor logs and processes, or weak auditing, allows new vulnerabilities and unauthorized use to evolve and proliferate, and your company may not realize that a serious loss has occurred or is ongoing. Develop a mobile device policy to minimize the security and privacy risks to your company. Ensure that your technology resources (such as photocopy machines, scanners, printers, laptops and smartphones) are securely erased before they are recycled or disposed of. Most business owners are not aware that technology resources generally store and retain copies of documents that have been printed, scanned, faxed, and emailed on their internal hard drives. For example, when a document is photocopied, the copier’s hard drive often keeps an image of that document. Thus, anyone with possession of that photocopier (i.e., when it is sold or returned) can obtain copies of all documents that were copied or scanned on the machine. This compilation of documents and potentially sensitive information poses serious threats of identity theft.
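For ordinary files on a conventional magnetic disk, “securely erased” generally means overwriting the contents before deletion. A hedged sketch follows; note that on SSDs and journaling filesystems overwriting does not guarantee all copies are destroyed, so vendor secure-erase tools or full-disk encryption are more reliable for those devices:

```python
import os

def wipe_file(path, passes=3):
    """Overwrite a file's contents with random bytes before deleting it.
    Caveat: on SSDs, copiers with proprietary storage, or journaling
    filesystems, stale copies may survive; this is a best-effort measure."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # replace contents with noise
            f.flush()
            os.fsync(f.fileno())        # force the write to the device
    os.remove(path)
```

For leased photocopiers and similar equipment, the manufacturer’s own sanitization procedure (or physical destruction of the drive) is the safer route.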

Finally, in the event of a breach, consult a privacy lawyer to determine your obligations. After a breach has been discovered, there should be a forensic investigation to determine what information was accessed and whether that information is still accessible to unauthorized users.  Your business may be legally obligated to notify customers or the authorities of the breach. Currently, there are no federal laws regulating notification, but 46 states and the District of Columbia have enacted data breach notification laws, which mandate various breach reporting times, and to various authorities.

 

Login / Logout

A New Jersey court recently held that a teacher who accessed and printed a co-worker’s personal email after the coworker left the computer without signing out of her account was not guilty of a crime.

By Alice Cheng

In Marcus v. Rogers, 2012 WL 2428046 (N.J.Super.A.D. June 28, 2012), a New Jersey court held that a defendant was not in violation of any laws when he snooped through the emails of a coworker who had forgotten to sign out of a shared computer.

The defendant, a teacher who was involved in a salary dispute with the school district he worked for, sat down to use a computer in the school’s computer room when he accidentally bumped the mouse of the computer next to him. The screen of the adjacent computer came alive to show the Yahoo! email inbox of a member of the education association he was in dispute with, which included two emails that clearly mentioned him. He then clicked on the emails, printed them out, and used them at a meeting with the education association as evidence that they had not bargained in good faith.

The individuals who were  copied on the email conversations filed suit, claiming that the defendant had violated New Jersey’s version of the Stored Communications Act (N.J.S.A. 2A:156A-27), which reads in pertinent part:

A person is guilty . . . if he (1) knowingly accesses without authorization a facility through which an electronic communication service is provided or exceeds an authorization to access that facility, and (2) thereby obtains, alters, or prevents authorized access to a wire or [an] electronic communication while that communication is in electronic storage.

The court found that the defendant did not “knowingly access [the facility] without authorization,” as it was the previous user who had logged into the account. The judge then let the jury decide whether the defendant “exceed[ed] an authorization to access that facility” given that the coworker had failed to close her inbox and log out of her account. The jury found that he did not, as he had “tacit authorization” to access the account. On appeal, the court affirmed.

While there is no clear answer to the question of whether snooping emails is illegal (as always, it depends), always remember to log out of public computers. Similarly, all mobile devices, such as smartphones or laptops, should be password protected. As for the email snoopers, be forewarned that snooping may nevertheless carry major consequences, if hacking or unauthorized access is found.

Your smartphone knows all about you. Before giving it away or recycling your smartphone, make sure that you take the proper precautions so that your smartphone doesn’t spill your secrets to the world.

In a Fox Business article by Michael Estrin entitled “Don’t be Stupid With an Unwanted Smartphone,” OlenderFeldman LLP provides insight on the importance of wiping all data before selling or donating an old phone. Some excerpts follow, and be sure to read the entire thing:


If an identity thief gets hold of data on your old smartphone, the risks could be dire, according to Aaron Messing, a lawyer specializing in technology and information privacy issues.

“It’s important for consumers to realize that their smartphones are actually mini-computers that contain all types of sensitive personal and financial information,” says Messing, who’s with the Olender Feldman firm in Union, N.J.

That information typically includes, but is not limited to: phone contacts, calendars, emails, text messages, pictures and a browser history. Increasingly, many phones also contain everything you’d have in your wallet — and more — as more consumers are using mobile banking and payment apps.

If just a little information gets into the wrong hands, it can go a very long way because each piece of compromised data is a clue toward finding more, says Messing.

“Email is especially sensitive because access to email will often give (a thief the) ability to reset passwords, which can be used to access financial and health information,” says Messing. Since many consumers ignore warnings not to use the same password for numerous sites, the risk could easily be multiplied very quickly.

So far, there haven’t been many reported incidents of identity theft using data pulled from discarded smartphones. But it’s a problem that Messing worries might rise as smartphone usage grows. A recent study by Pew Internet found that nearly half of Americans now own smartphones, up from 35% last year.

The proposed bill prohibits an employer from requiring a current or prospective employee to provide access to a personal account or even asking if they have an account or profile on a social networking website.

By Alice Cheng

Last month, a New Jersey Assembly committee approved a measure that would prohibit an employer from requiring a current or prospective employee to disclose user names or passwords that allow access to personal accounts. Under the bill, an employer also may not ask a current or prospective employee whether she has an account or profile on a social networking website, and may not retaliate or discriminate against an individual who exercises her rights under the bill.

This bill came in light of the multitude of stories of employers and schools requesting such information, or performing “shoulder surfing,” during interviews and at school or work. Although some of these stories may be urban legend, the ACLU and Facebook itself have demanded that the privacy-violating practice come to an end, and legislators across the nation have responded promptly. For example, Maryland, California, and even the U.S. Senate have all proposed similar legislation banning such password requests to protect employee privacy.

Not only are password requests problematic for employees, but they may also land employers in legal hot water. Social media profiles may contain information that employers legally cannot ask about (such as race or religion), and may potentially open employers up to discrimination suits.

Under the New Jersey bill, civil penalties are available in an amount not to exceed $1,000 for the first violation, or $2,500 for each subsequent violation.

Recently, in Ehling v. Monmouth Ocean Hospital Service Corp., 11-cv-3305 (WJM) (D.N.J.; May 30, 2012), a New Jersey court found that accessing an employee’s Facebook posts by “shoulder surfing” a coworker’s page states a privacy claim. See Venkat Balasubramani’s excellent writeup at the Technology & Marketing Law Blog.

New Jersey Law Requires Photocopiers and Scanners To Be Erased Because Of Privacy Concerns

NJ Assembly Bill A-1238 requires the destruction of records stored on digital copy machines under certain circumstances in order to prevent identity theft.

By Alice Cheng

Last week, the New Jersey Assembly passed Bill A-1238 in an attempt to prevent identity theft. The bill requires that information stored on photocopy machines and scanners be destroyed before the devices change hands (e.g., when resold or returned at the end of a lease agreement).

Under the bill, owners of such devices are responsible for the destruction, or arranging for the destruction, of all records stored on the machines. Most consumers are not aware that digital photocopy machines and scanners store and retain copies of documents that have been printed, scanned, faxed, and emailed on their hard drives. That is, when a document is photocopied, the copier’s hard drive often keeps an image of that document. Thus, anyone with possession of the photocopier (i.e., when it is sold or returned) can obtain copies of all documents that were copied or scanned on the machine. This compilation of documents and potentially sensitive information poses serious threats of identity theft.

Any willful or knowing violation of the bill’s provisions may result in a fine of up to $2,500 for the first offense and $5,000 for subsequent offenses. Identity theft victims may also bring legal action against offenders.

In order to avoid these consequences, businesses should be mindful of the type of information stored on such devices, and ensure that all data is erased before reselling or returning them. Business owners should be especially careful, as digital copy machines may also contain trade secrets and other sensitive business information.

Check Cloud Contracts for Provisions Related to Privacy, Data Security and Regulatory Concerns

“Cloud” Technology Offers Flexibility, Reduced Costs, Ease of Access to Information, But Presents Security, Privacy and Regulatory Concerns

With the recent introduction of Google Drive, cloud computing services are garnering increased attention from entities looking to more efficiently store data. Specifically, using the “cloud” is attractive due to its reduced cost, ease of use, mobility and flexibility, each of which can offer tremendous competitive benefits to businesses. Cloud computing refers to the practice of storing data on remote servers, as opposed to on local computers, and is used for everything from personal webmail to hosted solutions where all of a company’s files and other resources are stored remotely. As convenient as cloud computing is, it is important to remember that these benefits may come with significant legal risk, given the privacy and data protection issues inherent in the use of cloud computing. Accordingly, it is important to check your cloud computing contracts carefully to ensure that your legal exposure is minimized in the event of a data breach or other security incident.

Cloud computing allows companies convenient, remote access to their networks, servers and other technology resources, regardless of location, thereby creating “virtual offices” in which employees have remote access to files and data identical in scope to the access they have in the office. The cloud offers companies flexibility and scalability, enabling them to pool and allocate information technology resources as needed, using the minimum amount of physical IT resources necessary to service demand. These hosted solutions enable users to easily add or remove storage or processing capacity to accommodate fluctuating business needs. By utilizing only the resources necessary at any given point, cloud computing can provide significant cost savings, which makes the model especially attractive to small and medium-sized businesses. However, the rush to adopt cloud computing for its various efficiencies often comes at the expense of data privacy and security.

The laws that govern cloud computing are (perhaps somewhat counterintuitively) geographically based on the physical location of the cloud provider’s servers, rather than the location of the company whose information is being stored. American state and federal laws concerning data privacy and security tend to vary, while servers in Europe are subject to more comprehensive (and often more stringent) privacy laws. However, this may change, as the Federal Trade Commission (FTC) has been investigating the privacy and security implications of cloud computing as well.

In addition to location-based considerations, companies expose themselves to potentially significant liability depending on the types of information stored in the cloud. Federal, state and international laws all govern the storage, use and protection of certain types of personally identifiable information and protected health information. For example, the Massachusetts Data Security Regulations require all entities that own or license personal information of Massachusetts residents to ensure appropriate physical, administrative and technical safeguards for their personal information (regardless of where the companies are physically located), with fines of up to $5,000 per incident of non-compliance. That means that the companies are directly responsible for the actions of their cloud computing service provider. OlenderFeldman LLP notes that some information is inappropriate for storage in the cloud without proper precautions: “We strongly recommend against storing any type of personally identifiable information, such as birth dates or social security numbers, in the cloud. Similarly, sensitive information such as financial records, medical records and confidential legal files should not be stored in the cloud where possible, unless it is encrypted or otherwise protected.” In fact, even a data breach related to non-sensitive information can have serious adverse effects on a company’s bottom line and, perhaps more distressing, its public perception.

Additionally, the information your company stores in the cloud will also be affected by the rules set forth in the privacy policies and terms of service of your cloud provider. Although these terms may seem like legal boilerplate, they may very well form a binding contract which you are presumed to have read and consented to. Accordingly, it is extremely important to have a grasp of what is permitted and required by your cloud provider’s privacy policies and terms of service. For example, the privacy policies and terms of service will dictate whether your cloud service provider is a data processing agent, which will only process data on your behalf, or a data controller, which has the right to use the data for its own purposes as well. Notwithstanding the terms of your agreement, if the service is being provided for free, you can safely presume that the cloud provider is a data controller who will analyze and process the data for its own benefit, such as to serve you ads.

Regardless, when sharing data with cloud service providers (or any other third party service providers), it is important to obligate third parties to process data in accordance with applicable law, as well as your company’s specific instructions — especially when the information is personally identifiable or sensitive in nature. This is particularly important because in addition to the loss of goodwill, most data privacy and security laws hold companies, rather than service providers, responsible for compliance with those laws. That means that your company needs to ensure the data’s security, regardless of whether it’s in a third party’s (the cloud provider’s) control. It is important for a company to agree with the cloud provider as to the appropriate level of security for the data being hosted. Christian Jensen, a litigation attorney at OlenderFeldman LLP, recommends contractually binding third parties to comply with applicable data protection laws, especially where the law places the ultimate liability on you. “Determine what security measures your vendor employs to protect data,” suggests Jensen. “Ensure that access to data is properly restricted to the appropriate users.” Jensen notes that since data protection laws generally do not specify the levels of commercial liability, it is important to ensure that your contract with your service providers allocates risk via indemnification clauses, limitation of liabilities and warranties. Businesses should reserve the right to audit the cloud service provider’s data security and information privacy compliance measures as well, in order to verify that the third party providers are adhering to their stated privacy policies and terms of service. Such audits can be carried out by an independent third party auditor, where necessary.

OlenderFeldman LLP was interviewed by Jennifer Banzaca of the Hedge Fund Law Report for a three part series entitled, “What Concerns Do Mobile Devices Present for Hedge Fund Managers, and How Should Those Concerns Be Addressed?” (Subscription required; Free two week subscription available.) Some excerpts of the topics Jennifer and Aaron discussed follow. You can read the third entry here.

Preventing Access by Unauthorized Persons

This section highlights steps that hedge fund managers can take to prevent unauthorized users from accessing a mobile device or any transmission of information from a device.  Concerns over unauthorized access are particularly acute in connection with lost or stolen devices.

[Lawyers] recommended that firms require the use of passwords or personal identification numbers (PINs) to access any mobile device that will be used for business purposes.  Aaron Messing, a Corporate & Information Privacy Associate at OlenderFeldman LLP, further elaborated, “We generally emphasize setting minimum requirements for phone security.  You want to have a mobile device lock with certain minimum requirements.  You want to make sure you have a strong password and that there is boot protection, which is activated any time the mobile device is powered on or reactivated after a period of inactivity.  Your password protection needs to be secure.  You simply cannot have a password that is predictable or easy to guess.”
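The minimum password requirements Messing describes can be made concrete with a short sketch. The thresholds below (length, character classes, a small blocklist of guessable values) are our own illustrative assumptions, not requirements stated in the article; real mobile device management products enforce policies like this at the device level.

```python
import re

# Illustrative blocklist of trivially guessable passwords (an assumption
# for this sketch; real policies use much larger lists).
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}

def meets_minimum_requirements(password: str) -> bool:
    """Check a password against assumed minimum requirements:
    at least 10 characters, mixed character classes, and not a
    commonly guessed value."""
    if len(password) < 10:
        return False
    if password.lower() in COMMON_PASSWORDS:
        return False
    checks = [
        re.search(r"[a-z]", password),          # lowercase letter
        re.search(r"[A-Z]", password),          # uppercase letter
        re.search(r"\d", password),             # digit
        re.search(r"[^A-Za-z0-9]", password),   # symbol
    ]
    return all(checks)
```

A policy like this rejects predictable passwords of the kind Messing warns about, while leaving the exact thresholds to the firm’s own risk assessment.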

Second, firms should consider solutions that facilitate the wiping (i.e., erasing) of firm data on the mobile device to prevent access by unauthorized users . . . . [T]here are numerous available wiping solutions.  For instance, the firm can install a solution that will facilitate remote wiping of the mobile device if the mobile device is lost or stolen.  Also, to counter those that try to access the mobile device by trying to crack its password, a firm can install software that automatically wipes firm data from the mobile device after a specific number of failed log-in attempts.  Messing explained, “It is also important for firms to have autowipe ability – especially if you do not have a remote wipe capability – after a certain number of incorrect password entries.  Often when a phone is lost or stolen, it is at least an hour or two before the person realizes the mobile device is missing.”
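The autowipe behavior described above reduces to a simple rule: count failed unlock attempts and erase firm data once a threshold is crossed. The following sketch illustrates the logic only; the class name, the threshold of ten attempts, and the wipe mechanics are assumptions for illustration, not a real MDM product’s API.

```python
class DeviceLock:
    """Illustrative model of autowipe-on-failed-attempts: after a set
    number of wrong password entries, firm data is erased locally."""

    def __init__(self, pin: str, max_attempts: int = 10):
        self._pin = pin
        self._max_attempts = max_attempts
        self._failed = 0
        self.wiped = False

    def attempt_unlock(self, entered: str) -> bool:
        if self.wiped:
            return False  # nothing left to unlock
        if entered == self._pin:
            self._failed = 0
            return True
        self._failed += 1
        if self._failed >= self._max_attempts:
            self._wipe()
        return False

    def _wipe(self):
        # A real solution would securely erase firm data, or destroy
        # the encryption keys protecting it.
        self.wiped = True
```

As Messing notes, this local trigger matters precisely because remote wiping depends on someone noticing the device is missing; the counter fires on its own.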

Wipe capability can also be helpful when an employee leaves the firm or changes mobile devices. . . Messing further elaborated, “When an employee leaves, you should have a policy for retrieving proprietary or sensitive information from the employee-owned mobile device and severing access to the network.  Also, with device turnover – if employees upgrade phones – you want employees to agree and acknowledge that you as the employer can go through the old phone and wipe the sensitive aspects so that the next user does not have the ability to pick up where the employee left off.”

If a firm chooses to adopt a wipe solution, it should adopt policies and procedures that ensure that employees understand what the technology does and obtain consent to the use of such wipe solutions.  Messing explained, “What we recommend in many cases is that as a condition of enrolling a device on the company network, employees must formally consent to an ‘Acceptable Use’ policy, which defines all the situations when the information technology department can remotely wipe the mobile device.  It is important to explain how that wipe will impact personal device use and data and employees’ data backup and storage responsibilities.”

Third, a firm should consider adopting solutions that prevent unauthorized users from gaining remote access to a mobile device and its transmissions.  Mobile security vendors offer products to protect a firm’s over-the-air transmissions between the server and a mobile device and the data stored on the mobile device.  These technologies allow hedge fund managers to encrypt information accessed by the mobile device – as well as information being transmitted by the mobile device – to ensure that it is secure and protected.  For instance, mobile devices can retain and protect data with WiFi and mobile VPNs, which provide mobile users with secure remote access to network resources and information.

Fourth, Rege suggested hedge fund managers have a procedure for requiring certificates to establish the identity of the device or a user.  “In a world where the devices are changing constantly, having that mechanism to make sure you always know what device is trying to access your system becomes very important.”

Preventing Unauthorized Use by Firm Personnel

Hedge fund managers should be concerned not only by potential threats from external sources, but also potential threats from unauthorized access and use by firm personnel.

For instance, hedge fund managers should protect against the theft of firm information by firm personnel.  Messing explained, “You want to consider some software to either block or control data being transferred onto mobile devices.  Since some of these devices have a large storage capacity, it is very easy to steal data.  You have to worry not only about external threats but internal threats as well, especially when it comes to mobile devices, you want to have system controls that are put in place to record and maybe even limit the data being taken from or copied onto mobile devices.”

Monitoring Solutions

To prevent unauthorized access and use of the mobile device, firms can consider remote monitoring.   However, monitoring solutions raise employee privacy concerns, and the firm should determine how to address these competing concerns.

Because of gaps in expectations regarding privacy, firms are much more likely to monitor activity on firm-provided mobile devices than on personal mobile devices. . . . In addressing privacy concerns, Messing explained, “You want to minimize the invasion of privacy and make clear to your employees the extent of your access.  When you are using proprietary technology for mobile applications, you can gain a great deal of insight into employee usage and other behaviors that may not be appropriate – especially if not disclosed.  We are finding many organizations with proprietary applications tracking behaviors and preferences without considering the privacy implications.  Generally speaking, you want to be careful how you monitor the personal device if it is also being used for work purposes.  You want to have controls to determine an employee’s compliance with security policies, but you have to balance that with a respect for that person’s privacy.  When it comes down to it, one of the most effective ways of doing that is to ensure that employees are aware of and understand their responsibilities with respect to mobile devices.  There must be education and training that goes along with your policies and procedures, not only with the employees using the mobile devices, but also within the information technology department as well.  You have people whose job it is to secure corporate information, and in the quest to provide the best solution they may not even consider privacy issues.”

As an alternative to remote monitoring, a firm may decide to conduct personal spot checks of employees’ mobile devices to determine if there has been any inappropriate activity.  This solution is less intrusive than remote monitoring, but likely to be less effective in ferreting out suspicious activity.

Policies Governing Archiving of Books and Records

Firms should consider both technology solutions and monitoring of mobile devices to ensure that they are capturing all books and records required to be kept under the firm’s recordkeeping policies and applicable laws and regulations.

Also, firms may contemplate instituting a policy to search employees’ mobile devices and potentially copying materials from such mobile devices to ensure the capture of all such information or communications from mobile devices.  However, searching and copying may raise privacy concerns, and firms should balance recordkeeping requirements and privacy concerns.  Messing explained, “In the event of litigation or other business needs, the company should image, copy or search an employee’s personal device if it is used for firm business.  Therefore, employees should understand the importance of complying with the firm’s policies.”

Policies Governing Social Media Access and Use by Mobile Devices

Many firms will typically have some policies and procedures in place that ban or restrict the proliferation of business information via social media sites such as Facebook and Twitter, including with respect to the use of firm-provided mobile devices.  Specifically, such a policy could include provisions prohibiting the use of the firm’s name; prohibiting the disclosure of trade secrets; prohibiting the use of company logos and trademarks; addressing the permissibility of employee discussions of competitors, clients and vendors; and requiring disclaimers.

Messing explained, “We advise companies just to educate employees about social media.  If you are going to be on social media, be smart about what you are doing.  To the extent possible, employees should note their activity is personal and not related to the company.  They also should draw distinctions, where possible, between their personal and business activities.  These days it is increasingly blurred.  The best thing to do is just to come up with common sense suggestions and educate employees on the ramifications of certain activities.  In this case, ignorance is usually the biggest issue.”

Ultimately, many hedge fund managers recognize the concerns raised by mobile devices.  However, many also recognize the benefits that can be gained from allowing employees to use such devices.  In Messing’s view, the benefits to hedge fund managers outweigh the costs.  “Everything about a mobile device is problematic from a security standpoint,” Messing said, “but the reality is that the benefits far outweigh the costs in that productivity is greatly enhanced with mobile devices.  It is simply a matter of mitigating the concerns.”

OlenderFeldman LLP was interviewed by Jennifer Banzaca of the Hedge Fund Law Report for a three part series entitled, “What Concerns Do Mobile Devices Present for Hedge Fund Managers, and How Should Those Concerns Be Addressed?” (Subscription required; Free two week subscription available.) Some excerpts of the topics Jennifer and Aaron discussed follow. You can read the second entry here.

Three Steps That Hedge Fund Managers Should Take before Crafting Mobile Device Policies and Procedures

As indicated, before putting pen to paper to draft mobile device policies and procedures, hedge fund managers should take at least the following three steps.  Managers that already have mobile device policies and procedures in place, or that have other policies and procedures that incidentally cover mobile devices, may take the following three steps in revising the other relevant policies and procedures.

First, Aaron Messing, a Corporate & Information Privacy Lawyer at OlenderFeldman LLP, advised that hedge fund managers should ensure that technology professionals are integrally involved in developing mobile device policies and procedures.  Technology professionals are vital because they can understand the firm’s technological capabilities, and they can inform the compliance department about the technological solutions available to address compliance risks and to meet the firm’s goals.  Such technology professionals can be manager employees, outside professionals or a combination of both.  The key is that such professionals understand how technology can complement rather than conflict with the manager’s compliance and business goals.

Second, the firm should take inventory of its mobile device risks and resources before beginning to craft mobile device policies and procedures.  Among other things, hedge fund managers should consider access levels on the part of its employees; its existing technological capabilities; its budget for addressing the risks of using mobile devices; and the compliance personnel available to monitor compliance with such policies and procedures.  With respect to employee access, a manager should evaluate each employee’s responsibilities, access to sensitive information and historical and anticipated uses of mobile devices to determine the firm’s risk exposure.

With respect to technology, Messing cautioned that mobile device policies and procedures should be supportable by a hedge fund manager’s current technology infrastructure and team.  Alternatively, a manager should be prepared to invest in the required technology and team.  “You should be sure that what you are considering implementing can be supported by your information technology team,” Messing said.  With respect to budgeting, a hedge fund manager should evaluate how much it is willing to spend on technological solutions to address the various risks posed by mobile devices.  Any such evaluation should be informed by accurate pricing, assessment of a range of alternative solutions to address the same risk and a realistic sense of what is necessary in light of the firm’s business, employees and existing resources.  Finally, with respect to personnel, a manager should evaluate how much time the compliance department has available to monitor compliance with any contemplated mobile device policies and procedures.

Third, hedge fund managers should specifically identify their goals in adopting mobile device policies and procedures.  While the principal goal should be to protect the firm’s information and systems, hedge fund managers should also consider potentially competing goals, such as the satisfaction levels of their employees, as expressed through employee preferences and needs.  As Messing explained, “It is not that simple to dictate security policies because you have to take into account the end users.  Ideally, when you are creating a mobile device policy, you want something that will keep end users happy by giving them device freedom while at the same time keeping your data safe and secure.  One of the things that I emphasize the most is that you have to customize your solutions for the individual firm and the individual fund.  You cannot just take a one-size-fits-all policy because if you take a policy and you do not implement it, it can be worse than not having a policy at all.”  OCIE and Enforcement staff members have frequently echoed that last insight of Messing’s.

Aaron and Jennifer also discussed privacy concerns with the use of personal devices for work:

Firm-Provided Devices versus Personal Devices:

As an alternative, some firms have considered adopting policies that require employees to make their personal phones available for periodic and surprise examinations to ensure compliance with firm policies and procedures governing the use of personal phones in the workplace.  However, this solution may not necessarily be as effective as some managers might think because many mobile device functions and apps have been created to hide information from viewing, and a mobile device user intent on keeping information hidden may be able to take advantage of such functionality to deter a firm’s compliance department from detecting any wrongdoing.  Additionally, Messing explained that such examinations also raise employee privacy concerns.  Hedge fund managers should consider using software that can separate firm information from personal information to maximize the firm’s ability to protect its interests while simultaneously minimizing the invasion of an employee’s privacy.

Regardless of the policies and procedures that a firm wishes to adopt with respect to the use of personal mobile devices by firm personnel, hedge fund managers should clearly communicate to their employees the level of firm monitoring, access and control that is expected, especially if an employee decides that he or she wishes to use his or her personal mobile device for firm-related activities.

Jennifer and Aaron also discussed controlling access to critical information and systems:

Limiting Access to and Control of Firm Information and Systems

As discussed in the previous article in this series, mobile devices raise many external and internal security threats.  For instance, if a mobile device is lost or stolen, the recovering party may be able to gain access to sensitive firm information.  Also, a firm should protect itself from unauthorized access to and use of firm information and networks by rogue employees.  A host of technology solutions, in combination with robust policies and procedures, can minimize the security risks raised by mobile devices.  The following discussion highlights five practices that can help hedge fund managers to appropriately limit access to and control of firm information and networks by mobile device users.

First, hedge fund managers should grant mobile device access only to such firm information and systems as are necessary for the mobile device user to perform his or her job functions effectively.  This limitation on access should reduce the risks associated with use of the mobile device, particularly risks related to unauthorized access to firm information or systems.

Second, hedge fund managers should consider strong encryption solutions to provide additional layers of security with respect to their information.  As Messing explained, “As a best practice, we always recommend firm information be protected with strong encryption.”

Third, a firm should consider solutions that will avoid providing direct access to the firm’s information on a mobile device.  For instance, a firm should consider putting its information in the cloud and requiring mobile device users to access such information through the cloud.  By introducing security measures to access the cloud, the firm can provide additional layers of protection over and above the security measures designed to deter unauthorized access to the mobile device.

Fourth, hedge fund managers should consider solutions that allow them to control the “business information and applications” available via a personal mobile device.  With today’s rapidly evolving technology, solutions are now available that allow hedge fund managers to control those functions that are critical to their businesses while minimizing the intrusion on the personal activities of the mobile device user.  For instance, there are applications that store e-mails and contacts in encrypted compartments that separate business data from personal data.  Messing explained, “Today, there is software to provide data encryption tools and compartmentalize business data, accounts and applications from the other aspects of the phone.  There are also programs that essentially provide an encryption sandbox that can be removed and controlled without wiping the entire device.  When you have that ability to segment off that sensitive information and are able to control that while leaving the rest of the mobile device uncontrolled, that really is the best option when allowing employees to use mobile devices to conduct business.  The solutions available are only limited by the firm’s own technology limitations and what is available for each specific device.”  This compartmentalization also makes it easier to wipe a personal mobile phone if an employee leaves the firm, with minimal intrusion to the employee.
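The compartmentalization Messing describes can be reduced to a simple idea: firm data lives in its own container, so a selective wipe removes business information while leaving the employee’s personal data untouched. The sketch below is a toy model of that separation; the class and method names are hypothetical, and real products enforce the boundary with encryption rather than a plain data structure.

```python
class ManagedDevice:
    """Toy model of a compartmentalized personal device: business data
    sits in a separate firm-controlled container that can be wiped
    without touching personal data."""

    def __init__(self):
        self.personal = {}   # employee-controlled compartment
        self.business = {}   # firm-controlled compartment

    def store(self, compartment: str, key: str, value: str):
        if compartment not in ("personal", "business"):
            raise ValueError("unknown compartment")
        getattr(self, compartment)[key] = value

    def selective_wipe(self):
        # Remove only firm data; photos, contacts and other personal
        # items remain intact.
        self.business.clear()
```

This is why, as the article notes, a departing employee’s personal phone can be cleared of firm data with minimal intrusion: the wipe targets only the business compartment.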

Fifth, hedge fund managers should adopt solutions that prohibit or restrict the migration of their information to areas where they cannot control access to such information.  Data loss prevention (DLP) solutions can provide assistance in this area by offering network protection to detect movement of information across the network.  DLP software can also block data from being moved to local storage, encrypt data and allow the administrator to monitor and restrict use of mobile device storage.

OlenderFeldman LLP was interviewed by Jennifer Banzaca of the Hedge Fund Law Report for a three part series entitled, “What Concerns Do Mobile Devices Present for Hedge Fund Managers, and How Should Those Concerns Be Addressed?” (Subscription required; Free two week subscription available.) Some excerpts of the topics Jennifer and Aaron discussed follow. You can read the first entry here.

Eavesdropping

[A]s observed by Aaron Messing, a Corporate & Information Privacy Lawyer at OlenderFeldman LLP, “Phones have cameras and video cameras, and therefore, the phone can be used as a bugging device.”

Location Privacy

[M]any mobile devices or apps can broadcast the location of the user.  Messing explained that these can be some of the most problematic apps for hedge fund managers because they can communicate information about a firm’s activities through tracking of a firm employee.  For instance, a person tracking a mobile device user may be able to glean information about a firm’s contemplated investments if the mobile device user visits the target portfolio company.  Messing explained, “It is really amazing the amount of information you can glean just from someone’s location.  It can present some actionable intelligence.  General e-mails can have a lot more meaning if you know someone’s location.  Some people think this concern is overblown, but whenever you can collect disparate pieces of information, aggregating all those seemingly innocuous pieces of information can put together a very compelling picture of what is going on.”

Additionally, as Messing explained, “Some hedge fund managers are concerned with location-based social networks and apps, like Foursquare, which advertises that users are at certain places.  You should worry whether that tips someone off as to whom you were meeting with or companies you are potentially investing in.  These things are seemingly harmless in someone’s personal life, but this information could wind up in the wrong hands.  People can potentially piece together all of these data points and perhaps figure out what an employee is up to or what the employee is working on.  For a hedge fund manager, this tracking can have serious consequences.  It is hard to rely on technology to block all of those apps and functions because the minute you address something like Foursquare, a dozen new things just like it pop up.  To some degree you have to rely on education, training and responsible use by your employees.”

Books and Records Retention

Messing explained that while e-mails are generally simple to save and archive, text messages and other messaging types present new challenges for hedge fund managers.  Nonetheless, as Marsh cautioned, “Regardless of the type of messaging system that is used, all types of business-related electronic communications must be captured and archived.  There is no exception to those rules.  There is no exception for people using cell phones.  If I send a text message or if I post something to my Twitter account or Facebook account and it is related to business, it has to be captured.”

Advertising and Communications Concerns

OlenderFeldman’s Messing further explained on this topic, “Social media tends to blur these lines between personal and professional communications because many social media sites do not delineate between personal use and business use.  While there is not any clear guidance on whether using social networking and ‘liking’ various pages constitutes advertising, it is still a concern for hedge fund managers.  You can have your employees include disclaimers that their views are not reflective of the views of the company or that comments, likes or re-Tweets do not constitute an endorsement.  However, you still should have proper policies and procedures in place to address the use of social media, and you have to educate your employees about acceptable usage.”