The following is a guest post from Daniel Haurey, President of Exigent Technologies, an IT services and cloud services provider.


One of my favorite business authors, David Meerman Scott, said it best: “Because of the infinite amount of information available on the web, buyers now have more information than sellers, and therefore buyers have the upper hand in negotiations.”

Prospects and new clients, each armed with mini-computers in their pockets, are peering through your virtual window with more relevant information about your firm than you could ever have hoped. As competition increases along with demands for transparency, it’s important to forge relationships with prospective clients before you even meet them, and on a common ground that has recently become the location of choice for nearly every demographic. Where? On their phones, of course. Mobile devices have recently surpassed all others in total web traffic: smartphones and tablets combined now account for 60 percent of all online traffic, up from 50 percent a year ago, and 51 percent of that traffic is driven by mobile apps.

Fortunately, the combination of the web, social media and a barrage of affordable, easily accessible technology has leveled the virtual playing field, giving small and mid-sized law firms the same vast reach as their gargantuan counterparts. But as blogging, tweeting and content marketing have become commonplace, how does your firm stay relevant online and create brand recognition, before and after the client engagement? These days, part of the answer may be a custom mobile application. Depending on your goals and the areas of law you focus on, there are countless possibilities for your firm to offer a custom-designed mobile application that drives client engagement and keeps your firm at the forefront of the industry.

Here is a list of law firm app use cases to get your gears moving:

1.  Custom Accident Kit App for Personal Injury Attorneys

Imagine how useful it could be for a victim of an accident to gather GPS location, insurance information, key contacts, facts, notes, and pictures right at the scene of the incident.  Now imagine they can send you all of that information with a few swipes of their finger.
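To make the idea concrete, here is a minimal sketch of the data such an app might capture at the scene. Every field name and example value is hypothetical, not a description of any existing product:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AccidentReport:
    """What a hypothetical accident-kit app might capture at the scene."""
    latitude: float          # GPS fix taken at the scene
    longitude: float
    insurer: str             # other driver's insurance details
    policy_number: str
    notes: str               # facts recorded while memory is fresh
    photo_paths: list = field(default_factory=list)

    def to_json(self) -> str:
        # Payload the client could send to the firm's intake endpoint
        return json.dumps(asdict(self))

report = AccidentReport(40.7357, -74.1724, "Acme Mutual", "PN-1234",
                        "Rear-ended at a red light", ["scene_front.jpg"])
print(report.to_json())
```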

2.  DUI Assistance

Our mobile devices are always with us, even after a glass or three of wine at dinner. Help prospective clients estimate their blood alcohol level and navigate DUI laws, with information they can share with their friends, including advice on what to do or how to respond to questions during a traffic stop for suspicion of DUI.
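An in-app BAC estimator of this kind would most likely be built on the classic Widmark formula. A minimal sketch, for illustration only (the constants are the textbook values; nothing here is legal or medical advice):

```python
def estimate_bac(std_drinks: float, weight_lbs: float, hours: float,
                 is_male: bool = True) -> float:
    """Rough blood alcohol estimate via the classic Widmark formula.

    One standard US drink is ~0.6 oz of pure alcohol; r is the Widmark
    body-water constant (~0.73 for men, ~0.66 for women); alcohol is
    eliminated at roughly 0.015% BAC per hour.
    """
    r = 0.73 if is_male else 0.66
    alcohol_oz = std_drinks * 0.6
    bac = (alcohol_oz * 5.14) / (weight_lbs * r) - 0.015 * hours
    return max(bac, 0.0)

# Three drinks, 180 lb man, two hours after the first drink: ~0.040%
print(f"Estimated BAC: {estimate_bac(3, 180, 2):.3f}%")
```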

3.  Workers’ Compensation and Social Security Disability Law

Ease client intake by offering in-app evaluation forms that help determine eligibility. Assist clients in tracking expenses, symptoms, medical history and medications.

4.  Family and Matrimonial Law

Provide clients with useful information, tips and tools, such as contacts for battered women’s shelters and other sources of support and guidance appropriate for individuals planning for or enduring divorce.

5.  Establish your firm as a thought leader in your field

Your firm has garnered the respect of its peers and differentiated itself in the marketplace, and a mobile application is an excellent way to guide peers and clients to that differentiating factor. Nothing says “we get it” to prospective millennial and Generation X clients like having your own mobile app.

Daniel Haurey is president and CEO of Exigent Technologies, a New Jersey-based IT services and cloud services provider, and one of the Tri-State Area’s leading entrepreneurs and IT thought leaders.

Technology can impact the way we work, play, communicate and live, and “big data” analysis – the processing of large amounts of data in order to gain actionable insights – has the ability to radically alter society by identifying patterns and traits that would otherwise go undiscovered. This data, however, can raise significant privacy concerns in the context of a merger or acquisition.

Dun & Bradstreet interviewed us regarding various Tips for Customer Data Management During a Merger or Acquisition. We thought the topic was so interesting that we decided to expand a bit more on the subject.

As background, it is important to consider that there are three types of M&A transactions affecting data: stock transactions, mergers, and sales of assets. In a stock transaction, there are no data issues: the owners of a company sell their stock to a new owner, but the entity itself remains intact. This means business as usual from the entity’s standpoint, and there are no data or confidentiality issues.

By contrast, in a merger (where the target is not the surviving entity) or in an asset transaction, the original entity itself goes away, which means all of the assets in that entity have to be transferred, and there is a change of legal title to those assets (including to any data) which can have legal implications. For example, if a party consents to the use of their data by OldCo, and OldCo sells all of its assets to NewCo, does that party’s consent to use data also transfer to NewCo?

In a merger, data needs to be appropriately assigned and transferred, which often has privacy implications. Companies generally have privacy policies explaining how they collect and use consumers’ personal information. These policies often contain language stating that the company will not give such information to any third party without the consumer’s consent. In such situations, the transfer of data must be done in accordance with the written commitments and representations made by that company (which may vary if different representations were made to different categories of individuals), and may require providing notice to or obtaining consent from consumers (which, depending on the scope of the notice or consent required, can be an arduous task).

Companies also generally maintain employee data and client data in addition to consumer data. This information needs to be handled in accordance with contractual obligations, as well as legal obligations. National and foreign laws may also regulate the transfer of certain information. For example, in transborder transactions, or for transactions involving multinational companies, it is extremely important to ensure that any transfer of data complies with the data privacy and transborder transfer obligations applicable in all of the relevant jurisdictions.

Obligations may arise even during the contemplation of a merger, or during the due diligence process, where laws may impact the ability of companies to disclose certain information and documentation. For example, in the United States, financial companies are required to comply with the Sarbanes-Oxley Act and the Gramm-Leach-Bliley Act, which govern the controls required to protect certain types of data, and companies in the health care and medical fields are often required to comply with the Health Insurance Portability and Accountability Act.

In the multinational/cross-border context, businesses may run into challenges posed by conflicting multi-jurisdictional data protection laws, which may prevent routine data flows (such as phone lists or other employee data) to countries deemed to have insufficient data protection laws, or require that centralized databases comply with the laws of multiple jurisdictions. Employee rights to access and amend data, requirements to obtain consent before collection, and limitations on the retention of data may pose additional challenges.

So what should companies do when contemplating or navigating a merger or acquisition? First, companies should determine what information they have. Next, they must understand that information: the circumstances under which it was collected, and the rights and obligations they have with respect to it. Finally, companies should determine their ability to transfer the information, the consents or approvals necessary to do so, and the potential impact of a transfer on the various stakeholders.

The bottom line? Any technology, and big data in particular, can be put to both good and bad uses. It is important that, as companies gather data about individuals, the information be used in accordance with existing laws and regulations governing data use, and in a way that respects the privacy of the individuals to whom the data pertains.

By: Aaron Krowne

On July 14, 2014, the New York Attorney General’s office (“NY AG”) released a seminal report on data breaches, entitled “Information Exposed: Historical Examination of Data Breaches in New York State” (the “Report”). The Report presents a wealth of eye-opening (and sobering) information on data breaches in New York and beyond. It is primarily based upon the NY AG’s own analysis of data breach reports received during the first eight years (2006 through 2013) of the State’s data breach reporting law (NY General Business Law §899-aa). The Report also cites extensively to outside research, providing a national and international picture of data breaches. Its primary finding is that data breaches, somewhat unsurprisingly, are a rapidly growing problem.

A Growing Menace

The headline statistic of the Report is its finding that data breaches in or affecting New York tripled between 2006 and 2013. During this time frame, 22.8 million personal records of New Yorkers were exposed in nearly 5,000 breaches, affecting more than 3,000 businesses. The “worst” year was 2013, with 7.4 million records exposed, mainly due to the Target and LivingSocial “mega-breaches.” The Report warned that such mega-breaches appear to be a growing trend, but businesses of all sizes are affected and at risk.

The Report revealed that hacking was responsible for 43% of breaches and for 64% of the total records exposed. Other major causes of breaches include “lost or stolen equipment or documentation” (25% of breaches), “employee error” (21%), and “insider wrongdoing” (11%). It is thus important to note that the majority of breaches still originate internally. However, hacking has been the dominant cause of breaches since 2009, not coincidentally the same year that “crimeware” source code was released and began to proliferate. Hacking was responsible for a whopping 96.4% of the New York records exposed in 2013 (again, largely due to the mega-breaches).

The Report notes that retail services and health care providers are “particularly” vulnerable to data breaches. The following breaks down the number of entities in a particular sector that suffered repeated data breaches: 54 “retail services” entities (a “favorite target of hackers”, per the Report), 31 “financial services” entities, 29 “health care” entities, 27 “banking” entities, and 20 “insurance” entities.

The Report also points out that these breach statistics are likely on the low side. One reason is that New York’s data breach law doesn’t cover all breaches. Reporting is triggered only when both required types of information are compromised: (1) a name, number, personal mark, or other identifier that can be used to identify a natural person, combined with (2) a social security number, government ID or license number, account number, or credit or debit card number along with its security code. If only one piece of information is compromised, the reporting requirement is not triggered. Yet the compromise of even one piece of data (e.g., a social security number) can have the same practical effect as a “breach” under the law, since actual damage to the consumer remains possible (particularly if the breached information can be combined with complementary information obtained elsewhere). Further, the full impact of a reported breach may be unknown, leading the breach to be “underestimated.”
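To make the two-element trigger concrete, here is a simplified sketch of the reporting test as the Report describes it. The category names are our own shorthand, and this is not a compliance tool:

```python
# Shorthand for the two categories of information in NY GBL 899-aa,
# as summarized above (not the statutory text itself)
IDENTIFIERS = {"name", "number", "personal_mark", "other_identifier"}
DATA_ELEMENTS = {"ssn", "government_id_or_license", "account_number",
                 "card_number_with_security_code"}

def reporting_triggered(compromised: set) -> bool:
    # Notice is required only when an identifier AND a sensitive data
    # element are both compromised.
    return bool(compromised & IDENTIFIERS) and bool(compromised & DATA_ELEMENTS)

print(reporting_triggered({"ssn"}))          # False: one element alone
print(reporting_triggered({"name", "ssn"}))  # True: both types present
```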

Real Costs: Answering To The Market

Though New York’s data breach law allows the AG to bring suits for actual damages and statutory penalties for failure to notify (notice to all affected consumers and to the NY AG’s office is required; for large breaches, consumer reporting agencies must also be notified), such awards are likely to be minor compared with the market impact and direct costs of a breach. The Report estimates that in 2013, breaches cost New York businesses $1.37 billion, based on a per-record cost estimate of $188 (breach cost estimates are from data breach research consultancy the Ponemon Institute). In 2014, this per-record estimate has already risen to $201, and the cost for hacked records is even higher than the average, at $277. The total average cost of a breach is currently $5.9 million, up from $5.4 million in 2013. These amounts represent only costs incurred by the businesses hit, including expenses such as investigation, communications, free consumer credit monitoring, and reformulation and implementation of data security measures. Costs borne by the consumers themselves are not included, so this is, once again, an underestimate.

These amounts also do not include market costs, for which the Target (2013) and Sony PlayStation (2011) mega-breaches are particularly sobering examples. Target experienced a 46% drop in quarterly profit in the wake of the massive breach of its customers’ data, and Sony estimates it lost over $1 billion. Both also suffered significant contemporaneous declines in their stock prices.

Returning to direct costs, the fallout continues: on August 5, 2014, Target announced that the costs of the 2013 breach would exceed its previous estimates, coming in at nearly $150 million.

Practices

The Report’s banner recommendation, in the face of all the above, is to have an information security plan in place, especially given that 57% of breaches are primarily caused by “inside” issues (i.e., lost/stolen records, employee error, or wrongdoing) that directly implicate information security practices. An information security plan should specifically include:

  • a privacy policy;
  • restricted and controlled access to records;
  • monitoring systems for unauthorized access;
  • use of encryption, secure access to all devices, and non-internet connected storage;
  • uniform employee training programs;
  • reasonable data disposal practices (e.g., using disk wiping programs).

The Report is not especially optimistic about preventing hacking, but we would note that hacking, or at least its efficacy, can also be reduced by implementing an information security plan. For example, the implementation of encryption, and the training of employees to use it uniformly and properly, can be quite powerful.
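As an illustration of how low the technical barrier to encryption has become, here is a minimal sketch of symmetric file encryption using the widely used Python cryptography library. The file names are hypothetical, and a real deployment would need careful key management, which is out of scope here:

```python
from cryptography.fernet import Fernet

# Generate a key once and store it safely; protecting the key is the hard part.
key = Fernet.generate_key()
f = Fernet(key)

# File names are hypothetical.
with open("customer_records.csv", "rb") as fh:
    ciphertext = f.encrypt(fh.read())

with open("customer_records.csv.enc", "wb") as fh:
    fh.write(ciphertext)

# A stolen laptop now yields only ciphertext; recovery requires the key.
plaintext = f.decrypt(ciphertext)
```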

Whether the breach threat comes to you in the form of employee conduct or an outside hack attempt, don’t be caught wrong-footed without an adequate information security plan. A certified privacy attorney at OlenderFeldman can assist with your business’s information security plan, whether you need to create one for the first time or simply need help ensuring that your current plan provides the maximum protection to your business.

By: Aaron Krowne

In a major recent case testing California’s medical information privacy law, the Confidentiality of Medical Information Act, or CMIA (California Civil Code § 56 et seq.), the Third District Court of Appeal held on July 21, 2014 in Sutter Health v. Superior Court that confidential information covered by the law must be “actually viewed” for the statutory penalty provisions of the law to apply. The implication of this decision is that it just got harder for consumers to sue for a “pure” loss of privacy due to a data breach in California, and possibly beyond.

Not So Strict

Previously, CMIA was assumed to be a strict liability statute: even in the absence of actual damages, a covered party that “negligently released” confidential health information was subject to a $1,000 nominal penalty. That is, if a covered health care provider or health service company negligently handled customer information, and that information was subsequently taken by a third party (e.g., through the theft of a computer or data device containing such information), that in itself triggered the $1,000 per-instance (and thus, per-customer-record) penalty. There was no suggestion that the thief (or other recipient) of the confidential health information needed to see, or do anything with, such information. Indeed, plaintiffs had previously brought cases under such a “strict liability” theory and succeeded in the application of CMIA’s $1,000 penalty.

Sutter Health turns that theory on its head, with dramatically different results for consumers and California health-related companies.

Sutter was looking at a potential $4 billion penalty: $1,000 for each of the 4 million unencrypted patient records on a computer stolen from its offices in October 2011. Sutter’s computer was password-protected, but without encryption of the underlying data that measure is easily defeated. Security at the office was light, with no alarm or surveillance cameras. Believing this to be “negligent,” some affected Sutter customers sued under CMIA in a class action. Given the potential amount of the total penalty, the stakes were high.

The Court not only ruled against the Sutter customers, but dismissed the case on demurrer, meaning that the Court determined the case was deficient on the pleadings because the Plaintiffs “failed to state a cause of action.” The main reason, according to the Court, was that Plaintiffs failed to allege that an unauthorized person actually viewed the confidential information; therefore there was no breach of confidentiality, as required under CMIA. The Court elaborated that under CMIA “[t]he duty is to preserve confidentiality, and a breach of confidentiality is the injury protected against. Without an actual confidentiality breach there is no injury and therefore no negligence…”.

The Court also introduced the concept of possession, which is absent in CMIA itself, to delimit its new theory interpreting CMIA, saying: “[t]hat [because] records have changed possession even in an unauthorized manner does not [automatically] mean they have been exposed to the view of an unauthorized person.” So, plaintiffs bringing claims under CMIA will now have to allege, and ultimately prove, that their confidential information (1) changed possession in an unauthorized manner, and that (2) it was actually viewed (or, presumably, used) by an unauthorized party.

The Last Word?

This may not be the last word on CMIA, and certainly not the general issue of the burden of proof of harm in consumer data breaches. The problem is that it is extremely difficult to prove that anything nefarious has actually happened with sensitive consumer data post-breach, short of catching the perpetrator and getting a confession, or actually observing the act of utilization, or sale of the data to a third party. Even positive results detected through credit monitoring, such as attempts to use credit cards by unauthorized third parties, do not conclusively prove that a particular breach was the cause of such unauthorized access.

The Sutter court avers, in support of its ruling, that we do not actually know whether the thief in this case simply stole the computer, wiped the hard drive clean, and sold it as a used computer, in which case there would be no violation of CMIA. Yet, logically, the opposite may just as well have happened: retrieval of the customer data may very well have been the actual goal of the theft. In an environment where sensitive consumer records can fetch as much as $45 apiece (totaling $180 million for the Sutter customer data), it seems unwise to rely on the assumption that thieves will simply not bother to check for valuable information on stolen corporate computers and digital devices.

Indeed, the Sutter decision perhaps raises as many questions as it answers about where to draw the line for a “breach of confidential information.” To wit: presumably, a hacker downloading unencrypted information would still qualify as a breach under the CMIA, so interpreted. But then, by what substantive rationale does the physical removal of a hard drive not qualify? Additionally, how is it determined whether a party actually looked at the data, and precisely who looked at it?

Further, the final chapter on the Sutter breach may not yet be written – the data may still be (or turn out to have been) put to nefarious use, in which case the court’s ruling will seem premature. Thus, there is likely to be some pushback against Sutter, to the extent that consumers do not accept the lack of punitive options in “open-ended” breaches of this nature, and lawmakers actually intend consumer data-handling negligence laws to have some “bite.”

Conclusion

Naively, it would seem under the Sutter Court’s interpretation that companies dealing with consumer health information have a “blank check” to treat that information negligently, so long as the actual viewing (and presumably, use) of that information by unauthorized persons is a remote possibility. We would caution against this assumption. First, as above, there may be pushback (judicial, legislative, or in terms of public response) against Sutter’s strict requirement of proof of viewing of breached records. But more importantly, there is simply no guarantee that exposed information will not be released and put to harmful use, or that sufficient proof of such will not surface for use in consumer lawsuits.

One basic lesson of Sutter is that, while the company dodged a bullet thanks to a court’s re-interpretation of a law, it (and its customers) would have been vastly safer had it simply utilized encryption. More broadly, Sutter should have had, and implemented, a better data security policy. Companies dealing with customers’ health information (in California and elsewhere) should take every possible precaution to secure this information.

Do not put your company and your customers at risk of a data breach. Contact a certified privacy attorney at OlenderFeldman to make sure your company’s data security policy provides coverage for all applicable health information laws.

By: Aaron Krowne

You may have heard quite a bit of buzz about “Google Glass” in the past few years – and if you aren’t already intimately familiar with it, you probably at least know it is a new technology from Google that involves a “computer and camera” grafted onto otherwise standard eyeglasses. Indeed, this basic picture is correct, and itself is enough to suggest some interesting (and to some, upsetting) legal and societal questions.

What is Google Glass?

Google Glass, first released in February 2013, is one of the first “augmented reality” technologies aimed at the consumer market. Augmented reality (“AR”) seeks to take a “normal experience” of the world and add computerized feedback and controls, enabling users to “get more” out of whatever activity they are engaged in. Before the release of Google Glass, this type of technology was mostly limited to the military and other niche applications. AR contrasts with typical, modal technology, which requires users to (at least intermittently) “take a break” from the outside world and focus exclusively on the technology (e.g., periodically checking your cell phone for mapping directions). It also stands in contrast to the more well-known virtual reality, which seeks to simulate or entirely replace the outside world with a generated, “real-seeming” one (e.g., a flight simulator). Nor is AR entirely alien to consumers; a simple form already on the market is voice interaction on smartphones (e.g., Siri on the iPhone) – but Google Glass takes AR to another level.

Google Glass is indeed “glasses with a computer and camera” on them, but also, importantly, has a tiny “heads up display” (screen), viewable by the wearer on the periphery of one lens. The camera allows the wearer to capture the world in snapshots or video, but more transformatively, allows the computer to “see” the world through the wearer’s eyes and provide automated feedback about it. The main page of Google Glass’s web site gives a number of examples of how the device can be used, including: hands-free heads-up navigation (particularly useful in non-standard settings, such as cycling or other sports), overlaid instruction while training for sports (e.g., improving one’s golf swing by “seeing” the correct swing), real-time heads-up translations (e.g., for street signs in a foreign country), useful real-time lookup functions such as currency conversions, overlaid site and landmark information (e.g., names, history, and even ratings), and, simply, a hands-free version of more “conventional” functions such as phone calls, digital music playing, and instant messaging. This list only scratches the surface of what Google Glass can do – and surely there are countless other applications that have not yet been imagined.

Until April 2014, Google Glass was only available to software developers, but it is now available to the consumer market, where it sells for about $1,500. While it is tempting to write off such new, pricey technology as of interest mostly to “geeks,” the capabilities of Google Glass are so compelling that it is reasonable to expect that it (and possibly “clones” developed by other manufacturers) will enter considerably more widespread (if not mainstream) use within a few short years. After all, this is what happened with cell phones, and then smartphones, with historically blinding speed. Here, we will endeavor not to be “blinded” by the legal implications of this new technology, and provide a brief summary of the most commonly referenced societal and legal concerns implicated by Google Glass.

Safety Issues

An almost immediate “gut” concern with a technology that inserts itself into one’s field of view is that it might be distracting, and hence, unsafe in certain situations; most worryingly, while driving. It is often said that the law lags behind technology; but as many as eight states are already considering legislation that would restrict Google Glass (or future Glass-like devices) while driving. Indeed, the law has shown that it can respond quickly to new technology when it is coupled with acute public concern or outrage; consider the few short years between the nascent cell phone driving-distraction concerns of the mid-2000s and the now near-universal bans and restrictions on that sort of use.

More immediately, while some states do not have laws that explicitly mention technologies like Google Glass, their existing laws are written broadly enough (or have been so interpreted) to cover the banning of Google Glass use while driving. For example, California’s “distracted driving” law (Cal. Vehicle Code § 27602) covers “visually displaying […] a video signal […] visible to the driver,” which most certainly includes Google Glass. This interpretation of the California law was confirmed in the recent Abadie case, where the law was applied to a San Francisco driver who had been cited for driving while wearing Google Glass. Luckily for the driver, however, the ticket was dismissed on the grounds that actual use of Google Glass at the time could not be proven.

Are such safety concerns about distraction well-founded? Google and other defenders counter that Google Glass is specifically designed not to be distracting: the projected image sits in the wearer’s periphery, which eliminates the need to “look down” at some other, more conventional device (such as a GPS unit or a smartphone).

The truth seems to lie somewhere in the middle, implying a nuanced approach. As the Abadie case illustrates, Google Glass might not always be in use, and therefore not distracting. And per the above, Google Glass might even reduce distraction in certain contexts and for certain functions. Yet nearly all states have “distraction” as a category on police traffic accident report forms. Therefore, whether or not laws are ultimately written specifically to cover Google Glass-type technology, its usage while driving has already given rise to new, manifest legal risks.

Privacy Issues

The ability to easily and ubiquitously record one’s surroundings naturally triggers privacy concerns. The most troubling is that Google Glass provides private citizens with the technology to surreptitiously record others. Such concerns could even find a legal basis in wiretapping laws currently on the books, such as the Federal Wiretap Act (18 U.S.C. § 2511, et seq.), which prohibits the recording of “oral” communications without the consent of at least one party (and many state analogues require the consent of all parties). These laws apply to Google Glass and any other wearable recording device, much as they do to non-wearable recording devices.

There are other privacy concerns relating to the sheer ubiquity of recording: e.g., worry about a “general loss of privacy” due to the transformation of commonplace situations and otherwise ephemeral actions and utterances into preserved, replayable, reviewable, and broadcastable/sharable media. An always-worn, potentially always-on device like Google Glass certainly seems to validate this concern, and is itself sufficient to give rise to inflamed sentiments or outright conflict. Tech blogger Sarah Slocum learned this lesson the hard way when she was assaulted at a San Francisco bar in February 2014 for wearing her Google Glass inside the bar.

Further, Google Glass provides facial recognition capability, which, combined with widespread “tagging” in photos uploaded to social media sites, does seem to add heft (if not urgency) to that vague sense of disquiet. Specifically, Google Glass combined with tagging would appear to make it exceedingly easy (if not effortless) for identities to be extracted from day-to-day scenes and linked to unwitting third parties’ online profiles – perhaps in places or scenarios they would not want family, employers or “the general public” to see.

Google Glass’s defenders would reply that the device is hardly unobtrusive, so one cannot actually record “secretly” with it. Adding to its obviousness, Google Glass displays a prominent blinking red light when it is actually recording. Additionally, Google Glass records for only approximately ten seconds by default, and can support only about 45 minutes of total recording on its battery, making it significantly inferior to dedicated recording or surveillance devices, which have long been (cheaply) available to consumers.

But in the end, it is clear that Google Glass means more recording and photographing will be taking place in public. Further, the fact that “AI” capabilities like facial recognition are not only possible, but integral to Google Glass’s best intended use, suggests that the device will be “pushing the envelope” in ways that challenge people’s general expectation of privacy. This envelope-pushing is likely to generate lawsuits – as well as laws.

Piracy Concerns

Another notable area of concern is “piracy” (the distribution of copyrighted works in violation of copyright). Because Google Glass can be worn all the time, record, and “see what the wearer sees,” it is inherently capable of easily capturing wearer-viewed media, such as movies, art exhibits, or musical performances, without the consent of the performer or copyright owner. Consumer recording devices are often restricted or banned in such situations for exactly this reason.

Of course, recording still happens – especially with today’s ubiquitous smartphones – but the worry is that if Google Glass is “generally” permitted, customer/viewer recording will be harder to control. This concern was embodied in a recent case where an Ohio man was kicked out of a movie theater and questioned by Homeland Security personnel (specifically, Immigration and Customs Enforcement, which investigates piracy) for wearing Google Glass to a movie. The man was released without charges, as there was no indication the device was on. But despite the fact that this example had a happy ending for the wearer, such an interaction certainly amounts to more than a minor inconvenience for both the wearer and the business being patronized.

Law & Conventions

Some of the above concerns with Google Glass are likely to fade as social conventions develop and adapt around such devices. The idea of society needing to catch up with technology is not a new concept – as Google’s Glass “FAQ” specifically points out, when cameras first became available to consumers, they were initially banned from beaches and parks. This seems ridiculous (if not a little quaint) to us today, but it is important to note that even the legal implications of cameras have not “gone away.” Rather, a mixture of tolerance, usage conventions, non-governmental regulatory practices and laws evolved to deal with cameras (for example, intentionally taking a picture that violates the subject’s reasonable expectation of privacy is still legally actionable, even if the photographer is in a public area). The same evolution is likely to happen with Google Glass. If you have questions about how Google Glass is being used by, or affecting, you or your employees, or have plans to use Google Glass (either personally or in the course of a business), do not hesitate to consult with one of OlenderFeldman’s experienced technology attorneys to discuss the potential legal risks and best practices.

By: Aaron Krowne

In 2013, the California Legislature passed AB 370, an amendment to California’s path-blazing 2003 online consumer privacy law, the California Online Privacy Protection Act (“CalOPPA”). AB 370 took effect January 1, 2014, and adds new requirements to CalOPPA pertaining to consumers’ use of Do-Not-Track (“DNT”) signals in their web browsers (all major web browsers now include this capability). CalOPPA applies to any website, online service, or mobile application that collects personally identifiable information from consumers residing in California (a “Covered Entity”).

While AB 370 does not mandate a particular response to a DNT signal, it does require two new disclosures that must be included in a Covered Entity’s privacy policy: (1) how the site operator responds to a DNT signal (or to other “similar mechanisms”); and (2) whether there are third parties performing online tracking on the Covered Entity’s site or service. As an alternative to the descriptive disclosure listed in (1), the Covered Entity may elect to provide a “clear and conspicuous link” in its privacy policy to a “choice program” which provides consumers a choice about tracking. The Covered Entity must clearly describe the effect of a particular choice (e.g., a web interface which allows users to disable the site’s tracking based on their browser’s DNT).
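Technically, a DNT signal is simply an HTTP request header (DNT: 1) sent by the browser. Below is a minimal sketch of how a site operator might detect it server-side, shown here with the Flask framework; the response text is hypothetical:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    # Browsers with Do-Not-Track enabled send the request header "DNT: 1".
    if request.headers.get("DNT") == "1":
        # What the site actually does here is exactly what AB 370
        # requires the privacy policy to disclose.
        return "DNT signal received; see the Online Tracking section of our privacy policy."
    return "No DNT signal detected."

if __name__ == "__main__":
    app.run()
```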

While this all might seem simple enough, as with many new laws, AB 370 has raised many questions about specifics, particularly how to achieve compliance. As a result, on May 21, 2014, the California Attorney General’s Office (the “AG’s Office”) issued a set of new guidelines entitled “Making Your Privacy Practices Public” (the “New Guidelines”).

The New Guidelines

The New Guidelines regarding DNT specifically suggest that a Covered Entity:

  1. Make it easy for a consumer to find the section of the privacy policy in which the online tracking policy is described (e.g., by labeling it “How We Respond to Do Not Track Signals,” “Online Tracking” or “California Do Not Track Disclosures”).
  2. Provide a description of how it responds to a browser’s DNT signal (or to other similar mechanisms), rather than merely linking to a “choice program.”
  3. State whether third parties are or may be collecting personally identifiable information of consumers while they are on a Covered Entity’s website or using a Covered Entity’s service.

In general, when drafting a privacy policy that complies with CalOPPA the New Guidelines recommend that a Covered Entity:

  • Use plain, straightforward language, avoiding technical or legal jargon.
  • Use a format that makes the policy readable, such as a “layered” format (which first shows users a high-level summary of the full policy).
  • Explain its uses of personally identifiable information beyond what is necessary for fulfilling a customer transaction or for the basic functionality of the online service.
  • Whenever possible, provide a link to the privacy policies of third parties with whom it shares personally identifiable information.
  • Describe the choices a consumer has regarding the collection, use and sharing of his or her personal information.
  • Provide “just in time,” contextual privacy notifications when relevant (e.g., when registering, or when the information is about to be collected).

The above is merely an overview and summary of the New Guidelines and therefore does not represent legal advice for any specific scenario or set of facts. Please feel free to contact one of OlenderFeldman’s Internet privacy attorneys for information and advice regarding particular circumstances.

The Consequences of Non-Compliance with CalOPPA

While the New Guidelines are just that, mere recommendations, CalOPPA has teeth, and the AG’s Office is moving actively on enforcement. For example, it has already sued Delta Air Lines for failure to comply with CalOPPA. A Covered Entity’s privacy policy, despite being discretionary within the general bounds of CalOPPA and written by the Covered Entity itself, has the force of law – including penalties, as discussed below. Thus, a Covered Entity should think carefully about the contents of its privacy policy; over-promising could result in completely unnecessary legal liability, but under-disclosing could also result in avoidable litigation. Furthermore, liability under CalOPPA could arise purely from miscommunication or inadequate communication between a Covered Entity’s engineers and its management or legal departments, or from failure to keep sufficiently apprised of what information third parties (e.g., advertising networks) are collecting.

CalOPPA provides a Covered Entity with a 30-day grace period to post or correct its privacy policy after being notified of a deficiency by the AG’s Office. If the Covered Entity has not remedied the defect by the expiration of the grace period, it can be found in violation for failing to comply with (1) CalOPPA’s legal requirements for the policy, or (2) the provisions of the Covered Entity’s own posted policy. The failure may be either knowing and willful, or negligent and material. Penalties can amount to $2,500 per violation. As mentioned above, non-California entities may also be subject to CalOPPA, and CalOPPA-based judicial orders are likely to be enforced in any jurisdiction within the United States.

While the broad brushstrokes of CalOPPA and the new DNT requirements are simple, there are many potential pitfalls, and complete real-world compliance is likely to be tricky to achieve. Pre-emptive privacy planning can help avoid the legal pitfalls, so if you have any questions or concerns we recommend you contact one of OlenderFeldman’s certified and experienced privacy attorneys.

Social networking sites, such as Facebook and MySpace, have become repositories of large amounts of personal data. Increasingly, this data is viewed as relevant to all manner of litigation proceedings, and as such is increasingly sought during discovery in civil litigation. Businesses and individuals that use social networking services should be aware of what data they put on social networking sites, as it could end up in court.

By Adam Elewa

In litigation, businesses or individuals must routinely comply with a process known as discovery, where both parties are compelled by the court to produce relevant documents concerning the issues in dispute to the opposing party. There are only a few areas that are off-limits to opposing counsel in discovery, such as privileged conversations between a lawyer and his client. With the proliferation of social networking, and the large amount of personal information being shared and stored in the cloud, lawyers now routinely attempt to compel disclosure of social networking profiles during discovery.

In general, courts have declined to find a general right of privacy in the information stored on social networking websites. Constitutional protections of privacy do not apply to private parties, only agents of the government. The current trend, reinforced by a recent federal court case in Montana, is to let the rules of civil procedure concerning discovery dictate how much and what kind of data posted to social networking sites must be turned over to the adversarial party. See, e.g., Keller v. National Farmers Union Property & Cas. Co., 2013 WL 27731 (January 2, 2013). Although judges have discretion in applying the rules of discovery, a consensus seems to be forming.

Courts have been clear that adversarial parties cannot compel the disclosure of social networking profiles without some reasonable belief that such information is relevant to the case at issue. In other words, lawyers cannot go on “fishing expeditions” by demanding the maximum amount of data be disclosed, in the hopes that something interesting will turn up.

However, courts have shown a willingness to disregard privacy settings and/or subjective expectations of privacy held by users of social networking websites when deciding whether to compel disclosure. In such instances, courts often rely on publicly shared information to determine whether private information is likely to be relevant. A public photo that is relevant to the litigated issue can be taken as an indication that more relevant information is likely to be lurking on the hidden portions of the user’s profile. Of course, making data unviewable by the public may make it more difficult for an adversarial party to demonstrate that a profile contains relevant information, and thus should be subject to discovery. Regardless, it is important to keep in mind the limits of privacy on Facebook and other social media sites.

Cases where lawyers have successfully demonstrated that information contained on social networking sites was likely to be relevant tend to share similar characteristics. Many such cases concern private matters that would likely be shared, as a matter of social practice, on social networking sites. For example, the plaintiff in Keller alleged that the defendant’s actions had caused major disruptions to her social life. Lawyers for the defense successfully argued that the woman’s social networking profile likely contained information that could demonstrate whether her life was in fact severely disrupted by the defendant’s alleged negligence.

Additionally, lawyers were able to support the contention that private aspects of an individual’s profile likely contained relevant information by reference to non-hidden or publicly viewable aspects of that individual’s profile. For example, in Keller, the contention that the plaintiff’s private profile contained information relevant to her quality of life was bolstered by publicly viewable images showing recent physical activity of a kind claimed by the plaintiff to be impossible.

Businesses seeking to communicate via social networking platforms or to reach clients there should be aware that such communications and business activities are likely discoverable in litigation. Individuals and businesses should be mindful that:

  • Although social networking sites have “privacy” settings, these settings can be deemed legally irrelevant if the information contained on such platforms can be shown to be relevant to pending litigation.
  • Information that is publicly viewable can be used for any purpose by an opposing party. Public indications that a profile is used for business-related communications might subject that profile to discovery where such communications are at issue. Thus, businesses and individuals should always be mindful of the evolving privacy policies of the sites on which they transact business.

Finally, litigants should bear in mind that while social media evidence may be relevant to litigation, it is important not to make discovery requests overbroad. For the best likelihood of success, social media discovery requests should be narrowly tailored to produce evidence directly pertinent to the issues, rather than engaging in a fishing expedition.

New Jersey Law Requires Photocopiers and Scanners To Be Erased Because Of Privacy Concerns

NJ Assembly Bill A-1238 requires the destruction of records stored on digital copy machines under certain circumstances in order to prevent identity theft

By Alice Cheng

Last week, the New Jersey Assembly passed Bill A-1238 in an attempt to prevent identity theft. The bill requires that information stored on photocopy machines and scanners be destroyed before the devices change hands (e.g., when resold or returned at the end of a lease agreement).

Under the bill, owners of such devices are responsible for the destruction, or arranging for the destruction, of all records stored on the machines. Most consumers are not aware that digital photocopy machines and scanners store and retain copies of documents that have been printed, scanned, faxed, and emailed: when a document is photocopied, the copier’s hard drive often keeps an image of that document. Thus, anyone who later gains possession of the photocopier (e.g., when it is sold or returned) can obtain copies of all documents that were copied or scanned on the machine. This compilation of documents and potentially sensitive information poses a serious threat of identity theft.
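For conventional magnetic hard drives, destruction of stored records is commonly accomplished by overwriting them. The sketch below shows a single-pass overwrite of one stored image file (the path is hypothetical); note that SSD wear-leveling can defeat file-level overwrites, so actual copier sanitization should follow the manufacturer’s procedure or a standard such as NIST SP 800-88:

```python
import os

def overwrite_and_delete(path: str, passes: int = 1) -> None:
    """Overwrite a file's contents with random bytes, then remove it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as fh:
        for _ in range(passes):
            fh.seek(0)
            fh.write(os.urandom(size))
            fh.flush()
            os.fsync(fh.fileno())  # force the overwrite onto the disk
    os.remove(path)

# Hypothetical path to one stored page image on a copier's drive
overwrite_and_delete("/copier/storage/scan_0001.img")
```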

Any willful or knowing violation of the bill’s provisions may result in a fine of up to $2,500 for the first offense and $5,000 for subsequent offenses. Identity theft victims may also bring legal action against offenders.

To avoid these consequences, businesses should be mindful of the type of information their devices store, and should ensure that all data is erased before reselling or returning them. Business owners should be especially careful, as digital copy machines may also contain trade secrets and other sensitive business information.

Check Cloud Contracts for Provisions Related to Privacy, Data Security and Regulatory Concerns

“Cloud” Technology Offers Flexibility, Reduced Costs, Ease of Access to Information, But Presents Security, Privacy and Regulatory Concerns

With the recent introduction of Google Drive, cloud computing services are garnering increased attention from entities looking to more efficiently store data. Specifically, using the “cloud” is attractive due to its reduced cost, ease of use, mobility and flexibility, each of which can offer tremendous competitive benefits to businesses. Cloud computing refers to the practice of storing data on remote servers, as opposed to on local computers, and is used for everything from personal webmail to hosted solutions where all of a company’s files and other resources are stored remotely. As convenient as cloud computing is, it is important to remember that these benefits may come with significant legal risk, given the privacy and data protection issues inherent in the use of cloud computing. Accordingly, it is important to check your cloud computing contracts carefully to ensure that your legal exposure is minimized in the event of a data breach or other security incident.

Cloud computing allows companies convenient, remote access to their networks, servers and other technology resources, regardless of location, thereby creating “virtual offices” in which employees have remote access to files and data identical in scope to the access they have in the office. The cloud offers companies flexibility and scalability, enabling them to pool and allocate information technology resources as needed, using the minimum amount of physical IT resources necessary to service demand. These hosted solutions enable users to easily add or remove storage or processing capacity to accommodate fluctuating business needs. By utilizing only the resources necessary at any given point, cloud computing can provide significant cost savings, which makes the model especially attractive to small and medium-sized businesses. However, the rush to adopt cloud computing for its various efficiencies often comes at the expense of data privacy and security.

The laws that govern cloud computing are (perhaps somewhat counterintuitively) based on the physical location of the cloud provider’s servers, rather than the location of the company whose information is being stored. American state and federal laws concerning data privacy and security tend to vary, while servers in Europe are subject to more comprehensive (and often more stringent) privacy laws. This may change, however, as the Federal Trade Commission (FTC) has been investigating the privacy and security implications of cloud computing as well.

In addition to location-based considerations, companies expose themselves to potentially significant liability depending on the types of information stored in the cloud. Federal, state and international laws all govern the storage, use and protection of certain types of personally identifiable information and protected health information. For example, the Massachusetts Data Security Regulations require all entities that own or license personal information of Massachusetts residents to ensure appropriate physical, administrative and technical safeguards for that information (regardless of where the companies are physically located), with fines of up to $5,000 per incident of non-compliance. This means that companies are directly responsible for the actions of their cloud computing service providers. OlenderFeldman LLP notes that some information is inappropriate for storage in the cloud without proper precautions: “We strongly recommend against storing any type of personally identifiable information, such as birth dates or social security numbers, in the cloud. Similarly, sensitive information such as financial records, medical records and confidential legal files should, where possible, not be stored in the cloud unless it is encrypted or otherwise protected.” In fact, even a data breach involving non-sensitive information can have serious adverse effects on a company’s bottom line and, perhaps more distressing, its public perception.
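One lightweight precaution consistent with that advice is to scan files for obviously sensitive identifiers before they ever leave your network. Here is a minimal sketch using a simple U.S. Social Security number pattern; a real data-loss-prevention tool would go much further:

```python
import re

# Matches text shaped like a US Social Security number, e.g. 123-45-6789
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def contains_ssn(text: str) -> bool:
    return bool(SSN_PATTERN.search(text))

document = "Employee: J. Smith, SSN 123-45-6789, DOB 01/02/1980"
if contains_ssn(document):
    print("PII detected: encrypt or redact before uploading to the cloud.")
```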

Additionally, the information your company stores in the cloud will also be affected by the rules set forth in the privacy policies and terms of service of your cloud provider. Although these terms may seem like legal boilerplate, they may very well form a binding contract which you are presumed to have read and consented to. Accordingly, it is extremely important to have a grasp of what is permitted and required by your cloud provider’s privacy policies and terms of service. For example, the privacy policies and terms of service will dictate whether your cloud service provider is a data processing agent, which will only process data on your behalf or a data controller, which has the right to use the data for its own purposes as well. Notwithstanding the terms of your agreement, if the service is being provided for free, you can safely presume that the cloud provider is a data controller who will analyze and process the data for its own benefit, such as to serve you ads.

Regardless, when sharing data with cloud service providers (or any other third-party service providers), it is important to obligate those third parties to process data in accordance with applicable law, as well as your company’s specific instructions, especially when the information is personally identifiable or sensitive in nature. This is particularly important because, in addition to the loss of goodwill, most data privacy and security laws hold companies, rather than their service providers, responsible for compliance. That means your company must ensure the data’s security even when it is in a third party’s (the cloud provider’s) control, and it is important to agree with the cloud provider on the appropriate level of security for the data being hosted. Christian Jensen, a litigation attorney at OlenderFeldman LLP, recommends contractually binding third parties to comply with applicable data protection laws, especially where the law places the ultimate liability on you. “Determine what security measures your vendor employs to protect data,” suggests Jensen. “Ensure that access to data is properly restricted to the appropriate users.” Jensen notes that since data protection laws generally do not specify levels of commercial liability, it is important to ensure that your contracts with service providers allocate risk via indemnification clauses, limitations of liability and warranties. Businesses should also reserve the right to audit the cloud service provider’s data security and information privacy compliance measures, in order to verify that the provider is adhering to its stated privacy policies and terms of service. Such audits can be carried out by an independent third-party auditor where necessary.

OlenderFeldman LLP was interviewed by Jennifer Banzaca of the Hedge Fund Law Report for a three-part series entitled “What Concerns Do Mobile Devices Present for Hedge Fund Managers, and How Should Those Concerns Be Addressed?” (Subscription required; free two-week subscription available.) Some excerpts of the topics Jennifer and Aaron discussed follow. You can read the third entry here.

Preventing Access by Unauthorized Persons

This section highlights steps that hedge fund managers can take to prevent unauthorized users from accessing a mobile device or any transmission of information from a device.  Concerns over unauthorized access are particularly acute in connection with lost or stolen devices.

[Lawyers] recommended that firms require the use of passwords or personal identification numbers (PINs) to access any mobile device that will be used for business purposes.  Aaron Messing, a Corporate & Information Privacy Associate at OlenderFeldman LLP, further elaborated, “We generally emphasize setting minimum requirements for phone security.  You want to have a mobile device lock with certain minimum requirements.  You want to make sure you have a strong password and that there is boot protection, which is activated any time the mobile device is powered on or reactivated after a period of inactivity.  Your password protection needs to be secure.  You simply cannot have a password that is predictable or easy to guess.”

Second, firms should consider solutions that facilitate the wiping (i.e., erasing) of firm data on the mobile device to prevent access by unauthorized users . . . . [T]here are numerous available wiping solutions.  For instance, the firm can install a solution that will facilitate remote wiping of the mobile device if the mobile device is lost or stolen.  Also, to counter those that try to access the mobile device by trying to crack its password, a firm can install software that automatically wipes firm data from the mobile device after a specific number of failed log-in attempts.  Messing explained, “It is also important for firms to have autowipe ability – especially if you do not have a remote wipe capability – after a certain number of incorrect password entries.  Often when a phone is lost or stolen, it is at least an hour or two before the person realizes the mobile device is missing.”
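Conceptually, the autowipe mechanism Messing describes is just a failed-attempt counter tied to a wipe command. A minimal sketch of the logic follows; the wipe_firm_data function and the five-attempt threshold are hypothetical stand-ins for whatever the firm’s mobile device management software actually exposes:

```python
MAX_ATTEMPTS = 5  # hypothetical policy threshold

def wipe_firm_data():
    # Hypothetical stand-in for the MDM command that erases firm data only
    print("Firm data wiped after repeated failed unlock attempts.")

class DeviceLock:
    def __init__(self):
        self.failed_attempts = 0

    def try_unlock(self, entered: str, correct: str) -> bool:
        if entered == correct:
            self.failed_attempts = 0
            return True
        self.failed_attempts += 1
        if self.failed_attempts >= MAX_ATTEMPTS:
            wipe_firm_data()
        return False

lock = DeviceLock()
for guess in ["0000", "1111", "2222", "3333", "4444"]:
    lock.try_unlock(guess, correct="7391")  # fifth failure triggers the wipe
```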

Wipe capability can also be helpful when an employee leaves the firm or changes mobile devices. . . Messing further elaborated, “When an employee leaves, you should have a policy for retrieving proprietary or sensitive information from the employee-owned mobile device and severing access to the network.  Also, with device turnover – if employees upgrade phones – you want employees to agree and acknowledge that you as the employer can go through the old phone and wipe the sensitive aspects so that the next user does not have the ability to pick up where the employee left off.”

If a firm chooses to adopt a wipe solution, it should adopt policies and procedures that ensure that employees understand what the technology does and obtain consent to the use of such wipe solutions.  Messing explained, “What we recommend in many cases is that as a condition of enrolling a device on the company network, employees must formally consent to an ‘Acceptable Use’ policy, which defines all the situations when the information technology department can remotely wipe the mobile device.  It is important to explain how that wipe will impact personal device use and data and employees’ data backup and storage responsibilities.”

Third, a firm should consider adopting solutions that prevent unauthorized users from gaining remote access to a mobile device and its transmissions.  Mobile security vendors offer products to protect a firm’s over-the-air transmissions between the server and a mobile device and the data stored on the mobile device.  These technologies allow hedge fund managers to encrypt information accessed by the mobile device – as well as information being transmitted by the mobile device – to ensure that it is secure and protected.  For instance, mobile devices can retain and protect data with WiFi and mobile VPNs, which provide mobile users with secure remote access to network resources and information.

Fourth, Rege suggested hedge fund managers have a procedure for requiring certificates to establish the identity of the device or a user.  “In a world where the devices are changing constantly, having that mechanism to make sure you always know what device is trying to access your system becomes very important.”
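
In practice, device certificates of this kind are often enforced with mutual TLS, in which the server refuses connections from devices that cannot present a certificate signed by the firm’s own certificate authority. A brief sketch using Python’s standard ssl module, with placeholder file paths standing in for a real certificate infrastructure:

```python
# Sketch: requiring a client certificate to identify the device (mutual TLS).
# File paths are placeholders; a real deployment issues per-device certs.
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.pem", keyfile="server.key")
context.load_verify_locations(cafile="device_ca.pem")  # CA signing device certs
context.verify_mode = ssl.CERT_REQUIRED  # reject devices without a valid cert
```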

Preventing Unauthorized Use by Firm Personnel

Hedge fund managers should be concerned not only by potential threats from external sources, but also potential threats from unauthorized access and use by firm personnel.

For instance, hedge fund managers should protect against the theft of firm information by firm personnel.  Messing explained, “You want to consider some software to either block or control data being transferred onto mobile devices.  Since some of these devices have a large storage capacity, it is very easy to steal data.  You have to worry not only about external threats but internal threats as well.  Especially when it comes to mobile devices, you want to have system controls in place to record and maybe even limit the data being taken from or copied onto mobile devices.”

Monitoring Solutions

To prevent unauthorized access and use of the mobile device, firms can consider remote monitoring.  However, monitoring solutions raise employee privacy concerns, and the firm should determine how to address these competing concerns.

Because of gaps in expectations regarding privacy, firms are much more likely to monitor activity on firm-provided mobile devices than on personal mobile devices. . . . In addressing privacy concerns, Messing explained, “You want to minimize the invasion of privacy and make clear to your employees the extent of your access.  When you are using proprietary technology for mobile applications, you can gain a great deal of insight into employee usage and other behaviors that may not be appropriate – especially if not disclosed.  We are finding many organizations with proprietary applications tracking behaviors and preferences without considering the privacy implications.  Generally speaking, you want to be careful how you monitor the personal device if it is also being used for work purposes.  You want to have controls to determine an employee’s compliance with security policies, but you have to balance that with a respect for that person’s privacy.  When it comes down to it, one of the most effective ways of doing that is to ensure that employees are aware of and understand their responsibilities with respect to mobile devices.  There must be education and training that goes along with your policies and procedures, not only with the employees using the mobile devices, but also within the information technology department as well.  You have people whose job it is to secure corporate information, and in the quest to provide the best solution they may not even consider privacy issues.”

As an alternative to remote monitoring, a firm may decide to conduct personal spot checks of employees’ mobile devices to determine if there has been any inappropriate activity.  This solution is less intrusive than remote monitoring, but likely to be less effective in ferreting out suspicious activity.

Policies Governing Archiving of Books and Records

Firms should consider both technology solutions and monitoring of mobile devices to ensure that they are capturing all books and records required to be kept under the firm’s books and records policies and applicable laws and regulations.

Also, firms may contemplate instituting a policy to search employees’ mobile devices, and potentially to copy materials from such devices, to ensure the capture of all such information and communications.  However, searching and copying may raise privacy concerns, and firms should balance recordkeeping requirements against those concerns.  Messing explained, “In the event of litigation or other business needs, the company should image, copy or search an employee’s personal device if it is used for firm business.  Therefore, employees should understand the importance of complying with the firm’s policies.”

Policies Governing Social Media Access and Use by Mobile Devices

Many firms will typically have some policies and procedures in place that ban or restrict the proliferation of business information via social media sites such as Facebook and Twitter, including with respect to the use of firm-provided mobile devices.  Specifically, such a policy could include provisions prohibiting the use of the firm’s name; prohibiting the disclosure of trade secrets; prohibiting the use of company logos and trademarks; addressing the permissibility of employee discussions of competitors, clients and vendors; and requiring disclaimers.

Messing explained, “We advise companies just to educate employees about social media.  If you are going to be on social media, be smart about what you are doing.  To the extent possible, employees should note their activity is personal and not related to the company.  They also should draw distinctions, where possible, between their personal and business activities.  These days it is increasingly blurred.  The best thing to do is just to come up with common sense suggestions and educate employees on the ramifications of certain activities.  In this case, ignorance is usually the biggest issue.”

Ultimately, many hedge fund managers recognize the concerns raised by mobile devices.  However, many also recognize the benefits that can be gained from allowing employees to use such devices.  In Messing’s view, the benefits to hedge fund managers outweigh the costs.  “Everything about a mobile device is problematic from a security standpoint,” Messing said, “but the reality is that the benefits far outweigh the costs in that productivity is greatly enhanced with mobile devices.  It is simply a matter of mitigating the concerns.”

Policies for Managing BYOD Risk

Companies are increasingly allowing their employees to use their own personal mobile devices, such as laptops, tablets, and smartphones, to remotely access work resources.

This “bring your own device” trend can present certain security and privacy risks for companies, especially in regulated industries where different types of data require different levels of security. At the same time, companies also need to be mindful of employee privacy laws.

Most individuals now have personal mobile devices, and companies are finding it increasingly convenient to allow employees (and in certain situations, independent contractors) to access company data and networks through these personally owned devices. However, when an organization agrees to allow employees to use their own personal devices for company business, it loses control over the hardware and how it is used. This creates security and privacy risks with regard to the proprietary and confidential company information stored on or accessible from those devices, which can lead to potential legal and liability risks. Similarly, when employees use the same device for both personal and professional purposes, drawing the line between the two becomes difficult. If your company is considering letting its employees use their personal devices in the workplace, you should consult with an attorney to craft a policy that’s right for your business.

Today, the Federal Trade Commission (FTC) issued a final report setting forth best practices for businesses to protect the privacy of American consumers and give them greater control over the collection and use of their personal data, entitled “Protecting Consumer Privacy in an Era of Rapid Change: Recommendations for Businesses and Policymakers.” The FTC also issued a brief new video explaining its positions. Here are the key takeaways from the final report:

  • Privacy by Design. Companies should incorporate privacy protections in developing their products, and in their everyday business practices. These include reasonable security for consumer data, limited collection and retention of such data, and reasonable procedures to ensure that such data is accurate;
  • Simplified Choice. Companies should give consumers the option to decide what information is shared about them, and with whom. Companies should also offer that choice at a time, and in a context, that matters to the consumer, although choice need not be provided for certain “commonly accepted practices” that the consumer would expect.
  • Do Not Track. Companies should include a Do-Not-Track mechanism that would provide a simple, easy way for consumers to control the tracking of their online activities.
  • Increased Transparency. Companies should disclose details about their collection and use of consumers’ information, and provide consumers access to the data collected about them.
  • Small Businesses Exempt. The above restrictions do not apply to companies that collect only non-sensitive data from fewer than 5,000 consumers a year, provided they don’t share the data with third parties.

Interestingly, the FTC’s focus on consumer unfairness, rather than consumer deception, was something that FTC Commissioner Julie Brill hinted to me when we discussed overreaching privacy policies and terms of service at Fordham University’s Big Data, Big Issues symposium earlier this month.

If businesses want to minimize the chances of finding themselves the subject of an FTC investigation, they should be prepared to follow these best practices. If you have any questions about what the FTC’s guidelines mean for your business, please feel free to contact us.

OlenderFeldman’s Aaron Messing gave a presentation on Wednesday at the SES New York 2012 conference about emerging legal issues in search engine optimization (SEO) and online behavioral advertising. His presentation, Legal Considerations for Search & Social in Regulated Industries, focused on search and social media strategies in regulated industries. Regulated industries, which include healthcare, banking, finance, pharmaceuticals and publicly traded companies, among others, are subject to various government regulations, he said, but often lack sufficient guidance regarding acceptable practices in social media, search and targeted advertising.

Messing began with a discussion of common methods that search engine optimization companies use to raise their clients’ sites in the rankings. The top search spots are extremely competitive, and whether a site appears on the first or the second page can make a huge difference to a company’s bottom line. One of the ways that search engines determine the relevancy of a web page is through link analysis: search engines examine which websites link to that page, what the text of those links (the anchor text) says about the page, and the surrounding content. In essence, these links and their surrounding content can be considered a form of online citation.

A typical method used by SEO companies to raise website rankings is to generate content, using paid affiliates, freelance bloggers, or other webpages under the SEO company’s control, in order to increase the website’s ranking on search engines. However, since this content is mostly for the search engine spiders, and not for human consumption, it is rarely screened, which can lead to issues with government agencies, especially in the regulated industries. This content also rarely discloses that the author was paid to create it, which can be unfair and deceptive to consumers. SEO companies dislike disclosing paid links and content because search engines penalize paid links. Messing said, “SEO companies are caught between the search engines, who severely penalize disclosure [of paid links], and the FTC, which severely penalizes nondisclosure.”

The main enforcement agency is the Federal Trade Commission, which has the power to investigate and prevent unfair and deceptive trade practices across most industries, though regulated industries have additional enforcement bodies. The FTC’s rules require full disclosure when there is a “material connection,” such as a cash payment or a gift item, between a merchant and someone promoting its product. Suspicious “reviews” or unsubstantiated content can attract attention, especially in regulated industries. “If an FTC lawyer sees one of these red flags, you could attract some very unwanted attention from the government,” Messing noted.

Recently, the FTC has increased its focus on paid links, content and reviews. While the FTC requires mandatory disclosures, it doesn’t specify how those disclosures should be made. This can lead to confusion as to what the FTC considers adequate disclosure, and Messing said he expects the FTC to issue guidance on disclosures in the SEO, social media and mobile devices areas. “There are certain ecommerce laws that desperately need clarification,” said Messing.

Messing stated that clients need to ask what their SEO company is doing, and SEO companies need to tell them, because ultimately both can be held liable for unfair or deceptive content. He recommends ensuring that all claims made in SEO content can be easily substantiated, and building SEO through goodwill. “In the context of regulated industries,” he said, “consumers often visit healthcare or financial websites when they have a specific problem. If you provide them with valuable, reliable and understandable information, they will reward you with their loyalty.”

Messing cautioned companies to be careful about what information they collect for behavioral advertising, and to consider the privacy ramifications. “Data is currency, but the more data a company holds, the more potential liability it is exposed to.”

Messing expects further developments in privacy law, possibly in the form of legislation. In the meantime, he recommends using data responsibly, and in accordance with the data’s sensitivity. “Developing policies for data collection, retention and deletion is crucial. Make sure your policies accurately reflect your practices.” Finally, Messing noted that companies lacking a robust compliance program governing the collection, protection and use of personal information may face a significant risk of a data breach or legal violation, resulting litigation, and a hit to their bottom lines. He recommends speaking to a law firm that is experienced in privacy and legal compliance for businesses to ensure that your practices do not attract regulatory attention.

I had the pleasure of attending Fordham Law School’s Center on Law & Information Policy (CLIP)’s Big Data, Big Issues Symposium today, which had a fascinating lineup of many of the best thinkers in privacy. Federal Trade Commission (FTC) Commissioner Julie Brill delivered a very interesting keynote address about the benefits and dangers of big data, as well as the evolving privacy concerns. The address is well worth a read.

I had a chance to chat with Commissioner Brill after her speech, and asked her thoughts about privacy policies and terms of service that allow for unrestricted and unlimited use of data, such as the infamous Skipity policies. Commissioner Brill stated that, given that most users don’t read privacy policies and terms of service, the FTC is very concerned by these types of one-sided policies. She mentioned that the aggregation and use of data outside of the context of collection is something that the FTC hopes to issue guidance on in the future, and may well be unfair and deceptive regardless of a consumer’s consent.

My takeaway from the chat is that consumer consent will not insulate a website from FTC scrutiny, and that the reasonable expectations of a consumer may dictate the FTC’s considerations of whether a policy is unfair or deceptive, especially given that so little attention is paid to these policies by consumers. However, at the same time, it is important that policies reflect the company’s actual practices.

Privacy and the Communications Decency Act

The Communications Decency Act Provides Immunity For Third Party Submitted Content

We often get questions from both clients and journalists (e.g., here and here) regarding liability for posting content on the internet, most of them centering on the same basic premise: “Why can Company X post this content on their website? How is that legal? Isn’t that an invasion of privacy?”

In most cases, the answer can be found in Section 230 of the Communications Decency Act of 1996, 47 U.S.C. § 230 (“CDA”). The act provides immunity for Internet Service Providers (read: websites, blogs, listservs, forums, etc.) who publish information provided by others, so long as they comply with the Digital Millennium Copyright Act of 1998 (“DMCA”) and take down content that infringes the intellectual property rights of others. In order to understand the CDA and DMCA, it is helpful to understand how each came about.

The United States has historically favored free speech, with certain limitations. Under the law, a writer or publisher of harmful information is treated differently than a distributor of that information. The theory behind this distinction is that the speaker and publisher have the knowledge of and editorial control over the content, whereas a distributor might not be aware of the content, much less whether it is harmful. Thus, if a writer publishes defamatory content in a book, both the writer and the publisher can be held liable, whereas a library or bookstore that distributed the book cannot.

Initially, courts found a distinction in liability based on whether the website was moderated. An unmoderated/unmonitored website was considered a distributor of information, rather than a publisher, because it did not review the contents of its message boards. Conversely, courts found a moderated/monitored website to be a publisher, concluding that the exercise of editorial control over content made it more like a publisher than a distributor – and thus the website was liable for anything that appeared on the site. Unsurprisingly, this created strong disincentives to monitoring or moderating websites, as doing so increased potential liability.

Given the sheer amount of information communicated online, the potential for liability based on third-party content (i.e. user comments on a blog, website or web bulletin board) threatened the viability of service providers and free speech over the internet.

Congress specifically wanted to remove these disincentives to self-moderation by websites, and responded by passing the CDA. The CDA immunizes, with limited exceptions, providers and users of “interactive computer services” from publisher’s liability, so long as the information is provided by a third party (“interactive computer service” is defined broadly, and covers blogs). This immunity does not cover intellectual property claims or criminal liability, and of course the original creator of the content is not immune. That means a blogger or commentator is responsible for his or her own comments, though not for the submitted content of others (even if it violates a third party’s privacy, is defamatory, etc.). Generally, the CDA will cover a website that hosts third-party content, and exercising editorial functions, such as deciding whether to publish, remove or edit material, does not affect that immunity unless those actions materially alter the content (e.g., changing “Aaron is not a scumbag” to “Aaron is a scumbag” would be a material alteration, whereas cropping a photo or fixing typos would not).

Accordingly, websites that post only user-submitted content (even if the website encourages or pays third parties to create or submit content) are protected under the CDA, and immune from liability, with two major exceptions: the CDA does not immunize against the posting of criminally illegal content (such as underage pornography), and it does not immunize against the posting of another’s intellectual property without permission. Tasked with balancing the need to protect intellectual property rights online against the challenges faced by websites that led to the CDA, Congress implemented the DMCA. The DMCA creates a safe harbor against copyright liability for websites, so long as they block access to allegedly infringing material upon receipt of a notification from a copyright holder claiming infringement.

Ultimately, protecting yourself from liability under the CDA and DMCA or protecting your intellectual property rights online can be tricky. If you have any questions, feel free to contact us.

Workplace Privacy and RFID

The Use of RFID In The Workplace Sparks Privacy Concerns

OlenderFeldman recently had the opportunity to speak with Karen Boman of Rigzone about RFID technology and workplace privacy. Although the article focuses on the oil industry, the best practices of openness and transparency are generally applicable to most workplaces. The entire article can be found here, and makes for an engaging and informative read.

RFID technology in and of itself does not pose a threat to privacy; it is when the technology is deployed in a way that is not consistent with responsible privacy and information security practices that RFID becomes a problem, said Aaron Messing, an associate with Union, N.J.-based OlenderFeldman LLP. Messing handles privacy issues for clients that include manufacturing and e-commerce firms.

Legal issues can arise if a company is tracking its employees secretly, Messing noted, or if it places a tracking device on an employee’s property without permission.

He recommends that clients follow basic principles of good business practice, including making employees aware that they are being monitored and obtaining written consent.

“Openness and transparency over how data is tracked and what is being used is the best policy, as employees are typically concerned about how information on them is being used,” Messing commented. “We advise clients to limit their tracking of employees to working hours, or when that’s not feasible, they should only access the information they want to track, such as working hours.”

The clients Messing works with that use RFID typically use the technology for tracking inventory, not workers. Still, Messing can see legitimate uses for RFID on an oil rig: in an emergency, RFID tracking makes it possible to determine whether all employees have been evacuated, and it can inform how evacuation plans should be formed, Messing commented.

“It really depends on what the information is being used for,” Messing commented. However, tracking workers without a legitimate reason can result in a loss of morale, or in the loss of workers to other companies.

Workers who have RFID lanyards or tags can leave them at home once the work day is over to avoid being tracked off-hours. However, employees generally don’t have a lot of rights in terms of privacy while on the job. “Since an employee is being paid to work, the expectation is that employers have a right to track employees’ activities,” said Messing. This can include monitoring phone conversations, computer activity, movements throughout a building and bathroom breaks.

However, companies should try to design monitoring programs that are respectful of employees.

“Companies that do things such as block personal email or certain websites and place a lot of restrictions on workers may do more harm than good, since workers don’t like feeling like they’re not trusted or working in a nanny state,” Messing commented.


Massachusetts Data Security Regulations

Service Providers Face New Regulations Covering Personal Information

If your company is a service provider (generally any company providing third-party services, ranging from a payroll provider to an e-commerce hosting provider) or your company utilizes service providers, you need to be aware of the Massachusetts Data Security Regulations (the “Regulations”). The Regulations require that, by March 1, 2012, all service provider contracts contain appropriate security measures to protect the personal information (as described below) of Massachusetts residents. See 201 CMR 17.03(2)(f). All companies that “own or license” personal information of Massachusetts residents, regardless of where the companies are physically located, will need to comply with the Regulations. Additionally, all entities that own or license personal information of Massachusetts residents are required to develop, implement and maintain a written information security program (“WISP”), which lists the administrative, technical and physical safeguards in place to protect personal information.

“Personal information” is defined by the Regulations as a Massachusetts resident’s first and last name, or first initial and last name, in connection with any of the following: (1) Social Security number; (2) driver’s license number or state-issued identification card number; or (3) financial account number, or credit or debit card number.
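
For illustration only, the definition reduces to a simple conjunction: a name element plus at least one of the enumerated identifiers. Here is a hedged Python sketch of that test; the field names and the SSN pattern are assumptions for illustration, not a compliance tool:

```python
# Illustrative sketch of the Regulations' "personal information" definition.
# Field names and the SSN pattern are assumed, not taken from the rule text.
import re

SSN = re.compile(r"^\d{3}-?\d{2}-?\d{4}$")

def is_personal_information(record: dict) -> bool:
    # First and last name, or first initial and last name...
    has_name = bool(record.get("last_name")) and (
        bool(record.get("first_name")) or bool(record.get("first_initial"))
    )
    # ...in connection with an SSN, license/state ID, or financial account.
    has_identifier = (
        (record.get("ssn") and SSN.match(record["ssn"]))
        or bool(record.get("drivers_license"))
        or bool(record.get("financial_account"))
    )
    return has_name and bool(has_identifier)
```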

If your company uses service providers, you are responsible for your service providers’ compliance with the Regulations as it relates to your business and your customers. The Regulations are clear that if your service provider receives, stores, maintains, processes, or otherwise has access to personal information of Massachusetts residents, you are responsible for making sure that the service provider maintains appropriate security measures to protect that personal information. Therefore, you should make sure that your agreements with service providers contain appropriate language, obligations and indemnifications to protect your interests and assure compliance by your service provider. If you are a service provider, you need to develop a comprehensive WISP in order to protect yourself from liability.

If you have any questions or concerns regarding the implementation of the Regulations or how it may affect your business, please feel free to contact us.

A recent data breach demonstrates some relevant concerns.  Last week a large marketing firm announced that numerous email addresses, and possibly names and addresses, of customers of some of its large clients (including banks) were compromised.  Some might say email addresses are “no big deal.”  Certainly, in and of themselves, email addresses probably don’t qualify as protected personal data under most, if not all, state data breach laws.  However, the fallout from the breach has proven somewhat concerning, at least on the reputational front.  Numerous articles, blogs, and comments have shown up citing the potential for increased phishing attacks.  More importantly, this breach may increase the potential that “spear-phishing” attacks will be successful.  Spear-phishing occurs when the bad guys have accurate personal data that they know is attributable to a specific business; thus, they can send a customer an email with specific information, engendering a much higher likelihood of confidence that the email is genuine and allowing the bad guys to potentially gain the additional information needed to do some damage.

From a larger standpoint, this breach demonstrates why businesses must approach privacy and security from an overall information governance standpoint, and internalize privacy decisions in their business offerings.  Labels or descriptions of the type of data and its perceived sensitivity, applied without proper thought and analysis, can lead to poor results.  Broad assumptions (e.g., that email addresses don’t much matter) don’t work.  Privacy must be an internalized function embedded within organizational strategic decisions.  A customer name and email address belonging to a bank or brokerage client might be much more sensitive than one belonging to an ordinary retailer providing only brick-and-mortar sales, without branded store credit card accounts.  This doesn’t mean that ordinary email addresses don’t need protection; they do (particularly if you say you will protect them in your privacy policy).  It means that businesses must understand the risk behind the information and the way it is managed, without arbitrarily attaching significance or insignificance to it.

Blindly reading laws, rules, or written industry standards and designing programs solely to meet defined requirements won’t always get a business where it needs to be.  Obviously, legal requirements must be interpreted and followed.  However, more than that, a thoughtful approach by those who think about privacy and security implications is desirable.

For that matter, the same ideas apply to the way in which a business deals with a breach.  For example, if email addresses, street addresses, and names are stolen, and there is a concern about “spear-phishing,” it might not be such a great idea for the compromised business to send out notifications via email asking someone to “click here” for more information.  (Note: The author has no information that this was, or was not, done in the actual case.)  In such a scenario, the business might want to discourage customers from replying to email messages (the exact vector of the phishing attack).

Moral: Be careful about making arbitrary decisions based upon the perceived sensitivity attributed to the type of information without thinking it through.

Your Privacy Policy Could Have Serious Legal Implications

How many times have you seen website terms of use or privacy policies saying something to the effect of, “We use industry standard best-practice technology to guarantee your sensitive financial transactions are 100% safe and secure”? When you publish these types of statements, you potentially expose your business to deceptive and/or unfair practices claims by attorneys general, state and federal regulators, and private plaintiffs, particularly if there is a data breach involving sensitive information. From a business perspective you may not like the more watered-down version, “While we take reasonable measures to try to protect your sensitive information, we cannot guarantee that your information will be completely secure,” etc. However, industry standards are made to be broken by the nefarious crews who make it their work to steal financial account access numbers, as well as other sensitive information. If you think that you provide the panacea for all online risk, speak up! You may have discovered the golden goose. Until then, think about publishing more accurate, responsible information for your users, and mitigate your business risk. Besides, accuracy creates user confidence, and these statements can be worded in ways that build trust in your brand.

Protecting data applies both when it is in transit and when it is at rest. That means that even after you receive data through an encrypted connection, there are risks related to its storage, and to if and when it is decrypted and used. Interestingly, the recent hack of HBGary Federal, a well-known information security firm, demonstrated that even those charged with the task of protecting information are susceptible. In creating your public-facing policy, have you focused on security at only the transmission stage?
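
As a minimal sketch of protection at rest, symmetric encryption along the following lines (using the widely available Python cryptography package; key storage is the hard part, and is glossed over here) keeps stored data unreadable except while actually in use:

```python
# Sketch of encrypting data at rest with the "cryptography" package.
# Key management is simplified here for illustration.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # store in a KMS/HSM, never beside the data
f = Fernet(key)
ciphertext = f.encrypt(b"account: 1234-5678")  # what actually sits on disk
plaintext = f.decrypt(ciphertext)    # data is exposed only while in use
```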

As for that encrypted transmission, these industry standards often utilize Transport Layer Security (TLS) or its predecessor, Secure Sockets Layer (SSL). You know these: they underpin the HTTPS standard. We’re often advised to look for the “HTTPS” in the URL heading, or the lock icon in our browser. In my travels I am astonished to learn that some people think these technologies are infallible. So, once that lock appears, our connection is secure and invincible, right? Well…maybe.

While the detailed workings of TLS and SSL are way beyond this article (and certainly beyond my ability to fully appreciate) it is interesting to note that researchers have found potential vulnerabilities with SSL, or at least with the supporting browser and trusted authorities concepts necessary for its use in typical online transactions. This is not to say that TLS and SSL are not safe. Quite the contrary, the encryption technology provides good protection for sensitive online transactions and should definitely be used. However, they must be configured correctly, the Certificate Authority (CA) must act appropriately, and the client (user) machine must not be compromised. The security and confidentiality sought through the use of SSL depends upon not only the encryption algorithm, but also the browser and the trust aspect inherent in public key cryptography.

Regarding the encryption itself, while some proclaim that they use “industry standard” technology, they might not actually be using it. SSL version 2.0 was known to have several security vulnerabilities. The Payment Card Industry Data Security Standard (PCI DSS) does not recognize SSL version 2.0 as secure; only version 3.0 or the later TLS standards may be considered.
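
Modern TLS libraries let a site refuse the legacy protocol versions outright. A short sketch with Python’s standard ssl module, assuming Python 3.7 or later:

```python
# Sketch: refusing legacy protocol versions on the client side.
# ssl.TLSVersion requires Python 3.7+.
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.minimum_version = ssl.TLSVersion.TLSv1_2  # rules out SSL 2.0/3.0
context.check_hostname = True                     # verify the site's identity
context.verify_mode = ssl.CERT_REQUIRED
```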

Browsers by default are loaded to trust numerous CAs. CAs are entrusted with verifying that a site actually is the site it claims to be. In the past, researchers have found that known vulnerable certificates had not been revoked by some CAs, and that theoretical or actual “collisions,” in which a man-in-the-middle assumes the trusted identity, could happen.

Would it surprise you that, according to some analysis, some sites might still support SSL version 2.0? According to one researcher, as of July 2010 only about 38% of sites using SSL were configured correctly, and 32% contained a previously exposed renegotiation vulnerability. Other researchers exposed approximately 24 possible exploits (of varying criticality) involving man-in-the-middle attacks on SSL as used in browsers.

Most recently, in February 2011, Trusteer reported on some nasty malware it named OddJob. OddJob targets online banking customers. According to Trusteer, OddJob does not reside on the client and thus avoids detection by typical anti-malware software; a fresh copy of OddJob is fetched from a command-and-control server during a session. OddJob hijacks a session token ID and reportedly allows the hacker to, essentially, ride along in the background with the user’s session. Of most concern, OddJob allows the hackers to stay logged in to one’s account even after the user purports to log out, maximizing the potential for undetected (or later-detected) fraud. Significantly, client-side (user-based) malware presents a risk that may be partly beyond the online website’s control.
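
One server-side mitigation worth noting: if live session tokens are tracked on the server, logging out can revoke the token itself, rather than merely clearing the client’s cookie, so a ride-along copy of the token dies with the session. A hypothetical Python sketch; the names are illustrative, not Trusteer’s or any bank’s implementation:

```python
# Hypothetical sketch: server-side session revocation, so logout invalidates
# the token itself rather than only the client's copy of it.
import secrets

ACTIVE_SESSIONS = set()  # in production, a shared store with expiry (e.g., Redis)

def login(user_id: str) -> str:
    token = secrets.token_urlsafe(32)  # unguessable session token
    ACTIVE_SESSIONS.add(token)
    return token

def is_valid(token: str) -> bool:
    return token in ACTIVE_SESSIONS   # checked on every request

def logout(token: str) -> None:
    ACTIVE_SESSIONS.discard(token)    # stolen copies of the token die too
```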

So, if we presume that no technology will be absolutely 100% safe and secure, and if the right bad-guys want to target someone or something, why the need to tell users something that is not necessarily accurate?

This is only one example of good practice: vetting what you are actually doing to see how it really measures up, and whether your public-facing policies, while seeming accurate, really are not. This article focuses on one aspect of security, but the same types of issues arise in privacy as well. Why expose your business to more regulatory risk if there is a breach? Even if you employed good practices and did your best to try to protect the information, false or misleading information in your public-facing terms and policies can come back to haunt you.

Appointing experienced information governance individuals or teams, or using outside resources, can help you identify the disconnects and gaps between what exists and what you say exists.

New Laws Place Restrictions and Limits on After Sale Data Passes and Negative Option Marketing

On December 29, 2010, President Obama signed the “Restore Online Shoppers’ Confidence Act” into law. This new law places restrictions and limits on after-sale “data passes” and “negative option” marketing in Internet sales. Senator John D. (Jay) Rockefeller IV, Chairman of the U.S. Senate Committee on Commerce, Science, and Transportation, originally introduced the bill, which ultimately became this law, in May, after the Senate conducted hearings into the practices of Affinion, Vertrue, and Webloyalty. The Committee published information about the objectionable practices. The New York Attorney General’s Office had also opened an investigation against these companies, resulting in multi-million dollar settlements.

In a nutshell, these third parties were offering various membership clubs to users of e-commerce sites. Typically, when a user of an e-commerce site completed an online purchase, that user would be redirected to join a membership discount club for promotions, rebates, and the like. The user never had to re-enter his or her credit card, because the card information was passed off from the e-commerce site where the user had just completed a transaction. Many users apparently did not understand that their credit cards would be charged, since they did not need to re-enter credit card data at the membership club registration. The clubs then typically offered a free trial period, after which the user’s credit card would be charged if they did not cancel the membership. If not cancelled, the club operator placed recurring monthly charges on the user’s credit card. In general, the process of interpreting silence as acceptance, or automatically charging the user unless they cancel, is a “negative option” sale.

The law prohibits an initial e-commerce vendor from passing off a user’s credit card information to a third party in a post-transaction sale for the purposes of that post-transaction third party’s sale of goods or services to the user.

The law makes it unlawful for a post-transaction third-party seller to charge or attempt to charge a user’s credit or debit card, or bank or other financial account for an Internet sale, unless:

“(1) before obtaining the consumer’s billing information, the post-transaction third party seller has clearly and conspicuously disclosed to the consumer all material terms of the transaction, including: (A) a description of the goods or services being offered; (B) the fact that the post-transaction third party seller is not affiliated with the initial merchant, which may include disclosure of the name of the post-transaction third party in a manner that clearly differentiates the post-transaction third party seller from the initial merchant; and, (C) the cost of such goods or services; and, (2) the post-transaction third party seller has received the express informed consent for the charge from the consumer whose credit card, debit card, bank account, or other financial account will be charged by: (A) obtaining from the consumer— (i) the full account number of the account to be charged; and (ii) the consumer’s name and address and a means to contact the consumer; and (B) requiring the consumer to perform an additional affirmative action, such as clicking on a confirmation button or checking a box that indicates the consumer’s consent to be charged the amount disclosed.”
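
Reduced to logic, the statute imposes a two-part test: full disclosure before billing information is obtained, plus express informed consent evidenced by re-entry of the account number and an affirmative act. A hypothetical Python sketch of that test follows; the field names are invented for illustration and this is not legal advice:

```python
# Illustrative two-part test for a post-transaction third-party charge,
# loosely following the statutory language quoted above. Field names are
# hypothetical; this is a sketch, not a compliance implementation.
def may_charge(consumer: dict) -> bool:
    # (1) Clear and conspicuous disclosure of all material terms.
    disclosed_terms = (
        consumer.get("saw_description")        # goods/services described
        and consumer.get("saw_third_party_notice")  # non-affiliation disclosed
        and consumer.get("saw_cost")           # cost disclosed
    )
    # (2) Express informed consent for the charge.
    express_consent = (
        consumer.get("entered_full_account_number")  # re-entered, not passed off
        and consumer.get("provided_name_and_address")
        and consumer.get("affirmative_action")       # e.g., checked a consent box
    )
    return bool(disclosed_terms and express_consent)
```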

The law also makes “negative option” sales illegal unless the seller:

“(1) provides text that clearly and conspicuously discloses all material terms of the transaction before obtaining the consumer’s billing information; (2) obtains a consumer’s express informed consent before charging the consumer’s credit card, debit card, bank account, or other financial account for products or services through such transaction; and (3) provides simple mechanisms for a consumer to stop recurring charges from being placed on the consumer’s credit card, debit card, bank account, or other financial account.”

The law gives the Federal Trade Commission enforcement authority, and also allows state attorneys general to enforce the law, with the remedies and penalties available under the Federal Trade Commission Act.

There has been some confusion generated in online content about this law. Apparently, some are concerned that the law absolutely prevents any post-transaction up-selling, even if it were done by the first-party website where the user made the initial purchase.

However, the law defines a “post-transaction third party seller’’ as one who:

“(A) sells, or offers for sale, any good or service on the Internet; (B) solicits the purchase of such goods or services on the Internet through an initial merchant after the consumer has initiated a transaction with the initial merchant; and (C) is not: (i) the initial merchant; (ii) a subsidiary or corporate affiliate of the initial merchant; or (iii) a successor of an entity described in clause (i) or (ii).”

Thus, it seems fairly clear that an “initial merchant” is not prevented from post-transaction marketing, but it is clearly prevented from passing to another entity the financial data that allows the user to be charged. Nevertheless, if e-commerce vendors are cross-selling through strategic alliances with entities that are not subsidiaries or corporate affiliates, they should ensure that data passes are not made, and that the entity to which the user is referred complies with all transparency obligations. All should note the requirements on “negative option” sales.