By: Aaron Krowne

In a major recent case testing California’s medical privacy law, the Confidentiality of Medical Information Act, or CMIA (California Civil Code § 56 et seq.), the Third District Court of Appeal held in Sutter Health v. Superior Court on July 21, 2014 that confidential information covered by the law must be “actually viewed” by an unauthorized person for the statutory penalty provisions of the law to apply. The implication of this decision is that it just became harder for consumers to sue for a “pure” loss of privacy resulting from a data breach, in California and possibly beyond.

Not So Strict

Previously, CMIA was widely assumed to be a strict liability statute: even in the absence of actual damages, a covered party that “negligently released” confidential health information was subject to a $1,000 nominal penalty. That is, if a covered health care provider or health service company negligently handled customer information, and that information was subsequently taken by a third party (e.g., through the theft of a computer or data device containing it), that alone triggered the $1,000 per-instance (and thus per-customer-record) penalty. There was no suggestion that the thief (or other recipient) of the confidential health information needed to view, or do anything with, the information. Indeed, plaintiffs had previously brought cases under such a “strict liability” theory and succeeded in having CMIA’s $1,000 penalty applied.

 Sutter Health turns that theory on its head, with dramatically different results for consumers and California health-related companies.

Sutter was looking at a potential $4 billion fine, stemming from the October 2011 theft from its offices of a computer containing 4 million unencrypted client records. Sutter’s computer was password-protected, but without encryption of the underlying data that measure is easily defeated. Security at the office was light, with no alarm or surveillance cameras. Believing this to be “negligent,” some affected Sutter customers sued under CMIA in a class action. Given the potential size of the total fine, the stakes were high.

The Court not only ruled against the Sutter customers, but dismissed the case on demurrer, meaning the Court determined that the case was deficient on the pleadings because the Plaintiffs “failed to state a cause of action.” The main reason, according to the Court, was that Plaintiffs failed to allege that an unauthorized person actually viewed the confidential information; without such viewing, there was no breach of confidentiality as required under CMIA. The Court elaborated that under CMIA “[t]he duty is to preserve confidentiality, and a breach of confidentiality is the injury protected against. Without an actual confidentiality breach there is no injury and therefore no negligence…”.

The Court also introduced the concept of possession, which is absent in CMIA itself, to delimit its new theory interpreting CMIA, saying: “[t]hat [because] records have changed possession even in an unauthorized manner does not [automatically] mean they have been exposed to the view of an unauthorized person.” So, plaintiffs bringing claims under CMIA will now have to allege, and ultimately prove, that their confidential information (1) changed possession in an unauthorized manner, and that (2) it was actually viewed (or, presumably, used) by an unauthorized party.

The Last Word?

This may not be the last word on CMIA, and it is certainly not the last word on the general issue of the burden of proving harm in consumer data breaches. The problem is that it is extremely difficult to prove that anything nefarious has actually happened with sensitive consumer data post-breach, short of catching the perpetrator and obtaining a confession, or actually observing the data being used or sold to a third party. Even positive results detected through credit monitoring, such as attempts by unauthorized third parties to use credit cards, do not conclusively prove that a particular breach was the cause of the unauthorized access.

The Sutter court avers, in support of its ruling, that we do not actually know whether the thief in this case simply stole the computer, wiped the hard drive clean, and sold it as a used machine, in which case there would be no violation of CMIA. Yet, logically, the opposite may just as well have happened: retrieval of the customer data may very well have been the actual goal of the theft. In an environment where sensitive consumer records can fetch as much as $45 apiece (totaling $180 million for the Sutter customer data), it seems unwise to rely on the assumption that thieves will simply not bother to check stolen corporate computers and digital devices for valuable information.

Indeed, the Sutter decision perhaps raises as many questions as it answers about where to draw the line for a “breach of confidential information.” To wit: presumably, a hacker downloading unencrypted information would still constitute such a breach under CMIA as so interpreted. But then, by what substantive rationale does the physical removal of a hard drive in this case not qualify? Additionally, how is it to be determined whether a party actually looked at the data, and precisely who looked at it?

Further, the final chapter on the Sutter breach may not yet be written – the data may still be (or turn out to have been) put to nefarious use, in which case, the court’s ruling will seem premature. Thus, there is likely to be some pushback to Sutter, to the extent that consumers do not accept the lack of punitive options in “open-ended” breaches of this nature, and lawmakers actually intend consumer data-handling negligence laws to have some “bite.”

Conclusion

Naively, it would seem that, under the Sutter Court’s interpretation, companies dealing with consumer health information have a “blank check” to treat that information negligently, so long as the actual viewing (and presumably, use) of that information by unauthorized persons remains a remote possibility. We would caution against this assumption. First, as noted above, there may be some pushback (judicial, legislative, or in terms of public response) to Sutter’s strict requirement of proof that breached records were viewed. But more importantly, there is simply no guarantee that exposed information will not be released and put to harmful use, or that sufficient proof of such use will not surface for use in consumer lawsuits.

One basic lesson of Sutter is that, while the company dodged a bullet thanks to a court’s re-interpretation of a law, it (and its customers) would have been vastly safer had it simply used encryption. More broadly, Sutter should have had, and implemented, a better data security policy. Companies dealing with customers’ health information (in California and elsewhere) should take every possible precaution to secure this information.
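
To make the distinction concrete, below is a minimal sketch of what encrypting records at rest can look like, using Python’s widely available cryptography library. The record contents and the single-key handling are assumptions for illustration only; real deployments would use full-disk or database-level encryption with properly managed keys.

```python
# Minimal sketch of encrypting a record at rest (assumptions: the
# "cryptography" package is installed; key management is reduced to a
# single local key purely for illustration).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, keys live in a key-management system, not on the device itself
cipher = Fernet(key)

record = b"Jane Doe | DOB 1970-01-01 | diagnosis: hypertension"   # hypothetical patient record

ciphertext = cipher.encrypt(record)
print(ciphertext)                    # unreadable without the key
print(cipher.decrypt(ciphertext))    # only a key holder can recover the plaintext
```

The point is simply that a password on the operating system protects nothing once the drive is removed, whereas a stolen drive holding only ciphertext is of little use without the key.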

Do not put your company and your customers at risk of data breaches; contact a certified privacy attorney at OlenderFeldman to make sure your company’s data security policy addresses all applicable health information laws.

By: Aaron Krowne

You may have heard quite a bit of buzz about “Google Glass” in the past few years – and if you aren’t already intimately familiar with it, you probably at least know it is a new technology from Google that involves a “computer and camera” grafted onto otherwise standard eyeglasses. Indeed, this basic picture is correct, and itself is enough to suggest some interesting (and to some, upsetting) legal and societal questions.

What is Google Glass?

Google Glass, first released in February 2013, is one of the first technologies for “augmented reality” aimed at the consumer market. Augmented reality (“AR”) seeks to take a “normal experience” of the world and add computerized feedback and controls, which enable various ways for users to “get more” out of whatever activity they are engaged in. Before the release of Google Glass, this type of technology was mostly limited to the military and other niche applications. AR stands in contrast to typical, modal technology, which requires users to (at least intermittently) “take a break” from the outside world and focus exclusively on the technology (e.g., periodically checking your cell phone for mapping directions). AR also differs from the more widely known virtual reality, which seeks to simulate or entirely replace the outside world with a generated, “real-seeming” one (e.g., a flight simulator). AR technology isn’t entirely alien to consumers, either; a simple form already familiar on the consumer market is voice interaction on smartphones (e.g., Siri on the iPhone) – but Google Glass takes AR to another level.

Google Glass is indeed “glasses with a computer and camera” on them, but also, importantly, has a tiny “heads up display” (screen), viewable by the wearer on the periphery of one lens. The camera allows the wearer to capture the world in snapshots or video, but more transformatively, allows the computer to “see” the world through the wearer’s eyes and provide automated feedback about it. The main page of Google Glass’s web site gives a number of examples of how the device can be used, including: hands-free heads-up navigation (particularly useful in non-standard settings, such as cycling or other sports), overlaid instruction while training for sports (e.g., improving one’s golf swing by “seeing” the correct swing), real-time heads-up translations (e.g., for street signs in a foreign country), useful real-time lookup functions such as currency conversions, overlaid site and landmark information (e.g., names, history, and even ratings), and, simply, a hands-free version of more “conventional” functions such as phone calls, digital music playing, and instant messaging. This list only scratches the surface of what Google Glass can do – and surely there are countless other applications that have not yet been imagined.

Until April 2014, Google Glass was only available to software developers, but it is now available to the consumer market, where it sells for about $1,500. While it is tempting to write off such new, pricey technology as of interest mostly to “geeks,” the capabilities of Google Glass are so compelling that it is reasonable to expect that it (and possibly “clones” developed by other manufacturers) will enter considerably more widespread (if not mainstream) use within a few short years. After all, this is what happened with cell phones, and then smartphones, with historically-blinding speed. Here, we will endeavor not to be “blinded” to the legal implications of this new technology, and provide a brief summary of the most commonly referenced societal and legal concerns implicated by Google Glass.

Safety Issues

An almost immediate “gut” concern with a technology that inserts itself into one’s field of view is that it might be distracting, and hence, unsafe in certain situations; most worryingly, while driving. It is often said that the law lags behind technology; but as many as eight states are already considering legislation that would restrict Google Glass (or future Glass-like devices) while driving. Indeed, the law has shown that it can respond quickly to new technology when it is coupled with acute public concern or outrage; consider the few short years between the nascent cell phone driving-distraction concerns of the mid-2000s and the now near-universal bans and restrictions on that sort of use.

More immediately, while some states do not have laws that explicitly mention technologies like Google Glass, their existing laws are written broadly enough (or have been interpreted so as) to cover banning the use of Google Glass while driving. For example, California’s “distracted driving” law (Cal. Vehicle Code § 27602) covers “visually displaying […] a video signal […] visible to the driver,” which almost certainly includes Google Glass. This interpretation of the California law was confirmed in the recent Abadie case, where the law was applied to a San Francisco driver who had been cited for driving while wearing Google Glass. Luckily for the driver in Abadie, however, the ticket was dismissed on the grounds that actual use of Google Glass at the time could not be proven.

Are such safety concerns about distraction well-founded? Google and other defenders of Google Glass counter that Google Glass is specifically designed not to be distracting; the projected image is in the wearer’s periphery which eliminates the need to “look down” at some other more conventional device (such as a GPS unit or a smartphone).

The truth seems to be somewhere in the middle, implying a nuanced approach. As the Abadie case illustrates, Google Glass might not always be in use, and therefore not always distracting. And, per the above, Google Glass might even reduce distraction in certain contexts and for certain functions. Yet nearly all states have “distraction” as a category on police traffic accident report forms. Therefore, whether or not laws are ultimately written specifically to cover Google Glass-type technology, its use while driving has already given rise to new, manifest legal risks.

Privacy Issues

The ability to easily and ubiquitously record one’s surroundings naturally triggers privacy concerns. The most troubling privacy concern is that Google Glass gives private citizens the technology to surreptitiously record others. Such concerns could even find a legal basis in wiretapping laws currently on the books, such as the Federal Wiretap Act (18 USC § 2511, et seq.), which prohibits the interception of “oral” communications without the requisite consent (and many state analogues require the consent of all parties). These laws apply to Google Glass and any other wearable recording device, much as they do to non-wearable recording devices.

There are other privacy concerns relating to the sheer ubiquity of recording: e.g., worry for a “general loss of privacy” due to the transformation of commonplace situations and otherwise ephemeral actions and utterances into preserved, replayable, reviewable, and broadcastable/sharable media. An always-worn, potentially always-on device like Google Glass certainly seems to validate this concern and is itself sufficient to give rise to inflamed sentiments or outright conflict. Tech blogger Sarah Slocum learned this lesson the hard way when she was assaulted at a San Francisco bar in February 2014 for wearing her Google Glass inside the bar.

Further, Google Glass provides facial recognition capability, and combined with widespread “tagging” in photos uploaded to social media sites, this capability does seem to add heft (if not urgency) to the vague sense of disquiet. Specifically, Google Glass in combination with tagging would appear to make it exceedingly easy (if not effortless) for identities to be extracted from day-to-day scenes and linked to unwitting third parties’ online profiles – perhaps in places or scenarios they would not want family, employers or “the general public” to see.

Google Glass’s defenders would reply to the above concerns by pointing out that the device is hardly inconspicuous, so one cannot really “secretly” record with it. Further adding to its obviousness, Google Glass displays a prominent blinking red light when it is actually recording. Additionally, Google Glass only records for approximately ten seconds by default, and its battery supports only about 45 minutes of total recording, making it significantly inferior to dedicated recording or surveillance devices that have long been (cheaply) available to consumers.

But in the end, it is clear that Google Glass means more recording and photographing will be taking place in public. Further, the fact that “AI” capabilities like facial recognition are not only possible, but integral to Google Glass’s best intended use, suggests that the device will be “pushing the envelope” in ways that challenge people’s general expectation of privacy. This envelope-pushing is likely to generate lawsuits – as well as laws.

Piracy Concerns

Another notable area of concern is “piracy” (the distribution of copyrighted works in violation of a copyright). Because Google Glass can be worn all the time, can record, and “sees what the wearer sees,” it is inherently capable of easily capturing wearer-viewed media, such as movies, art exhibits, or musical performances, without the consent of the performer or copyright owner. For this reason, consumer recording devices are often restricted or banned in such settings.

Of course, recording still happens – especially with today’s ubiquitous smartphones – but the worry is that if Google Glass is “generally” permitted, customer/viewer recording will be harder to control. This concern was embodied in a recent case in which an Ohio man was kicked out of a movie theater and then questioned by Homeland Security personnel (specifically, Immigration and Customs Enforcement, which investigates piracy) for wearing Google Glass to a movie. The man was released without charges, as there was no indication the device had been on. But despite the fact that this example had a happy ending for the wearer, such an interaction certainly amounts to more than a minor inconvenience for both the wearer and the business being patronized.

Law & Conventions

Some of the above concerns with Google Glass are likely to fade as social conventions develop and adapt around such devices. The idea of society needing to catch up with technology is not a new concept – as Google’s Glass “FAQ” specifically points out, when cameras first became available to consumers, they were initially banned from beaches and parks. This seems ridiculous (if not a little quaint) to us today, but it is important to note that even the legal implications of cameras have not “gone away.” Rather, a mixture of tolerance, usage conventions, non-governmental regulatory practices and laws evolved to deal with cameras (for example, intentionally taking a picture that violates the subject’s reasonable expectation of privacy is still legally actionable, even if the photographer is in a public area). The same evolution is likely to happen with Google Glass. If you have questions about how Google Glass is being used by, or affecting, you or your employees, or have plans to use Google Glass (either personally or in the course of a business), do not hesitate to consult with one of OlenderFeldman’s experienced technology attorneys to discuss the potential legal risks and best practices.

By: Aaron Krowne

In 2013, the California Legislature passed AB 370, an amendment to California’s trailblazing 2003 online consumer privacy law, the California Online Privacy Protection Act (“CalOPPA”). AB 370 took effect January 1, 2014, and adds new requirements to CalOPPA pertaining to consumers’ use of Do-Not-Track (DNT) signals in their web browsers (all major web browsers now include this capability). CalOPPA applies to any website, online service, or mobile application that collects personally identifiable information from consumers residing in California (a “Covered Entity”).

While AB 370 does not mandate a particular response to a DNT signal, it does require two new disclosures in a Covered Entity’s privacy policy: (1) how the site operator responds to a DNT signal (or to other “similar mechanisms”); and (2) whether third parties are performing online tracking on the Covered Entity’s site or service. As an alternative to the descriptive disclosure listed in (1), the Covered Entity may elect to provide a “clear and conspicuous link” in its privacy policy to a “choice program” that offers consumers a choice about tracking. The Covered Entity must clearly describe the effect of a particular choice (e.g., a web interface that allows users to disable the site’s tracking based on their browser’s DNT setting).
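
For context, a DNT signal arrives as a simple HTTP request header (“DNT: 1”), so a site operator’s “response” to it is ultimately a server-side (or script-level) decision that the privacy policy must then describe accurately. The following is a minimal sketch, assuming a Flask-based site; the route and the choice to skip tracking are illustrative assumptions, not anything prescribed by CalOPPA or AB 370.

```python
# Minimal sketch (assumption: a Flask application) of detecting a browser's
# Do-Not-Track signal and skipping tracking code for that request.
from flask import Flask, request, render_template_string

app = Flask(__name__)

@app.route("/")
def home():
    # Browsers with DNT enabled send the request header "DNT: 1".
    dnt_enabled = request.headers.get("DNT") == "1"

    # Whatever the site actually does here is what the CalOPPA-required
    # disclosure in the privacy policy must accurately describe.
    return render_template_string(
        "<p>Tracking is {{ 'disabled' if dnt else 'enabled' }} for this visit.</p>",
        dnt=dnt_enabled,
    )

if __name__ == "__main__":
    app.run()
```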

While this might all seem simple enough, as with many new laws it has raised many questions about specifics, particularly how to achieve compliance. As a result, on May 21, 2014, the California Attorney General’s Office (the “AG’s Office”) issued a set of new guidelines entitled “Making Your Privacy Practices Public” (the “New Guidelines”).

The New Guidelines

The New Guidelines regarding DNT specifically suggest that a Covered Entity:

  1. Make it easy for a consumer to find the section of the privacy policy in which the online tracking policy is described (e.g., by labeling it “How We Respond to Do Not Track Signals,” “Online Tracking” or “California Do Not Track Disclosures”).
  2. Provide a description of how it responds to a browser’s DNT signal (or to other similar mechanisms), rather than merely linking to a “choice program.”
  3. State whether third parties are or may be collecting personally identifiable information of consumers while they are on a Covered Entity’s website or using a Covered Entity’s service.

In general, when drafting a privacy policy that complies with CalOPPA the New Guidelines recommend that a Covered Entity:

  • Use plain, straightforward language, avoiding technical or legal jargon.
  • Use a format that makes the policy readable, such as a “layered” format (which first shows users a high-level summary of the full policy).
  • Explain its uses of personally identifiable information beyond what is necessary for fulfilling a customer transaction or for the basic functionality of the online service.
  • Whenever possible, provide a link to the privacy policies of third parties with whom it shares personally identifiable information.
  • Describe the choices a consumer has regarding the collection, use and sharing of his or her personal information.
  • Provide “just in time,” contextual privacy notifications when relevant (e.g., when registering, or when the information is about to be collected).

The above is merely an overview and summary of the New Guidelines and therefore does not represent legal advice for any specific scenario or set of facts. Please feel free to contact one of OlenderFeldman’s Internet privacy attorneys, using the link provided below for information and advice regarding particular circumstances.

The Consequences of Non-Compliance with CalOPPA

While the New Guidelines are just that, mere recommendations, CalOPPA has teeth, and the AG’s Office is actively pursuing enforcement; for example, it has already sued Delta Airlines for failure to comply with CalOPPA. A Covered Entity’s privacy policy, despite being discretionary within the general bounds of CalOPPA and written by the Covered Entity itself, has the force of law – including penalties, as discussed below. Thus, a Covered Entity should think carefully about the contents of its privacy policy; over-promising could result in completely unnecessary legal liability, but under-disclosing could also result in avoidable litigation. Furthermore, liability under CalOPPA could arise purely because of miscommunication or inadequate communication between a Covered Entity’s engineers and its management or legal departments, or because of a failure to keep sufficiently apprised of what information third parties (e.g., advertising networks) are collecting.

CalOPPA provides a Covered Entity with a 30-day grace period to post or correct its privacy policy after being notified by the AG’s Office of a deficiency. However, if the Covered Entity has not remedied the defect by the expiration of the grace period, it can be found to be in violation for failing to comply with: (1) CalOPPA’s legal requirements for the policy, or (2) the provisions of the Covered Entity’s own posted policy. The failure may be either knowing and willful, or negligent and material. Penalties for failure to comply can amount to $2,500 per violation. As mentioned above, non-California entities may also be subject to CalOPPA, and CalOPPA-based judicial orders are likely to be enforceable in any jurisdiction within the United States.

While the broad brushstrokes of CalOPPA and the new DNT requirements are simple, there are many potential pitfalls, and complete, real-world compliance is likely to be tricky to achieve. Pre-emptive privacy planning can help avoid the legal pitfalls, so if you have any questions or concerns we recommend you contact one of OlenderFeldman’s certified and experienced privacy attorneys.

Navigating the Privacy Minefield - Online Behavioral Tracking

The Internet is fraught with privacy-related dangers for companies. For example, Facebook’s IPO filing contains multiple references to the various privacy risks that may threaten its business model, and it seems like every day a new class action suit is filed against Facebook alleging surreptitious tracking or other breaches of privacy laws. Google has recently faced a resounding public backlash over its new uniform privacy policy, to the extent that 36 state attorneys general are considering filing suit. New privacy legislation and regulatory activities have been proposed, with the Federal Trade Commission (FTC) taking an active role in enforcing compliance with the various privacy laws. The real game changer, however, might be the renewed popularity of “Do Not Track,” which threatens to upend the existing business models of online publishers and advertisers. “Do Not Track” is a proposal that would enable users to opt out of tracking by websites they do not visit directly, including analytics services, advertising networks, and social platforms.

To understand the genesis of “Do Not Track,” it is important to understand what online tracking is and how it works. If you visit any website supported by advertising (as well as many that are not), a number of tracking objects may be placed on your device. These online tracking technologies take many forms, including HTTP cookies, web beacons (clear GIFs), local shared objects or Flash cookies, HTML5 local storage, browser history sniffing, and browser fingerprinting. What they all have in common is that they observe web users’ interests (content consumed, ads clicked, search keywords, and conversions), track their online movements, and build behavioral profiles that are used to determine which ads are selected when a particular webpage is accessed. Collectively, these practices are known as behavioral targeting or behavioral advertising. Tracking technologies are also used for purposes other than behavioral targeting, including site analytics, advertising metrics and reporting, and capping the frequency with which individual ads are displayed to users.
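
To make the mechanics concrete, the sketch below illustrates how a simple third-party “web beacon” works: a tiny image served from an ad network’s domain, embedded in publishers’ pages, that sets a persistent cookie and logs which page the user was viewing. The names and domain here are hypothetical, and a real beacon would return an actual 1x1 transparent GIF.

```python
# Illustrative sketch (assumptions: Flask; the cookie name and ad-network
# domain are hypothetical). A publisher embeds
#   <img src="https://adnetwork.example/pixel.gif">
# in its pages; each page view then hits this endpoint, tying the visit
# to a persistent cookie ID and accumulating a cross-site profile.
import uuid
from flask import Flask, request, make_response

app = Flask(__name__)

@app.route("/pixel.gif")
def pixel():
    # Reuse the visitor's existing cookie ID, or mint a new one.
    visitor_id = request.cookies.get("track_id") or str(uuid.uuid4())

    # The Referer header reveals which publisher page the user was viewing,
    # which is how a behavioral profile accumulates across many sites.
    print(f"visitor {visitor_id} viewed {request.headers.get('Referer')}")

    resp = make_response("", 204)   # a real beacon would return a 1x1 transparent GIF
    resp.set_cookie("track_id", visitor_id, max_age=60 * 60 * 24 * 365)
    return resp

if __name__ == "__main__":
    app.run()
```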

The focus on behavioral advertising by advertisers and e-commerce merchants stems from its effectiveness. Studies have found that behavioral advertising increases click-through rates by as much as 670% when compared with non-targeted advertising. Accordingly, behavioral advertising brings in an average of 2.68 times the revenue of non-targeted advertising.

If behavioral advertising provides benefits such as increased relevance and usefulness to both advertisers and consumers, how has it become so controversial? Traditionally, advertisers have avoided collecting personally identifiable information (PII), preferring anonymous tracking data. However, new analytic tools and algorithms make it possible to combine “anonymous” information to create detailed profiles that can be associated with a particular computer or person. Formerly anonymous information can be re-identified, and companies are taking advantage of this in order to deliver increasingly targeted ads. Some of those practices have led to renewed privacy concerns. For example, Target recently was able to identify that a teenager was pregnant – before her father had any idea. Target had identified certain purchasing patterns among expecting mothers and assigns shoppers a “pregnancy prediction score.” The father was reportedly livid when his high-school-age daughter was repeatedly targeted with various maternity items, only to later find out that, well, Target knew more about his daughter than he did (at least in that regard). Needless to say, some PII is more sensitive than others, but it is almost always alarming when you don’t know what others know about you.
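
A toy sketch of the re-identification point: “anonymous” tracking records often carry quasi-identifiers (ZIP code, birth date, gender, and the like) that, joined against an outside data set, can pick out a single person. All of the data below is invented purely for illustration.

```python
# Toy re-identification example (entirely hypothetical data): join an
# "anonymous" tracking log with an outside record set on shared
# quasi-identifiers to attach a name to a browsing profile.
anonymous_tracking_log = [
    {"zip": "07901", "birthdate": "1994-03-02", "gender": "F",
     "pages_viewed": ["prenatal-vitamins", "maternity-wear"]},
]

outside_records = [
    {"name": "Jane Doe", "zip": "07901", "birthdate": "1994-03-02", "gender": "F"},
    {"name": "John Roe", "zip": "07902", "birthdate": "1961-07-15", "gender": "M"},
]

quasi_identifiers = ("zip", "birthdate", "gender")

for visit in anonymous_tracking_log:
    for person in outside_records:
        if all(visit[key] == person[key] for key in quasi_identifiers):
            # The "anonymous" profile now has a name attached to it.
            print(person["name"], "viewed", visit["pages_viewed"])
```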

Ultimately, most users find it a little creepy to learn that Facebook tracks their web browsing activity through its “Like” button, or that detailed profiles of their browsing history exist that could be associated with them. According to a recent Gallup poll, 61% of individuals polled felt the privacy intrusion presented by tracking was not worth the free access to content, and 67% said that advertisers should not be able to match ads to specific interests based upon websites visited.

The wild west of Internet tracking may soon be coming to a close. The FTC has issued its recommendations for Do Not Track, which it recommends be implemented as a browser-based mechanism through which consumers can make persistent choices to signal whether or not they want to be tracked or receive targeted advertising. However, you shouldn’t wait for an FTC compliance notice to start rethinking your privacy practices.

It goes without saying that companies are required to follow existing privacy laws. However, it is important not only to speak with a privacy lawyer to ensure compliance with existing privacy laws and regulations (the FTC’s compliance division also monitors whether companies comply with their posted privacy policies and terms of service), but also to ensure that your tracking and analytics are done in a non-creepy, non-intrusive manner that is clearly communicated to your customers, lets them opt in, and gives them an opportunity to opt out at their discretion. Your respect for your consumers’ privacy concerns will reap long-term benefits beyond anything that surreptitious tracking could ever accomplish.

Your Privacy Policy Could Have Serious Legal Implications

How many times have you seen website terms of use or privacy policies saying something to the effect of, “We use industry-standard, best-practice technology to guarantee your sensitive financial transactions are 100% safe and secure”? When you publish these types of statements, you potentially expose your business to deceptive and/or unfair practices claims by attorneys general, state and federal regulators, and private plaintiffs, particularly if there is a data breach involving sensitive information. From a business perspective you may not like the more watered-down version: “While we take reasonable measures to try to protect your sensitive information, we cannot guarantee that your information will be completely secure.” However, industry standards are made to be broken by the nefarious crews who make it their work to steal financial account numbers and other sensitive information. If you think you have the panacea for all online risk, speak up! You may have discovered the golden goose. Until then, think about publishing more accurate, responsible information for your users and mitigating your business risk. Besides, accuracy creates user confidence, and these statements can be worded in ways that build trust in your brand.

Data must be protected both in transit and at rest. That means that even after you receive data through an encrypted connection, there are risks related to its storage, and to how it is handled if and when it is unencrypted and used. Interestingly, the recent hack of HBGary Federal, a well-known information security firm, demonstrated that even those charged with the task of protecting information are susceptible. In creating your public-facing policy, have you focused on security only at the transmission stage?

About that encrypted transmission: many times these industry standards utilize Transport Layer Security (TLS) or its predecessor, Secure Sockets Layer (SSL). You know these; they underpin the HTTPS standard. We’re often advised to look for the “HTTPS” in the URL, or the lock icon in our browser. In my travels I am astonished to learn that some people think these technologies are infallible. So, once we see the lock icon, our connection is secure and invincible, right? Well… maybe.

While the detailed workings of TLS and SSL are well beyond this article (and certainly beyond my ability to fully appreciate), it is interesting to note that researchers have found potential vulnerabilities in SSL, or at least in the supporting browser and trusted-authority mechanisms necessary for its use in typical online transactions. This is not to say that TLS and SSL are not safe. Quite the contrary: the encryption technology provides good protection for sensitive online transactions and should definitely be used. However, it must be configured correctly, the Certificate Authority (CA) must act appropriately, and the client (user) machine must not be compromised. The security and confidentiality sought through the use of SSL depend not only upon the encryption algorithm, but also upon the browser and the trust model inherent in public key cryptography.

Regarding the encryption itself: while some proclaim that they use “industry standard” technology, they might not actually be using it. SSL version 2.0 was known to have several security vulnerabilities, and the Payment Card Industry Data Security Standard (PCI DSS) does not recognize SSL version 2.0 as secure; only SSL version 3.0 or the later TLS standards may be considered.
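
As a practical illustration (not a compliance test), the Python standard library makes it easy to see which protocol version a server actually negotiates and to refuse legacy SSL outright. The hostname below is a placeholder, and the minimum-version setting is one reasonable hardening choice rather than a PCI DSS requirement per se.

```python
# Sketch: connect to a server over TLS, refusing anything older than TLS 1.2,
# and report the protocol version and cipher suite that were negotiated.
import socket
import ssl

hostname = "www.example.com"   # placeholder host

context = ssl.create_default_context()             # modern defaults; SSLv2/SSLv3 are disabled
context.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy protocol versions

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print("Negotiated protocol:", tls.version())   # e.g., "TLSv1.3"
        print("Cipher suite:", tls.cipher())
```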

Browsers, by default, come loaded to trust numerous CAs. CAs are entrusted with verifying that a site actually is the site it claims to be. In the past, researchers have found that known vulnerable certificates had not been revoked by some CAs, and that theoretical or actual “collisions,” in which a man-in-the-middle assumes the trusted identity, could happen.

Would it surprise you that, according to some analyses, some servers might still support SSL version 2.0? According to one researcher, as of July 2010 only about 38% of sites using SSL were configured correctly, and 32% contained a previously disclosed renegotiation vulnerability. Other researchers exposed approximately 24 possible exploits (of varying criticality) involving man-in-the-middle attacks on SSL as used in browsers.

Most recently, in February 2011, Trusteer reported on some nasty malware it named OddJob, which targets online banking customers. According to Trusteer, OddJob does not reside on the client and thus avoids detection by typical anti-malware software; a fresh copy of OddJob is fetched from a command-and-control server during each session. OddJob hijacks a session token ID and reportedly allows the hacker to, essentially, ride along in the background with the user’s session. Of most concern, OddJob allows the hackers to stay logged in to one’s account even after the user purports to log out, thus maximizing the potential for undetected (or later-detected) fraud. Significantly, client-side (user-based) malware presents risks, some of which may be beyond the online website’s control.

So, if we presume that no technology will be absolutely 100% safe and secure, and that sufficiently determined bad guys can target someone or something, why tell users something that is not necessarily accurate?

This is only one example of the good practice of vetting what you are actually doing to see how it really measures up, and of how your public-facing policies may seem accurate when they really are not. This article focuses on one aspect of security, but the same types of issues arise in privacy as well. Why expose your business to more regulatory risk if there is a breach? Even if you employed good practices and did your best to try to protect the information, false or misleading statements in your public-facing terms and policies can come back to haunt you.

Appointing experienced information governance individuals or teams, or using outside resources, can help you identify the disconnects and gaps between what exists, and what you say exists.