Protecting data applies both in transit and at rest. That means that even after you receive data through an encrypted connection, risks remain around its storage, and around the moments when it is decrypted and used. Interestingly, the recent hack of HBGary Federal, a well-known information security firm, demonstrated that even those charged with protecting information are susceptible. In creating your public-facing policy, have you focused only on the transmission stage?
As for that encrypted transmission, these industry standards typically rely on Transport Layer Security (TLS) or its predecessor, Secure Sockets Layer (SSL). You know these: they are the technologies behind HTTPS. We are often advised to look for “HTTPS” in the URL, or for the lock icon in our browser. In my travels I am astonished to learn that some people think these technologies are infallible. So once we see the lock, our connection is secure and invincible, right? Well… maybe.
While the detailed workings of TLS and SSL are way beyond this article (and certainly beyond my ability to fully appreciate), it is interesting to note that researchers have found potential vulnerabilities in SSL, or at least in the browsers and trusted-authority concepts necessary for its use in typical online transactions. This is not to say that TLS and SSL are unsafe. Quite the contrary: the encryption provides good protection for sensitive online transactions and should definitely be used. However, it must be configured correctly, the Certificate Authority (CA) must act appropriately, and the client (user) machine must not be compromised. The security and confidentiality sought through SSL depend not only on the encryption algorithm, but also on the browser and on the trust model inherent in public key cryptography.
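To see how much of that protection comes from configuration and trust rather than from encryption alone, consider a minimal sketch using Python's standard `ssl` module. A properly built client context does two things beyond encrypting traffic: it validates the server's certificate chain against the trusted CA store, and it checks that the certificate actually names the host being contacted. Disabling either check leaves the traffic encrypted but removes the assurance of who is on the other end.

```python
import ssl

# A default client context enables the checks that make HTTPS trustworthy:
# certificate chain validation against the trusted CA store, and
# hostname verification against the names in the certificate.
context = ssl.create_default_context()

# Both protections are on by default; turning either off silently
# converts "secure" into "encrypted to an unverified party".
print(context.verify_mode == ssl.CERT_REQUIRED)
print(context.check_hostname)
```

This is only an illustration of the trust checks discussed above, not a recommendation of any particular library or deployment.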
Regarding the encryption itself, some who proclaim that they use “industry standard” technology might not actually be using it. SSL Version 2.0 was known to have several security vulnerabilities. The Payment Card Industry Data Security Standard (PCI DSS) does not recognize SSL Version 2.0 as secure; only SSL Version 3.0 or later TLS standards may be considered.
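Ruling out legacy protocol versions is a configuration choice, not an automatic property of “using SSL.” As a hedged sketch (modern Python; specific version floors will depend on your compliance requirements), a client or server can set an explicit minimum protocol version so that legacy handshakes are refused outright:

```python
import ssl

context = ssl.create_default_context()

# Refuse anything older than TLS 1.2. SSL 2.0 and 3.0 fall below this
# floor, so a peer offering only legacy protocols cannot connect.
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.minimum_version)
```

The point is that “industry standard encryption” in a policy statement should map to a verifiable setting like this one, not to whatever a default install happens to negotiate.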
Browsers ship by default trusting numerous CAs. A CA is entrusted to verify that a site is actually the site it claims to be. In the past, researchers found that known vulnerable certificates had not been revoked by some CAs, and that theoretical or actual “collisions,” in which a man-in-the-middle assumes a trusted identity, could happen.
Would it surprise you that, according to some analysis, some servers might still support SSL Version 2.0? According to one researcher, as of July 2010 only about 38% of sites using SSL were configured correctly, and 32% contained a previously exposed renegotiation vulnerability. Other researchers exposed approximately 24 possible exploits (of varying criticality) involving man-in-the-middle attacks on SSL as used in browsers.
Most recently, in February 2011, Trusteer reported on some nasty malware it named OddJob. OddJob targets online banking customers. According to Trusteer, OddJob does not reside on the client and thus avoids detection by typical anti-malware software; a fresh copy is fetched from a command-and-control server during each session. OddJob hijacks a session token ID and reportedly allows the hacker, essentially, to ride along in the background with the user's session. Of most concern, OddJob allows the hackers to stay logged in to one's account even after the user purports to log out, maximizing the potential for undetected (or later-detected) fraud. Significantly, client-side (user-based) malware presents risks, some of which may be beyond the online website's control.
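OddJob's ride-along works only as long as the hijacked session token remains valid on the server. One server-side mitigation is to destroy the session record itself at logout, so that any copy of the token, legitimate or stolen, stops working. The sketch below uses a hypothetical in-memory session store (the function names and the store are illustrative, not any bank's actual implementation):

```python
import secrets

# Hypothetical in-memory session store: token -> user id.
sessions = {}

def login(user_id):
    # Issue an unguessable session token for this user.
    token = secrets.token_hex(16)
    sessions[token] = user_id
    return token

def is_authenticated(token):
    # A request is authenticated only if its token is still in the store.
    return token in sessions

def logout(token):
    # Destroy the session server-side, not just the client's cookie.
    # A hijacked copy of the token is now useless as well.
    sessions.pop(token, None)

token = login("alice")
print(is_authenticated(token))   # valid while the session exists
logout(token)
print(is_authenticated(token))   # invalid for everyone, attacker included
```

Pairing this with short idle timeouts further narrows the window in which a stolen token is useful, though, as noted above, client-side compromise can never be fully remedied from the server.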
So, if we presume that no technology will be absolutely 100% safe and secure, and if the right bad guys want to target someone or something, why tell users something that is not necessarily accurate?
This is only one example of good practice: vetting what you are actually doing to see how it really measures up, and whether your public-facing policies merely seem accurate when they really are not. This article focuses on one aspect of security, but the same types of issues arise in privacy as well. Why expose your business to additional regulatory risk if there is a breach? Even if you employed good practices and did your best to protect the information, false or misleading statements in your public-facing terms and policies can come back to haunt you.
Appointing experienced information governance individuals or teams, or using outside resources, can help you identify the disconnects and gaps between what exists and what you say exists.