By: Aaron Krowne

You may have heard quite a bit of buzz about “Google Glass” in the past few years – and if you aren’t already intimately familiar with it, you probably at least know it is a new technology from Google that involves a “computer and camera” grafted onto otherwise standard eyeglasses. Indeed, this basic picture is correct, and itself is enough to suggest some interesting (and to some, upsetting) legal and societal questions.

What is Google Glass?

Google Glass, first released in February 2013, is one of the first technologies for “augmented reality” aimed at the consumer market. Augmented reality (“AR”) seeks to take a “normal experience” of the world and add computerized feedback and controls, enabling users to “get more” out of whatever activity they are engaged in. Before the release of Google Glass, this type of technology was mostly limited to the military and other niche applications. AR stands in contrast to typical, modal technology, which requires users to (at least intermittently) “take a break” from the outside world and focus exclusively on the technology (e.g., periodically checking a cell phone for mapping directions). AR also differs from the better-known virtual reality, which seeks to simulate or entirely replace the outside world with a generated, “real-seeming” one (e.g., a flight simulator). AR technology isn’t entirely alien to consumers, either; a simple form already on the consumer market is voice interaction on smartphones (e.g., Siri on the iPhone) – but Google Glass takes AR to another level.

Google Glass is indeed “glasses with a computer and camera” on them, but also, importantly, has a tiny “heads up display” (screen), viewable by the wearer on the periphery of one lens. The camera allows the wearer to capture the world in snapshots or video, but more transformatively, allows the computer to “see” the world through the wearer’s eyes and provide automated feedback about it. The main page of Google Glass’s web site gives a number of examples of how the device can be used, including: hands-free heads-up navigation (particularly useful in non-standard settings, such as cycling or other sports), overlaid instruction while training for sports (e.g., improving one’s golf swing by “seeing” the correct swing), real-time heads-up translations (e.g., for street signs in a foreign country), useful real-time lookup functions such as currency conversions, overlaid site and landmark information (e.g., names, history, and even ratings), and, simply, a hands-free version of more “conventional” functions such as phone calls, digital music playing, and instant messaging. This list only scratches the surface of what Google Glass can do – and surely there are countless other applications that have not yet been imagined.

Until April 2014, Google Glass was only available to software developers, but now it is available to the consumer market, where it sells for about $1,500. While it is tempting to write off such new, pricey technology as of interest mostly to “geeks,” the capabilities of Google Glass are so compelling that it is reasonable to expect that it (and possibly “clones” developed by other manufacturers) will enter considerably more widespread (if not mainstream) use in a few short years. After all, this is what happened with cell phones, and then smartphones, with historically-blinding speed. Here, we will endeavor not to be “blinded” to the legal implications of this new technology, and will provide a brief summary of the most commonly raised societal and legal concerns implicated by Google Glass.

Safety Issues

An almost immediate “gut” concern with a technology that inserts itself into one’s field of view is that it might be distracting, and hence, unsafe in certain situations – most worryingly, while driving. It is often said that the law lags behind technology, but as many as eight states are already considering legislation that would restrict Google Glass (or future Glass-like devices) while driving. Indeed, the law has shown that it can respond quickly to new technology when it is coupled with acute public concern or outrage; consider the few short years between the nascent cell phone driving-distraction concerns of the mid-2000s and the now near-universal bans and restrictions on that sort of use.

More immediately, while some states do not have laws that explicitly mention technologies like Google Glass, their existing laws are written broadly enough (or have been interpreted so as) to ban Google Glass use while driving. For example, California’s “distracted driving” law (Cal. Vehicle Code § 27602) covers “visually displaying […] a video signal […] visible to the driver,” which most certainly includes Google Glass. This interpretation of the California law was confirmed in the recent Abadie case, where the law was applied to a San Francisco driver who had been cited for driving while wearing Google Glass. Luckily for the driver, however, the ticket was dismissed on the grounds that actual use of Google Glass at the time could not be proven.

Are such safety concerns about distraction well-founded? Google and other defenders of Google Glass counter that the device is specifically designed not to be distracting; the projected image sits in the wearer’s periphery, which eliminates the need to “look down” at some other, more conventional device (such as a GPS unit or a smartphone).

The truth seems to be somewhere in the middle, implying a nuanced approach. As the Abadie case illustrates, Google Glass might not always be in use and, therefore, not distracting. And as noted above, Google Glass might even reduce distraction in certain contexts and for certain functions. Yet nearly all states have “distraction” as a category on police traffic accident report forms. Therefore, whether or not laws are ultimately written specifically to cover Google Glass-type technology, its use while driving has already given rise to new, manifest legal risks.

Privacy Issues

The ability to easily and ubiquitously record one’s surroundings naturally triggers privacy concerns. The most troubling is that Google Glass provides private citizens with the technology to surreptitiously record others. Such concerns could even find a legal basis in wiretapping laws currently on the books, such as the Federal Wiretap Act (18 U.S.C. § 2511, et seq.) and its state-law analogues, many of which prohibit recording an “oral” communication unless the parties consent. These laws apply to Google Glass and any other wearable recording device, much as they do to non-wearable recording devices.

There are other privacy concerns relating to the sheer ubiquity of recording: e.g., worry about a “general loss of privacy” due to the transformation of commonplace situations and otherwise ephemeral actions and utterances into preserved, replayable, reviewable, and broadcastable/sharable media. An always-worn, potentially always-on device like Google Glass certainly seems to validate this concern, and is itself sufficient to give rise to inflamed sentiments or outright conflict. Tech blogger Sarah Slocum learned this lesson the hard way in February 2014, when she was assaulted at a San Francisco bar for wearing her Google Glass inside.

Further, Google Glass can support facial recognition, and combined with widespread “tagging” in photos uploaded to social media sites, this capability adds heft (if not urgency) to the vague sense of disquiet. Specifically, Google Glass in combination with tagging would appear to make it exceedingly easy (if not effortless) for identities to be extracted from day-to-day scenes and linked to unwitting third parties’ online profiles – perhaps in places or scenarios they would not want family, employers or “the general public” to see.

Google Glass’s defenders would reply to the above concerns by pointing out that the device is hardly inconspicuous, so one cannot actually record with it “secretly.” Further adding to its obviousness, Google Glass displays a prominent blinking red light when it is recording. Additionally, Google Glass records for only approximately ten seconds by default, and its battery supports only about 45 minutes of total recording, making it significantly inferior to dedicated recording or surveillance devices that have long been (cheaply) available to consumers.

But in the end, it is clear that Google Glass means more recording and photographing will be taking place in public. Further, the fact that “AI” capabilities like facial recognition are not only possible, but integral to Google Glass’s best intended use, suggests that the device will be “pushing the envelope” in ways that challenge people’s general expectation of privacy. This envelope-pushing is likely to generate lawsuits – as well as laws.

Piracy Concerns

Another notable area of concern is “piracy” (the distribution of copyrighted works in violation of a copyright). Because Google Glass can be worn at all times, record, and “see what the wearer sees,” it is inherently capable of easily capturing wearer-viewed media – such as movies, art exhibits, or musical performances – without the consent of the performer or copyright owner. Consumer recording devices are often restricted or banned in such settings for precisely this reason.

Of course, recording still happens – especially with today’s ubiquitous smartphones – but the worry is that if Google Glass is “generally” permitted, customer/viewer recording will be harder to control. This concern was embodied in a recent case where an Ohio man was kicked out of a movie theater and then questioned by Homeland Security personnel (specifically, Immigration and Customs Enforcement, which investigates piracy) for wearing Google Glass to a movie. The man was released without charges, as there was no indication the device had been on. But despite this example’s happy ending for the wearer, such an interaction certainly amounts to more than a minor inconvenience for both the wearer and the business being patronized.

Law & Conventions

Some of the above concerns with Google Glass are likely to fade as social conventions develop and adapt around such devices. The idea of society needing to catch up with technology is not new – as Google’s Glass “FAQ” specifically points out, when cameras first became available to consumers, they were initially banned from beaches and parks. This seems ridiculous (if not a little quaint) to us today, but it is important to note that even the legal implications of cameras have not “gone away.” Rather, a mixture of tolerance, usage conventions, non-governmental regulatory practices and laws evolved to deal with cameras (for example, intentionally taking a picture that violates the target’s reasonable expectation of privacy is still legally actionable, even if the photographer is in a public area). The same evolution is likely to happen with Google Glass. If you have questions about how Google Glass is being used by, or affecting, you or your employees, or have plans to use Google Glass (either personally or in the course of a business), do not hesitate to consult with one of OlenderFeldman’s experienced technology attorneys to discuss the potential legal risks and best practices.

By: Aaron Krowne

In 2013, the California Legislature passed AB 370, an amendment to California’s trailblazing 2003 online consumer privacy law, the California Online Privacy Protection Act (“CalOPPA”). AB 370 took effect January 1, 2014, and adds new requirements to CalOPPA pertaining to consumers’ use of Do-Not-Track (“DNT”) signals in their web browsers (all major web browsers now include this capability). CalOPPA applies to any website, online service, or mobile application that collects personally identifiable information from consumers residing in California (a “Covered Entity”).

While AB 370 does not mandate a particular response to a DNT signal, it does require two new disclosures that must be included in a Covered Entity’s privacy policy: (1) how the site operator responds to a DNT signal (or to other “similar mechanisms”); and (2) whether there are third parties performing online tracking on the Covered Entity’s site or service. As an alternative to the descriptive disclosure listed in (1), the Covered Entity may elect to provide a “clear and conspicuous link” in its privacy policy to a “choice program” which provides consumers a choice about tracking. The Covered Entity must clearly describe the effect of a particular choice (e.g., a web interface which allows users to disable the site’s tracking based on their browser’s DNT).
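Mechanically, the DNT signal the statute refers to is just an HTTP request header sent by the browser. As a rough illustration only (the function names below are hypothetical, and nothing here is guidance on what CalOPPA requires), a site operator that states in its privacy policy that it honors DNT might check the signal server-side like this:

```python
# Hypothetical sketch of detecting a browser's Do-Not-Track signal.
# "DNT" is the HTTP request header from the W3C Tracking Preference
# Expression draft; a value of "1" means the user opts out of tracking.

def dnt_enabled(headers: dict) -> bool:
    """True if the request carries a DNT: 1 header (case-insensitive name)."""
    return headers.get("DNT", headers.get("dnt")) == "1"

def tracking_allowed(headers: dict, site_honors_dnt: bool = True) -> bool:
    """A site whose privacy policy says it honors DNT would disable
    (third-party) tracking whenever the signal is present."""
    return not (site_honors_dnt and dnt_enabled(headers))
```

Whether to honor the signal at all remains the operator’s choice under AB 370; the statute only compels disclosure of whatever the operator actually does.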

While this all might seem simple enough, as with many new laws, AB 370 has raised many questions about specifics – particularly how to achieve compliance. As a result, on May 21, 2014, the California Attorney General’s Office (the “AG’s Office”) issued a set of new guidelines entitled “Making Your Privacy Practices Public” (the “New Guidelines”).

The New Guidelines

The New Guidelines regarding DNT specifically suggest that a Covered Entity:

  1. Make it easy for a consumer to find the section of the privacy policy in which the online tracking policy is described (e.g., by labeling it “How We Respond to Do Not Track Signals,” “Online Tracking” or “California Do Not Track Disclosures”).
  2. Provide a description of how it responds to a browser’s DNT signal (or to other similar mechanisms), rather than merely linking to a “choice program.”
  3. State whether third parties are or may be collecting personally identifiable information of consumers while they are on a Covered Entity’s website or using a Covered Entity’s service.

In general, when drafting a privacy policy that complies with CalOPPA the New Guidelines recommend that a Covered Entity:

  • Use plain, straightforward language, avoiding technical or legal jargon.
  • Use a format that makes the policy readable, such as a “layered” format (which first shows users a high-level summary of the full policy).
  • Explain its uses of personally identifiable information beyond what is necessary for fulfilling a customer transaction or for the basic functionality of the online service.
  • Whenever possible, provide a link to the privacy policies of third parties with whom it shares personally identifiable information.
  • Describe the choices a consumer has regarding the collection, use and sharing of his or her personal information.
  • Provide “just in time,” contextual privacy notifications when relevant (e.g., when registering, or when the information is about to be collected).

The above is merely an overview and summary of the New Guidelines and therefore does not represent legal advice for any specific scenario or set of facts. Please feel free to contact one of OlenderFeldman’s Internet privacy attorneys, using the link provided below for information and advice regarding particular circumstances.

The Consequences of Non-Compliance with CalOPPA

While the New Guidelines are just that – mere recommendations – CalOPPA has teeth, and the AG’s Office is actively pursuing enforcement. For example, it has already sued Delta Airlines for failure to comply with CalOPPA. A Covered Entity’s privacy policy, despite being discretionary within the general bounds of CalOPPA and written by the Covered Entity itself, has the force of law – including penalties, as discussed below. Thus, a Covered Entity should think carefully about the contents of its privacy policy; over-promising could result in completely unnecessary legal liability, but under-disclosing could also result in avoidable litigation. Furthermore, liability under CalOPPA could arise purely because of miscommunication or inadequate communication between a Covered Entity’s engineers and its management or legal departments, or because of failure to keep sufficiently apprised of what information third parties (e.g., advertising networks) are collecting.

CalOPPA provides a Covered Entity with a 30-day grace period to post or correct its privacy policy after being notified of a deficiency by the AG’s Office. If the Covered Entity has not remedied the defect by the expiration of the grace period, it can be found in violation for failing to comply with: (1) CalOPPA’s legal requirements for the policy, or (2) the provisions of the Covered Entity’s own posted policy. The failure may be either knowing and willful, or negligent and material. Penalties can amount to $2,500 per violation. As mentioned above, non-California entities may also be subject to CalOPPA, and CalOPPA-based judicial orders are likely to be enforceable in any jurisdiction within the United States.

While the broad brushstrokes of CalOPPA and the new DNT requirements are simple, there are many potential pitfalls, and complete real-world compliance is likely to be tricky to achieve. Pre-emptive privacy planning can help avoid the legal pitfalls, so if you have any questions or concerns, we recommend you contact one of OlenderFeldman’s certified and experienced privacy attorneys.

New Jersey Law Requires Photocopiers and Scanners To Be Erased Because Of Privacy Concerns

NJ Assembly Bill A-1238 requires the destruction of records stored on digital copy machines under certain circumstances in order to prevent identity theft

By Alice Cheng

Last week, the New Jersey Assembly passed Bill A-1238 in an attempt to prevent identity theft. The bill requires that information stored on photocopy machines and scanners be destroyed before the devices change hands (e.g., when resold or returned at the end of a lease agreement).

Under the bill, owners of such devices are responsible for destroying, or arranging for the destruction of, all records stored on the machines. Most consumers are not aware that digital photocopy machines and scanners store and retain copies of documents that have been printed, scanned, faxed, and emailed on their hard drives. That is, when a document is photocopied, the copier’s hard drive often keeps an image of that document. Thus, anyone who gains possession of the photocopier (i.e., when it is sold or returned) can obtain copies of all documents that were copied or scanned on the machine. This compilation of documents and potentially sensitive information poses serious threats of identity theft.

Any willful or knowing violation of the bill’s provisions may result in a fine of up to $2,500 for the first offense and $5,000 for subsequent offenses. Identity theft victims may also bring legal action against offenders.

To avoid these consequences, businesses should be mindful of the type of information stored on such devices and should ensure that any data is erased before reselling or returning them. Business owners should be especially mindful, as digital copy machines may also contain trade secrets and other sensitive business information.

Check Cloud Contracts for Provisions Related to Privacy, Data Security and Regulatory Concerns

“Cloud” Technology Offers Flexibility, Reduced Costs, Ease of Access to Information, But Presents Security, Privacy and Regulatory Concerns

With the recent introduction of Google Drive, cloud computing services are garnering increased attention from entities looking to more efficiently store data. Specifically, using the “cloud” is attractive due to its reduced cost, ease of use, mobility and flexibility, each of which can offer tremendous competitive benefits to businesses. Cloud computing refers to the practice of storing data on remote servers, as opposed to on local computers, and is used for everything from personal webmail to hosted solutions where all of a company’s files and other resources are stored remotely. As convenient as cloud computing is, it is important to remember that these benefits may come with significant legal risk, given the privacy and data protection issues inherent in the use of cloud computing. Accordingly, it is important to check your cloud computing contracts carefully to ensure that your legal exposure is minimized in the event of a data breach or other security incident.

Cloud computing allows companies convenient, remote access to their networks, servers and other technology resources, regardless of location, thereby creating “virtual offices” that give employees remote access to files and data identical in scope to the access they have in the office. The cloud offers companies flexibility and scalability, enabling them to pool and allocate information technology resources as needed, by using the minimum amount of physical IT resources necessary to service demand. These hosted solutions enable users to easily add or remove storage or processing capacity to accommodate fluctuating business needs. By utilizing only the resources necessary at any given point, cloud computing can provide significant cost savings, which makes the model especially attractive to small and medium-sized businesses. However, the rush to adopt cloud computing for its various efficiencies often comes at the expense of data privacy and security.

The laws that govern cloud computing are (perhaps somewhat counterintuitively) based on the physical location of the cloud provider’s servers, rather than the location of the company whose information is being stored. American state and federal laws concerning data privacy and security vary considerably, while servers in Europe are subject to more comprehensive (and often more stringent) privacy laws. However, this may change, as the Federal Trade Commission (FTC) has been investigating the privacy and security implications of cloud computing as well.

In addition to location-based considerations, companies expose themselves to potentially significant liability depending on the types of information stored in the cloud. Federal, state and international laws all govern the storage, use and protection of certain types of personally identifiable information and protected health information. For example, the Massachusetts Data Security Regulations require all entities that own or license personal information of Massachusetts residents to ensure appropriate physical, administrative and technical safeguards for that personal information (regardless of where the companies are physically located), with fines of up to $5,000 per incident of non-compliance. That means that companies are directly responsible for the actions of their cloud computing service providers. OlenderFeldman LLP notes that some information is inappropriate for storage in the cloud without proper precautions: “We strongly recommend against storing any type of personally identifiable information, such as birth dates or social security numbers, in the cloud. Similarly, sensitive information such as financial records, medical records and confidential legal files should not be stored in the cloud where possible, unless it is encrypted or otherwise protected.” In fact, even a data breach related to non-sensitive information can have serious adverse effects on a company’s bottom line and, perhaps more distressing, its public perception.

Additionally, the information your company stores in the cloud will also be affected by the rules set forth in the privacy policies and terms of service of your cloud provider. Although these terms may seem like legal boilerplate, they may very well form a binding contract which you are presumed to have read and consented to. Accordingly, it is extremely important to have a grasp of what is permitted and required by your cloud provider’s privacy policies and terms of service. For example, the privacy policies and terms of service will dictate whether your cloud service provider is a data processing agent, which will only process data on your behalf or a data controller, which has the right to use the data for its own purposes as well. Notwithstanding the terms of your agreement, if the service is being provided for free, you can safely presume that the cloud provider is a data controller who will analyze and process the data for its own benefit, such as to serve you ads.

Regardless, when sharing data with cloud service providers (or any other third party service providers), it is important to obligate third parties to process data in accordance with applicable law, as well as your company’s specific instructions – especially when the information is personally identifiable or sensitive in nature. This is particularly important because, in addition to the loss of goodwill, most data privacy and security laws hold companies, rather than service providers, responsible for compliance. That means that your company needs to ensure the data’s security, regardless of whether it is in a third party’s (the cloud provider’s) control. It is important for a company to agree with the cloud provider as to the appropriate level of security for the data being hosted. Christian Jensen, a litigation attorney at OlenderFeldman LLP, recommends contractually binding third parties to comply with applicable data protection laws, especially where the law places the ultimate liability on you. “Determine what security measures your vendor employs to protect data,” suggests Jensen. “Ensure that access to data is properly restricted to the appropriate users.” Jensen notes that since data protection laws generally do not specify the levels of commercial liability, it is important to ensure that your contract with your service providers allocates risk via indemnification clauses, limitations of liability and warranties. Businesses should also reserve the right to audit the cloud service provider’s data security and information privacy compliance measures, in order to verify that third party providers are adhering to their stated privacy policies and terms of service. Such audits can be carried out by an independent third party auditor, where necessary.

What do I need to look for in a privacy policy?

Privacy policies are long, onerous and boring. Most consumers never read them, even though they constitute a binding contract. Here is a handy checklist of some quick things to skim for.

As we’ve previously discussed, even “non-sensitive” information can be very sensitive under certain circumstances. When reviewing a company’s privacy policy, you should focus on determining the following:

  • The type of information gathered by the website, including information that is voluntarily provided (e.g., name, date of birth, etc.) and information collected electronically (e.g., tracking cookies).
  • What information is optional (i.e., requested but not required for website use) versus what information you must provide if you want to use the website.
  • With whom your information is shared and, if it is shared with affiliates, the identity of those affiliates. The more information you provide, the more concerned you should be about this answer.
  • How your information is used (e.g., for targeted advertising, general marketing, or selling data to third parties). As above, the more information you provide, the more concerned you should be about this answer.
  • How long the website retains your information, and similarly, what rights you have to have all of your information deleted by the website (including information the website has already shared with third-parties).

Generally speaking, all website users should start with the assumption that all information provided is optional and will ultimately be shared with other companies or individuals. Starting with that assumption makes it psychologically easier to skim through the privacy policy or terms and conditions and pick out the exceptions that may protect your privacy. If you are unable to quickly pick out those exceptions, or if the language is too confusing, you should proceed with caution and assume your information will not be kept confidential – a decision that will dictate how, and whether, you proceed on the website. Better to be safe than sorry with the information you provide.

OlenderFeldman LLP was interviewed by Jennifer Banzaca of the Hedge Fund Law Report for a three-part series entitled, “What Concerns Do Mobile Devices Present for Hedge Fund Managers, and How Should Those Concerns Be Addressed?” (Subscription required; free two-week subscription available.) Some excerpts of the topics Jennifer and Aaron discussed follow. You can read the third entry here.

Preventing Access by Unauthorized Persons

This section highlights steps that hedge fund managers can take to prevent unauthorized users from accessing a mobile device or any transmission of information from a device.  Concerns over unauthorized access are particularly acute in connection with lost or stolen devices.

[Lawyers] recommended that firms require the use of passwords or personal identification numbers (PINs) to access any mobile device that will be used for business purposes.  Aaron Messing, a Corporate & Information Privacy Associate at OlenderFeldman LLP, further elaborated, “We generally emphasize setting minimum requirements for phone security.  You want to have a mobile device lock with certain minimum requirements.  You want to make sure you have a strong password and that there is boot protection, which is activated any time the mobile device is powered on or reactivated after a period of inactivity.  Your password protection needs to be secure.  You simply cannot have a password that is predictable or easy to guess.”

Second, firms should consider solutions that facilitate the wiping (i.e., erasing) of firm data on the mobile device to prevent access by unauthorized users . . . . [T]here are numerous available wiping solutions.  For instance, the firm can install a solution that will facilitate remote wiping of the mobile device if the mobile device is lost or stolen.  Also, to counter those that try to access the mobile device by trying to crack its password, a firm can install software that automatically wipes firm data from the mobile device after a specific number of failed log-in attempts.  Messing explained, “It is also important for firms to have autowipe ability – especially if you do not have a remote wipe capability – after a certain number of incorrect password entries.  Often when a phone is lost or stolen, it is at least an hour or two before the person realizes the mobile device is missing.”
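The auto-wipe behavior described above reduces to a simple counter policy. The following is a minimal, hypothetical sketch of that logic (illustrative only; it does not model any particular MDM vendor’s actual API):

```python
class AutoWipePolicy:
    """Illustrative model of wipe-after-N-failed-attempts: firm data is
    erased after a set number of consecutive failed unlock attempts.
    Hypothetical sketch, not a real mobile device management API."""

    def __init__(self, max_failed_attempts: int = 10):
        self.max_failed_attempts = max_failed_attempts
        self.failed_attempts = 0
        self.wiped = False

    def record_unlock(self, success: bool) -> None:
        if success:
            self.failed_attempts = 0  # a correct password resets the counter
            return
        self.failed_attempts += 1
        if self.failed_attempts >= self.max_failed_attempts:
            self._wipe()

    def _wipe(self) -> None:
        # In practice: destroy encryption keys or erase the managed partition.
        self.wiped = True
```

The design point the quoted passage makes is visible here: the wipe triggers locally, without any network connection, which is why it complements (rather than duplicates) a remote-wipe capability.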

Wipe capability can also be helpful when an employee leaves the firm or changes mobile devices. . . Messing further elaborated, “When an employee leaves, you should have a policy for retrieving proprietary or sensitive information from the employee-owned mobile device and severing access to the network.  Also, with device turnover – if employees upgrade phones – you want employees to agree and acknowledge that you as the employer can go through the old phone and wipe the sensitive aspects so that the next user does not have the ability to pick up where the employee left off.”

If a firm chooses to adopt a wipe solution, it should adopt policies and procedures that ensure that employees understand what the technology does and obtain consent to the use of such wipe solutions.  Messing explained, “What we recommend in many cases is that as a condition of enrolling a device on the company network, employees must formally consent to an ‘Acceptable Use’ policy, which defines all the situations when the information technology department can remotely wipe the mobile device.  It is important to explain how that wipe will impact personal device use and data and employees’ data backup and storage responsibilities.”

Third, a firm should consider adopting solutions that prevent unauthorized users from gaining remote access to a mobile device and its transmissions.  Mobile security vendors offer products to protect a firm’s over-the-air transmissions between the server and a mobile device and the data stored on the mobile device.  These technologies allow hedge fund managers to encrypt information accessed by the mobile device – as well as information being transmitted by the mobile device – to ensure that it is secure and protected.  For instance, mobile devices can retain and protect data with WiFi and mobile VPNs, which provide mobile users with secure remote access to network resources and information.

Fourth, Rege suggested hedge fund managers have a procedure for requiring certificates to establish the identity of the device or a user.  “In a world where the devices are changing constantly, having that mechanism to make sure you always know what device is trying to access your system becomes very important.”
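
Device identity of the sort Rege describes is commonly enforced with client certificates at the TLS layer. A minimal server-side sketch using Python's standard `ssl` module, assuming the firm issues device certificates from its own certificate authority (the CA file path is a hypothetical placeholder):

```python
# Sketch: a TLS server context that refuses any device that cannot present
# a certificate signed by the firm's certificate authority.
import ssl

def make_device_auth_context(ca_file=None):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED   # the connecting device MUST present a cert
    if ca_file:
        # Trust only certificates issued by the firm's own CA
        # (e.g., a hypothetical "firm-device-ca.pem").
        ctx.load_verify_locations(cafile=ca_file)
    return ctx
```

In practice the server would also load its own certificate with `load_cert_chain` before wrapping sockets; the point here is only that `CERT_REQUIRED` is what turns "any device" into "only devices the firm has enrolled."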

Preventing Unauthorized Use by Firm Personnel

Hedge fund managers should be concerned not only by potential threats from external sources, but also by potential threats from unauthorized access and use by firm personnel.

For instance, hedge fund managers should protect against the theft of firm information by firm personnel.  Messing explained, “You want to consider some software to either block or control data being transferred onto mobile devices.  Since some of these devices have a large storage capacity, it is very easy to steal data.  You have to worry not only about external threats but internal threats as well, especially when it comes to mobile devices, you want to have system controls that are put in place to record and maybe even limit the data being taken from or copied onto mobile devices.”

Monitoring Solutions

To detect and prevent unauthorized access to and use of mobile devices, firms can consider remote monitoring.  However, monitoring solutions raise employee privacy concerns, and firms should determine how to balance security needs against those concerns.

Because expectations of privacy differ between firm-provided and personal devices, firms are much more likely to monitor activity on firm-provided mobile devices than on personal mobile devices. . . . In addressing privacy concerns, Messing explained, “You want to minimize the invasion of privacy and make clear to your employees the extent of your access.  When you are using proprietary technology for mobile applications, you can gain a great deal of insight into employee usage and other behaviors that may not be appropriate – especially if not disclosed.  We are finding many organizations with proprietary applications tracking behaviors and preferences without considering the privacy implications.  Generally speaking, you want to be careful how you monitor the personal device if it is also being used for work purposes.  You want to have controls to determine an employee’s compliance with security policies, but you have to balance that with a respect for that person’s privacy.  When it comes down to it, one of the most effective ways of doing that is to ensure that employees are aware of and understand their responsibilities with respect to mobile devices.  There must be education and training that goes along with your policies and procedures, not only with the employees using the mobile devices, but also within the information technology department as well.  You have people whose job it is to secure corporate information, and in the quest to provide the best solution they may not even consider privacy issues.”

As an alternative to remote monitoring, a firm may decide to conduct personal spot checks of employees’ mobile devices to determine if there has been any inappropriate activity.  This solution is less intrusive than remote monitoring, but likely to be less effective in ferreting out suspicious activity.

Policies Governing Archiving of Books and Records

Firms should consider both technology solutions and monitoring of mobile devices to ensure that they are capturing all books and records required to be kept under the firm’s books and records policies and applicable law and regulation.

Also, firms may contemplate instituting a policy of searching employees’ mobile devices, and potentially copying materials from them, to ensure the capture of all such information and communications.  However, searching and copying may raise privacy concerns, and firms should balance recordkeeping requirements against those concerns.  Messing explained, “In the event of litigation or other business needs, the company should image, copy or search an employee’s personal device if it is used for firm business.  Therefore, employees should understand the importance of complying with the firm’s policies.”

Policies Governing Social Media Access and Use by Mobile Devices

Many firms already have policies and procedures in place that ban or restrict the dissemination of business information via social media sites such as Facebook and Twitter, including with respect to the use of firm-provided mobile devices.  Specifically, such a policy could include provisions prohibiting the use of the firm’s name; prohibiting the disclosure of trade secrets; prohibiting the use of company logos and trademarks; addressing the permissibility of employee discussions of competitors, clients and vendors; and requiring disclaimers.

Messing explained, “We advise companies just to educate employees about social media.  If you are going to be on social media, be smart about what you are doing.  To the extent possible, employees should note their activity is personal and not related to the company.  They also should draw distinctions, where possible, between their personal and business activities.  These days it is increasingly blurred.  The best thing to do is just to come up with common sense suggestions and educate employees on the ramifications of certain activities.  In this case, ignorance is usually the biggest issue.”

Ultimately, many hedge fund managers recognize the concerns raised by mobile devices.  However, many also recognize the benefits that can be gained from allowing employees to use such devices.  In Messing’s view, the benefits to hedge fund managers outweigh the costs.  “Everything about a mobile device is problematic from a security standpoint,” Messing said, “but the reality is that the benefits far outweigh the costs in that productivity is greatly enhanced with mobile devices.  It is simply a matter of mitigating the concerns.”

OlenderFeldman LLP was interviewed by Jennifer Banzaca of the Hedge Fund Law Report for a three part series entitled, “What Concerns Do Mobile Devices Present for Hedge Fund Managers, and How Should Those Concerns Be Addressed?” (Subscription required; Free two week subscription available.) Some excerpts of the topics Jennifer and Aaron discussed follow. You can read the second entry here.

Three Steps That Hedge Fund Managers Should Take before Crafting Mobile Device Policies and Procedures

As indicated, before putting pen to paper to draft mobile device policies and procedures, hedge fund managers should take at least the following three steps.  Managers that already have mobile device policies and procedures in place, or that have other policies and procedures that incidentally cover mobile devices, may take the same three steps in revising those policies and procedures.

First, Aaron Messing, a Corporate & Information Privacy Lawyer at OlenderFeldman LLP, advised that hedge fund managers should ensure that technology professionals are integrally involved in developing mobile device policies and procedures.  Technology professionals are vital because they can understand the firm’s technological capabilities, and they can inform the compliance department about the technological solutions available to address compliance risks and to meet the firm’s goals.  Such technology professionals can be manager employees, outside professionals or a combination of both.  The key is that such professionals understand how technology can complement rather than conflict with the manager’s compliance and business goals.

Second, the firm should take inventory of its mobile device risks and resources before beginning to craft mobile device policies and procedures.  Among other things, a hedge fund manager should consider its employees’ access levels; its existing technological capabilities; its budget for addressing the risks of using mobile devices; and the compliance personnel available to monitor compliance with such policies and procedures.  With respect to employee access, a manager should evaluate each employee’s responsibilities, access to sensitive information and historical and anticipated uses of mobile devices to determine the firm’s risk exposure.

With respect to technology, Messing cautioned that mobile device policies and procedures should be supportable by a hedge fund manager’s current technology infrastructure and team.  Alternatively, a manager should be prepared to invest in the required technology and team.  “You should be sure that what you are considering implementing can be supported by your information technology team,” Messing said.  With respect to budgeting, a hedge fund manager should evaluate how much it is willing to spend on technological solutions to address the various risks posed by mobile devices.  Any such evaluation should be informed by accurate pricing, assessment of a range of alternative solutions to address the same risk and a realistic sense of what is necessary in light of the firm’s business, employees and existing resources.  Finally, with respect to personnel, a manager should evaluate how much time the compliance department has available to monitor compliance with any contemplated mobile device policies and procedures.

Third, hedge fund managers should specifically identify their goals in adopting mobile device policies and procedures.  While the principal goal should be to protect the firm’s information and systems, hedge fund managers should also consider potentially competing goals, such as the satisfaction levels of their employees, as expressed through employee preferences and needs.  As Messing explained, “It is not that simple to dictate security policies because you have to take into account the end users.  Ideally, when you are creating a mobile device policy, you want something that will keep end users happy by giving them device freedom while at the same time keeping your data safe and secure.  One of the things that I emphasize the most is that you have to customize your solutions for the individual firm and the individual fund.  You cannot just take a one-size-fits-all policy because if you take a policy and you do not implement it, it can be worse than not having a policy at all.”  OCIE and Enforcement staff members have frequently echoed that last insight of Messing’s.

Aaron and Jennifer also discussed privacy concerns with the use of personal devices for work:

Firm-Provided Devices versus Personal Devices:

As an alternative, some firms have considered adopting policies that require employees to make their personal phones available for periodic and surprise examinations to ensure compliance with firm policies and procedures governing the use of personal phones in the workplace.  However, this solution may not be as effective as some managers might think: many mobile device functions and apps have been created to hide information from viewing, and a user intent on keeping information hidden may be able to take advantage of such functionality to deter a firm’s compliance department from detecting any wrongdoing.  Additionally, Messing explained that such examinations also raise employee privacy concerns.  Hedge fund managers should consider using software that can separate firm information from personal information to maximize the firm’s ability to protect its interests while minimizing the invasion of an employee’s privacy.

Regardless of the policies and procedures that a firm wishes to adopt with respect to the use of personal mobile devices by firm personnel, hedge fund managers should clearly communicate to their employees the level of firm monitoring, access and control that is expected, especially if an employee decides that he or she wishes to use his or her personal mobile device for firm-related activities.

Jennifer and Aaron also discussed controlling access to critical information and systems:

Limiting Access to and Control of Firm Information and Systems

As discussed in the previous article in this series, mobile devices raise many external and internal security threats.  For instance, if a mobile device is lost or stolen, the recovering party may be able to gain access to sensitive firm information.  Also, a firm should protect itself from unauthorized access to and use of firm information and networks by rogue employees.  A host of technology solutions, in combination with robust policies and procedures, can minimize the security risks raised by mobile devices.  The following discussion highlights five practices that can help hedge fund managers to appropriately limit access to and control of firm information and networks by mobile device users.

First, hedge fund managers should grant mobile device access only to such firm information and systems as are necessary for the mobile device user to perform his or her job functions effectively.  This limitation on access should reduce the risks associated with use of the mobile device, particularly risks related to unauthorized access to firm information or systems.
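
This kind of least-privilege access is often implemented as a simple role-to-resource mapping checked on every request. A minimal sketch, with roles and resources invented for illustration:

```python
# Hypothetical role-to-resource mapping; a real firm would drive this from
# its directory service or MDM policy engine rather than a hard-coded dict.
ROLE_PERMISSIONS = {
    "analyst":    {"research", "email"},
    "trader":     {"research", "email", "order_entry"},
    "compliance": {"research", "email", "order_entry", "audit_logs"},
}

def may_access(role, resource):
    """Deny by default: unknown roles and unlisted resources get nothing."""
    return resource in ROLE_PERMISSIONS.get(role, set())
```

The deny-by-default lookup is the important design choice: a mobile user whose role is not enrolled simply sees nothing, which is the access posture the paragraph above recommends.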

Second, hedge fund managers should consider strong encryption solutions to provide additional layers of security with respect to their information.  As Messing explained, “As a best practice, we always recommend firm information be protected with strong encryption.”

Third, a firm should consider solutions that avoid providing direct access to the firm’s information on a mobile device.  For instance, a firm can keep its information in the cloud and require mobile device users to access it there.  By introducing security measures for access to the cloud, the firm can provide layers of protection over and above the security measures designed to deter unauthorized access to the mobile device itself.

Fourth, hedge fund managers should consider solutions that allow them to control the “business information and applications” available via a personal mobile device.  With today’s rapidly evolving technology, solutions are now available that allow hedge fund managers to control those functions that are critical to their businesses while minimizing the intrusion on the personal activities of the mobile device user.  For instance, there are applications that store e-mails and contacts in encrypted compartments that separate business data from personal data.  Messing explained, “Today, there is software to provide data encryption tools and compartmentalize business data, accounts and applications from the other aspects of the phone.  There are also programs that essentially provide an encryption sandbox that can be removed and controlled without wiping the entire device.  When you have that ability to segment off that sensitive information and are able to control that while leaving the rest of the mobile device uncontrolled, that really is the best option when allowing employees to use mobile devices to conduct business.  The solutions available are only limited by the firm’s own technology limitations and what is available for each specific device.”  This compartmentalization also makes it easier to wipe a personal mobile phone if an employee leaves the firm, with minimal intrusion to the employee.

Fifth, hedge fund managers should adopt solutions that prohibit or restrict the migration of their information to areas where they cannot control access to such information.  Data loss prevention (DLP) solutions can provide assistance in this area by offering network protection to detect movement of information across the network.  DLP software can also block data from being moved to local storage, encrypt data and allow the administrator to monitor and restrict use of mobile device storage.

OlenderFeldman LLP was interviewed by Jennifer Banzaca of the Hedge Fund Law Report for a three part series entitled, “What Concerns Do Mobile Devices Present for Hedge Fund Managers, and How Should Those Concerns Be Addressed?” (Subscription required; Free two week subscription available.) Some excerpts of the topics Jennifer and Aaron discussed follow. You can read the first entry here.

Eavesdropping

[A]s observed by Aaron Messing, a Corporate & Information Privacy Lawyer at OlenderFeldman LLP, “Phones have cameras and video cameras, and therefore, the phone can be used as a bugging device.”

Location Privacy

[M]any mobile devices or apps can broadcast the location of the user.  Messing explained that these can be some of the most problematic apps for hedge fund managers because they can communicate information about a firm’s activities through tracking of a firm employee.  For instance, a person tracking a mobile device user may be able to glean information about a firm’s contemplated investments if the mobile device user visits the target portfolio company.  Messing explained, “It is really amazing the amount of information you can glean just from someone’s location.  It can present some actionable intelligence.  General e-mails can have a lot more meaning if you know someone’s location.  Some people think this concern is overblown, but whenever you can collect disparate pieces of information, aggregating all those seemingly innocuous pieces of information can put together a very compelling picture of what is going on.”

Additionally, as Messing explained, “Some hedge fund managers are concerned with location-based social networks and apps, like Foursquare, which advertises that users are at certain places.  You should worry whether that tips someone off as to whom you were meeting with or companies you are potentially investing in.  These things are seemingly harmless in someone’s personal life, but this information could wind up in the wrong hands.  People can potentially piece together all of these data points and perhaps figure out what an employee is up to or what the employee is working on.  For a hedge fund manager, this tracking can have serious consequences.  It is hard to rely on technology to block all of those apps and functions because the minute you address something like Foursquare, a dozen new things just like it pop up.  To some degree you have to rely on education, training and responsible use by your employees.”

Books and Records Retention

Messing explained that while e-mails are generally simple to save and archive, text messages and other messaging types present new challenges for hedge fund managers.  Nonetheless, as Marsh cautioned, “Regardless of the type of messaging system that is used, all types of business-related electronic communications must be captured and archived.  There is no exception to those rules.  There is no exception for people using cell phones.  If I send a text message or if I post something to my Twitter account or Facebook account and it is related to business, it has to be captured.”

Advertising and Communications Concerns

OlenderFeldman’s Messing further explained on this topic, “Social media tends to blur these lines between personal and professional communications because many social media sites do not delineate between personal use and business use.  While there is not any clear guidance on whether using social networking and ‘liking’ various pages constitutes advertising, it is still a concern for hedge fund managers.  You can have your employees include disclaimers that their views are not reflective of the views of the company or that comments, likes or re-Tweets do not constitute an endorsement.  However, you still should have proper policies and procedures in place to address the use of social media, and you have to educate your employees about acceptable usage.”

Today, the Federal Trade Commission (FTC) issued a final report setting forth best practices for businesses to protect the privacy of American consumers and give them greater control over the collection and use of their personal data, entitled “Protecting Consumer Privacy in an Era of Rapid Change: Recommendations for Businesses and Policymakers.” The FTC also issued a brief new video explaining its positions.  Here are the key takeaways from the final report:

  • Privacy by Design. Companies should incorporate privacy protections in developing their products and in their everyday business practices. These include reasonable security for consumer data, limited collection and retention of such data, and reasonable procedures to ensure that such data is accurate.
  • Simplified Choice. Companies should give consumers the option to decide what information is shared about them, and with whom. Companies should also offer that choice at a time and in a context that matters to them, although choice need not be provided for certain “commonly accepted practices” that the consumer would expect.
  • Do Not Track. Companies should support a Do-Not-Track mechanism that provides a simple, easy way for consumers to control the tracking of their online activities.
  • Increased Transparency. Companies should disclose details about their collection and use of consumers’ information, and provide consumers access to the data collected about them.
  • Small Businesses Exempt. The above recommendations do not apply to companies that collect only non-sensitive data from fewer than 5,000 consumers a year, provided they do not share the data with third parties.
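
At the protocol level, the Do-Not-Track mechanism the report contemplates is just a one-character HTTP request header (`DNT: 1`). A sketch of the server-side check a company honoring it would perform:

```python
# Sketch of honoring the DNT request header. Header names are
# case-insensitive in HTTP, so normalize before looking one up.
def tracking_allowed(request_headers):
    """Return False when the visitor has asked not to be tracked (DNT: 1)."""
    headers = {k.lower(): v.strip() for k, v in request_headers.items()}
    return headers.get("dnt") != "1"
```

The hard part, of course, is not this check but wiring every analytics, advertising and social component behind it, which is why the proposal is contentious.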

Interestingly, the FTC’s focus on consumer unfairness, rather than consumer deception, was something that FTC Commissioner Julie Brill hinted to me when we discussed overreaching privacy policies and terms of service at Fordham University’s Big Data, Big Issues symposium earlier this month.

If businesses want to minimize the chances of finding themselves the subject of an FTC investigation, they should be prepared to follow these best practices. If you have any questions about what the FTC’s guidelines mean for your business, please feel free to contact us.

OlenderFeldman’s Aaron Messing gave a presentation on Wednesday at the SES New York 2012 conference about emerging legal issues in search engine optimization (SEO) and online behavioral advertising. His presentation, Legal Considerations for Search & Social in Regulated Industries, focused on search and social media strategies in regulated industries. Regulated industries, which include healthcare, banking, finance, pharmaceuticals and publicly traded companies, among others, are subject to various government regulations, he said, but often lack sufficient guidance regarding acceptable practices in social media, search and targeted advertising.

Messing began with a discussion of common methods that search engine optimization companies use to raise their clients’ sites in the rankings. The top search spots are extremely competitive, and the difference between appearing on the first or second page can make a huge difference in a company’s bottom line. One of the ways that search engines determine the relevancy of a web page is through link analysis: search engines examine which websites link to that page, what the text of those links (the anchor text) says about the page, and the surrounding content. In essence, these links and their surrounding content function as a form of online citation.

A typical method used by SEO companies to raise website rankings is to generate content, using paid affiliates, freelance bloggers, or other webpages under the SEO company’s control, in order to increase the website’s ranking on search engines. However, since this content is mostly for search engine spiders rather than human consumption, it is rarely screened, which can lead to issues with government agencies, especially in the regulated industries. This content also rarely discloses that the author was paid to create it, which can be unfair and deceptive to consumers. SEO companies dislike disclosing paid links and content because search engines penalize paid links. Messing said, “SEO companies are caught between the search engines, who severely penalize disclosure [of paid links], and the FTC, which severely penalizes nondisclosure.”

The main enforcement agency is the Federal Trade Commission, which has the power to investigate and prevent unfair and deceptive trade practices across most industries, though regulated industries have additional enforcement bodies. The FTC rules require full disclosure when there is a “material connection” between a merchant and someone promoting its product, such as a cash payment or a gift item. Suspicious “reviews” or unsubstantiated content can draw attention, especially in regulated industries. “If an FTC lawyer sees one of these red flags, you could attract some very unwanted attention from the government,” Messing noted.

Recently, the FTC has increased its focus on paid links, content and reviews. While the FTC requires mandatory disclosures, it doesn’t specify how those disclosures should be made. This can lead to confusion as to what the FTC considers adequate disclosure, and Messing said he expects the FTC to issue guidance on disclosures in the SEO, social media and mobile devices areas. “There are certain ecommerce laws that desperately need clarification,” said Messing.

Messing stated that clients need to ask what their SEO company is doing, and SEO companies need to tell them, because ultimately both can be held liable for unfair or deceptive content. He recommends ensuring that all claims made in SEO content can be easily substantiated, and recommended building SEO through goodwill. “In the context of regulated industries,” he said, “consumers often visit healthcare or financial websites when they have a specific problem. If you provide them with valuable, reliable and understandable information, they will reward you with their loyalty.”

Messing cautioned companies to be careful of what information they collect for behavioral advertising, and to consider the privacy ramifications. “Data is currency, but the more data a company holds, the more potential liability it is exposed to.”

Messing expects further developments in privacy law, possibly in the form of legislation. In the meantime, he recommends using data responsibly, and in accordance with the data’s sensitivity. “Developing policies for data collection, retention and deletion is crucial. Make sure your policies accurately reflect your practices.” Finally, Messing noted that companies lacking a robust compliance program governing the collection, protection and use of personal information may face a significant risk of a data breach or legal violation, with resulting litigation and a hit to their bottom lines. He recommends speaking to a law firm that is experienced in privacy and legal compliance for businesses to ensure that your practices do not attract regulatory attention.

OlenderFeldman will be speaking at the SES New York 2012 conference about emerging legal issues in search engine optimization and online behavioral advertising. The panel will discuss Legal Considerations for Search & Social in Regulated Industries:

Search in Regulated Industries
Legal Considerations for Search & Social in Regulated Industries
Programmed by: Chris Boggs
Since FDA letters to pharmaceutical companies began arriving in 2009, and with constantly increasing scrutiny of online marketing, many regulated industries have been forced to look for ways to modify their legal terms for marketing and partnering with agencies and other third-party vendors. This session will address the following:

  • Legal rules for regulated industries such as Healthcare/Pharmaceutical, Financial Services, and B2B, B2G
  • Interpretations and discussion around how Internet Marketing laws are incorporated into campaign planning and execution
  • Can a pharmaceutical company comfortably solicit inbound links in support of SEO?
  • Should Financial Services companies be limited from using terms such as “best rates”?

Looks like it will be a great panel. I will post my slideshow after the presentation.

(Updated on 3.22.12 to add presentation below)

Navigating the Privacy Minefield - Online Behavioral Tracking

The Internet is fraught with privacy-related dangers for companies. For example, Facebook’s IPO filing contains multiple references to the various privacy risks that may threaten its business model, and it seems like every day a new class action suit is filed against Facebook alleging surreptitious tracking or other breaches of privacy laws. Google has recently faced a resounding public backlash over its new uniform privacy policy, to the extent that 36 state attorneys general are considering filing suit. New privacy legislation and regulatory activity have been proposed, with the Federal Trade Commission (FTC) taking an active role in enforcing compliance with the various privacy laws. The real game changer, however, might be the renewed popularity of “Do Not Track,” which threatens to upend the existing business models of online publishers and advertisers. “Do Not Track” is a proposal that would enable users to opt out of tracking by websites they do not visit, including analytics services, advertising networks, and social platforms.

To understand the genesis of “Do Not Track,” it is important to understand what online tracking is and how it works. If you visit any website supported by advertising (as well as many that are not), a number of tracking objects may be placed on your device. These online tracking technologies take many forms, including HTTP cookies, web beacons (clear GIFs), local shared objects or flash cookies, HTML5 local storage, browser history sniffers and browser fingerprinting. What they all have in common is that they observe web users’ interests, including content consumed, ads clicked, search keywords and conversions, in order to track online movements and build behavioral profiles that are used to determine which ads are selected when a particular webpage is accessed. Collectively, these techniques are known as behavioral targeting or behavioral advertising. Tracking technologies are also used for purposes other than behavioral targeting, including site analytics, advertising metrics and reporting, and capping the frequency with which individual ads are displayed to users.

The focus on behavioral advertising by advertisers and ecommerce merchants stems from its effectiveness. Studies have found that behavioral advertising increases the click-through rate by as much as 670% when compared with non-targeted advertising. Accordingly, behavioral advertising can bring in an average of 2.68 times the revenue of non-targeted advertising.
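As a quick worked illustration of those figures (the baseline numbers below are hypothetical, and the "670% increase" is read as 7.7 times the untargeted rate):

```python
# Hypothetical baseline figures, for illustration only.
baseline_ctr = 0.001            # assume a 0.1% click-through rate untargeted
behavioral_ctr = baseline_ctr * (1 + 6.70)   # a 670% increase = 7.7x baseline

baseline_revenue = 1.00         # hypothetical revenue per thousand impressions
behavioral_revenue = baseline_revenue * 2.68  # the 2.68x multiple cited

print(round(behavioral_ctr, 4), round(behavioral_revenue, 2))
```

Even small absolute click-through rates translate into large revenue differences at ad-network scale, which is why the industry resists giving the technique up.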

If behavioral advertising provides benefits such as increased relevance and usefulness to both advertisers and consumers, how has it become so controversial? Traditionally, advertisers have avoided collecting personally identifiable information (PII), preferring anonymous tracking data. However, new analytic tools and algorithms make it possible to combine “anonymous” information to create detailed profiles that can be associated with a particular computer or person. Formerly anonymous information can be re-identified, and companies are taking advantage of this in order to deliver increasingly targeted ads. Some of those practices have led to renewed privacy concerns. For example, Target was recently able to identify that a teenager was pregnant – before her father had any idea. It seems that Target identifies certain purchasing patterns common to expecting mothers, and assigns shoppers a “pregnancy prediction score.” Apparently, the father was livid when his high-school age daughter was repeatedly targeted with various maternity items, only to later find out that, well, Target knew more about his daughter than he did (at least in that regard). Needless to say, some PII is more sensitive than other PII, but it is almost always alarming when you don’t know what others know about you.

Ultimately, most users find it a little creepy when they learn that Facebook tracks their web browsing activity through its “Like” button, or that detailed profiles of their browsing history exist that could be associated with them. According to a recent Gallup poll, 61% of individuals polled felt the privacy intrusion presented by tracking was not worth the free access to content, and 67% said that advertisers should not be able to match ads to specific interests based upon websites visited.

The wild west of internet tracking may soon be coming to a close. The FTC has issued its recommendations for Do Not Track, which it recommends be instituted as a browser-based mechanism through which consumers could make persistent choices to signal whether or not they want to be tracked or receive targeted advertising. However, you shouldn’t wait for an FTC compliance notice to start rethinking your privacy practices.
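The browser-based mechanism the FTC describes corresponds to the proposed `DNT` HTTP header, which the browser attaches to every request. A minimal sketch of a server honoring it (the function name `select_ad` is hypothetical):

```python
def select_ad(headers):
    """Return a behaviorally targeted ad only when the browser has not
    asserted a Do Not Track preference via the proposed DNT header."""
    # Under the proposal, "DNT: 1" signals the user opts out of tracking.
    if headers.get("DNT") == "1":
        return "contextual-ad"      # fall back to untargeted content
    return "behavioral-ad"

print(select_ad({"DNT": "1"}))      # tracking declined by the user
print(select_ad({}))                # no preference expressed
```

Note that the header is purely a signal: the whole policy debate is about whether honoring it would be voluntary or legally mandated.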

It goes without saying that companies are required to follow the existing privacy laws. However, it is important not only to speak with a privacy lawyer to ensure compliance with existing privacy laws and regulations (the FTC’s compliance division also monitors whether companies comply with posted privacy policies and terms of service), but also to ensure that your tracking and analytics are done in a non-creepy, non-intrusive manner that is clearly communicated to your customers, enables them to opt in, and gives them an opportunity to opt out at their discretion. Your respect for your consumers’ privacy concerns will reap long-term benefits beyond anything that surreptitious tracking could ever accomplish.

Workplace Privacy and RFID

The Use of RFID In The Workplace Sparks Privacy Concerns

OlenderFeldman recently had the opportunity to speak with Karen Boman of Rigzone about RFID technology and workplace privacy. Although the article focuses on the oil industry, the best practices of openness and transparency are generally applicable to most workplaces. The entire article can be found here, and makes for an engaging and informative read.

RFID technology in and of itself does not pose a threat to privacy – it’s when the technology is deployed in a way not consistent with responsible privacy and information security practices that RFID becomes a problem, said Aaron Messing, associate with Union, N.J.-based OlenderFeldman LLP. Messing handles privacy issues for clients that include manufacturing and e-commerce firms.

Legal issues can arise if a company is tracking its employees secretly, Messing noted, or if it places a tracking device on an employee’s property without permission.

He recommends that clients should follow basic principles of good business practices, including making employees aware they are being monitored and getting written consent.

“Openness and transparency over how data is tracked and what is being used is the best policy, as employees are typically concerned about how information on them is being used,” Messing commented. “We advise clients to limit their tracking of employees to working hours, or when that’s not feasible, they should only access the information they want to track, such as working hours.”

The clients Messing works with that use RFID typically use the technology for tracking inventory, not workers. Messing can see where RFID would have legitimate uses on an oil rig. In the case of oil rigs, RFID tracking can be a good thing in case of emergency, as RFID makes it possible to determine whether all employees have been evacuated or how evacuation plans should be formed, Messing commented.

“It really depends on what the information is being used for,” Messing commented. However, tracking workers without a legitimate reason can result in loss of morale among workers or loss of workers to other companies.

Workers who have RFID lanyards or tags can leave their tags at home once the work day is over to avoid being tracked off-hours. However, employees generally don’t have a lot of rights in terms of privacy while on the job. “Since an employee is being paid to work, the expectation is that employers have a right to track employees’ activities,” said Messing. This activity can include monitoring phone conversations, computer activity, movements throughout a building and bathroom breaks.

However, companies should try to design monitoring programs that are respectful of employees.

“Companies that do things such as block personal email or certain websites and place a lot of restrictions on workers may do more harm than good, since workers don’t like feeling like they’re not trusted or working in a nanny state,” Messing commented.


Massachusetts Data Security Regulations

Service Providers Face New Regulations Covering Personal Information

If your company is a service provider (generally any company providing third-party services, ranging from a payroll provider to an e-commerce hosting provider) or your company utilizes service providers, you need to be aware of the Massachusetts Data Security Regulations (the “Regulations”). The Regulations require that by March 1, 2012, all service provider contracts must contain appropriate security measures to protect the personal information (as described below) of Massachusetts residents. See 201 CMR 17.03(2)(f). All companies that “own or license” personal information of Massachusetts residents, regardless of where the companies are physically located, will need to comply with the Regulations. Additionally, all entities that own or license personal information of Massachusetts residents are required to develop, implement and maintain a written information security program (“WISP”), which lists the administrative, technical and physical safeguards in place to protect personal information.

“Personal information” is defined by the Regulations as a Massachusetts resident’s first and last name, or first initial and last name, in connection with any of the following: (1) Social Security number; (2) driver’s license number or state-issued identification card number; or (3) financial account number, or credit or debit card number.
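The definition is mechanical enough to express as a simple rule. The sketch below is an illustrative reading only (the field names such as `first_initial` and `financial_account` are hypothetical, and nothing here is legal advice):

```python
def is_ma_personal_information(record):
    """Illustrative check of the Regulations' definition: a resident's
    last name plus first name or first initial, combined with at least
    one of the enumerated identifiers."""
    has_name = bool(record.get("last_name")) and (
        bool(record.get("first_name")) or bool(record.get("first_initial")))
    has_identifier = any(record.get(k) for k in (
        "ssn",                  # Social Security number
        "drivers_license",      # driver's license number
        "state_id",             # state-issued ID card number
        "financial_account",    # financial account number
        "credit_or_debit_card", # credit or debit card number
    ))
    return has_name and has_identifier

# Name plus SSN qualifies; a name alone does not.
print(is_ma_personal_information(
    {"first_name": "Jane", "last_name": "Doe", "ssn": "000-00-0000"}))
print(is_ma_personal_information(
    {"first_name": "Jane", "last_name": "Doe"}))
```

Rules like this are useful inside a WISP's data-inventory step: they let you scan your stores and flag which records fall within the Regulations' scope.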

If your company uses service providers, you are responsible for your service providers’ compliance with the Regulations as it relates to your business and your customers. The Regulations are clear that if your service provider receives, stores, maintains, processes, or otherwise has access to personal information of Massachusetts residents, you are responsible for making sure that your service providers maintain appropriate security measures to protect that personal information. Therefore, you should make sure that your agreements with service providers contain appropriate language, obligations and indemnifications to protect your interests and assure compliance by your service provider. If you are a service provider, you need to develop a comprehensive WISP in order to protect yourself from liability.

If you have any questions or concerns regarding the implementation of the Regulations or how it may affect your business, please feel free to contact us.

Despite Facebook’s “Privacy Settings”, Your Information Might Not Be So Private

By Michael Feldman

With over 800 million users, there is a good chance that you, a family member or a business colleague uses Facebook. Many people assume that their posts and information on Facebook are only available to their “friends.” Such an assumption would be wrong for several reasons.

First, your information is only private to the extent you affirmatively check certain boxes for your Facebook page. If you fail to select the appropriate settings, you will be allowing more than your “friends” to view your personal information. Remember that these settings involve not only limiting what the general public can see, but what advertisers and other websites you visit can see about your Facebook page (even if you are not logged on to Facebook at the time). Therefore, consider adjusting your privacy settings in the category marked “Apps, Games and Websites” and “How people bring your info to apps they use.” To maximize your privacy, turn off all platform apps.

Second, unlike Google+, Facebook does not make it easy to create different categories of “friends”, each of which only has access to limited information. Rather, once you make someone your “friend” – whether that person is a true friend, your boss or co-worker, someone you met last night, or even a celebrity you never met – that “friend” has the same access to your personal information that your best “friend” has. Though the user can block off certain “friends” from certain information, the process to do so is neither obvious nor simple. Such sharing of personal information would never occur outside of online social networking sites.

Third, you might never know what personal information Facebook or other social networking sites actually share. As you may have heard, Facebook just settled a complaint by the Federal Trade Commission (“FTC”), which alleged that Facebook deceived consumers by asserting that their information would be private, then making it public. Pursuant to the settlement, Facebook must now be honest in what it tells users, provide users with notice before changing its privacy settings (assuming the user actually reads these) and will undergo privacy audits every 2 years for the next 20 years. The settlement is far from perfect from a consumer viewpoint. It is unclear about whether Facebook can share your information with advertisers – the primary source of Facebook’s revenue. In addition, though Facebook has to disclose its privacy policy to users, there is no requirement that the policies be in language easily understood by its users, as opposed to legalese. Perhaps most disturbing to some is that the settlement keeps Facebook’s users in the dark about the results of the FTC’s investigation. Therefore, the taxpayers who paid for the investigation and the alleged victims – the Facebook users – will not know what privacy violations have already occurred. Thus, Facebook users may never know how their personal information has already been used, sold or distributed.

Fourth, several recent court decisions have held that your Facebook page is not necessarily private. That is, litigants have obtained access to Facebook pages (among other social networking sites like MySpace) to prove their case. For example, in one case, a plaintiff claimed she was injured and unable to participate in activities she previously enjoyed. Against her objection, her adversary obtained access to her Facebook and MySpace pages to prove that the plaintiff was lying. The defendant was even able to gain access to “deleted” information from those pages. Similarly, other courts have held that you have no “right to privacy” in your Facebook or MySpace pages because those companies do not guarantee complete privacy. As a result, employees have been terminated for information they posted online.

Fifth, your “friends” can share your information without your permission. Unauthorized sharing has also occurred as a result of viruses or hackers, both of which are rampant.

Sixth, never assume that what you delete is truly deleted. It is not. “Deleted” information is usually stored for an extended period of time with or without your knowledge.

The bottom line is that you should be very careful when you post information on a social networking site such as Facebook. You should assume that despite your privacy settings, the information may potentially be seen, shared or obtained by people other than your “friends” without your explicit permission or knowledge. Notwithstanding, it is also critical that you take advantage of the privacy settings available and be familiar with the privacy policy of your social networking site to maximize your privacy. You would not allow strangers to wander your house or office, so do not let them wander your Facebook page.

Yesterday, the Federal Trade Commission (FTC) announced two proposed settlements of complaints filed against Ceridian Corporation and Lookout Services, Inc.  Both proposed consent orders require the companies to implement security measures similar to other such settlements, including development and implementation of more robust information security programs, along with biennial security assessments and reporting by qualified personnel for 20 years.

Ceridian provided payroll services allowing input of sensitive employee information such as Social Security numbers.  Lookout provided a tool allowing employers to create and track immigration status information for employees, which also allowed input and storage of employees’ sensitive personal information.

Both companies made security representations on their web-pages and/or through customer contracts creating the impression that the companies used industry standard secure technologies and security practices to safeguard their customers’ employee information.

Hackers breached Ceridian’s online perimeter defenses through a SQL injection attack, resulting in the compromise of the sensitive data.
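To see why SQL injection is on the FTC's list of "commonly known or reasonably foreseeable attacks," consider this minimal, self-contained sketch (using Python's built-in SQLite, with a toy table; it illustrates the vulnerability class, not Ceridian's actual systems):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, ssn TEXT)")
conn.execute("INSERT INTO employees VALUES ('Jane Doe', '000-00-0000')")

malicious = "nobody' OR '1'='1"   # a classic injection payload

# VULNERABLE: string concatenation lets the payload rewrite the query,
# so the WHERE clause becomes always-true and every row is returned.
leaked = conn.execute(
    "SELECT * FROM employees WHERE name = '" + malicious + "'").fetchall()

# SAFER: a parameterized query treats the payload as a literal value,
# so it matches nothing.
safe = conn.execute(
    "SELECT * FROM employees WHERE name = ?", (malicious,)).fetchall()

print(len(leaked), len(safe))
```

The fix (parameterized queries) is exactly the kind of "readily available, free or low-cost defense" the FTC complaint refers to, which is why failing to deploy it reads as unreasonable.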

An employee gained unauthorized access to Lookout’s database by using “predictable resource location” – essentially a brute force attack that uses educated guessing of common naming conventions to reveal hidden files or functionality, bypassing Lookout’s secure log-in page.  In addition, Lookout allegedly allowed a “test” environment to access real data, again enabling the Lookout employee to reach sensitive information by logging in with a “test” username, along with other predictable measures.  Lookout allegedly did not use an intrusion detection system, and did not review logs in a timely manner.

Lookout allegedly made the following claims in marketing materials:

“Although the data is entered via the web, your data will be encoded and transmitted over secured lines to Lookout Services server. This FTP interface will protect your data from interception, as well as, keep the data secure from unauthorized access. Perimeter Defense – Our servers are continuously monitoring attempted network attacks on a 24 x 7 basis, using sophisticated software tools.”

Ceridian allegedly made the following representations on its web-page and in contracts with customers:

“Worry-free Safety & Reliability . . . When managing employee health and payroll data, security is paramount with Ceridian. Our comprehensive security program is designed in accordance with ISO 27000 series standards, industry best practices and federal, state and local regulatory requirements.

Confidentiality and Privacy: [Ceridian] shall use the same degree of care as it uses to protect its own confidential information of like nature, but no less than a reasonable degree of care, to maintain in confidence the confidential information of the [customer].”

Although there are no admissions of liability in the settlements, the alleged liability in Lookout’s situation seems fairly clear.  As alleged, the interface simply did not protect the information, the company did not monitor its network, and sophisticated software tools were seemingly not in use.

The situation for Ceridian is somewhat more troubling.  Its claims and representations focused on the design of its security program, and using “reasonable care.”   The FTC alleged that Ceridian’s practices were not “reasonable.”  Specifically, the Commission alleged that Ceridian: “(1) stored personal information in clear, readable text; (2) created unnecessary risks to personal information by storing it indefinitely on its network without a business need; (3) did not adequately assess the vulnerability of its web applications and network to commonly known or reasonably foreseeable attacks, such as “Structured Query Language” (“SQL”) injection attacks; (4) did not implement readily available, free or low-cost defenses to such attacks; and (5) failed to employ reasonable measures to detect and prevent unauthorized access to personal information.”

It’s pretty much a given that if a hacker is intent on accessing your network, no amount of security layering will necessarily prevent that unauthorized access.  However, certain things are clear from these cases: companies must assess the sensitivity of the information they hold, and design and implement security programs which correspond to the risk associated with that information.  Even if layers of defense are employed, if you handle sensitive data, assessments of the need for encryption, hashing, truncation, tokenization, limitation and minimization, application and network vulnerability testing, and monitoring of the network systems must be considered and implemented where appropriate.
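As one concrete example of the measures listed above, the sketch below shows salted hashing plus truncation applied to an SSN, in place of the clear-text storage the Ceridian complaint criticizes. It is a minimal illustration (the function name `protect_ssn` is hypothetical, and a real deployment would also need key management, access controls, and the other layers discussed here):

```python
import hashlib
import secrets

def protect_ssn(ssn, salt):
    """Store a salted hash (for matching) plus a truncated display form
    (last four digits) instead of the clear-text SSN."""
    digest = hashlib.sha256(salt + ssn.encode()).hexdigest()
    truncated = "***-**-" + ssn[-4:]   # keep only the last four for display
    return digest, truncated

salt = secrets.token_bytes(16)         # per-deployment random salt
digest, shown = protect_ssn("123-45-6789", salt)
print(shown)                           # the clear value never hits storage
```

Hashing lets you later verify a presented SSN (by re-hashing with the same salt) without retaining the readable value, directly addressing allegation (1) in the complaint; truncation limits what a breach can expose.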

It is also extremely important to use language that accurately reflects what is supported in policies (public facing and internal), as well as in contracts and privacy and security addenda.  This is not an area to gloss over as an additional exhibit to a master agreement.  The language of privacy and information security addenda or stand-alone contracts, as well as the promises made in marketing materials, SOWs, websites, etc., must be accurate, and should not downplay risks.  In certain cases, more specific contractual obligations are better than broader “reasonable” clauses.  These might clearly define the security requirements to be implemented, and what can be supported.   A corollary to this, particularly in the SaaS service provider context, is accurately advising the business customers about disclosures and consents to be made to the users and data subjects whose information will be processed through the use of the system.

Additionally, merely advising about all risks and disclaiming responsibility for everything is not sufficient, because of the negative effects on business and marketing.  There is also no guarantee, even where broad advice and a disclaimer concerning security risk are given, that the FTC would not seek to use its “harm based” as opposed to “deception based” approach.  That is: “You handle sensitive information under circumstances where the harm may outweigh the benefit; therefore, you have a concomitant responsibility to protect that information.”

Service providers (and others) handling sensitive information must develop, document, manage, and train on their information security architecture.  The risks and obligations extend well beyond simple security mechanisms to the whole panoply of security layering and defense in depth.