By: Aaron Krowne

On July 14, 2014, the New York Attorney General’s office (“NY AG”) released a seminal report on data breaches, entitled “Information Exposed: Historical Examination of Data Breaches in New York State” (the “Report”). The Report presents a wealth of eye-opening (and sobering) information on data breaches in New York and beyond. It is based primarily on the NY AG’s own analysis of data breach reports received during the first eight years (2005 through 2013) of the State’s data breach reporting law (NY General Business Law §899-aa). The Report also cites extensively to outside research, providing a national and international picture of data breaches. Its primary finding is that data breaches, somewhat unsurprisingly, are a rapidly growing problem.

A Growing Menace

The headline statistic of the Report is its finding that data breaches in or affecting New York tripled between 2006 and 2013. During this time frame, 22.8 million personal records of New Yorkers were exposed in nearly 5,000 breaches, affecting more than 3,000 businesses. The “worst” year was 2013, with 7.4 million records exposed, mainly due to the Target and LivingSocial “mega-breaches.” However, while the Report warned that these mega-breaches appear to be a growing trend, businesses of all sizes are affected and at risk.

The Report revealed that hacking was responsible for 43% of breaches and 64% of the total records exposed. Other major causes of breaches include “lost or stolen equipment or documentation” (25% of breaches), “employee error” (21%), and “insider wrongdoing” (11%). It is thus important to note that the majority of breaches still originate internally. However, since 2009 hacking has grown to become the dominant cause of breaches; not coincidentally, that is the same year “crimeware” source code was released and began to proliferate. Hacking was responsible for a whopping 96.4% of the New York records exposed in 2013 (again, largely due to the mega-breaches).

The Report notes that retail services and health care providers are “particularly” vulnerable to data breaches. The following breaks down the number of entities in a particular sector that suffered repeated data breaches: 54 “retail services” entities (a “favorite target of hackers”, per the Report), 31 “financial services” entities, 29 “health care” entities, 27 “banking” entities, and 20 “insurance” entities.

The Report also points out that these breach statistics are likely on the low side. One reason is that New York’s data breach law does not cover all breaches. For example, if only one of the two required types of information ((1) a name, number, personal mark, or other identifier which can be used to identify a natural person, combined with (2) a social security number, government ID or license number, account number, or credit or debit card number along with its security code) is compromised, the reporting requirement is not triggered. Yet the compromise of even one piece of data (e.g., a social security number) can have the same effect as a “breach” under the law, since actual damage to the consumer is still possible (particularly if the breached information can be combined with complementary information obtained elsewhere). Further, even within a reported breach, the full impact may be unknown, leading to the breach being underestimated.

Real Costs: Answering To The Market

Though New York’s data breach law allows the AG to bring suit for actual damages and statutory penalties for failure to notify (notification to all affected consumers and to the NY AG’s office is required and, for large breaches, to consumer reporting agencies as well), such awards are likely to be minor compared with the market impact and direct costs of a breach. The Report estimates that in 2013, breaches cost New York businesses $1.37 billion, based on a per-record cost estimate of $188 (breach cost estimates are from data breach research consultancy the Ponemon Institute). In 2014, however, that per-record estimate has already risen to $201, and the cost for hacked records is even higher, at $277. The total average cost of a breach is currently $5.9 million, up from $5.4 million in 2013. These amounts represent only costs incurred by the businesses hit, including expenses such as investigation, communications, free consumer credit monitoring, and reformulation and implementation of data security measures. Costs to the consumers themselves are not included, so this is, once again, an underestimate.

These amounts also do not include market costs, for which the Target (2013) and Sony PlayStation (2011) mega-breaches are particularly sobering examples. Target experienced a 46% drop in quarterly profit in the wake of the massive breach of its customers’ data, and Sony estimates it lost over $1 billion. Both also suffered significant contemporaneous declines in their stock prices.

Returning to direct costs, the fallout continues: on August 5, 2014, Target announced that the costs of the 2013 breach would exceed its previous estimates, coming in at nearly $150 million.

Recommended Practices

The Report’s banner recommendation, in the face of all the above, is to have an information security plan in place, especially given that 57% of breaches are primarily caused by “inside” issues (i.e., lost/stolen records, employee error, or wrongdoing) that directly implicate information security practices. An information security plan should specifically include:

  • a privacy policy;
  • restricted and controlled access to records;
  • monitoring systems for unauthorized access;
  • use of encryption, secure access to all devices, and non-internet connected storage;
  • uniform employee training programs;
  • reasonable data disposal practices (e.g., using disk wiping programs).

The Report is not especially optimistic about preventing hacking, but we would note that hacking, or at least its efficacy, can also be reduced by implementing an information security plan. For example, the implementation of encryption, and the training of employees to use it uniformly and properly, can be quite powerful.

Whether the breach threat comes to you in the form of employee conduct or an outside hack attempt, don’t be caught wrong-footed by not having an adequate information security plan. A certified privacy attorney at OlenderFeldman can assist you with your business’s information security plan, whether you need to create one for the first time, or simply need help in ensuring that your current information security plan provides the maximum protection to your business.

By: Aaron Krowne

On July 1, 2014, Delaware signed into law HB 295, which provides for the “safe destruction of records containing personal identifying information” (codified at Chapter 50C, Title 6, Subtitle II, of the Delaware Code). The law goes into effect January 1, 2015.

Overview of Delaware’s Data Destruction Law

In brief, the law requires a commercial entity to take reasonable steps to destroy, or arrange for the destruction of, consumers’ personal identifying information when the entity seeks to dispose of that information.

The core of this directive is the duty to “take reasonable steps to destroy” the data. No specific requirements are given, though a few suggestions, such as shredding, erasing, and overwriting, are offered, creating some uncertainty as to what steps an entity must take in order to achieve compliance.

For purposes of this law, “commercial entity” (CE) is defined so as to cover almost any type of business entity except governmental entities (in contrast to, say, Florida’s law). Importantly, Delaware’s definition of a CE clearly includes charities and nonprofits.

The definition of personal identifying information (PII) is central to complying with the law. For purposes of this law, PII is defined as a consumer’s first name or first initial and last name, in combination with one of the following: social security number, passport number, driver’s license or state ID card number, insurance policy number, financial/bank/credit/debit account number, tax or payroll information, or confidential health care information. “Confidential health care information” is intentionally defined broadly so as to cover essentially a patient’s entire health care history.

The definition of PII also, importantly, excludes information that is encrypted, meaning, somewhat surprisingly, that encrypted information is deemed not to be “personal identifying information” under this law. This implies that, if any of the above listed data is encrypted, all of the consumer’s data may be retainable forever – even if judged no longer useful or relevant.

The definition of “consumer” in the law is also noteworthy, as it is defined so as to expressly exclude employees, and only covers individuals (not CEs) engaged in non-business transactions. Thus, rather surprisingly, an individual engaging in a transaction with a CE for their sole proprietorship business is not covered by the law.

Penalties and Enforcement

The law does not provide for any specific monetary damages in the case of “a record unreasonably disposed of.” But it does provide a private right of action, whereby consumers may bring suit for an improper record disposal if they suffer actual damages; the violation, however, must be reckless or intentional, not merely negligent. Additionally, and perhaps to greater effect, the Attorney General may bring either a lawsuit or an administrative action against a CE.

Who Is Not Affected?

The law expressly exempts entities covered by pre-existing pertinent regulations, such as health-related companies covered by the Health Insurance Portability and Accountability Act, as well as banks, financial institutions, and consumer reporting agencies. It remains unclear whether CEs without Delaware customers fall within the scope of this law, as the law is written so broadly that it does not limit itself either to Delaware CEs or to non-Delaware CEs with Delaware customers. Therefore, if your business falls into either category, the safest option is to comply with the provisions of the law.

Implications and Questions

We have already seen above that this facially-simple law contains many hidden wrinkles and leaves some open questions. Some further elaborations and questions include:

  • What are “reasonable steps to destroy” PII? Examples are given, but the intent seems to be to leave the specifics up to the CE’s judgment – including dispatching the job to a third party.
  • The “when” of disposal: the law applies when the CE “seeks to permanently dispose of” the PII. Does, then, the CE judging the consumer information as being no longer useful or necessary count? Or must the CE make an express disposal decision for the law to apply? If it is the latter, can CEs forever-defer applicability of the law by simply never formally “disposing” of the information (perhaps expressly declaring that it is “always” useful)?
  • Responsibility for the information – the law applies to PII “within the custody or control” of the CE. When does access constitute “custody” or “control”? With social networks, “cloud” storage and services, and increasingly portable, “brokered” consumer information, this is likely to become an increasingly tested issue.

Given these considerable questions, as well as the major jurisdictional ambiguity discussed above (and additional ones included in the extended version of this post), potential CEs (Delaware entities, as well as entities who may have Delaware customers) should make sure they are well within the bounds of compliance with this law. The best course of action is to contact an experienced OlenderFeldman attorney, and make sure your privacy and data disposal policies place your business comfortably within compliance of Delaware’s new data destruction law.

By: Aaron Krowne

In a major recent case testing California’s medical information privacy law, the Confidentiality of Medical Information Act, or CMIA (California Civil Code § 56 et seq.), the California Court of Appeal, Third Appellate District, held in Sutter Health v. Superior Court on July 21, 2014 that confidential information covered by the law must be “actually viewed” for the statutory penalty provisions of the law to apply. The implication of this decision is that it just got harder for consumers to sue for a “pure” loss of privacy due to a data breach in California, and possibly beyond.

Not So Strict

Previously, CMIA was assumed to be a strict liability statute, as in the absence of actual damages, a covered party that “negligently released” confidential health information was still subject to a $1,000 nominal penalty. That is, if a covered health care provider or health service company negligently handled customer information, and that information was subsequently taken by a third party (e.g., through the theft of a computer or data device containing such information), that in itself triggered the $1,000 per-instance (and thus, per-customer record) penalty. There was no suggestion that the thief (or other recipient) of the confidential health information needed to see, or do anything with, such information. Indeed, plaintiffs had previously brought cases under such a “strict liability” theory and succeeded in the application of CMIA’s $1,000 penalty.

 Sutter Health turns that theory on its head, with dramatically different results for consumers and California health-related companies.

Sutter was looking at a potential $4 billion fine, stemming from the October 2011 theft of a computer from its offices containing 4 million unencrypted client records. Sutter’s computer was password-protected, but without encryption of the underlying data this measure is easily defeated. Security at the office was light, with no alarm or surveillance cameras. Believing this to be “negligent,” some affected Sutter customers sued under CMIA in a class action. Given the potential amount of the total fine, the stakes were high.

The Court not only ruled against the Sutter customers, but dismissed the case on demurrer, meaning that the Court determined the case was deficient on the pleadings because the Plaintiffs “failed to state a cause of action.” The main reason, according to the Court, was that Plaintiffs failed to allege that an unauthorized person actually viewed the confidential information; therefore there was no breach of confidentiality, as required under CMIA. The Court elaborated that under CMIA “[t]he duty is to preserve confidentiality, and a breach of confidentiality is the injury protected against. Without an actual confidentiality breach there is no injury and therefore no negligence…”.

The Court also introduced the concept of possession, which is absent in CMIA itself, to delimit its new theory interpreting CMIA, saying: “[t]hat [because] records have changed possession even in an unauthorized manner does not [automatically] mean they have been exposed to the view of an unauthorized person.” So, plaintiffs bringing claims under CMIA will now have to allege, and ultimately prove, that their confidential information (1) changed possession in an unauthorized manner, and that (2) it was actually viewed (or, presumably, used) by an unauthorized party.

The Last Word?

This may not be the last word on CMIA, and certainly not the general issue of the burden of proof of harm in consumer data breaches. The problem is that it is extremely difficult to prove that anything nefarious has actually happened with sensitive consumer data post-breach, short of catching the perpetrator and getting a confession, or actually observing the act of utilization, or sale of the data to a third party. Even positive results detected through credit monitoring, such as attempts to use credit cards by unauthorized third parties, do not conclusively prove that a particular breach was the cause of such unauthorized access.

The Sutter court avers, in supporting its ruling, that we do not actually know whether the thief in this case simply stole the computer, wiped the hard drive clean, and sold it as a used computer, in which case there would be no violation of CMIA. Yet, logically, the opposite may just as well have happened: retrieval of the customer data may very well have been the actual goal of the theft. In an environment where sensitive consumer records can fetch as much as $45 each (totaling $180 million for the Sutter customer data), it seems unwise to rely on the assumption that thieves will simply not bother to check for valuable information on stolen corporate computers and digital devices.

Indeed, the Sutter decision perhaps raises as many questions as it answers on where to draw the line for a “breach of confidential information.” To wit: presumably, a hacker downloading unencrypted information would still qualify as a breach under the CMIA as so interpreted. But then, by what substantive rationale does the physical removal of a hard drive in this case not qualify? Additionally, how is it to be determined whether a party actually looked at the data, and precisely who looked at it?

Further, the final chapter on the Sutter breach may not yet be written – the data may still be (or turn out to have been) put to nefarious use, in which case, the court’s ruling will seem premature. Thus, there is likely to be some pushback to Sutter, to the extent that consumers do not accept the lack of punitive options in “open-ended” breaches of this nature, and lawmakers actually intend consumer data-handling negligence laws to have some “bite.”

Conclusion

Naively, it would seem under the Sutter Court’s interpretation, that companies dealing with consumer health information have a “blank check” to treat that information negligently – so long as the actual viewing (and presumably, use) of that information by unauthorized persons is a remote possibility. We would caution against this assumption. First, as above, there may be some pushback (judicially, legislatively, or in terms of public response) to Sutter’s strict requirement of proof of viewing of breached records. But more importantly, there is simply no guarantee that exposed information will not be released and be put to harmful use, and that sufficient proof of such will not surface for use in consumer lawsuits.

One basic lesson of Sutter is that, while the company dodged a bullet thanks to a court’s re-interpretation of a law, it (and its customers) would have been vastly safer had it simply utilized encryption. More broadly, Sutter should have had, and implemented, a better data security policy. Companies dealing with customers’ health information (in California and elsewhere) should take every possible precaution to secure this information.

Do not put your company and your customers at risk for data breaches. Contact a certified privacy attorney at OlenderFeldman to make sure your company’s data security policy provides coverage for all applicable health information laws.

By: Aaron Krowne

On June 20, 2014, the Florida legislature passed SB 1524, the Florida Information Protection Act of 2014 (“FIPA”). The law updates Florida’s existing data breach law, creating one of the strongest laws in the nation protecting consumer personal data through the use of strict transparency requirements. FIPA applies to any entity with customers (or users) in Florida – so businesses with a national reach should take heed.

Overview of FIPA

FIPA requires any covered business to make notification of a data breach within 30 days of when the personal information of Florida residents is implicated in the breach. Additionally, FIPA requires the implementation of “reasonable measures” to protect and secure electronic data containing personal information (such as e-mail address/password combinations and medical information), including a data destruction requirement upon disposal of the data.

Be forewarned: The penalties provided under FIPA pack a strong punch. Failure to make the required notification can result in a fine of up to $1,000 a day for up to 30 days; a $50,000 fine for each 30-day period (or fraction thereof) afterwards; and beyond 180 days, $500,000 per breach. Violations are to be treated as “unfair or deceptive trade practices” under Florida law. Of note for businesses that utilize third party data centers and data processors, covered entities may be held liable for these third party agents’ violations of FIPA.

While the potential fines for not following the breach notification protocols are steep, no private right of action exists under FIPA.
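The tiered notification penalties above can be sketched as a simple schedule. The helper below is purely illustrative; the function name and the treatment of partial periods are our assumptions about one plausible reading of the statute, not language from FIPA itself:

```python
def fipa_late_notice_penalty(days_late: int) -> int:
    """Illustrative reading of FIPA's late-notification penalties:
    up to $1,000/day for the first 30 days, $50,000 per additional
    30-day period (or fraction thereof) up to 180 days, and
    $500,000 per breach beyond 180 days."""
    if days_late <= 0:
        return 0
    if days_late <= 30:
        return 1_000 * days_late
    if days_late <= 180:
        extra_periods = -(-(days_late - 30) // 30)  # ceiling division
        return 30_000 + 50_000 * extra_periods
    return 500_000

# 45 days late: first 30 days at $1,000/day, plus one further 30-day period.
print(fipa_late_notice_penalty(45))
```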

The Notification Requirement

Any covered business that discovers a breach must, generally, notify the affected individuals within 30 days of the discovery of the breach. The business must also notify the Florida Attorney General within 30 days if more than 500 Florida residents are affected.

However, if the cost of sending individual breach notifications is estimated to be over $250,000, or where over 500,000 customers are affected, businesses may satisfy their obligations under FIPA by notifying customers via a conspicuous web site posting and by running ads in the affected areas (as well as filing a report with the Florida AG’s office).

Where a covered business reasonably self-determines that there has been no harm to Florida residents, and therefore notifications are not required, it must document this determination in writing, and must provide such written determination to the Florida AG’s office within 30 days.

Finally, FIPA provides a strong incentive for businesses to encrypt their consumer data, as notification to affected individuals is not required if the personal information was encrypted.

Implications and Responsibilities

 One major take-away of the FIPA responsibilities outlined above is the importance of formulating and writing a data security policy. FIPA requires the implementation of “reasonable measures” to protect and secure personal information, implying that companies should already have such measures formulated. Having a carefully crafted data security policy will also help covered businesses to determine what, if any, harm has occurred after a breach and whether individual reporting is ultimately required.

For all of the above-cited reasons, FIPA adds urgency to a business formulating a privacy and data security policy if it does not have one – and if it already has one, making sure that it meets the FIPA requirements. Should you have any questions do not hesitate to contact one of OlenderFeldman’s certified privacy attorneys to make sure your data security policy adequately responds to breaches as prescribed under FIPA.

In response to questions from concerned business owners, we’ve compiled answers to some of the frequently asked legal questions regarding complying with the Affordable Care Act, or “Obamacare”.

The Affordable Care Act: FAQ For Business Owners

Many businesses are still unaware that they must assess this year whether they are required under the Patient Protection and Affordable Care Act (“ACA”) — otherwise commonly referred to as “Obamacare” — to provide affordable healthcare to their Full Time employees when the health care plan mandate goes into effect on January 1, 2014.

Because of the complex nature of the ACA’s provisions and their nationwide impact, we have prepared this FAQ Sheet to explain in basic terms how the ACA works and to address the most common misunderstandings about the law itself by the business community. Remember: simple mistakes can often be costly to fix.

1. Do the ACA’s Health Care Plan Requirements apply to every business?   No. The ACA only applies to businesses having “Large Employer Status”, which is defined under the ACA as having 50 or more Full Time or Full Time Equivalent (“FTE”) employees.  

A Full Time employee under the ACA is someone who works an average of 30 hours per week (or 130 hours per month) as measured over a period of six (6) consecutive months in the 2013 calendar year.  Hours include both time worked and time paid but not worked (such as holidays, paid time off, and so forth).  But this is not the end of the assessment process because FTE employees also must be taken into account.

To protect against businesses trying to get around the 50 Full Time employee threshold by simply reducing the hours of a few employees below 30 hours per week, the ACA requires that an employer add together the total number of Full Time employees and FTEs for purposes of evaluating “Large Employer Status”.  The number of FTEs is determined by combining the number of hours of service in a given month for all employees averaging less than 30 hours of service per week and dividing that number by 120.  That calculation will yield the number of FTEs that must be added to the total number of Full Time employees to determine whether an employer meets the “Larger Employer Status” threshold.

Example: Business X has 42 Full Time employees and 20 employees who each work on average 80 hours per month.  Using the calculation set forth above, those 20 employees would translate into 13 FTEs  (20 x 80/120).  The total of Full Time employees and FTEs at Business X would therefore be 55 and trigger “Large Employer Status.”  Business X must therefore provide an ACA-compliant health care plan for its Full Time employees in 2014.
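The Large Employer test reduces to simple arithmetic. The minimal sketch below reproduces the Business X example; the function names are ours, not terms from the ACA:

```python
def fte_count(part_time_monthly_hours):
    """FTEs for a month: total hours of all employees averaging fewer
    than 30 hours/week, divided by 120 (fractions are dropped)."""
    return int(sum(part_time_monthly_hours)) // 120

def is_large_employer(full_time_count, part_time_monthly_hours):
    """ACA "Large Employer Status": 50 or more combined full-time
    employees and FTEs."""
    return full_time_count + fte_count(part_time_monthly_hours) >= 50

# Business X: 42 full-time employees plus 20 employees at 80 hours/month
# each -> 1,600 hours / 120 = 13 FTEs -> 55 total, over the threshold.
print(is_large_employer(42, [80] * 20))
```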

2. If a business qualifies as a “Large Employer” under the ACA, does it need to provide healthcare plans for all company employees?  No.

Businesses that are required to have an ACA-compliant plan only need to provide health care benefits to Full Time employees (i.e., those working 30 hours or more per week). 

3. What does a business need to include in its health care plan to become “ACA-compliant”? ACA-compliant Plans must: (A) be “Affordable”; (B) Provide “Essential Benefits”; and (C) Cover 60% of the Plan Cost (otherwise known as “Minimum Value”). 

The Affordability Test.

In order to meet ACA’s definition of an “Affordable” health care plan, the lowest cost option for a Full Time employee’s individual coverage must be less than 9.5% of the employee’s modified adjusted gross household income.  Businesses can evaluate whether they satisfy the 9.5% threshold of an individual employee’s AGI by looking to Box 1 of an employee’s Form W-2 Wages.

Example: Employee X has W-2 Wages of $30,000.  The health care plan requires the employee to contribute $200 per month for individual coverage (or $2,400 per year).  The coverage would therefore meet ACA’s definition of Affordable.  If the plan were to require the employee to contribute $250 per month (or $3,000 per year) it would exceed the 9.5% threshold and therefore the plan would not satisfy the affordability standard.
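The affordability arithmetic can be checked the same way. This sketch assumes the W-2 (Box 1) safe harbor described above; the function name is ours:

```python
def is_affordable(w2_box1_wages: float, monthly_contribution: float) -> bool:
    """A plan's lowest-cost self-only option is "Affordable" if the
    employee's annual contribution is less than 9.5% of W-2 Box 1 wages."""
    return 12 * monthly_contribution < 0.095 * w2_box1_wages

print(is_affordable(30_000, 200))  # $2,400/yr vs. a $2,850 threshold
print(is_affordable(30_000, 250))  # $3,000/yr exceeds the threshold
```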

The “Essential Benefits” Requirements.

An ACA-compliant Plan must also contain “Essential Benefits” unless the plan is grandfathered under the ACA (and most existing plans do not qualify for grandfathered status for reasons not addressed here – consult your healthcare consultant or provider for details).

Such Essential Benefits must include at a minimum:

  • Ambulatory patient services, such as doctor’s visits and outpatient services;
  • Emergency services;
  • Hospitalization;
  • Maternity and newborn care;
  • Mental health and substance use disorder services, including behavioral health treatment;
  • Prescription drugs;
  • Rehabilitative and habilitative services and devices;
  • Laboratory services;
  • Preventive and wellness services and chronic disease management; and
  • Pediatric services, including oral and vision care.

 

In addition, an Essential Benefits small group Plan is subject to annual deductible limits ($2,000 for self coverage and $4,000 for family) and all plans are subject to annual out-of-pocket maximums for Essential Benefits.  For 2014, the out-of-pocket maximums are $6,350 for individual coverage and $12,700 for family coverage.

The “Minimum Value” Test

“Minimum Value” under the ACA means that the employer’s share of its sponsored plan is at least 60% of the total cost of the plan.

Both the CMS.gov and IRS.gov websites have a Minimum Value Calculator that can be downloaded as an Excel spreadsheet and used by the employer to determine whether its sponsored Plan meets the Minimum Value requirements.  This calculation can easily be handled by health care benefits consultants, who will be able to recommend approaches to health care plans to ensure minimum value is achieved.

4. Do businesses have any obligation to notify employees of their rights under the ACA regardless of whether or not they are providing an ACA-compliant Plan in 2014?  Yes.

On or before October 1, 2013, all businesses that would otherwise be subject to the Fair Labor Standards Act (which includes any business in the United States with annual dollar volume of sales or receipts in the amount of $500,000 or more) must provide ACA notification advising employees of their rights and whether the employer will be providing an ACA-compliant plan. 

This notice is known as a “Marketplace Exchange Notice,” which refers to the fact that, if insurance is not offered through an employer, individuals can obtain health care subsidies or purchase health care through State Marketplace Exchanges, which are expected to go into effect later this year.  Sample notice links from the Department of Labor are attached here (employers who offer a health plan) and here (employers who do not offer a health plan).

5. Does the ACA make any changes to COBRA that businesses must comply with?   Yes.

The ACA also requires businesses to notify any employees eligible to receive COBRA benefits that they are entitled to elect coverage under the Marketplace Exchange rather than COBRA.  

A link to the DOL website page regarding new sample COBRA notification forms is available here.

6. What exposure do businesses have if they are required to provide an ACA-compliant health care plan and fail to do so?  The penalties for non-compliance under the ACA range from $2,000 to $3,000 per Full Time employee for each year of non-compliance, with the amount of the fine dependent on the nature of the employer’s failure to comply with the law.

If a business fails to offer Full Time employees a healthcare plan, the ACA penalty is $2,000 per Full Time employee (after the first 30 Full Time employees) for any employee that would otherwise be eligible to receive coverage under an ACA-compliant plan from their employer.

If a business offers a plan to all Full Time employees, but the plan is not ACA-compliant, the business may be fined $3,000 for each Full Time employee that seeks health care coverage through a healthcare exchange rather than through the employer sponsored plan.
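The two penalty computations described above can be sketched as follows (function names are ours; the per-employee amounts are as stated in the ACA penalty rules):

```python
def penalty_no_plan(full_time_count: int) -> int:
    """No plan offered: $2,000 per full-time employee after the first 30,
    assessed per year of non-compliance."""
    return 2_000 * max(0, full_time_count - 30)

def penalty_noncompliant_plan(exchange_enrollees: int) -> int:
    """Plan offered but not ACA-compliant: $3,000 per full-time employee
    who instead obtains coverage through a healthcare exchange."""
    return 3_000 * exchange_enrollees

print(penalty_no_plan(55))             # 25 countable employees at $2,000
print(penalty_noncompliant_plan(10))   # 10 exchange enrollees at $3,000
```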

It is also important to note that because the Internal Revenue Service will be policing ACA compliance, an employer who fails to comply with ACA may expose itself to other federal investigations into employee matters, including a full IRS or Department of Labor audit.

In conclusion, every business MUST carefully consider as part of its planning whether it is subject to the ACA and take steps this year to come into compliance if necessary.  OlenderFeldman LLP is available to assist you in this regard and to make recommendations on health care consultants as well to develop and structure an ACA-compliant plan.  Please contact Howard Matalon, OF’s Employment Partner, for an evaluation of your ACA compliance requirements by email or by using our contact us form.

Protected Health Information (PHI)

Protected Health Information Privacy Concerns are Rapidly Increasing

OlenderFeldman LLP contributed to the recently released report entitled, The Financial Impact of Breached Protected Health Information: A Business Case for Enhanced PHI Security, which can be downloaded for free at http://webstore.ansi.org/phi. As the press release correctly notes, protected health information (PHI) “is now more susceptible than ever to accidental or impermissible disclosure, loss, or theft. Health care organizations (providers, payers, and business associates) are not keeping pace with the growing risks of exposure as a result of electronic health record adoption, the increasing number of organizations handling PHI, and the growing rewards of PHI theft.”

The report provides a 5-step method for assessing security risks and evaluating the “at risk” value of an organization’s PHI, including estimating overall potential data breach costs, and provides a methodology for determining an appropriate level of investment needed to strengthen privacy and security programs and reduce the probability of a breach occurrence.