Electronic health records ripe for theft


From a recent POLITICO article:

America’s medical records systems are flirting with disaster, say the experts who monitor crime in cyberspace. A hack that exposes the medical and financial records of hundreds of thousands of patients is coming, they say — it’s only a matter of when.

As health data become increasingly digital and the use of electronic health records booms, thieves see patient records in a vulnerable health care system as attractive bait, according to experts interviewed by POLITICO. On the black market, a full identity profile contained in a single record can bring as much as $500.

The issue has yet to capture attention on Capitol Hill, which has been slow to act on cybersecurity legislation.

The full POLITICO article can be read here.


Law Firms, HIPAA and the “Minimum Necessary Standard” Rule


The HIPAA Omnibus Rule became effective on March 26, 2013. Covered entities and Business Associates had until September 23, 2013 to become compliant with the entirety of the law, including the security rule, the privacy rule, and the breach notification rule. Law firms that do business with a HIPAA-regulated organization and receive protected health information (PHI) are considered Business Associates (BAs) and are subject to all of these regulations, including the security, privacy, and breach notification rules. These rules are very prescriptive in nature and can impose additional procedures and additional costs on a law firm.

Under HIPAA, there is a specific rule covering the use of PHI by both covered entities and Business Associates called the "Minimum Necessary Standard" rule (45 CFR 164.502(b), 164.514(d)). The HIPAA Privacy Rule and the minimum necessary standard are enforced by the U.S. Department of Health and Human Services Office for Civil Rights (OCR). Under this rule, law firms must develop policies and procedures that limit PHI uses, disclosures, and requests to those necessary to carry out the organization's work, including the following (a sketch of how such a policy might be encoded appears after the list):

  • Identification of persons or classes of persons in the workforce who need access to PHI to carry out their duties;
  • For each of those, specification of the category or categories of PHI to which access is needed and any conditions appropriate to such access; and
  • Reasonable efforts to limit access accordingly.
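To make the requirement concrete, here is a minimal sketch of how a firm might encode a minimum-necessary access policy as data. The role names, PHI categories, and conditions are hypothetical illustrations, not terms defined by HIPAA:

```python
# A minimal sketch of a "minimum necessary" access policy encoded as data.
# Role names, PHI categories, and conditions are hypothetical examples,
# not terms defined by HIPAA.

PHI_ACCESS_POLICY = {
    # workforce class -> PHI categories needed and conditions on access
    "litigation_attorney": {
        "categories": {"medical_history", "billing_records"},
        "condition": "active engagement on the matter",
    },
    "records_clerk": {
        "categories": {"billing_records"},
        "condition": "client intake processing only",
    },
}

def may_access(role: str, category: str) -> bool:
    """Allow access only if the role's duties require this PHI category."""
    entry = PHI_ACCESS_POLICY.get(role)
    return entry is not None and category in entry["categories"]

assert may_access("records_clerk", "billing_records")
assert not may_access("records_clerk", "medical_history")
```

The point is not the code itself but the discipline it forces: every workforce class is enumerated, and every PHI category a class can touch is tied to a documented need.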

The minimum necessary standard is based on the theory that PHI should not be used or disclosed when it is not necessary to carry out a particular function. The minimum necessary standard generally requires law firms to take reasonable steps to limit the use or disclosure of PHI to the minimum necessary to represent the healthcare client. The Privacy Rule's minimum necessary requirements are designed to be flexible enough to accommodate the various circumstances of any covered entity.

The first thing firms should understand is that, as Business Associates subject to HIPAA through their access to and use of client data, they are subject to the Minimum Necessary Standard. When a HIPAA-covered entity or a business associate (such as a law firm) uses or discloses PHI, or requests PHI from another covered entity or business associate, it must make "reasonable efforts to limit protected health information to the minimum necessary to accomplish the intended purpose of the use, disclosure, or request."

Law firm information governance professionals need to be aware of this rule and build it into their onboarding processes for healthcare clients.

You Don’t Know What You Don’t Know


The Akron Legal News this week published an interesting editorial on information governance. The story by Richard Weiner discussed how law firms are dealing with the transition from rooms filled with hard-copy records to electronically stored information (ESI), which includes firm business records as well as huge amounts of client eDiscovery content. The story pointed out that ESI flows into the law firm so quickly and in such huge quantities that no one can track it, much less know what it contains. Law firms now face an inflection point: change the way all information is managed, or suffer client dissatisfaction and client loss.

The story pointed out that “in order to function as a business, somebody is going to have to, at least, track all of your data before it gets even more out of control – Enter information governance.”

There are many definitions of information governance (IG) floating around, but the story presented one specifically targeted at law firms: IG is "the rules and framework for managing all of a law firm's electronic data and documents, including material produced in discovery, as well as legal files and correspondence." Richard went on to point out that there are four main tasks to accomplish through the IG process. They are:

  • Map where the data is stored;
  • Determine how the data is being managed;
  • Determine data preservation methodology;
  • Create forensically sound data collection methods.

I would add several more to this list (a sketch of the data-mapping task follows below):

  • Create a process to account for and classify inbound client data such as eDiscovery and regulatory collections.
  • Determine those areas where client information governance practices differ from firm information governance practices.
  • Reconcile those differences with client(s).
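As a starting point for the first task, here is a minimal sketch of mapping where data is stored by tallying a file share's contents by type. The share paths are hypothetical; point the function at real storage roots:

```python
# A minimal sketch of the first IG task: mapping where the data is stored.
# The share paths are hypothetical; point the function at real storage roots.
import os
from collections import Counter

def map_data_stores(roots):
    """Walk each storage root, tallying file counts and bytes by extension."""
    counts, sizes = Counter(), Counter()
    for root in roots:
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                ext = os.path.splitext(name)[1].lower() or "<none>"
                path = os.path.join(dirpath, name)
                try:
                    sizes[ext] += os.path.getsize(path)
                except OSError:
                    continue  # unreadable file; a real tool would log it
                counts[ext] += 1
    return counts, sizes

counts, sizes = map_data_stores([r"\\fileserver\matters", r"\\fileserver\admin"])
for ext, n in counts.most_common(10):
    print(f"{ext}: {n} files, {sizes[ext] / 1e9:.2f} GB")
```

Even a crude inventory like this gives the IG process something to govern; the later tasks (preservation methodology, forensically sound collection) depend on knowing what exists and where.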

As law firms transition to mostly ESI for both firm business and client data, they will need to adopt IG practices and processes to account for and manage these different requirements. Many believe this transition will eventually lead to the incorporation of machine learning techniques into IG, enabling law firm IG processes to develop a much more granular understanding of what the data actually means, not just whether it is a firm business record or part of a client eDiscovery response. This will in turn enable more granular data categorization of all firm information.

Iron Mountain has hosted the annual Law Firm Information Governance Symposium, which has directly addressed many of these topics around law firm IG. The symposium has produced "A Proposed Law Firm Information Governance Framework," a detailed description of the processes to consider as law firms look at adopting an information governance program.

Emails considered “abandoned” if older than 180 days


The Electronic Communications Privacy Act – Part 1

It turns out that those 30-day email retention policies I have been putting down for years may… actually be the best policy.

This may not be a surprise to some of you, but the government can access your emails without a warrant by simply providing a statement (or subpoena) asserting that the emails in question are relevant to an ongoing federal case, criminal or civil.

This disturbing fact is legally justified through the misnamed Electronic Communications Privacy Act of 1986, otherwise known as 18 U.S.C. §§ 2510–2522.

There are some stipulations on the government gaining access to your email:

    • The email must be stored on a server or in remote storage (not on an individual's computer). This obviously targets Gmail, Outlook.com, Yahoo Mail, and others, but what about corporate email administered by third parties, Outlook Web Access, remote workers who VPN into their corporate email servers, PSTs saved on cloud storage…
    • The emails must have already been opened. (Does Outlook's auto-preview affect the state of "being read"?)
    • The emails must be over 180 days old if unopened.

The ECPA (remember, it was written in 1986) starts with the premise that any email (electronic communication) stored on a server longer than 180 days had to be junk email and abandoned. In addition, the assumption is that if you opened an email and left it on a "third-party" server for storage, you were giving that third party access to your mail and giving up any privacy interest you had, which in reality is happening with several well-known email cloud providers (read the terms and conditions). In 1986 the expectation was that you would download your emails to your local computer and then either delete them or print out hard copies for record keeping. So the rules put in place in 1986 made sense: unopened email less than 180 days old was still in transit and could be secured by the authorities only with a warrant (see below); opened email, or mail stored for longer than 180 days, was considered non-private or abandoned, so the government could access it with a subpoena (an administrative request) – in effect, simply by asking for it.
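To make the 1986 logic explicit, here is a simplified sketch of the rules described above. It is illustrative only, and emphatically not legal advice:

```python
# A simplified sketch of the 1986 ECPA rules described above.
# Illustrative only, and emphatically not legal advice.
from datetime import datetime, timedelta, timezone

def required_process(on_server: bool, opened: bool, received: datetime) -> str:
    """Return the legal process the 1986 rules would require for access."""
    if not on_server:
        return "stored locally: outside these stored-communications rules"
    age = datetime.now(timezone.utc) - received
    if not opened and age <= timedelta(days=180):
        return "warrant (unopened and still 'in transit')"
    return "subpoena (opened, or over 180 days old: treated as abandoned)"

now = datetime.now(timezone.utc)
print(required_process(True, False, now - timedelta(days=30)))   # warrant
print(required_process(True, True, now - timedelta(days=200)))   # subpoena
```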

Warrant versus Subpoena (from the Surveillance Self-Defense website):

To get a warrant, investigators must go to a neutral and detached magistrate and swear to facts demonstrating that they have probable cause to conduct the search or seizure. There is probable cause to search when a truthful affidavit establishes that evidence of a crime will probably be found in the particular place to be searched. Police suspicions or hunches aren’t enough — probable cause must be based on actual facts that would lead a reasonable person to believe that the police will find evidence of a crime.

In addition to satisfying the Fourth Amendment’s probable cause requirement, search warrants must satisfy the particularity requirement. This means that in order to get a search warrant, the police have to give the judge details about where they are going to search and what kind of evidence they are searching for. If the judge issues the search warrant, it will only authorize the police to search those particular places for those particular things.

Subpoenas are issued under a much lower standard than the probable cause standard used for search warrants. A subpoena can be used so long as there is any reasonable possibility that the materials or testimony sought will produce information relevant to the general subject of the investigation.

Subpoenas can be issued in civil or criminal cases and on behalf of government prosecutors or private litigants; often, subpoenas are merely signed by a government employee, a court clerk, or even a private attorney. In contrast, only the government can get a search warrant.

With all of the news stories about Edward Snowden and the NSA over the last year, this revelation brings up many questions for those of us in the eDiscovery, email archiving and cloud storage businesses.

In future blogs I will discuss these questions and others, such as how this affects "abandoned" email archives.

Dark Data Archiving…Say What?



In a recent blog post titled "Bring your dark data out of the shadows," I described what dark data is and why it's important to manage it. To review, the reasons to manage it were:

  1. It consumes costly storage space
  2. It consumes IT resources
  3. It masks security risks
  4. It drives up eDiscovery costs

For the clean-up of dark data (remediation), many, including myself, have suggested that the process should include determining what you really have, disposing of the obvious (duplicates, expired content, and the like), categorizing the rest, and moving the remaining categorized content into information governance systems.
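As one example of the "obvious stuff" step, here is a minimal sketch of grouping exact duplicates by content hash so they can be queued for disposal review. The share path is hypothetical:

```python
# A minimal sketch of one remediation step: grouping exact duplicates by
# content hash so they can be queued for disposal review.
import hashlib
import os
from collections import defaultdict

def find_duplicates(root):
    """Group files under root by the SHA-256 hash of their contents."""
    by_hash = defaultdict(list)
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            digest = hashlib.sha256()
            try:
                with open(path, "rb") as f:
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        digest.update(chunk)
            except OSError:
                continue  # skip unreadable files
            by_hash[digest.hexdigest()].append(path)
    return {d: paths for d, paths in by_hash.items() if len(paths) > 1}

for digest, paths in find_duplicates(r"\\fileserver\legacy-share").items():
    print(digest[:12], "->", len(paths), "copies")
```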

But many conservative-minded people (like many General Counsel) hesitate at the actual deletion of data, even after they have spent the resources and dollars to identify potentially disposable content. The reasoning usually centers on the fear of destroying information that could be potentially relevant in litigation. A prime example is the Arthur Andersen case, where a partner famously sent an email message to employees working on the Enron account reminding them to "comply with the firm's documentation and retention policy," or in other words – get rid of stuff. Many GCs don't want to be put in the position of rightfully disposing of information per policy and then having to explain in court why potentially relevant information was disposed of…

For those who don't want to take the final step of disposing of data, the question becomes "so what do we do with it?" This reminds me of a customer I dealt with years ago. The GC of this 11,000-person company, a very distinguished-looking man, was asked during a meeting that included the company's senior staff what the company's information retention policy was. He quickly responded that he had decided that all information (electronic and hardcopy) from their North American operations would be kept for 34 years. Quickly calculating the company's storage requirements over 34 years with 11,000 employees, I asked him if he had any idea what his storage requirements would be at the end of 34 years. He replied no and asked what they would be. I replied that it would be in the petabyte range and asked him if he understood what the cost of storing that amount of data would be and how difficult it would be to find anything in it.

He smiled and replied, "I'm retiring in two years, I don't care."

The moral of that actual example is that if you have decided to keep large amounts of electronic data for long periods of time, you have to consider the cost of storage as well as how you will search it for specific content when you actually have to.
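For the curious, here is a back-of-the-envelope version of that calculation. The per-employee data growth rate is an assumption for illustration; real rates vary widely by industry:

```python
# Back-of-the-envelope storage estimate for the anecdote above. The
# per-employee growth rate is an assumed figure for illustration only.
employees = 11_000
years = 34
gb_per_employee_per_year = 3  # assumed: email, documents, scans, etc.

total_gb = employees * years * gb_per_employee_per_year
print(f"{total_gb:,} GB, roughly {total_gb / 1e6:.1f} PB")
# 1,122,000 GB, roughly 1.1 PB
```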

In the example above, the GC was planning on storing it on spinning disk, which is costly. Others I have spoken to have decided that the most cost-effective way to store large amounts of data for long periods of time is to keep backup tapes. It's true that backup tapes are relatively cheap (compared to spinning disk), but it is difficult to get anything off of them, they have a relatively high failure rate (again, compared to spinning disk), and they have to be rewritten every few years because backup tapes slowly lose their data over time.

A potential solution is moving your dark data to long term hosted archives. These hosted solutions can securely hold your electronically stored information (ESI) at extremely low costs per gigabyte. When needed, you can access your archive remotely and search and move/copy data back to your site.

An important factor to look for (particularly for eDiscovery) is that moving, storing, indexing, and recovering data from the hosted archive must not alter the metadata in any way. This is critical when responding to a discovery request.
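One way to build confidence on this point is to fingerprint files before upload and compare after restore. Here is a minimal sketch; the stat fields stand in for the fuller metadata (custodian, original path, and so on) a real eDiscovery tool would track:

```python
# A minimal sketch of verifying that an archive round trip preserved both
# content and timestamp metadata. os.stat fields stand in for the fuller
# metadata a real eDiscovery tool would track.
import hashlib
import os

def fingerprint(path):
    """Record a content hash and modified time for later comparison."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {"sha256": digest, "mtime": os.stat(path).st_mtime}

def round_trip_ok(before: dict, restored_path: str) -> bool:
    """True if the restored copy matches the pre-archive fingerprint."""
    return fingerprint(restored_path) == before

# Hypothetical usage:
# before = fingerprint("evidence/contract.msg")  # captured before upload
# ... upload to the hosted archive, later restore ...
# assert round_trip_ok(before, "restored/contract.msg")
```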

For those of you considering starting a dark data remediation project, consider long-term hosted archives as a staging target for the data your GC just won't allow to be disposed of.

Bring your dark data out of the shadows


Dark data, otherwise known as unstructured, unmanaged, and uncategorized information, is a major problem for many organizations. Many organizations don't have the will or the systems in place to automatically index and categorize their rapidly growing unstructured dark data, especially in file shares, and instead rely on employees to manually manage their own information. This reliance is a no-win situation because employees have neither the incentive nor the time to actively manage their information.

Organizations find themselves trying to figure out what to do with huge amounts of dark data, particularly when they're purchasing terabytes of new storage annually because they've run out of space.

Issues with dark data:

  • Consumes costly storage space – Most medium to large organizations provide terabytes of file share storage space for employees and departments to utilize. Employees drag and drop all kinds of work-related files (and personal files like photos, MP3 music files, and personal communications) as well as PSTs and workstation backup files. The vast majority of these files are unmanaged and are never looked at again by the employee or anyone else.
  • Consumes IT resources – IT personnel must perform nightly backups and disaster recovery planning, and must spend time finding or restoring files employees cannot locate.
  • Masks security risks – File shares act as "catch-alls" for employees. Sensitive company information regularly finds its way to these repositories. These file shares are almost never properly secured, so sensitive information like personally identifiable information (PII), protected health information (PHI), and intellectual property can be inadvertently leaked.
  • Raises eDiscovery costs – Almost everything is discoverable in litigation if it pertains to the case. The fact that tens or hundreds of terabytes of unindexed content are being stored on file shares means that those terabytes of files may have to be reviewed to determine whether they are relevant to a given legal case. That can add hundreds of thousands or millions of dollars of additional cost to a single eDiscovery request.

To bring this dark data under control, IT must take positive steps to address the problem. The first step is to look to your file shares; a sketch of such a scan appears below.
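Here is a minimal sketch of that first step: scanning a file share for stale files and risk-prone types such as PSTs and personal media. The staleness threshold, extension list, and share path are hypothetical starting points, not recommendations:

```python
# A minimal sketch of a first-pass file share scan for stale files and
# risk-prone types. Threshold, extensions, and path are hypothetical.
import os
import time

STALE_AFTER_DAYS = 3 * 365
RISKY_EXTENSIONS = {".pst", ".mp3", ".jpg"}

def scan_share(root):
    """Return (stale, risky) lists of file paths found under root."""
    cutoff = time.time() - STALE_AFTER_DAYS * 86400
    stale, risky = [], []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) < cutoff:
                    stale.append(path)
            except OSError:
                continue  # unreadable file; skip
            if os.path.splitext(name)[1].lower() in RISKY_EXTENSIONS:
                risky.append(path)
    return stale, risky

stale, risky = scan_share(r"\\fileserver\shared")
print(f"{len(stale)} stale files, {len(risky)} risk-prone files")
```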

Infobesity in the Healthcare Industry: A Well-Balanced Diet of Predictive Governance is needed


With the rapid advances in healthcare technology, the movement to electronic health records, and the relentless accumulation of regulatory requirements, the shift from records management to information governance is increasingly becoming a necessary reality.

In a 2012 CGOC (Compliance, Governance and Oversight Council) Summit survey, it was found that on average 1% of an organization's data is subject to legal hold, 5% falls under regulatory retention requirements, and 25% has business value. This means that the remaining 69% of an organization's ESI is not needed and could be disposed of without impact to the organization. I would argue that for the healthcare industry, especially for covered entities with medical record stewardship, those retention percentages are somewhat higher, especially the regulatory retention requirements.

According to an April 9, 2013 article on ZDNet.com, by 2015, 80% of new healthcare information will be composed of unstructured information: information that's much harder to classify and manage because it doesn't conform to the "rows & columns" format used in the past. Examples of unstructured information include clinical notes, emails and attachments, scanned lab reports, office work documents, radiology images, SMS, and instant messages. Despite a push for more organization and process in managing unstructured data, healthcare organizations continue to binge on unstructured data with little regard to the overall health of their enterprises.

So how does this info-gluttony (the unrestricted saving of unstructured data because data storage is cheap and saving everything is just easier) affect the health of the organization? Obviously you'll look terrible in horizontal stripes, but also: finding specific information quickly (or at all) becomes impossible, you'll spend more on storage, data breaches could occur more often, litigation/eDiscovery expenses will rise, and you won't want to go to your 15th high school reunion…

To combat this unstructured info-gain, we need an intelligent information governance solution – STAT! And that solution must include a defensible process to systematically dispose of information that is no longer subject to regulatory retention requirements or litigation holds and that no longer has business value.

To enable this information governance/defensible disposal cure for Infobesity, healthcare information governance solutions must be able to extract meaning from all of this unstructured content, or in other words, understand and differentiate content conceptually. Automated classification/categorization of unstructured content cannot accurately or consistently differentiate meaning by relying on simple rules or keywords alone. To accurately automate the categorization and management of unstructured content, a machine learning capability to "train by example" is a precondition. This ability to systematically derive meaning from unstructured content, combined with machine learning to accurately automate information governance, is something we call "Predictive Governance."
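To illustrate what "train by example" means in practice, here is a minimal sketch using scikit-learn (an assumed dependency, installable with `pip install scikit-learn`). The documents and category labels are invented; a real system would need many thousands of reviewed examples per category:

```python
# A minimal "train by example" classification sketch. Documents and
# category labels are invented; real systems need far more training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

examples = [
    ("Patient presented with elevated blood pressure; follow up in 2 weeks.", "clinical_note"),
    ("CBC results attached; hemoglobin within normal range.", "lab_report"),
    ("Q3 supply invoice attached for approval.", "business_record"),
    ("MRI of the lumbar spine shows no acute findings.", "radiology"),
]
texts, labels = zip(*examples)

# The pipeline learns category boundaries from the examples themselves
# rather than from hand-written keyword rules.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["Attached are the patient's latest lab results."]))
```

The contrast with keyword rules is the whole point: reviewers label examples, and the model generalizes from them, rather than someone maintaining an ever-growing list of terms.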

A side benefit of Predictive Governance (you'll actually look taller): previously lost organizational knowledge and business intelligence can be automatically compiled and made available throughout the organization.