In my frequent discussions with customers about the benefits of cloud archiving for regulatory, legal, and business reasons, I still find that a large percentage don't worry about archiving corporate social media content.
Everyone leaves the company eventually. Better opportunities, reduction-in-workforce actions, termination, or a manager with the IQ of un-popped popcorn… no matter the reason, everyone eventually leaves. In the UK, these people are referred to as "leavers." In the U.S. they're called departing employees or ex-employees and, depending on the circumstances, more colorful names. However, the way a company handles these departing employees can mean the difference between business as usual and major customer satisfaction issues, project delays, and higher eDiscovery costs.
When an employee is terminated or informs the company they are leaving, the HR organization usually has a checklist of things to do before the employee departs. This includes (but is not limited to):
- Return credit cards
- Submit all outstanding expense reports
- Return laptop
- Return external hard disks
- Return cell phone
- Return building and office keys and access cards
- Remove access and user IDs for all electronic systems
Pretty standard stuff to ensure the employee does not walk off with company equipment or confidential information. However, this process does not address the most valuable company asset: information.
Is Departing Employee Data Valuable?
At its base level, companies employ people to create, process, and utilize information. What happens to the GBs of data the employees create and store over their time at the company? True, much of that information is stored on the employee's laptop, but how long do those laptops sit around before they're re-imaged and re-tasked? In a blog last month, I touched on this specific problem:
"Not long ago I received a call from an obviously panicked ex-coworker from a company that I had left 6 months prior. They were looking for the pricing/ROI calculator that I had developed more than a year prior. A large deal was dependent on them producing a believable ROI by the next morning. I told the ex-coworker that it and all of my content should be on my laptop and even suggested a couple of keywords to search on. Later that day, the same person called back and told me that the company's standard process for departing employees' laptops was to re-image the hard disk after 30 days and distribute it to incoming employees – the ROI model I had spent over a man-month developing was lost forever."
Now consider the numerous other places an employee can store data: file shares, cloud storage accounts (OneDrive, Dropbox), cell phones, SharePoint, OneNote, PSTs, etc. Now also consider how you would find a specific file containing a customer presentation in a short period of time.
If not managed as a valuable company asset, much if not all of that expensive employee data is lost, or at best extremely difficult to find when needed.
Chaotic Data Management Makes You a Target
Let’s address another problem associated with ex-employee data… eDiscovery.
You're the General Counsel at a medium-sized company, and one afternoon you receive an eDiscovery request asking for all responsive data around a specific vendor contract between Feb 4, 2009 and last month. Several ex-employees are named as targets of the discovery.
This is a common scenario many companies face. The issue is this: when responding to discovery, you must look for potentially responsive data in all possible locations unless you can prove the data could not exist due to existing processes. The legal bottom line is this: if you don't know for sure that data doesn't exist somewhere, then you must search for it, no matter the cost. Opposing counsel have become very adept at finding the opposing party's weaknesses, especially around data handling, and exploiting them to force you to spend more money so that you will settle early.
Discovery response also carries a time constraint. The time required to respond has caused many companies to spend huge amounts of money bringing in high-priced discovery consultants to ensure discovery is finished on time.
Both of these issues can be readily addressed with new processes and technology.
Process Change and Technology
Even valuable data is worthless when you can't find it. Most companies I have worked for were very good about the employee exit process, but so far I have never had an HR (or other) person ask me specifically for all of the locations where my data could reside.
The laptop and cell phone are turned in and quickly re-imaged (losing all data), file shares with work files and PSTs are eventually cleaned up, destroying more data, and email accounts are closed. Very quickly, all of that employee data (intellectual property and know-how) is lost.
In reality, solving this problem takes just two steps: first, develop an exit process that ensures the company knows where all employee data resides before the employee leaves; second, migrate all of that ex-employee data to a central repository for long-term management. Many companies are finding that a low-cost "cool" cloud archive is the best and lowest-cost answer.
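The first step, knowing where all of a leaver's data resides, can be automated. Here is a minimal sketch in Python: the function name and the location map are hypothetical, and a real exit process would also have to query mailboxes, SharePoint, OneDrive, and other repositories through their own APIs rather than just walking file systems.

```python
import os
from datetime import datetime, timezone

def inventory_employee_data(locations):
    """Build a manifest of files found in each known storage location.

    `locations` maps a location name (e.g. "laptop", "file_share") to a
    directory path. This sketch only covers file-system locations.
    """
    manifest = []
    for name, root in locations.items():
        for dirpath, _dirnames, filenames in os.walk(root):
            for filename in filenames:
                full_path = os.path.join(dirpath, filename)
                info = os.stat(full_path)
                manifest.append({
                    "location": name,
                    "path": full_path,
                    "bytes": info.st_size,
                    "modified": datetime.fromtimestamp(
                        info.st_mtime, tz=timezone.utc).isoformat(),
                })
    return manifest
```

A manifest like this, built before the laptop is re-imaged, is what makes the second step (migration to the central archive) possible.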
Just because an employee has departed doesn’t mean their intellectual property has to as well. Keep that ex-employee information available for business use, litigation, and regulatory compliance well into the future.
The Industry’s First “Leaver” Archive
Microsoft Azure is that low-cost, cool data repository. Archive360's Archive2Azure provides the management layer for Azure, allowing departing-employee data to be migrated into Azure, encrypted, placed under retention/disposition policies, and run through custom indexing processes. The result is centralized, ultra-low-cost cool storage where grey, low-touch, ex-employee data can be managed and searched quickly.
The Electronic Communications Privacy Act – Part 1
This may not be a surprise to some of you, but the government can access your emails without a warrant simply by providing a statement (or subpoena) that the emails in question are relevant to an ongoing federal case, criminal or civil.
This disturbing fact is legally justified through the misnamed Electronic Communications Privacy Act of 1986, otherwise known as 18 U.S.C. §§ 2510–22.
There are some stipulations on the government gaining access to your email:
- The email must be stored on a server or remote storage (not an individual's computer). This obviously targets Gmail, Outlook.com, Yahoo Mail and others, but what about corporate email administered by third parties, Outlook Web Access, remote workers who VPN into their corporate email servers, or PSTs saved on cloud storage?
- The emails must have already been opened. Does Outlook auto-preview affect the state of “being read”?
- The emails must be over 180 days old if unopened
The ECPA (remember, it was written in 1986) starts with the premise that any email (electronic communication) stored on a server longer than 180 days had to be junk email, and therefore abandoned. In addition, it assumes that if you opened an email and left it on a "third-party" server for storage, you were giving that third party access to your mail and giving up any privacy interest you had, which in reality is exactly what happens with several well-known email cloud providers (read the terms and conditions). In 1986 the expectation was that you would download your emails to your local computer and then either delete them or print a hard copy for record keeping. So the rules put in place in 1986 made sense: unopened email less than 180 days old was still in transit and could be secured by the authorities only with a warrant (see below); opened email, or mail stored for longer than 180 days, was considered non-private or abandoned, so the government could access it with a subpoena (an administrative request), in effect, simply by asking for it.
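The 1986 framework described above boils down to a simple decision rule. The sketch below is an illustrative simplification of that rule, not legal advice, and the function name is my own invention:

```python
def ecpa_access_standard(stored_with_third_party, opened, age_days):
    """Return the legal process the 1986 ECPA framework would require.

    Simplified rule from the discussion above: unopened mail under 180
    days old required a warrant; opened mail, or mail older than 180
    days, could be reached with a mere subpoena.
    """
    if not stored_with_third_party:
        # Mail on an individual's own computer falls outside these rules.
        return "outside these stored-communications rules"
    if not opened and age_days <= 180:
        return "warrant"
    return "subpoena"
```

So an unopened message sitting on a cloud server for 181 days drops, under this logic, from warrant protection to subpoena reach overnight.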
Warrant versus Subpoena: (from the Surveillance Self-Defense website)
To get a warrant, investigators must go to a neutral and detached magistrate and swear to facts demonstrating that they have probable cause to conduct the search or seizure. There is probable cause to search when a truthful affidavit establishes that evidence of a crime will probably be found in the particular place to be searched. Police suspicions or hunches aren't enough: probable cause must be based on actual facts that would lead a reasonable person to believe that the police will find evidence of a crime.
In addition to satisfying the Fourth Amendment’s probable cause requirement, search warrants must satisfy the particularity requirement. This means that in order to get a search warrant, the police have to give the judge details about where they are going to search and what kind of evidence they are searching for. If the judge issues the search warrant, it will only authorize the police to search those particular places for those particular things.
Subpoenas are issued under a much lower standard than the probable cause standard used for search warrants. A subpoena can be used so long as there is any reasonable possibility that the materials or testimony sought will produce information relevant to the general subject of the investigation.
Subpoenas can be issued in civil or criminal cases and on behalf of government prosecutors or private litigants; often, subpoenas are merely signed by a government employee, a court clerk, or even a private attorney. In contrast, only the government can get a search warrant.
With all of the news stories about Edward Snowden and the NSA over the last year, this revelation brings up many questions for those of us in the eDiscovery, email archiving and cloud storage businesses.
In future blogs I will discuss these and other questions, such as how this affects "abandoned" email archives.
In a recent blog titled "Bring your dark data out of the shadows", I described what dark data is and why it's important to manage it. To review, the reasons to manage it were:
- It consumes costly storage space
- It consumes IT resources
- It masks security risks
- It drives up eDiscovery costs
For the clean-up of dark data (remediation), many, including myself, have suggested that the remediation process should include determining what you really have, disposing of what can be immediately deleted (obvious content such as duplicates and expired material), categorizing the rest, and moving the remaining categorized content into information governance systems.
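The triage step above can be sketched in a few lines. This is a minimal illustration under simplified assumptions: a single flat retention window, exact-duplicate detection by SHA-256 hash only, and hypothetical field names; real remediation tools apply per-record-class retention schedules and near-duplicate analysis.

```python
import hashlib
from datetime import datetime, timedelta, timezone

def plan_remediation(files, retention_days=2555):
    """Triage files into 'dispose' and 'categorize' piles.

    Each file is a dict with 'path', 'content' (bytes), and 'modified'
    (a timezone-aware datetime). Exact duplicates and content older
    than the assumed ~7-year retention window are flagged for disposal;
    everything else moves on to categorization.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=retention_days)
    seen_digests = set()
    dispose, categorize = [], []
    for f in files:
        digest = hashlib.sha256(f["content"]).hexdigest()
        if digest in seen_digests or f["modified"] < cutoff:
            dispose.append(f["path"])
        else:
            seen_digests.add(digest)
            categorize.append(f["path"])
    return dispose, categorize
```

Even a crude pass like this gives a GC concrete numbers: how much is duplicate, how much is expired, and how much genuinely needs a categorization decision.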
But many conservative-minded people (like many General Counsel) hesitate at the actual deletion of data, even after they have spent the resources and dollars to identify potentially disposable content. The reasoning usually centers on the fear of destroying information that could be potentially relevant in litigation. A prime example is the Arthur Andersen case, where a partner famously sent an email message to employees working on the Enron account reminding them to "comply with the firm's documentation and retention policy", or in other words, to get rid of stuff. Many GCs don't want to be put in the position of rightfully disposing of information per policy and then having to explain in court why potentially relevant information was destroyed.
For those who don't want to take the final step of disposing of data, the question becomes "so what do we do with it?" This reminds me of a customer I dealt with years ago. The GC of this 11,000-person company, a very distinguished-looking man, was asked during a meeting of the company's senior staff what the company's information retention policy was. He quickly responded that he had decided all information (electronic and hardcopy) from their North American operations would be kept for 34 years. Quickly calculating the company's storage requirements over 34 years with 11,000 employees, I asked him if he had any idea what his storage requirements would be at the end of that period. He replied no, and asked what they would be. I told him it would be in the petabyte range, and asked if he understood what storing that amount of data would cost, and how difficult it would be to find anything in it.
He smiled and replied, "I'm retiring in two years. I don't care."
The moral of that actual example: if you have decided to keep large amounts of electronic data for long periods of time, you have to consider the cost of storage as well as how you will search it for specific content when you actually have to.
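The back-of-the-envelope math from that meeting is worth making explicit. The per-employee growth rate below is an assumed figure for illustration, not a measurement from the customer:

```python
def projected_storage_tb(employees, years, gb_per_employee_per_year):
    """Back-of-the-envelope retention volume in TB.

    The per-employee annual growth rate is an assumption supplied by
    the caller, not a measured industry figure.
    """
    return employees * years * gb_per_employee_per_year / 1024.0

# Assuming 5 GB of retained data per employee per year, an 11,000-person
# company keeping everything for 34 years accumulates roughly 1,800 TB,
# i.e. approaching petabyte scale even before richer content types
# (video, imaging, backups) are counted.
```

Run against the GC's own numbers, a one-line estimate like this would have made the eventual storage bill, and the search problem, hard to wave away.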
In the example above, the GC was planning on storing it all on spinning disk, which is costly. Others I have spoken to have decided that the most cost-effective way to store large amounts of data for long periods is to keep backup tapes. It's true that backup tapes are relatively cheap (compared to spinning disk), but it is difficult to get anything off of them, they have a relatively high failure rate (again, compared to spinning disk), and they have to be rewritten every few years because tapes slowly lose their data over time.
A potential solution is moving your dark data to long term hosted archives. These hosted solutions can securely hold your electronically stored information (ESI) at extremely low costs per gigabyte. When needed, you can access your archive remotely and search and move/copy data back to your site.
An important factor to look for (for eDiscovery) is that moving, storing, indexing, and recovering data from the hosted archive must not alter the metadata in any way. This is especially important when responding to a discovery request.
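One way to build confidence that a transfer did not alter anything is a fixity check: hash the content before and after the move and preserve the file timestamps. The sketch below is a minimal file-level illustration; a real archive platform would also preserve application-level metadata (mail headers, custodian fields) and record an audit trail of every transfer.

```python
import hashlib
import shutil

def copy_with_fixity_check(src, dst):
    """Copy a file, preserving timestamps and verifying content fixity.

    Returns the SHA-256 digest of the content so it can be logged as
    evidence that the archived copy matches the original.
    """
    def sha256_of(path):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    before = sha256_of(src)
    shutil.copy2(src, dst)  # copy2 also copies the modification timestamp
    after = sha256_of(dst)
    if before != after:
        raise ValueError("content altered during transfer")
    return before
```

Logging that digest at migration time gives you something concrete to produce when opposing counsel questions whether the archived copy is faithful.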
For those of you considering a dark data remediation project, consider long-term hosted archives as a staging target for the data your GC just won't allow to be disposed of.