Does your organization utilize Office 365 for email? Is your organization required to journal email for compliance, legal, or business requirements? Do your attorneys complain about the time it takes to find information for an eDiscovery request? If the answer to any of these questions is yes, then keep reading.
As more companies move their data to the cloud, the question of data sovereignty is becoming a hotter topic. Data sovereignty is the requirement that digital data is subject to the laws of the country in which it is collected or processed. Many countries require that data collected in a particular country stay in that country, arguing that it’s in the government’s interest to protect its citizens’ personal information against any misuse.
Healthcare costs continue to skyrocket. In 2016, healthcare costs in the US were estimated to be almost 18 percent of GDP. The healthcare industry is seeing an unprecedented and accelerating growth of ESI. This avalanche of data is being generated from the digitization of healthcare information, EHR systems, precision and personalized medicine, health information exchanges, new imaging technologies (DICOM), new regulations, IoMT (Internet of Medical Things), and other major technology developments.
The EU/US Safe Harbor scheme was struck down by the Court of Justice of the European Union (CJEU) in October 2015, putting companies on both sides of the Atlantic in a difficult position – left without a process for legally transferring data out of the EU to the US.
According to IDC, healthcare data is one of the fastest growing segments of the digital universe – growing from 153 exabytes in 2013 to an estimated 2,314 exabytes in 2020, a 48% annual growth rate. So where will the healthcare industry put all of this critical and sensitive data and how long must it be held?
In my frequent discussions with customers about the benefits of cloud archiving for regulatory, legal, and business reasons, I still find that a large percentage don’t worry about archiving corporate social media content.
Everyone leaves the company eventually. Better opportunities, reductions in workforce, termination, or a manager with the IQ of un-popped popcorn… no matter the reason, everyone eventually leaves. In the UK, these people are referred to as “leavers.” In the U.S. they’re called departing employees or ex-employees, and depending on the circumstances, more colorful names. However, the way a company handles these departing employees can mean the difference between business as usual and major customer satisfaction issues, project delays, and higher eDiscovery and storage costs.
When an employee is terminated or informs the company they are leaving, the HR organization usually has a checklist of things to do before the employee departs. This includes (but is not limited to):
- Return credit cards
- Turn in all expense reports
- Turn in laptop
- Turn in external hard disks
- Turn in cell phone
- Return building and office keys and access cards
- Remove access/user IDs to all electronic systems
Pretty standard stuff to ensure the employee does not walk off with company equipment or confidential information. However, this process does not address the most valuable company asset…information.
Is Departing Employee Data Valuable?
At its base level, companies employ people to create, process, and utilize information. What happens to the GBs of data employees create and store over their time at the company? True, much of that information is stored on the employee’s laptop, but how long do those laptops sit around before they’re re-imaged and re-tasked? In a blog last month, I touched on this specific problem:
“Not long ago I received a call from an obviously panicked ex-coworker from a company that I had left 6 months prior. They were looking for the pricing/ROI calculator that I had developed more than a year prior. A large deal was dependent on them producing a believable ROI by the next morning. I told the ex-coworker that it and all of my content should be on my laptop and even suggested a couple of keywords to search on. Later that day, the same person called back and told me that the company’s standard process for departing employee’s laptops was to re-image the hard disk after 30 days and distribute it to incoming employees – the ROI model I had spent over a man-month developing was lost forever.”
Now consider the numerous other places an employee can store data: file shares, cloud storage accounts (OneDrive, Dropbox), cell phones, SharePoint, OneNote, PSTs, etc. Now also consider how you would find a specific file containing a customer presentation in a short period of time…
If not managed as a valuable company asset, much if not all of that expensive employee data is, if not lost outright, extremely difficult or even impossible to find when needed.
Chaotic Data Management Makes You a Target
Let’s address another problem associated with ex-employee data… eDiscovery.
You’re the General Counsel at a medium-sized company, and one afternoon you receive an eDiscovery request asking for all responsive data around a specific vendor contract between Feb 4, 2009 and last month. Several ex-employees are named as targets of the discovery.
This is a common scenario many companies face. The issue is this: when responding to discovery, you must look for potentially responsive data in all possible locations unless you can prove the data could not exist due to existing processes. The legal bottom line is this: if you don’t know for sure that data doesn’t exist somewhere, then you must search for it, no matter the cost. Opposing counsel have become very adept at finding the opposing party’s weaknesses, especially around data handling, and exploiting them to force you to spend more money so that you will settle early.
Discovery response also carries a time constraint. The time required to respond has caused many companies to spend huge amounts of money on high-priced discovery consultants to ensure discovery is finished on time.
Both of these issues can be readily addressed with new processes and technology.
Process Change and Technology
Even seemingly worthless data can become extremely costly when you can’t find it. Most companies I have worked for were very good about the employee exit process, but so far I have never had an HR (or other) person ask me specifically for all of the locations where my data could be residing.
The laptop and cell phone are turned in and quickly re-imaged (losing all data), file shares with work files and PSTs are eventually cleaned up destroying data, and email accounts are closed. Very quickly, all of that employee data (intellectual property and know-how) is lost.
In reality, all it takes to solve this problem is, first, an exit process that ensures the company knows where all of an employee’s data resides before they leave, and second, migrating all of that ex-employee data to a central repository for long-term management. Many companies are finding that a low-cost “cool” cloud archive is the best answer.
Just because an employee has departed doesn’t mean their intellectual property has to as well. Keep that ex-employee information available for business use, litigation, and regulatory compliance well into the future.
The Industry’s First “Leaver” Archive
Microsoft Azure is that low-cost, cool data repository. Archive360’s Archive2Azure provides the management layer for Azure, allowing departing employee data to be migrated into Azure, encrypted, placed under retention/disposition policies, and indexed through custom processes. The result is centralized, ultra-low-cost cool storage where gray, low-touch ex-employee data can be managed and searched quickly.
Organizations habitually over-retain information, especially unstructured electronic information, for all kinds of reasons. Many organizations simply have not addressed what to do with it, so they fall back on relying on individual employees to decide what should be kept, for how long, and what should be disposed of. At the opposite end of the spectrum, a minority of organizations have tried centralized enterprise content management systems and found them difficult to use; employees find ways around them and end up keeping huge amounts of data locally on their workstations, on removable media, in cloud accounts, or on rogue SharePoint sites used as “data dumps” with little or no records management or IT supervision. Much of this information is transitory, expired, or of questionable business value. Because of this lack of management, information continues to accumulate, raising the cost of storage as well as the risk associated with eDiscovery.

In reality, as information ages, its probability of re-use, and therefore its value, shrinks quickly. Fred Moore, Founder of Horison Information Strategies, wrote about this concept years ago as the Lifecycle of Data. Figure 1 below shows that as data ages, the probability of reuse goes down very quickly even as the amount of saved data rises. Once data has aged 10 to 15 days, its probability of ever being looked at again approaches 1%, and as it continues to age it approaches, but never quite reaches, zero (figure 1 – blue shading).
Figure 1: The Lifecycle of Information
Contrast that with the possibility that a large part of any organizational data store has little or no business, legal, or regulatory value. In fact, the Compliance, Governance and Oversight Counsel (CGOC) conducted a survey in 2012 showing that, on average, 1% of organizational data is subject to litigation hold, 5% is subject to regulatory retention, and 25% has some business value (figure 2 – green shading). This means that approximately 69% of an organization’s data store has no value and could be disposed of without legal, regulatory, or business consequences. The average employee conservatively creates, sends, receives, and stores 20 MB of data per business day. At that rate, an employee accumulates roughly 220 MB of new data in the first 15 days, about 1.26 GB after 90 days, and about 15.12 GB after three years (if they don’t delete anything). So how much of this accumulated data needs to be retained? Referring again to figure 2 below, the red-shaded area represents the information that probably has no legal, regulatory, or business value according to the 2012 CGOC survey. At the end of three years, the amount of retained data from a single employee that could be disposed of without adverse effects to the organization is 10.43 GB. Now multiply that by the total number of employees and you are looking at some very large data stores.
Figure 2: The Lifecycle of Information Value
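The back-of-envelope arithmetic above can be sketched in a few lines of code. This is a minimal sketch: the 20 MB per business day figure and the 2012 CGOC percentages come from the text, while the function names and the decimal-GB convention (1 GB = 1000 MB, matching the article's figures) are illustrative choices.

```python
# Per-employee data accumulation and disposable share, using the figures
# cited above: 20 MB per business day, and the CGOC 2012 split of
# 1% litigation hold + 5% regulatory retention + 25% business value.
MB_PER_BUSINESS_DAY = 20
KEEP_FRACTION = 0.01 + 0.05 + 0.25   # 31% must be retained

def accumulated_gb(business_days):
    """Total data (decimal GB) one employee accumulates, deleting nothing."""
    return business_days * MB_PER_BUSINESS_DAY / 1000

def disposable_gb(business_days, employees=1):
    """Data disposable without legal, regulatory, or business consequences."""
    return accumulated_gb(business_days) * (1 - KEEP_FRACTION) * employees

# Three years at ~252 business days/year = 756 business days.
print(accumulated_gb(756))            # 15.12 GB accumulated
print(round(disposable_gb(756), 2))   # 10.43 GB disposable per employee
```

Scaling the last call by headcount (e.g. `disposable_gb(756, employees=1000)`) shows how quickly the disposable volume reaches multi-terabyte scale.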
The Lifecycle of Information Value graphic above shows us that employees really don’t need all of the data they squirrel away (its probability of re-use drops to 1% at around 15 days), and based on the CGOC survey, approximately 69% of organizational data is not required for legal or regulatory retention and has no business value. The difficult piece of this whole process is determining, with automation, what data is not needed and disposing of it (because employees probably won’t). As unstructured data volumes continue to grow, automatic categorization of data is quickly becoming the only realistic way to get ahead of the data flood. Without accurate automated categorization, the ability to find the data you need quickly will never be realized. Even better, if data categorization can be based on the value of the content, not just a simple rule or keyword match, highly accurate categorization, and therefore information governance, is achievable.
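To make the idea concrete, here is a toy sketch of the simple rule/keyword style of categorization that the paragraph above argues value-based classification should improve upon. The categories, keywords, and retention periods are invented for illustration and are not any specific product's rules.

```python
# Toy rule-based retention categorizer: match keywords, assign a category
# and a retention period in days (None = keep indefinitely).
# All rules below are illustrative assumptions, not real policy.
RULES = [
    ("legal_hold", ["litigation", "subpoena", "counsel"], None),
    ("regulatory", ["invoice", "contract", "sox"],        7 * 365),
    ("business",   ["roadmap", "pricing", "customer"],    3 * 365),
]
DEFAULT = ("transitory", 90)  # everything else is a disposition candidate

def categorize(text):
    """Return (category, retention_days) for a piece of content."""
    body = text.lower()
    for category, keywords, retention_days in RULES:
        if any(k in body for k in keywords):
            return category, retention_days
    return DEFAULT

print(categorize("Q3 pricing model for customer renewal"))  # ('business', 1095)
print(categorize("lunch menu for Friday"))                  # ('transitory', 90)
```

Real content-value classification would replace the keyword lists with machine-learned models over the document text, but the output contract, a category plus a retention decision per item, stays the same.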
Traditional approaches to information management are, generally speaking, no longer suitable for today’s information management needs. The legacy “move-to-manage” premise is expensive, fraught with difficulties, and at odds with modern data repositories that (a) are cloud-based, (b) have built-in governance tools, or (c) contain data that is best managed in its native repository.
In reality, traditional records management and ECM systems manage only a small percentage of an organization’s total information; an implementation is often considered successful if it captures just 5% of the information that exists. What about all the information not deemed a “record”?
Traditional archiving systems tend to capture everything and, for the most part, cause organizations to keep their archived information for much longer periods of time, or forever. Corporate data volumes and the data landscape have changed dramatically since archiving systems became widely adopted. Some organizations are discovering the high cost of getting their data out, while others are experiencing end-user productivity issues, incompatible stubs or shortcuts, and a lack of support for the modern interfaces through which users expect to access their information.
The unstructured data problem, along with the emerging reality of the cloud, has brought us to an inflection point: either continue to use decades-old, higher-cost, complex approaches to manage huge quantities of information, or proactively govern this information where it naturally resides to more effectively identify and organize it, and to advance the best possible outcomes for security, compliance, litigation response, and innovation.
Today’s enterprise-ready hardware and storage solutions as well as scalable business productivity applications featuring built-in governance tools are both affordable and easily accessible. For forward-thinking organizations, there is no question that in-place information management is the most viable and cost-effective methodology for information management in the 21st century.