Archive for January, 2012
Imagine for a minute that you are a U.S. Senator. Imagine that you made someone angry. Now, imagine that the person who is angry is resourceful enough to figure out how to hack into one of your online social-media accounts. Maybe your Twitter account. Finally, imagine that the hacker started posting inflammatory messages, or at least embarrassing messages, in your name on your account.
According to Lucas Shaw at The Wrap, in a story picked up by Reuters, that’s exactly what happened to Senator Chuck Grassley of Iowa. The Senator had reportedly been supportive of the Protect IP Act (PIPA), and the hacker, obviously, was not. The Senator’s experience was not really a disaster, but it was at least an annoyance. Under other circumstances, it could have been a disaster, just a very different kind of disaster from the ones we usually discuss.
Unfortunately, hacking appears to be increasing. In fact, it’s so prevalent that hacking services have begun to spring up. And there are plenty of individuals more than happy to outsource their hacking requirements, as did one very wealthy individual involved in a nasty family feud.
Hackers are just one more reason why time travel is important. Whether your data gets corrupted by bit errors on disk drives, by software bugs, by system failures, or by malicious attacks from hackers, you need to be able to time travel and roll back your data to the last known good state. Then, if you have the right technology in place, you can carefully apply valid updates to the file or database to get back to a good operational state.
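That roll-back-then-reapply flow can be sketched in a few lines. Everything below — the snapshot and journal structures, the `recover` function, and the `is_valid` check — is a hypothetical illustration of the idea, not any particular product’s API:

```python
# Hypothetical sketch of "time travel" recovery: restore the last known
# good snapshot, then replay only the updates judged valid.

def recover(snapshots, journal, corruption_time, is_valid):
    """Roll back to the newest snapshot taken before the corruption,
    then reapply the journaled updates that pass validation."""
    good = max((s for s in snapshots if s["time"] < corruption_time),
               key=lambda s: s["time"])
    state = dict(good["state"])
    # Carefully reapply valid updates recorded after that snapshot.
    for update in journal:
        if update["time"] > good["time"] and is_valid(update):
            state[update["key"]] = update["value"]
    return state

snapshots = [
    {"time": 1, "state": {"balance": 100}},
    {"time": 5, "state": {"balance": 130}},
]
journal = [
    {"time": 6, "key": "balance", "value": 150},   # legitimate update
    {"time": 7, "key": "balance", "value": -999},  # hacker's update
]
# Corruption detected at time 8; reject obviously bad values.
state = recover(snapshots, journal, corruption_time=8,
                is_valid=lambda u: u["value"] >= 0)
print(state["balance"])  # 150: rolled back, then the valid update reapplied
```

The point of the sketch is the two-step shape: first a clean rollback, then a filtered replay, so the hacker’s writes are discarded while legitimate work is preserved.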
Early this year, the Batley News in the U.K. reported that Cattles Group, a financial services company, was being investigated after the firm lost personal information belonging to a million people, including both customers and employees. You can read the entire article here; the lost data was on two tapes that went missing. That doesn’t mean the tapes fell into the wrong hands, that the data has actually been accessed by an unauthorized person, or that accounts have been compromised. But under a number of laws in various countries, losing personal information that has been entrusted to an organization is a reportable offense. So Cattles Group notified the police and two other government agencies, and they also notified each of the affected customers and employees.
Despite the continued decline in the use of tape, it is, in fact, still in use, and there are a number of applications where tape remains very valuable and a great technology fit. Two of the historical virtues of tape were that it was removable and transportable, and one tape holds a lot of data. Remember, two tapes held the personal information of a million people. But the fact that tape is removable and transportable is also its liability, so it is not unusual to hear of incidents of lost tapes and, thus, lost data. In fact, there is an entire website, datalossdb.org, devoted to reporting data losses, and you can search its database for data losses associated with tape media.
If the job of the solution is to get your data from one location to another in a secure and cost-effective way, so that you can restore operations after a disaster, I think the improvements in disk-based replication technology, including point-in-time, application-consistent snapshots, data deduplication, and data compression, make it unlikely that tape will survive much longer as a backup medium. Add to that Axxana’s zero-data-loss-over-any-distance capabilities, and there’s no compelling reason to stay with tape.
When I ask the question, “What’s your RPO?” I typically get an answer like “We’ve got an RPO of 5 minutes.” If I ask “How much data are you willing to lose?” I’ll hear a similar answer. But if I ask someone, “How much data is in your storage system?” they don’t answer me in minutes.
The data created by the mission-critical applications that support businesses doesn’t get updated at an even, regular rate. Update rates are almost always highly variable, full of transaction peaks and valleys. The peaks can happen at predictable times, like a holiday shopping season, and at unexpected times, like when there is panic buying before a hurricane.
That’s the not-so-funny disconnect between data and disaster recovery. We don’t measure data in minutes. We measure it in GBs. Wouldn’t it be better to set our snapshots and our recovery points based upon how much data has changed, instead of how much time has passed?
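As a thought experiment, a change-based trigger might look like the sketch below. The class name and threshold are purely illustrative, not a real storage API: a snapshot fires whenever enough bytes have changed, so quiet periods produce few recovery points and transaction peaks produce many.

```python
# Hypothetical sketch: trigger snapshots by data changed, not time elapsed.
# ChangeBasedSnapshotter and CHANGE_THRESHOLD_BYTES are illustrative
# stand-ins, not any particular product's API.

CHANGE_THRESHOLD_BYTES = 512 * 1024 * 1024  # e.g., snapshot per 512 MB changed

class ChangeBasedSnapshotter:
    def __init__(self, threshold=CHANGE_THRESHOLD_BYTES):
        self.threshold = threshold
        self.bytes_since_snapshot = 0
        self.snapshots_taken = 0

    def record_write(self, nbytes):
        """Account for a write; snapshot once enough data has changed."""
        self.bytes_since_snapshot += nbytes
        if self.bytes_since_snapshot >= self.threshold:
            self.take_snapshot()

    def take_snapshot(self):
        # In a real system this would call the storage array's snapshot API.
        self.snapshots_taken += 1
        self.bytes_since_snapshot = 0

# A quiet period generates no snapshots; a transaction peak generates several.
s = ChangeBasedSnapshotter(threshold=100)
for write in [10, 20, 30]:        # quiet period: 60 bytes changed, no snapshot
    s.record_write(write)
for write in [80, 90, 120]:       # peak period: two thresholds crossed
    s.record_write(write)
print(s.snapshots_taken)  # 2
```

With a timer, the quiet period and the peak would get the same number of recovery points; with a change-based trigger, the exposure in bytes between any two recovery points stays bounded.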
I have to credit our CTO, Dr. Alex Winokur, for helping me think this through. But, I’ve decided that, unless your RPO is zero, RPO doesn’t matter. Instead, we need to make sure we’ve protected the data to the very last byte.
I borrowed my title for today’s post from the section heading of a whitepaper, “Impact on U.S. Small Business of Natural & Man-Made Disasters,” that I found on Edwards Information, LLC. For those of you who don’t know them, Edwards Information is a valuable information resource for organizations, and they describe themselves as “the authority on disaster recovery and business continuity.”
The full document, which was presented by HP and SCORE, is available here. I think every CEO, every IT professional, every CFO, every Risk Manager, and every Business Continuity professional should read the entire article, but I want to draw attention to some specific data from page 3 of the document:
- “A Company that experiences a computer outage lasting more than 10 days will never fully recover financially. 50 percent will be out of business within five years.” 1
- An estimated 25 percent of businesses do not reopen following a major disaster. 2
- 70 percent of small firms that experience a major data loss go out of business within a year. 3
- Of companies experiencing catastrophic data loss: 43% of companies never reopened and 51% of companies closed within 2 years. 4
- 80% of companies that do not recover from a disaster within one month are likely to go out of business. 5
- 75% of companies without business continuity plans fail within three years of a disaster. 6
- Companies that aren’t able to resume operations within ten days of a disaster are not likely to survive. 7
- Of those businesses that experience a disaster and have no emergency plan, 43 percent never reopen; of those that do reopen, only 29 percent are still operating two years later. 8
Here are the publisher’s references for the information:
- 1 Jon Toigo, Disaster Recovery Planning: Managing Risk and Catastrophe in Information Systems (Yourdon Press, 1989)
- 2 “Open For Business” a publication of The Institute for Business & Home Safety (IBHS), a nonprofit association that engages in communication, education, engineering and research for the insurance industry. See www.ibhs.org/docs/OpenForBusiness.pdf
- 3 Contingency Planning, Strategic Research Corp and DTI/Price Waterhouse Coopers (2004), widely quoted in places such as: Diana Shepstone, National data awareness project launched to help businesses prevent data disasters (Data Centre Solutions, Jan. 8, 2007); see: http://www.datacentresols.com/news/articles-full.php?newsid=5455
- 4 University of Texas Center for Research on Information Systems, as cited in Datamation, June 14, 1994
- 5 Jonathan Bernstein, president, Bernstein Crisis Management, LLC in Director, June 1998, v51n11, p44
- 6 Bruce Blythe, CEO, Crisis Management International in Blindsided: A Manager’s Guide to Catastrophic Incidents in the Workplace By Bruce T. Blythe (Portfolio Hardcover, August 22, 2002)
- 7 http://www.techworld.com/cmsdata/whitepapers/833/How%20Secure%20is%20your%20Storage_Symantec.pdf
- 8 The Hartford’s Guide to Emergency Preparedness Planning, created by The Hartford Financial Services Group and now published by J.J. Keller & Associates
As Hector Barreto, a former SBA administrator was quoted in the article, “…(N)o matter where you live, there’s always the potential for a major disaster. No one is insulated from the threat of losses caused by wind, storms, floods and wildfires, power outages and other natural and man-made disasters.”
But, knowledge is power, and risk can be controlled. Given the increasingly critical role that IT systems and data play in the ability of an organization to operate, the knowledge from this research provides compelling arguments for data protection and IT disaster recovery investments.
Happy New Year!
Every time New Year’s Day comes around, I’m reminded of December 31st, 1999. Thanks to a lot of news reports, on December 31st of 1999, I was waiting to see if all the lights would go out, if the computers would stop working, if my phone line would go dead, if airplanes would lose their navigation systems, if high-speed trains would run off their rails, and if military defense systems would fail. I had friends who would absolutely not fly on January 1st, 2000, and I had other friends who spent New Year’s Eve away from their spouses, because at least one of them was anxiously watching the computer systems, the security systems, and the backup power generators at their company, ready to respond in case the Y2K bug brought the world’s computer systems to a screeching halt.
For those who don’t remember, here’s a brief explanation. Software programs developed in the early days of computing often used a two-digit number instead of a four-digit number to represent the year. They did this, in part, to save space, but also because who would have believed that software programs written in the 70s and 80s would still be in use 20–30 years later? All of these programs had the Y2K bug. And so, when the year rolled over from 1999 to 2000, calculations that involved the difference of years were at risk of being incorrect:
00 – 99 = -99 versus 2000 – 1999 = +1
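That two-digit arithmetic is easy to reproduce. Here is a quick sketch (the function names are just for illustration) of the same calculation done the legacy way and the correct way:

```python
# Sketch of the Y2K bug: computing an elapsed interval from stored years.

def years_elapsed_2digit(start_yy, end_yy):
    # Legacy style: only the last two digits of each year are stored.
    return end_yy - start_yy

def years_elapsed_4digit(start_yyyy, end_yyyy):
    # Full four-digit years: the subtraction behaves across the century.
    return end_yyyy - start_yyyy

# An account opened in 1999, checked again in 2000:
print(years_elapsed_2digit(99, 0))       # prints -99 (the buggy result)
print(years_elapsed_4digit(1999, 2000))  # prints 1 (the correct result)
```

Any downstream logic that used that elapsed-years figure — interest accrual, age checks, expiration dates — was at risk of the same sign flip.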
So my friends and I anxiously watched as the seconds ticked by, clocks along the International Date Line rolled past midnight, and the year changed from 1999 to 2000. And what happened? Nothing. The lights stayed on, the planes remained in the air, computers continued operating, and defense systems continued to work.
Interestingly, after the turn of the clock, conspiracy theorists began postulating that the Y2K bug was all a hoax, promoted by companies looking to make big money fixing things that weren’t really broken. But the reality, as a Computerworld article points out, was that this was not a hoax, but rather a crisis very successfully managed.
This past year was a year of tremendous natural disasters, including floods, earthquakes, tsunamis, widespread fires in drought regions, and unseasonal snowstorms that knocked out power to entire regions. As we close out 2011 and look ahead to 2012, we see that more companies, more governments, and more organizations are taking a Y2K approach to disaster recovery planning. It’s not as if the world is unfamiliar with earthquakes, floods, fires, tsunamis, power outages, or terrorism. And with proper preparation, the worst of future natural and man-made disasters can, in fact, be virtual non-events for your IT operations.
Here’s to 2012, with zero data loss for everyone.