Archive for the ‘Finance’ Category
Too often, business continuity planning and disaster recovery planning are treated as the same function. Unfortunately, they are not. Business continuity planning helps organizations ensure that applications and processes continue through the myriad day-to-day disruptions that might occur. These include IT component failures, such as disk-drive failures, a server failure, a dropped network link, or an application bug. Disaster recovery planning helps organizations recover operations after less frequent, but far more devastating, events, such as fires, floods, hurricanes, earthquakes, and a variety of man-made disasters. While the data center strategy is only one component of business continuity and disaster recovery planning, it is a key component. And while business continuity and disaster recovery planning are different functions, they must often be considered together because of budget limitations.
There are plenty of advantages to having a business continuity data center in region, a very short distance from the production data center. If the data centers are very close, there will be little impact on transaction latency for the always-important two-phase database commit. Failover times from the production data center to the business continuity data center can be very short. Staff who normally work at the primary data center can easily show up for work at the in-region business continuity data center. WAN charges between the primary and business continuity data centers will be relatively low.
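To see why distance matters so much for a synchronous two-phase commit, here is a back-of-envelope sketch. It uses the common rule of thumb of roughly 5 microseconds per kilometer one way in optical fiber; the numbers are illustrative assumptions, and real links add switching and protocol overhead on top of raw propagation delay.

```python
# Back-of-envelope estimate of the latency a synchronous commit adds
# as the distance between data centers grows.
# Assumes ~5 microseconds per km one way in fiber (a rule of thumb);
# real networks add switching and protocol overhead.

US_PER_KM_ONE_WAY = 5.0  # microseconds per kilometer, one direction

def sync_commit_penalty_ms(distance_km: float, round_trips: int = 2) -> float:
    """Added latency in ms for a synchronous commit that requires
    `round_trips` acknowledged round trips to the remote site
    (a two-phase commit needs at least a prepare and a commit round)."""
    one_round_trip_us = 2 * distance_km * US_PER_KM_ONE_WAY
    return round_trips * one_round_trip_us / 1000.0

for km in (10, 100, 1000):
    print(f"{km:>5} km: ~{sync_commit_penalty_ms(km):.2f} ms added per commit")
```

At 10 km the penalty is negligible, but it grows linearly with distance, which is why synchronous replication is usually confined to in-region distances.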
The problem with an in-region business continuity data center is that it can’t replace an out-of-region disaster recovery data center. The two are simply too close for comfort. And few organizations can afford three data centers. Following are a few of the types of disasters that can prevent an in-region business continuity data center from acting as a disaster recovery data center:
- Electrical-grid failure
- Telecommunications failure
- Transportation systems failure
- Chemical spills
- Radiation leaks
- War, terrorism, and civil unrest
For these types of disasters, it is much more likely that both in-region data centers will be affected, and much more challenging to recover applications and data. One of the trade-offs organizations must make is between how quickly they can recover and how certain they are that they can recover from the range of disasters that could strike them. We believe that a slight increase in recovery time is well worth the additional assurance that you can actually recover applications after a disaster. Using an in-region business continuity data center as a disaster recovery data center is a little like doing a tandem sky dive. It’s fine, as long as nothing goes wrong.
Datacentre Solutions Awards will hold their annual ceremony on 23 May 2012 at the Millennium Gloucester Hotel and Conference Center in London. We’re very excited that Axxana was nominated in not one, but two categories: Datacentre Storage Hardware Product of the Year and Datacentre Storage Software Product of the Year.
All of the nominees have worked hard to bring innovative solutions to the market. We hope that you will take the time to read through the descriptions and vote for your favorite in each category.
Before voting, I want to take you back to a time about 25 years ago, when the only way to reduce the risk of data loss from disk drive failures was to use higher and higher quality disk drives, and then back up the data to tape each night. The risk of data loss from a drive failure during the production day was still very real, even with the massive investment in high-quality disk drives. And still, only the largest companies could afford the very expensive disk drives. Everyone else had to settle for a much worse and much riskier storage device. RAID technology, which was first defined and described in 1987, changed that by making it possible to protect data on lower quality drives. Today even the smallest companies can achieve very high levels of protection against data loss from disk drive failures by using RAID-based storage systems. RAID transformed computing.
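The core idea that let RAID protect data on lower-quality drives can be sketched in a few lines: store the XOR of the data blocks as parity, and any single lost block can be rebuilt from the survivors. This is a minimal illustration of the parity concept (as in RAID 5), not a description of any particular product.

```python
# Minimal sketch of the parity idea behind RAID (e.g., RAID 5):
# the XOR of all data blocks is stored as parity, so any single
# lost block can be reconstructed from the remaining blocks.

def parity(blocks: list) -> bytes:
    """XOR a list of equal-length byte blocks together, byte by byte."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

data = [b"AAAA", b"BBBB", b"CCCC"]  # three data "drives"
p = parity(data)                     # the parity "drive"

# Simulate losing drive 1, then rebuild it from the survivors plus parity.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]
print("rebuilt block matches original:", rebuilt == data[1])
```

Because XOR is its own inverse, the same operation that computes parity also performs the reconstruction.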
Until recently, data center disaster recovery approaches have been very similar to the pre-RAID data centers of 25 years ago. Even today, only the largest, wealthiest organizations can afford the very expensive three-site, synchronous/asynchronous disaster recovery approach that is used by most large, multi-national banks. But with Axxana’s Phoenix System RP, every medium-sized or larger organization can afford to protect all of their data through a disaster. Axxana is for disaster recovery what RAID was for protection against drive failures. We are transforming disaster recovery.
Please click on the link below and vote today.
DCS Awards: Vote here!
I borrowed my title for today’s post from the section heading of a whitepaper, “Impact on U.S. Small Business of Natural & Man-Made Disasters,” that I found on Edwards Information, LLC. For those of you who don’t know them, Edwards Information is a valuable information resource for organizations, and they describe themselves as “the authority on disaster recovery and business continuity.”
The full document, which was presented by HP and SCORE, is available here. I think every CEO, every IT professional, every CFO, every Risk Manager, and every Business Continuity professional should read the entire article, but I want to draw attention to some specific data from page 3 of the document:
- “A Company that experiences a computer outage lasting more than 10 days will never fully recover financially. 50 percent will be out of business within five years.” 1
- An estimated 25 percent of businesses do not reopen following a major disaster. 2
- 70 percent of small firms that experience a major data loss go out of business within a year. 3
- Of companies experiencing catastrophic data loss: 43% of companies never reopened and 51% of companies closed within 2 years. 4
- 80% of companies that do not recover from a disaster within one month are likely to go out of business. 5
- 75% of companies without business continuity plans fail within three years of a disaster. 6
- Companies that aren’t able to resume operations within ten days (of a disaster hit) are not likely to survive. 7
- Of those businesses that experience a disaster and have no emergency plan, 43 percent never reopen; of those that do reopen, only 29 percent are still operating two years later. 8
Here are the publisher’s references for the information:
- 1 Jon Toigo, Disaster Recovery Planning: Managing Risk and Catastrophe in Information Systems, (Yourdon Press, 1989)
- 2 “Open For Business” a publication of The Institute for Business & Home Safety (IBHS), a nonprofit association that engages in communication, education, engineering and research for the insurance industry. See www.ibhs.org/docs/OpenForBusiness.pdf
- 3 Contingency Planning, Strategic Research Corp and DTI/Price Waterhouse Coopers (2004) and is widely quoted in places such as: Diana Shepstone, National data awareness project launched to help businesses prevent data disasters ( Data Centre Solutions, Jan. 8, 2007) see: http://www.datacentresols.com/news/articles-full.php?newsid=5455
- 4 University of Texas Center for Research on Information Systems, as cited in Datamation, June 14, 1994
- 5 Jonathan Bernstein, president, Bernstein Crisis Management, LLC in Director, June 1998, v51n11, p44
- 6 Bruce Blythe, CEO, Crisis Management International in Blindsided: A Manager’s Guide to Catastrophic Incidents in the Workplace By Bruce T. Blythe (Portfolio Hardcover, August 22, 2002)
- 7 http://www.techworld.com/cmsdata/whitepapers/833/How%20Secure%20is%20your%20Storage_Symantec.pdf.
- 8 The Hartford’s Guide to Emergency Preparedness Planning, created by The Hartford Financial Services Group and now published by J.J. Keller & Associates
As Hector Barreto, a former SBA administrator, was quoted in the article: “…(N)o matter where you live, there’s always the potential for a major disaster. No one is insulated from the threat of losses caused by wind, storms, floods and wildfires, power outages and other natural and man-made disasters.”
But, knowledge is power, and risk can be controlled. Given the increasingly critical role that IT systems and data play in the ability of an organization to operate, the knowledge from this research provides compelling arguments for data protection and IT disaster recovery investments.
It’s no different in the world of data protection and disaster recovery. Prior to Axxana, zero-data-loss disaster recovery solutions were available. All you needed was a great deal of money and a willingness to accept lots of restrictions on where your data centers could be located, or a willingness to accept transaction latency that would be deemed intolerable by most organizations. But as Craig Stewart (also known as VirtualPro) wrote after being introduced to Axxana:
You utilise Axxana so you don’t have to do expensive synchronous replication, so you don’t have to introduce unnecessary application latency, so you don’t have to have that second site within ~100KM distances. The reason this product is built to withstand every feasible disaster is so that you can safely use cheaper asynchronous replication over large distance and still guarantee that synchronous replication RPO that the business or application owner demands.
I swear one of those imaginary light bulbs went on above my head while I was discussing it!
I know everyone always says that they are “drowning in data,” but I’m always looking for more. So, I was very happy this week when a very large pile of data landed in my email inbox. The data were the results of research commissioned by EMC and performed by VansonBourne. VansonBourne just released a report based on the research entitled “European Disaster Recovery Survey 2011: Data today gone tomorrow, how well companies are poised for IT Recovery.”
I’ve provided a link in case you want to read the entire report, but let me tell you some of what I found interesting. First, there was this:
A quarter of organizations have experienced data loss within the last twelve months.
Hardware failures are the most frequent cause of data loss, at over 60%, and I should probably write more about how Axxana protects against data loss when there is a hardware failure, because we do. Instead, I’ve written a lot about the risk of natural disasters; maybe too much, since natural disasters accounted for only 7% of the reported data losses. It’s just that when a natural disaster occurs, like an earthquake or a flood, the risk to your data can be enormous. Just ask the folks in Japan or Thailand.
More interesting than the cause of data loss, though, were the reported consequences of data loss. Here are a few data points from the report on the impact of data loss:
- 43% reported loss of employee productivity
- 28% reported loss of revenue
- 14% reported loss of customers
- 12% reported loss of repeat business
In this fiercely competitive business climate, employee productivity is extremely important in deriving profit from revenue. Every deal and every customer is important, and losing repeat business from an existing customer may be the worst outcome of all, since that business should be the most profitable.
Even though the average amount of data lost was relatively small, at only 400GB, the consequences were significant, which is why we advocate protecting 100% of your data for all applications. When it can be done so cost effectively, why risk losing productivity, revenue, customers, and repeat sales?
Imagine you make cars, and 80 percent of your parts come from 20 percent of your suppliers. The parts are packed in containers and delivered to your manufacturing location on ships. Imagine there was a disaster, like an earthquake. Your biggest suppliers have great contingency plans that ensure a seamless flow of components, so you can make cars. But one of your suppliers, not one of the big 20%, was affected, and couldn’t ship parts for several months. Oh, well, it’s not that important. Just apply the 80/20 rule.
The 80/20 rule, which is also known as the Pareto Principle or Juran’s Pareto Principle, doesn’t always work. The rule originated from an analysis of wealth distribution by Italian economist, Vilfredo Pareto, who estimated that 80% of the wealth in his country was controlled by 20% of the people. Dr. Joseph Juran, who was a pioneer in quality management, applied Pareto’s analysis to quality management challenges, determining that 20% of the factors account for 80% of an outcome. In manufacturing, this might mean that 20% of your suppliers account for 80% of your output potential, so in supply chain disaster preparedness, companies logically place the bulk of their focus on the 20% of companies that supply 80% of the parts. Unfortunately, according to Patrick Brennan, in his article, Lessons Learned from the Japan Earthquake, published this summer in the Disaster Recovery Journal, Lesson 1 was “Don’t Apply the 80/20 Rule to Supply Chain Disaster Preparedness.” The 80/20 rule doesn’t work.
When the lack of availability of a $1 part prevents a company from making a $30,000 product, something needs to change.
The same error occurs when attempting to apply the 80/20 rule to the value of data. While it might be convenient to believe that 20% of your data accounts for 80% of the value, the loss of even a small amount of data can have an enormous effect on the output of an analytical process or on the reputation of an organization. Imagine, for example, that a disaster destroys the last 3 minutes of data, and one of those pieces of data was an email that provided critical evidence to defend against a shareholder claim, or a buy order for fuel in a rising fuel market, or a change to a medication order for a critically ill patient.
You can’t always determine in advance which data will be valuable. Therefore, it is best to provide complete protection to all data. If it’s important enough to keep, it’s important enough to protect. Fortunately, we make complete data protection both possible and affordable.
There’s a LinkedIn group called BCMIX – Business Continuity Management Information eXchange. There are over 7,000 members of this group, which I think shows just how important Business Continuity Management is in organizations today. Members can post questions to the community and get advice from other professionals who are struggling with the same issues. I’m paraphrasing here, but some of the recent topics were:
- Can you develop a profile for what types of individuals are able to manage disasters?
- How do you determine the RTO for critical systems and applications?
- What is the ROI from a Business Continuity Management Program?
I’m always interested in the calculation of an ROI on an intangible such as a BCM program, because the true value of it, like insurance, is not really calculable until after the event. I mean, what is the ROI on a fire extinguisher?
There’s really no ROI on a fire extinguisher until you need it, which, hopefully, is never. But if you do have a fire, you want the fire extinguisher that works well with the type of fire you have. There are different types of fires and different types of fire extinguishers for each type of fire. There are also combination fire extinguishers that work with more than one type of fire. For those of you who want a quick tutorial on fires and fire extinguishers, here’s a helpful website: Fire Extinguisher: 101.
Once you’ve decided what risks you want to reduce, then you should get the best possible protection at the lowest possible cost. And that’s where the ROI comes in. Our Phoenix System is like a combination fire extinguisher, because we protect data through a wide variety of disasters: floods, fires, earthquakes, bombings, hurricanes, and building collapses. But we have something else going for us. We actually lower the cost of data protection by reducing data communications costs when replicating data over distance.
Maybe there’s no way to determine the return on a Business Continuity Management plan, but once you’ve made the decision to put a plan in place, you might as well have the best possible coverage at the lowest possible cost. To help you understand the savings that an Axxana Phoenix System investment can provide, we developed an ROI white paper. I hope you find it helpful.
If you drive a car, but you don’t pay for your gas, you may not care how your driving habits affect your mileage. If you are a business manager, and you don’t directly pay for your organization’s insurance policy, you may not care how your business continuity and risk management programs affect your insurance rates.
I was very happy to stumble upon a blog post at Travelers Insurance entitled: “What I should know about risk management.” Business continuity management is an important component of risk management, and this post provides independent validation for something that, although obvious, is not often explicitly stated: A better business continuity plan lowers insurance rates.
The post makes an important point: “Risk management, particularly loss control, begins at the top of any organization.” And the way most organizations are set up, it’s not until you get to the top of the organization that all of the benefits and all of the costs come together so that the CFO can determine a return on investment. The CFO should care about how your business continuity plan affects insurance rates.
Anyone who has ever had responsibility for developing a business continuity management or risk management program knows that it’s important to have all of the stakeholders at the table. When assembling the team of stakeholders, don’t forget to include the person responsible for negotiating the business liability and loss insurance policy. Make sure that the benefits of improved business continuity and risk management are included in the determination of the premiums of the policy, and make sure that the benefits in the form of reduced premiums are included in the ROI analysis of business continuity and risk management investments. Then show it to your CFO.
For some time now, we’ve been talking about the cost savings associated with an Axxana Phoenix System installation. Now, I’m happy to tell you, we can give you a more in-depth look at the source of the cost savings and the resulting return on investment. Without resorting to estimates of the cost of downtime, we can show you how you save money on the communication links alone, which more than justifies the cost of an Axxana Phoenix System investment.
Our CTO, Alex Winokur, authored a whitepaper entitled “ROI Model: The Cost of Communication Bandwidth for Remote Replication,” which looks at the peak demand and average load on a WAN connection between two data centers. He demonstrates the excess cost of provisioning networks to meet peak demand and the increase in unprotected data when companies choose the alternative approach of provisioning to the average load. With Axxana, organizations can dramatically decrease their network bandwidth, while completely eliminating the risk associated with data that is not yet replicated to the remote site. I hope you will take the time to read our new whitepaper and give us your feedback. I think you will be pleasantly surprised by the possibilities that Axxana enables.
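The trade-off the whitepaper analyzes can be sketched numerically. The figures below are hypothetical, made up for illustration and not taken from the whitepaper: provisioning the replication link for peak write load buys capacity that sits idle most of the day, while provisioning for the average load means bursts above the average queue up as data that has not yet reached the remote site.

```python
# Illustrative sketch (hypothetical numbers, not from the whitepaper)
# of peak-vs-average provisioning for a replication link.

# Hourly write load in GB over one hypothetical business day.
hourly_load_gb = [2, 2, 2, 3, 5, 9, 14, 18, 20, 16, 10, 4]

peak_gb_per_hour = max(hourly_load_gb)
avg_gb_per_hour = sum(hourly_load_gb) / len(hourly_load_gb)

# If the link only carries the average rate, load above that rate
# accumulates as a backlog of data not yet at the remote site --
# exactly the data that is exposed if a disaster strikes.
backlog_gb = 0.0
worst_backlog_gb = 0.0
for load in hourly_load_gb:
    backlog_gb = max(0.0, backlog_gb + load - avg_gb_per_hour)
    worst_backlog_gb = max(worst_backlog_gb, backlog_gb)

print(f"peak provisioning:    {peak_gb_per_hour} GB/h")
print(f"average provisioning: {avg_gb_per_hour:.2f} GB/h "
      f"({peak_gb_per_hour / avg_gb_per_hour:.1f}x less capacity)")
print(f"worst exposure at average provisioning: {worst_backlog_gb:.2f} GB unreplicated")
```

In this toy example, average provisioning needs less than half the peak capacity but leaves tens of gigabytes unreplicated at the worst moment; the whitepaper’s point is that Axxana lets you buy the cheaper link without accepting that exposure.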
I’ve continued to look at the data in the Symantec 2010 Disaster Recovery Study. There’s a lot of very useful information in the study. Here’s some of what I found interesting:
- Only 20% of virtual environments are protected by replication or failover technologies
- 60% of virtualized environments are not covered in DR plans
- Actual downtime from outages is more than twice what companies expect
- 40% of DR tests fail to meet the RTO/RPO that have been set for the applications
That last one is very interesting. It’s hard to imagine anyone putting up with a 40% failure rate for long. I suspect some things will have to change, and soon. But in these tight budget times, that doesn’t mean companies are going to spend more. In fact, 43% of companies said their disaster recovery budget would decline in the next 12 months.
At Axxana, our sole reason for existing is to provide disaster recovery capabilities to organizations, so you might think that declining budgets for DR are bad news, but they’re not. No, in the world of disaster recovery, when budgets get tight and service levels aren’t being met, something needs to change. And that’s when organizations look for new, more innovative ways to provide data protection and disaster recovery. That’s what we offer. We have a new class of data protection, Enterprise Data Recording (EDR), that actually enables companies to meet RTO/RPO service levels while lowering the cost of data protection.