Archive for August, 2012
Our thoughts and concerns are with the people living along the Gulf Coast of the United States this week, as Hurricane Isaac is about to make landfall. It was only seven years ago that Hurricane Katrina came ashore and wreaked havoc and destruction on the area.
Katrina was a far more severe storm, making landfall as a Category 3 hurricane, while Isaac may only reach Category 1 or 2. Isaac is slow moving, which means that people and businesses have time to prepare. It also, however, means there will be longer periods of sustained winds and rain along the Gulf Coast. Flooding risk is high. Many people will evacuate the area, but many will choose to stay. Based upon early news reports, it appears that the cities and government are more prepared this time than last. In particular, flood control systems have been enhanced around major metropolitan areas.
With the hurricane approaching, now is the time for organizations to implement their disaster preparedness plans. Operational functions that can be moved to another region should be moved. If organizations have a second data center outside of the high-risk zone and can move production applications to that data center, they should do so now. For most companies, that means shutting down production applications and restarting them. But it's better to have a period of planned downtime than to try to recover applications that have lost data because of an unplanned outage.
I found a very good article written by Tom Deaderick called “10 Places You Don’t Want a Data Center.” Tom is a Director at OnePartner LLC, which provides high-availability colocation services from the company’s data center in the southwest corner of Virginia. Anyone who is on a site-selection team for a new data center or evaluating new colocation providers should read Tom’s article. OnePartner is doing something right. The company reports having no outages in over 1400 days.
Tom’s #2 place you don’t want a data center is “in a location that suffers from frequent natural disasters.” He includes some useful data on the annual frequency of tornadoes for each state in the United States. Based on a quick glance at the data, you might think you should never build a data center in Texas. The state had an average of 139 tornadoes per year between 1950 and 2004. That’s over 7,600 tornadoes in 55 years. Maryland, on the other hand, had only 6 tornadoes per year over the same period. From a tornado-risk perspective, Maryland is obviously much safer, right? Wrong.
You’ve got to be careful with statistics. Texas, as most Americans know, is the second largest state in the U.S., with an area of almost 270,000 square miles. Maryland is #42 and covers only 10,455 square miles. So if you calculate the tornado rate per unit area, Maryland ranks 8th in annual tornado frequency at 5.74 tornadoes per 10,000 square miles, about 10% higher than Texas, which ranks 11th. For the record, Florida is the state with the highest tornadoes-per-10,000-square-miles rate, at 9.37.
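The normalization above is simple enough to sketch in a few lines of Python. The tornado counts and land areas below are the figures quoted in this post (Texas's area is rounded); treat them as illustrative rather than official statistics.

```python
# Annual tornado averages (1950-2004) and land areas, as quoted above.
# These figures are taken from the article's discussion, not an official dataset.
STATES = {
    # state: (average tornadoes per year, land area in square miles)
    "Texas": (139, 268_600),
    "Maryland": (6, 10_455),
}

def tornadoes_per_10k_sq_mi(tornadoes_per_year: float, area_sq_mi: float) -> float:
    """Normalize an annual tornado count by land area."""
    return tornadoes_per_year * 10_000 / area_sq_mi

for state, (count, area) in STATES.items():
    rate = tornadoes_per_10k_sq_mi(count, area)
    print(f"{state}: {rate:.2f} tornadoes per 10,000 sq mi per year")
```

Running this reproduces the comparison in the text: Maryland comes out around 5.74 per 10,000 square miles versus roughly 5.17 for Texas, so the "safer" state actually has the higher area-normalized rate.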
Tom offers 10 important factors to consider when locating a data center. Read the article to get the list, because I don’t want to steal his thunder. But, yes, companies should know the frequency of various types of disasters and obviously avoid known flood plains, airplane take-off and landing paths, and the San Andreas Fault. I wonder if Tom looked at earthquake risk in Virginia. Based on data from the last century, earthquakes there are extremely rare. But, in fact, a significant earthquake occurred in Virginia in August 2011. And there was another, less severe earthquake in the same area just a few days ago. The epicenters of both the August 2011 earthquake and this latest one were almost 350 miles from Tom’s data center. But a much stronger earthquake occurred in southwest Virginia in 1774. I wonder when southwest Virginia will have its next big earthquake. Despite new earthquake prediction techniques, nobody really knows.
That brings me to my last point. Disasters are, by their nature, simple to track after the fact, but very difficult to predict. In designing data centers for maximum up-time and minimal data loss, it’s important to protect your data against disasters that you can’t predict.
I think everyone will agree that when nearly 10% of the world’s population loses power, that counts as a major disaster. Unlike some disasters, the recent power outage in India, which affected more than 600 million people, was definitely predictable. Penny Jones at DatacenterDynamics wrote about India’s power issues back in February 2011.
In a follow-up article this week, after the blackout, DatacenterDynamics’ general manager for India, Praveen Nair, reported that
Northern India alone can suffer an average of three to four hours of power cuts a day as the government carries out load shedding.
For larger data centers, the massive power outage of the past week was not particularly disruptive, because, as Nair says,
99% of the big players are used to this condition and have adequate backup. So when the outage took place, most data centers switched to generator sets for their power needs and most are equipped to run for days.
The same cannot be said for the millions of people trying to get to and from work on transportation systems that were completely shut down. In a disaster, the human factor can never be ignored, which is why we speak so frequently about the need to have a recovery location outside the disaster zone. In this case, the disaster zone was most of India, a much larger area than would be affected by even the largest typhoon, tsunami, flood, fire, or earthquake. And the best place to have a recovery facility would have been on another continent, where there would be no local human impact.
Organizations always need to prepare for recovery from natural disasters. I suspect, however, that some of the greatest challenges for organizations, going forward, will be disasters related to infrastructure failures, particularly in rapidly growing areas such as India.