Archive for January, 2011
About thirty years ago, IBM announced the IBM 3380 Direct Access Storage Device. It had a capacity of 2.52 GB and a price that started at $81,000 without the controller. At the time, successful storage providers like IBM built their storage systems from high-quality, high-cost components and charged a premium. The design goal was to prevent failures, because there weren’t many ways to survive them.
Given the volume of data created today, storage systems are by necessity very different. They are designed with the expectation that components will fail, and fail frequently, but that the data will survive. To achieve acceptable levels of data availability and data protection, storage system suppliers overcome component failures with software, redundant components, and redundant copies of the data.
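To see why redundant copies matter so much, here is a minimal back-of-the-envelope sketch in Python. The failure probability is purely hypothetical, and the model naively assumes copies fail independently and ignores repair and rebuild, so it understates what real systems achieve; it only illustrates how each added copy multiplies protection.

```python
# Naive durability model: the probability of losing data that is kept
# as k independent, redundant copies. Assumes copies fail independently
# and ignores repair/rebuild; the failure probability below is purely
# hypothetical, chosen only to illustrate the principle.

def p_data_loss(p_copy_fail: float, copies: int) -> float:
    """Probability that every one of the redundant copies is lost."""
    return p_copy_fail ** copies

p = 0.05  # hypothetical annual failure probability of a single copy
for k in (1, 2, 3):
    print(f"{k} copies: P(loss) ~ {p_data_loss(p, k):.6f}")
# 1 copies: P(loss) ~ 0.050000
# 2 copies: P(loss) ~ 0.002500
# 3 copies: P(loss) ~ 0.000125
```

Under this toy model, each additional copy cuts the chance of loss by a factor equal to the copy failure rate, which is why "redundant copies" is such a powerful lever even when individual components are unreliable.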
What’s the chance that you will be hit on the head with a hammer? What’s the probability that your data center will be hit by a major fire, a major flood, a hurricane, or an earthquake? Both are pretty low, right? If you are a disaster recovery professional, you’ve probably been asked at least once, “Why are you budgeting so much for disaster recovery when these events are unlikely to happen?” Wouldn’t it be better to spend money on preventing or surviving things that happen frequently? Or better yet, wouldn’t it be better to spend money on things that will help the company grow? But just like a hammer to the head, big disasters can be very costly when they do happen. So we, as businesses, somewhat reluctantly, spend money trying to prevent the disasters we can prevent and survive those we cannot.
I was looking at some articles on the severity and frequency of accidents and found an interesting blog post by Bill Wilson, who has worked in the nuclear power industry and writes about the prevention of industrial accidents. He wrote about Herbert William Heinrich, who worked for an American insurance company and published a book on the prevention of industrial accidents. Heinrich’s research found that for every fatal or severe accident, there were 29 minor injuries and 300 accidents that resulted in no injuries. He suggested that by eliminating the root causes of the no-injury accidents, companies could prevent most fatal ones.

Wilson’s post shows how a dropped hammer can produce a wide range of results, from no injury to fatality, depending on the circumstances surrounding it, such as whether someone was walking beneath the hammer and how high the hammer was when it dropped. What is common to all of these events is that every injury could be prevented by eliminating the dropping of the hammer, and it’s possible to imagine eliminating all hammer dropping by tethering the hammer to the person carrying it. When it comes to accident prevention, however, the problem with that approach is that the tether that prevents the dropped hammer does nothing to prevent the falling brick.
If all of the leaks and rumors reported by the industry press are to be believed, EMC will soon announce the company’s latest unified storage offering; in fact, by the time you read this, it will probably already have been announced. Dave Raffo at SearchStorage says it will be named the VNX and will combine the CLARiiON CX4 and the Celerra NS into one unified storage platform. This isn’t EMC’s first unified storage offering, and it’s not the only one on the market. NetApp has been offering a unified storage solution for a while and claims over 150,000 installations. Oracle has been shipping unified storage ever since it purchased Sun Microsystems, and, of course, several startups are shipping solutions as well.
Unified storage has been loosely defined as a storage system that provides both block and file services. Beyond that, arguments over what constitutes “true” unified storage devolve into a discussion of whether the so-called unified storage has common management, automated tiering, integrated de-duplication, thin provisioning, common replication, and common snapshot features. Regardless of the specific features, what is common to all unified storage systems is that they enable the consolidation of more data of different types (block and file) into one scalable storage system at one location. This helps customers drive greater efficiency in management, increased flexibility, and improved storage utilization. What’s missing in the drive toward unified storage, however, or at least pushed to the background, is an honest discussion of the need for truly superior data protection when all of that data is stored in a single system.
It is an unfortunate fact that high-bandwidth communication lines are required for metropolitan-area synchronous replication, and they are also needed for frequent asynchronous transmissions of snapshots to a remote disaster recovery center. When we meet with companies in the U.S., the U.K., or Central Europe, they may complain about the cost of bandwidth for replication, but at least the bandwidth is available at a price. Anyone with enough money can get as many 1 Gb/sec lines as they need, which will do nicely to protect the data for most applications, and they can use those lines with their favorite storage-controller-based, triple-site replication software.
In Johannesburg, South Africa, a company might be lucky to get a pair of 40 Mb/sec lines, which in most cases won’t be enough to protect all of the company’s data, and the cost will be outrageous. So triple-site replication approaches are almost unheard of there. The world may be getting increasingly flat, but it’s a mistake to believe that every region of the world has equal access to an affordable, abundant supply of communications resources.
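To make that contrast concrete, here is a rough Python sketch of the arithmetic. The 0.8 efficiency factor is an illustrative assumption to account for protocol overhead; real links will vary.

```python
# Rough replication-capacity arithmetic for the lines mentioned above.
# The 0.8 efficiency factor is an illustrative assumption for protocol
# overhead; actual usable throughput on real links will vary.

def tb_per_day(mbit_per_sec: float, efficiency: float = 0.8) -> float:
    """Approximate terabytes of changed data a line can move per day."""
    bytes_per_sec = mbit_per_sec * 1_000_000 / 8 * efficiency
    return bytes_per_sec * 86_400 / 1e12

for label, mbit in [("one 1 Gb/sec line", 1000.0),
                    ("a pair of 40 Mb/sec lines", 80.0)]:
    print(f"{label}: ~{tb_per_day(mbit):.1f} TB/day")
# one 1 Gb/sec line: ~8.6 TB/day
# a pair of 40 Mb/sec lines: ~0.7 TB/day
```

At well under one terabyte per day, even a modest daily change rate can exceed what the Johannesburg lines can carry, which is exactly why the replication schemes that work in the U.S. or Central Europe are almost unheard of there.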