Posts Tagged ‘Phoenix’
This week, as thousands of IT professionals converge at EMC World, many will be getting their first look at Axxana’s Phoenix System RP. The system, which integrates with EMC RecoverPoint and all RecoverPoint-supported platforms, challenges IT professionals to change the way they think and to imagine what was previously thought impossible. Now it truly is possible for companies to protect all of their data over any distance through a wide range of disasters. It is not only possible; it is affordable for virtually any mid-sized or large enterprise. And thanks to RecoverPoint’s and the Phoenix System RP’s integration with VCE Vblocks, zero data loss over any distance is also possible for smaller companies that leverage public cloud infrastructures built on VCE Vblocks.
If you look at the home page of our Axxana website, you will see that we have changed our banner this week to honor other great innovators, individuals who imagined and created what was previously thought impossible. The Wright brothers proved that flight was not just for birds, bees, and bats, but that man, too, could fly. Alexander Graham Bell proved that people could remain connected and communicate, hearing each other’s voices over vast distances. John Bardeen and his colleagues, who developed the first commercially viable transistors, proved that electronics could be made affordable for the masses. And Albert Einstein, well, he changed just about everything we thought about the physical world.
In his book The Black Swan: The Impact of the Highly Improbable, Nassim Taleb explains how Europeans could not imagine black swans until they actually saw them. Just like black swans, many will not believe that they can recover their data from the ashes until they see the Phoenix System RP. No one today denies the existence of black swans, and everyone can imagine them. Soon, no one will doubt the ability to protect all data and recover it from the ashes, from the floods, from an earthquake, or from a building collapse. If you are at EMC World, please stop by and see for yourself. We are at Booth 605.
Amazon has built a fantastic reputation as a provider of cloud services. With multiple data centers, service availability levels at 99.9% and integrated data backup services, Amazon’s EC2 makes perfect sense for new companies that want to build software applications and deliver them as a service. By delivering applications as a service, emerging companies can be a disruptive force competing against established packaged-application vendors. And Amazon EC2 enables these Application-as-a-Service suppliers to avoid the up-front capital costs associated with building multiple, redundant data centers. It doesn’t mean, however, that Amazon EC2 is perfect and without risk.
A look at the Amazon Web Services Service Health Dashboard today showed a number of service interruptions and performance issues in Amazon’s Northern Virginia facility on April 21 – 24. Henry Blodget of Business Insider reported that Amazon had a cloud crash and the “cloud crash destroyed many customers’ data.”
It would take a lot of digging to get to the bottom of why data was lost. The Business Insider article refers to a letter from Amazon to a customer that discusses “an inconsistent data snapshot” and Amazon’s inability to recover the data. Unfortunately, corrupted data that has been carefully copied to another location is still corrupted. That’s why it is important to keep a series of application-consistent snapshots together with transaction journals, so that application data can be restored to its last known good state and updates can be applied to bring the data back to RPO=0. This is precisely what is done with the EMC RecoverPoint/Axxana Phoenix System RP solution. RecoverPoint maintains application-consistent snapshots, and Axxana stores the changed data, protected from fire, smoke, flood, shock, earthquakes, and building collapse.
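To make the snapshot-plus-journal idea concrete, here is a minimal sketch of the recovery logic: roll back to the last application-consistent snapshot, then replay the journaled updates that post-date it. All names and data structures here are illustrative assumptions for the sketch; they are not RecoverPoint or Phoenix APIs.

```python
# Minimal sketch: recovering to RPO=0 from an application-consistent
# snapshot plus a transaction journal. Names are illustrative only.

def restore_to_rpo_zero(snapshots, journal):
    """Return the recovered data state.

    snapshots: list of (sequence_number, state_dict) pairs, oldest first,
               each captured at an application-consistent point.
    journal:   list of (sequence_number, key, value) updates captured
               since the snapshots were taken.
    """
    # Start from the last known-good consistent snapshot.
    last_seq, snap_state = snapshots[-1]
    state = dict(snap_state)  # copy so the snapshot itself is untouched

    # Replay, in order, only the journaled updates that post-date the
    # snapshot, bringing the data forward to its final state (RPO = 0).
    for seq, key, value in sorted(journal):
        if seq > last_seq:
            state[key] = value
    return state

snaps = [(1, {"balance": 100}), (5, {"balance": 250})]
journal = [(3, "balance", 200), (6, "balance", 300), (7, "owner", "acme")]
print(restore_to_rpo_zero(snaps, journal))
# {'balance': 300, 'owner': 'acme'}
```

The key point the Amazon incident illustrates: if the snapshot itself is inconsistent, replaying the journal on top of it cannot help, which is why the snapshot must be application-consistent to begin with.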
As cloud services are increasingly adopted for mission-critical applications, perhaps it is time to consider a zero-data-loss solution.
This post isn’t being written to be critical of Google. They have a tremendous platform. I know many people who use Google, not only for advertising and searching, but for blogging, for collaboration applications, and for email. But I’ve been watching the continuing problems with Google’s Gmail service. On Sunday, February 27th, a software bug caused some Gmail user data to be deleted. As reported by Google, only 0.02% of users were affected by the data loss, down from earlier estimates of 0.08%. It turns out, though, that 0.02% of the Gmail user base is still a big number. By some estimates, it’s about 35,000 people. It’s now five days later. The latest update from Google, which is from yesterday, reports that:
We have restored the majority of the affected accounts, and will continue to restore the remaining accounts as quickly as possible. Accounts with more mail are taking more time.
Why would it take Google so long to restore data? Because Google has to restore the data from tape. Google has an interesting perspective on tape:
To protect your information from these unusual bugs, we also back it up to tape. Since the tapes are offline, they’re protected from such software bugs. But restoring data from them also takes longer than transferring your requests to another data center, which is why it’s taken us hours to get the email back instead of milliseconds.
Hours instead of milliseconds? Actually, for some users, it’s days instead of milliseconds.
If you don’t have both a local high-availability (HA) site and a replicated site for system maintenance and disaster recovery some distance away, which would be better to have: just the HA site, or just the replicated disaster recovery site?
With regard to the HA option, Kathleen Lucey, President of Montague Risk Management and a business continuity management expert, pointed out:
If what you are talking about is local clustering in the same site, then I would not consider this to be HA. The protection afforded by a same-site clustering solution is limited to failover to the designated backup server in the event of a failure of the primary. A larger local event could take down the entire cluster, and so this is not really HA, but more properly local hardware backup.
It is an unfortunate fact that high-bandwidth communication lines are required for metropolitan-area synchronous replication. They are also needed for frequent asynchronous transmissions of snapshots to a remote disaster recovery center. When we meet with companies in the U.S., the U.K., or Central Europe, they may complain about the cost of bandwidth for replication, but at least the bandwidth is available at a price. Anyone with enough money can get as many 1 Gb/sec lines as they need, which will do nicely to protect the data for most applications. And they can take those lines and use them with their favorite storage-controller-based, triple-site replication software.
In Johannesburg, South Africa, a company might be lucky to get a pair of 40 Mb/sec lines, which in most cases won’t be enough to protect all of the company’s data. And the cost will be outrageous. So triple-site replication approaches are almost unheard of there. The world may be getting increasingly flat, but it’s a mistake to believe that every region of the world has equal access to an affordable, abundant supply of communications resources.
I was talking to John McArthur the other day about a use case we are looking at with a customer in Canada. The customer doesn’t want to lose data if and when a disaster hits their primary data center, and their service provider’s DR data center is more than 100 miles away. Therefore, the storage vendor presents the customer with two options – either do Multi-Hop (one of two ways for deploying a three-data-center topology for replication over long distances, trying to lose as little data as possible – EE) or go with the new Axxana Phoenix solution. John asked me what I thought of this, and I naturally answered that Multi-Hop is too expensive, too complicated to deploy, and doesn’t really solve the problem… (of long distance synchronous replication… – EE). John liked my answer… he said so… and I repeated the three reasons why I thought Multi-Hop doesn’t really cut it…: “It is too expensive… too complicated to deploy, and doesn’t really solve the problem…”
As I said that again, I recalled an excellent joke involving a consultant and a flock of sheep:
A shepherd was herding his flock in a remote pasture, when, suddenly, a brand-new BMW advanced out of the dust cloud towards him. The driver, a young man in a Brioni suit, Gucci shoes, Ray Ban sunglasses and YSL tie, leaned out the window and asked the shepherd… “If I tell you exactly how many sheep you have in your flock, will you give me one?” The shepherd looked at the man, obviously a yuppie, then looked at his peacefully grazing flock and calmly answered, “Sure.”
The yuppie parked his car, whipped out his IBM ThinkPad and connected it to a cell phone, then surfed to a NASA page on the internet, where he called up a GPS satellite navigation system, scanned the area, and then opened up a database and an Excel spreadsheet with complex formulas. He sent an email on his Blackberry and, after a few minutes, received a response. Finally, he printed out a 130-page report on his miniaturized printer, turned to the shepherd, and said, “You have exactly 1,586 sheep.” “That is correct; take one of the sheep,” said the shepherd. He watched the young man select one of the animals and bundle it into his car.
Then the shepherd said: “If I can tell you exactly what your business is, will you give me back my animal?” “OK, why not,” answered the young man. “Clearly, you are a consultant,” said the shepherd. “That’s correct,” said the yuppie, “but how did you guess that?” “No guessing required,” answered the shepherd. “You turned up here although nobody called you. You want to get paid for an answer I already knew, to a question I never asked, and you don’t know crap about my business… Now give me back my dog.”
So how do you know a storage sales rep is selling you on the idea of multi-hop replication?! It’s easy… no guessing required… it’s just too expensive… too complicated… and doesn’t really solve the problem… If you want Synchronous Replication over your existing Asynchronous lines… whatever the distance may be between your data centers… you need Axxana.
There are three things every company’s disaster recovery planner should know:
- The communications costs for the current data replication approach
- The communications costs when using Axxana’s Phoenix System RP
- The cost of the Axxana Phoenix System RP
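With those three figures in hand, the comparison is simple arithmetic: weigh the communications savings over the planning horizon against the one-time system cost. The sketch below shows the calculation with entirely hypothetical placeholder numbers; they are not Axxana pricing.

```python
# Illustrative comparison of the three figures above. All dollar
# amounts are hypothetical placeholders, not Axxana pricing.

def net_savings(current_comm_cost, phoenix_comm_cost,
                phoenix_system_cost, years=3):
    """Net savings over `years`, assuming the system cost is paid once
    and communications costs recur annually."""
    comm_savings = (current_comm_cost - phoenix_comm_cost) * years
    return comm_savings - phoenix_system_cost

# Example: synchronous lines at $400k/yr replaced by cheaper async
# lines at $100k/yr, with a hypothetical $500k one-time system cost.
print(net_savings(400_000, 100_000, 500_000))  # 400000
```

On these placeholder numbers, the communications savings pay for the system well inside three years, which is exactly the comparison the three bullets above let a planner run with real figures.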
I thought I should share with you a little-known secret. One of Axxana’s first installations of the Phoenix System RP was done at no cost to the customer. That’s right. No cost. And it wasn’t because we gave the customer the system for free. We didn’t.