Think you’re protected? Think again. When a rolling disaster strikes, traditional disaster recovery solutions can’t protect you. Axxana’s Phoenix is the only solution that enables continuous application availability in a rolling disaster. Don’t say we didn’t warn you.

Be Ready. Survive the Inevitable

Rolling disasters are inevitable. With Axxana’s Phoenix, application downtime is not.

Contrary to popular assumption, traditional disaster recovery solutions—even those using synchronous replication between their primary and standby sites—do not ensure continuous application availability.

In a rolling disaster, a power outage or a network communication disruption causes the replication link between the two sites to fail before the primary data center does. Applications at the primary site continue to create new transactions, but these transactions are not replicated to the standby site. Without Axxana’s Phoenix, data loss is guaranteed—and with it, database inconsistencies, protracted recoveries, and application downtime.

Adding to the chaos, the exact chain of events leading to a disaster is often uncertain. In this scenario, recovery teams cannot immediately determine whether a rolling disaster has occurred; therefore, they must respond as if one has.

Many organizations mistakenly believe that their existing disaster recovery solution can protect them during a rolling disaster. Don’t be one of them. For a reality check on your own organization’s preparedness for a rolling disaster, read below about the four most common misconceptions.

Myth: Synchronous Is Fail-Safe

Beware of a false sense of security when it comes to synchronous replication. Synchronous replication is designed for disasters in which the production data center fails before (or at the same time as) the lines used to replicate data to the backup data center (or a DRaaS provider’s data center). In rolling disasters, where replication lines fail first, the production data center continues to produce data. Then, when the disaster reaches the primary data center, this yet-to-be replicated data is lost and cannot be recovered. Because the organization may not have built rolling disasters and data loss into its risk mitigation plans, response time, recovery, and availability may also be compromised.
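The failure sequence described above can be sketched as a toy timeline. This is a hypothetical simplification for illustration only, not Axxana's or any vendor's implementation:

```python
# Toy model of a rolling disaster under synchronous replication.
# The replication link fails first; the primary keeps committing.

def run_rolling_disaster(transactions, link_fails_at):
    """Replicate transactions to the standby until the replication
    link fails, then let the primary continue committing locally."""
    primary, standby = [], []
    for i, txn in enumerate(transactions):
        primary.append(txn)          # the primary always commits
        if i < link_fails_at:        # link still up: standby gets a copy
            standby.append(txn)
        # after link_fails_at: primary keeps producing, standby falls behind
    return primary, standby

primary, standby = run_rolling_disaster(
    ["t1", "t2", "t3", "t4", "t5"], link_fails_at=3
)
# When the disaster finally reaches the primary, everything committed
# after the link failure is unrecoverable at the standby.
lost = primary[len(standby):]
print(lost)
```

In this sketch, every transaction committed between the link failure and the primary's failure exists only at the primary, which is exactly the data a synchronous-replication design assumes cannot exist.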

Myth: Data Guard Max Availability = Continuous Application Availability

Maximum Availability mode is commonly mistaken for synchronous replication in an Oracle® Data Guard environment. Max Availability mode can help ensure continuous application availability if the production data center fails first or if the data center and communication lines fail simultaneously, but it cannot ensure continuous application availability in rolling disasters. This is because Data Guard cannot operate without fully functioning replication lines; when these lines fail in a rolling disaster, the application continues to operate without replicating the data to the disaster recovery site.
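The trade-off above can be modeled in a few lines. This is a hedged, simplified sketch of Maximum-Availability-style behavior (names are hypothetical): while the standby acknowledges, commits are synchronous; when the link is down, the primary keeps committing rather than stalling, preserving availability but not the standby copy:

```python
# Toy model of Maximum-Availability-style commit behavior.
# Simplification: no retries, timeouts, or gap resolution.

def commit(txn, link_up, primary_log, standby_log):
    """Commit at the primary; replicate synchronously only if the link is up."""
    primary_log.append(txn)
    if link_up:
        standby_log.append(txn)   # standby acknowledges the redo
        return "synchronous"
    return "unprotected"          # commit proceeds with no standby copy

primary_log, standby_log = [], []
# The replication link fails partway through the workload (txns 3 and 4).
modes = [commit(t, link_up=(t < 3), primary_log=primary_log,
                standby_log=standby_log) for t in range(5)]
print(modes)
print(len(primary_log) - len(standby_log))  # commits at risk of loss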

Myth: Three DCs Is the Ferrari of Business Continuity

In three-data-center topologies, the organization replicates synchronously to a nearby data center and asynchronously to a more distant data center. If the synchronous replication to the nearby data center ceases (as in a rolling disaster), and the primary site continues to produce data, the organization will lose that data. As with other data loss scenarios, recovery time—and therefore, downtime—increases and risks of noncompliance, reputation damage, and financial exposure climb. In other words, in spite of its tremendous complexity and cost, your three-data-center solution won’t save you.

Myth: Asynchronous Data Lag Is Preset

Organizations running asynchronous replication are willing to tolerate a certain amount of data loss, and they typically incorporate this risk into their business continuity plans and service level agreements. Rolling disasters amplify this risk by creating a larger data lag—and therefore greater data loss and longer recovery times—than the organization has planned for or can tolerate. If replication fails during a period of peak throughput, a larger backlog of unreplicated transactions will accumulate and be lost. At this point, data loss and the resulting downtime will cross the threshold of tolerability and become a potential liability for the organization.
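The relationship between throughput and exposure can be made concrete with a back-of-the-envelope calculation. The numbers below are hypothetical; the point is only that loss scales with transaction rate for the duration of the outage:

```python
# Toy calculation of asynchronous replication exposure.
# Hypothetical figures; loss grows with throughput while the link is down.

def data_at_risk(throughput_tps, outage_seconds, steady_lag_txns):
    """Unreplicated transactions if the primary fails at the end of the
    outage: the planned steady-state lag plus everything produced while
    replication was down."""
    return steady_lag_txns + throughput_tps * outage_seconds

# Same one-minute outage, same planned lag, very different exposure:
off_peak = data_at_risk(throughput_tps=100, outage_seconds=60,
                        steady_lag_txns=500)
peak = data_at_risk(throughput_tps=2000, outage_seconds=60,
                    steady_lag_txns=500)
print(off_peak, peak)
```

A business continuity plan sized for the steady-state lag alone will be blindsided by the peak-load case, which is why rolling disasters push asynchronous deployments past the loss they agreed to tolerate.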