
Elevate Backup into Business Continuity

Backup software capabilities like block-level incremental backup and instant recovery make it possible to elevate backup into business continuity. But for the backup process to replace standalone high-availability (HA) solutions and provide business continuance, it needs more than backup software innovation. Backup hardware has also advanced, with lower-cost, high-density drives.

The missing link is the software that drives this hardware, which hasn’t changed in over two decades. For backup to reach its full potential, it needs new backup storage software.

Elevating Backup into Business Continuity Requires

•    Frequent backups to deliver HA-like Recovery Point Objectives (RPO), as illustrated in the sketch after this list

•    Instant recovery directly from backup storage to deliver HA-like Recovery Time Objectives (RTO)

•    Production-class performance while in a recovered state to meet the Recovered Performance Expectation (RPE)

•    Highly available backup storage hardware so backup storage can become standby storage
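
To make the RPO point concrete, here is a minimal, illustrative sketch. The intervals are hypothetical examples, not product specifications; the point is simply that worst-case data loss is bounded by backup frequency:

```python
# Illustrative only: worst-case RPO equals the backup interval, because
# data written just after one backup completes is exposed until the next runs.
def worst_case_rpo_minutes(backup_interval_minutes: int) -> int:
    return backup_interval_minutes

print(worst_case_rpo_minutes(24 * 60))  # nightly backup: up to 1,440 minutes of loss
print(worst_case_rpo_minutes(15))       # 15-minute incrementals: up to 15 minutes of loss
```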

Next week’s webinar, “Four Steps to Elevate Backup into Business Continuance,” will help you architect a backup infrastructure that absorbs your business continuance software and hardware, dramatically simplifying operations and reducing IT spending.

Elevate Backup

When it comes to data protection, you’ll often hear the phrase “It is not about backup, it is about recovery,” and that is only partially true. Recovery is the acid test of your backup strategy, but without the solid foundation of a frequently captured copy of your data, there is little worth recovering.

Replacing HA Requires Frequent Backups

HA solutions capture changed data continuously for those few mission-critical applications that every organization has. Most workloads, however, don’t need constant capture, but they do need more rapid availability than legacy backup can deliver. Modern backup software can fill the gap because it can execute both continuous backups for workloads that need zero data loss and frequent block-level incremental backups for workloads that can afford minor data loss. The problem is the backup storage target.
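
To illustrate what a block-level incremental backup does, here is a minimal sketch; the helper is hypothetical, and real products typically get the changed-block list from hypervisor changed block tracking (CBT) rather than rescanning the volume:

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB blocks; granularity varies by product

def changed_blocks(volume: bytes, previous_hashes: dict[int, str]) -> dict[int, bytes]:
    """Return only the blocks whose content differs from the last backup."""
    delta = {}
    for offset in range(0, len(volume), BLOCK_SIZE):
        block = volume[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if previous_hashes.get(offset) != digest:
            delta[offset] = block              # transfer this block only
            previous_hashes[offset] = digest   # remember it for the next run
    return delta
```

However small each run is, the deltas land on the backup target as many scattered block writes rather than one long sequential stream, which leads directly to the IO problem described next.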

Unlike traditional backup jobs, which are large, once-per-day sequential transfers, these more frequent, and in some cases continuous, data transfers have a far more random IO pattern. Traditional hard-disk-based storage can’t keep pace, forcing customers to keep their expensive HA solution for the few mission-critical workloads while leaving the new, higher expectations of business-critical applications unmet.

The Consolidation Secret

Another challenge to the legacy backup storage target is how to deal with consolidation jobs. Block-level backups typically create one master full backup with an incremental backup chain that links to the original master. After several incremental iterations, this “chain” requires that the backup software consolidate or interleave those incremental backups into the master full. The goal is to create a new full, virtually, without dragging all the data across the network.

The challenge is that interleaving incremental backup jobs with the master full to create a new, virtual full places a massive IO load on the backup storage target. Legacy backup storage can’t handle it. As a result, many IT professionals choose to perform another “real” full backup, which is time-consuming and impacts production applications.
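
Conceptually, the consolidation job keeps the newest version of every block across the chain, as in this minimal sketch (the data structures are hypothetical, not any vendor’s implementation):

```python
def synthesize_full(master: dict[int, bytes],
                    incrementals: list[dict[int, bytes]]) -> dict[int, bytes]:
    """Merge a master full and its incremental chain into a new virtual full.

    Each dict maps block offset -> block data; later incrementals win,
    so the result holds the newest version of every block.
    """
    synthetic = dict(master)            # start from the original full
    for increment in incrementals:      # apply oldest to newest
        synthetic.update(increment)     # newer blocks overwrite older ones
    return synthetic
```

On real backup storage, this merge translates into random reads across every backup in the chain plus a large write of the new full, all at once, which is exactly the load profile legacy disk targets struggle with.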

Replacing HA Requires a Backup-Optimized Flash Front End

To combat the challenges of frequent block-level incremental backups, some vendors are seizing the opportunity to push QLC-based all-flash arrays as a viable backup target. While flash has come down in price, so have hard drives, and hard drives still enjoy a massive price advantage. All-flash, even with QLC, isn’t practical for backup.

Instead of all-flash, a modern backup storage target needs a backup-optimized flash front end that acts as a storage tier, not a cache. An optimized flash storage tier requires that the backup target software leverage the flash tier intelligently, delivering maximum read and write performance from twelve or fewer flash drives.

Most legacy production all-flash arrays require 24 or more flash drives to deliver appreciable performance because their software extracts only a fraction of each drive’s potential. Backup storage doesn’t have the luxury of hiding the cost of this inefficiency. Its software must extract the full performance and capacity of every flash drive.

Another part of the optimized flash front end is the intelligent movement of data from the flash tier to the hard disk tier for long-term storage to lower costs. As we will discuss in the third blog in this series, leveraging high-density drives means the backup storage software must deliver rapid RAID rebuilds and maintain the integrity of retained data.
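
A simplified view of that tiering decision appears below; the retention window is an assumption for illustration, not a StorONE parameter:

```python
from datetime import datetime, timedelta, timezone

FLASH_RETENTION = timedelta(days=7)  # assumed window for keeping backups on flash

def choose_tier(backup_time: datetime, now: datetime) -> str:
    """Keep recent backups on flash for instant recovery and consolidation
    jobs; demote older ones to high-density hard disk for low-cost retention."""
    if now - backup_time <= FLASH_RETENTION:
        return "flash"  # fast random IO for recovery and merges
    return "hdd"        # inexpensive long-term capacity

# Example: a backup taken ten days ago belongs on the hard disk tier.
now = datetime.now(timezone.utc)
print(choose_tier(now - timedelta(days=10), now))  # -> "hdd"
```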

IT professionals need to avoid vendors that are “just throwing flash” at the problem. The backup storage software needs to extract maximum performance from the fewest drives possible to meet business continuity demands while keeping costs low. The flash tier must also use high-density drives so that the capacity of these few drives is large enough to provide meaningful value to operations like block-level backups, consolidation jobs, and instant recovery. A high-capacity flash tier, driven by modern storage software, ensures that the hard disk tier is used only for archiving older backup data.

Learn More

Our next blog will discuss items two and three, enabling backup storage to provide rapid instant recovery and production-class performance while the application or virtual machine is in its recovered state. In the meantime, please register for our next webinar, “Four Steps to Elevate Backup into Business Continuance,” or download our newest whitepaper, “Designing a Modern Backup Storage Infrastructure.”
