StorONE Blog

Designing a Built to Last Backup Storage Target

Designing a built-to-last backup storage target enables IT to store data for the next decade without data migration or disruption of services. The longevity of your backup storage is critical because most organizations use backup storage as a semi-archive: it is responsible both for the rapid recovery of mission-critical applications and for the long-term retention of files and file versions. This dichotomy of purposes breaks most backup storage targets and leaves customers scrambling for multiple solutions.

Requirements for a Decade-Proof Storage Target

  1. Capacity scaling from 30TBs to 30PBs
  2. Simultaneous utilization of drives of varying densities
  3. Ability to support new protocols as they become relevant
  4. Flexibility to support new storage controller innovations without data migration
  5. The performance to make instant recovery practical
  6. Data safety to support long-term data retention

Built to Last Capacity Scaling

Backup storage demands are typically 5X to 10X those of the primary storage infrastructure, but not all data centers are “petabyte-class.” The backup storage software needs to scale down far enough to support a medium-sized business but also scale up far enough to meet the double-digit-petabyte requirements of the enterprise.
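As a rough illustration of why a single product has to cover this range, here is a quick sizing sketch; the primary-capacity figures and the 5X to 10X multipliers are assumptions for illustration, not a sizing formula.

```python
# Rough backup-capacity sizing sketch; the primary capacities and the
# 5X-10X multipliers are illustrative assumptions, not a sizing formula.
def backup_capacity_tb(primary_tb: float, multiplier: float) -> float:
    """Estimate backup target capacity as a multiple of primary storage."""
    return primary_tb * multiplier

for primary_tb in (6, 500, 3000):             # small, mid-size, and large data centers
    low = backup_capacity_tb(primary_tb, 5)   # 5X the primary footprint
    high = backup_capacity_tb(primary_tb, 10) # 10X the primary footprint
    print(f"{primary_tb:>5} TB primary -> {low:>8,.0f} to {high:>8,.0f} TB of backup capacity")
```

Even with these rough assumptions, the same target has to make sense at a few dozen terabytes and at tens of petabytes.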

To scale to these capacities, most systems require a scale-out model, which adds complexity and costs more than a scale-up model. The problem is that most scale-up systems can’t scale to multiple petabytes because of limitations inherent in their software. For data centers to take advantage of scale-up storage’s better efficiency and lower cost, they must invest in a more efficient storage software engine.


Built to Last Drive Utilization

The way the backup storage target uses its physical drives is also critical. It must be able to mix drives of varying types and densities in a single storage system while hiding the management of those drives from the storage administrator. Accomplishing this requires completely abstracting the data from the underlying media. As production data and retained data continue to grow, the backup storage target can expand at each increment using the highest-density, lowest-cost-per-terabyte drive available. Doing so enables almost limitless expansion of a volume without changing backup workload destinations or schedules.
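A minimal sketch of this abstraction, using hypothetical classes rather than StorONE’s implementation: the pool absorbs drives of any density, and a volume only ever sees aggregate capacity.

```python
# Minimal sketch of media abstraction (hypothetical classes, not StorONE's
# implementation): a pool mixes drives of different types and densities and
# exposes only aggregate capacity, so volumes never see individual drives.
from dataclasses import dataclass, field

@dataclass
class Drive:
    media: str          # e.g. "HDD" or "SSD"
    capacity_tb: float

@dataclass
class Pool:
    drives: list = field(default_factory=list)

    def add_drive(self, drive: Drive) -> None:
        # Denser, cheaper-per-TB drives join the same pool later; volumes and
        # backup job destinations are untouched.
        self.drives.append(drive)

    @property
    def capacity_tb(self) -> float:
        return sum(d.capacity_tb for d in self.drives)

pool = Pool([Drive("HDD", 8), Drive("HDD", 8), Drive("SSD", 4)])
pool.add_drive(Drive("HDD", 20))               # expand with a higher-density drive
print(f"Pool capacity: {pool.capacity_tb} TB") # -> Pool capacity: 40 TB
```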

Protocol Independence

The only constant in the data center is change. During the current decade, the typical organization will add new backup software, potentially retire other backup software, and exploit new capabilities of its current backup solution. In many cases, these changes will require a change in the protocol used by the backup storage target. The problem is that most backup storage targets lock you into only a couple of protocols.

For data centers to take advantage of new backup software capabilities like archiving old backups, instant recovery, or whatever feature comes next, the backup storage target should provide protocol independence. It should also have a proven track record of adding protocol support as soon as those protocols become relevant to the market.

Storage Controller Flexibility

As capacities continue to scale and the organization reaches double-digit petabytes, the physical storage controller (server) may become the bottleneck. This limitation is what drives many backup storage target vendors to support only a scale-out architecture. However, that architecture assumes there will be no innovation in internal server connectivity, which we already know is not the case. PCIe Gen-4 servers are already on the market, and we expect PCIe Gen-5 within the next two years. Each motherboard generation doubles the potential bandwidth, pushing back the capacity limitations of the prior generation.
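To put rough numbers on that doubling (the per-lane figures below are approximations of published PCIe rates and ignore protocol overhead):

```python
# Approximate usable bandwidth of a x16 slot by PCIe generation; per-lane
# throughput roughly doubles each generation (figures are approximate).
per_lane_gb_s = {"PCIe Gen-3": 1.0, "PCIe Gen-4": 2.0, "PCIe Gen-5": 4.0}
lanes = 16

for gen, gb_s in per_lane_gb_s.items():
    print(f"{gen} x16: ~{gb_s * lanes:.0f} GB/s")
# PCIe Gen-3 x16: ~16 GB/s
# PCIe Gen-4 x16: ~32 GB/s
# PCIe Gen-5 x16: ~64 GB/s
```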

The storage software that drives the backup storage hardware must take advantage of these advances in motherboard design. Most, because of their dependency on the legacy storage IO stack and its algorithms, can’t.

Built to Last Data Integrity

If the organization uses the backup process to retain data and manage file versions, it must also make sure that data remains accessible. At the most basic level, data accessibility means the backup storage target’s controllers need high availability and protection from media failure. With drive capacities set to reach 50TB by 2026 and 100TB by 2030, traditional RAID can no longer provide that protection from media failure. Today it takes days to return a volume to a fully protected state after a drive failure. When we reach 50TB drives, it may take a month!
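A back-of-the-envelope calculation shows why; the 25 MB/s effective rebuild rate below is an assumption for a system that keeps serving backup and restore I/O during the rebuild, and real rates vary widely with the RAID implementation and workload.

```python
# Back-of-the-envelope RAID rebuild estimate: time = drive capacity / effective
# rebuild rate. The 25 MB/s effective rate is an illustrative assumption.
def rebuild_days(capacity_tb: float, rate_mb_per_s: float = 25.0) -> float:
    seconds = (capacity_tb * 1e12) / (rate_mb_per_s * 1e6)
    return seconds / 86_400

for capacity_tb in (18, 50, 100):
    print(f"{capacity_tb:>3} TB drive: ~{rebuild_days(capacity_tb):.0f} days to re-protect")
# 18 TB drive: ~8 days to re-protect
# 50 TB drive: ~23 days to re-protect
# 100 TB drive: ~46 days to re-protect
```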

Modern backup storage targets need to reinvent RAID to protect data, not drives. They need to know which data has fallen out of a fully protected state because of a drive failure, rather than methodically rebuilding the entire drive. Protecting data rather than the drive also means the backup storage target protects against silent data corruption, which becomes more likely as drives age. If silent data corruption occurs, the new RAID technology can rebuild just the data impacted by the resulting read error.
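An illustrative model of the difference, under the same assumed rebuild rate as above and an assumed 40% drive utilization (neither figure comes from StorONE):

```python
# Illustrative comparison, not StorONE's implementation: if the system tracks
# which data lost redundancy, the rebuild scales with the affected data rather
# than with raw drive capacity. Rate and utilization are assumptions.
def rebuild_days(data_tb: float, rate_mb_per_s: float = 25.0) -> float:
    return (data_tb * 1e12) / (rate_mb_per_s * 1e6) / 86_400

drive_tb = 50          # raw capacity of the failed drive
used_fraction = 0.4    # assumed share of that drive holding live backup data

print(f"Full-drive rebuild: ~{rebuild_days(drive_tb):.0f} days")                  # every block
print(f"Data-only rebuild:  ~{rebuild_days(drive_tb * used_fraction):.0f} days")  # affected data only
```

The ratio matters more than the absolute numbers: the less data the rebuild has to touch, the shorter the window of reduced protection.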

Performance for Today

Long-term, cost-effective data retention is critical, but IT also needs a solution that addresses today’s pressing problems, namely backup and recovery performance. Backup software sends most of its data to storage targets as block-level incremental or changed-block-tracking backups. These transfers are smaller but arrive more frequently, so ingest performance is critical.

Recoveries today are often initiated directly from the backup storage device, leveraging a feature called instant recovery. When IT uses instant recovery, the backup storage target becomes, at least for the time being, production storage. As a result, it must deliver production-class performance similar to what users are accustomed to. It must also provide the high-availability and data protection features (such as snapshots) that IT uses to support the backup process.

Conclusion

Modern backup software enables a dichotomy of purposes. It can provide data so quickly that it can claim 100% availability while also providing features rivaling long-term archive products. The problem is that this dichotomy of purposes breaks most backup storage targets and leaves customers scrambling for multiple solutions.

Learn More

Read about StorONE’s S1:Backup, which meets all of these requirements.

Join us next week for our webinar “How to Scale Backup Storage from 10TBs to 10PBs.” During the webinar, we will dive deep into helping you design an affordable, scalable backup storage solution that will last more than a decade without data migration. Join us live on Tuesday, September 28th at 1:00 pm ET / 10:00 am PT.

Register Now: How to Scale Backup Storage from 10TBs to 10PBs – StorONE

Request a Demo