
What is Auto-Tiering?

Auto-tiering is a feature of some storage solutions that automatically moves data from one type of storage to another. It is typically associated with hybrid storage, which moves data from flash-based solid-state drives (SSDs) to hard disk drives (HDDs). Auto-tiering technology is not limited to moving data between flash and hard disk drives, nor is it limited to moving data between only two storage tiers. A storage solution with auto-tiering should be able to move data across various types of storage tiers.

What’s the Value of Auto-Tiering?

Auto-tiering has one primary goal: to lower the cost of storage infrastructure by moving less active data to a less expensive storage tier. The technology is built on the reality that most data within the data center is inactive. By most accounts, only about 10% of an organization’s data is active. It does not make financial sense to store data that the company is no longer using on expensive media like SSDs.

A storage solution with auto-tiering technology can save an organization hundreds of thousands of dollars. While all-flash vendors claim that SSDs have reached price parity with HDDs, a simple Google search will reveal that SSDs cost roughly 10X as much as hard disk drives. Considering that an organization needs dozens or even hundreds of drives to meet its capacity needs, the cost savings potential of a hybrid solution is significant.
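As a back-of-the-envelope illustration, the following Python sketch compares the media cost of an all-flash configuration to a hybrid one. The capacity, the per-TB prices, and the 10% active-data figure are illustrative assumptions based on the ratios discussed above, not actual pricing.

```python
# Back-of-the-envelope media-cost comparison: all-flash vs. hybrid with
# auto-tiering. Prices are illustrative assumptions reflecting the ~10X
# SSD/HDD price gap discussed above.

USABLE_TB = 500              # total usable capacity required (assumption)
ACTIVE_FRACTION = 0.10       # ~10% of data is active, per the discussion above
SSD_PRICE_PER_TB = 100.0     # hypothetical $/TB for enterprise flash
HDD_PRICE_PER_TB = 10.0      # hypothetical $/TB for HDD (~10X cheaper)

# All-flash: every terabyte lands on SSD.
all_flash_cost = USABLE_TB * SSD_PRICE_PER_TB

# Hybrid: only the active 10% stays on SSD; the rest tiers down to HDD.
hybrid_cost = (USABLE_TB * ACTIVE_FRACTION * SSD_PRICE_PER_TB
               + USABLE_TB * (1 - ACTIVE_FRACTION) * HDD_PRICE_PER_TB)

print(f"All-flash : ${all_flash_cost:,.0f}")
print(f"Hybrid    : ${hybrid_cost:,.0f}")
print(f"Savings   : ${all_flash_cost - hybrid_cost:,.0f}")
# With these assumptions: $50,000 all-flash vs. $9,500 hybrid in media cost,
# and the gap grows linearly with capacity.
```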

Want to know what is better for your business: All-Flash Arrays or Hybrid Storage with Auto-Tiering? Watch our on-demand webinar, “Showdown – All-Flash vs. Hybrid Storage – Which is Best for Your Business?”

Doesn’t Dedupe Solve the Problem?

Many All-Flash Arrays include deduplication to help close the cost delta between HDDs and SSDs. In production storage, however, most customers only realize a 2.5:1 or 3:1 reduction ratio. Even at that level of reduction, flash still costs more than three times as much per effective terabyte, nowhere near enough to overcome the 10X price advantage that HDDs enjoy. There is also a tax associated with deduplication: IT will need to purchase more drives than it should to overcome the performance overhead of the computationally heavy deduplication algorithm.

Deduplication also no longer delivers the efficiency returns it did a decade ago, even in backup storage, where the reduction ratio should be higher. The decreasing efficiency of deduplication in the backup use case is due to advancements in backup software features like block-level incremental backups, which eliminate much of the redundancy before the data ever reaches the backup storage target. Additionally, most backup software solutions now include deduplication as a built-in feature, so there is no need to buy it a second time in the storage product.

The Challenges with Auto-Tiering

There are legitimate challenges in implementing auto-tiering, and all-flash vendors use them to promote their case for an all-flash data center. First, they correctly point out that legacy hybrid storage solutions deliver inconsistent performance. The problem is that these systems are built on a false premise: they assume data will trickle into the storage solution in an organized fashion, so they fill up the SSD tier first and casually move the oldest data to HDDs when the flash tier is full.

The problem is that data center I/O doesn’t trickle; it comes in peak bursts that suddenly and repeatedly fill the flash tier to its maximum capacity. As a result, the system tries to keep receiving data into the high-performance storage tier while simultaneously moving old data to a much slower HDD tier, and it can’t handle the external data ingest and the internal data movement at the same time. Applications and users experience I/O waits, which is how hybrid storage earned its reputation for inconsistent performance. In reality, the fault lies not with the storage hardware but with the auto-tiering technique.
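The dynamic is easy to see with a toy model. The Python sketch below simulates the legacy approach, in which the flash tier only demotes data after it fills; the capacities, burst sizes, and drain rate are arbitrary assumptions chosen purely to illustrate the stall.

```python
# Toy model of a legacy hybrid array: the flash tier demotes to HDD only
# after it fills, so ingest bursts collide with the slow internal drain.
# All numbers are arbitrary assumptions for illustration.

FLASH_CAPACITY = 100     # flash tier capacity, in arbitrary data units
DRAIN_PER_TICK = 5       # units/tick the array can demote to the HDD tier

bursts = [60, 80, 70, 90]    # bursty ingest: units arriving per tick
flash_used = 0
for tick, burst in enumerate(bursts):
    free = FLASH_CAPACITY - flash_used
    absorbed = min(burst, free)       # this much lands in flash at full speed
    stalled = burst - absorbed        # the overflow waits on HDD-speed I/O
    flash_used += absorbed
    # Demotion starts only once the tier is full -- the false premise.
    if flash_used >= FLASH_CAPACITY:
        flash_used -= DRAIN_PER_TICK
    print(f"tick {tick}: absorbed={absorbed} stalled={stalled} "
          f"flash_used={flash_used}")
# After the first burst, nearly every incoming unit stalls behind the
# HDD-speed drain -- the inconsistent performance users complain about.
```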

The second argument vendors make for an all-flash data center is that a storage system using hard disk drives puts data at risk because of prolonged RAID rebuild times. RAID rebuild describes the process a storage system goes through to maintain access to data after a drive fails. During the rebuild, the system must continue serving users and applications, using parity calculations to recreate the data that was on the failed drive. At the same time, it reconstructs that data onto a replacement drive so it no longer has to calculate parity on every request. With legacy storage systems, this process can take days, even weeks, depending on the capacity of the hard disk drive. SSDs, on the other hand, can recover from a failure in a few hours.
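The days-to-weeks figure follows from simple arithmetic: a traditional rebuild must rewrite an entire drive through the single replacement drive’s write throughput, while most bandwidth stays reserved for production I/O. The throughput and bandwidth-share numbers in this sketch are illustrative assumptions.

```python
# Rough rebuild-time estimate for a traditional RAID set. The rebuild is
# bottlenecked by the single replacement drive's write throughput, and the
# array throttles rebuild I/O to keep serving production traffic.
# Throughput figures are illustrative assumptions.

def rebuild_hours(drive_tb: float, write_mb_s: float, rebuild_share: float) -> float:
    """Hours to rewrite drive_tb terabytes at write_mb_s megabytes/second,
    with only rebuild_share of the drive's bandwidth given to the rebuild."""
    seconds = (drive_tb * 1_000_000) / (write_mb_s * rebuild_share)
    return seconds / 3600

# A 20TB HDD at ~200 MB/s sequential writes, 25% of bandwidth for rebuild:
print(f"{rebuild_hours(20, 200, 0.25):.0f} hours")   # ~111 hours (~4.6 days)
# Even with 100% of the bandwidth it is ~28 hours, and random production I/O
# typically pushes effective throughput far lower -- hence days or weeks.
```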

Overcoming the Auto-Tiering Challenges

A top pain point for most prospective customers StorONE talks to is driving down the cost of storage infrastructure, and the overwhelming majority agree that 80% to 90% of their data is inactive. Using hard drives to store the less active data makes sense, but the auto-tiering challenges are a stumbling block. StorONE set out to eliminate those challenges by rewriting the auto-tiering algorithm so it delivers consistent performance while continuing to use HDDs that cost one-tenth as much.

The Most Intelligent Auto-Tiering Algorithm

First, the StorONE auto-tiering algorithm monitors the upper tier for moments of lower I/O activity and moves data to the lower tier during those times. The algorithm takes advantage of the bursty nature of I/O: if there are peaks, there must be valleys. During these valleys of lower I/O pressure, it preemptively moves data to the lower tier, so flash capacity is always available to handle the next burst. Applications and users don’t experience the wait states common in traditional hybrid systems.
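Conceptually, the approach looks something like the sketch below. This is not StorONE’s implementation; it is a minimal Python illustration, with assumed thresholds and an assumed workload trace, of demoting data during I/O valleys instead of waiting for the flash tier to fill.

```python
# Conceptual sketch of valley-driven tiering: demote cold data to HDD during
# lulls in I/O so the flash tier always has headroom for the next burst.
# Thresholds and the workload trace are illustrative assumptions, not
# StorONE's actual implementation.

FLASH_CAPACITY = 100     # flash tier capacity, in arbitrary data units
TARGET_FREE = 80         # keep at least this much flash free for bursts
VALLEY_THRESHOLD = 20    # ingest below this counts as an I/O valley
DEMOTE_PER_TICK = 80     # fast internal transfer, possible because it runs
                         # only when ingest bandwidth is otherwise idle

workload = [60, 10, 5, 70, 5, 80]    # bursts followed by valleys
flash_used = 0
for tick, ingest in enumerate(workload):
    absorbed = min(ingest, FLASH_CAPACITY - flash_used)
    stalled = ingest - absorbed
    flash_used += absorbed
    if ingest < VALLEY_THRESHOLD:    # quiet moment: demote preemptively
        excess = flash_used - (FLASH_CAPACITY - TARGET_FREE)
        flash_used -= min(DEMOTE_PER_TICK, max(0, excess))
    print(f"tick {tick}: ingest={ingest} stalled={stalled} "
          f"flash_used={flash_used}")
# Because demotion happens in the valleys, every burst lands entirely in
# flash (stalled=0 throughout), unlike the legacy model shown earlier.
```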

Second, the StorONE auto-tiering algorithm leverages the fact that it is part of the StorONE Engine and uses an optimized method to move data to the hard disk tier. This internal transfer again happens during moments of lower I/O pressure, and the algorithm transfers data at speeds of up to 30TB per hour.

Industry’s Best RAID Rebuild Times

StorONE’s vRAID overcomes the problem of slow recovery from drive failure. StorONE rewrote the erasure coding algorithm for high performance, both during normal operation and during rebuilds. If an HDD fails, vRAID can return the customer to a fully protected state in less than three hours, even when using high-density (20TB+) hard disk drives. Not only can customers safely use HDDs, but they can also use the highest-density drives, thereby lowering costs and reducing the data center footprint. Fewer drives also mean lower power and cooling requirements, and recent reports indicate that HDDs are better for the environment than SSDs.

StorONE Auto-Tiering: Lower Costs, Lowest TCO

StorONE’s auto-tiering lowers costs by intelligently marrying SSD and HDD technology, reducing storage costs by 10X or more without impacting performance or data access. The algorithm and vRAID work inside the StorONE Engine, a complete rewrite of the storage I/O stack that also extracts maximum performance from the flash tier. Most customers can meet their performance and active-data-set capacity demands with just 12 to 24 flash drives, further reducing costs. And the solution is sold via Scale-For-Free, a drive-based pricing model that rewards you for using fewer high-performance, high-density SSDs and HDDs.

You are being asked to do more with less. Partner with a company that enables you to do it without putting data at risk. Watch our on-demand webinar “Do More Storage with Less Hardware” to learn more.

