StorONE Blog

What Are Your Storage Priorities?

The Storage Utilization Challenge

Today’s information era places a premium on storage performance and capacity while further squeezing budgets. Data is growing at an exponential rate, and businesses are turning to artificial intelligence, machine learning and analytics workloads to harness that data for new advantages. These same business requirements demand faster performance from traditional workloads such as Oracle and Microsoft SQL Server as well. This demanding workload ecosystem requires unprecedented utilization of storage capacity, storage memory and storage I/O performance.
The world’s largest public cloud service providers are commonly perceived as masters of storage efficiency because of the massive scale-out architectures they have built. As a result, many enterprises are working to “shrink” a scale-out architecture and apply it to their own storage infrastructure in order to accelerate workload performance and increase capacity more efficiently.
The reality, however, is that a scale-out architecture is not a model of efficiency. Cloud service providers’ storage arrays suffer from the same inefficiencies that impact arrays deployed by a typical enterprise; the “hyperscalers” simply mask those inefficiencies with the massive scale at which they operate. Typical data centers do not have that luxury and must therefore focus more sharply on maximizing the utilization of their available resources.
One area of top concern for IT shops today is storage performance. With the advent of solid-state drives (SSDs) and the NVMe (non-volatile memory express) access protocol, unprecedented levels of raw drive performance are possible. However, the inefficiencies of legacy storage software cause these drives to lose 80% or more of that performance once they are integrated into a typical storage array. This is increasingly unacceptable as enterprises strive to serve performance-hungry workloads on limited budgets.
Previously, storage software efficiency had little impact on system performance because hard disk drive (HDD) latency dominated; the performance of SSDs radically changes this equation. Storage vendors typically try to compensate for software inefficiencies with faster (and more expensive) processors, but the storage system cannot take advantage of the additional cores because its software cannot use them in parallel.
Storage software inefficiency forces customers to purchase far more storage capacity than their data requires simply to reach the IOPS their workloads demand. For example, if an array exposes only 20% of its drives’ raw IOPS, a customer must purchase five 5 terabyte (TB) drives, 25 TB of capacity, just for the array to deliver the raw performance of a single 5 TB drive.
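As a back-of-the-envelope sketch of that overbuy, the short Python snippet below works through the same arithmetic. The drive capacity, raw IOPS figure and 20% efficiency are illustrative assumptions, not measurements of any specific array.

```python
# Back-of-the-envelope sketch of the capacity overbuy caused by low IOPS
# utilization. Drive capacity, raw IOPS and efficiency are illustrative
# assumptions, not measurements of any specific array.

DRIVE_CAPACITY_TB = 5        # capacity of a single drive
DRIVE_RAW_IOPS = 500_000     # raw IOPS one NVMe SSD can deliver (assumed)
ARRAY_EFFICIENCY = 0.20      # fraction of raw IOPS the storage software exposes

# IOPS the array actually delivers per drive after software overhead
delivered_iops_per_drive = DRIVE_RAW_IOPS * ARRAY_EFFICIENCY

# Drives (and capacity) needed for the array to match one drive's raw IOPS
drives_needed = DRIVE_RAW_IOPS / delivered_iops_per_drive
capacity_purchased_tb = drives_needed * DRIVE_CAPACITY_TB

print(f"Drives needed: {drives_needed:.0f}")                   # 5
print(f"Capacity purchased: {capacity_purchased_tb:.0f} TB")   # 25 TB to match one 5 TB drive
```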
This problem is exacerbated as drive densities increase and as other associated infrastructure and overhead, such as servers, are factored in. Compounding the storage performance utilization challenge is the fact that most arrays are deployed for a single use case, for instance to support high-end databases, which results in a sprawl of single-purpose systems, each with low utilization. In addition to adding cost and complexity, storage inefficiency leaves typical enterprises less able to tap recent innovations.

HOW CAN STORAGE MANAGERS MEET INCREASING PERFORMANCE AND CAPACITY REQUIREMENTS WHILE ALSO CUTTING THEIR COST STRUCTURE?

A single, unified storage operating environment is needed to reach the levels of utilization that new workloads require. StorONE’s Unified Enterprise Storage (UES) platform, S1, provides immediate return on investment by enabling customers to extract maximum performance from the underlying systems.
S1 consolidates all applications and workloads onto a single storage system while normalizing the underlying hardware and supporting all protocols and drive types in a “mix and match” approach. This consolidation, combined with the ability to tap the most recent drive innovations, accelerates performance at a lower capital investment. For example, customers can keep more active data on SSDs or NVMe drives while moving less active data to hard disk drives rather than to slower-performing tape storage, as sketched below. This centralization and flexibility also streamline scalability and administration.
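To make the activity-based placement idea concrete, the following Python sketch routes volumes to a media tier based on how frequently they are accessed. The tier names, activity metric and thresholds are hypothetical illustrations of the general concept; they do not represent S1’s actual placement logic.

```python
# Minimal illustration of activity-based tier placement. The metric and
# thresholds are hypothetical; this is not S1's actual placement logic.

from dataclasses import dataclass

@dataclass
class Volume:
    name: str
    reads_per_day: int   # hypothetical activity metric

def choose_tier(volume: Volume) -> str:
    """Route active data to fast media and colder data to cheaper media."""
    if volume.reads_per_day > 10_000:
        return "NVMe SSD"    # hottest data on the fastest drives
    if volume.reads_per_day > 100:
        return "SATA SSD"    # warm data on commodity flash
    return "HDD"             # cold data on high-capacity hard disk drives

volumes = [
    Volume("oltp-db", 250_000),
    Volume("reporting", 4_000),
    Volume("archive", 12),
]

for v in volumes:
    print(f"{v.name}: {choose_tier(v)}")
```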
