Lowering cost per IOPS is a more urgent priority for IT professionals than finding the system with the most IOPS. Most data centers don’t need anything close to the IOPS potential of today’s systems, yet vendors seem to be in an endless race to deliver more and more IOPS. The challenge for IT is how to meet the demands of the organization while dramatically lowering costs. Paying extra for IOPS you can’t take advantage of doesn’t help solve that problem. To enable IT to meet this challenge, storage system vendors need to focus on lowering cost per IOPS while still improving the total potential IOPS per system.
Understanding IOPS
As I explained years ago in an article, “What Are IOPS?”, input/output operations per second (IOPS) is how storage vendors communicate the potential performance of their solutions to customers. IOPS isn’t a perfect way to rate a storage system; other metrics, like latency and bandwidth, are equally important. IOPS, however, is the metric that vendors quote most. In 2008, delivering high IOPS was difficult. In 2020, thanks to the capabilities of today’s server hardware, networking, and storage media, delivering high IOPS is relatively easy. Delivering IOPS at a low cost is the primary challenge.
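To make the metric concrete, here is a minimal sketch of the cost-per-IOPS calculation. The system prices and IOPS figures are made-up illustrations, not quotes for any real product; the point is only that the system with the most IOPS is not automatically the one with the lowest cost per IOPS.

```python
# Cost per IOPS is simply total system cost divided by the IOPS the system
# actually sustains for your workload. All figures below are illustrative.
def cost_per_iops(system_cost_usd: float, sustained_iops: float) -> float:
    return system_cost_usd / sustained_iops

system_a = cost_per_iops(250_000, 1_000_000)  # "fastest" box
system_b = cost_per_iops(80_000, 500_000)     # smaller, cheaper box

print(f"System A: ${system_a:.2f} per IOPS")  # $0.25 per IOPS
print(f"System B: ${system_b:.2f} per IOPS")  # $0.16 per IOPS
# If the workload never needs more than ~400K IOPS, System B is the better buy
# even though System A wins the raw-IOPS race.
```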
Hardware is Lowering Cost Per IOPS
Today’s hardware is lowering the cost per IOPS. Many SAS and NVMe SSDs can individually provide hundreds of thousands of IOPS. Serial Attached SCSI (SAS) and NVMe IO buses can sustain the performance these drives deliver. Even prior-generation CPUs can drive the IO through those channels. Finally, external networks like Fibre Channel and Ethernet can get data to IO-hungry application servers. The cost of all of this hardware is declining rapidly, which in turn is lowering cost per IOPS. Most organizations should be able to meet their performance demands with 64GB of RAM and last year’s CPU generation. They also shouldn’t have to move to next-generation networks like NVMe-oF until they are ready.
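Some back-of-the-envelope arithmetic shows why the media itself is no longer the expensive part. The per-drive IOPS and price below are assumptions for illustration, not figures from any vendor’s datasheet.

```python
# Rough arithmetic: even a handful of SSDs provides far more raw IOPS than
# most data centers ever consume, and the media-only cost per IOPS is tiny.
drives = 6
iops_per_drive = 200_000    # assumed random-read IOPS for a mainstream SSD
cost_per_drive_usd = 500    # assumed street price per drive

raw_iops = drives * iops_per_drive
media_cost_per_iops = (drives * cost_per_drive_usd) / raw_iops

print(f"Raw media IOPS: {raw_iops:,}")                       # 1,200,000
print(f"Media-only cost per IOPS: ${media_cost_per_iops:.4f}")  # $0.0025
# The open question is how much of that raw performance the storage software
# actually lets you keep.
```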
Software is NOT Lowering Cost Per IOPS
The problem facing organizations is that all of that hardware requires software to drive it. Most storage software, however, is extremely inefficient in how it utilizes hardware resources. This inefficiency is a problem for both turnkey storage system providers and software-only providers. Inefficient storage software makes lowering cost per IOPS almost impossible. The inefficiency stems from storage software being built from existing code libraries and open-source repositories. In most cases, vendors are putting a new GUI on old code. The software ends up being a stack of modules that each IO operation must traverse. These layers add overhead and latency, which lead to poor resource utilization.
In the days of hard disk drives (HDD), these layers of software code had almost no impact on performance. The rotational latency of the HDDs was so high that the software inefficiencies could hide, unnoticed, behind the slow drive. Now, in the era of flash and even faster memory like Intel Optane, storage software has no place to hide. The high-performance hardware exposes each layer of storage software code as an inefficient bottleneck.
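A quick latency sketch shows why the same software overhead went unnoticed on HDDs but dominates on flash. The per-IO software overhead and media latencies below are round, illustrative assumptions, not measurements of any particular stack.

```python
# Minimal sketch: the same software overhead is a rounding error behind an
# HDD but the majority of the per-IO latency on flash.
software_overhead_us = 200      # assumed per-IO cost of stacked software layers

media_latency_us = {
    "HDD":   5_000,   # ~5 ms seek + rotational latency (illustrative)
    "Flash":   100,   # ~100 us flash read (illustrative)
}

for name, media_us in media_latency_us.items():
    total_us = media_us + software_overhead_us
    overhead_share = software_overhead_us / total_us
    print(f"{name}: {total_us} us per IO, software is {overhead_share:.0%} of it")
# HDD:   5200 us per IO, software is 4% of it
# Flash:  300 us per IO, software is 67% of it
```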
To get around this point of exposure, storage vendors, as usual, throw hardware at the problem. They configure systems for their software that are far more powerful, and more expensive, than they should need to be. This trend holds true even for mid-range performance systems. Vendors outfit these systems with the latest generation of CPUs, hundreds of GBs of memory, and dozens of SSDs. Using hardware to compensate for inefficient software dramatically increases the cost per IOPS as well as overall system cost. The problem is so widespread that IT is led to believe there is no alternative.
Creating Efficient Storage Software
Eliminating the overhead within storage software is key to lowering the cost per IOPS. To be efficient, storage software needs to collapse all of the storage services stacks into a single plane. The result is storage software that can process and manage IO with almost no overhead. A single services layer means the software extracts 80%-90% of the raw performance of a drive. With efficient storage software, the typical data center can meet its performance needs with five SAS-based flash drives instead of 24 NVMe-based drives. At that point, the customer only needs to buy more flash drives if capacity requirements dictate it. If the storage software provides tiering, they could also add QLC flash or hard disk drives for colder data.
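The drive-count math follows directly from that efficiency percentage. The sketch below uses an assumed performance target, assumed per-drive raw IOPS for SAS and NVMe flash, and a deliberately pessimistic efficiency figure for layered software; the numbers are illustrative, chosen only to show the same shape as the five-versus-24 comparison above.

```python
import math

# How many drives must be purchased to hit a performance target, given how
# much of each drive's raw IOPS the storage software actually delivers?
target_iops = 600_000
sas_ssd_raw_iops = 150_000    # assumed raw IOPS of a SAS flash drive
nvme_ssd_raw_iops = 500_000   # assumed raw IOPS of an NVMe flash drive

def drives_needed(target: int, per_drive_raw: int, efficiency: float) -> int:
    """Drives required when software delivers `efficiency` of raw drive IOPS."""
    return math.ceil(target / (per_drive_raw * efficiency))

print(drives_needed(target_iops, sas_ssd_raw_iops, 0.85))   # efficient software: 5 SAS drives
print(drives_needed(target_iops, nvme_ssd_raw_iops, 0.05))  # inefficient software: 24 NVMe drives
# With efficient software, a handful of SAS SSDs covers the target; with
# inefficient software, even much faster NVMe drives must be bought in bulk.
```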
StorONE is Lowering Cost Per IOPS
StorONE’s S1 Enterprise Storage Platform is lowering cost per IOPS by delivering an efficient storage software solution. We rewrote all of the storage algorithms and integrated them into a single layer that uses hardware resources very efficiently.
The result is that we provide the lowest cost per IOPS on the market today. We can extract 80%-90% of the IOPS of each drive installed within our platform. That means five or six flash drives can often deliver more than 600K IOPS.
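As a quick sanity check on that arithmetic (using an assumed per-drive raw IOPS figure, since none is given above):

```python
# Illustrative check: six drives at an assumed 125K raw IOPS each, with 85%
# extraction (within the stated 80%-90% range), land above 600K IOPS.
drives = 6
raw_iops_per_drive = 125_000   # assumed raw IOPS per flash drive
efficiency = 0.85

delivered = drives * raw_iops_per_drive * efficiency
print(f"{delivered:,.0f} IOPS")   # 637,500 -- consistent with "more than 600K"
```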
In most cases, our customers can meet their performance and capacity demands with half as many drives as our competitors require. We also need less powerful CPUs and less total memory. Fewer drives, lower memory requirements, and modest CPUs lead to a lower cost per IOPS. Efficient storage software also leads to a lower upfront cost and, more importantly, long-term price reductions. With StorONE, the storage cost savings are permanent!
Learn More!
Lowering cost per IOPS is only one aspect of lowering the overall cost of storage infrastructure. In our on-demand webinar, “Be a Storage Hero – Crush Storage Costs,” StorONE’s Technical Account Manager Scott Armbrust and I provide practical ways that IT professionals can lower costs and be the Storage Hero to their organizations.