How Much Will Flash Change Enterprise Storage?

Flash memory has already had a huge impact on consumer devices—everything from smartphones and other consumer electronics to the solid-state drives (SSDs) in sleek laptops—as well as on business applications. But when I attended the annual Storage Visions conference a couple of weeks ago, I was struck again by just how much flash is being used in enterprise systems, and what the potential is for future uses.

A few years ago, there was a lot of resistance to flash memory in the enterprise because of concerns about its reliability and endurance, especially with consumer-grade (multi-level-cell, or MLC) flash. That is why the initial wave of enterprise flash products used single-level-cell (SLC) flash, which is expensive and available only in limited quantities. It turns out that with the right controllers and software, even MLC flash provides sufficient endurance for most enterprise applications. Unlike hard drives, which tend to fail randomly, flash tends to degrade over time in a predictable pattern. The shift from expensive SLC to MLC has allowed enterprises to deploy a lot more flash.

Indeed, at this point, the vast majority of enterprises use flash at one or more points in their data center operations. Every major storage array vendor sells systems in which a few percent of the capacity consists of flash-based SSDs, used in caches and as the fastest tier of storage. Most now offer all-flash arrays as well, following in the footsteps of pioneers such as Pure Storage.

Pure Storage offers all-flash arrays that it says can cost less than traditional spinning disks because they offer data reduction—on-the-fly compression of data—as well as improved speed. In particular, the company points to successes with small and mid-size databases, virtual machines, and virtual desktops (VDI). I know a few companies that have been quite successful with such deployments, in applications such as VDI and high-frequency trading.

In addition, we’re now seeing more flash in servers as well, initially in PCIe solutions. This is where companies such as Fusion-io and Violin Memory first made their mark, and we’re seeing it in many more places.

More recently, I’ve seen solutions that connect flash memory directly to the DIMM channel traditionally used by DRAM. Pioneers here include Diablo Technologies with its Memory Channel Storage (MCS) architecture, and products such as Viking Technology’s ArxCis and SanDisk’s ULLtraDIMM. This approach is now used in some systems from IBM, and I expect to see more of it in the future.

What struck me as having changed the most is the idea that more and more applications could now move to “all-flash” environments. Indeed, one of the keynotes at the show, “Enabling the All Flash Data Center,” was given by John Scaramuzzo, general manager of SanDisk Enterprise Storage Solutions. (He was formerly president of SMART Storage, which SanDisk acquired and whose technology formed the basis of the ULLtraDIMM.)

Tiers of Storage

In this presentation, Scaramuzzo talked about how applications such as virtualization, cloud computing, and in-memory computing are enabling all-flash scenarios. Effectively, he argued that tier 0 has already become largely flash, and that flash is becoming a bigger part of tier 1 applications as well. This makes a lot of sense to me—I keep hearing companies explain that in virtual environments, the need for lots of input/output operations per second (IOPS) makes flash very compelling. This seems particularly true in virtual desktop applications, where you can host many more VDI sessions per server and still avoid problems when lots of users log on at the same time.
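
To make the IOPS argument concrete, here is a back-of-the-envelope sizing sketch. Every figure in it—the per-desktop IOPS planning number, the boot-storm multiplier, and the array IOPS budgets—is an illustrative assumption of mine, not a vendor specification:

```python
# Hypothetical VDI sizing sketch: session count per datastore is often
# gated by IOPS rather than capacity. All figures below are assumptions.
STEADY_IOPS_PER_DESKTOP = 10  # assumed planning figure; varies by workload
BOOT_STORM_MULTIPLIER = 5     # assumed I/O spike when many users log on at once

def max_sessions(available_iops, boot_storm=True):
    """Desktops a datastore can host within its IOPS budget."""
    per_desktop = STEADY_IOPS_PER_DESKTOP
    if boot_storm:
        per_desktop *= BOOT_STORM_MULTIPLIER
    return available_iops // per_desktop

# Assumed budgets: a spinning-disk shelf vs. a flash datastore.
hdd_sessions = max_sessions(5_000)    # 100 desktops survive the boot storm
ssd_sessions = max_sessions(100_000)  # 2,000 desktops on the same basis
```

The point of the sketch is that sizing for the worst case (everyone logging on at once) multiplies the IOPS requirement, which is exactly where flash pulls ahead of spindles.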

What stood out more was his belief that flash now makes sense in tier 2 applications as well, driven by improvements in the density, power, and cooling requirements of SSDs, particularly when viewed through a total cost of ownership (TCO) lens. Even though the raw cost per bit of flash is higher, he suggested that reduced support costs, lower power and cooling bills, less rack and floor space, and the need for fewer arrays make a stronger case for flash in the data center. As the cost of flash continues to decrease and capacities increase, an “All Flash Data Center” becomes more achievable, Scaramuzzo said.
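
The shape of that TCO argument can be sketched with some simple arithmetic. All of the input numbers here (prices, wattages, data reduction ratio, electricity cost) are hypothetical placeholders, not market data—the point is only that the per-usable-terabyte gap ends up far narrower than the raw price-per-terabyte gap:

```python
# Back-of-the-envelope TCO per *usable* terabyte over five years.
# Every input is an illustrative assumption, not real market data.
def tco_per_usable_tb(raw_price_per_tb, data_reduction, watts_per_raw_tb,
                      years=5, power_cost_per_kwh=0.10, cooling_factor=1.5):
    """Hardware cost plus powered-and-cooled energy, per usable TB."""
    hardware = raw_price_per_tb / data_reduction
    energy_kwh = watts_per_raw_tb * 24 * 365 * years / 1000.0
    energy = energy_kwh * power_cost_per_kwh * cooling_factor / data_reduction
    return hardware + energy

flash = tco_per_usable_tb(500.0, 3.0, 2.0)  # pricey per raw TB, 3:1 reduction
disk = tco_per_usable_tb(80.0, 1.0, 8.0)    # cheap per raw TB, no reduction

# Raw price gap: 6.25x. TCO-per-usable-TB gap: much narrower,
# because data reduction and power/cooling work in flash's favor.
```

This toy model also leaves out the support-cost and floor-space savings Scaramuzzo cited, which would narrow the gap further.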

He noted that there are now 2.5-inch form factor SSDs with 2TB of capacity that can deliver more than 100,000 IOPS, and said the move toward 3D NAND flash manufacturing suggests this could scale to 64TB or even higher over the next few years, all without losing performance. That, he said, has let SSDs catch up with hard drives in density while offering more speed and requiring less power and cooling. A year ago, he said, the math didn’t work on a TCO basis, but now it does. He even saw tier 3 applications, such as archiving, being able to move to flash storage, saying a crossover was possible in the next three to five years. This is a concept I’ve heard most about from the biggest hyper-scale cloud vendors.

It’s an interesting vision, and one I really haven’t heard articulated much before as an enterprise solution—in part because, on a per-bit basis, flash storage is still much more expensive than hard drives, and because the flash industry’s total manufacturing capacity is so much smaller than the hard drive industry’s.

Indeed, in conversations with hard drive vendors such as Seagate, I keep hearing about how hard drives are improving in density as well (if not in speed), how much larger the hard drive industry’s total capacity is, and about the merits of hybrid drives (which Seagate calls SSHDs) that combine a small amount of flash with a hard drive.

In addition, there are now hybrid solutions on the storage array side that offer features such as deduplication and compression in arrays consisting of flash and hard drives, as pioneered by companies such as Nimble Storage, Tegile, and Tintri.

These hybrid solutions typically come in at much lower initial prices than all-flash solutions. It looks like there are application types where all-flash makes sense (usually those where on-the-fly compression works and where there is a need for a lot of IOPS, including many mid-size databases) and others where it doesn’t (such as very large databases, or those with lots of images or videos, which are already compressed).

Joe Unsworth, Gartner Research VP for NAND Flash and SSDs, points out that solid-state arrays are growing very quickly, but are still a relatively small part of the market, and likely to remain that way for the foreseeable future. Indeed, he sees the market for these flash-based arrays growing from $782 million in 2013 to $3.6 billion in 2017. But he points out that even then, this would only be 10 percent of the total storage array market.
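
As a quick sanity check on those figures, growing from $782 million in 2013 to $3.6 billion in 2017 implies a compound annual growth rate of roughly 46 percent—fast indeed, yet still consistent with remaining a small slice of the overall market:

```python
# Implied compound annual growth rate (CAGR) of the cited market figures:
# $782M in 2013 growing to $3.6B in 2017.
start, end, years = 782e6, 3.6e9, 2017 - 2013
cagr = (end / start) ** (1 / years) - 1  # roughly 0.46, i.e. ~46% per year
```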

Just given the economics, it seems to me that hard drives will be a large part of storage—almost certainly the majority of bits—for a long time to come. But I certainly can see where flash will make some applications not only faster, but actually more affordable.
