Raise your hand if you’re old enough to remember loading computer games or database files from a cassette. No? How about a 5.25-inch floppy diskette? Loading a single megabyte – nearly the entire capacity of the disk – could take minutes. If you were suddenly bound by the same storage performance today, you’d spend half of your work hours screaming. From your humble laptop to the largest data centers, storage speed is critical for everyday tasks.
Rest assured, changes are afoot that will bring faster, more responsive storage, driven by recent trends and announcements around 3-D NAND and non-volatile memory (NVM).
“Non-volatile memory (NVM) is basically semiconductor memory in which you store things, but when you turn the power off, it stays—like in a mechanical hard disk,” explains Rob Crooke, VP and GM of Intel’s NVM group. “With NVM, when we write to a memory location, it traps some of the electrons in something called a floating gate, and they stay there even when the power goes off—they are trapped in the gate of the memory component itself. And they stay for a long time, for years.”
Moore’s law states that the density of “components,” or transistors, one can fabricate in a given integrated-circuit space will double every two years. This loose rule has been the foundation of computing’s relentless march to ever greater heights over the last four decades. Storage, however, has not kept pace. Spinning hard drives, which hold the vast majority of digital storage capacity today, offered a top spin rate of 15,000 RPM 10 years ago. That ceiling still exists today.
When it comes to capacity, hard disk drives (HDDs) have lost their early ability to outpace Moore’s law. While platter areal densities were increasing at roughly 40 percent per year in the early 2000s, that rate has dropped to about 15 to 19 percent today. All told, this yields a doubling of capacity only every five years. SanDisk notes that, over the last decade, benchmarked “HDD performance has improved by only 13 percent, while SSD performance improved by 400 to 500 percent!”
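The five-year doubling figure follows directly from compound growth. A quick check of how the annual areal-density gains cited above translate into capacity-doubling time:

```python
import math

# Annual areal-density growth rates cited above
early_2000s_rate = 0.40   # ~40% per year
today_rate = 0.15         # ~15% per year (low end of today's range)

def doubling_time(annual_growth):
    """Years for capacity to double at a compound annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth)

print(f"At 40%/yr: doubles every {doubling_time(early_2000s_rate):.1f} years")
print(f"At 15%/yr: doubles every {doubling_time(today_rate):.1f} years")
```

At 40 percent annual growth, capacity doubled in about two years, tracking Moore's law; at 15 percent, the doubling time stretches to roughly five years.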
In throughput terms, mainstream HDDs have scaled from about 100 MB/s to more than 180 MB/s over the same period. SSDs (solid-state drives), meanwhile, now surpass 500 MB/s, practically saturating 6 Gbps SATA, the fastest storage connection on today’s client systems. But not even 500 MB/s is enough. The market consistently demands more to fuel the rising tide of new applications.
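Why roughly 550 to 600 MB/s is the practical ceiling for SATA SSDs: SATA III signals at 6 Gbps, but it uses 8b/10b encoding, so only eight of every ten bits on the wire carry data. The arithmetic:

```python
# SATA III link budget: 6 Gbps line rate, 8b/10b encoding
line_rate_mbps = 6 * 1000          # 6,000 megabits per second on the wire
encoding_efficiency = 8 / 10       # 8 data bits per 10 transmitted bits

usable_mb_per_s = line_rate_mbps * encoding_efficiency / 8  # bits -> bytes
print(usable_mb_per_s)  # 600.0 MB/s theoretical maximum
```

Protocol overhead eats a little more in practice, which is why drives plateau in the mid-500s and why the market has turned to PCIe for further gains.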
3-D NAND to enable 10TB SSDs
Only solid-state flash media has the speed growth to meet many future demands. It falls to Moore’s law and a diversity of advances to make sure that both the speed and capacity of flash memory meet market needs. One of the most exciting of these is Intel’s transition into 3-D NAND production (officially announced at the company’s annual investor meeting last November), which follows closely on the heels of Samsung’s similar move into a 32-layer product.
“The world is moving from what they call 2-D, or ‘planar’ NAND [one of the two architectures used in flash memory] to 3-D NAND. With 2-D NAND we put a flat checkerboard of transistors out there, and we can put 64 or 128 billion of those in something the size of a postage stamp,” Crooke says. “By making the squares smaller and smaller on the checkerboard—that’s Moore’s Law—we squished more squares onto a board in any given year. Eventually the squares get really close together and it’s difficult to make them smaller without them interfering with each other.
“So for 3-D NAND, we make the checkerboard squares bigger—relax them a little bit—and we go vertical. We make a cube of transistors instead of a flat layer of transistors. Since we ‘back off’ the squares, they are four times as big on each side and it makes it easier to do Moore’s Law because they are not as squished together. But that means you have to go 16 times higher to get the same number of bits on the die. We are going 32 levels high, so we’ll double the density—we’ll have 256 billion bits in a given die.”
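Crooke’s checkerboard arithmetic checks out: relaxing each cell to four times the size per side leaves 1/16 as many cells per layer, so stacking 32 layers yields twice the planar density. In Python:

```python
# Crooke's checkerboard arithmetic: relax each cell 4x per side,
# stack 32 layers, and density still doubles.
side_relaxation = 4
cells_per_layer_factor = 1 / (side_relaxation ** 2)  # 1/16 the cells per layer
layers = 32
density_factor = cells_per_layer_factor * layers     # 2x overall

planar_die_bits = 128e9   # ~128 billion bits in today's planar die
print(density_factor)                    # 2.0
print(planar_die_bits * density_factor)  # 256 billion bits, matching Crooke
```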
Whereas top-end flash drives now use 2-D NAND featuring 128Gb dies, Intel’s first-generation 3-D dies, due in mid-2015, will be 256Gb and enable 1TB of storage in a 2 mm high chip. Intel noted on the investor call that its 3-D NAND will enable 10TB SSDs within a couple of years.
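As a rough sanity check on those die and drive figures (assuming raw, unformatted capacity and ignoring over-provisioning):

```python
import math

die_gigabits = 256                 # first-generation 3-D NAND die
die_gigabytes = die_gigabits // 8  # 32 GB of raw flash per die

# Stacking 32 such dies yields roughly a terabyte of raw flash...
raw_gb = 32 * die_gigabytes        # 1024 GB, ~1 TB
print(raw_gb)

# ...and a 10TB SSD would need on the order of ten times as many dies.
dies_for_10tb = math.ceil(10_000 / die_gigabytes)  # ~313 dies
print(dies_for_10tb)
```

Spreading a few hundred dies across a handful of packages is well within reach for an enterprise SSD, which is what makes the 10TB projection plausible.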
“Now we can start working on making the checkerboard squares closer together again, and we can go even further,” Crooke continues. “We can make the cube of transistors taller over time—from 32 to 48 to 64 transistors high—so it allows us to extend Moore’s Law further into the future.”
While 3-D NAND ramps up, the push toward non-volatile system memory is also gaining momentum. The chief advantage of HDDs and SSDs is that they retain data in the absence of input power and do so at a fairly low price. System memory, or RAM, is many times faster than conventional storage, but it is volatile, meaning all data vanishes when power is lost, and it is many times more expensive per gigabyte than storage.
The Storage Networking Industry Association’s NVDIMM Special Interest Group looks to bridge these two mediums. Relatively little has been said publicly about the growth path for NVDIMMs, but there are clues, such as this SNIA paper, which lays out many of the design objectives. In short, NVDIMMs would be “a complementary memory tier to flash/SSD,” providing “the speed/latency/endurance of DRAM with the persistence of flash.” Given that many data center applications are being tweaked to run “in-memory” for higher performance (albeit with much larger and more costly RAM capacities), it’s reasonable to look for architectures that deliver similar performance at more agreeable cost. NVDIMM aims to satisfy this need.
Faster paths for data
While flash memory evolves, there remains the challenge of moving that memory closer to the CPU so it can be accessed more efficiently. With conventional SATA and SAS storage, a drive is separated from the CPU by the chipset and a long journey down the data bus out to the media, a winding path that introduces significant latency and overhead. In contrast, on modern system architectures the PCI Express 3.0 bus connects directly to the CPU, so SSDs built onto PCIe cards, such as Intel’s SSD DC P3700 series, can leverage dramatically higher bandwidth. Moreover, SSDs built for PCIe can use the NVM Express (NVMe) storage specification, which exploits the SSD’s inherent parallelism far more effectively than the older Advanced Host Controller Interface (AHCI) spec used with SATA and SAS devices, pushing performance even higher.
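The parallelism gap between AHCI and NVMe is largely a matter of command queuing. AHCI exposes a single command queue 32 entries deep, while the NVMe specification allows up to 65,535 I/O queues of up to 65,536 commands each (these are spec maximums; shipping drives expose far fewer). A back-of-the-envelope comparison:

```python
# Command-queue limits behind NVMe's parallelism advantage
ahci_queues, ahci_depth = 1, 32
nvme_queues, nvme_depth = 65_535, 65_536  # spec maximums

print(ahci_queues * ahci_depth)  # 32 outstanding commands at most
print(nvme_queues * nvme_depth)  # ~4.3 billion outstanding commands possible
```

With one queue per CPU core and thousands of commands in flight, NVMe lets a multi-core host keep a highly parallel flash device busy without the lock contention a single shared queue imposes.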
Of course, these advances are rarely plug-and-play simple to implement. Applications and operating systems require substantial modification to take advantage of new devices and architectures. Fortunately, Intel’s Software and Services Group (SSG) handles many of these necessary adaptations.
“Work at the OS level must be done to optimize these advances and open up their new capabilities to applications,” says Leena Puthiyedath, an Intel principal engineer working with the Windows operating system and IA architecture. “The general trend of support for fast storage is to allow I/O to be synchronously completed instead of asynchronous today [meaning the CPU simply waits for fast storage to finish an operation, rather than switching away and being interrupted when the device is finally done]. When the disk is [built] from non-volatile memory, the OSes will also support modes that avoid copying from disk page to memory page in order to access data, as happens with all I/O today. Overall, copies and system activity are reduced. This should result in a more responsive system for the end user.”
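The copy-avoiding mode Puthiyedath describes is loosely analogous to memory-mapped I/O as it exists today: a file’s pages are mapped into the process’s address space so the application reads them in place, rather than copying them through a separate read() buffer. A minimal Python sketch of the idea (the temp-file setup is just scaffolding):

```python
import mmap
import os
import tempfile

# Create a small file to map (a stand-in for data on fast storage).
fd, path = tempfile.mkstemp()
os.write(fd, b"persistent data")
os.close(fd)

with open(path, "r+b") as f:
    # Map the file's pages directly into this process's address space;
    # the slice below reads the page-cache pages in place, with no
    # explicit read() copy into a user-supplied buffer.
    with mmap.mmap(f.fileno(), 0) as mm:
        print(mm[:10])  # b'persistent'

os.remove(path)
```

With byte-addressable non-volatile memory, the OS can go a step further and map the storage media itself, eliminating even the page-cache copy.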
The benefits of faster storage impact virtually every facet of the market. In the enterprise and data centers, faster reads and writes translate into more robust online transaction processing and, perhaps more important, a greater ability to handle large big-data analytics tasks. This will only become more prominent as the Internet of Things goes mainstream and we arrive in a world where 50 billion devices are all delivering data for real-time processing.
For end users, expect everything from instant-on system booting to true real-time interactivity with applications. This will be especially key for perceptual computing, a new class of software and computer usages that Intel has been pushing over the last few years, which will demand very fast response times for the experience to be perceived as “real.” Local storage performance will be crucial.
Even in cases where such tasks are possible today with substantial hardware resources, the progress of fast storage advances, such as those described here, will allow the same class of performance with fewer, more affordable resources. Fast storage is about turning the SSD of today into yesteryear’s floppy disk. It may seem improbable, but it’s happening, and it’s essential.
This content was originally published on the Intel Free Press website.
Top image: Lenore Edman/Flickr