Optane 101: Memory or storage? Yes.


This article is part of the Technology Insight series, made possible with funding from Intel.


By now, you’ve seen the word “Optane” bandied about on VentureBeat and probably countless other places — and for good reason. Intel, the maker of all Optane products, is heavily promoting the results of its decade-long R&D investment in this new memory/storage hybrid. But what exactly is Optane, and what is it good for? (Hint: massively memory-hungry applications like analytics and AI.) If you’re not feeling up to speed, don’t worry. We’ll cover all the basics in the next few minutes.

The bottom line

  • Optane is a new Intel technology that blurs the traditional lines between DRAM memory and NAND flash storage.
  • Optane DC solid state drives provide super-fast data caching and agile system expansion.
  • Capacity reaches 512GB per persistent memory module; modules are configurable for persistent or volatile operation; ideal for applications that emphasize high capacity and low latency over raw throughput.
  • A strong contender for data centers; the client story is still developing. Costs and advantages are case-specific and heavily influenced by DRAM prices. Early user experience is still emerging.

Now, let’s dive into some more detail.

Media vs. memory vs. storage

First, understand that Intel Optane is neither DRAM nor NAND flash memory. It’s a new set of technologies based on what Intel calls 3D XPoint media, which was co-developed with Micron. (We’re going to stumble around here with words like media, memory, and storage, but will prefer “media.”) 3D XPoint works like NAND in that it’s non-volatile, meaning data doesn’t disappear if the system or components lose power.

However, 3D XPoint has significantly lower latency than NAND. That lets it perform much more like DRAM in some situations, especially with high volumes of small files, such as online transaction processing (OLTP). In addition, 3D XPoint features orders-of-magnitude higher endurance than NAND, which makes it very attractive in data center applications involving massive amounts of data writing.

When combined with Intel firmware and drivers, 3D XPoint gets branded as simply “Optane.”

So, is Optane memory or storage? The answer depends on where you put it in a system and how it gets configured.

Above: Intel often depicts the memory/storage continuum as a pyramid, with a small amount of fast, costly (per gigabyte) DRAM on top and a lot of slower, less costly storage on the bottom. Optane implementations slot between these two.

Optane memory

Consider Intel Optane Memory, the first product delivered to market with 3D XPoint media. Available in 16GB or 32GB models, Optane Memory products are essentially tiny PCIe NVMe SSDs built on the M.2 form factor. They serve as a fast cache for storage: frequently loaded files get stashed on Optane Memory, alleviating the need to fetch those files from NAND SSDs or hard drives, which entails much higher latency. Optane Memory is targeted at PCs, but therein lies the rub. Most PCs don’t pull that much file traffic from storage and don’t need that sort of caching performance. And because, unlike NAND, 3D XPoint doesn’t require an erase cycle when writing to media, Optane is strong on write performance. Still, most client applications don’t have that much high-volume, small-file writing to do.
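The caching behavior described above can be sketched in a few lines. This is a deliberately simplified model — a least-recently-used (LRU) read cache with a hypothetical `ReadCache` class — not Intel's actual caching algorithm, which is proprietary. It only illustrates the principle: a small, fast tier absorbs repeat reads so the slower tier is touched once per hot block.

```python
from collections import OrderedDict

class ReadCache:
    """Minimal LRU read cache: a stand-in for how a small, fast tier
    (like an Optane Memory module) can front a larger, slower store."""

    def __init__(self, capacity):
        self.capacity = capacity        # number of blocks the fast tier holds
        self.cache = OrderedDict()      # block_id -> data, kept in LRU order
        self.hits = 0
        self.misses = 0

    def read(self, block_id, slow_store):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)  # mark as recently used
            self.hits += 1
            return self.cache[block_id]
        self.misses += 1
        data = slow_store[block_id]           # expensive read from NAND/HDD
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used
        return data

# A "hot" working set is fetched from the slow store only once;
# every repeat access is served from the fast tier.
store = {i: f"data-{i}" for i in range(100)}
cache = ReadCache(capacity=4)
for _ in range(3):
    for block in (1, 2, 3):
        cache.read(block, store)
print(cache.hits, cache.misses)  # 6 hits, 3 misses
```

The more often the same small files recur, the higher the hit rate — which is exactly why caching pays off for busy workloads and not for typical light desktop use.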

Above: Little larger than a stick of gum, Intel Optane Memory will appeal to users who benefit from frequent file caching. The greater the number of small files to cache, the more benefit can be expected.

Optane SSDs: Client and data center 

Next came Intel Optane SSDs and Data Center (DC) SSDs. Today, the Intel Optane SSD 8 Series ships in 58GB to 118GB capacities, also using the M.2 form factor. The 9 Series ranges from 480GB to 1.5TB and comes in M.2, U.2, and add-in card (AIC) form factors. Again, Intel bills these as client SSDs, and they certainly have good roles to play under certain conditions. But NAND SSDs remain the go-to for clients across most desktop-class applications, especially when price and throughput performance (as opposed to latency) are being balanced.

Above: Low-latency Optane SSDs come in several form factors, which helps deploying organizations find more opportunities to accelerate storage access across a range of system types.

Things change once we step into the data center. The SKUs don’t look that different from their client counterparts — capacities from 100GB to 1.5TB across U.2, M.2, and half-height, half-length (HHHL) AIC form factors — except in two regards: price and endurance. Yes, the Intel Optane SSD DC P4800X (750GB) costs roughly double the Intel Optane SSD 905P (960GB). But look at its endurance advantage: 41 petabytes written (PBW) versus 17.52 PBW. In other words, on average, you can exhaust more than two consumer Optane storage drives — and pay for IT to replace them — in the time it takes to wear out one DC Optane drive.
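The "more than two drives" claim above is simple arithmetic on the rated endurance figures. A quick check, using the PBW numbers from the spec comparison:

```python
# Rated endurance, in petabytes written (PBW), from the spec comparison above.
dc_endurance_pbw = 41.0        # Intel Optane SSD DC P4800X (750GB)
client_endurance_pbw = 17.52   # Intel Optane SSD 905P (960GB)

# At an equal sustained write load, how many client drives wear out
# in the lifetime of one DC drive?
drives_worn = dc_endurance_pbw / client_endurance_pbw
print(f"{drives_worn:.2f}")    # ~2.34 client drives per DC drive
```

So even at roughly double the purchase price, the DC drive can come out ahead once replacement hardware and IT labor are factored into total cost of ownership for write-heavy workloads.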

Optane DC Persistent Memory

Lastly, Intel Optane DC Persistent Memory modules (DCPMM) place 3D XPoint media on DDR4 form-factor memory modules. (Note: There’s no DDR4 media on the module, but DCPMMs do insert into the DDR4 DIMM sockets on compatible server motherboards.) Again, Optane media is slower than most DDR4, though not by much in many cases. Why use it, then? Because Optane DCPMMs come in capacities up to 512GB — much higher than DDR4 modules, which top out at 64GB each. Thus, if you have applications and workloads that prioritize capacity over speed — a common situation for in-memory databases and servers with high virtual machine density — Optane DCPMMs may be a strong fit.
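The capacity advantage compounds across a server's DIMM slots. As a rough illustration — assuming a hypothetical 12-slot-per-socket server (slot counts vary by platform), and ignoring that real systems must mix in some DDR4 alongside DCPMM:

```python
# Hypothetical server socket with 12 DIMM slots (varies by platform).
slots_per_socket = 12

ddr4_max_gb = 64      # largest common DDR4 module cited in the article
dcpmm_max_gb = 512    # largest Optane DCPMM

ddr4_total_gb = slots_per_socket * ddr4_max_gb    # 768 GB of DRAM per socket
dcpmm_total_gb = slots_per_socket * dcpmm_max_gb  # 6144 GB upper bound
print(ddr4_total_gb, dcpmm_total_gb)
```

Even after reserving some slots for the required DDR4, the addressable memory pool per socket grows by several multiples — the headroom that in-memory databases and dense virtualization are after.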

Above: Optane DC persistent memory modules (DCPMM) currently look like DDR4 modules, but they do not integrate any DDR4 media, a fact that leads to confusion with some users. Note that systems with Optane DCPMM still require some DDR4 to operate.

The value proposition for DCPMM was stronger in early 2018 and early 2019, when DRAM prices were higher. This allowed DCPMMs to win resoundingly on capacity and per-gigabyte price. As DRAM prices have plummeted, though, the two are at near-parity, which is why you now hear Intel talking more about the capacity benefits in application-specific settings. As Optane gradually proves itself in enterprises, expect to see Intel lower DCPMM prices to push the technology into the mainstream.

As for total performance, DCPMM use-case stories and trials are just emerging from the earliest enterprise adopters. Intel has yet to publish clear data showing “Optane DCPMMs show X% performance advantage over DRAM-only configurations in Y and Z environments.” This is partially because server configurations, which often employ virtualization and cross-system load sharing, can be very tricky to typify. But it’s also because the technology is so new that it hasn’t been widely tested. For now, the theory is that large DCPMM pools, while slower than DRAM-only pools, will reduce the need for disk I/O swaps — and the time saved by avoiding those swaps should more than offset the penalty of adopting a somewhat slower media.

Net takeaway: Optane DCPMM should be a net performance gain for massively memory-hungry applications.

In Part 2, we’ll detail Optane’s various use modes and discuss the workloads able to make the most effective use of them.
