It’s getting hard these days to find places where container technology doesn’t feature, but in a hard drive? Yet that’s what is being explored in one of the Storage Networking Industry Association (SNIA) technical working groups under the title of computational storage.
The idea seems simple: all storage devices already contain microprocessors to handle data management tasks, so why not give them a bit more compute power, some working memory and something else to do? While that would have been difficult a few years ago, because these devices were never designed to be reprogrammed by outsiders, adding a dock for containers makes it rather easier.
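To make that concrete, here is a minimal sketch in Python of the kind of fixed-purpose workload – a stream compressor – that could be packaged into such a container. How a container would actually be delivered to and invoked on a drive is not yet standardised, so the surrounding plumbing is assumed; the point is how small the workload itself can be.

```python
import sys
import zlib

# Illustrative only: a trivial fixed-purpose workload (stream compression)
# of the kind that could be packaged into a container and pushed down to a
# drive. The drive-side runtime and invocation interface are assumptions;
# here the "drive" is simply stdin/stdout.
def compress_stream(reader, writer, chunk_size=64 * 1024):
    compressor = zlib.compressobj(6)
    in_bytes = out_bytes = 0
    while True:
        chunk = reader.read(chunk_size)
        if not chunk:
            break
        in_bytes += len(chunk)
        out = compressor.compress(chunk)
        out_bytes += len(out)
        writer.write(out)
    tail = compressor.flush()
    out_bytes += len(tail)
    writer.write(tail)
    return in_bytes, out_bytes

if __name__ == "__main__":
    read_n, wrote_n = compress_stream(sys.stdin.buffer, sys.stdout.buffer)
    print(f"compressed {read_n} bytes to {wrote_n}", file=sys.stderr)
```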
There are a couple of caveats, of course. One is that the average hard drive doesn’t have much processing power to spare, so it might not cope with more than a single container; some smaller formats, such as M.2, might not even allow that. The other, as I realised when discussing it recently with storage industry veteran and SNIA board director Rob Peglar, is that computational storage is only ‘computational’ when seen through SNIA’s eyes. That is to say, it covers only compute tasks that are specific to storage.
There’s a lot to compute in storage
However, that in turn is a wider field than we might first think. As well as obviously relevant tasks such as data compression or calculating RAID parities and erasure code pairs, it could also stretch to things such as video transcoding, for example using the likes of FFmpeg to compress audio and video for storage.
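RAID parity is a good illustration of how little code some of these tasks need. The sketch below, in plain Python, shows the byte-wise XOR arithmetic behind RAID 5-style parity and a single-block rebuild – exactly the sort of loop a computational drive could run in place rather than shipping every block up to the host.

```python
from functools import reduce

# RAID 5-style parity: the parity block is the byte-wise XOR of the data
# blocks, so any single lost block can be rebuilt by XOR-ing the survivors
# with the parity.
def xor_blocks(blocks):
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"blockA-0", b"blockB-1", b"blockC-2"]   # equal-sized data blocks
parity = xor_blocks(data)

# Simulate losing data[1] and rebuilding it from the parity and survivors.
rebuilt = xor_blocks([parity, data[0], data[2]])
assert rebuilt == data[1]
```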
You could even have a drive with computational capabilities providing peer-to-peer services to non-computational drives on the same PCIe subsystem. P2P services might also offer a way around the single-container limitation, which would otherwise mean that each computational storage element ends up fixed-purpose: multiple drives, each with a different computational capability, could share those capabilities with one another over PCIe.
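As a purely conceptual sketch – the drive objects and the ‘subsystem’ routing them below are hypothetical stand-ins, not any real API – the shape of that orchestration problem looks something like this:

```python
import zlib

# Conceptual sketch only: each drive advertises one fixed capability, and
# work is routed to whichever peer on the subsystem offers the capability
# required. The classes and transport are illustrative assumptions.
class ComputationalDrive:
    def __init__(self, name, capability, fn):
        self.name = name
        self.capability = capability  # e.g. "compress", "checksum"
        self.fn = fn                  # stand-in for the on-drive container

    def run(self, payload):
        return self.fn(payload)

class PCIeSubsystem:
    def __init__(self):
        self.by_capability = {}

    def attach(self, drive):
        self.by_capability[drive.capability] = drive

    def offload(self, capability, payload):
        drive = self.by_capability.get(capability)
        if drive is None:
            raise LookupError(f"no peer offers {capability!r}")
        return drive.run(payload)

bus = PCIeSubsystem()
bus.attach(ComputationalDrive("ssd0", "compress", zlib.compress))
bus.attach(ComputationalDrive("ssd1", "checksum", zlib.crc32))

print(bus.offload("checksum", b"hello computational storage"))
```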
Examples of intelligent and computationally-capable drives already exist, for example from Eideticom, Netint, NGD Systems and ScaleFlux. But as Rob cautioned, there is still some standardisation work and ecosystem development to be done. In particular, computational storage is distributed processing so it needs orchestration, and storage-related application stacks will need adjustment to take advantage of it. “We are looking to the software industry for careful guidance,” he said.
Offloading drive I/O is the real win
It’s also worth noting that, despite the ‘computational storage’ name, offloading the computational workload is a secondary benefit. More important is that all these storage processing tasks require a lot of data to be moved between the processor and the storage device – lots of I/O, in other words.
As everything else in the system speeds up, I/O delays become more prominent and there is more incentive to find ways to reduce them. For example, as well as computational storage, which keeps the I/O within the drive (or the PCIe subsystem) for storage-specific processing work, there is non-volatile memory (NVM), which provides a different route to I/O reduction for mainstream tasks.
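Some back-of-envelope arithmetic, with assumed illustrative numbers rather than measurements, shows why keeping that work on the drive matters:

```python
# Assumed, illustrative numbers – not measurements. Compare the bytes that
# must cross the host's I/O path when compression runs on the host versus
# on the drive itself.
raw_bytes = 100 * 2**30   # 100 GiB already sitting on the drive
ratio = 0.4               # assumed compression ratio (output/input)

# Host-side: the raw data travels up to the CPU and the compressed
# result travels back down to the drive.
host_side_io = raw_bytes + raw_bytes * ratio

# Drive-side: at most the compressed copy moves, and if it simply
# replaces the raw data in place, nothing crosses the bus at all.
drive_side_io = raw_bytes * ratio

print(f"host-side compression moves  {host_side_io / 2**30:.0f} GiB")
print(f"drive-side compression moves {drive_side_io / 2**30:.0f} GiB at most")
```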
NVM puts fast and persistent storage in a slot next to the processor, and therefore keeps its I/O on the memory bus – we’ll write more about this soon. It is very likely that we will see systems incorporate both NVM and computational storage, as they address different needs.