Hyper-Flash? Merging Hyperconvergence with All Flash

Today there was an article in The Register by Chris Mellor about hyperconverged vendors looking to move to all-flash inside their platforms. It's a logical next step to think that if you want ultra-high performance in a small package, flash/SSD tech would be a good fit. But knowing what I do about the various platforms out there today, my answer to whether a hyperconverged vendor should build an all-flash system is: probably not.

SSD/flash disks are not hard drives; let's get that out of the way first, because it's an important distinction. Sure, they hold data, but it's how they hold data that matters. And in the flash space, how the write is performed is where the real differentiation between vendors shows up. Just putting fast flash disks in these systems won't necessarily be a good thing; it will depend much more on the intelligence within the system itself and how it treats that flash disk.
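To put a rough number on why that intelligence matters, here's a back-of-the-envelope sketch in Python. It is purely illustrative, not any vendor's firmware; the erase-block and page sizes are numbers I picked for the example. It compares the worst case (a tiny overwrite forcing a rewrite of the whole erase block) against a system that redirects the update into a fresh page.

```python
# Back-of-the-envelope sketch (not any vendor's actual firmware): why a tiny
# overwrite can be expensive on flash that isn't managed intelligently.
# The sizes below are illustrative assumptions only.

ERASE_BLOCK_BYTES = 128 * 1024   # assumed NAND erase-block size
PAGE_BYTES = 4 * 1024            # assumed NAND page size

def naive_write_amplification(logical_write_bytes: int) -> float:
    """Worst case: a small in-place update forces a read-modify-erase-write
    of the entire erase block holding that data."""
    return ERASE_BLOCK_BYTES / logical_write_bytes

def coalesced_write_amplification(logical_write_bytes: int) -> float:
    """Better case: the controller/software redirects the update into a fresh
    page, so only the pages actually touched get programmed."""
    pages_touched = -(-logical_write_bytes // PAGE_BYTES)   # ceiling division
    return (pages_touched * PAGE_BYTES) / logical_write_bytes

if __name__ == "__main__":
    change = 1024  # the "changing 1K" case discussed later in the post
    print(f"naive:     {naive_write_amplification(change):.0f}x amplification")
    print(f"redirected: {coalesced_write_amplification(change):.0f}x amplification")
```

With these assumed sizes, the same 1K change costs 128x the writes on the naive path versus 4x on the smarter one, which is the gap that eats flash endurance.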

Nutanix uses flash for read and write acceleration but doesn't have the kind of intellectual property to get the most benefit out of it (in my view, that is). That's why they had customers who burned through Fusion-io cards in the earlier systems and moved over to IBM SSD disks with greater longevity. To their credit, they added dedupe for some portions of data, and with their architecture an all-SSD appliance may be more compelling (even though the dedupe isn't inline for all data). I still see a challenge in their use of replication between nodes, as well as in capacity. They have always been compute heavy and storage light. They need more capacity, not less.

SimpliVity uses flash as a read cache/hot tier and may get more longevity out of it in that role, but what happens when they need to do all writes to flash? There would have to be some architectural changes to the data path to support an all-flash system. Today they use eMLC flash for their hot tier and standard SATA SSDs for capacity, and the path that data takes bypasses the SSD altogether when a write occurs. Their ability to do inline deduplication and compression before data hits the disk would actually give them much more longevity with the flash disks in their systems, so there could be a compelling case to go the all-flash route, and they could probably use much cheaper TLC-based flash in the capacity tier to bring down costs. BUT, as opposed to Nutanix, their compute capacity would never generate the kind of IO that an all-flash system could utilize, and data doesn't span hosts, so it's all local IO. At that point, is a single ESXi host going to generate 100,000 IOPS? Probably not.
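To make the longevity point concrete, here's a minimal sketch of inline dedupe as a concept; it's my own illustration, not SimpliVity's engine. The idea is simply to fingerprint each block before it hits flash and only physically write blocks you haven't seen before, so redundant data never consumes program/erase cycles.

```python
import hashlib

# Minimal inline-dedupe sketch (illustrative only, not SimpliVity's engine):
# fingerprint each incoming block and skip the physical write for duplicates.

class InlineDedupeStore:
    def __init__(self):
        self.fingerprints = {}      # block hash -> simulated physical location
        self.physical_writes = 0    # blocks that actually hit flash
        self.logical_writes = 0     # blocks the host wrote

    def write_block(self, data: bytes) -> str:
        self.logical_writes += 1
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.fingerprints:
            # Only unique data consumes a program/erase cycle on the flash.
            self.fingerprints[digest] = f"phys-{self.physical_writes}"
            self.physical_writes += 1
        return self.fingerprints[digest]

store = InlineDedupeStore()
for block in [b"A" * 4096, b"B" * 4096, b"A" * 4096, b"A" * 4096]:
    store.write_block(block)

print(f"logical writes:  {store.logical_writes}")   # 4
print(f"physical writes: {store.physical_writes}")  # 2 -> less flash wear
```

Every duplicate block caught before the write is endurance the flash never has to spend, which is why doing this inline (rather than as a post-process) matters for an all-flash design.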

EVO:RAIL will suffer the same challenges as Nutanix, but even more so, since there is no deduplication in the system at all. EVO is just VSAN Ready Nodes with a deployment wizard. And like Nutanix, they have to rely on replication factors for data protection within the nodes. So all that additional data redundancy will create a significant amount of writes, and without a system that truly understands how flash best handles writes, changing 1K may result in huge amounts of IO being done across multiple nodes. Not a good recipe for data efficiency.
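A rough back-of-the-envelope shows the fan-out. The numbers here are my assumptions for illustration (two data copies and a 64 KB physical update granularity), not a statement about VSAN internals:

```python
# Rough illustration (my assumptions, not VSAN internals): how a small guest
# write fans out when every write is mirrored and nothing is deduplicated.

GUEST_WRITE_KB = 1          # the "changing 1K" case
REPLICAS = 2                # e.g. failures-to-tolerate = 1 -> two data copies
UPDATE_GRANULARITY_KB = 64  # assumed physical update size, purely illustrative

physical_kb = REPLICAS * UPDATE_GRANULARITY_KB
print(f"{GUEST_WRITE_KB} KB guest write -> ~{physical_kb} KB written "
      f"across {REPLICAS} nodes ({physical_kb // GUEST_WRITE_KB}x amplification)")
```

Multiply that by every small write a busy VM issues and, with no dedupe to absorb it, the cluster burns flash endurance and network bandwidth on redundant data.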

The commenter who mentioned UCS Mini/Invicta may have the more logical solution, or any system that takes an existing flash vendor's product and miniaturizes it to fit in their kit, because those vendors have intimate knowledge of how flash works and how best to address IO. With Invicta, the way they handle write IO for flash is fairly important, since their system essentially coalesces flash writes and lays them out to match the size of the cell. So changing that 1K block doesn't result in having to pull data out and reflash the cells. That's one reason their write performance is so high (250K IOPS from what I've seen). Now, the big caveat here is: would the industry as a whole look at UCS Mini with access to Invicta as a hyperconverged solution? I'd say the answer is, it depends, and since such a product doesn't exist it's hard to speculate.
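For readers who haven't seen write coalescing before, here's a minimal sketch of the general idea; it's my own illustration with an assumed flush size, not Invicta's implementation. Small random writes accumulate in a buffer and only go to flash as full, media-aligned chunks, so the device never does partial-block rewrites.

```python
# Write-coalescing sketch (my own illustration, not Invicta's implementation):
# buffer small random writes and flush them as full, flash-friendly chunks.

FLUSH_UNIT_BYTES = 128 * 1024   # assumed cell/erase-block-aligned flush size

class CoalescingWriter:
    def __init__(self):
        self.buffer = bytearray()
        self.flushes = 0

    def write(self, data: bytes) -> None:
        self.buffer.extend(data)
        while len(self.buffer) >= FLUSH_UNIT_BYTES:
            chunk = bytes(self.buffer[:FLUSH_UNIT_BYTES])
            self.buffer = self.buffer[FLUSH_UNIT_BYTES:]
            self._flush(chunk)

    def _flush(self, chunk: bytes) -> None:
        # In a real system this would be one large sequential write to flash,
        # sized to the media, instead of many small in-place updates.
        self.flushes += 1

writer = CoalescingWriter()
for _ in range(512):
    writer.write(b"x" * 1024)    # 512 small 1 KB writes from the host

print(f"flushes to flash: {writer.flushes}")   # 4 large writes, not 512 small ones
```

Turning hundreds of scattered 1K updates into a handful of full-size sequential writes is what keeps both the write latency and the wear on the cells down.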

I think the bigger challenge for the hyperconverged vendors with an all-flash system is price/performance. SSD is not cheap, and if you can get the performance you need from a hybrid disk approach and keep costs down, then that's what you are going to do. Additionally, all the vendors in this space have the challenge of not being able to mix and match systems in the same cluster. You can't pair disparate Nutanix nodes in the same cluster, and you can't pair disparate SimpliVity OmniCubes that way either. With EVO:RAIL it's only EVO:RAIL; there is no option for different disk configs, and VSAN is looking for like disk configurations across all nodes.

The larger benefit of all-flash systems is the ability to run multiple disparate workloads (virtual and non-virtual) with a performance profile that will always beat out disk-based systems, and a physical/power footprint that is simply unmatched by spinning rust. And while yes, it's hella fast, speed isn't always the driving factor when making the choice to go the all-flash route. I'll probably have more thoughts on this later, so look for a future post on the value prop of flash.
