Unprecedented Storage Performance for VMware


Michael Wilmsen, owner of Virtual Hike, is an independent architect and trainer with almost 20 years of experience in IT, and a VMware Certified Design Expert (VCDX #210). He reviewed Infinio, found the installation and configuration to be simple, and saw significant improvement in both read and write performance.

Nowadays, many legacy storage devices (SAN/NAS) offer the option of hosting flash devices in their solution. Flash devices deliver high IOPS and low latency.
The most commonly used storage protocols (iSCSI, NFS, and even Fibre Channel) are optimized for bandwidth, not for latency. Of course, Fibre Channel (FC) has lower latency than Ethernet-based protocols like iSCSI and NFS, but a remote device will still have higher latency than a local device. In this blog post from Mellanox, they explain why, in their view, FC is doomed.

In the past I wrote a blog post explaining why you want your flash devices as close as possible to your applications. This is also why Hyper-Converged Infrastructure (HCI) solutions are so popular nowadays.

But what if you have a legacy storage solution and still want low latency for your applications?


VMware vSphere has a feature called vSphere Flash Read Cache (vFRC). vFRC uses a local flash device to cache read I/Os.
In my opinion, vFRC has one major disadvantage: you have to specify the block size, and the right block size is determined by your application(s).
Let us assume that you actually know the block size used by your application. Will every application use the same block size? Probably not. This makes vFRC harder to use.
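To make the block-size problem concrete, here is a minimal Python sketch. It is purely illustrative, not VMware code: the 64 KB cache block size and the application I/O sizes are assumed values, and the cache model is a deliberate simplification. It models a read cache that allocates whole blocks of one configured size and shows how much of the allocated capacity actually holds data the application requested when I/O sizes vary.

# Illustrative sketch (not VMware code): a simplified model of a read cache that
# allocates whole cache blocks of one configured size, used to show why a single
# fixed block size fits some application I/O sizes better than others.

def useful_fraction(cache_block_kb: int, app_io_kb: int, unique_reads: int) -> float:
    """Fraction of the allocated cache capacity that holds data the app actually read.

    Simplifying assumptions: every read touches a distinct region of app_io_kb,
    and the cache allocates enough whole cache_block_kb blocks to cover it.
    """
    # Whole cache blocks needed to cover one application read (ceiling division).
    blocks_per_read = -(-app_io_kb // cache_block_kb)
    allocated_kb = unique_reads * blocks_per_read * cache_block_kb
    useful_kb = unique_reads * app_io_kb
    return useful_kb / allocated_kb


if __name__ == "__main__":
    CACHE_BLOCK_KB = 64  # hypothetical block size chosen when the cache was configured
    for app_io_kb in (4, 8, 64, 256):
        frac = useful_fraction(CACHE_BLOCK_KB, app_io_kb, unique_reads=10_000)
        print(f"app I/O {app_io_kb:>3} KB -> {frac:6.1%} of allocated cache holds requested data")

With a 64 KB cache block, 4 KB application reads leave most of each allocated block unused, while 64 KB and 256 KB reads use the blocks fully. That is exactly why the "right" block size depends on the workload, and why a host running many different applications has no single correct answer.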