Solid State Disks (SSDs): Definition and Use Cases

If there’s one technology that has taken the storage world by storm over recent years, it’s flash. Solid State Disks (SSDs) have transformed the storage landscape, offering much higher I/O density (IOPS per TB of storage) than traditional hard drives can achieve. HDDs are mechanical media, based on spinning platters accessed by multiple read/write heads. The physical geometry of these devices means they are better suited to sequential than to random workloads. It’s easy, for example, to write data sequentially onto a disk track as the disk rotates past the head. What’s much harder for HDDs is to manage random I/O profiles that read data from physically disjoint parts of the drive, either on separate tracks or platters. Totally random read requests can slow a hard drive down to 120-200 IOPS, depending on the drive speed.
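Those numbers fall out of simple drive mechanics: each random read pays an average seek plus half a platter rotation. Here’s a back-of-envelope calculation (a sketch using typical published seek and rotational figures, which are assumptions; real drives vary):

```python
# Back-of-envelope HDD random-read IOPS estimate.
# Seek times below are typical spec-sheet figures, not measurements.

def hdd_random_iops(rpm: float, avg_seek_ms: float) -> float:
    """Estimate IOPS as 1 / (average seek + average rotational latency)."""
    # Average rotational latency is half a revolution.
    rotational_latency_ms = (60_000 / rpm) / 2
    service_time_ms = avg_seek_ms + rotational_latency_ms
    return 1000 / service_time_ms

# 10,000 RPM drive, ~4.5 ms average seek: ~133 IOPS
print(f"10K RPM: {hdd_random_iops(10000, 4.5):.0f} IOPS")
# 15,000 RPM drive, ~3.5 ms average seek: ~182 IOPS
print(f"15K RPM: {hdd_random_iops(15000, 3.5):.0f} IOPS")
```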

Cloud Storage Options: SSDs (Solid State Disks) vs HDDs (Hard Disk Drives)

As we move to the cloud, the ability to see the underlying storage hardware is abstracted from us. Ideally, we should be able to simply dial in the IOPS we need for each volume and go from there. Unfortunately, things aren’t yet that simple. Take Amazon Web Services’ EBS (Elastic Block Store) offerings, for example. EBS provides primary OLTP-type block access to instances, including the boot drive of your VM. Current EBS offerings are based either on HDDs or SSDs, optimised for throughput (MB/s) or IOPS respectively. The HDD offerings top out at 250 or 500 IOPS, whereas the SSD offerings deliver 20 to 40 times that capability, with nothing in between (apart from some burst capability).
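To make the distinction concrete, here’s how volume type and provisioned IOPS are specified when creating EBS volumes with boto3 (a minimal sketch; the region, availability zone and sizes are placeholder assumptions, and credentials are presumed configured):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Throughput-optimised HDD volume (st1): capped IOPS, good MB/s.
hdd_volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",  # placeholder AZ
    Size=500,                       # GiB
    VolumeType="st1",
)

# Provisioned IOPS SSD volume (io1): you dial in the IOPS directly.
ssd_volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,
    VolumeType="io1",
    Iops=10000,
)
print(hdd_volume["VolumeId"], ssd_volume["VolumeId"])
```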

If your application is currently running from an HDD EBS option, how would you move it over to Solid State Disk? The answer is: not that easily. Currently the process involves taking a snapshot of the current EBS volume and using that snapshot image to build a new Solid State Disk-based EBS volume. Putting the new volume into place requires an application outage: shut down the instance, detach the old volume and attach the new one, then power back up.
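In boto3 terms, the migration looks roughly like the sketch below. All identifiers are placeholders and error handling is omitted; this is an outline of the steps just described, not a production script:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

INSTANCE_ID = "i-0123456789abcdef0"      # placeholder
OLD_VOLUME_ID = "vol-0aaaaaaaaaaaaaaaa"  # placeholder HDD-backed volume
DEVICE = "/dev/sdf"                      # placeholder device name

# 1. Snapshot the existing HDD-backed volume.
snap = ec2.create_snapshot(VolumeId=OLD_VOLUME_ID, Description="pre-migration")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# 2. Build a new SSD-backed volume from the snapshot.
new_vol = ec2.create_volume(
    AvailabilityZone="us-east-1a",  # must match the instance's AZ
    SnapshotId=snap["SnapshotId"],
    VolumeType="gp2",
)
ec2.get_waiter("volume_available").wait(VolumeIds=[new_vol["VolumeId"]])

# 3. The outage: stop the instance, swap the volumes, start again.
ec2.stop_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])
ec2.detach_volume(VolumeId=OLD_VOLUME_ID, InstanceId=INSTANCE_ID)
ec2.get_waiter("volume_available").wait(VolumeIds=[OLD_VOLUME_ID])
ec2.attach_volume(VolumeId=new_vol["VolumeId"], InstanceId=INSTANCE_ID, Device=DEVICE)
ec2.start_instances(InstanceIds=[INSTANCE_ID])
```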

Of course, the answer could be to simply deploy everything to flash, but unfortunately that’s not always an option. Solid State Disk EBS volumes are some 2-3 times more expensive than their HDD counterparts, and remember that SSD EBS volumes are optimised for IOPS, not for throughput; in fact, the HDD options out-perform SSD EBS volumes on that measure. In many instances only part of a volume may be active or “hot”, so moving it entirely to flash is an expensive workaround.
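A rough cost comparison shows why. The per-GB prices below are purely illustrative assumptions chosen for round-number arithmetic, not quoted AWS rates:

```python
# Illustrative monthly cost: all-flash volume vs HDD volume plus a
# small flash cache for the hot data. Prices are made-up round numbers.

VOLUME_GB = 1000
SSD_PER_GB = 0.10    # assumed $/GB-month for SSD-backed storage
HDD_PER_GB = 0.04    # assumed $/GB-month for HDD-backed storage
HOT_FRACTION = 0.10  # ~10% of the volume is active (see below)

all_flash = VOLUME_GB * SSD_PER_GB
hdd_plus_cache = VOLUME_GB * HDD_PER_GB + VOLUME_GB * HOT_FRACTION * SSD_PER_GB

print(f"All flash:       ${all_flash:.2f}/month")       # $100.00
print(f"HDD + 10% cache: ${hdd_plus_cache:.2f}/month")  # $50.00
```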

 

Targeting Resources

An alternative strategy is to use flash in a targeted manner, assigning it to just those application I/Os that need it. This is an approach that has been used in on-premises solutions for many years. Storage arrays have long used DRAM as a cache and, more recently as flash has become mainstream, have used flash to either cache or tier a mix of fast and traditional HDD media in a cost-effective manner. There’s an old adage in storage: capacity is free, but performance costs. This applies to a flash/HDD mix, where cheaper hard drives provide the capacity and flash delivers the performance.

In many cases, the amount of flash needed to improve performance can be as little as 10% of the storage volume. This is because only a small part of the data on a volume is active at any one time. Caching provides the ability to target flash more effectively, and depending on the workload, the amount of flash can be varied to provide the right level of application acceleration.
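A toy model illustrates why a small cache goes a long way. The sketch below puts an LRU cache (standing in for flash) in front of a slow backing store (standing in for HDD) and drives it with a skewed workload; it’s an illustration of the caching principle, not how any particular array implements it:

```python
import random
from collections import OrderedDict

class BlockCache:
    """Toy LRU cache: a small fast tier fronts a slow backing store."""

    def __init__(self, backing_store, capacity_blocks):
        self.backing = backing_store     # slow tier: block_id -> data
        self.capacity = capacity_blocks  # size of the fast tier in blocks
        self.cache = OrderedDict()       # fast tier, kept in LRU order
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)  # mark as most recently used
            self.hits += 1
            return self.cache[block_id]
        self.misses += 1
        data = self.backing[block_id]         # slow-path read from "disk"
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used
        return data

# Cache sized at 10% of the volume; workload skewed so 90% of reads
# hit the hottest 10% of blocks (an OLTP-like access pattern).
disk = {i: f"block-{i}" for i in range(1000)}
cache = BlockCache(disk, capacity_blocks=100)
for _ in range(10_000):
    block = random.randrange(100) if random.random() < 0.9 else random.randrange(1000)
    cache.read(block)
print(f"hit rate: {cache.hits / (cache.hits + cache.misses):.0%}")
```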

Zadara recently announced software update 16.05, which increased the amount of cache that can be deployed with the VPSA, its Virtual Private Storage Array. A VPSA can now support up to 3.2TB of cache in 200GB increments (depending on the engine size). This means customers can add cache in granular amounts to improve performance as the application requires, without overcommitting resources. More importantly, these changes can be implemented without affecting the application: there’s no need to take snapshots and incur an application outage to re-assign the instance to a new volume. This is possible because, in this case, flash is being used as a cache rather than as a storage tier.

Of course, knowing how much cache to use requires monitoring and understanding the performance of the application and storage. Adding flash cache will have a direct impact on latency and throughput, so any additional cache should be added in a structured way, including taking performance measurements before and after the change is implemented. The impact of the change can then be measured and quantified, setting a baseline to ensure that the flash has been applied cost-effectively. With flash still a relatively expensive resource, there’s no need to waste it on applications that don’t need it. The good news is that the ability to make dynamic changes on-the-fly gives you the flexibility to try different configurations and determine which provides the optimal mix of cache. Essentially, you can’t make a mistake, because you simply test and adjust until you are happy with the results.
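As a starting point for that baseline, the sketch below times a burst of random 4KB reads from a file and reports latency percentiles and approximate IOPS. The file path is a placeholder, and note that the OS page cache will flatter the numbers; a proper benchmark would use a tool such as fio with direct I/O against the actual volume:

```python
import os
import random
import statistics
import time

PATH = "/path/to/testfile"  # placeholder: a large file on the volume under test
BLOCK = 4096
READS = 2000

size = os.path.getsize(PATH)
latencies_ms = []

fd = os.open(PATH, os.O_RDONLY)
try:
    for _ in range(READS):
        offset = random.randrange(0, size - BLOCK)
        start = time.perf_counter()
        os.pread(fd, BLOCK, offset)  # one random 4 KB read
        latencies_ms.append((time.perf_counter() - start) * 1000)
finally:
    os.close(fd)

latencies_ms.sort()
print(f"median latency: {statistics.median(latencies_ms):.2f} ms")
print(f"p99 latency:    {latencies_ms[int(len(latencies_ms) * 0.99)]:.2f} ms")
print(f"approx IOPS:    {READS / (sum(latencies_ms) / 1000):.0f}")
```

Run the same measurement before and after a cache change and the difference gives you a quantified view of what the extra flash actually bought.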
