Zadara Blog

News, information, opinion and commentary on issues affecting enterprise data storage and management.

Bring Cold Object Storage to Your Private Cloud

In today’s computing environment, more and more companies are beginning to work with massive datasets, ranging into the hundreds of petabytes and beyond. Whether it’s big data analytics, high-definition video, or internet-of-things applications, the necessity for companies to handle large amounts of data in their daily operations continues to grow.

Historically, enterprises have managed their data as a hierarchy of files. But this approach is simply inadequate for efficiently handling the huge datasets that are becoming more and more common today. For example, public cloud platforms such as Amazon Web Services (AWS) and Microsoft Azure, which must service many thousands of users simultaneously, would quickly become intolerably unresponsive if every data request meant traversing the folders and subfolders of multiple directory trees to find and collect the information needed for a response.

That’s why modern public cloud platforms, and other users of big data, use object storage in place of older file systems. And as the use of private clouds grows, they too are employing object storage to meet the challenges of efficiently handling large amounts of data.


What Is Object Storage?

With object storage, there is no directory tree or folders. Instead, there is a flat global namespace that allows each unit of stored data, called an object, to be directly addressed.

Each object contains not only data, but also metadata that describes the data, and a global ID number that uniquely identifies that object. This allows every object in the storage system, no matter where it might be physically stored, to be quickly retrieved simply by providing its unique identifier.
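To make the contrast with file hierarchies concrete, here is a minimal sketch (a toy illustration, not Zadara’s implementation) of the object model: each object bundles data, metadata, and a unique ID, and retrieval is a single keyed lookup into a flat namespace rather than a walk through directory trees.

```python
import uuid

class ObjectStore:
    """Toy flat-namespace store: every object lives in one global
    namespace and is fetched directly by its unique ID."""

    def __init__(self):
        self._objects = {}  # flat global namespace: ID -> object

    def put(self, data: bytes, metadata: dict) -> str:
        object_id = str(uuid.uuid4())  # globally unique identifier
        self._objects[object_id] = {"data": data, "metadata": metadata}
        return object_id

    def get(self, object_id: str) -> dict:
        # One keyed lookup; no folders or directory trees to traverse.
        return self._objects[object_id]

store = ObjectStore()
oid = store.put(b"sensor readings ...", {"type": "iot", "retention_days": 365})
print(store.get(oid)["metadata"])
```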

Why Object Storage Is Well Suited To Private Clouds

When it comes to handling massive datasets in a cloud environment, object storage has a number of unique advantages. Let’s take a look at some of these:

  • It’s infinitely scalable. Because of its flat namespace, an object storage system can theoretically be scaled without limitation simply by adding objects, each with its own unique ID.
  • Metadata makes searching easy. The metadata that accompanies each object provides critical information about the object’s data, making it easy to search for and retrieve needed data quickly and efficiently without having to analyze the data itself.
  • It’s highly robust and reliable. Rather than traditional RAID-based redundancy, VPSA Object Storage uses a distributed “Ring” topology under the hood, and it offers 2-way or 3-way replication as options that customers choose at creation time. By using replication (or erasure coding) instead of RAID to keep continuous, efficient copies of data across multiple nodes, an object storage system automatically protects data and can quickly rebuild anything that is destroyed or corrupted. Nodes can be added or removed at will, and the system uses Swift’s underlying Ring replication to redistribute data to new nodes, or rebuild data from removed ones, automatically and transparently.
  • It simplifies storage management. The metadata of an object can contain as much (or as little) information about the data as desired. For example, it could specify where the object is to be stored, which applications will use it, the date when it should be deleted, or what level of data security is required. Having this degree of detail available for every object allows much of the data management task to be automated in software (see the sketch following this list).
  • It lowers costs. Object storage systems don’t require expensive specialized storage appliances, but are designed for use with low-cost commodity disk drives.
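As promised in the storage-management bullet above, here is a toy sweep (reusing the sketch store from earlier, with a hypothetical delete_after metadata field) that enforces a retention policy purely from metadata:

```python
from datetime import datetime

def expire_objects(store):
    """Sweep the toy ObjectStore above and delete expired objects.
    Assumes a 'delete_after' metadata field holding a naive UTC
    ISO-8601 timestamp; the field name is illustrative, not an API."""
    now = datetime.utcnow()
    for oid, obj in list(store._objects.items()):
        stamp = obj["metadata"].get("delete_after")
        if stamp and datetime.fromisoformat(stamp) <= now:
            del store._objects[oid]  # retention policy enforced in software
```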


Zadara VPSA Object Storage

Zadara offers an object storage solution that incorporates all the advantages discussed above, and then some. VPSA Object Storage is specifically designed for use with private as well as public clouds. It is especially suited to storing relatively static data such as big data or multimedia files, or for archiving data of any type. VPSA Object Storage provides anytime, anywhere, any-device remote access (with appropriate access controls) via HTTP.

The VPSA Object Storage solution, which is Amazon S3 and OpenStack Swift compatible, features frequent, incremental, snapshot-based, automatic data backup to object-based storage, eliminating the need to have separate backup software running on the host.
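Because VPSA Object Storage is Amazon S3 compatible, standard S3 tooling should work against it. Here is a sketch using Python’s boto3 SDK; the endpoint URL, bucket name, and credentials are placeholders, so substitute the values from your own VPSA console.

```python
import boto3

# Point the standard AWS SDK at an S3-compatible endpoint.
# The URL, bucket, and credentials below are placeholders, not real values.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.com",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

with open("cam01.mp4", "rb") as f:
    s3.put_object(Bucket="media-archive", Key="video/cam01.mp4", Body=f)

obj = s3.get_object(Bucket="media-archive", Key="video/cam01.mp4")
print(obj["ContentLength"], "bytes retrieved over HTTP(S)")
```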

If you would like to explore how Zadara VPSA Object Storage can help boost your company’s private cloud, please contact us.

October 10, 2017

Posted In: Industry Insights


Challenges MSPs Face as Customers Move to the Cloud

The face of the MSP (managed IT services provider) marketplace is changing rapidly. Not so long ago the keys to success for most MSPs revolved around recommending or selling the newest and best hardware and software products to their customers. But as more and more companies migrate to the cloud, that approach is no longer adequate.

The Cloud’s XaaS Model Changes Everything for MSPs

Perhaps the most important feature of the cloud model is that it allows customers to meet many, if not all, of their IT requirements by making use of pay-as-you-go services offered by cloud providers. This “anything as a service” (XaaS) approach reduces, or in some cases totally eliminates, the necessity of purchasing specific hardware/software solutions. For example, many companies no longer meet their document processing needs by installing Microsoft Office on their computers. Instead they simply subscribe to Office 365 and receive the services they need through the cloud.


Service Providers Gain Competitive Advantage by Leveraging Zadara Storage

Watch the Webinar


In today’s IT environment customers aren’t looking for products, but for solutions. That means MSPs must now demonstrate that they provide a unique value proposition for customers who can theoretically go directly to a CSP (cloud service provider) to obtain almost any type of IT service they might need.

Yet the good news for MSPs is that customers aren’t really looking for services – they’re looking for solutions to the business issues they face. As IT business coach Mike Schmidtmann puts it, “Cloud is a business conversation, not a price-and-product conversation.”

So, the MSPs that survive and thrive in the age of the cloud will be those who shift away from simply offering specific products, and move toward providing strategic IT solutions that help their customers realize their business objectives.


A Good MSP Will Help Customers Develop an IT Strategy Based on Business Goals

Most MSP clients are not interested in IT per se. Their focus is on using IT effectively to enhance their business operations. So, the first service a cloud-savvy MSP can provide to their customers is to help them develop a comprehensive IT strategy that is closely aligned with the company’s business objectives. In effect, the MSP will seek to become an extension of the customer’s own IT staff, providing a depth of expertise and operational capability that would be very difficult for the customer to maintain in-house.

Armed with a good understanding of the customer’s business goals, an MSP can translate those goals into a comprehensive IT strategy that supports them. So, the first conversations between MSPs and their customers shouldn’t be about specific solutions, but about the goals and strategy the customer is pursuing for both the present and the future of its business.


Service Provider Success Story:

“Overall, we are seeing 80% better performance with Zadara Storage than with our prior storage solution.” — Chris Jones, Infrastructure Architect at Netrepid

Read the Case Study


A Good MSP Will Identify Specific Cloud Solutions That Meet Customer Needs


A recent CompTIA survey reveals that many companies, especially smaller ones, have a great deal of difficulty in aligning their IT infrastructure with their business strategy. They simply don’t have the in-house technological expertise to do so effectively. John Burgess, president of an MSP in Little Rock, AR, says that such companies are “usually fairly ad hoc and reactionary in how they manage and spend technology.”

Here’s where the added value an MSP partner can provide becomes clearly evident. A good MSP can help identify the specific available cloud services that best fit the customer’s business strategy. In doing so, the MSP will be looking not just at individual services and the CSPs that offer them, but at how those services can be integrated into a unified system that can be effectively managed as a single solution.

A Good MSP Will Manage the Customer’s Cloud Infrastructure

Perhaps the most important service a good MSP can offer is to relieve customers of the burden of having to worry about their IT operations. This involves the capability to initially put the system in place, to monitor its operations on a 24/7/365 basis, and to proactively handle problem resolution and upgrades to system components.

A Good MSP Will Establish Relationships With Expert Partners

Few MSPs have the resources to develop and maintain in-house the kind of comprehensive cloud expertise required to fully support their customers on their own. Most will benefit from having specialized expert partners that can support the MSP in the services they offer to customers.

A good example of such a partner is Zadara Storage. As a storage-as-a-service (STaaS) provider, Zadara offers a high level of expertise in all elements of storage, whether in the public cloud, private clouds, or customers’ on-premises data centers. In fact, Zadara’s VPSA Storage Arrays are already installed in the facilities of major public cloud platforms such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), and are available for installation on customer premises as the basis of a private or hybrid cloud solution.

Whether the VPSA Storage Arrays they use are in the cloud, on-premises, or both, Zadara customers never buy storage hardware. Instead, they purchase storage services, paying a monthly fee for only the amount of storage they actually use during that billing period.


Partnering with a first-class STaaS provider enables you to provide your customers with a cost-effective enterprise-grade storage solution

Join the Zadara Partner Network Today

Zadara Storage Managed Services. Download White Paper



October 4, 2017

Posted In: Industry Insights


All Flash vs Hybrid: Choosing Your Best SSD Storage Solution

All flash vs hybrid: To flash or not to flash is no longer the question. According to 451 Research, about 90 percent of all enterprises are already using some form of flash data storage in their data centers. And it’s All Flash Arrays (AFAs) that form the fastest growing segment of that market. As flash memory prices continue their steep descent, more and more storage administrators are finding that flash-based solid state drives (SSDs) are becoming an affordable alternative to traditional hard disk drives (HDDs) for a wide range of storage applications.

But all-flash may not be the best option in every case. In fact, for many workloads, an SSD/HDD hybrid solution may be more cost effective.

The big advantage of SSDs is their blazing speed. In fact, modern SSDs have proven themselves superior to HDDs not only in terms of raw performance, but also in other areas such as storage capacity, reliability, and power consumption. But the one area in which flash-based storage has yet to surpass spinning disks, and probably won’t for some years to come, is in cost per GB.

Hybrid Arrays: Faster Than HDDs, Less Expensive Than AFAs

In an SSD/HDD hybrid device, a small amount of flash media (usually between two and ten percent of total capacity) is paired with hard disks that are used for bulk storage. The flash portion of the array is employed as a cache in which frequently used data is housed. Information that the application calls for less frequently, which is usually by far the bulk of the data, resides on the HDDs. The storage controller monitors which data is being called for most often, and moves blocks in and out of the SSD cache as required.

This strategy allows “hot” data (information the application needs frequently) to be read and written from the cache at near-SSD speeds. Colder, less used data is still housed on the slower HDDs. The result is that for appropriate workloads, SSD/HDD hybrid devices can often provide SSD-like performance at a much lower price point.

The one area in which a hybrid device provides little advantage is with new data. In order to know which blocks should be kept in the SSD cache, the storage controller must train itself over time. So these drives have what might be called a “break-in” period as the controller figures out which portions of the data should be cached. During that time, the device performs more like an HDD array than an AFA.

The great advantage of this hybrid arrangement is that the largest portion of the storage array is made up of relatively inexpensive HDDs. The small amount of more expensive SSD cache increases the cost of the device only by 10 to 20 percent, while often providing a performance gain of 100 percent or more.
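The controller logic just described can be pictured as a small LRU cache in front of bulk storage. The sketch below is a deliberate simplification (real controllers use far more elaborate heuristics), but it captures the promote-and-evict behavior:

```python
from collections import OrderedDict

class HybridArray:
    """Toy model: a small 'SSD' LRU cache in front of a large 'HDD' store."""

    def __init__(self, cache_blocks: int):
        self.hdd = {}             # bulk storage: block number -> data
        self.ssd = OrderedDict()  # small, fast cache kept in LRU order
        self.cache_blocks = cache_blocks

    def read(self, block: int):
        if block in self.ssd:          # cache hit: near-SSD latency
            self.ssd.move_to_end(block)
            return self.ssd[block]
        data = self.hdd[block]         # cache miss: pay HDD latency
        self._promote(block, data)     # hot block moves into flash
        return data

    def _promote(self, block: int, data):
        self.ssd[block] = data
        if len(self.ssd) > self.cache_blocks:
            self.ssd.popitem(last=False)  # evict the coldest block
```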


How To Decide Whether An AFA or Hybrid Solution Is Best For You


The place to start in deciding whether an AFA or hybrid solution will work best in your data center is to evaluate your workloads.

Where the performance advantages of SSDs show themselves most significantly is with those workloads, such as large databases and OLTP systems, that require very low latencies and high input/output (IOPS) rates. Such applications, particularly when they are more dependent on random access performance than on high throughput rates, are likely to be especially well served by AFA storage. On the other hand, workloads that transfer data in a more linear fashion may be better suited to a hybrid solution.

One issue with hybrid arrays is that they can’t always be counted on to deliver predictable performance, especially as regards latency. Because the I/O responsiveness of the array is dependent on whether the data is housed in the SSD cache or on slower hard drives, applications that require consistently low latencies may not be a good match for a hybrid solution.

One key is to focus more on cost per IOPS than on cost per GB. HDDs can still boast a lower cost per GB, while SSDs have a clear advantage in cost per IOPS. As Jim Handy, an analyst at Objective Analysis, puts it, “Focus on what would be the lowest overall system cost to get the throughput that you require.”
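Handy’s point is easy to put numbers on. The figures below are purely illustrative (substitute current street prices), but they show how the ranking flips depending on which metric you divide by:

```python
# Illustrative 2017-era numbers only; plug in real quotes before deciding.
drives = {
    "HDD (15K RPM)": {"price": 300.0, "capacity_gb": 600, "iops": 200},
    "SSD":           {"price": 400.0, "capacity_gb": 800, "iops": 50_000},
}

for name, d in drives.items():
    print(f"{name}: ${d['price'] / d['capacity_gb']:.2f}/GB, "
          f"${d['price'] / d['iops']:.4f}/IOPS")
# The HDD wins on $/GB; the SSD wins on $/IOPS by orders of magnitude.
```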

You should also consider total cost of ownership. Because SSDs are beginning to surpass HDDs in storage density (the amount of storage that can be packed into a single device), fewer of them are needed for a given amount of storage. That can reduce data center space requirements. And because SSDs consume less power and emit less heat, both electricity and cooling costs can be lower when AFAs are used in place of HDDs and hybrids. When TCO is taken into account, an AFA solution to your storage needs may be more affordable than it at first appears.


Both All-Flash and Hard Disk Arrays Remain Viable Options

AFA storage is clearly the wave of the future. But hybrid storage can still be a very cost effective option for workloads that don’t require the highest levels of IOPS and latency performance. If you are considering how best to take advantage of the performance benefits SSDs can provide in your data center, we can help. Please request a customized TCO analysis.

February 8, 2017

Posted In: Tech Corner


Debunking Myths about SSD Data Storage

Solid State Drives (SSDs) are rapidly gaining acceptance for enterprise data storage. But some companies remain reluctant to fully embrace the technology because of lingering concerns about the suitability of flash-memory for storage of mission-critical data. In many cases, however, issues that may have had some legitimacy several years ago are no longer valid. They have, in effect, become myths about SSD data that are contradicted by the facts.

In this article, we’ll take a look at several of those myths.

1. SSDs Are A Lot More Expensive Than HDDs

Since solid state devices were first introduced, they have cost more than equivalent hard disk drives (HDDs). But as flash memory technology has matured, prices have fallen steeply. Today the per-gigabyte prices of enterprise SSDs have already attained rough parity with those for the highest-performing HDDs. For example, Zadara™ Storage, in partnership with Intel, is offering SSD-based cloud storage at a price point equivalent to that of existing HDD products.

Moreover, because of factors like greater storage density, less power consumption, and greater reliability, overall costs associated with SSD storage are often lower than those for hard disks.

2. SSDs Have Less Storage Capacity Than HDDs

Until recently it was true that modern HDDs far outpaced SSDs in the amount of storage that could be packed into a given drive form factor. But that’s changing fast. Currently, the top of the storage density range for HDDs is represented by helium-filled drives with a capacity of 10TB. The prevailing industry expectation is that, due to constraints imposed by the laws of physics, spinning platter drives will max out their capacity potential at no more than about 40TB.

In contrast, 16TB SSDs are now on the market, and several manufacturers have announced 100TB behemoths they expect to make commercially available within the next several years. Clearly, the capacity advantage of HDDs over SSDs is already a thing of the past.


3. SSDs Have Shorter Life Spans Than HDDs

It’s true that SSDs wear out with use. (But, of course, so do HDDs.) Each time a NAND flash memory cell is written or erased, it degrades by a tiny amount. Over time, the effects of these write/erase (or program/erase) cycles accumulate to a level where the device can no longer retain new data. To make matters worse, there is no way to consistently predict when an SSD will reach that point.

SSD providers use several methods to combat the write cycle limitations of their products. For example, wear-leveling and over-provisioning are standard practices used to reduce the number of times any particular memory cell is written to. Also, device and system level error correction schemes are employed to ensure that unrecoverable data losses are minimized.

These mitigation strategies have proven so effective that SSDs now equal or outpace HDDs in reliability and longevity. Both now have expected life spans of about six years. Plus, according to a report by the Storage Networking Industry Association (SNIA), SSDs boast an MTBF (Mean Time Between Failures) rate that is more than twice as good as that of HDDs.


4. SSD Performance Gets Worse Over Time

Flash memory devices are at their fastest fresh out of the box (FOB), because SSDs cannot simply overwrite previously stored data. Before a location can be written to, any previously existing data in that location must be erased. Since the erasure process takes time, an SSD will exhibit its highest level of performance when there are plenty of never-used blocks available that don’t have to be erased before they can be written to. As the drive fills up, the number of such blocks diminishes, negatively impacting performance.

In the past, some users saw the speed of their drives deteriorate over time because manufacturers rated the product based on FOB specs rather than on “steady state” performance.

Today’s SSD products attack the issue on a couple of fronts. On a technical level, a process called “garbage collection” is employed to pre-erase previously used blocks in the background so they are already available when a write request is received. Also, drives are now routinely overprovisioned beyond their stated capacity to provide more empty space so that fewer erase cycles are required. And finally, published specs now normally reflect the performance that can be expected under actual production conditions, rather than FOB.
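To see why pre-erased blocks matter, consider the deliberately simplified model below: writes consume pages from a pre-erased pool, overwrites leave stale pages behind, and background garbage collection replenishes the pool. This is a toy illustration, not any vendor’s actual flash translation layer.

```python
class ToyFlash:
    """Grossly simplified flash model: pages must be erased before
    they can be rewritten, and background garbage collection keeps
    a pool of write-ready pages available."""

    def __init__(self, total_pages: int, overprovision: float = 0.2):
        # Overprovisioning: advertise less capacity than physically
        # exists, so spare room for fast writes always remains.
        self.advertised = int(total_pages * (1 - overprovision))
        self.erased = set(range(total_pages))  # write-ready page pool
        self.live = {}      # logical address -> physical page
        self.stale = set()  # invalidated pages awaiting erasure

    def write(self, addr: int):
        if addr in self.live:               # overwrite invalidates the old page
            self.stale.add(self.live[addr])
        # The fast path works only while erased pages remain; once the
        # pool runs dry, writes wait on slow erases (FOB vs. steady state).
        self.live[addr] = self.erased.pop()

    def garbage_collect(self):
        # Background erasure turns stale pages back into write-ready ones.
        self.erased |= self.stale
        self.stale.clear()
```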


5. SSDs Are Only Useful For High Performance Workloads

Adam Roberts, Chief Solutions Architect for SanDisk, writes of the many occasions on which he has heard customers say something like, “I haven’t considered flash for this solution because the HDDs meet my performance needs.” The unspoken assumption behind such statements is that SSDs are important only because of their blazing speed. Well, that speed advantage certainly is important, especially for demanding workloads. But that’s not the only benefit SSDs offer.

Because of the advantages SSDs have over HDDs in storage density and power consumption, their use can reduce the number of drives, servers, and enclosures required. The result is that when costs such as power, cooling, data center floor and rack space, and maintenance are taken into account, the TCO for SSD storage solutions may already be less than that for equivalent HDD implementations. So, even when a company’s applications don’t require the highest levels of storage I/O performance, incorporating SSDs into its storage solution can lower overall costs significantly.

If you’ve been holding back from considering SSD storage because of concerns such as the ones we’ve discussed in this article, maybe it’s time to take another look. We here at Zadara Storage would be happy to help. Please request a customized TCO analysis.

February 1, 2017

Posted In: Tech Corner


HDD Versus SSD: A Head-to-Head Comparison

HDD versus SSD: What’s better for you? Solid state drives (SSDs) are displacing hard disk drives (HDDs) in corporate data centers at an accelerating rate. But HDDs are not dead yet. Each technology still has use cases for which it is the most cost-effective choice, and most experts expect HDDs to maintain a foothold in the data center for years to come. So, savvy CIOs and IT managers will need to assess the advantages and disadvantages of each of these competing data storage solutions for their own workloads.

In this article we’re going to examine how enterprise SSDs and the highest performing HDDs compare on a number of important features.


Take a Look Side-by-Side: Download HDD vs. SSD Infographic


HDD versus SSD: Performance

More than any other factor, it’s the speed advantage of SSDs over HDDs that has pushed them to the forefront in the battle for data storage preeminence. A recent benchmark study compared “the fastest consumer-grade” HDD with “the fastest mainstream SSD on the market.” In the random 4k write test the HDD achieved a rate of just under 208 IOPS, while the SSD came in at almost 30,000 IOPS.

Of course an IOPS rating does not definitively characterize the performance of a drive (depending on specific workloads, other measures like latency or throughput may be more important), but it does indicate the scale of the speed advantage enjoyed by SSDs. In general, SSDs can achieve a performance level that is up to three orders of magnitude faster than HDDs.


HDD versus SSD: Capacity

Enterprise HDDs of 10TB capacity are now on the market. These helium-filled devices represent the leading edge of HDD technology. And manufacturers remain committed to extending the capabilities of spinning disks to their maximum potential. Current expectations are that HDD capacity will reach the 20-40TB range by 2020.

But SSD capacities are already beginning to surpass those of hard disks. Samsung is now shipping a 16TB SSD, while Seagate has announced one that packs a massive 60TB into a single device. As these new high capacity SSD products demonstrate, flash memory technology is reaching density levels that spinning platter drives can never achieve.


HDD versus SSD: Cost Per TB

On a cost per TB basis, SSDs have already reached rough parity with the 15K RPM drives that currently represent the apex of enterprise HDD performance. For example, Zadara™ Storage, in cooperation with Intel, is offering SSD-based cloud storage at prices equivalent to those for existing HDD products.

At this point storage arrays that incorporate commodity HDDs still have a price advantage over SSD arrays for workloads that are not performance-intensive. But SSD prices continue to fall at a rapid rate.


HDD versus SSD: Durability and Reliability

Both SSDs and HDDs have characteristic failure modes that eventually cause them to fail. Sooner or later the moving parts of HDDs will simply wear out. SSDs have no moving parts, but each write to a storage cell degrades that cell by a small amount. Eventually the cell reaches a point where it cannot be written to any more.

Still, complete failures of enterprise SSD arrays rarely occur. That’s because techniques such as overprovisioning (providing more memory than the rated amount) and wear-leveling (which spreads writes over a larger number of cells) are widely employed to limit the number of writes experienced by any particular cell.

Overall, experience has shown that high-end HDDs (4TB+) have a failure rate of about 3.5 percent during the expected life of the drive, compared to an SSD failure rate of about 0.3 percent.

At this point SSDs and HDDs seem to have roughly equivalent longevity. Testing by Backblaze indicates that the median lifespan of a hard drive is over six years, while SSD life expectancy is in the range of five to seven years.


Carbon Footprint

Because its moving parts use power and generate heat, each HDD directly consumes about five times as much electricity as does an equivalent SSD. In addition, since SSDs dissipate less heat than do hard drives, and since their increased storage density means that fewer of them are needed to store any given amount of data, data center space and cooling requirements are significantly reduced. That translates directly into a reduction in an organization’s environmental impact.


TCO

For most corporate workloads, the acquisition cost of flash storage is still significantly greater than for HDD storage. However, when operating costs are factored in, the TCO for SSDs may actually already be lower than for equivalent HDD arrays. Use of SSDs reduces data center costs for power, cooling, floor space, rack space, and maintenance. And as SSD purchase prices continue to fall, the TCO disparity can only grow greater over time.
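As a back-of-the-envelope illustration (every figure below is invented for the example), folding power and cooling into the purchase price shows how operating costs narrow the up-front gap:

```python
# All inputs are invented, illustrative figures; plug in your own.
def five_year_tco_per_tb(price_per_tb: float, watts_per_tb: float,
                         kwh_price: float = 0.12,
                         cooling_factor: float = 1.5) -> float:
    hours = 5 * 365 * 24
    energy_cost = watts_per_tb / 1000 * hours * kwh_price * cooling_factor
    return price_per_tb + energy_cost

print("HDD:", five_year_tco_per_tb(price_per_tb=30.0,  watts_per_tb=8.0))
print("SSD:", five_year_tco_per_tb(price_per_tb=100.0, watts_per_tb=2.0))
# Power and cooling shrink the gap; add floor space, rack space, and
# maintenance (not modeled here) and the SSD can come out ahead.
```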


SSD Is The Future

It’s clear that when HDDs and SSDs are compared feature for feature, the advantage lies heavily with SSDs. The only thing HDDs still have going for them is their continually diminishing price advantage. When it comes to bulk storage of data for which the highest levels of input/output performance are not required, HDDs, either alone or, more likely, as part of an HDD/SSD hybrid solution, may still be the most cost-effective option. Clearly, however, the future of storage in the corporate data center lies with solid state memory, and not with spinning platters.


Ready to make the switch from HDD to SSD storage? We’re happy to help — Schedule a Demo Today!


January 25, 2017

Posted In: Tech Corner


SSD vs HDD Pricing: SSD Gains Ground on HDD Storage as Prices Plummet

For the last half century hard disk drives (HDDs) have been the premier storage technology in enterprise data centers. But with the maturing of flash memory technology, that long reign appears to be nearing its end. When comparing SSD vs HDD pricing, there are important differences to consider, just as there are when comparing software-defined storage with traditional SAN and NAS.

Although the performance of HDD storage arrays continues to advance, it’s clear that the underlying technology, based on spinning platters and moving read-write heads, has begun to reach unsurpassable limits imposed by the laws of physics. Flash-based solid state drives (SSDs), on the other hand, are still in the early phases of their development. Their biggest claim to the data center storage throne has been a huge speed advantage over HDDs. But that advantage came with a price that made large-scale adoption of the technology cost-prohibitive for most companies.

That, however, is no longer the case. Prices for SSDs have been dropping steeply, and are now reaching a point of rough parity with those of the highest performance HDDs. Plus, SSDs are now beginning to outpace their venerable competitors in other important areas as well. Let’s take a look at the major factors that are spearheading the adoption of SSDs as HDD replacements in corporate data centers.


Take a Look Side-by-Side: Download HDD vs. SSD Infographic


SSDs Are Fast and Getting Faster!

SSDs already boast IOPS (input-output operations per second) rates of as much as 1000 times those of HDDs. And the disparity is quickly growing even greater. Intel claims that its upcoming Optane SSDs will be up to ten times faster than previous flash memory products.

An even more important metric for workloads that demand the greatest random access performance is latency (the time between when data is requested and when it can be read). While average HDD latency is measured in milliseconds, leading-edge SSD latencies are now measured in tens of microseconds.

For performance-intensive use cases, such as big data analytics or database applications, there is no longer any real choice. Only SSD arrays can provide the data access speeds or transfer rates necessary for the most demanding modern workloads.


SSDs Are Dense And Getting Denser

Until recently SSDs could not compete with HDDs in terms of storage density (the amount of memory that can be packed into a particular drive form factor). But that, too, has changed. The current high end of the enterprise HDD marketplace features 3.5-inch drives with a capacity of 10TB. But Samsung is now shipping a 2.5-inch, 15TB SSD. Even more impressive are the 60TB SSD recently announced by Seagate, and the 100TB drive promised for 2017 by Toshiba.

The ability of SSDs to pack more storage into the same space is important. It translates into fewer servers, less power, less cooling, and less space required in the data center. The result of such savings is that replacing HDD storage arrays with SSDs can actually result in a lower TCO.


SSDs Are Durable and Reliable, and Getting More So

The fact that SSDs have no moving parts gives them a leg up on HDDs in durability from the beginning. Although all drives of whatever technology will eventually fail, SSDs have proven to have longer lives than HDDs.

The biggest issue SSDs have had regarding durability is the fact that each flash memory cell can only be written to a limited number of times. But enterprise SSD implementations use sophisticated techniques such as wear leveling (which attempts to ensure that writes are evenly distributed across memory blocks), to maximize the lifespan of the drive.
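Conceptually, wear leveling can be as simple as “always write to the least-worn free block.” The sketch below shows that idea in miniature; production flash translation layers are vastly more sophisticated:

```python
import heapq

class WearLeveler:
    """Toy wear leveling: route each write to the least-worn block."""

    def __init__(self, num_blocks: int):
        # Min-heap of (erase_count, block_id); least-worn block on top.
        self.pool = [(0, b) for b in range(num_blocks)]
        heapq.heapify(self.pool)

    def write(self, data: bytes) -> int:
        wear, block = heapq.heappop(self.pool)  # least-worn block wins
        # ... program `data` into `block` here ...
        heapq.heappush(self.pool, (wear + 1, block))
        return block
```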

SSDs also come out on top in terms of reliability. Their MTBF (Mean Time Between Failures) rate is significantly better than that of their HDD counterparts.


SSD vs HDD Pricing: SSDs Are Not Cheap But They Are Getting Cheaper

SSDs continue to drop in price relative to HDDs. Some providers are now offering SSD-based storage at a per-gigabyte price that is equivalent to the cost of 15,000 RPM hard drives (that is, top-of-the-line HDDs). For example, Zadara™ Storage is partnering with Intel to provide a storage solution based on Intel’s 3D NAND flash memory technology. With these new products, the Zadara Storage Cloud software architecture delivers the highest flash performance at prices that are on par with those of existing HDD products.

You can learn more about making the switch to SSD by attending the webinar ‘Storage Revolution: CapEx to OpEx, SSD to HDD, What’s Next‘.

Click here to sign up.


SSDs Haven’t Fully Displaced HDDs Yet, But They Will

The battle between SSDs and HDDs for control of the enterprise data center isn’t yet over, but the winner is clear. As the prices of SSD products more and more rival those of equivalent HDDs, there’s simply no reason for the continued use of spinning disk technology as the main enterprise storage solution.

SSD price points can be expected to fluctuate in line with market forces. In the fourth quarter of 2016, for example, prices actually rose somewhat due to a shortage brought about by a holiday-induced surge in sales of flash memory-laden laptops. But despite such temporary ups and downs, the SSD price trend line will continue to move in a downward direction.


Ready to make the switch from HDD to SSD storage? We’re happy to help — Schedule a Demo Today!


January 18, 2017

Posted In: Tech Corner


7 Reasons to Consider Switching From HDD to SSD Storage

“In 2016, we will stop putting hard disk drives in our servers.” That’s the opinion of storage consultant Jim O’Reilly, formerly VP of engineering at Germane Systems. Like many other data storage experts, O’Reilly believes that flash memory technology has advanced to the point where Solid State Drives (SSDs) are on the cusp of supplanting hard disk drives (HDDs) in corporate data centers, with more and more organizations switching from HDD to SSD storage.

Is the SSD revolution really at hand? Is it time for your company to consider switching from HDD to SSD storage? Here are seven factors that indicate it may well be time to start planning your move toward an SSD storage solution.

Download the SSD vs HDD infographic to learn more. 

1. The SSD Performance Advantage

The number one advantage of SSDs is their speed. Compared to hard disks, SSDs are blindingly fast. HDDs are mechanical devices that depend on rotating platters along with actuator arms that must be brought into position in order to read or write data. All that movement takes time and imposes unavoidable limitations on the quickness with which an HDD array can respond to data requests.

SSDs, on the other hand, have no moving parts. They are semiconductor devices that can randomly access storage locations in a single step. The result is an orders-of-magnitude improvement over HDDs in IOPS – the number of input/output operations that can be done in a second. According to Webfeet Research president Alan Niebel, a high end 15,000 rpm HDD can produce about 180 IOPS. Enterprise SSDs, on the other hand, can reach around 200,000 IOPS. This huge difference translates directly into application performance.

2. Lower SSD Purchase Prices

The factor that has inhibited wider adoption of SSDs has always been cost. The fact that SSDs had purchase prices far greater per gigabyte of storage than HDDs made them cost-prohibitive for extensive use in enterprise datacenters. But that’s changing fast. New advances in flash memory technology have been pushing SSD costs down quickly. SSDs still cost more than an equivalent amount of HDD storage, but the gap is narrowing. In fact, according to Hubbert Smith, Director, Product Planning at Samsung, there is now rough parity between the prices of enterprise SSDs and the highest performing HDDs.

Zadara Storage has taken a leadership position and priced SSD at HDD price parity as of November 2016.


3. Higher SSD Capacities

Not long ago SSDs were no match for hard disks in terms of capacity. But that’s no longer the case. In fact, SSDs have now begun to push past the capacity limits the laws of physics impose on HDDs. Toshiba expects that by 2020 HDD capacity may reach the 20-40TB range. But they also expect that by then SSDs will have achieved capacities exceeding 256TB. Already, Seagate has shipped a 60TB SSD, and Toshiba has announced a 100TB model to be available in 2017.

Those larger SSD capacities translate into greater storage density, which in turn translates into lowered datacenter space and power requirements.


4. Less Storage Required

Use of SSDs can actually reduce the amount of storage you require. That’s because the much greater speed of SSD arrays allows real-time deduplication and compression to be done at the primary storage level. These functions involve a large proportion of random rather than sequential accesses to the data, which with HDDs imposes unacceptable delays due to spinning disk latency and seek times. But with SSDs, which are inherently random access devices, implementation of deduplication and compression right in the storage array can result in a reduction in storage requirements of 50 to 80 percent for the same amount of data.
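Deduplication is essentially content-addressed storage: hash each block, keep one copy of each unique block, and store per-file lists of block references. Every hash lookup is a random access, which is exactly why flash makes it practical at primary-storage speed. A minimal sketch of the idea:

```python
import hashlib

class DedupStore:
    """Toy inline deduplication: identical blocks are stored only once."""

    def __init__(self, block_size: int = 4096):
        self.block_size = block_size
        self.blocks = {}  # sha256 digest -> block contents (stored once)
        self.files = {}   # filename -> ordered list of digests

    def write(self, name: str, data: bytes):
        digests = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # random-access lookup
            digests.append(digest)
        self.files[name] = digests

    def read(self, name: str) -> bytes:
        return b"".join(self.blocks[d] for d in self.files[name])
```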


5. Increased SSD Durability and Reliability

All disks eventually fail, but SSDs do it at a much lower rate than hard disks. A study of SSD use in Google datacenters over a six year period found that “flash drives have a significantly lower replacement rate in the field” than hard disks. SSDs do exhibit a higher Uncorrectable Bit Error Rate (UBER) than HDDs, necessitating greater attention to backups and data recovery processes. But overall, SSDs have a significantly higher MTBF (Mean Time Between Failures) than do HDDs.


6. Ability To Handle Performance-Intensive Workloads

In many companies the major driving force behind SSD adoption is the need to accommodate increasingly demanding workloads. The greatly reduced latency and high IOPS performance provided by SSDs make the technology a natural for applications such as big data analytics, OLTP, databases, VDI (Virtual Desktop Infrastructure), and video image processing.


7. Lower Total Cost of Ownership

All the SSD features we’ve discussed above lead to what may seem to be a surprising result. Although initial acquisition costs for SSDs remain higher than those for hard drives, TCO may already be lower. In fact, Zadara has partnered with Intel to provide a flash storage solution based on Intel’s 3D NAND. Doing so has allowed Zadara to offer high-performing SSD storage at an HDD price point.

Factors such as the greater density and higher reliability of SSD storage arrays translate into fewer drives, and therefore fewer servers, required for the same amount of storage. Datacenter space, power, and cooling demands are far lower. Even software licensing costs may be reduced, since they are usually keyed to the number of servers employed.

You can learn more about making the switch to SSD by attending the webinar ‘Storage Revolution: CapEx to OpEx, SSD to HDD, What’s Next‘.

Click here to sign up.

There’s No Need To Rush In!

The fact is that a shift toward SSD-based storage need not be an all-at-once proposition. Most companies that have moved in that direction have done so gradually. They start by using SSDs for their most business-critical and highly demanding applications. Only the most performance-intensive “hot” data is committed to SSD, using either a tiered or cache-based approach, while more static or “cold” data remains in HDD arrays.

This gradual move toward SSD storage is not only more cost-effective for most organizations, it also allows them to learn by experience how to best employ flash storage for their particular set of workloads.

If you are interested in exploring how switching from HDD to SSD storage could benefit your organization, we here at Zadara Storage would be happy to help. Why not start by downloading our latest analyst paper: Zadara Storage Voted by IT Pros as On-Premise Enterprise Storage-as-a-Service Market Leader?

January 11, 2017

Posted In: Tech Corner


When, Why & How to Make the Switch to SSD Storage

Is it time to make the switch to SSD storage? Hard disk drives (HDDs) are still king of the datacenter, but that is changing fast. Flash memory-based solid state drives (SSDs) are gaining ground quickly. Research firm Wikibon estimates that SSD shipments could reach the same level as those of HDDs in 2018. And Andy Walls, CTO and chief architect at IBM, has said that “flash will dominate disk storage by 2019.”

So, if you are an IT manager or CIO, you may be thinking about whether or not it’s time to begin including SSD storage as part of your company’s enterprise storage solution. If that’s the case, here are some things to consider as you make that decision.

To learn more, download our infographic that compares SSD versus HDD. 

Why SSDs are gaining ground in corporate data centers

The case for SSDs starts with their huge performance advantage over HDDs. Hard drives are mechanical devices that depend on rotating platters and read/write heads that must be moved into position to access data. All that movement takes time, which places limitations on how quickly an HDD can respond to data I/O requests.

SSDs, on the other hand, are semiconductor devices with no moving parts. From an IOPS (Input/output Operations Per Second) performance standpoint, there’s really no comparison. Here at Zadara Storage, we’ve found that in some AWS use cases, SSD offerings can deliver 20 to 40 times the IOPS performance of HDD offerings.

The factor that has limited the adoption of SSD technology is cost. All that speed comes at a steep price premium over equivalent HDDs. But the good news is that SSD costs are falling rapidly as the technology matures. According to Alex McDonald, vice chair of Storage Network Industry Association (SNIA) Europe, the market price crossover point between enterprise SSDs and HDDs should be reached in 2017.

Zadara Storage has taken a leadership position and priced SSD at HDD price parity as of November 2016. Zadara made this possible by aligning with Intel to offer Intel-based flash.

SSDs are demonstrating significant advantages over HDDs in other ways as well. For example, SSD durability and reliability have increased to levels that compare favorably to those of HDDs. And while HDD maximum storage capacity, now at about 10TB, seems to be closing in on limitations imposed by the laws of physics, SSD capacity is growing rapidly. SSDs of 16TB are now on the market. Seagate recently introduced a 60TB SSD, and Toshiba has announced one at 100TB that will be available in 2017.

When the full range of SSD advantages, such as lower power (and therefore cooling) requirements, greater storage density (which allows use of fewer servers), and smaller space requirements are taken into account, the TCO of SSDs can, in some use cases, actually be lower than that of hard drives.

Still, doing a forklift upgrade to SSD, in which you replace all your HDD storage with AFA (All Flash Array) units may not be the most cost effective course. Because the cost of AFA storage is still significantly greater than HDD storage of equivalent capacity, and since many application workloads don’t require the speed SSDs can provide, most storage experts advise a gradual changeover.


How to begin to make the switch to SSD storage

In general, most companies that have begun switching to SSDs have done so by stages. They start by using SSDs only for workloads that specifically require high IOPS performance. This may be done through the use of a software defined tiered system, in which high demand (Tier 0 or Tier 1) workloads are serviced by AFA storage, while less speed intensive workloads (Tier 2 and Tier 3) are assigned to HDD arrays. Or it may be done through the use of hybrid HDD/SSD storage arrays, in which the SSD portion of the array is used as a cache for frequently accessed or high performance data, while the bulk of the data remains in HDD storage.

Determining which workloads could benefit from the use of flash, and which are more suited to HDD storage, will require that you monitor your applications in order to understand the performance requirements of each. You can then set up software policies to automatically manage the assignment of each workload to the appropriate type of storage.
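Such a policy can be as simple as thresholds on observed IOPS and latency sensitivity. The sketch below is illustrative only; the tier names and threshold values are invented, not vendor guidance:

```python
def assign_tier(workload: dict) -> str:
    """Map a monitored workload to a storage tier (illustrative thresholds)."""
    if workload["latency_sensitive"] or workload["avg_iops"] > 10_000:
        return "tier0-all-flash"
    if workload["avg_iops"] > 1_000:
        return "tier1-hybrid"  # SSD cache in front of HDDs
    return "tier2-hdd"

workloads = [
    {"name": "oltp-db",    "avg_iops": 45_000, "latency_sensitive": True},
    {"name": "file-share", "avg_iops": 300,    "latency_sensitive": False},
]
for w in workloads:
    print(w["name"], "->", assign_tier(w))
```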

You can learn more about making the switch to SSD by attending the webinar ‘Storage Revolution: CapEx to OpEx, SSD to HDD, What’s Next‘.

Click here to sign up.

When should you consider switching to SSD storage?

According to Tom Coughlin, founder of the storage consulting firm Tom Coughlin Associates, when you have workloads where there is a clear financial gain from increased performance, it’s time to consider moving to SSD storage. Examples might include big data analytics, and use cases in which customers expect a real-time response, such as database, OLTP, and web applications. Also, as SSD prices continue to fall, it makes sense to plan to replace HDD arrays that reach their end of life with SDD devices.

If you’re thinking about whether it might be time to consider SSD technology for your company’s storage needs, why not download our ‘Software Defined Storage’ whitepaper.

January 4, 2017

Posted In: Tech Corner


Solid State Disks (SSDs): Definition and Use Cases

If there’s one technology that has taken the storage world by storm over recent years, it’s flash. Solid State Disks (SSDs) have transformed the storage landscape, offering much higher I/O density (IOPS per TB of storage) than can be achieved with traditional hard drives. HDDs are mechanical media, based on spinning platters accessed by multiple read/write heads. The physical geometry of these devices means that they are more attuned to sequential than random workloads. It’s easy, for example, to write data sequentially onto a disk track as the disk rotates past the head. What’s much harder for HDDs is to manage random I/O profiles that read data from physically disjoint parts of the drive, either on separate tracks or platters. Totally random read requests can slow a hard drive down to 120-200 IOPS, depending on the drive speed.


Cloud Storage Options: SSDs (Solid State Disks) vs HDD (Hard Disk Drives)


As we move to the cloud, the ability to see the underlying storage hardware is abstracted from us. Ideally, we should be able to simply dial in the IOPS we need for each volume and go from there. Unfortunately, things aren’t yet that simple. Take Amazon Web Services’ EBS (Elastic Block Storage) offerings for example. EBS is used to provide primary OLTP-type access to instances, including the boot drive of your VM. Current EBS offerings are based either on HDDs or SSDs, focusing on either throughput (MB/s) or IOPS respectively. HDD offerings are either 250 or 500 IOPS-based, whereas the SSD offerings deliver 20 or 40 times that capability, with nothing in between (except for some burst capability).

If your application is currently running from an HDD EBS option, how would you move it over to Solid State Disk? The answer is, not that easily. Currently the process involves taking a snapshot of the current EBS volume and using that snapshot image to build a new Solid State Disk-based EBS volume. Putting that new volume into place requires an application outage (shut down the instance, detach/attach the two volumes, power up).
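Scripted with the AWS boto3 SDK, that outage-heavy sequence looks roughly like the sketch below. The volume and instance IDs, availability zone, and device name are placeholders; treat this as an outline to rehearse on a throwaway instance, not a production runbook.

```python
import boto3

ec2 = boto3.client("ec2")
OLD_VOL = "vol-0123456789abcdef0"   # placeholder IDs throughout
INSTANCE = "i-0123456789abcdef0"

# 1. Snapshot the existing HDD-backed EBS volume.
snap = ec2.create_snapshot(VolumeId=OLD_VOL, Description="HDD to SSD migration")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# 2. Build an SSD-backed (gp2) volume from the snapshot.
new_vol = ec2.create_volume(SnapshotId=snap["SnapshotId"],
                            AvailabilityZone="us-east-1a", VolumeType="gp2")
ec2.get_waiter("volume_available").wait(VolumeIds=[new_vol["VolumeId"]])

# 3. The outage window: stop the instance, swap volumes, restart.
ec2.stop_instances(InstanceIds=[INSTANCE])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE])
ec2.detach_volume(VolumeId=OLD_VOL)
ec2.attach_volume(VolumeId=new_vol["VolumeId"], InstanceId=INSTANCE,
                  Device="/dev/sda1")
ec2.start_instances(InstanceIds=[INSTANCE])
```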

Of course the answer could be to simply deploy everything to flash, but unfortunately that’s not always an option. Solid State Disk EBS volumes are some 2-3 times more expensive than their HDD counterparts, and remember that SSD EBS volumes are optimised for IOPS, not for throughput. In fact, the HDD options out-perform SSD EBS volumes by that measure. In many instances only part of a volume may be active or “hot”, so moving it entirely to flash is an expensive workaround.


Targeting Resources

An alternative strategy is to use flash in a targeted manner, assigning it to just those application I/Os that need it. This is an approach that has been used in on-premises solutions for many years. Storage arrays have long used DRAM as a cache; more recently, as flash has become more mainstream, they have used flash to either cache or tier a mix of flash and traditional HDDs in a cost-effective manner. There’s an old adage in storage: capacity is free, but performance costs. This applies to a flash/HDD mix, where cheaper hard drives provide the capacity and flash delivers the performance.

In many cases, the amount of flash needed to improve performance can be as little as 10% of the storage volume. This is because only a small part of the data on a volume is active at any one time. Caching provides the ability to target flash more effectively, and, depending on the workload, the amount of flash needed can be varied to provide the right level of application acceleration.

Zadara recently announced software update 16.05, which increased the amount of cache that can be deployed with VPSA, its Virtual Private Storage Array. A VPSA can now support up to 3.2TB of cache in 200GB increments (depending on the engine size). This means customers can add cache in granular amounts to improve performance as needed by the application, without overcommitting resources. More importantly, these changes can be implemented without affecting the application. There’s no need to take snapshots and suffer an application outage to re-assign the instance to a new volume. This is possible because, in this case, flash is being used as a cache rather than a storage tier.

Of course, knowing how much cache to use requires monitoring and understanding the performance of the application and storage. Adding flash cache will have a direct impact on latency and throughput, so any additional cache should be added in a structured way, with performance measurements taken before and after the change is implemented. The impacts of the change can then be measured and quantified, setting a baseline to ensure that the flash has been applied cost effectively. With flash still a relatively expensive resource, there’s no need to waste it on applications that don’t need it. The good news is that the ability to make dynamic changes on the fly gives you the flexibility to try different configurations and determine which provides the optimal mix of cache. Essentially, you can’t make a mistake, because you simply test and adjust until you are happy with the results.
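That before-and-after measurement needn’t be elaborate. The toy probe below samples random-read latency so the same test can be rerun after each cache change; for production work a purpose-built tool such as fio is the better choice (and note that the operating system’s own page cache can mask device latency).

```python
import os, random, statistics, time

def median_read_latency_us(path: str, samples: int = 1000,
                           block: int = 4096) -> float:
    """Median random-read latency, in microseconds (Unix only)."""
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    latencies = []
    try:
        for _ in range(samples):
            offset = random.randrange(0, max(size - block, 1))
            start = time.perf_counter()
            os.pread(fd, block, offset)
            latencies.append(time.perf_counter() - start)
    finally:
        os.close(fd)
    return statistics.median(latencies) * 1e6

# Run once before and once after adding cache, then compare the medians.
print(f"median latency: {median_read_latency_us('/path/to/testfile'):.1f} us")
```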

June 15, 2016

Posted In: Tech Corner
