Zadara Blog

News, information, opinion and commentary on issues affecting enterprise data storage and management.

Best practices for migrating data to the cloud

Originally published on InfoWorld

Moving petabytes of production data is a trick best done with mirrors. Follow these steps to minimize risk and cost and maximize flexibility

Enterprises that are embracing a cloud deployment need cost-effective and practical ways to migrate their corporate data into the cloud. This is sometimes referred to as “hydrating the cloud.” Given the challenge of moving massive enterprise data sets anywhere non-disruptively and accurately, the task can be a lengthy, complicated, and risky process.

Not every organization has enough dedicated bandwidth to transfer multiple petabytes without degrading the performance of the core business, or enough spare hardware to stage a migration to the cloud. Organizations in physically isolated locations, or without cost-effective high-speed Internet connections, face an added impediment to getting onto a target cloud. Data must be secured, backed up, and, in the case of production environments, migrated without missing a beat.


AWS made hydration cool, so to speak, with branded offerings such as Snowball, a petabyte-scale data transfer service using one or more AWS-supplied appliances, and, as of fall 2016, Snowmobile, an exabyte-scale transport service using an 18-wheeler truck that carries data point to point. These vehicles make it easy to buy and deploy migration services for data headed into the AWS cloud. Migrating 100TB of data over a dedicated 100Mbps connection would take about 120 days; the same transfer using multiple Snowballs takes about a week.
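
As a rough sanity check on those timelines (a back-of-the-envelope sketch only; the 80 percent sustained-utilization figure is an assumption, and appliance timelines also include shipping and handling), the arithmetic works out as follows:

    def days_to_transfer(size_tb: float, rate_bps: float, utilization: float = 0.8) -> float:
        """Rough transfer time in days for size_tb terabytes over a given link."""
        bits = size_tb * 1e12 * 8                       # decimal terabytes -> bits
        return bits / (rate_bps * utilization) / 86400  # seconds -> days

    # 100 TB over a dedicated 100 Mbps link at ~80% sustained utilization:
    print(round(days_to_transfer(100, 100e6)))          # ~116 days

    # The same 100 TB copied onto appliances at a disk-bound 150 MB/s
    # (the middle of the per-appliance rates cited below), before shipping:
    print(round(days_to_transfer(100, 150e6 * 8, utilization=1.0), 1))  # ~7.7 days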

Yet for the remaining 55 percent of the public cloud market that is not using AWS – or those enterprises with private, hybrid, or multi-cloud deployments that want more flexibility – other cloud migration options may be more appealing than AWS’s native offerings. This is especially true when moving production data: copying a static snapshot onto appliances leaves the IT team with a copy that is already out of date by the time the transfer completes, so they need a way to resynchronize the data.

The following is a guide to cloud hydration best practices, which differ depending on whether your data is static (and thus can be taken offline) or in production. I will also offer helpful tips for integrating with new datacenter resources and accommodating hybrid or multicloud architectures.

Static data

Unless data volumes are under 1TB, you’ll want to leverage physical media such as an appliance to expedite the hydration process for file, block, or object storage. This works elegantly in environments where the data does not need to be continuously online, or where the transfer would otherwise require a slow, unreliable, or expensive Internet connection.

1. Copy the static data to a local hydration appliance. Use a small, portable, easily shipped NAS appliance, configured with RAID for durability while shipping between sites. The appliance should include encryption – either 128-bit AES or, preferably, 256-bit AES – to protect against unauthorized access after the NAS leaves the client facility.
Using a very fast 10G connection, teams can upload 100MB to 200MB of data per second onto a NAS appliance. The appliance should support the target environment (Windows, Linux, etc.) and access protocol (NFS, CIFS, Fibre Channel, etc.). One appliance is usually sufficient to transfer up to 30TB of data. For larger data volumes, teams can use multiple appliances or repeat the process several times to move data in logical chunks or segments.

2. Ship the appliance to the cloud environment. The shipping destination could be a co-location facility near the target cloud or the cloud datacenter itself.

3. Copy the data to a storage target in the cloud. The storage target should be connected to the AWS, Azure, Google, or other target cloud infrastructure using VPN access via high-speed fiber.

For example, law firms routinely need to source all emails from a client site for e-discovery purposes during litigation. Typically, the email capture spans a static, defined date range from months or years prior. The law firm will have its cloud hydration vendor ship an appliance to the litigant’s site, direct them to copy all emails as needed, then ship the appliance to the cloud hydration vendor for processing.

While some providers require the purchase of the appliance, others allow for one-time use of the appliance during migration, after which it is returned and the IT team is charged on a per-terabyte basis. No capital expenditure or long-term commitment is required.

Production data

Migrating production data requires a way to move the data and then resynchronize it once it arrives in the cloud. Mirroring is an elegant answer.

Cloud hydration using mirroring requires two local on-premises appliances that have the capability to keep track of incremental changes to the production environment while data is being moved to the new cloud target.

1. Production data is mirrored to the first appliance, creating an online copy of the data set. Then a second mirror is created from the first mirror, creating a second online copy.

2. The second mirror is “broken” and the appliance is shipped to the cloud environment.

3. The mirror is then reconnected between the on-premises copy and the remote copy and data synchronization is re-established.

4. An online copy of the data is now in the cloud, and servers can fail over to it. (The change tracking behind this resync is sketched below.)
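
The bookkeeping behind steps 3 and 4 can be sketched as follows (a minimal illustration, not Zadara’s implementation; the 64 MiB region size and all names are assumptions): while the second mirror is in transit, the appliance that stayed behind records which regions of the volume were written, so re-establishing the mirror ships only the changed regions rather than the full data set.

    REGION = 64 * 1024 * 1024  # assumed change-tracking granularity: 64 MiB

    class BrokenMirror:
        """Tracks writes to the primary while the second mirror is in transit."""

        def __init__(self):
            self.dirty = set()  # indices of regions modified since the split

        def record_write(self, offset: int, length: int) -> None:
            first = offset // REGION
            last = (offset + length - 1) // REGION
            self.dirty.update(range(first, last + 1))

        def resync(self, read_region, send_region) -> int:
            """Ship only the changed regions to the remote copy."""
            for index in sorted(self.dirty):
                send_region(index, read_region(index))
            shipped = len(self.dirty)
            self.dirty.clear()
            return shipped

Once the backlog of dirty regions drains, the remote copy is a current mirror again, and servers can fail over to it.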

For example, a federal agency had 2PB of on-premises data that it wanted to deploy in a private cloud. The agency’s IT team set up two on-premises storage resources adjacent to each other in one datacenter, moved production data onto one mirror, then set up a second mirror so that everything was copied. Then the team broke the mirror and shipped the entire rack to a second datacenter several thousand miles away, where its cloud hydration vendor (Zadara Storage) re-established the mirrors.

When reconnected, the data was synchronized to form a full, up-to-date mirror copy. Once the process was complete, the hardware that was used during the data migration was sent to a remote location to serve as a second disaster recovery copy.

In another example, a global management consulting firm used 10G links to move smaller sets of data from its datacenter to the target storage cloud, and hydration appliances to move petabytes of critical data. Once the data uploaded over the 10G links was copied to the storage resource, the cloud hydration provider linked it to AWS via AWS Direct Connect. In this way the resources were isolated from the public cloud, yet made readily available to it. Other static data was copied onto NAS appliances and shipped to locations with access to the AWS cloud.

Features for easy integration

Regardless of whether the target is a public cloud or a hybrid or multicloud setting, three other factors distinguish the smooth and easy migrations from the more difficult and protracted ones.

– Format preservation. It’s ideal when the data migration process retains the desired data format, so that IT teams can copy the data into the cloud and instantly make use of it, rather than converting copied data into a native format that is usable locally but not accessible from within the cloud itself. IT managers need to be able to get at the data right away, without the extra step of having to create volumes to access it. With terabytes of data, an extra few hours of delay may not seem like a big deal, but at petabyte scale, the delay can become insufferable.

– Enterprise format support. Traditional enterprise file protocols such as CIFS and NFS are either minimally supported by public cloud providers or not supported at all. Yet the applications these file systems serve often yield the most savings, in terms of management time and expense, when moved to the cloud. The ability to copy CIFS, NFS, or other legacy formats and retain the same format for use in the cloud saves time, avoids conversion errors and hassle, and helps keep the hydration on schedule.

– Efficient export. No vendor wants to see a customer decommission its cloud, but when needs change, bidirectional data migration or exporting of cloud data for use elsewhere needs to proceed just as efficiently – through the same static and production approaches as described above.

Hybrid cloud or multicloud support

A final consideration with any cloud hydration is making sure it’s seeded to last. With 85 percent of enterprises having a strategy to use multiple clouds, and 20 percent of enterprises planning to use multiple public clouds (RightScale State of the Cloud Report 2017), IT teams are revising their architectures with hybrid or multicloud capabilities in mind. No company wants to be locked into any one cloud provider, with no escape from the impact of the inevitable outage or disruption.

Cloud hydration approaches that allow asynchronous replication between cloud platforms make it a no-brainer for IT teams to optimize their cloud infrastructures for both performance and cost. Organizations can migrate specific workloads to one cloud platform or another (e.g., Windows applications on Azure, open source on AWS) or move them to where they can leverage the best negotiated prices and terms for given requirements. A cloud migration approach that enables concurrent access to other clouds also enables ready transfer and almost instant fail-over between clouds, in the event of an outage on one provider.

Experts have called 2017 the year of the “great migration.” Projections by Cisco and 451 Research suggest that by 2020, 83 percent of all datacenter traffic and 60 percent of enterprise workloads will be based in the cloud. New data migration options enable IT teams to “hydrate” their clouds in ways that minimize risk, cost, and hassle, and that maximize agility.

Howard Young is a solutions architect at Zadara Storage, an enterprise Storage-as-a-Service (STaaS) provider for on-premises, public, hybrid, and multicloud settings that performs cloud hydrations as one of its services. Howard has personally assisted in dozens of cloud hydrations covering billions of bits of data.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.


June 29, 2018

Posted In: Blog, Industry Insights, Tech Corner


Accounting Gimmicks Don’t Create Enterprise STaaS

Pure Storage recently announced a new Evergreen Storage Service (ES2) claimed to deliver “storage-as-a-service (STaaS) for private and hybrid clouds” for “terms as low as 12 months” with pay-per-use pricing “subject to a minimum commitment.” In other words, Pure Storage now prices and sells the same all-flash arrays (AFAs) using CapEx or OpEx dollars. This makes it seem that ES2’s innovation is limited to an accounting “alternative” (read: gimmick).

Sorry, Pure Storage. ES2 doesn’t even come close to meeting the criteria for Enterprise STaaS.

                                           Zadara Storage Cloud    Pure Storage ES2
Multi-protocol Enterprise Storage Access   Yes                     No
Entire Storage Environment                 Yes                     No
Comprehensive Storage Service              Yes                     No
Rich Enterprise-Grade Data Management      Yes                     Limited
Public Pricing with Monthly Billing        Yes                     No
Unsurpassed Agility and Scaling            Yes                     No
100% Uptime Service Level Agreement        Yes                     No

Only the Zadara Storage Cloud offers a genuine Enterprise STaaS experience, including:

  • Multi-protocol enterprise storage access that offers block, file, and object access to support mixed database and application workloads in physical, virtualized, and container data center environments.
  • An entire storage environment installed at customer-selected locations, including public cloud direct-connected to hyperscalers, private cloud on-premises in local data centers, remote data centers, or colocation data centers, and public-private hybrids.
  • A comprehensive storage service that is operated, managed, and monitored by Zadara expert staff for health status, data availability, and storage capacity and performance.
  • Rich enterprise-grade data management supporting database and application high availability, business continuity, and disaster recovery (including triple mirroring), plus encryption, snapshots, replication, local/remote mirroring, online volume migration, and more.
  • Public pricing with monthly billing for customer-defined storage configurations and usage, with an online configurator and free trials that eliminate the guesswork for block, file, and object storage budgeting and chargebacks.
  • Unsurpassed agility and scaling, featuring the agility to create and change configurations whenever needed and the ability to scale capacity and performance up or down to match present and future needs.
  • A 100% uptime Service Level Agreement (SLA), including 24x7x365 proactive support and seamless upgrades, allowing customer IT staff to focus on strategic business needs rather than ongoing system maintenance.


The Zadara Enterprise STaaS Solution

Zadara delivers Enterprise STaaS to business and service provider clients using a portfolio of products running the same Zadara software and using the same industry-standard hardware. The essential difference among the products is where the hardware is physically located (on-premises or in the cloud, for example) and how the storage is provisioned (block, file, or object).

A Zadara Storage Cloud can include some or all of the following: VPSA Storage Arrays providing block and file storage, and VPSA Object Storage providing object storage, deployed on-premises, in remote or colocation data centers, or direct-connected to public cloud hyperscalers.

In every scenario, Zadara clients provision storage wherever it is needed without concerns about available performance or capacity. The Zadara solution universally delivers storage that meets all business use-cases for enterprises and service providers using a pay-per-use model supported by operating expense (OpEx) budgets.


June 22, 2018

Posted In: Blog, Industry Insights

Innovate Now at the Zadara® Summit 2018

Discover why the most successful enterprise and service provider businesses worldwide are innovating now to gain competitive advantage using Zadara storage-as-a-service (STaaS) solutions. Zadara delivers private, hybrid, and public cloud storage with comprehensive enterprise functionality.


May 18, 2018

Posted In: Company News, Industry Insights


From On-Premises Storage-as-a-Service to Cloud-Based Storage: Phased Adoption

Organizations migrate on-premises storage into a cloud storage-as-a-service environment for a number of reasons. The migration may be mandatory because of a merger or acquisition, in which data from another organization has to be migrated into a new or existing environment, or because a business segment has been sold, which requires migrating storage elsewhere. In many cases, migration is voluntary, in a quest for better service delivery. Cloud services can cut costs, increase flexibility, and in some cases ensure better service performance.

Your migration can be handled in one of three ways: big bang, parallel, or phased. Let’s take a look at the pluses and minuses of each.

Big Bang Migration

Big bang migrations move data to the new location and immediately direct all users to it. It’s the proverbial ripping off of the Band-Aid: everyone starts on the new system all at once instead of slowly migrating over. The frustrations that are inevitable with any migration are intense, but you get them out of the way in a contained time period (at least in theory). Costs tend to be lower, and you avoid dealing with intermediary solutions or running two systems at once.

The bad news about big bang migrations is that they’re high risk. Failure can lead to long periods of downtime and intense frustration as users are corralled into a new system all at once. Big bang migrations can work well if companies are moving only small amounts of data, or if they’re migrating data from only a few offices. A large transition performed big bang style can lead to major interruptions, which most organizations can’t afford.

Parallel Migration

Parallel migration sets up a complete new storage-as-a-service environment that runs concurrently with the old on-premises storage. The two environments run side by side until the organization is ready to switch the old one off. Parallel migration mitigates risk somewhat, since the old environment stays functional while the new one is established.

It may be worthwhile to run both on-premises storage and remote storage-as-a-service arrays concurrently for applications that you’d have to recover very quickly and with minimal data loss. Establishing and managing two complete environments at once, however, gets expensive, in terms of both infrastructure usage and personnel costs, so migrating everything in this way gets cost prohibitive very quickly.

Phased Migration


Phased migration, also called iterative migration, migrates your stored data over incrementally until you’re fully up and running in the cloud. You can migrate one or a few offices at a time, or you can start with applications that have few or no interdependencies. Phased migration gives users a chance to get used to new ways of doing things and gives you time to figure out hiccups as you go without the risk that comes with big bang migration. It can also be more complex to manage as you bridge old and new storage to keep applications functional while you transition.

For a deeper dive into phased migration with VPSA Storage Array, check out our case study: VPSA in the Cloud. The quick summary is that VPSA Storage Array makes it easy to asynchronously replicate data, whether you’re replicating to a different pool within the same VPSA Storage Array, to a remote VPSA Storage Array (either within your Zadara Cloud or to a completely different geographic location) or to a completely different cloud provider. Data is written to your local VPSA storage array in real time and then, asynchronously and in the background, it’s replicated to a remote VPSA Storage Array.

In addition to being a great disaster recovery and backup solution, this per-volume remote mirroring creates a natural, seamless phased migration. Because replication is snapshot-based, only data that’s changed gets replicated, and only the most recent change is synchronized, which saves bandwidth and minimizes the impact on service performance. When all volumes are migrated to the cloud, you can start running your new workloads there.
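
The mechanism can be sketched roughly like this (a sketch only; the names are illustrative, not the VPSA API): each replication cycle compares the newest snapshot against the last snapshot the remote site acknowledged and ships only the blocks that differ.

    def changed_blocks(prev_snapshot: dict, curr_snapshot: dict):
        """Yield (block_id, data) for blocks that differ between two snapshots."""
        for block_id, data in curr_snapshot.items():
            if prev_snapshot.get(block_id) != data:
                yield block_id, data

    def replicate_cycle(prev_snapshot: dict, curr_snapshot: dict, send) -> int:
        """Ship the delta to the remote array; returns the number of blocks sent."""
        count = 0
        for block_id, data in changed_blocks(prev_snapshot, curr_snapshot):
            send(block_id, data)
            count += 1
        return count

    # Three blocks exist; only one changed since the last acknowledged snapshot.
    prev = {1: b"aaa", 2: b"bbb", 3: b"ccc"}
    curr = {1: b"aaa", 2: b"BBB", 3: b"ccc"}
    print(replicate_cycle(prev, curr, send=lambda bid, data: None))  # 1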

Choosing the Right Mix

Most migrations involve some mix of big bang, parallel and phased. The right balance depends on your budget, your timeframe, and your risk threshold. Our VPSA Storage Array is designed to make phased migration easier and more cost-effective, giving you a way to control costs without introducing the risk that comes with moving everything overnight. For more information about the advantages gained with Zadara’s VPSA Storage Array, read the case study.

January 25, 2018

Posted In: Industry Insights



Bring Cold Object Storage to Your Private Cloud

In today’s computing environment, more and more companies are beginning to work with massive datasets, ranging into the hundreds of petabytes and beyond. Whether it’s big data analytics, high-definition video, or internet-of-things applications, the necessity for companies to handle large amounts of data in their daily operations continues to grow.

Historically, enterprises have managed their data as a hierarchy of files. But this approach is simply inadequate for efficiently handling the huge datasets that are becoming more and more common today. Public cloud platforms such as Amazon Web Services (AWS) and Microsoft Azure, for example, must service many thousands of users simultaneously; they would quickly become intolerably unresponsive if every user data request meant traversing the folders and subfolders of multiple directory trees to find and collect the information needed for a response.

That’s why modern public cloud platforms, and other users of big data, use object storage in place of older file systems. And as the use of private clouds grows, they too are employing object storage to meet the challenges of efficiently handling large amounts of data.


What Is Object Storage?

With object storage, there is no directory tree or folders. Instead, there is a flat global namespace that allows each unit of stored data, called an object, to be directly addressed.

Each object contains not only data, but also metadata that describes the data, and a global ID number that uniquely identifies that object. This allows every object in the storage system, no matter where it might be physically stored, to be quickly retrieved simply by providing its unique identifier.
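
To make the contrast with directory trees concrete, here is a toy illustration (a sketch, not any particular product’s API): a flat namespace keyed by globally unique IDs, with searchable metadata stored alongside the data.

    import uuid

    class FlatObjectStore:
        """Toy object store: one flat global namespace, no folders to traverse."""

        def __init__(self):
            self._objects = {}  # object ID -> (metadata, data)

        def put(self, data: bytes, **metadata) -> str:
            object_id = str(uuid.uuid4())            # globally unique identifier
            self._objects[object_id] = (metadata, data)
            return object_id

        def get(self, object_id: str):
            return self._objects[object_id]          # direct lookup, no tree walk

        def find(self, **criteria):
            """Search on metadata without reading the data itself."""
            return [oid for oid, (meta, _) in self._objects.items()
                    if all(meta.get(k) == v for k, v in criteria.items())]

    store = FlatObjectStore()
    oid = store.put(b"<video bytes>", content_type="video/mp4", project="launch")
    metadata, data = store.get(oid)
    print(store.find(project="launch") == [oid])     # True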

Why Object Storage Is Well Suited to Private Clouds

When it comes to handling massive datasets in a cloud environment, object storage has a number of unique advantages. Let’s take a look at some of these:

  • It’s infinitely scalable. Because of its flat namespace, an object storage system can theoretically be scaled without limitation simply by adding objects, each with its own unique ID.
  • Metadata makes searching easy. The metadata that accompanies each object provides critical information about the object’s data, making it easy to search for and retrieve needed data quickly and efficiently without having to analyze the data itself.
  • It’s highly robust and reliable. Under the hood, VPSA Object Storage departs from traditional RAID redundancy, using a distributed “ring” topology instead, with two-way or three-way replication that customers choose when the object store is created. By spreading redundant data across multiple nodes, via replication or erasure coding rather than RAID, an object storage system automatically backs data up and can quickly rebuild data that is destroyed or corrupted. Nodes can be added or removed at will, and the system uses Swift’s underlying ring replication to ensure that new objects are incorporated, and lost copies rebuilt, automatically and transparently (see the sketch after this list).
  • It simplifies storage management. The metadata of an object can contain as much (or as little) information about the data as desired. For example, it could specify where the object is to be stored, which applications will use it, the date when it should be deleted, or what level of data security is required. Having this degree of detail available for every object allows much of the data management task to be automated in software.
  • It lowers costs. Object storage systems don’t require expensive specialized storage appliances, but are designed for use with low-cost commodity disk drives.
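
The ring placement mentioned above can be sketched with consistent hashing (a deliberate simplification for illustration, not Swift’s actual ring builder): each object ID hashes to a point on a ring and is stored on the next N distinct nodes, so adding or removing a node relocates only a small fraction of the objects.

    import bisect
    import hashlib

    def ring_hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    class Ring:
        """Consistent-hash ring placing each object on `replicas` distinct nodes."""

        def __init__(self, nodes, replicas: int = 3, vnodes: int = 64):
            self.replicas = replicas
            self._points = sorted(
                (ring_hash(f"{node}:{v}"), node)
                for node in nodes for v in range(vnodes)
            )

        def nodes_for(self, object_id: str):
            start = bisect.bisect(self._points, (ring_hash(object_id), ""))
            chosen, i = [], start
            while len(chosen) < self.replicas:
                node = self._points[i % len(self._points)][1]
                if node not in chosen:
                    chosen.append(node)  # the next N *distinct* nodes hold copies
                i += 1
            return chosen

    ring = Ring(["node-a", "node-b", "node-c", "node-d"])
    print(ring.nodes_for("object-123"))  # e.g. ['node-b', 'node-d', 'node-a']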


Zadara VPSA Object Storage

Zadara offers an object storage solution that incorporates all the advantages discussed above, and then some. VPSA Object Storage is specifically designed for use with private as well as public clouds. It is especially suited to storing relatively static data such as big data or multimedia files, or for archiving data of any type. VPSA Object Storage provides anytime, anywhere, any-device remote access (with appropriate access controls) via HTTP.

The VPSA Object Storage solution, which is Amazon S3 and OpenStack Swift compatible, features frequent, incremental, snapshot-based, automatic data backup to object-based storage, eliminating the need to have separate backup software running on the host.

If you would like to explore how Zadara VPSA Object Storage can help boost your company’s private cloud, please contact us.

October 10, 2017

Posted In: Industry Insights



Challenges MSPs Face as Customers Move to the Cloud

The face of the MSP (managed IT services provider) marketplace is changing rapidly. Not so long ago the keys to success for most MSPs revolved around recommending or selling the newest and best hardware and software products to their customers. But as more and more companies migrate to the cloud, that approach is no longer adequate.

The Cloud’s XaaS Model Changes Everything for MSPs

Perhaps the most important feature of the cloud model is that it allows customers to meet many, if not all, of their IT requirements by making use of pay-as-you-go services offered by cloud providers. This “anything as a service” (XaaS) approach reduces, or in some cases totally eliminates, the necessity of purchasing specific hardware/software solutions. For example, many companies no longer meet their document processing needs by installing Microsoft Office on their computers. Instead they simply subscribe to Office 365 and receive the services they need through the cloud.


Service Providers Gain Competitive Advantage by Leveraging Zadara Storage

Watch the Webinar


In today’s IT environment customers aren’t looking for products, but for solutions. That means MSPs must now demonstrate that they provide a unique value proposition for customers who can theoretically go directly to a CSP (cloud service provider) to obtain almost any type of IT service they might need.

Yet the good news for MSPs is that customers aren’t really looking for services – they’re looking for solutions to the business issues they face. As IT business coach Mike Schmidtmann puts it, “Cloud is a business conversation, not a price-and-product conversation.”

So, the MSPs that survive and thrive in the age of the cloud will be those who shift away from simply offering specific products, and move toward providing strategic IT solutions that help their customers realize their business objectives.


A Good MSP Will Help Customers Develop an IT Strategy Based on Business Goals

Most MSP clients are not interested in IT per se. Their focus is on using IT effectively to enhance their business operations. So, the first service a cloud-savvy MSP can provide to their customers is to help them develop a comprehensive IT strategy that is closely aligned with the company’s business objectives. In effect, the MSP will seek to become an extension of the customer’s own IT staff, providing a depth of expertise and operational capability that would be very difficult for the customer to maintain in-house.

Once armed with a good understanding of the customer’s business goals, an MSP can help them develop a comprehensive IT strategy that will support those objectives. So, the first conversations between MSPs and their customers shouldn’t be about specific solutions, but about the goals and strategy that customer is pursuing for both the present and the future of its business.


Service Provider Success Story:

“Overall, we are seeing 80% better performance with Zadara Storage than with our prior storage solution.” — Chris Jones, Infrastructure Architect at Netrepid

Read the Case Study


A Good MSP Will Identify Specific Cloud Solutions That Meet Customer Needs


A recent CompTIA survey reveals that many companies, especially smaller ones, have a great deal of difficulty in aligning their IT infrastructure with their business strategy. They simply don’t have the in-house technological expertise to do so effectively. John Burgess, president of an MSP in Little Rock, AR, says that such companies are “usually fairly ad hoc and reactionary in how they manage and spend technology.”

Here’s where the added value an MSP partner can provide becomes clearly evident. A good MSP can help identify the specific available cloud services that best fit the customer’s business strategy. In doing so, the MSP will be looking not just at individual services and the CSPs that offer them, but at how those services can be integrated into a unified system that can be effectively managed as a single solution.

A Good MSP Will Manage the Customer’s Cloud Infrastructure

Perhaps the most important service a good MSP can offer is to relieve customers of the burden of having to worry about their IT operations. This involves the capability to initially put the system in place, to monitor its operations on a 24/7/365 basis, and to proactively handle problem resolution and upgrades to system components.

A Good MSP Will Establish Relationships With Expert Partners

Few MSPs have the resources to develop and maintain in-house the kind of comprehensive cloud expertise required to fully support their customers on their own. Most will benefit from having specialized expert partners that can support the MSP in the services they offer to customers.

A good example of such a partner is Zadara Storage. As a storage-as-a-service (STaaS) provider, Zadara offers a high level of expertise in all elements of storage, whether in the public cloud, private clouds, or customers’ on-premises data centers. In fact, Zadara’s VPSA Storage Arrays are already installed in the facilities of major public cloud platforms such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), and are available for installation on customer premises as the basis of a private or hybrid cloud solution.

Whether the VPSA Storage Arrays they use are in the cloud, on-premises, or both, Zadara customers never buy storage hardware. Instead, they purchase storage services, paying a monthly fee for only the amount of storage they actually use during that billing period.


Partnering with a first-class STaaS provider enables you to provide your customers with a cost-effective enterprise-grade storage solution

Join the Zadara Partner Network Today

Zadara Storage Managed Services. Download White Paper



October 4, 2017

Posted In: Industry Insights



Practical Benefits Of A Hybrid Cloud Strategy

As more and more companies move to the cloud, one of the first questions they have to answer is which cloud model best fits their needs: public, private, or hybrid. Many are choosing the hybrid model as their best option.

The term “hybrid cloud” simply refers to an operational environment that includes both private and public cloud platforms. It has become an attractive model for many enterprises because it allows users to take advantage of the cost and functionality advantages of the public cloud, while also gaining the flexibility and control a private cloud provides.

Let’s take a quick look at some of the unique benefits of a hybrid cloud strategy.

Flexibility to Determine Optimal Placement of Workloads

With a hybrid cloud, administrators can decide where to place each workload to maximize efficiency and minimize costs.

The distinctive characteristic of the public cloud is its ability to provide IT services on demand without requiring up-front capital investments for hardware and infrastructure. With its XaaS (“Whatever you need”-as-a-Service) model, public cloud platforms, such as Microsoft Azure, Amazon Web Services (AWS), or Google Cloud Platform (GCP), have become excellent vehicles for quickly deploying common applications that many companies depend on.

Whether it’s a CRM (customer relationship management) or ERM (enterprise resource management) application, or perhaps a document management environment such as Office 365, companies can institute such workloads on a public cloud platform quickly and cost-effectively.

Yet many organizations also have workloads that are better served in an on-premises environment than in the public cloud. For example, workloads that require very high levels of I/O responsiveness, such as big data analytics, may be affected by public cloud latency issues that could degrade system performance to unacceptable levels. By housing such workloads in a company’s on-premises private cloud, where storage and servers can be kept in close physical proximity to one another, latency effects can be minimized.

Control of Data and Applications

The public cloud is a multi-tenant environment in which resources are shared among a number of customers. Many companies, concerned about the possibility of their workloads somehow being affected by the activities of other users, prefer to keep their mission-critical applications at home in a private cloud, under their direct control, while offloading less critical workloads to the public cloud.

Data Placement to Meet Security Requirements


Data security is the number one reason for the use of private clouds. Although public cloud platforms can now provide very high levels of data protection, many organizations believe that their most sensitive data is less vulnerable when it is kept at home behind their own firewall. This is particularly true for companies in industries, such as healthcare or banking, that are subject to regulatory compliance mandates that specify how customer information must be kept secure.

On the other hand, less sensitive data that becomes inactive or infrequently used can be moved to public cloud storage to take advantage of lower costs and greater scalability.

Speed of Testing and Deploying New Applications

Many companies use both public and private clouds in the testing and deployment of new applications. The design parameters of new apps can be shaped, refined, and thoroughly tested using a public cloud PaaS (Platform-as-a-Service) offering. Because PaaS resources are virtualized, developers can call them in as needed without having to spend capital funds to purchase hardware. Then, once development and testing are complete, the application can be deployed to a public or private cloud for production.

Failover and Spillover to the Public Cloud


Many hybrid cloud implementations are specifically designed to allow seamless failover to the public cloud should the operations of an organization’s private cloud be disrupted for any reason. This is especially true in the area of data backup/restore and disaster recovery. Once the emergency has passed, operations can be returned to the private cloud environment, often without users ever being aware that the failure occurred.

This is also the idea behind “cloud bursting,” which is instituted when surges in demand outpace the capacity of a private cloud. Whether it’s pre-planned, perhaps in anticipation of seasonal spikes in traffic, or is the entirely unexpected result of some news event that suddenly drives increased traffic to a company’s website, non-sensitive data can be temporarily spilled over into the public cloud so that operations can continue without disruption.

The Zadara Hybrid Cloud Storage Solution

The Zadara Storage Cloud has proven to be a highly effective storage solution for hybrid cloud implementations. Zadara VPSA Storage Arrays are connected to major cloud providers like AWS, Azure, and GCP. They can also be housed on customer premises as the storage component of a private cloud. With their remote replication and mirroring capabilities, these devices can transparently transfer stored data between clouds to facilitate failover, spillover, backup/restore, and disaster recovery.

Zadara VPSA Storage Arrays are provided on a storage-as-a-service (STaaS) basis. No matter how many may be installed on site, customers pay only a monthly fee for just the amount of storage they actually use during the billing period.

If you’d like to explore how Zadara Storage can assist your company in developing a cost-effective hybrid cloud implementation, please download the ‘Zadara Storage Cloud’ whitepaper.

September 26, 2017

Posted In: Industry Insights



How a Multi-Cloud Strategy Can Benefit MSPs

Businesses of all sizes are moving to the cloud in ever-increasing numbers. MSPs (Managed IT Services Providers) are recognizing that if they don’t want to be left behind, they’ve got to lead the way. That’s why most successful MSPs today are committed to providing their customers with a comprehensive array of services delivered through the cloud.

But for an MSP to provide the highest levels of cloud-based services, it’s not enough to develop expertise with any particular cloud platform. All the major cloud service providers (CSPs), such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), offer a common set of basic features. Yet, they also differ from one another significantly in the types of services each is best suited to provide. That’s why deriving the maximum benefit from the cloud model requires the ability to take advantage of the best that each individual cloud platform has to offer.

In other words, to get the most out of the cloud, you need a multi-cloud strategy.

The multi-cloud approach provides some important advantages to MSPs, both in the services they can offer their customers, and in terms of their own operations. Let’s take a look at some of these benefits.


Service Providers Gain Competitive Advantage by Leveraging Zadara Storage

Watch the Webinar


Benefits to MSP Customers


Match Workloads To the Most Suitable Platforms

All the major clouds provide similar suites of basic services. Yet each is optimized for different types of workloads. For example, if your customer is running Windows client apps, Microsoft Azure is a natural fit. If they are doing big data analytics, GCP might be a better choice. Part of your job as a multi-cloud MSP is to help your customers determine the best cloud platform for each of their workloads.

Avoid Vendor Lock-In

A good MSP will work with clients to ensure that their workloads are portable between platforms. That way, if a client becomes dissatisfied with a particular platform for any reason, their options won’t be limited by the prospect of a costly and time-consuming migration to another cloud.

Reduce Costs

Each CSP provides different service plans, at different price points, for each set of features it offers. Part of what a cloud-savvy MSP can offer clients is the ability to distribute specific workloads among the various cloud platforms, not only to take advantage of what each cloud does best but also to get the best pricing for exactly the services the client needs.


Service Provider Success Story:

“Zadara Storage is a key reason that our solution can outperform the competition.” — David Benson, Chief Technology Officer and Co-founder, BeBop Technology

Read the Case Study


Enhance Data Security

By replicating data (and even virtual servers) among different clouds, MSPs can offer a high level of backup/recovery and disaster recovery services to clients. A disruption at one location can immediately trigger failover to either another zone or to an entirely different cloud platform.

Benefits to the MSP Itself


Increase Your Ability to Meet SLA Requirements

MSPs are usually bound by a Service Level Agreement (SLA) that provides a specific up-time guarantee. A multi-cloud strategy helps MSPs meet SLA up-time requirements by allowing operations to be quickly and transparently shifted from a CSP that is experiencing an outage to a different platform.
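
For context, the arithmetic behind common up-time tiers is worth keeping at hand (illustrative percentages, not figures from any particular SLA):

    def downtime_hours_per_year(uptime_percent: float) -> float:
        """Hours of downtime per year permitted by a given up-time percentage."""
        return 365 * 24 * (1 - uptime_percent / 100)

    for sla in (99.0, 99.9, 99.99, 99.999):
        hours = downtime_hours_per_year(sla)
        print(f"{sla}% uptime allows {hours:.2f} h (~{hours * 60:.0f} min) per year")
    # 99% -> 87.60 h; 99.9% -> 8.76 h; 99.99% -> 0.88 h; 99.999% -> 0.09 h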

Keep Up With Technological Advances

The major cloud platforms are quite competitive with one another. Each works hard to introduce new or improved features that are not available through its competitors. But multi-cloud MSPs are able to tap into innovations introduced by any of the CSPs with whom they work.

Extend Your Expertise

A multi-cloud approach can substantially reduce the degree to which an MSP is required to be a technical jack of all trades. Instead of maintaining in-house experts for a wide range of solutions, MSPs can leverage the expertise of the various CSPs to offer platform-specific services to clients.

How Zadara Can Help With a Multi-cloud Strategy

The Zadara Storage Cloud is ideal as part of a multi-cloud solution. With Zadara VPSA Storage Arrays installed both on customer premises and connected to the major cloud providers such as AWS, Azure, and GCP, data can be seamlessly and transparently replicated among various public and private cloud platforms. Download White Paper: Getting Great Performance in the Cloud.


Partner with a first-class STaaS provider to provide your customers with a cost-effective enterprise-grade storage solution

Join the Zadara Partner Network Today

Zadara Storage Managed Services. Download White Paper



September 21, 2017

Posted In: Industry Insights



Why Companies Adopt Both Public and Private Clouds

More and more companies are basing significant portions of their IT infrastructure in the cloud. According to the RightScale 2017 State of the Cloud Survey of IT professionals, a full 95 percent of respondents said that their companies have adopted the cloud as an integral part of their IT operations. For some of those companies, the focus is on the public cloud; for others it’s on an in-house private cloud. The majority make use of both public and private clouds.

What is it about public and private clouds that causes so many companies to be drawn to them? Let’s take a look at the benefits each of these cloud models offer to businesses today.

The Benefits of the Cloud

It was not that long ago that the standard approach to IT in most companies was to build and maintain their own in-house datacenters. But the cloud computing model has brought about a fundamental shift in the way businesses seek to meet their IT needs. No longer must companies devote scarce capital (CapEx) funds to the purchase of their own servers, storage, and networking hardware. Instead, the cloud model encourages them to purchase IT services on a pay-as-you-go basis for a monthly fee.

Customers pay only for the services that they actually use. The cloud platform provider is responsible for acquiring, supporting, and upgrading the required hardware and software as necessary, and for ensuring that a sufficient amount of these resources is always available to allow on-demand provisioning and scaling. The result is that the cloud model offers companies lower overall costs, greater flexibility and agility, rapid deployment of applications, and a substantial reduction in the amount of expert staff required to manage the organization’s IT infrastructure.

How Public and Private Clouds Differ From One Another


Public cloud platforms, such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) are, as the name implies, open to everyone. They operate on a multi-tenancy model in which hardware and software resources are shared among a number of different customers. This allows the public cloud to realize economies of scale that drive down costs for all users.

Private clouds, on the other hand, are built on a single-tenancy model. That means they are devoted exclusively to one customer, and there is no sharing of resources. Private clouds can be implemented in a company’s on-premises datacenter using its own hardware, in an external facility run by a trusted partner such as a managed services provider (MSP), or even, in some cases, on dedicated resources in the facilities of a public cloud provider. The key is that a private cloud is isolated to a single customer, and there is no intermingling of that customer’s hardware/software resources or data with those of other customers.

Advantages of the Public Cloud

Because of its large multi-tenant user base, a public cloud platform can normally provide IT services at a lower cost than a private cloud could achieve. Costs are also reduced by the fact that customers have no responsibility for purchasing, housing, supporting, or managing hardware. The result is that workloads can be deployed on a public cloud platform more quickly and inexpensively than would be the case with a private cloud.

Advantages of a Private Cloud


The main driver in the decision of many companies to make use of a private cloud is the desire to retain maximum control over business-critical data. Although public clouds now provide the highest levels of data protection, the multi-tenant nature of such platforms, and the fact that they are designed to allow access by users around the world, presents a level of perceived vulnerability that many companies are not comfortable with. Plus, businesses in certain industries face strict regulatory compliance obligations, such as those imposed by the Health Insurance Portability and Accountability Act (HIPAA). With a private cloud, all of a company’s data can remain safely hidden behind the organization’s own firewall, totally inaccessible to outsiders.

The ability to tailor a private cloud to the exact requirements of a company’s specific workloads may also provide performance advantages over what could be achieved with a public cloud platform.

The Zadara Storage Solution Spans Both Public and Private Cloud Platforms

The Zadara Storage Cloud provides a common storage solution for both public and private clouds. Its VPSA Storage Arrays support each of the major public cloud platforms such as AWS, Azure, and Google Cloud Platform (GCP). They also form the basis of many private cloud implementations. The Zadara Storage architecture also provides resource isolation, so users gain the benefits of multi-tenant public clouds, but with the security and predictable performance of a private cloud. Whether they use the public cloud, a private cloud, or a hybrid combination of the two, Zadara customers receive all the benefits of the cloud model, including paying a monthly fee for just the amount of storage they actually use. And Zadara takes on the responsibility to monitor and support the customer’s storage, whether on-site or in the public cloud.

If you would like to know more about how Zadara can help you develop a comprehensive cloud solution for your company, please download the ‘Zadara Storage Cloud’ whitepaper.

September 13, 2017

Posted In: Industry Insights

