Zadara Blog

News, information, opinion and commentary on issues affecting enterprise data storage and management.

Zadara Announces a $25M Investment Round Led by IGP Capital

Investment to Accelerate Growth and Development of Consumption-Based Enterprise Storage Service Offering

Irvine, CA, August 28, 2018 – Zadara, provider of zero-risk enterprise cloud storage, today announced that it has signed a $25 million funding round led by IGP Capital with participation from existing investors. The closing of the investment round is conditioned upon the approval of the general meeting of Zadara. This brings the total equity raised to over $60 million. Zadara plans to use the investment primarily to accelerate growth, including expanding its worldwide sales, DevOps, and engineering teams as well as its service provider partner channel. In addition, the funds will help Zadara develop its service offering, which focuses on eliminating the technical, operational, and financial risks associated with enterprise data storage and management.

Zadara uses a combination of industry-standard hardware and patented Zadara software to deliver powerful enterprise-class data storage and management — with the convenience of the cloud. Customers have full flexibility of choice when it comes to their installations as Zadara supports all protocols, data types and locations. In addition, thanks to the company’s usage-based pricing model, customers only pay for the storage they consume, meaning they can scale their capacity up and down as needs change. Zadara is available via public clouds, including AWS, Google Cloud Platform and Azure, managed service providers, data centers, colocation partners, and on premises in customers’ data centers.

“Zadara is dedicated to delivering zero-risk enterprise cloud storage. The new funding supports our mission by helping us to provide customers with industry-leading enterprise data storage solutions — like our upcoming all-flash arrays with data compression and deduplication — as a fully-managed service, with a 100%-uptime guarantee and consumption-based pricing,” said Nelson Nahum, CEO and co-founder of Zadara.

“We have been impressed with Zadara’s founding team and its proven track record of building innovative, scalable businesses in the storage market over the past two decades. Zadara’s consumption-based storage platform offers enterprises the best of both worlds — cloud economics and flexibility, combined with local storage performance and control. Zadara’s customers and partners have been extremely passionate about its technology and value proposition and we’re excited to support the company in their journey,” commented Moshe Lichtman, co-founder and general partner at IGP Capital.

Zadara is exhibiting at VMworld in Las Vegas this week. Come and visit us at booth #1612.

About Zadara

Zadara is zero-risk enterprise cloud storage. We help organizations eliminate the technical, operational and financial risks associated with enterprise data storage, by providing industry-leading enterprise data storage solutions as a fully-managed service, with a 100%-uptime guarantee and consumption-based pricing. Zadara uses a combination of industry-standard hardware and patented Zadara software to deliver the power of enterprise-class data storage and management — with the convenience of the cloud. Any data type. Any protocol. Any location. Zadara is available via public clouds, managed service providers, data centers, colocation partners, and on premises in customers’ data centers. More at www.zadara.com, LinkedIn, and Twitter.

August 30, 2018

Posted In: Uncategorized


Meet Zadara at Google Cloud Next — Booth S1626

Google Cloud Next ’18
Moscone Center, San Francisco
July 24 – 26

Supercharge Your GCP with Enterprise Cloud Storage from Zadara

Meet the Zadara Storage team at Google Cloud Next ’18 to see a LIVE DEMO of Zadara’s industry-leading enterprise storage, delivered as a fully-managed, pay-only-for-what-you-use service.

The Google Cloud Platform Experience with Zadara

100% SLA and 24/7 support
Available On-Premises, In the Cloud, or as a Hybrid configuration (On-Premises + In the Cloud)
Secure – private data transport, user-owned encryption keys
Private – no shared resources
Scalable – change storage capacity and performance at will with no downtime
Cost-effective – no equipment purchase, no maintenance
Convenient – eliminate disruptive updates, hot-fixes, and upgrades


Full-featured NAS file and SAN block storage — including AD and HA/DR with dedicated resources — for just $5K per month. Don’t wait. This offer is only available until July 26, 2018.

Book a meeting with Zadara

Ready to learn more? Download this simple comparison chart.


July 2, 2018

Posted In: Blog, Company News


Best practices for migrating data to the cloud

Originally published on InfoWorld

Moving petabytes of production data is a trick best done with mirrors. Follow these steps to minimize risk and cost and maximize flexibility

Enterprises that are embracing a cloud deployment need cost-effective and practical ways to migrate their corporate data into the cloud. This is sometimes referred to as “hydrating the cloud.” Given the challenge of moving massive enterprise data sets anywhere non-disruptively and accurately, the task can be a lengthy, complicated, and risky process.

Not every organization has enough dedicated bandwidth to transfer multiple petabytes without causing performance degradation to the core business, or enough spare hardware to migrate to the cloud. In some cases, organizations in physically isolated locations, or without cost-effective high-speed Internet connections, face an impediment to getting onto a target cloud. Data must be secured, backed up, and, in the case of production environments, migrated without missing a beat.


AWS made hydration cool, so to speak, with offerings such as Snowball, a petabyte-scale data transfer service using one or more AWS-supplied appliances, and, as of fall 2016, Snowmobile, an exabyte-scale transport service using an 18-wheeler truck that carries data point to point. These services make it easy to buy and deploy migration services for data that will reside in the AWS cloud. It would take roughly 120 days to migrate 100TB of data using a dedicated 100Mbps connection; the same transfer using multiple Snowballs would require about a week.
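Those figures are easy to sanity-check. Assuming roughly 80 percent effective utilization on the WAN link and a sustained 150MB/s upload to a local appliance (both illustrative assumptions, not vendor numbers), a quick calculation reproduces the "months versus about a week" gap:

```python
# Back-of-the-envelope transfer times for 100 TB.
# Assumptions (illustrative, not vendor figures): ~80% effective
# utilization on the 100 Mbps link; ~150 MB/s sustained appliance writes.

TB = 10**12               # bytes
SECONDS_PER_DAY = 86_400

def transfer_days(data_bytes: int, bits_per_second: float,
                  utilization: float = 1.0) -> float:
    """Days to move `data_bytes` over a link of `bits_per_second`."""
    return (data_bytes * 8) / (bits_per_second * utilization) / SECONDS_PER_DAY

# Dedicated 100 Mbps connection at 80% utilization: ~116 days
wan_days = transfer_days(100 * TB, 100e6, utilization=0.8)

# Local 10G upload to an appliance at ~150 MB/s sustained: ~8 days
appliance_days = transfer_days(100 * TB, 150e6 * 8)

print(f"100 Mbps WAN: {wan_days:.0f} days; appliance upload: {appliance_days:.1f} days")
```

The appliance estimate covers only the local copy; shipping and the copy into the cloud add a few days, which is consistent with the "about a week" figure quoted above.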

Yet for the remaining 55 percent of the public cloud market that is not using AWS – or those enterprises with private, hybrid, or multi-cloud deployments that want more flexibility – other cloud migration options may be more appealing than AWS’s native offerings. This may be especially true when moving production data, where uploading static data onto appliances leaves the IT team with a partial copy during the transfer. They need a way to resynchronize the data.

The following is a guide to cloud hydration best practices, which differ depending on whether your data is static, and thus resources are offline, or in production. I will also offer helpful tips for integrating with the new datacenter resources, and accommodating hybrid or multicloud architectures.

Static data

Unless data volumes are under 1TB, you’ll want to leverage physical media such as an appliance to expedite the hydration process for file, block, or object storage. This works elegantly in environments where the data does not need to be continuously online, or the transfer requires the use of a slow, unreliable, or expensive Internet connection.

1. Copy the static data to a local hydration appliance. Use a small, portable, easily shipped NAS appliance, configured with RAID for durability while shipping between sites. The appliance should include encryption – either 128-bit AES, or preferably 256-bit AES – to protect against unauthorized access after the NAS leaves the client facility.
Using a very fast 10G connection, teams can upload 100MB to 200MB of data per second onto a NAS appliance. The appliance should support the target environment (Windows, Linux, etc.) and file access mechanism (NFS, CIFS, Fibre Channel, etc.). One appliance is usually sufficient to transfer up to 30TB of data. For larger data volumes, teams can use multiple appliances or repeat the process several times to move data in logical chunks or segments.

2. Ship the appliance to the cloud environment. The shipping destination could be a co-location facility near the target cloud or the cloud datacenter itself.

3. Copy the data to a storage target in the cloud. The storage target should be connected to the AWS, Azure, Google, or other target cloud infrastructure using VPN access via high-speed fiber.

For example, law firms routinely need to source all emails from a client site for e-discovery purposes during litigation. Typically, the email capture spans a static, defined date range from months or years prior. The law firm will have its cloud hydration vendor ship an appliance to the litigant’s site, direct them to copy all emails as needed, then ship the appliance to the cloud hydration vendor for processing.

While some providers require the purchase of the appliance, others allow for one-time use of the appliance during migration, after which it is returned and the IT team is charged on a per-terabyte basis. No capital expenditure or long-term commitment is required.

Production data

This process requires some method of moving the data and resynchronizing once the data is moved to the cloud. Mirroring represents an elegant answer to migrating production data.

Cloud hydration using mirroring requires two local on-premises appliances that have the capability to keep track of incremental changes to the production environment while data is being moved to the new cloud target.

1. Production data is mirrored to the first appliance, creating an online copy of the data set. Then a second mirror is created from the first mirror, creating a second online copy.

2. The second mirror is “broken” and the appliance is shipped to the cloud environment.

3. The mirror is then reconnected between the on-premises copy and the remote copy and data synchronization is re-established.

4. An online copy of the data is now in the cloud and the servers can fail over to the cloud.
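The four steps above can be sketched as a dirty-block bitmap: the local copy records which blocks change while the shipped mirror is in transit, so only those blocks are re-sent on reconnect. This is a conceptual sketch with invented class and method names, not Zadara's actual implementation:

```python
# Conceptual sketch of mirror-based hydration (names are illustrative).
# The local volume tracks blocks written after the second mirror is
# "broken" for shipping; reconnecting re-sends only those blocks.

class MirrorVolume:
    def __init__(self, num_blocks: int):
        self.blocks = [b"\0"] * num_blocks
        self.dirty = set()            # blocks changed since last sync

    def write(self, block_no: int, data: bytes):
        self.blocks[block_no] = data
        self.dirty.add(block_no)

    def snapshot_to(self, other: "MirrorVolume"):
        """Full copy: seeds the second mirror before it is shipped."""
        other.blocks = list(self.blocks)
        self.dirty.clear()

    def resync_to(self, other: "MirrorVolume") -> int:
        """Incremental resync once the shipped copy is back online."""
        for block_no in self.dirty:
            other.blocks[block_no] = self.blocks[block_no]
        sent = len(self.dirty)
        self.dirty.clear()
        return sent

local = MirrorVolume(1000)
remote = MirrorVolume(1000)

local.write(1, b"a"); local.write(2, b"b")
local.snapshot_to(remote)                    # steps 1-2: seed, then "ship"
local.write(2, b"c"); local.write(3, b"d")   # production writes in transit
resent = local.resync_to(remote)             # step 3: reconnect, sync deltas
print(resent)                                # 2 blocks, not the whole volume
```

The payoff is in the last line: however large the volume, the resynchronization transfers only what changed while the appliance was in transit.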

For example, a federal agency had 2PB of on-premises data that it wanted to deploy in a private cloud. The agency’s IT team set up two on-premises storage resources adjacent to each other in one datacenter, moved production data onto one mirror, then set up a second mirror so that everything was copied. Then the team broke the mirror and shipped the entire rack to a second datacenter several thousand miles away, where its cloud hydration vendor (Zadara Storage) re-established the mirrors.

When reconnected, data were synchronized to represent a full, up-to-date mirror copy. Once the process was complete, the hardware that was used during the data migration process was sent to a remote location to serve as a second disaster recovery copy.

In another example, a global management consulting firm used 10G links to move smaller sets of data from its datacenter to the target storage cloud, and hydration appliances to move petabytes of critical data. Once the 10G link data uploads were copied to the storage resource, the cloud hydration provider used an AWS Direct Connect link to AWS. In this way the resources were isolated from the public cloud, yet made readily available to it. Other static data were copied onto the NAS appliances and shipped to locations that are available to the AWS cloud.

Features for easy integration

Regardless of whether the target is a public cloud or a hybrid or multicloud setting, three other factors distinguish the smooth and easy migrations from the more difficult and protracted ones.

– Format preservation. It’s ideal when the data migration process retains the desired data format, so that IT teams can copy the data into the cloud and instantly make use of it, versus converting copied data into a native format that is used locally but is not accessible from within the cloud itself. IT managers need to be able to get at the data right away, without the extra step of having to create volumes to access it. With terabytes of data, the extra few hours of delay may not seem like a big deal, but at petabyte scale, the delay can become insufferable.

– Enterprise format support. Traditional storage device formats such as CIFS and NFS are either minimally supported by public cloud providers or not supported at all. Yet the applications these file systems serve often yield the most savings, in terms of management time and expense, when moved to the cloud. Having the ability to copy CIFS, NFS, or other legacy file types and retain the same format for use in the cloud saves time, potential errors, and hassle from the conversion, and helps assure the hydration timeline.

– Efficient export. No vendor wants to see a customer decommission its cloud, but when needs change, bidirectional data migration or exporting of cloud data for use elsewhere needs to proceed just as efficiently – through the same static and production approaches as described above.

Hybrid cloud or multicloud support

A final consideration with any cloud hydration is making sure it’s seeded to last. With 85 percent of enterprises having a multi-cloud strategy, and 20 percent of enterprises planning to use multiple public clouds (RightScale State of the Cloud Report 2017), IT teams are revising their architectures with hybrid or multicloud capabilities in mind. No company wants to be locked into any one cloud provider, or left exposed to the impact of an inevitable outage or disruption.

Cloud hydration approaches that allow asynchronous replication between cloud platforms make it a no-brainer for IT teams to optimize their cloud infrastructures for both performance and cost. Organizations can migrate specific workloads to one cloud platform or another (e.g., Windows applications on Azure, open source on AWS) or move them to where they can leverage the best negotiated prices and terms for given requirements. A cloud migration approach that enables concurrent access to other clouds also enables ready transfer and almost instant fail-over between clouds, in the event of an outage on one provider.

Experts have called 2017 the year of the “great migration.” Projections by Cisco and 451 Research suggest that by 2020, 83 percent of all datacenter traffic and 60 percent of enterprise workloads will be based in the cloud. New data migration options enable IT teams to “hydrate” their clouds in ways that minimize risk, cost, and hassle, and that maximize agility.

Howard Young is a solutions architect at Zadara Storage, an enterprise Storage-as-a-Service (STaaS) provider for on-premises, public, hybrid, and multicloud settings that performs cloud hydrations as one of its services. Howard has personally assisted in dozens of cloud hydrations covering billions of bits of data.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.


June 29, 2018

Posted In: Blog, Industry Insights, Tech Corner


Meet Zadara at AWS Summit New York City on July 17th

AWS Summit New York City
New York City, NY – July 17, 2018
Javits Center, Booth #530

Are you sacrificing enterprise features for the cloud? Well, you shouldn’t have to.

Zadara gives you all the power of industry-leading enterprise storage, without the cost and complexity of owning and operating your own infrastructure. Your business relies on storage with high performance, control, availability, and privacy – and you shouldn’t settle for anything less in your cloud storage solution.

Meet the Zadara team at AWS Summit New York and learn how Zadara takes AWS to a new level, by bringing you the enterprise storage functionality you require.


Full-featured NAS file and SAN block storage — including AD and HA/DR with dedicated resources — for just $5K per month. Don’t wait. This offer is only available until July 17, 2018.

Book a meeting with Zadara

Zadara adds a host of storage-related features to any AWS account, including security and encryption of data at rest and in flight with customer-owned keys, cross-region replication, added redundancy, volumes over 100TB in size, snapshots with built-in lifecycle policies, thin-provisioned clones, and more.

Download the Supercharge Your AWS Storage infographic to find out which enterprise features you’re missing.


June 27, 2018

Posted In: Blog, Company News


Accounting Gimmicks Don’t Create Enterprise STaaS

Pure Storage recently announced a new Evergreen Storage Service (ES2) claimed to deliver “storage-as-a-service (STaaS) for private and hybrid clouds” for “terms as low as 12 months” with pay-per-use pricing “subject to a minimum commitment.” In other words, Pure Storage now prices and sells the same all-flash arrays (AFAs) using CapEx or OpEx dollars. This makes it seem that ES2 innovation is limited to an accounting “alternative” (read gimmick).

Sorry Pure Storage. ES2 doesn’t even come close to meeting the criteria for Enterprise STaaS.

                                           Zadara Storage Cloud   Pure Storage ES2
Multi-protocol Enterprise Storage Access   Yes                    No
Entire Storage Environment                 Yes                    No
Comprehensive Storage Service              Yes                    No
Rich Enterprise-Grade Data Management      Yes                    Limited
Public Pricing with Monthly Billing        Yes                    No
Unsurpassed Agility and Scaling            Yes                    No
100% Uptime Service Level Agreement        Yes                    No

Only the Zadara Storage Cloud offers a genuine Enterprise STaaS experience, including:

  • Multi-protocol enterprise storage access that offers block, file, and object access to support mixed database and application workloads in physical, virtualized, and container data center environments.
  • An entire storage environment installed at customer-selected locations, including public cloud direct-connected to hyperscalers, private cloud on-premises in local data centers, remote data centers or colocation data centers, and public-private hybrids.
  • A comprehensive storage service that is operated, managed, and monitored by Zadara expert staff for health status, data availability, and storage capacity and performance.
  • Rich enterprise-grade data management supporting database and application high availability, business continuity, and disaster recovery (including triple-mirroring), plus encryption, snapshots, replication, local/remote mirroring, online volume migration, and more.
  • Public pricing with monthly billing for customer-defined storage configurations and usage, with an online configurator and free trials that eliminate the guesswork for block, file, and object storage budgeting and chargebacks.
  • Unsurpassed agility and scaling, featuring the ability to create and change configurations whenever needed and to scale capacity and performance up or down to match present and future needs.
  • 100% uptime Service Level Agreement (SLA), including 24x7x365 proactive support and seamless upgrades, allowing customer IT staff to focus on strategic business needs rather than ongoing system maintenance.


The Zadara Enterprise STaaS Solution

Zadara delivers Enterprise STaaS to business and service provider clients using a portfolio of products running the same Zadara software and using the same industry-standard hardware. The essential difference between products relates to where the hardware is physically located (on-premises or in the cloud, for example) and how the storage is provisioned (block, file, or object).

A Zadara Storage Cloud can include some or all of these products and deployment options.

In every scenario, Zadara clients provision storage wherever it is needed without concerns about available performance or capacity. The Zadara solution universally delivers storage that meets all business use-cases for enterprises and service providers using a pay-per-use model supported by operating expense (OpEx) budgets.


June 22, 2018

Posted In: Blog, Industry Insights

Innovate Now at the Zadara® Summit 2018

Discover why the most successful enterprise and service provider businesses worldwide are innovating now to gain competitive advantage using Zadara storage-as-a-service (STaaS) solutions. Zadara delivers private, hybrid, and public cloud storage with comprehensive enterprise functionality.


May 18, 2018

Posted In: Company News, Industry Insights


From On-Premises Storage-as-a-Service to Cloud-Based Storage: Phased Adoption

Organizations migrate on-premises storage into a cloud storage-as-a-service environment for a number of reasons. The migration may be mandatory because of a merger or acquisition, in which data from another organization has to be migrated to a new or existing environment, or because your business segment has been sold, which requires you to migrate storage elsewhere. In many cases, migration is voluntary in a quest for better service delivery. Cloud services can cut costs, increase flexibility and in some cases, ensure better service performance.

Your migration can be handled in one of three ways: big bang, parallel, and phased. Let’s take a look at the pluses and minuses of each.

Big Bang Migration

Big bang migrations move data to the new location and immediately direct users to access data elsewhere. It’s the proverbial ripping off of the Band-Aid in which everyone starts something new all at once instead of slowly migrating over. The frustrations that are inevitable with any migration are intense, but you get them out of the way in a contained time period (hypothetically speaking). The costs tend to be lower, and you avoid dealing with intermediary solutions or using two operating systems at once.

The bad news about big bang migrations is that they’re high risk. Failure can lead to long periods of downtime and intense frustration as users are corralled into a new system all at once. Big bang migrations can work well if companies are moving only small amounts of data, or if they’re migrating data from only a few offices. A large transition performed big bang style can lead to major interruptions, which most organizations can’t afford.

Parallel Migration

Parallel migration sets up a complete new storage-as-a-service environment that runs concurrently with the old on-premises storage. The two environments run side by side until the organization is ready to switch the old one off. Parallel migration mitigates risk somewhat, since the old environment stays functional while the parallel environment is established.

It may be worthwhile to run both on-premises storage and remote storage-as-a-service arrays concurrently for applications that you’d have to recover very quickly and with minimal data loss. Establishing and managing two complete environments at once, however, gets expensive, in terms of both infrastructure usage and personnel costs, so migrating everything in this way gets cost prohibitive very quickly.

Phased Migration


Phased migration, also called iterative migration, migrates your stored data over incrementally until you’re fully up and running in the cloud. You can migrate one or a few offices at a time, or you can start with applications that have few or no interdependencies. Phased migration gives users a chance to get used to new ways of doing things and gives you time to figure out hiccups as you go without the risk that comes with big bang migration. It can also be more complex to manage as you bridge old and new storage to keep applications functional while you transition.

For a deeper dive into phased migration with VPSA Storage Array, check out our case study: VPSA in the Cloud. The quick summary is that VPSA Storage Array makes it easy to asynchronously replicate data, whether you’re replicating to a different pool within the same VPSA Storage Array, to a remote VPSA Storage Array (either within your Zadara Cloud or to a completely different geographic location) or to a completely different cloud provider. Data is written to your local VPSA storage array in real time and then, asynchronously and in the background, it’s replicated to a remote VPSA Storage Array.

In addition to being a great disaster recovery and backup solution, this per-volume remote mirroring creates a natural, seamless phased migration. Because replication is snapshot based, only data that’s changed gets replicated, and only the most recent change is synchronized, which saves bandwidth and ensures minimal service performance impact. When all volumes are migrated to the cloud, you can start running your new workloads there.
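Snapshot-based replication of this kind can be modeled simply: each cycle diffs the current snapshot against the last replicated one, so repeated writes to the same data collapse into a single transfer. The following Python sketch is purely illustrative (the snapshot and block names are invented; this is not the VPSA implementation):

```python
# Illustrative model of snapshot-based replication: only blocks that
# differ from the previously replicated snapshot are sent, and multiple
# overwrites between snapshots collapse into one transfer.

def snapshot_diff(prev: dict, curr: dict) -> dict:
    """Blocks that changed (or appeared) since the previous snapshot."""
    return {blk: data for blk, data in curr.items() if prev.get(blk) != data}

# Local volume state at two snapshot points:
snap_t0 = {"blk0": "v1", "blk1": "v1"}
# Between snapshots, blk1 was overwritten twice and blk2 was created;
# only the final contents appear in the next snapshot.
snap_t1 = {"blk0": "v1", "blk1": "v3", "blk2": "v1"}

delta = snapshot_diff(snap_t0, snap_t1)
print(delta)   # {'blk1': 'v3', 'blk2': 'v1'} — blk0 is never re-sent
```

Because the intermediate overwrite of blk1 never leaves the local array, bandwidth use scales with what changed, not with how often it changed.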

Choosing the Right Mix

Most migrations involve some mix of big bang, parallel and phased. The right balance depends on your budget, your timeframe, and your risk threshold. Our VPSA Storage Array is designed to make phased migration easier and more cost-effective, giving you a way to control costs without introducing the risk that comes with moving everything overnight. For more information about the advantages gained with Zadara’s VPSA Storage Array, read the case study.

January 25, 2018

Posted In: Industry Insights



Zadara Storage Offers Storage-as-a-Service Disaster Relief to Businesses Impacted by Recent Hurricanes

Following the devastating Hurricanes Harvey, Irma, and Maria affecting regions in Texas, Florida, and Puerto Rico, Zadara Storage has dedicated storage resources to help businesses recover. Businesses that have lost their data storage to the destruction of the hurricanes are being provided with six months of storage, at no cost, to replace their damaged equipment.

Thousands of businesses have been affected by the recent natural disasters, and as a result, normal business operations may come to a halt. Without access to mission critical data, such as financial systems, customer information, or human resource directories, companies lose precious time waiting upon lengthy insurance claim processes.

Instead, Zadara Storage has acted quickly to alleviate delays by offering immediate disaster relief in two ways: in the cloud or on-premises.

Zadara Storage On-Premises — Zadara Storage sends storage equipment to a business’s data center to begin restoration with the goal of becoming operational as quickly as possible.

Zadara Storage In the Cloud — Zadara Storage also provides an “In the Cloud” solution which connects to one of Zadara’s public cloud partners (Amazon Web Services, Google Cloud Platform, and Microsoft Azure). Businesses can restore their data to Zadara Storage’s enterprise cloud environment and directly connect to public cloud compute.

“Our team was troubled by the incredible destruction of these storms and we want to help,” said Nelson Nahum, CEO and co-founder of Zadara Storage. Nahum explains that although Zadara Storage does not have huge resources to offer, they are doing their part to alleviate some of the disaster that businesses have experienced.  Nahum concludes, “The cost for storage and IT infrastructure is considerable, and businesses shouldn’t wait until insurance claims clear. We can help now.”

Impacted organizations can request assistance from Zadara Storage online and begin the restoration of their IT infrastructure. Zadara Storage is offering up to 1PB of Cloud and/or On-Premises-as-a-Service storage, at no charge for six months. Additional details can be found here. Companies can end the service after the six months, no questions asked.

October 16, 2017

Posted In: Company News


Bring Cold Object Storage to Your Private Cloud

In today’s computing environment, more and more companies are beginning to work with massive datasets, ranging into the hundreds of petabytes and beyond. Whether it’s big data analytics, high-definition video, or internet-of-things applications, the necessity for companies to handle large amounts of data in their daily operations continues to grow.

Historically, enterprises have managed their data as a hierarchy of files. But this approach is simply inadequate for efficiently handling the huge datasets that are becoming more and more common today. For example, public cloud platforms such as Amazon Web Services (AWS) and Microsoft Azure, which must service many thousands of users simultaneously, would quickly become intolerably unresponsive if every user data request meant traversing the folders and subfolders of multiple directory trees to find and collect the information needed for a response.

That’s why modern public cloud platforms, and other users of big data, use object storage in place of older file systems. And as the use of private clouds grows, they too are employing object storage to meet the challenges of efficiently handling large amounts of data.


What Is Object Storage?

With object storage, there is no directory tree or folders. Instead, there is a flat global namespace that allows each unit of stored data, called an object, to be directly addressed.

Each object contains not only data, but also metadata that describes the data, and a global ID number that uniquely identifies that object. This allows every object in the storage system, no matter where it might be physically stored, to be quickly retrieved simply by providing its unique identifier.
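The idea can be illustrated with a toy flat-namespace store: each object carries data, descriptive metadata, and a unique ID, and retrieval never traverses a directory tree. This sketch is purely illustrative; it is not the VPSA Object Storage API:

```python
# Toy flat-namespace object store (illustrative only).
# Objects hold data + metadata + a globally unique ID; retrieval is
# direct addressing by ID, with no folders or subfolders to traverse.

import uuid

class ObjectStore:
    def __init__(self):
        self._objects = {}                     # flat global namespace

    def put(self, data: bytes, **metadata) -> str:
        object_id = str(uuid.uuid4())          # globally unique ID
        self._objects[object_id] = {"data": data, "metadata": metadata}
        return object_id

    def get(self, object_id: str) -> bytes:
        # Direct addressing: one lookup, regardless of dataset size.
        return self._objects[object_id]["data"]

    def search(self, **criteria):
        """Find object IDs by metadata, without reading the data itself."""
        return [oid for oid, obj in self._objects.items()
                if all(obj["metadata"].get(k) == v for k, v in criteria.items())]

store = ObjectStore()
oid = store.put(b"frame-0001", content_type="video/mp4", project="campaign-a")
store.put(b"report-data", content_type="text/csv", project="campaign-a")

assert store.get(oid) == b"frame-0001"
print(store.search(project="campaign-a"))   # IDs of both objects
```

Note how the metadata search never touches the stored data, which is the property the next section's bullets build on.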

Why Object Storage is Well Suited To Private Clouds

When it comes to handling massive datasets in a cloud environment, object storage has a number of unique advantages. Let’s take a look at some of these:

  • It’s infinitely scalable. Because of its flat namespace, an object storage system can theoretically be scaled without limitation simply by adding objects, each with its own unique ID.
  • Metadata makes searching easy. The metadata that accompanies each object provides critical information about the object’s data, making it easy to search for and retrieve needed data quickly and efficiently without having to analyze the data itself.
  • It’s highly robust and reliable. VPSA Object Storage differs from traditional RAID-based redundancy by using a distributed “Ring” topology under the hood. Zadara Object Storage offers 2-way or 3-way replication as options, which customers choose at creation time. By using erasure coding (instead of RAID) to achieve continuous and efficient replication of data across multiple nodes, an object storage system automatically backs data up and can quickly rebuild data that is destroyed or corrupted. Nodes can be added or removed at will, and the system uses Swift’s underlying Ring replication to ensure that new objects are incorporated, and removed ones rebuilt, automatically and transparently.
  • It simplifies storage management. The metadata of an object can contain as much (or as little) information about the data as desired. For example, it could specify where the object is to be stored, which applications will use it, the date when it should be deleted, or what level of data security is required. Having this degree of detail available for every object allows much of the data management task to be automated in software.
  • It lowers costs. Object storage systems don’t require expensive specialized storage appliances, but are designed for use with low-cost commodity disk drives.


Zadara VPSA Object Storage

Zadara offers an object storage solution that incorporates all the advantages discussed above, and then some. VPSA Object Storage is specifically designed for use with private as well as public clouds. It is especially suited to storing relatively static data such as big data or multimedia files, or for archiving data of any type. VPSA Object Storage provides anytime, anywhere, any-device remote access (with appropriate access controls) via HTTP.

The VPSA Object Storage solution, which is Amazon S3 and OpenStack Swift compatible, features frequent, incremental, snapshot-based, automatic data backup to object-based storage, eliminating the need to have separate backup software running on the host.

If you would like to explore how Zadara VPSA Object Storage can help boost your company’s private cloud, please contact us.

October 10, 2017

Posted In: Industry Insights

