New (and Free!) EMC Storage Plug-in for Oracle Enterprise Manager (OEM) 12c

first published here:


The EMC Storage Plug-in for Oracle Enterprise Manager 12c is the next generation of EMC’s storage plug-ins for Oracle Enterprise Manager, bringing new insight to Oracle DBAs who run their databases on EMC storage. While previous versions of the plug-ins were specific to individual EMC storage systems, this brand-new version has been completely re-architected and is designed to work with a range of EMC storage systems, including VMAX, VMAX 3, VNX Block, VNX File, and XtremIO.


The EMC Storage Plug-in for OEM 12c gathers availability, performance, and configuration information for EMC Storage Systems and marries that information with Oracle database host information. This provides the DBA with a comprehensive look into the performance of the specific storage used by each database – significantly simplifying the task of tracking database performance and isolating issues.


Key Advantages:

  • Gain enterprise-wide visibility into EMC storage. Makes EMC storage-related performance, health, and availability metrics available within OEM, through an enhanced plug-in interface targeted at EMC storage analysis.
  • Relationship mapping between EMC storage and Oracle databases. Greatly reduces troubleshooting times and simplifies security & compliance management.
  • Integration between OEM and EMC storage delivers reduced MTTI & MTTR. Customizable Information Publisher reports and historical metric collection assist troubleshooting, diagnostics, and analysis.


The EMC Storage Plug-in is featured at Oracle’s OEM 12c Extensibility Exchange

The demo can be watched here:

Optimizing XCOPY Block Size for EMC XtremIO



While we are working on an updated reference architecture for VMware Horizon View and XtremIO, we started experimenting with many of our recommended ESXi parameters. All of the parameters are well documented in the user guide, and you can also apply them all AUTOMATICALLY using the EMC VSI (Virtual Storage Integrator) vCenter plugin, as you can see below.

However, one parameter was never inspected thoroughly: the default XCOPY block size that VMware uses. The default size is 4MB, and when I tried a larger block size, it didn’t give any better performance in terms of time or latency.



I asked myself whether a smaller block size could yield different results, so from 4MB I went to the other extreme: 4KB, 8KB, and 16KB. That didn’t improve anything; in fact, it made the cloning operation slower. Finally, I tested a block size of 256KB, and I was amazed!

First, in order to change the block size, you need to run the following command on each ESXi host. It doesn’t require a reboot, and it’s a global parameter, which means that every volume connected to this host will run XCOPY with the specified transfer size.
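The original screenshot with the exact command isn’t reproduced here, so here’s a sketch of what it looks like with esxcli, assuming the standard /DataMover/MaxHWTransferSize advanced setting (value in KB):

```shell
# Inspect the current XCOPY transfer size (the default is 4096 KB, i.e. 4MB)
esxcli system settings advanced list --option /DataMover/MaxHWTransferSize

# Change it to 256KB; takes effect immediately, no reboot required
esxcli system settings advanced set --int-value 256 --option /DataMover/MaxHWTransferSize
```

Remember to run this on every ESXi host in the cluster, since the setting is per-host.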



I then started to clone 500 VMs. The bandwidth was very high, but the shocker was the reduced time and latency of the XCOPY operation: the total time to clone 500 Windows 7 VMs with Office 2010 and the latest patches was 30 minutes!


Latency – Redefined!


What you see above is not a mockup; this was the latency I observed during the cloning operation. Granted, I didn’t power on the VMs after the cloning, as I just wanted to test the effect of the 256KB block size on the cloning operation. Less than 1ms latency!!!

Below you can see a demo I recorded showing the entire process. We are still validating all of this in order for this parameter to find its place in the next user guide.


XCOPY Block Size Optimization for EMC XtremIO

Protecting your VMs with EMC Appsync & XtremIO Snapshots


One of my favorite products at EMC just GA’d a new version that now supports XtremIO, a double party! Now let’s talk about replica management challenges:

Replica Management Challenges

Maintaining System Availability.

Minimize Data Loss and Downtime.

Maximize SLA Performance.

Storage Efficiency.

Fastest Recovery Using Smallest Storage Footprint.

Agile Development.

Rapidly Deploy Refreshed Environments Without Compromising on Time and Quality.


Application Admins need to manage recovery and repurposing.

OK, but how is replica management related to EMC AppSync?


At its core, EMC AppSync gives you a very simple way to manage your replicas.


AppSync is Simple to Install: AppSync has an EMC Unisphere-like interface and can centrally push and upgrade its agents to application servers.

AppSync is Simple to Manage: AppSync has an intuitive SLA-driven management interface that provides real-time RPO protection status information.

AppSync is Simple to Order: AppSync is now part of the EMC VNX Application Protection Suite and is part of the Total Protection and Total Efficiency Packs. You may already have the ability to use this product, all it takes is a zero dollar sales order with your EMC Sales rep to make it official.


The product is SLA driven and you can receive alerts when SLAs are not being met.

In this example, our Exchange Bronze (local copy plan) is doing fine as indicated by our green status in the AppSync dashboard.

But VMware and SQL aren’t doing so well because it looks like something may be wrong with the remote link.

AppSync will provide a dashboard status based on the status of the copy, and can also send an alert to the email recipients.




With EMC Virtual Storage Integrator (VSI), admins can…

• Configure protection

• Run service plans

• Mount datastores

• Restore files

All within vCenter!

As of version 2.1, EMC Virtual Storage Integrator (VSI) 6.4 must be used for AppSync management within vCenter. This is awesome because it allows you, the vSphere administrator, to manage AppSync directly from the GUI, the “portal” you are comfortable with: the vCenter web interface.

Here’s another screenshot from the vCenter web interface.


What’s the “magic sauce” between AppSync and XtremIO? Well, it’s the unique way we implement snapshots on XtremIO.

This version of AppSync (2.1) focuses on backing up VMware vSphere datastores and Oracle / MS SQL databases (virtual or physical).


The recovery operation is fairly trivial: based on the three pre-defined SLAs, you can select a local copy, a remote one, or a combination of both, again using the super-easy GUI. For database protection, we also embed the XtremIO VSS provider with AppSync, which is needed for application-consistent backups (the MS SQL use case).


The other use case, which is more applicable to the database world (and boy, have I had many conversations with customers about it), is the ability to “refresh” your test/dev environment. The workflow allows you, for example, to take a copy of your production DB and refresh your test/dev copies with the one you just took. Note that this version of AppSync (2.1) and XtremIO (3.0) support mounting the new DBs but do not support refreshing the DB(s) itself; this feature will come in a later version.



Downloading it and taking it for a spin is super easy: just go to the URL above and test it for 30 days. I’m SURE you are going to love it!

Below you can see a demo I recorded that shows it all. Enjoy.

Appsync 2.1 Integration with XtremIO

EMC ScaleIO 1.31–VMware Kernel “inside”



Wow, I’m really happy: the ScaleIO team at EMC has just GA’d version 1.31 of the product. It contains many new features and improvements, but the big news is the direct integration of the “SDC” component into the ESXi kernel. From an alliance point of view, it shows the power of the Federation; for the techies, it will provide a huge boost in performance, which IMHO proves why a kernel driver for software-defined products like VSAN and now ScaleIO is such a huge thing!

Version 1.31 also adds several features that apply to all platforms, including improved security, SNMP support, and updates to integration features such as the REST API and OpenStack.

VMware Enhancements Overview

ScaleIO 1.31 introduces several changes in the VMware environment.
First, it provides the option to install the SDC natively in the ESX kernel instead of using the SVM to host the SDC component. The v1.31 SDC driver for ESX is VMware PVSP certified and requires a host acceptance level of “PartnerSupported” or lower on the ESX hosts. Note that the SVM will continue to be required for the MDM and the SDS components.
Next, 1.31 includes improvements in the installation process. Specifically, the plugin registration process has been simplified. In addition, deployment using the plugin automatically creates a separate ScaleIO VM to host the Installation Manager and ScaleIO gateway. This will allow for future version upgrades using the Installation Manager in the VMware environment.
Lastly, additional management capabilities are offered in the plugin. These include: multiple volume creation in a single step, adding an SDS or SDC to an SVM after initial deployment, and migrating the SDC from the SVM into the ESX kernel.

Integrated SDC in ESX

With Version 1.31, it becomes possible to install the SDC for an ESX server directly in the ESX
kernel. The implementation uses a vSphere Installation Bundle or VIB file, which is included
with the 1.31 software distribution. Relative to the SVM-based SDC, the VIB-based SDC
provides improved performance with increased IOPs, increased bandwidth capability, and
reduced latency. With the SDC running natively in ESX, there is no longer a requirement to
use iSCSI to present ScaleIO volumes from the SDCs to the ESX hosts. It is possible to
migrate the SDC from the SVM into the ESX kernel. The plugin provides a feature to perform
this operation.
Note that the ESX version of SDC requires VMware version 5.5. For prior versions of VMware,
the SVM-based SDC must be deployed.
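As a rough sketch, preparing a host for the VIB-based SDC and installing the bundle by hand might look like the following; the bundle path and file name are illustrative, and the plugin-driven migration described above is the supported route:

```shell
# The PVSP-certified driver needs a host acceptance level of PartnerSupported or lower
esxcli software acceptance set --level=PartnerSupported

# Install the SDC bundle shipped with the 1.31 distribution (path and name illustrative)
esxcli software vib install -d /vmfs/volumes/datastore1/scaleio-sdc-1.31.zip
```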


ScaleIO 1.31 VMware Deployment


VMware Cluster Upgrade Path

(1.30 to 1.31)

Upgrading to 1.31 in VMware uses the same strategy as the physical cluster upgrade for Linux hosts. The only difference is that for versions 1.30 and older, all ScaleIO components in VMware (MDM, tie-breaker, SDS, SDC, and the Lightweight Installation Agent (LIA)) run inside virtual machines called ScaleIO Virtual Machines (SVMs). These components are upgraded in place, inside the SVMs, using the Installation Manager. The 1.31 plugin can be used to bring up the Installation Manager in a VM for this purpose during deployment.
After this portion of the upgrade is complete, the SDCs may be migrated from SVMs into the
ESX kernel. This applies to vSphere 5.5 only. The 1.31 plugin provides functionality to
perform this step directly from the vSphere Web Client.

Upgrading a ScaleIO 1.30 cluster to 1.31 in a VMware environment


EMC ScaleIO 1.31 Volume Provisioning in VMware


Restricted Remote Access

With Version 1.31, it becomes possible for the ScaleIO administrator to restrict access to
configuration changes to local clients in the primary MDM host only. This restriction, when
enabled, applies to remote clients on any hosts other than the primary MDM. Restriction
applies to clients of all types including the CLI, GUI, and REST clients.
When restricted access is enabled, local access clients i.e. clients in the primary MDM host are
not affected. For these clients, all operations permitted for the logged-in user’s role can be
executed. For remote access clients, only monitoring commands, i.e. ScaleIO commands permitted for the “Monitor” role, can be executed. This is regardless of the role of the logged-in user.
Shown here are the new commands to query whether remote MDM access restriction is in effect, and to enable or disable the restriction. Let’s look at some examples of these commands in action.
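Since the screenshots aren’t reproduced here, a hedged sketch of the scli invocations follows; the command names come from the text above, but the exact flag syntax should be checked against the 1.31 CLI reference:

```shell
# Query whether the remote read-only restriction is currently in effect
scli --query_remote_read_only_limit_state

# Enable the restriction: remote clients drop to monitor-only, local MDM clients are unaffected
scli --set_remote_read_only_limit_state --enable_remote_read_only_limit   # flag name assumed
```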


From a login session on the MDM, it is possible to enable the restriction using the CLI command --set_remote_read_only_limit_state. After this command is executed by the MDM, users may continue to log in to this ScaleIO cluster from remote hosts, regardless of their role.
However, they will not be able to change the state of the system. For example, they cannot create a new volume. They can continue to perform monitoring operations, for example querying all volumes in the system.
The same logic applies to access via other management interfaces for ScaleIO. Regardless of
user role, after remote restriction is activated, the GUI running on a remote host will report a
failure when that user attempts to make a change to the system.

Restricted SDC Access

For improved access control, Version 1.31 introduces a feature to restrict access to ScaleIO
volumes to an explicit set of SDCs. This is referred to as the restricted SDC access feature.
When the restriction is in place, the administrator is required to explicitly grant access to an
SDC before any volume-related operations specific to that SDC can be performed – for
example, mapping volumes to the SDC.
Several new commands have been implemented to support this feature, including
query_restricted_sdc_mode and set_restricted_sdc_mode to examine and set the state of
restriction, and add_sdc to grant SDC access. Note that add_sdc can be run for a host before
the SDC is installed on it.
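As an illustration only (flag spellings are assumptions derived from the command names above), the flow might look like:

```shell
# Check and enable restricted SDC mode
scli --query_restricted_sdc_mode
scli --set_restricted_sdc_mode --enable_restricted_sdc_mode   # flag name assumed

# Pre-approve an SDC by IP before mapping volumes to it; works even before the SDC is installed
scli --add_sdc --sdc_ip 10.0.0.21
```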


Read RAM Cache Setting per Volume

With prior versions, the SDS RAM cache was used by the entire Storage Pool.
Version 1.31 introduces the capability to specify whether the RAM cache should be used for a
given volume within a storage pool. Note that by default when a ScaleIO volume is created,
the RAM cache will be used for that volume.
To support this feature, the existing commands --add_volume and --query_volume have been enhanced, and the command --set_volume_rmcache_usage has been added.
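For illustration, creating a volume with the RAM cache disabled and re-enabling it later might look like this; the rmcache flag names are assumptions (the cache is on by default):

```shell
# Create a 16 GB volume without RAM read caching (flag name assumed)
scli --add_volume --protection_domain_name pd1 --storage_pool_name sp1 \
     --size_gb 16 --volume_name vol01 --dont_use_rmcache

# Re-enable RAM caching for that volume later
scli --set_volume_rmcache_usage --volume_name vol01 --use_rmcache
```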


SNMP Trap Overview

ScaleIO 1.31 can be configured for SNMP trap generation, enabling monitoring using third-party management tools which can receive SNMP traps.
By default, SNMP traps are not generated. The ScaleIO gateway needs to be configured to
activate traps.
ScaleIO traps are associated with events and severity levels. Each associated event includes a
code that specifies a remedial action for that event.

Platform and Integration Enhancements


With Version 1.31, ScaleIO adds support for Red Hat and CentOS version 7.0.
The REST API has been updated to accommodate all the added and changed commands.
In addition, the ScaleIO OpenStack driver package has been enhanced for compatibility with
the Juno release of OpenStack. In particular, ScaleIO 1.31 can report pool-level statistics to
OpenStack. This enables the Juno version to make an intelligent decision when provisioning
volumes, by selecting a suitable Storage Pool.

EMC Storage Analytics (ESA) 3.0 – Now With XtremIO Support !




One of the most common questions I get from our customers is about ESA support for XtremIO, and it’s here now! But what is it, exactly?

Well, if you are a VMware customer, you are very likely using vCenter Operations (aka vCOps). vCOps is monitoring software that reports on your virtual (and also physical) environment. You know what? I take that back: vCOps is an aggregator of information sources that makes sense of, and correlates between, these “islands” of information points.

A product that has been in the EMC portfolio for quite some time is the ESA plugin. In a nutshell, you can either buy it as standalone software that shows you storage-related information using the vCOps interface, or, if you are an existing vCOps enterprise customer, you can just buy the plugin and import it into your existing vCOps environment.

Recently, VMware renamed the product, which is now known as “VMware vRealize Operations Manager” (vR Ops) v6.0.

Today, we mark the release of the XtremIO pack added to the existing storage adapter; it’s one plugin (adapter) that is common to ALL the EMC storage products.


Adding the EMC solution is very easy: you just need to “add” it from the “Solutions” tab.


Once the EMC solution has been added, you will want to configure it for the specific array you bought the license for. Press the “Configure” icon on the “EMC Adapter”.


The screenshot below shows a VNX being added, but the GUI is the same for XtremIO.


Now, let’s examine the ESA XtremIO topology:


Below you can see a demo I recorded showing it end to end, including the built-in dashboards for XtremIO. You can always go ahead and create new ones to your heart’s content.

ESI For Windows Suite 3.6 & EMC XtremIO



As part of our ongoing integration with the rest of the EMC ecosystem, I’m proud to announce XtremIO integration into EMC Storage Integrator (ESI). No, this is not VSI, but it provides similar functionality for bare-metal systems running Windows Server / Hyper-V or Linux, and, in a future release, for Microsoft System Center.

So what is it, exactly?


EMC Storage Integrator (ESI) for Windows Suite is a set of tools for Microsoft Windows and Microsoft application administrators, with a GUI based on the Microsoft Management Console (MMC).

The following Windows platforms / applications are supported as of this version (3.6):


XtremIO Provisioning – Overview

Let’s now focus on adding XtremIO storage system(s), creating volumes, and viewing storage “pools” and initiator groups.


As shown in the screenshot below, these are the parameters used to add an XtremIO array in ESI.


In XtremIO, LUNs are referred to as Volumes; they can be created by right-clicking the storage system or from the Action menu on the right-hand side.


There is no concept of Storage Pools in the XtremIO array.

• All devices in XtremIO are TDEVs by default.

• Within ESI, the total capacity of the XtremIO is displayed under the Storage Pools tab.


Initiator Groups are XtremIO-specific entities, similar to Storage Groups. They contain the initiators from the host and are mapped to Volumes.

• Volume mappings are shown as masking views in the Masking View tab.


XtremIO PowerShell Cmdlets – Overview


This version of ESI also marks the first time we officially support PowerShell as a way of communicating with the XtremIO array!




Below you can see a demo demonstrating what we have just discussed, plus the future integration with SCOM.

ESI For Windows Suite 3.6 & EMC XtremIO Demo


As always, you can download the plugin from


Citrix PVS 7.6 RAM With Overflow On Hard Disk On EMC XtremIO



I wanted to take the new Citrix PVS feature for a spin: it caches much of the IOPS in the target VM’s RAM, with overflow to its drive.

Instead of just taking screenshots, I created a video log. 🙂






I then ran the same workload with the default PVS “cache to drive” setting, and the IOPS were much higher, so I have to give the feature credit: it does work, but it doesn’t eliminate the need for IOPS.

I also want to acknowledge that different customers’ VDI environments behave... differently. Take yours for a spin and make a decision for yourself.

VSI 6.3 Is Here!


We have just GA’d VSI (Virtual Storage Integrator) 6.3

If you are an XtremIO customer, this is a must-have (free to install / free to upgrade) tool.

New to this version are the following abilities:


  1. The ability to apply the ESXi / XtremIO best practices directly from the vCenter interface; no need to mess with CLI commands anymore!

  2. The ability to extend an XtremIO volume that was formatted with VMFS.

  3. The ability to run space reclamation at the datastore level, using the “-n” parameter value we recommend for the XtremIO array. This is by far my favorite feature of this release, as it allows you to invoke space reclamation (at the datastore level) directly from within vCenter.
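VSI drives this for you from vCenter, but the equivalent manual call on an ESXi host is the vmfs unmap command, where “-n” is the reclaim unit in VMFS blocks. The datastore name and the value shown are illustrative; use the value recommended in the XtremIO user guide:

```shell
# Reclaim dead space on a datastore, 20000 blocks per UNMAP iteration (value illustrative)
esxcli storage vmfs unmap -l XtremIO_DS01 -n 20000
```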


As always, download it from

Here’s a demo showing ALL the integration points of VSI and XtremIO:

VSI 6.3 & EMC XtremIO

Lately, in XtremIO



I’ve been busy, very busy, in the last couple of weeks, running tests for version 3.0 (which was GA’d last week!). More importantly, I think it is fair to say that the XtremIO product (and the flash industry in general) has reached a tipping point, a point of no return if you will. Here’s what changed:


Gartner has placed XtremIO solidly in the ‘Leaders’ quadrant of their inaugural All-Flash Array (or Solid State Array, in Gartner parlance) Magic Quadrant. The Register leaked the Magic Quadrant, and I have reprinted their image below.

XtremIO got the leading position in the “Leaders” quadrant. I can’t say we were surprised (at least, I wasn’t), as we are seeing increasing numbers of customers joining our flash revolution. On the competitive front, we are seeing more and more customers who understand why “Architecture Matters”. Since day 1, we have been very vocal that the architecture, the foundation of the house, must be solid, and that features can be added later on. More and more customers tell us that things like scale-out really matter to them, as they need the horsepower to drive the added capacity; that garbage collection must not impact array performance; and that keeping user data and metadata in memory, which is one of the unique foundations of our snapshots, matters, and more and more..

So what is the tipping point, you are probably asking yourself? Well, we are now in phase 3 of the all-flash datacenter. Let’s look at the phases:


Phase 1:

This phase covers the period between Directed Availability and the GA of the product (November 2013). Customers were mainly interested in XtremIO because of its performance; sort of, “I have this DB that just needs the lowest latency / highest IOPS you can give me.” I also heard comments like “I’m still not too sure about flash; it’s a new technology and I need to get my head around it.” All of these comments were more than fair in my eyes: XtremIO was a brand-new technology, and it HAD to prove itself in the field. And boy, it did!

Phase 2:

This phase covers the period from the GA of the product to the end of Q2 2014, and is known as the “Application Services” phase. Customers no longer saw XtremIO as just a performance tier; they started to really get their heads around things like always-on deduplication and the unique snapshot architecture, and as such began migrating their databases and test/dev environments to XtremIO, along with their VDI environments (VDI was also part of phase 1). We started to see a massive migration to our platform around this phase, but it still wasn’t a full datacenter migration.

Phase 3:

This phase covers the period from Q3 until now. We are seeing new (and returning) customers who have been telling us, “You know what, I enjoyed using XtremIO so much that it is really hard for me to look back. Can you solve my ENTIRE virtualized datacenter storage needs?”

This question is kinda tricky, since the word “entire” means many things to me, but it’s easy to answer after an analysis. For example, we have now released a (free!) Mitrend XtremIO analysis tool that can scan your entire virtualized VMFS/RDM file systems and tell you how much capacity you will need when migrating them to XtremIO; it does so while taking into consideration things like our always-on deduplication and compression.

This, to me, is the current maturity of the product. As a relatively young dad, I always think about my daughter being born, sent to kindergarten, and the following year sent to elementary school!

The critical capabilities of this phase are the following:

1. Scale Out


No longer is a dual-engine array running Active/Passive/ALUA (insert any other marketing term that tries to distract you from the main idea) sufficient. Customers MUST have the horsepower to drive the associated capacity; the VMware I/O blender and the added VMs need it, not only for today’s workload but for the years to come. If you are building isolated silos, it’s like the ’90s all over again.

2. Real-Time XtremIO Data Services


It is now mandatory to have deduplication, compression, and space-efficient snapshots. It’s also a tricky concept, because this slide could apply to many other products, but as always, the devil is in the details: if your AFA throttles back deduplication or compression because of a lack of horsepower or an Active/Passive design (see point 1), you are missing out. XtremIO provides ALWAYS-inline data services. See why you want and need a true Active/Active, scale-out architecture? The data services themselves are important for the “generic” virtualized use cases, since they give you a very good TCO for these environments; then you can ask your CIO, “why SHOULDN’T I go with XtremIO?”

3. Eco-System Integration


Because you now want to manage your entire datacenter using XtremIO, a solid architecture alone is not sufficient anymore: you need replication, you need your existing monitoring tools to just work with the new platform, and so on. This is the strength of EMC; it has ALWAYS been the strength of EMC. We don’t just sell the box; we give our customers the entire solution. For example, you can now build your DC using a VBLOCK, you can have a true Active/Active datacenter using VPLEX, you can monitor your entire DC using ViPR SRM, you can replicate your VMs using RecoverPoint for VMs (an amazing solution!), and the list goes on and on.

I hope I was able to successfully share where we were and where we are now. I encourage you to tune in to the Q3/2014 EMC earnings results to hear more. And the best part? This is just the beginning!

Here’s a video demo showing a virtualized workload running on a 4-X-Brick cluster!

XtremIO 3.0 and The Virtualized Datacenter


Then came Oracle (Open World)..


If you’re a fan of Larry (you should be; his keynotes are funny!) and you attended or watched Oracle Open World (OOW 14) this year, you saw Oracle compare their new all-flash array to XtremIO. I’m not even going to debate their silly comparison here; you can read about it here:

I just wanted to say that in 2013, Larry compared whatever they had on the truck at the time to a VMAX and to IBM, and in less than a year, it’s XtremIO?

What does Larry know? Well, it’s very common now to see XtremIO replacing Exadata. Customers are voting with their wallets for an open, super-easy-to-manage platform that outperforms Exadata. Still, kudos to Larry for the vote of confidence.

I hope I was able to share a little bit of this rollercoaster called XtremIO. The good part? It’s only the beginning. I can’t wait for Q4!


ViPR 2.1 & XtremIO

Yesterday we GA’d ViPR Controller 2.1, which adds support for EMC XtremIO. This is great because, if you are an existing ViPR customer (you can download the appliance for a free trial), you can now manage EMC and 3rd-party arrays from the same console. Let’s take a step back into what ViPR is.


EMC ViPR Software Defined Storage helps customers protect & optimize traditional workloads, helps reclaim public cloud workloads and accelerates Big Data initiatives. ViPR is an EMC-developed software solution that has two components:

The first component of ViPR is the ViPR Controller, which abstracts and pools multi-vendor storage arrays into a single virtual storage pool that can then be managed by policy. ViPR Controller automates repetitive storage provisioning tasks and provides self-service access to storage. ViPR makes it as easy to consume enterprise storage as it is to consume a public cloud storage service. ViPR Controller also provides a simple way to manage storage resources, performance, and utilization. For deeper insight and visualization across multi-vendor, traditional, and Software-Defined Storage environments, including chargeback, ViPR integrates with ViPR SRM and the Service Assurance Suite (SA).

The second component is ViPR Services. ViPR Services can be thought of as a storage engine, entirely in software, that supports multiple access methods: Object, HDFS, Block and, in the future, File. ViPR Services runs on both commodity hardware and traditional arrays. This allows customers to reuse existing storage assets and mix and match them with commodity hardware.

Customers have three ways to procure ViPR Controller:

1) As a software product that can be installed and run on customer-provided, traditional EMC or 3rd-party arrays, such as VMAX, VNX, Isilon and now XtremIO

2) As a software product that can be installed on a customer-provided OS and commodity storage

3) As a complete storage appliance which includes ViPR Controller, ViPR Services and bundled commodity hardware, which is the ECS appliance.


So how does ViPR help our customers? Our customers have told us that ViPR needs to address three main issues:

First, help me reduce my costs; help me be more efficient and avoid unnecessary purchases. Data growth is unsustainable; we need better utilization and simpler management. The era of data silos has to end. Similar to how VMware helps customers reduce the costs associated with server proliferation, ViPR helps customers reduce costs by abstracting and automating multi-vendor storage, which allows them to avoid unnecessary expenditures by increasing utilization and efficiency. In fact, independent lab testing completed by Principled Technologies in April 2014 shows that ViPR can reduce the time to provision storage by up to 85%.

Second, customers want the freedom to choose the hardware, software (the storage and data services) and management independently. ViPR provides this freedom to customers. Because ViPR manages EMC, 3rd party and commodity storage, customers can mix and match storage giving them the freedom to choose what is right for their business.

Finally, customers want the ability to deliver storage-as-a-service to their end user including multi-tenancy, elasticity, self-service access, metering and chargeback. ViPR Controller provides all this. ViPR delivers policy-based, on-demand, self service access to storage resources, allowing for multi-tenancy, metering and chargeback in conjunction with ViPR SRM. In addition, ViPR delivers the scalable economics and simplicity of public cloud services, with the reliability and control of private cloud infrastructure.

Now let’s talk about the ViPR use cases: Storage Automation and Storage-as-a-Service.

The primary use case for ViPR Controller is storage automation which provides centralized management & provisioning across multi-vendor environments.

There are several challenges that customers are facing with regards to storage automation, including:

  • Storage management lifecycle takes too long and is manual/complex
  • Administration & repetitive manual processes consume storage administrator cycles
  • Multi-Vendor enterprise storage platforms are managed as silos
  • Human errors causing downtime and/or reliability issues
  • Explosive data growth is making the situation worse

The ViPR Controller solves for these challenges by:

  • Reducing storage administration costs and downtime through policy-based automation, which improve efficiency and minimize human errors
  • Transforming Multi-Vendor storage platforms into ONE software storage platform from which underlying storage capabilities can be advertised as services via a self-service catalog
  • Easily implementing a ViPR Controller solution without changing any storage processes

Our next use case, storage-as-a-service, focuses on delivering a policy-driven, on-demand self-service catalog. IT administrators typically want this model in place so that they no longer have to act as the interface between the consumers and the storage systems, and so that they can regain control from public cloud storage offerings. Additionally, this is a conversation for the lines of business, which can’t wait for IT to provision storage, something that often slows down application development; this in particular is where we see the emergence of ‘shadow IT’.

The primary challenge associated with storage-as-a-service is the need for strong, policy-based automation and orchestration across complex storage environments. In fact, we often see a certain amount of fragmented, home-grown automation implemented inside customers’ environments.

The EMC ViPR Controller, in conjunction with the customer’s choice of orchestrator (e.g. VMware vCO/vCAC, ServiceMesh, ServiceNow, etc.), helps solve these challenges by:

  • Transforming complex multi-vendor storage environments into ONE software storage platform from which underlying storage capabilities can be advertised as services via a self-service catalog.
  • Reducing time to service completion through policy-based automation
  • Easily implementing a ViPR Controller solution without changing any storage processes