RecoverPoint 5.0 SP1 is out, here’s what’s new for both the classic and the RP4VMs versions, Part #1: Physical RecoverPoint


This is part 1 of what’s new in the RecoverPoint 5.0 SP1 release, covering the classic (physical) version.

Partial Restore for XtremIO

  • An added option to the recover production flow for XtremIO
    • Available for DD since RP 5.0
  • Allows you to select which volumes to restore (previously, only a restore of all CG volumes could take place)
  • During partial restore:
    • Transfer is paused to all replica copies until the action completes
    • Only selected volumes are restored
    • At production – selected volumes are set to NoAccess; non-selected volumes remain Accessible
    • Like the current recover production behavior:
      • At the replica journal – images newer than the recovery image are lost
      • After production resumes, all volumes undergo a short init


  • A partial restore from an image that does not contain all the volumes (when adding/removing RSets) shows the suitable volumes available for restore, but the restore might remove RSets that were added after that image was created
    • Exactly the same as with a full restore
  • Both the production and the selected replica clusters must be running at least 5.0.1 to enable this capability
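
As an illustration of the per-volume access rules above, here is a minimal Python sketch (the volume names are hypothetical, and this models only the documented behavior, not any RecoverPoint API):

```python
def production_access_during_partial_restore(cg_volumes, selected):
    """Sketch of production-side access states during a partial restore:
    selected volumes are set to NoAccess while the restore runs,
    and non-selected volumes remain Accessible."""
    return {
        vol: "NoAccess" if vol in selected else "Accessible"
        for vol in cg_volumes
    }

# Example: restoring only two of three CG volumes (hypothetical names)
states = production_access_during_partial_restore(
    ["db_data", "db_log", "app_bin"], selected={"db_data", "db_log"}
)
```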

    GUI changes

    XtremIO Snapshot Expiration

  • In previous versions, snapshot deletion and manual retention policy assignment were not possible via RecoverPoint for XtremIO
  • Snapshot deletion allows the user to manually delete a PiT which corresponds to an XtremIO snap
    • This was previously possible only with DataDomain
  • A retention policy allows the user to set up expiration for a specific PiT

Applies only to XtremIO and DD copies

Snapshot deletion – GUI

Snapshot deletion – CLI

  • Admin CLI command delete_snapshot
  • delete_snapshot –help

DESCRIPTION: Deletes the specified snapshot.

group=<group name>
copy=<target copy name>

This command is relevant only for DataDomain and XtremIO copies and can only be run interactively.

  • delete_snapshot – interactive mode

Enter the consistency group name

Enter the name of the copy at which the snapshot resides

Select the snapshot you want to delete

  • A user can delete any user or system snapshot without setting retention
  • The oldest PiT cannot be deleted
  • Deleting a snapshot is only possible when there are XtremIO volumes at the remote copy
  • When there are connectivity issues to the XMS, the system keeps trying to delete the snapshot; it gets deleted once there is proper connectivity
  • RecoverPoint blocks any deletion of a PiT which is currently used for any recovery operation
  • In order to perform snapshot deletion, both clusters must be running at least RP 5.0.1

    Snapshot Retention GUI option 1

    Snapshot retention – CLI

  • Admin CLI command set_single_snapshot_consolidation_policy
  • Can only be run interactively
  • set_single_snapshot_consolidation_policy – Interactive mode

    Enter the consistency group name

    Enter the name of the copy at which the snapshot resides

    Select the snapshot whose consolidation policy you want to set

    For the snapshot retention time, enter the number and the unit (<num>months | <num>weeks | <num>days | <num>hours | none) for the desired time.

    Enter consistency type: (Default is current value of ‘Crash consistent’)

    * ‘None’ removes/cancels the retention of the snapshot

  • The retention of any user or system snapshot can be changed, provided it resides on an XtremIO or DD array
  • Retention can be set in hours, days, weeks, and months
    • The minimum is 1 hour
  • Changing the retention of a user bookmark applies to all the replica copies
  • If a PiT has a retention longer than the Protection Window, the PiT lasts until its retention time is over and is not deleted; in other words, it extends the PW
  • If a PiT has a retention shorter than the Protection Window, the PiT lasts until its retention time is over and is then treated as a regular PiT, which can be removed as part of the XtremIO-specific snapshot consolidation policy
  • In order to configure retention, all participating clusters must be running at least RP 5.0.1
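
The interplay between a PiT’s retention and the Protection Window described above can be sketched in a few lines of Python (an illustration of the documented behavior only; the dates and durations are hypothetical):

```python
from datetime import datetime, timedelta

def retention_effect(pit_time, retention, window_end):
    """Sketch of the retention rules above: a PiT is guaranteed until
    pit_time + retention. If that point lies beyond the Protection
    Window, the PiT effectively extends the window; otherwise it
    reverts to a regular PiT once its retention lapses."""
    guaranteed_until = pit_time + retention
    extends_window = guaranteed_until > window_end
    return guaranteed_until, extends_window

# A 30-day retention against a 7-day Protection Window extends the PW
pit = datetime(2017, 2, 1)
_, extends = retention_effect(pit, timedelta(days=30), pit + timedelta(days=7))
```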

    Manual Integrity Check for Snap-Based Replication

  • In previous versions, integrity check was available only for non-SBR (Snap-Based Replication) links
    • In RP 5.0.1, a manual integrity check can be performed on SBR links; this currently includes:
      • XtremIO
      • DataDomain
  • This enhanced capability enables integrity check while the transfer status is “snap-idle”
  • Manual Integrity check for SBR links – CLI
  • Integrity Check is only available via CLI
  • Interactive Mode

  • Non-Interactive Mode

  • RPO can be impacted during an integrity check and should return to normal in the next snap-shipping cycle; this is general to all integrity check operations
  • Integrity check events remain unchanged

    This release also supports ScaleIO in a heterogeneous environment!

RecoverPoint 5.0 SP1 is out, here’s what’s new for both the classic and the RP4VMs versions, Part 2: RP4VMs

Hi, this is part 2 of the post series covering what’s new in RP4VMs version 5.0 SP1. If you are looking for part 1, which covers the physical RP, look here:

OK, so on to the new stuff in RP4VMs!

  • Re-IP simplifications apply to systems running RPVM 5.0.1 and later
  • The main purpose is to simplify the operation – no need for glue scripts
  • Configured completely in the UI/plugin
  • Co-exists with the previous glue-script-based re-IP
  • Supports IPv4 or IPv6
  • Users can change IPs, masks, gateways, and DNS servers


  • Supports major VM OSs: MS Windows Server versions 2016, 2012, and 2008, Windows 8 and 10, Red Hat Linux server versions 6.x and 7.x, SUSE 12.x, Ubuntu 15.x, and CentOS 7
  • Implementation is different between Windows-based and Linux-based VMs, but the outcome is the same
  • Configuration is done with the RP vCenter Plugin
  • Requires VM Tools

  • To use the new re-IP method, which is the default setting, Network Configuration Method is set to Automatic
  • No need to fill in the “Adapter ID” with the automatic re-IP option
  • Supports multiple NICs per VM
  • Configuration is on a per-copy basis, which allows flexibility in multi-copy replication
  • During the recovery wizards (Test Copy, Failover, Recover Production), use any of the four network options

  • Protection > Consistency Groups: expand the relevant consistency group, select the relevant copy, and click Edit Copy Network
  • Available after a VM has been protected
  • Adding/removing NICs requires reconfiguration
  • If performing a temporary failover (and setting the new copy as production), configure the network settings of the former production copy so that its network configuration is not lost


  • Starting with 5.0.1, MAC replication to remote copies is enabled by default
  • By default, MAC replication to local copies is disabled
  • During the Protect VM wizard, the user has the option to enable MAC replication for copies residing on the same vCenter (local copies, and remote copies when RPVM clusters are sharing the same vCenter)
  • This can create a MAC conflict if the VM is protected back within the same VC/network
  • Available for different networks and/or VCs hosting the local copy
  • When enabled, the production VM network configuration is also preserved (so there is no need to configure re-IP)

  • Enhanced power-up boot sequence – the VM load-up indication is obtained from the OS itself (using VM Tools)
  • The “Critical” mechanism is still supported
  • Enhanced detection and configuration-conflict handling for duplicate UUIDs of ESXi nodes
  • In an all-5.0.1 system, RPVM uses VMware’s MoRef of the ESXi host and generates a new unique identifier to replace the BIOS UUID usage
  • A generic warning is displayed if an ESXi host with a duplicate BIOS UUID has been added to an ESXi cluster that is registered to RPVM and one of the entities (either a splitter or a cluster) is not 5.0 SP1
  • RPVM 5.0.1 adds support for vSphere 6.5 (including VSAN)
  • A new message was added to the Protect VM wizard when attempting to protect a VM in a non-registered ESXi cluster
  • Minimal changes in the Deployer to enhance user experience and simplify the deployment flow, mainly around NIC topology selection



RP4VMs as part of DellEMC Hybrid Cloud (EHC)

The First End-To-End Dell VDI Reference Architecture

What an awesome thing it is to have an end-to-end infrastructure Reference Architecture for your VDI deployment, not just the storage part; that part is easy, just add XtremIO.

This end-to-end VDI solution is currently available as a Reference Architecture. It showcases different scale points in terms of desktop/user densities, with XtremIO as the VM storage and Unity for the user shares. In the next phase, it will become a validated Nodes/Bundles solution: an orderable, pre-packaged offering that simplifies procurement for customers.

VMware Horizon RA Link:

Citrix XenDesktop RA Link:

An Important APD/PDL Change in the XtremIO Behavior


We have recently released a new target code for our arrays, known as XIOS 4.0.15.

This release contains many fixes, including a change in the APD/PDL behavior when using ESXi hosts; this is based on feedback from many customers who asked us to tweak the array’s APD/PDL behavior.

If you are new to this concept, I highly suggest you start reading about it here:

And an older article from the ESXi 5 days here,

Permanent Device Loss (PDL) is a condition where all paths to a device are marked as “Dead.” Because the storage adapter cannot communicate to the device, its state is “Lost Communication.” Similarly, All Paths Down (APD) is a condition where all paths to a device are also marked as “Dead.”
However, in this case, the storage adapter displays the state of the device as “Dead” or “Error.”
The purpose of differentiating PDL and APD in ESX 5.x and higher is to inform the operating system whether the paths are permanently or temporarily down. This affects whether or not ESX attempts to re-establish connectivity to the device.

A new feature in XtremIO 4.0.15 allows the storage administrator to configure the array to not send “PDL” as a response to a planned device removal. By default, XtremIO configures all ESX initiators with the PDL setting.
In planned device removals where the cluster has stopped its services and there are no responses received by the ESX host for I/O requests, ESX Datastores will respond with PDL or APD behavior depending on the XtremIO setting.

An XMCLI command is used to enable all ESX initiators as “APD” or revert back to “PDL.” A new option is available for the modify-clusters-parameters admin-level command, which modifies various cluster parameters:
device-connectivity-mode=<apd, pdl>
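
Assuming the usual XMCLI key=value syntax, an invocation switching a cluster to APD behavior would look roughly like this (the cluster-id argument is my assumption here; consult the User Guide for the exact form):

```
modify-clusters-parameters cluster-id=1 device-connectivity-mode=apd
```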

In any case, if you have a new or existing XtremIO cluster and you are using ESXi hosts (who doesn’t?), please set the initiators type to “ESX”

Refer to the XtremIO Storage Array User Guide or the help in XMCLI for more information on its usage.

The version that follows 4.0.15-20 will have APD turned on by default for newly installed arrays.

VSI 7.1 Is Here, vSphere 6.5 supported!


We have just released the 7.1 version of our vSphere vCenter plugin. If you are new to the VSI plugin, I highly suggest you start with these posts here.

VSI enables VMware administrators to provision and manage the following EMC storage systems for VMware ESX/ESXi hosts:

  • EMC Unity™
  • EMC UnityVSA™
  • EMC ViPR® software-defined storage
  • EMC VMAX All Flash
  • EMC VMAX3™
  • EMC eNAS storage
  • EMC VNX® series storage
  • EMC VNXe1600™ and VNXe3200™ storage
  • EMC VPLEX® systems
  • EMC XtremIO® storage

Tasks that administrators can perform with VSI include storage provisioning, storage mapping, viewing information such as capacity utilization, and managing data protection systems. This release also supports EMC AppSync®, EMC RecoverPoint®, and EMC PowerPath®/VE.

New features and Changes

This release of VSI includes support for the following:

  • VMware vSphere Web Client version 6.5 tolerance
  • Multiple vCenter server IP addresses


  • Restoring deleted virtual machines or datastores using AppSync software



  • Space reclamation with no requirement to provide the ESXi host credential (vSphere Web Client 6.0 or later)


  • Viewing storage capacity metrics when provisioning VMFS datastores on EMC XtremIO storage systems


  • Enabling and disabling inline compression when creating VMFS datastores on EMC Unity storage systems version 4.1.0 or later
  • Space reclamation on Unity storage systems
  • Extending Unity VVol datastores
  • Viewing and deleting scheduled tasks, such as space reclamation, from the VSI plug-in
  • Enabling and disabling compression on NFS datastores on VMAX All Flash/VMAX3 eNAS devices
  • Viewing the compression property of a storage group when provisioning VMFS datastores on VMAX All Flash storage systems
  • Path management on Unity storage arrays using VMware NMP and EMC PowerPath/VE version 6.1 or later

You can download VSI 7.1 from here



XtremIO Plugin for VMware vRealize Orchestrator 1.1.0 is out.


We have just released an update for our XtremIO vRO adapter, if you are new to the adapter, you can read about it here

You can download the new version from here

And the documentation from here

VMware vRealize Orchestrator is an IT process automation tool that allows automated management and operational tasks across both VMware and third-party applications.
Note: VMware vRealize Orchestrator was formerly named VMware vCenter Orchestrator.
The XtremIO plugin for vRealize Orchestrator facilitates the automation and orchestration of tasks that involve the XtremIO Storage Array. It augments the capabilities of VMware’s vRealize Orchestrator solution by providing access to XtremIO Storage Array-specific management workflows.
This plugin provides many built-in workflows and actions that can be used by a vRealize administrator directly or for the construction of higher-level custom workflows to automate storage tasks. The XtremIO plugin for vRealize Orchestrator provides major workflow categories for storage configuration, provisioning, LUN and VM snapshot backup, and VM recovery.

Basic Workflow Definition
A basic workflow is a workflow that for the most part represents a discrete piece of XtremIO functionality, such as creating a Volume or mapping a Volume. The workflows in the management folders for Consistency Groups, Cluster, Initiator Groups, Protection Scheduler, Snapshot Set, Tag, Volume and XMS are all examples of basic workflows.

High-level Workflow Definition
A high-level workflow is a collection of basic workflows put together in such a way as to achieve a higher level of functionality. The workflows in the XtremIO Storage Management folder and the XtremIO VMware Storage Management folder are examples of high-level workflows. The Host Expose Storage workflow in the VMware Storage Management folder, for example, allows a user to create Volumes, create any necessary Initiator Groups, and map those Volumes to a host, all from one workflow. All the input needed for this workflow is supplied prior to the calling of the first workflow in the chain of basic workflows.

Accessing XtremIO Workflows

Expand the XtremIO folder to list folders containing specific XtremIO and VMware
functionality. The following section provides information on each of the workflows.

XtremIO CG Management Folder

Workflow Name – Description
CG Add Volume – Adds one or more Volumes to a Consistency Group.
CG Copy – Creates a copy of the Volume(s) in the selected Consistency Group.
CG Create – Allows you to create one Consistency Group and optionally supply a Consistency Group Tag to the created Consistency Group.
CG Delete – Deletes the specified Consistency Group.
CG List – Lists all known Consistency Groups for a given cluster.
CG List Volumes – Lists all Volumes in a Consistency Group.
CG Protection – Creates a Protection of the Volume(s) in the selected Consistency Group.
CG Refresh – Refreshes a Consistency Group from the selected copy of the Consistency Group.
CG Remove Volume – Removes one or more Volumes from a Consistency Group.
CG Rename – Renames the specified Consistency Group.
CG Restore – Restores a Consistency Group from a selected CG Protection.

XtremIO Cluster Management Folder

Workflow Name – Description
Cluster List – Lists the cluster(s) for a given XMS Server.

XtremIO IG Management Folder

Workflow Name – Description
IG Add Initiator – Adds an Initiator to an Initiator Group.
IG Create – Allows you to create one or more Initiator Groups and optionally supply an Initiator Group Tag to the created Initiator Groups.
IG Delete – Deletes one or more Initiator Groups.
IG List – Lists all Initiator Groups based upon the supplied input criteria.
IG List Initiators – Lists all Initiators for the supplied Initiator Group.
IG List Volumes – Lists Volumes for a given Initiator Group.
IG Remove Initiator – Removes an Initiator from an Initiator Group.
IG Rename – Renames a selected Initiator Group.
IG Show – Returns Initiator Group attributes as a formatted string for a given Initiator Group, a set of specified Initiator Groups, or all Initiator Groups (if none are supplied).

XtremIO Performance Management Folder

Workflow Name – Description
Cluster Report – Lists the cluster information shown below in one of three output formats based upon the supplied input criteria.
• Output formats: CSV, HTML, Dictionary (JSON)
• Data Reduction Ratio: Cluster object field
• Compression Factor: Cluster object field
• Dedup Ratio: Cluster object field dedup-ratio-text
• Physical Capacity: Sum of SSD object field ssd-size-in-kb * 1024 (in bytes)
• Total Space: Cluster object field ud-ssd-space * 1024 (in bytes)
• Space Consumed: Cluster object field ud-ssd-space-in-use * 1024 (in bytes)
• Remaining Space: Total Space – Space Consumed (in bytes)
• Thin Provisioning Savings: 1 / Cluster object field
• Overall Efficiency: 1 / Cluster object field thin-provisioning-ratio
• Performance: Contains certain metrics for ascertaining overall cluster performance.
Datastore Report – Provides datastore performance information for a specific ESXi host.
Host Report – Provides performance information for ESXi Hosts.
IG Report – Provides performance information for Initiator Groups.
Workflow Usage Report – Returns usage count information for XtremIO Workflows that have been successfully invoked.

XtremIO Protection Scheduler Folder

Workflow Name – Description
Protection Scheduler Create – Creates a Protection Scheduler, which consists of the schedule name, an optional Tag, and associated input arguments.
Protection Scheduler Delete – Deletes a Protection Scheduler.
Protection Scheduler List – Lists the Protection Schedulers based upon the supplied input criteria.
Protection Scheduler Modify – Allows you to modify various attributes of a Protection Scheduler.
Protection Scheduler Resume – Resumes a suspended local Protection Scheduler.
Protection Scheduler Suspend – Suspends the action of a local Protection Scheduler.

XtremIO RecoverPoint Management Folder

Workflow Name – Description
RP Add – Adds a RecoverPoint Server to the list of available RecoverPoint Servers in the vRealize inventory, for use by the XtremIO RecoverPoint workflows.
RP Create CG – Creates and enables a new RecoverPoint Consistency Group from new and existing user and journal volumes.
RP Delete CG – Deletes a RecoverPoint Consistency Group but retains the user and journal storage.
RP Delete CG Storage – Deletes a RecoverPoint Consistency Group and the associated user and journal storage.
RP List – Produces a list of available RecoverPoint Servers that are present in the vRealize inventory.
RP List CGs – Produces a list of RecoverPoint Consistency Groups.
RP Modify CG – Allows modification of the RecoverPoint Consistency Group recovery point objective, compression, and number of snapshots.
RP Remove – Removes a RecoverPoint Server from the list of available RecoverPoint Servers in the vRealize inventory.

XtremIO Snapshot Set Management Folder

Workflow Name – Description
SnapshotSet Copy – Creates a copy of the Volumes in the selected Snapshot Set.
SnapshotSet Delete – Deletes a Snapshot Set.
SnapshotSet List – Returns a list of Snapshot Sets by name and/or Tag.
SnapshotSet Map – Maps a Snapshot Set to an Initiator Group.
SnapshotSet Protection – Creates a Protection of the Volumes from the selected Snapshot Set.
SnapshotSet Refresh – Refreshes a Snapshot Set from the selected copy.
SnapshotSet Rename – Renames a Snapshot Set.
SnapshotSet Unmap – Unmaps a Snapshot Set.

XtremIO Storage Management Folder

Workflow Name – Description
Datastore Delete Storage – This workflow either:
• Shuts down and deletes all VMs associated with the datastore, or
• Disconnects all the secondary datastore VMDKs from the VMs utilizing that datastore.
The workflow then unmounts the datastore as necessary from all ESXi hosts and deletes it. It then unmaps the XtremIO Volume associated with the datastore and deletes it. If the Volume to be deleted is also in an XtremIO Consistency Group, the Consistency Group is also deleted, if the group is empty (as a result of the Volume deletion). If the datastore selected for deletion is used for primary storage (i.e. VMDKs that are surfacing operating system disks, for example, C:\ for Windows), then select Yes to the question “Delete VMs associated with the datastore” prior to running the workflow.
Note: Best practice for datastore usage is to use separate datastores for primary and secondary VM storage.
Datastore Expose Storage – Allows you to create or use an existing XtremIO Volume to provision a VMFS Datastore. If an existing XtremIO Volume is used which is a copy or protection of an existing datastore, then the existing XtremIO Volume needs to be assigned a new signature prior to utilizing it. You can select to expose the storage to a single ESXi host or all ESXi hosts in a given cluster. If selecting existing Volume(s), either select them individually or select a Consistency Group that contains the Volumes to use for the datastores.
VM Clone Storage – Makes a copy of the specified production datastores and connects those datastore copies to a set of specified test/development VMs. If the datastore to be cloned contains a hard disk representing the operating system disk, the VMDK is ignored when reconnecting the VMDKs to the test/development VMs. The datastores to be copied can either be selected individually or from a specified XtremIO Consistency Group. The workflow then makes a copy of the selected Volumes. It then clones the VM-to-VMDK relationships of the production datastores to the copied datastores for the selected test/development VMs. The VMs involved follow a specific naming convention for the production and test/development VMs. For example, if the production VM is called “Finance”, then the secondary VM name must start with “Finance” followed by a suffix/delimiter, such as “Finance_001”. The production and test/development VMs must also be stored in separate folders. The workflow skips over the production VMs that do not have a matching VM in the test/development folder.
VM Delete Storage – Deletes the datastores containing the application data and the underlying XtremIO storage Volumes associated with these datastores. The VMs for which the datastores are deleted can be selected by choosing either a vCenter folder that the VMs reside in or by individually selecting the VMs. The VM itself can also be deleted by selecting Yes to the question “Delete VMs and remove all storage (No will only remove VMs secondary storage)”. If No is selected to the above question, the workflow unmounts the secondary datastore containing the application data from all hosts that are using the datastore prior to its deletion. The workflow then proceeds to unmap the XtremIO Volume associated with the datastore and then deletes it. If the Volume to be deleted is also in an XtremIO Consistency Group, the Consistency Group is also deleted, if the group is empty (as a result of the Volume deletion).
VM Expose Storage – Allows you to create or use an existing XtremIO Volume to provision a VMFS datastore and then provision either a VMDK (Virtual Machine Disk) or RDM (Raw Device Mapping) to a virtual machine. This is accomplished by calling the workflows Datastore Expose Storage and VM Add Storage from this workflow.

XtremIO Tag Management Folder

Workflow Name – Description
Tag Apply – Applies a Tag to an object.
Tag Create – Creates a Tag for a given Tag type.
Tag Delete – Deletes a list of supplied Tags.
Tag List – Lists Tags for a given Tag type.
Tag Remove – Removes a Tag from an object.
Tag Rename – Renames a Tag.

XtremIO VMware Storage Management Folder

Workflow Name – Description
Datastore Copy – Makes a copy of your underlying XtremIO Volume that the datastore is based on.
Datastore Create – Creates a VMFS datastore on a Fibre Channel, iSCSI, or local SCSI disk.
Datastore Delete – Deletes the selected datastore. This workflow either:
• Shuts down and deletes all VMs associated with the datastore, or
• Disconnects all the datastore VMDKs from all the VMs utilizing that datastore.
The workflow then unmounts the datastore as necessary from all ESXi hosts and deletes it. It then unmaps the XtremIO Volume associated with the datastore and deletes it. If the datastore selected for deletion is used wholly or partly for primary storage (i.e. VMDKs that are surfacing operating system disks, for example, C:\ for Windows), then select Yes to the question “Delete VMs associated with the datastore” prior to running the workflow.
Note: Best practice for datastore usage is to use separate datastores for primary and secondary VM storage.
Datastore Expand – Used to expand a datastore.
Datastore List – Returns the datastores known to the vCenter Server instance.
Datastore Mount – Mounts the given datastore onto the selected host, or all hosts associated with the datastore if a host is not specified.
Note: If you select to mount the datastore on all hosts and the datastore fails to mount on at least one of the hosts, the info log file of the workflow reports the hosts that the datastore could not be mounted on.
For convenience, this workflow allows you to select from pre-existing datastores that have already been mounted to ESXi hosts, or datastores that have been copied (but not yet previously mounted). Such datastores are resignatured as part of the mounting process.
Datastore Protection – Makes a protection (a read-only copy) of your underlying XtremIO Volume that the datastore is based on.
Datastore Reclaim Storage – Used to reclaim unused space in a datastore and also to delete the unused VMDK files.
Datastore Refresh – Refreshes a copy of the underlying XtremIO Volume that the datastore is based on. Once the datastore to refresh has been selected, a list of copies or protections from which to refresh is displayed. These are based on whether “Copy” or “Protection” was selected in the answer to “Restore Snapshot Type”. The list is displayed chronologically, with the most recent copy listed first. If a copy to refresh is not selected, the most recent copy is automatically refreshed. There is an option to provide a name for the Snapshot Set and suffix that will be created.
Datastore Restore – Restores the underlying XtremIO Volume that the datastore is based on from a copy that was created earlier. Once the datastore to restore has been selected, a list of copies or protections from which to restore is displayed. These are based on whether “Copy” or “Protection” was selected in the answer to “Restore Snapshot Type”. The list is displayed chronologically, with the most recent copy listed first. If a copy to restore is not selected, the most recent copy is automatically restored.
Datastore Show – Returns information about the supplied datastore:
• AvailableDisksByHost: List of hosts and available disks that can be used in creating a datastore.
• Capacity: Datastore’s total capacity in bytes.
• Expandable: True if the datastore is expandable.
• ExpansionSpace: Space in bytes that is left for expansion of the datastore.
• Extendable: True if the datastore can be extended.
• FreeSpace: Datastore’s free capacity in bytes.
• Hosts: List of hosts known to the datastore.
• Name: Name of the datastore.
• RDMs: List of RDMs known to the datastore.
• UnusedVMDKs: List of VMDKs that are not in use by a VM.
• VMs: List of VMs known to the datastore.
• VMDKs: List of VMDKs residing in the datastore and in use by a VM.
Datastore Show Storage – Returns the XtremIO Volumes, Consistency Groups, and Snapshot Sets that make up the datastore. If the workflow runs successfully but none of the output variables are populated, check the info log for messages such as this one:
[YYYY-MM-DD HH:MM:SS] [I] Unable to find one or more corresponding Volumes to naa-names — possibly unregistered, unreachable XMSServers, or non-XtremIO Volumes.
In addition to ensuring that all of the XMS Servers have been added to the vRO inventory via the XMS Add workflow, it is strongly recommended to also run the Host Conformance workflow. This ensures that all your XtremIO Volumes that are visible to ESXi hosts have their corresponding XMS Server known to vRealize.
Datastore Unmount – Unmounts the given datastore from the specified host, or all hosts associated with the datastore if a host is not specified.
Host Add SSH Connection – Adds the connection information for a given ESXi Host into the vRealize inventory. When running the workflow Host Modify Settings, you must select the SSH Connection to use in order for the workflow to connect to the ESXi Host and run properly.
Note: When supplying a value for the field “The host name of the SSH Host”, ensure this is a fully qualified domain name or IP address that allows the workflow to connect to the ESXi host via SSH.
Host Conformance – Checks either a single ESXi host or all ESXi hosts in a vCenter instance, to ensure that, for this vRealize instance, all the XtremIO Volumes that are mapped to ESXi hosts have all of the necessary XMS Servers connected. Once the report is run, it provides a list of Volumes that are conformant (have an associated XMS Server instance present for those Volumes) and those Volumes that are not conformant (do not have an associated XMS Server present for those Volumes). For those Volumes that are found to be not conformant, the XMS Add workflow should be run so that the XtremIO vRealize workflows can work properly with these Volume(s).
Host Delete Storage – Allows you to delete a list of selected Volumes mounted to a host. The Volumes can be specified by individual name, Consistency Group, or Snapshot Set. You can supply a value for all three fields or just a subset of them. As long as one Volume name, Consistency Group name, or Snapshot Set name is supplied, the workflow can be run. The selected Volumes are unmapped and deleted, and the corresponding Consistency Group is deleted (if all the Volumes have been removed).
Host Expose Storage – Exposes new or existing XtremIO Volumes to a standalone host or VMware ESXi host. Select either a list of Volumes or a Consistency Group representing a list of Volumes to be exposed to the host. If new Volumes are being created, you also have the option of creating a new Consistency Group to put the Volumes into.
Host List – Lists the hosts for a given vCenter.
Host Modify Settings – Modifies the settings for a given ESXi server. This workflow requires that an SSH Host Instance be set up for the ESXi host to be checked, prior to running this workflow. In order to set up this SSH Host Instance, run the workflow Host Add SSH Connection.
Host Remove SSH Connection – Removes an SSH Host configuration entry from the vRealize inventory.
Host Rescan – Rescans one or all hosts of a given vCenter.
Host Show – This workflow returns the following:
• WWNs for the given vCenter Host.
• Available disks for the given vCenter Host that can be used to create new datastores. The output variable can be used as input to the Datastore Create workflow.
• In-use disks for the given vCenter Host.
Workflow Name Description
vCenter Add Adds a vCenter instance to the vRealize inventory.
vCenter Cluster Delete Storage Deletes all unused XtremIO storage Volumes for each ESXi host in
a vCenter Cluster.
This workflow exits with an error when an XMS Server cannot be
located for any of the unused XtremIO storage Volumes that are to
be removed.
vCenter Cluster Expose Storage Exposes new or existing XtremIO Volumes to the ESXi hosts in a VMware Cluster.
vCenter List Lists the vCenter instances known to the vRealize inventory.
vCenter List Clusters Returns a list of vCenter Cluster objects for the supplied vCenter.
vCenter Remove Removes a vCenter instance from the vRealize inventory.
vCenter Show Returns a list of vCenter Hosts for a given Cluster object.
For each host a list of available disks is also returned.
VM Add VMDK Provisions to a VM, either a:
• New VMDK
• New RDM
• List of existing, unused, VMDKs
If the storage type selected is VMDK then a datastore must be
selected for the VMDK.
If the storage type selected is RDM then a datastore does not
need to be selected in order to create the RDM.
VM Delete Deletes a VM and releases resources, such as the primary storage allocated to the VM.
Note: This workflow should be used to remove the primary storage (“Hard disk 1”) associated with a VM.
Primary storage cannot be removed until the VM is shut down and deleted. Secondary VMDK and RDM storage are preserved.
Use the VM Delete Storage workflow to delete all of the following:
• VM
• The datastore associated with the VM
• The underlying XtremIO storage making up that VM
VM List Lists the VMs for a given vCenter or for a given vCenter host.
VM Remove VMDK Removes and deletes an RDM or VMDK from a VM.
Note: This workflow cannot be used to remove the primary
storage, “Hard disk 1” (the VM’s primary storage drive), as this
requires the VM to be shut down and deleted prior to reclaiming
the storage for “Hard disk 1”.
VM Show Lists the VMDKs, RDMs and RDM Physical LUN names for each
RDM for a given VM.

XtremIO Volume Management Folder

Workflow Name Description
Volume Copy Allows you to create one or more copies by supplying a list of source Volumes, or a list of Volume Tags.
Volume Create Allows you to create one or more Volumes and optionally supply a Volume Tag for the created Volumes. For each Volume created, if the cluster-level show-clusters-thresholds parameter shows that vaai-tp-alerts is set to a value between 1 and 100, then the add-volume vaai-tp-alerts parameter is set to enabled.
Volume Delete Deletes the list of selected Volumes or Volumes associated with a
particular Volume Tag.
Volume Expand Expands a Volume to the new size supplied.
Volume List Lists Volumes by name, size and Tag for a given cluster.
Volume Map Allows you to map Volumes by Name or by Tag for the supplied
Initiator Group.
Volume Modify Allows particular Volume parameters to be changed. The current Volume parameter that can be changed is vaai-tp-alerts (VAAI Thin Provisioning alert).
The selected Volumes can either be set to participate, or be disabled from participation, in thin provisioning threshold alerts.
At the cluster level, the thin provisioning alert level must be set to a value between 1 and 100 in order for a Volume to generate an alert. The syntax is as follows:
vaai-tp-alerts=[enabled / disabled]
Volume Protection Allows you to create one or more Protections supplying a list of
source Volumes, or a list of Volume Tags.
Volume Refresh Refreshes a Volume from the selected copy.
Volume Rename Renames the selected Volume.
Volume Restore Restores a Volume from the selected Protection.
Volume Show Returns Volume parameters as a formatted string for a given Volume, a set of specified Volumes, or all Volumes (if none are specified).
The values for naa-name, logical-space-in-use and vaai-tp-alerts are returned for the specified Volumes.
• The naa-name parameter represents the NAA Identifier that is assigned to a Volume only after it has been mapped to an Initiator Group.
• The logical-space-in-use parameter is the Volume logical space in use (VSG).
• The vaai-tp-alerts parameter controls whether a Volume participates in thin provisioning alerts. The value is either enabled or disabled.
At the cluster level, the thin provisioning alert level must be set to a value between 1 and 100 in order for a Volume to generate an alert, regardless of whether an individual Volume has vaai-tp-alerts enabled.
Volume Unmap Unmaps Volumes in one of the following ways:
• Unmaps a user-supplied list of Volumes.
• Unmaps all Volumes associated with a particular Initiator Group.
• Unmaps a selected list of Volumes associated with a particular Initiator Group.
• Unmaps all Volumes associated with a particular Tag.
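The vaai-tp-alerts behavior that the Volume Create, Volume Modify and Volume Show rows all describe boils down to one rule; here is a minimal Python sketch of it (an illustration of the documented rule, not XMS code):

```python
# Sketch of the VAAI thin-provisioning alert rule described in the table above:
# a Volume only generates an alert when the cluster-level threshold is set to
# a value between 1 and 100 AND the Volume's vaai-tp-alerts is enabled.

def volume_generates_tp_alert(cluster_threshold, volume_vaai_tp_alerts):
    """True only if the cluster threshold is in 1-100 AND the Volume opts in."""
    cluster_ok = 1 <= cluster_threshold <= 100
    return cluster_ok and volume_vaai_tp_alerts == "enabled"
```

Either condition alone is not enough: a Volume with vaai-tp-alerts enabled stays silent if the cluster threshold is unset, and vice versa.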

XtremIO XMS Management Folder

Workflow Name Description
XMS Add Adds an XMS server to the vRealize Inventory.
XMS List Lists the known XMS servers in the vRealize Inventory.
XMS Remove Removes an XMS server from the vRealize Inventory.
XMS Show Returns XMS attributes as a formatted string for a given XMS Server.
The xms-ip, server-name and cluster-names attributes are always returned for a given XMS Server.
sw-version is also returned by default but can be removed from the list of returned attributes if that information is not required.



EMC Storage Integrator (ESI) 5.0 Is Out


We have just released the 5.0 version of the ESI plugin. For those of you who aren't familiar with it, the plugin allows you to manage physical Windows, Hyper-V, or even Linux hosts from a storage perspective: volume provisioning, snapshot taking, and so on. It is similar to our VSI plugin for vSphere, but for everything else.

Here’s the supported systems matrix

From an XtremIO perspective, this release brings support for snapshot refresh using XtremIO tagging.

Important Changes To ESXi and the XtremIO SATP Behavior

Hi Everyone,

Some heads up on an important change we've been working on with VMware that is now part of vSphere 6.0, 6.5 and 5.5 P8.

If you attach an ESXi host to an XtremIO array, it will AUTOMATICALLY choose Round Robin (RR) with IOPS=1 as the SATP behavior.

A little background: because XtremIO is an Active/Active array (all of the storage controllers share the performance and ownership of the entire data set), the default ESXi SATP behavior used to be “Fixed”, which means only one path for each datastore was actually used. Of course, you could have changed it using the ESXi UI, the Dell EMC VSI plugin, etc.

One word of caution: this does not mean that you can skip the OTHER ESXi host best practices, which you can apply either by using the VSI plugin as shown below (HBA queue depth, etc.) or by using the ESXi CLI, PowerShell, scripts, etc.


But it does mean that if a customer didn't read the HCG (Host Configuration Guide) or forgot about it, many of the multipathing misconfiguration errors we have seen in the field won't be seen again. It also means much better performance out of the box!

For ESXi 5.5, you must install the latest update ( )



For ESXi 6.0, you must install the latest update – 2145663 ( )



For ESXi 6.5, it’s there at the GA build (which is the only build for now)

If you want to run a query to validate this:

# esxcli storage nmp device list

Lastly, the latest VSI plugin (7.0) doesn't support vSphere 6.5 yet; the VSI team is working on it, and I will update once I have more news.

Dell EMC AppSync 3.1 Is Out, Here’s What’s New!

This post was written by Marco Abela and Itzik Reich

The AppSync 3.1 release (GA on 12/14/16) includes major new features enabling iCDM use cases with XtremIO. Let's take a look at what's new for XtremIO:

Crash consistent SQL copies

- Protects the database without using VSS or VDI, relying on the array to create crash-consistent copies

The AppSync 3.0 release introduced a “Non VDI” feature, which creates copies of SQL using the Microsoft VSS freeze/thaw framework at the filesystem level. With AppSync 3.1, by selecting “Crash-Consistent”, no application-level freeze/thaw is done, resulting in less overhead when creating the copy of your SQL Server using only XVC. This is equivalent to taking a snapshot of the relevant volumes from the XMS.

With VNX/VNXe/Unity, the subscribed SQL databases must all be part of the same CG or LUN Group
Mount with recovery will work only with the “Attach Database” Recovery Type option
When restoring, only the “No Recovery” type is supported
assqlrestore.exe is not supported for “crash-consistent” SQL copies
Transaction log backup is not supported
Not supported on ViPR

SQL Cluster Mount

Concern Statement: AppSync does not provide the ability to copy SQL clusters using mount points, nor to mount to alternate mount paths – only the same path as the original source.
Solution: Ability to mount to alternate paths, including mounting back to the same production cluster, as a cluster resource, under an alternate clustered SQL instance, and also to mount multiple copies simultaneously as clustered resources on a single mount host.
For mount points, the root drive must be present and must already be a clustered physical disk resource
Cannot use the default mount path, i.e. C:\AppSyncMounts, as this is a system drive

FileSystem Enhancements:

Repurposing support for File Systems:
This new version introduces the repurpose workflow for File Systems compatible with AppSync. This is very exciting for supporting iCDM use cases for applications for which AppSync does not have direct application support, allowing you to create copies for test/dev use cases using a combination of filesystem freeze/thaw and scripts, if needed, as part of the AppSync repurpose flow.

Nested Mount Points, Path Mapping, and FileSystem with PowerHA are also key new FileSystem enhancements. For a summary of what these mean, see below:

Repurpose Workflows for File Systems
Concern Statement: The repurposing workflow does not support file systems, only Oracle and SQL. Epic and other file system users need to be able to utilize the repurposing workflow.
Solution: Enable the repurpose flow (wizard) for File Systems on all AppSync supported OSes and storage, including RecoverPoint Local and Remote.

Unlike SQL and Oracle repurposing, multiple file systems can be repurposed together – as seen in the screenshot.

Repurpose Workflows RULES for File Systems
When a copy is refreshed after performing an on-demand mount, AppSync unmounts the mounted copy, refreshes (creates a new copy and, only on successful creation of the new one, expires the old copy) and mounts the copy back to the mount host with the same mount options
Applicable to all storage types except for XtremIO volumes that are in a consistency group (see point below)
Not applicable for RecoverPoint repurposing
RecoverPoint repurposing on-demand mounts are not re-mounted
With VNX/RP-VNX, you cannot repurpose from a mounted copy
With VMAX2/RP-VMAX2, you cannot repurpose from a gen1 copy if the gen1 copy created is a snap
When using XtremIO CGs – the copy is unmounted (only application unmount – no storage cleanup),
storage is refreshed and is mounted (only application mount) to the mount host with same mount options
Repurposing NFS file systems and/or Unity environments is not supported.

Repurpose Workflows RULES for File Systems
File system repurposing workflows support the same rules as SQL and Oracle, such as:
Gen 1 copies are the only copy that is considered application consistent
Restores from Gen 1 through RecoverPoint (intermediary copy) or SRDF are not supported
Restores from Gen 2 are not supported
Callout scripts use the Label field
Freeze and thaw callouts are only supported for Gen 1 copies
Unmount callouts are supported for Gen 1 and Gen 2 copies
If the number of filesystems exceeds the allowable storage units (12 by default), defined in server settings for each storage type, the operation will fail
“ units” for VMAX array
“” for VNX array
“” for VPLEX, Unity, and XtremIO arrays

Persistent Filesystem Mount
Concern Statement: File systems mounted with AppSync are not persistent upon the host
Solution: Offer persistent filesystem mount options so that filesystems mounted by AppSync are automatically remounted upon a host reboot.
Applies to all file systems, including those which support Oracle DB copies
For AIX, ensure the mount setting in /etc/filesystems on the source host, is set to TRUE
(AppSync uses same settings as on source)
For Linux, AppSync modifies the /etc/fstab file:
Entries include the notation/comment “# line added by AppSync Agent”
Unmounting within AppSync removes the entry
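The /etc/fstab handling described above can be sketched as follows; the device and mount point names here are made-up examples, and only the marker comment text comes from the document:

```python
# Hypothetical sketch of AppSync's persistent-mount fstab handling on Linux.
# The marker comment is the one quoted in the text; device/mount names are
# invented for illustration only.

APPSYNC_MARKER = "# line added by AppSync Agent"

def fstab_entry(device, mount_point, fstype="ext4"):
    """Build an fstab line tagged with the AppSync marker comment."""
    return f"{device} {mount_point} {fstype} defaults 0 0 {APPSYNC_MARKER}"

def remove_appsync_entries(fstab_lines):
    """What unmount does: drop only the lines AppSync itself added."""
    return [line for line in fstab_lines if APPSYNC_MARKER not in line]
```

Tagging its own entries is what lets AppSync remove exactly the mounts it created on unmount, without touching the host's pre-existing fstab lines.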

Feature Benefit
Nested mount points Ability to copy and correctly mount, refresh and restore Nested Mount production environments, eliminating the current restriction.
Path Mapping support Ability to mount specific application copies to location specified by the user. This eliminates the restriction of allowing mounts on original path or default path only
FS Plan with PowerHA cluster AppSync will become aware of the node failover within PowerHA cluster so that all supported use cases work seamlessly in a failover scenario.

Currently, when a failover happens on one cluster node and the file system is activated on another, the File System is not followed by AppSync. For that reason this configuration is currently not supported

Repurpose Flow with FS Plan The popular Repurpose Wizard will become available with the file system plan. This will be supported on all AppSync supported OS and storage, including RecoverPoint.

The combination of all these new FS enhancements enable iCDM use cases for……


That’s right… As XtremIO has become the second product worldwide to be distinguished as “Very Common”, with more than 20% of Epic customers and counting, we have worked with the AppSync team to enable iCDM use cases for Epic. The filesystem enhancements above help enable these use cases, and further demonstrate the robustness XtremIO provides for Epic software.

Staggering Volume Deletes:
In order to avoid possible temporary latency increases, which can be caused by massive deletions of multiple volumes/snapshots with high change rates, AppSync introduces logic to delete XtremIO volumes at a rate of one volume every 60 seconds. This logic is disabled by default, and should be enabled only in the rare circumstance where this increased latency is observed. The cleanup thread is triggered every 30th minute of the hour (that is, once an hour).

The cleanup thread gets triggered every 30th minute of the hour (by default)
The cleanup thread starting Hour, Minute, and time delay can all be configured

In order to enable this, access the AppSync server settings by going to http://APPSYNC_SERVER_IP:8085/appsync/?manageserversettings=true, then go to Settings → AppSync Server Settings → Manage All Settings and change the value of “maint.xtremio.snapcleanup.enable” from “false” to “true”.
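The staggered-delete scheduling described above can be sketched as follows (a hypothetical illustration of the logic, not AppSync's actual implementation; the default trigger minute and the 60-second spacing come from the text):

```python
# Hypothetical sketch of AppSync's staggered volume-delete scheduling:
# the cleanup thread fires at minute 30 of each hour (by default) and
# deletes queued volumes at a rate of one per 60 seconds.

DELETE_INTERVAL_SECONDS = 60  # one volume per minute

def next_cleanup_run(now_epoch, trigger_minute=30):
    """Epoch time of the next cleanup run, at the given minute of the hour."""
    hour_start = now_epoch - (now_epoch % 3600)
    candidate = hour_start + trigger_minute * 60
    return candidate if candidate > now_epoch else candidate + 3600

def deletion_schedule(run_start_epoch, volumes):
    """Assign each queued volume a deletion timestamp, spaced 60 s apart."""
    return [(vol, run_start_epoch + i * DELETE_INTERVAL_SECONDS)
            for i, vol in enumerate(volumes)]
```

Deleting ten queued snapshots would thus be spread over nine minutes rather than hitting the array all at once, which is the whole point of the feature.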

All file systems from a single VG must be mounted and unmounted together (applies to nested and non-nested mount points)

XtremIO CG support for repurpose workflow:

The repurpose flow now supports awareness of applications laid out on XtremIO using Consistency Groups:

– For Windows applications (SQL, Exchange, Filesystems), all the app components (e.g. DB files, log files, etc.) should be part of the same CG (one and only one, not part of more than one CG) for AppSync to use CG-based API calls.

– For Oracle, all the DB and control files should be part of one CG, and the archive logs should be part of another CG.

What is the benefit of this? Using XtremIO Virtual Copies (XVC) to their full potential for the quickest operation time. This reduces application freeze time, as well as the overall length of the copy creation and later refresh process. During the refresh operation, AppSync will tell you whether it was able to use a CG-based refresh or not:


With CG:

You will notice the status screen mentioning that the refresh was done “..using the CG..”

To analyze this a little further, let's look at the REST calls issued to the XMS:

The snap operation was done with a single API call specifying to snap the CG:

2016-12-13 18:49:32,189 - RestLogger - INFO - rest_server::log_request:96 - REST call: <POST /v2/types/snapshots HTTP/1.1> with args {} and content {u'snap-suffix': u'.snap.20161213_090905.g1', u'cluster-id': u'xbricksc306', u'consistency-group-id': u'MarcoFS_CG'}

And the refresh specifies refreshing the Snapshot Set from the CG through a single API call:

2016-12-13 18:50:23,426 - RestLogger - INFO - rest_server::log_request:96 - REST call: <POST /v2/types/snapshots HTTP/1.1> with args {} and content {u'from-consistency-group-id': u'MarcoFS_CG', u'cluster-id': u'xbricksc306', u'to-snapshot-set-id': u'SnapshotSet.1481647772251'}
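The two logged calls can be reproduced as payload builders; here is a minimal Python sketch using only the endpoint and field names visible in the logs above (the CG, cluster and snapshot-set values are the example ones from the logs):

```python
# Build the two XMS REST payloads seen in the AppSync logs above.
# Both are POSTed to /v2/types/snapshots; the names below are just the
# example values from the logged calls.

def cg_snapshot_payload(cg_name, cluster_id, snap_suffix):
    """Payload to snap a whole Consistency Group in one call."""
    return {
        "consistency-group-id": cg_name,
        "cluster-id": cluster_id,
        "snap-suffix": snap_suffix,
    }

def cg_refresh_payload(cg_name, cluster_id, snapshot_set_id):
    """Payload to refresh an existing Snapshot Set from a CG in one call."""
    return {
        "from-consistency-group-id": cg_name,
        "cluster-id": cluster_id,
        "to-snapshot-set-id": snapshot_set_id,
    }

snap = cg_snapshot_payload("MarcoFS_CG", "xbricksc306", ".snap.20161213_090905.g1")
refresh = cg_refresh_payload("MarcoFS_CG", "xbricksc306", "SnapshotSet.1481647772251")
```

Note that both operations reference only the CG, not its member volumes; that single-call granularity is what makes the CG-based flow faster than snapping and refreshing volume by volume.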

Without CG:

You will receive a message in the status screen stating that the volumes have not been laid out in a CG, or have not been laid out as specified in the prerequisites above.

Windows Server 2016 support

Both as AppSync server & agent
Clustering support still pending qualification (check with ESM at time of GA)
Microsoft Edge as a browser is not supported for AppSync GUI

vSphere 6.5 tolerance support (ESXi 6.5) – no new functionality.

Path Mapping

AppSync currently does not allow users to modify the root path of a mount operation – a limitation that Replication Manager does not have.
Solution: Specify a complete mount path for the application copy being mounted, or change the root path for a mount host (specify a substitution), so that the layout is replicated using the substitution.
Unix Example:
/prd01 can become /sup01
Windows Examples:
E:\ can become H:\
F:\ can become G:\FolderName
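The substitution examples above amount to a root-path prefix replacement; here is a minimal Python sketch (a hypothetical helper for illustration, not AppSync's implementation):

```python
# Hypothetical sketch of AppSync-style path mapping: replace the configured
# root of the production path with its substitution on the mount host.

def map_mount_path(source_path, mapping):
    """Apply the first matching root substitution, else mount 'same as original'."""
    for src_root, dst_root in mapping.items():
        if source_path.startswith(src_root):
            return dst_root + source_path[len(src_root):]
    return source_path  # no mapping configured: same as the original path

# Unix example from the text: /prd01 becomes /sup01
print(map_mount_path("/prd01/data", {"/prd01": "/sup01"}))  # /sup01/data
# Windows example from the text: E:\ becomes G:\FolderName
print(map_mount_path(r"E:\db\mdf", {"E:\\": "G:\\FolderName\\"}))  # G:\FolderName\db\mdf
```

This also illustrates why the target root must already exist on the mount host (the H:\SomeFolder rule below): the substitution only rewrites the prefix, it does not create the drive or folder it points at.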


Mounting Windows file system examples:
When E:\ is configured to become H:\
Ensure that the H:\ IS NOT already mounted
When E:\ is configured to become H:\SomeFolder (e.g. H:\myCopy)
Ensure that the H: drive is already available, as AppSync relies on the root mount drive letter to exist, in order to mount the myCopy directory (in this example), which was originally the E:\ drive and contents
When no target mount path is configured, AppSync mounts as “Same as original path”
In this case, the job fails if Mount on Server is set to: Original Host (E:\ cannot mount back as E:\)
Currently supported on File System Service Plans and Repurposing file systems only
Path mapping is not supported with Oracle, Exchange or SQL

Progress Dialog Monitor

Concern Statement: When the progress dialog is closed, users have to go to the event window to look for updates, manually having to refresh the window.
Solution: Allow users to view and monitor the Service Plan run progress through the GUI by launching the progress dialog window, which updates automatically.

Oracle Database Restart

After a mount host reboot, the Oracle database does not automatically restart
Solution: Offers the ability to reboot a mount host on which AppSync recovered Oracle, returning the database to the same state as before the reboot
Unlike the conventional /etc/oratab entry, AppSync creates a startup script
/etc/asdbora on AIX
/etc/init.d/asdbora on Linux
A symlink to this script, named S99asdbora, is found under /etc/rc.d
Configurable through the AppSync UI (disabled by default)
Not available for RMAN or “Mount on standalone server and prepare scripts for manual database recovery”

File System Service Plan with AIX PowerHA

Concern Statement: Currently AIX PowerHA clustered environments support Oracle only – no support for clustered file systems. Epic, and other file system plan users, need support for file system clustering.
When a failover happens, the file system is not followed by AppSync
Solution: Support PowerHA environments utilizing file system Service Plans and Repurposing workflows
AppSync will become aware of the node failover within PowerHA cluster so that all supported use cases work seamlessly in a failover scenario
Setup Details:
1. Add/register all nodes of the PowerHA cluster to AppSync before registering the virtual IP
2. The virtual IP (Service label IP/name) resource must be configured in the resource group for the clustered application,
as well as added/registered to AppSync (as if it were another physical node)
Each resource group must have a unique Service label IP
3. Protect the file systems belonging to that particular resource group, rather than protecting the file systems by navigating the nodes
Note: Volume groups are not concurrent after a restore; you must manually make them concurrent.

Oracle Protection on PowerHA Changes
Previously: Setting up Oracle with PowerHA only involved adding both nodes
There was no need for a servicelabelIP/Name, as Oracle told AppSync which active node to manage
Application mapping is configured through the Application (Oracle) Service Plan, and not through the node, as a file system service plan would be configured
AppSync 3.1: Now requires registering the servicelabelIP/Name of the Oracle DB resource group
Similar to configuring AppSync to support file system service plans under PowerHA
Add all physical nodes, and then register the servicelabelIP/Name
Configure the Oracle service plan using the servicelabelIP/Name
Repurposing uses the servicelabelIP/Name
If the servicelabelIP/Name is not registered, AppSync will not discover the Oracle databases
If upgrading from a previous version to 3.1, the servicelabelIP/Name must be registered, otherwise the job fails with an appropriate message – no reconfiguration is required, simply register the servicelabelIP/Name

Copy Data Management (CDM), Or, Why It's the Most Important Consideration For AFAs Going Forward.

A guest post by Tamir Segal

Very soon after we released Dell-EMC XtremIO's copy technology, we were very surprised by an unexpected finding. We learned that in some cases, customers were deploying XtremIO to hold redundant data or “copies” of the production workload. We were intrigued: why would someone use a premium array to hold non-production data? And why would they dare to consolidate it with production workloads?

Because of our observations, we commissioned IDC to perform an independent study and drill into the specifics of the copy data problem. The study included responses from 513 IT leaders in large enterprises (10,000 employees or more) in North America and included 10 in-depth interviews across a spectrum of industries and perspectives.

The “copy data problem” is rapidly gaining attention among senior IT managers and CIOs as they begin to understand it and the enormous impact that it has on their organization in terms of cost, business workflow and efficiency. IDC defines copy data as any replica or copy of original data. Typical means of creating copy data include snapshots, clones, replication (local or remote), and backup/recovery (B/R) operations. IDC estimates that the copy data problem will cost IT organizations $50.63 billion by 2018 (worldwide).

One could think that copies are bad for organizations, that they just lead to data sprawl and waste, and that the expected question to ask is therefore: why don't we just eliminate all those copies? The answer is straightforward: copies are important and needed for many critical processes in any organization. For example, how can you develop the next generation of your product without a copy of your production environment as a baseline for the next version? In fact, there are significant benefits to using even more copy data in some use cases. However, legacy and inefficient copy management practices result in substantial waste and a financial burden on IT (did I mention $50.63 billion?).

IOUG surveyed 300 database managers and professionals about which activities take up the most time each week. The results are somewhat surprising: Figure 1 shows that 30% spend a significant amount of their time creating copies. And it does not end there; test and QA are also tasks performed on non-production copies, and patches are first tested on non-production environments.

Figure 1 – Database Management activities taking up most time each week (source: Unisphere research, efficiency isn’t enough: data centers lead the drive to innovation. 2014 IOUG survey)

What are those copies? What use cases do they support, and what are the problems today? They can be categorized under four main areas:

  • Innovation (testing or development)
  • Analytics and decision making (run ETL from a copy rather than production)
  • IT operations (such as pre-production simulation and patch management)
  • Data protection

Before getting the research results, I assumed that data protection would be the leading use case for copies. I was wrong; based on the research, there is no significant leader in data consumption.

Figure 2 – Raw Capacity Deployed by Workload (Source IDC Survey)

Another interesting data point was to see which technologies are used to create copies; per the research results, 53% used custom-written scripts to create copies.

Figure 3 – what tools are used to create secondary copies (source IDC Survey)

The copy data management challenges directly impact critical business processes and therefore have a direct impact on the cost, revenue, agility and competitiveness of any organization. But the big question is: by how much? The IDC research set out to quantify the size of the problem. Some highlights of the research are:

  • 78% of organizations manage 200+ instances of Oracle and SQL Server databases. The mean response for the survey was 346.43 database instances.
  • 82% of organizations make more than 10 copies of each instance. In fact, the mean was 14.88 copies of each instance.
  • 71% of the organizations surveyed responded that it takes half a day or more to create or refresh a copy.
  • 32% of the organizations refresh environments every few days, whereas 42% of the organizations refresh environments every week.

Based on the research results, it was found that on average a staggering 20,619 hours are spent every week by the various teams waiting for instance refreshes. Even a conservative estimate of 25% of that yields more than 5,000 hours, or 208 days, of operational waiting and waste.
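A quick sanity check of that arithmetic, using the survey figures quoted above:

```python
# Sanity-check the wait-time estimate from the IDC survey figures above.
weekly_wait_hours = 20619                  # total hours spent waiting per week (survey)
conservative = weekly_wait_hours * 0.25    # assume only 25% of that is real waste
assert conservative > 5000                 # "more than 5,000 hours"
days = 5000 / 24                           # expressed as round-the-clock days
print(round(conservative), round(days))    # 5155 208
```

So even discounting three quarters of the reported wait time, the waste still comes out above the 5,000-hour (roughly 208-day) figure the research cites.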

The research is available for everyone, and you can view it here

These results are very clear: there is a very large ROI (more than 50%) that can be realized, probably by many organizations, since 78% of companies manage more than 200 database instances and, as the research shows, the process today is wasteful and inefficient.

The Copy Data Management Challenges

It is important to understand why legacy infrastructure and improper copy data management processes have fostered the need for copy data management solutions. The need for efficient infrastructure is driven by extensive storage silos, sprawl, expensive storage and inefficient copy creation technologies. The need for efficient copy data management processes is driven by increased wait times for copies to be provisioned, low productivity and demands for self-service.

Legacy storage systems were not designed to support true mixed workload consolidation and require significant performance tuning to guarantee application SLAs. Thus, storage administrators have been conditioned to overprovision storage capacity and create dedicated storage silos for each use case and/or workload.

Furthermore, DBAs often use their own copy technologies. It is very common for DBAs to ask storage administrators to provision capacity and then use their native tools to create a database copy. One common practice is to use RMAN in Oracle and restore a copy from a backup.

Copy technologies, such as legacy snapshots, do not provide a solution. Snapshots are space-efficient compared to full copies; however, in many cases copies created using snapshot technology under-perform, impact production SLAs, take too long to create or refresh, have limited scale, lack real modern efficient data reduction, and are complex to manage and schedule.

Because of performance and SLA requirements, storage admins are forced to use full copies and clones, but these approaches result in an increase in storage sprawl as capacity is allocated upfront and each copy consumes the full size of its source. To save on capacity costs, these types of copies are created on a lower tiered storage system or lower performing media.

External appliances for copy data management lead to incremental cost and they still require a storage system to store copies. They may offer some remedy; however, they introduce more complexity in terms of additional management overhead and require substantial capacity and performance from the underlying storage system.

Due to the decentralized nature of application self-service and the multitude of applications distributed throughout organizations within a single business, a need for copy data management has developed to provide oversight into copy data processes across the data center and ensure compliance with business or regulatory objectives.

Dell-EMC XtremIO's integrated Copy Data Management approach

As IT leaders, how can you deliver the services needed to support efficiency, cost savings and agility in your organization? How can copy data management be addressed in a better way? This is how Dell-EMC can help you resolve the copy data problem at its source.

Dell EMC XtremIO pioneered the concept of integrated Copy Data Management (iCDM). The concept behind iCDM is to provide nearly unlimited virtual copies of data sets, particularly databases on a scale-out All-Flash array using a self-service option to allow consumption at need for DBAs and application owners. iCDM is built on XtremIO’s scale-out architecture, XtremIO’s unique virtual copy technology and application integration and orchestration layer provided by Dell-EMC AppSync.

Figure 4 – XtremIO’s integrated Copy Data Management stack



XtremIO Virtual Copy (XVC), used with iCDM, is not physical but rather a logical view of the data at a specific point in time (like a snapshot). Unlike snapshots, XVC is both metadata- and physical-capacity-efficient (dedup and compression) and does not impact production SLAs. Like physical copies, XVC provides performance equal to production, but unlike physical copies, which may take a long time to create, an XVC can be created immediately. Moreover, data refreshes can be triggered as often as desired, in any direction or hierarchy, enabling flexible and powerful data movement between data sets.

The ability to provide consistent and predictable performance on a platform that can scale out is a mandatory requirement. Once you have an efficient copy service with unlimited copies, you will want to consolidate more workloads. As you consolidate more workloads onto a single array, more performance may be needed, and you need to be able to add that performance to your array.

We live in a world where copies have consumers; in our case they are the DBAs and application owners. As you modernize your business, you want to empower them to create and consume copies when they need them. This is where Dell-EMC AppSync provides the application orchestration and automation for application copy creation.

iCDM is a game changer and its impact on the IT organization is tremendous; XtremIO iCDM enables significant cost savings, provides more copies when needed and supports future growth. Copies can be refreshed on demand; they are efficient, high-performance and carry no SLA risks. As a result, iCDM enables DBAs and application owners to accelerate their development time and trim up to 60% off the testing process, have more test beds and improve product quality. Similarly, analytical databases can be updated frequently so that analysis is always performed on current data rather than stale data.

Figure 5 – Accelerate database development projects with XtremIO iCDM

More information on XtremIO’s iCDM can be found here.

As a bonus, I included a short checklist to help you choose your All-Flash array and copy data management solution:

Is your CDM based on an All-Flash array?


Can you have copies and production on the same array w/o SLA risks?


Is your CDM solution future-proof? Can you SCALE-OUT and add more performance and capacity when needed? Can you get a scalable number of copies?


Can your CDM immediately refresh copies from production or any other source? Can it refresh in any direction (prod to copy, copy to copy, or copy to prod)?


Can your copies have the same performance characteristics as production?


Do your copies get data services like production, including compression and deduplication?


Can you get application integration and automation?


Can your DBAs and application owners get self-service options for application copy creation?


XtremIO iCDM is the most effective copy data management option available today; it enables better workflows, reduces risks, eliminates costs and ensures SLA compliance. The benefits extend to all stakeholders, who can now perform their work more efficiently with better results: reduced waste and costs, better services, improved business workflows and greater productivity.