RecoverPoint For VMs (RP4VMs) 5.0 Is Here


Carrying on from today’s announcements about RecoverPoint, I’m also super pumped to see that my brothers and sisters from the CTD group have just released RP4VMs 5.0! Why do I like this version so much? In one word: “simplicity”. Or in two words: “large scale”. These are the two topics I will try to cover in this post.

The RP4VMs 5.0 release is centered on the main topics of ease of use, scale, and Total Cost of Ownership (TCO). Let’s take a quick look at the enhanced scalability features of RP4VMs 5.0. One enhancement is that it can now support an unlimited amount of vCenter inventory. It also supports the protection of up to 5000 virtual machines (VMs) when using 50 RP clusters, 2500 VMs with 10 RP clusters, and now up to 128 Consistency Groups (CGs) per cluster.
The major focus here is on the “easily deploy” and “save on network” themes, as we discuss the simplified configuration of the vRPAs using from one to four IP addresses during cluster creation, and the use of DHCP when configuring IP addresses.

RP4VMs 5.0 has enhanced the pre-deployment steps by allowing all the network connections to be on a single vSwitch if desired. The RP Deployer used to install and configure the cluster has also been enhanced, reducing the number of IP addresses required. And lastly, improvements have been made in the RecoverPoint for VMs plug-in to improve the user experience.

With the release of RecoverPoint for VMs 5.0, the network configuration requirements have been simplified in several ways. The first enhancement is that all the RP4VMs vRPA connections on each ESXi host can now be configured to run through a single vSwitch. This makes the pre-deployment steps quicker and easier for the vStorage admin to accomplish and consumes fewer resources. The traffic can also all run through a single vmnic, saving resources on the hosts.

With the release of RP4VMs 5.0, all of the network connections can be combined onto a single vSwitch. Here you can see that the WAN, LAN, and iSCSI ports are all on vSwitch0. While two VMkernel ports are shown, only a single port is required for a successful implementation. Now for a look at the properties page of the vRPA we just created. You can see that the four network adapters needed for vRPA communication are all successfully connected to the VM Network port group. It should be noted that while RP4VMs 5.0 allows you to save on networking resources, you still need to configure the same underlying infrastructure for the iSCSI connections as before.

A major goal of RP4VMs 5.0 is to reduce the number of IPs per vRPA down to as few as one. Achieving this reduces the required number of NICs and ports per vRPA, and allows the iSCSI connections to be funneled through a single port. Because of this, the IQN names for the ports are reduced to one, and the name represents the actual NIC being used, as shown above. The reduced topology is available during the actual deployment, either when running Deployer (DGUI) or when selecting the box property in boxmgmt. This will be demonstrated later.

Releases before 5.0 supported selecting a different VLAN for each network during OVF deployment. The RecoverPoint for Virtual Machines OVA package in Release 5.0 requires that only the management VLAN be selected during deployment. Configuring this management VLAN in the OVF template enables you to subsequently run the Deployer wizards to further configure the network adapter topology using 1-4 vNICs as desired.

RP4VMs 5.0 supports a choice of five different IP configuration options during deployment. All vRPAs in a cluster must use the same configuration. Shown in this table are the five options that can be used. Option 1 uses a single IP address with all the traffic flowing through eth0. Option 2A uses two IP addresses, with the WAN and LAN on one and the iSCSI ports on the other. Option 2B also uses two IPs, but with the WAN on its own and the LAN and iSCSI connections on the other. Option 3 uses three IPs: one for WAN, one for LAN, and one for the two iSCSI ports. The last option, 4, separates all the connections onto their own IPs. Use this option when performance and High Availability (HA) are critical; it is the recommended practice whenever the resources are available. It should be noted that physical RPAs only use options 1 and 2B, without iSCSI, as iSCSI is not yet supported on a physical appliance.
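The five options above boil down to how many IP addresses each vRPA consumes. As a rough planning aid, here is a small Python sketch (the option table and helper are illustrative only, not a RecoverPoint API):

```python
# Hypothetical model of the five vRPA network-topology options and the
# number of IP addresses each one requires per vRPA.
TOPOLOGY_OPTIONS = {
    "1":  {"nics": ["eth0 (WAN+LAN+iSCSI)"],        "ips_per_vrpa": 1},
    "2A": {"nics": ["WAN+LAN", "iSCSI"],            "ips_per_vrpa": 2},
    "2B": {"nics": ["WAN", "LAN+iSCSI"],            "ips_per_vrpa": 2},
    "3":  {"nics": ["WAN", "LAN", "iSCSI"],         "ips_per_vrpa": 3},
    "4":  {"nics": ["WAN", "LAN", "iSCSI-1", "iSCSI-2"], "ips_per_vrpa": 4},
}

def ips_needed(option, vrpa_count, cluster_mgmt=True):
    """Total IPs to reserve for a cluster: the per-vRPA addresses plus
    the (always static) cluster-management address."""
    per = TOPOLOGY_OPTIONS[option]["ips_per_vrpa"]
    return per * vrpa_count + (1 if cluster_mgmt else 0)

# a 2-vRPA cluster with everything on one adapter: 3 addresses total
print(ips_needed("1", 2))   # -> 3
# best-practice option 4 with 2 vRPAs: 9 addresses total
print(ips_needed("4", 2))   # -> 9
```

The same arithmetic makes it obvious why option 1 is attractive for lab or low-scale deployments: an entire two-vRPA cluster fits in three addresses instead of nine.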

Observe these recommendations when making your selection:
1) In low-traffic or non-production deployments, all virtual network cards may be placed on the same virtual network (on a single vSwitch).

2) Where high availability and performance are desired, separate the LAN and WAN traffic from the Data (iSCSI) traffic. For even better performance, place each network (LAN, WAN, Data1, and Data2) on a separate virtual switch.

3) For high-availability deployments in which clients have redundant physical switches, route each Data (iSCSI) card to a different physical switch (best practice) by creating one VMkernel port on each vSwitch dedicated to Data (iSCSI).

4) Since the vRPA relies on virtual networks, the bandwidth that you expose to the vRPA iSCSI vSwitches will determine the performance of the vRPA. You can configure vSphere hosts with quad-port NICs and present them to the vRPAs as single or dual iSCSI networks, or implement VLAN tagging to logically divide the network traffic among multiple vNICs.

The Management network will always be run through eth0. When deploying the OVF template you need to know which configuration option you will be using in Deployer and set the port accordingly. If you do not set the management traffic to the correct destination network, you may not be able to reach the vRPA to run Deployer.

To start the deployment process, enter the IP address of one of the vRPAs, which has been configured in your vCenter, into a supported browser in the following format: https://<vRPA_IP_address>/. This opens the RecoverPoint for Virtual Machines home page, where you can access documentation or start the deployment process. To start Deployer, click the EMC RecoverPoint for VMs Deployer link.

After proceeding through the standard steps from previous releases, you will come to the Connectivity Settings screen. In our first example we will set up the system to have all the networking go through a single interface. The first item to enter is the IP address which will be used to manage the RP system. Next you choose the kind of network infrastructure that will be used for the vRPAs in the Network Adapters Configuration section. The first step is to choose the Topology for WAN and LAN in the dropdown. When selected, you will see two options to choose from: WAN and LAN on the same adapter, and WAN and LAN on separate adapters. In this first example we will choose WAN and LAN on same network adapter.
Depending on the option chosen, the number of fields available to configure will change, as will the choices in the Topology for Data (iSCSI) dropdown. To use a single interface, we select the Data (iSCSI) on the same network adapter as WAN and LAN option. Because we are using a single interface, the Network Mapping dropdown is grayed out, and the choice we made when deploying the OVF file for the vRPA has been selected for us. The next field to set is the Default Gateway, which we entered to match the Cluster Management IP. Under the vRPAs Settings section there are only two IP fields. The first is for the WAN/LAN/DATA IP, which was already set as the IP used to start Deployer. The second IP is for all the connections on the second vRPA with which we are creating the cluster; it will be used for LAN/WAN and DATA on that vRPA. So, once completed, there is a management IP plus a single IP for each vRPA.

Our second example is for the Data on separate connection from WAN/LAN option, which we select in the Network Adapters Configuration dropdown lists by choosing WAN and LAN on same adapter and Data (iSCSI) on separate network adapter from WAN and LAN. Next we have to select a network connection to use for the Data traffic from a dropdown of configured interfaces. Because we are using multiple IP addresses, we must supply a netmask for each one, unlike the previous option, where it was already determined by the first IP address we entered to start Deployer. Here we enter one netmask for WAN/LAN and another for the Data IP address. Under the vRPAs Settings section, lower on the vRPA Cluster Settings page, we must provide an IP for the Data connection of the first vRPA, as well as the two IPs required for the second vRPA’s connections.

Our third example is for the WAN separated from LAN and Data connection option, which we select in the Network Adapters Configuration dropdown lists by choosing WAN and LAN on separate adapters and Data (iSCSI) on same network adapter as LAN. Once that option is selected, the fields below change accordingly. Next we have to select a network connection to use for the WAN traffic from a dropdown of configured interfaces, since the LAN and Data are using the connection chosen when deploying the OVF template for the vRPA. We once again need to supply two netmasks, but this time the first is for the LAN/Data connection and the second is for the WAN connection alone. Under the vRPAs Settings section, lower down on the vRPA Cluster Settings page, you will supply an IP for the WAN alone on the first vRPA, and two IPs for the second vRPA: one for the WAN and one for the LAN/Data connection.

The fourth example is for the WAN and LAN and Data all separated option, which we select in the Network Adapters Configuration dropdown lists by choosing WAN and LAN on separate adapters and Data (iSCSI) on separate network adapters from WAN and LAN. Once that option is selected, the fields below change accordingly, displaying the new configuration screens shown here. Next we have to select a network connection to use for the WAN and the Data traffic from a dropdown of configured interfaces, since the LAN is using the connection chosen when deploying the OVF template for the vRPA. There are now three netmasks to input: one for LAN, one for WAN, and a single one for the Data connections. In the vRPAs Settings section, lower down on the Cluster Settings page, you now input a WAN IP address and a Data IP address for the first vRPA, and then an IP for each of the LAN, WAN, and Data connections individually on vRPA2.

The last available option is All are separated with dual data NICs, which we select in the Network Adapters Configuration dropdown lists by choosing WAN and LAN on separate adapters and Data (iSCSI) on two dedicated network adapters. This option provides the best available performance and is the recommended best practice. Once those options are selected, the fields below change to display the new configuration screens shown. Next we have to select network interfaces to use for the WAN and the two Data connections from a dropdown of configured interfaces, since the LAN is using the connection chosen when deploying the OVF template for the vRPA. This option requires four netmasks to be entered: one each for the WAN, LAN, Data 1, and Data 2 IPs, as all have their own connection links. Under the vRPAs Settings section, lower down on the Cluster Settings page, we now need to provide the full set of IPs for each vRPA.

With the release of RP4VMs 5.0, DHCP is supported for the WAN, LAN, and iSCSI interfaces, but the cluster management and iSCSI VMkernel addresses must remain static. Support has also been added for dynamic changes to all interfaces, unlike previous versions of the software. RP4VMs 5.0 also offers full-stack IPv6 support on all interfaces except iSCSI. Another enhancement is a reduction in the amount of configuration data shared across clusters; with 5.0, only the WAN addresses of all vRPAs, the LAN addresses of vRPAs 1 and 2, the MTUs, the cluster name, and the cluster management IPs of all clusters are shared.
Note that the boxmgmt option to retrieve settings from a remote RPA is unsupported as of 5.0. When IP addresses are provided by DHCP and an RPA reboots, the RPA will acquire an IP address from the DHCP server. If the DHCP server is not available, the RPA will not be able to return to the operational state; it is therefore recommended to supply redundant, highly available DHCP servers in the network when using the DHCP option.
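The reboot caveat is worth internalizing. A tiny Python sketch of the failure mode (purely illustrative, not RecoverPoint logic):

```python
def vrpa_state_after_reboot(uses_dhcp, dhcp_server_reachable):
    """Model of the caveat above: a DHCP-addressed vRPA that cannot
    reach a DHCP server after a reboot never returns to the
    operational state, because it never acquires an IP lease."""
    if uses_dhcp and not dhcp_server_reachable:
        return "stuck: no IP lease, not operational"
    return "operational"

print(vrpa_state_after_reboot(uses_dhcp=True, dhcp_server_reachable=False))
print(vrpa_state_after_reboot(uses_dhcp=True, dhcp_server_reachable=True))
print(vrpa_state_after_reboot(uses_dhcp=False, dhcp_server_reachable=False))
```

A statically addressed vRPA survives a DHCP outage; a DHCP-addressed one does not, which is exactly why redundant DHCP servers are recommended.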

Shown here on the Cluster Settings page of the Connectivity step of Deployer, we can see the option to select DHCP for individual interfaces. Notice that the DHCP icon does not appear for the Cluster Management IP address; this address must remain static in 5.0. If any of the other interfaces have their DHCP checkbox selected, the IP address and netmask fields are removed and DHCP is entered in their place. Looking at the vRPAs Settings window, you can see that the LAN has a static address, while the WAN and two Data addresses now use DHCP. Another item to note is that IPv6 is also available for all interfaces except iSCSI, which currently only supports IPv4. One more important consideration: adapter changes are only supported offline using boxmgmt. A vRPA has to be detached from the cluster, the changes made, and then reattached to the cluster.

Let us take a closer look at the main page of Deployer. In the top center we see the vCenter IP address as well as the version of the RP4VMs plugin installed on it. Connected to that is the RP4VMs cluster, which displays the system name and the management IP address. If the + sign is clicked, the display changes to show the vRPAs that are part of the cluster.
In the wizards ribbon along the bottom of the page we will find all the functions that can be performed on a cluster. On the far right of the ribbon are all the wizards for the 5.0 release of Deployer which includes wizards to perform vRPA cluster network modifications, replace a vRPA, add and remove vRPAs from a cluster and remove a vRPA cluster from a system.

Up to the release of RecoverPoint for VMs 5.0, clusters were limited to a minimum of two appliances and a maximum of eight. While such a topology makes RP4VMs clusters more robust and provides high availability, there are use cases where redundancy and availability are traded for cost reduction. RP4VMs 5.0 introduces support for a single-vRPA cluster to enable, for example, product evaluation of basic RecoverPoint for VMs functionality and operations, and to give cloud service providers the ability to support a topology with a single-vRPA cluster per tenant. This scale-out model enables starting with a low-scale single-vRPA cluster and provides a simple scale-out process, making RP4VMs a low-footprint protection tool: it protects a small number of VMs with a minimal required footprint to reduce Disaster Recovery (DR) costs, and the environment can be scaled out to meet sudden growth in DR needs.
RP4VMs systems can contain up to five vRPA clusters. They can be arranged in a star, partially connected, or fully connected formation, protecting VMs locally or remotely. All clusters in an RP4VMs system must have the same number of vRPAs. RP4VMs 5.0 single-vRPA cluster deployments reduce the network, compute, and storage overhead for small to medium deployments, offering a Total Cost of Ownership (TCO) reduction.
Note: The single-vRPA cluster is only supported in RecoverPoint for VMs implementations.

The RP4VMs Deployer can be used to connect up to five clusters to the RP4VMs system. All clusters in an RP4VMs system must have the same number of vRPAs. Software upgrades can be run from Deployer. Non-disruptive upgrades are supported for clusters with two or more vRPAs. For a single-vRPA cluster, a warning shows that the upgrade will be disruptive: the replication tasks managed by the vRPA are stopped until the upgrade is completed. The single-vRPA cluster upgrade occurs without a full sweep or journal loss. During the vRPA reboot, the Upgrade Progress report may not update and Deployer may become unavailable. When the vRPA completes its reboot, the user can log in to Deployer and observe the upgrade through to completion. Deployer also allows vRPA cluster network modifications, such as cluster name, time zones, and so on, for single-vRPA clusters. To change network adapter settings, use advanced tools such as the Deployer API or the boxmgmt interface.

The vRPA Cluster wizard in Deployer is used to connect clusters. When adding an additional cluster to an existing system, the cluster must be clean, meaning that no configuration changes, including license installations, have been made to the new cluster. Repeat the connect cluster procedure to connect additional vRPA clusters.
Note: RP4VMs only supports clusters with the same number of vRPAs in one RP4VMs system.
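The system-level constraints above (at most five clusters, uniform vRPA counts, one to eight vRPAs per cluster) are easy to express as a pre-flight check. This is an illustrative Python sketch, not a RecoverPoint API:

```python
def validate_system(clusters):
    """Check the RP4VMs 5.0 system-level constraints discussed above:
    at most five vRPA clusters, every cluster with the same number of
    vRPAs, and between 1 and 8 vRPAs per cluster."""
    errors = []
    if len(clusters) > 5:
        errors.append("an RP4VMs system supports at most 5 vRPA clusters")
    sizes = {name: len(vrpas) for name, vrpas in clusters.items()}
    if len(set(sizes.values())) > 1:
        errors.append(f"all clusters must have the same vRPA count: {sizes}")
    for name, count in sizes.items():
        if not 1 <= count <= 8:
            errors.append(f"{name}: vRPA count must be between 1 and 8")
    return errors

# two single-vRPA clusters: a valid 5.0 topology
print(validate_system({"NY": ["vRPA1"], "LDN": ["vRPA1"]}))
# mismatched cluster sizes: rejected
print(validate_system({"NY": ["vRPA1", "vRPA2"], "LDN": ["vRPA1"]}))
```

Note that the first example, a single-vRPA cluster per site, is only legal starting with 5.0; earlier releases enforced a two-vRPA minimum.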

New Dashboard tabs in RP4VMs 5.0 provide users with an overview of system health and Consistency Group status. The new tabs allow the administrator quick access to the overall state of the RP4VMs system.
To access the Dashboard, in the vSphere Web Client home page, click on the RecoverPoint for VMs icon.
New Tabs include:
Overall Health – provides a summary of the overall system health including CG transfer status and system alerts
Recovery Activities – displays recovery activities for copies and group sets, provides recovery-related search functions, and enables users to select appropriate next actions
Components – displays the status of system components and a history of incoming writes or throughput for each vRPA cluster or vRPA
Events Log – displays and allows users to filter the system events

The new Dashboard for RP4VMs includes a Recovery Activities Tab. This will allow the monitoring of any active recovery actions such as Failover, Test a Copy and Recover Production. This tab allows the user to monitor and control all ongoing recovery operations.

The RecoverPoint for VMs Dashboard includes a Components tab for viewing the status of all clusters and vRPAs managed by the vCenter Server instance. For each component selected in the System Components window on the left, relevant statistics and information are displayed in the right window.

Beginning with RecoverPoint for VMs 5.0, there is now an automated RP4VMs Uninstaller. Running the Uninstaller removes vRPA clusters and all of their configuration entities from vCenter Servers.
For more information on downloading and running the tool, see Appendix: Uninstaller tool in the RecoverPoint for Virtual Machines Installation and Deployment User Guide.

RecoverPoint for VMs 5.0 allows the removal of a Replication Set (Rset) from a Consistency Group without journal loss. Removing a Replication Set does not impact the ability to perform a Failover or Recover Production to a point in time before the Rset was removed; the deleted Rset simply will not be restored as part of that image.
The RecoverPoint system automatically generates a bookmark indicating the Rset removal. A point to remember is that the only Replication Set of a Consistency Group cannot be removed.
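The behavior above can be summarized with a small Python sketch (timestamps and volume names are illustrative, not RecoverPoint data structures):

```python
def volumes_in_image(image_time, removal_time, rset_volumes, removed):
    """Sketch of Rset removal semantics: a point-in-time image taken
    before the removal bookmark still contains the removed Rset's
    volumes; images taken after it do not."""
    if image_time < removal_time:
        return sorted(rset_volumes)           # full pre-removal image
    return sorted(v for v in rset_volumes if v not in removed)

vols = {"vm1_disk", "vm2_disk"}
# image from before the removal bookmark: both volumes recoverable
print(volumes_in_image(100, 150, vols, {"vm2_disk"}))
# image from after the removal: only the surviving volume
print(volumes_in_image(200, 150, vols, {"vm2_disk"}))
```

Recovering to the pre-removal image brings back both VMs' volumes; the removed Rset is only absent from images created after the automatic bookmark.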

Here we see a Consistency Group that is protecting two VMs, each with a local copy. Starting with RP4VMs 5.0, a user can remove a protected VM from the Consistency Group without causing a loss of journal history. Even after a VM removal, the Consistency Group is still fully capable of returning to an image that contained the removed VM, using Recover Production or Failover.

Displayed is a view of the journal volume for the copies of a Consistency Group. There are both system-made snapshots and user-generated bookmarks. Notice that after the deletion of a Replication Set, a bookmark is created automatically. Snapshots created from that point on will not include the volumes from the removed virtual machine.

Let’s see some demos

The New Deployer

Protection of VMs running on XtremIO

Failing over VMs running on XtremIO

RecoverPoint 5.0 – The XtremIO Enhancements


We have just released the 5th version of RecoverPoint, which offers even deeper integration with XtremIO. If you are not familiar with RecoverPoint, it’s the replication solution for XtremIO, and it basically offers a scale-out approach to replication. RecoverPoint can be used in its physical form (which is the scope of this blog post) or as a software-only solution (RecoverPoint for Virtual Machines, RP4VMs).

Expanding an XtremIO volume is very simple from the CLI or UI. To expand a volume in XtremIO, simply right-click the volume, select modify, and set the new size.

Before RecoverPoint 5.0, increasing the size of a replicated volume was a manual process. While XtremIO made it very easy to expand volumes, RecoverPoint was unable to handle the change in size. To increase the size of a volume, you had to remove the volume from RecoverPoint, log in to the XtremIO, and resize the volume. Once the volume was resized, you added it back to RecoverPoint, and the new volume triggered a full volume sweep.

RecoverPoint 5.0 and above allows online expansion of Replication Set volumes without causing a full sweep and the resulting journal loss. When both the production and copy volumes are on XtremIO arrays, the Replication Set can be expanded. The best practice is to perform the resizing on the copy first, then change production to match.
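The required ordering can be sketched as a short Python procedure (illustrative only, not a RecoverPoint API; the sizes are made up):

```python
def expand_rset(prod_gb, copy_gb, new_gb):
    """Sketch of the expansion order above: grow the copy first, then
    bring production up to match, so that production never ends up
    larger than its copy (which would invalidate replication)."""
    steps = []
    copy_gb = new_gb                      # step 1: expand the copy volume
    steps.append(f"expand copy to {copy_gb} GB")
    assert prod_gb <= copy_gb, "production must never exceed the copy"
    prod_gb = new_gb                      # step 2: expand production to match
    steps.append(f"expand production to {prod_gb} GB")
    steps.append("rescan SAN volumes (or wait for the next snapshot cycle)")
    return steps

for step in expand_rset(50, 50, 75):
    print(step)
```

Doing it in the opposite order, production first, would leave the production volume larger than its copy, which is exactly the mismatch the best practice avoids.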

This example has a consistency group containing two replication sets. The selected replication set has a production volume and a single remote copy. Both are on XtremIO arrays and in different clusters.

Here is an example of expanding the size of a replication set. The first step is to expand the (remote) copy on the XtremIO array. The volume can be identified by its UID, which is common to both RecoverPoint and XtremIO. Next, we increase the size, to 75 GB in this example.

Notice that the copy and production volumes of the Replication Set are now different sizes. Since we expanded the copy volume first, the snapshots created while the volumes are mismatched are still available for Failover and Recover Production. Upon a rescan of the SAN volumes, the system issues a warning and logs an event. A rescan can be initiated by the user, or it happens during snapshot creation.

Next we will expand the Production volume of the Replication Set. Once this is accomplished the user can initiate a rescan or wait until the next snapshot cycle.

After the rescan is complete, the Replication Set contains a production volume and a copy of the same size. Displayed is the journal history; notice that the snapshots and bookmarks are intact, and there is a system-generated bookmark after the resizing is complete.

RecoverPoint protection is policy-driven. A protection policy, based on the particular business needs of your company, is uniquely specified for each consistency group, each copy (and copy journal) and each link. Each policy comprises settings that collectively
govern the way in which replication is carried out. Replication behavior changes dynamically during system operation in light of the policy, the level of system activity, and the availability of network resources.
Some advanced protection policies can only be configured through the CLI.

Beginning with RecoverPoint 5.0, there is a new snapshot consolidation policy for copies on the XtremIO array. The goal of this new policy is to make the consolidation of snapshots more user-configurable.
The XtremIO array currently has a limit of 8K volumes, snapshots, and snapshot mount points combined. RecoverPoint 4.4 and below enforces the maximum number of snapshots, but in a non-changeable manner; for example, the user cannot change when RecoverPoint consolidates the snapshots on the XtremIO. The new policy gives the user more control over how long snapshots are retained.

One to three consolidation policies can now be specified for each copy of a consistency group that resides on an XtremIO array. By default, and for simplicity, the consolidation policy is selected automatically based on the number of snapshots and the required protection window.

The new CLI command config_detailed_snapshot_consolidation_policy allows a much more detailed and precise consolidation policy for copies on an XtremIO array, while the config_copy_policy command allows setting the maximum number of snapshots.
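To see why tiered consolidation matters against the 8K object limit, here is a hypothetical Python model of a three-tier retention scheme (the tiers and numbers are illustrative; the real policy is configured with config_detailed_snapshot_consolidation_policy):

```python
def snapshots_in_window(tiers, window_hours):
    """Hypothetical tiered consolidation model: each tier keeps one
    snapshot every `interval_h` hours for a span of `span_h` hours,
    starting where the previous (finer-grained) tier left off."""
    total = 0
    covered = 0
    for interval_h, span_h in tiers:
        span = min(span_h, max(0, window_hours - covered))
        total += span // interval_h
        covered += span
    return total

# keep hourly snaps for 1 day, 4-hourly for the next 6 days,
# then daily for 3 more weeks
tiers = [(1, 24), (4, 144), (24, 504)]
print(snapshots_in_window(tiers, 24 * 28))  # 24 + 36 + 21 = 81 snapshots
```

Eighty-one retained snapshots for a four-week protection window leaves plenty of headroom under XtremIO's 8K combined limit, which is the kind of trade-off the new policy lets the user tune.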

Data Domain is an inline deduplication storage system that has revolutionized disk-based backup, archiving, and disaster recovery through high-speed processing. Data Domain integrates easily with existing infrastructure and third-party backup solutions.
ProtectPoint is a data protection offering which brings the benefits of snapshots together with the benefits of backups. By integrating the primary storage with the protection storage, ProtectPoint reduces cost and complexity, increases speed, and maintains recoverability of backups.
RecoverPoint 4.4 introduced support for ProtectPoint 3.0 by enabling the use of Data Domain as a local copy for XtremIO-protected volumes. The Data Domain system has to be registered with RecoverPoint using IP, while data transfer can be configured to use IP or FC. The Data Domain-based copy is local only, and the link policy supports two modes of replication:
Manual – bookmarks and incremental replication are initiated from the File System or Application Agents
Periodic – RecoverPoint creates regular snapshots and stores them on Data Domain.
There is no journal to configure for the Data Domain copy in the Protect Volumes or Add Copy wizard. If the Add Data Domain copy checkbox is selected, users simply select the registered Data Domain resource pool.

When using RecoverPoint 5.0 with ProtectPoint 3.5 and later, specific volumes of a consistency group can be selected for production recovery, while other volumes in the group are not recovered.

Displayed is an example of a use case for the new partial restore feature of RecoverPoint 5.0. In this example, the new feature allows for the recovery of only part of a database system.

Only selected volumes are blocked for writing. Transfer is paused from the XtremIO source to the Data Domain replica during recovery. During the transfer, marking data is collected for the non-selected volumes, in case writes are being performed to them in production.
During a partial restore:
Transfer is paused to all copies until the action completes
At production, selected volumes are blocked while non-selected volumes remain accessible
Only the selected volumes are restored
After production resumes, all volumes undergo a short initialization (if a periodic snap-based replication policy is configured for the XtremIO-to-Data-Domain link)
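The partial-restore behavior described above can be modeled in a few lines of Python (volume names and labels are illustrative, not RecoverPoint data structures):

```python
def partial_restore_state(rset_volumes, selected):
    """Toy model of a partial restore: selected volumes are blocked for
    writes and restored from the Data Domain copy; non-selected volumes
    stay writable, with marking data collected so they can be
    short-initialized after production resumes."""
    state = {}
    for vol in rset_volumes:
        if vol in selected:
            state[vol] = ("writes blocked", "restored")
        else:
            state[vol] = ("writes allowed", "marking data collected")
    return state

state = partial_restore_state(["db_data", "db_log", "db_temp"], {"db_data"})
print(state["db_data"])  # ('writes blocked', 'restored')
print(state["db_log"])   # ('writes allowed', 'marking data collected')
```

In the database use case above, this is what lets you roll back just the corrupted data volume while the log and temp volumes keep serving production writes.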

VSI 7.0 Is Here, VPLEX Support Is Included!


We have just released the 7th version of the VSI (Virtual Storage Integrator) vCenter plugin. This release includes:

  1. VPLEX VIAS Provisioning (and viewing) support. Yes, it’s a feature that was long overdue, and I’m proud to say it’s here now. First, we need to register the VPLEX, which is a pretty straightforward thing to do.

    VPLEX VIAS Provisioning Support – Use the VPLEX Integrated Array Service (VIAS) to:
    Create virtual volumes from pre-defined storage pools on supported arrays. These storage pools are visible to VPLEX through Array Management Providers (AMPs)
    Simplify provisioning, compared with traditional provisioning from storage volumes

    VPLEX VIAS Provisioning Support – Software Prerequisites
    VMware vSphere 6.0 or 5.5
    VMware vSphere Web Client
    VMware ESX/ESXi hosts
    VMware vCenter Servers (single or linked mode)
    Storage Prerequisites
    VPLEX 5.5/6.0
    Only XtremIO/VMAX3/VNX Block are supported
    A storage pool has been created on the array
    An AMP is registered with VPLEX and its connectivity status is OK

    VPLEX VIAS Provisioning Support – Provision VMFS Datastore on Host Level

    VPLEX VIAS Provisioning Support – Provision VMFS Datastore on Cluster Level

    VPLEX VIAS Provisioning Support – Provision VMFS Datastore


    Let’s see a demo of how it all works. Thanks to Todd Toles, who recorded it!

    Multiple vCenters Support – Background
    VSI 6.7 or older:
    Designed to work with a single vCenter
    Multiple vCenters were not fully supported
    More and more customers requested this
    VSI 6.8:
    XtremIO use cases
    VSI 6.9.1 & 6.9.2:
    RecoverPoint & SRM use cases
    Unity/UnityVSA use cases

    Quality Improvement – Batch Provisioning Use Case
    A customer provisions 30 datastores on a cluster with 32 ESXi hosts, and the vCenter becomes unresponsive.
    Root cause: a huge number of tasks (e.g. ~900+ “Rescan all HBA” and “Rescan VMFS” tasks) are created in a short time, generating a rescan storm that heavily impacts the vCenter.

    What have we done to fix it?

    Code changes to optimize the host rescan operations invoked by VSI
    Manually configure the vCenter advanced setting “config.vpxd.filter.hostRescanFilter=false”, which disables the automatic VMFS rescan on each host (in the same cluster) when a new datastore is created for the cluster. Re-enable this filter when batch provisioning is done.
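The scale of the rescan storm can be approximated with a back-of-the-envelope Python model (the counts are illustrative, not VSI code):

```python
def rescan_operations(n_datastores, n_hosts, host_rescan_filter=True):
    """Rough model of the rescan storm above: with the default
    hostRescanFilter behavior, every datastore creation triggers a
    rescan on every host in the cluster; with it disabled during batch
    provisioning, a single rescan per host at the end suffices."""
    if host_rescan_filter:
        return n_datastores * n_hosts
    return n_hosts  # one final rescan per host after the batch

print(rescan_operations(30, 32))          # -> 960 rescan tasks
print(rescan_operations(30, 32, False))   # -> 32
```

Thirty datastores across 32 hosts yields roughly 960 rescan tasks in the default case, matching the ~900+ tasks observed, versus 32 when the filter is disabled for the duration of the batch.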

AppSync 3.0.2 is out 


We’ve just GAd AppSync 3.0.2, which includes the following new features and enhancements:


1. Hotfix/patch improvement – Starting in AppSync 3.0.2, hotfixes/patches are delivered as an executable, like the AppSync main installer. A single patch installs both the AppSync server and AppSync agent hotfixes. You can push the hotfix to both UNIX and Windows agents from the AppSync server.

2. Agent deployment and discovery separation – Enables the deployment of the agent even if discovery fails, across all supported AppSync applications, including Microsoft applications, file systems, and Oracle.

3. Event message accuracy and usefulness

The installation and deployment messages have been enhanced to provide more specific information that helps in identifying the cause of the problem.

All AppSync error messages have been enhanced to include a solution.

4. Unsubscribe from subscription tab – You can now easily unsubscribe applications from the Subscription tab of a service plan.

5. Storage order preference enhancement – You can now limit the copy technology preference in a service plan by clearing the storage preference options you do not want.

6. FSCK improvements – You can now skip performing a file system check (fsck) during a mount operation on UNIX hosts.

7. Improved SRM support – For RecoverPoint 4.1 and later, AppSync can now manage VMware SRM-managed RecoverPoint consistency groups without manual intervention. A mount option is now available to automatically disable the SRM flag on a consistency group before enabling image access, and return it to the previous state after the activity.

8. XtremIO improvements

a. Reduces the application freeze time during application consistent protection.

b. Support for XtremIO version earlier than 4.0.0 has been discontinued.

9. Eliminating ItemPoint from AppSync – ItemPoint is no longer supported with AppSync. Users cannot perform item-level restore for Microsoft Exchange using ItemPoint with AppSync.

10. XtremIO and Unity performance improvement – Improved operational performance of Unity and XtremIO.

11. Serviceability enhancements – The Windows control panel now displays the size and version of AppSync.


The AppSync User and Administration Guide provides more information on the new features and enhancements. The AppSync Support Matrix is the authoritative source of information on supported software and platforms.


Not All Flash Array Snapshots Are Born (Or Die) Similar



CDM (Copy Data Management) is a huge thing right now for AFA vendors; each product tries to position itself as an ideal platform for it, but like anything else, the devil is in the details.

If you are new to this concept, I would encourage you to start here:

and then view the following 3 videos Yoav Eilat did with a great partner of ours, WWT

Done watching all the videos and still not convinced? How should you test your AFA vs. the others? It’s pretty simple, actually:

  1. Fill your AFA with data (preferably DBs)
  2. Take plenty of snapshots of the same DB
  3. Present these snapshots to your DB host and run I/O against them (using SLOB, for example)
  4. If you see a performance hit on your parent volume compared to the snapshot volumes – red flag!
  5. Delete some snapshots and see what happens

You’ll thank me later.
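To make step 4 concrete, here is a minimal sketch (all numbers and helper names are illustrative, not from any vendor tool) of how you might compare the IOPS you measured against the parent volume before snapshots with the IOPS you measured afterwards on the parent and on each snapshot:

```python
# Minimal sketch: flag a "red flag" when taking snapshots drags down
# performance. The IOPS numbers are made up; in a real test you would
# collect them from SLOB or fio runs against each presented device.

def performance_drop(baseline_iops, measured_iops):
    """Fractional drop of a measured IOPS figure vs. the baseline."""
    return (baseline_iops - measured_iops) / baseline_iops

def red_flags(parent_before, parent_after_snaps, snapshot_iops, threshold=0.10):
    """List volumes whose IOPS dropped more than `threshold` (10% default)."""
    flags = []
    if performance_drop(parent_before, parent_after_snaps) > threshold:
        flags.append("parent")
    for name, iops in snapshot_iops.items():
        if performance_drop(parent_before, iops) > threshold:
            flags.append(name)
    return flags

# Example: the parent dropped from 200k to 150k IOPS after taking
# snapshots, and one snapshot is far slower than the parent baseline.
print(red_flags(200_000, 150_000, {"snap1": 195_000, "snap2": 120_000}))
# -> ['parent', 'snap2']
```

The exact threshold is a judgment call; the point is simply that on a well-designed array the parent and its snapshots should perform roughly the same, before and after snapshot creation and deletion.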

VMworld 2016, Introducing the EMC XtremIO VMware vRealize Orchestrator Adapter


The second big announcement at VMworld is the vRealize Orchestrator (vRO, formerly vCenter Orchestrator, or vCO) adapter for XtremIO.

This has been in the making for quite some time. As someone who is very close to its development, I can tell you that we have been in contact with many customers about the exact requirements, since at the end of the day, vCO is a framework, and like any other framework, it is only as good (or bad) as the popular workflows it supports.

The adapter that we will be releasing in 09/2016 will include the majority of the XtremIO functionality: volume creation and deletion, extending a volume, snapshot creation, and so on. Shortly after the first release, we will be adding reports and replication support via RecoverPoint.
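Under the hood, workflows like these end up calling the XtremIO REST API on the XMS. As a rough sketch of what a "create volume" operation looks like at that layer: the endpoint path and field names below follow the general shape of XtremIO REST API v2, but treat the exact names as assumptions and verify them against the REST API guide for your release.

```python
# Sketch of the REST call a "create volume" workflow would make against
# the XMS. Endpoint path and payload field names are assumptions modeled
# on XtremIO REST API v2 -- check the REST API guide for your version.

def build_create_volume_request(xms_host, vol_name, size_mb):
    """Build the (url, payload) pair for creating a volume via the XMS."""
    url = "https://{}/api/json/v2/types/volumes".format(xms_host)
    payload = {
        "vol-name": vol_name,
        "vol-size": "{}m".format(size_mb),  # size string, e.g. "1024m"
    }
    return url, payload

url, payload = build_create_volume_request("xms.example.com", "demo-vol", 1024)
# A real workflow would then POST this with authenticated credentials, e.g.
# requests.post(url, json=payload, auth=(user, password), verify=True)
print(url)
print(payload)
```

Delete, extend, and snapshot operations follow the same pattern against their respective object types.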

Below you can see a demo we recorded for VMworld. It’s a little bit rough, but it can give you a good overview of what it can do (thanks to Michael Johnston & Michael Cooney for recording it – you rock!).

VMworld 2016, EMC AppSync & XtremIO Integration with VMware vCenter

What an awesome year for us at EMC XtremIO, so many things going on!

One of the announcements we are making at VMworld is the integration between AppSync and vCenter. But what does it actually mean?

Well, if you are new to AppSync, I suggest you start here:

So, still, what’s new?

We are now offering you, the vCenter / vSphere / cloud administrator, the ability to operate AppSync from the vCenter Web UI. Why? Because we acknowledge that every admin is used to working with one tool as the “portal” to his / her world, and instead of forcing you to learn another UI (in our case, the AppSync UI), you can do it all from the vCenter UI.

Apart from the repurposing of your test / dev environment that AppSync is known for (utilizing the amazing XtremIO CDM engine), I want to take a step back and focus on one use case that is relevant for EVERY vSphere / XtremIO customer out there: backing up and restoring VMs for free. No, really. You can now take as many snapshots as you want and restore from each one. You can either:

  1. Restore a VM or VMs

  2. Restore a full volume (datastore) and the VMs that were in it

  3. Restore a file from within the VM (the C: / D: drive, etc.) – no agent required!

This is a very powerful engine: since the majority of restore requests are for data from the last week or so, you can happily use the XtremIO XVC engine to restore from it. Easy, powerful and, again, free!

See the short demo here:

See a full demo here:

EMC AppSync 3.0.1 Is Out, Here’s what’s new


Building on top of the 3.0 release of AppSync, we have just GA’d the first service pack for it.

AppSync 3.0.1 includes the following new features and enhancements:
Agent robustness – Allows you to configure a common time out value for commands on the UNIX agent for time-out flexibility.

Error reporting – The UNIX host based error messages have been enhanced to provide more specific information that helps in identifying the cause of the problem.

Event message accuracy – The event messages have been enhanced to provide more specific information that helps in identifying the cause of the problem.

Error logging – Enhanced logging for workflows, and enhanced the UNIX plug-in log for readability.

Configurable retry for failed jobs – Allows you to set a retry count and retry interval to perform VSS freeze/thaw operation in the case of VSS failures (for example, VSS 10 second timeout issue) that are encountered during copy creation.

The AppSync User and Administration Guide provides more information on how to set a retry count and interval for failed jobs.

Mount multiple copies to the same AIX server – Allows you to mount multiple copies of the protected application consecutively on the same AIX server used as the mount host.

Exchange support for VPLEX – Allows you to create and manage copies of your Exchange data. This addition completes VPLEX support to cover all five AppSync-supported applications (Oracle, Microsoft SQL, Microsoft Exchange, VMware Datastore, and File System).

Selective array registration – For XtremIO and VMAX, you can now select the arrays that you want AppSync to manage when the XMS and SMI-S provider are managing multiple arrays.


A New VDI Reference Architecture


We have just released a new VDI Reference Architecture based on all the latest and the greatest

This reference architecture discusses the design considerations that give you a reference point for deploying a successful Virtual Desktop Infrastructure (VDI) project using EMC XtremIO. Based on the evidence, we firmly establish the value of XtremIO as the best-in-class all-flash array for VMware Horizon Enterprise 7.0 deployments. The reference architecture presents a complete VDI solution for VMware Horizon 7.0, delivering virtualized 64-bit Windows 10 desktops. The solution also factors in VMware App Volumes 2.10 for policy-driven application delivery that includes Microsoft Office 2016, Adobe Reader 11, and other common desktop user applications.

You can download the RA from here:

and here’s a demo to show it all to you!

Virtualization of Windows 10, Office 2016 desktop environment on XtremIO

Deploying 3,000 virtualized desktops (linked clones and full clones) managed by VMware Horizon 7 on EMC XtremIO 4.0.10

On-demand application delivery using VMware App Volumes 2.10 to 3,000 desktops on EMC XtremIO 4.0.10

Performance evaluation of virtualized desktops deployed at scale (3,000 desktops) using Login VSI on EMC XtremIO 4.0.10

Common considerations for deploying VDI at scale using EMC XtremIO 4.0.10

XMS Web GUI technical preview

XIOS 4.0.10 And XMS 4.2.0 Are Here, Here’s What’s New


I’m so proud to finally announce the GA of XIOS 4.0.10 / XMS 4.2.0. This release contains a couple of features that so many of you, our customers, have been asking for, so let’s highlight them and then dig a little deeper into each one:

ŸXtremIO PowerShell support
ŸSMI-S support
ŸServiceability enhancements
ŸREST API Improvements
ŸVSS enhancements
ŸXMS Simulator
ŸWebUI Tech Preview
Ÿthe usual bug fixes.

XtremIO PowerShell support

  • Supported with XMS 4.2.0
  • Supported PowerShell versions: 4.0 and 5.0
  • Based on XtremIO REST API version 2.0
  • Supports all storage management commands
  • The XtremIOPowerShell.msi package is available on the support page
  • Verify that PowerShell ISE version 4 or above is installed
  • The installation package imports the module into the PowerShell path

Connecting To The XMS

Connect to a single cluster or to all clusters managed by the XMS

Powershell Commands

Supports storage management REST commands
XtremIO command structure:
Supported actions:
Example: Get-XtremVolumes

-Properties: list a subset of attributes
-Filters: filtering logic support
-CLImode: avoid user confirmation at the session level (for scripting)
-Confirm: user confirmation for a single command
-ShowRest: returns the command in JSON format
-Full: lists all object attributes
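Since the cmdlets are based on REST API version 2.0, switches like -Properties and -Filters map onto query parameters of the underlying REST call. A rough sketch of that mapping follows; the "prop" and "filter" parameter names are assumptions modeled on REST API v2 conventions, so verify them against the REST API guide for your release.

```python
# Sketch: how -Properties and -Filters roughly translate into XtremIO
# REST API v2 query parameters. Parameter names ("prop", "filter") are
# assumptions -- check the REST API guide for your version.
from urllib.parse import urlencode

def build_volume_query(xms_host, properties=None, filters=None):
    """Build a GET URL listing volumes with optional property/filter params."""
    params = []
    for p in properties or []:
        params.append(("prop", p))      # e.g. "vol-name", "vol-size"
    for f in filters or []:
        params.append(("filter", f))    # e.g. "vol-size:gt:1048576"
    query = "?" + urlencode(params) if params else ""
    return "https://{}/api/json/v2/types/volumes{}".format(xms_host, query)

print(build_volume_query("xms.example.com",
                         properties=["vol-name", "vol-size"],
                         filters=["vol-size:gt:1048576"]))
```

In other words, `Get-XtremVolumes -Properties vol-name` is, conceptually, just this kind of filtered GET issued against the XMS on your behalf.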





SMI-S Integration

What is an SMI-S Provider?

An SNIA-developed standard intended to facilitate the management of storage devices from multiple vendors in storage area networks
Enables broad, interoperable management of heterogeneous storage vendor systems
Multiple ‘profiles’ can be implemented
We have implemented the profiles required for the Microsoft SCVMM and Microsoft Azure platforms
All SCVMM operations can be done in the GUI or through cmdlets
More profiles will be implemented in the future based on field requirements and roadmap features


The SMI-S Provider is implemented directly on the XMS
No external component needs to be installed
Better, consistent performance is guaranteed
Array operations are possible:
Create/delete a volume
Create an XtremIO Virtual Copy – XVC (aka snapshot)
Mount the volume on a host
The entire array is considered a single pool
Internally uses REST API calls
Completely stateless; does not cache any data

ECOM needs a new read-only user on the XMS
The user needs to be defined on ECOM (for non-default password usage)
The same user must be defined on both ECOM and the XMS when adding the provider
Provide the same credentials in SCVMM
LDAP users are also supported

The following profiles have been implemented:
Masking and Mapping
Disk Drive Lite
FC Target Ports
iSCSI Target Ports
Physical Package
Multiple Computer System
Block Services Package
Thin Provisioning
Replication Services (Snapshots)



Fabric tab -> Add Resources -> Storage
Specify the “Run As” account as defined in ECOM and the XMS
Go to the ‘Jobs’ tab to see the operation status
The ‘Providers’ option will show the XMS information and current status


‘Create Logical Unit’ allows you to create new volumes
Right-click on a volume name to delete it

Right-click on a host -> Properties

Allows active clusters to send an SNMP keep-alive (heartbeat) trap to the SNMP manager
The interval is customer-configurable, from 5 minutes to 24 hours
Enabling the feature via the CLI requires two commands:
1. Enable the feature and its frequency at the XMS level:
modify-snmp-notifier enable-heartbeat heartbeat-frequency=15
2. Enable the feature at the cluster level (enabled by default for all clusters):
modify-clusters-parameters enable-heartbeat


ŸGUI and CLI now support the option to export and
XMS configuration for back-up and restoration
ŸThe following configuration elements are exported
XMS: User Account, Ldap Config, Syr Notifier, Syslog Notifier, Snmp Notifier, Email Notifier, Alert Definition
Cluster: Cluster Name, X-Brick Name, SC Name, Target Ports, iSCSI Portal & Routes, IG, Initiator, Volume, CG, LUN Mapping, Tag, Scheduler

Snapshot Enhancements

Native VSS support for application-aware snapshots: the new VSS provider supports working inside the VM guest using RDM volumes.

WebUI Technical Preview

Yea... that’s the one you have all been waiting for, but a couple of disclaimers: it’s a technical preview, which means we ask you to test it and provide feedback. It’s not the final Web UI, and it’s likely to change before GA. Again, the reason we are releasing it is so that you can contact us and let us know your opinion about it – good, bad, ugly, it’s all good! Please note that the classic Java UI still works and provides the full functionality.

100% pure HTML5 (no Java)!!!
Just enter the XMS WebUI URL
Enter your standard XMS user credentials
Nothing to install

In a multi-cluster setup: Multi-Cluster Overview
In a single-cluster setup: Single-Cluster Overview
To access the WebUI homepage: click on the WebUI logo

Single cluster homepage


Two main navigation elements:
Menu bar
Context selector

Only object types supported by the selected menu items will appear in the Context Selector.
Filtering capabilities:
Direct: text, selected properties, tags
Indirect: filter based on relationships to other objects






For each object type in the Context Selector there is a list of supported reports.
All reports support single/multiple object reports, for example:

Troubleshoot an object with all available data:
One-click navigation between pre-defined reports

Track capacity and data savings over time

Track endurance and SSD utilization

Notifications & Events

Events screen

ŸDrill down to Critical/Major Alerts from Cluster Overview

Storage Configuration & Mappings

Configuration Screen
Create/delete, and perform all actions on selected objects
No context selector
Indirect cluster filtering

Mappings Screen
Map between selected Volumes/CGs/Snapshot Sets and selected Initiator Groups (many-to-many object mappings are supported)

Main provisioning flow

(1) Create Volumes  (2) Create Initiator Groups  (3) Map

Add volumes to a Consistency Group

Local Protection

Create one-time local protection or define a local protection policy

Refresh copy


  • Export configuration/inventory object data

Hardware view