Copy Data Management (CDM), Or, Why It’s the Most Important Consideration For AFAs Going Forward.

A guest post by Tamir Segal

Very soon after we released Dell-EMC XtremIO’s copy technology, we were surprised by an unexpected finding. We learned that in some cases, customers were deploying XtremIO to hold redundant data, or “copies”, of the production workload. We were intrigued: why would someone use a premium array to hold non-production data? And why would they dare to consolidate it with production workloads?

Because of our observations, we commissioned IDC to perform an independent study and drill into the specifics of the copy data problem. The study included responses from 513 IT leaders in large enterprises (10,000 employees or more) in North America and included 10 in-depth interviews across a spectrum of industries and perspectives.

The “copy data problem” is rapidly gaining attention among senior IT managers and CIOs as they begin to understand it and the enormous impact that it has on their organization in terms of cost, business workflow and efficiency. IDC defines copy data as any replica or copy of original data. Typical means of creating copy data include snapshots, clones, replication (local or remote), and backup/recovery (B/R) operations. IDC estimates that the copy data problem will cost IT organizations $50.63 billion by 2018 (worldwide).

One could think that copies are simply bad for organizations, that they just lead to data sprawl and waste, and therefore the expected question is: why don’t we just eliminate all those copies? The answer is straightforward: copies are important and needed for many critical processes in any organization. For example, how can you develop the next generation of your product without a copy of your production environment as the baseline for the next version? In fact, in some use cases there are significant benefits to using even more copy data. However, legacy and inefficient copy management practices have resulted in substantial waste and a financial burden on IT (did I mention $50.63B?).

IOUG surveyed 300 database managers and professionals about which activities take up the most of their time each week. The results are somewhat surprising: Figure 1 shows that 30% spend a significant amount of their time creating copies. And it does not end there; test & QA tasks are also performed on non-production copies, and patches are first tested in non-production environments.

Figure 1 – Database management activities taking up the most time each week (source: Unisphere Research, “Efficiency Isn’t Enough: Data Centers Lead the Drive to Innovation,” 2014 IOUG survey)

What are these copies? What use cases do they support, and what are the problems today? They can be categorized into four main areas:

  • Innovation (testing and development)
  • Analytics and decision making (e.g. running ETL from a copy rather than from production)
  • IT operations (such as pre-production simulation and patch management)
  • Data protection

Before getting the research results, I assumed that data protection would be the leading use case for copies. I was wrong: based on the research, no single use case dominates capacity consumption.

Figure 2 – Raw Capacity Deployed by Workload (source: IDC survey)

Another interesting data point was which technology is used to create copies: per the research results, 53% use custom-written scripts to create copies.


Figure 3 – What tools are used to create secondary copies (source: IDC survey)

The copy data management challenges directly impact critical business processes and therefore have a direct impact on the cost, revenue, agility and competitiveness of any organization. But the big question is: by how much? The IDC research set out to quantify the size of the problem. Some highlights of the research are:

  • 78% of organizations manage 200+ instances of Oracle and SQL Server databases. The mean response for the survey was 346.43 database instances.
  • 82% of organizations make more than 10 copies of each instance. In fact, the mean was 14.88 copies of each instance.
  • 71% of the organizations surveyed responded that it takes half a day or more to create or refresh a copy.
  • 32% of the organizations refresh environments every few days, whereas 42% of the organizations refresh environments every week.

Based on the research results, a staggering 20,619 hours are spent every week, on average, waiting for instance refreshes across the various teams. Even a conservative estimate counting only 25% of the instances yields more than 5,000 hours per week, which is over 208 days (5,000 ÷ 24) of operational waiting, or waste.

The research is available for everyone, and you can view it here

These results are very clear: a very large ROI (more than 50%) can be realized, and probably by many organizations, since 78% of companies manage more than 200 database instances and, as the research shows, the process today is wasteful and inefficient.

The Copy Data Management Challenges

It is important to understand why legacy infrastructure and improper copy data management processes have fostered the need for copy data management solutions. The need for efficient infrastructure is driven by extensive storage silos, sprawl, expensive storage and inefficient copy creation technologies. The need for efficient copy data management processes is driven by increased wait times for copies to be provisioned, low productivity and demands for self-service.

Legacy storage systems were not designed to support true mixed workload consolidation and require significant performance tuning to guarantee application SLAs. Thus, storage administrators have been conditioned to overprovision storage capacity and create dedicated storage silos for each use case and/or workload.

Furthermore, DBAs often use their own copy technologies. It is very common for DBAs to ask storage administrators to provision capacity and then use their native tools to create a database copy. One common practice is to use RMAN in Oracle and restore a copy from a backup, as sketched below.
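To make that practice concrete, here is a minimal, illustrative sketch (not taken from the study) of such a DBA-driven copy: a backup-based RMAN duplicate driven from a shell script. The PROD/DEVCOPY connect strings and the pre-provisioned auxiliary instance are hypothetical placeholders; real scripts will differ from shop to shop.

```bash
#!/bin/sh
# Illustrative sketch only: clone the PROD database onto a pre-provisioned
# auxiliary instance (DEVCOPY) from PROD's existing RMAN backups.
# PROD and DEVCOPY are placeholder TNS aliases, not names from the study.
rman target sys@PROD auxiliary sys@DEVCOPY <<'EOF'
DUPLICATE TARGET DATABASE TO DEVCOPY NOFILENAMECHECK;
EOF
```

Every copy created this way is a full physical restore onto whatever capacity the storage team provisioned for the auxiliary instance, which is exactly the kind of sprawl the IDC numbers above describe.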

Copy technologies, such as legacy snapshots, do not provide a solution. Snapshots are space efficient compared to full copies; however, in many cases copies created using snapshot technology underperform, impact production SLAs, take too long to create or refresh, have limited scale, lack real modern data reduction and are complex to manage and schedule.

Because of performance and SLA requirements, storage admins are forced to use full copies and clones, but these approaches result in an increase in storage sprawl as capacity is allocated upfront and each copy consumes the full size of its source. To save on capacity costs, these types of copies are created on a lower tiered storage system or lower performing media.

External appliances for copy data management lead to incremental cost and they still require a storage system to store copies. They may offer some remedy; however, they introduce more complexity in terms of additional management overhead and require substantial capacity and performance from the underlying storage system.

Due to the decentralized nature of application self-service and the multitude of applications distributed throughout organizations within a single business, a need for copy data management has developed to provide oversight into copy data processes across the data center and ensure compliance with business or regulatory objectives.

The Dell-EMC XtremIO integrated Copy Data Management approach

As IT leaders, how can we deliver the services needed to bring efficiency, cost savings and agility to the organization? How can copy data management be addressed in a better way? Here is how Dell-EMC can help you resolve the copy data problem at its source.

Dell EMC XtremIO pioneered the concept of integrated Copy Data Management (iCDM). The concept behind iCDM is to provide nearly unlimited virtual copies of data sets, particularly databases, on a scale-out All-Flash array, with a self-service option that allows DBAs and application owners to consume copies on demand. iCDM is built on XtremIO’s scale-out architecture, XtremIO’s unique virtual copy technology, and the application integration and orchestration layer provided by Dell-EMC AppSync.


Figure 4 – XtremIO’s integrated Copy Data Management stack

 

 

XtremIO Virtual Copy (XVC), used with iCDM, is not a physical copy but rather a logical view of the data at a specific point in time (like a snapshot). Unlike traditional snapshots, XVC is both metadata- and physical-capacity-efficient (deduplicated and compressed) and does not impact production SLAs. Like physical copies, XVC provides performance equal to production, but unlike physical copies, which may take a long time to create, an XVC can be created immediately. Moreover, data refreshes can be triggered as often as desired, in any direction or hierarchy, enabling flexible and powerful data movement between data sets.

The ability to provide consistent and predictable performance on a platform that can scale out is a mandatory requirement. Once you have efficient copy services with unlimited copies, you will want to consolidate more workloads. As you consolidate more workloads onto a single array, more performance may be needed, and you need to be able to add that performance to your array.

We live in a world where copies have consumers; in our case they are the DBAs and application owners. As you modernize your business, you want to empower them to create and consume copies when they need them. This is where Dell-EMC AppSync provides the application orchestration and automation for application copy creation.

iCDM is a game changer and its impact on the IT organization is tremendous. XtremIO iCDM enables significant cost savings, provides more copies when needed and supports future growth. Copies can be refreshed on demand; they are efficient, high performance and carry no SLA risks. As a result, iCDM enables DBAs and application owners to accelerate their development time and trim up to 60% off the testing process, have more test beds and improve product quality. Similarly, analytical databases can be updated frequently so that analysis is always performed on current data rather than stale data.


Figure 5 – Accelerate database development projects with XtremIO iCDM

More information on XtremIO’s iCDM can be found here.

As a bonus, I included a short checklist to help you choose your All-Flash array and copy data management solution:

  • Is your CDM solution based on an All-Flash array?
  • Can you keep copies and production on the same array without SLA risks?
  • Is your CDM solution future proof? Can you SCALE-OUT and add more performance and capacity when needed? Can you get a scalable number of copies?
  • Can your CDM solution immediately refresh copies from production or any other source? Can it refresh in any direction (prod to copy, copy to copy, or copy to prod)?
  • Can your copies have the same performance characteristics as production?
  • Do your copies get the same data services as production, including compression and deduplication?
  • Can you get application integration and automation?
  • Can your DBAs and application owners get self-service options for application copy creation?

XtremIO iCDM is the most effective copy data management option available today: it enables better workflows, reduces risk, cuts costs and ensures SLA compliance. The benefits extend to all stakeholders, who can now perform their work more efficiently and with better results; those results show up as reduced waste and lower costs alongside better services, improved business workflows and greater productivity.


vSphere 6.5 UNMAP Improvements With DellEMC XtremIO

Two years ago, my manager at the time and I visited VMware HQ in Palo Alto. We had an agenda in mind around the UNMAP functionality found in vSphere. The title of the presentation I gave was “small problems lead to big problems” and it had a photo similar to the one above. The point we were trying to make was that the more customers use AFAs to which VMware does not release back unused capacity, the bigger the problem gets, because at a $-per-GB level, every GB matters. They got the point, and we ended the conversation with a promise to ship them an XtremIO X-Brick to develop automated UNMAP on XtremIO, something the greater good would benefit from as well, not just XtremIO.

If you are new to the UNMAP “stuff”, I encourage you to read the posts I wrote on the matter:

https://itzikr.wordpress.com/2014/04/30/the-case-for-sparse_se/

https://itzikr.wordpress.com/2013/06/21/vsphere-5-5-where-is-my-space-reclaim-command/

https://itzikr.wordpress.com/2015/04/28/vsphere-6-guest-unmap-support-or-the-case-for-sparse-se-part-2/

The day has come.

VMware has just released vSphere 6.5, which includes enhanced UNMAP functionality at both the volume (datastore) level and the in-guest level. Let’s examine both.

  1. Volume (datastore) level

    Using the UI, you can now set automated UNMAP at the datastore level to either “low” or “none”. “Low” basically means that once within a 12-hour interval, the ESXi crawler will run reclamation at the datastore level; however, you can set different thresholds using the ESXi CLI (see the command sketch at the end of this section).

    Why can’t you set it to “high” in the UI? I assume that since space reclamation is relatively I/O heavy, VMware wants you to ensure your storage array can actually cope with the load, and the CLI is less visible than the UI itself. Note that you can still run an ad hoc space reclamation at the datastore level like you could in vSphere 5.5/6.0; running it manually will finish quicker but is the heaviest option.

    If you DO choose to run it manually, the best practice for XtremIO is to set the chunk size used for the reclamation to 20000 (see the sketch at the end of this section).

  2. In-Guest space reclamation

    In vSphere 6.0, you could already run the “defrag” (Optimize Drives) operation at the Windows level; Windows Server 2012, 2012 R2 and 2016 were supported as long as you set the VMDK to “thin”, enabled in-guest space reclamation at the ESXi host, and ran the latest VM hardware version, which in vSphere 6.0 was 11 and in vSphere 6.5 is 13.

    So, what’s new?

    Linux! In the past, VMware didn’t support the SPC-4 standard, which is required to enable space reclamation inside a Linux guest OS. Now, with vSphere 6.5, SPC-4 is fully supported, so you can run space reclamation inside the Linux OS either manually from the CLI or via a cron job. To check that the Linux OS does indeed support space reclamation, run the “sg_vpd” command and look for “LBPU: 1” in the output.

    Running the sg_inq command will show whether SPC-4 is exposed at the Linux OS level.

    To reclaim space inside the Linux guest OS, simply delete files with the “rm” command (with the filesystem mounted with the discard option, or followed by an fstrim); yes, it’s that simple. See the command sketch below.

    You can see the entire flow in the following demo that was recorded by Tomer Nahum from our QA team, thanks Tomer!
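Since the original screenshots are not reproduced here, below is a rough command-line sketch of the steps described in both parts of this section. It reflects my understanding of the vSphere 6.5 esxcli options (including the EnableBlockDelete host setting for in-guest UNMAP) and the Linux sg3_utils tools; the datastore name, device and mount point are placeholders, so verify the exact flags against the documentation for your build before using them.

```bash
### Datastore (VMFS 6) level ###

# View the automatic space-reclamation settings for a datastore
esxcli storage vmfs reclaim config get --volume-label=XtremIO_DS

# Change the automatic reclamation priority (the UI only exposes "none"/"low")
esxcli storage vmfs reclaim config set --volume-label=XtremIO_DS --reclaim-priority=low

# Ad hoc (manual) reclamation; per the XtremIO best practice above,
# use a reclaim unit of 20000 blocks
esxcli storage vmfs unmap --volume-label=XtremIO_DS --reclaim-unit=20000

### In-guest prerequisites on the ESXi host (vSphere 6.0/6.5) ###

# Allow in-guest UNMAPs on thin VMDKs to be passed down to the array
esxcli system settings advanced set --option=/VMFS3/EnableBlockDelete --int-value=1

### Inside a Linux guest (vSphere 6.5, thin VMDK, HW version 13) ###

# Check that the virtual disk reports UNMAP support;
# look for "Unmap command supported (LBPU): 1"
sg_vpd --page=lbpv /dev/sdb

# Check that SPC-4 is exposed to the guest OS
sg_inq /dev/sdb

# Reclaim space: mount with "discard" so that "rm" frees blocks immediately,
# or run fstrim periodically (e.g. from cron)
mount -o discard /dev/sdb1 /data
rm /data/large_file_no_longer_needed
fstrim -v /data
```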

P.S

Note that at the time of writing this post, we have identified an issue with Windows guest OS space reclamation; AFAIK it doesn’t work with many (all?) of the arrays out there, and we are working with VMware to resolve it. Also note that you must use the full Web Client (NOT the H5 client) when formatting a VMFS 6 datastore; it appears that the embedded H5 client doesn’t align the volume with the right offset.


naa.xxxx01 was created via the Web Client; naa.xxxx02 was created via the embedded H5 client.

 

 

 

XtremIO Directions (AKA A Sneak Peek Into Project-F)

At DellEMC World 2016, together with Chris Ratcliffe (SVP, CTD Marketing) and Dan Inbar (GM XtremIO & FluidFS), we gave a session on what we call “XtremIO Future Directions”. We really wanted to show, without getting into too much detail, where we are heading in the next few years; think of it as a technical vision for the near-term future.

We started by giving a quick overview of the business. We think Dell EMC XtremIO is the FASTEST growing product ever seen in the storage business; holding more than 40% of the total AFA market share since our GA in November 2013 is something that can’t be taken lightly. For me personally, the journey has been amazing so far. As a relatively young father, I think of the acceleration the product had to go through in such a short time, and the market demand is absolutely crazy! More than 3,000 unique customers and over $3Bn in cumulative revenue. From a technical perspective, if I try to explain the success of XtremIO over other AFAs, it really boils down to “purpose-built architecture” – something which many other products now claim. When we built XtremIO, the following four pillars were (and in my opinion still are) mandatory building blocks:

  1. Purpose-Built All Flash Array

    We had the luxury of writing the code from a clean slate. That meant SSDs only in our case, and many unique features, e.g. XDP, that could never have happened in the old world (you can read more about XDP here: http://www.emc.com/collateral/white-paper/h13036-wp-xtremio-data-protection.pdf)

  2. Inline All The Time Data Services

    XtremIO is a CAS (Content Aware Storage) architecture. Many customers think of dedupe (for example) as a feature; in the case of XtremIO, it really isn’t. In the old days we described deduplication as a “side effect” of the architecture, but since “side effect” is normally thought of as a bad thing, we took that terminology out 🙂. But seriously, what I mean is that we examine the data in real time and give each chunk of data a unique signature, and by doing so, when we write the data, if the signature already exists, we simply dedupe it. There is no post-process hashing like in so many products out there, no “throttling” of the CAS engine, etc. This is a killer feature not just because of data savings but because of HOW the data is written and evenly distributed in real time across all the available nodes and drives. Apart from the CAS/deduplication engine, we of course compress all the data in real time, no knobs needed. We also encrypt the data, and everything is done while using thin provisioning, so you really only store the unique data you are producing; again, all in real time.

  3. Scale-Out IOPS & Bandwidth

    Wow, this one is funny – I remember in the old days spending hours explaining why a Scale-Out architecture is needed, and having to debunk competitors’ claims like “no one needs 1M IOPS” and “no one needs more than a dual-controller architecture”. While I would agree that not everyone needs it, if I look at our install base, so many DO. It’s also not just IOPS; the tightly coupled Scale-Out architecture is what gives you the bandwidth and low latency that your applications need.

  4. CDM (or, unique Snapshots capabilities)

    If IDC/Gartner are right in saying that more than 60% of the data stored in your datacenter is actually multiple copies of your data, and if your storage array can’t cope with these copies in an efficient way (note that “efficient” means not just capacity efficiency but also no performance penalty), then you’re not as agile as you could be and your TCO goes out the window – read more about it here:

    http://wikibon.com/evolution-of-all-flash-array-architectures/

    Luckily, XtremIO snapshots have been specifically designed to be ultra-efficient, and as a result we see many customers using them not just for the primary data itself but for its copies as well. You can read more about it here: https://itzikr.wordpress.com/2014/05/12/xtremio-redefining-snapshots/

    https://www.emc.com/collateral/solution-overview/solution-overview-xtremio-icdm-h14591.pdf

Moving on to our longer-term vision for XtremIO, what’s interesting is that the current XtremIO array (internally known as X1) was never the endgame; rather, it’s a STEP on the road to what we believe customers will need in the coming years. The issue we have faced is that building what we want requires many new technologies to become available. We are gradually implementing them all, each cycle, with what’s available from a hardware/software perspective.


The current architecture is doing an amazing job for:

  • Acceleration (the ability to simply accelerate your workload, e.g. your Database) by moving it to XtremIO.
  • Providing very good copy data management (iCDM)
  • Consolidation of large environments into the cluster (think virtualized environments with thousands of VMs residing in the large XtremIO cluster configurations, up to 8 X-Bricks, all 16 storage controllers running in an Active/Active fashion with global dedupe and compression)

We aren’t stopping there: we are going to take scalability to different dimensions, providing far denser configurations, very flexible cluster configurations and…new data services. One of them is the NAS add-on that was the highlight of the session we had at Dell EMC World. Note that it is only one of a number of new data services we will be adding. So why did we mention NAS specifically if we are going to introduce other data services as well? It’s very simple really: this is the first “Dell” and “EMC” World, and we wanted to highlight a true UNIFIED, ENGINEERED solution coming from both companies, which are now one (Dell EMC).

Before you read ahead about the NAS part: we also highlighted other elements of the technical roadmap, e.g. the ability to truly decouple compute (performance) from capacity, the ability to leverage elements of Software Defined Storage in the solution, and really optimizing the way we move data, not just the way it lands in the array but also how it moves sideways to the cloud or to other products. Here again, the core CAS architecture comes into play. “Getting the architecture right” was our slogan in the early days; you now understand why it was so important to get it right the first time.

Ok, so back to the NAS support! We gave the first peek into something internally called “Project-F”, and I must say, the response has been overwhelming, so we thought we should share it with you as well. Please note that a lot of this is still under wraps, and as always when you deal with roadmap items, the usual disclaimers apply – roadmaps can change without notice, etc.

Ok, so what is it?

During 2017, Dell EMC will release an XtremIO-based unified block and file array. By delivering XtremIO NAS and block protocol support on a unified all-flash array, we plan to deliver the transformative power and agility of flash in modern applications to NAS. XtremIO is the most widely adopted All-Flash platform for block workloads. However, we recognized an opportunity to extend our predictable performance, inline data services, and simple scalability to file based workloads. This is the first unified, content-aware, tightly-coupled, scale-out, all flash storage with inline data services to provide consistent and predictable performance for file and block.
The main objective of the Dell/EMC merger is to create an entity that is greater than the sum of its parts: not just operational synergies, but technical synergies that enable compelling new solutions. The new XtremIO file support feature set is the first of many synergies to come. XtremIO’s NAS capabilities are based on a proven NAS file system (FluidFS) from Dell Technologies and offer:

  • A full NAS feature set
  • Support for multiple NAS protocols (NFS, SMB/CIFS, HDFS and NDMP)
  • Over 1M NAS IOPS with predictable sub-millisecond latency
  • Enterprise-grade scalability and reliability

A common question that I get is “don’t you already have other NAS solutions?” To me, this question is silly. EMC, and now Dell EMC, has always been about a portfolio approach. Let’s ignore NAS for a second: wouldn’t this question be applicable to block-based protocols (and products) as well? Of course it would, and as in block, different products serve different use cases. For the XtremIO NAS solution, we were looking for a platform that can scale out in a tightly-coupled manner, where metadata is distributed, and one that fits the use cases below. Again, there is nothing wrong with the other approaches; each has its pros and cons for different use cases, which is the beauty of the portfolio. We don’t force the use case onto one product, we tailor the best product to the specific customer use case. Regardless of the problem you are trying to solve, Dell EMC has a best-of-breed platform that can help. If you want to learn more about the storage categories, I highly encourage you to read Chad’s post here: http://virtualgeek.typepad.com/virtual_geek/2014/01/understanding-storage-architectures.html

Target Use Cases

This solution targets workloads that require low latency, high performance, and multi-dimensional scalability. Transactional NAS applications are well-suited use cases for the XtremIO NAS capabilities; a few examples are VSI, OLTP and VDI, and mixed workloads including test & dev and DevOps. These workloads will also leverage XtremIO’s data services such as compression and deduplication.

A good rule of thumb is, if you are familiar with the current XtremIO use cases and want them to be applied over NAS/SMB, that is a perfect match.

File Capabilities & Benefits

With the addition of NAS protocol support, XtremIO can deliver all this with its unique scale-out architecture and inline data services for Transactional NAS and block workloads. Storage is no longer the bottleneck; instead it enables database lifecycle acceleration, workload consolidation, private/hybrid cloud adoption, and Transactional NAS workload optimization. The key features and benefits include:

Unified All-Flash scale-out storage for block and file
  • Multi-protocol support for FC, iSCSI, NFS, SMB, FTP, HDFS and NDMP
  • Predictable and consistent high performance
  • In-memory metadata architecture with inline data services

Scalable
  • Over 1M NAS IOPS with sub-millisecond latency
  • Billions of files
  • Petabyte scalability
  • Elasticity – scale-out for block and file, grow when needed; the NAS part can scale from one appliance (2 Active/Active controllers) to 6 appliances (12 Active/Active controllers!)

Simple
  • Single unified management (scroll down to see a demo of it)

Resilience
  • Built-in replication for file
  • Enterprise-grade availability
  • Proven technology from Dell and EMC

Efficiency
  • Inline deduplication and compression
  • Efficient virtual copy technology for block and file
  • Thin provisioning support

Both file and block will be using the XtremIO inline data services such as encryption, inline compression and deduplication. In addition, for file workloads native array based replication is available.

Other technical capabilities include:
  • ICAP for antivirus scanning
  • Virtual copy technology
  • Remote replication
  • Quotas (on name, directories or users)

Below, you can see a recorded demo of the upcoming HTML5 UI. Note that it is different from the web UI tech preview we introduced in July 2016 ( https://itzikr.wordpress.com/2016/06/30/xios-4-2-0-is-here-heres-whats-new/ ); as the name suggests, during the “tech preview” we have been gathering a lot of customer feedback about what they want to see in the future UI of XtremIO (hence the name, tech preview).



If you want to participate in a tech preview then start NOW! Speak to your DellEMC SE / Account Manager and they’ll be able to help you enroll.

P.S

The reason we called the NAS integration “Project-F” is simple: the original XtremIO product had a temporary name, “Project-X” 🙂


==Update==

You can now watch a high-quality recording of the session itself here.

 

RecoverPoint For VMs (RP4VMs) 5.0 Is Here

Hi,

Carrying on from today’s announcements about RecoverPoint ( https://itzikr.wordpress.com/2016/10/10/recoverpoint-5-0-the-xtremio-enhancements/ ), I’m also super pumped to see that my brothers & sisters from the CTD group just released RP4VMs 5.0! Why do I like this version so much? In one word, “simplicity”; or in two words, “large scale”. These are the two topics I will try to cover in this post.

The RP4VMs 5.0 release is centered on the main topics of ease of use, scale, and Total Cost of Ownership (TCO). Let’s take a quick look at the enhanced scalability features of RP4VMs 5.0. One enhancement is that it can now support an unlimited amount of vCenter inventory. It also supports the protection of up to 5000 virtual machines (VMs) when using 50 RP clusters, 2500 VMs with 10 RP clusters, and now up to 128 Consistency Groups (CGs) per cluster.
The major focus here is on easy deployment and network savings, as we discuss the simplification of configuring the vRPAs using anywhere from 1 to 4 IP addresses during cluster creation and the use of DHCP when configuring IP addresses.

RP4VMs 5.0 has enhanced the pre-deployment steps by allowing all the network connections to be on a single vSwitch if desired. The RecoverPoint Deployer used to install and configure the cluster has also been enhanced, with a reduction in the number of IP addresses required. And lastly, improvements have been made in the RecoverPoint for VMs plug-in to improve the user experience.


With the release of RecoverPoint for VMs 5.0, the network configuration requirements have been simplified in several ways. The first enhancement is that all the RP4VMs vRPA connections on each ESXi host can now be configured to run through a single vSwitch. This makes the pre-deployment steps quicker and easier for the vStorage admin to accomplish and takes up fewer resources. It can also all be run through a single vmnic, saving resources on the hosts.


With the release of RP4VMs 5.0, all of the network connections can be combined onto a single vSwitch. Here you can see that the WAN, LAN, and iSCSI ports are all on vSwitch0. While two VMkernel ports are shown, only a single port is required for a successful implementation. Now for a look at the properties page of the vRPA we just created. You can see here that the four network adapters needed for vRPA communication are all successfully connected to the VM Network port group. It should be noted that while RP4VMs 5.0 allows you to save on networking resources, you still need to configure the same underlying infrastructure for the iSCSI connections as before.


A major goal with the introduction of RP4VMs 5.0 is to reduce the number of IPs per vRPA down to as few as one. Achieving this allows us to reduce the required number of NICs and ports per vRPA. This also allows the iSCSI connections to be funneled through a single port. Because of this, the IQN names for the ports are reduced to one, and the name represents the actual NIC being used, as shown above. The reduced topology is available when doing the actual deployment, either when running Deployer (DGUI) or when selecting the box property in boxmgmt. This will be demonstrated later.

Releases before 5.0 supported selecting a different VLAN for each network during OVF deployment. The RecoverPoint for Virtual Machines OVA package in Release 5.0 requires that only the management VLAN be selected during deployment. Configuring this management VLAN in the OVF template enables you to subsequently run the Deployer wizards to further configure the network adapter topology using 1-4 vNICs as desired.

RP4VMs 5.0 supports a choice of five different IP configuration options during deployment. All vRPAs in a cluster require the same configuration for it to work. The five options are:

  • Option 1 – a single IP address, with all traffic flowing through eth0.
  • Option 2A – two IP addresses, with the WAN and LAN on one and the iSCSI ports on the other.
  • Option 2B – two IP addresses, with the WAN on its own and the LAN and iSCSI connections on the other.
  • Option 3 – three IP addresses: one for the WAN, one for the LAN and one for the two iSCSI ports.
  • Option 4 – all connections separated onto their own IPs. Use this option when performance and High Availability (HA) are critical; it is the recommended practice whenever the resources are available.

It should be noted that physical RPAs only use options 1 and 2B, without iSCSI, as iSCSI is not yet supported on a physical appliance.

Observe these recommendations when making your selection:

1) In low-traffic or non-production deployments, all virtual network cards may be placed on the same virtual network (on a single vSwitch).

2) Where high availability and performance are desired, separate the LAN and WAN traffic from the Data (iSCSI) traffic. For even better performance, place each network (LAN, WAN, Data1, and Data2) on a separate virtual switch.

3) For high-availability deployments in which clients have redundant physical switches, route each Data (iSCSI) card to a different physical switch (best practice) by creating one VMkernel port on each vSwitch dedicated to Data (iSCSI).

4) Since the vRPA relies on virtual networks, the bandwidth that you expose to the vRPA iSCSI vSwitches will determine the performance of the vRPA. You can configure vSphere hosts with quad-port NICs and present them to the vRPAs as single or dual iSCSI networks, or implement VLAN tagging to logically divide the network traffic among multiple vNICs.


The Management network will always be run through eth0. When deploying the OVF template you need to know which configuration option you will be using in Deployer and set the port accordingly. If you do not set the management traffic to the correct destination network, you may not be able to reach the vRPA to run Deployer.


To start the deployment process, enter the IP address of one of the vRPAs, which has been configured in your vCenter, into a supported browser in the following format: https://<vRPA_IP_address>/. This will open the RecoverPoint for Virtual Machines home page, where you can get documentation or start the deployment process. To start Deployer, click on the EMC RecoverPoint for VMs Deployer link.


After proceeding through the standard steps from previous releases, you will come to the Connectivity Settings screen. In our first example we will set up the system to have all the networking go through a single interface. The first item to enter is the IP address which will be used to manage the RP system. Next you will choose what kind of network infrastructure will be used for the vRPAs in the Network Adapters Configuration section. The first part is to choose the Topology for WAN and LAN in the dropdown. When selected, you will see two options to choose from: WAN and LAN on the same adapter, and WAN and LAN on separate adapters. In this first example we will choose WAN and LAN on same network adapter.
Depending on the option chosen, the number of fields available to configure will change, as will the choices in the Topology for Data (iSCSI) dropdown. To use a single interface we will select the Data (iSCSI) on the same network adapter as WAN and LAN option. Because we are using a single interface, the Network Mapping dropdown is grayed out and the choice we made when deploying the OVF file for the vRPA has been chosen for us. The next available field to set is the Default Gateway, which we entered to match the Cluster Management IP. Under the vRPAs Settings section there are only two IP fields. The first is for the WAN/LAN/DATA IP, which was already set as the IP used to start Deployer. The second IP is for all the connections in the second vRPA that we are creating the cluster with. This IP will also be used for LAN/WAN and DATA on the second vRPA. So there is a management IP and a single IP for each vRPA to use once completed.


Our second example is for the Data on a separate connection from the WAN/LAN option, which we have selected in the Network Adapters Configuration dropdown lists by choosing WAN and LAN on same adapter and Data (iSCSI) on separate network adapter from WAN and LAN. Next we will have to select a network connection to use for the Data traffic from a dropdown of configured interfaces. Because we are using multiple IP addresses, we have to supply a netmask for each one, unlike the previous option where it was already determined by the first IP address we entered to start Deployer. Here we enter one for the WAN/LAN and another for the Data IP address. Under the vRPAs Settings section, which is lower on the vRPA Cluster Settings page, we will have to provide an IP for the Data connection of the first vRPA, and also the two required for the second vRPA's connections.


Our third example is for the WAN is separated from LAN and Data connection option, which we have selected in the Network Adapters Configuration dropdown lists by choosing WAN and LAN on separate adapters and Data (iSCSI) on same network adapter as LAN. Once that option is selected the fields below will change accordingly. Next we will have to select a network connection to use for the WAN traffic from a dropdown of configured interfaces since the LAN and Data are using the connection chosen when deploying the OVF template for the vRPA. We once again need to supply two netmasks, but this time the first is for the LAN/Data connection and the second is for the WAN connection alone. Under the vRPAs Setting section which is lower down on the vRPA Cluster Settings page, you will supply an IP for the WAN alone on the first vRPA and two IPs for the second vRPA, one for the WAN and one for the LAN/Data connection.


The 4th example is for the WAN and LAN and Data all being separated option, which we have selected in the Network Adapters Configuration dropdown lists by choosing WAN and LAN on separate adapters and Data (iSCSI) on separate network adapters from WAN and LAN. Once that option is selected the fields below will change accordingly displaying the new configuration screens shown here. Next we will have to select a network connection to use for the WAN and the Data traffic from a dropdown of configured interfaces since the LAN is using the connection chosen when deploying the OVF template for the vRPA. There will now be three netmasks which need to be input, one for LAN, one for WAN and a single one for the Data connections. In the vRPAs Settings section which is lower down on the Cluster Settings page, you will now input a WAN IP address and a Data IP address for the first vRPA and then an IP for each of the LAN, WAN and Data connections individually on vRPA2.


The last available option is the All are separated with dual data NICs option, which we have selected in the Network Adapters Configuration dropdown lists by choosing WAN and LAN on separate adapters and Data (iSCSI) on two dedicated network adapters; this is used for the best available performance and is recommended as a best practice. Once those options are selected, the fields below will change to display the new configuration screens shown. Next we will have to select network interfaces to use for the WAN and the two Data connections from a dropdown of configured interfaces, since the LAN is using the connection chosen when deploying the OVF template for the vRPA. This option requires four netmasks to be entered, one each for the WAN, LAN, Data 1 and Data 2 IPs, as all have their own connection links. Under the vRPAs Settings section, which is lower down on the Cluster Settings page, we can now see that we need to provide the full set of IPs, which can be used in any configuration per vRPA.


With the release of RP4VMs 5.0, DHCP is supported for the WAN, LAN and iSCSI interfaces, but the cluster management and iSCSI VMkernel addresses must remain static. Support is also added for dynamic changes to all interfaces, unlike previous versions of the software. RP4VMs 5.0 also offers full stack support for IPv6 on all interfaces except iSCSI. Another enhancement is a reduction in the amount of configuration data which is shared across the clusters; with 5.0, only the WAN addresses of all vRPAs, the LAN addresses of vRPAs 1 and 2, the MTUs, the cluster name, and the cluster management IPs of all clusters are shared.
Note that the boxmgmt option to retrieve settings from a remote RPA is unsupported as of 5.0. When IP addresses are provided by DHCP and an RPA reboots, the RPA will acquire an IP address from the DHCP server. If the DHCP server is not available, the RPA will not be able to return to the operational state; therefore it is recommended to supply redundant, highly available DHCP servers in the network when using the DHCP option.


Shown here on the Cluster Settings page of the Connectivity step of Deployer, we can see the option to select DHCP for individual interfaces. Notice that the DHCP icon does not appear for the Cluster Management IP address. This address must remain static in 5.0. If any of the other interfaces have their DHCP checkbox selected, the IP address netmasks will be removed and DHCP will be entered in its place. When you look at the vRPAs Settings window you can see that the LAN is a static address while the WAN and two Data addresses are now using DHCP. Another item to note here is that IPv6 is also available for all interfaces except for iSCSI, which currently only supports IPv4. Another important consideration to take note of is that adapter changes are only supported offline using boxmgmt. A vRPA would have to be detached from the cluster, the changes made, and then reattached to the cluster.


Let us take a closer look at the main page of Deployer. In the top center we see the vCenter IP address as well as the version of the RP4VM plugin which is installed on it. Connected to that is the RP4VMs cluster which displays the system name and the management IP address. If the + sign is clicked, the display will change to display the vRPAs which are part of the cluster.
In the wizards ribbon along the bottom of the page we will find all the functions that can be performed on a cluster. On the far right of the ribbon are all the wizards for the 5.0 release of Deployer which includes wizards to perform vRPA cluster network modifications, replace a vRPA, add and remove vRPAs from a cluster and remove a vRPA cluster from a system.


Up to the release of RecoverPoint for VMs 5.0, clusters were limited to a minimum of two appliances with a maximum of eight. While such a topology makes RP4VMs clusters more robust and provides high availability, additional use cases exist where redundancy and availability is traded for cost reduction. RP4VMs 5.0 introduces support for a single-vRPA cluster to enable, for example, product evaluation of basic RecoverPoint for VMs functionality and operations, and to provide cloud service providers the ability to support a topology with a single-vRPA cluster per tenant. This scale out model enables starting with a low scale single-vRPA cluster and provides a simple scale out process. This makes RP4VMs a low footprint protection tool. It protects a small number of VMs by having a minimal required footprint to reduce Disaster Recovery (DR) costs. The RecoverPoint for VMs environment allows scale out in order to meet sudden growth for DR needs.
RP4VMs systems can contain up to five vRPA clusters. They can be in a star, partially connected, or fully connected formation, protecting VMs locally or remotely. All clusters in an RP4VMs system need to have the same number of vRPAs. RP4VMs 5.0 single-vRPA cluster deployments reduce the footprint for network, compute, and storage overhead for small to medium deployments, offering a Total Cost of Ownership (TCO) reduction.
Note: The single-vRPA cluster is only supported in RecoverPoint for VMs implementations.


The RP4VMs Deployer can be used for connecting up to five clusters to the RP4VMs system. All clusters in an RP4VMs system must have the same number of vRPAs. Software upgrades can be run from the Deployer. Non-disruptive upgrades are supported for clusters with two or more vRPAs. For a single-vRPA cluster, a warning shows that the upgrade will be disruptive. The replication tasks managed by this vRPA will be stopped until the upgrade is completed. The single-vRPA cluster upgrade occurs without a full sweep or journal loss. During the vRPA reboot, the Upgrade Progress report may not update and Deployer may become unavailable. When the vRPA completes its reboot, the user can log in to Deployer and observe the upgrade process to its completion. Deployer also allows vRPA cluster network modifications, such as cluster name, time zones and so on, for single-vRPA clusters. To change network adapter settings, use advanced tools such as the Deployer API or the boxmgmt interface.


The vRPA Cluster wizard in Deployer is used to connect clusters. When adding an additional cluster to an existing system, the cluster must be clean, meaning that no configuration changes, including license installations, have been made to the new cluster. Repeat the connect cluster procedure to connect additional vRPA clusters.
Note: RP4VMs only supports clusters with the same number of vRPAs in one RP4VMs system.


New Dashboard tabs for RP4VM 5.0 will provide users an overview of system health and Consistency Group Status. The new tabs will allow the administrator quick access to the overall RP4VM system.
To access the Dashboard, in the vSphere Web Client home page, click on the RecoverPoint for VMs icon.
New Tabs include:
  • Overall Health – provides a summary of the overall system health including CG transfer status and system alerts
  • Recovery Activities – displays recovery activities for copies and group sets, provides recovery-related search functions, and enables users to select appropriate next actions
  • Components – displays the status of system components and a history of incoming writes or throughput for each vRPA cluster or vRPA
  • Events Log – displays and allows users to filter the system events


The new Dashboard for RP4VMs includes a Recovery Activities Tab. This will allow the monitoring of any active recovery actions such as Failover, Test a Copy and Recover Production. This tab allows the user to monitor and control all ongoing recovery operations.


The RecoverPoint for VMs Dashboard includes a Component Tab for viewing the status of all Clusters and vRPAs managed by the vCenter Server instance. For each component selected from the System Component window on the left, relevant statistics and information will be displayed in the right window.


Beginning with RecoverPoint for VMs 5.0 there is now an automatic RP4VM Uninstaller. Running the Uninstaller removes vRPA clusters and all of their configuration entities from vCenter Servers.
For more information on downloading and running the tool, see Appendix: Uninstaller tool in the RecoverPoint for Virtual Machines Installation and Deployment User Guide.


RecoverPoint for VMs 5.0 allows the removal of a Replication set from a Consistency Group without journal loss. Removing a Replication set does not impact the ability to perform a Failover or Recover Production to a point in time before the Rset was removed. The deleted Rset will not be restored as part of that image.
The RecoverPoint system will automatically generate a bookmark indicating the Rset removal. A point to remember is that the last remaining Replication Set of a Consistency Group cannot be removed.


Here we see a Consistency Group that is protecting two VMs. Each VM has a local copy. Starting with RP4VMs 5.0, a user can now remove a protected VM from the Consistency Group without causing a loss of journal history. Also, after a VM removal the Consistency Group is still fully capable of returning to an image that contained the removed VM, using Recover Production or Failover.


Displayed is a view of the Journal Volume for the copies of a Consistency Group. There are both system-made Snapshots and user-generated Bookmarks. Notice that after deleting a Replication Set, a Bookmark is created automatically. All the snapshots created from that point on will not include the volumes from the removed virtual machine.

Let’s see some demos:

The New Deployer

Protection of VMs running on XtremIO

Failing over VMs running on XtremIO

RecoverPoint 5.0 – The XtremIO Enhancements

Hi,

We have just released the 5th version of RecoverPoint, which offers an even deeper integration with XtremIO. If you are not familiar with RecoverPoint, it’s the replication solution for XtremIO and basically offers a scale-out approach to replication. RecoverPoint can be used in its physical form (which is the scope of this blog post) or as a software-only solution (RecoverPoint for Virtual Machines, RP4VMs).

Expanding an XtremIO volume is very simple from the CLI or UI. To expand a volume in XtremIO simply right-click the volume, modify it and set the new size.

Before RecoverPoint 5.0, increasing the size of a volume was a manual process. While XtremIO made it very easy to expand volumes, RecoverPoint was unable to perform the change in size. To increase the size of a volume, you would have to remove the volume from RecoverPoint, log in to the XtremIO and resize the volume. Once the volume was resized, add the volume to RecoverPoint again. A volume sweep would be triggered by the new volume in RecoverPoint.

RecoverPoint 5.0 and above allows online expansion of Replication Set volumes without causing a full sweep and the resulting journal loss. When both the production and copy volumes are from an XtremIO array, the Replication Set can be expanded. Best practice is to perform the re-sizing on the copy first, then change production to match.

This example has a consistency group containing two replication sets. The selected replication set has a production volume and a single remote copy. Both are on XtremIO arrays and in different clusters.

Here is an example of expanding the size of a replication set. The first step is to expand the (remote) copy on the XtremIO array. The volume can be identified by the UID, which is common to both RecoverPoint and XtremIO. Next we increase the size to 75 GB in this example.

Notice that now the Copy and Production volumes of the Replication Set are different sizes. Since we expanded the Copy volume first, the snapshots created during the time the volumes are mismatched will still be available for Failover and Recover Production. Upon a rescan of the SAN volumes, the system will issue a warning and log an event. A rescan can be initiated by the user or it will happen during snapshot creation.

Next we will expand the Production volume of the Replication Set. Once this is accomplished the user can initiate a rescan or wait until the next snapshot cycle.

After the rescan is complete, the Replication Set now contains the Production and the copy of the same size. Displayed is the Journal history, notice the snapshots and bookmarks are intact. Also there is a system generated bookmark after the resizing is complete.

RecoverPoint protection is policy-driven. A protection policy, based on the particular business needs of your company, is uniquely specified for each consistency group, each copy (and copy journal) and each link. Each policy comprises settings that collectively govern the way in which replication is carried out. Replication behavior changes dynamically during system operation in light of the policy, the level of system activity, and the availability of network resources. Some advanced protection policies can only be configured through the CLI.

Beginning with RecoverPoint 5.0, there is a new Snapshot Consolidation Policy for copies on the XtremIO array. The goal of this new policy is to make the consolidation of snapshots more user-configurable.
The XtremIO array currently has a limit of 8K volumes, snapshots and snapshot mount points. RecoverPoint 4.4 and below enforces the maximum number of snapshots, but in a non-changeable manner; for example, the user cannot change when RecoverPoint will consolidate the snapshots on the XtremIO. This new policy allows the user more control over how long the snapshots will be retained.

One to three consolidation policies can now be specified for each copy of a consistency group that resides on an XtremIO array. By default, and for simplicity, the consolidation policy is selected automatically based on the number of snapshots and the required protection window.

The new CLI command config_detailed_snapshot_consolidation_policy allows a much more detailed and concise consolidation policy for copies on an XtremIO array. The config_copy_policy command allows setting the maximum number of snapshots.

Data Domain is an inline deduplication storage system, which has revolutionized disk-based backup, archiving, and disaster recovery that utilizes high-speed processing. Data Domain easily integrates with existing infrastructure and third-party backup solutions.
ProtectPoint is a data protection offering which brings the benefits of snapshots together with the benefits of backups. By integrating the primary storage with the protection storage, ProtectPoint reduces cost and complexity, increases speed, and maintains recoverability of backups.
RecoverPoint 4.4 introduced support for ProtectPoint 3.0 by enabling the use of Data Domain as a local copy for XtremIO protected volumes. The Data Domain system has to be registered with RecoverPoint using IP, while data transfer can be configured to use IP or FC. The Data Domain based copy is local only, and the link policy supports two modes of replication:
  • Manual – bookmarks and incremental replication are initiated from the File System or Application Agents
  • Periodic – RecoverPoint creates regular snapshots and stores them on Data Domain
There is no journal for the Data Domain copy. During the Protect Volumes or Add Copy wizard, if the add Data Domain copy checkbox is selected, users will only select the Data Domain registered resource pool.

When using RecoverPoint 5.0 with ProtectPoint 3.5 and later, specific volumes of a consistency group can be selected for production recovery, while others volumes in the group are not recovered.

Displayed is an example of a use case for the new Partial Restore feature of RecoverPoint 5.0. In this example, the new feature allows for the recovery of only part of a database system.

During a partial restore:
  • Transfer is paused to all copies until the action completes; in particular, transfer is paused from the XIO source to the DD replica during recovery.
  • At production, only the selected volumes are blocked for writing; non-selected volumes remain accessible.
  • Only the selected volumes are restored.
  • During transfer, marking data is collected for the non-selected volumes, in case writes are being performed to them in production.
  • After production resumes, all volumes undergo a short initialization (in case a periodic snap-based replication policy is configured for the XIO->DD link).

VSI 7.0 Is Here, VPLEX Support Is Included!

Hi,

We have just released the 7th version of the VSI (Virtual Storage Integrator) vCenter plugin! This release includes:

  1. VPLEX VIAS Provisioning (and viewing) support. Yes, it’s been a long-overdue feature and I’m proud to say it’s here now. First we need to register VPLEX, which is a pretty straightforward thing to do.

    VPLEX VIAS Provisioning Support – Use Cases

    VPLEX Integrated Array Services (VIAS):
    • Create virtual volumes from pre-defined storage pools on supported arrays. These storage pools are visible to VPLEX through array management providers (AMPs).
    • Simplify provisioning, compared with traditional provisioning from storage volumes.

    VPLEX VIAS Provisioning Support – Prerequisites

    Software prerequisites:
    • VMware vSphere 6.0 or 5.5
    • VMware vSphere Web Client
    • VMware ESX/ESXi hosts
    • VMware vCenter Servers (single or linked-mode)

    Storage prerequisites:
    • VPLEX 5.5/6.0
    • Only XtremIO/VMAX3/VNX Block are supported
    • A storage pool has been created on the array
    • An AMP is registered with VPLEX and its connectivity status is OK

    VPLEX VIAS Provisioning Support – Provision VMFS Datastore on Host Level

    VPLEX VIAS Provisioning Support – Provision VMFS Datastore on Cluster Level

    VPLEX VIAS Provisioning Support – Provision VMFS Datastore (wizard walkthrough screenshots)


    Let’s see a demo of how it all works – thanks to Todd Toles, who recorded it!

  2. Multiple vCenters support. Background:

    VSI 6.7 or older – designed to work with a single vCenter; multiple vCenters were not fully supported, and more and more customers requested this capability
    VSI 6.8 – XtremIO use cases
    VSI 6.9.1 & 6.9.2 – RecoverPoint & SRM use cases, Unity/UnityVSA use cases

  3. Quality improvement – batch provisioning use case. When a customer provisions 30 datastores on a cluster with 32 ESXi hosts, the vCenter can become unresponsive. Root cause: a huge number of tasks (~900+ "Rescan all HBA" and "Rescan VMFS" tasks) are created in a short time, which generates a rescan storm and heavily impacts the vCenter.

    What have we done to fix it?

    Code changes to optimize the host rescan operations invoked by VSI.
    Manually configure the vCenter advanced settings to add "config.vpxd.filter.hostRescanFilter=false", which disables the automatic VMFS rescan on each host (under the same cluster) when a new datastore is created for the cluster. Re-enable this filter once batch provisioning is done (a scripted sketch of this toggle follows below).
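If you prefer to script the toggle rather than click through the vCenter advanced settings UI, here is a minimal sketch using pyVmomi. This is not part of VSI: the vCenter address, credentials, and the helper function are illustrative assumptions; only the setting name comes from the note above.

```python
# Minimal sketch (assumptions: placeholder vCenter address/credentials, lab-only
# unverified SSL). Toggles the vCenter advanced setting that controls automatic
# host rescans when a new datastore is created.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab use only
si = SmartConnect(host="vcenter.example.com",     # placeholder vCenter
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ctx)
opt_mgr = si.RetrieveContent().setting            # vCenter OptionManager

def set_host_rescan_filter(enabled: bool) -> None:
    """Set config.vpxd.filter.hostRescanFilter ('false' suppresses auto rescans)."""
    value = vim.option.OptionValue(key="config.vpxd.filter.hostRescanFilter",
                                   value=str(enabled).lower())
    opt_mgr.UpdateOptions(changedValue=[value])

set_host_rescan_filter(False)   # before batch provisioning
# ... provision the datastores ...
set_host_rescan_filter(True)    # restore the default once provisioning is done

Disconnect(si)
```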

AppSync 3.0.2 is out 

Hi

We've just GA'd AppSync 3.0.2, which includes the following new features and enhancements:

 

1. Hotfix/Patch improvement – Starting in AppSync 3.0.2, hotfix/patch is delivered as an executable like the AppSync main installer. A single patch will install both the AppSync server and the AppSync agent hotfixes. You can push the hotfix to both UNIX and Windows agents from the AppSync server.

2. Agent deployment and discovery separation – Enables the deployment of the agent even if discovery fails across all supported AppSync applications including Microsoft applications, File system and Oracle.

3. Event message accuracy and usefulness

The installation and deployment messages have been enhanced to provide more specific information that helps in identifying the cause of the problem.

All AppSync error messages have been enhanced to include a solution.

4. Unsubscribe from subscription tab – You can now easily unsubscribe applications from the Subscription tab of a service plan.

5. Storage order preference enhancement – You can now limit the copy technology preference in a service plan by clearing the storage preference options you do not want.

6. FSCK improvements – You can now skip performing a file system check (fsck) during a mount operation on UNIX hosts.

7. Improved SRM support – For RecoverPoint 4.1 and later, AppSync can now manage VMware SRM managed RecoverPoint consistency groups without manual intervention. A mount option is now available to automatically disable the SRM flag on a consistency group before enabling image access, and to return it to the previous state after the activity.

8. XtremIO improvements

a. Reduces the application freeze time during application consistent protection.

b. Support for XtremIO version earlier than 4.0.0 has been discontinued.

9. Eliminating ItemPoint from AppSync – ItemPoint is no longer supported with AppSync. Users cannot perform item-level restore for Microsoft Exchange using ItemPoint with AppSync.

10. XtremIO and Unity performance improvement – Improved operational performance of Unity and XtremIO.

11. Serviceability enhancements – The Windows control panel now displays the size and version of AppSync.

 

The AppSync User and Administration Guide provides more information on the new features and enhancements. The AppSync Support Matrix on https://elabnavigator.emc.com/eln/modernHomeDataProtection is the authoritative source of information on supported software and platforms.

 

Not All Flash Arrays Snapshots Are Born (Or Die) Similar.

 

Hi,

CDM (Copy Data Management) is a huge thing right now for AFA vendors; each product tries to position itself as an ideal platform for it, but like anything else, the devil is in the details.

If you are new to this concept, I would encourage you to start here:

http://xtremioblog.emc.com/the-why-what-and-how-of-integrated-copy-data-management

and then view the following 3 videos Yoav Eilat did with a great partner of ours, WWT.

Done watching all the videos and still not convinced? How should you test your AFA vs. the others? It's pretty simple actually:

  1. Fill your AFA with data (preferably DBs)
  2. Take plenty of snapshots of the same DB
  3. Present these snapshots to your DB host and run IO against them (using SLOB for example)
  4. Do you see a performance hit on your parent volume compared to the snapshot volumes? Red flag! (A rough sketch of this comparison follows below.)
  5. Delete some snapshots and see what happens

You’ll thank me later.
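To make step 4 concrete, here is a rough, illustrative sketch of the comparison. The device paths, runtime, and 10% threshold are assumptions, and fio is used only as a generic stand-in for SLOB, not as a recommended methodology.

```python
# Illustrative only: baseline the parent volume, then run the same workload on
# the parent and its snapshots together and check whether the parent slows down.
import json
import subprocess

PARENT = {"parent": "/dev/mapper/prod_db"}                # placeholder parent volume
SNAPSHOTS = {"snap_01": "/dev/mapper/prod_db_snap1",      # placeholder snapshot volumes
             "snap_02": "/dev/mapper/prod_db_snap2"}

def run_fio(targets: dict) -> dict:
    """Run an identical 8k random-read job against every listed device at once."""
    cmd = ["fio", "--output-format=json"]
    for name, dev in targets.items():
        cmd += [f"--name={name}", f"--filename={dev}",
                "--rw=randread", "--bs=8k", "--iodepth=32", "--direct=1",
                "--ioengine=libaio", "--runtime=60", "--time_based"]
    out = subprocess.run(cmd, capture_output=True, check=True, text=True)
    return {job["jobname"]: job["read"] for job in json.loads(out.stdout)["jobs"]}

baseline = run_fio(PARENT)["parent"]                  # parent volume alone
loaded = run_fio({**PARENT, **SNAPSHOTS})["parent"]   # parent + snapshots hammered together

drop = 1 - loaded["iops"] / baseline["iops"]
print(f"parent alone:          {baseline['iops']:,.0f} IOPS")
print(f"parent with snapshots: {loaded['iops']:,.0f} IOPS ({drop:.0%} drop)")
if drop > 0.10:  # arbitrary threshold for illustration
    print("Red flag: the parent volume takes a hit when its snapshots are active.")
```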

VMworld 2016, Introducing the EMC XtremIO VMware vRealize Orchestrator Adapter

Hi,

The 2nd big announcement at VMworld is the vRealize Orchestrator (vRO, formerly vCenter Orchestrator, or vCO) adapter for XtremIO.

This has been in the making for quite some time. As someone who is very close to the development of this, I can tell you that we have been in contact with many customers about the exact requirements, since at the end of the day vCO is a framework, and like any other framework, it is only as good (or bad) as the popular workflows it supports.

The adapter that we will be releasing in 09/2016 will include the majority of the XtremIO functionality: volume creation, deletion, extending a volume, snapshot creation, etc. Shortly after the 1st release, we will be adding reports and replication support via RecoverPoint.
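To give a feel for the kind of operation such a workflow wraps, here is a minimal, illustrative Python sketch that creates a snapshot directly against the XMS REST API. The XMS address, credentials, and payload field names are assumptions and should be checked against the XtremIO RESTful API guide; this is not the adapter itself.

```python
# Illustrative only: snapshotting a volume via the XMS v2 REST API (the kind of
# call a vRO workflow would automate). Host, credentials, and payload fields
# are assumptions; verify them against the XtremIO RESTful API guide.
import requests

XMS = "https://xms.example.com"      # placeholder XMS address
AUTH = ("admin", "password")         # placeholder credentials

def create_snapshot(volume_name: str, snapshot_set_name: str) -> dict:
    """Ask the XMS to snapshot a single volume into a named snapshot set."""
    payload = {"volume-list": [volume_name],
               "snapshot-set-name": snapshot_set_name}
    resp = requests.post(f"{XMS}/api/json/v2/types/snapshots",
                         json=payload, auth=AUTH, verify=False)  # lab use only
    resp.raise_for_status()
    return resp.json()

print(create_snapshot("prod_db_vol", "prod_db_snapset_01"))
```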

Below you can see a demo we recorded for VMworld. It's a little bit rough, but it can give you a good overview of what it can do (thanks to Michael Johnston & Michael Cooney for recording it, you rock!).

VMworld 2016, EMC AppSync & XtremIO Integration with VMware vCenter

What an awesome year for us at EMC XtremIO, so many things going on!

One of the announcements we are making at VMworld is the integration between AppSync and vCenter, but what does it actually mean?

Well, if you are new to Appsync, I suggest you start here:

http://xtremioblog.emc.com/xtremio-appsync-magic-for-application-owners

So, still, what's new?

We are now offering you, the vCenter / vSphere / cloud administrator, the ability to operate AppSync from the vCenter Web UI. Why? Because we acknowledge that every admin is used to working with one tool as the "portal" to his or her world, and instead of forcing you to learn another UI (in our case, the AppSync UI), you can do it all from the vCenter UI.

Apart from repurposing your test/dev environment, which AppSync is known for (utilizing the amazing XtremIO CDM engine), I want to take a step back and focus on one use case that is relevant for EVERY vSphere / XtremIO customer out there: backing up and restoring VMs for free. No, really. You can now take as many snapshots as you want and restore from each one. You can either:

  1. Restore a VM or VMs

  2. Restore a full volume (datastore) and the VMs that were in it

  3. Restore a file from within the VM's c: / d: etc. drive! No agent required!

    This is a very powerful engine, since the majority of restore requests are for data from the last week or so, so you can happily use the XtremIO XVC engine to restore from; easy, powerful and, again, free!

    See the short demo here:


    See a full demo here: