The Evolution Of VDI – Part 1

This Blog Post is based on the work of Chhandomay Mandal, Michael Cooney and myself.

This is the first part of a two-part blog series on the evolution of VDI over the years and how EMC XtremIO works across all of these models. So let’s start.

How many of us remember Terminal Services?

Terminal Services was quite popular from the late nineties through the early 2000s. It provided the ability to host multiple, simultaneous client sessions on a single Windows Server OS instance.

It was a cost-effective, easy-to-deliver, mature technology, reaching north of 80 million users in its heyday.

However, it had limited use cases and suffered from application compatibility issues. The most problematic of all was the fact that application instances were shared. If somebody crashed Office, it was down for everyone on the system. If someone got the OS to panic, then everybody was presented with the blue screen of death.

On the storage side, the hard disk drive (HDD) arrays of the day, with ho-hum performance and limited data services, were sufficient to host Terminal Services.

As server virtualization gained a strong foothold in the data center, desktop virtualization became the next logical candidate. We could decouple the software from its hardware dependencies and run desktops as VMs.

Because data center-grade storage costs an order of magnitude more than desktop storage, we were introduced to the concepts of gold images and differential data to make the storage economics work. Instead of everybody having their own desktop image, all desktops shared the same read-only gold image, and the individual desktop writes destined for the OS got written to each desktop’s unique differential data space.

So, in this model, you need, say, 40 GB for the gold image and 2 GB of differential data per desktop. For a 10,000-desktop deployment, your capacity requirement is roughly 20 TB, which is reasonable.

On the flip side, differential data gets discarded when users log off or desktops reboot. So we solved the capacity problem at the expense of user personalization. These desktops are commonly referred to as non-persistent desktops, as they don’t retain changes made by the user, and are commonly deployed for task worker use cases like call centers.

During the same time, hybrid arrays came to the market that added SSDs to HDD arrays. Hybrid arrays leverage SSDs as a cache or a tier in front of HDDs to provide some performance acceleration. Now, VDI desktops create periodic I/O storms. For example, the gold image gets hammered with read I/O requests when many desktops boot simultaneously. Although hybrid arrays provide some relief from these I/O storms because of the SSDs in their stack, they neither scale for thousands of desktops nor have the agile, efficient data services needed in a modern data center.

Persistent desktop, where you take everything you have in your physical desktop – OS, applications, user settings, data – and just make it a VM, is an intuitive solution.

In this model, users are happy because they retain all their personalization including the applications they installed. Moreover, desktop admins are happy too because all the existing desktop management tools like System Center Configuration Manager, Patch Manager and other agents continue to work exactly as before.

Now, persistent desktops generate higher IOPS per desktop than their non-persistent counterparts, but, most importantly, this VDI model has extremely large capacity needs. Continuing with the same example as before, for 10,000 users at 40 GB per persistent desktop, your capacity need is 400 TB. So the capacity requirement increased 20x, from 20 TB for non-persistent desktops to 400 TB for persistent desktops, for the same number of desktops. A lot of this data is common, as the Windows OS and applications constitute the bulk of it. A highly efficient data reduction technology at the heart of the storage layer is the most critical component for any persistent desktop model to be even remotely economically feasible.
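As a sanity check, the capacity math above can be reproduced in a few lines (illustrative numbers from the example, not a sizing tool):

```python
# Back-of-the-envelope VDI capacity math from the examples above.
GOLD_IMAGE_GB = 40
DIFF_DATA_GB = 2
DESKTOPS = 10_000

# Non-persistent: one shared gold image + per-desktop differential data.
non_persistent_tb = (GOLD_IMAGE_GB + DIFF_DATA_GB * DESKTOPS) / 1000
# Persistent: every desktop carries its own full image.
persistent_tb = GOLD_IMAGE_GB * DESKTOPS / 1000

print(f"non-persistent: ~{non_persistent_tb:.0f} TB")              # ~20 TB
print(f"persistent:     ~{persistent_tb:.0f} TB")                  # ~400 TB
print(f"growth factor:  {persistent_tb / non_persistent_tb:.0f}x") # 20x
```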

On the storage side, around 2012, All-Flash arrays started to come to the market. Some of them provided high performance with zero data services; others provided performance improvements beyond what the hybrid arrays could deliver, along with limited data services including some data reduction. However, as these All-Flash arrays are based on scale-up architectures, their controllers become the bottleneck for performance much earlier than their backend SSDs. Moreover, data reduction services, like data deduplication, are post-process activities on these All-Flash arrays. So a typical scale-up All-Flash array in the market can’t deliver the performance or the high data reduction efficiencies that are critical for a successful persistent desktop deployment at large scale.

In recent months, newer technologies have been emerging on the VDI platform software side that promise to deliver a no-compromise, all-inclusive desktop experience when paired with the right storage platform. We can now deliver desktops at scale that can easily handle graphics-rich applications, seamlessly work across all use cases, and beat physical desktops on both user experience and cost.

Storage is the critical component in delivering the promise of next-gen desktops. A scale-out, truly N-way active-active architecture is needed to deliver the high performance at the consistently low latency required to scale your virtual desktop environment while maintaining a great end-user experience. Inline, all-the-time data reduction technologies are critical to reduce the capacity footprint. XtremIO is the only solution in the market today that can not only satisfy all the requirements of non-persistent and persistent desktops at scale but also deliver on the emerging VDI platform software technologies.

As many of you can attest, XtremIO can run non-persistent desktops very well. It delivers high performance with consistently low latency, thereby enabling you to host a large number of desktops on a single X-Brick, the basic building block of an XtremIO cluster, with two controllers and a DAE that can hold 25 SSDs. You can run storage-intensive operations like desktop refresh and recompose in a non-persistent desktop environment while other active desktops are running, and the array will continue to deliver a high level of user experience.

However, if you are running non-persistent desktops only because of the capacity savings, then XtremIO has good news for you.

The same single X-Brick can host a high number of persistent desktops as well. Your users can have a highly responsive desktop with all their personalization, and your desktop admins can continue to use the same set of desktop administration tools while enjoying XtremIO’s superior storage capacity reduction, which makes the solution very cost-effective.

What makes XtremIO unique for VDI?

1. In a nutshell, XtremIO’s unique content-based metadata engine, coupled with its scale-out architecture, delivers unique value for VDI. You grow an XtremIO cluster by non-disruptively adding X-Bricks, linearly increasing both capacity and performance at the same time.

2. The volumes are always thin-provisioned, and the only writes performed to the disks are for data that is globally unique across the entire cluster.

3. All the data services are inline, all the time. So as the writes come in, the metadata engine looks at the content of the data blocks, performing writes to the SSDs only if the cluster hasn’t seen the data before; for duplicate data, it just acknowledges the writes to the host with the appropriate updates to in-memory metadata, without actually performing any writes to the SSDs. This inline deduplication saves tremendous capacity for persistent desktops without affecting performance at all.

4. Inline compression adds to XtremIO’s data reduction efficiencies.

5. XtremIO has a proprietary flash-based data protection algorithm that offers better-than-RAID-10 performance with RAID-5 capacity savings.

6. Finally, XtremIO’s differentiated agile copy data service helps you provision and deploy desktops at a much faster rate than any other flash-based solution in the market.
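The inline deduplication described in point 3 can be sketched with a toy content-addressed store. This is a drastic simplification for illustration only, not XtremIO’s actual implementation:

```python
import hashlib

class ContentStore:
    """Toy inline-dedup store: writes are keyed by a content fingerprint."""
    def __init__(self):
        self.blocks = {}          # fingerprint -> block data ("the SSDs")
        self.volume_map = {}      # (volume, lba) -> fingerprint (in-memory metadata)
        self.physical_writes = 0

    def write(self, volume, lba, block: bytes):
        fp = hashlib.sha256(block).hexdigest()
        if fp not in self.blocks:          # cluster has never seen this content
            self.blocks[fp] = block
            self.physical_writes += 1      # only globally unique data hits flash
        # duplicate data: just update metadata and acknowledge the host
        self.volume_map[(volume, lba)] = fp

    def read(self, volume, lba) -> bytes:
        return self.blocks[self.volume_map[(volume, lba)]]

store = ContentStore()
os_block = b"windows-system32-block"
for desktop in range(1000):                # 1,000 desktops write the same OS block
    store.write(f"desktop-{desktop}", 0, os_block)
print(store.physical_writes)               # 1 -- the block hit flash exactly once
```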

XtremIO’s architectural advantages translate into very significant savings for our VDI customers. Today we have over 2.5 million virtual desktops running on XtremIO, and our customers typically see 10:1 or more data reduction ratios for persistent desktops with 50% less cost per desktop than traditional storage.

A single X-Brick can host up to 2500 persistent desktops and up to 3500 non-persistent desktops.

I want to add that, beyond storage, XtremIO helps reduce the server infrastructure footprint for VDI as well. As an example, we have had customers who could reduce the RAM allocation from 2 GB/desktop to 1.5 GB/desktop, a 25% reduction in RAM, and XtremIO handled the resulting increase in IOPS (due to more swapping by the OS) at the same latency for the same number of hosted desktops on the array.
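For a sense of scale, here is the RAM arithmetic from that example; the 256 GB host size is a hypothetical figure chosen purely for illustration:

```python
# RAM savings from reducing per-desktop allocation (2 GB -> 1.5 GB),
# scaled across the 10,000-desktop example used earlier in the post.
DESKTOPS = 10_000
before_gb, after_gb = 2.0, 1.5

saved_gb = (before_gb - after_gb) * DESKTOPS        # 5,000 GB of RAM saved
print(f"{saved_gb:.0f} GB saved "
      f"({1 - after_gb / before_gb:.0%} per desktop)")

HOST_RAM_GB = 256                                   # hypothetical host size
hosts_before = before_gb * DESKTOPS / HOST_RAM_GB   # ~78 hosts' worth of RAM
hosts_after = after_gb * DESKTOPS / HOST_RAM_GB     # ~59
print(f"host RAM footprint: {hosts_before:.0f} -> {hosts_after:.0f}")
```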

With XtremIO, Desktop Virtualization has experienced some breakthroughs including:

  • 10:1 reduction in capacity requirements per desktop
  • 50% reduction in storage cost per desktop
  • 25% lower RAM requirements
    • Based on reducing RAM needed per desktop from 2 GB to 1.5 GB
      • XtremIO delivers the performance experience of a 2 GB desktop using only 1.5 GB of RAM
      • Not all I/O performance comes from RAM. Flash is faster than the desktop OS, desktop kernel & hypervisor expect disk to be. Previously, adding desktop RAM solved some I/O problems; this is no longer as necessary. The disk performance now comes directly from the XtremIO array, not virtual desktop “RAM trickery”.
  • 40% reduction in server infrastructure
    • If you need to support all desktop services at scale
    • Example: suspend & resume desktop services for 100% of desktops = a lot of extra storage sitting around idle 99% of the time. With XtremIO, admins can oversubscribe the storage needed for desktop services & provision less XtremIO storage (no need for 1:1 mapping).
  • Each X-Brick can support up to 2,500 persistent virtual desktops and up to 3,500 non-persistent desktops.

Now let’s see how a single X-Brick handles a boot storm of 2,500 VDI VMs!

Next, let’s see how the same single X-Brick handles the load of 2,500 full-clone VMs (with Office installed locally).

Note that I’m using the LoginVSI 4.1 Knowledge Worker workload, which creates additional load across the board (most other published tests still use the “medium” workload, which is lighter on both the CPUs and the storage array).

See the full differences here:


Now let’s take a look at some of the emerging trends in VDI. First, just-in-time desktops.

In a traditional desktop model, applications, user data, profile settings, and the OS are all intermingled. When you deliver persistent desktops using this traditional model, it is old wine in a new bottle, and you miss out on the potential of improving the desktop management workflow itself.

Instead, what if you could virtualize each application, or a set of related applications, separately in its own container? Alongside, each user gets their own container for their specific data & applications. Think of each of these containers as a VMDK file. Then you can put everything together at run time by collecting the appropriate set of containers for a specific user as and when they need access to their desktop. Let’s illustrate this further.

You have a pool of stateless desktops with nothing but the OS. Applications have their own containers; the same goes for each user’s specific data.

Office worker Joe comes in. He gets one of the stateless desktops, Adobe & Office applications get added, Joe’s personal data along with the applications he installed on his desktop get mounted from Joe’s user data volume, and he gets his own customized desktop.

Now let’s consider designer Bob. When he logs in, it is the same process but this time, based on Bob’s profile, the graphics-intensive design applications get added too for Bob’s personalized desktop.

So this combines the best of both the non-persistent and persistent worlds. Users get customizable desktops and apps with a consistent experience across sessions. IT can easily update and deliver apps, secure the data, and enjoy the better economics.
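The assembly flow for Joe and Bob above can be modeled roughly like this. It is a hypothetical sketch; the container file names are made up, and real products like App Volumes work differently under the hood:

```python
# Hypothetical model of just-in-time desktop assembly: a stateless base OS
# plus application and user containers (think: one VMDK each) attached at login.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    name: str
    app_containers: list    # app volumes this user's profile entitles them to
    user_volume: str        # personal data & user-installed apps

@dataclass
class Desktop:
    base_os: str = "stateless-os-base"
    attached: list = field(default_factory=list)

def assemble_desktop(profile: UserProfile) -> Desktop:
    d = Desktop()
    d.attached.extend(profile.app_containers)  # mount shared app containers
    d.attached.append(profile.user_volume)     # mount personal writable volume
    return d

joe = UserProfile("Joe", ["office.vmdk", "adobe.vmdk"], "joe-userdata.vmdk")
bob = UserProfile("Bob", ["office.vmdk", "cad-suite.vmdk"], "bob-userdata.vmdk")
print(assemble_desktop(joe).attached)
# ['office.vmdk', 'adobe.vmdk', 'joe-userdata.vmdk']
```

Bob’s login runs the exact same code path; only his profile entitles him to the graphics-intensive containers as well.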

The chart shows the application response times, as measured by LoginVSI, the industry-standard real-world VDI load generator tool, for 2,500 desktops running on a single 10 TB X-Brick. First, LoginVSI generated the VDI load for 2,500 persistent (full clone) desktops. The application response times, as the number of desktops is increased, are shown in blue. Then the 2,500 persistent (full clone) desktops were converted into layered desktops. In this specific example, VMware App Volumes was used to create the application containers and deliver the applications. When the same 2,500 layered desktops were run, the application response times showed similar patterns but were roughly 15% higher throughout the test.

The takeaways here are:

  1. Layered desktops create more IOPS per desktop than their persistent full-clone counterparts. So a platform that can deliver high IOPS with consistently low latency is more important than ever for this type of next-gen desktop.
  2. The I/O profiles change from the typical VDI I/O patterns that we are used to. For example, application container volumes will see high read I/Os, as the application volumes are effectively read-only. However, there will be variations in read I/O intensity; e.g. the Microsoft Office container volume will likely see higher read I/Os than a department-specific specialty application, simply because there are many more users of Office. On the other hand, containers for user-specific information will see a high write I/O profile. Unless the storage array balances the load uniformly and automatically across all controllers and all SSDs in a true N-way active-active scale-out architecture with radically simple array management, SAN administrators will be hard pressed to optimize the storage platform to deliver the best user experience with these next-gen desktops.
  3. Finally, you can see there were some high utilization spikes with persistent (full clone) desktops, in blue; the spikes with layered desktops (in purple) were smaller in magnitude but greater in number, persisting fairly consistently throughout the tests. This shows that the underlying storage platform needs to deliver high performance with consistently low latencies irrespective of the VDI I/O load for these next-gen desktops.

High IOPS is a critical need for any VDI deployment. Here is an example of what XtremIO can deliver for VDI. In this example, the XMS dashboard is showing that the X-Brick is delivering nearly 125K IOPS when 1,000 persistent (full-clone) desktops are booted simultaneously.

High throughput is another critical need for any VDI deployment. The time needed to clone many desktops from a template depends on the throughput the array can deliver. Here is an example of what XtremIO can do for VDI. In this example, the XMS dashboard is showing the X-Brick delivering nearly 45 GB/s of throughput while 1,000 persistent (full clone) desktops are cloned simultaneously from a template. 45 GB/s is an insanely high throughput; the array is able to deliver it at the target side thanks to XtremIO’s in-memory metadata architecture and its unique in-memory metadata-based implementation of VMware’s VAAI Copy Offload API.
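Why a metadata-based copy offload can sustain numbers like that can be illustrated with a toy model: cloning a volume copies only pointers, never data. Again, this is a simplification for illustration, not XtremIO’s actual VAAI XCOPY implementation:

```python
# Sketch: cloning a volume in a content-addressed store copies only the
# LBA -> fingerprint map (in-memory metadata), not the underlying blocks.
import hashlib

blocks = {}        # fingerprint -> data, i.e. what actually lives on flash
volumes = {}       # volume name -> {lba: fingerprint}

def write(vol, lba, data: bytes):
    fp = hashlib.sha256(data).hexdigest()
    blocks.setdefault(fp, data)
    volumes.setdefault(vol, {})[lba] = fp

def clone(src, dst):
    volumes[dst] = dict(volumes[src])   # pure in-memory metadata copy

for lba in range(100):                  # a 100-block "template" desktop
    write("template", lba, f"block-{lba}".encode())

for i in range(1000):                   # 1,000 full clones from the template
    clone("template", f"desktop-{i}")

print(len(blocks))                      # still 100: no data was duplicated
```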

Consistently low latency is another important criterion for the array, to deliver a highly responsive, better-than-physical user experience to all the users, all the time, at any scale. Here is another example of what XtremIO can deliver for VDI. In this example, the XMS dashboard is showing the X-Brick delivering sub-millisecond latency for 1,000 persistent (full clone) desktops at steady state.

Below you can see a test I ran with Horizon 6 + App Volumes for 2,500 users running on a single 10 TB X-Brick!

In part 2, I will discuss Desktop as a Service (DaaS) and virtualizing GPUs.

VMworld 2015: Please Vote for the EMC XtremIO Sessions




It’s that time of the year again when EMC World is behind us and VMworld is around the corner.

Here at XtremIO, we have put forward a lot of submissions this year, and I would appreciate your voting for the ones you like.

You can vote here:

5431 Scalable Database as a Service for the All-Flash Software-Defined Datacenter with VMware Integrated OpenStack (VIO) and Trove



Redefine your virtualized environment or Infrastructure-as-a-Service cloud into a scalable, highly elastic database-as-a-service platform. Deliver a fully managed and scalable relational or NoSQL database-as-a-service by easily integrating Trove with VIO and EMC XtremIO. Large enterprise customers that have grown in an unstructured way require a common interface to their myriad internal cloud platforms. OpenStack provides a set of common APIs through which developers can programmatically deploy and manage virtual workloads. VMware Integrated OpenStack provides an orchestrated workflow for the deployment and configuration of the various capability areas within OpenStack. Trove is OpenStack’s implementation of a scalable and reliable cloud database service, providing functionality for both relational and non-relational database engines. We will present best practices for deploying the Cassandra NoSQL engine on this platform, as well as testing results demonstrating key metrics of interest to customers and service providers alike. Trove automates the configuration and provisioning of new database servers, including deployment, configuration, patching, backups, restores, and monitoring. XtremIO all-flash storage rounds out the capabilities by providing superior performance, a rich set of copy services, and the ability to fail over and replicate.

5720 Architecting All-Flash Storage for Private Cloud- a Clean-Slate Approach to Realize Unprecedented Agility, Efficiency and Performance


Flash is rapidly gaining popularity as viable high-performance data center storage. But not all flash storage platforms are created equal. Many vendors replace spinning media with flash drives in a legacy storage architecture. This is an easy but sub-optimal approach. Flash is fundamentally different from spinning media and needs a fresh approach to take full advantage of its capabilities. In this session we will explore some of the critical architectural tenets for getting the most out of flash storage. You will learn how scale-out architecture, in-memory metadata, smart data reduction, and inline always-on data services can help transform your infrastructure and ensure consistent high I/O performance for all workloads, lower your infrastructure footprint and TCO, and enable agile service delivery.

5731 Reimagine Mixed Workload Consolidation- Break the Traditional Infrastructure Barriers

Virtualizing all of your applications, mission critical and non-mission critical alike, has been an unattainable goal for many IT organizations. But with the right storage architecture and hypervisor integrations, now you can. You can realize exceptional infrastructure and application agility by consolidating workloads of mixed characteristics and varying criticality. You can scale your environment non-disruptively and cost-efficiently to meet the needs of enterprises small and large. You can safely consolidate production, test, development, QA, and analytics environments on a common platform without compromising production performance and with only marginal incremental capacity consumption. We will explore real-life examples demonstrating how many IT organizations have broken the traditional storage barriers. You will learn about the key architectural elements that can ensure consistently predictable I/O performance to help you meet and beat the strictest performance SLAs while lowering the total cost and complexity of your infrastructure and application lifecycle.

5765 Transforming Agile Software DevOps with Flash

More than ever, agile development and software DevOps drive critical top-line business impact for customers across a broad range of industries. Learn how XtremIO is fundamentally enabling the next generation of agile DevOps, with customer use cases, to:
• Improve developer productivity and overall product quality by providing full copies of production applications & datasets to all developers with zero-overhead XtremIO in-memory copy services
• Dramatically accelerate performance across the entire DevOps ecosystem, enabling thousands of developers in real time
• Deliver consistent, predictable performance for developers, automated build systems, and automated QA via sub-millisecond all-flash DevOps storage
• Accelerate adoption of continual experimentation and learning through rapid, repeated prototyping

5740 Best Practices for Running Virtualized Workloads on XtremIO

Great, your customer has just purchased a shiny new all-flash array (AFA). Now what? In this session we will learn the reasons behind one of the quickest revolutions the storage industry has seen in the last 20 years, and how XtremIO can enable breakthrough capabilities for your server virtualization and private cloud deployments. We will go through specific use case issues and how to overcome them. With lots of real-world tips and tricks, you can consolidate your most demanding virtualization workloads successfully and gracefully.

5747 VMware’s Internal Private Cloud – Choosing the Right Storage to Meet User Demands

VMware has embarked on a private cloud initiative over the last 5 years. From a humble infrastructure hosting a few VMs, it has now grown to host more than 200k VMs, supported 24x7, year-round, across multiple geographies. A very diverse set of business units consumes a smorgasbord of applications & data from the platform. The internal private cloud initiative at VMware is at the core of providing an agile datacenter to meet the various demanding business needs of our end users. This session will explore the lessons learned in selecting the right storage infrastructure. It will dive into the role cutting-edge technologies like EMC XtremIO (an All-Flash Array) play in meeting the demanding requirements. Hands-on experts who build, operate, and manage the private cloud will share the best practices, tricks, and techniques they have perfected over the last 5 years. Learn from the architects the storage optimizations, fine-tuning, and configurations that you will need to scale your private cloud.

5755 Revisiting EUC Storage Design in 2015 – Perspectives from VMware EUC Office of the CTO

The storage landscape of EUC has changed a lot in the last 5 years. The latest generation of EUC software, along with the advent of purpose-built all-flash arrays, software-defined storage, and the latest generations of hybrid arrays, has helped mainstream EUC/VDI technology in IT.

4560 Re-Imagining your Virtualized Workloads with FLASH

Moore’s law changed everything: compute and network kept getting faster and faster, and the only kid left behind was storage. Storage was always considered the last frontier for virtualized workloads; as a customer, you had to adapt to its limitations instead of it answering your business demands. No more! In this session you will hear why customers decided to go all-in on an all-flash array, what their motivations and even concerns were in doing so, and how it all changed for them once they did.

4916 Virtualized Oracle on All-Flash: A Customer’s Perspective on Database Performance and Operations

In the virtualized infrastructure, the new technology wave is all-flash arrays. But today, all administrators (virtual, storage, DBA) need to know how changing an essential part of the virtual infrastructure impacts critical applications like Oracle databases. This joint customer and XtremIO presentation acts as a practical guide to using all-flash storage in a virtualized infrastructure. The emphasis will be on the value realized by a customer using all-flash, together with findings from third-party test reports by Principled Technologies. You will learn how all-flash storage is changing performance-intensive applications like virtualized databases.

5652 Best Practices & Considerations for the Sizing and Deployment of 3D Graphics Intensive Horizon View 6 Desktops in an All Flash Datacenter

An all-flash datacenter hosting a VMware vSphere infrastructure with NVIDIA GRID can be used to deliver a robust and wide-ranging set of graphics-accelerated capabilities to end users. NVIDIA GRID technology offers maximal choice in terms of both user density and GPU resource allocation. GRID vGPU can be used to handle the vast majority of both vSGA and vDGA use cases, allowing the infrastructure administrator to support a wide spectrum of virtualized resource options on a much-reduced hardware set. To ensure the best end-user experience, and to maintain an effective and efficient deployment, the infrastructure administrator must be knowledgeable about the various graphics-accelerated user profiles and their corresponding application and vGPU requirements. Information detailing the server-side considerations involved in the sizing exercise is generally available and, so far, somewhat understood. The corresponding storage-side requirements for the various configuration options and use cases are not. This presentation will address both server and storage sizing considerations, aiming to offer a more complete knowledge set for successfully sizing graphics-accelerated VMware Horizon 6 DaaS projects.

Configuring XtremIO & RecoverPoint for VMware SRM

One piece of software that VMware released back in 2008, and that was always my favorite after vSphere itself, is SRM (Site Recovery Manager). Back then, I installed it at many customer sites and even presented a session about it at VMworld 2009, covering one of the first production installations at a customer site. Back then, configuring the SRA (the “communicator/translator” between vCenter and the storage array) was a pretty difficult task, and I’m so glad things have changed so drastically over the years. I’m also very happy to see that as of SRM 5.8 (and of course 6.0), it has been fully merged into the vCenter web interface, as seen below.


Another new feature: there is no need to manage IP address changes at an individual level anymore (though those options remain if needed). These can now be mapped from one subnet to another and applied at the Site > Network Mapping level. There is also the option of using both, e.g. subnet mapping for the subnet and individual mapping for VMs within that subnet.


Also, as part of VMware’s global initiative to not force you, the customer, to use MS-SQL or Oracle DB, you can now use the embedded vPostgres database option that is built into the SRM installer. It is an additional option beyond the currently available databases and is supported, though not tested, up to the SRM maximums. There isn’t a way to convert or migrate an existing database to vPostgres.


SRM Architecture

• Site Recovery Manager is designed for virtual-to-virtual recovery for the VMware vSphere environment.

• Built for the two-site scenario, but can protect bi-directionally. It can also protect multiple production sites and recover them into a single, “shared recovery site”.

• Site Recovery Manager integrates with third-party storage-based replication (also known as array-based replication) to move data to the remote site; our focus in this post is the RecoverPoint/XtremIO SRA.


Site Recovery Manager is designed for the scenario that we see our customers most commonly implementing for disaster recovery—two datacenters. Site Recovery Manager supports both bi-directional failover as well as failover in a single direction. In addition, there is also support for a “shared recovery site”, allowing customers to failover multiple protected sites into a single, shared recovery site.

The key elements that make up a Site Recovery Manager deployment:

-VMware vSphere: Site Recovery Manager is designed for virtual-to-virtual disaster recovery. It works with many versions of ESX and ESXi (consult product documentation for more details). Site Recovery Manager also requires that you have a vCenter Server management server at each site; these two vCenter Servers are independent, each managing its own site, but Site Recovery Manager makes them aware of the virtual machines that they will need to recover if a disaster occurs.

-Site Recovery Manager service: the Site Recovery Manager service is the disaster recovery brain of the deployment and takes care of managing, updating, and executing disaster recovery plans. Site Recovery Manager ties in very tightly with vCenter Server — in fact, Site Recovery Manager is managed via a vCenter Server plug-in.

-Storage: Site Recovery Manager requires iSCSI, Fibre Channel, or NFS storage that supports replication at the block level. In our case, we support FC/iSCSI.

-Storage-based (also called array-based) replication: Site Recovery Manager relies on storage vendors’ array-based replication to get the important data from the protected site to the recovery site. Site Recovery Manager communicates with the replication via storage replication adapters that the storage vendor creates and certifies for Site Recovery Manager. VMware is working with a broad range of storage partners to ensure that support for Site Recovery Manager will be available regardless of what storage a customer chooses, so expect the list to continue to grow.

-vSphere Replication has no such restrictions on storage type or adapters.


User Interface


Users can manage both the protected and recovery SRM instances from a single UI, obviating the need to open multiple clients or run particular management tasks from a specific location.

This is completely independent of vCenter Linked Mode. Linked Mode is still helpful, because it will automatically migrate SRM licenses from site to site as VMs are migrated or failed over, and also for standard non-SRM-related infrastructure management.

SRM 5.8/6.0 is fully supported with the vSphere Web Client and is no longer available for use with the vSphere Client.

In the case of RecoverPoint/XtremIO, there is a special UI to cover two very specific features. SRM itself can only fail over to the last point in time, which isn’t that helpful, especially in our case; you see, the value of RecoverPoint/XtremIO is the ability to go to ANY point in time. So we can leverage the vCenter plugin to select the point in time you want to fail over to, and then SRM “thinks” it’s the last point in time (see a screenshot below).


The other special feature is the ability to give you, the vSphere admin or the storage admin, the insight to see which VMs are protected, which VMs aren’t, and which VMs are partially protected. This is done with the unique integration of the RecoverPoint GUI into vCenter (as seen below).


Array Replication


If using storage-based replication, integration of the arrays’ vendor-specific replication and protection engines is fundamental. This integration is provided via code written by the array vendors themselves. The SRA for RecoverPoint that supports XtremIO is 2.0.2.

SRAs have advanced for SRM 5, improving the integration with array-replication software for functionality like reprotect/replication reversal and failback.


SRA information is enhanced in SRM 5: it shows not only information about paired remote devices, datastores, and relevant protection groups, but also an arrow indicating the direction of replication for each device.

This gives very quick visibility into what is being protected and to where. This is particularly important during reprotect and failback operations.

Installing & Configuring the XtremIO / RecoverPoint SRA

The installation itself is pretty trivial: download the SRA from the VMware SRM web site and install the executable on both SRM servers (or, in my lab's case, on the vCenter servers, which also act as the SRM servers). Once the SRA has been installed, you will have to restart the SRM service.


Once everything is installed, you will have to configure the SRA using the SRM web interface. Configuration is about as simple as it gets: point the SRA to the RecoverPoint virtual management IP and feed it the username/password used to manage the RPA cluster. You will then need to repeat this at the recovery site as well.


Lastly, in order for the SRM SRA to control RecoverPoint, you need to change the management of the consistency group (CG) to SRM. This allows RecoverPoint to be managed by an “external application,” which in our case is VMware SRM.

Storage Layout


Let's take a look at the example above. At the protected site I have a couple of datastores, each of which can contain one VM or more. Each datastore (a LUN at the storage level) can be part of a protection group; however, as the purple example shows, a VM CAN span multiple datastores, and in that case the protection group must span those datastores (LUNs) as well.

Then, on the right side (my recovery side), I define the recovery plan, which is really a logical container for the protection groups I put inside of it.


By ensuring virtual machines are stored in a logical fashion on disk according to their protection group, administrators can minimize “shuffling” of VMs to fit optimal layouts for SRM.

VMDKs of a similar priority, or that will belong to the same protection group should be stored in the same datastores to minimize the amount of replication required to create efficient protection groups and thereby recovery plans.

Ensuring that your storage layout and VM placement has been organized with this in mind will mitigate many issues.

Workflows and Use Cases

Planned Migration

Allows for data synchronization as part of the process. Will stop on errors and allow you to resolve them before continuing. Since it shuts down the virtual machines being migrated, application-consistent VMs are recovered on the recovery side!

DR Event

Allows for data synchronization as part of the process. Will not stop on errors. If the protected site is available, then the virtual machines being migrated will be application consistent at the recovery side; if the protected site is not available, the consistency state will be whatever was designed into the solution.

Test Recovery

Allows for data synchronization as part of the process. Supports a recovery that uses a different network, and uses a clone or snapshot for the test.


Reprotect

Can be run following a successful recovery. Reverses the direction of replication and protects virtual machines back to the original site. This enables a failback to recover the environment back to the primary site.


Cleanup

This is done following a test recovery. It removes the snapshot or clone created during the test, powers off and deletes test VMs, and recreates the shadow VM indicating protection of the relevant VM from the primary site. The cleanup creates its own history report. Following a cleanup, the relevant plan is once again ready to be run.

Use Cases

Unplanned Failover

Recover from an unexpected site failure, whether full or partial. This is the most critical but least frequent use case: unexpected site failures do not happen often, but when they do, fast recovery is critical to the business.

Preventive Failover

Anticipate potential datacenter outages, for example an approaching hurricane, floods, or a forced evacuation, and initiate a preventive failover for a smooth migration. VMs at the protected site are shut down gracefully, and SRM's “planned migration” capability is leveraged to ensure no data loss.

Planned Migration

This is the most frequent SRM use case: planned datacenter maintenance and global load balancing. It ensures smooth site migrations: test to minimize risk, execute partial failovers, and use SRM planned migration to minimize data loss. Automated failback enables bi-directional migrations.

Running a Test Recovery Plan


SRM offers two UI buttons to run test recoveries, or a test may alternatively be initiated through a call to the API. Note the “Synchronize storage” option, which ensures very current copies of the VMs for the test.


This is a test recovery ready for users to test. Cleanup would occur after testing is complete by simply pressing the “cleanup” button. The virtual machines run from the cloned / snapshot environment at the recovery site, and replication and protection of the protected environment is not impacted during tests.

Following a cleanup, there are no running virtual machines associated with the recovery plan that was tested, and the associated snapshots/clones created by the test plan have been eliminated.

Shadow VMs have been recreated on the recovery site to indicate those VMs that are protected on the primary site and will be instantiated on the recovery site when a recovery plan is run.

Running a Recovery Plan


Two different UI buttons can start recoveries, or alternatively a recovery may be executed by an API call.

A recovery plan can be run as either a Planned Migration or a DR event. Note that both types of execution will attempt to synchronize storage early in the recovery. This synchronization ensures application consistency: it executes as an early step in the recovery plan, after an attempt to shut down the protected VMs, so that the data is recent and synchronized once the VMs are quiescent.

Planned Migration


The difference between a Planned Migration and a Disaster Recovery is that a Planned Migration will automatically stop on errors and allow the administrator to fix the problem. A Planned Migration is designed to ensure maximum consistency of data and availability of the environment. A DR scenario is instead designed to return the environment to operation as rapidly as possible, regardless of errors.
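To make the distinction concrete, here is a tiny sketch I put together (illustrative only, not SRM code): the same plan, run in two modes, with Planned Migration halting on the first error and DR logging the error and pushing on.

```python
# Toy model of a recovery plan's two run modes (NOT actual SRM code).
# Planned Migration stops on the first error so the admin can fix it;
# a DR run continues regardless of errors to minimize recovery time.

def run_recovery_plan(steps, mode):
    """steps: list of (name, callable) pairs; mode: 'planned' or 'dr'."""
    completed, errors = [], []
    for name, step in steps:
        try:
            step()
            completed.append(name)
        except Exception as exc:
            errors.append((name, str(exc)))
            if mode == "planned":
                # Planned Migration: halt and let the administrator intervene.
                break
            # DR: keep going; RTO matters more than a clean run.
    return completed, errors

def ok():
    pass

def fail():
    raise RuntimeError("storage sync failed")

steps = [("sync storage", ok), ("shut down protected VMs", fail),
         ("promote replica LUNs", ok), ("power on recovery VMs", ok)]

planned = run_recovery_plan(steps, "planned")   # stops after the failure
disaster = run_recovery_plan(steps, "dr")       # runs every remaining step
```

The step names here are made up; the point is only the error-handling policy difference between the two modes.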

Disaster Recovery


If a Recovery Plan is run as a disaster recovery, the goal is an aggressive Recovery Time Objective, and SRM will not halt the plan from continuing regardless of any errors that might be encountered.

Running a Recovery Plan – Storage Layer


Notice that during a recovery plan execution, replication is interrupted. The mirror image, or replication destination datastore, is now promoted and made read/write. The virtual machines in it are registered in vCenter in place of the shadow VM placeholders.

Failback is a process of “Reverse Recovery”


Failback combines recovery plans and reprotect.

“Failback” is the capability of running a recovery plan *after* an environment has been migrated or failed-over to a recovery site, to return the environment back to its starting site.

After a failover has occurred, the environment can be reprotected back to the original environment once it is again safe. Following this reprotect the recovery plan can be run once more, moving the environment back to its initial primary site.

Next it is imperative to reprotect once more, to ensure the environment is once again protected and ready to failover.

With SRM 5 VMware introduced the “Reprotect” and failback workflows that allowed storage replication to be automatically reversed, protection of VMs to be automatically configured from the “failed over” site back to the “primary site” and thereby allowing a failover to be run that moved the environment back to the original site.

After running a *planned failover only*, the SRM user can now reprotect back to the primary environment:

Planned failover shuts down production VMs at the protected site cleanly, and disables their use via GUI. This ensures the VM is a static object and not powered on or running, which is why we have the requirement for planned migration to fully automate the process.

Once the reprotect is complete a failback is simply the process of running the recovery plan that was used to failover initially.

OK, if you have read to this point, you probably want to see it all in action. Please see a demo I made showing the integration of VMware SRM and XtremIO/RecoverPoint.

EMCWorld 2015–Best Practices for running virtualized workloads on XtremIO



As part of this year's EMC World, I ran a best-practices session for XtremIO & vSphere. The session was full of useful data, an amazing customer testimonial (VMware), and some really bad jokes.

This year I had the pleasure of co-hosting with a special guest, Shane from VMware, who covered some of the use cases VMware has been leveraging XtremIO for.

I got many requests to post the deck online, and luckily a friend also recorded the session, so here it is:



XtremIO 4.0–XMS / Management Improvements

The fourth release of XtremIO is all about scalability, management, and maturity. This post covers the management aspects of the array.


We now have many customers with multiple XtremIO clusters. They loved the XMS concept but asked if one XMS could manage more than one array, and version 4.0 delivers exactly that: a single XMS can now manage multiple clusters, and much more.


Up to 8 clusters are supported in this release. The managed arrays will have to be non-disruptively upgraded (NDU) to version 4.0.

Multi-Cluster MANAGEMENT


Here's how it looks: you can simply click the array you want to manage or view.


…or you can see it all in the main console: aggregated performance metrics across your datacenter, data reduction, and more. You can view these either aggregated or per cluster, which gives you a good overview of your XtremIO clusters from a centralized management console.


Tags Management


Think about Gmail, which uses tags for easy searching; we have something similar in mind.

Tagging can align storage management with business needs.


For example, you can tag volumes as “production” and “test.” Applications, clusters, consistency groups, and snapshot sets can be tagged too; many entities support tags.

• Flexible tagging – create tags for any object

• An object can have multiple tags

• Filter objects using tags for reports or operations

• Model hierarchy in tags – if a folder-like hierarchy is required
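To show what this model buys you, here is a small illustrative sketch (object names are made up, and this is an in-memory toy, not the XMS implementation): any object can carry multiple tags, and tags act as filters across object types.

```python
# Toy model of the tagging concept described above (NOT XMS code).
# Objects of different types carry sets of tags; a filter keeps only
# the objects that have every requested tag.

objects = {
    "vol-sql-01":  {"type": "volume",            "tags": {"production", "sql"}},
    "vol-sql-dev": {"type": "volume",            "tags": {"test", "sql"}},
    "cg-exchange": {"type": "consistency-group", "tags": {"production"}},
    "snap-sql-01": {"type": "snapshot-set",      "tags": {"sql"}},
}

def filter_by_tags(objects, *tags):
    """Return the names of objects carrying every requested tag."""
    wanted = set(tags)
    return sorted(name for name, obj in objects.items()
                  if wanted <= obj["tags"])   # subset test: all tags present

production_sql = filter_by_tags(objects, "production", "sql")
all_sql = filter_by_tags(objects, "sql")
```

Filtering by both “production” and “sql” returns only `vol-sql-01`, while “sql” alone also picks up the dev volume and the snapshot set — the same idea as building a report scoped by tags.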


Making Use of Tags


You can then, of course, filter your results based on the tags you created.





Managing tags


Tags are managed per object type.

However, in the GUI you can create a tag for multiple object types at once.

A tag has the following properties:


Object Type

Hierarchy (default is root)


Available to users with the read-only role and up

Associating tags to objects





XtremIO 4.0 Reporting

Two years historical data for Cluster & Objects

GUI/CLI/REST interfaces to access historical data

Multi-cluster reporting

Security – Reports for a single user or publicly available for all users

Tagging support

Report templates


• Pre-loaded reports for common queries

• Use templates as the basis for new reports

What can you do with reports?


Provide both real-time & historical data

Stores up to 2 years of data with variable granularity

Supports reporting at different levels: cluster, volumes, initiator groups (IGs), groups of objects, etc.

Aggregate objects based on business needs (using tags)

Multiple modes of data access

– Online GUI, printed report, png/csv files

– APIs are available to consume data programmatically
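As a sketch of that programmatic path, here is roughly how a historical-data query could be built. Note that the endpoint path, parameter names, and granularity values below are my assumptions for illustration — check the XtremIO RESTful API guide for your XMS version before relying on them.

```python
# Hedged sketch: building a historical-performance query against the XMS
# REST interface. Endpoint path and parameter names are ASSUMPTIONS for
# illustration; verify them against the XtremIO RESTful API guide.
from urllib.parse import urlencode

def history_url(xms_host, entity, from_time, to_time, granularity="one_hour"):
    """Build a (hypothetical) historical-performance query URL."""
    params = urlencode({
        "entity": entity,            # e.g. "Cluster" or "Volume"
        "from-time": from_time,      # e.g. "2015-06-01 00:00:00"
        "to-time": to_time,
        "granularity": granularity,  # avg/max/min aggregated per time bucket
    })
    return f"https://{xms_host}/api/json/v2/types/performance?{params}"

url = history_url("xms.example.com", "Cluster",
                  "2015-06-01 00:00:00", "2015-06-02 00:00:00")
# The JSON response could then be fetched with an authenticated HTTPS GET,
# e.g. requests.get(url, auth=(user, password)).
```

The same data is what feeds the GUI reports; the REST path just lets you pull it into your own monitoring or capacity-planning tooling.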

 Key Benefits
Provides better visibility of cluster performance and capacity usage over time

Use cases

– Better planning and monitoring

– Trend analysis of performance and capacity

– SLA tracking

– ROI and TCO performance

– Forensics/analytics of performance issues

– Analysis of a single application workload or consolidated workloads

What's New In Version 4.0?

• What did we have in version 3.0?

– Real-time monitoring data of Clusters & Objects

– GUI based real-time monitor generation

– Limited historical data (7 days of cluster-level performance data)

• What's new in version 4.0?

– 2 years historical data for Cluster & Objects

– GUI/CLI/REST interfaces to access historical data

– Cluster-level capacity tracking

– Multi-cluster reporting

– Improved security – Enable reports for a user or make it publicly available for all cluster users

Reports data: Avg, Max, Min time aggregations


Average, Maximum and Minimum historical data values are calculated for the different time aggregation units

Default is Average

Reports Definition


Public/Private reports


Public/private reports

– Private reports are visible only to the user that created the report

– Public reports are visible to all users; editable only to the user who created the report

– By default all reports are Private

– Report names do not have to be unique

Reports time definition


Time definitions:

– Real-time (real-time monitoring data display; like in version 3.0)

– Last hour (60 min); Last day (24 hours); Last week (7 days); Last year (365 days)

– Custom time – any From date-time to any To date-time


XtremIO 4.0 – XDP Enhancements–Replication (part2)

Remote Protection Challenge


Let's talk about remote replication challenges. Traditionally, you would need different tools to protect different applications and/or arrays. How do you replicate to more than one site? How can you achieve application consistency? And lastly, how can you utilize your WAN link in the best possible way?

XTREMIO native replication With RecoverPoint

Think about the following question:


I'm so happy to announce the native integration of XtremIO with RecoverPoint! This is not your grandfather's replication solution. Why do we need it?


How can you take array storage controllers that are so busy serving primary I/O and hammer them with replication as well? Well, you don't! You offload that task to an entity that can scale out for replication purposes. Think about it this way: you don't fly abroad every time you want to send a package; you DO prepare the package and “offload” it to UPS, FedEx, etc. This is EXACTLY how this replication works. We didn't just reuse the existing RecoverPoint technology; our goal was to make it better, and as someone who has tested this quite heavily, here's how it works:


1. A host connected to XtremIO issues a write request to a volume that was defined as part of a replicated consistency group (CG). Because of the CG, the array (XtremIO) will take a snapshot of that volume (the snapshot interval is explained later on).


2. RecoverPoint will then deliver this snapshot to a remote RecoverPoint Appliance (RPA) or RPAs (scale-out for specific volumes as well!), which will then deliver it to the remote array. The remote array can of course be XtremIO, BUT it can be ANY array that RecoverPoint supports (heterogeneous support; we don't lock you in to buying XtremIO for the remote site as well).

The remote array will receive the I/O from the RPA based on the technology it supports: if it's XtremIO, it will be a snapshot; if it's a VNX/VMAX, it can be either a splitter or a snapshot.


3. RecoverPoint will continue sending these XtremIO based snapshots to the remote site until the replication is finished


4. Our first replication is done, great! Now let's assume our RPO is 60 seconds, so RecoverPoint will ask the array for a SCSI diff (the deltas between what it sent and what the array now has). This is where the beauty of the XtremIO snapshot technology kicks in: for us, snapshots are just like any other volume, and we dedupe and compress them globally!


5. In our case, we send the diff answer to the RPAs, which then build the bitmap of changes and store it in their “journals.” This is why, in this integration, the RecoverPoint journal can be very small: it stores nothing but metadata. This is a radical shift from the traditional splitter-based RecoverPoint, where the journal was used to actually store the data diffs as well. Remember, we let each “entity” do what it does best: we (XtremIO) create and store the snapshots, and RecoverPoint ships the data and stores the journal. Scale-out for primary I/O, and scale-out for replication!


6. The snapshots are then delivered to the remote site and stored as snapshots. This process repeats based on the RPO settings, so let's dive into these as well:


In this version of the RecoverPoint/XtremIO integration we have two best-of-breed RPO modes:

· Periodic – sets the minimum time between cycles; the minimum is 60 seconds, the maximum is 1 day.

· Continuous – no wait time; provides the best RPO in the market for an all-flash array, guaranteed 60 seconds!
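To illustrate the periodic cycle described above, here is a toy model I wrote (mine, not RecoverPoint code): each cycle, only the blocks that changed since the last shipped snapshot cross the WAN.

```python
# Toy model of the snapshot-shipping cycle (NOT RecoverPoint code):
# snapshots are {block_address: content} maps, and each replication
# cycle ships only the blocks that differ from the previous snapshot.

def diff(prev_snapshot, curr_snapshot):
    """Return the block addresses whose contents changed between snapshots."""
    return {addr for addr, data in curr_snapshot.items()
            if prev_snapshot.get(addr) != data}

cycle1 = {0: "a", 1: "b", 2: "c"}
cycle2 = {0: "a", 1: "B", 2: "c", 3: "d"}   # block 1 rewritten, block 3 new

to_ship = diff(cycle1, cycle2)   # only blocks 1 and 3 cross the WAN
```

The real diff is of course computed by the array from its snapshot metadata (and the payload is deduped and compressed), but this is the shape of the bandwidth savings: unchanged blocks are never resent.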


Lastly, management of the entire solution is very easy. Below you can see the entire workflow in a long video I made that shows many of the “advanced” parameters. One of the things I like about the RecoverPoint solution is that it has both “easy” and “advanced” modes: an entire replication can be configured in 10 seconds, yet you can also go in and turn knobs to your liking. I really can't wait for customers to start using it!

XtremIO 4.0 – XDP Enhancements (part1)

One of the best parts of this release is the effort we put toward anything related to data protection. XtremIO is becoming a general, all-purpose flash array, and as such customers demand nothing but the best when it comes to protecting their data. Let's examine the features:

COPY Data Management



To improve ease of use and productivity around snapshots, XtremIO introduces:

•Read only snapshots

•Consistency management (CG)

•Snapshot set

•Local protection using scheduler

•Application aware (Native VSS)

•Refresh and recovery


Snapshots can now be

Read-Write (default)

*NEW* Read-Only snapshots

Read-only snapshots are immutable and cannot be changed

Main use case: protection from logical data corruption

New entity: Consistency Group



Manage snapshots on a set of volumes

Members: volumes or snapshots

A snapshot on a CG creates cross-consistent point-in-time snapshots of all members

The result of a snapshot on a CG is a Snapshot Set that points to the created snapshots

Snapshot Sets are managed by the CG


New entity: Snapshot Set


An entity that points to related snapshots

Correlates a set of snapshots to a single snapshot operation

Created as a result of a snapshot on:

Consistency Group

Snapshot Set

Set of volumes




•Auto-creation of snapshots based on a schedule

–Supports definition of jobs (creation, deletion, suspension, modification)

–Each job defines the scheduling of:

Snapshot creation

•Exact time

•Or Interval/Frequency

Retention policy (either age or number)

–Supports: CGs, Snapshots Sets, or a Volume
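As an illustration of the two retention policies a scheduler job can use, here is my own sketch (not the XMS implementation): keep snapshots by count or by age, pruning the rest.

```python
# Illustrative sketch (NOT the XMS implementation) of scheduler retention:
# a job keeps either the newest N snapshots or those younger than a cutoff.

def prune(snapshots, now, max_count=None, max_age_seconds=None):
    """snapshots: list of creation timestamps. Returns the ones to keep."""
    keep = sorted(snapshots, reverse=True)            # newest first
    if max_age_seconds is not None:
        keep = [t for t in keep if now - t <= max_age_seconds]
    if max_count is not None:
        keep = keep[:max_count]
    return keep

snaps = [100, 200, 300, 400, 500]                     # creation times
by_count = prune(snaps, now=600, max_count=3)         # keep newest three
by_age = prune(snaps, now=600, max_age_seconds=250)   # keep snaps < 250s old
```

The timestamps here are arbitrary integers for clarity; in practice a job would run on an interval or at exact times, as listed above.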



•Restore operation

–Enables recovery of data from snapshots

–Allows restore from snapshots that are direct descendants

–Provides an immediate Recovery Time Objective (RTO)

•Refresh operation

–Enables refresh/reassign of any entity (within a VSG) to any other entity

–Supports refresh to any direction

»Prod to snap

»Snap to snap

»Snap to prod

–Instantaneous refresh

•Immediate refresh (no copy of data or metadata)

•Keeps the SCSI identity of the refreshed entities

•No change in the SCSI serial ID (NAA)

•No re-mapping

•No OS side device rescan

•Supports volume resize (if the size was changed)

•Only requires an unmount before the refresh and a remount after





•Create-snapshot command


•Create Snapshot and Reassign


VSS Hardware Provider

Native VSS support for application aware snapshots


•Native support for the Volume Shadow Copy Service (VSS) using the XtremIO provider

–Uses the Microsoft framework to pause (quiesce) applications

–Enables the creation of application-aware snapshots

–Support for Windows Server 2012 R2/2008

–Supports all VSS writers

•Simple MSI installation

•Simple configuration


•In the control panel find the XtremIO VSS Panel

•Configure the XMS address and the user

Writers and Requestors


App-Consistent Copy Services



–Full scheduling and automation


–Full orchestration of XtremIO snapshots

–Empowers application administrators


–Policy-based protection

–Database repurposing

–Integration with Microsoft SQL Server/Exchange

–Integration with Oracle

–Integration with VMware datastores

Here's a demo I recorded showing some of what we discussed above.

XtremIO–XDP Redefined

XtremIO : Integrating With VMware Storage IO Control (SIOC) , vSphere 5.5 and 6.0


A common question that I get is how to apply QoS at the VM level if and when it's needed. This can come up for many reasons:

  • You are a service provider and you want to cap your clients.
  • You are a customer and you want to prevent a VM from going wild (“the noisy neighbor”).
  • You have a snapshot volume/VM to which you don't want to allocate the same amount of storage resources as you do for your “production” VM.

Luckily, VMware has made a lot of investment on this front in both vSphere 5.5 and 6.0.

In vSphere 5.5, the default scheduler changed to the mClock scheduler, which you can read more about here:

And in vSphere 6.0, a per-VM “reservation” was added; the “reservation” part still needs to be configured in the VM's .vmx file.

I made a video showing all of this. Enjoy, and I hope you'll use this feature; it's really cool!

vSphere 6 guest unmap support (or the case for sparse se, part 2)


If you are a follower of my blog, you know how close the topic of space reclamation is to my heart. In fact, almost exactly a year ago I wrote this post : which I highly recommend you read if you want a full understanding of space reclamation, UNMAP, and where VMware stands in regard to it.

Times are changing, and I'm so happy that in one year VMware has made some changes to UNMAP as well. It's only a start, but it's a good one.

So what's so different?

Well, in vSphere 6 a guest VM can now issue an UNMAP and the ESXi host will not block it. Is this a new thing?

The config option, EnableBlockDelete, is not new; it existed prior to vSphere 6.0 (however, the description of that config option has changed in vSphere to reflect the actual implementation). Prior to vSphere 6.0, Windows guests did not issue automatic UNMAPs because vSphere did not support the B2 mode page, which Windows 8 and above require. vSphere 6.0 implements the B2 mode page.

As long as the guest OS follows the alignment requirements reported to it in READ CAPACITY (16) and the B0 mode page, it will work.

Caveats / Limitations

It's not perfect; there are several cases where this will not work:

  • UNMAP of blocks deleted within VMFS itself (i.e., VMDK deletion, swap file deletion, etc.); for this you can use the EMC VSI plugin.
  • Small unmap granularity: with the existing VMFS version, the minimum unmap granularity is 1 MB.
  • No automatic UNMAP support for Windows versions earlier than Windows 8 / Server 2012 (and no Linux support).
  • The device at the VM level (the drive) must be configured as “thin,” not “eager zeroed thick” or “lazy zeroed thick.”
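The granularity caveat is worth a concrete illustration. Here is a small sketch of mine (not ESXi code) showing why a free that doesn't cover whole 1 MB granules reclaims nothing:

```python
# Illustration of the 1 MB granularity caveat (NOT ESXi code): an UNMAP
# is only effective for whole granules, so a freed range is trimmed to
# granule boundaries before anything is actually reclaimed.

GRANULE = 1 * 1024 * 1024  # 1 MB minimum unmap granularity on VMFS

def reclaimable_bytes(offset, length, granule=GRANULE):
    """Bytes actually reclaimable from freeing [offset, offset + length)."""
    first = -(-offset // granule) * granule          # round start UP
    last = (offset + length) // granule * granule    # round end DOWN
    return max(0, last - first)

aligned = reclaimable_bytes(0, 3 * GRANULE)      # fully aligned: all 3 MB
misaligned = reclaimable_bytes(512, GRANULE)     # 1 MB, but straddling: none
```

So a 1 MB file freed at a misaligned offset reclaims zero bytes, which is one reason small-file deletions inside a guest may not shrink the thin VMDK as much as expected.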

Still with me? You just upgraded to vSphere 6 and you are running modern guest OSs like Windows Server 2012 and Windows 8? Great! See the video I made below.

EMC World 2015 – The XtremIO Sessions

Wow, I can't believe EMC World 2015 is just around the corner. This year will be super big for us at XtremIO; there are so many awesome sessions, and who knows, maybe some new product announcements. For your convenience I have put all the sessions related to us, plus the labs, below. You can schedule them here:

Transform Your Real Time Analytics With XtremIO & Splunk In this session, we will discuss how you can transform your real-time analytics with Splunk using EMC storage solutions. We will provide technical validation and customer proof points along with a reference architecture using the XtremIO and Isilon products. We will discuss the benefits of using flash storage to improve performance by placing hot/warm buckets on XtremIO, while optimizing cost by using Isilon for cold/frozen buckets.

What’s New With XtremIO For 2015 This session discusses what’s new with XtremIO for 2015 as announced in our 4.0 release. We also discuss the coolest new capabilities and how our beta customers leveraged them, including replication, online cluster expansion, sophisticated new snapshot management, new X-Brick options, expanded cluster options, and new portfolio integrations.

XtremIO: Transforming Your Workloads, Enabling The Agile Data Center This session provides an update on XtremIO and how it is transforming customers’ workloads for agility and tremendous TCO and business benefits. With customer examples and best practices, we will discuss the use cases and benefits for consolidating mixed workloads across database, analytics, business apps, and hybrid cloud. And we will explore customer examples of how these transformed workflows for real-time analytics and agile dev/ops. Specific best practices include converged infrastructure VSPEX & Vblocks, in-memory copy services, data reduction, virtualization, and QoS.

Transforming Agile Software DevOps With XtremIO More than ever, agile development and software DevOps drive critical top-line business impact for customers across a broad range of industries. Learn how XtremIO is fundamentally enabling the next generation of agile DevOps, with customer use cases, to • Improve developer productivity and overall product quality by providing full copies of production applications & datasets to all developers with zero-overhead XtremIO in-memory copy services • Dramatically accelerate performance across the entire DevOps ecosystem, enabling 1000s of developers in real time • Deliver consistent, predictable performance for developers, automated build systems, and automated QA via sub-millisecond all-flash DevOps storage • Accelerate the adoption of continual experimentation and learning through rapid repetition of prototyping.

Transformational Technology Inside XtremIO This session provides an overview to the EMC XtremIO all-flash scale-out array and its design objectives. The architecture is discussed and compared to other flash arrays in the market with the goal of helping the audience understand the unique requirements of building an all-flash array, the proper methodology for testing all-flash arrays, and architectural differentiation among flash array features that affect endurance, performance, and consistency across the most demanding mixed workload consolidations.

Data Protection For EMC XtremIO With EMC RecoverPoint (Hands-on Lab) XtremIO is the only scale-out storage array designed from the ground up for flash. In this lab, see how EMC is bringing business continuity and data protection to XtremIO through EMC RecoverPoint: users will get an overview of the RecoverPoint 4.1.2 Unisphere interface, learn about RecoverPoint integration with XtremIO, and practice protecting applications running on XtremIO arrays.

XtremIO In The Wild: Insights, Data Reduction, Performance & Operational Best Practices As the market’s #1 all-flash array, XtremIO has been deployed to almost 2,000 customers to date. This session looks through some of the most interesting insights from that install base to help discern the future of storage and all-flash application workloads. We will spotlight general and specific results for data reduction, performance, and operational best practices.

Transforming End-User Computing With XtremIO: DaaS & New Use Cases For Graphics & Beyond Learn how XtremIO is transforming the End User Computing landscape by offering a full range of desktops – non-persistent/persistent, high-performance, and NEW graphics-intensive – with consistent and predictable performance at a low $/desktop and total cost of ownership. Going beyond VDI, XtremIO is enabling newer EUC use cases like Desktop-as-a-Service where best of both worlds meet – uncompromising experience for the end-users and radically simple deployment model with breakthrough costs for the IT organizations. Customers are sharing their EUC transformational journey with XtremIO giving you the opportunity to apply the lessons learned in your EUC environments.    

XtremIO For SAP: Game Changer For SAP Landscapes Learn how EMC is helping customers increase SAP performance by 70% and reduce TCO dramatically using XtremIO All-Flash Arrays. This customer panel teaches you how to move your SAP applications from traditional infrastructure to all flash arrays at incredible speed, while consolidating multiple copies for test and pre-production landscapes onto one XtremIO platform.

Business Continuity For XtremIO All Flash Array The session discusses the range of XtremIO high availability / business continuity solutions that enable XtremIO customers to protect mission-critical workloads. EMC offers…

Brocade Communication Systems, Inc.: Redefining Storage Connectivity For 3rd Platform From Isilon To XtremIO EMC and Brocade are redefining storage connectivity for Fibre Channel and IP storage to enable the journey to the 3rd platform. Whether it is an XtremIO all-flash array with FC SANs or a big data Hadoop application with Isilon scale-out NAS, a Brocade storage fabric with Fabric Vision is required.

XtremIO Native Replication With RecoverPoint The session provides a deep dive of the new native replication for XtremIO. It articulates the innovative unique replication technique that leverages the power of XtremIO’s in-memory snapshot architecture to perform replication with breakthrough efficiency and RPOs. We will discuss the different design considerations when applying replication to all flash workloads and how XtremIO and RecoverPoint successfully implement those considerations. We will show how customers can achieve best in class replication while keeping XtremIO’s consistent high performance. The session includes a demonstration of the solution.  

XtremIO For Microsoft Workloads This session explores how the transformational capabilities of XtremIO can benefit your SQL Server, Hyper-V and Exchange deployments. We also explore consolidation and agility across SQL Server environments enabled by zero cost writeable snapshots and the game-changing benefits of deduplication and compression within Exchange and Hyper-V VDI deployments.  

XtremIO v3.0 GUI & CLI Simulator (Hands-on Lab) Use this simulator as an introduction for a look and feel of the XtremIO XMS GUI Dashboard and CLI. You will be able to navigate through the dashboard as if it is a real environment, including provisioning storage, managing performance, and creating monitors.

Top 10 Tips & Tricks To Rock Your XtremIO World Learn the top 10 things to get the most out of your XtremIO system. Our panel of technical experts will share best practices of configuration, operation and management, and will  provide insights into your specific questions during a live ‘Ask-The-Expert’ session.

Accelerating SQL Server With XtremIO This session provides a deep dive into XtremIO all-flash technologies for SQL Server. We begin by looking at the XtremIO features that are unique for SQL Server deployments, going over some test results we are seeing in the lab with different feature implementations. Then we switch gears into a demo and dive into detailed use case studies of where you could benefit from deploying your SQL Servers on XtremIO. We wrap up the session by discussing some common pitfalls and deployment considerations, and provide some best practices guidance.

No Storage Tuning & Complete Operational Simplicity For Databases Running XtremIO In a 2014 IT Resource Strategies survey, DBAs and storage teams reported spending nearly 70% of their time maintaining their database environments versus focusing on application integration. This session reviews detailed solutions and case studies to showcase the value of EMC XtremIO all-flash arrays in dramatically reducing the steps needed for database infrastructure performance tuning, provisioning, and replication management.

SAS Analytics: Transformed With XtremIO & Isilon This session provides an overview of how your mixed analytics SAS workloads can be transformed on the XtremIO and Isilon scale-out platforms. Whether you’re working with one of the three primary SAS file systems or SAS Grid, performance and capacity will scale linearly, significantly reducing application latency and removing the complexity of storage tuning from the equation. Learn about best practices from fellow customers and how they deliver new levels of SAS business value.

Intel: Intel Technologies To Optimize Storage System Data Movement & Operations Faster response times for storage applications and deployments (such as big data and hybrid cloud) are becoming a critical requirement. Intel’s hardware and software technologies can help product developers and IT managers build the infrastructure needed to successfully deploy services. Intel’s hardware technologies, including CPU, networking, and SSD, along with its software technologies, provide faster data movement through storage systems such as EMC VMAX, VPLEX, VNX, XtremIO, and Isilon, and faster data operations for storage functions such as compression, deduplication, encryption, and RAID in these storage systems.

RecoverPoint: What’s New In 2015 Protection and mobility are essential in the datacenter, across datacenters, and to the cloud. Learn about new ways introduced in 2015 to protect your applications and data with RecoverPoint and RecoverPoint for Virtual Machines – EMC’s software-only, hypervisor-based VM-level disaster recovery solution. New enhancements and integrations include replication for VSPEX Blue, XtremIO, ScaleIO, and EMC Enterprise Hybrid Cloud, and protection of virtual environments.

FC SAN: Should I Stay Or Should I Go… Refreshing your infrastructure? Wondering “should I converge my network now and go with an IP-based Storage Area Network, or should I wait just a little longer to see what happens in the industry”? Have you started the process of migrating to an IP-based SAN and uncovered some of the IP SAN pitfalls? Wondering why you can’t connect your vmknics to your NSX Logical Switches? Thinking about how to support the connectivity needs of ScaleIO, VMware Virtual SAN or XtremIO? If you answered yes to any of these questions, join us for a candid discussion focused on storage networks and their intersections with Ethernet, IP, and Network Virtualization. We’ll provide some insight into the trends we’re seeing, and you can feel free to share your thoughts and concerns or just take the opportunity to vent.

Best Practices For Running Virtualized Workloads On XtremIO Great, your customer has just purchased a shiny new all-flash array (AFA) – now what? In this session, learn the reasons behind one of the quickest revolutions the storage industry has seen in the last 10 years. We will cover the different architectures and use cases, walk through specific use case pain points and how to overcome them, and lastly put an emphasis on the specific gotchas that AFAs present and how to overcome them, so your customer won’t have to.

VPLEX: Advanced Configuration & Design – Performance, Design, Failure Modes & More This session covers advanced aspects of using VPLEX, such as performance optimization, scalability, configuration, and failure modes, and will provide customers with a deeper understanding of its architecture to design for optimal behavior. This session also discusses in detail best practices for architecting and deploying VPLEX with all-flash XtremIO environments for performance and availability, and with virtual environments for maximum flexibility.

ViPR Controller: Automate Delivery Of Storage Services (Hands-on Lab) ViPR Controller is storage automation software which transforms multi-vendor storage into a simple, extensible and open platform.  During this lab you will learn how to provision XtremIO and Hitachi storage, VPLEX volumes and create snapshots with ViPR Controller.  You will also learn how to provision ViPR Controller managed storage via the plug-in to VMware vRealize Orchestrator.


Global Partner Summit – Flash Breakout for Partners Did you know XtremIO is the fastest growing product in EMC’s history and is running away with the market share in this emerging space? In this session, led by Mike Wing, Global VP, XtremIO, you’ll learn about this transformational story, how to position yourself as a leader, and how to help your customers Redefine Possible with the power of Flash.

Tuesday, May 5, 3:40 PM – 4:30 PM

EMC IT: Leveraging The Power Of Flash In The Data Center How did EMC IT get breakthrough shared-storage benefits for application acceleration, consolidation, and agility with a reduction in cost, plus scale-out architecture, consistently low latency and IOPS, deduplication, and compression? This session provides an overview, discussion, use cases, benefits, and lessons learned from EMC IT’s implementation of flash storage with XtremIO.

Evaluating All-Flash Arrays AFAs are evaluated differently from hybrid arrays. Learn the latest tools and techniques from XtremIO engineers and from each other. Share your challenges and successes to help fellow attendees with their testing. Bring your ideas to advance the state of the art and improve available tools. This is a no-holds-barred session aimed at getting a clear picture of AFA capabilities.

Enterprise Hybrid Cloud & Database-as-a-Service Best Practices Learn best practices for Enterprise Hybrid Cloud deployments with Tier 0 storage services built on scale-out all-flash arrays. This session discusses key use cases and benefits such as Database-as-a-Service leveraging pre-defined service catalog capabilities; empowering VM Admins, Application Admins, and Infrastructure Admins with self-service; streamlining EHC management; eliminating silos; accelerating workflows; and handling cloud bursting. Customer examples show how XtremIO can take your hybrid cloud to the next level for all workloads, with breakthrough consolidation, data reduction, workload agility, and elastic capacity.

OpenStack Enablement With EMC This session provides an overview of the different data storage services in OpenStack, then details how various EMC products (VMAX, VNX, XtremIO, Isilon, ScaleIO, etc.) integrate with OpenStack, covering best practices, use cases, and the unique value-added benefits that can be derived from each.