VMware Horizon 7 Instant Clones Technology on XtremIO

One of the cool new technologies that VMware has come up with recently is Instant Cloning, a.k.a. "forking".

It was actually talked about way back in 2014 and 2015, but it only saw the light of day in Horizon 7, as part of the Just-in-Time desktop technology for VDI VMs.

You can read about it here:

http://blogs.vmware.com/euc/2016/02/horizon-7-view-instant-clone-technology-linked-clone-just-in-time-desktop.html

As someone who works a lot with VDI and helps our XtremIO customers leverage the array in the best way they can, I wanted to examine the workload of instant-clone VMs on a single XtremIO X-Brick. For this I provisioned 2,500 VDI VMs running Windows 10 (which in itself is very new to VDI; I am only now starting to see customers deploying it) and Office 2016. I didn't take the easy route, as Windows 10 plus Office 2016 add a significant load on the infrastructure (compute and storage), but since I want to be able to help with future deployments on XtremIO, I chose these two.

In order to generate the workload on the 2,500 VMs, I used LoginVSI, which is the industry standard tool for simulating VDI workloads.

The results can be seen below.

VPLEX GeoSynchrony 5.5 SP2 Is Out

 

I'm so happy this release is finally out. During the 3 years since XtremIO went GA (time passed by so fast..), we got many customers who wanted to leverage XtremIO with VPLEX. There are many reasons to do so, but I would say my favorite one is an active/active virtualized data center based on VPLEX. As such, customers wanted to have the full feature set of XtremIO, such as UNMAP, and the ease of management, aka VIAS. Happy to report these (and more) are now in!

VAAI UNMAP ENHANCEMENTS

Thin provisioning awareness for mirrors
– Enables host-based storage reclamation for local and distributed mirrors using UNMAP
– Enables UNMAP during Thin -> Thin migration
– Restricted to XtremIO and VMware ESXi (same as SP1)

NEW/CHANGED CUSTOMER USE CASES

1. Reclaiming storage from the host using UNMAP to mirrored volumes
2. Creating thin-provisioned mirrored virtual volumes
3. Migrating thin-provisioned storage
4. Noticing soft threshold crossing
5. Handling out-of-space condition on thin mirrors

2. CREATING THIN MIRRORED VOLUMES
Device must be thin-capable
– 1:1 device geometry
– local and distributed RAID-1 mirrors are now supported
– underlying storage volumes must all come from XtremIO
• Virtual volume must still be thin-enabled
NEW: The virtual-volume provision --thin command can now be
used to create thin-enabled virtual volumes:
– provisions XtremIO storage volumes
– claims these storage volumes in VPLEX
– creates extents and devices atop the storage volumes
– creates virtual volumes and makes them thin-enabled

3. MIGRATING THIN-PROVISIONED STORAGE

VPLEX uses a temporary mirror during migration
– the thin-enabled setting from the source volume is transferred to the target volume when the migration is committed
• UNMAP is supported during and after migration when:
– the source and target are both thin-capable (XtremIO), and
– the source is thin-enabled

4. NOTICING SOFT THRESHOLD CROSSING

• If an XtremIO array crosses its configured resource
utilization soft threshold, it raises a Unit Attention:
– THIN PROVISIONING SOFT THRESHOLD REACHED (38/07h)
• In SP2, VPLEX notices this Unit Attention and emits a call-home (limited to every 24 hours):
– scsi/170 WARNING: Thin Provisioning Soft Threshold
reached – vol <volName>
• This Unit Attention is not propagated to hosts on the
VPLEX front-end

5. HANDLING OUT-OF-SPACE ON MIRRORS
When an out-of-space error is seen on the last healthy mirror
leg of a RAID1, VPLEX propagates the failure to the host
• When an out-of-space error is seen on a mirror leg while there
are other healthy mirror legs, VPLEX:
– Marks the affected mirror leg dead
– Prevents the affected mirror leg from being automatically
resurrected
– Decreases RAID-1 redundancy, but host is unaffected
• After resolving resource exhaustion on XtremIO, admin must
manually resurrect the affected devices in VPLEX

USABILITY: DEVICE RESURRECTION

• A new convenience command can be used to trigger
resurrection of dead storage volumes:
device resurrect-dead-storage-volumes --devices []

USABILITY: THIN-ENABLED PROPERTY OF VV

The virtual volume listing contains a thin-enabled
property that was a true/false boolean in SP1
• In SP2, this property has three possible states:
– enabled
– disabled (thin-capable but not enabled)
– unavailable (not thin-capable)

USABILITY: UNMAP MONITORS

Two new statistics:
– unmap-ops
– unmap-avg-lat
• Each applicable to the following targets:
– host-init (I)
– fe-prt (T)
– fe-lu (L)
– fe-director (in the default monitors)

USABILITY: REVERT THIN-REBUILD DEFAULT

In SP1, thin-capable storage volumes were
automatically configured to use thin-rebuild
semantics
– i.e. skip the write to a rebuild target if the source and
target both hold zero data
• In SP2, this behavior has been reverted
– for XtremIO, normal rebuild gives better performance

VPLEX DATA PATH – UNMAP HANDLING

In SP1, UNMAP was converted into a WRITE SAME 16
(identical semantics for XtremIO)
• In SP2 the VPLEX I/O is tagged internally as an
UNMAP, and we initiate an UNMAP to the XtremIO

VPLEX / XtremIO, VIAS SUPPORT

VIAS now supports registering an XMS as an AMP managing multiple XtremIO arrays.
– No longer have to create one REST AMP per array as in VPLEX 5.5 and 5.5.1.
• During the REST AMP registration, it is no longer required to select the array. VIAS will automatically discover all the arrays.
• Under the covers, VIAS will use either the XtremIO REST API v1 or v2, based on the XMS version, for provisioning.
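For reference, the two API generations differ mainly in the endpoint prefix. The sketch below is illustrative only; the XMS hostname, credentials and cluster name are placeholders, and the volumes object is just one example type:

# XtremIO REST API v1:
curl -k -u admin:<password> "https://<xms>/api/json/types/volumes"

# XtremIO REST API v2 (XMS 4.0 and later, multi-cluster aware; note the /v2/
# prefix and the optional cluster-name filter):
curl -k -u admin:<password> "https://<xms>/api/json/v2/types/volumes?cluster-name=<cluster>"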

Select the AMP to see the Managed Arrays on the right side.


Once the new REST AMP has been created,
managing virtual volumes with VIAS stays the same.
– There is no impact anywhere else in either the VPLEX GUI or CLI.

GUI Support for Thin Virtual Volume

  1. Overview

    A new attribute "Thin Enabled" was added to the virtual volume type.
    It can be seen in the 'Virtual Volumes' view and the property dialog.
    – The value can be unavailable, enabled or disabled.
    – Currently, only virtual volumes created from XtremIO storage volumes can be Thin Enabled.
    – A new attribute "Thin Capable" (Yes/No value) was added to the following object types:
    – Storage Volume
    – Device
    – Extent


    2. Changes in the GUI
    1. Views


    Storage Volumes Views



2. Property dialogs



3. Dialogs

A "Create thin virtual volumes" option was added to the "Create
Virtual Volumes" dialog. If the virtual volumes can't be created as thin,
the operation will still succeed, but they will be thick instead.


4. Wizards

In the Provision from Pools/Storage Volumes wizards, if all selected arrays are XtremIO, the user will have the option to create the virtual volumes as thin enabled, with the addition of this new page in the wizard.


Below you can see a recorded demo of the new UNMAP functionality

EMC AppSync 2.2 SP3 Is Out, Another Must Upgrade For XtremIO Customers

Hi,

We have just GA'd a new service pack for EMC AppSync. As you probably already know, Copy Data Management (CDM) is a key component of the XtremIO architecture; it is so big and so different from anything else out there that many customers are using XtremIO just for that.

But what good are a CDM architecture and its features if you don't have integrated software that links the storage array technology to the applications it needs to protect and repurpose? This is where AppSync comes in. If you are new to AppSync, I encourage you to start reading about it here first:

https://itzikr.wordpress.com/2014/12/19/protecting-your-vms-with-emc-appsync-xtremio-snapshots/

https://itzikr.wordpress.com/2015/10/01/emc-appsync-2-2-sp2-is-out/

EMC AppSync offers a simple, SLA-driven, self-service approach for protecting, restoring, and cloning critical Microsoft and Oracle applications and VMware environments. After
defining service plans (such as Gold, Silver, and Bronze), application owners can protect, restore, and clone production data quickly with item-level granularity by using the
underlying EMC replication technologies. AppSync also provides an application protection monitoring service that generates alerts when the SLAs are not met.
AppSync supports the following applications and storage arrays:
Applications — Oracle, Microsoft SQL Server, Microsoft Exchange, and VMware VMFS and NFS datastores, file systems, and Oracle application for NFS.
Storage — VMAX, VMAX 3, VNX (Block and File), VNXe, XtremIO, and ViPR Controller
Replication Technologies—VNX Advanced Snapshots, VNXe Unified Snapshot, SRDF, TimeFinder, SnapVX, RecoverPoint, XtremIO Snapshot, and ViPR Snapshot

OK, here's what's new in 2.2 SP3:


Service pack full install.
Until AppSync 2.2.2, a service pack was always an upgrade install. However, AppSync 2.2.3 supports a full install.
Unmount callout script for AIX file systems.
CLI enhancements include:
– Repurposing refresh
– Mount option for mounting all file system copies that are protected together (include_all_copies=true)
– Expire option to remove a copy which has multiple associated copies
– Unmount option to specify the latest or oldest mounted copy
XtremIO specific fixes.
If you are on XtremIO 4.0.2, it is recommended that you upgrade to AppSync 2.2.3 because it includes critical XtremIO specific fixes.
Improved support for Linux MPIO user friendly names.
Supports VMAX Hudson HYPERMAX OS: 5977.809.784 with SMI-S 8.2.
Supports VMAX Indus HYPERMAX OS: 5977.691.684 with SMI-S 8.1.2.
Supports VMAX All Flash Arrays – 450F and 850F Models.
Supports RecoverPoint with SRM flag set in RecoverPoint Consistency Groups.
Supports RecoverPoint 4.4.
Supports RedHat Enterprise Linux, Oracle Linux, and CentOS 7.0, 7.1, and 7.2.
Supports IBM AIX 7.2.
Supports VERITAS VxVM 7.0, VCS 7.0, and VxFS 6.2, 7.0

Fixed issues (in 2.2 SP3)

URM00105205: Fixed issues with SUDO user permissions that were leading to host deployment failure.

URM00104744: Resolves the Mail Recipient field becoming null after rediscovery of Exchange host.

URM00104571: Added support to show Phase pits and corresponding events for the time period as configured by the user.

URM00105382: Addressed an issue of mount failure when multiple initiators from different ESX servers are placed into the same initiator group for XtremIO.

URM00105041: Fixes the issue during create copy of exchange differential copy in VMAX V3 storage arrays.

URM00104870: This fixes timeout issue while creating remote copies of VNX file.

URM00105329: Addressed an issue where 1st gen copy of RAC database cannot be mounted if redo logs in separate ASM disk group.

URM00105066: Addresses an issue of RP bookmark copy not getting deleted from AppSync, even when the bookmark gets deleted from RPA.

URM00105450: Fixes an optimistic lock exception when expiring VNX snapshots.

URM00105360: Fixes the issue of being unable to select the "data only" and "log only" options in the Exchange database restore wizard.

URM00104609: Provided a fix to avoid indefinite wait by AppSync server for a response from Array.

URM00104799: Fixes a problem of AppSync Host Plugin hotfix hang on non-English machines.

URM00105077: Fixed an issue with SUDO user that leads to append extra sudo before the command execution.

URM00104906: Resolved timeout issue during RP bookmark creation.

URM00105258: Rectified a problem of Virtual Machine being detected as physical host.

URM00105278: Added a fix that will remove extra blank lines from the command output of powermt display.

URM00105281: Fixed an issue of Oracle 12c agent installation prevents discovery of hosts.

URM00105342: Fixed the mapping issue in case of Oracle DB created with UDEV rules.

URM00105629: Fix provided to validate RecoverPoint bookmarks for a CG before restore operation proceeds.

URM00105464: Fixes a device cleanup issue after unmounting a copy on AIX machine.

URM00105759: Fixed timeout issue while running fsck at mount time.

URM00105546: Fixed the discovery issue during mount.

URM00105501: Fixed PowerPath cleanup during unmount and enhanced CLI to mount all filesystem copies that are protected together.

URM00105607: Special characters in XtremIO 3.0 for folder name is handled for expire.

URM00105798: Addressed the unexpected Error while trying to delete an RPA.

URM00105538: Addressed performance issue with Oracle ASM mount with recovery.


 

VSI 6.8 Is Here, Here’s what’s new

Hi,

I'm very happy to announce we have just released Virtual Storage Integrator (VSI) 6.8!

If you have been living under a rock for the past 6 years, this is our vCenter (web) plugin that you can use to manage all of the EMC arrays. In the context of XtremIO, there are so many things you can do with it that I'm using it on a daily basis to manage my vSphere lab. Oh, and it's free, which is always good.

If you want an overview of what the plugin does, you can start by reading here:

https://itzikr.wordpress.com/2015/07/17/virtual-storage-integrator-6-6-is-almost-here/

https://itzikr.wordpress.com/2015/09/21/vsi-6-6-3-is-here-grab-it-while-its-hot/

So, what's new in 6.8?

Yep, the number 1 request was to support VSI in a vCenter linked mode configuration, that is now supported.

Multiple vCenters Support – Overall Scope

Multiple vCenters Support – VSI 6.8 Scope

All features related to XtremIO


Multiple vCenters Support – Preconditions

vCenters are configured in linked mode.
Deploy the VSI plugin to every single vCenter in the linked mode group.


Quality Improvements

Viewing XtremIO-based Datastore/RDM Disk Properties

ŸSymptom
Take several minutes to retrieve XtremIO-based datastore/RDM disk properties.
ŸImprovement
Optimize the algorithm to quickly match the underlying registered XtremIO array for datastore/RDM disk
Apply batch API/multiple threads to retrieve volumes/snapshots from XtremIO array.

Space Reclamation gets stalled

ŸSymptom
space reclamation task seen as ongoing in the vCenter tasks.
ŸImprovement
this has now been resolved.

VMware SRM Pairing fails

ŸSymptom
Upon trying to pair the SRM servers, you get a “token” error
ŸImprovement
this has now been resolved.

EMC World 2016 – The XtremIO Sessions

Wow, I can’t believe another year has passed since the last EMC World, time is flying by folks and that is a fact!

Here at XtremIO we are very busy trying to build a solid agenda for you, our customers, partners and SEs, so you can leverage the event to come and hear about different topics, talk to the engineers and see how it all comes together.

Below you can see all the XtremIO sessions we have published so far and their dates. Please note that the session dates can still change. Looking forward to seeing you ALL there!

Registration for the event can be done from this URL:

http://www.emcworld.com/registration.htm

My session is the one highlighted in yellow.

Each session entry below lists the topic title, the level (Beginner / Intermediate / Advanced), the two time slots (Session Day #1 and Session Day #2), the duration, and the abstract (60 words).

2016 All-Flash State of the Union: What’s New with XtremIO and How Customers Are Using It

Beginner

Monday 8:30 – 9:30

Wed. 1:30 – 2:30

60 min.

This session provides an update on the latest XtremIO capabilities and how it is transforming customers’ workloads for agility, tremendous TCO and business benefits. With customer examples via real time data, we will discuss the use cases and benefits for consolidating mixed workloads across database, analytics, business apps.

Best Practices for Running Virtualized Workloads & Containers in the All-Flash Data Center

Intermediate

Monday 4:30 – 5:30

Wed 1:30 – 2:30

60 min.

Great, your customer has just purchased a shiny new all-flash array (AFA), now what? In this session we will learn the reasons for one of the quickest revolutions that the storage industry has seen in the last 20 years, and how XtremIO can enable breakthrough capabilities for your server virtualization and private cloud deployments. We will go through specific use case issues and how to overcome them. With lots of real-world tips and tricks, you can consolidate your most demanding virtualization workloads successfully and gracefully.

Building the Ultimate Database as a Service with XtremIO

Intermediate

Tuesday 3:00 – 4:00

Wed Noon – 1:00

60 min.

Database as a Service creates exciting new possibilities for developers, testers, business analysts and application owners. However, a successful deployment requires careful planning around database standardization and service automation, and an infrastructure that can safely consolidate multiple production and non-production workloads. This session will provide step-by-step guidance on getting your DBaaS project deployed on EMC XtremIO, the ideal platform for massive database consolidation.

Business Continuity, Disaster Recovery, and Data Protection for the All-Flash Data Center

Intermediate

Monday 3:00 – 4:00

Thurs 1:00 – 2:00

60 min.

Everybody knows that Flash is fast, but do you know the best way to protect your workload on a media that is capable of servicing more than 1M IOPS with tight RPO and RTO? This session will provide details of all the business continuity and disaster recovery solutions available for XtremIO customers. It will also include customer examples and recommend the best approaches for different scenarios. This session will cover the integration of XtremIO with ProtectPoint/Data Domain, VPLEX, RecoverPoint, AppSync and more.

Broken Promises, Buyer Beware: Special considerations for evaluating AFAs

Advanced

Tuesday Noon – 1:00

Thurs 11:30 – 12:30

60 min.

Capacity & Performance are key factors in any storage purchase decision. But special consideration must be paid to them during any evaluation. In this session we’ll share best practices to be followed during evaluating an AFA to ensure a hiccup free production deployment.

Deployment Best Practices for Consolidating SQL Server and Integrated Copy Data Management

Advanced

Monday 4:30 – 5:30

Wed. 8:30 – 9:30

60 min.

Examine SQL Server behavior on XtremIO’s all-flash array vs. traditional storage, explore use cases, see demos of XtremIO’s unique integrated copy data management (iCDM) stack which increases DBA productivity, and accelerates SQL Server database application lifecycle management. See why traditional SQL Server best practices can be significantly simplified with the deployment of XtremIO. We will focus on areas like storage deployment, queue depth, multi-pathing, number of LUNs, database file layout, tempdb placement, and more!

Learn from real-life experts on how to deploy Big Data Analytics on an All-Flash Array implementation

Intermediate

Monday Noon – 1:00

Wed. Noon – 1:00

60 min.

This session provides an overview of the different architectural options available for customers to streamline their big data applications on the XtremIO All-Flash Array. Learn from customers about the tradeoffs and benefits of the different options. Understand how the different architectures can help customers scale, protect and simplify the storage infrastructure required to support their analytics workload. The presentation will discuss practical implications along with proven recommendations that storage professionals can walk away with.

Desktop Virtualization Best Practices: A Customer Perspective

Intermediate

Monday 8:30 – 9:30

Tues. 8:30 – 9:30

60 min.

Health Network Laboratories (HNL), a leading diagnostic test lab, leveraged a best-of-breed technology approach to produce stellar results moving its nearly 1,000 pathologists and scientists to VDI. Architected for a 10-year growth plan, the new VDI solution uses Citrix XenDesktop for brokering, EMC XtremIO for high performing, no-compromise desktops, Dell Wyse thin clients for access from anywhere, Imprivata for one-click single sign-on, Unidesk for application layering, and active-active data center with EMC VPLEX for business continuity. In this session, HNL CIO and his team will share what they learned in their journey to desktop virtualization nirvana.

Introducing XtremIO Integrated Copy Data Management and How It will transform Your Infrastructure, Your Operations, and Your Business Process Agility

Beginner

Monday 1:30 – 2:30

Wed. 3:00 – 4:00

60 min.

Everybody knows that all-flash arrays are ideal for running database workloads – this is because flash is fast. But a database application is much more than just the production instance. Considerations in storage planning need to be made for additional copies of the database or application for things like development and test, analytics and reporting, operations, and local recovery. Historically, all-flash arrays have been too expensive to house these additional copies. Now, with innovations built into the EMC XtremIO all-flash array, these copies can be space-efficiently consolidated, operate at production levels of high performance, and be managed easily in an automated fashion. The result is simplicity, lower cost application development, and faster software development cycles, bringing agility and true transformation into your IT. Learn all about it with real customer examples in this session.

Next-Generation OpenStack Private Clouds with VMware and EMC XtremIO All-Flash Array

Intermediate

Monday Noon – 1:00

Thurs 8:30 – 9:30

60 min.

OpenStack Cloud deployments have long been notorious for being too complex and unpredictable, often requiring long implementation times and ongoing optimization. In this session, experts from VMware and EMC will discuss how VMware Integrated OpenStack (VIO), coupled with EMC XtremIO all-flash scale-out storage can dramatically simplify OpenStack implementations while delivering enterprise-class reliability, consistent predictable workload performance, and easy linear scalability.

Oracle Integrated Copy Data Management: Realizing the Power of XtremIO Virtual Copy Technology and EMC AppSync

Advanced

Tuesday Noon – 1:00

Thurs. 10:00 – 11:00

60 min.

On average enterprise Oracle Database users create between 8 and 10 functional copies of each production database—and no enterprise Oracle Database user has but a single production database. Copy Data Management is one of the most troubling technical requirements in the enterprise today and it is first and foremost on the minds of Oracle Database Administrators. XtremIO Virtual Copy Technology leads the industry in terms of copy data management performance, flexibility and ease of use. To extend these XtremIO attributes to the self-service user, customers choose EMC AppSync. This session will introduce EMC AppSync and XtremIO Virtual Copy Technology and provide a case study covering ease of use and consistent, predictable performance across source volumes and virtual copies alike.

Game Changing SAP Best Practices for HANA and Traditional SAP, Consolidation, Converged Infrastructure, and iCDM on XtremIO

Intermediate

Monday 1:30 – 2:30

Thurs. 1:00 – 2:00

60 min.

99% of the Fortune 100 run their business on SAP. Almost 90% of those SAP architectures are built on an EMC SAP Data platform. Consolidation, reduced complexity & performance are primary focal points for these businesses. As is reducing cycles spent on infrastructure management to empower more focus on business innovation via SAP. Callaway Golf and Mohawk Industries are perfect examples of this new game-changing all-flash driven SAP mantra. Callaway will show how they accelerated SAP performance in production with EMC XtremIO All Flash Storage, by as much as 110%. Mohawk will show one of the world's most advanced virtualized vHANA architectures on VCE Converged Infrastructure. Both customer teams will share best practices & lessons learned on reducing SAP costs via XtremIO Virtual Copy (XVC snapshots) & deduplication, accelerating performance, and how EMC empowers their NextGen SAP strategies.

Tales From The Trenches: End-User Computing Architect Perspectives on Next Generation VDI

Advanced

Tuesday 3:00 – 4:00

Thurs. 10:00 – 11:00

60 min.

In this session, VDI architects from VMware and EMC XtremIO will present the recent trends in desktop virtualization and role of all-flash storage in ushering the new era of VDI. Topics covered will include: (1) Just-in-Time desktops: How to orchestrate disparate desktop building blocks and application delivery schemas to deliver personalized desktops and the role of all-flash storage. (2) Application layering with VMware App Volumes: At scale performance with EMC XtremIO. (3) VDI with Windows 10: Key differences in storage access profiles from earlier Windows desktop OSes, optimization recommendations, and implications for all-flash storage.

XtremIO 101: All-Flash Architecture, Benefits, and Use Cases for Mixed Workload Consolidation

Beginner

Tuesday 8:30 – 9:30

Thurs. 11:30 – 12:30

60 min.

This session provides an overview to the EMC XtremIO all-flash scale-out array and its design objectives. The architecture will be discussed and compared to other flash arrays in the market with the goal of helping the audience understand the unique requirements of building an all-flash array, the proper methodology for testing all-flash arrays, and architectural differentiation among flash array features that affect endurance, performance, and consistency across the most demanding mixed workload consolidations.

XtremIO iCDM Best Practices: Customer Panel

Beginner

Monday 4:30 – 5:30

Wed. 8:30 – 9:30

60 min.

Everybody knows that all-flash arrays are fast, but XtremIO enables much more than speeds and feeds. Considerations in storage planning need to be made for additional copies of the database or application for things like development and test, analytics and reporting, operations, and local recovery. Historically, all-flash arrays have been too expensive to house these multiple copies of your production data. Now, with innovations built into the EMC XtremIO all-flash array, these copies can be space-efficiently consolidated, operate at production levels of high performance, and be managed easily in an automated fashion. The result is simplicity, lower cost application development, reduced resource requirements and faster software development cycles. Learn from XtremIO customers how they implemented iCDM and how it transformed their IT to gain business agility and a competitive edge.

Providing a persistent data volume to EMC XtremIO using ClusterHQ Flocker, Docker And Marathon

Hi,

Containers are huge; that's not a secret to anyone in the IT industry. Customers are testing the waters and looking for ways to utilize container technologies, and it is also not a secret that the leading vendor in this technology is Docker.

But Docker itself isn't perfect yet; while it's as trendy as trendy can get, there are many tools around the Docker runtime to provide cluster management and so on.

It all started with a customer request some weeks ago; their request was "can you show us how you integrate with Docker, Marathon (to provide container orchestration) and ClusterHQ Flocker to provide a persistent data volume?"... Sounds easy, right?

Wait a second, aren't container technologies supposed to be completely stateless, designed to fail, and not in need of any persistent data that must survive a failure in case a container dies?

Well, that's exactly where customers are asking for things that aren't always part of the original design of container technologies, and where there is a gap, there is a solution.


Enter ClusterHQ with their Flocker product, taken from their website

What is Flocker?

Flocker is an open-source container data volume manager for your Dockerized applications.

By providing tools for data migrations, Flocker gives ops teams the tools they need to run containerized stateful services like databases in production.

Unlike a Docker data volume which is tied to a single server, a Flocker data volume, called a dataset, is portable and can be used with any container in your cluster.

Flocker manages Docker containers and data volumes together. When you use Flocker to manage your stateful microservice, your volumes will follow your containers when they move between different hosts in your cluster.
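As a rough illustration of what this looks like from the Docker side once the Flocker plugin is installed on the hosts (the dataset name, mount path and password below are placeholders, not taken from the demo):

# Run MySQL with its data directory on a Flocker-managed dataset; if the
# container is rescheduled to another host in the cluster, Flocker moves
# the "mysql-data" dataset along with it:
docker run -d --name mysql-demo \
  --volume-driver flocker \
  -v mysql-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=<password> \
  mysql:5.6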

Container Manager and Storage Integrations

Flocker is designed to work with the other tools you are using to build and run your distributed applications. Flocker can be used with popular container managers or orchestration tools like Docker, Kubernetes, Mesos.

For storage, Flocker supports block-based shared storage such as Amazon EBS, or OpenStack Cinder so you can choose the storage backend that is best for your application. Read more about choosing the best storage backend for your application. You can also use Flocker to take advantage of pre-defined storage profiles that are offered by many storage providers. Find out more about Flocker’s storage profile support.

How is this related to EMC XtremIO, you might wonder? Well, as the #1 selling AFA in the market, we are starting to get many requests like these, and so, together with ClusterHQ, there is now support for EMC XtremIO to provide this functionality.

If you want to see a full demo of a MySQL app failing over, look no further

I also want to give a huge thank you to Dadee Birgher from my team, who set it all up in no time.

RecoverPoint For Virtual Machines (RP4VMs) 4.3 SP1 Is Out!

Hi, one of my favorite products in the EMC portfolio has just GA'd a new version, one that I'm truly excited about! If you are not familiar with what RP4VMs is, a good place to start is here:

https://itzikr.wordpress.com/2015/07/14/recoverpoint-for-virtual-machines-rp4vms-4-3-is-here-and-its-awesome/

Got it? Still here? Great, here's what's new.

The first thing that was completely redone is the deployment wizard. It replaces the classic Deployment Manager, is fully web based and is integrated into the vRPA; to start the deployment, you simply browse to the vRPA at https://IP/WDM. Instead of bothering you with more text / slides, you can see a demo I recorded here.

Some of the plugin enhancements include the ability to go back after the deployment and validate the registered ESXi hosts for potential issues; RP4VMs will even go the extra mile and try to resolve these issues for you.

Hmm, this one is interesting: RP4VMs currently supports the vSCSI API for splitting the I/O, but with the upcoming version of vSphere and a small upgrade to RP4VMs (coming very soon as well), it will support the new VMware API for IO Filters (also known as VAIO, the vSphere APIs for IO Filtering).

Faster Cloning Time Or Semi-Automated Space Reclamation – Revisited.

Readers of my blog know that we typically recommend using eager zeroed thick VMDKs for performance; the performance aspect of it manifests as a faster cloning time and faster performance when initially writing blocks inside the VM.


 

It's a good thing that the times are changing; recommended practices are always being revised because technology itself is changing. How does this relate to eager zeroed thick VMs?

With vSphere 6, VMware introduced a semi-automated mechanism to reclaim capacity from VMs. If you are not familiar with space reclamation and why it is an absolute must on AFAs, a good place to start is a post I wrote here:

https://itzikr.wordpress.com/2014/04/30/the-case-for-sparse_se/

So again, in vSphere 6 there is a way to run space reclamation on VMs:

https://itzikr.wordpress.com/2015/04/28/vsphere-6-guest-unmap-support-or-the-case-for-sparse-se-part-2/

But the BIG "but" is that this UNMAP only works under the following conditions (a quick verification sketch from the ESXi shell follows the list):

* UNMAP of blocks deleted within VMFS itself (i.e. VMDK deletion, swap file deletion, etc.) is handled separately; for this you can use the EMC VSI plugin.

* Unmap granularity is limited: with the existing VMFS version, the minimum unmap granularity is 1MB.

* Automatic in-guest UNMAP is only supported for Windows 8 / Server 2012 and later (there is no Linux support).

* The device at the VM level (the drive) must be configured as "thin", not "eager zeroed thick" or "lazy zeroed thick".
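If you want a quick sanity check from the ESXi shell before changing any VMDK formats, something like this will do (the device ID and datastore name are placeholders):

# Confirm the XtremIO device advertises UNMAP
# (look for "Delete Status: supported" in the output):
esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx

# Manually reclaim blocks freed at the VMFS level (deleted VMDKs, swap
# files, etc.) - roughly what the EMC VSI plugin drives for you:
esxcli storage vmfs unmap -l <datastore name>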

So while it is awesome that you can now almost forget about running space reclamation manually, you are still faced with a big dilemma: should I change my VMs' VMDK format to "thin", which is a prerequisite for automated UNMAP, or should I stick with eager zeroed thick for performance?

You CAN have your cake and eat it too; here's what you need to do.

Below you can see a Windows 10 VM that I prepared.

The Windows 10 VM was allocated a 40GB drive

 

Out of the 40GB, only 16.6GB was actually used.

I then cloned the VM to two templates: the first was an eager zeroed thick template, and the second was a template based on a "thin" VMDK.

I then had two templates looking like this:

I then deployed a VM from the eager zeroed thick template; the cloning took:

14 seconds

You can also see the IOPS used, red means the beginning of the cloning operation, green means the time it ended.

And the bandwidth that was used

With that, I moved on to deploying a VM from the thin-VMDK-based template;

the cloning took:

09 seconds

You can also see the IOPS used, red means the beginning of the cloning operation, green means the time it ended.

 

And the bandwidth that was used

 

 

And here's how it looks if you compare the bandwidth between the two cloning operations; again, they were both deployed from the same Windows template, the only difference being that one template was thin and the other was eager zeroed thick.

 

So the question is: why? Why would a thin-based deployment take LESS time than an eager zeroed thick one? It used to be the exact opposite!

The "magic" happens because after I initially installed Windows on that VM, I cloned it to a template, which defragments the data inside the VM. Yes, I also cloned that VM to an eager zeroed thick template, but the data was now structured in a much better way, so the fact that the thin template holds less capacity (as it is thin) means less time to deploy!

So..

Clone your VMs to a thin template, enable space reclamation on the OSes that support it (Windows 8/8.1/10, Server 2012/2012 R2/2016), and enjoy both worlds!

The Interesting Case of 2/0 0x5 0x25 0x0 – ILLEGAL REQUEST – LOGICAL UNIT NOT SUPPORTED

Hi,

A heads-up if you are using an active/active array (XtremIO, VMAX, HDS and maybe more):

Lately I have been working with VMware on a strange support case: basically, if you are using any A/A array and failing a path, the failover to the remaining paths will not always happen. This can affect not only path failover/failback but also NDU procedures and the like, where you purposely fail paths during storage controller upgrades.

The VMware KB for this is:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003433

and if you look under the notes section:

  • Path Failover is triggered only if other paths to the LUN do not return this sense code. The device is marked as PDL after all paths to the LUN return this sense code.

A problem with this process has been identified in ESXi 6.0 where failover is not triggered when other paths to the LUN are available.

The root cause of this issue has been identified and the fix will be available in the next major release of ESXi 6.0. 
If you are running ESXi 6.0 U1.x and you are affected by this issue, a hot patch is available. To obtain the patch, raise a Support Request and reference this KB article ID. For more information, see How to file a Support Request in My VMware (2006985).

2/0 0x5 0x25 0x0 – ILLEGAL REQUEST – LOGICAL UNIT NOT SUPPORTED

It's important to understand that this is not an XtremIO issue; it's an issue that started with vSphere 6.0, as the 6.0 code changed the way ESXi senses a PDL scenario. If you are using vSphere 5.5 (and all of its updates), you are fine.
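If you want to check whether a host is actually hitting this, the sense code shows up in the VMkernel log and the path state can be inspected per device; a minimal sketch (the naa ID is a placeholder):

# Look for the ILLEGAL REQUEST / LOGICAL UNIT NOT SUPPORTED sense data:
grep "0x5 0x25 0x0" /var/log/vmkernel.log

# Check the state of all paths to a given XtremIO device:
esxcli storage core path list -d naa.xxxxxxxxxxxxxxxx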


As of today, if you are using vSphere 6 Update 1, you can ask VMware for a specific hotfix; reference the KB I mentioned above. This hotfix will not go public, so you have to be specific when asking about it. Why won't it go public?

Because hotfixes are limited in their QA testing, and we want to make sure that you, the customer, install this hotfix only if you really need it. Also, in the upcoming version of vSphere 6, VMware will include this "fix" as part of the ESXi kernel, and because of that a very rigid QA effort will be done around it.

So, if you are anxious to solve it now and are using vSphere 6, just call VMware support; if you can wait a little longer and prefer a fix that has gone through more rigid QA, please wait for the upcoming vSphere 6 release.

Lastly, if you are using EMC PowerPath/VE, you are not impacted by this, as PP/VE takes ownership of multipathing from the NMP.


Connecting EMC XtremIO To An Heterogeneous Storage Environment

Hi,

A topic that comes and goes every once in a while is what you should do if multiple storage arrays (VNX, VMAX, etc.) are connected to the same vSphere cluster that the XtremIO array is connected to as well.

This is in fact a two-sided question.

Question number one is specific to the VAAI ATS primitive. In some specific VMAX / VNX software revisions, there was a recommendation to disable ATS because of bugs; these bugs have since been resolved, and I ALWAYS encourage you to check with your VNX/VMAX team if a recommendation was made in the past to disable ATS/XCOPY. But what happens if your ESXi host(s) are set to ATS off and you just connected an XtremIO array, mapped its volumes, and now you recall that, hey! We (XtremIO) actually always recommend enabling ATS. Well, here is what you need to do:

If the VAAI setting is enabled after a datastore was created on XtremIO storage, the setting does not automatically propagate to the corresponding XtremIO volumes. The setting must be manually configured to avoid data unavailability to the datastore. Perform the following procedure on all datastores that were created on XtremIO storage before VAAI was enabled on the ESX host.

To manually set the VAAI setting on a VMFS-5 datastore created on XtremIO storage with VAAI disabled on the host:
1. Confirm that the VAAI Hardware Accelerator Locking is enabled on this host.
2. Using the following vmkfstools command, confirm that the datastore is configured as “public ATS-only”: # vmkfstools -Ph -v1 <path to datastore> | grep public
• In the following example, a datastore volume is configured as “public”:


• In the following example, a datastore volume is configured as “public ATS-only”:


3. If the datastore was found with mode "public", change it to "public ATS-only" by executing the following steps (a consolidated sketch follows the procedure):
a. Unmount the datastore from all ESX hosts on which it is mounted (except one ESX host).
b. Access the ESX host on which the datastore is still mounted.
c. Run the following vmkfstools command to enable ATS on the datastore: # vmkfstools --configATSOnly 1 <path to datastore>
d. Enter 0 to continue with ATS capability.
e. Repeat step 2 to confirm that ATS is set on the datastore.
f. Unmount the datastore from the last ESX host.
g. Mount the datastore on all ESX hosts.
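Putting steps 2 and 3 together, the flow from the one ESX host that still has the datastore mounted looks roughly like this (same placeholders as in the steps above):

# 1. Check the current locking mode ("public" vs. "public ATS-only"):
vmkfstools -Ph -v1 <path to datastore> | grep public

# 2. If it reports "public", enable ATS-only (the datastore must already be
#    unmounted from every other ESX host), entering 0 at the prompt to continue:
vmkfstools --configATSOnly 1 <path to datastore>

# 3. Re-run the check to confirm "public ATS-only", then unmount the datastore
#    from this host and remount it on all ESX hosts.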

Question number two is a more generic one: you have a VNX/VMAX and XtremIO all connected to the same vSphere cluster and you want to apply the ESXi best practices, for example the XCOPY chunk size. What can you do if some of these best practices vary between one platform and the other? It's easy when a best practice can be applied per specific storage array, but, like the XCOPY example I used above, some are system parameters that apply to the entire ESXi host.

Below you can see the table we have come up with. As always, things may change, so you will want to consult with your SE before the actual deployment.

 

 

 

Parameter Name                       | Scope/Granularity | VMAX (1)       | VNX            | XtremIO | Multi-Array (vSphere 5.5) | Multi-Array (vSphere 6)
------------------------------------ | ----------------- | -------------- | -------------- | ------- | ------------------------- | -----------------------
FC Adapter Policy IO Throttle Count  | per vHBA          | 256 (default)  | 256 (default)  | 1024    | 256 (2) (or per vHBA)     | same as 5.5
fnic_max_qdepth                      | Global            | 32 (default)   | 32 (default)   | 128     | 32                        | same as 5.5
Disk.SchedNumReqOutstanding          | LUN               | 32 (default)   | 32 (default)   | 256     | Set per LUN (3)           | same as 5.5
Disk.SchedQuantum                    | Global            | 8 (default)    | 8 (default)    | 64      | 8                         | same as 5.5
Disk.DiskMaxIOSize                   | Global            | 32MB (default) | 32MB (default) | 4MB     | 4MB                       | same as 5.5
XCOPY (/DataMover/MaxHWTransferSize) | Global            | 16MB           | 16MB           | 256KB   | 4MB                       | VAAI Filters with VMAX

 

Notes:

  1. Unless otherwise noted, the term VMAX refers to VMAX and VMAX3 platforms
  2. The setting for FC Adapter policy IO Throttle Count can be set to the value specific to the individual storage array type if connections are segregated. If the storage arrays are connected using the same vHBA’s, use the multi-array setting in the table.
  3. The value for Disk.SchedNumReqOutstanding can be set on individual LUNs and therefore the value used should be specific to the underlying individual storage array type.

Parameters Detail

 

The sections that follow describe each parameter separately.

 

FC Adapter Policy IO Throttle Count

 

Parameter

FC Adapter Policy IO Throttle Count

Scope

UCS Fabric Interconnect Level

Description

The total number of I/O requests that can be outstanding on a per-vHBA (virtual host bus adapter) basis.

This is a "hardware" level queue.

Default UCS Setting

2048

EMC Recommendations

EMC recommends setting to 1024 for vHBAs connecting to XtremIO only.

EMC recommends leaving the default of 256 for vHBAs connecting to VNX/VMAX systems only.

EMC recommends setting to 256 for vHBAs connecting to both XtremIO and VNX/VMAX systems.

 

fnic_max_qdepth

 

Parameter

fnic_max_qdepth

Scope

Global

Description

Driver level setting that manages the total number of I/O requests that can be outstanding on a per-LUN basis.
This is a Cisco driver level option.

Mitigation Plan
vSphere 5.5

There are options to reduce the queue size on a per-lun basis:

 

Disk Queue Depth:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1008113

esxcli storage core device set --device device_name --queue-full-threshold Q --queue-full-sample-size S

 


 

Mitigation Plan
vSphere 6

There are options to reduce the queue size on a per-lun basis:

 

Disk Queue Depth:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1008113

esxcli storage core device set --device device_name --queue-full-threshold Q --queue-full-sample-size S


 

EMC Recommendations

There are options to reduce the queue size on a per-lun basis:

 

Disk Queue Depth:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1008113

esxcli storage core device set --device device_name --queue-full-threshold Q --queue-full-sample-size S

 

EMC will set fnic_max_qdepth to 128 for systems with XtremIO only

VCE will leave at default of 32 for VNX/VMAX systems adding XtremIO.

VCE will set to 32 for XtremIO systems adding VNX/VMAX.
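Since fnic_max_qdepth is a Cisco fnic driver module parameter rather than a per-device setting, it is changed at the host level and requires a reboot to take effect; a sketch using the XtremIO-only value from the table:

# Show the current fnic driver parameters:
esxcli system module parameters list -m fnic

# Set the per-LUN queue depth for an XtremIO-only host
# (keep 32 when VNX/VMAX share the same host), then reboot:
esxcli system module parameters set -m fnic -p fnic_max_qdepth=128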

 

Disk.SchedNumReqOutstanding

 

Parameter

Disk.SchedNumReqOutstanding

Scope

LUN

Description

When two or more virtual machines share a LUN (logical unit number), this parameter controls the total number of outstanding commands permitted from all virtual machines collectively on the host to that LUN (this setting is not per virtual machine).

Mitigation Plan
vSphere 5.5

vSphere 5.5 permits per-device application of this setting. Use the value in the table that corresponds to the underlying storage system presenting the LUN.

Mitigation Plan
vSphere 6

vSphere 6.0 permits per-device application of this setting. Use the value in the table that corresponds to the underlying storage system presenting the LUN.
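Because the setting is applied per device, the XtremIO LUNs can get the higher value while VNX/VMAX LUNs on the same host keep their default; a minimal sketch (the naa ID is a placeholder):

# Show the current value for a device ("No of outstanding IOs with
# competing worlds"):
esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx | grep -i outstanding

# Raise it to 256 for an XtremIO LUN only; VNX/VMAX LUNs stay at 32:
esxcli storage core device set -d naa.xxxxxxxxxxxxxxxx -O 256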

 

Disk.SchedQuantum

 

Parameter

Disk.SchedQuantum

Description

The maximum number of consecutive "sequential" I/Os allowed from one VM before forcing a switch to another VM (unless this is the only VM on the LUN). Disk.SchedQuantum is set to a default value of 8.

Scope

Global

EMC Recommendations

EMC recommends setting to 64 for systems with XtremIO only

EMC recommends leaving at default of 8 for VNX/VMAX systems adding XtremIO.

EMC recommends setting to 8 for XtremIO systems adding VNX/VMAX.
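Disk.SchedQuantum is a host-wide advanced setting, which is why the mixed-array value falls back to the default of 8; a sketch for an XtremIO-only host:

# Show the current value:
esxcli system settings advanced list -o /Disk/SchedQuantum

# Set it to 64 on hosts that only see XtremIO storage:
esxcli system settings advanced set -o /Disk/SchedQuantum -i 64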

 

Disk.DiskMaxIOSize

 

Parameter

Disk.DiskMaxIOSize

Scope

Global

Description

ESX can pass I/O requests as large as 32767 KB directly to the storage device. I/O requests larger than this are split into several, smaller-sized I/O requests. Some storage devices, however, have been found to exhibit reduced performance when passed large I/O requests (above 128KB, 256KB, or 512KB, depending on the array and configuration). As a fix for this, you can lower the maximum I/O size ESX allows before splitting I/O requests.

EMC Recommends

EMC recommends setting to 4096 for systems only connected to XtremIO.

EMC recommends leaving at default of 32768 for systems only connected to VNX or VMAX.

EMC recommends setting to 4096 for systems with VMAX + XtremIO.

EMC recommends setting to 4096 for XtremIO systems adding VNX.
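Disk.DiskMaxIOSize is also host-wide and is expressed in KB; a sketch matching the recommendations above:

# Show the current maximum I/O size (default 32768 KB):
esxcli system settings advanced list -o /Disk/DiskMaxIOSize

# Limit it to 4 MB on any host that has XtremIO connected:
esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 4096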

 

XCOPY (/DataMover/MaxHWTransferSize)

 

Parameter

XCOPY (/DataMover/MaxHWTransferSize)

Scope

Global

Description

Maximum number of blocks used for XCOPY operations.

EMC Recommends

vSphere 5.5:

EMC recommends setting to 256 for systems only connected to XtremIO.

EMC recommends setting to 16384 for systems only connected to VNX or VMAX.

EMC recommends leaving the default of 4096 for systems with VMAX or VNX adding XtremIO.

EMC recommends leaving the default of 4096 for XtremIO systems adding VNX or VMAX.

 

vSphere 6:

EMC recommends enabling the VAAI claim rule for systems connected to VMAX, which overrides the system setting and uses 240MB.

EMC recommends setting to 256KB for systems only connected to XtremIO.
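/DataMover/MaxHWTransferSize is host-wide as well and is expressed in KB; a sketch for the vSphere 5.5 values above (the vSphere 6 VMAX claim-rule approach is configured differently and is not shown here):

# Show the current XCOPY transfer size (default 4096 KB):
esxcli system settings advanced list -o /DataMover/MaxHWTransferSize

# XtremIO-only hosts: drop it to 256 KB
esxcli system settings advanced set -o /DataMover/MaxHWTransferSize -i 256

# VNX/VMAX-only hosts: raise it to 16 MB
esxcli system settings advanced set -o /DataMover/MaxHWTransferSize -i 16384

# Mixed hosts: leave the default of 4096 KB in place.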

 

vCenter Concurrent Full Clones

 

Parameter

config.vpxd.ResourceManager.maxCostPerHost

Scope

vCenter

Description

Determines the maximum number of concurrent full clone operations allowed (the default value is 8)

EMC Recommendations

EMC recommends setting to 8 per X-Brick (up to 48) for systems only connected to XtremIO.

EMC recommends leaving the default for systems only connected to VNX or VMAX.

EMC recommends setting to 8 for systems with VMAX + XtremIO.

EMC recommends setting to 8 for systems with VNX + XtremIO.