The Interesting Case of 2/0 0x5 0x25 0x0 – ILLEGAL REQUEST – LOGICAL UNIT NOT SUPPORTED

Hi,

A heads-up if you are using an Active/Active array (XtremIO, VMAX, HDS and maybe more).

Lately, I have been working with VMware on a strange support case. In short, if you are using any A/A array and a path fails, failover to the remaining paths will not always happen. This can affect not only path failover/failback, but also NDU procedures and similar operations where you purposely fail paths during storage controller upgrades.

The VMware KB for this is:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003433

and if you look under the notes section:

  • Path Failover is triggered only if other paths to the LUN do not return this sense code. The device is marked as PDL after all paths to the LUN return this sense code.
  • A problem with this process has been identified in ESXi 6.0 where, for Active/Active arrays, failover is not triggered when other paths to the LUN are available. A fix is being investigated by VMware. This article will be updated when the fix is available.

2/0 0x5 0x25 0x0 – ILLEGAL REQUEST – LOGICAL UNIT NOT SUPPORTED

It’s important to understand that this is not an XtremIO issue; it’s an issue that started with vSphere 6.0, as the 6.0 code changed the way ESXi senses a PDL scenario. If you are using vSphere 5.5 (and all of its updates), you are fine.
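If you want to check whether a host is hitting this, the sense code shows up in the vmkernel log. A minimal check from the ESXi shell (a sketch, assuming the default log location and the standard sense-data format):

# grep "0x5 0x25 0x0" /var/log/vmkernel.log

Paths logging “Valid sense data: 0x5 0x25 0x0” while other paths to the same LUN are still alive is exactly the scenario the KB describes.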

As of today, if you are using vSphere 6 Update 1, you can ask VMware for a specific hotfix and reference the KB I mentioned above. This hotfix will not go public, so you have to be specific when asking for it. Why won’t it go public?

Because in the upcoming version of vSphere 6, VMware will include this “fix” as part of the ESXi kernel.

So, if you are anxious to solve it now and are running vSphere 6, just call VMware support; if you can wait a little bit longer and prefer a fix that has gone through more rigorous QA, please wait for the upcoming vSphere 6 release.

Lastly, if you are using EMC PowerPath/VE, you are not impacted by this, as PP/VE takes ownership of multipathing from the NMP.

Connecting EMC XtremIO To A Heterogeneous Storage Environment

Hi,

A topic that comes up every once in a while is what you should do if multiple storage arrays (VNX, VMAX, etc.) are connected to the same vSphere cluster that the XtremIO array is connected to as well.

This is in fact a two-part question.

Question number one is specific to the VAAI ATS primitive. In some specific VMAX / VNX software revisions there was a recommendation to disable ATS because of bugs; these bugs have since been resolved, and I ALWAYS encourage you to check with your VNX/VMAX team whether a recommendation was made in the past to disable ATS/XCOPY. But what happens if your ESXi host(s) are set to ATS off, you just connected an XtremIO array and mapped its volumes, and now you recall that, hey, we (XtremIO) actually always recommend enabling ATS? Well,

If the VAAI setting is enabled after a datastore was created on XtremIO storage, the setting does not automatically propagate to the corresponding XtremIO Volumes. The setting must be manually configured to avoid data unavailability to the datastore. Perform the following procedure on all datastores created on XtremIO storage before VAAI is enabled on the ESX host.

To manually set VAAI on a VMFS-5 datastore created on XtremIO storage while VAAI was disabled on the host:
1. Confirm that the VAAI Hardware Accelerator Locking is enabled on this host.
2. Using the following vmkfstools command, confirm that the datastore is configured as “public ATS-only”:

# vmkfstools -Ph -v1 <path to datastore> | grep public

• In the following example, a datastore volume is configured as “public”:

# vmkfstools -Ph -v1 /vmfs/volumes/datastore1 | grep public
Mode: public

• In the following example, a datastore volume is configured as “public ATS-only”:

# vmkfstools -Ph -v1 /vmfs/volumes/datastore2 | grep public
Mode: public ATS-only
3. If the datastore was found with mode “public”, change it to “public ATS-only” by executing the following steps:
a. Unmount the datastore from all ESX hosts on which it is mounted (except one ESX host).
b. Access the ESX host on which the datastore is still mounted.
c. Run the following vmkfstools command to enable ATS on the datastore: # vmkfstools --configATSOnly 1 <path to datastore>
d. Enter 0 to continue with ATS capability.
e. Repeat step 2 to confirm that ATS is set on the datastore.
f. Unmount the datastore from the last ESX host.
g. Mount the datastore on all ESX hosts.

Question number two is a more generic one: you have a VNX/VMAX and an XtremIO all connected to the same vSphere cluster, and you want to apply the ESXi best practices, for example the XCOPY chunk size. What can you do when some of these best practices vary between one platform and the other? It’s easy when a best practice can be applied per specific storage array, but as in the example I just used, XCOPY is a system parameter that applies to the entire ESXi host.

Below you can see the table we have come up with. As always, things may change, so you will want to consult with your SE before the actual deployment.

 

 

 

Parameter Name | Scope/Granularity | VMAX (1) | VNX | XtremIO | Multi-Array Resolution (vSphere 5.5) | Multi-Array Resolution (vSphere 6)
FC Adapter Policy IO Throttle Count | per vHBA | 256 (default) | 256 (default) | 1024 | 256 (2) (or per vHBA) | same as 5.5
fnic_max_qdepth | Global | 32 (default) | 32 (default) | 128 | 32 | same as 5.5
Disk.SchedNumReqOutstanding | LUN | 32 (default) | 32 (default) | 256 | Set per LUN (3) | same as 5.5
Disk.SchedQuantum | Global | 8 (default) | 8 (default) | 64 | 8 | same as 5.5
Disk.DiskMaxIOSize | Global | 32MB (default) | 32MB (default) | 4MB | 4MB | same as 5.5
XCOPY (/DataMover/MaxHWTransferSize) | Global | 16MB | 16MB | 256KB | 4MB | VAAI Filters with VMAX

 

Notes:

  1. Unless otherwise noted, the term VMAX refers to VMAX and VMAX3 platforms
  2. The setting for FC Adapter policy IO Throttle Count can be set to the value specific to the individual storage array type if connections are segregated. If the storage arrays are connected using the same vHBAs, use the multi-array setting in the table.
  3. The value for Disk.SchedNumReqOutstanding can be set on individual LUNs and therefore the value used should be specific to the underlying individual storage array type.
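Before touching anything, it is worth checking what each host is currently running with. A quick sketch from the ESXi shell, using the get form of the same commands that appear later in this post:

# esxcfg-advcfg -g /Disk/SchedQuantum
# esxcfg-advcfg -g /Disk/DiskMaxIOSize
# esxcli system settings advanced list -o /DataMover/MaxHWTransferSize

Compare the output against the table above for the array mix connected to that host.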

Parameters Detail

 

The sections that follow describe each parameter separately.

 

FC Adapter Policy IO Throttle Count

 

Parameter: FC Adapter Policy IO Throttle Count

Scope: UCS Fabric Interconnect level

Description: The total number of I/O requests that can be outstanding on a per-virtual host bus adapter (vHBA) basis. This is a “hardware” level queue.

Default UCS setting: 2048

EMC recommendations:
• Set to 1024 for vHBAs connecting to XtremIO only.
• Leave at the default of 256 for vHBAs connecting to VNX/VMAX systems only.
• Set to 256 for vHBAs connecting to both XtremIO and VNX/VMAX systems.

 

fnic_max_qdepth

 

Parameter: fnic_max_qdepth

Scope: Global

Description: Driver-level setting that manages the total number of I/O requests that can be outstanding on a per-LUN basis. This is a Cisco driver-level option.

Mitigation plan (vSphere 5.5 and vSphere 6): instead of lowering the global value, there is an option to reduce the queue size on a per-LUN basis (see Disk Queue Depth, http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1008113):

esxcli storage core device set --device device_name --queue-full-threshold Q --queue-full-sample-size S

EMC recommendations:
• EMC will set fnic_max_qdepth to 128 for systems with XtremIO only.
• VCE will leave it at the default of 32 for VNX/VMAX systems adding XtremIO.
• VCE will set it to 32 for XtremIO systems adding VNX/VMAX.
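For reference, the fnic queue depth itself is a module parameter; a sketch using the mixed-environment value of 32 from the table (the same command appears later in this post with the XtremIO-only value of 128, and a host reboot is needed for the change to take effect):

# esxcli system module parameters set -p fnic_max_qdepth=32 -m fnic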

 

Disk.SchedNumReqOutstanding

 

Parameter: Disk.SchedNumReqOutstanding

Scope: LUN

Description: When two or more virtual machines share a LUN (logical unit number), this parameter controls the total number of outstanding commands permitted from all virtual machines collectively on the host to that LUN (this setting is not per virtual machine).

Mitigation plan (vSphere 5.5 and vSphere 6): both versions permit per-device application of this setting. Use the value in the table that corresponds to the underlying storage system presenting the LUN.
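Because the setting is applied per device from vSphere 5.5 onwards, the XtremIO and VNX/VMAX values can live side by side on the same host. A minimal sketch, where naa.xxx stands for an XtremIO device and naa.yyy for a VNX/VMAX device (both placeholders, use the real NAA identifiers from your host):

# esxcli storage core device set -d naa.xxx -O 256
# esxcli storage core device set -d naa.yyy -O 32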

 

Disk.SchedQuantum

 

Parameter: Disk.SchedQuantum

Scope: Global

Description: The maximum number of consecutive “sequential” I/Os allowed from one VM before forcing a switch to another VM (unless this is the only VM on the LUN). Disk.SchedQuantum is set to a default value of 8.

EMC recommendations:
• Set to 64 for systems with XtremIO only.
• Leave at the default of 8 for VNX/VMAX systems adding XtremIO.
• Set to 8 for XtremIO systems adding VNX/VMAX.

 

Disk.DiskMaxIOSize

 

Parameter: Disk.DiskMaxIOSize

Scope: Global

Description: ESX can pass I/O requests as large as 32767KB directly to the storage device. I/O requests larger than this are split into several smaller I/O requests. Some storage devices, however, have been found to exhibit reduced performance when passed large I/O requests (above 128KB, 256KB or 512KB, depending on the array and configuration). To address this, you can lower the maximum I/O size ESX allows before splitting I/O requests.

EMC recommendations:
• Set to 4096 for systems only connected to XtremIO.
• Leave at the default of 32768 for systems only connected to VNX or VMAX.
• Set to 4096 for systems with VMAX + XtremIO.
• Set to 4096 for XtremIO systems adding VNX.

 

XCOPY (/DataMover/MaxHWTransferSize)

 

Parameter: XCOPY (/DataMover/MaxHWTransferSize)

Scope: Global

Description: Maximum number of blocks used for XCOPY operations.

EMC recommendations, vSphere 5.5:
• Set to 256 for systems only connected to XtremIO.
• Set to 16384 for systems only connected to VNX or VMAX.
• Leave the default of 4096 for systems with VMAX or VNX adding XtremIO.
• Leave the default of 4096 for XtremIO systems adding VNX or VMAX.

EMC recommendations, vSphere 6:
• Enable the VAAI claim rule for systems connected to VMAX to override the system setting and use 240MB.
• Set to 256KB for systems only connected to XtremIO.
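For the VMAX claim-rule approach in vSphere 6, the sketch below is based on the XCOPY claim-rule options introduced in ESXi 6.0; treat the rule number and flags as assumptions and verify them against the VMAX VAAI documentation for your exact build before using them:

# esxcli storage core claimrule add -r 914 -t vendor -V EMC -M SYMMETRIX -P VMW_VAAIP_SYMM -c VAAI -a -s -m 240
# esxcli storage core claimrule load -c VAAI

The -a and -s options tell ESXi to use the array-reported values and multiple segments for XCOPY, and -m 240 caps the transfer size at 240MB. XtremIO volumes are not affected, since the rule matches the SYMMETRIX model string only.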

 

vCenter Concurrent Full Clones

 

Parameter: config.vpxd.ResourceManager.maxCostPerHost

Scope: vCenter

Description: Determines the maximum number of concurrent full clone operations allowed (the default value is 8).

EMC recommendations:
• Set to 8 per X-Brick (up to 48) for systems only connected to XtremIO.
• Leave the default for systems only connected to VNX or VMAX.
• Set to 8 for systems with VMAX + XtremIO.
• Set to 8 for systems with VNX + XtremIO.

 

 

EMC Storage Integrator (ESI) 3.9 Is Here

ESI is our Windows Server + Hyper-V plugin. It’s very similar to our VSI plugin, but it targets our Windows-based customers who are running Windows Server either on bare metal or as a Hyper-V host.

For an overview of the plugin itself, please read the blog post I wrote some time ago: https://itzikr.wordpress.com/2014/11/25/esi-for-windows-suite-3-6-emc-xtremio/

Ok, so what’s new with this version (3.9) for our XtremIO customers?

Support For XtremIO Snapshots


Creating a Snapshot of a Volume
Below is the wizard used for creating a snapshot of an XtremIO Volume. Read-only snapshots are only supported from XtremIO 4.0, so for XtremIO 3.0 this snapshot type option is disabled.


Create a Snapshot of Another Snapshot
In the Volume & Snapshot tab for an XtremIO storage system, ESI provides an option for creating a snapshot from another snapshot. The selected snapshot is used as the source for the new snapshot. On selection of a snapshot, this option is available as a context menu.


 

Refresh XtremIO Volume
An XtremIO volume can be refreshed from a snapshot of that particular volume. This option is available in the Volume & Snapshot tab. On selection of a volume or snapshot, it is available as a context menu.

Below is the wizard used for Refresh/Restore. All the snapshots of the selected volume are listed.


Restore XtremIO Volume
An XtremIO volume can be restored from a read-only snapshot of that particular volume. This option is available in the Volume tab. On selection of a volume, it is available as a context menu.

Below is the wizard used for Restore Snapshot. All the read-only snapshots of that volume are listed.


Best practices for Host Provisioning with XtremIO 4.0

Host best practices are the set of optimal settings/configurations recommended for achieving optimized performance.
They are divided into three categories:


1. Storage Side Settings
a) Specifying Disk Logical Block Size

2. Hypervisor Settings (VMware ESX)
a) Disk Settings
b) Native Multipathing Settings

3. Host Settings
i) Windows Host
a) HBA Queue Depth Settings
ii) Linux Host
a) HBA Settings
b) Native Multipathing Settings

 

Best practices for Host Provisioning: setting the Disk Logical Block Size

The recommended disk logical block size is 4KB for Windows or Linux operating systems, and 512B for other operating systems.


Best practices for Windows Host Provisioning with
XtremIO 4.0

The wizard below depicts the best practices for Windows host provisioning with an XtremIO 4.0 storage system.


Best practices for Host Provisioning with XtremIO 4.0
and VMware ESX

The wizard below depicts the best practices for host provisioning with an XtremIO 4.0 storage system and VMware ESX.


Best practices for Linux Host Provisioning with
XtremIO 4.0

The wizard below depicts the best practices for Linux host provisioning with an XtremIO 4.0 storage system.


Host Configuration for VMware® vSphere On EMC XtremIO

Hi,

There have been a lot of questions lately around best practices when using XtremIO with vSphere. Attached below is an extract of the vSphere section of our user guide. Why am I posting it online, then? Because user guides can be somewhat difficult to find if you don’t know where to look, and Google is your best friend.

Please note that this section treats the vSphere cluster as if it is exclusively connected to the XtremIO array. If you are using a mixed cluster environment, some of these parameters will be different; a later post will follow up on that scenario.


Note: XtremIO Storage Array supports both ESX and ESXi. For simplification, all references to ESX server/host apply to both ESX and ESXi, unless stated otherwise.


Note: In hosts running a hypervisor, such as VMware ESX or Microsoft Hyper-V, it is important to ensure that the logical unit numbers of XtremIO volumes are consistent across all hosts in the hypervisor cluster. Inconsistent LUNs may affect operations such as VM online migration or VM power-up.


Note: When using Jumbo Frames with VMware ESX, the correct MTU size must be set on the virtual switch as well.


Fibre Channel HBA Configuration

When using Fibre Channel with XtremIO, the following FC Host Bus Adapters (HBA) issues should be addressed for optimal performance.

Pre-Requisites

To install one or more EMC-approved HBAs on an ESX host, follow the procedures in one of these documents, according to the FC HBA type:

For Qlogic and Emulex HBAs – Typically the driver for these HBAs is preloaded with ESX. Therefore, no further action is required. For details, refer to the vSphere and HBA documentation.

For Cisco UCS fNIC HBAs (vSphere 5.x and above) – Refer to the Virtual Interface Card Drivers section in the Cisco UCS Manager Install and Upgrade Guides for complete driver installation instructions

(http://www.cisco.com/en/US/partner/products/ps10281/prod_installation_guides_list.html).

Queue Depth and Execution Throttle


Note: Changing the HBA queue depth is designed for advanced users. Increasing queue depth may cause hosts to over-stress other arrays connected to the ESX host, resulting in performance degradation while communicating with them. To avoid this, in mixed environments with multiple array types connected to the ESX host, compare the XtremIO recommendations with those of other platforms before applying them.


This section describes the required steps for adjusting I/O throttle and queue depth settings for Qlogic, Emulex, and Cisco UCS fNIC. Follow one of these procedures according to the vSphere version used.

The queue depth setting controls the amount of outstanding I/O requests per a single path. On vSphere, the HBA queue depth can be adjusted through the ESX CLI.

Execution throttle settings control the amount of outstanding I/O requests per HBA port.

The HBA execution throttle should be set to the maximum value. This can be done on the HBA firmware level using the HBA BIOS or CLI utility provided by the HBA vendor:

Qlogic – Execution Throttle – This setting is no longer read by vSphere and is therefore not relevant when configuring a vSphere host with Qlogic HBAs.

Emulex – lpfc_hba_queue_depth – No need to change the default (and maximum) value (8192).

For Cisco UCS fNIC, the I/O Throttle setting determines the total number of outstanding I/O requests per virtual HBA.

For optimal operation with XtremIO storage, it is recommended to adjust the queue depth of the FC HBA. With Cisco UCS fNIC, it is also recommended to adjust the I/O Throttle setting to 1024.


Note: For further information on adjusting HBA queue depth with ESX, refer to VMware KB article 1267 on the VMware website

(http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1267).


Note: If the execution throttle in the HBA level is set to a value lower than the queue depth, it may limit the queue depth to a lower value than set.


Note: The setting adjustments in this section for Cisco UCS fNIC HBA apply to VMware vSphere only. Since these settings are global to the UCS chassis, they may impact other blades in the UCS chassis running different OS (e.g. Windows).


To adjust HBA I/O throttle of the Cisco UCS fNIC HBA:

  1. In the UCSM navigation tree, click the Servers tab.
  2. In the navigation tree, expand the Policies and Adapter Policies.
  3. Click the FC Adapter Policy Linux or FC Adapter Policy VMWare.
  4. In the main window, expand the Options drop-down.
  5. Configure the I/O Throttle Count field to 1024.
  6. Click Save Changes.


Note: For more details on Cisco UCS fNIC FC adapter configuration, refer to:

https://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/unified-computing/guide-c07-730811.pdf


Fibre Channel HBA Configuration

To adjust the HBA queue depth on a host running vSphere 5.x or above:

  • Open an SSH session to the host as root.
  • Run one of the following commands to verify which HBA module is currently loaded:
Qlogic: esxcli system module list | egrep "ql|Loaded"
Emulex: esxcli system module list | egrep "lpfc|Loaded"
Cisco UCS fNIC: esxcli system module list | egrep "fnic|Loaded"

Example (for a host with Emulex HBA):

# esxcli system module list | egrep "lpfc|Loaded"
Name     Is Loaded  Is Enabled
lpfc     true       true
lpfc820  false      true

In this example the native lpfc module for the Emulex HBA is currently loaded on ESX.

  • Run one of the following commands on the currently loaded HBA module, to adjust the HBA queue depth:


    Note: The commands displayed in the table refer to the Qlogic qla2xxx/qlnativefc, Emulex lpfc and Cisco UCS fNIC modules. Use an appropriate module name based on the output of the previous step.


Qlogic (vSphere 5.x): esxcli system module parameters set -p ql2xmaxqdepth=256 -m qla2xxx
Qlogic (vSphere 6.x): esxcli system module parameters set -p qlfxmaxqdepth=256 -m qlnativefc
Emulex: esxcli system module parameters set -p lpfc0_lun_queue_depth=128 -m lpfc
Cisco UCS fNIC: esxcli system module parameters set -p fnic_max_qdepth=128 -m fnic


Note: The command for Emulex HBA adjusts the HBA queue depth for the lpfc0 Emulex HBA. If another Emulex HBA is connected to the XtremIO storage, change lpfc0_lun_queue_depth accordingly. For example, if lpfc1 Emulex HBA is connected to XtremIO, replace lpfc0_lun_queue_depth with lpfc1_lun_queue_depth.


Note: If all Emulex HBAs on the host are connected to the XtremIO storage, replace lpfc0_lun_queue_depth with lpfc_lun_queue_depth.


  • Reboot the ESX host.
  • Open an SSH session to the host as root.
  • Run the following command to confirm that queue depth adjustment is applied:

    esxcli system module parameters list -m <driver>


    Note: When using the command, replace <driver> with the module name, as received in the output of step 2 (for example, lpfc, qla2xxx and qlnativefc).


    Examples:

    • For a vSphere 5.x host with Qlogic HBA and queue depth set to 256:

# esxcli system module parameters list -m qla2xxx | grep ql2xmaxqdepth

ql2xmaxqdepth int 256 Max queue depth to report for target devices.

  • For a vSphere 6.x host with Qlogic HBA and queue depth set to 256:

# esxcli system module parameters list -m qlnativefc | grep qlfxmaxqdepth

qlfxmaxqdepth int 256 Maximum queue depth to report for target devices.

  • For a host with Emulex HBA and queue depth set to 128:

# esxcli system module parameters list -m lpfc | grep lpfc0_lun_queue_depth

lpfc0_lun_queue_depth int 128 Max number of FCP commands we can queue to a specific LUN

If queue depth is adjusted for all Emulex HBAs on the host, run the following command instead:

# esxcli system module parameters list -m lpfc | grep lun_queue_depth

Host Parameters Settings

This section details the ESX host parameters settings necessary for optimal configuration when using XtremIO storage.


Note: The following setting adjustments may cause hosts to over-stress other arrays connected to the ESX host, resulting in performance degradation while communicating with them. To avoid this, in mixed environments with multiple array types connected to the ESX host, compare these XtremIO recommendations with those of other platforms before applying them.


When using XtremIO storage with VMware vSphere, it is recommended to set the following parameters to their maximum values:

Disk.SchedNumReqOutstanding – Determines the maximum number of active storage commands (I/Os) allowed at any given time at the VMkernel. The maximum value is 256.


Note: When using vSphere 5.5 or above, the Disk.SchedNumReqOutstanding parameter can be set on a specific volume rather than on all volumes presented to the host. Therefore, it should be set only after XtremIO volumes are presented to the ESX host using ESX command line.


Disk.SchedQuantum – Determines the maximum number of consecutive “sequential” I/Os allowed from one VM before switching to another VM (unless this is the only VM on the LUN). The maximum value is 64.

In addition, the following parameter setting is required:

Disk.DiskMaxIOSize – Determines the maximum I/O request size passed to storage devices. With XtremIO, it is required to change it from 32767 (default setting of 32MB) to 4096 (4MB). This adjustment allows a Windows VM to EFI boot from XtremIO storage with a supported I/O size of 4MB.


Note: For details on adjusting the maximum I/O block size in ESX, refer to VMware KB article 1003469 on the VMware website

(http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1003469).


These setting adjustments should be carried out on each ESX host connected to XtremIO cluster via either the vSphere Client or the ESX command line.

To adjust ESX host parameters for XtremIO storage, follow one of these procedures:

Using the vSphere WebUI client:

  1. Launch the vSphere Web client and navigate to Home > Hosts and Clusters.
  2. In the left menu section, locate the ESX host and click it.
  3. In the right pane, click Manage > Settings.
  4. From the System section, click Advanced System Settings.
  5. Locate the Disk.SchedNumReqOutstanding parameter. Click the Edit icon and set the parameter to its maximum value (256).


Note: Do not apply step 5 in a vSphere 5.5 (or above) host, where the parameter is set on a specific volume using ESX command line.


  6. Locate the Disk.SchedQuantum parameter. Click the Edit icon and set it to its maximum value (64).
  7. Locate the Disk.DiskMaxIOSize parameter. Click the Edit icon and set it to 4096.
  8. Click OK to apply the changes.

Using the ESX host command line (for vSphere 5.0 and 5.1):

  • Open an SSH session to the host as root.
  • Run the following commands to set the SchedQuantum,

    SchedNumReqOutstanding, and DiskMaxIOSize parameters, respectively:

    • esxcfg-advcfg -s 64 /Disk/SchedQuantum
    • esxcfg-advcfg -s 256 /Disk/SchedNumReqOutstanding
    • esxcfg-advcfg -s 4096 /Disk/DiskMaxIOSize

Using the ESX host command line (for vSphere 5.5 or above):

  • Open an SSH session to the host as root.
  • Run the following commands to set the SchedQuantum and DiskMaxIOSize parameters, respectively:
    • esxcfg-advcfg -s 64 /Disk/SchedQuantum
    • esxcfg-advcfg -s 4096 /Disk/DiskMaxIOSize
  • Run the following command to obtain the NAA for XtremIO LUNs presented to the ESX host and locate the NAA of the XtremIO volume:
    • esxcli storage nmp path list | grep XtremIO -B1
  • Run the following command to set SchedNumReqOutstanding for the device to its maximum value (256):
    • esxcli storage core device set -d naa.xxx -O 256

vCenter Server Parameter Settings

The maximum number of concurrent full cloning operations should be adjusted, based on the XtremIO cluster size. The vCenter Server parameter config.vpxd.ResourceManager.maxCostPerHost determines the maximum number of concurrent full clone operations allowed (the default value is 8). Adjusting the parameter should be based on the XtremIO cluster size as follows:

10TB Starter X-Brick (5TB) and a single X-Brick – 8 concurrent full clone operations

Two X-Bricks – 16 concurrent full clone operations

Four X-Bricks – 32 concurrent full clone operations

Six X-Bricks – 48 concurrent full clone operations

To adjust the maximum number of concurrent full cloning operations:

  1. Launch vSphere WebUI client to log in to the vCenter Server.
  2. From the top menu, select vCenter Inventory List.
  3. From the left menu, under Resources, Click vCenter Servers.
  4. Select vCenter > Manage Tab > Settings > Advanced Settings.
  5. Click Edit.
  6. Locate the config.vpxd.ResourceManager.maxCostPerHost parameter and set it according to the XtremIO cluster size. If you cannot find the parameter, type its name in the Key field and the corresponding value in the Value field.
  7. Click Add.
  8. Click OK to apply the changes.

vStorage API for Array Integration (VAAI) Settings

VAAI is a vSphere API that offloads vSphere operations such as virtual machine provisioning, storage cloning and space reclamation to storage arrays that supports VAAI. XtremIO Storage Array fully supports VAAI.

To ensure optimal performance of XtremIO storage from vSphere, VAAI must be enabled on the ESX host before using XtremIO storage from vSphere. Failing to do so may expose the XtremIO cluster to the risk of datastores becoming inaccessible to the host.

This section describes the necessary settings for configuring VAAI for XtremIO storage.

Enabling VAAI Features

Confirming that VAAI is Enabled on the ESX Host

When using vSphere version 5.x and above, VAAI is enabled by default. Before using the XtremIO storage, confirm that VAAI features are enabled on the ESX host.

To confirm that VAAI is enabled on the ESX host:

  • Launch the vSphere Web Client and navigate to Home > Hosts and Clusters.
  • In the left menu section, locate the ESX host and click it.
  • In the right pane, click Manage > Settings.
  • From the System section, click Advanced System Settings.
  • Verify that the following parameters are enabled (i.e. both are set to “1”):
    • DataMover.HardwareAcceleratedMove
    • DataMover.HardwareAcceleratedInit
    • VMFS3.HardwareAcceleratedLocking


    If any of the above parameters are not enabled, adjust them by clicking the Edit icon and clicking OK.
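If you prefer the command line, the same three parameters can be checked over SSH; a quick sketch (an Int Value of 1 means the primitive is enabled):

# esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
# esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit
# esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking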

Manually Setting VAAI on Datastore

If VAAI setting is enabled after a datastore was created on XtremIO storage, the setting does not automatically propagate to the corresponding XtremIO Volumes. The setting must be manually configured to avoid data unavailability to the datastore.


Perform the following procedure on all datastores created on XtremIO storage before VAAI is enabled on the ESX host.

To manually set VAAI setting on a VMFS-5 datastore created on XtremIO storage with VAAI disabled on the host:

  1. Confirm that the VAAI Hardware Accelerator Locking is enabled on this host. For details, refer to “Confirming that VAAI is Enabled on the ESX Host” above.
  2. Using the following vmkfstools command, confirm that the datastore is configured as “public ATS-only”

    # vmkfstools -Ph -v1 <path to datastore> | grep public

    In the following example, a datastore volume is configured as “public”

    # vmkfstools -Ph -v1 /vmfs/volumes/datastore1 | grep public
    Mode: public

    In the following example, a datastore volume is configured as “public ATS-only”

    # vmkfstools -Ph -v1 /vmfs/volumes/datastore2 | grep public
    Mode: public ATS-only

  3. If the datastore was found with mode “public”, change it to “public ATS-only” by executing the following steps:
    1. Unmount the datastore from all ESX hosts on which it is mounted (except one ESX host).
    2. Access the ESX host on which the datastore is still mounted.
    3. To enable ATS on the datastore, run the following vmkfstools command:

      # vmkfstools --configATSOnly 1 <path to datastore>

    4. Enter 0 to continue with ATS capability.
    5. Repeat step 2 to confirm that ATS is set on the datastore.
    6. Unmount datastore from the last ESX host.
    7. Mount the datastore on all ESX hosts.

Tuning VAAI XCOPY with XtremIO

By default, vSphere instructs the storage array to copy data in 4MB chunks. To optimize VAAI XCOPY operation with XtremIO, it is recommended to adjust the chunk size to 256KB. The VAAI XCOPY chunk size is set using the MaxHWTransferSize parameter.

To adjust the VAAI XCOPY chunk size, run the following CLI commands according to the vSphere version running on your ESX host:

For vSphere version earlier than 5.5:

esxcli system settings advanced list -o /DataMover/MaxHWTransferSize

esxcli system settings advanced set --int-value 0256 --option /DataMover/MaxHWTransferSize

For vSphere version 5.5 and above:

esxcfg-advcfg -s 0256 /DataMover/MaxHWTransferSize

Disabling VAAI in ESX

In some cases (mainly for testing purposes) it is necessary to temporarily disable VAAI.

As a rule, VAAI should be enabled on an ESX host connected to XtremIO. Therefore, avoid disabling VAAI, or disable it only temporarily if required.


Note: For further information about disabling VAAI, refer to VMware KB article 1033665 on the VMware website

(http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1033665).


As noted in the Impact/Risk section of VMware KB 1033665, disabling the ATS (Atomic Test and Set) parameter can cause data unavailability in ESXi 5.5 for volumes created natively as VMFS5 datastore.


To disable VAAI on the ESX host:

  • Browse to the host in the vSphere Web Client navigator.
  • Select the Manage tab and click Settings.
  • In the System section, click Advanced System Settings.
  • Click Edit and modify the following parameters to disabled (i.e. set each to 0):
    • DataMover.HardwareAcceleratedMove
    • DataMover.HardwareAcceleratedInit
    • VMFS3.HardwareAcceleratedLocking
  • Click OK to apply the changes.
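The equivalent from the command line, should you need to script a temporary test, is a sketch along these lines (0 disables a primitive, 1 re-enables it):

# esxcli system settings advanced set -i 0 -o /DataMover/HardwareAcceleratedMove
# esxcli system settings advanced set -i 0 -o /DataMover/HardwareAcceleratedInit
# esxcli system settings advanced set -i 0 -o /VMFS3/HardwareAcceleratedLocking

Remember the warning above about ATS and native VMFS5 datastores before touching VMFS3.HardwareAcceleratedLocking.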

Multipathing Software Configuration


Note: You can use EMC Virtual Storage Integrator (VSI) Path Management to configure path management across EMC platforms, including XtremIO. For information on using this vSphere Client plug-in, refer to the EMC VSI Path Management Product Guide.


Configuring vSphere Native Multipathing

XtremIO supports the VMware vSphere Native Multipathing (NMP) technology. This section describes the procedure required for configuring native vSphere multipathing for XtremIO volumes.

For best performance, it is recommended to do the following:

Set the native round robin path selection policy on XtremIO volumes presented to the ESX host.


Note: With NMP in vSphere versions below 5.5, clustering is not supported when the path policy is set to Round Robin. For details, see vSphere MSCS Setup Limitations in the Setup for Failover Clustering and Microsoft Cluster Service guide for ESXi 5.0 or ESXi/ESX 4.x. In vSphere 5.5, Round Robin PSP (PSP_RR) support is introduced. For details, see MSCS support enhancements in vSphere 5.5 (VMware KB 2052238).


Set the vSphere NMP Round Robin path switching frequency to XtremIO volumes from the default value (1000 I/O packets) to 1.

These settings ensure optimal distribution and availability of load between I/O paths to the XtremIO storage.


Note: Use the ESX command line to adjust the path switching frequency of vSphere NMP Round Robin.


To set vSphere NMP Round-Robin configuration, it is recommended to use the ESX command line for all the XtremIO volumes presented to the host. Alternatively, for an XtremIO volume that was already presented to the host, use one of the following methods:

Per volume, using vSphere Client (for each host where the volume is presented)

Per volume, using ESX command line (for each host where the volume is presented)

The following procedures detail each of these three methods.

To configure vSphere NMP Round Robin as the default pathing policy for all XtremIO volumes, using the ESX command line:


Note: Use this method when no XtremIO volume is presented to the host. XtremIO volumes already presented to the host are not affected by this procedure (unless they are unmapped from the host).


  1. Open an SSH session to the host as root.
  2. Run the following command to configure the default pathing policy for newly defined XtremIO volumes to Round Robin with path switching after each I/O packet:

    esxcli storage nmp satp rule add -c tpgs_off -e "XtremIO Active/Active" -M XtremApp -P VMW_PSP_RR -O iops=1 -s VMW_SATP_DEFAULT_AA -t vendor -V XtremIO

    This command also sets the vSphere NMP Round Robin path switching frequency for newly defined XtremIO volumes to one (1).


    Note: Using this method does not impact any non-XtremIO volume presented to the ESX host.
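To confirm the rule was registered, list the SATP rules and filter for XtremIO; a quick sketch:

# esxcli storage nmp satp rule list | grep XtremIO

As noted above, the rule only applies to XtremIO volumes that are mapped to the host after it is added.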


To configure vSphere NMP Round Robin on an XtremIO volume in an ESX host, using vSphere WebUI Client:

  1. Browse to the host in the vSphere Web Client navigator.
  2. Select the Manage tab and click Storage.
  3. Click Storage Devices.
  4. Locate the XtremIO volume and select the Properties tab.
  5. Under Multipathing Properties, click Edit Multipathing.
  6. From the Path Selection policy drop-down list, select Round Robin (VMware) policy.
  7. Click OK to apply the changes.
  8. Click Edit Multipathing and verify that all listed paths to the XtremIO Volume are set to Active (I/O) status.


To configure vSphere NMP Round Robin on an XtremIO volume in an ESX host, using ESX command line:

  1. Open an SSH session to the host as root.
  2. Run the following command to obtain the NAA of XtremIO LUNs presented to the ESX host:

    #esxcli storage nmp path list | grep XtremIO -B1

  3. Run the following command to modify the path selection policy on the XtremIO volume to Round Robin:

    esxcli storage nmp device set --device <naa_id> --psp VMW_PSP_RR

    Example:

# esxcli storage nmp device set --device naa.514f0c5e3ca0000e --psp VMW_PSP_RR


Note: When using this method, it is not possible to adjust the vSphere NMP Round Robin path switching frequency. Adjusting the frequency changes the NMP PSP policy for the volume from round robin to iops, which is not recommended with XtremIO. As an alternative, use the first method described in this section.


For details, refer to VMware KB article 1017760 on the VMware website

(http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1017760).

Configuring PowerPath Multipathing


Note: For the most updated information on PowerPath support with XtremIO storage, refer to the XtremIO Simple Support Matrix.


XtremIO supports multipathing using EMC PowerPath/VE on vSphere. PowerPath provides array-customized LAMs (native class support) for XtremIO volumes. PowerPath array-customized LAMs feature optimal failover and load balancing behaviors for the XtremIO volumes managed by PowerPath.

For details on the PowerPath/VE releases supported for your VMware vSphere host, refer to the XtremIO Simple Support Matrix.

For details on native class support with XtremIO for your host, refer to the EMC PowerPath/VE release notes document for the PowerPath/VE version you are installing.

For details on installing and configuring PowerPath/VE with XtremIO native class on your host, refer to the EMC PowerPath on VMware vSphere Installation and Administration Guide for the PowerPath/VE version you are installing. This guide provides the required information for placing XtremIO volumes under PowerPath/VE control.

Post-Configuration Steps – Using the XtremIO Storage

When host configuration is completed, you can use the XtremIO storage from the host. For details on creating, presenting and managing volumes that can be accessed from the host via either GUI or CLI, refer to the XtremIO Storage Array User Guide that matches the version running on your XtremIO cluster.

EMC Virtual Storage Integrator (VSI) Unified Storage Management version 6.2 and above can be used to provision from within vSphere Client Virtual Machine File System (VMFS) datastores and Raw Device Mapping volumes on XtremIO. Furthermore, EMC VSI Storage Viewer version 6.2 (and above) extends the vSphere Client to facilitate the discovery and identification of XtremIO storage devices allocated to VMware ESX/ESXi hosts and virtual machines.

For further information on using these two vSphere Client plug-ins, refer to the VSI Unified Storage Management product guide and the VSI Storage Viewer product guide.

Disk Formatting

When creating volumes in XtremIO for a vSphere host, the following considerations should be made:

Disk logical block size – The only logical block (LB) size supported by vSphere for presenting to ESX volumes is 512 bytes.

Note: In XtremIO version 4.0.0 (and above), the Legacy Windows option is not supported.


Disk alignment – Unaligned disk partitions may substantially impact I/O to the disk.

With vSphere, data stores and virtual disks are aligned by default as they are created. Therefore, no further action is required to align these in ESX.

With virtual machine disk partitions within the virtual disk, alignment is determined by the guest OS. For virtual machines that are not aligned, consider using tools such as UBERalign to realign the disk partitions as required.

Presenting XtremIO Volumes to the ESX Host

Note: This section applies only to XtremIO version 4.0 and above.

Note: When using iSCSI software initiator with ESX and XtremIO storage, it is recommended to use only lower case characters in the IQN to correctly present the XtremIO volumes to ESX. For more details, refer to VMware KB article 2017582 on the VMware website: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2017582


When adding Initiator Groups and Initiators to allow ESX hosts to access XtremIO volumes, specify ESX as the operating system for newly-created Initiators, as shown in the figure below.


Note: Refer to the XtremIO Storage Array User Guide that matches the version running on your XtremIO cluster.


Following a cluster upgrade from XtremIO version 3.0.x to version 4.0 (or above), make sure to modify the operating system for each initiator that is connected to an ESX host.

Creating a File System

Note: File system configuration and management are out of the scope of this document.

It is recommended to create the file system using its default block size (using a non-default block size may lead to unexpected behavior). Refer to your operating system and file system documentation.

Using LUN 0 with XtremIO Storage

This section details the considerations and steps that should be performed when using LUN 0 with vSphere.

Notes on the use of LUN numbering:

In XtremIO version 4.0.0 (or above), volumes are numbered by default starting from LUN id 1 (and not 0 as was the case in previous XtremIO versions).

Although possible, it is not recommended to manually adjust the LUN id to 0, as it may lead to issues with some operating systems.

When a cluster is updated from XtremIO version 3.0.x to 4.0.x, an XtremIO volume with a LUN id 0 remains accessible following the upgrade.

With XtremIO version 4.0.0 (or above), no further action is required if volumes are numbered starting from LUN id 1.

By default, an XtremIO volume with LUN0 is inaccessible to the ESX host.


Note: Performing the described procedure does not impact access to XtremIO volumes with LUNs other than 0.


When native multipathing is used, either do not use LUN 0, or restart the ESX host if a rescan fails to find LUN 0.

Virtual Machine Formatting

For optimal performance, it is recommended to format virtual machines on XtremIO storage using Thick Provision Eager Zeroed. With this format, the required space for the virtual machine is allocated and zeroed at creation time. However, with native XtremIO data reduction, thin provisioning, and VAAI support, no actual physical capacity allocation occurs.

Thick Provision Eager Zeroed format advantages are:

Logical space is allocated and zeroed at virtual machine provisioning time, rather than scattered with each I/O sent by the virtual machine to the disk (as happens when the Thick Provision Lazy Zeroed format is used).

Thin provisioning is managed in the XtremIO Storage Array rather than in the ESX host (when Thin Provision format is used).

To format a virtual machine using Thick Provision Eager Zeroed:

  1. From vSphere Web Client launch the Create New Virtual Machine wizard.
  2. Proceed using the wizard up to the 2f Customize hardware screen.
  3. In the Customize hardware screen, click Virtual Hardware.
  4. Toggle the New Hard Disk option.
  5. Select the Thick Provision Eager Zeroed option to format the virtual machine’s virtual disk.


  6. Proceed using the wizard to complete creating the virtual machine.
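For an existing virtual machine, the same format can also be applied when adding a new virtual disk from the ESX command line; a sketch, with the datastore and VM paths being hypothetical placeholders:

# vmkfstools -c 40G -d eagerzeroedthick /vmfs/volumes/XtremIO_DS_1/MyVM/MyVM_1.vmdk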

Space Reclamation

This section provides a comprehensive list of capacity management steps for achieving optimal capacity utilization on the XtremIO array, when connected to an ESX host.

Data space reclamation helps to achieve optimal XtremIO capacity utilization. Space reclamation is a vSphere function that enables reclaiming used space by sending zeros to a specific address of the volume, after being notified by the file system that the address space was deleted.

Unlike traditional operating systems, ESX is a hypervisor, running guest operating systems on its own file system (VMFS). As a result, space reclamation is divided into guest OS and ESX levels.

ESX level space reclamation should be run only when deleting multiple VMs, and space is reclaimed from the ESX datastore. Guest level space reclamation should be run as a periodic maintenance procedure to achieve optimal capacity savings.

The following figure displays a scenario in which VM2 is deleted while VM1 and VM3 remain.


Space Reclamation at Guest Level

Note: Refer to the relevant OS chapter to run space reclamation in the guest OS level.

On VSI environments, every virtual server should be treated as a unique object. When using VMDK devices, T10 trim commands are blocked. Therefore, it is required to run space reclamation manually. RDM devices pass through T10 trim commands.

There are two types of VDI provisioning that differ by their space reclamation guidelines:

Temporary desktop (Linked Clones) – Normally, temporary desktops are deleted once the end users log off. Therefore, running space reclamation on the guest OS is not relevant, and only ESX level space reclamation should be used.

Persistent desktop (Full Clones) – Persistent desktop contains long-term user data. Therefore, space reclamation should be run on guest OS level first, and only then on ESX level.

On large-scale VSI/VDI environments, it is recommended to divide the VMs into groups to avoid overloading the SAN fabric.

Space Reclamation at ESX Level

ESX 5.1 and below

In versions prior to ESX 5.5, the vmkfstools command is used for space-reclamation. This command supports datastores up to 2TB.

The following example describes running vmkfstools on the XtremIO_DS_1 datastore, leaving 1% free space to allow user writes.

# cd /vmfs/volumes/XtremIO_DS_1

# vmkfstools -y 99

VMFS reclamation may fail when T10 commands are blocked (for example, behind VPLEX). In such cases, it is required to apply a manual copy of zeroes to the relevant free space.

The following example describes running a manual script on the X41-VMFS-3 datastore (refer to “ESX Shell Reclaim Script” below).

# ./reclaim_space.sh X41-VMFS-3

Note: The datastore name cannot include spaces.

ESX 5.5 and above

ESX 5.5 introduces a new command for space reclamation and supports datastores larger than 2TB.

The following example describes running space reclamation on a datastore XtremIO_DS_1:

# esxcli storage vmfs unmap --volume-label=XtremIO_DS_1 --reclaim-unit=20000

The reclaim-unit argument is an optional argument, indicating the number of vmfs blocks to UNMAP per iteration.

VMFS reclamation may fail when T10 commands are blocked (for example, behind VPLEX). In such cases, it is required to apply a manual copy of zeroes to the relevant free space.

The following example describes running a manual script on the X41-VMFS-3 datastore (refer to “ESX Shell Reclaim Script” below):

# ./reclaim_space.sh X41-VMFS-3

Note: The datastore name cannot include spaces.

ESX Shell Reclaim Script

The following example describes an ESX shell reclaim script.

for i in $1
do
  # free space (in MB) and mount point of the given datastore
  size=$(df -m | grep $i | awk '{print $4}')
  name=$(df -m | grep $i | awk '{print $NF}')
  # zero out 95% of the free space
  reclaim=$(echo $size | awk '{printf "%.f\n", $1 * 95 / 100}')
  echo $i $name $size $reclaim
  dd count=$reclaim bs=1048576 if=/dev/zero of=$name/zf
  sleep 15
  /bin/sync
  rm -rf $name/zf
done


Note: While increasing the percentage leads to elevated precision, it also increases the probability of receiving a ‘no free space’ SCSI error during the reclamation.


Out of Space VM Suspend and Notification with Thin Provisioning (TPSTUN)

TPSTUN is a VAAI primitive that enables the array to notify vSphere when a LUN is running out of space due to thin provisioning over-commit. The command causes all virtual machines on that LUN to be suspended. XtremIO supports this VAAI primitive.

A virtual machine provisioned on a LUN that is approaching full capacity usage becomes suspended, and the following message appears:


At this point, the VMware administrator can resolve the out-of-space situation on the XtremIO cluster and prevent the guest OS in the VMs from crashing.

EMC ViPR Controller 2.4 Is Out, Now With XtremIO Support

Hi,

We have just released version 2.4 of the ViPR Controller. This release adds a lot of goodness for many scenarios, one of them being support for XtremIO XIOS 4.0/4.0.1.

For me, ViPR is a manager of managers: it allows EMC and non-EMC customers to provision volumes, create snapshots, map volumes, create DR workflows and much more, all in a true self-service portal (cloud, anyone?) with true RBAC (role-based access control).

There’s a good high-level explanation of why ViPR is such a critical component in today’s diverse data center, which you can view here:

and another good deep dive into ViPR (from an older version) which you can view here:

Now, let’s recap the changes in ViPR 2.4. Please note that I haven’t covered everything, as it’s a monster release; instead, I just focused on areas that involve XtremIO (either directly or via another product that integrates with XtremIO).


Block: enhancements have been made to support ScaleIO via the REST API (starting with version 1.32) and the management of remote clusters with Vblock and XtremIO 4.0.
Object: Elastic Cloud Storage (ECS) Appliance support has been added through the Object Storage Services.


File: enhancements have been made to add the ingestion of file system subdirectories and shares, along with the discovery of Virtual Data Movers on the VNX (vNAS), to intelligently place newly created file systems on vNAS servers that provide better performance.


Data Protection: Cinder-discovered storage arrays are supported as VPLEX backends, and an administrator can increase the capacity of a RecoverPoint journal.


Product: enhancements have been made to empower the administrator to customize node names, to add different volumes to a consistency group, and to improve security using improved handling of passwords.

Some enhancements have been made to:
VCE Vblock: support is being added for the integration of multiple remote image servers; this directly provides better network latency, which benefits the installation of operating systems.
ScaleIO: while the supported functionality remains the same, ViPR Controller is able to communicate with ScaleIO version 1.32 using the REST API.
XtremIO: along with support for software version 4.0, ViPR Controller manages multiple clusters through a single XtremIO Management Service (XMS).

The following enhancements have been made relating to data protection:
VPLEX: Cinder-discovered storage arrays are now usable as VPLEX backend storage, which enables ViPR Controller to orchestrate virtual volumes from non-EMC storage arrays behind VPLEX systems. Additional enhancements are the ingestion of backend volumes and the management of migration services using VPLEX.
RecoverPoint: this release enables the administrator to optionally increase the size of a journal volume, making certain the volumes continue to collect logs.


Security while accessing the ViPR Controller has improved. By default, entering the password incorrectly ten consecutive times causes the system to lock that station out for 10 minutes. An administrator with REST API and/or CLI access can manage this feature. An administrator also has the capability to customize the ViPR Controller node names to meet data center specifications, which is a change from the usual “vipr1”, “vipr2” and “vipr3” naming convention.
The ViPR Controller Consistency Group has been enhanced to support VPLEX, XtremIO, and other volumes. This also includes the ability to add multiple volumes to a consistency group to ensure these volumes remain at a consistent level.
The way ViPR Controller treats existing zones has also changed. When an order is placed, the infrastructure checks the fabric manager to determine whether a zone with the appropriate WWNs already exists. If yes, that zone is leveraged to process the order. If a zone does not already exist, a new zone is created to process the order. This feature makes certain that the ViPR Controller creates zones only when necessary. This enhancement is available for all installations of ViPR Controller 2.4, but it must be enabled on upgrades.


Starting with ViPR Controller 2.4, support for XtremIO 4.0, along with the management of multiple clusters through a single XtremIO Management Service (XMS), is being added. The Storage Provider page discovers the XMS along with its clusters. A user with administrative privileges is required for the ViPR Controller to integrate and manage these clusters. Additionally, an XtremIO-based volume can now be part of a Consistency Group in ViPR Controller, which is an operation that was unavailable to XtremIO volumes before this release.

After upgrading to ViPR Controller 2.4, ViPR Controller will create a storage provider entry for each XtremIO system that was previously registered.


ViPR Controller also adds support for XtremIO 4.0 snapshots. The specific supported operations are:
Read-Only: XtremIO snapshots are regular volumes and are created as writable snapshots. In order to satisfy the need for local backup and immutable copies, there is an option to create a read-only snapshot. A read-only snapshot can be mapped to an external host such as a backup application, but it is not possible to write to it.
Restore: Using a single command, it is possible to restore a production volume or a CG from one of its descendant Snapshot Sets.
Refresh: The refresh command is a powerful tool for test and development environments and for the offline processing use case. With a single command, a snapshot of the production volume or CG is taken. This allows the test and development application to work on current data without the need to copy data or to rescan.



VPLEX: the VPLEX lesson covers the use of Cinder-discovered storage arrays as VPLEX backend storage, the ingestion of backend volumes and the management of VPLEX data migration speed.
RecoverPoint: the RecoverPoint lesson covers the enhanced capability of adding more capacity to a journal.
Let us first take a look at VPLEX.

ViPR Controller 2.0 started support for a broader set of third-party block storage arrays by leveraging OpenStack’s Cinder interface and existing drivers. ViPR Controller 2.2 added support for multipathing for FC. With ViPR Controller 2.4, Cinder-discovered storage arrays can be used as VPLEX backends, as long as both Fibre Channel ports from the VPLEX local and the third-party storage array are connected to the same fabric. Most importantly, the third-party storage array must also be a supported VPLEX backend.
Check the ViPR Controller Support Matrix for the list of supported fabric managers, OpenStack operating systems and supported VPLEX backends.


Listed are some of the steps necessary to provision a virtual volume using the ViPR Controller. Add FC Storage Port (step 3) is included here due to a Cinder limitation: when ViPR Controller discovers storage arrays behind Cinder, Cinder only provides ViPR Controller with one link to communicate with each storage array. It is recommended to add additional ports for the storage array, to ensure that there are at least two storage ports connected to each VPLEX director. This step only needs to be performed the first time a Cinder-discovered storage array is used; thereafter, it can be skipped. In this course, some of the steps are covered due to their technical differences. Let us take a look at how storage ports are added and virtual pools are created.


First, the OpenStack server must be added as a Storage Provider. The process to do this is the same as before. This image shows three storage providers: two VPLEX systems and the OpenStack host identified as a Third-Party Block Storage Provider. When the OpenStack host is added to ViPR Controller for southbound integration, any storage arrays that are configured inside the Cinder configuration are automatically identified.
Also shown here are five configured storage arrays. Due to the Cinder limitation, only one storage port is identified per storage array; however, there are ways within the ViPR Controller to add more storage ports.


Beginning with ViPR Controller 2.4, a Migration Services option, which leverages VPLEX, is introduced in the Service Catalog. The two tasks that can be performed within Migration Services are data migration and migration from VPLEX Local to VPLEX Metro.
In order to leverage data migration, all volumes must already have been created through the VPLEX cluster and the ViPR Controller. However, if a volume was created through the VPLEX cluster but not ViPR Controller, it must first be ingested into ViPR Controller for management before it can be migrated.
With migration from VPLEX Local to VPLEX Metro, the virtual volume is simply moved from being a local volume to a distributed volume thus improving its availability across two VPLEX clusters instead of one.


In the VPLEX Data Migration page, the options are project, virtual pool, operation, target virtual pool and volume. Two options play a key role in the data migration task: Operation and Target Virtual Pool. Operation specifies the type of data migration while Target
Virtual Pool specifies the destination of the volume being migrated.


The speed of the migration can be configured under Controller Configurations within ViPR Controller > VPLEX > Data Migration Speed. The value can be set to one of the following: lowest, low, medium, high or highest. The transfer size can be 128 KB, 2 MB, 8 MB, 16 MB or 32 MB.
Note: If the migration speed value is changed during a migration operation, the newly-changed value takes effect on future migration operations; the current operation is not impacted.
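For quick reference, here is the speed-to-transfer-size relationship as a tiny lookup. Note that the one-to-one pairing below is my assumption based on the five values listed above; confirm it against the ViPR Controller documentation.

```python
# Assumed one-to-one mapping of VPLEX Data Migration Speed to transfer size.
MIGRATION_TRANSFER_SIZE = {
    "lowest":  "128 KB",
    "low":     "2 MB",
    "medium":  "8 MB",
    "high":    "16 MB",
    "highest": "32 MB",
}

def transfer_size(speed: str) -> str:
    """Return the transfer size used for a given Data Migration Speed setting."""
    return MIGRATION_TRANSFER_SIZE[speed.lower()]

print(transfer_size("medium"))   # -> "8 MB" under the assumed mapping
```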


Prior to ViPR Controller 2.4, VPLEX volume ingestion was only being performed for the virtual volume, not for the other components including the storage from the backend
storage arrays. With ViPR Controller 2.4, this framework is improved by adding ingestion for backend volumes, clones/full copies and mirrors/continuous copies of unmanaged VPLEX volumes. With this improvement, the volumes become ViPR Controller-managed volumes along with their associated snapshots and copies.
Note: For the most up-to-date information on supported VPLEX backend arrays inside of the ViPR Controller, please refer to the ViPR Controller Support Matrix in EMC Support Zone.


Now let us take a look at the RecoverPoint related enhancements in this release.


Prior to ViPR Controller 2.4, a RecoverPoint journal created within the ViPR Controller was a single volume, with no way to increase the journal size from within the ViPR Controller. 80% of the journal volume is used to keep track of changes, so in a busy environment the journal could fill quickly.
ViPR Controller 2.4 now gives the ViPR Controller administrator the option to increase the journal capacity. Using the Add Journal Capacity option within the Block Protection Services category, an administrator can increase the journal by selecting the appropriate project, consistency group, copy name (the RecoverPoint volume name), virtual array and virtual pool. The new capacity can either be based on the pre-defined calculations detailed in the RecoverPoint Administration Guide or defined by the data center administrator.
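To give a feel for those pre-defined calculations, here is a simplified back-of-the-envelope sizing sketch that only uses the 80% figure mentioned above; the authoritative formulas (which also account for compression, protection window policies and so on) are in the RecoverPoint Administration Guide.

```python
def journal_capacity_gb(change_rate_mb_s: float, rollback_window_hours: float,
                        usable_fraction: float = 0.8) -> float:
    """Rough journal sizing: the journal must hold all writes generated during the
    required rollback window, and only ~80% of the journal volume is available for
    tracking changes (per the note above). Simplified illustration only."""
    window_seconds = rollback_window_hours * 3600
    required_mb = change_rate_mb_s * window_seconds   # MB written during the window
    return (required_mb / usable_fraction) / 1024     # MB -> GB

# Example: 5 MB/s sustained change rate with a 24-hour rollback window.
print(f"{journal_capacity_gb(5, 24):.0f} GB")         # ~527 GB
```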


ViPR Controller 2.4 enhances the way a Consistency Group operates. For VPLEX systems, to ensure that all virtual volumes reach and remain at a consistent level, volumes from different backend storage arrays can be part of the same consistency group. For RecoverPoint, ViPR Controller can now process multiple provisioning requests against the same Consistency Group at the same time. For XtremIO, with support for XtremIO 4.0, volumes can be added to or deleted from a Consistency Group, and snapshots can be taken of, or deleted from, a Consistency Group.


As part of this release, enhancements have been made to the plug-ins that ViPR Controller works with. This table shows the enhancements related to the vRO workflow while there were no changes to Microsoft’s SCVMM and VMware’s vROps/vCOps. Let us look into how the workflow has been enhanced.


Prior to ViPR Controller 2.4, when the vRO administrator wanted to add the ViPR Controller configuration, the vRO configurator was used. While this was convenient, it also meant that every time something was updated in the ViPR Controller, the service needed to be restarted, which impacted the availability of the plug-in during the restart. With ViPR Controller 2.4, the ViPR Controller configuration is moved from the vRO configurator to the vRO workflow, and the configuration of tenants and projects moves to the vRO workflow as well. Because the configuration now lives in the workflow, there is no need to restart the service, and tenants and projects are configurable within the workflow. Both of these enhancements make the EMC ViPR Plug-in for vRO more efficient to use and minimize the need to restart the service.

For vRO users who have upgraded to ViPR Controller 2.4, a message appears indicating that the ViPR Controller configuration has been moved to the vRO workflow when the user accesses the VMware vRealize Orchestrator Configuration. Shown here is the vRealize Orchestrator management interface with the configuration folder selected to show the vRO workflow. Before vRO can be used, the administrator must decide to either proceed with the existing ViPR Controller configuration details or update the ViPR Controller configuration.
– By selecting to proceed with the existing ViPR Controller configuration, the plug-in continues to work; however, tenants, projects and virtual arrays will need to be added manually.
– By choosing to update the ViPR Controller configuration, the user is presented with a series of screens to input the ViPR Controller details before being able to use vRO.

Here's a demo of provisioning and decommissioning a volume from XtremIO 4.x using ViPR Controller 2.4:

 

EMC ProtectPoint Now Supports XtremIO

Wow, this has been high on my radar; one of the most anticipated integration points for XtremIO is almost available!

You wouldn’t drive a racecar without a helmet… and you shouldn’t sell XtremIO without data protection. But not just any data protection solution will do – XtremIO requires flash optimized data protection that can protect workloads at the speed of flash. ProtectPoint is the only solution in the industry that delivers this and now it supports XtremIO.

EMC ProtectPoint, an industry-first data protection solution that integrates primary storage with industry-leading protection storage, now supports XtremIO – the world's #1 all-flash array. ProtectPoint:

  • Enables up to 20x faster backup to meet application protection SLAs
  • Eliminates the backup impact on application servers
  • Reduces cost and complexity by eliminating excess infrastructure, including a traditional backup application

When buying an all flash array, performance is nonnegotiable and applications on your XtremIO contain some of your business’ most valuable data. So why leave things to chance when it comes to protecting that data? Ensure data protection doesn’t become a bottleneck by leveraging EMC’s flash optimized data protection solution – ProtectPoint for XtremIO.

The secret sauce of ProtectPoint is the combination of best-of-breed technology that only EMC could bring together. ProtectPoint combines three components: application integration to ensure an application-consistent backup; a change block tracking/data mover engine to minimize the infrastructure required and the impact on the network, storage and servers; and protection storage to ensure cost-effective retention and reliable recovery. The exact components may vary with different primary storage arrays, but ProtectPoint brings the same value no matter what.

By leveraging best-of-breed technology from across the EMC family, ProtectPoint:

  • Allows you to leverage common data protection infrastructure for multiple use cases (i.e. backup and replication)
  • Provides seamless integration between different parts of the EMC data center stack and scales easily with your environment
  • Drives faster innovation, because ProtectPoint is developed across multiple EMC teams (resulting in a broader ecosystem more quickly versus engineering for each storage array and application separately), and the broad investment across EMC exemplifies its strategic importance

Now, here’s a quick peek at the technology behind ProtectPoint with XtremIO.

  1. First is the agent installed on the application server – which could be the application agent (for Oracle, SAP or DB2) or the generic file system agent (for other applications). These agents provide an application-consistent backup without impacting the host, are designed to empower application owners to control their own backup and recovery, and eliminate the requirement for a traditional backup application.
  2. Next, the engine is powered by XtremIO and RecoverPoint technology (via a RecoverPoint appliance). This ensures that there is no impact to the production workloads on the XtremIO and that only unique blocks are sent to Data Domain – minimizing the time and bandwidth required to back up an application.
  3. Finally, the Data Domain system has the capability to ingest blocks directly from the XtremIO, deduplicate them and store them. But most importantly, even though the system only ingests unique blocks, it stores each backup as a FULL native image copy every time.

At a high level, this is how ProtectPoint works to back up directly from XtremIO to Data Domain. After an initial configuration by the storage administrator, which makes a point-in-time copy of the data to be protected and seeds the initial data on the Data Domain system, the environment is ready for its first full backup via ProtectPoint.

First… to trigger a ProtectPoint backup, an application owner, like an Oracle DBA, triggers a backup at an application consistent checkpoint. This pauses the application momentarily simply to mark the point in time for that backup.

Then… only unique blocks are sent from XtremIO directly to Data Domain. This is enabled by leveraging industry leading XtremIO virtual copy and RecoverPoint technology as the change block tracking engine, which efficiently tracks changes since the last backup.

Finally… the Data Domain system will ingest the unique data and use it to create an independent full backup in native format, which enables greatly simplified recovery.

With ProtectPoint, you do a full backup every time, but only send unique blocks, so the full comes at the cost of an incremental.

In addition, by leveraging RecoverPoint technology, ProtectPoint for XtremIO also offers a new type of backup – triggered by the infrastructure (RPA) to back up data without an application owner triggering it. Let's take a look at how that works:

First… the infrastructure (specifically the RecoverPoint appliance) triggers a backup based on a previously defined policy.

Then… only unique blocks are sent from XtremIO directly to Data Domain.

Finally… the Data Domain system will ingest and store the unique data and use it to create a full backup.

[Note: even though she did not trigger it, the app owner can see these backups]

Let’s take a look at how a recovery works with ProtectPoint for XtremIO – first we’ll review a traditional full recovery, which would be recovering an entire LUN.

First, the app owner will trigger the recovery …

Then, the app agent reads the backup image from the Data Domain system and pulls the appropriate data back over the network.

At which point, the RecoverPoint will replace the appropriate production LUN with the recovered copy.

Next let’s take a look at how a full recovery works if it’s done via the storage admin leveraging RecoverPoint, which enables change block tracking on recovery.

First, the storage admin will trigger the recovery …

Then, the RecoverPoint appliance reads the backup image from the Data Domain system and then pulls only the required data back over the network.

At which point, the RecoverPoint will replace the appropriate production LUN with the recovered copy.

This enables a full recovery at the speed of a granular recovery – it is very similar to the Avamar capability with VMware that supports changed block tracking on recovery.

In addition, you can also do a granular recovery with ProtectPoint via instant access. For an Oracle database this might be recovering a specific database, table or record – as opposed to the entire LUN.

First, the app owner or DBA triggers the recovery

Then, the application server connects to the backup image on the Data Domain – but the image doesn’t move off the system. This gives the DBA instant access to their protected data, although it is still on the Data Domain.

At this point, the app owner can recover the specific object she desires to the production database.

With ProtectPoint XtremIO integration, we get a few additional benefits by using RecoverPoint technology as the engine:

  • By leveraging a shared RecoverPoint appliance, ProtectPoint is a simple extension of our best-in-class replication offering, and customers can leverage their existing RecoverPoint appliances for ProtectPoint backups.
  • All replication and backup can be centrally managed via the RecoverPoint GUI for centralized, storage-consistent data protection. Alternatively, to empower app owners, day-to-day backup and recovery can be controlled by the DBA via their native utilities (i.e. RMAN).
  • By integrating with XtremIO virtual copies and leveraging a RecoverPoint appliance to do the processing, ProtectPoint ensures that the backup does not impact production workload performance on the XtremIO.

ProtectPoint offers integration at two layers – either native integration at the application layer or integration at the “file system” layer for applications that aren't supported with a native application agent. ProtectPoint currently natively integrates with three major applications and databases: Oracle (via RMAN), SAP (via BR*Tools) and IBM DB2 (via Advanced Copy Services).

Here's a good whiteboard discussion,

and another overview with a demo at the end of it.

EMC XtremIO Sessions At VMworld 2015 Europe


 

Wow! This year, we @ XtremIO have some really good sessions lined up at VMworld. I'm so happy that 3 out of the 4 sessions are actually coming from the solutions team, which I lead as part of the CTO role. The sessions will go really deep into the weeds, so if you are expecting the usual vendor marketing, look elsewhere.

EUC4879 – Horizon View Storage – Let’s Dive Deep!

Some people say that if you get the storage right in your VDI environment, everything else is easy! In this fun-filled technical workshop, attendees will receive a wealth of knowledge on one of the trickiest technical elements of a Horizon deployment. This session will cover storage architecture, planning, and operations in Horizon. The session will also profile a new, innovative reference architecture from the team at EMC XtremIO.

Wednesday, Oct 14, 5:00 PM – 6:00 PM – Hall 8.0, Room 21

My take on this session:
Michael has been at this for a long time; he has invested a lot of time in writing the VDI white paper, and with the rest of the team presenting at this session, you know it's a must if you are planning to do VDI this year. You can see some of the work that led to this session here: https://itzikr.wordpress.com/2015/06/26/the-evolution-of-vdi-part-1/

VAPP4916 – Virtualized Oracle On All-Flash: A Customer's Perspective on Database Performance and Operations

In the virtualized infrastructure, the new technology wave is all-flash arrays. But today all administrators (virtual, storage, DBA) need to know how changing an essential part of the virtual infrastructure impacts critical applications like Oracle databases. This joint customer and XtremIO presentation acts as a practical guide to using all-flash storage in a virtualized infrastructure. The emphasis will be on the value realized by a customer using all-flash, together with findings from third-party test reports by Principled Technologies. You will learn how all-flash storage is changing performance-intensive applications like virtualized databases.

Thursday, Oct 15, 9:00 AM – 10:00 AM – Hall 8.0, Room 38

My take on this session: Vinay and Sam are the go-to people when it comes to DBs; there's so much going on in the universe of XtremIO and databases that you'll be amazed how much you can learn just by attending this session!

VAPP5598 – Advanced SQL Server on vSphere

Microsoft SQL Server is one of the most widely deployed “apps” in the market today and is used as the database layer for a myriad of applications, ranging from departmental content repositories to large enterprise OLTP systems. Typical SQL Server workloads are somewhat trivial to virtualize; however, business critical SQL Servers require careful planning to satisfy performance, high availability, and disaster recovery requirements. It is the design of these business critical databases that will be the focus of this breakout session. You will learn how to build high-performance SQL Server virtual machines through proper resource allocation, database file management, and use of all-flash storage like XtremIO. You will also learn how to protect these critical systems using a combination of SQL Server and vSphere high availability features. For example, did you know you can vMotion shared-disk Windows Failover Cluster nodes? You can in vSphere 6! Finally, you will learn techniques for rapid deployment, backup, and recovery of SQL Server virtual machines using an all-flash array.

  • Scott Salyer – Director, Enterprise Application Architecture, VMware
  • Thursday, Oct 15, 1:30 PM – 2:30 PM – Hall 8.0, Room 24

    My take on this session: Wanda is a SQL guru; she could spend an entire day just explaining how to properly virtualize MS SQL with XtremIO and vSphere, and the foundation for this session is a white paper she's working on as well. I also know Scott from VMware, and I even had the pleasure of co-presenting with him on the same topic some years ago. Again, databases and XtremIO are like peanut butter and chocolate – do not miss it if you care about compression, performance, snapshots etc’!

VAPP6646-SPO – Best Practices for Running Virtualized Workloads on All-Flash Array

All-flash arrays are taking the storage industry by storm, and many customers are leveraging them for virtualizing their data centers through the use of EMC XtremIO. We will start by examining the reason this is happening and the similarities and differences between AFA architectures, and then go deep into some specific best practices for the following virtualized use cases: 1. Databases – can they benefit from being virtualized on an AFA? 2. EUC – how VDI 1.0 started, what VDI 3.0 means and how it applies to an AFA. 3. Generic workloads being migrated to AFAs.

My take on this session: what can I say, this is me I guess. But seriously, this session has two parts. The first one goes deep into the different types of AFA architectures, so if you are considering evaluating or buying one, I highly encourage you to attend. The second part goes into the dirty tricks that have to do with AFAs and vSphere AND, more importantly, how to overcome them; even if you are not an XtremIO customer, you will benefit from it as well.

  I hope to see you ALL at VMworld! Itzik

EMC AppSync 2.2 SP2 Is Out!

It's no secret that copy management plays a huge role with our customers. They have been looking for ways to utilize array-based snapshots without impacting the performance of the parent volumes for ages; the best proof is the fact that clones are still widely used by so many customers out there, and it's not because of “DR”, it's simply because customers are afraid of impacting their “crown jewels”, e.g. the primary DB.

The bigger issue than just the performance impact is the data growth (or should I say, explosion) due to copies; IDC predicts that more than 60% of the data we store is derived from copies as opposed to the primary data itself.

We have been shipping our unique snapshot mechanism, which we now call XVC (XtremIO Virtual Copies), since April 2014, and it has been highly successful with our customers; almost overnight, we started to see adoption of XtremIO for this practical use case.

Talking to our customers (and I do that a lot), there has always been one repeated piece of feedback in regards to snapshots: “we LOVE it, but we want a single engine that can automate, orchestrate and provide self-service for our application owners (mainly DBAs) to do it all from one place”.

So back in December 2014, we announced the first version of AppSync that integrates with XtremIO. I highly encourage you to first have a read about it here: https://itzikr.wordpress.com/2014/12/19/protecting-your-vms-with-emc-appsync-xtremio-snapshots/

EMC's AppSync enables Oracle / MS SQL database copy management automation to deliver a “self-service” experience to create, protect and recover databases in minutes, versus traditional database lifecycle management with manual or script-driven provisioning, protection and recovery that takes hours to days and requires a DBA and a storage administrator.

AppSync takes care of provisioning in three modes: On-demand, Scheduled and Expired. The first two are meant for provisioning, while the third relates to the removal of database copies that were created earlier. During the copy process, the replication operation is handled by the RecoverPoint, VMAX or VNX replication tools; you need to add the storage option or the RecoverPoint Appliance option to take advantage of the replication operation.

With the first option, we can take a copy of the database at our convenience, while with the second option we can schedule the operation without any DBA intervention.

The second option ensures that the performance of the production database is not impacted, since the database copies can be created during off-hours.

The third option purges the database copies created earlier and reclaims the capacity they used. Here we can also define a rotation policy so that the purging happens automatically.
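To illustrate what the expiration/rotation behaviour boils down to, here is a small conceptual sketch of a "keep the N most recent copies" policy. This is not AppSync code, just an illustration of the idea.

```python
from datetime import datetime

def copies_to_purge(copies, keep: int):
    """Given (name, created_at) pairs, return the copies to purge so that only the
    `keep` most recent remain -- the effect a rotation policy has on expired
    database copies (conceptual sketch only)."""
    ordered = sorted(copies, key=lambda c: c[1], reverse=True)
    return ordered[keep:]

copies = [
    ("sql-copy-mon", datetime(2015, 9, 7)),
    ("sql-copy-tue", datetime(2015, 9, 8)),
    ("sql-copy-wed", datetime(2015, 9, 9)),
]
print([name for name, _ in copies_to_purge(copies, keep=2)])   # ['sql-copy-mon']
```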

…it wasn't perfect; it lacked the ability to do an XtremIO XVC refresh / restore. Refresh is the use case that is so popular: imagine the following scenario. You have a production DB running on XtremIO; you then want to take a point-in-time copy of its volume(s) and present those to your test/dev environment, to run some analytics on them or maybe just to run your newly launched code against. You then want to repeat this process every night / week / month, basically refreshing your test/dev with the latest copy of your production DB volumes. This is what AppSync now supports: XtremIO XVC refresh / restore, plus support for RecoverPoint with XtremIO. It supports the following applications: Oracle, MS SQL, MS Exchange and vSphere.

Now let's have a more detailed explanation of all the features that AppSync 2.2 SP2 supports with XtremIO.

RESTORE SUPPORT FOR XTREMIO 4.0 COPIES

restore

In AppSync 2.2.2, users can restore copies created on XtremIO 4.0.
Copies created on XtremIO 3.x or earlier are still not restorable; the user needs to follow the manual steps in order to restore those copies.
If the XtremIO is upgraded from 3.x or earlier to XtremIO 4.0, the copies created on XtremIO 3.x or earlier become restorable.
As part of a restore, the XtremIO-created snapshots are kept under a new tag so that the admin can take appropriate action on these snapshots ( /Volumes/AppSyncSnapshots/RestoredSnapShots ).
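If you want to see what ended up under that tag after a restore, a query along the lines of the sketch below against the XtremIO REST API should do it. The endpoint, the "full=1" parameter and the "tag-list" property name are assumptions in the style of the v2 API; verify them against the XtremIO RESTful API Guide.

```python
import requests

XMS = "https://xms.example.com"        # assumed XMS address
AUTH = ("admin", "password")           # assumed credentials
RESTORE_TAG = "/Volumes/AppSyncSnapshots/RestoredSnapShots"

def snapshots_under_restore_tag():
    """List snapshot names carrying the post-restore tag so the admin can decide
    what to keep or delete (illustrative only; endpoint and fields are assumed)."""
    r = requests.get(f"{XMS}/api/json/v2/types/snapshots",
                     params={"full": 1}, auth=AUTH, verify=False)
    r.raise_for_status()
    snaps = r.json().get("snapshots", [])
    return [s["name"] for s in snaps if RESTORE_TAG in s.get("tag-list", [])]

print(snapshots_under_restore_tag())
```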


REFRESH SUPPORT FOR XTREMIO 4.0 COPIES

This is the crown jewel of the AppSync product, which now includes support for XtremIO. The “refresh” operation has many use cases; let's cover some of them.

refresh

The most common one is refreshing your test / dev DBs with the latest copy of your production environment at some interval – it can be once a night, once a week, etc., and it can refresh only some of your test / dev copies. The point is that now, for the first time, you can work on your test/dev environment without messing with your production instance, and all of this with a click of a button.


XtremIO makes the DBA's production maintenance easier (a minimal sketch of the first item follows this list):

  • Run DBCC against a snapshot of your database and move the CPU usage and locking off your production servers.
  • Verify your backup chain by restoring full, differential and log backups without using up extra space.
  • Make full backups from a snapshot of the database.
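As promised above, here is a minimal sketch of running DBCC CHECKDB against a snapshot copy that has been mounted to a secondary SQL Server. The driver, server name and database name are assumptions; the point is simply that the check runs somewhere other than production.

```python
import pyodbc

# Assumed connection details: a non-production SQL Server instance where the
# snapshot copy of the database has been mounted and attached.
CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=reporting-sql.example.com;Trusted_Connection=yes")

def run_dbcc_on_snapshot_copy(database: str) -> None:
    """Run DBCC CHECKDB against the snapshot copy, keeping the CPU and locking
    cost off the production server (illustrative sketch)."""
    with pyodbc.connect(CONN_STR, autocommit=True) as conn:
        cur = conn.cursor()
        # DBCC does not take bind parameters, so the name is embedded directly.
        cur.execute(f"DBCC CHECKDB ([{database}]) WITH NO_INFOMSGS, ALL_ERRORMSGS")
        # With NO_INFOMSGS, rows come back only when problems are found.
        if cur.description:
            for row in cur.fetchall():
                print(row)

run_dbcc_on_snapshot_copy("SalesDB_Copy")
```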

reporting

Quoting some customers of ours from EMC World:

“Our busiest season is in August, when students come back to school in waves and we learn which of the features we deployed over the summer don’t actually scale. This past August, we had the XtremIO in place, so our I/O speeds were great – so great that CPU on the production SQL cluster became our bottleneck.

We sat in the high 90s for weeks while developers scrambled to refactor queries.

To alleviate some of the CPU pressure, we leveraged XtremIO snapshots and offloaded some heavy real-time reports to another SQL box.

Another way to scale out is to deploy SQL Server AlwaysOn, and enjoy a transactionally consistent copy (or three) of your database without any space penalty thanks to inline deduplication. We’re deploying AlwaysOn this month with a copy of the database for each of our intensive workloads. Before XtremIO, my IT team would just have laughed at me if I’d asked for that many spindles. Now I can do it without buying anything.

Don’t have SQL Server Enterprise? Snap a copy of the database to another production server and point read-only workloads that can use point-in-time data at it. Refresh them every hour, every night or whatever makes sense for your application.”


Upgrading SQL Server is much safer with XtremIO’s snapshots.

  • Protect the database by snapping it prior to a SQL Server upgrade – if the upgrade is a disaster, you can point SQL Server at the snapshot and recover.
  • Test the upgrade on a snap of the database until you get all the scripting right.
  • Run through the upgrade on a snap and get a real sense for how long the upgrade will take.

Snapshots are also great for providing safety and predictability around application updates and deployments.

LENGTH OF VOLUME NAME IS INCREASED.

XtremIO 3.x and earlier had a volume name length restriction of 64 characters, and hence AppSync used to recommend that volume names be 40 characters in length (to support the repurposing use case and snapshots).

XtremIO 4.0 increases the volume name length limit to 128 characters, and hence AppSync now recommends that volume names be less than 104 characters (to support the repurposing use case and snapshots).
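A trivial pre-flight check along these lines can save a failed repurposing run later. The two limits come straight from the numbers above; reading the gap as headroom for AppSync's snapshot naming is my own interpretation.

```python
XTREMIO_40_NAME_LIMIT = 128      # XtremIO 4.0 volume name limit (characters)
APPSYNC_RECOMMENDED_MAX = 104    # AppSync recommendation, leaving headroom for suffixes

def check_volume_name(name: str) -> None:
    """Warn early if a volume name will cause trouble for AppSync copies."""
    if len(name) > XTREMIO_40_NAME_LIMIT:
        raise ValueError(f"'{name}' exceeds the XtremIO 4.0 limit of "
                         f"{XTREMIO_40_NAME_LIMIT} characters")
    if len(name) > APPSYNC_RECOMMENDED_MAX:
        print(f"Warning: '{name}' is longer than {APPSYNC_RECOMMENDED_MAX} characters; "
              "AppSync snapshot names derived from it may hit the array limit")

check_volume_name("Prod_Oracle_Data_01")   # fine, well under both limits
```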

MULTIPLE XTREMIO ARRAYS MANAGED BY A SINGLE XMS


AppSync 2.2.2 supports configurations where the user manages multiple XtremIO arrays under one XMS.



RecoverPoint SUPPORT FOR XTREMIO 4.0

From AppSync 2.2.2 onwards, users can create RecoverPoint bookmarks with XtremIO 4.0 in the backend.
Mixed storage (VNX to XtremIO and vice versa) is also supported, with both static and dynamic mount.
Please follow the RecoverPoint guide to configure RecoverPoint consistency groups with XtremIO in the backend.
RecoverPoint repurposing is also supported for XtremIO 4.0.
The minimum RecoverPoint version to be used is 4.1.2 in order to use XtremIO 4.0 in the backend.

Parallel bookmarks are not supported if any volume in the group set is on an XtremIO array. That means users can't create simultaneous bookmarks on two RecoverPoint Consistency Groups, so AppSync also doesn't protect two consistency groups together in the same Service Plan run. The recommendation is to have one Consistency Group protected per Service Plan run from AppSync if XtremIO 4.0 is in the backend.

The supported Image Access Mode for RecoverPoint XtremIO bookmarks is “Physical” when mounting RecoverPoint copies.

You can download AppSync from http://support.emc.com

You will have to install AppSync 2.1 and then upgrade to 2.2 SP1, followed by SP2. No external DB is needed; AppSync uses its own DB, so the installation is literally a “next->next, done” type of experience. The software is free for 90 days, so I highly encourage you to take it for a spin!

Below you can see a demo of AppSync with Oracle DB,

and you can also see a demo of AppSync refreshing your dev/test environments with the latest production copy when it runs against MS SQL on Microsoft Cluster Service (MSCS).

VSI 6.6.3 Is Here!

Hi,

Rolling on with the Q3 updates to XtremIO, I’m very happy to share with you an updated version of our VSI (Virtual Storage Integrator) vCenter plugin.

If you are new to VSI, I highly encourage you to read about it here first: https://itzikr.wordpress.com/2015/07/17/virtual-storage-integrator-6-6-is-almost-here/ – seriously, it's pretty much the glue between XtremIO and VMware vSphere (and many other things), and it's FREE!

OK, you're back? Cool! This 6.6 SP3 release contains the following improvements:

Take Snapshot

This feature helps the user take a snapshot of a qualified XtremIO datastore.

There is a dedicated menu item named “Take Snapshot” which is shown only for XtremIO-based datastores.

The user can take snapshots by right-clicking on an XtremIO datastore. On the right-click menu, choose All EMC VSI Plugin Actions > Take Snapshot.


The pop-up dialog shows a loading indicator during the back-end process.


With XtremIO 4.0, read-only snapshots are supported.


Take Snapshot by Scheduler (4.0 ONLY)

This feature helps the user take scheduled snapshots of a qualified XtremIO datastore.

There is a dedicated menu item named “Take Snapshot by Scheduler” which is shown only for XtremIO 4.0-based datastores.

The user can create a snapshot schedule by right-clicking on an XtremIO 4.0 datastore. On the right-click menu, choose All EMC VSI Plugin Actions > Create Snapshot Scheduler.

The pop-up dialog shows a loading indicator during the back-end process.


After loading is finished, the user can configure the scheduler as desired:


After pressing the ‘Submit’ button, the task is submitted automatically to the SIS server. The user can press the ‘OK’ button to close this dialog.

Managing Snapshot Scheduler

This feature enables the end user to manage XtremIO snapshot schedulers directly in the vSphere client. Management includes view, modify, remove and disable/enable. The snapshot schedulers are created on the XtremIO volume that is the backend of a qualified vSphere datastore.

NOTE: This is for XtremIO 4.0-based datastores only. No snapshot scheduler is shown for XtremIO 3.0-based datastores.

View Snapshot Scheduler

The user can click one qualified datastore, and then view the snapshot schedulers defined on the backend XtremIO volume on the storage side, under the “Scheduler Management” tab.


Modify/Remove/Disable/Enable Snapshot Scheduler

After viewing the snapshot schedulers, the end user can select one to modify, remove, suspend or resume.

By navigating to Datastores -> Manage -> XtremIO Management -> Scheduler Management, the user can manage the schedulers defined on an XtremIO volume.

001

Choose one from listed schedulers:

002


The user can choose “Suspend” to suspend or disable a scheduler, “Modify” to update a scheduler, and “Remove” to remove a scheduler. The label of the “Suspend” button varies based on the status of the selected scheduler: if it's an ongoing scheduler, the label is “Suspend”; otherwise, the label is “Resume”.

If you click the “Suspend” or “Resume” button, a dialog is shown to confirm the action. After pressing the “OK” button on this dialog, the process of suspending or resuming the scheduler is triggered; after its completion, the selected scheduler is suspended or resumed.

If you click the “Modify” button, a pop-up dialog lets the user edit the scheduler:

 003

Pressing the “Submit” button triggers the scheduler update process. After this process completes, the scheduler is updated:

004

View Snapshot for VM

This feature lists the details of the XtremIO snapshots (i.e. restorable points in time for the VM) in a table under the “XtremIO Management” tab after the user selects a qualified VM. Those snapshots are on the XtremIO volume that is the backend LUN of the datastore hosting the selected VM.


Restore VM from one PiT

This feature enables the user to select a Point in Time (PiT) and restore the VM to the corresponding state.

The user first selects a PiT and then presses the ‘Restore’ button. After pressing the ‘OK’ button on the confirmation dialog, a task is triggered underneath to restore the VM automatically.


The whole back-end process consists of several sub-tasks, as follows (a simplified sketch appears after the list):

  1. Take a writable snapshot (snap-B) based on the selected snapshot snap-A.
  2. Mount snap-B to the same host that hosts the target virtual machine vm-A; a new temporary datastore (datastore-B) is now ready.
  3. Search datastore-B for the VMX file matching that of vm-A.
  4. Register a new VM (vm-B) with the found VMX file.
  5. Check whether the available datastore capacity is sufficient:
    1. Calculate the VM size (size-1) on datastore-B.
    2. Calculate the available capacity (size-2) of the datastore (datastore-A) on which vm-A resides.
    3. Check whether size-2 > size-1; if true, continue; if false, go to step 12.
  6. Shut down vm-A.
  7. Take a snapshot (snap-C) of the volume that is mounted as datastore-A; snap-C can be considered a backup.
  8. Unregister vm-A.
  9. Clone vm-B to datastore-A, producing a new VM (vm-C).
  10. Power on vm-C.
  11. Destroy vm-A.
  12. Unmount datastore-B.
  13. Clean up snap-B.
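To make the ordering and the capacity check in the list above easier to follow, here is a toy, in-memory walk-through of the same flow. Everything here is a stand-in: the real plug-in drives XtremIO and vCenter, not Python objects, so read it purely as pseudocode of the sequence.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Datastore:
    name: str
    free_gb: int
    vms: List[str] = field(default_factory=list)

def restore_vm_from_pit(vm_name: str, vm_size_gb: int,
                        prod: Datastore, temp: Datastore) -> str:
    """Toy walk-through of the sub-tasks above (step numbers noted inline)."""
    restored = f"{vm_name}-restored"
    temp.vms.append(restored)        # steps 1-4: snap-B mounted as datastore-B,
                                     # matching VMX found and registered as vm-B
    # Step 5: only continue if the production datastore has room for the clone.
    if vm_size_gb <= prod.free_gb:
        prod.vms.remove(vm_name)     # steps 6-8: shut down, backup snap-C, unregister vm-A
        prod.vms.append(restored)    # steps 9-11: clone to datastore-A, power on, destroy vm-A
        prod.free_gb -= vm_size_gb
        outcome = f"{restored} now running on {prod.name}"
    else:
        outcome = "not enough free capacity; original VM left untouched"

    temp.vms.clear()                 # steps 12-13: unmount datastore-B, delete snap-B
    return outcome

prod = Datastore("datastore-A", free_gb=500, vms=["vm-A"])
temp = Datastore("datastore-B", free_gb=500)
print(restore_vm_from_pit("vm-A", 120, prod, temp))
```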

View Snapshot for Datastore

This feature lists the details of the XtremIO snapshots in a table under the “XtremIO Management” tab after the user selects a qualified datastore. Those snapshots are on the XtremIO volume that is the backend LUN of the selected datastore.

NOTE:
‘Qualified’ here means the datastore is based on one LUN, and that LUN is related to a single XtremIO volume. If an unqualified datastore is selected, the XtremIO Management tab is not visible.


Restore Datastore from Snapshot (XIOS 4.0 ONLY)

This feature enables the user to restore a datastore from a selected XtremIO snapshot.

NOTE: This is for XtremIO 4.0-based datastores only. If a snapshot is from XtremIO 3.0, the ‘Restore’ button is disabled.

The user first selects a snapshot and then presses the ‘Restore’ button. After pressing the ‘OK’ button on the confirmation dialog, a task is triggered underneath to restore the datastore automatically.


Assume the target snapshot (snap-A) used to restore the datastore (datastore-A) was specified in the front end. The underlying back-end flow is then:

  1. Shut down / power off all VMs on datastore-A.
  2. Take a snapshot (snap-B) of the volume (vol-A) that is mounted as datastore-A; snap-B can be considered a backup.
  3. Restore vol-A with snap-A, then remove the automatic backup snapshot.
  4. Handle the virtual machines on the restored datastore.
  5. Unregister virtual machines that no longer exist.
  6. Power on the existing VMs.

Mount Datastore from Snapshot

This feature is fully migrated from the existing VSI 6.6 feature set, now using HTML-based technology; the previous implementation has been removed from VSI.

When a snapshot is selected in the table, the Mount button is enabled to allow the user to mount the snapshot as a datastore.


When the Mount button is clicked, a configuration window pops up. At the beginning, the window tries to load all available hosts in the vCenter.

After loading is finished, all connected hosts are displayed in the drop-down list. The user needs to choose one host on which to mount the selected snapshot.

During the mount configuration, the user can cancel the operation by clicking the Cancel button or clicking the “x” on the top left of the window. Once the OK button is clicked, however, the task is submitted to the SIS server and the operation can no longer be canceled.

When the OK button is clicked, the whole web page is locked for processing. After the progress bar disappears, the user can monitor the task status under My Tasks in the Recent Tasks panel.

After all tasks finish, the user can find the newly mounted XtremIO datastore by going to Home > vCenter > Hosts and selecting the host chosen for the mount in the previous steps. In the panel to the right, the datastore can be found by clicking Related Objects > Datastores.

You can see all the “old” and the “new features” of the VSI plugin here: