Want Your vSphere 6.5 UNMAP Fixed? Download This Patch

 

Hi,

One of the biggest changes introduced in vSphere 6.5 was the ability to run automated UNMAP at the datastore level and inside the guest OS (for both Windows and Linux VMs that support the SCSI primitive).

Unfortunately, the GA release of vSphere 6.5 had a bug that prevented in-guest UNMAP from running properly. I wrote about it here:

https://itzikr.wordpress.com/2016/11/16/vsphere-6-5-unmap-improvements-with-dellemc-xtremio/

 

We've been working with VMware on resolving this issue (it's not a storage-array-specific issue), so everyone will benefit from the fix, and today (15/03/2017) the fix has been released!

 

https://my.vmware.com/group/vmware/patch#search

 

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2148980

 

“Tools in guest operating system might send unmap requests that are not aligned to the VMFS unmap granularity. Such requests are not passed to the storage array for space reclamation. In result, you might not be able to free space on the storage array.”
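If you want a quick sanity check once the patch is applied, you can trigger an in-guest reclaim manually and watch the freed space show up on the array. The commands below are the generic, well-known ones (nothing here is XtremIO- or patch-specific, and the datastore name is just an example):

    esxcli system version get                               # on the ESXi host, confirm the patched build
    esxcli storage vmfs reclaim config get -l MyDatastore   # VMFS-6 automatic unmap settings for a datastore
    sudo fstrim -v /                                        # inside a Linux guest with a thin, SPC-4 enabled virtual disk
    Optimize-Volume -DriveLetter C -ReTrim -Verbose         # inside a Windows guest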

 

So there you go, if you are looking for THE most compelling reason to upgrade to vSphere 6.5, this is it!

The First Dell EMC End To End VDI Reference Architecture

We have been working with the heritage Dell teams for the past several months to bring to market end-to-end solutions that are built entirely on the Dell Technologies stack.

I am happy to announce that the very first solution is now available for you to leverage. It is an end-to-end VDI solution built with Dell Wyse Thin/Zero clients as endpoints, Dell EMC PowerEdge servers, Dell EMC Networking switches for networking, Dell EMC XtremIO and Dell EMC Unity for storage, and VMware Horizon as the VDI platform.

This end-to-end VDI solution is currently available as a Reference Architecture. It showcases different scale points in terms of desktop/user densities, with XtremIO as the VM storage and Unity for the user shares. In the next phase, it will become a validated Nodes/Bundles solution, an orderable, pre-packaged offering that simplifies procurement for customers.


 

VMware Horizon

http://en.community.dell.com/techcenter/extras/m/mediagallery/20443602/download

 


Citrix XenDesktop

http://en.community.dell.com/techcenter/extras/m/mediagallery/20443457/download

 

You gotta love the powerhouse!

RecoverPoint 5.0 SP1 Is out, here’s what’s new for both the classic and the RP4VMs version, Part #1, Physical RecoverPoint

Hi

This is part 1 of what's new in the RecoverPoint 5.0 SP1 release, covering the classic (physical) version.

Partial Restore for XtremIO

  • An added option to the recover production flow for XtremIO
    • Available for DD since RP 5.0
  • Allows you to select which volumes to restore (previously, a restore of all CG volumes would have taken place)
  • During partial restore:
    • Transfer is paused to all replica copies, until the action completes
    • Only selected volumes are restored
    • At production – selected volumes would be set to NoAccess, non-selected volumes remain Accessible
    • Like current recover production behavior:
      • At replica journal – images newer than the recovery image are lost
      • After production resumes all volumes undergo short init

    Limitations

  • Partial restore from an image that does not contain all the volumes (when adding/removing RSets) will show the suitable volumes available for restore, but the restore might remove RSets which were added after the restore took place
    • Exactly the same as with full restore
  • Both production and the selected replica clusters must be running at least 5.0.1 to enable the capability

    GUI changes

    XtremIO Snapshot Expiration

  • In previous versions, snapshot deletion and manual retention policy assignment were not possible via RecoverPoint for XtremIO
  • Snapshot deletion allows the user to manually delete a PiT which corresponds to an XtremIO snap
    • Was only possible with DataDomain
  • The retention policy allows the user to set up expiration for a specific PiT

Applies only to XtremIO and DD copies

Snapshot deletion – GUI

Snapshot deletion – CLI

  • Admin CLI command delete_snapshot
  • delete_snapshot –help

DESCRIPTION: Deletes the specified snapshot.

PARAMETER(S):

group=<group name>

copy=<target copy name>

NOTES ON USAGE:

This command is relevant only for DataDomain and XtremIO copies and can only be run interactively

  • delete_snapshot – interactive mode

Enter the consistency group name

Enter the name of the copy at which the snapshot resides

Select the snapshot you want to delete
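To give you a feel for the flow, here is a rough sketch of an interactive run (the group and copy names are made up, and the exact prompts may vary slightly between builds):

    delete_snapshot
    Enter the consistency group name: CG_SQL_Prod
    Enter the name of the copy at which the snapshot resides: XtremIO_Remote_Copy
    Select the snapshot you want to delete: <pick the PiT from the displayed list>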

  • A user can delete any user or system snapshot without setting retention
  • The oldest PiT cannot be deleted
  • Deleting a snapshot is only possible when there are XtremIO volumes at the remote copy
  • When there are connectivity issues to the XMS, the system keeps trying to delete the snapshot; it will be deleted once there is proper connectivity
  • RecoverPoint blocks the deletion of any PiT which is currently used for a recovery operation
  • In order to perform snapshot deletion, both clusters must be running at least RP 5.0.1

    Snapshot Retention GUI option 1

    Snapshot retention – CLI

  • Admin CLI command set_single_snapshot_consolidation_policy
  • Can only be run interactively
  • set_single_snapshot_consolidation_policy – Interactive mode

    Enter the consistency group name

    Enter the name of the copy at which the snapshot resides

    Select the snapshot whose consolidation policy you want to set

    For the snapshot retention time, enter the number of units and the unit (<num>months | <num>weeks | <num>days | <num>hours | none) for the desired time.

    Enter consistency type: (Default is current value of ‘Crash consistent’)

    * None removes/cancels the retention of the snapshot
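Here is a rough sketch of such an interactive run (names and values are made up, and the exact prompts may vary slightly between builds):

    set_single_snapshot_consolidation_policy
    Enter the consistency group name: CG_SQL_Prod
    Enter the name of the copy at which the snapshot resides: XtremIO_Remote_Copy
    Select the snapshot whose consolidation policy you want to set: <pick the PiT from the displayed list>
    For the snapshot retention time: 3months
    Enter consistency type: Crash consistent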

  • The retention of any user or system snapshot can be changed, provided it resides on an XtremIO or DD array
  • Retention can be set in: hours, days, weeks and months.
    • The minimum is 1 hour
  • Changing the retention of a user bookmark will apply to all the replica copies.
  • If a PiT has a retention longer than the Protection Window, the PiT will last until its retention time is over and will not be deleted; in other words, it extends the PW
  • If a PiT has a retention shorter than the Protection Window, the PiT will last until its retention time is over and will then be treated as a regular PiT, which can be removed as part of the XtremIO-specific snapshot consolidation policy
  • In order to configure retention, all participating clusters must be running at least RP 5.0.1

    Manual Integrity Check for Snap-Based Replication

  • In previous versions, integrity check was available only for non-SBR (Snap-Based Replication) links.
    • In RP 5.0.1, manual Integrity Check can be performed on SBR links, this currently includes:
      • XtremIO
      • DataDomain
  • This enhanced capability enables an integrity check while the transfer status is “snap-idle”
  • Manual Integrity check for SBR links – CLI
  • Integrity Check is only available via CLI
  • Interactive Mode

  • Non-Interactive Mode

  • RPO can be impacted during an integrity check and should return to normal in the next snap-shipping cycle. This is general to all integrity check operations
  • Integrity Check events remain unchanged

    This release also supports ScaleIO in a heterogeneous environment!

RecoverPoint 5.0 SP1 Is out, here’s what’s new for both the classic and the RP4VMs version, Part 2, RP4VMs

Hi, this is part 2 of the post series covering what's new in RP4VMs 5.0 SP1. If you are looking for part 1, which covers the physical RecoverPoint, look here:

https://itzikr.wordpress.com/2017/02/22/recoverpoint-5-0-sp1-is-out-heres-whats-new-for-both-the-classic-and-the-rp4vms-version-part-1-physical-recoverpoint/

OK, so on to the new stuff in RP4VMs!

RE-IP – THE BASICS
RE-IP simplifications will apply to systems running RPVM 5.0.1 and later
Main purpose is to simplify the operation – no need for glue scripts
Configured completely in the UI/Plugin
Co-exists with the previous glue-script-based re-IP
Supports IPv4 or IPv6
Users can change IPs, Masks, Gateways, DNS servers

RE-IP IMPLEMENTATION

Supports major VM OSes: MS Windows Server versions 2016, 2012 and 2008, Windows 8 and 10, or Red Hat Linux server versions 6.x and 7.x, SUSE 12.x, Ubuntu 15.x and CentOS 7
Implementation is different between Windows-based and Linux-based VMs, but the outcome is the same
Configuration will be done with the RP VC Plugin
Requires VM Tools
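Since the whole re-IP mechanism relies on VMware Tools, it is worth verifying that Tools is actually running on the protected VMs before counting on it; a generic PowerCLI one-liner such as the following (not RP4VMs-specific) will do:

    Get-VM | Select-Object Name, @{N='ToolsStatus';E={$_.ExtensionData.Guest.ToolsStatus}}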

RE-IP CONFIGURATION
To use the new re-IP method, which is the default setting, the Network Configuration Method is set to Automatic
No need to fill in the “Adapter ID” with the Automatic re-IP option
Supports multiple NICs per VM
Configuration is on a per-copy basis, which allows flexibility in multi-copy replication
During the recovery wizard (Test Copy, Failover, Recover Production), use any of the 4 network options

Protection > Consistency Groups: expand the relevant consistency group, select the relevant copy, and click Edit Copy Network Configuration
Available after a VM has been protected

RE-IP CONSIDERATIONS
Adding/removing NICs requires reconfiguration
If performing a temporary failover (and setting the new copy as production), ensure that the network configuration of the former production copy is configured, to avoid losing its network configuration

MAC REPLICATION

Starting with 5.0.1, MAC replication to remote copies is enabled by default
By default, MAC replication to local copies is disabled
During the Protect VM wizard, the user has the option to enable MAC replication for copies residing on the same vCenter (local copies, and remote copies when RPVM clusters share the same vCenter)
This can create a MAC conflict if the VM is protected back within the same VC/network
Available for different networks and/or VCs hosting the local copy
When enabled, the production VM network configuration is also preserved (so there's no need to configure re-IP)

RPVM 5.0.1 MISC. CHANGES
Enhanced power-up boot sequence – the VM load-up indication is obtained from the OS itself (using VM Tools)
The “Critical” mechanism is still supported.
Enhanced detection of configuration conflicts for duplicate UUIDs of ESXi nodes
In an all-5.0.1 system, RPVM will use VMware's MoRef of the ESXi host and will generate a new unique identifier to replace the BIOS UUID usage
A generic warning will be displayed if an ESXi host has been added to an ESX cluster that is registered to RPVM, one of the entities (either a splitter or a cluster) is not 5.0 SP1, and the ESXi host has a duplicate BIOS UUID.
RPVM 5.0.1 adds support for vSphere 6.5 (including VSAN)
A new message was added to the Protect VM wizard when attempting to protect a VM in a non-registered ESX cluster
Minimal changes in Deployer to enhance the user experience and simplify the deployment flow, mainly around NIC topology selection

 

 

RP4VMs as part of DellEMC Hybrid Cloud (EHC)

The First End-To-End Dell VDI Reference Architecture

What an awesome thing it is to have an end-to-end infrastructure Reference Architecture for your VDI deployment, not just the storage part; that's easy, just add XtremIO.

This end-to-end VDI solution is currently available as a Reference Architecture. It showcases different scale points in terms of desktop/user densities, with XtremIO as the VM storage and Unity for the user shares. In the next phase, it will become a validated Nodes/Bundles solution, an orderable, pre-packaged offering that simplifies procurement for customers.

VMware Horizon RA Link: http://en.community.dell.com/techcenter/blueprints/blueprint_for_vdi/m/mediagallery/20442049/download

Citrix XenDesktop RA Link: http://en.community.dell.com/techcenter/extras/m/mediagallery/20443457/download

An Important APD/PDL Change in the XtremIO Behavior

Hi

We have recently released new target code for our arrays, known as XIOS 4.0.15.

This release contains many fixes, including a change in the APD/PDL behavior when using ESXi hosts. This change is based on feedback from many customers who asked us to tweak the array's APD/PDL behavior.

If you are new to this concept, I highly suggest you start reading about it here:

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2004684

And an older article from the ESXi 5 days here, https://blogs.vmware.com/vsphere/2011/08/all-path-down-apd-handling-in-50.html

Permanent Device Loss (PDL) is a condition where all paths to a device are marked as “Dead.” Because the storage adapter cannot communicate to the device, its state is “Lost Communication.” Similarly, All Paths Down (APD) is a condition where all paths to a device are also marked as “Dead.”
However, in this case, the storage adapter displays the state of the device as “Dead” or “Error.”
The purpose of differentiating PDL and APD in ESX 5.x and higher is to inform the operating system whether the paths are permanently or temporarily down. This affects whether or not ESX attempts to re-establish connectivity to the device.
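If you ever need to see which of the two states a host is currently reporting for a device, the standard ESXi command below will show it (the naa identifier is just a placeholder):

    esxcli storage core device list -d naa.514f0c5xxxxxxxxx
    # the "Status:" field in the output reflects the state the host currently sees for that device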

A new feature in XtremIO 4.0.15 allows the storage administrator to configure the array to not send “PDL” as a response to a planned device removal. By default, XtremIO configures all ESX initiators with the PDL setting.
In planned device removals where the cluster has stopped its services and there are no responses received by the ESX host for I/O requests, ESX Datastores will respond with PDL or APD behavior depending on the XtremIO setting.

An XMCLI command is used to enable all ESX initiators as “APD” or revert back to “PDL.” A new option is available for the modify-clusters-parameters admin-level command, which modifies various cluster parameters:
device-connectivity-mode=<apd, pdl>
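As a rough sketch, switching the behavior from the XMCLI would look something like the following (illustrative only; depending on your XMS setup you may also need to specify which cluster the command applies to):

    modify-clusters-parameters device-connectivity-mode=apd
    modify-clusters-parameters device-connectivity-mode=pdl    (to revert to the default behavior)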

In any case, if you have a new or existing XtremIO cluster and you are using ESXi hosts (who doesn't?), please set the initiator type to “ESX”.


Refer to the XtremIO Storage Array User Guide (https://support.emc.com/products/31111_XtremIO/Documentation/) or the help in XMCLI for more information on its usage.

The version that follows 4.0.15-20 will have APD turned on by default for newly installed arrays.

VSI 7.1 Is Here, vSphere 6.5 supported!

Hi,

We have just released version 7.1 of our vSphere vCenter plugin. If you are new to the VSI plugin, I highly suggest you start with these posts:

https://itzikr.wordpress.com/2015/09/21/vsi-6-6-3-is-here-grab-it-while-its-hot/

https://itzikr.wordpress.com/2016/03/31/vsi-6-8-is-here-heres-whats-new/

https://itzikr.wordpress.com/2016/10/04/vsi-7-0-is-here-vplex-is-included/

VSI enables VMware administrators to provision and manage the following EMC storage systems for VMware ESX/ESXi hosts:

  • EMC Unity™
  • EMC UnityVSA™
  • EMC ViPR® software-defined storage
  • EMC VMAX All Flash
  • EMC VMAX3™
  • EMC eNAS storage
  • EMC VNX® series storage
  • EMC VNXe1600™ and VNXe3200™ storage
  • EMC VPLEX® systems
  • EMC XtremIO® storage

Tasks that administrators can perform with VSI include storage provisioning, storage mapping, viewing information such as capacity utilization, and managing data protection systems. This release also supports EMC AppSync®, EMC RecoverPoint®, and EMC PowerPath®/VE.

New features and Changes

This release of VSI includes support for the following:

  • VMware vSphere Web Client version 6.5 tolerance
  • Multiple vCenter server IP addresses

dual-ip-vcenter

  • Restoring deleted virtual machines or datastores using AppSync software

restore-vm

restore-datastore

  • Space reclamation with no requirement to provide the ESXi host credential (vSphere Web Client 6.0 or later)

unmap-schedule-view

  • Viewing storage capacity metrics when provisioning VMFS datastores on EMC XtremIO storage systems

capacity-view

  • Enabling and disabling inline compression when creating VMFS datastores on EMC Unity storage systems version 4.1.0 or later
  • Space reclamation on Unity storage systems
  • Extending Unity VVol datastores
  • Viewing and deleting scheduled tasks, such as space reclamation, from the VSI plug-in
  • Enabling and disabling compression on NFS datastores on VMAX All Flash/VMAX3 eNAS devices
  • Viewing the compression property of a storage group when provisioning VMFS datastores on VMAX All Flash storage systems
  • Path management on Unity storage arrays using VMware NMP and EMC PowerPath/VE version 6.1 or later

You can download VSI 7.1 from here: https://download.emc.com/downloads/DL82021_VSI_for_VMware_vSphere_Web_Client_7.1.ova?source=OLS
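Since VSI ships as an appliance OVA, one convenient way to roll it out is with VMware's ovftool; the sketch below uses made-up vCenter, datastore and network names, so adjust it to your environment:

    ovftool --acceptAllEulas --name=VSI-7.1 --datastore=Datastore01 --network="VM Network" \
      VSI_for_VMware_vSphere_Web_Client_7.1.ova \
      "vi://administrator%40vsphere.local@vcenter.example.com/DC1/host/Cluster01"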

 

 

XtremIO Plugin for VMware vRealize Orchestrator 1.1.0 is out.

Hi

We have just released an update for our XtremIO vRO adapter. If you are new to the adapter, you can read about it here:

https://itzikr.wordpress.com/2016/08/26/vmworld-2016-introducing-the-emc-xtremio-vcenter-orchestrator-adaptor/

You can download the new version from here

https://download.emc.com/downloads/DL79358_XtremIO_1.1.0_plugin_for_VMware_vRealize_(vCenter)_Orchestrator.zip?source=OLS

And the documentation from here

https://support.emc.com/docu78902_XtremIO_Plugin_for_VMware_vRealize_Orchestrator_Installation_and_Configuration_Guide.pdf?language=en_US

https://support.emc.com/docu79427_XtremIO_Plugin_for_VMware_vRealize_Orchestrator_Workflows_User_Guide.pdf?language=en_US

VMware vRealize Orchestrator is an IT process automation tool that allows automated management and operational tasks across both VMware and third-party applications.
Note: VMware vRealize Orchestrator was formerly named VMware vCenter Orchestrator.
The XtremIO plugin for vRealize Orchestrator facilitates the automation and orchestration of tasks that involve the XtremIO Storage Array. It augments the capabilities of VMware's vRealize Orchestrator solution by providing access to XtremIO Storage Array-specific management workflows.
This plugin provides many built-in workflows and actions that can be used by a vRealize administrator directly, or for the construction of higher-level custom workflows to automate storage tasks. The XtremIO plugin for vRealize Orchestrator provides major workflow categories for storage configuration, provisioning, LUN and VM snapshot backup, and VM recovery.

Basic Workflow Definition
A basic workflow is a workflow that for the most part represents a discrete piece of XtremIO functionality, such as creating a Volume or mapping a Volume. The workflows in the management folders for Consistency Groups, Cluster, Initiator Groups, Protection Scheduler, Snapshot Set, Tag, Volume and XMS are all examples of basic workflows.

High-level Workflow Definition
A high-level workflow is a collection of basic workflows put together in such a way as to achieve a higher level of functionality. The workflows in the XtremIO Storage Management folder and the XtremIO VMware Storage Management folder are examples of high-level workflows. The Host Expose Storage workflow in the VMware Storage Management folder, for example, allows a user to create Volumes, create any necessary Initiator Groups and map those Volumes to a host, all from one workflow. All the input needed for this workflow is supplied prior to the calling of the first workflow in the chain of basic workflows that are called.
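These workflows are normally run from the vRO client, but like any vRO content they can also be triggered remotely through vRO's standard REST API. The sketch below is a generic example; the workflow ID, credentials and input parameter name are placeholders you would look up in your own environment (check the vRO REST API docs for your version):

    # find the workflow and note its id
    curl -k -u vrouser:password "https://vro.example.com:8281/vco/api/workflows?conditions=name=Host%20Expose%20Storage"
    # start an execution of that workflow with its input parameters
    curl -k -u vrouser:password -X POST -H "Content-Type: application/json" \
      -d '{"parameters":[{"name":"hostName","type":"string","value":{"string":{"value":"esx01.example.com"}}}]}' \
      "https://vro.example.com:8281/vco/api/workflows/<workflow-id>/executions"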

Accessing XtremIO Workflows

Expand the XtremIO folder to list folders containing specific XtremIO and VMware functionality. The following section provides information on each of the workflows.

XtremIO CG Management Folder

  • CG Add Volume: Adds one or more Volumes to a Consistency Group.
  • CG Copy: Creates a copy of the Volume(s) in the selected Consistency Group.
  • CG Create: Allows you to create one Consistency Group and optionally supply a Consistency Group Tag to the created Consistency Group.
  • CG Delete: Deletes the specified Consistency Group.
  • CG List: Lists all known Consistency Groups for a given cluster.
  • CG List Volumes: Lists all Volumes in a Consistency Group.
  • CG Protection: Creates a Protection of the Volume(s) in the selected Consistency Group.
  • CG Refresh: Refreshes a Consistency Group from the selected copy of the Consistency Group.
  • CG Remove Volume: Removes one or more Volumes from a Consistency Group.
  • CG Rename: Renames the specified Consistency Group.
  • CG Restore: Restores a Consistency Group from a selected CG Protection workflow.

XtremIO Cluster Management Folder

  • Cluster List: Lists the cluster(s) for a given XMS Server.

XtremIO IG Management Folder

  • IG Add Initiator: Adds an Initiator to an Initiator Group.
  • IG Create: Allows you to create one or more Initiator Groups and optionally supply an Initiator Group Tag to the created Initiator Groups.
  • IG Delete: Deletes one or more Initiator Groups.
  • IG List: Lists all Initiator Groups based upon the supplied input criteria.
  • IG List Initiators: Lists all Initiators for the supplied Initiator Group.
  • IG List Volumes: Lists Volumes for a given Initiator Group.
  • IG Remove Initiator: Removes an Initiator from an Initiator Group.
  • IG Rename: Renames a selected Initiator Group.
  • IG Show: Returns Initiator Group attributes as a formatted string for a given Initiator Group, a set of specified Initiator Groups, or all Initiator Groups (if none are supplied).

XtremIO Performance Management Folder

  • Cluster Report: Lists the cluster information shown below in one of three output formats (CSV, HTML, Dictionary/JSON), based upon the supplied input criteria.
    • Data Reduction Ratio: Clusters object field data-reduction-ratio-text
    • Compression Factor: Clusters object field compression-factor-text
    • Dedup Ratio: Clusters object field dedup-ratio-text
    • Physical Capacity: sum of SSD object fields ssd-size-in-kb * 1024 (in bytes)
    • Total Space: Clusters object field ud-ssd-space * 1024 (in bytes)
    • Space Consumed: Clusters object field ud-ssd-space-in-use * 1024 (in bytes)
    • Remaining Space: Total Space – Space Consumed * 1024 (in bytes)
    • Thin Provisioning Savings: 1 / Cluster field thin-provisioning-ratio
    • Overall Efficiency: 1 / (Cluster field thin-provisioning-ratio * data-reduction-ratio)
    • Performance: contains certain metrics for ascertaining overall cluster performance.
  • Datastore Report: Provides datastore performance information for a specific ESXi host.
  • Host Report: Provides performance information for ESXi Hosts.
  • IG Report: Provides performance information for Initiator Groups.
  • Workflow Usage Report: Returns usage count information for XtremIO Workflows that have been successfully invoked.

XtremIO Protection Scheduler Folder

  • Protection Scheduler Create: Creates a Protection Scheduler, which consists of the schedule name, optional Tag and associated input arguments.
  • Protection Scheduler Delete: Deletes a Protection Scheduler.
  • Protection Scheduler List: Lists the Protection Schedulers based upon the supplied input criteria.
  • Protection Scheduler Modify: Allows you to modify various attributes of a Protection Scheduler.
  • Protection Scheduler Resume: Resumes a suspended local Protection Scheduler.
  • Protection Scheduler Suspend: Suspends the action of a local Protection Scheduler.

XtremIO RecoverPoint Management Folder

  • RP Add: Adds a RecoverPoint Server to the list of available RecoverPoint Servers in the vRealize inventory, for use by the XtremIO RecoverPoint workflows.
  • RP Create CG: Creates and enables a new RecoverPoint Consistency Group from new and existing user and journal volumes.
  • RP Delete CG: Deletes a RecoverPoint Consistency Group but retains the user and journal storage.
  • RP Delete CG Storage: Deletes a RecoverPoint Consistency Group and the associated user and journal storage.
  • RP List: Produces a list of available RecoverPoint Servers that are present in the vRealize inventory.
  • RP List CGs: Produces a list of RecoverPoint Consistency Groups.
  • RP Modify CG: Allows modification of the RecoverPoint Consistency Group recovery point objective, compression and number of snapshots settings.
  • RP Remove: Removes a RecoverPoint server from the list of available RecoverPoint Servers in the vRealize inventory.

XtremIO Snapshot Set Management Folder

  • SnapshotSet Copy: Creates a copy of the Volumes in the selected Snapshot Set.
  • SnapshotSet Delete: Deletes a Snapshot Set.
  • SnapshotSet List: Returns a list of Snapshot Sets by name and/or Tag.
  • SnapshotSet Map: Maps a Snapshot Set to an Initiator Group.
  • SnapshotSet Protection: Creates a Protection of the Volumes from the selected Snapshot Set.
  • SnapshotSet Refresh: Refreshes a Snapshot Set from the selected copy.
  • SnapshotSet Rename: Renames a Snapshot Set.
  • SnapshotSet Unmap: Unmaps a Snapshot Set.


XtremIO Storage Management Folder

  • Datastore Delete Storage: This workflow either shuts down and deletes all VMs associated with the datastore, or disconnects all the secondary datastore VMDKs from the VMs utilizing that datastore. The workflow then unmounts the datastore as necessary from all ESXi hosts and deletes it. It then unmaps the XtremIO Volume associated with the datastore and deletes it. If the Volume to be deleted is also in an XtremIO Consistency Group, the Consistency Group is also deleted, if the group is empty (as a result of the Volume deletion). If the datastore selected for deletion is used for primary storage (i.e. VMDKs that are surfacing operating system disks, for example, c:\ for Windows), then select Yes to the question “Delete VMs associated with the datastore” prior to running the workflow. Note: Best practice for datastore usage is to use separate datastores for primary and secondary VM storage.
  • Datastore Expose Storage: Allows you to create or use an existing XtremIO Volume to provision a VMFS datastore. If an existing XtremIO Volume is used which is a copy or protection of an existing datastore, then the existing XtremIO Volume needs to be assigned a new signature prior to utilizing it. You can select to expose the storage to a single ESXi host or to all ESXi hosts in a given cluster. If selecting existing Volume(s), either select them individually or select a Consistency Group that contains the Volumes to use for the datastores.
  • VM Clone Storage: Makes a copy of the specified production datastores and connects those datastore copies to a set of specified test/development VMs. If the datastore to be cloned contains a hard disk representing the operating system disk, the VMDK is ignored when reconnecting the VMDKs to the test/development VMs. The datastores to be copied can either be selected individually or from a specified XtremIO Consistency Group. The workflow then makes a copy of the selected Volumes. It then clones the VM-to-VMDK relationships of the production datastores to the copied datastores for the selected test/development VMs. The VMs involved follow a specific naming convention for the production and test/development VMs. For example, if the production VM is called “Finance” then the secondary VM name must start with “Finance” followed by a suffix/delimiter, such as “Finance_001”. The production and test/development VMs must also be stored in separate folders. The workflow skips over the production VMs that do not have a matching VM in the test/development folder.
  • VM Delete Storage: Deletes the datastores containing the application data and the underlying XtremIO storage Volumes associated with these datastores. The VMs for which the datastores are deleted can be selected by choosing either a vCenter folder that the VMs reside in or by individually selecting the VMs. The VM itself can also be deleted by selecting Yes to the question “Delete VMs and remove all storage (No will only remove VMs secondary storage)”. If No is selected to the above question, the workflow unmounts the secondary datastore containing the application data from all hosts that are using the datastore prior to its deletion. The workflow then proceeds to unmap the XtremIO Volume associated with the datastore and then deletes it. If the Volume to be deleted is also in an XtremIO Consistency Group, the Consistency Group is also deleted, if the group is empty (as a result of the Volume deletion).
  • VM Expose Storage: Allows you to create or use an existing XtremIO Volume to provision a VMFS datastore and then provision either a VMDK (Virtual Machine Disk) or RDM (Raw Disk Mapping) to a virtual machine. This is accomplished by calling the workflows Datastore Expose Storage and VM Add Storage from this workflow.


XtremIO Tag Management Folder

  • Tag Apply: Applies a Tag to an object.
  • Tag Create: Creates a Tag for a given tag type.
  • Tag Delete: Deletes a list of supplied Tags.
  • Tag List: Lists Tags for a given Tag type.
  • Tag Remove: Removes a Tag from an object.
  • Tag Rename: Renames a Tag.


XtremIO VMware Storage Management Folder

  • Datastore Copy: Makes a copy of the underlying XtremIO Volume that the datastore is based on.
  • Datastore Create: Creates a VMFS datastore on a Fibre Channel, iSCSI or local SCSI disk.
  • Datastore Delete: Deletes the selected datastore. This workflow either shuts down and deletes all VMs associated with the datastore, or disconnects all the datastore VMDKs from all the VMs utilizing that datastore. The workflow then unmounts the datastore as necessary from all ESXi hosts and deletes it. It then unmaps the XtremIO Volume associated with the datastore and deletes it. If the datastore selected for deletion is used wholly or partly for primary storage (i.e. VMDKs that are surfacing operating system disks, for example, c:\ for Windows), then select Yes to the question “Delete VMs associated with the datastore” prior to running the workflow. Note: Best practice for datastore usage is to use separate datastores for primary and secondary VM storage.
  • Datastore Expand: Used to expand a datastore.
  • Datastore List: Returns the datastores known to the selected vCenter Server instance.
  • Datastore Mount: Mounts the given datastore onto the selected host, or all hosts associated with the datastore if a host is not specified. Note: If you select to mount the datastore on all hosts and the datastore fails to mount on at least one of the hosts, the info log file of the workflow reports the hosts that the datastore could not be mounted on. For convenience, this workflow allows you to select from pre-existing datastores that have already been mounted to ESXi hosts, or datastores that have been copied (but not yet previously mounted). Such datastores are resignatured as part of the mounting process.
  • Datastore Protection: Makes a protection (a read-only copy) of the underlying XtremIO Volume that the datastore is based on.
  • Datastore Reclaim Storage: Used to reclaim unused space in a datastore and also to delete the unused VMDK files.
  • Datastore Refresh: Refreshes a copy of the underlying XtremIO Volume that the datastore is based on. Once the datastore to refresh has been selected, a list of copies or protections from which to refresh is displayed. These are based on whether “Copy” or “Protection” was selected in the answer to “Restore Snapshot Type”. The list is displayed chronologically, with the most recent copy listed first. If a copy to refresh is not selected, the most recent copy is automatically refreshed. There is an option to provide a name for the Snapshot Set and suffix that will be created.
  • Datastore Restore: Restores the underlying XtremIO Volume that the datastore is based on, from a copy that was created earlier. Once the datastore to restore has been selected, a list of copies or protections from which to restore is displayed. These are based on whether “Copy” or “Protection” was selected in the answer to “Restore Snapshot Type”. The list is displayed chronologically, with the most recent copy listed first. If a copy to restore is not selected, the most recent copy is automatically used.
  • Datastore Show: Returns information about the supplied datastore:
    • AvailableDisksByHost: List of hosts and available disks that can be used in creating a datastore.
    • Capacity: Datastore's total capacity in bytes.
    • Expandable: True if the datastore is expandable.
    • ExpansionSpace: Space in bytes that is left for expansion of the datastore.
    • Extendable: True if the datastore can be extended.
    • FreeSpace: Datastore's free capacity in bytes.
    • Hosts: List of hosts known to the datastore.
    • Name: Name of the datastore.
    • RDMs: List of RDMs known to the datastore.
    • UnusedVMDKs: List of VMDKs that are not in use by a VM.
    • VMs: List of VMs known to the datastore.
    • VMDKs: List of VMDKs residing in the datastore and in use by a VM.
  • Datastore Show Storage: Returns the XtremIO Volumes, Consistency Groups and Snapshot Sets that make up the datastore. If the workflow runs successfully but none of the output variables are populated, check the info log for messages such as: [YYYY-MM-DD HH:MM:SS] [I] Unable to find one or more corresponding Volumes to naa-names — possibly unregistered, unreachable XMSServers, or non-XtremIO Volumes. In addition to ensuring that all of the XMS Servers have been added to the vRO inventory via the XMS Add workflow, it is strongly recommended to also run the Host Conformance workflow. This ensures that all your XtremIO Volumes that are visible to ESXi hosts have their corresponding XMS Server known to vRealize.
  • Datastore Unmount: Unmounts the given datastore from the specified host, or from all hosts associated with the datastore if a host is not specified.
  • Host Add SSH Connection: Adds the connection information for a given ESXi host into the vRealize inventory. When running the workflow Host Modify Settings you must select the SSH Connection to use in order for the workflow to connect to the ESXi host and run properly. Note: When supplying a value for the field “The host name of the SSH Host”, ensure this is a fully qualified domain name or IP address that allows the workflow to connect to the ESXi host via TCP/IP.
  • Host Conformance: Checks either a single ESXi host or all ESXi hosts in a vCenter instance, to ensure that all the XtremIO Volumes that are mapped to ESXi hosts have all of the necessary XMS Servers connected to this vRealize instance. Once the report is run, it provides a list of Volumes that are conformant (have an associated XMS Server instance present for those Volumes) and those Volumes that are not conformant (do not have an associated XMS Server present for those Volumes). For the Volumes that are found to be not conformant, the XMS Add workflow should be run so that XtremIO vRealize workflows can work properly with these Volume(s).
  • Host Delete Storage: Allows you to delete a list of selected Volumes mounted to a host. The Volumes can be specified by individual name, Consistency Group or Snapshot Set. You can supply a value for all three fields or just a subset of them. As long as one Volume name, Consistency Group name or Snapshot Set name is supplied, the workflow can be run. The selected Volumes are unmapped and deleted, and the corresponding Consistency Group is deleted (if all the Volumes have been removed).
  • Host Expose Storage: Exposes new or existing XtremIO Volumes to a standalone host or VMware ESXi host. Select either a list of Volumes, or a Consistency Group representing a list of Volumes, to be exposed to the host. If new Volumes are being created, you also have the option of creating a new Consistency Group to put the Volumes into.
  • Host List: Lists the hosts for a given vCenter.
  • Host Modify Settings: Modifies the settings for a given ESXi server. This workflow requires that an SSH Host Instance be set up for the ESXi host to be checked, prior to running this workflow. In order to set up this SSH Host Instance, run the workflow Host Add SSH Connection.
  • Host Remove SSH Connection: Removes an SSH Host configuration entry from the vRealize inventory.
  • Host Rescan: Rescans one or all hosts of a given vCenter.
  • Host Show: This workflow returns: WWNs for the given vCenter host; available disks for the given vCenter host that can be used to create new datastores (the output variable availableDisks can be used as input to the Datastore Create workflow input parameter diskName); and in-use disks for the given vCenter host.
  • vCenter Add: Adds a vCenter instance to the vRealize inventory.
  • vCenter Cluster Delete Storage: Deletes all unused XtremIO storage Volumes for each ESXi host in a vCenter Cluster. This workflow exits with an error when an XMS Server cannot be located for any of the unused XtremIO storage Volumes that are to be removed.
  • vCenter Cluster Expose Storage: Exposes new or existing XtremIO Volumes to the ESXi hosts in a VMware Cluster.
  • vCenter List: Lists the vCenter instances known to the vRealize inventory.
  • vCenter List Clusters: Returns a list of vCenter Cluster objects for the supplied vCenter.
  • vCenter Remove: Removes a vCenter instance from the vRealize inventory.
  • vCenter Show: Returns a list of vCenter Hosts for a given Cluster object. For each host, a list of available disks is also returned.
  • VM Add VMDK: Provisions to a VM either a new VMDK, a new RDM, or a list of existing, unused VMDKs. If the storage type selected is VMDK, then a datastore must be selected for the VMDK. If the storage type selected is RDM, then a datastore does not need to be selected in order to create the RDM.
  • VM Delete: Deletes a VM and releases resources, such as the primary storage allocated to the VM. Note: This workflow should be used to remove the primary storage, “Hard disk 1”, associated with a VM. Primary storage cannot be removed until the VM is shut down and deleted. Secondary VMDK and RDM storage are preserved. Use the VM Delete Storage workflow to delete all of the following: the VM, the datastore associated with the VM, and the underlying XtremIO storage making up that VM.
  • VM List: Lists the VMs for a given vCenter or for a given vCenter host.
  • VM Remove VMDK: Removes and deletes an RDM or VMDK from a VM. Note: This workflow cannot be used to remove the primary storage, “Hard disk 1” (the VM's primary storage drive), as this requires the VM to be shut down and deleted prior to reclaiming the storage for “Hard disk 1”.
  • VM Show: Lists the VMDKs, RDMs and RDM Physical LUN names for each RDM for a given VM.


XtremIO Volume Management Folder

  • Volume Copy: Allows you to create one or more copies, supplying a list of source Volumes or a list of Volume Tags.
  • Volume Create: Allows you to create one or more Volumes and optionally supply a Volume Tag to the created Volumes. For each Volume created, if at the cluster level the parameter show-clusters-thresholds shows that vaai-tp-alerts is set to a value of between 1-100, then the add-volume vaai-tp-alerts parameter is set to enabled.
  • Volume Delete: Deletes the list of selected Volumes, or the Volumes associated with a particular Volume Tag.
  • Volume Expand: Expands a Volume to the new size supplied.
  • Volume List: Lists Volumes by name, size and Tag for a given cluster.
  • Volume Map: Allows you to map Volumes by Name or by Tag for the supplied Initiator Group.
  • Volume Modify: Allows particular Volume parameters to be changed. The current Volume parameter that can be changed is vaai-tp-alerts (VAAI Thin Provisioning alert). The selected Volumes can either be set to participate in, or be disabled from participation in, thin provisioning threshold alerts. At the cluster level, the thin provisioning alert level must be set to a value of between 1-100 in order for a Volume to generate an alert. The syntax is as follows: vaai-tp-alerts=[enabled / disabled]
  • Volume Protection: Allows you to create one or more Protections, supplying a list of source Volumes or a list of Volume Tags.
  • Volume Refresh: Refreshes a Volume from the selected copy.
  • Volume Rename: Renames the selected Volume.
  • Volume Restore: Restores a Volume from the selected Protection.
  • Volume Show: Returns Volume parameters as a formatted string for a given Volume, a set of specified Volumes, or all Volumes (if none are specified). The values for naa-name, logical-space-in-use and vaai-tp-alerts are returned for the specified Volumes. The naa-name parameter represents the NAA Identifier that is assigned to a Volume only after it has been mapped to an Initiator Group. The logical-space-in-use parameter is the Volume logical space in use (VSG). The vaai-tp-alerts parameter controls whether a Volume participates in thin provisioning alerts; the value is either enabled or disabled. At the cluster level, the thin provisioning alert level must be set to a value between 1-100 in order for a Volume to generate an alert, regardless of whether an individual Volume has vaai-tp-alerts enabled.
  • Volume Unmap: Unmaps Volumes in one of the following ways: unmaps a user-supplied list of Volumes; unmaps all Volumes associated with a particular Initiator Group; unmaps a selected list of Volumes associated with a particular Initiator Group; unmaps all Volumes associated with a particular Tag.


XtremIO XMS Management Folder

  • XMS Add: Adds an XMS server to the vRealize Inventory.
  • XMS List: Lists the known XMS servers in the vRealize Inventory.
  • XMS Remove: Removes an XMS server from the vRealize Inventory.
  • XMS Show: Returns XMS attributes as a formatted string for a given XMS server. The xms-ip, server-name and cluster-names attributes are always returned for a given XMS Server. The sw-version is also returned by default, but can be removed from the list of returned attributes if that information is not required.

 

 

EMC Storage Integrator (ESI) 5.0 Is Out

Hi,

We have just released version 5.0 of the ESI plugin. For those of you who aren't familiar with the plugin, it allows you to manage physical Windows, Hyper-V, or even Linux hosts from a storage perspective: things like volume provisioning, snapshot taking and so on. It's similar to our VSI plugin for vSphere, but in this case for everything else.

Here’s the supported systems matrix

From an XtremIO perspective, this release brings support for snapshot refresh using XtremIO tagging.