vSphere 6.5 UNMAP Improvements With DellEMC XtremIO

Two years ago, my manager at the time and I visited VMware HQ in Palo Alto with one item on the agenda: the UNMAP functionality in vSphere. The title of the presentation I gave was “small problems lead to big problems”, and it had a photo similar to the one above. The point we were trying to make is that the more customers adopt AFAs whose unused capacity VMware does not release back, the bigger the problem gets, because at a $-per-GB level, every GB matters. They got the point, and we ended the conversation with a promise to ship them an XtremIO X-Brick so they could develop automated UNMAP on XtremIO, something the greater good would benefit from as well, not just XtremIO.

If you are new to the UNMAP “stuff”, I encourage you to read the posts I wrote on the matter:

https://itzikr.wordpress.com/2014/04/30/the-case-for-sparse_se/

https://itzikr.wordpress.com/2013/06/21/vsphere-5-5-where-is-my-space-reclaim-command/

https://itzikr.wordpress.com/2015/04/28/vsphere-6-guest-unmap-support-or-the-case-for-sparse-se-part-2/

The day has come.

VMware has just released vSphere 6.5, which includes enhanced UNMAP functionality at both the volume (datastore) level and the in-guest level. Let’s examine both.

  1. Volume (datastore) level

    Using the UI, you can now enable automated UNMAP at the datastore level and set it to either “Low” or “None”. “Low” basically means that once in a 12-hour interval, the ESXi crawler will run space reclamation at the datastore level; however, you can set different priorities using the ESXi CLI as shown below.
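    As a sketch, the reclamation settings of a VMFS 6 datastore can be inspected and changed with esxcli from the host (the datastore label below is a placeholder, not from the post):

    ```shell
    # Show the current automatic UNMAP settings for a VMFS 6 datastore
    esxcli storage vmfs reclaim config get --volume-label=XtremIO_DS01

    # Change the reclamation priority from the CLI (the UI only exposes None/Low)
    esxcli storage vmfs reclaim config set --volume-label=XtremIO_DS01 --reclaim-priority=high
    ```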

    Why can’t you set it to “High” in the UI? I assume that since space reclamation is relatively IO-heavy, VMware wants you to make sure your storage array can actually cope with the load, and the CLI is less visible than the UI itself. Note that you can still run an ad hoc space reclamation at the datastore level like you could in vSphere 5.5/6.0; running it manually will finish quicker but is the heaviest option.

    If you DO choose to run it manually, the best practice for XtremIO is to set the chunk size used for the reclamation to 20000, as seen below.
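    A minimal sketch of the manual, ad hoc reclamation with the XtremIO-recommended chunk size (the datastore label is a placeholder):

    ```shell
    # Manually reclaim dead space on a datastore, 20000 blocks per UNMAP iteration
    esxcli storage vmfs unmap --volume-label=XtremIO_DS01 --reclaim-unit=20000
    ```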

  2. In-Guest space reclamation

    In vSphere 6.0, you could already run “defrag” (drive optimization) at the Windows level; Windows Server 2012, 2012 R2 and 2016 were supported as long as you set the VMDK to “thin”, enabled the option at the ESXi host,

    and ran the latest VM hardware version, which in vSphere 6.0 was “11” and in vSphere 6.5 is “13”, as seen below.
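    The host-level switch referred to above is the /VMFS3/EnableBlockDelete advanced setting; a sketch of checking and enabling it via esxcli:

    ```shell
    # Check whether in-guest UNMAP processing is enabled on the host (0 = off, 1 = on)
    esxcli system settings advanced list --option /VMFS3/EnableBlockDelete

    # Enable it so guest-issued UNMAPs are passed through to the array
    esxcli system settings advanced set --int-value 1 --option /VMFS3/EnableBlockDelete
    ```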

    So, what’s new?

    Linux! In the past, VMware didn’t support the SPC-4 standard, which is required to enable space reclamation inside the Linux guest OS. Now, with vSphere 6.5, SPC-4 is fully supported, so you can run space reclamation inside the Linux OS either manually from the CLI or via a cron job. To check that the Linux OS does indeed see the device as supporting space reclamation, run the “sg_vpd” command as seen below and look for the LBPU: 1 output.
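    As a sketch (the device name is an example, adjust it to your VM’s disk):

    ```shell
    # Query the Logical Block Provisioning VPD page of the virtual disk
    sg_vpd --page=lbpv /dev/sdb
    # In the output, "Unmap command supported (LBPU): 1" means UNMAP is available
    ```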

    Running the sg_inq command will show whether SPC-4 is actually exposed to the Linux OS.
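    A quick sketch, again with an example device name:

    ```shell
    # Standard INQUIRY; check the reported SCSI version / version descriptors for SPC-4
    sg_inq /dev/sdb
    ```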

    To trigger space reclamation inside the Linux guest OS, simply delete files with the “rm” command; yes, it’s that simple.
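    A sketch of that flow, assuming the filesystem is mounted with the discard option so deletes are translated into UNMAPs as they happen (without it, a manual fstrim pass reclaims free space on demand; the device and mount point are placeholders):

    ```shell
    # Mount with online discard so rm issues UNMAPs automatically
    mount -o discard /dev/sdb1 /mnt/data
    rm /mnt/data/large_file.bin

    # Alternatively, reclaim all free space in one pass
    fstrim -v /mnt/data
    ```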

    You can see the entire flow in the following demo that was recorded by Tomer Nahum from our QA team, thanks Tomer!

P.S

Note that at the time of writing this post, we have identified an issue with Windows guest OS space reclamation; AFAIK, it doesn’t work with many (all?) of the arrays out there, and we are working with VMware to resolve it. Also note that you must use the full web client (NOT the H5 client) when formatting the VMFS 6 datastore; it appears that the embedded H5 client doesn’t align the volume to the right offset.


Naa.xxxx01 is created via the web client, naa.xxxx02 is created via the embedded H5 client.

=== 15/03/2017 Update ===

VMware has released the fix for the in-guest UNMAP command:

https://itzikr.wordpress.com/2017/03/15/want-your-vsphere-6-5-unmap-fixed-download-this-patch/


4 comments

  1. Nice write up… Trying this with FreeNAS it works as advertised as well, with one problem. Both fstrim and discard throw errors, even though it appears all or most blocks are freed.

    [ 155.964397] sd 0:0:0:0: [sda] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
    [ 155.964403] sd 0:0:0:0: [sda] Sense Key : Illegal Request [current]
    [ 155.964410] sd 0:0:0:0: [sda] Add. Sense: Invalid field in parameter list
    [ 155.964414] sd 0:0:0:0: [sda] CDB: Unmap/Read sub-channel 42 00 00 00 00 00 00 00 18 00
    [ 155.964418] blk_update_request: critical target error, dev sda, sector 18132992
    [ 155.988823] XFS (dm-2): discard failed for extent [0x12,75731], error -5

  2. Regarding above, I actually formatted using the H5 client… just realizing this.. hoping that is what is causing problems.

    1. Hi James, if it’s a Windows-based VM, I’m afraid the VMware bug still exists; open a support case with them and let me know how it goes..
