Monthly Archives: February 2012

Upcoming customer roundtable about Citrix XenDesktop running on EMC VNX & Cisco UCS

 

Hi,

Regarded as one of the premier health care systems in the nation, Baptist Memorial Health Care (BMHC) is an award-winning network of hospitals dedicated to providing compassionate, high-quality care for patients. With 14 affiliate hospitals throughout the Mid-South, Baptist combines convenience with premium care.

To effectively manage and administer 14,000 physical desktops with limited IT resources, Baptist Memorial turned to EMC VNX unified storage and Citrix XenDesktop for desktop virtualization. Baptist Memorial has initially deployed 135 virtual desktops and expects that number to grow to 3,000 by the end of 2012, with the goal of giving healthcare providers throughout its network efficient access to electronic medical records using McKesson's Practice Partner application.
Join us to learn more about how Baptist Memorial became an IT innovator in the healthcare industry using the EMC VNX and Citrix XenDesktop solution.

Presenters:

* Charles Rosse, Systems Administrator 2, BMHC
* Bruce Brinson, Senior Alliances & Business Development Manager, EMC
Topics:

* Joint Certified Solutions: EMC solutions deliver seamless integration, management, and optimization in Citrix XenDesktop environments
* Citrix Integration: Efficiencies, time savings, and agility enabled through centralized management and tight integration between VNX and Citrix XenDesktop
* Cost Savings: BMHC expects to save over $3 million through desktop and server virtualization
* Improved Performance: FAST Cache accelerates performance and mitigates VDI boot storms using Flash drives

Register by clicking on the picture below:

image


VFCache Installation Video

image

Hi,

Recently we released the first version of VFCache.

Both Chad ( http://virtualgeek.typepad.com/virtual_geek/2012/02/vfcache-hello-world-and-covers-come-off-project-thunder.html ) and Chuck ( http://chucksblog.emc.com/chucks_blog/2012/02/vfcache-means-very-fast-cache-indeed.html ) covered at length the purpose of this Flash card and its associated software component, but if this is the first time you're hearing about it, here's the background: EMC's architectural approach is to leverage the right technology to get the right data to the right place at the right time and cost. To accomplish this, EMC has developed its FAST array-based software, which automates the movement and placement of data across storage resources as needs change over time, optimizing applications and lowering costs.

VFCache is EMC’s newest intelligent software technology that extends FAST into the server. The caching optimization within VFCache automatically adapts to changing workloads by determining which data is most frequently referenced and placing it in the server Flash cache. VFCache adds another tier of intelligence and performance to the I/O stack, providing even greater efficiency. When coupled with FAST, VFCache creates the most efficient and intelligent I/O path from the application to the data store. With both technologies, EMC provides an end-to-end tiering solution to optimize application capacity and performance from the server to the storage. As a result of the VFCache intelligence, a copy of the “hottest” data automatically resides on the PCIe card in the server for maximum speed. As the data slowly ages and cools, FAST automatically moves the data to the appropriate tier of the storage array over time.

 

This graphic shows the VFCache architecture. As shown, the PCIe card is installed into an available PCIe Gen 2 slot inside the server, and the VFCache driver is installed as an I/O filter driver in the operating environment.

Version 1 of VFCache works with any rack server in Microsoft Windows, Red Hat Linux, and VMware environments.

image

 

Here is an example of the effect that VFCache can have on your application. This shows a typical use case, a 1.2 TB Oracle Database application, before and after VFCache was implemented.

In this example, VFCache was able to increase throughput by 2.3 times, simultaneously reducing response time by 50 percent.

The results you will achieve will vary depending upon the read/write ratio and read hit rate of your specific application. This particular workload had a typical Oracle read/write ratio of about 70/30 and a read hit rate of about 80 percent.
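As a rough back-of-the-envelope illustration of why the read hit rate matters (using hypothetical latency figures, not numbers from this test): if 80 percent of reads are served from server Flash at, say, 0.2 ms, while the remaining 20 percent still go to the array at, say, 8 ms, the effective read latency drops dramatically:

```
# Hypothetical figures for illustration only: 0.2 ms per server-Flash hit, 8 ms per array read.
# Effective read latency = hit_rate * t_flash + (1 - hit_rate) * t_array
echo "0.8 * 0.2 + 0.2 * 8" | bc -l   # => 1.76 ms effective, vs. 8 ms with no cache
```

Since version 1 of VFCache is a write-through cache, writes still see array latency, which is why a read-heavy 70/30 ratio benefits more than a write-heavy one.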

image

 

So how does VFCache fit into the larger EMC portfolio? As previously mentioned, think of it as extending EMC FAST technology into the server.

Now you can leverage VFCache for the data requiring the very best performance. The hottest of the hot data resides on Flash sitting in the server, closer to the application, delivering the most IOPS and the shortest response time. Hot data resides on Flash drives in the array, helping you reduce storage cost per I/O. As the data ages, it sits on Fibre Channel or SAS drives. Lastly, the coldest data lives on high-capacity SATA or nearline SAS drives to drive costs down.

This is a story that no one else in the industry can tell. It’s about putting the right data on the right storage at the right time—all through automation. It helps your applications reach the performance they need, at the cost you desire, and the protection level your business demands.

image

Attached below is the "how-to" installation video; I hope you find it useful!


VAAI–Some Updates

Hi,

Lately I had to deal with a customer case where one of the VAAI primitives was thought not to be working, so I decided to shed some more light on the issue.

Chad already covered this at length on his blog in the following posts:

http://virtualgeek.typepad.com/virtual_geek/2011/07/3-pieces-of-bad-news-and-a-deep-dive-into-apd.html and http://virtualgeek.typepad.com/virtual_geek/2011/09/vnx-and-vnxe-updates-and-vaai-hotfixes.html

But there have been some updates and corrections since those posts were written, so here they are. As always, please make sure you get the most up-to-date information from your EMC rep, as things tend to change over time (this post was written in February 2012).

VMAX

vSphere 4.1

We support the three VAAI primitives:

- Hardware-accelerated Full Copy

- Hardware-accelerated Block Zero

- Hardware-assisted Locking
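If you want to verify (or toggle) these primitives on an ESX/ESXi 4.1 host, the standard vSphere advanced settings are the place to look; a quick sketch from the host console (1 = enabled, 0 = disabled):

```
# Check the state of the three VAAI primitives on an ESX/ESXi 4.1 host
esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove   # Full Copy (XCOPY)
esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit   # Block Zero (WRITE SAME)
esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking    # Hardware-assisted locking (ATS)
```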

There is a caveat for the XCOPY primitive, as taken from the doc that has been publicly available since June 2011:

image

Note that this document explains at length the impact of VAAI when using a VMAX array, so it's a very good document (and not too long) to read.

image

Basically,

“XCOPY on meta devices lays out the protection bitmaps in the host based cylinder format. All legacy sessions lay out the bitmaps in internal member cylinder format. As a result, the two session types cannot coexist. SRDF uses a different mechanism so this compatibility issue does not apply”

- Environments running Enginuity 5875.135.91, 5875.139.93, or 5875.198.148 and using VAAI should upgrade to target version Enginuity 5875.231.172. Contact your EMC Service Representative and quote solution ID emc263675.

This issue is resolved in ESX/ESXi 4.1 Patch 03:

- VMware ESX 4.1, Patch Release ESX410-201107001
For more information, see VMware ESX 4.1, Patch Release ESX410-201107001 (2000612).

- VMware ESXi 4.1, Patch Release ESXi410-201107001
For more information, see VMware ESXi 4.1, Patch Release ESXi410-201107001 (2000613).

- VMware ESXi 5.0, Patch Release ESXi500-201109401-BG
For more information, see VMware ESXi 5.0 Patch ESXi500-201109401-BG: Updates esx-base (1027808).

Roadmap: there are no plans to "fix" this, and the engineering department never received such a request, so if you think it's a must, you can file a feature request.

vSphere 5

We support the other new primitive (stun and resume), but the "space reclaim" one has been identified by VMware as not working, so VMware decided to withdraw support and ask customers to disable it on the ESX hosts ( http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2007427 ).

The VMAX array already rejects it, so there is no need to reject it on the hosts.
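For reference, the host-side disable that VMware describes in that KB is a single advanced setting; a minimal sketch for an ESXi 5 host (harmless against a VMAX, since the array rejects the command anyway, but relevant for the VNX case further down):

```
# Disable automatic space reclaim (UNMAP) on an ESXi 5.0 host, per VMware KB 2007427
esxcli system settings advanced set --int-value 0 --option /VMFS3/EnableBlockDelete
# Confirm the setting took effect
esxcli system settings advanced list --option /VMFS3/EnableBlockDelete
```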

VNX

vSphere 4.1

We support the three primitives in all cases, but the XCOPY one may actually work slower than the ESX software copy (depending on the customer's environment and workload), so we recommend customers do one of the following:

1. Leave things as they are if they are happy with the performance.

2. Disable Hardware-accelerated Full Copy on the hosts if the VNX copy is working slower than the ESX one (see the sketch after this list).

3. Apply for a special VNX enabler that will bring VNX XCOPY to the same speed as the ESX one (ask EMC customer support).

The customer must be running a minimum version of VNX OE for Block v05.31.000.5.502.
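For option 2, disabling Full Copy host-side uses the same advanced-settings mechanism shown earlier; a sketch for an ESX/ESXi 4.1 host:

```
# Disable the Full Copy (XCOPY) primitive on an ESX/ESXi 4.1 host
esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedMove
# Re-enable it later with:
# esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedMove
```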

You can read more about VAAI coverage in vSphere 4.1 with EMC VNX systems by clicking the link below:

image

Roadmap: a future VNX release will contain this enabler as an integral part of the release (among other very important VAAI enhancements).

vSphere 5

We support NFS VAAI via a plugin that has to be installed on the ESXi hosts; you can read more about it by clicking on the image below.

image
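The plugin installation itself is a standard ESXi 5 VIB install; a sketch, where the bundle path and filename below are just placeholders for whatever version you download from EMC:

```
# Install the EMC NAS VAAI plugin on an ESXi 5 host (bundle path/name is a placeholder)
esxcli software vib install -d /vmfs/volumes/datastore1/EMCNasPlugin.zip
# Reboot the host, then confirm the plugin is listed
esxcli software vib list | grep -i nas
```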

We support the other primitive (stun and resume), but as with the VMAX, the "space reclaim" one has been identified by VMware as not working, so VMware decided to withdraw support and ask customers to disable it on the ESX hosts.

The VNX array doesn't reject it, so there IS a need to reject it on the hosts (using the same EnableBlockDelete setting shown above).
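To see which primitives a given device actually reports as supported on an ESXi 5 host, including the Delete (UNMAP) status discussed here, you can query it directly; the naa identifier below is a placeholder:

```
# Show VAAI primitive support (ATS, Clone, Zero, Delete) as reported per device
esxcli storage core device vaai status get
# Or for a single device (substitute your own naa ID)
esxcli storage core device vaai status get -d naa.6006016012345678
```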

RECOVERPOINT

General notes:

- Support and best practices for RecoverPoint apply on top of the support and best practices for the arrays themselves.

- Roadmap: the focus is on the array splitters, with support for the rest of the VAAI commands coming in future versions.

vSphere 4.1

With the VNX/CX splitter, we support all commands:

- HW Assist Locking since RecoverPoint 3.3 SP1 and R30 (4.30.000.5.509 and up)

- WriteSame since RecoverPoint 3.4, R31

- XCOPY since RecoverPoint 3.4, R31

With the Symmetrix splitter (on VMAXe), we support:

- HW Assist locking since RecoverPoint 3.4 SP1 and Rhine microcode

Intelligent fabric splitters (Brocade and Cisco) reject all VAAI commands (since Cisco SSI 4.2.3k and 5.0.4j; RecoverPoint 3.4).

vSphere 5

We support the thin provisioning “Stun” commands with the VNX/CX and Symmetrix splitters, since RecoverPoint 3.4 SP1.

CLARiiON

vSphere 4.1

All three VAAI APIs are supported.

vSphere 5 / 5 update 1

We do not support any of the APIs, as they didn't pass the speed enhancements required for VAAI; technically speaking, they will work. We are working with VMware to try and certify the ATS API as supported.
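If you'd rather not leave unsupported primitives active against a CLARiiON, all three can be switched off host-side on ESXi 5; a sketch:

```
# Disable the three block VAAI primitives on an ESXi 5 host
esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedMove   # Full Copy
esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedInit   # Block Zero
esxcli system settings advanced set --int-value 0 --option /VMFS3/HardwareAcceleratedLocking    # ATS
```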

27/03/2012 Update:

VMware has released vSphere 5.0 Update 1, and one of its features is UNMAP, also known as "Space Reclaim"; it's now a manual step.

Here's the current support matrix for our arrays in regard to this API:

VNX – YES

VMAX – NO
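The manual reclaim itself runs per datastore with vmkfstools; a sketch, where the datastore name and the 60 percent figure are just examples:

```
# Reclaim dead space manually on a thin-provisioned VMFS datastore (vSphere 5.0 U1)
cd /vmfs/volumes/MyDatastore   # datastore name is a placeholder
vmkfstools -y 60               # reclaim up to 60% of the free space (example value)
```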
