EMC ScaleIO 1.31 – VMware Kernel “Inside”


Wow, I’m really happy: the ScaleIO team at EMC has just GA’d version 1.31 of the product. It contains many new features and improvements, but the big news is the direct kernel integration of the “SDC” component into the ESXi kernel. From an alliance point of view, that shows the power of the federation; for the techies, it will provide a huge boost in performance, which IMHO proves why a kernel driver for software-defined products like VSAN, and now ScaleIO, is such a big deal!

Version 1.31 also adds several features that apply to all platforms, including improved
security, SNMP support, and updates to integration features such as the REST API and OpenStack support.

VMware Enhancements Overview

ScaleIO 1.31 introduces several changes in the VMware environment.
First, it provides the option to install the SDC natively in the ESX kernel instead of using the
SVM to host the SDC component. The V1.31 SDC driver for ESX is VMware PVSP certified, and
requires a host acceptance level of “PartnerSupported” or lower in the ESX hosts. Note that
the SVM will continue to be required for the MDM and the SDS components.
Next, 1.31 includes improvements in the installation process. Specifically the plugin
registration process has been simplified. In addition, deployment using the plugin
automatically creates a separate ScaleIO VM to host the Installation Manager and ScaleIO
gateway. This will allow for future version upgrades using Installation Manager in the VMware
environment.
Lastly, additional management capabilities are offered in the plugin. These include: multiple
volume creation in a single step, adding SDS or SDC to an SVM after initial deployment, and
migrating the SDC from the SVM into the ESX kernel.

Integrated SDC in ESX

With Version 1.31, it becomes possible to install the SDC for an ESX server directly in the ESX
kernel. The implementation uses a vSphere Installation Bundle or VIB file, which is included
with the 1.31 software distribution. Relative to the SVM-based SDC, the VIB-based SDC
provides improved performance with increased IOPs, increased bandwidth capability, and
reduced latency. With the SDC running natively in ESX, there is no longer a requirement to
use iSCSI to present ScaleIO volumes from the SDCs to the ESX hosts. It is possible to
migrate the SDC from the SVM into the ESX kernel. The plugin provides a feature to perform
this operation.
Note that the ESX version of SDC requires VMware version 5.5. For prior versions of VMware,
the SVM-based SDC must be deployed.

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2096169
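As a rough sketch, installing the SDC VIB manually on an ESXi 5.5 host would look something like the following from the host shell. The bundle path and filename below are placeholders (the actual file ships with the 1.31 distribution); the `esxcli` subcommands themselves are standard VMware commands.

```shell
# Lower the host acceptance level so the PartnerSupported-signed VIB can be installed
esxcli software acceptance set --level=PartnerSupported

# Install the ScaleIO SDC driver from the offline bundle
# (path and filename here are illustrative)
esxcli software vib install -d /tmp/scaleio-sdc-1.31-esx5.5.zip

# Verify the VIB is present; reboot the host if the install output asks for it
esxcli software vib list | grep -i scaleio
```

In practice the plugin’s migration feature drives this for you; the manual steps are shown only to make the mechanism concrete.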

 

ScaleIO 1.31 VMware Deployment

 

VMware Cluster Upgrade Path

(1.30 to 1.31)

Upgrading to 1.31 in VMware uses the same strategy as the physical cluster upgrade for Linux
hosts. The only difference is that in version 1.30 and earlier, all ScaleIO components in
VMware – i.e. the MDM, tie-breaker, SDS, SDC, and the Lightweight Installation Agent (LIA) –
run inside virtual machines called ScaleIO Virtual Machines (SVMs).
These components are upgraded in place, inside the SVMs, using the Installation Manager.
The 1.31 plugin can be used during deployment to bring up the Installation Manager in a VM
for this purpose.
After this portion of the upgrade is complete, the SDCs may be migrated from SVMs into the
ESX kernel. This applies to vSphere 5.5 only. The 1.31 plugin provides functionality to
perform this step directly from the vSphere Web Client.

Upgrading a ScaleIO 1.30 cluster to 1.31 in a VMware environment

 

EMC ScaleIO 1.31 Volume Provisioning in VMware

 

Restricted Remote Access

With Version 1.31, the ScaleIO administrator can restrict configuration changes to local
clients on the primary MDM host only. When enabled, this restriction applies to remote
clients on any host other than the primary MDM, and to clients of all types, including the
CLI, GUI, and REST clients.
When restricted access is enabled, local clients (that is, clients on the primary MDM host)
are not affected: all operations permitted for the logged-in user’s role can be executed. For
remote clients, only monitoring commands (ScaleIO commands permitted for the “Monitor”
role) can be executed, regardless of the logged-in user’s role.
Shown here are the new commands to query whether the remote MDM access restriction is in
effect, and to enable or disable it. Let’s look at some examples of these commands in action,
next.


From a login session on the MDM, it is possible to enable the restriction using the CLI
command --set_remote_read_only_limit_state. After this command is executed by the
MDM, users may continue to log in to this ScaleIO cluster from remote hosts, regardless of
their role.
However, they will not be able to change the state of the system. For example, they cannot
create a new volume. They can continue to perform monitoring operations, for example
querying all volumes in the system.
The same logic applies to access via other management interfaces for ScaleIO. Regardless of
user role, after remote restriction is activated, the GUI running on a remote host will report a
failure when that user attempts to make a change to the system.
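A sketch of what this flow might look like from the primary MDM host. Only the set command name comes from the release notes; the query command name and the flag syntax are my assumptions, so check `scli --help` on a 1.31 system before relying on them:

```shell
# Log in to the MDM (username and credentials are placeholders)
scli --login --username admin

# Check whether the remote access restriction is currently in effect
# (query command name assumed from the feature description)
scli --query_remote_read_only_limit_state

# Enable the restriction: remote clients are then limited to Monitor-role commands
scli --set_remote_read_only_limit_state --remote_read_only_limit_state enabled
```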

Restricted SDC Access

For improved access control, Version 1.31 introduces a feature to restrict access to ScaleIO
volumes to an explicit set of SDCs. This is referred to as the restricted SDC access feature.
When the restriction is in place, the administrator is required to explicitly grant access to an
SDC before any volume-related operations specific to that SDC can be performed – for
example, mapping volumes to the SDC.
Several new commands have been implemented to support this feature, including
query_restricted_sdc_mode and set_restricted_sdc_mode to examine and set the state of
restriction, and add_sdc to grant SDC access. Note that add_sdc can be run for a host before
the SDC is installed on it.
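Putting those commands together, a hedged sketch of enabling the restriction and pre-approving an SDC. The command names are the ones listed above; the flag names and the IP address are assumptions for illustration:

```shell
# Check whether restricted SDC mode is currently enabled
scli --query_restricted_sdc_mode

# Turn the restriction on (flag syntax assumed)
scli --set_restricted_sdc_mode --restricted_sdc_mode enabled

# Pre-approve an SDC by IP before any volumes can be mapped to it
# (the IP is a placeholder; add_sdc may be run before the SDC is installed)
scli --add_sdc --sdc_ip 192.168.1.10
```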

 

Read RAM Cache Setting per Volume

With prior versions, the SDS RAM cache setting applied to an entire Storage Pool.
Version 1.31 introduces the capability to specify whether the RAM cache should be used for a
given volume within a Storage Pool. Note that by default, when a ScaleIO volume is created,
the RAM cache is used for that volume.
To support this feature, the existing commands --add_volume and --query_volume have
been enhanced, and the command --set_volume_rmcache_usage has been added.
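For illustration, creating a volume and then turning its RAM cache usage off might look like this. The pool, domain, and volume names are placeholders, and the cache flag is an assumption; consult the 1.31 CLI reference for the exact syntax:

```shell
# Create a 16 GB volume; RAM cache usage is on by default
scli --add_volume --protection_domain_name pd1 --storage_pool_name sp1 \
     --size_gb 16 --volume_name vol1

# Inspect the volume, including its RAM cache setting
scli --query_volume --volume_name vol1

# Disable the SDS RAM read cache for this one volume (flag name assumed)
scli --set_volume_rmcache_usage --volume_name vol1 --dont_use_rmcache
```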

 

SNMP Trap Overview

ScaleIO 1.31 can be configured for SNMP trap generation, enabling monitoring using
third-party management tools that can receive SNMP traps.
By default, SNMP traps are not generated. The ScaleIO gateway needs to be configured to
activate traps.
ScaleIO traps are associated with events and severity levels. Each associated event includes a
code that specifies a remedial action for that event.
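Since traps are off by default and are enabled at the gateway, the configuration amounts to a small edit to the gateway’s properties file. A hedged sketch follows; the property names below are purely illustrative, so take the exact keys from the 1.31 user guide:

```
# gatewayUser.properties on the ScaleIO gateway VM
# (property names illustrative; see the 1.31 user guide)
features.enable_snmp=true
snmp.traps_receiver_ip=10.0.0.50
snmp.sampling_frequency=30
```

A restart of the gateway service is typically needed for the change to take effect.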

Platform and Integration Enhancements

 

With Version 1.31, ScaleIO adds support for Red Hat Enterprise Linux and CentOS version 7.0.
The REST API has been updated to accommodate all the added and changed commands.
In addition, the ScaleIO OpenStack driver package has been enhanced for compatibility with
the Juno release of OpenStack. In particular, ScaleIO 1.31 can report pool-level statistics to
OpenStack. This enables the Juno version to make an intelligent decision when provisioning
volumes, by selecting a suitable Storage Pool.
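For a taste of the updated REST API, the usual ScaleIO pattern is to fetch a token from the gateway and reuse it as the password on subsequent calls. A sketch follows; the hostname, credentials, and the StoragePool endpoint path are assumptions to verify against the 1.31 REST API reference:

```shell
# Authenticate against the gateway; the response body is a quoted token
TOKEN=$(curl -sk --user admin:Password1 https://scaleio-gw/api/login | tr -d '"')

# Reuse the token as the password, e.g. to list Storage Pool instances
curl -sk --user "admin:${TOKEN}" https://scaleio-gw/api/types/StoragePool/instances
```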
