Dell EMC AppSync 4.0 Is Here.

Copy Data Management (CDM) is exploding. Analysts predict that most of the data in your data centre is derived from copies of production data, and as such, we are continuing to invest in this field.

Readers of my blog know that I’m a big fan of AppSync, which lets you copy, restore, and repurpose your data with direct integration into the Dell EMC storage products, so now is a great time to walk through the 4.0 version of AppSync, which we have just released.

AppSync is software that enables Integrated Copy Data Management (iCDM) with Dell EMC’s primary storage systems.

AppSync simplifies and automates the process of generating and consuming copies of production data. By abstracting the underlying storage and replication technologies, and through deep application integration, AppSync empowers application owners to satisfy copy demand for operational recovery and data repurposing on their own. In turn, storage administrators need only be concerned with initial setup and policy management, resulting in an agile, frictionless environment.

AppSync automatically discovers application databases, learns the database structure, and maps it through the Virtualization Layer to the underlying storage LUN. It then orchestrates all the activities required from copy creation and validation through mounting at the target host and launching or recovering the application. Supported workflows also include refresh, expire, and restore production.

New Simplified HTML5 GUI

In AppSync 4.0, we have completely overhauled the UI, which is now based on HTML5.

Below you can see a video showing how to add XtremIO and PowerMax arrays and run a discovery on the vCenter and on the application host that is running a database.

And below you can see a demo showing the options associated with creating (or subscribing to) a service plan.

Metro Re-purposing

  1. Select Copy Management -> Copies -> SQL Server -> DB instance -> User Databases -> “metro” database (most options are greyed out until a checkbox is checked).

  2. Check the box next to the “metro” database instance; the actions become available. Click “Create Copy with Plan”.

  3. Select “Data Re-purposing” and click “Next”.

  4. Select the options you desire and click “Next”:
     • The name is auto-generated from the database name, the current date/time, and “1.1”.
     • Copy location – select either local or remote to choose on which side the copy is stored. Note that only local restores are possible with SRDF/Metro.
     • Mount options are the same as with normal Service Plans.
     • 2nd Generation Copies – select “Yes” to create the 2nd-gen copy at the same time as the 1st-gen copy.

  5. You will notice the new “Array Selection”: AppSync recognizes “Metro 1” and the associated array serial numbers. The array you select to the right of Metro 1 is where the copy is taken. Click “Next”.

Deeper Integration with Dell EMC Storage Platforms

VMAX3 and later: Unisphere for PowerMax REST API platform

  • All workflows previously supported with the SMI-S provider for VMAX3, VMAX All Flash, and PowerMax now utilize the Unisphere for PowerMax (U4P) REST API.
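
To give a feel for what driving the U4P REST API from automation looks like, here is a minimal Python sketch using the requests library. The host name, credentials, and the version endpoint path are assumptions for illustration; the actual resource paths depend on your Unisphere for PowerMax version (Dell also publishes the PyU4V Python library that wraps this API), so treat this as a starting point rather than AppSync’s own implementation.

```python
import requests

# Assumed lab values - replace with your own Unisphere for PowerMax host/credentials.
U4P_HOST = "https://unisphere.example.com:8443"
USERNAME, PASSWORD = "smc", "smc"

session = requests.Session()
session.auth = (USERNAME, PASSWORD)
session.headers.update({"Accept": "application/json"})
session.verify = False  # lab only; point this at a CA bundle in production

# Assumed endpoint: query the REST API version to confirm connectivity.
resp = session.get(f"{U4P_HOST}/univmax/restapi/version")
resp.raise_for_status()
print("Unisphere REST API version:", resp.json())
```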


Support Storage Class Memory on PowerMax

• Support Service Level Biasing for PowerMax with SCM (Storage Class Memory)

– With the Foxtail PowerMax release, arrays are able to support NVMe SCM drives and NAND flash drives.

– AppSync 4.0 now allows users to set a “Diamond” SLO on PowerMax arrays after adding SCM drives

Application Integration Improvements

• Ability to restore to any RAC node

– Previously, AppSync required restoring to the same node the copy was originally created on

– This enhancement supports environments where the source node may have gone offline, such as during a disaster or outage

• During the restore process, the user will be able to select the specific RAC node to restore to

You can download AppSync 4.0 by clicking the screenshot below

And as always, the documentation can be found here

https://www.dell.com/support/home/us/en/19/product-support/product/appsync/docs#sort=relevancy&f:versionFacet=[49124]&f:lang=[en]

Dell EMC At Storage Field Day (SFD) 19 – Unstructured Data

Store, manage and protect unstructured data with efficiency and massive scalability.

Dell EMC Isilon is the industry’s #1 family of scale-out network-attached storage systems, designed for demanding enterprise file workloads, with a choice of all-flash, hybrid, and archive NAS platforms. As such, we gave a few sessions as part of Storage Field Day 19 in Santa Clara, CA on January 23, 2020.


Dell EMC Project Nautilus Introduction

Ted Schachter, Sr. Advisor, Product Management, introduces Dell EMC Project Nautilus. Dell EMC customers need the ability to capture and analyze fast data from live sensors in their manufacturing and prototyping phases, move it to long term storage, and analyze petabytes of historical data to gain deeper insights from an interconnected platform. They believe that Project Nautilus is the answer, and introduce the Storage Field Day audience to this real-time analytics platform.

 

Dell EMC Isilon’s Answer to Unstructured Data in the Cloud


Kaushik Ghosh, Director, Product Management, and Callan Fox, Consultant, Product Management, present Dell EMC Isilon in the cloud. As unstructured data grows, organizations need to utilize the cloud more than ever. With only approximately 2% of these organizations able to take advantage of it, they discuss why the top three cloud providers came to Dell EMC to help customers get their file data into the cloud. They provide the audience with an overview of the Isilon unstructured data offerings for public cloud, including a preview of their Azure cloud announcement.

Dell EMC Isilon’s Answer to Infrastructure and Data Insights


Kaushik Ghosh, Director, Product Management, and Callan Fox, Consultant, Product Management, introduce Dell EMC Isilon CloudIQ and ClarityNow. These software tools put the insights of the storage and the data in the right hands. These technologies help customers gain user-friendly summaries of the health of their data center, streamlining administrative tasks and alleviating bottlenecks at the Isilon array. ClarityNow, a recent acquisition, gives customers direct insight into their data location, value, and usage.

What Next for Unstructured Data Solutions at Dell EMC?

Kaushik Ghosh, Director, Product Management, discusses the vision for unstructured data from Dell EMC.

Dell EMC At Storage Field Day (SFD) 19 – Automation

We are investing a LOT when it comes to automation and container integration across our storage products. As such, we gave a few sessions as part of Storage Field Day 19 in Santa Clara, CA on January 23, 2020.

The Evolution of Applications and the Need for Better Tools with Dell EMC

Paul Martin, Senior Principal Engineer, and Audrius Stripeikis, Product Manager, present DevOps for storage with Dell EMC. As one of the leading IT infrastructure providers, Dell Technologies is at the forefront of the pressures that technology and economy put on application environments inside and outside the datacenter. In this video, the presenters review the role changes that affect personnel and the new use cases appearing for both developers and administrators. They also show how automation tools improve speed and productivity.

Power Tools and Enablers for Dell EMC Storage – for Programmers

Paul Martin, Senior Principal Engineer, and Audrius Stripeikis, Product Manager, present “power tools” for programmers interacting with Dell EMC storage. Dell Technologies has chosen tools for programmers and administrators to provide capabilities and consistency for using Dell EMC storage products in modern application environments. Programmatic interfaces are the historical mechanism for implementing infrastructure automation and the foundation that lets Dell EMC storage products participate in modern automation frameworks. All Dell EMC storage products support APIs for automation.


Dell EMC Ansible Overview and Demonstration

Paul Martin, Senior Principal Engineer, and Audrius Stripeikis, Product Manager, demonstrate DevOps automation in Ansible with Dell EMC. The presenters describe the need for Ansible, its core concepts, available modules, and architecture, including success stories. They then demonstrate one-click Dell EMC PowerMax provisioning.


Dell EMC Storage Integration with VMware vRealize Suite

Paul Martin, Senior Principal Engineer, and Audrius Stripeikis, Product Manager, show how Dell EMC integrates with the VMware vRealize suite. The presenters describe the need for vRO (vRealize Orchestrator) and the core concepts of the integration, including architecture and success stories.

Kubernetes for Dell EMC Storage Introduction and Demo

Paul Martin, Senior Principal Engineer, and Audrius Stripeikis, Product Manager, demonstrate Kubernetes integration with Dell EMC storage products. The presenters describe persistent storage for Kubernetes, CSI concepts and choices, architectures, and persistent storage challenges. They then demonstrate integration with PowerMax persistent storage. Finally, they discuss how Dell EMC embraces open access and community support for developer and automation tools for storage.

Dell EMC PowerProtect 19.3 is available, Kubernetes Integration? You bet!

Back in 2018, I traveled to a customer advisory board and talked about CSI. I then got ALL the customers basically telling me, “That’s great, but what about backup?”


Kubernetes is a portable, extensible open-source platform for managing containerized workloads and services.

PowerProtect Data Manager (PPDM) allows you to protect your production workloads in Kubernetes (K8s) environments, ensuring that the data is easy to back up and restore, and is always available, consistent, and durable in a Kubernetes workload or DR situation.

  1. PPDM has a Kubernetes-native architecture developed for Kubernetes environments.
  2. Easy for the IT Ops team to use and separate from the DevOps environment, while allowing centralized governance of the DevOps environment.
  3. Users protect to Data Domain, benefiting from secondary storage with unmatched efficiency, deduplication, performance, and scalability – with near-future plans to protect to object storage for added flexibility.

Dell EMC and Velero are working together in an open source community to improve how you protect your data, applications, and workloads for every step of your Kubernetes journey.

Velero is:

  • An open source tool to safely back up and restore, perform disaster recovery, and migrate Kubernetes cluster resources and persistent volumes.
  • For the Kubernetes administrator or developer; it handles protection for Kubernetes.
  • A tool that focuses on backup/restore of the K8s configuration.

PPDM builds on top of these capabilities to:

  • Provide a data protection solution that has single management for VMs, applications, and containers.
  • Provide an enterprise-grade solution that allows you to place your production workloads in a K8s environment
  • Focus on crash-consistent backup/restore that is always available and durable in a K8s workload or DR situation

Kubernetes, or K8s (“k”, 8 characters, “s”), or “kube”, is a popular open source platform for container orchestration which automates container deployment, container (de)scaling, and container load balancing.

  • Kubernetes automates container operations.
  • Kubernetes eliminates many of the manual processes involved in deploying and scaling containerized applications.
  • You can cluster together groups of hosts running containers, and Kubernetes helps you easily and efficiently manage those clusters.
  • Kubernetes is an ideal platform for hosting cloud-native applications that require rapid scaling.

Kubernetes Features

  • Automated Scheduling: Kubernetes provides an advanced scheduler to launch containers on cluster nodes.
  • Self-Healing Capabilities: rescheduling, replacing, and restarting containers that have died.
  • Automated Roll-outs and Rollbacks: Kubernetes supports roll-outs and rollbacks for the desired state of the containerized application.
  • Horizontal Scaling and Load Balancing: Kubernetes can scale the application up and down as per the requirements.

K8s Architectural Overview

PowerProtect is our answer to modern challenges

  • It allows customers to take existing production workloads or new workloads and start placing them in Kubernetes production environments, knowing that they will be protected.
  • It allows IT operations and backup admins to manage K8s data protection from a single enterprise-grade management UI, while also allowing K8s admins to define protection for their workloads from the K8s APIs.
  • We are building the solution in collaboration with VMware Velero – which focuses on data protection and migration for k8s workloads.

When building the solution, we focus on three pillars:

  • Central Management
    • Given a K8s cluster’s credentials, Next-Gen SW will discover the namespaces, labels, and pods in the environment; you will be able to protect namespaces or specific pods.
    • Protection is defined via the same PLC mechanism that Next-Gen SW already has.
    • Logging, monitoring, governance, and recovery are done through Next-Gen SW, the same way they are done for other assets.
  • Efficient and Flexible
    • Same data protection platform – our solution is built into DPD’s single Next-Generation software, so as IT ops you only need to manage your VMs, your applications, and your containers through one platform.
    • Protection to deduped storage allows great TCO with DD/DDVE/Next-Gen Hardware superior deduplication. Protection can also be performed to S3-compatible storage (ECS, with S3 in a public cloud in the next release).
    • Next-Generation SW is planned to protect any persistent volume (i.e. crash-consistent images), and will protect quiesced applications such as MySQL, Postgres, Mongo, and Cassandra in the future.
    • Protection is planned for any Kubernetes deployment, such as:
      • PKS (Essential/Enterprise/Cloud)
      • OpenShift
      • GCP (Anthos/GKE)
      • AWS (EKS)
      • OpenStack
      • On-prem bare metal
  • Built for Kubernetes
    • By using the K8s APIs we allow flexibility in which clusters can be protected. It is possible to use additional applications such as Grafana, Prometheus, Istio, and Helm to add capabilities and automation to the solution.
    • Next-Gen SW discovers, shows, and monitors K8s resources – namespaces and persistent volumes.
    • No sidecars – there is no need to install a backup client container for each pod (which is time consuming, has a large attack vector, consumes lots of resources, and does not scale).
    • Node affinity – by providing protection controllers per node, we avoid cross-node traffic. This is more efficient and more secure.

A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.

A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted once read/write or many times read-only).

While PersistentVolumeClaims allow a user to consume abstract storage resources, it is common that users need PersistentVolumes with varying properties, such as performance, for different problems. Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than just size and access modes, without exposing users to the details of how those volumes are implemented. For these needs, there is the StorageClass resource.
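
To make the PV/PVC/StorageClass relationship concrete, here is a minimal sketch using the official Kubernetes Python client: it requests storage by creating a PVC that names a StorageClass, and the cluster (via that class’s provisioner) binds or dynamically provisions a matching PV. The namespace and the “powermax-xfs” StorageClass name are hypothetical placeholders.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in a pod
core = client.CoreV1Api()

# A PVC asks for abstract storage; the named StorageClass decides how the PV is provisioned.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-pvc", namespace="demo"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],         # mounted read/write by a single node
        storage_class_name="powermax-xfs",      # hypothetical StorageClass name
        resources=client.V1ResourceRequirements(requests={"storage": "8Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="demo", body=pvc)
```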

Container Storage Interface (CSI) defines a standard interface for container orchestration systems (like Kubernetes) to expose arbitrary storage systems to their container workloads.


K8s Components to be Backed Up

Namespaces

  • Kubernetes namespaces can be seen as a logical entity used to represent cluster resources for usage of a particular set of users. This logical entity can also be termed as a virtual cluster. One physical cluster can be represented as a set of multiple such virtual clusters (namespaces). The namespace provides the scope for names. Names of resources within one namespace need to be unique.

PersistentVolumeClaim (PVC)

  • PVC is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted once read/write or many times read-only).

storageClassName

  • A claim can request a particular class by specifying the name of a StorageClass using the attribute storageClassName. Only PVs of the requested class, ones with the same storageClassName as the PVC, can be bound to the PVC.

CSI

  • Container Storage Interface (CSI) defines a standard interface for container orchestration systems (like Kubernetes) to expose arbitrary storage systems to their container workloads.
  • Once a CSI compatible volume driver is deployed on a Kubernetes cluster, users may use the csi volume type to attach, mount, etc. the volumes exposed by the CSI driver.
  • The csi volume type does not support direct reference from Pod and may only be referenced in a Pod via a PersistentVolumeClaim object

Note: We used hostPath in our lab, Kubernetes supports hostPath for development and testing on a single-node cluster. A hostPath PersistentVolume uses a file or directory on the Node to emulate network-attached storage.

Protecting K8s workloads using PowerProtect Data Manager

Asset Source

  • The asset source for the Kubernetes cluster is the cluster’s master node FQDN or IP address. In the case of an HA cluster, the external IP of the load balancer must be used as the asset source.
  • The default port for a production Kubernetes API server with PPDM is 6443.
  • PowerProtect will use a bearer token or a kubeconfig file to authenticate with the Kubernetes API server (a minimal sketch of both options follows).
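
As a rough illustration of those two authentication options (kubeconfig file, or bearer token against the API server on port 6443), here is a small sketch with the Kubernetes Python client; the host name, token, and CA path are placeholders, and this is not PPDM code.

```python
from kubernetes import client, config

# Option 1: authenticate with a kubeconfig file.
config.load_kube_config(config_file="/path/to/kubeconfig")

# Option 2: authenticate with a service-account bearer token (API server default port 6443).
token = "<service-account-token>"                   # placeholder
cfg = client.Configuration()
cfg.host = "https://k8s-master.example.com:6443"    # master / load-balancer FQDN (example)
cfg.api_key = {"authorization": "Bearer " + token}
cfg.ssl_ca_cert = "/path/to/ca.crt"                 # cluster CA certificate

core = client.CoreV1Api(client.ApiClient(cfg))
print([ns.metadata.name for ns in core.list_namespace().items])
```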

Assets

PowerProtect will discover two types of assets for protection in Kubernetes clusters (a short discovery sketch follows this list):

  • Namespaces
  • Persistent Volume Claims (PVCs). PVCs are namespace-bound and are therefore shown as children of the namespace they belong to in the UI.
  • PowerProtect will use Velero for protection of namespaces (metadata). PowerProtect will drive PVC snapshots and backups using its own controller.
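
Conceptually, the discovery boils down to walking the namespaces and the PVCs inside each of them. A minimal sketch of that idea with the Kubernetes Python client (not the actual PPDM discovery code):

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Namespaces are top-level assets; PVCs are namespace-bound children.
for ns in core.list_namespace().items:
    pvcs = core.list_namespaced_persistent_volume_claim(ns.metadata.name)
    print(ns.metadata.name, "->", [pvc.metadata.name for pvc in pvcs.items])
```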

PowerProtect components on the Kubernetes Cluster

PowerProtect will install the following components on the Kubernetes cluster when a Kubernetes cluster is added as an asset source.

  • Custom Resource Definitions for BackupJob, RestoreJob, BackupStorageLocation, and BackupManagement (a short verification sketch follows this section)
  • A service account for the PowerProtect controller
  • A cluster role binding to bind the service account to the cluster-admin role
  • A Deployment for the PowerProtect controller, with a replica set of 1 for R3
  • Velero

PLC Configuration & Asset Configuration

  • When a PLC is created, a new storage unit (SU) is created on the protection storage as part of the PLC configuration. In the case of a PLC of type Kubernetes, a BackupStorageLocation containing the SU information will also be created on the cluster.
  • The PowerProtect controller running in the Kubernetes cluster will create a corresponding BackupStorageLocation in the Velero namespace whenever a BackupStorageLocation is created in the PowerProtect namespace.
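
To sanity-check that the PowerProtect pieces landed on the cluster, you could list the CustomResourceDefinitions and look for the BackupJob/RestoreJob/BackupStorageLocation/BackupManagement kinds. A hedged sketch with the Kubernetes Python client follows; the exact CRD and API group names are not documented here, so the substring filter is only an assumption.

```python
from kubernetes import client, config

config.load_kube_config()
# Kubernetes 1.14-era clusters expose CRDs under apiextensions.k8s.io/v1beta1;
# newer clusters/clients use client.ApiextensionsV1Api() instead.
ext = client.ApiextensionsV1beta1Api()

wanted = ("backupjob", "restorejob", "backupstoragelocation", "backupmanagement")
for crd in ext.list_custom_resource_definition().items:
    if any(word in crd.metadata.name.lower() for word in wanted):
        print("found CRD:", crd.metadata.name)
```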

Protection driven from PLC

When a protection action is triggered:

  1. CNDM posts a BackupJob custom resource (one for each namespace asset in that PLC) to the Kubernetes API server and monitors its status; a hedged sketch of posting such a custom resource appears after the notes below. The BackupJob custom resource name is <namespace>-YYYY-MM-DD-SS, and the BackupJob includes:
     • The namespace asset that needs to be protected
     • All the PVC assets in that namespace that need to be protected
     • The backup storage location (target)
  2. The PowerProtect Controller that is watching for these custom resources is notified.
  3. The PowerProtect Controller creates a Velero Backup custom resource with the following information:
     • The namespace
     • A Velero setting to not include the PVC and PV resource types
     • A Velero setting to include cluster resources
     • A Velero setting to not take snapshots
     • The Velero BackupStorageLocation corresponding to the PLC SU
  4. The PowerProtect controller monitors and waits for the Velero backup to complete. Since the provider for the BackupStorageLocation is Data Domain, Velero invokes the Data Domain object store plugin to write data to the storage unit.
  5. Once the Velero CR status indicates that the backup has completed, the PowerProtect controller performs steps 6, 7, and 8 for each PVC.
  6. Snapshot the PVC.
  7. Launch a cProxy pod with the snapshot volume mounted to the pod.
  8. The cProxy pod writes the snapshot contents to DD.
  9. Once all the PVCs are backed up, the PowerProtect controller updates the status in the BackupJob custom resource. The manifest includes all files created by PowerProtect and Velero; in order to get the list of files created by Velero, the PowerProtect controller reads the Data Domain folder after the Velero backup is complete.
  10. CNDM creates a protection copy set containing a protection copy for the namespace and for each PVC asset.

Note: The Kubernetes etcd datastore has a default document size limit of 1.5 MiB. If the number of PVCs in a single namespace exceeds a couple of hundred, this limit can be reached.

    • The controller currently supports up to 10 BackupJobs (namespaces) simultaneously. Within each BackupJob, the PVCs are backed up one at a time, so at most there can be 10 cProxies running for backup at any given time.
      • This is an initial implementation. As a reference point, Velero today is completely serial.
    • CSI snapshots today can only take full snapshots.
    • The FS agent runs inside the cProxy. After the CSI snapshot is taken, it is mounted onto a cProxy pod and the FS agent reads that mount and writes to DD.
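
For illustration only, this is roughly what posting such a BackupJob custom resource could look like through the Kubernetes Python client. The API group, version, plural, namespace, and spec field names are all assumptions (the real schema ships with PPDM); only the <namespace>-YYYY-MM-DD-SS naming convention comes from the workflow above.

```python
from datetime import datetime
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

name = "demo-" + datetime.utcnow().strftime("%Y-%m-%d-%S")    # <namespace>-YYYY-MM-DD-SS
backup_job = {
    "apiVersion": "powerprotect.example.com/v1",              # hypothetical group/version
    "kind": "BackupJob",
    "metadata": {"name": name, "namespace": "powerprotect"},  # hypothetical namespace
    "spec": {                                                 # hypothetical spec fields
        "namespace": "demo",
        "pvcs": ["demo-pvc"],
        "backupStorageLocation": "plc-storage-unit-1",
    },
}

custom.create_namespaced_custom_object(
    group="powerprotect.example.com", version="v1",
    namespace="powerprotect", plural="backupjobs", body=backup_job,
)
```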

The following steps describe the restore workflow when a namespace has been deleted and the restore is triggered from the PPDM UI:

  1. CNDM posts a RestoreJob custom resource to the Kubernetes API server. The RestoreJob custom resource includes:
     • The namespace asset that needs to be restored
     • All the PVC assets in that copy set
     • The backup storage location (target)
  2. The PowerProtect Controller that is watching for these custom resources is notified.
  3. The PowerProtect Controller creates a Velero Restore custom resource to restore:
     • The namespace resource (note: not the whole namespace, just the namespace resource)
     • All the resources needed in order to start the restore of the PVCs
  4. The PowerProtect controller monitors and waits for the Velero restore to complete. Since the provider for the BackupStorageLocation is Data Domain, Velero invokes the Data Domain object store plugin to read data from the storage unit.
  5. Once the Velero CR status indicates that the restore has completed, the PowerProtect controller restores each PVC in the restore job:
     • A cProxy pod is launched for each PVC
     • The cProxy reads the contents from Data Domain and writes them to the PVC
  6. The PowerProtect Controller then creates a second Velero Restore custom resource to restore all the remaining resources in the namespace, excluding the namespace resource, PVCs, and PVs.
  7. The PowerProtect controller monitors and waits for the Velero restore to complete.
  8. The PowerProtect controller updates the final RestoreJob status.

Below you can see a demo of how it all looks.

Training Resources

The following training assets were created for this release. Additional assets also exist that describe the PowerProtect product capabilities and concepts. These training sessions are not specific to this release but do provide an introduction for those who are new to PowerProtect. For more education assets, search for “PowerProtect” on the education portal.

PowerProtect Data Manager 19.3 Recorded Knowledge Transfer

Take this 2-hour-and-30-minute On-Demand Class to get an overview of the new enhancements, features, and functionality in the PowerProtect Data Manager 19.3 release.

Registration Link: https://education.dellemc.com/content/emc/en-us/csw.html?id=933228438

The Kubernetes CSI Driver for Dell EMC PowerMax v1.1 is now available

 

Product Overview:

CSI Driver for PowerMax enables integration with Kubernetes, the open-source container orchestration infrastructure, and delivers scalable persistent storage provisioning operations for PowerMax and All Flash arrays.

Highlights of this Release:

The CSI Driver for Dell EMC PowerMax has the following features:

  • Supports CSI 1.1
  • Supports Kubernetes versions 1.13 and 1.14
  • Supports Unisphere for PowerMax 9.1
  • Supports Fibre Channel
  • Supports Red Hat Enterprise Linux 7.6 host operating system
  • Supports PowerMax – 5978.444.444 and 5978.221.221
  • Supports Linux native multipathing
  • Persistent Volume (PV) capabilities:
    • Create
    • Delete
  • Dynamic and Static PV provisioning
  • Volume mount as ext4 or xfs file system on the worker node
  • Volume prefix for easier LUN identification in Unisphere
  • Helm chart installer
  • Access modes:
    • SINGLE_NODE_WRITER
    • SINGLE_NODE_READER_ONLY

Software Support:

  • Supports Kubernetes v1.14
  • Supports PowerMax – 5978.221.221 (ELM SR), 5978.444.444 (Foxtail)
  • CSI v1.1 compliant

Operating Systems:

  • Supports CentOS 7.3 and 7.5
  • Supports Red Hat Enterprise Linux 7.6

Resources:

The Kubernetes CSI Driver for Dell EMC Isilon v1.0 is now available


Isilon, the de-facto scale-out NAS platform, has so many of its petabytes in use all over the world that it is no wonder I’m getting a lot of requests for a Kubernetes CSI plugin for it. Well, now you have it!

Product Overview:

CSI Driver for Dell EMC Isilon enables integration with Kubernetes, the open-source container orchestrator, and delivers scalable persistent storage provisioning operations for the Dell EMC Isilon storage system.

Highlights of this Release:

The CSI driver for Dell EMC Isilon enables the use of Isilon as persistent storage in Kubernetes clusters. The driver enables fully automated workflows for dynamic and static persistent volume (PV) provisioning and snapshot creation. The driver uses SmartQuotas to limit volume size.

Software Support:

This introductory release of CSI Driver for Dell EMC Isilon supports the following features:

  • Supports CSI 1.1
  • Supports Kubernetes version 1.14.x
  • Supports Red Hat Enterprise Linux 7.6 host operating system
  • Persistent Volume (PV) capabilities:
    • Create from scratch
    • Create from snapshot
    • Delete
  • Dynamic PV provisioning
  • Volume mount as NFS export
  • Helm chart installer
  • Access modes:
    • SINGLE_NODE_WRITER
    • MULTI_NODE_READER_ONLY
    • MULTI_NODE_MULTI_WRITER
  • Snapshot capabilities:
    • Create
    • Delete

Note: Volume Snapshots is an Alpha feature in Kubernetes. It is recommended for use only in short-lived testing clusters, as features in the Alpha stage have an increased risk of bugs and a lack of long-term support. See the Kubernetes documentation for more information about feature stages.
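
As a rough sketch of the “Create from snapshot” capability listed above, the Kubernetes Python client lets a PVC reference a VolumeSnapshot as its data source. The StorageClass name, snapshot name, and namespace are placeholders, and because Volume Snapshots are still Alpha at this point, field and group names may differ in your cluster.

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Restore a new PVC from an existing VolumeSnapshot (all names are illustrative).
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="restored-pvc", namespace="demo"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],           # roughly MULTI_NODE_MULTI_WRITER over NFS
        storage_class_name="isilon-nfs",          # hypothetical Isilon StorageClass
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
        data_source=client.V1TypedLocalObjectReference(
            api_group="snapshot.storage.k8s.io",  # alpha snapshot API group at this time
            kind="VolumeSnapshot",
            name="demo-pvc-snap-1",               # placeholder snapshot name
        ),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="demo", body=pvc)
```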

Refer to Dell EMC Simple Support Matrix for the latest product version qualifications.

Resources:

CSI Driver for Dell EMC Isilon files and documentation are available for download on:

Below you can see a demo of how it all works:

XtremIO 6.3 is here, Sync, Scale & Protect!

We have just released the new XtremIO, 6.3 version with some big enhancements, so let’s dive into each one of them!

XtremIO Remote Protection

XtremIO Metadata-Aware Replication

XtremIO Metadata-Aware Asynchronous Replication leverages the XtremIO architecture to provide highly efficient replication that reduces bandwidth consumption. The XtremIO Content-Aware Storage (CAS) architecture and in-memory metadata allow the replication to transfer only unique data blocks to the target array. Every data block written to XtremIO is identified by a fingerprint, which is kept in the data block’s metadata information.

  • If the fingerprint is unique, the data block is physically written and the metadata points to the physical block.
  • If the fingerprint is not unique, it is kept in the metadata and points to an existing physical block.

A non-unique data block, which already exists on the target array, is not sent again (deduplicated). Instead, only the block metadata is replicated and updated at the target array.

The transferred unique data blocks are sent compressed over the wire.

XtremIO Asynchronous Replication is based on a snapshot-shipping method that allows XtremIO to transfer only the changes, by comparing two subsequent snapshots and benefiting from write-folding.

This efficient replication is not limited per volume, per replication session or per single source array, but is a global deduplication technology across all volumes and all source arrays.
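
To illustrate the idea (and only the idea; this is not XtremIO code), here is a tiny Python sketch of metadata-aware replication: fingerprint each changed block, send only blocks whose fingerprint the target has never seen, and compress whatever does go over the wire.

```python
import hashlib
import zlib

def replicate_blocks(changed_blocks, target_fingerprints):
    """changed_blocks: iterable of bytes objects (blocks changed since the previous
    snapshot); target_fingerprints: set of digests already known on the target.
    Returns the metadata-only updates and the compressed unique payloads."""
    metadata_updates, wire_payloads = [], []
    for block in changed_blocks:
        fingerprint = hashlib.sha256(block).hexdigest()
        if fingerprint in target_fingerprints:
            metadata_updates.append(fingerprint)            # deduplicated: metadata only
        else:
            wire_payloads.append(zlib.compress(block))      # unique data, sent compressed
            target_fingerprints.add(fingerprint)
    return metadata_updates, wire_payloads

blocks = [b"A" * 8192, b"B" * 8192, b"A" * 8192]            # third block is a duplicate
meta, payloads = replicate_blocks(blocks, set())
print(len(meta), "deduplicated,", len(payloads), "sent over the wire")
```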

In a fan-in environment, replicating from four sites to a single target site, as displayed in Figure 14, overall storage capacity requirements (in all primary sites and the target site) are reduced by up to 38 percent[1], providing the customers with considerable cost savings.

Figure 14: Global Data Reduction with XtremIO Metadata-Aware Replication

XtremIO Synchronous Replication (new to 6.3)

XtremIO enables you to protect the data both asynchronously and synchronously when a ‘zero data loss’ data protection policy is required.

XtremIO Synchronous replication is fully integrated with Asynchronous replication, in-memory snapshots, and iCDM capabilities, which makes it highly efficient.

The challenge with Synchronous replication arises when the source and target are out of sync. This is true during the initial sync phase as well as when a disconnection occurs due to link failure or a user-initiated operation (for example, pausing the replication or performing failover).

Synchronous replication is highly efficient as a result of using these unique capabilities:

  • Metadata-aware replication – for the initial synchronization phase, and when the target gets out of sync, the replication uses metadata-aware replication to efficiently and quickly replicate the data to the target. The replication uses multiple cycles until the gap is minimal and then switches to synchronous replication. This reduces the impact on production to a minimum and accelerates the sync time.
  • Recovery snapshots – to avoid the need for a full copy, or even a full metadata copy, XtremIO leverages its in-memory snapshot capabilities. Every few minutes, recovery snapshots are created on both sides, which can be used as a baseline in case a disconnection occurs. When the connection is resumed, the system only needs to replicate the changes made since the most recent recovery snapshot prior to the disconnection.
  • Prioritization – in order to ensure the best performance for the applications using Sync replication, XtremIO automatically prioritizes the I/O of Sync replication over Async replication. Everything is done automatically, and no tuning or special definition is required.
  • Auto-recovery from link disconnection – the replication resumes automatically when the link is back to normal.



XtremIO Synchronous replication is managed at the same location as the Asynchronous replication and supports all Disaster Recovery operations similarly to Asynchronous replication.

Switching between Async and Sync is performed with a single command or via the UI, as can be seen below.



Once changed, you can also see the new replication mode in the remote session view


Best Protection

XtremIO’s replication efficiency allows it to support replication for All-Flash Array (AFA) high-performance workloads. The replication supports both Synchronous and Asynchronous replication, with an RPO as low as 30 seconds, and can keep up to 500 PITs[2]. XtremIO offers simple operations and workflows for managing the replication and its integration with iCDM for both Synchronous and Asynchronous replication:

  • Test a Copy (current or specific PIT) at the remote host
    Testing a copy does not impact the production or the replication, which continues to replicate the changes to the target array, and is not limited by time. The “Test Copy” operation uses the same SCSI identity for the target volumes as will be used in case of a failover.
  • Failover
    Using the failover command, it is possible to select the current or any PIT at the target and promote it to the remote host. Promoting a PIT is instantaneous.
  • Failback
    Fails back from the target array to the source array.
  • Repurposing Copies
    XtremIO offers a simple command to create a new environment from any of the replication PITs.
  • Refresh a Repurposing Copy
    With a single command, a repurposed copy can be refreshed from any replication PIT. This is very useful when refreshing the data from production to a test environment that resides on a different cluster, or when refreshing the DEV environment from any of the existing build versions.
  • Creating a Bookmark on Demand
    An ad-hoc PIT can be created when needed. This option is very useful when an application-aware PIT is required or before performing maintenance or upgrades.

Unified View for Local and Remote Protection

A dedicated tab for data protection exists in the XtremIO GUI for managing XtremIO local and remote protection (Sync & Async replication). In the Data Protection Overview screen, a high-level view displays the status of all Local and Remote protections, as shown in Figure 15.

Figure 15: Data Protection Overview Screen

The Overview section includes:

  • The minimum RPO compliance for all Local and Remote Protection sessions
  • Protection sessions status chart
  • Connectivity information between peer clusters

From the Overview screen, it is easy to drill down to each session.

Consistency Group Protection View

With the new unified protection approach, in one view it is easy to understand the protection for the consistency group. The Topology View pane displays the local and remote protection topology of the Consistency Group, as shown in Figure 16. Clicking each of the targets displays the detailed information in the Information pane.



Secured Snapshots

The purpose of this feature is to allow a customer to protect snapshots created by a “Local Protection Session” against accidental user deletion:

  • Once a “Local Protection Session” creates the snapshot, it is automatically marked as “Secured”
  • The snapshot’s protection will expire once the snapshot is due for deletion by its retention policy

Once a “Local Protection Session” is set to create secured snapshots:

  • It cannot be set to create non-secured snapshots

Once a snapshot is set as “Secured”:

  • The snapshot cannot be deleted
  • The snapshot-set containing this snapshot cannot be deleted
  • The contents of this snapshot cannot be restored or refreshed

Secured Snapshots – Override

  • Procedure: requires a case – legal obligation
  • New XMCLI command: tech-level command
  • To remove the “secured” flag, a formal ticket must be filed with Dell EMC. This is a legal obligation!
  • Once the ticket is filed, a technician-level user account can use the new “remove-secured-snap-flag” XMCLI command to release the “secured” flag.

Secured Snapshots – Examples – Create a Protected Local Protection Session

The following output displays the creation of a new “Local Protection Session” with the “Secured Flag” setting.

The following output displays the modification of an existing “Local Protection Session” to start creating snapshots with the “Secured Flag” setting.

Secured Snapshots – Examples – Snapshot Query & Release

The first output displays the query of the properties of a “Secured Snapshot”.

The second output displays an attempt to delete a “Secured Snapshot”.

The third output displays how to release the “Secured” flag (using a tech-level user account).

Below, you can also see what a protection policy created on a consistency group looks like when you decide not to allow the option to delete the snapshot.

And this is the error you will get if you (or someone else) try to remove this volume.


IPv6 Dual Stack

  • Dual IP versions (4 & 6)
  • XMS Management
    • External: IPv4 & IPv6
    • Internal: IPv4 or IPv6
  • Storage Controller
    • iSCSI: IPv4 & IPv6
  • Native Replication: IPv4 only

This feature allows a customer to assign multiple IP addresses (IPv4 and IPv6) to a single interface.

Let’s discuss the different network interfaces used in XtremIO and see what has changed.

XMS management traffic is used to allow a user to connect to the various interfaces (such as the WebUI, XMCLI, RestAPI, etc.). Previously, the user could configure either an IPv4 or an IPv6 address; the new feature allows assigning two IP addresses, one of each version.

XMS management traffic is also used internally to connect to clusters (SYM and PM). The behavior of this interface has not changed: all managed clusters should use the same IP version.

The Storage Controllers’ iSCSI traffic allows external host connectivity. Previously, the user could only configure IP addresses from a single IP version; the new feature allows assigning multiple IP addresses with different versions.

Native Replication behavior remains the same: these interfaces are limited to IPv4 addresses.

IPv6 Dual Stack – XMS Management – New XMCLI Commands

Multiple parameters were added to the “show-xms” XMCLI command to support this feature:

  • Figure 1: Two new parameters which determine the IP versions
  • Figure 2: The “Primary IP Address” parameter names remain as is (to conform with backward-compatibility requirements)
  • Figure 3: Various new parameters that describe the new “Secondary IP Address”

Two additional XMCLI commands were introduced to support this feature:

  • Figure 1: The “add-xms-secondary-ip-address” XMCLI command sets the XMS management interface “Secondary IP Address” and “Secondary IP Gateway”
  • Figure 2: The “remove-xms-secondary-ip-address” XMCLI command removes the XMS management interface “Secondary IP Address” and “Secondary IP Gateway”

Note that there is no “modify” command – to modify the secondary IP address, remove it and set it again with its corrected values.

IPv6 Dual Stack – Storage Controller Management – Interfaces

To support the changes made to the Storage Controllers’ iSCSI interface settings, the following changes were implemented:

  • iSCSI Portals – the user can now configure multiple iSCSI portals with different IP versions on the same interface
  • iSCSI Routes – the user can now configure multiple IPv4 and IPv6 iSCSI routes

As explained earlier, the Storage Controller Native Replication interface behavior remains as is – these interfaces only allow IPv4 addresses.


Scalability

Scalability Increase

The new 6.3.0 release supports an increased number of objects:

  • The number of volumes and copies (per cluster) was increased to 32K
  • The number of SCSI-3 registrations and reservations (per cluster) was increased to 32K

Below you can see a demo showing how to configure Sync Replication from the GUI.

And here you can see how Sync Replication works with VMware SRM, in conjunction with our unique point-in-time failover for cases where you don’t want to fail over to the last point in time.