Got OpenStack And XtremIO, You just got Cindered!

 

OpenStack is leading the open-source cloud revolution and has become the de facto open platform for managing private and public clouds.

OpenStack allows storage resources to be located anywhere in the cloud and made available on demand.

OpenStack reduces the complexity of managing and consuming resources across heterogeneous environments.

With the availability of the XtremIO integration to OpenStack, customers can now easily connect their OpenStack cloud to an XtremIO storage array.

OpenStack architecture in a nutshell

•Fully accessible through a common set of APIs and a common dashboard


•Supports heterogeneous resources: compute resources, hypervisors, storage backends, and networking resources

What is Cinder?

Cinder is the block storage service for OpenStack.

Cinder directs the creation and deletion of volumes on a storage array.

Cinder attaches volumes to, and detaches them from, instances/virtual machines (VMs) created by OpenStack.

Storage resource allocation is performed on demand, based on the requirements of the OpenStack cloud.

XtremIO Cinder Driver


The XtremIO Cinder driver sends storage provisioning commands to the XtremIO array.

The driver communicates with the array using the XtremIO REST API.

The OpenStack cloud can access XtremIO using either iSCSI or Fibre Channel protocols.
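To give a feel for what "implements the XtremIO REST API" means in practice, here is a minimal sketch of how a provisioning call might be composed. The endpoint path, field names, and credentials are illustrative assumptions for this sketch, not the actual driver code:

```python
import base64
import json

# Hypothetical XMS endpoint layout, for illustration only.
BASE_PATH = "/api/json/types"

def build_create_volume_request(xms_ip, user, password, name, size_mb):
    """Compose (url, headers, body) for a volume-creation REST call.

    Nothing is sent here; a real client would POST this over HTTPS.
    """
    url = "https://%s%s/volumes" % (xms_ip, BASE_PATH)
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    headers = {
        "Authorization": "Basic " + token,   # XMS uses HTTP basic auth
        "Content-Type": "application/json",
    }
    body = json.dumps({"vol-name": name, "vol-size": "%dm" % size_mb})
    return url, headers, body

url, headers, body = build_create_volume_request(
    "10.0.0.50", "admin", "secret", "cinder-vol-01", 8192)
print(url)
```

The same pattern (a different path and payload) would apply to snapshot creation and volume mapping calls.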

XtremIO Cinder Operations support


Cinder Driver Setup & Installation

Setup

–Make sure the XtremIO array is connected to the OpenStack compute resources (initiators)

–For iSCSI

▪Configure iSCSI portals in XMS

▪CHAP initiator authentication is supported. If CHAP initiator authentication is required, set CHAP Authentication mode to initiator in XMS.

—The CHAP initiator authentication credentials (usernames and passwords) are generated automatically by the Cinder driver.

–Set up a dedicated account for OpenStack with Admin role privileges (recommended)

Installation

–The driver should be installed on the Cinder node that runs the cinder-volume service

▪Copy xtremio.py to cinder/volume/drivers/emc

Cinder Configuration

Edit /etc/cinder/cinder.conf:

volume_driver=cinder.volume.drivers.emc.xtremio.XtremIOISCSIDriver

OR

volume_driver=cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver

san_ip=[XMS IP address] → retrieve with the show-xms CLI command

san_user=[XMS username] → the XMS user name previously defined

san_password=[XMS password]

Restart the Cinder services.
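As a sanity check before restarting the services, the settings above can be verified programmatically. A minimal sketch using Python's configparser; the file content and values here are illustrative, in practice you would read /etc/cinder/cinder.conf:

```python
from configparser import ConfigParser

# Illustrative cinder.conf content for the XtremIO iSCSI driver.
SAMPLE = """
[DEFAULT]
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOISCSIDriver
san_ip = 10.0.0.50
san_user = openstack_admin
san_password = secret
"""

def check_cinder_conf(text):
    """Return a list of problems found in the [DEFAULT] section."""
    parser = ConfigParser()
    parser.read_string(text)
    problems = []
    for key in ("volume_driver", "san_ip", "san_user", "san_password"):
        if not parser["DEFAULT"].get(key):
            problems.append("missing or empty: " + key)
    driver = parser["DEFAULT"].get("volume_driver", "") or ""
    if "xtremio" not in driver.lower():
        problems.append("volume_driver does not point at the XtremIO driver")
    return problems

print(check_cinder_conf(SAMPLE))  # an empty list means the basics are in place
```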

 


Volume Created in XtremIO


Create Snapshot in OpenStack


Snapshot created in XtremIO


Attach Volume to VM in OpenStack


Volume mapped in XtremIO


 

 

OpenStack Demo

Wrapping it all up, you can watch a video that Iris Kaminer, a PM on our team, made.


VSI (Virtual Storage Integrator) 6.2 is Out, XtremIO is IN!

 

On the heels of the Mega Launch event in which we announced XIOS 3.0, another product has just gone GA: the VSI plugin.

If you are an EMC customer and have never heard of it, it’s really time to wake up. This (FREE!) plugin is really a framework for the EMC products in VMware vCenter. At its core, it allows you to view storage array metrics, provision VMFS datastores or RDM devices, and present them to the ESXi hosts, all without using the storage GUI. This is the first release that supports XtremIO AND the vCenter web interface; prior releases supported XtremIO but only worked with the classic vCenter client.

Attached below are two videos in case you find the documentation too complex:

01. Installing and Configuring the EMC VSI 6.2

 

02. Provisioning Datastores and RDMs

 


But this release also has another feature, which I’m proud to say was co-invented by Tamir Segal, a PM on our XtremIO team, and yours truly.

It really started with me scratching my head around the new XtremIO snapshot capability and how revolutionary it is (you can read more about it here:

https://itzikr.wordpress.com/2014/05/12/xtremio-redefining-snapshots/ )

My question was: why are we even using VDI full clones? The answer, of course, is “because customers want persistent desktops”. But then my head started spinning. I was thinking, “well, we can still give you a persistent desktop that will look like a full clone, but why waste XCOPY time or even logical capacity?”, and all without the penalties of VMware linked-clone technology (you can’t replicate it, you always need to worry about the parent -> child relationship, etc.). And so we started to toy with the following logic:

1. Create one datastore that will have X VDI VMs in it (full clones, all cloned from the same replica).

2. Snapshot this volume X times.

3. Present a snapshot to the ESXi cluster, rename the VMs, run sysprep on them, and upload their computer account configuration to the VDI broker.

4. Repeat for the other snapshots.
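The renaming in step 3 is the fiddly part of that workflow. Here is a toy sketch of generating a unique name for every VM copy in every snapshot, so the broker never sees duplicates; the naming scheme and counts are made up for illustration:

```python
def vms_for_snapshots(base_vms, snapshot_count):
    """Map each snapshot of the source datastore to a uniquely
    renamed copy of every VM it contains."""
    plan = {}
    for snap in range(1, snapshot_count + 1):
        snap_name = "vdi-ds-snap%02d" % snap
        plan[snap_name] = ["%s-s%02d" % (vm, snap) for vm in base_vms]
    return plan

# Two full clones in the source datastore, snapshotted three times.
plan = vms_for_snapshots(["vdi-001", "vdi-002"], 3)
for snap, vms in sorted(plan.items()):
    print(snap, vms)
```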

And so we went with this request to the VSI team, and they made it possible. So you see, I’m Xtremely happy now, because this platform called “XtremIO” is so unique that it allows us to really think outside the box and leverage technologies in ways that were never done before!

By the way, there is even extra value in this plugin if you work with Citrix XenDesktop and EMC XtremIO, since there is no native way in Citrix Studio to create full persistent clones!

In order to make the VSI VDI plugin work with Horizon View, you will need to disable the message security mode (as the VMs’ creation, and the communication about it, come to Horizon from an “external” source).

view-security

 

VSI 6.2 Release Notes:

https://support.emc.com/docu54317_VSI_for_VMware_vSphere_Web_Client_6.2_Release_Notes.pdf

VSI 6.2 Download:

https://download.emc.com/downloads/DL54375_VSI_for_VMWare_vSphere_Web_Client_6.2.ova

VSI 6.2 Product Guide:

https://support.emc.com/docu54147_VSI_for_VMware_vSphere_Web_Client_Version_6.2_Product_Guide.pdf?language=en_US


VPLEX/RecoverPoint Integration with XtremIO – A customer case study

 

It’s not a secret that one of my favorite products within EMC is VPLEX. It’s a product that means different things to different people: some look at it as a DR solution, while others see it in its full glory as a distributed cache that virtualizes your underlying storage and provides an Active-Active site topology. So, for example, you can have site A in NY distributed to site B in a remote location (up to 10 ms latency is the supported configuration as of 07/2014).

“But doesn’t the latency between the sites defeat the point of using an AFA like EMC XtremIO??”

Nope. You can have your site A VMs, DBs, etc. doing their thing in site A and, in the event of a site failure, leverage vSphere HA to restart them in the remote site.

Or…

You can leverage the VPLEX RecoverPoint splitter, replicating to a remote site in an async fashion. In fact, I have just the customer doing this to tell you about…


CLAL INSURANCE

The Tel-Aviv-based Clal Insurance is Israel’s leading insurance, pension, and financial services group. The Clal Group holds a 19% share of the Israeli insurance market and manages hundreds of billions of shekel-denominated assets. Haim Inger is the CTO at Clal Data Systems, the IT company that provides all IT services to Clal Insurance.

Their Challenges

For Inger, the main storage-related challenge facing his organization was end-of-month reporting on the life insurance systems. Even with high-end storage in the data center, these core applications needed better performance.

“The end-of-month jobs would start around midnight, and they would not end until around 2:00 p.m. the following day,” he recalls. “The result was that almost 600 users were unable to do their work for most of the day on the first of every month, and customers were unable to get any service during that same period of time.”

This situation was costing Clal Insurance an estimated $900,000 per year in lost business and productivity. Inger could see that the main bottleneck was storage, and he was looking for a solution that would make it possible to finish these jobs much earlier.

The Solution

The answer was an EMC XtremIO All-Flash Array. Clal Insurance started by deploying a two X-Brick cluster (X-Bricks are the basic scale-out building blocks of XtremIO arrays) and replicating it locally with EMC VPLEX to another two X-Brick cluster. The local VPLEX replication of the X-Bricks was in addition to a full disaster recovery (DR) backup system using EMC RecoverPoint.

With this configuration, which provides both data availability and storage virtualization for data mobility, Clal Insurance was able to dramatically reduce the time needed to run the end-of-month reports. “Our goal was to finish the reports around 8:00 a.m., even though users typically arrive at work an hour earlier,” says Inger. “We actually finished at 5:30 a.m.”

And the story gets better. “We have seen the same benefit in an even more demanding scenario, which is end-of-year reporting,” Inger continues. “On a usual basis for end of year, we would be shut down for 24 hours. Thanks to XtremIO, we were able to open the systems at 6:20 a.m. on January first of this year. Even with local replication using VPLEX, plus RecoverPoint backup to our DR site, the reporting goals and online day-to-day work were not affected. We don’t see any impact at all.”


Clal Insurance tried several ways to solve the end-of-month reporting issue before deciding to deploy XtremIO. First they installed more flash drives in their existing on-premises storage. This approach resulted in small gains (instead of 2:00 p.m., the jobs finished at around 12:00 p.m.), but it was not enough. Then they started looking at All-Flash Arrays, including XtremIO and others.

“The reason we decided to go with XtremIO, even though it wasn’t yet generally available, was the product’s ability to coexist with our current environment,” says Inger. “We already had VPLEX installed, and our existing DR configuration was based on VPLEX and RecoverPoint. The only solution we saw that was really plug-and-play was XtremIO.”

Clal Insurance evaluated two X-Bricks for almost six months before putting XtremIO into production. “We tested two things,” explains Inger. “First we tested the stability, to see that we didn’t get data corruption, downtime, or anything like that. XtremIO was very stable, even during the early beta phase. The other thing we checked was performance. Depending on the application, we got from two to ten times better performance with XtremIO compared to our existing storage environment.”

Specifically, the best average read time on the company’s existing high-end storage was about 5 ms. Clal Insurance was looking for at least 3 ms; they achieved 1.5 ms with XtremIO.

Inger says XtremIO was the right choice. “From the moment we got the X-Bricks and installed them physically, it took us less than a week to move the first production database over, and then an additional two weeks to move the other four databases, a total of 14 terabytes,” he says. “And it just worked. I didn’t have to reinvent or rearchitect my DR system or anything else. With any other solution from IBM, NetApp, or the All-Flash Array start-up vendors, I would have had to reinvent almost everything about backup and DR. What’s more, the management is any storage guy’s dream. XtremIO is the simplest storage system I ever saw anywhere to manage.”

 


Unlike other insurance companies, Clal Insurance is 100% virtualized. The company has more than 1,300 servers, all using the latest VMware solutions. XtremIO fits right in. “The benefit of XtremIO for a virtualized environment lies in its Inline Data Reduction capabilities,” says Inger. “Our database servers previously consumed about 120TB of Fibre Channel disk. With XtremIO, we consume only about 40TB of SSD because of the data reduction ratio. That is a real savings of money and footprint, with much better performance than we had before.”

Inger purchased several additional X-Bricks recently. “All of our database virtual machines are going to reside on XtremIO,” he says.

In summary, XtremIO is used for two main purposes at Clal Insurance. The first is to accelerate the day-to-day use of applications and to super-accelerate the nightly and end-of-month jobs of the life insurance applications. The second is to accelerate the deployment of the company’s virtual servers with as small a footprint as possible on the storage side.

“The greatest benefit we have realized from our XtremIO system is the ability to serve our employees and our insurance brokers as fast as possible, and with no downtime, for end-of-month batch processing times,” says Inger. “This project really shows that investing in infrastructure can bring great ROI to our company while improving the overall service we give our customers. And we have found XtremIO to be totally reliable, with zero problems.”

According to Inger, XtremIO has a bright future at Clal Insurance. “I plan to move all of the mission-critical databases and most of the mission-critical applications to XtremIO over the next 12 months,” he concludes. “In my opinion, everything will go there within three to four years.”
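As an aside, the 120TB-to-40TB figure quoted above works out to a 3:1 data-reduction ratio. The quick arithmetic:

```python
# Figures quoted by Clal in the case study.
before_tb, after_tb = 120, 40

ratio = before_tb / after_tb                         # 3:1 reduction
saved_pct = 100 * (before_tb - after_tb) / before_tb  # capacity saved

print("data reduction ratio: %.1f:1, capacity saved: %.0f%%" % (ratio, saved_pct))
```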

Below you can also see a demo of VPLEX + the RecoverPoint splitter on XtremIO.


XtremIO 3.0–Improved performance and Compression

 

 

A big part of the 3.0 release was the added ALWAYS INLINE compression and the extra performance boost on the SAME XtremIO hardware.

Aren’t these supposed to SLOW you down, let alone provide better performance??

Not for XtremIO. We have the best R&D team I have ever worked with; if you follow my tweets, you know that I mean it. One evening I sent feedback on a 3.0 beta build, and the next morning a new build with the feature I had asked for was in. That’s agility, and that’s EMC letting us work with a “startup mentality”. This is not a joke or a marketing phrase; it’s the way EMC acquires companies.


Compression. Yes, it’s a big one, and like anything else in our product, we deliberately took the time to make it right. If you follow our messaging, we keep saying that the first GA release was about architecture, the cornerstones, if you like, on which we can then build new features. And boy, we did. Compression for us, like deduplication, is always on; we do NOT turn off or throttle data services, because we don’t want to go back to the old days of unpredictable performance. From a price perspective, this feature is free, and it will allow you, the customer, to reduce TCO even further by putting more VMs, more DBs, and more VDI desktops on the same XtremIO array.


MORE Performance

Again, we were able to do the impossible: give you more performance on everything using the SAME hardware. Things that are heavily dependent on performance, like DBs (IOPS and bandwidth), now work even faster.

Want to see a demo of this? Click the demo below; it will show you a DB working on the 2.4 XtremIO software that is then Storage vMotioned to another X-Brick running the 3.0 code…

 


XtremIO 3.0–The Starter X-Brick

 

“Good things come in small packages” is a common phrase. In many cases, the small package is “good enough”, but nobody ever told us anything along the lines of “small packages may perform like the big packages”.


Enter the Starter X-Brick. It contains half the raw capacity (5TB) of its bigger brother (10TB) and a quarter of its even bigger sister (20TB), and it’s ideal for small POCs and for environments that do not need all the physical capacity, like small(er) VDI deployments or smaller DBs. The crazy part is that, from a performance point of view, it performs in exactly the same manner as its bigger brother and sister.

“ok, my VDI/DB/VSI environment was small at the time I purchased the Starter X-Brick and I now need to grow”

 


No problem

You can order the other half of the SSD drives and grow to a full X-Brick, ONLINE and non-disruptively, TODAY.


XtremIO 3.0–Why Scale Out Matters

 


As part of the mega launch today, we announced XtremIO 3.0. The interesting thing about this release, for me, is that we took what was already the number 1 AFA in the market and made it better, better on the existing hardware, which is no coincidence when you design your product on commodity hardware and let the magic happen in software. Part of this announcement was the new snapshot capabilities, which I have already covered here:


http://itzikr.wordpress.com/2014/05/12/xtremio-redefining-snapshots/

But here’s another twist on the same topic. XtremIO is a true Scale-Out storage array. That means that with every X-Brick you add to the cluster, your capacity grows IN ADDITION to the performance. This is possible because each X-Brick contains two true Active-Active controllers.
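That linear claim is easy to model. The per-X-Brick figures below are purely illustrative placeholders, not published specs; the point is that both columns grow with every brick added:

```python
# Illustrative per-X-Brick figures; real numbers depend on model and workload.
CAPACITY_TB_PER_BRICK = 10
IOPS_PER_BRICK = 250_000

def cluster_totals(bricks):
    """In a true scale-out design, each added X-Brick (two active-active
    controllers) contributes both capacity and performance."""
    return {
        "x_bricks": bricks,
        "controllers": bricks * 2,
        "capacity_tb": bricks * CAPACITY_TB_PER_BRICK,
        "iops": bricks * IOPS_PER_BRICK,
    }

for n in (1, 2, 4, 6):
    print(cluster_totals(n))
```

Contrast this with a fixed two-controller array, where added shelves grow only the capacity column.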

Now, one of the claims I hear from other vendors is “who needs so much performance?” or “a two-controller active/passive (ALUA) architecture is good enough!”

Here is why they are wrong.

Enter the iPhone 3GS. It was the first iPhone I owned, and for the time it came out, it was GOOD ENOUGH. The apps were snappy, and iOS itself was pretty responsive.

But something happened on the way to paradise

Apple introduced the iPhone 4 and, gradually, my iPhone 3GS became outdated. It actually became SLOWER: my mail app, and iOS in general, became slower. And that begs the question: WHY??

The reason is simple. We are living in an era where performance is a relative term, and the moment a new piece of hardware comes out, the developers who write applications for it quickly leverage the performance capabilities of the new, faster hardware.

OK, but how is this related to XtremIO 3.0 and, more importantly, to snapshots??


Well, think about a production DB. It could be other environments as well, but just for the sake of the example, let’s stick with a DB.

In many cases, many spinning drives are needed just to accommodate the performance needs of the core DB, and if you start talking about cloning your DB, the operation itself takes a lot of time and, more importantly, consumes so much more space, which in many cases ends up limiting the number of DB clones you can offer.

“But I thought dedupe/compression resolves this, no?”

Dedupe and compression are data reduction technologies. They will definitely help lower the footprint that the core DB and its clones consume, but how do you drive the IOPS/bandwidth for all of these clones??

ENTER Scale Out


XtremIO scales out from the (new) Starter X-Brick to 6 X-Bricks (12 Active-Active controllers), which you NEED in order to drive all of these copies. So now you can actually do very interesting things, like giving each developer their own DB instance!

So the next time a storage vendor tells you that a two-controller, let alone an Active-Passive, architecture is good enough, just tell them:

“The iPhone 3GS was also good enough” …back in 2009.


VMworld 2014, The EMC Sessions


Hi,

Like every year, we will have a strong attendance, but I personally feel this year is a bit special for EMC. From SDS to All-Flash Arrays (AFAs) to hybrid clouds, the market is clearly changing; we each see it every day, and it’s a lot of fun to be part of it all.

For your convenience, I have put together all the EMC sessions and/or sessions that have EMC employees presenting. The list may change a bit toward the actual date, but here’s the rough version:

INF1864-SPO – Software Defined Storage – What’s Next?

There is a lot of disruption in the domain of storage and information persistence ranging from abstraction and automation through to how storage is architected.   New architectural models such as “software plus commodity hardware” and “gateways to cloud models” are changing the landscape.  Foundational technology such as NAND flash and post-NAND options are changing the way persistence models work.  New at scale analytic and application workloads and applications are driving new protocols and expectations.   Are you confused about the future of file, block, pNFS, Object storage, HDFS, iSCSI, FCOE and where to leverage flash as well as Virtual Volumes?  Well you are not alone.

While there has been a lot of focus on Software Defined everything, the detail of what is software defined storage (SDS) has been getting loads of attention as well.  In this session, we examine where we are today with SDS and what direction this may take in the not so distant future.  We’ll look at the near term – and the longer term – from architecture to foundational technologies, as well as what is the latest in the most advanced R&D labs and exotic use cases.

Being practical, we will review what options are available today with SDS, Flash, HDFS and other emerging technologies in the space of “information persistence” – and an eye towards Virtual Volumes. This session will help clear up some common confusion by looking at customers’ use of these technologies in practice.

There will be no marketing or product pitches in this session.  Instead we’ll focus on demonstrations of new capabilities and a sneak peek into what the future has in store for consumers of storage resources in the world of virtualization.

NET1861-SPO – Automating Networking and Security Services with NSX for vSphere and vCenter Orchestrator (vCO)

Speeding up full stack application deployment is the holy grail of private cloud environments. The setup of networking and security for an application has historically been a tough nut to crack. With NSX for vSphere along with vCO, that no longer needs to be the case. We will quickly describe the base NSX design that sets us up for automation. Then walk through an application deployment scenario, breaking down the automation setups as we go. We will discuss the NSX API’s, how best to use them in vCO, and dive into vCO workflow design. We’ll get into the details of automating the creation of an Edge Services Gateway for Routing and Load Balancing, as well as the use of the Distributed Firewall and the Distributed Logical Router. Network and Security services have historically been some of the most difficult pieces of application deployment to automate. This all changes with NSX for vSphere and vCenter Orchestrator.

EUC2712-SPO – The Impact of Flash on virtual server, database and virtual desktop workloads

Flash is fast, everyone understands that flash can completely revolutionize your virtual datacenters in ways that weren’t achievable before. This session will discuss, through real customer examples, the following four use cases:

  1. Virtual Servers – Are they a good candidate for leveraging an all flash array? What is the value add for both the production and development and test environments?
  2. Databases – Apart from performance, how else will your Oracle / MS SQL database benefit from the move to an all flash array?
  3. EUC using VMware Horizon 6 and Citrix XenDesktop – What were the historical challenges with designing and implementing VDI and streaming applications?  How did we overcome those challenges with an all flash array and what have we learned so far?
  4. Virtualized SAP environments – Can you finally give your developers enough landscapes to work on?

 

INF3014-SPO – Scripting and Programming Cloud Automation – How To Power Hour

Presented in an un-conference format and driven by audience entries on a whiteboard and live via Twitter, 3 experts in managing large-scale VMware-related clouds will answer *your* questions on how to approach management using automation and tools. We’ll cover topics ranging from controlling backups with PowerCLI, deploying servers with tools like Puppet and vCenter Orchestrator, building performance collection with Python, and whatever else the audience can think of. This breakout will include both discussions of approaches, as well as recommendations for specific resources, toolkits, modules and the like to solve your problems. Be prepared to take notes and pictures as we whiteboard and discuss live.

INF1736 – Designing Next Generation Software-Defined Data Centers: A Panel with VMware Certified Design Experts

Successfully designing a next generation Software-Defined Data Center demands attention to details not present in “traditional” Data Center Virtualization. VMware Certified Design Experts Scott Lowe, Jason Nash, Matt Cowger, Josh Odgers, and Jonathan Kohler will lead a panel discussion on the design and architecture required to successfully implement and operate a Next Generation SDDC. The panel will answer your questions on topics such as storage, networking, security, compute, consumption, orchestration, automation, and much more.

SDDC2649 – AlwaysOn Meditech: Is Your EMR protected?

Learn how one of the world’s largest Meditech deployments is protected from a business continuity / disaster recovery standpoint. Technologies such as VMware Site Recovery Manager (SRM), RecoverPoint, SRDF/A, and consistency groups will be discussed. VMware, EMC, and Meditech have worked together to validate a solution for Meditech DR; hear from a customer how it works in the real world! Both Meditech versions 5.x and 6.x will be covered. Understand the requirements for advanced capabilities such as being able to select a particular point in time for recovery. Even if you rely on another EHR/EMR system, many of the principles discussed will apply. Meditech remains one of the top deployed EHR/EMR systems by healthcare customers today. However, many customers may be unaware of the best practices in handling both planned and unplanned downtime in the Meditech environment. For example, Meditech has a 7-page shutdown procedure for 6.x systems. During DR, would you like to automate the process to ensure the best chance of a successful recovery? Learn how with VMware SRM and EMC replication technologies. Session background: in December 2013, VMware and EMC published a whitepaper in conjunction with Meditech around validated recovery procedures for a Meditech environment. Even a small Meditech system can easily be 10+ VMs; numbers vary for Category 1-6 Meditech environments. This whitepaper included recommended recovery order for Meditech server roles, SRM best practices, tips to decrease recovery time, etc. The session will cover the overview recommendations from this whitepaper and best practices developed in real-world DR testing in a customer environment.

VAPP1314 – Scaling Your Storage Architecture for Big Data – How the Isilon Server Fits Well with Virtual Hadoop Workloads

This session goes through the big issues and problem statements with data for Hadoop. These are the potential triple replication of the input and output data, the high bandwidth requirements and the need to scale the storage independently of the compute layer as the system grows. These are problems that local direct attached-storage (DAS) storage has been positioned to solve in the physical world. This talk explains a set of three-four different architectures for storage underlying a Hadoop implementation, which includes DAS, but goes beyond that to show that you can have the benefits of centralized management with the bandwidth and scalability that you need for Hadoop. The talk will show a set of reference designs and will include a customer scenario, with a potential customer speaker. The talk concludes with a set of best practices derived from measurement work that has been done on the various storage approaches in the Big Data/Hadoop world. We give references to technical papers and sites where more information can be found.

VAPP2309-SPO – Reduce Your Business Risks and Lower Deployment Costs by Virtualizing SAP HANA

In the fall of 2013, and during the second phase of EMC’s 100% virtualized SAP project Propel, EMC IT was asked to install and accelerate the rollout of Business Planning and Consolidation (BPC) on HANA, SAP’s in-memory database. The timeline was set, the budget was exhausted, and additional staff resources were unavailable. If we were not able to deliver the solution, along with a mission-critical SLA, it would disrupt enterprise planning and negatively affect downstream business programs. Despite some uncertainty in the vendor community, we made the decision to virtualize BPC as a single-instance VM on our production ESXi cluster, leveraging existing HA and DR facilities, server capacity, as well as standard cloud architecture principles. We immediately built and released development instances to our users, allowing them to rapidly configure, test, and train on the new HANA technology. At the same time the standard VM build process was used to instantiate the production instance and set up its disaster recovery target. In less than three months the business was able to complete all phases of their project, with the end result being a high-performing and extremely stable HANA environment on vSphere. The success of the program provided the confidence and experience that allowed us to embark on the next HANA deployment – a new business data warehouse (BW) on a scale-out HANA configuration.

VAPP1340 – Virtualize Active Directory, the Right Way!

The Active Directory infrastructure is considered very delicate, sensitive, and “special” in most Windows enterprises. Customers who have successfully virtualized the AD Domain Controllers tend to be less reticent about virtualizing any other workload in the enterprise. This session will provide education and reassurance to the attendees that it is safe to virtualize their domain controllers. This session will aim to overcome common objections to virtualizing ADDS, instill confidence that VMware supports the new VM GenerationID for cloning Windows Server 2012 domain controllers, and help the attendees better understand the capabilities and limitations of the new safeguard feature in Server 2012.

SDDC1176-SPO – Ask the Expert vBloggers

Back for its 7th year at VMworld, Ask the Experts returns with an awesome panel of the industry’s top bloggers. In this session there are no PowerPoints, no sales pitches, and no rules! Experts in the industry are here to answer YOUR questions while having some fun in the process. Bring your topic: anything from the Software-Defined Data Center to End-User Computing to 3rd Platform Applications… storage, networking, security. No question is off limits.

VAPP2272-SPO – Oracle 12c Multitenancy vs. OS-Level Virtualization (ala VMware vSphere): A Balanced Comparison

Oracle has been talking a lot about cloud of late. Multitenancy is a highly touted feature of Oracle 12c which (supposedly) enables the creation of a private database cloud. But does it really? OS-level virtualization (such as VMware vSphere) has been hugely popular for creating private database clouds for many relational databases, including Oracle. Will Multitenancy replace OS-level virtualization? Will Oracle take over the consolidation, virtualization, etc., of Oracle databases? In this session we compare and contrast Oracle 12c Multitenancy with OS-level virtualization. In our view, a hypervisor-based solution is still the obvious way to go for several reasons:

* VMware’s version of live migration (vMotion) is far more capable and flexible than Oracle’s equivalent offerings in this area. In particular, PDB / Multitenancy has no built-in live migration feature at all. In order to get close to what VMware provides, the Oracle customer must use client-side load balancing, which requires software changes on the development site. VMware vMotion simply works.

* Cloning a PDB is not a zero-downtime event. Oracle may eventually fix this, but cloning a running database using storage-based semantics, and mounting the resulting database onto a VM, is still more seamless and simple than PDB cloning.

* The cost of Oracle 12c Multitenancy licensing is far higher than VMware vSphere.

In this session, we will explore the Oracle 12c Multitenancy feature in detail, look at how it works in both a physical and virtualized context, and make recommendations for creating an Oracle private database cloud. We will also present the results of performance / functionality testing showing live migration of VMs running Oracle with zero downtime and minimal performance impact.

 

SEC2296 – The Insider Threat and the Cloud: The Harsh Reality in the Wake of Recent Security Breaches

We have seen the biggest breaches in history over the last year, with examples like Edward Snowden and large-scale breaches and theft of data at several major retailers. Whether you are worried about malicious employees, theft of personal data, or outside attackers stealing your intellectual property, insider threats are the number one cause of breaches with significant impact to organizations. Virtualization and cloud infrastructure essentially concentrate compute, networking and storage onto a single software platform, thereby concentrating risk. What would happen if a malicious employee was a virtualization administrator, or an outside attacker was able to steal the credentials of a company’s private or public cloud? Also, misconfiguration continues to be the number one cause of datacenter downtime. With concentrated environments like cloud and converged infrastructure, a simple mistake can affect hundreds or thousands of VMs, not just one. This panel will be comprised of a group of experts from VMware, EMC, Coalfire, and Forrester Research. The panel will discuss the dynamics driving organizations to move to the cloud and how insider threats can be amplified in cloud environments if not addressed proactively. In addition, the panel will discuss: 1) security as a key design point in the cloud; 2) threat identification and characterization for cloud-based operations; 3) proactive security controls to protect against insider threats; 4) monitoring to detect insider threats; and 5) data security to proactively reduce the effective threat surface.

SDDC3041 – Building ITaaS – A Self Service Framework for IT Administrators

Imagine a world where a Virtual Administrator could simply order a new blade or datastore and, in less than 1 hour, an e-mail arrived confirming it had been added to the vSphere cluster. Or a Database Administrator needs a new database server and simply orders it from a web page. EMC IT built a private cloud to better meet the needs of the business, but is now using that powerful solution to make IT’s job easier and empower administrators to be self-sufficient. This session will cover how we built this capability, using tools like vCloud Automation Center, vCloud Operations Manager and Puppet, and offer a framework that anyone can use to deploy self-service for IT in their existing data center.

 

 

 

 

 

 

 
