Tuesday, December 16, 2014

The Evolution of Atlantis: USX

Atlantis Computing delivered a great event at Woodbine Racetrack, organized by Ross Di Stefano, Canadian Sales Director at Atlantis Computing, and Lisa Kramer, Inside Sales Representative.  Attendees had the opportunity to meet Seth Knox, VP of Products at Atlantis, and hear him speak about Atlantis USX.

Atlantis USX is a Software Defined Storage (SDS) solution that has been generally available since January of this year.  Interest and adoption have been strong.  In Seth's presentation he explained that virtualization has driven both storage capacity and performance requirements.  The challenge with traditional storage vendors is that the pace of innovation has been slow due to the dependency on hardware.  By decoupling the value of storage solutions into software, innovation can happen at a much more aggressive rate.  This is necessary as average storage requirements continue to grow at 40% a year, meaning most customers double capacity roughly every two years.  Although SDS is used to describe many things, there are certain common characteristics necessary for true software defined storage. 
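The doubling claim is simple compound-growth arithmetic. A quick sketch (the 40% annual growth figure is the only input; the rest is just the standard doubling-time formula):

```python
import math

# Years to double at a compound annual growth rate r:
#   (1 + r)^t = 2  =>  t = ln(2) / ln(1 + r)
def doubling_time(annual_growth_rate: float) -> float:
    return math.log(2) / math.log(1 + annual_growth_rate)

print(round(doubling_time(0.40), 1))  # 40% annual growth -> capacity doubles in about 2.1 years
```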

Storage Consolidation: You can apply Atlantis USX to any storage pool and manage it as a single set of resources.  Atlantis will have the capability to incorporate 3rd party Cloud storage pools with a Cloud gateway feature releasing next year.

Support for Hyper-Convergence: With USX you can take Direct Attached Storage, add true data services such as inline deduplication and performance acceleration, and present it as enterprise-class storage volumes.

Cloud Management: USX fully supports a RESTful API to enable automation and control of the software.
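To illustrate what a RESTful management API makes possible, here is a minimal sketch of building an authenticated volume-creation request. The host, port, URL path, payload fields and bearer-token scheme are all hypothetical, invented for illustration; this is not the documented USX API, but the pattern applies to any RESTful storage controller:

```python
import json
import urllib.request

# Hypothetical manager address -- an assumption for illustration only.
USX_MANAGER = "https://usx-manager.example.com:8443"

def build_create_volume_request(name: str, size_gb: int, token: str):
    # Build (but do not send) an HTTP request with a JSON body and an
    # auth header -- the generic shape of a REST automation call.
    body = json.dumps({"name": name, "sizeGB": size_gb}).encode("utf-8")
    return urllib.request.Request(
        url=f"{USX_MANAGER}/api/volumes",   # hypothetical endpoint path
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # assumed auth scheme
        },
    )

req = build_create_volume_request("hybrid-vol-01", 500, "example-token")
print(req.get_method(), req.full_url)
```

From here the request could be dispatched with `urllib.request.urlopen(req)` or wrapped in any orchestration tool.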

USX fully supports VMware's Virtual SAN, provides additional data services capabilities, and enables presentation of NFS/SMB and iSCSI storage, which are not native to the VMware feature set.  In addition, when vSphere 6 ships, USX will provide full support for VMware VVOLs as well.

Using the additional dedup data services, most customers report a 5:1 consolidation ratio after introducing USX.  With USX, Atlantis is broadening the use case for their software beyond End User Computing technologies such as virtual desktops or Server Based Computing (SBC).  While the use case remains strong for these technologies, USX enables hyper-convergence on server blades and provides non-persistent characteristics to persistent desktop pools (lower disk requirements at less cost).  Atlantis is also targeting other use cases, including performance-based database scenarios.
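A 5:1 ratio is easiest to reason about as logical versus physical capacity. A quick sketch (the 10 TB and 50 TB figures below are illustrative, not from the presentation):

```python
# With a 5:1 data-reduction (consolidation) ratio, five logical GB
# are stored for every one physical GB consumed.
def effective_capacity(raw_gb: float, reduction_ratio: float) -> float:
    return raw_gb * reduction_ratio

def physical_needed(logical_gb: float, reduction_ratio: float) -> float:
    return logical_gb / reduction_ratio

print(effective_capacity(10_000, 5.0))  # 10 TB raw presents ~50 TB logical: 50000.0
print(physical_needed(50_000, 5.0))     # 50 TB of VM data needs ~10 TB physical: 10000.0
```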

USX allows you to create high-performance and capacity volumes, or blend the underlying feature set of SSDs and disk to create hybrid volumes.  Atlantis USX enables you to scale up and out by adding performance and capacity.  If you have not had a look at USX, check out "Atlantis USX", or if you would like additional information on Software Defined Storage, check out this chalk talk.

Thursday, October 16, 2014

Thanks Halifax VMUG!

The Halifax VMUG group has been running strong for 6 years thanks to the dedicated efforts of Percy Gouchie, Greg Heard and Alex Stefishen. 
OnX sponsored the event and movie: The Equalizer starring Denzel Washington.  It was great to get the opportunity to present my VMworld Presentation to the user group out East. 
The event was well attended with over sixty members making the trek to Halifax.  John Deveau, an OnX customer, showcased many of the custom dashboards they have developed for their own use.  In addition Ryan Veino from VMware recapped some of the product announcements from VMworld. 
Feedback was excellent, with many attendees finding the content both relevant and useful to the VMware technology they are working with or considering implementing.  Many thanks to our hosts and sponsor, and to Lynn Morrison, one of the great Account Reps from OnX Enterprise Solutions, for helping put it together.

Friday, September 5, 2014

virtualguru.org @ VMworld 2014


Well, it has been a busy couple of weeks at the show, but it was a great event.  Stephane Asselin (@virtualstef) and I had a great time at our presentation during VMworld 2014. We had over 370 people attend in person; a special thanks to those who attended virtually. With well over 500 registered, we had lots of great questions afterward. For those who missed it, the recording of our session “EUC1289 - What’s New in End User Computing: Full Desktop Automation and Self-Service” has been released on the VMworld website (http://www.vmworld.com).


Not only was our session well attended, but we also had lots of support for our most recent publication “VMware Horizon Suite: Building End User Services”. 

We appreciate all the great feedback, and many thanks to those of you who came out on the last day for the book signing. We had a great time working with Eric Sloof (@esloof) and the crew of VMworld TV for the “VMware Press Trivia Challenge Book Giveaway Event”.

If you missed our discussion and interview regarding the book then you can catch it here on VMworld TV. 

I managed to get out quite a few posts for those of you following the http://virtualguru.org blog, but have more coming.  There was just so much information and so little time, so stay tuned for additional information from the show.

Wednesday, August 27, 2014

vCloud Automation Center Overview and a Glimpse into the Future

Virtualization on its own will not deliver on the software defined enterprise. Management through automation is the catalyst that empowers the software defined datacenter.

Management automation accelerates your service delivery model to your customers. It also delivers improved operational efficiency, better resource utilization, reduced complexity and standardization. The new vCloud Automation Center (vCAC) release amalgamated vCAC, vFabric Application Director and vCenter Orchestrator under a single framework. The new name for the suite is vRealize Suite (vCAC becomes vRealize Automation, vCOPs becomes vRealize Operations and ITBM becomes vRealize Business).

Unifying these products delivers a common service catalog of applications, infrastructure, XaaS and desktops. vCAC allows both single- and multi-tenant deployments. Services are defined through blueprints, which allow you to define infrastructure and layered application provisioning (leveraging Application Director). In addition, through the vCO integration you can define service blueprints, allowing you to define custom processes through either standard or customized workflows.

ITBM can be plugged into vCAC to auto-populate the cost profiles using standard and reference industry costs. ITBM also allows you to compare other public clouds' costs to ensure your resources are efficiently deployed.

Moving forward with vCAC, VMware is investing in comprehensive infrastructure management to ensure they can manage many platforms (hypervisor, converged infrastructure and public cloud) without compromising IT policies. VMware has also placed a strong focus on automated release management to deliver a DevOps solution using the product. The target is to deliver anything as a service and deliver on the IT as a Broker concept.

From a vendor perspective, support is coming for IBM, HDS and Fujitsu servers. NSX integration will become more seamless. VMware will build enhancements on other hypervisors and Clouds such as AWS. These orchestration plugins will be certified by VMware so it is simple to hook them in and start automating.

One area of focus is the ability to switch from on-premises to off seamlessly. This will include synchronization of blueprints between clouds. Blueprints will also be enhanced to provide infrastructure and application authoring that is simple and straightforward. In addition, the product will monitor configuration drift and auto-remediate it (very Puppet-like).

Work is being done to allow multiple catalog selections by the user that consolidates them using a single approval process ("similar to a shopping cart approach"). vCOPs will be more tightly integrated to allow you to look at resources and build in reclamation and right-sizing workflows through vCAC.


What is new with vCloud Suite - Presented by Karthik Narayan and GS Khalsa

The vCloud Suite consists of many products; this session will only cover the major ones. The presenters start with a definition of what "vCloud Suite" is:

"It is an integrated offering based on SDDC architecture for managing a vSphere environment."

The main components are: vCAC, SRM, ITBM, and vCNS (NSX and Virtual SAN can be used to extend the suite but are not part of it at this time).

Karthik introduces the SDDC framework. The foundation is based around defining business and investment priorities leading to key infrastructure initiatives. It is based on certain IT outcomes such as standardized, streamlined, secure, resilient infrastructure as well as automated services. The benefits of the SDDC framework are driving savings, providing quality control and, finally, delivering speed and agility.

What is new in the basic vSphere and vCenter stack is vCenter Support Assistant; it is a plugin which alerts you before a problem occurs, directs you to KB (knowledge base) articles and allows you to file an SR (support request) easily. It is targeted at administrators and managers to provide status and information on what is going on in the environment.

GS Khalsa is introduced to talk about vSphere Replication. vSphere Replication is VMware's host-based replication, which means it does not require any special storage. The RPO is 15 minutes up to 24 hours. The configuration is very simple. It is included in Essentials Plus and higher licensing and is fully supported on Virtual SAN. You can use it as an alternative or enhancement for backup or DR. There is no automation, so a recovery is done on a VM-by-VM basis. Once you get beyond 10 - 15 VMs you probably want to look at SRM. In 5.5 you can replicate to a VMware Service Provider or vCloud Air. In addition, with vSphere Replication you get improved reporting.

SRM is a DR orchestration solution. It will turn up and down VMs in a certain sequence and create a runbook for recovery. In addition, SRM integrates with over 50 storage vendor products. Most customers use SRM to do disaster avoidance or a planned migration from one site to another.

With SRM's test functionality you can create an exact copy that can be used for testing patching, for example. In SRM 5.8 there is a vCAC\vCO workflow plugin. A lot of work has been done on scale and simplification. With the workflows in vCO, you can build SRM into a service blueprint. You can also use this outside of vCAC within the vSphere client.

In addition, anything you can do with the vCO plugin you can also do with PowerCLI. SRM will protect 5000 VMs (formerly 1500) and will do concurrent recovery of 2000 VMs (from 1000). To accomplish this, enhancements were made to both vSphere and SRM. Batch commands are now sent for a protection group vs. one command at a time. SRM has been fully integrated into the vSphere Web Client. You can also now map entire subnets rather than mapping an IP to each individual VM or importing an Excel spreadsheet. In addition, SRM now has the option of an embedded vPostgres database, which simplifies the installation.

VMware has also revised the documentation for the Cloud Suite so it is focused at the suite level addressing things like the order to install the products vs. treating each product individually.



Tuesday, August 26, 2014

End-User Computing for the Mobile Cloud Era

VMware is building a complete portfolio that is all connected in three major pillars: desktop, mobile and content.  This session highlights a new suite "VMware Workspace Suite" with Horizon, AirWatch and Secure Content Locker.

Horizon 6 was launched in April and there has been tremendous interest from customers.  It delivers both VDI and app delivery in a single platform.  Horizon 6 provides a unified workspace and central image management that can be delivered on a Virtual SAN platform.  Virtual SAN eliminates complexity and cost.  In addition, with vCOPs you get management from the datacenter to the device.

VMware introduced Horizon DaaS on vCloud Air earlier this year.  Enhancements to this service include Cloud Bursting or monthly terms for seasonal use cases.  In addition Apps-as-a-Service will be available.  The service will also be expanded to EMEA.

The session makes mention of a customer using Mirage for a 60,000 - 70,000 seat environment.  The session moves to a live demo that starts from a Mac laptop accessing Workspace integrated with Horizon application delivery, showing the seamless Windows interface along with seamless printing.

The next application that is demoed is Lync running on a Mac.  The demo then moves over to a Chromebook with the NVIDIA integration.  The NVIDIA demo uses GPUs on hosts, standards-based codecs on the client, and HTML5 on the wire.  

Project Meteor was profiled again, which is the just-in-time desktop technology based on the ability to fast clone a VM in seconds (Project Fargo) and provision applications using CloudVolumes.  This allows the VM to be created a few seconds before the session request and destroyed after logoff.

CloudVolumes adds a second layer of abstraction for the applications.  It leverages a VMDK that can be mounted quickly.  You simply install the application into the VMDK (CloudVolumes refers to these as App Stacks) and link them back to the desktop.  This allows you to organize and manage applications in layers.  VMware recommends that you organize yourself into logical abstraction layers, so by department or user segment for example.  CloudVolumes actually supports both VMDK and VHD.  In addition they support RDSH servers.
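The layering model is easy to picture as a base image plus an ordered list of App Stacks, with later layers winning on conflicts. This toy model is my own illustration of the concept, not CloudVolumes internals; all the names are made up:

```python
# Toy model of application layering -- not CloudVolumes internals,
# just the idea: a base desktop image plus ordered app-stack layers,
# where a later layer overrides an earlier one on conflicts.
base_image = {"os": "Windows 7", "office": "none"}

app_stacks = [  # hypothetical stacks, applied in order
    {"name": "dept-finance", "apps": {"office": "2013", "erp-client": "9.1"}},
    {"name": "user-patch", "apps": {"erp-client": "9.2"}},
]

def resolve_desktop(base: dict, stacks: list) -> dict:
    desktop = dict(base)
    for stack in stacks:          # mount order matters: last writer wins
        desktop.update(stack["apps"])
    return desktop

print(resolve_desktop(base_image, app_stacks))
# {'os': 'Windows 7', 'office': '2013', 'erp-client': '9.2'}
```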

The discussion switches to mobility, which is very complex when you consider the number of devices, compliance, security and regulatory requirements.  AirWatch is designed with scale and multitenancy so you can separate different business groups and different compliance requirements.  There is also a strong focus on enablement to ensure you can unlock the power of the content for your customers or end users.

To deliver this, AirWatch starts with device management which is entirely customizable.  Once the device is managed you tie it to the framework.  This is all done on a consistent code base which allows AirWatch to scale.  The framework allows you to manage the full device or a container on the device.  

The cost of managing a mobile device is much less than a PC.  In addition, the device is not that important, as the data lives in the Cloud.  VMware is attempting to bring this concept to the desktop.  To do this VMware is looking at a mobile cloud architecture.

What VMware looked at was a few integration points that benefit end users and administrators.  For users it is the ability to access apps anywhere, SSO, file synchronization between devices and finally the user's social framework.

For administrators this is a centralized and consistent management framework, a service catalog and a single access point with policies.

The session ends with a summary of the announcements: VMware Workspace Suite, the partnerships with NVIDIA, Google and SAP, as well as Project Meteor.  


 

VMworld General Session (Day 2) - Live from VMworld

Ben Fathi, VMware's Chief Technology Officer, takes the stage.   Ben reiterates Pat's message on a liquid world that requires a friction-free IT system.  In today's session they will be delivering some demos of the products announced yesterday.  Ben introduces Sanjay Poonen, the Executive Vice President and General Manager, End-User Computing.  

Sanjay mentions that they have done three acquisitions.  Sanjay says the world is changing and cites the fact that there is more technology in today's automobile than in some of the original rocket ships.  VMware's mission is Secure Virtual Workspace for Work at the Speed of Life.  Sanjay is here to explain what VMware is doing in three key areas: Desktop, Mobile and Content.  In each category the software must be world class but tie into an SDDC architecture.  VMware is bringing the entire stack together.

Sanjay mentions that with Horizon 6 they delivered unified VDI and app publishing on a single platform.  In addition they have launched DaaS using DeskTone and today are announcing Application as a Service from VMware vCloud Air.  He teases a CloudVolumes demo.  A video is shown of Google Chrome users leveraging View desktops on the NVIDIA GRID to deliver a rich 3D user experience.

Sanjay introduces Kevin Ichhpurani, the SVP from SAP, to discuss the VMware and SAP partnership.  Kevin explains that mobility is key in their customer base.  Through the partnership SAP is integrating their APIs with VMware AirWatch to deliver seamless management.  The benefit for customers is that with pre-integrated solutions there is lower TCO.  

Sanjay mentions United Airlines, one of Apple's largest iPad customers, which manages its devices through AirWatch for security.  Sanjay announces a new suite which is modelled after the vCloud Suite.  Sanjay mentions that they have leapfrogged the competition and are now number one according to the analysts. 

Sanjay introduces Kit Colbert (@kitcolbert), the CTO of End User Computing, who is going to dive into some sample use cases.  Kit mentions healthcare and works through a day in the life of a doctor.  It begins with VMware Workspace Portal for using applications.  The doctor then moves to his iPad with no loss of applications.  Then the doctor leverages a View desktop for high-resolution imaging, and finally Secure Content Locker is used to transfer files through AirWatch.  The final piece is a demonstration of both doctors collaborating on the shared image.  In the end this is about more time to focus on treating patients.

Kit then shows a demo of CloudVolumes.  From within the CloudVolumes admin UI the user is entitled to an application, enabling the application to integrate into a View desktop.  CloudVolumes leverages hypervisor technology to deliver the applications vs. streaming or pushing.  The old delivery methods are not very scalable.  Kit mentions Project Fargo, which clones a running virtual machine in about a second.  Project Fargo will be used to clone a virtual machine while CloudVolumes delivers the applications in seconds.  The desktop is then destroyed when the user logs out.  This really simplifies management and provides tremendous cost reduction along with better security.  This holistic approach to "just in time desktops" is under development and is called Project Meteor.

Sanjay summarizes with the value message which provides customers with a complete stack from SDDC to EUC.

Raghu Raghuram, the Executive VP of the SDDC, and Ben return to the stage to demonstrate EVO:Rail.  EVO:Rail is a 2U form factor built from 4 independent nodes running compute, storage and networking (leveraging Virtual SAN).  It can perform a rolling upgrade so that maintenance can be done in real time. It can scale up to a 16-node cluster.  The web UI has been simplified but allows some customization.  The entire process to stand it up is 15 minutes.  

EVO:Rack is the second member of this family and comes with vCloud Suite, Virtual SAN and NSX.  It is built to be deployed within 2 hours.  It also includes rack management, and 20% of the labs at VMworld are running on EVO:Rack. 

Raghu mentions that VMware has put a tremendous amount of effort into OpenStack since joining the OpenStack project.  Raghu believes that the best way to run OpenStack is on VMware.  This will be a fully supported offering from VMware.  The benefit from an IT perspective is that you do not need new skills to run OpenStack.

Raghu goes on to mention the vSphere 6 beta which will provide 4 vCPU support for FT.  In addition vSphere 6 will have cross vCenter vMotion or the ability to migrate VMs from one vCenter datacenter to a second vCenter datacenter.  In addition with long distance vMotion you can literally migrate a VM from coast-to-coast.  With NSX you can also ensure that none of the network properties change irrespective of the distance between networks.

Ben explains that VMware is working very hard to deliver "Containers without Compromise" by running them in VMs.  Ben makes the point that containers on their own don't offer many management points; however, running them on VMs does.  VMware is working to make containers a first-class citizen on their virtual infrastructure.  Ben announces that they are working with Pivotal, Google and Docker to run containers on the SDDC.

Raghu highlights the vRealize Suite which provides management of the SDDC.  You can sign up for the vRealize beta now.  





Monday, August 25, 2014

VMworld General Session - Live from VMworld

"The limits of the possible can only be defined by going beyond the limits"

Robin Matlock, the Chief Marketing Officer, takes the stage.  Over 22,000 attendees representing 85 countries around the world are in attendance. Robin explains that change is either a barrier or an opportunity.   This week is about pushing boundaries.  This week the SDDC will be covered in depth.  In addition, Hybrid Cloud and End-User Computing will be a major focus.  

Robin introduces Pat Gelsinger, Chief Executive Officer.  Pat mentions that he has been the CEO for two years now.  Pat describes a liquid world where static structure is giving way to the dynamic.  He uses the analogy of education no longer taking place in a classroom but extending to where you are.  Work, retail and all aspects of life are becoming more fluid and dynamic.

He mentions several customers, including the Ministry of Education in Malaysia pushing curriculum through the remote areas of Borneo using View and virtualization.  He mentions the advent of companies like Uber having a higher market capitalization than traditional brick-and-mortar companies.

In this new world the brave will survive.  He cites several examples of bravery.  Pat mentions, however, that bravery is seldom a solo act.  When Pat talks about bravery he thinks about his team of engineers delivering the next generation of software.  Every day Pat challenges this group of engineers to be disruptive for the benefit of VMware's customers.

Pat mentions that they are eating their own dog food at VMware: their internal ERP system is migrating to vCloud Air, and AirWatch will enable their own BYOD program at VMware.

Pat mentions Apollo Education Group, which delivers university curriculum over the internet and has migrated from public to private cloud, reducing costs while improving services.  He also introduces Tim Garza from the California Department of Water Resources, who have become IT leaders using VMware software.

Pat explains that right now the discussion is locked in "or", i.e. on-premises or off.  VMware believes that they can be the bridge between these two and deliver the "and" vs. the "or", delivering on a hybrid cloud model.  VMware is executing on the SDDC, Hybrid Cloud and End-User Computing.

The announcements around the products begin with a focus on vSphere.  Pat mentions that they are announcing vCloud Suite 5.8 and are in beta with vSphere 6.0, which adds increased scale.  In addition, a Virtual SAN 2.0 beta is in the works, and with vSphere 6.0 they are delivering Virtual Volumes.  He mentions that the vCenter Operations Suite will become the VMware vRealize Suite.

Pat mentions that there are three ways to deliver SDDC: 1) build your own, 2) converged infrastructure, and now 3) hyper-converged infrastructure with VMware EVO. EVO is SDDC packaged with hardware in a single solution delivered through VMware's OEM partners.  VMware EVO:Rail is profiled: designed for SMB, deployable in 15 minutes, based on a physical appliance that will run 100 VMs and sold in clusters of 4 nodes.  The second member of EVO is EVO:Rack, which is designed to scale data centers in two hours.  Another component is that EVO will be part of the Open Compute Project as well.

But why do infrastructure?  To deliver applications.  Pat announces VMware OpenStack, an OpenStack and VMware integrated product that delivers both VMware APIs and OpenStack APIs.  This enables OpenStack development natively on VMware.

Pat then announces a common platform for all applications based on containers:  Docker, Google and Pivotal container constructs will be fully supported on VMware.  This will be done through the OpenContainer API.  They are demonstrating "VMware Fargo" which enables containers to be delivered on VMware with little or no overhead.

Pat then uses the analogy of an egg, which is hard and brittle on the outside and soft on the inside.  This is similar to the way datacenters are designed.  This means that once breached, you are vulnerable.  Pat explains that with NSX and the SDDC, micro-segmentation is possible and is a fundamental shift in how security is applied.  Pat explains that VMware was positioned in Gartner's Magic Quadrant for networking, which is the first time a software company has appeared in this quadrant.

Pat shifts to EUC.  Pat explains the number of innovations and acquisitions, including the recent acquisition of CloudVolumes.  In tomorrow's session we will hear from Sanjay on how this vision is evolving. 

Pat then announces that vCHS is now VMware vCloud Air and a commitment to bring all their products to Air.  Pat then introduces Bill Fathers, the Executive Vice President and General Manager of Hybrid Cloud.  Bill says that they launched Air 1 year ago.  

vCloud Air is based on compatibility with the enterprise.  If you do not have compatibility, then you get a very fragmented cloud.  Think about it from the perspective of the enterprise applications: for the SAPs and Oracles of the world it is very important for the infrastructure to be the same.  vCloud Air has a distinct advantage as you can easily meld the application with the pieces and data that live in the enterprise.  In addition, because this is the foundation, you can quickly add value services on top.  

Bill then announces DevOps as a Service, Database as a Service (including DR), Object Storage or Cloud storage (based on EMC ViPR), Mobility Services (based on AirWatch) and Cloud Management (based on VMware vRealize Suite).  In addition, vCloud Air OnDemand is highlighted, based on a true pay-as-you-go service.  You can try the beta at http://vmware.com/go/ondemand

Pat concludes with a summation of the announcements and how VMware provides choice without compromise, opportunities to build the Hybrid Cloud of the future and the ability to connect users securely to the datacenter.  Pat then challenges the audience to go bravely into this new future and introduces Carl Eschenbach, President and Chief Operating Officer.

Carl explains that the current pressure on IT, trying to provide end-user freedom while maintaining control, is very high.  In order to enable transformation we need to take a different approach through software and automation.  We need to automate IT so that IT can thrive and support the business.  

Carl introduces Medtronic's vice president, who explains that they leveraged VMware Accelerate Consulting Services to transform their IT.  They have implemented the SDDC model to fully automate the server build process while maintaining their corporate standards.  In addition they are an AirWatch customer for enabling mobile device management.  The next phase for Medtronic is Hybrid Cloud through integration of NSX and vCloud Air.

Carl then profiles MIT, which is 99% virtualized and is looking to put OpenStack on top of VMware.  MIT has also adopted NSX for network virtualization to provide a secure multi-tenant environment.  In addition, some of the faculty is testing vCloud Air.

The last guest, Ford Motor Company, is introduced.  Ford has adopted an SDDC model as well.  Carl wraps up with a video profile of the companies that have been part of the EVO:Rail beta trial.  The theme was time to value and the lower overall cost of the solution.  

Thursday, July 3, 2014

Published “VMware Horizon Suite: Building End User Services”

I am happy to announce that VMware Horizon Suite: Building End User Services has been published and is now generally available.  When I finished VMware View 5: Building a Successful Virtual Desktop I knew it would not be my last publication.  My friend and technical editor on the first book, Stephane Asselin, suggested a much broader topic covering the entire suite and incorporating the major changes in View 6.  Along with the suggestion came an offer of assistance in developing this book.  This turned out to be a fantastic idea and the collaboration worked well between us.  We are extremely happy and proud of the end product. 

In addition to Stephane, we were able to work together with our original good cop/bad cop publishing team, Joan Murray and Ellie Brue from VMware Press (I won't tell you who the good cop or the bad cop was).  They are, and continue to be, very supportive of our thoughts and ideas.

Our technical editors were Justin Venezia and Mike Barnett; two veritable experts themselves with extensive experience in the end user space.  We were lucky to work with them and hope to work with them again on the next project.

I want to thank everyone for the continued support we get through our blogs, http://myeuc.net (Stephane's) and http://virtualguru.org, as well as the comments and reviews on the books.  If you are going to VMworld, please feel free to drop by the bookstore to meet us, or attend our presentation at the show “Session ID: 1289 Title: What’s New in End User Computing: Full Desktop Automation and Self-Service”.  We look forward to seeing you there.

Friday, April 4, 2014

Catalog your way to the Cloud

I have been speaking with a lot of customers regarding their own Private, Hybrid and Public Cloud initiatives.  One area of struggle for customers is articulating the business value of becoming ‘Cloud like’.  The justification tends to diverge towards the ‘ilities’ adjectives: flexibility, scalability and, I once heard, ‘cloudability’.  But can you really get started with a Cloud strategy if your goal is cloudability?

A proper Cloud strategy begins with some pretty mundane tasks; I personally think of it in terms of Categorizing, Cataloging and Optimizing.  From an enterprise perspective you need to categorize your workloads so you understand both their attributes and criticality (whoa, used one of the ‘ilities’).  For example, are they Tier 1, 2 or 3 workloads?  Are they transient or short-term workloads?  Are they infrastructure related?  Do they require additional capacity only for certain periods of time (bursty workloads)?  Once categorized, you can begin the next piece of the puzzle: the creation of the Service Catalog.

The development of a Service Catalog helps you understand what service or services you will be providing; in addition, it translates a disparate set of IT resources in a way that makes them understandable to the business.  Categorizing first enables you to determine which Service Catalogs you should begin with.  It allows you to quickly see the real value and understand the nature of the workload.  In addition it can help you intelligently apply costs.
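The categorize-then-catalog flow can be sketched in a few lines. The workload names, tiers and mapping rules below are hypothetical, purely to show how categorization attributes can drive catalog entries:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    tier: int        # 1 = business critical ... 3 = best effort
    transient: bool  # short-lived (e.g. dev/test) workloads
    bursty: bool     # needs extra capacity only at peak periods

# Hypothetical inventory -- names and attributes are illustrative only.
inventory = [
    Workload("erp-db", tier=1, transient=False, bursty=False),
    Workload("build-agents", tier=3, transient=True, bursty=True),
    Workload("intranet", tier=2, transient=False, bursty=False),
]

def catalog_entry(w: Workload) -> str:
    """Map workload attributes to a candidate service-catalog offering."""
    if w.transient or w.bursty:
        return "on-demand / burstable compute"
    return "managed tier-%d hosting" % w.tier

for w in inventory:
    print(w.name, "->", catalog_entry(w))
```

The point is not the specific rules but that once attributes are captured, catalog (and cost) decisions become mechanical and defensible.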

As an example, let's take two scenarios.  The IT department at company “A” has never given much strategic thought to their virtualization platform and just licenses everything at the same feature level; let's call it enterprise.  When they move to Cloud they calculate the cost of applying the Cloud Management Suite on top of everything.  They struggle to articulate the value and justify the cost, so they can't get the initiative off the ground.

The second IT department, at company “B”, begins planning their Cloud strategy by categorizing all their workloads.  They develop a strategy to manage everything through a Cloud Management suite but introduce a mixed hypervisor environment.  This reduces the licensing in DEV\UAT but slightly increases the cost of their virtualization environment for their Tier 1 workloads.  The Cloud management framework will be collocated with their Tier 1 workloads as it will be critical to their long-term strategy. 

In addition, longer term they want to take advantage of the mixed hypervisor environment and move the transient workloads to the public cloud.  Because they have chosen wisely, they can use their Cloud Management Framework to manage and apply policy across all parts with the same toolset.

Company “B” can clearly articulate the value: they are reducing costs in areas not deemed critical to the business and justifying the tools they need to manage a Private or Federated Cloud environment.  In addition, they can identify which business services they will provide, having reviewed which Service Catalogs to create in the short, near and long term.

Because company “B”’s Cloud strategy moves forward, they are in a better position to optimize the environment to deliver the most benefit to the business.
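A back-of-the-envelope calculation shows why company “B”’s approach pencils out. All of the dollar figures and socket counts below are made up purely for illustration; they are not from either scenario:

```python
# Hypothetical per-socket prices (assumptions, not real list prices):
CMS_PER_SOCKET = 1000         # Cloud Management Suite
ENTERPRISE_PER_SOCKET = 3500  # top-tier hypervisor licensing
BASIC_PER_SOCKET = 1500       # cheaper hypervisor used in DEV/UAT

sockets_tier1, sockets_devuat = 20, 40

# Company "A": enterprise licensing everywhere, CMS layered on top of everything.
cost_a = (sockets_tier1 + sockets_devuat) * (ENTERPRISE_PER_SOCKET + CMS_PER_SOCKET)

# Company "B": enterprise + CMS for Tier 1 only; mixed (cheaper) hypervisor in DEV/UAT.
cost_b = (sockets_tier1 * (ENTERPRISE_PER_SOCKET + CMS_PER_SOCKET)
          + sockets_devuat * (BASIC_PER_SOCKET + CMS_PER_SOCKET))

print(cost_a, cost_b)  # company "B" comes out cheaper overall
```

The exact numbers do not matter; the point is that categorizing first lets company “B” spend less where it is not critical and defend the management-suite cost where it is.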

Although somewhat simplistic, the point is that the general approach we took with virtualization does not translate well to a Cloud strategy.  While we needed to understand the technical aspects to virtualize workloads, we must add an understanding of business value to develop a Cloud strategy.

Thursday, April 3, 2014

The allure of Nutanix

The business pedigree of Nutanix derives from its founders, who have worked at Google and on Oracle Exadata.  The co-founders, Dheeraj Pandey and Ajeet Singh, set out to create a better platform for virtualization.  Nutanix has successfully closed its third round of venture capital funding, and rumors of an IPO perhaps explain its ability to draw talent.

The platform is very similar to vSAN, although Nutanix has the distinction of being first to market and views vSAN as validation of the approach as well as competition.  They are the fastest growing hardware company of the last 10 years, which is interesting given that they go to great pains to explain they are a software company; they do not manufacture hardware themselves but OEM all components.

The platform provides a combined hypervisor and logical storage system created from local SSDs and host-based hard drives, which are aggregated into a single-volume logical SAN.  They use their own proprietary file system under the covers but present to virtualization farms as NFS.  They do not use RAID; instead you configure a replication factor on the logical volume.  The minimum is 0, designed for Hadoop clusters, but 2 is recommended for virtualization, which means you lose half the capacity in a virtualization farm.  They provide a small form factor that can be bought in models designed to run a predetermined number of workloads.
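The capacity trade-off from the replication factor is simple arithmetic. A quick sketch (treating the replication factor as the number of copies of each block, which is how the halving above falls out):

```python
# Sketch: usable capacity of a node-local storage pool given a
# replication factor (number of copies of each block of data).

def usable_capacity_tb(raw_tb, replication_factor):
    """Each block is stored `replication_factor` times, so usable
    capacity is the raw pool divided by the number of copies."""
    if replication_factor < 1:
        raise ValueError("need at least one copy of the data")
    return raw_tb / replication_factor

# With the recommended factor of 2 for virtualization farms,
# half of a 40 TB raw pool is lost to the second copy:
print(usable_capacity_tb(40, 2))  # → 20.0
```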

They have many enterprise-class features: dedupe of memory and SSD (with disk likely in the future), predictive failure on hardware components and a phone-home service.  They consider themselves enterprise-class Converged Infrastructure (CI), although traditional CI vendors would dispute this as Nutanix does not apply engineering at the hardware level.  The sweet spot for the technology seems to be VDI, although Nutanix targets other applications such as Hadoop.

Nutanix represents a new breed of CI which I have coined micro-converged infrastructure: a host hypervisor and a SAN built by aggregating locally installed SSDs and host hard drives.  The architecture is compelling as it allows a very modular approach at a reduced price point without sacrificing performance.  It is clear from Nutanix’s growth that they are meeting a demand and garnering a lot of interest.  For an overview of Nutanix in two minutes, have a look at this video.

Monday, March 31, 2014

The Conundrum of Cloud Adoption

Many organizations have not seen much of an increase in personnel or budget over the last number of years. User expectations, however, have dramatically increased because of the consumerization of IT. The new standard in the minds of the users we serve is the ‘app store’ approach to delivering services. In addition, most IT teams are still aligned in traditional technology silos and struggle to adopt “as a service” models.

These problems are compounded by where most IT teams spend their time. In many recent discussions with our customers at the executive level, they estimate they spend 60-70% of their time on lights-on activities, straight operations or firefighting. The remainder goes to new project initiatives, leaving little time for proactive strategies to adopt new service models in End User Computing (EUC) and Private or Federated Cloud. The question we are often asked is: how can they break this cycle and become more proactive?

The first step is to carve out time that does not exist. The only way to do this is to spend less time in operations by undertaking the following:

1. Categorize, Automate and Orchestrate

You have two choices on where you begin: the datacenter or end user services. In the end it does not matter; this tends to be dictated by each customer's situation and the current needs of the business.

2. Standardize a percentage of IT as a catalog

As much as we want to believe it, not everything is unique; IT departments treat everything as custom builds when not all requirements are. Even if this starts as a very simple catalog, such as office applications (EUC) or a simple two-tier VM configuration (Private Cloud), the important thing is to work yourself through the process.

3. Turn over delivery

Once you have completed steps 1 and 2, enable a suitable business group outside traditional IT to self-provision. In the initial adoption this may not be end users; it may be business or application analysts. "Get it done", as this is when you reduce time spent in operations. Prepare yourself: this will be a learning process, so it may require some heavy lifting in training for the transition team.

Once you have achieved the first three steps you can move to higher value activities described in steps 4 and 5.

4. Adopt a Virtual Datacenter

The vast majority of IT shops are virtualizing OSes, however turning your entire datacenter into software is a huge enabler. It allows you to thin provision a host of physical devices, including networking and storage.

5. Apply Security and Policy to the Virtual Datacenter

Ensure that the virtual datacenter is more secure than your physical one; software can be encrypted, encoded and policy driven. Apply these capabilities to your virtual datacenter.

After turning your datacenter into software you are ready to drive efficiencies by implementing steps 6 – 8.

6. Evaluate where you are doing your computing

Once your Virtual Datacenter is secure and encrypted it can be migrated anywhere, so take advantage of lower-cost opportunities to run your IT services.

7. Evaluate the efficiency of what you are delivering

Okay, we have come a long way, however the journey is not complete; evaluate whether it is more efficient to deliver each service internally or through a 3rd party. At this point it is only an evaluation; there is one more critical step to complete before farming out a percentage of services.

8. Brand

You are the department that has serviced your users effectively for years; your intellectual property is the knowledge and understanding of the users you serve. They should continue to come to you for all requirements, even if some are delivered through a 3rd party. When you farm out the services that can be delivered more efficiently elsewhere, ensure that users do not see whitespace between the internal IT team and the service providers. Your role in this new model is to be the one-stop shop from a provider perspective and to maintain quality control.

In these discussions we did not mention Cloud or Federation or any of the terms that are wearing thin due to promise and expectations being out of alignment. We have described a process that enables Cloud adoption. We also did not discern where it gets adopted first, in the datacenter or in end user computing, as this is a business decision. The process starts with an IT department that has no time, which is often the situation. It is hoped that describing the process logically will help internal IT teams organize their own initiatives and determine which areas to focus on first.

Thursday, February 13, 2014

Cloud Hybrid Service (vCHS): Advanced Networking & Security, Chris Colotti

The goal of this presentation is to understand the building blocks of vCHS and the networking requirements to build a Hybrid Cloud. vCHS is available as a Dedicated Cloud, which is physically isolated, or a Virtual Private Cloud, which is logically isolated.

vCHS is built on vSphere and vCenter, vCloud Director (vCD) and vCloud Networking and Security (vCNS) at this time. NSX is not part of the infrastructure yet; as NSX has additional functionality, it is definitely something VMware is looking at closely. When you buy vCHS you get one external network protected by an Edge Gateway (EG). By default a routed network and an isolated network are created for you. VMware Cloud Service Providers running vCD will also have these services and features available.

The Edge Gateway has one interface facing out and 9 internal, so you have 9 possible routable IP spaces. The EG is deployed in HA mode. All networks are segmented based on VXLAN. EGs come out of the provider resource pool, not out of the tenant pool. Typically customers will create a DMZ, an application network and one for test and development.

As part of the EG you can create a VPN connection to the customer datacenter; in addition, VMware now offers a dedicated network option. The VPN uses IPsec and allows you to build complex interconnected Cloud architectures, although each connection is a single tunnel. This allows you to run cross-cloud functional services such as Active Directory (AD).

All firewall rules are configured at the gateway, and by default all traffic is denied. Right now only the vCHS operational team has access to the firewall logs, however they are available upon request; VMware is looking at ways to provide direct access to them.

You can configure Source NAT and Destination NAT rules on the EG. In addition you can configure Load Balancing by defining Virtual IPs and Server Pools. The load balancing rules allow you to run health checks to monitor things like ports on the servers in the pools.
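The kind of port health check a load balancer pool runs against its members can be sketched generically in Python. This is an illustration of the concept only, not the actual Edge Gateway implementation, and the function name and parameters are my own:

```python
import socket

def port_is_healthy(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds,
    i.e. the pool member is accepting connections on that port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused or timed out: treat the member as down.
        return False
```

A load balancer runs a probe like this periodically and takes any member that fails it out of rotation until it passes again.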

It is possible to drop 3rd-party appliances between isolated networks for additional network services, such as an F5 virtual appliance. VMware is also providing examples of split designs using common services like SharePoint and Exchange. You cannot replace the external-facing EG with a 3rd-party appliance, but you can configure the EG to pass traffic through to get these scenarios working.

You can use stretched networks to extend a Layer 2 network between the customer datacenter and the cloud. Keep in mind, though, that all network traffic traverses the VPN, as routing is done by the on-premise network gateway. One reason to use stretched networks is applications that are tied to MAC or IP addresses. Also note that a vApp container can only contain 128 VMs.

To do stretched networks you need an Edge Gateway on premise and in the Cloud, each with two active interfaces. A single EG is required per segment you want to stretch, so you cannot use the additional interfaces. This is not recommended for a segment with a large number of VMs due to the amount of traffic going back to the on-premise router.

The other option is DirectConnect, which allows you to put in a private line connecting an on-premise segment to a Cloud-based network. There are actually two versions: DirectConnect and Cross Connect. If the customer is in an existing datacenter running vCHS, they can cross connect from their on-premise cage to vCHS; DirectConnect is used when the customer is not in the same datacenter.

These options (VPN, DirectConnect and Cross Connect) allow the customer to pick the best method to connect to vCHS. vCHS defines five Role Based Access Control levels: Account Administrator, Virtualization Infrastructure Administrator, Network Administrator, Read-only Administrator and Subscription Administrator, providing a flexible security policy.












- Posted using BlogPress from my iPad

Wednesday, February 12, 2014

General Session, Pat Gelsinger (CEO, VMware)

Pat believes we are going through a major tectonic shift in the IT industry. Pat explains that in times of disruptive innovation you need to obey these guiding principles:

1) Bet on the right side of the technology
2) Do the right thing for the customer
3) Deliver solutions in cooperation with the partner ecosystem

Pat reviews VMware's 3 strategic priorities for 2014: Software Defined Datacenter (SDDC), End User Computing (EUC) and vCloud Hybrid Services (vCHS). To do this VMware must make SDDC easy to adopt, because right now the value is there but implementation could be easier.

Pat explains that VMware is learning to be both a software vendor and a Cloud services provider. Adapting to this new reality helps them understand how to deliver the SDDC while extending it to the Hybrid Cloud. This must be done without sacrificing enterprise requirements like security and policy control. In addition, in the End User Computing space VMware must unify Mobile Device Management and End User Computing to deliver Cloud mobility.

While this is a huge challenge for VMware it also represents a huge opportunity. To do this VMware has hired some of the best software engineers in the world. VMware intends to be a leader in the development and adoption of these technologies and its success to date demonstrates this.










General Session: Presented by Sanjay Poonen Executive VP & GM, End-User Computing

We are entering a world where mobile and Cloud will be key. Everything is interconnected and mobile. Apps are becoming more cloud centric. Data is being stored on Cloud based Object stores.

VMware's vision is based on a foundation of virtualized infrastructure and hybrid cloud computing. A layer of management and automation is required to manage these as a single entity. Once these layers are established you can build a software defined datacenter.

VMware wants to make the end user experience like Netflix: you start watching at home, travel, and continue watching on any device in any context, including machines (the Tesla automobile is given as an example).

The challenge with EUC is that it is made up of a bunch of point products. VMware is committed to being best of breed in all these components while also being completely unified. VMware commits to extending VDI with application publishing; once complete, they will add the remaining components of the software defined datacenter (SDDC) to unify VMware's EUC product offering. VMware believes their desktop strategy is accelerating faster than the competition.

Sanjay announces an ongoing initiative with Google, and Caesar Sengupta, VP of Chrome at Google, is introduced. Caesar mentions that 58% of the Fortune 500 are paying Google customers consuming enterprise services. Caesar talks about the Chromebook and how the product grew to 21% of the embedded operating system market in tablets and laptops in an incredibly short period of time. It would appear the initiative is based on Google Chromebooks supporting VMware Horizon View at some point.

Sanjay introduces John Marshall, the CEO of AirWatch. AirWatch was able to scale to match the demand for Mobile Device Management (MDM) and has been the leader in Gartner's MDM quadrant since its inception. AirWatch started with managing the device but with an enterprise-grade focus; they provide application deployment, content management, email connectivity and integrated browsing options on any device.

The world is going mobile and each device will have its own ecosystem. This is why AirWatch supports every mobile device and every mobile platform. AirWatch is based on central policy management which is pushed down to the device. If the device is lost the business content can be removed to protect corporate assets.

VMware believes the combination of SDDC, EUC and MDM will make their solutions and value incredibly compelling to their customers.















Tuesday, February 11, 2014

vCloud Hybrid Service and the Recovery as a Service: Presented by Chris Colotti

As VMware's Recovery as a Service (RaaS) is a service based on vCloud Hybrid Service (vCHS), you need to understand how resources can be purchased:

1) As a Dedicated Cloud (DC) resource which provides physically isolated reserved resources
2) As a Virtual Private Cloud (VPC) which is software based and is defined by resource allocation. RaaS is offered as an extension of the VPC model.

Recovery as a Service will not be based on Site Recovery Manager; it will be based on vSphere Replication. Recall that vSphere Replication is done per VM, is based on asynchronous replication and replicates at the VMDK level. The vSphere Replication used for RaaS is not the same as the version shipped with vSphere, as additional capabilities have been added; VMware plans to merge the two at some point.

One of the differences between the vSphere Replication (VR) shipped with vSphere and VR for RaaS is that each RaaS VR appliance supports 500 VMs, and an additional encryption module is provided to secure transfers. To set up VR for RaaS, a customer downloads the OVF, pairs the components with vCenter and configures each VM for RaaS. VMware has made it very simple for a customer to enable.
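The 500-VM-per-appliance figure makes appliance sizing a simple ceiling division. A quick sketch (the function name is my own; only the 500-VM limit comes from the session):

```python
import math

def vr_appliances_needed(protected_vms, vms_per_appliance=500):
    """How many VR appliances are needed if each handles up to
    `vms_per_appliance` protected VMs (500 per the session)."""
    return math.ceil(protected_vms / vms_per_appliance)

print(vr_appliances_needed(1200))  # → 3
```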

At GA Recovery as a Service will be available in all 5 global regions for vCHS according to VMware. In its initial release, this solution is not designed for the large enterprise. It is targeted at the SMB and Midsize Enterprise space.

RaaS is based on the Virtual Private Cloud consumption model; the minimum size for the RaaS VPC is 20 GB vRAM, 10 GHz of CPU and 1 TB of storage, along with 10 Mbps of network throughput and 2 public IPs. It is subscription based and elastic, so additional capacity can be added as required. In addition you get two failover tests per year, with the option to buy more.
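As a sketch, a planned RaaS VPC can be checked against those minimums programmatically. The field names and the validation function below are my own illustration; only the numeric minimums come from the session:

```python
# Minimums quoted in the session (field names are hypothetical):
RAAS_MINIMUMS = {
    "vram_gb": 20,
    "cpu_ghz": 10,
    "storage_tb": 1,
    "throughput_mbps": 10,
    "public_ips": 2,
}

def shortfalls(planned):
    """Return the minimum values for every resource where the
    planned allocation falls below the RaaS minimums."""
    return {k: v for k, v in RAAS_MINIMUMS.items() if planned.get(k, 0) < v}

plan = {"vram_gb": 16, "cpu_ghz": 10, "storage_tb": 2,
        "throughput_mbps": 10, "public_ips": 2}
print(shortfalls(plan))  # → {'vram_gb': 20}
```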

RaaS requires a dedicated VPC so an existing vCHS VPC that is currently running VMs cannot be used. VMware does not allow you to run the supporting infrastructure in the RaaS VPC as VMs are not powered on until a recovery is initiated. In order to provide the supporting infrastructure for the RaaS VPC (AD, DNS, etc.) you can add an additional VPC, use a current one not dedicated to RaaS or if you already reside in a datacenter delivering vCHS you have a "direct connect" option from your current infrastructure to your RaaS VPC space.

After a recovery has been initiated, failback works as follows:

1) Power off the VM in the RaaS VPC.
2) Delete the original on-premise VM in vSphere.
3) Perform a vCloud Connector copy from vCHS back to the on-premise vSphere.
4) Reconfigure the VM and power it on.
5) Once it is powered up, restart replication to the RaaS VPC.

To avoid the initial file transfer, you have the option to use the reseed option after a recovery and select the powered-off VM in vCHS.











Desktop as a Service: Horizon View and Desktone

Typically, providing desktops in the Cloud is not the first focus for Cloud Service Providers; it is generally server-side applications. Before looking at Cloud-based desktops, there are a few questions you need to ask yourself:

1) Do you want to offer applications or full desktops?

When providing a full desktop, be aware that consumer expectations change, as does the operational overhead. You need to consider storage of user data as well as operational patching, for example.

2) What is the demarcation point between what is managed and provided?

What sort of services will you build around the desktop, what applications will you provide, what are the rules that govern the use of the desktop?

Some of the technical challenges with View stem from its dependency on vCenter, as it does not understand VMware's Cloud orchestration layer. As VMware recommends vCloud Director (vCD) for multi-tenancy, the interaction between the View layer and vCloud Director needs to be considered; both View and vCD want to "own" resources. In integrating View with vCD you have three approaches:

The first is deploying View outside the vCD infrastructure. This approach allows you to build a shared service, however it makes supporting true multi-tenancy difficult.

The second is to deploy a View vApp per organization. While this respects the multi-tenancy of vCD, it requires much more View infrastructure. In addition, any changes made to resourcing at the vCD level will not be transparent to View.

The third is to use the View Direct Connect plugin to deploy View desktops from a catalog within vCD. However, this removes a lot of the benefit that the View Connection broker provides to a virtual desktop environment.

Desktop licensing needs careful consideration, as there is no Microsoft Service Provider Licensing Agreement (SPLA) for desktop OSes. You can implement Bring Your Own License (BYOL), but you need to watch Microsoft compliance issues around what hardware the desktops can be deployed to. Another option is to use Windows Server OSes as desktops, which gets you around the compliance issues but may add other considerations.

Unlike View, Desktone is vCloud Director aware and does not depend on vCenter. Desktone built the platform around APIs, so it was easy to support vCenter as well as vCD natively. With Desktone you can deliver applications or full desktops using what is referred to as multi-mode, which leverages Windows Remote Desktop Services (RDS). VMware recommends a combination of Desktone and View for service providers getting into the Desktop as a Service (DaaS) market.

Desktone allows you to create a Service Center with multi-tenant visibility. View complements Desktone by providing Windows-level knowledge of the desktops, allowing you to manage profiles, apply sysprep and introduce persona management, for example. Desktone provides the Cloud layer, multi-mode and multi-datacenter functionality to enable a service provider to deliver a full service offering.










Building the Data Center of the Future: Ben Fathi Chief Technology Officer

Ben explains that this is his first time onstage for VMware. Prior to VMware he worked at MIPS Technologies, where he was a Unix kernel engineer and worked on supercomputers. Ben went on to Microsoft, where he managed server and storage clustering, then the security of Microsoft as well as the management of Windows 7 and Hyper-V. Ben left Microsoft for Cisco, where he ran all their protocol teams. At VMware he has been managing vSphere development and now finds himself CTO. The change, to him, is that rather than building OSes for servers he is now leading the building of an OS for the datacenter.

There are 3 imperatives for the IT infrastructure:

1) Virtualization needs to extend to ALL of it; including networking and storage
2) IT management needs to give way to automation
3) Compatible hybrid cloud becomes ubiquitous

Two years ago VMware announced the software defined datacenter, sending the industry in a new direction. Ben believes that in 2014 SDDC will hit the tipping point. VMware is seeing really strong customer momentum; Symantec, Subaru and Dow Jones are listed among the early adopters of SDDC.

But what does SDDC mean? It is made up of compute, network and storage. Ben mentions that even though 85% of applications are running virtualized, there are still some that are not. Compute continues to evolve to reduce latency, with additional extensions for Hadoop. Telcos and the Hadoop communities are investing in virtualization to run applications that have traditionally been physical.

Last year VMware announced NSX, the network virtualization platform. NSX is built to run on any hypervisor, with any application and any cloud provider's toolset. VMware believes NSX will revolutionize networking as vSphere has done for operating systems.

The current challenges with networking are that provisioning is slow, hardware dependent and operationally intensive. NSX takes advantage of the virtual switches in hypervisors, creates a flat Layer 2 network and programs it. This allows you to set policies that control the network programmatically.

In 2012 the number of virtual ports in the datacenter exceeded the physical ports. Ben explains 3 of the top 5 investment banks are deploying NSX as well as the leading global telcos.

Ben switches gears to storage. As you know, storage is one of the biggest capex and opex costs in the datacenter. The storage market is in the midst of disruption: server flash and storage prices are falling, and the abundance of CPU cycles in servers makes new approaches possible.

VMware has been delivering storage innovation for years through vMotion, VAAI, Storage DRS and now vSAN. So what is software defined storage? In the new model there are really three different types of storage: Hypervisor Converged (vSAN), a SAN/NAS pool (traditional storage) and an Object Storage Pool or Cloud Storage.

vSAN is a distributed object store implemented directly in the hypervisor kernel. It allows you to apply policy based management to storage. It is flash accelerated with great performance and lower TCO. Ben says it is also brain dead simple to use, you simply turn it on in vCenter.

Ben explains that they have tested 915k IOPS in a 16-node cluster with less than 10% CPU overhead. VMware had 10,800 participants in their public beta and is ready to release in Q1 with a 16-node configuration. In addition, beta participants get 20% off their first purchase of the product.

Ben moves to opportunities in the Hybrid Cloud market. Gartner estimates the Infrastructure as a Service market was 9 billion dollars in 2013 and will grow to 31 billion in 2017. Ben thinks the most important thing with a Hybrid Cloud is that it is compatible with the enterprise toolset. Ben explains that vSphere supports over 120 different types of operating systems as well as enterprise applications that are difficult to run in a public cloud.

The challenges with public clouds are that they are proprietary platforms, do not support enterprise applications and have potential security and compliance issues. VMware's value proposition with their Hybrid Cloud services is that they have the same core components and management tools as the enterprise. In 2014 VMware is going to add Desktop as a Service, Disaster Recovery Services and Database as a Service, as well as Mobile Services through AirWatch.

There are really 5 starting points for customers moving to the cloud:

1) Development and testing
2) Prepackaged software; i.e. Exchange, SharePoint etc.
3) Disaster Recovery
4) New Applications using services like Cloud Foundry

Ben challenges the crowd to embrace the new reality.











Partner Exchange, General Session: Dave O'Callaghan and Carl Eschenbach

The new world makes it difficult to understand what drives IT decisions. People want to use their own devices without compromise; how does enterprise IT fit into this new world? How do we evolve in a world that is changing so fast? It's time to make history again, and rethinking and mastering the new reality is VMware's challenge to the partner community.

Dave O'Callaghan, the Sr. Vice President of the partner community, takes the stage and welcomes the crowd to PEX 2014. Dave talks about VMware's total revenue from 2007 to 2013 moving from 1 to 5 billion dollars; 85% of that revenue has come through the partner network.

Dave explains that success is about constantly aligning to the new reality, and draws the conclusion that the new reality is aligning our businesses to delivering a software defined datacenter. This will require practice, focus and training to meet the challenges ahead.

Dave introduces Carl Eschenbach, President and Chief Operating Officer at VMware. Carl explains that there are 4,000 partners in attendance this year. Our challenge, he says, is to become masters of the new software defined enterprise, and VMware will provide partners the tools to do so. VMware made 5.21 billion dollars in revenue in 2013, delivering 17% growth.

Carl reaffirms VMware's commitment to the channel, as it has been key to their success. VMware has invested $300 million in partner programs to provide incentives to the partner community. VMware's renewals business is at an all-time high.

Carl explains that they have invested heavily in bringing in the right executive team: Sanjay Poonen (End User Computing), Ben Fathi (Chief Technology Officer), Robin Matlock (Chief Marketing Officer), Tony Scott (Chief Information Officer) and Sanjay Mirchandani, GM of Asia Pacific and Japan.

VMware had 234 new software releases last year, including major launches of the VMware Horizon Suite and vCHS, plus the NSX and vSAN betas among others. In addition, VMware acquired Desktone (Desktop as a Service) and Virsto, the storage hypervisor (vSAN), and announced the acquisition of AirWatch.

VMware's three priorities for 2014 are End-User Computing, Software Defined Datacenter and Hybrid Cloud. This potential software product market is estimated to be 50 billion dollars before services.

So what's next? The Software-Defined Enterprise. Carl takes us back in history from mainframe to client-server to mobile-cloud. The fundamental challenge in all these transformations has been relatively flat IT budgets; VMware sees more friction as consumers demand more while budgets remain flat. VMware believes they are in a unique position to address this. Why? Because they have done it already with virtualization: by saving money, they liberated a percentage of spending that can go toward innovation.

The way VMware will deliver on this is to deliver the software defined enterprise. What are the foundations of the software defined enterprise:

1) Applications; however, they are only as reliable as the infrastructure they run on. This stability is provided by introducing virtualization across all traditional physical datacenter infrastructure: server, network and software defined storage. It must also extend to the Hybrid Cloud through interfaces like vCloud Automation Center (vCAC). This is the software defined datacenter.

2) End User Computing; in addition, we need to give users access to the software defined datacenter through innovations in the virtual workspace while ensuring security, compliance and control. The icing on the cake is AirWatch for mobility management.

VMware's mandate is Any App, Any Place, Any Time with No Compromise. VMware expects the services revenue around these opportunities to be 50 billion dollars, for a combined total of software, licensing and services of 100 billion dollars.










Monday, February 10, 2014

The value of vSAN

VMware believes vSAN is a very disruptive technology that does not require you to re-architect the environment to integrate it. There are several trends that necessitate virtual storage adoption:

1) The amount of data we are storing
2) The complexity of storage today

vSAN is a very simple product to deploy. Installation involves answering a few questions to get it up and running but does not require zoning or LUN creation. VMware sees three strong use cases for vSAN: Virtual Desktop, Test and Development and Disaster Recovery.

VMware expects people to adopt vSAN organically. For example, customers will buy vSAN for a development cluster initially, but as it proves itself it will evolve for use in other environments. VMware is targeting vSAN at mid-tier storage performance requirements as opposed to all workloads. vSAN will coexist with physical SAN environments in the enterprise.

VMware sees storage as the final piece of the complete Software Defined Datacenter. The challenge for VMware is whether their customers will see them as a storage vendor. VMware sees a large shift in the performance power of the server platform, from server flash to multi-core CPUs, delivering an enterprise-grade hardware platform. In addition, storage is becoming less specialized as VMs aggregate workloads on common storage platforms.

VMware believes the hypervisor is in a unique position to understand both workload performance and storage requirements as it sits directly in the IO path. Although most people understand the virtualization story with VMware, the company has also been innovative in storage technology and management: vMotion, Storage DRS and Storage IO Control, for example.

VMware sees three critical areas in Software Defined Storage: the virtual data plane, or the aggregation of storage pools; virtual data services such as data protection and performance; and finally the policy-driven control plane, which allows policy-based automation and orchestration. All these layers are necessary to make up Software Defined Storage.

vSAN will ship as a Virtual SAN Ready Node, which comes direct from the hardware vendors, as well as a Do It Yourself ("DIY") option in which you deploy the hardware and apply the software. In a relatively small 16-node cluster, VMware has benchmarked 1 million IOPS as part of their testing.

vSAN provides enterprise-grade storage performance from server-based storage. vSAN makes use of hard disk drives (HDDs) and solid state drives (SSDs) installed in the server, which are presented as the vSAN datastore. This means that technologies such as vMotion are fully supported on vSAN. It does not present LUNs, however, so Raw Device Mappings (RDMs) are not supported on the architecture.

vSAN will work with any servers and RAID controllers on the Hardware Compatibility List (HCL) and can make use of SAS, SATA and SSD drives. VMware recommends 10 GbE connections between servers, although it will run on 1 GbE.

vSAN writes to cache and then destages to disk. You can scale out vSAN by adding servers with additional HDDs and SSDs. It requires VMware vSphere 5.5, and VMware recommends that all servers in the cluster be configured identically.
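The cache-then-destage write path described above can be sketched as a toy model: writes are acknowledged once they land on the flash tier and later moved down to spinning disk in batches. This is an illustrative Python sketch only, not vSAN's actual implementation; all class and parameter names are invented.

```python
from collections import OrderedDict

class WriteBackCache:
    """Toy model of a cache-then-destage write path (illustrative only;
    not how vSAN is implemented internally)."""

    def __init__(self, ssd_capacity_blocks, destage_batch=2):
        self.ssd_capacity = ssd_capacity_blocks
        self.destage_batch = destage_batch
        self.ssd = OrderedDict()   # dirty blocks buffered on the SSD tier
        self.hdd = {}              # backing spinning-disk tier

    def write(self, block_id, data):
        # Writes are acknowledged as soon as they land on the SSD tier.
        self.ssd[block_id] = data
        self.ssd.move_to_end(block_id)
        if len(self.ssd) > self.ssd_capacity:
            self.destage()

    def destage(self):
        # Move the oldest dirty blocks down to the HDD tier in a batch.
        for _ in range(min(self.destage_batch, len(self.ssd))):
            block_id, data = self.ssd.popitem(last=False)
            self.hdd[block_id] = data

    def read(self, block_id):
        # Serve from SSD if still cached, otherwise from the HDD tier.
        return self.ssd.get(block_id, self.hdd.get(block_id))
```

The key property the sketch shows is that the application never waits on the slow tier: destaging happens after the write has been acknowledged from flash.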

The ability to assess use cases for vSAN has been built into the VMware Infrastructure Planner (VIP). VMware has announced that the GA release of vSAN will be in Q1 of this year.


VMware Horizon Mirage: Endpoint Protection and Disaster Recovery; Presented by John Dodge, Stephane Asselin and Shlomo Wygodny

Disaster Recovery is a very important capability of VMware Horizon Mirage. Endpoints change naturally and by default Mirage synchronizes those changes back to the Mirage cluster in the datacenter.

Rumour has it that during the final stages of the agreement with VMware, the owner of Wanova lost his laptop. Normally this would have been catastrophic given the circumstances. In a perfect example of eating their own dog food, Wanova was able to restore his laptop with everything intact in less than an hour using the Mirage System Recovery option.

Mirage System Recovery is smart enough to bring down the key pieces to get the user up and running quickly while the rest of the data continues to trickle down. This minimal configuration is referred to as the "minimum working set" required to get the system functional.

With Mirage, the System Recovery scenario looks as follows:

1) Install Mirage (that is, if the IT group has not supplied a laptop with Mirage installed)
2) Assign the Central Virtual Desktop image and have the Mirage agent pull down the pieces
3) The desktop reboots to the new working set to complete the process

In addition, Mirage can be used to initiate a desktop repair. As an example, let's look at the recovery of files.

1) A user installs an app that wipes My Documents
2) The system administrator restores a snapshot from the central console: "no troubleshooting required". (Note: only the files that have changed are sent, as a comparison is always done first to identify the deltas before sending the files.)
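The compare-before-transfer behaviour described in the note above can be sketched as a hash comparison between the server-side snapshot and the current endpoint state. This is a hypothetical Python illustration, not Mirage's actual protocol; the function and file names are invented.

```python
import hashlib

def file_digest(data: bytes) -> str:
    """Fingerprint file contents so equality can be checked cheaply."""
    return hashlib.sha1(data).hexdigest()

def compute_delta(snapshot_files: dict, endpoint_files: dict) -> list:
    """Return only the paths whose content differs between the snapshot
    on the server and the endpoint, so only those files are transferred."""
    changed = []
    for path, data in snapshot_files.items():
        if path not in endpoint_files or \
           file_digest(endpoint_files[path]) != file_digest(data):
            changed.append(path)
    return changed
```

In the My Documents example, only the wiped files show up in the delta; files that survived the bad install are never re-sent.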

There are three options for repairing endpoints in Mirage.

1) Restore Snapshot - repair using good files and settings from Snapshot

2) Enforce Base Layer - the Mirage agent rolls back any changes to the standard base layer within the Central Virtual Desktop.

3) Bare metal - a small Windows 7 image containing the Mirage agent is booted from a USB drive or PXE-booted from the network to pull down the assigned Central Virtual Desktop (CVD) from the datacenter to the endpoint.

In some cases the Disaster Recovery value that Mirage brings is core to a customer's decision to adopt the technology.


VMware Horizon Mirage Design, John Dodge, Stephane Asselin and Shlomo Wygodny

The world is rapidly evolving; John cites several recent milestones:

1) 2010 was the year that non-Windows applications exceeded Windows applications, driven by the smartphone and tablet market.

2) There are 250 million Dropbox users worldwide.

3) 52% of employees carry more than one device.

VMware's new End-User Computing Vision is: "Software Defined Workspace at the Speed of Life"

VMware's strategy is to plan for the convergence of traditional Windows management and delivery, Windows on mobile, and mobile management and delivery in general. In addition, EUC is extending to the machine space; for example, the Tesla is a smartphone you can drive. John provides the example of Tesla changing its software in order to disable a certain feature related to suspension.

John mentions that AirWatch was VMware's largest acquisition to date. VMware now finds itself in the leading quadrant of mobile.

The conversation shifts to Mirage. Mirage provides several capabilities and operates on the notion of layers. Fundamentally there is an IT management layer comprising the Base, Application and Driver layers. The other layer can be thought of as the user layer: the machine identity, user profile and non-IT-installed applications. Typically the IT management layer gets pushed, while the user layer gets pulled.

Typically in the datacenter you have a cluster of stateless Mirage servers that are load balanced. Mirage supports NAS and direct-attached storage; however, a production implementation requires NAS.

All the administrative work happens through the Mirage console. Mirage is flexible and allows delivery of applications through a mirage layer, SCCM or App-V for example.

Mirage Server runs on Windows Server 2008 R2 and requires a database.

Mirage Client is a lightweight client which can be silently installed. Mirage makes use of VSS and requires 10 GB of local space: 5 GB for the install plus additional space to download an initial image, for example. Mirage throttles transfers based on how active the user is on the endpoint. In the next release you will be able to turn this throttling off; however, you should be aware that doing so will impact user activity.

VMware's sizing guidelines are as follows:

1) Maximum 1,500 endpoints per Mirage server, physical or virtual
2) Maximum 20,000 Central Virtual Desktops (CVDs) per Mirage cluster
3) Deploy N+1 Mirage servers to avoid single points of failure
4) VMware recommends two gigabit Ethernet connections on the server

Replication in a typical environment is estimated at 15 kbps per endpoint and approximately 150 MB per 24-hour period. This will vary considerably from client to client.
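Those two figures are consistent with each other: a quick back-of-the-envelope check shows 15 kbps sustained for 24 hours works out to roughly 160 MB, in the same ballpark as the stated ~150 MB.

```python
# Sanity-check the sizing figures: 15 kbps sustained per endpoint,
# expressed as megabytes transferred over a 24-hour period.
BITS_PER_KILOBIT = 1_000
SECONDS_PER_DAY = 24 * 60 * 60          # 86,400 seconds

rate_bps = 15 * BITS_PER_KILOBIT        # 15 kbps as bits per second
bytes_per_day = rate_bps * SECONDS_PER_DAY / 8
mb_per_day = bytes_per_day / 1_000_000

print(f"{mb_per_day:.0f} MB per endpoint per day")  # prints "162 MB per endpoint per day"
```

The small gap between 162 MB and 150 MB is easily explained by endpoints not replicating continuously around the clock.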

Behind the scenes Mirage Storage is a Single Instance Store in which:

1) Mirage stores CVDs; 1,000 per volume is VMware's guideline
2) Files and binary chunks are de-duplicated depending on file type (Mirage is smart enough to understand database files, for example)
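The single-instance idea above is essentially a content-addressed chunk store: identical chunks across many CVDs are written once and referenced thereafter. The following Python sketch is an idealized illustration under that assumption, not Mirage's actual on-disk format; the class, chunk size and paths are invented.

```python
import hashlib

class SingleInstanceStore:
    """Minimal content-addressed chunk store illustrating de-duplication
    (an idealized sketch, not Mirage's real storage engine)."""

    CHUNK_SIZE = 4096  # illustrative fixed chunk size

    def __init__(self):
        self.chunks = {}   # digest -> chunk bytes, stored exactly once
        self.files = {}    # path -> ordered list of chunk digests

    def put(self, path: str, data: bytes) -> int:
        """Store a file; return how many chunks were actually new."""
        new_chunks = 0
        digests = []
        for i in range(0, len(data), self.CHUNK_SIZE):
            chunk = data[i:i + self.CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in self.chunks:
                self.chunks[digest] = chunk
                new_chunks += 1
            digests.append(digest)
        self.files[path] = digests
        return new_chunks

    def get(self, path: str) -> bytes:
        """Reassemble a file from its referenced chunks."""
        return b"".join(self.chunks[d] for d in self.files[path])
```

Storing the same OS file under a second CVD costs no new chunks at all, which is why hundreds of near-identical desktops consolidate so well.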

Each Mirage server has a local cache for endpoint synchronization data. Typically there is one per Mirage server, and 100 GB of space is recommended. This cache is dedicated per server; however, it only benefits data which is uploaded.

On an end-user PC, the following layers can fall under Mirage management:

1) User Personalization Layer
2) Machine Identity Layer
3) Mirage Application layer
4) Base Layer
5) Driver Library

All this layering is done using native Windows APIs.

When deploying Mirage, the first step is to build a reference machine. To do this you add the agent, create a good base image and then centralize it. You put everything you expect to distribute in a single base layer; traditionally this is the most static part of the desktop image. This image is then replicated up to the datacenter.

Once we have this image, we create a Base Layer by applying Base Layer rules; once the rules are applied, it is considered a CVD. We can then assign or deploy the base layer to our endpoints. The easiest way to distribute this base layer is to have clean endpoints.

To distribute the base layer you create Mirage groups. You can create a dynamic group by adding rules based on naming conventions, for example "vdi". Every endpoint that boots with a vanilla OS, the Mirage agent and that token in its name will join the collection and receive the CVD.
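A naming-convention rule of this kind is just a predicate evaluated against registered endpoint names. The sketch below is a hypothetical illustration of that matching step in Python (the function name, token and endpoint names are all invented, not Mirage's rule engine):

```python
def dynamic_group_members(endpoint_names, token="vdi"):
    """Return the endpoints whose names contain the naming-convention
    token (case-insensitive), i.e. the members of the dynamic group."""
    return [name for name in endpoint_names if token in name.lower()]
```

Every newly registered endpoint is re-evaluated against the rule, so machines named to the convention join the collection automatically and receive the assigned CVD.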

Before Mirage sends any data, the endpoint is analyzed to ensure only the delta is sent. Mirage intelligently merges Base Layer changes into the endpoint using native Windows APIs; to the OS it feels like an application installation. These changes can be stacked so that the reboot happens on a monthly schedule.

Most endpoints today have disk encryption; as Mirage works from within the OS, it does not notice the encryption unless a Windows XP to 7 upgrade is attempted. In some cases you may have to decrypt before the upgrade; however, Mirage has worked with many mainstream encryption vendors to provide full support, in addition to Microsoft's endpoint encryption software.

Creating Application Layers is very flexible: you can combine applications in a single layer or have one layer per application. If you create a combined layer, then you are testing all of those interdependencies together. Mirage Application Layers do not provide application virtualization, so to the OS the applications are natively installed. You can combine ThinApp and Mirage layers to get the best of both worlds.

A Driver Library is used to enable a single base layer which can be applied to multiple physical devices. Drivers are combined with the base layer to deal with desktops from different vendors: HP, Dell, etc.

One common use case for Mirage is a Windows XP to 7 migration. The first phase of this process is to push the base layer components down to the XP endpoint. Before the migration, an optional pre-deployment snapshot is taken to provide a fallback point; this snapshot is stored on the Mirage server. A reboot then takes place, referred to as a "pivot", which swaps out the XP files for the Windows 7 OS files and joins the desktop to the domain to complete the migration. The benefit of this approach is that the migration happens in place and end-user impact is minimized. While times will vary, a typical migration takes 30 to 50 minutes.

Great information today from @VirtualStef and the team

Monday, January 27, 2014

The benefit of infrastructure innovation to VMware Horizon View

Virtual desktops are expensive to deploy from a storage perspective. With advancements in deduplication technology, the footprint of a virtual desktop deployment on storage has been dramatically reduced, so the issue is no longer the cost of 'space' alone. Deduplication removes the 'like' blocks, stores just one copy and references the dependent data to reduce the amount stored. In a virtual desktop environment that consists of hundreds if not thousands of copies of a Windows desktop operating system, the consolidation of storage is very high. However, the IO required for a single VM can also be high; in production, VMs have been observed to require 25 to 100 IOPS per virtual desktop. While the storage footprint is small, the performance requirement is extremely high. The performance footprint of a virtual desktop environment can vastly exceed the performance demanded of all but a very few high-performance, high-demand enterprise software solutions such as Oracle, SQL and large Microsoft Exchange environments.

View Storage Accelerator

VMware has introduced several technologies that increase performance while reducing cost. VMware implemented the View Storage Accelerator (VSA), which is a form of local host caching. When a virtual machine is deployed, a digest file is created which references the most common blocks of the VM's operating system. In operation, the digest file is used to pull the requested blocks into memory on the ESXi host. This reduces the read requests serviced by the storage system by introducing a host-based cache.
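The reason this works so well for desktops is that identical OS blocks across many VMs hash to the same digest, so one cached copy in host memory can serve reads from all of them. The sketch below is a simplified Python model of that idea, not VMware's actual implementation; the class and method names are invented.

```python
import hashlib

class HostReadCache:
    """Simplified model of digest-based host read caching: blocks are
    identified by a content digest, so duplicate blocks across desktop
    VMs share one in-memory copy (an illustrative sketch only)."""

    def __init__(self):
        self.memory_cache = {}   # digest -> block contents held in host RAM
        self.hits = 0            # reads served from host memory
        self.misses = 0          # reads that had to go to the storage array

    @staticmethod
    def digest(block: bytes) -> str:
        return hashlib.md5(block).hexdigest()

    def read(self, digest: str, fetch_from_array):
        if digest in self.memory_cache:
            self.hits += 1
        else:
            self.misses += 1
            # Only a cache miss generates IO against the storage system.
            self.memory_cache[digest] = fetch_from_array(digest)
        return self.memory_cache[digest]
```

A boot storm of a hundred clones of the same image then costs roughly one array read per unique block rather than one per VM.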

vSphere Flash Read Cache

vSphere 5.5 introduced the capability of pooling locally installed SSDs on the vSphere hosts into a logical cache accelerator that all read intensive VMs can benefit from. The vSphere Flash Read Cache aggregates all read requests so that they are cached locally on an SSD drive vs. in memory as is the case with VSA. This creates a separate caching layer across all hosts (provided they have local SSDs installed) to accelerate performance. While not specifically designed for virtual desktop environments, it will enhance any read activity across the virtual desktop compute cluster.

View Composer, Stateless Desktops and Storage Reclaim

VMware Horizon View enables the deployment of a stateless desktop. A stateless desktop essentially redirects the writes so that the majority of the desktop is read only. A stateless desktop is a much cheaper desktop to deliver and manage operationally. This is because it is not customized to an individual user and makes use of View Composer linked clone technology. Composer enables a large number of desktops to use very little space. The decoupling between the user and desktop and use of Composer allows more flexible deployment options. Stateless desktops can make use of local Solid State Drives on the ESXi host to deploy the OS disk of the VM. The benefits have been somewhat difficult to realize though as operationally the OS disk must be recreated to reclaim unused space and reduce the size of the tree.

VMware View 5.3 introduced a reclaim process that allows this to be done automatically and to take place outside production times. This enables a linked clone tree to be deployed for an extensive period of time and reduces the manual operational process of reclaiming space. In a virtual desktop environment the predominant use of stateless desktops dramatically reduces the cost of the solution.
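The linked-clone-plus-reclaim lifecycle described above can be modeled in a few lines: reads fall through to the shared replica, writes land in a per-VM delta that only grows, and a reclaim pass discards delta blocks the guest has freed. This is a toy Python sketch under those assumptions, not View Composer's actual mechanism; all names are invented.

```python
class LinkedClone:
    """Toy model of a linked clone: a shared read-only replica plus a
    per-VM copy-on-write delta disk, with a space-reclaim pass
    (an illustrative sketch, not the real View Composer implementation)."""

    def __init__(self, replica: dict):
        self.replica = replica   # read-only base image shared by all clones
        self.delta = {}          # copy-on-write blocks private to this VM
        self.unused = set()      # blocks the guest OS has since freed

    def write(self, block_id, data):
        self.delta[block_id] = data
        self.unused.discard(block_id)

    def free(self, block_id):
        # Guest deletes a file: the block is garbage but still on disk.
        self.unused.add(block_id)

    def read(self, block_id):
        # Private delta blocks win; everything else comes from the replica.
        return self.delta.get(block_id, self.replica.get(block_id))

    def reclaim(self) -> int:
        """Shrink the delta by dropping freed blocks; return how many."""
        freed = [b for b in self.unused if b in self.delta]
        for b in freed:
            del self.delta[b]
        self.unused.clear()
        return len(freed)
```

Before the automated reclaim process, shrinking the delta meant recreating the OS disk; the scheduled reclaim pass achieves the same effect outside production hours.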

vSAN

VMware has entered the storage virtualization market with the release of VMware vSAN. vSAN allows the customer to completely segregate the virtual desktop environment onto a SAN built using local SSDs and hard disk drives (HDDs) that are pooled and presented logically as a single shared storage environment. Segregating the virtual desktop environment isolates the View requirements on a distinct set of physical resources so that there is no overlap at the hypervisor or storage level with production enterprise workloads. The benefits of this approach are many:

1. Predictive hardware performance

2. No risk of virtual desktop performance impacting general Storage performance

3. Scalable, building block approach to deployment

4. Centralized Storage through logical SAN presentation

5. Native vSphere HA enabled through point (4)

6. Reducing the cost of storage while still providing all the benefits of a SAN

The solution is based on micro-converged infrastructure in which a physical server with local SSDs and HDDs runs vSphere ESXi and a storage controller, as shown in Figure 1. For storage controllers that support pass-through, vSAN takes complete control of the SSDs and HDDs attached to the controller. RAID0 mode is used for controllers that do not support pass-through; this essentially creates a single-drive RAID0 set per disk using the storage controller, which requires you to manually mark the SSDs within vSphere.

Figure 1: vSAN logical diagram

VMware recently announced the bundling of vSAN as part of the VMware Horizon Suite, which includes View, Workspace and Mirage, and now vSAN as well.  The innovation and combination of technologies at the infrastructure layer provide additional value over and above what is incorporated into the VMware Horizon View software, delivering compelling value to VMware's customers.