Saturday, December 8, 2012

Deploying the vCenter Server Appliance

When you deploy the vCenter Server Appliance there are a few things you should verify first.  Proper name resolution is a must, so ensure that DNS in your environment is properly configured and that you have created an A (host) record for your vCenter Server Appliance as well as a reverse lookup (PTR) record. 
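As a quick sanity check, both records can be verified from any machine that uses the same DNS servers. The hostname and IP below are placeholders; substitute your own values.

```shell
# Forward (A) lookup -- should return the IP you plan to assign to the vCSA.
nslookup vcsa01.lab.local

# Reverse (PTR) lookup -- should return the vCSA's fully qualified hostname.
nslookup 192.168.10.220
```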

The installation of the vCenter Server Appliance is a little unorthodox, as you actually need to exit the initial configuration wizard to properly install vCenter Single Sign-On (SSO).

vCenter Single Sign-On is part of the VMware vCloud Suite.  Although it can be installed separately, it is an integrated feature starting with vCenter 5.1.  For additional information on SSO please refer to VMware’s vCenter Single Sign On FAQ.

The reason you need to exit the configuration wizard is that SSO is largely based on tokens and certificates and therefore the hostname of the vCenter Server Appliance (vCSA) must be properly set and verified before SSO is initialized. 

Deploying the appliance from OVF is a matter of importing it.  Let’s look at the steps once you have imported it and it is booted properly.

The vCSA uses DHCP by default.  To begin the configuration, browse to the assigned IP address on port 5480 (e.g. http://192.168.10.220:5480) as shown in figure 1.10.
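If you prefer the command line, a quick way to confirm the management interface is answering is to probe port 5480 from another machine (the IP below is the example address from this post):

```shell
# Expect an HTTP response; the appliance may redirect to HTTPS on the same port.
curl -k -I http://192.168.10.220:5480/
```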


   figure 1.10

The default login is root with the password “vmware”.  The wizard runs and prompts you to accept the license agreement; accept it and click Next.  On the Configure Options page you are prompted to cancel the wizard and configure the hostname and a static IP address before proceeding, as shown in figure 1.11.


figure 1.11

After you cancel the wizard, select the network tab and configure the hostname and network properties including the static IP as shown in figure 1.12. 

One thing I have noticed is that the hostname does not always apply properly through the graphical user interface.  As I mentioned, however, it is important that the hostname is properly applied before SSO is configured and initialized.


figure 1.12

The other item that tends to throw errors if not completed first on the command line is joining the appliance to the domain. 

You can join the vCSA and configure the hostname from the command line.  The process is similar to adding a vMA (vSphere Management Assistant) virtual machine to the domain and uses the same domainjoin-cli command. 

The domainjoin-cli command is included in the vCSA and can be found in the /opt/likewise/bin directory.

SSH is enabled by default for the root account, so you can use either the console or SSH to reach the vCSA.  The default password for the root account is “vmware” unless changed.  With domainjoin-cli you can update the hostname using the command domainjoin-cli setname [hostname], as shown in figure 1.13.
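Put together, the hostname change looks something like the following session; vcsa01.lab.local is a placeholder for your own FQDN.

```shell
# domainjoin-cli is not on the default PATH, so run it from its install directory.
cd /opt/likewise/bin
./domainjoin-cli setname vcsa01.lab.local
```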


figure 1.13

After the hostname is set, you can use domainjoin-cli once again to join the vCSA to the domain using the command domainjoin-cli join [domain name] [administrator@domain], as shown in figure 1.14.  You will be prompted for the password to complete the process.
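As a sketch, with lab.local and administrator standing in for your own domain and account:

```shell
cd /opt/likewise/bin
# Join the domain; you will be prompted for the account's password.
./domainjoin-cli join lab.local administrator@lab.local
# Optionally confirm the join afterwards.
./domainjoin-cli query
```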


figure 1.14

Once you have completed these two commands successfully, you can log back in to the GUI and complete the wizard to set up the database and configure SSO.  The wizard also enables Active Directory, which should now complete with no errors.

When you login to the GUI you will need to rerun the Setup Wizard from under the Utilities pane by clicking the Launch button.

On the Configure Options page, select “Set custom configuration” and click Next.

If you are using the embedded database, select embedded from the Database type dropdown (otherwise enter your specific database settings) and click Next.

If you are running the SSO on the vCSA select embedded from SSO deployment type dropdown and click Next.

On the Active Directory Settings page, select Active Directory Enabled, enter your Domain, Administrator user, and Administrator password, and click Next, as shown in figure 1.15.


figure 1.15

Review the configuration and click “Start” to begin the configuration.  Ensure everything installs, configures and starts correctly.

The setup automatically installs SSO with a random password.  Because the password is randomly generated, you cannot install the vSphere Web Client in another location, as that installation requests the SSO administrator name and password and the Lookup Service URL. 

You can verify the SSO Lookup Service URL from the command line by running cat /etc/vmware/ls_url.txt, as shown in figure 1.16.
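For example (the exact URL will reflect your appliance's FQDN; the format shown is typical for vCenter 5.1):

```shell
cat /etc/vmware/ls_url.txt
# e.g. https://vcsa01.lab.local:7444/lookupservice/sdk
```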


figure 1.16

As we are going to look at the VMware vSphere Web Client in the next post, I will leave the configuration and details until then.

Wednesday, December 5, 2012

VMware View Building a Successful Virtual Desktop

I am happy to say that VMware View: Building a Successful Virtual Desktop was officially launched on December 4th.  Writing a book is an interesting process, and I can honestly say I have learned a great deal over the last year.  I consider myself extremely lucky to have had the privilege of working with my team: from the initial acceptance of the project by Joan Murray, the editor and program manager with Pearson, and her support throughout the process, to the tireless efforts of Eleanor “Ellie” Bru, my primary technical editor, I am deeply grateful.

I also had great support from my two technical content reviewers: Stephane Asselin and Shawn Tooley. Stephane was a direct contributor to Chapter 5 and offered his expertise in all matters related to VMware View. Stephane is a leading expert in VMware View working with VMware. Shawn Tooley is a published author himself, and his suggestions greatly contributed to the polish of this book. I also want to thank Seth Kerney, who worked hard to put this book together.

I would also like to thank everyone in the community for their feedback and support.  I had the opportunity to discuss the project with Mike Laverick, who needs no introduction given his extensive contributions. He offered many words of wisdom based on his broad experience, which helped refine our approach. 

Monday, December 3, 2012

Federate Now; Extending to Cloud

Moving toward the IT as a Service (ITaaS) model is a complex process for most organizations.  It requires changes in technology, processes, and skill sets.  As the current climate makes expenditures difficult, many IT departments are reluctant to start any major transformation.  The hunker-down-and-ride-it-out mentality is delaying many strategic initiatives that would move IT departments closer to a service provider model.

The overwhelming number of decisions involved in becoming more service oriented often delays any meaningful plans to get started.  Delaying, however, is just as hazardous, as users become disillusioned with internal IT and start to leverage unsanctioned Cloud services.  There is a better way, one that allows you to integrate the benefits of Cloud without having to work through all the complexities. 

As part of VMware’s vCloud platform, VMware provides Cloud Connector.  Implementing Cloud Connector allows IT departments to federate their on-premises private Cloud with a VMware Certified Cloud Provider.  Most VMware Certified Cloud Providers have several different Cloud models, from traditional virtual infrastructure based on VMware vSphere to more self-service platforms based on vCloud Director.  The self-service models tend to take a more metered approach to billing, charging based on utilization vs. a flat per-VM price.

Federating, or connecting, your traditional vSphere environment to a VMware Cloud Provider's vCloud Director environment allows you to turn up certain Cloud-like features for select workloads quickly and efficiently.  vCloud Director and Cloud Connector are designed to plug into enterprise private cloud environments. 

Unlike other Cloud services that are focused on the end user and do not provide an administrator console, vCloud Connector plugs into vCenter.  This allows you to continue to manage your Cloud virtual resources in the same manner and with the same toolsets as your current on-premises environment.  You can still offer user self-service through the vCloud Director portal, but administration can look and feel the same.  You access the vCloud Connector plug-in from your vSphere client; the vCloud Director resources appear as an extension of your environment as shown in figure 1.01.


figure 1.01

While this does not negate the need to deliver on ITaaS, it does enable a quick method of plugging in and turning up Cloud features while transforming your existing environment, or in lieu of doing so.  It also allows you to be very selective about which workloads you apply these Cloud features to.  This allows you to assess the business value without a large upfront commitment. 

I believe Cloud is forcing IT departments to rationalize which services they are extremely good at and which they should be looking for assistance on.  At the two ends of the spectrum are in-house and outsourced.  Federating provides a good middle ground in which you centrally manage the combination of services that you provide.  In this way you become the primary service provider to your organization, delivering a wider range of services while avoiding the development cost and time.

Sunday, October 28, 2012

EMC Forum Toronto: IT as a Service (ITaaS)

Paul O’Doherty, Cloud Solution Manager

The advent of Cloud computing has created pressure on internal IT departments to act more like service providers.


Like a service provider, IT must deliver improved business agility and more direct value to the business top line while maintaining security, trust, and control.  At the same time, the IT group must figure out how to simplify the consumption of technology services by the business.

This requires changes to how service is delivered; allowing more self-service and greater end user choice on compute devices.  It requires the IT group to focus on delivering services vs. silos of technology.

IT must consider more hybrid models of private and public cloud.  A hybrid model enables the IT group to deliver the services they are extremely good at and federate the rest.  Studies suggest that a hybrid Cloud model provides the lowest total cost for delivering IT compared to any other model, including public cloud.

In order to transform there are three primary models that the IT team must adopt:

  • The Technology model: integrating Hybrid Cloud using converged infrastructure.  Converged infrastructure enforces consistent standards and predictable IT and utilization costs.
  • The Consumption model: deliver service catalogs vs. delivering a collection of virtual machines to accelerate deployment of new services. 
  • The Operations model: a new set of skills will be required in Cloud and datacenter architecture, in addition to a change from traditional project management to product management.  This will enable the IT team to deliver products that improve business competitiveness and agility.

Adopting these models will take time, however the benefit is that operational costs are reduced. In addition, deployment times are accelerated and the amount of IT capital available for delivering new capabilities and services increases in an ITaaS model.  To see the complete presentation click here

Wednesday, October 24, 2012

EMC Forum Toronto: Isilon Advantage

Isilon allows you to apply N+1 through N+4 redundancy, so you can tailor the availability of the storage to the business requirement.

Isilon supports NFS, SMB, HTTP, FTP, and iSCSI, in addition to native Hadoop HDFS support.

The appliance supports RBAC and enables Authentication Zones to enable multi-tenant Cloud storage.

EMC has collapsed multiple features into OneFS, which enables:

Single File System
High Performance
Multi-Tier
80% efficiency
Easy Growth
Linear Scalability

The Isilon hardware family incorporates the S-Series (Performance), X-Series (Flexibility), and NL-Series (Capacity). You can archive from one tier to another while online using SmartPools.

OneFS is a new approach that collapses everything under one management model. Performance is unmatched: Isilon delivers 1.6M IO/s, making it the world's fastest NAS device. In addition, you can now revert snapshots with a single command.

A new capability is SyncIQ: failover/failback between sites with the touch of one button. SyncIQ is multi-node and multi-threaded.

Isilon comes with Secure Separation between access and the platform. You can be an administrator on the box without having access to the data. In addition the new SmartLock feature can disable the administrator from deleting files.

Isilon will support VAAI and VASA APIs in the vSphere platform. In addition they have implemented a REST API to allow integration into other EMC Platforms.

For complete roadmap and product information go to

http://www.isilon.com/onefs-directions


- Posted using BlogPress from my iPad

EMC Forum Toronto: Big Data Transforms Business: Michael Bloom and Adam Fournier

Michael is a mid-tier specialist and Adam is with Greenplum. What is responsible for the big jump in data? The proliferation of smart devices recording and uploading images and data around the globe.

How are companies using big data?

Getting to know their customers
Budget and planning exercises
Performance management of workloads
Pricing and costing exercises

This explosion in data introduces new opportunities for business, for example, being able to tailor smartphone ads as the customer enters the store, based on prior shopping habits. This provides a localized experience for consumers.

The first thing you need is lots of space. For example: Broad Institute is using Isilon to store data for genome sequencing. Isilon has a single management interface amalgamating petabytes of potential storage space.

Once all that data is localized, what do you do with it? You need to apply analytics to the data to turn it into business value. This segmentation of massive amounts of data is called micro-segmentation.

Greenplum provides data analytics for structured or unstructured data (Hadoop) and adds nodes for linear scalability and performance. Queries can run in parallel and are tuned to scale.

The layers of this model are the Isilon platform and presentation of the data using the HDFS protocol. Greenplum uses an HDFS API to access this data.

Big Data analytics requires data science; essentially you are running mathematical algorithms to predict what would be needed next. In the example, these algorithms were used to predict what the customer would be interested in and target marketing to them.

How many packaged apps are built around big data? Very few, so at this point in time they are all custom built. Developing these custom interfaces can be very advantageous; as an example, there are a few online retailers enabling partners to query their customer data to understand shopping habits.

Greenplum Chorus brings a social networking type interface to allow you to interact with big data. You can create a workspace, create a team of users to interact with it and add a sandbox to store your data. Once your workspace is created you can grab an instance of data to interact with it. You can tag it to associate it with your workspace (vs. moving it). This is beneficial as it gives you the ability to associate but avoid having to create copies or moving large amounts of data. You have the ability to join relational and hadoop based data sets. The point of this flexibility is to enable a business to do things like customer profiling.

Pivotal Labs helped build these interfaces, and EMC liked them so much that it essentially acquired the company. They bring the application development piece that was missing in the EMC big data message.

To reiterate the layers: petabyte storage, the analytics platform (Greenplum), data science (Greenplum analytics labs), and the ability to develop new applications (Pivotal Labs).


EMC Forum Toronto: Why VCE Today, Frank Hauck

The session starts with the potential of the converged infrastructure opportunity. Right now converged infrastructure is 6% of IT spend. VCE sees this growing to 60% of IT spending. Converged Infrastructure (CI) actually helps IT departments by simplifying things.

Today's IT is typically built out project by project, which makes deployment piecemeal. The speaker relates a common problem where infrastructure gets out of sync: essentially, upgrades and maintenance are not applied consistently across the infrastructure. The solution is to build it converged in the factory.

Servers, Storage, Networks with VMware software; there is a standard set of components put together in a standard way. There is still some flexibility to allow a percentage of customizations at the appropriate level. In designing Vblocks, VCE is looking to level set the skill sets required and blend the silos of expertise (Server, Storage and Network).

The Vblocks allow you to stand up infrastructure in less than two days and start delivering increased IO to the application. In addition VCE provides end-to-end support on the Vblocks.

The first products VCE created were reference architectures. Customers and partners asked VCE to take these and deliver a product; this is where the Vblock came from, and converged infrastructure is a trend that is catching on industry-wide.

VCE has built a software API to allow you to interconnect frameworks and reduce the number of people working on infrastructure. According to IDC research, the VCE products provide 5x faster deployment with 4x fewer resources to deploy. In the near future VCE will ship a remote office/branch-in-a-box solution designed for the SMB space.


EMC Forum: Toronto

At the EMC Forum in Toronto, the list of Platinum sponsors is Cisco, VMware, and Brocade. The Gold list is Intel, OnX, and VCE.

EMC is a onetime data storage company that now sells disruption

The message is about EMC's ability to transform IT and the way companies do business.

Country Manager for EMC Canada Mike Sharon is introduced. Mike thanks the audience for their business and support and taking time to learn EMC's solutions.

The program this year includes 20 sessions, including one delivered by yours truly on how to transform IT to IT as a Service (ITaaS). Mike talks about the evolution of the discussion from an IT focus to bringing a competitive focus to the business.

Dennis Hoffman, Sr. Vice President of the EMC partner program, is introduced as the keynote speaker.

Dennis prefaces the messages that will be delivered throughout the day, which are about transformation. He talks about the waves of change impacting IT companies, careers, and whole businesses.

Dennis believes we are at the start of a new wave that incorporates Cloud and Big Data.

Dennis says the message is clear: you should be doing both private and public cloud. Surveys show that people are already doing both. You cannot have a hybrid cloud without trust, and as EMC is not going to become a service provider, this is about trusted partners.

We need to simplify Cloud through a set of products, proven infrastructure, and converged infrastructure. The transformation begins with the simplification of the infrastructure.

Cloud is really a workload-centric approach: how do I deliver disparate workloads, and deliver them consistently? Customers are increasingly asking several questions:

Can I do it cheaper in the Cloud?
Can I trust the workload in the Cloud?
Does the Cloud meet all my functional requirements?

Cloud transforms IT, "We are striving to take IT from a cost center to a competitive advantage"

Dennis talks about the issue of trust in the Cloud. The old world is very static, characterized by static, bolt-on security models.

To address this, we have to put a TiVo on our networks in order to record traffic, which takes a massive amount of data.

Like Cloud, Big Data is transforming business. In 2000 the world generated two exabytes of data every year; now we generate that every day.

It is being used in a variety of ways to improve business. Data has to be converted to information in order to be beneficial to us.

EMC has made big investments in this area: the first is really about storage; the second is investments in analytics (Greenplum); and on top of that, they have invested in analytics labs. This is an area of the market that is straining to find skills. In addition, they bought a company (Pivotal Labs) to write applications that take advantage of this data.

EMC has struggled to rebrand itself from a place to store data to a partner in transformation. Many of its acquisitions have been outside the core of storage, although storage still represents a large percentage of its revenues.

VMware is a recognized leader in the development of Cloud operating systems. vCloud Director is designed with federation in mind.

EMC has 50,000 employees in 83 countries. EMC is going to build products. You expect EMC to invest and acquire solutions.


Monday, September 3, 2012

VMworld 2012: Cloud Bursting redefined

While the notion of Cloud Bursting is enticing, we are still at the early stages of making it a seamless and autonomic process. Technologies such as VXLAN and NVGRE are an important piece of getting to this point, but they do not completely solve the problem. VXLAN (developed by Cisco) and NVGRE (developed by Microsoft) allow you to overlay a Layer 2 (L2) network over a Layer 3 (L3) network, removing the need to update IPs and MACs to gain mobility. These technologies are being used to extend L2 networks to public cloud providers to simplify deployment.

As Cloud Bursting is a much overused term, in this post I define it as the ability to dynamically create additional VMs based on Key Performance Indicators (KPIs) that flow over into a public cloud with complete autonomy.
While we are not there yet, we have significantly moved the bar forward to something I refer to as dynamic stretched scaling.

VMware has suggested that scaling involves automatically adding additional VMs to vApps based on demand. VMware defines bursting as essentially doing the same thing only between a hybrid private and public cloud. This is now possible but as suggested it is closer to dynamic stretched scaling than true cloud bursting.

To break this down into a more tangible example, let's look at the layers required to make this work. The current architecture of dynamic stretched scaling takes advantage of new features in the vCloud Connector product from VMware (for additional details, please see my post on the vCloud Connector Tech Preview). vCloud Connector allows you to keep a catalog in sync between your private and public cloud. To configure this, you will need the same catalog available in both locations. Client connections will be load balanced using any technology that provides Global Load Balancing (GLB); VMware's go-to partner for load balancing is generally F5, which does offer GLB.

On the back end, you must monitor your vApps' KPIs to identify when additional capacity is required and, alternatively, when it is not. Based on these KPIs, you add or delete VMs equally on both sides of your cloud (public and private). You can automate this by tying your monitoring into VMware vCenter Orchestrator.



Although this is a substantial leap forward from where we were a very short time ago, there is still a great deal of complexity in the configuration. It also requires tying various vendor solutions together. And while this works well for some architectures, it adds complexity to others. For example, consider a traditional tiered architecture with a relational database, application, and web tier. You may have to segregate the relational database between read/write masters and read-only subscribers to adjust for latency while still ensuring reasonable performance.

Latency in this model may be relative as well. You will have to know when the performance latency in the private cloud environment is greater than the network latency introduced by using a public cloud provider and react accordingly.

Using dynamic scaling locally and in the public cloud requires you to know which indicators point to poor performance, such as the time it takes an end user to load a page. In addition, if you are a Cloud provider, performance is typically defined by an SLA; matching the service commitment may require adding or throttling resources.

It is possible to put these models together now, but it still requires an in-depth understanding of the components and how they interact with each other. This is why I refer to the current capabilities as dynamic stretched scaling vs. true Cloud bursting.



VMworld 2012: vCloud Connector Technology Preview

VMware has introduced three major enhancements to Cloud Connector which they are calling:

One Network

Cloud Connector has the ability to create a stretched network using an SSL VPN to connect an on-premises cloud to a public cloud. This allows you to move VMs between locations. Even though you are moving VMs between locations, the Layer 2 network configuration is maintained, meaning that neither the IPs nor the MAC addresses need to change, even if you are moving to a different geographical region. The initiation of the SSL VPN is facilitated through the vCloud Director API, which public cloud providers generally expose, so it should be universally available. One Network does require the underlying virtual infrastructure (vSphere, vCloud Director) to be running version 5.1.

One Catalog

VMware has enhanced catalogs to allow a publisher/subscriber model. For example, you can publish a catalog from your private cloud and have your public cloud organizations subscribe. Any changes made to the published catalog are synced to maintain consistency across all copies. (Note: at the time of release this is a full sync, not a delta sync.)

One Cloud

Significant changes have been made to the GUI. It is based on the vCloud Director 5.1 interface, so it has the same look and feel. The Connector console now provides direct console access to your VMs from within the interface. In addition, the migration of workloads is wizard driven and has been reduced to a few clicks.

The migration of workloads still requires the VMs or vApps to be powered off, but VMware has hot migration on its roadmap, targeted for the next major release. VMware cited a number of universities, as well as Sega's gaming teams, that were using the One Catalog feature to share projects and gaming catalogs between regions using public cloud providers.


Thursday, August 30, 2012

Configuration Management for Your Cloud using vCenter Operations Suite

A high level of automation is required to lower OpEx and CapEx in the cloud. Cloud infrastructures are unique as the rate of change is extremely high. Because of this rate of change how can you ensure complete visibility? How can you maintain compliance as you virtualize your Tier 1 applications?

It is possible to have a single pane of glass for your entire virtual infrastructure using vCenter Operations Suite. You can deal with compliance and introduce a high level of automation to simplify operations, provisioning and business management using these tools.

The goal of vCenter operations suite is to provide simplified operational management. In developing these products, VMware realized just integrating into the vCloud stack is not enough to add significant value.

vCenter Configuration Manager allows you to track changes, and compliance for all levels of the private cloud stack. The same policies can be used to manage both your private and public cloud. In addition as a service provider you can generate reports to your customers to demonstrate compliance.

Thousands of settings and configurations are collected, including virtual network and storage settings, host profiles, vShield and vCD configurations, and all the underlying vSphere components. Dashboards present the information visually.

A Change Management Dashboard keeps track of all changes in the environment. You can also make bulk changes across all vCenters and ESXi servers at once. Once you have made changes, you should configure a policy to enforce the configuration; this lets you tell whether your current configuration has been revised in a way that deviates from the policy.

The solution comes with many out-of-the-box templates for compliance. To build these, VMware has an entire team that takes policy and compliance standards and translates them into policy rule sets. In addition, you can define exceptions and expire certain policies to ensure flexibility in how the rules are applied.

It is possible to run compliance reports and, when things are out of compliance, roll them back with the click of a button. This degree of automation can assist with performance as well as configuration issues. From a single view you can see performance degradations, along with what changes were made leading up to the performance problems. Using Configuration Manager, you can roll back any changes that may have led to performance issues.

There is a rich set of extensibility features that are provided: APIs, workflows and SDKs. In addition the product ties into vCenter Orchestrator.


VMworld 2012: Cisco Virtual Networking and Security Announcement

Cisco has aggressively virtualized its product line, which it refers to as the vStack, and has introduced new Nexus 1000v products at the show. These will be supported on Hyper-V and open-source hypervisors like KVM. The products are:

ASA 1000v - a virtual Adaptive Security Appliance
Nexus CSR - a virtualized Nexus router
vWAAS - a virtual appliance for providing WAN acceleration
VSG - Virtual Security Gateway

Cisco is promoting VXLAN, which allows you to overlay a Layer 2 (L2) network over a Layer 3 (L3) network. Cisco developed the technology and shared it with partners like VMware and Citrix, which made setting the standard easier, as there was widespread support for VXLAN. It is based on providing a tunnel through the network, so it requires gateways, and this has some implications for the underlying network.

The L2 frame is encapsulated in UDP. VXLAN uses a 24-bit identifier, making it possible to provide 16 million networks. The virtual machine is unaware that its traffic is actually passing between L3 networks; it believes it is on the same L2 network.

VXLAN uses IP multicast to learn the network. The underlying physical hosts join multicast groups. Multicast groups can be shared among VXLANs, so the number of networks (16 million) is not limited by the number of multicast groups. Packets are filtered to prevent shared multicast groups from behaving like one large broadcast network (for example, one multicast group being used for all 16 million potential networks). In addition, learning is done by multicast, but packets are sent by unicast.

On the physical switches you need to have IP multicast and Proxy ARP turned on. From a Layer 2 perspective you have to turn on IGMP snooping (the default on Cisco switches). You also need to ensure UDP port-based load distribution is enabled.
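On a Cisco IOS-style switch, those prerequisites might look roughly like the snippet below. Treat this as a sketch only; the exact commands (particularly the load-balancing keyword) vary by platform and software version, and Vlan100 is a placeholder for your VXLAN transport VLAN.

```
! Enable multicast routing and PIM on the VXLAN transport VLAN
ip multicast-routing
interface Vlan100
 ip pim sparse-mode
!
! IGMP snooping is typically on by default; confirm it globally
ip igmp snooping
!
! Include L4 (UDP) ports in load distribution across port-channel links
port-channel load-balance src-dst-port
```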

In vCloud Director 5.1 the integration of Nexus 1000v is available natively in the GUI once you enable the feature.


Wednesday, August 29, 2012

VMworld 2012: VMware Horizon: Deep Dive and Best Practices

The problem that Horizon attempts to solve is the shift from managing devices to managing user content. The days of telling users what to do are over; we are entering the post-PC era. To be clear, this does not exclude Windows, but Windows is now a piece of a larger requirement rather than the only way to access content.

We hit a crossover point in 2011 at which the underlying operating system is no longer a limiting component restricting applications to pure Windows platforms. Now we have iOS, Android, Macs and Windows, and we have to come up with a mechanism to service all of them. The market has provided point solutions for each of these environments, but this gets complex and costly. The complexity has been compounded by the Bring Your Own Device (BYOD) phenomenon.

This is where Horizon is targeted. The idea is to have one management point to entitle, build policy and report on application access. Horizon was launched in spring of 2011 to deal with the external SaaS integration challenges. In fall of 2011 VMware integrated ThinApp to deal with Windows applications. In summer 2012 VMware built an on-premise offering, as customers complained about using a SaaS application to manage SaaS.

Horizon is made up of a Horizon Application Manager and a Horizon Connector virtual appliance. The Connector takes metadata and sends it to the Manager for control. There are three defined roles: Administrator, User and Super Administrator. As Horizon is multi-tenant friendly, the Super Administrator role manages multiple environments or workspaces.

The Horizon Application Manager appliance scales to 100,000 users because it is a true broker. The Horizon Connector supports 30,000 users per appliance. RSA is supported, and you can also separate connectors for internal and external usage.

There is no load balancing built in, but HA and cloning are supported. Cloning is recommended because it allows the certificates to be copied. The functional components do not change with the introduction of the Horizon Suite. Competitively, the deployment and integration are very easy compared to other solutions.

In the Horizon Suite VMware now supports mobile apps, VMware View, Project Octopus and Citrix published applications. VMware constructed it as a vApp to enclose all the pieces and simplify deployment. A Configurator VM manages all the virtual appliances (VAs): you simply enable the components you want through the configuration wizard, and the Configurator VM spins up and configures the VA. There is a Gateway VA to allow files to be transferred in and out of the environment. A rich set of Class of Service rules ensures that what happens in Horizon is controlled by the policy engine.

VMware Mobile is fully supported to ensure smartphones are a fully integrated and managed client in Horizon. VMware manages the entire catalogue of Horizon apps through entitlement. Horizon has RabbitMQ built in to provide a message queue or bus, which allows you to automate using tools like Remedy and Orchestrator so you can integrate Horizon with your existing workflow tools.

Horizon has built-in reporting and analytics to track users, resources and files.

- Posted using BlogPress from my iPad

Tuesday, August 28, 2012

VMworld 2012: Steve Herrod, End User Keynote

Steve introduces the new suites from VMware. The session discusses how VMware has combined their products to solve key business problems, with a focus on the end-user space. Steve opens with a few buzzwords to highlight growing trends: the consumerization of IT, BYOD (Bring Your Own Device) and the IT of consumerization (which describes how technically savvy users are becoming).

Steve talks about how difficult these trends have made things for IT. The disparity in solutions has led to more tools and more point products. This complexity has led to cost increases.

Last year they talked about the transformation from the legacy approach of IT to a new services approach. The second challenge alluded to is how you broker these new services and tailor them to a user rather than a device.

The transformation has not really meant the death of the PC but a move to a multi-device environment. Gartner reports only 30% of businesses have migrated to Windows 7.

Steve reviews the release of View 5.1, which has reduced acquisition costs and improved the end-user experience. Steve also mentions new appliance-based products that allow you to build out desktops quickly. Cisco has a new device called the ISR G2 that allows you to deploy View in a branch and integrates a number of technology enhancements, including VoIP.

Even with all the work done on View you still need to deal with offline mode. Steve talks about Wanova and specifically its Mirage product. Mirage decomposes a desktop image into layers such as hardware drivers, OS, applications and user data. Once this is done it enables you to quickly change out components of the stack. Mirage keeps these desktop images synchronized when deployed to the end device, allowing you to keep them current in near real time. Mirage enables you to run the View desktop locally but trickle the changes back to the datacenter.
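
Conceptually (and only conceptually; the data structures below are invented for illustration), this layering treats a desktop as a composition of independent layers, so migrating the OS means swapping one layer while the rest travel untouched:

```python
# A desktop as a composition of independent layers (all values invented):
desktop = {
    "hardware_drivers": "dell-latitude-pack",
    "os": "windows-xp-sp3",
    "apps": ["office-2010", "acrobat"],
    "user_data": "\\\\server\\users\\jsmith",
}

def swap_layer(image, layer, new_value):
    """Return a new image with one layer replaced; the others are untouched."""
    migrated = dict(image)
    migrated[layer] = new_value
    return migrated

win7 = swap_layer(desktop, "os", "windows-7-sp1")
print(win7["os"])                                 # windows-7-sp1
print(win7["user_data"] == desktop["user_data"])  # True
```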

This sets up a demo that takes a user from Windows XP to 7 while online and using their laptop; a reboot completes the migration. In the next scenario the laptop is 'stolen', and access is enabled through a thin client to demonstrate that the virtual desktop follows the user. The final piece is the redeployment of the View desktop using Fusion to integrate it into a new Apple laptop.

VMware also introduces the concept of User Interface Virtualization, which allows legacy apps to be tablet friendly by making them work through gestures. The demo showed movement between applications using an album-cover approach, as well as clipboard functionality using gestures.

Now Steve moves to a discussion on brokers and introduces the Horizon Suite, which can broker applications, data and desktops. Project Octopus is integrated into the suite, and Steve mentions that this can all be administered from a single place. The alpha product is shown: when you log in you get a dashboard of current use of all the pieces, including traditional, thin and mobile applications. You can manage mobile apps in addition to web applications, ThinApps and, lo and behold, Citrix. The Apple App Store and Google Apps have also been integrated. Deployment is done through the completion of Class of Service policies, and applications can be self-activated by the user provided the administrator allows it.

Steve now introduces Horizon Mobile, which manages a virtual container for your smartphone, allowing personal and business segregation. Currently iPhone and Android are supported. Horizon apps are managed in a separate container on the mobile phone.


- Posted using BlogPress from my iPad

VMworld 2012: Opening Keynote: Paul Maritz and Pat Gelsinger

A number of announcements were made, including the attendance and diamond sponsors. The numbers this year are 20,000 attendees and 10,000 watching the keynote online. The diamond sponsors this year are Cisco, Dell, EMC, HP and NetApp.

Paul Maritz was introduced and begins with a look back to his first VMworld keynote in 2008. Paul talks to the Gartner numbers on virtualization: in 2008, 25% of workloads were virtual; in 2012 it is 60%. Virtualization is now the default way of running applications. In 2008 there were 25,000 VCPs; in 2012 there are 125,000. There were 13,000 attendees in 2008 and 20,000 in 2012.

Paul talks about the mindset: "In 2008 we asked, 'What is cloud?' In 2012 we are asking, 'How do we implement cloud?'" How does the cloud affect people and processes? VMware has developed a deep body of how-to information on virtualization and cloud. Paul asks: in four years, how do things look?

Paul believes that a strong set of forces is at work to automate most people processes, moving from physical processes to computer-based processes. Every business is expected to be able to do this. What is happening now is not just the presentation of static information but the tailoring of experiences to the audience in real time. This has profound implications for what happens underneath in the infrastructure, because it cannot happen on today's infrastructure. Providing information is not innovative, but providing relevance is. How can we make IT more efficient and agile to deliver on these challenges?

We will see a transformation in IT. It will happen in three broad categories: infrastructure, applications (real-time demand at exceptional scale) and access (from a PC-dominated world to mobile devices).

Paul introduces Pat Gelsinger (the almost-new CEO) to talk about the future.

Pat begins with the infrastructure layer. Pat is targeting moving businesses from 60% virtualization to 90% virtualization. We need to continue to make progress on reducing provisioning down to minutes and seconds; too much complexity is involved in deployment. Take, for example, the complexity of networking security: can we automate these as well? The promise is the software-defined datacenter: all infrastructure is virtualized and delivered as a service, and the automation is entirely done by software.

Pat mentions that there is a strong push from vendors, particularly database vendors (read: Oracle), to make the stack more proprietary and separated. This is not the approach that VMware will take. The challenge is to do for the rest of the datacenter what virtualization did for servers. Can it be done?

Introducing vCloud Suite

The vCloud Suite is based on vSphere, layered with vShield, SRM and vCloud Director, with plugins for the Operations suite and vFabric. It includes APIs for extensibility, Connector for interoperability and Orchestrator for automation.

It is based on what Pat calls the unmatched, comprehensive and highest-performance virtualization product on the market: vSphere 5.1.

Pat says that VMware has done a good job of maintaining a yearly cadence of major software releases. VMware is proven: 80% of the 60% of workloads virtualized are running on VMware.

Pat then makes the announcement that many have been waiting for: "Today we are striking vRAM from VMware's dictionary. Licensing will be based on CPU/socket with no core limits."

Pat also mentions a new set of role-based certifications. VMware will provide the tools and services to help you navigate this change to cloud computing.

VMware realizes it is a multi-cloud world and explains that Cloud Foundry supports other vendors' clouds. In addition, VMware acquired DynamicOps to support other clouds.

VMware also mentions the open-source networking acquisition, Nicira, which was completed on Thursday of last week.

What about the Applications in the era of Cloud? Pat mentions the major updates to vFabric this year. Cloud Foundry is the one and only open PaaS platform.

Pat also mentions the Wanova acquisition and its Mirage product. VMware is making Horizon the broker for the cloud era. In summary, VMware has the transformation to cloud well covered and is here to enable customers.


- Posted using BlogPress from my iPad

Tuesday, March 27, 2012

Do you have ‘Actionable Intelligence’ (AI) on your Virtual Infrastructure?

I have been doing a lot of vCenter Operations work and have to admit I really like the version 5 product.  I am a little less enamored with the licensing, but only because of where they put the demarcation point between Advanced and Enterprise.  Let me share a few thoughts with you and explain the AI title.

Actionable Intelligence is something I picked up from interacting with VMware’s sales folks.  It is a key differentiator between vCenter Operations (vOPS) and many other systems management tools. 

The secret sauce is in the dynamic thresholds.  Rather than setting a static threshold such as CPU over 75% utilization (as you get in vCenter), vCenter Operations configures dynamic thresholds.  Every 24 hours, algorithms are run to identify what is normal behavior and what is abnormal behavior on the objects aggregated in vOPS.  The objects and data are pulled from vCenter.

Why is this so unique?  Well, the proof is in the patent.  The algorithms are patented by VMware and incorporate their intellectual property on what's best for your VMs.  In developing their systems management products, VMware noted that in many tools the data is difficult to interpret or potentially suspect.  In the design of vCenter Operations, considerable time was spent to ensure the information and the display are straightforward and provide information you can act on (hence the 'actionable intelligence' buzzword).

Does it deliver?  Well, you need to allow time for the technology to recognize normal and abnormal behavior.  For example, if vOPS has been running for one day and an irregular spike in CPU occurs on a VM, it may register an anomaly.  If that spike happens every week at the same time, it will come to be considered normal behavior.  VMware generally recommends that you run the tool for a break-in period of two weeks.  The real-time monitoring will be useful the moment it is installed, however.
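
VMware's actual algorithms are patented and unpublished, but the idea of a dynamic threshold can be illustrated with a toy baseline model: derive the 'normal' band from observed samples rather than from a fixed number. Everything below is my own sketch, not the vOPS implementation.

```python
from statistics import mean, stdev

def dynamic_threshold(history, k=3.0):
    """Derive a 'normal' band from observed samples: mean +/- k standard deviations."""
    mu, sigma = mean(history), stdev(history)
    return (mu - k * sigma, mu + k * sigma)

def is_anomaly(sample, history):
    low, high = dynamic_threshold(history)
    return not (low <= sample <= high)

# A VM that routinely idles near 10% CPU: 80% is anomalous here,
# even though it would sail under a static 'alert if CPU > 90%' rule.
baseline = [9.0, 11.0, 10.0, 12.0, 8.0, 10.5, 9.5, 11.5]
print(is_anomaly(80.0, baseline))  # True
print(is_anomaly(10.2, baseline))  # False
```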

One of the nice integration points is the inclusion of the Capacity IQ reporting and what-if modeling in a 'single' product.  I quote 'single' as it is still two VMs (the analytics and UI virtual appliances) released as a single vApp.

My only minor point is on the licensing break: essentially Standard is real-time monitoring, and Advanced is everything, including long-term risk and efficiency.  In vOPS Advanced you are restricted to a single locked dashboard, the default console.  Enterprise allows you to unlock the customizations so you can build separate dashboards, and Enterprise ++ allows you to incorporate physical as well as virtual machines.

What I find is that the minute it is deployed, customizations are critical.  You may want the management team to only see a high level view vs. being able to drill down on all critical alerts for example.  To move from Advanced to Enterprise is a license key upgrade on the same product which unlocks the customization utility.  It’s almost like they planned it this way to entice the Enterprise upgrade, hmmmmmm…

Saturday, February 18, 2012

Virtual Security; Defense in-depth

The VMware vShield product line has come of age, introducing a new term into our vernacular: "virtual security". It is now possible to provide a level of security around our servers and data that would be both difficult and costly without virtualization. An accepted principle in the field of security is defense in depth, based on the premise that many layers of security at different locations are better than a heavily secured perimeter.

vShield has expanded into four products: Edge, App, Endpoint and Data. The difference between them is the level of the virtual infrastructure at which they provide protection. vShield Edge puts a security appliance, or shim, between the vSphere host and the vSwitch to harden the perimeter of the virtualization environment. vShield App applies security between the virtual NIC of the VM and the vSwitch, securing traffic to and from the VMs. vShield Endpoint, in combination with 3rd-party support, reduces the attack vector, or exposure, of the VMs to viruses and malware, and does so in a way that reduces the overhead of scanning and complements the unique characteristics of virtualization. vShield Data ensures the integrity of our critical data: it scans data against one of a number of compliance templates included in the product and reports on any file that fails the compliance scan. Together they provide a comprehensive solution that we generally term virtual security.
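
In spirit, a compliance scan is pattern matching against templates. The sketch below uses two deliberately simplistic regexes (real PCI/PII templates are far more rigorous, and vShield Data's are proprietary) to flag text that appears to contain regulated data:

```python
import re

# Illustrative patterns only; real compliance templates (PCI, PII, ...)
# are far more rigorous than these two sketches.
PATTERNS = {
    "ssn-like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card-like": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def scan(text):
    """Return the name of every pattern the text appears to violate."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

print(scan("invoice for order 42"))              # []
print(scan("employee SSN 123-45-6789 on file"))  # ['ssn-like']
```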

To take advantage of virtual security we will need to broaden our virtualization expertise to incorporate security and our security teams. While vShield has made it easy to apply, you still need to ensure that the capabilities are understood and policy is developed. A defined strategy will ensure you understand how the vShield product rules can be customized to meet the requirements of your business.



- Posted using BlogPress from my iPad

Wednesday, February 15, 2012

Multi-Tenancy; Is your infrastructure Ready?

It is clear that this year will see significant change in the IT industry as internal IT shops move to the service provider model. Key to this will be virtualization, which provides a flexible platform for the allocation of compute resources. This layer will evolve to include automation and multi-tenancy.

From our base concept of a virtual datacenter we will add a provider datacenter and an organizational datacenter. The provider delivers management and a global view of all compute resources available, either on premise or off, and the organizational datacenter provides autonomous management of a partition of those resources. These are the basic building blocks of vCloud Director. vCloud Director on its own, however, is not the complete solution, as it depends on the underlying physical server and storage architecture.

To move forward we need to look carefully at our physical layer and ensure it is multi-tenant capable; this may well be a real source of cost in moving to the provider model. A large percentage of storage architectures have been designed for either random or sequential read/write activity, but not both.

When you mix workloads you quickly reach the limits of what the storage solution can do. There are a number of storage vendors that have cut their teeth building solutions for hosting providers; these are vendors that understand the challenge of large mixed-workload environments at tremendous scale. To get to multi-tenancy in the virtualization layer we need to consider whether the foundation on which it is built can deliver the performance required.

There will be a process transformation in order to ensure the value of these investments can be delivered efficiently. This will require a commitment to the automation of a large portion of IT processes. Automation accelerates the delivery of services from the providers of IT to the consumers of IT.

This can equate to a fair bit of work, as to automate you must first understand the process from start to finish. Even in large organizations with well-defined processes this can be challenging, because once defined, the processes will need to evolve. The trick is to ensure the checks and balances are carried forward while streamlining the time to output (TTO) of the process.

It is necessary to overcome these challenges in order to keep in sync with the evolution of end-user demands, the maturity of technology and the journey to the cloud. It is going to be a busy year for IT.

- Posted using BlogPress from my iPad

Monday, February 13, 2012

Virtualization Automation; the view point to the Cloud

Over the last few years very few customers have integrated some of the more advanced features of virtual infrastructure. For example, products such as VMware vShield have been available for some time and are included in vSphere bundles, but few have integrated them completely. This is either because the benefit is not clear or because there has been no real driver to add additional layers of complexity to internal virtual server environments. This has created a bit of a problem for VMware in moving customers to adopt additional products.

Much has changed however with the advent of Cloud services that offer externally hosted infrastructure or Infrastructure as a Service (IaaS). If you look carefully at these solutions, they offer several features that are not available from a user perspective in traditional virtual infrastructure:

Self-service: When you subscribe to IaaS sites you are presented with a portal from which you can deploy various templates without the need to contact an IT department.

Automation: The process from start to finish is designed to secure payment and provide services in the quickest time possible, ensuring the provider can start billing and the user can start deploying.

Advanced networking: Most providers allow some control over networking configurations, letting you expose (or not expose) the VMs or the application services within them.

Many customers are starting to look at these features from hosting providers and consider how they might integrate them in internal environments. There is strong interest in products such as VMware vCloud Director, which has moved from version 1 to 1.5. In combination with vCloud Director, Service Manager (which allows you to deliver ITIL process automation) is being integrated into Director environments. The last product that allows customers to deploy a cloud-like service internally is vCenter Chargeback Manager. Even customers who are unlikely to 'charge back' find it beneficial to be able to 'show back' the cost, to avoid an overallocation of capacity in their virtualization environments.
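
Showback at its simplest is just an allocation multiplied by a rate card. The sketch below uses invented rates and VM names purely for illustration:

```python
# A made-up monthly rate card; real rates come from your own cost model.
RATES = {"vcpu": 25.00, "ram_gb": 10.00, "disk_gb": 0.25}

def showback(vm):
    """Monthly cost of one VM's allocation against the rate card."""
    return sum(vm[resource] * rate for resource, rate in RATES.items())

web01 = {"vcpu": 2, "ram_gb": 8, "disk_gb": 100}
print("{:.2f}".format(showback(web01)))  # 155.00
```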

It is this combination of solutions that allows customers to build an on-premise, service-oriented architecture that demonstrates advanced virtualization automation. Even if the customer intends to look at 3rd-party Infrastructure as a Service, deploying an on-premise virtualization automation environment will inform IT teams and ensure better decisions are made. Whether it is to understand what new features are needed in existing virtualization environments or to better understand the options, deploying these products on premise is a good idea.

A viewpoint ensures you are developing a position from which to form a strategy. The deployment of vCloud Director, Service Manager and Chargeback Manager internally can deliver a better understanding of the change required to enable your journey to the cloud. It also ensures you have a tangible reference for the combination of services required to get there.

- Posted using BlogPress from my iPad