Friday, December 20, 2013

How do I get to the Cloud?

Most IT departments have seen little growth in personnel or budget over the last several years. User expectations, however, have increased dramatically because of the consumerization of IT. The new standard in the minds of the users we serve is the ‘app store’ approach to everything. In addition, most IT teams are still aligned in traditional technology silos and struggle to adopt “as a service” models.

These problems are compounded by where most IT teams spend their time. An estimated 60 to 70% of it goes to lights-on activities: straight operations or firefighting. The remainder is spent on new project initiatives, leaving little time for proactive strategies to adopt new service models in End User Computing (EUC) or Private or Federated Cloud. How, then, can we break this cycle and move the blocks forward to take advantage of Cloud?

The first step is to carve out time that does not exist. The only way to do this is to spend less time in operations by undertaking the following:

1. Categorize, Automate and Orchestrate

You have two choices on where to begin: the datacenter or end user services. The decision is usually dependent on what provides greater business value to the organization at the time rather than on what is the right starting point.

2. Standardize a percentage of IT as a catalog

As much as we might want to believe otherwise, not everything is unique; IT departments treat everything as a custom build when much of it is not. Even if this starts as a very simple catalog, such as office applications (EUC) or a basic two-tier VM configuration (Private Cloud), the important thing is to work through the process.

3. Turn over delivery

Once you have completed steps 1 and 2, enable a suitable business group to self-provision. In the initial adoption this may not be end users; it may be business or application analysts. “Get it done,” as this is the point at which you reduce time spent in operations. Prepare yourself, as this is a learning process and may require some training for the transition team.
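Steps 2 and 3 can be made concrete with a minimal sketch: a small standardized catalog, and a self-provisioning call that only accepts catalog items. The item names and sizes below are illustrative, not from any real product.

```python
# Step 2: a small standardized catalog (illustrative entries only).
CATALOG = {
    "office-desktop": {"cpus": 2, "ram_gb": 4, "tier": "EUC"},
    "two-tier-app":   {"cpus": 4, "ram_gb": 8, "tier": "Private Cloud"},
}

# Step 3: self-service provisioning limited to catalog items,
# so requests stop being one-off custom builds.
def provision(item_name, requester):
    if item_name not in CATALOG:
        raise ValueError(f"'{item_name}' is not in the catalog")
    spec = dict(CATALOG[item_name])
    spec["owner"] = requester
    return spec  # a real system would hand this off to an orchestrator
```

In early adoption the requester might be an application analyst rather than an end user, exactly as described above; the point is that anything off-catalog becomes a catalog-change request instead of an operations ticket.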

Once you have achieved the first three steps you can move to higher value activities in steps 4 and 5.

4. Adopt a Virtual Datacenter

The vast majority of IT shops are virtualizing operating systems; turning your entire datacenter into software, however, is a far bigger enabler. It allows you to thin provision a host of physical devices, including networking and storage.

5. Apply Security and Policy to the Virtual Datacenter

Ensure that the virtual datacenter is more secure than its physical counterpart; software can be encrypted, encoded and policy driven. Apply these capabilities to your virtual datacenter.

After turning your datacenter into software you are ready to drive efficiencies by implementing steps 6 to 8. Most people would argue that the evaluation should happen first, and this is true for a business case. In the execution stage, the point of evaluating after steps 1 to 5 is that your organization has matured its understanding of the process and technology and can now take better advantage of the many options. In addition, in a large enterprise it is unlikely that an environment will be all or nothing, i.e. all Public or all Private Cloud.

6. Evaluate where you are doing your computing

Once your Virtual Datacenter is secure and encrypted it can be migrated anywhere, so take advantage of lower-cost opportunities to run your IT services.

7. Evaluate the efficiency of what you are delivering

We have come a long way, but the journey is not complete; evaluate whether it is more efficient to deliver each service internally or through a 3rd party. It is important that at this point it remains an evaluation; there is one more critical step to complete before farming out any in-house services.

8. Brand

You are the department that has serviced your users effectively for years; your intellectual property is your knowledge and understanding of the users and the business. Users should continue to come to you for all requirements, even if some are delivered through a 3rd party. Ensure that users do not see whitespace between the internal IT team and the service providers you select. Your role in this new model is to be the one-stop shop from a provider perspective and to maintain quality control no matter how services are delivered.
Notice that we did not mention Cloud or Federation or any of the terms that are loaded with promise and expectations; we have simply described a process that enables Cloud adoption.


Wednesday, June 26, 2013

Microsoft Licenses and the Cloud

Cloud represents a large opportunity for customers and service providers alike. The details of how Microsoft licensing works between customers and Cloud providers can be a little confusing, however. The confusion increases as you look at OS and enterprise software licensing such as Exchange, SharePoint and SQL (please refer to the License Mobility Overview document from Microsoft for links on how to verify additional Microsoft products).

While you cannot transfer Windows OS licenses to your Cloud provider, it is possible with some Microsoft enterprise applications. To be eligible to transfer licenses between you and your Cloud provider, you need a Microsoft Volume Licensing (VL) agreement as well as current maintenance, or Software Assurance (SA). Enrollment and active maintenance give you access to the Microsoft License Mobility program.

To make use of the Microsoft License Mobility program your provider should be an authorized License Mobility partner (note: it is possible for a Cloud provider to offer Windows licensing under the Services Provider License Agreement, “SPLA”, without being an authorized Mobility Partner, so be sure to review this). Once you determine your eligible Mobility licenses and select a License Mobility Partner, you are required to submit a License Verification form to Microsoft.

After the process is complete you can assign the licenses for use in your provider's datacenter. There are a few provisos to be aware of: the minimum time that you can assign a license is 90 days, and if you switch providers you need to resend the License Verification form to Microsoft.
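The two provisos lend themselves to a quick sketch. The rules below (the 90-day minimum assignment period and the re-verification requirement) come from the process described above; the record structure itself is purely illustrative.

```python
from datetime import date, timedelta

MIN_ASSIGNMENT_DAYS = 90  # minimum period a license stays assigned

def can_reassign(assigned_on, today):
    """True once the 90-day minimum assignment period has elapsed."""
    return today >= assigned_on + timedelta(days=MIN_ASSIGNMENT_DAYS)

def switch_provider(record, new_provider, today):
    """Moving to a new provider also means resending the License
    Verification form to Microsoft."""
    if not can_reassign(record["assigned_on"], today):
        raise ValueError("90-day minimum assignment period not met")
    record["provider"] = new_provider
    record["assigned_on"] = today
    record["verification_form_sent"] = False  # must be resubmitted
    return record

rec = {"provider": "ProviderA", "assigned_on": date(2013, 1, 1)}
switch_provider(rec, "ProviderB", date(2013, 6, 1))
```

A good Mobility Partner will track these dates for you, but it is worth knowing the constraints before planning a migration.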

To understand the process let's look at an example. Customer A is running Exchange within a private cloud environment. They wish to migrate the Exchange VM to their trusted, authorized License Mobility Partner.

In this case the OS license would be provided through the SPLA license as part of the Infrastructure as a Service (IaaS) agreement with the Cloud provider. Customer A would ‘transfer’ the Exchange license to the Cloud provider's datacenter while switching off the Exchange VM running in their private Cloud. Provided the Exchange VM remains dedicated to Customer A, the VM can run on a shared virtualization platform within the Cloud provider's datacenter.

The list of authorized Mobility Partners and eligible enterprise applications are available from Microsoft’s website.  While this seems complex, a good Mobility Partner and provider should be able to step you through the process. 

Thursday, May 23, 2013

VMware NSX: Transform Your Network

The merging of vCloud Networking and Security 5.1 (vCNS) and Nicira has been branded VMware NSX (Network Virtualization Platform). VMware has invested heavily in this solution and in the internal business unit developing networking and security solutions.
The NSX platform attempts to address the problems with networking and virtualization.  Cloud is all about speed.  Virtualizing networking is about breaking down physical boundaries. 
VMware transformed the provisioning time for an OS instance by decoupling hardware and software within a virtual machine. Unfortunately the VM's requirement for additional services (network, firewall and security) takes the minutes to deploy a VM back up to days.
VMware's NSX initiative is designed to decouple hardware and software within the network stack. A production VM needs IPs, VLANs, firewall rules, ACLs, QoS and so on. The time to deploy these network services kills the flexibility of virtualization. It also limits mobility, as there is a dependency on physical networking and switching hardware, and it has an impact on DR because these services do not travel with the VM.
To address these problems VMware has built a network hypervisor. The server VM believes it is talking to physical network gear even though the network routing and switching is virtualized. This is likely to have an impact on the networking market, as the average markup on network gear is 70% vs. 20% in the server hardware market.
Virtualized networking is available today in the product line. These virtual wires float within the software, or virtualization, layer. Virtual wires can speak to physical servers as they can be mapped to a physical environment through the VXLAN standard.
VMware has extended the virtual network to partners through VXLAN to integrate physical appliances like F5 Load Balancers. 
VMware NSX enables stretching networks to other Cloud environments. In addition, NSX enables:
  1. Onboarding customers faster
  2. Offering new, automated network services to customers
  3. Reducing costs by moving away from traditional physical networking
  4. Delivering flexibility through elastic networking that scales out as needed
Integrating NSX does not require a rip and replace of your existing network hardware vendor. You only need IP connectivity and a high-performance network fabric to integrate NSX.
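The decoupling argument can be illustrated by expressing those per-VM network services as data. This is a conceptual sketch only, not the NSX API: once the definition is software, it can be copied to a recovery site along with the VM instead of being rebuilt by hand on physical gear.

```python
import copy
from dataclasses import dataclass, field

@dataclass
class NetworkServices:
    """Everything a production VM needs from the network, as data."""
    ip: str
    vlan: int
    firewall_rules: list = field(default_factory=list)
    qos_mbps: int = 100

web = NetworkServices(ip="10.0.1.10", vlan=120,
                      firewall_rules=["allow tcp/443 from any"])

# A DR copy is just a copy of the definition; no physical switch or
# firewall has to be reconfigured at the recovery site.
dr_copy = copy.deepcopy(web)
```

The contrast with the status quo is the point: today each of those attributes is a ticket against a physical device, which is where the days of delay come from.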

Sunday, March 17, 2013


When you are building converged infrastructure the primary considerations are Power, Performance and Consolidation.

When it comes to virtual infrastructure the balance between costs and performance is largely based on the number of VMs/host and the utilization of CPU, Memory and Disk and Network IO on that host.  Ideally you want to use less IT architecture to achieve higher consolidation ratios.

Fusion-IO enables higher consolidation ratios and boasts 4,000 clients and climbing. Fusion-IO technology is deployed in 65% of the Fortune 100. In architectures known to thrive on memory, such as the SAP HANA in-memory database platform, Fusion-IO has become a de facto standard.

Arguably the largest Cloud infrastructure is Facebook, which has deployed 40,000 Fusion-IO cards.

Fusion-IO cards and software solutions are based on flash technology.  To get a better understanding of the Fusion-IO approach have a look at the following video.

CPU speeds have increased and disk drive capacity is enormous as well. Disk speeds, however, have not kept up, and every year the gap gets worse. This is where Fusion-IO technology can assist: it can increase cache performance or provide a higher tier of performance storage based on flash.

Many traditional hardware providers are now using Fusion-IO cards in their products; HP, IBM, DELL, CISCO and NetApp.  Ironically they are all competing using the same OEM’d products from Fusion-IO.

Traditionally, Fusion-IO software and cards have been deployed to add performance to database implementations, bringing faster queries, faster data load times, faster reporting, better SAN utilization and reduced disk queues.

Within a Cloud environment, Fusion-IO provides a multiplier effect for your virtual infrastructure.  Fusion-IO cards and ioTurbine software can double the density of your VM hosts when properly deployed.

Fusion-IO is a memory tier; although DRAM is faster, it is more expensive than deploying Fusion-IO cards. There is also typically an upper limit to server memory, which is vastly exceeded using Fusion-IO cards.

When you install a Fusion-IO card into a VMware ESXi host by default it appears as a local datastore.  When you install ioTurbine software it changes from a datastore to cache.  You can enable select VMs to take advantage of this caching tier.

When a write is requested it goes directly to the SAN to ensure complete persistence. When the end storage device sends an acknowledgement, the ioTurbine software keeps an asynchronous copy on the Fusion-IO card. The same is done for reads. As reads and writes are cached to the Fusion-IO card, future requests can be serviced from cache, dramatically improving performance while reducing the load on the existing SAN.
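The read/write path just described is essentially a write-through cache, which can be sketched in a few lines. This illustrates the behavior only, not the ioTurbine implementation; a dict stands in for both the SAN and the flash card.

```python
class WriteThroughCache:
    def __init__(self, backing_store):
        self.backing = backing_store  # stands in for the SAN
        self.cache = {}               # stands in for the Fusion-IO card
        self.cache_hits = 0

    def write(self, block, data):
        self.backing[block] = data    # persist to the SAN first
        self.cache[block] = data      # then keep a copy in flash

    def read(self, block):
        if block in self.cache:
            self.cache_hits += 1      # fast path: served from flash
            return self.cache[block]
        data = self.backing[block]    # slow path: go to the SAN
        self.cache[block] = data      # warm the cache for next time
        return data

san = {}
cache = WriteThroughCache(san)
cache.write("blk-1", b"payload")
cache.read("blk-1")                   # served from cache, not the SAN
```

Because every write lands on the SAN before it is cached, losing the card never loses data; the card only absorbs read and repeat-access traffic.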

In addition to dedicated Fusion-IO cards within a server, you can combine several Fusion-IO flash cards in a single storage appliance using the Fusion-IO ION product. This is essentially a ‘top of rack’ configuration which pools cards so that they can be used as a high-performance storage array.

There are a few things to keep in mind when considering each approach, although they can be used to complement each other. When installing Fusion-IO cards into your ESXi hosts it is best, though not required, to install them in all hosts in a cluster. To leverage the cards as cache rather than storage you will need the ioTurbine software. The performance benefits also depend on the local cache being warm, so minimizing VM migrations is beneficial.

In the ‘top of rack’ model, the Fusion-IO ION product contains multiple flash cards in a 1U server platform. The ION platform can deliver over a million I/Os per second over the Fibre Channel, InfiniBand and iSCSI standard storage protocols.

Whether it is higher consolidation ratios or dealing with high I/Os, Fusion-IO products are worth evaluating.

Friday, March 8, 2013

Application Federation: iPaaS

As many of you know, I believe that a huge area of growth for enterprise IT customers is proper federation of their on-premises private Cloud environments to public Cloud providers. Properly done, this allows IT departments to take full advantage of a pool, or multiple pools, of public Cloud resources without changing their internal standards or compliance requirements. This is different from simply putting workloads in the Cloud, which is fairly straightforward and closer to outsourcing than federating. With the advent of Software Defined Networking (SDN), traditional barriers are rapidly dissolving, making this a reality in Infrastructure as a Service (IaaS) models.

Another quickly evolving type of federation is application federation. We are at an interesting point at which application development on the Cloud is happening at an incredible pace. The constructs of Cloud-based applications differ from traditional enterprise applications in many respects. One obvious difference is the ability of Cloud-based applications to plug into one another to take advantage of new streams of data and input.

The reality however is that enterprise data is still very tightly coupled to traditional enterprise applications deployed privately within corporations and businesses.  How then do you take advantage of Cloud applications when most of your enterprise data is still deployed in this manner?

This is exactly the problem that integration Platform as a Service (iPaaS) solutions attempt to address. Think of them as super-connectors allowing you to plug traditional enterprise applications into Cloud-based applications or Software as a Service (SaaS) providers. This is federation at the application level.

The term is still relatively new, but the need is very real. It will be some time before applications are fully migrated to pure SaaS models; until then they need to work in a hybrid model, in which data is stored in the enterprise and other processes are federated to and from SaaS providers.
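The hybrid pattern boils down to: read from the enterprise system of record, transform, and hand off to the SaaS side. The sketch below is generic; the record shapes and function names are illustrative and are not Boomi's API.

```python
# System of record stays on premises (illustrative data).
on_prem_orders = [
    {"id": 1, "customer": "Acme",   "total": 120.0},
    {"id": 2, "customer": "Globex", "total": 75.5},
]

def transform(order):
    """Map the enterprise schema to the shape the SaaS side expects."""
    return {"order_id": order["id"], "amount": order["total"]}

def push_to_saas(payload, outbox):
    """Stand-in for a connector call; a real flow would POST over HTTPS."""
    outbox.append(payload)

outbox = []
for order in on_prem_orders:
    push_to_saas(transform(order), outbox)
```

An iPaaS product's value is in providing the connectors, the transformation tooling and the monitoring around exactly this kind of flow, so you do not hand-build it for every application pair.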

Dell recently acquired a major innovator in this space: Boomi. Boomi boasts 1,000+ customers and is considered a leader in this space by industry analysts.

Boomi's Cloud suite is AtomSphere, which allows you to quickly build and distribute workflows using connectors that interact across global geographies. AtomSphere's service engine is distributed across several datacenters around the world. As it is a Cloud service, there is no additional software to install or appliances to deploy.

From a user interface perspective, the AtomSphere dashboard is divided logically into sections: Build, Deploy and Manage. Build is where you will spend most of your time setting up process maps, or workflows, using connectors in an easy-to-use drag-and-drop, Visio-like display.

Once built you can deploy your process map either on premise or in the Cloud using containers referred to as Atoms.  Versioning is supported within the platform, allowing you to quickly roll back in the event that you run into a problem.  To complete the functionality you can monitor through process reporting or setting up real-time listeners to determine if processes are successful and how long they run.

Boomi also provides an overall system health dashboard so you can quickly see the robustness of the Cloud service over time. As Boomi is a true community of developers, the number of supported applications has expanded dramatically and continues to expand, to the benefit of everyone leveraging the AtomSphere platform.

Although federation continues to happen at the Infrastructure as a Service layer, it is important to understand that you can federate at the application level as well. When federating at the application level you will be keenly interested in the capabilities and services provided by iPaaS solutions like Boomi AtomSphere.

Wednesday, February 20, 2013

Adding Active Directory as an Identity Source for Single Sign On

When you first deploy the vCenter Server Appliance (vCSA), Single Sign On (SSO) does not use Active Directory as an Identity Source.  The SSO service and the vSphere Web Client are configured and integrated as part of the deployment of the vCSA but the default Identity Source is the local OS accounts.

You can update the Identity Source to point to your Active Directory for authentication. You will first need to log in to the vSphere Web Client using the vCSA root account, which is an SSO administrator by default. To log in, follow these steps:

Log in to the vCSA using the Web Client and the root or administrative credentials. The vSphere Web Client URL is https://[vCenter Server Appliance]:9443

Navigate to the Administration tab and click “Sign-On and Discovery” and then “Configuration”, as shown in figure 1.01.

figure 1.01

Under the Identity Sources tab on the right click the “+” to add a new Identity Source. 

As shown in figure 1.02, select the Identity source type “Active Directory”

Type a Name i.e.

Provide the URL for your AD server i.e. ldap://

Provide the Distinguished Name (Base DN for users) in the format DC=virtualguru, DC=org

In Authentication Type you can select “Reuse Session”. Reuse Session is supported for Active Directory Identity Sources and essentially takes the credentials used to log on to SSO and passes them to the AD server.

figure 1.02

(Note: although in the example we are using the root of the domain, you should use a group within Active Directory that includes all of your vCenter administrators and then provide the full Distinguished Name (DN) of the group. This reduces the time required to look up the accounts and provides better security by being more specific. If you are unsure of the DN, simply create the group and use an LDAP browser to browse to the group and read the DN, as shown in figure 1.03. A great utility is the Softerra LDAP Browser, which can be downloaded here.)


figure 1.03
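The Base DN format shown above maps mechanically from a DNS domain name, as this small helper illustrates (the domain used is just the example from the text):

```python
def base_dn(domain):
    """Turn a DNS domain into an LDAP Base DN,
    e.g. virtualguru.org -> DC=virtualguru,DC=org"""
    return ",".join(f"DC={part}" for part in domain.split("."))
```

For a group's full DN you still want to read it from an LDAP browser as described in the note, since the OU path within the domain is not derivable from the domain name alone.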

Wednesday, February 6, 2013

Upgrading vCloud 1.5 and Storage Profile Considerations

I have seen some information out there that touches upon what you need to be aware of, but I wanted to investigate a little further. The confusion centers on what happens to existing datastores that are not using Storage Profiles when you upgrade from vCloud 1.5 to 5.1.

To review, Storage Profiles are a feature that is enabled using vCenter and accessible to customers who have vSphere Enterprise Plus licensing.  vSphere Enterprise Plus licensing is included in the vCloud Suite 5.1 bundle.

Storage Profiles allow you to categorize your datastores using user-defined or system-defined attributes. User-defined attributes are created by you, whereas system-defined attributes are provided by the storage vendor through their support of the VMware vStorage APIs for Storage Awareness (VASA).

Upon completion of the vCloud upgrade from 1.5 to 5.1, a default Storage Profile called ‘Any’ is created, as shown in figure 1.01.


figure 1.01

‘Any’ includes all storage pools that the vSphere environment has access to. In my case it picked up two local storage pools as well, as shown in figure 1.02.


figure 1.02

The default ‘Any’ Storage Profile is retroactively applied to all my existing Organizational virtual datacenters (vDCs), as shown in figure 1.03. If I were not aware of this behavior I might end up with VMs being created on a mix of local and shared storage pools. Clearly this is something we will want to clean up.


figure 1.03

(Note: I personally believe that in certain Dev situations local storage may be an acceptable storage pool. If you were considering this, however, you would categorize local storage pools in separate storage profiles. The problem here is really the default mixing of both shared and local storage.)

Removing the Any Storage Profiles

In my testing I had not enabled Storage Profiles, so to remove ‘Any’ I will need to enable Storage Profiles in vSphere, create Storage Profiles without local datastores, and then replace the ‘Any’ Storage Profile with my newly created Storage Profiles.

This is easy in a small environment but requires some planning in a large one (Providers take note).  To Enable and create Storage Profiles follow these steps:

Create User-defined Storage Capabilities

  • From vSphere browse to Home, Inventory, Datastores and Datastore Clusters
  • Right-click the datastore you wish to use and select Assign User-Defined Storage Capability
  • Click New and set a Name and Description
  • Repeat for all datastores that should be included

Create Storage Profiles

  • From vSphere browse to Home and under Management select VM Storage Profiles
  • Select Enable VM Storage Profile and then the Enable button (This is hard to see) and then click Close
  • Click Create VM Storage Profile
  • Name and assign the Storage Capabilities from the user-defined list you created

From vCloud Director, refresh and adjust the Storage Profiles

  • From the Home page select the Manage & Monitor tab and select vCenters under the vSphere Resources Heading
  • Select your vCenter on the right, right-click and select Refresh Storage Profiles

Adjust your Provider vDCs

  • From the Manage & Monitor page, under Provider VDCs, double-click your provider VDC.
  • Under the Storage Providers Tab click the Plus to add your new storage profile
  • Select the ‘Any’ Storage Profile and right-click and select Disable
  • Select the ‘Any’ Storage Profile and right-click and Remove as shown in figure 1.04


figure 1.04
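The clean-up above amounts to partitioning datastores so that local and shared pools never share a profile. A sketch of that partitioning (datastore names and the local flag are illustrative; in practice you do this through the vSphere client as described):

```python
datastores = [
    {"name": "esx01-local", "local": True},
    {"name": "esx02-local", "local": True},
    {"name": "san-gold",    "local": False},
    {"name": "san-silver",  "local": False},
]

# Shared storage gets its own profile; local storage, if used at all
# (e.g. for Dev), goes in a separate one, never mixed like 'Any'.
profiles = {
    "Shared":    [d["name"] for d in datastores if not d["local"]],
    "Local-Dev": [d["name"] for d in datastores if d["local"]],
}
```

In a large provider environment it is worth generating this mapping from an inventory export before touching the UI, so nothing local slips into a shared profile.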

Monday, February 4, 2013

Just what is Single Sign On? vSphere 5.1


Single Sign On, or SSO, was introduced as a requirement for deploying vCenter 5.1. SSO is based on identity management technology built by RSA but designed for VMware environments. It provides a higher-level authentication mechanism than Active Directory to enable federation. Federation in this sense allows you to authenticate once but access many vSphere environments, such as multiple vCenter servers that may not share common Active Directories (ADs).

SSO treats AD as an identity source; it also supports OpenLDAP, local accounts on the vCenter server and accounts created within SSO.  When you login to vCenter 5.1 you are passing the authentication to SSO which forwards it to an identity source for authentication.  Once authenticated you use a token vs. a username and password.  This token allows you to access multiple environments without re-authenticating.
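The token flow just described can be sketched in a few lines. This is purely illustrative of the authenticate-once idea, not the RSA-based protocol SSO actually uses.

```python
import secrets

IDENTITY_SOURCE = {"alice": "s3cret"}  # stands in for AD or OpenLDAP
issued_tokens = set()

def sso_login(user, password):
    """Authenticate once against the identity source, get a token."""
    if IDENTITY_SOURCE.get(user) != password:
        raise PermissionError("authentication failed")
    token = secrets.token_hex(16)
    issued_tokens.add(token)
    return token

def vcenter_accepts(token):
    """Any vCenter trusting this SSO accepts the token; no password
    is re-sent when moving between environments."""
    return token in issued_tokens

token = sso_login("alice", "s3cret")
```

The key property is that the password is presented once, to SSO, and every participating vCenter only ever sees the token.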

SSO requires its own database. Multiple SSO servers can be deployed and connected to the same database. When multiple SSO servers are deployed, one is designated primary while the rest are slaves. To make this configuration highly available you must add a load balancing (LB) solution in front of the SSO servers. The final step is to re-point the vCenter server to the new load-balanced IP, as covered in this VMware Knowledge Base article.

The easiest way to install SSO is to use the vCenter Server Appliance (vCSA), as it is integrated. For additional details on installing the vCSA please see my post. Once installed, the default SSO account is admin@System-Domain. The password for this account is set during installation on a Windows server, or set randomly when you configure SSO on the vCSA.

Install the VMware vSphere Web Client
  • Launch autorun.exe from the vCenter Server media
  • Select VMware vSphere Web Client and click Install.
  • Follow the prompts to choose the language and agree to the end user patent and license
  • Accept the default port settings.
  • Enter the information to register the vSphere Web Client with vCenter Single Sign On.

The default administrator user name is admin@System-Domain (Note: You will need to set this password first if you used the vCSA) and the Lookup Service URL is:


  • Click Install.
Reset the random password for the admin@System-Domain account (vCSA Only)

Note: As mentioned, if you installed the integrated SSO on the vCSA then you will need to login using the root account and set the password for the admin@System-Domain account first.

  • Log in to the vCSA using the Web Client and the root or administrative credentials. The vSphere Web Client is included as part of the vCSA, so the URL is https://[vCenter Server Appliance]:9443
  • Navigate to the Administration tab and click “SSO Users and Groups”
  • Right Click on the admin account and select Edit User. Specify the new password and click OK.

You should add your Active Directory vCenter Administrators group to the SSO Administrators group as part of the configuration, to enable their primary login to administer SSO (note: this assumes you have already added your AD as an Identity Source; if you have not, have a look at this post). To do this you will need to install the VMware vSphere Client first.

Adding your vCenter Administrators to the SSO Administrators Group
  • Log in to your SSO server using the VMware vSphere Web Client and the admin@System-Domain account.
  • Browse to Administration and under Access select SSO Users and Groups
  • Select Administrators under the Principal Name column


  • Select the Add Principals button from the menu
  • Under Identity source change from System-Domain to your Active Directory


  • Search for the right group, click Add and then OK