Tuesday, January 26, 2010

Thin Client or Desktop Appliances

An often under-considered component in a VDI deployment project is the thin client device or “desktop appliance”. Reducing the total cost of ownership in a virtual desktop environment often depends on removing the thick client device and replacing it with a desktop appliance. While the operational requirements of a desktop appliance are reduced, they still need to be considered and planned for as part of the deployment strategy. Desktop appliances come with an integrated operating system that may be Windows or Linux based. In addition, they may have image management solutions that need to be deployed, although for proof-of-concept or limited-scale environments imaging can usually be done by unlocking the device and transferring the image via USB.

One of the common problems with desktop appliances is that the integrated version of the desktop agent they ship with is typically not current enough to provide all the features of the VDI solution. In addition, the desktop agent supplied by companies such as VMware or Citrix may have additional requirements, such as Windows compatibility, that need to be considered before selecting a specific embedded OS for the desktop appliance. Desktop agents may not have feature parity between Linux and Windows, or may limit support to Windows derivatives only. Those desktop appliances that do provide feature parity and are Linux based often do so by using vendor-developed software. As these agents and features are not directly supplied by the VDI software vendor, they should be thoroughly tested. One thing that helps when selecting a desktop appliance is whether it has already been certified as working with the VDI vendor (i.e. XenDesktop or VMware View certified). Enough time should be allowed in the deployment plan to understand how to manage the desktop appliance and how to apply upgrades to the embedded image. It is useful to have surplus units available for ongoing operational support such as image testing or agent upgrades.

An interesting alternative developed for VDI is the no-software desktop appliance or thin client. These devices reduce the management overhead by running only firmware on the desktop appliance and moving the management to a centralized administration console (e.g. the PanoCube, although others are appearing on the market). While they reduce operational overhead, these devices tend to be very vendor biased and restrict the customer to certain platforms only. The other potential drawback is the possible physical replacement of the device for any major revision to the product line or feature set. These devices are designed for VDI only, so if the environment requires a blend of server based computing and VDI, a standard desktop appliance may be better suited. If matched to the right requirement, these devices can substantially reduce the burden of management, so they are well worth considering.

Wednesday, January 20, 2010

Is VDI the right way to go?

I am going to combine a couple of thoughts here and add a little blue-sky thinking.  One thing I have noticed from dealing with various organizations at different levels of virtual desktop maturity is that there still seem to be a few barriers to 100% adoption across the entire organization.  I am generalizing, as things are not the same for every customer.  The real TCO for VDI is not substantially reduced until the PCs are replaced by thin clients (or desktop appliances), and that tends to be the sticking point for some.  Sometimes, as much as IT would love to move users to a lower-support-cost desktop alternative, the users or the business are reluctant to go.  This can be for various reasons, such as protectionism from the desktop support teams, people's general reluctance to change, or a misunderstanding of the technology being deployed, to cite a few.  In situations like this VDI tends to be used for second-desktop requirements and remote access.

VDI provides the opportunity to manage the corporate image while at the same time providing very flexible options for delivering it to the user locally or remotely.  Although it is not exactly a consolidated environment (I am setting aside technologies like View Composer, Provisioning Server and storage virtual cloning for a moment), it is a centralized, distributed environment of desktops.  I have had the opportunity to look at a slightly different option recently and wanted to share some thoughts.  I have been reviewing Microsoft's DirectAccess technology, which is a new feature of Windows 7 and Windows Server 2008 R2.  It goes along with my own thinking that technology should not change anything about the way the user works or plays; it should just do its job seamlessly.

Now this approach from Microsoft is designed for the IPv6 world, although it will run over IPv4.  The fundamental opportunity that IPv6 promises is that everything is globally addressable.  What this means is that potentially all things have unique addresses, unlike today where we use NAT to extend the lifespan of IPv4 networks.  Traditionally we use VPNs to connect devices remotely, which often adds overhead and delays to the login process.  Additionally, they are often dependent on user interaction to start them up.  DirectAccess automatically establishes a bi-directional connection from remotely located client computers to the corporate network using IPsec and IPv6.  It uses certificates to establish a tunnel to the DirectAccess server, where the traffic can be decrypted and forwarded to your internal network.  If you have deployed IPv6 and Windows Server 2008 R2 internally, the connection can be securely transported to all your application servers.  Access control is used to allow or restrict access.  The promise of this technology is that it allows you to extend your corporate network without changing the user experience or sacrificing how the desktop is managed.  It also makes your corporate network perimeter much more dynamic.  Essentially it allows you to overlay your corporate network in a secure fashion over private and public networks.

Now make no mistake, this solution from Microsoft does presume that the end user device is a laptop and that it has been deployed and managed by IT services.  The reason I thought about the relationship between VDI and Windows DirectAccess is that customers often deploy VDI for remote access to avoid a full VPN solution.  With DirectAccess and its Windows Server 2008 R2 and Windows 7 integration, Microsoft has provided another option that might be a good fit in certain situations.

Monday, January 18, 2010

Application Encapsulation or Application Virtualization

One of the problems in distributed desktop environments is application lifecycle management: the testing, deployment, upgrading and removal of applications that are no longer needed. In addition, installing applications into a standard desktop image increases the number of images that need to be maintained. With every unique application workload a separate image is developed so that different users or business groups have the appropriate applications. This leads to desktops being segregated based on the types of applications; e.g. finance uses a finance image, marketing uses a marketing image, and so on. While manageable from a desktop perspective, it can lead to operational overhead in building, managing and maintaining the number of standard images.

In addition, as application incompatibilities are discovered, desktop images become locked to a specific build with static application and operating system versions.  In a terminal server environment this caused servers to be siloed based on application compatibility; on desktops it leads to a long refresh cycle. Application encapsulation, or application virtualization, was originally developed to solve these problems in terminal server environments; it was later ported to the desktop space to deal with the same issues.

Application encapsulation is a form of virtualization that isolates the application in a separate set of files that have read access to the underlying operating system but only limited or redirected writes. Citrix XenApp Application Streaming leverages the Microsoft CAB file format (Microsoft's native compressed archive format) for its encapsulated packages. VMware acquired a company called Thinstall (now VMware ThinApp), which encapsulates the application into a single exe or msi. Once applications are repackaged for application virtualization they can be removed from the desktop image and run from a file share as an exe (VMware) or streamed to the desktop using RTSP (Real Time Streaming Protocol) to run from a cached location (Citrix). By abstracting the applications from the images, the number of images that need to be maintained is reduced. In addition, depending on the software, the applications can be delivered to users based on file or AD (Active Directory) permissions. The big benefit of implementing application encapsulation is that applications can be tied to users vs. the more traditional approach of installing them into a desktop image. It is common for organizations to over-license software by installing it on every desktop, instead of just for the required users, to simplify licensing compliance. Abstracting the applications from the desktop through virtualization allows the image to be truly universal, as a single image can be applied to all users.

Unless you have the same application workload for every business unit, you should consider application encapsulation or “application virtualization” to reduce the operational overhead of managing applications in a VDI environment. Encapsulation eliminates conflicts between applications and reduces the effort of deploying new ones. Because the applications are pre-packaged, the application configurations are centrally managed, lowering application support costs. While these technologies are available without desktop virtualization, they are more problematic to implement, as it is difficult to maintain a consistent desktop OS baseline in a physical environment even if user changes are restricted. Because of the consistent representation of physical hardware within a virtual machine, a standard desktop baseline is much easier to enforce in a VDI environment.

VDI presents the opportunity to effectively reduce the administrative burden of applications through the integration of application encapsulation. These solutions have been bundled by the vendors in a way that allows customers to easily incorporate this technology.  Keep in mind that when managing a VDI project you should allow ample time for the testing and integration of application virtualization technology.  The heavy lifting in deploying application virtualization is the repackaging of applications. 



Friday, January 15, 2010

Citrix NetScaler VPX; AGEE

Verifying the Domain in order to allow access to resources

(Note: although I define some terms, this process assumes you have prior experience with the NetScaler VPX or AGEE; look for additional posts that cover basic configuration of the NetScaler VPX.) One of the common requests I get when setting up a Citrix Access Gateway is to enable a pre-scan to determine if the client computer is a corporate asset. For example, you may want to provide generic XenApp access to everyone; however, if the user is logging in from a corporate laptop, provide them full SSL VPN access.

This tip actually came from working with Citrix’s internal support services. To ensure credit is given where due; I have found Citrix’s NetScaler/AGEE support people excellent to deal with.

There are other ways covered in Citrix's KB, but I have found that it can be difficult to find a simple and straightforward approach that works consistently. In order to properly configure this you will need to understand a few related terms and definitions:


A Policy is a series of rules that must be met in order to provide a level of access to internal resources. The rules are built using conditions and expressions that can be applied at different stages of the client connection process (e.g. pre-authentication, session, etc.).


A Profile is used to define a set of configurations that will be turned on or off during the user's session. Think of the policies as the requirements you must meet and the profile as the session settings.


Resources define the internal networks, servers, applications etc. that the user is allowed access to. Resources can be defined generically by subnet or very specifically down to the IP address and port.


A VIP is a virtual IP that has an associated web site. When you create a VIP you are creating a new IP and web site in order to allow user access. To provide granular control over access, you bind a policy and configure the profile and session settings. In addition, you associate a series of resources that are available through the VIP/website provided the user successfully meets the conditions defined in the policy.
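As a rough sketch of how these four pieces fit together on the NetScaler command line (all names and addresses below are placeholders, and exact parameters vary by firmware release, so treat this as illustrative rather than copy-and-paste):

```
add vpn vserver vip_remote SSL 203.0.113.10 443
add vpn sessionAction prof_fullvpn -transparentInterception ON -defaultAuthorizationAction ALLOW
add vpn sessionPolicy pol_corporate "ns_true" prof_fullvpn
bind vpn vserver vip_remote -policy pol_corporate
```

Here the vserver is the VIP, the sessionAction is the profile, the sessionPolicy ties a rule to that profile, and the bind makes the whole arrangement available through the VIP.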

In order to filter based on whether the user's PC is a member of the domain, you will configure a VIP and create a policy. The policy will use a 'Named' expression (a predefined expression statement), which we will build to scan the client's registry. The scan will determine if the PC is a member of your corporate domain before login. If the scan is successful then the user can receive full VPN access.

Using PuTTY, SSH to your NetScaler VPX and log in as nsroot with the password.


Paste the following command:

add policy expression Corporate_Asset q/CLIENT.REG('HKEY_LOCAL_MACHINE\\\\SOFTWARE\\\\Microsoft\\\\Windows NT\\\\CurrentVersion\\\\Winlogon_DefaultDomainName').VALUE == [YOUR DOMAIN NAME]/

Log in to the NetScaler VPX GUI and under Configuration create a new Session Policy. Under the General drop-down list you will have a new named expression called Corporate_Asset.


When you add this Named Expression to your policy, the scan will look at the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\DefaultDomainName value to ensure the desktop matches the domain specified in the expression. If the user meets the requirements then the user will have access to the resources that have been defined.

Monday, January 11, 2010

VDI: User Data Redirection

In order to take full advantage of the advanced image management capabilities in a virtual desktop environment, you have to homogenize the image by ensuring user data is redirected rather than written to the desktop image. Within a Windows environment, user configuration information is typically stored in local or roaming profiles. In order to ensure this information is maintained across multiple sessions on different desktops, the profile is typically stored centrally and cached locally at login (roaming). When the user logs off, any changes are synchronized to the centrally stored profile. This technology has been around for many years and has been used in both desktop and terminal server environments. Profiles can be configured as read-only (mandatory), read-write (normal), or a mixture of both (flex). Flex profiles are based on a mandatory profile, but user changes are written to a separate location such as a user directory. The flex profile merges the read-only profile and the user customizations to provide the speed of a mandatory profile while still allowing user changes.

A number of things must work in harmony to ensure a user profile loads and unloads properly: the profile directory must be available, the user must have the appropriate permissions, adequate space must be available on the login device, and all this must happen within a reasonable window of time so as not to affect the user. The same mixture of technology must also work when the user logs off, to ensure that any changes are properly captured and the profile unloads cleanly. If a user is logged into two separate environments, the profile that is unloaded last will overwrite any prior updates. Given the number of components that must interoperate, it is quite common to experience operational challenges when introducing roaming profiles.
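The last-writer-wins problem, and the merge behaviour that flex-style profiles add, can be illustrated with a short Python sketch. This is purely conceptual: profiles are modeled as plain dictionaries, not actual registry hives or profile files.

```python
# Conceptual sketch: why roaming profiles lose changes, and how a
# flex-style merge avoids it. Profiles are modeled as dictionaries.

def roaming_logoff(central, session):
    """Roaming behaviour: the session that logs off last simply
    overwrites the central copy, discarding earlier updates."""
    return dict(session)

def flex_logoff(mandatory, user_deltas, session):
    """Flex-style behaviour: only the settings the user changed in
    this session are merged into the separate user delta store."""
    merged = dict(user_deltas)
    for key, value in session.items():
        if mandatory.get(key) != value:  # a user customization
            merged[key] = value
    return merged

mandatory = {"wallpaper": "corporate", "homepage": "intranet"}

# Two concurrent sessions each change a different setting.
session_a = dict(mandatory, wallpaper="beach")
session_b = dict(mandatory, homepage="news")

# Roaming: session B logs off last and wipes out A's change.
central = roaming_logoff(mandatory, session_a)
central = roaming_logoff(central, session_b)
print(central["wallpaper"])  # back to "corporate" - A's change lost

# Flex: both customizations survive in the user delta store.
deltas = flex_logoff(mandatory, {}, session_a)
deltas = flex_logoff(mandatory, deltas, session_b)
print(deltas)
```

Running the sketch shows the roaming case reverting the wallpaper because session B's logoff overwrote session A's change, while the flex-style merge keeps both customizations.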



VMware and Citrix approach the challenge of redirecting user data differently. VMware uses a separate user data disk that is thinly provisioned to a VMDK (Virtual Machine Disk) file. This redirection is accomplished through a local policy that is applied when you install the agent software on the virtual machine. Essentially VMware is using a local profile redirected to, or stored on, a separate disk partition. The benefit of this approach is that you have the performance of a local profile with none of the drawbacks of a roaming profile stored on a file server. The drawback is the flexibility that is lost by not having a centrally stored profile: a roaming profile can be universally applied to both a virtual desktop and a terminal services environment vs. having to manage separate profiles depending on what the user is connecting to (desktop, shared server environment). Citrix provides a profile optimization service that ensures profile changes are merged, not discarded based on which profile or session is exited last; essentially a variation of flex profiles. Citrix integrated this into its product line after acquiring the sepagoProfile software from sepago. The benefit of this approach is that the profile can be universally applied to either a virtual desktop or XenApp environment provided the operating systems are compatible (i.e. Windows XP and Windows 2003, Vista and Windows 2008).

Using IOmeter for testing I/O in physical and then virtual machines

One question that comes up often, as more and more I/O-heavy server workloads are migrated to virtual servers, is: how can I verify I/O performance pre- and post-migration?  While there are a number of methods, I have written this to demonstrate one possible way to run test scenarios on a physical server and then verify them after you have converted it to a virtual server.

IOmeter can be used to both generate and measure I/O load on Disk and Network subsystems.  IOmeter can measure network throughput from one VM to another. This is helpful when you are determining whether to put two VMs on the same vSwitch, on separate vSwitches or add additional uplinks.  In a virtual environment IOmeter can come in handy to benchmark across different SAN environments and networking configurations.  For benchmarking, this section will focus on Disk I/O. For example, IOmeter can be configured to benchmark within a virtual machine to run a disk I/O test while it is sitting on an NFS file system. This test can be saved and then re-run from a different storage system such as FC or iSCSI disk storage. (Note: Additional information on IOmeter is available from iometer.org)

Install IOmeter

IOmeter can be downloaded and installed from iometer.org.  Install it on the physical server you want to measure I/O on.

Open IOmeter and expand the All Managers in the Topology Screen to ensure the server is listed with the default workers


Click the Disk Targets tab and select the drive. An x will appear in the box. (Note: a red slash appears across the drive because there is no logfile; however, it is created automatically after you run the first test.)


Click the Access Specifications and double click the default under Global Access Specifications


Any of these parameters can be changed, such as Reads vs. Writes or randomization; however, you must put something in the Transfer Request Size to generate I/O.


Assign the Default Access Specification by clicking Add


Click the Results Page and Drag the computer icon (VIRTUALGURU in the example) onto the Total I/O per Second raised tab.


The Total I/O per Second should now display the computer name instead of All Managers


Click the Green Flag to start the test and select Save when prompted to save the results file


Ensure that results are now being displayed


By clicking the greater-than sign (arrow) at the end of the Total I/O per Second progress bar, you can display the results graphically


Once completed, click the stop sign on the menu. You can save this configuration using the Save button and re-open it with the Open File option.  To determine relative performance, run the same configuration tests after the server is P2V'd into the virtual environment.  If you have different tiers of storage in your SAN environment, run the tests multiple times from different volumes.  You may have to go back and re-adjust the Access Specifications before you get good comparisons.
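Once you have the total IOPS from each saved run, reducing them to a relative-performance table makes the tiers easy to compare. Here is a minimal Python sketch; the run names and IOPS figures are hypothetical examples, so substitute the totals IOmeter reports for your own tests:

```python
# Sketch: summarize relative I/O performance across test runs.
# The IOPS figures below are hypothetical; replace them with the
# totals reported by IOmeter for each saved test configuration.

def relative_performance(results, baseline="physical"):
    """Express each run's total IOPS as a percentage of the baseline run."""
    base = results[baseline]
    return {name: round(100.0 * iops / base, 1) for name, iops in results.items()}

runs = {
    "physical": 4200.0,   # pre-migration run on the physical server
    "vm-fc": 3950.0,      # post-P2V run from an FC volume
    "vm-iscsi": 3400.0,   # same test re-run from an iSCSI volume
    "vm-nfs": 3100.0,     # same test re-run from an NFS datastore
}

for name, pct in relative_performance(runs).items():
    print(f"{name}: {pct}% of physical")
```

A table like this makes it obvious which storage tier is costing you I/O after the P2V, and whether a cheaper tier is close enough to the baseline for the workload in question.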

Thursday, January 7, 2010

VDI and Advanced Image Management


VDI has come a long way over the last several years toward solving many of the inherent problems of scale.  One important thing to keep in mind is that, as with any new feature designed to overcome a limitation in a product, it often comes with its own set of considerations.  Specifically, I want to talk about how VMware View Composer and Citrix Provisioning Server simplify image management but also impact the potential level of automation you can use when deploying virtual desktops.  For anyone who has been developing desktop images this is not necessarily a new problem, but rather one that still exists and needs to be reviewed when deploying Composer or Provisioning Server.  For simplicity I am going to use the term Advanced Image Management to describe both the linked clones of Composer and the multi-locking of a single image file that Provisioning Server does.  The consideration is the same for both; however, they are obviously very different approaches to managing virtual desktop images.

When deploying virtual desktops from either the virtual desktop template (Composer) or the vDisk image (Provisioning Server), the SID is not changed.  This tends to be a problem when software references the SID to ensure a unique software agent identity on the desktop or within Active Directory.  I have seen McAfee and ZENworks both have trouble, although I am sure there are many agent-based software solutions that would have similar problems.  In some cases the software vendor already provides a solution; for example, with McAfee it is possible to stop services and remove some registry keys so that when the system reboots a unique identification is created.  Typically you can find this information if the software vendor provides a recommended cloning configuration for its agent or software.  In other cases you will implement post-setup scripts to ensure any software that breaks in the advanced image management process is added back before the user logs in.  In most cases your VDI environment will have a combination of advanced-image-management-spawned virtual desktops and traditional virtual desktops that use a full or flat virtual machine hard drive.  It is important in the planning stages not to overestimate the cost savings with respect to storage and the use of advanced image management tools.  A more pragmatic approach is to divide the total number of virtual desktops into traditional (more costly) and advanced image management desktops, and use the percentage of each to determine the real cost savings of deploying this technology in your VDI environment.
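That blended estimate is simple arithmetic. A small Python sketch follows; every per-desktop size in it is a hypothetical planning assumption, not a vendor figure:

```python
# Sketch: blended storage estimate for a mixed VDI deployment.
# All per-desktop sizes are hypothetical planning assumptions.

def blended_storage_gb(total_desktops, pct_advanced,
                       full_image_gb=20.0, delta_gb=2.0, master_gb=20.0):
    """Estimate storage for a pool split between full-clone desktops
    and linked-clone/streamed desktops sharing one master image."""
    advanced = int(total_desktops * pct_advanced)
    traditional = total_desktops - advanced
    return traditional * full_image_gb + master_gb + advanced * delta_gb

naive = 500 * 20.0                         # every desktop as a full image
realistic = blended_storage_gb(500, 0.70)  # 70% use advanced image mgmt
print(naive, realistic)  # 10000.0 3720.0
```

In this example a naive full-image estimate of 10,000 GB drops to 3,720 GB once 70% of the desktops share a master image; it is this percentage-weighted figure, not the best case, that belongs in the cost model.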

Duplicate File Drive Mappings in VMware View

When you are deploying virtual desktops using VMware View, the login script and default behaviour of the Remote Desktop Protocol can lead to duplicate drive mappings that often confuse users.  When a virtual desktop environment is set up for access within the organization, a user logging on from a corporate PC will have already run their login scripts and have their drive mappings on the physical PC.  When they launch their VDI environment, the default behaviour of RDP is to map all drives defined on the physical PC.  This leads to the previously mapped drives showing up on the virtual desktop in addition to the drives mapped by the login script.  The problem becomes: how do you switch off this behaviour so that only the virtual desktop login script runs?

The resolution can be found by adjusting the local Terminal Services policy settings (Windows XP) or the Remote Desktop Session Host settings (Windows 7).  While this can be adjusted using a Group Policy, I have found it easiest to set it in the virtual machine template.  You do have to be aware that policies are applied locally and then through AD, so if the policy is configured in AD it will overwrite the local policy (an unconfigured policy has no effect).  As we typically recommend that virtual desktops have separate OUs in AD, this does not happen often.  The exact process for Windows XP is as follows:

From Start, select Run, type gpedit.msc and click OK.  Open the Computer Configuration, Administrative Templates, Terminal Services, Client/Server data redirection folder.


Enable the Do not allow drive redirection setting and click Apply


For Windows 7 from the start menu type gpedit.msc in the search bar and browse to Computer Configuration, Administrative Templates, Windows Components, Remote Desktop Services, Remote Desktop Session Host, Device and Resource Redirection.


Enable the Do not allow drive redirection policy


Your Windows 7 or Windows XP virtual desktop template will now block the default behaviour of the RDP protocol and ensure the drives are not mapped twice in the user's virtual desktop session.

Wednesday, January 6, 2010

The Importance of Storage Tiers in VDI

I have implemented many VDI solutions, and what is apparent is the importance of a flexible storage solution.  In the early days storage was a consideration mainly from a cost perspective: how do we justify a SAN solution for storing our virtual desktops?  iSCSI became an effective way of mitigating this while the vendors introduced or acquired technology to assist, like VMware View Composer and Citrix's Provisioning Server.  As we assist more of our customers in scaling VDI, we are looking at NAS as a core piece.  NAS has traditionally been used for Windows shares, so it is not necessarily a new piece of the puzzle.  With appliance-based NAS solutions that are loaded with cache, NAS is providing an even cheaper tier for storing virtual desktops.  In addition, as NAS is already used for storing user and group data, it is likely that the customer has incorporated it in the environment already.  Another important use of NAS storage is for redirected user data or profiles.  One thing customers should be aware of is that there is a direct link between redirecting user data and the ability to use some of the advanced image management solutions like Composer or Provisioning Server.  So if redirecting user data is not part of the current physical desktop strategy, it needs to be considered when scaling VDI technology.  Thoroughly reviewing your current storage solution, so that you have incorporated different storage technologies like iSCSI and NAS, will ensure that you have an environment that will scale, but at a reasonable cost.