Wednesday, April 28, 2010

DPM & Hyper-V

Data Protection Manager (DPM) is Microsoft's backup product for the enterprise, an extension of the built-in Windows backup technology. It is based on volume shadow copies, which create a 'point in time' snapshot of a Windows volume. Microsoft provides the ability to take a full copy and then ongoing scheduled snapshots across a server and desktop environment, and with DPM you can schedule and manage this activity centrally.

Shadow copies are dependent on a volume shadow copy (VSS) writer to 'quiesce' read/write activity to the volume and ensure the data is consistent. Microsoft has written additional shadow copy writers for applications such as SQL Server and SharePoint. Because these writers are application aware, they ensure the application's data is crash recoverable. The difference between crash consistent and crash recoverable is whether or not the data in transit is consistent with the data on the volume; if the snapshot is crash recoverable, the data is consistent.

When a snapshot is requested at the Hyper-V host level to take a point-in-time copy of a running virtual machine, the snapshot is crash consistent. When it is initiated by the operating system within the virtual machine using the application-specific shadow copy writer, the data is crash recoverable. This is why agents are typically run both at the Hyper-V host level and within the virtual machines.
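
To make the distinction concrete, here is a minimal sketch (my own illustration, not something DPM does for you) that shells out to the built-in vssadmin tool to list the registered VSS writers and request a shadow copy of a volume. Note that vssadmin create shadow is only available on server editions of Windows, and the volume letter is just an example.

```python
import subprocess

def list_vss_writers() -> str:
    """Show which application-aware VSS writers (SQL, Exchange, Hyper-V, etc.) are registered."""
    result = subprocess.run(["vssadmin", "list", "writers"],
                            capture_output=True, text=True, check=True)
    return result.stdout

def create_shadow_copy(volume: str = "C:") -> str:
    """Request a point-in-time shadow copy of a volume. The registered writers
    quiesce I/O first, which is what makes the snapshot crash recoverable
    rather than merely crash consistent."""
    result = subprocess.run(["vssadmin", "create", "shadow", f"/for={volume}"],
                            capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    print(list_vss_writers())
    print(create_shadow_copy("C:"))
```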

According to Microsoft, the capacity estimates for a single DPM 2010 server are as follows:

  • 100 Servers
  • 1000 Desktops
  • 2000 SQL databases
  • 40 TB Exchange Datastores
  • Managing 80 TB of disk space

Friday, April 23, 2010

Microsoft System Center Desktop Error Monitoring (DEM)

Desktop Error Monitoring (DEM) is enabled through the error reporting built into Windows desktops; the idea is that all the information you could forward to Microsoft (watson.microsoft.com) becomes available to the internal IT team. DEM is available under MDOP, which is provided as part of Software Assurance (SA). It facilitates low-cost monitoring of application crashes, is enabled through Group Policy (GPO), and does not require an agent.

According to Microsoft studies, 90% of users reboot after an application crash rather than calling the help desk. This translates to lost time and possibly lost data, while the underlying problem remains unresolved. DEM allows you to collect the crash data and check it against Microsoft's knowledge base.

The system requirements are a management server, a reporting server and a SQL database. Overall utilization is light, as the purpose of these components is to collect the information rather than query it. Through GPO you redirect error reports from watson.microsoft.com to your internal DEM server.
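
As a rough illustration of what that GPO does under the covers, the sketch below writes the corporate error reporting policy values directly with Python's winreg module. The registry path and value names are my best recollection of the agentless exception monitoring settings and should be treated as assumptions; in practice you deploy the ADM template generated by the DEM configuration wizard rather than setting these by hand.

```python
import winreg

# Assumed policy location for corporate (redirected) Windows Error Reporting.
# Verify these names against the ADM template generated by the DEM wizard.
POLICY_KEY = r"SOFTWARE\Policies\Microsoft\PCHealth\ErrorReporting\DW"

def redirect_error_reports(file_share: str) -> None:
    """Point desktop error reports at an internal DEM file share instead of
    watson.microsoft.com, and suppress the external upload."""
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, POLICY_KEY) as key:
        winreg.SetValueEx(key, "DWFileTreeRoot", 0, winreg.REG_SZ, file_share)
        winreg.SetValueEx(key, "DWNoExternalURL", 0, winreg.REG_DWORD, 1)

if __name__ == "__main__":
    redirect_error_reports(r"\\demserver\ErrorReports")  # hypothetical share
```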

One of the interesting things you can do is turn off the prompt that asks the user to send the data. DEM is built on the same framework as System Center Operations Manager, however it is rights-restricted to 'agentless' desktop error monitoring only. Because it is essentially OpsMgr, you can also alert on crossed thresholds. DEM categorizes all the error messages automatically so you can easily check the version information of the applications and their associated DLLs. You can also create a custom error-message response on the alert, or collect additional information such as dumping information from a file. You can report on the number of application crashes across the organization, then take these batched dumps and send them to Microsoft. Microsoft will query them against its knowledge base and respond with a link if it is a known issue.

Bonus Tip: Do not configure DEM to send full memory dumps from the desktop as there is a significant increase in the amount of data traversing the network.

It’s about the Service, not the Servers

The role of the IT administrator is evolving from proactive support (although some of us have very reactive environments) to service automation. Microsoft's framework to enable IT administrators to implement service models is System Center, Service Manager and Opalis. Opalis is a recently acquired workflow engine that allows you to automate System Center components. The new challenge for IT administrators is to be able to create a "service definition". A service definition includes all tiers of a business application, including the optimal performance or baseline of the business application and the knowledge to remediate problems if they occur.

So what does this mean in the context of System Center? Customizing management packs in Operations Manager, designing workflow logic in Opalis, and defining remediation events in Configuration Manager to react to changes in the environment. This adds significant complexity to the existing IT environment, however it is clearly a strong focus in the software industry as vendors build on existing workflow frameworks (e.g. VMware Lifecycle Manager, Citrix Workflow Studio). Clearly this will require a reorganization of IT teams to bring application knowledge, infrastructure and network expertise, and process understanding closer together to map all the pieces and processes surrounding a business application. Once modeled or mapped, the logic can be defined and the processes that should be automated can be automated.

Having worked with customers in traditional or silo'd IT environments to complete this kind of business application mapping, I can say it is very time consuming, as the information often rests with individuals in the organization rather than in well-documented business processes. Even though this will be challenging, internal IT teams cannot afford not to evolve as software vendors continue to push for this level of automation and Cloud services continue to mature.
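
To make "service definition" a little less abstract, here is a rough sketch of the kind of model I have in mind. The class and field names are entirely my own illustration, not a System Center or Opalis schema.

```python
from dataclasses import dataclass, field

@dataclass
class Tier:
    """One tier of a business application (web, app, database, etc.)."""
    name: str
    servers: list[str]
    baseline: dict[str, float]                             # e.g. {"cpu_pct": 60, "response_ms": 250}
    remediation: list[str] = field(default_factory=list)   # ordered remediation/runbook steps

@dataclass
class ServiceDefinition:
    """All tiers of a business application plus the knowledge to keep it healthy."""
    name: str
    owner: str
    tiers: list[Tier]

    def tiers_breaching(self, metrics: dict[str, dict[str, float]]) -> list[Tier]:
        """Return the tiers whose current metrics exceed their baseline."""
        return [t for t in self.tiers
                if any(metrics.get(t.name, {}).get(k, 0) > v for k, v in t.baseline.items())]

# Hypothetical example: a two-tier order-entry application.
orders = ServiceDefinition(
    name="OrderEntry",
    owner="line-of-business IT",
    tiers=[
        Tier("web", ["web01", "web02"], {"cpu_pct": 70, "response_ms": 300},
             ["recycle app pool", "add a web node"]),
        Tier("db", ["sql01"], {"cpu_pct": 80, "io_latency_ms": 20},
             ["check blocking sessions", "fail over to replica"]),
    ],
)
print([t.name for t in orders.tiers_breaching({"web": {"response_ms": 450}})])  # ['web']
```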

Wednesday, April 21, 2010

Microsoft Management Summit 2010

Notes on the keynote by Brad Anderson, Corporate Vice President, Microsoft Corporation

Windows 7 has been the fastest-selling operating system in history, according to Microsoft. This has led to a renewed focus on providing a comprehensive desktop management strategy to Microsoft's customers.

Configuration Manager 2007 R3 delivers power management, a.k.a. "the greening of IT". Configuration Manager now has visibility into power consumption metrics. Once these are tracked, the configurations can be adjusted and a report generated to show power-conservation gains. Clients are simply enabled so that kilowatt-hours (kWh) can be tracked. Through the management console you can enable power management policies which adjust power behavior on the client side. In addition, Configuration Manager allows you to assign a cost to the power consumption. Once you enforce policies you can report on CO2 savings across the entire organization. Brad makes the point that these reports can be used to save money and to market a green message to customers. The beta of this technology is available today.
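
As a back-of-the-envelope example of the arithmetic behind such a report (the figures below are invented, not numbers quoted at the summit):

```python
# Rough cost / CO2 savings arithmetic behind a power-management report.
# All inputs are example values, not figures from Microsoft.
desktops = 5000
watts_saved_per_desktop = 40          # e.g. sleeping after hours instead of idling
hours_per_year = 6000                 # nights and weekends covered by the policy
cost_per_kwh = 0.10                   # dollars
kg_co2_per_kwh = 0.5                  # grid emission factor, varies by region

kwh_saved = desktops * watts_saved_per_desktop * hours_per_year / 1000
print(f"Energy saved:  {kwh_saved:,.0f} kWh/year")
print(f"Cost savings:  ${kwh_saved * cost_per_kwh:,.0f}/year")
print(f"CO2 avoided:   {kwh_saved * kg_co2_per_kwh / 1000:,.0f} tonnes/year")
```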

Brad makes the point that you do not want to have two separate management solutions for your distributed desktops and centralized VDI solutions. Brad hammers home the point that everything should converge under a single management framework (if I did not know better I'd say this was a thinly veiled criticism of VMware's platform).

Citrix was named by Microsoft as the leader in the VDI space, and Brad mentioned that Microsoft and Citrix meet quarterly to ensure their VDI strategies align. At this point Bill Anderson is brought out to show an integrated view of Configuration Manager and XenApp (oddly not a XenDesktop/Hyper-V/Configuration Manager demonstration). The goal is to increase the level of automation between Configuration Manager and XenApp. During the demo an application is deployed and published from Configuration Manager to the XenApp server environment. The demo then switches to a remote desktop running Dazzle, where the application is launched.

Interestingly, Brad challenges the audience to have lunch with their internal Citrix administrators. The next part of the message focuses on VDI and the partnership with Citrix. Windows Server 2008 R2 SP1 delivers RemoteFX and Dynamic Memory to optimize the platform for VDI. Dynamic Memory is Microsoft's version of memory oversubscription: memory is now allocated as a range rather than a static amount. RemoteFX is the release of the Calista technology that Microsoft acquired, designed to deliver multimedia to remote sessions.

The next demonstration focused on Microsoft Forefront, Microsoft's security framework, which plugs into System Center. The beta allows endpoint protection to be enabled and configured through Configuration Manager. Interestingly, the install of the Forefront agent uninstalls other security products (now I know why the antivirus vendors made such a fuss about Windows 7). Forefront provides complete reporting on how compliant the environment is from a security perspective.

Brad then started talking about Desktop as a Service (DaaS) and how every desktop strategy should incorporate the ability to move desktops to the cloud at some point. Windows Intune was re-announced at the keynote as the go-to-market strategy for System Center Online. It will not provide feature parity with System Center in its initial release (please refer to my Intune post for additional details).

Microsoft's next announcement focused on commoditizing the compliance industry, which is a hefty challenge indeed. Microsoft's strategy is to translate industry compliance rules into a set of policies and templates that can be applied across an IT environment and then reported on. It was at this point we watched a demo in which Service Manager took the PCI compliance standard, which Microsoft had translated into an audit of the systems that had to comply and a series of configurations to ensure compliance. As anyone who has worked in compliance knows, translating compliance requirements into actionable IT tasks that can be applied across an organization is generally a very labor-intensive process. Of course, by applying a set of policies Microsoft is tackling compliance from a systems perspective, but this can be a large piece of the compliance requirement.
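
A trivial sketch of the idea of turning compliance requirements into automated checks and reporting on them; the control names are paraphrased and the check logic and data are invented for illustration.

```python
# Illustrative only: mapping a few paraphrased PCI-style requirements to
# automated configuration checks and reporting the result per system.
systems = [
    {"name": "web01", "av_installed": True,  "password_max_age_days": 42,  "firewall_on": True},
    {"name": "db01",  "av_installed": True,  "password_max_age_days": 120, "firewall_on": True},
    {"name": "pos07", "av_installed": False, "password_max_age_days": 30,  "firewall_on": False},
]

controls = {
    "Firewall enabled":              lambda s: s["firewall_on"],
    "Anti-virus deployed":           lambda s: s["av_installed"],
    "Passwords rotated <= 90 days":  lambda s: s["password_max_age_days"] <= 90,
}

for control, check in controls.items():
    failing = [s["name"] for s in systems if not check(s)]
    status = "COMPLIANT" if not failing else f"NON-COMPLIANT: {', '.join(failing)}"
    print(f"{control:30} {status}")
```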

Microsoft is providing betas of many of these products now, but general release is slated for 2011.

MED-V; Enabling OS Migrations?

I have always wondered about MED-V and the use cases for it. It has always seemed to be a product that missed its window in time. In the early years of virtualization deployments, VMware Workstation was used to contain legacy Windows OSes to enable desktop migrations; I seem to recall an old case study where Merrill Lynch used this strategy successfully. Microsoft positions MED-V as a key tool for migrating from Windows XP to Windows 7.

MED-V is a management layer that sits on top of a distributed Virtual PC environment. It is very reminiscent of the original VMware ACE architecture, and to be quite frank that product struggled to find opportunities as well. Why not just use App-V, XP Mode in Windows 7, Terminal Services or VDI? Microsoft's argument is that if you have a number of legacy applications that are not OS compatible and you have a large number of desktops, MED-V is a good choice. This makes sense if the desktops have a reasonable amount of CPU and memory, but it still seems to be a technology where the stars have to completely align for it to prove itself.

MED-V can apply policies to the distributed virtual instances to make the image read-only or read/write. The naming of the VM, its network profile (IP, DNS) and its resource utilization (memory) can all be centrally managed, and the applications in the guest VM can be blended into the Start menu of the host desktop. Microsoft recommends Configuration Manager to push the MED-V images, as the default deployment mechanism is not as robust.
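
For context, the sorts of settings being centrally managed look roughly like the structure below. This is purely illustrative and not MED-V's actual policy format.

```python
# Illustrative only: the kinds of workspace settings MED-V lets you manage
# centrally, expressed as a simple structure.
workspace_policy = {
    "image_mode": "read-only",             # revert on shutdown; "read-write" would persist changes
    "vm_name_pattern": "{hostname}-XPVM",  # naming of the guest VM
    "network": {"mode": "nat", "dns_suffix": "corp.example.com"},
    "memory_mb": 512,                      # resource allocation for the guest
    "published_apps": [                    # guest apps blended into the host Start menu
        "LegacyLOBApp.exe",
        "OldReportingTool.exe",
    ],
}
print(workspace_policy["published_apps"])
```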

Here is my take on the ideal situation for MED-V: a Windows 7 deployment on new desktop hardware, with ten or more legacy Windows XP applications that are incompatible with Windows 7, in an enterprise environment where Configuration Manager is already deployed.

Windows Intune; Online Services for PC Management

Intune builds on a strategy of taking Microsoft products and cloud-enabling them. Intune is for customers who have not deployed System Center on-site and is only licensed for desktop management at this time. Intune allows you to avoid cost and complexity by NOT implementing on-premises management. The target audience for Windows Intune is the mid-market customer.

Intune is available through Microsoft Business Online Services. If you subscribe to the service you will be able to deploy the latest version of Microsoft's desktop operating system to help standardize the user environment. Subscribers also get access to MDOP and all its tools (e.g. the Diagnostics and Recovery Toolset for image and password recovery). It is recommended that you configure the following as part of the initial enrollment:

  • Product update classifications
  • Auto-approval rules for patching
  • The agent policy
  • Alerts and notifications

Communication is secured through certificates: one for the initial setup and then one per desktop for ongoing management. Intune has been tested on Windows 7, Vista and XP running the latest service pack. For alerting, the SCOM management agents are used, and the management packs have been tweaked to be less chatty across the WAN.

The console has been intentionally simplified to provide a fairly straightforward operational view. The team that developed Intune worked internally on Windows Server Update Services (WSUS), so similar capabilities, concepts and simplicity in setup are apparent when browsing the interface. The console was designed with "surfability" in mind.

Intune will track license compliance and alert on license issues. You can import licensing agreement information to cross-reference against what is installed. The team expects to have asset tracking in the final release so that hardware and software inventory is available.
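
The cross-referencing itself is easy to picture: compare installed counts from inventory against entitlements from the imported agreements. A minimal sketch with invented data:

```python
# Minimal sketch of license-compliance cross-referencing: installed counts
# from inventory vs. entitlements from imported agreements. Data is invented.
installed = {"Office 2007 Standard": 430, "Visio 2007": 55, "Project 2007": 20}
entitled  = {"Office 2007 Standard": 400, "Visio 2007": 60}

for product in sorted(set(installed) | set(entitled)):
    have = installed.get(product, 0)
    own = entitled.get(product, 0)
    status = "OK" if have <= own else f"OVER by {have - own}"
    print(f"{product:25} installed={have:4}  entitled={own:4}  {status}")
```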

In the initial release there is no concept of delegation; all users are essentially administrators.  Desktop policies are available and can be configured and deployed to the desktops.  The policies are limited in the first release but focus has been put on the most critical settings.  The application of policies has been intentionally simplified through the use of templates and wizards.  You have the flexibility of enforcing local or domain policies depending on whether the desktops participate in AD.

One of the interesting features is the ability to remote control machines through the integration of Microsoft Easy Assist. This is end-user driven in the initial service offering, meaning the user initiates the request. Because of the System Center monitoring integration, you can configure notification rules to send an alert or message for things like an Easy Assist request.

Although Microsoft was intentionally vague about the road map for Intune it is clear that the service is being actively developed to bring new features to market quickly.  Demand for the initial Beta preview was so strong that Microsoft closed signup on the day the service was announced.

Citrix Essentials with Site Recovery

Citrix has continued to develop the feature set of Citrix Essentials to enhance the Hyper-V platform. What is interesting is that they have announced Citrix Essentials Express, which is restricted to two host servers and includes Site Recovery. This provides some interesting DR alternatives for businesses in the SMB space. Because Citrix Essentials uses its StorageLink feature to provide visibility into the SAN layer to enable Site Recovery, the limitation is the number of SAN vendors providing StorageLink support. Citrix has set a strategy of continuing to focus on reasonable DR automation alternatives for the SMB market, so expect the capabilities and vendor support to continue to evolve over time.

Tuesday, April 20, 2010

Microsoft’s Management Summit 2010

This year Microsoft invited me to attend the 2010 Microsoft Management Summit. As we have noticed stronger interest in Microsoft technology amongst our customers since the release of Windows 7 late last year, I was delighted with the opportunity to go and review Microsoft's virtualization and management strategies.

The keynote by Bob Muglia (President, Server & Tools Business) started with a restatement of the core principles of Dynamic IT, which Microsoft laid out in 2003:

  • Service-enabled
  • Process-led, Model-driven
  • Unified and Virtualized
  • User-focused

Bob noted that many of the products in the System Center Suite have matured so that the reality of Dynamic IT can now be delivered.  Bob also drew a strong comparison between the principles of Dynamic IT and the requirements for Cloud Computing. 

The point was made that software development is largely based on software models that originate from the developers within an organization. With increased scale, the maturity of virtualization, and the need to properly stage code into production, Microsoft discovered that the IT organization had a stronger influence over the software model than the developers. This background was used to introduce several recent or new integration points between System Center and Service Map, Visual Studio and its Lab Management feature, and Hyper-V. Through this integration, the demo focused on deploying a new software model consisting of several tiers (web, database etc.), visually represented in Service Map, onto a staging environment of Hyper-V virtual machines. Lab Management from Visual Studio 2010 was used to develop and validate test plans. When an error occurred you had the option of taking a screenshot or capturing the state of the VMs making up the software model and emailing them to the development team. Once the code was "corrected", the final configuration was deployed using Opalis orchestration, which reminded me of VMware's Stage Manager but seems to provide the flexibility of Lifecycle Manager or Citrix's Workflow Studio.

The keynote then laid out Microsoft's message around cloud computing and the lessons learned from deploying Azure and Bing. These lessons are being used to fine-tune the next generation of software to be 'cloud ready'. Some references were made to software made up of multiple virtual containers that could scale up and down on demand. This sounds much like the BEA Liquid VM development that was under way before Oracle acquired the company.

It was at this point we got a sneak peek at the next release of SCVMM. One thing I picked up was that XenServer is integrated into the management console. Templates have been extended to include multiple virtual machines as a single application architecture. SCVMM now integrates with a WSUS server for patching. The library in SCVMM has been expanded to include App-V application packages, which allows templates to include both VMs and virtual applications. This also simplifies scaling out additional VMs to meet demand, as applications are streamed into new application servers rather than natively installed or scripted into the images.
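
A rough sketch of what a multi-VM template with streamed applications might look like as data; the names and fields here are my own illustration, not SCVMM's actual template schema.

```python
from dataclasses import dataclass

@dataclass
class VmTier:
    """One tier of a multi-VM template; apps are streamed (App-V) rather than baked into the image."""
    role: str
    vm_template: str
    instance_count: int
    appv_packages: list[str]

service_template = [
    VmTier("web", "W2K8R2-IIS-Base", 2, ["WebFrontEnd_1.0.appv"]),
    VmTier("app", "W2K8R2-Base", 2, ["OrderService_2.1.appv"]),
    VmTier("db", "W2K8R2-SQL-Base", 1, []),
]

def scale_out(tiers: list[VmTier], role: str, extra: int) -> None:
    """Scaling out just adds instances; the App-V packages stream in on first
    launch, so nothing has to be reinstalled or rescripted into the image."""
    for tier in tiers:
        if tier.role == role:
            tier.instance_count += extra

scale_out(service_template, "web", 2)
print([(t.role, t.instance_count) for t in service_template])
```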

One interesting thing that was also demonstrated was the ability of System Center to monitor VMs in the cloud, i.e. off-premises monitoring. This was provided through a management pack for Windows Azure; it would seem that if it's a Microsoft cloud, it will have a management pack.

This makes Microsoft’s foray into Virtual Lab Automation interesting as it is tightly integrated into Visual Studio and the hosted development platform Azure.