Wednesday, May 5, 2010

WAN Optimization

Ensuring you have a good methodology for promoting WAN Optimization changes to your production network is essential if you want to move from tactical to strategic use of the technology. While the details may vary depending on the vendor solution that is deployed, having a clearly defined process is critical.

The majority of our customers deploy their WAN Optimization solutions physically inline rather than logically inline or with hardcoded path statements (often referred to as out-of-path deployments). As a result, promoting net-new optimization configurations to your appliances always carries the risk of an unplanned network outage.

I recommend setting up a lab that emulates production as closely as possible. With smaller appliances designed for branch optimization now cheaper than ever, it does not have to cost a fortune to build a lab that represents your production environment. In addition, much of the network hardware that once had to be physical can now be virtualized, reducing both the cost and the footprint of your lab.

WAN Optimization technology has the potential to benefit many transport protocols, applications, data streams and even server consolidation projects within an organization. There is a tendency, however, to isolate this technology within the networking team. While this makes sense from an operations perspective, it does nothing to empower other parts of the IT team to take advantage of the capabilities of WAN Optimization. For example, Riverbed supports encrypted MAPI; without the Exchange team's awareness of that capability, however, it is unlikely to factor into the design of the email system.

Part of the process should use the lab to engage your Subject Matter Experts (SMEs) to demonstrate the benefit of WAN Optimization as it relates to their technology (File Services, Storage, Messaging, Web Services, etc.). In addition, you can reduce the overall risk of promoting changes to production by having them validate the tests you use to prove the solution, as sketched below. As with any technology you wish to promote within your organization, the more people who recognize the benefits, the more likely it is to factor into new designs, consolidations or enhancements to existing technology.
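
One way to make those SME validations repeatable is to script them. Below is a minimal sketch, in Python, of a timed file-copy test an SME could run across the lab WAN link; the share path and file name are placeholders for whatever your lab uses, and the test should be run cold (first pass, nothing in the optimization data store) and then warm (second pass).

    import os
    import shutil
    import time

    def timed_copy(src, dst):
        # Copy src to dst and return elapsed seconds and effective Mbit/s.
        start = time.time()
        shutil.copyfile(src, dst)
        elapsed = time.time() - start
        mbits = (os.path.getsize(src) * 8) / (elapsed * 1000000.0)
        return elapsed, mbits

    if __name__ == "__main__":
        src = r"\\branch-lab\share\testdata.bin"  # placeholder lab file share
        for label in ("cold", "warm"):
            secs, rate = timed_copy(src, r"C:\temp\testdata.bin")
            print("%s pass: %.1f s, %.1f Mbit/s" % (label, secs, rate))

Comparing the cold and warm passes against a baseline run with optimization bypassed gives the SME a simple, repeatable number to sign off on.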

As we move more and more toward service-driven architectures, it is a great idea to capitalize on technology that improves the overall end-user experience, like WAN Optimization. It is even more important to ensure that your IT team is familiar with the technology so it can benefit the entire organization.

Wednesday, April 28, 2010

DPM & Hyper-V

Data Protection Manager (DPM) is the extension of Microsoft Backup designed for the enterprise. It is based on volume shadow copies, which create a 'point in time' snapshot of a Windows volume. Microsoft provides the ability to take a full copy and then perpetual scheduled snapshots across a server and desktop environment; with DPM you can schedule and manage this activity centrally.
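
To make the 'point in time' idea concrete, here is a minimal sketch that takes and lists a snapshot by hand using the built-in vssadmin tool (driven from Python for consistency with the other examples); DPM effectively automates this on a schedule. Note that vssadmin create shadow is only available on Windows Server editions and must be run elevated.

    import subprocess

    def create_shadow(volume="C:"):
        # Take a point-in-time shadow copy of the volume (Windows Server only).
        subprocess.run(["vssadmin", "create", "shadow", "/for=" + volume], check=True)

    def list_shadows():
        # Show existing shadow copies and when they were created.
        subprocess.run(["vssadmin", "list", "shadows"], check=True)

    if __name__ == "__main__":
        create_shadow("C:")
        list_shadows()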

Shadow copies are dependent on a volume shadow copy writer to 'quiesce' read/write activity to the volume and ensure the data is consistent. Microsoft has written additional shadow copy writers for applications such as SQL Server, SharePoint and others. Because these writers are application aware, they ensure the application's data is crash recoverable. The difference between crash consistent and crash recoverable is whether the data in transit is consistent with the data on the volume: if the snapshot is crash recoverable, the data is consistent.
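
A quick way to see which application-aware writers a given server exposes, and whether they are healthy, is the built-in vssadmin tool; a small sketch, again driven from Python:

    import subprocess

    def vss_writer_report():
        # Capture 'vssadmin list writers' output: writer name, state, last error.
        result = subprocess.run(["vssadmin", "list", "writers"],
                                capture_output=True, text=True, check=True)
        return result.stdout

    if __name__ == "__main__":
        # Application-aware writers such as 'SqlServerWriter' should report
        # State: Stable and Last error: No error before a backup is taken.
        print(vss_writer_report())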

When a snapshot is requested at the Hyper-V host level to take a point-in-time copy of a running virtual machine, the snapshot is crash consistent. When it is initiated by the operating system within the virtual machine using the application-specific shadow copy writer, the data is crash recoverable. This is why agents are typically run both at the Hyper-V host level and within the virtual machines.

According to Microsoft, the estimates for a single DPM 2010 server are as follows:

  • 100 Servers
  • 1000 Desktops
  • 2000 SQL databases
  • 40 TB of Exchange data stores
  • Managing 80 TB of disk space

Friday, April 23, 2010

Microsoft System Center Desktop Error Monitoring (DEM)

DEM is enabled through the built-in error reporting on Windows desktops; the idea is that all the information you could forward to Microsoft (watson.microsoft.com) becomes available to the internal IT team instead. DEM is available under MDOP, which is provided as part of Software Assurance (SA). It facilitates low-cost monitoring of application crashes, is enabled through GPO and does not require an agent.

According to Microsoft studies, 90% of users reboot after an application crash rather than calling the help desk. This translates to lost time and possibly lost data, while the underlying problem remains unresolved. DEM allows you to collect the crash data and check it against Microsoft's knowledge base.

The system requirements are a management server, a reporting server and a SQL database. Overall utilization is light, as the purpose of these components is to collect the information, not query it. Through GPO you redirect the error reports from watson.microsoft.com to your internal DEM server, as sketched below.
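
For illustration only, this sketch writes the Windows Error Reporting policy values that perform that redirect directly to the registry. It assumes the Vista-era policy values (CorporateWerServer and CorporateWerPortNumber); the server name and port below are placeholders, older clients use different values, and in production you would deploy this through the GPO template rather than a script.

    import winreg

    WER_POLICY_KEY = r"SOFTWARE\Policies\Microsoft\Windows\Windows Error Reporting"

    def point_wer_at_dem(server, port):
        # Redirect crash reports from watson.microsoft.com to an internal
        # collector by setting the corporate WER policy values (run elevated).
        key = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, WER_POLICY_KEY)
        try:
            winreg.SetValueEx(key, "CorporateWerServer", 0, winreg.REG_SZ, server)
            winreg.SetValueEx(key, "CorporateWerPortNumber", 0, winreg.REG_DWORD, port)
        finally:
            winreg.CloseKey(key)

    if __name__ == "__main__":
        point_wer_at_dem("dem01.corp.example.com", 51906)  # placeholder server/port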

One of the interesting things you can do is turn off the prompt that asks the user to send the data. DEM is built on the same framework as System Center Operations Manager; however, it is rights-restricted to 'agentless' desktop error monitoring only. Because it is essentially OpsMgr, you can also alert on crossed thresholds. DEM categorizes all the error messages automatically, allowing you to easily check the version information of an application and its associated DLLs. You can also create a custom error-message response on the alert, or collect additional information such as a dump of a file. You can report on the number of application crashes across the organization, then take these batched dumps and send them to Microsoft. Microsoft will query them against its knowledge base and respond with a link if it is a known issue.

Bonus Tip: Do not configure DEM to send full memory dumps from desktops, as doing so significantly increases the amount of data traversing the network.