While the notion of Cloud Bursting is enticing, we are still at the early stages of making it a seamless and autonomic process. Technologies such as VXLAN and NVGRE are an important piece in getting to this point, but they do not completely solve the problem. VXLAN (developed by Cisco) and NVGRE (developed by Microsoft) allow you to overlay a Layer 2 (L2) network on top of a Layer 3 (L3) network, which eliminates the need to update IP and MAC addresses to achieve mobility. These technologies are being used to extend L2 networks to public cloud providers to simplify deployment.
As Cloud Bursting is a much-overused term, in this post I define it as the ability to dynamically create additional VMs based on Key Performance Indicators (KPIs), and to have them flow over into a public cloud with complete autonomy.
While we are not there yet, we have significantly moved the bar forward to something I refer to as dynamic stretched scaling.
VMware has suggested that scaling involves automatically adding VMs to vApps based on demand, and defines bursting as essentially the same thing, only across a hybrid of private and public clouds. This is now possible, but as the name suggests, it is closer to dynamic stretched scaling than to true cloud bursting.
To break this down into a more tangible example, let's look at the layers required to make this work. The current architecture of dynamic stretched scaling takes advantage of new features in the vCloud Connector product from VMware (for additional details, please see my post on the Tech Preview of vCloud Connector). vCloud Connector allows you to keep a catalog in sync between your private and public clouds; to configure this, you will need the same catalog available in both locations. Client connections are then load balanced using any technology that provides Global Load Balancing (GLB). VMware's go-to partner for load balancing is generally F5, which does offer GLB.
On the backend you must monitor your vApps' KPIs to identify when additional capacity is required, and when it is not. Based on these KPIs you add or delete VMs equally on both sides of your cloud (public and private). You can automate this by tying your monitoring into VMware Orchestrator.
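The KPI-driven add/remove logic above can be sketched in a few lines. This is a minimal illustration, not VMware Orchestrator's actual API: the threshold values and the `scaling_decision` function are hypothetical, and the returned delta stands in for whatever workflow your orchestration tool would invoke on each side of the cloud.

```python
# Hypothetical sketch of a KPI-driven scaling decision.
# Thresholds are illustrative; tune them to your own vApp's KPIs.

SCALE_UP_CPU = 80.0    # average CPU % above which we add a VM
SCALE_DOWN_CPU = 30.0  # average CPU % below which we remove a VM

def scaling_decision(avg_cpu_percent, vm_count, min_vms=2):
    """Return the change in VM count (+1, -1, or 0) for one side.

    The same delta is applied to both the private and public halves
    of the vApp so the two sides stay in step.
    """
    if avg_cpu_percent > SCALE_UP_CPU:
        return +1
    if avg_cpu_percent < SCALE_DOWN_CPU and vm_count > min_vms:
        return -1
    return 0
```

For example, `scaling_decision(92.0, 4)` signals a scale-up on both sides, while `scaling_decision(20.0, 2)` holds steady because the pool is already at its minimum size.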
Although this is a substantial leap forward from where we were a very short time ago, there is still a great deal of complexity in the configuration, and it requires tying various vendor solutions together. And while this works well for some architectures, it adds complexity to others. For example, consider a traditional tiered architecture with database, application, and web tiers. You may have to segregate the relational database into read/write masters and read-only subscribers to adjust for latency while still ensuring reasonable performance.
Latency in this model is also relative. You need to know when the performance latency in the overloaded private cloud environment exceeds the network latency introduced by using a public cloud provider, and react accordingly.
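That relative-latency test can be made concrete with a small comparison function. This is a hedged sketch under simple assumptions: the function name and the three measured inputs are hypothetical stand-ins for whatever response-time and round-trip metrics your monitoring actually collects.

```python
def prefer_public_burst(private_response_ms, public_rtt_ms,
                        public_response_ms):
    """Burst to the public cloud only when the public side, network
    round-trip included, still beats the overloaded private side.

    private_response_ms -- current response time of the private tier
    public_rtt_ms       -- extra network latency to the public provider
    public_response_ms  -- expected response time of the public tier
    """
    return (public_response_ms + public_rtt_ms) < private_response_ms
```

So a private tier degraded to 400 ms justifies bursting to a public tier that answers in 200 ms over a 50 ms link, but a healthy 150 ms private tier does not.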
Using dynamic scaling both locally and in the public cloud requires knowing which indicators signal poor performance, such as the time it takes an end user to load a page. In addition, if you are a cloud provider, performance is typically defined by an SLA, and meeting that service commitment may require adding or throttling resources.
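Mapping an end-user KPI onto an SLA-driven action might look like the following. Again, this is only an illustrative sketch: the function, the page-load metric, and the headroom factor are assumptions, not part of any vendor product.

```python
def sla_action(page_load_ms, sla_target_ms, headroom=0.5):
    """Map a measured end-user KPI onto a scaling action.

    Returns 'scale_up' when page load time breaches the SLA target,
    'scale_down' when there is comfortable headroom (well under the
    target), and 'hold' otherwise.
    """
    if page_load_ms > sla_target_ms:
        return "scale_up"
    if page_load_ms < sla_target_ms * headroom:
        return "scale_down"
    return "hold"
```

With a 2-second SLA target, a 2.5-second page load triggers scale-up, an 800 ms load frees resources, and a 1.5-second load holds steady.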
It is possible to put these models together now, but it still requires an in-depth understanding of the components and how they interact with each other. This is why I refer to the current capabilities as dynamic stretched scaling vs. true Cloud Bursting.