tag:blogger.com,1999:blog-58891309782613012382024-03-12T21:25:23.784-07:00virtualguru.orgPractical Virtualization InsightPaul O'Dohertyhttp://www.blogger.com/profile/05237786392603690096noreply@blogger.comBlogger180125tag:blogger.com,1999:blog-5889130978261301238.post-51547806302612238962018-07-25T08:23:00.001-07:002018-07-25T08:23:44.252-07:00#GoogleNext18: Bringing you the future of Cloud<p align="justify">Chen Goldberg <a href="https://twitter.com/GoldbergChen">@<b>GoldbergChen</b></a> and Aparna Sinha <a href="https://twitter.com/apbhatnagar">@<b>apbhatnagar</b></a> will be speaking about the integrated stack of Kubernetes and Istio for Cloud Services Platform and On-Prem GKE. Cloud Services Platform has been built with a consistent experience, centralized control, and agility with flexibility in mind. Cloud Services Platform is targeted at hybrid infrastructure. This enables Continuous Integration/Continuous Deployment “CI/CD” across enterprise and public cloud environments, something that is complicated to achieve today. Cloud Services Platform overcomes these limitations and is powered by Google Kubernetes Engine “GKE”. </p> <p align="justify">Cloud Services Platform provides consistency of service, development and management across environments. GKE On-Prem is a software kit which can be deployed on enterprise server hardware. From within Cloud Services Platform in the GCP console you can add your GKE On-Prem environment. Adding it generates a registration manifest which you apply to finish connecting GKE On-Prem to Google Cloud Services Platform. </p> <p align="justify">Once integrated you have visibility into the On-Prem GKE platform from within the GCP console. You can deploy from the GCP console to the On-Prem GKE cluster. In addition, GKE Policy Management is being released to provide syncing of namespaces, RBAC policies and secure cluster management. 
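A policy of the kind GKE Policy Management syncs can be pictured as an ordinary Kubernetes RBAC manifest. The sketch below builds one in Python; the role name, namespace and verb list are illustrative assumptions, not details from the session:

```python
import json

def dev_only_deploy_role(namespace="development"):
    """Build a Kubernetes RBAC Role permitting pod deployment only in the
    given namespace. Name, namespace and verbs are illustrative."""
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": {"name": "pod-deployer", "namespace": namespace},
        "rules": [{
            "apiGroups": [""],
            "resources": ["pods"],
            "verbs": ["create", "get", "list"],
        }],
    }

# Serialized like this, the manifest can be committed to a Git repository
# and pushed to every cluster by the policy-management tooling.
manifest = json.dumps(dev_only_deploy_role(), indent=2)
```

Storing the serialized manifest in Git is what gives every cluster the same source of truth for the policy.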
A demo is shown across three clusters in both Google and On-Prem GKE in which an RBAC role policy allows pod deployments in development only. The policy is stored in a Git repository and then pushed to all the clusters. </p> <p align="justify">Cloud Services Platform leverages Istio, which manages business service levels. It decouples operations from development by providing common capabilities that every app requires. A demo shows a sample application written in four different programming languages. Istio is pushed to the environment to provide generic services across all languages without code changes. An authentication policy is pushed to enable mTLS authentication across all of the code bases that make up the application, providing client-side authentication. </p> <p align="justify">Jeff White, the Platform Architect at eBay, is invited to talk about how they are using Google Cloud Platform. eBay leverages Istio to observe metrics and tune existing applications. They are also using Istio to have consistent policies across their environments. Further adoption of Google Cloud Platform and Istio is planned as they move forward.</p>Paul O'Dohertyhttp://www.blogger.com/profile/05237786392603690096noreply@blogger.com0tag:blogger.com,1999:blog-5889130978261301238.post-49659103714468505712018-07-24T12:30:00.001-07:002018-07-24T12:30:21.370-07:00#GoogleNext18: Automating Large Scale Cloud Migrations to GCP with Velostrata<p align="justify">Velostrata will monitor workloads on VMware and recommend classifications with the appropriate sizing. This expands on Google Cloud's cost savings like sustained use discounts and custom workload sizes. You can live test your migrations using a Test Clone. This allows you to snapshot a number of VMs and bring them up in isolation on GCP. You can do that via the console or an API call. You can also do site-level bandwidth throttling. This is important because most enterprises will throttle incoming but not necessarily outgoing traffic. 
Velostrata Network settings enable you to set a bandwidth cap on all migration traffic from a specific site.</p> <p align="justify">The solution is built to work within your private enterprise space so it will work behind NAT and proxies. Velostrata runs a hosted service to collect telemetry and log aggregation data for your migrations. Google recommends that you use discovery and assessment tools to define what moves and the appropriate order. It is important to develop a pipeline so that you can keep the migrations moving at a predetermined rate and tempo for large customers. Google recommends that you run the migration in sprints, migrating within a week while identifying the next workloads for the following week.</p> <p align="justify">The Velostrata migration component is tightly integrated with VMware vCenter. You simply right-click the VM and select migrate. The migration wizard enables you to select the Google VPC and makes recommendations for sizing. Monitoring of the migration is done within the vCenter console. Any migration alerts or notifications are propagated into the vCenter monitoring system. You can both fail over and fail back. </p> <p>Velostrata has full runbook automation capabilities. You create a runbook from within the Velostrata web console. The runbook is exported in CSV format, which allows you to filter and order the migrations. You can then pass the CSV through the rightsizing module to determine the appropriate Google machine class for the workload. As it is developed in spreadsheet format, the runbook is self-documenting. To effect the migration you pass the CSV to the migration job engine, which displays the status of the migration of the group of VMs in the runbook. 
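The CSV-driven runbook flow can be sketched in a few lines of Python. The column names and the rightsizing thresholds below are invented for illustration; Velostrata's actual recommendation logic is its own:

```python
import csv
import io

def rightsize(vcpus, mem_gb):
    """Toy rightsizing rule mapping observed vCPU/memory to a GCE machine
    type. Thresholds and type names are assumptions, not Velostrata's
    actual algorithm."""
    if vcpus <= 2 and mem_gb <= 8:
        return "n1-standard-2"
    if vcpus <= 4 and mem_gb <= 16:
        return "n1-standard-4"
    return "n1-standard-8"

def annotate_runbook(csv_text):
    """Read an exported runbook CSV and add a recommended machine type."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    for row in rows:
        row["machine_type"] = rightsize(int(row["vcpus"]), int(row["mem_gb"]))
    return rows

# Hypothetical two-wave runbook export.
runbook = "vm,wave,vcpus,mem_gb\napp01,1,2,8\ndb01,2,8,32\n"
annotated = annotate_runbook(runbook)
```

Because the runbook stays a plain spreadsheet, filtering, ordering and annotating it like this keeps it self-documenting before it is handed to the migration job engine.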
</p>Paul O'Dohertyhttp://www.blogger.com/profile/05237786392603690096noreply@blogger.com0tag:blogger.com,1999:blog-5889130978261301238.post-35855777591134733192018-07-24T11:02:00.001-07:002018-07-24T11:02:25.202-07:00#GoogleNext18: @podoherty reporting live @googlecloud: KeyNote<p align="justify">Diane Greene, the CEO of Google Cloud, takes the stage. Diane welcomes everyone and mentions that they have 25,000 registered attendees at Google Next. This is up from approximately 6,000 attendees at the last event.</p> <p align="justify">Diane mentions that information is starting to power every business. IT has gone from being a cost center to a driver for the business. Tech is now core to every product. Talking to CEOs, they realize that they are going to be shutting down their datacenters. Google is seeing amazing growth based on this trend. </p> <p align="justify">But why Google? Google’s business is information. Google has a cloud which takes in information and organizes it in a way no one else can. Google has spent 20 years scaling and optimizing their platform. Diane’s job is really to surface all these great innovations. </p> <p align="justify">Diane mentions the products in data analytics, G Suite and Machine Learning and Artificial Intelligence “AI”. Google is a world leader in security. Everything starts with the Titan chip, which encrypts at rest and in transit using either Google’s or your encryption keys. There is no more secure setup than combining a Chromebook, G Suite and two-factor authentication. For example, G Suite stops 99.99% of spam and phishing attacks. Today AI is built into everything Google does, like datacenter energy use and BigQuery. Google wants GCP to be the best place for open source development.</p> <p align="left">Kubernetes is the fastest moving development/container platform of all time. Google has been named a leader in Infrastructure-as-a-Service “IaaS” and content collaboration by Gartner. 
Look at all the Gartner and Forrester Google rankings here <a title="https://cloud.google.com/analyst-reports/" href="https://cloud.google.com/analyst-reports/">https://cloud.google.com/analyst-reports/</a></p> <p align="justify">Mike McNamara @MikeMcnamara, the Chief Info and Digital Officer for Target @Target, is welcomed onto the stage. Target insourced their environment and reorganized around product, agile and DevOps methods. Mike chose Google because of shared values, site reliability and good synergies between their engineers. Google has increased their number of engineers and increased their “Office of the CTO” “OCTO” personnel significantly to assist with these engineering discussions. </p> <p align="justify">Google’s mission is to organize the world’s information and make it universally accessible and useful. Google’s Cloud mission is to organize your information.</p> <p align="justify">Sundar Pichai <a href="https://twitter.com/sundarpichai">@<b>sundarpichai</b></a>, the CEO of Google, is invited to the stage. Google’s customers have grown to include not just consumers and developers but enterprise customers and partners. Google is committed to having an open platform. The Google phone was launched in 2008 with one provider and now they have over 24,000 devices. Kubernetes went from its initial release four years ago to number one. 75% of enterprises use Kubernetes. Google created TensorFlow so that anyone can use AI. AI is helping doctors diagnose patients faster with better treatments. Google wants to bring an AI-first approach to all customers. That is what Google’s Cloud journey is all about at its heart. </p> <p align="justify">Urs Holzle <a href="https://twitter.com/uhoelzle">@<b>uhoelzle</b></a>, the SVP of Technical Infrastructure, is introduced. Urs wants to talk about how Google is bringing the Cloud to you. Urs mentions that Cloud computing is missing a simple way to combine your enterprise with one or multiple cloud providers. 
Cloud providers differentiate themselves in ways that are not necessarily meaningful differences, such as creating a VM. Each has its own way of setting things up. This gets really complicated in a hybrid or multi-cloud environment. Administration has become the key expense. While server costs have fallen, administration has increased significantly. </p> <p align="justify">Google is extending Kubernetes using Istio. Istio makes service-to-service connections easy and reliable. Istio is a collaboration between Google, Pivotal, Red Hat and Tigera. Today Istio is available. Google has announced Cloud Services Platform, which is a combination of Kubernetes “GKE” and Istio. Full integration with Stackdriver will be available from day one. </p> <p align="justify">A demo is shown of a retail web-based application. The app is deployed in GKE (Kubernetes). Through Google Cloud Platform you get visibility into all clusters. Once Istio is deployed you get a service map in Stackdriver. Istio automatically works out of the box. You can drill down on the service map and see latency between the different retail application GKE components. You can also define business service levels to track deviation from the business goals of the applications. Using Istio you get a common service platform with lower operational overhead and service requirements. </p> <p align="justify">Google announces GKE On-Prem. With Google Cloud Platform you can manage a GKE platform in Google or deployed in your datacenter. </p> <p align="justify">Prabhakar Raghavan <a href="https://twitter.com/WittedNote">@<b>WittedNote</b></a>, the VP of G Suite, takes the stage. G Suite has 1.4 billion users, with 80,000 students. These will be our future employees. Companies like Airbus have chosen G Suite for collaboration. </p> <p align="justify">Prabhakar mentions that they have three design principles for G Suite: Secure, Smart and Simple. Secure is cloud based, with two-factor authentication and G Suite security keys. 
Google Security Center now has a new investigation tool for G Suite that looks at suspicious file transfer or egress data scenarios. It goes into beta today. G Suite now has data regions to enable you to localize mailboxes to certain geographies. It is generally available today.</p> <p align="justify">You can now query Google Sheets with natural language to quantify data and the formula will be generated for you. Prabhakar mentions that 10% of replies in Gmail are done through machine learning via Smart Reply. Smart Reply is coming to Google Hangouts. Smart Compose is the ability for the AI to learn your correspondence and suggest responses based on how you interact with your contacts. Google Translate can now take poor grammar and translate it into proper language through Grammar Translate. Google Calendar has enhanced scheduling to look at past patterns and find windows and locations that work across your target group. </p> <p align="justify">Fei-Fei Li @drfeifei, Google’s Chief Scientist for AI, is introduced and announces the 3rd generation of <a href="https://en.wikipedia.org/wiki/Tensor_processing_unit">Tensor Processing Units</a> “TPUs”. Fei-Fei mentions AutoML, which simplifies Machine Learning for customers. Fei-Fei announces that AutoML Natural Language and AutoML Translation are now generally available, simplifying the use of Machine Learning for its customers.</p>Paul O'Dohertyhttp://www.blogger.com/profile/05237786392603690096noreply@blogger.com0tag:blogger.com,1999:blog-5889130978261301238.post-3218560921511193992017-12-17T12:11:00.001-08:002017-12-17T12:14:22.295-08:00SMB Cloud Summit Toronto: Inaugural event & DCD Connect Canada<div align="justify">
The inaugural <a href="http://dcd.events/conferences/canada/benefits/smb-cloud-summit">Ontario SMB Cloud Summit</a> event was sponsored by Long View Systems as part of its commitment to helping businesses adopt Cloud computing. <a href="https://www.linkedin.com/in/nolanevans/">Nolan Evans</a>, General Manager, Ontario at Long View Systems, explained why the SMB Cloud Summit and DCD Connect are important to support: “At Long View Systems we have a uniquely Canadian perspective. We know it is critical for our customers to get good advice and support as they define their business goals in the context of Cloud to take advantage of the unique capabilities this model offers. As a provider who has built managed services on our own Cloud and on public Cloud, we have a lot of experience that can help avoid the pitfalls of new technology adoption. Participating in the Cloud Summit allows us to bring those learnings to the community in a collaborative way with information that is relevant for those in Central Canada.” </div>
<a href="https://lh3.googleusercontent.com/-INIxB0uGbHM/WjbPY9y4XoI/AAAAAAAAA0g/Gch607yBDscEHQvn_F2gOuau6192_Kx8QCHMYCw/s1600-h/clip_image002%255B4%255D"><img alt="clip_image002" border="0" height="240" src="https://lh3.googleusercontent.com/-bv_3WVW-vA4/WjbPZe5xd5I/AAAAAAAAA0k/UdWx_HLHx_g3pmqP-y6KJRJkb9dqfsjGgCHMYCw/clip_image002_thumb%255B1%255D?imgmax=800" style="background-image: none; border-image: none; border: 0px currentColor; display: block; float: none; margin-left: auto; margin-right: auto;" title="clip_image002" width="231" /></a><br />
<div align="justify">
<a href="https://www.linkedin.com/in/rjoanneanderson/">Joanne Anderson</a>, the Director for Technology Adoption and Regional Growth from the Ontario Investment Office, provided the introduction and noted that while 80% of US companies have invested in cloud, only 50% in Canada have. This was one of the driving factors in putting the event together.</div>
<div align="justify">
<br /></div>
<div align="justify">
Michael O’Neil @oneil_intoronto, the host for the day’s panel discussions, is one of the world’s most senior IT industry analysts. During his 25-year career, he has led four different IT consulting companies and spearheaded leading-edge research projects in North America and around the world. He led the day’s events in a series of focus discussions on key aspects of cloud adoption.</div>
<div align="justify">
<br /></div>
<div align="justify">
Feisal Hirani @feztech and Paul O’Doherty @podoherty, two of Long View’s Principal Cloud Architects, joined a distinguished list of <a href="http://dcd.events/conferences/canada/speakers">speakers</a> and industry experts for two days of interactive discussions on the development of Cloud for SMB and Enterprise businesses. <a href="https://www.linkedin.com/today/author/ivan-brinjak-0662334">Ivan Brinjak</a> @ivanbrinjak, the Sales Director at Long View Systems, and John Kaus @john_Kaus, the Cloud Sales specialist, were on hand to help customers reflect on their current and future plans for hybrid IT and cloud adoption. </div>
<div align="justify">
<br /></div>
<a href="https://lh3.googleusercontent.com/-vVxSqX_7HVk/WjbPZ5C7eDI/AAAAAAAAA0o/PyKiD5JIjX8G3knwkbFmI0FQH0hHIsTLACHMYCw/s1600-h/clip_image003%255B5%255D"><img alt="clip_image003" border="0" height="222" src="https://lh3.googleusercontent.com/-R9TFSNmZe5Q/WjbPaPWjguI/AAAAAAAAA0s/ISNg9MBrGCUtgXu-v0SpGcW5VYGU8XeOwCHMYCw/clip_image003_thumb%255B2%255D?imgmax=800" style="background-image: none; border-image: none; border: 0px currentColor; display: inline;" title="clip_image003" width="419" /></a><br />
<br /><br />
<div align="justify">
The feedback and audience interaction were fantastic. While it was the 1.0 version of the event, the wealth of industry expertise on hand over the two days establishes the event as an important source of relevant information for assisting businesses in Ontario in adopting Cloud technology.</div>
<div align="justify">
<span style="font-weight: normal;"></span><br /></div>
Paul O'Dohertyhttp://www.blogger.com/profile/05237786392603690096noreply@blogger.com2tag:blogger.com,1999:blog-5889130978261301238.post-41623032261931146472017-12-15T09:22:00.001-08:002017-12-15T09:40:32.571-08:00Enable edge intelligence with Azure IoT Edge<br />Based on a presentation by Terry Mandin @TerryMandin<br />
Microsoft is simplifying IoT through the use of the Azure IoT Suite and the following components: <br />
<ol>
<li>Azure IoT Hub – the gateway that allows you to ingest data</li>
<li>Azure Time Series Insights – enables you to graphically analyze and explore data points</li>
<li>Microsoft IoT Central – a fully managed IoT SaaS offering</li>
<li>Azure IoT Edge – a virtual representation of your IoT device that can download code to a physical device and have it execute locally</li>
</ol>
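As a rough illustration of the ingest-and-analyze pattern behind IoT Hub and Time Series Insights, the sketch below buckets timestamped readings into per-minute averages. The event shape is an assumption for illustration, not the actual service schema:

```python
from collections import defaultdict

def minute_averages(events):
    """Group (timestamp_seconds, value) readings into per-minute averages,
    a stand-in for the kind of exploration Time Series Insights offers."""
    buckets = defaultdict(list)
    for ts, value in events:
        buckets[ts // 60].append(value)
    return {minute: sum(vals) / len(vals) for minute, vals in buckets.items()}

# Three readings ingested by the "hub" (here, just a list).
readings = [(5, 1.0), (30, 2.0), (70, 3.0)]
averages = minute_averages(readings)
```

The services above do this at scale with rich visualization; the point is only that ingestion and time-bucketed analysis are the two halves of the pipeline.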
<div align="justify">
Microsoft also has a certification program, called Azure IoT Security, to validate security on 3rd-party products. Azure IoT Edge was recently announced and is provided by Microsoft to enable you to keep data close to the enterprise. IoT Edge pushes computing back out to the gateway device by enabling IoT modules, called ‘module images’, to be pushed from a repository on the Azure Cloud. </div>
<div align="justify">
In the oilfields of Alberta, IoT is being leveraged to monitor pumpjacks to determine if they are working properly based on data sent to an IoT hub on the Azure cloud. In the next version of the IoT solution, the customer will send a module image with custom code to the gateway device using IoT Edge. </div>
<div align="justify">
<a href="https://lh3.googleusercontent.com/-5LQzAdcq5Dk/WjQEzgmZE6I/AAAAAAAAA0M/iYQyEehZ06o65LtIPgu53kGsWht0ZzbZgCHMYCw/s1600-h/image3"><img alt="image" border="0" height="189" src="https://lh3.googleusercontent.com/--MFjvV-w9wo/WjQE0WOjRMI/AAAAAAAAA0Q/-TQMfxs61zMNcPNWoTfmTgvRAcIv4XdqACHMYCw/image_thumb1?imgmax=800" style="background-image: none; display: block; float: none; margin-left: auto; margin-right: auto;" title="image" width="333" /></a></div>
<div align="justify">
In this model a gateway device will be placed on the well site right next to the pumpjack. The Azure IoT Edge agent and runtime run on the gateway, using local processing to find problems. If a problem is found, the pumpjack speed can be adjusted quickly while the information is also logged to the cloud for maintenance.</div>
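The local decision loop on the gateway might look something like the following sketch. The strokes-per-minute field, the healthy band, and the action names are all assumptions for illustration:

```python
NORMAL_RANGE = (8.0, 12.0)  # strokes per minute; an assumed healthy band

def process_reading(spm, cloud_log):
    """Evaluate one pumpjack reading locally. Out-of-range readings trigger
    an immediate speed adjustment; every reading is queued for the cloud."""
    low, high = NORMAL_RANGE
    action = "adjust_speed" if spm < low or spm > high else "ok"
    cloud_log.append({"spm": spm, "action": action})  # uploaded later
    return action

log = []
process_reading(10.0, log)   # healthy stroke rate
process_reading(15.2, log)   # too fast: adjust locally, log for maintenance
```

The key design point is latency: the adjustment happens on-site without a round trip to the cloud, while the log still flows upstream for maintenance analysis.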
<div align="justify">
The code, or ‘module image’, is built in the repository within Azure. You also provision an IoT Edge Device in Azure, which is a logical representation of your gateway. You define in Azure which modules will run on the gateway. IoT Edge looks at the module image and pushes it out to the gateway, which has the runtime environment and agent on it. When you deploy the physical device, you install the Azure IoT Edge runtime, which pulls the modules down from the cloud. This is done without compromising security. </div>
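Conceptually, the runtime reconciles the desired modules defined in Azure against what is running on the gateway. A minimal sketch, with an invented manifest shape and registry names:

```python
def modules_to_pull(desired, running):
    """Return the names of modules whose image must be pulled: missing on
    the gateway or running a different image version."""
    return sorted(name for name, image in desired.items()
                  if running.get(name) != image)

# Hypothetical deployment manifest (desired) vs. gateway state (running).
desired = {"temp-filter": "acr.example/temp-filter:1.1",
           "uploader": "acr.example/uploader:1.0"}
running = {"temp-filter": "acr.example/temp-filter:1.0"}
to_pull = modules_to_pull(desired, running)
```

Here the gateway would pull a new version of one module and a first copy of the other, which mirrors the pull-from-cloud behavior described above.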
<div align="justify">
The IoT Edge agent on the device ensures the Edge module is always running and reports its health back to the cloud. IoT Edge also communicates with the other IoT leaf devices, which are other physical devices with sensors. The IoT runtime and agent can run on something as small as a physical sensor or as large as a full-blown gateway hardware device. </div>
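The agent's keep-alive-and-report duty can be sketched as a simple watchdog; the module names and states below are illustrative, not the agent's real data model:

```python
def watchdog(modules):
    """Restart any module that is not running and build a health report to
    send upstream. States and names are illustrative."""
    restarted = [name for name, state in modules.items() if state != "running"]
    for name in restarted:
        modules[name] = "running"  # stand-in for an actual container restart
    return {"healthy": not restarted, "restarted": restarted}

mods = {"temp-filter": "running", "uploader": "stopped"}
report = watchdog(mods)
```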
<div align="justify">
You can run your own custom code within a module image, or several Azure modules including Stream Analytics, Azure Functions, and AI and Machine Learning. You can push down both machine learning and cognitive functions as well. The underlying runtime software is container based, supporting the individual containers or module images. </div>
Paul O'Dohertyhttp://www.blogger.com/profile/05237786392603690096noreply@blogger.com0tag:blogger.com,1999:blog-5889130978261301238.post-88762797618925214762017-12-15T06:29:00.001-08:002017-12-15T08:34:39.938-08:00Microsoft Summit: AI Fear of Missing Out “FOMO”Ozge Yeloglu <a href="https://twitter.com/OzgeYeloglu">@<b>OzgeYeloglu</b></a>, Data and AI Lead of Microsoft Canada, has a core team of Data Architects and Data Scientists located in central Canada. They have combined architects and scientists so that privacy and compliance are part of the AI implementation. Ozge was the first Data Scientist hired in Canada by Microsoft. Prior to Microsoft, Ozge was co-founder of a startup that analyzed logs to predict application failures.<br />
<br />
<div align="justify">
What is artificial intelligence? The definition is “Intelligence exhibited by machines mimicking functions associated with human minds”. The three main pillars of human functions are reasoning (learning from data), understanding (interpreting meaning from data) and interacting (interacting with people in a human way). We are still very far away from natural human interaction with AI. </div>
<div align="justify">
The reason AI is such a hot topic is because of advancements in the foundational components: Big Data, Cloud Computing, Analytics and powerful query algorithms. These are more universally available than at any other time in history. <br />
<br /></div>
<div align="justify">
Digital Transformation in AI can be looked at in four pillars: Enable your customers through customer analytics and measuring customer experiences. Enable your employees through business data differentiation and organizational knowledge. Optimize your operations using intelligent predictions and deep insights (IoT). The final pillar is to transform your products by making them more dynamic.<br />
<br /></div>
<div align="justify">
The four foundational components of an AI platform are infrastructure, IT services, digital services and cognitive data. The reality, based on Gartner's research, is that only 6% of the discussions happening on AI are at the implementing stage. The majority of discussions are about knowledge gathering.<br />
<br /></div>
<div align="justify">
Ozge is doing a lot of lunch and learns to help people understand what AI is all about. Often once understood they realize that they need the foundational pieces in place before being ready for AI. </div>
<div align="justify">
It is important to start with a single business problem, build the machine learning tooling and demonstrate the value. As you work through the use case you are educating your people. Essentially this applies a building-block approach. Ozge recommends planning for the near future because the tools and technologies are emerging so quickly. Starting with a three-year plan almost guarantees that the tools you select today will be obsolete by the time the project finishes.<br />
<br /></div>
<div align="justify">
It is important to know your data estate. If your data is not the right data, your solutions will not be the right solutions. If your data is not in the right place, it will take too long to run. Building the right data architecture is an enabler for AI. Great AI needs great data. It is important to also find the right people. Many Data Scientists are generalists so they may not have the right domain expertise for your particular business. For this reason it may be better to take existing people and train them on Big Data management. <br />
<br /></div>
<div align="justify">
A good AI solution is built on an AI platform, with comprehensive data, that resolves a business problem, surrounded by the right people.</div>
Paul O'Dohertyhttp://www.blogger.com/profile/05237786392603690096noreply@blogger.com0tag:blogger.com,1999:blog-5889130978261301238.post-4412156363103644582017-09-29T10:26:00.001-07:002017-09-29T10:26:52.024-07:00Microsoft Ignite 2017: High Availability for your Azure VMs<p align="justify">The idea with Cloud is that each layer is responsible for its own availability, and by combining these loosely coupled layers you get higher availability. It should be a consideration before you begin deployment. For example, if you start by considering maintenance and how and when you would take a planned outage, it informs your design. You should predict the types of failures you can experience so you are mitigating them in your architecture. </p> <p align="justify">There is an emerging concept on hyper-scale Cloud platforms called grey failures. A grey failure is when your VM, workloads or applications are not down but are not getting the resources they need.</p> <p align="justify"><a href="https://www.cs.jhu.edu/~huang/paper/grayfailure-hotos17.pdf" target="_blank">“Grey Failure: The Achilles Heel of Cloud Scale Systems”</a></p> <p align="justify"><strong>Monitoring and Alerting</strong> should be on for any VMs running in IaaS. When you open a ticket in Azure any known issues are surfaced as part of the support request process. This is part of Azure’s automated analytics engine providing support before you input the ticket. </p> <p align="justify"><strong>Backup and DR plans</strong> should be applied to your VMs. Azure allows you to create granular retention policies. When you recover VMs you can restore over the existing VM or create a new one. For DR you can leverage ASR to replicate VMs to another region. ASR is not multi-target, however, so you could not replicate VMs from the enterprise to an Azure region and then replicate them to a different one. 
It would be two distinct steps: first replicate and fail over the VM to Azure, and then set up replication between two regions.</p> <p align="justify"><strong>Maintenance </strong>Microsoft now provides a local endpoint in a region with a simple REST API that provides information on upcoming maintenance events. These can be surfaced within the VM so you can trigger soft shutdowns of your virtual instance. For example, if you have a single VM (outside an availability set) and the host is being patched, the VM can complete a graceful shutdown.</p> <p align="justify">Azure uses VM-preserving technology when they do underlying host maintenance. For updates that do not require a reboot of the host, the VM is frozen for a few seconds while the host files are updated. For most applications this is seamless; however, if it is impactful you can use the REST API to create a reaction.</p> <p align="justify">Microsoft collects all host reboot requirements so that they are applied at once vs. periodically throughout the year to improve platform availability. You are preemptively notified 30 days out for these events. One notification is sent per subscription to the administrator. The customer can add additional recipients.</p> <p align="justify"><strong>Availability Sets</strong>  An Availability Set is a logical grouping of VMs within a datacenter that allows Azure to understand how your application is built to provide for redundancy and availability. Microsoft recommends that two or more VMs are created within an availability set. To have a 99.95% SLA you need to deploy your VMs in an Availability Set. Availability Sets provide you fault isolation for your compute. </p> <p align="justify">An Availability Set with <a href="https://azure.microsoft.com/en-us/services/managed-disks/" target="_blank">Managed Disks</a> is called a Managed Availability Set. With a Managed Availability Set you get fault isolation for compute and storage. 
Essentially it ensures that the managed VM disks are not on the same storage.</p>Paul O'Dohertyhttp://www.blogger.com/profile/05237786392603690096noreply@blogger.com0tag:blogger.com,1999:blog-5889130978261301238.post-40809848008045964852017-09-29T08:50:00.001-07:002017-09-29T08:50:57.922-07:00Microsoft Ignite 2017: Tips & Tricks with Azure Resource Manager with @rjmax<p align="justify">The AzureRM vision is to capture everything you might do or envision in Cloud. This should extend from infrastructure and configuration to governance and security. </p> <p align="justify">Azure is seeing about 200 ARM templates deployed per second. The session will focus on some of the template enhancements, how Microsoft is more closely integrating identity management, and the delivery of new features.</p> <p align="justify">You now have the ability to deploy ARM deployments across subscriptions (service providers pay attention!). You can also deploy across resource groups. The two declaratives within the ARM template that enable this are:</p> <blockquote> <p align="justify">“resourceGroup”</p> </blockquote> <blockquote> <p align="justify">“subscriptionId” </p> </blockquote> <p align="justify">You may be wondering how you share your templates, increase the reliability and support them after a deployment. </p> <p align="justify"><strong>Managed Applications</strong></p> <p align="justify">Managed Applications simplify template sharing. Managed applications can be shared or sold, they are meant to be simple to deploy, they are contained so they cannot be broken, and they can be connected. 
Connected means you define what level of access you need to it after it has been deployed for ongoing management and support.</p> <p align="left">For additional details on Managed applications please see <a title="https://docs.microsoft.com/en-us/azure/azure-resource-manager/managed-application-overview" href="https://docs.microsoft.com/en-us/azure/azure-resource-manager/managed-application-overview">https://docs.microsoft.com/en-us/azure/azure-resource-manager/managed-application-overview</a> . </p> <p align="justify">Managed applications are available in West US and West US Central but will be global by the end of the year. When you define a managed application through the Azure portal you determine if it is locked or unlocked. If it is locked you need to define who is authorized to write to it. </p> <p align="justify"><a href="https://lh3.googleusercontent.com/-0xGKr5u_G20/Wc5rzOqQZoI/AAAAAAAAAzo/lruRSmYSukEPWyw-dJze9Z-kbvlZ_uPYQCHMYCw/s1600-h/image%255B4%255D"><img title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" border="0" alt="image" src="https://lh3.googleusercontent.com/-ASYWorAdZvo/Wc5r0JGNC-I/AAAAAAAAAzs/_i1H4TWL7C4V3ip2RybXo-KTOzeHHvZHQCHMYCw/image_thumb%255B2%255D?imgmax=800" width="393" height="401" /></a></p> <p align="justify">By default Managed Applications are deployed within your subscription. From within the access pane of the Managed Application you can share it to other users and subscriptions. Delivering Managed Applications to the Azure marketplace is in Public Preview at this moment.</p> <p align="justify"><strong>Managed Identity</strong></p> <p align="justify">With Managed Identity you can now create virtual machines with a service principal provided by Azure active directory. This allows the VM to get a token to enable service access to avoid having passwords and credentials in code. 
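On an Azure VM, a managed identity token is fetched from the local instance metadata endpoint rather than stored in code. The sketch below only builds the documented request without sending it, since the endpoint exists only inside a VM; the resource URI is an example:

```python
from urllib.parse import urlencode
import urllib.request

# The instance-metadata endpoint; it only resolves inside an Azure VM.
IMDS = "http://169.254.169.254/metadata/identity/oauth2/token"

def token_request(resource="https://management.azure.com/"):
    """Compose the request a VM with a managed identity uses to obtain an
    access token -- no credentials ever appear in code."""
    query = urlencode({"api-version": "2018-02-01", "resource": resource})
    return urllib.request.Request(f"{IMDS}?{query}",
                                  headers={"Metadata": "true"})

req = token_request()
# Inside an Azure VM, urllib.request.urlopen(req) returns a JSON body
# containing an "access_token" field.
```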
To learn more have a look here</p> <p align="justify"><a title="https://docs.microsoft.com/en-us/azure/active-directory/msi-overview" href="https://docs.microsoft.com/en-us/azure/active-directory/msi-overview">https://docs.microsoft.com/en-us/azure/active-directory/msi-overview</a> </p> <p align="justify"><strong>ARM Templates & Event Grid</strong></p> <p align="justify">You can use Event Grid to collect all ARM events and requests, which can be pushed to an endpoint or listener. To learn more on Event Grid read here</p> <p align="justify"><a title="https://buildazure.com/2017/08/24/what-is-azure-event-grid/" href="https://buildazure.com/2017/08/24/what-is-azure-event-grid/">https://buildazure.com/2017/08/24/what-is-azure-event-grid/</a></p> <p align="justify"><strong>Resource Policies</strong></p> <p align="justify">You can use Resource Policies to do Location Ringfencing. Location Ringfencing allows you to define a policy to ensure your data does not leave a certain location. </p> <p align="justify"><a href="https://lh3.googleusercontent.com/-F0B8beIAkzw/Wc5r3jekN6I/AAAAAAAAAzw/nU8ff95cmTUXu6qkygbIxEPn5Nlv9xupwCHMYCw/s1600-h/image%255B9%255D"><img title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" border="0" alt="image" src="https://lh3.googleusercontent.com/-mdj-muWON1Y/Wc5r4MFACvI/AAAAAAAAAz0/mcHQYjZv-Bs5qda_tUN01gGDxzLhYlpyQCHMYCw/image_thumb%255B5%255D?imgmax=800" width="455" height="275" /></a></p> <p align="justify">You can also restrict which VM classes people can use, for example to prevent your developers from deploying extremely expensive classes of VMs.</p> <p align="justify">Policies can be used to limit the access to all the marketplace images to just a few. 
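A location-ringfencing rule like the one described earlier can be expressed in the Azure Policy JSON shape: deny any resource whose location falls outside an allowed list. A minimal sketch; the region list is an example, and the real built-in "allowed locations" policy takes the list as a parameter:

```python
import json

def allowed_locations_rule(locations):
    """Build a policy rule in the Azure Policy JSON shape: deny any
    resource whose location falls outside the allowed list."""
    return {
        "if": {"not": {"field": "location", "in": locations}},
        "then": {"effect": "deny"},
    }

# Example region list for ringfencing data to Canada.
rule = allowed_locations_rule(["canadacentral", "canadaeast"])
policy_json = json.dumps(rule, indent=2)
```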
You can find many starting point policies on GitHub </p> <p align="justify"><a href="https://github.com/azure/azure-policy-samples">https://github.com/azure/azure-policy-samples</a></p> <p align="justify">Azure Policies are in Preview and additional information can be found here: </p> <p align="justify"><a title="https://azure.microsoft.com/en-us/services/azure-policy/" href="https://azure.microsoft.com/en-us/services/azure-policy/">https://azure.microsoft.com/en-us/services/azure-policy/</a></p>Paul O'Dohertyhttp://www.blogger.com/profile/05237786392603690096noreply@blogger.com0tag:blogger.com,1999:blog-5889130978261301238.post-36737418501882008682017-09-29T07:02:00.001-07:002017-09-29T07:02:25.565-07:00Microsoft MSIgnite 2017:How to get Office 365 to the next level with Brjann Brekkan<p align="justify">It is important that customers are configured with a single Identity or Tenant. You should look at the Identity as the Control Plane or the single source of truth. Azure Active Director “AD” has grow 30% in Year-over-Year growth to 12.8 million customers. In addition there are now 272,000 Apps in Azure AD. Ironically the most used application in Azure AD is Google Apps. Customers are using Azure AD to authenticate Google Services.</p> <p align="justify">Azure AD is included with O365 so there is no additional cost. Identity in O365 consists of three different types of users:</p> <ol> <li> <div align="justify">Cloud Identity: accounts live in O365 only</div> </li> <li> <div align="justify">Synchronized Identity: accounts sync with a local AD Server</div> </li> <li> <div align="justify">Federated Identity: Certificate based authentication based on an on premises deployment of AD Federation Service.</div> </li> </ol> <p align="justify">The Identity can be managed using several methods.</p> <p align="justify"><strong>Password Hash Sync</strong> ensures you have the same password on-premises as in the cloud. 
The con to hash sync is that disabled accounts and user edits are not reflected in the cloud until the sync cycle completes. In hash sync the hashes on-prem are not identical to those in the cloud, but the passwords are the same.</p> <p align="justify"><strong>Pass-through Authentication</strong> You still have the same password but passwords remain on-premises. There is a Pass-through Authentication “PTA” agent that gets installed on your enterprise AD server. The PTA agent handles the queuing of requests from Azure AD and sends the validations back once authenticated.</p> <p align="justify">Seamless Single Sign-On works with both Password Hash Sync and Pass-through Authentication. This is done with no additional requirement onsite. SSO is enabled during the installation of AD Connect. </p> <p align="justify">You do not need more than one Azure AD if you have more than one AD on premises. One Azure AD can support hundreds of unique domain names. You can also mix cloud-only accounts and on-premises synchronized accounts. You can use PowerShell and the Graph API instead of AD Connect to synchronize and manage users and groups, but it is much more difficult. AD Connect is necessary for Hybrid Exchange, however.</p> <p align="justify">There are six general use cases for Azure AD:</p> <ol> <li> <div align="justify">Employee Secure Access to Applications</div> </li> <li> <div align="justify">To leverage Dynamic Groups for automated application deployment. Dynamic groups allow move, join and leave workflow processes</div> </li> <li> <div align="justify">To federate access for Business-to-Business communication and collaboration (included in Azure AD, 1 license enables 5 collaborations)</div> </li> <li> <div align="justify">Advanced threat and Identity protection. This is enabled through conditional access based on device compliance. </div> </li> <li> <div align="justify">To abide by Governance and Compliance industry regulations. 
Access Review is in public preview; it identifies accounts that have not accessed the system for a while and prompts the administrator to review them.</div> </li> <li> <div align="justify">To leverage O365 as an application development platform </div> </li> </ol> <p align="justify">With AD Premium you get AD Connect Health, Dynamic Group memberships, and Multi-Factor Authentication for all objects, which can be applied when needed versus always on. In addition there is a better overall end user experience. </p>Paul O'Dohertyhttp://www.blogger.com/profile/05237786392603690096noreply@blogger.com0tag:blogger.com,1999:blog-5889130978261301238.post-22404445207832559442017-09-28T09:19:00.001-07:002017-09-28T09:19:07.266-07:00Microsoft Ignite 2017: Protect Azure IaaS Deployments using Microsoft Security Center with Sarah Gender & Adwait Joshi<p align="justify">Security is no longer a barrier to cloud adoption. It is, however, a shared responsibility between the cloud provider and the tenant. It is important that the tenant understands this principle so they properly secure their resources.</p> <p align="justify"><a href="https://lh3.googleusercontent.com/-RGJSqQ75DUs/Wc0g8my_5LI/AAAAAAAAAyk/KHIMKe8L844PZMBC2NHUxZVxkCqBmeXcACHMYCw/s1600-h/image%255B5%255D"><img title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" border="0" alt="image" src="https://lh3.googleusercontent.com/-F3ISQLIYRfo/Wc0g9B8OPWI/AAAAAAAAAyo/jn-rpOZsoU8PIE2e4rtnfPineFP_Wvr9wCHMYCw/image_thumb%255B3%255D?imgmax=800" width="439" height="298" /></a></p> <p align="justify">Securing IaaS is not just about securing VMs but also the networking and services like storage. It is also about securing multiple clouds, as many customers have a multi-cloud strategy. While things like malware protection still need to be applied in the cloud, how they are applied differs. 
Key challenges specific to cloud are:</p> <ul> <li>Visibility and Control</li> <li>Management Complexity (a mix of IaaS, PaaS and SaaS components)</li> <li>Rapidly Evolving Threats (you need a solution optimized for cloud as things are more dynamic)</li> </ul> <p align="justify">Microsoft ensures that Azure is built on a secure foundation by enforcing physical, infrastructure and operational security. Microsoft provides the controls, but the customer or tenant is responsible for Identity & Access, Information Protection, Threat Protection and Security Management. </p> <p>10 Ways Azure Security Center helps protect IaaS deployments </p> <p><strong>1 Monitor security state of cloud resources</strong></p> <ul> <li>Security Center automatically discovers and monitors Azure resources</li> <li>You can secure enterprise and 3rd party clouds like AWS from Security Center </li> </ul> <p align="justify">Security Center is built into the Azure portal so no additional access is required. If you select the Data Collection policy you can automatically push the monitoring agent. This agent is the same as the Operations Management Agent. When you set up Data Collection you can set the level of logging required. </p> <p align="justify"><a href="https://lh3.googleusercontent.com/-odNi4Q0prPw/Wc0g9V-RrZI/AAAAAAAAAys/lLypbfKKhEkcShfrzIFTmH7c-DZ5FecXQCHMYCw/s1600-h/image%255B10%255D"><img title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" border="0" alt="image" src="https://lh3.googleusercontent.com/-2ZSf8hEq95A/Wc0g9wFsdbI/AAAAAAAAAyw/AVyZP6O0k08IutFQ2_IGsWibeCerT16dwCHMYCw/image_thumb%255B6%255D?imgmax=800" width="407" height="237" /></a></p> <p align="justify">Security Center comes with a policy engine that allows you to tune policies per subscription. For example you can define one policy posture for production and another for dev and test. 
</p> <p><strong>2 Ensure VMs are configured in a certain way </strong></p> <ul> <li>You can see system update status, antimalware protection (Azure has one that is built in for free), and OS and web configuration assessment (e.g. IIS assessment against best practice configurations)</li> <li>It will allow you to fix vulnerabilities quickly </li> </ul> <p><strong>3 Encrypt disks and data</strong> </p> <p><strong>4 Control Network Traffic </strong></p> <p><strong>5 Use NSGs and additional firewalls </strong></p> <p><strong>6 Collect Security Data </strong></p> <ul> <li>Analyze and search security logs from many sources </li> <li>Security Center allows you to integrate 3rd party products like Qualys scans for assessment of other applications and compliance issues. Security Center monitors IaaS VMs and some PaaS components like web apps. </li> </ul> <p align="justify">Security Center provides a new dashboard for failed logon attempts on your VMs. The most common attack on cloud VMs is the RDP brute-force attack. To avoid this you can use Just-in-Time access so that port 3389 is only open for a window of time from certain IPs. These requests are all audited and logged.</p> <p align="justify">Another attack vector is malware. Application whitelisting allows you to track for good behaviour vs. blocking the bad. Unfortunately it has historically been arduous to apply. </p> <p><strong>7 Block malware and unwanted applications</strong></p> <p align="justify">Security Center uses an adaptive algorithm to understand what applications are running and develop a set of whitelists. Once you are happy with the lists you can move to enforcement.</p> <p><strong>8 Use advanced analytics to detect threats quickly.</strong></p> <p align="justify">Security Center looks at VMs and network activity and leverages Microsoft’s global threat intelligence to detect threats quickly. 
This leverages machine learning to understand statistically what normal activity looks like in order to identify abnormal behavior.</p> <p><strong>9 Quickly assess the scope and impact of the attack</strong></p> <p align="justify">This is a new feature that graphically displays all the related components that were involved in an attack.</p> <p align="justify"><a href="https://lh3.googleusercontent.com/-_GRLFhHnEBU/Wc0g-UPN3WI/AAAAAAAAAy0/3vCX_8CdCSUSIveYbscQ9pXnpnjit-juACHMYCw/s1600-h/image%255B16%255D"><img title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" border="0" alt="image" src="https://lh3.googleusercontent.com/-xMUz_bVT6vY/Wc0g-oIOptI/AAAAAAAAAy4/pvaELHrYPfgDEQqv_ZiUo_rt9jgPMtuZgCHMYCw/image_thumb%255B10%255D?imgmax=800" width="404" height="237" /></a></p> <p><strong>10 Automate threat response </strong></p> <p align="justify">Azure uses Logic Apps to automate responses, which allows you to trigger workflows from an alert to enable conditional actions. In addition there is a new map that identifies known malicious IPs by region with related threat intelligence.</p> <p align="justify">The basic policy for Security Center is free, so there is no reason not to have more visibility into what is vulnerable in your environment.</p> <p align="justify">For more information check out</p> <p align="justify"><a href="http://azure.microsoft.com/en-us/services/security-center">http://azure.microsoft.com/en-us/services/security-center</a></p>Paul O'Dohertyhttp://www.blogger.com/profile/05237786392603690096noreply@blogger.com0tag:blogger.com,1999:blog-5889130978261301238.post-79277626344310584812017-09-27T08:37:00.001-07:002017-09-27T12:13:29.310-07:00Microsoft Ignite 2017: New advancements in Intune Management<div align="justify">
Microsoft 365 is bringing the best of Microsoft together. One of the key things <a href="https://en.wikipedia.org/wiki/Satya_Nadella" target="_blank">Satya Nadella</a> did when he took over was to put customers front and center. Microsoft has invested in partner and customer programs to help accelerate the adoption of Intune. </div>
<div align="justify">
There are three versions of Intune: </div>
<ol>
<li> <div align="justify">
Intune for Enterprises</div>
</li>
<li> <div align="justify">
Intune for Education </div>
</li>
<li> <div align="justify">
Intune for SMBs (In Public Preview)</div>
</li>
</ol>
<div align="justify">
One of the biggest innovations Microsoft has delivered is moving Intune to Azure. There is a new portal for Intune available within Azure that provides an overview of device compliance. </div>
<div align="justify">
<a href="https://lh3.googleusercontent.com/-9A7RDAyWrIU/WcvFmKxkYbI/AAAAAAAAAyA/FagnX8zsfGwl0XErEdhT4XmPD-mZBCfNQCHMYCw/s1600-h/image%255B4%255D"><img alt="image" border="0" height="232" src="https://lh3.googleusercontent.com/-78bff1yiUmI/WcvFmlvp8II/AAAAAAAAAyE/7EK0E1efzdUz0i0Z56If676HhrGa6kvFgCHMYCw/image_thumb%255B2%255D?imgmax=800" style="background-image: none; border: 0px currentcolor; display: inline;" title="image" width="406" /></a></div>
<div align="justify">
To set up Intune the first thing you do is define a device profile. Microsoft supports a range of platforms such as Android, iOS, macOS and Windows. Once you have a device profile there are dozens of configurations you can apply. <br />
<br /></div>
<div align="justify">
Once you define the profile you assign it to Azure AD Groups. You can either include or exclude users. So you can create a baseline for all users and exclude your executive group to provide them an elevated set of features. <br />
<br /></div>
<div align="justify">
As it lives in the Azure Portal you can click on Azure Active Directory and see the same set of policies. Within the policy you can set access controls that are conditional. For example “you get corporate email only if you are compliant and patched”. Intune checks the state of the device and compliance and then grants access. The compliance overview portal is available in Intune from within Azure.</div>
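<div align="justify">
The conditional access rule above (“you get corporate email only if you are compliant and patched”) boils down to a simple policy evaluation. The sketch below is a hypothetical simplification in Python; the function and field names are invented for illustration and are not Intune’s actual data model or API:</div>

```python
# Hypothetical sketch of a conditional access decision. The "device" dict
# and field names are assumptions for illustration, not Intune's schema.
def grant_email_access(device):
    """Grant corporate email only if the device is compliant and patched."""
    checks = {
        "compliant": device.get("compliant", False),
        "patched": device.get("patched", False),
    }
    if all(checks.values()):
        return "granted"
    # Report which conditional access checks failed
    failed = [name for name, ok in checks.items() if not ok]
    return "blocked: " + ", ".join(failed)

print(grant_email_access({"compliant": True, "patched": True}))   # granted
print(grant_email_access({"compliant": True, "patched": False}))  # blocked: patched
```

<div align="justify">
In practice Intune evaluates many more signals (platform, jailbreak status, threat level), but the grant-or-block decision follows this same shape.</div>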
<div align="justify">
<a href="https://lh3.googleusercontent.com/-pMjlyViVrHg/WcvFm_eBvAI/AAAAAAAAAyI/6xBx4UglOOot3sYaf_c9TpkxBfC_QFt3QCHMYCw/s1600-h/compliance%255B4%255D"><img alt="compliance" border="0" height="344" src="https://lh3.googleusercontent.com/-W83F0PpjYVA/WcvFnUeKTcI/AAAAAAAAAyM/dpK6v2sBJJICuh0bPlJA2B07MZqMM0KjQCHMYCw/compliance_thumb%255B2%255D?imgmax=800" style="background-image: none; border: 0px currentcolor; display: block; float: none; margin-left: auto; margin-right: auto;" title="compliance" width="379" /></a></div>
<div align="justify">
Microsoft has dramatically simplified the ability to add apps from within Intune’s portal. You can access and browse the iOS App Store to add applications within the interface. In addition to granting access to apps you can apply App protection policies. For example you can enforce that the user is leveraging a minimum app version. You can block or warn if the user is in violation of this policy. </div>
<div align="justify">
The demo shows an enrolled iPad attempting to use a down-level version of Word that displays a warning when the user launches it. You can provide conditional access which allows a grace period for remediating certain types of non-compliant states. <br />
<br /></div>
<div align="justify">
Many top 500 companies leverage Jamf today (<a href="https://www.jamf.com/" title="https://www.jamf.com">https://www.jamf.com</a>) for Apple management. Jamf is the standard for Apple mobile device management. Whether you're a small business, school or growing enterprise environment, Jamf can meet you where you're at and help you scale. <br />
<br /></div>
<div align="justify">
Intune can now be used in conjunction with Jamf. With this partnership, Macs enroll in Jamf Pro, and Jamf sends the macOS device inventory to Intune to determine compliance. If Intune determines the device is compliant, access is allowed. If not, Intune and Jamf present some options to the user to enable them to resolve the issues and re-check compliance.</div>
<div align="justify">
<a href="https://lh3.googleusercontent.com/-cN4vighZd60/WcvFnyYkMpI/AAAAAAAAAyQ/bMi3G6HoDnYc2uKaNLOa0JdA72cwP9-PQCHMYCw/s1600-h/image%255B9%255D"><img alt="image" border="0" height="250" src="https://lh3.googleusercontent.com/-7pjdRNg79SM/WcvFoSHE8MI/AAAAAAAAAyU/CzTG86hfuxsd6F6JmouvRWl09m_wrEmxwCHMYCw/image_thumb%255B5%255D?imgmax=800" style="background-image: none; border: 0px currentcolor; display: inline;" title="image" width="448" /></a><br />
Another feature that has been built into conditional access is the ability to restrict access to services based on the location of the user. Microsoft has also enhanced Mobile Threat Protection and extended Geo fencing (in tech preview). <br />
<br /></div>
<div align="justify">
For Geo fencing you define known Geo locations. If the user roams outside of those locations the password gets locked. Similarly for Mobile Threat Protection, you define trusted locations and create rules to determine what happens if access is requested from a trusted or non-trusted location.</div>
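<div align="justify">
Conceptually, a geo-fence like this is a distance check against a set of known locations. The Python sketch below is only an illustration of the “lock the password outside trusted locations” rule; the trusted coordinates, the 50 km radius, and the function names are all invented, not how Intune implements it:</div>

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(a))  # 6371 km = mean Earth radius

# Hypothetical trusted geo-locations (e.g. Redmond and New York offices).
TRUSTED = [(47.64, -122.13), (40.71, -74.01)]

def password_locked(lat, lon, radius_km=50):
    """Lock the password when the sign-in falls outside every trusted fence."""
    return all(haversine_km(lat, lon, t_lat, t_lon) > radius_km
               for t_lat, t_lon in TRUSTED)
```

<div align="justify">
A sign-in from inside either fence leaves the account usable; one from, say, London (51.5, -0.12) is outside both fences and triggers the lock.</div>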
Paul O'Dohertyhttp://www.blogger.com/profile/05237786392603690096noreply@blogger.com0tag:blogger.com,1999:blog-5889130978261301238.post-50538154829619541672017-09-27T07:24:00.001-07:002017-09-27T07:24:24.823-07:00Microsoft Ignite 2017: Azure Security and Management for hybrid environments Jeremy Winter Director, Security and Management<p align="justify">Jeremy Winter is going to do a deep dive on Azure’s Management and Security announcements. Everyone is at a different stage when it comes to cloud. This digital transformation is having a pretty big ground-level impact. Software is everywhere and it is changing the way we think about our business. </p> <p align="justify">Digital transformation requires alignment across teams. This includes Developers, Operational teams and the notion of a Custodian who looks after all the components. It requires cross-team collaboration, automated execution, proactive governance and cost transparency. This is not a perfect science; it is a journey that Microsoft is learning, tweaking and adjusting as it goes. Operations is going to become more software-based as we start to integrate scripting and programmatic languages.</p> <p align="justify">Microsoft’s bet is that management and security should be part of the platform. Microsoft doesn't think you should have to take enterprise tooling and bring it with you to the cloud. You should expect this management and security to be a native capability. 
Microsoft thinks about this in terms of a full stack of cloud capabilities:</p> <p align="justify"><a href="https://lh3.googleusercontent.com/-YHoFYa4Fbw8/Wcu0k7JxmhI/AAAAAAAAAxs/ZBPrJD0ucFI91N3sJGKXgusvupcSkc6ewCHMYCw/s1600-h/full_cld_cap5"><img title="full_cld_cap" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" border="0" alt="full_cld_cap" src="https://lh3.googleusercontent.com/-sD5PiYLoRFs/Wcu0mEOVzfI/AAAAAAAAAxw/2q16x8iH8Ak4pfGkzU4XyysdFY3MPs2fQCHMYCw/full_cld_cap_thumb2?imgmax=800" width="423" height="317" /></a></p> <p align="justify">Security is integral, Protect is really about Site Recovery, Monitoring is about visibility into what is going on, and Configuration is about automating what you are doing. Microsoft believes it has a good baseline for all these components in Azure and is now focused on Governance.</p> <p align="justify">Many frameworks try to put a layer of abstraction between the developer and the platform. Microsoft’s strategy is different. They want to allow the activity to go direct to the platform but to protect it via policy. This is a different approach and is something that Microsoft is piloting with Microsoft IT. Intercontinental Hotels is used as a reference case for digital transformation. </p> <br /><iframe height="270" allowfullscreen="allowfullscreen" src="https://www.youtube.com/embed/c9ahRPLZmrw" frameborder="0" width="480"></iframe> <br /> <p>Strategies for successful management in the cloud:</p> <p><strong>1. Where to start “Secure and Well-managed”</strong></p> <p align="justify">You need to secure your cloud resources through Azure Security Center, protect your data via backup and replication, and monitor your cloud health. Ignite announced PowerShell in Azure Cloud Shell as well as Monitoring and Management built right into the Azure portal. </p> <p align="justify">Jeremy shows Azure monitoring of a Linux VM. The Linux VM has management natively in the Azure Panel. 
In the Panel you can see the inventory of what is inside the VM. You no longer have to remote in; it is done programmatically. </p> <p align="justify">In the demo the installed version of Java is shown on the VM, and we can now look at Change tracking to determine why the version of Java appears to have changed. This is important from an audit perspective as you have to be able to identify changes. You can see the complete activity log of the guest. </p> <p align="justify">Also new is update management. You can see all the missing updates on individual or multiple computers. You can go ahead and schedule patching by defining your own maintenance window. In the future Microsoft will add pre- and post-activities. You also have the ability to use Azure Management for non-Azure based VMs. </p> <p align="justify">For disaster recovery you are able to replicate from one region to another. You could already use this from the enterprise to Azure, but now also region-to-region. For backup it is now exposed as a property of the virtual machine which you simply enable and assign a backup policy to.</p> <p>With the new Azure Cloud Shell you have the ability to run PowerShell right inline. There are 3449 commands that have been ported over so far. </p> <p><strong>2.  Governance for the cloud </strong></p> <p align="justify"><u><a href="http://azure.com/policy">azure.com/policy</a></u> is in tech preview. Jeremy switches to the demo for Azure policies. You now have the ability to create a policy and see non-compliant and compliant VMs. The example shows a sample policy to ensure that all VMs do not have public IPs. With the policy you are able to quickly audit the environment. You can also enforce these policies. Policies are based on JSON so you can edit them directly. Other use cases include things like auditing for unencrypted VMs.</p> <p><strong>3.  
Security Management and threat detection </strong></p> <p align="justify">Microsoft is providing unified visibility, adaptive threat detection and intelligent response. Azure Security Center is fully hybrid; you can apply it to enterprise and Azure workloads. Jeremy switches to Azure Security Center, which provides an overview of your entire environment’s security posture. </p> <p align="justify">Using the portal you can scan and get Microsoft’s security recommendations. Within the portal you can use ‘Just in Time Access’. This allows the developer to request that a port be opened, but it is enabled only for a window of time. Security Center Analysis allows you to whitelist ports and audit what has changed. </p> <p align="justify">Microsoft can track and group alerts through Security Center. Now you have a new option, continuous investigation, to see visually what security incident has been logged. It takes the tracking and pushes it through the analytics engine. It allows you to visually follow the investigation thread to determine the root cause of the security exploit. Azure Log Analytics is the engine that drives these tool sets.</p> <p align="justify">Azure Log Analytics now has an Advanced Analytics component that provides an advanced query language that you can leverage across the entire environment. It will be fully deployed across Azure by December.</p> <p><strong>4.  Get integrated analytics and monitoring </strong></p> <p align="justify">For this you need to start from the app using Application Insights, then bring in network visibility as well as the infrastructure perspective. There is a new version of Application Insights. Jeremy shows Azure Monitor, which was launched last year at Ignite. Azure Monitor is the place for end-to-end monitoring of your Azure datacenters. </p> <p align="justify">The demo shows the ability to drill in on VM performance and leverage machine learning to pinpoint the deviations. 
The demo shows slow response time for the ‘contoso demo site’. It shows that everything is working, but slowly. You can see the dependency view of every process on the machine and everything it is talking to. Quickly you are able to see that the problem has nothing to do with the website but is actually a problem downstream. </p> <p align="justify">Microsoft is able to do this because they have a team of data scientists baking analytics directly into the Azure platform through the Microsoft Smarts Team. </p> <p><strong>5. Migrate Workloads to the Cloud. </strong></p> <p align="justify">Microsoft announced a new capability for Azure Migration. You can now discover applications on your virtual machines and group them. Directly within the Azure portal you can determine which apps are ready for migration to Azure. In addition it will recommend the types of migration tools that you can use to complete the migrations. This is in limited preview. </p>Paul O'Dohertyhttp://www.blogger.com/profile/05237786392603690096noreply@blogger.com0tag:blogger.com,1999:blog-5889130978261301238.post-8482009009024837972017-09-27T06:43:00.001-07:002017-09-28T11:51:10.749-07:00Microsoft Ignite 2017: Modernizing ETL with Azure Data Lake with @MikeDoesBigData @microsoft.com<div align="justify">
Mike has done extensive work on the U-SQL language and framework. The session will focus on modern Data Warehouse architectures as well as introducing Azure Data Lake. </div>
<div align="justify">
A traditional Data Warehouse has Data sources, Extract-Transform-Load “ETL”, Data warehouses and BI and analytics as foundational components. </div>
<div align="justify">
<a href="https://lh3.googleusercontent.com/-vhmx6yBxTz0/Wcuq8Yw4zQI/AAAAAAAAAxI/n-hNmxvWZUAAn_9AnE4tyTRxbmTPN_0TACHMYCw/s1600-h/image%255B4%255D"><img alt="image" border="0" height="422" src="https://lh3.googleusercontent.com/--pD4wFgWQ4E/Wcuq9I4pNTI/AAAAAAAAAxM/W5p3VPhmkWQVj6K_Cw_ex1_iYg3h-6R-wCHMYCw/image_thumb%255B2%255D?imgmax=800" style="background-image: none; border: 0px currentcolor; display: block; float: none; margin-left: auto; margin-right: auto;" title="image" width="234" /></a></div>
<div align="justify">
Today many of the data sources are increasing in data volume and the current solutions do not scale. In addition you are getting data that is non-relational from things like devices, web sensors and social channels.</div>
<div align="justify">
A Data Lake allows you to store data as-is; it is essentially a very large, scalable file system. From there you can do analysis using Hadoop, Spark and R. A Data Lake is really designed for the questions you don’t know, while a Data Warehouse is designed for the questions you do.</div>
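<div align="justify">
That distinction can be sketched in code: a warehouse imposes a schema when data is loaded, while a lake keeps raw records as-is and applies a schema only when a question is asked ("schema-on-read"). A minimal, hypothetical Python illustration; the records and field names are invented:</div>

```python
import json

# A toy "lake": raw events stored exactly as they arrived, no upfront schema.
raw_events = [
    '{"device": "sensor-1", "temp": 21.5}',
    '{"device": "sensor-2", "temp": 19.0, "humidity": 40}',  # shape varies per record
]

# Schema-on-read: the schema is chosen at query time, for this question only.
def avg_temp(events):
    """Answer one question ("average temperature") over the raw records."""
    temps = [json.loads(e)["temp"] for e in events]
    return sum(temps) / len(temps)

print(avg_temp(raw_events))  # 20.25
```

<div align="justify">
A later, different question (say, average humidity) would apply a different schema to the same raw records, which is exactly what a warehouse’s load-time schema cannot easily accommodate.</div>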
<div align="justify">
Azure Data Lake consists of a highly scalable storage area called the ADL Store. It is exposed through an HDFS-compatible REST API, which allows analytic solutions to sit on top and operate at scale. </div>
<div align="justify">
Cloudera and Hortonworks are available from the Azure Marketplace. Microsoft’s version of Hadoop is HDInsight. With HDInsight you pay for the cluster whether you use it or not. </div>
<div align="justify">
Data Lake Analytics is a batch-workload analytics engine designed to do analytics at very large scale. Azure ADL Analytics allows you to pay only for the resources your jobs are actually running, versus spinning up the entire cluster with HDInsight. </div>
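<div align="justify">
The pricing difference can be made concrete with a toy calculation. All rates and hours below are hypothetical, invented purely for illustration; real Azure prices vary by region, tier and job size:</div>

```python
# Toy cost model: an always-on cluster bills every hour it exists; a per-job
# engine bills only the hours jobs actually run. Rates are hypothetical.
CLUSTER_RATE = 4.0   # $/hour while the cluster exists (assumed)
JOB_RATE = 6.0       # $/hour of actual job execution (assumed)

hours_in_month = 730
job_hours = 50       # the workload is busy only ~7% of the month

always_on_cost = CLUSTER_RATE * hours_in_month   # pay whether used or not
per_job_cost = JOB_RATE * job_hours              # pay only while jobs run

print(always_on_cost, per_job_cost)  # 2920.0 300.0
```

<div align="justify">
Even at a higher per-hour rate, the per-job model wins whenever the cluster would sit mostly idle; a cluster that is busy most of the month tips the comparison the other way.</div>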
<div align="justify">
You need to understand the Big Data pipeline and data flow in Azure. You go from ingestion to the Data Lake Store. From there you move it into the visualization layer. In Azure you can move data through the Azure Data Factory. You can also ingest through the Azure Event Hub. </div>
<div align="justify">
<a href="https://lh3.googleusercontent.com/-FeOVxhHCkLI/Wcuq-HIL-cI/AAAAAAAAAxQ/WTZnGvdMQ8UTdn6pxIcWee6a-D6lzkE6wCHMYCw/s1600-h/image%255B9%255D"><img alt="image" border="0" height="227" src="https://lh3.googleusercontent.com/-gSd-TDHkNEA/Wcuq_LL_wKI/AAAAAAAAAxU/IFFlMeGdMBIQ8q1_GJPZ5SND6GKP2iGtQCHMYCw/image_thumb%255B5%255D?imgmax=800" style="background-image: none; border: 0px currentcolor; display: inline;" title="image" width="436" /></a></div>
<div align="justify">
Azure Data Factory is designed to move data from a variety of data stores to Azure Data Lake. For example you can take data out of AWS Redshift and move it to Azure Data Lake Store. Additional information can be found here:</div>
<div align="justify">
<a href="https://docs.microsoft.com/en-us/azure/data-factory">https://docs.microsoft.com/en-us/azure/data-factory</a></div>
<div align="justify">
U-SQL is the language framework that provides the scale-out capabilities. It scales out your custom code in .NET, Python and R over your Data Lake. It is called U because it unifies SQL seamlessly across structured and unstructured data. </div>
<div align="justify">
Microsoft suggests that you query the data where it lives. U-SQL allows you to query and read/write data not just from your Azure Data Lake but also from storage blobs, Azure SQL in VMs, Azure SQL and Azure SQL Data Warehouse.</div>
<div align="justify">
<a href="https://lh3.googleusercontent.com/-jvJoBjcMle8/Wcuq_u109XI/AAAAAAAAAxY/ws8Np09FCNMb-QqwlUnMb0IaOHkuCGCLQCHMYCw/s1600-h/image%255B14%255D"><img alt="image" border="0" height="418" src="https://lh3.googleusercontent.com/-BkQ-Cr_MXbc/WcurBMBVhCI/AAAAAAAAAxc/1vbcWkeULwUp-SvjbPMy2OPfKDYkbhr9QCHMYCw/image_thumb%255B8%255D?imgmax=800" style="background-image: none; border: 0px currentcolor; display: inline;" title="image" width="348" /></a></div>
<div align="justify">
There are a few built-in cognitive functions that are available to you. You can install this code in your Azure Data Lake to add cognitive capabilities to your queries.</div>
Paul O'Dohertyhttp://www.blogger.com/profile/05237786392603690096noreply@blogger.com0tag:blogger.com,1999:blog-5889130978261301238.post-85750022872286624402017-09-26T11:10:00.001-07:002017-09-26T11:10:29.326-07:00Microsoft Ignite 2017: Windows Server Storage Spaces and Azure File Sync<p>Microsoft’s strategy is about addressing storage costs and management complexity through the use of:</p> <ol> <li>Building world-class infrastructure on commodity hardware</li> <li>Finding smarter ways to store data</li> <li>Using storage active-tiering </li> <li>Offloading the complexity to Microsoft</li> </ol> <p align="justify">How does Storage Spaces work? You attach the storage directly to your nodes and then connect the nodes together in a cluster. You then create a storage volume across the cluster.</p> <p align="justify">When you create the volume you select the resiliency. You have the following three options:</p> <ol> <li> <div align="justify">Mirror</div> </li> <ol> <li> <div align="justify">Fast but uses a lot of storage</div> </li> </ol> <li> <div align="justify">Parity </div> </li> <ol> <li> <div align="justify">Slower but uses less storage</div> </li> </ol> <li> <div align="justify">Mirror-accelerated parity</div> </li> <ol> <li> <div align="justify">This allows you to create volumes that use both mirroring and parity. This is fast but conserves space as well</div> </li> </ol> </ol> <p align="justify">Storage Spaces Direct is a great option for running File Servers as VMs. This allows you to isolate file servers by using VMs running on a Storage Spaces Direct volume. In Windows 2016 you also have the ability to introduce Storage QoS on Storage Spaces to deal with noisy neighbors. It allows you to predefine QoS storage policies to prioritize storage performance for some workloads.</p> <p align="justify">You also have the ability to dedup. Dedup works by taking unique chunks into a dedup chunk store and replacing the original block with a reference to the unique chunk. 
Ideal use cases for Microsoft Dedup are general-purpose file servers, VDI and backup. </p> <p align="justify">You may apply Dedup to a SQL Server and Hyper-V but it depends on how much demand there is on the system. High random I/O workloads are not ideal for Dedup. Dedup is only supported on NTFS on Windows Server 2016. It will support ReFS on <a href="https://www.neowin.net/news/microsoft-launches-windows-server-version-1709" target="_blank">Windows Server 1709</a> which is the next release.</p> <p align="justify">Microsoft has introduced Azure File Sync. With Azure File Sync you are able to centralize your File Services. You can use your on-premises File services to cache files for faster local performance. It is a true file share, so its services use standard SMB and NFS. </p> <p align="justify">Shifting your focus from on-premises file services allows you to take advantage of cloud-based backup and DR. Azure File Sync has a fast DR recovery option to get you back up and running in minutes.</p> <p align="justify">Azure File Sync requires Windows 2012 or Windows 2016 and enables you to install a service that tracks file usage. It also pairs the server with an Azure File Share. Files that are not touched over time are migrated to Azure File services. </p> <p align="justify">To recover you simply deploy a clean server and reinstall the service. The namespace is recovered right away so the service is available quickly. When the users request the files a priority restore is performed from Azure-based storage. Azure File Sync allows your branch file server to have a fixed storage profile as older files move to the cloud.</p> <p align="justify">With this technology you can introduce follow-the-sun scenarios where work on one file server is synced through an Azure File Share to a different region so it is available.</p> <p align="justify">On the roadmap is cloud-to-cloud sync which allows the Azure File Shares to sync through the Azure backbone to different regions. 
When you have cloud-to-cloud sync, the moment the branch server cannot connect to its primary Azure File Share it will go to the next closest one. </p> <p align="justify">Azure File Sync is now publicly available in five Azure Regions.</p>Paul O'Dohertyhttp://www.blogger.com/profile/05237786392603690096noreply@blogger.com0tag:blogger.com,1999:blog-5889130978261301238.post-86817331502814296272017-09-25T16:42:00.001-07:002017-09-25T16:43:26.975-07:00Microsoft Ignite 2017: Getting Started with IoT; a hands on Primer<p align="justify">As part of the session we are supplied an MXChip IoT Developer’s Kit. This provides a physical IoT device enabling us to mock up IoT scenarios. The device we are leveraging is made by <a href="https://www.arduino.cc/" target="_blank">Arduino</a>. The device comes with a myriad of sensors and interfaces including temperature and a small display screen. The device retails for approximately $30–40 USD and is a great way to get started learning IoT. </p> <p align="justify"><a href="https://lh3.googleusercontent.com/-ypU0EV1jCkU/Wcl9h_D_65I/AAAAAAAAAwQ/QrC-WqIIEC4toDg0xp81f438viDkvPaMACHMYCw/s1600-h/kit5"><img title="kit" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="kit" src="https://lh3.googleusercontent.com/-CBa1oCgJP3s/Wcl9i8ktZqI/AAAAAAAAAwU/BNiwFHzF9FkCclXpzPQeEaZ7ocwuWTbTwCHMYCw/kit_thumb3?imgmax=800" width="395" height="296" /></a></p> <p align="justify">When considering IoT one needs to not just connect the device but understand what you want to achieve with the data. Understanding what you want to do with the data allows you to define the backend components for storage, analytics or both. For example, if you are ingesting data that you want to analyze you may leverage Azure Stream Analytics. For less complex scenarios you may define an App Service and use functions.
</p> <p align="justify">Microsoft’s preferred development suite is Visual Studio Code. Visual Studio Code includes extensions for Arduino. The process to get up and running is a little involved but there are lots of samples to get you started at <a href="https://aka.ms/azureiotgetstarted">https://aka.ms/azureiotgetstarted</a>.</p> <p align="justify">One of the more innovative ways to use the device was demonstrated in the session. The speaker created “The Microsoft Cognitive Bot” by combining the Arduino physical sensor with “LUIS” in the Azure Cloud. LUIS is the <strong>L</strong>anguage <strong>U</strong>nderstanding <strong>I</strong>ntelligent <strong>S</strong>ervice that is the underlying technology that Microsoft Cortana is built on. The speaker talks to the MXChip sensor asking details about the weather and the conference with LUIS responding.</p> <p align="justify">The session starts with an introduction of what a basic framework of an IoT solution looks like as shown in the picture below. On the left of the frame are the devices. Devices can connect to the Azure IoT hub directly provided the traffic is secure and they can reach the internet. For devices that either do not connect directly to the internet or cannot communicate securely you can introduce a Field Gateway. </p> <p align="justify">Field Gateways can be used for other items as well such as translating data. In cases where you need high responsiveness, you also may analyze the data on a Field Gateway so that there is less latency between the analysis and the response. Often when dealing with IoT there are both hot and cold data streams, hot being defined as data that requires less latency between the analysis and response vs.
cold, which may not have time sensitivity.</p> <p><a href="https://lh3.googleusercontent.com/-F64AhbTSMjo/Wcl9k8FWlgI/AAAAAAAAAwY/S9zgIpn0iJAx0ZoJ2hwIBseh3QIuuEQQACHMYCw/s1600-h/image7"><img title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" border="0" alt="image" src="https://lh3.googleusercontent.com/-4K06yHsz1_A/WcmUdnrM_DI/AAAAAAAAAww/pG-91fbNqKwEb5yfREzt0nWqZ5_3P7RZQCHMYCw/image_thumb5?imgmax=800" width="446" height="206" /></a></p> <p align="justify">An ingestion point requires a D2C endpoint or “Device-to-Cloud”. In addition to D2C you have the other traffic flow which is a C2D endpoint or Cloud-to-Device. C2D traffic tends to be queued. There are two other types of endpoints that you can define: a Method endpoint, which is instantaneous and dependent on the device being connected. The other type is a Twin endpoint. With a Twin endpoint you can inject a property, wait for the device to report its current state and then synchronize it with your desired state. </p> <p align="justify">We then had an opportunity to sit down with IoT experts like Brett from Microsoft. Okay I know he does not look happy in this picture but we had a really great session. We developed an architecture to look at long-term traffic patterns as well as analyze abnormal speeding patterns in real-time for more responsive actions. “Sorry Speeders ; – )”.
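The Twin flow described above, injecting a desired property and converging it with the device's reported state, can be sketched generically. This is a simplified illustration of the pattern, not the Azure IoT Hub SDK; the class and property names are invented.

```python
class DeviceTwin:
    """Toy device twin: the cloud sets desired state, the device reports
    actual state, and a sync pass converges the two."""

    def __init__(self):
        self.desired = {}    # properties injected from the cloud (C2D)
        self.reported = {}   # state last reported by the device (D2C)

    def set_desired(self, key, value):
        # Desired properties queue up even while the device is offline.
        self.desired[key] = value

    def device_sync(self, device_state):
        # Device connects: report current state, then apply desired deltas.
        self.reported = dict(device_state)
        for key, value in self.desired.items():
            if self.reported.get(key) != value:
                device_state[key] = value     # apply the desired change
                self.reported[key] = value    # report convergence back
        return device_state

twin = DeviceTwin()
twin.set_desired("telemetry_interval_secs", 30)   # device may be offline now
device = {"telemetry_interval_secs": 300, "firmware": "1.0"}
twin.device_sync(device)
assert device["telemetry_interval_secs"] == 30    # converged on reconnect
assert twin.reported["firmware"] == "1.0"         # other state still reported
```

Contrast this with a Method endpoint, which fails outright if the device is not connected at call time.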
</p> <p align="justify"><a href="https://lh3.googleusercontent.com/-mkVhzrM8BbM/WcmUepG1sdI/AAAAAAAAAw0/9SXIBpxhq90eI4AGjTqA2YHcDdRL49vEQCHMYCw/s1600-h/image12"><img title="image" style="border: 0px currentcolor; border-image: none; display: inline; background-image: none;" border="0" alt="image" src="https://lh3.googleusercontent.com/-To82nVfy3Xc/WcmUfiUeGhI/AAAAAAAAAw4/lAstRE0C4PsWz-375KLQDCpftRTgoap4QCHMYCw/image_thumb8?imgmax=800" width="467" height="208" /></a></p> <p align="justify">The session turned pretty hands-on and we had to get our devices communicating with an Azure-based IoT hub we created. We then set up communications back to the device to review both ingress and egress traffic. In addition we configured Azure table stores and integrated some cool visualization using Power BI. Okay got to admit I totally geeked out when I first configured a functional IoT service and then started to do some analysis. It is easy to see how IoT will fundamentally change our abilities in the very near future. Great first day at Microsoft Ignite. </p>Paul O'Dohertyhttp://www.blogger.com/profile/05237786392603690096noreply@blogger.com2tag:blogger.com,1999:blog-5889130978261301238.post-15189211775156393242017-09-21T12:47:00.001-07:002017-09-22T06:34:27.992-07:00ZertoCon Toronto’s 2017 Keynote Session<div align="justify">
<a href="https://lh3.googleusercontent.com/-_qrXve3inPY/WcQXJSsytvI/AAAAAAAAAv0/SZAnkK-AXIknUt00lQduqRKgGoaW5nk2wCHMYCw/s1600-h/zertocon%255B3%255D"><img alt="zertocon" border="0" height="208" src="https://lh3.googleusercontent.com/-h3ftOkl3bCA/WcQXJ9k49zI/AAAAAAAAAv4/mxKMuWtfr7oX7rZVo9ZkCoDcYjc8ltwYwCHMYCw/zertocon_thumb%255B1%255D?imgmax=800" style="background-image: none; display: inline;" title="zertocon" width="411" /></a></div>
<div align="justify">
<span style="font-weight: normal;"><strong>Ross DiStefano</strong>, the </span><span style="font-weight: normal;">Eastern Canada Sales Manager at Zerto, introduces </span><strong>Rob Strechay @RealStrech</strong>, the Vice President of Products at Zerto. Rob mentions that Zerto was the first to bring hypervisor replication to market. Zerto has about 700 employees and is based out of Boston and Israel. With approximately 5000 customers, Zerto provides round-the-clock support for replication for tens of thousands of VMs. Almost all of the service providers in Gartner’s Magic Quadrant are leveraging Zerto software for their DR-as-a-Service offerings. <br />
<br /></div>
<div align="justify">
Zerto’s focus is on reducing your Disaster Recovery “DR” and Migration complexity and costs. Zerto would like to be agnostic to where the replication sources and destinations are located. Today at ZertoCon Toronto, the intention is to focus on Zerto’s multi-cloud strategy.<br />
<br /></div>
<div align="justify">
Most customers are looking at multiple options from hyper-scale cloud, managed service providers and enterprise cloud strategies. Zerto’s strategy is to be a unifying partner for this diverse set of partners and services. This usually starts with a transformation project such as adopting a new virtualization strategy, implementing new software or embracing hybrid, private or public cloud strategies.<br />
<br /></div>
<div align="justify">
451 Research’s findings show that C-level conversations about cloud are focused around net new initiatives, moving workloads to cloud, adding capacity to the business or the introduction of new services. Rob then transitions to what’s new with Zerto Virtual Replication. What Zerto has found is that people are looking to stop owning IT infrastructure that is not core to their business and focus on the business data and applications that are. To do this they need managed services and hyper-scale cloud partners.</div>
<div align="justify">
Mission critical workloads are running in Public Cloud today. With Zerto 5.0 the company introduced the Mobile App, One-to-Many replication, replication to Azure and the 30-Day Journal. Zerto 5.5 was announced in August with replication from Azure, advancements in AWS recovery performance and Zerto Analytics & Mobile enhancements.<br />
<br /></div>
<div align="justify">
With 5.5 Zerto goes to Azure and back. A big part of this feature involved working with Microsoft’s APIs to convert VMs back from Azure. This meshes well with Microsoft’s strategy of enabling customers to scale up and down. Coming soon is region-to-region replication within Azure. <br />
<br /></div>
<div align="justify">
With the AWS enhancements, Zerto worked with Amazon to review why their existing default imports were so slow. In doing so they learned how to improve and enhance the replication so that it runs six times faster. AWS import is still there, but now zerto-import or ‘zimport’ is used to support larger volumes while the native AWS import does the OS volume. You can also add a software component to the VM to further enhance the import to achieve that six-fold improvement.<br />
<br /></div>
<div align="justify">
Zerto Analytics and Zerto Mobile provide cross-device, cross-platform information delivered as a SaaS offering. Right now the focus is on monitoring so you can understand how prepared you are for any contingency within or between datacenters. These analytics are near real-time. As Zerto Analytics has been built on cloud technologies, it will follow a continuous release cycle. One new feature is RPO history, which shows how effectively you have been meeting your SLAs. <br />
<br /></div>
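<div align="justify">As a rough illustration of what an RPO-history metric like this computes: the achieved RPO is the gap between consecutive replication checkpoints, compared against the SLA target. The function and numbers below are hypothetical, not Zerto's implementation.</div>

```python
# Toy RPO history: achieved RPO is the gap between consecutive replication
# checkpoints; each gap is compared against the SLA target.
def rpo_history(checkpoint_times, sla_secs):
    gaps = [b - a for a, b in zip(checkpoint_times, checkpoint_times[1:])]
    return [(gap, gap <= sla_secs) for gap in gaps]

# Checkpoint timestamps in seconds since start; SLA target of a 15-second RPO.
history = rpo_history([0, 10, 22, 60], sla_secs=15)
print(history)  # [(10, True), (12, True), (38, False)] -- one SLA miss
```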
<div align="justify">
The next release is targeted for the mid-February timeframe and will deliver the same replication out of Amazon as well as the Azure inter-region replication. They are moving towards regular six-month product releases, each with a targeted set of features. <br />
<br /></div>
<div align="justify">
In H2 2018 and beyond they are looking at KVM support, Container migrations, VMware on AWS and Google Cloud support. Zerto is looking to be the any-to-any migration company as an overall strategy. <br />
<br /></div>
<div align="center">
<a href="https://lh3.googleusercontent.com/-bqz6jLqennY/WcQXMAzFdfI/AAAAAAAAAv8/A9FcF_X1yY8irg6kwgmarIUQkOw-9NTmACHMYCw/s1600-h/dimitri%255B3%255D"><img alt="dimitri" border="0" height="291" src="https://lh3.googleusercontent.com/-x0V2j8WV-WQ/WcQXNRd_hII/AAAAAAAAAwA/RzWRT3fbL4oCAM3XW4ap_r3Y7RK3sDtFgCHMYCw/dimitri_thumb%255B1%255D?imgmax=800" style="background-image: none; display: inline;" title="dimitri" width="337" /></a></div>
<div align="justify">
<strong>Dmitri Li</strong>, Zerto’s System Engineer in Canada, takes the stage and mentions that we now live in a world that operates 24/7. It is important to define DR not as a function of IT but as a way to understand what is critical to the business. For this it is important to focus on a Business Impact Analysis so you can properly tier applications by RTO/RPO.<br />
<br /></div>
<div align="justify">
You also need to ensure your DR strategy is cost effective and does not violate your governance and compliance requirements. When you lay out this plan it needs to be something you can execute and test in a simple way to validate it works. <br />
<br /></div>
<div align="justify">
Another important change besides round the clock operations is that we are protecting against a different set of threats today than we were in the past. Cybercrime is on the rise. With Ransomware, 70% of businesses pay to try and get their data back. The problem with paying is that once you do, you put yourself on the VIP list for repeat attacks. <br />
<br /></div>
<div align="justify">
Zerto was recognized as the <a href="https://www.zerto.com/awards/zerto-wins-ransomware-protection-company-year-2017-storage-awards/" target="_blank">Ransomware security product of the year</a> even though they are not a security product. Zerto addresses this using journaling for point-in-time recovery. You can recover a file, a folder, a VM or your entire site to the moments before the attack.<br />
<br /></div>
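<div align="justify">The journal mechanism behind this point-in-time recovery can be sketched as follows: every write is timestamped, so any state within the journal window can be rebuilt by replaying writes up to the chosen instant. This is a toy model for illustration, not Zerto's actual journal format.</div>

```python
class JournaledVolume:
    """Toy write journal: every write is timestamped, so the volume can be
    rebuilt as of any point in time inside the journal window."""

    def __init__(self):
        self.journal = []   # (timestamp, key, value) in arrival order

    def write(self, ts, key, value):
        self.journal.append((ts, key, value))

    def state_at(self, point_in_time):
        # Replay only the writes at or before the chosen recovery point.
        state = {}
        for ts, key, value in self.journal:
            if ts <= point_in_time:
                state[key] = value
        return state

vol = JournaledVolume()
vol.write(100, "payroll.xls", "clean data")
vol.write(200, "payroll.xls", "ENCRYPTED BY RANSOMWARE")
# Rewind to the moment before the attack landed at t=200:
assert vol.state_at(199) == {"payroll.xls": "clean data"}
```

The same replay works at file, VM, or site granularity; only the set of journaled writes changes.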
<div align="justify">
It is important to also look at Cloud as a potential target for your DR strategy. Building another datacenter can be cost prohibitive so hyper-scale or managed service partners like Long View Systems can be better choices. </div>
Paul O'Dohertyhttp://www.blogger.com/profile/05237786392603690096noreply@blogger.com0tag:blogger.com,1999:blog-5889130978261301238.post-85494577617375341482017-09-08T10:45:00.001-07:002017-09-08T10:45:30.562-07:00VMworld 2017: Futures with Scott Davis EVP of Product Engineering at Embotics<p align="justify">Had a great discussion with Scott Davis, the EVP of Product Engineering and CTO of Embotics, at VMworld 2017. Scott was kind enough to share some of the future-looking innovations they are working hard on at Embotics. </p> <blockquote> <p align="justify"><font size="2">"Clearly the industry is moving from having virtualization or IaaS-centric cross-cloud management platforms to more of an application-centric container and microservices focus. We really see customers getting serious about running containers because of the application portability, microservices synergy, development flexibility and seamless production scale-out that is possible. At the same time they are reducing the operational overhead and interacting more programmatically with the environment. </font></p> <p align="justify"><font size="2">When we look at the work VMware is doing with Pivotal Container Service we believe this is the right direction but we think that the key is really enhanced automation for DevOps. One of the challenges that was pointed out is that while customers are successfully deploying Kubernetes systems for their container development, production operation can be a struggle. Often the environment gets locked in stasis because the IT team is wary of upgrades in a live environment. </font></p> <p align="justify"><font size="2">At Embotics we are all about automation. With our vCommander product we have a lot of intelligence that we can use to build a sophisticated level of iterative automation. So let's take that challenge and let's think about what would be needed to execute a low-risk DevOps migration.
You would probably want to deploy the new Kubernetes version and test it against your existing set of containers. This should be sandboxed to eliminate the risk to production, validated and then the upgrade should be fully automated.” </font></p> </blockquote> <p align="justify"><font size="2">Scott proceeds to demonstrate a beta version of Embotics Cloud Management Platform 'CMP 2.0' automating this exact set of steps across a Kubernetes environment and then rolling the changes forward to update the production environment.</font></p> <blockquote> <p align="justify"><font size="2">“I think fundamentally we can deliver true DevOps, speeding up release cycles, delivering higher quality and providing a better end-user experience. In addition we can automatically pull source code out of platforms like Jenkins, spin up instances, regression test and validate. The test instances that are successful can be vaporized, while preserving the ones that are not so that the issues can be remediated. </font></p> <p align="justify"><font size="2">We are rolling this out in a set of continuous software releases to our product so that as customers are integrating Containers, the Embotics 'CMP' is extended to meet these new use-cases.</font></p> <p align="justify"><font size="2">We realize that as we collect a number of data points spanning user preference, IT-specified compliance rules and vCommander environment knowledge across both enterprise and hyper-scale targets like Azure and AWS, we can assist our customers with intelligent placement suggestions.” </font></p> </blockquote> <p align="justify"><font size="2">Scott switches to a demo in which the recommended cloud target is ranked by the number of stars in the beta interface.
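A star ranking of this sort can be thought of as weighted multi-criteria scoring, with "mandatory" criteria acting as hard filters rather than weights. The sketch below is purely illustrative; Embotics' actual scoring model is not public, and the criteria, names and scores are invented.

```python
# Toy placement ranking: each cloud target is scored with weighted criteria;
# "mandatory" criteria disqualify a target instead of merely lowering it.
def rank_targets(targets, weights, mandatory):
    ranked = []
    for name, scores in targets.items():
        if any(scores[c] == 0 for c in mandatory):   # hard requirement failed
            continue
        total = sum(weights[c] * scores[c] for c in weights)
        ranked.append((name, total))
    return sorted(ranked, key=lambda t: t[1], reverse=True)

targets = {  # criterion scores on a 0-5 scale (0 = requirement not met)
    "azure-east": {"cost": 4, "pci": 5, "latency": 4},
    "aws-west":   {"cost": 5, "pci": 0, "latency": 4},
    "on-prem":    {"cost": 2, "pci": 5, "latency": 5},
}
weights = {"cost": 1.0, "pci": 2.0, "latency": 1.0}  # "slider" positions
print(rank_targets(targets, weights, mandatory={"pci"}))
# [('azure-east', 18.0), ('on-prem', 17.0)] -- aws-west fails the PCI mandate
```

Raising or dropping a criterion's weight, or moving it between relative and mandatory, changes the ranking the same way the slider demo describes.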
</font></p> <blockquote> <p align="justify"><font size="2">“We are building it in a way that allows the customer to adjust the parameters and their relative importance, so if PCI compliance is more important they can adjust a slider in the interface and our ranking system adjusts to the new priority. Things like cost and compliance can be set as either relative or mandatory to tune the intelligent placement according to what the customer views as important."</font></p> </blockquote> <p align="justify">Clearly Embotics is making some innovative moves to incorporate a lot of flexibility in their CMP platform. Looking forward to seeing these releases in the product line with cross-cloud intelligence for containers and placement.</p>Paul O'Dohertyhttp://www.blogger.com/profile/05237786392603690096noreply@blogger.com0tag:blogger.com,1999:blog-5889130978261301238.post-7120851984888302312017-09-08T10:40:00.001-07:002017-09-08T10:40:57.761-07:00VMworld 2017: Interview with Zerto’s Don Wales VP of Global Cloud Sales<p>It is a pleasure to be here with Zerto's Don Wales, the Vice President of Global Cloud Sales, at VMworld 2017. Don, this is a big show for Zerto; can you tell me about some of the announcements you are showcasing here today?</p> <blockquote> <p align="justify">"<font size="2">Sure Paul, we are extremely excited about our Zerto 5.5 release. With this release we have introduced an exciting number of new capabilities. You know we have many customers looking at Public and Hybrid Cloud Strategies and at Zerto we want them to be able to leverage these new destinations but do so in a way that is simple and straightforward.  Our major announcements here are our support for Fail In and Out of Azure, Increased AWS capabilities, Streamlined and Automated Upgradability, significant API enhancements and BI analytics.
All these are designed for a much better end-user experience.</font></p> <p align="justify"><font size="2">One piece of feedback that we are proud of is when customers tell us that Zerto does exactly what they need it to do without a heavy engineering cost for installation. You know Paul when you think about taking DR to a Cloud Platform like Azure it can be very complex. We have made it both simple and bi-directional. You can fail into and out of Azure with all the capabilities that our customers expect from Zerto like live failover, 30-day journal retention and journal-level file restore. </font></p> <p align="justify"><font size="2">We also recognize that Azure is not the only cloud target our customers want to use. We have improved recovery times to Amazon Web Services; in our testing we have seen a 6x improvement in the recovery to AWS. Zerto has also extended our support to AWS regions in Canada, Ohio, London and Mumbai. </font></p> <p align="justify"><font size="2">All this as well as continuing to enhance the capabilities that our traditional Cloud Service Providers need to make their customers' experience simple yet powerful."</font></p> </blockquote> <p>Don, with your increased number of supported Cloud targets and regions how do you ensure your customers have visibility on what's going on?</p> <blockquote> <p align="justify">“<font size="2">Paul we have a SaaS product called Zerto Analytics that allows our customers complete visibility on-premises and in public clouds. It does historical reporting across all Zerto DR and recovery sites.  It is a significant step forward in providing the kind of Business Intelligence that customers need as their environments grow and expand.”</font></p> </blockquote> <p>Don, these innovations are great; looks like Zerto is going to have a great show.
Let me ask you, when Don's not helping customers with their critical problems, what do you do to unwind?</p> <blockquote> <p align="justify"><font size="2">“It’s all about the family Paul. There is nothing I like better than relaxing with the family at home, and being with my wife and twin daughters.  One of our favorite things is to spend time at our beach house where our extended family gathers.  It’s a great chance to relax and get ready for my next adventure.”</font>  </p> </blockquote> <p>Many thanks for the time Don; it is great to see all the innovations released here at VMworld 2017.</p>Paul O'Dohertyhttp://www.blogger.com/profile/05237786392603690096noreply@blogger.com0tag:blogger.com,1999:blog-5889130978261301238.post-84386890056290323952017-09-01T07:59:00.001-07:002017-09-01T07:59:32.670-07:00VMworld 2017: Interview with Crystal Harrison @MrsPivot3<p>I had the pleasure of spending a few moments with Crystal Harrison, Pivot3’s dynamic VP of product strategy <a>“@MrsPivot3</a>”. </p> <p>Crystal, I know Pivot3 from the innovative work you have been doing in security and surveillance. How has the interest been from customers in the move to datacenter and cloud?</p> <blockquote> <p><font size="2">“You know with the next-generation release of Acuity, the industry’s first priority-aware hyperconverged infrastructure “HCI”, the demand has been incredible. While we started with 80% of the product being applied to security use cases we are now seeing a distribution of approx. 60% applied to datacenter and cloud with 40% deriving from our security practice. This is not due to any lack of demand on the security side; it is just that demand on our cloud and datacenter focus has taken off with Acuity.”</font></p> </blockquote> <blockquote> <p><font size="2">We are pushing the boundaries with our HCI offering as we are leveraging NVM Express “NVMe” to capitalize on the low latency and internal parallelism of flash-based storage devices.
All this is wrapped in an intuitive management interface controlled by policy.”</font></p> </blockquote> <p>How do you deal with tiering within the storage components of Acuity?</p> <blockquote> <p><font size="2">Initially the policies manage where the data or instance lives in the system. We have the ability to dynamically reallocate resources in real-time as needed. Say, for example, you have a critical business application that is starting to demand additional resources; we can recapture resources from lower-priority, policy-assigned workloads on the fly. This protects your sensitive workloads and ensures they always have what they need.</font></p> </blockquote> <p>How has the demand been from Cloud Service providers?</p> <blockquote> <p><font size="2">They love it. We have many flexible models including pay-by-the-drip metered and measured cost models. In addition the policy engine gives them the ability to set and charge for a set of performance-based tiers for storage and compute. Iron Mountain is one of our big cloud provider customers. What is really unique is that, because we have lightweight management overhead and patented erasure coding, you can write to just about every terabyte that you buy, which is important value to our customers and service providers.</font></p> </blockquote> <p>Crystal, it really sounds like Pivot3 has built a high-value, innovative solution. Getting away from technology, what does Crystal do to relax when she is not helping customers adopt HCI?</p> <blockquote> <p><font size="2">My unwind is the gym. After a long day a good workout helps me reset for the next.</font></p> </blockquote> <p>Crystal, it has been a pleasure; great to see Pivot3 having a great show here at VMworld 2017.</p>Paul O'Dohertyhttp://www.blogger.com/profile/05237786392603690096noreply@blogger.com0tag:blogger.com,1999:blog-5889130978261301238.post-34610415118572152422017-08-31T11:20:00.001-07:002017-08-31T11:20:46.457-07:00VMworld 2017: Dr.
Peter Weinstock, Game Changer: Life-like rehearsal through medical simulation<p>Dr. Peter Weinstock is an Intensive Care Unit physician and Director of the Pediatric Simulator Program at Boston Children's Hospital/Harvard Medical School.</p> <p>Peter wants to talk about game changers in medicine. Peter looks after critically ill children and is interested in any advance that helps his patients. Peter references a few game changers in medicine such as antibiotics. Antibiotics were discovered in the 1800s. With the discovery we were able to save patients that we could not before. Another game changer was anesthetic, which allows us to deliver surgeries that were not possible before. </p> <p>A game changer moves the bar on the outcomes for all patients. Peter’s innovation is life-like rehearsal through medical simulation. </p> <p>The problem in pediatrics is that medical emergencies are the exception; they do not happen often enough to perfect the treatment and approach to them. Medicine is also an apprentice program in which we are often practicing on the patients that we are treating. </p> <p>In other high-stakes industries simulation and practice are foundational. Take for example the airline industry. In the airline industry when they looked at bad outcomes it was often the lack of communication in a crisis. The medical industry is not immune to this freezing or lack of interaction with the whole team. Airline simulators are used to help the cockpit crew to practice interaction and approach to various emergencies.</p> <p>So how do we take these methods to the medical industry? In Boston they have a full 3D simulator so the team can practice before the actual surgery. Through 3D printing and special effects typically found in the movie industry they are able to recreate surgery simulators with incredible authenticity.</p> <p>Prior to this one of the real surgical practice techniques involved making an incision on an actual pepper and removing the seeds.
By creating a simulation the medical field is able to practice and drill by leveraging techniques common in other high-risk industries. Pictured below is Peter with one of the medical simulators; notice the realism. </p> <p><a href="https://lh3.googleusercontent.com/-bG018x-aIuk/WahTeA1jvoI/AAAAAAAAAvg/A6DIuYvCKfgXMihbZb5rBVtBBPmht06twCHMYCw/s1600-h/simulator%255B4%255D"><img title="simulator" style="margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="simulator" src="https://lh3.googleusercontent.com/-3RbrCw0mGSs/WahTfbtetPI/AAAAAAAAAvk/MGYeWwewlLoMPkea0AP5Uw257ohuKKJSQCHMYCw/simulator_thumb%255B1%255D?imgmax=800" width="287" height="287" /></a></p> <p>We do not stop at simulation; we also look at execution. Adopting the team approach used in Formula One pit crews for quick, efficient, focused effort and communication, we drill the team. This enables our surgical team to reach a level of efficiency not previously possible. This is a game changer in the medical field.</p>Paul O'Dohertyhttp://www.blogger.com/profile/05237786392603690096noreply@blogger.com0tag:blogger.com,1999:blog-5889130978261301238.post-90584585764355938282017-08-31T10:57:00.001-07:002017-08-31T13:09:41.579-07:00VMworld 2017: Raina el Kaliouby of Affectiva and Emotional AIRaina el Kaliouby (@kaliouby), co-founder of Affectiva, takes to the stage.<br />
<a href="https://lh3.googleusercontent.com/-PJVo8dHDEd4/WahOHqc6QFI/AAAAAAAAAvM/JXZqNn7CCLExkItvJGYmOCm_fQTKX4EPgCHMYCw/s1600-h/affectiva%255B3%255D"><img alt="affectiva" border="0" height="244" src="https://lh3.googleusercontent.com/-PyypGOrAW2A/WahOH9K7rkI/AAAAAAAAAvQ/NPEajwv0m2YNzExVOw2oOly5m3Mne_hrwCHMYCw/affectiva_thumb?imgmax=800" style="background-image: none; display: block; float: none; margin-left: auto; margin-right: auto;" title="affectiva" width="244" /></a><br />
Affectiva’s vision is that one day we will interact with technologies in the way we interact with people. In order to achieve this, technologies must become emotionally aware. This is the potential of emotional AI: enabling you to change behavior in positive ways. Raina mentions that today we have things like emoticons but that these are a poor way to communicate emotions. They are all very unnatural ways to interact. Even with voice AI, they tend to have a lot of smarts but no heart. <br />
Studies have shown that people <a href="http://money.cnn.com/2017/08/22/technology/culture/personal-voice-assistants-anger/index.html" target="_blank">rage against</a> technologies like Siri because they are emotionally devoid. There is also a risk that interaction with emotionally devoid technology causes a lack of empathy in human beings. <br />
Affectiva’s first foray was to use wearable glasses for autistic people to provide emotional feedback on human interactions. Autistic people struggle to read body cues. They are now partnering with another company to make this commercially available using <a href="https://prezi.com/iwuheoxykpfz/affectiva-google-glass-facial-recognition/" target="_blank">Google Glass</a>.<br />
Raina’s demo shows the technology profiling facial expressions in real-time. They do this by using neural networks fed thousands of facial expressions so that the AI can recognize different emotions. They now have the largest network of facial recognition data. The core engine has been packaged as an SDK to allow developers to add personalized experiences to their applications.<br />
Some of the possible use cases are personalizing movie feeds based on emotional reaction. Another use case is for autonomous cars to recognize if the driver is tired or angry. They have also partnered with educational software companies to develop software that adapts based on the level of engagement of the student. <br />
The careful use of this technology is why Affectiva has created the <a href="http://go.affectiva.com/emotion-ai-summit" target="_blank">Emotion AI Summit</a>. The Summit will explore how Emotion AI can move us to deeper connections with technology, with businesses and with the people we care about. It takes place at the MIT Media Lab on September 13<sup>th</sup>.Paul O'Dohertyhttp://www.blogger.com/profile/05237786392603690096noreply@blogger.com0tag:blogger.com,1999:blog-5889130978261301238.post-63886075613311912272017-08-31T10:33:00.001-07:002017-08-31T10:33:44.132-07:00VMworld 2017: General Session Day three: Hugh Herr MIT Media Lab Biomechatronics<p><a href="https://www.media.mit.edu/people/hherr/overview/" target="_blank">Hugh Herr</a> (@hughherr) takes the stage and mentions that prosthetics have not evolved a great deal over the decades and are passive with little innovation. Hugh Herr mentions that he lost both legs from frostbite in a mountain-climbing accident in 1982. During the post-op, Hugh mentioned that he wanted to mountain climb and was told it would never happen by the doctor. The doctor was dead wrong. He did not understand that innovation and technology are not static but transient, and grow over time. </p> <p><a href="https://lh3.googleusercontent.com/-PqiXOa2ZEQA/WahIdQ970aI/AAAAAAAAAu4/3i_28cHHwjQBMeQ-h6MaB1a3ZFEdtiyQwCHMYCw/s1600-h/hugh%255B3%255D"><img title="hugh" style="margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="hugh" src="https://lh3.googleusercontent.com/-q88wUYR2OKY/WahId2S79_I/AAAAAAAAAu8/b3xRTJb-nO8gLqmi2O-gVy-LvC2Jbf9XgCHMYCw/hugh_thumb?imgmax=800" width="244" height="244" /></a></p> <p>Hugh actually considered himself lucky: because he was a double amputee, he could adjust his height by creating prosthetics that were taller. Hugh references the Biomechatronics limbs he is wearing on stage which have three computers and built-in actuators.
Hugh’s passion is running the Center for Extreme Bionics. Extreme Bionics covers anything designed for or implanted into the body that enhances human capability.</p> <p>Hugh explains that when limbs are amputated, surgeons fold over the muscle, so there is no sensory feedback to the patient. Dr. Herr and team have developed a new way of amputating. The new approach has surgeons create a small joint by ensuring two muscles work together to expand and contract. With this new method the patient can ‘feel’ a phantom limb. By adding a simple controller you can track sensory movement that can be relayed to a bionic limb.</p> <p>What they learned is that if you give the spinal cord enough information, the body will intrinsically know how to move. But what about paralysis? The approach right now is to add a cumbersome exoskeleton to enable the ability to move. Work is being done to inject stem cells into severed spinal cords, with the results being an incredible return of mobility.</p> <p>Dr. Herr and team are testing crystals embedded in muscles to relay information, along with light emitters to control muscles. In this way they can build a virtual spinal column of sensors, enabling mobility that was once considered impossible.</p> <p>Hugh mentions that they are going to advance from their current foundational science in Biomechatronics to eliminate disabilities and augment physicality. It is important that we develop policies that govern the use of this technology so that it is used ethically. </p>Paul O'Dohertyhttp://www.blogger.com/profile/05237786392603690096noreply@blogger.com0tag:blogger.com,1999:blog-5889130978261301238.post-10433680908778168942017-08-30T15:23:00.000-07:002017-08-30T15:23:03.984-07:00Transforming the Data Center to Deliver Any Application on Any Platform with Chris Wolf @cswolf<div>
It's not just about Cloud; it's also about bringing services to the edge. Why does the edge matter? An average flight using IoT generates 500 GB of data per flight. How do we mine that data when planes are being turned around so quickly? This is creating a huge pull toward the edge. Edge mastery is a new competitive advantage. </div>
<div>
<br /></div>
<div>
<div class="separator" style="clear: both;">
<a href="https://lh3.googleusercontent.com/-2mKST05ZDBU/WacBVAZo5EI/AAAAAAAAAuk/cbMp2QR2rH4xOMZc4Ki3z2ysGVuqHgD-wCHMYCw/s640/blogger-image--486643371.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://lh3.googleusercontent.com/-2mKST05ZDBU/WacBVAZo5EI/AAAAAAAAAuk/cbMp2QR2rH4xOMZc4Ki3z2ysGVuqHgD-wCHMYCw/s320/blogger-image--486643371.jpg" width="320" /></a></div>
<br /></div>
<div>
VMware is focused on Speed, Agility, Innovation, Differentiation and Pragmatism. VMware also realizes that hyperscale cloud platforms are not right for every use case. Public Cloud provides great speed, but it is sticky, and application agility on Public Cloud tends toward operational drift. VMware's approach is to have globally consistent infrastructure-as-code.</div>
<div>
<br /></div>
<div>
Nike is showcased for how they leverage NSX. They leverage NSX to securely deploy development environments. In addition, they run a true hybrid environment, with services in both Azure and AWS. They are looking at VMware Cloud on AWS to shutter a legacy datacenter and move it wholesale into an AWS region in the West, then likely migrate it to the eastern region to bring it closer to dependent applications, reducing overall latency.</div>
<div>
<br /></div>
<div>
To get those new cloud capabilities you need to be current. You can buy this with VMware Cloud Foundation because the lifecycle upgrades are managed for you. Yanbing Li takes the stage to talk about vSAN 6.6, which was recently released. Hyper-converged infrastructure "HCI" really breaks down the silos within the datacenter. Three hundred of VMware's Cloud Service Partners are leveraging vSAN in their datacenters today. VMware is seeing customers use vSAN cost savings to fund their SDDC initiatives. vSAN has hit an important milestone with 10,000 customers.</div>
<div>
<br /></div>
Paul O'Dohertyhttp://www.blogger.com/profile/05237786392603690096noreply@blogger.com0tag:blogger.com,1999:blog-5889130978261301238.post-38869283722176422992017-08-29T13:59:00.000-07:002017-08-29T13:59:29.888-07:00Great Q&A with Pat Gelsinger, CEO of VMware and Andy Jassy, CEO of AWS Cloud Services<div>
Great Executive Q&amp;A with Pat Gelsinger, CEO of VMware; Andy Jassy, CEO of AWS Cloud Services; and Sanjay Poonen, COO of VMware. </div>
<div>
<br /></div>
<div>
<div class="separator" style="clear: both;">
</div>
<div class="separator" style="clear: both;">
<a href="https://lh3.googleusercontent.com/-3L4RmLS1eYQ/WaXAltOoYqI/AAAAAAAAAuE/m3ruOLnyWZ8L2YmMJcrlZCCsq3h4JQ7_wCHMYCw/s640/blogger-image--1254871089.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://lh3.googleusercontent.com/-3L4RmLS1eYQ/WaXAltOoYqI/AAAAAAAAAuE/m3ruOLnyWZ8L2YmMJcrlZCCsq3h4JQ7_wCHMYCw/s320/blogger-image--1254871089.jpg" width="320" /></a></div>
<br />
<br /></div>
<div>
Question from the press corps: "In the General Session, VMware's strategy was consistent infrastructure and operations; what does VMware mean by the term consistent infrastructure?"</div>
<div>
<br /></div>
<div>
Pat: "There are 4,400 VMware vCloud Air Network "vCAN" partners providing Public Cloud services to our customers. With the AWS partnership, customers can extend services to Amazon. This is all done leveraging VMware's management tools. It is this consistent infrastructure and operations that we were discussing in the general session. In addition, we are developing other cloud services, but these are likely to come to market as 'bite-size' services to solve a particular challenge. We believe this approach makes it easier for customers to adopt."</div>
<div>
<br /></div>
<div>
Michael: "VMware has 500,000 customers to whom the services being developed by VMware are directly applicable, which is a huge portion of the market." </div>
<div>
<br /></div>
<div>
Question from Paul O'Doherty @podoherty: "Public Cloud gets sticky with serverless architectures while VMware is really focused on the infrastructure; are you discussing other areas where VMware can add value to Amazon, and can you elaborate?"</div>
<div>
<br /></div>
<div>
Pat: "Well, what we have announced today is a very big achievement, but it really has kicked off an extensive collaboration involving a huge portion of the engineering talent at Amazon and VMware. While it would be premature to talk about anything at this point, we expect today's announcement to be one of many moving forward with Amazon."</div>
<div>
<br /></div>
<div>
Question from the press corps: "If in the new Cloud economy it is all about the apps, then it would seem that the partnership favours Amazon over the long term; can you comment?"</div>
<div>
<br /></div>
<div>
Pat: "At VMware we do not see it that way. This is an opportunity for VMware to continue to add value as part of a strong and ongoing partnership. When you think about moving applications to the cloud, this often involves some heavy engineering. Refactoring or re-platforming an application that is not fundamentally changing does not add a significant amount of value. The set of services announced today adds a lot of value for our customers. Today VMware is providing management and metrics for applications, but this is also the start of a joint roadmap with many more products and announcements that will be more app oriented."</div>
<div>
<br /></div>
<div>
Question from the press corps: "What is the benefit of the partnership from Amazon's perspective?"</div>
<div>
<br /></div>
<div>
Andy: "Nothing that we decided to pursue was considered lightly. What carries the day is what customers want from us. When we look at the adoption of Public Cloud, enterprise is still at the relative beginning of its journey, with some notable exceptions. Most are in the early days of their journey. When we spoke to customers about their Cloud strategy, we were asked, "Why are you not working with VMware?" That really was the impetus for these discussions. Customer feedback and excitement around this partnership is tremendous."</div>
<div>
<br /></div>
<div>
Question from the press corps: "As customers are heavily penalized for egress traffic from Public Cloud, are there any concerns that on-boarding tends to flow one way in the VMware and Amazon partnership?"</div>
<div>
<br /></div>
<div>
Andy: "For customers who are serious about the adoption of a hybrid cloud platform, while egress traffic is a consideration, it is not a roadblock to adoption."</div>
<div>
<br /></div>
<div>
Pat: "I do see customers also approaching architecture a little differently. For example, today they have to build for average and peak load in a single environment. With a true hybrid platform it is possible to build for the average workload while leveraging AWS for peak capacity demand."</div>
Paul O'Dohertyhttp://www.blogger.com/profile/05237786392603690096noreply@blogger.com0tag:blogger.com,1999:blog-5889130978261301238.post-80071407142194491002017-08-29T13:54:00.001-07:002017-08-30T07:29:30.886-07:00AWS Native Services Integration with VMware Cloud on AWS with PaulBockelman @boxpaul & Haider Witwit<div>
VMware Cloud on AWS has a tremendous amount of capability. This session will focus on some of the ninety "90" services available through VMware Cloud on AWS. We will start with a recap of VMware Cloud on AWS and then look at a sample use case. The three core services within VMware Cloud on AWS are vSphere on bare metal, NSX and vSAN. This allows you to extend your enterprise data center. It integrates through linked mode in vCenter as a separate site. In addition, you have access to AWS integrations like CloudFormation templates.
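Since CloudFormation templates are one of the integrations mentioned above, here is a minimal sketch of building one programmatically. The resource name, instance type, and AMI ID are illustrative placeholders, not details from the session:

```python
import json

# Hypothetical minimal CloudFormation template: a single EC2 instance that a
# workload in the VMware SDDC might talk to across the connected VPC.
# The AMI ID and resource names below are illustrative placeholders only.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Illustrative EC2 instance alongside a VMware Cloud on AWS SDDC",
    "Resources": {
        "AppInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "InstanceType": "t2.micro",
                "ImageId": "ami-12345678",  # placeholder AMI ID
            },
        }
    },
}

# Serialize the template for upload via the CloudFormation console or CLI.
print(json.dumps(template, indent=2))
```

Generating templates this way (rather than hand-editing JSON) keeps the definition under version control and easy to parameterize.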
<div>
<br /></div>
<div>
Every customer gets their own account with single-tenant, dedicated hardware. The deployment is automated and stood up for you, taking approximately two "2" hours. The minimum configuration is a four "4" node cluster. It is connected to an AWS VPC through an NSX compute gateway. VMware recommends that you configure CloudWatch for monitoring your endpoints. The services on the left "VMware cluster" can connect directly to services on the right "AWS VPC".
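To illustrate the CloudWatch recommendation, here is a hedged sketch of the parameters for an endpoint-health alarm. The metric, threshold, and ARN values are assumptions for illustration; in practice you would pass the resulting dict to boto3's CloudWatch client via `put_metric_alarm(**params)`:

```python
# Hypothetical sketch: building a CloudWatch alarm definition for an endpoint
# health check. All names, thresholds, and ARNs below are illustrative.
def endpoint_alarm_params(endpoint_name: str, sns_topic_arn: str) -> dict:
    """Return put_metric_alarm keyword arguments for a failed-health-check alarm."""
    return {
        "AlarmName": f"{endpoint_name}-health",
        "Namespace": "AWS/Route53",                 # Route 53 health-check metrics
        "MetricName": "HealthCheckStatus",          # 1 = healthy, 0 = unhealthy
        "Statistic": "Minimum",
        "Period": 60,                               # evaluate every 60 seconds
        "EvaluationPeriods": 3,                     # 3 consecutive bad periods
        "Threshold": 1,
        "ComparisonOperator": "LessThanThreshold",  # fires when status drops below 1
        "AlarmActions": [sns_topic_arn],            # notify operators via SNS
    }

params = endpoint_alarm_params(
    "demo1-app1", "arn:aws:sns:us-west-2:123456789012:ops"  # placeholder ARN
)
print(params["AlarmName"])
```

Keeping the alarm definition in a helper like this makes it easy to apply the same monitoring to every endpoint the SDDC exposes.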
<div>
<br /></div>
<div>
<div class="separator" style="clear: both;">
<a href="https://lh3.googleusercontent.com/-nXVLlFHgmA4/WaXg1hmNwII/AAAAAAAAAuU/gFeah4uOThMrgd3watg-xcDngvYRVpjyACHMYCw/s640/blogger-image--1384538701.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="290" src="https://lh3.googleusercontent.com/-nXVLlFHgmA4/WaXg1hmNwII/AAAAAAAAAuU/gFeah4uOThMrgd3watg-xcDngvYRVpjyACHMYCw/s400/blogger-image--1384538701.jpg" width="400" /></a></div>
</div>
<div>
<br /></div>
<div>
This allows you to create integrated architectures in which some components live in the VMware SDDC alongside native AWS services like AWS Storage Gateway, EC2 instances, AWS Certificate Manager and CloudWatch. <span style="font-family: 'helvetica neue light', 'helvetica', 'arial', sans-serif;">In addition, you can blend serverless architectures like Lambda with the VMware SDDC. </span>
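To ground the serverless mention, here is a hypothetical minimal Lambda handler in Python; an SDDC-hosted application could invoke a function like this for a stateless task. The event fields are illustrative assumptions, not from the session:

```python
import json

# Hypothetical minimal AWS Lambda handler, illustrating the serverless side of
# a blended SDDC + AWS architecture. The "name" event field is illustrative.
def lambda_handler(event, context):
    """Return an API Gateway-style response greeting the caller in the event."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }

# Local smoke test; in production the Lambda runtime invokes lambda_handler.
resp = lambda_handler({"name": "vmware"}, None)
print(resp["statusCode"])
```

The handler signature (`event`, `context`) is the standard Python Lambda entry point; everything around it is a sketch.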
<div>
<br /></div>
<div>
A sample architecture with documentation and its integration points can be found at the following links:</div>
<div>
<br /></div>
<div>
http://demo1-app1.vmw.awsdemo.cloud/</div>
<div>
http://demo1-app2.vmw.awsdemo.cloud/ </div>
<div>
<br /></div>
Paul O'Dohertyhttp://www.blogger.com/profile/05237786392603690096noreply@blogger.com0