Friday, September 29, 2017

Microsoft Ignite 2017: High Availability for your Azure VMs

The idea with cloud is that each layer is responsible for its own availability, and by combining these loosely coupled layers you achieve higher overall availability. Availability should be a consideration before you begin deployment. For example, thinking about maintenance, and how and when you would take a planned outage, informs your design. You should anticipate the types of failures you can experience so you can mitigate them in your architecture.

There is an emerging concept on hyper-scale cloud platforms called grey failures. A grey failure occurs when your VMs, workloads or applications are not down but are not getting the resources they need.

“Gray Failure: The Achilles’ Heel of Cloud-Scale Systems”

Monitoring and alerting should be enabled for any VMs running in IaaS. When you open a ticket in Azure, any known issues are surfaced as part of the support request process. This is part of Azure’s automated analytics engine, providing support before you even submit the ticket.

Backup and DR plans should be applied to your VMs. Azure allows you to create granular retention policies, and when you recover VMs you can restore over the existing VM or create a new one. For DR you can leverage Azure Site Recovery (ASR) to replicate VMs to another region. ASR is not multi-target, however, so you cannot replicate VMs from the enterprise to an Azure region and simultaneously replicate them on to a different one. It takes two distinct steps: first replicate and fail over the VM to Azure, and then set up replication between two regions.

For maintenance, Microsoft now provides a local endpoint within each region with a simple REST API that provides information on upcoming maintenance events. These events can be surfaced within the VM so you can trigger soft shutdowns of your virtual instance. For example, if you have a single VM (outside an availability set) and the host is being patched, the VM can complete a graceful shutdown first.
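
As a rough sketch of how a VM might poll for these events from inside the guest (the endpoint and api-version below match the Scheduled Events preview documentation from around this time and should be treated as assumptions):

```python
import requests  # third-party: pip install requests

# The metadata endpoint is a non-routable address reachable only from
# inside the VM. The api-version reflects the 2017-era preview.
URL = "http://169.254.169.254/metadata/scheduledevents?api-version=2017-08-01"

resp = requests.get(URL, headers={"Metadata": "true"})
resp.raise_for_status()
for event in resp.json().get("Events", []):
    # EventType is e.g. Freeze, Reboot or Redeploy; NotBefore is the earliest
    # time the platform will act, which is your window to shut down gracefully.
    print(event["EventId"], event["EventType"], event.get("NotBefore"))
```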

Azure uses VM-preserving technology when it performs underlying host maintenance. For updates that do not require a reboot of the host, the VM is frozen for a few seconds while the host files are updated. For most applications this is seamless; if it is impactful, you can use the REST API above to react.

Microsoft batches all host reboot requirements so that they are applied at once rather than periodically throughout the year, improving platform availability. You are preemptively notified 30 days out for these events. One notification is sent per subscription to the administrator, and the customer can add additional recipients.

An Availability Set is a logical grouping of VMs within a datacenter that allows Azure to understand how your application is built so it can provide redundancy and availability. Microsoft recommends that two or more VMs be created within an availability set; to get the 99.95% SLA you need to deploy your VMs in an Availability Set. Availability Sets provide fault isolation for your compute.

An Availability Set with Managed Disks is called a Managed Availability Set. With a Managed Availability Set you get fault isolation for both compute and storage: essentially, it ensures that the managed VM disks are not placed on the same underlying storage.

Microsoft Ignite 2017: Tips & Tricks with Azure Resource Manager with @rjmax

The Azure Resource Manager (ARM) vision is to capture everything you might do or envision in the cloud, extending from infrastructure and configuration through to governance and security.

Azure is seeing about 200 ARM template deployments per second. The session focuses on some of the template enhancements, how Microsoft is more closely integrating identity management, and the new features being delivered.

You now have the ability to span ARM deployments across subscriptions (service providers pay attention!). You can also deploy across resource groups. The two properties within the ARM template that enable this (illustrated in the sketch after the list) are:

“resourceGroup”

“subscriptionId”
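
A minimal sketch of where those properties sit on a nested deployment resource, built here as a Python dictionary and printed as the JSON you would embed in a template; the resource group name, subscription GUID and apiVersion are placeholders:

```python
import json

# Hypothetical nested "Microsoft.Resources/deployments" resource showing the
# two cross-scope properties. "target-rg" and the GUID are placeholders.
nested_deployment = {
    "type": "Microsoft.Resources/deployments",
    "apiVersion": "2017-05-10",          # illustrative apiVersion
    "name": "crossScopeExample",
    "resourceGroup": "target-rg",        # deploy into a different resource group...
    "subscriptionId": "00000000-0000-0000-0000-000000000000",  # ...and subscription
    "properties": {
        "mode": "Incremental",
        "template": {"resources": []},   # inner template elided
    },
}
print(json.dumps(nested_deployment, indent=2))
```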

You may be wondering how you share your templates, make them more reliable, and support them after a deployment.

Managed Applications

Managed Applications simplify template sharing. Managed applications can be shared or sold; they are meant to be simple to deploy, contained so they cannot be broken, and connected. Connected means you define what level of access you need to the application after it has been deployed for ongoing management and support.

For additional details on Managed applications please see https://docs.microsoft.com/en-us/azure/azure-resource-manager/managed-application-overview .

Managed applications are available in West US and West Central US but will be global by the end of the year. When you define a managed application through the Azure portal you determine whether it is locked or unlocked. If it is locked, you need to define who is authorized to write to it.


By default Managed Applications are deployed within your subscription. From within the access pane of the Managed Application you can share it to other users and subscriptions. Delivering Managed Applications to the Azure marketplace is in Public Preview at this moment.

Managed Identity

With Managed Identity you can now create virtual machines with a service principal provided by Azure Active Directory. This allows the VM to obtain a token for service access, avoiding passwords and credentials in code. To learn more, have a look here:

https://docs.microsoft.com/en-us/azure/active-directory/msi-overview 
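
A minimal sketch of fetching such a token from inside the VM; the endpoint and api-version shown reflect the instance metadata identity endpoint as it later became generally available (the original 2017 preview used a local extension endpoint instead), so treat them as assumptions:

```python
import requests  # third-party: pip install requests

# Fetch an Azure AD access token without any stored credentials.
URL = "http://169.254.169.254/metadata/identity/oauth2/token"
params = {"api-version": "2018-02-01",
          "resource": "https://management.azure.com/"}

resp = requests.get(URL, params=params, headers={"Metadata": "true"})
resp.raise_for_status()
access_token = resp.json()["access_token"]  # pass as a Bearer token to ARM
```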

ARM Templates & Event Grid

You can use Event Grid to collect all ARM events and requests, which can be pushed to an endpoint or listener. To learn more about Event Grid, read here:

https://buildazure.com/2017/08/24/what-is-azure-event-grid/
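
A rough sketch of what such a listener endpoint can look like, including the validation handshake Event Grid performs when a subscription is created; the route name is arbitrary:

```python
from flask import Flask, request, jsonify  # third-party: pip install flask

app = Flask(__name__)

@app.route("/events", methods=["POST"])
def handle_events():
    for event in request.get_json():
        # Event Grid first sends a validation event when the subscription is
        # created; echoing the code back completes the handshake.
        if event["eventType"] == "Microsoft.EventGrid.SubscriptionValidationEvent":
            return jsonify({"validationResponse": event["data"]["validationCode"]})
        print(event["eventType"], event.get("subject"))  # e.g. ARM write/delete events
    return "", 200
```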

Resource Policies

You can use Resource Policies for location ring-fencing: defining a policy that ensures your data does not leave a certain location.
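
A sketch of what such a policy rule might look like, built here as a dictionary and printed as the JSON you would submit; the allowed regions are placeholders:

```python
import json

# Hypothetical "allowed locations" rule: deny any resource whose location
# falls outside the listed regions. Region names are placeholders.
policy_rule = {
    "if": {
        "not": {
            "field": "location",
            "in": ["canadacentral", "canadaeast"],
        }
    },
    "then": {"effect": "deny"},
}
print(json.dumps(policy_rule, indent=2))
```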


You can also restrict which VM classes people can use, for example to prevent your developers from deploying extremely expensive classes of VMs.

Policies can also be used to limit access to the marketplace images, down to just a few. You can find many starting-point policies on GitHub:

https://github.com/azure/azure-policy-samples

Azure Policy is in preview, and additional information can be found here:

https://azure.microsoft.com/en-us/services/azure-policy/

Microsoft Ignite 2017: How to get Office 365 to the next level with Brjann Brekkan

It is important that customers are configured with a single identity or tenant. You should look at the identity as the control plane, or the single source of truth. Azure Active Directory (Azure AD) has grown 30% year-over-year to 12.8 million customers. In addition there are now 272,000 apps in Azure AD. Ironically, the most-used application in Azure AD is Google Apps: customers are using Azure AD to authenticate to Google services.

Azure AD is included with O365 so there is no additional cost. Identity in O365 consists of three different types of users:

  1. Cloud Identity: accounts live in O365 only
  2. Synchronized Identity: accounts sync with a local AD Server
  3. Federated Identity: certificate-based authentication via an on-premises deployment of Active Directory Federation Services.

The Identity can be managed using several methods.

Password Hash Sync ensures you have the same password on-premises as in the cloud. The con to hash sync is that disabled accounts and user edits are not updated until the sync cycle completes. In hash sync the hashes on-premises are not identical to those in the cloud, but the passwords are the same.

Pass-through Authentication also gives you the same password, but passwords remain on-premises. A Pass-through Authentication “PTA” agent gets installed on your enterprise AD server; the PTA agent handles the queue of requests from Azure AD and sends the validations back once authenticated.

Seamless Single Sign-On works with both Password Hash Sync and Pass-through Authentication, with no additional requirement onsite. SSO is enabled during the installation of AD Connect.

You do not need more than one Azure AD tenant if you have more than one AD on-premises; one Azure AD can support hundreds of unique domain names. You can also mix cloud-only accounts and on-premises synchronized accounts. You can use PowerShell and the Graph API instead of AD Connect to synchronize and manage users and groups, but it is much more difficult. AD Connect is required for hybrid Exchange, however.

There are six general use cases for Azure AD:

  1. Employee secure access to applications
  2. Leveraging Dynamic Groups for automated application deployment; dynamic groups enable join, move and leave workflow processes
  3. Federating access for business-to-business communication and collaboration (included in Azure AD; 1 license enables 5 collaborations)
  4. Advanced threat and identity protection, enabled through conditional access based on device compliance
  5. Abiding by governance and compliance industry regulations. Access Review, now in public preview, identifies accounts that have not accessed the system for a while and prompts the administrator to review them
  6. Leveraging O365 as an application development platform

With Azure AD Premium you get AD Connect Health, dynamic group memberships, and Multi-Factor Authentication for all objects, which can be applied when needed rather than always on. In addition there is a better overall end-user experience.

Thursday, September 28, 2017

Microsoft Ignite 2017: Protect Azure IaaS Deployments using Azure Security Center with Sarah Fender & Adwait Joshi

Security is no longer a barrier to adopting cloud. It is, however, a shared responsibility between the cloud provider and the tenant, and it is important that the tenant understands this principle so they properly secure their resources.


Securing IaaS is not just about securing VMs but also the networking and services like storage. It is also about securing multiple clouds, as many customers have a multi-cloud strategy. While protections like anti-malware still need to be applied in the cloud, how they are applied is different. Key challenges specific to cloud are:

  • Visibility and Control
  • Management Complexity (a mix of IaaS, PaaS and SaaS components)
  • Rapidly Evolving Threats (you need a solution optimized for cloud as things are more dynamic)

Microsoft ensures that Azure is built on a Secure Foundation by enforcing physical, infrastructure and operational security. Microsoft provides the controls but the customer or tenant is responsible for Identity & Access, Information Protection, Threat Protection and Security Management.

10 Ways Azure Security Center helps protect IaaS deployments

1 Monitor security state of cloud resources

  • Security Center automatically discovers and monitors Azure resources
  • You can secure Enterprise and 3rd party clouds like AWS from Security Center

Security Center is built into the Azure portal, so no additional access is required. If you enable the Data Collection policy you can automatically push the monitoring agent, which is the same agent used by Operations Management Suite. When you set up Data Collection you can set the level of logging required.


Security Center comes with a policy engine that allows you to tune policies per subscription. For example, you can define one policy posture for production and another for dev and test.

2 Ensure VMs are configured in a certain way

  • You can see system update status, antimalware protection (Azure has one that is built in for free), and OS and web configuration assessment (e.g. an IIS assessment against best-practice configurations)
  • It allows you to fix vulnerabilities quickly

3 Encrypt disks and data 

4 Control Network Traffic

5 Use NSGs and additional firewalls

6 Collect Security Data

  • Analyze and search security logs from many sources
  • Security Center allows you to integrate 3rd-party products like Qualys scans for assessing other applications and compliance issues. Security Center monitors IaaS VMs and some PaaS components like web apps.

Security Center provides a new dashboard for failed logon attempts on your VMs. The most common attack on cloud VMs is the RDP brute-force attack. To avoid this you can use Just-in-Time access so that port 3389 is only open for a window of time and only from certain IPs. These requests are all audited and logged.

Another attack vector is malware. Application whitelisting allows you to track known-good behaviour rather than blocking the bad. Unfortunately it has historically been arduous to apply.

7 Block malware and unwanted applications

Security Center uses an adaptive algorithm to understand which applications are running and develop a set of whitelists. Once you are happy with the lists you can move to enforcement.

8 Use advanced analytics to detect threats quickly.

Security Center looks at VM and network activity and leverages Microsoft’s global threat intelligence to detect threats quickly. This uses machine learning to understand what is statistically normal activity in order to identify abnormal behavior.
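
As a toy illustration of the statistical idea only (Security Center’s actual models are far richer), flagging a value that sits far outside a learned baseline:

```python
import statistics

def is_anomalous(history, value, threshold=3.0):
    """Toy z-score check: flag activity far outside the learned baseline."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    return abs(value - mean) / stdev > threshold

logons_per_hour = [3, 5, 4, 6, 5, 4, 3, 5]   # normal baseline
print(is_anomalous(logons_per_hour, 250))     # True: brute-force-like spike
```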

9 Quickly assess the scope and impact of an attack

This is a new feature that graphically displays all the related components that were involved in an attack.


10 Automate threat response

Azure uses Logic Apps to automate responses, which allows you to trigger workflows from an alert to enable conditional actions. In addition there is a new malicious-traffic map that identifies known malicious IPs by region with related threat intelligence.

The basic policy tier for Security Center is free, so there is no reason not to have more visibility into what is vulnerable in your environment.

For more information check out

http://azure.microsoft.com/en-us/services/security-center

Wednesday, September 27, 2017

Microsoft Ignite 2017: New advancements in Intune Management

Microsoft 365 is bringing the best of Microsoft together. One of the key things Satya Nadella did when he took over was to put customers front and center. Microsoft has invested in partner and customer programs to help accelerate the adoption of Intune.
There are three versions of Intune: 
  1. Intune for Enterprises
  2. Intune for Education
  3. Intune for SMBs (In Public Preview)
One of the biggest innovations Microsoft completed was moving Intune to Azure. There is a new portal for Intune available within Azure that provides an overview of Device compliance.
To set up Intune, the first thing you do is define a device profile. Microsoft supports a range of platforms such as Android, iOS, macOS and Windows. Once you have a device profile there are dozens of configurations you can apply.

Once you define the profile you assign it to Azure AD Groups. You can either include or exclude users. So you can create a baseline for all users and exclude your executive group to provide them an elevated set of features.

As it lives in the Azure Portal you can click on Azure Active Directory and see the same set of policies. Within the policy you can set access controls that are conditional. For example “you get corporate email only if you are compliant and patched”. Intune checks the state of the device and compliance and then grants access. The compliance overview portal is available in Intune from within Azure.
Microsoft has dramatically simplified the ability to add apps from within Intune’s portal: you can browse the iOS App Store and add applications within the interface. In addition to granting access to apps you can apply app protection policies. For example, you can enforce that the user is on a minimum app version, and block or warn if the user is in violation of this policy.
The demo shows an enrolled iPad attempting to use a down-level version of Word, which displays a warning when the user launches it. You can provide conditional access that allows a grace period for remediating certain types of non-compliant states.

Many top 500 companies leverage Jamf today (https://www.jamf.com) for Apple management. Jamf is the standard for Apple mobile device management. Whether you're a small business, school or growing enterprise environment, Jamf can meet you where you're at and help you scale.

Intune can now be used in conjunction with Jamf. With this partnership you can use both Jamf and Intune together: Macs enroll in Jamf Pro, and Jamf sends the macOS device inventory to Intune to determine compliance. If Intune determines the device is compliant, access is allowed. If not, Intune and Jamf present options to the user to resolve the issues and re-check compliance.
Another feature that has been built into conditional access is restricting access to services based on the location of the user. Microsoft has also enhanced Mobile Threat Protection and extended geo-fencing (in tech preview).

For geo-fencing you define known geo-locations; if the user roams outside of those locations, the password gets locked. Similarly for Mobile Threat Protection, you define trusted locations and create rules to determine what happens when access is requested from a trusted or non-trusted location.

Microsoft Ignite 2017: Azure Security and Management for hybrid environments with Jeremy Winter, Director of Security and Management

Jeremy Winter is going to do a deep dive on Azure’s management and security announcements. Everyone is at a different stage when it comes to cloud, and this digital transformation is having a pretty big ground-level impact. Software is everywhere and it is changing the way we think about our business.

Digital transformation requires alignment across teams. This includes Developers, Operational teams and the notion of a Custodian who looks after all the components. It requires cross-team collaboration, automated execution, proactive governance and cost transparency. This is not a perfect science, it is a journey Microsoft is learning, tweaking and adjusting as they go. Operations is going to become more software based as we start to integrate scripting and programmatic languages.

Microsoft’s bet is that management and security should be part of the platform. Microsoft doesn't think you should have to take enterprise tooling and bring it with you to the cloud; you should expect management and security to be native capabilities. Microsoft thinks about this as a full stack of cloud capabilities.


Security is integral; Protect is really about Site Recovery; Monitoring is about visibility into what is going on; Configuration is about automating what you are doing. Microsoft believes it has a good baseline for all these components in Azure and is now focused on Governance.

Many frameworks try to put a layer of abstraction between the developer and the platform. Microsoft’s strategy is different. They want to allow the activity to go direct to the platform but to protect it via policy. This is a different approach and is something that Microsoft is piloting with Microsoft IT. Intercontinental Hotels is used as a reference case for digital transformation.


Strategies for successful management in the cloud:

1. Where to start “Secure and Well-managed”

You need to secure your cloud resources through Azure Security Center, protect your data via backup and replication, and monitor your cloud health. Ignite announced PowerShell in Azure Cloud Shell as well as monitoring and management built right into the Azure portal.

Jeremy shows Azure monitoring a Linux VM. The Linux VM has management native in the Azure panel, where you can see the inventory of what is inside the VM. You no longer have to remote in; it is done programmatically.

In the demo the installed version of Java is shown on the VM, and we can now look at Change Tracking to determine why the version of Java appears to have changed. This is important from an audit perspective, as you have to be able to identify changes. You can see the complete activity log of the guest.

Also new is update management. You can see all the missing updates on individual or multiple computers and schedule patching by defining your own maintenance window. In the future Microsoft will add pre- and post-activities. You also have the ability to use Azure management for non-Azure VMs.

For disaster recovery you are able to replicate from one region to another. This was already available from the enterprise to Azure, but now also works region-to-region. Backup is now exposed as a property of the virtual machine, which you simply enable and assign a backup policy to.

With the new Azure Cloud Shell you have the ability to run PowerShell right inline. There are 3,449 commands that have been ported over so far.

2.  Governance for the cloud

azure.com/policy is in tech preview. Jeremy switches to a demo of Azure policies. You now have the ability to create a policy and see non-compliant and compliant VMs. The example shows a sample policy ensuring that VMs do not have public IPs. With the policy you are able to quickly audit the environment, and you can also enforce these policies. Policies are based on JSON so you can edit them directly. Other use cases include auditing for unencrypted VMs.
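
As a sketch of the public-IP example, an audit rule might look like the following; the field alias is an assumption on my part and should be checked against the published policy samples:

```python
import json

# Illustrative audit rule for network interfaces that carry a public IP.
# The alias string is hypothetical; see the azure-policy-samples repo for
# the exact field path.
audit_public_ips = {
    "if": {
        "allOf": [
            {"field": "type", "equals": "Microsoft.Network/networkInterfaces"},
            {"field": "Microsoft.Network/networkInterfaces/ipConfigurations[*].publicIpAddress.id",
             "exists": "true"},
        ]
    },
    "then": {"effect": "audit"},
}
print(json.dumps(audit_public_ips, indent=2))
```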

3.  Security Management and threat detection

Microsoft is providing unified visibility, adaptive threat detection and intelligent response. Azure Security Center is fully hybrid: you can apply it to enterprise and Azure workloads. Jeremy switches to Azure Security Center, which provides an overview of your entire environment’s security posture.

Using the portal you can scan and get Microsoft's security recommendations. Within the portal you can use ‘Just in Time Access’, which allows a developer to request that a port be opened, but only for a window of time. Security Center analysis also allows you to whitelist ports and audit what has changed.

Microsoft can track and group alerts through Security Center. A new continuous-investigation option lets you see visually what security incident has been logged: it takes the tracking, pushes it through the analytics engine, and allows you to visually follow the investigation thread to determine the root cause of the security exploit. Azure Log Analytics is the engine that drives these tool sets.

Azure Log Analytics now has an Advanced Analytics component that provides an advanced query language that you can leverage across the entire environment. It will be fully deployed across Azure by December.

4.  Get integrated analytics and monitoring

For this you need to start from the app using Application Insights, then bring in network visibility as well as the infrastructure perspective. There is a new version of Application Insights. Jeremy shows Azure Monitor, which was launched last year at Ignite; Azure Monitor is the place for end-to-end monitoring of your Azure datacenters.

The demo shows the ability to drill in on VM performance and leverage machine learning to pinpoint the deviations. The demo shows slow response time for the ‘contoso demo site’: everything is working, but slowly. You can see the dependency view of every process on the machine and everything it is talking to. Quickly you are able to see that the problem has nothing to do with the website but is actually a problem downstream.

Microsoft is able to do this because it has a team of data scientists baking the analytics directly into the Azure platform through the Microsoft Smarts team.

5. Migrate Workloads to the Cloud.

Microsoft announced a new capability for Azure migration. You can now discover applications on your virtual machines and group them, and determine directly within the Azure portal which apps are ready for migration to Azure. In addition it will recommend the types of migration tools that you can use to complete the migrations. This is in limited preview.

Microsoft Ignite 2017: Modernizing ETL with Azure Data Lake with @MikeDoesBigData @microsoft.com

Mike has done extensive work on the U-SQL language and framework. The session focuses on modern data warehouse architectures and introduces Azure Data Lake.

A traditional data warehouse has data sources, Extract-Transform-Load “ETL”, the data warehouse itself, and BI and analytics as foundational components.

Today many of the data sources are increasing in data volume and the current solutions do not scale. In addition you are getting non-relational data from things like devices, web sensors and social channels.

A Data Lake allows you to store data as-is; it is essentially a very large, scalable file system. From there you can do analysis using Hadoop, Spark and R. A Data Lake is really designed for the questions you don’t know yet, while a data warehouse is designed for the questions you do.

Azure Data Lake consists of a highly scalable storage area called the ADL Store. It is exposed through an HDFS-compatible REST API, which allows analytic solutions to sit on top and operate at scale.

Cloudera and Hortonworks are available from the Azure Marketplace. Microsoft’s version of Hadoop is HDInsight; with HDInsight you pay for the cluster whether you use it or not.

Data Lake Analytics is a batch analytics engine designed to do analytics at very large scale. Azure Data Lake Analytics lets you pay for just the resources a job consumes, rather than spinning up an entire cluster as with HDInsight.

You need to understand the big data pipeline and data flow in Azure. You go from ingestion into the Data Lake Store, and from there into the visualization layer. In Azure you can move data through Azure Data Factory, and you can also ingest through Azure Event Hubs.

Azure Data Factory is designed to move data from a variety of data stores to Azure Data Lake. For example, you can take data out of AWS Redshift and move it to the Azure Data Lake Store.

U-SQL is the language framework that provides the scale-out capabilities. It scales out your custom code in .NET, Python and R over your Data Lake. It is called U-SQL because it unifies SQL seamlessly across structured and unstructured data.

Microsoft suggests that you query the data where it lives. U-SQL allows you to query and read/write data not just from your Azure Data Lake but also from storage blobs, SQL Server in Azure VMs, Azure SQL Database and Azure SQL Data Warehouse.

There are a few built-in cognitive functions available to you. You can install this code in your Azure Data Lake account to add cognitive capabilities to your queries.

Tuesday, September 26, 2017

Microsoft Ignite 2017: Windows Server Storage Spaces and Azure File Sync

Microsoft’s strategy is about addressing storage costs and management complexity through the use of:

  1. World-class infrastructure on commodity hardware
  2. Finding smarter ways to store data
  3. Using active storage tiering
  4. Offloading the complexity to Microsoft

How does Storage Spaces work? You attach the storage directly to your nodes and then connect the nodes together in a cluster. You then create a storage volume across the cluster.

When you create the volume you select the resiliency. You have the following three options:

  1. Mirror
    1. Fast, but uses a lot of storage
  2. Parity
    1. Slower, but uses less storage
  3. Mirror-accelerated parity
    1. This allows you to create volumes that use both mirroring and parity; it is fast but conserves space as well

Storage Spaces Direct is a great option for running file servers as VMs. This allows you to isolate file server VMs by using VMs running on a Storage Spaces Direct volume. In Windows Server 2016 you also have the ability to introduce Storage QoS on Storage Spaces to deal with noisy neighbors: you can predefine QoS storage policies to prioritize storage performance for some workloads.

You also have the ability to dedup. Dedup works by moving unique chunks to a dedup chunk store and replacing the original blocks with references to the unique chunks. Ideal use cases for Microsoft dedup are general-purpose file servers, VDI and backup.
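
As a toy illustration of that chunk-store idea (not Windows Server’s actual implementation, which uses variable-size chunking among other things):

```python
import hashlib

def dedup(blocks, chunk_store):
    """Store each unique chunk once; replace blocks with references (hashes)."""
    refs = []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        chunk_store.setdefault(digest, block)  # keep only the first copy
        refs.append(digest)                    # the file becomes a list of refs
    return refs

store = {}
data = [b"aaaa", b"bbbb", b"aaaa"]  # third block duplicates the first
refs = dedup(data, store)
assert len(store) == 2 and len(refs) == 3  # 3 blocks stored as 2 chunks
```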

You may apply dedup to a SQL Server or Hyper-V host, but it depends on how much demand there is on the system; high random-I/O workloads are not ideal for dedup. Dedup is only supported on NTFS on Windows Server 2016. It will support ReFS on Windows Server 1709, which is the next release.

Microsoft has introduced Azure File Sync. With Azure File Sync you are able to centralize your file services and use your on-premises file servers to cache files for faster local performance. It is a true file share, so services use standard SMB and NFS.

Shifting your focus from on-premises file services allows you to take advantage of cloud based backup and DR. Azure File Sync has a fast DR recovery option to get you back up and running in minutes.

Azure File Sync requires Windows Server 2012 R2 or 2016 and installs a service that tracks file usage. It also pairs the server with an Azure File Share. Files that are not touched over time are tiered to Azure Files.

To recover you simply deploy a clean server and reinstall the service. The namespace is recovered right away, so the service is available quickly. When users request files, a priority restore is performed from Azure-based storage. Azure File Sync allows your branch file server to have a fixed storage profile, as older files move to the cloud.

With this technology you can introduce follow-the-sun scenarios, where work on one file server is synced through an Azure File Share to a different region so it is available there.

On the roadmap is cloud-to-cloud sync, which allows Azure File Shares to sync over the Azure backbone to different regions. With cloud-to-cloud sync, the moment the branch server cannot connect to its primary Azure File Share it will go to the next closest.

Azure File Sync is now publicly available in five Azure regions.

Monday, September 25, 2017

Microsoft Ignite 2017: Getting Started with IoT; a Hands-on Primer

As part of the session we are supplied an MXChip IoT Developer Kit, a physical IoT device that lets us mock up IoT scenarios. The device is Arduino-compatible and comes with a myriad of sensors and interfaces, including temperature and a small display screen. It retails for approximately $30–40 USD and is a great way to get started learning IoT.


When considering IoT, one needs to not just connect the device but understand what you want to achieve with the data. Understanding what you want to do with the data allows you to define the backend components for storage, analytics or both. For example, if you are ingesting data that you want to analyze, you may leverage Azure Stream Analytics; for less complex scenarios you may define an App Service and use Functions.

Microsoft’s preferred development suite is Visual Studio Code. Visual Studio Code includes extensions for Arduino. The process to get up and running is a little involved but there are lots of samples to get you started at https://aka.ms/azureiotgetstarted.

One of the more innovative uses of the device was demonstrated in the session. The speaker created “The Microsoft Cognitive Bot” by combining the MXChip’s physical sensors with “LUIS” in the Azure cloud. LUIS is the Language Understanding Intelligent Service, the underlying technology that Microsoft Cortana is built on. The speaker talks to the MXChip asking details about the weather and the conference, with LUIS responding.

The session starts with an introduction to what a basic framework of an IoT solution looks like, as shown in the picture below. On the left of the frame are the devices. Devices can connect to the Azure IoT Hub directly, provided the traffic is secure and they can reach the internet. For devices that either do not connect directly to the internet or cannot communicate securely, you can introduce a field gateway.

Field gateways can be used for other tasks as well, such as translating data. In cases where you need high responsiveness, you may also analyze the data on a field gateway so that there is less latency between the analysis and the response. Often when dealing with IoT there are both hot and cold data streams: hot being data that requires low latency between analysis and response, versus cold, which may not have time sensitivity.

[image: basic framework of an IoT solution]

An ingestion point requires a D2C or “Device-to-Cloud” endpoint. The other traffic flow uses a C2D or Cloud-to-Device endpoint; C2D traffic tends to be queued. There are two other types of endpoints that you can define: a Method endpoint, which is instantaneous and depends on the device being connected, and a Twin endpoint. With a Twin endpoint you can set a desired property, wait for the device to report its current state, and then synchronize it with your desired state.
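
For the D2C direction, here is a minimal device-side sketch using the azure-iot-device Python SDK (which post-dates this session); the connection string is a placeholder from the device’s registration in IoT Hub:

```python
from azure.iot.device import IoTHubDeviceClient, Message  # pip install azure-iot-device

# Placeholder connection string for a device registered in IoT Hub.
CONN_STR = "HostName=<hub>.azure-devices.net;DeviceId=<device>;SharedAccessKey=<key>"

client = IoTHubDeviceClient.create_from_connection_string(CONN_STR)
client.send_message(Message('{"temperature": 22.5}'))  # one D2C telemetry event
client.shutdown()
```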

We then had an opportunity to sit down with IoT experts like Brett from Microsoft. Okay, I know he does not look happy in this picture, but we had a really great session. We developed an architecture to look at long-term traffic patterns as well as analyze abnormal speeding patterns in real time for more responsive actions. “Sorry speeders ;-)”

[image: whiteboarding session with Brett from Microsoft]

The session turned pretty hands-on and we had to get our devices communicating with an Azure-based IoT hub we created. We then set up communications back to the device to review both ingress and egress traffic. In addition we configured Azure Table storage and integrated some cool visualization using Power BI. Okay, I have to admit I totally geeked out when I first configured a functional IoT service and then started to do some analysis. It is easy to see how IoT will fundamentally change our abilities in the very near future. Great first day at Microsoft Ignite.

Thursday, September 21, 2017

ZertoCon Toronto’s 2017 Keynote Session

Ross DiStefano, the Eastern Canada Sales Manager at Zerto, introduces Rob Strechay (@RealStrech), the Vice President of Products at Zerto. Rob mentions that Zerto was the first to bring hypervisor replication to market. Zerto has about 700 employees and is based out of Boston and Israel. With approximately 5,000 customers, Zerto provides round-the-clock support for replication of tens of thousands of VMs. Almost all of the service providers in Gartner’s Magic Quadrant are leveraging Zerto software for their DR-as-a-Service offerings.

Zerto’s focus is on reducing your Disaster Recovery “DR” and migration complexity and costs. Zerto would like to be agnostic to where the replication sources and destinations are located. Today at ZertoCon Toronto, the intention is to focus on Zerto’s multi-cloud strategy.

Most customers are looking at multiple options across hyper-scale cloud, managed service providers and enterprise cloud strategies. Zerto’s strategy is to be a unifying partner for this diverse set of partners and services. This usually starts with a transformation project such as adopting a new virtualization strategy, implementing new software or embracing hybrid, private or public cloud strategies.

451’s research shows that C-level conversations about cloud are focused around net-new initiatives, moving workloads to cloud, adding capacity to the business or the introduction of new services. Rob then transitions to what’s new with Zerto Virtual Replication. What Zerto has found is that people are looking to stop owning IT infrastructure that is not core to their business and focus on the business data and applications that are. To do this they need managed services and hyper-scale cloud partners.
Mission critical workloads are running in Public Cloud today. With Zerto 5.0 the company introduced the Mobile App, One-to-Many replication, replication to Azure and the 30-Day Journal. Zerto 5.5 was announced in August with replication from Azure, advancements in AWS recovery performance and Zerto Analytics & Mobile enhancements.

With 5.5 Zerto goes to Azure and back. A big part of this feature involved working with Microsoft’s API’s to convert VMs back from Azure. This meshes well with Microsoft’s strategy of enabling customers to scale up and down. Coming soon is region-to-region replication within Azure.

With the AWS enhancements, Zerto worked with Amazon to review why the existing default import process was so slow. In doing so they learned how to improve and enhance the replication so that it runs six times faster. AWS import is still there, but now zerto-import or ‘zimport’ is used to support larger volumes while the native AWS import handles the OS volume. You can also add a software component to the VM to further enhance the import and achieve that six-fold improvement.

Zerto Analytics and Zerto Mobile provide cross-device, cross-platform information delivered as a SaaS offering. Right now the focus is on monitoring, so you can understand how prepared you are for any contingency within or between datacenters. These analytics are near real-time. As Zerto Analytics has been built on cloud technologies, it will follow a continuous release cycle. One new feature is RPO history, which shows how effectively you have been meeting your SLAs.

The next release is targeted for the mid-February timeframe and will deliver the same replication out of Amazon as well as the Azure inter-region replication. They are moving towards regular six-month product releases, each with a targeted set of features.

In H2 2018 and beyond they are looking at KVM support, container migrations, VMware Cloud on AWS and Google Cloud support. Zerto’s overall strategy is to be the any-to-any migration company.

Dmitri Li, Zerto’s Systems Engineer in Canada, takes the stage and mentions that we now live in a world that operates 24/7. It is important to define DR not as a function of IT but as a way to understand what is critical to the business. For this it is important to focus on a Business Impact Analysis so you can properly tier applications by RTO/RPO.

You also need to ensure your DR strategy is cost effective and does not violate your governance and compliance requirements. When you lay out this plan it needs to be something you can execute and test in a simple way to validate it works.

Another important change, besides round-the-clock operations, is that we are protecting against a different set of threats today than we were in the past. Cybercrime is on the rise. With ransomware, 70% of businesses pay to try to get their data back. The problem with paying is that once you do, you put yourself on the VIP list for repeat attacks.

Zerto was recognized as the ransomware security product of the year even though it is not a security product. Zerto addresses this using journaling for point-in-time recovery: you can recover a file, a folder, a VM or your entire site to the moments before the attack.

It is important to also look at Cloud as a potential target for your DR strategy. Building another datacenter can be cost prohibitive so hyper-scale or managed service partners like Long View Systems can be better choices.

Friday, September 8, 2017

VMworld 2017: Futures with Scott Davis EVP of Product Engineering at Embotics

I had a great discussion with Scott Davis, the EVP of Product Engineering and CTO of Embotics, at VMworld 2017. Scott was kind enough to share some of the future-looking innovations they are working hard on at Embotics.

"Clearly the industry is moving from having virtualization or IaaS centric cross-cloud management platforms to more of an application-centric container and microservices focus. We really see customers getting serious about running containers because of the application portability, microservices synergy, development flexibility and seamless production scale-out that is possible. At the same time they are reducing the operational overhead and interacting more programmatically with the environment.

When we look at the work VMware is doing with Pivotal Container Service, we believe this is the right direction, but we think the key is really enhanced automation for DevOps. One of the challenges pointed out is that while customers are successfully deploying Kubernetes systems for their container development, production operation can be a struggle. Often the environment gets locked in stasis because the IT team is wary of upgrades in a live environment.

At Embotics we are all about automation. With our vCommander product we have a lot of intelligence that we can use to build a sophisticated level of iterative automation. So let's take that challenge and think about what would be needed to execute a low-risk DevOps migration. You would probably want to deploy the new Kubernetes version and test it against your existing set of containers. This should be sandboxed to eliminate the risk to production, validated, and then the upgrade should be fully automated.”

Scott proceeds to demonstrate a beta version of the Embotics Cloud Management Platform ‘CMP 2.0’ automating this exact set of steps across a Kubernetes environment and then rolling the changes forward to update the production environment.

“I think fundamentally we can deliver true DevOps, speeding up release cycles, delivering higher quality and providing a better end user experience. In addition we can automatically pull source code out of platforms like Jenkins, spin up instances, regression test and validate. The test instances that are successful can be vaporized, while preserving the ones that are not so that the issues can be remediated.

We are rolling this out in a set of continuous software releases to our product so that as customers are integrating containers, the Embotics ‘CMP’ is extended to meet these new use cases.

We realize as we collect a number of data points spanning user preference, IT specified compliance rules and vCommander environment knowledge across both enterprise and hyper-scale targets like Azure and AWS that we can assist our customers with intelligent placement suggestions.”

Scott switches to a demo in which the recommended cloud target is ranked by the number of stars in the beta interface.

“We are building it in a way that allows the customer to adjust the parameters and their relative importance, so if PCI compliance is more important they can adjust a slider in the interface and our ranking system adjusts to the new priority. Things like cost and compliance can be set to be either relative or mandatory, to tune the intelligent placement according to what the customer views as important."

Clearly Embotics is making some innovative moves to incorporate a lot of flexibility in their CMP platform. Looking forward to seeing these releases in the product line with cross cloud intelligence for containers and placement.

VMworld 2017: Interview with Zerto’s Don Wales VP of Global Cloud Sales

It is a pleasure to be here with Zerto's Don Wales, Vice President of Global Cloud Sales, at VMworld 2017. Don, this is a big show for Zerto; can you tell me about some of the announcements you are showcasing here today?

"Sure Paul, we are extremely excited about our Zerto 5.5 release. With this release we have introduced an exciting number of new capabilities. You know we have many customers looking at Public and Hybrid Cloud Strategies and at Zerto we want them to be able to leverage these new destinations but do so in a way that is simple and straightforward.  Our major announcements here are our support for Fail In and Out of Azure, Increase AWS capabilities, Streamlined and Automated Upgradability, significant API enhancements and BI analytics.  All these are designed for a much better end-user experience.

One piece of feedback that we are proud of is when customers tell us that Zerto does exactly what they need it to do without a heavy engineering cost for installation. You know Paul, when you think about taking DR to a cloud platform like Azure it can be very complex. We have made it both simple and bi-directional: you can fail into and out of Azure with all the capabilities our customers expect from Zerto, like live failover, 30-day journal retention and journal-level file restore.

We also recognize that Azure is not the only cloud target our customers want to use. We have improved the recovery times to Amazon Web Services: in our testing we have seen a 6x improvement in the recovery to AWS. Zerto has also extended its support to the AWS regions in Canada, Ohio, London and Mumbai.

All this as well as continuing to enhance the capabilities that our traditional Cloud Service Providers need to make their customers experience simple yet powerful."

Don, with your increased number of supported Cloud targets and regions how do you ensure your customers have visibility on what's going on?

“Paul, we have a SaaS product called Zerto Analytics that gives our customers complete visibility on-premises and in public clouds. It does historical reporting across all Zerto DR and recovery sites. It is a significant step forward in providing the kind of business intelligence that customers need as their environments grow and expand.”

Don, these innovations are great; it looks like Zerto is going to have a great show. Let me ask you: when Don's not helping customers with their critical problems, what do you do to unwind?

“It’s all about the family Paul. There is nothing I like better than relaxing with the family at home, and being with my wife and twin daughters.  One of our favorite things is to spend time at our beach house where our extended family gathers.  It’s a great chance to relax and get ready for my next adventure.” 

Many thanks for the time Don, it is great to see all the innovations released here at VMworld 2017.

Friday, September 1, 2017

VMworld 2017: Interview with Crystal Harrison @MrsPivot3

I had the pleasure of spending a few moments with Crystal Harrison, Pivot3’s dynamic VP of product strategy “@MrsPivot3”.

Crystal, I know Pivot3 from the innovative work you have been doing in security and surveillance. How has the interest been from customers in the move to datacenter and cloud?

“You know, with the next-generation release of Acuity, the industry’s first priority-aware hyper-converged infrastructure “HCI”, the demand has been incredible. While we started with 80% of the product being applied to security use cases, we are now seeing a distribution of approximately 60% applied to datacenter and cloud with 40% deriving from our security practice. This is not due to any lack of demand on the security side; it is just that demand on our cloud and datacenter focus has taken off with Acuity.

“We are pushing the boundaries with our HCI offering, as we are leveraging NVM Express “NVMe” to capitalize on the low latency and internal parallelism of flash-based storage devices. All this is wrapped in an intuitive management interface controlled by policy.”

How do you deal with tiering within the storage components of Acuity?

“Initially, the policies manage where the data or instance lives in the system. We have the ability to dynamically reallocate resources in real time as needed. Say, for example, you have a critical business application that is starting to demand additional resources: we can recapture them from lower-priority, policy-assigned workloads on the fly. This protects your sensitive workloads and ensures they always have what they need.”

How has the demand been from Cloud Service providers?

“They love it. We have many flexible models, including pay-by-the-drip metered and measured cost models. In addition, the policy engine gives them the ability to set and charge for a set of performance-based tiers for storage and compute. Iron Mountain is one of our big cloud provider customers. What is really unique is that because we have lightweight management overhead and patented erasure coding, you can write to just about every terabyte that you buy, which is important value to our customers and service providers.”

Crystal, it really sounds like Pivot3 has built a high value innovative solution. Getting away from technology, what does Crystal do to relax when she is not helping customers adopt HCI?

My unwind is the gym. After a long day a good workout helps me reset for the next.

Crystal, it has been a pleasure, great to see Pivot3 having a great show here at VMworld 2017.

Thursday, August 31, 2017

VMworld 2017: Dr. Peter Weinstock, Game Changer: Life-like rehearsal through medical simulation

Dr. Peter Weinstock is an Intensive Care Unit physician and Director of the Pediatric Simulator Program at Boston Children's Hospital/Harvard Medical School.

Peter wants to talk about game changers in medicine. Peter looks after critically ill children and is interested in any advance that helps his patients. He references a few past game changers, such as antibiotics: with their discovery we were able to save patients that we could not before. Another game changer was anesthesia, which allows us to deliver surgeries that were not possible before.

A game changer moves the bar on the outcomes for all patients. Peter’s innovation is Life-like rehearsal through medical simulation.

The problem in pediatrics is that rare medical emergencies do not happen often enough to perfect the treatment of and approach to them. Medicine is also an apprenticeship in which we are often practicing on the very patients we are treating.

In other high-stakes industries, simulation and practice are foundational. Take for example the airline industry: when they looked at bad outcomes it was often a lack of communication in a crisis. The medical industry is not immune to this freezing up or lack of interaction within the whole team. Airline simulators are used to help the cockpit crew practice interaction and their approach to various emergencies.

So how do we bring these methods to the medical industry? In Boston they have a full 3D simulator so the team can practice before the actual surgery. Through 3D printing and special effects typically found in the movie industry, they are able to create surgery simulators with incredible authenticity.

Prior to this, one of the real surgical practice techniques involved making an incision in an actual pepper and removing the seeds. By creating a simulation they are able to practice and drill, bringing techniques common in other high-risk industries to the medical field. Pictured below is Peter with one of the medical simulators; notice the realism.

[image: Peter with one of the medical simulators]

We do not stop at simulation; we also look at execution. Adopting the team approach used by Formula One pit crews for quick, efficient, focused effort and communication, they drill the team. This enables the surgical team to reach a level of efficiency not previously possible. This is a game changer in the medical field.

VMworld 2017: Rana el Kaliouby of Affectiva and Emotional AI

Rana el Kaliouby (@kaliouby), co-founder of Affectiva, takes the stage.
Affectiva’s vision is that one day we will interact with technologies the way we interact with people. In order to achieve this, technologies must become emotionally aware. This is the potential of emotional AI: enabling you to change behavior in positive ways. Rana mentions that today we have things like emoticons, but these are a poor way to communicate emotions; they are all very unnatural ways to interact. Even voice AIs tend to have a lot of smarts but no heart.
Studies have shown that people rage against technologies like Siri because they are emotionally devoid. There is also a risk that interaction with emotionally devoid technology causes a lack of empathy in human beings.
Affectiva’s first foray was wearable glasses for autistic people, providing emotional feedback on human interactions, since autistic people struggle to read body cues. They are now partnering with another company to make this commercially available using Google Glass.
Rana’s demo shows the technology profiling facial expressions in real time. They do this by using neural networks fed thousands of facial expressions so the AI can recognize different emotions. They now have the largest network of facial expression data. The core engine has been packaged as an SDK to allow developers to add personalized experiences to their applications.
Some of the possible use cases are personalizing movie feeds based on emotional reaction. Another use case is for autonomous cars to recognize if the driver is tired or angry. They have also partnered with educational software companies to develop software that adapts based on the level of engagement of the student.
The careful use of this technology is why Affectiva has created the Emotion AI Summit. The Summit will explore how emotion AI can move us to deeper connections with technology, with businesses and with the people we care about. It takes place at the MIT Media Lab on September 13th.

VMworld 2017: General Session Day three: Hugh Herr MIT Media Lab Biomechatronics

Hugh Herr (@hughherr) takes the stage and mentions that prosthetics have not evolved a great deal over the decades; they remain passive, with little innovation. Hugh lost both legs to frostbite in a mountain climbing accident in 1982. During the post-op, Hugh mentioned that he wanted to mountain climb and was told by the doctor it would never happen. The doctor was dead wrong: he did not understand that innovation and technology are not static but transient, growing over time.


Hugh actually considered himself lucky: because he was a double amputee he could adjust his height by creating taller prosthetics. Hugh references the biomechatronic limbs he is wearing on stage, which have three computers and built-in actuators. Hugh’s passion is running the Center for Extreme Bionics. Extreme bionics is anything designed for or implanted into the body that enhances human capability.

Hugh explains that when limbs are amputated, surgeons fold over the muscle so there is no sensory feedback to the patient. Dr. Herr and team have developed a new way of amputating in which surgeons create a small joint by ensuring there are two muscles working to expand and contract. With this new method the patient can ‘feel’ a phantom limb. By adding a simple controller you can track sensory movement that can be relayed to a bionic limb.

What they learned is that if you give the spinal cord enough information, the body will intrinsically know how to move. But what about paralysis? The approach right now is to add a cumbersome exoskeleton to enable the ability to move. Work is being done to inject stem cells into severed spinal cords, with the results being an incredible return of mobility.

Dr. Herr and team are also testing crystals embedded in muscles to relay information, along with light emitters to control muscles. In this way they can build a virtual spinal column of sensors, enabling mobility that was once considered impossible.

Hugh mentions that they are going to advance from their current foundational science in biomechatronics to eliminate disabilities and augment physicality. It is important that we develop policies that govern the use of this technology so that it is used ethically.

Wednesday, August 30, 2017

Transforming the Data Center to Deliver Any Application on Any Platform with Chris Wolf @cswolf

It's not just about cloud; it's also about bringing services to the edge. Why does the edge matter? Well, your average flight, if it is using IoT, generates 500 GB of data. How do we mine that data when we are turning planes around so quickly? This is creating a huge pull for the edge. Edge mastery is a new competitive advantage.

VMware is focused on speed, agility, innovation, differentiation and pragmatism. VMware also realizes that hyper-scale cloud platforms are not right for every use case. Public cloud provides great speed but it is sticky, and for application agility on public cloud there is a tendency for operational drift. VMware's approach is to have globally consistent infrastructure-as-code.

Nike is showcased for how they leverage NSX: they use NSX to securely deploy development environments. In addition, they run a true hybrid environment with services in Azure and AWS. They are looking at VMware Cloud on AWS to shutter a legacy datacenter and move it wholesale into an AWS region in the West, and then likely migrate it to the eastern region to bring it closer to dependent applications, reducing overall latency.

To get those new cloud capabilities you need to be current, and you can buy this with VMware Cloud Foundation because the lifecycle upgrades are managed for you. Yanbing Li takes the stage to talk about vSAN 6.6, which was recently released. Hyper-converged infrastructure "HCI" really breaks down the silos within the datacenter. Three hundred of VMware's cloud service partners are leveraging vSAN in their datacenters today, and VMware is seeing customers use vSAN cost savings to fund their SDDC initiatives. vSAN has hit an important milestone with 10,000 customers.

Tuesday, August 29, 2017

Great Executive Q & A with Pat Gelsinger CEO of VMware, Andy Jassy CEO of AWS Cloud Services and Sanjay Poonen COO of VMware 



Question from the press corps: "In the General Session, VMware's strategy was consistent infrastructure and operations; what does VMware mean by the term consistent infrastructure?"

Pat "There are 4400 VMware's vCloud Air Network "vCAN" partners providing Public Cloud Services to our customers. With the AWS partnership, customers can extend services to Amazon. This is all done leveraging VMware's management tools. It is this consistent infrastructure and operations that we were discussing in the general session. In addition we are developing other cloud services but these are likely to come to market as 'bit size' services to solve a particular challenge. We believe this approach makes it easier for customers to adopt."

Michael "VMware has 500,000 customers that the services being developed by VMware are directly applicable to which is a huge portion of the market." 

Question from Paul O'Doherty @podoherty: "Public Cloud gets sticky with serverless architectures while VMware is really focused on the infrastructure; are you discussing other areas where VMware can add value to Amazon, and can you elaborate?"

Pat "Well what we have announced today is a very big achievement but it really has kicked off an extensive collaboration involving a huge portion of the engineering talent at Amazon and VMware. While it would be premature to talk about anything at this point, we expect today's announcement to be one of many moving forward with Amazon."

Question from press core: "If in the new Cloud economy it is all about the apps, then it would seem that the partnership favours Amazon over the long term; can you comment?"

Pat "At VMware we do not see it that way. This is an opportunity for VMware to continue to add value as a part of a strong and ongoing partnership. When you think about moving applications to the cloud, often this involves some heavy engineering. Refactoring or Re-platforming an application, if it is not essentially changing does not add a significant amount of value. This set of services announced today adds a lot of value to our customers. Today VMware is providing management and metrics to applications but this is also the start of a joint roadmap with many more products and announcements that will be more app orientated"

Question from press core: "What is the benefit from the partnership from Amazons perspective?"

Andy: "Everything that we decided to pursue was not lightly considered. What carries the day is what customers want from us. When we look at the adoption of Public Cloud, enterprise is still at the relative beginning of their journey with some notable exceptions. Most are in the early days of their journey. When we spoke to customers about their Cloud Strategy we were asked "Why are you not working with VMware?" It really was the impetuous for these discussions. Customer feedback and excitement is tremendous around this partnership."

Question from press core: "As customers are heavily penalized for egress traffic from Public Cloud, are there any concerns that this on-boarding tends to flow one way around the VMware and Amazon partnership?"

Andy: "For customers who are serious about the adoption of a hybrid cloud platform, while egress traffic is a consideration, it is not a roadblock in the adoption."

Pat:"I do see customers also approaching architecture a little differently. For example now they have to build for average and peak load in a single environment. With a true hybrid platform it is possible to build for average workload while leveraging AWS for peak capacity demand"

AWS Native Services Integration with VMware Cloud on AWS with Paul Bockelman @boxpaul & Haider Witwit

VMware Cloud on AWS has a tremendous amount of capability. This session will focus on some of the ninety "90" services available through VMware Cloud on AWS. We will start with a recap of VMware Cloud on AWS and then look at a sample use-case. The three core services within VMware Cloud on AWS are vSphere on bare metal, NSX and vSAN. This allows you to extend your enterprise datacenter. It integrates through linked mode in vCenter as a separate site. In addition you have access to AWS integrations like CloudFormation templates.

Every customer gets their own account with single-tenant dedicated hardware. The deployment is automated and stood up for you, taking approximately two "2" hours. The minimum configuration is a four "4" node cluster. It is connected to an AWS VPC through an NSX compute gateway. VMware recommends that you configure CloudWatch for monitoring your endpoints. Services in the VMware cluster can connect directly to services in the AWS VPC.
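
For example, a minimal sketch of the recommended endpoint monitoring, using Python and boto3 with a hypothetical instance ID and SNS topic (replace both with your own):

    import boto3  # AWS SDK for Python

    cloudwatch = boto3.client("cloudwatch", region_name="us-west-2")

    # Alarm on sustained high CPU for an endpoint the SDDC depends on.
    cloudwatch.put_metric_alarm(
        AlarmName="sddc-endpoint-cpu-high",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Average",
        Period=300,               # five-minute samples
        EvaluationPeriods=2,      # two consecutive breaches
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-west-2:123456789012:ops-alerts"],
    )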


This allows you to create integrated architectures in which some components live in the VMware SDDC alongside native AWS services like AWS Storage Gateway, EC2 instances, AWS Certificate Manager and CloudWatch. In addition you can blend serverless architectures like Lambda with the VMware SDDC, as sketched below.
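
As an illustration of that blend, a minimal Python Lambda handler could forward events to an application running in the SDDC; the API path here is a hypothetical one on the sample application linked below:

    import json
    import urllib.request

    # Hypothetical API path on an application hosted in the VMware SDDC,
    # reachable from the VPC the Lambda function is attached to.
    SDDC_APP_URL = "http://demo1-app1.vmw.awsdemo.cloud/api/orders"

    def handler(event, context):
        # Forward the triggering event (e.g. an API Gateway payload)
        # to the application running in the SDDC.
        payload = json.dumps(event).encode("utf-8")
        req = urllib.request.Request(
            SDDC_APP_URL, data=payload,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req, timeout=5) as resp:
            return {"statusCode": resp.status,
                    "body": resp.read().decode("utf-8")}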

A sample architecture with documentation and its integration points can be found at the following links:

http://demo1-app1.vmw.awsdemo.cloud/
http://demo1-app2.vmw.awsdemo.cloud/ 




NSX & App Defence: Transform Network Security with MilinDesai @virtualmilin posted by @podoherty

This session will focus on transforming network and security. Christopher Frenz from Interfaith Medical Center starts with the message that healthcare is a target because healthcare records can be used for identity theft. Combine this with an environment that has a lot of legacy applications and you have a very difficult environment to protect. In addition, medical organizations tend to keep their devices for an extended period of time. For example, WannaCry infected a large number of medical devices in the US healthcare industry.

One of the misconceptions is that compliance equals security, which it does not. Compliance requirements are often dated and should really be viewed as a lowest common denominator. In looking at the challenges in Interfaith's environment, they realized that a lot of attacks happen through lateral movement. By leveraging NSX they were able to move to a zero-trust environment. Currently VMware has 2,900 customers using NSX.

In adopting NSX, they started with their core network services like DNS because the protocols were understood and easy to configure policies on. From the general widespread services, they went up the food chain to more specialized systems. They are now looking at AppDefense to add an additional level of security beyond creating a zero trust environment. This is part of a more comprehensive defence in depth strategy that they are applying.

AppDefense captures the behaviour of the application, as the hypervisor sees all activity related to the virtual machine. In addition, provisioning and application frameworks are queried for additional information. Then the virtual machine is profiled to ensure there is a complete understanding of its behaviour. What you wind up with is a very small number of components that need to be validated. These become the manifest that determines the purpose of the VM and which applications are served from it.

AppDefense monitors the guest in realtime against the manifest. This is the AppDefense monitor. If there is a signal that what is running is not intended, you have the option of determining what you want to do. This is done through a response policy.
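
Conceptually, the manifest-and-monitor loop looks something like the following sketch; this is illustrative Python only, with invented names, not the AppDefense implementation:

    # Manifest of known good behaviour captured for this VM.
    MANIFEST = {
        "processes": {"nginx", "gunicorn", "postgres"},
        "outbound_ports": {443, 5432},
    }

    def check_guest(observed_processes, observed_ports, response_policy):
        """Flag anything outside the manifest and hand it to the policy."""
        rogue_procs = set(observed_processes) - MANIFEST["processes"]
        rogue_ports = set(observed_ports) - MANIFEST["outbound_ports"]
        if rogue_procs or rogue_ports:
            # The reaction (alert, snapshot, isolate...) comes from the
            # response policy rather than being hard-coded in the monitor.
            response_policy(rogue_procs, rogue_ports)

    def alert_only(procs, ports):
        print(f"Deviation from manifest: processes={procs}, ports={ports}")

    check_guest(["nginx", "cryptominer"], [443, 6667], alert_only)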

Centene is invited on stage to deliver their story. In order to make forward progress the customer created a separate Cloud team. While they knew the technology they were interested in, they could not make progress in the old model. They dedicated a team of four "4" architects and one engineer to be fully focused on Cloud services. Their mantra was to ensure everything they delivered to the business was completely automated. To achieve their goals they deployed vRealize Automation along with NSX, with a heavy focus on security policies.

General Session Day Two with Pat Gelsinger, Michael Dell and Ray O'Farrell

Pat welcomes the audience and calls Michael Dell, CEO of Dell Technologies, onto the stage. It is a bit of a fireside chat with a number of questions from the audience.

The first question was a concern about VMware support, to which Pat mentions VMware Skyline, which provides proactive and predictive support to customers.

The next question is about VMware & Dell's plans around IoT and Big Data. Michael mentions that digital transformation is a CEO-level discussion and concern. If you are not looking at how you use data to enhance your business, you are doing it the wrong way. Dell and VMware have been reimagining their products to take advantage of IoT and Big Data while addressing both their larger and SMB markets and supporting their partners. The focus is on making the VMware ecosystem even more open moving forward through products like VMware Pulse IoT Center.

The final question is around the synergies between VMware, Dell and EMC. Michael mentions that the more they do together, the better things get. With every product release the integration gets deeper; however, it is being done in a way that supports the ecosystem of partners. Michael mentions the innovation being done by customers in leveraging containers to drive their business.

Rob Mee, the CEO of Pivotal, is introduced and mentions that they have been partnering to build Kubo, which is open-source. Rob then announces Pivotal Container Service "PKS", which includes Kubernetes, Pivotal and NSX as a single product.

Sam Ramji, the VP of Google Cloud Platform, is introduced to speak about containers. Google has been running containers at scale for some time and sees container adoption skyrocketing. Sam believes "PKS" is important as it enables a hybrid infrastructure for running containers. It enables customers to put containers where they need them, with support from Google and VMware.

Ray O'Farrell, the CTO of VMware, is introduced. VMware has a few guiding principles, like having the most modern infrastructure possible. VMware also wants to be pragmatic in how it develops products, maintaining their operational aspects. As customers have asked for new models, VMware now has SaaS products.

Ray begins with a fictitious company, "Elastic Sky Pizza". They need to undergo a digital transformation. The company integrates Cloud Foundation with AppDefense, vRealize Suite and VMware Cloud on AWS. When we think about our options, Public Cloud is great for getting things done quickly. VMware infrastructure can provide a consistent experience by having a consistent environment.



The last piece of the puzzle is to layer on VMware AppDefense for security. A demo is shown of the dashboard with a sample application and its known good behaviours. Once we have identified all the good behaviours and turned on the rules, we know the application is protected.

vRealize is shown with the integrated AWS cloud connection. The demo shows deploying an SDDC in AWS, which is easy with the vRealize Operations console. The four "4" node cluster will take approximately two "2" hours to deploy. In addition you can set thresholds for adding additional hosts depending on utilization.

vRealize Network Insight is shown which color codes the complexity of the application depending on the total traffic flows. Green indicates an application with mostly contained network flows. The application is selected for migration and vMotioned into the AWS Cluster.

The demo moves to the "PKS" dashboard and goes through the wizard interface for creating the Kubernetes cluster. The credentials are then shared with the developers. The developer is then able to use native commands to interact with the environment. The last bit of the demo shows the NSX security wrapped around the container networks.
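
Once those credentials are loaded into a kubeconfig, standard Kubernetes tooling works unchanged. A quick sketch with the Kubernetes Python client, assuming the kubernetes package is installed (pip install kubernetes) and a kubeconfig produced by the wizard:

    from kubernetes import client, config

    config.load_kube_config()  # reads the kubeconfig shared by the admin

    # List every pod in the PKS-provisioned cluster with its status.
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces().items:
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)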

The VMware Cloud Services are then promoted:


  1. VM Automation - provides cloud-agnostic blueprints for deployment on any hyper-scale cloud provider.
  2. VMware NSX Cloud - AWS instances are shown through the interface, which provides a consistent view of all the networking.
  3. Wavefront - measures the KPIs for the application and infrastructure; think VMware Log Insight in the cloud.
  4. Workspace ONE Mobile Flows - allows you to automate business processes using automated workflows.

VMware Pulse IoT Center is the last topic the team covers. Pulse IoT Center manages everything from your gateway devices out to your things: the sensors and machines. The technology is based on components from vROps and AirWatch.

Monday, August 28, 2017

VMworld 2017: Delivering New User Experiences with Digital Workspaces with Sumit Dhawan

Sumit Dhawan @sumit_dhawan, the SVP of EUC, mentions that VMware worked hard with partners like Apple, Google and Amazon to deliver a massive amount of innovation this year. If you think about cost, it can only be controlled by standardizing. While we may have control over PCs, Mobile and Cloud individually, we lack control holistically. If we look at the technologies coming to the 'Cloud-to-Edge', like the Internet-of-Things "IoT", this is only going to compound.

If you look at modern OSes like iOS, Android and Windows 10, they all offer APIs for management. Securely communicating with the device using these APIs allows you to provide context. If you pair this with identity, then you begin to understand the application profile of the user. Workspace ONE brings identity and context together in a unique way.

From a management perspective, IT also wants to ensure compliance of all these devices. Within Workspace ONE VMware has created a digital contract between the IT team and the users. Workspace provides one place for the management of all devices. Workspace ONE can be extended for various use cases which will now be demo'd.

Shawn Bass "@shawnbass", the CTO of VMware's EUC platform, is introduced onto the stage. Shawn kicks off the demo, which will demonstrate Windows 10 management via AirWatch. Jason Rosak "@Jasonrosak", the director of product management, explains the demo.

Jason mentions how time intensive the traditional approach of imaging then deploying a desktop is compared to mobile device setup. The demo shows an intuitive setup process with Windows 10 being managed by AirWatch and deploying applications via policy. 

While this deals with the day-one setup issues, it does not deal with application delivery. Shawn announces a new integration for delivering large application packages without having to deploy branch office deployment services. Users are able to self-serve their applications, and through network bandwidth harvesting technology running in the background, localized deployment points are not required. In addition, the demo shows the patch enforcement ability through Workspace ONE.

Dan Quintas, the product manager for Mac integration, is introduced. VMware has simplified the deployment of macOS Sierra. The demo shows a set of new bootstrap tools to deploy applications on a MacBook. Workspace ONE has native support for Mac. The demo shows Visio working on the Mac, leveraging Horizon with its advanced windowing capabilities. This delivers a seamless experience to the MacBook.

VMware is the first to provide comprehensive Chrome management support. This includes the ability to provide a managed Google Play environment on the Chromebook, with the applications curated by the admin team.

Dell is brought onstage to announce Dell EMC VDI Complete, fully packaged solutions for as low as $7 USD/user/month (previously announced at Dell EMC World). In addition, Horizon Cloud can now deliver support on Azure. A demo is shown in which Horizon Cloud is used to add an Azure region; once done, Horizon Cloud pairs with the Azure region. The next step is to upload your image and configure your farm. You can then entitle applications from within Horizon Cloud to your Azure Horizon environment. You can pair your Azure subscription cost with an $8 USD/user/month Horizon Cloud option.

Workspace ONE Intelligence is announced, which allows you to leverage analytics to apply patch and remediation policies to avoid things like the WannaCry exploits.

A technology preview of Mobile Flows is announced, which allows workflows to be integrated into email requests. A demo is shown with Mobile Flows integrated into the VMware Boxer email client. Mobile Flows will extend across a wide range of applications and cloud platforms.

VMworld 2017: VMware Cloud Services presented by Guido Appenzeller, Chief Strategy Officer


67% of VMware customers foresee an end state where they rely on multiple clouds. If you are running in multiple clouds, a key consideration is vendor lock-in. This creates silos with different ways of defining policies, firewalls, etc., with little portability between them.



VMware Cloud Services is about creating cloud-agnostic, cross-cloud management solutions. The current product portfolio consists of the following service offerings:

Discovery - gives you a central database of all services you are consuming across clouds; VMware, AWS and Azure for example. In addition you can tag them.
Cost Insight - allows you to analyze and compare cloud spend, find savings opportunities and show the cost of services to the business.
Network Insight - is now offered as a cloud service. It takes data points from across your enterprise or AWS and allows you to run analyses on them. VMware found customers using this information to plan application migrations and understand the interdependencies.
Wavefront - allows you to take real-time monitoring analytics to the cloud to provide visibility into application health.
NSX Cloud - the SaaS version of NSX; you go online and request a new cluster from AWS, which is deployed (note: at this time this service is only available on Amazon).

These services have been built on the AWS Cloud. Pricing is available at http://cloud.vmware.com. There are two costing models: pay-per-use or prepaid for one to three years.

VMworld 2017: VMware AppDefense with Tom Corn, SVP Security Products


VMware AppDefense is about detecting attacks and automating and orchestrating the response. In addition, there is a significant focus on allowing partners to integrate with VMware's AppDefense framework because of the unique visibility VMware has.



If you think about it, we are trying to protect an application, which is a distributed system. So how do we understand the application beyond just a collection of infrastructure? VMware is not a security company; however, we are focused on secure infrastructure. We asked: can we understand the application and create least privilege on a network so that only the components that should speak together do? Compute really is an enormous attack surface, so we are reducing it with AppDefense. The last piece is: can we architect in third-party security products by giving them context they would not ordinarily have?

Micro-segmentation from NSX is of course a perimeter piece of this. It allows us to draw a logical boundary. AppDefense looks within these boundaries to understand if there is any behaviour deviating from the purpose of the VM. The model today is always chasing bad behaviour, while we are focusing on chasing good because it is more efficient and cost effective.
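
A toy model of that allow-list mindset, in illustrative Python rather than the NSX API: by default nothing talks to anything, and only explicitly allowed flows pass, which is what blocks lateral movement.

    # Explicitly allowed flows between security groups: (src, dst, port).
    ALLOWED_FLOWS = {
        ("web-tier", "app-tier", 8443),
        ("app-tier", "db-tier", 5432),
    }

    def is_allowed(src_group, dst_group, port):
        """Zero trust: deny unless the flow is explicitly allow-listed."""
        return (src_group, dst_group, port) in ALLOWED_FLOWS

    assert is_allowed("web-tier", "app-tier", 8443)
    assert not is_allowed("web-tier", "db-tier", 5432)  # lateral move blocked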

Step one is to capture what the VM should be doing; then monitor against a manifest; the third piece is a library of responses that can be automated. We are leveraging some unique capabilities of virtualization. We capture by plugging into vCenter and then crawling through the provisioning systems. This information is already there in systems like Puppet, Chef and vRA; it's just that customers are not mining the data. We can go a level deeper, looking at processes as well, with technologies like Jenkins.

Once step one is done we trigger the monitoring element so that there is a learning phase. We leverage machine learning to understand the deltas between what was done in provisioning and what is contained within the application instances. The end result is that the application scope, or manifest, is created. In the manifest we understand what this VM should do and which processes do it. The manifest is maintained through updates and patches.

Step two is about how we detect. VMware, at the virtualization level, can monitor outside the guest versus a traditional approach where you have to be on the wire. In step three, uncharacteristic behaviour triggers a set of reactions, such as snapshot or VM isolation, controlled by policy. What do you want to happen when something happens that is not good behaviour?
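
A sketch of what that policy-driven response library might look like, again in illustrative Python; the real product drives vSphere operations such as snapshot and isolation, which are only stubbed as print statements here:

    # Library of automated responses, selected per VM by policy.
    RESPONSES = {
        "alert": lambda vm: print(f"ALERT: unexpected behaviour on {vm}"),
        "snapshot": lambda vm: print(f"Taking forensic snapshot of {vm}"),
        "isolate": lambda vm: print(f"Isolating {vm} from the network"),
    }

    def respond(vm_name, policy):
        """Run every response action the policy names for this VM."""
        for action in policy:
            RESPONSES[action](vm_name)

    respond("app-server-01", policy=["alert", "snapshot"])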

This allows us to have security that responds in the same time frame as the attack. Typically security is a partnership between the security and infrastructure teams; AppDefense is a partnership between the security and application teams.

In addition, there is a mobile app that gets installed so that any alerts on the application can be sent directly to the application team for response and clarification. This allows the application team to partner in profiling the application. Remediating an attack is a lot easier because rather than sifting through tens of thousands of security exploits, we monitor a few expected good behaviours. When they change, the system reacts.

The secret sauce is the ability to peer into the guest, which requires a component that runs in the guest's kernel. This opens up the opportunity to run AppDefense on non-virtualized application components.