AWS Summit SF: Enchiladas, File Systems, and Machine Learning
This week Amazon Web Services kicked off its annual series of “Summit” conferences in San Francisco. The Summits, which are held all over the world, are a chance for the AWS user community to come together and learn about the latest and greatest technologies and services coming out of AWS.
The San Francisco Summit is especially important because it’s the first Summit conference of the year, and its keynote presentation usually sets the stage for what to expect from AWS throughout the year leading up to their re:Invent conference in the fall.
Andy Jassy (Senior Vice President of AWS) delivered this year’s two-plus-hour keynote presentation. Between what was presented on stage and our conversations with hundreds of AWS users in attendance, here are some of the themes we heard and what we think they mean for you.
“Customers Want Access To the Whole Enchilada”
Jassy’s keynote covered a lot of themes, but one that he was careful to emphasize throughout was that customers choosing a public infrastructure cloud are looking for the “whole enchilada”: they require both depth and breadth of offerings. This is the thread that continues to run through the how and why of AWS’s product roadmap. If you look at the evolution of AWS, you can easily see the breadth part. They provide everything from compute and storage to virtual desktops. However, it’s the depth that Jassy continued to emphasize, pointing out that AWS has seven families of compute instances, and that they offer not just one type of managed database, but five.
Based on how many ways Jassy emphasized this, I think it’s safe to assume that AWS will continue to deepen its offerings in each of the core service categories. The San Francisco announcements and the recent introduction of the D2 instance family are good examples of this.
Elastic File Systems and Machine Learning
Some will argue that by breaking out the financial results of AWS from the rest of Amazon, AWS now needs to educate the broader market about AWS’s competitive advantage versus other public clouds that purport to offer similar services. AWS is hoping to capitalize on the fact that it began pioneering the public infrastructure cloud category 11 long years ago. As Jassy says, “there is no compression algorithm for experience.”
As an AWS user and partner, I think there is more to the story.
Let’s take a look at two new technologies that were introduced at the San Francisco summit.
This week at the Summit, AWS announced the release of a long-requested feature that’s sure to make many system architects and developers happy: the Elastic File System (EFS). Up until the release of EFS, there was no easy way for EC2 instances to share a common file system. However highly scalable, each EC2 instance was an island, and files and code had to be replicated to each instance individually.
This new file system option will more easily enable applications that rely on common file system architectures to be deployed to EC2. Scaling a common shared file system across many compute instances is a very difficult technical problem to solve, which is why we are only seeing EFS come about now.
You might mistake the EFS announcement as a minor technical nuance for EC2, but I can assure you it’s far more important than you think. For example, any technical team that runs a WordPress- or Drupal-based website is probably fist-bumping as you read this. Little-known fact: WordPress and Drupal instances, which are more easily managed and scaled with the use of a common file system, power roughly 20% of the websites on the Internet.
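To give a flavor of what this looks like in practice: EFS exposes a standard NFSv4 endpoint that instances can mount like any other network file system. Here’s a rough sketch of mounting a file system from an EC2 instance; the file system ID (“fs-12345678”) and region are placeholders, and the exact DNS name and mount options come from the EFS console for your own setup:

```shell
# Install an NFS client (package name shown is for Amazon Linux;
# it varies by distribution)
sudo yum install -y nfs-utils

# Create a mount point and mount the EFS file system over NFSv4.1.
# "fs-12345678" and "us-west-2" are hypothetical placeholders --
# substitute your own file system ID and region.
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 \
  fs-12345678.efs.us-west-2.amazonaws.com:/ /mnt/efs

# Anything written here is visible to every other instance that
# has mounted the same file system.
echo "shared asset" | sudo tee /mnt/efs/example.txt
```

For a WordPress or Drupal fleet, pointing the uploads or files directory at a shared mount like this is what eliminates the per-instance replication dance.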
The second major announcement was Amazon Machine Learning: a new service that makes it easy to develop and deploy machine learning models against all your big data stored in AWS. This is a significant announcement in that machine learning has long been limited to those who have an in-depth understanding of statistical analysis and algorithm development. Amazon Machine Learning is attempting to remove that barrier by making it relatively painless for users to create classification or prediction models.
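The workflow the service encourages is simple: point it at training data in S3, ask it to train a model, and then request predictions. A hedged sketch of that flow using the AWS CLI follows; the bucket names and IDs are hypothetical, and the schema file describes your columns and which one is the target:

```shell
# 1. Register training data stored in S3 as a datasource.
#    Bucket, file, and ID names below are made up for illustration.
aws machinelearning create-data-source-from-s3 \
  --data-source-id example-training-data \
  --data-spec 'DataLocationS3=s3://example-bucket/training.csv,DataSchemaLocationS3=s3://example-bucket/schema.json' \
  --compute-statistics

# 2. Train a binary classification model from that datasource
#    (other model types include REGRESSION and MULTICLASS).
aws machinelearning create-ml-model \
  --ml-model-id example-model \
  --ml-model-type BINARY \
  --training-data-source-id example-training-data

# 3. Stand up a real-time endpoint and request a prediction for a
#    single record once the model has finished training.
aws machinelearning create-realtime-endpoint \
  --ml-model-id example-model
aws machinelearning predict \
  --ml-model-id example-model \
  --record '{"some_feature": "some_value"}' \
  --predict-endpoint https://realtime.machinelearning.us-east-1.amazonaws.com
```

The notable thing is what’s absent: no algorithm selection, no feature scaling, no model tuning loop. That’s the barrier removal Jassy was talking about.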
Both of these new offerings are fuel for Jassy’s argument about the need for depth, and a clear example that AWS will go very deep to create value within existing categories, as it is doing here with compute and big data. AWS is clearly aiming at removing technical barriers (both large and small) that prevent business value from being realized on their core infrastructure.
Companies are going “all-in” on the cloud
Amazon has been clear that they see the hybrid cloud as a temporary stop on the journey to moving all workloads to the public cloud. As AWS offerings and technologies mature, they are counting on there being fewer legitimate objections to public cloud usage by enterprises. Jassy reiterated this in his keynote, and we also see our own enterprise customers, such as General Electric, going “all-in on the cloud.”
One of the big barriers to public cloud adoption that we continue to see is the need for tools and services that enable enterprises to manage costs and implement usage governance in a world where public cloud services can be programmatically purchased by development or operations teams outside of traditional IT procurement processes.
Once a company implements cost management tools and practices like those that we provide here at Cloudability, the benefits of public cloud usage can be accessed by all teams within an enterprise—quickly leading to the acceleration of the “all-in” effect. Based on what AWS is announcing and what I know we are developing, I think it’s safe to say that this trend is going to continue to pick up speed.
So that’s our take on the San Francisco Summit. Stay tuned for more dispatches on these trends as we travel with AWS around the globe to the London, Sydney, and New York Summits.