What is Amazon’s Secret for Success and Why is EC2 a Runaway Train?

Posted by Randy Bias


We can all see it: Amazon's continued growth. The 'Other' line in their revenue reports is now the #1 area of growth for Amazon, even above consumer electronics. Their latest 10-Q reported 87% year-over-year growth, well ahead of the consumer electronics business. Per predictions from myself, UBS, and others, AWS remains on track for 100% year-over-year growth, revenues in the $1B range for 2011, and no end in sight to the high-flying act.

I'll repeat what I have said in other venues: at this rate, AWS itself will be a $16B business by 2016. If this comes to fruition, AWS will be the biggest infrastructure and hosting business in the world, bar none, and it will have achieved that in a mere 10 years. For comparison, Salesforce.com took 10 years to reach $2B in revenue. You just don't see runaway trains like this very often.

So what is the secret of Amazon's success? Is it providing on-demand servers for $0.10/hr? Being the first mover and market maker? Brand-name recognition? Or something else?

Perhaps more importantly, why can’t anyone replicate this success on this scale?

Cloud Punditry Fails You
The #1 reason, in my mind, that folks can't easily understand Amazon's secrets is that they don't want to. The majority of cloud pundits work for large enterprise vendors with a significant investment in muddying the waters. For the longest time, cloud computing was identified primarily as an outsourcing play and relegated to the 'on-demand virtual servers' bucket. Now there is a broad understanding that we're in the midst of a fundamentally transformative and disruptive shift.

The problem with disruptions is that the incumbents lose, and when the majority of cloud pundits are embedded marketeers and technologists from those same incumbents, we have a recipe for cloud-y soup.

Amazon’s Secret(s)
The reality is that Amazon has a number of secrets, but probably only one that matters. We can certainly point to their development velocity and ability to build great teams. We could also point to their innovative technology deployments, such as the Simple Storage Service (S3). Being a first mover and market maker matters, but other businesses were engaged in "utility computing" before it was cool. Is it just timing? Or a combination of timing and product/market fit?

All of these items matter, but probably the number one reason for Amazon’s success isn’t what they let you do, but what they don’t let you do.

<blink /> …. Say what?

Amazon’s #1 secret to acquiring and retaining customers is simplicity and reduction of choice. This is also the single hardest thing to replicate from the AWS business model.

Cloud Readiness … Wasn’t
Prior to AWS, the notion of 'cloud ready' applications did not exist. Amazon inadvertently cracked open the door to a new way of thinking about applications by being self-serving. Put simply, AWS reduced choice by simplifying the network model, pushing responsibility for fault tolerance onto the customer (e.g. local storage of virtual servers is not persistent, and neither are the servers themselves), and forcing the use of automation for applications to be scalable and manageable.
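What "pushing fault tolerance onto the customer" looks like in practice is applications that assume any server can vanish. A minimal sketch of that design-for-failure habit is a retry-with-backoff wrapper (hypothetical illustration, not AWS code; the names here are my own):

```python
import random
import time


class TransientError(Exception):
    """Stand-in for a vanished instance or dropped connection."""


def with_retries(operation, attempts=5, base_delay=0.1):
    """Run an operation that may fail transiently, retrying with
    exponential backoff plus jitter -- the burden a non-persistent
    infrastructure shifts onto the application."""
    for attempt in range(attempts):
        try:
            return operation()
        except TransientError:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the failure
            # Back off exponentially, with jitter to avoid thundering herds.
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))
```

In a pre-cloud enterprise datacenter this logic lived in redundant hardware; in EC2 it lives in every application that wants to stay up.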

The end result is what we now think of as cloud-ready applications. Examples include Netflix, Animoto, Zynga, and many others. By turning the application infrastructure management problem into one that could be driven programmatically, educating developers on this new model, and then providing a place where developers could have “as much as you can eat” on demand, they effectively changed the game for the next generation of applications.
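To make "driven programmatically" concrete, here is a minimal sketch of requesting a fleet through the EC2 API using today's boto3 SDK (boto3 postdates this post, and the AMI ID below is a placeholder):

```python
def run_instances_params(image_id, count, instance_type="t2.micro"):
    """Build the parameters for an EC2 RunInstances call.

    Keeping this a pure function makes capacity decisions testable
    without ever touching the API."""
    return {
        "ImageId": image_id,
        "MinCount": count,
        "MaxCount": count,
        "InstanceType": instance_type,
    }


# To actually launch (requires boto3 and AWS credentials):
#   import boto3
#   ec2 = boto3.client("ec2")
#   ec2.run_instances(**run_instances_params("ami-12345678", 10))
```

One API call requests an entire fleet: no ticket, no ops queue, no per-application hardware spec. That is the game-changer for application developers.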

Application developers now understand the value of building their application to fit a particular infrastructure environment, rather than requiring a specific infrastructure environment to prop up their application’s shortcomings.

The Buyer Has Changed In Multiple Ways
So, just as with Salesforce.com and SaaS, the buyer is shifting from centralized IT departments to the application developer. We’ve all long known this, but I think what has confused many of the incumbents is that there is a seeming paradox here.

Within the typical enterprise datacenter, developers have long been one of the drivers for the ongoing and painful “silo-ization” of enterprise applications. New applications enter the datacenter and custom infrastructure is provided per ‘requirements’ from the application developer. This has been a pattern for 25+ years which is now becoming an anti-pattern. Now, the application developer has the choice: fit the infrastructure to the app or fit the app to the infrastructure (aka ‘cloud-readiness’).

Put another way: push the risk to the centralized IT department and manage it indirectly with 'requirements', or accept the risk onto the application and manage its infrastructure directly and programmatically.

All application developers want to be in control of their apps and their destiny. Combine this with the structural problems inherent in most centralized IT departments' fulfillment and delivery capabilities, and the choice seems clear: get it done now, for cheap, under my own control, or push the risks out to a group I don't control or manage, with unknown delivery dates and costs.

Developers from all kinds of businesses have voted with their pocketbooks, in droves, and Amazon EC2 is a runaway train because of it.

Amazon’s Secret Explained
To some, it seems like Amazon has missed a clear opportunity: mimicking the enterprise datacenter.

Bad Amazon, don’t you understand that what developers really want in a public cloud is exactly what they have in their own datacenters today?

Except that isn’t true!  Amazon EC2 is a fabulous service that empowers developers by reducing and systematically removing choice. Fit the app to the infrastructure, not the infrastructure to the app, says AWS. But why? It may not seem apparent, but the reason Amazon has simplified and reduced choice is to keep their own costs down. More choice creates complexity and increases hardware, software, and operational costs. This is the anti-pattern in today’s enterprise datacenters. Amazon Web Services, and the Elastic Compute Cloud (EC2) in particular, is the ANTIDOTE to enterprise datacenters.  It is the new emerging de facto pattern for IT rather than the anti-pattern that enterprise datacenters have become.

Modeling enterprise datacenters in public clouds results in expensive, hard-to-run-and-maintain services that aren't capable of the feats EC2 can perform (e.g. spinning up 1,000+ VMs simultaneously) or of growing to its size.

Many pundits, and the incumbents they work for, attempt to position the solution as 'automation'. Haven't we had 30 years of attempts at automating the enterprise datacenter? Wasn't $100M+ poured into companies like Cassatt and OpSource to 'automate' the enterprise datacenter?

Here’s another part of the secret: automating homogeneous systems is 10x easier than automating heterogeneous systems. You can’t just add magical automation sauce to an existing enterprise datacenter and *poof* a cloud appears.

Amazon’s simplification of their infrastructure, and hence reduction of choice for customers, has resulted in an ability to deliver automation that works.

The secret *is* simplicity, not complexity.

The new pattern *is* a homogeneous, standardized and tightly integrated infrastructure environment.

AWS's success is *because* they ignored the prevailing anti-pattern in enterprise datacenters.

A Brief Aside on AWS VPC
The astute observer will recognize that AWS Virtual Private Cloud (VPC) is a clear implementation, at least at the network level, of the prevailing enterprise IT anti-pattern. I don’t have clear data on what percentage of AWS revenue is VPC, although it’s relatively new. In particular, it wasn’t until this year that VPC implemented a robust mechanism for modeling complex enterprise datacenter networking topologies.

Regardless, VPC is a subset of the enterprise IT anti-pattern. It’s just enough to allow greater adoption of AWS by existing enterprise applications and hence is more akin to technologies like Mainframe Rehosting software and CloudSwitch.  In effect, it allows emulation of legacy application environments so they can be ported to next generation environments.

VPC doesn’t provide SAN based storage (EBS is a bit of a different beast, although it has many similarities), nor does it provide a number of other enterprise IT anti-patterns beyond the networking model.

It’s just a way for AWS to continue to build momentum by reducing friction in adoption for existing legacy applications.

The Secret Exposed
Now that the secret is out, what is likely to happen? My guess? Not much. Despite the obviousness of this article and the need for the cloud computing community as a whole to follow AWS's lead here, I don't expect them to. One of the major advantages of complexity is dependency. Enterprise vendors *love* complex software, hardware, and applications. Complexity increases costs, creates dependency, and massively increases lock-in.

Most vendors, even in the cloud computing community, are still doing two key things: #1) trying to sell the infrastructure IT buyer a solution that obviates their job (good luck!) and #2) providing complex solutions for complex problems in an attempt to provide value.

Here’s the deal: your customer, or the customer of your buyer, is the next generation application developer, who understands cloud ready systems. They *need*, whether they know it or not, a simple, clean, and scalable solution for the complex problems they are trying to solve.

This is Amazon’s secret to success and the reason it’s not being replicated is that people think it’s Amazon’s failure. I’m sure they would like you to continue thinking that.

This entry was posted in Cloud Computing.
