EMC and Canonical expand OpenStack Partnership

Posted on by Randy Bias

As you saw at last week’s OpenStack Summit, EMC® is expanding its partnerships with Canonical, among others. I want to take a moment to talk specifically about our relationship with Canonical. We see it as a team-up between the world’s #1 storage provider and the world’s #1 cloud Linux distribution.

For the last two years, EMC has been a part of Canonical’s Cloud Partner Program and OpenStack Interoperability Lab (OIL). During this time EMC created a new Juju Charm for EMC VNX technology, enabling deployment through Canonical’s Juju modeling software. This past week, we announced the availability of a new OpenStack solution with Ubuntu OpenStack and Canonical as part of the Reference Architecture Program announced last November in Paris. The solution was built in close collaboration with Canonical in EMC labs, then tested, optimized, and certified.

Cloud workloads are driving storage requirements, making storage a crucial part of any OpenStack deployment. Companies look for scalable systems that leverage the features of advanced enterprise storage while avoiding complexity. EMC and Canonical created an easily modeled reference architecture using EMC storage platforms (VNX® and EMC XtremIO™), Ubuntu OpenStack, and Juju. This allows for repeatable, automated cloud deployments.
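In Juju terms, a modeled stack like this is expressed as a bundle. The fragment below sketches roughly what wiring Cinder to a VNX backend charm could look like; the `emc-vnx` charm name, the option names, and all values are illustrative assumptions, not the actual published charms or reference architecture.

```yaml
# Hypothetical Juju bundle fragment: the emc-vnx charm and all
# option values are illustrative, not the charms actually published.
series: trusty
services:
  nova-compute:
    charm: cs:trusty/nova-compute
    num_units: 3
  cinder:
    charm: cs:trusty/cinder
    options:
      block-device: "None"      # block storage comes from the backend charm
  emc-vnx:
    charm: cs:trusty/emc-vnx    # assumed name for the EMC VNX backend charm
    options:
      san-ip: 192.168.10.20
      storage-protocol: iscsi
relations:
  - [ cinder, emc-vnx ]
  - [ nova-compute, cinder ]
```

A bundle along these lines would be deployed with Juju’s bundle tooling, which is what makes the deployments repeatable and automated rather than hand-built.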

According to the OpenStack User Survey, 55% of production clouds today run on Ubuntu. Many of these deployments have stringent requirements for enterprise quality storage. EMC and Canonical together fulfill these requirements by providing a reference architecture combining the world’s #1 storage, #1 cloud Linux distribution, and tools for repeatable automated deployments.

We will be releasing an XtremIO (our all-flash array) Charm and eventually ScaleIO (our software-only distributed block storage) as well. ScaleIO is a member of EMC’s Software Defined Storage portfolio, has been proven at massive scale, and is a great alternative to Ceph. You will soon be able to download a free, unsupported, and unlimited version of ScaleIO to evaluate yourself.  Look for these products and others, such as ViPR Controller, to be available in Canonical’s Charm Store and through Canonical’s Autopilot OpenStack deployment software later this year.

This work is in support of eventually making all of EMC’s storage solutions available, via OpenStack drivers, for use with Ubuntu OpenStack. Given the wide acceptance of Ubuntu within the OpenStack community, EMC will use Ubuntu internally and in future products. We believe that these efforts, coupled with the quality professional services and support customers have come to expect from us, will help give enterprise customers peace of mind. This will accelerate adoption of OpenStack Cloud solutions in the enterprise.

With EMC storage and Canonical solutions, customers realize these benefits:

  • A repeatable, deployable cloud infrastructure
  • Reduced operating costs
  • Compatibility with multiple hardware and software vendors
  • Advanced storage features only found with enterprise storage solutions

Our reference architecture takes the Ubuntu OpenStack distribution and combines it with EMC VNX or XtremIO arrays and Brocade 6510 switches. With Juju automation, the time to production for OpenStack is dramatically reduced.

The solution for Canonical can be found at this link and a brief video with John Zannos can be found here on EMCTV. The EMC and Canonical architecture is below for your perusal.

EMC and Canonical Ubuntu OpenStack Reference Architecture

This reference architecture underscores EMC’s commitment to providing customers choice. EMC customers can now choose to build an Ubuntu OpenStack cloud based on EMC storage, and use Juju for deployment automation.

It’s an exciting time for Stackers as the community and customers continue to demand reference architectures, repeatable processes, and support for existing and future enterprise storage systems.

Posted in OpenStack

State of the Stack v4 – OpenStack In All Its Glory

Posted on by Randy Bias

Yesterday I gave the seminal State of the Stack presentation at the OpenStack Summit.  This is the 4th major iteration of the deck.  This particular version took a very different direction for several reasons:

  1. Most of the audience is well steeped in OpenStack and providing the normal “speeds and feeds” seemed pedantic
  2. There were critical unaddressed issues in the community that I felt needed to be called out
  3. It seemed to me that the situation was becoming more urgent and I needed to be more direct than usual (yes, that *is* possible…)

There are two forms in which you can consume this: the Slideshare deck and the YouTube video from the summit.  I recommend the video first and then the Slideshare.  The reason is that in the video I provide a great deal of additional color, assuming you can keep up with my rapid-fire delivery.  Color in this case can be construed several different ways.

I hope you enjoy. If you do, please distribute widely via twitter, email, etc. :)

The video:

The Slideshare:

State of the Stack v4 – OpenStack in All Its Glory from Randy Bias

Posted in OpenStack

OpenStack Self-Improvement Mini-Survey

Posted on by Randy Bias

Want to help make OpenStack great?  I’ve put together a very quick survey to get some deeper feedback than is provided by the User Survey.  The intention is to provide some additional information around the State of the Stack v4 I’m giving next week at the summit.

I would really appreciate it if you would take 2-3 minutes out of your day to answer it honestly.

Click here for the survey.

UPDATE: Horizon was missing from the survey and I have added it.  Heartfelt apologies to all of the Horizon committers.  An honest mistake on my part.  OpenStack is almost too big to keep track of.  :)

Posted in OpenStack

Introducing CoprHD (“copperhead”), the Cornerstone of a Software-Defined Future

Posted on by Randy Bias

You’ve probably been wondering what I’ve been working on post-acquisition and yesterday you saw some of the fruits of my (and many others’) labor in the CoprHD announcement.  CoprHD, pronounced “copperhead” like the snake, is EMC’s first ever open source product.  That EMC would announce open sourcing a product is probably as big a surprise to many EMCers as it may be to you, but more importantly it’s a sign of the times.  It’s a sign of where customers want to take the market.  It’s also the sign of a company willing to disrupt itself and its own thinking.

This is not your father’s EMC.  This is a new EMC and I hope that CoprHD, a core storage technology based on EMC’s existing ViPR Controller product, will show you that we are very serious about this initiative.  It’s not a me too move.

This move is partly in direct response to enterprise customer requests and our own assessment of where the market is headed.  Perhaps more importantly, this move drives freedom of choice and the maintenance of control on the part of our customers.  Any community member (partner, customer, competitor) is free to add support for any storage system.  CoprHD is central to a vendor neutral SDS controller strategy.

For those of you not familiar with ViPR Controller, it is a “software-defined storage” (SDS) controller, much like OpenDaylight is a software-defined networking (SDN) controller.  This means that ViPR can control and manage a variety of storage platforms; in fact, it is already multi-vendor today, supporting not only EMC but also NetApp, Hitachi, and many others.  ViPR Controller has REST APIs, the ability to integrate with OpenStack Cinder APIs, and a pluggable backend, and it is truly the only software stack I’ve seen that fulfills the hopes and dreams of a true SDS controller: not only heterogeneous storage management, but also metering, a storage service catalog, resource pooling, and much, much more.
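As a concrete illustration of what “REST APIs plus a pluggable backend” looks like from a client’s perspective, here is a minimal Python sketch of talking to a ViPR-style controller. The base URL, port, endpoint paths, and the X-SDS-AUTH-TOKEN header are assumptions drawn from my reading of the ViPR/CoprHD documentation; check the published API reference before relying on any of them.

```python
import base64
import urllib.request

# Sketch of a client for a ViPR-style SDS controller REST API.
# The port (4443), paths, and header names here are assumptions.
BASE = "https://vipr.example.com:4443"

def login_request(username, password):
    """Build the login request; the controller is assumed to return a
    session token in the X-SDS-AUTH-TOKEN response header."""
    req = urllib.request.Request(BASE + "/login")
    creds = base64.b64encode(f"{username}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + creds)
    return req

def list_storage_systems_request(token):
    """Build a request listing the storage systems under management."""
    req = urllib.request.Request(BASE + "/vdc/storage-systems")
    req.add_header("X-SDS-AUTH-TOKEN", token)
    req.add_header("Accept", "application/json")
    return req

# Requests are only constructed here, never sent over the network.
login = login_request("root", "secret")
listing = list_storage_systems_request("example-token")
```

The point of the sketch is the shape of the API: because the controller is multi-vendor, the same listing call would return EMC, NetApp, or Hitachi systems alike.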

CoprHD is the open source version of ViPR Controller.  A comparison:

Comparing CoprHD vs. ViPR Controller

What is “Non-essential, EMC-specific code”?  In this case, it’s simply the part of the code that enables “phone home” support to EMC, which has no relevance to users of CoprHD’s SDS services with non-EMC data stores.  CoprHD is in every way ViPR Controller and the two are interchangeable, delivering on the promise of vendor neutrality and providing customers control, choice, and community.  A quick caveat: please be aware that at this time, although this is the same code base and APIs, a clean installation is required to convert CoprHD to ViPR Controller or vice versa.  There is no “upgrade” process and it’s unclear that it ever makes sense to create one, although we might eventually create a migration tool depending on customer demand for one.

The rest of this blog post seeks to answer the key questions many have about this initiative:

  • Why ViPR Controller?
  • Why now?
  • Why would EMC do this?

Exciting times.  Let’s walk through it!

The Emerging Strategy for Enterprise: Open Source First

More and more we’re seeing from customers that proprietary software solutions have to be justified.  Today, the default is to first use open source software and open APIs to solve 80% of the problem and only to move to proprietary software when it is truly required.  This reflects the growing awareness of traditional enterprises that it is in their best interests to maintain control of their IT capabilities, reduce costs and increase agility.  This strategy, of course, mirrors what newer webscale enterprises such as Amazon and Google already knew.  Webscale players have been estimated to be as much as 80-90% open source software internally, compared to traditional enterprises which can be closer to 20-30% [1].

We heard from many enterprise customers that they were reluctant to adopt ViPR Controller, despite it being proven in production, simply because it was not open source.  No one wants “lock-in”; what customers really mean by that is that they desire vendor neutrality and want to maintain control.

Businesses also want to know not only that they could switch vendors for support of an open source project, but, perhaps more importantly, that they could directly affect the prioritization of roadmap features by providing their own development resources or hiring outside engineering firms.

Finally, part of any open-source-first strategy is the need and desire to have like-minded consumers of the same project around the table.  Businesses want to know that others like them are close by and available in public forums such as bug-tracking systems, code review tools, and Internet Relay Chat (IRC).

This then is the “control” provided by an open source first strategy:

  1. Vendor neutrality and choice of support options
  2. Direct influence and contribution to the roadmap
  3. Ability to engage with like-minded businesses through public forums

You’ll probably notice that none of these equate to “free”.  Nowhere in our dialogues with customers has there been an overt focus on free software.  Certainly every business wants to cut costs, but all are willing to pay for value.

EMC Puts Customers and Business Outcomes First

EMC is renowned for being the world’s leader in storage technology, but more than a storage business, EMC is an information management business.  We put a premium on helping customers succeed even when that means that there may be an impact to our business.  If you look at today’s EMC, it is organized in such a way that an entire division, Emerging Technologies Division, is dedicated to disrupting the old way of doing things.  Software-only technologies such as ScaleIO, ViPR, and ECS (the non-appliance version) exist here.  Software that can run on anyone’s hardware, not just EMC’s.  All-flash technologies like XtremIO were birthed here.  ETD has led EMC’s community development with EMC{code} and is also leading the way in helping EMC become more involved with open source initiatives and delivering open source distributions of some of its products.

Our product strategy is to meet customers where they are and to be “flexible on the plan, while firm on the long term mission.”  Our broader strategy is to drive standardization and clarity in the industry around “Software-Defined Storage” (SDS), to help establish open and standard APIs, and to ease the management of storage through automation and vendor neutral management systems.  This means continually evolving and adjusting our business and our products.  It also implies a need to do more than storage (hence Emerging Technologies and not Emerging Storage Technologies Division) but more on that at a later date.

Achieving this vision requires leadership and forethought.  CoprHD is a sign of our willingness to go the distance, adapt and change, and disrupt ourselves.  Software-defined infrastructure and software-defined datacenters are a critical part of EMC II’s future and CoprHD is vital to enabling the SDS layer of any SDDC future.

CoprHD Is Leading The Way in SDS

Make no mistake, CoprHD (code available in June) is leading the way in SDS.  EMC welcomes everyone who wants to participate and we have already heard from customers who will ask their vendors to come to the party by adding support for their products to the open source project.  A truly software-defined future awaits and EMC is using its deep storage roots and focus on software to deliver on that future.

Again, this is NOT your father’s EMC.  This is a new EMC.

Thank-Yous Are In Order

Finally, although I acted as a “lightning rod” to drive organizational change, I mostly educated, where others acted.  I want to thank a number of EMCers without whom the CoprHD open source project simply wouldn’t have happened.  A short and incomplete list of amazing people who made this possible follows:

  • Jeremy Burton: executive buy-in and sponsorship
  • Manuvir Das: engineering leadership
  • Salvatore DeSimone: architecture, thought-leadership, and co-educator
  • James Lally: project management
  • The entire ViPR Controller software team for being willing to make a change
  • Intel for stepping up and helping us become a better open source company
  • Canonical for validating our direction and intentions
  • EMC{code} team for encouragement and feedback


[1] An estimate from my friends at Black Duck Software.

Posted in Cloud Computing

What AWS Revenues Mean for Public Cloud and OpenStack More Generally

Posted on by Randy Bias

At the risk of sounding like “I told you so,” I wanted to comment on the recent Amazon 10-Q report.  If you were paying attention you likely saw it: for the first time, AWS revenues were reported separately from the rest of Amazon.com, ending years of speculation. The net of it is that AWS revenue for Q1 2015 was $1.566B, putting it on a run rate of just over $6B this year, which is almost exactly what I predicted in the keynote I gave at Cloud Connect in 2011 [ VIDEO, SLIDES ]. Predictions in cloud pundit land are tricky, as we are wrong about as often as we are right; still, I find it gratifying to have gotten this particular prediction correct, and I will explain why shortly.

The 2015 Q1 AWS 10-Q

If you don’t want to wade through the 10-Q, there are choice pieces in here that are quite fascinating.  For example, as pointed out here, AWS is actually the fastest-growing segment of Amazon by a long shot.  It is also the most profitable in terms of gross margin according to the 10-Q.  I remember having problems convincing people that AWS was operating at a significant profit over the last 5 years, but here it is laid out in plain black-and-white numbers.

Other interesting highlights include:

  • Growth from Q1 2014 to Q1 2015 is 50% y/o/y, matching my original model of 100% y/o/y growth in the early days scaling down to 50% in 2015/2016
  • Goodwill + acquisitions is $760M, more than was spent on Amazon.com (retail) internationally and a third of what was spent on Amazon.com in North America
  • $1.1B was spent in Q1 2015, the “majority of which is to support AWS and additional capacity to support our fulfillment operations”
  • AWS y/o/y growth is 49% compared to 24% for Amazon.com in North America, and AWS accounts for 7% of ALL Amazon sales
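For the skeptical, the run-rate claim is simple arithmetic on the reported figures; a quick sketch (dollar amounts in billions, taken from the bullets above):

```python
# Sanity-check the run-rate and growth arithmetic from the 10-Q figures.
q1_2015 = 1.566                    # AWS Q1 2015 revenue, $B (reported)
run_rate = q1_2015 * 4             # naive annualization of one quarter
print(round(run_rate, 3))          # 6.264 -> "just over $6B" run rate

growth = 0.49                      # reported y/o/y AWS growth
q1_2014_implied = q1_2015 / (1 + growth)
print(round(q1_2014_implied, 3))   # ~1.051 -> implied Q1 2014 revenue, $B
```

Annualizing a single quarter understates things if growth continues, so the $6B figure is, if anything, conservative.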

Here is a choice bit from the 10-Q:

Property and equipment acquired under capital leases were $954 million and $716 million during Q1 2015 and Q1 2014. This reflects additional investments in support of continued business growth primarily due to investments in technology infrastructure for AWS. We expect this trend to continue over time.

The AWS Public Cloud is Here to Stay

I’ve always been bullish on public cloud and I think these numbers reinforce that it’s potentially a massively disruptive business model. Similarly, I’ve been disappointed that there has been considerable knee-jerk resistance to looking at AWS as a partner, particularly in OpenStack land [1].

What does it mean now that we can all agree that AWS has built something fundamentally new?  A single business comparable to all the rest of the U.S. hosting market combined?  A business focused almost exclusively on net new “platform 3” applications that is growing at an unprecedented pace?

It means we need to get serious about public and hybrid cloud. It means that OpenStack needs to view AWS as a partner and that we need to get serious about the AWS APIs.  It means we should also be looking closely at the Azure APIs, given it appears to be the second runner-up.

As the speculation ceases, let’s remember, this is about creating a whole new market segment, not about making incremental improvements to something we’ve done before.

[1] If you haven’t yet, make sure to check out the latest release we cut of the AWS APIs for OpenStack

Posted in Cloud Computing, OpenStack

DevOps Event @ EMC World 2015

Posted on by Randy Bias

I am super excited to announce that EMC is sponsoring a DevOps event at EMC World 2015.  As many of you guessed, with the acquisition of Cloudscaling, and the recent creation of the EMC{code} initiative, we are trying to become a company that engages more directly with developers and the DevOps community in particular.

We have some great guests who are going to come and speak and some of the EMC{code} evangelists will be leading sessions as well.  Here’s a list of the currently planned sessions:

  • Engaging the New Developer Paradigm
  • The DevOps Toolkit
  • The Enterprise Journey to DevOps
  • Docker 101
  • Container Management at Scale
  • Deploying Data-Centric APIs
  • Predictive Analytics to Prevent Fraud
  • Deploying Modern Apps with CloudFoundry

This will not be your normal EMC event, and it does not require registration for EMC World to attend.  So if you are in Las Vegas on May 3rd, come join us!


Posted in OpenStack

Hyper-Converged Confusion

Posted on by Randy Bias

I have had my doubts about converged and hyper-converged infrastructure since the Vblock launched, but I eventually came around to understanding why enterprises love the VCE Vblock. I am now starting to think of converged infrastructure (CI) as really “enterprise computing 2.0”. CI dramatically reduces operational costs and allows for the creation of relatively homogeneous environments for “platform 2” applications. Hyper-converged infrastructure (HCI) seeks to take CI to the next most obvious point: extreme homogeneity and a drastic reduction in labor costs.

I can see why so many come to what seems like a foregone conclusion: let’s just make CI even easier! Unfortunately, I don’t think people have thought through all of the ramifications of HCI in enterprises. Hyper-converged is really an approach designed for small and medium businesses, not larger enterprises operating at scale.  The primary use cases in larger environments are niche applications with less stringent scalability and security requirements: VDI, remote office / branch office servers, etc.

There are three major challenges with HCI in larger scale enterprise production environments that I will cover today: 

  1. Security
  2. Independent scaling of resources
  3. Scaling the control plane requires scaling the data plane

I think this perspective will be controversial, even inside of EMC, so hopefully I can drive a good conversation about it.

Let’s take a look.


Posted in Cloud Computing

Vancouver OpenStack Summit – EMC Federation Presentations

Posted on by Randy Bias

Voting for presentations at the Vancouver OpenStack Summit is now open.  Please help us by voting on the sessions submitted by EMC Federation speakers along with any other sessions that cover topics that might interest you.  Please vote at your earliest convenience since each vote helps!

OpenStack community members are voting on presentations to be presented at the OpenStack Summit, May 18-22, in Vancouver, Canada. Hundreds of high-quality submissions were received, and your votes can help determine which ones to include in the schedule.

PDF containing all of the submissions by the EMC Federation: 


Here is a list you can click through to vote. 

Thank you so much for taking the time to vote on these sessions!

Posted in OpenStack

The Future of OpenStack’s EC2 APIs

Posted on by Randy Bias

Recently, there was more talk about the future of the EC2 APIs, beginning with some comments on the openstack-operators mailing list, followed by threads on the dev and foundation mailing lists.  This ultimately resulted in a suggested commit to officially “deprecate” the EC2 APIs in Nova.  The commit was rejected, but make sure you read through the commentary if you have time; there is some really great perspective in it.  If you don’t, here’s my basic summation:

  • Many people still very much care about the EC2 and AWS APIs and are quite concerned about their state and the lack of attention to keeping them current
  • Some people are adamant about deprecating and then removing them as expeditiously as possible
  • Others are interested in keeping them around, but moving them out of the default distribution and making sure they have a good home

As many people know, I’m passionate about this subject.  If you missed the blog post that caused a massive kerfuffle in the summer of 2013, now is the time to take a look again: OpenStack’s Future Depends on Embracing Amazon. Now. There was a pretty massive response to that original article, including a very vibrant OpenStack meetup featuring a debate, covered live, between me and Boris Renski, co-founder of Mirantis.

I am proud of driving that conversation, but one pushback that arose could be summarized as: “put your money where your mouth is.”  At the time we were already working towards a goal that would have responded to this pushback, but it has taken a lot longer than I would like to materialize.

We are finally there.  Let me explain.

The StackForge Standalone EC2 API

It’s taken a while, and the entire backstory isn’t really relevant for this article, but Cloudscaling (now part of EMC) has been working diligently to build a drop-in replacement for the existing Nova EC2 API. This standalone EC2 API can be found in StackForge. The re-implementation is now ready for prime time, and serendipitously, you can see from the opening comments that the community is very interested in adopting it.

Some details on the status of the new EC2 API can be found in the initial documentation in the commit.

To summarize, the new standalone API:

  • Is feature complete and at parity with the existing Nova EC2 API
  • Has equivalent or better test coverage to the existing Nova EC2 API
  • Is configured by default on a different port (can be run in parallel to all existing APIs)
  • Includes a new set of features in the form of full coverage for the VPC APIs (a subset of EC2)
  • Has been tested exhaustively with the AWS unified CLI tool, a python CLI for driving all of the AWS services
  • Calls the OpenStack REST APIs rather than any of the “internal API” or function calls for a clean separation of duties
  • AWS tempest tests have been expanded, tested against AWS itself as a baseline, then used to validate this API’s behavior
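For context on what a drop-in EC2 API replacement has to implement: EC2 Query API requests are authenticated with, among other schemes, AWS Signature Version 2. Here is a minimal stdlib-only sketch of that signing step; the host and port of the standalone service are placeholders, not the project’s actual defaults.

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign_ec2_request(secret_key, host, path, params):
    """AWS Signature Version 2 for an EC2 Query API GET request."""
    # Canonicalize: sort parameters, percent-encode, join with '&'
    canonical_qs = "&".join(
        f"{quote(k, safe='-_.~')}={quote(v, safe='-_.~')}"
        for k, v in sorted(params.items())
    )
    # String to sign: verb, lowercase host, path, canonical query string
    string_to_sign = "\n".join(["GET", host, path, canonical_qs])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

# Example: signing a DescribeInstances call against a standalone
# endpoint (host/port are placeholders, not the project's defaults)
sig = sign_ec2_request(
    "SECRETKEY", "openstack.example.com:8788", "/",
    {"Action": "DescribeInstances", "Version": "2014-06-15",
     "AWSAccessKeyId": "ACCESSKEY", "SignatureMethod": "HmacSHA256",
     "SignatureVersion": "2", "Timestamp": "2015-04-01T00:00:00Z"})
```

The resulting signature travels as the Signature query parameter, and the server recomputes and compares it, which is exactly why tools like the AWS unified CLI can be pointed at an OpenStack endpoint unmodified.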

This is very exciting and it’s what the community has been asking for.  More importantly, to me at least, the EC2 API could potentially stay in StackForge and become an optional plugin to OpenStack, letting those who care use it while allowing the team maintaining it to iterate at a slightly different speed than the current 6-month integrated release cycle.

For those who are wondering, it’s EMC’s intention to continue to invest in and maintain this API bridge to OpenStack.

The EC2 API Still Matters to OpenStack

During the “debate” that occurred in 2013, I was frequently bemused by the attempts of community members to downplay the importance of the EC2 APIs. I think it’s all settled down now and generally accepted that we want the EC2 APIs to live on and succeed in OpenStack-land and hopefully we’ll even support other APIs down the road.

For those who are still holdouts though, I think the latest OpenStack User Survey data continues to reinforce how important the EC2 and other AWS APIs are:

OpenStack User Survey data on EC2/AWS API usage (from A Brief State of the Stack 2014 v3)

What’s enlightening here is that in 2013 I was hearing the constant refrain of “the EC2 APIs are used by only a ‘fraction’ of the community.”  That ‘fraction’ was *merely* ~30-35% at the time according to the user surveys.  As you can see, usage of the EC2 APIs has actually increased since then: we are now at 44% for production deployments, a roughly 25% relative increase in about 18 months. This is very important.

It means that usage of the EC2 APIs is increasing, fairly dramatically, over time.

I’ll reiterate again, since folks still sometimes get confused, I’m not advocating dropping the OpenStack APIs in favor of AWS.  I’m advocating embracing the AWS APIs, making them a first class citizen, and viewing AWS as a partner, not an enemy.  A partner in making cloud big for everyone.

This reality inside the OpenStack community is starting to materialize and I need your help.

The Game Plan

Awesome, we have a new set of improved EC2 APIs, a path towards supporting them and deprecating the old.  Whether you love the EC2 APIs or hate them, it’s good for everyone to move them out of the default deployment, create greater isolation between these APIs and OpenStack internals, and to have a path forward where they can be maintained with love. 

Everybody wins, even the detractors.

Well, to get this the rest of the way, we need to do the following:

  1. Test, test, test: if you are using the existing EC2 APIs, please give these a try, break them, and file bugs
  2. If you are a developer and want to help cover any gaps in functionality or bugs that have been found, then get involved now; this is a standard StackForge project, so anyone can get in the mix
  3. There are some known challenges in the existing OpenStack APIs that need to be addressed for a more robust solution; these are documented in a new blueprint you can find here
  4. Help update and maintain the documentation so people know that this capability is available for their OpenStack deployments/products, whether DIY or product based
  5. Add a set of testing capabilities to RefStack to test for “AWS interoperability” alongside “OpenStack interoperability” 

I really appreciate all of the supporters and also detractors who have been involved in this discussion. I believe that this kind of debate and action, like the Internet before it and the IETF mantra of old (“running code and rough consensus”), is what makes OpenStack great. Completing this project will also provide us a blueprint for how we support the public APIs of other public clouds in OpenStack-land.

Finally, a big thanks to Alex Levine, Feodor Tersin, and Andrey Pavlov, for being the tip of the spear on this work.  Without them we wouldn’t have made it this far.

Posted in OpenStack

“Vanilla OpenStack” Doesn’t Exist and Never Will

Posted on by Randy Bias

One of the biggest failures of OpenStack to date is expectation setting.  New potential OpenStack users and customers come into OpenStack and expect to find:

  • A uniform, monolithic cloud operating system (like Linux)
  • Set of well-integrated and interoperable components
  • Interoperability with their own vendors of choice in hardware, software, and public cloud

Unfortunately, none of this exists and probably none of it should have ever been expected since OpenStack won’t ever become a unified cloud operating system.

The problem can be summed up by a request I still see regularly from customers:

I want ‘vanilla OpenStack’

Vanilla OpenStack does not exist, never has existed, and never will exist.

Examining The Request

First of all, it’s a reasonable request.  The potential new OpenStack customer is indirectly asking for those things that led them to OpenStack in the first place.  The bi-annual user survey has already told us what people care about:

OpenStack User Survey Fall 2014

The top reasons for OpenStack boil down to:

  • Help me reduce costs
  • Help me reduce or eliminate vendor lock-in

Hence the desire for “vanilla” OpenStack.

But what is “vanilla”?  It could be a number of things:

  1. Give me the “official” OpenStack release with no modifications
  2. Give me OpenStack with all default settings
  3. Give me an OpenStack that has no proprietary components

Which immediately leads into the next problem: what is “OpenStack”?  Which could also be a number of things[1]:

  1. Until recently, officially the principal trademark “OpenStack-powered” meant Nova + Swift *only*

  2. The de facto “baseline” set of commonly deployed OpenStack services

    1. Nova, Keystone, Glance, Cinder, Horizon, and Neutron

    2. There is no name or official stance on this arbitrary grouping

  3. Use of DefCore + RefStack to test for OpenStack

UPDATE: to be more clear, the baseline set above *does* have a name. It is called “core” and is called out in Section 4.1 of the Bylaws, quoted below. I apologize for the confusion, as “core” has been overloaded a fair bit in discussions on the board, and at one point trademark rights were tied to “core”.

4.1 General Powers.

(a) The business and affairs of the Foundation shall be managed by or under the direction of a Board of Directors, who may exercise all of the powers of the Foundation except as otherwise provided by these Bylaws.
(b) The management of the technical matters relating to the OpenStack Project shall be managed by the Technical Committee. The management of the technical matters for the OpenStack Project is designed to be a technical meritocracy. The “OpenStack Project” shall consist of a “Core OpenStack Project,” library projects, gating projects and supporting projects. The Core OpenStack Project means the software modules which are part of an integrated release and for which an OpenStack trademark may be used. The other modules which are part of the OpenStack Project, but not the Core OpenStack Project may not be identified using the OpenStack trademark except when distributed with the Core OpenStack Project. The role of the Board of Directors in the management of the OpenStack Project and the Core OpenStack Project are set forth in Section 4.13. On formation of the Foundation, the Core OpenStack Project is the Block Storage, Compute, Dashboard, Identity Service, Image Service, Networking, and Object Storage modules. The Secretary shall maintain a list of the modules in the Core OpenStack Project which shall be posted on the Foundation’s website.

So this is helpful, but still confusing.  If, for example, you don’t ship Swift, which some OpenStack vendors do not, then technically you can’t call your product OpenStack-powered. HP’s and Rackspace’s public clouds, last I checked anyway, don’t use the identity service (Keystone), which also means that technically they can’t be “OpenStack-powered” either. A strict reading of this section also says that all projects in “integrated” status are part of “core” and that you can’t identify “core” with an OpenStack trademark unless “core” is distributed together, which implies that if you don’t ship Sahara, then you aren’t OpenStack. Which, of course, makes no sense.

So my point still stands.  There has been a disconnect between how OpenStack is packaged up by vendors, how the trademarks are used, and how integrated is defined and contrasted to “core”, etc.

This is why item #3 above is still in motion and is the intended replacement model for #1.  You can find out more about DefCore’s work here.

Understanding The OpenStack Trademark and Its History

It’s not really a secret, but it’s deeply misunderstood: until the last few weeks, the Bylaws very specifically said that “OpenStack-powered” is Nova plus Swift.  That’s it. No other projects were included in the definition.  Technically, many folks who shipped an “OpenStack-powered” product without Swift were not actually legally allowed to use the trademark and brand.  This was widely unenforced because the Board and Foundation knew the definitions were broken.  Hence DefCore.

Also, the earliest deployments out there of OpenStack were Swift-only.  Cloudscaling launched the first ever production deployment of OpenStack outside of Rackspace in January of 2011, barely 6 months after launch.  At that time, Nova was not ready for prime time.  So the earliest OpenStack deployments also technically violated the trademark bylaws, although since the Foundation and Bylaws had yet to be created this didn’t really matter.

My point here is that, from the very beginning of OpenStack’s formation, drawing a line around “OpenStack” has been difficult, and it still is to this day, given the way the Bylaws are written.

FYI, the new proposed Bylaws changes are explained here.  You will notice that they get rid of a rigid definition of “OpenStack” in favor of putting the definition in the hands of the technical committee and board.  They also disconnect the trademark from “core” itself.

Explosive Growth of Projects is Making Defining OpenStack Much Harder

There are now 20 projects in OpenStack.  Removing libraries and non-shipping projects like Rally, there are still ~15 projects in “OpenStack” integrated status.  And there are many more on the way.  Don’t be surprised if by the end of 2016 there are as many as 30 shipping OpenStack projects in integrated status.

Many of these new projects sit above the imaginary waterline many people have drawn in their minds for OpenStack; that is, for many, OpenStack is an IaaS-only effort.  However, we can now see efforts like Zaqar, Sahara, and others blurring that line and moving us up into PaaS territory.


So when a customer is asking for “OpenStack”, just what are they asking for?  The answer is that we don’t know and rarely do they.  The lack of definition on the part of the Board, the Foundation, and the TC has made explaining this very challenging.

Vanilla-land: A Fairy-Tale in the Making

You can’t run OpenStack at scale and in production (the only measure that matters) in a “vanilla” manner.  Here I am considering “vanilla” to include all three definitions above: default settings, no proprietary code, and no modifications.

UPDATE: I want to be clear that by “production” I mean large-scale production deployments: anything that requires a multi-tiered switch fabric and is typically more than 3-5 racks in size.  Yes, people run smaller production systems; however, it’s arguable they should be all-in on public cloud instead of wasting time running infrastructure.  Also, for the purposes of this article, talking about 5-10 server deployments doesn’t make sense.  At that size you can obviously run “vanilla OpenStack,” but I haven’t engaged with any enterprise that operates at this scale.

The closest you can get to this is DevStack, which is neither scalable nor acceptable for production.


It would really take far too long to go through all of the gory details, but I need to give you some solid examples so you can understand.  Let’s do this.

General Configuration

First, there are well over 500 configuration options for OpenStack.  Many options immediately take you into proprietary land.  Want to use your existing hypervisor, ESX?  ESX is proprietary code, creating vendor lock-in and increasing costs.  Want to use your existing storage or networking vendors?  Same deal.

Don’t want to reuse technology you already have?  Then be prepared for a shock about what you’ll get by default from core projects.
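To make that concrete, here is a sketch of what “reusing your existing hypervisor” looks like in configuration. The option names below follow the nova.conf conventions of roughly this era, and the values are hypothetical; the point is that a single stanza puts you squarely in proprietary territory:

```ini
# Illustrative nova.conf fragment: pointing Nova at an existing vSphere
# cluster.  Option names vary by release; all values here are hypothetical.
[DEFAULT]
compute_driver = vmwareapi.VMwareVCDriver

[vmware]
host_ip = 192.0.2.10            # vCenter endpoint (hypothetical)
host_username = administrator
host_password = secret
cluster_name = prod-cluster-1   # vSphere cluster Nova will schedule onto
```

One driver setting, and your “open” cloud now depends on a licensed hypervisor stack underneath.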


Whether you use nova-networking or Neutron, the “default” mode is what nova-networking calls “single_host” mode.  Single host mode is very simple: you attach VLANs to a single virtual machine (or a single bare metal host) which acts as the gateway and firewall for all customers and all applications.  Scalability and performance are limited, since an x86 server will never match the performance of a switch with proprietary ASICs and firmware.  Worst of all, the only real failover model here is a high-availability active/passive pair.  Most people use Linux-HA, which means that on failover you’re looking at 45-60 seconds during which absolutely NO network traffic runs through your cloud.  Can you imagine a system-wide networking outage of 60 seconds every time you fail over the single_host server to do maintenance?

You can’t run like this in production, which means you *will* be using a Neutron plugin that provides control over a proprietary non-OpenStack networking solution, whether that’s bare metal switching, SDN, or something else.
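For contrast, here is roughly what that default single-host setup looks like in configuration, with option names from the nova-network of this era (interface names and values are hypothetical):

```ini
# Illustrative legacy nova-network configuration: all tenant traffic
# routes through one gateway host.  Option names vary by release.
[DEFAULT]
network_manager = nova.network.manager.VlanManager
multi_host = False              # the "single_host" mode described above
public_interface = eth0         # NIC carrying public/gateway traffic
vlan_interface = eth1           # NIC carrying tenant VLANs
```

Flipping `multi_host` to `True` spreads the gateway role across compute hosts, but at scale you will still end up on a Neutron plugin backed by real networking gear.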


Like networking, the default block-storage-on-demand capability in Cinder is not what people expect.  By default, Cinder simply assumes that each hypervisor has its own locally attached storage (either DAS or some kind of shared storage using Fibre Channel, etc.).  Calls to the Cinder API result in the hypervisor creating a new block device (disk) on its local storage.  That means:

  • The block storage is probably not network-attached
  • You can’t back the block storage up to another system
  • The block device can’t be moved between VMs like AWS EBS
  • Hypervisor failure likely means loss of not only VM, but also all storage attached to it

UPDATE: Sorry folks.  This was inaccurate.  Cinder does use iSCSI by default.

I believe this still isn’t what customers are expecting. You would need to add HA to each cinder-volume instance, combine it with DRBD for disk replication, and potentially add iSCSI multipathing for failover.
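For reference, the out-of-the-box behavior described above corresponds roughly to a configuration like this (the driver path and option names vary by release, so treat this as a sketch rather than a recipe):

```ini
# Illustrative cinder.conf fragment: the default LVM driver carves
# volumes out of a local volume group and exports them over iSCSI.
[DEFAULT]
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_group = cinder-volumes   # local LVM volume group on the cinder-volume host
iscsi_helper = tgtadm           # Linux iSCSI target admin tool
```

Nothing here replicates data or survives the loss of the host holding the volume group; that is exactly the gap the plugin ecosystem exists to fill.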

That means in order to meet the actual requirements of the customer they have to deal with the feature gaps above on their own, get an OpenStack distribution that handles the gap, or load a Cinder plugin that manages a proprietary non-OpenStack block storage solution.  That could be EMC’s own ScaleIO, an open source distributed block store like Ceph, industry-standard SAN arrays like VMAX/VNX, or really anything else.

If you look at the laundry list of storage actually used in production, you’ll see that over half of all deployments take this option, and that the default Cinder configuration accounts for only 23% of production deployments:

(Figure: block storage drivers used in production deployments, from the Fall 2014 OpenStack User Survey)

Application Management

Want your developers to create great new scalable cloud-native applications?  Great, let’s do it, but it won’t be with Horizon.  Horizon is a very basic dashboard, and even with Heat there are major gaps if you want to help your developers succeed.  You’ll need a cloud application management framework like Scalr or RightScale (especially if you are going multi-cloud or hybrid cloud with the major public clouds), or you’ll need a PaaS like Cloud Foundry that does the heavy lifting for you.

You Can Reduce Vendor Lock-in, But … You Can’t Eliminate It

Are you trying to eliminate vendor lock-in?  Power to you.  That’s the right move.  Just don’t expect to succeed completely: you can reduce but not eliminate vendor lock-in.  It’s better to demand that your vendors provide open source solutions, which doesn’t eliminate lock-in but does reduce it.

Why isn’t it possible?  Well, network switches, for example, are deeply proprietary.  Even if you went with something like Cumulus Linux on ODM switches from Taiwan, you will *still* run proprietary firmware and use a proprietary closed-source ASIC from someone like Marvell or Broadcom.  Not even Google gets around this.

Firmware and BIOS on standard x86 servers are deeply proprietary and strictly licensed, and this won’t change any time soon.  Not even the Open Compute Project (OCP) can get entirely around this.

The Notion of Vanilla OpenStack is Dangerous

This idea that there is a generic “no lock-in” OpenStack is one of the most dangerous ideas in OpenStack-land, and it needs to be quashed ruthlessly.  Yes, you should absolutely push to have as much open source in your OpenStack deployment as possible.  But since 100% isn’t possible, what you should be evaluating is which combination of open source and proprietary components gets you to the place where you can solve the business problem you are trying to conquer.

Excessive navel-gazing and trying to completely eliminate proprietary components is doomed to failure, even if you have the world’s most badass infrastructure operations and development team.

If Google can’t do it, then you can’t either.  Period.

The Process for Evaluating Production OpenStack

Here’s the right process for evaluating OpenStack:

  1. Select the person in your organization to spearhead this work

  2. Make him/her read this blog posting

  3. The leader should immediately download and play with DevStack

  4. The leader should create a team to build a very simple POC (5 servers or less)

  5. Understand how the plugins and optional components work

  6. Commission a larger pilot (at least 20-40 servers) with a trusted partner or set of trusted partners who have various options for “more generic” and “more proprietary” OpenStack

  7. Kick the crap out of this pilot; make sure you come with an exhaustive testing game plan

    1. VM launch times

    2. Block storage and networking performance

    3. etc…

  8. Gather business requirements from the internal developers who will use the system

  9. Figure out the gap between “more generic” and “more proprietary” and your business requirements

  10. Dial into the right level of “lock-in” that you are comfortable with from a strategic point of view that meets the business requirements

  11. If at all possible (it probably won’t be, but try anyway), get OpenStack from a vendor who can be a “single throat to choke”


I am trying to put a pragmatic face on what is a very challenging problem: how do you get to the next generation of datacenter?  We all believe OpenStack is the cornerstone of such an effort.  Unfortunately, OpenStack itself is not a single monolithic turnkey system.  It’s a set of interrelated but not always interdependent projects, a set that is growing rapidly, and your own business probably needs only a subset of all the projects, at least initially.

That means being realistic about what can be accomplished and what is a pipe dream.  Follow these guidelines and you’ll get there.  But whatever you do, don’t ask for “vanilla OpenStack”.  It doesn’t exist and never will.

[1] Mark Collier pointed out some inaccuracies and I have adjusted this bullet list to reflect the situation as correctly as possible.
