Earlier this week, we hosted an OpenStack Meetup at the offices of our customer, EVault, here in San Francisco. Titled “Neutron Networking and SDN in Production,” the Meetup featured panelists from major contributors to Neutron and those offering open source plugins that leverage Neutron. (SiliconANGLE covered the event live, and a replay will be embedded here as soon as it’s available. Some slides used to set initial context are here.)
Our panel set out to discern the reality from hype in SDN and Neutron Networking. We started off by asking:
Do we really need Neutron? Is nova-network good enough since many of the biggest OpenStack deployments are using nova-network with their own plug-ins?
We came to a common conclusion – although I don’t know that we had complete consensus – that Neutron would eventually give us additional networking capabilities that are a hard requirement for success with certain kinds of workloads and use cases.
Next, we took on a more challenging line of questioning:
Given that Neutron will support some critical functionality that will never be in Nova, is it the right place to dump _all_ networking functionality in OpenStack like load balancing as a service, VPN as a service, you-name-it as a service?
We didn’t get anything like consensus on that point. Both panelists and audience members had reservations, myself included. Several panelists were pretty enthusiastic about putting all networking behind Neutron and having one standardized API.
This led into a conversation about the fact that people want to expose certain functionality in the Neutron API, and that functionality might not exist across all of the different vendor plug-ins. This took us to some debate and contention over what the API exposes:
Should the API expose the lowest common denominator of functionality, or should it allow vendors to surface all kinds of extended functionality specific to their products? Does it make sense to have all of the functionality of every networking abstraction in one single unified API, or does it make more sense for Neutron to be a lower-level system that stitches together higher-order network services and just does basic service chaining, letting you manage each of those other individual network services directly via their own APIs?
A couple of the panelists (myself included) felt that it’s a better approach for Neutron to offer a lower-level API. Panelists heavily engaged with the Neutron technical committee – understandably perhaps – felt like you can add everything you’d ever need into the Neutron API over time. This is a great topic for a follow-up post, as I firmly believe this is an unworkable approach that would lead to value-subtracting complexity and eventually be crushed under its own weight.
Next, we explored the question:
Given that we need Neutron and its added functionality over time, is vanilla Neutron ready for production today, without any vendor plug-ins?
The agreement was as close to universal as you can get in an OpenStack technical conversation: vanilla Neutron is not production grade, and it is not ready for prime time. It needs vendor plug-ins to fill in the gaps.
This was shocking to a number of people in the audience who came up to us afterwards and said they just weren’t expecting to hear that.
This raises another big question:
How do you get into production with Neutron today and still avoid vendor lock-in?
Of course, vendor lock-in is not an absolute. It’s a spectrum. Each OpenStack user going into production with Neutron today has to evaluate the degree of lock-in they are willing to accept in exchange for the confidence of knowing they’ve got a solution that’s ready to support workloads in production.
Wrapping it Up
Ultimately, it’s unclear whether all networking functions ever will be modeled behind the Neutron API with a bunch of plug-ins. That’s part of the ongoing dialogue we’re having in the community about what makes the most sense for the project’s future.
The bottom-line consensus was that Neutron is a work in progress. Vanilla Neutron is not ready for production, so you should engage a vendor if you need to move into production soon.
Posted in Cloud Computing, OpenStack | Tagged Meetups, Networking, Neutron, Nova, OpenStack, SDN | 11 Comments
The second AWS re:Invent user conference is wrapping up today in Las Vegas with a sold-out attendance of 8,000 people. Major new features and announcements included:
AWS CloudTrail logs API calls by account users and stores them in S3 to facilitate governance and compliance.
AWS WorkSpaces (of particular note to Citrix and VMware) delivers cloud-based VDI (Virtual Desktop Infrastructure).
Amazon AppStream allows developers to stream applications from the cloud (such as 3D games and interactive HD apps) so the end users don’t require high end hardware to enjoy compute intensive apps.
Amazon Kinesis supports real-time processing of streaming big data.
New I2 Instance Types for high I/O performance via SSD.
New C3 Instance Types for compute intensive workloads.
Cloudscaling has long held that AWS and Google Compute Engine (which graduated from beta earlier this year) are the winning public cloud service providers, and the important ones to emulate for hybrid cloud application interoperability. Ongoing customer and analyst proof points only reinforce that position over time.
Gartner recently refreshed its IaaS Magic Quadrant and had to rescale the results to accommodate AWS’s growth in the Leaders quadrant, its increasingly high scores pushing other public cloud service providers into less desirable positions. Per Lydia Leong, the Gartner analyst who published the report, “While we saw much stronger momentum for AWS than other providers in 2012, 2013 has really been a tipping point. We still hear plenty of interest in competitors, but AWS is overwhelmingly the dominant vendor.” Google Compute Engine was not included in this refresh because it was still in beta earlier in the year, but Leong lists Google alongside Amazon, Microsoft and VMware as the vendors with leading market share five years from now.
AWS and other public cloud service providers regularly advocate that public cloud is – at least in the long term – the only viable solution for most, if not all, enterprise use cases. Cloudscaling, however, is built on the idea that there will always be critical drivers such as cost and control in enterprise IT that make a hybrid cloud strategy the only logical path forward for the enterprise CIO. Some problems are best addressed with a combination of owned and rented infrastructure.
The OpenStack ecosystem has the greatest opportunity for success in delivering on-premise, elastic private clouds for enterprises and scalable application companies. Cloudscaling’s product, Open Cloud System, delivers the only production-proven elastic cloud that’s ready for hybrid applications with AWS and GCE.
Cloudscaling invests significant engineering resources creating an elastic private cloud solution that offers both API and behavioral fidelity with the leading public clouds to enable hybrid cloud application interoperability. If you’re interested to see what’s possible in context of your use cases and application workloads, we invite you to contact us and try out our hosted OCSgo elastic cloud implementation.
If you want to know what’s new with OpenStack – new projects, code maturity, community growth, user stats and more – you’ll want to tune in for my live presentation of State of the (Open)Stack. I’ll be broadcasting live from the OpenStack Summit in Hong Kong on Wednesday, November 6 at 6pm PDT.
In this semi-annual address and frank summary of OpenStack’s current trajectory, I reprise my original State of the Stack presentation from the spring 2013 OpenStack Summit in Portland, Oregon.
We’ll answer the following questions:
Why is OpenStack winning?
Does reality follow hype?
What does OpenStack look like now, in the past, and in the near future?
Who is using OpenStack? Why? What for?
Why is a hybrid-first cloud strategy key to OpenStack’s future success?
This presentation was originally given to a standing-room-only audience during the spring OpenStack Summit. However, due to the overwhelming volume of presentations for the Summit, it was not possible to get a standalone, one-hour spot. So, we’ll bring version 2 of this presentation live from the OpenStack Hong Kong Summit via BrightTALK.
This live event is tailored to:
IT professionals who need to know the current status of OpenStack, told without vendor boosterism and with a bent toward practical advice for production deployment
Architecture pros who are looking into building a hybrid cloud using OpenStack to connect with the major public clouds
Anyone who wants to attend the OpenStack Summit in Hong Kong but can’t be there
(Note: If you’re at the Summit, don’t miss any of the sessions that are taking place during this live broadcast. We’ll have this presentation available on replay, so you can catch it later.)
If crowd size and quality of audience participation are any indication, the OpenStack community in Chicago is keenly interested in the project, and they’re actively engaged in learning more.
Randy Bias and Narayan Desai (Argonne National Laboratory and OpenStack pioneer) led an OpenStack boot camp on Monday for a near-capacity crowd. Topics covered included
What was most impressive was the quality of the questions asked, indicating not only an understanding of infrastructure, but an active interest in OpenStack.
On Tuesday, Randy led an OpenStack content track, along with help from speakers Boris Renski of Mirantis, Sean Lynch of Metacloud, Scott Devoid of Argonne National Laboratory, Joshua Buss of Brightag, and capping off the day in a debate with Jay Cuthrell of VCE. Topics included:
Some stats from the Havana release, launching today:
More Developers: Havana has code contributions from 910 developers, up 60+% from Grizzly, which was itself up 56% from Folsom before it!
New Major Contributors: The corporate names tied to those developers keep evolving. For Havana, the major contributors were Canonical, Dreamhost, eNovance, HP, IBM, Intel, Mirantis, OpenStack Foundation, Rackspace, Red Hat, SUSE, and Yahoo!.
Serious Feature Growth: 392 new features, to be precise, with more than 20,000 commits merged.
You’re going to hear a lot about metering, orchestration, QoS in block storage drivers to guarantee app performance, firewall as a service, and other new features. The Foundation staff has prepared a brief, useful summary of all that’s new and noteworthy in Havana.
But there are two major themes that underpin everything that’s important about Havana.
#1: Continuous integration and exhaustive testing make all of this possible. The CI and Jenkins testing processes built by teams led by Monty Taylor and Thierry Carrez are how this semi-annual release pace keeps being met. Fact: they’re spinning up 700+ test clouds every day.
#2: People aren’t just looking. They’re engaging. The growth in developers contributing code to OpenStack continues to rise. Qingye Jiang (John) tracks trends associated with open source IaaS communities, and his most recent edition shows that the hyperbolic growth of OpenStack (measured by monthly Git contributors) is sustaining, while similar stats for OpenNebula, Eucalyptus and CloudStack are flat or declining.
Engagement and testing are two sides of the same OpenStack coin. One would be irrelevant without the other. If the momentum proof is in the OpenStack pudding, then it’s time for dessert.
Onward to Icehouse!
// UPDATE: Link to Havana release on OpenStack site. //
IT leaders across industries face immense challenges. On the one hand, they must deliver game-changing capabilities that harness big data and analytics. On the other, they are expected to drive innovation and growth with declining budgets. As a result, now more than ever, IT infrastructure matters.
The notion that “infrastructure matters” in order for organizations to unlock the cloud and unleash data was front and center at the Gartner Symposium ITxpo® last week in Orlando. Cloudscaling partnered with Google in its Enterprise Pavilion to demonstrate how the combined technologies can help organizations build the right IT infrastructure to meet today’s social-mobile-big data challenges as well as showcase the power of an OpenStack-powered hybrid cloud solution.
Together, Google and Cloudscaling provide a highly scalable, performant and low-cost solution for hybrid cloud infrastructure that supports modern application development. At Cloudscaling, we say it’s “infrastructure that both IT and developers can love.” Being invited to demonstrate Cloudscaling’s Open Cloud System (OCS) in the Google Enterprise Pavilion at the Gartner Symposium confirms the growing demand among enterprise IT for architectures and solutions that make it easier to modernize the data center, support DevOps and deliver value to the business.
At the Symposium, we shared this demo with the Pavilion audience. It showcases a series of GCE API calls to establish API endpoints, set up a firewall, provision virtual disks and networking, and spin up virtual machines. These are executed from the gcutil command-line tool, first against OCS. Then, we ran the same API commands with the same syntax against GCE. The demo shows how developers can design and build applications once, then deploy them on GCE, Open Cloud System, or both.
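A minimal sketch of the kind of command sequence the demo ran. The resource names and the OCS endpoint host are illustrative, and the `--api_host` flag used here to point gcutil at an OCS deployment is an assumption about how the redirection was done; the provisioning subcommands follow gcutil’s documented syntax of the time.

```shell
# Hypothetical: point gcutil at an OCS endpoint (--api_host and the host
# are illustrative assumptions, not confirmed details from the demo).
GCUTIL="gcutil --project=demo-project --api_host=https://ocs.example.com/"

# Open port 80 to the world.
$GCUTIL addfirewall web-fw --allowed="tcp:80"

# Create a 10 GB virtual disk.
$GCUTIL adddisk web-disk --size_gb=10 --zone=us-central1-a

# Boot a VM attached to that disk.
$GCUTIL addinstance web-vm --zone=us-central1-a \
    --machine_type=n1-standard-1 --disk=web-disk
```

The hybrid-cloud point is that the same four commands, unchanged, run against GCE itself once `--api_host` is dropped (or pointed at the public endpoint) — that syntactic and behavioral fidelity is what lets an application be built once and deployed to either cloud.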
As we look forward to the delivery of OpenStack Havana later this week and the beginning of the contribution period of OpenStack Icehouse, Cloudscaling is committed to keeping the GCE APIs updated and available for the OpenStack community. Hybrid cloud is still in its embryonic stages, and keeping OpenStack at the forefront of this emerging trend is fundamental to further strengthening its position as the de facto open source framework for IaaS clouds.
Earlier this month, Gartner published a report in which it essentially gave a “thumbs up” to elastic cloud infrastructure for the enterprise. The report, “2014 Planning Guide for Private Cloud, Data Center Modernization and Desktop Transformation”, highlights the strategic importance of OpenStack as part of the cost equation associated with cloud computing and the trend towards modernizing the data center.
The report highlights OpenStack as a way enterprises can reduce the risk from vendor lock-in and open the door to hybrid cloud. It goes further to suggest that most enterprises will look to vendor-supported solutions based on OpenStack to get their own OpenStack-based clouds up and running.
Increasingly organizations are taking a hybrid approach to cloud because they need the ability to run applications onsite, offsite or both – without compromise. We fundamentally believe that OpenStack is central to realizing the promise of hybrid cloud and helping transform the way IT services are delivered.
Be sure to check out the demo.
Cloudscaling gets asked these questions frequently – weekly if not daily.
To be honest, initially we ourselves were stumped, since the answers are not cut and dried. But the more we developed and deployed our OpenStack-powered system for customers over the last 18 months, the clearer the answers to these questions became for us. And those answers have shaped how our Open Cloud System (OCS) product and market approach have evolved in 2013.
At a high level, these are our conclusions about building OpenStack-powered private clouds:
In our Thursday webinar, I’ll expand on these topics, and as a bonus, I’ll provide you with answers to the questions posed at the beginning of this post. (Well, at least with my biased responses!)
We live in a world where the cloud is a key component of IT’s ability to become agile, accelerate innovation and open up entirely new ways of competing. As enterprises continue their movement to the cloud, there is also a need for the IT profession to make a similar shift.
According to an IDC study funded by Microsoft, cloud computing will generate nearly 14 million new jobs by 2015. And according to a recent TechTarget survey, 40% of respondents admit they’re struggling to hire engineers and developers with cloud experience. Looking at the results of these and other studies, like the BSA Global Computing Scorecard, reveals an underlying theme: there isn’t enough cloud talent to go around.
At Cloudscaling, we believe the lack of OpenStack skills is preventing IT departments and software developers from taking advantage of the kinds of tools that will truly allow a data center to modernize. In order for OpenStack to turn the corner and become mainstream, vendors must educate the market on how to mitigate risks when adopting OpenStack as well as develop a workforce that will take the organization into the future.
To that end, Cloudscaling is launching an OpenStack training program to help bridge the skill gap among IT professionals in the new cloud era. Our goal is to create a way for people to get trained in OpenStack and open-source elastic cloud technologies that they can’t get anywhere else. Beyond teaching the fundamentals of OpenStack, we believe it’s necessary to train people how to think differently about the way IT gets delivered as well as how to create the right cloud delivery strategy – public, private or hybrid – for their business.
Our initial focus will be on teaching OpenStack, because that’s where we see the biggest need in the market. We’re kicking things off this fall with a handful of classes:
Cloud Computing with OpenStack is designed to help IT and business executives understand how cloud computing – powered by OpenStack – differs from traditional IT and why this matters to their business. We also teach students about the market, the ecosystem of products, and risks to consider with DIY when it comes to deploying a private cloud.
For the more hands-on technical crowd, we’re offering two courses right out of the gate: Cloudscaling Fundamentals with OpenStack and Cloud Storage with OpenStack Swift. The fundamentals course covers everything you need to know about deploying and operating OpenStack to power compute services, using components such as Nova, Glance, Keystone, Horizon, and Cinder. The storage course dives into resilient, distributed object storage using Swift. These two courses will be geared primarily toward systems, network, and DevOps engineers looking to dig into the nuts and bolts of what it really takes to make OpenStack go.
We are in the open cloud era, and OpenStack is leading the charge. Don’t fall behind – realign your skills to be a part of one of the fastest-growing open source projects to date.
Registration is now open. We’ll see you in the classroom.
While virtualization has increased resource utilization efficiency for IT, many say the real enabler of dynamic and agile IT is the cloud. Take a look for yourself by joining us for DemoFriday™ this Sept. 20, when Juniper and Cloudscaling team up to discuss a turnkey software-defined networking (SDN) cloud solution with network virtualization and orchestration.
With its recent acquisition of Contrail Systems, Juniper promises a standards-based SDN controller that enables carrier-class scale, integration with multi-vendor physical networks, and true inter-cloud federation. Cloudscaling serves enterprises, service providers, and web application providers who want elastic cloud infrastructure to deliver the benefits of public clouds like Amazon Web Services (AWS) — but deployable in their own data centers and under their IT teams’ control.
In this demo, Cloudscaling Vice President of Product Management Azmir Mohamed and Juniper Networks Senior Director of Engineering Parantap Lahiri spotlight Cloudscaling’s Open Cloud System (OCS). The system is an IaaS solution powered by OpenStack technology, and the latest OCS release 2.5 provides integration with Contrail Virtual Network Engine. Join Mohamed and Lahiri as they:
The demo of the elastic SDN cloud solution will be webcast live on Friday, September 20, at 10:00 a.m. PDT.
(This post originally appeared at SDNCentral)
Today’s launch of OpenContrail by Juniper Networks is a sign of the times. As we have seen with the OpenStack project, some of the largest enterprise vendors have been embracing open source with a vengeance. Most notable in the open sourcing of OpenContrail is that Juniper acquired Contrail as a closed-source proprietary system and then decided to open source the project. This is extremely rare in the enterprise world, and it shows an insight on Juniper’s part into how the world is changing that I find refreshing.
In mid-2012, I first met the Contrail team, quite a while before they were acquired by Juniper. At that time I was very impressed with the Contrail approach. Rather than creating all-new protocols, Contrail had combined a number of standard protocols (MPLS, BGP, XMPP, IF-MAP) into a very clean, well-designed software-defined networking (SDN) solution. Protocols are hard to develop and take a while to mature. It’s almost always better to reuse or extend existing protocols than to bake entirely new ones; you see this with TCP/IP, HTTP, and others. Here I think Contrail excels in its design and approach.
The biggest downside of the Contrail approach that I could find was that it was a proprietary software solution. As we move to an open cloud world, folks building clouds want to future-proof them against vendor lock-in. This is challenging and in some ways an impossible goal; however, every little bit helps, and given the criticality of SDN, this is an area in particular that should be as open as possible.
Now Juniper has provided us with the best of all worlds: open source software using proven, scalable, open standards, that can be deployed as part of an elastic cloud.
Bravo, Juniper. Bravo.