The First Generation Cloud Dealt with Orchestration; The Next Generation Will Deal with Applications
During the past decade, the world of the cloud has been consumed with orchestration: How can we build an infrastructure that adapts to the needs of the enterprise? Words like automation, flexibility, and control have ruled the world of the cloud to date.
But now that a number of cloud orchestration projects have begun to mature, it's time to take a look at the applications themselves. Until now, the applications that dwell in clouds have looked suspiciously like the applications that inhabited the traditional datacenter. And while they may function well enough, they were not really designed with an agile infrastructure in mind.
Make It Small, Make It Fast
In the world of the cloud, it makes sense to have small applications that are lightweight and nimble. They should be quick to start and stop. They should do what they need to do and then get out of the way, so that valuable compute resources can be devoted to applications that genuinely require compute power -- databases, for instance.
Docker has made inroads in this area by using container technology to share the operating system among many applications. Virtual machines, by contrast, carry a full operating system for each instance, which requires lots of disk space, lots of memory, and prolonged startup and shutdown times. Docker-style solutions keep memory usage down, make startups and shutdowns lightning quick, and create application bundles that are easy to deploy.
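As an illustrative sketch of how small such a bundle can be (the base image version and the script name here are hypothetical, chosen only for the example), a minimal Dockerfile might look like this:

```dockerfile
# A deliberately tiny base image: Alpine Linux weighs a few megabytes,
# versus the gigabytes a full VM disk image typically needs.
FROM alpine:3.19

# The "application" is just a shell script in this sketch; a real
# service would copy in a small binary instead.
COPY hello.sh /hello.sh
RUN chmod +x /hello.sh

# The container shares the host's kernel, so starting it is closer to
# forking a process than to booting a guest operating system.
CMD ["/hello.sh"]
```

Building and running it (`docker build -t hello .`, then `docker run --rm hello`) starts in a fraction of a second, because no guest kernel has to boot -- which is exactly the lightweight, quick-to-start-and-stop behavior described above.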
But shared resources can mean that an exploit of the base operating system compromises dozens or even hundreds of applications resident on that host. It also makes multi-tenant situations difficult to achieve, since sharing resources can increase a tenant's ability to see a neighbor's work. If you don't trust your neighbor, you want a wall between the two applications that makes them invisible to each other, just like the isolation already extant in the world of hypervisors.