Docker: not just another type of virtualisation

Plenty of folks in the Lucee community (Lucee being a JVM scripting language, often run on Tomcat) are experimenting with :whale: Docker for the first time.

> What you’re saying, though, did cross my mind, modius, and it makes sense to me if your application were broken into multiple “microservice” parts. But assuming each container is running the entire app (as it is in my case, and as your lucee/lucee4-nginx container easily allows), why would you want more than one container on a single instance? Are you saying that would be more resource efficient? If so, I’m not sure how.

A common misconception for first-timers is the notion that Docker is just a different kind of “server virtualisation”, so initial efforts look to move a monolithic app running on a single server to a monolithic app running in a single container on a single server.

Running a single container per server rather defeats the purpose of containers, except in exceptional circumstances. Admittedly, for many it’s novel and immediately beneficial to have the entire server scripted and versioned as Docker image layers.
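As a rough illustration of that “entire server scripted and versioned” point, here is a minimal Dockerfile sketch along the lines of the lucee/lucee4-nginx image mentioned above; the directory paths are assumptions about where your app and config happen to live:

```dockerfile
# Base image bundles Lucee (on Tomcat) behind nginx, as referenced above
FROM lucee/lucee4-nginx

# Every instruction below is captured as its own versioned image layer,
# so the complete server build is scripted and reproducible.
# These paths are hypothetical; adjust to your own project layout.
COPY config/nginx/ /etc/nginx/       # web server config
COPY config/lucee/ /opt/lucee/web/   # Lucee server config
COPY www/ /var/www/                  # the application itself
```

Build and tag that once, and every node that runs the container gets an identical, versioned server.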

One of the beauties of Docker is the ability to treat a cluster of nodes (aka servers) as a single compute resource.

For example, we run three m4.xlarge (4 vCPU/16 GB) nodes for our heterogeneous hosting platform. Our orchestration tools see that cluster as a single resource: when we deploy a new container or scale up a service, the platform “fits” the additional containers onto whichever node is currently the most “empty”, automatically splitting a service’s containers over multiple nodes if it’s been flagged as needing “high availability”.
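Our particular orchestration tooling isn’t the point here, but if you want something concrete to experiment with, Docker’s own Swarm-mode syntax expresses the same “fit and spread” idea. The service name and replica count below are illustrative only:

```yaml
# docker-compose.yml (v3.8 syntax, deployed via `docker stack deploy`)
version: "3.8"
services:
  app:                                # hypothetical service name
    image: lucee/lucee4-nginx
    deploy:
      replicas: 3                     # the "high availability" flag in practice
      placement:
        max_replicas_per_node: 1      # force replicas onto different nodes
```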

To “make use of the resources” in a single-container environment you would need to choose an EC2 instance type that closely matches the resource requirements of your specific app, or have a bunch of resources left over. Most people spec servers with “left over” resources just in case. Having a pool of servers where containers are “fitted” to the available capacity generally makes better use of those resources, which is one of the reasons containerisation typically sees an improvement in resource utilisation.
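How does the scheduler know what “fits”? Each service declares what it reserves, and the scheduler places containers on whichever node still has that much unreserved capacity. A sketch with illustrative numbers; across our three m4.xlarge nodes the pool is 12 vCPU and 48 GB in total:

```yaml
services:
  app:                       # hypothetical service name
    image: lucee/lucee4-nginx
    deploy:
      resources:
        reservations:
          cpus: "1.0"        # reserve 1 of the 12 pooled vCPUs
          memory: 2G         # reserve 2 GB of the 48 GB pooled RAM
```

Containers that don’t closely match any “server size” simply pack into the leftover capacity, rather than each demanding its own instance.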

As always, it depends on your requirements: a single service running in isolation has much simpler requirements than many different services running together, whether in concert or as separate concerns.

Spinning up AMIs orchestrated through OpsWorks used to be our standard DevOps approach prior to Docker; we still have a few clients on this framework. AMIs take about 20 minutes to scale, and often much longer to redeploy depending on what bootstrapping your server instance requires. In contrast, our Docker containers take seconds to scale and minutes to redeploy.
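The “seconds to scale” claim is less surprising once you remember that scaling a container is just starting another copy of an image whose layers are already cached on the nodes. With Swarm-style tooling it’s a one-liner (the service name is hypothetical):

```sh
# New replicas start in seconds; the image layers are already on the nodes
docker service scale app=6
```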

The issue with AWS Beanstalk when we last evaluated the platform (and things may be different today) was the need to bind a single application stack to an environment, and consequently to an auto-scaling group. That may be great if you have a single app stack to scale up, but it is less than ideal if you have a collection of different apps you want to host and manage independently within the same node cluster.
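For contrast, on a shared cluster each app can be deployed as its own independent stack against the same node pool. A sketch using Docker’s stack tooling, with hypothetical stack names and compose files:

```sh
# Two unrelated apps deployed onto one shared cluster
docker stack deploy -c client-a.yml client-a
docker stack deploy -c client-b.yml client-b

# Scale one app without touching the other (services are
# namespaced as <stack>_<service>, assuming a service named "app")
docker service scale client-b_app=4
```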