Containers are a method of running virtualized applications on your computer, similar to virtual machines, but using a different set of underlying technologies. They can be complex to understand, but they’re essential for running everything from Minecraft servers to Google. And we’re going to show you how to set them up.
We’ll be using Linux, an operating system built with a focus on portability, modularity, and stability. Linux is in everything from servers to microwaves to video game consoles. Containers aren’t limited to Linux, but the technologies behind them originated there and run best on it. If you’re new to Linux, we’d recommend you check out our beginner’s guide before diving in.
Are containers virtual machines?
Containers can be a complex topic, but it’s best to start with one key point: A container is not a virtual machine. A virtual machine is a simulated computer: virtualized hardware running a complete guest operating system inside what’s known as a hypervisor. If you’ve ever used software like VirtualBox or Multipass, then you’ve used a hypervisor.
The hypervisor runs either directly on the hardware as its own operating system (known as a type 1 hypervisor) or within the confines of another operating system like Windows or Ubuntu (a type 2 hypervisor). The hypervisor’s responsibility is to present the guest operating system with the simulated hardware it requires to run, everything from a simulated CPU and RAM to data buses, disk drives, and network adapters. The full guest operating system then runs on top. This simulation is computationally expensive, so virtual machines typically carry significant overhead.
So, what is a container?
A container is similar to a virtual machine in that it contains and runs software in an isolated environment on a host system. However, containers replace traditional hardware virtualization by relying on the host operating system directly. Containers share the libraries and binaries of the host operating system and only have the resources and dependencies needed to run the specific application they contain. This means there’s no need for a full operating system per container, as all containers running on a system can share the single host operating system while retaining the segregation you get with virtual machines.
Containers access the host operating system through a container engine, which manages running containers and controls their access to the underlying operating system. This might include enforcing security between containers and granting or denying access to operating system files or network connections.
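You can see this kernel sharing for yourself once Docker is installed (installation is covered below). This sketch assumes internet access to pull the small alpine image; the kernel version printed from inside the container will match the host’s, because the container doesn’t boot a kernel of its own:

```shell
# Print the host's kernel version
uname -r
# Print the kernel version from inside a container: it's the same kernel,
# because containers share the host kernel rather than booting their own
sudo docker run --rm alpine uname -r
```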
What are some tradeoffs for using containers?
While virtual machines and containers are similar, using containers does have its drawbacks. For one, virtual machines are considered more secure, as the attack surface to “escape” the virtual machine is significantly smaller and harder to penetrate. A container might not be suitable for testing out malware, for example.
The major benefit of containers is that they’re lightweight: avoiding the need to virtualize an entire operating system means minimal startup times and reduced system overhead. As a result, many more containers can run on one host than would be possible with virtual machines.
Because containers don’t require a full operating system, they can easily be packaged into smaller images and distributed. While a full virtual machine image might easily be tens of gigabytes, container images can be as small as 15 KB. This makes it extremely easy to distribute and use containers. We’ll demonstrate this by running some simple example containers in Docker.
That’s a lot of theory, so let’s get practical. To show you how to set up a container, we’ll be installing Docker in Ubuntu 23.10 and using it to run a simple Hello World container. Our steps are tested on a virtual machine, but you could also dual-boot from your Windows PC or use a great laptop for Linux.
Docker has rapidly become the de facto containerization tool. While other tools do exist, Docker is widely adopted and is perfect for all but the most demanding applications. There’s documentation on the different ways to install Docker, but we’ll be using the convenience install script. This command will download a script from get.docker.com to your local machine:
$ curl -fsSL https://get.docker.com -o get-docker.sh
You can then run this script to install Docker:
$ sudo sh ./get-docker.sh
Followed by a version check with:
$ sudo docker version
You can also verify that the docker service is running in the background with:
$ sudo systemctl status docker
It should indicate ‘active (running)’ in green text, and the version of the Docker engine should be printed without error.
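If the service isn’t running yet, or you’d like Docker to start automatically at boot, the standard systemctl commands handle both:

```shell
# Start the Docker daemon now
sudo systemctl start docker
# Enable the daemon so it starts on every boot
sudo systemctl enable docker
```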
Running a simple example
Now that Docker is installed, you can use it to download a container image. Container images are a lot like ISOs for virtual machines, except they’re usually smaller and easier to build. Download a simple hello-world container image with the following command:
$ sudo docker pull hello-world
Once that image is downloaded, you can verify it’s there by listing the images on your system with the following:
$ sudo docker images
You should now see hello-world downloaded. Note the very small size (13 KB on our test machine), as well as its tag. An image’s tag is effectively its version; by default, Docker downloads the latest version of an image. Run a container based on this image using:
$ sudo docker run hello-world:latest
This will output Docker’s hello-world message, generated by a very small C program (which you can check out on GitHub).
Congratulations; you’ve run your first container! In this case, the program printed its output and finished executing, so the container has stopped and is no longer running.
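You can confirm this by listing all containers, including stopped ones:

```shell
# Without -a, "docker ps" shows only running containers; with it,
# the exited hello-world container appears with an "Exited (0)" status
sudo docker ps -a
```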
How to keep containers alive
Having run a basic container, we can now build a more complex example that doesn’t immediately exit once it’s completed a task. Containers are often built around a single process, which may spawn more processes. Once this base process exits, the whole container exits with it. This might sound like a limitation, but it’s actually very similar to how the init process (PID 1) works on a full Linux system.
To run a persistent example, we can pull an image for Nginx, which is a web server used to host a significant percentage of the world’s websites. We’ve chosen Nginx for this example because it’s simple, requires no advanced configuration, and is lightweight. Download the latest Nginx image with the following command:
$ sudo docker pull nginx
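As before, this pulls the image tagged latest. If you need a specific Nginx release, you can pin a tag instead (the version below is just an example; check Docker Hub for the tags that actually exist):

```shell
# Pull a specific tagged version rather than whatever "latest" currently points to
sudo docker pull nginx:1.25
```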
Next, run the image. We’ve added the -p flag here to configure port forwarding from the container to the host operating system; port 80 is used for unencrypted HTTP (i.e. web) traffic. This will allow you to access the container from your host machine:
$ sudo docker run -p 80:80 nginx
This command will run in your terminal in attached mode, which means the container is attached to your terminal, so all logs from the container will be printed there. Once the container has started up, open http://localhost in your web browser. You’ll see the Nginx welcome screen.
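You can also check from the command line: in a second terminal, fetch the page while the container stays attached to the first:

```shell
# The response body should contain "Welcome to nginx!"
curl -s http://localhost | head -n 15
```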
The Nginx container is now running inside your container engine. While Nginx runs in attached mode, it’ll only keep running as long as that terminal session stays open. You can stop the Nginx container by pressing Ctrl + C in your terminal.
How to run containers in the background
To run Nginx in the background, we’ll add another flag, -d, for detached mode. This will start the container detached from your terminal, meaning that it’s in the background. For example:
$ sudo docker run -d -p 80:80 nginx
You can list the running containers on your system with the command below. Notice that your Nginx container is now running in the background and has been assigned an ID.
$ sudo docker ps
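Because a detached container no longer prints to your terminal, its output goes to Docker’s logs instead. You can read them back using the container ID from the docker ps output (the ID below is a placeholder; substitute your own):

```shell
# Replace the placeholder with the ID shown by "sudo docker ps"
CONTAINER_ID=abc123def456
sudo docker logs "$CONTAINER_ID"
```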
A running background container can be killed using its ID. You might also notice that the container has been given a name; when one is not specified, Docker assigns each container a random, unique name.
$ sudo docker kill <container-id>
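Rather than copying IDs around, you can also pick the container’s name yourself with the --name flag when you start it, then kill it by that name (my-nginx here is just an example name):

```shell
# Start a named, detached Nginx container, then stop it by name
sudo docker run -d --name my-nginx -p 80:80 nginx
sudo docker kill my-nginx
```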
From here, you can test this out with some more interesting images. There are plenty available over at Docker Hub, which functions as a central repository for public container images.
Diving deeper with containers
This has been a brief introduction to containers. Containers can be limitlessly complex, but they’re a foundational building block of the highly distributed systems that run much of our modern internet. That power doesn’t take away from their use on a smaller scale, though. Familiarity with the basics of containers and Docker can be the gateway to running useful local services like a Minecraft server or Plex on an old PC.