Docker 1.13 Swarm Mode with Raspberry Pi: Setting up a Cluster
Docker 1.13 was released recently and brings many improvements and new features: a new Compose file format that allows deploying to a swarm from a YAML file and performing rolling updates with that same file, Docker secrets for storing sensitive data, experimental service logs, metrics output in Prometheus format, and more.
This post covers installing the latest stable version of Docker on Raspbian Jessie (1.13 at the time of writing), setting up a Docker swarm with three Raspberry Pis, and installing Portainer, an alternative to the command-line interface that provides a web GUI.
To get started, we will begin with three Raspberry Pis, all of which have Raspbian Jessie installed. Before starting, you should know the IP address of each board.
In the setup used in this post, the IP addresses and hostnames are:
192.168.1.42 – pi_swrm_01
192.168.1.44 – pi_swrm_02
192.168.1.45 – pi_swrm_03
First, connect to the Raspberry Pi boards over SSH, then follow the steps below. Each step is explained in a how-and-why fashion.
On each machine, update the package list and upgrade the installed packages:
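These are the standard Raspbian commands (shown here as a sketch; the exact lines may have differed slightly in the original post):

```shell
# Refresh the package index
sudo apt-get update
# Upgrade installed packages; -y auto-confirms any prompts
sudo apt-get upgrade -y
```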
The first command updates the package list and the second upgrades the packages. Note that the -y flag in the second command automatically answers yes to any questions asked during the upgrade; you can omit it and approve the upgrade manually.
Again, on each machine, run the following commands to install Docker:
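A likely form of these two commands (the get.docker.com convenience script was the usual way to install Docker on Raspbian at the time; treat this as a sketch):

```shell
# Switch to the root user, loading root's own environment
sudo su -
# Download and run Docker's convenience install script
curl -sSL https://get.docker.com | sh
```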
The first command switches the current user to root with root's own environment variables, and the second installs Docker.
To avoid working as the root user, we add the pi user to the docker group. For that purpose, run this command on each machine:
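The usual command for this is:

```shell
# Allow the pi user to talk to the Docker daemon without sudo
sudo usermod -aG docker pi
# Log out and back in (or reboot) for the group change to take effect
```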
By doing this, you can manage the Docker daemon without being root or using sudo; no root access is needed for controlling Docker anymore.
Creating Docker Swarm
After installing Docker 1.13, select one of the machines as the founder of the swarm. Let’s take the second one (pi_swrm_02 at 192.168.1.44).
In that terminal, initialize the Docker swarm as follows:
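Assuming the manager should advertise its own address (192.168.1.44 in this setup), the command looks like:

```shell
# Initialize a new swarm, advertising this node's IP to future members
docker swarm init --advertise-addr 192.168.1.44
```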
This command will write an output like this:
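The output has roughly this shape (node ID and token replaced with placeholders):

```
Swarm initialized: current node (<node-id>) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-<fingerprint>-<secret> 192.168.1.44:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
```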
You may be wondering what this output means. The first line shows the ID of the current node (a node is a machine in a swarm, either virtual or physical). Since we founded the swarm, we are the only node in it for now, so our node is necessarily the manager.
Next, you should see a command starting with docker swarm join. This command, including everything up to the trailing IP:port, is what other nodes use to join our swarm. Any node that can reach our manager on the given port can join the swarm as a worker. Worker nodes cannot make scheduling decisions (such as where a container will start), nor can they see other nodes or manage the swarm.
Note: if you accidentally closed the window or did not copy the command (or at least the token), we will see shortly how to get it again.
What if we want a node to join our cluster as a manager instead of a worker? We can simply ask Docker for a token that permits joining as a manager. Try this:
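Run this on the current manager:

```shell
# Print the join command with a manager token
docker swarm join-token manager
```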
The output of this command includes a join command with a token. You can run it on any node you want to add to the swarm as a manager.
You may have guessed how we can retrieve the key and command we got when we first created the swarm:
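The worker counterpart of the previous command:

```shell
# Re-print the worker join command and its token
docker swarm join-token worker
```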
So the main question is: what are these complicated expressions in the token? The first part of each token is the fingerprint of the swarm manager. When a node contacts a manager to join the swarm, it can verify that the machine it connected to really is the manager it expects. This protects against man-in-the-middle attacks.
So far, we know how to detect whether the manager we connect to is the real one. But how can the manager detect whether we are authorized to join the swarm? The second part of the token, which differs between workers and managers, provides this authentication.
Now we know what these commands mean and why they have that format. All that remains is to run one of them on the other machines:
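On each remaining board, run the join command printed earlier (the token below is a placeholder; use the one your own manager printed):

```shell
# Join the swarm; the token determines whether this node becomes a worker or a manager
docker swarm join --token SWMTKN-1-<fingerprint>-<secret> 192.168.1.44:2377
```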
If you have up to (and including) three Raspberry Pis, I suggest making each of them a manager. You can use the manager token and its join command for this.
To verify the process, run the following command on a manager node; we can use the node where we started the swarm. Let’s see the other connected nodes:
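The node listing command is:

```shell
# List the swarm's nodes (must be run on a manager);
# the MANAGER STATUS column shows Leader/Reachable for managers
docker node ls
```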
If you have added all nodes as managers, your output will be similar. The last column indicates whether the node is a manager.
Installing Portainer
Portainer provides a web GUI for managing Docker containers and services. It also provides some graphs based on basic metrics. We will install Portainer as a service and constrain it so that it may run only on a manager node.
Edit: Portainer image name updated. It is now multiarch, so you can use the same image not only on Raspberry Pi but on other platforms including x86/64 as well. Thanks to Anthony Lapenna for mentioning this.
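A sketch of the service-create command, following Portainer's deployment instructions of that era; details may differ from the original post:

```shell
# Run Portainer as a swarm service, pinned to manager nodes,
# with the host's Docker socket mounted into the container
docker service create \
  --name portainer \
  --publish 9000:9000 \
  --constraint 'node.role == manager' \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  portainer/portainer \
  -H unix:///var/run/docker.sock
```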
The constraint part says that the node on which this container starts must be a manager node. This restriction is needed because only a manager can create services, list swarm nodes, and use other manager-specific features. With this command, the container will be started on an arbitrary manager node (chosen by Docker Swarm’s current scheduling algorithm).
Note that this may take a while, depending on your internet speed, because the chosen node will download the image if it is not already present.
Finally, we can test the dashboard. A beautiful property of swarm mode is that we can send a request to any node and it will be forwarded to a node where the service is available. This is made possible by the internal routing mesh.
You can play with it to see what is in there. Here is my main page:
Make sure that you use images built for the armhf architecture when working on Raspberry Pi. The Portainer image used in the previous command runs on ARM.