
Creating a Docker Swarm Cluster

Today we're going to be spinning up Docker Swarm! Docker Swarm is an orchestration platform (like Kubernetes) that allows you to manage a containerized environment. In 2019, I believe Kubernetes is far more popular in the world of DevOps, but I still want to see what Docker Swarm is all about, so let's get started!

Prerequisites:
1. Three Linux machines (I'm using Ubuntu 18.04.1 LTS servers)
2. Docker 1.12 or newer installed on all machines. At the time of writing, I have version 18.06.1-ce (ce stands for Community Edition)
3. SSH access to said machines
4. The following ports open between the machines: TCP port 2377, TCP/UDP port 7946, and UDP port 4789 (see the example after this list)
5. Some type of naming convention - because we're going to have a manager machine, pick names that are easy to remember. For example, I named my VMs dockerswarm01, dockerswarm02, and dockerswarm03; dockerswarm01 will be the manager
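If your machines use ufw (Ubuntu's default firewall front end), opening those ports could look something like the commands below. This is just a sketch; adjust it for your own firewall or cloud security groups.

ufw allow 2377/tcp
ufw allow 7946/tcp
ufw allow 7946/udp
ufw allow 4789/udp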

First, let's go ahead and SSH into each of the Linux machines. I used MobaXTerm for this so I could SSH into all three from the same window, but you can also use PowerShell or a Linux/OS X terminal.
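If you go the plain terminal route, connecting is just a matter of SSHing to each box. Substitute your own username and hostnames or IP addresses; the hostnames here are the ones from my naming convention:

ssh youruser@dockerswarm01
ssh youruser@dockerswarm02
ssh youruser@dockerswarm03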


Now, ensure you're running as root by running sudo su - and typing in your password.

Next, let's initialize the Swarm cluster by running docker swarm init --advertise-addr <ip of your swarm manager> on the manager node (dockerswarm01 in my case).
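For example, if dockerswarm01 advertised itself on 10.0.0.11 (a made-up address for illustration), the command would be:

docker swarm init --advertise-addr 10.0.0.11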

Once you do that, your screen should look something like the screenshot below (yours will show your own join token and your node's ID).


On your two other Linux machines, run the full command shown under "To add a worker to this swarm, run the following command:". Ensure you include the entire token.
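The join command will look something like this, with your real token and manager IP address in place of the placeholders:

docker swarm join --token SWMTKN-1-<your-token> <manager-ip>:2377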

If the connection is successful, you should see a screen like the screenshot below. If you do not, confirm the following (a few example checks follow this list):
1. Docker is running on the machine
2. You have network connectivity to the Swarm manager
3. All of the ports listed in the prerequisites are open
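A few quick ways to check those from a worker node, assuming a systemd-based Ubuntu install and that netcat is available (the hostname is from my naming convention):

systemctl status docker          # is the Docker daemon running?
ping -c 3 dockerswarm01          # basic network connectivity to the manager
nc -zv dockerswarm01 2377        # can we reach the Swarm management port?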


If you run docker info on the manager, you should see output that includes how many containers are running, whether Swarm is active, and so on.
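If you only want the Swarm state, you can also pull it out directly with docker info's Go-template formatting; on the manager this should print "active":

docker info --format '{{.Swarm.LocalNodeState}}'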

Finally, go ahead and run docker node ls to see all of your nodes in your Docker Swarm cluster.
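Yours will show different IDs (and the exact columns can vary a bit by Docker version), but the listing should look roughly like this, with dockerswarm01 showing up as the Leader:

ID            HOSTNAME         STATUS    AVAILABILITY   MANAGER STATUS
abc123 *      dockerswarm01    Ready     Active         Leader
def456        dockerswarm02    Ready     Active
ghi789        dockerswarm03    Ready     Active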


That's it! You've spun up your Docker Swarm cluster. Thanks for reading.
