
Spinning up a Kubernetes cluster with Kubeadm

In today's world, we have several public cloud offerings that will happily spin this infrastructure up for us. That convenience comes with a price, though. Because a public cloud provider (like AWS or Azure) handles the API/master server and the networking, you'll get something running quickly, but you'll miss some key lessons of standing up a Kubernetes cluster yourself. Today, I'll help you with that.

There are some pre-reqs for this blog:
1. At least three VMs. In my case, I'm using my ESXi 6.7 server at home.
2. A basic understanding of what Kubernetes is used for.
3. A Windows, Mac, or Linux desktop. For this blog, I am using Windows 10.

The first thing you want to do is spin up three virtual machines running Ubuntu 18.04. You can use a RHEL-based system instead, but the commands I show and run (including the repos I'm using) will be different.

I have already set up my 3 virtual machines. I gave them static IP addresses as I have found API/configuration issues if the VM shuts down or gets assigned a new IP address via DHCP.
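If you want to do the same, here's a minimal sketch of setting a static IP with netplan on Ubuntu 18.04. The file name, interface name (ens160), and addresses are just placeholders from my lab; swap in your own.

cat <<EOF | sudo tee /etc/netplan/01-static-ip.yaml
network:
  version: 2
  ethernets:
    ens160:                # assumed interface name; check yours with "ip a"
      dhcp4: no
      addresses: [192.168.1.8/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
EOF
sudo netplan apply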


Now, we want to log into the Kubernetes Master Node. The Master Node is what hosts the control plane, including the API server and etcd. The two additional VMs are the Worker Nodes, which run our actual workloads (pods, deployments, and so on).

I called my three VMs kube-01, kube-02, and kube-03. kube-01 is what I will be using for my Master.

The first thing we want to do is run sudo apt update -y so our Linux boxes have all current updates (please run this on all three VMs).


The first package we're going to install is apt-transport-https. It lets package managers that use libapt-pkg access metadata and packages from sources served over HTTPS. Please do this on all three VMs:

apt-get install -y apt-transport-https curl

Next, let's pull down Google's signing key and add the Kubernetes repo:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

Next, we'll run apt update -y to pull down the package lists from the Kubernetes repo we just added.

At this point, we are now ready to install kubelet, kubeadm, and kubectl. Please run the following:
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl

We are putting the packages on hold so they don't get automatically upgraded out from under us once we initialize our cluster.
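You can double-check the hold took with apt-mark showhold, which should list all three packages:

apt-mark showhold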

Next, let's install Docker. After that, we'll enable and start the service:

apt install docker.io -y
systemctl enable docker
systemctl start docker
systemctl status docker

Run swapoff -a, as the kubelet won't run with swap enabled. Swap is memory paged out to disk, similar to a pagefile on Windows.
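Keep in mind swapoff -a only lasts until the next reboot. If you want swap to stay off, one common approach (assuming your swap entry lives in /etc/fstab) is to comment it out:

sed -i '/\sswap\s/ s/^/#/' /etc/fstab    # comment out the swap line so it stays off after a reboot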

Next, let's go ahead and reload the daemon and restart kubelet, since we added new configuration:
systemctl daemon-reload
systemctl restart kubelet

Now let's go ahead and start our API server! We will need to initialize the cluster. Depending on your subnet, your API server advertise address and pod network CIDR may be different than mine.

kubeadm init --apiserver-advertise-address 192.168.1.8 --pod-network-cidr=192.168.0.0/16


If all was completed successfully, you will see something similar to the screenshot below.


Copy all of that output, including the auth token and kubeadm join line. Store it someplace safe because we will need that for our worker nodes.

Next, run the commands from your output under "To start using your cluster, you need to run the following as a regular user:"
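For reference, those commands typically look like the following (run them as your regular user on the master; your exact output may vary by version):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config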

Now let's get our networking set up. We're going to use the Weave CNI. Weave is a solid, widely used choice, works out of the box, and doesn't require additional configuration. Please run the following on your API/Master Node:

export kubever=$(kubectl version | base64 | tr -d '\n')
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
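Before moving on, you can watch the Weave pods come up in the kube-system namespace:

kubectl get pods -n kube-system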

Go ahead and run kubectl get svc and kubectl get cs. You should see a similar output to the below screenshot:
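If you're following along without the screenshot, healthy kubectl get cs output looks roughly like this (the exact formatting varies a bit by version):

NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}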




Now that our API server is up, let's head over to the worker nodes. On each worker node, you will need to run the following lines:


curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt update -y
apt install -y kubeadm kubelet kubectl
swapoff -a
apt install docker.io -y
systemctl enable docker
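As on the master, you may also want to hold the Kubernetes packages on each worker so an automatic upgrade doesn't surprise you later:

apt-mark hold kubelet kubeadm kubectl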


At this point, you are ready to run the kubeadm join line that kubeadm init handed you on the master.
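The join command is unique to your cluster, but it follows this general shape (the token and hash placeholders come from your own kubeadm init output):

kubeadm join 192.168.1.8:6443 --token <your-token> --discovery-token-ca-cert-hash sha256:<your-hash>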

After that, run kubectl get nodes on the master and you'll see something like the below screenshot.
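Roughly, once all three nodes have joined, the output should resemble this (ages and versions will differ):

NAME      STATUS   ROLES    AGE   VERSION
kube-01   Ready    master   15m   v1.x
kube-02   Ready    <none>   4m    v1.x
kube-03   Ready    <none>   3m    v1.x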


Now all of our nodes are up, their statuses are Ready, and we're all set!
