Docker & Kubernetes Part 3

Ladies and gentlemen, we are now at part 3, which is the last part of the Docker & Kubernetes series. Now that we know what Docker/Kubernetes is, what containers are, and we have minikube installed, let's move on to deployments and pods.

First and foremost, what are deployments and pods?

Deployments are a way to bring up your environment from your Docker images. They allow you to maintain your golden environment with as many pods as you want, update those pods on the fly, and get self-healing when pods fail.

Pods are a collection of one or more containers. You can have multiple pods, with multiple containers inside each pod. Pods also let you manage storage resources, unique network IPs, and options for how a container should run.
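To make the "multiple containers inside a pod" idea concrete, here is a minimal pod manifest sketch. The names and the sidecar are illustrative, not part of anything we deploy later:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod              # illustrative name
spec:
  containers:
  - name: web                # main application container
    image: nginx:latest
  - name: sidecar            # second container sharing the pod's network and storage
    image: busybox:latest
    command: ["sh", "-c", "sleep infinity"]
```

Both containers share the pod's IP address and can reach each other over localhost, which is what makes the pod (not the container) the smallest deployable unit in Kubernetes.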

Let's ensure that our minikube node is up and operational. Run the following:

minikube status

Now, let's create a Kubernetes manifest, which will be stored locally (in production, you always want to store these in some sort of private source control). Manifests are written in YAML, and we can use vim to edit the file, or use VSCode.

vim TestManifest.yaml

I have written the following manifest, which will spin up an Nginx deployment with 5 replicas (pods).
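A manifest along those lines (an Nginx deployment named nginx-deployment with 5 replicas, using the current apps/v1 API) looks like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 5                  # how many pods to run
  selector:
    matchLabels:
      app: nginx               # must match the pod template labels below
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest    # latest version of Nginx
        ports:
        - containerPort: 80    # allow traffic over port 80
```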

Let's break this down:
1) The API version is the version of the Kubernetes API that you will be interacting with.
2) The Kind is for specifying the type of manifest. In our case, it's Deployment.
3) The metadata is, well... the metadata :). This is where you name and label your deployment.
4) The spec block is for building your pods and specifying what you want them to look like.

Take a look at "replicas". This field states how many pods will be created.

In our case, we will be using the latest version of Nginx and allowing traffic over port 80.

To kick this off, we will want to use the Kubernetes API.

kubectl create -f TestManifest.yaml

You should see the following:

deployment.apps "nginx-deployment" created

Now we will run the following to ensure that our pods got created:

kubectl get pods

You should see all five pods listed, each with a status of Running once they have finished starting.

And like magic, the pods are created!

Next, we want to create a service. A service makes your deployment/pods accessible, both internally and over the web. The Kubernetes documentation defines a service as "an abstraction which defines a logical set of Pods and a policy by which to access them".

To expose our deployment, we want to run the following:

kubectl expose deployment nginx-deployment --port=80 --type=LoadBalancer

What the above does is create a service and expose our deployment over port 80 behind a load balancer (in short: a load balancer spreads traffic across multiple endpoints instead of leaving you with a single point of failure).
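For reference, the kubectl expose command above is roughly equivalent to applying a Service manifest like the following sketch. The service name comes from the deployment name, and the selector assumes the pods carry an app: nginx label:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment     # kubectl expose names the service after the deployment
spec:
  type: LoadBalancer
  selector:
    app: nginx               # assumes the pods are labeled app: nginx
  ports:
  - port: 80                 # port the service listens on
    targetPort: 80           # port the Nginx containers serve on
```

Storing this alongside your deployment manifest in source control keeps the whole environment declarative instead of relying on one-off kubectl commands.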

There you have it! You're up, running, and ready to go with Nginx. Note that on minikube a LoadBalancer service won't get a real external IP; you can open the service in your browser with:

minikube service nginx-deployment

