
Docker & Kubernetes Part 3

Ladies and gentlemen, we are now at part 3, the last part of the Docker & Kubernetes series. Now that we know what Docker and Kubernetes are, what containers are, and we have minikube installed, let's move on to deployments and pods.

First and foremost, what are deployments and pods?

Deployments are a way to bring up your environment from your Docker images. They let you describe your "golden" environment, with as many pods as you want, update those pods on the fly, and get self-healing: if a pod dies, the deployment replaces it automatically.

Pods are a group of one or more containers. You can have multiple pods, with multiple containers inside each of those pods. A pod also lets you manage storage resources, a unique network IP, and options that control how its containers should run.

Let's ensure that our minikube node is up and operational. Run the following and confirm that the components report as running:

minikube status

Now, let's create a Kubernetes manifest, which we will store locally (in production, you always want to store these in some sort of private source control). Manifests are written in YAML, and we can edit them with vim or VS Code:

vim TestManifest.yaml

I have written the following manifest, which will spin up an Nginx deployment with 5 replicas (pods).
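The manifest itself was a screenshot that didn't survive the text-only version; a sketch matching that description (an nginx-deployment running 5 replicas of nginx:latest on port 80) might look like the following. The app: nginx label is an assumption on my part; any consistent label works as long as the selector and pod template match:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  # Number of pods the deployment keeps running
  replicas: 5
  # Which pods this deployment manages (must match the template labels)
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
```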

Let's break this down:
1) The apiVersion is the version of the Kubernetes API that you will be interacting with.
2) The kind specifies the type of manifest. In our case, it's Deployment.
3) The metadata is, well... the metadata :). This is the name and labels of your deployment.
4) The spec block is for building your pods and specifying what you want them to look like.

Take a look at "replicas". Replicas state how many pods you will be creating.

In our case, we will be using the latest version of Nginx and allowing traffic over port 80.

To kick this off, we will want to use the Kubernetes API.

kubectl create -f TestManifest.yaml

You should see the following:

deployment.apps "nginx-deployment" created

Now we will run the following to ensure that our pods were created:

kubectl get pods

You should see output similar to the following:


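Since the original screenshot is gone, here is illustrative output; the hash suffixes in the pod names are made up and will differ on your cluster:

```
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-5c689d88bb-8kqtz   1/1     Running   0          25s
nginx-deployment-5c689d88bb-9wv4m   1/1     Running   0          25s
nginx-deployment-5c689d88bb-f6s2d   1/1     Running   0          25s
nginx-deployment-5c689d88bb-hl7pc   1/1     Running   0          25s
nginx-deployment-5c689d88bb-x2nd8   1/1     Running   0          25s
```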

And like magic, the pods are created!

Next, we want to create a service. A service makes your deployment's pods accessible, both inside the cluster and to the outside world. The official definition of a service is "an abstraction which defines a logical set of Pods and a policy by which to access them."

To expose our deployment, we want to run the following:

kubectl expose deployment nginx-deployment --port=80 --type=LoadBalancer

The above creates a service and exposes our deployment over port 80 in a load-balanced fashion (in short: a load balancer spreads an application across multiple endpoints instead of leaving you with a single point of failure).
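kubectl expose is the imperative route; the equivalent declarative Service manifest (again assuming the deployment's pod template carries an app: nginx label) would be something like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment
spec:
  # Ask the platform for an external load balancer
  type: LoadBalancer
  # Route traffic to pods carrying this label
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
```

One caveat: on minikube there is no cloud load balancer behind the service, so its external IP will sit in "pending"; running minikube service nginx-deployment opens a tunnel so you can reach Nginx in your browser.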



There you have it! You're up, running, and ready to go with Nginx.
