
Docker & Kubernetes Part 3

Ladies and gentlemen, we are now at part 3, the last part of the Docker & Kubernetes series. Now that we know what Docker and Kubernetes are, what containers are, and have minikube installed, let's move on to deployments and pods.

First and foremost, what are deployments and pods?

Deployments are a way to bring up your environment from your Docker images. They allow you to maintain your golden environment with as many pods as you want, update those pods on the fly, and get self-healing when a pod dies.

Pods are a collection of containers. You can have multiple pods, with multiple containers inside of those pods. Pods also let you manage storage resources, unique network IPs, and options for how a container should run.

Let's ensure that our minikube node is up and operational. Run the following and you should see the output below:
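The command here is minikube status. The original screenshot isn't reproduced, so this is a rough sketch; the exact wording varies between minikube versions, but the host, kubelet, and apiserver should all report Running:

minikube status

host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured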



Now, let's create a Kubernetes manifest, which will be stored locally (in production, you always want to store these in some sort of private source control). Manifests are written in YAML, and we can use vim to edit the file, or use VS Code.

vim TestManifest.yaml

I have written the following manifest, which will spin up an Nginx deployment with 5 replicas (pods).
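The original manifest isn't embedded in this page, so below is a representative sketch matching that description: an apps/v1 Deployment named nginx-deployment with 5 replicas running the latest Nginx image on port 80 (the app: nginx labels and selector are assumptions for illustration).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80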

Let's break this down:
1) The API version is the version of the Kubernetes API that you will be interacting with.
2) The Kind is for specifying the type of manifest. In our case, it's Deployment.
3) The metadata is, well... the metadata :). This is where your deployment's name and labels live.
4) The spec block is for building your pods and specifying what you want them to look like.

Take a look at "replicas". The replicas field states how many pods will be created.

In our case, we will be using the latest version of Nginx and allowing traffic over port 80.

To kick this off, we will use kubectl to call the Kubernetes API:

kubectl create -f TestManifest.yaml

You should see the following:

deployment.apps "nginx-deployment" created

Now we will run the following to ensure that our pods got created:

kubectl get pods

You should see output similar to this:
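The original screenshot isn't reproduced here, but with 5 replicas the output should look roughly like the below (the pod name suffixes are randomly generated, so yours will differ):

NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-5c689d88bb-2xkpq   1/1     Running   0          1m
nginx-deployment-5c689d88bb-4ml7d   1/1     Running   0          1m
nginx-deployment-5c689d88bb-7zw4n   1/1     Running   0          1m
nginx-deployment-5c689d88bb-dq6rc   1/1     Running   0          1m
nginx-deployment-5c689d88bb-kx9sb   1/1     Running   0          1m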



And like magic, the pods are created!

Next, we want to create a service. A service makes your deployment/pods accessible both internally and over the web. The Kubernetes documentation defines a Service as "an abstraction which defines a logical set of Pods and a policy by which to access them."

To expose our deployment, we want to run the following:

kubectl expose deployment nginx-deployment --port=80 --type=LoadBalancer

What the above does is create a service and expose our deployment over port 80 behind a load balancer (in short: a load balancer lets you spread an application across multiple endpoints instead of having a single point of failure).
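As a quick check (this step isn't shown in the original screenshot), you can list the service that was just created. Note that on minikube a LoadBalancer service will usually report its EXTERNAL-IP as <pending>; running minikube service nginx-deployment will open the service in your browser instead.

kubectl get services

# On minikube the EXTERNAL-IP stays <pending>; open the service with:
minikube service nginx-deployment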



There you have it! You're up, running, and ready to go with Nginx.
