
Docker on Windows - Part 2 Creating A Container

Welcome back, and thank you for joining me on this epic journey! In Part 1 of the Docker series, we went over installing and configuring Docker on Windows. Today, we will pull down an image, create a container, publish the ports it needs, run it in the background, and see our Nginx splash page come up!

First, let's bring up our PowerShell window and do a quick docker --version to confirm Docker is installed, running, and happy. If Docker is not running, please check Part 1 of the Docker on Windows series to confirm you followed all of the steps, and make sure the Docker service is running.
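If you want to script that sanity check, here's a minimal sketch. It's written for a POSIX shell (in PowerShell, Get-Command docker serves the same purpose as command -v), and the fallback message is mine:

```shell
# Minimal sketch: confirm the Docker CLI is on PATH before continuing.
if command -v docker > /dev/null 2>&1; then
  docker --version
else
  echo "Docker CLI not found - revisit the Part 1 installation steps"
fi
```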

For the purposes of this post, we're going to utilize Nginx because it's the most straightforward for learning deployments with Docker, in my opinion. It listens on a well-known port (80) that's rarely blocked, and the image is pre-built on Docker Hub.

Speaking of Docker Hub, let's head over and take a look at the Nginx image.

Go ahead and search for "Nginx" in the search bar. You'll see several images pop up. Take a look at how the first one is different: it says "official" in the name. This means it's an official Docker image published by Nginx. The images that don't say "official" were made by someone else (like you or me) and uploaded to Docker Hub. Let's ensure we use the official image.
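If you'd rather stay in the shell, you can run the same search from the CLI; the is-official filter is a standard docker search option, and the fallback message here is mine:

```shell
# Search Docker Hub for official nginx images only.
# Falls back to a note if the Docker CLI or daemon isn't available.
docker search --filter is-official=true nginx 2> /dev/null \
  || echo "Docker CLI/daemon not reachable - search on hub.docker.com instead"
```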

Next, we're going to take a look at the docker run command, which we'll use to spin up our container. For more information on docker run, see the official Docker documentation. For today, I'll explain the flags we need for the purposes of this blog post.

Our docker line is going to look like the following:

docker run --name my-nginx -tid -p 8080:80 nginx:latest

Let's break this line down:

docker = the Docker CLI, which passes your command along to the Docker daemon's API
run = the command that creates and starts the container. There are several other commands to do things like list containers, list images, etc.
--name = gives your container a friendly name (here, my-nginx)
-t = allocate a pseudo-TTY. In short, "pseudo" means it's an emulated terminal rather than a physical one, and TTY means you're interacting with a console
-i = interactive. Keeps STDIN open even when you aren't attached to the container, so it can accept a stream of input if needed
-d = detached. Runs the container in the background while you aren't connected to it
-p = publishes a port for your container. If you see something like "8080:80", it means any traffic coming into port 8080 on the host gets forwarded to port 80 (HTTP) inside the container
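To make each piece of that one-liner visible, here's a small sketch that assembles the same command from named variables (the variable names are mine; -t, -i, -d, and -p can also be spelled long-form as --tty, --interactive, --detach, and --publish):

```shell
# Sketch: build the run command from named pieces so each flag's role is clear.
NAME="my-nginx"
HOST_PORT=8080        # the port you'll browse to on your machine
CONTAINER_PORT=80     # the port Nginx listens on inside the container
IMAGE="nginx:latest"

echo "docker run --name $NAME -tid -p $HOST_PORT:$CONTAINER_PORT $IMAGE"
```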

Let's open up a PowerShell prompt and run it!

Notice in the screenshot below how it's downloading the Nginx image. This happens when the image you're calling hasn't been downloaded yet. If it has, Docker uses the copy in your local images and skips the download.

Next, you should see something similar to the screenshot below.

Now, let's go ahead and run docker container ls, and you should see a running container.
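You can also narrow that listing down to just our container; --filter and --format are standard docker CLI options, and the fallback note is mine:

```shell
# Sketch: list only the my-nginx container, with its ports and status.
docker container ls --filter name=my-nginx \
  --format 'table {{.Names}}\t{{.Ports}}\t{{.Status}}' 2> /dev/null \
  || echo "Docker daemon not reachable"
```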

Now, let's open up a web browser and go to http://localhost:8080.

You should see something similar to the Nginx splash page below.
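If you'd rather verify from the shell than a browser, a quick sketch (it assumes curl is available, which it is on Windows 10 1803 and later; the fallback messages are mine):

```shell
# Sketch: fetch the splash page and look for the default Nginx title text.
if command -v curl > /dev/null 2>&1; then
  curl -s http://localhost:8080 | grep "Welcome to nginx" \
    || echo "no response on port 8080 - is the container running?"
else
  echo "curl not found"
fi
```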

Congrats! You have officially spun up your first container on Docker for Windows! In the next and final post of our three-part series, we'll do the same thing in an automated fashion with Docker Compose. Stay tuned and thanks for reading!

