
Build a Windows Docker Image with Azure DevOps

Building a Docker image with CI/CD is very useful. The primary use case I like is setting up CI for my Dockerfile: any time I update the Dockerfile, it will automagically create a new build for me. Let's get started!

Pre-requisites/what you need:

1) An Azure Account
2) An Azure DevOps Account
3) Azure repos or another source control repo
4) ACR (Azure Container Registry) to store our Docker image
5) VSCode
6) A working knowledge of CI/CD in Azure DevOps (how to create builds, queue builds, build releases)

First things first - let's head over to our source control and clone down our repo with git clone yourrepo. I'm using Azure Repos, but you can use whatever you prefer, and you can clone it to wherever you like to work out of. I typically put everything in /users/me/Documents/GITREPOS on Windows or /home/me if I'm on Linux.
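A minimal sketch of that clone (the org/project/repo names in the URL are placeholders; substitute your own Azure Repos clone URL):

git clone https://dev.azure.com/myorg/myproject/_git/myrepo
cd myrepo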



Now that we have our git repo copied down, let's go ahead and open up the directory in VSCode. Create a new file and name it "Dockerfile" with no extension.
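If you'd rather do it from the terminal, a PowerShell one-liner does the same thing:

New-Item -ItemType File -Name Dockerfile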


Now that we have our Dockerfile let's go ahead and start filling it in! Below is the code I used, but feel free to use whatever suits you.

Please Note: If you are using Windows 10, please ensure you switch to Windows containers. If you right-click the Docker whale in the taskbar, you will see the option to switch. If you don't do this, you will get a bunch of errors when trying to pull down the image.
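You can also switch from the command line; Docker Desktop ships a small CLI helper for this (the path below assumes the default install location):

& "$Env:ProgramFiles\Docker\Docker\DockerCli.exe" -SwitchDaemon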

# Base image; the tag must match the OS build of the host running the container
FROM mcr.microsoft.com/windows/servercore:1607
# MAINTAINER is deprecated; LABEL is the modern equivalent
LABEL maintainer="Michael Levan"
# Create a new directory inside the image
RUN powershell -Command New-Item -ItemType Directory -Path C:\ -Name mynewconfigdir

Notice how I had to use "servercore:1607"? Microsoft tags its Windows images by OS build, so if you're on Windows 10 1903, you can pull down the Docker image for "servercore:1903". Judging from the errors I hit, the OS build running this in Azure on the backend is 1607. This may change after the time of writing, so if the release fails, take a look at the error message and it should tell you which OS build the agent is currently running.
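If you want to check which release your own machine is on (and therefore which servercore tag matches it locally), you can read the release ID out of the registry. This PowerShell one-liner is a general-purpose sketch, not part of the pipeline itself:

(Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion").ReleaseId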

Create a new directory and put your Dockerfile in it. I called mine "WindowsDockerFile". Once you have your code, it's time to commit it up to your repo.
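A minimal sketch of that commit, assuming the directory name above:

git add WindowsDockerFile/Dockerfile
git commit -m "Add Windows Dockerfile"
git push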


Now that our code is committed, we're ready to kick off our build! Let's head over to Azure DevOps and go to Pipelines > Builds. Create a new build pipeline and choose the classic editor.


Once you choose your source, team project, repo, and branch, you're ready to click Continue. Start from an empty job, and for your tasks, use "Copy Files" and "Publish Build Artifacts". Ensure that you have the right source folder, the contents you want to target, and your target folder. We're going to use the Build.ArtifactStagingDirectory predefined Azure DevOps variable, as shown in the sketch below.
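Here's roughly how those task settings might look (the source folder assumes the "WindowsDockerFile" directory created earlier; adjust to your layout):

Copy Files:
    Source Folder:   WindowsDockerFile
    Contents:        **
    Target Folder:   $(Build.ArtifactStagingDirectory)

Publish Build Artifacts:
    Path to publish: $(Build.ArtifactStagingDirectory)
    Artifact name:   drop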




If all goes well you should have all green check-marks with no errors or warnings.



We're now ready to queue up our build and start on our release. Let's head over to releases under Pipelines. Choose New Release and create an empty pipeline.

For our artifact, choose the build you just created. If you re-ran your build a few times, ensure you're choosing the latest one.


Next, we'll change our stage name to Dev, change the agent display name, and add the Docker task.


In the Docker task, for my container registry, I'm going to connect to ACR and choose my registry.


The next thing we need to do is specify our repo and point to the Dockerfile we want to build.
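As a sketch, the Docker task fields might look like this (the artifact alias "_WindowsDockerBuild" is hypothetical; yours will match the name of the build pipeline you linked):

Action:      Build an image
Dockerfile:  $(System.DefaultWorkingDirectory)/_WindowsDockerBuild/drop/Dockerfile
Image name:  mynewimage:$(Build.BuildId)

A second Docker task with the action set to "Push an image" (and the same image name) then pushes the built image to ACR.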



Once we do that, let's save and create a release! If your release succeeded, you should see all green check-marks.


Now let's head over to ACR and see if our Docker image was pushed.
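You can also verify from the command line with the Azure CLI; the registry and image names below are placeholders:

az acr login --name myregistry
az acr repository list --name myregistry --output table
docker pull myregistry.azurecr.io/mynewimage:latest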


My Docker image was officially built and pushed to ACR with the help of CI/CD! 

