
CI/CD with Azure DevOps - Part 1 - Building an artifact

In a wonderful world where deploying an application is incredibly fast and amazing, where do we start? With Azure DevOps of course! In part 1 of CI/CD with Azure DevOps we are going to create a CI build.

Pre-requisites

Before we get started, you'll need an Azure DevOps organization with a project created, Git installed locally, PowerShell, and VSCode.
First things first - what are we building? We're going to build Redis containers. If you don't know what Redis is, it's an open-source, in-memory key-value data store that's commonly used as a database cache. Instead of your customers' requests constantly hitting the database, Redis serves frequently used values straight from memory.

The first thing we want to do is ensure we have our Kubernetes manifest with the proper values.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  selector:
    matchLabels:
      app: redis
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redistest
        image: redis:latest
        ports:
        - containerPort: 6379

As you can see above, we are pulling the latest Redis image, using a Kubernetes Deployment, creating two replicas, and opening the default Redis port, which is 6379.

Now that we have our code, we need a place to store it. Azure Repos is the option we'll go with as it's built in.

From our Azure DevOps project we're going to go to Repos > Files.


If this is a new project, you'll see a screen similar to the screenshot above with your project details. This is because it's a new project with no existing repos. Let's go ahead and clone that down to our desktop. In the first section, "Clone to your computer", copy the HTTPS link.


Open up PowerShell (I'm using Windows Terminal with PowerShell Core), run git clone with the HTTPS link you just copied, then cd into the cloned repo and run mkdir redis to create a new directory.


The reason we want to do this is segregation, and so we can choose exactly which path we want in the build pipeline. In production, you could have a repo with multiple file types ranging from YAML to PowerShell to C# (.cs) to Python (.py). This gets us into the best practice of keeping repos clean.

Now we can open up that empty repo in VSCode and put our Redis Kubernetes Manifest inside.


Let's create a new file, write our Redis YAML in it, and save it to the redis directory.

To do that, click on File > New File > navigate to your repo directory > navigate to the redis directory > name your file with a .yml extension and click Save.



Once back in VSCode, you should see something similar to the following:


Now we're ready to commit our code to the repo. Open back up PowerShell and run the following:

git add . (This git command adds the files to the staging area)

git commit -m "Pushing Redis Kubernetes manifest" (This git command adds a message to the commit so other contributors and viewers can see what changed)

git push origin master (This git command pushes the changes up to your specified branch. In our case, it's master)


If we head back over to Azure Repos, we can see our commit.


Great! Now we have some code up in Azure and we're ready to create our build. Let's head over to Pipelines > Builds in the left pane right underneath Repos.

If your project is new you won't see any builds.

Go ahead and click "New pipeline" > click "Use the classic editor" > confirm your source is Azure Repos with the appropriate Team project, Repository, and branch > click "Continue".

On the "Select a template" page, let's choose "Empty job".


Awesome! So now we have our build, the repo where our code is coming from, and a default agent. Let's click on the agent and give it a name. In my case, I'm going to name it "RedisContainer". You can name it whatever you'd like.

For the agent pool, we can choose "Hosted VS2017". This determines where your build runs and on which platform. As you can see, there are also options for OS X and Ubuntu. Everything else can be left as-is.
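As a quick aside, everything we're configuring in the classic editor can also be expressed as a YAML pipeline. A minimal sketch of the agent pool selection, assuming the vs2017-win2016 image name that the Hosted VS2017 pool corresponded to at the time of writing:

pool:
  vmImage: 'vs2017-win2016'   # Hosted VS2017 pool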



Now we're ready to start adding some tasks. Click on the "+" button next to your agent name. Search for "Copy files" and "Publish build artifacts". Click on the "add" button next to both so they get added to your build.



Now that we have our build steps, let's click on "Copy Files to:" as it requires some information.

The first thing is the Display Name. I typically like to keep the default so I know what the build is doing. However, you could change it to something like "Copy Files To: For Redis Container Code".

The second thing you'll see is the Source Folder. Click the three dots and choose your redis folder.

For Contents, let's specify the type of file we want. In our case, it's a YAML file, so we'll type in "*.yml".

For the target folder, we're going to use the pre-defined build variable "$(Build.ArtifactStagingDirectory)". This variable is the local path on the agent where files are staged before being pushed to their destination as an artifact. For more information on pre-defined Azure DevOps variables, please visit: https://docs.microsoft.com/en-us/azure/devops/pipelines/build/variables?view=azure-devops&tabs=yaml
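For reference, here's what this step would look like as a YAML pipeline task - a minimal sketch, assuming the redis folder sits at the root of the repo:

steps:
- task: CopyFiles@2
  displayName: 'Copy Redis manifest to staging'
  inputs:
    SourceFolder: 'redis'                               # the folder we created earlier
    Contents: '*.yml'                                   # only grab the YAML manifest
    TargetFolder: '$(Build.ArtifactStagingDirectory)'   # pre-defined staging path on the agent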


For our "Publish Artifact: drop" step, we can leave the defaults (notice that our pre-defined variable is already filled in for the path to publish).
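Continuing the YAML sketch from the previous step, the publish task would look roughly like this (the values mirror what the classic editor pre-fills):

- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact: drop'
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'  # publish everything we staged above
    ArtifactName: 'drop'                                # default artifact name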

Under the "Save & Queue" option, click "Save". Keep the folder as the default "\".

Congrats! You have created your first build. Now let's go ahead and queue it up! Click on your build, select "Queue" at the top right, and click "Run".
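One side note: since this is meant to be a CI build, you can also have it queue automatically whenever new code is pushed, either by checking "Enable continuous integration" under the Triggers tab in the classic editor, or, in a YAML pipeline, with a trigger block roughly like this (assuming the master branch we've been using):

trigger:
- master   # queue a new build on every push to master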


If your build ran successfully, you should see something similar to the screenshot below.


Next up we'll create a release from our build! Stay tuned.
