
Using the AZ CLI for managing CI/CD in Azure DevOps

Once upon a time there was VSTS, and with VSTS came the VSTS CLI. Microsoft has since evolved, and with that evolution comes a new CLI! The AZ CLI now has a DevOps extension, and although it isn't as feature-rich as the UI, it's still great to use. Let's have a look.

The first thing you'll need to do is confirm you have the AZ CLI at version 2.0.49 or later, the DevOps extension, and Visual Studio Code.

To install the AZ CLI: https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest

To install the AZ CLI DevOps Extension: https://github.com/Azure/azure-devops-cli-extension
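If you want to sanity-check the setup from the command line, a minimal sketch looks like this (the org/project values are the ones used throughout this post, so swap in your own):

az --version                          # confirm the CLI is 2.0.49 or later
az extension add --name azure-devops  # install the DevOps extension
az extension list --output table      # confirm the extension shows up
az devops login                       # prompts for a personal access token (PAT)
az devops configure --defaults organization=https://dev.azure.com/adminturneddevops/ project=TheLifeOfAnEngineerBlog

Setting the defaults is optional; I'll keep passing --org and --project explicitly below so each command stands on its own.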

Once you have those components installed, we're ready to move on.

First things first - What can we do with the DevOps extension? Let's have a look at the help.
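If you're following along at home, the help I'm looking at comes from the extension's top-level group:

az devops -h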


As you can see, we can perform a few managerial/configuration tasks. The key thing I want us to take a look at is the "Related Groups" section, and in particular the pipelines portion of the extension: https://docs.microsoft.com/en-us/cli/azure/ext/azure-devops/pipelines?view=azure-cli-latest

You can find a ton of information at the above Microsoft link on every option there is. Let's take a look at the command line.
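Taking a look at the command line here just means asking the pipelines group for its help text:

az pipelines -h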


We have a ton of good information here. Let's first take a look at the help for "build".
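That help comes from the build subgroup:

az pipelines build -h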


Let's get some build information by using the "list" command. I'm going to run az pipelines build list --org https://dev.azure.com/adminturneddevops/ --project TheLifeOfAnEngineerBlog but of course you will need to specify your org and specific project. Once I run this, I'm able to see a ton of output (this will vary based on how many builds you have).

This output is a bit verbose. What if I want specific info? Maybe specify a branch? Let's try it by running az pipelines build list --org https://dev.azure.com/adminturneddevops/ --project TheLifeOfAnEngineerBlog --branch master and seeing the output.
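The JSON is still fairly chatty even when filtered by branch. One option (standard AZ CLI behavior, nothing specific to the DevOps extension) is to ask for table output instead:

az pipelines build list --org https://dev.azure.com/adminturneddevops/ --project TheLifeOfAnEngineerBlog --branch master --output table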


How about we want to get even MORE granular and search for builds that have failed? We can use the --result flag by running az pipelines build list --org https://dev.azure.com/adminturneddevops/ --project TheLifeOfAnEngineerBlog --branch master --result failed and this will print out a JSON formatted list of all builds that failed for master.
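If even that is too much, the global --query flag (JMESPath) can pull out just the fields you care about. The field names below are the ones I'd expect on the build objects, so adjust them to match whatever your JSON actually shows:

az pipelines build list --org https://dev.azure.com/adminturneddevops/ --project TheLifeOfAnEngineerBlog --branch master --result failed --query "[].{id:id, number:buildNumber, finished:finishTime}" --output table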

So now we can list builds and get pretty granular, but what if we want to CREATE builds? The first thing we'll need to do is figure out the right command for the job. The following will, as always, need to be edited to match your environment.

 az pipelines create --name 'TheLifeOfAnEngineerBlogANSIBLE' --description 'Pipeline for Ansible' --repository TheLifeOfAnEngineerBlog --branch master --repository-type tfsgit --org https://dev.azure.com/adminturneddevops/ --project TheLifeOfAnEngineerBlog

Once you run the above you will get an output of possible environments you can build with. This is very similar to what you would see in the UI.
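As a side note, if you already have a pipeline YAML committed to the repo, you may be able to point at it directly and skip the interactive walkthrough. The flags below are from memory, so double-check az pipelines create -h for the exact names in your extension version:

# hypothetical non-interactive variant: reference an existing azure-pipelines.yml and skip the first run
az pipelines create --name 'TheLifeOfAnEngineerBlogANSIBLE' --repository TheLifeOfAnEngineerBlog --branch master --repository-type tfsgit --yml-path azure-pipelines.yml --skip-first-run true --org https://dev.azure.com/adminturneddevops/ --project TheLifeOfAnEngineerBlog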


I'm going to choose option 1, then at the next screen choose option 2 to view/edit my YAML. As soon as you do that, VSCode will open to a default YAML pipeline. Notice that in the command prompt where you kicked off the AZ command, the process is still running and waiting on you.

For my pipeline I chose to use the CopyFiles@2 task to copy files from my repo in Azure Repos and publish the artifact based on that code. The below is what I put in VSCode:

trigger:
- master

pool:
  name: Hosted VS2017

steps:
- task: CopyFiles@2
  displayName: 'Copy Files'
  inputs:
    SourceFolder: Ansible
    TargetFolder: '$(Build.ArtifactStagingDirectory)'

- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact: drop'
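One note on that last task: PublishBuildArtifacts@1 has sensible defaults (as far as I recall, PathtoPublish falls back to $(Build.ArtifactStagingDirectory) and ArtifactName to drop), which is why it doesn't need an inputs section here.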

After that, go ahead and save, then return to your command prompt and hit the Enter key. You'll then have two options: 1) Commit to master, or 2) Create a new branch. I'm going to go ahead and create a new branch.


Then I'm going to enter a new branch name.


You'll see some JSON output on your command prompt. Let's head over to Azure DevOps and check on your new build.


As you can see from the above, my build has succeeded and used my new branch.
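If you'd rather confirm that from the shell instead of the UI, the same list command with --top keeps the output to the most recent run:

az pipelines build list --org https://dev.azure.com/adminturneddevops/ --project TheLifeOfAnEngineerBlog --top 1 --output table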
