
Azure DevOps REST API with Python

Working within the UI on Azure DevOps is great, but I prefer some code to get the job done. Working with a REST API allows you to interact with Azure DevOps from within your own application.

At the time of writing, the Azure DevOps REST API is on version 5.1.


To follow along, you'll need:

1. PyCharm or VSCode.
2. An Azure DevOps account.
3. A PAT (Personal Access Token; admin rights to Azure DevOps required).

The first thing we want to do is take a look at the REST API itself. Head over to the official Azure DevOps REST API documentation.

If you take a look on the left side, you'll see multiple pieces of documentation. You can interact with Azure DevOps via any of those APIs. Today we're going to focus on the Build API.

If you click on the link above, you'll see a few different operations. The first piece of code we're going to write retrieves the builds, which is a GET request.
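For reference, the list-builds endpoint follows the pattern below. The organization and project names here are placeholders; substitute your own values.

```python
# Builds - List endpoint for the Azure DevOps REST API (version 5.1,
# as noted at the top of this post). "myorg" and "myproject" are
# placeholder values.
organization = "myorg"
project = "myproject"
uri = (f"https://dev.azure.com/{organization}/{project}"
       f"/_apis/build/builds?api-version=5.1")
print(uri)
```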

Open up PyCharm or VSCode and let's write some code!

The first thing we'll need to do is import some libraries:

import requests
from requests.auth import HTTPBasicAuth
import logging
import getpass

Let's go over each of these:

requests = A library for making HTTP calls (GET, POST, DELETE, etc.)
requests.auth = Authentication helpers for your API calls
logging = A logger for recording errors, warnings, info, etc.
getpass = A secure way to enter your password (in our case, the PAT token) so it doesn't show as plain text.

Now we can start taking a look at our core code. Let's build a function called buildAPI.

def buildAPI(uri, username):

Within our function, we'll prompt for our PAT token and store it in a variable.

p = getpass.getpass(prompt='Please enter PAT token: ')

Next we'll create our try/except block for error handling:

try:
    resp = requests.get(uri, auth=HTTPBasicAuth(username, p))
    print(resp.text)

except Exception as e:
    print(e)
Let's talk about what we're doing above;

1. The first line starts our try block for error handling purposes.
2. The second line calls the requests library's get() method and stores the result. We're passing in our uri and auth; both the uri and the username come from our function's parameters.
3. The third line prints the response. In our case it's the JSON output of our builds.
4. The fourth and fifth lines catch any error that occurs and print it to our console.

You have successfully used a GET request to retrieve your builds!
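Putting the pieces together, the whole function might look like the sketch below. The optional pat parameter and the commented example call are my additions for reuse and testability; the post's version prompts for the token on every call.

```python
import getpass

import requests
from requests.auth import HTTPBasicAuth


def buildAPI(uri, username, pat=None):
    # The optional `pat` parameter is an addition so the function can be
    # reused in scripts; when omitted, we prompt securely as in the post.
    if pat is None:
        pat = getpass.getpass(prompt='Please enter PAT token: ')
    try:
        resp = requests.get(uri, auth=HTTPBasicAuth(username, pat))
        print(resp.text)
    except Exception as e:
        print(e)


# Example call -- "myorg" and "myproject" are placeholders:
# buildAPI("https://dev.azure.com/myorg/myproject/_apis/build/builds"
#          "?api-version=5.1", "")
```

One detail worth knowing: when authenticating to Azure DevOps with a PAT over basic auth, the PAT goes in the password field and the username can be left empty.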

