
Create an AKS cluster with Terraform

When Microsoft first partnered with HashiCorp back in 2016, Terraform was still new to the software-defined infrastructure space. Now, Terraform is one of the most popular tools for this purpose. Today we're going to spin up an AKS (Azure Kubernetes Service) cluster with Terraform.


Prerequisites

1. Knowledge of Terraform
2. Knowledge of Kubernetes
3. An Azure subscription that you have access to create resources in
4. VSCode and the Terraform extension
5. AZ CLI installed, as this will be our authentication method
6. A Service Principal, as we'll need a client ID and client secret.
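If you don't already have a service principal for the last prerequisite, one way to create one (assuming you're already logged in with az login, and with a placeholder name) is:

```shell
# Create a service principal; the name below is just an example.
# In the JSON output, "appId" is your client ID and "password" is your client secret.
az ad sp create-for-rbac --name "aks-terraform-sp"
```

Save the output somewhere safe, as the password is only shown once.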


The first thing we're going to do is open up VSCode and create a directory called "AKS". Within the AKS folder, create your main.tf and terraform.tfvars files.
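If you prefer the terminal, the same layout can be created like so (using the example names from above):

```shell
# Create the working directory and the two empty Terraform files
mkdir AKS
cd AKS
touch main.tf terraform.tfvars
```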


Now that we have our configuration files set up, let's start thinking about what we need for AKS to properly run.

The first thing we need to do is ensure AZ CLI is pointing to the right place. From your terminal, run:

az account show -o table

If this isn't the subscription you'd like to authenticate to and create your AKS cluster in, change your subscription by running:

az account set --subscription sub_name

Once we're authenticated to the proper subscription, we can start configuring Terraform.

Let's set our provider (for more information on providers, see: https://www.terraform.io/docs/providers/index.html). We're going to use the Azure provider. We'll use version 1.36.1 and set our subscription ID.

provider "azurerm" {
  version         = "1.36.1"
  subscription_id = "your_sub_id"
}

Let's head over to our terraform.tfvars file and start configuring some variables. Our first two variables will be our Resource Group name and its location.

rg_name = "aksRG01"
location = "eastus"


We can now head back over to our main.tf config and declare our variables.


variable "rg_name" {}
variable "location" {}


Now that we have our provider and variables, let's go ahead and set up our first resource, our Resource Group. We'll interpolate our variables for our properties.

resource "azurerm_resource_group" "aksRG" {
  name     = "${var.rg_name}"
  location = "${var.location}"
}


Now that we have our resource group, our main.tf config should look like this:

provider "azurerm" {
  version         = "1.36.1"
  subscription_id = "sub_id"
}

variable "rg_name" {}
variable "location" {}



resource "azurerm_resource_group" "aksRG" {
  name     = "${var.rg_name}"
  location = "${var.location}"
}

Let's head back over to our terraform.tfvars config to start adding the rest of our variables for our AKS cluster.


name = "AKSCluster1"
vm_size = "Standard_D1_v2"
os_disk_space = "30"
client_id = "your_client_id"
client_secret = "client_secret_you_generated"

ssh_pubkey = "your_ssh_pubkey"


Now we can go back to our main.tf config and add our variables in.

variable "name" {}
variable "vm_size" {}
variable "os_disk_space" {}
variable "client_id" {}
variable "client_secret" {}
variable "ssh_pubkey" {}

We've finished creating our variables and can now focus on the main resource: our AKS cluster. The resource type we're going to use is "azurerm_kubernetes_cluster", and I'll give it a name of "aksclus".

resource "azurerm_kubernetes_cluster" "aksclus" {}

Let's add in our name, location, resource group, and dns prefix from our variables.

  name                = "${var.name}"
  location            = "${var.location}"
  resource_group_name = "${azurerm_resource_group.aksRG.name}"
  dns_prefix          = "${var.name}-dns"

As you can see for dns_prefix, we're just concatenating '-dns' at the end.

Next we'll look at the linux_profile property. This is to configure our username and public key so we can SSH into the VMs.

  linux_profile {
    admin_username = "mike"

    ssh_key {
      key_data = "${var.ssh_pubkey}"
    }
  }


To configure the size and type of our cluster, we'll use the agent_pool_profile block, taking advantage of our variables.

  agent_pool_profile {
    name            = "defaultpool"
    count           = 1
    vm_size         = "${var.vm_size}"
    os_type         = "Linux"
    os_disk_size_gb = "${var.os_disk_space}"
  }


For our last property, we're going to configure our service principal.

  service_principal {
    client_id     = "${var.client_id}"
    client_secret = "${var.client_secret}"
  }


Now our entire main.tf config should look like the below.

provider "azurerm" {
  version         = "1.36.1"
  subscription_id = "your_sub_id"
}

variable "rg_name" {}
variable "location" {}
variable "name" {}
variable "vm_size" {}
variable "os_disk_space" {}
variable "client_id" {}
variable "client_secret" {}
variable "ssh_pubkey" {}


resource "azurerm_resource_group" "aksRG" {
  name     = "${var.rg_name}"
  location = "${var.location}"
}

resource "azurerm_kubernetes_cluster" "aksclus" {
  name                = "${var.name}"
  location            = "${var.location}"
  resource_group_name = "${azurerm_resource_group.aksRG.name}"
  dns_prefix          = "${var.name}-dns"

  linux_profile {
    admin_username = "mike"

    ssh_key {
      key_data = "${var.ssh_pubkey}"
    }
  }

  agent_pool_profile {
    name            = "defaultpool"
    count           = 1
    vm_size         = "${var.vm_size}"
    os_type         = "Linux"
    os_disk_size_gb = "${var.os_disk_space}"
  }

  service_principal {
    client_id     = "${var.client_id}"
    client_secret = "${var.client_secret}"
  }
}


Now we're ready to initialize by running terraform init. Once we have successfully initialized, run terraform plan to see what will be created.
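Both commands are run from our AKS directory:

```shell
# Download the azurerm provider declared in main.tf
terraform init

# Preview the resource group and cluster that will be created
terraform plan
```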



Once the plan is ready, we should be ready to run terraform apply to create our cluster. Type "yes" and press Enter to create your environment.



We'll give Terraform a few minutes to run, then head over to the Azure portal. Go to Resource Groups and click on the Resource Group you specified in your variable.
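If you'd rather verify from the command line instead of the portal, you can pull the cluster's credentials and list its nodes. This assumes kubectl is installed locally; the resource group and cluster names below are the example values from our terraform.tfvars:

```shell
# Merge the new cluster's kubeconfig into ~/.kube/config
az aks get-credentials --resource-group aksRG01 --name AKSCluster1

# The single node from our defaultpool should show a Ready status
kubectl get nodes
```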


Congrats! Your Terraform configuration is complete. When you're done experimenting, you can tear down everything you created by running terraform destroy.
