
Kubernetes on Google Cloud Platform - Part 2: Into the cluster in GCP! Create our first pod

Now that our cluster is up and running (please see part 1 to set up your K8s cluster), let's take a look inside!

Once we click on the cluster, we immediately see a well-organized overview of our cluster: its size, nodes, networking, and version.


Scrolling down, we can see information about our node pools: size, version, name, and redundancy.

Moving over to Workloads, let's go ahead and spin up a containerized application.


Here we have a very simple Nginx application that will spin up as "mikes-nginx" and pull the latest version of Nginx from Docker Hub.


We can even take a look at the YAML by clicking "view YAML" to see the code for ourselves.
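While the console writes this manifest for us, it's worth knowing roughly what it contains. Here's a minimal sketch of a Deployment like the one behind "mikes-nginx" (the replica count and labels here are illustrative assumptions; the manifest GKE actually generates includes extra GKE-specific labels and defaults):

```yaml
# Minimal sketch of a Deployment like the one GKE generates for "mikes-nginx".
# Replica count and labels are assumptions for illustration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mikes-nginx
  labels:
    app: mikes-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mikes-nginx
  template:
    metadata:
      labels:
        app: mikes-nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest   # the latest Nginx image, pulled from Docker Hub
```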


Now let's go ahead and click the blue "deploy" button. You should see something similar to what I see below.


Above, we can see a ton of great information about our pod: the application it's running, logs, labels, active pods, and options for managing our pods.

Let's scroll back up on the same "Workloads" page and click that Expose button. Exposing an application creates a Kubernetes Service in front of it and makes the application reachable publicly. We'll go ahead and leave it on port 80 as an unsecured connection for the purposes of this post.
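Behind the scenes, the Expose button creates a Service object much like the sketch below (the name "mikes-nginx-service" is an assumption for illustration; GKE may name it differently, but the LoadBalancer type on port 80 matches what we chose):

```yaml
# Minimal sketch of the Service the Expose button creates.
# The service name and target port are assumptions for illustration.
apiVersion: v1
kind: Service
metadata:
  name: mikes-nginx-service
spec:
  type: LoadBalancer        # requests a public external IP from GCP
  selector:
    app: mikes-nginx        # routes traffic to our deployment's pods
  ports:
  - protocol: TCP
    port: 80                # the unsecured port we chose
    targetPort: 80
```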


Now that our service is up, let's go ahead and take a look at it. We see some great info here, but we want to focus on that "External endpoints" IP and port. That's what we're going to use to hit our application. (We also see monitoring and graphs; I'll be going into monitoring in part 3 of this blog series.)
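That "External endpoints" IP comes from the load balancer status on the Service object itself. If you view the Service's YAML, the assigned address shows up roughly like this (203.0.113.10 is a placeholder, not a real endpoint):

```yaml
# Trimmed view of the Service status once GCP assigns a public IP.
# 203.0.113.10 is a placeholder address.
status:
  loadBalancer:
    ingress:
    - ip: 203.0.113.10   # the "External endpoints" IP shown in the console
```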

Let's go ahead and click on that external endpoint.


There you have it! Our application is up and public facing, our pods are active, and we have successfully spun up our pod in Kubernetes on GCP!

In the third and final part of the GCP Kubernetes series, we will go into monitoring our cluster, pods, and applications within GCP.
