
Python3 and AWS

So, here we are. Deep into cloud architecture, getting our feet wet with application deployment and distributed systems, and we find ourselves doing a ton of manual tasks. These manual tasks could be point-and-click, or they could simply be re-running the same AWS CLI commands over and over again. We're going to use another way to hit the AWS API, and that's with Python.

PLEASE NOTE: The code is pasted at the bottom. Feel free to copy/paste it while we're going over it. I have screenshots for a visual as well.

Python has an amazing module (one of its best, in my opinion) called Boto3. Boto3 is a library that wraps AWS's API. Similar to the AWS CLI, all you need is an access key and a secret key.

For this blog, we're going to list EC2 instances.
You will need the following to follow along:
  1. PyCharm for the IDE
  2. An internet connection
  3. An AWS account that has access to EC2 (please don't experiment in your prod environment).
  4. A coffee, or a beer
  5. Prior knowledge of functions, if statements, booleans, and variables in Python. If you know them in another language, you should still be able to follow along.
The first thing we will do is spin up an EC2 instance, as shown below.


This can be any AMI/distro. That won't matter for this read.

Next, we're going to create an IAM role. This IAM role, PURELY for testing purposes, will have admin access.


After your IAM role is configured, let's open up PyCharm.

You'll want to create a new project by going to File > New Project > Name it whatever you'd like > pick a location to store your code.


Next, go ahead and go to File > New File > Python > AWSDemo.py (feel free to name it whatever you'd like; just be sure to put .py at the end of the name).


Once the file is open, go to File > Default Settings > Your project interpreter > Click the plus (+) sign at the bottom > search for Boto3.
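If you'd rather install it from a terminal instead of going through PyCharm's settings, the usual pip command accomplishes the same thing (assuming pip is tied to the same interpreter your project uses):

pip install boto3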


Now that we're all ready, you should have a blank file ready to go.

The first thing we are going to do is import Boto3.
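In case the screenshot doesn't come through, the import is a single line:

import boto3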


Next, let's define our first function.


Our first function will do the following:
  1. Be named AWS_Access_Key():
  2. It will have one parameter, the access key.
  3. It will have an if statement: if accesskey is not None (meaning it's not null), it'll print 'Access key accepted'.
  4. Next, we will return the parameter value. Returning the parameter value allows us to use the function's input in another function (see the sketch right after this list).
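Putting those four steps together, the function looks like this (it's the same one you'll find in the full code at the bottom):

def AWS_Access_Key(accesskey=input('Please enter access key: ')):
    if accesskey is not None:
        print('Access key accepted')
    return accesskey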
You might be wondering why I have an input() function as the default value of our parameter. This is for security purposes. The access key/secret key pair is very powerful; if anyone gets hold of it, and it has admin access, they can do literally whatever they want in your AWS environment, so I'd rather prompt for the keys than leave them sitting in the file. Some folks prefer not to do it this way, but this is the way I choose to do it. If you don't want to do it this way, you can define your parameter like so:
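One way to do that is to give the parameter a plain default value instead of the input() call, roughly like the sketch below. The placeholder string is just an illustration; you'd paste your own key there, which is exactly what makes it riskier to keep in the file.

def AWS_Access_Key(accesskey='PASTE-YOUR-ACCESS-KEY-HERE'):
    if accesskey is not None:
        print('Access key accepted')
    return accesskey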


For the secret key, we're going to do the same exact steps.
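It mirrors the access key function, with the prompt and message adjusted for the secret key:

def AWS_Secret_Key(secretkey=input('Please enter secret key: ')):
    if secretkey is not None:
        print('Secret key accepted')
    return secretkey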


Next, we're going to get into the fun part.
Let's give a breakdown of what's happening in the below screenshot (there's also a code sketch right after this list).
  1. We're creating a new function called listService():
  2. It will be given two parameters that need to be filled in: awsService and region (we will break these down together).
  3. Call the boto3 module's client() method, fill in the appropriate parameters, and assign the result to the "service" variable. Let's break down the parameters in boto3.client():
    • the service_name represents what AWS service we want to use. In our case, it's ec2
    • the region_name represents what region we want to scan. In my case, it's us-east-1
    • aws_access_key_id is our access key, the one we entered for the first function (notice that the function name is used as the value, because the function hands the key back through its return statement)
    • aws_secret_access_key is our secret key, the one we entered for the second function (again, the function name is used as the value thanks to the return statement)
  4. Next, we're going to take our "service" variable, which holds the boto3 client, and call its describe_instances method (another piece of Boto3).
  5. We're now going to add our filters. I want you to go back to your AWS console, click on your EC2 instance, and take a look at the description. Do you see those values? Those are the values that we can pull. In our case, we will filter on the instance state name: if it's running, the instance comes back.
  6. Next, we're going to print the variable output so we can see our instances.
  7. Finally, we're going to call the function with 'ec2' and 'us-east-1' as its arguments.
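Here is that function written out (it's the same listService you'll see in the full code at the bottom):

def listService(awsService, region):
    # Build a boto3 client for the requested service in the requested region
    service = boto3.client(
        service_name=awsService,
        region_name=region,
        aws_access_key_id=AWS_Access_Key(),
        aws_secret_access_key=AWS_Secret_Key()
    )
    # Only pull back instances that are currently in the 'running' state
    instances = service.describe_instances(
        Filters=[
            {'Name': 'instance-state-name', 'Values': ['running']}
        ])
    print(instances)


listService('ec2', 'us-east-1')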



Now, we're ready to run! Get your access key and secret key ready.

Go ahead and run the script. You will see the following:


After you enter your access key, go ahead and enter your secret key.
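Roughly, the whole run looks like the snippet below. The keys are redacted and the instance details will differ in your account, so treat the last line as an illustration of the shape of the response rather than exact output.

Please enter access key: AKIA****************
Please enter secret key: ****************************************
Access key accepted
Secret key accepted
{'Reservations': [{'Instances': [{'InstanceId': 'i-0123456789abcdef0', 'State': {'Code': 16, 'Name': 'running'}, ...}], ...}], ...}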


There you have it! You are now on your way to being a Boto3 expert. Thank you for reading, and I hope you found this both insightful and entertaining!



As promised, below is the code:

import boto3


def AWS_Access_Key(accesskey=input('Please enter access key: ')):
    if accesskey is not None:
        print('Access key accepted')
    return accesskey


def AWS_Secret_Key(secretkey=input('Please enter secret key: ')):
    if secretkey is not None:
        print('Secret key accepted')
    return secretkey


def listService(awsService, region):
    # Create a boto3 client for the requested service and region using our keys
    service = boto3.client(
        service_name=awsService,
        region_name=region,
        aws_access_key_id=AWS_Access_Key(),
        aws_secret_access_key=AWS_Secret_Key()
    )
    # Only return instances that are currently running
    instances = service.describe_instances(
        Filters=[
            {'Name': 'instance-state-name', 'Values': ['running']}
        ])
    print(instances)


listService('ec2', 'us-east-1')
