
Retrieving EC2 instance information with PowerShell - Part 2

If you did not read part 1, your configuration may differ slightly from what's shown here. As long as you can access the AWS CLI with a configured user (or your own credentials), you should be good to go!

We left off by running a few PowerShell cmdlets to pull instance information. Let's say that's not enough, and we want to turn this into a tool we can run at our leisure: daily, weekly, or monthly. For that, we'll need a dedicated script for our use case.

The first thing we'll do is get a code editor. I prefer VSCode, but you can choose whichever you prefer. Within VSCode, install the PowerShell extension.
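If you'd rather install the extension from the terminal than the Extensions view, the `code` CLI can do it (assuming VSCode's command-line tools are on your PATH):

```powershell
# Installs the official PowerShell extension for VSCode
code --install-extension ms-vscode.powershell
```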

Let's go ahead and go to File > New File. Save it wherever you would like (in production, a Git repo would be ideal; for testing, the desktop is fine). I named mine Get-EC2VMinfo.ps1, but you can name it whatever you'd like, as long as it has a .ps1 file extension.

Now that we have our text editor and file ready, let's start with our function block and parameters.
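A minimal sketch of what that function skeleton might look like (the function and parameter names follow this post; the specific attributes are my assumptions, not the author's exact code):

```powershell
function Get-EC2VMinfo {
    [CmdletBinding()]
    param (
        # Optional: target a single instance; omit it to return all instances
        [Parameter(Mandatory = $false)]
        [string]$instanceID,

        # AWS region to query, e.g. us-east-1
        [Parameter(Mandatory = $true)]
        [string]$Region
    )

    process {
        # Core logic lives in the process block
    }
}
```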

I specified two parameters here, instanceID and Region. The instanceID parameter lets you target a specific instance; if you don't specify it, all instances are returned.

The above is the core code that runs in our process block. Let's break it down:
1. The first try block contains a call to the $PSCmdlet automatic variable that essentially says "run this part of the code if an instance ID is specified."
2. The else statement handles the case where instanceID is NOT specified, returning all instances.
3. The [pscustomobject] blocks create objects out of our output so we can specify exactly what we want to see.
4. The try/catch error handling is very simple: if there are any errors, they get thrown to the screen.
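The steps above can be sketched roughly as follows, assuming the AWS Tools for PowerShell module (AWS.Tools.EC2 or AWSPowerShell) and its Get-EC2Instance cmdlet; the property mappings are my assumptions based on the output fields this post produces:

```powershell
process {
    try {
        # Run this branch if an instance ID was specified
        if ($PSCmdlet.MyInvocation.BoundParameters.ContainsKey('instanceID')) {
            $reservations = Get-EC2Instance -InstanceId $instanceID -Region $Region
        }
        else {
            # No instanceID: return every instance in the region
            $reservations = Get-EC2Instance -Region $Region
        }

        # Shape each instance into a custom object with only the fields we care about
        foreach ($instance in $reservations.Instances) {
            [pscustomobject]@{
                InstanceName = ($instance.Tags | Where-Object { $_.Key -eq 'Name' }).Value
                InstanceID   = $instance.InstanceId
                AMI          = $instance.ImageId
                InstanceType = $instance.InstanceType
                PrivateIP    = $instance.PrivateIpAddress
                PublicIP     = $instance.PublicIpAddress
            }
        }
    }
    catch {
        # Simple error handling: surface any error to the screen
        throw $_
    }
}
```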

Now that we've written our code, let's go ahead and run it.

I ran the code without specifying an instanceID, but I only have one instance running, which is why there isn't more output. Above we can see:
1. InstanceName
2. InstanceID
3. AMI
4. InstanceType
5. PrivateIP
6. PublicIP

These are the properties of the custom object we created within our code.
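To illustrate, here are a couple of example invocations (the region and instance ID values are placeholders, not values from the post):

```powershell
# Dot-source the script so the function is available in the session
. .\Get-EC2VMinfo.ps1

# Return all instances in the region
Get-EC2VMinfo -Region us-east-1

# Return a single instance (hypothetical example ID)
Get-EC2VMinfo -Region us-east-1 -instanceID i-0123456789abcdef0
```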

There you have it, folks! We now have a tool that can pull EC2 info for us whenever we'd like.

For code resources, please visit:

Want to tell me how I'm doing or ask questions? Please feel free to contact me!
Twitter: @JerseySysadmin

