
Building an instance with CloudFormation

Welcome back and thank you for taking the time to read my blog. After the last few blog posts about Docker, Kubernetes, and micro-apps, I wanted to switch gears and jump into some cloud-based architecture. One of the blessings in disguise/new hotness is IaC (Infrastructure-as-Code). Essentially, IaC lets you do something similar to an AMI/OVA/template. However, there is one huge difference: when you create an AMI or a template, that's it. You have your golden image with all of your applications and configurations. What if you want to change things up? You have to recreate an entire golden image, build on it, capture it, etc. Time can add up if you do it often. That's where Infrastructure-as-Code comes into play.

Infrastructure-as-Code allows you to edit your template/AMI/OVA at ANY given time, whether it be a new application, a new file, a new instance size, etc. For our demo, we will be using CloudFormation, which is AWS's IaC solution. Azure has an equivalent called ARM (Azure Resource Manager) templates, and there is also a very popular open-source option called Terraform by HashiCorp.

The first thing we want to do is log into AWS and go to the CloudFormation panel.


Once we click on that, we will be in the CloudFormation dashboard. We're going to go ahead and click "Create Stack".


Once we hit "Create Stack", you see a few options:

Design template: Allows you to draw an architecture diagram in a visual editor, and it translates the diagram into template code for you.
Select a sample template: Lets you pull in pre-made templates, which is convenient instead of rewriting what already exists.
Upload a template to S3: Upload a template you already have saved locally.
Specify an Amazon S3 URL: Point CloudFormation at a template you already have saved in S3.

Today we're going to keep it simple and select a sample template. This is very helpful because a lot of these templates are 500+ lines of JSON, so instead of reinventing the wheel, we might as well see what AWS will provide for us.


We're going to go ahead and select the LAMP stack.


Once you have the LAMP stack selected, we're going to go ahead and click on "View/Edit template in Designer".

This is important because we want to take a look at the template. Chances are, there may be some things we want to edit. I'm going to just post what I edited because no one wants to read me post 500+ lines of JSON :)

I went through my template and edited the following:
1) "DBPassword": I wanted this to be a minimum of 8 characters. The template starts out with a minimum of 1.
2) "DBRootPassword": I wanted this to be a minimum of 10 characters. The template starts out with a minimum of 1.
3) "InstanceType": For instance type "Allowed Values", I want to ensure only t2.small is allowed. You may also want to edit this so not just anyone can create insanely large instances for no reason.
4) "AWSRegionArch2AMI": For this, I chose to just utilize Red Hat's RHEL 7.5 AMI in the us-east-1 region. Ideally you want to restrict this to your company's approved AMIs and regions.
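To make those edits concrete, here's a rough sketch of what the relevant "Parameters" and "Mappings" sections might look like after editing. This is a trimmed-down illustration, not the full sample template, and the AMI ID is a placeholder you'd swap for your approved image:

```json
{
  "Parameters": {
    "DBPassword": {
      "NoEcho": "true",
      "Description": "MySQL database password",
      "Type": "String",
      "MinLength": "8",
      "MaxLength": "41"
    },
    "DBRootPassword": {
      "NoEcho": "true",
      "Description": "Root password for MySQL",
      "Type": "String",
      "MinLength": "10",
      "MaxLength": "41"
    },
    "InstanceType": {
      "Description": "WebServer EC2 instance type",
      "Type": "String",
      "Default": "t2.small",
      "AllowedValues": ["t2.small"],
      "ConstraintDescription": "must be t2.small."
    }
  },
  "Mappings": {
    "AWSRegionArch2AMI": {
      "us-east-1": { "HVM64": "ami-xxxxxxxx" }
    }
  }
}
```

Anything not listed in "AllowedValues" will be rejected at stack-creation time, which is how you keep people from spinning up insanely large instances.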

Below are some screenshots of what I changed.



Next thing we want to do is save the template. For our testing purposes, we will save it locally.
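Since we're saving the template locally, one quick sanity check before uploading is making sure the file is at least valid JSON with a "Resources" section. Here's a small (hypothetical) helper script for that; note it only catches syntax problems, not CloudFormation semantics:

```python
import json

def check_template_syntax(path):
    """Do a basic pre-flight check on a local CloudFormation JSON template.

    Only verifies the file parses as JSON and contains the required
    top-level "Resources" section. It does NOT validate CloudFormation
    semantics -- AWS does that when you create the stack.
    """
    with open(path) as f:
        try:
            template = json.load(f)
        except json.JSONDecodeError as e:
            return f"Invalid JSON: {e}"
    if "Resources" not in template:
        return "Missing required section: Resources"
    return "OK"
```

Catching a stray comma here beats waiting for CloudFormation to reject the upload.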



Now let's upload to S3 and click next.


Fill in your specified information.


On the next screen, "Options", fill in any specific IAM roles, tags, or alarms you'd like on your CloudFormation stack.

At the "Review" page, go ahead and review your entries and click "Create". You will see a "CREATE_IN_PROGRESS" status in CloudFormation.


This could take a little while, so grab yourself a coffee and pick up your XBOX One controller.

If all finished well and JSON decided to play nice, you should see the output below!



Your WebsiteURL will be different, but if you click on it, you should see your PHP splash page.
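That WebsiteURL comes from the template's "Outputs" section. In the sample LAMP template it's built roughly like this (a sketch; the exact resource name may differ in your copy):

```json
"Outputs": {
  "WebsiteURL": {
    "Description": "URL for newly created LAMP stack",
    "Value": {
      "Fn::Join": ["", ["http://", { "Fn::GetAtt": ["WebServerInstance", "PublicDnsName"] }]]
    }
  }
}
```

"Fn::GetAtt" pulls the public DNS name off the EC2 instance at creation time, which is why every stack gets its own unique URL.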


There ya have it folks. You have officially created a server with code! After your excitement and god-like feelings simmer down, remember to delete your CloudFormation stack (which terminates the EC2 instance along with the stack's other resources) so you don't get charged.
