
Reverse Proxy/Load Balancing with Nginx

Today we’re going to be talking about reverse proxying and load balancing with Nginx. Why is this important? It comes down to not wanting your web apps to have a single point of failure. Performance also plays a big role here. When reverse proxying, Nginx offers three load-balancing methods:

1) Round Robin (requests are distributed to the servers in rotation, going around the circle essentially).
2) Least-connected (the next request goes to the server with the fewest active connections).
3) IP Hash (the client's IP address is hashed to decide which server handles the request, so the same client keeps landing on the same server).
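In Nginx these methods map directly to directives in the upstream block. Round robin is the default and needs no directive; the other two are switched on with one line each (a sketch with placeholder addresses — don't combine both directives in one block):

```
upstream backend {
    # Round robin is the default; no directive needed.
    least_conn;    # use least-connected instead
    # ip_hash;     # or use IP hash (remove least_conn first)
    server 192.168.1.10;
    server 192.168.1.9;
}
```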

For our testing purposes, we will have 3 servers. Two of them are RedHat 7.5 and the third is Ubuntu 18.04. All of these servers will have Nginx configured.
The first thing we want to do is confirm connectivity between all of the servers. In a production environment, you would confirm they’re all on the same subnet and can communicate with one another. In something cloud-based like AWS, you would want to confirm they’re in the proper security groups that allow the necessary types of connection and communication. That is out of the scope of this blog, but there is a ton of information out there. A good note to keep in mind: ICMP is NOT turned on by default in security groups, so enable it if you are trying to ping other hosts for communication testing.
After we have confirmed communication between the servers, update those bad boys so we can get to the fun stuff.

First things first, we need to install Nginx. To install Nginx on RedHat, do the following;

```sudo rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm```
```sudo yum update -y```
```sudo yum install nginx -y```

Ubuntu made it easy for us and put Nginx right in the default package repositories, so for Ubuntu, do the following;

```sudo apt-get install nginx -y```
After that, confirm Nginx is running. On RedHat;

```systemctl status nginx```

On Ubuntu;

```service nginx status```

After that, do a curl to localhost and confirm you see HTML;

```curl localhost```

As one of the RedHat 7.5 servers will be our reverse proxy host, we want to cd (change directory) to the following location on it;

```cd /etc/nginx```

You’re going to see several configuration files here. Run a cat on nginx.conf.

This is the default Nginx configuration file. In production, we need to create a new Nginx config file of our own rather than editing the default. On RedHat, it will live in /etc/nginx/conf.d. On Debian-based systems, configs live in /etc/nginx/sites-available and are enabled via symlinks in /etc/nginx/sites-enabled.
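The reason a file dropped into conf.d gets picked up at all is that the default nginx.conf includes that directory. On RedHat the http block typically contains a line like this:

```
# From the http { } block of /etc/nginx/nginx.conf (RedHat default):
include /etc/nginx/conf.d/*.conf;
```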
Let's cd (change directory) into the conf.d directory and run;

```touch myserver.conf```

Use vim or vi and paste in your first block;

```
server {
    listen 80;
}
```

Above is the beginning to your configuration. This says “hey Mr. RedHat server, listen to traffic on port 80 on this server”.
Remember this config block, because we will be coming back here shortly. The next thing we want to do is put in an “upstream server” block. This allows the reverse proxy/load balancer to look at all of the servers in the block and point to them. For our purposes, we are going to use IP addresses. You can use hostnames as well. This block is going to be posted ABOVE the server block.
```
upstream mynewserver {
    server 192.168.1.10;
    server 192.168.1.9;
}
```

The last thing we are going to do is add our proxy_pass line. The proxy_pass directive is what makes all of the magic happen in a reverse proxy. It’s saying “hey, push the traffic to the upstream block”. It goes inside a location block within the server block, pointing at the upstream name: `proxy_pass http://mynewserver;`.
At the end, your config should look like the below;
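Putting those pieces together, the full /etc/nginx/conf.d/myserver.conf is reconstructed below (a sketch: the location / block is the conventional place for the proxy_pass line, and the IPs are the example addresses from above):

```
upstream mynewserver {
    server 192.168.1.10;
    server 192.168.1.9;
}

server {
    listen 80;

    location / {
        proxy_pass http://mynewserver;
    }
}
```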

PLEASE pay attention to the opening and closing brackets. These are very important, and one wrong placement will throw your config out of whack.
Now it’s time to test!
Run the following;

```nginx -t```
Your output should show something similar to the following. If not, please go back and see if you missed any steps;

```nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful```

Next, restart nginx;

```sudo systemctl restart nginx```
