
Test your Python3 code in VSCode with a Docker container

With great code comes great testing (or so we hope).


This blog post will require the following:

1. VSCode installed.
2. Docker extension in VSCode.
3. Docker for Mac or Windows.
4. A Mac or Windows device.
5. Tissues to wipe your tears of joy from how exciting containers & Python are.

Today we're going to talk about testing our Python code in a Docker container. There are instances where this doesn't work and a VM would be better for your purposes. However, if you're building distributed systems/applications, you want to know how your code will interact in a containerized environment. This is also a really good practice if you're going over some training material. Whenever I go over training material, whether that's learning something new in Python, testing code, or even testing the way an application integrates with a system, I want something fast, easy, and smooth. It takes time to spin up a VM and test when we can just use a container. Let's get started!

First things first. Let's ensure we have our Docker image. For the purposes of this blog post, we'll use CentOS 7. Run docker pull centos:7 from your terminal (or PowerShell window if you're on a Windows box).
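If you want to sanity-check that the pull worked before moving on, you can list your local images from that same terminal (your output will vary based on what you already have locally):

docker pull centos:7
docker image ls centos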


Now that we have our image, let's head over to VSCode.

If you don't already have the Docker extension for VSCode, please install it and reload VSCode.
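If you prefer the command line, you can also install the extension with the code CLI. This is a minimal sketch, assuming the extension's current marketplace ID (ms-azuretools.vscode-docker; the publisher ID may differ depending on when you're reading this):

code --install-extension ms-azuretools.vscode-docker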


Once the Docker extension is installed, you will see the Docker icon on the left pane in VSCode. You should see 3 options:

1. Images
2. Containers
3. Registries


If this is your first time using Docker on your machine, you won't have as many images as I do. You should, however, see your centos:7 image pop up from the earlier instruction in this post. If you don't, run docker image ls to see if your image exists. If not, do another docker pull as previously discussed in this post.

Next we want to go ahead and build our Dockerfile. Our Dockerfile will install Python 3, giving us a pre-made image with everything we need to properly test our Python code. Below is an example Dockerfile that I created. Please feel free to use it.

A few things to note before we run the below:

1. The file should be named "Dockerfile" so that docker build (the command we will use to build our image) picks it up automatically.
2. The Dockerfile should be saved in its own directory. This is because docker build sends everything in that directory (the build context) to the Docker daemon. It's cleaner, smoother, and faster to keep a dedicated directory for your Dockerfile.

FROM centos:7
MAINTAINER Michael
RUN yum -y update
RUN yum -y install yum-utils
RUN yum -y groupinstall development
RUN yum -y install https://centos7.iuscommunity.org/ius-release.rpm
RUN yum -y install python36u



Let's open up our terminal and run the following (note that the path argument is the directory containing your Dockerfile, not the file itself):

docker build -t pythondev:1.0 /path/to/dockerfile

In my case, my Dockerfile is on my desktop under the "Dockerimage" directory.
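So, assuming a standard macOS home directory layout, my build command looks something like this (your path will differ):

docker build -t pythondev:1.0 ~/Desktop/Dockerimage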



The build installs several packages, so it may take 3-4 minutes to finish building the image.

If we head back over to VSCode, we'll see our brand new Docker image! Let's go ahead and run it by right-clicking the image and selecting "Run Interactive".
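If you'd rather skip the extension and do this from your terminal, the rough equivalent of "Run Interactive" looks like this (the extension's exact flags may differ):

docker run -it pythondev:1.0 /bin/bash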



Now that our container is up, we're ready to go and start testing!

I wrote a very simple Python script that will list everything in our root directory along with permissions and timestamps.

import os

def containerTest():
    # Shell out to list the root directory with permissions and timestamps
    os.system("ls -la /")

if __name__ == '__main__':
    containerTest()



Next, let's run vi pythontest.py inside the container and copy/paste our code into it.


Now that our code exists in our container, we're ready to run!
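From the container's shell, run the script with the Python 3 interpreter the image installed. On CentOS 7, the IUS python36u package typically exposes the binary as python3.6 (check with which python3.6 if you're unsure):

python3.6 pythontest.py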


There we go! We have built our Python container test image, run our code, and successfully tested our containerized environment to ensure our code works properly.

With this, there is a lot to keep in mind. In a production environment, you want to make sure you have proper source control and you're pulling your code from a proper location. Once you've tested your code and you're ready, always commit it up to your source repository. This is imperative to staying production ready and always ensuring you are testing the proper code.
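For example, a minimal commit-and-push flow might look like the following (assuming you're working out of a git repository with a remote named origin; adjust file and branch names to your workflow):

git add Dockerfile pythontest.py
git commit -m "Tested Python script in containerized environment"
git push origin master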

Thanks for reading and I hope you enjoyed!
