
docker-compose for a simpler life

Thursday, 10 August 2017


Docker is a tool that allows you to create containers on the fly, making it easy to deploy applications in any environment. This blog post won't give you the ins and outs of all that is Docker; Docker is too big to delve into as a whole in one blog post. Instead, this post will discuss docker-compose: a tool that defines and runs multiple containers together.

I am a Windows man by nature (I am a .NET enthusiast), and so throughout this post I will be using Docker for Windows. The user experience of Docker for Windows is virtually identical to working with Docker natively on Linux, so the instructions laid out in this post should work on Linux as well.

What is docker-compose?

Docker-compose is a tool within the tool belt of Docker itself. It allows you to run multi-container Docker applications. For example, you may want to run your ASP.NET Core application alongside SQL Server, Redis, a load balancer, Nginx, et cetera. Setting up all of these moving parts manually would be a huge amount of work; it would be nice to run one command and have it all happen. This is where docker-compose comes in and helps.

docker-compose.yml

The examples given are what I use with docker-compose for this very blog. They will show you how you can specify environment variables, pin versions of Docker images, expose ports, and so on. All of this is achieved through the use of a docker-compose.yml file. Let's break the file down into multiple sections.

Version


version: '2'

Right at the top of the docker-compose.yml file, I have declared that we are using version 2 of the docker-compose file format. As I write, the latest version is 3. The version governs which features are and are not available within docker-compose.

Networks


networks:
  aspnetcoreapp-network:
    driver: bridge

The next section is networks. This defines a brand new isolated network for all the containers to reside in. You could define multiple networks if you require network isolation between certain containers. Here, we are using the default bridge driver. This essentially means that containers on a defined network have to live on the same host and are isolated from external communication, although individual containers can still be exposed through configured ports.
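As an illustration (this blog doesn't need it, and the network names here are made up), you could hypothetically isolate the proxy from the database by declaring two bridge networks and attaching each service only to the one it needs:


networks:
  frontend-network:
    driver: bridge
  backend-network:
    driver: bridge

Each service then lists, under its own networks key, which of these networks it joins; containers can only talk to containers that share at least one network with them.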

Nginx Service


services:
  proxy:
    container_name: "blog_nginx"
    build:
      context: ./nginx
      dockerfile: Dockerfile
    ports:
      - "${NGINX_BIND_PORT}:80"
    networks:
      - aspnetcoreapp-network
    depends_on:
      - "app"

Services are the bread and butter of the docker-compose.yml file. There is a lot going on here. You first define a name for your service, then provide a container name within it. You can then either provide an image for the container to use, or specify that you want to build one from a Dockerfile.

The nginx service has specified that it wants to build from a Dockerfile located at "./nginx/Dockerfile". Docker-compose will then create an image from that Dockerfile.


FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf

The proxy service then maps one port, as it wants to expose its port 80 for external connections from the host. We then say it is part of the network we have defined, and finally that it depends on another service. A service depending on another service means it cannot run until the specified service has started (note that Docker waits only for the container to start, not for the application inside it to be ready). This is useful for boot order.

One thing I have not mentioned is the environment variables. If docker-compose finds a .env file in the same directory as the docker-compose.yml file, you can specify values in it for docker-compose to use. This is where the ${} values come into play: e.g., ${NGINX_BIND_PORT} will evaluate to whatever that environment variable is set to. The rules around these environment variables can be read on the Docker website.
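For reference, a .env file for this compose file might look like the following. Every value below is illustrative (not my real configuration); the variable names are the ones referenced by the ${} placeholders in the compose file:


NGINX_BIND_PORT=80
APP_CONTEXT=./app
ConnectionString=Host=db;Database=blog;Username=blog;Password=secret
GoogleAnalyticsTrackingCode=UA-00000000-1
POSTGRES_BIND_PORT=5432
POSTGRES_PASSWORD=secret
POSTGRES_USER=blog
POSTGRES_DB=blog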

App service

The app service is the actual application for the blog website. It is not externally exposed; instead, nginx reverse proxies to it, which is the recommended way of working from the ASP.NET Core team.


services:
  app:
    container_name: "blog"
    build: 
      context: ${APP_CONTEXT}
      dockerfile: Dockerfile
    expose:
      - "5000"
    depends_on:
      - "db"
    networks:
      - aspnetcoreapp-network
    links:
      - db:db
    environment:
      ConnectionStrings__DefaultConnection: ${ConnectionString}
      GoogleAnalyticsTrackingCode: ${GoogleAnalyticsTrackingCode}

As can be seen, the defined areas are very similar. One thing to note is the environment section. It allows environment variables defined in the docker-compose scope (i.e., host environment variables, the .env file, etc.) to be set inside the container itself.

Also, it only exposes port 5000. This means that the host machine cannot talk directly to the container, but any container on the same network can.
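The difference between ports and expose is worth spelling out; as a sketch (port numbers here are the ones from the services above):


services:
  proxy:
    ports:
      - "80:80"    # published: reachable from the host on port 80
  app:
    expose:
      - "5000"     # internal only: reachable from other containers on the same network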


FROM microsoft/dotnet:1.1.2-runtime

WORKDIR /app
COPY . .

ENV ASPNETCORE_URLS http://+:5000
EXPOSE 5000

ENTRYPOINT ["dotnet", "blog.dll"]

This is the example Dockerfile that I have used. It requires me to have published the website beforehand. The publishing of a website could be performed with Docker as well, but that is beyond the scope of this post.
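If you did want Docker to do the publishing too, one approach is a multi-stage build (requires Docker 17.05 or later). The sketch below is an assumption on my part, not something I run: it assumes the matching SDK image tag exists and that the project publishes to a blog.dll, as in the runtime Dockerfile above:


# Build stage: restore and publish using the SDK image
FROM microsoft/dotnet:1.1.2-sdk AS build
WORKDIR /src
COPY . .
RUN dotnet restore
RUN dotnet publish -c Release -o /app

# Runtime stage: copy only the published output
FROM microsoft/dotnet:1.1.2-runtime
WORKDIR /app
COPY --from=build /app .

ENV ASPNETCORE_URLS http://+:5000
EXPOSE 5000

ENTRYPOINT ["dotnet", "blog.dll"]

The benefit is that the final image contains only the runtime and the published output, not the SDK or your source code.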

Postgres Database


services:
  db:
    container_name: "blog_db"
    image: postgres:alpine
    ports:
      - "${POSTGRES_BIND_PORT}:5432"
    restart: always
    volumes:
      - db_volume:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_DB: ${POSTGRES_DB}
    networks:
      - aspnetcoreapp-network

Finally, this is the service for the database container. I am using a bog-standard image pulled from Docker Hub called "postgres:alpine". Alpine here means it is built on Alpine Linux, a very lightweight distribution.

One new thing defined here is the volumes keyword. A volume in the Docker world is a way of mounting storage from the host into a container. This is useful so that when the container is stopped or removed, the data is not lost. Quite an important step for databases. The volume itself is declared at the bottom of the docker-compose.yml file.
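That declaration, from the bottom of the file, is simply a top-level volumes section:


volumes:
  db_volume:

With no further options, this is a named volume: Docker manages where the data actually lives on the host, and the db service mounts it at /var/lib/postgresql/data.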

Deploying with docker-compose

It is a very easy task to deploy with docker-compose; after all, that is what it was built for. To start all of your containers, you run:


docker-compose up

This will start all the containers simultaneously. Generally, however, you want to re-build the custom images, such as the nginx and app services I have defined:


docker-compose up --build

Also, you would really like the containers to start in the background rather than hogging your command line real estate:


docker-compose up --build -d

If you want to stop all containers in a safe way, you can run the following command:


docker-compose down

However, if you want to re-deploy your application after the fact (e.g., you have updated your application code, published it, and just want to get it into the containers), then docker-compose allows virtually zero downtime. All you have to do is run the up command again:


docker-compose up --build -d

With the same files and directories in place, the command will check which images need to be rebuilt. If an image has changed, docker-compose will stop the currently running container and restart it automatically. If it hasn't changed, such as the db service in the example, then it will leave that container alone. This is very useful when you want virtually zero downtime for deployments.

Summary

In this blog post, I have discussed docker-compose. Docker-compose can make your life a lot easier: it takes me about two minutes to fully build my blog application and deploy it onto my Linux server in Azure.

I think the next step for me is to hook up a continuous delivery system against my git repository; then I won't have to deploy manually anymore :)

Tl;dr – The full docker-compose.yml file


version: '2'

networks:
  aspnetcoreapp-network:
    driver: bridge

services:
  proxy:
    container_name: "blog_nginx"
    build:
      context: ./nginx
      dockerfile: Dockerfile
    ports:
      - "${NGINX_BIND_PORT}:80"
    networks:
      - aspnetcoreapp-network
    depends_on:
      - "app"

  app:
    container_name: "blog"
    build: 
      context: ${APP_CONTEXT}
      dockerfile: Dockerfile
    expose:
      - "5000"
    depends_on:
      - "db"
    networks:
      - aspnetcoreapp-network
    links:
      - db:db
    environment:
      ConnectionStrings__DefaultConnection: ${ConnectionString}
      GoogleAnalyticsTrackingCode: ${GoogleAnalyticsTrackingCode}

  db:
    container_name: "blog_db"
    image: postgres:alpine
    ports:
      - "${POSTGRES_BIND_PORT}:5432"
    restart: always
    volumes:
      - db_volume:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_DB: ${POSTGRES_DB}
    networks:
      - aspnetcoreapp-network
volumes:
  db_volume:




© 2017 - Jack Histon - Blog