mtnr

A tech blog with fries on the side

Tag: docker

  • Yet another how to run WordPress with Docker tutorial

    There are already a lot of tutorials on how to run a WordPress blog from a Docker image. This one is more of the same and yet a little different.

    The usual approach includes a full stack containing WordPress as well as a database (e.g., MySQL, MariaDB, Postgres, etc.). However, since I was planning on using the database for more than just WordPress, I thought it might be prudent to run an instance of MariaDB on the host rather than spin up a database in each container.

    That being said, here’s the docker-compose.yml file I’m using to spin up my blog.

    services:
    
      wordpress:
        image: wordpress:6.5.3-apache
        container_name: mtnr-wordpress
        restart: always
        ports:
          - 8001:80
        env_file:
          - .env
        environment:
          WORDPRESS_DB_HOST: host.docker.internal:3306
          WORDPRESS_DB_USER: $MYSQL_USER
          WORDPRESS_DB_PASSWORD: $MYSQL_PASSWORD
          WORDPRESS_DB_NAME: mtnr
          WORDPRESS_CONFIG_EXTRA: |
            define( 'WP_REDIS_HOST', 'mtnr-redis' );
        volumes:
          - wordpress:/var/www/html
        extra_hosts:
          - host.docker.internal:host-gateway
    
      database-relay:
        image: alpine/socat:1.8.0.0
        container_name: mtnr-database-relay
        network_mode: host
        command: TCP-LISTEN:3306,fork,bind=host.docker.internal TCP-CONNECT:127.0.0.1:3306
        extra_hosts:
          - host.docker.internal:host-gateway
    
      redis:
        image: redis:7.2.5
        container_name: mtnr-redis
        restart: always
    
    volumes:
      wordpress:

    A few things are worth mentioning here. For starters, all three services pin a dedicated version instead of latest in order to avoid breaking changes when restarting the service. Let’s look at each service in a little more detail.

    Starting from the top with the WordPress service.

    Sensitive data such as $MYSQL_USER and $MYSQL_PASSWORD is provided as environment variables via a .env file stored in the same location as the docker-compose.yml file.
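    The .env file itself is nothing more than KEY=value pairs. A minimal sketch with made-up placeholder values (the variable names match the ones referenced in docker-compose.yml; pick your own credentials):

    ```shell
    MYSQL_USER=wordpress
    MYSQL_PASSWORD=change-me-please
    ```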

    WORDPRESS_DB_HOST points to host.docker.internal. That hostname is provided as an extra host mapped to the host gateway. However, since the database on the host only listens on 127.0.0.1, adding the host gateway alone won’t solve our problem of connecting to the host database.

    This is where socat comes into play.

    You can read about this in detail here: https://dev.to/mjnaderi/accessing-host-services-from-docker-containers-1a97.

    I added redis for object caching, too. This seems pretty straightforward. However, in order for WordPress, or rather a redis object cache plugin, to recognize the redis container, we must add the container name to WordPress’ config by adding the WP_REDIS_HOST constant to the WORDPRESS_CONFIG_EXTRA environment variable.

    All WordPress files will be stored in a volume called wordpress, which about sums it up.

    All that’s left to do is start up the containers (-d as in detached).

    sudo docker compose up -d


  • Get your backups to safety with rsnapshot

    In this article we learned how to back up Docker volumes. However, storing the backups on the same machine won’t do us any good should, for example, the hard drive fail.

    rsnapshot to the rescue.

    I run rsnapshot as a Docker container on my home server and have set up an SSH source in the rsnapshot config file, which I fetch on a regular basis via the following cron job:

    docker exec rsnapshot rsnapshot alpha
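    The cron entry on the host might look something like this (the schedule is just an example; the alpha interval name comes from the retain lines in your rsnapshot config, and rsnapshot is assumed to be the container name):

    ```shell
    # Pull a fresh alpha snapshot every day at 3 a.m.
    0 3 * * * docker exec rsnapshot rsnapshot alpha
    ```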

    That’s it.

  • How to backup your Docker volumes

    I’m running this blog from a Docker container behind a reverse proxy.

    That being said, I’m using volumes to persist my container data. Backing these up requires a bit more work than simply copying files from A to B.

    Backup commands using temporary containers

    In order to create a backup from a volume, I’m using the following command:

    sudo docker run --rm \
      --volumes-from <CONTAINER_NAME> \
      -v ~/backups:/backups \
      ubuntu \
      tar cvzf /backups/backup.tar.gz /var/www/html

    That might be a lot. Let’s break it down.

    1. sudo docker run will start a new container as root, in this case from the ubuntu image (i.e., the image name provided in line 4).
    2. --volumes-from will mount all volumes from the given container name. Of course, you’ll have to replace the placeholder with the name of your container.
    3. -v ~/backups:/backups will mount the backups folder located in your home directory to a /backups folder in the container we’re starting. We will use this to store the backups we’re about to create.
    4. ubuntu being the image we’re using to start our disposable backup container.
    5. Finally, there is the actual command to back up the files we want from the WordPress container, which are available in the /var/www/html directory. tar cvzf /backups/backup.tar.gz /var/www/html creates (c) a gzip-compressed (z) archive file (f) from the given directory and verbosely (v) lists each file as it is added.

    Once we execute the command, all files from the /var/www/html directory within our WordPress container will end up in ~/backups/backup.tar.gz, and the backup container we just fired up will be removed automatically once the tar command has finished.
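    The tar part works the same outside of Docker, by the way. Here’s a quick, self-contained round trip in a throwaway directory (the paths are made up for the demo; they are not the ones from the backup command above):

    ```shell
    # Set up a throwaway directory with a file to back up
    workdir=$(mktemp -d)
    mkdir -p "$workdir/site"
    echo "hello" > "$workdir/site/index.html"

    # c = create, v = verbose, z = gzip compress, f = archive file name
    # -C changes into the directory first so the archive holds relative paths
    tar cvzf "$workdir/backup.tar.gz" -C "$workdir" site

    # Restore into a separate directory to prove the archive is intact
    mkdir "$workdir/restore"
    tar xzf "$workdir/backup.tar.gz" -C "$workdir/restore"
    cat "$workdir/restore/site/index.html"   # prints "hello"

    # Clean up when done:
    # rm -rf "$workdir"
    ```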

    I will issue a second command just like the one above to back up the corresponding MySQL database container.

    sudo docker run --rm \
      --volumes-from <CONTAINER_NAME> \
      -v ~/backups:/backups \
      ubuntu \
      tar cvzf /backups/backup-db.tar.gz /var/lib/mysql

    We’ve created two compressed archives in our backup folder: one for the files and one for the database of our WordPress instance.

    Let’s build on that.

    Automating your backups

    Since it would be a bit tedious to fully type out the same commands each time we want to back up our data, let’s automate the process.

    As a first step I add all the backup commands for the volumes I want to backup into a script.

    #!/bin/bash
    docker run --rm \
      --volumes-from <CONTAINER_NAME> \
      -v ~/backups:/backups \
      ubuntu \
      tar cvzf /backups/backup.tar.gz /var/www/html
    
    docker run --rm \
      --volumes-from <CONTAINER_NAME> \
      -v ~/backups:/backups \
      ubuntu \
      tar cvzf /backups/backup-db.tar.gz /var/lib/mysql

    If I run that script as the non-root user, I’ll run into the following error.

    docker: permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": dial unix /var/run/docker.sock: connect: permission denied.

    What’s going on? Well, as you may have noticed, I omitted sudo from the docker commands because I plan to call this script from a cron job, where I won’t be able to input my password. However, the docker commands need root access, since Docker is not a rootless installation in this instance.

    Let’s tie it all together and add the script to the root user’s cron jobs.

    sudo crontab -e

    Assuming your script resides in a scripts folder below your home folder, add the following and save. One caveat: in root’s crontab, ~ expands to root’s home directory, not yours, so use absolute paths both here and for the ~/backups mounts inside the script (replace <USER> with your user name).

    0 1 * * * /home/<USER>/scripts/backup.sh

    This will execute the script with root privileges every night at 1 a.m.
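    For reference, the five time fields in a crontab line break down like this:

    ```shell
    # ┌───────── minute (0)
    # │ ┌─────── hour (1)
    # │ │ ┌───── day of month (every day)
    # │ │ │ ┌─── month (every month)
    # │ │ │ │ ┌─ day of week (every weekday)
    # 0 1 * * *  /path/to/backup.sh
    ```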

    However, the backup files will belong to the root user. Let’s remedy that.

    File permissions

    Add the following line to the end of your backup script. Since the script runs from root’s crontab, command substitutions like $(id -u -n) would expand to root there, so spell out your user, group, and backup path instead (replace the placeholders accordingly).

    chown <USER>:<GROUP> /home/<USER>/backups/*

    This will assign ownership and group of all backups to your user.
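    You can check what those id substitutions expand to in your current shell. Interactively, that’s your own user; in a script run from root’s crontab, it would be root:

    ```shell
    # -u -n prints the current user name, -g -n the primary group name
    echo "user:  $(id -u -n)"
    echo "group: $(id -g -n)"
    ```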

    Summary

    We’ve automated daily backups of our Docker volumes using shell scripts and cron jobs. Neat. Let me know what you think.
