mtnr

A tech blog with fries on the side

Category: tech

  • Mask private SSH key in GitLab CI/CD

    When using Ansible for automating deployment from a GitLab CI/CD pipeline, you must provide a private SSH key via a GitLab CI/CD variable.

    GitLab CI/CD variables can be set up to be masked, which prevents them from appearing in job logs. Furthermore, they can be configured to be hidden, so that their value can never be revealed in the CI/CD settings after the variable is saved.

    This makes it perfect for storing secrets like the aforementioned private SSH key.

    However, the feature comes with a limitation: it cannot be used when the secret value contains forbidden characters (e.g., blank spaces).

    Private SSH keys start and end with header and footer lines that contain blank spaces; these lines must not be altered or deleted, as doing so would render the key useless.

    -----BEGIN OPENSSH PRIVATE KEY-----
    ...
    -----END OPENSSH PRIVATE KEY-----

    To mitigate the issue, you can encode the key with Base64. First, create a new key pair and save it wherever you like; we will not keep the keys on our local system anyway.

    $ ssh-keygen -t ed25519 -C "GitLab CI/CD"

    Don’t provide a passphrase when asked.

    For the sake of this article, we’ll assume the generated files will reside in ~/gitlab/id_ed25519 and ~/gitlab/id_ed25519.pub.

    Now to encode the private key you can use the following command.

    $ base64 -w0 ~/gitlab/id_ed25519

    base64 takes the input file as its argument — in our case, the private SSH key. But what does -w0 signify?

    By default, the base64 command wraps lines at 76 characters when encoding data. The -w0 option disables wrapping, so the encoded result is output on a single line without any line breaks.

    Copy the output of the command and add it to a new masked and hidden CI/CD variable named SSH_PRIVATE_KEY in your GitLab project.
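    To convince yourself that the encoding round-trips cleanly, you can run a quick check with a throwaway stand-in file, so no sensitive material is involved:

```shell
# Encode a stand-in file the same way as the key, then decode it and compare
tmp=$(mktemp)
printf -- '-----BEGIN OPENSSH PRIVATE KEY-----\n...\n-----END OPENSSH PRIVATE KEY-----\n' > "$tmp"
encoded=$(base64 -w0 "$tmp")
# diff exits 0 (and prints nothing) when the decoded bytes match the original file
echo "$encoded" | base64 -d | diff - "$tmp" && echo "round-trip OK"
rm -f "$tmp"
```

    The decoded bytes must match the original file exactly; otherwise the key added in the pipeline would be corrupt.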

    Let’s have a look at a simple pipeline script on how to use the private SSH key.

    # Excerpt
      before_script:
        # Make sure the image provides an ssh-agent; install one otherwise.
        - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
        - eval $(ssh-agent -s)
        # Read the private SSH key from the variable, decode it, strip carriage
        # returns, and add it to the SSH agent.
        - echo "$SSH_PRIVATE_KEY" | base64 -d | tr -d '\r' | ssh-add - > /dev/null
      script:
        - # Ansible can now use the provided private SSH key

    Provided that you added the public key to the ~/.ssh/authorized_keys file on the target system(s), there is only one thing left to do.

    Gather the SSH host keys from the target system(s) and make them available to your script's known hosts.

    $ ssh-keyscan example.com

    Replace example.com with the actual fully qualified domain name or the IP address of your target system and add the output to a new GitLab CI/CD variable called SSH_KNOWN_HOSTS.

    Revisiting our script from earlier, we can now add the final piece of the puzzle.

    # Excerpt
      before_script:
        # Make sure the image provides an ssh-agent; install one otherwise.
        - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
        - eval $(ssh-agent -s)
        # Read the private SSH key from the variable, decode it, strip carriage
        # returns, and add it to the SSH agent.
        - echo "$SSH_PRIVATE_KEY" | base64 -d | tr -d '\r' | ssh-add - > /dev/null
        # Make sure the target system is known
        - mkdir -p ~/.ssh
        - chmod 700 ~/.ssh
        - echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
        - chmod 644 ~/.ssh/known_hosts
      script:
        - # Ansible can now use the provided private SSH key

    Happy deploying!


  • Unit tests in Angular

    Here’s a comprehensive article about Angular unit tests using TestBed and without:

    https://medium.com/widle-studio/angular-unit-testing-without-testbed-a-comprehensive-guide-2e4c557c8da

    It covers a wide variety of topics, from synchronous and asynchronous tests and testing components and services to mocking HTTP calls.

  • Upgrading from RSA to ED25519

    Just out of curiosity, ascertain what keys you have on your machine by issuing the following command:

    for key in ~/.ssh/id_*; do ssh-keygen -l -f "${key}"; done | uniq

    Generate a new Ed25519 key pair

    ssh-keygen -o -a 256 -t ed25519 -C "$(hostname)-$(date +'%d-%m-%Y')"

    Executing the command above will generate a new Ed25519 key pair. When asked, provide a strong passphrase.

    ~/.ssh/id_ed25519     # private key
    ~/.ssh/id_ed25519.pub # public key

    Let’s have a brief look at each option.

    -o saves the private key in the new OpenSSH format
    -a specifies the number of key derivation function (KDF) rounds
    -t specifies the key type; in this case Ed25519
    -C adds an optional comment that helps with identifying the key
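    Afterwards you can sanity-check what you generated by printing the key's fingerprint. The sketch below uses a disposable, passphrase-less demo key so it can run anywhere; the real key from above works the same way:

```shell
# Generate a disposable demo key (no passphrase) and print its fingerprint
tmpdir=$(mktemp -d)
ssh-keygen -o -a 100 -t ed25519 -N "" -C "demo" -f "$tmpdir/id_ed25519" > /dev/null
# Output format: <bits> SHA256:<fingerprint> <comment> (<type>)
fp=$(ssh-keygen -l -f "$tmpdir/id_ed25519.pub")
echo "$fp"
rm -rf "$tmpdir"
```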

    Using the new keys

    Now, simply add the public key to the authorized keys of the machine you would like to log in to. To retrieve the public key, use the following command and copy and paste its output.

    cat ~/.ssh/id_ed25519.pub

    Sprinkle a bit of convenience on top

    Now if you’re like me and are using a Mac, you may use the Keychain to store your passphrase, so you don’t have to type it out every time you log in to your server via SSH.

    I added the following to ~/.ssh/config:

    Host mtnr
        HostName mtnr.cloud
        UseKeychain yes
        IdentityFile ~/.ssh/id_ed25519

    Now, when calling ssh mtnr, I can SSH into my server without specifying anything extra (e.g., which key pair to use for authentication), and I only have to type out the passphrase once. All subsequent attempts will use the passphrase stored in my Keychain.

    Neat!


  • MXToolBox

    All of your MX record, DNS, blacklist, and SMTP diagnostics in one integrated tool. Enter a domain name, IP address, or host name. Links in the results will guide you to other relevant tools and information, and you’ll have a chronological history of your results.

    https://mxtoolbox.com

  • Yet another how to run WordPress with Docker tutorial

    There are already a lot of tutorials on how to run a WordPress blog from a Docker image. This one is more of the same and yet a little different.

    The usual approach includes a full stack containing WordPress as well as a database (e.g., MySQL, MariaDB, Postgres). However, since I was planning on using the database for more than just WordPress, I thought it might be prudent to run an instance of MariaDB on the host rather than spin up a database in each container.

    That being said, here’s the docker-compose.yml file I’m using to spin up my blog.

    services:
    
      wordpress:
        image: wordpress:6.5.3-apache
        container_name: mtnr-wordpress
        restart: always
        ports:
          - 8001:80
        env_file:
          - .env
        environment:
          WORDPRESS_DB_HOST: host.docker.internal:3306
          WORDPRESS_DB_USER: $MYSQL_USER
          WORDPRESS_DB_PASSWORD: $MYSQL_PASSWORD
          WORDPRESS_DB_NAME: mtnr
          WORDPRESS_CONFIG_EXTRA: |
            define( 'WP_REDIS_HOST', 'mtnr-redis' );
        volumes:
          - wordpress:/var/www/html
        extra_hosts:
          - host.docker.internal:host-gateway
    
      database-relay:
        image: alpine/socat:1.8.0.0
        container_name: mtnr-database-relay
        network_mode: host
        command: TCP-LISTEN:3306,fork,bind=host.docker.internal TCP-CONNECT:127.0.0.1:3306
        extra_hosts:
          - host.docker.internal:host-gateway
    
      redis:
        image: redis:7.2.5
        container_name: mtnr-redis
        restart: always
    
    volumes:
      wordpress:

    A few things are worth mentioning here. For starters, all three services use a pinned version instead of latest in order to avoid breaking changes when restarting a service. Let’s look at each service in a little more detail.

    Starting from the top with the WordPress service.

    Sensitive data such as $MYSQL_USER and $MYSQL_PASSWORD will be provided as environment variables via an .env file stored in the same location as the docker-compose.yml file.
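    For reference, the .env file is a plain key-value file; the values below are placeholders, not my actual credentials:

```shell
# .env (example values; replace with your real database credentials)
MYSQL_USER=wordpress
MYSQL_PASSWORD=change-me
```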

    WORDPRESS_DB_HOST points to host.docker.internal, which is provided as an extra host mapped to the host gateway. Since the database on the host listens on 127.0.0.1 only, adding the host gateway alone won’t solve our problem of connecting to the host database.

    This is where socat comes into play.

    You can read about this in detail here: https://dev.to/mjnaderi/accessing-host-services-from-docker-containers-1a97.

    I added Redis for object caching, too. This seems pretty straightforward. However, in order for WordPress (or rather, a Redis object cache plugin) to recognize the Redis container, we must add the container name to WordPress’ config by setting the WP_REDIS_HOST variable in the WORDPRESS_CONFIG_EXTRA environment variable.

    All WordPress files will be stored in a volume called wordpress, which about sums it up.

    All that is left to do is start up the containers (-d as in detached).

    sudo docker compose up -d


  • Roll Your Own Network

    One stop shop for rolling your own network. You’ll find tutorials on how to run a network, server, desktop, mobile, and manage certificates and backups.

    It’s a work in progress by its own admission, but definitely worth checking out!

    https://roll.urown.net

  • Better backups with BorgBase

    I recently posted an article on how I used rsnapshot to back up this blog. It worked fine, but I had two major issues with the approach.

    1. Backing up the database’s internal file storage

    Generally speaking, it’s bad practice to back up a database’s internal file storage, as it could change mid-backup. A better approach is to create a database dump, which results in a consistent snapshot.

    2. Backing up volumes in archives

    With the current approach of creating compressed tars of the Docker volumes, a full backup is created each time. This uses more space than necessary.

    Borg to the rescue

    Borg is an easy-to-use deduplicating archiver that comes with compression and encryption out of the box.

    You can find a detailed manual and more info at https://www.borgbackup.org/.

    Since I have to back up protected resources, I installed it with privileged access.

    sudo apt update
    sudo apt install borgbackup

    At the time of writing, the installation instructions can be found at https://borgbackup.readthedocs.io/en/1.2.8/installation.html.

    At this point we could go ahead, set up a (remote) repository, and start backing up our data, which would already result in smaller backups than the previously used tar archives.

    However, we still need a way to automatically back up the database, as well as a convenient way to automate our backups.

    Automate backups with Borgmatic

    Borgmatic also comes with exhaustive documentation that can be found at https://torsion.org/borgmatic/docs/how-to/set-up-backups/.

    I’ve opted for a root install using apt.

    sudo apt update
    sudo apt install borgmatic

    Now that we have installed Borgmatic, let’s create a config file in /etc/borgmatic/config.yaml.

    sudo borgmatic config generate

    Now, before editing the configuration file to our needs, let’s set up a remote repository with BorgBase first.

    Remote backups with BorgBase

    Sign up and receive 10 GB and 2 repositories for free forever. No credit card required. Just a place to keep your backups.

    https://www.borgbase.com

    After setting up your free account, it looks something like this.

    BorgBase welcome screen

    Before you may add a repository, you have to add an SSH key first. BorgBase makes it very easy to add a key and guides you all the way. Here’s how I created mine.

    ssh-keygen -t ed25519 -C "<EMAIL>" -f ~/.ssh/id_ed25519_borgbase

    Please replace <EMAIL> with your own email address. The above will generate a new key named id_ed25519_borgbase under .ssh in your home folder, along with a corresponding public key. The public key is what you must provide BorgBase with in order to create and access a repository. Type the following to print it in your terminal:

    cat ~/.ssh/id_ed25519_borgbase.pub

    After setting up your repository you will be presented with a wizard to set up your server for communicating with it.

    BorgBase Setup Wizard

    Now it’s time to edit the borgmatic config file from earlier. It’s pretty self-explanatory.

    I’m including everything under /etc, my home folder as well as the Docker volume for this blog.

    source_directories:
      - /etc
      - /home/<USER>
      - /var/lib/docker/volumes/<VOLUME_NAME>

    There is a detailed explanation on how to include database dumps in your backups available at https://torsion.org/borgmatic/docs/how-to/backup-your-databases/.

    I added the following snippet to my config.

    mariadb_databases:
      - name: <DB_NAME>
        hostname: 127.0.0.1
        port: 3306
        username: <USER>
        password: <PASSWORD>

    After you’re done, you may validate your config with the following command.

    sudo borgmatic config validate

    The last thing to do is to initialize the repository.

    sudo borgmatic init --encryption repokey

    Test your setup

    Before editing your crontab, it makes sense to test your setup manually.

    sudo borgmatic create --verbosity 1 --list --stats

    If everything works as expected, you should add a call to borgmatic to the root user’s crontab.

    sudo crontab -e
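    My crontab entry looks roughly like this (the schedule and the borgmatic path are assumptions; adjust them to your system):

```
# m h dom mon dow command
0 2 * * * /usr/bin/borgmatic --verbosity -1
```

    The --verbosity -1 option keeps cron output quiet unless something goes wrong.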

    Conclusion

    And now you can lie back and relax. Depending on your crontab settings, your incremental backups will be created automatically, encrypted, and stored securely off-site.

    Nice!

  • Get your backups to safety with rsnapshot

    In this article we learned how to back up Docker volumes. However, storing the backups on the same machine won’t do us any good should, for example, the hard drive fail.

    rsnapshot to the rescue.

    I run rsnapshot as a Docker container from my home server and have set up an SSH source in the rsnapshot config file, which I fetch on a regular basis via the following cron job:

    docker exec rsnapshot rsnapshot alpha
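    In crontab form, the job looks something like this (the four-hour schedule is just an example; pick whatever rhythm matches your rsnapshot retention config):

```
# Pull an "alpha" snapshot every four hours via the rsnapshot container
0 */4 * * * docker exec rsnapshot rsnapshot alpha
```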

    That’s it.

  • How to backup your Docker volumes

    I’m running this blog from a Docker container behind a reverse proxy.

    That being said, I’m using volumes to persist my container data. Backing these up requires a bit more work than simply copying files from A to B.

    Backup commands using temporary containers

    In order to create a backup from a volume, I’m using the following command:

    sudo docker run --rm \
      --volumes-from <CONTAINER_NAME> \
      -v ~/backups:/backups \
      ubuntu \
      tar cvzf /backups/backup.tar.gz /var/www/html

    That might be a lot. Let’s break it down.

    1. sudo docker run will start a new container as root. In this case from the ubuntu image (i.e., the image name provided in line 4).
    2. --volumes-from will mount all volumes from the given container name. Of course, you’ll have to replace the placeholder with the name of your container.
    3. -v ~/backups:/backups will mount the backups folder located in your home directory to a /backups folder in the container we’re starting. We will use this to store the backups we’re about to create.
    4. ubuntu is the image we’re using to start our disposable backup container.
    5. Finally, there is the actual command to back up the files we want from the WordPress container, which live in the /var/www/html directory. tar cvzf /backups/backup.tar.gz /var/www/html creates (c) a verbose (v) gzip-compressed (z) archive file (f) of that directory.
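    If you want to see those tar flags in action without involving Docker, a quick local round trip with temporary files shows create and extract working as a pair (the v flag is dropped here for quiet output):

```shell
# Create sample data, archive it, then extract and compare the result
src=$(mktemp -d); dst=$(mktemp -d); archive="$src.tar.gz"
echo "hello" > "$src/file.txt"
tar czf "$archive" -C "$src" .   # c = create, z = gzip-compress, f = archive file
tar xzf "$archive" -C "$dst"     # x = extract
result=$(diff "$src/file.txt" "$dst/file.txt" && echo "archive round-trip OK")
echo "$result"
rm -rf "$src" "$dst" "$archive"
```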

    Once we execute the command, all files from the /var/www/html directory within our WordPress container will end up in ~/backups/backup.tar.gz, and the backup container we just fired up will be removed automatically once the tar command has finished.

    I will issue a second command just like the above to back up the corresponding MySQL database container.

    sudo docker run --rm \
      --volumes-from <CONTAINER_NAME> \
      -v ~/backups:/backups \
      ubuntu \
      tar cvzf /backups/backup-db.tar.gz /var/lib/mysql

    We’ve now created two compressed archives in our backup folder: one for the files and one for the database of our WordPress instance.

    Let’s build on that.

    Automating your backups

    Since it would be a bit tedious to type out the same commands each time we want to back up our data, let’s automate the process.

    As a first step, I add all the backup commands for the volumes I want to back up into a script.

    #!/bin/bash
    # Use an absolute path: when run from the root user's crontab, ~ expands to /root
    BACKUP_DIR=/home/<USER>/backups

    docker run --rm \
      --volumes-from <CONTAINER_NAME> \
      -v "$BACKUP_DIR":/backups \
      ubuntu \
      tar cvzf /backups/backup.tar.gz /var/www/html

    docker run --rm \
      --volumes-from <CONTAINER_NAME> \
      -v "$BACKUP_DIR":/backups \
      ubuntu \
      tar cvzf /backups/backup-db.tar.gz /var/lib/mysql

    If I run that script as the non-root user, I’ll run into the following error.

    docker: permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": dial unix /var/run/docker.sock: connect: permission denied.

    What’s going on? Well, as you may have noticed, I omitted sudo from the docker commands because I plan to call this script from a cron job, where I won’t be able to enter my password. However, the docker command needs root access, since Docker is not a rootless installation in this instance.

    Let’s tie it all together and add the script to the root user’s cron jobs.

    sudo crontab -e

    Assuming your script resides in a scripts folder below your home folder, add the following and save. Be sure to use the absolute path: in the root user’s crontab, ~ expands to /root, not to your home folder.

    0 1 * * * /home/<USER>/scripts/backup.sh

    This will execute the script with root privileges every night at 1 a.m.

    However, the backup files will belong to the root user. Let’s remedy that.

    File permissions

    Add the following line to the end of your backup script, replacing <USER> with your user name. (Since cron runs the script as root, $(id -u -n) would resolve to root there, so the user must be named explicitly.)

    chown <USER>:<USER> /home/<USER>/backups/*

    This will assign ownership and group of all backups to your user.

    Summary

    We’ve automated daily backups of our Docker volumes using shell scripts and cron jobs. Neat. Let me know what you think.
