Yane ✖ Karov

  • Building an automated installer for a self-hosted full-stack QR code generator (legacy)

    QRGen is built in React/Vite (TSX), NGINX (proxying/forwarding), Express (TS), Certbot (Python Fork), Bash for automation, and rootless Docker.

    TL;DR

    This article is a (now legacy) companion deep dive into the QRGen project installer, a self-hosted solution for generating QR codes. It offers a point-in-time snapshot of the project as it existed before the submodule refactor, and before the project was split into a frontend and backend. It's a bit of a mess, but I hope you enjoy it anyway. You can find the code for the version documented in this blog post in the legacy-full-release branch on GitHub, or the more up to date version here

    AI DISCLAIMER

    I used ChatGPT-4 to bounce ideas off in the making of this project and article, resulting in more than a couple of rewrites of both this article and the project itself. Regardless, I hope that this content is both interesting and useful to you.

    QRGen

    Hubris. Let me show you.

    Skip this section if you're not interested in my personal experience building the project and just want to read the technical stuff.

    I had just finished sectioning off my new home wireless network into VLANs and realized that I needed a way to share the Wi-Fi password with guests. I really didn't want to use a "free" online QR code generator, due to the often closed-source, you-are-the-product feeling I got when researching the available options, some of which were SEO drivel seemingly owned by the same damn company. More on that later. So, looking for a way to distribute a 168-character Wi-Fi password, in true hacker spirit, I decided to build the damn thing myself. Along the way I realized that I could make it a bit more useful and decided to add a few more fields to the QR code generator, eventually realizing that I wanted to generate a bunch of codes for other things, not just the one-off.

    So, a quick Google search turned up a guide to generating QR codes in under 7 lines of code. I thought 'hell yeah, this is going to be so easy'. And, honestly, building the initial server really was; it took about two minutes to get the original code generation working, and another half hour to get a nice Express API server up and running with a bit of JavaScript. I was feeling pretty good about myself and decided to take a break for the evening.

    The next day, I shared what I was building with my wife, who manages a boutique store, and she was really interested in how what I had built could translate into sharing product information with customers. She wasn't particularly keen on running a curl command every time she wanted to generate a QR code, so I decided to build a frontend for the project. Since the last project I worked on was in React, and I hadn't brushed up in a while, I decided to use CRA. Fast-forward a few days and the frontend prototype was largely done, and I was feeling pretty proud of myself.

    On a break, I was browsing the React subreddit when I read a comment about the changing landscape of bundling React apps, and decided to check it out. Turns out CRA isn't receiving half as much love as it used to, third-party bundlers are the new hotness, and the flavour of said hotness is Vite. Since I had already built the frontend in CRA + JavaScript I was a little bummed out, but while my motivation was still high from rapidly completing the prototype, I decided to let my curiosity guide me and port the project over to Vite. This time, I would build the frontend in TypeScript/TSX and use Vite to bundle it. This took a bit longer than I expected, several more days, since there were also some new data structures to implement and additional overhead that TypeScript brings to the table.

    By this point about another week had passed, and I was looking at the state of the project and wondering what else I could do to make it better. Not wanting to jump between TSX and JavaScript as frequently as I had been, in part due to the cognitive load, I decided to port the backend to TypeScript as well. This was a bit more involved than I had anticipated, as around this time I was also starting to automate the heck out of the project. There were a lot of moving parts, such as dependencies, points of service integration, container management, etc. This all led to the decision to put a lot of my dynamic code in heredocs, which would be automatically integrated and staged in the project directory using Bash (see the minimal sketch after the list below). I should probably note here that the reason I chose Bash over Python for this task comes down to a few simple, but perhaps to some dirty, points:

    1. Comfort & Productivity - I'm honestly really comfortable writing shell code. Yes, I know this likely means there is something wrong with my brain. Bash has a very large number of footguns compared to other scripting languages, but this is overshadowed by how insanely productive it can be to write in.
    2. Portability - I had the probably overly ambitious target of wanting the code to be 'templatable', while also being distro-agnostic, and I didn't want to have to worry about Python versions, or Python not being installed.
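
    To make the heredoc staging pattern concrete, here's a minimal sketch (the file name, directory, and variable are illustrative, not taken from the project):

    #!/bin/bash

    # Illustrative only: stage a generated config file by expanding
    # shell variables inside a heredoc at generation time.
    mkdir -p ./staging
    BACKEND_PORT=3001

    cat << EOF > ./staging/example.conf
    # Generated on $(date)
    port=${BACKEND_PORT}
    EOF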

    Once I was done porting Express to TypeScript, I decided to add a few more fields to the QR code generator, cleaning up the code as I went. I also added a few more features, such as the ability to save QR codes, batch generate QR codes, and a few other things. At the current rate of progress I was pretty content, so I started to split my time roughly 50/40/10 between feature code, automation code, and documentation.

    Another week had passed by the time I was ensuring stability, adding security features such as rate limiting, CORS support, XSS protection, etc., and testing different deployment scenarios. I also realized that this project would be a lot more secure if the Docker containers were run in rootless mode, and decided to automate that as well. What an absolute PITA. This consistently took the most time to get right, although I'm pretty happy with the end result. It mostly came down to the fact that I had to do a lot of incremental testing, and I learned about the many nuances of Docker, rootless Docker, the ways in which they differ, and some of the meat and potatoes behind Linux internals and containers in general. One such nuance is shown below.
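
    As a concrete example of those nuances (a well-known rootless Docker gotcha, not project code): a rootless daemon can't bind privileged ports below 1024 by default, so exposing 80/443 requires either a sysctl tweak or a capability grant:

    # Option 1: lower the unprivileged port floor system-wide.
    sudo sysctl net.ipv4.ip_unprivileged_port_start=80

    # Option 2: grant the bind capability to rootlesskit specifically.
    sudo setcap cap_net_bind_service=ep "$(command -v rootlesskit)"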

    Anyway, it's been over a month since I started this project now, and I'm going to keep updating it as I go. I just wanted to document the progress so far, and share some of my experiences along the way.

    Data Structures, Design, Automation, Code, and Container Lifecycle

    None of the subsequent sections will cover the installation, setup, or configuration of the project, as that is covered thoroughly in the README.md. This will be a deep dive into the project's data structures, design, automation, and code.

    Data Structures

    In a loose sense, the project breaks up into about half a dozen 'layers', each with its own data structures and its own purpose to serve. I'll break down each layer, the data structures used within it, and its purpose, providing visualizations where possible.

    Design

    From a high level, the shell scripts automatically configure the project's environment, the Docker containers that run the project's services, and the docker-compose.yml file that orchestrates them. Another aspect handles user prompts, service management, logging, and so on.

    The project initially took a 'dynamically generate all the things' approach, which definitely increased the speed of development at a cost to maintainability. I've since moved to a more modular approach - integrating the corresponding backend and frontend(s) as git submodules. This clones the respective code bases and branches, which is better than individually managing which files and directories are copied at build time. It also allows the project to be updated and extended more easily with future releases. A sketch of the submodule workflow follows.
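
    A minimal sketch of that submodule workflow (the repository URLs and paths here are illustrative, not the project's actual remotes):

    # Register the frontend and backend repositories as submodules,
    # each pinned to a branch.
    git submodule add -b main https://github.com/example/qrgen-frontend frontend
    git submodule add -b main https://github.com/example/qrgen-backend backend

    # On a fresh clone, fetch the pinned submodule revisions.
    git submodule update --init --recursive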

    Code

    Breaking down depends.sh by function and purpose

    #!/bin/bash
    
    `set -euo pipefail`
    
    This line ensures that our archaic script terminates upon:
    
    - Command failures (`-e`)
    - Use of unset variables (`-u`)
    - Failure of any command in a pipeline (`-o pipefail`)
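
    To illustrate (a throwaway example, not project code): without pipefail, a pipeline's exit status is that of its last command, so an early failure would slip past `set -e`:

    #!/bin/bash
    set -euo pipefail

    # `false` fails; with pipefail the pipeline's status is non-zero,
    # so `set -e` terminates the script before the echo runs.
    false | cat
    echo "never reached"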
    

    main()

    main() is the entry point for the script - it checks for root privileges, prepares the environment, declares defaults, and then hands off to installation_menu(), which drives the installation of NVM, Node.js, Docker, and Docker Compose.

    ...
    function main() {
      # Check for necessary privileges
      if [[ $EUID -ne 0 ]]; then
        echo "This script must be run as root. Please use sudo."
        exit 1
      fi
    
      # Export the terminal type and set a UTF-8 locale so the script behaves consistently when run non-interactively
      export TERM=${TERM:-xterm}
      export LC_ALL=en_US.UTF-8
      export LANG=en_US.UTF-8
    
      user_name="${1:-docker-primary}"
      nvm_install_url="https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.5/install.sh"
      NODE_VERSION="20.8.0"
    
      installation_menu
    }
    
    main "$@"
    

    installation_menu()

    The installation_menu() function is the main menu for the script, allowing the user to select which actions to perform, and in what order. We use a case statement to handle the user's input, and invoke the corresponding functions. The functions are defined below.

    function installation_menu() {
      local choice
      echo "Choose an action:"
      echo "1) Full Installation (All)"
      echo "2) Setup User Account"
      echo "3) Install Packages and Dependencies"
      echo "4) Setup NVM and Node.js"
      echo "5) Uninstall Packages and Dependencies"
      echo "6) Remove User Account"
      echo "7) Remove NVM and Node.js"
      echo "8) Full Uninstallation (All)"
      read -rp "Your choice (1-8): " choice
    
      case $choice in
        1)
          setup_user
          install_packages
          setup_nvm_node
          ;;
        2) setup_user ;;
        3) install_packages ;;
        4) setup_nvm_node ;;
        5) uninstall_packages ;;
        6) remove_user ;;
        7) remove_nvm_node ;;
        8)
          remove_nvm_node
          remove_user
          uninstall_packages
          ;;
        *)
          echo "Invalid choice. Exiting."
          exit 1
          ;;
      esac
    }
    

    setup_user()

    setup_user() checks whether the specified user exists. If it does, it offers the option to reset their password or skip setup. If the user does not exist, it creates a new user without a password and prompts to set one.

    function setup_user() {
      echo "Setting up $user_name user..."
    
      if id "$user_name" &> /dev/null; then
        local user_choice
        echo "User $user_name already exists."
        echo "1) Reset password"
        echo "2) Skip user setup"
        while true; do
          read -rp "Your choice (1-2): " user_choice
          case $user_choice in
            1 | 2) break ;;
            *) echo "Please enter a valid choice (1 or 2)." ;;
          esac
        done
        case $user_choice in
          1) setup_user_with_prompt ;;
          2) echo "User setup skipped." ;;
          *) echo "Invalid choice. Exiting." ;;
        esac
      else
        sudo adduser --disabled-password --gecos "" "$user_name"
        setup_user_with_prompt
      fi
    }
    

    setup_user_with_prompt()

    setup_user_with_prompt() handles the actual password reset: it makes sure that the user exists and prompts for a new password.

    function setup_user_with_prompt() {
      if [ -z "$user_name" ]; then
        echo "Error: user_name is not set."
        return 1
      fi
    
      if ! id "$user_name" &> /dev/null; then
        echo "User $user_name does not exist."
        return 2
      fi
    
      echo "Setting a new password for $user_name."
      passwd "$user_name" || {
        echo "Failed to set password for $user_name."
        return 3
      }
    }
    

    install_packages()

    install_packages() is critical, as it first removes packages that would break the project. It then adds the Docker repository to the apt sources list and installs several packages the project needs: Docker, Docker Compose, inotify-tools for monitoring file changes, jq for JSON parsing, uidmap (which provides the newuidmap/newgidmap binaries that rootless Docker needs for subordinate UID/GID mapping), and curl for downloading files.

    function install_packages() {
      echo "Removing conflicting packages..."
      local remove_packages=(docker.io docker-doc docker-compose podman-docker containerd runc)
      for package in "${remove_packages[@]}"; do
        sudo apt-get remove -y "$package"
      done
    
      echo "Installing required packages..."
      sudo apt-get update -y
      sudo apt-get install -y ca-certificates curl netcat gnupg uidmap inotify-tools
    
      sudo install -m 0755 -d /etc/apt/keyrings
      curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --batch --yes --dearmor -o /etc/apt/keyrings/docker.gpg
      sudo chmod a+r /etc/apt/keyrings/docker.gpg
    
      echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
      $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    
      sudo apt-get update -y
      sudo apt-get install -y jq docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
    }
    

    setup_nvm_node()

    setup_nvm_node() is a simple function that downloads and installs NVM, and subsequently uses NVM to install Node.js. This is specifically done to avoid conflicts with other projects that may require different versions of Node.js, and to ensure that the project's docker-primary user always has access to the required version of Node.js.

    function setup_nvm_node() {
      echo "Setting up NVM and Node.js..."
    
      if id "$user_name" &> /dev/null; then
        sudo mkdir -p /home/"$user_name"/.nvm
        sudo chown "$user_name:$user_name" /home/"$user_name"/.nvm
    
        sudo -Eu "$user_name" bash << EOF
    export NVM_DIR="/home/$user_name/.nvm"
    export npm_config_cache="/home/$user_name/.npm"
    curl -o- $nvm_install_url | bash
    source "\$NVM_DIR/nvm.sh"
    nvm install $NODE_VERSION
    nvm use $NODE_VERSION
    nvm alias default $NODE_VERSION
    npm install -g npm
    EOF
      else
        echo "User $user_name does not exist. Exiting..."
        exit 1
      fi
    }
    

    uninstall_packages()

    The depends.sh script also comes with an uninstallation feature, which is useful for cleaning up the environment if the project is no longer needed. The uninstall_packages() function removes the packages installed by install_packages() and removes the Docker repository from the apt sources list, attempting to fix any broken installs it encounters along the way.

    function uninstall_packages() {
      echo "Attempting to uninstall packages..."
      local packages=(docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin docker-compose)
      for package in "${packages[@]}"; do
        if ! sudo apt-get purge -y "$package"; then
          echo "Error occurred during uninstallation of $package."
          echo "Attempting to fix broken installs."
          sudo apt --fix-broken install
        fi
      done
    
      if [ -f /etc/apt/sources.list.d/docker.list ]; then
        echo "Removing Docker repository..."
        sudo rm /etc/apt/sources.list.d/docker.list
      fi
    
      sudo apt-get autoremove -y
    }
    

    remove_user()

    The remove_user() function removes the project user, if it exists, by prompting the user for confirmation and subsequently deleting the user and their home directory, as well as any processes running under the user.

    function remove_user() {
      echo "Removing $user_name user..."
      if pgrep -u "$user_name" > /dev/null; then
        echo "There are active processes running under the $user_name user."
        local response
        read -rp "Would you like to kill all processes and continue with user removal? (y/N) " response
        if [[ $response =~ ^[Yy][Ee]?[Ss]?$ ]]; then
          sudo pkill -9 -u "$user_name"
          sleep 2  # Allow some time for processes to be terminated
        else
          echo "Skipping user removal."
          return
        fi
      fi
    
      sudo deluser --remove-home "$user_name"
    }
    

    remove_nvm_node()

    The remove_nvm_node() function is assumed to always run before remove_user(), and although it's quite redundant in the case that the user does a full uninstall, it cleanly removes NVM and Node.js, if they exist, by prompting the user for confirmation and subsequently deleting the NVM directory and any Node.js versions installed by NVM specifically for the project user.

    function remove_nvm_node() {
      echo "Removing NVM and Node.js..."
      if id "$user_name" &> /dev/null; then
        local nvm_dir="/home/$user_name/.nvm"
        local nvm_sh="$nvm_dir/nvm.sh"
    
        if [ -s "$nvm_sh" ]; then
          # Load NVM and uninstall Node versions
          sudo -u "$user_name" bash -c "source $nvm_sh && nvm deactivate && nvm uninstall --lts && nvm uninstall --current"
    
          # Remove NVM directory
          sudo rm -rf "$nvm_dir"
          echo "NVM and Node.js removed for user $user_name."
        else
          echo "NVM is not installed for $user_name. Skipping..."
        fi
      else
        echo "User $user_name does not exist. Exiting..."
        exit 1
      fi
    }
    

    The install.sh script is the main entrypoint for the entire project and automates it almost entirely. Having said that, the documentation moving forward will likely be far more verbose, and equally likely to rapidly fall out of date. The script focuses on sourcing other modular scripts, declaring constants, and invoking functions to set up the project's environment, as well as validating the presence of required files. This script is never to be run as root; instead, it should be run as the docker-primary project user created by depends.sh.

    The first couple of lines are the same as depends.sh, but then pivot by changing the working directory to the script's directory. This practice ensures the script always runs from the same directory, regardless of where the user is when they invoke it.

    #!/bin/bash
    
    # Exit on error, undefined variable, or pipe failure.
    set -euo pipefail
    
    # Change to the script's directory.
    cd "$(dirname "$0")"
    ...
    

    The following line ensures that main() is only invoked when the script is executed directly, and not sourced.

    [[ ${BASH_SOURCE[0]} == "${0}" ]] && main
    

    main()

    The guard above invokes the main() function, which invokes the user_prompt() function, allowing the user to select which actions to perform, and in what order.

    # Main entry point of the script.
    main() {
      # This condition checks if the script is being sourced or executed.
      [[ ${BASH_SOURCE[0]} != "$0" ]] && echo "This script must be run, not sourced." && exit 1
    
      # Trap the SIGINT signal (Ctrl+C) and call the quit function.
      trap quit SIGINT
    
      # Prompt for user options
      user_prompt
    }
    

    Here, we perform a check to verify the existence of the .env file and source it if it exists.

      # Load environment variables if .env file exists.
      if [[ -f .env ]]; then
          . .env
      else
          echo "Error: .env file not found."
          exit 1
      fi
    

    We also declare clusters of associative arrays in scope to store several of the project's core mappings, which drive the dynamic configuration of several files.

      # Define global associative arrays. Without declare -A, the [KEY]=
      # syntax would be treated as arithmetic indices rather than string
      # keys; -g keeps the arrays global when declared inside a function.
      declare -gA dirs internal_dirs ssl_paths certbot_volume_mappings

      dirs=(
        [BACKEND_DIR]="${PROJECT_ROOT_DIR}/backend"
        [FRONTEND_DIR]="${PROJECT_ROOT_DIR}/frontend"
        [SERVER_DIR]="${PROJECT_ROOT_DIR}/server"
        [CERTBOT_DIR]="${PROJECT_ROOT_DIR}/certbot"
        [CERTS_DIR]="${PROJECT_ROOT_DIR}/certs"
        [WEBROOT_DIR]="${PROJECT_ROOT_DIR}/webroot"
        [CERTS_DH_DIR]="${PROJECT_ROOT_DIR}/certs/dhparam"
      )

      internal_dirs=(
        [INTERNAL_LETS_ENCRYPT_DIR]="/etc/letsencrypt"
        [INTERNAL_LETS_ENCRYPT_LOGS_DIR]="/var/log/letsencrypt"
        [INTERNAL_WEBROOT_DIR]="/usr/share/nginx/html"
        [INTERNAL_CERTS_DH_DIR]="/etc/ssl/certs/dhparam"
      )

      ssl_paths=(
        [PRIVKEY_PATH]="${internal_dirs[INTERNAL_LETS_ENCRYPT_DIR]}/live/${DOMAIN_NAME}/privkey.pem"
        [FULLCHAIN_PATH]="${internal_dirs[INTERNAL_LETS_ENCRYPT_DIR]}/live/${DOMAIN_NAME}/fullchain.pem"
        [DH_PARAMS_PATH]="${internal_dirs[INTERNAL_CERTS_DH_DIR]}/dhparam-2048.pem"
      )

      certbot_volume_mappings=(
        [LETS_ENCRYPT_VOLUME_MAPPING]="${dirs[CERTS_DIR]}:${internal_dirs[INTERNAL_LETS_ENCRYPT_DIR]}"
        [LETS_ENCRYPT_LOGS_VOLUME_MAPPING]="${dirs[CERTBOT_DIR]}/logs:${internal_dirs[INTERNAL_LETS_ENCRYPT_LOGS_DIR]}"
        [CERTS_DH_VOLUME_MAPPING]="${dirs[CERTS_DH_DIR]}:${internal_dirs[INTERNAL_CERTS_DH_DIR]}"
        [WEBROOT_VOLUME_MAPPING]="${dirs[WEBROOT_DIR]}:${internal_dirs[INTERNAL_WEBROOT_DIR]}"
      )
    

    Moving on - I'll break down several modular aspects of the code, grouped for ease of explanation rather than by execution order. For example, here are several helper functions.

    Helper Functions

    create_directory()

    create_directory() is a simple function that checks whether a directory exists and creates it if not, otherwise skipping it. It takes a single argument: the directory to create.

    #!/bin/bash
    
    create_directory() {
      local directory="$1"
      if [ ! -d "$directory" ]; then
        mkdir -p "$directory"
        echo "$directory created."
      else
        echo "$directory already exists."
      fi
    }
    

    copy_server_files()

    copy_server_files() copies the frontend and backend files and directories into their places in the local project structure. This separates source files into their respective directories, ensuring the project is always in a clean state and that source files from the various components (frontend, backend, certbot) aren't mixed up in the build stage.

    #!/bin/bash

    # Directories
    PROJECT_ROOT_DIR="${HOME}/QRGen"
    BACKEND_DIR="${PROJECT_ROOT_DIR}/backend"
    FRONTEND_DIR="${PROJECT_ROOT_DIR}/frontend"
    
    copy_server_files() {
      echo "Copying server files..."
      copy_frontend_files
      copy_backend_files
    }
    
    
    copy_backend_files() {
      echo "Copying backend files..."
      cp -r "server" "$BACKEND_DIR"
      cp "tsconfig.json" "$BACKEND_DIR"
      cp ".env" "$BACKEND_DIR"
      backend_files="backend/*"
    }
    
    copy_frontend_files() {
      ls "$PROJECT_ROOT_DIR"
      echo "Copying frontend files..."
      cp -r "src" "$FRONTEND_DIR"
      cp -r "public" "$FRONTEND_DIR"
      cp "tsconfig.json" "$FRONTEND_DIR"
      cp "index.html" "$FRONTEND_DIR"
    }
    

    Docker Container Lifecycle Functions

    docker_compose_exists()

    docker_compose_exists() is a really simple function that checks if the docker-compose.yml file exists in the project root directory. That's all it does 👽

    #!/bin/bash
    
    docker_compose_exists() {
      [[ -f "${PROJECT_ROOT_DIR}/docker-compose.yml" ]]
    }
    

    stop_containers()

    Really simple one, stop_containers() is a function that stops the containers using docker-compose if the docker-compose.yml file exists.

    #!/bin/bash
    
    stop_containers() {
      test_docker_env
      if docker_compose_exists; then
        echo "Stopping containers using docker-compose..."
        docker compose -f "${DOCKER_COMPOSE_FILE}" down
      fi
    }
    

    produce_docker_logs()

    produce_docker_logs() produces a date-stamped log dump for each service defined in the docker-compose.yml file, if it exists.

    #!/bin/bash
    
    produce_docker_logs() {
      if docker_compose_exists; then
        local compose_file="${PROJECT_ROOT_DIR}/docker-compose.yml"

        # Get a list of services defined in the Compose file
        local services
        local service
    
        services=$(docker compose -f "$compose_file" config --services)
    
        # Loop through each service and produce logs
        for service in $services; do
          echo "Logs for service: $service" "@" "$(date)"
          docker compose -f "$compose_file" logs "$service"
          echo "--------------------------------------------"
        done
      else
        echo "Docker Compose not found. Please install Docker Compose."
      fi
    }
    

    test_docker_env()

    This small but important function ensures that the Docker environment variables are set correctly. test_docker_env() computes the expected rootless socket path from the current user's ID and, if DOCKER_HOST is unset or different, sets and exports it. This is critical to being able to run any build commands against the rootless Docker daemon.

    #!/bin/bash
    
    test_docker_env() {
      echo "Ensuring Docker environment variables are set..."
      local expected_docker_host
      # Update or set DOCKER_HOST.
      expected_docker_host="unix:///run/user/$(id -u)/docker.sock"
      if [ -z "${DOCKER_HOST:-}" ] || [ "${DOCKER_HOST:-}" != "${expected_docker_host}" ]; then
        DOCKER_HOST="${expected_docker_host}"
        export DOCKER_HOST
        echo "Set DOCKER_HOST to ${DOCKER_HOST}"
      fi
    }
    

    Networking Functions

    is_port_in_use()

    is_port_in_use() checks that the given port is a valid number and whether it is in use, using a regex for validation and netcat for the probe, with distinct return codes for each outcome.

    #!/bin/bash
    
    # Check if the given port is a valid number and not in use.
    is_port_in_use() {
      local port="$1"
      # Check if the port is a number.
      if ! [[ $port =~ ^[0-9]+$ ]]; then
        echo "Error: Port must be a number."
        return 2 # Return a different exit code for invalid input.
      fi
    
      # Check if the port is in use. Using netcat (nc) as it is more commonly available.
      if nc -z 127.0.0.1 "$port" > /dev/null 2>&1; then
        return 0  # Port is in use
      else
        return 1  # Port is not in use
      fi
    }
    

    ensure_port_available()

    ensure_port_available() is a while loop that uses is_port_in_use() to check whether the port is in use and, if so, prompts the user for an alternative.

    #!/bin/bash
    
    ensure_port_available() {
      local port="$1"
      local default_port=$NGINX_PORT # Store the default port in case we need to use it again.
    
      # Check if the port is in use, and prompt for a new one if it is.
      while is_port_in_use "$port"; do
        local input_port
        echo "Port $port is already in use."
        read -rp "Please provide an alternate port or Ctrl+C to exit: " input_port
    
        # Use the provided port or default to the previously set default_port if no input is given
        port="${input_port:-$default_port}"
      done
    
      # Set the NGINX_PORT to the selected port that is not in use.
      NGINX_PORT="$port"
      echo "Selected port $NGINX_PORT is available."
    }
    

    Environment Functions

    setup_docker_rootless()

    setup_docker_rootless() is required to configure Docker to operate in rootless mode. It ensures that Docker is installed, that the rootless setup tool is available, and that the user's bashrc file is updated with the necessary environment variables. The function also manages Docker's systemd user services: it checks the status of, starts, and enables the Docker service in rootless mode.

    #!/bin/bash
    
    # Configures Docker to operate in rootless mode, updating user's bashrc as required.
    setup_docker_rootless() {
      echo "Setting up Docker in rootless mode..."
    
      # Validate Docker installation.
      if ! command -v docker &> /dev/null; then
        echo "Docker is not installed. Please install Docker to continue."
        exit 1
      fi
    
      # Ensure rootless setup tool is available before attempting setup.
      if ! command -v dockerd-rootless-setuptool.sh > /dev/null 2>&1; then
        echo "dockerd-rootless-setuptool.sh not found. Exiting."
        return 1
      else
        dockerd-rootless-setuptool.sh install
      fi
    
      # Ensure Docker environment variables are set.
      test_docker_env
    
      # Append environment settings to the user's bashrc.
      add_to_bashrc() {
        local line="$1"
        if ! grep -q "^${line}$" ~/.bashrc; then
          echo "$line" >> ~/.bashrc
        fi
      }
    
      add_to_bashrc "export PATH=/usr/bin:$PATH"
      add_to_bashrc "export XDG_RUNTIME_DIR=/run/user/$(id -u)"
      add_to_bashrc "DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock"
    
      # Manage Docker's systemd services.
      systemctl --user status docker.service
      systemctl --user start docker.service
      systemctl --user enable docker.service
    }
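
    After this runs, it's worth sanity-checking that the client is actually talking to the rootless daemon (generic verification steps, not part of the installer):

    # The user-level systemd unit should be active...
    systemctl --user is-active docker.service

    # ...and the daemon should report the rootless security option.
    docker info --format '{{.SecurityOptions}}' | grep rootless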
    

    setup_project_directories()

    Here, setup_project_directories() creates the project's core build directory structure. If the server source directories exist, copy_server_files() is called to copy the frontend and backend files and directories into the local project structure. Otherwise, the script exits with an error message.

    #!/bin/bash
    
    setup_project_directories() {
      echo "Staging project directories..."
    
      local directory
      for directory in "$SERVER_DIR" "$FRONTEND_DIR" "$BACKEND_DIR" "$CERTBOT_DIR" "$PROJECT_LOGS_DIR"; do
        create_directory "$directory"
      done
    
      local src_dir="$HOME/QRGen/src"
      local server_src_dir="$src_dir/server"
    
      if [[ -d $src_dir && -d $server_src_dir  ]]; then
        copy_server_files
      else
        echo "Error: Sources are not available, exiting..."
        exit 1
      fi
    }
    

    generate_server_files()

    generate_server_files() preemptively generates all the necessary configuration files for the project by calling their respective functions.

    #!/bin/bash
    
    generate_server_files() {
      echo "Creating server configuration files..."
      configure_backend_tsconfig
      configure_dot_env
      echo "Configuring the Docker Express..."
      configure_backend_docker
      echo "Configuring the Docker NGINX Proxy..."
      configure_frontend_docker
      echo "Configuring the Docker Certbot..."
      configure_certbot_docker
      echo "Configuring Docker Compose..."
      configure_docker_compose
    }
    

    Configuration Functions

    configure_backend_tsconfig()

    configure_backend_tsconfig() uses a heredoc to generate a tsconfig.json file for the Express server in the backend directory. This would probably be just as easy to copy over directly - an artifact from when the project was 90% dynamically generated.

    #!/bin/bash
    
    configure_backend_tsconfig() {
      cat << EOF > "$BACKEND_DIR/tsconfig.json"
    {
      "compilerOptions": {
        "target": "ES2022",
        "module": "CommonJS",
        "lib": ["ES2022"],
        "outDir": "./dist",
        "rootDir": "./src",
        "strict": true,
        "moduleResolution": "node",
        "skipLibCheck": true,
        "esModuleInterop": true,
        "resolveJsonModule": true,
        "isolatedModules": true,
        "noEmitOnError": true,
        "forceConsistentCasingInFileNames": true,
        "noUnusedLocals": true,
        "noUnusedParameters": true,
        "noImplicitReturns": true,
        "noFallthroughCasesInSwitch": true
      },
      "include": ["src/**/*.ts"],  // Source files to be compiled
    }
    EOF
    }
    

    configure_dot_env()

    configure_dot_env() generates an .env file for the Express server in the backend directory, distinct from the one used by the installer in the project root directory. It's much cleaner than the installer's, as it only contains the origin, port, and SSL constants. These values are shared across .envs and are set based on the user's input at the time of installation. The reason for the split is to prevent installation-specific values from being shared across .envs, which would be a security risk.

    #!/bin/bash
    
    configure_dot_env() {
      cat << EOF > "$BACKEND_DIR/.env"
    ORIGIN=$ORIGIN
    PORT=$BACKEND_PORT
    USE_SSL=$USE_SSL
    EOF
    }
    

    configure_backend_docker()

    configure_backend_docker() uses a heredoc to generate a Dockerfile for the Express server in the backend directory, distinct from the Dockerfile(s) used by the frontend and certbot services. This Dockerfile is used to build the backend/Express server container. It pins the Node version defined in the .env, selects the internal /usr/app working directory, installs the necessary dependencies, copies the backend sources, and exposes the backend port. The command provided runs the Express server with ts-node, using the existing server.ts file as the entrypoint.

    #!/bin/bash
    
    configure_backend_docker() {
      cat << EOF > "$BACKEND_DIR/Dockerfile"
    # Use the Node.js version pinned in the .env
    FROM node:$NODE_VERSION
    
    # Set the default working directory
    WORKDIR /usr/app
    
    RUN npm install -g ts-node typescript \
    && npm install --save-dev typescript ts-node jest ts-jest jsdom \
    && npx tsc --init \
    && npm install dotenv express cors multer archiver express-rate-limit helmet qrcode \
    && npm install --save-dev @types/express @types/cors @types/node @types/multer @types/archiver \
    && npm install --save-dev @types/express-rate-limit @types/helmet @types/qrcode @types/jest
    
    COPY $backend_files /usr/app
    
    # Set the backend express port
    EXPOSE $BACKEND_PORT
    
    # Use ts-node to run the TypeScript server file
    CMD ["npx", "ts-node", "src/server.ts"]
    EOF
    }
    

    configure_frontend_docker()

    configure_frontend_docker() uses a heredoc to generate a Dockerfile for the React/Vite app and the NGINX server in the frontend directory. It picks up the Node.js version defined in the .env and sets the default working directory. Then it uses npm to initialize a project, before installing the necessary dependencies and generating a build template with vite & react-ts. After this, the autogenerated files that aren't in kebab case are removed from the generated project. Frontend sources are copied over, and the new working directory is set (from the template generation), which is really important as it ensures that the build process always runs from the frontend directory rather than the container root directory. The project is built, and the NGINX server is installed.

    Note: NGINX could have its own container using shared volumes here, but it's probably unnecessary and simply easier to run it in the same container as the frontend for the time being.

    The build files are copied to the NGINX directory, and the .well-known and .well-known/acme-challenge directories are created. Modifying .well-known to provision read, write, and execute permissions for all users is necessary so that certbot can read, write, and traverse the directory to provision the SSL certificates. The NGINX port is set, and the server is run in the foreground.
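
    The generated Dockerfile itself isn't reproduced in this post (the code below covers the NGINX configuration helpers instead), but based on the description above, a minimal sketch of what configure_frontend_docker() emits might look like the following. Treat it as a hypothetical reconstruction - the package commands and paths are assumptions, not the project's verbatim output:

    #!/bin/bash

    # Hypothetical reconstruction of the generated frontend Dockerfile.
    configure_frontend_docker() {
      cat << EOF > "$FRONTEND_DIR/Dockerfile"
    # Node version pinned via the .env
    FROM node:$NODE_VERSION

    WORKDIR /usr/app

    # Scaffold a vite + react-ts template, then overlay the real sources
    RUN npm create vite@latest frontend -- --template react-ts
    COPY frontend/src /usr/app/frontend/src
    COPY frontend/public /usr/app/frontend/public

    # Build from the frontend directory, not the container root
    WORKDIR /usr/app/frontend
    RUN npm install && npm run build

    # Install NGINX alongside the build and stage the output
    RUN apt-get update && apt-get install -y nginx
    RUN cp -r dist/* /usr/share/nginx/html \
     && mkdir -p /usr/share/nginx/html/.well-known/acme-challenge \
     && chmod -R a+rwx /usr/share/nginx/html/.well-known

    # Serve on the configured port, in the foreground
    EXPOSE $NGINX_PORT
    CMD ["nginx", "-g", "daemon off;"]
    EOF
    }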

    #!/bin/bash
    
    #######################################
    # Configures NGINX with SSL and optional settings
    # Globals:
    #   BACKEND_PORT
    #   DH_PARAMS_PATH
    #   DNS_RESOLVER
    #   DOMAIN_NAME
    #   INTERNAL_LETS_ENCRYPT_DIR
    #   NGINX_PORT
    #   NGINX_SSL_PORT
    #   PROJECT_ROOT_DIR
    #   SUBDOMAIN
    #   TIMEOUT
    #   USE_LETS_ENCRYPT
    #   USE_SELF_SIGNED_CERTS
    #   internal_dirs
    #   ssl_paths
    # Arguments:
    #  None
    # Returns:
    #   1 on error
    #######################################
    
    #######################################
    # Logs a timestamped error message
    # Arguments:
    #   1 - the message to log
    #######################################
    log_error() {
        echo "[ERROR] $(date '+%Y-%m-%d %H:%M:%S') - $1"
    }
    
    #######################################
    # Assembles and writes the NGINX configuration based on the SSL settings
    # Globals:
    #   BACKEND_PORT
    #   DH_PARAMS_PATH
    #   DNS_RESOLVER
    #   DOMAIN_NAME
    #   INTERNAL_LETS_ENCRYPT_DIR
    #   NGINX_PORT
    #   NGINX_SSL_PORT
    #   PROJECT_ROOT_DIR
    #   SUBDOMAIN
    #   TIMEOUT
    #   USE_LETS_ENCRYPT
    #   USE_SELF_SIGNED_CERTS
    #   USE_SSL_BACKWARD_COMPAT
    # Arguments:
    #  None
    # Returns:
    #   1 on error
    #######################################
    # bashsupport disable=BP5006
    configure_nginx() {
        echo "Creating NGINX configuration..."
        TLS_PROTOCOL_SUPPORT=${TLS_PROTOCOL_SUPPORT:-"restricted"}
    
        # Initialize local variables
        backend_scheme="http"
        server_name="${DOMAIN_NAME}"
        default_port_directive="listen $NGINX_PORT;"
        default_port_directive+=$'\n'
        default_port_directive+="        listen [::]:$NGINX_PORT;"
        ssl_listen_directive=""
        ssl_mode_block=""
        resolver_settings=""
        certs=""
        security_headers=""
        acme_challenge_server_block=""
    
        backup_existing_config
        configure_subdomain
        configure_https
        configure_acme_challenge
        write_nginx_config
    }
    
    #######################################
    # Extends server_name with the subdomain when one is configured
    # Globals:
    #   DOMAIN_NAME
    #   SUBDOMAIN
    #   server_name
    # Arguments:
    #  None
    #######################################
    configure_subdomain() {
        if [[ $SUBDOMAIN != "www" && -n $SUBDOMAIN ]]; then
            server_name="${DOMAIN_NAME} ${SUBDOMAIN}.${DOMAIN_NAME}"
      fi
    }
    
    #######################################
    # Enables HTTPS directives when Let's Encrypt or self-signed certs are in use
    # Globals:
    #   DNS_RESOLVER
    #   NGINX_SSL_PORT
    #   TIMEOUT
    #   USE_LETS_ENCRYPT
    #   USE_SELF_SIGNED_CERTS
    #   backend_scheme
    #   resolver_settings
    #   ssl_listen_directive
    # Arguments:
    #  None
    #######################################
    configure_https() {
        if [[ $USE_LETS_ENCRYPT == "yes" ]] || [[ $USE_SELF_SIGNED_CERTS == "yes" ]]; then
            backend_scheme="https"
            ssl_listen_directive="listen $NGINX_SSL_PORT ssl;"
            ssl_listen_directive+=$'\n'
            ssl_listen_directive+="        listen [::]:""$NGINX_SSL_PORT ssl;"
            configure_ssl_mode
            resolver_settings="resolver ${DNS_RESOLVER} valid=300s;"
            resolver_settings+=$'\n'
            resolver_settings+="        resolver_timeout ${TIMEOUT}ms;"
            configure_certs
            configure_security_headers
      fi
    }
    
    configure_ssl_mode() {
        if [[ $TLS_PROTOCOL_SUPPORT == "restricted" ]]; then
            # Restricted mode pins the protocol to TLSv1.3 only
            ssl_mode_block=$(get_gzip)
            ssl_mode_block+=$'\n'
            ssl_mode_block+=$(tls_protocol_one_three_restrict)
            ssl_mode_block+=$'\n'
            ssl_mode_block+=$(get_ssl_additional_config)
        else
            # Compatibility mode additionally allows TLSv1.2
            ssl_mode_block=$(get_gzip)
            ssl_mode_block+=$'\n'
            ssl_mode_block+=$(get_ssl_protocol_compatibility)
            ssl_mode_block+=$'\n'
            ssl_mode_block+=$(get_ssl_additional_config)
        fi
    }
    
    #######################################
    # Turn off gzip compression
    # Globals:
    #   None
    # Arguments:
    #  None
    #######################################
    get_gzip() {
        cat <<- EOF
         gzip off;
    EOF
    }
    
    #######################################
    # Emits an ssl_protocols directive allowing TLSv1.2 and TLSv1.3
    # Globals:
    #   None
    # Arguments:
    #  None
    #######################################
    get_ssl_protocol_compatibility() {
        cat <<- EOF
            ssl_protocols TLSv1.2 TLSv1.3;
    EOF
    }
    
    #######################################
    # SSL additional configuration, covering cipher suites, session cache, and
    # other security-related features, recommended for a modern, secure setup.
    # Globals:
    #   DH_PARAMS_PATH
    #   ssl_paths
    # Arguments:
    #  None
    #######################################
    get_ssl_additional_config() {
        cat <<- EOF
            ssl_prefer_server_ciphers on;
            ssl_ciphers 'ECDH+AESGCM:ECDH+AES256:!DH+3DES:!ADH:!AECDH:!MD5:!ECDHE-RSA-AES256-SHA384:!ECDHE-RSA-AES256-SHA:!ECDHE-RSA-AES128-SHA256:!ECDHE-RSA-AES128-SHA:!RC2:!RC4:!DES:!EXPORT:!NULL:!SHA1';
            ssl_buffer_size 8k;
            ssl_dhparam ${ssl_paths[DH_PARAMS_PATH]};
            ssl_ecdh_curve secp384r1;
            ssl_stapling on;
            ssl_stapling_verify on;
            ssl_session_cache shared:SSL:10m;
            ssl_session_timeout 10m;
    EOF
    }
    
    #######################################
    # Emits an ssl_protocols directive restricted to TLSv1.3
    # Globals:
    #   None
    # Arguments:
    #  None
    #######################################
    tls_protocol_one_three_restrict() {
        cat <<- EOF
        ssl_protocols TLSv1.3;
    EOF
    }
    
    #######################################
    # Points NGINX at the certificate files under the Let's Encrypt directory
    # Globals:
    #   DOMAIN_NAME
    #   INTERNAL_LETS_ENCRYPT_DIR
    #   certs
    #   internal_dirs
    # Arguments:
    #  None
    #######################################
    configure_certs() {
          certs="
            ssl_certificate ${internal_dirs[INTERNAL_LETS_ENCRYPT_DIR]}/live/${DOMAIN_NAME}/fullchain.pem;
            ssl_certificate_key ${internal_dirs[INTERNAL_LETS_ENCRYPT_DIR]}/live/${DOMAIN_NAME}/privkey.pem;
            ssl_trusted_certificate ${internal_dirs[INTERNAL_LETS_ENCRYPT_DIR]}/live/${DOMAIN_NAME}/fullchain.pem;"
    }
    
    #######################################
    # Builds the security header block, adding HSTS when Let's Encrypt is enabled
    # Globals:
    #   DOMAIN_NAME
    #   INTERNAL_LETS_ENCRYPT_DIR
    #   USE_LETS_ENCRYPT
    #   internal_dirs
    #   security_headers
    # Arguments:
    #  None
    #######################################
    configure_security_headers() {
      security_headers="
                # Prevent clickjacking by instructing the browser to deny rendering iframes
                add_header X-Frame-Options 'DENY' always;
    
                # Protect against MIME type sniffing security vulnerabilities
                add_header X-Content-Type-Options nosniff always;
    
                # Enable XSS filtering in browsers that support it
                add_header X-XSS-Protection '1; mode=block' always;
    
                # Control the information that the browser includes with navigations away from your site
                add_header Referrer-Policy 'strict-origin-when-cross-origin' always;
    
                # Content Security Policy
                # The CSP restricts the sources of content like scripts, styles, images, etc. to increase security
                # 'self' keyword restricts loading resources to the same origin as the document
                # Adjust the policy directives based on your application's specific needs
                add_header Content-Security-Policy \"default-src 'self'; script-src 'self'; object-src 'none'; img-src 'self' data:; media-src 'none'; frame-src 'none'; font-src 'self'; connect-src 'self';\";"
    
      if [[ $USE_LETS_ENCRYPT == "yes" ]]; then
        security_headers+="
    
                # HTTP Strict Transport Security (HSTS) for 1 year, including subdomains
                add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains' always;"
      fi
    }
    
    #######################################
    # Adds an HTTP server block for ACME challenges and an HTTPS redirect
    # Globals:
    #   USE_LETS_ENCRYPT
    #   acme_challenge_server_block
    #   server_name
    # Arguments:
    #  None
    #######################################
    configure_acme_challenge() {
        if [[ $USE_LETS_ENCRYPT == "yes" ]]; then
            acme_challenge_server_block="server {
              listen 80;
              listen [::]:80;
              server_name ${server_name};
              location / {
                  return 301 https://\$host\$request_uri;
              }
              location /.well-known/acme-challenge/ {
                  allow all;
                  root /usr/share/nginx/html;
              }
          }"
      fi
    }
    
    #######################################
    # Backs up any existing NGINX configuration file
    # Globals:
    #   NGINX_CONF_FILE
    # Arguments:
    #  None
    #######################################
    backup_existing_config() {
        if [[ -f ${NGINX_CONF_FILE} ]]; then
            cp "${NGINX_CONF_FILE}" "${NGINX_CONF_FILE}.bak"
            echo "Backup created at \"${NGINX_CONF_FILE}.bak\""
      fi
    }
    
    #######################################
    # Renders the final nginx.conf from the assembled configuration fragments
    # Globals:
    #   BACKEND_PORT
    #   PROJECT_ROOT_DIR
    #   acme_challenge_server_block
    #   backend_scheme
    #   certs
    #   default_port_directive
    #   resolver_settings
    #   security_headers
    #   server_name
    #   ssl_listen_directive
    #   ssl_mode_block
    # Arguments:
    #  None
    #######################################
    write_nginx_config() {
        cat <<- EOF > "${NGINX_CONF_FILE}"
    worker_processes auto;
    events {
        worker_connections 1024;
    }
    http {
        include /etc/nginx/mime.types;
        default_type application/octet-stream;
    
        server {
            ${default_port_directive}
            ${ssl_listen_directive}
            server_name ${server_name};
            ${ssl_mode_block}
            ${resolver_settings}
            ${certs}
    
            location / {
                root /usr/share/nginx/html;
                index index.html index.htm;
                try_files \$uri \$uri/ /index.html;
                ${security_headers}
            }
    
            location /qr/ {
                proxy_pass ${backend_scheme}://backend:${BACKEND_PORT};
                proxy_set_header Host \$host;
                proxy_set_header X-Real-IP \$remote_addr;
                proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
            }
        }
        ${acme_challenge_server_block}
    }
    EOF
            cat "${NGINX_CONF_FILE}"
            echo "NGINX configuration written to ${NGINX_CONF_FILE}"
    }
    

    configure_certbot_docker()

    configure_certbot_docker() creates a Dockerfile in the certbot directory (defined in the .env) and defines a custom Docker image for Certbot based on the Python Alpine Linux image. The Dockerfile sets up Certbot by downloading sources from my custom certbot fork hosted on GitHub, which contains changes that allow the automation of overwriting certificate directories if the user wishes. This is critical to the project's functionality, as it deals with the specific edge case of the user wanting to overwrite the certificate directory with a new certificate without interaction. This is the only change from the certbot mainline.

    Once this is complete, necessary files are copied into the image, and runtime dependencies are installed. Certbot is then installed from the forked source code, build dependencies are handled, and clean up occurs afterward to create a lean and functional Certbot Docker image. It also includes workarounds for potential issues in fetching Rust crates needed for the cryptography library.

    #!/bin/bash
    
    configure_certbot_docker() {
      cat << EOF > "$CERTBOT_DIR/Dockerfile"
    FROM python:3.10-alpine3.16 as certbot
    
    ENTRYPOINT [ "certbot" ]
    EXPOSE 80 443
    VOLUME /etc/letsencrypt /var/lib/letsencrypt
    WORKDIR /opt/certbot
    
    # Retrieve certbot code
    RUN mkdir -p src \
     && wget -O certbot-master.zip https://github.com/error-try-again/certbot/archive/refs/heads/master.zip \
     && unzip certbot-master.zip \
     && cp certbot-master/CHANGELOG.md certbot-master/README.rst src/ \
     && cp -r certbot-master/tools tools \
     && cp -r certbot-master/acme src/acme \
     && cp -r certbot-master/certbot src/certbot \
     && rm -rf certbot-master.tar.gz certbot-master
    
    # Install certbot runtime dependencies
    RUN apk add --no-cache --virtual .certbot-deps \
            libffi \
            libssl1.1 \
            openssl \
            ca-certificates \
            binutils
    
    # We set this environment variable and install git while building to try and
    # increase the stability of fetching the rust crates needed to build the
    # cryptography library
    ARG CARGO_NET_GIT_FETCH_WITH_CLI=true
    # Install certbot from sources
    RUN apk add --no-cache --virtual .build-deps \
            gcc \
            linux-headers \
            openssl-dev \
            musl-dev \
            libffi-dev \
            python3-dev \
            cargo \
            git \
            pkgconfig \
        && python tools/pip_install.py --no-cache-dir \
                --editable src/acme \
                --editable src/certbot \
        && apk del .build-deps \
        && rm -rf ${HOME}/.cargo
    EOF
    }
    

    generate_self_signed_certificates()

    generate_self_signed_certificates() is responsible for creating self-signed SSL certificates for the specified domain name. It ensures the necessary directories for storing the certificates are present, checks if the certificates already exist or if regeneration is needed, and then generates a new self-signed certificate along with its private key using OpenSSL. Additionally, it generates Diffie-Hellman parameters for enhanced security during SSL/TLS negotiations. The function provides feedback on the status of the certificate generation process, including paths where certificates and parameters are stored.

    #!/bin/bash
    
    generate_self_signed_certificates() {
      local certs_dir="${dirs[CERTS_DIR]}"
      local certs_dh_dir="${dirs[CERTS_DH_DIR]}"
    
      echo "Generating self-signed certificates for ${DOMAIN_NAME}..."
    
      local certs_path=${certs_dir}/live/${DOMAIN_NAME}
    
      # Ensure the necessary directories exist
      create_directory "${certs_path}"
      create_directory "${certs_dh_dir}"
    
      local dh_params_path="${certs_dh_dir}/dhparam-2048.pem"
    
      # Check and generate new self-signed certificates if needed
      if [[ ! -f "${certs_path}/fullchain.pem" ]] || prompt_for_regeneration "${certs_path}"; then
        # Create self-signed certificate and private key
        openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
          -keyout "${certs_path}/privkey.pem" \
          -out "${certs_path}/fullchain.pem" \
          -subj "/CN=${DOMAIN_NAME}"
    
        echo "Self-signed certificates for ${DOMAIN_NAME} generated at ${certs_path}."
        openssl dhparam -out "${dh_params_path}" 2048
        echo "DH parameters generated at ${dh_params_path}."
      else
        echo "Certificates for ${DOMAIN_NAME} already exist at ${certs_path}."
      fi
    }
    

    configure_docker_compose()

    Into the thick of it. configure_docker_compose() dynamically configures a Docker Compose setup based on SSL certificate requirements, generating service definitions, network configurations, and volume mappings. It adapts to different SSL setups - Let's Encrypt, self-signed certificates, or no SSL - adjusting ports, shared volumes, and service dependencies accordingly. Heredocs are used again to create a Docker Compose file with service definitions for the backend, frontend, and Certbot (for SSL), along with network and volume configurations. There are also helper functions that generate the specific Docker Compose sections for each service, plus the overall network and volume setup, ensuring compatibility with the selected SSL configuration. The final Docker Compose configuration is written to the project's root directory.

    I'll take a moment to highlight the generate_certbot_command() function, which generates the command used to provision the SSL certificates. The constant/uppercase flags here are defined directly in the .env, while the lowercase values, such as the domain name, are set by the user in user_prompt().

    #!/bin/bash
    
    configure_docker_compose() {
      # Local variables for service definitions and volume mappings
      local certbot_service_definition=""
      local http01_ports=""
      local frontend_certbot_shared_volume=""
      local certs_volume=""
    
      # Configure for Let's Encrypt if enabled
      if [[ $USE_LETS_ENCRYPT == "yes" ]]; then
        echo "Configuring Docker Compose for Let's Encrypt..."
    
        # Ports for HTTP-01 challenge
        http01_ports="- \"${NGINX_SSL_PORT}:${NGINX_SSL_PORT}\""
        http01_ports+=$'\n      - "80:80"'
    
        # Shared volumes for Let's Encrypt and SSL certificates
        frontend_certbot_shared_volume="- nginx-shared-volume:${internal_dirs[INTERNAL_WEBROOT_DIR]}"
        frontend_certbot_shared_volume+=$'\n      - '${certbot_volume_mappings[LETS_ENCRYPT_VOLUME_MAPPING]}
        frontend_certbot_shared_volume+=$'\n      - '${certbot_volume_mappings[LETS_ENCRYPT_LOGS_VOLUME_MAPPING]}
        frontend_certbot_shared_volume+=$'\n      - '${certbot_volume_mappings[CERTS_DH_VOLUME_MAPPING]}
    
        certs_volume="    volumes:"
        certs_volume+=$'\n      - '${dirs[CERTS_DIR]}/live/${DOMAIN_NAME}/privkey.pem:/etc/ssl/certs/privkey.pem:ro
        certs_volume+=$'\n      - '${dirs[CERTS_DIR]}/live/${DOMAIN_NAME}/fullchain.pem:/etc/ssl/certs/fullchain.pem:ro
    
        # Generate Certbot service definition
        certbot_service_definition=$(create_certbot_service "$(generate_certbot_command)" "$frontend_certbot_shared_volume")
    
      elif [[ $USE_SELF_SIGNED_CERTS == "yes" ]]; then
        echo "Configuring Docker Compose for self-signed certificates..."
    
        http01_ports="- \"${NGINX_SSL_PORT}:${NGINX_SSL_PORT}\""
        http01_ports+=$'\n      - "80:80"'
    
        frontend_certbot_shared_volume+=$'\n      - '${dirs[CERTS_DIR]}/live/${DOMAIN_NAME}/privkey.pem:/etc/letsencrypt/live/${DOMAIN_NAME}/privkey.pem:ro
        frontend_certbot_shared_volume+=$'\n      - '${dirs[CERTS_DIR]}/live/${DOMAIN_NAME}/fullchain.pem:/etc/letsencrypt/live/${DOMAIN_NAME}/fullchain.pem:ro
        frontend_certbot_shared_volume+=$'\n      - '${dirs[CERTS_DH_DIR]}:${internal_dirs[INTERNAL_CERTS_DH_DIR]}:ro
    
        certs_volume="    volumes:"
        certs_volume+=$'\n      - '${dirs[CERTS_DIR]}/live/${DOMAIN_NAME}/privkey.pem:/etc/ssl/certs/privkey.pem:ro
        certs_volume+=$'\n      - '${dirs[CERTS_DIR]}/live/${DOMAIN_NAME}/fullchain.pem:/etc/ssl/certs/fullchain.pem:ro
    
      else
        echo "Configuring Docker Compose without SSL certificates..."
      fi
    
      local backend_section
      local frontend_section
      local network_section
      local volume_section
    
      # Assembling Docker Compose sections
      backend_section=$(create_backend_service "$certs_volume")
      frontend_section=$(create_frontend_service "$http01_ports" "$frontend_certbot_shared_volume")
      network_section=$(create_network_definition)
      volume_section=$(create_volume_definition)
    
      # Write Docker Compose file
      {
        echo "version: '3.8'"
        echo "services:"
        echo "$backend_section"
        echo "$frontend_section"
        echo "$certbot_service_definition"
        echo "$network_section"
        echo "$volume_section"
      } > "${PROJECT_ROOT_DIR}/docker-compose.yml"
    
      # Display the generated Docker Compose file
      cat "${PROJECT_ROOT_DIR}/docker-compose.yml"
      echo "Docker Compose configuration written to ${PROJECT_ROOT_DIR}/docker-compose.yml"
    }
    
    generate_certbot_command() {
      echo "certonly \
    --webroot \
    --webroot-path=${internal_dirs[INTERNAL_WEBROOT_DIR]} \
    ${email_flag} \
    ${TOS_FLAG} \
    ${NO_EFF_EMAIL_FLAG} \
    ${NON_INTERACTIVE_FLAG} \
    ${RSA_KEY_SIZE_FLAG} \
    ${force_renew_flag} \
    ${hsts_flag} \
    ${must_staple_flag} \
    ${uir_flag} \
    ${ocsp_stapling_flag} \
    ${strict_permissions_flag} \
    ${production_certs_flag} \
    ${dry_run_flag} \
    ${overwrite_self_signed_certs_flag}" \
        --domains "${DOMAIN_NAME}" \
        --domains "$SUBDOMAIN"."${DOMAIN_NAME}"
    }
    
    create_backend_service() {
      local volume_section=$1
      echo "  backend:
        build:
          context: .
          dockerfile: ./backend/Dockerfile
        ports:
          - \"${BACKEND_PORT}:${BACKEND_PORT}\"
        networks:
          - qrgen
    $volume_section"
    }
    
    create_frontend_service() {
      local ports_section=$1
      local volume_section=$2
      echo "  frontend:
        build:
          context: .
          dockerfile: ./frontend/Dockerfile
        ports:
          - \"${NGINX_PORT}:${NGINX_PORT}\"
          $ports_section
        networks:
          - qrgen
        volumes:
          - ./frontend:/usr/share/nginx/html:ro
          - ./nginx.conf:/etc/nginx/nginx.conf:ro
          $volume_section
        depends_on:
          - backend"
    }
    
    create_certbot_service() {
      local command=$1
      local volumes=$2
      echo "  certbot:
        build:
          context: .
          dockerfile: ./certbot/Dockerfile
        command: $command
        volumes:
          $volumes
        depends_on:
          - frontend"
    }
    
    create_network_definition() {
      echo "networks:
      qrgen:
        driver: bridge"
    }
    
    create_volume_definition() {
      if [[ $USE_LETS_ENCRYPT == "yes" ]] || [[ $USE_SELF_SIGNED_CERTS == "yes" ]]; then
        echo "volumes:
      nginx-shared-volume:
        driver: local"
      fi
    }
    

    generate_certbot_renewal_job()

    generate_certbot_renewal_job() automates the renewal of SSL certificates using Certbot within the rootless Docker environment we've set up. It first creates a script (certbot_renew.sh) that runs the Certbot renewal with a dry run and, if successful, proceeds with the actual renewal and restarts the necessary Docker services. This script is then made executable and scheduled as a cron job to run daily at midnight, ensuring that the certificates are renewed before they expire. There are obviously more efficient ways to do this, but this is a simple and effective solution. The function checks whether the cron job already exists to avoid duplication. This setup ensures that SSL certificates are regularly renewed and that the related services are restarted to apply the updates, with logs of the renewal process maintained for monitoring and troubleshooting.

    #!/bin/bash
    
    generate_certbot_renewal_job(){
    
      # Create the certbot renew script with a heredoc. The delimiter is
      # unquoted so that PROJECT_LOGS_DIR expands now, at generation time;
      # runtime expansions inside the script are escaped with a backslash.
      cat << EOF > "${PROJECT_ROOT_DIR}/certbot_renew.sh"
    #!/bin/bash
    
    # Exit on any error
    set -e
    
    LOG_FILE="${PROJECT_LOGS_DIR}/certbot_renew.log"
    
    # Function to perform a certbot renewal
    renew_certbot() {
      # Run the certbot service with dry run first
      docker compose run --rm certbot renew --dry-run
    
      # If the dry run succeeds, run certbot renewal without dry run
      echo "Certbot dry run succeeded, attempting renewal..."
      docker compose run --rm certbot renew
    
      # Restart the nginx frontend and backend services
      docker compose restart frontend
      docker compose restart backend
    }
    
    # Start logging
    {
      echo "Running certbot renewal script on $(date)"
      renew_certbot
    } | tee -a "\${LOG_FILE}"
    EOF
    
      # Make the certbot renew script executable
      chmod +x "${PROJECT_ROOT_DIR}/certbot_renew.sh"
    
      # Setup Cron Job
      local cron_script_path="${PROJECT_ROOT_DIR}/certbot_renew.sh"
      local cron_log_path="${PROJECT_LOGS_DIR}/certbot_cron.log"
    
      # Cron job to run certbot renewal every day at midnight
      local cron_job="0 0 * * * ${cron_script_path} >> ${cron_log_path} 2>&1"
    
      # Check if the cron job already exists
      if ! crontab -l | grep -Fq "$cron_job"; then
        # Add the cron job if it doesn't exist
        (
          crontab -l 2> /dev/null
          echo "$cron_job"
        ) | crontab -
        echo "Cron job added."
      else
        echo "Cron job already exists. No action taken."
      fi
    }
    

    I've excluded operations.sh & user_prompt.sh for now, as they are overly complex and desperately need to be modularized before I document and summarize them.

    Express Server

    How I built the QRGen backend server.

    React/Vite App

    How I built the QRGen frontend server.

    © 2024 Yane ✖ Karov