My thoughts on Slackware, life and everything


Slackware Cloud Server Series, Episode 10: Workflow Management

For my Slackware Cloud Server series of articles, we are going to have a look at a system for workflow management, personal note taking and all kinds of other collaborative features.

When the COVID pandemic hit the world, my wife and I began a routine of regular walks in the open fields and forests near us, simply to escape the confines of the house and have a mental break from all the tension. We really enjoyed the quiet of those days; we rarely encountered other wanderers – but that’s an aside.
My wife started documenting our walks, our bicycle trips and eventually also our holidays in OneNote. It was a convenient note-taking tool which combines structured text with images and hyperlinks. Even though OneNote is a MS Windows program, it saves its data in the Cloud, allowing me to access the collection of our walking notes in a Slackware web browser.
Some of you may be using Miro at work, for your Agile workflows, for brainstorming and in general, as an online replacement for physical whiteboards. Online collaborative tools like Miro became immensely popular because of the COVID pandemic when coming to the office every day was no longer feasible.
The selling point of the above tools is that they are cloud-centered. Your data is stored with the tool provider and you can work – individually or in groups – on your projects online.

As always, there’s a catch. Miro is commercial and comes with a paid subscription. OneNote is free, Windows-only but with browser-based access to your data, yet has a 5 GB storage cap.
For individual usage, several free alternatives have risen in popularity: note-taking apps that provide an alternative to OneNote, such as EverNote and Joplin, but also evolutions of the note-taking concept like Notion, Obsidian, LogSeq and more. While EverNote and Notion are not Open Source, they have a free plan for online storage. LogSeq and Obsidian are open source, single-user tools for off-line usage, but you can store their local database in a cloud storage service like Dropbox, OneDrive, Google One or Nextcloud if you want. Joplin is open source and multi-user but not collaborative, and stores its data on a backend server – either its own Joplin Cloud or else a WebDAV server like Nextcloud. I plan on writing an article about Joplin, too. It’s on my TODO.

This list is far from complete – there are many more alternatives and they all will try to cater to your specific needs.

I am not going to discuss the pros and cons of these tools; I have not tested them all. I like to make calculated choices based on available information and then stick to my choices. You can only become good at something if you really invest time in it. It’s also why I am not a distro-hopper and have stuck with Slackware from day one.

My choice of personal workflow management tool is AFFiNE. The main reasons for finally picking it over the alternatives are that the AFFiNE server backend is open source software, it has desktop and mobile apps and a browser client, and it allows you to work offline or sync your project data to a cloud server. Its collaborative features allow a team to work jointly and simultaneously on a project.
Most importantly, it offers a self-hosting option using Docker Compose and the self-host version integrates with OpenID Connect (OIDC) Identity Providers (IDP) like Keycloak. Aka Single Sign-On (SSO).

A caveat upfront: this software is in active development. The developers are friendly and responsive to questions from their community. But some of the features that you would like to see in the self-hosted version are not yet built, or not trivial to implement, or take ages to implement, or are simply poorly documented. That is exactly why I decided to write this article: to provide complete documentation to the online community about how to set up and configure your own AFFiNE server with SSO provided by Keycloak (or any other IDP which implements OIDC).
This article will evolve in parallel with AFFiNE’s development, and as features get added or bugs resolved, I will update the text here as well.
Actually, that is exactly what happened because while writing this text, AFFiNE devs pushed a Christmas upgrade and I had to adapt my descriptions in some places. Also, some of the screenshots I made will probably look slightly different on new releases.

Check out the list below which shows past, present and future episodes in my Slackware Cloud Server series. If the article has already been written you’ll be able to click on the subject.
The first episode also contains an introduction with some more detail about what you can expect.


Introduction

Let’s dive a bit deeper into AFFiNE.

AFFiNE is a privacy-first, open source “Write, Plan, Draw, All At Once” all-in-one workspace which is currently in a ‘Beta‘ stage (version 0.19.1), with a new release every 6 weeks. With each update, the changes and new features are impressive.

Everything in AFFiNE is made up of blocks. Blocks can be organized in Docs and these can be represented in different ways. Here is some terminology to help you get acquainted with the tool quickly:

  • Blocks: These are the atomic elements that comprise your Docs. They can contain text, images, embedded web pages etc.

  • Docs: Your main canvas. It has ‘doc info’ associated for indexing and referencing. Your Doc  has two views: Edgeless and Page Mode.

  • Page Mode: Presents a set of blocks in the form of a linear document which always fits on your page.

  • Edgeless mode: Presents all content of your Doc on an edgeless (infinite) canvas.

  • Doc info: Also called ‘Info’, refers to all the attributes and fields contained within a Doc.

  • Blocks with Databases: A structured container to index, group, update or oversee blocks in different views.

  • Collections: A smart folder where you can manually add pages or automatically add pages through rules.

  • Workspaces: Your virtual space to capture, create and plan as just one person or together as a team.

  • Members: Collaborators of a Workspace. Members can have different roles.

  • Settings: Your personal appearance and usage preferences allow you to tailor your Workspace to your needs.

When working in AFFiNE you get an edgeless (aka infinite) canvas where you engage in a variety of activities like documenting, creating mood-boards, brainstorming, project planning, creative drawing and mind-mapping, all using an intuitive block editor. You then connect all of your ideas via the relations you apply to the various blocks, simply by dragging arrows across your canvas.
You can toggle between the two main viewports: either you work in the block editor on an infinite canvas, or you fit the structured textual content into your browser page.

Functionally, AFFiNE offers a blend of how Miro and Notion work. The edgeless whiteboard canvas with many templates to choose from, is definitely inspired by Miro. The block editor is something which Notion and other alternatives are well-known for.
Concepts used in AFFiNE to create structure in your workflow are Frame, Group and Database.

Your data will be stored on your local disk or in your browser’s cache by default, but you have the option to login to a cloud server and sync your data to the server.
The company that develops AFFiNE (ToEverything aka Theory Of Everything) offers a free plan with 10 GB of online project storage and a maximum of three collaborators to invite to your projects. But we are more interested in the self-hosted version where we are in control of that data. There, you decide how much you and your friends can store and how big your team can become when you engage in collaborative work. The self-hosted version of AFFiNE Cloud can eliminate all those limitations of the free plan.

If you decide to switch from your current knowledge management solution, then it’s good to know that AFFiNE can import content from other tools: specifically it supports Notion export files, but it will also import HTML and Markdown files.

Please note that ToEverything, the company behind AFFiNE, funds the software’s development from donations it receives and from the Pro subscriptions to their own AFFiNE Cloud offering. If you set up a self-hosted AFFiNE and really like it, and also use it as a collaboration platform with a group of friends/colleagues, you might want to consider setting up a donation to support sustained development.


Preamble

This section describes the technical details of our setup, as well as the things which you should have prepared before trying to implement the instructions in this article.

Web Hosts

For the sake of this instruction, I will use the hostname “https://affine.darkstar.lan” as your landing page for AFFiNE.

Furthermore, “https://sso.darkstar.lan/auth” is the Keycloak base URL (see Episode 2 to read how we setup Keycloak as our identity provider).

In Keycloak, we have configured a realm called ‘foundation‘ which contains our user accounts and application client configurations.

Setting up your domain (which will hopefully be something other than “darkstar.lan”…) with new hostnames and then setting up web servers for the hostnames in that domain is an exercise left to the reader. Before continuing, please ensure that your equivalent for the following host has a web server running. It doesn’t have to serve any content yet but we will add some blocks of configuration to the VirtualHost definition during the steps outlined in the remainder of this article:

  • affine.darkstar.lan

Using a  Let’s Encrypt SSL certificate to provide encrypted connections (HTTPS) to your webserver is documented in an earlier blog article.

Note that I am talking about webserver “hosts” but in fact, all of these are just virtual webservers running on the same machine, at the same IP address, served by the same Apache httpd program, but with different DNS entries. There is no need at all for multiple computers when setting up your Slackware Cloud server.

Docker network

  • We assign a Docker network segment to our AFFiNE containers: 172.22.0.0/16
  • We assign a specific IPv4 address to the AFFiNE server itself (so that it is able to send emails): 172.22.0.5

File Locations

  • The Docker configuration goes into: /usr/local/docker-affine/
  • The data generated by the AFFiNE server goes into: /opt/dockerfiles/affine/

Secrets

The Docker stack we create for our AFFiNE server uses several secrets (credentials).

In this article, we will use example values for these secrets – be sure to generate and use your own strings here!

# Credentials for the Postgres database account:
AFFINE_DB_USERNAME=affine
AFFINE_DB_PASSWORD=0Igiu3PyijI4xbyJ87kTZuPQi4P9z4pd

# Credentials for the account that authenticates to the SMTP server for sending emails:
AFFINE_MAILER_USER=affinemailer
AFFINE_MAILER_PASSWORD=E9X46W3vz8h1nBVqBHgKCISxRufRsHlAXSEbcXER/58=

# Credentials for the OIDC client are shared between Keycloak and AFFiNE:
AFFINE_OIDC_CLIENT_ID=affine
AFFINE_OIDC_CLIENT_SECRET=TZ5PBCw66IhDtZJeBD4ctsS2Hrb253uY

Note that AFFiNE’s internal implementation chokes on a Postgres password containing special characters (at least up to version 0.19.1).
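
One simple way (among many) to generate random secrets which contain only letters and digits is to filter /dev/urandom on the Slackware host:

# tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32 ; echo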


Apache reverse proxy configuration

We are going to run AFFiNE in a Docker container stack. The configuration will be such that the server will only listen for clients at one TCP port at the localhost address (127.0.0.1).

To make our AFFiNE storage and database backend available to the users at the address https://affine.darkstar.lan/ we are using a reverse-proxy setup. The flow is as follows: the user connects to the reverse proxy using HTTPS (encrypted connection) and the reverse proxy connects to the AFFiNE backend on the client’s behalf. Traffic between the reverse proxy (Apache httpd in our case) and the AFFiNE server’s Docker container is un-encrypted. That is not a problem, because we give the AFFiNE server its own private network segment inside Docker.
A reverse proxy is capable of handling many simultaneous connections and can be configured to offer SSL-encrypted connections to the remote users even when the backend can only communicate over clear-text un-encrypted connections.

Add the following reverse proxy lines to your VirtualHost definition of the “https://affine.darkstar.lan” web site configuration and restart httpd:

# ---
# Required modules:
# mod_proxy, mod_ssl, proxy_wstunnel, http2, headers, remoteip

# No caching:
Header set Cache-Control "max-age=1, no-control"
ProxyRequests Off
ProxyVia on
ProxyAddHeaders On
ProxyPreserveHost On

<Proxy *>
  Options FollowSymLinks MultiViews
  Require all granted
</Proxy>

# Letsencrypt places a file in this folder when updating/verifying certs.
# This line will tell apache not to use the proxy for this folder:
ProxyPass "/.well-known/" "!"

<IfModule mod_ssl.c>
  SSLProxyEngine on
  RequestHeader set X-Forwarded-Proto "https"
  RequestHeader set X-Forwarded-Port "443"
</IfModule>

# AFFiNE is hosted on https://affine.darkstar.lan/
<Location />
  ProxyPass "http://127.0.0.1:3010/"
  ProxyPassReverse "http://127.0.0.1:3010/"
</Location>

# WebSocket proxy:
RewriteEngine on
RewriteCond %{HTTP:Connection} Upgrade [NC]
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteRule ^/?(.*) "ws://127.0.0.1:3010/$1" [P,L]
# ---

If you want to make your non-encrypted web address http://affine.darkstar.lan redirect automatically to the encrypted ‘https://‘ variant, be sure to add this block to its VirtualHost definition to ensure that Letsencrypt can still access your server’s challenge file via an un-encrypted connection:

<If "%{REQUEST_URI} !~ m#/\.well-known/acme-challenge/#">
    Redirect permanent / https://affine.darkstar.lan/
</If>

The hostname and TCP port number shown above are defined elsewhere in this article; keep them consistent if you decide to use a different hostname or port number.


AFFiNE Server preparations

We will give the AFFiNE server its own internal Docker network. That way, the inter-container communication stays behind its gateway, which prevents snooping on the network traffic.

Docker network

Create the network using the following command:

docker network create \
  --driver=bridge \
  --subnet=172.22.0.0/16 --ip-range=172.22.0.0/25 --gateway=172.22.0.1 \
  affine.lan

Docker’s gateway address in any network segment will always end in “.1”.
Select a network range for this subnet which is not yet in use. You can find out which subnets are already defined for Docker by running this command:

# ip route |grep -E '(docker|br-)'
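
On a host that already runs other Docker stacks, the output of that command might look something like this (the bridge names and subnets will differ on your machine); pick a range that does not collide with any of the subnets listed:

172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
172.21.0.0/16 dev br-1a2b3c4d5e6f proto kernel scope link src 172.21.0.1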

The ‘affine.lan‘ network you created will be represented in the AFFiNE docker-compose.yml file with the following code block:

networks:
  affine.lan:
    external: true
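
If you want to double-check that the network was created with the expected subnet and gateway, you can inspect it (a quick sanity check, not strictly required):

# docker network inspect affine.lan | grep -E '"(Subnet|Gateway)"'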

Create directories

Create the directory for the docker-compose.yml and other startup files:

# mkdir -p /usr/local/docker-affine

Create the directories to store data:

# mkdir -p /opt/dockerfiles/affine/{config,postgres,storage}

Download the docker-compose and a sample .env file:

# cd /usr/local/docker-affine
# wget -O docker-compose.yml https://raw.githubusercontent.com/toeverything/AFFiNE/refs/heads/canary/.github/deployment/self-host/compose.yaml
# wget https://raw.githubusercontent.com/toeverything/AFFiNE/refs/heads/canary/.github/deployment/self-host/.env.example
# cp .env.example .env

It looks like with the release of 0.19 the developers are also posting versions of the docker-compose.yml and default.env.example files in the Assets section of the Releases page.

Considerations for the .env file

Docker Compose is able to read environment variables from an external file. By default, this file is called ‘.env‘ and must be located in the same directory as the ‘docker-compose.yml‘ file. In fact ‘.env‘ will be searched in the current working directory, but I always execute ‘docker-compose‘ in the directory containing its YAML file anyway and to make it really fool-proof the YAML file will define the ‘.env‘ file location explicitly.

In this environment file we are going to specify things like accounts, passwords, TCP ports and the like, so that they do not have to be referenced in the ‘docker-compose.yml‘ file or even in the process environment space. You can shield ‘.env‘ from prying eyes, thus making your setup more secure.
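
For example, once the file is filled in, restricting its permissions so that only root can read it keeps the credentials away from other local users:

# chown root:root /usr/local/docker-affine/.env
# chmod 600 /usr/local/docker-affine/.env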

This is eventually the content of the ‘/usr/local/docker-affine/.env‘ file, excluding the OIDC configuration:

# ---
# Select a revision to deploy, available values: stable, beta, canary
AFFINE_REVISION=stable

# Our name:
AFFINE_SERVER_NAME=Alien's AFFiNE

# Set the port for the server container it will expose the server on
PORT=3010

# Set the host for the server for outgoing links
AFFINE_SERVER_HTTPS=true
AFFINE_SERVER_HOST=affine.darkstar.lan
AFFINE_SERVER_EXTERNAL_URL=https://affine.darkstar.lan

# Position of the database data to persist
DB_DATA_LOCATION=/opt/dockerfiles/affine/postgres
# Position of the upload data (images, files, etc.) to persist
UPLOAD_LOCATION=/opt/dockerfiles/affine/storage
# Position of the configuration files to persist
CONFIG_LOCATION=/opt/dockerfiles/affine/config

# Database credentials
AFFINE_DB_USERNAME=affine
AFFINE_DB_PASSWORD=0Igiu3PyijI4xbyJ87kTZuPQi4P9z4pd
AFFINE_DB_DATABASE=affinedb

# Mailer service for sending collaboration invites:
AFFINE_MAILER_HOST=affine.darkstar.lan
AFFINE_MAILER_PORT=587
AFFINE_MAILER_USER=affinemailer
AFFINE_MAILER_PASSWORD=E9X46W3vz8h1nBVqBHgKCISxRufRsHlAXSEbcXER/58=
AFFINE_MAILER_SENDER=affinemailer@darkstar.lan
AFFINE_MAILER_SECURE=false

# We hard-code the IP address for the server so that we can make it send emails:
AFFINE_IPV4_ADDRESS=172.22.0.5

# Here you will add OIDC credentials later
# ---

Note that I kept having issues with some environment variables not getting filled with values inside the containers. I found out that a variable in the ‘.env‘ file with a dash ‘-‘ as part of its name would not be recognized inside a container, which is why I now use only capital letters and underscores.

The Docker Compose configuration

The ‘docker-compose.yml‘ file we downloaded to /usr/local/docker-affine/ in one of the previous chapters will create multiple containers: one for AFFiNE itself, one for the Postgres database and one for the Redis memory cache. I made a few tweaks to the original, so eventually it looks like this (excluding the OIDC configuration):

# ---
name: affine
services:
  affine:
    image: ghcr.io/toeverything/affine-graphql:${AFFINE_REVISION:-stable}
    container_name: affine_server
    ports:
      - '127.0.0.1:${PORT:-3010}:3010'
    depends_on:
      redis:
        condition: service_healthy
      postgres:
        condition: service_healthy
      affine_migration:
        condition: service_completed_successfully
    volumes:
      # custom configurations
      - ${UPLOAD_LOCATION}:/root/.affine/storage
      - ${CONFIG_LOCATION}:/root/.affine/config
      # Here you will add a workaround for an OIDC bug later
    env_file:
      - path: ".env"
    environment:
      - ENABLE_TELEMETRY=false
      - REDIS_SERVER_HOST=redis
      - DATABASE_URL=postgresql://${AFFINE_DB_USERNAME}:${AFFINE_DB_PASSWORD}@postgres:5432/${AFFINE_DB_DATABASE:-affine}
      - MAILER_HOST=${AFFINE_MAILER_HOST}
      - MAILER_PORT=${AFFINE_MAILER_PORT}
      - MAILER_USER=${AFFINE_MAILER_USER}
      - MAILER_PASSWORD=${AFFINE_MAILER_PASSWORD}
      - MAILER_SENDER=${AFFINE_MAILER_SENDER}
      - MAILER_SECURE=${AFFINE_MAILER_SECURE}
      # Here you will add OIDC environment variables later
    networks:
      affine.lan:
        ipv4_address: ${AFFINE_IPV4_ADDRESS}
        aliases:
          - affine.affine.lan
    restart: unless-stopped

  affine_migration:
    image: ghcr.io/toeverything/affine-graphql:${AFFINE_REVISION:-stable}
    container_name: affine_migration_job
    volumes:
      # custom configurations
      - ${UPLOAD_LOCATION}:/root/.affine/storage
      - ${CONFIG_LOCATION}:/root/.affine/config
    command: ['sh', '-c', 'node ./scripts/self-host-predeploy.js']
    env_file:
      - path: ".env"
    environment:
       - REDIS_SERVER_HOST=redis
       - DATABASE_URL=postgresql://${AFFINE_DB_USERNAME}:${AFFINE_DB_PASSWORD}@postgres:5432/${AFFINE_DB_DATABASE:-affine}
    depends_on:
      redis:
        condition: service_healthy
      postgres:
        condition: service_healthy
    networks:
      - affine.lan

  redis:
    image: redis
    container_name: affine_redis
    healthcheck:
      test: ['CMD', 'redis-cli', '--raw', 'incr', 'ping']
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - affine.lan
    restart: unless-stopped

  postgres:
    image: postgres:16
    container_name: affine_postgres
    volumes:
      - ${DB_DATA_LOCATION}:/var/lib/postgresql/data
    env_file:
      - path: ".env"
    environment:
      POSTGRES_USER: ${AFFINE_DB_USERNAME}
      POSTGRES_PASSWORD: ${AFFINE_DB_PASSWORD}
      POSTGRES_DB: ${AFFINE_DB_DATABASE:-affine}
      POSTGRES_INITDB_ARGS: '--data-checksums'
    healthcheck:
      test:
        ['CMD', 'pg_isready', '-U', "${AFFINE_DB_USERNAME}", '-d', "${AFFINE_DB_DATABASE:-affine}"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - affine.lan
    restart: unless-stopped

networks:
  affine.lan:
    external: true
# ---

Initializing the Docker stack

The docker-compose.yml file in /usr/local/docker-affine defines the container stack, the .env file in that same directory contains credentials and other variables. If you hadn’t created the Docker network yet, do it now! See the “Docker network” section higher up.
Start the Docker container stack. There will be three containers eventually and a temporary ‘migration’ container performing the administrative tasks prior to starting the server:

# cd /usr/local/docker-affine
# docker-compose up -d && docker-compose logs -f

And monitor the logs if you think the startup is troublesome. The above command-line will show the detailed log of the startup after the containers have been instantiated and you can quit that log-tail using ‘Ctrl-C‘ without fear of killing your containers.
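
A compact status overview of the whole stack is also available; once the migration container has finished its work it is expected to show up with an exit status of 0:

# cd /usr/local/docker-affine
# docker-compose ps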
If you want to check the logs for the AFFiNE server using the name we gave its container (affine_server):

# docker logs affine_server

Or check the logs of the full Docker Compose stack using the ‘affine’ service name (the first line in the docker-compose.yml file):

# docker-compose logs affine

When this is the first time you start the Docker stack, the Postgres database will be initialized. This will take a few seconds extra. When the server is up and running, use a webbrowser to access  your AFFiNE workspace at https://affine.darkstar.lan/. The next section “Setting up the server admin” has instructions on the steps you need to take to setup an admin account.

The server backend shows version information when you point curl (or a web browser) at the URL https://affine.darkstar.lan/info – it will return a JSON string which looks like this:

{
  "compatibility": "0.19.1",
  "message": "AFFiNE 0.19.1 Server",
  "type": "selfhosted",
  "flavor": "allinone"
}
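
For reference, this is the equivalent one-liner from a shell on any machine that can reach the server:

$ curl -s https://affine.darkstar.lan/info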

Setting up the server admin

The first time you connect to the self-hosted AFFiNE server, you will be asked to create an admin account. For that reason, you may want to connect directly to the container via http://localhost:3010 while logged in to your Docker host.
If you are not the paranoid type, you can also connect to the external URL https://affine.darkstar.lan/ of course 🙂 Just make sure that you do that before some interested 3rd-party comes visiting.

In a few steps, you will be taken through a setup procedure where you enter your name, email address and a password.

And voila! You are the admin of your new AFFiNE server.
When you remove the “/admin/...” path from the resulting URL in your browser, that will take you to the default AFFiNE Workspace for your account, which will always be populated with a demo page:

You immediately see the red banner, informing you that your work will be kept in your browser cache and may be lost when the browser crashes, and that you should really enable a Cloud sync. Of course, the word “Cloud” in this context means nothing more than your own self-hosted server.

Note that the URL for administering your server is https://affine.darkstar.lan/admin/!


Administering the server

Creating and managing users

The first decision you need to make is whether you are going to open up your AFFiNE server to anyone interested. By default, new users can register themselves via email. AFFiNE will create an account for them and a “magic link” with a login token will be sent to them. If you go to https://affine.darkstar.lan/admin/settings you will see that there is a slider which allows you to disable the self-registration feature:

If you decide to disable the self-registration, you’ll have to create accounts for your users manually in https://affine.darkstar.lan/admin/accounts via the “+ Add User” button:

 

One big caveat for this way of creating user accounts is that you need to have configured the mail transport. AFFiNE needs to send emails to your users.
That requires a bit of configuration in the Docker stack (but that has already been taken care of in the above docker-compose.yml and .env files) and also on the Docker host. You will find the detailed instructions in the section further down named “Configuring the mail transport (Docker container & host)“.

If you implement Single Sign-On (SSO) via Keycloak then AFFiNE only needs to send emails if a user wants to invite another user in order to collaborate on a workspace.

Customizing the users’ abilities

The self-hosted AFFiNE server adds every user to the “Free Plan” just like when you would create an account on the company’s server https://app.affine.pro/ . However, the reason for self-hosting is to take control over our data as well as our own capabilities. The “Free Plan” comes with a maximum of 10 GB server storage, a 10 MB filesize upload limit, 7 days of file version history and a maximum of 3 members in your workspaces. The developers are apparently still considering what kind of capabilities are relevant for users of a self-hosted instance.

We are not going to wait. I’ll show how you stretch those limits so that they are no longer relevant.
A bit of familiarity with Postgres will help with that, since it involves directly modifying AFFiNE database records.

First, open a Postgres prompt to our affinedb database on the affine_postgres container:

# docker exec -it affine_postgres psql -U affine affinedb

The “affinedb=#” in the rest of this section depicts the Postgres command prompt. This is where you are going to type the SQL commands that show information from the database and will change some of the data in there. We will be examining the ‘users’, ‘features’ and ‘user_features’ tables and make our changes in the ‘user_features’ table when we assign a different Plan to your users’ accounts.
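
If you prefer not to work in an interactive prompt, the same queries can also be passed to psql non-interactively through its ‘-c‘ option, for example:

# docker exec -it affine_postgres psql -U affine affinedb -c "select id, feature from features;"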

Execute some actual SQL
  • Let’s see who the registered users are on our server. I limit the output of the command to just my own user who logged in via Single Sign-On:

affinedb=# select * from users;

                  id                  |      name       |           email           |                                             password                                              |         created_at         |       email_verified       |                                               avatar_url                                               | registered 
--------------------------------------+-----------------+---------------------------+---------------------------------------------------------------------------------------------------+----------------------------+----------------------------+--------------------------------------------------------------------------------------------------------+------------
 01ba65de-6d3b-4eb2-9cd4-be98264e4370 | Eric Hameleers  | alien@slackware.com       |                                                                                                   | 2024-12-22 14:01:37.834+00 | 2024-12-22 14:01:37.829+00 | https://affine.darkstar.lan/api/avatars/01ba65de-6d3b-4eb2-9cd4-be98264e4370-avatar-1734879819544 | t
  • The ‘id‘ column in this output shows my user_id, which is basically a UUID string. We will be using that user_id in the next SQL commands. To get a list of the Plans (features) that are available for AFFiNE users we can do a SQL query as follows:

affinedb=# select id, feature, configs from features;

  • But I am going to leave that as an exercise for the reader, because I will show a more tailored version of that command soon. First, let’s look at what Plan (the feature_id in the user_features table) my user was assigned to:

affinedb=# select * from user_features where user_id = '01ba65de-6d3b-4eb2-9cd4-be98264e4370';

 id |               user_id                | feature_id |  reason  |         created_at         | expired_at | activated 
----+--------------------------------------+------------+----------+----------------------------+------------+-----------
 11 | 01ba65de-6d3b-4eb2-9cd4-be98264e4370 |         13 | sign up  | 2024-12-22 14:44:20.203+00 |            | t
  • Apparently I am on a Plan with a feature_id of “13”. Let’s get more details about that Plan, and let’s already add “16” to that query (you will soon see why I want that):

affinedb=# select id, feature, configs from features where id = 13 or id = 16;

 id |   feature    |                                                                             configs                                                                             
----+--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------
 13 | free_plan_v1 | {"name":"Free","blobLimit":10485760,"businessBlobLimit":104857600,"storageQuota":10737418240,"historyPeriod":604800000,"memberLimit":3,"copilotActionLimit":10}
 16 | lifetime_pro_plan_v1 | {"name":"Lifetime Pro","blobLimit":104857600,"storageQuota":1099511627776,"historyPeriod":2592000000,"memberLimit":10,"copilotActionLimit":10}
  • By looking in more detail at the feature definition for id “13” aka the “Free” plan (the ‘configs‘ field) we see that there is a 10 MB upload limit (the blobLimit in bytes); a 10 GB storage limit (the storageQuota in bytes), a 7-day historical version retention (the historyPeriod in milliseconds), and a limit of 3 members who can collaborate on your workspace (the memberLimit). You also notice that the “Lifetime Pro” plan with an id of “16” has considerably higher limits (100 MB file upload limit, 1 TB of storage, 30 days history retention, 10 members to collaborate with).
    These queries show that by default when you sign up with the self-hosted version, you get assigned to the “Free Plan” which corresponds to a feature_id of “13”. We are going to change that for our user and set it to “16”, which is the “Lifetime Pro” plan:

affinedb=# update user_features set feature_id = 16, reason = 'selfhost' where user_id = '01ba65de-6d3b-4eb2-9cd4-be98264e4370' and feature_id = 13;

  • Now, when you look at the user_features table, my account shows the “selfhost” string as the reason for change:

affinedb=# select * from user_features where user_id = '01ba65de-6d3b-4eb2-9cd4-be98264e4370';

 id |               user_id                | feature_id |  reason  |         created_at         | expired_at | activated 
----+--------------------------------------+------------+----------+----------------------------+------------+-----------
 11 | 01ba65de-6d3b-4eb2-9cd4-be98264e4370 |         16 | selfhost | 2024-12-28 13:23:17.826+00 |            | t
  • If we look at our account settings in AFFiNE now, we see that the Plan has changed:

 

You need to repeat this for every user who registers at your AFFiNE instance.
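
If you trust everyone who registers on your AFFiNE instance, you could instead promote all accounts that are still on the Free Plan in one go. This is just my own shortcut based on the queries above, not an officially documented procedure, so use it at your own risk:

# docker exec -it affine_postgres psql -U affine affinedb \
    -c "update user_features set feature_id = 16, reason = 'selfhost' where feature_id = 13;"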


User passwords

There’s a difference between users who are logging in via OIDC, and the rest of them. When your users login via OIDC using an Identity Provider (IDP) such as Keycloak, the password is not stored in AFFiNE, and the user can always logout and login again.

But when you (the admin) create an account, or you have self-registration enabled (which is the default) and the user submits their email to the server, then AFFiNE will send the user a “magic link” via email every time they want to login to your server. That is a bit cumbersome, but the user can do something about that.

In the “Account settings” dialog which you reach by clicking on the user avatar, there’s a “Password” section which tells you “Set a password to sign in to your account“. When the user has set a password, subsequent login attempts will no longer trigger a “magic link”; a password entry field will be displayed instead.


Connecting to the workspace

The admin user has been created and you can keep using that to create content in your AFFiNE workspaces of course. But you can also create a separate user account; see the previous section on how to allow more users access to your server.
I’ll come to the login later; let’s first have a look at what happens when you connect to your AFFiNE server again, after having logged out your admin user account.

There’s a difference, caused by cookies that are set by the AFFiNE server, in what you see when you connect without being logged in.

  1. You have not yet converted your local data to a Cloud sync workspace.
    If you access https://affine.darkstar.lan/ you will land into the “Demo Workspace“, containing the single Doc “Write, Draw, Plan all at Once“:
    By default, anything you create will stay in the browser cache.
    If you want to start syncing to your AFFiNE server, click the avatar icon in the top left of the window:
    A login dialog opens; it is actually the same one that you will get in “option 2” below. Enter your email address and click “Continue with email” to login. After completing your login, the Cloud-sync of your workspaces commences.
  2. You were already syncing your workspace to your AFFiNE Cloud, then logged off, and now you want to login again.
    You will now be greeted with these options instead of the “Demo Workspace“:
    … and if you click on either the “Sign up / Sign in” or the “Create cloud workspace” you will be taken to the actual login screen:
    Here you type your account’s email address and press “ENTER” or click the “Continue with email“. Depending on whether you have already defined a password for your account, the next screen will either show a password entry field or else a message informing you that a “Magic link” has been sent to your email address. The “Magic link” URL contains a login token allowing you to login without a password. The token expires after 30 minutes. You’ll keep getting “Magic links” until you configure a password for your user account.
    The workspaces that you had already created will be presented and you can select which one you want to continue working on, or else create a new one right away:

You’re all set, enjoy AFFiNE!


Adding Single Sign-On (SSO)

Any Slackware Cloud Server user will have their account already setup in your Keycloak database. The first time they login to your AFFiNE server using SSO, the account will be activated automatically.

We need to define a new Client ID in Keycloak, which we are going to use with AFFiNE. Essentially, Keycloak and AFFiNE need a shared credential.

Add an AFFiNE Client ID in Keycloak

Point your browser to the Keycloak Admin console https://sso.darkstar.lan/auth/admin/ to start the configuration process.

Add a ‘confidential’ openid-connect client in the ‘foundation‘ Keycloak realm (the realm where you created your users in the previous Episodes of this article series):

  • Select the ‘foundation‘ realm; click on ‘Clients‘ and then click the ‘Create‘ button.
    • ‘Client ID‘ = “affine”
    • ‘Client Type‘ = “OpenID Connect” (the default)
      Note that in Keycloak < 20.x this field was called ‘Client Protocol‘ and its value was “openid-connect”.
    • Toggle ‘Client authentication‘ to “On”. This will set the client access type to “confidential”.
      Note that in Keycloak < 20.x this was equivalent to setting ‘Access type‘ to “confidential”.
    • Check that ‘Standard Flow‘ is enabled.
    • Save.
  • Also in ‘Settings‘, configure how AFFiNE server connects to Keycloak.
    Our AFFiNE container is running on https://affine.darkstar.lan . We add

    • ‘Root URL‘ = https://affine.darkstar.lan/
    • ‘Home URL‘ = https://affine.darkstar.lan/auth/callback/
    • ‘Valid Redirect URIs‘ = https://affine.darkstar.lan/*
    • ‘Web Origins‘ = https://affine.darkstar.lan/
    • Save.

To obtain the secret for the “affine” Client ID:

  • Go to “Credentials > Client authenticator > Client ID and Secret”
    • Copy the Secret (TZ5PBCw66IhDtZJeBD4ctsS2Hrb253uY). This secret is just an example string of course; yours will be different. I will be re-using this example value below, where you should substitute your own generated value. A command-line alternative for creating this client is sketched below.
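
If you prefer Keycloak’s admin CLI over clicking through the console, an equivalent client can be created with ‘kcadm.sh‘. Treat this as a sketch only: the path to kcadm.sh (it may live inside your Keycloak container), the admin account name and the exact field names depend on your Keycloak version and deployment, so verify them against your own setup:

# /opt/keycloak/bin/kcadm.sh config credentials \
    --server https://sso.darkstar.lan/auth --realm master --user admin
# /opt/keycloak/bin/kcadm.sh create clients -r foundation \
    -s clientId=affine \
    -s protocol=openid-connect \
    -s publicClient=false \
    -s standardFlowEnabled=true \
    -s rootUrl=https://affine.darkstar.lan/ \
    -s baseUrl=https://affine.darkstar.lan/auth/callback/ \
    -s 'redirectUris=["https://affine.darkstar.lan/*"]' \
    -s 'webOrigins=["https://affine.darkstar.lan/"]'

Either way, the generated client secret is read from the ‘Credentials‘ tab as described above.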

Add an OIDC definition to AFFiNE

We have all the information we need to enhance our Docker stack.

First, add the credentials that AFFiNE shares with Keycloak to the ‘.env’ file of your Docker Compose definition (I already left a placeholder for this higher up in this article: the comment “# Here you will add OIDC credentials later” in the ‘.env‘ listing):

# OIDC (OpenID Connect):
AFFINE_OIDC_ISSUER=https://sso.darkstar.lan/auth/realms/foundation
AFFINE_OIDC_CLIENT_ID=affine
AFFINE_OIDC_CLIENT_SECRET=TZ5PBCw66IhDtZJeBD4ctsS2Hrb253uY

Then, add these variables to our ‘docker-compose.yml’ file at the location marked with the comment “# Here you will add OIDC environment variables later”:

  - OAUTH_OIDC_ISSUER=${AFFINE_OIDC_ISSUER}
  - OAUTH_OIDC_CLIENT_ID=${AFFINE_OIDC_CLIENT_ID}
  - OAUTH_OIDC_CLIENT_SECRET=${AFFINE_OIDC_CLIENT_SECRET}
  - OAUTH_OIDC_SCOPE=openid email profile offline_access
  - OAUTH_OIDC_CLAIM_MAP_USERNAME=preferred_username
  - OAUTH_OIDC_CLAIM_MAP_EMAIL=email
  - OAUTH_OIDC_CLAIM_MAP_NAME=preferred_username

Lastly, the OIDC plugin needs to be enabled. You do that by adding the following text block to the end of the file ‘/opt/dockerfiles/affine/config/affine.js’:

/* OAuth Plugin */
AFFiNE.use('oauth', {
  providers: {
    oidc: {
      // OpenID Connect
      issuer: 'https://sso.darkstar.lan/auth/realms/foundation',
      clientId: 'affine',
      clientSecret: 'TZ5PBCw66IhDtZJeBD4ctsS2Hrb253uY',
      args: {
        scope: 'openid email profile offline_access',
        claim_id: 'preferred_username',
        claim_email: 'email',
        claim_name: 'preferred_username',
      },
    },
  },
});

This OIDC definition block is already part of that file (apart from your own issuer, client ID and secret values), but commented-out, and it also includes examples for Google and Github authentication. Adding the complete block is cleaner than un-commenting a batch of lines.

Note: this duplicates the configuration for the OIDC client (at least the relevant values are also configured in the ‘.env’ file), but I did not find a way around that. The complete configuration must be present inside ‘affine.js’ otherwise you will not get an option to use OIDC as a login provider.

Bugs to resolve first

Two things had been bugging me for days until I found hints online, and by combining their fixes I was finally able to make my self-hosted AFFiNE server (version 0.18) work with Single Sign-On.
Right after Christmas 2024, a new version 0.19.1 was released which solved one of those two bugs (clicking the “Continue with OIDC” button would take you to the ‘https://app.affine.pro/oauth‘ page instead of your own ‘https://affine.darkstar.lan/oauth‘ page, because of a URL hard-coded inside the container).
The other bug hopefully gets resolved in a future release, it has already been reported in the project tracker:

  • The OIDC plugin would not be enabled even after properly configuring ‘/opt/dockerfiles/affine/config/affine.js’. It turns out that AFFiNE is not reading that file and the internal defaults take precedence.
    In order to work around this bug, I simply mounted my local ‘affine.js’ file into the container, overwriting the internal version. To do this, you need to add this line to your ‘docker-compose.yml‘ file under “volumes“, at the location marked with the comment “# Here you will add a workaround for an OIDC bug later”:
    - ${CONFIG_LOCATION}/affine.js:/app/dist/config/affine.js:ro
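
After editing the ‘.env‘, ‘docker-compose.yml‘ and ‘affine.js‘ files, recreate the stack so that the changes take effect (docker-compose only recreates the containers whose configuration actually changed):

# cd /usr/local/docker-affine
# docker-compose up -d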

All set!

This completes the Single Sign-On configuration. Now when you access your AFFiNE server and want to login, the screen will show the following:

Clicking on that “Continue with OIDC” will take you to the Keycloak login dialog. After you logged on using your SSO credentials, you will be asked to select an existing workspace or create a new one.
You will then be returned to the AFFiNE landing page https://affine.darkstar.lan …  but here you run into another bug, or perhaps it is a configuration oversight which I do not recognize: the page appears exactly as before you logged in.
You need to do a page refresh (Ctrl-R in your browser) and then you’ll see that you are indeed logged-in and you are syncing to the AFFiNE Cloud (aka your own server).


Configuring mail transport (Docker container & host)

Note that a large chunk of this section was copied from a previous article in the series. I do not know whether you actually read all of them in order, so I think it is prudent to share a complete set of instructions in each article.

AFFiNE needs to be able to send emails in the following circumstances:

  1. You create the user accounts manually, or else you want to give users an option to sign up via e-mail. In both cases, AFFiNE sends a “magic link” to that email address when the user attempts to login.
    The “magic link” will allow the user to create an account without an initial password: the URL contains a login token. The user should then set a password in AFFiNE to be able to login later without the need for another email with a “magic link”.
  2. A user wants to invite collaborators to their workspace. The invites are sent via email.

In the ‘/usr/local/docker-affine/.env‘ file which contains the configuration for Docker Compose, the hostname or IP address and the TCP port of your own SMTP server need to be provided. You can configure TLS encrypted connections, but that is not mandatory.
User credentials for sending the emails and a return address need to be added as well. The complete set looks like this:

# Mailer service for sending collaboration invites:
AFFINE_MAILER_HOST=affine.darkstar.lan
AFFINE_MAILER_PORT=587
AFFINE_MAILER_USER=affinemailer
AFFINE_MAILER_PASSWORD=E9X46W3vz8h1nBVqBHgKCISxRufRsHlAXSEbcXER/58=
AFFINE_MAILER_SENDER=affinemailer@darkstar.lan
AFFINE_MAILER_SECURE=false

Note that when I tried “AFFINE_MAILER_SECURE=true“, I was not able to make AFFiNE send emails; the encrypted SMTP connection would fail due to OpenSSL compatibility issues. My Docker host is running a hardened Slackware-current; perhaps I disabled some older cipher or protocol that the container needs? This is something I would like to see resolved.

Make the host accept mail from the AFFiNE container

When AFFiNE starts sending emails from its Docker container, we want Sendmail or Postfix to accept and process these. When an SMTP server receives email from an unknown IP address, it will commonly reject it with “Relaying denied: ip name lookup failed“. We don’t want that to happen.

In Docker, you already performed these steps:

  • Create an IP network for AFFiNE and assign a name to it
  • Assign a fixed IP address to the AFFiNE container

On the Docker host, these are the steps to complete:

  • Announce the Docker IP/hostname mapping
  • Setup a local DNS server
  • Configure SASL authentication mechanisms to be used by the MTA (mail transport agent, eg. Postfix)
  • Create a system user account to be used by AFFiNE when authenticating to the MTA
  • Add SASL AUTH and also TLS encryption capabilities to the MTA

Assign IP address to the Docker container

The ‘affine.lan‘ network definition is in the section “Docker network” higher-up.

The ‘docker-compose.yml‘ file contains the lines hard-coding the IP address:

networks:
  affine.lan:
    ipv4_address: ${AFFINE_IPV4_ADDRESS}
    aliases:
      - affine.affine.lan

With the value for that variable ${AFFINE_IPV4_ADDRESS} being defined in the ‘.env‘ file:

# We hard-code the IP address for the server so that we can make it send emails:
AFFINE_IPV4_ADDRESS=172.22.0.5

Add IP / name mapping to the Docker host

In ‘/etc/hosts‘ you need to add the following:

172.22.0.5    affine affine.affine.lan

And to ‘/etc/networks‘ add this line:

affine.lan   172.22

DNS serving local IPs on the Docker host

Under the assumption that your Cloud Server does not act as a LAN’s DNS server, we will use dnsmasq as the local nameserver. Dnsmasq is able to use the content of /etc/hosts and /etc/networks when responding to DNS queries. We can use the default, unchanged ‘/etc/dnsmasq.conf‘ configuration file.

But first, add this single line at the top of the host server’s ‘/etc/resolv.conf‘ (it may already be there as a result of setting up Keycloak), so that all local DNS queries will be handled by our local dnsmasq service:

nameserver 127.0.0.1

If you have not yet done so, (as root) make the startup script ‘/etc/rc.d/rc.dnsmasq‘ executable and start dnsmasq manually (Slackware will take care of starting it on every subsequent reboot):

# chmod +x /etc/rc.d/rc.dnsmasq
# /etc/rc.d/rc.dnsmasq start

If dnsmasq is already running (eg. when you have Keycloak running and sending emails) then send SIGHUP to the program as follows:

# killall -HUP dnsmasq

That tells dnsmasq to reload its configuration. Check that it’s working and continue to the next step:

# nslookup affine.affine.lan
Server: 127.0.0.1
Address: 127.0.0.1#53
Name: affine.affine.lan
Address: 172.22.0.5

Configuring SASL on the Docker host

The mailserver aka MTA (Sendmail or Postfix) requires that remote clients authenticate themselves. The Simple Authentication and Security Layer (SASL) protocol is used for that, but typically these MTAs do not implement SASL themselves. There are two usable SASL implementations available on Slackware: Cyrus SASL and Dovecot; I picked Cyrus SASL just because I know it better.
We need to configure the method of SASL authentication for the SMTP daemon, which is via the saslauthd daemon. That one is not started by default on Slackware.

If the file ‘/etc/sasl2/smtpd.conf‘ does not yet exist, create it and add the following content:

pwcheck_method: saslauthd
mech_list: PLAIN LOGIN

Don’t add any mechanisms other than PLAIN and LOGIN to that list. The resulting transfer of cleartext credentials is the reason that we also wrap the communication between mail client and server in a TLS encryption layer.

If the startup script ‘/etc/rc.d/rc.saslauthd‘ is not yet executable, make it so and start it manually this time (Slackware will take care of starting it on every subsequent reboot):

# chmod +x /etc/rc.d/rc.saslauthd
# /etc/rc.d/rc.saslauthd start

Create the mail user

We need a system account to allow AFFiNE to authenticate to the SMTP server. Let’s go with userid ‘affinemailer‘.
The following two commands will create the user and set a password:

# /usr/sbin/useradd -c "AFFiNE Mailer" -m -g daemon -s /bin/false affinemailer
# passwd affinemailer

Write down the password you assigned to the user ‘affinemailer‘. Both this userid and its password are used in the ‘.env‘ file of Docker Compose; in fact this is what I already posted higher-up:

AFFINE_MAILER_USER=affinemailer
AFFINE_MAILER_PASSWORD=E9X46W3vz8h1nBVqBHgKCISxRufRsHlAXSEbcXER/58=

After the account creation, you can test whether you configured SASL authentication correctly by running:

# testsaslauthd -u affinemailer -p E9X46W3vz8h1nBVqBHgKCISxRufRsHlAXSEbcXER/58=

… which should reply with:

0: OK "Success."

Configuring Sendmail on the Docker host

Since Postfix replaced Sendmail as the default MTA in Slackware a couple of years ago already, I am going to be concise here:
Make Sendmail aware that the AFFiNE container is a known local host by adding the following line to “/etc/mail/local-host-names” and restarting the sendmail daemon:

affine.affine.lan

The Sendmail package for Slackware provides a ‘.mc’ file to help you configure SASL-AUTH-TLS in case you had not yet implemented that: ‘/usr/share/sendmail/cf/cf/sendmail-slackware-tls-sasl.mc‘.

Configuring Postfix on the Docker host

If you use Postfix instead of Sendmail, this is what you have to change in the default configuration:

In ‘/etc/postfix/master.cf‘, uncomment this line to make the Postfix server listen on port 587 as well as 25 (port 25 is often firewalled or otherwise blocked):

submission inet n - n - - smtpd

In ‘/etc/postfix/main.cf‘, add these lines at the bottom:

# ---
# Allow Docker containers to send mail through the host:
mynetworks_style = class
# ---

Assuming you have not configured SASL AUTH before, you also need to add:

# ---
# The assumption is that you have created your server's SSL certificates
# using Let's Encrypt and 'dehydrated':
smtpd_tls_cert_file = /etc/dehydrated/certs/darkstar.lan/fullchain.pem
smtpd_tls_key_file = /etc/dehydrated/certs/darkstar.lan/privkey.pem
smtpd_tls_security_level = encrypt

# Enable SASL AUTH:
smtpd_sasl_auth_enable = yes
syslog_name = postfix/submission
smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination
# ---

After making modifications to the Postfix configuration, always run a check for correctness of the syntax, and do a reload if you don’t see issues:

# postfix check
# postfix reload

More details about SASL AUTH to be found in ‘/usr/doc/postfix/readme/SASL_README‘ on your own host machine.

Note: if you provide Postfix with SSL certificates through Let’s Encrypt (using the dehydrated tool) be sure to reload the Postfix configuration every time ‘dehydrated’ refreshes its certificates.

  1. In ‘/etc/dehydrated/hook.sh’ look for the ‘deploy_cert()‘ function and add these lines at the end of that function (perhaps the ‘apachectl‘ call is already there):

    # After successfully renewing our Apache certs, the non-root user 'dehydrated_user'
    # uses 'sudo' to reload the Apache configuration:
    sudo /usr/sbin/apachectl -k graceful
    # ... and uses 'sudo' to reload the Postfix configuration:
    sudo /usr/sbin/postfix reload
  2. Assuming you are not running dehydrated as root but instead as ‘dehydrated_user‘, you need to add a file in ‘/etc/sudoers.d/‘ – let’s name it ‘postfix_reload‘ – and copy this line into the file:

    dehydrated_user ALL=NOPASSWD: /usr/sbin/postfix reload

Success or failure

You can query your mail server to see if you were successful in adding SASL AUTH and TLS capabilities:

$ telnet smtp.darkstar.lan 587
Trying XXX.XXX.XXX.XXX...
Connected to smtp.darkstar.lan.
Escape character is '^]'.
220 smtp.darkstar.lan ESMTP Postfix
EHLO foo.org
250-PIPELINING
250-SIZE 10240000
250-VRFY
250-ETRN
250-STARTTLS
250-ENHANCEDSTATUSCODES
250-8BITMIME
250-DSN
250-SMTPUTF8
250 CHUNKING
AUTH LOGIN
530 5.7.0 Must issue a STARTTLS command first
QUIT
221 2.0.0 Bye
Connection closed by foreign host.
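
The “530 Must issue a STARTTLS command first” response is expected: the server refuses authentication over a plaintext connection. To check that the AUTH mechanisms are offered once the connection is encrypted, you can repeat the test through openssl; if SASL was configured correctly, the EHLO response should now contain a line like “250-AUTH PLAIN LOGIN”:

$ openssl s_client -quiet -starttls smtp -crlf -connect smtp.darkstar.lan:587
EHLO foo.org
QUIT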


Conclusion

I hope to have given you a complete overview of how to get your own self-hosted AFFiNE server up and running. Much of the information available online is quite scattered.
I’d love to hear from you if you were successful after following these instructions. Let me know your feedback below.

Have fun!

 

Slackware Cloud Server Series, Episode 8: Media Streaming Platform

Here is a new installment in the series which teaches how you can run a variety of services on your own private cloud server for family, friends and your local community, remaining independent of any of the commercial providers out there.

Today we will look into setting up a media streaming platform. You probably have a subscription – or multiple! – for Netflix, Prime, Disney+, AppleTV, HBO Max, Hulu, Peacock or any of the other streaming media providers. But if you already are in possession of your own local media files (movies, pictures, e-books or music) you will be excited to hear that you can make those media available in really similar fashion to those big platforms. I.e. you can stream – and enable others to stream! – these media files from just about anywhere on the globe.
Once we have this streaming server up and running I will show you how to setup our Identity Provider (Keycloak) just like we did for the other services I wrote about in the scope of this article series. The accounts that you already have created for the people that matter to you will then also have access to your streaming content via Single Sign-On (SSO).

Check out the list below which shows past, present and future episodes in the series. If the article has already been written, you’ll be able to click on the subject.
The first episode also contains an introduction with some more detail about what you can expect.

Introduction

Before we had on-demand video streaming services, linear television was basically the only option to consume movies, documentaries and shows in your home. The broadcasting company decides on the daily programming and you do not have any choice in what you would like to view at any time of any day. Your viewing will be interrupted by advertisements that you cannot skip. Of course, if there’s nothing of interest on television, you could rent a video-tape or DVD to watch a movie in your own time, instead of going to the theater.
Actually, Netflix started as an innovative DVD rental company, sending their customers DVDs by regular postal mail. They switched that DVD rental service to a subscription model but eventually realized the potential of subscription-based on-demand video streaming. The Netflix we know today was born.

Nowadays we cannot imagine a world without the ability to fully personalize the way you consume movies and tv-shows. But that creates a dependency on a commercial provider. In this article I want to show you how to set up your own private streaming platform which you fully control. The engine of that platform will be Jellyfin, a fully open source program descended from the final open source version of Emby before that became a closed-source product. Jellyfin has a client-server model where the server is under your control. You will learn how to set it up and run it as a Docker container. Jellyfin offers a variety of clients which can connect to this server and stream its content: there are client programs for Android phones and Android TV, WebOS and iOS, and there’s always the web client which is offered to browsers that connect to the server’s address.
The Jellyfin interface for clients is clean and informative, on par with commercial alternatives. The server collects information about your local content from online sources – as scheduled tasks or whenever you add a new movie, piece of music or e-book.

The good and bad of subscription-based streaming services

The Netflix business model has proven so successful that many content providers followed its lead, and present-day we are spoiled with an abundance of viewing options. If there’s anything you would like to watch, chances are high that the video is available for streaming already. The same is true for music – a Spotify subscription opens up a huge catalog of popular music negating the need to buy physical audio CD media.
The flipside of the coin is of course the fact that we are confronted with a fundamentally fragmented landscape of video streaming offerings. There’s a lot of choice but is that good for the consumer?

Streaming video platforms strategically focus on exclusive content to entice consumers into subscribing to their service offering. Exclusive content can be the pre-existing movie catalog of a film studio (see MGM+, Paramount+, Disney+ and more) or else entirely new content – movies/series that are commissioned by a streaming platform. Netflix, Apple TV and Amazon Prime are pouring billions of euros into the creation of new content since they do not have a library of existing content that they own. Social media are used as the battlefield where these content providers try to win you over and get you to subscribe. News outlets review the content which premieres on all these platforms and you read about that and want to take part in the excitement.

The result is that you, the consumer, are very much aware of all those terrific new movies and series that are released on streaming platforms, but the only way to view them all is to pay for them all. The various platforms will not usually license their own cool stuff to other providers. So what happens? You subscribe to multiple platforms and ultimately you are paying mostly for content you’ll never watch.
Worse, there seems to be a trend where these subscription fees are increasing faster than your salary is growing, and on top of that, the cheaper subscriptions not only reduce the viewing quality but also force you to watch advertisements. At that point you are basically back at what you were trying to get away from: linear television riddled with ads from which you cannot escape.
The big companies make big bucks and have created an over-priced product which sucks. The consumer loses.

Bottom line: any subscription-based service model gives you access to content only for as long as you pay. And sometimes you even need to pay extra – simply to have a comfortable viewing experience. You do not and will not own any of that content. When you cancel your subscription you lose access to the content permanently.

In order to gain control over what you want to view, where and when, there is quite a choice when it comes to setting up the required infrastructure. Look at Kodi, Plex, Emby or Jellyfin for instance. All of those programs implement private streaming servers, with slightly different goals. The catch is that they can only stream the content which you already own and store on local disks. Did you already back up your DVDs and music CDs to hard disk? Then you are in luck.

Preamble

This section describes the technical details of our setup, as well as the things which you should have prepared before trying to implement the instructions in this article.

For the sake of this instruction, I will use the hostname “https://jellyfin.darkstar.lan” as the URL where users will connect to the Jellyfin server.
Furthermore, “https://sso.darkstar.lan/auth” is the Keycloak base URL (see Episode 2 to read how we setup Keycloak as our identity provider).

Setting up your domain (which will hopefully be something other than “darkstar.lan”…) with new hostnames and then setting up web servers for the hostnames in that domain is an exercise left to the reader. Before continuing, please ensure that your equivalent for the following host has a web server running. It doesn’t have to serve any content yet but we will add some blocks of configuration to the VirtualHost definition during the steps outlined in the remainder of this article:

  • jellyfin.darkstar.lan

I expect that your Keycloak application is already running at your own real-life equivalent of https://sso.darkstar.lan/auth .

Using a  Let’s Encrypt SSL certificate to provide encrypted connections (HTTPS) to your webserver is documented in an earlier blog article.

Note that I am talking about webserver “hosts” but in fact, all of these are just virtual webservers running on the same machine, at the same IP address, served by the same Apache httpd program, but with different DNS entries. There is no need at all for multiple computers when setting up your Slackware Cloud server.
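
If you still need a starting point for such a VirtualHost, a minimal sketch could look like the block below. It assumes Apache 2.4 with mod_ssl and a Let’s Encrypt certificate in the standard certbot location; adjust the ServerName and the certificate paths to your own situation.

# ---
<VirtualHost *:443>
  ServerName jellyfin.darkstar.lan
  SSLEngine on
  SSLCertificateFile /etc/letsencrypt/live/jellyfin.darkstar.lan/fullchain.pem
  SSLCertificateKeyFile /etc/letsencrypt/live/jellyfin.darkstar.lan/privkey.pem
  # The reverse proxy directives from the next section go inside this block.
</VirtualHost>
# ---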

Apache reverse proxy configuration

We are going to run Jellyfin in a Docker container. The configuration will be such that the server will only listen for clients at a single TCP port at the localhost address (127.0.0.1).
To make our Jellyfin available for everyone at the address https://jellyfin.darkstar.lan/ we are using a reverse-proxy setup. This step can be done after the container is up and running, but I prefer to configure Apache in advance of the Jellyfin server start. It is a matter of preference.
Add the following reverse proxy lines to your VirtualHost definition of the “jellyfin.darkstar.lan” web site configuration and restart httpd:

# ---
# Required modules:
# mod_proxy, mod_ssl, proxy_wstunnel, http2, headers, remoteip

ProxyRequests Off
ProxyVia on
ProxyAddHeaders On
ProxyPreserveHost On

<Proxy *>
  Require all granted
</Proxy>

# Letsencrypt places a file in this folder when updating/verifying certs.
# This line tells Apache not to use the proxy for this folder:
ProxyPass "/.well-known/" "!"

<IfModule mod_ssl.c>
  # Tell Jellyfin that the original requests arrived over TLS connections:
  RequestHeader set X-Forwarded-Proto "https"
  RequestHeader set X-Forwarded-Port "443"
</IfModule>

# To work on WebOS TV, which runs the Jellyfin client in an I-Frame,
# you need to mitigate the SAMEORIGIN setting for X-Frame-Options
# if you configured this in your Apache httpd,
# or else you will just see a black screen after login:
Header always unset X-Frame-Options env=HTTPS

# Jellyfin hosted on https://jellyfin.darkstar.lan/
<Location /socket>
  ProxyPreserveHost On
  ProxyPass "ws://127.0.0.1:8096/socket"
  ProxyPassReverse "ws://127.0.0.1:8096/socket"
</Location>

<Location />
  ProxyPass "http://127.0.0.1:8096/"
  ProxyPassReverse "http://127.0.0.1:8096/"
</Location>
# ---
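
On a stock Slackware installation of Apache httpd, the modules listed at the top of this block are enabled through LoadModule lines in /etc/httpd/httpd.conf (that path is an assumption based on the default Slackware layout). A quick way to verify that they are present, test the new configuration and restart the web server:

# grep -E '^LoadModule.*(proxy|ssl|http2|headers|remoteip)' /etc/httpd/httpd.conf
# apachectl configtest
# /etc/rc.d/rc.httpd restart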

Jellyfin server setup

Prepare the Docker side

The Jellyfin Docker container runs with a specific internal user account. In order to recognize it on the host and to apply proper access control to the data which will be generated by Jellyfin on your host, we start with creating the user account on the host:

# /usr/sbin/groupadd -g 990 jellyfin
# /usr/sbin/useradd -c "Jellyfin" -d /opt/dockerfiles/jellyfin -M -g jellyfin -s /bin/false -u 990 jellyfin

Create the directories where our Jellyfin server will save its configuration and media caches, and let the user jellyfin own these directories:

# mkdir -p /opt/dockerfiles/jellyfin/{cache,config}
# chown -R jellyfin:jellyfin /opt/dockerfiles/jellyfin

If you want to enable GPU hardware-assisted video transcoding in the container, you have to add the jellyfin user as a member of the video group:

# gpasswd -a jellyfin video

Additionally you’ll require a dedicated Nvidia graphics card in your host computer, and you will also have to install the Nvidia driver on the host as well as the Nvidia Container Toolkit for Docker. This is an advanced setup which is outside of the scope of this article.

With the preliminaries taken care of, we now create the ‘docker-compose.yml‘ file for the streaming server. Store this one in its own directory:

# mkdir /usr/local/docker-jellyfin
# vi /usr/local/docker-jellyfin/docker-compose.yml

… and copy this content into the file:

version: '3.5'
services:
  jellyfin:
    image: jellyfin/jellyfin
    container_name: jellyfin
    user: '990:990'
    network_mode: 'host'
    ports:
    - 8096:8096
    volumes:
    - /etc/localtime:/etc/localtime:ro
    - /opt/dockerfiles/jellyfin/config:/config
    - /opt/dockerfiles/jellyfin/cache:/cache
    - /data/mp3:/music:ro     # Use the location of your actual mp3 collection here
    - /data/video:/video:ro   # Use the location of your actual video collection here
    - /data/books/:/ebooks:ro # Use the location of your actual e-book collection here
    restart: 'unless-stopped'
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 1024M
    # Optional - alternative address used for autodiscovery:
    environment:
      - JELLYFIN_PublishedServerUrl="https://jellyfin.darkstar.lan"
    # Optional - may be necessary for docker healthcheck to pass,
    # if running in host network mode
    extra_hosts:
      - "host.docker.internal:host-gateway"

Some remarks about this docker-compose file.

  • In green, I have highlighted the user ID number, the exposed TCP port and the URL by which you want to access the Jellyfin server once it is up and running. You will find these referenced in other sections of this article.
  • I show a few examples of how you can bind your own media library storage into the container so that Jellyfin can be configured to serve them. You would of course replace my example locations with your own local paths to media you want to make available. Following my example, these media directories would be available inside the container as “/music“, “/video” and “/ebooks“. When configuring the media libraries on your Jellyfin server,  you are going to point it to these directories.
  • From experience I can inform you that in its default configuration, the Jellyfin server would often get starved of memory and the OOM-killer would kick in. Therefore I give the server 2 CPU cores and 1 GB of RAM to operate reliably. Tune these numbers to your own specific needs.
  • The ‘host‘ network mode of Docker is required only if you want to make your Jellyfin streaming server discoverable on your local network using DLNA. If you do not care about DLNA auto-discovery then you can comment out the network_mode: 'host' line or simply delete the whole line.
    FYI: DLNA will send a broadcast signal from Jellyfin. This broadcast is limited to Jellyfin’s current subnet. When using Docker, the network should use ‘Host Mode’, otherwise the broadcast signal will only be sent to the bridged network inside of docker.
    Note: in the case of ‘Host Mode’, the Docker published port (8096) will not be used.
  • You may have noticed that there’s no database configuration. Jellyfin uses SQLite for its databases.
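
Before starting anything, you can let Docker Compose parse the file you just created and report any YAML or syntax mistakes; a quick sanity check looks like this:

# cd /usr/local/docker-jellyfin/
# docker-compose config

If the file is valid, the fully resolved configuration is printed; otherwise you get an error pointing at the offending line.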

Start your new server

Starting the server is as simple as:

# cd /usr/local/docker-jellyfin/
# docker-compose up -d

For now, we limit the availability of Jellyfin to localhost connections only (unless you have already set up the Apache reverse proxy configuration). That’s because we have not configured an admin account yet and do not want some random person to hijack the server. The Apache httpd reverse proxy makes the server accessible more universally.

Note that the Jellyfin logfiles can be found in /opt/dockerfiles/jellyfin/config/log/. Check these logs for clues if the server misbehaves or won’t even start.
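
You can also ask Docker itself how the container is doing; for instance, to see whether it stays up and to follow its console output:

# docker ps --filter name=jellyfin
# docker logs -f jellyfin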

Initial runtime configuration

Once our Jellyfin container is up and running, you can access it via http://127.0.0.1:8096/
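
If your Slackware server is headless and you have not enabled the reverse proxy yet, one way to reach that localhost-only port from your workstation is an SSH tunnel (a sketch; replace the account and hostname with your own):

$ ssh -L 8096:127.0.0.1:8096 your_account@your_slackware_server

With the tunnel active, pointing your local browser at http://127.0.0.1:8096/ brings up the Jellyfin setup wizard.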

The first step is to connect a browser to this URL and create an admin user account. Jellyfin will provide an initial setup wizard to configure the interface language and create that first user account, which will have admin rights over the server. You can add more users later via the Dashboard, and if you are going to configure Jellyfin to use Single Sign-On (SSO, see below) then you do not need to create any further users at all.

When the admin user has been created, you can start adding your media libraries:

Depending on the content type which you select for your libraries, Jellyfin will handle these libraries differently when presenting them to users. Movies will be presented along with metadata about the movie, its actors, its director etc., while e-books will show a synopsis of the story and its author and will offer the option to open and read them in the browser. Picture libraries can be played as a slide-show. And so on.

The next question will be to allow remote access and optionally an automatic port mapping via UPnP:

Leave the first checkbox enabled, since we want people to be able to access the streaming server remotely. Leave the UPnP option un-checked as it is not needed and may affect your internet router’s functioning.

This concludes the initial setup. Jellyfin will immediately start indexing the media libraries you have added during the setup. You can always add more libraries later on, by visiting the Admin Dashboard.

Jellyfin Single Sign On using Keycloak

Jellyfin does not support OpenID Connect by itself. However, a plugin exists which can add OIDC support. This will allow our server to offer Single Sign On (SSO) to users via our Keycloak identity provider.
Only the admin user will have their own local account. Any Slackware Cloud Server user will have their account already setup in your Keycloak database. The first time they login to your Jellyfin server using SSO, the account will be activated automatically.

We will now define a new Client in Keycloak that we can use with Jellyfin, add the OIDC plugin to Jellyfin, configure that plugin using the newly created Keycloak Client ID details, add a trigger in the login page that calls Keycloak for Single Sign-On, and then finally enable the plugin.

Adding Jellyfin Client ID in Keycloak

Point your browser to the Keycloak Admin console https://sso.darkstar.lan/auth/admin/ to start the configuration process.

Add a ‘confidential’ openid-connect client in the ‘foundation‘ Keycloak realm (the realm where you created your users in the previous Episodes of this article series):

  • Select ‘foundation‘ realm; click on ‘Clients‘ and then click ‘Create‘ button.
    • Client ID‘ = “jellyfin
    • Client Type‘ = “OpenID Connect” (the default)
      Note that in Keycloak < 20.x this field was called ‘Client Protocol‘ and its value “openid-connect”.
    • Toggle ‘Client authentication‘ to “On”. This will set the client access type to “confidential”
      Note that in Keycloak < 20.x this was equivalent to setting ‘Access type‘ to “confidential”.
    • Check that ‘Standard Flow‘ is enabled.
    • Save.
  • Also in ‘Settings‘, allow this app from Keycloak.
    Our Jellyfin container is running on https://jellyfin.darkstar.lan . We add

    • Valid Redirect URIs‘ = https://jellyfin.darkstar.lan/sso/OID/redirect/keycloak/*
    • Root URL‘ = https://jellyfin.darkstar.lan/
    • Web Origins‘ = https://jellyfin.darkstar.lan/+
    • Admin URL‘ = https://jellyfin.darkstar.lan
    • Save.

To obtain the secret for the “jellyfin” Client ID:

  • Go to “Credentials > Client authenticator > Client ID and Secret
    • Copy the Secret (MazskzUw7ZTanZUf9ljYsEts4ky7Uo0N)

This secret is an example string of course, yours will be different. I will be re-using this value below. You will use your own generated value.

Finally, configure the protocol mapping. Protocol mappers map items (such as a group name or an email address) to a specific claim in the ‘identity and access token‘ – i.e. the information which is going to be passed between the Keycloak Identity Provider and the Jellyfin server.
This mapping will allow Jellyfin to determine whether a user is allowed in, and/or whether the user will have administrator access.

  • For Keycloak versions < 20.x:
    • Open the ‘Mappers‘ tab to add a protocol mapper.
    • Click ‘Add Builtin
    • Select either “Groups”, “Realm Roles”, or “Client Roles”, depending on the role system you are planning on using.
      In our case, the choice is “Realm Roles”.
  • For Keycloak versions >= 20.x:
    • Click ‘Clients‘ in the left sidebar of the realm
    • Click on our “jellyfin” client and switch to the ‘Client Scopes‘ tab
    • In ‘Assigned client scope‘ click on “jellyfin-dedicated” scope
    • In the ‘Mappers‘ tab, click on ‘Add Predefined Mapper
    • You can select either “Groups”, “Realm Roles”, or “Client Roles”, depending on the role system you are planning on using.
      In our case, use “Realm Roles” and click ‘Create‘. The mapping will be created.
  • Once the mapper is added, click on the mapper to edit it
    • Note down the ‘Token Claim Name‘.
      In our case, that name is “realm_access.roles“.
    • Enable all four toggles: “Multivalued”, “Add to ID token”, “Add to access token”, and “Add to userinfo”.

Creating roles and groups in Keycloak

Jellyfin supports more than one admin user. Our initial local user account is an admin user by default. You may want to allow another user to act as an administrator as well. Since all other users will be defined in the Keycloak identity provider, we need to be able to differentiate between regular and admin users in Jellyfin. To achieve this, we use Keycloak groups, and we will use role-mapping to map OIDC roles to these groups.

Our Jellyfin administrators group will be called “jellyfin-admins”. Members of this group will be able to administer the Jellyfin server. The Jellyfin users group will be called “jellyfin-users”. Only user accounts which are members of this group will be able to access and use your Jellyfin server.
The Keycloak roles we create will have the same name. Once they have been created, you can forget about them. You will only have to manage the groups to add/remove users.
Let’s create those roles and groups in the Keycloak admin interface:

  • Select the ‘foundation‘ realm; click on ‘Roles‘ and then click ‘Create role‘ button.
    • ‘Role name‘ = “jellyfin-users
    • Click ‘Save‘.
    • Click ‘Create role‘ again: ‘Role name‘ = “jellyfin-admins
    • Click ‘Save‘.
  • Select the ‘foundation‘ realm; click on ‘Groups‘ and then click ‘Create group‘ button.
    • Group name‘ = “jellyfin-users
    • Click ‘Create
    • In the ‘Members‘ tab, add the users you want to become part of this group.
    • Go to the ‘Role mapping‘ tab, click ‘Assign role‘. Select “jellyfin-users” and click ‘Assign
    • Click ‘Save‘.
    • Click ‘Create group‘ again, ‘Group name‘ = “jellyfin-admins
    • Click ‘Create
    • In the ‘Members‘ tab, add the users you want to be the server administrators.
    • Go to the ‘Role mapping‘ tab, click ‘Assign role‘. Select “jellyfin-admins” and click ‘Assign
    • Click ‘Save‘.
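
If you prefer the command line over clicking through the admin console, the roles and groups can also be created with Keycloak’s ‘kcadm.sh‘ tool. This is only a sketch: the container name “keycloak”, the path to kcadm.sh and the admin credentials are assumptions which depend on how you deployed Keycloak in Episode 2, so adjust them accordingly.

# docker exec -it keycloak /opt/keycloak/bin/kcadm.sh config credentials \
    --server https://sso.darkstar.lan/auth --realm master --user admin --password 'your_admin_password'
# docker exec -it keycloak /opt/keycloak/bin/kcadm.sh create roles -r foundation -s name=jellyfin-users
# docker exec -it keycloak /opt/keycloak/bin/kcadm.sh create roles -r foundation -s name=jellyfin-admins
# docker exec -it keycloak /opt/keycloak/bin/kcadm.sh create groups -r foundation -s name=jellyfin-users
# docker exec -it keycloak /opt/keycloak/bin/kcadm.sh create groups -r foundation -s name=jellyfin-admins

Assigning the roles to the groups and managing the group members is arguably easier in the admin console, as described above.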

Add OIDC plugin to Jellyfin

Install the 9p4/jellyfin-plugin-sso github repository into Jellyfin:

  • Go to your Jellyfin Administrator’s Dashboard:
    • Click your profile icon in top-right and click ‘Dashboard
  • Click ‘Plugins‘ in the left sidebar to open that section.
  • Click ‘Repositories‘:
    • Click ‘+‘ to add the following repository details:
      Repository Name: “Jellyfin SSO
      Repository URL:
      https://raw.githubusercontent.com/9p4/jellyfin-plugin-sso/manifest-release/manifest.json
  • Click ‘Save‘.
  • Click ‘Ok‘ to acknowledge that you know what you are doing – this completes the repository installation.
  • Now, click ‘Catalog‘ in the left sidebar.
    • Select ‘SSO Authentication‘ from the ‘Authentication‘ section.
    • Click ‘Install‘ to install the most recent version (pre-selected).
    • Click ‘Ok‘ to acknowledge that you know what you are doing – this completes the plugin installation.

After installing this plugin but before configuring it, restart the Jellyfin container, for instance via the commands:

# cd /usr/local/docker-jellyfin/
# docker-compose restart

Configure the SSO plugin

  • Go to your Jellyfin Administrator’s Dashboard:
    • Click your profile icon in top-right and click ‘Dashboard
  • Click ‘Plugins‘ in the left sidebar to open that section.
  • Click the ‘SSO-Auth‘ plugin.
  • Add a provider with the following settings:
    • Name of the OIDC Provider: keycloak
    • OID Endpoint: https://sso.darkstar.lan/auth/realms/foundation
    • OpenID Client ID: jellyfin
    • OID Secret: MazskzUw7ZTanZUf9ljYsEts4ky7Uo0N
    • Enabled: Checked
    • Enable Authorization by Plugin: Checked
    • Enable All Folders: Checked
    • Roles: jellyfin-users
    • Admin Roles: jellyfin-admins
    • Role Claim: realm_access.roles
    • Set default username claim: preferred_username
  • All other options may remain unchecked or un-configured.
  • Click ‘Save‘.
  • Enable the plugin.

Note that for Keycloak the default role claim is ‘realm_access.roles’. I tried to use Groups instead of Realm Roles, but ‘groups’ is not part of the default scope. My attempt to configure ‘Request Additional Scopes’ and entering ‘groups’ resulted in an ‘illegal scope’ error.
By default, the Jellyfin SSO plugin limits the scope to “openid profile”.

Add a SSO button to the login page

Finally, we need to create the trigger which makes Jellyfin actually connect to the Keycloak identity provider. For this, we make smart use of Jellyfin’s ‘branding’ capability which allows you to customize the login page.

  • Go to your Jellyfin Administrator’s Dashboard:
    • Click your profile icon in top-right and click ‘Dashboard
  • Click ‘General‘ in the left sidebar
  • Under ‘Quick Connect‘, make sure that ‘Enable Quick Connect on this server‘ is checked
  • Under ‘Branding‘, add these lines in the ‘Login disclaimer‘ field:
    <form action="https://jellyfin.darkstar.lan/sso/OID/start/keycloak">
    <button class="raised block emby-button button-submit">
    Single Sign-On
    </button>
    </form>
  • Also under ‘Branding‘, add these lines to ‘Custom CSS Code‘:
    a.raised.emby-button {
    padding: 0.9em 1em;
    color: inherit !important;
    }
    .disclaimerContainer {
    display: block !important;
    width: auto !important;
    height: auto !important;
    }

Start Jellyfin with SSO

The Jellyfin server needs to be restarted after configuring and enabling the SSO plugin. Once that is done, we have an additional button on our login page, allowing you to login with “Single Sign-On”.

Only the local admin user will still use the User/Password fields; all other users will click the “Single Sign-On” button to be taken to the Keycloak login page, and return to the Jellyfin content once they are properly authenticated.

Jellyfin usage

Initial media libraries for first-time users

When you have your server running and are preparing for your first users to get onboarded, you need to consider what level of initial access you want to give to a user who logs in for the first time.

In the SSO Plugin Configuration section, the default access was set to “All folders”, meaning all your libraries will be instantly visible. If you do not want that, you can alternatively enable only the folder(s) that you want your first-time users to see (which may be ‘None‘). Then, once a user logs into Jellyfin for the first time and the server adds the user, you can go to that user’s profile and manually enable additional folders aka media libraries for them.

Note that after re-configuring any plugin, you need to restart Jellyfin.

Scheduled tasks

In the Admin Dashboard you’ll find a section ‘Scheduled tasks‘. One of these tasks is scanning for new media that gets added to your libraries. The frequency with which this task is triggered may be too low if you add new media regularly. This is definitely not as fancy as how Plex discovers new media as soon as it is added to a library, but hey! You get what you pay for 🙂

You can always trigger a scan manually if you do not want to wait for the scheduled task to run.
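
If you would rather script that manual trigger, the same action is exposed through Jellyfin’s REST API. The sketch below assumes you created an API key under Dashboard > API Keys and that your Jellyfin version offers the /Library/Refresh endpoint; replace YOUR_API_KEY with your own key:

# curl -X POST -H "X-Emby-Token: YOUR_API_KEY" "https://jellyfin.darkstar.lan/Library/Refresh"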

Further considerations

Running Jellyfin at a URL with subfolder

Suppose you want to run Jellyfin at https://darkstar.lan/jellyfin/ – i.e. in a subfolder of your host’s domainname.
To use a subfolder you will have to do some trivial tweaks to the reverse proxy configuration:

# Jellyfin hosted on https://darkstar.lan/jellyfin
<Location /jellyfin/socket>
  ProxyPreserveHost On
  ProxyPass "ws://127.0.0.1:8096/jellyfin/socket"
  ProxyPassReverse "ws://127.0.0.1:8096/jellyfin/socket"
</Location>
<Location /jellyfin>
  ProxyPass "http://127.0.0.1:8096/jellyfin"
  ProxyPassReverse "http://127.0.0.1:8096/jellyfin"
</Location>

More importantly, you also need to set the “Base URL” field in the Jellyfin server. This can be done by navigating to the “Admin Dashboard -> Networking -> Base URL” in the web client. Fill this field with the subfolder name “/jellyfin” and click Save.
The Jellyfin container will need to be restarted before this change takes effect and you may have to force-refresh your browser view.

Custom background for the login page

You saw in the screenshot above that you can customize the backdrop for your login screen. To achieve that, add these lines to ‘Custom CSS Code’ and supply the correct path to your own background image:
/*turn background container transparent*/
.backgroundContainer{
background-color: transparent;
}
/*add image to loginPage only*/
#loginPage{
background-image: url("/graphics/mybg.jpg");
background-size: cover;
/*background-size: cover; scales the image to fill the background (may crop)*/
/*background-size: contain; scales the image to fit inside the background (no cropping)*/
}

Note that the location "/graphics/mybg.jpg" translates to https://jellyfin.darkstar.lan/graphics/mybg.jpg for any web client, so that is where you will have to make it available via Apache on your host.
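
How you serve that path is up to you. One possible approach (a sketch, assuming your image lives in /srv/httpd/jellyfin-graphics on the host) is an Alias in the jellyfin.darkstar.lan VirtualHost, excluded from the reverse proxy in the same way as the “/.well-known/” path earlier; place these lines next to that existing exclusion:

# ---
# Serve static graphics from the host instead of proxying them to Jellyfin:
ProxyPass "/graphics/" "!"
Alias /graphics /srv/httpd/jellyfin-graphics
<Directory /srv/httpd/jellyfin-graphics>
  Require all granted
</Directory>
# ---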

Conclusion

This concludes the instructions for setting up your private streaming server. I hope I was clear enough, but if I have omitted steps or made mistakes, please let me know in the comments section.
I hope you like this article and when you do implement a Jellyfin server, may it bring you lots of fun.

Cheers, Eric

Slackware Cloud Server Series, Episode 7: Decentralized Social Media

Hi all!
It has been a while since I wrote an episode for my series about using Slackware as your private/personal ‘cloud server’. Time for something new!

Since a lot of people these days are looking for alternatives to Twitter and Mastodon is a popular choice, I thought it would be worthwhile to document the process of setting up your own Mastodon server. It can be a platform just for you, or you can invite friends and family, or open it up to the world. Your choice. The server you’ll learn to setup by reading this article uses the same Identity Provider (Keycloak) which is also used by all the other services I wrote about in the scope of this series. I.e. a private server using single sign-on for your own family/friends/community.

Check out the list below which shows past, present and future episodes in the series; if an article has already been written you’ll be able to click on its subject.
The first episode also contains an introduction with some more detail about what you can expect.

  • Episode 1: Managing your Docker Infrastructure
  • Episode 2: Identity and Access management (IAM)
  • Episode 3: Video Conferencing
  • Episode 4: Productivity Platform
  • Episode 5: Collaborative document editing
  • Episode 6: Etherpad with Whiteboard
  • Episode 7 (this article): Decentralized Social Media
    Setting up Mastodon as an open source alternative to the Twitter social media platform.

    • Introduction
    • What is decentralized social media
    • Preamble
    • Mastodon server setup
      • Prepare the Docker side
      • Define your unique setup
      • Configure your host for email delivery
      • Download required Docker images
      • Create a mastodon role in Postgres
      • Mastodon initial setup
    • Tuning and tweaking your new server
      • Run-time configuration
      • Command-line server management
      • Data retention
      • Full-text search
      • Reconfiguration
      • Growth
    • Connect your Mastodon instance to the Fediverse
    • Mastodon Single Sign On using Keycloak
      • Adding Mastodon client to Keycloak
      • Adding OIDC configuration to Mastodon’s Docker definition
      • Food for thought
      • Start Mastodon with SSO
    • Apache reverse proxy configuration
    • Attribution
    • Appendix
  • Episode 8: Media streaming platform
  • Episode 9: Cloudsync for 2FA Authenticator
  • Episode X: Docker Registry

Introduction

Twitter alternatives seem to be in high demand these days. It’s time to provide the users of your Slackware Cloud services with a fully Open Source social media platform that allows for better local control and integrates with other servers around the globe. It’s time for Mastodon.

This article is not meant to educate you on how to migrate away from Twitter as a user. I wrote a separate blog about that. Here we are going to look at setting up a Mastodon server instance, connecting this server to the rest of the Mastodon federated network, and then invite the users of your server to hop on and start following and interacting with the people they may already know from Twitter.

Setting up Mastodon is not trivial. The server consists of several services that work together, sharing data safely using secrets. This is an ideal case for Docker Compose and in fact, Mastodon’s github already contains a “docker-compose.yml” file which is pretty usable as a starting point.
Our Mastodon server will run as a set of microservices: a Postgres database, a Redis cache, and three separate instances of Tootsuite (the Mastodon code) acting as the web front-end for serving the user interface, a streaming server to deliver updates to users in real-time, and a background processing service to which the web service offloads a lot of its requests in order to deliver a snappy user interface.
These services can be scaled up in case the number of users grows, but for the sake of this article, we are going to assume that your audience is several tens or hundreds of users max.

Mastodon documentation is high-quality and includes instructions on how to set up your own server. Those pages discuss the security measures you would have to take, such as disabling password login, activating a firewall, and using fail2ban to monitor for break-in attempts and act on them in a timely manner.
The hardware requirements for setting up your own Mastodon server from scratch are well-documented. Assume that your Mastodon instance will consume 2 to 4 GB of RAM and several tens of GB of disk space to cache the media that is shown in your users’ news feeds. You can configure the expiry time of cached data to keep the local storage need manageable. You can opt for S3 cloud storage if you have the money and don’t want to run the risk of running out of disk space.

What is decentralized social media

Let’s first have a look at its opposite: Twitter. The Twitter microblogging platform presents itself as an easy-to-use website where you can write short texts and, with the press of a key, share your thoughts with all the other users of Twitter. Your posts (tweets) will be seen by people who follow you, and if those persons reply to you or like your post, their followers will see your post in their timeline.

I highlighted several bits of social media terminology in italics. It’s the glue that connects all users of the platform. But there is more. Twitter runs algorithms that analyze your tweeting and liking behavior. Based on the behavioral profile they compile of you, these algorithms will slowly start feeding other people’s posts into your timeline that are relevant to the subjects you showed interest in. Historically this has been the cause of “social media bubbles” where you are unwittingly sucked into a downward spiral with an increasingly narrow focus. People become less willing to accept other people’s views and eventually radicalize. Facebook is another social media platform with similar traps.

All this is not describing a place where I feel comfortable. So what are the alternatives?
You could of course just decide to quit social media completely, but you would miss out on a good amount of serious conversation. There’s a variety of open source implementations of distributed or federated networks. For instance, Diaspora is a distributed social media platform which has existed since 2010, and GNU Social even since 2008. Pleroma is similar to Mastodon in that both use the ActivityPub W3C protocol and therefore are easily connected. But I’ll focus on Mastodon. The Mastodon network is federated, meaning that it consists of many independently hosted server instances that are all interconnected and share data with each other in real-time. Compare this to a distributed network which does not have any identifiable center (Bittorrent for instance).

The Mastodon project was created back in 2016 by a German developer because he was fed up with Twitter and thought he could do better. Mastodon started gaining real traction in April 2022 when Musk announced he wanted to buy Twitter. Since the completion of that deal, there has been a steady exodus of frustrated Twitter users. This resulted in a tremendous increase of new Mastodon users, its user base growing by 50,000 per day on average.

As a Twitter migrant, the first thing you need to decide on is: on which Mastodon server should I create my account? See, that is perhaps the biggest conceptual difference with Twitter where you just have an account, period. On Mastodon, you have an account on a server. On Twitter I am @erichameleers. But on Mastodon I am @alien@fosstodon.org but I can just as well be @alien@mastodon.slackware.nl ! Same person, different accounts. Now this is not efficient of course, but it shows that you can move from one server to another server, and your ‘handle’ will change accordingly since the servername is part of it.

As a Mastodon user you essentially subscribe to three separate news feeds: your home timeline, showing posts of people you follow as well as other people’s posts that were boosted by people you follow.  Then there’s the local timeline: public posts from people that have an account on the same Mastodon server instance where you created your account. And finally the federated timeline, showing all posts that your server knows about, which is mainly the posts from people being followed by all the other users of your server. Which means, if you run a small server in terms of users, your local and federated timelines will be relatively clean. But on bigger instances with thousands of users, you can easily get intimidated by the flood of messages. That’s why as a user you should subscribe to hashtags as well as follow users that you are interested in. Curating the home timeline like that will keep you sane. See my previous blog for more details.

As a Mastodon server administrator, you will have to think about the environment you want to provide to its future users.  Will you define a set of house rules? Will you allow anyone to sign up or do you want to control who ‘lives’ in your server? Ideally you want people to pick your server, create an account, feel fine, and never move on to another server instance. But the strength of open source is also a weakness: when you become the server administrator, you assume responsibility for an unhampered user experience. You need to monitor your server health and monitor/moderate the content that is shared by its users. You need to keep it connected to the network of federated servers. You might have to pay for hosting, data traffic and storage. Are you prepared to do this for a long time? If so, will you be asking your users for monetary support (donations or otherwise)?
Think before you do.

And this is how you do it.

Preamble

This section describes the technical details of our setup, as well as the things which you should have prepared before trying to implement the instructions in this article.

For the sake of this instruction, I will use the hostname “https://mastodon.darkstar.lan” as the URL where users will connect to the Mastodon server.
Furthermore, “https://sso.darkstar.lan/auth” is the Keycloak base URL (see Episode 2 for how we did the Keycloak setup).

The Mastodon container stack (it uses multiple containers) uses a specific internal IP subnet and we will assign static IP addresses to one or more containers. That internal subnet will be “172.22.0.0/16“.
Note that Docker by default will use a single IP range for all its containers if you do not specify a range to be used. The default range is “172.17.0.0/16”.

Setting up your domain (which will hopefully be something other than “darkstar.lan”…) with new hostnames and then setting up web servers for the hostnames in that domain is an exercise left to the reader. Before continuing, please ensure that your equivalent for the following host has a web server running. It doesn’t have to serve any content yet but we will add some blocks of configuration to the VirtualHost definition during the steps outlined in the remainder of this article:

  • mastodon.darkstar.lan

I expect that your Keycloak application is already running at your own real-life equivalent of https://sso.darkstar.lan/auth .

Using a  Let’s Encrypt SSL certificate to provide encrypted connections (HTTPS) to your webserver is documented in an earlier blog article.

Note that I am talking about webserver “hosts” but in fact, all of these are just virtual webservers running on the same machine, at the same IP address, served by the same Apache httpd program, but with different DNS entries. There is no need at all for multiple computers when setting up your Slackware Cloud server.

Mastodon server setup

Prepare the Docker side

Let’s start with creating the directories where our Mastodon server will save its user data and media caches:

# mkdir -p /opt/dockerfiles/mastodon/{postgresdata,redis,public/system,elasticsearch}

Then we only need two files from the Mastodon git repository. The ‘docker-compose.yml‘ file being the most important, so download that one first:

# mkdir /usr/local/docker-mastodon
# cd /usr/local/docker-mastodon
# wget https://github.com/mastodon/mastodon/raw/main/docker-compose.yml

Ownership for the “public” directory structure needs to be set to user:group “991:991” because that’s the mastodon userID inside the container:

# chown -R 991:991 /opt/dockerfiles/mastodon/public/

This provides a good base for our container stack setup. Mastodon’s own ‘docker-compose.yml‘ implementation expects a file in the same directory called ‘.env.production‘ which contains all the variable/value pairs required to run the server. We will download a sample version of that .env file from the same Mastodon git repository in a moment.
We need a bit of prep-work on both these files before running our first “docker-compose” command. First the YAML file:

  • Remove all the ‘build: .’ lines from the ‘docker-compose.yml‘ file. We will not build local images from scratch; we will use the official Docker images found on Docker Hub.
  • Pin the downloaded Docker images to specific versions; look them up on Docker Hub. Using ‘latest’ is not recommended for production.
    For instance: change all “tootsuite/mastodon” to “tootsuite/mastodon:v4.0.2” where “v4.0.2” is the most recent stable version. If you omit the version in this statement,
    by default “latest” will be assumed and then you won’t be certain of the actual version of Mastodon you are running.
  • Give all containers a name using the “container_name” statement, so that they are more easily recognizable in “docker ps” output instead of just a container ID:
    • container_name: mstdn_postgres
    • container_name: mstdn_redis
    • container_name: mstdn_es
    • container_name: mstdn_web
    • container_name: mstdn_streaming
    • container_name: mstdn_sidekiq
  • Modify the ‘volumes’ directives in ‘docker-compose.yml‘ and define storage locations outside of the local directory.
    By default, Docker  Compose will create data directories in the current directory.

    • ./postgres14:/var/lib/postgresql/data‘ should become:
      /opt/dockerfiles/mastodon/postgresdata:/var/lib/postgresql/data
    • ./redis:/data‘ should become:
      /opt/dockerfiles/mastodon/redis:/data
    • ./elasticsearch:/data‘ should become:
      /opt/dockerfiles/mastodon/elasticsearch:/data
    • ./public/system:/mastodon/public/system‘ should become:
      /opt/dockerfiles/mastodon/public/system:/mastodon/public/system
  • Change the default TCP ports (3000 for the ‘web’ service and 4000 for ‘streaming’ service) to respectively 3333 and 4444 (ports 3000 and 4000 may already be in use); note that there are multiple occurrences of these port numbers in the YML file, but only the ‘ports‘ value needs to be changed:
    • '127.0.0.1:3000:3000' needs to become: '127.0.0.1:3333:3000'
    • '127.0.0.1:4000:4000' needs to become: '127.0.0.1:4444:4000'
  • The Redis exposed port needs to be changed from the default “6379” to e.g. “6380” to prevent a clash with another already running Redis server on your host. Again, this only needs a modification in the ‘redis:’ section of ‘docker-compose.yml‘ because internally, the container services can talk freely to the default port.
    We add two lines in the ‘redis:’ section of ‘docker-compose.yml‘:
    ports:
    - '127.0.0.1:6380:6379'
  • Give Mastodon its own internal IP range, because we need to assign the ‘web’ container its own fixed IP address. Then we can tell Sendmail that it is OK to relay emails from the web server (if you use Postfix instead of Sendmail, you can tell me what you needed to do instead and I will update this article… I only use Sendmail).
    Make sure to pick a yet un-used subnet range. Check the output of “route -n” or “ip  route show” to find which IP subnets are currently in use.
    At the bottom of your ‘docker-compose.yml‘ file change the entire ‘networks:’ section so that it looks like this:

    # ---
    networks:
      external_network:
        ipam:
          config:
            - subnet: 172.22.0.0/16
      internal_network:
        internal: true
    # --

    For the ‘web’ container we change the ‘networks:’ definition to:

    networks:
      internal_network:
      external_network:
        ipv4_address: 172.22.0.3
        aliases:
        - mstdn_web.external_network
    

Download the sample environment file from Mastodon’s git repository and use it to create a bootstrap ‘.env.production‘ file. It will contain variables with empty values, but without the existence of this file the initial setup of Mastodon’s docker stack will fail:

# cd /usr/local/docker-mastodon
# wget https://raw.githubusercontent.com/mastodon/mastodon/main/.env.production.sample
# cp .env.production.sample .env.production

This file is full of empty variables and some explanation about their purpose. The Mastodon setup process will eventually dump the full content for ‘.env.production‘ to standard output. You will copy this output into ‘.env.production‘, replacing whatever was in there at first.
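
With the edited ‘docker-compose.yml‘ and this bootstrap ‘.env.production‘ both in place, it does not hurt to let Docker Compose validate your work before going any further:

# cd /usr/local/docker-mastodon
# docker-compose config -q && echo "docker-compose.yml parses cleanly"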

Define your unique setup

Your Mastodon server uses a Postgres database; Postgres will need an admin password, you’ll need a database user/password combo, et cetera. All these parameters correspond to a variable in ‘.env.production‘.
Here is the bare minimum configuration you should prepare in advance of starting the Mastodon setup process. By ‘prepare’ I mean: write down the values that you want to use for your server setup. Values in green are going to be unique for your own setup; this article uses example values of course.

# Our server hostname:
LOCAL_DOMAIN=mastodon.darkstar.lan
# Postgres bootstrap:
DB_NAME=mastodon
DB_USER=mastodon
DB_PASS=XBrhvXcm840p8w60L9xe2dnjzbiutmP6
# Optionally the server can send notification emails:
SMTP_SERVER=darkstar.lan
SMTP_PORT=587
SMTP_LOGIN=
SMTP_PASSWORD=
SMTP_AUTH_METHOD=plain
SMTP_OPENSSL_VERIFY_MODE=none
SMTP_ENABLE_STARTTLS=auto
SMTP_FROM_ADDRESS=notifications@mastodon.darkstar.lan

You will additionally need a password for the Postgres admin user when you initialize the database in one of the next sections. Just like for the ‘DB_PASS‘ variable above (which is the password for the database user account), you can generate a random password using this command:

# cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1
ZsAMBvLi9JvowrSUYSC60muWAgIPwIoz

When you have written down everything, we can continue.

Configure your host for email delivery

Part of the Mastodon server setup is to allow it to send notification emails. Note that this is an optional choice. You can skip that part of the setup if you want.
If you want your server to be able to send email notifications, your host needs to relay those emails, and particularly Sendmail requires some information to allow this. The IP address of the Mastodon webserver needs to be trusted by Sendmail as an email origin.

First the DNS part: if you use dnsmasq to provide DNS to your host machine, add the following line to “/etc/hosts”:

172.22.0.3    mstdn_web mstdn_web.external_network

followed by a “killall -HUP dnsmasq” to let your DNS server pick up the update in the hosts file. If you use bind, you’ll know how to add an IP to hostname mapping.
Sendmail needs to be able to resolve the IP when Mastodon requests an email to be sent.

For the Sendmail part of the configuration, add the following to “/etc/mail/access” if it is not already there:

127.0.0  RELAY
172.17   RELAY
172.19   RELAY
172.22   RELAY

It tells Sendmail to recognize our Docker IP ranges as trustworthy.
Run “makemap hash /etc/mail/access.db < /etc/mail/access” to compile the “access” file into a Sendmail database and reload Sendmail:

# /etc/rc.d/rc.sendmail restart

Download required Docker images

To download (pull) the required images from Docker Hub, you run:

# cd /usr/local/docker-mastodon
# docker-compose pull

This does not yet start the containers.

Create a mastodon role in Postgres

We need to create the Postgres role “mastodon” prior to starting the Mastodon server setup, because the setup will fail otherwise with “Database connection could not be established with this configuration, try again. FATAL: role “mastodon” does not exist“.
To accomplish this, we spin up a temporary Postgres container using the same Docker image and configuration as we will use for the Mastodon container stack, i.e. we copy most of the parameters out of our ‘docker-compose.yml‘ file:

# docker run --rm --name postgres-bootstrap \
    -v /opt/dockerfiles/mastodon/postgresdata:/var/lib/postgresql/data \
    -e POSTGRES_PASSWORD="ZsAMBvLi9JvowrSUYSC60muWAgIPwIoz" \
    -d postgres:14-alpine

The “--rm” flag triggers the removal of this temporary container once it is stopped.

When this container is running, we ‘exec’ into a psql shell:

# docker exec -it postgres-bootstrap psql -U postgres

The following SQL commands will initialize the database and create the “mastodon” role for us. All of that is stored in “/opt/dockerfiles/mastodon/postgresdata”, which is the same location we will be using for our Mastodon container stack. The removal of the container afterwards will not affect our new database, since its data lives outside of the container.

postgres=# CREATE USER mastodon WITH PASSWORD 'XBrhvXcm840p8w60L9xe2dnjzbiutmP6' CREATEDB;
postgres=# \q
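
Before stopping the bootstrap container you can optionally verify that the role really ended up in the data directory, for instance by listing the database roles:

# docker exec -it postgres-bootstrap psql -U postgres -c '\du'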

We can then stop the Postgres container, and continue with the Mastodon setup:

# docker stop postgres-bootstrap

Mastodon server initial setup

Note below the use of “bundle exec rake” instead of just “rake” as used in the official documentation; this avoids the error: “Gem::LoadError: You have already activated rake 13.0.3, but your Gemfile requires rake 13.0.6. Prepending `bundle exec` to your command may solve this.

Everything is in place to start the setup. We spin up a temporary web server using docker-compose. Using docker-compose ensures that the web server’s dependent containers are started in advance:

# docker-compose run --rm web bundle exec rake mastodon:setup

This command starts an interactive dialog allowing you to configure basic and mandatory options. It will also pre-compile JavaScript and CSS assets, and generate a new set of application secrets used for communication between the various containers.
The configurator will output a full server configuration to standard output at the end. You need to copy and paste that configuration into the ‘.env.production‘ file.
Use the information you compiled in the previous section “Define your unique setup” when answering these questions. After successful completion, this is what you should see:

Below is your configuration, save it to an .env.production file outside Docker:
# Generated with mastodon:setup on 2022-12-12 12:12:12 UTC
# Some variables in this file will be interpreted differently whether you are
# using docker-compose or not.
LOCAL_DOMAIN=mastodon.darkstar.lan
SINGLE_USER_MODE=false
SECRET_KEY_BASE=cf709f9ef70555d82dfb236e5010fff69af2c6d5528a0a0dfc423eba324c87116b031c6fa25b71176ec961018aa80e1f8f2c8c619783972805e698bd9b36cb39
OTP_SECRET=9e27360c39d87e0101c5b1bd24e2c8c306d6ee09da1cbac5f43e5587b223d4d5f240ec565aba92d91435a7bce49e83da2821269a47b0f8077781d73213ef1216
VAPID_PRIVATE_KEY=WvVnHQlzJEQXDuJqR8q1m7CHRGnWI1EiIdO75Di0oDA=
VAPID_PUBLIC_KEY=BLOn5R3ILCIgzQ0VY3cjXFe-IyHSBVtscIG-SaL3pcAOcEqRN4sZAkyAf0iRg6hZuFUVHuFhn1Dm7pZZbNoTF2g=
DB_HOST=db
DB_PORT=5432
DB_NAME=mastodon
DB_USER=mastodon
DB_PASS=XBrhvXcm840p8w60L9xe2dnjzbiutmP6
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_PASSWORD=
SMTP_SERVER=darkstar.lan
SMTP_PORT=587
SMTP_LOGIN=
SMTP_PASSWORD=
SMTP_AUTH_METHOD=plain
SMTP_OPENSSL_VERIFY_MODE=none
SMTP_ENABLE_STARTTLS=auto
SMTP_FROM_ADDRESS=Mastodon <notifications@mastodon.darkstar.lan>

It is also saved within this container so you can proceed with this wizard.

The next step in the configuration initializes the Mastodon database. It is followed by a prompt to create the server’s admin user. The setup program will output an initial password for this admin user, which you can use to log in to the Mastodon web interface. Be sure to change that password after logging in!

Now launch the Mastodon server using:

# docker-compose up -d

Note that if you run “docker-compose up” without the “-d” so that the process remains in the foreground, you’ll see the following warnings coming from the Redis server:

mstdn_redis | 1:M 12 Dec 2022 13:55:16.140 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.
mstdn_redis | 1:M 12 Dec 2022 13:55:16.140 # Server can't set maximum open files to 10032 because of OS error: Operation not permitted.
mstdn_redis | 1:M 12 Dec 2022 13:55:16.140 # Current maximum open files is 4096. maxclients has been reduced to 4064 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'.

I leave it up to you to look into increasing the max-open-files default of 4096.
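
One possible way to deal with it (a sketch; whether it is honored depends on your Docker version and the docker daemon’s own limits) is an explicit ulimits setting added to the ‘redis:’ service in ‘docker-compose.yml‘:

    ulimits:
      nofile:
        soft: 65536
        hard: 65536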

Tuning and tweaking your new server

Run-time configuration

Now that your server is up and running, it is time to personalize it, using the admin account for which you received an initial password during setup.
Mastodon has a documentation page on this process. The server admin has access to a set of menu items under “Settings > Administration“.

One menu item is “Relays“. This is where you would add one or more relays to speed up the process of Federation between your small instance and the rest of the Fediverse. See the section further down called “Connect your Mastodon instance to the Fediverse” for the details.

Spend some time in the “Server Settings” and “Server Rules” submenus and their tabs (such as “Branding“) to add information about your server that identifies it to visitors and users, and shows the “house rules” that clarify what you expect of people that want an account on your server.
Here is an example of how branding is used to present my site to visitors:

Command-line server management

The admin user has access to the “Admin CLI” which is the fancy name for the “tootctl” command in the mstdn_web container. You can find its man-page at https://docs.joinmastodon.org/admin/tootctl/ .
If you need to run the “tootctl” command, use “docker exec” to execute your command inside of the already running Web container which we gave the name ‘mstdn_web‘ in ‘docker-compose.yml‘:

# docker exec -it mstdn_web bin/tootctl <some_command_option>

Data retention

The tab “Content Retention” under “Server Settings” will be of importance to you, depending on the limitations of your storage capacity. It allows you to specify a max age of downloaded media after which those files get purged from the local  cache. The amount of GB in use can increase rapidly as the number of local users grows. Alternatively you can run the following Docker commands on the host’s commandline to delete parts of the cache immediately instead of waiting for the container’s own scheduled maintenance. In my case, you’ll see that the server is quite inactive (single-user instance) and there’s nothing to be removed:

# docker exec -it mstdn_web bin/tootctl preview_cards remove
0/0 |===========================================================| Time: 00:00:00
Removed 0 preview cards (approx. 0 Bytes)
# docker exec -it mstdn_web bin/tootctl media remove
0/0 |===========================================================| Time: 00:00:00
Removed 0 media attachments (approx. 0 Bytes) 
# docker exec -it mstdn_web bin/tootctl cache clear
OK

Full-text search

If you want to support full-text search of posts on your Mastodon server, you should un-comment the container definition for elasticsearch (the ‘es‘ service) in your ‘docker-compose.yml‘ file and run “docker-compose down ; docker-compose build ; docker-compose up -d” to pull the elasticsearch container from Docker Hub. Note that this will tax your host with additional RAM, CPU and storage demand.
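
Elasticsearch is also known to refuse to start when the host’s vm.max_map_count kernel setting is too low. If the ‘es‘ container exits right away with such a complaint in its logs, raising the value on the Slackware host should help (make it persistent via your local sysctl configuration if needed):

# sysctl -w vm.max_map_count=262144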

Reconfiguration

In case of future re-configuration you may want to skip the full configuration dialog if you only want to set up or migrate the database; in that case you can invoke the database setup directly:

# docker-compose run --rm -v $(pwd)/.env.production:/opt/mastodon/.env.production web bundle exec rake db:setup

Growth

As your instance welcomes more users, you may have to scale up the service. The official Mastodon documentation has a page with considerations: https://docs.joinmastodon.org/admin/scaling/

Adding more concurrency is relatively easy. But when it comes to caching the data and media pulled in by your users’ activities, you may eventually run into the limits of your local server storage. If your server is that successful, consider setting up a support model using Patreon, PayPal or other means that will provide you with the funds to connect your Mastodon instance to Cloud-based storage. That way, storage needs won’t be limited by the dimensions of your local hardware but rather by the funds you collect from your users.

Remember, Mastodon is a federated network with a lot of server instances, but the Mastodon users will expect that their account is going to be available at all times. You will have to work out a model where you can give your users that kind of reassurance. Grow a team of server moderators and admins, promote your server, and secure a means of funding which allows you to operate your server for at least the next 3 to 6 months even when the flow of money stops. Create room for contingencies.
Mastodon does not show ads, and instead relies a lot on its users to keep the network afloat.

Connect your Mastodon instance to the Fediverse

A Mastodon server depends on its users to determine what information to pull from other servers in the Fediverse. If your users start following people on remote instances or subscribe to hashtags, your server instance will start federating, i.e. it will start retrieving this information, at the same time introducing your instance to remote instances.

If you are going to be running a small Mastodon instance with only a few users, getting connected to the wider Fediverse may be challenging, and it can take a while before your server really starts federating. To accommodate this, the Mastodon network contains relay servers.
Adding one or more of these relays to your server configuration makes the relay push federated data to your server. A list of relay servers is available for instance here: https://techbriefly.com/2022/11/11/active-mastodon-relays/ and I am sure you can find more relays mentioned in other locations. Some relays require that their admin acknowledges and approves your request before the data push is activated.

Mastodon Single Sign On using Keycloak

Like with the other cloud services we have been deploying, our Mastodon server will be using our Keycloak-based Single Sign On solution using OpenID Connect. Only the admin user will have their own local account. Any Slackware Cloud Server user will have their account already setup in your Keycloak database. The first time they login to your Mastodon server, the account will be activated automatically.
It means that your server should disable the account registration page. You can configure that in “Server Settings > Registrations“.

Adding Mastodon Client ID in Keycloak

Point your browser to the Keycloak Admin console https://sso.darkstar.lan/auth/admin/ to start the configuration process.

Add a ‘confidential’ openid-connect client in the ‘foundation‘ Keycloak realm (the realm where you created your users in the previous Episodes of this article series):

  • Select ‘foundation‘ realm; click on ‘Clients‘ and then click ‘Create‘ button.
    • Client ID‘ = “mastodon
    • Client Protocol‘ = “openid-connect” (the default)
    • Access type‘ = “confidential”
    • Save.
  • Also in ‘Settings‘, allow this app from Keycloak.
    Our Mastodon container is running on https://mastodon.darkstar.lan . We add

    • Valid Redirect URIs‘ = https://mastodon.darkstar.lan/auth/auth/openid_connect/callback
    • Base URL‘ = https://mastodon.darkstar.lan/
    • Web Origins‘ = https://mastodon.darkstar.lan
    • Save.
  • To obtain the secret for the “mastodon” Client ID, go to “Credentials > Client authenticator > Client ID and Secret
    • Copy the Secret (Q5PZA2xQcpDcdvGpxqViQIVgI6slm7xO)
  • Alternatively to retrieve the secret, go to ‘Installation‘ tab to download the ‘keycloak.json‘ file for this new client:
    • Format Option‘ = “Keycloak OIDC JSON”
    • Click ‘Download‘ which downloads a file “keycloak.json” with the following content:
# ---
{
    "realm": "foundation",
    "auth-server-url": "https://sso.darkstar.lan/auth",
    "ssl-required": "external",
    "resource": "mastodon",
    "credentials": {
     "secret": "Q5PZA2xQcpDcdvGpxqViQIVgI6slm7xO"
    },
    "confidential-port": 0
}
# ---

This secret is an example string of course, yours will be different. I will be re-using this value below. You will use your own generated value.

Add OIDC configuration to Mastodon’s Docker definition

Bring the mastodon container stack down if it is currently running:
# cd /usr/local/docker-mastodon
# docker-compose down

Add the following set of definitions to the ‘.env.production‘ file:

# Enable OIDC:
OIDC_ENABLED=true
# Text to appear on the login button:
OIDC_DISPLAY_NAME=Keycloak SSO
# Where to find your Keycloak OIDC server:
OIDC_ISSUER=https://sso.darkstar.lan/auth/realms/foundation
# Use discovery to determine all OIDC endpoints:
OIDC_DISCOVERY=true
# Scope you want to obtain from OIDC server:
OIDC_SCOPE=openid,profile,email
# Field to be used for populating user's @alias:
OIDC_UID_FIELD=preferred_username
# Client ID you configured for Mastodon in Keycloak:
OIDC_CLIENT_ID=mastodon
# Secret of the Client ID you configured for Mastodon in Keycloak:
OIDC_CLIENT_SECRET=Q5PZA2xQcpDcdvGpxqViQIVgI6slm7xO
# Where OIDC server should come back after authentication:
OIDC_REDIRECT_URI=https://mastodon.darkstar.lan/auth/auth/openid_connect/callback
# Assume emails are verified by the OIDC server:
OIDC_SECURITY_ASSUME_EMAIL_IS_VERIFIED=true

Food for thought

Let’s dive into the meaning of this line which you just added:
>  OIDC_UID_FIELD=preferred_username
The “preferred_username” claim maps to the Username property of a Keycloak account. That mapping is defined in Keycloak’s OpenID Connect ‘Client Scopes‘.
The environment block which we added to ‘.env.production‘ also contains the line below. It requests the scopes ‘openid‘, ‘profile‘ and ‘email‘ from Keycloak, so that the attributes (claims) defined in those scopes end up in the token – the packet of data which Keycloak hands to its client, the Mastodon server.
> OIDC_SCOPE=openid,profile,email

To learn more about the available attributes, log in to Keycloak as the admin user and select our “foundation” realm.
Via ‘Configure‘ > ‘Client Scopes‘, click on ‘profile‘ > ‘Mappers‘ > ‘username‘, where you will see that the ‘username‘ property has a token claim name of ‘preferred_username‘. We use ‘preferred_username’ in the OIDC_UID_FIELD variable, which means that users will see their familiar account name in Mastodon, just like in all the other Cloud services.
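You can also check what the realm advertises without opening the Admin console at all, by querying its OIDC discovery document. A quick check from the shell (assuming ‘curl‘ and ‘jq‘ are installed, and the legacy ‘/auth‘ context path as used throughout this series):

# curl -s https://sso.darkstar.lan/auth/realms/foundation/.well-known/openid-configuration | jq -r '.claims_supported[]'

The claim ‘preferred_username‘ should be listed in the output.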

However, what if you don’t want to use regular user account names for your Mastodon handles? After all, Twitter usernames are your own choice, and as a Mastodon server admin you may want to offer the same freedom to your users.
In that case, consider using one of the other available attributes, for instance ‘nickname‘, which is also a User Attribute and therefore acceptable. It will not be a trivial exercise however: in Keycloak you would have to create a customized account page which allows users to change not just their email or password, but also their nickname. For this, you will have to add ‘nickname’ as a mapped attribute to your realm’s user accounts first, and you have to ensure somehow that the nickname values are unique. I have not researched how this should (or even could) be achieved. If any of you readers actually succeeds in doing this, I would be interested to know, leave a comment below please!
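For what it’s worth, the Mastodon side of such a change is the easy part: recent Keycloak versions already ship a ‘nickname‘ mapper in the built-in ‘profile‘ client scope, so once every account in your realm carries a unique ‘nickname‘ attribute, you would only have to point Mastodon at that claim. A minimal sketch, assuming the Keycloak side has been solved:

# ---
# In '.env.production', use the 'nickname' claim instead of 'preferred_username'
# as the source of the Mastodon @handle:
OIDC_UID_FIELD=nickname
# ---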

Start Mastodon with SSO

Start the mastodon container stack in the directory where we have our tailored ‘docker-compose.yml‘ file:

# cd /usr/share/docker/data/mastodon
# docker-compose up -d

And voilà! We now have an additional button on our login page, allowing you to log in with “Keycloak SSO“.
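If the button does not show up, first check whether the web service actually picked up the new variables. A quick sanity check, again assuming the Rails service is named ‘web‘ as in Mastodon’s upstream docker-compose.yml: follow its startup logs and grep its environment for the OIDC settings.

# docker-compose logs -f web
# docker-compose exec web env | grep ^OIDC_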

Apache reverse proxy configuration

To make Mastodon available at https://mastodon.darkstar.lan/ we are using a reverse-proxy setup. This step can be done after the container stack is already up and running, but I prefer to configure Apache in advance of the Mastodon server start. You choose.

Add the following reverse proxy lines to your VirtualHost definition of the “mastodon.darkstar.lan” web site configuration and restart httpd:

# ---
# Set some headers:
Header always set Referrer-Policy "strict-origin-when-cross-origin"
Header always set Strict-Transport-Security "max-age=31536000"
RequestHeader set X-Forwarded-Proto "https"

# Reverse proxy to Mastodon Docker container stack:
SSLProxyEngine on
ProxyTimeout 900
ProxyVia On
ProxyRequests Off
ProxyPreserveHost On

<Proxy *>
    Options FollowSymLinks MultiViews
    AllowOverride All
    Order allow,deny
    allow from all
</Proxy>

<Location />
    ProxyPass        http://127.0.0.1:3333/ retry=0 timeout=30
    ProxyPassReverse http://127.0.0.1:3333/
</Location>
<Location /api/v1/streaming>
    ProxyPass        ws://127.0.0.1:4444/ retry=0 timeout=30
    ProxyPassReverse ws://127.0.0.1:4444/
</Location>
# ---
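Note that this configuration relies on a few Apache modules which are not always enabled out of the box: mod_headers for the header directives, mod_ssl, and the proxy modules, including mod_proxy_wstunnel for the WebSocket streaming location. As a sketch, these are the LoadModule lines to uncomment (or add) in Slackware’s ‘/etc/httpd/httpd.conf‘ before restarting httpd; the module path may differ on your system:

# ---
LoadModule headers_module lib64/httpd/modules/mod_headers.so
LoadModule ssl_module lib64/httpd/modules/mod_ssl.so
LoadModule proxy_module lib64/httpd/modules/mod_proxy.so
LoadModule proxy_http_module lib64/httpd/modules/mod_proxy_http.so
LoadModule proxy_wstunnel_module lib64/httpd/modules/mod_proxy_wstunnel.so
# ---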

Attribution

When setting up my own server, I was helped by reading these pages:
