
Slackware Cloud Server Series, Episode 7: Decentralized Social Media

Hi all!
It has been a while since I wrote an episode for my series about using Slackware as your private/personal ‘cloud server’. Time for something new!

Since a lot of people these days are looking for alternatives to Twitter and Mastodon is a popular choice, I thought it would be worthwhile to document the process of setting up your own Mastodon server. It can be a platform just for you, or you can invite friends and family, or open it up to the world. Your choice. The server you’ll learn to set up by reading this article uses the same Identity Provider (Keycloak) which is also used by all the other services I wrote about in the scope of this series. In other words: a private server using single sign-on for your own family/friends/community.

Check out the list below, which shows past, present and future episodes in the series; if an article has already been written, you’ll be able to click on its subject.
The first episode also contains an introduction with some more detail about what you can expect.

  • Episode 1: Managing your Docker Infrastructure
  • Episode 2: Identity and Access management (IAM)
  • Episode 3: Video Conferencing
  • Episode 4: Productivity Platform
  • Episode 5: Collaborative document editing
  • Episode 6: Etherpad with Whiteboard
  • Episode 7 (this article): Decentralized Social Media
    Setting up Mastodon as an open source alternative to the Twitter social media platform.

    • Introduction
    • What is decentralized social media
    • Preamble
    • Mastodon server setup
      • Prepare the Docker side
      • Define your unique setup
      • Configure your host for email delivery
      • Download required Docker images
      • Create a mastodon role in Postgres
      • Mastodon initial setup
    • Tuning and tweaking your new server
      • Run-time configuration
      • Command-line server management
      • Data retention
      • Full-text search
      • Reconfiguration
      • Growth
    • Connect your Mastodon instance to the Fediverse
    • Mastodon Single Sign On using Keycloak
      • Adding Mastodon client to Keycloak
      • Adding OIDC configuration to Mastodon’s Docker definition
      • Food for thought
      • Start Mastodon with SSO
    • Apache reverse proxy configuration
    • Attribution
    • Appendix
  • Episode X: Docker Registry

Introduction

Twitter alternatives seem to be in high demand these days. It’s time to provide the users of your Slackware Cloud services with a fully Open Source social media platform that allows for better local control and integrates with other servers around the globe. It’s time for Mastodon.

This article is not meant to educate you on how to migrate away from Twitter as a user. I wrote a separate blog about that. Here we are going to look at setting up a Mastodon server instance, connecting this server to the rest of the Mastodon federated network, and then invite the users of your server to hop on and start following and interacting with the people they may already know from Twitter.

Setting up Mastodon is not trivial. The server consists of several services that work together, sharing data safely using secrets. This is an ideal case for Docker Compose, and in fact Mastodon’s GitHub repository already contains a “docker-compose.yml” file which is quite usable as a starting point.
Our Mastodon server will run as a set of microservices: a Postgres database, a Redis cache, and three separate instances of Tootsuite (the Mastodon code) acting as the web front-end for serving the user interface, a streaming server to deliver updates to users in real-time, and a background processing service to which the web service offloads a lot of its requests in order to deliver a snappy user interface.
These services can be scaled up in case the number of users grows, but for the sake of this article, we are going to assume that your audience is several tens or hundreds of users max.

Mastodon documentation is high-quality and includes instructions on how to set up your own server. Those pages discuss the security measures you should take, such as disabling password logins, activating a firewall, and using fail2ban to monitor for break-in attempts and act on them in a timely manner.
The hardware requirements for setting up your own Mastodon server from scratch are well-documented. Assume that your Mastodon instance will consume 2 to 4 GB of RAM and several tens of gigabytes of disk space to cache the media that is shown in your users’ news feeds. You can configure the expiry time of cached data to keep the local storage need manageable. You can opt for S3 cloud storage if you have the money and don’t want to run the risk of running out of disk space.

What is decentralized social media

Let’s first have a look at its opposite: Twitter. The Twitter microblogging platform presents itself as an easy-to-use website where you can write short texts and, with the press of a key, share your thoughts with all the other users of Twitter. Your posts (tweets) will be seen by people who follow you, and if those persons reply to you or like your post, their followers will see your post in their timeline.

I highlighted several bits of social media terminology in italics. It’s the glue that connects all users of the platform. But there is more. Twitter runs algorithms that analyze your tweeting and liking behavior. Based on the behavioral profile they compile of you, these algorithms will slowly start feeding other people’s posts into your timeline that have relevance to the subjects you showed your interest in. Historically this has been the cause of “social media bubbles” where you are unwittingly sucked into a downward spiral with increasingly narrow focus. People become less willing to accept other people’s views and eventually radicalize. Facebook is another social media platform with similar traps.

All this is not describing a place where I feel comfortable. So what are the alternatives?
You could of course just decide to quit social media completely, but you would miss out on a good amount of serious conversation. There’s a variety of open source implementations of distributed or federated networks. For instance, Diaspora is a distributed social media platform that has existed since 2010, and GNU Social even since 2008. Pleroma is similar to Mastodon in that both use the ActivityPub W3C protocol and are therefore easily connected. But I’ll focus on Mastodon. The Mastodon network is federated, meaning that it consists of many independently hosted server instances that are all interconnected and share data with each other in real-time. Compare this to a distributed network which does not have any identifiable center (Bittorrent for instance).

The Mastodon project was created back in 2016 by a German developer because he was fed up with Twitter and thought he could do better. Mastodon started gaining real traction in April 2022 when Musk announced he wanted to buy Twitter. Since the completion of that deal, there has been a steady exodus of frustrated Twitter users. This resulted in a tremendous increase of new Mastodon users, its user base growing by 50,000 users per day on average.

As a Twitter migrant, the first thing you need to decide on is: on which Mastodon server should I create my account? See, that is perhaps the biggest conceptual difference with Twitter where you just have an account, period. On Mastodon, you have an account on a server. On Twitter I am @erichameleers. But on Mastodon I am @alien@fosstodon.org but I can just as well be @alien@mastodon.slackware.nl ! Same person, different accounts. Now this is not efficient of course, but it shows that you can move from one server to another server, and your ‘handle’ will change accordingly since the servername is part of it.

As a Mastodon user you essentially subscribe to three separate news feeds: your home timeline, showing posts of people you follow as well as other people’s posts that were boosted by people you follow. Then there’s the local timeline: public posts from people that have an account on the same Mastodon server instance where you created your account. And finally the federated timeline, showing all posts that your server knows about, which is mainly the posts from people being followed by all the other users of your server. Which means, if you run a small server in terms of users, your local and federated timelines will be relatively clean. But on bigger instances with thousands of users, you can easily get intimidated by the flood of messages. That’s why as a user you should subscribe to hashtags as well as follow users that you are interested in. Curating the home timeline like that will keep you sane. See my previous blog for more details.

As a Mastodon server administrator, you will have to think about the environment you want to provide to your future users. Will you define a set of house rules? Will you allow anyone to sign up, or do you want to control who ‘lives’ in your server? Ideally you want people to pick your server, create an account, feel fine, and never move on to another server instance. But the strength of open source is also a weakness: when you become the server administrator, you assume responsibility for an unhampered user experience. You need to monitor your server health and monitor/moderate the content that is shared by its users. You need to keep it connected to the network of federated servers. You might have to pay for hosting, data traffic and storage. Are you prepared to do this for a long time? If so, will you be asking your users for monetary support (donations or otherwise)?
Think before you do.

And this is how you do it.

Preamble

This section describes the technical details of our setup, as well as the things which you should have prepared before trying to implement the instructions in this article.

For the sake of this instruction, I will use the hostname “https://mastodon.darkstar.lan” as the URL where users will connect to the Mastodon server.
Furthermore, “https://sso.darkstar.lan/auth” is the Keycloak base URL (see Episode 2 for how we did the Keycloak setup).

The Mastodon container stack (it uses multiple containers) uses a specific internal IP subnet and we will assign static IP addresses to one or more containers. That internal subnet will be “172.22.0.0/16“.
Note that Docker by default will use a single IP range for all its containers if you do not specify a range to be used. The default range is “172.17.0.0/16“.
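If you are unsure which subnets Docker has already claimed on your host, you can ask Docker itself. These are standard Docker CLI commands; the output shown is just an example:

# docker network ls
# docker network inspect bridge | grep -i subnet
            "Subnet": "172.17.0.0/16"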

Setting up your domain (which will hopefully be something else than “darkstar.lan”…) with new hostnames and then setting up web servers for the hostnames in that domain is an exercise left to the reader; a minimal sketch of such a VirtualHost follows below. Before continuing, please ensure that your equivalent for the following host has a web server running. It doesn’t have to serve any content yet but we will add some blocks of configuration to the VirtualHost definition during the steps outlined in the remainder of this article:

  • mastodon.darkstar.lan
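A minimal sketch of such a VirtualHost; the certificate paths are assumptions, adapt them to your own Let’s Encrypt setup:

# ---
<VirtualHost *:443>
    ServerName mastodon.darkstar.lan
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/mastodon.darkstar.lan/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/mastodon.darkstar.lan/privkey.pem
    # The reverse proxy blocks from the section "Apache reverse proxy
    # configuration" further down will be added inside this VirtualHost.
</VirtualHost>
# ---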

I expect that your Keycloak application is already running at your own real-life equivalent of https://sso.darkstar.lan/auth .

Using a Let’s Encrypt SSL certificate to provide encrypted connections (HTTPS) to your webserver is documented in an earlier blog article.

Note that I am talking about webserver “hosts” but in fact, all of these are just virtual webservers running on the same machine, at the same IP address, served by the same Apache httpd program, but with different DNS entries. There is no need at all for multiple computers when setting up your Slackware Cloud server.

Mastodon server setup

Prepare the Docker side

Let’s start with creating the directories where our Mastodon server will save its user data and media caches:

# mkdir -p /opt/dockerfiles/mastodon/{postgresdata,redis,public/system,elasticsearch}

Then we only need two files from the Mastodon git repository. The ‘docker-compose.yml‘ file is the most important one, so download that first:

# mkdir /usr/local/docker-mastodon
# cd /usr/local/docker-mastodon
# wget https://github.com/mastodon/mastodon/raw/main/docker-compose.yml

Ownership of the “public” directory structure needs to be set to user:group “991:991”, because that is the mastodon user ID inside the container:

# chown -R 991:991 /opt/dockerfiles/mastodon/public/

This provides a good base for our container stack setup. Mastodon’s own ‘docker-compose.yml‘ implementation expects a file in the same directory called ‘.env.production‘ which contains all the variable/value pairs required to run the server. We will download a sample version of that .env file from the same Mastodon git repository in a moment.
We need a bit of prep-work on both these files before running our first “docker-compose” command. First the YAML file (a consolidated sketch of the resulting ‘web‘ service definition follows this list):

  • Remove all the ‘build: .’ lines from the ‘docker-compose.yml‘ file. We will not build local images from scratch; we will use the official Docker images found on Docker Hub.
  • Pin the downloaded Docker images to specific versions; look them up on Docker Hub. Using ‘latest’ is not recommended for production.
    For instance: change all “tootsuite/mastodon” to “tootsuite/mastodon:v4.0.2” where “v4.0.2” is the most recent stable version. If you omit the version in this statement,
    by default “latest” will be assumed and then you won’t be certain of the actual version of Mastodon you are running.
  • Give all containers a name using the “container_name” statement, so that they are more easily recognizable in “docker ps” output instead of showing just a container ID:
    • container_name: mstdn_postgres
    • container_name: mstdn_redis
    • container_name: mstdn_es
    • container_name: mstdn_web
    • container_name: mstdn_streaming
    • container_name: mstdn_sidekiq
  • Modify the ‘volumes’ directives in ‘docker-compose.yml‘ and define storage locations outside of the local directory.
    By default, Docker Compose will create data directories in the current directory.

    • ‘./postgres14:/var/lib/postgresql/data‘ should become:
      ‘/opt/dockerfiles/mastodon/postgresdata:/var/lib/postgresql/data‘
    • ‘./redis:/data‘ should become:
      ‘/opt/dockerfiles/mastodon/redis:/data‘
    • ‘./elasticsearch:/data‘ should become:
      ‘/opt/dockerfiles/mastodon/elasticsearch:/data‘
    • ‘./public/system:/mastodon/public/system‘ should become:
      ‘/opt/dockerfiles/mastodon/public/system:/mastodon/public/system‘
  • Change the default TCP ports (3000 for the ‘web’ service and 4000 for ‘streaming’ service) to respectively 3333 and 4444 (ports 3000 and 4000 may already be in use); note that there are multiple occurrences of these port numbers in the YML file, but only the ‘ports‘ value needs to be changed:
    • '127.0.0.1:3000:3000' needs to become: '127.0.0.1:3333:3000'
    • '127.0.0.1:4000:4000' needs to become: '127.0.0.1:4444:4000'
  • The Redis exposed port needs to be changed from the default “6379” to e.g. “6380” to prevent a clash with another already running Redis server on your host. Again, this only needs a modification in the ‘redis:’ section of ‘docker-compose.yml‘ because internally, the container services can talk freely to the default port.
    We add two lines in the ‘redis:’ section of ‘docker-compose.yml‘:
    ports:
    - '127.0.0.1:6380:6379'
  • Give Mastodon its own internal IP range, because we need to assign the ‘web’ container its own fixed IP address. Then we can tell Sendmail that it is OK to relay emails from the web server (if you use Postfix instead of Sendmail, you can tell me what you needed to do instead and I will update this article… I only use Sendmail).
    Make sure to pick an as-yet unused subnet range. Check the output of “route -n” or “ip route show” to find which IP subnets are currently in use.
    At the bottom of your ‘docker-compose.yml‘ file change the entire ‘networks:’ section so that it looks like this:

    # ---
    networks:
      external_network:
        ipam:
          config:
            - subnet: 172.22.0.0/16
      internal_network:
        internal: true
    # ---

    For the ‘web’ container we change the ‘networks:’ definition to:

    networks:
      internal_network:
      external_network:
        ipv4_address: 172.22.0.3
        aliases:
        - mstdn_web.external_network
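Putting all of the above modifications together, the ‘web‘ service in your ‘docker-compose.yml‘ would end up looking roughly like this. This is a sketch based on the upstream Compose file; your pinned image version may differ, and upstream lines not shown here (such as ‘healthcheck‘) stay as they are:

# ---
  web:
    image: tootsuite/mastodon:v4.0.2
    container_name: mstdn_web
    restart: always
    env_file: .env.production
    command: bash -c "rm -f /mastodon/tmp/pids/server.pid; bundle exec rails s -p 3000"
    ports:
      - '127.0.0.1:3333:3000'
    depends_on:
      - db
      - redis
    volumes:
      - /opt/dockerfiles/mastodon/public/system:/mastodon/public/system
    networks:
      internal_network:
      external_network:
        ipv4_address: 172.22.0.3
        aliases:
          - mstdn_web.external_network
# ---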
    

Download the sample environment file from Mastodon’s git repository and use it to create a bootstrap ‘.env.production‘ file. It will contain variables with empty values, but without the existence of this file the initial setup of Mastodon’s docker stack will fail:

# cd /usr/local/docker-mastodon
# wget https://raw.githubusercontent.com/mastodon/mastodon/main/.env.production.sample
# cp .env.production.sample .env.production

This file is full of empty variables and some explanation about their purpose. The Mastodon setup process will eventually dump the full content for ‘.env.production‘ to standard output. You will copy this output into ‘.env.production‘, replacing whatever was in there at first.

Define your unique setup

Your Mastodon server uses a Postgres database; Postgres will need an admin password, you’ll need a database user/password combo, et cetera. All these parameters correspond to a variable value in ‘.env.production‘.
Here is the bare minimum configuration you should prepare in advance of starting the Mastodon setup process. With ‘prepare’ I mean: write down the values that you want to use for your server setup. The values below are of course examples; yours will be unique to your own setup.

# Our server hostname:
LOCAL_DOMAIN=mastodon.darkstar.lan
# Postgres bootstrap:
DB_NAME=mastodon
DB_USER=mastodon
DB_PASS=XBrhvXcm840p8w60L9xe2dnjzbiutmP6
# Optionally the server can send notification emails:
SMTP_SERVER=darkstar.lan
SMTP_PORT=587
SMTP_LOGIN=
SMTP_PASSWORD=
SMTP_AUTH_METHOD=plain
SMTP_OPENSSL_VERIFY_MODE=none
SMTP_ENABLE_STARTTLS=auto
SMTP_FROM_ADDRESS=notifications@mastodon.darkstar.lan

You will additionally need a password for the Postgres admin user when you initialize the database in one of the next sections. Just like for the ‘DB_PASS‘ variable above (which is the password for the database user account), you can generate a random password using this command:

# cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1
ZsAMBvLi9JvowrSUYSC60muWAgIPwIoz

When you have written down everything, we can continue.

Configure your host for email delivery

Part of the Mastodon server setup is to allow it to send notification emails. Note that this is an optional choice. You can skip that part of the setup if you want.
If you want your server to be able to send email notifications, your host needs to relay those emails, and particularly Sendmail requires some information to allow this. The IP address of the Mastodon webserver needs to be trusted by Sendmail as an email origin.

First the DNS part: if you use dnsmasq to provide DNS to your host machine, add the following line to “/etc/hosts”:

172.22.0.3    mstdn_web mstdn_web.external_network

followed by a “killall -HUP dnsmasq” to let your DNS server pick up the update in the hosts file. If you use bind, you’ll know how to add an IP to hostname mapping.
Sendmail needs to be able to resolve the IP when Mastodon requests an email to be sent.

For the Sendmail part of the configuration, add the following to “/etc/mail/access” if it is not already there:

127.0.0  RELAY
172.17   RELAY
172.19   RELAY
172.22   RELAY

It tells Sendmail to recognize our Docker IP ranges as trustworthy.
Run “makemap hash /etc/mail/access < /etc/mail/access” to compile the “access” file into a Sendmail database (the file “access.db”) and restart Sendmail:

# /etc/rc.d/rc.sendmail restart

Download required Docker images

To download (pull) the required images from Docker Hub, you run:

# cd /usr/local/docker-mastodon
# docker-compose pull

This does not yet start the containers.

Create a mastodon role in Postgres

We need to create the Postgres role “mastodon” prior to starting the Mastodon server setup, because the setup will fail otherwise with “Database connection could not be established with this configuration, try again. FATAL: role “mastodon” does not exist“.
To accomplish this, we spin up a temporary Postgres container using the same Docker image and configuration as we will use for the Mastodon container stack, i.e. we copy most of the parameters out of our ‘docker-compose.yml‘ file:

# docker run --rm --name postgres-bootstrap \
    -v /opt/dockerfiles/mastodon/postgresdata:/var/lib/postgresql/data \
    -e POSTGRES_PASSWORD="ZsAMBvLi9JvowrSUYSC60muWAgIPwIoz" \
    -d postgres:14-alpine

The “--rm” flag triggers the removal of the temporary container once we stop it after the configuration is complete.

When this container is running, we ‘exec’ into a psql shell:

# docker exec -it postgres-bootstrap psql -U postgres

The following SQL commands will initialize the database and create the “mastodon” role for us, and all of that will be stored in “/opt/dockerfiles/mastodon/postgresdata” which is the location we will also be using for our Mastodon container stack. The removal of the container afterwards will not affect our new database since that will be created outside of the container.

postgres=# CREATE USER mastodon WITH PASSWORD 'XBrhvXcm840p8w60L9xe2dnjzbiutmP6' CREATEDB;
postgres=# \q

We can then stop the Postgres container, and continue with the Mastodon setup:

# docker stop postgres-bootstrap

Mastodon server initial setup

Note below the use of “bundle exec rake” instead of just “rake” as used in the official documentation; this avoids the error: “Gem::LoadError: You have already activated rake 13.0.3, but your Gemfile requires rake 13.0.6. Prepending `bundle exec` to your command may solve this.”

Everything is in place to start the setup. We spin up a temporary web server using docker-compose. Using docker-compose ensures that the web server’s dependent containers are started in advance:

# docker-compose run --rm web bundle exec rake mastodon:setup

This command starts an interactive dialog allowing you to configure basic and mandatory options. It will also pre-compile JavaScript and CSS assets, and generate a new set of application secrets used for communication between the various containers.
The configurator will output a full server configuration to standard output at the end. You need to copy and paste that configuration into the ‘.env.production‘ file.
Use the information you compiled in the previous section “Define your unique setup” when answering these questions. After successful completion, this is what you should see:

Below is your configuration, save it to an .env.production file outside Docker:
# Generated with mastodon:setup on 2022-12-12 12:12:12 UTC
# Some variables in this file will be interpreted differently whether you are
# using docker-compose or not.
LOCAL_DOMAIN=mastodon.darkstar.lan
SINGLE_USER_MODE=false
SECRET_KEY_BASE=cf709f9ef70555d82dfb236e5010fff69af2c6d5528a0a0dfc423eba324c87116b031c6fa25b71176ec961018aa80e1f8f2c8c619783972805e698bd9b36cb39
OTP_SECRET=9e27360c39d87e0101c5b1bd24e2c8c306d6ee09da1cbac5f43e5587b223d4d5f240ec565aba92d91435a7bce49e83da2821269a47b0f8077781d73213ef1216
VAPID_PRIVATE_KEY=WvVnHQlzJEQXDuJqR8q1m7CHRGnWI1EiIdO75Di0oDA=
VAPID_PUBLIC_KEY=BLOn5R3ILCIgzQ0VY3cjXFe-IyHSBVtscIG-SaL3pcAOcEqRN4sZAkyAf0iRg6hZuFUVHuFhn1Dm7pZZbNoTF2g=
DB_HOST=db
DB_PORT=5432
DB_NAME=mastodon
DB_USER=mastodon
DB_PASS=XBrhvXcm840p8w60L9xe2dnjzbiutmP6
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_PASSWORD=
SMTP_SERVER=darkstar.lan
SMTP_PORT=587
SMTP_LOGIN=
SMTP_PASSWORD=
SMTP_AUTH_METHOD=plain
SMTP_OPENSSL_VERIFY_MODE=none
SMTP_ENABLE_STARTTLS=auto
SMTP_FROM_ADDRESS=Mastodon <notifications@mastodon.darkstar.lan>

It is also saved within this container so you can proceed with this wizard.

The next step in the configuration initializes the Mastodon database. It is followed by a prompt to create the server’s admin user. The setup program will output an initial password for this admin user, which you can use to log on to the Mastodon web interface. Be sure to change that password after logging in!

Now launch the Mastodon server using:

# docker-compose up -d

Note that if you run “docker-compose up” without the “-d” so that the process remains in the foreground, you’ll see the following warnings coming from the Redis server:

mstdn_redis | 1:M 12 Dec 2022 13:55:16.140 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.
mstdn_redis | 1:M 12 Dec 2022 13:55:16.140 # Server can't set maximum open files to 10032 because of OS error: Operation not permitted.
mstdn_redis | 1:M 12 Dec 2022 13:55:16.140 # Current maximum open files is 4096. maxclients has been reduced to 4064 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'.

I leave it up to you to look into increasing the max-open-files default of 4096; a sketch of one possible approach follows.
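One way to address this, a sketch using Docker Compose’s standard ‘ulimits‘ key in the ‘redis:’ section of ‘docker-compose.yml‘:

# ---
  redis:
    # ... keep the existing configuration for this service, and add:
    ulimits:
      nofile:
        soft: 10032
        hard: 10032
# ---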

Tuning and tweaking your new server

Run-time configuration

Now that your server is up and running, it is time to use the admin account (for which you received an initial password during setup) to personalize your server.
Mastodon has a documentation page on this process. The server admin has access to a set of menu items under “Settings > Administration“.

One menu item is “Relays“. This is where you would add one or more relays to speed up the process of Federation between your small instance and the rest of the Fediverse. See the section further down called “Connect your Mastodon instance to the Fediverse” for the details.

Spend some time in the “Server Settings” and “Server Rules” submenus and their tabs (such as “Branding“) to add information about your server that identifies it to visitors and users, and shows the “house rules” that clarify what you expect of people that want an account on your server.
Here is an example of how branding is used to present my site to visitors:

Command-line server management

The admin user has access to the “Admin CLI” which is the fancy name for the “tootctl” command in the mstdn_web container. You can find its man-page at https://docs.joinmastodon.org/admin/tootctl/ .
If you need to run the “tootctl” command, use “docker exec” to execute your command inside of the already running Web container which we gave the name ‘mstdn_web‘ in ‘docker-compose.yml‘:

# docker exec -it mstdn_web bin/tootctl <some_command_option>
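For instance, to create an additional local account from the command line (an illustrative invocation; check the tootctl documentation for the full syntax of the ‘accounts’ subcommand):

# docker exec -it mstdn_web bin/tootctl accounts create alice --email alice@darkstar.lan --confirmed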

Data retention

The tab “Content Retention” under “Server Settings” will be of importance to you, depending on the limitations of your storage capacity. It allows you to specify a maximum age for downloaded media, after which those files get purged from the local cache. The number of gigabytes in use can increase rapidly as the number of local users grows. Alternatively you can run the following Docker commands on the host’s commandline to delete parts of the cache immediately instead of waiting for the container’s own scheduled maintenance. In my case, you’ll see that the server is quite inactive (single-user instance) and there’s nothing to be removed:

# docker exec -it mstdn_web bin/tootctl preview_cards remove
0/0 |===========================================================| Time: 00:00:00
Removed 0 preview cards (approx. 0 Bytes)
# docker exec -it mstdn_web bin/tootctl media remove
0/0 |===========================================================| Time: 00:00:00
Removed 0 media attachments (approx. 0 Bytes) 
# docker exec -it mstdn_web bin/tootctl cache clear
OK

Full-text search

If you want to support full-text search of posts on your Mastodon server, you should un-comment the container definition for elasticsearch (the ‘es‘ service) in your ‘docker-compose.yml‘ file and run “docker-compose down ; docker-compose pull ; docker-compose up -d” to pull the elasticsearch container from Docker Hub. Note that this will tax your host with additional RAM, CPU and storage demands.
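Full-text search also needs to be switched on in ‘.env.production‘, and the search index needs an initial build. A sketch, using the variable names from the Mastodon documentation:

# Additions to .env.production:
ES_ENABLED=true
ES_HOST=es
ES_PORT=9200

Then build the initial index once the ‘es‘ container is up:

# docker exec -it mstdn_web bin/tootctl search deploy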

Reconfiguration

In case of future re-configuration you may want to skip the full configuration if you only want to setup or migrate the database, in which case you can invoke the database setup directly:

# docker-compose run --rm -v $(pwd)/.env.production:/opt/mastodon/.env.production web bundle exec rake db:setup

Growth

As your instance welcomes more users, you may have to scale up the service. The official Mastodon documentation has a page with considerations: https://docs.joinmastodon.org/admin/scaling/

Adding more concurrency is relatively easy. But when it comes to caching the data and media pulled in by your users’ activities, you may eventually run into the limits of your local server storage. If your server is that successful, consider setting up a support model using Patreon, PayPal or other means that will provide you with the funds to connect your Mastodon instance to Cloud-based storage. That way, storage needs won’t be limited by the dimensions of your local hardware but rather by the funds you collect from your users.
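For reference, object storage is configured through variables in ‘.env.production‘. A minimal sketch with variable names taken from the Mastodon documentation; all values are examples for an S3-compatible provider:

S3_ENABLED=true
S3_BUCKET=mastodon-media
S3_ENDPOINT=https://s3.example.com/
AWS_ACCESS_KEY_ID=examplekeyid
AWS_SECRET_ACCESS_KEY=examplesecretkey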

Remember, Mastodon is a federated network with a lot of server instances, but the Mastodon users will expect that their account is going to be available at all times. You will have to work out a model where you can give your users that kind of reassurance. Grow a team of server moderators and admins, promote your server, secure a means of funding which allows to operate your server for at least the next 3 to 6 months even when the flow of money stops. Create room for contingencies.
Mastodon does not show ads, and instead relies a lot on its users to keep the network afloat.

Connect your Mastodon instance to the Fediverse

A Mastodon server depends on its users to determine what information to pull from other servers in the Fediverse. If your users start following people on remote instances or subscribe to hashtags, your server instance will start federating, i.e. it will start retrieving this information, at the same time introducing your instance to remote instances.

If you are going to be running a small Mastodon instance with only a few users, getting connected to the wider Fediverse may be challenging and federation may be slow to get going. To accommodate this, the Mastodon network contains relay servers.
Adding one or more of these relays to your server configuration makes the relay push federated data to your server. A list of relay servers is available for instance here: https://techbriefly.com/2022/11/11/active-mastodon-relays/ and I am sure you can find more relays mentioned in other locations. Some relays require that their admin acknowledges and approves your request before the data push is activated.

Mastodon Single Sign On using Keycloak

Like with the other cloud services we have been deploying, our Mastodon server will be using our Keycloak-based Single Sign On solution using OpenID Connect. Only the admin user will have their own local account. Any Slackware Cloud Server user will have their account already setup in your Keycloak database. The first time they login to your Mastodon server, the account will be activated automatically.
It means that your server should disable the account registration page. You can configure that in “Server Settings > Registrations“.

Adding Mastodon client to Keycloak

Point your browser to the Keycloak Admin console https://sso.darkstar.lan/auth/admin/ to start the configuration process.

Add a ‘confidential’ openid-connect client in the ‘foundation‘ Keycloak realm (the realm where you created your users in the previous Episodes of this article series):

  • Select ‘foundation‘ realm; click on ‘Clients‘ and then click ‘Create‘ button.
    • Client ID‘ = “mastodon
    • Client Protocol‘ = “openid-connect” (the default)
    • Access type‘ = “confidential”
    • Save.
  • Also in ‘Settings‘, configure the URLs which Keycloak will accept for this app.
    Our Mastodon container is running on https://mastodon.darkstar.lan. We add:

    • ‘Valid Redirect URIs‘ = https://mastodon.darkstar.lan/auth/auth/openid_connect/callback
    • Base URL‘ = https://mastodon.darkstar.lan/
    • Web Origins‘ = https://mastodon.darkstar.lan
    • Save.
  • To obtain the secret for the “mastodon” Client ID, go to “Credentials > Client authenticator > Client ID and Secret
    • Copy the Secret (Q5PZA2xQcpDcdvGpxqViQIVgI6slm7xO)
  • Alternatively, to retrieve the secret, go to the ‘Installation‘ tab to download the ‘keycloak.json‘ file for this new client:
    • Format Option‘ = “Keycloak OIDC JSON”
    • Click ‘Download‘ which downloads a file “keycloak.json” with the following content:
# ---
{
    "realm": "foundation",
    "auth-server-url": "https://sso.darkstar.lan/auth",
    "ssl-required": "external",
    "resource": "mastodon",
    "credentials": {
     "secret": "Q5PZA2xQcpDcdvGpxqViQIVgI6slm7xO"
    },
    "confidential-port": 0
}
# ---

This secret is an example string of course, yours will be different. I will be re-using this value below. You will use your own generated value.

Adding OIDC configuration to Mastodon’s Docker definition

Bring the mastodon container stack down if it is currently running:
# cd /usr/local/docker-mastodon
# docker-compose down

Add the following set of definitions to the ‘.env.production‘ file:

# Enable OIDC:
OIDC_ENABLED=true
# Text to appear on the login button:
OIDC_DISPLAY_NAME=Keycloak SSO
# Where to find your Keycloak OIDC server:
OIDC_ISSUER=https://sso.darkstar.lan/auth/realms/foundation
# Use discovery to determine all OIDC endpoints:
OIDC_DISCOVERY=true
# Scope you want to obtain from OIDC server:
OIDC_SCOPE=openid,profile,email
# Field to be used for populating user's @alias:
OIDC_UID_FIELD=preferred_username
# Client ID you configured for Mastodon in Keycloak:
OIDC_CLIENT_ID=mastodon
# Secret of the Client ID you configured for Mastodon in Keycloak:
OIDC_CLIENT_SECRET=Q5PZA2xQcpDcdvGpxqViQIVgI6slm7xO
# Where OIDC server should come back after authentication:
OIDC_REDIRECT_URI=https://mastodon.darkstar.lan/auth/auth/openid_connect/callback
# Assume emails are verified by the OIDC server:
OIDC_SECURITY_ASSUME_EMAIL_IS_VERIFIED=true

Food for thought

Let’s dive into the meaning of this line which you just added:
>  OIDC_UID_FIELD=preferred_username
This “preferred_username” field translates to the Username property of a Keycloak account. The translation is made in the OpenID ‘Client Scope‘.
The configuration which we added to ‘.env.production‘ contains the line below, allowing any attribute in the scopes ‘openid‘, ‘profile‘ and ‘email‘ to be added to the ‘token claim‘ – which is the packet of data that is exchanged between Keycloak and its client, the Mastodon server.
> OIDC_SCOPE=openid,profile,email

To learn more about the available attributes, login to Keycloak as the admin user and select our “foundation” realm.
Via ‘Configure‘ > ‘Client Scopes‘ click on ‘profile‘ > ‘Mappers‘ > ‘username’, where you will see that the ‘username‘ property has a token claim name of ‘preferred_username‘. We use ‘preferred_username’ in the OIDC_UID_FIELD variable, which means that the actual user will see their familiar account name being used in Mastodon, just like in all the other Cloud services.

However, what if you don’t want to use regular user account names for your Mastodon? After all, Twitter usernames are your own choice, and as a Mastodon server admin you may want to offer the same freedom to your users.
In that case, consider using one of the other available attributes. There is for instance ‘nickname‘ which is also a User Attribute and therefore acceptable. It will not be a trivial exercise however: in Keycloak you must create a customized user page which allows the user to change not just their email or password, but also their nickname. For this, you will have to add ‘nickname’ as a mapped attribute to your realm’s user accounts first. And you have to ensure somehow that the nickname values are going to be unique. I have not researched how this should (or even could) be achieved. If any of you readers actually succeeds in doing this, I would be interested to know, leave a comment below please!

Start Mastodon with SSO

Start the mastodon container stack in the directory where we have our tailored ‘docker-compose.yml‘ file:

# cd /usr/local/docker-mastodon
# docker-compose up -d

And voila! We have an additional button in our login page, allowing you to login with “Keycloak SSO“.

Apache reverse proxy configuration

To make Mastodon available at https://mastodon.darkstar.lan/ we are using a reverse-proxy setup. This step can be done after the container stack is already up and running, but I prefer to configure Apache in advance of the Mastodon server start. You choose.

Add the following reverse proxy lines to your VirtualHost definition of the “mastodon.darkstar.lan” web site configuration and restart httpd:

# ---
# Set some headers:
Header always set Referrer-Policy "strict-origin-when-cross-origin"
Header always set Strict-Transport-Security "max-age=31536000"
RequestHeader set X-Forwarded-Proto "https"

# Reverse proxy to Mastodon Docker container stack:
SSLProxyEngine on
ProxyTimeout 900
ProxyVia On
ProxyRequests Off
ProxyPreserveHost On

<Proxy *>
    Options FollowSymLinks MultiViews
    AllowOverride All
    Order allow,deny
    allow from all
</Proxy>

<Location />
    ProxyPass        http://127.0.0.1:3333/ retry=0 timeout=30
    ProxyPassReverse http://127.0.0.1:3333/
</Location>
<Location /api/v1/streaming>
    ProxyPass        ws://127.0.0.1:4444/ retry=0 timeout=30
    ProxyPassReverse ws://127.0.0.1:4444/
</Location>
# ---

Attribution

When setting up my own server, I was helped by reading these pages:


Slackware Cloud Server Series Episode 6: Etherpad with Whiteboard

Hi all!
This is the 6th episode in a series I am writing about using Slackware as your private/personal ‘cloud server’. It is an unscheduled break-out topic to discuss an Etherpad server specifically.
Check out the list below, which shows past, present and future episodes in the series; if an article has already been written, you’ll be able to click on its subject.
The first episode also contains an introduction with some more detail about what you can expect.
These articles are living documents, i.e. based on readers’ feedback I may add, update or modify their content.

Etherpad with Whiteboard

In Episode 3 (Video Conferencing) we setup a Jitsi Meet server in a Docker container stack which includes an Etherpad server for real-time document collaboration during a video meeting.
That Etherpad instance as configured by the Docker-Jitsi-Meet project is really only a demo setup. It uses a “dirtydb” JSON backend which is not meant for anything else but testing. It really needs a proper SQL database like MariaDB to power it. And you can’t export your documents from this demo Etherpad in any meaningful format.
Furthermore, this Etherpad container is not using our Keycloak IAM for authentication; everyone who knows the public URL can create a document, invite others and start writing. Even shared documents created in Jitsi meetings are not secure and anyone who guesses the room name has access to the Etherpad document.

This article means to set things right and configure Etherpad correctly, adding Whiteboard functionality as we go. I will also discuss the differences between our Jitsi-integrated Etherpad and running a standalone Etherpad server, in case you are not interested in video meetings and only want the text collaboration.

Preamble

This article assumes you have already set up an Etherpad in a Docker container as part of a Dockerized Jitsi Meet server (see Episode 3 in this series), and this Etherpad is running at a publicly accessible URL:

  • https://meet.darkstar.lan/pad/

We want to make it delegate user authentication to our OpenID Provider: Keycloak. That Keycloak service is available at:

  • https://sso.darkstar.lan/auth

If you are not interested in Jitsi Meet and only want to know how to run an Etherpad server, this article still contains everything you need, but keep in mind that my examples all assume the above URL for the Etherpad. Adapt that URL to your own real-life situation. You may still have to set up an Apache webserver first, which serves an empty page at “https://meet.darkstar.lan/”, but I will leave that to you; a minimal sketch of a standalone Etherpad container follows below.
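For reference, a bare standalone Etherpad can be launched from the stock image on Docker Hub like this (a minimal sketch; the official image listens on port 9001 and you would point your webserver’s reverse proxy at it):

# docker run -d --name etherpad -p 127.0.0.1:9001:9001 etherpad/etherpad:latest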

Configuring MariaDB

By default, Etherpad will use a ‘DirtyDB’ JSON file-based backend. It is straightforward to make it switch to, for instance, a MariaDB database server backend; we only need to provide the connection details for a pre-existing database.
As in the previous articles, we are using the Slackware MariaDB database server which is running on the host. First, we will create a database (etherpad_db), a database user (etherpad) and grant this user sufficient access to the database. Then we will use these database configuration values when editing the Docker-Jitsi-Meet files in order to change the Etherpad container properties.
This is how we create the database and the user (using a secure password string instead of ‘EPPASSWD‘ of course):

$ mysql -uroot -p
> CREATE DATABASE IF NOT EXISTS `etherpad_db` CHARACTER SET utf8 COLLATE utf8_unicode_ci;
> CREATE USER 'etherpad'@'localhost' identified by 'EPPASSWD';
> CREATE USER 'etherpad'@'%' identified by 'EPPASSWD';
> GRANT CREATE,ALTER,SELECT,INSERT,UPDATE,DELETE on `etherpad_db`.* to 'etherpad'@'localhost';
> GRANT CREATE,ALTER,SELECT,INSERT,UPDATE,DELETE on `etherpad_db`.* to 'etherpad'@'%';
> FLUSH PRIVILEGES;
> exit;

Note from the above SQL statements that we are allowing the ‘etherpad‘ user remote access to the database. This is needed because Etherpad in the Docker container contacts MariaDB via the network, using the IP address of the Docker network bridge in the Jitsi Meet container stack.
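Once the ‘meet.jitsi‘ network has been pinned to the “172.20.0.0/16” subnet as described below, you can verify the grants from the host with a standard mysql client invocation (assuming your MariaDB listens on the Docker bridge address):

$ mysql -h 172.20.0.1 -P 3306 -u etherpad -p etherpad_db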

Reconfiguring Docker-Jitsi-Meet

My advice is to start by briefly re-visiting Episode 3 of the series and reading back how we customized the ‘docker-compose.yml‘ and ‘.env‘ files in order to start up the Docker-Jitsi-Meet stack properly, because we are going to update these two files again.
This is what we need to change to make Etherpad connect to the external MariaDB database:

Relevant .env additions:

# MariaDB parameters for mysql DB instead of dirtydb
ETHERPAD_DB_TYPE=mysql
ETHERPAD_DB_HOST=172.20.0.1
ETHERPAD_DB_PORT=3306
ETHERPAD_DB_NAME=etherpad_db
ETHERPAD_DB_USER=etherpad
ETHERPAD_DB_PASS=EPPASSWD
ETHERPAD_DB_CHARSET=utf8

Relevant docker-compose additions:

In the ‘.env‘ file we defined the IP address for the database server (172.20.0.1). Etherpad is running inside a container, and its way out is through the default gateway of its Docker network. In order to have 172.20.0.1 as the gateway address, we need to configure the internal ‘meet.jitsi‘ network with a deterministic IP range, so that we always know its gateway address. If we are going to give that network the IP range “172.20.0.0/16“, the “networks” statement all the way at the bottom needs to be changed from:

# Custom network so all services can communicate using a FQDN
networks:
  meet.jitsi:

to:

# Custom network so all services can communicate using a FQDN 
networks: 
  meet.jitsi: 
    ipam: 
      config: 
        - subnet: 172.20.0.0/16

Use the variables we added to ‘.env’ to create an updated Etherpad container definition. Right underneath this line:

            - SKIN_VARIANTS=${ETHERPAD_SKIN_VARIANTS}

Add the following lines:

            - DB_TYPE=${ETHERPAD_DB_TYPE} 
            - DB_HOST=${ETHERPAD_DB_HOST} 
            - DB_PORT=${ETHERPAD_DB_PORT} 
            - DB_NAME=${ETHERPAD_DB_NAME} 
            - DB_USER=${ETHERPAD_DB_USER} 
            - DB_PASS=${ETHERPAD_DB_PASS} 
            - DB_CHARSET=${ETHERPAD_DB_CHARSET} 

Accessing the admin console

Etherpad has an admin console where you can manage its plugin configuration among other things. It will only be enabled if you configure an admin password, so let’s do that too.
This is what we need to change to enable the admin console for Etherpad:

Relevant .env additions:

# The password for Etherpad admin page
ETHERPAD_ADMIN_PASSWORD="my_secret_admin_pass"

Relevant docker-compose additions:

Right underneath this line:

            - SKIN_VARIANTS=${ETHERPAD_SKIN_VARIANTS}

Add the following lines:

            - ADMIN_PASSWORD=${ETHERPAD_ADMIN_PASSWORD}

Relevant Apache httpd additions:

If you would now access the URL for the admin console, https://meet.darkstar.lan/pad/admin/ , you would only see the message “Unauthorized“. Etherpad expects you to provide a Basic Authentication hook in front of that page which passes the admin credentials on to the backend. So, we will add an ‘AuthType Basic‘ block to our Apache httpd configuration, which will pop up a login dialog, and then add the admin user and its password “my_secret_admin_pass” to an htaccess file.

Remember, in Episode 3 we configured the Etherpad to be available at “https://meet.darkstar.lan/pad/”, which means the admin console URL is “https://meet.darkstar.lan/pad/admin/“.
This is the block to add to your VirtualHost configuration for the Etherpad:

<Location /pad/admin>
    AuthType Basic
    AuthBasicAuthoritative off
    AuthName "Welcome to the Etherpad"
    AuthUserFile /etc/httpd/passwords/htaccess.epl
    Require valid-user
    Order Deny,Allow
    Deny from all
    Satisfy Any
</Location>

And then we still need to create that htaccess file using the ‘htpasswd‘ tool, like this:

# mkdir /etc/httpd/passwords
# htpasswd -B -c /etc/httpd/passwords/htaccess.epl admin

The “-B” parameter enforces the use of bcrypt hashing for the passwords, which is currently considered to be very secure.
The above command will prompt for the password, and there you enter that “my_secret_admin_pass” string. The content of that file will look like this:

# cat /etc/httpd/passwords/htaccess.epl
admin:$2y$05$JpvucTlKIQEJViCynem.JelENHpv/maJStsPM4iU9d/sg4cMU.UfW

The Docker Jitsi Meet container stack needs to be refreshed and restarted since we edited ‘.env’. I don’t want to repeat the detailed instructions here, so I refer you to the section “Considerations about the “.env” file” in Episode 3 of this article series. Do that now, and when the updated container stack is up and running again, continue here.

After also restarting Apache httpd and refreshing the URL “https://meet.darkstar.lan/pad/admin/”, you will be asked to enter your admin credentials and you will end up in the Etherpad admin console.
The screenshot below does not reflect the status of the barebones Etherpad by the way; you see a lot of installed plugins mentioned on the admin page. We will be installing those into the Etherpad image in one of the next sections:

At this stage, we have achieved a well-performing Etherpad installation with a SQL database back-end and an administrative web interface. The next step is to add authentication through an OpenID provider like our Keycloak IAM server.

Integrating with Keycloak IAM

The out-of-the-box Etherpad Docker container is not very functional. The above sections already showed how to replace the “DirtyDB” with a proper SQL database server like MariaDB. But the default image misses a few useful plugins, as well as a word processor back-end which allows Etherpad users to export their collaborative work to a proper document format instead of pure HTML.

Whatever plugins we add, at the very least we need to add a plugin which allows us to let the Etherpad authenticate against our Keycloak IAM server. This plugin needs to be inside the Docker image; we cannot add it from outside the running container. There’s no other option than to create a custom Docker image for Etherpad. We use this as an opportunity to add some more plugins, as well as Abiword (to enable document export in Etherpad).

I’ll show how to announce Etherpad to Keycloak (we create a Client profile in Keycloak); then I’ll share the required configuration to be added to the Etherpad Docker files; and then I’ll show how to create a custom Docker image enriched with additional plugins which we will use instead of the basic image from Docker Hub.

Keycloak

First of all, let’s create a Client profile for Etherpad in Keycloak.

  • Login to the Keycloak Admin Console (https://sso.darkstar.lan/auth/admin/)
    • Select our ‘Foundation‘ realm from the dropdown at the left.
    • Under ‘Clients‘, create a new client:
      ‘Client ID’ = “etherpad
      ‘Root URL’ = “https://meet.darkstar.lan/pad/ep_openid_connect/callback
      Note that for the Etherpad OIDC plugin ‘ep_openid_connect’  – see below – to work, the ‘Valid Redirect URIs’  (a.k.a. callback URL) must be the concatenation of the Etherpad base URL (https://meet.darkstar.lan/pad/) plus “/ep_openid_connect/callback“. When setting the ‘Root URL‘ to the above value, the ‘Redirect URIs‘ will automatically also be set correctly to “https://meet.darkstar.lan/pad/ep_openid_connect/callback/*
    • Save.
    • In the ‘Settings‘ tab, change:
      ‘Access Type’ = “confidential” (default is “public”)
    • Save.
    • Go to the ‘Credentials‘ tab
    • Make sure that ‘Client Authenticator‘ is set to “Client Id and Secret
    • Copy the value of the ‘Secret‘, which we will use later in the Etherpad connector; the Secret will look somewhat like this:
      2jnc8H6RH9jIYMXExUHA7XF7uD8YKIRs“.

Keycloak configuration being completed, we can turn our attention to the connector between Etherpad and Keycloak.

EP_openid_connect

We will use the Etherpad plugin ep_openid_connect which I already briefly mentioned earlier. This plugin provides the needed OpenID client functionality to Etherpad.
When we add this plugin to the Etherpad Docker image we need to be able to configure it via the ‘docker-compose.yml‘ and ‘.env‘ files of Docker-Jitsi-Meet. The existing configuration files in the repository for the docker-jitsi-meet stack are just meant to make the basic Etherpad work, so we need to add more parameters to configure our custom Etherpad properly.
I will show you what you need to add, and where.

The ‘ep_openid_connect’ plugin expects an ‘ep_openid_connect’ block in the ‘settings.json’ file (we will get to that file in the next section). Since that file is JSON-formatted, we arrive at the following structure:

"ep_openid_connect": {
"issuer": "https://sso.darkstar.lan/auth/realms/foundation",
"client_id": "etherpad",
"client_secret": "2jnc8H6RH9jIYMXExUHA7XF7uD8YKIRs",
"base_url": "https://meet.darkstar.lan/pad"
},

The string values are of course the ones you need to adapt to your own setup. What is the meaning of these parameters:

  • issuer: this is the string you obtain through Keycloak’s OpenID Discovery URL. Make sure you have ‘jq‘ installed and then run this command to obtain the value for ‘issuer‘:
    $ curl https://sso.darkstar.lan/auth/realms/foundation/.well-known/openid-configuration | jq .issuer
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100  5749  100  5749    0     0   6594      0 --:--:-- --:--:-- --:--:--  6600
    "https://sso.darkstar.lan/auth/realms/foundation"

    In this case, you could easily have guessed the ‘issuer’ value, but using the above ‘well-known’ query URL will always get you the correct value.

  • client_id, client_secret: those are the same OAuth2 values obtained from Keycloak when creating the Etherpad Client profile as seen above.
  • base_url: this is the URL where Etherpad is externally accessible (https://meet.darkstar.lan/pad/ – see Episode 3).

Additionally, since we are now enforcing login, Etherpad’s ‘requireAuthentication‘ setting must be set to “true”. Note that the default setting is “false”; this is how that setting is defined in the Etherpad configuration:

"requireAuthentication": "${REQUIRE_AUTHENTICATION:false}",

We’ll just have to define a “true” value for that variable; a sketch of how that could look follows below.
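Following the same pattern as the other Etherpad variables, that could look like this; a sketch in which the ‘ETHERPAD_REQUIRE_AUTHENTICATION‘ variable name on the ‘.env‘ side is my own choice:

Relevant ‘.env‘ addition:

ETHERPAD_REQUIRE_AUTHENTICATION=true

Relevant ‘docker-compose.yml‘ addition, under the etherpad ‘environment:‘ section:

            - REQUIRE_AUTHENTICATION=${ETHERPAD_REQUIRE_AUTHENTICATION}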

Note: Each configuration parameter can also be set via an environment variable, using the syntax "${ENV_VAR}" or "${ENV_VAR:default_value}". This ability is what we will use when updating the Docker Compose file for Jitsi Meet. We will not use the literal JSON block above, instead we will fill it with variable names and use our Docker Compose files to provide values for these variables. That way I am able to create a generic Docker image that I can upload to the Docker Hub and share with other people.
The file ‘settings.json.template‘ in the Etherpad repository has lots of examples.

Hold on to that thought for a minute while we proceed with creating our custom Etherpad Docker image, since we have all the data available to do this now. Once we have that image, we will once again return to the re-configuration of Docker Jitsi Meet and integrate our Etherpad with the Jitsi container stack.

Custom Etherpad Docker image

How to create a custom Docker image?

  • First we clone the “etherpad-lite” git repository. That contains a Dockerfile plus all the context that is needed by ‘docker build‘ to generate an image.
    $ mkdir ~/docker-etherpad-slack
    $ cd ~/docker-etherpad-slack
    $ git clone https://github.com/ether/etherpad-lite .
  • There is one relevant configuration file in the root directory of the checked-out repository: ‘settings.json.docker‘. This file will be copied into the Etherpad Docker image and renamed to ‘settings.json‘ when we run the ‘docker build‘ command. Any plugin configuration we want to enable via environment variables needs to be present in this file.
    Now the standard configurable parameters for the Etherpad are contained in that file, but our custom settings for the “ep_openid_connect” plugin are not. I already showed you how that block of configurable parameters looks in the previous section, and I promised to parametrize it. This is how the parameters look, and we will give them values in the next section where we update the Docker Jitsi Meet configuration.
  • Relevant ‘settings.json.docker’ additions:
    • Support for OpenID Connect in Etherpad – add this JSON code:
      "ep_openid_connect": {
      "issuer": "${OIDC_ISSUER:undefined}",
      "client_id": "${OIDC_CLIENT_ID:undefined}",
      "client_secret": "${OIDC_CLIENT_SECRET:undefined}",
      "base_url": "${OIDC_BASE_URL:undefined}"
      },
    • We also add a connector for the WBO Whiteboard server (its setup is described in the next section below) to the Docker image: the plugin is called ‘ep_whiteboard‘ and needs the following JSON configuration block to be added:
      "ep_draw": {
      "host": "${WBO_HOST:undefined}"
      },
    • Enable AbiWord in the configuration, since we are going to add it to the image. The full path to the ‘abiword‘ binary needs to be configured in ‘settings.json.docker‘.
      Look up this line in the file:
      "abiword": "${ABIWORD:null}",
      and change it to:
      "abiword": "${ABIWORD:/usr/bin/abiword}",
  • Then we build a new image, adding several useful (according to the developers) plugins, as well as the Abiword word processor.
    I tag the resulting image as “liveslak/etherpad” so that I can upload (push) it to the Docker Hub later on:

    $ docker build \
    --build-arg ETHERPAD_PLUGINS="ep_openid_connect ep_whiteboard ep_author_neat ep_headings2 ep_markdown ep_comments_page ep_align ep_font_color ep_webrtc ep_embedded_hyperlinks2" \
    --build-arg INSTALL_ABIWORD="yes" \
    --tag liveslak/etherpad .

This leads to the following output and results in an image which is quite a bit larger (786 MB uncompressed) than the standard Etherpad image (474 MB uncompressed), because of the added functionality:

Step 24/26 : HEALTHCHECK --interval=20s --timeout=3s CMD ["etherpad-healthcheck"]
---> Running in 2c241a795e46
Removing intermediate container 2c241a795e46
---> 5ca0246c1e61
Step 25/26 : EXPOSE 9001
---> Running in 21fcdf511d46
Removing intermediate container 21fcdf511d46
---> 1c288502f632
Step 26/26 : CMD ["etherpad"]
---> Running in 08a25b585280
Removing intermediate container 08a25b585280
---> 912c54fb6c0a
Successfully built 912c54fb6c0a
Successfully tagged liveslak/etherpad:latest

If you create the image on another computer and need to transfer it to your Slackware Cloud Server in order to use it there, you can save the image to a compressed tarball on the build machine, using docker commands:
$ docker save liveslak/etherpad | xz > etherpad-slack.tar.xz

You can use ‘rsync’ or ‘scp’ to transfer that tarball to your Cloud Server and then load it into the Docker environment there, also using docker commands so that you don’t need to know the intimate details on how Docker works with images:
$ cat etherpad-slack.tar.xz | docker load

I pushed this image to my own Docker repository https://hub.docker.com/repository/docker/liveslak/etherpad but I first added a tag to reflect the latest Etherpad release (1.8.16 at the moment):

$ docker login
$ docker tag liveslak/etherpad liveslak/etherpad:1.8.16
$ docker push liveslak/etherpad:1.8.16
$ docker push liveslak/etherpad:latest

This means that you can use the Hub version of ‘liveslak/etherpad’, but you can just as well use your own locally generated etherpad image in the ‘docker run‘ commands that launch your Etherpad container.
When a local image called “liveslak/etherpad” exists, Docker will not check for an online image with that name. If you did not generate your own image, Docker will look for (and find) my image at the Hub (or at the private Registry you may have configured), and will download and use that.
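If you ever want to force Docker to fetch the Hub version even though a local image exists, you can pull the tag explicitly; for example, using the tag from above:

$ docker pull liveslak/etherpad:1.8.16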

Setting up WBO Whiteboard

Etherpad will be even more attractive if it offers users a collaborative Whiteboard and not just a collaborative text editor.
Enter WBO, which is an actual drawing board with infinite canvas and real-time refresh for all users.
Its boards are persistent; if you re-visit a board later on, all your content will still be there. Look at the WBO demo site… amazing.

We will run WBO in its own Docker container and re-configure our Etherpad webserver with a reverse proxy so that WBO can be integrated into Etherpad through the ‘ep_whiteboard’ connector.
It’s not so complex actually.

Docker container

First, launch a Docker container running WBO. We ensure that the data of the whiteboards you will be creating are going to be stored persistently outside of the container, so let’s create that data directory first and ensure that the internal WBO user is able to write there (you may have a different preference for directory location):

# mkdir -p /opt/dockerfiles/wbo-boards
# chown 1000:10 /opt/dockerfiles/wbo-boards

Then launch the container as a background process, and make it listen at port “5001” of your host’s loopback address:

$  docker run -d -p localhost:5001:80 -v "/opt/dockerfiles/wbo-boards:/opt/app/server-data" --restart unless-stopped --name whiteboard lovasoa/wbo:latest
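Before moving on, you can check that the container is up and answering on the loopback port; a quick test (assuming ‘curl’ is installed, which it is on a full Slackware installation):

$ docker ps --filter name=whiteboard
$ curl -I http://127.0.0.1:5001/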

Reverse proxy

We make WBO available behind an apache httpd reverse proxy which takes care of the encryption (https) using a Let’s Encrypt certificate.

Add the following block to your <VirtualHost></VirtualHost> definition of the server which also defines the reverse proxy for your Etherpad (which is https://meet.darkstar.lan/pad/ remember?):

# Reverse proxy for the WBO whiteboard Docker container:
<Location /whitepad/>
    ProxyPass http://127.0.0.1:5001/
    ProxyPassReverse http://127.0.0.1:5001/
</Location>
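To verify the syntax of the modified configuration and then apply it, a sketch assuming Slackware’s standard httpd rc script:

# apachectl configtest
# /etc/rc.d/rc.httpd restart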

After restarting Apache httpd, your WBO whiteboard will be accessible via https://meet.darkstar.lan/whitepad/ . We will use the host and path part of that URL (“meet.darkstar.lan/whitepad”) down below as the value for the ETHERPAD_WBO_HOST variable. Etherpad will prefix that text with “https://” and that prefix cannot be changed… hence the requirement for a reverse proxy that can handle the data encryption.

One caveat when you do this on your real-life internet-facing cloud server…
The Whiteboard server is accessible without authentication. It may therefore be advisable to come up with a different path component than “/whitepad/“; think of something like a UUID-style string, e.g. “/8cd77cbe-a694-4390-800a-638c7cc05f49/“, as long as you use the same string in both places (the reverse proxy and the ETHERPAD_WBO_HOST definition). Also, your board names are not visible anywhere unless you share their URLs with other people. So, a relatively safe environment.
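If you want to generate such a random string, the ‘uuidgen’ tool (part of util-linux, so present on any Slackware installation) will do the job; its output is a random string in exactly the format of the example above:

$ uuidgen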

Using the custom Etherpad with Jitsi Meet

If you followed Episode 3, you will have a directory “/usr/local/docker-jitsi-meet-stable-6826“, your version number may differ from my “6826“. Inside you will have your modified ‘docker-compose.yml‘ file.

We are going to edit two files: ‘.env‘ and ‘docker-compose.yml‘.

  • Relevant ‘.env’ additions:
    In the ‘.env‘ file we define correct values for the variables we introduced earlier. You can add the following lines basically anywhere, but it is of course most readable if you copy them immediately after the other ETHERPAD_* variables you added earlier on for the MySQL database backend:
    ETHERPAD_OIDC_ISSUER="https://sso.darkstar.lan/auth/realms/foundation"
    ETHERPAD_OIDC_BASE_URL="https://meet.darkstar.lan/pad/"
    ETHERPAD_OIDC_CLIENT_ID="etherpad"
    ETHERPAD_OIDC_CLIENT_SECRET="2jnc8H6RH9jIYMXExUHA7XF7uD8YKIRs"
    ETHERPAD_REQUIRE_AUTHENTICATION="true"
    ETHERPAD_WBO_HOST="meet.darkstar.lan/whitepad"
  • Relevant ‘docker-compose.yml’ additions:
    Add the following lines to the “etherpad:” section immediately below the MySQL database variable definitions you added earlier on in this Episode. You will recognize the variable names we defined in the previous section when dealing with ‘ep_openid_connect‘:
    - OIDC_ISSUER=${ETHERPAD_OIDC_ISSUER}
    - OIDC_CLIENT_ID=${ETHERPAD_OIDC_CLIENT_ID}
    - OIDC_CLIENT_SECRET=${ETHERPAD_OIDC_CLIENT_SECRET}
    - OIDC_BASE_URL=${ETHERPAD_OIDC_BASE_URL}
    - REQUIRE_AUTHENTICATION=${ETHERPAD_REQUIRE_AUTHENTICATION}
    - WBO_HOST=${ETHERPAD_WBO_HOST}
  • More ‘docker-compose.yml’ updates:
    The “etherpad:” service definition in that YAML file contains the following reference to the Etherpad Docker image:

image: etherpad/etherpad:1.8.16

You need to change that line to:

image: liveslak/etherpad:1.8.16

…in order to use our custom Etherpad image instead of the default one.

The re-configuration is complete, and since we modified the ‘.env‘ file again, we need to refresh and restart our Docker Jitsi Meet container stack.
Note however that at this point we have to perform this restart differently than mentioned earlier in this article. Since we are switching to a new Etherpad image, the container based on the old image needs to be removed as well. For this scenario, please consult the detailed instructions in the sections “Considerations about the “.env” file” and “Upgrading Docker-Jitsi-Meet” in Episode 3 of this article series.
The complete set of steps to follow is a mix of both sections, and I share it here for completeness’ sake:

# cd /usr/local/docker-jitsi-meet-stable-*
# docker-compose down
# rm -rf /usr/share/docker/data/jitsi-meet-cfg/
# mkdir -p /usr/share/docker/data/jitsi-meet-cfg/{web/letsencrypt,transcripts,prosody/config,prosody/prosody-plugins-custom,jicofo,jvb,jigasi,jibri}
# docker-compose pull
# docker-compose up -d
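To check that the stack came up again and that the etherpad container is using the new image, a quick sanity check (run from the same docker-jitsi-meet directory):

# docker-compose ps
# docker-compose logs --tail=50 etherpad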

Don’t forget to remove the old, now unused, Etherpad image, because it is wasting 474 MB of uncompressed disk space.
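A sketch of how to do that, assuming the old image was the stock ‘etherpad/etherpad:1.8.16‘:

# docker images | grep etherpad
# docker image rm etherpad/etherpad:1.8.16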

Summarizing

Even though we set it up as part of the Jitsi stack, we now have a standalone Etherpad running which requires you to log in when you visit “https://meet.darkstar.lan/pad/”.
On the other hand, you can also access Etherpad via Jitsi Meet. What’s different?
When you start a Jitsi meeting, via “https://meet.darkstar.lan/” and then click on “Open shared document“, you are already authenticated against Keycloak and the Etherpad document will open for you right away, no second login required.

After login, you will be met with a much more powerful editor than the basic one that comes with Docker Jitsi Meet. You’ll notice the extended document export capabilities thanks to AbiWord, and the small video widget at the top for face-to-face communication thanks to the WebRTC plugin.

Happy collaborating!

Running the custom Etherpad standalone

If you are not interested in Jitsi Meet, this is the command to start the customized Etherpad container and make it listen at port 9001 of the loopback address:

# docker run -d -p 127.0.0.1:9001:9001 liveslak/etherpad

The Etherpad container is now accessible only on your computer by pointing your browser at http://localhost:9001/ . You still need to add an Apache reverse proxy definition to the VirtualHost site definition to make your Etherpad available for other users at https://meet.darkstar.lan/pad/ .
If you want to change the container’s behavior using the available variables as documented before, you can pass these to the ‘docker run‘ command using one or more “-e” parameters, like so (this example just enables the admin console):

# docker run -d -p 127.0.0.1:9001:9001 \
  -e ADMIN_PASSWORD="my_secret_admin_pass" \
  --name etherpad \
  liveslak/etherpad

With additional environment variables you can enable more of the latent functionality. See the earlier sections of this article for all the relevant variables: those that enable the MySQL database backend; the one that enables the Whiteboard; those that enable the Keycloak authentication, etc.
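For instance, here is a sketch that combines the OIDC and Whiteboard variables from earlier in this article into a single standalone command (the values shown are the example values used throughout; substitute your own):

# docker run -d -p 127.0.0.1:9001:9001 \
  -e REQUIRE_AUTHENTICATION="true" \
  -e OIDC_ISSUER="https://sso.darkstar.lan/auth/realms/foundation" \
  -e OIDC_CLIENT_ID="etherpad" \
  -e OIDC_CLIENT_SECRET="2jnc8H6RH9jIYMXExUHA7XF7uD8YKIRs" \
  -e OIDC_BASE_URL="https://meet.darkstar.lan/pad/" \
  -e WBO_HOST="meet.darkstar.lan/whitepad" \
  --name etherpad \
  liveslak/etherpad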

Thanks

Etherpad with an integrated Whiteboard can be a compelling solution for some user groups. Even without Jitsi Meet, you can jointly write and draw, save your work to your local hard drive, and you have voice & video in a small overlay if you need to discuss the proceedings.
I encourage you to try it out, with or without integration into Jitsi Meet, or even without Keycloak authentication if you want to offer this as a completely free and low-threshold service to your local community.

Let me know what you think of this Episode in the comments section below. The final Episode, how to set up your own private Docker image repository, will take some time to write… I have not yet started doing in-depth research on that topic. But the six available Episodes will hopefully keep you occupied for a while 🙂
Thanks for reading until the end.

Eric

Slackware Cloud Server Series, Episode 5: Collaborative Document Editing

Hi all!
A spin-off from our previous Episode in this series is this fifth article about using Slackware as your private/personal ‘cloud server’.
Check out the list below which shows past, present and future episodes in the series, if the article has already been written you’ll be able to click on the subject.
The first episode also contains an introduction with some more detail about what you can expect.
These articles are living documents, i.e. based on readers’ feedback I may add, update or modify their content.

Collaborative document editing

In the previous Episode, called “Productivity Platform”, I showed you how to set up the NextCloud platform on your Slackware server. I had promised a separate article about the addition of “collaborative editing”, and this is it.

Collaborative editing on documents allows people from all over the world to have the exact same document (text, spreadsheet, presentation, vector graphics) open in a web-based online editor and collaborate on its content in real-time. Every editor can see what the others are currently working on.
The most widely used online office suite with these capabilities is Microsoft Office 365.

Of course, Microsoft 365 is not free. It uses a license model where you pay for its use per month. If you stop paying… you lose access to your online office suite and if you are unlucky, you lose access to your OneDrive files as well.

How is the situation in the Open Source world? Looking for free and open (source as well as standards-adhering!) desktop office programs, many people acknowledge that LibreOffice is an important OSOSS (Open Standards & Open Source Software) alternative to Microsoft’s Office line of programs. However… a cloud-native, web-based variant of the LibreOffice suite is not trivially accessible. By nature, online software needs to be hosted somewhere, and by extension the documents you edit online need to be maintained on cloud storage as well… a challenge for free software enthusiasts when the truth is that hosting costs money. The commercially successful Microsoft Office 365 has the dominant position there.

Now, this article is going to free you (a bit) from Big Tech. I will show you how to enrich your personal Slackware Cloud Server with exactly what seems unattainable for free software lovers: a web-based online version of LibreOffice which makes it possible for you and yours to (jointly if you want) edit the documents that you are already hosting in your NextCloud accounts.

This will be achieved by using the power of LibreOffice. A web-based version of LibreOffice exists and is called Collabora Online. It is being developed by Collabora Productivity, a subsidiary of the company Collabora, which has been a main contributor of LibreOffice code for many years and has a commercial partnership with The Document Foundation.
Formerly known under the name LibreOffice Online, Collabora moved the project development from LibreOffice’s git repository to their own git repository and renamed it.
The Collabora Online Development Edition (CODE) is the version offered for free to the Open Source community (similar to the LibreOffice Community Edition) but Collabora also offer a commercial paid-for version of their Online suite. Companies using the commercial variant basically sponsor Collabora for the development of the Open Source version.

This article focuses on the integration of CODE into your NextCloud environment.

Preamble

For the sake of this article, I am using an ‘artificial’ non-existing domain name “darkstar.lan”.
I expect that your own real-life Slackware server is running Nextcloud at a publicly accessible URL, but in this article it will be:

  • https://darkstar.lan/nextcloud/

The Collabora Online Development Edition will run at its own hostname in the “darkstar.lan” domain. It is of course a virtual server running in Apache httpd, and you probably defined it in a “<VirtualHost></VirtualHost>” block. This article assumes that we are going to host CODE on a bare webserver which is already running at:

  • https://code.darkstar.lan/

Collabora Online Development Edition (CODE)

CODE is not a standalone application. It is a document-editor backend service without user interface and it communicates with your NextCloud platform via WOPI (the ‘Web Application Open Platform Interface’ protocol). It expects NextCloud to provide the user interface (a web page).
CODE is a ‘WOPI Client‘ with an API that offers editing capabilities for the documents that you host on your NextCloud (the ‘WOPI Host‘). The online editor is displayed in an embedded HTML frame right inside your familiar NextCloud environment.

The steps that we are going to take in order to integrate CODE into Nextcloud are:

  • Get a Docker container running with CODE inside and listening at 127.0.0.1:9980 for connection requests.
  • Configure Apache httpd with a reverse proxy so that this localhost port is hidden behind an externally accessible secure URL:  https://code.darkstar.lan .
  • Add the ‘richdocuments‘ app to your NextCloud server and connect it to our CODE container.
  • Tell your users!

First step: download the Collabora Online Docker image. The ‘docker run‘ command which we are going to execute in a moment would also perform that download if necessary, but it does not hurt to already make the image available locally ahead of time, so that any startup issues of the container are not clouded by connectivity issues toward the Docker Hub:

# docker pull collabora/code

The ‘docker run‘ commandline which starts the CODE container is a complex one, so let’s briefly look at its most important parameters.
As stated in the ‘preamble’, our Collabora Online Development Edition will run at hostname “code.darkstar.lan” and you need to tell it explicitly which other hosts are permitted to embed it in a web page. For us this means that our NextCloud server must be allowed to embed content originating from the CODE container when you open a document for editing. Our NextCloud hostname is passed to the container with the “aliasgroup1” environment variable.
In case you want to permit the use of your CODE Docker container by more than one Nextcloud server (for instance, you want to allow a friend who is also running a NextCloud server to use your CODE server as their collaborative editor), add more hostnames using multiple ‘aliasgroupX‘ environment variables like this:

-e 'aliasgroup1=https://darkstar.lan/nextcloud' -e 'aliasgroup2=https://second.nextcloud.com'

The command to start the container is as follows:

# docker run -t -d -p 127.0.0.1:9980:9980 \
  -e "aliasgroup1=https://darkstar.lan/nextcloud" \
  -e "server_name=code.darkstar.lan" \
  -e "username=codeadmin" -e "password=My_Secret_Pwd" \
  -e "DONT_GEN_SSL_CERT" \
  -e "extra_params=--o:ssl.enable=false --o:ssl.termination=true --o:net.proto=IPv4 --o:net.post_allow.host[0]=.+ --o:storage.wopi.host[0]=.+" \
  --restart always --cap-add MKNOD collabora/code

Optionally, run with additional dictionaries by adding for example:

-e 'dictionaries=nl de en fr es ..'

Through the “-p” parameter we expose the CODE container’s address at 127.0.0.1:9980 .

You see several parameters (-e "DONT_GEN_SSL_CERT" -e "extra_params=--o:ssl.enable=false --o:ssl.termination=true") that are meant to disable the container’s use of SSL encryption when communicating with the outside world. The Apache reverse proxy which we will place in front of it takes care of client-to-server encryption using a Let’s Encrypt certificate instead of the container’s self-signed certificate.

There are also a couple of parameters (-e "extra_params= --o:net.proto=IPv4 --o:net.post_allow.host[0]=.+ --o:storage.wopi.host[0]=.+") which allow the Collabora Online server to be used by hosts in your network – like your NextCloud instance. Note the use of wildcard “.+” in both the “net.post_allow.host” and “storage.wopi.host” options. I have been unsuccessful in determining a more restricted wildcard that is a better match with my network IP ranges. In the end I went for what I could find in online help forums as something which works.

The “MKNOD” capability is needed for CODE to do its work, apparently it needs to be able to create special device nodes in the filesystem using the ‘mknod‘ command. Note that according to the documentation, this makes the CODE image incompatible with Docker Swarm, but Swarm is not within the scope of my article series anyway.

If your ‘docker run‘ command-line contains a “username” and “password” parameter, then Collabora Online will enable its admin console (only accessible with the above username/password) at: https://code.darkstar.lan/browser/dist/admin/admin.html . At that URL you will find some statistics of the operations performed by the online editor.
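Before configuring the reverse proxy, you can already verify that the container answers on the loopback address; a quick smoke test of the WOPI discovery endpoint (‘curl’ assumed installed):

$ curl -s http://127.0.0.1:9980/hosting/discovery | head -n 5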

Configuring Apache

Note that in many places on the Internet, the documentation for the reverse proxy still mentions the old LibreOffice Online “lool” and “loolwsd” phrases instead of using the new Collabora Online phrases “cool” and “coolwsd“. Luckily, Collabora’s own documentation is correct: https://sdk.collaboraonline.com/docs/installation/Proxy_settings.html. So here is what I am using on my CODE server, running on IP address:port “127.0.0.1:9980” and reverse-proxied to https://code.darkstar.lan/ .
I assume you know where to add it to your Apache httpd VirtualHost definition:

# START Reverse proxy to the CODE Docker container:
#
AllowEncodedSlashes NoDecode
ProxyPreserveHost On
ProxyTimeout 900

# Cert is issued for code.darkstar.lan and we proxy to localhost:
SSLProxyEngine On
RequestHeader set X-Forwarded-Proto "https"

# Static html, js, images, etc. served from coolwsd;
# And 'browser' is the client part of Collabora Online:
ProxyPass /browser http://127.0.0.1:9980/browser retry=0
ProxyPassReverse /browser http://127.0.0.1:9980/browser

# WOPI discovery URL:
ProxyPass /hosting/discovery http://127.0.0.1:9980/hosting/discovery retry=0
ProxyPassReverse /hosting/discovery http://127.0.0.1:9980/hosting/discovery

# Capabilities:
ProxyPass /hosting/capabilities http://127.0.0.1:9980/hosting/capabilities retry=0
ProxyPassReverse /hosting/capabilities http://127.0.0.1:9980/hosting/capabilities

# Do not forget WebSocket proxy:
RewriteEngine on
RewriteCond %{HTTP:Connection} Upgrade [NC]
RewriteCond %{HTTP:Upgrade} websocket [NC]

# Main websocket
ProxyPassMatch "/cool/(.*)/ws$" ws://127.0.0.1:9980/cool/$1/ws nocanon

# Admin Console websocket:
ProxyPass /cool/adminws ws://127.0.0.1:9980/cool/adminws

# Download as, Fullscreen presentation and Image upload operations:
ProxyPass /cool http://127.0.0.1:9980/cool
ProxyPassReverse /cool http://127.0.0.1:9980/cool
# Compatibility with integrations that use the /lool/convert-to endpoint:
ProxyPass /lool http://127.0.0.1:9980/cool
ProxyPassReverse /lool http://127.0.0.1:9980/cool
#
# END Reverse proxy to the CODE Docker container
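After restarting Apache httpd you can verify that the proxy works end-to-end, for example by querying the capabilities endpoint through the public URL (hostname as used in this article):

$ curl -s https://code.darkstar.lan/hosting/capabilities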

Upgrading the Docker image for CODE

Occasionally, new versions of this docker image are released with security and feature updates. This is how you upgrade to a new version:

Grab the new docker image:

$ docker pull collabora/code

List the running Docker containers:

$ docker ps

From the output you can glean the Container ID of your CODE docker container. Stop and remove the Collabora Online docker container:

$ docker stop CONTAINER_ID
$ docker rm CONTAINER_ID

Remove the old, unused image, after looking up its ID (check the timestamp):

$ docker images | grep collabora
$ docker image rm IMAGE_ID

Start a container based on the new image, using the same ‘docker run -d‘ command as before.
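If you want to script this procedure, a one-liner like the following prints just the Container ID of the running CODE container (a sketch using docker’s ‘ancestor’ filter):

$ docker ps --filter ancestor=collabora/code --format '{{.ID}}'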

CODE Docker container configuration

The default configuration for the CODE Docker image is contained in a file inside the container, called “/etc/coolwsd/coolwsd.xml” . Some of the configuration options are documented online, but the file itself is well-documented too.

After starting the container, you can copy the configuration file out of the container using docker commands as shown below. After making your changes you copy it back into the container using docker commands. When a configuration change is detected by the running container it will restart (this is why it is important to have “--restart always” as part of your ‘docker run‘ commandline).
How to do this:

Determine the CODE Docker ID by looking at the running containers:

$ docker ps | grep code

With an output like this:

CONTAINER ID   IMAGE            COMMAND                  CREATED       STATUS      PORTS                      NAMES
e3668a79b647   collabora/code   "/start-collabora-on…"   2 weeks ago   Up 7 days   127.0.0.1:9980->9980/tcp   suspicious_kirch

Copy out the “coolwsd.xml” configuration file into the current directory of your host filesystem:

$ docker cp e3668a79b647:/etc/coolwsd/coolwsd.xml coolwsd.xml

Make your configuration changes to the local copy and then add it back into the container:

$ docker cp coolwsd.xml e3668a79b647:/etc/coolwsd/coolwsd.xml

The CODE container needs some time for its automatic restart, so wait a bit before you try to access it again from inside NextCloud.
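You can follow that restart in the container log, for example (using the example Container ID from above):

$ docker logs --tail 20 -f e3668a79b647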

Custom Docker Image

The Docker image for CODE can be customized within limits, using the variables in its configuration file, see above. But if you need substantial customization like adding more fonts or removing the 20-concurrent-open-documents limit, you can rebuild the Docker image yourself.
The code to rebuild the image can be found in the Collabora Online github repository. Use the available scripts like “install-collabora-online-ubuntu.sh” to create a custom script for your Slackware server, add your desired customizations like additional fonts or packages to that script and build your own custom Docker image with it.
Of course, the command-line to start your CODE container must be adapted to use your custom image… since you no longer want the default image to be pulled from the “collabora/code” repository in the Docker Hub.

Adding CODE to NextCloud

The preliminaries have been dealt with. Now we are getting to the core of the story: connecting your NextCloud server to the online editor backend.

The ‘richdocuments‘ app is the glue we need on NextCloud to offer collaborative document editing capabilities to our users. The human-readable name of this app as listed in your app overview is “NextCloud Office”. In fact there is an extended version of the app called ‘richdocumentscode‘ which contains an embedded version of CODE, but it is severely under-performing. I enabled the embedded CODE server in that app only to find that it crippled my complete NextCloud platform. Every action slowed down to a horrible crawl. That was the moment I decided I needed CODE to run separately, in a Docker container and outside of NextCloud.

Get the ‘richdocuments‘ app by going to “Profile > Apps” and searching for it in the appstore. Install it, then go to the new menu item which has suddenly appeared: “Profile > System > Administration > Collabora Online Development Server“, where you can configure the URL of your Collabora Online container:

The configuration boils down to:

  • Select the radio-button ‘Use your own server‘;
  • Enter the URL “https://code.darkstar.lan/“;
  • Save

NextCloud will make an attempt to contact your CODE server and if it gets the correct responses back, the configuration page will display a green checkmark alongside “Collabora Online server is reachable“.

All done! We can now check out how this online office suite looks on our NextCloud.
If you are more of a ‘television personality’, you can also watch a video about how to add CODE to NextCloud, created by NextCloud’s own Marketing Director (and former member of the KDE marketing team) Jos Poortvliet.

Using CODE in NextCloud

Once CODE is available in your NextCloud and you click on a document, you see the connection to the online editor being initialized:

This process takes a few seconds and you end up in a web-based editor which looks like:

There it is. An online web-based office suite based on LibreOffice, which is able to open and edit the documents you are storing in your NextCloud account. And if you tell NextCloud to share your document(s) with others, then they too can collaborate on your document, all together in real-time.

Thanks

You made it! After reading and implementing the ideas from these first five articles in the Series you should now have a Slackware Cloud Server up and running, full of features that everybody will be envious of, and it’s all free and under your control! No more need for Google, Microsoft, Amazon, Zoom, Dropbox and the likes. Meet and collaborate to your hearts’ content with family, friends, co-workers or groups of Open Source developers.

Good luck and let me know about your successes, but also your frustrations and how you overcame your knowledge gaps (these articles assumed quite a bit of prior knowledge of running a webserver, configuring an online domain, managing your firewall etcetera).

There are a few episodes remaining, doing a deep-dive into Etherpad and taking the first steps toward creating your own private Docker Hub without artificial usage limitations. Writing those last two episodes will take more time, so do not expect them to be released in the next couple of weeks; I hope you will like those as well.

Eric

Slackware Cloud Server Series, Episode 4: Productivity Platform

Hi all!
Welcome to the fourth episode in a series of articles that I am writing about using Slackware as your private/personal ‘cloud server’.
Check out the list below which shows past, present and future episodes in the series, if the article has already been written you’ll be able to click on the subject.
The first episode also contains an introduction with some more detail about what you can expect.
These articles are living documents, i.e. based on readers’ feedback I may add, update or modify their content.

Nextcloud Productivity Platform

So far, I have explored Docker container infrastructure management, shown you how to build an Identity and Access Management service in Docker and used that to make a Video Conferencing platform available to the users of your Slackware Cloud server. Today we are going to explore other means of collaboration and personal productivity.
Some of the topics in the current Episode will cover cloud storage: how do you share your data (documents, music, photos, videos) across your computers with a safe backup copy on your Cloud server? How do you keep track of your activities, read your emails, plan your work, chat with your friends and set up video calls with them? Moreover, how can you achieve all of this without having to depend on the Cloud platforms that Google, Dropbox, Amazon and Microsoft offer and which seem free but are most certainly not?

The Nextcloud platform, which is the main topic of this article, offers a solution to this dilemma. Nextcloud is a spin-off from ownCloud. It was announced in 2016 by the original founder of ownCloud after he left the company because he did not agree with the course ownCloud was going to take. OwnCloud now offers part of its features exclusively as a paid-for subscription model, whereas Nextcloud is fully open source and all features are available to community users as well as paying customers.

The core of Nextcloud is called the Nextcloud Hub, the server product which integrates the three main components Files, Talk and Groupware and which you can host and manage on a server which you own and control. Client programs are available for Linux, Windows and Android platforms that allow you to sync your local files to the server, participate in videoconferences directly from your phone etc.

The goal of setting up Nextcloud on our Slackware Cloud Server is to achieve total control over your data, share these data securely with other people and eliminate dependencies on similar commercial offerings while enjoying the same functionality.

But first a slight detour before we dive into the setup of our Collaboration Platform.
Nextcloud allows for document collaboration in similar fashion to Microsoft Office 365. This is a topic which has its own Episode in the article series: the next one called “Collaborative Document Editing” will center around the integration of Collabora Online Development Edition (CODE) into our Nextcloud Hub.
Collabora is a major contributor to the LibreOffice suite. An online web-based front end to LibreOffice programs was initiated by Collabora in 2014 and has been developed by them ever since; in essence, Collabora Online is the continued development of the original ‘LibreOffice Online’ code. Collabora moved the repository to github, and that is the source used to build CODE, but it is also used to create an “enterprise” version which has a paid subscription model. The pre-compiled CODE binaries have a limitation of 10 simultaneously edited documents, but compiling the source yourself will circumvent this artificial limitation.

Preamble

For the scope of this article I will assume that you have a Slackware 15.0 server with IP address “10.10.10.10“. Your own IP address is of course different but your (web)server needs to be accessible online.
Our example server (you need to apply real-life values here of course) has the following configuration:

  • a working Docker environment;
  • Apache httpd configured and running and reachable as “https://darkstar.lan/” with the DocumentRoot directory “/var/www/htdocs/” which is the default location for Apache httpd’s content in Slackware.
  • Keycloak configured and running at “https://sso.darkstar.lan/auth/”;
  • Jitsi Meet configured to use Keycloak for authentication and running at “https://meet.darkstar.lan/“; with the keycloak-jitsi bridge at “https://sso.meet.darkstar.lan/“.
  • PHP version 7.3, 7.4 or 8.0, which is the reason for the Slackware 15.0 requirement – the PHP of Slackware 14.2 is just too old.

Nextcloud Hub II

The Nextcloud Hub II is the new name of the Nextcloud server product which you can download as a single ZIP file and install into your webserver. Nextcloud Hub can be extended with plugins and for that, there is an active App Store. The Nextcloud administrator can search for, install, enable, disable, update, configure and remove these applications from the web-based management console or even from the command-line. This article will discuss some of these applications, in particular the ‘featured apps‘ which are the ones that are supported or developed by Nextcloud GmbH directly.
The core of Nextcloud Hub II consists of the following components:

Files:

Nextcloud Files allows easy access to your files, photos and documents. You can share files with other Nextcloud users, even with remote Nextcloud server instances. When CODE is installed (see the next Episode in this Series) you can even collaborate on documents in real-time with friends, family, team members, customers etc. We have inherent security through data encryption (in transit and on the local file storage) and access control mechanisms, so that only those people with whom you explicitly share your file(s) will be able to access them.

Users store files in their account and this translates to a per-user directory in the host’s filesystem. By default this storage area is not encrypted. If you don’t feel at ease with the idea that unauthorized access to the server potentially exposes userdata to the intruder, the Nextcloud administrator can enable on-disk encryption of all the users’ files. Once encrypted, the userdata cannot be decrypted by anyone other than the owner of the files (aka the user) and those with whom files are shared. An encryption key is created per user and the user’s password unlocks that key, which is always and only stored on the server. If the user loses their password, they will permanently lose access to their data!
The administrator can configure a ‘recovery key‘ however. You may argue that this allows the administrator to gain access to the userdata, but that’s the other side of the coin of being able to decrypt userdata in the event of a lost password…

Talk:

Nextcloud Talk offers screensharing, online meetings and web conferencing. The video conferencing capabilities are not yet as strong as Jitsi Meet (in particular it does not scale well to a lot of simultaneous users) but the Talk component is being improved continuously.

Groupware:

Nextcloud Groupware consists of easy to use applications for Webmail, Calendaring and Contacts and is integrated with Nextcloud Files. In webmail you can configure multiple IMAP server accounts, PGP encryption is supported. Calendars can be shared and have Nextcloud Talk integration for video conferences. You can create appointments and define and book resources. Contacts can be grouped, shared with others, and synchronized to and from your phone.
In addition, you can install the featured app ‘Deck‘ which enables you to manage your work with others through private or shared Kanban-style task boards.

Setting up MariaDB SQL server

Nextcloud Hub uses a MariaDB server as its database back-end; the users’ files are stored in your host file-system.
You could use SQLite instead… but that is really going to affect the performance of your server negatively if the load increases. In case you have never set up a MariaDB SQL server, I will give you a brief rundown.

MariaDB is part of a full installation of Slackware, and since ‘full install‘ is the recommended install, I will assume you have a mariadb package installed. If not, because you decided to do only a partial installation on your server, be sure to start with an installation of mariadb and its dependencies lz4 and liburing.

  • Create the system tables for the SQL server:
    # mysql_install_db --user=mysql
    Installing MariaDB/MySQL system tables in '/var/lib/mysql' ...
    OK
  • Make the MariaDB boot script executable:
    # chmod +x /etc/rc.d/rc.mysqld
  • Comment out the ‘ SKIP="--skip-networking" ‘ line in ‘rc.mysqld‘ because we will use client connections from outside.
  • Execute the boot script (or reboot the machine):
    # /etc/rc.d/rc.mysqld start
  • Perform initial configuration of the database server:
    # /usr/bin/mysql_secure_installation
    Switch to unix_socket authentication [Y/n] n
    Set root password? [Y/n] y
    Remove anonymous users? [Y/n] y
    Disallow root login remotely? [Y/n] y
    Remove test database and access to it? [Y/n] y
    Reload privilege tables now? [Y/n] y

    This script locks down your SQL server properly. It will prompt you to setup the admin (root) password and gives you the option of removing the test databases and anonymous user which are created by default. This is strongly recommended for production servers. I answered “NO” to “Switch to unix_socket authentication” because the Docker containers on our Slackware Cloud server will need to access this MariaDB SQL server over the local network.

You now have MariaDB SQL server up and running, the root (admin) password is set and the root user can only connect to the database from the localhost. It’s ready to setup its first database.
We will manually create the required SQL database for NextCloud (you might want to insert your own account/password combo instead of my example “nextcloud/YourSecretPassword“). Since we are installing NextCloud on the ‘bare metal‘, NextCloud can communicate with the SQL server via the ‘localhost‘ network address:

# mysql -uroot -p
> CREATE USER 'nextcloud'@'localhost' IDENTIFIED BY 'YourSecretPassword';
> CREATE DATABASE IF NOT EXISTS ncslack CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
> GRANT ALL PRIVILEGES ON ncslack.* TO 'nextcloud'@'localhost';
> FLUSH PRIVILEGES;
> quit;
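You can verify that the new account works by connecting with it and running a trivial query (using the example credentials from above):

# mysql -unextcloud -pYourSecretPassword ncslack -e "SELECT 1;"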

Configuring Apache and PHP

First of all, we are going to increase the PHP memory limit from the default 128 MB to 512 MB (recommended value) in the file ‘/etc/php.ini‘ or else the more complex operations will fail:

memory_limit = 512M

Don’t forget to enable PHP support in “/etc/httpd/httpd.conf” by un-commenting the line ‘Include /etc/httpd/mod_php.conf‘.

Your Apache server needs some directives inside the ‘<VirtualHost></VirtualHost>‘ block that are not enabled by default.
This section needs to be added:

  <Directory /var/www/htdocs/nextcloud/>
    Require all granted
    Satisfy Any
    AllowOverride All
    Options FollowSymLinks MultiViews

    <IfModule mod_dav.c>
      Dav off
    </IfModule>
  </Directory>
  Alias "/localapps" "/opt/netxtcloud/localapps"
  <Directory "/opt/nextcloud/localapps">
    Require all granted
    AllowOverride All
    Options FollowSymLinks MultiViews
  </Directory>

Also, the Apache httpd modules mod_rewrite, mod_headers, mod_env, mod_dir and mod_mime need to be enabled. This is usually done in ‘/etc/httpd/httpd.conf‘.
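On a 64-bit Slackware 15.0 system this boils down to un-commenting LoadModule lines like the ones below (the module paths are my assumption based on the stock httpd.conf; verify them against your own file):

LoadModule rewrite_module lib64/httpd/modules/mod_rewrite.so
LoadModule headers_module lib64/httpd/modules/mod_headers.so
LoadModule env_module lib64/httpd/modules/mod_env.so
LoadModule dir_module lib64/httpd/modules/mod_dir.so
LoadModule mime_module lib64/httpd/modules/mod_mime.so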

Configuring local storage

We are going to maintain all of the Nextcloud users’ personal data separately from the server code and even outside of the webserver document root. Nobody should be able to access your files through a possible hole in your webserver.

Also, when the Nextcloud administrator (you) is going to extend the server’s functionality by installing additional apps, you will want these apps to be installed outside of the Nextcloud application directory. This allows for a clean upgrade process when we want to install a newer version in the future. The “/opt/nextcloud/” directory will be the root for all of this. The httpd user account “apache” needs to own these files.

Let’s start with creating the userdata directory:

# mkdir -p /opt/nextcloud/data
# chown apache:wheel /opt/nextcloud/data

And the directory where the add-ons will be installed:

# mkdir -p /opt/nextcloud/localapps
# chown apache:wheel /opt/nextcloud/localapps

To the Apache httpd VirtualHost configuration you should add an ‘Alias‘ statement which points the webserver “/localapps” path component to the filesystem location “/opt/nextcloud/localapps“. This line was already mentioned in the Apache configuration snippet further up:

Alias "/localapps" "/opt/netxtcloud/localapps"

Installing the Nextcloud Hub

We will download the latest stable release from their server installation page.
At the time of writing, 23.0.0 is the latest available so:

# cd ~
# wget https://download.nextcloud.com/server/releases/nextcloud-23.0.0.zip
# unzip -n nextcloud-23.0.0.zip -d /var/www/htdocs/
# chown -R apache:wheel /var/www/htdocs/nextcloud/

We have already created the SQL database, so we can immediately continue with the Web setup. Point your browser at https://darkstar.lan/nextcloud/ :

This is where you will define the account and password for the Nextcloud administrator. It will be the only account with local credentials (stored in the SQL backend). All future Nextcloud user authentication & authorization will be configured and managed via the Keycloak IAM program. The values below match the configuration settings documented higher up in this article.

  • I will use the name “admin” for the account and give it a secure password.
  • Storage & Database:
    • Data Folder: “/opt/nextcloud/data
    • Database: MySQL/MariaDB
    • Database user: “nextcloud
    • Database password: “YourSecretPassword
    • Database name: “ncslack
    • Localhost: add the MariaDB port number so that it reads “localhost:3306”
  • Install recommended apps: Calendar, Contacts, Talk, Mail & Collaborative editing: ensure that this box is checked (note that the Collaborative Editing program – CODE – download will probably fail but we will install it as an external application later):

TIP: I experienced that when the password fields are set to “show password”, the installation will refuse to continue.

Click “Finish” to kick off the Nextcloud setup.

This configuration process ends with some screens that show off the capabilities of the programs; you can either close these immediately (the “X” top right) or click through them until the end. You will end up in your Nextcloud dashboard:

Post-configuration

After installation and initial configuration we are now going to tune the server.

Pretty URLs

A Nextcloud URL looks like “http://localhost/nextcloud/index.php/apps/dashboard/” and I want to remove the “index.php” string which makes the URLs look nicer. Check the official documentation.

  • Edit “./config/config.php” file in the nextcloud directory. Add the following two variables right before the last line which goes like “);“:
    'overwrite.cli.url' => 'https://darkstar.lan/nextcloud',
    'htaccess.RewriteBase' => '/nextcloud',
  • We are going to use the NextCloud command-line interface ‘occ‘ (here it shows its heritage; ‘occ‘ stands for ‘Own Cloud Console‘) in order to apply the changes in ‘config.php‘ to the ‘.htaccess‘ file in the NextCloud installation directory.
    The ‘occ’ command needs to be run as the httpd user (‘apache‘ in the case of Slackware):
    # sudo -u apache php -d memory_limit=512M /var/www/htdocs/nextcloud/occ \
    maintenance:update:htaccess
    .htaccess has been updated
    Note that I have not been able to get this to actually work! Pointers with a fix for pretty URLs are welcome!

Server scan

You can run a “server scan” as admin user via “Settings > Administration > Security & Setup Warnings” to see if Nextcloud has suggestions on improving performance and functionality of the server.
In my case, it reports that a number of database indexes are missing, and that the following command will fix this:

# sudo -u apache php /var/www/htdocs/nextcloud/occ db:add-missing-indices

This scan will also point out performance issues, about which I will share more details below in the “Tuning server performance” section.

Cron maintenance

It is advised to install a cron job which runs as the “apache” user, and performs maintenance every 5 minutes. In Nextcloud as the admin user, you configure the “Cron” maintenance type via “Settings > Administration > Basic Settings > Background Jobs“.
This needs to be accompanied by an actual cron job definition on the Slackware server; so this is the command that I added to the crontab for “apache“:

# crontab -u apache -l
# Run maintenance for NextCloud every 5 minutes:
*/5 * * * * /usr/bin/php -f /var/www/htdocs/nextcloud/cron.php
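If you prefer to add that entry non-interactively, here is a sketch that appends it to the crontab of the “apache” user:

# ( crontab -u apache -l 2>/dev/null ; echo "*/5 * * * * /usr/bin/php -f /var/www/htdocs/nextcloud/cron.php" ) | crontab -u apache -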

Install add-on applications outside of Nextcloud directory

We have already created a ‘/opt/nextcloud/localapps‘ directory with the intention of using that location for the installation of all the apps that we want to add to our Hub.
In order to enable a separate application installation path, you need to add the following lines to ‘./config/config.php‘:

'appstoreenabled' => true,
'apps_paths' => array(
  array(
    'path' => OC::$SERVERROOT . '/apps',
    'url' => '/apps',
    'writable' => false,
  ),
  array(
    'path' => '/opt/nextcloud/localapps',
    'url' => '/localapps',
    'writable' => true,
  ),
),

Note that if you get an error from external applications when they attempt to parse the ‘config.php’ file of NextCloud which looks like “Expected boolean literal, integer literal, float literal, string literal, ‘null’, ‘array’ or ‘[‘”, you may have to replace:
'path'=> OC::$SERVERROOT . '/apps',
with
'path'=> '/var/www/htdocs/nextcloud/apps',

Tuning server performance

Memory cache

The main optimizations which will boost your server’s performance and responsiveness are related to memory caching and transactional file locking.

You can address both by installing redis (an in-memory key-value store) and php-redis (the PHP extension for redis) as Slackware packages from my repository.

After installing redis, configure it correctly for Nextcloud.

  • Enable the php redis extension by editing ‘/etc/php.d/redis.ini‘ and removing the semicolon so that you get the following line:
    ; Enable redis extension module
    extension=redis.so

    Optionally add these lines to tune its performance:
    ; Should the locking be enabled? Defaults to: 0.
    redis.session.locking_enabled = 1
    ; How long should the lock live (in seconds)?
    ; Defaults to: value of max_execution_time.
    redis.session.lock_expire = 60
    ; How long to wait between attempts to acquire lock, in microseconds?.
    ; Defaults to: 2000
    redis.session.lock_wait_time = 10000
    ; Maximum number of times to retry (-1 means infinite).
    ; Defaults to: 10
    redis.session.lock_retries = -1
  • Make the redis daemon listen at a UNIX socket as well as a TCP port, uncomment these lines in the “/etc/redis/redis.conf” configuration file:
    unixsocket /var/run/redis/redis.sock
    unixsocketperm 660
  • Make the init script executable and start it manually (it gets added to rc.local so it will start on every boot from now on):
    # chmod +x /etc/rc.d/rc.redis
    # /etc/rc.d/rc.redis start

Enable the use of redis in Nextcloud’s main configuration file “/var/www/htdocs/nextcloud/config/config.php” by adding the following block:

'memcache.locking' => '\\OC\\Memcache\\Redis',
'memcache.local' => '\\OC\\Memcache\\Redis',
'memcache.distributed' => '\\OC\\Memcache\\Redis',
'redis' => 
array (
    'host' => '/var/run/redis/redis.sock',
    'port' => 0,
),

Finally, allow the “apache” user to be able to read the Redis socket by adding the user to the “redis” group:

# gpasswd -a apache redis

… and restart the Apache httpd service.
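To verify that redis is reachable over the UNIX socket, you can use the ‘redis-cli‘ tool that ships with the redis package (the expected reply is “PONG”):

# redis-cli -s /var/run/redis/redis.sock ping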

PHP OPcache

OPcache is a PHP extension for Apache httpd which stores (pre-)compiled PHP script bytecode in memory. This speeds up the performance of PHP-based applications like Nextcloud, because it reduces or eliminates the need to load PHP scripts from disk and parse them every time they are called.
Slackware’s PHP has opcache enabled in “/etc/php.ini“. Relevant settings are:

opcache.enable=1
opcache.memory_consumption=128
opcache.max_accelerated_files=10000
opcache.revalidate_freq=200

These are default values for Slackware and therefore they are commented-out in the “php.ini” file but you can validate that they are enabled via the command:

# php -i |grep opcache

If you find that OPcache is disabled, you should fix that ASAP.

Check your security

Nextcloud offers a neat online tool that connects to your Nextcloud instance and performs a series of security checks, and will advise you in case you need to improve on your server security: https://scan.nextcloud.com/ .

Commandline management

The web-based management interface for the “admin” user is ‘https://darkstar.lan/nextcloud/index.php/settings/user‘ but a lot of administration tasks can also be done directly from the Slackware server’s command prompt (as root) using the “Own Cloud Console” program ‘occ‘ I already briefly touched on earlier in this article.

Some examples:

Install an application, say “richdocumentscode” which is a built-in low-performance version of CODE (Collabora Online Development Edition):

# sudo -u apache php -d memory_limit=512M /var/www/htdocs/nextcloud/occ app:install richdocumentscode

App updates can be done on commandline too:

# sudo -u apache php -d memory_limit=512M /var/www/htdocs/nextcloud/occ app:update --all

Remove an app:

# sudo -u apache php -d memory_limit=512M /var/www/htdocs/nextcloud/occ app:remove richdocumentscode

List all installed apps and their status:

# sudo -u apache php -d memory_limit=512M /var/www/htdocs/nextcloud/occ app:list

Integration with Keycloak IAM

The server is running nicely, and it is time to invite users. All our users will be managed by our Keycloak IAM program.
To integrate Keycloak into Nextcloud we first create a “Client ID” for Nextcloud in our Keycloak admin console and copy the relevant secret bits from that ID into Nextcloud. Nextcloud will contact Keycloak with a URL that will enable login for the users in our “foundation” realm, i.e. the realm which we created when we set up Keycloak.

Keycloak:

  • Login to the Keycloak Admin Console, https://sso.darkstar.lan/auth/admin/ .
    • Under ‘Clients‘, create a new client:
      Client ID‘ = “nextcloud
      ‘Root URL‘ = “https://darkstar.lan/nextcloud“
    • Save.
    • In the ‘Settings‘ tab, change:
      Access Type‘ = “confidential” (default is “public”)
    • Save.
    • In the ‘Credentials‘ tab, copy the value of the ‘Secret‘, which we will use later in Nextcloud: “61a3dfa1-656d-45eb-943a-f3579a062ccb” (your own value will of course be different).

Nextcloud:

  • Login to Nextcloud as the admin user: https://darkstar.lan/nextcloud
    • In ‘Profile > Apps‘, install the ‘Social Login‘ app.
  • Configure Nextcloud’s social login to use Keycloak as the identity provider
    • Make sure ‘Disable auto create new users‘ is UN-checked because every user logging in via Keycloak for the first time must automatically be created as a new user.
    • DO check ‘Prevent creating an account if the email address exists in another account‘ to ensure that everyone will have only one account
    • Add a new ‘Custom OpenID Connect‘ by clicking on the ‘+‘:
      ‘Title’ = “keycloak”
      ‘Authorize url’ = “https://sso.darkstar.lan/auth/realms/foundation/protocol/openid-connect/auth”
      ‘Token url’ = “https://sso.darkstar.lan/auth/realms/foundation/protocol/openid-connect/token”
      ‘Client id’ = “nextcloud
      ‘Client Secret’ = “61a3dfa1-656d-45eb-943a-f3579a062ccb” (this is the value you obtained earlier in the Keycloak Client definition)
      ‘Scope’ = “openid”
      ‘Default group’ = “none”
    • ‘Groups claim’ = “nextcloud” (optional setting, make sure you create a group “nextcloud” in Keycloak’s “foundation” realm and have added your future Nextcloud users as members of this group).
    • Also optional, ‘Add group mapping‘ can automatically arrange that the Keycloak group “nextcloud” is mapped to Nextcloud group, e.g. called “My social circle” with which you can limit access in Nextcloud to only those apps that you approved.
    • Save.

Your Nextcloud login page will now additionally offer a ‘Login with Keycloak Authentication‘ option. Clicking this will redirect you to a Keycloak authentication page. Screenshots of how this looks exactly can be found further down in the section “Nextcloud clients“.
After successful login and authorization with an existing Keycloak user account you will be returned to Nextcloud and logged in. If this was the user’s first login to Nextcloud using Keycloak credentials, Nextcloud will create an internal user account which is linked to the Keycloak credentials.

Integrating Jitsi Meet

The integration of an embedded Jitsi Meet window into Nextcloud requires some configuration in Apache httpd (for the embedding) and in Nextcloud itself (securing the intercommunication via a JSON Web Token).

Apache:

Add the following to your httpd global configuration, for instance to ‘/etc/httpd/httpd.conf‘ (the example hostnames need to be replaced by your real-life hostnames of course):

# Mitigate the risk of "click-jacking" (other sites embedding your pages
# and adding other, possibly malicious, content)
# See https://developer.mozilla.org/en-US/docs/HTTP/X-Frame-Options
Header always append X-Frame-Options SAMEORIGIN
# And allow embedding of Jitsi Meet in Nextcloud (add any URL for which you would also allow embedding):
RequestHeader set X-HTTPS 1
Header always set Content-Security-Policy "frame-ancestors 'self';"
Header always set Content-Security-Policy "frame-ancestors https://darkstar.lan https://sso.darkstar.lan https://meet.darkstar.lan ;"

Followed by a restart of Apache httpd.

Nextcloud:

  • Install the “Jitsi Integration” app as the admin user via ‘Profile > Apps
  • Go to ‘Profile > Settings > Jitsi’
    • ‘Server URL’: enter “https://meet.darkstar.lan
    • ‘JWT Secret’: enter “NmjPGpn+NjTe7oQUV9YqfaXiBULcsxYj” (this is the JWT token which you entered as the value for ‘JWT_APP_SECRET‘ variable in the ‘.env‘ file of  our ‘docker-jitsi-meet‘  instance, see the previous Episode on Video Conferencing for details)
    • JWT App Id‘: enter “jitsi” (which is the value of the  ‘JWT_APP_ID‘ variable in that same ‘.env‘ file)
    • Save

That’s it!
Users will now find a ‘Conferences‘ icon in the application bar at the top of their Nextcloud home. Clicking it will open an embedded Jitsi video conferencing window.

One obvious advantage to using this NextCloud-embedded Jitsi Meet is that the resulting Meet URLs are impossible to guess: they will not mention the meeting room name but instead use a UUID like ‘https://meet.darkstar.lan/06135d8e-5365-4a6f-b31b-a654d92e8985‘.
If you compare this to the regular Jitsi Meet at ‘https://meet.darkstar.lan/‘, your meeting room with the name ‘roomname‘ would be accessible as ‘https://meet.darkstar.lan/roomname‘, which is a lot easier to guess… undesirable if you want to hold private meetings.

Adding new apps

I encourage you to check out the Nextcloud App Store. There’s a wealth of cool, useful and fun applications to be found there.

Some of the apps that I installed are: GPXPod (visualize your GPX routes); Carnet (powerful note-taking app); Music (listen to your local collection or tune into online streams); PhoneTrack (keep track of your phone, needs the Android app to be installed too); Element (messenger app with bridges to Matrix and Libera.Chat for instance); Cookbook (maintain your own recipe database and easily import new recipes from external websites using only the URL to the recipe), etc…

Adding client push daemon

Extremely useful is push update support for the desktop app. Nextcloud 21 introduced a ‘high performance backend‘ for file transfers. This increases the speed of file transfers substantially, but it needs a couple of things to make it work.

  • The worker app is called “Client Push” and needs to be installed on your server from the Nextcloud App Store. Once the app is installed, the push binary still needs to be setup. The README on github contains detailed setup instructions. For the Slackware way, read on.
  • You need to have the redis package installed and the redis server needs to be started and running. We already took care of this in the section ‘Tuning server performance‘.
  • Slackware, Apache and Nextcloud need to be configured for redis and the notify_push daemon.

Slackware:

The notify_push binary gets installed as ‘/opt/nextcloud/localapps/notify_push/bin/x86_64/notify_push‘. Hint: this is 64-bit only. This binary needs to run always, so the easiest way is to add a Slackware init script and call that in ‘/etc/rc.d/rc.local‘.

The init script can be downloaded from my repository, but it’s listed here in full as well. Save it as ‘/etc/rc.d/rc.ncpush‘ and make it executable (see the commands right after the script):

#!/bin/bash
# Description: Push daemon for NextCloud clients.
# Needs: redis php-fpm mariadb
# Written by: Eric Hameleers <alien@slackware.com> 2021
daemon=/usr/bin/daemon
description="Push daemon for Nextcloud clients"
pidfile=${pidfile:-/var/run/nextcloud/notify_push.pid}
ncconfig=${ncconfig:-/var/www/htdocs/nextcloud/config/config.php}
command=${command:-/opt/nextcloud/localapps/notify_push/bin/x86_64/notify_push}
command_user=${command_user:-apache}
command_args="--bind 127.0.0.1 --port 7867 $ncconfig"

[ ! -x $command ] && exit 99
[ ! -f $ncconfig ] && exit 99

RETVAL=0

start() {
  if [ -e "$pidfile" ]; then
    echo "$description already started!"
  else
    echo -n "Starting $description: "
    mkdir -p $(dirname $pidfile)
    chown $command_user $(dirname $pidfile)
    chmod 0770 $(dirname $pidfile)
    $daemon -S -u $command_user -F $pidfile -- $command $command_args
    RETVAL=$?
    [ $RETVAL -eq 0 ] && touch /var/lock/subsys/$(basename $command)
    echo "- done."
  fi
}
stop(){
  echo -n "Stopping $description: "
  kill -TERM $(cat $pidfile)
  RETVAL=$?
  [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/$(basename $command)
  echo "- done."
}
restart(){
  stop
  start
}
condrestart(){
  [ -e /var/lock/subsys/$(basename $command) ] && restart
}
status() {
  pids=$(cat $pidfile 2>/dev/null)
  if test "$pids" ; then
    echo "$description is running."
    ps up $pids
  else
    echo "$description is stopped."
  fi
}
# See how we were called.
case "$1" in
start)
  start
  ;;
stop)
  stop
  ;;
status)
  status
  ;;
restart)
  restart
  ;;
condrestart)
  condrestart
  ;;
*)
  echo "Usage: $0 {start|stop|status|restart|condrestart}"
  RETVAL=1
esac
exit $RETVAL
# ---

The script is then called on boot by adding the following to your server’s ‘/etc/rc.d/rc.local‘:

if [ -x /etc/rc.d/rc.ncpush ]; then
  # Start Nextcloud Client Push Daemon
  echo "Starting Nextcloud Client Push Daemon: /etc/rc.d/rc.ncpush start"
  /etc/rc.d/rc.ncpush start
fi

Apache:

Set up the Apache configuration for your webserver. The following block of configuration needs to be added:

ProxyPreserveHost On
ProxyTimeout 900
SSLProxyEngine on
RequestHeader set X-Forwarded-Proto "https"

# Setup a reverse proxy for the client push server
# https://github.com/nextcloud/notify_push/blob/main/README.md :
<Location /push/ws>
    ProxyPass ws://127.0.0.1:7867/ws
</Location>
<Location /push/>
    ProxyPass http://127.0.0.1:7867/
    ProxyPassReverse http://127.0.0.1:7867/
</Location>

# Do not forget WebSocket proxy:
RewriteEngine on
RewriteCond %{HTTP:Connection} Upgrade [NC]
RewriteCond %{HTTP:Upgrade} websocket [NC]
# Without a RewriteRule the two conditions above do nothing; this rule
# proxies the push websocket endpoint (adapt if your path differs):
RewriteRule ^/?push/ws(.*) "ws://127.0.0.1:7867/ws$1" [P,L]

You know where to add the above.

Nextcloud:

Add these lines to “./config/config.php“; you should of course change the ‘trusted_proxies‘ value into your own server’s Internet IP address:

'trusted_proxies' => ['10.10.10.10'],
'forwarded_for_headers' => ['HTTP_X_FORWARDED', 'HTTP_FORWARDED_FOR', 'HTTP_X_FORWARDED_FOR'],

Piecing it all together:

Preparations are complete after you have restarted Apache httpd and started the ‘rc.ncpush‘ script. Redis should also be running.
Now we can finally enable the app we installed in Nextcloud:

# sudo -u apache php -d memory_limit=512M /var/www/htdocs/nextcloud/occ app:enable notify_push

Setup the connection between the app and the daemon:

# sudo -u apache php -d memory_limit=512M /var/www/htdocs/nextcloud/occ notify_push:setup https://darkstar.lan/push/
> redis is configured
> push server is receiving redis messages
> push server can load mount info from database
> push server can connect to the Nextcloud server
> push server is a trusted proxy
> push server is running the same version as the app
configuration saved
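You can repeat this kind of diagnostic at any later time; the notify_push app ships a self-test for this purpose (the ‘notify_push:self-test‘ occ command is documented in the app’s README):

# sudo -u apache php -d memory_limit=512M /var/www/htdocs/nextcloud/occ notify_push:self-test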

Upgrading to newer release

Here is an example of the steps to take when you upgrade from one major release to the next (for instance you want to upgrade from 23 to 24).

First, download and extract the release tarball, and set the file ownership to the ‘apache‘ user:

# cd ~
# wget https://download.nextcloud.com/server/releases/nextcloud-24.0.0.tar.bz2
# tar xf nextcloud-24.0.0.tar.bz2
# chown -R apache:wheel /root/nextcloud

Disable the server-maintenance cronjob of the “apache“ user before continuing, for instance by adding ‘#‘ in front of the cron command line!

# crontab -u apache -e
# crontab -u apache -l
# Run maintenance for NextCloud every 5 minutes:
####*/5 * * * * /usr/bin/php -f /var/www/htdocs/nextcloud/cron.php

Stop the Apache httpd:

# /etc/rc.d/rc.httpd stop

Rename the old “/var/www/htdocs/nextcloud/” installation directory and move the new ‘nextcloud‘ directory in its place:

# mv -i /var/www/htdocs/nextcloud /root/nextcloud.orig
# mv -i /root/nextcloud /var/www/htdocs/nextcloud

Copy your Nextcloud configuration into the new directory (this should only overwrite the ‘config.sample.php‘ file):

# cp -ia /root/nextcloud.orig/config/* /var/www/htdocs/nextcloud/config/

Start the Apache httpd again:

# /etc/rc.d/rc.httpd start

Start the commandline updater. This MUST be done from within the Nextcloud installation directory:

# cd /var/www/htdocs/nextcloud/
# sudo -u apache php -d memory_limit=512M occ upgrade

Now, re-enable the cronjob for user “apache“.
Then re-create the database indices which will be missing again after the upgrade:

# sudo -u apache php /var/www/htdocs/nextcloud/occ db:add-missing-indices

Stable versus beta channels

Nextcloud offers “Stable” and “Beta” channels of its software. From time to time, new functionality becomes available in the Beta channel that you really want to use.
As the admin user, you can configure your preferred release channel in “Settings > Administration > Overview > Version > Update Channel” if you have installed Nextcloud from a release tarball like we did.

NOTE that you can only upgrade to a newer version. Skipping major versions when upgrading, as well as downgrading to older versions, is not supported by Nextcloud.
For instance, if you went via ‘Beta‘ to 24.0.0rc4 and ‘Stable‘ is still on 23.0.5, you have to wait with further upgrades until 24.0.0 or later becomes available in the ‘Stable‘ channel.

Nextcloud makes new versions incrementally available to user installations in the Stable channel which means it can take a while before your server alerts you that a new release is available.
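If you do not want to wait for that alert, you can query the update server yourself from the command line (a sketch using occ’s built-in ‘update:check‘ command, available in recent Nextcloud releases):

# sudo -u apache php /var/www/htdocs/nextcloud/occ update:check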

Nextcloud clients

Nextcloud mobile and desktop clients are available from https://nextcloud.com/clients/ .

Client applications can keep your files synchronized between the Nextcloud server and your desktop or phone; they can manage the passwords that you store securely on the server; manage your calendar; set up chat and video connections to other users of your server; and more.

The desktop file-sync client (nextcloud-client) is available in my own repository as a package, and on SlackBuilds.org as a build script (but the SBo version is actually still the old OwnCloud mirall client). Its GUI is Qt5-based. This is a free alternative to Dropbox and the like!

When you start the nextcloud-client desktop application for the first time, you will go through an initial setup. The client needs to get configured to connect to your Nextcloud server:

After clicking “Log in to your Nextcloud” you can enter the URL at which your server is available:

The client will open your default browser (in case of my KDE Plasma5, this was Konqueror) and ask you to switch to that browser window to logon to your Nextcloud:

You can click “Copy Link” so that you  can use another browser if you are not happy with the default browser choice. The next steps take place inside that browser window:

You need to login with your Nextcloud user account. In this dialog we click on “Log in with Keycloak Authentication” instead of entering an account/password into the top entry fields – remember, only the admin user has an account with local credentials:

You will probably recognize the familiar Keycloak Single Sign On dialogs from the previous Episode (Video Conferencing):

This account has 2-Factor Authentication (2FA) enabled:

After successful login, the Nextcloud server wants to get your explicit permission to grant access for the desktop client to your server data:

After you click “Grant access” the client will finally be able to connect to your data and start syncing:

You get a choice of the local desktop directory to synchronize with your server data – by default this is ‘~/Nextcloud/‘:

The client docks into the system tray from where you can access your files and manage the sync. The desktop client integrates nicely with KDE’s Dolphin and Konqueror, and shows the sync status of the ‘~/Nextcloud/‘ directory (and the files in it) with a badge – similar to what Dropbox does.
All files inside ‘~/Nextcloud/‘ have a right-click context menu offering some file sharing options and the possibility to open the file’s location on the server in your web browser.

Fixes

To end this Episode, I will collect the fixes that were necessary to make Nextcloud do what I wanted it to do.

Enable geolocation in the Nextcloud PhoneTrack app by applying this patch:

# diff -u lib/public/AppFramework/Http/FeaturePolicy.php{.orig,}
--- lib/public/AppFramework/Http/FeaturePolicy.php.orig 2021-10-02 12:57:56.150683402 +0200
+++ lib/public/AppFramework/Http/FeaturePolicy.php      2021-10-11 15:38:01.214623284 +0200
@@ -49,7 +49,9 @@
        ];

        /** @var string[] of allowed domains that can use the geolocation of the device */
-       protected $geolocationDomains = [];
+       protected $geolocationDomains = [
+               '\'self\'',
+       ];

        /** @var string[] of allowed domains that can use the microphone */
        protected $microphoneDomains = [];

Thanks

Thanks for reading all the way to the end of this Episode. I hope you learnt from it and are eager to try this Nextcloud platform on your own private Slackware Cloud Server!
As always, leave constructive and helpful feedback in the comments section below.

Cheers, Eric

Slackware Cloud Server Series, Episode 3: Video Conferencing

Hi all!
This is already the third episode in a series of articles I am writing about using Slackware as your private/personal ‘cloud server’. Time flies when you’re having fun.
We’re still waiting for Slackware 15.0 and in the meantime, I thought I’d speed up the release of my article on Video Conferencing. My initial plan was to release one article per week after Slackware 15 had been made available. The latter still did not happen (unstuck in time again?), but then I realized that an article about Docker and another about Keycloak still won’t give you something tangible and productive to run and use. So here is Episode 3, a couple of days earlier than planned, to spend your lazy Sunday on: create your own video conferencing platform.
Episodes 4 and 5 won’t be far off, since I have already written those as well.

Check out the list below which shows past, present and future episodes in the series, if the article has already been written you’ll be able to click on the subject.
The first episode also contains an introduction with some more detail about what you can expect.

  • Episode 1: Managing your Docker Infrastructure
  • Episode 2: Identity and Access management (IAM)
  • Episode 3 (this article): Video Conferencing
    Setting up Jitsi Meet – the Open Source video conferencing platform. This makes us independent of cloud conferencing services like MS Teams, Zoom or Google Meet. The Jitsi login is offloaded to our Keycloak IAM provider.

    • Jitsi Meet on Docker
    • Preamble
    • Initial Configuration
    • Adding Etherpad integration
    • Creating application directories
    • Starting Jitsi Meet
    • Considerations about the “.env” file
    • Upgrading Docker-Jitsi-Meet
    • Apache reverse proxy setup
    • Fixing Etherpad integration
    • Network troubleshooting
    • Creating internal Jitsi accounts
    • Connecting Jitsi and Keycloak
      • Adding jitsi-keycloak
      • Configuration of jitsi-keycloak in the Keycloak Admin console
      • Remaining configuration done in jitsi-keycloak
    • Configure docker-jitsi-meet for use of jitsi-keycloak
    • Firing up the bbq
    • Thanks
    • Attribution
  • Episode 4: Productivity Platform
  • Episode 5: Collaborative document editing
  • Episode 6: Etherpad with Whiteboard
  • Episode 7: Decentralized Social Media
  • Episode X: Docker Registry

Secure Video Conferencing

Actually, my original interest in Docker was sparked at the beginning of 2020, when the Corona pandemic was new, everybody was afraid, and people were sent home to continue work and school activities from there.
One of the major challenges for people was to stay connected. Zoom went from a fairly obscure program to a hugely popular video conferencing platform in no time at all (until severe security flaws made a fair-sized dent in its reputation); Microsoft positioned its Teams platform as the successor of Skype but targets mostly corporate users; Google Hangouts became Google Meet and is nowadays the video conferencing platform of choice for all corporations that have not yet been caught in the Microsoft vendor lock-in.
None of these conferencing platforms is open source, all of them are fully cloud-hosted, and they are inseparable from privacy concerns. In addition, unpaid use of these platforms imposes limits on the size and quality of your meetings. As a user, you have no control at all.

Enter Jitsi, whose Jitsi Meet platform is available for everybody to use online for free and without restrictions. Not just free, but Open Source, with end-to-end encrypted communication, and you can host the complete infrastructure on hardware that you own and control.
People do not even have to create an account in order to participate – the organizer can share a URL with everyone (s)he wants to join a session.

Jitsi is not as widely known as Zoom, and that is a pity. Therefore this Episode in my Slackware Cloud Server series will focus on getting Jitsi Meet up and running on your server, and we will let login be handled by the Keycloak Identity and Access Management (IAM) tool which we have learnt to setup in the previous Episode.

In early 2020, when it became clear that our Slackware coreteam member Alphageek (Erik Jan Tromp) would not stay with us for long due to a terminal illness, I went looking for a private video conferencing platform for our Slackware team and found Jitsi Meet.
I had no success in getting it to work on my Slackware server unfortunately. Jitsi Meet is a complex product made of several independent pieces of software which need to be configured ‘just right‘ to make them work together properly. I failed. I was not able to make it work in time to let alphageek use it.
But I also noticed that Jitsi Meet was offered as a Docker-based solution. That was the start of a learning process full of blood sweat & tears which culminated in this article series.

With this article I hope to give you a jump-start in getting your personal video conferencing platform up and running. I will focus on the basic required functionality but I will leave some of the more advanced scenarios for you to investigate: session recording; automatic subtitling of spoken word; integrating VOIP telephony; to name a few.

Jitsi Meet on Docker

Docker-Jitsi-Meet is a Jitsi Github project which uses Docker Compose to create a fully integrated Jitsi application stack which works out of the box. All internal container-to-container configurations are pre-configured.

As you can see from the picture below, the only network ports that need to be accessible from the outside are the HTTPS port (TCP port 443) of your webserver, UDP port 10000 for the WebRTC (video) connections and optionally (not discussed in my article) UDP port range 20000 – 20050 for allowing VOIP telephones to take part in Jitsi meetings.

These ports need to be opened in your server firewall.
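If you maintain your firewall with a plain iptables script, rules along these lines would do it (a sketch only; adapt them to your own firewall tooling, and the last rule is only needed for the optional VOIP telephony range):

# iptables -A INPUT -p tcp --dport 443 -j ACCEPT
# iptables -A INPUT -p udp --dport 10000 -j ACCEPT
# iptables -A INPUT -p udp --dport 20000:20050 -j ACCEPT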

Installing docker-jitsi-meet is relatively straightforward if you go to the quick-start page and follow the instructions to the letter. Integrating Jitsi with Keycloak involves using a connector which is not part of either program; I will show you how to connect them all.

You will be running all of this in Docker containers eventually, but there’s stuff to download, edit and create first. I did not say it was trivial…

Preamble

For the sake of this instruction, I will use the hostname “https://meet.darkstar.lan” as the URL where users will connect to their conferences; the server’s public IP address will be “10.10.10.10“.
Furthermore, “https://sso.meet.darkstar.lan” will be the URL for the connector between Jitsi and Keycloak and “https://sso.darkstar.lan/auth” is the Keycloak base URL (see Episode 2 for how we did the Keycloak setup).

Setting up your domain (which will hopefully be something else than “darkstar.lan”…) with new hostnames and then setting up web servers for the hostnames in that domain is an exercise left to the reader. Before continuing, please ensure that your equivalents for the following two hosts have a web server running. They don’t have to serve any content yet but we will add some blocks of configuration to their VirtualHost definitions during the steps outlined in the remainder of this article:

  • meet.darkstar.lan
  • sso.meet.darkstar.lan

I expect that your Keycloak application is already running at your own real-life equivalent of https://sso.darkstar.lan/auth .

Using a  Let’s Encrypt SSL certificate to provide encrypted connections (HTTPS) to your webserver is documented in an earlier blog article.

Note that I am talking about webserver “hosts” but in fact, all of these are just virtual webservers running on the same machine, at the same IP address, served by the same Apache httpd program, but with different DNS entries. There is no need at all for multiple computers when setting up your Slackware Cloud server.

Initial Configuration

Download the tarball of the latest stable release from https://github.com/jitsi/docker-jitsi-meet/releases/latest and extract it into the “/usr/local/” directory. Basically any directory will do, but I am already backing up /usr/local so the Jitsi stuff will automatically be taken into backup with all the rest.
At the moment of writing, the latest stable version number is ‘6826‘.
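For example, using GitHub’s standard tag-tarball URL scheme (a sketch; adapt the version number to whatever is the current stable release):

# cd /usr/local
# wget https://github.com/jitsi/docker-jitsi-meet/archive/refs/tags/stable-6826.tar.gz
# tar xf stable-6826.tar.gz

Which means, after extracting the tarball we do: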

cd /usr/local/docker-jitsi-meet-stable-6826/

A Jitsi Meet container stack for Docker Compose is defined in the file “docker-compose.yml” which you find in this directory.
In addition to this YAML file, the ‘docker-compose‘ program parses a file named “.env” if it exists in the same directory. Its content is used to initialize the container environment. You can for instance store passwords and other secrets in “.env” but also all the configuration variables that define how your stack will function.
Docker-Jitsi-Meet ships an example environment file containing every configurable option, but mostly commented-out.

Configuration:

We start with creating a configuration file “.env” from the example file “env.example“:

$ cp -i env.example .env

And then edit the “.env” file to define our desired configuration.

First of all,

  • Change “CONFIG=~/.jitsi-meet-cfg” to “CONFIG=/usr/share/docker/data/jitsi-meet-cfg” because I do not want application data in my user’s or root’s home directory.

Then the ones that are easy to understand:

  • Change “HTTP_PORT=8000” to “HTTP_PORT=8440” because port 8000 is used by far too many applications. Port 8440 is what we will use again in the reverse proxy configuration.
  • Change “TZ=UTC” to “TZ=Europe/Amsterdam” or whatever timezone your server is in.
  • Change “#PUBLIC_URL=https://meet.example.com” to “PUBLIC_URL=https://meet.darkstar.lan/” i.e. change it to the URL where you want people to connect. The connections will be handled by your Apache httpd server, which will manage the traffic back and forth between the Jitsi container and the client.
  • Change “#DOCKER_HOST_ADDRESS=192.168.1.1” to “DOCKER_HOST_ADDRESS=10.10.10.10” where of course “10.10.10.10” needs to be replaced by your server’s actual public Internet IP address.

Other settings that I would explicitly enable, even though their commented-out values are the default values anyway (a matter of taste: it avoids getting bitten by a future change in application default settings):

  • “ENABLE_LOBBY=1”; “ENABLE_PREJOIN_PAGE=1”; “ENABLE_WELCOME_PAGE=1”; “ENABLE_BREAKOUT_ROOMS=1”; “ENABLE_NOISY_MIC_DETECTION=1”.
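For reference, a summary sketch of the relevant un-commented lines in my “.env” after the edits discussed so far:

CONFIG=/usr/share/docker/data/jitsi-meet-cfg
HTTP_PORT=8440
TZ=Europe/Amsterdam
PUBLIC_URL=https://meet.darkstar.lan/
DOCKER_HOST_ADDRESS=10.10.10.10
ENABLE_LOBBY=1
ENABLE_PREJOIN_PAGE=1
ENABLE_WELCOME_PAGE=1
ENABLE_BREAKOUT_ROOMS=1
ENABLE_NOISY_MIC_DETECTION=1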

IPv6 Network consideration:

  • Change “#ENABLE_IPV6=1” to “ENABLE_IPV6=0” if your Docker installation has ipv6 disabled. This is required if your host server has ipv6 disabled.
    You can find out whether ipv6 is disabled in Docker, because in that case the file “/etc/docker/daemon.json” will contain this statement:

    { "ipv6": false }

Connection encryption:

  • Change “#DISABLE_HTTPS=1” to “DISABLE_HTTPS=1“. We disable HTTPS in the container because we will again use the Apache httpd reverse proxy to handle encryption.
  • Change “#ENABLE_LETSENCRYPT=1” to “ENABLE_LETSENCRYPT=0” because we do not want the container to handle automatic certificate renewals – it’s just too much of a hassle on a server where you already run a webserver on ports 80 and 443. Our Apache reverse proxy is equipped with a Let’s Encrypt SSL certificate and I want to handle SSL certificate renewals centrally – on the host.

Authentication:

The authentication will be offloaded to Keycloak using JSON Web Tokens aka ‘JWT‘ for the inter-process communication. The following variables in “.env” need to be changed:

  • “#ENABLE_AUTH=1” should become “ENABLE_AUTH=1”
  • “#ENABLE_GUESTS=1” should become “ENABLE_GUESTS=1”
  • “#AUTH_TYPE=internal” should become “AUTH_TYPE=jwt”
  • “TOKEN_AUTH_URL=https://auth.meet.example.com/{room}” should become “TOKEN_AUTH_URL=https://sso.meet.darkstar.lan/{room}”
  • “#JWT_APP_ID=my_jitsi_app_id” should become “JWT_APP_ID=jitsi”
  • “#JWT_APP_SECRET=my_jitsi_app_secret” should become “JWT_APP_SECRET=NmjPGpn+NjTe7oQUV9YqfaXiBULcsxYj”

To avoid confusion: my proposed value of “JWT_APP_SECRET” (the string “NmjPGpn+NjTe7oQUV9YqfaXiBULcsxYj”) is a value which you will generate yourself a few sections further down. It is a shared string which the two applications use to establish mutual trust in their communication.

We will re-visit the meaning and values of JWT_APP_ID and JWT_APP_SECRET in a moment.

When our modifications to the “.env” file are complete, we run a script which will fill the values for all PASSWORD variables with random strings (this can be done at any time really):

$ ./gen-passwords.sh

Adding Etherpad integration

Etherpad is an online editor for real-time collaboration. The Docker version of Jitsi Meet is able to integrate Etherpad into your video conferences. I am going to show you how to run Etherpad on your Slackware Cloud server and integrate collaborative editing into your video meetings.

The extracted tarball of ‘docker-jitsi-meet‘ in /usr/local/docker-jitsi-meet-stable-6826/ will have given you not only a docker-compose.yml file which starts Jitsi and its related containers, but also a file etherpad.yml. This is a Docker Compose file which starts an Etherpad container and connects it to the Jitsi Meet container stack.
FYI: you can use Docker Compose to process multiple YAML files in one command-line instead of implicitly processing only the ‘docker-compose.yml’ file (which happens if you do not explicitly mention the YAML filename in a “-f” parameter).
For instance if you wanted to start Jitsi and Etherpad together, you would use a command like this, using two “-f” parameters to specify the two YAML files:

# docker-compose -f docker-compose.yml -f etherpad.yml up -d

But I found out the hard way that this is risky.
Because sometime in the future you may want to bring that container stack down, for instance to upgrade Jitsi Meet to the latest version. If you forget that you had actually started two stacks (I consider ‘etherpad.yml‘ to be the source of a second stack) and you simply run “docker-compose down” in the directory… then only the Jitsi Meet stack will be brought down and Etherpad will happily keep running.
To protect myself from my future self, I have copied the content of ‘etherpad.yml‘ and added it to the bottom of ‘docker-compose.yml‘, so that I can simply run:

# docker-compose up -d

I leave it up to you to pick either scenario. Whatever works best for you.

Now on to the stuff that needs fixing because the standard configuration will not result in a working Etherpad integration.
First of all, add a “ports” configuration to expose the Etherpad port outside of the container. This is how that looks in the YAML file:

# Etherpad: real-time collaborative document editing
etherpad:
    image: etherpad/etherpad:1.8.6
    restart: ${RESTART_POLICY}
    ports:
        - '127.0.0.1:9001:9001'
    environment:
        - TITLE=${ETHERPAD_TITLE}
        - DEFAULT_PAD_TEXT=${ETHERPAD_DEFAULT_PAD_TEXT}
        - SKIN_NAME=${ETHERPAD_SKIN_NAME}
        - SKIN_VARIANTS=${ETHERPAD_SKIN_VARIANTS}
    networks:
        meet.jitsi:
            aliases:
                - etherpad.meet.jitsi

You will also have to edit the “.env” file a bit more. Look for the ETHERPAD related variables and set them like so:

# Set etherpad-lite URL in docker local network (uncomment to enable)
ETHERPAD_URL_BASE=http://etherpad.meet.jitsi:9001
# Set etherpad-lite public URL, including /p/ pad path fragment (uncomment to enable)
ETHERPAD_PUBLIC_URL=https://meet.darkstar.lan/pad/p/
# Name your etherpad instance!
ETHERPAD_TITLE=Slackware EtherPad Chat
# The default text of a pad
ETHERPAD_DEFAULT_PAD_TEXT="Welcome to Slackware Web Chat!\n\n"

The most important setting is “https://meet.darkstar.lan/pad/p/”: this is the external URL where we will expose our Etherpad. Since the Docker container exposes Etherpad only at the localhost address “127.0.0.1:9001” we need to setup yet another Apache reverse proxy. See the section “Apache reverse proxy setup” below.
There is one potential snag and you have to consider the implications: in the above proposed setup we expose Etherpad in the “/pad/” subdirectory of our Jitsi Meet server. But the Jitsi conference rooms are also exposed as a subdirectory, only without the trailing slash. Which means everything will work just fine as long as nobody decides to call her conference room “pad” – that can lead to unexpected side effects. You could remedy that by choosing a more complex string than “/pad/” for Etherpad, or else setup a separate web host (for instance “etherpad.darkstar.lan“) just for Etherpad.

In any case, with all the preliminaries taken care of, you can continue with the next sections of the article.
Note: After starting the containers, you will have to do one last edit in the configuration of Jitsi Meet to actually make Etherpad available in your videomeetings. See the section “Fixing Etherpad integration” below.

I am still investigating the integration of Keycloak authentication with Etherpad. Once I am sure I have a working setup, I will do a write-up on the subject in a future article in this series. In the meantime, you need to realize that your Etherpad is publicly accessible.

Creating application directories

The various Docker containers that make up Docker-Jitsi-Meet need to write data which should persist across reboots. The “CONFIG” variable in “.env” points to the root of that directory structure and we need to create the empty directory tree manually before firing up the containers.
Using one smart command which will be expanded by Bash to a lot of ‘mkdir‘ commands:

# mkdir -p /usr/share/docker/data/jitsi-meet-cfg/{web/letsencrypt,transcripts,prosody/config,prosody/prosody-plugins-custom,jicofo,jvb,jigasi,jibri}

Starting Jitsi Meet

# cd /usr/local/docker-jitsi-meet-stable-*
# docker-compose up -d

With an output that will look somewhat like:

Creating network "docker-jitsi-meet-stable-4548-1_meet.jitsi" with the default driver
Creating docker-jitsi-meet-stable-4548-1_prosody_1 ... done
Creating docker-jitsi-meet-stable-4548-1_web_1 ... done
Creating docker-jitsi-meet-stable-4548-1_jicofo_1 ...
Creating docker-jitsi-meet-stable-4548-1_jvb_1 ...
...
Pulling web (jitsi/web:stable-4548-1)...
stable-4548-1: Pulling from jitsi/web
b248fa9f6d2a: Pull complete
173b15edefe3: Pull complete
3242417dae3a: Pull complete
331e7c5436be: Pull complete
6418fea5411e: Pull complete
0123aaecd2d8: Pull complete
bd0655288f32: Pull complete
f2905e1ad808: Pull complete
8bcc7f5a0af7: Pull complete
20878400e460: Extracting [====================================>              ]  84.67MB/114.9MB
..... etcetera

And that’s it. Our Jitsi Meet video conferencing platform is up and running.
But it is not yet accessible: we still need to connect the container stack to the outside world. This is achieved by adding an Apache httpd reverse proxy between our Docker stack and the users. See below!
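Before setting up that proxy, a quick sanity check (a sketch, assuming you have curl installed on the host) confirms that the web container answers on the port we configured in “.env”:

# curl -sI http://127.0.0.1:8440/

This should return an HTTP status line and response headers from the Jitsi web container.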


Considerations about the “.env” file

Note that the “.env” file is only used the very first time ‘docker-compose‘ starts up your docker-jitsi-meet container stack, in order to populate /usr/share/docker/data/jitsi-meet-cfg/ and its subdirectories.

After that initial start of the docker-jitsi-meet container stack you can tweak your setup by editing files in the /usr/share/docker/data/jitsi-meet-cfg/ directory tree, since these directories are mounted inside the various containers that make up Docker-Jitsi-Meet.
But if you ever edit that “.env” file again… you need to remove and re-create the directories below /usr/share/docker/data/jitsi-meet-cfg/ and restart the container stack.

NOTE: ‘docker-compose stop‘ stops all containers in the stack which was originally created by the ‘docker-compose up -d‘ command. Using ‘down‘ instead of ‘stop‘ will additionally remove containers and networks as defined in the Compose file(s). After using ‘down‘ you would have to use ‘up -d‘ instead of ‘start‘ to bring the stack back online.

This is how you deal with “.env” configuration changes:

# cd /usr/local/docker-jitsi-meet-stable-*
# docker-compose stop
# vi .env
# ... make your changes
# rm -rf /usr/share/docker/data/jitsi-meet-cfg/
# mkdir -p /usr/share/docker/data/jitsi-meet-cfg/{web/letsencrypt,transcripts,prosody/config,prosody/prosody-plugins-custom,jicofo,jvb,jigasi,jibri}
# docker-compose start

Upgrading Docker-Jitsi-Meet

You don’t need to follow the above process if you want to upgrade Docker-Jitsi-Meet to the latest stable release as part of life cycle management, but with an unchanged “.env” file. In such a case, you simply execute:

# cd /usr/local/docker-jitsi-meet-stable-*
# docker-compose down
# docker-compose pull
# docker-compose up -d

Apache reverse proxy setup

We need to connect the users of our Jitsi and Etherpad services to the containers. Since these containers are exposed by Docker only at the loopback address (127.0.0.1 aka localhost), we use Apache httpd’s ‘reverse proxy‘ feature.

These three blocks of text need to be added to the VirtualHost definition for your “meet.darkstar.lan” webserver so that it can act as a reverse proxy and connect your users to the Docker Jitsi Meet and Etherpad containers:

Generic block:

SSLProxyEngine on
RequestHeader set X-Forwarded-Proto "https"
ProxyTimeout 900
ProxyVia On
ProxyRequests Off
ProxyPreserveHost On
Options FollowSymLinks MultiViews
AllowOverride All
Order allow,deny
allow from all

Specific to Jitsi Meet:

<Location />
    ProxyPass http://127.0.0.1:8440/
    ProxyPassReverse http://127.0.0.1:8440/
</Location>
# Do not forget WebSocket proxy:
RewriteEngine on
RewriteCond %{HTTP:Connection} Upgrade [NC]
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteRule ^/?(.*) "ws://127.0.0.1:8440/$1" [P,L]

And specific to Etherpad:

<Location /pad/>
    ProxyPass http://127.0.0.1:9001/ retry=0 timeout=30
    ProxyPassReverse http://127.0.0.1:9001/
    AddOutputFilterByType SUBSTITUTE text/html
    Substitute "s|meet.darkstar.lan/|meet.darkstar.lan/pad/|i" 
</Location>
<Location /pad/socket.io>
    # This is needed to handle websocket transport through the proxy, since
    # etherpad does not use a specific sub-folder, such as /ws/
    # to handle this kind of traffic.
    RewriteEngine On
    RewriteCond %{QUERY_STRING} transport=websocket [NC]
    RewriteRule /(.*) ws://127.0.0.1:9001/socket.io/$1 [P,L]
    ProxyPass http://127.0.0.1:9001/socket.io retry=0 timeout=30
    ProxyPassReverse http://127.0.0.1:9001/socket.io
    AddOutputFilterByType SUBSTITUTE text/html
    Substitute "s|meet.darkstar.lan/|meet.darkstar.lan/pad/|i" 
</Location>

In “127.0.0.1:8440” you will recognize the TCP port 8440 which we configured for the Jitsi container in the “.env” file earlier. The “127.0.0.1:9001” corresponds to the port 9001 which we exposed explicitly in the ‘docker-compose.yml‘ file for the Etherpad service.

After adding this reverse proxy configuration and restarting Apache httpd, your video conference server will be publicly accessible at https://meet.darkstar.lan/ .

Fixing Etherpad integration

I told you earlier that you needed to make a final edit after the Jitsi Meet stack is up & running to fix the Etherpad integration.
Open the stack’s global config file “/usr/share/docker/data/jitsi-meet-cfg/web/config.js” (below the CONFIG directory we defined in “.env”) in your editor and look for this section of text:

// If set, add a "Open shared document" link to the bottom right menu that
// will open an etherpad document.
// etherpad_base: 'https://meet.darkstar.lan/pad/p/',

You need to un-comment the last line so that this section looks like:

// If set, add a "Open shared document" link to the bottom right menu that
// will open an etherpad document.
etherpad_base: 'https://meet.darkstar.lan/pad/p/',

It’s a long-standing bug apparently.
Now, when you join a Jitsi Meeting, the menu which opens when you click the three-dots “more actions” menu in the bar at the bottom of your screen, will contain an item “Open shared document“:

If you select this, your video will be replaced by an Etherpad “pad” with the name of your Jitsi meeting room.

Externally i.e. outside of the Jitsi videomeeting, your Etherpad ‘pad‘ will be available as “https://meet.darkstar.lan/pad/p/jitsiroom” where “jitsiroom” is the name you gave your Jitsi videomeeting aka ‘room‘. This means that people outside of your videomeeting can still collaborate with you in real-time.

Network troubleshooting

Docker’s own dynamic management of iptables chains and rulesets will be thwarted if you decide to restart your host firewall. The custom Docker chains disappear and the docker daemon gets confused. If you get these errors in logfiles when starting the Docker-Jitsi-Meet containers, simply restart the docker daemon itself (/etc/rc.d/rc.docker restart):

> driver failed programming external connectivity on endpoint docker-jitsi-meet
> iptables failed
> iptables: No chain/target/match by that name

Creating internal Jitsi accounts

Just for reference, in case you want to play with Jitsi before integrating it with Keycloak.
Internal Jitsi users must be created with the “prosodyctl” utility in the prosody container.

In order to run that command, you need to first start a shell in the corresponding container – and you need to do this from within the extracted tarball directory “/usr/local/docker-jitsi-meet-stable-*“:

# cd /usr/local/docker-jitsi-meet-stable-*
# docker-compose exec prosody /bin/bash

Once you are at the prompt of that shell in the container, run the following command to create a user:

> prosodyctl --config /config/prosody.cfg.lua register TheDesiredUsername meet.jitsi TheDesiredPassword

Note that the command produces no output. Example for a new user ‘alien‘:

> prosodyctl --config /config/prosody.cfg.lua register alien meet.jitsi WelcomeBOB!

Now user “alien” will be able to login to Jitsi Meet and start a video conference.
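To remove such an internal account again, prosodyctl’s standard ‘deluser‘ command should do it (a sketch; run this in the same container shell, and note that ‘deluser‘ takes a full JID):

> prosodyctl --config /config/prosody.cfg.lua deluser alien@meet.jitsi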

Connecting Jitsi and Keycloak

The goal is of course to move to a Single Sign On solution instead of using local accounts. Jitsi supports JWT tokens which it should get from an OAuth/OpenID provider. We have Keycloak lined up for that, since it supports OAuth, OpenID, SAML and more.

Adding jitsi-keycloak

Using Keycloak as OAuth provider for Jitsi Meet is not directly possible, since unfortunately Keycloak’s JWT token is not 100% compatible with Jitsi. So a ‘middleware‘ is needed, and jitsi-keycloak fills that gap.

We will download the middleware from their git repository and setup a local directory below “/usr/share/docker/data” where we have been storing configurations for all our applications so far. All we are going to use from that repository checkout is the Docker Compose file you can find in there. The actual ‘jitsi-keycloak‘ middleware will eventually be running as yet another Docker container.

# cd /usr/local/
# git clone https://github.com/d3473r/jitsi-keycloak jitsi-keycloak
# mkdir -p /usr/share/docker/data/jitsi-keycloak/config
# cp ./jitsi-keycloak/example/docker-compose.yml /usr/share/docker/data/jitsi-keycloak/

Edit our working copy ‘/usr/share/docker/data/jitsi-keycloak/docker-compose.yml‘ to provide the correct environment variables for our already running Jitsi and Keycloak containers:

# --- start ---
version: '3'

services:
    jitsi-keycloak:
        image: d3473r/jitsi-keycloak
        container_name: jitsi-keycloak
        hostname: jitsi-keycloak
        restart: always
        environment:
            JITSI_SECRET: NmjPGpn+NjTe7oQUV9YqfaXiBULcsxYj
            DEFAULT_ROOM: welcome
            JITSI_URL: https://meet.darkstar.lan/
            JITSI_SUB: meet.darkstar.lan
        volumes:
            - /usr/share/docker/data/jitsi-keycloak/config:/config
        ports:
            - "3000:3000"
        networks:
            keycloak0.lan:
                ipv4_address: 172.20.0.6
                aliases:
                    - jitsi-keycloak.keycloak0.lan

networks:
    keycloak0.lan:
        external: true
# --- end ---

The string value for the JITSI_SECRET variable needs to be the same string we used in the definition of the Jitsi container earlier, where the variable is called JWT_APP_SECRET.

Hint: in Bash you can create a random 32 character string like this:

$ cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1
BySOoKBDIC1NWfeYpktvexJIqOAcAMEt

If you have nodejs installed, generate a random ‘secret’ string using this ‘node’ command:

$ node -e "console.log(require('crypto').randomBytes(24).toString('base64'));"
NmjPGpn+NjTe7oQUV9YqfaXiBULcsxYj
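And if you prefer openssl, this produces a comparable 32-character base64 secret:

$ openssl rand -base64 24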

Configuration of jitsi-keycloak in the Keycloak Admin console

Point your browser to the Keycloak Admin console https://sso.darkstar.lan/auth/admin/ to start the configuration process.

Add a public openid-connect client in the ‘foundation‘ Keycloak realm (the realm where you created your users in the previous Episode of this article series):

  • Select ‘foundation‘ realm; click on ‘Clients‘ and then click ‘Create‘ button.
    • ‘Client ID‘ = “jitsi”
    • ‘Client Protocol‘ = “openid-connect” (the default)
    • Save.
  • Also in ‘Settings‘, tell Keycloak from where this client may be used.
    Our jitsi-keycloak container will be running at https://sso.meet.darkstar.lan . Therefore we add

    • ‘Valid Redirect URIs‘ = https://sso.meet.darkstar.lan/*
    • ‘Web Origins‘ = https://sso.meet.darkstar.lan
    • Save.
  • Go to ‘Installation‘ tab to download the ‘keycloak.json‘ file for this new client:
    • ‘Format Option‘ = “Keycloak OIDC JSON”
    • Click ‘Download‘ which downloads a file “keycloak.json” with the following content:
# ---
{
    "realm": "foundation",
    "auth-server-url": "https://sso.darkstar.lan/auth",
    "ssl-required": "external",
    "resource": "jitsi",
    "public-client": true,
    "confidential-port": 0
}
# ---

Remaining configuration done in jitsi-keycloak

Back at your server’s shell prompt again, do as follows:

Copy the downloaded “keycloak.json” file into the ‘/config‘ directory of jitsi-keycloak (the container’s /config is exposed in the host filesystem as /usr/share/docker/data/jitsi-keycloak/config).

# cp ~/Download/keycloak.json /usr/share/docker/data/jitsi-keycloak/config/

Start the jitsi-keycloak container in the directory where we have our tailored ‘docker-compose.yml‘ file:

# cd /usr/share/docker/data/jitsi-keycloak
# docker-compose up -d

Once the container is running, we make jitsi-keycloak available at https://sso.meet.darkstar.lan/ using a reverse-proxy setup (jitsi-keycloak will not work in a sub-folder).
Add these reverse proxy lines to your VirtualHost definition of the “sso.meet.darkstar.lan” web site configuration and restart httpd:

# ---
# Reverse proxy to jitsi-keycloak Docker container:
SSLProxyEngine On
SSLProxyCheckPeerCN on
SSLProxyCheckPeerExpire on
RequestHeader set X-Forwarded-Proto "https"
RequestHeader set X-Forwarded-Port "443"

<Location />
    AllowOverride None
    Require all granted
    Order allow,deny
    Allow from all
</Location>

ProxyPreserveHost On
ProxyRequests Off
ProxyVia on
ProxyAddHeaders On
AllowEncodedSlashes NoDecode

# Jitsi-keycloak:
ProxyPass / http://127.0.0.1:3000/
ProxyPassReverse / http://127.0.0.1:3000/
# ---

Configure docker-jitsi-meet for use of jitsi-keycloak

Actually, you have already done all the correct changes which are needed in the ‘.env‘ file for Docker Compose!
The docker-jitsi-meet configurations that are relevant for jitsi-keycloak are as follows:

ENABLE_AUTH=1
AUTH_TYPE=jwt
JWT_APP_ID=jitsi
JWT_APP_SECRET=NmjPGpn+NjTe7oQUV9YqfaXiBULcsxYj
# To enable an automatic redirect from Jitsi to the Keycloak login page:
TOKEN_AUTH_URL=https://sso.meet.darkstar.lan/{room}

The values for ‘JWT_APP_SECRET‘ and ‘JITSI_SECRET‘ must be identical, and the value of ‘JWT_APP_ID‘ must be equal to “jitsi“.
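A quick way to double-check the Docker-Jitsi-Meet side of this contract (a sketch using plain grep on the “.env” file):

# cd /usr/local/docker-jitsi-meet-stable-*
# grep -E '^(ENABLE_AUTH|AUTH_TYPE|JWT_APP_ID|JWT_APP_SECRET|TOKEN_AUTH_URL)=' .env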

Firing up the bbq

With all the prep work completed and the containers running, we can enjoy the new online video conferencing platform we now operate for friends and family.

So, how does this actually look in practice? I’ll share a couple of screenshots from a Jitsi Meet session that I setup. Look at how cool it looks (and not just because of the screenshot of my den and the Slackware hoodie I am wearing…)

The Jitsi Meet welcome screen:

Device settings:

Joining a meeting:

Logging in via Keycloak SSO, you’ll notice that I have configured 2-Factor Authentication for my account:

After having logged in, I am back at the “join meeting screen” but now with my name written as Keycloak knows it (“Eric Hameleers” instead of “Alien BOB“) and I need to click one more time on the “Join” button.
Then I am participating in the meeting as the moderator.
You’ve probably noticed that I flipped my camera view here. I also added one ‘break-out room‘ to allow for separate discussions to take place outside of the main room:

And if you are not the moderator but a guest who received the link to this meeting, this is what you’ll see at first:

Cool, eh?

Thanks

… again for taking the time to read through another lengthy article. Share your feedback in the comments section below, if you actually implemented Jitsi Meet on your own server.


Attribution

The Docker-Jitsi-Meet architecture image was taken from Jitsi’s github site.
