A month full of interesting package updates in my Slackware package repositories. I have not blogged about them, because of a busy work schedule, but here are the highlights.
Note that you can subscribe your feedreader to my RSS feeds (regular and restricted) so that you never miss a package update!
Ardour
After more than two years of development since 6.0 was released in May 2020, a new major update for Ardour finally became available last week. Packages are available for 32bit and 64bit Slackware 15.0 and -current.
Ardour 7.0 comes with lots of new features, and Unfa goes in-depth in this YouTube video:
Avidemux
Avidemux 2.8.1 was released in September, and I missed the announcement. Fortunately I was alerted to it today by a Slackware user who commented on the blog. These packages are found in my restricted repository because they contain AAC encoder libraries, the code for which is patent-encumbered in the United States.
For the 32bit package I had to forcibly disable SSE support in the soundtouch library; if anyone comes across a patch that fixes the compilation error, let me know. I guess nobody runs test builds of Avidemux on a 32bit OS anymore.
Chromium
I uploaded three consecutive updates for Chromium 106 (regular as well as un-googled) during the last month – did anyone notice?
As usual, any update to Chromium is a must-do, to eradicate any vulnerabilities that allow online hackers to own your computer. Again, subscribing to my repository’s RSS feed will alert you to updates immediately.
Docker
My four Docker related packages (runc, containerd, docker and docker-compose, you don’t need any other package) were also updated to their latest releases last week.
A note: I provide 32bit packages for Docker, even though that is supposed to not work. At least, it is not supported by the developers. I wonder: since I tested the 32bit packages and they actually do work (I can run 32bit containers on a 32bit host), is there anyone who uses these? Or should I skip 32bit builds of future Docker releases altogether? Let me know.
LibreOffice
LibreOffice 7.4.2 was released last week and I uploaded a set of packages right before the weekend, so that you can enjoy the latest and greatest of this office suite on Slackware 15.0 and -current.
Note that I build these packages on Slackware 15.0 but also offer these same packages for installation on slackware-current. Since slackware-current ships newer (incompatible) versions of boost and icu4c, please also install boost-compat and icu4c-compat from my repository – these packages contain older versions of the boost and icu4c libraries and are a life-saver if you are running slackware-current. Note that this “compat” is not the same as “compat32” – which is the designation for the converted 32bit Slackware packages in my multilib package set!
OBS Studio
If you ever have a need for recording a live video using professional-grade software, Open Broadcaster Software released OBS Studio version 28.0.3 recently. If you want to broadcast a live stream of an event you are covering, OBS Studio plugs straight into Youtube, Facebook, Twitch or other streaming platforms. Packages are available for Slackware 15.0 and -current.
More…
I also had to update my Calibre, FFMpeg and Audacity packages for Slackware-current, after the recent incompatible upgrades of Qt5 and FFMpeg in the OS.
If you wonder ‘why ffmpeg, it’s part of Slackware already‘ – my ffmpeg package has several codecs enabled that the stock Slackware version does not offer, particularly the package in my restricted repository.
Hi all!
This is the 6th episode in a series I am writing about using Slackware as your private/personal ‘cloud server’. It is an unscheduled break-out topic to discuss an Etherpad server specifically.
Check out the list below which shows past, present and future episodes in the series, if the article has already been written you’ll be able to click on the subject.
The first episode also contains an introduction with some more detail about what you can expect.
These articles are living documents, i.e. based on readers’ feedback I may add, update or modify their content.
In Episode 3 (Video Conferencing) we setup a Jitsi Meet server in a Docker container stack which includes an Etherpad server for real-time document collaboration during a video meeting.
That Etherpad instance as configured by the Docker-Jitsi-Meet project is really only a demo setup. It uses a “dirtydb” JSON backend which is not meant for anything else but testing. It really needs a proper SQL database like MariaDB to power it. And you can’t export your documents from this demo Etherpad in any meaningful format.
Furthermore, this Etherpad container is not using our Keycloak IAM for authentication; everyone who knows the public URL can create a document, invite others and start writing. Even shared documents created in Jitsi meetings are not secure and anyone who guesses the room name has access to the Etherpad document.
This article means to set things right and configure Etherpad correctly, adding Whiteboard functionality as we go. I will also discuss the differences between our Jitsi-integrated Etherpad and running a standalone Etherpad server, in case you are not interested in video meetings and only want the text collaboration.
Preamble
This article assumes you have already setup an Etherpad in a Docker container as part of a Dockerized Jitsi Meet server (see Episode 3 in this series), and this Etherpad is running at a publicly accessible URL:
https://meet.darkstar.lan/pad/
We want to make it delegate user authentication to our OpenID Provider: Keycloak. That Keycloak service is available at:
https://sso.darkstar.lan/auth
If you are not interested in Jitsi Meet and only want to know how to run an Etherpad server, this article still contains everything you need but keep in mind that my examples are all assuming the above URL for the Etherpad. Adapt that URL to your own real-life situation. You may still have to setup an Apache webserver first, which serves an empty page at “https://meet.darkstar.lan/” but I will leave that to you.
Configuring MariaDB
By default, Etherpad will use a ‘DirtyDB’ JSON file-based backend. It is straight-forward to make it switch to, for instance, a MariaDB database server backend; we only need to provide the connection details for a pre-existing database.
Like with the previous articles we are using the Slackware MariaDB database server which is running on the host. First, we will create a database (etherpad_db), a database user (etherpad) and grant this user sufficient access to the database. Then we will use these database configuration values when editing the Docker-Jitsi-Meet files in order to change the Etherpad container properties.
This is how we create the database and the user (using a secure password string instead of ‘EPPASSWD‘ of course):
$ mysql -uroot -p
> CREATE DATABASE IF NOT EXISTS `etherpad_db` CHARACTER SET utf8 COLLATE utf8_unicode_ci;
> CREATE USER 'etherpad'@'localhost' IDENTIFIED BY 'EPPASSWD';
> CREATE USER 'etherpad'@'%' IDENTIFIED BY 'EPPASSWD';
> GRANT CREATE,ALTER,SELECT,INSERT,UPDATE,DELETE on `etherpad_db`.* to 'etherpad'@'localhost';
> GRANT CREATE,ALTER,SELECT,INSERT,UPDATE,DELETE on `etherpad_db`.* to 'etherpad'@'%';
> FLUSH PRIVILEGES;
> exit;
Note from the above SQL statements that we are allowing the ‘etherpad‘ user remote access to the database. This is needed because Etherpad in the Docker container contacts MariaDB via the network, using the IP address of the Docker network bridge in the Jitsi Meet container stack.
Reconfiguring Docker-Jitsi-Meet
My advice is to start with briefly re-visiting Episode 3 of the series and read back how we customized the ‘docker-compose.yml‘ and ‘.env‘ files in order to start up the Docker-Jitsi-Meet stack properly, because we are going to update these two files again.
This is what we need to change to make Etherpad connect to the external MariaDB database:
Relevant .env additions:
# MariaDB parameters for mysql DB instead of dirtydb
ETHERPAD_DB_TYPE=mysql
ETHERPAD_DB_HOST=172.20.0.1
ETHERPAD_DB_PORT=3306
ETHERPAD_DB_NAME=etherpad_db
ETHERPAD_DB_USER=etherpad
ETHERPAD_DB_PASS=EPPASSWD
ETHERPAD_DB_CHARSET=utf8
Relevant docker-compose additions:
In the ‘.env‘ file we defined the IP address for the database server (172.20.0.1). Etherpad is running inside a container, and its way out is through the default gateway of its Docker network. In order to have 172.20.0.1 as the gateway address, we need to configure the internal ‘meet.jitsi‘ network with a deterministic IP range so that we always know its gateway address. If we are going to give that network the IP range “172.20.0.0/16“, the “networks” statement all the way at the bottom needs to be changed from:
# Custom network so all services can communicate using a FQDN
networks:
    meet.jitsi:
to:
# Custom network so all services can communicate using a FQDN
networks:
    meet.jitsi:
        ipam:
            config:
                - subnet: 172.20.0.0/16
Use the variables we added to ‘.env’ to create an updated Etherpad container definition. Right underneath this line:
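- SKIN_VARIANTS=${ETHERPAD_SKIN_VARIANTS}
add the following lines to the ‘environment:‘ list of the ‘etherpad:‘ service. This is a sketch under the assumption that the etherpad/etherpad image reads the standard DB_* environment variables (they are referenced in its ‘settings.json.docker‘ – verify against your image’s documentation):
- DB_TYPE=${ETHERPAD_DB_TYPE}
- DB_HOST=${ETHERPAD_DB_HOST}
- DB_PORT=${ETHERPAD_DB_PORT}
- DB_NAME=${ETHERPAD_DB_NAME}
- DB_USER=${ETHERPAD_DB_USER}
- DB_PASS=${ETHERPAD_DB_PASS}
- DB_CHARSET=${ETHERPAD_DB_CHARSET}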
Etherpad has an admin console where you can manage its plugin configuration and other things. It will only be enabled if you configure an admin password, so let’s do that too.
This is what we need to change to enable the admin console for Etherpad:
Relevant .env additions:
# The password for Etherpad admin page
ETHERPAD_ADMIN_PASSWORD="my_secret_admin_pass"
Relevant docker-compose additions:
Right underneath this line:
- SKIN_VARIANTS=${ETHERPAD_SKIN_VARIANTS}
Add the following lines:
- ADMIN_PASSWORD=${ETHERPAD_ADMIN_PASSWORD}
Relevant Apache httpd additions:
If you would now access the URL for the admin console, https://meet.darkstar.lan/pad/admin/ , you would only see the message “Unauthorized“. The Etherpad expects a Basic Authentication hook in front of that page which passes the admin credentials on to the backend. So, we will add an ‘AuthType Basic‘ block to our Apache httpd configuration which will pop up a login dialog, and then add the admin user and its password “my_secret_admin_pass” to an htaccess file.
Remember, in Episode 3 we configured the Etherpad to be available at “https://meet.darkstar.lan/pad/” which means the admin console URL is “https://meet.darkstar.lan/pad/admin/”.
This is the block to add to your VirtualHost configuration for the Etherpad:
<Location /pad/admin>
    AuthType Basic
    AuthBasicAuthoritative off
    AuthName "Welcome to the Etherpad"
    AuthUserFile /etc/httpd/passwords/htaccess.epl
    Require valid-user
    Order Deny,Allow
    Deny from all
    Satisfy Any
</Location>
And then we still need to create that htaccess file using the ‘htpasswd‘ tool, like this:
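Assuming ‘admin’ as the username and the AuthUserFile location from the block above:
# mkdir -p /etc/httpd/passwords
# htpasswd -B -c /etc/httpd/passwords/htaccess.epl admin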
The “-B” parameter enforces the use of bcrypt encryption for passwords. This is currently considered to be very secure.
The above command will prompt for the password, and there you enter that “my_secret_admin_pass” string. The content of that file will look like this:
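Something like this, where the second field is the bcrypt hash of your password (shortened here):
admin:$2y$05$...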
The Docker Jitsi Meet container stack needs to be refreshed and restarted since we edited ‘.env’. I don’t want to repeat the detailed instructions here, so I refer you to the section “Considerations about the “.env” file” in Episode 3 of this article series. Do that now, and when the updated container stack is up and running again, continue here.
And after also restarting the Apache httpd and refreshing the URL “https://meet.darkstar.lan/pad/admin/” you will be asked to enter your admin credentials and you will end up in the Etherpad admin console.
The screenshot below does not reflect the status of the barebones Etherpad, by the way; you see a lot of installed plugins mentioned on the admin page. We will be installing those into the Etherpad image in one of the next sections.
At this stage, we have achieved a well-performing Etherpad installation with a SQL database back-end and an administrative web interface. The next step is to add authentication through an OpenID provider like our Keycloak IAM server.
Integrating with Keycloak IAM
The out-of-the-box Etherpad Docker container is not very functional. The above sections already showed how to replace the “DirtyDB” with a proper SQL database server like MariaDB. But the default image lacks a few useful plugins and a real document converter which allows Etherpad users to export their collaborative work in a proper document format instead of pure HTML.
Whatever plugins we add, at the very least we need to add a plugin which allows us to let the Etherpad authenticate against our Keycloak IAM server. This plugin needs to be inside the Docker image, we cannot use it outside the running container. There’s no other option than to create a custom Docker image for Etherpad. We use this as an opportunity to add some more plugins, as well as Abiword (to enable document export in Etherpad).
I’ll show how to announce Etherpad to Keycloak (we create a Client profile in Keycloak); then I’ll share the required configuration to be added to the Etherpad Docker files; and then I’ll show how to create a custom Docker image enriched with additional plugins which we will use instead of the basic image from Docker Hub.
Keycloak
First of all, let’s create a Client profile for Etherpad in Keycloak.
Login to the Keycloak Admin Console (https://sso.darkstar.lan/auth/admin/)
Select our ‘Foundation‘ realm from the dropdown at the left.
Under ‘Clients‘, create a new client:
‘Client ID’ = “etherpad”
‘Root URL’ = “https://meet.darkstar.lan/pad/ep_openid_connect/callback”
Note that for the Etherpad OIDC plugin ‘ep_openid_connect’ – see below – to work, the ‘Valid Redirect URIs’ (a.k.a. callback URL) must be the concatenation of the Etherpad base URL (https://meet.darkstar.lan/pad/) plus “/ep_openid_connect/callback“. When setting the ‘Root URL‘ to the above value, the ‘Redirect URIs‘ will automatically also be set correctly to “https://meet.darkstar.lan/pad/ep_openid_connect/callback/*“
Save.
In the ‘Settings‘ tab, change:
‘Access Type’ = “confidential” (default is “public”)
Save.
Go to the ‘Credentials‘ tab
Make sure that ‘Client Authenticator‘ is set to “Client Id and Secret“
Copy the value of the ‘Secret‘, which we will use later in the Etherpad connector; the Secret will look somewhat like this:
“2jnc8H6RH9jIYMXExUHA7XF7uD8YKIRs“.
Keycloak configuration being completed, we can turn our attention to the connector between Etherpad and Keycloak.
EP_openid_connect
We will use the Etherpad plugin ep_openid_connect which I already briefly mentioned earlier. This plugin provides the needed OpenID client functionality to Etherpad.
When we add this plugin to the Etherpad Docker image we need to be able to configure it via the ‘docker-compose.yml‘ and ‘.env‘ files of Docker-Jitsi-Meet. The existing configuration files in the repository for the docker-jitsi-meet stack are just meant to make the basic Etherpad work, so we need to add more parameters to configure our custom Etherpad properly.
I will show you what you need to add, and where.
The ‘ep_openid_connect’ plugin expects an ‘ep_openid_connect’ block in the ‘settings.json’ file (we will get to that file in the next section). Since that file is JSON-formatted, we arrive at the following structure:
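Filled in with the values we use throughout this article, the block looks like this (a reconstruction; check the plugin’s README for the full list of options):
"ep_openid_connect": {
    "issuer": "https://sso.darkstar.lan/auth/realms/foundation",
    "client_id": "etherpad",
    "client_secret": "2jnc8H6RH9jIYMXExUHA7XF7uD8YKIRs",
    "base_url": "https://meet.darkstar.lan/pad/"
},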
The text string values are of course the relevant ones. This is the meaning of the parameters:
issuer: this is the string you obtain through Keycloak’s OpenID Discovery URL. Make sure you have ‘jq‘ installed and then run this command to obtain the value for ‘issuer‘:
$ curl https://sso.darkstar.lan/auth/realms/foundation/.well-known/openid-configuration | jq .issuer
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 5749 100 5749 0 0 6594 0 --:--:-- --:--:-- --:--:-- 6600
"https://sso.darkstar.lan/auth/realms/foundation"
In this case, you could easily have guessed the ‘issuer’ value, but using the above ‘well-known’ query URL will always get you the correct value.
client_id, client_secret: those are the same OAuth2 values obtained from Keycloak when creating the Etherpad Client profile as seen above.
base_url: this is the URL where Etherpad is externally accessible (https://meet.darkstar.lan/pad/ – see Episode 3).
Additionally, since we are now enforcing login, Etherpad’s ‘requireAuthentication‘ setting must be set to “true”. Note that the default setting is “false”; this is how that setting is defined in the Etherpad configuration:
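In the Etherpad ‘settings.json.docker‘ file, the definition looks like this (with “false” as the default value):
"requireAuthentication": "${REQUIRE_AUTHENTICATION:false}",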
We’ll just have to define a “true” value for that variable later on.
Note: Each configuration parameter can also be set via an environment variable, using the syntax "${ENV_VAR}" or "${ENV_VAR:default_value}". This ability is what we will use when updating the Docker Compose file for Jitsi Meet. We will not use the literal JSON block above, instead we will fill it with variable names and use our Docker Compose files to provide values for these variables. That way I am able to create a generic Docker image that I can upload to the Docker Hub and share with other people.
The file ‘settings.json.template‘ in the Etherpad repository has lots of examples.
Hold on to that thought for a minute while we proceed with creating our custom Etherpad Docker image, since we have all the data available to do this now. Once we have that image, we will once again return to the re-configuration of Docker Jitsi Meet and integrate our Etherpad with the Jitsi container stack.
Custom Etherpad Docker image
How to create a custom Docker image?
First we clone the “etherpad-lite” git repository. That contains a Dockerfile plus all the context that is needed by ‘docker build‘ to generate an image.
$ mkdir ~/docker-etherpad-slack
$ cd ~/docker-etherpad-slack
$ git clone https://github.com/ether/etherpad-lite .
There is one relevant configuration file in the root directory of the checked-out repository: ‘settings.json.docker‘. This file will be copied into the Etherpad Docker image and renamed to ‘settings.json‘ when we run the ‘docker build‘ command. Any plugin configuration we want to enable via environment variables needs to be present in this file.
The standard configurable parameters for the Etherpad are contained in that file, but our custom settings for the “ep_openid_connect” plugin are not. I already showed you how that block of configurable parameters looks in the previous section, and I promised to parametrize it. This is how the parameters look; we will give them values in the next section where we update the Docker Jitsi Meet configuration.
Relevant ‘settings.json.docker’ additions:
Support for OpenID Connect in Etherpad – add this JSON code:
"ep_openid_connect": {
"issuer": "${OIDC_ISSUER:undefined}",
"client_id": "${OIDC_CLIENT_ID:undefined}",
"client_secret": "${OIDC_CLIENT_SECRET:undefined}",
"base_url": "${OIDC_BASE_URL:undefined}"
},
We also add a connector for the WBO Whiteboard server (its setup is described in the next section below) to the Docker image: the plugin is called ‘ep_whiteboard‘ and needs the following JSON configuration block to be added:
"ep_draw": {
"host": "${WBO_HOST:undefined}"
},
Enable AbiWord in the configuration, since we are going to add it to the image. The full path to the ‘abiword‘ binary needs to be configured in ‘settings.json.docker‘.
Look up this line in the file:
"abiword": "${ABIWORD:null}",
and change it to:
"abiword": "${ABIWORD:/usr/bin/abiword}",
Then we build a new image, adding several useful (according to the developers) plugins, as well as the Abiword word processor.
I tag the resulting image as “liveslak/etherpad” so that I can upload (push) it to the Docker Hub later on:
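A sketch of that build command, under the assumption that your checkout’s Dockerfile supports the ‘ETHERPAD_PLUGINS‘ and ‘INSTALL_ABIWORD‘ build arguments (older Dockerfile revisions may need a manual edit to install AbiWord; the plugin list here is illustrative):
$ cd ~/docker-etherpad-slack
$ docker build \
    --build-arg ETHERPAD_PLUGINS="ep_openid_connect ep_whiteboard ep_webrtc" \
    --build-arg INSTALL_ABIWORD=true \
    --tag liveslak/etherpad:1.8.16 .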
This results in an image which is quite a bit larger (786 MB uncompressed) than the standard Etherpad image (474 MB uncompressed) because of the added functionality.
If you create the image on another computer and need to transfer it to your Slackware Cloud Server in order to use it there, you can save the image to a compressed tarball on the build machine, using docker commands:
$ docker save liveslak/etherpad | xz > etherpad-slack.tar.xz
You can use ‘rsync’ or ‘scp’ to transfer that tarball to your Cloud Server and then load it into the Docker environment there, also using docker commands so that you don’t need to know the intimate details of how Docker works with images:
$ cat etherpad-slack.tar.xz | docker load
I pushed my image to the Docker Hub, which means that you can use the Hub version of ‘liveslak/etherpad’. But you can just as well use your own locally generated etherpad image in the ‘docker run‘ commands that launch your Etherpad container.
When you have a local image called “liveslak/etherpad”, then Docker will not check for an online image called “liveslak/etherpad”. If you did not generate your own image, Docker will look for (and find) my image at the Hub (or at the private Registry you may have configured), so it will download and use that.
Setting up WBO Whiteboard
Etherpad will be even more attractive if it offers users a collaborative Whiteboard and not just a collaborative text editor.
Enter WBO, which is an actual drawing board with infinite canvas and real-time refresh for all users.
Its boards are persistent; if you re-visit a board later on, all your content will still be there. Look at the WBO demo site… amazing.
We will run WBO in its own Docker container and re-configure our Etherpad webserver with a reverse proxy so that WBO can be integrated into Etherpad through the ‘ep_whiteboard’ connector.
It’s not so complex actually.
Docker container
First, launch a Docker container running WBO. We ensure that the data of the whiteboards you will be creating are going to be stored persistently outside of the container, so let’s create that data directory first and ensure that the internal WBO user is able to write there (you may have a different preference for directory location):
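A sketch of these commands, assuming the upstream ‘lovasoa/wbo‘ image with its ‘/opt/app/server-data‘ data directory and UID 1000 for the internal WBO user (verify both against the image documentation). The container’s internal port 80 is published at loopback port 5001, which matches the reverse proxy definition below:
# mkdir -p /usr/share/docker/data/wbo
# chown -R 1000:1000 /usr/share/docker/data/wbo
# docker run -d --restart always --name wbo \
    -p 127.0.0.1:5001:80 \
    -v /usr/share/docker/data/wbo:/opt/app/server-data \
    lovasoa/wbo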
We make WBO available behind an Apache httpd reverse proxy which takes care of the encryption (https) using a Let’s Encrypt certificate.
Add the following block to your <VirtualHost></VirtualHost> definition of the server which also defines the reverse proxy for your Etherpad (which is https://meet.darkstar.lan/pad/ remember?):
# Reverse proxy for the WBO whiteboard Docker container:
<Location /whitepad/>
    ProxyPass http://127.0.0.1:5001/
    ProxyPassReverse http://127.0.0.1:5001/
</Location>
After restarting Apache httpd, your WBO whiteboard will be accessible via https://meet.darkstar.lan/whitepad/ . We will use the host and path of that URL (“meet.darkstar.lan/whitepad“) down below as the value for the ETHERPAD_WBO_HOST variable. Etherpad will prefix that text with “https://” and that prefix cannot be changed… hence the requirement for a reverse proxy that can handle the data encryption.
One caveat when you do this on your real-life internet-facing cloud server…
The Whiteboard server is accessible without authentication. It may be advisable to come up with a different path component than “/whitepad/“; you can think of something like a UUID-like string: “/8cd77cbe-a694-4390-800a-638c7cc05f49/“, as long as you use the same string in both places (reverse proxy and ETHERPAD_WBO_HOST definitions). Also, your board names are not visible anywhere unless you share their URLs with other people. So, it is a relatively safe environment.
Using the custom Etherpad with Jitsi Meet
If you followed Episode 3, you will have a directory “/usr/local/docker-jitsi-meet-stable-6826“, your version number may differ from my “6826“. Inside you will have your modified ‘docker-compose.yml‘ file.
We are going to edit two files: ‘.env‘ and ‘docker-compose.yml‘.
Relevant ‘.env’ additions:
In the ‘.env‘ file we define correct values for the variables we introduced earlier. You can add the following lines basically anywhere, but it is of course most readable if you copy them immediately after the other ETHERPAD_* variables you added earlier on for the MySQL database backend:
ETHERPAD_OIDC_ISSUER="https://sso.darkstar.lan/auth/realms/foundation"
ETHERPAD_OIDC_BASE_URL="https://meet.darkstar.lan/pad/"
ETHERPAD_OIDC_CLIENT_ID="etherpad"
ETHERPAD_OIDC_CLIENT_SECRET="2jnc8H6RH9jIYMXExUHA7XF7uD8YKIRs"
ETHERPAD_REQUIRE_AUTHENTICATION="true"
ETHERPAD_WBO_HOST="meet.darkstar.lan/whitepad"
Relevant ‘docker-compose.yml’ additions:
Add the following lines to the “etherpad:” section immediately below the MySQL database variable definitions you added earlier on in this Episode. You will notice the variable names we defined in the previous section when dealing with ‘ep_openid_connect‘:
- OIDC_ISSUER=${ETHERPAD_OIDC_ISSUER}
- OIDC_CLIENT_ID=${ETHERPAD_OIDC_CLIENT_ID}
- OIDC_CLIENT_SECRET=${ETHERPAD_OIDC_CLIENT_SECRET}
- OIDC_BASE_URL=${ETHERPAD_OIDC_BASE_URL}
- REQUIRE_AUTHENTICATION=${ETHERPAD_REQUIRE_AUTHENTICATION}
- WBO_HOST=${ETHERPAD_WBO_HOST}
More ‘docker-compose.yml’ updates: The “etherpad:” service definition in that YAML file contains the following reference to the Etherpad Docker image:
image: etherpad/etherpad:1.8.16
You need to change that line to:
image: liveslak/etherpad:1.8.16
…in order to use our custom Etherpad image instead of the default one.
The re-configuration is complete and since we modified the ‘.env‘ file again, we need to refresh and restart our Docker Jitsi Meet container stack again.
Note however, that at this point we have to perform this restart differently than mentioned earlier in this article. Since we are switching to a new Etherpad image, the container based on the old image needs to be removed also. For this scenario, please consult the detailed instructions in both sections “Considerations about the “.env” file” and “Upgrading Docker-Jitsi-Meet” in Episode 3 of this article series.
The complete set of steps to follow is a mix of both sections, and I share it with you for completeness’ sake:
# cd /usr/local/docker-jitsi-meet-stable-*
# docker-compose down
# rm -rf /usr/share/docker/data/jitsi-meet-cfg/
# mkdir -p /usr/share/docker/data/jitsi-meet-cfg/{web/letsencrypt,transcripts,prosody/config,prosody/prosody-plugins-custom,jicofo,jvb,jigasi,jibri}
# docker-compose pull
# docker-compose up -d
Don’t forget to remove the old, unused Etherpad image, because it is now wasting 474 MB of disk space (uncompressed).
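For instance, assuming the image tag from the original ‘docker-compose.yml‘:
# docker image rm etherpad/etherpad:1.8.16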
Summarizing
Even though we set it up as part of the Jitsi stack, we now have a standalone Etherpad running which requires you to login when you visit “https://meet.darkstar.lan/pad/”.
On the other hand, you can also access Etherpad via Jitsi Meet. What’s different?
When you start a Jitsi meeting, via “https://meet.darkstar.lan/” and then click on “Open shared document“, you are already authenticated against Keycloak and the Etherpad document will open for you right away, no second login required.
After login, you will be met with a much more powerful editor than the basic one that comes with Docker Jitsi Meet. You’ll notice the extended document export capability thanks to Abiword and the small video widget at the top for face-to-face communication thanks to the WebRTC plugin.
Happy collaborating!
Running the custom Etherpad standalone
If you are not interested in Jitsi Meet, this is the command to start the customized Etherpad container and make it listen at port 9001 of the loopback address:
# docker run -d -p 127.0.0.1:9001:9001 liveslak/etherpad
The Etherpad container is now accessible only on your computer by pointing your browser at http://localhost:9001/ . You still need to add an Apache reverse proxy definition to the VirtualHost site definition to make your Etherpad available for other users at https://meet.darkstar.lan/pad/ .
If you want to change the container’s behavior using the available variables as documented before, you can pass these to the ‘docker run‘ command using one or more “-e” parameters, like so (this example just enables the admin console):
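# docker run -d -p 127.0.0.1:9001:9001 \
    -e ADMIN_PASSWORD="my_secret_admin_pass" \
    liveslak/etherpad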
With additional environment variables you can enable more of the latent functionality. See the earlier sections of this article for all the relevant variables: those that enable the MySQL database backend; the one that enables the Whiteboard; those that enable the Keycloak authentication, etc.
Thanks
Etherpad with integrated Whiteboard can be a compelling solution for some user groups. Even without Jitsi Meet, you can jointly write and draw, save your work to your local hard drive, and you have voice & video in a small overlay if you need to discuss the proceedings.
I encourage you to try it out – with or without integration into Jitsi Meet, or even without Keycloak authentication if you want to offer this as a completely free and low-threshold service to your local community.
Let me know what you think of this Episode in the comments section below. The final Episode, how to setup your own private Docker image repository, will take some time to write… I have not yet started doing in-depth research on that topic. But the six available Episodes will hopefully keep you occupied for a while 🙂
Thanks for reading until the end.
Hi all!
This is already the third episode in a series of articles I am writing about using Slackware as your private/personal ‘cloud server’. Time flies when you’re having fun.
We’re still waiting for Slackware 15.0 and in the meantime, I thought I’d speed up the release of my article on Video Conferencing. My initial plan was to release one article per week after Slackware 15 had been made available. The latter still did not happen (unstuck in time again?) but then I realized that an article about Docker and another about Keycloak still won’t give you something tangible and productive to run and use. So here is Episode 3, a couple of days earlier than planned, to spend your lazy Sunday on: create your own video conferencing platform.
Episodes 4 and 5 won’t be far off, since I have already written those as well.
Check out the list below which shows past, present and future episodes in the series, if the article has already been written you’ll be able to click on the subject.
The first episode also contains an introduction with some more detail about what you can expect.
Episode 3 (this article): Video Conferencing. Setting up Jitsi Meet – the Open Source video conferencing platform. This makes us independent of cloud conferencing services like MS Teams, Zoom or Google Meet. The Jitsi login is offloaded to our Keycloak IAM provider.
Jitsi Meet on Docker
Preamble
Initial Configuration
Adding Etherpad integration
Creating application directories
Starting Jitsi Meet
Considerations about the “.env” file
Upgrading Docker-Jitsi-Meet
Apache reverse proxy setup
Fixing Etherpad integration
Network troubleshooting
Creating internal Jitsi accounts
Connecting Jitsi and Keycloak
Adding jitsi-keycloak
Configuration of jitsi-keycloak in the Keycloak Admin console
Remaining configuration done in jitsi-keycloak
Configure docker-jitsi-meet for use of jitsi-keycloak
Actually, my original interest in Docker was sparked in the beginning of 2020, when the Corona pandemic was new, everybody was afraid, and people were sent home to continue work and school activities from there.
One of the major challenges for people was to stay connected. Zoom went from a fairly obscure program to a hugely popular video conferencing platform in no time at all (until severe security flaws made a fair-sized dent in its reputation); Microsoft positioned its Teams platform as the successor of Skype but targets mostly corporate users; Google Hangouts became Google Meet and is nowadays the video conferencing platform of choice for all corporations that have not yet been caught in the Microsoft vendor lock-in.
None of these conferencing platforms are open source, all of them are fully cloud-hosted, and they are inseparable from privacy concerns. In addition, un-paid use of these platforms imposes limitations on the size and quality of your meetings. As a user, you do not have control at all.
Enter Jitsi, whose Jitsi Meet platform is available for everybody to use online for free and without restrictions. Not just free, but Open Source, end-to-end encrypted communication and you can host the complete infrastructure on hardware that you own and control.
People do not even have to create an account in order to participate – the organizer can share a URL with everyone they want to join a session.
Jitsi is not as widely known as Zoom, and that is a pity. Therefore this Episode in my Slackware Cloud Server series will focus on getting Jitsi Meet up and running on your server, and we will let login be handled by the Keycloak Identity and Access Management (IAM) tool which we have learnt to setup in the previous Episode.
In early 2020, when it became clear that our Slackware coreteam member Alphageek (Erik Jan Tromp) would not stay with us for long due to a terminal illness, I went looking for a private video conferencing platform for our Slackware team and found Jitsi Meet.
I had no success in getting it to work on my Slackware server unfortunately. Jitsi Meet is a complex product made of several independent pieces of software which need to be configured ‘just right‘ to make them work together properly. I failed. I was not able to make it work in time to let alphageek use it.
But I also noticed that Jitsi Meet was offered as a Docker-based solution. That was the start of a learning process full of blood, sweat & tears which culminated in this article series.
With this article I hope to give you a jump-start in getting your personal video conferencing platform up and running. I will focus on the basic required functionality but I will leave some of the more advanced scenarios for you to investigate: session recording; automatic subtitling of spoken word; integrating VOIP telephony; to name a few.
Jitsi Meet on Docker
Docker-Jitsi-Meet is a Jitsi Github project which uses Docker Compose to create a fully integrated Jitsi application stack which works out of the box. All internal container-to-container configurations are pre-configured.
As you can see from the picture below, the only network ports that need to be accessible from the outside are the HTTPS port (TCP port 443) of your webserver, UDP port 10000 for the WebRTC (video) connections and optionally (not discussed in my article) UDP port range 20000 – 20050 for allowing VOIP telephones to take part in Jitsi meetings.
These ports need to be opened in your server firewall.
Installing docker-jitsi-meet is relatively straight-forward if you go to the quick-start page and follow the instructions to the letter. Integrating Jitsi with Keycloak involves using a connector which is not part of either program; I will show you how to connect them all.
You will be running all of this in Docker containers eventually, but there’s stuff to download, edit and create first. I did not say it was trivial…
Preamble
For the sake of this instruction, I will use the hostname “https://meet.darkstar.lan” as the URL where users will connect to their conferences; the server’s public IP address will be “10.10.10.10“.
Furthermore, “https://sso.meet.darkstar.lan” will be the URL for the connector between Jitsi and Keycloak and “https://sso.darkstar.lan/auth” is the Keycloak base URL (see Episode 2 for how we did the Keycloak setup).
Setting up your domain (which will hopefully be something else than “darkstar.lan”…) with new hostnames and then setting up web servers for the hostnames in that domain is an exercise left to the reader. Before continuing, please ensure that your equivalents for the following two hosts have a web server running. They don’t have to serve any content yet but we will add some blocks of configuration to their VirtualHost definitions during the steps outlined in the remainder of this article:
meet.darkstar.lan
sso.meet.darkstar.lan
I expect that your Keycloak application is already running at your own real-life equivalent of https://sso.darkstar.lan/auth .
Note that I am talking about webserver “hosts” but in fact, all of these are just virtual webservers running on the same machine, at the same IP address, served by the same Apache httpd program, but with different DNS entries. There is no need at all for multiple computers when setting up your Slackware Cloud server.
Initial Configuration
Download and extract the tarball of the latest stable release: https://github.com/jitsi/docker-jitsi-meet/releases/latest into the “/usr/local/” directory. Basically any directory will do but I am already backing up /usr/local so the Jitsi stuff will automatically be taken into backup with all the rest.
At the moment of writing, the latest stable version number is ‘6826‘. Which means, after extracting the tarball we do:
cd /usr/local/docker-jitsi-meet-stable-6826/
A Jitsi Meet container stack for Docker Compose is defined in the file “docker-compose.yml” which you find in this directory.
In addition to this YAML file, the ‘docker-compose‘ program parses a file named “.env” if it exists in the same directory. Its content is used to initialize the container environment. You can for instance store passwords and other secrets in “.env” but also all the configuration variables that define how your stack will function.
Docker-Jitsi-Meet ships an example environment file containing every configurable option, but mostly commented-out.
Configuration:
We start with creating a configuration file “.env” from the example file “env.example“:
$ cp -i env.example .env
And then edit the “.env” file to define our desired configuration.
First of all,
Change “CONFIG=~/.jitsi-meet-cfg” to “CONFIG=/usr/share/docker/data/jitsi-meet-cfg” because I do not want application data in my user’s or root’s homedirectory.
Then the ones that are easy to understand:
Change “HTTP_PORT=8000” to “HTTP_PORT=8440” because port 8000 is used by far too many applications. Port 8440 is what we will use again in the reverse proxy configuration.
Change “TZ=UTC” to “TZ=Europe/Amsterdam” or whatever timezone your server is in.
Change “#PUBLIC_URL=https://meet.example.com” to “PUBLIC_URL=https://meet.darkstar.lan/” i.e. change it to the URL where you want people to connect. The connections will be handled by your Apache httpd server who will manage the traffic back and forth between Jitsi container and the client.
Change “#DOCKER_HOST_ADDRESS=192.168.1.1” to “DOCKER_HOST_ADDRESS=10.10.10.10” where of course “10.10.10.10” needs to be replaced by your server’s actual public Internet IP address.
Other settings that I would explicitly enable but their commented-out values are the default values anyway (matter of taste, it avoids getting bitten by a future change in application default settings):
Change “#ENABLE_IPV6=1” to “ENABLE_IPV6=0” if your Docker installation has ipv6 disabled. This is a requirement if your host server has ipv6 disabled.
You can find out whether ipv6 is disabled in Docker, because in that case the file “/etc/docker/daemon.json” will contain this statement:
{ "ipv6": false }
Connection encryption:
Change “#DISABLE_HTTPS=1” to “DISABLE_HTTPS=1“. We disable HTTPS in the container because we will again use the Apache httpd reverse proxy to handle encryption.
Change “#ENABLE_LETSENCRYPT=1” to “ENABLE_LETSENCRYPT=0” because we do not want the container to handle automatic certificate renewals – it’s just too much of a hassle on a server where you already run a webserver on ports 80 and 443. Our Apache reverse proxy is equipped with a Let’s Encrypt SSL certificate and I want to handle SSL certificate renewals centrally – on the host.
Authentication:
The authentication will be offloaded to Keycloak using JSON Web Tokens aka ‘JWT‘ for the inter-process communication. The following variables in “.env” need to be changed:
“#ENABLE_AUTH=1” should become “ENABLE_AUTH=1“
“#ENABLE_GUESTS=1” should become “ENABLE_GUESTS=1“
“#AUTH_TYPE=internal” should become “AUTH_TYPE=jwt“
“TOKEN_AUTH_URL=https://auth.meet.example.com/{room}” should become “TOKEN_AUTH_URL=https://sso.meet.darkstar.lan/{room}“
“#JWT_APP_ID=my_jitsi_app_id” should become “JWT_APP_ID=jitsi“
“#JWT_APP_SECRET=my_jitsi_app_secret” should become “JWT_APP_SECRET=NmjPGpn+NjTe7oQUV9YqfaXiBULcsxYj“
Actually, to avoid confusion: my proposed value of “JWT_APP_SECRET" (the string “NmjPGpn+NjTe7oQUV9YqfaXiBULcsxYj”) is a value which you will be generating yourself a few sections further down. It is a string which is used by two applications to establish mutual trust in their intercommunication.
We will re-visit the meaning and values of JWT_APP_ID and JWT_APP_SECRET in a moment.
When our modifications to the “.env” file are complete, we run a script which will fill the values for all PASSWORD variables with random strings (this can be done at any time really):
$ ./gen-passwords.sh
Note that in later versions of docker-jitsi-meet, the env.example file has become a lot smaller. Docker Jitsi has implemented all variables with default values. Beware that these defaults might not be working for your case!
The full documentation on configurable parameters is found at: https://jitsi.github.io/handbook/docs/devops-guide/devops-guide-docker
Adding Etherpad integration
Etherpad is an online editor for real-time collaboration. The Docker version of Jitsi Meet is able to integrate Etherpad into your video conferences. I am going to show you how to run Etherpad on your Slackware Cloud server and integrate collaborative editing into your video meetings.
The git checkout of ‘docker-jitsi-meet‘ into /usr/local/docker-jitsi-meet-stable-6826/ will have given you not only a docker-compose.yml file which starts Jitsi and its related containers, but also a file etherpad.yml. This is a Docker Compose file which starts an Etherpad container and connects it to the Jitsi Meet container stack.
FYI: you can use Docker Compose to process multiple YAML files in one command-line instead of implicitly processing only the ‘docker-compose.yml’ file (which happens if you do not explicitly mention the YAML filename in a “-f” parameter).
For instance if you wanted to start Jitsi and Etherpad together, you would use a command like this, using two “-f” parameters to specify the two YAML files:
# docker-compose -f docker-compose.yml -f etherpad.yml up -d
But I found out the hard way that this is risky.
Because sometime in the future you may want to bring that container stack down, for instance to upgrade Jitsi Meet to the latest version. If you forget that you had actually started two stacks (I consider the ‘etherpad.yml‘ as the source for a second stack) and you simply run “docker-compose down” in the directory… then only the Jitsi Meet stack will be brought down and Etherpad will happily keep running.
To protect myself from my future self, I have copied the content of ‘etherpad.yml‘ and added it to the bottom of ‘docker-compose.yml‘, so that I can simply run:
# docker-compose up -d
I leave it up to you to pick either scenario. Whatever works best for you.
Now on to the stuff that needs fixing because the standard configuration will not result in a working Etherpad integration.
First of all, add a “ports” configuration to expose the Etherpad port outside of the container. This is how that looks in the YAML file:
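A sketch of how the ‘etherpad:‘ service definition looks with the added ‘ports‘ section (the existing lines are abbreviated here with ‘...’):
etherpad:
    image: etherpad/etherpad:1.8.16
    ...
    ports:
        - '127.0.0.1:9001:9001'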
You will also have to edit the “.env” file a bit more. Look for the ETHERPAD related variables and set them like so:
# Set etherpad-lite URL in docker local network (uncomment to enable)
ETHERPAD_URL_BASE=http://etherpad.meet.jitsi:9001
# Set etherpad-lite public URL, including /p/ pad path fragment (uncomment to enable)
ETHERPAD_PUBLIC_URL=https://meet.darkstar.lan/pad/p/
# Name your etherpad instance!
ETHERPAD_TITLE=Slackware EtherPad Chat
# The default text of a pad
ETHERPAD_DEFAULT_PAD_TEXT="Welcome to Slackware Web Chat!\n\n"
The most important setting is the public Etherpad URL: “https://meet.darkstar.lan/pad/p/” . This is the external URL where we will expose our Etherpad. Since the Docker container exposes Etherpad only at the localhost address “127.0.0.1:9001” we need to setup yet another Apache reverse proxy. See the section “Apache reverse proxy setup” below.
There is one potential snag and you have to consider the implications: in the above proposed setup we expose Etherpad in the “/pad/” subdirectory of our Jitsi Meet server. But the Jitsi conference rooms also are exposed as a subdirectory, but then without the trailing slash. Which means everything will work just fine as long as nobody decides to call her conference room “pad” – that can lead to unexpected side effects. You could remedy that by choosing a more complex string than “/pad/” for Etherpad, or else setup a separate web host (for instance “etherpad.darkstar.lan“) just for Etherpad.
In any case, with all the preliminaries taken care of, you can continue with the next sections of the article.
Note: After starting the containers, you will have to do one last edit in the configuration of Jitsi Meet to actually make Etherpad available in your videomeetings. See the section “Fixing Etherpad integration” below.
I am still investigating the integration of Keycloak authentication with Etherpad. Once I am sure I have a working setup, I will do a write-up on the subject in a future article in this series. In the meantime, you need to realize that your Etherpad is publicly accessible.
Creating application directories
The various Docker containers that make up Docker-Jitsi-Meet need to write data which should persist across reboots. The “CONFIG” variable in “.env” points to the root of that directory structure and we need to create the empty directory tree manually before firing up the containers.
Using one smart command which will be expanded by Bash to a lot of ‘mkdir‘ commands:
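The command (it re-appears later in this article when we discuss re-configuration):
# mkdir -p /usr/share/docker/data/jitsi-meet-cfg/{web/letsencrypt,transcripts,prosody/config,prosody/prosody-plugins-custom,jicofo,jvb,jigasi,jibri}
Starting Jitsi Meet
With the configuration and the directory tree in place, bring up the container stack from within the extracted tarball directory:
# cd /usr/local/docker-jitsi-meet-stable-6826/
# docker-compose up -d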
And that’s it. Our Jitsi Meet video conferencing platform is up and running.
But it is not yet accessible: we still need to connect the container stack to the outside world. This is achieved by adding an Apache httpd reverse proxy between our Docker stack and the users. See below!
Considerations about the “.env” file
Note that the “.env” file is only used the very first time ‘docker-compose‘ starts up your docker-jitsi-meet container stack, in order to populate /usr/share/docker/data/jitsi-meet-cfg/ and its subdirectories.
After that initial start of the docker-jitsi-meet container stack you can tweak your setup by editing files in the /usr/share/docker/data/jitsi-meet-cfg/ directory tree, since these directories are mounted inside the various containers that make up Docker-Jitsi-Meet.
But if you ever edit that “.env” file again… you need to remove and re-create the directories below /usr/share/docker/data/jitsi-meet-cfg/ and restart the container stack.
NOTE: ‘docker-compose stop‘ stops all containers in the stack which was originally created by the ‘docker-compose up -d‘ command. Using ‘down‘ instead of ‘stop‘ will additionally remove containers and networks as defined in the Compose file(s). After using ‘down‘ you would have to use ‘up -d‘ instead of ‘start‘ to bring the stack back online.
This is how you deal with “.env” configuration changes:
# cd /usr/local/docker-jitsi-meet-stable-*
# docker-compose stop
# vi .env
# ... make your changes
# rm -rf /usr/share/docker/data/jitsi-meet-cfg/
# mkdir -p /usr/share/docker/data/jitsi-meet-cfg/{web/letsencrypt,transcripts,prosody/config,prosody/prosody-plugins-custom,jicofo,jvb,jigasi,jibri}
# docker-compose start
Upgrading Docker-Jitsi-Meet
You don’t need to follow the above process if you want to upgrade Docker-Jitsi-Meet to the latest stable release as part of life cycle management, but with an un-changed “.env” file. In such a case, you simply execute:
# cd /usr/local/docker-jitsi-meet-stable-*
# docker-compose down
# docker-compose pull
# docker-compose up -d
Apache reverse proxy setup
We need to connect the users of our Jitsi and Etherpad services to the containers. Since these containers are exposed by Docker only at the loopback address (127.0.0.1 aka localhost) we use the Apache httpd’s ‘reverse proxy‘ feature.
These three blocks of text need to be added to the VirtualHost definition for your “meet.darkstar.lan” webserver so that it can act as a reverse proxy and connects your users to the Docker Jitsi Meet and Etherpad containers:
Generic block:
SSLProxyEngine on
RequestHeader set X-Forwarded-Proto "https"
ProxyTimeout 900
ProxyVia On
ProxyRequests Off
ProxyPreserveHost On
Options FollowSymLinks MultiViews
AllowOverride All
Order allow,deny
allow from all
Specific to Jitsi Meet:
<Location />
    ProxyPass http://127.0.0.1:8440/
    ProxyPassReverse http://127.0.0.1:8440/
</Location>
# Do not forget WebSocket proxy:
RewriteEngine on
RewriteCond %{HTTP:Connection} Upgrade [NC]
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteRule ^/?(.*) "ws://127.0.0.1:8440/$1" [P,L]
And specific to Etherpad:
<Location /pad/>
    ProxyPass http://127.0.0.1:9001/ retry=0 timeout=30
    ProxyPassReverse http://127.0.0.1:9001/
    AddOutputFilterByType SUBSTITUTE text/html
    Substitute "s|meet.darkstar.lan/|meet.darkstar.lan/pad/|i"
</Location>
<Location /pad/socket.io>
    # This is needed to handle websocket transport through the proxy, since
    # etherpad does not use a specific sub-folder, such as /ws/
    # to handle this kind of traffic.
    RewriteEngine On
    RewriteCond %{QUERY_STRING} transport=websocket [NC]
    RewriteRule /(.*) ws://127.0.0.1:9001/socket.io/$1 [P,L]
    ProxyPass http://127.0.0.1:9001/socket.io retry=0 timeout=30
    ProxyPassReverse http://127.0.0.1:9001/socket.io
    AddOutputFilterByType SUBSTITUTE text/html
    Substitute "s|meet.darkstar.lan/|meet.darkstar.lan/pad/|i"
</Location>
In “127.0.0.1:8440” you will recognize the TCP port 8440 which we configured for the Jitsi container in the “.env" file earlier. The “127.0.0.1:9001” corresponds to the port 9001 which we exposed explicitly in the ‘docker-compose.yml‘ file for the Etherpad service.
After adding this reverse proxy configuration and restarting Apache httpd, your video conference server will be publicly accessible at https://meet.darkstar.lan/ .
Fixing Etherpad integration
I told you earlier that you needed to make a final edit after the Jitsi Meet stack is up & running to fix the Etherpad integration.
Open the stack’s global config file “/usr/share/docker/data/jitsi-meet-cfg/web/config.js” (below the CONFIG directory we defined in ‘.env‘) in your editor and look for this section of text:
// If set, add a "Open shared document" link to the bottom right menu that
// will open an etherpad document.
// etherpad_base: 'https://meet.darkstar.lan/pad/p/',
You need to un-comment the last line so that this section looks like:
// If set, add a "Open shared document" link to the bottom right menu that
// will open an etherpad document.
etherpad_base: 'https://meet.darkstar.lan/pad/p/',
It’s a long-standing bug apparently.
Note that in newer releases of docker-jitsi-meet, this manual edit in web/config.js is no longer needed for proper Etherpad integration; it is automatically added there now as: config.etherpad_base = 'https://meet.darkstar.lan/pad/p/';
The ‘ports’ section still needs to be added to the etherpad definition in our docker-compose.yml file.
Now, when you join a Jitsi Meeting, the menu which opens when you click the three-dots “more actions” button in the bar at the bottom of your screen will contain an item “Open shared document“:
If you select this, your video will be replaced by an Etherpad “pad” with the name of your Jitsi meeting room.
Externally i.e. outside of the Jitsi videomeeting, your Etherpad ‘pad‘ will be available as “https://meet.darkstar.lan/pad/p/jitsiroom” where “jitsiroom” is the name you gave your Jitsi videomeeting aka ‘room‘. This means that people outside of your videomeeting can still collaborate with you in real-time.
Network troubleshooting
Docker’s own dynamic management of iptables chains and rulesets will be thwarted if you decide to restart your host firewall. The custom Docker chains disappear and the docker daemon gets confused. If you get these errors in logfiles when starting the Docker-Jitsi-Meet containers, simply restart the docker daemon itself (/etc/rc.d/rc.docker restart):
> driver failed programming external connectivity on endpoint docker-jitsi-meet
> iptables failed
> iptables: No chain/target/match by that name
Creating internal Jitsi accounts
Just for reference, in case you want to play with Jitsi before integrating it with Keycloak.
Internal Jitsi users must be created with the “prosodyctl” utility in the prosody container.
In order to run that command, you need to first start a shell in the corresponding container – and you need to do this from within the extracted tarball directory “/usr/local/docker-jitsi-meet-stable-*“:
# cd /usr/local/docker-jitsi-meet-stable-*
# docker-compose exec prosody /bin/bash
Once you are at the prompt of that shell in the container, run the following command to create a user:
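The command follows the docker-jitsi-meet quickstart documentation; ‘alien’ is the desired username here, and the password is of course a placeholder:
prosodyctl --config /config/prosody.cfg.lua register alien meet.jitsi SecretPassword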
Now user “alien” will be able to login to Jitsi Meet and start a video conference.
Connecting Jitsi and Keycloak
The goal is of course to move to a Single Sign On solution instead of using local accounts. Jitsi supports JWT tokens which it should get from an OAuth/OpenID provider. We have Keycloak lined up for that, since it supports OAuth, OpenID, SAML and more.
Adding jitsi-keycloak
Using Keycloak as OAuth provider for Jitsi Meet is not directly possible, since unfortunately Keycloak’s JWT token is not 100% compatible with Jitsi. So a ‘middleware‘ is needed, and jitsi-keycloak fills that gap.
We will download the middleware from their git repository and setup a local directory below “/usr/share/docker/data” where we have been storing configurations for all our applications so far. All we are going to use from that repository checkout is the Docker Compose file you can find in there. The actual ‘jitsi-keycloak‘ middleware will eventually be running as yet another Docker container.
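A sketch of these steps (substitute the actual URL of the jitsi-keycloak git repository):
# mkdir -p /usr/share/docker/data/jitsi-keycloak/config
# cd /tmp
# git clone <jitsi-keycloak repository URL> jitsi-keycloak
# cp jitsi-keycloak/docker-compose.yml /usr/share/docker/data/jitsi-keycloak/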
Edit our working copy ‘/usr/share/docker/data/jitsi-keycloak/docker-compose.yml‘ to provide the correct environment variables for our instances of our already running Jitsi and Keycloak containers:
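A sketch of the relevant part of that file; only the values which are confirmed elsewhere in this article are spelled out (the JITSI_SECRET string, the loopback port 3000 that the reverse proxy below connects to, and the ‘/config‘ volume) – keep the remainder of the file as it was shipped:
services:
    jitsi-keycloak:
        image: <the image name as shipped in the repository's docker-compose.yml>
        restart: unless-stopped
        environment:
            JITSI_SECRET: 'NmjPGpn+NjTe7oQUV9YqfaXiBULcsxYj'
        ports:
            - '127.0.0.1:3000:3000'
        volumes:
            - ./config:/config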
The string value for the JITSI_SECRET variable needs to be the same string we used in the definition of the Jitsi container earlier, where the variable is called JWT_APP_SECRET.
Hint: in Bash you can create a random 32 character string like this:
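$ head -c 24 /dev/urandom | base64
(24 random bytes encode to exactly 32 base64 characters.)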
Configuration of jitsi-keycloak in the Keycloak Admin console
Point your browser to the Keycloak Admin console https://sso.darkstar.lan/auth/admin/ to start the configuration process.
Add a public openid-connect client in the ‘foundation‘ Keycloak realm (the realm where you created your users in the previous Episode of this article series):
Select ‘foundation‘ realm; click on ‘Clients‘ and then click ‘Create‘ button.
‘Client ID‘ = “jitsi“
‘Client Protocol‘ = “openid-connect” (the default)
Save.
Also in ‘Settings‘, allow this app from Keycloak: our jitsi-keycloak container will be running on https://sso.meet.darkstar.lan , therefore we add that URL (with a trailing “/*”) to the ‘Valid Redirect URIs‘.
Click ‘Download‘ which downloads a file “keycloak.json” with the below content:
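It should look roughly like this (reconstructed from Keycloak’s standard OIDC adapter format and the values we used above):
{
    "realm": "foundation",
    "auth-server-url": "https://sso.darkstar.lan/auth/",
    "ssl-required": "external",
    "resource": "jitsi",
    "public-client": true,
    "confidential-port": 0
}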
On Keycloak >= 20.x,
Go to ‘Clients‘ tab
Select the ‘jitsi‘ client
Click the ‘Action‘ dropdown in the top right of the page
Select ‘Download adapter config‘ and keep the default format option ‘Keycloak OIDC JSON‘
Click ‘Download‘ or else copy/paste the JSON code which is displayed on-screen.
Remaining configuration done in jitsi-keycloak
Back at your server’s shell prompt again, do as follows:
Copy the downloaded “keycloak.json” file into the ‘/config‘ directory of jitsi-keycloak (the container’s /config is exposed in the host filesystem as /usr/share/docker/data/jitsi-keycloak/config).
Start the jitsi-keycloak container in the directory where we have our tailored ‘docker-compose.yml‘ file:
# cd /usr/share/docker/data/jitsi-keycloak
# docker-compose up -d
Once the container is running, we make jitsi-keycloak available at https://sso.meet.darkstar.lan/ using a reverse-proxy setup (jitsi-keycloak will not work in a sub-folder).
Add these reverse proxy lines to your VirtualHost definition of the “sso.meet.darkstar.lan” web site configuration and restart httpd:
# ---
# Reverse proxy to jitsi-keycloak Docker container:
SSLProxyEngine On
SSLProxyCheckPeerCN on
SSLProxyCheckPeerExpire on
RequestHeader set X-Forwarded-Proto: "https"
RequestHeader set X-Forwarded-Port: "443"
<Location />
    AllowOverride None
    Require all granted
    Order allow,deny
    Allow from all
</Location>
ProxyPreserveHost On
ProxyRequests Off
ProxyVia on
ProxyAddHeaders On
AllowEncodedSlashes NoDecode
# Jitsi-keycloak:
ProxyPass / http://127.0.0.1:3000/
ProxyPassReverse / http://127.0.0.1:3000/
# ---
Configure docker-jitsi-meet for use of jitsi-keycloak
Actually, you have already done all the correct changes which are needed in the ‘.env‘ file for Docker Compose!
The docker-jitsi-meet configurations that are relevant for jitsi-keycloak are as follows:
ENABLE_AUTH=1
AUTH_TYPE=jwt
JWT_APP_ID=jitsi
JWT_APP_SECRET=NmjPGpn+NjTe7oQUV9YqfaXiBULcsxYj
# To enable an automatic redirect from Jitsi to the Keycloak login page:
TOKEN_AUTH_URL=https://sso.meet.darkstar.lan/{room}
The values for ‘JWT_APP_SECRET‘ and ‘JITSI_SECRET‘ must be identical, and the value of ‘JWT_APP_ID‘ must be equal to “jitsi“.
Firing up the bbq
With all the prep work completed and the containers running, we can enjoy the new online video conferencing platform we now operate for friends and family.
So, how does this actually look in practice? I’ll share a couple of screenshots from a Jitsi Meet session that I set up. Look at how cool it looks (and not just because of the screenshot of my den and the Slackware hoodie I am wearing…)
The Jitsi Meet welcome screen:
Device settings:
Joining a meeting:
Logging in via Keycloak SSO, you’ll notice that I have configured 2-Factor Authentication for my account:
After logging in, I am back at the “join meeting” screen, but now with my name written as Keycloak knows it (“Eric Hameleers” instead of “Alien BOB“), and I need to click the “Join” button one more time.
Then I am participating in the meeting as the moderator.
You’ve probably noticed that I flipped my camera view here. I also added one ‘break-out room‘ to allow for separate discussions to take place outside of the main room:
And if you are not the moderator but a guest who received the link to this meeting, this is what you’ll see at first:
Cool, eh?
Thanks
… again for taking the time to read through another lengthy article. Share your feedback in the comments section below, if you actually implemented Jitsi Meet on your own server.
Attribution
The Docker-Jitsi-Meet architecture image was taken from Jitsi’s github site.
Hi all!
This is the first installment of a series of articles I intend to write in early 2022. With Slackware 15.0 around the corner, I think it is a good time to show people that Slackware is as strong a server platform as ever. The core of this series is not about setting up a mail, print or web server – those are pretty well-documented already. I’m going to show what is possible with Slackware as your personal cloud platform.
A lot of the work that went into developing Slackware between the 14.2 and 15.0 releases focused on desktop usage. The distro is equipped with the latest and greatest KDE and XFCE desktops, a low-latency preemptive kernel, the Pipewire multimedia framework supporting capture and playback of audio and video with minimal latency, et cetera. I have been enjoying Slackware-current as a desktop / laptop powerhouse for many years and built a Digital Audio Workstation and a persistent encrypted Live Distro with SecureBoot support out of it.
Slackware Cloud Server Series – Summary
The imminent release of Slackware 15.0 gives me fresh energy to look into ‘uncharted’ territory. And therefore I am going to write about setting up collaborative web-based services. Whether you will do this for the members of your family, your friends, your company or any group of people that you interact with a lot, this year you will hopefully learn how to become less dependent on the big ‘cloud’ players like Google, Microsoft, Zoom, Dropbox. What I will show you, is how to setup your own collaboration platform on a server or servers that you own and control. Think of file-sharing, video-conferencing, collaborative document editing, syncing files from your desktop and your phone to your cloud server. If the series is received well, I may additionally write about expanding your cloud server to a private platform for watching movies, listening to network audio, and so on.
I have a good notion of what I will write about, but I am sure that as this series grows and I tell you stories, you will think of subjects that I may not have considered yet. So here’s a challenge for you, the reader: let me know what you think would be a good addition to the series. What makes Slackware great as a “personal cloud” platform to bind people together?
Topics you can expect in this series:
Episode 1 (this article): Managing your Docker Infrastructure
Here I will show you how to setup and manage Docker containers on Slackware, considering graphical management and firewalling. This is the foundation for what we will be doing in subsequent episodes.
Docker architecture
Examples
Docker Compose
Building a Docker image and running a container based on it
Help on commands
Obtain information about images and containers
Stop and remove a container
Refresh a container with the latest image from the vendor
Episode 2: Identity and Access management (IAM)
Setting up Keycloak for Identity and Access Management (IAM) to provide people with a single user account for all the Slackware Server Services we will be creating.
Episode 3: Video Conferencing
Setting up Jitsi Meet. This makes us independent of cloud conferencing services like MS Teams, Zoom or Google Meet. The Jitsi login is again offloaded to our Keycloak IAM provider.
Episode 4: Productivity Platform
Setting up NextCloud as the productivity hub where you can store and read your documents, host your photo library, hold video meetings, manage your activities, read and send emails, keep track of where your smartphone went, chat with people and a lot more. I will also show how to integrate the Jitsi video conferencing solution which was the topic of the previous episode into our NextCloud platform.
NextCloud will be set up on your ‘bare metal’ server. User authentication to the platform will be handled by our Keycloak IAM server.
Episode 5: Collaborative document editing
Integrating Collabora Online Development Edition (CODE) with NextCloud and moving from reading/displaying your documents to collaborative real-time document editing with friends, family or colleagues in your self-hosted LibreOffice Online server.
Episode 6: Etherpad with Whiteboard
Setting up an improved Etherpad real-time collaborative editor with more power than the out-of-the-box version which is used with Jitsi Meet, at the same time integrating it with Keycloak IAM for authentication.
Episode 8: Media streaming platform
Setting up the Jellyfin media platform. If you have a decent collection of digital or digitized media (audio, video, photos), your cloud server will become your friends’ autonomous alternative to Netflix and the likes. Single Sign On is provided to your users via Keycloak IAM.
Episode 9: Cloudsync for 2FA Authenticator
Setting up an Ente backend server as a cloud sync location for the Ente Auth 2FA application (Android, iOS, web).
Stop worrying that you’ll lose access to secure web sites when you lose your smartphone and with it, the two-factor authentication codes that it supplies. You’ll be up and running with a new 2FA authenticator in no time when all your tokens are stored securely and end-to-end encrypted on a backend server that is fully under your own control.
Episode X: Docker Registry
Setting up a Docker Registry. A local index and registry of Docker images allows us to become independent of Docker’s commercially driven limitations on images you create and want to share with the world or use privately. I will for instance use the public Docker Hub for all the services that are discussed in this series of articles, and I will point out the limitations of doing so.
I have not decided on the actual implementation of the Registry yet. The Docker Hub is built on top of the aforementioned Registry product; the Registry itself is Open Source, but it does not do authentication and it has no user interface. Docker (the company) adds a lot to this service that is not Open Source, to make the Hub the popular platform that it became.
So perhaps I will combine this Registry with Portus, which is an Open Source authorization service and user interface for the Docker Registry. It is created/maintained by the OpenSuse team. It can use Keycloak as the OIDC (OpenID Connect) provider, and so our Keycloak IAM server will do the Single Sign On.
Or I will use Quay, which is Red Hat’s solution for self-hosted Docker Registry including a web-based user interface that looks a lot like the Docker Hub. Quay can also offload the authentication/authorization to an OIDC provider like Keycloak. In addition, it allows scanning your hosted images for known security vulnerabilities using Clair, another Red Hat product and part of Project Quay.
Managing your Docker Infrastructure
In a previous article I shared my Docker related packages with you for Slackware 14.2 and 15.0 (well, -current formally, since we are still waiting for 15.0). I assume you are running the newest Slackware almost-at-version-15.0, have already installed those four Docker packages (containerd, runc, docker and docker-compose), have added your user account to the ‘docker‘ group, and have at least once rebooted your computer or manually run “/etc/rc.d/rc.docker start“.
I also assume that you have a basic understanding of containers and how they differ from Virtual Machines. A brief summary:
Containers run directly on the host kernel and the ‘bare metal’. The Docker engine hides the host operating system particulars from the applications that run in the containerized environment. Docker uses cgroups and Linux kernel capabilities to shield container processes from other containers and the host, but using a “ps” command on the host you can easily see all the container processes.
A Virtual Machine on the other hand, is a virtualized machine hardware environment created by a hypervisor program like QEMU, VMWare, VirtualBox etc. Inside that virtual machine you can run a full Operating System which thinks it is running on real hardware. It will be unaware that it is in fact running inside a VM on a host computer which probably runs a completely different OS.
From the host user perspective, nothing that goes on in the VM is actually visible; the user merely sees the hypervisor running.
And from the perspective of a user inside a Docker container or a VM, anything that happens outside that environment is invisible. The host OS cannot be reached directly from within the guest.
What I am not going to discuss in the scope of this article are orchestration tools like Kubernetes (K8s) or Docker Swarm, which make it possible, or at least a lot more convenient, to manage a cluster of servers running Docker containers with multiple micro-services that can scale on-demand.
Docker architecture
You create and run a Docker container from (layers of) pre-existing images that inherit from each other, using the ‘docker run‘ command. You can save a container that you created, ‘flattening’ the layers the container consists of into one new image using ‘docker commit‘.
You start and run a container by referring to the Docker image it should be based on; if that image is not available locally, Docker will download it from a Docker Registry (by default hub.docker.com, but it can also be a private Registry you run and control yourself). You can also manually download an image using the ‘docker pull‘ command in advance of the ‘docker run‘ command which is going to use that image, so that you don’t have to wait for an image download at the moment suprême of starting your container.
Building an image yourself using the ‘docker build‘ command is essentially ruled by a Dockerfile and a context (files on the local filesystem which are used in the creation of the image). The container which is based on that image typically runs an application or performs a task.
Examples
An example: running an application which is not provided by your Slackware OS. A container can run a PostgreSQL database server which you then use on your host computer or in other containers. This Postgres container will typically be started on boot and keep running until shutdown of the host computer. It’s as trivial as running:
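$ docker run -d --name postgres --restart=always -e POSTGRES_PASSWORD=mysecretpassword -p 127.0.0.1:5432:5432 postgres
(A sketch: ‘POSTGRES_PASSWORD‘ is required by the official postgres image; the container name, password and port binding are of course yours to choose.)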
Another example: building Slackware packages reliably and without introducing unwanted dependencies. In this case, you use a container based on a Slackware Docker image to download and compile a Slackware package, and then export the resulting package from within the container to the host computer’s local filesystem. Such a Docker container based on a Slackware image will be able to offer the same OS with the same configuration every time it starts, and it functions independently of the host. You could be compiling Slackware packages on a Debian host, for instance. Exercise left for the reader: you will have to create that Slackware Docker image yourself… it is not available online.
Docker Compose
Docker-compose, also created by the Docker company, is a separate application which uses the capabilities of Docker. The 2.x release of Docker Compose which I offer in my repository is written in the Go language, which makes it fully self-contained. The previous 1.x releases that you can obtain from SlackBuilds.org were heavily dependent on Python, and for that version of docker-compose you had to install 13 dependency packages.
Docker Compose expands on the Docker concept of a single container with a single purpose. Compose is a tool for defining and running multi-container Docker applications. The individual Docker applications themselves are defined by a Dockerfile, which makes their deployment reproducible in any Docker environment.
You write a Compose file (typically called docker-compose.yml) which defines the container services that you want to run. This Compose file is written in YAML, according to the Compose file specification. The file describes an application service which is built out of multiple Docker containers, as well as their shared resources and private communication channels. Docker-compose reads the YAML file, creates data volumes and internal networks, starts the defined containers, and connects them to the just-created volumes and networks. There’s a lot more you can do in a Compose file; check out the reference.
The result of the ‘docker-compose up -d‘ command is the start-up of a complex service running multiple inter-dependent applications (you’ll see those referred to as micro-services) that usually have a single point of interaction with their users (an IP address/port or a https URL). Essentially a black box where all the internal workings are hidden inside that collection of containers. As an administrator there’s little to nothing to configure, and as a user you don’t have to know anything about that complexity.
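To make this less abstract, here is a minimal sketch of a Compose file; the service names and images (an nginx web server fronting a PostgreSQL database) are hypothetical and only serve to show the structure:
version: '3'
services:
  webapp:
    image: nginx:latest
    ports:
      - "127.0.0.1:8080:80"   # expose the web front-end on localhost only
    depends_on:
      - db
  db:
    image: postgres:14
    environment:
      POSTGRES_PASSWORD: mysecretpassword
    volumes:
      - db_data:/var/lib/postgresql/data   # persistent database storage
volumes:
  db_data:
Running ‘docker-compose up -d‘ in the directory holding this file creates the ‘db_data‘ volume plus a private default network, starts both containers, and publishes only the web server’s port 8080 on localhost.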
And again, you could be trivially running applications in Docker containers on your Slackware host that you would not be able to run on the bare host OS without a lot of weeping and gnashing of teeth.
Docker Compose offers a lot of versatility that regular Docker does not. Whether you use one or the other depends on the complexity of that what you want to achieve. In future episodes of this article series, I will give examples of both.
Building a Docker image and running a container based on it
The ‘docker build’ command has a lot of parameters but it accepts a single argument which is either a directory containing a Dockerfile and the required context (local files to be used in the image creation process), or a URL the content of which it will download and interpret as a Dockerfile. You can also use the “-f” switch to specify a filename if it’s not called “Dockerfile” literally, or pipe a Dockerfile content into the ‘docker build‘ command via STDIN. It’s good practice to ‘tag’ the resulting image so that you’ll understand its purpose later on. You can then upload or ‘push’ the image to a Docker Registry, either the public one or a private registry that you or your company control.
Let me give a comprehensive example. We will build a new Slackware image which is based on my Slackware ‘base‘ image on the public Docker Hub (docker pull liveslak/slackware:latest). The Dockerfile will add a couple of packages, create a user account called “alien” and start a login shell if needed.
Note that in this example I will write the Dockerfile on the fly using ‘cat‘ and a ‘here-document‘ to pipe it into the ‘docker build‘ command. I will tag the resulting image as ‘slacktest‘ using the “-t” switch of the ‘docker build‘ command:
$ cat <<'EOT' | docker build -t slacktest -
FROM liveslak/slackware:latest
MAINTAINER Eric Hameleers <alien@slackware.com>
ARG SL_UID="1000"
ARG SL_GID="users"
ARG SL_USER="alien"
# Install compiler toolchain and supporting tools.
RUN rm -f /etc/slackpkg/templates/compilertoolchain.template
RUN for PKG in \
ca-certificates \
curl \
cyrus-sasl \
gc \
gcc \
git \
glibc \
glibc-profile \
glibc-zoneinfo \
guile \
intltool \
kernel-headers \
libmpc \
libffi \
libtasn1 \
make \
mpfr \
nettle \
p11-kit \
perl \
; do echo $PKG >> /etc/slackpkg/templates/compilertoolchain.template ; done
RUN slackpkg -batch=on -default_answer=y update gpg
RUN slackpkg -batch=on -default_answer=y update
RUN slackpkg -batch=on -default_answer=y install-template compilertoolchain
# Refresh SSL certificates:
RUN /usr/sbin/update-ca-certificates -f
# Create the user to switch to:
RUN useradd -m -u "${SL_UID}" -g "${SL_GID}" -G wheel "${SL_USER}" && \
sed -ri 's/^# (%wheel.*NOPASSWD.*)$/\1/' /etc/sudoers
USER "${SL_USER}"
ENV HOME /home/"${SL_USER}"
WORKDIR /home/"${SL_USER}"
# Start a bash shell if the container user does not provide a command:
CMD bash -l
EOT
The output of this ‘docker build’ command shows that the process starts with downloading (pulling) the slackware base image:
Sending build context to Docker daemon 3.072kB
Step 1/16 : FROM liveslak/slackware:base_x64_14.2
base_x64_14.2: Pulling from liveslak/slackware
Digest: sha256:352219d8d91416519e2425a13938f94600b50cc9334fc45d56caa62f7a193748
Status: Downloaded newer image for liveslak/slackware:base_x64_14.2
---> 3a9e2b677e58
Step 2/16 : MAINTAINER Eric Hameleers <alien@slackware.com>
---> Running in d7e0e17f68e6
Removing intermediate container d7e0e17f68e6
---> 75b0ae363daf
...(lot of output snipped) ...
Successfully built e296afb023af
Successfully tagged slacktest:latest
$ docker images |grep slacktest
slacktest latest e296afb023af 2 minutes ago 387MB
Here you see how a Dockerfile uses ‘FROM’ to download a pre-existing image from the Hub and then RUNs several commands which it applies to that base image, creating a new image and then naming it “slacktest” with a tag “latest” and an ID of “e296afb023af“. In that process, several intermediate layers are created; basically, every command in the Dockerfile which you would normally execute on the command-line will create one. The end result is an image which consists of multiple layers. You can use the ‘history’ command to see how these were built:
$ docker history e296afb023af
IMAGE CREATED CREATED BY SIZE COMMENT
e296afb023af 49 minutes ago /bin/sh -c #(nop) CMD ["/bin/sh" "-c" "bash… 0B
e1224d15abac 49 minutes ago /bin/sh -c #(nop) WORKDIR /home/alien 0B
4ce31fa86cca 49 minutes ago /bin/sh -c #(nop) ENV HOME=/home/alien 0B
365bffd67f07 49 minutes ago /bin/sh -c #(nop) USER alien 0B
05d08fe94b47 49 minutes ago |3 SL_GID=users SL_UID=1000 SL_USER=alien /bi… 336kB
3a300b93311f 49 minutes ago |3 SL_GID=users SL_UID=1000 SL_USER=alien /bi… 218kB
7a14f55146dd 50 minutes ago |3 SL_GID=users SL_UID=1000 SL_USER=alien /bi… 233MB
a556359ec6d3 51 minutes ago |3 SL_GID=users SL_UID=1000 SL_USER=alien /bi… 6.21MB
da19c35bed72 52 minutes ago |3 SL_GID=users SL_UID=1000 SL_USER=alien /bi… 2.13kB
604fece72366 52 minutes ago |3 SL_GID=users SL_UID=1000 SL_USER=alien /bi… 161B
1c72e034ef06 52 minutes ago |3 SL_GID=users SL_UID=1000 SL_USER=alien /bi… 0B
3f020d0c2024 52 minutes ago /bin/sh -c #(nop) ARG SL_USER=alien 0B
91a09ad8693a 52 minutes ago /bin/sh -c #(nop) ARG SL_GID=users 0B
96d433e0162d 52 minutes ago /bin/sh -c #(nop) ARG SL_UID=1000 0B
75b0ae363daf 52 minutes ago /bin/sh -c #(nop) MAINTAINER Eric Hameleers… 0B
3a9e2b677e58 11 days ago 148MB Imported from -
That last line shows the image “3a9e2b677e58” which is the slackware ‘base‘ image which I downloaded from Docker Hub already 11 days ago.
If we now use ‘docker run’ to create a container based on this image we would expect it to give us a bash prompt as user ‘alien’. Let’s try it and run an interactive container connecting it to a pseudo terminal using the “-ti” switch:
$ docker run -ti slacktest
alien@88081b465a98:~$ id
uid=1000(alien) gid=100(users) groups=100(users),10(wheel)
alien@88081b465a98:~$ exit
logout
$ docker ps -a | grep slacktest
88081b465a98 slacktest "/bin/sh -c 'bash -l'" 2 minutes ago Exited (0) 16 seconds ago
$ docker rm 88081b465a98
88081b465a98
$
Indeed we ended up as user “alien” in a machine whose hostname is the ID of the container (88081b465a98). And with the final command, we could remove the container without first stopping it, because it was already stopped when we issued the “exit” command.
Help on commands
When working with Docker containers, it is good practice to get familiar with the command-line client (aptly named ‘docker’). You’ll find a lot of command-line examples in this article which should really become part of your hands-on toolkit.
The ‘docker’ tool has some 30 sub-commands and all of them have built-in help. To get the full list of available docker commands run:
$ docker --help
The help per command can be found by suffixing it with “--help”. Many commands have sub-commands, and these will also show help:
$ docker system --help
$ docker system prune --help
Obtain information about images and containers
The ‘docker’ command-line utility can give you a lot of information about your docker infrastructure. I will discuss a graphical management tool a bit further down, but here are some useful commands:
Get a listing of all images on your Docker host:
$ docker images
Get a listing of all running containers:
$ docker ps
Also list those containers that are in a stopped state:
$ docker ps -a
Show diskspace used by all your images, containers, data volumes and caches:
$ docker system df
Show the logs of a container:
$ docker logs CONTAINER_ID
Get the ID of a container or an image:
$ docker ps -aqf "name=containername"
The “containername” is interpreted as a regular expression. If you omit the “q” switch, you will see the full info about the container instead of only the container ID. Find an image ID using the following command (if you use a container name instead of an image name, you have a second way to get a container ID):
$ docker inspect --format="{{.Id}}" imagename
Note that in this case, “imagename” needs to be the exact string of the image (or container) as shown in the ‘docker ps‘ output, not a regular expression or part of the name.
Get the filesystem location of a data volume:
Data volumes are managed by Docker. On a Linux host the actual physical location of a Docker volume is somewhere below “/var/lib/docker/volumes/“, but on a Windows Docker host that will be something entirely different.
Using the ‘docker inspect‘ command you can find the actual filesystem location of a data volume by evaluating the “Mountpoint” value in the output. For example:
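$ docker volume inspect --format '{{ .Mountpoint }}' portainer_data
/var/lib/docker/volumes/portainer_data/_data
(The ‘portainer_data‘ volume name is just an illustration; on Linux the volume’s data always lives in the “_data” subdirectory shown in the output.)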
You could use this information to access data inside that volume directly, without using Docker tools.
More:
There’s a lot more, but I leave it up to you to discover the other commands that provide information useful to you.
Stop and remove a container
Note that when you stop and remove a container, you do not delete the image(s) on which the container is based! So what is the usefulness of removing containers? Simple: sometimes a container crashes or gets corrupted. In such a case you simply stop the container, remove it and start it again using the exact same command which was used earlier to start it. The container will be rebuilt from scratch from the still existing images on your local file-system and the service it provides will start normally again.
As an example, create and run the ‘hello-world‘ container, then stop and remove it again:
$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
...
$ docker ps -a |grep hello-world
0369206ddf2d hello-world "/hello" ...
$ CONTAINER_ID=$(docker ps -a |grep hello-world | awk '{print $1}')
$ docker stop $CONTAINER_ID
$ docker rm $CONTAINER_ID
Refresh a container with the latest image from the vendor
Suppose you are running a Docker container for some complex piece of software. You can’t simply run “slackpkg upgrade-all” to upgrade that software to its latest version or to fix some security vulnerability. Instead, what you do with a container is to refresh the Docker image which underpins that container, then stop the container, remove the container layer, remove the old vendor image, and then start the container again. On startup of the container, it will now be based on the latest vendor image.
Configuration of a container happens either inside data volumes that you maintain outside of the container, and/or on the startup command-line of that container. Deleting the container or the image layers does not touch any of your configuration or your data.
This is the strong point of container life cycle management – it’s so easy to automate and scale up.
As an example, let’s refresh a container that I have running here at home (the Portainer container that’s discussed further down).
Download the latest Portainer Community Edition image from Docker Hub:
$ docker pull portainer/portainer-ce
Find the ID (first column) of the portainer container that we currently have running:
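$ docker ps -aqf "name=portainer"
Then stop and remove it, reusing the ID-lookup trick shown earlier (a sketch):
$ CONTAINER_ID=$(docker ps -aqf "name=portainer")
$ docker stop $CONTAINER_ID
$ docker rm $CONTAINER_ID
Finally, start the container again with the exact same ‘docker run‘ command that originally started your Portainer (see the Portainer section below); it will now be based on the freshly pulled image. An optional ‘docker image prune‘ afterwards removes the old, now dangling, image.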
Clean-up
As time passes and you experiment with containers, you will inevitably end up with stuff on your host computer that is no longer being used. Think of images which you wanted to try out with ‘docker run‘ or downloaded using ‘docker pull‘. Or cached data resulting from image building. The ‘docker system df‘ command I showed you earlier will perhaps give you a scare if you run it… see here the output when I run it on my server:
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 11 10 3.424GB 390.3MB (11%)
Containers 10 3 531.8MB 32.01MB (6%)
Local Volumes 3 1 0B 0B
Build Cache 0 0 0B 0B
The “reclaimable” percentage is data which is no longer being used by any container that Docker knows about. You can safely delete that data. The following command will do that for you:
$ docker system prune
If you want to make sure that every unused scrap of image data is deleted, not just the dangling images, run:
$ docker system prune -a
And if you want to remove unused data volumes as well, you would run:
$ docker system prune --volumes
. . . of course you execute this command only after you have verified that the affected volumes do not contain data you still need.
Difference between load/save and import/export
Docker manages images, containers, data volumes etc. in an OS-independent way. This means you are able to transport data volumes, images and containers from one host to another, or even from one type of OS (Linux) to another (MS Windows), and preserve compatibility.
Import and export commands – used for migration of container filesystems
Docker command ‘docker export‘ allows you to create a (compressed) tar archive containing the bare file system of a container:
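$ docker export CONTAINER_ID | xz > container_fs.tar.xz
(‘docker export‘ writes an uncompressed tar stream to stdout, hence the pipe through ‘xz‘; the archive name is arbitrary.)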
This filesystem tarball can again be imported into a new ‘filesystem image‘. Also look at my example of creating a Slackware base image using ‘docker import‘.
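The reverse direction could look like this (image name and tag are up to you; ‘docker import‘ understands compressed tarballs directly):
$ docker import container_fs.tar.xz myslackware:base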
The export and import commands don’t preserve the container’s configuration and underlying image information; you are dealing with just the filesystem. This makes these commands quite suited for creating Docker base images which you can then base your future containers and images on, via the “FROM” keyword in a Dockerfile.
For actual Docker migrations to another host, the load and save commands are better suited. Read on:
Load and save commands – used for host migration of images
The ‘docker save‘ command allows you to create a (compressed) tar archive of a Docker image, containing all its layers, tags, history and configuration. The data volumes are the only thing that is not added to this tarball.
If you want to migrate a container, you first save (commit) its changes as a new Docker image using the ‘docker commit’ command.
$ docker commit CONTAINER_ID my_important_image
… and then save that image in an OS-agnostic way into a tarball. The ‘save’ command does not compress the data, so we use ‘xz’ for that:
$ docker save my_important_image | xz > my_important_image.tar.xz
This compressed tarball can be copied to another host (potentially with a different OS running on the host) and ‘docker load‘ allows you to load this image (even when compressed) with all its tag information and history into the new host’s Docker environment.
$ cat my_important_image.tar.xz | docker load
You can then use the ‘docker run‘ command as usual to start a new container based off that image.
And what to do with the data volumes? Docker manages its volumes in an OS-agnostic way, which means we probably cannot just create a tarball from the volume directory on the host, since that may not be compatible with how a data volume is implemented on another host. If we use Docker commands to save a data volume and load it on another host, we should be covered. The Docker documentation on volumes has an example of how to migrate data volumes.
Saving a data volume to a tarball:
Suppose your ‘portainer‘ container uses a data volume called ‘portainer_data‘ which is mounted on the container’s “/data” directory, containing important data which you want to migrate to another host.
To generate a tarball of the data in this volume, you create a one-shot container based on some base image (I’ll use ‘alpine’) that does not do anything in itself, and give that container the command to create the backup of your ‘portainer_data‘ volume. The steps are:
Launch a new container and mount the volume from the ‘portainer‘ container
Mount a local host directory as /backup
Pass a command that tars the contents of the ‘portainer_data‘ volume to a backup.tar file inside our /backup directory.
When the command completes and the container stops, we are left with a backup of our ‘portainer_data‘ volume; the file “backup.tar” in our current directory.
$ docker run --rm --volumes-from portainer -v $(pwd):/backup alpine tar cvf /backup/backup.tar /data
… and on a new host, load this tarball into the data volume of the ‘portainer’ container which you have already setup there but it’s still missing its data:
$ docker run --rm --volumes-from portainer -v $(pwd):/backup alpine sh -c "cd /data && tar xvf /backup/backup.tar --strip 1"
(Note: the alpine image ships busybox ‘sh‘, not bash.)
Voilà!
Limiting the log size for your containers
Docker captures the standard & error output (stdout/stderr) for all your containers and writes these to logfiles in JSON format. By default, Docker does not impose size restrictions on its log files.
These logfiles are shown when you run a ‘docker logs CONTAINER_ID‘ command.
This behavior could potentially fill up your hard drive if you run a container which generates an extensive amount of logging data. By modifying Docker’s “/etc/docker/daemon.json” configuration file we can limit the maximum size of the container log files. This is what you need to add to the Docker daemon configuration to limit the size of any log file to 50 MB and keep at most two backups after log rotation:
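{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "3"
  }
}
(A sketch of the relevant settings: “max-size” caps each log file at 50 MB, and “max-file” is the total number of files kept, i.e. the active log plus two rotated backups. Restart the Docker daemon afterwards; the limits apply to newly created containers only.)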
Portainer
There’s a nice graphical web-based administration tool for your Docker infrastructure which itself runs in a container (you can also install it on bare metal as a self-contained executable), called Portainer. Portainer will listen at a TCP port of your host, and you can point your web browser at http://localhost:9000/ to access the GUI. The Community Edition (portainer-ce) is open source and free, and this is what we’ll install and run as a Docker container.
The Portainer developers posted a tutorial and feature walk-through of their product on Youtube:
Some considerations before we start implementing this.
Data persistence:
Portainer wants to keep some data persistent across restarts. When it is run from within a container, you want to have that persistent data outside of your container, somewhere on the local filesystem. The Docker concept for persistent data management is called “volumes”. Docker volumes are created as directories within “/var/lib/docker/volumes/” on Linux. A volume is made available in the container using the “-v” switch to the ‘docker run‘ command, which maps a host volume to an internal container directory. Docker manages its volumes in an OS agnostic way, making it possible to transparently move your containers from Linux to Windows for instance.
We will not use a Docker volume in this example but instead create the “/usr/share/docker/data/portainer” directory and pass that as the external location for the container’s internal “/data” directory.
Network security:
We will expose the Portainer network listen port to only localhost and then use an Apache reverse proxy to let clients connect. Note that port 9000 exposes the Portainer UI without SSL encryption; we will use Apache reverse proxy to handle the encryption.
Application security:
Below you will see that I am mapping the Docker management socket “/var/run/docker.sock” into the container. Normally that would be very bad practice, but Portainer is going to be our Docker management tool, and therefore it needs access to this socket.
This batch of commands will get your Portainer-CE instance up and running, and it will be restarted when Docker starts up (e.g. after rebooting):
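# mkdir -p /usr/share/docker/data/portainer
# docker run -d \
    --name portainer \
    --restart=always \
    -p 127.0.0.1:9000:9000 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /usr/share/docker/data/portainer:/data \
    portainer/portainer-ce
(A sketch based on the considerations above: only port 9000 is published, and only on localhost; the Docker socket and our data directory are mapped into the container; and “--restart=always” makes Docker bring Portainer back up whenever the daemon starts.)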
Two of the three network ports that Portainer opens up are not being exposed (mapped) to the outside world:
Port 8000 exposes an SSH tunnel server for Portainer Agents, but we will not use this. Portainer Agents are installed on Docker Swarm Clusters to allow central management of your Swarm from a single Portainer Server.
Also, the Portainer container generates a self-signed SSL certificate on first startup and uses it to offer https encrypted connections on port 9443 but our reverse proxy solution allows the use of a proper Let’s Encrypt certificate.
Now that the Portainer container is running, connect a browser on your host computer to http://localhost:9000/ to perform initial setup; basically defining the password for the Portainer admin user account.
The reverse proxy configuration including adding a Let’s Encrypt certificate is a bit beyond the scope of this article, but you can read this earlier blog post to learn about securing your webserver with SSL certificates.
I will share the block that you would have to add to your Apache configuration. With this configuration and a proper SSL setup, your Portainer will be accessible via https://your_host_name/portainer/ instead of only at http://localhost:9000/ :
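# Reverse proxy to the Portainer Docker container:
ProxyRequests Off
ProxyPreserveHost On
RequestHeader set X-Forwarded-Proto "https"
<Location /portainer/>
    ProxyPass http://127.0.0.1:9000/
    ProxyPassReverse http://127.0.0.1:9000/
    Require all granted
</Location>
(A sketch, analogous to the jitsi-keycloak proxy block earlier in this article; adapt it to your own VirtualHost layout. Note that Portainer’s interactive console features use websockets, which would additionally require mod_proxy_wstunnel rules.)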
Here is a screenshot of my Docker infrastructure at home as interpreted by Portainer:
Have a look at Portainer to see if it adds something useful to your Docker management toolkit. For me the most useful feature is the real-time container logging display, which is a lot more convenient than trying to scroll back in a Linux screen session.
Docker’s effect on iptables firewall
Docker dynamically modifies your host’s iptables ruleset on Linux when it starts, in order to provide container isolation. See https://docs.docker.com/network/iptables/ for the rationale. This can sometimes interfere with pre-existing services, most prominently the Virtual Machines that you may also be running and whose virtual network has been bridged to your host’s network interface.
I use vde (virtual distributed ethernet) for instance to provide transparent bridged network connectivity for the QEMU VM’s that I use when building packages for Slackware.
With this bridged network setup, your host computer essentially acts as a router, forwarding network packets across the bridge.
When Docker starts, it will add two custom iptables chains named ‘DOCKER’ and ‘DOCKER-USER’, add its own custom rules to the ‘DOCKER’ chain, and then set the default FORWARD policy to DROP, thereby killing the network connectivity for the already bridged networks… Bad. And even though Docker has a configurable parameter to disable iptables management, this only prevents the creation of these custom chains and rules when the Docker daemon starts; Docker admits that there is no guarantee that later on, when you start containers, your iptables rulesets will stay untouched. So, disabling iptables management in Docker is not an option. I’ll tell you a bit about how to deal with this (all information gleaned from the Internet).
Generic iptables
Since Docker sets the policy for the FORWARD chain to DROP, this will result in a loss of connectivity for your bridged networks (packets are no longer forwarded across the bridge).
You can use the custom DOCKER-USER chain to ensure that this bridging is not disrupted. Docker inserts these custom chains in such a way that rules defined in the DOCKER-USER chain have precedence over rules in the DOCKER chain. You have to add an explicit ACCEPT rule to that DOCKER-USER chain:
# iptables -I DOCKER-USER -i br0 -j ACCEPT
Where ‘br0‘ is the name of your network bridge (adapt if needed).
UFW (Uncomplicated Firewall)
UFW is a tool created for Ubuntu but popular on all distros, which aims to simplify iptables ruleset management by providing a higher-level command-line interface. There’s a package build script for it on SlackBuilds.org.
UFW allows you to insert rules that are evaluated after all other chains have been traversed. This allows the user to nullify the unwanted Docker behavior which kills bridged networks. See this github discussion for the full details.
Append the following at the end of /etc/ufw/after.rules (replace eth0 with your external facing interface):
# Put Docker behind UFW
*filter
:DOCKER-USER - [0:0]
:ufw-user-input - [0:0]
-A DOCKER-USER -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A DOCKER-USER -m conntrack --ctstate INVALID -j DROP
-A DOCKER-USER -i eth0 -j ufw-user-input
-A DOCKER-USER -i eth0 -j DROP
COMMIT
And undo any and all of:
Remove “iptables”: “false” from /etc/docker/daemon.json
Revert to DEFAULT_FORWARD_POLICY=”DROP” in /etc/default/ufw
Remove any docker related changes to /etc/ufw/before.rules
Firewalld
Firewalld is a dynamically managed firewall which works with a ‘zones‘ concept and gets its triggers from a D-Bus interface. There’s a package build script available on SlackBuilds.org.
If you are running Docker version 20.10.0 or higher with firewalld on your system, and Docker has iptables support enabled, Docker automatically creates a firewalld zone called ‘docker‘ and inserts all the network interfaces it creates (for example, docker0) into the ‘docker‘ zone to allow seamless networking.
Consider running the following firewalld command to remove the docker interface from the ‘trusted‘ zone.
# Please substitute the appropriate zone and docker interface
$ firewall-cmd --zone=trusted --remove-interface=docker0 --permanent
$ firewall-cmd --reload
Restarting the docker daemon (/etc/rc.d/rc.docker restart) will then insert the interface into the docker zone.
Alien’s Easy Firewall Generator (EFG)
If you used my EFG to create a firewall configuration, you will be bitten by the fact that Docker and Docker Compose create new network devices. Permissive rules need to be added to the INPUT and OUTPUT chains of your “rc.firewall”, or else ‘dmesg’ will be full of ‘OUTPUT: packet died’ messages.
The Docker network interface is called “docker0” and for each container, Docker creates an additional bridge interface, the name of which starts with “br-” and is followed by a hash value. Therefore, these lines need to be applied at the appropriate locations in the script:
DOCK_IFACE="docker0"
BR_IFACE="br-+"
$IPT -A INPUT -p ALL -i $DOCK_IFACE -j ACCEPT
$IPT -A INPUT -p ALL -i $BR_IFACE -j ACCEPT
$IPT -A OUTPUT -p ALL -o $DOCK_IFACE -j ACCEPT
$IPT -A OUTPUT -p ALL -o $BR_IFACE -j ACCEPT
Thanks
… for reading until the end! I hope you gained some knowledge. Feel free to leave constructive feedback in the comments section below.
Cheers, Eric
Attribution
Most of the firewall-related information comes from docker.com and github.com. Please inform me about errors.
All images in this article were taken from docs.docker.com website.
I have been using Docker for a while now; it is being used to provide services to friends and family.
I was always intimidated by the large amount of packages that were needed to get Docker and docker-compose up and running, and I did not have experience with Docker at the time (almost two years ago), so I decided to go the easy route and use the SlackBuilds.org scripts when I first needed to run a Docker container. I even wrote a blog post about that: it explained how to run an Outline server to allow journalists to do their work in repressive countries, but the article also shares the details of how to build the Docker packages and run the daemon.
If you want to read some background information about Docker’s strength and what its use-cases are, I encourage you to start reading here: https://docs.docker.com/get-started/overview/ .
Essentially, Docker uses Linux kernel and filesystem capabilities to isolate an application and its dependencies from the host computer it is being executed on. Docker provides powerful means to connect multiple containers via internal (virtual) networking and can expose ports to the network outside of your container. It enables you to run applications reliably without having to worry about the underlying Operating system. You can even run Docker on a MS Windows computer but your containerized application running inside Docker will not be aware of that.
This is sometimes called ‘light-weight virtualization’ because, unlike real virtualization solutions like QEMU, VirtualBox or VMWare, the containerized application still runs on your host’s kernel. This is why you can run a 32-bit container image on a 64-bit host (a 64-bit Linux kernel is able to execute 32-bit binaries) but you cannot run a 64-bit image on a 32-bit host kernel.
Now that I am more familiar with Docker, have been running multiple services in containers for more than a year, and have created and published my own images (more about that later), I decided to create my own set of Docker packages. Having pre-built packages will make it a lot easier for people to start exploring the usefulness of Docker containers.
One thing upfront: I have significantly decreased the total amount of packages you need to run Docker.
I have combined the SlackBuilds.org packages ‘docker’, ‘docker-cli’, ‘docker-proxy’ and ‘tini’ into a single package called ‘docker’ and also added ‘docker-buildx’ to that docker package. Also, the re-write of docker-compose from Python to Go has the benefit that the run-time package dependencies for ‘docker-compose’ have been reduced from thirteen to zero.
Starting with my Docker packages
As stated in the subject: the packages I created are for Slackware-current only. If you want to compile this yourself on Slackware 14.2, I cannot guarantee success since I did not try compiling them there myself – but in any case you’ll have to build and install libseccomp from SlackBuilds.org; it is part of -current but not of 14.2.
What you need from my repository to run Docker is: runc, containerd, docker and docker-compose. Four packages – that’s it.
If you want to be able to (re-)compile these packages, you will additionally need google-go-lang. After installing google-go-lang you need to logoff and login again (or run the command “source /etc/profile.d/go.sh” in your terminal) to give Google’s version of Go preference over the GCC version of Go that’s probably already installed on your computer.
The ‘docker’ package installation script will add a couple of lines to “/etc/rc.d/rc.local” and “/etc/rc.d/rc.local_shutdown” to make Docker start on boot and properly stop during shutdown of the computer. The docker rc script “/etc/rc.d/rc.docker” will initially be installed without the execute bit, so if you actually want to start using Docker you have to make the script executable. This is a one-time action. Future package upgrades will honor the executable status of that script.
# chmod +x /etc/rc.d/rc.docker
You can start the Docker daemon now if you don’t want to waste time with a reboot:
# /etc/rc.d/rc.docker start
The package installation will also trigger the creation of a new group called ‘docker’. If you want to be able to run and manage your Docker images and containers under your own non-root user account, you need to add your user account to this ‘docker’ group, then logoff/login again and restart the docker daemon. Otherwise, all Docker operations can only be executed by the root user.
# gpasswd -a <your_useraccount> docker
# /etc/rc.d/rc.docker restart
With all the prep work done, your account added to the ‘docker’ group and the daemon running, it’s time for a first test. Run the following command:
$ docker run hello-world
You’ll see the following output:
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
2db29710123e: Pull complete
Digest: sha256:2498fce14358aa50ead0cc6c19990fc6ff866ce72aeb5546e1d59caac3d0d60f
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly
...
What you will have validated here is the proper functioning of your Docker installation:
The Docker command-line client contacted the Docker daemon.
The Docker daemon downloaded (or ‘pulled‘) the “hello-world” image from the Docker Hub.
The Docker daemon created a new container from that image which runs the executable that produces the output you could read on your terminal just now.
The Docker daemon streamed that output to the Docker client, which sent it to your terminal.
The “hello-world” image is of course trivial, but there are many more with real-life use-cases which you can find on Docker Hub, https://hub.docker.com/. You can download these images freely, but if you want to upload (or ‘push’) an image which you created yourself, you’ll have to create an account on Docker Hub. Free accounts allow the creation of one single private repository and unlimited public repositories.
Slackware Docker images
I have a repository on Docker Hub where I share my base images for stable Slackware versions (that’s the 14.2 release right now). Go get them at https://hub.docker.com/r/liveslak/slackware/tags . These “base images” are roughly 55 MB compressed, which means they are really basic. They are created using a script (create_slackware_docker_image.sh) which takes its inspiration both from the liveslak scripts and from Vincent Batts‘ work on Docker images for Slackware. Essentially, the script installs some packages into a package root, uses tar to put all of that in a tarball, and loads the tarfile into Docker, which will then make it into an actual image for you. That image will be called “slackware:base_x64_14.2” (unless you specified a different architecture and release, of course) and that’s what I uploaded to the Docker Hub.
My tiny Slackware Docker images are not really meant to be used as-is. Instead, they can act as the foundation for Slackware-based Docker images that you might want to build yourself – any Docker image starts with some existing base image and adds new layers on top. Base images like mine don’t inherit from a lower-level image and are a special case; read more here: https://docs.docker.com/develop/develop-images/baseimages/
Now let’s pull that Slackware base image from the Hub and peek inside!
$ docker run -ti liveslak/slackware:latest /bin/bash -l
The “-ti” parameters tell docker to assign a pseudo-tty and run an interactive session on your terminal.
The convention for the image name is “username/imagename:tag” which shows that “liveslak” is the user who hosts the image on Docker Hub; “slackware” is the name of the image and it has a tag “latest” which means: just give me the latest version that I can get. For the slackware image, this means you get the 64-bit variant (I also have a 32-bit image) of Slackware 14.2.
The “/bin/bash -l” at the end is the command which Docker should run after bringing the container online. Remember, the base image contains nothing but a small amount of installed Slackware packages and does not start any application by itself. More complex Docker images may run all kinds of applications, some in the background and some meant to be interacted with such as the bash shell.
Running the above command yields this result:
Unable to find image 'liveslak/slackware:latest' locally
latest: Pulling from liveslak/slackware
6c721e5d29bd: Pull complete
Digest: sha256:352219d8d91416519e2425a13938f94600b50cc9334fc45d56caa62f7a193748
Status: Downloaded newer image for liveslak/slackware:latest
root@b0264b9e59ff:/#
And we end up at the command prompt of our running container. The container user is ‘root’; it’s the only user in that base image. You do not have to enter a password.
Let’s play a bit:
Suppose you want to download and use the 32-bit Slackware 14.2 base image instead. Then you would run:
$ docker run -ti liveslak/slackware:base_ia32_14.2 /bin/bash -l
Here ends my very brief introduction to Docker on Slackware. Let me know what you think of this! Is there anything you would like to see explained in more detail?
Eric
Update 2022-jan-13: I have added packages for Slackware 14.2 (32bit and 64bit) to my repository.