
Using Let’s Encrypt to Secure your Slackware webserver with HTTPS

In the ‘good old days‘, when everyone was a hippy and everyone trusted the other person to do the right thing, encryption was not on the table. We used telnet to log in to remote servers, we transferred files from and to FTP servers in the clear, we surfed the nascent WWW using http:// links; there were no pay-walls; and user credentials, well, who’d ever heard of those, right?
Now we live in a time where every government spies on you, fake news is the new news, presidents lead their country as if it were a mobster organisation and you’ll go to jail – or worse – if your opinion does not agree with the ruling class or the verbal minority.
So naturally everybody wants – no, needs – to encrypt their communication on the public Internet nowadays.

Lucky for us, Linux is a good platform for the security minded person. All the tools you can wish for are available, for free, with ample documentation and support on how to use them. SSH secure logins, PGP encrypted emails, SSL-encrypted instant messaging, TOR clients for the darkweb, HTTPS connections to remote servers, nothing new. Bob’s your uncle. If you are a consumer.

It’s just that until not too long ago, if you wanted to provide content on a web server and wanted to make your users’ communications secure with HTTPS, you had to pay a lot of money for an SSL certificate that would be accepted by all browsers. Companies like VeriSign, DigiCert, Comodo, Symantec and GeoTrust are Certificate Authorities whose root certificates ended up in the certificate bundles of Operating Systems, browsers and other tools, but these big boys want you to pay them a lot of money for their services.
You can of course use free tools (openssl) to generate SSL certificates yourself, but these self-signed certificates are difficult for your users to understand and accept if they are primarily non-technical (“hello support line, my browser tells me that my connection is insecure and your certificate is not trusted“).

SSL certificates for the masses

I have long been a supporter of CACert, an organization whose goal is to democratize the use of SSL certificates. Similar to the PGP web-of-trust, the CACert organization has created a group of ‘assurers‘ – the people who can create free SSL certificates. These ‘assurers’ are trusted because their identities have been verified face-to-face, by showing passports and faces. To attain assurer status, your credentials need to be signed by people who agree that you are who you say you are. CACert organizes regular events where you can connect with assurers, and/or become one yourself.
Unfortunately, this grass-roots approach is something the big players (think Google, Mozilla) can not accept, since they do not have control over who becomes an assurer and who is able to issue certificates. Their browsers are therefore still not accepting the CACert root certificate. This is why my web site still needs to display a link to “fix the certificate warning“.
This is not manageable in the long term, even though I still hope the CACert root certificate will ultimately end up being trusted by all browsers.

So I looked at Let’s Encrypt again.
Let’s Encrypt is an organization founded in 2016 by a group of institutions (the Electronic Frontier Foundation, the Mozilla Foundation, the University of Michigan, Akamai Technologies and Cisco Systems) who wanted to promote the use of encrypted web traffic by allowing everyone to create the required SSL certificates in an automated way, for free. These institutions worked with web-browser vendors to get the Let’s Encrypt root certificates accepted and trusted. And that was successful.
The result is that nowadays, Let’s Encrypt acts as a free, automated, and open Certificate Authority. You can download and use one of many client programs that are able to create and renew the necessary SSL certificates for your web servers. And all modern browsers accept and trust these certificates.

Let’s Encrypt SSL certificates expire 90 days (3 months) after creation, which makes it mandatory to use some mechanism that performs regular expiration checks on your server and renews certificates in time.

I will dedicate the rest of this article to explaining how you can use ‘dehydrated‘, a free 3rd-party Let’s Encrypt client which is fully compatible with the official ‘certbot’ client of Let’s Encrypt.
Why a 3rd-party tool and not the official client? Well, dehydrated is a simple Bash shell script, easy to read and yet fully functional. On the other hand, have a look at the list of dependencies you’ll have to install before you can use certbot on Slackware: that’s 17 other packages! The choice was easily made, and dehydrated is actively developed and supported.

I will show you how to download, install and configure dehydrated, how to configure your Apache web server to use a Let’s Encrypt certificate, and how to automate the renewal of your certificates. After reading the below instructions, you should be able to let people connect to your web-server using HTTPS.


Configure dehydrated

Dehydrated has been part of Slackware since the 15.0 release. It ships with a default configuration, a man-page and documentation.

The installed package will also create a cron job “/etc/cron.d/dehydrated” which makes dehydrated run once a day at midnight. I want that file to have some comments about what it does and I do not want to run it at midnight, so I overwrite it with a line that makes it run once a week at 21:00 instead. It will also log its activity to a logfile, “/var/log/dehydrated” in the example below:

cat <<EOT > /etc/cron.d/dehydrated
# Check for renewal of Let's Encrypt certificates once per week on Monday:
0 21 * * Mon /usr/bin/dehydrated -c >> /var/log/dehydrated 2>&1
EOT

Dehydrated uses a directory structure below “/etc/dehydrated/”.
The main configuration file you’ll find there is called “config”.
The file “domains.txt” contains the host- and domain names you want to manage SSL certificates for.
The directory “accounts” will contain your Let’s Encrypt user account and private key, once you’ve registered with them.
And a new directory “certs” will be created to store the SSL certificates you are going to create and maintain.

How to deal with these files is going to be addressed in the next paragraphs.

The dehydrated configuration files

config

The main configuration file “/etc/dehydrated/config” is well-commented, so I just show the lines that I used:

DEHYDRATED_USER=alien
DEHYDRATED_GROUP=wheel
CA="https://acme-staging-v02.api.letsencrypt.org/directory"
#CA="https://acme-v02.api.letsencrypt.org/directory"
CHALLENGETYPE="http-01"
WELLKNOWN="/usr/local/dehydrated"
PRIVATE_KEY_RENEW="no"
CONTACT_EMAIL=eric.hameleers@gmail.com
LOCKFILE="${BASEDIR}/var/lock"
HOOK=/etc/dehydrated/hook.sh

Let’s go through these parameters:

  • We start the ‘dehydrated’ script as root, via a cron job or on the command line. The values of DEHYDRATED_USER and DEHYDRATED_GROUP are the user and group the script will switch to at startup. All activities will be done as user ‘alien’ and group ‘wheel’, not as user ‘root’. This is a safety measure.
  • CA: this contains the Let’s Encrypt URL for dehydrated to connect to. You’ll notice that I actually list two values for “CA” but one is commented out. The idea is that you use the ‘staging’ URL for all your tests and trials, and once you are satisfied with your setup, you switch to the URL for production usage.
    Also note that Let’s Encrypt expects clients to use the ACMEv2 protocol. The older ACMEv1 protocol still works, but you can not register a new account using it; its only use nowadays is to assist in migrating old setups to ACMEv2. The “CA” URL contains the protocol version number (the “v02” part).
  • CHALLENGETYPE: we will be using the HTTP challenge type because that’s easiest to configure. Alternatively, if you manage your own DNS domain, you could let dehydrated update your DNS zone table to provide the challenge that Let’s Encrypt demands.
    What is this challenge? Let’s Encrypt’s ACME protocol wants to verify that you are in control of your domain and/or hostname. It will try to access a verification file via an HTTP request to your webserver.
  • WELLKNOWN: this defines the local directory where dehydrated creates the ‘challenge-tokens’ which are then served by your webserver. The Let’s Encrypt ‘ACME server’ will connect to your server as part of the ‘http-01’ challenge and expects to find a specific file there with specific content (created by dehydrated). In the case of a webserver running on our example domain “foo.net”, that URL would be http://foo.net/.well-known/acme-challenge/m4g1c-t0k3n . The dehydrated client must provide that “m4g1c-t0k3n” file, which it creates during a certificate creation or renewal. Below I will explain how to create this URL location “.well-known/acme-challenge” and make it readable for an external server like Let’s Encrypt.
    If your “domains.txt” file contains more than one hostname or domain, the ACME server will repeat this challenge for every one of them. Usually, multiple hostnames or (sub-)domains means that you have defined multiple VirtualHosts in your Apache webserver configuration. For every VirtualHost you need to enable access to this ‘http-01’ challenge location (I will show you how, below).
    Note: The first connect from the ACME server will always be over HTTP on port 80, but if your site does a redirect to HTTPS, that will work.
  • PRIVATE_KEY_RENEW: whether you want the certificate’s private key to be renewed along with the certificate itself. I chose “no” but the default is “yes”.
  • CONTACT_EMAIL: the email address which will be associated with your Let’s Encrypt account. This is where warning emails will be sent if your certificate is about to expire but has not been renewed.
  • LOCKFILE: the path of the lock file that dehydrated creates during operation; its parent directory (“var” under /etc/dehydrated in our config) must be writable by our non-root user.
  • HOOK: the path to an optional script that will be invoked at various stages of dehydrated’s activities, and which allows you to perform all kinds of related administrative tasks – such as restarting httpd after you have renewed its SSL certificate.
    NOTE: do not enable this “HOOK” line – i.e. keep a ‘#’ comment character in front of it – until you actually have created a working and executable shell script with that name! You’ll get errors otherwise about the non-existing script.

domains.txt

The file “/etc/dehydrated/domains.txt” contains the list of host and domain names you want to associate with your SSL certificates. You need to realize that an SSL certificate contains the hostname(s) or domain name(s) it is going to be used for. That is why you will sometimes see a “hostname does not match server certificate” warning when you open a URL in your browser: it means that the remote server’s SSL certificate was originally meant to be used with a different hostname.

In our case, the “domains.txt” file contains just one hostname on a single line:

www.foo.net

… but that line can contain any number of space-separated hosts under the same domain. For instance, the line could be “foo.net www.foo.net”, which would tell Let’s Encrypt that the certificate is going to be used on two separate web servers: one with hostname “foo.net” and the other with hostname “www.foo.net“. Both names will be incorporated into the certificate.

Your “/etc/dehydrated/domains.txt” file can be used to manage the certificates of multiple domains, each domain on its own line (e.g. domain foo.org on one line, and domain foo.net on another line). Each line corresponds to a different SSL certificate – e.g. for different domains. Every line can contain multiple hosts in a single domain (for instance: foo.org www.foo.org ftp.foo.org).
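
As a concrete illustration (re-using the example names from this article), a “domains.txt” that manages two separate certificates could look like this:

foo.org www.foo.org ftp.foo.org
foo.net www.foo.net

The first line produces one certificate covering the three foo.org hosts (stored under “/etc/dehydrated/certs/foo.org/”), the second line a separate certificate for the two foo.net hosts.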

Directory configuration

Two directories are important for dehydrated, and we need to create and/or configure them properly.

/etc/dehydrated

First, the dehydrated configuration directory. We have configured dehydrated to run as user ‘alien’ instead of user ‘root’, so we need to ensure that the directory is writable by this user. Or better (since we installed this as a Slackware package and a package upgrade would undo an ownership change of /etc/dehydrated): let’s manually create the subdirectories “accounts”, “certs”, “chains” and “var” where our user actually needs to write, and make ‘alien’ their owner:

# mkdir -p /etc/dehydrated/accounts
# chown alien:wheel /etc/dehydrated/accounts
# mkdir -p /etc/dehydrated/certs
# chown alien:wheel /etc/dehydrated/certs
# mkdir -p /etc/dehydrated/chains
# chown alien:wheel /etc/dehydrated/chains
# mkdir -p /etc/dehydrated/var
# chown alien:wheel /etc/dehydrated/var

/usr/local/dehydrated

The directory “/usr/local/dehydrated” is the location where dehydrated will generate the Let’s Encrypt challenge files. These files provide the proof that we actually own the domain(s) we are requesting a certificate for.
So let’s create that directory and allow our non-root user to write there:

# mkdir -p /usr/local/dehydrated
# chown alien:wheel /usr/local/dehydrated

SUDO considerations

We configured the dehydrated script to drop its root privileges at startup and continue as user ‘alien’, group ‘wheel’. Because we also change the group, it is important that the sudo line for root in the file “/etc/sudoers” is changed from the default:

#root ALL=(ALL) ALL

to

root ALL=(ALL:ALL) ALL

Else you’ll get the error “Sorry, user root is not allowed to execute ‘/usr/bin/dehydrated -c’ as alien:wheel on localhost.“.
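
If you want to verify that change, a quick sanity check (just an illustration, dehydrated itself does not need it) is to let root execute a harmless command as that user:group combination:

# sudo -u alien -g wheel id

This should print the identity of user ‘alien’ with group ‘wheel’. If the sudoers line still has the old format, you will get the same “Sorry, …” error instead.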

Apache configuration

I expect that you have already set up Apache for un-encrypted connections and already have a web site. If you still need to figure out how to set up a web site using Apache, I suggest you look for a good tutorial before you proceed with my article, like https://docs.slackware.com/howtos:network_services:setup_apache .

Before we register an account with Let’s Encrypt and start generating certificates, let’s first update our existing Apache configuration so that it works with dehydrated. We need to make the ‘http-01’ challenge location (http://foo.net/.well-known/acme-challenge/) accessible to external web clients, else the certificate generation will fail.

Note that the above example mentions the “foo.net” hostname. If your “/etc/dehydrated/domains.txt” contains lines with multiple hosts under a domain, you’ll have to make the URL path component “/.well-known/acme-challenge” accessible through every domain host you configured in Apache. The complete certificate generation process will fail in case any of these challenge URLs cannot be validated.
To make life simpler if you run multiple web servers, we created “/usr/local/dehydrated/” to store the challenge file: it is a single file location. With the help of the Apache “Alias” directive we can use that single location in all our web servers.

Use this snippet of text in the <VirtualHost></VirtualHost> configuration block for every webserver host:

# We store the dehydrated info under /usr/local and use an Apache 'Alias'
# to be able to use it for multiple domains. You'd use this snippet:
Alias /.well-known/acme-challenge /usr/local/dehydrated
<Directory /usr/local/dehydrated>
    Options None
    AllowOverride None
    Require all granted
</Directory>

You can use “lynx” on the command-line to test whether a URL is valid:

$ lynx -dump http://www.foo.net/.well-known/acme-challenge/
Forbidden: You don't have permission to access /.well-known/acme-challenge/ on this server.

Despite that error, this message actually shows that the URL works (otherwise the return message would have been “Not Found: The requested URL /.well-known/acme-challenge was not found on this server.“).
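
If you prefer “curl” over “lynx” (assuming you have curl installed), the HTTP status code tells the same story; a minimal check could look like this:

$ curl -s -o /dev/null -w "%{http_code}\n" http://www.foo.net/.well-known/acme-challenge/
403

A “403” (Forbidden) means the Alias is in place and Apache merely refuses to produce a directory listing; a “404” (Not Found) would mean that the challenge location is not configured yet.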

This completes the required Let’s Encrypt modifications to your Apache web server configuration.
Next, and before we restart ‘httpd‘, our Apache server must be enabled to accept SSL connections. This is achieved by un-commenting the following line in “/etc/httpd/httpd.conf”:

# Secure (SSL/TLS) connections
Include /etc/httpd/extra/httpd-ssl.conf

You can now restart Apache httpd to activate our modifications (but always test the syntax of your configuration first):

# apachectl configtest
# /etc/rc.d/rc.httpd restart

To end the Apache configuration instructions, here are the bits that define the SSL parameters for your host. Note that you should not add them yet! You do not have an SSL certificate yet. Only after you have executed “dehydrated -c” and obtained the certificates can you add the following lines to every <VirtualHost></VirtualHost> block where you previously added the ‘Alias’ related lines above:

SSLEngine on
SSLCertificateFile /etc/dehydrated/certs/foo.net/cert.pem
SSLCertificateKeyFile /etc/dehydrated/certs/foo.net/privkey.pem
SSLCertificateChainFile /etc/dehydrated/certs/foo.net/chain.pem
SSLCACertificatePath /etc/ssl/certs
SSLCACertificateFile /etc/ssl/certs/ca-certificates.crt

Note the hostname “foo.net” in these SSL lines above? This is an example of course, and you need to change it to your own hostname.
What you need to realize is that this name corresponds to the first name on the relevant line of your “/etc/dehydrated/domains.txt” file. Earlier in the article I used an example line for this “domains.txt” file which looks like this: “foo.net www.foo.net“. Even more hosts are possible; they should be space-separated. A single certificate will be generated which is valid for all of these hosts, and it is stored in “/etc/dehydrated/certs/” followed by “foo.net” – the name of the first entry on the line.
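
To put the Apache pieces together, here is a sketch of what a complete HTTPS VirtualHost block could look like once the certificates exist. It re-uses the example hostnames and paths from this article; the DocumentRoot is only an assumption for illustration, so adapt everything to your own setup:

<VirtualHost *:443>
    ServerName foo.net
    ServerAlias www.foo.net
    # Assumed document root - use your own:
    DocumentRoot "/srv/httpd/htdocs"

    # The shared Let's Encrypt challenge location:
    Alias /.well-known/acme-challenge /usr/local/dehydrated
    <Directory /usr/local/dehydrated>
        Options None
        AllowOverride None
        Require all granted
    </Directory>

    # The SSL parameters, pointing into dehydrated's certificate store:
    SSLEngine on
    SSLCertificateFile /etc/dehydrated/certs/foo.net/cert.pem
    SSLCertificateKeyFile /etc/dehydrated/certs/foo.net/privkey.pem
    SSLCertificateChainFile /etc/dehydrated/certs/foo.net/chain.pem
</VirtualHost>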

Running dehydrated for the first time, using the Let’s Encrypt staging server

With all the preliminaries taken care of, we can now proceed and run ‘dehydrated’ for the first time. Remember to make it connect to the Let’s Encrypt ‘staging’ server during all your tests, to prevent their production server from getting swamped with bogus test requests!

Examining the manual page (run “man dehydrated“) we find that we need the parameter ‘--cron’, or ‘-c’, to sign/renew non-existent/changed/expiring certificates:

# /usr/bin/dehydrated -c
# INFO: Using main config file /etc/dehydrated/config
# INFO: Running /usr/bin/dehydrated as alien/wheel
# INFO: Using main config file /etc/dehydrated/config

To use dehydrated with this certificate authority you have to agree to their terms of service which you can find here: https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf

To accept these terms of service run `/usr/bin/dehydrated --register --accept-terms`.

What did we learn here?
In order to use dehydrated, you’ll have to register first. Let’s create your account and generate your private key!

Do not forget to set the “CA” value in /etc/dehydrated/config to a URL supporting ACMEv2. If you use the old staging server URL you’ll see this error: “Account creation on ACMEv1 is disabled. Please upgrade your ACME client to a version that supports ACMEv2 / RFC 8555. See https://community.letsencrypt.org/t/end-of-life-plan-for-acmev1/88430 for details.”

With the proper CA value configured (you’ll have to do this both for the staging and for the production server URL), you’ll see this if you run “/usr/bin/dehydrated --register --accept-terms”:

# /usr/bin/dehydrated --register --accept-terms
# INFO: Using main config file /etc/dehydrated/config
# INFO: Running /usr/bin/dehydrated as alien/wheel
# INFO: Using main config file /etc/dehydrated/config
+ Generating account key...
+ Registering account key with ACME server...
+ Fetching account ID...
+ Done!

Generate a test certificate

We’re ready to roll. As said before, it is proper etiquette to run all your tests against the Let’s Encrypt ‘staging’ server and use their production server only for the real certificates you’re going to deploy.
Let’s run the command which is also being used in our weekly cron job, “/usr/bin/dehydrated -c”:

# /usr/bin/dehydrated -c
# INFO: Using main config file /etc/dehydrated/config
# INFO: Running /usr/bin/dehydrated as alien/wheel
# INFO: Using main config file /etc/dehydrated/config
+ Creating chain cache directory /etc/dehydrated/chains
Processing www.foo.net
+ Creating new directory /etc/dehydrated/certs/www.foo.net ...
+ Signing domains...
+ Generating private key...
+ Generating signing request...
+ Requesting new certificate order from CA...
+ Received 1 authorizations URLs from the CA
+ Handling authorization for www.foo.net
+ Found valid authorization for www.foo.net
+ 0 pending challenge(s)
+ Requesting certificate...
+ Checking certificate...
+ Done!
+ Creating fullchain.pem...
+ Done!

This works! You can check your web site now, if you did not forget to add the SSL lines to your VirtualHost block; your browser will complain that it is being served an un-trusted SSL certificate issued by “Fake LE Intermediate X1“.
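
You can also inspect the served certificate from the command line instead of the browser, using nothing but the openssl tool that ships with Slackware (replace the hostname with your own):

$ echo | openssl s_client -connect www.foo.net:443 -servername www.foo.net 2>/dev/null | openssl x509 -noout -issuer -dates

While you are still using the staging CA, the issuer line will mention the fake Let’s Encrypt intermediate; after you switch to the production CA (next section) it should show the real issuer instead.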

Generate a production certificate

First, change the “CA” variable in “/etc/dehydrated/config” to the production CA URL “https://acme-v02.api.letsencrypt.org/directory”.
Remove the fake certificates that were created in the previous testing step so that we can create real certificates next:

# rm -r /etc/dehydrated/certs/www.foo.net

Now that we’ve cleaned out the fake certificates, we’ll generate real ones:

# /usr/bin/dehydrated -c
# INFO: Using main config file /etc/dehydrated/config
# INFO: Running /usr/bin/dehydrated as alien/wheel
# INFO: Using main config file /etc/dehydrated/config
Processing www.foo.net
+ Creating new directory /etc/dehydrated/certs/www.foo.net ...
+ Signing domains...
+ Generating private key...
+ Generating signing request...
+ Requesting new certificate order from CA...
+ Received 1 authorizations URLs from the CA
+ Handling authorization for www.foo.net
+ 1 pending challenge(s)
+ Deploying challenge tokens...
+ Responding to challenge for www.foo.net authorization...
+ Challenge is valid!
+ Cleaning challenge tokens...
+ Requesting certificate...
+ Checking certificate...
+ Done!
+ Creating fullchain.pem...
+ Done!

If you reload the Apache server configuration (using the command “apachectl -k graceful”) you’ll now see that your SSL certificate has been signed by “Let’s Encrypt Authority X3” and it is trusted by your browser. We did it!

Automatically reloading Apache config after cert renewal

When your weekly cron job decides that it is time to renew your certificate, we want the dehydrated script (which runs as a non-root account) to reload the Apache configuration. And of course, only root is allowed to do so.

We’ll need a bit of sudo magic to make it possible for the non-root account to run the “apachectl” program. Instead of editing the main file “/etc/sudoers” with the command “visudo” we create a new file “httpd_reload” especially for this occasion, in sub-directory “/etc/sudoers.d/” as follows:

# cat <<EOT > /etc/sudoers.d/httpd_reload
alien ALL=NOPASSWD: /usr/sbin/apachectl -k graceful
EOT

This sudo configuration allows user ‘alien’ to run the exact command “sudo /usr/sbin/apachectl -k graceful” with root privileges.
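
A quick way to verify the new rule is to list the sudo rights of user ‘alien’ (run this as root):

# sudo -l -U alien

The output should mention the apachectl command with the NOPASSWD tag. If it does not, check “/etc/sudoers.d/httpd_reload” for typos.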

Next, we need to instruct the dehydrated script to automatically run “sudo /usr/sbin/apachectl -k graceful” after it has renewed any of our certificates. That is where the “HOOK” parameter in “/etc/dehydrated/config” comes into play.

As the hook script, we are going to use dehydrated’s own sample “hook.sh” script that can be downloaded from https://raw.githubusercontent.com/lukas2511/dehydrated/master/docs/examples/hook.sh or (if you used the SlackBuilds.org script to create a package) use “/usr/doc/dehydrated-*/examples/hook.sh”.

# cp -i /usr/doc/dehydrated-*/examples/hook.sh /etc/dehydrated/
# chmod +x /etc/dehydrated/hook.sh

This shell script contains a number of functions, each of which is relevant and will be called at a certain stage of the certificate renewal process. The dehydrated script provides several environment variables to allow a high degree of customization, and all of that is properly documented in the sample script, but we do not need any of that. We only need to add a few lines at the end of the “deploy_cert()” function:

deploy_cert() {
# ...
# After successfully renewing our Apache certs, the non-root user 'alien'
# uses 'sudo' to reload the Apache configuration:
sudo /usr/sbin/apachectl -k graceful
}

That’s all. Next time dehydrated renews a certificate, the hook script will be called and that will reload the Apache configuration at the appropriate moment, making the new certificate available to visitors of your web site.
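
If you do not want to wait for a real renewal to see the hook in action, dehydrated can force-renew a certificate that is still valid – out of courtesy to Let’s Encrypt, point the “CA” variable at the staging server again if you try this:

# /usr/bin/dehydrated -c -x

The ‘-x’ (or ‘--force’) parameter forces renewal even when the certificate is not yet close to expiration; at the end of such a run you should see the hook script reloading Apache.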

Summarizing

I am glad you made it all the way down here! In my usual writing style, the article is quite verbose and gives all kinds of contextual information. Sometimes that makes it difficult for the “don’t bother me with knowledge, just show me the text I should copy/paste” user, but I do not care for that.

I do hope you found this article interesting and useful. If you spotted any falsehoods, let me know in the comments section below. If some part needs more clarification, just tell me.

Have fun with a secure web!

Eric


Patrick, next Slackware and moving forward with KDE Plasma5

I assume that many of you will have been reading the recent LinuxQuestions thread “Donating to Slackware”, and in particular Patrick Volkerding’s reply in which he explains that the Slackware Store (an entity independent of Slackware, with which he has a business arrangement involving a percentage of sales profit and medical insurance) has not been paying him any money for the last two years, and that most likely all the PayPal donations through the Store have gone into the pockets of the Store owners. Read that thread if you have not done so yet.
Basically Pat is broke. That thread lists a PayPal address which Pat eventually shared and where donations can be sent directly to him, so that he can fix his roof, his airco, his crashing server and his wife’s car. That would be a start.

That LQ thread is also being used to discuss possible ways forward for Pat (setting up a Patreon account, or a business PayPal account, etc.) so that he can support his family and continue working on Slackware. To me it looks like the Store will be a thing of the past unless they change their attitude. Switching from a business model where revenue is generated from optical media sales to a model where supporters set up a recurring payment in exchange for the prolonged existence of their favorite distro, and possibly get Pat to write up some hands-on stories as a reward, may ultimately benefit Pat, and Slackware, more than the way things are handled at the moment. If you are doubting the financial impact of a recurring payment through Patreon or PayPal, look at it this way: if you donate one euro per month, you will probably not even notice that the money is gone. But with 2000 people donating one euro per month, Pat would have a basic income (pre-tax) already. Not a lot, but it’s a start. The 2000 people is a rough estimate of the number of people who ordered a DVD or CD through the store: the owners told Pat that the earnings of the 14.2 release were 100K (and Pat got 15K out of that, go figure!). Divide that by ~50 euro per DVD and you arrive at 2000 people. Then there are all these people who donated money through the Store or bought shirts, caps and stickers. I think the amounts of money even a small community (like us Slackware users) can contribute should enable Pat to shed his financial worries. The fact that the Slackware Store has basically been ripping off the hand that feeds them is enraging and inexcusable.
This is all about a community standing up to provide support for what (or who) bonds us together.

Very important to take from Pat’s reply is that he’s “never really been in this for the money” but without income, Slackware’s development is ultimately affected too. I hesitated writing this article, even after Patrick’s LQ post, because it is Patrick’s life and I won’t decide for him how to live it. But I am passionate about Slackware, and care a great deal for Pat, Andrea and Briah, and wish them nothing but the best.

So, in that LQ thread and in private talks, I guess there will be a lot of discussion as well about the shape and form of a future Slackware. Should it shrink to a “core distro” on which others can build their repositories, for instance offering Plasma5, MATE or Cinnamon desktop environments? How could these external repositories be integrated so that a new install can effortlessly be expanded with extra functionality? Should Plasma5 be included? Should PAM be included? And so forth. Lots of exciting developments in store!

As for KDE Plasma5, I talked with Pat about the way forward and what his plans are with regard to Slackware and Plasma5. Pat indicated that he would at this moment be in favor of going with the latest and greatest instead of adopting LTS (Long Term Support) releases of KDE, because of the reports from several Slackware Plasma5 users that several usability bugs have been solved in the latest releases (part of those improvements can be attributed to the newer Qt5). If Pat decides not to adopt Plasma5 into Slackware, then as long as he provides a solid base in Slackware 15.0 I can keep providing Plasma5 as an add-on through my ‘ktown’ repository. That “solid base” would at least have to be Qt5, its supporting libraries, and recompiled/upgraded phonon, poppler, harfbuzz etc packages to add Qt5 support so that I can cut my “deps” section substantially and no longer have to provide alternate versions of packages that are also part of Slackware but lack required functionality.

And that is why my next update of ‘ktown‘ will see the removal of the LTS software versions in the ‘latest‘ sub-repository and at the same time, the bleeding edge ‘testing‘ sub-repository will be promoted to the ‘latest‘. The ‘testing’ and ‘latest’ will then contain the same packages, so that everyone will upgrade to the same July ’18 packages.
I still need to start collecting the new KDE source archives, sync my virtual machines to latest -current and start compiling. Don’t expect packages before the weekend…

Eric

Tracking development of Slackware in git

Something had been nagging me for a long time, and I finally had enough of that itch and decided to deal with it.

As you know, there’s a private and a public side to Slackware’s development. The discussions and decisions are handled internally among the members of ‘the team’ and are not shared with the public at large until an update is done to the ‘slackware-current’ tree which can be found on every Internet mirror.
Thus you always have access to the latest state of development. But for some people it is a compelling idea to be able to follow the development in a public repository like git – where you can track the changes over time.

A recent discussion on LinuxQuestions brought up the topic of SlackBuild scripts in Slackware-current. The scripts you can find in the -current directory tree on the Slackware mirrors are always the latest version. Sometimes there’s a good reason to want to go back in time and fetch an earlier version. In the thread, it is post number “1337” (how appropriate) in which ponce (Matteo Bernardini) replies with a link to a git repository maintained by Adrien Nader which has already been tracking the development of -current for nearly 8 years! So it’s quite a convenient way to retrieve a historical version of any script.

Me being me, it’s the existence of that repository which has been nagging me for a couple of years. Why? Because I wondered how it was done. And if I question an issue long enough, I will eventually create my own solution – as a learning exercise of course, but also to give back to the community.

And so, today arrived. I was pondering: if I were to create a git repository for tracking the developments in -current, what would I want in there? Exactly the same as Adrien’s? The answer has been “no” for a while. The most important shortcoming of Adrien’s repo is that it contains a lot of compressed files that are impossible to read. Think of patches and doinst.sh scripts, and more. So I gave myself the task of implementing a git repository with uncompressed files, as an improvement on the original effort. Also, it should track all relevant files in the complete tree, not just in the “./source/” subdirectory – in particular the documentation files (various .TXT files).

The result is a script, maintain_current_git.sh, and a repository, https://git.slackware.nl/current/ .
The repository just had its first commit. For those who want to check out a commit in order to compile a package from it, the maintain_current_git.sh script generates another script called ‘recompress.sh‘ and stores it in the root directory of the repository. When you run this recompress script in the root directory of the repository, it re-compresses all the files that had been un-compressed before committing them to git. That way, a SlackBuild script will find the correct files and will function as intended. Note that you would still have to download the source tarballs from somewhere, because this repository of mine only tracks the Slackware-specific files.
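
In practice, using the repository could look like the sketch below. The clone URL is an assumption based on the repository address above, and the commit id is of course just a placeholder:

$ git clone https://git.slackware.nl/current/ slackware-current
$ cd slackware-current
$ git checkout <commit-id>   # the historical state you are interested in
$ ./recompress.sh            # re-compress the files so SlackBuild scripts work again

After that, you would still fetch the upstream source tarballs separately, as explained above.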

I decided that it is prudent and more respectful not to import Adrien’s work into my own repository. The two are similar but different, and I think every one of you can choose which repository suits your needs better.

I have scheduled the above script to run twice a day and update the git repository when new updates become available.
As with all my scripts, this one has a “-h” parameter to explain its usage. Let me know if it – and the git repository – are useful to you.
This particular script may be a bit messy because I have not spent a lot of time polishing it. I hope that’s OK 😉

Have fun!

Fun and games in -current when ABIs break

Oh my.
It was not an April Fools joke when the ChangeLog.txt of slackware-current mentioned the following (I left out the non-relevant package updates):

 

+--------------------------+
Sun Apr 1 02:53:26 UTC 2018
...
l/icu4c-61.1-x86_64-1.txz: Upgraded.
 Shared library .so-version bump.
...
l/poppler-0.63.0-x86_64-1.txz: Upgraded.
 Shared library .so-version bump.

All of us who follow Slackware’s development know that “shared library version bump” means ABI breakage, i.e. a lot of 3rd-party binaries will suddenly not find the required library versions anymore. In particular icu4c and poppler are nasty beasts. Slackware’s own packages had of course been carefully updated and recompiled where needed, so there was no breakage in the distro itself. But many people do not run a bare Slackware installation… a lot of software is usually installed on top. And that is the software which will be affected by an incompatible change like this one on April 1st.

What’s this version bump all about? How is it possible that it affects your computer so deeply?

Most programs depend on other programs. Software developers hate to re-invent the wheel if they can avoid it. Lots of lower-level or widely used functionality has been put into software libraries. Think of network access, text rendering, encryption etc. – smart people have created useful, efficient and robust software and stuffed that code into libraries. Your own program can link against these libraries at run-time and access the functionality it needs.
If many programs link to the same libraries, that reduces the memory footprint because a library has to be loaded into memory only once even if many programs are using it. These libraries are therefore called ‘shared libraries‘ or ‘shared objects‘ (hence the extension ‘.so‘) or also dynamic libraries. For this dynamic linking to work, programs use binary interfaces at runtime that were established when the program was compiled and linked.
A shared library version only needs to change if the Application Binary Interface (ABI) changes. When that happens, all binaries that depend on the library need to be recompiled to adjust to the new ABI.
Among others, an ABI depends on the machine architecture, and on the toolchain (compiler, linker) used to generate the binary code from its sources. An ABI guarantees binary compatibility: the program will work on every machine with the same ABI, without a need for recompilation.
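
You can watch this mechanism at work with the ‘ldd’ tool, which prints the shared libraries a binary requires at runtime. After a .so-version bump, a 3rd-party binary that was linked against the old library version reports it as missing. A rough sketch for hunting down such binaries (the library name in the sample output is only an example of what you might see after the icu4c bump):

$ ldd /usr/bin/yourprogram | grep 'not found'
        libicui18n.so.60 => not found

$ for f in /usr/bin/*; do ldd "$f" 2>/dev/null | grep -q 'not found' && echo "$f"; done

Every program that turns up in such a scan needs a recompile against the new library versions.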

When will an ABI usually change?

One case is when the library’s API (Application Programming Interface, i.e. the way in which access to the library routines is defined in its source code) changes. This mostly occurs in software that is not yet stable and where its programmers add new functionality or revise the methods of calling the library’s functionality. Mature software on the other hand has a well-defined API which is rarely subject to change.

Another case is when the toolchain is updated. Slackware’s toolchain has been very stable and ABI changes have seldom been introduced. As an example, Patrick talks about the a.out to ELF migration and the libc5 to glibc migration in a 2012 interview.

So why does well-established software like icu4c and poppler change their ABI almost on every minor release, thereby pissing off a really large crowd? You tell me. Arrogance or sloppiness, but let’s attribute it to bad project management. Because it probably could have been avoided in many if not all cases.

Anyway, some of you upgraded first to this new batch of updates in -current, then found out that some 3rd party packages stopped working, and then started looking for a cause.
Folks: if you are running slackware-current, you always check the ChangeLog.txt first. And if you spot a “shared library version bump” you stop right there and assess the situation. Some friends on LinuxQuestions were a bit more vocal about the way they had found themselves at a dead Plasma5 screen… others understood the situation better and realized they had to wait for 3rd party packagers to update their repositories instead of assuming ill will.
If you are unable to cope with this kind of occasional breakage, use a stable Slackware release. That’s what all of us have been telling you all along.

My packages have been affected as well:

  • Plasma 5 stopped working.
    The fix is to recompile several packages, among which a number of big ones: qt5, qt5-webkit and calligra. That means you will have to have patience. The 64bit repository has been updated in the meantime; after upgrading from my ‘ktown’ repository your Plasma5 will again be fully functional. The 32bit repository updates have been uploaded now as well (I had to restart the 32bit qt5 compilation because of an internal compiler error halfway).
    The updated Plasma5 will even have some new functionality – I slipped in the KMymoney program which otherwise would have been introduced in the April update of ‘ktown‘.
  • LibreOffice stopped working.
    The 64bit packages have been recompiled. They have actually been updated, since there was a micro source version bump from 6.0.3.1 to 6.0.3.2 yesterday – my luck! New libreoffice packages for -current are now available.
  • Pale Moon stopped working.
    I was able to recompile the packages in between the Plasma5 compilations because it takes only a little time. The updated palemoon packages are already in my repository.
  • Calibre stopped working.
    New calibre packages for -current are now available.

And probably other stuff is broken too but I have not yet spotted that. If you find breakage, please report it so that I can recompile the package.

Encrypted Media Extensions on the World Wide Web

Today, a post about Digital Rights Management. I am not going to bore you with the pros and cons of restricting your freedom, but I do want to point to a meaningful event which happened this week.

Before I continue, I want you to fully realize that with Slackware Linux, your rights are not taken away. You are free to use – or not use – technologies that allow you to watch “protected” content like Netflix videos. Our browsers will work just as well if you choose not to use DRM technologies. The libraries which implement the DRM layer are separate from the Slackware packages containing the browsers (Firefox, Chromium) and are not distributed with the OS. It is up to you to add DRM extensions if you need them. You are and remain in control of your OS.

With that out of the way, what happened?
This week, the World Wide Web Consortium (W3C) has finally approved the Encrypted Media Extensions (EME) as part of the HTML5 standard. Objections from the Electronic Frontier Foundation, Free Software Foundation and other digital freedom advocates have not been honored. But that does not necessarily mean it’s a bad thing. EME is a standard to implement DRM, but it is not a DRM solution itself. EME allows companies that built their business model around the commercial distribution of protected media content to create rich applications that run in your browser, based on international standards.
Digital Rights Management is not new and it is not going away either. However, it is in need of standardization to improve the current status quo. There are already several de-facto standards for streaming protected content to your browser: Flash and Silverlight, to name the two that have been most widely used in past years. Both of these technologies are dying or already dead. New technologies building on HTML5 are already becoming unofficial standards; think of Widevine, a DRM solution from Google. Not just browser plugins like the ones I mentioned, but also applications can implement DRM when they allow you to watch or listen to multimedia without the option to make unrestricted local copies. Locally stored content will be encrypted and can only be played back using the original app. There are lots of those on Android, for instance.

DRM solutions are proprietary. Their code is not free and the libraries are distributed as binary-only. There’s a logic to that of course. Think what you will, but there are both providers and consumers that embrace them. What is more important is that there is wisdom in embedding these technologies in Web standards. We should not encourage companies to pollute our computers with incompatible and non-interoperable solutions. So yes, I am glad that EME is finally a W3C standard. Let the Web remain viable, allowing maximum flexibility and compatibility.

I mentioned Widevine in the text, and I have something new to tell about that too.
My package repository contains the chromium-widevine-plugin. It is an add-on package for my chromium browser package that allows you (among other things) to watch Netflix video content in your Chromium browser. In the past I have always used the Google Chrome RPMs to extract the ‘libwidevinecdm.so‘ library and make a Slackware package out of it. Google stopped distributing 32bit versions of the Chrome browser after version 48, so those of you on 32bit Slackware who use my Chromium package were stuck with an old version of the Widevine CDM library and no way of knowing how long this library would remain compatible with newer Chromium sources.
But Mozilla has since extended Firefox’s capabilities, so that it too is now able to use Widevine’s Content Decryption Module. In Firefox, this DRM capability was implemented in such a way that by default, the browser is completely DRM-free. You (the end user) first have to explicitly enable DRM in the browser’s settings, after which Firefox will download the Widevine CDM from an Internet URL. And since Firefox comes in 32bit as well as 64bit variants, I was thinking: “where do they download these Widevine libraries, and are they usable in Chromium as well?”

So I set out to find the Firefox download location for the Widevine CDM libraries, found them, retrieved them and tested the libraries in 32bit and 64bit Chromium. Lo and behold… this worked!

I have now rewritten my SlackBuild script for the chromium-widevine-plugin package to use this alternate download location. And since I no longer have to extract the library from a Chrome RPM, I have also changed the package version numbering. The package version no longer reflects the Chrome release, but now it is actually reflecting the internal version of the Widevine CDM library.

Have fun watching Netflix! While I am at it, I recommend The Expanse, or perhaps Helix. If you’re not so much into Sci-Fi (or have already seen those series) and want to know more about our basic foods, check out Cooked.
