My thoughts on Slackware, life and everything


Secure Boot support landed in liveslak 1.5.0

Secure Boot ???

Secure Boot is part of the UEFI specification; it first appeared in the Unified Extensible Firmware Interface (UEFI) 2.3.1 specification (Errata C). It is meant to prevent the execution of unauthorized code when a computer boots. Most modern personal computers have a way of enabling Secure Boot in UEFI, but it is common to leave it disabled if you are not running a Microsoft OS, since Microsoft controls the Secure Boot signing process.
For dual-boot scenarios the story is different however. Microsoft Windows 8 and 10 recommend having Secure Boot enabled but do not enforce it; as far as I know, for Microsoft Windows 11 enabling Secure Boot will be a requirement to get full upgrade support.

Microsoft’s own Windows OSes fully support Secure Boot (naturally), but non-Microsoft operating systems that want to boot on a Secure Boot enabled computer must have an initial bootloader which has been cryptographically signed with the ‘Microsoft UEFI CA‘ key. Getting Microsoft to sign a 3rd-party (mostly Linux) bootloader is a complex procedure which involves money (though not always) and the establishment of trust. You need to realize that if a malicious bootloader were accidentally signed by Microsoft, it could be used to deliver all kinds of nastiness to every Secure Boot platform in existence. And that would defeat the whole purpose of a tightly controlled ‘Secure Boot‘ environment.

It’s likely that more and more computers with MS Windows installed or with a dual-boot configuration will have Secure Boot enabled in the near future. For Slackware and its derivatives, including my liveslak based ISOs, this is a potential problem because Secure Boot is not supported yet and therefore our OS will not launch.
Additionally, one of the use-cases of liveslak ISOs is to take it along with you on a USB stick on your key-chain and boot random PC hardware with it – just for fun, or to test the Slackware compatibility of new hardware, or simply to have your Slackware desktop with you on a stick all the time.
A liveslak ISO will be a lot less useful if every PC you encounter has Secure Boot enabled.
So I set out to find a solution.

One possible way out of this dilemma is to acquire a Microsoft-signed first-stage bootloader, just like Red Hat, Fedora, Debian, Ubuntu and the other big brothers already provide. As said before, this process is complex and I am not able to satisfy the requirements at the moment. In the future, who knows? I am working on that.

Alternatively (and other small distros seem to do this too), an already signed bootloader can be ‘borrowed’ from another distro and used to boot a liveslak ISO. But that has consequences, which I will elaborate on further down. This is the solution I chose.

Note: In case you have doubts: the liveslak ISOs support Secure Boot from now on. But nothing changes when your computer does not have Secure Boot enabled. Slackware Live Edition will boot and work exactly like before.

Boot sequence

Let me give a quick and dirty and possibly not 100% terminology-accurate explanation of the boot procedure in a Secure Boot computer.

  • The TPM chip in the computer may be used to store a bunch of (Microsoft) security certificates which are trusted by the Secure Boot subsystem of UEFI, as well as (potentially) a list of certificates that have been revoked due to security violations. Otherwise these keys are stored in NVRAM (non-volatile RAM).
    Secure Boot will load only bootloaders that have been signed by a trusted Microsoft certificate. The most widely used Linux bootloader which Microsoft accepts for signing is the ‘shim‘, a trivial application which does not do much more than verify the signature of the second-stage bootloader which it is hard-coded to look for. A signed ‘shim’ binary also contains an embedded SSL certificate of the distro that owns the shim binary.
    The shim accepts two certificates when it checks the signature of the second-stage bootloader and other binaries: one is the Microsoft certificate and the other is this embedded distro certificate. This allows such a distro to provide a nice unattended problem-free boot on every computer with Secure Boot enabled.
  • The shim loads the second-stage bootloader; this is most often the Grub2 bootloader, but it could be any EFI image file (such as rEFInd). The hard-coded default filename that the shim will attempt to load is ‘grubx64.efi’, so whatever your distro’s bootloader is, it needs to be renamed to that.
    Shim will check the signing certificate of that file and will proceed only if it recognizes the certificate as valid.
  • Grub will be informed by the shim that it is loading in a Secure Boot environment, and because of that, Grub will ask the shim whether it is OK to load the kernel it is configured for. Shim will check the signing certificate of your kernel and will signal an ‘OK’ to Grub only if it recognizes the kernel’s signing certificate as valid.
  • Your Linux OS will now boot properly.
    But what happens if Grub or the kernel are not signed, or when they are signed with an un-recognized certificate? Well, your computer will refuse to boot.

Lack of trust

The problem for liveslak (and for Slackware) is that neither Patrick nor I own any of the private keys corresponding to the SSL certificates that these big distros embed in the signed ‘shim’ binaries they provide (naturally). Therefore I must sign the liveslak Grub and kernel images using my own private key and public certificate pair.
The public certificate I use for signing is not recognized by UEFI Secure Boot, since it is not embedded in any bootloader which has been signed by the Microsoft Certificate Authority. If Secure Boot is enabled, the signing certificate of liveslak’s Grub and kernel images will not be accepted by shim… and the Live OS fails to boot.

As the owner of your PC, you can clear the Microsoft certificates from the machine’s certificate store and load your own certificates instead. Your computer will then boot without issue, because the bootloader’s signature can now be validated against a certificate in the trust store.
This is probably the best solution if you are never going to run MS Windows on it anyway. But if you do, or might, or if you want to provide a solution that works on a multitude of computers besides your own, you need to come up with something else.
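For the record, here is a minimal sketch of that ‘take full ownership’ approach. It assumes the ‘efitools’ package (not part of Slackware proper, so its availability here is an assumption) and a firmware that has been put into ‘setup mode’ so the Platform Key can be replaced; the filenames are made up for the example:

# openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 \
   -subj "/CN=My own Platform Key/" -keyout PK.key -out PK.crt
# cert-to-efi-sig-list -g "$(uuidgen)" PK.crt PK.esl
# sign-efi-sig-list -k PK.key -c PK.crt PK PK.esl PK.auth
# efi-updatevar -f PK.auth PK

The same pattern is then repeated for the Key Exchange Key (KEK) and the signature database (db), after which you sign your own bootloader and kernel with a key that your firmware now trusts. That is a rabbit hole of its own and outside the scope of this article.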

You are the Machine Owner

Lucky for us poor distro developers, the shim also consults a database of Machine Owner Keys (MOK) next to the Microsoft-trusted keys. And you are the Machine Owner if you have physical access to the computer and know the password to unlock the MOK database (it is empty by default).
The MOK database can be populated with public keys of your own making, or of your friends… and/or with liveslak’s public key. And then the shim will accept liveslak’s signed Grub and kernel… and liveslak will boot just fine.

Managing Machine Owner Keys (MOK)

My repository contains two relevant packages: mokutil and sbsigntools. The mokutil package contains the ‘mokutil’ program which allows you to query the SecureBoot NVRAM and to manipulate MOK certificates (the latter only if Secure Boot is enabled). The sbsigntools package contains ‘sbsign’ which is used by liveslak to cryptographically sign the Grub and kernel images.
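To give you an idea of what such a signing step looks like, here is a minimal sketch using ‘sbsign’ and ‘sbverify’ from that sbsigntools package; the key and certificate filenames are made up for the example and are not the actual liveslak signing keys:

# sbsign --key liveslak.key --cert liveslak.pem \
   --output grubx64.efi.signed grubx64.efi
# sbverify --cert liveslak.pem grubx64.efi.signed
Signature verification OK

The second command verifies the signature that the first one attached to the EFI binary.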

To check whether Secure Boot is enabled, run:

# mokutil --sb-state

If this returns “SecureBoot enabled” then you are probably running a commercial Linux distro :-). In this case, you can generate a MOK enrollment request to the shim before rebooting:

# wget https://download.liveslak.org/secureboot/liveslak.der
# mokutil --import liveslak.der

You will be prompted for an optional password. If you enter one, this password then needs to be entered again after reboot to complete the MOK enrollment.
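Before you reboot, you can verify that the enrollment request has actually been registered; ‘mokutil --list-new’ shows the pending request(s), and ‘mokutil --revoke-import’ would withdraw them again:

# mokutil --list-new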

If Secure Boot is not enabled, don’t worry: you can still enroll this ‘liveslak.der’. Just continue reading.

Adding liveslak.der certificate to MOK database

The remainder of the article describes in detail how to add the liveslak public key (an SSL certificate) to the MOK database if you were unable to use the mokutil program as described above.

Note that adding the certificate is a one-time action. Once the MOK database contains the liveslak certificate, future liveslak ISOs will boot on your SecureBoot-enabled computer out of the box.

Enable Secure Boot in your UEFI setup (press F1 or whatever key your computer needs to enter setup, and find the proper menu to enable Secure Boot). Then reboot your computer with a liveslak (1.5.0 or later release) medium inserted.
UEFI will look for the first-stage bootloader on the liveslak medium in the ESP (EFI System Partition, a FAT partition with partition type 0xEF00). According to the UEFI spec, it wants to load the file ‘/EFI/BOOT/bootx64.efi’. The Fedora ‘shim’ binary which is now contained in the liveslak ISO has been renamed to that exact filename. Once the shim loads, it will try to load our Grub and discover that its SSL signature is invalid/unknown. Shim will then ask you whether it should load the MokManager instead (the ‘mmx64.efi’ file in the same directory), which is signed with the distro certificate embedded in the shim.
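To give you an idea of what is on that ESP (the device name is just an example, and the directory may contain more files than shown here):

# mount -o ro /dev/sdX1 /mnt
# ls /mnt/EFI/BOOT/
bootx64.efi  grubx64.efi  liveslak.der  mmx64.efi  ...
# umount /mnt

Here ‘bootx64.efi’ is the renamed Fedora shim, ‘grubx64.efi’ is liveslak’s own signed Grub, ‘mmx64.efi’ is the MokManager and ‘liveslak.der’ is the certificate you will be enrolling below.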

If you press ‘any key’ within 10 seconds, the MokManager will load and you will have your opportunity to enroll the ‘liveslak.der’ certificate. If you wait instead, the computer reboots, and if you had already made an enrollment request as described in the previous paragraph, that request will be erased and you’ll have to try again.
If MokManager loads and you created an enrollment request before the reboot, it will show a menu with an “Enroll MOK” entry. Select “Enroll MOK” to complete that enrollment request.


If you entered a password during the “mokutil --import” command, you will have to enter that same password next.

 

If you did not create an enrollment request prior to reboot, then this particular menu item will not be present and you should use the “Enroll key from disk” menu instead.
A simple and spartan file browser will be your guide to the ‘/EFI/BOOT/’ directory on the liveslak medium; select the ‘liveslak.der’ file and confirm.
After enrollment of the ‘liveslak.der’ certificate, you must reboot.

After the reboot, your Slackware Live Edition should boot as usual. If you watch closely you will see a brief flash right before Grub loads – that is the shim loading.

Now, you can check that Secure Boot is actually enabled:

root@darkstar:~# dmesg |grep -i secure
[ 0.016864] Secure boot enabled

Cool!
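You can also verify that the liveslak certificate has actually been added to the MOK database; the following command dumps all enrolled MOK certificates and the liveslak one should be among them:

root@darkstar:~# mokutil --list-enrolled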

Download Slackware Live Edition

You can find a set of new ISOs based on liveslak on download.liveslak.org/latest/; the 64-bit versions support Secure Boot.
I also tried generating 32-bit ISOs, but they refuse to load the init script. The kernel loads and unpacks the initrd image, but then the boot process stalls right before control is handed over to the initrd. I have no idea why. I tried generating ISOs with an older release of liveslak (1.4.0) that had proven to produce working 32-bit ISOs, but no luck. Perhaps something in the new 5.15 kernel which liveslak fails to add to the ISO? Suggestions are welcome.

It will probably work best if you use the ‘iso2usb.sh‘ script provided with liveslak to transfer the ISO content to a USB stick; this gives you a persistent environment with optional data encryption, and apparently it will boot on a lot more computers.
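For reference, the basic invocation looks like this (the ISO name and target device are examples; the target device will be overwritten, so double-check it, and see the liveslak documentation for the options that enable persistence and encryption):

# ./iso2usb.sh -i slackware64-live-current.iso -o /dev/sdX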

Get liveslak sources

The liveslak project is hosted in git. Its browsable cgit interface is here: https://git.liveslak.org/liveslak/

A set of the liveslak scripts can also be downloaded from http://www.slackware.com/~alien/liveslak/ or https://slackware.nl/people/alien/liveslak/

The liveslak public key (SSL certificate in DER encoding) that you need to enroll can be downloaded from https://download.liveslak.org/secureboot/liveslak.der

Let me know your experiences!
Cheers, Eric

Using Let’s Encrypt to Secure your Slackware webserver with HTTPS

In the ‘good old days‘ when everyone was a hippy and everyone trusted the other person to do the right thing, encryption was not on the table. We used telnet to log in to remote servers, we transferred files from and to FTP servers in the clear, we surfed the nascent WWW using http:// links; there were no pay-walls; and user credentials, well, who’d ever heard of those, right.
Now we live in a time where every government spies on you, fake news is the new news, presidents lead their country as if it were a mobster organisation and you’ll go to jail – or worse – if your opinion does not agree with the ruling class or the verbal minority.
So naturally everybody wants – no, needs – to encrypt their communication on the public Internet nowadays.

Lucky for us, Linux is a good platform for the security minded person. All the tools you can wish for are available, for free, with ample documentation and support on how to use them. SSH secure logins, PGP encrypted emails, SSL-encrypted instant messaging, TOR clients for the darkweb, HTTPS connections to remote servers, nothing new. Bob’s your uncle. If you are a consumer.

It’s just that until not too long ago, if you wanted to provide content on a web server and wanted to make your users’ communications secure with HTTPS, you’d have to pay a lot of money for an SSL certificate that would be accepted by all browsers. Companies like VeriSign, DigiCert, Comodo, Symantec and GeoTrust are Certificate Authorities whose root certificates ended up in all certificate bundles of operating systems, browsers and other tools, but these big boys want you to pay them a lot of money for their services.
You can of course use free tools (openssl) to generate SSL certificates yourself, but these self-signed certificates are difficult for your users to understand and accept if they are primarily non-technical (“hello support line, my browser tells me that my connection is insecure and your certificate is not trusted“).

SSL certificates for the masses

I have long been a supporter of CACert, an organization whose goal is to democratize the use of SSL certificates. Similar to the PGP web of trust, the CACert organization has created a group of ‘assurers‘ – these are the people who can create free SSL certificates. These ‘assurers’ are trusted because their identities have been verified face-to-face, by showing passports and faces. Getting assurer status means that your credentials need to be signed by people who agree that you are who you say you are. CACert organizes regular events where you can connect with assurers, and/or become one yourself.
Unfortunately, this grass-roots approach is something the big players (think Google, Mozilla) can not accept, since they do not have control over who becomes an assurer and who is able to issue certificates. Their browsers are therefore still not accepting the CACert root certificate. This is why my web site still needs to display a link to “fix the certificate warning“.
This is not manageable in the long term, even though I still hope the CACert root certificate will ultimately end up being trusted by all browsers.

So I looked at Let’s Encrypt again.
Let’s Encrypt is an organization which was founded in 2016 by a group of institutions (the Electronic Frontier Foundation, the Mozilla Foundation, the University of Michigan, Akamai Technologies and Cisco Systems) who wanted to promote the use of encrypted web traffic by allowing everyone to create the required SSL certificates in an automated way, for free. These institutions have worked with web-browser vendors to get them to accept and trust the Let’s Encrypt root certificates. And that was successful.
The result is that nowadays, Let’s Encrypt acts as a free, automated, and open Certificate Authority. You can download and use one of many client programs that are able to create and renew the necessary SSL certificates for your web servers. And all modern browsers accept and trust these certificates.

Let’s Encrypt SSL certificates expire 90 days (roughly three months) after creation, which makes it mandatory to have some mechanism that regularly checks for expiration on your server and renews the certificate in time.

I will dedicate the rest of this article to explaining how you can use ‘dehydrated‘, a free 3rd-party Let’s Encrypt client which is fully compatible with the official ‘Certbot’ client of Let’s Encrypt.
Why a 3rd-party tool and not the official client? Well, dehydrated is a simple Bash shell script, easy to read and yet fully functional. On the other hand, have a look at the list of dependencies you’ll have to install before you can use Certbot on Slackware: that’s 17 other packages! The choice was easily made, and dehydrated is actively developed and supported.

I will show you how to download, install and configure dehydrated, how to configure your Apache web server to use a Let’s Encrypt certificate, and how to automate the renewal of your certificates. After reading the below instructions, you should be able to let people connect to your web-server using HTTPS.


Configure dehydrated

Dehydrated has been part of Slackware since the 15.0 release. It ships with a default configuration, a man page and documentation.

The installed package also creates a cron job “/etc/cron.d/dehydrated” which makes dehydrated run once a day, at midnight. I want that file to have some comments about what it does, and I do not want it to run at midnight, so I overwrite it with a line that makes it run once a week, on Monday at 21:00, instead. It also logs its activity to a logfile, “/var/log/dehydrated” in the example below:

cat <<EOT > /etc/cron.d/dehydrated
# Check for renewal of Let's Encrypt certificates once per week on Monday:
0 21 * * Mon /usr/bin/dehydrated -c >> /var/log/dehydrated 2>&1
EOT

Dehydrated uses a directory structure below “/etc/dehydrated/”.
The main configuration file you’ll find there is called “config”.
The file “domains.txt” contains the host- and domain names you want to manage SSL certificates for.
The directory “accounts” will contain your Let’s Encrypt user account and private key, once you’ve registered with them.
And a new directory “certs” will be created to store the SSL certificates you are going to create and maintain.

How to deal with these files is going to be addressed in the next paragraphs.

The dehydrated configuration files

config

The main configuration file “/etc/dehydrated/config” is well-commented, so I just show the lines that I used:

DEHYDRATED_USER=alien
DEHYDRATED_GROUP=wheel
CA="https://acme-staging-v02.api.letsencrypt.org/directory"
#CA="https://acme-v02.api.letsencrypt.org/directory"
CHALLENGETYPE="http-01"
WELLKNOWN="/usr/local/dehydrated"
PRIVATE_KEY_RENEW="no"
CONTACT_EMAIL=eric.hameleers@gmail.com
LOCKFILE="${BASEDIR}/var/lock"
HOOK=/etc/dehydrated/hook.sh

Let’s go through these parameters:

  • We are starting the ‘dehydrated’ script as root, via a cron job or at the command line. The values of DEHYDRATED_USER and DEHYDRATED_GROUP are the user and group the script will switch to at startup. All activities will be done as user ‘alien’ and group ‘wheel’ and not as the user ‘root’. This is a safety measure.
  • CA: this contains the Let’s Encrypt URL for dehydrated to connect to. You’ll notice that I actually list two values for “CA” but one is commented out. The idea is that you use the ‘staging’ URL for all your tests and trials, and once you are satisfied with your setup, you switch to the URL for production usage.
    Also note that Let’s Encrypt expects clients to use the ACMEv2 protocol. The older ACMEv1 protocol will still work, but you can not register a new account using the old protocol; its only use nowadays is to assist in migrating old setups to ACMEv2. The “CA” URL contains the protocol version number (‘v02’).
  • CHALLENGETYPE: we will be using the HTTP challenge type because that is the easiest to configure. Alternatively, if you manage your own DNS domain, you could let dehydrated update your DNS zone table to provide the challenge that Let’s Encrypt demands.
    What is this challenge? Let’s Encrypt’s ACME-protocol wants to verify that you are in control of your domain and/or hostname. It will try to access a verification file via a HTTP request to your webserver.
  • WELLKNOWN: this defines the local directory where dehydrated creates the ‘challenge-tokens’ which are then served by your webserver. The Let’s Encrypt ‘ACME server’ will connect to your server as part of the ‘http-01’ challenge and expects to find a specific file there with specific content (created by dehydrated). In the case of a webserver running on our example domain “foo.net”, that URL would be http://foo.net/.well-known/acme-challenge/m4g1C-t0k3n . The dehydrated client must provide that “m4g1C-t0k3n” file, which it creates during a certificate creation or renewal. Below I will explain how to create this URL location “.well-known/acme-challenge” and make it readable for an external server like Let’s Encrypt.
    If your “domains.txt” file contains more than one hostname or domain, the ACME server will repeat this challenge for every one of them. Usually, multiple hostnames or (sub-)domains means that you have defined multiple VirtualHost in your Apache webserver configuration. For every VirtualHost you need to enable access to this ‘http-01’ challenge location (I will show you how, below).
    Note: The first connect from the ACME server will always be over HTTP on port 80, but if your site does a redirect to HTTPS, that will work.
  • PRIVATE_KEY_RENEW: whether you want the certificate’s private key to be renewed along with the certificate itself. I chose “no” but the default is “yes”.
  • CONTACT_EMAIL: the email address which will be associated with your Let’s Encrypt account. This is where warning emails will be sent if your certificate is about to expire but has not been renewed.
  • LOCKFILE: the path of the lock file which dehydrated creates during operation (its directory must be writable by our non-root user).
  • HOOK: the path to an optional script that will be invoked at various stages of dehydrated’s activities and which allows you to perform all kinds of related administrative tasks, such as restarting httpd after you have renewed its SSL certificate.
    NOTE: do not enable this “HOOK” line (i.e. keep a ‘#’ comment character in front of it) until you actually have created a working and executable shell script with that name! Otherwise you will get errors about the non-existing script.

domains.txt

The file “/etc/dehydrated/domains.txt” contains the list of host and domain names you want to associate with your SSL certificates. You need to realize that an SSL certificate contains the hostname(s) or the domain name(s) that it is going to be used for. That is why you will sometimes see a “hostname does not match server certificate” warning if you open a URL in your browser: it means that the remote server’s SSL certificate was originally meant to be used with a different hostname.

In our case, the “domains.txt” file contains just one hostname on a single line:

www.foo.net

… but that line can contain any number of space-separated hosts under the same domain. For instance the line could be “foo.net www.foo.net”, which would tell Let’s Encrypt that the certificate is going to be used on two separate web servers: one with hostname “foo.net” and the other with the hostname “www.foo.net“. Both names will be incorporated into the certificate.

Your “/etc/dehydrated/domains.txt” file can be used to manage the certificates of multiple domains, each domain on its own line (e.g. domain foo.org on one line, and domain foo.net on another line). Each line corresponds to a different SSL certificate – e.g. for different domains. Every line can contain multiple hosts in a single domain (for instance: foo.org www.foo.org ftp.foo.org).
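For example, a “domains.txt” that manages certificates for the two example domains mentioned above could look like this (one certificate per line):

foo.net www.foo.net
foo.org www.foo.org ftp.foo.org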

Directory configuration

Two directories are important for dehydrated, and we need to create and/or configure them properly.

/etc/dehydrated

First, the dehydrated configuration directory. We have configured dehydrated to run as user ‘alien’ instead of user ‘root’, so we need to ensure that the directory is writable by this user. Or better (since we installed dehydrated as a Slackware package and a package upgrade would undo an ownership change of /etc/dehydrated), let’s manually create the subdirectories “accounts”, “certs”, “chains” and “var” where our user actually needs to write, and make ‘alien’ the owner:

# mkdir -p /etc/dehydrated/accounts
# chown alien:wheel /etc/dehydrated/accounts
# mkdir -p /etc/dehydrated/certs
# chown alien:wheel /etc/dehydrated/certs
# mkdir -p /etc/dehydrated/chains
# chown alien:wheel /etc/dehydrated/chains
# mkdir -p /etc/dehydrated/var
# chown alien:wheel /etc/dehydrated/var

/usr/local/dehydrated

The directory “/usr/local/dehydrated” is the location where dehydrated will generate the Let’s Encrypt challenge files. These files provide the proof that we actually own the domain(s) we are requesting a certificate for.
So let’s create that directory and allow our non-root user to write there:

# mkdir -p /usr/local/dehydrated
# chown alien:wheel /usr/local/dehydrated

SUDO considerations

We configured the dehydrated script to drop its root privileges at startup and continue as user ‘alien’, group ‘wheel’. Because we also change the group, it is important that the sudo line for root in the file “/etc/sudoers” is changed from the default:

#root ALL=(ALL) ALL

to

root ALL=(ALL:ALL) ALL

Else you’ll get the error “Sorry, user root is not allowed to execute ‘/usr/bin/dehydrated -c’ as alien:wheel on localhost.“.

Apache configuration

I expect that you have already setup your Apache for un-encrypted connections and already have a web site. If you still need to figure out how to setup a web site using Apache, I suggest you look for a good tutorial before you proceed with my article, like https://docs.slackware.com/howtos:network_services:setup_apache .

Before we register an account with Let’s Encrypt and start generating certificates, let’s first update our existing Apache configuration so that it works with dehydrated. We need to make the ‘http-01’ challenge location (http://foo.net/.well-known/acme-challenge/) accessible to external web clients, else the certificate generation will fail.

Note that the above example mentions the “foo.net” hostname. If your “/etc/dehydrated/domains.txt” contains lines with multiple hosts under a domain, you’ll have to make the URL path component “/.well-known/acme-challenge” accessible through every domain host you configured in Apache. The complete certificate generation process will fail in case any of these challenge URLs cannot be validated.
To make life simpler if you run multiple web servers, we created “/usr/local/dehydrated/” to store the challenge file: a single file location. With the help of the Apache “Alias” directive we can use that single location in all our web servers.

Use this snippet of text in the <VirtualHost></VirtualHost> configuration block for every webserver host:

# We store the dehydrated info under /usr/local and use an Apache 'Alias'
# to be able to use it for multiple domains. You'd use this snippet:
Alias /.well-known/acme-challenge /usr/local/dehydrated
<Directory /usr/local/dehydrated>
    Options None
    AllowOverride None
    Require all granted
</Directory>

You can use “lynx” on the command-line to test whether a URL is valid:

$ lynx -dump http://www.foo.net/.well-known/acme-challenge/
Forbidden: You don't have permission to access /.well-known/acme-challenge/ on this server.

Despite that error, this message actually shows that the URL works (otherwise the return message would have been “Not Found: The requested URL /.well-known/acme-challenge was not found on this server.“).
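If you want a more conclusive test, you can temporarily drop a file into “/usr/local/dehydrated/” yourself and fetch it through the Alias (the filename is arbitrary):

# echo "challenge directory works" > /usr/local/dehydrated/test.txt
# lynx -dump http://www.foo.net/.well-known/acme-challenge/test.txt
challenge directory works
# rm /usr/local/dehydrated/test.txt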

This completes the required Let’s Encrypt modifications to your Apache web server configuration.
Next, and before we restart ‘httpd‘, our Apache server must be enabled to accept SSL connections. This is achieved by un-commenting the following line in “/etc/httpd/httpd.conf”:

# Secure (SSL/TLS) connections
Include /etc/httpd/extra/httpd-ssl.conf

You can now restart Apache httpd to activate our modifications (but always test the syntax of your configuration first):

# apachectl configtest
# /etc/rc.d/rc.httpd restart

To end the Apache configuration instructions, here are the bits that define the SSL parameters for your host. Note that you should not add them yet! You do not have an SSL certificate yet. Only after you have executed “dehydrated -c” and obtained the certificates can you add the following lines to every <VirtualHost></VirtualHost> block where you previously added the ‘Alias’ related stuff above:

SSLEngine on
SSLCertificateFile /etc/dehydrated/certs/foo.net/cert.pem
SSLCertificateKeyFile /etc/dehydrated/certs/foo.net/privkey.pem
SSLCertificateChainFile /etc/dehydrated/certs/foo.net/chain.pem
SSLCACertificatePath /etc/ssl/certs
SSLCACertificateFile /etc/ssl/certs/ca-certificates.crt

Note the hostname “foo.net” in these SSL lines above? This is an example of course, and you need to change it to your own hostname.
What you need to realize is that this name corresponds to the first name on the relevant line in your “/etc/dehydrated/domains.txt” file. Earlier in the article I used an example line for this “domains.txt” file which looks like this: “foo.net www.foo.net“. Even more hosts are possible; they should be space-separated. A single certificate will be generated which is valid for all of these hosts, and the directory where they are stored is “/etc/dehydrated/certs/” followed by “foo.net”, the name of the first entry on that line.
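For reference: after a successful “dehydrated -c” run (described below), that certificate directory will contain roughly the following files; dehydrated stores timestamped files and maintains symlinks with these fixed names pointing to the most recent versions:

# ls /etc/dehydrated/certs/foo.net/
cert.csr  cert.pem  chain.pem  fullchain.pem  privkey.pem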

Running dehydrated for the first time, using the Let’s Encrypt staging server:

With all the preliminaries taken care of, we can now proceed and run ‘dehydrated’ for the first time. Remember to make it connect to the Let’s Encrypt ‘staging’ server during all your tests, to prevent their production server from getting swamped with bogus test requests!

Examining the manual page (run “man dehydrated“) we find that we need the parameter ‘--cron’, or ‘-c’, to sign/renew non-existent/changed/expiring certificates:

# /usr/bin/dehydrated -c
# INFO: Using main config file /etc/dehydrated/config
# INFO: Running /usr/bin/dehydrated as alien/wheel
# INFO: Using main config file /etc/dehydrated/config

To use dehydrated with this certificate authority you have to agree to their terms of service which you can find here: https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf

To accept these terms of service run `/usr/bin/dehydrated --register --accept-terms`.

What did we learn here?
In order to use dehydrated, you’ll have to register first. Let’s create your account and generate your private key!

Do not forget to set the “CA” value in /etc/dehydrated/config to a URL supporting ACMEv2. If you use the old staging server URL you’ll see this error: “Account creation on ACMEv1 is disabled. Please upgrade your ACME client to a version that supports ACMEv2 / RFC 8555. See https://community.letsencrypt.org/t/end-of-life-plan-for-acmev1/88430 for details.”

With the proper CA value configured (you’ll have to do this both for the staging and for the production server URL), you’ll see this if you run “/usr/bin/dehydrated --register --accept-terms”:

# /usr/bin/dehydrated --register --accept-terms
# INFO: Using main config file /etc/dehydrated/config
# INFO: Running /usr/bin/dehydrated as alien/wheel
# INFO: Using main config file /etc/dehydrated/config
+ Generating account key...
+ Registering account key with ACME server...
+ Fetching account ID...
+ Done!

Generate a test certificate

We’re  ready to roll. As said before, it is proper etiquette to run all your tests against the Let’s Encrypt ‘staging’ server and use their production server only for the real certificates you’re going to deploy.
Let’s run the command which is also being used in our weekly cron job, “/usr/bin/dehydrated -c”:

# /usr/bin/dehydrated -c
# INFO: Using main config file /etc/dehydrated/config
# INFO: Running /usr/bin/dehydrated as alien/wheel
# INFO: Using main config file /etc/dehydrated/config
+ Creating chain cache directory /etc/dehydrated/chains
Processing www.foo.net
+ Creating new directory /etc/dehydrated/certs/www.foo.net ...
+ Signing domains...
+ Generating private key...
+ Generating signing request...
+ Requesting new certificate order from CA...
+ Received 1 authorizations URLs from the CA
+ Handling authorization for www.foo.net
+ Found valid authorization for www.foo.net
+ 0 pending challenge(s)
+ Requesting certificate...
+ Checking certificate...
+ Done!
+ Creating fullchain.pem...
+ Done!

This works! You can check your web site now if you did not forget to add the SSL lines to your VirtualHost block; your browser will complain that it is getting served an un-trusted SSL certificate issued by “Fake LE Intermediate X1“.

Generate a production certificate

First, change the “CA” variable in “/etc/dehydrated/config” to the production CA URL “https://acme-v02.api.letsencrypt.org/directory”.
Remove the fake certificates that were created in the previous testing step so that we can create real certificates next:

# rm -r /etc/dehydrated/certs/www.foo.net

Now that we’ve cleaned out the fake certificates, we’ll generate real ones:

# /usr/bin/dehydrated -c
# INFO: Using main config file /etc/dehydrated/config
# INFO: Running /usr/bin/dehydrated as alien/wheel
# INFO: Using main config file /etc/dehydrated/config
Processing www.foo.net
+ Creating new directory /etc/dehydrated/certs/www.foo.net ...
+ Signing domains...
+ Generating private key...
+ Generating signing request...
+ Requesting new certificate order from CA...
+ Received 1 authorizations URLs from the CA
+ Handling authorization for www.foo.net
+ 1 pending challenge(s)
+ Deploying challenge tokens...
+ Responding to challenge for www.foo.net authorization...
+ Challenge is valid!
+ Cleaning challenge tokens...
+ Requesting certificate...
+ Checking certificate...
+ Done!
+ Creating fullchain.pem...
+ Done!

If you reload the Apache server configuration (using the command “apachectl -k graceful”) you’ll now see that your SSL certificate has been signed by “Let’s Encrypt Authority X3” and it is trusted by your browser. We did it!
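You can also check the certificate issuer from the command line; the exact issuer name will change over time as Let’s Encrypt rotates its intermediate certificates, but it should mention Let’s Encrypt:

$ echo | openssl s_client -connect www.foo.net:443 -servername www.foo.net 2>/dev/null \
   | openssl x509 -noout -issuer -dates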

Automatically reloading Apache config after cert renewal

When your weekly cron job decides that it is time to renew your certificate, we want the dehydrated script (which runs as a non-root account) to reload the Apache configuration. And of course, only root is allowed to do so.

We’ll need a bit of sudo magic to make it possible for the non-root account to run the “apachectl” program. Instead of editing the main file “/etc/sudoers” with the command “visudo” we create a new file “httpd_reload” especially for this occasion, in sub-directory “/etc/sudoers.d/” as follows:

# cat <<EOT > /etc/sudoers.d/httpd_reload
alien ALL=NOPASSWD: /usr/sbin/apachectl -k graceful
EOT

This sudo configuration allows user ‘alien’ to run the exact command “sudo /usr/sbin/apachectl -k graceful” with root privileges.
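You can test that rule by running the exact command as the ‘alien’ user; it should gracefully reload Apache without asking for a password:

# su - alien -c "sudo /usr/sbin/apachectl -k graceful"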

Next, we need to instruct the dehydrated script to automatically run “sudo /usr/sbin/apachectl -k graceful” after it has renewed any of our certificates. That is where the “HOOK” parameter in “/etc/dehydrated/config” comes into play.

As the hook script, we are going to use dehydrated’s own sample “hook.sh” script that can be downloaded from https://raw.githubusercontent.com/lukas2511/dehydrated/master/docs/examples/hook.sh or (if you used the SlackBuilds.org script to create a package) use “/usr/doc/dehydrated-*/examples/hook.sh”.

# cp -i /usr/doc/dehydrated-*/examples/hook.sh /etc/dehydrated/
# chmod +x /etc/dehydrated/hook.sh

This shell script contains a number of functions; each is relevant and will be called at a certain stage of the certificate renewal process. The dehydrated script provides several environment variables to allow a high degree of customization, and all of that is properly documented in the sample script, but we do not need any of that. Just at the end of the “deploy_cert()” function we need to add a few lines:

deploy_cert() {
# ...
# After successfully renewing our Apache certs, the non-root user 'alien'
# uses 'sudo' to reload the Apache configuration:
sudo /usr/sbin/apachectl -k graceful
}

That’s all. Next time dehydrated renews a certificate, the hook script will be called and that will reload the Apache configuration at the appropriate moment, making the new certificate available to visitors of your web site.

Summarizing

I am glad you made it all the way down here! In my usual writing style, the article is quite verbose and gives all kinds of contextual information. Sometimes that makes it difficult for the “don’t bother me with knowledge, just show me the text I should copy/paste ” user but I do not care for that.

I do hope you found this article interesting and useful. If you spotted any falsehoods, let me know in the comments section below. If some part needs more clarification, just tell me.

Have fun with a secure web!

Eric


Adding CACert root certificates to your Slackware

Long before the “letsencrypt” initiative, we already had another free and open Certificate Authority, called CACert.org. CACert is community driven and uses ‘assurers’ who personally verify users’ identities, thereby building a “web of trust”. Unfortunately, the big players on the Internet (Google, Mozilla, Microsoft) have always refused to accept and incorporate the CACert root certificate into their browsers. Instead, after the community spent many years imploring these companies to add CACert as a trusted Certificate Authority, without any success, they spat in the face of the community and launched their own alternative for free SSL certificates: letsencrypt.

And therefore, even today, a site that uses a CACert-issued SSL certificate is flagged by browsers as untrustworthy. In my opinion, this refusal to accept a community-driven security initiative is nothing short of bullying.

My own server, bear.alienbase.nl, uses a CACert-issued certificate. Folks, it is secure to use https on it! Even when Chrome or Firefox says it is not. So, how to fix that bogus warning message?
For Firefox, Chrome and for the OS in general: import the CACert certificates as follows:

First add the CACert root and class3 certificates to your Linux system.
As the root user, download the two .crt files, copy them to /etc/ssl/certs and generate openssl hashes (I used backslashes to indicate that some lines wrap because the text would otherwise not be visible on this page):

# cd /tmp
# mkdir CACert
# cd CACert/
# wget -O cacert-root.crt http://www.cacert.org/certs/root.crt
# wget -O cacert-class3.crt http://www.cacert.org/certs/class3.crt
# cp -ia cacert-*.crt /etc/ssl/certs/
# cd /etc/ssl/certs/
# ln -s cacert-root.crt \
   `openssl x509 -noout -hash -in cacert-root.crt`.0
# ln -s cacert-class3.crt \
   `openssl x509 -noout -hash -in cacert-class3.crt`.0
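As a quick check that the hash links are in place, you can let openssl verify the class3 certificate against the newly added root; it should report ‘OK’:

# openssl verify -CApath /etc/ssl/certs cacert-class3.crt
cacert-class3.crt: OK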

To make your browsers support CACert they need to import the root and class3 certificates first. I will focus on Firefox and Chromium (the instructions will work for Pale Moon and Chrome as well).

First the Firefox or Pale Moon browser. Open the page http://www.cacert.org/index.php?id=3

  1. Click on the link for “Class 1 PKI Key” called “Root Certificate (DER Format)“. You will see the text “You have been asked to trust a new Certificate Authority (CA). Do you want to trust ‘CA Cert Signing Authority’ for the following purposes?“. At a minimum you must check the box to the left of “Trust this CA to identify web sites” before importing the certificate.
  2. Then do the same for the “Class 3 PKI Key” called “Intermediate Certificate (DER Format)” a bit lower on the page.

Next, Chrome/Chromium. To add the CACert root and class3 certificates to your Chromium browser, do the following as your regular user account (see also http://wiki.cacert.org/FAQ/BrowserClients#Linux):

$ cd /tmp/CACert/
$ certutil -d sql:$HOME/.pki/nssdb \
   -A -t TC -n "CAcert.org" -i cacert-root.crt
$ certutil -d sql:$HOME/.pki/nssdb \
   -A -t TC -n "CAcert.org Class 3" -i cacert-class3.crt
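To check that both certificates ended up in Chromium’s NSS database, list its contents; the two “CAcert.org” entries should show up together with their trust flags:

$ certutil -d sql:$HOME/.pki/nssdb -L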

And you’ll end up with a trusted site next time you visit my ‘bear’ server.
