Category Archives: Software

Chromium 90 packages – again 32bit related issues

There are new ‘chromium‘ and ‘chromium-widevine-plugin‘ packages in my repository, and chromium-ungoogled packages will follow soon.

Chromium was upgraded to 90.0.4430.72 but unfortunately the 32bit package which I built (twice) crashes immediately on startup. This happens both on Slackware 14.2 and on -current. The error message suggests something different from the glibc-2.3x related seccomp crashes of the previous Chromium 89.x releases.
Since I don’t have hardware running a 32bit Slackware OS and have only been able to test on QEMU virtual machines so far, I cannot confirm with 100% certainty whether the new Chromium will work on your 32bit Slackware OS, which is why I also kept the older 32bit chromium-89.0.4389.114 package in the repository.
Let me know about your experiences down here in the comments section! I am getting tired of begging Google developers not to break 32bit binaries every major release. I’d be grateful if people running 32bit Slackware who are affected by the 32bit Chromium crashes would chime in; otherwise I will eventually have to drop 32bit support.

There’s also an updated Widevine plugin package; unfortunately Google only released this newer version for 64bit systems. The new 64bit package has version “4.10.2209.1” whereas the 32bit package remains at version “4.10.1582.2”.
Note that this Widevine plugin is meant for chromium-ungoogled only. The ‘real’ Chromium does not need or use it, since Chromium downloads this CDM library automatically for you.

Have fun!
Eric

Chromium security updates (and fix for 32-bit crash)

I have updated the ‘chromium‘, ‘chromium-ungoogled‘ and ‘chromium-widevine-plugin‘ packages in my repository.

For Chromium (-ungoogled) these are security updates. The new 89.0.4389.90 release addresses several critical vulnerabilities (it is actually the third release in the 89 series in rapid succession to fix critical bugs) but in particular it plugs a zero-day exploit that exists in the wild: CVE-2021-21193. You are urged to update your installation of Chromium (-ungoogled) ASAP.

I made chromium-ungoogled also available for Slackware 14.2, I hope that makes some people happy.

Since I had to build packages anyway, I took the opportunity to apply a patch that fixes the crashes on 32-bit systems with glibc-2.33 installed (i.e. on Slackware-current).
In that same chromium-distro-packagers group that hosts the discussion about Google’s decision to cripple 3rd-party Chromium browsers, I had asked the Chromium team to address the crash Slackware users are experiencing. Google no longer offers 32bit binaries, which means that issues like these are not likely to be caught in their own tests, but luckily they are listening to the packagers who do build 32-bit binaries. The fix took a while to actually get implemented, but in the end it all worked out. I assume that the patch will end up in the Chromium source code after it passes the internal review process.

The Widevine plugin package for which I provided an update is meant for chromium-ungoogled only. The ‘real’ Chromium does not need or use it, since Chromium downloads this CDM library automatically for you. The change to the package is small: it adds a compatibility symlink. That is not needed for chromium-ungoogled itself, but I was alerted to the fact that Spotify specifically looks for ‘libwidevinecdm.so’ in the toplevel Chromium library directory. The update takes care of that.

Also, this was the last package which I compiled for Chromium that contains my Google API key as well as the OAuth client/secret credentials. I noticed that Chromium still works as before, even now that the 15 March deadline has passed, but future builds of my package will only contain my API key. That will leave Safe Browsing functional, but it removes Chrome Sync and other features. If you still want Chrome Sync to work with Chromium, take a look at “/etc/chromium/01-apikeys.conf” in my future packages and get inspired by its content.
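To give you an idea of what that file contains: Chromium picks up API credentials from environment variables at startup, so such a file just exports them. The sketch below uses placeholder values – you would have to obtain your own credentials from the Google API console, and the exact contents of the file in my package may differ:

```shell
# /etc/chromium/01-apikeys.conf - sourced by the Chromium startup wrapper.
# Replace the placeholder values with your own Google API credentials:
export GOOGLE_API_KEY="your_api_key_here"
export GOOGLE_DEFAULT_CLIENT_ID="your_client_id_here"
export GOOGLE_DEFAULT_CLIENT_SECRET="your_client_secret_here"
```

Removing the two OAuth lines is what disables Chrome Sync while keeping Safe Browsing (which only needs the API key) working.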

Have fun!
Eric

Sync and share your (Chromium and more) browser data among all your computers

I had promised to write a follow-up article about my journey to find a good alternative for the “Chrome Sync” feature that used to be available in my Chromium packages for Slackware. This is that article. It is fairly large and I hope it is written well enough that you can follow the explanations and examples and are able to implement this yourself. The article is divided into a couple of main sections:

  • Securely syncing your browser’s online passwords to (and retrieving them from) a central location under your control
  • Syncing your browser’s bookmarks to a central location under your control
  • Migrating your Chromium passwords into a central password database under your control
  • Migrating your Chromium bookmarks into a bookmark server under your control.

If you run into anything that is not clear, or does not work for you, please let me know in the comments section!

I’d like to remind you that after 15 March 2021 the Chrome Sync service provided by Google’s cloud infrastructure will only be accessible by Google’s own (partly closed-source) Chrome browser. All other Chromium-based software products are blocked from what Google calls “their private API services”.
All non-Google-controlled 3rd parties providing software binaries based on the Chromium source code are affected, not just my Slackware package. The Chromium browser packages that have been provided in most Linux distros for many years, will lose a prime selling point (being able to sync your browser history, bookmarks and online passwords to the cloud securely) while Google will still be able to follow your browsing activities and monetize that data.
For many people (including myself) using Chromium, the ability to sync your private browser data securely to Google’s cloud and thus get a unified browser experience on all platforms (Linux, Android, Windows, Mac operating systems) certainly outweighed the potential monetary gain for Google from our use of a product with its origins in Google – like Chromium. Compiling the sources on Slackware allowed me to deal with some of the privacy concerns. But now the balance has been tipped.

Google took away this Chrome Sync from us Linux Chromium users, and it’s important to come up with alternatives.
If you don’t use Chrome or Chromium, then this article is probably not relevant: I will assume you are using Mozilla’s Firefox, and that browser has its own open source cloud sync server. You can use Mozilla’s own cloud sync or else set up a Sync Server on a machine which is fully under your control. The same is not possible for Chromium browsers unfortunately, although Brave and Vivaldi – two not completely open source Chromium derivatives – have developed their own cloud sync server implementations. But (again unfortunately) those sync solutions are fully entangled with their own browser products.

I already wrote about “un-Googled Chromium“, the browser which is stripped of everything that would attempt to connect to a Google service. The browser that is provided by my “chromium-ungoogled” package is as good as the original “chromium” browser in terms of core functionality and perhaps it is even a tad faster than the original. I urge you to read that article before continuing below.
Because the remainder of this article will deal with: how do you sync, or save, your browser’s online passwords, your bookmarks and your history to a remote server? Using only open source tools? And in such a way that no one will be able to access these data unless you approve it?

Let’s start with a disappointment: there is currently no Open Source solution available that lets you sync your browser history to a cloud server and then access that history in other browsers on other computers. But, one of the products further below (xBrowserSync) does have this feature on its development roadmap – we will just have to be a wee bit patient.
With that out of the way, let’s focus on what is possible.

Syncing your browser’s online passwords

The internet is not a safe place. Sometimes, web sites are hacked and user credentials obtained. If you deal with web sites where you have to authenticate yourself to gain access to certain functionality, that server needs to store your credentials (hopefully encrypted) locally. If you want to prevent hackers from using your credentials to do you harm, it is highly recommended to always use strong passwords, and to use a different password for every web site that you need to login to.
Remembering all these strong passwords is an impossible task for the average human, which is why we have password managers. A password manager is an application that provides a means to store your credential data in an encrypted database which you unlock with just a single passphrase – and that leaves you with the more easily achievable task to remember one strong password to gain access to all the others. Password managers have become quite user-friendly and some of them integrate with your web-browser.

I present one of those password managers here: KeePassXC. I think it is the best solution to our problem. It is cross-platform (Linux, Windows, MacOS), uses strong encryption, is open source and integrates with Google and Mozilla browsers, including un-Googled Chromium.
KeePassXC is not a client-server based service though – it all happens on your local computer. In order to make your password database accessible on multiple computers, you will have to store it in such a way that it automatically gets uploaded to (and then kept in sync with) a networked location. If you want to be able to access your passwords while on the move (i.e. outside of your house) the most logical solution is to use a “cloud storage” provider who offers cross-platform sync clients – at a minimum we want Slackware Linux to be supported.

The steps to take:

  1. Ensure that you have a cloud storage client installed on your computers. I’ll give two examples: a client application for the free basic plan of the commercial Dropbox, or for a NextCloud server you operate yourself. There are other solutions of course, use whatever you like most.
    Key here is that you are going to create an encrypted password database somewhere inside the directory tree that you sync to this cloud storage provider.
  2. Download and install chromium-ungoogled. Start it and go through the initial configuration as explained in a previous article so that browser extensions can be installed properly and easily, and ensure that you install the KeePassXC extension from the Web Store. Don’t configure that extension yet.
  3. On your Slackware-current computer, download and install the keepassxc package. Like my chromium-ungoogled package, this is currently only available for Slackware-current, but in the future I intend to add Slackware 14.2 support for both chromium-ungoogled and the sync software.
  4. Start KeePassXC – it is a graphical program so you need to be logged in to a graphical session like Plasma5 or XFCE – and configure it to automatically launch at system startup i.e. when you login (in General > Basic Settings). Configure all other convenience settings to your liking, such as minimizing to system tray, automatically locking the database etc.
  5. Enable browser integration in KeePassXC (Tools > Settings > Browser integration > Enable). In the “General” tab, check the box to at least enable “Chromium” integration. Check any other browser that you would like to use with KeePassXC, but for the purposes of this article “Chromium” is the only mandatory choice.
  6. [Only if you use my chromium-ungoogled package instead of regular chromium]: Open the “Advanced” tab and check the box “Use a custom browser configuration location”. Select “Chromium” in the “Browser Type” dropdown. In the “Config Location” entry field, enter “~/.config/chromium-ungoogled/NativeMessagingHosts”. This will generate a file “org.keepassxc.keepassxc_browser.json” in that location, specifying the integration parameters between the KeePassXC program and the browser extension.
    You may have to fiddle a bit with that last step, to get KeePassXC to actually create the JSON file. It may be that you have to startup your chromium-ungoogled first and then enable browser integration in KeePassXC.
  7. In KeePassXC you now create a new database (or point it to a database you may already have) in your cloud-sync directory. That database will then sync remotely and will be accessible by KeePassXC instances running on other computers that have the same cloud sync client installed.
    Word of advice: If you do not have a password database already and are starting from scratch, it will be easier to import your old browser’s passwords right now instead of having to merge them later. Look at the end of this article for the section “Importing your passwords from Chromium/Chrome into KeePassXC” which will tell you exactly how to import your old passwords before you continue to the next step!
  8. Now that we have successfully connected KeePassXC to its browser extension, your browser will ask whether it should store any new credentials you enter on web sites into the encrypted KeePassXC database. The KeePassXC extension will automatically add icons to credential entry fields on web pages that allow you to auto-fill these fields if the web site is already known in your database. And if you don’t want to invent strong passwords yourself, password fields can be enhanced with an icon that gives access to the KeePassXC strong random password generator.
  9. Goal achieved. You can now share and use your online passwords on multiple computers. Note that this functionality is not limited to your (un-Googled) Chromium but also extends to the Firefox browsers that you use.
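If you are unsure whether the browser integration of step 6 succeeded, a quick check of the generated manifest can help. This is a sketch: the path assumes my chromium-ungoogled package’s default configuration location, and python3 is only used here as a convenient JSON validator:

```shell
# Check for the KeePassXC native-messaging manifest and validate its JSON.
CFG="$HOME/.config/chromium-ungoogled/NativeMessagingHosts/org.keepassxc.keepassxc_browser.json"
if [ -f "$CFG" ] && python3 -m json.tool "$CFG" >/dev/null 2>&1; then
  MSG="browser integration file OK"
else
  MSG="browser integration file missing or invalid"
fi
echo "$MSG"
```

If the file is missing, start chromium-ungoogled first and then re-enable browser integration in KeePassXC, as mentioned in step 6.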

Note: for Android, if you installed un-Googled Chromium from F-Droid following these instructions, there are the compatible KeePass2Android and KeePassDX apps – both recommended by the KeePassXC developers – so you can access all your passwords even though there is no integration with the browser.

Syncing your browser’s bookmarks

For this part, I had already done some preparations. I shared those in a previous post: how to build and configure a MongoDB database server. The best open source bookmark sync server I could find is xBrowserSync, and it uses MongoDB as its database backend.
XBrowserSync is a service you can operate yourself by installing it on a machine you control, but you can also use any of the free online xBrowserSync servers that are operated by volunteers on the internet. Registration to a server is not required, access is anonymous and only you know the sync ID and password which are used to store your data.

I’ll first instruct you how to get the browser-side working with the developers’ freely accessible servers and then I’ll show you how to setup your own sync server.

Setting up your browser

It starts with installing the xBrowserSync browser extension into your (un-Googled) Chromium from the Web Store. Important: if you want to use xBrowserSync, you need to disable other synchronization mechanisms like Chrome Sync first to prevent conflicts. Opening the settings for this extension takes you through a set of introduction messages and then lets you pick a sync server to connect to – the default is the developers’ own server. You then specify a password which will be used to encrypt your data before it gets uploaded. The server will assign you a “Sync ID“, a seemingly random string of 32 hex characters. Write down that Sync ID! It is the combination of the Sync ID and your password that will grant access to your bookmarks if you install the xBrowserSync extension in another browser.
Sync will start immediately.
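The important property is that encryption happens locally: only ciphertext ever reaches the sync server, and without your password that data is useless. This is not xBrowserSync’s exact scheme, but the principle can be illustrated with a quick openssl round-trip:

```shell
# Encrypt a file locally with a password, then decrypt it again.
# Only the encrypted file would ever travel to the sync server.
# (Illustration only: xBrowserSync uses its own client-side
# encryption scheme, not this exact openssl invocation.)
PLAIN=$(mktemp); ENC=$(mktemp); DEC=$(mktemp)
echo "my secret bookmarks" > "$PLAIN"
openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:mysyncpassword -in "$PLAIN" -out "$ENC"
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:mysyncpassword -in "$ENC" -out "$DEC"
cmp -s "$PLAIN" "$DEC" && RESULT="round-trip OK"
echo "$RESULT"
rm -f "$PLAIN" "$ENC" "$DEC"
```

Without the password, the encrypted file is just noise – which is exactly why you must not lose the password you gave to the xBrowserSync extension.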

Setting up your server

Now onto setting up an xBrowserSync server yourself. After all, what’s the fun in using someone else’s services, right? You could have stuck with Chrome otherwise.
To install an xBrowserSync service, you’ll require an online computer which ideally runs 24/7. You can use a desktop or a real server, but it needs to be a 64bit system. The reason? MongoDB, the database backend, only supports 64-bit computers; you won’t be able to compile it on 32-bit.
I run mine from a computer at home, but you can rent a server online too if you would like that better. Assuming here that the computer runs 64-bit Slackware 14.2 or higher, the requirements are:

  • Apache’s webserver “httpd” is installed and running (other web servers like nginx are also possible but outside the scope of this article). Apache httpd is part of a Slackware full installation. In this article we will assume that the URL where you can reach the webserver is https://darkstar.net/ .
  • You have configured your Apache httpd for secure connections using HTTPS and the SSL certificates come from Let’s Encrypt. If this is something you still need to do, read a previous article here on the blog, on how to set that up using the ‘dehydrated‘ script (which was added to Slackware-current since I wrote that post).
  • TCP ports 80 (http) and 443 (https) are open to the Internet (if you want to be able to access your server while on the move). For a server that is running in your own home, you’ll have to port-forward these ports 80 and 443 from your ISP’s modem/router to your own computer.
  • Node.js is installed. You can get a package from my repository.
  • MongoDB is installed and the database server is running. A package is available in my repository and configuration of MongoDB is covered in my previous article.

We start by creating a dedicated user account to run the xBrowserSync service on our computer:

# groupadd xsync
# useradd -g xsync -d /home/xsync -m -s /bin/false xsync

Then clone the xBrowserSync API source repository. My suggestion is to use /usr/local for that and give our new “xsync” user permission to maintain the xBrowserSync files. The ‘#’ denotes root’s shell prompt and ‘$’ is a user shell prompt:

# mkdir /usr/local/xbrowsersync
# chown xsync /usr/local/xbrowsersync
# su - xsync -s /bin/bash
$ cd /usr/local/xbrowsersync
$ git clone https://github.com/xbrowsersync/api.git

Still as user ‘xsync’, install and build the xBrowserSync API package:

$ cd api
$ npm install
$ npm audit fix
$ exit

As ‘root’, configure the log location we will use when the service starts:

# mkdir /var/log/xBrowserSync/
# chown xsync:xsync /var/log/xBrowserSync/

We have xBrowserSync installed, now we need to create the conditions for it to work. We start with configuring a MongoDB database for xBrowserSync. Below I will use example names for that database, the DB user account and its password. You should of course pick names of your own choosing, and substitute them wherever they appear in the commands below.
Database: xbrowsersyncdb
DB User: slackadmin
DB Pass: slackpass

Start a mongo shell – it does not matter under which user account you do this as long as you know the MongoDB admin account (mongoadmin in this example) and its password:

$ mongo -u mongoadmin -p
>

Run the following commands in the mongo shell:

use admin
db.createUser({ user: "slackadmin", pwd: "slackpass", roles: [{ role: "readWrite", db: "xbrowsersyncdb" }] })
use xbrowsersyncdb
db.newsynclogs.createIndex({ "expiresAt": 1 }, { expireAfterSeconds: 0 })
db.newsynclogs.createIndex({ "ipAddress": 1 })

With the MongoDB database ready for use, we can proceed with configuring xBrowserSync. Login again to your computer as user ‘xsync’ and make a copy of the default settings file so that we can insert our own custom settings:

$ cd /usr/local/xbrowsersync/api
$ cp config/settings.default.json config/settings.json

In that file “/usr/local/xbrowsersync/api/config/settings.json”, change the following values from their defaults (or at least ensure that the values below are present). The MongoDB connection parameters must match what you configured in the mongo shell earlier:

"status.message": "Welcome to Slackware xBrowserSync"
"db.name": "xbrowsersyncdb"
"db.username": "slackadmin"
"db.password": "slackpass"
"port": "8080"
"location": "your_two_character_country_id"
"log.file.path": "/var/log/xBrowserSync/api.log"
"server.relativePath": "/"
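Note that the flattened “dotted” keys above correspond to nested objects in the JSON file. Roughly – check config/settings.default.json for the authoritative structure – the result looks like this:

```json
{
  "status": {
    "message": "Welcome to Slackware xBrowserSync"
  },
  "db": {
    "name": "xbrowsersyncdb",
    "username": "slackadmin",
    "password": "slackpass"
  },
  "port": "8080",
  "location": "your_two_character_country_id",
  "log": {
    "file": {
      "path": "/var/log/xBrowserSync/api.log"
    }
  },
  "server": {
    "relativePath": "/"
  }
}
```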

Note that the nodejs server listens at port 8080 by default and I leave it so. But if you are already running some other program using that same port number, you can change the port to a different value, say “8888”, in the above file – look for the “port” setting.

To operate the new xBrowserSync server, we create a shell script we can start at boot. The commands below show the use of a “here-document” to write a file. If you are unsure how to deal with the commands below, check out what a “here-document” actually means.
Still as user ‘xsync’ you run:

$ mkdir ~/bin
$ cat <<EOT > ~/bin/xbs_start.sh
#!/bin/sh
cd /usr/local/xbrowsersync/api
node dist/api.js
EOT
$ chmod 755 ~/bin/xbs_start.sh

This is a simple script to start a node.js server. That server does not move itself to the background, but we need the command to ‘disappear’ into the background if we want our computer to complete its boot sequence. There is a nice way to deal with backgrounding a program that cannot run as a background service all by itself: we offer it a ‘virtual console’ and then move that virtual console to the background. A VNC server would be the graphical X analog of such a ‘virtual console’.
Slackware comes with two programs that offer what we need: screen and tmux. I am a screen fan so I’ll use that. As user ‘root’, add this to ‘/etc/rc.d/rc.local’:

# Start the xBrowserSync Server:
if [ -x /home/xsync/bin/xbs_start.sh ]; then
  echo "Starting xBrowserSync Server:  ~xsync/bin/xbs_start.sh"
  su - xsync -s /bin/bash -c "/usr/bin/screen -AL -mdS XSYNC bash -c '/home/xsync/bin/xbs_start.sh'"
fi

This ensures that your xBrowserSync service will start at boot, and the service will run in a detached ‘screen‘ session for user ‘xsync’. If you want to access the running program, you can attach to (and detach from) that running ‘screen’ session from your console or terminal prompt. As user ‘xsync’, simply run:

$ screen -x

… to connect to the virtual terminal, to look at the program output or perhaps even to terminate it with “Ctrl-c“. And if you have seen enough, you can easily detach from the ‘screen’ session again using the “Ctrl-d” key combination, leaving that session to continue running in the background undisturbed.
I will not force you to reboot your computer just to get the service up in the air, so this one time we are going to start it manually. Login as user ‘xsync’ and then run:

$ screen -AL -mdS XSYNC bash -c '/home/xsync/bin/xbs_start.sh'

You will not see a thing but xBrowserSync is now running on your computer.
All that’s left to do now is to allow connections to the xBrowserSync program by your Chromium or Firefox browser. The program listens at the loopback address (127.0.0.1 aka localhost) on TCP port 8080, so access is only possible from that same computer. That is a safety measure – no one can access your service from a remote computer by default.
Note that if you changed the default xBrowserSync listen port from 8080 to a different value in “settings.json” above, you need to use that same port number in the Apache configuration block below.

We will configure Apache httpd to act as a reverse proxy – this means that we will deploy the httpd server as the frontend for xBrowserSync. Browsers will never connect to xBrowserSync directly (it is not even accessible from other computers), but instead we let them connect to Apache httpd. The webserver will relay these browser connections to the sync server running on the same machine.
Apache’s httpd is much better at handling a multitude of connections than the nodejs server used for xBrowserSync and we can also let httpd handle the encrypted connections so that we don’t have to complicate the xBrowserSync setup unnecessarily.

In the context of this article, we will operate our xBrowserSync server at the URL https://darkstar.net/xbrowsersync/ – this will be the URL you enter in the browser extension’s configuration instead of the default service URL https://api.xbrowsersync.org/. The “/xbrowsersync/” part of that URL will be reflected in our Apache configuration.
There is no need to add anything to your VirtualHost configuration for the HTTP (80) web service, since we want to access the service exclusively over HTTPS (encrypted connections). This is the block of text you have to add to your VirtualHost configuration for the HTTPS (443) web service:

ProxyRequests Off
ProxyVia on
ProxyAddHeaders On
ProxyPreserveHost On

<Proxy *>
  Require all granted
</Proxy>

<Location "/xbrowsersync/">
  # Route incoming HTTP(s) traffic at /xbrowsersync/ to port 8080 of XBrowserSync
  ProxyPass        http://127.0.0.1:8080/
  ProxyPassReverse http://127.0.0.1:8080/
</Location>
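For the Location block above to work, Apache’s proxy modules must be loaded. On Slackware, check /etc/httpd/httpd.conf and make sure these LoadModule lines are uncommented (the exact module path may differ on your system):

```apache
LoadModule proxy_module lib64/httpd/modules/mod_proxy.so
LoadModule proxy_http_module lib64/httpd/modules/mod_proxy_http.so
```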

Check your Apache configuration before starting the server:

# apachectl configtest
# /etc/rc.d/rc.httpd restart

Check the server status at https://darkstar.net/xbrowsersync/info
A nice GUI view of your service (and also the URL that you need to enter in the browser add-on) is available at: https://darkstar.net/xbrowsersync/

Importing your passwords from Chromium/Chrome into KeePassXC

You don’t have to start from scratch with storing web site credentials if you were already saving them in Chromium – whether you were using Chrome Sync or not.
This is how you export all your stored credential data from your Chromium browser and import that into KeePassXC:

  1. In Chromium, open “chrome://settings/passwords” and next to “Saved Passwords” click the triple-dot menu and select “Export Passwords”.
    By default, a comma-separated values (CSV) file will be generated as “~/Documents/Chrome Passwords.csv”. That file contains all your passwords un-encrypted, so be sure to delete that file immediately after the next steps!
  2. If you also save Android app passwords into Google’s password vault and your Chromium browser was using Chrome Sync, you’ll have to clean the above CSV file from all lines that start with the string “,android:”. For instance using a command like this:

    $ sed -e "/^,android:/d" -i ~/Documents/Chrome\ Passwords.csv

  3. Open the KeePassXC program on your computer. We are going to import the CSV file containing the credentials we just exported from Chromium via the menu “Database > Import > CSV File”. The first thing to do is to pick your new database’s format and encryption parameters. The default values are perfectly OK. Then define the password that is going to protect your database, and the location to store that database.
    • If this is actually going to be the database you intend to use with your browser, then select a directory that is being synchronized with an external cloud storage location.
    • If you already have a password database you are using in your browser but you want to import your old Chromium passwords after the fact, then the location of this database does not matter since it is only a temporary one and can be deleted after merging its contents with the real password database. So in this case, anywhere inside your homedirectory is just fine.
  4. An import wizard will then guide you through the process of associating the CSV data to matching database fields.
    One picture says more than a thousand words here, but in short: you want to match the columns in the CSV file (name, username, password, url) to the corresponding fields in the KeePassXC database (Title, Username, Password, URL):
    The trick here is to check the box “First line has field names” and then select: “name” in the Title dropdown; “username” in the Username dropdown; “password” in the Password dropdown and “url” in the URL dropdown. The dropdown for Group will show “name” as well but you need to select “Not Present” there and also ensure that all the other dropdowns are showing “Not Present”.
  5. After importing has completed and you have validated that the CSV values (name, user, password and url) are indeed imported into the correct database fields, the final question: is this your passwords database to use with the browser plugin?
    • If yes, then you are done!
    • If no, then we need to merge the database we just created into your real passwords database.
      Open your real passwords database in KeePassXC and then select the menu “Database > Merge from Database…”. You get the chance to select your temporary database you just created from the imported CSV data. Enter that database’s password to unlock it and its data will immediately be merged. You can now safely delete the temporary database where you imported that CSV data.
  6. You must now delete the CSV file containing all your clear-text passwords. It is no longer needed.
  7. That’s all. Good luck with using your passwords across devices and browsers safely and under your exclusive control.
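A note on step 6: a plain ‘rm’ leaves the file contents recoverable on disk. The ‘shred’ utility from coreutils overwrites a file before removing it. A sketch, demonstrated on a scratch file – substitute the real path of your exported CSV, e.g. “~/Documents/Chrome Passwords.csv”:

```shell
# Create a scratch file standing in for the exported passwords CSV:
CSV=$(mktemp)
echo "name,url,username,password" > "$CSV"

# Overwrite the file contents before unlinking it:
shred -u "$CSV"

[ ! -e "$CSV" ] && echo "securely removed"
```

Keep in mind that on journaling or copy-on-write filesystems shred’s overwrite guarantees are weaker; still, it is better than a plain delete.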

Importing your browser bookmarks into XBrowserSync

Like with your passwords above, there’s also an easy way to import the bookmarks you were using in Chromium into xBrowserSync so that you can make these available in un-Googled Chromium. Remember that Chromium’s use of Chrome Sync conflicts with the xBrowserSync extension. You can of course first disable that Chrome Sync (and after 15 March 2021 you should do that anyway) and then enable xBrowserSync, but the procedure below will likely also work when you want to export bookmarks from another browser not supported by xBrowserSync.

  1. In Chromium, open the bookmark manager: “chrome://bookmarks/”
  2. Click on the triple-dot menu in the top-right and select “Export bookmarks”. Save the resulting HTML file somewhere, it is temporary and can be deleted after importing its contents into un-Googled Chromium.
  3. Open the resulting HTML file into an editor. The “Kate” editor in KDE Plasma can deal pretty well with the HTML and breaks it down for you into sections you can easily collapse, expand or remove. You need to remove the “Apps” bookmark section in the HTML which is not useful for the un-Googled version, and perhaps you want to remove some other stuff as well.
  4. In un-Googled Chromium, open “chrome://bookmarks/” and this time select “Import bookmarks” from the triple-dot menu in the top-right corner. Pick the HTML file you exported from Chromium and the imported content will show up in your Bookmarks Bar as a bookmark-menu called “Imported”.
  5. You’ll probably want to move stuff out of that “Imported” menu and into the locations where you previously had them in your old Chromium’s Bookmarks Bar. You can do that most easily from within the bookmark manager by right-clicking stuff, selecting “Cut” and then pasting them into another location on the Bookmarks Bar.
    One caveat: I found out empirically that the xBrowserSync extension does not like it when it is still syncing a change in your bookmarks, and then you make another change before it is finished. It will simply invalidate the sync and try to revert to the previously known bookmark list. The thing is, if you did a “Cut” in the meantime with the intention to “Paste” a set of bookmarks somewhere else, that set will be gone forever. Best is to keep an eye on the xBrowserSync extension’s icon next to the URL entry field and do not try to make changes while it is still in the process of syncing. If you keep this in mind, then the experience will be just great.
  6. All done! Enjoy the fact that the bookmarks you create or modify in one browser, will be immediately accessible in other connected browsers.

This concludes a lengthy article; I hope it contains information you can put to good use. In the days and weeks to come, we’ll see what happens with the original Chromium and whether un-Googled Chromium can take its place. Let me know your experiences.

Have fun! Eric

HOWTO: install MongoDB on Slackware

Today I am going to show you how to install MongoDB, create a database admin account and enforce basic security.

Why MongoDB when Slackware already has MariaDB? Well, the two are not comparable. MariaDB is a SQL database server, whereas MongoDB is a “NoSQL” database server, aka “Not only SQL“, and its queries – just like its object storage format – are in JSON. The two types of databases have entirely different usage targets.

MongoDB is a ‘general-purpose, document-based database server‘. It has use-cases where it is more powerful than the traditional row/column model of a relational database management system. NoSQL databases, in particular MongoDB, are preferred over RDBMS in Cloud services, Big Data environments and for high-volume web based data processing services. These are typically environments where flexibility is required to handle big amounts of unstructured data and constantly varying schemas. A distributed cluster of MongoDB servers excels at “map-reduce“, the framework invented by Google for condensing large volumes of data into useful aggregated results – the very paradigm that catapulted Google Search into the number one position of search engines shortly after the turn of the millennium.

Again, why then MongoDB? Who cares?
The above preamble is no sales pitch, rather it is meant to give you some background first. This article is actually meant to bridge a previous and a future article here on the blog. In a previous article I wrote about un-googling your browser experience and promised that I would share with you a solution that allows you to sync your browser’s online passwords and bookmarks (and hopefully soon also your browsing history) to an online server that is fully under your control.
In a future article I will document how to set up that sync service, but to that end I need a working MongoDB server first. Creating a ‘mongodb’ package that I was satisfied with, and that is also usable on Slackware 14.2, proved a bit more time-consuming than I expected, but here it is and here we go.

Caveat: since MongoDB 3.4, the developers dropped support for 32-bit platforms. Current version is 4.4.4 and therefore the packages for MongoDB that I provide are for 64-bit Slackware 14.2 and -current only.

  1. Download and install a ‘mongodb’ package for your Slackware: http://www.slackware.com/~alien/slackbuilds/mongodb
    The installation of that package will create a “mongo” user and a “mongo” group on your computer. It will also install an RC script “/etc/rc.d/rc.mongodb” and add a couple of lines to your “/etc/rc.d/rc.local” so that the MongoDB server will be started automatically every time your computer boots, as long as the “/etc/rc.d/rc.mongodb” script is executable (which it is by default). Also installed is a default configuration file “/etc/mongodb.conf” which is used by the RC script:

    processManagement:
      fork: true
      pidFilePath: "/var/run/mongodb/mongod.pid"
    net:
      bindIp: localhost
      port: 27017
    storage:
      dbPath: /var/lib/mongodb
      journal:
        enabled: true
    systemLog:
      destination: file
      path: "/var/log/mongodb/mongod.log"
      logAppend: true
    cloud:
      monitoring:
        free:
          state: off
    security:
      authorization: disabled
    

    This configures MongoDB to be accessible only on the “127.0.0.1” loopback interface, to listen on the MongoDB default TCP port “27017”, and to store its databases in “/var/lib/mongodb/”. This MongoDB configuration for Slackware comes out of the box without any form of access control or authentication. We will fix that below.

  2. Increase the maximum number of open files (1024 by default) by un-commenting the following line in “/etc/login.defs”:
    ULIMIT 2097152
    If you omit this, every connection you make to the server will warn you about it.
  3. Then start the MongoDB server, the first time manually, but next time you boot the computer this will be done automatically. As root, run (‘#’ is the root prompt of course… don’t try to type it):
    # /etc/rc.d/rc.mongodb start
  4. After we have validated that the server is running using the “/etc/rc.d/rc.mongodb status” command, we are now going to enable access control and authentication. The documentation for that, as well as more recommendations on how to secure your database server, is available at: https://docs.mongodb.com/manual/administration/security-checklist/ . Based on that documentation we take the following steps to enable access control and enforce authentication:
    1. Add an administrative account to MongoDB. In this example I will use the name “slackadmin” with a password “slackpass” – change these to something you like better.
      Remember that the server runs without authentication or access control out of the box, so connecting to it is quite easy. You can start the “mongo” client from any user account, for instance your own regular login account (‘$’ is the Bash shell’s user prompt of course… don’t try to type it). By default, the “mongo” client program will connect to a server on the “localhost” address at port 27017, and since the Slackware package uses these defaults, no commandline parameters are required:
      $ mongo
      connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
      Implicit session: session { "id" : UUID("12345678-1234-1234-1234-123456789012") }
      MongoDB server version: 4.4.4
      Welcome to the MongoDB shell.
      For interactive help, type "help".
      For more comprehensive documentation, see
      https://docs.mongodb.com/
      Questions? Try the MongoDB Developer Community Forums
      https://community.mongodb.com
      >

      At the mongo shell prompt, enter these lines to create the administrative account. Note that in MongoDB, the “admin” database is the default database in which user accounts are created and user access control to the other databases is defined:

      use admin
      db.createUser(
        {
          user: "slackadmin",
          pwd: "slackpass", // or use passwordPrompt(),
          roles: [ { role: "userAdminAnyDatabase", db: "admin" }, "readWriteAnyDatabase" ]
        }
      )
      quit()
      
    2. Now that we have added a user to the ‘admin’ database who has full access, we can stop the server and change its configuration to enforce authentication. As root:
      # /etc/rc.d/rc.mongodb stop

      … and then change “authorization: disabled” to “authorization: enabled” in the file “/etc/mongodb.conf”. After that change, start MongoDB again with the RC script. If you then run the “mongo” client just like that and try to execute a command that needs privileges, you’ll get an error:

      $ mongo
      > show roles
      uncaught exception: Error: command rolesInfo requires authentication :

      We will have to log in to the server now in order to do meaningful things. Since we still have only one user, we use that account. Note that you will be asked to enter the user’s password after pressing ENTER, and this time we get better feedback for our “show roles” command:

      $ mongo -u slackadmin -p
      > show roles
      {
      	"role" : "dbAdmin",
      	"db" : "test",
      	"isBuiltin" : true,
      	"roles" : [ ],
      	"inheritedRoles" : [ ]
      }
      {
      	"role" : "dbOwner",
      ... 

      In case you forgot to authenticate when starting the mongo client, i.e. you just executed “mongo“, don’t worry. You can also authenticate from within the mongo client shell:

      $ mongo
      > use admin
      > db.auth("slackadmin", passwordPrompt()) 

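For quick reference, the hardening steps above condense into this short root session (a sketch: ‘#’ is the root prompt; the sed one-liner is my shorthand, you can just as well edit “/etc/mongodb.conf” in a text editor):

```shell
# /etc/rc.d/rc.mongodb stop
# sed -i 's/authorization: disabled/authorization: enabled/' /etc/mongodb.conf
# /etc/rc.d/rc.mongodb start
# mongo -u slackadmin -p --authenticationDatabase admin
```

The explicit “--authenticationDatabase admin” parameter tells the mongo client where the user account lives; without it, the client assumes the account is defined in the database you connect to.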

If you are going to do some serious work with the database, I recommend creating a backup schedule. But the account “slackadmin” which we just created does not have the required privileges to read the ‘config’ database. For the account to be able to backup/restore the complete database, it needs different roles. There’s a ‘backup‘ and a ‘restore‘ role built into MongoDB already, and the developers recommend that you create a separate non-root database account which is only used for dump & restore.
For the sake of this article we will just grant the internal ‘root’ role to our account, i.e. full control over all databases. In the mongo client you need to run:

> use admin;
> db.grantRolesToUser( "slackadmin", [ "root" ]);

MongoDB’s database administration tools are not part of the MongoDB source distribution, you need to get these sources separately and compile them yourself, or download the official binaries. The relevant binaries will obviously be “mongodump” and “mongorestore“.
I have re-packaged these official binaries and added them to my repository. Note that these are meant for Slackware-current, but you can run the binaries on Slackware 14.2 too, provided that you first compile and install a “krb5” package using the SlackBuilds.org sources. This resolves a run-time library dependency; you don’t actually have to run krb5.
A command-line which creates a full dump of all your databases in the form of a single archive file called “mongodump.archive“:

# /usr/bin/mongodump --host=localhost --port=27017 --username=slackadmin --password=slackpass --authenticationDatabase=admin --archive > mongodump.archive

You can then restore that dump into a new instance of MongoDB:

# /usr/bin/mongorestore --host=localhost --port=27017 --username=slackadmin --password=slackpass --authenticationDatabase=admin --archive < mongodump.archive

The restore will not overwrite the records that already exist in your database, so if you want mongorestore to first drop the existing databases before the restore, add “--drop” as an additional parameter to the mongorestore command.
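To turn the dump command into an actual backup schedule, a small script in “/etc/cron.daily/” is enough – Slackware’s root crontab runs everything in that directory once a day. This is only a sketch: the backup directory, the 7-day retention and the re-use of the “slackadmin” credentials are my assumptions, adapt them to your own setup.

```shell
#!/bin/sh
# /etc/cron.daily/mongodb-backup (sketch): dump all databases into one
# date-stamped archive file, using the admin account from this article.
BACKUPDIR=/var/backup/mongodb
mkdir -p "$BACKUPDIR"
/usr/bin/mongodump --host=localhost --port=27017 \
    --username=slackadmin --password=slackpass \
    --authenticationDatabase=admin \
    --archive="$BACKUPDIR/mongodump-$(date +%Y%m%d).archive"
# Prune dumps older than 7 days so the directory does not grow forever:
find "$BACKUPDIR" -name 'mongodump-*.archive' -mtime +7 -delete
```

Make the script executable (chmod +x), and since the password is readable inside it, restrict its permissions to root only.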

What you do with the database is now up to you. In any case, next time, when I write the article about browser sync, I will assume that you have a running and pre-configured MongoDB database instance. Don’t let that limit you though: there are probably other good uses for this article, you just need to go find them.

Good luck! Eric

How to ‘un-google’ your Chromium browser experience

… Aka the future of Chromium based (embedded) browsers


On March 15th 2021, Google is going to block non-Google chromium-based browsers from accessing certain “private Google Chrome web services” by unilaterally revoking agreements made with 3rd parties in the past.
Meaning, every Chromium based product not officially distributed by Google will be limited to the use of only a few public Google Chrome web services.
The most important service that remains open is “safe browsing”. The safe browsing feature identifies unsafe websites across the Internet and notifies browser users about the potential harm such websites can cause.

The most prominent feature which will be blocked after March 15th is the “Chrome Sync”. This Chrome Sync capability in Chromium based browsers allows you to login to Google’s Sync cloud servers and save your passwords, browsing history and bookmarks/favorites to your personal encrypted cloud vault inside Google’s infrastructure.
Extremely convenient for people who access the Internet using multiple devices (like me: Chrome on a few Windows desktops, Chromium on several Slackware desktops and laptop and Chrome Mobile on my Android smartphone) and who want a unified user experience in Chrome/chromium across all these platforms.
In order to boost the development of Chromium-based (embedded) browser products, Google made deals with 3rd parties as far back as 2013 (from what I could find) and spiced the API keys of these 3rd parties with access to crucial Google Webservices providing features that would draw users to these products.
If you offer a product that calls upon Google’s Web Services there is a monetary cost involved once the number of your users’ connections exceeds the monthly upper limit for free usage. So on top of providing us access to these Google APIs (in the case of Open Source Distro Chromium packagers) the Chromium team also substantially increased the non-billed monthly API consumption by the users of our distros’ Chromium browsers. This helped to prevent us poor distro packagers from being billed for Cloud API usage in case our browser packages gained popularity.
And then, early 2021, some Google white-collar people decided they had enough of these freeloaders.

When Google dropped the bomb on us – on the distro packagers in particular – a fierce discussion started in two Google Groups (posts in one group are mostly duplicated into the other group): Chromium Packagers and Chromium Embedders. It’s like talking to corporate drones – every question we asked is replied to with the same bogus standard texts. Arrogance to the max!
Even more poignant is a parallel discussion in Chromium Embedders, where some large electronics manufacturers discovered that some of their commercial products are similarly affected. Consumer electronics products that ship with browser-based embedded applications, like Smart TVs, often use CEF (Chromium Embedded Framework) and Google will block access for CEF products to their “private” Chrome APIs just like it’s going to do with distro browsers – they are all based on the same Chromium source code and are all non-Google products.

If you wonder what happened to the Google motto “Don’t be Evil” – in 2018 that phrase was removed from the employee Code of Conduct. And indeed, looking at the discussions in aforementioned topics the top brass feels completely ‘senang‘ (Indonesian for ‘at ease’) throwing us distro packagers under the bus while at the same time chastising us because apparently we do not adhere to their Code of Conduct.

Enough of all the bullshit – let’s look into the future. What can we do as Linux users, and what will I do as a distro packager.

Let me be clear: I do not want to take choices away from you. You can keep using Chromium, you can switch to Chrome, you can investigate whether Vivaldi or Brave (two chromium-based browsers with their own Google-free implementation of cloud sync) are better options for you.
I will however have to deal with the fact that I can no longer build a Chromium package that offers a synchronization of your private browser data out of the box. So what I will discuss in the remainder of this article are possibilities.

Chromium packages for Slackware are here to stay

… but I will remove my personal Google ID and corresponding secret from my chromium package. They will have been invalidated anyway on March 15 and are therefore useless. What I will leave in, is my “Slackware Chromium API Key” which keeps the “safe browsing” functionality alive if you use my browser.

I want to state here that from now on, I also explicitly forbid others / distros to re-use and re-package my binaries in order to make them part of their own Linux Distribution: thinking of Slacko Puppy, Porteus, Slint and others. If needed I will use “cease & desist” messages if people refuse to comply. I am not going to pay Google for the use of my binaries in distros that I do not control. The use of my API key is automatic if you run my Chromium binaries, and it involves a monthly cost if Google’s Cloud APIs get called too much. I already had to negotiate several times with the Chromium people to avoid getting billed when their policies changed. So get your own API key and compile your own version of the browser please.
In case you did not realize it: you can request your own API key/ID/secret! You’ll get capped access to Google API services, good for a single person but still without access to Cloud Sync. If you introduce yourself to the Chromium team as a distro packager, they may help you with increasing your browser’s un-billed API usage.

There’s a public discussion in the Google Group threads that I referred to above, about your personal use of the official Google API keys. This could offer a way out of the blockade and would allow you to keep using Chrome Sync in a Chromium browser even after the distro packagers’ API keys have been invalidated. These official Chrome API key/ID/secret strings have been present as clear-text in the public Chromium source code for a long time already!
While I am not going to advocate that you should do this, it is up to you (the individual end user of a Chromium-based browser) to find those strings online and apply them to your browser’s startup environment.

Let me explain a bit. When I compile Chromium, my personal API key and Google client-ID are embedded in the resulting browser binary, and that’s why everything works so nicely out of the box. In future I will not be embedding my client-ID anymore, but my API key for the browser will remain. That is how Safe Browsing will still work (it is associated with the API key) but Chrome Sync will stop working (because that is associated with the client-ID).
The good news is that Chromium browsers will check the environment when they start up, and look for specific variables that contain a custom API key and client-ID. My chromium package is built in such a way that it is easy to add such customization, by creating a “.conf” file in directory “/etc/chromium/”.
In the Slackware package for Chromium, you will find an example of how to apply such an APIkey/ID/secret combo. Just look at the file “/etc/chromium/01-apikeys.conf.sample”. If you remove the “.sample” suffix this file will then define three environment variables on startup of Chromium that tell the browser to use a specific service configuration.
And you can also copy the Google Chrome key/id/secret into that file and then it’s as if you are using a Chrome browser when talking to Google’s cloud services.
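As a sketch, such a “/etc/chromium/01-apikeys.conf” file contains three export lines like the ones below – the variable names are the ones Chromium evaluates at startup, but the values shown here are placeholders which you need to replace with a real key/ID/secret combo:

```shell
# /etc/chromium/01-apikeys.conf - sourced by the chromium startup script.
# Replace the placeholder values with your own (or another valid) combo:
export GOOGLE_API_KEY="your-api-key"
export GOOGLE_DEFAULT_CLIENT_ID="your-client-id"
export GOOGLE_DEFAULT_CLIENT_SECRET="your-client-secret"
```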

An ‘un-googled’ browser experience

The above API blocking scenario is a “win/lose” scenario as far as I am concerned. For Google it is a “win”: they still get to collect the data related to your online activities which they can monetize. And you “lose” because in return Google won’t allow you to use their cloud sync service any longer. That is not acceptable, and it led me to do a bit of research into the possibilities of turning this fiasco into a “win” for the user.
Turns out that there is actually an existing online project: “ungoogled-chromium – a lightweight approach to removing Google web service dependency“.
In a nutshell: the “un-googled chromium” project offers a set of patches that can be applied to the Chromium source code. These patches remove any occurrence of Google Web Service URLs from the source code, which means that the resulting browser binaries are incapable of sending your private data into Google datacenters. Additionally these patches bring privacy enhancements borrowed from other Chromium derivatives like the Inox patchset, Debian’s Chromium, the Iridium browser and Bromite.
Not just a “win” for the user but a “lose” for Google. They basically brought this down on themselves.

My conclusion was that a removal of Google associations from Chromium and at the same time improving its privacy controls is what I must be focusing on in future Chromium packages.

During my research I did look at existing alternative Chromium browser implementations. They all have their own merits I guess. I do not like to switch to Vivaldi since I think its development process is hazy, i.e. not public. Only its complete release tarballs are downloadable. Or Brave – its sources are not available at all and it tries to enforce a rewards system where you are encouraged to view ads – I mean, WTF? If I wanted to run a browser compiled by another party that tries to use me for their own gain, I could just stick with the official Chrome and be happy. But that is not my goal.

What I did instead was to enhance my chromium.SlackBuild script with a single environment variable “USE_UNGOOGLED” plus some shell scripting which is executed when that variable is set to ‘true’ (i.e. the value “1”). The result of running “USE_UNGOOGLED=1 ./chromium.SlackBuild” is a package that contains an “un-googled” Chromium browser that has no connection at all to Google services.
I make that package available separately at https://slackware.nl/people/alien/slackbuilds/chromium-ungoogled/

Be warned: using un-Googled Chromium needs some getting used to, but no worries: I will guide you through the initial hurdles in this article. Continue reading! And especially read the ungoogled-chromium FAQ.

The first time you start my chromium-ungoogled it will create a profile directory “~/.config/chromium-ungoogled” which means you can use regular Chromium and the un-googled chromium in parallel, they will not pollute or affect each other’s profiles.

You’ll notice as well that the default start page points to the Chrome Web Store but the link actually does not work. That’s unfortunate but I decided not to look into changing the default start page (for now). Patches welcome.

Which leads to the first question also answered in the above FAQ: how to install Chrome extensions if the Chrome Web Store is inaccessible?
The answer: download and install the chromium-web-store extension (Chrome extensions are packed in files with a .crx suffix), which afterwards allows direct installations from the Web Store. You have to install this one extension manually, but the instructions for these manual steps are clear. Any subsequent extensions are then a lot easier to install.

Another quirk you may have questions about is the fact that un-Googled Chromium seems to forget your website login credentials all the time. Actually this is done on purpose. FAQ #1 answers this: Look under chrome://settings/content/cookies and search for “Clear cookies and site data when you quit Chromium“. Disable this setting to prevent the above behavior.

Watching Netflix, Hulu or Disney+ content will not work out of the box; you’ll have to install the Widevine CDM library yourself. If you have been a Slackware user for a while, you may recall that I used to provide chromium-widevine-plugin packages, until the capability to download that plugin all by itself was added to the Chromium source code by Google. Well… un-Googled Chromium removed that capability again, but I have updated my package repository with a version of the widevine-plugin that works with the un-Googled browser.

Safe browsing is not available in un-Googled Chromium, since that too is a service provided by Google. Recommended alternatives are uBlock Origin or uMatrix.

Sync your browser data to an online service which is under your own – not Google’s – control

Now that we said good-bye to Google Cloud Sync, can we still sync our passwords, bookmarks and browsing history to a remote server and access these data from multiple browsers? Yes we can!
Even better, we can sync that data to a place that is under our own control. Multiple computers using the same synchronized data will give you the same experience as your prior usage of Google Cloud Sync. This will then also not be limited to Chromium based browsers – Mozilla based browsers are able to access the same centrally stored data. Win!

The question is then: how to implement it? Is this something you can do without being an IT person or a Slackware Guru?
I will show you that the answer is “yes”, in a follow-up article dealing with keepassxc and xbrowsersync.

Have fun! Eric