My thoughts on Slackware, life and everything


Migrating from Twitter to Mastodon

I assume many of you are watching the Twitter soap opera, waiting for the moment that the social media platform burns down completely. Its new owner Elon Musk is abusing his newfound powers to vent his far-right extremist ideas, firing most of Twitter’s employees, shutting down the content moderation team, and so on.

You should definitely be looking for an alternative if you are interested in socially interacting with other people through online platforms. And especially if you represent a business or an organization and use Twitter as a communication medium, you must absolutely reconsider whether you are using the right platform.

A lot of people have been looking for alternatives to satisfy their Twitter habits. That process already started in February 2022 when Musk announced his intention to buy Twitter. But after he completed that deal and essentially took over daily operations, a massive exodus began. The platform absorbing most of these new users seems to be Mastodon. But Mastodon is a different platform from Twitter and a lot of new users struggle with its concepts. There is plenty of documentation, but not everyone reads documentation before jumping into the action.

I will share some pointers that may help you make the decision to move to Mastodon and get all set up there.

Choose a Mastodon server

Mastodon is a federated network, as opposed to Twitter which is a centralized network. To use Twitter, you log in to a single URL and have access to the tweets of every other user of the platform.
Federated, on the other hand, means that the Mastodon network is run by any number of independently and non-commercially hosted server instances, which are all interconnected and share updates in real-time. You log in to your account on a specific server and have access to the full federated network’s content from there.
Unlike Twitter, where you have a unique handle (mine would be @erichameleers), your identity on the Mastodon network also shows the server you are operating from. My identity is a combination of the nick ‘alien’ on the server ‘fosstodon.org’, which makes my Mastodon handle @alien@fosstodon.org.

If you have difficulties grasping the concept of a server or an instance… compare it to a local community center in your town. You go there to hang out with like-minded people, but the television screens on the walls will show you what’s going on in the outside world. You won’t just enter the first community center you encounter… you’d want to know a bit about the kind of people that frequent the place. But if you just want to mingle in a large crowd, I guess you would rather go to the football stadium.
I hope the comparison was not too cheesy, but I have been told that “you techies built this stuff but normal people do not know what words like server or instance mean at all”. Fair enough.

So, your first step will be to pick and choose the server where you are going to create an account. Server instances are sometimes topic-based (with a focus on, for instance, open source, food, journalism, or art) which will influence the nature of the posts you will see scrolling by in your Local and Federated timelines (see below).

There is a searchable list of available servers at https://instances.social/list which can help in your decision making. Do you want an account on a server with lots of user accounts already present, which usually means the admins know what they are doing and they have long-term commitment? Or do you want a smaller instance but with a focus on a specific topic so you’ll have a bigger chance to hang out with like-minded people?
There is also https://joinmastodon.org/servers, a page where you can set a number of conditions (location, language, topic, legal entity, speed of account enablement, community size) and which will then show a shortlist of Mastodon instances matching your preferences.
I chose fosstodon.org as my logon server because of its focus on people who like to work with Free and Open Source Software.

Configure your profile

Once you have selected a server, open its URL in a web browser, click the “Create Account” button and start building your user profile. Be sure to tell a bit about yourself.
Mastodon allows a form of identity verification, not unlike the idea behind Twitter’s “blue badge”, but done right and without cost! Your profile can list up to 4 personal websites, and if you own or control these, you can add a bit of text to such a web page which Mastodon will use to verify that it is actually your website. The verified link is then colored green on your profile page (check out how that looks on my profile page).
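For the curious: that “bit of text” Mastodon looks for is a link back to your Mastodon profile carrying a rel="me" attribute. A minimal sketch of adding it to a page you control (the file path and profile URL below are just examples, adapt them to your own site):

# Append a verification link to a web page you control (illustrative path):
cat >> /var/www/htdocs/index.html << 'EOF'
<a rel="me" href="https://fosstodon.org/@alien">My Mastodon profile</a>
EOF

Once Mastodon fetches the website listed in your profile and finds that link, the verification mark appears.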

 

Apps

Any Mastodon server is accessible with a web browser, and for many users that is sufficient. Don’t forget to enable the “advanced web interface” in your preferences!
On a smartphone however, the web user interface experience is not optimal. There are apps that you can use instead.
On Android, there’s the official Mastodon app, or else you could try Tusky. Both are free, but there are also paid apps available in the Play Store, like Fedilab.
Apple’s App Store also has the official Mastodon app, but Tootle could be an alternative option.

Timelines

The way in which other people’s posts are presented chronologically on your home page (the so-called timeline) is fundamentally different in Mastodon compared to Twitter. On Twitter you have a single timeline, populated by posts from the people you follow, combined with other people’s posts that have been liked by people you follow. Twitter will additionally show posts from people unknown to you, if its algorithms decide that these posts may be relevant for you, based on the behavioral profile it created from your interactions with the platform. Therein lies the danger that you get pulled into an information bubble.
On top of that, you will see lots of targeted advertising by Twitter’s affiliates, again based on your interests and the posts you liked or replied to in the past.

Mastodon, on the other hand, does not analyze your posting behavior and does not run algorithms to influence your experience of the platform. You decide what you will see on your home timeline.
Speaking of timelines: Mastodon has three of them!

  • The Home timeline is displayed on your home screen by default, and contains the posts of people you follow, as well as posts from other people that were boosted (shared) by the people you follow, plus posts containing hashtags that you have subscribed to (see below) and posts that directly mention your @nick.
  • The Local timeline is a separate view, showing the posts of all the other users of the Mastodon server instance where you are logged in.
  • The Federated timeline, finally, is a chronological feed of all the posts originating outside your local server that your local server has been made aware of. Meaning: you will see an arbitrary subset of all communication in the Fediverse, determined largely by whom the users of your local server instance follow and interact with.

The nature of the content displayed in Local and Federated timelines is why I mentioned earlier that you might want to create an account on a Mastodon server centering around a specific topic that interests you. You will see more  posts on such a server that are relevant to you.

Find people and hashtags to follow

Curating your timeline is a matter of following people you are interested in, applying proper filtering to hide unwanted content, and subscribing to hashtags to pull in news from further away.

How to find people to follow?
Let’s start with people you are already following on Twitter. To find out who is already on Mastodon you can enter “mastodon” in the Twitter search bar and then limit the search results to “people you follow”.
Also, there are Twitter-to-Mastodon gateways which expose Twitter accounts to the Mastodon network; when you search for someone on Mastodon and the server part of their account contains a word like “birdsite” then this is a person whose Twitter posts are automatically being replicated on Mastodon.
And there are a lot of folks who have already migrated, of course; you will easily find them by entering their name or any nick/handle you know into the search box.
If you want to know who I am following, that’s not a secret: https://fosstodon.org/@alien/following

How to subscribe to hashtags?
If your server runs Mastodon version 4 or newer (most of them will do by now, the current version at the time of writing this is 4.0.2), and if you are using a Web browser to access Mastodon, you perform the following steps: enter a word in the Mastodon search box; in the results pane click on “hashtags” to display only the hashtags that match your search phrase; click on the hashtag that interests you; and finally click the “follow” icon which will be shown to the right of that hashtag.
Now, posts containing this hashtag will start showing on your Home timeline.

How to filter out the unwanted content?
In “Settings > Filters” you can use the “Add new filter” button to create filters that trigger on text strings which optionally match whole words. Posts containing the trigger text can be hidden from your Home, Local and/or Federated timelines, and/or from conversations/mentions. You can set an expiry date on a filter if you are only temporarily fed up with someone you follow.
Besides filtering, you can also mute people you follow. The easiest way to do so is to go to that user’s profile page, click on the 3-dot menu to the right of the username and select “mute”. By the way, that menu contains a whole lot of ways to change your level of interaction with this person, so go have a look!

Inform your Twitter friends

What I did to inform my Twitter friends and followers that I migrated to Mastodon was to add my Mastodon handle to my Twitter display name. I am now known as “Eric Hameleers (@alien@fosstodon.org)“. If you want to be a bit less conspicuous about it, you could also just update your profile description to tell people where they can find you on Mastodon. After all, everyone remembers the ban hammers on the Freenode IRC network, where accounts were auto-banned and channels auto-disowned when you mentioned you had moved to Libera.Chat. Musk is maniacal and emotionally unstable enough that he could do something similar on Twitter.

Documentation!

Detailed explanations on how to use Mastodon and interact with other people on the network can be found at https://docs.joinmastodon.org/ . I encourage you to read that documentation; it will prove quite useful.

Move to a different server (optional)

If you decide that you actually like another community better, it is possible to move your Mastodon account from one server to another. There are multiple ways to achieve this. It all starts with the creation of your new account of course, and with adding aliases for both the old and the new account that link them together.
Then you set a redirect from your old to your new account. People visiting your old profile will be informed of where you moved to. Besides a redirect, you can also initiate a formal move of your account, in which case Mastodon will automatically move all your followers from your old to your new account. Your old account will become a redirect to your new account, but at least to your followers the process is relatively transparent.
Redirected accounts are excluded from search results, so that people searching for you will only find the new account.

What is not moved are your historical posts and the list of people your old account follows. Both of your accounts will also go into a cooldown period during which it is not possible to initiate another account move.

Interested? Curious? I hope to see some of you on Mastodon!

Eric

Gentoo eudev adopted by Eudev Project

A recent LinuxQuestions thread discusses the deprecation of the eudev fork, which was created by Gentoo a few years back in order to keep systemd at bay. This step by Gentoo sparked some serious doubts among LQ members about what Slackware should do – is the inclusion of systemd near, now that eudev is dead?

Short recap: in November 2015, Slackware replaced its no longer maintained original udev with this new eudev (a standalone extract of udev from the systemd sources, modified so that every dependency on systemd is removed). This change was actually my chance to announce the liveslak project as a ‘celebration to say farewell to udev‘.
In November of 2020, a similar event happened when Slackware replaced ConsoleKit2 with elogind – a standalone copy of the logind code extracted from systemd, again with all dependencies on systemd removed. Both events were meant to keep Slackware free of systemd, at least for a while… who can stem the flow of water.

But there is good news. Yesterday, a collaboration of Alpine, Devuan and Gentoo contributors announced their adoption of eudev, and a new repository has been created where the new project will further develop eudev: https://github.com/eudev-project/eudev/blob/master/README.md . Let’s give these folks our best wishes!

Eric

How to ‘un-google’ your Chromium browser experience

… aka the future of Chromium-based (embedded) browsers


On March 15th 2021, Google is going to block non-Google chromium-based browsers from accessing certain “private Google Chrome web services” by unilaterally revoking agreements made with 3rd parties in the past.
Meaning, every Chromium based product not officially distributed by Google will be limited to the use of only a few public Google Chrome web services.
The most important service that remains open is “safe browsing”. The safe browsing feature identifies unsafe websites across the Internet and notifies browser users about the potential harm such websites can cause.

The most prominent feature which will be blocked after March 15th is “Chrome Sync”. This Chrome Sync capability in Chromium-based browsers allows you to log in to Google’s Sync cloud servers and save your passwords, browsing history and bookmarks/favorites to your personal encrypted cloud vault inside Google’s infrastructure.
Extremely convenient for people who access the Internet using multiple devices (like me: Chrome on a few Windows desktops, Chromium on several Slackware desktops and laptops, and Chrome Mobile on my Android smartphone) and who want a unified user experience in Chrome/Chromium across all these platforms.
In order to boost the development of Chromium-based (embedded) browser products, Google made deals with 3rd parties as far back as 2013 (from what I could find) and granted the API keys of these 3rd parties access to crucial Google web services, providing features that would draw users to these products.
If you offer a product that calls upon Google’s web services, there is a monetary cost involved once the number of your users’ connections exceeds the monthly upper limit for free usage. So on top of providing us (the Open Source distro Chromium packagers) access to these Google APIs, the Chromium team also substantially increased the non-billed monthly API consumption by the users of our distros’ Chromium browsers. This helped prevent us poor distro packagers from being billed for Cloud API usage in case our browser packages gained popularity.
And then, early in 2021, some Google white-collar people decided they had had enough of these freeloaders.

When Google dropped the bomb on us – on the distro packagers in particular – a fierce discussion started in two Google Groups (posts in one group are mostly duplicated into the other): Chromium Packagers and Chromium Embedders. It’s like talking to corporate drones – every question we asked was answered with the same bogus standard texts. Arrogance to the max!
Even more poignant is a parallel discussion among the Chromium Embedders, where some large electronics manufacturers discovered that some of their commercial products are similarly affected. Consumer electronics products that ship with browser-based embedded applications, like Smart TVs, often use CEF (Chromium Embedded Framework), and Google will block access for CEF products to their “private” Chrome APIs just like it is going to do with distro browsers – they are all based on the same Chromium source code and are all non-Google products.

If you wonder what happened to the Google motto “Don’t be Evil” – in 2018 that phrase was removed from the employee Code of Conduct. And indeed, looking at the discussions in the aforementioned topics, the top brass feels completely ‘senang‘ (comfortable) throwing us distro packagers under the bus while at the same time chastising us because apparently we do not adhere to their Code of Conduct.

Enough of all the bullshit – let’s look into the future. What can we do as Linux users, and what will I do as a distro packager?

Let me be clear: I do not want to take choices away from you. You can keep using Chromium, you can switch to Chrome, you can investigate whether Vivaldi or Brave (two chromium-based browsers with their own Google-free implementation of cloud sync) are better options for you.
I will however have to deal with the fact that I can no longer build a Chromium package that offers a synchronization of your private browser data out of the box. So what I will discuss in the remainder of this article are possibilities.

Chromium packages for Slackware are here to stay

… but I will remove my personal Google ID and corresponding secret from my chromium package. They will have been invalidated anyway on March 15 and are therefore useless. What I will leave in is my “Slackware Chromium API Key”, which keeps the “safe browsing” functionality alive if you use my browser.

I want to state here that from now on, I also explicitly forbid others / distros to re-use and re-package my binaries in order to make them part of their own Linux distribution: thinking of Slacko Puppy, Porteus, Slint and others. If needed I will use “cease & desist” messages if people refuse to comply. I am not going to pay Google for the use of my binaries in distros that I do not control. The use of my API key is automatic if you run my Chromium binaries, and it involves a monthly cost if Google’s Cloud APIs get called too often. I already had to negotiate several times with the Chromium people to avoid getting billed when their policies changed. So please get your own API key and compile your own version of the browser.
You can request your own API key/ID/secret in case you did not realize that! You’ll get capped access to Google API services, good for a single person but still without access to Cloud Sync. If you introduce yourself to the Chromium team as a distro packager, they may help you with increasing your browser’s un-billed API usage.

There’s a public discussion in the Google Group threads that I referred to above about your personal use of the official Google API keys. This could offer a way out of the blockade and would allow you to keep using Chrome Sync in a Chromium browser even after the distro packagers’ API keys have been invalidated. These official Chrome API key/ID/secret strings have been present as clear-text strings in the public Chromium source code for a long time already!
While I am not going to advocate that you should do this, it is up to you (the individual end user of a Chromium-based browser) to find those strings online and apply them to your browser’s startup environment.

Let me explain a bit. When I compile Chromium, my personal API key and Google client-ID are embedded in the resulting browser binary, and that’s why everything works so nicely out of the box. In the future I will not be embedding my client-ID anymore, but my API key will remain in the browser. That is how Safe Browsing will still work (it is associated with the API key) but Chrome Sync will stop working (because that is associated with the client-ID).
The good news is that Chromium browsers will check the environment when they start up, and look for specific variables that contain a custom API key and client-ID. My chromium package is built in such a way that it is easy to add such customization, by creating a “.conf” file in directory “/etc/chromium/”.
In the Slackware package for Chromium, you will find an example of how to apply such an API key/ID/secret combo. Just look at the file “/etc/chromium/01-apikeys.conf.sample”. If you remove the “.sample” suffix, this file will define three environment variables on startup of Chromium that tell the browser to use a specific service configuration.
You can also copy the Google Chrome key/ID/secret into that file, and then it is as if you are using a Chrome browser when talking to Google’s cloud services.
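To give an idea of what goes into such a file, here is a minimal sketch. The variable names are the ones Chromium documents for supplying keys at runtime; the values are placeholders, and the exact contents of my “.sample” file may differ:

# /etc/chromium/01-apikeys.conf - illustrative example, use your own keys:
export GOOGLE_API_KEY="your_api_key"
export GOOGLE_DEFAULT_CLIENT_ID="your_client_id"
export GOOGLE_DEFAULT_CLIENT_SECRET="your_client_secret"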

An ‘un-googled’ browser experience

The above API blocking scenario is a “win/lose” scenario as far as I am concerned. For Google it is a “win”: they still get to collect the data related to your online activities, which they can monetize. And you “lose”, because in return Google won’t allow you to use their cloud sync service any longer. That is not acceptable. And it led me to a bit of research into the possibilities of turning this fiasco into a “win” for the user.
It turns out that there is actually an existing online project: “ungoogled-chromium – a lightweight approach to removing Google web service dependency“.
At a high level, the “ungoogled-chromium” project offers a set of patches that can be applied to the Chromium source code. These patches remove every occurrence of Google web service URLs from the source code, which means that the resulting browser binaries are incapable of sending your private data into Google datacenters. Additionally, these patches bring privacy enhancements borrowed from other Chromium derivatives like the Inox patchset, Debian’s Chromium, the Iridium browser and Bromite.
Not just a “win” for the user but a “lose” for Google. They basically brought this down on themselves.

My conclusion was that removing the Google associations from Chromium, while at the same time improving its privacy controls, is what I must focus on in future Chromium packages.

During my research I did look at existing alternative Chromium browser implementations. They all have their own merits, I guess. I do not like to switch to Vivaldi since I think its development process is hazy, i.e. not public: only its complete release tarballs are downloadable. Or Brave – its sources are not available at all and it tries to enforce a rewards system where you are encouraged to view ads – I mean, WTF? If I wanted to run a browser compiled by another party that tries to use me for their own gain, I could just stick with the official Chrome and be happy. But that is not my goal.

What I did instead was to enhance my chromium.SlackBuild script with a single environment variable “USE_UNGOOGLED” plus some shell scripting which is executed when that variable is set to ‘true’ (i.e. the value “1”). The result of running “USE_UNGOOGLED=1 ./chromium.SlackBuild” is a package that contains an “un-googled” Chromium browser that has no connection at all to Google services.
I make that package available separately at https://slackware.nl/people/alien/slackbuilds/chromium-ungoogled/
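To illustrate what that bit of shell scripting roughly does, here is a simplified sketch. It is not the literal contents of my chromium.SlackBuild: the helper scripts named below are the ones shipped in the ungoogled-chromium repository, and the directory layout is just an example.

# Simplified sketch only; paths are illustrative.
if [ "${USE_UNGOOGLED:-0}" = "1" ]; then
  # Remove pre-built binary blobs from the Chromium source tree:
  ./ungoogled-chromium/utils/prune_binaries.py \
    chromium-${VERSION} ungoogled-chromium/pruning.list
  # Apply the ungoogled-chromium patch series on top of the sources:
  ./ungoogled-chromium/utils/patches.py \
    apply chromium-${VERSION} ungoogled-chromium/patches
fi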

Be warned: using un-Googled Chromium takes some getting used to, but no worries: I will guide you through the initial hurdles in this article. Continue reading! And especially read the ungoogled-chromium FAQ.

The first time you start my chromium-ungoogled, it will create a profile directory “~/.config/chromium-ungoogled”, which means you can use regular Chromium and un-googled Chromium in parallel; they will not pollute or affect each other’s profiles.

You’ll notice as well that the default start page points to the Chrome Web Store but the link actually does not work. That’s unfortunate, but I decided not to look into changing the default start page (for now). Patches welcome.

Which leads to the first question, also answered in the above FAQ: how to install Chrome extensions if the Chrome Web Store is inaccessible?
The answer is to download and install the chromium-web-store extension (Chrome extensions are packed in files with a .crx suffix), which afterwards allows direct installations from the Web Store. You have to do this one manually, but the instructions for these manual installation steps are clear. Any subsequent extension is then a lot easier to install.

Another quirk you may have questions about is the fact that un-Googled Chromium seems to forget your website login credentials all the time. Actually this is done on purpose. FAQ #1 answers this: Look under chrome://settings/content/cookies and search for “Clear cookies and site data when you quit Chromium“. Disable this setting to prevent the above behavior.

Watching Netflix, Hulu or Disney+ content will not work out of the box; you’ll have to install the Widevine CDM library yourself. If you have been a Slackware user for a while, you may recall that I used to provide chromium-widevine-plugin packages, until the capability to download that plugin all by itself was added to the Chromium source code by Google. Well… un-Googled Chromium removed that capability again, but I have updated my package repository with a version of the widevine-plugin that works with the un-Googled browser.

Safe browsing is not available in un-Googled Chromium, since that too is a service provided by Google. Recommended alternatives are uBlock Origin or uMatrix.

Sync your browser data to an online service which is under your own – not Google’s – control

Now that we have said good-bye to Google Cloud Sync, can we still sync our passwords, bookmarks and browsing history to a remote server and access these data from multiple browsers? Yes we can!
Even better, we can sync that data to a place that is under our own control. Multiple computers using the same synchronized data will give you the same experience as your prior usage of Google Cloud Sync. This is then also not limited to Chromium-based browsers – Mozilla-based browsers are able to access the same centrally stored data. Win!

The question is then: how to implement it? Is this something you can do without being an IT person or a Slackware Guru?
I will show you that the answer is “yes”, in a follow-up article dealing with keepassxc and xbrowsersync.

Have fun! Eric

Google muzzles all Chromium browsers on 15 March 2021

Ominous music

A word of caution: a long rant is ahead. I apologize in advance.
There was an impactful post on the Google Chromium blog last Friday. I recommend you read it now: https://blog.chromium.org/2021/01/limiting-private-api-availability-in.html

The message to take away from that post is “We are limiting access to our private Chrome APIs starting on March 15, 2021“.

What is the relevance, I hear you ask.
Well, I provide Chromium packages for Slackware, both 32bit and 64bit versions. These chromium packages are built on our native Slackware platform, as opposed to the official Google Chrome binaries which are probably compiled on an older Ubuntu, for maximum compatibility across the Linux distros where these binaries are used. One unique quality of my Chromium packages for Slackware is that I provide them for 32bit Slackware. Google ceased providing official 32bit binaries long ago.

In my Slackware Chromium builds, I disable some of the more intrusive Google features. An example: listening all the time to someone saying “OK Google” and sending the follow-up voice clip to Google Search.

And I create a Chromium package which is actually usable enough that people prefer it over Google’s own Chrome binaries. The reason for this usefulness is the fact that I enable access to Google’s cloud sync platform through my personal so-called “Google API key“. In Chromium for Slackware, you can log on to your Google account and sync your preferences, bookmarks, history, passwords etc. to and from your cloud storage on Google’s platform. Your Chromium browser on Slackware is able to use Google’s location services and offer localized content; it uses Google’s translation engine, etcetera. All that is possible because I formally requested and was granted access to these Google services through their APIs, within the context of providing them through a Chromium package for Slackware.

The API key, combined with the ID and passphrase that allow your Chromium browser to access all these Google services, is embedded in the binary – they are added during compilation. They are my keys, and they are distributed and used with written permission from the Chromium team.
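For the technically curious, here is a sketch of what “added during compilation” can look like in practice. This is illustrative only: modern Chromium builds use GN, the argument names below are the upstream ones, and the values shown are placeholders, not my actual keys.

# Supply the key material as GN build arguments (placeholder values):
gn gen out/Release --args='google_api_key="MY_API_KEY" google_default_client_id="MY_CLIENT_ID" google_default_client_secret="MY_CLIENT_SECRET"'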

These API keys are usually meant to be used by software developers when testing their programs which they base on Chromium code. Every time a Chromium browser I compiled talks to Google through their Cloud Service APIs, a counter increases on my API key. Usage of the API keys for developers is rate-limited,  which means if an API key is used too frequently, you hit a limit and you’ll get an error response instead of a search result. So I made a deal with the Google Chromium team to be recognized as a real product with real users and an increased API usage frequency. Because I get billed for every access to the APIs which exceeds my allotted quota and I am generous but not crazy.
I know that several derivative distributions re-use my Chromium binary packages (without giving credit) and hence tax the usage quota on my Google Cloud account, but I cover this through donations, thank you my friends, and no thanks to the leeches of those distros.

Now, what Google wants to do is limit the access to and usage of these Google services to only the software they themselves publish – i.e. Google Chrome. They are going to deny access to Google’s Cloud Services for all 3rd-party Chromium products (i.e. any binary software not distributed by Google).
Understand that there are many derivative browsers out there – based on the Open Source Chromium codebase – currently using a Google API key to access and use Google Cloud services. I am not talking about just the Chromium packages which you will find for most Linux distros and which are maintained by ‘distro packagers’. But also commercial and non-commercial products that offer browser-like features or interface and use an embedded version of Chromium to enable these capabilities. The whole Google Cloud ecosystem which is accessible using Google API Keys is built into the core of Chromium source code… all that these companies had to do was hack & compile the Chromium code, request their own API key and let the users of their (non-)commercial product store all their private data on Google’s Cloud.

Google does not like it that 3rd parties use their infrastructure to store user data Google cannot control. So they decided to deliver a blanket strike – not considering the differences in usage, simply killing everything that is not Google.
Their statement to us distro packagers is that our use of the API keys violates their Terms of Service. The fact is that in the past, several distros have actively worked with Google’s Chromium team to give their browser a wider audience through functional builds of the Open Source part of Chrome. I think that Google should be pleased with the increased profits associated with the multitude of Linux users using their services.
This is an excerpt from the formal acknowledgement email I received (dating back to 2013) with the approval to use my personal Google API key in a Chromium package for Slackware:

Hi Eric,

Note that the public Terms of Service do not allow distribution of the API
keys in any form. To make this work for you, on behalf of Google Chrome
Team I am providing you with:

    - Official permission to include Google API keys in your packages and to
      distribute these packages. The remainder of the Terms of Service for each
      API applies, but at this time you are not bound by the requirement to only
      access the APIs for personal and development use, and
    - Additional quota for each API in an effort to adequately support your
      users.

I recommend providing keys at build time, by passing additional flags to
build/gyp_chromium. In your package spec file, please make an easy to see
and obvious warning that the keys are only to be used for Slackware. Here
is an example text you can use:

# Set up Google API keys, see
http://www.chromium.org/developers/how-tos/api-keys .
# Note: these are for ... use ONLY. For your own distribution,
# please get your own set of keys.

And indeed, my chromium.SlackBuild script contains this warning ever since:

# This package is built with Alien's Google API keys for Chromium.
# The keys are contained in the file "chromium_apikeys".
# If you want to rebuild this package, you can use my API keys, however:
# you are not allowed to re-distribute these keys!!
# You can also obtain your own, see:
# http://www.chromium.org/developers/how-tos/api-keys

It effectively means that I alone am entitled to distribute the binary Chromium packages that I create. All derivative distros that use/repackage my binaries in any form are in violation of this statement.

On March 15, 2021, access to Google’s cloud services will be revoked for my API key (and those of all the other 3rd parties providing any sort of Chromium-related binaries). It means that my Chromium will revert to a simple browser which will allow you to log in to your Google account and store your data (bookmarks/passwords/history) locally, but will not sync that data to and from your Google Cloud account. Also, location and translation services and probably several other services will stop working in the browser. Effectively, Google will muzzle every Chromium browser, forcing people to use their closed Chrome binaries instead if they want cross-platform access to their data – for instance, when using Chrome on Android and Chromium on Slackware.
Yes, Chrome is based on Chromium source code, but there is code added on top that we do not know of. Not everybody is comfortable with that. There was a good reason to start distributing a Chromium package for Slackware!

Now the one million dollar question:

Will you (the users of my package) still use this muzzled version of Chromium? After all, Slackware-current (soon to become 15.0 stable) contains the Falkon browser as part of Plasma5, and Falkon is a Chromium browser core with a Qt5 graphical interface which does not use any Google API key either. Falkon will therefore offer the same or a similar feature set as a muzzled Chromium.
If you prefer not to use Chromium any longer after March 15, because this browser has lost its value and unique distinguishing features for you, then I would like to know. Compiling Chromium is not trivial: it takes a lot of effort every major release to understand why it no longer compiles and to then find solutions for that, and the compile time is horribly long as well. Any mistake or build failure easily sets me back a day. It means that I will stop providing Chromium packages in the event of diminishing interest. I have better things to do than fight with Google.

Please share your thoughts in the comments section below.

FYI:

There are two threads on Google Groups where the discussion is captured; the Chromium Embedders group: https://groups.google.com/a/chromium.org/g/embedder-dev/c/NXm7GIKTNTE – and most of it (but not all!) is duplicated in the Chromium Distro Packagers group: https://groups.google.com/a/chromium.org/g/chromium-packagers/c/SG6jnsP4pWM – I advise you to read the cases made by several distro packagers, and especially take good note of how the Google representatives are answering our concerns. There is more than a tad of arrogance and disrespect to be found there; so much that one poster pointed the Googlers who take part in the discussion (Director level, mind you; not the friendly developers and community managers who have been assisting us all these years) to the Chromium Code of Conduct. I am so pissed off by this attitude that I forwarded the discussion to Larry Page in a hissy fit… not that I expect him to read and answer, but it had to be done. Remember the original Google Code of Conduct mantra, “Don’t be evil”?

Warning about Python3 update in latest -current

A warning for people who run Slackware-current and have 3rd-party packages installed (who doesn’t?) that depend on Python3. That includes you who are running KDE Plasma5!

The “Sun Oct 25 18:05:51 UTC 2020” update in Slackware-current comes with a bump of the Python3 version (to 3.9), which is incompatible with software that has already been compiled against an older version of Python3 (like 3.8).

I found 26 of my own packages on my laptop that depend on Python3, and they are probably all going to break when upgrading to the latest slackware-current. This includes the Plasma5 ‘ktown’ packages but also several of my DAW packages.
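If you want to check your own system, here is a rough sketch of how to spot such packages. It assumes the affected packages installed files into the Python 3.8 module tree; the file lists of installed packages live in /var/log/packages.

# List installed packages that shipped files under the Python 3.8 tree:
grep -l 'python3\.8/site-packages' /var/log/packages/*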

I suggest that you hold off on upgrading to the latest -current for a short while. Give Pat a chance to add Plasma5 to Slackware, because I am not going to recompile any ‘ktown’ package over this.
I will however look at my other packages (cecilia5, wxpython to name but a few) and recompile those against the newer Python. Watch this space for more news.
