My thoughts on Slackware, life and everything

Sync and share your (Chromium and more) browser data among all your computers

I had promised to write a follow-up article about my journey to find a good alternative for the “Chrome Sync” feature that used to be available in my Chromium packages for Slackware. This is that article. It is fairly large and I hope it is written well enough that you can follow the explanations and examples and are able to implement this yourself. The article is divided into a couple of main sections:

  • Securely syncing your browser’s online passwords to (and retrieving them from) a central location under your control
  • Syncing your browser’s bookmarks to a central location under your control
  • Migrating your Chromium passwords into a central password database under your control
  • Migrating your Chromium bookmarks into a bookmark server under your control.

If you run into anything that is not clear, or does not work for you, please let me know in the comments section!

I’d like to remind you that after 15 March 2021 the Chrome Sync service provided by Google’s cloud infrastructure will only be accessible by Google’s own (partly closed-source) Chrome browser. All other Chromium-based software products are blocked from what Google calls “their private API services”.
All non-Google-controlled 3rd parties providing software binaries based on the Chromium source code are affected, not just my Slackware package. The Chromium browser packages that have been provided in most Linux distros for many years, will lose a prime selling point (being able to sync your browser history, bookmarks and online passwords to the cloud securely) while Google will still be able to follow your browsing activities and monetize that data.
For many people (including myself) using Chromium, the ability to sync your private browser data securely to Google’s cloud and thus get a unified browser experience on all platforms (Linux, Android, Windows, Mac operating systems) certainly outweighed the potential monetary gain for Google from our use of a product with its origins in Google – like Chromium. Compiling the sources on Slackware allowed me to deal with some of the privacy concerns. But now the balance has tipped.

Google took this Chrome Sync away from us Linux Chromium users, and it’s important to come up with alternatives.
If you don’t use Chrome or Chromium, then this article is probably not relevant to you. If you are using Mozilla’s Firefox: that browser has its own open source cloud sync server, so you can use Mozilla’s own cloud sync or else set up a Sync Server on a machine which is fully under your control. The same is not possible for Chromium browsers unfortunately, although Brave and Vivaldi – two not completely open source Chromium derivatives – have developed their own cloud sync server implementations. But (again unfortunately) those sync solutions are fully entangled with their own browser products.

I already wrote about “un-Googled Chromium”, the browser which is stripped of everything that would attempt to connect to a Google service. The browser that is provided by my “chromium-ungoogled” package is as good as the original “chromium” browser in terms of core functionality and perhaps it is even a tad faster than the original. I urge you to read that article before continuing below.
Because the remainder of this article will deal with: how do you sync, or save, your browser’s online passwords, your bookmarks and your history to a remote server? Using only open source tools? And in such a way that no one will be able to access these data unless you approve it?

Let’s start with a disappointment: there is currently no Open Source solution available that lets you sync your browser history to a cloud server and then access that history in other browsers on other computers. But, one of the products further below (xBrowserSync) does have this feature on its development roadmap – we will just have to be a wee bit patient.
With that out of the way, let’s focus on what is possible.

Syncing your browser’s online passwords

The internet is not a safe place. Sometimes web sites are hacked and user credentials are obtained. If you deal with web sites where you have to authenticate yourself to gain access to certain functionality, that server needs to store your credentials (hopefully encrypted) locally. If you want to prevent hackers from using your credentials to do you harm, it is highly recommended to always use strong passwords, and to use a different password for every web site that you need to log in to.
Remembering all these strong passwords is an impossible task for the average human, which is why we have password managers. A password manager is an application that provides a means to store your credential data in an encrypted database which you unlock with just a single passphrase – and that leaves you with the more easily achievable task to remember one strong password to gain access to all the others. Password managers have become quite user-friendly and some of them integrate with your web-browser.

I present one of those password managers here: KeePassXC. I think it is the best solution to our problem. It is cross-platform (Linux, Windows, macOS), uses strong encryption, is open source and integrates with Google and Mozilla browsers, including un-Googled Chromium.
KeePassXC is not a client-server based service though – it all happens on your local computer. In order to make your password database accessible on multiple computers, you will have to store it in such a way that it automatically gets uploaded to (and then kept in sync with) a networked location. If you want to be able to access your passwords while on the move (i.e. outside of your house) the most logical solution is to use a “cloud storage” provider who offers cross-platform sync clients – at a minimum we want Slackware Linux to be supported.

The steps to take:

  1. Ensure that you have a cloud storage client installed on your computers. I’ll give two examples: a client application for the free basic plan of the commercial Dropbox, or for a NextCloud server you operate yourself. There are other solutions of course, use whatever you like most.
    Key here is that you are going to create an encrypted password database somewhere inside the directory tree that you sync to this cloud storage provider.
  2. Download and install chromium-ungoogled. Start it and go through the initial configuration as explained in a previous article so that browser extensions can be installed properly and easily, and ensure that you install the KeePassXC extension from the Web Store. Don’t configure that extension yet.
  3. On your Slackware-current computer, download and install the keepassxc package. Like my chromium-ungoogled package it is only available for Slackware-current at the moment, but in future I intend to add Slackware 14.2 support for chromium-ungoogled and the sync software as well.
  4. Start KeePassXC – it is a graphical program so you need to be logged in to a graphical session like Plasma5 or XFCE – and configure it to automatically launch at system startup i.e. when you login (in General > Basic Settings). Configure all other convenience settings to your liking, such as minimizing to system tray, automatically locking the database etc.
  5. Enable browser integration in KeePassXC (Tools > Settings > Browser integration > Enable). In the “General” tab, check the box to at least enable “Chromium” integration. Check any other browser that you would like to use with KeepassXC but for the purposes of this article “Chromium” is the only mandatory choice.
  6. [Only if you use my chromium-ungoogled package instead of regular chromium]: Open the “Advanced” tab and check the box “Use a custom browser configuration location”. Select “Chromium” in the “Browser Type” dropdown. In the “Config Location” entry field you write “~/.config/chromium-ungoogled/NativeMessagingHosts”. This will generate a file “org.keepassxc.keepassxc_browser.json” in that location, specifying the integration parameters between the KeePassXC program and the browser extension.
    You may have to fiddle a bit with that last step to get KeePassXC to actually create the JSON file. It may be that you have to start your chromium-ungoogled first and then enable browser integration in KeePassXC.
  7. In KeepassXC you now create a new database (or point it to a database you may already have) in your cloud-sync directory. That database will then sync remotely and will be accessible by KeepassXC instances running on other computers that have the same cloud sync client installed.
    Word of advice: If you do not have a password database already and are starting from scratch, it will be easier to import your old browser’s passwords right now instead of having to merge them later. Look at the end of this article for the section “Importing your passwords from Chromium/Chrome into KeePassXC” which will tell you exactly how to import your old passwords before you continue to the next step!
  8. Now that we have successfully connected KeePassXC to its browser extension, your browser will ask whether it should store any new credentials you enter on web sites into the encrypted KeePassXC database. The KeePassXC extension will automatically add icons to credential entry fields on web pages that allow you to auto-fill these fields if the web site is already known in your database. And if you don’t want to invent strong passwords yourself, password fields can be enhanced with an icon that gives access to the KeePassXC strong random password generator.
  9. Goal achieved. You can now share and use your online passwords on multiple computers. Note that this functionality is not limited to your (un-Googled) Chromium; it also works with the Firefox browsers that you use.
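If the browser extension fails to connect to KeePassXC, the native-messaging manifest created in step 6 is the first thing to check. Here is a quick sanity-check sketch; the path is the one assumed in this article for chromium-ungoogled, and it assumes python3 is installed (it is, on a full Slackware installation):

```shell
# Check that the native-messaging manifest wiring KeePassXC to
# chromium-ungoogled exists, and that it is valid JSON.
CFG="$HOME/.config/chromium-ungoogled/NativeMessagingHosts/org.keepassxc.keepassxc_browser.json"
if [ -f "$CFG" ]; then
    echo "manifest found:"
    # pretty-print the manifest to verify it is intact
    python3 -m json.tool "$CFG"
else
    echo "manifest missing - (re)enable browser integration in KeePassXC"
fi
```

If the manifest is missing, revisit step 6 with chromium-ungoogled already running.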

Note for Android users: if you installed un-Googled Chromium from F-Droid following these instructions, the compatible KeePass2Android and KeePassDX apps are both recommended by the KeePassXC developers, so you can access all your passwords there even though there is no integration with the browser.

Syncing your browser’s bookmarks

For this part, I had already done some preparations. I shared those in a previous post: how to build and configure a MongoDB database server. The best open source bookmark sync server I could find is xBrowserSync, and it uses MongoDB as its database backend.
XBrowserSync is a service you can operate yourself by installing it on a machine you control, but you can also use any of the free online xBrowserSync servers that are operated by volunteers on the internet. Registration to a server is not required, access is anonymous and only you know the sync ID and password which are used to store your data.

I’ll first instruct you how to get the browser-side working with the developers’ freely accessible servers and then I’ll show you how to setup your own sync server.

Setting up your browser

It starts with installing xBrowserSync browser extension into your (un-Googled) Chromium from the Web Store. Important: if you want to use xBrowserSync you need to disable other synchronization mechanisms like Chrome Sync first to prevent interoperability issues. Opening the settings for this extension takes you through a set of introduction messages and then lets you pick a sync server to connect to – the default is the developers’ own server. You then specify a password which will be used to encrypt your data before it gets uploaded. The server will assign you a “Sync ID“, a seemingly random string of 32 hex characters. Write down that Sync ID! It is the combination of the Sync ID and your password that will grant access to your bookmarks if you install the xBrowserSync extension in another browser.
Sync will start immediately.

Setting up your server

Now onto setting up a xBrowserSync server yourself. After all, what’s the fun in using someone else’s services, right? You could have stuck with Chrome otherwise.
To install a xBrowserSync service, you’ll require an online computer which ideally runs 24/7. You can use a desktop or a real server, but it needs to be a 64-bit system. The reason? MongoDB, the database backend, only supports 64-bit computers; you won’t be able to compile it on 32-bit.
I run mine from a computer at home, but you can rent a server online too if you would like that better. Assuming here that the computer runs 64-bit Slackware 14.2 or higher, the requirements are:

  • Apache’s webserver “httpd” is installed and running (other web servers like nginx are also possible but outside the scope of this article). Apache httpd is part of a Slackware full installation. In this article we will assume that the URL where you can reach the webserver is https://darkstar.net/ .
  • You have configured your Apache httpd for secure connections using HTTPS and the SSL certificates come from Let’s Encrypt. If this is something you still need to do, read a previous article here on the blog, on how to set that up using the ‘dehydrated‘ script (which was added to Slackware-current since I wrote that post).
  • TCP ports 80 (http) and 443 (https) are open to the Internet (if you want to be able to access your server while on the move). For a server that is running in your own home, you’ll have to port-forward these ports 80 and 443 from your ISP’s modem/router to your own computer.
  • Node.js is installed. You can get a package from my repository.
  • MongoDB is installed and the database server is running. A package is available in my repository and configuration of MongoDB is covered in my previous article.
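The requirements above can be sanity-checked from a shell. A hedged sketch – it only verifies that the binaries are present on the PATH, not that the services are configured or running:

```shell
# Check that the tools required for an xBrowserSync server are installed.
# Presence on the PATH only; configuration is covered in the steps below.
for tool in httpd node npm mongod git; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found at $(command -v "$tool")"
    else
        echo "$tool: NOT FOUND - install it before continuing"
    fi
done
```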

We start by creating a dedicated user account to run the xBrowserSync service on our computer:

# groupadd xsync
# useradd -g xsync -d /home/xsync -m -s /bin/false xsync

Then clone the xBrowserSync API source repository. My suggestion is to use /usr/local for that and give our new “xsync” user the access permissions to maintain the xBrowserSync files. The ‘#’ depicts root’s shell prompt and ‘$’ is a user shell prompt:

# mkdir /usr/local/xbrowsersync
# chown xsync /usr/local/xbrowsersync
# su - xsync -s /bin/bash
$ cd /usr/local/xbrowsersync
$ git clone https://github.com/xbrowsersync/api.git

Still as user ‘xsync’, install and build the xBrowserSync API package:

$ cd api
$ npm install
$ npm audit fix
$ exit

As ‘root’, create the log location we will use when the service starts:

# mkdir /var/log/xBrowserSync/
# chown xsync:xsync /var/log/xBrowserSync/

We have xBrowserSync installed; now we need to create the conditions for it to work. We start with configuring a MongoDB database for xBrowserSync. Below I will use example names for that database, the DB user account and its password. You should of course pick names of your own choosing, and substitute them wherever these examples appear when talking to the database.
Database: xbrowsersyncdb
DB User: slackadmin
DB Pass: slackpass

Start a mongo shell – it does not matter under which user account you do this as long as you know the MongoDB admin account (mongoadmin in this example) and its password:

$ mongo -u mongoadmin -p
>

Run the following commands in the mongo shell:

use admin
db.createUser({ user: "slackadmin", pwd: "slackpass", roles: [{ role: "readWrite", db: "xbrowsersyncdb" }] })
use xbrowsersyncdb
db.newsynclogs.createIndex({ "expiresAt": 1 }, { expireAfterSeconds: 0 })
db.newsynclogs.createIndex({ "ipAddress": 1 })

With the MongoDB database ready for use, we can proceed with configuring xBrowserSync. Login again to your computer as user ‘xsync’ and make a copy of the default settings file so that we can insert our own custom settings:

$ cd /usr/local/xbrowsersync/api
$ cp config/settings.default.json config/settings.json

In that file “/usr/local/xbrowsersync/api/config/settings.json”, change the following values from their defaults (or at least ensure that the values below are present). The MongoDB connection parameters must match the database, user and password you created in the mongo shell earlier:

"status.message": "Welcome to Slackware xBrowserSync"
"db.name": "xbrowsersyncdb"
"db.username": "slackadmin"
"db.password": "slackpass"
"port": "8080"
"location": "your_two_character_country_id"
"log.file.path": "/var/log/xBrowserSync/api.log"
"server.relativePath": "/"

Note that the Node.js server listens at port 8080 by default and I leave it so. But if you are already running some program using that same port number, you can change the port to a different value, say “8888”, in the above file – look for the “port” entry.
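For orientation: the dotted names in the list above correspond to nested keys in the JSON file. A sketch of how the relevant fragment of settings.json might look after editing – your file will contain many more keys, “nl” is a placeholder country code, and the exact nesting may differ between xBrowserSync versions, so treat settings.default.json as the authoritative template:

```json
{
  "db": {
    "name": "xbrowsersyncdb",
    "username": "slackadmin",
    "password": "slackpass"
  },
  "log": { "file": { "path": "/var/log/xBrowserSync/api.log" } },
  "location": "nl",
  "port": "8080",
  "server": { "relativePath": "/" },
  "status": { "message": "Welcome to Slackware xBrowserSync" }
}
```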

To operate the new xBrowserSync server, we create a shell script we can start at boot. The commands below show the use of a “here-document” to write a file. If you are unsure how to deal with the commands below, check out what a “here-document” actually means.
Still as user ‘xsync’ you run:

$ mkdir ~/bin
$ cat <<EOT > ~/bin/xbs_start.sh
#!/bin/sh
cd /usr/local/xbrowsersync/api
node dist/api.js
EOT
$ chmod 755 ~/bin/xbs_start.sh

This is a simple script to start a node.js server. That server does not move itself to the background, but we need the command to ‘disappear’ into the background if we want our computer to complete its boot sequence. There is a nice way to deal with backgrounding a program that can not run as a background service all by itself: we offer it a ‘virtual console’ and then move that virtual console to the background. A VNC server would be the graphical X analog of such a ‘virtual console’.
Slackware comes with two programs that offer what we need: screen and tmux. I am a screen fan so I’ll use that. As user ‘root’, add this to ‘/etc/rc.d/rc.local’:

# Start the xBrowserSync Server:
if [ -x /home/xsync/bin/xbs_start.sh ]; then
  echo "Starting xBrowserSync Server:  ~xsync/bin/xbs_start.sh"
  su - xsync -s /bin/bash -c "/usr/bin/screen -AL -mdS XSYNC bash -c '/home/xsync/bin/xbs_start.sh'"
fi

This ensures that your xBrowserSync service will start at boot, and the service will run in a detached ‘screen‘ session for user ‘xsync’. If you want to access the running program, you can attach to (and detach from) that running ‘screen’ session from your console or terminal prompt. As user ‘xsync’, simply run:

$ screen -x

… to connect to the virtual terminal, to look at the program output or perhaps even to terminate it with “Ctrl-c“. And if you have seen enough, you can easily detach from the ‘screen’ session again using the “Ctrl-d” key combination, leaving that session to continue running in the background undisturbed.
I will not force you to reboot your computer just to get the service up in the air, so this one time we are going to start it manually. Login as user ‘xsync’ and then run:

$ screen -AL -mdS XSYNC bash -c '/home/xsync/bin/xbs_start.sh'

You will not see a thing, but xBrowserSync is now running on your computer.
All that’s left to do now is to allow connections to the xBrowserSync program by your Chromium or Firefox browser. The program listens at the loopback address (127.0.0.1 aka localhost) on TCP port 8080, so access is only possible from that same computer. That is a safety measure – no one can access your service from a remote computer by default.
Note that if you changed the default xBrowserSync listen port from 8080 to a different value in “settings.json” above, you need to use that same port number in the Apache configuration block below.

We will configure Apache httpd to act as a reverse proxy – this means that we will deploy the httpd server as the frontend for xBrowserSync. Browsers will never connect to xBrowserSync directly (it is not even accessible from other computers), but instead we let them connect to Apache httpd. The webserver will relay these browser connections to the sync server running on the same machine.
Apache’s httpd is much better at handling a multitude of connections than the nodejs server used for xBrowserSync and we can also let httpd handle the encrypted connections so that we don’t have to complicate the xBrowserSync setup unnecessarily.

In the context of this article, we will operate our xBrowserSync server at the URL https://darkstar.net/xbrowsersync/ – this will be the URL you enter in the browser extension’s configuration instead of the default service URL https://api.xbrowsersync.org/. The “/xbrowsersync/” path component of that URL will be reflected in our Apache configuration.
There is no need to add anything to your VirtualHost configuration for the HTTP (80) web service, since we want to access the service exclusively over HTTPS (encrypted connections). This is the block of text you have to add to your VirtualHost configuration for the HTTPS (443) web service:

ProxyRequests Off
ProxyVia on
ProxyAddHeaders On
ProxyPreserveHost On

<Proxy *>
  Require all granted
</Proxy>

<Location "/xbrowsersync/">
  # Route incoming HTTP(s) traffic at /xbrowsersync/ to port 8080 of XBrowserSync
  ProxyPass        http://127.0.0.1:8080/
  ProxyPassReverse http://127.0.0.1:8080/
</Location>

Check your Apache configuration before starting the server:

# apachectl configtest
# /etc/rc.d/rc.httpd restart

Check the server status at https://darkstar.net/xbrowsersync/info
A nice GUI view of your service, and also the URL that you need to enter in the browser add-on: https://darkstar.net/xbrowsersync/
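You can also probe the info endpoint from a shell. A hedged sketch – darkstar.net is this article’s example hostname, so substitute your own:

```shell
# Query the xBrowserSync info endpoint through the Apache reverse proxy.
# A working service answers with a small JSON document; otherwise we say so.
curl -fsS https://darkstar.net/xbrowsersync/info \
    || echo "service not reachable - check Apache, the proxy block and xBrowserSync"
```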

Importing your passwords from Chromium/Chrome into KeePassXC

You don’t have to start from scratch with storing web site credentials if you were already saving them in Chromium – whether you were using Chrome Sync or not.
This is how you export all your stored credential data from your Chromium browser and import that into KeePassXC:

  1. In Chromium, open “chrome://settings/passwords” and next to “Saved Passwords” click the triple-dot menu and select “Export Passwords”.
    By default, a comma-separated values (CSV) file will be generated as “~/Documents/Chrome Passwords.csv”. That file contains all your passwords un-encrypted, so be sure to delete that file immediately after the next steps!
  2. If you also save Android app passwords into Google’s password vault and your Chromium browser was using Chrome Sync, you’ll have to clean the above CSV file from all lines that start with the string “,android:”. For instance using a command like this:

    $ sed -e "/^,android:/d" -i ~/Documents/Chrome\ Passwords.csv

  3. Open the KeePassXC program on your computer. We are going to import the CSV file containing the credentials we just exported from Chromium via the menu “Database > Import > CSV File”. The first thing to do is to pick your new database’s format and encryption parameters. The default values are perfectly OK. Then define the password that is going to protect your database, and the location to store that database.
    • If this is actually going to be the database you intend to use with your browser, then select a directory that is being synchronized with an external cloud storage location.
    • If you already have a password database you are using in your browser but you want to import your old Chromium passwords after the fact, then the location of this database does not matter since it is only a temporary one and can be deleted after merging its contents with the real password database. So in this case, anywhere inside your homedirectory is just fine.
  4. An import wizard will then guide you through the process of associating the CSV data to matching database fields.
    Aim for matching the columns in the CSV file (name, username, password, url) to the corresponding fields in the KeePassXC database (Title, Username, Password, URL):
    The trick here is to check the box “First line has field names” and then select: “name” in the Title dropdown; “username” in the Username dropdown; “password” in the Password dropdown and “url” in the URL dropdown. The dropdown for Group will show “name” as well but you need to select “Not Present” there and also ensure that all the other dropdowns are showing “Not Present”.
  5. After importing has completed and you have validated that the CSV values (name, user, password and url) are indeed imported into the correct database fields, the final question: is this your passwords database to use with the browser plugin?
    • If yes, then you are done!
    • If no, then we need to merge the database we just created into your real passwords database.
      Open your real passwords database in KeePassXC and then select the menu “Database > Merge from Database…”. You get the chance to select your temporary database you just created from the imported CSV data. Enter that database’s password to unlock it and its data will immediately be merged. You can now safely delete the temporary database where you imported that CSV data.
  6. You must now delete the CSV file containing all your clear-text passwords. It is no longer needed.
  7. That’s all. Good luck with using your passwords across devices and browsers safely and under your exclusive control.
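The export/clean cycle above can be tried out safely on a fabricated sample first. A sketch – the file name and its contents are made up, and the sed command is the same one used in step 2:

```shell
# Fabricated sample resembling a Chromium password export:
cat <<'EOF' > /tmp/sample-passwords.csv
name,url,username,password
example,https://example.com/,alice,s3cret
,android://abcd1234/,alice,s3cret
EOF
# Remove the Android app credentials, as in step 2:
sed -e "/^,android:/d" -i /tmp/sample-passwords.csv
cat /tmp/sample-passwords.csv
# Remember: the real exported file holds clear-text passwords - delete it when done.
```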

Importing your browser bookmarks into XBrowserSync

Like with your passwords above, there’s also an easy way to import the bookmarks you were using in Chromium into XBrowserSync so that you can make these available in un-Googled Chromium. Remember that Chromium’s use of Chrome Sync conflicts with the xBrowserSync extension. You can of course first disable that Chrome Sync (and after 15 March 2021 you should do that anyway) and then enable xBrowserSync, but the procedure below will likely also work when you want to export bookmarks from another browser not supported by xBrowserSync.

  1. In Chromium, open the bookmark manager: “chrome://bookmarks/”
  2. Click on the triple-dot menu in the top-right and select “Export bookmarks”. Save the resulting HTML file somewhere, it is temporary and can be deleted after importing its contents into un-Googled Chromium.
  3. Open the resulting HTML file in an editor. The “Kate” editor in KDE Plasma deals pretty well with the HTML and breaks it down for you into sections you can easily collapse, expand or remove. You need to remove the “Apps” bookmark section in the HTML, which is not useful for the un-Googled version, and perhaps you want to remove some other stuff as well.
  4. In un-Googled Chromium, open “chrome://bookmarks/” and this time select “Import bookmarks” from the triple-dot menu in the top-right corner. Pick the HTML file you exported from Chromium and the imported content will show up in your Bookmarks Bar as a bookmark-menu called “Imported”.
  5. You’ll probably want to move stuff out of that “Imported” menu and into the locations where you previously had them in your old Chromium’s Bookmarks Bar. You can do that most easily from within the bookmark manager by right-clicking items, selecting “Cut” and then pasting them into another location on the Bookmarks Bar.
    One caveat: I found out empirically that the xBrowserSync extension does not like it when it is still syncing a change in your bookmarks, and then you make another change before it is finished. It will simply invalidate the sync and try to revert to the previously known bookmark list. The thing is, if you did a “Cut” in the meantime with the intention to “Paste” a set of bookmarks somewhere else, that set will be gone forever. Best is to keep an eye on the xBrowserSync extension’s icon next to the URL entry field and do not try to make changes while it is still in the process of syncing. If you keep this in mind, then the experience will be just great.
  6. All done! Enjoy the fact that the bookmarks you create or modify in one browser, will be immediately accessible in other connected browsers.
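The manual edit in step 3 can also be scripted. A hedged sed sketch demonstrated on a miniature fabricated export – it only works when the “Apps” folder contains no nested sub-folders, so that its closing `</DL><p>` is the first one after the folder header, and you should always inspect the result before importing:

```shell
# Create a miniature fabricated bookmarks export to demonstrate on:
cat <<'EOF' > /tmp/bookmarks.html
<DL><p>
    <DT><H3>Apps</H3>
    <DL><p>
        <DT><A HREF="chrome://apps/">Web Store</A>
    </DL><p>
    <DT><A HREF="https://slackware.com/">Slackware</A>
</DL><p>
EOF
# Delete everything from the "Apps" folder header up to its closing </DL><p>:
sed '/<DT><H3[^>]*>Apps<\/H3>/,/<\/DL><p>/d' /tmp/bookmarks.html
```

The output keeps the “Slackware” bookmark while the whole “Apps” folder is gone.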

This concludes a lengthy article; I hope it contains information you can make good use of. In the days and weeks to come, we’ll see what happens with the original Chromium and whether un-Googled Chromium can take its place. Let me know your experiences.

Have fun! Eric

HOWTO: install MongoDB on Slackware

Today I am going to show you how to install MongoDB, create a database admin account and enforce basic security.

Why MongoDB when Slackware already has MariaDB? Well, the two are not comparable. MariaDB is a SQL database server, whereas MongoDB is a “NoSQL” database server, aka “Not only SQL“, and its queries – just like its object storage format – are in JSON. The two types of databases have entirely different usage targets.

MongoDB is a ‘general-purpose, document-based database server‘. It has use-cases where it is more powerful than the traditional row/column model of a relational database management system. NoSQL databases, in particular MongoDB, are preferred over RDBMS in Cloud services, Big Data environments and for high-volume web based data processing services. These are typically environments where flexibility is required to handle big amounts of unstructured data and constantly varying schemas. A distributed cluster of MongoDB servers excels at “map-reduce“, the framework invented by Google for condensing large volumes of data into useful aggregated results – the very paradigm that catapulted Google Search into the number one position of search engines shortly after the turn of the millennium.

Again, why then MongoDB? Who cares?
The above preamble is no sales pitch, rather it is meant to give you some background first. This article is actually meant to bridge a previous and a future article here on the blog. In a previous article I wrote about un-googling your browser experience and promised that I would share with you a solution that allows you to sync your browser’s online passwords and bookmarks (and hopefully soon also your browsing history) to an online server that is fully under your control.
In a future article I will document how to setup that sync service, but to that end I need a working MongoDB server first. Creating a ‘mongodb’ package that I was satisfied with, and that is also usable on Slackware 14.2, proved a bit more time-consuming than I expected, but here it is and here we go.

Caveat: since MongoDB 3.4, the developers dropped support for 32-bit platforms. Current version is 4.4.4 and therefore the packages for MongoDB that I provide are for 64-bit Slackware 14.2 and -current only.

  1. Download and install a ‘mongodb’ package for your Slackware: http://www.slackware.com/~alien/slackbuilds/mongodb
    The installation of that package will create a “mongo” user and a “mongo” group on your computer. It will also install an RC script “/etc/rc.d/rc.mongodb” and add a couple of lines to your “/etc/rc.d/rc.local”, so that the MongoDB server will be started automatically every time your computer boots, as long as the “/etc/rc.d/rc.mongodb” script is executable (which it is by default). Also installed is a default configuration file “/etc/mongodb.conf” which is used by the RC script:

    processManagement:
      fork: true
      pidFilePath: "/var/run/mongodb/mongod.pid"
    net:
      bindIp: localhost
      port: 27017
    storage:
      dbPath: /var/lib/mongodb
      journal:
        enabled: true
    systemLog:
      destination: file
      path: "/var/log/mongodb/mongod.log"
      logAppend: true
    cloud:
      monitoring:
        free:
          state: off
    security:
      authorization: disabled
    

    This configures MongoDB to be accessible only on the “127.0.0.1” loopback interface, to listen at the MongoDB default TCP port “27017”, and to store its databases in “/var/lib/mongodb/”. This MongoDB configuration for Slackware comes out of the box without any form of access control and no authentication. We will fix that below.

  2. Increase the maximum number of open files (1024 by default) by un-commenting the following line in “/etc/login.defs”:
    ULIMIT 2097152
    If you omit this, every connection you make to the server will warn you about it.
  3. Then start the MongoDB server, the first time manually, but next time you boot the computer this will be done automatically. As root, run (‘#’ is the root prompt of course… don’t try to type it):
    # /etc/rc.d/rc.mongodb start
  4. After we have validated that the server is running using the “/etc/rc.d/rc.mongodb status” command, we are now going to enable access control and authentication. The documentation for that, as well as more recommendations on how to secure your database server, is available at: https://docs.mongodb.com/manual/administration/security-checklist/ . Based on that documentation we take the following steps to enable access control and enforce authentication:
    1. Add an administrative account to MongoDB. In this example I will use the name “slackadmin” with a password “slackpass” – change these to something you like better.
      Remember that the server runs without authentication or access control out of the box, so connecting to it is quite easy. You can start the “mongo” client from any user account, for instance your own regular login account (‘$’ is the Bash shell’s user prompt of course… don’t try to type it). By default, the “mongo” client program connects to a server on the “localhost” address at port 27017, and since the Slackware package uses these defaults, no commandline parameters are required:
      $ mongo
      connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
      Implicit session: session { "id" : UUID("12345678-1234-1234-1234-123456789012") }
      MongoDB server version: 4.4.4
      Welcome to the MongoDB shell.
      For interactive help, type "help".
      For more comprehensive documentation, see
      https://docs.mongodb.com/
      Questions? Try the MongoDB Developer Community Forums
      https://community.mongodb.com
      >

      At the mongo shell prompt, enter these lines to create the administrative account. Note that in MongoDB, the “admin” database is the default database in which user accounts are created and user access control to the other databases is defined:

      use admin
      db.createUser(
        {
          user: "slackadmin",
          pwd: "slackpass", // or use passwordPrompt(),
          roles: [ { role: "userAdminAnyDatabase", db: "admin" }, "readWriteAnyDatabase" ]
        }
      )
      quit()
      
    2. Now that we have added a user to the ‘admin’ database who has full access, we can stop the server and change its configuration to enforce authentication. As root:
      # /etc/rc.d/rc.mongodb stop

      … and then change “authorization: disabled” to “authorization: enabled” in the file “/etc/mongodb.conf”. After that change, start MongoDB again with the RC script. If you then attempt to run the “mongo” client just like that and try a command that needs privileges, you’ll get an error:

      $ mongo
      > show roles
      uncaught exception: Error: command rolesInfo requires authentication :
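The configuration change from the previous paragraph is a one-line edit; a hedged sed sketch, demonstrated here on a scratch copy of the relevant snippet so it is safe to try (as root, you would run the same command on “/etc/mongodb.conf” and then restart the server with the RC script):

```shell
# Demonstrated on a scratch copy of the relevant config snippet;
# as root, apply the same sed to /etc/mongodb.conf and restart MongoDB.
cat > /tmp/mongodb.conf.demo <<'EOF'
security:
  authorization: disabled
EOF
sed -i 's/authorization: disabled/authorization: enabled/' /tmp/mongodb.conf.demo
grep 'authorization' /tmp/mongodb.conf.demo
```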

      We will have to log in to the server now in order to do meaningful things. Since we still have only one user, we use that one. Note that you will be asked to enter the user’s password after pressing ENTER, and this time we get better feedback for our “show roles” command:

      $ mongo -u slackadmin -p
      > show roles
      {
      	"role" : "dbAdmin",
      	"db" : "test",
      	"isBuiltin" : true,
      	"roles" : [ ],
      	"inheritedRoles" : [ ]
      }
      {
      	"role" : "dbOwner",
      ... 

      In case you forgot to authenticate when starting the mongo client, i.e. you just executed “mongo”, don’t worry. You can also authenticate from within the mongo client shell:

      $ mongo
      > use admin
      > db.auth("slackadmin", passwordPrompt()) 
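
      As an aside, the credentials can also be passed as a MongoDB connection URI instead of the separate “-u/-p” flags. The hostname, port and example account below are the ones used throughout this article, so substitute your own:

      $ mongo "mongodb://slackadmin:slackpass@127.0.0.1:27017/admin"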

      What you do with the database is now up to you. In any case, I will assume that you have a running and pre-configured MongoDB instance by the time I write the follow-up article about browser sync. Don’t let that limit you! There are probably other good uses for this article; you just need to find them.

Good luck! Eric

How to ‘un-google’ your Chromium browser experience

… Aka the future of Chromium based (embedded) browsers


On March 15th 2021, Google is going to block non-Google chromium-based browsers from accessing certain “private Google Chrome web services” by unilaterally revoking agreements made with 3rd parties in the past.
Meaning, every Chromium based product not officially distributed by Google will be limited to the use of only a few public Google Chrome web services.
The most important service that remains open is “safe browsing”. The safe browsing feature identifies unsafe websites across the Internet and notifies browser users about the potential harm such websites can cause.

The most prominent feature which will be blocked after March 15th is “Chrome Sync”. This Chrome Sync capability in Chromium-based browsers allows you to log in to Google’s Sync cloud servers and save your passwords, browsing history and bookmarks/favorites to your personal encrypted cloud vault inside Google’s infrastructure.
Extremely convenient for people who access the Internet using multiple devices (like me: Chrome on a few Windows desktops, Chromium on several Slackware desktops and laptops, and Chrome Mobile on my Android smartphone) and who want a unified user experience in Chrome/Chromium across all these platforms.
In order to boost the development of Chromium-based (embedded) browser products, Google made deals with 3rd parties as far back as 2013 (from what I could find) and spiced the API keys of these 3rd parties with access to crucial Google Webservices providing features that would draw users to these products.
If you offer a product that calls upon Google’s Web Services there is a monetary cost involved once the number of your users’ connections exceeds the monthly upper limit for free usage. So on top of providing us access to these Google APIs (in the case of Open Source Distro Chromium packagers) the Chromium team also substantially increased the non-billed monthly API consumption by the users of our distros’ Chromium browsers. This helped to prevent us poor distro packagers from being billed for Cloud API usage in case our browser packages gained popularity.
And then, early 2021, some Google white-collar people decided they had had enough of these freeloaders.

When Google dropped the bomb on us – on the distro packagers in particular – a fierce discussion started in two Google Groups (posts in one group are mostly duplicated into the other group): Chromium Packagers and Chromium Embedders. It’s like talking to corporate drones – every question we asked was answered with the same bogus standard text. Arrogance to the max!
Even more poignant is a parallel discussion in Chromium Embedders, where some large electronics manufacturers discovered that some of their commercial products are similarly affected. Consumer Electronic products that ship with browser-based embedded applications like Smart TV’s often use CEF (Chromium Embedded Framework) and Google will block access for CEF products to their “private” Chrome APIs just like it’s going to do with distro browsers – they are all based on the same Chromium source code and are all non-Google products.

If you wonder what happened to the Google motto “Don’t be Evil” – in 2018 that phrase was removed from the employee Code of Conduct. And indeed, looking at the discussions in the aforementioned topics, the top brass feels completely ‘senang‘ (Indonesian for “at ease”) throwing us distro packagers under the bus, while at the same time chastising us because apparently we do not adhere to their Code of Conduct.

Enough of all the bullshit – let’s look into the future. What can we do as Linux users, and what will I do as a distro packager.

Let me be clear: I do not want to take choices away from you. You can keep using Chromium, you can switch to Chrome, you can investigate whether Vivaldi or Brave (two chromium-based browsers with their own Google-free implementation of cloud sync) are better options for you.
I will however have to deal with the fact that I can no longer build a Chromium package that offers a synchronization of your private browser data out of the box. So what I will discuss in the remainder of this article are possibilities.

Chromium packages for Slackware are here to stay

… but I will remove my personal Google ID and corresponding secret from my chromium package. They will have been invalidated anyway on March 15 and are therefore useless. What I will leave in, is my “Slackware Chromium API Key” which keeps the “safe browsing” functionality alive if you use my browser.

I want to state here that from now on, I also explicitly forbid others / distros to re-use and re-package my binaries in order to make them part of their own Linux distribution: thinking of Slacko Puppy, Porteus, Slint and others. If needed I will use “cease & desist” messages if people refuse to comply. I am not going to pay Google for the use of my binaries in distros that I do not control. The use of my API key is automatic if you run my Chromium binaries, and it involves a monthly cost if Google’s Cloud APIs get called too often. I already had to negotiate several times with the Chromium people to avoid getting billed when their policies changed. So please get your own API key and compile your own version of the browser.
In case you did not realize it: you can request your own API key/ID/secret! You’ll get capped access to Google API services, good enough for a single person but still without access to Cloud Sync. If you introduce yourself to the Chromium team as a distro packager, they may help you with increasing your browser’s un-billed API usage.

There’s a public discussion in the Google Group threads that I referred to above about your personal use of the official Google API keys. This could offer a way out of the blockade and would allow you to keep using Chrome Sync in a Chromium browser even after the distro packagers’ API keys have been invalidated. These official Chrome API key/ID/secret strings have been present as clear-text strings in the public Chromium source code for a long time already!
While I am not going to advocate that you should do this, it is up to you (the individual end user of a Chromium-based browser) to find those strings online and apply them to your browser’s startup environment.

Let me explain a bit. When I compile Chromium, my personal API key and Google client-ID are embedded in the resulting browser binary, and that’s why everything works so nicely out of the box. In the future I will no longer embed my client-ID, but my API key will remain. That is how Safe Browsing will still work (it’s associated with the API key) while Chrome Sync will stop working (because that’s associated with the client-ID).
The good news is that Chromium browsers will check the environment when they start up, and look for specific variables that contain a custom API key and client-ID. My chromium package is built in such a way that it is easy to add such customization, by creating a “.conf” file in directory “/etc/chromium/”.
In the Slackware package for Chromium, you will find an example of how to apply such an APIkey/ID/secret combo. Just look at the file “/etc/chromium/01-apikeys.conf.sample”. If you remove the “.sample” suffix this file will then define three environment variables on startup of Chromium that tell the browser to use a specific service configuration.
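As an illustration, the resulting “/etc/chromium/01-apikeys.conf” would look something like this. The variable names are the ones Chromium checks in its environment on startup; the values here are placeholders, so substitute the key/ID/secret you obtained yourself:

```shell
# /etc/chromium/01-apikeys.conf
# Placeholder values - replace them with your own API credentials:
export GOOGLE_API_KEY="your_api_key"
export GOOGLE_DEFAULT_CLIENT_ID="your_client_id"
export GOOGLE_DEFAULT_CLIENT_SECRET="your_client_secret"
```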
And you can also copy the Google Chrome key/ID/secret into that file, and then it’s as if you are using a Chrome browser when talking to Google’s cloud services.

An ‘un-googled’ browser experience

The above API blocking scenario is a “win/lose” scenario as far as I am concerned. For Google it is a “win”: they still get to collect the data related to your online activities, which they can monetize. And you “lose”, because in return Google won’t allow you to use their cloud sync service any longer. That is not acceptable, and it led me to research the possibilities of turning this fiasco into a “win” for the user.
Turns out that there is actually an existing online project: “ungoogled-chromium – a lightweight approach to removing Google web service dependency“.
At a high level: the “ungoogled-chromium” project offers a set of patches that can be applied to the Chromium source code. These patches remove every occurrence of Google Web Service URLs from the source code, which means that the resulting browser binaries are incapable of sending your private data into Google datacenters. Additionally, these patches bring privacy enhancements borrowed from other Chromium derivatives like the Inox patchset, Debian’s Chromium, the Iridium browser and Bromite.
Not just a “win” for the user but a “lose” for Google. They basically brought this down on themselves.

My conclusion was that removing Google associations from Chromium while improving its privacy controls is what I must focus on in future Chromium packages.

During my research I did look at existing alternative Chromium browser implementations. They all have their own merits, I guess. I do not want to switch to Vivaldi, since I think its development process is hazy, i.e. not public; only its complete release tarballs are downloadable. Or Brave – its sources are not readily available and it tries to enforce a rewards system where you are encouraged to view ads – I mean, WTF? If I wanted to run a browser compiled by another party that tries to use me for their own gain, I could just stick with the official Chrome and be happy. But that is not my goal.

What I did instead was to enhance my chromium.SlackBuild script with a single environment variable “USE_UNGOOGLED” plus some shell scripting which is executed when that variable is set to ‘true’ (i.e. the value “1”). The result of running “USE_UNGOOGLED=1 ./chromium.SlackBuild” is a package that contains an “un-googled” Chromium browser that has no connection at all to Google services.
I make that package available separately at https://slackware.nl/people/alien/slackbuilds/chromium-ungoogled/

Be warned: using un-Googled Chromium needs some getting used to, but no worries: I will guide you through the initial hurdles in this article. Continue reading! And especially read the ungoogled-chromium FAQ.

The first time you start my chromium-ungoogled it will create a profile directory “~/.config/chromium-ungoogled” which means you can use regular Chromium and the un-googled chromium in parallel, they will not pollute or affect each other’s profiles.

You’ll notice as well that the default start page points to the Chrome Web Store, but that link actually does not work. That’s unfortunate, but I decided not to look into changing the default start page (for now). Patches welcome.

Which leads to the first question also answered in the above FAQ: how to install Chrome extensions if the Chrome Web Store is inaccessible?
The answer, which enables direct installations from the Web Store afterwards, is to download and install the chromium-web-store extension manually (Chrome extensions are packed in files with a .crx suffix). The instructions for these manual installation steps are clear. Any subsequent extensions are then a lot easier to install.

Another quirk you may have questions about is the fact that un-Googled Chromium seems to forget your website login credentials all the time. Actually this is done on purpose. FAQ #1 answers this: Look under chrome://settings/content/cookies and search for “Clear cookies and site data when you quit Chromium“. Disable this setting to prevent the above behavior.

Watching Netflix, Hulu or Disney+ content will not work out of the box; you’ll have to install the Widevine CDM library yourself. If you have been a Slackware user for a while, you may recall that I used to provide chromium-widevine-plugin packages, until Google added the capability to download that plugin all by itself to the Chromium source code. Well… un-Googled Chromium removed that capability again, but I have updated my package repository with a version of the widevine-plugin that works with the un-Googled browser.

Safe browsing is not available in un-Googled Chromium, since that too is a service provided by Google. Recommended alternatives are uBlock Origin or uMatrix.

Sync your browser data to an online service which is under your own – not Google’s – control

Now that we have said good-bye to Google Cloud Sync, can we still sync our passwords, bookmarks and browsing history to a remote server and access these data from multiple browsers? Yes we can!
Even better, we can sync that data to a place that is under our own control. Multiple computers using the same synchronized data will give you the same experience as your prior usage of Google Cloud Sync. This will then also not be limited to Chromium based browsers – Mozilla based browsers are able to access the same centrally stored data. Win!

The question is then: how to implement it? Is this something you can do without being an IT person or a Slackware Guru?
I will show you that the answer is “yes”, in a follow-up article dealing with keepassxc and xbrowsersync.

Have fun! Eric

Google muzzles all Chromium browsers on 15 March 2021

Ominous music

A word of caution: long rant ahead. I apologize in advance.
There was an impactful post on the Google Chromium blog last Friday. I recommend you read it now: https://blog.chromium.org/2021/01/limiting-private-api-availability-in.html

The message to take away from that post is “We are limiting access to our private Chrome APIs starting on March 15, 2021“.

What is the relevance, I hear you ask?
Well, I provide Chromium packages for Slackware, both 32bit and 64bit versions. These Chromium packages are built on our native Slackware platform, as opposed to the official Google Chrome binaries, which are probably compiled on an older Ubuntu release for maximum compatibility across the Linux distros where these binaries are used. One unique quality of my Chromium packages for Slackware is that I provide them for 32bit Slackware; Google ceased providing official 32bit binaries long ago.

In my Slackware Chromium builds, I disable some of the more intrusive Google features. An example: listening all the time to someone saying “OK Google” and sending the follow-up voice clip to Google Search.

And I create a Chromium package which is actually usable enough that people prefer it over Google’s own Chrome binaries. The reason for this usefulness is the fact that I enable access to Google’s cloud sync platform through my personal so-called “Google API key“. In Chromium for Slackware, you can log on to your Google account and sync your preferences, bookmarks, history, passwords etc. to and from your cloud storage on Google’s platform. Your Chromium browser on Slackware is able to use Google’s location services and offer localized content; it uses Google’s translation engine, etcetera. All that is possible because I formally requested and was granted access to these Google services through their APIs, within the context of providing them through a Chromium package for Slackware.

The API key, combined with my ID and passphrase that allow your Chromium browser to access all these Google services are embedded in the binary – they are added during compilation. They are my key, and they are distributed and used with written permission from the Chromium team.

These API keys are usually meant to be used by software developers when testing their programs which they base on Chromium code. Every time a Chromium browser I compiled talks to Google through their Cloud Service APIs, a counter increases on my API key. Usage of the API keys for developers is rate-limited, which means that if an API key is used too frequently, you hit a limit and get an error response instead of a search result. So I made a deal with the Google Chromium team to be recognized as a real product with real users and an increased API usage frequency, because I get billed for every access to the APIs which exceeds my allotted quota – and I am generous but not crazy.
I know that several derivative distributions re-use my Chromium binary packages (without giving credit) and hence tax the usage quota on my Google Cloud account, but I cover this through donations, thank you my friends, and no thanks to the leeches of those distros.

Now, what Google wants to do is limit the access to and usage of these Google services to only the software they themselves publish – i.e. Google Chrome. They are going to deny access to Google’s Cloud Services for all 3rd-party Chromium products (i.e. any binary software not distributed by Google).
Understand that there are many derivative browsers out there – based on the Open Source Chromium codebase – currently using a Google API key to access and use Google Cloud services. I am not talking about just the Chromium packages which you will find for most Linux distros and which are maintained by ‘distro packagers’. But also commercial and non-commercial products that offer browser-like features or interface and use an embedded version of Chromium to enable these capabilities. The whole Google Cloud ecosystem which is accessible using Google API Keys is built into the core of Chromium source code… all that these companies had to do was hack & compile the Chromium code, request their own API key and let the users of their (non-)commercial product store all their private data on Google’s Cloud.

Google does not like it that 3rd parties use their infrastructure to store user data Google cannot control. So they decided to deliver a blanket strike – not considering the differences in usage, simply killing everything that is not Google.
Their statement to us distro packagers is that our use of the API keys violates their Terms of Service. The fact is that in the past, several distros have actively worked with Google’s Chromium team to give their browser a wider audience through functional builds of the Open Source part of Chrome. I think that Google should be pleased with the increased profits associated with the multitude of Linux users using their services.
This is an excerpt from the formal acknowledgement email I received (dating back to 2013) with the approval to use my personal Google API key in a Chromium package for Slackware:

Hi Eric,

Note that the public Terms of Service do not allow distribution of the API
keys in any form. To make this work for you, on behalf of Google Chrome
Team I am providing you with:

    -
    Official permission to include Google API keys in your packages and to
    distribute these packages.  The remainder of the Terms of Service for each
    API applies, but at this time you are not bound by the requirement to only
    access the APIs for personal and development use, and
    -
    Additional quota for each API in an effort to adequately support your
    users.

I recommend providing keys at build time, by passing additional flags to
build/gyp_chromium. In your package spec file, please make an easy to see
and obvious warning that the keys are only to be used for Slackware. Here
is an example text you can use:

# Set up Google API keys, see
http://www.chromium.org/developers/how-tos/api-keys .
# Note: these are for ... use ONLY. For your own distribution,
# please get your own set of keys.

And indeed, my chromium.SlackBuild script contains this warning ever since:

# This package is built with Alien's Google API keys for Chromium.
# The keys are contained in the file "chromium_apikeys".
# If you want to rebuild this package, you can use my API keys, however:
# you are not allowed to re-distribute these keys!!
# You can also obtain your own, see:
# http://www.chromium.org/developers/how-tos/api-keys

It effectively means that I alone am entitled to distribute the binary Chromium packages that I create. All derivative distros that use/repackage my binaries in any form are in violation of this statement.

On March 15, 2021, access to Google’s Cloud services will be revoked for my API key (and that of all the other 3rd parties providing any sort of Chromium-related binaries). It means that my Chromium will revert to a simple browser which will allow you to log in to your Google account and store your data (bookmarks/passwords/history) locally, but will not sync that data to and from your Google Cloud account. Also, location and translation services and probably several other services will stop working in the browser. Effectively, Google will muzzle any Chromium browser, forcing people to use their closed Chrome binaries instead if they want cross-platform access to their data, for instance using Chrome on Android and Chromium on Slackware.
Yes, Chrome is based on Chromium source code but there’s code added on top that we do not know of. Not everybody is comfortable with that. There was a good reason to start distributing a Chromium package for Slackware!

Now the one million dollar question:

Will you (users of my package) still use this muzzled version of Chromium? After all, Slackware-current (soon to become 15.0 stable) contains the Falkon browser as part of Plasma5, and Falkon is a Chromium browser core with a Qt5 graphical interface, and it does not use any Google API key either. Falkon will therefore offer the same or a similar feature set as a muzzled Chromium.
If you prefer not to use Chromium any longer after March 15, because this browser lost its value and unique distinguishing features for you, then I would like to know. Compiling Chromium is not trivial, it takes a lot of effort every major release to understand why it no longer compiles and then finding solutions for that, and then the compile time is horribly long as well. Any mistake or build failure sets me back a day easily. It means that I will stop providing Chromium packages in the event of diminishing interest. I have better things to do than fight with Google.

Please share your thoughts in the comments section below.

FYI:

There are two threads on Google Groups where the discussion is captured: the Chromium Embedders group: https://groups.google.com/a/chromium.org/g/embedder-dev/c/NXm7GIKTNTE – and most of it (but not all!) is duplicated in the Chromium Distro Packagers group: https://groups.google.com/a/chromium.org/g/chromium-packagers/c/SG6jnsP4pWM – I advise you to read the cases made by several distro packagers and especially pay close attention to how the Google representatives are answering our concerns. There’s more than a tad of arrogance and disrespect to be found there; so much that one poster pointed the Googlers taking part in the discussion (Director level, mind you; not the friendly developers and community managers who have been assisting us all these years) to the Chromium Code of Conduct. I am so pissed with this attitude that I forwarded the discussion to Larry Page in a hissy fit… not that I expect him to read and answer, but it had to be done. Remember the original Google Code of Conduct mantra “Don’t be evil”?

Fixing audio sync with ffmpeg

The ffmpeg developers and their libav antipodes are engaged in a healthy battle. Ever since there was a fall-out and the ffmpeg developer community split in two (forking ffmpeg into “libav”), ffmpeg itself has seen many releases which tend to incorporate the good stuff from the other team as well as their own advancements.

Last in the series is ffmpeg-0.9, for which I built Slackware packages (if you want to be able to create mp3 or aac sound, get the packages with MP3 and AAC encoding enabled instead).

The package will come in handy if you want to try what I am going to describe next.

Re-sync your movie’s audio.

You have probably seen the issue yourself too: for instance, I have a file “original.avi” with an audio track (or “stream“) which is slightly out of sync with the video… just enough to annoy the hell out of me. I need to delay the audio by 0.2 seconds to make the movie play back in sync. Luckily, ffmpeg can fix this for you very easily.

Let’s analyze the available streams in the original video (remember, UNIX starts counting at zero):

$ ffmpeg -i original.avi

Input #0, avi, from ‘original.avi’:

Stream #0.0: Video: mpeg4, yuv420p, 672×272 [PAR 1:1 DAR 42:17], 23.98 fps, 23.98 tbr, 23.98 tbn, 23.98 tbc
Stream #0.1: Audio: mp3, 48000 Hz, stereo, s16, 128 kb/s

You see that ffmpeg reports a “stream #0.0” which is the first stream in the first input file (right now we have only one input file but that will change later on) – the video. The second stream, called “stream #0.1“, is the audio track.

What I need is to give ffmpeg the video and audio as separate inputs, instruct it to delay the audio, and re-assemble the two streams into one resultant movie file. The parameters which define two inputs, where the second input will be delayed by N seconds, go like this:

$ ffmpeg -i inputfile1 -itsoffset N -i inputfile2

However, we do not have separate audio and video tracks; we just have the single original AVI file. Luckily, “inputfile1” and “inputfile2” can be the same file! We just need a way to tell ffmpeg which stream to use from which input. Look at how ffmpeg reports the content of input files if you list the same file twice:

$ ffmpeg -i original.avi -i original.avi

Input #0, avi, from ‘original.avi’:

Stream #0.0: Video: mpeg4, yuv420p, 672×272 [PAR 1:1 DAR 42:17], 23.98 fps, 23.98 tbr, 23.98 tbn, 23.98 tbc
Stream #0.1: Audio: mp3, 48000 Hz, stereo, s16, 128 kb/s

Input #1, avi, from ‘original.avi’:

Stream #1.0: Video: mpeg4, yuv420p, 672×272 [PAR 1:1 DAR 42:17], 23.98 fps, 23.98 tbr, 23.98 tbn, 23.98 tbc
Stream #1.1: Audio: mp3, 48000 Hz, stereo, s16, 128 kb/s

You see that the different streams in multiple input files are all numbered uniquely. We will need this defining quality in the example commands below.

Our remaining issue is that ffmpeg must be told that it has to use only the video stream of the first inputfile, and only the audio stream of the second inputfile. Ffmpeg will then have to do its magic and finally re-assemble the two streams into a resulting movie file. That resulting AVI file also expects video as the first stream, and audio as the second stream, just like our original AVI is laid out. Movie players will get confused otherwise.

Ffmpeg has the “map” parameter to specify this. I have looked long and hard at this parameter and its use… it is not easy for me to follow the logic. A bit like the git version control system, which does not fit into my brain conceptually, either. But perhaps I can finally explain it properly, to myself as well as to you, the reader.

Actually, we need two “map” parameters, one to map the input to the output video and another to map the input to the output audio. Map parameters are specified in the order the streams are going to be added to the output file. Remember, we want to delay the audio, so inherently the audio track must be taken from the second inputfile.

In the example below, the first “-map 0:0” parameter specifies how to create the first stream in the output. We need the first stream in the output to be video. The “0:0” value means “first_inputfile:first_stream“.

The second “-map 1:1” parameter specifies where ffmpeg should find the audio (which is going to be the second stream in the output). The value “1:1” specifies “second_inputfile:second_stream“.

$ ffmpeg -i original.avi -itsoffset 0.2 -i original.avi -map 0:0 -map 1:1

There is one more thing (even though it looks like ffmpeg is smart enough to do this without explicitly being told so). I do not want any re-encoding of the audio or video to happen, so I instruct ffmpeg to “copy” the audio and video streams without intermediate decoding and re-encoding. The “-acodec copy” and “-vcodec copy” parameters take care of this.

We now have the information to write an ffmpeg command line which takes the audio and video streams from the same file and re-assembles the movie with the audio stream delayed by 0.2 seconds. The resulting synchronized movie is called “synced.avi” and the conversion takes seconds, rather than minutes:

$ ffmpeg -i original.avi -itsoffset 0.2 -i original.avi -map 0:0 -map 1:1  -acodec copy -vcodec copy synced.avi
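If you need to do this more often, the recipe can be wrapped in a small shell function. This is just a sketch under the assumptions of this article (one video stream first, one audio stream second); the “echo” merely previews the command so you can inspect it before running the real thing:

```shell
# Sketch: assemble the resync command for a given file and audio offset.
# Assumes stream 0 is video and stream 1 is audio, as in the article.
resync_cmd() {
  local infile="$1" offset="$2" outfile="$3"
  echo ffmpeg -i "$infile" -itsoffset "$offset" -i "$infile" \
    -map 0:0 -map 1:1 -acodec copy -vcodec copy "$outfile"
}

# Preview the command; drop the 'echo' inside the function to execute it.
resync_cmd original.avi 0.2 synced.avi
# prints: ffmpeg -i original.avi -itsoffset 0.2 -i original.avi -map 0:0 -map 1:1 -acodec copy -vcodec copy synced.avi
```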

Cheers, Eric


© 2024 Alien Pastures
