Fixing audio sync with ffmpeg

The ffmpeg developers and their libav antipodes are engaged in a healthy battle. Ever since the falling-out that split the ffmpeg developer community in two (forking ffmpeg into “libav”), ffmpeg itself has seen many releases which incorporate the good stuff from the other team as well as its own advancements.

Latest in the series is ffmpeg-0.9, for which I built Slackware packages (if you want to be able to create MP3 or AAC sound, get the packages with MP3 and AAC encoding enabled instead).

The package will come in handy if you want to try what I am going to describe next.

Re-sync your movie’s audio.

You have probably seen the issue yourself: for instance, I have a file “original.avi” with an audio track (or “stream”) which is slightly out of sync with the video… just enough to annoy the hell out of me. I need to delay the audio by 0.2 seconds to make the movie play back in sync. Luckily, ffmpeg can fix this for you very easily.

Let’s analyze the available streams in the original video (remember, UNIX starts counting at zero):

$ ffmpeg -i original.avi

Input #0, avi, from ‘original.avi’:

Stream #0.0: Video: mpeg4, yuv420p, 672×272 [PAR 1:1 DAR 42:17], 23.98 fps, 23.98 tbr, 23.98 tbn, 23.98 tbc
Stream #0.1: Audio: mp3, 48000 Hz, stereo, s16, 128 kb/s

You see that ffmpeg reports a “stream #0.0”, which is the first stream in the first input file (right now we have only one input file, but that will change later on) – the video. The second stream, called “stream #0.1”, is the audio track.
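If you prefer, the ffprobe tool that ships with ffmpeg should give you the same stream overview without starting a conversion (assuming your ffmpeg build includes ffprobe):

$ ffprobe -show_streams original.avi

It prints a [STREAM] block per stream, where the “index” and “codec_type” fields tell you which index holds the video and which holds the audio.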

What I need is to give ffmpeg the video and audio as separate inputs, instruct it to delay the audio, and re-assemble the two streams into one resulting movie file. The parameters which define two inputs, of which the second will be delayed by N seconds, look like this:

$ ffmpeg -i inputfile1 -itsoffset N -i inputfile2

However, we do not have separate audio and video tracks; we just have the single original AVI file. Luckily, “inputfile1” and “inputfile2” can be the same file! We just need a way to tell ffmpeg which stream to use from which input. Look at how ffmpeg reports the content of the input files if you list the same file twice:

$ ffmpeg -i original.avi -i original.avi

Input #0, avi, from ‘original.avi’:

Stream #0.0: Video: mpeg4, yuv420p, 672×272 [PAR 1:1 DAR 42:17], 23.98 fps, 23.98 tbr, 23.98 tbn, 23.98 tbc
Stream #0.1: Audio: mp3, 48000 Hz, stereo, s16, 128 kb/s

Input #1, avi, from ‘original.avi’:

Stream #1.0: Video: mpeg4, yuv420p, 672×272 [PAR 1:1 DAR 42:17], 23.98 fps, 23.98 tbr, 23.98 tbn, 23.98 tbc
Stream #1.1: Audio: mp3, 48000 Hz, stereo, s16, 128 kb/s

You see that the streams in multiple input files are all numbered uniquely. We will need this defining quality: these input and stream numbers show up in my example commands below.

Our remaining issue is that ffmpeg must be told to use only the video stream of the first input file, and only the audio stream of the second input file. Ffmpeg will then have to do its magic and re-assemble the two streams into the resulting movie file. That resulting AVI file should also have video as the first stream and audio as the second stream, just like our original AVI is laid out; movie players will get confused otherwise.

Ffmpeg has the “map” parameter to specify this. I have looked long and hard at this parameter and its use… it is not easy for me to follow the logic. A bit like the git version control system, which does not fit into my brain conceptually, either. But perhaps I can finally explain it properly, to myself as well as to you, the reader.

Actually, we need two “map” parameters, one to map the input to the output video and another to map the input to the output audio. Map parameters are specified in the order the streams are going to be added to the output file. Remember, we want to delay the audio, so inherently the audio track must be taken from the second inputfile.

In the example below, the first “-map 0:0” parameter specifies how to create the first stream in the output. We need the first stream in the output to be video. The “0:0” value means “first_inputfile:first_stream”.

The second “-map 1:1” parameter specifies where ffmpeg should find the audio (which is going to be the second stream in the output). The value “1:1” means “second_inputfile:second_stream”.

$ ffmpeg -i original.avi -itsoffset 0.2 -i original.avi -map 0:0 -map 1:1

There is one more thing (even though it looks like ffmpeg is smart enough to do this without being told explicitly). I do not want any re-encoding of the audio or video to happen, so I instruct ffmpeg to “copy” the audio and video streams without intermediate decoding and re-encoding. The “-acodec copy” and “-vcodec copy” parameters take care of this.

We now have all the information to write an ffmpeg command line which takes the audio and video streams from the same file and re-assembles the movie with the audio stream delayed by 0.2 seconds. The resulting synchronized movie is called “synced.avi” and the conversion takes seconds rather than minutes:

$ ffmpeg -i original.avi -itsoffset 0.2 -i original.avi -map 0:0 -map 1:1 -acodec copy -vcodec copy synced.avi
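As a rough equivalent for newer ffmpeg releases (a sketch, assuming a recent ffmpeg where stream specifiers can include the stream type and the per-stream codec options use the shorter “-c” syntax), the same job should look like this:

$ ffmpeg -i original.avi -itsoffset 0.2 -i original.avi -map 0:v:0 -map 1:a:0 -c:v copy -c:a copy synced.avi

Here “0:v:0” means “the first video stream of the first input” and “1:a:0” means “the first audio stream of the second input”, so you do not even have to remember which stream index holds the audio.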

Cheers, Eric

Comments

Comment from gegechris99
Posted: December 14, 2011 at 14:24

Thanks Eric for this tutorial.

Your explanation of the mapping logic is quite clear to me (the reader) :)

Pingback from Links 16/12/2011: Red Hat Upgraded, Android Everywhere | Techrights
Posted: December 16, 2011 at 11:27

[...] Fixing audio sync with ffmpeg [...]

Comment from Lawrence McDonald
Posted: December 17, 2011 at 13:42

Thank you Eric
Merry Christmas and happy new year !

Comment from pcelka
Posted: December 19, 2011 at 23:06

Thanks for this nice tutorial!

Merry Christmas and happy new year! (From France)

Pingback from Herk's Lab » Video Capture for Linux – Take One
Posted: October 29, 2012 at 02:30

[...] here for the explanation of syncing your audio with [...]

Comment from Clm
Posted: November 28, 2012 at 20:44

Exactly what I was looking for…
Thanks !

Comment from Lord Stranger
Posted: January 30, 2013 at 17:10

Thanks – worked a treat!

Comment from kaaposc
Posted: April 19, 2013 at 10:54

Thanks for this post! I have one de-synced video that I first tried to sync with VLC’s sync tool, and pinpointed the audio delay at 1.750 seconds. When I put this value into ffmpeg, the audio was lagging behind the video. Finally ffmpeg gave me good results with -itsoffset 1.0… I wonder why such a difference?

Pingback from avi file, audio out of sync
Posted: July 4, 2013 at 00:59

[...] a permanent fix. Last night I was trawling around for solutions. I came across this using ffmpeg: http://alien.slackbook.org/blog/fixi…c-with-ffmpeg/ The only problem with this is accurately defining the time at which audio lags. Now I need to dig [...]

Comment from gmv
Posted: April 28, 2014 at 06:16

I will record the video as a series of images and sound from the line-in then get in front of the camera and bang two sticks together like the movie people do then figure the delay by seeing the impact frame number and how far it is from the start. Then I know how to align the two in a final video. Just remember the audio travels about 1130FPS to the microphone. Seems to work OK. BUT you-tube will still report audio might not be in sync ????
