The ffmpeg developers and their libav counterparts are engaged in a healthy rivalry. Ever since the fall-out that split the ffmpeg developer community in two (the fork calls itself “libav”), ffmpeg itself has seen many releases which tend to incorporate the good stuff from the other team as well as its own advancements.
Latest in the series is ffmpeg-0.9, for which I built Slackware packages (if you want to be able to create MP3 or AAC sound, get the packages with MP3 and AAC encoding enabled instead).
The package will come in handy if you want to try what I am going to describe next.
Re-sync your movie’s audio.
You probably have seen the issue yourself too: for instance, I have a file “original.avi” whose audio track (or “stream”) is slightly out of sync with the video… just enough to annoy the hell out of me. I need to delay the audio by 0.2 seconds to make the movie play back in sync. Luckily, ffmpeg can fix this for you very easily.
Let’s analyze the available streams in the original video (remember, UNIX starts counting at zero):
$ ffmpeg -i original.avi
Input #0, avi, from ‘original.avi’:
…
Stream #0.0: Video: mpeg4, yuv420p, 672×272 [PAR 1:1 DAR 42:17], 23.98 fps, 23.98 tbr, 23.98 tbn, 23.98 tbc
Stream #0.1: Audio: mp3, 48000 Hz, stereo, s16, 128 kb/s
You see that ffmpeg reports a “stream #0.0” which is the first stream in the first input file (right now we have only one input file but that will change later on) – the video. The second stream, called “stream #0.1“, is the audio track.
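As an aside, more recent ffmpeg releases also ship the ffprobe tool, which can print the same stream information in a more script-friendly form. A minimal sketch (assuming an ffprobe from the same build is on your PATH; older releases may lack these options):
$ ffprobe -v error -show_entries stream=index,codec_name,codec_type -of csv=p=0 original.avi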
What I need is to give ffmpeg the video and audio as separate inputs, instruct it to delay the audio, and re-assemble the two streams into one resulting movie file. The parameters which define two inputs, where the second input will be delayed by N seconds, look like this:
$ ffmpeg -i inputfile1 -itsoffset N -i inputfile2
However, we do not have separate audio and video tracks, we just have the single original AVI file. Luckily, “inputfile1” and “inputfile2” can be the same file! We just need a way to tell ffmpeg which stream to use from which input. Look at how ffmpeg reports the content of the input files when you list the same file twice:
$ ffmpeg -i original.avi -i original.avi
Input #0, avi, from ‘original.avi’:
…
Stream #0.0: Video: mpeg4, yuv420p, 672×272 [PAR 1:1 DAR 42:17], 23.98 fps, 23.98 tbr, 23.98 tbn, 23.98 tbc
Stream #0.1: Audio: mp3, 48000 Hz, stereo, s16, 128 kb/s
…
Input #1, avi, from ‘original.avi’:
…
Stream #1.0: Video: mpeg4, yuv420p, 672×272 [PAR 1:1 DAR 42:17], 23.98 fps, 23.98 tbr, 23.98 tbn, 23.98 tbc
Stream #1.1: Audio: mp3, 48000 Hz, stereo, s16, 128 kb/s
You see that the streams in multiple input files are all numbered uniquely. We will need this defining quality in the example commands below.
Our remaining issue is that ffmpeg must be told to use only the video stream of the first inputfile, and only the audio stream of the second inputfile. Ffmpeg will then have to do its magic and finally re-assemble the two streams into a resulting movie file. That resulting AVI file should also have video as the first stream and audio as the second stream, just like our original AVI is laid out. Movie players will get confused otherwise.
Ffmpeg has the “map” parameter to specify this. I have looked long and hard at this parameter and its use… it is not easy for me to follow the logic. A bit like the git version control system, which does not fit into my brain conceptually, either. But perhaps I can finally explain it properly, to myself as well as to you, the reader.
Actually, we need two “map” parameters, one to map the input to the output video and another to map the input to the output audio. Map parameters are specified in the order the streams are going to be added to the output file. Remember, we want to delay the audio, so inherently the audio track must be taken from the second inputfile.
In the example below, the first “-map 0:0” parameter specifies how to create the first stream in the output. We need the first stream in the output to be video. The “0:0” value means “first_inputfile:first_stream“.
The second “-map 1:1” parameter specifies where ffmpeg should find the audio (which is going to be the second stream in the output). The value “1:1” means “second_inputfile:second_stream“.
$ ffmpeg -i original.avi -itsoffset 0.2 -i original.avi -map 0:0 -map 1:1
There is one more thing (even though it looks like ffmpeg is smart enough to do this without being told explicitly). I do not want any re-encoding of the audio or video to happen, so I instruct ffmpeg to “copy” the audio and video streams without intermediate decoding and re-encoding. The “-acodec copy” and “-vcodec copy” parameters take care of this.
We now have the information to write an ffmpeg command line which takes the audio and video streams from the same file and re-assembles the movie with the audio stream delayed by 0.2 seconds. The resulting synchronized movie is called “synced.avi” and the conversion takes seconds rather than minutes:
$ ffmpeg -i original.avi -itsoffset 0.2 -i original.avi -map 0:0 -map 1:1 -acodec copy -vcodec copy synced.avi
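For reference: on newer ffmpeg releases the same command can be written with by-type stream specifiers and the shorter codec options. A sketch of the equivalent (assuming a reasonably recent ffmpeg; “0:v:0” means “first video stream of the first input” and “1:a:0” means “first audio stream of the second input”):
$ ffmpeg -i original.avi -itsoffset 0.2 -i original.avi -map 0:v:0 -map 1:a:0 -c:v copy -c:a copy synced.avi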
Cheers, Eric
Thanks Eric for this tutorial.
Your explanation of the mapping logic is quite clear to me (the reader) 🙂
Thank you Eric
Merry Christmas and happy new year !
Thanks for this nice tutorial!
Joyeux Noël et bonne année! (From France)
Exactly what I was looking for…
Thanks !
Thanks – worked a treat!
Thanks for this post! I have one de-synced video that I first tried to sync with VLC’s sync tool, and pinpointed the audio delay at 1.750 seconds. When I put this value into ffmpeg, the audio was lagging behind the video. Finally ffmpeg gave me good results with -itsoffset 1.0… I wonder why such a difference?
I will record the video as a series of images and the sound from the line-in, then get in front of the camera and bang two sticks together like the movie people do, then figure out the delay by noting the impact frame number and how far it is from the start. Then I know how to align the two in a final video. Just remember the audio travels at about 1130 feet per second to the microphone. Seems to work OK. BUT YouTube will still report the audio might not be in sync ????
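The arithmetic behind the two-sticks trick is simple enough to do in a shell; a sketch with made-up numbers (a clap visible in frame 143 of a 23.98 fps recording, and audible at 5.70 seconds into the audio track); the result is roughly the value to feed to -itsoffset:
$ echo "scale=2; 143/23.98 - 5.70" | bc
.26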
I have a file DL’d from Youtube, where the sync is OK for the first half, then drifts to about 250 ms out by the end. I tried making several files with different audio offsets with ffmpeg as above, then cutting out chunks with different offsets with MP4Box -split-chunk, then putting them together again into one corrected file. The chunks played OK (correct sync) in VLC and mplayer, but not in Totem (gstreamer-based).
MP4Box -cat reassembled the chunks but the sound was lost from all chunks after the first. Tried using Openshot instead to reassemble the file but the offsets were lost i.e. the out-of-sync returned.
Just have to keep trying I guess.
Hi, I’m on -current and use your ffmpeg package (2.4.3) and your KDE 4.14.3, but I have a big problem with kdenlive: I’ve built v0.98 from sources but it can’t open .avi or .mpeg files because it requests all the libraries as xxx.so.55 and I have them all as xxx.so.56.
Then I made symlinks using 55 instead of 56, but when I run the program it gives me this error:
mlt_repository_init: failed to dlopen /usr/lib/mlt/libmltavformat.so
(/usr/lib/libavdevice.so.55: version `LIBAVDEVICE_55' not found (required by /usr/lib/mlt/libmltavformat.so))
I’ve tried to build mlt but it’s the same.
Have you any suggestion to me?
Thanks in advance, and excuse me if this isn’t the correct place to put this question.
Hi Fabio
If you are running Slackware-current then you should switch to the KDE 4.14.3 that is now part of Slackware-current. I removed all those packages from my ‘ktown’ repository except for phonon-vlc.
Your error about “libraryname.so.55” means that you compiled your sources in the presence of an older ffmpeg and then upgraded ffmpeg. Recompiling is probably the only solution.
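For the record, a quick way to check which ffmpeg library versions a plugin was actually linked against is ldd; a small sketch using the path from the error message above:
$ ldd /usr/lib/mlt/libmltavformat.so | grep -i libav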
Thank you very much for the reply. OK, I’m going to try it this way. But I have another little question: if I switch my system to the new KDE 5 (with your packages of course), is it possible to compile kdenlive for that platform?
Hi Fabio. I do not see why that would not work. Just try.
need help with ffmpeg 3.0.
I have a sync issue with .mxf
RAN: ffmpeg -i elrs0322.mxf
Input#0
stream 0:0
stream 0:2
RAN: ffmpeg -i elrs0311.mxf -itsoffset 1 -i input2file2
–permission error on input2file2
RA: ffmpeg -i elrs0311.mxf -i elrs0311.mxf
no input file.
could you email me?
RAN: ffmpeg -i elrs0311.mxf -itsoffset 1 -i elrs0311.mxf
RESULTS
permission error folder
john.morrissey89@yahoo.com
thanks
Are you for real? Am I your slave? Private free assistance? What do you think happens now? Silence.
Much thanks! I used “-cv copy” instead of “-acodec copy -vcodec copy”, as the latter was spitting out an error. Must be my version of FFMpeg.
You know, perhaps that “-vcodec” option has been deprecated in your ffmpeg binary.
If you look at the manual page at http://ffmpeg.org/ffmpeg-all.html#Stream-copy then you see that “-vcodec copy” is an alias for “-codec:v copy”, and that last one can be shortened to “-c:v copy”.
The “-codec:v” syntax would seem to be the preferable option anyway, since it uses the same stream-specifier syntax that other options like stream mapping also use.
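Applied to the command from the article, that shorter spelling would look something like this (a sketch; the result should be identical to the “-acodec copy -vcodec copy” version):
$ ffmpeg -i original.avi -itsoffset 0.2 -i original.avi -map 0:0 -map 1:1 -c:a copy -c:v copy synced.avi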
Hi
Whenever you have time, can you please add to ffmpeg the flag to support pulse? Right now it looks like it is built without it:
$ ffmpeg -version 2>&1 | grep -i pulse
$
$ ffmpeg -video_size 1024x768 -framerate 25 -f x11grab -i :0.0+100,200 -f pulse -ac 2 -i default output.mkv
(…)
Unknown input format: ‘pulse’
Not that I like pulse, but trying to record something that uses pulse is hard 🙂
thanks for all your packages
magg, so apparently “-f pulse” generates that error because, by default, ffmpeg is compiled without libpulse support, and adding “--enable-libpulse” to the configure options is required.
I will take care of that with the next series of ffmpeg updates. Thanks for showing this.
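For anyone rebuilding the package themselves, the change boils down to one extra configure switch; a minimal sketch (the remaining options depend on whatever your build script already passes):
$ ./configure --enable-libpulse ...
$ ffmpeg -version 2>&1 | grep -i pulse   # after rebuilding, the configuration line should list --enable-libpulse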
I appreciate still seeing help articles like this long after they were authored. Kudos for keeping this online. Most of the search results I found referenced offsetting by a full second or more. This is the first result I found with an example of a fractional delay. Thanks again.
Real magic, this code.
On one desynced movie the value of 0.2 worked swell, on another the value of 1.0.
Thank you Alien Pastures.
Dear Eric,
Thank you for this post. I have an mp4 with video, and a wav audio file. I know that I have a 433 millisecond delay between the video and the audio. I tried this with VLC, but it did not save the synced audio and video.
Then I tried with ffmpeg with the general structure as described in your ‘tutorial’.
ffmpeg -i inputfile1 -itsoffset N -i inputfile2 -map 0:0 -map 1:1 -acodec copy -vcodec copy synced.mp4
That is, with the video in the mp4 and the audio in the wav, I got the following:
ffmpeg -i mymovie.mp4 -itsoffset 0.433 -i mysound.wav -map 0:0 -map 1:1 -acodec copy -vcodec copy synced.mp4
Note: 0.433 is 433 milliseconds.
BUT I GET AN ERROR:
Unrecognized option ‘itoffset’.
Error splitting the argument list: Option not found
Any idea?
Thank you,
Anthony of Sydney Australia
I know you posted this 11 years ago, but I’m hitting a weird wall where the output file has no audio at all. I’m not sure how to start troubleshooting and would welcome any suggestions!
ffmpeg -i "input.mp4" -itsoffset 27.3 -i "input.mp4" -map 0:0 -map 1:1 -acodec copy -vcodec copy "output.mp4"
Amanda, have you examined your input file? Is the video track actually the first (index ‘0’) and audio the second (index ‘1’)?
Dear Eric,
I tried to do as much as I could to figure out the problem:
First I tried using an mp3 version of the file, mysound.mp3 (144 kbps, 44.1 kHz):
ffmpeg -i mymovie.mp4 -itsoffset 0.433 -i mysound.mp3 -map 0:0 -map 1:1 -acodec copy -vcodec copy synced.mp4
I get the error:
Stream map ‘1:1’ matches no streams.
To ignore this, add a trailing ‘?’ to the map.
Second, I added a trailing ‘?’ to the map.
ffmpeg -i mymovie.mp4 -itsoffset 0.433 -i mysound.mp3 -map 0:0 -map 1:1? -acodec copy -vcodec copy synced.mp4
NOW – when I play back synced.mp4, there’s video BUT no sound.
Thank you in advance,
Anthony of Sydney Australia
No idea Anthony unless you misspelled ‘itoffset’ also in the actual command. It is ‘itsoffset’.
Anthony check what I wrote about the map parameter and also run “ffmpeg -i mymovie.mp4 -i mysound.mp3” to see why the “-map 1:1” is wrong for a file which contains only one stream.
Dear Eric,
Thank you very much for alerting me to the meaning of map fileno:streamno.
It works now with the following structure:
I replaced map 1:1 with map 1:0, that is, the audio file is the second file with its single stream.
ffmpeg -i mymovie.mp4 -itsoffset 0.433 -i mysound.mp3 -map 0:0 -map 1:0 -acodec copy -vcodec copy synced.mp4
The syncing is perfect.
Many thanks
Anthony from Sydney Australia
Dear Eric,
I don’t know if this question is outside the scope of the tutorial. The abovementioned file synced.mp4 is perfectly synced when I play it on my PC.
However, when I play the same file synced.mp4 from the TV’s USB port or from a burnt DVD, the audio is out of sync with the video.
Do I have to convert the file synced.mp4 to something else?
Thank you,
Anthony of Sydney Australia
Anthony, playback of audio/video means that the playback device needs to decode both audio and video from their compressed formats to the uncompressed bitstream. That requires processing power. Some compression formats require more CPU power than others. In your case it looks like the television (which will certainly have a lower-spec CPU) has issues keeping up with the decoding.
Many older PCs can’t cope all that well with modern MP4 video but can play back old-fashioned AVI files with more ease. Try re-encoding that MP4 to an AVI file.
Try something like:
ffmpeg -i myvideo.mp4 -vcodec mpeg4 -acodec libmp3lame -qscale:v 2 -qscale:a 5 myvideo.avi
The ‘qscale’ parameters control the quality (and thus the bitrate) of the resulting AVI file. Try setting the “-qscale:v” value to 3 or 4 if your television is still having difficulty with the setting of “2” – lower value means higher quality video.
I’m Bac. I’m a newbie to ffmpeg. I’m having trouble syncing video and audio using -itsoffset, concretely:
ffmpeg -i output-ask.mp4 -itsoffset 0.5 -i output-ask.mp4 -map 0:v -map 1:a -vcodec copy -acodec copy -y -shortest output_f.mp4, but unfortunately it doesn’t work at all. Could you kindly help?
p.s. the video is shared on google drive at https://drive.google.com/drive/folders/1IxgSixnmIe79T_NjJgzLeaBgRHdfdg8C?usp=sharing
Bac, think a bit longer about the bit “-map 0:v -map 1:a ” and compare it to the examples I gave.
Hello Eric,
I’m Bac. I’m a newbie to ffmpeg. I’m having trouble delaying the audio by 30 seconds relative to the video, using -itsoffset, concretely:
ffmpeg -i output-ask.mp4 -itsoffset 0.5 -i output-ask.mp4 -map 0:v -map 1:a -vcodec copy -acodec copy -y -shortest output_f.mp4, but unfortunately it doesn’t work at all. Could you kindly help?
p.s. the video (~1 MB) is shared on google drive at https://drive.google.com/drive/folders/1IxgSixnmIe79T_NjJgzLeaBgRHdfdg8C?usp=sharing
Below is additional info:
– the command I used, but it had no effect:
ffmpeg -i output-ask.mp4 -itsoffset 0.5 -i output-ask.mp4 -map 0:v -map 1:a -vcodec copy -acodec copy -y -shortest output.mp4
– the ffmpeg version:
ffmpeg-20180828-26dc763-win64-static.zip
ffmpeg -version:
ffmpeg version N-91712-g26dc763245 Copyright (c) 2000-2018 the FFmpeg developers
built with gcc 8.2.1 (GCC) 20180813
configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth
libavutil 56. 19.100 / 56. 19.100
libavcodec 58. 27.100 / 58. 27.100
libavformat 58. 17.103 / 58. 17.103
libavdevice 58. 4.101 / 58. 4.101
libavfilter 7. 26.100 / 7. 26.100
libswscale 5. 2.100 / 5. 2.100
libswresample 3. 2.100 / 3. 2.100
libpostproc 55. 2.100 / 55. 2.100
Thanks indeed alienbob. I’ve changed to -map 0:1 -map 1:1, as below, but the audio is still ahead of the video. Would you kindly shed some light?
ffmpeg -i output-ask.mp4 -itsoffset 0.5 -i output-ask.mp4 -map 0:1 -map 1:1 -vcodec copy -acodec copy -y -shortest output.mp4
p.s. dear admin, I mistakenly posted the same question; could you please delete the second one as I can’t. Thanks, and sorry for this.
Bac, using “-itsoffset 0.5” will delay the audio by half a second instead of half a minute (30 seconds). Try “-itsoffset 30”.
Sincerely thanks alienbob. I observed no effect after several attempts:
– 30 seconds or even more
– put -itsoffset before the video, instead of audio: ffmpeg -itsoffset 150 -i test.mp4 -i test.mp4 -map 0:0 -map 1:1 -vcodec copy -acodec copy -y output_f.mp4
– another 2 different videos (mp4)
I’m hitting a wall, help is highly appreciated. Thanks
Good morning
In some videos that I downloaded, at the end of the download this warning appeared:
“WARNING 24171; malformed AAC bitstream detected. Install ffmpeg or avconv to fix this automatically.”
but since I put ffmpeg in the same folder as youtube-dl, it fixes it for me, saying:
“[ffmpeg] Fixing malformed AAC bitstream in …”
My questions are: can this fix be done for movies already on my PC? Is there an MS-DOS command line for this?
Thanks and best regards
Marco, your question is unrelated to the topic of this blog post.
I cannot tell you how exactly to fix your movies, because I do not use MS-DOS.
But you should be able to use ffmpeg in a DOS box to fix any movie that is already on your computer. You just need to find the correct ffmpeg command-line parameters.
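Youtube-dl reportedly fixes this by stream-copying the file through the “aac_adtstoasc” bitstream filter, so something along these lines may work for a file already on disk (a sketch with a hypothetical filename, untested here):
ffmpeg -i downloaded.m4a -c copy -bsf:a aac_adtstoasc fixed.m4a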
Trying to fix audio sync for an automated video recording workflow. -itsoffset is working great, but the effect is only seen when playing back the files in a player.
If I bring the output files into an editing program like Premiere, the audio/video tracks are unchanged.
Is -itsoffset just modifying the playback timestamps for the audio/video tracks? These files are headed to video editors downstream in the workflow, so I’m trying to get a fix that changes the actual file, not one that just modifies metadata.
Thanks in advance for your thoughts, and this great post.
Hi Ben,
Interesting observation.
Looks like this is an issue which is specific to MP4 containers, as there’s an old bug which describes the issue you are experiencing: https://trac.ffmpeg.org/ticket/1349
Are you indeed working with MP4 files?
Yes, working with .mp4 source files.
I had a similar issue last year when working with files created by an Android app – that time it was caused by wildly variable framerates that averaged out to 24fps over enough time, but it had the same symptom as here. Bring the mp4 into After Effects for post-processing, and suddenly the audio is out of sync. The solution that time was to transcode to a mezzanine codec first (ProRes), which magically read the PTS of the mp4 and gave a synced file that could be used in After Effects.
My issue this time is delay caused by the physical workflow in the recording setup (the hardware is adding a couple of frames of video delay, and the encoder is adding a couple more, adding up to about 6 frames of video delay).
It’s looking to me like there isn’t a one-line quick-fix here, and that I’ll need to de-mux the A and V streams, delay the audio, and mux it all back together.
I think I have this sorted now.
ffmpeg -i source.mp4 -i source.mp4 -filter_complex "adelay=150|150" -map 0:0 -map 1:1 -c:v copy -c:a aac output-0150.mp4
Happily the video stream can just be copied over – re-encoding it would have been a big performance hit. So far in some limited tests this is working well and Premiere is showing correctly shifted audio.
I’ll note that this solution probably would not work if the audio were behind the video (needing the video to be delayed instead). That would probably require re-encoding the video, both to accomplish the trim (probably using an -ss seek?) and to have that seek work on a sub-GOP duration.
Thanks again for this great post, and letting me think through my problem in a comment thread!
Hi Ben,
It’s good to read that you were able to come up with a solution *and* document it here in your comment.
That enriches the article and will likely help other people in future. Thanks.
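For the opposite situation (audio lagging behind the video), one approach that is sometimes used instead of re-encoding is to seek into the audio input, i.e. drop its first fraction of a second; a sketch along the lines of Ben’s command (untested here, and input seeking with stream copy cuts at packet boundaries, so it is only approximately accurate):
$ ffmpeg -i source.mp4 -ss 0.25 -i source.mp4 -map 0:v:0 -map 1:a:0 -c copy output.mp4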
Utmost helpful! and me being grateful.
Using Openshot, this is a known bore.
Alas, somehow and contrary to the man page, ffmpeg doesn’t seem to want to accept negative numbers for itsoffset. At least here it doesn’t seem to. So the delay for the video stream needs to come (almost illogically early) in front of the first input:
ffmpeg -itsoffset 1.0 -i infile.mp4 -i infile.mp4 -map 0:0 -map 1:1 -acodec copy -vcodec copy synced_file.mp4
for a delay of 1.0 seconds.
Thanks for your explanation Uwe, highly appreciated.
Glad I found this valuable information. Just thought I would add that this still works with ffmpeg on my Mint 20.1 in 2021.
Very helpful. Thanks!
Great article – one question: I want to test the offset by exporting only a little video snippet of e.g. 10 seconds length (instead of the whole movie) – how would I do that?
For example: extract 10 seconds starting at position 00:20:00 – thanks!
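One way to do that is to cut a stream-copied snippet first and run the offset experiment on that; a sketch (assuming cutting on keyframes is accurate enough for a quick test):
$ ffmpeg -ss 00:20:00 -i original.avi -t 10 -c copy snippet.avi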