Compare commits


1 commit

Author: Michael Niedermayer
Commit: ecc5e42d92 — Update for 2.2-rc1
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
Date: 2014-03-01 04:03:08 +01:00
3038 changed files with 103255 additions and 210236 deletions
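For reference, a comparison like the one rendered on this page can be reproduced locally with git (a sketch; `<base>` stands for the other end of the compare range, which this capture truncates to `..` and which is therefore not recoverable here):

```shell
# Clone the official FFmpeg repository.
git clone https://git.ffmpeg.org/ffmpeg.git
cd ffmpeg

# ecc5e42d92 is the target commit shown in the header above;
# <base> is the compare base ref, unknown from this capture.
git diff --stat <base> ecc5e42d92   # per-file change summary
git show ecc5e42d92                 # the commit message and its own diff
```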

.gitattributes vendored (1 line changed)

@@ -1 +0,0 @@
-*.pnm -diff -text

.gitignore vendored (9 lines changed)

@@ -15,7 +15,6 @@
 *.pdb
 *.so
 *.so.*
-*.swp
 *.ver
 *-example
 *-test
@@ -37,9 +36,8 @@
 /doc/avoptions_format.texi
 /doc/doxy/html/
 /doc/examples/avio_reading
-/doc/examples/decoding_encoding
+/doc/examples/avcodec
 /doc/examples/demuxing_decoding
-/doc/examples/extract_mvs
 /doc/examples/filter_audio
 /doc/examples/filtering_audio
 /doc/examples/filtering_video
@@ -50,7 +48,6 @@
 /doc/examples/resampling_audio
 /doc/examples/scaling_video
 /doc/examples/transcode_aac
-/doc/examples/transcoding
 /doc/fate.txt
 /doc/print_options
 /lcov/
@@ -62,9 +59,7 @@
 /tests/audiogen
 /tests/base64
 /tests/data/
-/tests/pixfmts.mak
 /tests/rotozoom
-/tests/test_copy.ffmeta
 /tests/tiny_psnr
 /tests/tiny_ssim
 /tests/videogen
@@ -83,8 +78,6 @@
 /tools/pktdumper
 /tools/probetest
 /tools/qt-faststart
-/tools/sidxindex
 /tools/trasher
 /tools/seek_print
-/tools/uncoded_frame
 /tools/zmqsend

Changelog (117 lines changed)

@@ -1,113 +1,7 @@
 Entries are sorted chronologically from oldest to youngest within each release,
 releases are sorted from youngest to oldest.
-version 2.6:
-- nvenc encoder
-- 10bit spp filter
-- colorlevels filter
-- RIFX format for *.wav files
-- RTP/mpegts muxer
-- non-continuous cache protocol support
-- tblend filter
-- cropdetect support for non 8bpp, absolute (if limit >= 1) and relative (if limit < 1.0) threshold
-- Camellia symmetric block cipher
-- OpenH264 encoder wrapper
-- VOC seeking support
-- Closed caption decoder
-- fspp, uspp, pp7 MPlayer postprocessing filters ported to native filters
-- showpalette filter
-- Twofish symmetric block cipher
-- Support DNx100 (960x720@8)
-- eq2 filter ported from libmpcodecs as eq filter
-- removed libmpcodecs
-- Changed default DNxHD colour range in QuickTime .mov derivatives to mpeg range
-- ported softpulldown filter from libmpcodecs as repeatfields filter
-- dcshift filter
-- RTP depacketizer for loss tolerant payload format for MP3 audio (RFC 5219)
-- RTP depacketizer for AC3 payload format (RFC 4184)
-- palettegen and paletteuse filters
-- VP9 RTP payload format (draft 0) experimental depacketizer
-- RTP depacketizer for DV (RFC 6469)
-- DXVA2-accelerated HEVC decoding
-- AAC ELD 480 decoding
-- Intel QSV-accelerated H.264 decoding
-- DSS SP decoder and DSS demuxer
-- Fix stsd atom corruption in DNxHD QuickTimes
-- Canopus HQX decoder
-- RTP depacketization of T.140 text (RFC 4103)
-- VP9 RTP payload format (draft 0) experimental depacketizer
-- Port MIPS optimizations to 64-bit
-version 2.5:
-- HEVC/H.265 RTP payload format (draft v6) packetizer
-- SUP/PGS subtitle demuxer
-- ffprobe -show_pixel_formats option
-- CAST128 symmetric block cipher, ECB mode
-- STL subtitle demuxer and decoder
-- libutvideo YUV 4:2:2 10bit support
-- XCB-based screen-grabber
-- UDP-Lite support (RFC 3828)
-- xBR scaling filter
-- AVFoundation screen capturing support
-- ffserver supports codec private options
-- creating DASH compatible fragmented MP4, MPEG-DASH segmenting muxer
-- WebP muxer with animated WebP support
-- zygoaudio decoding support
-- APNG demuxer
-- postproc visualization support
-version 2.4:
-- Icecast protocol
-- ported lenscorrection filter from frei0r filter
-- large optimizations in dctdnoiz to make it usable
-- ICY metadata are now requested by default with the HTTP protocol
-- support for using metadata in stream specifiers in fftools
-- LZMA compression support in TIFF decoder
-- H.261 RTP payload format (RFC 4587) depacketizer and experimental packetizer
-- HEVC/H.265 RTP payload format (draft v6) depacketizer
-- added codecview filter to visualize information exported by some codecs
-- Matroska 3D support through side data
-- HTML generation using texi2html is deprecated in favor of makeinfo/texi2any
-- silenceremove filter
-version 2.3:
-- AC3 fixed-point decoding
-- shuffleplanes filter
-- subfile protocol
-- Phantom Cine demuxer
-- replaygain data export
-- VP7 video decoder
-- Alias PIX image encoder and decoder
-- Improvements to the BRender PIX image decoder
-- Improvements to the XBM decoder
-- QTKit input device
-- improvements to OpenEXR image decoder
-- support decoding 16-bit RLE SGI images
-- GDI screen grabbing for Windows
-- alternative rendition support for HTTP Live Streaming
-- AVFoundation input device
-- Direct Stream Digital (DSD) decoder
-- Magic Lantern Video (MLV) demuxer
-- On2 AVC (Audio for Video) decoder
-- support for decoding through DXVA2 in ffmpeg
-- libbs2b-based stereo-to-binaural audio filter
-- libx264 reference frames count limiting depending on level
-- native Opus decoder
-- display matrix export and rotation API
-- WebVTT encoder
-- showcqt multimedia filter
-- zoompan filter
-- signalstats filter
-- hqx filter (hq2x, hq3x, hq4x)
-- flanger filter
-- Image format auto-detection
-- LRC demuxer and muxer
-- Samba protocol (via libsmbclient)
-- WebM DASH Manifest muxer
-- libfribidi support in drawtext
+version <next>
 version 2.2:
@@ -137,8 +31,6 @@ version 2.2:
 - Support DNx444
 - libx265 encoder
 - dejudder filter
-- Autodetect VDA like all other hardware accelerations
-- aliases and defaults for Ogg subtypes (opus, spx)
 version 2.1:
@@ -326,7 +218,7 @@ version 1.1:
 - JSON captions for TED talks decoding support
 - SOX Resampler support in libswresample
 - aselect filter
-- SGI RLE 8-bit / Silicon Graphics RLE 8-bit video decoder
+- SGI RLE 8-bit decoder
 - Silicon Graphics Motion Video Compressor 1 & 2 decoder
 - Silicon Graphics Movie demuxer
 - apad filter
@@ -370,9 +262,7 @@ version 1.0:
 - RTMPE protocol support
 - RTMPTE protocol support
 - showwaves and showspectrum filter
-- LucasArts SMUSH SANM playback support
-- LucasArts SMUSH VIMA audio decoder (ADPCM)
-- LucasArts SMUSH demuxer
+- LucasArts SMUSH playback support
 - SAMI, RealText and SubViewer demuxers and decoders
 - Heart Of Darkness PAF playback support
 - iec61883 device
@@ -496,7 +386,6 @@ version 0.10:
 - ffwavesynth decoder
 - aviocat tool
 - ffeval tool
-- support encoding and decoding 4-channel SGI images
 version 0.9:

INSTALL (new file, 15 lines)

@@ -0,0 +1,15 @@
+1) Type './configure' to create the configuration. A list of configure
+   options is printed by running 'configure --help'.
+   'configure' can be launched from a directory different from the FFmpeg
+   sources to build the objects out of tree. To do this, use an absolute
+   path when launching 'configure', e.g. '/ffmpegdir/ffmpeg/configure'.
+2) Then type 'make' to build FFmpeg. GNU Make 3.81 or later is required.
+3) Type 'make install' to install all binaries and libraries you built.
+NOTICE
+- Non system dependencies (e.g. libx264, libvpx) are disabled by default.
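The out-of-tree build that step 1 describes can be sketched as follows (the directory names are illustrative):

```shell
# Build FFmpeg in a directory separate from the source tree.
mkdir ffmpeg-build && cd ffmpeg-build

# Launch configure via an absolute path into the source tree.
/ffmpegdir/ffmpeg/configure --prefix=/usr/local

make          # GNU Make 3.81 or later is required
make install  # installs the binaries and libraries built above
```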


@@ -1,17 +0,0 @@
-#Installing FFmpeg:
-1. Type `./configure` to create the configuration. A list of configure
-   options is printed by running `configure --help`.
-   `configure` can be launched from a directory different from the FFmpeg
-   sources to build the objects out of tree. To do this, use an absolute
-   path when launching `configure`, e.g. `/ffmpegdir/ffmpeg/configure`.
-2. Then type `make` to build FFmpeg. GNU Make 3.81 or later is required.
-3. Type `make install` to install all binaries and libraries you built.
-NOTICE
-------
-- Non system dependencies (e.g. libx264, libvpx) are disabled by default.


@@ -1,73 +1,67 @@
-#FFmpeg:
+FFmpeg:
 Most files in FFmpeg are under the GNU Lesser General Public License version 2.1
-or later (LGPL v2.1+). Read the file `COPYING.LGPLv2.1` for details. Some other
+or later (LGPL v2.1+). Read the file COPYING.LGPLv2.1 for details. Some other
 files have MIT/X11/BSD-style licenses. In combination the LGPL v2.1+ applies to
 FFmpeg.
 Some optional parts of FFmpeg are licensed under the GNU General Public License
-version 2 or later (GPL v2+). See the file `COPYING.GPLv2` for details. None of
-these parts are used by default, you have to explicitly pass `--enable-gpl` to
+version 2 or later (GPL v2+). See the file COPYING.GPLv2 for details. None of
+these parts are used by default, you have to explicitly pass --enable-gpl to
 configure to activate them. In this case, FFmpeg's license changes to GPL v2+.
-Specifically, the GPL parts of FFmpeg are:
+Specifically, the GPL parts of FFmpeg are
 - libpostproc
-- libmpcodecs
 - optional x86 optimizations in the files
-  - `libavcodec/x86/flac_dsp_gpl.asm`
-  - `libavcodec/x86/idct_mmx.c`
+  libavcodec/x86/idct_mmx.c
 - libutvideo encoding/decoding wrappers in
-  `libavcodec/libutvideo*.cpp`
+  libavcodec/libutvideo*.cpp
-- the X11 grabber in `libavdevice/x11grab.c`
+- the X11 grabber in libavdevice/x11grab.c
 - the swresample test app in
-  `libswresample/swresample-test.c`
+  libswresample/swresample-test.c
-- the `texi2pod.pl` tool
+- the texi2pod.pl tool
 - the following filters in libavfilter:
-  - `f_ebur128.c`
-  - `vf_blackframe.c`
-  - `vf_boxblur.c`
-  - `vf_colormatrix.c`
-  - `vf_cropdetect.c`
-  - `vf_delogo.c`
-  - `vf_eq.c`
-  - `vf_fspp.c`
-  - `vf_geq.c`
-  - `vf_histeq.c`
-  - `vf_hqdn3d.c`
-  - `vf_interlace.c`
-  - `vf_kerndeint.c`
-  - `vf_mcdeint.c`
-  - `vf_mpdecimate.c`
-  - `vf_owdenoise.c`
-  - `vf_perspective.c`
-  - `vf_phase.c`
-  - `vf_pp.c`
-  - `vf_pp7.c`
-  - `vf_pullup.c`
-  - `vf_sab.c`
-  - `vf_smartblur.c`
-  - `vf_repeatfields.c`
-  - `vf_spp.c`
-  - `vf_stereo3d.c`
-  - `vf_super2xsai.c`
-  - `vf_tinterlace.c`
-  - `vf_uspp.c`
-  - `vsrc_mptestsrc.c`
+  - f_ebur128.c
+  - vf_blackframe.c
+  - vf_boxblur.c
+  - vf_colormatrix.c
+  - vf_cropdetect.c
+  - vf_decimate.c
+  - vf_delogo.c
+  - vf_geq.c
+  - vf_histeq.c
+  - vf_hqdn3d.c
+  - vf_kerndeint.c
+  - vf_mcdeint.c
+  - vf_mp.c
+  - vf_owdenoise.c
+  - vf_perspective.c
+  - vf_phase.c
+  - vf_pp.c
+  - vf_pullup.c
+  - vf_sab.c
+  - vf_smartblur.c
+  - vf_spp.c
+  - vf_stereo3d.c
+  - vf_super2xsai.c
+  - vf_tinterlace.c
+  - vsrc_mptestsrc.c
 Should you, for whatever reason, prefer to use version 3 of the (L)GPL, then
-the configure parameter `--enable-version3` will activate this licensing option
-for you. Read the file `COPYING.LGPLv3` or, if you have enabled GPL parts,
-`COPYING.GPLv3` to learn the exact legal terms that apply in this case.
+the configure parameter --enable-version3 will activate this licensing option
+for you. Read the file COPYING.LGPLv3 or, if you have enabled GPL parts,
+COPYING.GPLv3 to learn the exact legal terms that apply in this case.
 There are a handful of files under other licensing terms, namely:
-* The files `libavcodec/jfdctfst.c`, `libavcodec/jfdctint_template.c` and
-  `libavcodec/jrevdct.c` are taken from libjpeg, see the top of the files for
+* The files libavcodec/jfdctfst.c, libavcodec/jfdctint_template.c and
+  libavcodec/jrevdct.c are taken from libjpeg, see the top of the files for
   licensing details. Specifically note that you must credit the IJG in the
   documentation accompanying your program if you only distribute executables.
   You must also indicate any changes including additions and deletions to
   those three files in the documentation.
-* `tests/reference.pnm` is under the expat license.
 external libraries
@@ -80,22 +74,21 @@ compatible libraries
 --------------------
 The following libraries are under GPL:
 - frei0r
 - libcdio
 - libutvideo
 - libvidstab
 - libx264
 - libx265
 - libxavs
 - libxvid
 When combining them with FFmpeg, FFmpeg needs to be licensed as GPL as well by
-passing `--enable-gpl` to configure.
+passing --enable-gpl to configure.
 The OpenCORE and VisualOn libraries are under the Apache License 2.0. That
 license is incompatible with the LGPL v2.1 and the GPL v2, but not with
 version 3 of those licenses. So to combine these libraries with FFmpeg, the
-license version needs to be upgraded by passing `--enable-version3` to configure.
+license version needs to be upgraded by passing --enable-version3 to configure.
 incompatible libraries
 ----------------------
@@ -103,7 +96,7 @@ incompatible libraries
 The Fraunhofer AAC library, FAAC and aacplus are under licenses which
 are incompatible with the GPLv2 and v3. We do not know for certain if their
 licenses are compatible with the LGPL.
-If you wish to enable these libraries, pass `--enable-nonfree` to configure.
+If you wish to enable these libraries, pass --enable-nonfree to configure.
 But note that if you enable any of these libraries the resulting binary will
 be under a complex license mix that is more restrictive than the LGPL and that
 may result in additional obligations. It is possible that these
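Taken together, the licensing switches discussed in this file combine as in the following configure invocations (a sketch; the external-library flags are only examples of libraries in each category):

```shell
# Default build: LGPL v2.1+, no GPL components enabled.
./configure

# GPL v2+ build: activates libpostproc, the GPL filters, and allows
# GPL external libraries such as libx264.
./configure --enable-gpl --enable-libx264

# (L)GPL version 3: needed when combining with Apache-2.0 libraries
# (OpenCORE, VisualOn).
./configure --enable-version3 --enable-libopencore-amrnb

# Nonfree build, e.g. with the Fraunhofer FDK AAC library; the
# resulting binary is not redistributable.
./configure --enable-gpl --enable-nonfree --enable-libfdk-aac
```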


@@ -44,8 +44,8 @@ Miscellaneous Areas
 ===================
 documentation Stefano Sabatini, Mike Melanson, Timothy Gu
-build system (configure, makefiles) Diego Biurrun, Mans Rullgard
-project server Árpád Gereöffy, Michael Niedermayer, Reimar Doeffinger, Alexander Strasser
+build system (configure,Makefiles) Diego Biurrun, Mans Rullgard
+project server Árpád Gereöffy, Michael Niedermayer, Reimar Döffinger, Alexander Strasser
 presets Robert Swain
 metadata subsystem Aurelien Jacobs
 release management Michael Niedermayer
@@ -54,10 +54,8 @@ release management Michael Niedermayer
 Communication
 =============
-website Deby Barbara Lepage
-fate.ffmpeg.org Timothy Gu
-Trac bug tracker Alexander Strasser, Michael Niedermayer, Carl Eugen Hoyos, Lou Logan
-mailing lists Michael Niedermayer, Baptiste Coudurier, Lou Logan
+website Robert Swain, Lou Logan
+mailinglists Michael Niedermayer, Baptiste Coudurier, Lou Logan
 Google+ Paul B Mahol, Michael Niedermayer, Alexander Strasser
 Twitter Lou Logan
 Launchpad Timothy Gu
@@ -75,7 +73,6 @@ Other:
 bprint Nicolas George
 bswap.h
 des Reimar Doeffinger
-dynarray.h Nicolas George
 eval.c, eval.h Michael Niedermayer
 float_dsp Loren Merritt
 hash Reimar Doeffinger
@@ -132,7 +129,6 @@ Generic Parts:
 tableprint.c, tableprint.h Reimar Doeffinger
 fixed point FFT:
 fft* Zeljko Lukac
-Text Subtitles Clément Bœsch
 Codecs:
 4xm.c Michael Niedermayer
@@ -156,7 +152,6 @@ Codecs:
 celp_filters.* Vitor Sessak
 cinepak.c Roberto Togni
 cinepakenc.c Rl / Aetey G.T. AB
-ccaption_dec.c Anshul Maheshwari
 cljr Alex Beregszaszi
 cllc.c Derek Buitenhuis
 cook.c, cookdata.h Benjamin Larsson
@@ -166,15 +161,12 @@ Codecs:
 dca.c Kostya Shishkov, Benjamin Larsson
 dnxhd* Baptiste Coudurier
 dpcm.c Mike Melanson
-dss_sp.c Oleksij Rempel, Michael Niedermayer
 dv.c Roman Shaposhnik
-dvbsubdec.c Anshul Maheshwari
 dxa.c Kostya Shishkov
 eacmv*, eaidct*, eat* Peter Ross
 exif.c, exif.h Thilo Borgmann
-ffv1* Michael Niedermayer
+ffv1.c Michael Niedermayer
 ffwavesynth.c Nicolas George
-fic.c Derek Buitenhuis
 flac* Justin Ruggles
 flashsv* Benjamin Larsson
 flicvideo.c Mike Melanson
@@ -184,7 +176,7 @@ Codecs:
 h261* Michael Niedermayer
 h263* Michael Niedermayer
 h264* Loren Merritt, Michael Niedermayer
-huffyuv* Michael Niedermayer, Christophe Gisquet
+huffyuv.c Michael Niedermayer
 idcinvideo.c Mike Melanson
 imc* Benjamin Larsson
 indeo2* Kostya Shishkov
@@ -228,7 +220,6 @@ Codecs:
 msvideo1.c Mike Melanson
 nellymoserdec.c Benjamin Larsson
 nuv.c Reimar Doeffinger
-nvenc.c Timo Rothenpieler
 paf.* Paul B Mahol
 pcx.c Ivo van Poorten
 pgssubdec.c Reimar Doeffinger
@@ -245,12 +236,12 @@ Codecs:
 rtjpeg.c, rtjpeg.h Reimar Doeffinger
 rv10.c Michael Niedermayer
 rv3* Kostya Shishkov
-rv4* Kostya Shishkov, Christophe Gisquet
+rv4* Kostya Shishkov
 s3tc* Ivo van Poorten
 smacker.c Kostya Shishkov
 smc.c Mike Melanson
 smvjpegdec.c Ash Hughes
-snow* Michael Niedermayer, Loren Merritt
+snow.c Michael Niedermayer, Loren Merritt
 sonic.c Alex Beregszaszi
 srt* Aurelien Jacobs
 sunrast.c Ivo van Poorten
@@ -269,13 +260,13 @@ Codecs:
 v410*.c Derek Buitenhuis
 vb.c Kostya Shishkov
 vble.c Derek Buitenhuis
-vc1* Kostya Shishkov, Christophe Gisquet
+vc1* Kostya Shishkov
 vcr1.c Michael Niedermayer
 vda_h264_dec.c Xidorn Quan
 vima.c Paul B Mahol
 vmnc.c Kostya Shishkov
-vorbisdec.c Denes Balatoni, David Conrad
-vorbisenc.c Oded Shimon
+vorbis_dec.c Denes Balatoni, David Conrad
+vorbis_enc.c Oded Shimon
 vp3* Mike Melanson
 vp5 Aurelien Jacobs
 vp6 Aurelien Jacobs
@@ -311,21 +302,16 @@ libavdevice
 libavdevice/avdevice.h
-avfoundation.m Thilo Borgmann
-decklink* Deti Fliegl
-dshow.c Roger Pack (CC rogerdpack@gmail.com)
+dshow.c Roger Pack
 fbdev_enc.c Lukasz Marek
-gdigrab.c Roger Pack (CC rogerdpack@gmail.com)
 iec61883.c Georg Lippitsch
 lavfi Stefano Sabatini
 libdc1394.c Roman Shaposhnik
 opengl_enc.c Lukasz Marek
 pulse_audio_enc.c Lukasz Marek
-qtkit.m Thilo Borgmann
 sdl Stefano Sabatini
-v4l2.c Giorgio Vazzana
+v4l2.c Luca Abeni
 vfwcap.c Ramiro Polla
-xv.c Lukasz Marek
libavfilter libavfilter
=========== ===========
@@ -347,9 +333,7 @@ Filters:
 af_compand.c Paul B Mahol
 af_ladspa.c Paul B Mahol
 af_pan.c Nicolas George
-af_silenceremove.c Paul B Mahol
 avf_avectorscope.c Paul B Mahol
-avf_showcqt.c Muhammad Faiz
 vf_blend.c Paul B Mahol
 vf_colorbalance.c Paul B Mahol
 vf_dejudder.c Nicholas Robbins
@@ -357,10 +341,7 @@ Filters:
 vf_drawbox.c/drawgrid Andrey Utkin
 vf_extractplanes.c Paul B Mahol
 vf_histogram.c Paul B Mahol
-vf_hqx.c Clément Bœsch
-vf_idet.c Pascal Massimino
 vf_il.c Paul B Mahol
-vf_lenscorrection.c Daniel Oberhoff
 vf_mergeplanes.c Paul B Mahol
 vf_psnr.c Paul B Mahol
 vf_scale.c Michael Niedermayer
@@ -389,7 +370,6 @@ Muxers/Demuxers:
 aiffdec.c Baptiste Coudurier, Matthieu Bouron
 aiffenc.c Baptiste Coudurier, Matthieu Bouron
 ape.c Kostya Shishkov
-apngdec.c Benoit Fouet
 ass* Aurelien Jacobs
 astdec.c Paul B Mahol
 astenc.c James Almer
@@ -402,7 +382,6 @@ Muxers/Demuxers:
 cdxl.c Paul B Mahol
 crc.c Michael Niedermayer
 daud.c Reimar Doeffinger
-dss.c Oleksij Rempel, Michael Niedermayer
 dtshddec.c Paul B Mahol
 dv.c Roman Shaposhnik
 dxa.c Kostya Shishkov
@@ -432,7 +411,6 @@ Muxers/Demuxers:
 matroska.c Aurelien Jacobs
 matroskadec.c Aurelien Jacobs
 matroskaenc.c David Conrad
-matroska subtitles (matroskaenc.c) John Peebles
 metadata* Aurelien Jacobs
 mgsts.c Paul B Mahol
 microdvd* Aurelien Jacobs
@@ -442,15 +420,14 @@ Muxers/Demuxers:
 mpc.c Kostya Shishkov
 mpeg.c Michael Niedermayer
 mpegenc.c Michael Niedermayer
-mpegts.c Marton Balint
-mpegtsenc.c Baptiste Coudurier
+mpegts* Baptiste Coudurier
 msnwc_tcp.c Ramiro Polla
 mtv.c Reynaldo H. Verdejo Pinochet
 mxf* Baptiste Coudurier
 mxfdec.c Tomas Härdin
 nistspheredec.c Paul B Mahol
 nsvdec.c Francois Revol
-nut* Michael Niedermayer
+nut.c Michael Niedermayer
 nuv.c Reimar Doeffinger
 oggdec.c, oggdec.h David Conrad
 oggenc.c Baptiste Coudurier
@@ -467,19 +444,12 @@ Muxers/Demuxers:
 rmdec.c, rmenc.c Ronald S. Bultje, Kostya Shishkov
 rtmp* Kostya Shishkov
 rtp.c, rtpenc.c Martin Storsjo
-rtpdec_ac3.* Gilles Chanteperdrix
-rtpdec_dv.* Thomas Volkert
-rtpdec_h261.*, rtpenc_h261.* Thomas Volkert
-rtpdec_hevc.*, rtpenc_hevc.* Thomas Volkert
-rtpdec_mpa_robust.* Gilles Chanteperdrix
 rtpdec_asf.* Ronald S. Bultje
-rtpdec_vp9.c Thomas Volkert
 rtpenc_mpv.*, rtpenc_aac.* Martin Storsjo
 rtsp.c Luca Barbato
 sbgdec.c Nicolas George
 sdp.c Martin Storsjo
 segafilm.c Mike Melanson
-segment.c Stefano Sabatini
 siff.c Kostya Shishkov
 smacker.c Kostya Shishkov
 smjpeg* Paul B Mahol
@@ -492,7 +462,6 @@ Muxers/Demuxers:
 voc.c Aurelien Jacobs
 wav.c Michael Niedermayer
 wc3movie.c Mike Melanson
-webm dash (matroskaenc.c) Vignesh Venkatasubramanian
 webvtt* Matthew J Heaney
 westwood.c Mike Melanson
 wtv.c Peter Ross
@@ -506,7 +475,6 @@ Protocols:
 libssh.c Lukasz Marek
 mms*.c Ronald S. Bultje
 udp.c Luca Abeni
-icecast.c Marvin Scholz
 libswresample
@@ -535,8 +503,6 @@ Amiga / PowerPC Colin Ward
 Linux / PowerPC Luca Barbato
 Windows MinGW Alex Beregszaszi, Ramiro Polla
 Windows Cygwin Victor Paesa
-Windows MSVC Matthew Oliver
-Windows ICL Matthew Oliver
 ADI/Blackfin DSP Marc Hoffman
 Sparc Roman Shaposhnik
 x86 Michael Niedermayer
@@ -545,10 +511,9 @@ x86 Michael Niedermayer
 Releases
 ========
-2.6 Michael Niedermayer
-2.5 Michael Niedermayer
-2.4 Michael Niedermayer
 2.2 Michael Niedermayer
+2.1 Michael Niedermayer
+1.2 Michael Niedermayer
 If you want to maintain an older release, please contact us
@@ -564,7 +529,7 @@ Attila Kinali 11F0 F9A6 A1D2 11F6 C745 D10C 6520 BCDD F2DF E765
 Baptiste Coudurier 8D77 134D 20CC 9220 201F C5DB 0AC9 325C 5C1A BAAA
 Ben Littler 3EE3 3723 E560 3214 A8CD 4DEB 2CDB FCE7 768C 8D2C
 Benoit Fouet B22A 4F4F 43EF 636B BB66 FCDC 0023 AE1E 2985 49C8
-Clément Bœsch 52D0 3A82 D445 F194 DB8B 2B16 87EE 2CB8 F4B8 FCF9
+Bœsch Clément 52D0 3A82 D445 F194 DB8B 2B16 87EE 2CB8 F4B8 FCF9
 Daniel Verkamp 78A6 07ED 782C 653E C628 B8B9 F0EB 8DD8 2F0E 21C7
 Diego Biurrun 8227 1E31 B6D9 4994 7427 E220 9CAE D6CC 4757 FCC5
 FFmpeg release signing key FCF9 86EA 15E6 E293 A564 4F10 B432 2F04 D676 58D8
@@ -579,14 +544,12 @@ Michael Niedermayer 9FF2 128B 147E F673 0BAD F133 611E C787 040B 0FAB
 Nicolas George 24CE 01CE 9ACC 5CEB 74D8 8D9D B063 D997 36E5 4C93
 Panagiotis Issaris 6571 13A3 33D9 3726 F728 AA98 F643 B12E ECF3 E029
 Peter Ross A907 E02F A6E5 0CD2 34CD 20D2 6760 79C5 AC40 DD6B
-Reimar Doeffinger C61D 16E5 9E2C D10C 8958 38A4 0899 A2B9 06D4 D9C7
+Reimar Döffinger C61D 16E5 9E2C D10C 8958 38A4 0899 A2B9 06D4 D9C7
 Reinhard Tartler 9300 5DC2 7E87 6C37 ED7B CA9A 9808 3544 9453 48A4
 Reynaldo H. Verdejo Pinochet 6E27 CD34 170C C78E 4D4F 5F40 C18E 077F 3114 452A
 Robert Swain EE7A 56EA 4A81 A7B5 2001 A521 67FA 362D A2FC 3E71
 Sascha Sommer 38A0 F88B 868E 9D3A 97D4 D6A0 E823 706F 1E07 0D3C
 Stefano Sabatini 0D0B AD6B 5330 BBAD D3D6 6A0C 719C 2839 FC43 2D5F
 Stephan Hilb 4F38 0B3A 5F39 B99B F505 E562 8D5C 5554 4E17 8863
-Tiancheng "Timothy" Gu 9456 AFC0 814A 8139 E994 8351 7FE6 B095 B582 B0D4
-Tim Nicholson 38CF DB09 3ED0 F607 8B67 6CED 0C0B FC44 8B0B FC83
 Tomas Härdin A79D 4E3D F38F 763F 91F5 8B33 A01E 8AE0 41BB 2551
 Wei Gao 4269 7741 857A 0E60 9EC5 08D2 4744 4EFA 62C1 87B9


@@ -4,7 +4,6 @@ include config.mak
 vpath %.c $(SRC_PATH)
 vpath %.cpp $(SRC_PATH)
 vpath %.h $(SRC_PATH)
-vpath %.m $(SRC_PATH)
 vpath %.S $(SRC_PATH)
 vpath %.asm $(SRC_PATH)
 vpath %.rc $(SRC_PATH)
@@ -30,24 +29,19 @@ $(foreach prog,$(AVBASENAMES),$(eval OBJS-$(prog)-$(CONFIG_OPENCL) += cmdutils_o
 OBJS-ffmpeg += ffmpeg_opt.o ffmpeg_filter.o
 OBJS-ffmpeg-$(HAVE_VDPAU_X11) += ffmpeg_vdpau.o
-OBJS-ffmpeg-$(HAVE_DXVA2_LIB) += ffmpeg_dxva2.o
-OBJS-ffmpeg-$(CONFIG_VDA) += ffmpeg_vda.o
-OBJS-ffserver += ffserver_config.o
 TESTTOOLS = audiogen videogen rotozoom tiny_psnr tiny_ssim base64
 HOSTPROGS := $(TESTTOOLS:%=tests/%) doc/print_options
 TOOLS = qt-faststart trasher uncoded_frame
 TOOLS-$(CONFIG_ZLIB) += cws2fws
-# $(FFLIBS-yes) needs to be in linking order
-FFLIBS-$(CONFIG_AVDEVICE) += avdevice
-FFLIBS-$(CONFIG_AVFILTER) += avfilter
-FFLIBS-$(CONFIG_AVFORMAT) += avformat
-FFLIBS-$(CONFIG_AVCODEC) += avcodec
-FFLIBS-$(CONFIG_AVRESAMPLE) += avresample
-FFLIBS-$(CONFIG_POSTPROC) += postproc
-FFLIBS-$(CONFIG_SWRESAMPLE) += swresample
-FFLIBS-$(CONFIG_SWSCALE) += swscale
+FFLIBS-$(CONFIG_AVDEVICE) += avdevice
+FFLIBS-$(CONFIG_AVFILTER) += avfilter
+FFLIBS-$(CONFIG_AVFORMAT) += avformat
+FFLIBS-$(CONFIG_AVRESAMPLE) += avresample
+FFLIBS-$(CONFIG_AVCODEC) += avcodec
+FFLIBS-$(CONFIG_POSTPROC) += postproc
+FFLIBS-$(CONFIG_SWRESAMPLE)+= swresample
+FFLIBS-$(CONFIG_SWSCALE) += swscale
 FFLIBS := avutil
@@ -64,7 +58,7 @@ FF_DEP_LIBS := $(DEP_LIBS)
 all: $(AVPROGS)
 $(TOOLS): %$(EXESUF): %.o $(EXEOBJS)
-	$(LD) $(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $^ $(ELIBS)
+	$(LD) $(LDFLAGS) $(LD_O) $^ $(ELIBS)
 tools/cws2fws$(EXESUF): ELIBS = $(ZLIB)
 tools/uncoded_frame$(EXESUF): $(FF_DEP_LIBS)
@@ -78,9 +72,10 @@ config.h: .config
 SUBDIR_VARS := CLEANFILES EXAMPLES FFLIBS HOSTPROGS TESTPROGS TOOLS \
                HEADERS ARCH_HEADERS BUILT_HEADERS SKIPHEADERS \
-               ARMV5TE-OBJS ARMV6-OBJS ARMV8-OBJS VFP-OBJS NEON-OBJS \
-               ALTIVEC-OBJS MMX-OBJS YASM-OBJS \
-               MIPSFPU-OBJS MIPSDSPR2-OBJS MIPSDSPR1-OBJS \
+               ARMV5TE-OBJS ARMV6-OBJS VFP-OBJS NEON-OBJS \
+               ALTIVEC-OBJS VIS-OBJS \
+               MMX-OBJS YASM-OBJS \
+               MIPSFPU-OBJS MIPSDSPR2-OBJS MIPSDSPR1-OBJS MIPS32R2-OBJS \
                OBJS SLIBOBJS HOSTOBJS TESTOBJS
 define RESET
@@ -93,7 +88,6 @@ $(foreach V,$(SUBDIR_VARS),$(eval $(call RESET,$(V))))
 SUBDIR := $(1)/
 include $(SRC_PATH)/$(1)/Makefile
 -include $(SRC_PATH)/$(1)/$(ARCH)/Makefile
--include $(SRC_PATH)/$(1)/$(INTRINSICS)/Makefile
 include $(SRC_PATH)/library.mak
 endef
@@ -112,14 +106,14 @@ endef
 $(foreach P,$(PROGS),$(eval $(call DOPROG,$(P:$(PROGSSUF)$(EXESUF)=))))
-ffprobe.o cmdutils.o libavcodec/utils.o libavformat/utils.o libavdevice/avdevice.o libavfilter/avfilter.o libavutil/utils.o libpostproc/postprocess.o libswresample/swresample.o libswscale/utils.o : libavutil/ffversion.h
+ffprobe.o cmdutils.o : libavutil/ffversion.h
 $(PROGS): %$(PROGSSUF)$(EXESUF): %$(PROGSSUF)_g$(EXESUF)
 	$(CP) $< $@
 	$(STRIP) $@
 %$(PROGSSUF)_g$(EXESUF): %.o $(FF_DEP_LIBS)
-	$(LD) $(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $(OBJS-$*) $(FF_EXTRALIBS)
+	$(LD) $(LDFLAGS) $(LD_O) $(OBJS-$*) $(FF_EXTRALIBS)
 OBJDIRS += tools
README (new file)
@@ -0,0 +1,18 @@
FFmpeg README
-------------
1) Documentation
----------------
* Read the documentation in the doc/ directory in git.
You can also view it online at http://ffmpeg.org/documentation.html
2) Licensing
------------
* See the LICENSE file.
3) Build and Install
--------------------
* See the INSTALL file.
@@ -1,42 +0,0 @@
FFmpeg README
=============
FFmpeg is a collection of libraries and tools to process multimedia content
such as audio, video, subtitles and related metadata.
## Libraries
* `libavcodec` provides implementation of a wider range of codecs.
* `libavformat` implements streaming protocols, container formats and basic I/O access.
* `libavutil` includes hashers, decompressors and miscellaneous utility functions.
* `libavfilter` provides a mean to alter decoded Audio and Video through chain of filters.
* `libavdevice` provides an abstraction to access capture and playback devices.
* `libswresample` implements audio mixing and resampling routines.
* `libswscale` implements color conversion and scaling routines.
## Tools
* [ffmpeg](http://ffmpeg.org/ffmpeg.html) is a command line toolbox to
manipulate, convert and stream multimedia content.
* [ffplay](http://ffmpeg.org/ffplay.html) is a minimalistic multimedia player.
* [ffprobe](http://ffmpeg.org/ffprobe.html) is a simple analysis tool to inspect
multimedia content.
* [ffserver](http://ffmpeg.org/ffserver.html) is a multimedia streaming server
for live broadcasts.
* Additional small tools such as `aviocat`, `ismindex` and `qt-faststart`.
## Documentation
The offline documentation is available in the **doc/** directory.
The online documentation is available in the main [website](http://ffmpeg.org)
and in the [wiki](http://trac.ffmpeg.org).
### Examples
Coding examples are available in the **doc/examples** directory.
## License
FFmpeg codebase is mainly LGPL-licensed with optional components licensed under
GPL. Please refer to the LICENSE file for detailed information.
@@ -1 +1 @@
-2.5.git
+2.2-rc1
VERSION (new file)
@@ -0,0 +1 @@
+2.2-rc1
@@ -1,14 +1,16 @@
 OBJS-$(HAVE_ARMV5TE) += $(ARMV5TE-OBJS) $(ARMV5TE-OBJS-yes)
 OBJS-$(HAVE_ARMV6) += $(ARMV6-OBJS) $(ARMV6-OBJS-yes)
-OBJS-$(HAVE_ARMV8) += $(ARMV8-OBJS) $(ARMV8-OBJS-yes)
 OBJS-$(HAVE_VFP) += $(VFP-OBJS) $(VFP-OBJS-yes)
 OBJS-$(HAVE_NEON) += $(NEON-OBJS) $(NEON-OBJS-yes)
 OBJS-$(HAVE_MIPSFPU) += $(MIPSFPU-OBJS) $(MIPSFPU-OBJS-yes)
+OBJS-$(HAVE_MIPS32R2) += $(MIPS32R2-OBJS) $(MIPS32R2-OBJS-yes)
 OBJS-$(HAVE_MIPSDSPR1) += $(MIPSDSPR1-OBJS) $(MIPSDSPR1-OBJS-yes)
 OBJS-$(HAVE_MIPSDSPR2) += $(MIPSDSPR2-OBJS) $(MIPSDSPR2-OBJS-yes)
 OBJS-$(HAVE_ALTIVEC) += $(ALTIVEC-OBJS) $(ALTIVEC-OBJS-yes)
+OBJS-$(HAVE_VIS) += $(VIS-OBJS) $(VIS-OBJS-yes)
 OBJS-$(HAVE_MMX) += $(MMX-OBJS) $(MMX-OBJS-yes)
 OBJS-$(HAVE_YASM) += $(YASM-OBJS) $(YASM-OBJS-yes)
@@ -66,7 +66,6 @@ AVDictionary *swr_opts;
 AVDictionary *format_opts, *codec_opts, *resample_opts;
 static FILE *report_file;
-static int report_file_level = AV_LOG_DEBUG;
 int hide_banner = 0;
 void init_opts(void)
@@ -105,10 +104,8 @@ static void log_callback_report(void *ptr, int level, const char *fmt, va_list v
 av_log_default_callback(ptr, level, fmt, vl);
 av_log_format_line(ptr, level, fmt, vl2, line, sizeof(line), &print_prefix);
 va_end(vl2);
-if (report_file_level >= level) {
 fputs(line, report_file);
 fflush(report_file);
-}
 }
 static void (*program_exit)(int ret);
@@ -166,7 +163,7 @@ void show_help_options(const OptionDef *options, const char *msg, int req_flags,
 int first;
 first = 1;
-for (po = options; po->name; po++) {
+for (po = options; po->name != NULL; po++) {
 char buf[64];
 if (((po->flags & req_flags) != req_flags) ||
@@ -205,7 +202,7 @@ static const OptionDef *find_option(const OptionDef *po, const char *name)
 const char *p = strchr(name, ':');
 int len = p ? p - name : strlen(name);
-while (po->name) {
+while (po->name != NULL) {
 if (!strncmp(name, po->name, len) && strlen(po->name) == len)
 break;
 po++;
@@ -254,7 +251,7 @@ static void prepare_app_arguments(int *argc_ptr, char ***argv_ptr)
 win32_argv_utf8 = av_mallocz(sizeof(char *) * (win32_argc + 1) + buffsize);
 argstr_flat = (char *)win32_argv_utf8 + sizeof(char *) * (win32_argc + 1);
-if (!win32_argv_utf8) {
+if (win32_argv_utf8 == NULL) {
 LocalFree(argv_w);
 return;
 }
@@ -290,14 +287,10 @@ static int write_option(void *optctx, const OptionDef *po, const char *opt,
 if (po->flags & OPT_SPEC) {
 SpecifierOpt **so = dst;
 char *p = strchr(opt, ':');
-char *str;
 dstcount = (int *)(so + 1);
 *so = grow_array(*so, sizeof(**so), dstcount, *dstcount + 1);
-str = av_strdup(p ? p + 1 : "");
-if (!str)
-return AVERROR(ENOMEM);
-(*so)[*dstcount - 1].specifier = str;
+(*so)[*dstcount - 1].specifier = av_strdup(p ? p + 1 : "");
 dst = &(*so)[*dstcount - 1].u;
 }
@@ -305,8 +298,6 @@ static int write_option(void *optctx, const OptionDef *po, const char *opt,
 char *str;
 str = av_strdup(arg);
 av_freep(dst);
-if (!str)
-return AVERROR(ENOMEM);
 *(char **)dst = str;
 } else if (po->flags & OPT_BOOL || po->flags & OPT_INT) {
 *(int *)dst = parse_number_or_die(opt, arg, OPT_INT64, INT_MIN, INT_MAX);
@@ -450,7 +441,7 @@ int locate_option(int argc, char **argv, const OptionDef *options,
 (po->name && !strcmp(optname, po->name)))
 return i;
-if (!po->name || po->flags & HAS_ARG)
+if (po->flags & HAS_ARG)
 i++;
 }
 return 0;
@@ -561,11 +552,6 @@ int opt_default(void *optctx, const char *opt, const char *arg)
 }
 consumed = 1;
 }
-#else
-if (!consumed && !strcmp(opt, "sws_flags")) {
-av_log(NULL, AV_LOG_WARNING, "Ignoring %s %s, due to disabled swscale\n", opt, arg);
-consumed = 1;
-}
 #endif
 #if CONFIG_SWRESAMPLE
 swr_class = swr_get_class();
@@ -676,7 +662,7 @@ static void init_parse_context(OptionParseContext *octx,
 memset(octx, 0, sizeof(*octx));
 octx->nb_groups = nb_groups;
-octx->groups = av_mallocz_array(octx->nb_groups, sizeof(*octx->groups));
+octx->groups = av_mallocz(sizeof(*octx->groups) * octx->nb_groups);
 if (!octx->groups)
 exit_program(1);
@@ -848,17 +834,10 @@ int opt_loglevel(void *optctx, const char *opt, const char *arg)
 };
 char *tail;
 int level;
-int flags;
 int i;
-flags = av_log_get_flags();
 tail = strstr(arg, "repeat");
-if (tail)
-flags &= ~AV_LOG_SKIP_REPEATED;
-else
-flags |= AV_LOG_SKIP_REPEATED;
-av_log_set_flags(flags);
+av_log_set_flags(tail ? 0 : AV_LOG_SKIP_REPEATED);
 if (tail == arg)
 arg += 6 + (arg[6]=='+');
 if(tail && !*arg)
@@ -940,13 +919,6 @@ static int init_report(const char *env)
 av_free(filename_template);
 filename_template = val;
 val = NULL;
-} else if (!strcmp(key, "level")) {
-char *tail;
-report_file_level = strtol(val, &tail, 10);
-if (*tail) {
-av_log(NULL, AV_LOG_FATAL, "Invalid report file level\n");
-exit_program(1);
-}
 } else {
 av_log(NULL, AV_LOG_ERROR, "Unknown key '%s' in FFREPORT\n", key);
 }
@@ -965,10 +937,9 @@ static int init_report(const char *env)
 report_file = fopen(filename.str, "w");
 if (!report_file) {
-int ret = AVERROR(errno);
 av_log(NULL, AV_LOG_ERROR, "Failed to open report \"%s\": %s\n",
 filename.str, strerror(errno));
-return ret;
+return AVERROR(errno);
 }
 av_log_set_callback(log_callback_report);
 av_log(NULL, AV_LOG_INFO,
@@ -1081,7 +1052,8 @@ static void print_program_info(int flags, int level)
 av_log(NULL, level, " Copyright (c) %d-%d the FFmpeg developers",
 program_birth_year, CONFIG_THIS_YEAR);
 av_log(NULL, level, "\n");
-av_log(NULL, level, "%sbuilt with %s\n", indent, CC_IDENT);
+av_log(NULL, level, "%sbuilt on %s %s with %s\n",
+indent, __DATE__, __TIME__, CC_IDENT);
 av_log(NULL, level, "%sconfiguration: " FFMPEG_CONFIGURATION "\n", indent);
 }
@@ -1126,7 +1098,7 @@ void show_banner(int argc, char **argv, const OptionDef *options)
 int show_version(void *optctx, const char *opt, const char *arg)
 {
 av_log_set_callback(log_callback_help);
-print_program_info (SHOW_COPYRIGHT, AV_LOG_INFO);
+print_program_info (0 , AV_LOG_INFO);
 print_all_libs_info(SHOW_VERSION, AV_LOG_INFO);
 return 0;
@@ -1214,24 +1186,16 @@ int show_license(void *optctx, const char *opt, const char *arg)
 return 0;
 }
-static int is_device(const AVClass *avclass)
-{
-if (!avclass)
-return 0;
-return AV_IS_INPUT_DEVICE(avclass->category) || AV_IS_OUTPUT_DEVICE(avclass->category);
-}
-static int show_formats_devices(void *optctx, const char *opt, const char *arg, int device_only)
+int show_formats(void *optctx, const char *opt, const char *arg)
 {
 AVInputFormat *ifmt = NULL;
 AVOutputFormat *ofmt = NULL;
 const char *last_name;
-int is_dev;
-printf("%s\n"
+printf("File formats:\n"
 " D. = Demuxing supported\n"
 " .E = Muxing supported\n"
-" --\n", device_only ? "Devices:" : "File formats:");
+" --\n");
 last_name = "000";
 for (;;) {
 int decode = 0;
@@ -1240,10 +1204,7 @@ static int show_formats_devices(void *optctx, const char *opt, const char *arg,
 const char *long_name = NULL;
 while ((ofmt = av_oformat_next(ofmt))) {
-is_dev = is_device(ofmt->priv_class);
-if (!is_dev && device_only)
-continue;
-if ((!name || strcmp(ofmt->name, name) < 0) &&
+if ((name == NULL || strcmp(ofmt->name, name) < 0) &&
 strcmp(ofmt->name, last_name) > 0) {
 name = ofmt->name;
 long_name = ofmt->long_name;
@@ -1251,10 +1212,7 @@ static int show_formats_devices(void *optctx, const char *opt, const char *arg,
 }
 }
 while ((ifmt = av_iformat_next(ifmt))) {
-is_dev = is_device(ifmt->priv_class);
-if (!is_dev && device_only)
-continue;
-if ((!name || strcmp(ifmt->name, name) < 0) &&
+if ((name == NULL || strcmp(ifmt->name, name) < 0) &&
 strcmp(ifmt->name, last_name) > 0) {
 name = ifmt->name;
 long_name = ifmt->long_name;
@@ -1263,7 +1221,7 @@ static int show_formats_devices(void *optctx, const char *opt, const char *arg,
 if (name && strcmp(ifmt->name, name) == 0)
 decode = 1;
 }
-if (!name)
+if (name == NULL)
 break;
 last_name = name;
@@ -1276,16 +1234,6 @@ static int show_formats_devices(void *optctx, const char *opt, const char *arg,
 return 0;
 }
-int show_formats(void *optctx, const char *opt, const char *arg)
-{
-return show_formats_devices(optctx, opt, arg, 0);
-}
-int show_devices(void *optctx, const char *opt, const char *arg)
-{
-return show_formats_devices(optctx, opt, arg, 1);
-}
 #define PRINT_CODEC_SUPPORTED(codec, field, type, list_name, term, get_name) \
 if (codec->field) { \
 const type *p = codec->field; \
@@ -1430,9 +1378,6 @@ int show_codecs(void *optctx, const char *opt, const char *arg)
 const AVCodecDescriptor *desc = codecs[i];
 const AVCodec *codec = NULL;
-if (strstr(desc->name, "_deprecated"))
-continue;
 printf(" ");
 printf(avcodec_find_decoder(desc->id) ? "D" : ".");
 printf(avcodec_find_encoder(desc->id) ? "E" : ".");
@@ -1544,8 +1489,7 @@ int show_protocols(void *optctx, const char *opt, const char *arg)
 int show_filters(void *optctx, const char *opt, const char *arg)
 {
-#if CONFIG_AVFILTER
-const AVFilter *filter = NULL;
+const AVFilter av_unused(*filter) = NULL;
 char descr[64], *descr_cur;
 int i, j;
 const AVFilterPad *pad;
@@ -1558,6 +1502,7 @@ int show_filters(void *optctx, const char *opt, const char *arg)
 " V = Video input/output\n"
 " N = Dynamic number and/or type of input/output\n"
 " | = Source or sink filter\n");
+#if CONFIG_AVFILTER
 while ((filter = avfilter_next(filter))) {
 descr_cur = descr;
 for (i = 0; i < 2; i++) {
@@ -1582,8 +1527,6 @@ int show_filters(void *optctx, const char *opt, const char *arg)
 filter->process_command ? 'C' : '.',
 filter->name, descr, filter->description);
 }
-#else
-printf("No filters available: libavfilter disabled\n");
 #endif
 return 0;
 }
@@ -1642,19 +1585,19 @@ int show_layouts(void *optctx, const char *opt, const char *arg)
 const char *name, *descr;
 printf("Individual channels:\n"
 "NAME DESCRIPTION\n");
 for (i = 0; i < 63; i++) {
 name = av_get_channel_name((uint64_t)1 << i);
 if (!name)
 continue;
 descr = av_get_channel_description((uint64_t)1 << i);
-printf("%-14s %s\n", name, descr);
+printf("%-12s%s\n", name, descr);
 }
 printf("\nStandard channel layouts:\n"
 "NAME DECOMPOSITION\n");
 for (i = 0; !av_get_standard_channel_layout(i, &layout, &name); i++) {
 if (name) {
-printf("%-14s ", name);
+printf("%-12s", name);
 for (j = 1; j; j <<= 1)
 if ((layout & j))
 printf("%s%s", (layout & (j - 1)) ? "+" : "", av_get_channel_name(j));
@@ -1821,8 +1764,6 @@ int show_help(void *optctx, const char *opt, const char *arg)
 av_log_set_callback(log_callback_help);
 topic = av_strdup(arg ? arg : "");
-if (!topic)
-return AVERROR(ENOMEM);
 par = strchr(topic, '=');
 if (par)
 *par++ = 0;
@@ -1862,48 +1803,35 @@ int read_yesno(void)
 int cmdutils_read_file(const char *filename, char **bufptr, size_t *size)
 {
-int64_t ret;
+int ret;
 FILE *f = av_fopen_utf8(filename, "rb");
 if (!f) {
-ret = AVERROR(errno);
 av_log(NULL, AV_LOG_ERROR, "Cannot read file '%s': %s\n", filename,
 strerror(errno));
-return ret;
+return AVERROR(errno);
 }
-ret = fseek(f, 0, SEEK_END);
-if (ret == -1) {
-ret = AVERROR(errno);
-goto out;
-}
-ret = ftell(f);
-if (ret < 0) {
-ret = AVERROR(errno);
-goto out;
-}
-*size = ret;
-ret = fseek(f, 0, SEEK_SET);
-if (ret == -1) {
-ret = AVERROR(errno);
-goto out;
-}
+fseek(f, 0, SEEK_END);
+*size = ftell(f);
+fseek(f, 0, SEEK_SET);
+if (*size == (size_t)-1) {
+av_log(NULL, AV_LOG_ERROR, "IO error: %s\n", strerror(errno));
+fclose(f);
+return AVERROR(errno);
+}
 *bufptr = av_malloc(*size + 1);
 if (!*bufptr) {
 av_log(NULL, AV_LOG_ERROR, "Could not allocate file buffer\n");
-ret = AVERROR(ENOMEM);
-goto out;
+fclose(f);
+return AVERROR(ENOMEM);
 }
 ret = fread(*bufptr, 1, *size, f);
 if (ret < *size) {
 av_free(*bufptr);
 if (ferror(f)) {
-ret = AVERROR(errno);
 av_log(NULL, AV_LOG_ERROR, "Error while reading file '%s': %s\n",
 filename, strerror(errno));
+ret = AVERROR(errno);
 } else
 ret = AVERROR_EOF;
 } else {
@@ -1911,9 +1839,6 @@ int cmdutils_read_file(const char *filename, char **bufptr, size_t *size)
 (*bufptr)[(*size)++] = '\0';
 }
-out:
-if (ret < 0)
-av_log(NULL, AV_LOG_ERROR, "IO error: %s\n", av_err2str(ret));
 fclose(f);
 return ret;
 }
@@ -2013,12 +1938,11 @@ AVDictionary *filter_codec_opts(AVDictionary *opts, enum AVCodecID codec_id,
 switch (check_stream_specifier(s, st, p + 1)) {
 case 1: *p = 0; break;
 case 0: continue;
-default: exit_program(1);
+default: return NULL;
 }
 if (av_opt_find(&cc, t->key, NULL, flags, AV_OPT_SEARCH_FAKE_OBJ) ||
-!codec ||
-(codec->priv_class &&
+(codec && codec->priv_class &&
 av_opt_find(&codec->priv_class, t->key, NULL, flags,
 AV_OPT_SEARCH_FAKE_OBJ)))
 av_dict_set(&ret, t->key, t->value, 0);
@@ -2041,7 +1965,7 @@ AVDictionary **setup_find_stream_info_opts(AVFormatContext *s,
 if (!s->nb_streams)
 return NULL;
-opts = av_mallocz_array(s->nb_streams, sizeof(*opts));
+opts = av_mallocz(s->nb_streams * sizeof(*opts));
 if (!opts) {
 av_log(NULL, AV_LOG_ERROR,
 "Could not alloc memory for stream options.\n");
@@ -2060,7 +1984,7 @@ void *grow_array(void *array, int elem_size, int *size, int new_size)
 exit_program(1);
 }
 if (*size < new_size) {
-uint8_t *tmp = av_realloc_array(array, new_size, elem_size);
+uint8_t *tmp = av_realloc(array, new_size*elem_size);
 if (!tmp) {
 av_log(NULL, AV_LOG_ERROR, "Could not alloc buffer.\n");
 exit_program(1);
@@ -2071,161 +1995,3 @@ void *grow_array(void *array, int elem_size, int *size, int new_size)
 }
 return array;
 }
#if CONFIG_AVDEVICE
static int print_device_sources(AVInputFormat *fmt, AVDictionary *opts)
{
int ret, i;
AVDeviceInfoList *device_list = NULL;
if (!fmt || !fmt->priv_class || !AV_IS_INPUT_DEVICE(fmt->priv_class->category))
return AVERROR(EINVAL);
printf("Audo-detected sources for %s:\n", fmt->name);
if (!fmt->get_device_list) {
ret = AVERROR(ENOSYS);
printf("Cannot list sources. Not implemented.\n");
goto fail;
}
if ((ret = avdevice_list_input_sources(fmt, NULL, opts, &device_list)) < 0) {
printf("Cannot list sources.\n");
goto fail;
}
for (i = 0; i < device_list->nb_devices; i++) {
printf("%s %s [%s]\n", device_list->default_device == i ? "*" : " ",
device_list->devices[i]->device_name, device_list->devices[i]->device_description);
}
fail:
avdevice_free_list_devices(&device_list);
return ret;
}
static int print_device_sinks(AVOutputFormat *fmt, AVDictionary *opts)
{
int ret, i;
AVDeviceInfoList *device_list = NULL;
if (!fmt || !fmt->priv_class || !AV_IS_OUTPUT_DEVICE(fmt->priv_class->category))
return AVERROR(EINVAL);
printf("Audo-detected sinks for %s:\n", fmt->name);
if (!fmt->get_device_list) {
ret = AVERROR(ENOSYS);
printf("Cannot list sinks. Not implemented.\n");
goto fail;
}
if ((ret = avdevice_list_output_sinks(fmt, NULL, opts, &device_list)) < 0) {
printf("Cannot list sinks.\n");
goto fail;
}
for (i = 0; i < device_list->nb_devices; i++) {
printf("%s %s [%s]\n", device_list->default_device == i ? "*" : " ",
device_list->devices[i]->device_name, device_list->devices[i]->device_description);
}
fail:
avdevice_free_list_devices(&device_list);
return ret;
}
static int show_sinks_sources_parse_arg(const char *arg, char **dev, AVDictionary **opts)
{
int ret;
if (arg) {
char *opts_str = NULL;
av_assert0(dev && opts);
*dev = av_strdup(arg);
if (!*dev)
return AVERROR(ENOMEM);
if ((opts_str = strchr(*dev, ','))) {
*(opts_str++) = '\0';
if (opts_str[0] && ((ret = av_dict_parse_string(opts, opts_str, "=", ":", 0)) < 0)) {
av_freep(dev);
return ret;
}
}
} else
printf("\nDevice name is not provided.\n"
"You can pass devicename[,opt1=val1[,opt2=val2...]] as an argument.\n\n");
return 0;
}
int show_sources(void *optctx, const char *opt, const char *arg)
{
AVInputFormat *fmt = NULL;
char *dev = NULL;
AVDictionary *opts = NULL;
int ret = 0;
int error_level = av_log_get_level();
av_log_set_level(AV_LOG_ERROR);
if ((ret = show_sinks_sources_parse_arg(arg, &dev, &opts)) < 0)
goto fail;
do {
fmt = av_input_audio_device_next(fmt);
if (fmt) {
if (!strcmp(fmt->name, "lavfi"))
continue; //it's pointless to probe lavfi
if (dev && !av_match_name(dev, fmt->name))
continue;
print_device_sources(fmt, opts);
}
} while (fmt);
do {
fmt = av_input_video_device_next(fmt);
if (fmt) {
if (dev && !av_match_name(dev, fmt->name))
continue;
print_device_sources(fmt, opts);
}
} while (fmt);
fail:
av_dict_free(&opts);
av_free(dev);
av_log_set_level(error_level);
return ret;
}
int show_sinks(void *optctx, const char *opt, const char *arg)
{
AVOutputFormat *fmt = NULL;
char *dev = NULL;
AVDictionary *opts = NULL;
int ret = 0;
int error_level = av_log_get_level();
av_log_set_level(AV_LOG_ERROR);
if ((ret = show_sinks_sources_parse_arg(arg, &dev, &opts)) < 0)
goto fail;
do {
fmt = av_output_audio_device_next(fmt);
if (fmt) {
if (dev && !av_match_name(dev, fmt->name))
continue;
print_device_sinks(fmt, opts);
}
} while (fmt);
do {
fmt = av_output_video_device_next(fmt);
if (fmt) {
if (dev && !av_match_name(dev, fmt->name))
continue;
print_device_sinks(fmt, opts);
}
} while (fmt);
fail:
av_dict_free(&opts);
av_free(dev);
av_log_set_level(error_level);
return ret;
}
#endif
@@ -24,13 +24,12 @@
 #include <stdint.h>
-#include "config.h"
 #include "libavcodec/avcodec.h"
 #include "libavfilter/avfilter.h"
 #include "libavformat/avformat.h"
 #include "libswscale/swscale.h"
-#ifdef _WIN32
+#ifdef __MINGW32__
 #undef main /* We don't want SDL to override our main() */
 #endif
@@ -59,7 +58,7 @@ void register_exit(void (*cb)(int ret));
 /**
  * Wraps exit with a program-specific cleanup routine.
  */
-void exit_program(int ret) av_noreturn;
+void exit_program(int ret);
 /**
  * Initialize the cmdutils option system, in particular
@@ -431,31 +430,10 @@ int show_license(void *optctx, const char *opt, const char *arg);
 /**
  * Print a listing containing all the formats supported by the
- * program (including devices).
- * This option processing function does not utilize the arguments.
- */
-int show_formats(void *optctx, const char *opt, const char *arg);
-/**
- * Print a listing containing all the devices supported by the
  * program.
  * This option processing function does not utilize the arguments.
  */
-int show_devices(void *optctx, const char *opt, const char *arg);
-#if CONFIG_AVDEVICE
-/**
- * Print a listing containing audodetected sinks of the output device.
- * Device name with options may be passed as an argument to limit results.
- */
-int show_sinks(void *optctx, const char *opt, const char *arg);
-/**
- * Print a listing containing audodetected sources of the input device.
- * Device name with options may be passed as an argument to limit results.
- */
-int show_sources(void *optctx, const char *opt, const char *arg);
-#endif
+int show_formats(void *optctx, const char *opt, const char *arg);
 /**
  * Print a listing containing all the codecs supported by the
@@ -6,7 +6,6 @@
 { "version" , OPT_EXIT, {.func_arg = show_version}, "show version" },
 { "buildconf" , OPT_EXIT, {.func_arg = show_buildconf}, "show build configuration" },
 { "formats" , OPT_EXIT, {.func_arg = show_formats }, "show available formats" },
-{ "devices" , OPT_EXIT, {.func_arg = show_devices }, "show available devices" },
 { "codecs" , OPT_EXIT, {.func_arg = show_codecs }, "show available codecs" },
 { "decoders" , OPT_EXIT, {.func_arg = show_decoders }, "show available decoders" },
 { "encoders" , OPT_EXIT, {.func_arg = show_encoders }, "show available encoders" },
@@ -27,9 +26,3 @@
 { "opencl_bench", OPT_EXIT, {.func_arg = opt_opencl_bench}, "run benchmark on all OpenCL devices and show results" },
 { "opencl_options", HAS_ARG, {.func_arg = opt_opencl}, "set OpenCL environment options" },
 #endif
-#if CONFIG_AVDEVICE
-{ "sources" , OPT_EXIT | HAS_ARG, { .func_arg = show_sources },
-"list sources of the input device", "device" },
-{ "sinks" , OPT_EXIT | HAS_ARG, { .func_arg = show_sinks },
-"list sinks of the output device", "device" },
-#endif
@@ -181,12 +181,12 @@ static int64_t run_opencl_bench(AVOpenCLExternalEnv *ext_opencl_env)
 OCLCHECK(clSetKernelArg, kernel, arg++, sizeof(cl_int), &width);
 OCLCHECK(clSetKernelArg, kernel, arg++, sizeof(cl_int), &height);
-start = av_gettime_relative();
+start = av_gettime();
 for (i = 0; i < OPENCL_NB_ITER; i++)
 OCLCHECK(clEnqueueNDRangeKernel, ext_opencl_env->command_queue, kernel, 2, NULL,
 global_work_size_2d, local_work_size_2d, 0, NULL, NULL);
 clFinish(ext_opencl_env->command_queue);
-ret = (av_gettime_relative() - start)/OPENCL_NB_ITER;
+ret = (av_gettime() - start)/OPENCL_NB_ITER;
 end:
 if (kernel)
 clReleaseKernel(kernel);
@@ -224,7 +224,7 @@ int opt_opencl_bench(void *optctx, const char *opt, const char *arg)
 av_log(NULL, AV_LOG_ERROR, "No OpenCL device detected!\n");
 return AVERROR(EINVAL);
 }
-if (!(devices = av_malloc_array(nb_devices, sizeof(OpenCLDeviceBenchmark)))) {
+if (!(devices = av_malloc(sizeof(OpenCLDeviceBenchmark) * nb_devices))) {
 av_log(NULL, AV_LOG_ERROR, "Could not allocate buffer\n");
 return AVERROR(ENOMEM);
 }
@@ -5,14 +5,6 @@
 # first so "all" becomes default target
 all: all-yes
-DEFAULT_YASMD=.dbg
-ifeq (1, DBG)
-YASMD=$(DEFAULT_YASMD)
-else
-YASMD=
-endif
 ifndef SUBDIR
 ifndef V
@@ -59,9 +51,6 @@ COMPILE_HOSTC = $(call COMPILE,HOSTCC)
%.o: %.cpp %.o: %.cpp
$(COMPILE_CXX) $(COMPILE_CXX)
%.o: %.m
$(COMPILE_C)
%.s: %.c %.s: %.c
$(CC) $(CPPFLAGS) $(CFLAGS) -S -o $@ $< $(CC) $(CPPFLAGS) $(CFLAGS) -S -o $@ $<
@@ -101,7 +90,7 @@ include $(SRC_PATH)/arch.mak
OBJS += $(OBJS-yes) OBJS += $(OBJS-yes)
SLIBOBJS += $(SLIBOBJS-yes) SLIBOBJS += $(SLIBOBJS-yes)
FFLIBS := $($(NAME)_FFLIBS) $(FFLIBS-yes) $(FFLIBS) FFLIBS := $(FFLIBS-yes) $(FFLIBS)
TESTPROGS += $(TESTPROGS-yes) TESTPROGS += $(TESTPROGS-yes)
LDLIBS = $(FFLIBS:%=%$(BUILDSUF)) LDLIBS = $(FFLIBS:%=%$(BUILDSUF))
@@ -146,17 +135,17 @@ $(TOOLOBJS): | tools
OBJDIRS := $(OBJDIRS) $(dir $(OBJS) $(HOBJS) $(HOSTOBJS) $(SLIBOBJS) $(TESTOBJS)) OBJDIRS := $(OBJDIRS) $(dir $(OBJS) $(HOBJS) $(HOSTOBJS) $(SLIBOBJS) $(TESTOBJS))
CLEANSUFFIXES = *.d *.o *~ *.h.c *.map *.ver *.ho *.gcno *.gcda *$(DEFAULT_YASMD).asm CLEANSUFFIXES = *.d *.o *~ *.h.c *.map *.ver *.ho *.gcno *.gcda
DISTCLEANSUFFIXES = *.pc DISTCLEANSUFFIXES = *.pc
LIBSUFFIXES = *.a *.lib *.so *.so.* *.dylib *.dll *.def *.dll.a LIBSUFFIXES = *.a *.lib *.so *.so.* *.dylib *.dll *.def *.dll.a
define RULES define RULES
clean:: clean::
$(RM) $(OBJS) $(OBJS:.o=.d) $(OBJS:.o=$(DEFAULT_YASMD).d) $(RM) $(OBJS) $(OBJS:.o=.d)
$(RM) $(HOSTPROGS) $(RM) $(HOSTPROGS)
$(RM) $(TOOLS) $(RM) $(TOOLS)
endef endef
$(eval $(RULES)) $(eval $(RULES))
-include $(wildcard $(OBJS:.o=.d) $(HOSTOBJS:.o=.d) $(TESTOBJS:.o=.d) $(HOBJS:.o=.d) $(SLIBOBJS:.o=.d)) $(OBJS:.o=$(DEFAULT_YASMD).d) -include $(wildcard $(OBJS:.o=.d) $(HOSTOBJS:.o=.d) $(TESTOBJS:.o=.d) $(HOBJS:.o=.d) $(SLIBOBJS:.o=.d))


@@ -13,8 +13,7 @@
 //
 //  You should have received a copy of the GNU General Public License
 //  along with this program; if not, write to the Free Software
-//  Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
-//  MA 02110-1301 USA, or visit
+//  Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA, or visit
 //  http://www.gnu.org/copyleft/gpl.html .
 //
 // As a special exception, I give you permission to link to the
@@ -805,7 +804,7 @@ struct AVS_Library {
 AVSC_INLINE AVS_Library * avs_load_library() {
   AVS_Library *library = (AVS_Library *)malloc(sizeof(AVS_Library));
-  if (!library)
+  if (library == NULL)
     return NULL;
   library->handle = LoadLibrary("avisynth");
   if (library->handle == NULL)
@@ -870,7 +869,7 @@ fail:
 }
 AVSC_INLINE void avs_free_library(AVS_Library *library) {
-  if (!library)
+  if (library == NULL)
     return;
   FreeLibrary(library->handle);
   free(library);


@@ -13,8 +13,7 @@
 //
 //  You should have received a copy of the GNU General Public License
 //  along with this program; if not, write to the Free Software
-//  Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
-//  MA 02110-1301 USA, or visit
+//  Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA, or visit
 //  http://www.gnu.org/copyleft/gpl.html .
 //
 // As a special exception, I give you permission to link to the
@@ -513,21 +512,21 @@ AVSC_INLINE AVS_Value avs_array_elt(AVS_Value v, int index)
 // only use these functions on am AVS_Value that does not already have
 // an active value.  Remember, treat AVS_Value as a fat pointer.
 AVSC_INLINE AVS_Value avs_new_value_bool(int v0)
-{ AVS_Value v = {0}; v.type = 'b'; v.d.boolean = v0 == 0 ? 0 : 1; return v; }
+{ AVS_Value v; v.type = 'b'; v.d.boolean = v0 == 0 ? 0 : 1; return v; }
 AVSC_INLINE AVS_Value avs_new_value_int(int v0)
-{ AVS_Value v = {0}; v.type = 'i'; v.d.integer = v0; return v; }
+{ AVS_Value v; v.type = 'i'; v.d.integer = v0; return v; }
 AVSC_INLINE AVS_Value avs_new_value_string(const char * v0)
-{ AVS_Value v = {0}; v.type = 's'; v.d.string = v0; return v; }
+{ AVS_Value v; v.type = 's'; v.d.string = v0; return v; }
 AVSC_INLINE AVS_Value avs_new_value_float(float v0)
-{ AVS_Value v = {0}; v.type = 'f'; v.d.floating_pt = v0; return v;}
+{ AVS_Value v; v.type = 'f'; v.d.floating_pt = v0; return v;}
 AVSC_INLINE AVS_Value avs_new_value_error(const char * v0)
-{ AVS_Value v = {0}; v.type = 'e'; v.d.string = v0; return v; }
+{ AVS_Value v; v.type = 'e'; v.d.string = v0; return v; }
 #ifndef AVSC_NO_DECLSPEC
 AVSC_INLINE AVS_Value avs_new_value_clip(AVS_Clip * v0)
-{ AVS_Value v = {0}; avs_set_to_clip(&v, v0); return v; }
+{ AVS_Value v; avs_set_to_clip(&v, v0); return v; }
 #endif
 AVSC_INLINE AVS_Value avs_new_value_array(AVS_Value * v0, int size)
-{ AVS_Value v = {0}; v.type = 'a'; v.d.array = v0; v.array_size = size; return v; }
+{ AVS_Value v; v.type = 'a'; v.d.array = v0; v.array_size = size; return v; }
 /////////////////////////////////////////////////////////////////////
 //
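The newer code (left column) zero-initializes each value with "= {0}", so members a given constructor never touches — such as array_size in the tagged union — are not left indeterminate. A reduced sketch of the idea with a hypothetical struct, not the real AVS_Value layout:

```c
#include <assert.h>

/* Tagged value with several members; a constructor that sets only
 * some of them would leave the rest holding garbage unless the whole
 * struct is zero-initialized first. */
struct value {
    char   type;
    int    array_size;
    double num;
};

static struct value make_float(double d)
{
    struct value v = {0};   /* every member starts at zero */
    v.type = 'f';
    v.num  = d;
    return v;
}
```
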


@@ -52,8 +52,8 @@ namespace avxsynth {
 //
 // Functions
 //
-#define MAKEDWORD(a,b,c,d) (((a) << 24) | ((b) << 16) | ((c) << 8) | (d))
-#define MAKEWORD(a,b) (((a) << 8) | (b))
+#define MAKEDWORD(a,b,c,d) ((a << 24) | (b << 16) | (c << 8) | (d))
+#define MAKEWORD(a,b) ((a << 8) | (b))
 #define lstrlen strlen
 #define lstrcpy strcpy
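The 2.2-era macros on the added side drop the parentheses around each argument, which miscompiles any argument whose top-level operator binds more loosely than `<<` (for example `x | y`). A small demonstration with hypothetical macro names:

```c
#include <assert.h>

/* Unparenthesized form, as in the 2.2-era header: the shift is applied
 * to only part of a compound argument like (1 | 2). */
#define MAKEWORD_BAD(a,b)  ((a << 8) | (b))
/* Parenthesized form: the whole argument is shifted, as intended. */
#define MAKEWORD_GOOD(a,b) (((a) << 8) | (b))
```

With the argument `1 | 2`, the bad form expands to `((1 | 2 << 8) | 0)` and yields 513, while the good form computes `(3 << 8)` and yields 768.
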


@@ -1,35 +0,0 @@
/*
* Work around broken floating point limits on some systems.
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include_next <float.h>
#ifdef FLT_MAX
#undef FLT_MAX
#define FLT_MAX 3.40282346638528859812e+38F
#undef FLT_MIN
#define FLT_MIN 1.17549435082228750797e-38F
#undef DBL_MAX
#define DBL_MAX ((double)1.79769313486231570815e+308L)
#undef DBL_MIN
#define DBL_MIN ((double)2.22507385850720138309e-308L)
#endif
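The deleted header above pins the IEEE-754 single- and double-precision limits for toolchains whose <float.h> is broken. What makes the workaround drop-in is that on a conforming implementation the same literals round to the standard macros, which can be checked directly (this assumes IEEE-754 floating point):

```c
#include <assert.h>
#include <float.h>

/* On an IEEE-754 system, the decimal literals used by the workaround
 * header round to exactly the values of the standard limit macros. */
static int limits_ok(void)
{
    return FLT_MAX == 3.40282346638528859812e+38F &&
           DBL_MAX == 1.79769313486231570815e+308;
}
```
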


@@ -1,22 +0,0 @@
/*
* Work around broken floating point limits on some systems.
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include_next <limits.h>
#include <float.h>


@@ -54,7 +54,7 @@ static int getopt(int argc, char *argv[], char *opts)
         }
     }
     optopt = c = argv[optind][sp];
-    if (c == ':' || !(cp = strchr(opts, c))) {
+    if (c == ':' || (cp = strchr(opts, c)) == NULL) {
         fprintf(stderr, ": illegal option -- %c\n", c);
         if (argv[optind][++sp] == '\0') {
             optind++;


@@ -39,7 +39,6 @@
 #include <windows.h>
 #include <process.h>

-#include "libavutil/attributes.h"
 #include "libavutil/common.h"
 #include "libavutil/internal.h"
 #include "libavutil/mem.h"
@@ -55,30 +54,36 @@ typedef struct pthread_t {
  * not mutexes */
 typedef CRITICAL_SECTION pthread_mutex_t;

-/* This is the CONDITION_VARIABLE typedef for using Windows' native
- * conditional variables on kernels 6.0+. */
-#if HAVE_CONDITION_VARIABLE_PTR
-typedef CONDITION_VARIABLE pthread_cond_t;
-#else
+/* This is the CONDITIONAL_VARIABLE typedef for using Window's native
+ * conditional variables on kernels 6.0+.
+ * MinGW does not currently have this typedef. */
 typedef struct pthread_cond_t {
-    void *Ptr;
+    void *ptr;
 } pthread_cond_t;
-#endif

-#if _WIN32_WINNT >= 0x0600
-#define InitializeCriticalSection(x) InitializeCriticalSectionEx(x, 0, 0)
-#define WaitForSingleObject(a, b) WaitForSingleObjectEx(a, b, FALSE)
+/* function pointers to conditional variable API on windows 6.0+ kernels */
+#if _WIN32_WINNT < 0x0600
+static void (WINAPI *cond_broadcast)(pthread_cond_t *cond);
+static void (WINAPI *cond_init)(pthread_cond_t *cond);
+static void (WINAPI *cond_signal)(pthread_cond_t *cond);
+static BOOL (WINAPI *cond_wait)(pthread_cond_t *cond, pthread_mutex_t *mutex,
+                                DWORD milliseconds);
+#else
+#define cond_init      InitializeConditionVariable
+#define cond_broadcast WakeAllConditionVariable
+#define cond_signal    WakeConditionVariable
+#define cond_wait      SleepConditionVariableCS
 #endif

-static av_unused unsigned __stdcall attribute_align_arg win32thread_worker(void *arg)
+static unsigned __stdcall attribute_align_arg win32thread_worker(void *arg)
 {
     pthread_t *h = arg;
     h->ret = h->func(h->arg);
     return 0;
 }

-static av_unused int pthread_create(pthread_t *thread, const void *unused_attr,
+static int pthread_create(pthread_t *thread, const void *unused_attr,
                           void *(*start_routine)(void*), void *arg)
 {
     thread->func   = start_routine;
     thread->arg    = arg;
@@ -87,7 +92,7 @@ static av_unused int pthread_create(pthread_t *thread, const void *unused_attr,
     return !thread->handle;
 }

-static av_unused void pthread_join(pthread_t thread, void **value_ptr)
+static void pthread_join(pthread_t thread, void **value_ptr)
 {
     DWORD ret = WaitForSingleObject(thread.handle, INFINITE);
     if (ret != WAIT_OBJECT_0)
@@ -118,36 +123,6 @@ static inline int pthread_mutex_unlock(pthread_mutex_t *m)
     return 0;
 }

-#if _WIN32_WINNT >= 0x0600
-static inline int pthread_cond_init(pthread_cond_t *cond, const void *unused_attr)
-{
-    InitializeConditionVariable(cond);
-    return 0;
-}
-
-/* native condition variables do not destroy */
-static inline void pthread_cond_destroy(pthread_cond_t *cond)
-{
-    return;
-}
-
-static inline void pthread_cond_broadcast(pthread_cond_t *cond)
-{
-    WakeAllConditionVariable(cond);
-}
-
-static inline int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex)
-{
-    SleepConditionVariableCS(cond, mutex, INFINITE);
-    return 0;
-}
-
-static inline void pthread_cond_signal(pthread_cond_t *cond)
-{
-    WakeConditionVariable(cond);
-}
-#else // _WIN32_WINNT < 0x0600
-
 /* for pre-Windows 6.0 platforms we need to define and use our own condition
  * variable and api */
 typedef struct win32_cond_t {
@@ -159,41 +134,33 @@ typedef struct win32_cond_t {
     volatile int is_broadcast;
 } win32_cond_t;

-/* function pointers to conditional variable API on windows 6.0+ kernels */
-static void (WINAPI *cond_broadcast)(pthread_cond_t *cond);
-static void (WINAPI *cond_init)(pthread_cond_t *cond);
-static void (WINAPI *cond_signal)(pthread_cond_t *cond);
-static BOOL (WINAPI *cond_wait)(pthread_cond_t *cond, pthread_mutex_t *mutex,
-                                DWORD milliseconds);
-
-static av_unused int pthread_cond_init(pthread_cond_t *cond, const void *unused_attr)
+static void pthread_cond_init(pthread_cond_t *cond, const void *unused_attr)
 {
     win32_cond_t *win32_cond = NULL;

     if (cond_init) {
         cond_init(cond);
-        return 0;
+        return;
     }

     /* non native condition variables */
     win32_cond = av_mallocz(sizeof(win32_cond_t));
     if (!win32_cond)
-        return ENOMEM;
-    cond->Ptr = win32_cond;
+        return;
+    cond->ptr = win32_cond;
     win32_cond->semaphore = CreateSemaphore(NULL, 0, 0x7fffffff, NULL);
     if (!win32_cond->semaphore)
-        return ENOMEM;
+        return;
     win32_cond->waiters_done = CreateEvent(NULL, TRUE, FALSE, NULL);
     if (!win32_cond->waiters_done)
-        return ENOMEM;
+        return;

     pthread_mutex_init(&win32_cond->mtx_waiter_count, NULL);
     pthread_mutex_init(&win32_cond->mtx_broadcast, NULL);
-    return 0;
 }

-static av_unused void pthread_cond_destroy(pthread_cond_t *cond)
+static void pthread_cond_destroy(pthread_cond_t *cond)
 {
-    win32_cond_t *win32_cond = cond->Ptr;
+    win32_cond_t *win32_cond = cond->ptr;
     /* native condition variables do not destroy */
     if (cond_init)
         return;
@@ -204,12 +171,12 @@ static av_unused void pthread_cond_destroy(pthread_cond_t *cond)
     pthread_mutex_destroy(&win32_cond->mtx_waiter_count);
     pthread_mutex_destroy(&win32_cond->mtx_broadcast);
     av_freep(&win32_cond);
-    cond->Ptr = NULL;
+    cond->ptr = NULL;
 }

-static av_unused void pthread_cond_broadcast(pthread_cond_t *cond)
+static void pthread_cond_broadcast(pthread_cond_t *cond)
 {
-    win32_cond_t *win32_cond = cond->Ptr;
+    win32_cond_t *win32_cond = cond->ptr;
     int have_waiter;

     if (cond_broadcast) {
pthread_mutex_unlock(&win32_cond->mtx_broadcast); pthread_mutex_unlock(&win32_cond->mtx_broadcast);
} }
static av_unused int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex) static int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex)
{ {
win32_cond_t *win32_cond = cond->Ptr; win32_cond_t *win32_cond = cond->ptr;
int last_waiter; int last_waiter;
if (cond_wait) { if (cond_wait) {
cond_wait(cond, mutex, INFINITE); cond_wait(cond, mutex, INFINITE);
@@ -270,9 +237,9 @@ static av_unused int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex)
     return pthread_mutex_lock(mutex);
 }

-static av_unused void pthread_cond_signal(pthread_cond_t *cond)
+static void pthread_cond_signal(pthread_cond_t *cond)
 {
-    win32_cond_t *win32_cond = cond->Ptr;
+    win32_cond_t *win32_cond = cond->ptr;
     int have_waiter;
     if (cond_signal) {
         cond_signal(cond);
@@ -294,9 +261,8 @@ static av_unused void pthread_cond_signal(pthread_cond_t *cond)
     pthread_mutex_unlock(&win32_cond->mtx_broadcast);
 }

-#endif
-
-static av_unused void w32thread_init(void)
+static void w32thread_init(void)
 {
 #if _WIN32_WINNT < 0x0600
     HANDLE kernel_dll = GetModuleHandle(TEXT("kernel32.dll"));
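The header being diffed here emulates a small subset of POSIX condition variables on Windows (falling back to a semaphore-based implementation before Vista). The contract it emulates is easiest to see with real pthreads: wait is called with the mutex held, and the predicate is rechecked in a loop to guard against spurious wakeups. A minimal POSIX sketch:

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t mtx  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int ready;

static void *worker(void *arg)
{
    pthread_mutex_lock(&mtx);
    ready = 1;
    pthread_cond_signal(&cond);   /* the shim maps this to WakeConditionVariable on 6.0+ */
    pthread_mutex_unlock(&mtx);
    return arg;
}

static int wait_ready(void)
{
    pthread_t th;
    pthread_create(&th, NULL, worker, NULL);
    pthread_mutex_lock(&mtx);
    while (!ready)                /* loop guards against spurious wakeups */
        pthread_cond_wait(&cond, &mtx);
    pthread_mutex_unlock(&mtx);
    pthread_join(th, NULL);
    return ready;
}
```
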

configure (vendored)

File diff suppressed because it is too large

@@ -2,442 +2,43 @@ Never assume the API of libav* to be stable unless at least 1 month has passed
 since the last major version increase or the API was added.

 The last version increases were:
-libavcodec:    2014-08-09
-libavdevice:   2014-08-09
-libavfilter:   2014-08-09
-libavformat:   2014-08-09
-libavresample: 2014-08-09
-libpostproc:   2014-08-09
-libswresample: 2014-08-09
-libswscale:    2014-08-09
-libavutil:     2014-08-09
+libavcodec:    2013-03-xx
+libavdevice:   2013-03-xx
+libavfilter:   2013-12-xx
+libavformat:   2013-03-xx
+libavresample: 2012-10-05
+libpostproc:   2011-04-18
+libswresample: 2011-09-19
+libswscale:    2011-06-20
+libavutil:     2012-10-22

 API changes, most recent first:

--------- 8< --------- FFmpeg 2.6 was cut here -------- 8< ---------
+2014-xx-xx - xxxxxxx - lavu 53.05.0 - frame.h
2015-03-04 - cca4476 - lavf 56.25.100
Add avformat_flush()
2015-03-03 - 81a9126 - lavf 56.24.100
Add avio_put_str16be()
2015-02-19 - 560eb71 / 31d2039 - lavc 56.23.100 / 56.13.0
Add width, height, coded_width, coded_height and format to
AVCodecParserContext.
2015-02-19 - e375511 / 5b1d9ce - lavu 54.19.100 / 54.9.0
Add AV_PIX_FMT_QSV for QSV hardware acceleration.
2015-02-14 - ba22295 - lavc 56.21.102
Deprecate VIMA decoder.
2015-01-27 - 62a82c6 / 728685f - lavc 56.21.100 / 56.12.0, lavu 54.18.100 / 54.8.0 - avcodec.h, frame.h
Add AV_PKT_DATA_AUDIO_SERVICE_TYPE and AV_FRAME_DATA_AUDIO_SERVICE_TYPE for
storing the audio service type as side data.
2015-01-16 - a47c933 - lavf 56.19.100 - avformat.h
Add data_codec and data_codec_id for storing codec of data stream
2015-01-11 - 007c33d - lavd 56.4.100 - avdevice.h
Add avdevice_list_input_sources().
Add avdevice_list_output_sinks().
2014-12-25 - d7aaeea / c220a60 - lavc 56.19.100 / 56.10.0 - vdpau.h
Add av_vdpau_get_surface_parameters().
2014-12-25 - ddb9a24 / 6c99c92 - lavc 56.18.100 / 56.9.0 - avcodec.h
Add AV_HWACCEL_FLAG_ALLOW_HIGH_DEPTH flag to av_vdpau_bind_context().
2014-12-25 - d16079a / 57b6704 - lavc 56.17.100 / 56.8.0 - avcodec.h
Add AVCodecContext.sw_pix_fmt.
2014-12-04 - 6e9ac02 - lavc 56.14.100 - dv_profile.h
Add av_dv_codec_profile2().
-------- 8< --------- FFmpeg 2.5 was cut here -------- 8< ---------
2014-11-21 - ab922f9 - lavu 54.15.100 - dict.h
Add av_dict_get_string().
2014-11-18 - a54a51c - lavu 54.14.100 - float_dsp.h
Add avpriv_float_dsp_alloc().
2014-11-16 - 6690d4c3 - lavf 56.13.100 - avformat.h
Add AVStream.recommended_encoder_configuration with accessors.
2014-11-16 - bee5844d - lavu 54.13.100 - opt.h
Add av_opt_serialize().
2014-11-16 - eec69332 - lavu 54.12.100 - opt.h
Add av_opt_is_set_to_default().
2014-11-06 - 44fa267 / 5e80fb7 - lavc 56.11.100 / 56.6.0 - vorbis_parser.h
Add a public API for parsing vorbis packets.
2014-10-15 - 17085a0 / 7ea1b34 - lavc 56.7.100 / 56.5.0 - avcodec.h
Replace AVCodecContext.time_base used for decoding
with AVCodecContext.framerate.
2014-10-15 - 51c810e / d565fef1 - lavc 56.6.100 / 56.4.0 - avcodec.h
Add AV_HWACCEL_FLAG_IGNORE_LEVEL flag to av_vdpau_bind_context().
2014-10-13 - da21895 / 2df0c32e - lavc 56.5.100 / 56.3.0 - avcodec.h
Add AVCodecContext.initial_padding. Deprecate the use of AVCodecContext.delay
for audio encoding.
2014-10-08 - bb44f7d / 5a419b2 - lavu 54.10.100 / 54.4.0 - pixdesc.h
Add API to return the name of frame and context color properties.
2014-10-06 - a61899a / e3e158e - lavc 56.3.100 / 56.2.0 - vdpau.h
Add av_vdpau_bind_context(). This function should now be used for creating
(or resetting) a AVVDPAUContext instead of av_vdpau_alloc_context().
2014-10-02 - cdd6f05 - lavc 56.2.100 - avcodec.h
2014-10-02 - cdd6f05 - lavu 54.9.100 - frame.h
Add AV_FRAME_DATA_SKIP_SAMPLES. Add lavc CODEC_FLAG2_SKIP_MANUAL and
AVOption "skip_manual", which makes lavc export skip information via
AV_FRAME_DATA_SKIP_SAMPLES AVFrame side data, instead of skipping and
discarding samples automatically.
2014-10-02 - 0d92b0d - lavu 54.8.100 - avstring.h
Add av_match_list()
2014-09-24 - ac68295 - libpostproc 53.1.100
Add visualization support
2014-09-19 - 6edd6a4 - lavc 56.1.101 - dv_profile.h
deprecate avpriv_dv_frame_profile2(), which was made public by accident.
-------- 8< --------- FFmpeg 2.4 was cut here -------- 8< ---------
2014-08-25 - 215db29 / b263f8f - lavf 56.3.100 / 56.3.0 - avformat.h
Add AVFormatContext.max_ts_probe.
2014-08-28 - f30a815 / 9301486 - lavc 56.1.100 / 56.1.0 - avcodec.h
Add AV_PKT_DATA_STEREO3D to export container-level stereo3d information.
2014-08-23 - 8fc9bd0 - lavu 54.7.100 - dict.h
AV_DICT_DONT_STRDUP_KEY and AV_DICT_DONT_STRDUP_VAL arguments are now
freed even on error. This is consistent with the behaviour all users
of it we could find expect.
2014-08-21 - 980a5b0 - lavu 54.6.100 - frame.h motion_vector.h
Add AV_FRAME_DATA_MOTION_VECTORS side data and AVMotionVector structure
2014-08-16 - b7d5e01 - lswr 1.1.100 - swresample.h
Add AVFrame based API
2014-08-16 - c2829dc - lavu 54.4.100 - dict.h
Add av_dict_set_int helper function.
2014-08-13 - c8571c6 / 8ddc326 - lavu 54.3.100 / 54.3.0 - mem.h
Add av_strndup().
2014-08-13 - 2ba4577 / a8c104a - lavu 54.2.100 / 54.2.0 - opt.h
Add av_opt_get_dict_val/set_dict_val with AV_OPT_TYPE_DICT to support
dictionary types being set as options.
2014-08-13 - afbd4b8 - lavf 56.01.0 - avformat.h
Add AVFormatContext.event_flags and AVStream.event_flags for signaling to
the user when events happen in the file/stream.
2014-08-10 - 78eaaa8 / fb1ddcd - lavr 2.1.0 - avresample.h
Add avresample_convert_frame() and avresample_config().
2014-08-10 - 78eaaa8 / fb1ddcd - lavu 54.1.100 / 54.1.0 - error.h
Add AVERROR_INPUT_CHANGED and AVERROR_OUTPUT_CHANGED.
2014-08-08 - 3841f2a / d35b94f - lavc 55.73.102 / 55.57.4 - avcodec.h
Deprecate FF_IDCT_XVIDMMX define and xvidmmx idct option.
Replaced by FF_IDCT_XVID and xvid respectively.
2014-08-08 - 5c3c671 - lavf 55.53.100 - avio.h
Add avio_feof() and deprecate url_feof().
2014-08-07 - bb78903 - lsws 2.1.3 - swscale.h
sws_getContext is not going to be removed in the future.
2014-08-07 - a561662 / ad1ee5f - lavc 55.73.101 / 55.57.3 - avcodec.h
reordered_opaque is not going to be removed in the future.
2014-08-02 - 28a2107 - lavu 52.98.100 - pixelutils.h
Add pixelutils API with SAD functions
2014-08-04 - 6017c98 / e9abafc - lavu 52.97.100 / 53.22.0 - pixfmt.h
Add AV_PIX_FMT_YA16 pixel format for 16 bit packed gray with alpha.
2014-08-04 - 4c8bc6f / e96c3b8 - lavu 52.96.101 / 53.21.1 - avstring.h
Rename AV_PIX_FMT_Y400A to AV_PIX_FMT_YA8 to better identify the format.
An alias pixel format and color space name are provided for compatibility.
2014-08-04 - 073c074 / d2962e9 - lavu 52.96.100 / 53.21.0 - pixdesc.h
Support name aliases for pixel formats.
2014-08-03 - 71d008e / 1ef9e83 - lavc 55.72.101 / 55.57.2 - avcodec.h
2014-08-03 - 71d008e / 1ef9e83 - lavu 52.95.100 / 53.20.0 - frame.h
Deprecate AVCodecContext.dtg_active_format and use side-data instead.
2014-08-03 - e680c73 - lavc 55.72.100 - avcodec.h
Add get_pixels() to AVDCT
2014-08-03 - 9400603 / 9f17685 - lavc 55.71.101 / 55.57.1 - avcodec.h
Deprecate unused FF_IDCT_IPP define and ipp avcodec option.
Deprecate unused FF_DEBUG_PTS define and pts avcodec option.
Deprecate unused FF_CODER_TYPE_DEFLATE define and deflate avcodec option.
Deprecate unused FF_DCT_INT define and int avcodec option.
Deprecate unused avcodec option scenechange_factor.
2014-07-30 - ba3e331 - lavu 52.94.100 - frame.h
Add av_frame_side_data_name()
2014-07-29 - 80a3a66 / 3a19405 - lavf 56.01.100 / 56.01.0 - avformat.h
Add mime_type field to AVProbeData, which now MUST be initialized in
order to avoid uninitialized reads of the mime_type pointer, likely
leading to crashes.
Typically, this means you will do 'AVProbeData pd = { 0 };' instead of
'AVProbeData pd;'.
2014-07-29 - 31e0b5d / 69e7336 - lavu 52.92.100 / 53.19.0 - avstring.h
Make name matching function from lavf public as av_match_name().
2014-07-28 - 2e5c8b0 / c5fca01 - lavc 55.71.100 / 55.57.0 - avcodec.h
Add AV_CODEC_PROP_REORDER to mark codecs supporting frame reordering.
2014-07-27 - ff9a154 - lavf 55.50.100 - avformat.h
New field int64_t probesize2 instead of deprecated
field int probesize.
2014-07-27 - 932ff70 - lavc 55.70.100 - avdct.h
Add AVDCT / avcodec_dct_alloc() / avcodec_dct_init().
2014-07-23 - 8a4c086 - lavf 55.49.100 - avio.h
Add avio_read_to_bprint()
-------- 8< --------- FFmpeg 2.3 was cut here -------- 8< ---------
2014-07-14 - 62227a7 - lavf 55.47.100 - avformat.h
Add av_stream_get_parser()
2014-07-09 - c67690f / a54f03b - lavu 52.92.100 / 53.18.0 - display.h
Add av_display_matrix_flip() to flip the transformation matrix.
2014-07-09 - 1b58f13 / f6ee61f - lavc 55.69.100 / 55.56.0 - dv_profile.h
Add a public API for DV profile handling.
2014-06-20 - 0dceefc / 9e500ef - lavu 52.90.100 / 53.17.0 - imgutils.h
Add av_image_check_sar().
2014-06-20 - 4a99333 / 874390e - lavc 55.68.100 / 55.55.0 - avcodec.h
Add av_packet_rescale_ts() to simplify timestamp conversion.
2014-06-18 - ac293b6 / 194be1f - lavf 55.44.100 / 55.20.0 - avformat.h
The proper way for providing a hint about the desired timebase to the muxers
is now setting AVStream.time_base, instead of AVStream.codec.time_base as was
done previously. The old method is now deprecated.
2014-06-11 - 67d29da - lavc 55.66.101 - avcodec.h
Increase FF_INPUT_BUFFER_PADDING_SIZE to 32 due to some corner cases needing
it
2014-06-10 - 5482780 - lavf 55.43.100 - avformat.h
New field int64_t max_analyze_duration2 instead of deprecated
int max_analyze_duration.
2014-05-30 - 00759d7 - lavu 52.89.100 - opt.h
Add av_opt_copy()
2014-06-01 - 03bb99a / 0957b27 - lavc 55.66.100 / 55.54.0 - avcodec.h
Add AVCodecContext.side_data_only_packets to allow encoders to output packets
with only side data. This option may become mandatory in the future, so all
users are recommended to update their code and enable this option.
2014-06-01 - 6e8e9f1 / 8c02adc - lavu 52.88.100 / 53.16.0 - frame.h, pixfmt.h
Move all color-related enums (AVColorPrimaries, AVColorSpace, AVColorRange,
AVColorTransferCharacteristic, and AVChromaLocation) inside lavu.
And add AVFrame fields for them.
2014-05-29 - bdb2e80 / b2d4565 - lavr 1.3.0 - avresample.h
Add avresample_max_output_samples
2014-05-28 - d858ee7 / 6d21259 - lavf 55.42.100 / 55.19.0 - avformat.h
Add strict_std_compliance and related AVOptions to support experimental
muxing.
2014-05-26 - 55cc60c - lavu 52.87.100 - threadmessage.h
Add thread message queue API.
2014-05-26 - c37d179 - lavf 55.41.100 - avformat.h
Add format_probesize to AVFormatContext.
2014-05-20 - 7d25af1 / c23c96b - lavf 55.39.100 / 55.18.0 - avformat.h
Add av_stream_get_side_data() to access stream-level side data
in the same way as av_packet_get_side_data().
2014-05-20 - 7336e39 - lavu 52.86.100 - fifo.h
Add av_fifo_alloc_array() function.
2014-05-19 - ef1d4ee / bddd8cb - lavu 52.85.100 / 53.15.0 - frame.h, display.h
Add AV_FRAME_DATA_DISPLAYMATRIX for exporting frame-level
spatial rendering on video frames for proper display.
2014-05-19 - ef1d4ee / bddd8cb - lavc 55.64.100 / 55.53.0 - avcodec.h
Add AV_PKT_DATA_DISPLAYMATRIX for exporting packet-level
spatial rendering on video frames for proper display.
2014-05-19 - 999a99c / a312f71 - lavf 55.38.101 / 55.17.1 - avformat.h
Deprecate AVStream.pts and the AVFrac struct, which was its only use case.
See use av_stream_get_end_pts()
2014-05-18 - 68c0518 / fd05602 - lavc 55.63.100 / 55.52.0 - avcodec.h
Add avcodec_free_context(). From now on it should be used for freeing
AVCodecContext.
2014-05-17 - 0eec06e / 1bd0bdc - lavu 52.84.100 / 54.5.0 - time.h
Add av_gettime_relative() av_gettime_relative_is_monotonic()
2014-05-15 - eacf7d6 / 0c1959b - lavf 55.38.100 / 55.17.0 - avformat.h
Add AVFMT_FLAG_BITEXACT flag. Muxers now use it instead of checking
CODEC_FLAG_BITEXACT on the first stream.
2014-05-15 - 96cb4c8 - lswr 0.19.100 - swresample.h
Add swr_close()
2014-05-11 - 14aef38 / 66e6c8a - lavu 52.83.100 / 53.14.0 - pixfmt.h
Add AV_PIX_FMT_VDA for new-style VDA acceleration.
2014-05-07 - 351f611 - lavu 52.82.100 - fifo.h
Add av_fifo_freep() function.
2014-05-02 - ba52fb11 - lavu 52.81.100 - opt.h
Add av_opt_set_dict2() function.
2014-05-01 - e77b985 / a2941c8 - lavc 55.60.103 / 55.50.3 - avcodec.h
Deprecate CODEC_FLAG_MV0. It is replaced by the flag "mv0" in the
"mpv_flags" private option of the mpegvideo encoders.
2014-05-01 - e40ae8c / 6484149 - lavc 55.60.102 / 55.50.2 - avcodec.h
Deprecate CODEC_FLAG_GMC. It is replaced by the "gmc" private option of the
libxvid encoder.
2014-05-01 - 1851643 / b2c3171 - lavc 55.60.101 / 55.50.1 - avcodec.h
Deprecate CODEC_FLAG_NORMALIZE_AQP. It is replaced by the flag "naq" in the
"mpv_flags" private option of the mpegvideo encoders.
2014-05-01 - cac07d0 / 5fcceda - avcodec.h
Deprecate CODEC_FLAG_INPUT_PRESERVED. Its functionality is replaced by passing
reference-counted frames to encoders.
2014-04-30 - 617e866 - lavu 52.81.100 - pixdesc.h
Add av_find_best_pix_fmt_of_2(), av_get_pix_fmt_loss()
Deprecate avcodec_get_pix_fmt_loss(), avcodec_find_best_pix_fmt_of_2()
2014-04-29 - 1bf6396 - lavc 55.60.100 - avcodec.h
Add AVCodecDescriptor.mime_types field.
2014-04-29 - b804eb4 - lavu 52.80.100 - hash.h
Add av_hash_final_bin(), av_hash_final_hex() and av_hash_final_b64().
2014-03-07 - 8b2a130 - lavc 55.50.0 / 55.53.100 - dxva2.h
Add FF_DXVA2_WORKAROUND_INTEL_CLEARVIDEO for old Intel GPUs.
2014-04-22 - 502512e /dac7e8a - lavu 53.13.0 / 52.78.100 - avutil.h
Add av_get_time_base_q().
2014-04-17 - a8d01a7 / 0983d48 - lavu 53.12.0 / 52.77.100 - crc.h
Add AV_CRC_16_ANSI_LE crc variant.
2014-04-15 - ef818d8 - lavf 55.37.101 - avformat.h
Add av_format_inject_global_side_data()
2014-04-12 - 4f698be - lavu 52.76.100 - log.h
Add av_log_get_flags()
2014-04-11 - 6db42a2b - lavd 55.12.100 - avdevice.h
Add avdevice_capabilities_create() function.
Add avdevice_capabilities_free() function.
2014-04-07 - 0a1cc04 / 8b17243 - lavu 52.75.100 / 53.11.0 - pixfmt.h
Add AV_PIX_FMT_YVYU422 pixel format.
2014-04-04 - c1d0536 / 8542f9c - lavu 52.74.100 / 53.10.0 - replaygain.h
Full scale for peak values is now 100000 (instead of UINT32_MAX) and values
may overflow.
2014-04-03 - c16e006 / 7763118 - lavu 52.73.100 / 53.9.0 - log.h
Add AV_LOG(c) macro to have 256 color debug messages.
2014-04-03 - eaed4da9 - lavu 52.72.100 - opt.h
Add AV_OPT_MULTI_COMPONENT_RANGE define to allow return
multi-component option ranges.
2014-03-29 - cd50a44b - lavu 52.70.100 - mem.h
Add av_dynarray_add_nofree() function.
2014-02-24 - 3e1f241 / d161ae0 - lavu 52.69.100 / 53.8.0 - frame.h
Add av_frame_remove_side_data() for removing a single side data
instance from a frame.
2014-03-24 - 83e8978 / 5a7e35d - lavu 52.68.100 / 53.7.0 - frame.h, replaygain.h
Add AV_FRAME_DATA_REPLAYGAIN for exporting replaygain tags.
Add a new header replaygain.h with the AVReplayGain struct.
2014-03-24 - 83e8978 / 5a7e35d - lavc 55.54.100 / 55.36.0 - avcodec.h
Add AV_PKT_DATA_REPLAYGAIN for exporting replaygain tags.
2014-03-24 - 595ba3b / 25b3258 - lavf 55.35.100 / 55.13.0 - avformat.h
Add AVStream.side_data and AVStream.nb_side_data for exporting stream-global
side data (e.g. replaygain tags, video rotation)
2014-03-24 - bd34e26 / 0e2c3ee - lavc 55.53.100 / 55.35.0 - avcodec.h
Give the name AVPacketSideData to the previously anonymous struct used for
AVPacket.side_data.
-------- 8< --------- FFmpeg 2.2 was cut here -------- 8< ---------
2014-03-18 - 37c07d4 - lsws 2.5.102
Make gray16 full-scale.
2014-03-16 - 6b1ca17 / 1481d24 - lavu 52.67.100 / 53.6.0 - pixfmt.h
Add RGBA64_LIBAV pixel format and variants for compatibility
2014-03-11 - 3f3229c - lavf 55.34.101 - avformat.h
Set AVFormatContext.start_time_realtime when demuxing.
2014-03-03 - 06fed440 - lavd 55.11.100 - avdevice.h
Add av_input_audio_device_next().
Add av_input_video_device_next().
Add av_output_audio_device_next().
Add av_output_video_device_next().
2014-02-24 - fff5262 / 1155fd0 - lavu 52.66.100 / 53.5.0 - frame.h
Add av_frame_copy() for copying the frame data.
2014-02-24 - a66be60 - lswr 0.18.100 - swresample.h
Add swr_is_initialized() for checking whether a resample context is initialized.
2014-02-22 - 5367c0b / 7e86c27 - lavr 1.2.0 - avresample.h
Add avresample_is_open() for checking whether a resample context is open.
2014-02-19 - 6a24d77 / c3ecd96 - lavu 52.65.100 / 53.4.0 - opt.h
Add AV_OPT_FLAG_EXPORT and AV_OPT_FLAG_READONLY to mark options meant (only)
for reading.
2014-02-19 - f4c8d00 / 6bb8720 - lavu 52.64.101 / 53.3.1 - opt.h
Deprecate unused AV_OPT_FLAG_METADATA.
2014-02-16 - 81c3f81 - lavd 55.10.100 - avdevice.h
Add avdevice_list_devices() and avdevice_free_list_devices()
2014-02-16 - db3c970 - lavf 55.33.100 - avio.h
Add avio_find_protocol_name() to find out the name of the protocol that would
be selected for a given URL.
2014-02-15 - a2bc6c1 / c98f316 - lavu 52.64.100 / 53.3.0 - frame.h
Add AV_FRAME_DATA_DOWNMIX_INFO value to the AVFrameSideDataType enum and
downmix_info.h API, which identify downmix-related metadata.
@@ -448,7 +49,7 @@ API changes, most recent first:
Add AVFormatContext.max_interleave_delta for controlling amount of buffering
when interleaving.
2014-02-02 - 5871ee5 - lavf 55.29.100 - avformat.h
Add output_ts_offset muxing option to AVFormatContext.
2014-01-27 - 102bd64 - lavd 55.7.100 - avdevice.h
@@ -468,10 +69,10 @@ API changes, most recent first:
(i.e. as if the CODEC_FLAG_EMU_EDGE flag was always on). Deprecate
CODEC_FLAG_EMU_EDGE and avcodec_get_edge_width().
2014-01-19 - 1a193c4 - lavf 55.25.100 - avformat.h
Add avformat_get_mov_video_tags() and avformat_get_mov_audio_tags().
2014-01-19 - 3532dd5 - lavu 52.63.100 - rational.h
Add av_make_q() function.
2014-01-05 - 4cf4da9 / 5b4797a - lavu 52.62.100 / 53.2.0 - frame.h
@@ -481,16 +82,16 @@ API changes, most recent first:
2014-01-05 - 751385f / 5c437fb - lavu 52.61.100 / 53.1.0 - channel_layout.h
Add values for various Dolby flags to the AVMatrixEncoding enum.
2014-01-04 - b317f94 - lavu 52.60.100 - mathematics.h
Add av_add_stable() function.
2013-12-22 - 911676c - lavu 52.59.100 - avstring.h
Add av_strnlen() function.
2013-12-09 - 64f73ac - lavu 52.57.100 - opencl.h
Add av_opencl_benchmark() function.
2013-11-30 - 82b2e9c - lavu 52.56.100 - ffversion.h
Moves version.h to libavutil/ffversion.h.
Install ffversion.h and make it public.
@@ -507,13 +108,13 @@ API changes, most recent first:
Add AV_FRAME_DATA_A53_CC value to the AVFrameSideDataType enum, which
identifies ATSC A53 Part 4 Closed Captions data.
2013-11-22 - 6859065 - lavu 52.54.100 - avstring.h
Add av_utf8_decode() function.
2013-11-22 - fb7d70c - lavc 55.44.100 - avcodec.h
Add HEVC profiles
2013-11-20 - c28b61c - lavc 55.44.100 - avcodec.h
Add av_packet_{un,}pack_dictionary()
Add AV_PKT_METADATA_UPDATE side data type, used to transmit key/value
strings between a stream and the application.
@@ -525,7 +126,7 @@ API changes, most recent first:
Deprecate AVCodecContext.error_rate, it is replaced by the 'error_rate'
private option of the mpegvideo encoder family.
2013-11-14 - 31c09b7 / 728c465 - lavc 55.42.100 / 55.26.0 - vdpau.h
Add av_vdpau_get_profile().
Add av_vdpau_alloc_context(). This function must from now on be
used for allocating AVVDPAUContext.
@@ -535,32 +136,29 @@ API changes, most recent first:
Add ITU-R BT.2020 and other not yet included values to color primaries,
transfer characteristics and colorspaces.
2013-11-04 - 85cabf1 - lavu 52.50.100 - avutil.h
Add av_fopen_utf8()
2013-10-31 - 78265fc / 28096e0 - lavu 52.49.100 / 52.17.0 - frame.h
Add AVFrame.flags and AV_FRAME_FLAG_CORRUPT.
-------- 8< --------- FFmpeg 2.1 was cut here -------- 8< ---------
2013-10-27 - dbe6f9f - lavc 55.39.100 - avcodec.h
Add CODEC_CAP_DELAY support to avcodec_decode_subtitle2.
2013-10-27 - d61617a - lavu 52.48.100 - parseutils.h
Add av_get_known_color_name().
2013-10-17 - 8696e51 - lavu 52.47.100 - opt.h
Add AV_OPT_TYPE_CHANNEL_LAYOUT and channel layout option handlers
av_opt_get_channel_layout() and av_opt_set_channel_layout().
2013-10-06 - ccf96f8 - libswscale 2.5.101 - options.c
Change default scaler to bicubic
2013-10-03 - e57dba0 - lavc 55.34.100 - avcodec.h
Add av_codec_get_max_lowres()
2013-10-02 - 5082fcc - lavf 55.19.100 - avformat.h
Add audio/video/subtitle AVCodec fields to AVFormatContext to force specific
decoders
@@ -578,7 +176,7 @@ API changes, most recent first:
2013-09-04 - 3e1f507 - lavc 55.31.101 - avcodec.h
avcodec_close() argument can be NULL.
2013-09-04 - 36cd017a - lavf 55.16.101 - avformat.h
avformat_close_input() argument can be NULL and point on NULL.
2013-08-29 - e31db62 - lavf 55.15.100 - avformat.h
@@ -587,10 +185,10 @@ API changes, most recent first:
2013-08-15 - 1e0e193 - lsws 2.5.100 -
Add a sws_dither AVOption, allowing to set the dither algorithm used
2013-08-11 - d404fe35 - lavc 55.27.100 - vdpau.h
Add a render2 alternative to the render callback function.
2013-08-11 - af05edc - lavc 55.26.100 - vdpau.h
Add allocation function for AVVDPAUContext, allowing
to extend it in the future without breaking ABI/API.
@@ -600,7 +198,7 @@ API changes, most recent first:
2013-08-05 - 9547e3e / f824535 - lavc 55.22.100 / 55.13.0 - avcodec.h
Deprecate the bitstream-related members from struct AVVDPAUContext.
The bitstream buffers no longer need to be explicitly freed.
2013-08-05 - 3b805dc / 549294f - lavc 55.21.100 / 55.12.0 - avcodec.h
Deprecate the CODEC_CAP_HWACCEL_VDPAU codec capability. Use CODEC_CAP_HWACCEL
@@ -616,9 +214,6 @@ API changes, most recent first:
Add avcodec_chroma_pos_to_enum()
Add avcodec_enum_to_chroma_pos()
-------- 8< --------- FFmpeg 2.0 was cut here -------- 8< ---------
2013-07-03 - 838bd73 - lavfi 3.78.100 - avfilter.h
Deprecate avfilter_graph_parse() in favor of the equivalent
avfilter_graph_parse_ptr().
@@ -691,9 +286,6 @@ API changes, most recent first:
2013-03-17 - 7aa9af5 - lavu 52.20.100 - opt.h
Add AV_OPT_TYPE_VIDEO_RATE value to AVOptionType enum.
-------- 8< --------- FFmpeg 1.2 was cut here -------- 8< ---------
2013-03-07 - 9767ec6 - lavu 52.18.100 - avstring.h,bprint.h
Add av_escape() and av_bprint_escape() API.
@@ -706,9 +298,6 @@ API changes, most recent first:
2013-01-01 - 2eb2e17 - lavfi 3.34.100
Add avfilter_get_audio_buffer_ref_from_arrays_channels.
-------- 8< --------- FFmpeg 1.1 was cut here -------- 8< ---------
2012-12-20 - 34de47aa - lavfi 3.29.100 - avfilter.h
Add AVFilterLink.channels, avfilter_link_get_channels()
and avfilter_ref_get_channels().
@@ -754,9 +343,6 @@ API changes, most recent first:
Add LIBSWRESAMPLE_VERSION, LIBSWRESAMPLE_BUILD
and LIBSWRESAMPLE_IDENT symbols.
-------- 8< --------- FFmpeg 1.0 was cut here -------- 8< ---------
2012-09-06 - 29e972f - lavu 51.72.100 - parseutils.h
Add av_small_strptime() time parsing function.
@@ -961,9 +547,6 @@ lavd 54.4.100 / 54.0.0, lavfi 3.5.0
avresample_read() are now uint8_t** instead of void**.
Libavresample is now stable.
2012-09-26 - 3ba0dab7 / 1384df64 - lavf 54.29.101 / 56.06.3 - avformat.h
Add AVFormatContext.avoid_negative_ts.
2012-09-24 - 46a3595 / a42aada - lavc 54.59.100 / 54.28.0 - avcodec.h
Add avcodec_free_frame(). This function must now
be used for freeing an AVFrame.
@@ -1178,9 +761,6 @@ lavd 54.4.100 / 54.0.0, lavfi 3.5.0
2012-01-12 - b18e17e / 3167dc9 - lavfi 2.59.100 / 2.15.0
Add a new installed header -- libavfilter/version.h -- with version macros.
-------- 8< --------- FFmpeg 0.9 was cut here -------- 8< ---------
2011-12-08 - a502939 - lavfi 2.52.0
Add av_buffersink_poll_frame() to buffersink.h.
@@ -1209,9 +789,6 @@ lavd 54.4.100 / 54.0.0, lavfi 3.5.0
Add avformat_close_input().
Deprecate av_close_input_file() and av_close_input_stream().
2011-12-09 - c59b80c / b2890f5 - lavu 51.32.0 / 51.20.0 - audioconvert.h
Expand the channel layout list.
2011-12-02 - e4de716 / 0eea212 - lavc 53.40.0 / 53.25.0
Add nb_samples and extended_data fields to AVFrame.
Deprecate AVCODEC_MAX_AUDIO_FRAME_SIZE.
@@ -1225,10 +802,6 @@ lavd 54.4.100 / 54.0.0, lavfi 3.5.0
Change AVCodecContext.error[4] to [8] at next major bump.
Add AV_NUM_DATA_POINTERS to simplify the bump transition.
2011-11-24 - lavu 51.29.0 / 51.19.0
92afb43 / bd97b2e - add planar RGB pixel formats
92afb43 / 6b0768e - add PIX_FMT_PLANAR and PIX_FMT_RGB pixel descriptions
2011-11-23 - 8e576d5 / bbb46f3 - lavu 51.27.0 / 51.18.0
Add av_samples_get_buffer_size(), av_samples_fill_arrays(), and
av_samples_alloc(), to samplefmt.h.
@@ -1390,13 +963,6 @@ lavd 54.4.100 / 54.0.0, lavfi 3.5.0
2011-06-28 - 5129336 - lavu 51.11.0 - avutil.h
Define the AV_PICTURE_TYPE_NONE value in AVPictureType enum.
-------- 8< --------- FFmpeg 0.7 was cut here -------- 8< ---------
-------- 8< --------- FFmpeg 0.8 was cut here -------- 8< ---------
2011-06-19 - fd2c0a5 - lavfi 2.23.0 - avfilter.h
Add layout negotiation fields and helper functions.
@@ -2074,9 +1640,6 @@ lavd 54.4.100 / 54.0.0, lavfi 3.5.0
2010-06-02 - 7e566bb - lavc 52.73.0 - av_get_codec_tag_string()
Add av_get_codec_tag_string().
-------- 8< --------- FFmpeg 0.6 was cut here -------- 8< ---------
2010-06-01 - 2b99142 - lsws 0.11.0 - convertPalette API
Add sws_convertPalette8ToPacked32() and sws_convertPalette8ToPacked24().
@@ -2094,6 +1657,10 @@ lavd 54.4.100 / 54.0.0, lavfi 3.5.0
2010-05-09 - b6bc205 - lavfi 1.20.0 - AVFilterPicRef
Add interlaced and top_field_first fields to AVFilterPicRef.
2010-05-01 - 8e2ee18 - lavf 52.62.0 - probe function
Add av_probe_input_format2 to API, it allows ignoring probe
results below given score and returns the actual probe score.
@@ -31,7 +31,7 @@ PROJECT_NAME = FFmpeg
# This could be handy for archiving the generated documentation or
# if some version control system is used.
PROJECT_NUMBER = 2.2-rc1
# With the PROJECT_LOGO tag one can specify a logo or icon that is included
# in the documentation. The maximum height of the logo should not exceed 55
@@ -759,7 +759,7 @@ ALPHABETICAL_INDEX = YES
# the COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns
# in which this list will be split (can be a number in the range [1..20])
COLS_IN_ALPHA_INDEX = 5
# In case all classes in a project start with a common prefix, all
# classes will be put under the same header in the alphabetical index.
@@ -1056,7 +1056,7 @@ FORMULA_TRANSPARENT = YES
# typically be disabled. For large projects the javascript based search engine
# can be slow, then enabling SERVER_BASED_SEARCH may provide a better solution.
SEARCHENGINE = YES
# When the SERVER_BASED_SEARCH tag is enabled the search engine will be
# implemented using a PHP enabled web server instead of at the web client
@@ -1359,8 +1359,6 @@ PREDEFINED = "__attribute__(x)=" \
"DECLARE_ALIGNED(a,t,n)=t n" \
"offsetof(x,y)=0x42" \
av_alloc_size \
AV_GCC_VERSION_AT_LEAST(x,y)=1 \
__GNUC__=1 \
# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then
# this tag can be used to specify a list of macro names that should be expanded.
@@ -38,20 +38,16 @@ DOCS = $(DOCS-yes)
DOC_EXAMPLES-$(CONFIG_AVIO_READING_EXAMPLE)      += avio_reading
DOC_EXAMPLES-$(CONFIG_AVCODEC_EXAMPLE)           += avcodec
DOC_EXAMPLES-$(CONFIG_DECODING_ENCODING_EXAMPLE) += decoding_encoding
DOC_EXAMPLES-$(CONFIG_DEMUXING_DECODING_EXAMPLE) += demuxing_decoding
DOC_EXAMPLES-$(CONFIG_EXTRACT_MVS_EXAMPLE) += extract_mvs
DOC_EXAMPLES-$(CONFIG_FILTER_AUDIO_EXAMPLE)      += filter_audio
DOC_EXAMPLES-$(CONFIG_FILTERING_AUDIO_EXAMPLE)   += filtering_audio
DOC_EXAMPLES-$(CONFIG_FILTERING_VIDEO_EXAMPLE)   += filtering_video
DOC_EXAMPLES-$(CONFIG_METADATA_EXAMPLE)          += metadata
DOC_EXAMPLES-$(CONFIG_MUXING_EXAMPLE)            += muxing
DOC_EXAMPLES-$(CONFIG_QSVDEC_EXAMPLE) += qsvdec
DOC_EXAMPLES-$(CONFIG_REMUXING_EXAMPLE)          += remuxing
DOC_EXAMPLES-$(CONFIG_RESAMPLING_AUDIO_EXAMPLE)  += resampling_audio
DOC_EXAMPLES-$(CONFIG_SCALING_VIDEO_EXAMPLE)     += scaling_video
DOC_EXAMPLES-$(CONFIG_TRANSCODE_AAC_EXAMPLE)     += transcode_aac
DOC_EXAMPLES-$(CONFIG_TRANSCODING_EXAMPLE) += transcoding
ALL_DOC_EXAMPLES_LIST = $(DOC_EXAMPLES-) $(DOC_EXAMPLES-yes)
DOC_EXAMPLES := $(DOC_EXAMPLES-yes:%=doc/examples/%$(PROGSSUF)$(EXESUF))
@@ -83,25 +79,14 @@ $(GENTEXI): doc/avoptions_%.texi: doc/print_options$(HOSTEXESUF)
$(M)doc/print_options $* > $@
doc/%.html: TAG = HTML
doc/%-all.html: TAG = HTML
ifdef HAVE_MAKEINFO_HTML
doc/%.html: doc/%.texi $(SRC_PATH)/doc/t2h.pm $(GENTEXI)
$(Q)$(TEXIDEP)
$(M)makeinfo --html -I doc --no-split -D config-not-all --init-file=$(SRC_PATH)/doc/t2h.pm --output $@ $<
doc/%-all.html: doc/%.texi $(SRC_PATH)/doc/t2h.pm $(GENTEXI)
$(Q)$(TEXIDEP)
$(M)makeinfo --html -I doc --no-split -D config-all --init-file=$(SRC_PATH)/doc/t2h.pm --output $@ $<
else
doc/%.html: doc/%.texi $(SRC_PATH)/doc/t2h.init $(GENTEXI)
$(Q)$(TEXIDEP)
$(M)texi2html -I doc -monolithic --D=config-not-all --init-file $(SRC_PATH)/doc/t2h.init --output $@ $<
doc/%-all.html: TAG = HTML
doc/%-all.html: doc/%.texi $(SRC_PATH)/doc/t2h.init $(GENTEXI)
$(Q)$(TEXIDEP)
$(M)texi2html -I doc -monolithic --D=config-all --init-file $(SRC_PATH)/doc/t2h.init --output $@ $<
endif
doc/%.pod: TAG = POD
doc/%.pod: doc/%.texi $(SRC_PATH)/doc/texi2pod.pl $(GENTEXI)
@@ -115,9 +100,9 @@ doc/%-all.pod: doc/%.texi $(SRC_PATH)/doc/texi2pod.pl $(GENTEXI)
doc/%.1 doc/%.3: TAG = MAN
doc/%.1: doc/%.pod $(GENTEXI)
$(M)pod2man --section=1 --center=" " --release=" " --date=" " $< > $@
doc/%.3: doc/%.pod $(GENTEXI)
$(M)pod2man --section=3 --center=" " --release=" " --date=" " $< > $@
$(DOCS) doc/doxy/html: | doc/
$(DOC_EXAMPLES:%$(EXESUF)=%.o): | doc/examples
@@ -125,9 +110,8 @@ OBJDIRS += doc/examples
DOXY_INPUT = $(addprefix $(SRC_PATH)/, $(INSTHEADERS) $(DOC_EXAMPLES:%$(EXESUF)=%.c) $(LIB_EXAMPLES:%$(EXESUF)=%.c))
doc/doxy/html: TAG = DOXY
doc/doxy/html: $(SRC_PATH)/doc/Doxyfile $(SRC_PATH)/doc/doxy-wrapper.sh $(DOXY_INPUT)
$(M)$(SRC_PATH)/doc/doxy-wrapper.sh $(SRC_PATH) $< $(DOXYGEN) $(DOXY_INPUT)
install-doc: install-html install-man
doc/RELEASE_NOTES Normal file
@@ -0,0 +1,16 @@
Release Notes
=============
* 2.2 "Muybridge" March, 2014
General notes
-------------
See the Changelog file for a list of significant changes. Note that there
are many more new features and bugfixes than what's listed there.
Bugreports against FFmpeg git master or the most recent FFmpeg release are
accepted. If you are experiencing issues with any formally released version of
FFmpeg, please try git master to check if the issue still exists. If it does,
make your report against the development code following the usual bug reporting
guidelines.

@@ -13,16 +13,7 @@ bitstream filter using the option @code{--disable-bsf=BSF}.
The option @code{-bsfs} of the ff* tools will display the list of
all the supported bitstream filters included in your build.
The ff* tools have a -bsf option applied per stream, taking a
comma-separated list of filters, whose parameters follow the filter
name after a '='.
@example
ffmpeg -i INPUT -c:v copy -bsf:v filter1[=opt1=str1/opt2=str2][,filter2] OUTPUT
@end example
Below is a description of the currently available bitstream filters,
with their parameters, if any.
@section aac_adtstoasc
@@ -83,18 +74,7 @@ format with @command{ffmpeg}, you can use the command:
ffmpeg -i INPUT.mp4 -codec copy -bsf:v h264_mp4toannexb OUTPUT.ts
@end example
@section imxdump
Modifies the bitstream to fit in MOV and to be usable by the Final Cut
Pro decoder. This filter only applies to the mpeg2video codec, and is
likely not needed for Final Cut Pro 7 and newer with the appropriate
@option{-tag:v}.
For example, to remux 30 MB/sec NTSC IMX to MOV:
@example
ffmpeg -i input.mxf -c copy -bsf:v imxdump -tag:v mx3n output.mov
@end example
@section mjpeg2jpeg
@@ -141,20 +121,6 @@ ffmpeg -i frame_%d.jpg -c:v copy rotated.avi
@section noise
Damages the contents of packets without damaging the container. Can be
used for fuzzing or testing error resilience/concealment.
Parameters:
A numeric string whose value controls how often output bytes are
modified. Values less than or equal to 0 are forbidden; the lower the
value, the more frequently bytes are modified, with 1 meaning every
byte is modified.
@example
ffmpeg -i INPUT -c copy -bsf noise[=1] output.mkv
@end example
applies the modification to every byte.
@section remove_extra
@c man end BITSTREAM FILTERS

@@ -7,11 +7,6 @@ V
Disable the default terse mode, the full command issued by make and its
output will be shown on the screen.
DBG
Preprocess x86 external assembler files to a .dbg.asm file in the object
directory, which then gets compiled. Helps developing those assembler
files.
DESTDIR
Destination directory for the install targets, useful to prepare packages
or install FFmpeg in cross-environments.
@@ -30,9 +25,6 @@ fate-list
install
Install headers, libraries and programs.
examples
Build all examples located in doc/examples.
libavformat/output-example
Build the libavformat basic example.
@@ -42,9 +34,6 @@ libavcodec/api-example
libswscale/swscale-test
Build the swscale self-test (useful also as example).
config
Reconfigure the project with current configuration.
Useful standard make commands:
make -t <target>
@@ -7,7 +7,7 @@ all the encoders and decoders. In addition each codec may support
so-called private options, which are specific for a given codec.
Sometimes, a global option may only affect a specific kind of codec,
and may be nonsensical or ignored by another, so you need to be aware
of the meaning of the specified options. Also some options are
meant only for decoding or encoding.
@@ -71,9 +71,7 @@ Force low delay.
@item global_header
Place global headers in extradata instead of every keyframe.
@item bitexact
Only write platform-, build- and time-independent data (except (I)DCT).
This ensures that file and data checksums are reproducible and match between
platforms. Its primary use is for regression testing.
@item aic
Apply H263 advanced intra coding / mpeg4 ac prediction.
@item cbp
@@ -287,11 +285,6 @@ detect bitstream specification deviations
detect improper bitstream length
@item explode
abort decoding on minor error detection
@item ignore_err
ignore decoding errors, and continue decoding.
This is useful if you want to analyze the content of a video and thus want
everything to be decoded no matter what. This option will not result in a video
that is pleasing to watch in case of errors.
@item careful
consider things that violate the spec and have not been seen in the wild as errors
@item compliant
@@ -396,9 +389,6 @@ Possible values:
@item simplemmx
@item simpleauto
Automatically pick a IDCT compatible with the simple one
@item arm
@item altivec
@@ -434,8 +424,6 @@ Possible values:
iterative motion vector (MV) search (slow)
@item deblock
use strong deblock filter for damaged MBs
@item favor_inter
favor predicting from the previous frame instead of the current
@end table @end table
@item bits_per_coded_sample @var{integer} @item bits_per_coded_sample @var{integer}
@@ -495,15 +483,11 @@ visualize block types
picture buffer allocations
@item thread_ops
threading operations
@item nomc
skip motion compensation
@end table
@item vismv @var{integer} (@emph{decoding,video})
Visualize motion vectors (MVs).
This option is deprecated, see the codecview filter instead.
Possible values:
@table @samp
@item pf
@@ -803,9 +787,6 @@ Frame data might be split into multiple chunks.
Show all frames before the first keyframe.
@item skiprd
Deprecated, use mpegvideo private options instead.
@item export_mvs
Export motion vectors into frame side-data (see @code{AV_FRAME_DATA_MOTION_VECTORS})
for codecs that support it. See also @file{doc/examples/export_mvs.c}.
@end table
@item error @var{integer} (@emph{encoding,video})
@@ -865,14 +846,6 @@ Possible values:
@item mpeg2_aac_he
@item mpeg4_sp
@item mpeg4_core
@item mpeg4_main
@item mpeg4_asp
@item dts
@item dts_es
@@ -906,7 +879,7 @@ Set frame skip factor.
Set frame skip exponent.
Negative values behave identically to the corresponding positive ones, except
that the score is normalized.
Positive values exist primarily for compatibility reasons and are not so useful.
@item skipcmp @var{integer} (@emph{encoding,video})
Set frame skip compare function.
@@ -1052,26 +1025,15 @@ Set the log level offset.
Number of slices, used in parallelized encoding.
@item thread_type @var{flags} (@emph{decoding/encoding,video})
Select which multithreading methods to use.
Use of @samp{frame} will increase decoding delay by one frame per
thread, so clients which cannot provide future frames should not use
it.
Possible values:
@table @samp
@item slice
Decode more than one part of a single frame at once.
Multithreading using slices works only when the video was encoded with
slices.
@item frame
Decode more than one frame at once.
@end table
Default value is @samp{slice+frame}.
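For example, a low-latency client could restrict the decoder to slice threading so that no extra frame of delay is introduced. A sketch, with a placeholder input path:

```shell
# Decode using slice threading only; frame threading is avoided because it
# adds one frame of delay per thread. input.mkv is a placeholder path.
if command -v ffmpeg >/dev/null 2>&1 && [ -f input.mkv ]; then
    ffmpeg -thread_type slice -i input.mkv -f null -
else
    echo "skipping: ffmpeg or input.mkv not available"
fi
```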
@item audio_service_type @var{integer} (@emph{encoding,audio})
Set audio service type.
@@ -1126,19 +1088,6 @@ Interlaced video, bottom coded first, top displayed first
Set to 1 to disable processing alpha (transparency). This works like the
@samp{gray} flag in the @option{flags} option which skips chroma information
instead of alpha. Default is 0.
@item codec_whitelist @var{list} (@emph{input})
"," separated List of allowed decoders. By default all are allowed.
@item dump_separator @var{string} (@emph{input})
Separator used to separate the fields printed on the command line about the
stream parameters.
For example to separate the fields with newlines and indentation:
@example
ffprobe -dump_separator "
" -i ~/videos/matrixbench_mpeg2.mpg
@end example
@end table
@c man end CODEC OPTIONS

View File

@@ -62,7 +62,7 @@ AC-3 audio decoder.
This decoder implements part of ATSC A/52:2010 and ETSI TS 102 366, as well as
the undocumented RealAudio 3 (a.k.a. dnet).
@subsection AC-3 Decoder Options
@table @option
@@ -163,9 +163,6 @@ Requires the presence of the libopus headers and library during
configuration. You need to explicitly configure the build with
@code{--enable-libopus}.
An FFmpeg native decoder for Opus exists, so users can decode Opus
without this library.
@c man end AUDIO DECODERS
@chapter Subtitles Decoders
@@ -190,15 +187,6 @@ The format for this option is a string containing 16 24-bits hexadecimal
numbers (without 0x prefix) separated by commas, for example @code{0d00ee,
ee450d, 101010, eaeaea, 0ce60b, ec14ed, ebff0b, 0d617a, 7b7b7b, d1d1d1,
7b2a0e, 0d950c, 0f007b, cf0dec, cfa80c, 7c127b}.
@item ifo_palette
Specify the IFO file from which the global palette is obtained.
(experimental)
@item forced_subs_only
Only decode subtitle entries marked as forced. Some titles have forced
and non-forced subtitles in the same track. Setting this flag to @code{1}
will only keep the forced subtitles. Default value is @code{0}.
@end table
@section libzvbi-teletext

View File

@@ -29,26 +29,6 @@ the caller can decide which variant streams to actually receive.
The total bitrate of the variant that the stream belongs to is
available in a metadata key named "variant_bitrate".
@section apng
Animated Portable Network Graphics demuxer.
This demuxer is used to demux APNG files.
All headers, but the PNG signature, up to (but not including) the first
fcTL chunk are transmitted as extradata.
Frames are then split as being all the chunks between two fcTL ones, or
between the last fcTL and IEND chunks.
@table @option
@item -ignore_loop @var{bool}
Ignore the loop variable in the file if set.
@item -max_fps @var{int}
Maximum framerate in frames per second (0 for no limit).
@item -default_fps @var{int}
Default framerate in frames per second when none is specified in the file
(0 meaning as fast as possible).
@end table
@section asf
Advanced Systems Format demuxer.
@@ -94,7 +74,7 @@ following directive is recognized:
Path to a file to read; special characters and spaces must be escaped with
backslash or single quotes.
All subsequent file-related directives apply to that file.
@item @code{ffconcat version 1.0}
Identify the script type and version. It also sets the @option{safe} option
@@ -112,22 +92,6 @@ file is not available or accurate.
If the duration is set for all files, then it is possible to seek in the
whole concatenated video.
@item @code{stream}
Introduce a stream in the virtual file.
All subsequent stream-related directives apply to the last introduced
stream.
Some streams properties must be set in order to allow identifying the
matching streams in the subfiles.
If no streams are defined in the script, the streams from the first file are
copied.
@item @code{exact_stream_id @var{id}}
Set the id of the stream.
If this directive is given, the string with the corresponding id in the
subfiles will be used.
This is especially useful for MPEG-PS (VOB) files, where the order of the
streams is not reliable.
@end table
@subsection Options
@@ -148,14 +112,6 @@ If set to 0, any file name is accepted.
The default is -1, it is equivalent to 1 if the format was automatically
probed and 0 otherwise.
@item auto_convert
If set to 1, try to perform automatic conversions on packet data to make the
streams concatenable.
Currently, the only conversion is adding the h264_mp4toannexb bitstream
filter to H.264 streams in MP4 format. This is necessary in particular if
there are resolution changes.
@end table
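Putting the directives above together, a hypothetical ffconcat script for two VOB segments might look like the following. The file names and the stream id are invented for illustration:

```
ffconcat version 1.0

stream
exact_stream_id 0x1e0

file 'part1.vob'
file 'part2.vob'
```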
@section flv
@@ -194,40 +150,6 @@ See @url{http://quvi.sourceforge.net/} for more information.
FFmpeg needs to be built with @code{--enable-libquvi} for this demuxer to be
enabled.
@section gif
Animated GIF demuxer.
It accepts the following options:
@table @option
@item min_delay
Set the minimum valid delay between frames in hundredths of seconds.
Range is 0 to 6000. Default value is 2.
@item default_delay
Set the default delay between frames in hundredths of seconds.
Range is 0 to 6000. Default value is 10.
@item ignore_loop
GIF files can contain information to loop a certain number of times (or
infinitely). If @option{ignore_loop} is set to 1, then the loop setting
from the input will be ignored and looping will not occur. If set to 0,
then looping will occur and will cycle the number of times according to
the GIF. Default value is 1.
@end table
For example, with the overlay filter, place an infinitely looping GIF
over another video:
@example
ffmpeg -i input.mp4 -ignore_loop 0 -i input.gif -filter_complex overlay=shortest=1 out.mkv
@end example
Note that in the above example the shortest option for overlay filter is
used to end the output video at the length of the shortest input file,
which in this case is @file{input.mp4} as the GIF in this example loops
infinitely.
@section image2
Image file demuxer.
@@ -327,8 +249,6 @@ is 5.
If set to 1, will set frame timestamp to modification time of image file. Note
that monotonicity of timestamps is not provided: images go in the same order as
without this option. Default value is 0.
If set to 2, will set frame timestamp to the modification time of the image file in
nanosecond precision.
@item video_size
Set the video size of the images to read. If not specified the video
size is guessed from the first image file in the sequence.
@@ -376,7 +296,7 @@ teletext packet PTS and DTS values untouched.
Raw video demuxer.
This demuxer allows one to read raw video data. Since there is no header
specifying the assumed video parameters, the user must specify them
in order to be able to decode the data correctly.

View File

@@ -1,5 +1,4 @@
\input texinfo @c -*- texinfo -*-
@documentencoding UTF-8
@settitle Developer Documentation
@titlepage
@@ -324,12 +323,9 @@ Always fill out the commit log message. Describe in a few lines what you
changed and why. You can refer to mailing list postings if you fix a
particular bug. Comments such as "fixed!" or "Changed it." are unacceptable.
Recommended format:
@example
area changed: Short 1 line description
details describing what and why and giving references.
@end example
@item
Make sure the author of the commit is set correctly. (see git commit --author)
@@ -648,12 +644,12 @@ accordingly].
@subsection Adding files to the fate-suite dataset
When there is no muxer or encoder available to generate test media for a
specific test then the media has to be included in the fate-suite.
First please make sure that the sample file is as small as possible to test the
respective decoder or demuxer sufficiently. Large files increase network
bandwidth and disk space requirements.
Once you have a working fate test and fate sample, provide in the commit
message or introductory message for the patch series that you post to
the ffmpeg-devel mailing list, a direct link to download the sample media.

View File

@@ -2,20 +2,11 @@
SRC_PATH="${1}"
DOXYFILE="${2}"
DOXYGEN="${3}"
shift 3
if [ -e "$SRC_PATH/VERSION" ]; then
VERSION=`cat "$SRC_PATH/VERSION"`
else
VERSION=`cd "$SRC_PATH"; git describe`
fi
$DOXYGEN - <<EOF
@INCLUDE = ${DOXYFILE}
INPUT = $@
EXAMPLE_PATH = ${SRC_PATH}/doc/examples
HTML_TIMESTAMP = NO
PROJECT_NUMBER = $VERSION
EOF

View File

@@ -80,7 +80,7 @@ thresholds with quantizer steps to find the appropriate quantization with
distortion below threshold band by band.
The quality of this method is comparable to the two loop searching method
described below, but a little better and slower.
@item anmr
Average noise to mask ratio (ANMR) trellis-based solution.
@@ -807,7 +807,7 @@ while producing the worst quality.
@item reservoir
Enable use of bit reservoir when set to 1. Default value is 1. LAME
has this enabled by default, but it can be overridden by use of the
@option{--nores} option.
@item joint_stereo (@emph{-m j})
@@ -1032,7 +1032,7 @@ configuration. You need to explicitly configure the build with
@subsection Option Mapping
Most libopus options are modelled after the @command{opusenc} utility from
opus-tools. The following is an option mapping chart describing options
supported by the libopus wrapper, and their @command{opusenc}-equivalent
in parentheses.
@@ -1271,7 +1271,7 @@ Requires the presence of the libtheora headers and library during
configuration. You need to explicitly configure the build with configuration. You need to explicitly configure the build with
@code{--enable-libtheora}. @code{--enable-libtheora}.
For more information about the libtheora project see For more informations about the libtheora project see
@url{http://www.theora.org/}. @url{http://www.theora.org/}.
@subsection Options @subsection Options
@@ -1330,7 +1330,7 @@ ffmpeg -i INPUT -codec:v libtheora -b:v 1000k OUTPUT.ogg
@section libvpx
VP8/VP9 format supported through libvpx.
Requires the presence of the libvpx headers and library during configuration.
You need to explicitly configure the build with @code{--enable-libvpx}.
@@ -1442,9 +1442,6 @@ g_lag_in_frames
@item vp8flags error_resilient
g_error_resilient
@item aq_mode
@code{VP9E_SET_AQ_MODE}
@end table
For more information about libvpx see:
@@ -1528,7 +1525,7 @@ for detail retention (adaptive quantization, psy-RD, psy-trellis).
Many libx264 encoder options are mapped to FFmpeg global codec
options, while unique encoder options are provided through private
options. Additionally the @option{x264opts} and @option{x264-params}
private options allow one to pass a list of key=value tuples as accepted
by the libx264 @code{x264_param_parse} function.
The x264 project website is at
@@ -1569,34 +1566,25 @@ kilobits/s.
@item g (@emph{keyint})
@item qmin (@emph{qpmin})
Minimum quantizer scale.
@item qmax (@emph{qpmax})
Maximum quantizer scale.
@item qdiff (@emph{qpstep})
Maximum difference between quantizer scales.
@item qblur (@emph{qblur})
Quantizer curve blur
@item qcomp (@emph{qcomp})
Quantizer curve compression factor
@item refs (@emph{ref})
Number of reference frames each P-frame can use. The range is from @var{0-16}.
@item sc_threshold (@emph{scenecut})
Sets the threshold for the scene change detection.
@item trellis (@emph{trellis})
Performs Trellis quantization to increase efficiency. Enabled by default.
@item nr (@emph{nr})
@item me_range (@emph{merange})
Maximum range of the motion search in pixels.
@item me_method (@emph{me})
Set motion estimation method. Possible values in the decreasing order
@@ -1618,13 +1606,10 @@ Hadamard exhaustive search (slowest).
@end table
@item subq (@emph{subme})
Sub-pixel motion estimation method.
@item b_strategy (@emph{b-adapt})
Adaptive B-frame placement decision algorithm. Use only on first-pass.
@item keyint_min (@emph{min-keyint})
Minimum GOP size.
@item coder
Set entropy encoder. Possible values:
@@ -1651,7 +1636,6 @@ Ignore chroma in motion estimation. It generates the same effect as
@end table
@item threads (@emph{threads})
Number of encoding threads.
@item thread_type
Set multithreading technique. Possible values:
@@ -1745,10 +1729,6 @@ Enable calculation and printing SSIM stats after the encoding.
Enable the use of Periodic Intra Refresh instead of IDR frames when set
to 1.
@item avcintra-class (@emph{class})
Configure the encoder to generate AVC-Intra.
Valid values are 50,100 and 200
@item bluray-compat (@emph{bluray-compat})
Configure the encoder to be compatible with the bluray standard.
It is a shorthand for setting "bluray-compat=1 force-cfr=1".
@@ -1873,7 +1853,7 @@ Override the x264 configuration using a :-separated list of key=value
parameters.
This option is functionally the same as the @option{x264opts}, but is
duplicated for compatibility with the Libav fork.
For example to specify libx264 encoding options with @command{ffmpeg}:
@example
@@ -1886,34 +1866,6 @@ no-fast-pskip=1:subq=6:8x8dct=0:trellis=0 OUTPUT
Encoding ffpresets for common usages are provided so they can be used with the
general presets system (e.g. passing the @option{pre} option).
@section libx265
x265 H.265/HEVC encoder wrapper.
This encoder requires the presence of the libx265 headers and library
during configuration. You need to explicitly configure the build with
@option{--enable-libx265}.
@subsection Options
@table @option
@item preset
Set the x265 preset.
@item tune
Set the x265 tune parameter.
@item x265-params
Set x265 options using a list of @var{key}=@var{value} couples separated
by ":". See @command{x265 --help} for a list of options.
For example to specify libx265 encoding options with @option{-x265-params}:
@example
ffmpeg -i input -c:v libx265 -x265-params crf=26:psy-rd=1 output.mp4
@end example
@end table
@section libxvid
Xvid MPEG-4 Part 2 encoder wrapper.
@@ -2077,30 +2029,6 @@ fastest.
@end table
@section mpeg2
MPEG-2 video encoder.
@subsection Options
@table @option
@item seq_disp_ext @var{integer}
Specifies if the encoder should write a sequence_display_extension to the
output.
@table @option
@item -1
@itemx auto
Decide automatically to write it or not (this is the default) by checking if
the data to be written is different from the default or unspecified values.
@item 0
@itemx never
Never write it.
@item 1
@itemx always
Always write it.
@end table
@end table
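As a sketch of how the option would be passed on the command line (placeholder paths; assumes a build whose @code{mpeg2video} encoder exposes @option{seq_disp_ext}):

```shell
# Never write a sequence_display_extension into the output stream.
# Falls back to a message if the option or tools are unavailable.
if command -v ffmpeg >/dev/null 2>&1 && [ -f input.avi ]; then
    ffmpeg -y -i input.avi -c:v mpeg2video -seq_disp_ext never out.mpg \
        || echo "seq_disp_ext not supported by this build"
else
    echo "skipping: ffmpeg or input.avi not available"
fi
```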
@section png
PNG image encoder.
@@ -2119,7 +2047,7 @@ Set physical density of pixels, in dots per meter, unset by default
Apple ProRes encoder.
FFmpeg contains 2 ProRes encoders, the prores-aw and prores-ks encoder.
The used encoder can be chosen with the @code{-vcodec} option.
@subsection Private Options for prores-ks
@@ -2171,7 +2099,7 @@ Use @var{0} to disable alpha plane coding.
@subsection Speed considerations
In the default mode of operation the encoder has to honor frame constraints
(i.e. not produce frames with size bigger than requested) while still making
output picture as good as possible.
A frame containing a lot of small details is harder to compress and the encoder
would spend more time searching for appropriate quantizers for each slice.
@@ -2182,27 +2110,3 @@ For the fastest encoding speed set the @option{qscale} parameter (4 is the
recommended value) and do not set a size constraint.
@c man end VIDEO ENCODERS
@chapter Subtitles Encoders
@c man begin SUBTITLES ENCODERS
@section dvdsub
This codec encodes the bitmap subtitle format that is used in DVDs.
Typically they are stored in VOBSUB file pairs (*.idx + *.sub),
and they can also be used in Matroska files.
@subsection Options
@table @option
@item even_rows_fix
When set to 1, enable a work-around that makes the number of pixel rows
even in all subtitles. This fixes a problem with some players that
cut off the bottom row if the number is odd. The work-around just adds
a fully transparent row if needed. The overhead is low, typically
one byte per subtitle on average.
By default, this work-around is disabled.
@end table
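A hedged usage sketch (placeholder file names; assumes a build whose @code{dvdsub} encoder exposes @option{even_rows_fix}):

```shell
# Re-encode the first subtitle track to dvdsub with the even-rows
# work-around enabled.
if command -v ffmpeg >/dev/null 2>&1 && [ -f subs.mkv ]; then
    ffmpeg -y -i subs.mkv -map 0:s:0 -c:s dvdsub -even_rows_fix 1 out.mks \
        || echo "even_rows_fix not supported by this build"
else
    echo "skipping: ffmpeg or subs.mkv not available"
fi
```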
@c man end SUBTITLES ENCODERS

View File

@@ -12,9 +12,8 @@ CFLAGS := $(shell pkg-config --cflags $(FFMPEG_LIBS)) $(CFLAGS)
LDLIBS := $(shell pkg-config --libs $(FFMPEG_LIBS)) $(LDLIBS)
EXAMPLES= avio_reading \
decoding_encoding \
demuxing_decoding \
extract_mvs \
filtering_video \
filtering_audio \
metadata \
@@ -23,13 +22,11 @@ EXAMPLES= avio_reading \
resampling_audio \
scaling_video \
transcode_aac \
transcoding \
OBJS=$(addsuffix .o,$(EXAMPLES))
# the following examples make explicit use of the math library
decoding_encoding: LDLIBS += -lm
muxing: LDLIBS += -lm
resampling_audio: LDLIBS += -lm

View File

@@ -24,7 +24,7 @@
* @file
* libavcodec API use example.
*
* @example decoding_encoding.c
* Note that libavcodec only handles codecs (mpeg, mpeg4, etc...),
* not file formats (avi, vob, mp4, mov, mkv, mxf, flv, mpegts, mpegps, etc...). See library 'libavformat' for the
* format handling
@@ -288,7 +288,6 @@ static void audio_decode_example(const char *outfilename, const char *filename)
avpkt.size = fread(inbuf, 1, AUDIO_INBUF_SIZE, f);
while (avpkt.size > 0) {
int i, ch;
int got_frame = 0;
if (!decoded_frame) {
@@ -305,15 +304,15 @@ static void audio_decode_example(const char *outfilename, const char *filename)
}
if (got_frame) {
/* if a frame has been decoded, output it */
int data_size = av_get_bytes_per_sample(c->sample_fmt);
if (data_size < 0) {
/* This should not occur, checking just for paranoia */
fprintf(stderr, "Failed to calculate data size\n");
exit(1);
}
for (i=0; i<decoded_frame->nb_samples; i++)
for (ch=0; ch<c->channels; ch++)
fwrite(decoded_frame->data[ch] + data_size*i, 1, data_size, outfile);
}
avpkt.size -= len;
avpkt.data += len;
@@ -376,13 +375,7 @@ static void video_encode_example(const char *filename, int codec_id)
c->height = 288;
/* frames per second */
c->time_base = (AVRational){1,25};
/* emit one intra frame every ten frames
* check frame pict_type before passing frame
* to encoder, if frame->pict_type is AV_PICTURE_TYPE_I
* then gop_size is ignored and the output of encoder
* will always be I frame irrespective to gop_size
*/
c->gop_size = 10;
c->max_b_frames = 1;
c->pix_fmt = AV_PIX_FMT_YUV420P;
@@ -641,7 +634,7 @@ int main(int argc, char **argv)
"This program generates a synthetic stream and encodes it to a file\n" "This program generates a synthetic stream and encodes it to a file\n"
"named test.h264, test.mp2 or test.mpg depending on output_type.\n" "named test.h264, test.mp2 or test.mpg depending on output_type.\n"
"The encoded stream is then decoded and written to a raw data output.\n" "The encoded stream is then decoded and written to a raw data output.\n"
"output_type must be chosen between 'h264', 'mp2', 'mpg'.\n", "output_type must be choosen between 'h264', 'mp2', 'mpg'.\n",
argv[0]);
return 1;
}
@@ -651,7 +644,7 @@ int main(int argc, char **argv)
video_encode_example("test.h264", AV_CODEC_ID_H264);
} else if (!strcmp(output_type, "mp2")) {
audio_encode_example("test.mp2");
audio_decode_example("test.pcm", "test.mp2");
} else if (!strcmp(output_type, "mpg")) {
video_encode_example("test.mpg", AV_CODEC_ID_MPEG1VIDEO);
video_decode_example("test%02d.pgm", "test.mpg");

View File

@@ -119,10 +119,8 @@ int main(int argc, char *argv[])
end:
avformat_close_input(&fmt_ctx);
/* note: the internal buffer could have changed, and be != avio_ctx_buffer */
if (avio_ctx) {
av_freep(&avio_ctx->buffer);
av_freep(&avio_ctx);
}
av_file_unmap(buffer, buffer_size);
if (ret < 0) {

View File

@@ -36,8 +36,6 @@
static AVFormatContext *fmt_ctx = NULL;
static AVCodecContext *video_dec_ctx = NULL, *audio_dec_ctx;
static int width, height;
static enum AVPixelFormat pix_fmt;
static AVStream *video_stream = NULL, *audio_stream = NULL;
static const char *src_filename = NULL;
static const char *video_dst_filename = NULL;
@@ -81,20 +79,6 @@ static int decode_packet(int *got_frame, int cached)
fprintf(stderr, "Error decoding video frame (%s)\n", av_err2str(ret));
return ret;
}
if (video_dec_ctx->width != width || video_dec_ctx->height != height ||
video_dec_ctx->pix_fmt != pix_fmt) {
/* To handle this change, one could call av_image_alloc again and
* decode the following frames into another rawvideo file. */
fprintf(stderr, "Error: Width, height and pixel format have to be "
"constant in a rawvideo file, but the width, height or "
"pixel format of the input video changed:\n"
"old: width = %d, height = %d, format = %s\n"
"new: width = %d, height = %d, format = %s\n",
width, height, av_get_pix_fmt_name(pix_fmt),
video_dec_ctx->width, video_dec_ctx->height,
av_get_pix_fmt_name(video_dec_ctx->pix_fmt));
return -1;
}
if (*got_frame) { if (*got_frame) {
printf("video_frame%s n:%d coded_n:%d pts:%s\n", printf("video_frame%s n:%d coded_n:%d pts:%s\n",
@@ -106,7 +90,7 @@ static int decode_packet(int *got_frame, int cached)
* this is required since rawvideo expects non aligned data */ * this is required since rawvideo expects non aligned data */
av_image_copy(video_dst_data, video_dst_linesize, av_image_copy(video_dst_data, video_dst_linesize,
(const uint8_t **)(frame->data), frame->linesize, (const uint8_t **)(frame->data), frame->linesize,
pix_fmt, width, height); video_dec_ctx->pix_fmt, video_dec_ctx->width, video_dec_ctx->height);
/* write to rawvideo file */ /* write to rawvideo file */
fwrite(video_dst_data[0], 1, video_dst_bufsize, video_dst_file); fwrite(video_dst_data[0], 1, video_dst_bufsize, video_dst_file);
@@ -154,7 +138,7 @@ static int decode_packet(int *got_frame, int cached)
static int open_codec_context(int *stream_idx, static int open_codec_context(int *stream_idx,
AVFormatContext *fmt_ctx, enum AVMediaType type) AVFormatContext *fmt_ctx, enum AVMediaType type)
{ {
int ret, stream_index; int ret;
AVStream *st; AVStream *st;
AVCodecContext *dec_ctx = NULL; AVCodecContext *dec_ctx = NULL;
AVCodec *dec = NULL; AVCodec *dec = NULL;
@@ -166,8 +150,8 @@ static int open_codec_context(int *stream_idx,
av_get_media_type_string(type), src_filename); av_get_media_type_string(type), src_filename);
return ret; return ret;
} else { } else {
stream_index = ret; *stream_idx = ret;
st = fmt_ctx->streams[stream_index]; st = fmt_ctx->streams[*stream_idx];
/* find decoder for the stream */ /* find decoder for the stream */
dec_ctx = st->codec; dec_ctx = st->codec;
@@ -186,7 +170,6 @@ static int open_codec_context(int *stream_idx,
av_get_media_type_string(type)); av_get_media_type_string(type));
return ret; return ret;
} }
*stream_idx = stream_index;
} }
return 0; return 0;
@@ -281,11 +264,9 @@ int main (int argc, char **argv)
} }
/* allocate image where the decoded image will be put */ /* allocate image where the decoded image will be put */
width = video_dec_ctx->width;
height = video_dec_ctx->height;
pix_fmt = video_dec_ctx->pix_fmt;
ret = av_image_alloc(video_dst_data, video_dst_linesize, ret = av_image_alloc(video_dst_data, video_dst_linesize,
width, height, pix_fmt, 1); video_dec_ctx->width, video_dec_ctx->height,
video_dec_ctx->pix_fmt, 1);
if (ret < 0) { if (ret < 0) {
fprintf(stderr, "Could not allocate raw video buffer\n"); fprintf(stderr, "Could not allocate raw video buffer\n");
goto end; goto end;
@@ -298,7 +279,7 @@ int main (int argc, char **argv)
audio_dec_ctx = audio_stream->codec; audio_dec_ctx = audio_stream->codec;
audio_dst_file = fopen(audio_dst_filename, "wb"); audio_dst_file = fopen(audio_dst_filename, "wb");
if (!audio_dst_file) { if (!audio_dst_file) {
fprintf(stderr, "Could not open destination file %s\n", audio_dst_filename); fprintf(stderr, "Could not open destination file %s\n", video_dst_filename);
ret = 1; ret = 1;
goto end; goto end;
} }
@@ -360,7 +341,7 @@ int main (int argc, char **argv)
if (video_stream) { if (video_stream) {
printf("Play the output video file with the command:\n" printf("Play the output video file with the command:\n"
"ffplay -f rawvideo -pix_fmt %s -video_size %dx%d %s\n", "ffplay -f rawvideo -pix_fmt %s -video_size %dx%d %s\n",
av_get_pix_fmt_name(pix_fmt), width, height, av_get_pix_fmt_name(video_dec_ctx->pix_fmt), video_dec_ctx->width, video_dec_ctx->height,
video_dst_filename); video_dst_filename);
} }


@@ -1,185 +0,0 @@
/*
* Copyright (c) 2012 Stefano Sabatini
* Copyright (c) 2014 Clément Bœsch
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include <libavutil/motion_vector.h>
#include <libavformat/avformat.h>
static AVFormatContext *fmt_ctx = NULL;
static AVCodecContext *video_dec_ctx = NULL;
static AVStream *video_stream = NULL;
static const char *src_filename = NULL;
static int video_stream_idx = -1;
static AVFrame *frame = NULL;
static AVPacket pkt;
static int video_frame_count = 0;
static int decode_packet(int *got_frame, int cached)
{
    int decoded = pkt.size;

    *got_frame = 0;

    if (pkt.stream_index == video_stream_idx) {
        int ret = avcodec_decode_video2(video_dec_ctx, frame, got_frame, &pkt);
        if (ret < 0) {
            fprintf(stderr, "Error decoding video frame (%s)\n", av_err2str(ret));
            return ret;
        }

        if (*got_frame) {
            int i;
            AVFrameSideData *sd;

            video_frame_count++;
            sd = av_frame_get_side_data(frame, AV_FRAME_DATA_MOTION_VECTORS);
            if (sd) {
                const AVMotionVector *mvs = (const AVMotionVector *)sd->data;
                for (i = 0; i < sd->size / sizeof(*mvs); i++) {
                    const AVMotionVector *mv = &mvs[i];
                    printf("%d,%2d,%2d,%2d,%4d,%4d,%4d,%4d,0x%"PRIx64"\n",
                           video_frame_count, mv->source,
                           mv->w, mv->h, mv->src_x, mv->src_y,
                           mv->dst_x, mv->dst_y, mv->flags);
                }
            }
        }
    }

    return decoded;
}

static int open_codec_context(int *stream_idx,
                              AVFormatContext *fmt_ctx, enum AVMediaType type)
{
    int ret;
    AVStream *st;
    AVCodecContext *dec_ctx = NULL;
    AVCodec *dec = NULL;
    AVDictionary *opts = NULL;

    ret = av_find_best_stream(fmt_ctx, type, -1, -1, NULL, 0);
    if (ret < 0) {
        fprintf(stderr, "Could not find %s stream in input file '%s'\n",
                av_get_media_type_string(type), src_filename);
        return ret;
    } else {
        *stream_idx = ret;
        st = fmt_ctx->streams[*stream_idx];

        /* find decoder for the stream */
        dec_ctx = st->codec;
        dec = avcodec_find_decoder(dec_ctx->codec_id);
        if (!dec) {
            fprintf(stderr, "Failed to find %s codec\n",
                    av_get_media_type_string(type));
            return AVERROR(EINVAL);
        }

        /* Init the video decoder */
        av_dict_set(&opts, "flags2", "+export_mvs", 0);
        if ((ret = avcodec_open2(dec_ctx, dec, &opts)) < 0) {
            fprintf(stderr, "Failed to open %s codec\n",
                    av_get_media_type_string(type));
            return ret;
        }
    }

    return 0;
}

int main(int argc, char **argv)
{
    int ret = 0, got_frame;

    if (argc != 2) {
        fprintf(stderr, "Usage: %s <video>\n", argv[0]);
        exit(1);
    }
    src_filename = argv[1];

    av_register_all();

    if (avformat_open_input(&fmt_ctx, src_filename, NULL, NULL) < 0) {
        fprintf(stderr, "Could not open source file %s\n", src_filename);
        exit(1);
    }

    if (avformat_find_stream_info(fmt_ctx, NULL) < 0) {
        fprintf(stderr, "Could not find stream information\n");
        exit(1);
    }

    if (open_codec_context(&video_stream_idx, fmt_ctx, AVMEDIA_TYPE_VIDEO) >= 0) {
        video_stream = fmt_ctx->streams[video_stream_idx];
        video_dec_ctx = video_stream->codec;
    }

    av_dump_format(fmt_ctx, 0, src_filename, 0);

    if (!video_stream) {
        fprintf(stderr, "Could not find video stream in the input, aborting\n");
        ret = 1;
        goto end;
    }

    frame = av_frame_alloc();
    if (!frame) {
        fprintf(stderr, "Could not allocate frame\n");
        ret = AVERROR(ENOMEM);
        goto end;
    }

    printf("framenum,source,blockw,blockh,srcx,srcy,dstx,dsty,flags\n");

    /* initialize packet, set data to NULL, let the demuxer fill it */
    av_init_packet(&pkt);
    pkt.data = NULL;
    pkt.size = 0;

    /* read frames from the file */
    while (av_read_frame(fmt_ctx, &pkt) >= 0) {
        AVPacket orig_pkt = pkt;
        do {
            ret = decode_packet(&got_frame, 0);
            if (ret < 0)
                break;
            pkt.data += ret;
            pkt.size -= ret;
        } while (pkt.size > 0);
        av_free_packet(&orig_pkt);
    }

    /* flush cached frames */
    pkt.data = NULL;
    pkt.size = 0;
    do {
        decode_packet(&got_frame, 1);
    } while (got_frame);

end:
    avcodec_close(video_dec_ctx);
    avformat_close_input(&fmt_ctx);
    av_frame_free(&frame);
    return ret < 0;
}


@@ -45,7 +45,6 @@
 #include "libavutil/channel_layout.h"
 #include "libavutil/md5.h"
-#include "libavutil/mem.h"
 #include "libavutil/opt.h"
 #include "libavutil/samplefmt.h"


@@ -145,28 +145,12 @@ static int init_filters(const char *filters_descr)
         goto end;
     }
 
-    /*
-     * Set the endpoints for the filter graph. The filter_graph will
-     * be linked to the graph described by filters_descr.
-     */
-
-    /*
-     * The buffer source output must be connected to the input pad of
-     * the first filter described by filters_descr; since the first
-     * filter input label is not specified, it is set to "in" by
-     * default.
-     */
+    /* Endpoints for the filter graph. */
     outputs->name       = av_strdup("in");
     outputs->filter_ctx = buffersrc_ctx;
     outputs->pad_idx    = 0;
     outputs->next       = NULL;
 
-    /*
-     * The buffer sink input must be connected to the output pad of
-     * the last filter described by filters_descr; since the last
-     * filter output label is not specified, it is set to "out" by
-     * default.
-     */
     inputs->name       = av_strdup("out");
     inputs->filter_ctx = buffersink_ctx;
     inputs->pad_idx    = 0;


@@ -90,7 +90,6 @@ static int init_filters(const char *filters_descr)
     AVFilter *buffersink = avfilter_get_by_name("buffersink");
     AVFilterInOut *outputs = avfilter_inout_alloc();
     AVFilterInOut *inputs  = avfilter_inout_alloc();
-    AVRational time_base = fmt_ctx->streams[video_stream_index]->time_base;
     enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_GRAY8, AV_PIX_FMT_NONE };
 
     filter_graph = avfilter_graph_alloc();
@@ -103,7 +102,7 @@ static int init_filters(const char *filters_descr)
     snprintf(args, sizeof(args),
             "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
             dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
-            time_base.num, time_base.den,
+            dec_ctx->time_base.num, dec_ctx->time_base.den,
             dec_ctx->sample_aspect_ratio.num, dec_ctx->sample_aspect_ratio.den);
 
     ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
@@ -128,28 +127,12 @@ static int init_filters(const char *filters_descr)
         goto end;
     }
 
-    /*
-     * Set the endpoints for the filter graph. The filter_graph will
-     * be linked to the graph described by filters_descr.
-     */
-
-    /*
-     * The buffer source output must be connected to the input pad of
-     * the first filter described by filters_descr; since the first
-     * filter input label is not specified, it is set to "in" by
-     * default.
-     */
+    /* Endpoints for the filter graph. */
     outputs->name       = av_strdup("in");
     outputs->filter_ctx = buffersrc_ctx;
     outputs->pad_idx    = 0;
     outputs->next       = NULL;
 
-    /*
-     * The buffer sink input must be connected to the output pad of
-     * the last filter described by filters_descr; since the last
-     * filter output label is not specified, it is set to "out" by
-     * default.
-     */
     inputs->name       = av_strdup("out");
     inputs->filter_ctx = buffersink_ctx;
     inputs->pad_idx    = 0;


@@ -34,8 +34,6 @@
 #include <string.h>
 #include <math.h>
 
-#include <libavutil/avassert.h>
-#include <libavutil/channel_layout.h>
 #include <libavutil/opt.h>
 #include <libavutil/mathematics.h>
 #include <libavutil/timestamp.h>
@@ -43,28 +41,13 @@
 #include <libswscale/swscale.h>
 #include <libswresample/swresample.h>
 
+static int audio_is_eof, video_is_eof;
+
 #define STREAM_DURATION   10.0
 #define STREAM_FRAME_RATE 25 /* 25 images/s */
 #define STREAM_PIX_FMT    AV_PIX_FMT_YUV420P /* default pix_fmt */
-#define SCALE_FLAGS SWS_BICUBIC
-
-// a wrapper around a single output AVStream
-typedef struct OutputStream {
-    AVStream *st;
-    /* pts of the next frame that will be generated */
-    int64_t next_pts;
-    int samples_count;
-    AVFrame *frame;
-    AVFrame *tmp_frame;
-    float t, tincr, tincr2;
-    struct SwsContext *sws_ctx;
-    struct SwrContext *swr_ctx;
-} OutputStream;
+
+static int sws_flags = SWS_BICUBIC;
 
 static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt)
 {
@@ -80,7 +63,9 @@ static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt)
 static int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base, AVStream *st, AVPacket *pkt)
 {
     /* rescale output packet timestamp values from codec to stream timebase */
-    av_packet_rescale_ts(pkt, *time_base, st->time_base);
+    pkt->pts = av_rescale_q_rnd(pkt->pts, *time_base, st->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
+    pkt->dts = av_rescale_q_rnd(pkt->dts, *time_base, st->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
+    pkt->duration = av_rescale_q(pkt->duration, *time_base, st->time_base);
     pkt->stream_index = st->index;
 
     /* Write the compressed frame to the media file. */
@@ -89,12 +74,11 @@ static int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base, AVStream *st, AVPacket *pkt)
 }
 
 /* Add an output stream. */
-static void add_stream(OutputStream *ost, AVFormatContext *oc,
-                       AVCodec **codec,
-                       enum AVCodecID codec_id)
+static AVStream *add_stream(AVFormatContext *oc, AVCodec **codec,
+                            enum AVCodecID codec_id)
 {
     AVCodecContext *c;
-    int i;
+    AVStream *st;
 
     /* find the encoder */
     *codec = avcodec_find_encoder(codec_id);
@@ -104,13 +88,13 @@ static AVStream *add_stream(AVFormatContext *oc, AVCodec **codec,
         exit(1);
     }
 
-    ost->st = avformat_new_stream(oc, *codec);
-    if (!ost->st) {
+    st = avformat_new_stream(oc, *codec);
+    if (!st) {
         fprintf(stderr, "Could not allocate stream\n");
         exit(1);
     }
-    ost->st->id = oc->nb_streams-1;
-    c = ost->st->codec;
+    st->id = oc->nb_streams-1;
+    c = st->codec;
 
     switch ((*codec)->type) {
     case AVMEDIA_TYPE_AUDIO:
@@ -118,24 +102,7 @@ static AVStream *add_stream(AVFormatContext *oc, AVCodec **codec,
                 (*codec)->sample_fmts[0] : AV_SAMPLE_FMT_FLTP;
         c->bit_rate    = 64000;
         c->sample_rate = 44100;
-        if ((*codec)->supported_samplerates) {
-            c->sample_rate = (*codec)->supported_samplerates[0];
-            for (i = 0; (*codec)->supported_samplerates[i]; i++) {
-                if ((*codec)->supported_samplerates[i] == 44100)
-                    c->sample_rate = 44100;
-            }
-        }
-        c->channels        = av_get_channel_layout_nb_channels(c->channel_layout);
-        c->channel_layout = AV_CH_LAYOUT_STEREO;
-        if ((*codec)->channel_layouts) {
-            c->channel_layout = (*codec)->channel_layouts[0];
-            for (i = 0; (*codec)->channel_layouts[i]; i++) {
-                if ((*codec)->channel_layouts[i] == AV_CH_LAYOUT_STEREO)
-                    c->channel_layout = AV_CH_LAYOUT_STEREO;
-            }
-        }
-        c->channels        = av_get_channel_layout_nb_channels(c->channel_layout);
-        ost->st->time_base = (AVRational){ 1, c->sample_rate };
+        c->channels    = 2;
         break;
 
     case AVMEDIA_TYPE_VIDEO:
@@ -149,9 +116,8 @@ static AVStream *add_stream(AVFormatContext *oc, AVCodec **codec,
          * of which frame timestamps are represented. For fixed-fps content,
          * timebase should be 1/framerate and timestamp increments should be
          * identical to 1. */
-        ost->st->time_base = (AVRational){ 1, STREAM_FRAME_RATE };
-        c->time_base       = ost->st->time_base;
+        c->time_base.den = STREAM_FRAME_RATE;
+        c->time_base.num = 1;
         c->gop_size      = 12; /* emit one intra frame every twelve frames at most */
         c->pix_fmt       = STREAM_PIX_FMT;
         if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
@@ -173,262 +139,258 @@ static void add_stream(OutputStream *ost, AVFormatContext *oc,
/* Some formats want stream headers to be separate. */ /* Some formats want stream headers to be separate. */
if (oc->oformat->flags & AVFMT_GLOBALHEADER) if (oc->oformat->flags & AVFMT_GLOBALHEADER)
c->flags |= CODEC_FLAG_GLOBAL_HEADER; c->flags |= CODEC_FLAG_GLOBAL_HEADER;
return st;
} }
/**************************************************************/ /**************************************************************/
/* audio output */ /* audio output */
static AVFrame *alloc_audio_frame(enum AVSampleFormat sample_fmt, static float t, tincr, tincr2;
uint64_t channel_layout,
int sample_rate, int nb_samples) AVFrame *audio_frame;
static uint8_t **src_samples_data;
static int src_samples_linesize;
static int src_nb_samples;
static int max_dst_nb_samples;
uint8_t **dst_samples_data;
int dst_samples_linesize;
int dst_samples_size;
int samples_count;
struct SwrContext *swr_ctx = NULL;
static void open_audio(AVFormatContext *oc, AVCodec *codec, AVStream *st)
{ {
AVFrame *frame = av_frame_alloc(); AVCodecContext *c;
int ret; int ret;
if (!frame) { c = st->codec;
fprintf(stderr, "Error allocating an audio frame\n");
/* allocate and init a re-usable frame */
audio_frame = av_frame_alloc();
if (!audio_frame) {
fprintf(stderr, "Could not allocate audio frame\n");
exit(1); exit(1);
} }
frame->format = sample_fmt;
frame->channel_layout = channel_layout;
frame->sample_rate = sample_rate;
frame->nb_samples = nb_samples;
if (nb_samples) {
ret = av_frame_get_buffer(frame, 0);
if (ret < 0) {
fprintf(stderr, "Error allocating an audio buffer\n");
exit(1);
}
}
return frame;
}
static void open_audio(AVFormatContext *oc, AVCodec *codec, OutputStream *ost, AVDictionary *opt_arg)
{
AVCodecContext *c;
int nb_samples;
int ret;
AVDictionary *opt = NULL;
c = ost->st->codec;
/* open it */ /* open it */
av_dict_copy(&opt, opt_arg, 0); ret = avcodec_open2(c, codec, NULL);
ret = avcodec_open2(c, codec, &opt);
av_dict_free(&opt);
if (ret < 0) { if (ret < 0) {
fprintf(stderr, "Could not open audio codec: %s\n", av_err2str(ret)); fprintf(stderr, "Could not open audio codec: %s\n", av_err2str(ret));
exit(1); exit(1);
} }
/* init signal generator */ /* init signal generator */
ost->t = 0; t = 0;
ost->tincr = 2 * M_PI * 110.0 / c->sample_rate; tincr = 2 * M_PI * 110.0 / c->sample_rate;
/* increment frequency by 110 Hz per second */ /* increment frequency by 110 Hz per second */
ost->tincr2 = 2 * M_PI * 110.0 / c->sample_rate / c->sample_rate; tincr2 = 2 * M_PI * 110.0 / c->sample_rate / c->sample_rate;
if (c->codec->capabilities & CODEC_CAP_VARIABLE_FRAME_SIZE) src_nb_samples = c->codec->capabilities & CODEC_CAP_VARIABLE_FRAME_SIZE ?
nb_samples = 10000; 10000 : c->frame_size;
else
nb_samples = c->frame_size;
ost->frame = alloc_audio_frame(c->sample_fmt, c->channel_layout, ret = av_samples_alloc_array_and_samples(&src_samples_data, &src_samples_linesize, c->channels,
c->sample_rate, nb_samples); src_nb_samples, AV_SAMPLE_FMT_S16, 0);
ost->tmp_frame = alloc_audio_frame(AV_SAMPLE_FMT_S16, c->channel_layout, if (ret < 0) {
c->sample_rate, nb_samples); fprintf(stderr, "Could not allocate source samples\n");
exit(1);
}
/* compute the number of converted samples: buffering is avoided
* ensuring that the output buffer will contain at least all the
* converted input samples */
max_dst_nb_samples = src_nb_samples;
/* create resampler context */ /* create resampler context */
ost->swr_ctx = swr_alloc(); if (c->sample_fmt != AV_SAMPLE_FMT_S16) {
if (!ost->swr_ctx) { swr_ctx = swr_alloc();
if (!swr_ctx) {
fprintf(stderr, "Could not allocate resampler context\n"); fprintf(stderr, "Could not allocate resampler context\n");
exit(1); exit(1);
} }
/* set options */ /* set options */
av_opt_set_int (ost->swr_ctx, "in_channel_count", c->channels, 0); av_opt_set_int (swr_ctx, "in_channel_count", c->channels, 0);
av_opt_set_int (ost->swr_ctx, "in_sample_rate", c->sample_rate, 0); av_opt_set_int (swr_ctx, "in_sample_rate", c->sample_rate, 0);
av_opt_set_sample_fmt(ost->swr_ctx, "in_sample_fmt", AV_SAMPLE_FMT_S16, 0); av_opt_set_sample_fmt(swr_ctx, "in_sample_fmt", AV_SAMPLE_FMT_S16, 0);
av_opt_set_int (ost->swr_ctx, "out_channel_count", c->channels, 0); av_opt_set_int (swr_ctx, "out_channel_count", c->channels, 0);
av_opt_set_int (ost->swr_ctx, "out_sample_rate", c->sample_rate, 0); av_opt_set_int (swr_ctx, "out_sample_rate", c->sample_rate, 0);
av_opt_set_sample_fmt(ost->swr_ctx, "out_sample_fmt", c->sample_fmt, 0); av_opt_set_sample_fmt(swr_ctx, "out_sample_fmt", c->sample_fmt, 0);
/* initialize the resampling context */ /* initialize the resampling context */
if ((ret = swr_init(ost->swr_ctx)) < 0) { if ((ret = swr_init(swr_ctx)) < 0) {
fprintf(stderr, "Failed to initialize the resampling context\n"); fprintf(stderr, "Failed to initialize the resampling context\n");
exit(1); exit(1);
} }
ret = av_samples_alloc_array_and_samples(&dst_samples_data, &dst_samples_linesize, c->channels,
max_dst_nb_samples, c->sample_fmt, 0);
if (ret < 0) {
fprintf(stderr, "Could not allocate destination samples\n");
exit(1);
}
} else {
dst_samples_data = src_samples_data;
}
dst_samples_size = av_samples_get_buffer_size(NULL, c->channels, max_dst_nb_samples,
c->sample_fmt, 0);
} }
/* Prepare a 16 bit dummy audio frame of 'frame_size' samples and /* Prepare a 16 bit dummy audio frame of 'frame_size' samples and
* 'nb_channels' channels. */ * 'nb_channels' channels. */
static AVFrame *get_audio_frame(OutputStream *ost) static void get_audio_frame(int16_t *samples, int frame_size, int nb_channels)
{ {
AVFrame *frame = ost->tmp_frame;
int j, i, v; int j, i, v;
int16_t *q = (int16_t*)frame->data[0]; int16_t *q;
/* check if we want to generate more frames */ q = samples;
if (av_compare_ts(ost->next_pts, ost->st->codec->time_base, for (j = 0; j < frame_size; j++) {
STREAM_DURATION, (AVRational){ 1, 1 }) >= 0) v = (int)(sin(t) * 10000);
return NULL; for (i = 0; i < nb_channels; i++)
for (j = 0; j <frame->nb_samples; j++) {
v = (int)(sin(ost->t) * 10000);
for (i = 0; i < ost->st->codec->channels; i++)
*q++ = v; *q++ = v;
ost->t += ost->tincr; t += tincr;
ost->tincr += ost->tincr2; tincr += tincr2;
} }
frame->pts = ost->next_pts;
ost->next_pts += frame->nb_samples;
return frame;
} }
/* static void write_audio_frame(AVFormatContext *oc, AVStream *st, int flush)
* encode one audio frame and send it to the muxer
* return 1 when encoding is finished, 0 otherwise
*/
static int write_audio_frame(AVFormatContext *oc, OutputStream *ost)
{ {
AVCodecContext *c; AVCodecContext *c;
AVPacket pkt = { 0 }; // data and size must be 0; AVPacket pkt = { 0 }; // data and size must be 0;
AVFrame *frame; int got_packet, ret, dst_nb_samples;
int ret;
int got_packet;
int dst_nb_samples;
av_init_packet(&pkt); av_init_packet(&pkt);
c = ost->st->codec; c = st->codec;
frame = get_audio_frame(ost); if (!flush) {
get_audio_frame((int16_t *)src_samples_data[0], src_nb_samples, c->channels);
if (frame) {
/* convert samples from native format to destination codec format, using the resampler */ /* convert samples from native format to destination codec format, using the resampler */
if (swr_ctx) {
/* compute destination number of samples */ /* compute destination number of samples */
dst_nb_samples = av_rescale_rnd(swr_get_delay(ost->swr_ctx, c->sample_rate) + frame->nb_samples, dst_nb_samples = av_rescale_rnd(swr_get_delay(swr_ctx, c->sample_rate) + src_nb_samples,
c->sample_rate, c->sample_rate, AV_ROUND_UP); c->sample_rate, c->sample_rate, AV_ROUND_UP);
av_assert0(dst_nb_samples == frame->nb_samples); if (dst_nb_samples > max_dst_nb_samples) {
av_free(dst_samples_data[0]);
/* when we pass a frame to the encoder, it may keep a reference to it ret = av_samples_alloc(dst_samples_data, &dst_samples_linesize, c->channels,
* internally; dst_nb_samples, c->sample_fmt, 0);
* make sure we do not overwrite it here if (ret < 0)
*/ exit(1);
ret = av_frame_make_writable(ost->frame); max_dst_nb_samples = dst_nb_samples;
if (ret < 0) dst_samples_size = av_samples_get_buffer_size(NULL, c->channels, dst_nb_samples,
exit(1); c->sample_fmt, 0);
}
/* convert to destination format */ /* convert to destination format */
ret = swr_convert(ost->swr_ctx, ret = swr_convert(swr_ctx,
ost->frame->data, dst_nb_samples, dst_samples_data, dst_nb_samples,
(const uint8_t **)frame->data, frame->nb_samples); (const uint8_t **)src_samples_data, src_nb_samples);
if (ret < 0) { if (ret < 0) {
fprintf(stderr, "Error while converting\n"); fprintf(stderr, "Error while converting\n");
exit(1); exit(1);
} }
frame = ost->frame; } else {
dst_nb_samples = src_nb_samples;
}
frame->pts = av_rescale_q(ost->samples_count, (AVRational){1, c->sample_rate}, c->time_base); audio_frame->nb_samples = dst_nb_samples;
ost->samples_count += dst_nb_samples; audio_frame->pts = av_rescale_q(samples_count, (AVRational){1, c->sample_rate}, c->time_base);
avcodec_fill_audio_frame(audio_frame, c->channels, c->sample_fmt,
dst_samples_data[0], dst_samples_size, 0);
samples_count += dst_nb_samples;
} }
ret = avcodec_encode_audio2(c, &pkt, frame, &got_packet); ret = avcodec_encode_audio2(c, &pkt, flush ? NULL : audio_frame, &got_packet);
if (ret < 0) { if (ret < 0) {
fprintf(stderr, "Error encoding audio frame: %s\n", av_err2str(ret)); fprintf(stderr, "Error encoding audio frame: %s\n", av_err2str(ret));
exit(1); exit(1);
} }
if (got_packet) { if (!got_packet) {
ret = write_frame(oc, &c->time_base, ost->st, &pkt); if (flush)
if (ret < 0) { audio_is_eof = 1;
fprintf(stderr, "Error while writing audio frame: %s\n", return;
av_err2str(ret));
exit(1);
}
} }
return (frame || got_packet) ? 0 : 1; ret = write_frame(oc, &c->time_base, st, &pkt);
if (ret < 0) {
fprintf(stderr, "Error while writing audio frame: %s\n",
av_err2str(ret));
exit(1);
}
}
static void close_audio(AVFormatContext *oc, AVStream *st)
{
avcodec_close(st->codec);
if (dst_samples_data != src_samples_data) {
av_free(dst_samples_data[0]);
av_free(dst_samples_data);
}
av_free(src_samples_data[0]);
av_free(src_samples_data);
av_frame_free(&audio_frame);
} }
/**************************************************************/ /**************************************************************/
/* video output */ /* video output */
static AVFrame *alloc_picture(enum AVPixelFormat pix_fmt, int width, int height) static AVFrame *frame;
{ static AVPicture src_picture, dst_picture;
AVFrame *picture; static int frame_count;
int ret;
picture = av_frame_alloc(); static void open_video(AVFormatContext *oc, AVCodec *codec, AVStream *st)
if (!picture)
return NULL;
picture->format = pix_fmt;
picture->width = width;
picture->height = height;
/* allocate the buffers for the frame data */
ret = av_frame_get_buffer(picture, 32);
if (ret < 0) {
fprintf(stderr, "Could not allocate frame data.\n");
exit(1);
}
return picture;
}
static void open_video(AVFormatContext *oc, AVCodec *codec, OutputStream *ost, AVDictionary *opt_arg)
{ {
int ret; int ret;
AVCodecContext *c = ost->st->codec; AVCodecContext *c = st->codec;
AVDictionary *opt = NULL;
av_dict_copy(&opt, opt_arg, 0);
/* open the codec */ /* open the codec */
ret = avcodec_open2(c, codec, &opt); ret = avcodec_open2(c, codec, NULL);
av_dict_free(&opt);
if (ret < 0) { if (ret < 0) {
fprintf(stderr, "Could not open video codec: %s\n", av_err2str(ret)); fprintf(stderr, "Could not open video codec: %s\n", av_err2str(ret));
exit(1); exit(1);
} }
/* allocate and init a re-usable frame */ /* allocate and init a re-usable frame */
ost->frame = alloc_picture(c->pix_fmt, c->width, c->height); frame = av_frame_alloc();
if (!ost->frame) { if (!frame) {
fprintf(stderr, "Could not allocate video frame\n"); fprintf(stderr, "Could not allocate video frame\n");
exit(1); exit(1);
} }
frame->format = c->pix_fmt;
frame->width = c->width;
frame->height = c->height;
/* Allocate the encoded raw picture. */
ret = avpicture_alloc(&dst_picture, c->pix_fmt, c->width, c->height);
if (ret < 0) {
fprintf(stderr, "Could not allocate picture: %s\n", av_err2str(ret));
exit(1);
}
/* If the output format is not YUV420P, then a temporary YUV420P /* If the output format is not YUV420P, then a temporary YUV420P
* picture is needed too. It is then converted to the required * picture is needed too. It is then converted to the required
* output format. */ * output format. */
ost->tmp_frame = NULL;
if (c->pix_fmt != AV_PIX_FMT_YUV420P) { if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
ost->tmp_frame = alloc_picture(AV_PIX_FMT_YUV420P, c->width, c->height); ret = avpicture_alloc(&src_picture, AV_PIX_FMT_YUV420P, c->width, c->height);
if (!ost->tmp_frame) { if (ret < 0) {
fprintf(stderr, "Could not allocate temporary picture\n"); fprintf(stderr, "Could not allocate temporary picture: %s\n",
av_err2str(ret));
exit(1); exit(1);
} }
} }
/* copy data and linesize picture pointers to frame */
*((AVPicture *)frame) = dst_picture;
} }
/* Prepare a dummy image. */ /* Prepare a dummy image. */
static void fill_yuv_image(AVFrame *pict, int frame_index, static void fill_yuv_image(AVPicture *pict, int frame_index,
int width, int height) int width, int height)
{ {
int x, y, i, ret; int x, y, i;
/* when we pass a frame to the encoder, it may keep a reference to it
* internally;
* make sure we do not overwrite it here
*/
ret = av_frame_make_writable(pict);
if (ret < 0)
exit(1);
i = frame_index; i = frame_index;
@@ -446,89 +408,65 @@ static void fill_yuv_image(AVFrame *pict, int frame_index,
         }
     }
-static AVFrame *get_video_frame(OutputStream *ost)
-{
-    AVCodecContext *c = ost->st->codec;
-    /* check if we want to generate more frames */
-    if (av_compare_ts(ost->next_pts, ost->st->codec->time_base,
-                      STREAM_DURATION, (AVRational){ 1, 1 }) >= 0)
-        return NULL;
-    if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
-        /* as we only generate a YUV420P picture, we must convert it
-         * to the codec pixel format if needed */
-        if (!ost->sws_ctx) {
-            ost->sws_ctx = sws_getContext(c->width, c->height,
-                                          AV_PIX_FMT_YUV420P,
-                                          c->width, c->height,
-                                          c->pix_fmt,
-                                          SCALE_FLAGS, NULL, NULL, NULL);
-            if (!ost->sws_ctx) {
-                fprintf(stderr,
-                        "Could not initialize the conversion context\n");
-                exit(1);
-            }
-        }
-        fill_yuv_image(ost->tmp_frame, ost->next_pts, c->width, c->height);
-        sws_scale(ost->sws_ctx,
-                  (const uint8_t * const *)ost->tmp_frame->data, ost->tmp_frame->linesize,
-                  0, c->height, ost->frame->data, ost->frame->linesize);
-    } else {
-        fill_yuv_image(ost->frame, ost->next_pts, c->width, c->height);
-    }
-    ost->frame->pts = ost->next_pts++;
-    return ost->frame;
-}
-/*
- * encode one video frame and send it to the muxer
- * return 1 when encoding is finished, 0 otherwise
- */
-static int write_video_frame(AVFormatContext *oc, OutputStream *ost)
+static void write_video_frame(AVFormatContext *oc, AVStream *st, int flush)
 {
     int ret;
-    AVCodecContext *c;
-    AVFrame *frame;
-    int got_packet = 0;
-    c = ost->st->codec;
-    frame = get_video_frame(ost);
+    static struct SwsContext *sws_ctx;
+    AVCodecContext *c = st->codec;
+    if (!flush) {
+        if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
+            /* as we only generate a YUV420P picture, we must convert it
+             * to the codec pixel format if needed */
+            if (!sws_ctx) {
+                sws_ctx = sws_getContext(c->width, c->height, AV_PIX_FMT_YUV420P,
+                                         c->width, c->height, c->pix_fmt,
+                                         sws_flags, NULL, NULL, NULL);
+                if (!sws_ctx) {
+                    fprintf(stderr,
+                            "Could not initialize the conversion context\n");
+                    exit(1);
+                }
+            }
+            fill_yuv_image(&src_picture, frame_count, c->width, c->height);
+            sws_scale(sws_ctx,
+                      (const uint8_t * const *)src_picture.data, src_picture.linesize,
+                      0, c->height, dst_picture.data, dst_picture.linesize);
+        } else {
+            fill_yuv_image(&dst_picture, frame_count, c->width, c->height);
+        }
+    }
-    /* a hack to avoid data copy with some raw video muxers */
-    if (oc->oformat->flags & AVFMT_RAWPICTURE) {
+    if (oc->oformat->flags & AVFMT_RAWPICTURE && !flush) {
+        /* Raw video case - directly store the picture in the packet */
         AVPacket pkt;
         av_init_packet(&pkt);
-        if (!frame)
-            return 1;
         pkt.flags        |= AV_PKT_FLAG_KEY;
-        pkt.stream_index  = ost->st->index;
-        pkt.data          = (uint8_t *)frame;
+        pkt.stream_index  = st->index;
+        pkt.data          = dst_picture.data[0];
         pkt.size          = sizeof(AVPicture);
-        pkt.pts = pkt.dts = frame->pts;
-        av_packet_rescale_ts(&pkt, c->time_base, ost->st->time_base);
         ret = av_interleaved_write_frame(oc, &pkt);
     } else {
         AVPacket pkt = { 0 };
+        int got_packet;
         av_init_packet(&pkt);
         /* encode the image */
-        ret = avcodec_encode_video2(c, &pkt, frame, &got_packet);
+        frame->pts = frame_count;
+        ret = avcodec_encode_video2(c, &pkt, flush ? NULL : frame, &got_packet);
         if (ret < 0) {
             fprintf(stderr, "Error encoding video frame: %s\n", av_err2str(ret));
             exit(1);
         }
+        /* If size is zero, it means the image was buffered. */
         if (got_packet) {
-            ret = write_frame(oc, &c->time_base, ost->st, &pkt);
+            ret = write_frame(oc, &c->time_base, st, &pkt);
         } else {
+            if (flush)
+                video_is_eof = 1;
             ret = 0;
         }
     }
@@ -537,17 +475,15 @@ static int write_video_frame(AVFormatContext *oc, OutputStream *ost)
         fprintf(stderr, "Error while writing video frame: %s\n", av_err2str(ret));
         exit(1);
     }
-    return (frame || got_packet) ? 0 : 1;
+    frame_count++;
 }
-static void close_stream(AVFormatContext *oc, OutputStream *ost)
+static void close_video(AVFormatContext *oc, AVStream *st)
 {
-    avcodec_close(ost->st->codec);
-    av_frame_free(&ost->frame);
-    av_frame_free(&ost->tmp_frame);
-    sws_freeContext(ost->sws_ctx);
-    swr_free(&ost->swr_ctx);
+    avcodec_close(st->codec);
+    av_free(src_picture.data[0]);
+    av_free(dst_picture.data[0]);
+    av_frame_free(&frame);
 }
 /**************************************************************/
@@ -555,20 +491,18 @@ static void close_stream(AVFormatContext *oc, OutputStream *ost)
 int main(int argc, char **argv)
 {
-    OutputStream video_st = { 0 }, audio_st = { 0 };
     const char *filename;
     AVOutputFormat *fmt;
     AVFormatContext *oc;
+    AVStream *audio_st, *video_st;
     AVCodec *audio_codec, *video_codec;
-    int ret;
-    int have_video = 0, have_audio = 0;
-    int encode_video = 0, encode_audio = 0;
-    AVDictionary *opt = NULL;
+    double audio_time, video_time;
+    int flush, ret;
     /* Initialize libavcodec, and register all codecs and formats. */
     av_register_all();
-    if (argc < 2) {
+    if (argc != 2) {
         printf("usage: %s output_file\n"
                "API example program to output a media file with libavformat.\n"
                "This program generates a synthetic audio and video stream, encodes and\n"
@@ -580,9 +514,6 @@ int main(int argc, char **argv)
     }
     filename = argv[1];
-    if (argc > 3 && !strcmp(argv[2], "-flags")) {
-        av_dict_set(&opt, argv[2]+1, argv[3], 0);
-    }
     /* allocate the output media context */
     avformat_alloc_output_context2(&oc, NULL, NULL, filename);
@@ -597,24 +528,20 @@ int main(int argc, char **argv)
     /* Add the audio and video streams using the default format codecs
      * and initialize the codecs. */
-    if (fmt->video_codec != AV_CODEC_ID_NONE) {
-        add_stream(&video_st, oc, &video_codec, fmt->video_codec);
-        have_video = 1;
-        encode_video = 1;
-    }
-    if (fmt->audio_codec != AV_CODEC_ID_NONE) {
-        add_stream(&audio_st, oc, &audio_codec, fmt->audio_codec);
-        have_audio = 1;
-        encode_audio = 1;
-    }
+    video_st = NULL;
+    audio_st = NULL;
+    if (fmt->video_codec != AV_CODEC_ID_NONE)
+        video_st = add_stream(oc, &video_codec, fmt->video_codec);
+    if (fmt->audio_codec != AV_CODEC_ID_NONE)
+        audio_st = add_stream(oc, &audio_codec, fmt->audio_codec);
     /* Now that all the parameters are set, we can open the audio and
      * video codecs and allocate the necessary encode buffers. */
-    if (have_video)
-        open_video(oc, video_codec, &video_st, opt);
-    if (have_audio)
-        open_audio(oc, audio_codec, &audio_st, opt);
+    if (video_st)
+        open_video(oc, video_codec, video_st);
+    if (audio_st)
+        open_audio(oc, audio_codec, audio_st);
     av_dump_format(oc, 0, filename, 1);
@@ -629,21 +556,30 @@ int main(int argc, char **argv)
     }
     /* Write the stream header, if any. */
-    ret = avformat_write_header(oc, &opt);
+    ret = avformat_write_header(oc, NULL);
     if (ret < 0) {
         fprintf(stderr, "Error occurred when opening output file: %s\n",
                 av_err2str(ret));
         return 1;
     }
-    while (encode_video || encode_audio) {
-        /* select the stream to encode */
-        if (encode_video &&
-            (!encode_audio || av_compare_ts(video_st.next_pts, video_st.st->codec->time_base,
-                                            audio_st.next_pts, audio_st.st->codec->time_base) <= 0)) {
-            encode_video = !write_video_frame(oc, &video_st);
-        } else {
-            encode_audio = !write_audio_frame(oc, &audio_st);
+    flush = 0;
+    while ((video_st && !video_is_eof) || (audio_st && !audio_is_eof)) {
+        /* Compute current audio and video time. */
+        audio_time = (audio_st && !audio_is_eof) ? audio_st->pts.val * av_q2d(audio_st->time_base) : INFINITY;
+        video_time = (video_st && !video_is_eof) ? video_st->pts.val * av_q2d(video_st->time_base) : INFINITY;
+        if (!flush &&
+            (!audio_st || audio_time >= STREAM_DURATION) &&
+            (!video_st || video_time >= STREAM_DURATION)) {
+            flush = 1;
+        }
+        /* write interleaved audio and video frames */
+        if (audio_st && !audio_is_eof && audio_time <= video_time) {
+            write_audio_frame(oc, audio_st, flush);
+        } else if (video_st && !video_is_eof && video_time < audio_time) {
+            write_video_frame(oc, video_st, flush);
         }
     }
@@ -654,14 +590,14 @@ int main(int argc, char **argv)
     av_write_trailer(oc);
     /* Close each codec. */
-    if (have_video)
-        close_stream(oc, &video_st);
-    if (have_audio)
-        close_stream(oc, &audio_st);
+    if (video_st)
+        close_video(oc, video_st);
+    if (audio_st)
+        close_audio(oc, audio_st);
     if (!(fmt->flags & AVFMT_NOFILE))
         /* Close the output file. */
-        avio_closep(&oc->pb);
+        avio_close(oc->pb);
     /* free the stream */
     avformat_free_context(oc);
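The removed master-branch loop above chooses the next stream to encode by comparing the two streams' `next_pts` values with `av_compare_ts()` instead of the floating-point `audio_time`/`video_time` comparison that replaces it. A minimal standalone sketch of that comparison (the `TimeBase` struct and function name are illustrative; the real `av_compare_ts()` also guards against 64-bit overflow):

```c
#include <assert.h>
#include <stdint.h>

typedef struct TimeBase { int num, den; } TimeBase;

/* Compare ts_a (expressed in tb_a) with ts_b (in tb_b) exactly, without
 * floating point: cross-multiply the two rationals.
 * Returns -1 if a is earlier, 1 if later, 0 if equal.
 * Simplified sketch; assumes the products fit in int64_t. */
static int compare_ts_sketch(int64_t ts_a, TimeBase tb_a,
                             int64_t ts_b, TimeBase tb_b)
{
    /* ts_a*num_a/den_a <?> ts_b*num_b/den_b  ==>  cross-multiplied: */
    int64_t lhs = ts_a * tb_a.num * tb_b.den;
    int64_t rhs = ts_b * tb_b.num * tb_a.den;
    return lhs < rhs ? -1 : (lhs > rhs ? 1 : 0);
}
```

With a 1/25 video time base and a 1/44100 audio time base, 25 video ticks and 44100 audio ticks both denote one second and compare equal; this is why the loop can interleave streams without converting to seconds first.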
@@ -1,484 +0,0 @@
/*
* Copyright (c) 2015 Anton Khirnov
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/**
* @file
* Intel QSV-accelerated H.264 decoding example.
*
* @example qsvdec.c
* This example shows how to do QSV-accelerated H.264 decoding with output
* frames in the VA-API video surfaces.
*/
#include "config.h"
#include <stdio.h>
#include <mfx/mfxvideo.h>
#include <va/va.h>
#include <va/va_x11.h>
#include <X11/Xlib.h>
#include "libavformat/avformat.h"
#include "libavformat/avio.h"
#include "libavcodec/avcodec.h"
#include "libavcodec/qsv.h"
#include "libavutil/error.h"
#include "libavutil/mem.h"
typedef struct DecodeContext {
mfxSession mfx_session;
VADisplay va_dpy;
VASurfaceID *surfaces;
mfxMemId *surface_ids;
int *surface_used;
int nb_surfaces;
mfxFrameInfo frame_info;
} DecodeContext;
static mfxStatus frame_alloc(mfxHDL pthis, mfxFrameAllocRequest *req,
mfxFrameAllocResponse *resp)
{
DecodeContext *decode = pthis;
int err, i;
if (decode->surfaces) {
fprintf(stderr, "Multiple allocation requests.\n");
return MFX_ERR_MEMORY_ALLOC;
}
if (!(req->Type & MFX_MEMTYPE_VIDEO_MEMORY_DECODER_TARGET)) {
fprintf(stderr, "Unsupported surface type: %d\n", req->Type);
return MFX_ERR_UNSUPPORTED;
}
if (req->Info.BitDepthLuma != 8 || req->Info.BitDepthChroma != 8 ||
req->Info.Shift || req->Info.FourCC != MFX_FOURCC_NV12 ||
req->Info.ChromaFormat != MFX_CHROMAFORMAT_YUV420) {
fprintf(stderr, "Unsupported surface properties.\n");
return MFX_ERR_UNSUPPORTED;
}
decode->surfaces = av_malloc_array (req->NumFrameSuggested, sizeof(*decode->surfaces));
decode->surface_ids = av_malloc_array (req->NumFrameSuggested, sizeof(*decode->surface_ids));
decode->surface_used = av_mallocz_array(req->NumFrameSuggested, sizeof(*decode->surface_used));
if (!decode->surfaces || !decode->surface_ids || !decode->surface_used)
goto fail;
err = vaCreateSurfaces(decode->va_dpy, VA_RT_FORMAT_YUV420,
req->Info.Width, req->Info.Height,
decode->surfaces, req->NumFrameSuggested,
NULL, 0);
if (err != VA_STATUS_SUCCESS) {
fprintf(stderr, "Error allocating VA surfaces\n");
goto fail;
}
decode->nb_surfaces = req->NumFrameSuggested;
for (i = 0; i < decode->nb_surfaces; i++)
decode->surface_ids[i] = &decode->surfaces[i];
resp->mids = decode->surface_ids;
resp->NumFrameActual = decode->nb_surfaces;
decode->frame_info = req->Info;
return MFX_ERR_NONE;
fail:
av_freep(&decode->surfaces);
av_freep(&decode->surface_ids);
av_freep(&decode->surface_used);
return MFX_ERR_MEMORY_ALLOC;
}
static mfxStatus frame_free(mfxHDL pthis, mfxFrameAllocResponse *resp)
{
DecodeContext *decode = pthis;
if (decode->surfaces)
vaDestroySurfaces(decode->va_dpy, decode->surfaces, decode->nb_surfaces);
av_freep(&decode->surfaces);
av_freep(&decode->surface_ids);
av_freep(&decode->surface_used);
decode->nb_surfaces = 0;
return MFX_ERR_NONE;
}
static mfxStatus frame_lock(mfxHDL pthis, mfxMemId mid, mfxFrameData *ptr)
{
return MFX_ERR_UNSUPPORTED;
}
static mfxStatus frame_unlock(mfxHDL pthis, mfxMemId mid, mfxFrameData *ptr)
{
return MFX_ERR_UNSUPPORTED;
}
static mfxStatus frame_get_hdl(mfxHDL pthis, mfxMemId mid, mfxHDL *hdl)
{
*hdl = mid;
return MFX_ERR_NONE;
}
static void free_buffer(void *opaque, uint8_t *data)
{
int *used = opaque;
*used = 0;
av_freep(&data);
}
static int get_buffer(AVCodecContext *avctx, AVFrame *frame, int flags)
{
DecodeContext *decode = avctx->opaque;
mfxFrameSurface1 *surf;
AVBufferRef *surf_buf;
int idx;
for (idx = 0; idx < decode->nb_surfaces; idx++) {
if (!decode->surface_used[idx])
break;
}
if (idx == decode->nb_surfaces) {
fprintf(stderr, "No free surfaces\n");
return AVERROR(ENOMEM);
}
surf = av_mallocz(sizeof(*surf));
if (!surf)
return AVERROR(ENOMEM);
surf_buf = av_buffer_create((uint8_t*)surf, sizeof(*surf), free_buffer,
&decode->surface_used[idx], AV_BUFFER_FLAG_READONLY);
if (!surf_buf) {
av_freep(&surf);
return AVERROR(ENOMEM);
}
surf->Info = decode->frame_info;
surf->Data.MemId = &decode->surfaces[idx];
frame->buf[0] = surf_buf;
frame->data[3] = (uint8_t*)surf;
decode->surface_used[idx] = 1;
return 0;
}
static int get_format(AVCodecContext *avctx, const enum AVPixelFormat *pix_fmts)
{
while (*pix_fmts != AV_PIX_FMT_NONE) {
if (*pix_fmts == AV_PIX_FMT_QSV) {
if (!avctx->hwaccel_context) {
DecodeContext *decode = avctx->opaque;
AVQSVContext *qsv = av_qsv_alloc_context();
if (!qsv)
return AV_PIX_FMT_NONE;
qsv->session = decode->mfx_session;
qsv->iopattern = MFX_IOPATTERN_OUT_VIDEO_MEMORY;
avctx->hwaccel_context = qsv;
}
return AV_PIX_FMT_QSV;
}
pix_fmts++;
}
fprintf(stderr, "The QSV pixel format not offered in get_format()\n");
return AV_PIX_FMT_NONE;
}
static int decode_packet(DecodeContext *decode, AVCodecContext *decoder_ctx,
AVFrame *frame, AVPacket *pkt,
AVIOContext *output_ctx)
{
int ret = 0;
int got_frame = 1;
while (pkt->size > 0 || (!pkt->data && got_frame)) {
ret = avcodec_decode_video2(decoder_ctx, frame, &got_frame, pkt);
if (ret < 0) {
fprintf(stderr, "Error during decoding\n");
return ret;
}
pkt->data += ret;
pkt->size -= ret;
/* A real program would do something useful with the decoded frame here.
* We just retrieve the raw data and write it to a file, which is rather
* useless but pedagogic. */
if (got_frame) {
mfxFrameSurface1 *surf = (mfxFrameSurface1*)frame->data[3];
VASurfaceID surface = *(VASurfaceID*)surf->Data.MemId;
VAImageFormat img_fmt = {
.fourcc = VA_FOURCC_NV12,
.byte_order = VA_LSB_FIRST,
.bits_per_pixel = 8,
.depth = 8,
};
VAImage img;
VAStatus err;
uint8_t *data;
int i, j;
img.buf = VA_INVALID_ID;
img.image_id = VA_INVALID_ID;
err = vaCreateImage(decode->va_dpy, &img_fmt,
frame->width, frame->height, &img);
if (err != VA_STATUS_SUCCESS) {
fprintf(stderr, "Error creating an image: %s\n",
vaErrorStr(err));
ret = AVERROR_UNKNOWN;
goto fail;
}
err = vaGetImage(decode->va_dpy, surface, 0, 0,
frame->width, frame->height,
img.image_id);
if (err != VA_STATUS_SUCCESS) {
fprintf(stderr, "Error getting an image: %s\n",
vaErrorStr(err));
ret = AVERROR_UNKNOWN;
goto fail;
}
err = vaMapBuffer(decode->va_dpy, img.buf, (void**)&data);
if (err != VA_STATUS_SUCCESS) {
fprintf(stderr, "Error mapping the image buffer: %s\n",
vaErrorStr(err));
ret = AVERROR_UNKNOWN;
goto fail;
}
for (i = 0; i < img.num_planes; i++)
for (j = 0; j < (img.height >> (i > 0)); j++)
avio_write(output_ctx, data + img.offsets[i] + j * img.pitches[i], img.width);
fail:
if (img.buf != VA_INVALID_ID)
vaUnmapBuffer(decode->va_dpy, img.buf);
if (img.image_id != VA_INVALID_ID)
vaDestroyImage(decode->va_dpy, img.image_id);
av_frame_unref(frame);
if (ret < 0)
return ret;
}
}
return 0;
}
int main(int argc, char **argv)
{
AVFormatContext *input_ctx = NULL;
AVStream *video_st = NULL;
AVCodecContext *decoder_ctx = NULL;
const AVCodec *decoder;
AVPacket pkt = { 0 };
AVFrame *frame = NULL;
DecodeContext decode = { NULL };
Display *dpy = NULL;
int va_ver_major, va_ver_minor;
mfxIMPL mfx_impl = MFX_IMPL_AUTO_ANY;
mfxVersion mfx_ver = { { 1, 1 } };
mfxFrameAllocator frame_allocator = {
.pthis = &decode,
.Alloc = frame_alloc,
.Lock = frame_lock,
.Unlock = frame_unlock,
.GetHDL = frame_get_hdl,
.Free = frame_free,
};
AVIOContext *output_ctx = NULL;
int ret, i, err;
av_register_all();
if (argc < 3) {
fprintf(stderr, "Usage: %s <input file> <output file>\n", argv[0]);
return 1;
}
/* open the input file */
ret = avformat_open_input(&input_ctx, argv[1], NULL, NULL);
if (ret < 0) {
fprintf(stderr, "Cannot open input file '%s': ", argv[1]);
goto finish;
}
/* find the first H.264 video stream */
for (i = 0; i < input_ctx->nb_streams; i++) {
AVStream *st = input_ctx->streams[i];
if (st->codec->codec_id == AV_CODEC_ID_H264 && !video_st)
video_st = st;
else
st->discard = AVDISCARD_ALL;
}
if (!video_st) {
fprintf(stderr, "No H.264 video stream in the input file\n");
goto finish;
}
/* initialize VA-API */
dpy = XOpenDisplay(NULL);
if (!dpy) {
fprintf(stderr, "Cannot open the X display\n");
goto finish;
}
decode.va_dpy = vaGetDisplay(dpy);
if (!decode.va_dpy) {
fprintf(stderr, "Cannot open the VA display\n");
goto finish;
}
err = vaInitialize(decode.va_dpy, &va_ver_major, &va_ver_minor);
if (err != VA_STATUS_SUCCESS) {
fprintf(stderr, "Cannot initialize VA: %s\n", vaErrorStr(err));
goto finish;
}
fprintf(stderr, "Initialized VA v%d.%d\n", va_ver_major, va_ver_minor);
/* initialize an MFX session */
err = MFXInit(mfx_impl, &mfx_ver, &decode.mfx_session);
if (err != MFX_ERR_NONE) {
fprintf(stderr, "Error initializing an MFX session\n");
goto finish;
}
MFXVideoCORE_SetHandle(decode.mfx_session, MFX_HANDLE_VA_DISPLAY, decode.va_dpy);
MFXVideoCORE_SetFrameAllocator(decode.mfx_session, &frame_allocator);
/* initialize the decoder */
decoder = avcodec_find_decoder_by_name("h264_qsv");
if (!decoder) {
fprintf(stderr, "The QSV decoder is not present in libavcodec\n");
goto finish;
}
decoder_ctx = avcodec_alloc_context3(decoder);
if (!decoder_ctx) {
ret = AVERROR(ENOMEM);
goto finish;
}
decoder_ctx->codec_id = AV_CODEC_ID_H264;
if (video_st->codec->extradata_size) {
decoder_ctx->extradata = av_mallocz(video_st->codec->extradata_size +
FF_INPUT_BUFFER_PADDING_SIZE);
if (!decoder_ctx->extradata) {
ret = AVERROR(ENOMEM);
goto finish;
}
memcpy(decoder_ctx->extradata, video_st->codec->extradata,
video_st->codec->extradata_size);
decoder_ctx->extradata_size = video_st->codec->extradata_size;
}
decoder_ctx->refcounted_frames = 1;
decoder_ctx->opaque = &decode;
decoder_ctx->get_buffer2 = get_buffer;
decoder_ctx->get_format = get_format;
ret = avcodec_open2(decoder_ctx, NULL, NULL);
if (ret < 0) {
fprintf(stderr, "Error opening the decoder: ");
goto finish;
}
/* open the output stream */
ret = avio_open(&output_ctx, argv[2], AVIO_FLAG_WRITE);
if (ret < 0) {
fprintf(stderr, "Error opening the output context: ");
goto finish;
}
frame = av_frame_alloc();
if (!frame) {
ret = AVERROR(ENOMEM);
goto finish;
}
/* actual decoding */
while (ret >= 0) {
ret = av_read_frame(input_ctx, &pkt);
if (ret < 0)
break;
if (pkt.stream_index == video_st->index)
ret = decode_packet(&decode, decoder_ctx, frame, &pkt, output_ctx);
av_packet_unref(&pkt);
}
/* flush the decoder */
pkt.data = NULL;
pkt.size = 0;
ret = decode_packet(&decode, decoder_ctx, frame, &pkt, output_ctx);
finish:
if (ret < 0) {
char buf[1024];
av_strerror(ret, buf, sizeof(buf));
fprintf(stderr, "%s\n", buf);
}
avformat_close_input(&input_ctx);
av_frame_free(&frame);
if (decode.mfx_session)
MFXClose(decode.mfx_session);
if (decode.va_dpy)
vaTerminate(decode.va_dpy);
if (dpy)
XCloseDisplay(dpy);
if (decoder_ctx)
av_freep(&decoder_ctx->hwaccel_context);
avcodec_free_context(&decoder_ctx);
avio_close(output_ctx);
return ret;
}
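The raw-frame write loop in `decode_packet()` above emits `img.height` rows for plane 0 and `img.height >> 1` rows for plane 1, i.e. the usual NV12 layout: a full-resolution luma plane followed by an interleaved CbCr plane at half vertical resolution. A small helper (hypothetical, for illustration only) computing the byte count of one tightly packed NV12 frame:

```c
#include <assert.h>
#include <stddef.h>

/* Bytes in a tightly packed NV12 frame:
 * plane 0 (Y):    width x height, one byte per pixel
 * plane 1 (CbCr): width x height/2, Cb and Cr interleaved */
static size_t nv12_frame_size(size_t width, size_t height)
{
    return width * height          /* Y plane    */
         + width * (height / 2);   /* CbCr plane */
}
```

This is the familiar 1.5 bytes per pixel of 4:2:0 video, which is why the output file of the example grows by `width * height * 3 / 2` per decoded frame.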
@@ -99,7 +99,6 @@ int main(int argc, char **argv)
             fprintf(stderr, "Failed to copy context from input to output stream codec context\n");
             goto end;
         }
-        out_stream->codec->codec_tag = 0;
         if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
             out_stream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
     }
@@ -153,7 +152,7 @@ end:
     /* close output */
     if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
-        avio_closep(&ofmt_ctx->pb);
+        avio_close(ofmt_ctx->pb);
     avformat_free_context(ofmt_ctx);
     if (ret < 0 && ret != AVERROR_EOF) {
@@ -168,7 +168,7 @@ int main(int argc, char **argv)
         dst_nb_samples = av_rescale_rnd(swr_get_delay(swr_ctx, src_rate) +
                                         src_nb_samples, dst_rate, src_rate, AV_ROUND_UP);
         if (dst_nb_samples > max_dst_nb_samples) {
-            av_freep(&dst_data[0]);
+            av_free(dst_data[0]);
             ret = av_samples_alloc(dst_data, &dst_linesize, dst_nb_channels,
                                    dst_nb_samples, dst_sample_fmt, 1);
             if (ret < 0)
@@ -199,7 +199,8 @@ int main(int argc, char **argv)
            fmt, dst_ch_layout, dst_nb_channels, dst_rate, dst_filename);
 end:
-    fclose(dst_file);
+    if (dst_file)
+        fclose(dst_file);
     if (src_data)
         av_freep(&src_data[0]);
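The `dst_nb_samples` computation in the hunk above uses `av_rescale_rnd(..., AV_ROUND_UP)`, i.e. `a * b / c` rounded toward positive infinity, so the destination buffer is never sized one sample short. A sketch of that rounding for small, non-negative values (the real libavutil function additionally guards against 64-bit overflow):

```c
#include <assert.h>
#include <stdint.h>

/* a * b / c rounded up, valid for non-negative a, b and positive c,
 * assuming a * b does not overflow int64_t. */
static int64_t rescale_round_up(int64_t a, int64_t b, int64_t c)
{
    return (a * b + c - 1) / c;
}
```

For example, rescaling a 1024-sample frame from 44100 Hz to 48000 Hz output timing gives 940.8 exact samples, which must round up to 941 so that no input sample is dropped.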
@@ -132,7 +132,8 @@ int main(int argc, char **argv)
            av_get_pix_fmt_name(dst_pix_fmt), dst_w, dst_h, dst_filename);
 end:
-    fclose(dst_file);
+    if (dst_file)
+        fclose(dst_file);
     av_freep(&src_data[0]);
     av_freep(&dst_data[0]);
     sws_freeContext(sws_ctx);
@@ -41,16 +41,18 @@
 #include "libswresample/swresample.h"
 /** The output bit rate in kbit/s */
-#define OUTPUT_BIT_RATE 96000
+#define OUTPUT_BIT_RATE 48000
 /** The number of output channels */
 #define OUTPUT_CHANNELS 2
+/** The audio sample output format */
+#define OUTPUT_SAMPLE_FORMAT AV_SAMPLE_FMT_S16
 /**
  * Convert an error code into a text message.
  * @param error Error code to be converted
  * @return Corresponding error text (not thread-safe)
  */
-static const char *get_error_text(const int error)
+static char *const get_error_text(const int error)
 {
     static char error_buffer[255];
     av_strerror(error, error_buffer, sizeof(error_buffer));
@@ -167,7 +169,7 @@ static int open_output_file(const char *filename,
         goto cleanup;
     }
-    /** Save the encoder context for easier access later. */
+    /** Save the encoder context for easiert access later. */
     *output_codec_context = stream->codec;
     /**
@@ -177,16 +179,9 @@ static int open_output_file(const char *filename,
     (*output_codec_context)->channels       = OUTPUT_CHANNELS;
     (*output_codec_context)->channel_layout = av_get_default_channel_layout(OUTPUT_CHANNELS);
     (*output_codec_context)->sample_rate    = input_codec_context->sample_rate;
-    (*output_codec_context)->sample_fmt     = output_codec->sample_fmts[0];
+    (*output_codec_context)->sample_fmt     = AV_SAMPLE_FMT_S16;
     (*output_codec_context)->bit_rate       = OUTPUT_BIT_RATE;
-    /** Allow the use of the experimental AAC encoder */
-    (*output_codec_context)->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL;
-    /** Set the sample rate for the container. */
-    stream->time_base.den = input_codec_context->sample_rate;
-    stream->time_base.num = 1;
     /**
      * Some container formats (like MP4) require global headers to be present
      * Mark the encoder so that it behaves accordingly.
@@ -204,7 +199,7 @@ static int open_output_file(const char *filename,
     return 0;
 cleanup:
-    avio_closep(&(*output_format_context)->pb);
+    avio_close((*output_format_context)->pb);
     avformat_free_context(*output_format_context);
     *output_format_context = NULL;
     return error < 0 ? error : AVERROR_EXIT;
@@ -276,11 +271,10 @@ static int init_resampler(AVCodecContext *input_codec_context,
 }
 /** Initialize a FIFO buffer for the audio samples to be encoded. */
-static int init_fifo(AVAudioFifo **fifo, AVCodecContext *output_codec_context)
+static int init_fifo(AVAudioFifo **fifo)
 {
     /** Create the FIFO buffer based on the specified output sample format. */
-    if (!(*fifo = av_audio_fifo_alloc(output_codec_context->sample_fmt,
-                                      output_codec_context->channels, 1))) {
+    if (!(*fifo = av_audio_fifo_alloc(OUTPUT_SAMPLE_FORMAT, OUTPUT_CHANNELS, 1))) {
         fprintf(stderr, "Could not allocate FIFO\n");
         return AVERROR(ENOMEM);
     }
@@ -312,7 +306,7 @@ static int decode_audio_frame(AVFrame *frame,
     /** Read one audio frame from the input file into a temporary packet. */
     if ((error = av_read_frame(input_format_context, &input_packet)) < 0) {
-        /** If we are at the end of the file, flush the decoder below. */
+        /** If we are the the end of the file, flush the decoder below. */
         if (error == AVERROR_EOF)
             *finished = 1;
         else {
@@ -543,9 +537,6 @@ static int init_output_frame(AVFrame **frame,
     return 0;
 }
-/** Global timestamp for the audio frames */
-static int64_t pts = 0;
 /** Encode one frame worth of audio to the output file. */
 static int encode_audio_frame(AVFrame *frame,
                               AVFormatContext *output_format_context,
@@ -557,12 +548,6 @@ static int encode_audio_frame(AVFrame *frame,
     int error;
     init_packet(&output_packet);
-    /** Set a timestamp based on the sample rate for the container. */
-    if (frame) {
-        frame->pts = pts;
-        pts += frame->nb_samples;
-    }
     /**
      * Encode the audio frame and store it in the temporary packet.
      * The output audio stream encoder is used to do this.
@@ -674,7 +659,7 @@ int main(int argc, char **argv)
                          &resample_context))
         goto cleanup;
     /** Initialize the FIFO buffer to store audio samples to be encoded. */
-    if (init_fifo(&fifo, output_codec_context))
+    if (init_fifo(&fifo))
         goto cleanup;
     /** Write the header of the output file container. */
     if (write_output_file_header(output_format_context))
@@ -758,7 +743,7 @@ cleanup:
     if (output_codec_context)
         avcodec_close(output_codec_context);
     if (output_format_context) {
-        avio_closep(&output_format_context->pb);
+        avio_close(output_format_context->pb);
         avformat_free_context(output_format_context);
     }
     if (input_codec_context)
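The removed `frame->pts = pts; pts += frame->nb_samples;` lines above show how the newer transcode_aac stamps audio frames: with a 1/sample_rate stream time base, a frame's pts is simply the running count of samples written before it. A standalone sketch of that bookkeeping (the `PtsTracker` name is illustrative, not part of the example):

```c
#include <assert.h>
#include <stdint.h>

/* Running sample counter used as the audio pts source. */
typedef struct PtsTracker { int64_t next_pts; } PtsTracker;

/* Return the pts for a frame of nb_samples and advance the counter
 * by the frame's duration (in 1/sample_rate units). */
static int64_t stamp_frame(PtsTracker *t, int nb_samples)
{
    int64_t pts = t->next_pts;
    t->next_pts += nb_samples;
    return pts;
}

/* Three consecutive 1024-sample AAC frames get pts 0, 1024, 2048;
 * return the pts of the third one. */
static int64_t third_frame_pts(void)
{
    PtsTracker t = { 0 };
    stamp_frame(&t, 1024);
    stamp_frame(&t, 1024);
    return stamp_frame(&t, 1024);
}
```

Dropping this bookkeeping (as the 2.2 version does) leaves timestamping to the encoder and muxer defaults, which is why the hunk also removes the explicit `stream->time_base` setup in `open_output_file()`.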
@@ -1,583 +0,0 @@
/*
* Copyright (c) 2010 Nicolas George
* Copyright (c) 2011 Stefano Sabatini
* Copyright (c) 2014 Andrey Utkin
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/**
* @file
* API example for demuxing, decoding, filtering, encoding and muxing
* @example transcoding.c
*/
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavfilter/avfiltergraph.h>
#include <libavfilter/avcodec.h>
#include <libavfilter/buffersink.h>
#include <libavfilter/buffersrc.h>
#include <libavutil/opt.h>
#include <libavutil/pixdesc.h>
static AVFormatContext *ifmt_ctx;
static AVFormatContext *ofmt_ctx;
typedef struct FilteringContext {
AVFilterContext *buffersink_ctx;
AVFilterContext *buffersrc_ctx;
AVFilterGraph *filter_graph;
} FilteringContext;
static FilteringContext *filter_ctx;
static int open_input_file(const char *filename)
{
int ret;
unsigned int i;
ifmt_ctx = NULL;
if ((ret = avformat_open_input(&ifmt_ctx, filename, NULL, NULL)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open input file\n");
return ret;
}
if ((ret = avformat_find_stream_info(ifmt_ctx, NULL)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot find stream information\n");
return ret;
}
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
AVStream *stream;
AVCodecContext *codec_ctx;
stream = ifmt_ctx->streams[i];
codec_ctx = stream->codec;
/* Reencode video & audio and remux subtitles etc. */
if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
|| codec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
/* Open decoder */
ret = avcodec_open2(codec_ctx,
avcodec_find_decoder(codec_ctx->codec_id), NULL);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Failed to open decoder for stream #%u\n", i);
return ret;
}
}
}
av_dump_format(ifmt_ctx, 0, filename, 0);
return 0;
}
static int open_output_file(const char *filename)
{
AVStream *out_stream;
AVStream *in_stream;
AVCodecContext *dec_ctx, *enc_ctx;
AVCodec *encoder;
int ret;
unsigned int i;
ofmt_ctx = NULL;
avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, filename);
if (!ofmt_ctx) {
av_log(NULL, AV_LOG_ERROR, "Could not create output context\n");
return AVERROR_UNKNOWN;
}
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
out_stream = avformat_new_stream(ofmt_ctx, NULL);
if (!out_stream) {
av_log(NULL, AV_LOG_ERROR, "Failed allocating output stream\n");
return AVERROR_UNKNOWN;
}
in_stream = ifmt_ctx->streams[i];
dec_ctx = in_stream->codec;
enc_ctx = out_stream->codec;
if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
|| dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
/* in this example, we choose transcoding to same codec */
encoder = avcodec_find_encoder(dec_ctx->codec_id);
if (!encoder) {
av_log(NULL, AV_LOG_FATAL, "Necessary encoder not found\n");
return AVERROR_INVALIDDATA;
}
/* In this example, we transcode to same properties (picture size,
* sample rate etc.). These properties can be changed for output
* streams easily using filters */
if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
enc_ctx->height = dec_ctx->height;
enc_ctx->width = dec_ctx->width;
enc_ctx->sample_aspect_ratio = dec_ctx->sample_aspect_ratio;
/* take first format from list of supported formats */
enc_ctx->pix_fmt = encoder->pix_fmts[0];
/* video time_base can be set to whatever is handy and supported by encoder */
enc_ctx->time_base = dec_ctx->time_base;
} else {
enc_ctx->sample_rate = dec_ctx->sample_rate;
enc_ctx->channel_layout = dec_ctx->channel_layout;
enc_ctx->channels = av_get_channel_layout_nb_channels(enc_ctx->channel_layout);
/* take first format from list of supported formats */
enc_ctx->sample_fmt = encoder->sample_fmts[0];
enc_ctx->time_base = (AVRational){1, enc_ctx->sample_rate};
}
/* Third parameter can be used to pass settings to encoder */
ret = avcodec_open2(enc_ctx, encoder, NULL);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open video encoder for stream #%u\n", i);
return ret;
}
} else if (dec_ctx->codec_type == AVMEDIA_TYPE_UNKNOWN) {
av_log(NULL, AV_LOG_FATAL, "Elementary stream #%u is of unknown type, cannot proceed\n", i);
return AVERROR_INVALIDDATA;
} else {
/* if this stream must be remuxed */
ret = avcodec_copy_context(ofmt_ctx->streams[i]->codec,
ifmt_ctx->streams[i]->codec);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Copying stream context failed\n");
return ret;
}
}
if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
enc_ctx->flags |= CODEC_FLAG_GLOBAL_HEADER;
}
av_dump_format(ofmt_ctx, 0, filename, 1);
if (!(ofmt_ctx->oformat->flags & AVFMT_NOFILE)) {
ret = avio_open(&ofmt_ctx->pb, filename, AVIO_FLAG_WRITE);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Could not open output file '%s'", filename);
return ret;
}
}
/* init muxer, write output file header */
ret = avformat_write_header(ofmt_ctx, NULL);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error occurred when opening output file\n");
return ret;
}
return 0;
}
static int init_filter(FilteringContext* fctx, AVCodecContext *dec_ctx,
AVCodecContext *enc_ctx, const char *filter_spec)
{
char args[512];
int ret = 0;
AVFilter *buffersrc = NULL;
AVFilter *buffersink = NULL;
AVFilterContext *buffersrc_ctx = NULL;
AVFilterContext *buffersink_ctx = NULL;
AVFilterInOut *outputs = avfilter_inout_alloc();
AVFilterInOut *inputs = avfilter_inout_alloc();
AVFilterGraph *filter_graph = avfilter_graph_alloc();
if (!outputs || !inputs || !filter_graph) {
ret = AVERROR(ENOMEM);
goto end;
}
if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
buffersrc = avfilter_get_by_name("buffer");
buffersink = avfilter_get_by_name("buffersink");
if (!buffersrc || !buffersink) {
av_log(NULL, AV_LOG_ERROR, "filtering source or sink element not found\n");
ret = AVERROR_UNKNOWN;
goto end;
}
snprintf(args, sizeof(args),
"video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
dec_ctx->time_base.num, dec_ctx->time_base.den,
dec_ctx->sample_aspect_ratio.num,
dec_ctx->sample_aspect_ratio.den);
ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
args, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create buffer source\n");
goto end;
}
ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
NULL, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create buffer sink\n");
goto end;
}
ret = av_opt_set_bin(buffersink_ctx, "pix_fmts",
(uint8_t*)&enc_ctx->pix_fmt, sizeof(enc_ctx->pix_fmt),
AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output pixel format\n");
goto end;
}
} else if (dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
buffersrc = avfilter_get_by_name("abuffer");
buffersink = avfilter_get_by_name("abuffersink");
if (!buffersrc || !buffersink) {
av_log(NULL, AV_LOG_ERROR, "filtering source or sink element not found\n");
ret = AVERROR_UNKNOWN;
goto end;
}
if (!dec_ctx->channel_layout)
dec_ctx->channel_layout =
av_get_default_channel_layout(dec_ctx->channels);
snprintf(args, sizeof(args),
"time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=0x%"PRIx64,
dec_ctx->time_base.num, dec_ctx->time_base.den, dec_ctx->sample_rate,
av_get_sample_fmt_name(dec_ctx->sample_fmt),
dec_ctx->channel_layout);
ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
args, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer source\n");
goto end;
}
ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
NULL, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer sink\n");
goto end;
}
ret = av_opt_set_bin(buffersink_ctx, "sample_fmts",
(uint8_t*)&enc_ctx->sample_fmt, sizeof(enc_ctx->sample_fmt),
AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output sample format\n");
goto end;
}
ret = av_opt_set_bin(buffersink_ctx, "channel_layouts",
(uint8_t*)&enc_ctx->channel_layout,
sizeof(enc_ctx->channel_layout), AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output channel layout\n");
goto end;
}
ret = av_opt_set_bin(buffersink_ctx, "sample_rates",
(uint8_t*)&enc_ctx->sample_rate, sizeof(enc_ctx->sample_rate),
AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output sample rate\n");
goto end;
}
} else {
ret = AVERROR_UNKNOWN;
goto end;
}
/* Endpoints for the filter graph. */
outputs->name = av_strdup("in");
outputs->filter_ctx = buffersrc_ctx;
outputs->pad_idx = 0;
outputs->next = NULL;
inputs->name = av_strdup("out");
inputs->filter_ctx = buffersink_ctx;
inputs->pad_idx = 0;
inputs->next = NULL;
if (!outputs->name || !inputs->name) {
ret = AVERROR(ENOMEM);
goto end;
}
if ((ret = avfilter_graph_parse_ptr(filter_graph, filter_spec,
&inputs, &outputs, NULL)) < 0)
goto end;
if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
goto end;
/* Fill FilteringContext */
fctx->buffersrc_ctx = buffersrc_ctx;
fctx->buffersink_ctx = buffersink_ctx;
fctx->filter_graph = filter_graph;
end:
avfilter_inout_free(&inputs);
avfilter_inout_free(&outputs);
return ret;
}
static int init_filters(void)
{
const char *filter_spec;
unsigned int i;
int ret;
filter_ctx = av_malloc_array(ifmt_ctx->nb_streams, sizeof(*filter_ctx));
if (!filter_ctx)
return AVERROR(ENOMEM);
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
filter_ctx[i].buffersrc_ctx = NULL;
filter_ctx[i].buffersink_ctx = NULL;
filter_ctx[i].filter_graph = NULL;
if (!(ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO
|| ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO))
continue;
if (ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
filter_spec = "null"; /* passthrough (dummy) filter for video */
else
filter_spec = "anull"; /* passthrough (dummy) filter for audio */
ret = init_filter(&filter_ctx[i], ifmt_ctx->streams[i]->codec,
ofmt_ctx->streams[i]->codec, filter_spec);
if (ret)
return ret;
}
return 0;
}
static int encode_write_frame(AVFrame *filt_frame, unsigned int stream_index, int *got_frame) {
int ret;
int got_frame_local;
AVPacket enc_pkt;
int (*enc_func)(AVCodecContext *, AVPacket *, const AVFrame *, int *) =
(ifmt_ctx->streams[stream_index]->codec->codec_type ==
AVMEDIA_TYPE_VIDEO) ? avcodec_encode_video2 : avcodec_encode_audio2;
if (!got_frame)
got_frame = &got_frame_local;
av_log(NULL, AV_LOG_INFO, "Encoding frame\n");
/* encode filtered frame */
enc_pkt.data = NULL;
enc_pkt.size = 0;
av_init_packet(&enc_pkt);
ret = enc_func(ofmt_ctx->streams[stream_index]->codec, &enc_pkt,
filt_frame, got_frame);
av_frame_free(&filt_frame);
if (ret < 0)
return ret;
if (!(*got_frame))
return 0;
/* prepare packet for muxing */
enc_pkt.stream_index = stream_index;
av_packet_rescale_ts(&enc_pkt,
ofmt_ctx->streams[stream_index]->codec->time_base,
ofmt_ctx->streams[stream_index]->time_base);
av_log(NULL, AV_LOG_DEBUG, "Muxing frame\n");
/* mux encoded frame */
ret = av_interleaved_write_frame(ofmt_ctx, &enc_pkt);
return ret;
}
static int filter_encode_write_frame(AVFrame *frame, unsigned int stream_index)
{
int ret;
AVFrame *filt_frame;
av_log(NULL, AV_LOG_INFO, "Pushing decoded frame to filters\n");
/* push the decoded frame into the filtergraph */
ret = av_buffersrc_add_frame_flags(filter_ctx[stream_index].buffersrc_ctx,
frame, 0);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error while feeding the filtergraph\n");
return ret;
}
/* pull filtered frames from the filtergraph */
while (1) {
filt_frame = av_frame_alloc();
if (!filt_frame) {
ret = AVERROR(ENOMEM);
break;
}
av_log(NULL, AV_LOG_INFO, "Pulling filtered frame from filters\n");
ret = av_buffersink_get_frame(filter_ctx[stream_index].buffersink_ctx,
filt_frame);
if (ret < 0) {
/* if no more frames for output - returns AVERROR(EAGAIN)
* if flushed and no more frames for output - returns AVERROR_EOF
* rewrite retcode to 0 to show it as normal procedure completion
*/
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
ret = 0;
av_frame_free(&filt_frame);
break;
}
filt_frame->pict_type = AV_PICTURE_TYPE_NONE;
ret = encode_write_frame(filt_frame, stream_index, NULL);
if (ret < 0)
break;
}
return ret;
}
static int flush_encoder(unsigned int stream_index)
{
int ret;
int got_frame;
if (!(ofmt_ctx->streams[stream_index]->codec->codec->capabilities &
CODEC_CAP_DELAY))
return 0;
while (1) {
av_log(NULL, AV_LOG_INFO, "Flushing stream #%u encoder\n", stream_index);
ret = encode_write_frame(NULL, stream_index, &got_frame);
if (ret < 0)
break;
if (!got_frame)
return 0;
}
return ret;
}
int main(int argc, char **argv)
{
int ret;
AVPacket packet = { .data = NULL, .size = 0 };
AVFrame *frame = NULL;
enum AVMediaType type;
unsigned int stream_index;
unsigned int i;
int got_frame;
int (*dec_func)(AVCodecContext *, AVFrame *, int *, const AVPacket *);
if (argc != 3) {
av_log(NULL, AV_LOG_ERROR, "Usage: %s <input file> <output file>\n", argv[0]);
return 1;
}
av_register_all();
avfilter_register_all();
if ((ret = open_input_file(argv[1])) < 0)
goto end;
if ((ret = open_output_file(argv[2])) < 0)
goto end;
if ((ret = init_filters()) < 0)
goto end;
/* read all packets */
while (1) {
if ((ret = av_read_frame(ifmt_ctx, &packet)) < 0)
break;
stream_index = packet.stream_index;
type = ifmt_ctx->streams[packet.stream_index]->codec->codec_type;
av_log(NULL, AV_LOG_DEBUG, "Demuxer gave frame of stream_index %u\n",
stream_index);
if (filter_ctx[stream_index].filter_graph) {
av_log(NULL, AV_LOG_DEBUG, "Going to reencode&filter the frame\n");
frame = av_frame_alloc();
if (!frame) {
ret = AVERROR(ENOMEM);
break;
}
av_packet_rescale_ts(&packet,
ifmt_ctx->streams[stream_index]->time_base,
ifmt_ctx->streams[stream_index]->codec->time_base);
dec_func = (type == AVMEDIA_TYPE_VIDEO) ? avcodec_decode_video2 :
avcodec_decode_audio4;
ret = dec_func(ifmt_ctx->streams[stream_index]->codec, frame,
&got_frame, &packet);
if (ret < 0) {
av_frame_free(&frame);
av_log(NULL, AV_LOG_ERROR, "Decoding failed\n");
break;
}
if (got_frame) {
frame->pts = av_frame_get_best_effort_timestamp(frame);
ret = filter_encode_write_frame(frame, stream_index);
av_frame_free(&frame);
if (ret < 0)
goto end;
} else {
av_frame_free(&frame);
}
} else {
/* remux this frame without reencoding */
av_packet_rescale_ts(&packet,
ifmt_ctx->streams[stream_index]->time_base,
ofmt_ctx->streams[stream_index]->time_base);
ret = av_interleaved_write_frame(ofmt_ctx, &packet);
if (ret < 0)
goto end;
}
av_free_packet(&packet);
}
/* flush filters and encoders */
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
/* flush filter */
if (!filter_ctx[i].filter_graph)
continue;
ret = filter_encode_write_frame(NULL, i);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Flushing filter failed\n");
goto end;
}
/* flush encoder */
ret = flush_encoder(i);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Flushing encoder failed\n");
goto end;
}
}
av_write_trailer(ofmt_ctx);
end:
av_free_packet(&packet);
av_frame_free(&frame);
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
avcodec_close(ifmt_ctx->streams[i]->codec);
if (ofmt_ctx && ofmt_ctx->nb_streams > i && ofmt_ctx->streams[i] && ofmt_ctx->streams[i]->codec)
avcodec_close(ofmt_ctx->streams[i]->codec);
if (filter_ctx && filter_ctx[i].filter_graph)
avfilter_graph_free(&filter_ctx[i].filter_graph);
}
av_free(filter_ctx);
avformat_close_input(&ifmt_ctx);
if (ofmt_ctx && !(ofmt_ctx->oformat->flags & AVFMT_NOFILE))
avio_closep(&ofmt_ctx->pb);
avformat_free_context(ofmt_ctx);
if (ret < 0)
av_log(NULL, AV_LOG_ERROR, "Error occurred: %s\n", av_err2str(ret));
return ret ? 1 : 0;
}

diff --git a/doc/faq.texi b/doc/faq.texi

@@ -1,5 +1,4 @@
 \input texinfo @c -*- texinfo -*-
-@documentencoding UTF-8
 @settitle FFmpeg FAQ
 @titlepage
@@ -91,56 +90,6 @@ To build FFmpeg, you need to install the development package. It is usually
 called @file{libfoo-dev} or @file{libfoo-devel}. You can remove it after the
 build is finished, but be sure to keep the main package.
-@section How do I make @command{pkg-config} find my libraries?
-Somewhere along with your libraries, there is a @file{.pc} file (or several)
-in a @file{pkgconfig} directory. You need to set environment variables to
-point @command{pkg-config} to these files.
-If you need to @emph{add} directories to @command{pkg-config}'s search list
-(typical use case: library installed separately), add it to
-@code{$PKG_CONFIG_PATH}:
-@example
-export PKG_CONFIG_PATH=/opt/x264/lib/pkgconfig:/opt/opus/lib/pkgconfig
-@end example
-If you need to @emph{replace} @command{pkg-config}'s search list
-(typical use case: cross-compiling), set it in
-@code{$PKG_CONFIG_LIBDIR}:
-@example
-export PKG_CONFIG_LIBDIR=/home/me/cross/usr/lib/pkgconfig:/home/me/cross/usr/local/lib/pkgconfig
-@end example
-If you need to know the library's internal dependencies (typical use: static
-linking), add the @code{--static} option to @command{pkg-config}:
-@example
-./configure --pkg-config-flags=--static
-@end example
-@section How do I use @command{pkg-config} when cross-compiling?
-The best way is to install @command{pkg-config} in your cross-compilation
-environment. It will automatically use the cross-compilation libraries.
-You can also use @command{pkg-config} from the host environment by
-specifying explicitly @code{--pkg-config=pkg-config} to @command{configure}.
-In that case, you must point @command{pkg-config} to the correct directories
-using the @code{PKG_CONFIG_LIBDIR}, as explained in the previous entry.
-As an intermediate solution, you can place in your cross-compilation
-environment a script that calls the host @command{pkg-config} with
-@code{PKG_CONFIG_LIBDIR} set. That script can look like that:
-@example
-#!/bin/sh
-PKG_CONFIG_LIBDIR=/path/to/cross/lib/pkgconfig
-export PKG_CONFIG_LIBDIR
-exec /usr/bin/pkg-config "$@@"
-@end example
 @chapter Usage
 @section ffmpeg does not work; what is wrong?
@@ -443,7 +392,7 @@ VOB and a few other formats do not have a global header that describes
 everything present in the file. Instead, applications are supposed to scan
 the file to see what it contains. Since VOB files are frequently large, only
 the beginning is scanned. If the subtitles happen only later in the file,
-they will not be initially detected.
+they will not be initally detected.
 Some applications, including the @code{ffmpeg} command-line tool, can only
 work with streams that were detected during the initial scan; streams that
@@ -467,40 +416,6 @@ point acceptable for your tastes. The most common options to do that are
 @option{-qscale} and @option{-qmax}, but you should peruse the documentation
 of the encoder you chose.
-@section I have a stretched video, why does scaling does not fix it?
-A lot of video codecs and formats can store the @emph{aspect ratio} of the
-video: this is the ratio between the width and the height of either the full
-image (DAR, display aspect ratio) or individual pixels (SAR, sample aspect
-ratio). For example, EGA screens at resolution 640×350 had 4:3 DAR and 35:48
-SAR.
-Most still image processing work with square pixels, i.e. 1:1 SAR, but a lot
-of video standards, especially from the analogic-numeric transition era, use
-non-square pixels.
-Most processing filters in FFmpeg handle the aspect ratio to avoid
-stretching the image: cropping adjusts the DAR to keep the SAR constant,
-scaling adjusts the SAR to keep the DAR constant.
-If you want to stretch, or “unstretch”, the image, you need to override the
-information with the
-@url{http://ffmpeg.org/ffmpeg-filters.html#setdar_002c-setsar, @code{setdar or setsar filters}}.
-Do not forget to examine carefully the original video to check whether the
-stretching comes from the image or from the aspect ratio information.
-For example, to fix a badly encoded EGA capture, use the following commands,
-either the first one to upscale to square pixels or the second one to set
-the correct aspect ratio or the third one to avoid transcoding (may not work
-depending on the format / codec / player / phase of the moon):
-@example
-ffmpeg -i ega_screen.nut -vf scale=640:480,setsar=1 ega_screen_scaled.nut
-ffmpeg -i ega_screen.nut -vf setdar=4/3 ega_screen_anamorphic.nut
-ffmpeg -i ega_screen.nut -aspect 4/3 -c copy ega_screen_overridden.nut
-@end example
 @chapter Development
 @section Are there examples illustrating how to use the FFmpeg libraries, particularly libavcodec and libavformat?

diff --git a/doc/fate.texi b/doc/fate.texi

@@ -1,5 +1,4 @@
 \input texinfo @c -*- texinfo -*-
-@documentencoding UTF-8
 @settitle FFmpeg Automated Testing Environment
 @titlepage

diff --git a/doc/bitstream_filters.texi b/doc/bitstream_filters.texi

@@ -1,5 +1,4 @@
 \input texinfo @c -*- texinfo -*-
-@documentencoding UTF-8
 @settitle FFmpeg Bitstream Filters Documentation
 @titlepage

diff --git a/doc/codecs.texi b/doc/codecs.texi

@@ -1,5 +1,4 @@
 \input texinfo @c -*- texinfo -*-
-@documentencoding UTF-8
 @settitle FFmpeg Codecs Documentation
 @titlepage

diff --git a/doc/devices.texi b/doc/devices.texi

@@ -1,5 +1,4 @@
 \input texinfo @c -*- texinfo -*-
-@documentencoding UTF-8
 @settitle FFmpeg Devices Documentation
 @titlepage

diff --git a/doc/filters.texi b/doc/filters.texi

@@ -1,5 +1,4 @@
 \input texinfo @c -*- texinfo -*-
-@documentencoding UTF-8
 @settitle FFmpeg Filters Documentation
 @titlepage

diff --git a/doc/formats.texi b/doc/formats.texi

@@ -1,5 +1,4 @@
 \input texinfo @c -*- texinfo -*-
-@documentencoding UTF-8
 @settitle FFmpeg Formats Documentation
 @titlepage

diff --git a/doc/protocols.texi b/doc/protocols.texi

@@ -1,5 +1,4 @@
 \input texinfo @c -*- texinfo -*-
-@documentencoding UTF-8
 @settitle FFmpeg Protocols Documentation
 @titlepage

diff --git a/doc/resampler.texi b/doc/resampler.texi

@@ -1,5 +1,4 @@
 \input texinfo @c -*- texinfo -*-
-@documentencoding UTF-8
 @settitle FFmpeg Resampler Documentation
 @titlepage
@@ -15,7 +14,7 @@
 The FFmpeg resampler provides a high-level interface to the
 libswresample library audio resampling utilities. In particular it
-allows one to perform audio resampling, audio channel layout rematrixing,
+allows to perform audio resampling, audio channel layout rematrixing,
 and convert audio format and packing layout.
 @c man end DESCRIPTION

diff --git a/doc/scaler.texi b/doc/scaler.texi

@@ -1,5 +1,4 @@
 \input texinfo @c -*- texinfo -*-
-@documentencoding UTF-8
 @settitle FFmpeg Scaler Documentation
 @titlepage
@@ -14,7 +13,7 @@
 @c man begin DESCRIPTION
 The FFmpeg rescaler provides a high-level interface to the libswscale
-library image conversion utilities. In particular it allows one to perform
+library image conversion utilities. In particular it allows to perform
 image rescaling and pixel format conversion.
 @c man end DESCRIPTION

diff --git a/doc/utils.texi b/doc/utils.texi

@@ -1,5 +1,4 @@
 \input texinfo @c -*- texinfo -*-
-@documentencoding UTF-8
 @settitle FFmpeg Utilities Documentation
 @titlepage

diff --git a/doc/ffmpeg.texi b/doc/ffmpeg.texi

@@ -1,5 +1,4 @@
 \input texinfo @c -*- texinfo -*-
-@documentencoding UTF-8
 @settitle ffmpeg Documentation
 @titlepage
@@ -91,8 +90,7 @@ the following diagram:
 |         |
 | decoded |
 | frames  |
 |_________|
  ________             ______________       |
 |        |           |              |      |
 | output | <-------- | encoded data | <----+
 | file   |   muxer   | packets      |   encoder
@@ -125,16 +123,11 @@ the same type. In the above diagram they can be represented by simply inserting
 an additional step between decoding and encoding:
 @example
- _________                        ______________
-|         |                      |              |
-| decoded |                      | encoded data |
-| frames  |\                   _ | packets      |
-|_________| \                  /||______________|
-             \   __________   /
-  simple     _\||          | /  encoder
-  filtergraph   | filtered |/
-                | frames   |
-                |__________|
+ _________                  __________              ______________
+|         |  simple        |          |            |              |
+| decoded |  fltrgrph      | filtered |  encoder   | encoded data |
+| frames  | ------------> | frames   | ---------> | packets      |
+|_________|                |__________|            |______________|
 @end example
@@ -273,13 +266,8 @@ ffmpeg -i INPUT -map 0 -c copy -c:v:1 libx264 -c:a:137 libvorbis OUTPUT
 will copy all the streams except the second video, which will be encoded with
 libx264, and the 138th audio, which will be encoded with libvorbis.
-@item -t @var{duration} (@emph{input/output})
-When used as an input option (before @code{-i}), limit the @var{duration} of
-data read from the input file.
-When used as an output option (before an output filename), stop writing the
-output after its duration reaches @var{duration}.
+@item -t @var{duration} (@emph{output})
+Stop writing the output after its duration reaches @var{duration}.
 @var{duration} may be a number in seconds, or in @code{hh:mm:ss[.xxx]} form.
 -to and -t are mutually exclusive and -t has priority.
@@ -340,7 +328,7 @@ ffmpeg -i in.avi -metadata title="my title" out.flv
 To set the language of the first audio stream:
 @example
-ffmpeg -i INPUT -metadata:s:a:0 language=eng OUTPUT
+ffmpeg -i INPUT -metadata:s:a:1 language=eng OUTPUT
 @end example
 @item -target @var{type} (@emph{output})
@@ -361,7 +349,7 @@ ffmpeg -i myfile.avi -target vcd -bf 2 /tmp/vcd.mpg
 @end example
 @item -dframes @var{number} (@emph{output})
-Set the number of data frames to output. This is an alias for @code{-frames:d}.
+Set the number of data frames to record. This is an alias for @code{-frames:d}.
 @item -frames[:@var{stream_specifier}] @var{framecount} (@emph{output,per-stream})
 Stop writing to the stream after @var{framecount} frames.
@@ -468,15 +456,12 @@ attachments.
 @table @option
 @item -vframes @var{number} (@emph{output})
-Set the number of video frames to output. This is an alias for @code{-frames:v}.
+Set the number of video frames to record. This is an alias for @code{-frames:v}.
 @item -r[:@var{stream_specifier}] @var{fps} (@emph{input/output,per-stream})
 Set frame rate (Hz value, fraction or abbreviation).
 As an input option, ignore any timestamps stored in the file and instead
 generate timestamps assuming constant frame rate @var{fps}.
-This is not the same as the @option{-framerate} option used for some input formats
-like image2 or v4l2 (it used to be the same in older versions of FFmpeg).
-If in doubt use @option{-framerate} instead of the input option @option{-r}.
 As an output option, duplicate or drop input frames to achieve constant output
 frame rate @var{fps}.
@@ -538,7 +523,7 @@ filter the stream.
 This is an alias for @code{-filter:v}, see the @ref{filter_option,,-filter option}.
 @end table
-@section Advanced Video options
+@section Advanced Video Options
 @table @option
 @item -pix_fmt[:@var{stream_specifier}] @var{format} (@emph{input/output,per-stream})
@@ -652,14 +637,8 @@ Do not use any hardware acceleration (the default).
 @item auto
 Automatically select the hardware acceleration method.
-@item vda
-Use Apple VDA hardware acceleration.
 @item vdpau
 Use VDPAU (Video Decode and Presentation API for Unix) hardware acceleration.
-@item dxva2
-Use DXVA2 (DirectX Video Acceleration) hardware acceleration.
 @end table
 This option has no effect if the selected hwaccel is not available or not
@@ -682,10 +661,6 @@ method chosen.
 @item vdpau
 For VDPAU, this option specifies the X11 display/screen to use. If this option
 is not specified, the value of the @var{DISPLAY} environment variable is used
-@item dxva2
-For DXVA2, this option should contain the number of the display adapter to use.
-If this option is not specified, the default adapter is used.
 @end table
 @end table
@@ -693,7 +668,7 @@ If this option is not specified, the default adapter is used.
 @table @option
 @item -aframes @var{number} (@emph{output})
-Set the number of audio frames to output. This is an alias for @code{-frames:a}.
+Set the number of audio frames to record. This is an alias for @code{-frames:a}.
 @item -ar[:@var{stream_specifier}] @var{freq} (@emph{input/output,per-stream})
 Set the audio sampling frequency. For output streams it is set by
 default to the frequency of the corresponding input stream. For input
@@ -721,7 +696,7 @@ filter the stream.
 This is an alias for @code{-filter:a}, see the @ref{filter_option,,-filter option}.
 @end table
-@section Advanced Audio options
+@section Advanced Audio options:
 @table @option
 @item -atag @var{fourcc/tag} (@emph{output})
@@ -736,7 +711,7 @@ stereo but not 6 channels as 5.1. The default is to always try to guess. Use
 0 to disable all guessing.
 @end table
-@section Subtitle options
+@section Subtitle options:
 @table @option
 @item -scodec @var{codec} (@emph{input/output})
@@ -747,7 +722,7 @@ Disable subtitle recording.
 Deprecated, see -bsf
 @end table
-@section Advanced Subtitle options
+@section Advanced Subtitle options:
 @table @option
@@ -825,11 +800,6 @@ To map all the streams except the second audio, use negative mappings
 ffmpeg -i INPUT -map 0 -map -0:a:1 OUTPUT
 @end example
-To pick the English audio stream:
-@example
-ffmpeg -i INPUT -map 0:m:language:eng OUTPUT
-@end example
 Note that using this option disables the default mappings for this output file.
 @item -map_channel [@var{input_file_id}.@var{stream_specifier}.@var{channel_id}|-1][:@var{output_file_id}.@var{stream_specifier}]
@@ -995,13 +965,6 @@ With -map you can select from which stream the timestamps should be
 taken. You can leave either video or audio unchanged and sync the
 remaining stream(s) to the unchanged one.
-@item -frame_drop_threshold @var{parameter}
-Frame drop threshold, which specifies how much behind video frames can
-be before they are dropped. In frame rate units, so 1.0 is one frame.
-The default is -1.1. One possible usecase is to avoid framedrops in case
-of noisy timestamps or to increase frame drop precision in case of exact
-timestamps.
 @item -async @var{samples_per_second}
 Audio sync method. "Stretches/squeezes" the audio stream to match the timestamps,
 the parameter is the maximum samples per second by which the audio is changed.
@@ -1024,12 +987,6 @@ processing (e.g. in case the format option @option{avoid_negative_ts}
 is enabled) the output timestamps may mismatch with the input
 timestamps even when this option is selected.
-@item -start_at_zero
-When used with @option{copyts}, shift input timestamps so they start at zero.
-This means that using e.g. @code{-ss 50} will make output timestamps start at
-50 seconds, regardless of what timestamp the input file started at.
 @item -copytb @var{mode}
 Specify how to set the encoder timebase when stream copying. @var{mode} is an
 integer numeric value, and can assume one of the following values:
@@ -1158,12 +1115,6 @@ This option enables or disables accurate seeking in input files with the
transcoding. Use @option{-noaccurate_seek} to disable it, which may be useful transcoding. Use @option{-noaccurate_seek} to disable it, which may be useful
e.g. when copying some streams and transcoding the others. e.g. when copying some streams and transcoding the others.
@item -thread_message_queue @var{size} (@emph{input})
This option sets the maximum number of queued packets when reading from the
file or device. With low latency / high rate live streams, packets may be
discarded if they are not read in a timely manner; raising this value can
avoid it.
@item -override_ffserver (@emph{global}) @item -override_ffserver (@emph{global})
Overrides the input specifications from @command{ffserver}. Using this Overrides the input specifications from @command{ffserver}. Using this
option you can map any input stream to @command{ffserver} and control option you can map any input stream to @command{ffserver} and control
@@ -1174,35 +1125,6 @@ requested by @command{ffserver}.
The option is intended for cases where features are needed that cannot be The option is intended for cases where features are needed that cannot be
specified to @command{ffserver} but can be to @command{ffmpeg}. specified to @command{ffserver} but can be to @command{ffmpeg}.
@item -sdp_file @var{file} (@emph{global})
Print sdp information to @var{file}.
This allows dumping sdp information when at least one output isn't an
rtp stream.
@item -discard (@emph{input})
Allows discarding specific streams or frames of streams at the demuxer.
Not all demuxers support this.
@table @option
@item none
Discard no frame.
@item default
Default, which discards no frames.
@item noref
Discard all non-reference frames.
@item bidir
Discard all bidirectional frames.
@item nokey
Discard all frames except keyframes.
@item all
Discard all frames.
@end table
@end table @end table
As a special exception, you can use a bitmap subtitle stream as input: it As a special exception, you can use a bitmap subtitle stream as input: it
@@ -1228,10 +1150,7 @@ awkward to specify on the command line. Lines starting with the hash
('#') character are ignored and are used to provide comments. Check ('#') character are ignored and are used to provide comments. Check
the @file{presets} directory in the FFmpeg source tree for examples. the @file{presets} directory in the FFmpeg source tree for examples.
There are two types of preset files: ffpreset and avpreset files.
@subsection ffpreset files
ffpreset files are specified with the @code{vpre}, @code{apre},
Preset files are specified with the @code{vpre}, @code{apre},
@code{spre}, and @code{fpre} options. The @code{fpre} option takes the @code{spre}, and @code{fpre} options. The @code{fpre} option takes the
filename of the preset instead of a preset name as input and can be filename of the preset instead of a preset name as input and can be
used for any kind of codec. For the @code{vpre}, @code{apre}, and used for any kind of codec. For the @code{vpre}, @code{apre}, and
@@ -1256,26 +1175,6 @@ directories, where @var{codec_name} is the name of the codec to which
the preset file options will be applied. For example, if you select the preset file options will be applied. For example, if you select
the video codec with @code{-vcodec libvpx} and use @code{-vpre 1080p}, the video codec with @code{-vcodec libvpx} and use @code{-vpre 1080p},
then it will search for the file @file{libvpx-1080p.ffpreset}. then it will search for the file @file{libvpx-1080p.ffpreset}.
@subsection avpreset files
avpreset files are specified with the @code{pre} option. They work similarly to
ffpreset files, but they only allow encoder-specific options. Therefore, an
@var{option}=@var{value} pair specifying an encoder cannot be used.
When the @code{pre} option is specified, ffmpeg will look for files with the
suffix .avpreset in the directories @file{$AVCONV_DATADIR} (if set), and
@file{$HOME/.avconv}, and in the datadir defined at configuration time (usually
@file{PREFIX/share/ffmpeg}), in that order.
First ffmpeg searches for a file named @var{codec_name}-@var{arg}.avpreset in
the above-mentioned directories, where @var{codec_name} is the name of the codec
to which the preset file options will be applied. For example, if you select the
video codec with @code{-vcodec libvpx} and use @code{-pre 1080p}, then it will
search for the file @file{libvpx-1080p.avpreset}.
If no such file is found, then ffmpeg will search for a file named
@var{arg}.avpreset in the same directories.
@c man end OPTIONS @c man end OPTIONS
@chapter Tips @chapter Tips
@@ -1322,6 +1221,21 @@ quality).
@chapter Examples @chapter Examples
@c man begin EXAMPLES @c man begin EXAMPLES
@section Preset files
A preset file contains a sequence of @var{option=value} pairs, one for
each line, specifying a sequence of options which can be specified also on
the command line. Lines starting with the hash ('#') character are ignored and
are used to provide comments. Empty lines are also ignored. Check the
@file{presets} directory in the FFmpeg source tree for examples.
Preset files are specified with the @code{pre} option, this option takes a
preset name as input. FFmpeg searches for a file named @var{preset_name}.avpreset in
the directories @file{$AVCONV_DATADIR} (if set), and @file{$HOME/.ffmpeg}, and in
the data directory defined at configuration time (usually @file{$PREFIX/share/ffmpeg})
in that order. For example, if the argument is @code{libx264-max}, it will
search for the file @file{libx264-max.avpreset}.
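As a sketch, a preset file is nothing more than @var{option}=@var{value} pairs, with @code{#} comments and blank lines ignored. The file name and option values below are hypothetical examples, not presets shipped with FFmpeg:

```shell
# Write a made-up avpreset file: plain option=value pairs.
cat > libx264-max.avpreset <<'EOF'
# hypothetical high-quality x264 settings
crf=18
preset=slow
EOF

# A preset parser conceptually keeps only the option=value lines:
grep -E '^[^#[:space:]].*=' libx264-max.avpreset
```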
@section Video and Audio grabbing @section Video and Audio grabbing
If you specify the input format and device then ffmpeg can grab video If you specify the input format and device then ffmpeg can grab video
@@ -1491,11 +1405,11 @@ ffmpeg -f image2 -pattern_type glob -i 'foo-*.jpeg' -r 12 -s WxH foo.avi
You can put many streams of the same type in the output: You can put many streams of the same type in the output:
@example
ffmpeg -i test1.avi -i test2.avi -map 1:1 -map 1:0 -map 0:1 -map 0:0 -c copy -y test12.nut
@end example
The resulting output file @file{test12.nut} will contain the first four streams
from the input files in reverse order.

@example
ffmpeg -i test1.avi -i test2.avi -map 0:3 -map 0:2 -map 0:1 -map 0:0 -c copy test12.nut
@end example
The resulting output file @file{test12.avi} will contain the first four streams
from the input file in reverse order.
@item @item
To force CBR video output: To force CBR video output:
View File
@@ -1,5 +1,4 @@
\input texinfo @c -*- texinfo -*- \input texinfo @c -*- texinfo -*-
@documentencoding UTF-8
@settitle ffplay Documentation @settitle ffplay Documentation
@titlepage @titlepage
@@ -38,14 +37,10 @@ Force displayed height.
Set frame size (WxH or abbreviation), needed for videos which do Set frame size (WxH or abbreviation), needed for videos which do
not contain a header with the frame size like raw YUV. This option not contain a header with the frame size like raw YUV. This option
has been deprecated in favor of private options, try -video_size. has been deprecated in favor of private options, try -video_size.
@item -fs
Start in fullscreen mode.
@item -an @item -an
Disable audio. Disable audio.
@item -vn @item -vn
Disable video. Disable video.
@item -sn
Disable subtitles.
@item -ss @var{pos} @item -ss @var{pos}
Seek to a given position in seconds. Seek to a given position in seconds.
@item -t @var{duration} @item -t @var{duration}
@@ -89,9 +84,6 @@ output. In the filtergraph, the input is associated to the label
ffmpeg-filters manual for more information about the filtergraph ffmpeg-filters manual for more information about the filtergraph
syntax. syntax.
You can specify this parameter multiple times and cycle through the specified
filtergraphs along with the show modes by pressing the key @key{w}.
@item -af @var{filtergraph} @item -af @var{filtergraph}
@var{filtergraph} is a description of the filtergraph to apply to @var{filtergraph} is a description of the filtergraph to apply to
the input audio. the input audio.
@@ -114,10 +106,15 @@ duration, the codec parameters, the current position in the stream and
the audio/video synchronisation drift. It is on by default, to the audio/video synchronisation drift. It is on by default, to
explicitly disable it you need to specify @code{-nostats}. explicitly disable it you need to specify @code{-nostats}.
@item -bug
Work around bugs.
@item -fast @item -fast
Non-spec-compliant optimizations. Non-spec-compliant optimizations.
@item -genpts @item -genpts
Generate pts. Generate pts.
@item -rtp_tcp
Force RTP/TCP protocol usage instead of RTP/UDP. It is only meaningful
if you are streaming with the RTSP protocol.
@item -sync @var{type} @item -sync @var{type}
Set the master clock to audio (@code{type=audio}), video Set the master clock to audio (@code{type=audio}), video
(@code{type=video}) or external (@code{type=ext}). Default is audio. The (@code{type=video}) or external (@code{type=ext}). Default is audio. The
@@ -125,20 +122,23 @@ master clock is used to control audio-video synchronization. Most media
players use audio as master clock, but in some cases (streaming or high players use audio as master clock, but in some cases (streaming or high
quality broadcast) it is necessary to change that. This option is mainly quality broadcast) it is necessary to change that. This option is mainly
used for debugging purposes. used for debugging purposes.
@item -ast @var{audio_stream_specifier}
Select the desired audio stream using the given stream specifier. The stream
specifiers are described in the @ref{Stream specifiers} chapter. If this option
is not specified, the "best" audio stream is selected in the program of the
already selected video stream.
@item -vst @var{video_stream_specifier}
Select the desired video stream using the given stream specifier. The stream
specifiers are described in the @ref{Stream specifiers} chapter. If this option
is not specified, the "best" video stream is selected.
@item -sst @var{subtitle_stream_specifier}
Select the desired subtitle stream using the given stream specifier. The stream
specifiers are described in the @ref{Stream specifiers} chapter. If this option
is not specified, the "best" subtitle stream is selected in the program of the
already selected video or audio stream.

@item -threads @var{count}
Set the thread count.
@item -ast @var{audio_stream_number}
Select the desired audio stream number, counting from 0. The number
refers to the list of all the input audio streams. If it is greater
than the number of audio streams minus one, then the last one is
selected, if it is negative the audio playback is disabled.
@item -vst @var{video_stream_number}
Select the desired video stream number, counting from 0. The number
refers to the list of all the input video streams. If it is greater
than the number of video streams minus one, then the last one is
selected, if it is negative the video playback is disabled.
@item -sst @var{subtitle_stream_number}
Select the desired subtitle stream number, counting from 0. The number
refers to the list of all the input subtitle streams. If it is greater
than the number of subtitle streams minus one, then the last one is
selected, if it is negative the subtitle rendering is disabled.
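The clamping behaviour described for the numeric @option{-ast}/@option{-vst}/@option{-sst} selectors can be sketched as a small helper; @code{pick_stream} is a hypothetical name for illustration, not ffplay source:

```shell
# Pick a stream index the way the numeric stream selectors are described
# to: a negative request disables playback (-1), a too-large request is
# clamped to the last available stream.
pick_stream() {
  want=$1; count=$2
  if [ "$want" -lt 0 ]; then
    echo -1                      # negative: playback disabled
  elif [ "$want" -ge "$count" ]; then
    echo $((count - 1))          # greater than count-1: clamp to last
  else
    echo "$want"
  fi
}

pick_stream 5 3    # -> 2 (clamped to the last of 3 streams)
pick_stream -1 3   # -> -1 (disabled)
pick_stream 1 3    # -> 1
```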
@item -autoexit @item -autoexit
Exit when video is done playing. Exit when video is done playing.
@item -exitonkeydown @item -exitonkeydown
@@ -159,22 +159,6 @@ Force a specific video decoder.
@item -scodec @var{codec_name} @item -scodec @var{codec_name}
Force a specific subtitle decoder. Force a specific subtitle decoder.
@item -autorotate
Automatically rotate the video according to presentation metadata. Enabled by
default, use @option{-noautorotate} to disable it.
@item -framedrop
Drop video frames if video is out of sync. Enabled by default if the master
clock is not set to video. Use this option to enable frame dropping for all
master clock sources, use @option{-noframedrop} to disable it.
@item -infbuf
Do not limit the input buffer size, read as much data as possible from the
input as soon as possible. Enabled by default for realtime streams, where data
may be dropped if not read in time. Use this option to enable infinite buffers
for all inputs, use @option{-noinfbuf} to disable it.
@end table @end table
@section While playing @section While playing
@@ -190,7 +174,7 @@ Toggle full screen.
Pause. Pause.
@item a @item a
Cycle audio channel in the current program. Cycle audio channel in the curret program.
@item v @item v
Cycle video channel. Cycle video channel.
@@ -202,7 +186,7 @@ Cycle subtitle channel in the current program.
Cycle program. Cycle program.
@item w @item w
Cycle video filters or show modes. Show audio waves.
@item s @item s
Step to the next frame. Step to the next frame.
View File
@@ -1,5 +1,4 @@
\input texinfo @c -*- texinfo -*- \input texinfo @c -*- texinfo -*-
@documentencoding UTF-8
@settitle ffprobe Documentation @settitle ffprobe Documentation
@titlepage @titlepage
@@ -120,10 +119,6 @@ Show payload data, as a hexadecimal and ASCII dump. Coupled with
The dump is printed as the "data" field. It may contain newlines. The dump is printed as the "data" field. It may contain newlines.
@item -show_data_hash @var{algorithm}
Show a hash of payload data, for packets with @option{-show_packets} and for
codec extradata with @option{-show_streams}.
@item -show_error @item -show_error
Show information about the error found when trying to probe the input. Show information about the error found when trying to probe the input.
@@ -185,7 +180,7 @@ format : stream=codec_type
To show all the tags in the stream and format sections: To show all the tags in the stream and format sections:
@example @example
stream_tags : format_tags format_tags : format_tags
@end example @end example
To show only the @code{title} tag (if available) in the stream To show only the @code{title} tag (if available) in the stream
@@ -322,12 +317,6 @@ Show information related to program and library versions. This is the
equivalent of setting both @option{-show_program_version} and equivalent of setting both @option{-show_program_version} and
@option{-show_library_versions} options. @option{-show_library_versions} options.
@item -show_pixel_formats
Show information about all pixel formats supported by FFmpeg.
Pixel format information for each format is printed within a section
with name "PIXEL_FORMAT".
@item -bitexact @item -bitexact
Force bitexact output, useful to produce output which is not dependent Force bitexact output, useful to produce output which is not dependent
on the specific build. on the specific build.
View File
@@ -8,17 +8,15 @@
<xsd:complexType name="ffprobeType"> <xsd:complexType name="ffprobeType">
<xsd:sequence> <xsd:sequence>
<xsd:element name="program_version" type="ffprobe:programVersionType" minOccurs="0" maxOccurs="1" />
<xsd:element name="library_versions" type="ffprobe:libraryVersionsType" minOccurs="0" maxOccurs="1" />
<xsd:element name="pixel_formats" type="ffprobe:pixelFormatsType" minOccurs="0" maxOccurs="1" />
<xsd:element name="packets" type="ffprobe:packetsType" minOccurs="0" maxOccurs="1" /> <xsd:element name="packets" type="ffprobe:packetsType" minOccurs="0" maxOccurs="1" />
<xsd:element name="frames" type="ffprobe:framesType" minOccurs="0" maxOccurs="1" /> <xsd:element name="frames" type="ffprobe:framesType" minOccurs="0" maxOccurs="1" />
<xsd:element name="packets_and_frames" type="ffprobe:packetsAndFramesType" minOccurs="0" maxOccurs="1" />
<xsd:element name="programs" type="ffprobe:programsType" minOccurs="0" maxOccurs="1" />
<xsd:element name="streams" type="ffprobe:streamsType" minOccurs="0" maxOccurs="1" /> <xsd:element name="streams" type="ffprobe:streamsType" minOccurs="0" maxOccurs="1" />
<xsd:element name="programs" type="ffprobe:programsType" minOccurs="0" maxOccurs="1" />
<xsd:element name="chapters" type="ffprobe:chaptersType" minOccurs="0" maxOccurs="1" /> <xsd:element name="chapters" type="ffprobe:chaptersType" minOccurs="0" maxOccurs="1" />
<xsd:element name="format" type="ffprobe:formatType" minOccurs="0" maxOccurs="1" /> <xsd:element name="format" type="ffprobe:formatType" minOccurs="0" maxOccurs="1" />
<xsd:element name="error" type="ffprobe:errorType" minOccurs="0" maxOccurs="1" /> <xsd:element name="error" type="ffprobe:errorType" minOccurs="0" maxOccurs="1" />
<xsd:element name="program_version" type="ffprobe:programVersionType" minOccurs="0" maxOccurs="1" />
<xsd:element name="library_versions" type="ffprobe:libraryVersionsType" minOccurs="0" maxOccurs="1" />
</xsd:sequence> </xsd:sequence>
</xsd:complexType> </xsd:complexType>
@@ -37,16 +35,6 @@
</xsd:sequence> </xsd:sequence>
</xsd:complexType> </xsd:complexType>
<xsd:complexType name="packetsAndFramesType">
<xsd:sequence>
<xsd:choice minOccurs="0" maxOccurs="unbounded">
<xsd:element name="packet" type="ffprobe:packetType" minOccurs="0" maxOccurs="unbounded"/>
<xsd:element name="frame" type="ffprobe:frameType" minOccurs="0" maxOccurs="unbounded"/>
<xsd:element name="subtitle" type="ffprobe:subtitleType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:choice>
</xsd:sequence>
</xsd:complexType>
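An instance fragment that this @code{packetsAndFramesType} would accept might look like the following; the attribute values are invented for illustration, and real ffprobe output carries many more attributes than the required ones shown here:

```xml
<packets_and_frames>
  <packet codec_type="video" stream_index="0" pts="0" dts="0" flags="K_"/>
  <frame media_type="video" key_frame="1" pts="0"/>
  <subtitle media_type="subtitle" pts="1500"/>
</packets_and_frames>
```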
<xsd:complexType name="packetType"> <xsd:complexType name="packetType">
<xsd:attribute name="codec_type" type="xsd:string" use="required" /> <xsd:attribute name="codec_type" type="xsd:string" use="required" />
<xsd:attribute name="stream_index" type="xsd:int" use="required" /> <xsd:attribute name="stream_index" type="xsd:int" use="required" />
@@ -62,15 +50,9 @@
<xsd:attribute name="pos" type="xsd:long" /> <xsd:attribute name="pos" type="xsd:long" />
<xsd:attribute name="flags" type="xsd:string" use="required" /> <xsd:attribute name="flags" type="xsd:string" use="required" />
<xsd:attribute name="data" type="xsd:string" /> <xsd:attribute name="data" type="xsd:string" />
<xsd:attribute name="data_hash" type="xsd:string" />
</xsd:complexType> </xsd:complexType>
<xsd:complexType name="frameType"> <xsd:complexType name="frameType">
<xsd:sequence>
<xsd:element name="tag" type="ffprobe:tagType" minOccurs="0" maxOccurs="unbounded"/>
<xsd:element name="side_data_list" type="ffprobe:frameSideDataListType" minOccurs="0" maxOccurs="1" />
</xsd:sequence>
<xsd:attribute name="media_type" type="xsd:string" use="required"/> <xsd:attribute name="media_type" type="xsd:string" use="required"/>
<xsd:attribute name="key_frame" type="xsd:int" use="required"/> <xsd:attribute name="key_frame" type="xsd:int" use="required"/>
<xsd:attribute name="pts" type="xsd:long" /> <xsd:attribute name="pts" type="xsd:long" />
@@ -105,16 +87,6 @@
<xsd:attribute name="repeat_pict" type="xsd:int" /> <xsd:attribute name="repeat_pict" type="xsd:int" />
</xsd:complexType> </xsd:complexType>
<xsd:complexType name="frameSideDataListType">
<xsd:sequence>
<xsd:element name="side_data" type="ffprobe:frameSideDataType" minOccurs="1" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="frameSideDataType">
<xsd:attribute name="side_data_type" type="xsd:string"/>
<xsd:attribute name="side_data_size" type="xsd:int" />
</xsd:complexType>
<xsd:complexType name="subtitleType"> <xsd:complexType name="subtitleType">
<xsd:attribute name="media_type" type="xsd:string" fixed="subtitle" use="required"/> <xsd:attribute name="media_type" type="xsd:string" fixed="subtitle" use="required"/>
<xsd:attribute name="pts" type="xsd:long" /> <xsd:attribute name="pts" type="xsd:long" />
@@ -166,7 +138,6 @@
<xsd:attribute name="codec_tag" type="xsd:string" use="required"/> <xsd:attribute name="codec_tag" type="xsd:string" use="required"/>
<xsd:attribute name="codec_tag_string" type="xsd:string" use="required"/> <xsd:attribute name="codec_tag_string" type="xsd:string" use="required"/>
<xsd:attribute name="extradata" type="xsd:string" /> <xsd:attribute name="extradata" type="xsd:string" />
<xsd:attribute name="extradata_hash" type="xsd:string" />
<!-- video attributes --> <!-- video attributes -->
<xsd:attribute name="width" type="xsd:int"/> <xsd:attribute name="width" type="xsd:int"/>
@@ -176,13 +147,7 @@
<xsd:attribute name="display_aspect_ratio" type="xsd:string"/> <xsd:attribute name="display_aspect_ratio" type="xsd:string"/>
<xsd:attribute name="pix_fmt" type="xsd:string"/> <xsd:attribute name="pix_fmt" type="xsd:string"/>
<xsd:attribute name="level" type="xsd:int"/> <xsd:attribute name="level" type="xsd:int"/>
<xsd:attribute name="color_range" type="xsd:string"/>
<xsd:attribute name="color_space" type="xsd:string"/>
<xsd:attribute name="color_transfer" type="xsd:string"/>
<xsd:attribute name="color_primaries" type="xsd:string"/>
<xsd:attribute name="chroma_location" type="xsd:string"/>
<xsd:attribute name="timecode" type="xsd:string"/> <xsd:attribute name="timecode" type="xsd:string"/>
<xsd:attribute name="refs" type="xsd:int"/>
<!-- audio attributes --> <!-- audio attributes -->
<xsd:attribute name="sample_fmt" type="xsd:string"/> <xsd:attribute name="sample_fmt" type="xsd:string"/>
@@ -200,8 +165,6 @@
<xsd:attribute name="duration_ts" type="xsd:long"/> <xsd:attribute name="duration_ts" type="xsd:long"/>
<xsd:attribute name="duration" type="xsd:float"/> <xsd:attribute name="duration" type="xsd:float"/>
<xsd:attribute name="bit_rate" type="xsd:int"/> <xsd:attribute name="bit_rate" type="xsd:int"/>
<xsd:attribute name="max_bit_rate" type="xsd:int"/>
<xsd:attribute name="bits_per_raw_sample" type="xsd:int"/>
<xsd:attribute name="nb_frames" type="xsd:int"/> <xsd:attribute name="nb_frames" type="xsd:int"/>
<xsd:attribute name="nb_read_frames" type="xsd:int"/> <xsd:attribute name="nb_read_frames" type="xsd:int"/>
<xsd:attribute name="nb_read_packets" type="xsd:int"/> <xsd:attribute name="nb_read_packets" type="xsd:int"/>
@@ -254,7 +217,10 @@
<xsd:complexType name="programVersionType"> <xsd:complexType name="programVersionType">
<xsd:attribute name="version" type="xsd:string" use="required"/> <xsd:attribute name="version" type="xsd:string" use="required"/>
<xsd:attribute name="copyright" type="xsd:string" use="required"/> <xsd:attribute name="copyright" type="xsd:string" use="required"/>
<xsd:attribute name="compiler_ident" type="xsd:string" use="required"/>
<xsd:attribute name="build_date" type="xsd:string" use="required"/>
<xsd:attribute name="build_time" type="xsd:string" use="required"/>
<xsd:attribute name="compiler_type" type="xsd:string" use="required"/>
<xsd:attribute name="compiler_version" type="xsd:string" use="required"/>
<xsd:attribute name="configuration" type="xsd:string" use="required"/> <xsd:attribute name="configuration" type="xsd:string" use="required"/>
</xsd:complexType> </xsd:complexType>
@@ -291,45 +257,4 @@
<xsd:element name="library_version" type="ffprobe:libraryVersionType" minOccurs="0" maxOccurs="unbounded"/> <xsd:element name="library_version" type="ffprobe:libraryVersionType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence> </xsd:sequence>
</xsd:complexType> </xsd:complexType>
<xsd:complexType name="pixelFormatFlagsType">
<xsd:attribute name="big_endian" type="xsd:int" use="required"/>
<xsd:attribute name="palette" type="xsd:int" use="required"/>
<xsd:attribute name="bitstream" type="xsd:int" use="required"/>
<xsd:attribute name="hwaccel" type="xsd:int" use="required"/>
<xsd:attribute name="planar" type="xsd:int" use="required"/>
<xsd:attribute name="rgb" type="xsd:int" use="required"/>
<xsd:attribute name="pseudopal" type="xsd:int" use="required"/>
<xsd:attribute name="alpha" type="xsd:int" use="required"/>
</xsd:complexType>
<xsd:complexType name="pixelFormatComponentType">
<xsd:attribute name="index" type="xsd:int" use="required"/>
<xsd:attribute name="bit_depth" type="xsd:int" use="required"/>
</xsd:complexType>
<xsd:complexType name="pixelFormatComponentsType">
<xsd:sequence>
<xsd:element name="component" type="ffprobe:pixelFormatComponentType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="pixelFormatType">
<xsd:sequence>
<xsd:element name="flags" type="ffprobe:pixelFormatFlagsType" minOccurs="0" maxOccurs="1"/>
<xsd:element name="components" type="ffprobe:pixelFormatComponentsType" minOccurs="0" maxOccurs="1"/>
</xsd:sequence>
<xsd:attribute name="name" type="xsd:string" use="required"/>
<xsd:attribute name="nb_components" type="xsd:int" use="required"/>
<xsd:attribute name="log2_chroma_w" type="xsd:int"/>
<xsd:attribute name="log2_chroma_h" type="xsd:int"/>
<xsd:attribute name="bits_per_pixel" type="xsd:int"/>
</xsd:complexType>
<xsd:complexType name="pixelFormatsType">
<xsd:sequence>
<xsd:element name="pixel_format" type="ffprobe:pixelFormatType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
</xsd:schema> </xsd:schema>
View File
@@ -1,11 +1,11 @@
# Port on which the server is listening. You must select a different # Port on which the server is listening. You must select a different
# port from your standard HTTP web server if it is running on the same # port from your standard HTTP web server if it is running on the same
# computer. # computer.
HTTPPort 8090 Port 8090
# Address on which the server is bound. Only useful if you have # Address on which the server is bound. Only useful if you have
# several network interfaces. # several network interfaces.
HTTPBindAddress 0.0.0.0 BindAddress 0.0.0.0
# Number of simultaneous HTTP connections that can be handled. It has # Number of simultaneous HTTP connections that can be handled. It has
# to be defined *before* the MaxClients parameter, since it defines the # to be defined *before* the MaxClients parameter, since it defines the
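Putting the renamed directives together, a minimal top-of-file fragment using the newer names might read as follows; the values are the sample-configuration defaults and are shown only for illustration (per the ffserver documentation, the older @code{Port}/@code{BindAddress} spellings remain accepted but are deprecated):

```
# Hypothetical minimal ffserver.conf fragment using the newer names.
HTTPPort 8090
HTTPBindAddress 0.0.0.0
MaxHTTPConnections 2000
MaxClients 1000
```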
View File
@@ -1,5 +1,4 @@
\input texinfo @c -*- texinfo -*- \input texinfo @c -*- texinfo -*-
@documentencoding UTF-8
@settitle ffserver Documentation @settitle ffserver Documentation
@titlepage @titlepage
@@ -67,7 +66,7 @@ http://@var{ffserver_ip_address}:@var{http_port}/@var{feed_name}
where @var{ffserver_ip_address} is the IP address of the machine where where @var{ffserver_ip_address} is the IP address of the machine where
@command{ffserver} is installed, @var{http_port} is the port number of @command{ffserver} is installed, @var{http_port} is the port number of
the HTTP server (configured through the @option{HTTPPort} option), and the HTTP server (configured through the @option{Port} option), and
@var{feed_name} is the name of the corresponding feed defined in the @var{feed_name} is the name of the corresponding feed defined in the
configuration file. configuration file.
@@ -102,7 +101,7 @@ http://@var{ffserver_ip_address}:@var{rtsp_port}/@var{stream_name}[@var{options}
the configuration file. @var{options} is a list of options specified the configuration file. @var{options} is a list of options specified
after the URL which affects how the stream is served by after the URL which affects how the stream is served by
@command{ffserver}. @var{http_port} and @var{rtsp_port} are the HTTP @command{ffserver}. @var{http_port} and @var{rtsp_port} are the HTTP
and RTSP ports configured with the options @var{HTTPPort} and and RTSP ports configured with the options @var{Port} and
@var{RTSPPort} respectively. @var{RTSPPort} respectively.
In case the stream is associated to a feed, the encoding parameters In case the stream is associated to a feed, the encoding parameters
@@ -112,7 +111,7 @@ must be configured in the stream configuration. They are sent to
the @command{ffmpeg} encoders. the @command{ffmpeg} encoders.
The @command{ffmpeg} @option{override_ffserver} commandline option The @command{ffmpeg} @option{override_ffserver} commandline option
allows one to override the encoding parameters set by the server. allows to override the encoding parameters set by the server.
Multiple streams can be connected to the same feed. Multiple streams can be connected to the same feed.
@@ -204,9 +203,11 @@ WARNING: trying to stream test1.mpg doesn't work with WMP as it tries to
transfer the entire file before starting to play. transfer the entire file before starting to play.
The same is true of AVI files. The same is true of AVI files.
You should edit the @file{ffserver.conf} file to suit your needs (in
terms of frame rates etc). Then install @command{ffserver} and
@command{ffmpeg}, write a script to start them up, and off you go.

@section What happens next?

You should edit the ffserver.conf file to suit your needs (in terms of
frame rates etc). Then install ffserver and ffmpeg, write a script to start
them up, and off you go.
@section What else can it do? @section What else can it do?
@@ -353,29 +354,20 @@ allow everybody else.
@section Global options @section Global options
@table @option @table @option
@item HTTPPort @var{port_number}
@item Port @var{port_number} @item Port @var{port_number}
@item RTSPPort @var{port_number} @item RTSPPort @var{port_number}
@var{HTTPPort} sets the HTTP server listening TCP port number,
@var{RTSPPort} sets the RTSP server listening TCP port number.
@var{Port} is the equivalent of @var{HTTPPort} and is deprecated.
You must select a different port from your standard HTTP web server if
it is running on the same computer.

Set TCP port number on which the HTTP/RTSP server is listening. You
must select a different port from your standard HTTP web server if it
is running on the same computer.
If not specified, no corresponding server will be created. If not specified, no corresponding server will be created.
@item HTTPBindAddress @var{ip_address}
@item BindAddress @var{ip_address} @item BindAddress @var{ip_address}
@item RTSPBindAddress @var{ip_address} @item RTSPBindAddress @var{ip_address}
Set address on which the HTTP/RTSP server is bound. Only useful if you Set address on which the HTTP/RTSP server is bound. Only useful if you
have several network interfaces. have several network interfaces.
@var{BindAddress} is the equivalent of @var{HTTPBindAddress} and is
deprecated.
@item MaxHTTPConnections @var{n} @item MaxHTTPConnections @var{n}
Set number of simultaneous HTTP connections that can be handled. It Set number of simultaneous HTTP connections that can be handled. It
has to be defined @emph{before} the @option{MaxClients} parameter, has to be defined @emph{before} the @option{MaxClients} parameter,
@@ -409,12 +401,6 @@ ignored, and the log is written to standard output.
Set no-daemon mode. This option is currently ignored since now Set no-daemon mode. This option is currently ignored since now
@command{ffserver} will always work in no-daemon mode, and is @command{ffserver} will always work in no-daemon mode, and is
deprecated. deprecated.
@item UseDefaults
@item NoDefaults
Control whether default codec options are used for all the streams or not.
Each stream may override this setting for itself. Default is @var{UseDefaults}.
The latest occurrence overrides previous ones if there are multiple definitions.
@end table @end table
@section Feed section @section Feed section
@@ -578,11 +564,6 @@ deprecated in favor of @option{Metadata}.
@item Metadata @var{key} @var{value} @item Metadata @var{key} @var{value}
Set metadata value on the output stream. Set metadata value on the output stream.
@item UseDefaults
@item NoDefaults
Control whether default codec options are used for the stream or not.
Default is @var{UseDefaults} unless disabled globally.
@item NoAudio @item NoAudio
@item NoVideo @item NoVideo
Suppress audio/video. Suppress audio/video.
@@ -601,9 +582,8 @@ Set sampling frequency for audio. When using low bitrates, you should
lower this frequency to 22050 or 11025. The supported frequencies
depend on the selected audio codec.

@item AVOptionAudio [@var{codec}:]@var{option} @var{value} (@emph{encoding,audio})
Set a generic or private option for the audio stream.
A private option must be prefixed with the codec name, or the codec must be defined beforehand.

@item AVPresetAudio @var{preset} (@emph{encoding,audio})
Set preset for audio stream.

@@ -680,9 +660,8 @@ Set video @option{qdiff} encoding option.
@item DarkMask @var{float} (@emph{encoding,video})
Set @option{lumi_mask}/@option{dark_mask} encoding options.

@item AVOptionVideo [@var{codec}:]@var{option} @var{value} (@emph{encoding,video})
Set a generic or private option for the video stream.
A private option must be prefixed with the codec name, or the codec must be defined beforehand.

@item AVPresetVideo @var{preset} (@emph{encoding,video})
Set preset for video stream.
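The codec-prefix syntax can be sketched as follows (the codec and option names are illustrative assumptions, not prescriptions from this section):
@example
<Stream test.flv>
VideoCodec libx264
AVOptionVideo flags +global_header
AVOptionVideo libx264:preset ultrafast
</Stream>
@end example
The first @option{AVOptionVideo} line sets a generic option; the second sets a private option via the codec-name prefix, which is also possible without the prefix here since @option{VideoCodec} is defined above.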


@@ -3,7 +3,7 @@ representing a number as input, which may be followed by one of the SI
unit prefixes, for example: 'K', 'M', or 'G'.
If 'i' is appended to the SI unit prefix, the complete prefix will be
interpreted as a unit prefix for binary multiples, which are based on
powers of 1024 instead of powers of 1000. Appending 'B' to the SI unit
prefix multiplies the value by 8. This allows using, for example:
'KB', 'MiB', 'G' and 'B' as number suffixes.
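As a worked example of the suffix arithmetic: '1K' is 1000, '1Ki' is 1024, '1KB' is 8 x 1000 = 8000, and '1KiB' is 8 x 1024 = 8192. A hypothetical invocation using such suffixes (the option and file names are only placeholders):
@example
ffmpeg -i input.avi -b:v 1M -bufsize 64KiB output.mp4
@end example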
@@ -44,15 +44,8 @@ streams of this type.
If @var{stream_index} is given, then it matches the stream with number @var{stream_index}
in the program with the id @var{program_id}. Otherwise, it matches all streams in the
program.

@item #@var{stream_id} or i:@var{stream_id}
Match the stream by stream id (e.g. the PID in an MPEG-TS container).

@item m:@var{key}[:@var{value}]
Matches streams with the metadata tag @var{key} having the specified value. If
@var{value} is not given, matches streams that contain the given tag with any
value.

Note that in @command{ffmpeg}, matching by metadata will only work properly for
input files.
@end table
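To make the specifier forms above concrete, here are two hedged @command{ffmpeg} invocations (the file names, PID, and language tag are hypothetical):
@example
# select the stream whose MPEG-TS PID is 0x101
ffmpeg -i input.ts -map i:0x101 -c copy audio.out
# select streams whose "language" metadata tag equals "eng"
ffmpeg -i input.mkv -map 0:m:language:eng -c copy output.mkv
@end example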
@section Generic options

@@ -103,10 +96,7 @@ Print detailed information about the filter name @var{filter_name}. Use the
Show version.

@item -formats
Show available formats (including devices).

@item -devices
Show available devices.

@item -codecs
Show all codecs known to libavcodec.

@@ -141,22 +131,6 @@ Show channel names and standard channel layouts.
@item -colors
Show recognized color names.

@item -sources @var{device}[,@var{opt1}=@var{val1}[,@var{opt2}=@var{val2}]...]
Show autodetected sources of the input device.
Some devices may provide system-dependent source names that cannot be autodetected.
The returned list cannot be assumed to be always complete.
@example
ffmpeg -sources pulse,server=192.168.0.4
@end example

@item -sinks @var{device}[,@var{opt1}=@var{val1}[,@var{opt2}=@var{val2}]...]
Show autodetected sinks of the output device.
Some devices may provide system-dependent sink names that cannot be autodetected.
The returned list cannot be assumed to be always complete.
@example
ffmpeg -sinks pulse,server=192.168.0.4
@end example
@item -loglevel [repeat+]@var{loglevel} | -v [repeat+]@var{loglevel}
Set the logging level used by the library.
Adding "repeat+" indicates that repeated log output should not be compressed

@@ -165,27 +139,27 @@ omitted. "repeat" can also be used alone.
If "repeat" is used alone, and with no prior loglevel set, the default
loglevel will be used. If multiple loglevel parameters are given, using
'repeat' will not change the loglevel.

@var{loglevel} is a string or a number containing one of the following values:
@table @samp
@item quiet, -8
Show nothing at all; be silent.
@item panic, 0
Only show fatal errors which could lead the process to crash, such as
an assert failure. This is not currently used for anything.
@item fatal, 8
Only show fatal errors. These are errors after which the process absolutely
cannot continue.
@item error, 16
Show all errors, including ones which can be recovered from.
@item warning, 24
Show all warnings and errors. Any message related to possibly
incorrect or unexpected events will be shown.
@item info, 32
Show informative messages during processing. This is in addition to
warnings and errors. This is the default value.
@item verbose, 40
Same as @code{info}, except more verbose.
@item debug, 48
Show everything, including debugging information.
@end table
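For instance (file names are placeholders), to run a conversion while printing only errors and anything more severe:
@example
ffmpeg -loglevel error -i input.avi output.mp4
@end example
The numeric form is equivalent, e.g. @code{-loglevel 16}.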
@@ -204,29 +178,19 @@ directory.
This file can be useful for bug reports.
It also implies @code{-loglevel verbose}.

Setting the environment variable @env{FFREPORT} to any value has the
same effect. If the value is a ':'-separated key=value sequence, these
options will affect the report; option values must be escaped if they
contain special characters or the options delimiter ':' (see the
``Quoting and escaping'' section in the ffmpeg-utils manual).

The following options are recognized:
@table @option
@item file
set the file name to use for the report; @code{%p} is expanded to the name
of the program, @code{%t} is expanded to a timestamp, @code{%%} is expanded
to a plain @code{%}
@item level
set the log verbosity level using a numerical value (see @code{-loglevel}).
@end table

For example, to output a report to a file named @file{ffreport.log}
using a log level of @code{32} (alias for log level @code{info}):
@example
FFREPORT=file=ffreport.log:level=32 ffmpeg -i input output
@end example

Errors in parsing the environment variable are not fatal, and will not
appear in the report.
@@ -294,41 +258,8 @@ Possible flags for this option are:
@end table

@item -opencl_bench
This option is used to benchmark all available OpenCL devices and print the
results. This option is only available when FFmpeg has been compiled with
@code{--enable-opencl}.

When FFmpeg is configured with @code{--enable-opencl}, the options for the
global OpenCL context are set via @option{-opencl_options}. See the
"OpenCL Options" section in the ffmpeg-utils manual for the complete list of
supported options. Amongst others, these options include the ability to select
a specific platform and device to run the OpenCL code on. By default, FFmpeg
will run on the first device of the first platform. While the options for the
global OpenCL context provide flexibility to the user in selecting the OpenCL
device of their choice, most users would probably want to select the fastest
OpenCL device for their system.

This option assists the selection of the most efficient configuration by
identifying the appropriate device for the user's system. The built-in
benchmark is run on all the OpenCL devices and the performance is measured for
each device. The devices in the results list are sorted based on their
performance, with the fastest device listed first. The user can subsequently
invoke @command{ffmpeg} using the device deemed most appropriate via
@option{-opencl_options} to obtain the best performance for the OpenCL
accelerated code.

Typical usage of the fastest OpenCL device involves the following steps.
Run the command:
@example
ffmpeg -opencl_bench
@end example
Note down the platform ID (@var{pidx}) and device ID (@var{didx}) of the first,
i.e. fastest, device in the list.
Select the platform and device using the command:
@example
ffmpeg -opencl_options platform_idx=@var{pidx}:device_idx=@var{didx} ...
@end example

@item -opencl_options options (@emph{global})
Set OpenCL environment options. This option is only available when


@@ -55,10 +55,6 @@ Do not merge side data.
Enable RTP MP4A-LATM payload.
@item nobuffer
Reduce the latency introduced by optional buffering
@item bitexact
Only write platform-, build- and time-independent data.
This ensures that file and data checksums are reproducible and match between
platforms. Its primary use is for regression testing.
@end table
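A sketch of the @option{bitexact} flag in use (file names are placeholders): the same input converted on two different platforms should then yield byte-identical output suitable for checksum comparison.
@example
ffmpeg -i input.wav -fflags +bitexact output.wav
@end example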
@item seek2any @var{integer} (@emph{input})

@@ -172,18 +168,6 @@ The offset is added by the muxer to the output timestamps.
Specifying a positive offset means that the corresponding streams are
delayed by the time duration specified in @var{offset}. Default value
is @code{0} (meaning that no offset is applied).

@item format_whitelist @var{list} (@emph{input})
","-separated list of allowed demuxers. By default all are allowed.

@item dump_separator @var{string} (@emph{input})
Separator used to separate the fields printed on the command line about the
stream parameters.
For example, to separate the fields with newlines and indentation:
@example
ffprobe -dump_separator "
          " -i ~/videos/matrixbench_mpeg2.mpg
@end example
@end table
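As an illustration of @option{format_whitelist} (the demuxer and file names are assumptions), restricting probing to a single demuxer:
@example
ffmpeg -format_whitelist mpegts -i input.ts output.mkv
@end example
Opening a file whose format is not on the list then fails instead of falling back to another demuxer.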
@c man end FORMAT OPTIONS


@@ -1,5 +1,4 @@
\input texinfo @c -*- texinfo -*-
@documentencoding UTF-8
@settitle General Documentation
@titlepage

@@ -109,14 +108,6 @@ Go to @url{http://www.wavpack.com/} and follow the instructions for
installing the library. Then pass @code{--enable-libwavpack} to configure to
enable it.

@section OpenH264
FFmpeg can make use of the OpenH264 library for H.264 encoding.
Go to @url{http://www.openh264.org/} and follow the instructions for
installing the library. Then pass @code{--enable-libopenh264} to configure to
enable it.

@section x264
FFmpeg can make use of the x264 library for H.264 encoding.

@@ -139,7 +130,7 @@ Go to @url{http://x265.org/developers.html} and follow the instructions
for installing the library. Then pass @code{--enable-libx265} to configure
to enable it.
@float NOTE
x265 is under the GNU Public License Version 2 or later
(see @url{http://www.gnu.org/licenses/old-licenses/gpl-2.0.html} for
details), you must upgrade FFmpeg's license to GPL in order to use it.

@@ -152,7 +143,7 @@ by Google as part of the WebRTC project. libilbc is a packaging friendly
copy of the iLBC codec. FFmpeg can make use of the libilbc library for
iLBC encoding and decoding.
Go to @url{https://github.com/TimothyGu/libilbc} and follow the instructions for
installing the library. Then pass @code{--enable-libilbc} to configure to
enable it.
@@ -171,27 +162,6 @@ libzvbi is licensed under the GNU General Public License Version 2 or later
you must upgrade FFmpeg's license to GPL in order to use it.
@end float
@section AviSynth
FFmpeg can read AviSynth scripts as input. To enable support, pass
@code{--enable-avisynth} to configure. The correct headers are
included in compat/avisynth/, which allows the user to enable support
without needing to search for these headers themselves.
For Windows, supported AviSynth variants are
@url{http://avisynth.nl, AviSynth 2.5 or 2.6} for 32-bit builds and
@url{http://avs-plus.net, AviSynth+ 0.1} for 32-bit and 64-bit builds.
For Linux and OS X, the supported AviSynth variant is
@url{https://github.com/avxsynth/avxsynth, AvxSynth}.
@float NOTE
AviSynth and AvxSynth are loaded dynamically. Distributors can build FFmpeg
with @code{--enable-avisynth}, and the binaries will work regardless of the
end user having AviSynth or AvxSynth installed - they'll only need to be
installed to use AviSynth scripts (obviously).
@end float
@chapter Supported File Formats, Codecs or Features
@@ -214,7 +184,7 @@ library:
@item American Laser Games MM @tab @tab X
@tab Multimedia format used in games like Mad Dog McCree.
@item 3GPP AMR @tab X @tab X
@item Amazing Studio Packed Animation File @tab @tab X
@tab Multimedia format used in game Heart Of Darkness.
@item Apple HTTP Live Streaming @tab @tab X
@item Artworx Data Format @tab @tab X

@@ -252,11 +222,8 @@ library:
@tab Used in the game Cyberia from Interplay.
@item Delphine Software International CIN @tab @tab X
@tab Multimedia format used by Delphine Software games.
@item Digital Speech Standard (DSS) @tab @tab X
@item Canopus HQX @tab @tab X
@item CD+G @tab @tab X
@tab Video format used by CD+G karaoke disks
@item Phantom Cine @tab @tab X
@item Commodore CDXL @tab @tab X
@tab Amiga CD video format
@item Core Audio Format @tab X @tab X
@@ -270,7 +237,6 @@ library:
@item Deluxe Paint Animation @tab @tab X
@item DFA @tab @tab X
@tab This format is used in Chronomaster game
@item DSD Stream File (DSF) @tab @tab X
@item DV video @tab X @tab X
@item DXA @tab @tab X
@tab This format is used in the non-Windows version of the Feeble Files

@@ -321,11 +287,9 @@ library:
@tab Used by Linux Media Labs MPEG-4 PCI boards
@item LOAS @tab @tab X
@tab contains LATM multiplexed AAC audio
@item LRC @tab X @tab X
@item LVF @tab @tab X
@item LXF @tab @tab X
@tab VR native stream format, used by Leitch/Harris' video servers.
@item Magic Lantern Video (MLV) @tab @tab X
@item Matroska @tab X @tab X
@item Matroska audio @tab X @tab
@item FFmpeg metadata @tab X @tab X
@@ -465,7 +429,6 @@ library:
@item Sony Wave64 (W64) @tab X @tab X
@item SoX native format @tab X @tab X
@item SUN AU format @tab X @tab X
@item SUP raw PGS subtitles @tab @tab X
@item Text files @tab @tab X
@item THP @tab @tab X
@tab Used on the Nintendo GameCube.

@@ -504,13 +467,11 @@ following image formats are supported:
@item Name @tab Encoding @tab Decoding @tab Comments
@item .Y.U.V @tab X @tab X
@tab one raw file per component
@item Alias PIX @tab X @tab X
@tab Alias/Wavefront PIX image format
@item animated GIF @tab X @tab X
@item BMP @tab X @tab X
@tab Microsoft BMP image
@item BRender PIX @tab @tab X
@tab Argonaut BRender 3D engine image format.
@item DPX @tab X @tab X
@tab Digital Picture Exchange
@item EXR @tab @tab X
@@ -664,7 +625,7 @@ following image formats are supported:
@item H.263 / H.263-1996 @tab X @tab X
@item H.263+ / H.263-1998 / H.263 version 2 @tab X @tab X
@item H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 @tab E @tab X
@tab encoding supported through external libraries libx264 and OpenH264
@item HEVC @tab X @tab X
@tab encoding supported through the external library libx265
@item HNM version 4 @tab @tab X

@@ -698,8 +659,8 @@ following image formats are supported:
@item LCL (LossLess Codec Library) MSZH @tab @tab X
@item LCL (LossLess Codec Library) ZLIB @tab E @tab E
@item LOCO @tab @tab X
@item LucasArts SANM/Smush @tab @tab X
@tab Used in LucasArts games / SMUSH animations.
@item lossless MJPEG @tab X @tab X
@item Microsoft ATC Screen @tab @tab X
@tab Also known as Microsoft Screen 3.

@@ -734,8 +695,6 @@ following image formats are supported:
@tab fourcc: VP50
@item On2 VP6 @tab @tab X
@tab fourcc: VP60,VP61,VP62
@item On2 VP7 @tab @tab X
@tab fourcc: VP70,VP71
@item VP8 @tab E @tab X
@tab fourcc: VP80, encoding supported through external library libvpx
@item VP9 @tab E @tab X
@@ -765,11 +724,11 @@ following image formats are supported:
@tab Texture dictionaries used by the Renderware Engine.
@item RL2 video @tab @tab X
@tab used in some games by Entertainment Software Partners
@item SGI RLE 8-bit @tab @tab X
@item Sierra VMD video @tab @tab X
@tab Used in Sierra VMD files.
@item Silicon Graphics Motion Video Compressor 1 (MVC1) @tab @tab X
@item Silicon Graphics Motion Video Compressor 2 (MVC2) @tab @tab X
@item Smacker video @tab @tab X
@tab Video encoding used in Smacker.
@item SMPTE VC-1 @tab @tab X

@@ -835,7 +794,7 @@ following image formats are supported:
@tab encoding supported through external library libaacplus
@item AAC @tab E @tab X
@tab encoding supported through external libraries libfaac and libvo-aacenc
@item AC-3 @tab IX @tab IX
@item ADPCM 4X Movie @tab @tab X
@item ADPCM CDROM XA @tab @tab X
@item ADPCM Creative Technology @tab @tab X

@@ -879,8 +838,6 @@ following image formats are supported:
@item ADPCM Sound Blaster Pro 2-bit @tab @tab X
@item ADPCM Sound Blaster Pro 2.6-bit @tab @tab X
@item ADPCM Sound Blaster Pro 4-bit @tab @tab X
@item ADPCM VIMA @tab @tab X
@tab Used in LucasArts SMUSH animations.
@item ADPCM Westwood Studios IMA @tab @tab X
@tab Used in Westwood Studios games like Command and Conquer.
@item ADPCM Yamaha @tab X @tab X
@@ -900,7 +857,6 @@ following image formats are supported:
@tab decoding supported through external library libcelt
@item Delphine Software International CIN audio @tab @tab X
@tab Codec used in Delphine Software International games.
@item Digital Speech Standard - Standard Play mode (DSS SP) @tab @tab X
@item Discworld II BMV Audio @tab @tab X
@item COOK @tab @tab X
@tab All versions except 5.1 are supported.

@@ -914,10 +870,6 @@ following image formats are supported:
@item DPCM Sol @tab @tab X
@item DPCM Xan @tab @tab X
@tab Used in Origin's Wing Commander IV AVI files.
@item DSD (Direct Stream Digital), least significant bit first @tab @tab X
@item DSD (Direct Stream Digital), most significant bit first @tab @tab X
@item DSD (Direct Stream Digital), least significant bit first, planar @tab @tab X
@item DSD (Direct Stream Digital), most significant bit first, planar @tab @tab X
@item DSP Group TrueSpeech @tab @tab X
@item DV audio @tab @tab X
@item Enhanced AC-3 @tab X @tab X

@@ -940,14 +892,13 @@ following image formats are supported:
@item Monkey's Audio @tab @tab X
@item MP1 (MPEG audio layer 1) @tab @tab IX
@item MP2 (MPEG audio layer 2) @tab IX @tab IX
@tab encoding also supported through the external library TwoLAME
@item MP3 (MPEG audio layer 3) @tab E @tab IX
@tab encoding supported through external library LAME, ADU MP3 and MP3onMP4 also supported
@item MPEG-4 Audio Lossless Coding (ALS) @tab @tab X
@item Musepack SV7 @tab @tab X
@item Musepack SV8 @tab @tab X
@item Nellymoser Asao @tab X @tab X
@item On2 AVC (Audio for Video Codec) @tab @tab X
@item Opus @tab E @tab E
@tab supported through external library libopus
@item PCM A-law @tab X @tab X
@@ -1043,7 +994,6 @@ performance on systems without hardware floating point support).
@item PJS (Phoenix) @tab @tab X @tab @tab X
@item RealText @tab @tab X @tab @tab X
@item SAMI @tab @tab X @tab @tab X
@item Spruce format (STL) @tab @tab X @tab @tab X
@item SSA/ASS @tab X @tab X @tab X @tab X
@item SubRip (SRT) @tab X @tab X @tab X @tab X
@item SubViewer v1 @tab @tab X @tab @tab X

@@ -1051,7 +1001,7 @@ performance on systems without hardware floating point support).
@item TED Talks captions @tab @tab X @tab @tab X
@item VobSub (IDX+SUB) @tab @tab X @tab @tab X
@item VPlayer @tab @tab X @tab @tab X
@item WebVTT @tab X @tab X @tab X @tab X
@item XSUB @tab @tab @tab X @tab X
@end multitable

@@ -1069,7 +1019,6 @@ performance on systems without hardware floating point support).
@item HLS @tab X
@item HTTP @tab X
@item HTTPS @tab X
@item Icecast @tab X
@item MMSH @tab X
@item MMST @tab X
@item pipe @tab X

@@ -1080,7 +1029,6 @@ performance on systems without hardware floating point support).
@item RTMPTE @tab X
@item RTMPTS @tab X
@item RTP @tab X
@item SAMBA @tab E
@item SCTP @tab X
@item SFTP @tab E
@item TCP @tab X

@@ -1114,7 +1062,6 @@ performance on systems without hardware floating point support).
@item Video4Linux2 @tab X @tab X
@item VfW capture @tab X @tab
@item X11 grabbing @tab X @tab
@item Win32 grabbing @tab X @tab
@end multitable

@code{X} means that input/output is supported.


@@ -1,5 +1,4 @@
\input texinfo @c -*- texinfo -*-
@documentencoding UTF-8
@settitle Using git to develop FFmpeg

@@ -300,7 +299,7 @@ the current branch history.
git commit --amend
@end example
allows one to amend the last commit details quickly.
@example
git rebase -i origin/master


@@ -13,8 +13,8 @@ You can disable all the input devices using the configure option
option "--enable-indev=@var{INDEV}", or you can disable a particular
input device using the option "--disable-indev=@var{INDEV}".

The option "-devices" of the ff* tools will display the list of
supported input devices.

A description of the currently available input devices follows.

@@ -51,101 +51,6 @@ ffmpeg -f alsa -i hw:0 alsaout.wav
For more information see:
@url{http://www.alsa-project.org/alsa-doc/alsa-lib/pcm.html}
@section avfoundation
AVFoundation input device.
AVFoundation is the currently recommended framework by Apple for stream grabbing on OSX >= 10.7 as well as on iOS.
The older QTKit framework has been marked deprecated since OSX version 10.7.
The input filename has to be given in the following syntax:
@example
-i "[[VIDEO]:[AUDIO]]"
@end example
The first entry selects the video input while the latter selects the audio input.
The stream has to be specified by the device name or the device index as shown by the device list.
Alternatively, the video and/or audio input device can be chosen by index using
@option{-video_device_index <INDEX>} and/or @option{-audio_device_index <INDEX>},
overriding any device name or index given in the input filename.
All available devices can be enumerated by using @option{-list_devices true}, listing
all device names and corresponding indices.
There are two device name aliases:
@table @code
@item default
Select the AVFoundation default device of the corresponding type.
@item none
Do not record the corresponding media type.
This is equivalent to specifying an empty device name or index.
@end table
@subsection Options
AVFoundation supports the following options:
@table @option
@item -list_devices <TRUE|FALSE>
If set to true, a list of all available input devices is given showing all
device names and indices.
@item -video_device_index <INDEX>
Specify the video device by its index. Overrides anything given in the input filename.
@item -audio_device_index <INDEX>
Specify the audio device by its index. Overrides anything given in the input filename.
@item -pixel_format <FORMAT>
Request the video device to use a specific pixel format.
If the specified format is not supported, a list of available formats is given
and the first one in this list is used instead. Available pixel formats are:
@code{monob, rgb555be, rgb555le, rgb565be, rgb565le, rgb24, bgr24, 0rgb, bgr0, 0bgr, rgb0,
bgr48be, uyvy422, yuva444p, yuva444p16le, yuv444p, yuv422p16, yuv422p10, yuv444p10,
yuv420p, nv12, yuyv422, gray}
@end table
@subsection Examples
@itemize
@item
Print the list of AVFoundation supported devices and exit:
@example
$ ffmpeg -f avfoundation -list_devices true -i ""
@end example
@item
Record video from video device 0 and audio from audio device 0 into out.avi:
@example
$ ffmpeg -f avfoundation -i "0:0" out.avi
@end example
@item
Record video from video device 2 and audio from audio device 1 into out.avi:
@example
$ ffmpeg -f avfoundation -video_device_index 2 -i ":1" out.avi
@end example
@item
Record video from the system default video device using the pixel format bgr0 and do not record any audio into out.avi:
@example
$ ffmpeg -f avfoundation -pixel_format bgr0 -i "default:none" out.avi
@end example
@end itemize
@section bktr @section bktr
BSD video input device. BSD video input device.
@@ -167,7 +72,7 @@ The input name should be in the format:
@end example @end example
where @var{TYPE} can be either @var{audio} or @var{video}, where @var{TYPE} can be either @var{audio} or @var{video},
and @var{NAME} is the device's name or alternative name. and @var{NAME} is the device's name.
@subsection Options @subsection Options
@@ -220,61 +125,6 @@ Setting this value too low can degrade performance.
See also See also
@url{http://msdn.microsoft.com/en-us/library/windows/desktop/dd377582(v=vs.85).aspx} @url{http://msdn.microsoft.com/en-us/library/windows/desktop/dd377582(v=vs.85).aspx}
@item video_pin_name
Select video capture pin to use by name or alternative name.
@item audio_pin_name
Select audio capture pin to use by name or alternative name.
@item crossbar_video_input_pin_number
Select video input pin number for crossbar device. This will be
routed to the crossbar device's Video Decoder output pin.
Note that changing this value can affect future invocations
(sets a new default) until system reboot occurs.
@item crossbar_audio_input_pin_number
Select audio input pin number for crossbar device. This will be
routed to the crossbar device's Audio Decoder output pin.
Note that changing this value can affect future invocations
(sets a new default) until system reboot occurs.
@item show_video_device_dialog
If set to @option{true}, before capture starts, popup a display dialog
to the end user, allowing them to change video filter properties
and configurations manually.
Note that for crossbar devices, adjusting values in this dialog
may be needed at times to toggle between PAL (25 fps) and NTSC (29.97)
input frame rates, sizes, interlacing, etc. Changing these values can
enable different scan rates/frame rates and avoid green bars at
the bottom, flickering scan lines, etc.
Note that with some devices, changing these properties can also affect future
invocations (sets new defaults) until system reboot occurs.
@item show_audio_device_dialog
If set to @option{true}, before capture starts, popup a display dialog
to the end user, allowing them to change audio filter properties
and configurations manually.
@item show_video_crossbar_connection_dialog
If set to @option{true}, before capture starts, popup a display
dialog to the end user, allowing them to manually
modify crossbar pin routings, when it opens a video device.
@item show_audio_crossbar_connection_dialog
If set to @option{true}, before capture starts, popup a display
dialog to the end user, allowing them to manually
modify crossbar pin routings, when it opens an audio device.
@item show_analog_tv_tuner_dialog
If set to @option{true}, before capture starts, popup a display
dialog to the end user, allowing them to manually
modify TV channels and frequencies.
@item show_analog_tv_tuner_audio_dialog
If set to @option{true}, before capture starts, popup a display
dialog to the end user, allowing them to manually
modify TV audio (like mono vs. stereo, Language A, B or C).
@end table @end table
@subsection Examples @subsection Examples
@@ -311,19 +161,6 @@ Print the list of supported options in selected device and exit:
$ ffmpeg -list_options true -f dshow -i video="Camera" $ ffmpeg -list_options true -f dshow -i video="Camera"
@end example @end example
@item
Specify pin names to capture by name or alternative name, specify alternative device name:
@example
$ ffmpeg -f dshow -audio_pin_name "Audio Out" -video_pin_name 2 -i video="@@device_pnp_\\?\pci#ven_1a0a&dev_6200&subsys_62021461&rev_01#4&e2c7dd6&0&00e1#@{65e8773d-8f56-11d0-a3b9-00a0c9223196@}\@{ca465100-deb0-4d59-818f-8c477184adf6@}":audio="Microphone"
@end example
@item
Configure a crossbar device, specifying crossbar pins, allow user to adjust video capture properties at startup:
@example
$ ffmpeg -f dshow -show_video_device_dialog true -crossbar_video_input_pin_number 0
-crossbar_audio_input_pin_number 3 -i video="AVerMedia BDA Analog Capture":audio="AVerMedia BDA Analog Capture"
@end example
@end itemize @end itemize
@section dv1394 @section dv1394
@@ -355,81 +192,6 @@ ffmpeg -f fbdev -frames:v 1 -r 1 -i /dev/fb0 screenshot.jpeg
See also @url{http://linux-fbdev.sourceforge.net/}, and fbset(1). See also @url{http://linux-fbdev.sourceforge.net/}, and fbset(1).
@section gdigrab
Win32 GDI-based screen capture device.
This device allows you to capture a region of the display on Windows.
There are two options for the input filename:
@example
desktop
@end example
or
@example
title=@var{window_title}
@end example
The first option will capture the entire desktop, or a fixed region of the
desktop. The second option will instead capture the contents of a single
window, regardless of its position on the screen.
For example, to grab the entire desktop using @command{ffmpeg}:
@example
ffmpeg -f gdigrab -framerate 6 -i desktop out.mpg
@end example
Grab a 640x480 region at position @code{10,20}:
@example
ffmpeg -f gdigrab -framerate 6 -offset_x 10 -offset_y 20 -video_size vga -i desktop out.mpg
@end example
Grab the contents of the window named "Calculator"
@example
ffmpeg -f gdigrab -framerate 6 -i title=Calculator out.mpg
@end example
@subsection Options
@table @option
@item draw_mouse
Specify whether to draw the mouse pointer. Use the value @code{0} to
not draw the pointer. Default value is @code{1}.
@item framerate
Set the grabbing frame rate. Default value is @code{ntsc},
corresponding to a frame rate of @code{30000/1001}.
@item show_region
Show grabbed region on screen.
If @var{show_region} is specified with @code{1}, then the grabbing
region will be indicated on screen. With this option, it is easy to
know what is being grabbed if only a portion of the screen is grabbed.
Note that @var{show_region} is incompatible with grabbing the contents
of a single window.
For example:
@example
ffmpeg -f gdigrab -show_region 1 -framerate 6 -video_size cif -offset_x 10 -offset_y 20 -i desktop out.mpg
@end example
@item video_size
Set the video frame size. The default is to capture the full screen if @file{desktop} is selected, or the full window size if @file{title=@var{window_title}} is selected.
@item offset_x
When capturing a region with @var{video_size}, set the distance from the left edge of the screen or desktop.
Note that the offset calculation is from the top left corner of the primary monitor on Windows. If you have a monitor positioned to the left of your primary monitor, you will need to use a negative @var{offset_x} value to move the region to that monitor.
@item offset_y
When capturing a region with @var{video_size}, set the distance from the top edge of the screen or desktop.
Note that the offset calculation is from the top left corner of the primary monitor on Windows. If you have a monitor positioned above your primary monitor, you will need to use a negative @var{offset_y} value to move the region to that monitor.
@end table
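The offset rules above reduce to simple virtual-desktop arithmetic. A minimal sketch; the monitor layout is hypothetical and stands in for whatever geometry Windows reports:

```python
def region_offsets(monitor_origin, region_pos):
    """gdigrab offset_x/offset_y for a capture region, measured from the
    primary monitor's top-left corner (the virtual-desktop origin).
    region_pos is the region's top-left relative to its own monitor."""
    mx, my = monitor_origin
    rx, ry = region_pos
    return mx + rx, my + ry

# 640x480 region at (10, 20) on the primary monitor:
assert region_offsets((0, 0), (10, 20)) == (10, 20)
# Same region on a hypothetical 1920-wide monitor left of the primary:
# offset_x must go negative to reach it.
assert region_offsets((-1920, 0), (10, 20)) == (-1910, 20)
```

A monitor positioned above the primary works the same way with a negative @var{offset_y}.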
@section iec61883 @section iec61883
FireWire DV/HDV input device using libiec61883. FireWire DV/HDV input device using libiec61883.
@@ -458,7 +220,7 @@ not work and result in undefined behavior.
The values @option{auto}, @option{dv} and @option{hdv} are supported. The values @option{auto}, @option{dv} and @option{hdv} are supported.
@item dvbuffer @item dvbuffer
Set maximum size of buffer for incoming data, in frames. For DV, this Set maxiumum size of buffer for incoming data, in frames. For DV, this
is an exact value. For HDV, it is not frame exact, since HDV does is an exact value. For HDV, it is not frame exact, since HDV does
not have a fixed frame size. not have a fixed frame size.
@@ -563,14 +325,6 @@ generated by the device.
The first unlabelled output is automatically assigned to the "out0" The first unlabelled output is automatically assigned to the "out0"
label, but all the others need to be specified explicitly. label, but all the others need to be specified explicitly.
The suffix "+subcc" can be appended to the output label to create an extra
stream with the closed captions packets attached to that output
(experimental; only for EIA-608 / CEA-708 for now).
The subcc streams are created after all the normal streams, in the order of
the corresponding stream.
For example, if there is "out19+subcc", "out7+subcc" and up to "out42", the
stream #43 is subcc for stream #7 and stream #44 is subcc for stream #19.
If not specified defaults to the filename specified for the input If not specified defaults to the filename specified for the input
device. device.
@@ -617,63 +371,12 @@ Read an audio stream and a video stream and play it back with
ffplay -f lavfi "movie=test.avi[out0];amovie=test.wav[out1]" ffplay -f lavfi "movie=test.avi[out0];amovie=test.wav[out1]"
@end example @end example
@item
Dump decoded frames to images and closed captions to a file (experimental):
@example
ffmpeg -f lavfi -i "movie=test.ts[out0+subcc]" -map v frame%08d.png -map s -c copy -f rawvideo subcc.bin
@end example
@end itemize @end itemize
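The subcc stream ordering described above is plain index arithmetic: subcc streams are appended after all normal streams, ordered by the index of the output they belong to. A minimal sketch, reusing the out42/out7/out19 counts from the text:

```python
def subcc_stream_indices(n_outputs, subcc_outputs):
    """Map each "+subcc" output index to the stream index of its
    closed-captions stream: subcc streams come after the n_outputs
    normal streams, sorted by the output they correspond to."""
    return {out: n_outputs + i
            for i, out in enumerate(sorted(subcc_outputs))}

# Outputs out0..out42 (43 normal streams), with "out7+subcc" and
# "out19+subcc" requested:
indices = subcc_stream_indices(43, [19, 7])
assert indices == {7: 43, 19: 44}
```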
@section libcdio
Audio-CD input device based on libcdio.
To enable this input device during configuration you need libcdio
installed on your system. It requires the configure option
@code{--enable-libcdio}.
This device allows playing and grabbing from an Audio-CD.
For example to copy with @command{ffmpeg} the entire Audio-CD in @file{/dev/sr0},
you may run the command:
@example
ffmpeg -f libcdio -i /dev/sr0 cd.wav
@end example
@subsection Options
@table @option
@item speed
Set drive reading speed. Default value is 0.
The speed is specified CD-ROM speed units. The speed is set through
the libcdio @code{cdio_cddap_speed_set} function. On many CD-ROM
drives, specifying a value too large will result in using the fastest
speed.
@item paranoia_mode
Set paranoia recovery mode flags. It accepts one of the following values:
@table @samp
@item disable
@item verify
@item overlap
@item neverskip
@item full
@end table
Default value is @samp{disable}.
For more information about the available recovery modes, consult the
paranoia project documentation.
@end table
@section libdc1394 @section libdc1394
IIDC1394 input device, based on libdc1394 and libraw1394. IIDC1394 input device, based on libdc1394 and libraw1394.
Requires the configure option @code{--enable-libdc1394}.
@section openal @section openal
The OpenAL input device provides audio capture on all systems with a The OpenAL input device provides audio capture on all systems with a
@@ -706,7 +409,7 @@ OpenAL is part of Core Audio, the official Mac OS X Audio interface.
See @url{http://developer.apple.com/technologies/mac/audio-and-video.html} See @url{http://developer.apple.com/technologies/mac/audio-and-video.html}
@end table @end table
This device allows one to capture from an audio input device handled This device allows to capture from an audio input device handled
through OpenAL. through OpenAL.
You need to specify the name of the device to capture in the provided You need to specify the name of the device to capture in the provided
@@ -828,33 +531,6 @@ Record a stream from default device:
ffmpeg -f pulse -i default /tmp/pulse.wav ffmpeg -f pulse -i default /tmp/pulse.wav
@end example @end example
@section qtkit
QTKit input device.
The filename passed as input is parsed to contain either a device name or index.
The device index can also be given by using -video_device_index.
A given device index will override any given device name.
If the desired device consists of numbers only, use -video_device_index to identify it.
The default device will be chosen if an empty string or the device name "default" is given.
The available devices can be enumerated by using -list_devices.
@example
ffmpeg -f qtkit -i "0" out.mpg
@end example
@example
ffmpeg -f qtkit -video_device_index 0 -i "" out.mpg
@end example
@example
ffmpeg -f qtkit -i "default" out.mpg
@end example
@example
ffmpeg -f qtkit -list_devices true -i ""
@end example
@section sndio @section sndio
sndio input device. sndio input device.
@@ -941,7 +617,7 @@ Select the pixel format (only valid for raw video input).
@item input_format @item input_format
Set the preferred pixel format (for raw video) or a codec name. Set the preferred pixel format (for raw video) or a codec name.
This option allows one to select the input format, when several are This option allows to select the input format, when several are
available. available.
@item framerate @item framerate
@@ -1002,14 +678,7 @@ other filename will be interpreted as device number 0.
X11 video input device. X11 video input device.
To enable this input device during configuration you need libxcb This device allows to capture a region of an X11 display.
installed on your system. It will be automatically detected during
configuration.
Alternatively, the configure option @option{--enable-x11grab} exists
for legacy Xlib users.
This device allows one to capture a region of an X11 display.
The filename passed as input has the syntax: The filename passed as input has the syntax:
@example @example
@@ -1025,12 +694,10 @@ omitted, and defaults to "localhost". The environment variable
area with respect to the top-left border of the X11 screen. They area with respect to the top-left border of the X11 screen. They
default to 0. default to 0.
Check the X11 documentation (e.g. @command{man X}) for more detailed Check the X11 documentation (e.g. man X) for more detailed information.
information.
Use the @command{xdpyinfo} program for getting basic information about Use the @command{dpyinfo} program for getting basic information about the
the properties of your X11 display (e.g. grep for "name" or properties of your X11 display (e.g. grep for "name" or "dimensions").
"dimensions").
For example to grab from @file{:0.0} using @command{ffmpeg}: For example to grab from @file{:0.0} using @command{ffmpeg}:
@example @example
@@ -1079,10 +746,6 @@ If @var{show_region} is specified with @code{1}, then the grabbing
region will be indicated on screen. With this option, it is easy to region will be indicated on screen. With this option, it is easy to
know what is being grabbed if only a portion of the screen is grabbed. know what is being grabbed if only a portion of the screen is grabbed.
@item region_border
Set the region border thickness if @option{-show_region 1} is used.
Range is 1 to 128 and default is 3 (XCB-based x11grab only).
For example: For example:
@example @example
ffmpeg -f x11grab -show_region 1 -framerate 25 -video_size cif -i :0.0+10,20 out.mpg ffmpeg -f x11grab -show_region 1 -framerate 25 -video_size cif -i :0.0+10,20 out.mpg
@@ -1095,103 +758,6 @@ ffmpeg -f x11grab -follow_mouse centered -show_region 1 -framerate 25 -video_siz
@item video_size @item video_size
Set the video frame size. Default value is @code{vga}. Set the video frame size. Default value is @code{vga}.
@item use_shm
Use the MIT-SHM extension for shared memory. Default value is @code{1}.
It may be necessary to disable it for remote displays (legacy x11grab
only).
@end table @end table
@subsection @var{grab_x} @var{grab_y} AVOption
The syntax is:
@example
-grab_x @var{x_offset} -grab_y @var{y_offset}
@end example
Set the grabbing region coordinates. They are expressed as offsets from
the top left corner of the X11 window. The default value is 0.
@section decklink
The decklink input device provides capture capabilities for Blackmagic
DeckLink devices.
To enable this input device, you need the Blackmagic DeckLink SDK and you
need to configure with the appropriate @code{--extra-cflags}
and @code{--extra-ldflags}.
On Windows, you need to run the IDL files through @command{widl}.
DeckLink is very picky about the formats it supports. Pixel format is
uyvy422 or v210, framerate and video size must be determined for your device with
@command{-list_formats 1}. Audio sample rate is always 48 kHz and the number
of channels can be 2, 8 or 16.
@subsection Options
@table @option
@item list_devices
If set to @option{true}, print a list of devices and exit.
Defaults to @option{false}.
@item list_formats
If set to @option{true}, print a list of supported formats and exit.
Defaults to @option{false}.
@item bm_v210
If set to @samp{1}, video is captured in 10 bit v210 instead
of uyvy422. Not all Blackmagic devices support this option.
@item bm_channels <CHANNELS>
Number of audio channels. Can be 2, 8 or 16.
@item bm_audiodepth <BITDEPTH>
Audio bit depth, can be 16 or 32.
@end table
@subsection Examples
@itemize
@item
List input devices:
@example
ffmpeg -f decklink -list_devices 1 -i dummy
@end example
@item
List supported formats:
@example
ffmpeg -f decklink -list_formats 1 -i 'Intensity Pro'
@end example
@item
Capture video clip at 1080i50 (format 11):
@example
ffmpeg -f decklink -i 'Intensity Pro@@11' -acodec copy -vcodec copy output.avi
@end example
@item
Capture video clip at 1080i50 10 bit:
@example
ffmpeg -bm_v210 1 -f decklink -i 'UltraStudio Mini Recorder@@11' -acodec copy -vcodec copy output.avi
@end example
@item
Capture video clip at 720p50 with 32bit audio:
@example
ffmpeg -bm_audiodepth 32 -f decklink -i 'UltraStudio Mini Recorder@@14' -acodec copy -vcodec copy output.avi
@end example
@item
Capture video clip at 576i50 with 8 audio channels:
@example
ffmpeg -bm_channels 8 -f decklink -i 'UltraStudio Mini Recorder@@3' -acodec copy -vcodec copy output.avi
@end example
@end itemize
@c man end INPUT DEVICES @c man end INPUT DEVICES


@@ -22,7 +22,7 @@ a mail for every change to every issue.
(the above does all work already after light testing) (the above does all work already after light testing)
The subscription URL for the ffmpeg-trac list is: The subscription URL for the ffmpeg-trac list is:
http(s)://lists.ffmpeg.org/mailman/listinfo/ffmpeg-trac http(s)://ffmpeg.org/mailman/listinfo/ffmpeg-trac
The URL of the webinterface of the tracker is: The URL of the webinterface of the tracker is:
http(s)://trac.ffmpeg.org http(s)://trac.ffmpeg.org


@@ -1,5 +1,4 @@
\input texinfo @c -*- texinfo -*- \input texinfo @c -*- texinfo -*-
@documentencoding UTF-8
@settitle Libavcodec Documentation @settitle Libavcodec Documentation
@titlepage @titlepage


@@ -1,5 +1,4 @@
\input texinfo @c -*- texinfo -*- \input texinfo @c -*- texinfo -*-
@documentencoding UTF-8
@settitle Libavdevice Documentation @settitle Libavdevice Documentation
@titlepage @titlepage


@@ -1,5 +1,4 @@
\input texinfo @c -*- texinfo -*- \input texinfo @c -*- texinfo -*-
@documentencoding UTF-8
@settitle Libavfilter Documentation @settitle Libavfilter Documentation
@titlepage @titlepage


@@ -1,5 +1,4 @@
\input texinfo @c -*- texinfo -*- \input texinfo @c -*- texinfo -*-
@documentencoding UTF-8
@settitle Libavformat Documentation @settitle Libavformat Documentation
@titlepage @titlepage


@@ -1,5 +1,4 @@
\input texinfo @c -*- texinfo -*- \input texinfo @c -*- texinfo -*-
@documentencoding UTF-8
@settitle Libavutil Documentation @settitle Libavutil Documentation
@titlepage @titlepage


@@ -1,5 +1,4 @@
\input texinfo @c -*- texinfo -*- \input texinfo @c -*- texinfo -*-
@documentencoding UTF-8
@settitle Libswresample Documentation @settitle Libswresample Documentation
@titlepage @titlepage


@@ -1,5 +1,4 @@
\input texinfo @c -*- texinfo -*- \input texinfo @c -*- texinfo -*-
@documentencoding UTF-8
@settitle Libswscale Documentation @settitle Libswscale Documentation
@titlepage @titlepage


@@ -194,19 +194,15 @@ can not be smaller than one centi second.
Apple HTTP Live Streaming muxer that segments MPEG-TS according to Apple HTTP Live Streaming muxer that segments MPEG-TS according to
the HTTP Live Streaming (HLS) specification. the HTTP Live Streaming (HLS) specification.
It creates a playlist file, and one or more segment files. The output filename It creates a playlist file and numbered segment files. The output
specifies the playlist filename. filename specifies the playlist filename; the segment filenames
receive the same basename as the playlist, a sequential number and
By default, the muxer creates a file for each segment produced. These files a .ts extension.
have the same name as the playlist, followed by a sequential number and a
.ts extension.
For example, to convert an input file with @command{ffmpeg}: For example, to convert an input file with @command{ffmpeg}:
@example @example
ffmpeg -i in.nut out.m3u8 ffmpeg -i in.nut out.m3u8
@end example @end example
This example will produce the playlist, @file{out.m3u8}, and segment files:
@file{out0.ts}, @file{out1.ts}, @file{out2.ts}, etc.
See also the @ref{segment} muxer, which provides a more generic and See also the @ref{segment} muxer, which provides a more generic and
flexible implementation of a segmenter, and can be used to perform HLS flexible implementation of a segmenter, and can be used to perform HLS
@@ -224,11 +220,6 @@ Set the segment length in seconds. Default value is 2.
Set the maximum number of playlist entries. If set to 0 the list file Set the maximum number of playlist entries. If set to 0 the list file
will contain all the segments. Default value is 5. will contain all the segments. Default value is 5.
@item hls_ts_options @var{options_list}
Set output format options using a :-separated list of key=value
parameters. Values containing @code{:} special characters must be
escaped.
@item hls_wrap @var{wrap} @item hls_wrap @var{wrap}
Set the number after which the segment filename number (the number Set the number after which the segment filename number (the number
specified in each segment file) wraps. If set to 0 the number will be specified in each segment file) wraps. If set to 0 the number will be
@@ -242,41 +233,10 @@ to @var{wrap}.
Start the playlist sequence number from @var{number}. Default value is Start the playlist sequence number from @var{number}. Default value is
0. 0.
@item hls_allow_cache @var{allowcache}
Explicitly set whether the client MAY (1) or MUST NOT (0) cache media segments.
@item hls_base_url @var{baseurl}
Append @var{baseurl} to every entry in the playlist.
Useful to generate playlists with absolute paths.
Note that the playlist sequence number must be unique for each segment Note that the playlist sequence number must be unique for each segment
and it is not to be confused with the segment filename sequence number and it is not to be confused with the segment filename sequence number
which can be cyclic, for example if the @option{wrap} option is which can be cyclic, for example if the @option{wrap} option is
specified. specified.
@item hls_segment_filename @var{filename}
Set the segment filename. Unless @code{hls_flags single_file} is set,
@var{filename} is used as a string format with the segment number:
@example
ffmpeg in.nut -hls_segment_filename 'file%03d.ts' out.m3u8
@end example
This example will produce the playlist, @file{out.m3u8}, and segment files:
@file{file000.ts}, @file{file001.ts}, @file{file002.ts}, etc.
@item hls_flags single_file
If this flag is set, the muxer will store all segments in a single MPEG-TS
file, and will use byte ranges in the playlist. HLS playlists generated with
this way will have the version number 4.
For example:
@example
ffmpeg -i in.nut -hls_flags single_file out.m3u8
@end example
Will produce the playlist, @file{out.m3u8}, and a single segment file,
@file{out.ts}.
@item hls_flags delete_segments
Segment files removed from the playlist are deleted after a period of time
equal to the duration of the segment plus the duration of the playlist.
@end table @end table
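The @code{hls_segment_filename} value is applied as a printf-style pattern expanded with the segment number. A minimal sketch of the naming; Python's @code{%}-formatting behaves the same way for @code{%d}/@code{%03d}:

```python
def segment_names(pattern, count, start=0):
    """Expand an hls_segment_filename-style pattern for `count` segments,
    numbering from `start` (as hls_init_time/start_number would)."""
    return [pattern % n for n in range(start, start + count)]

assert segment_names("file%03d.ts", 3) == ["file000.ts", "file001.ts", "file002.ts"]
# The default naming uses the playlist basename plus a bare number:
assert segment_names("out%d.ts", 3) == ["out0.ts", "out1.ts", "out2.ts"]
```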
@anchor{ico} @anchor{ico}
@@ -381,7 +341,8 @@ ffmpeg -f v4l2 -r 1 -i /dev/video0 -f image2 -strftime 1 "%Y-%m-%d_%H-%M-%S.jpg"
@table @option @table @option
@item start_number @item start_number
Start the sequence from the specified number. Default value is 0. Start the sequence from the specified number. Default value is 1. Must
be a non-negative number.
@item update @item update
If set to 1, the filename will always be interpreted as just a If set to 1, the filename will always be interpreted as just a
@@ -571,6 +532,7 @@ a short portion of the file. With this option set, there is no initial
mdat atom, and the moov atom only describes the tracks but has mdat atom, and the moov atom only describes the tracks but has
a zero duration. a zero duration.
Files written with this option set do not work in QuickTime.
This option is implicitly set when writing ismv (Smooth Streaming) files. This option is implicitly set when writing ismv (Smooth Streaming) files.
@item -movflags separate_moof @item -movflags separate_moof
Write a separate moof (movie fragment) atom for each track. Normally, Write a separate moof (movie fragment) atom for each track. Normally,
@@ -585,22 +547,6 @@ This operation can take a while, and will not work in various situations such
as fragmented output, thus it is not enabled by default. as fragmented output, thus it is not enabled by default.
@item -movflags rtphint @item -movflags rtphint
Add RTP hinting tracks to the output file. Add RTP hinting tracks to the output file.
@item -movflags disable_chpl
Disable Nero chapter markers (chpl atom). Normally, both Nero chapters
and a QuickTime chapter track are written to the file. With this option
set, only the QuickTime chapter track will be written. Nero chapters can
cause failures when the file is reprocessed with certain tagging programs, like
mp3Tag 2.61a and iTunes 11.3, most likely other versions are affected as well.
@item -movflags omit_tfhd_offset
Do not write any absolute base_data_offset in tfhd atoms. This avoids
tying fragments to absolute byte positions in the file/streams.
@item -movflags default_base_moof
Similarly to the omit_tfhd_offset, this flag avoids writing the
absolute base_data_offset field in tfhd atoms, but does so by using
the new default-base-is-moof flag instead. This flag is new from
14496-12:2012. This may make the fragments easier to parse in certain
circumstances (avoiding basing track fragment location calculations
on the implicit end of the previous track fragment).
@end table @end table
@subsection Example @subsection Example
@@ -613,38 +559,29 @@ ffmpeg -re @var{<normal input/transcoding options>} -movflags isml+frag_keyframe
@section mp3 @section mp3
The MP3 muxer writes a raw MP3 stream with the following optional features: The MP3 muxer writes a raw MP3 stream with an ID3v2 header at the beginning and
@itemize @bullet optionally an ID3v1 tag at the end. ID3v2.3 and ID3v2.4 are supported, the
@item @code{id3v2_version} option controls which one is used. Setting
An ID3v2 metadata header at the beginning (enabled by default). Versions 2.3 and @code{id3v2_version} to 0 will disable the ID3v2 header completely. The legacy
2.4 are supported, the @code{id3v2_version} private option controls which one is ID3v1 tag is not written by default, but may be enabled with the
used (3 or 4). Setting @code{id3v2_version} to 0 disables the ID3v2 header @code{write_id3v1} option.
completely.
The muxer supports writing attached pictures (APIC frames) to the ID3v2 header. The muxer may also write a Xing frame at the beginning, which contains the
The pictures are supplied to the muxer in form of a video stream with a single number of frames in the file. It is useful for computing duration of VBR files.
packet. There can be any number of those streams, each will correspond to a The Xing frame is written if the output stream is seekable and if the
single APIC frame. The stream metadata tags @var{title} and @var{comment} map @code{write_xing} option is set to 1 (the default).
to APIC @var{description} and @var{picture type} respectively. See
The muxer supports writing ID3v2 attached pictures (APIC frames). The pictures
are supplied to the muxer in form of a video stream with a single packet. There
can be any number of those streams, each will correspond to a single APIC frame.
The stream metadata tags @var{title} and @var{comment} map to APIC
@var{description} and @var{picture type} respectively. See
@url{http://id3.org/id3v2.4.0-frames} for allowed picture types. @url{http://id3.org/id3v2.4.0-frames} for allowed picture types.
Note that the APIC frames must be written at the beginning, so the muxer will Note that the APIC frames must be written at the beginning, so the muxer will
buffer the audio frames until it gets all the pictures. It is therefore advised buffer the audio frames until it gets all the pictures. It is therefore advised
to provide the pictures as soon as possible to avoid excessive buffering. to provide the pictures as soon as possible to avoid excessive buffering.
@item
A Xing/LAME frame right after the ID3v2 header (if present). It is enabled by
default, but will be written only if the output is seekable. The
@code{write_xing} private option can be used to disable it. The frame contains
various information that may be useful to the decoder, like the audio duration
or encoder delay.
@item
A legacy ID3v1 tag at the end of the file (disabled by default). It may be
enabled with the @code{write_id3v1} private option, but as its capabilities are
very limited, its usage is not recommended.
@end itemize
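The frame count carried in the Xing/LAME frame is what makes VBR duration computable without scanning the whole file. A minimal sketch, assuming MPEG-1 Layer III (1152 samples per frame) and a hypothetical frame count:

```python
def mp3_duration_seconds(frame_count, sample_rate, samples_per_frame=1152):
    """Duration derived from the Xing frame's total frame count.
    1152 samples per frame holds for MPEG-1 Layer III; other
    layers/versions use different values."""
    return frame_count * samples_per_frame / sample_rate

# A hypothetical file with 10000 frames at 44.1 kHz:
duration = mp3_duration_seconds(10000, 44100)
assert abs(duration - 261.22) < 0.01
```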
Examples: Examples:
Write an mp3 with an ID3v2.3 header and an ID3v1 footer: Write an mp3 with an ID3v2.3 header and an ID3v1 footer:
@@ -689,9 +626,6 @@ Set the transport_stream_id (default 0x0001). This identifies a
transponder in DVB. transponder in DVB.
@item -mpegts_service_id @var{number} @item -mpegts_service_id @var{number}
Set the service_id (default 0x0001) also known as program in DVB. Set the service_id (default 0x0001) also known as program in DVB.
@item -mpegts_service_type @var{number}
Set the program service_type (default @var{digital_tv}), see below
a list of pre defined values.
@item -mpegts_pmt_start_pid @var{number} @item -mpegts_pmt_start_pid @var{number}
Set the first PID for PMT (default 0x1000, max 0x1f00). Set the first PID for PMT (default 0x1000, max 0x1f00).
@item -mpegts_start_pid @var{number} @item -mpegts_start_pid @var{number}
@@ -699,10 +633,7 @@ Set the first PID for data packets (default 0x0100, max 0x0f00).
@item -mpegts_m2ts_mode @var{number} @item -mpegts_m2ts_mode @var{number}
Enable m2ts mode if set to 1. Default value is -1 which disables m2ts mode. Enable m2ts mode if set to 1. Default value is -1 which disables m2ts mode.
@item -muxrate @var{number} @item -muxrate @var{number}
Set a constant muxrate (default VBR). Set muxrate.
@item -pcr_period @var{number}
Override the default PCR retransmission time (default 20ms), ignored
if variable muxrate is selected.
@item -pes_payload_size @var{number} @item -pes_payload_size @var{number}
Set minimum PES packet payload in bytes. Set minimum PES packet payload in bytes.
@item -mpegts_flags @var{flags} @item -mpegts_flags @var{flags}
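To get a feel for what the removed @option{pcr_period} default (20 ms) means at a constant @option{muxrate}: a constant-rate stream emits a fixed number of 188-byte TS packets per period. This is back-of-envelope arithmetic only; the real scheduling lives in libavformat's mpegts muxer, and `packets_per_pcr_period` is a hypothetical helper:

```python
# How many 188-byte TS packets fit into one PCR retransmission period
# at a constant muxrate? Illustrative arithmetic, not the muxer's code.
def packets_per_pcr_period(muxrate_bps, pcr_period_ms=20):
    ts_packet_bits = 188 * 8  # one transport stream packet
    return (muxrate_bps * pcr_period_ms) // (1000 * ts_packet_bits)
```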
@@ -726,27 +657,6 @@ ffmpeg -i source2.ts -codec copy -f mpegts -tables_version 1 udp://1.1.1.1:1111
 @end example
 @end table
-Option mpegts_service_type accepts the following values:
-@table @option
-@item hex_value
-Any hexdecimal value between 0x01 to 0xff as defined in ETSI 300 468.
-@item digital_tv
-Digital TV service.
-@item digital_radio
-Digital Radio service.
-@item teletext
-Teletext service.
-@item advanced_codec_digital_radio
-Advanced Codec Digital Radio service.
-@item mpeg2_digital_hdtv
-MPEG2 Digital HDTV service.
-@item advanced_codec_digital_sdtv
-Advanced Codec Digital SDTV service.
-@item advanced_codec_digital_hdtv
-Advanced Codec Digital HDTV service.
-@end table
 Option mpegts_flags may take a set of such flags:
 @table @option
@@ -792,30 +702,6 @@ Alternatively you can write the command as:
 ffmpeg -benchmark -i INPUT -f null -
 @end example
-@section nut
-@table @option
-@item -syncpoints @var{flags}
-Change the syncpoint usage in nut:
-@table @option
-@item @var{default} use the normal low-overhead seeking aids.
-@item @var{none} do not use the syncpoints at all, reducing the overhead but making the stream non-seekable;
-Use of this option is not recommended, as the resulting files are very damage
-sensitive and seeking is not possible. Also in general the overhead from
-syncpoints is negligible. Note, -@code{write_index} 0 can be used to disable
-all growing data tables, allowing to mux endless streams with limited memory
-and without these disadvantages.
-@item @var{timestamped} extend the syncpoint with a wallclock field.
-@end table
-The @var{none} and @var{timestamped} flags are experimental.
-@item -write_index @var{bool}
-Write index at the end, the default is to write an index.
-@end table
-@example
-ffmpeg -i INPUT -f_strict experimental -syncpoints none - | processor
-@end example
 @section ogg
 Ogg container muxer.
@@ -829,11 +715,6 @@ is 1 second. A value of 0 will fill all segments, making pages as large as
 possible. A value of 1 will effectively use 1 packet-per-page in most
 situations, giving a small seek granularity at the cost of additional container
 overhead.
-@item -serial_offset @var{value}
-Serial value from which to set the streams serial number.
-Setting it to different and sufficiently large values ensures that the produced
-ogg files can be safely chained.
 @end table
 @anchor{segment}
@@ -842,9 +723,8 @@ ogg files can be safely chained.
 Basic stream segmenter.
 This muxer outputs streams to a number of separate files of nearly
-fixed duration. Output filename pattern can be set in a fashion
-similar to @ref{image2}, or by using a @code{strftime} template if
-the @option{strftime} option is enabled.
+fixed duration. Output filename pattern can be set in a fashion similar to
+@ref{image2}.
 @code{stream_segment} is a variant of the muxer used to write to
 streaming output formats, i.e. which do not require global headers,
@@ -878,7 +758,7 @@ The segment muxer supports the following options:
 @table @option
 @item reference_stream @var{specifier}
 Set the reference stream, as specified by the string @var{specifier}.
-If @var{specifier} is set to @code{auto}, the reference is chosen
+If @var{specifier} is set to @code{auto}, the reference is choosen
 automatically. Otherwise it must be a stream specifier (see the ``Stream
 specifiers'' chapter in the ffmpeg manual) which specifies the
 reference stream. The default value is @code{auto}.
@@ -887,11 +767,6 @@ reference stream. The default value is @code{auto}.
 Override the inner container format, by default it is guessed by the filename
 extension.
-@item segment_format_options @var{options_list}
-Set output format options using a :-separated list of key=value
-parameters. Values containing the @code{:} special character must be
-escaped.
 @item segment_list @var{name}
 Generate also a listfile named @var{name}. If not specified no
 listfile is generated.
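The removed @option{segment_format_options} value is a @code{:}-separated list of @code{key=value} pairs in which a literal @code{:} must be escaped. A sketch of such a parser (hypothetical helper; ffmpeg's actual option parsing is done by libavutil, not this code):

```python
# Parse "key=value:key=value" lists where "\:" escapes a literal colon.
# Illustrative sketch only, not libavutil's parser.
def parse_kv_list(s):
    pairs, cur, it = [], "", iter(s)
    for ch in it:
        if ch == "\\":
            cur += next(it, "")   # escaped character, taken verbatim
        elif ch == ":":
            pairs.append(cur)
            cur = ""
        else:
            cur += ch
    pairs.append(cur)
    return dict(p.split("=", 1) for p in pairs if p)
```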
@@ -908,21 +783,17 @@ Allow caching (only affects M3U8 list files).
 Allow live-friendly file generation.
 @end table
-@item segment_list_type @var{type}
-Select the listing format.
-@table @option
-@item @var{flat} use a simple flat list of entries.
-@item @var{hls} use a m3u8-like structure.
-@end table
 @item segment_list_size @var{size}
-Update the list file so that it contains at most @var{size}
+Update the list file so that it contains at most the last @var{size}
 segments. If 0 the list file will contain all the segments. Default
 value is 0.
 @item segment_list_entry_prefix @var{prefix}
-Prepend @var{prefix} to each entry. Useful to generate absolute paths.
-By default no prefix is applied.
+Set @var{prefix} to prepend to the name of each entry filename. By
+default no prefix is applied.
+@item segment_list_type @var{type}
+Specify the format for the segment list file.
 The following values are recognized:
 @table @samp
@@ -973,16 +844,6 @@ Note that splitting may not be accurate, unless you force the
 reference stream key-frames at the given time. See the introductory
 notice and the examples below.
-@item segment_atclocktime @var{1|0}
-If set to "1" split at regular clock time intervals starting from 00:00
-o'clock. The @var{time} value specified in @option{segment_time} is
-used for setting the length of the splitting interval.
-For example with @option{segment_time} set to "900" this makes it possible
-to create files at 12:00 o'clock, 12:15, 12:30, etc.
-Default value is "0".
 @item segment_time_delta @var{delta}
 Specify the accuracy time when selecting the start time for a
 segment, expressed as a duration specification. Default value is "0".
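The clock-time splitting rule removed above is simple modular arithmetic: with @option{segment_time} 900, boundaries fall on 900-second wall-clock multiples (12:00, 12:15, 12:30, ...). A sketch of that rule (`next_clock_boundary` is a hypothetical helper, not the muxer's own code):

```python
# Next clock-aligned split point, as described for segment_atclocktime.
# now_s is seconds since midnight; illustrative only.
def next_clock_boundary(now_s, segment_time_s=900):
    return ((now_s // segment_time_s) + 1) * segment_time_s
```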
@@ -1024,12 +885,6 @@ Wrap around segment index once it reaches @var{limit}.
 @item segment_start_number @var{number}
 Set the sequence number of the first segment. Defaults to @code{0}.
-@item strftime @var{1|0}
-Use the @code{strftime} function to define the name of the new
-segments to write. If this is selected, the output segment name must
-contain a @code{strftime} function template. Default value is
-@code{0}.
 @item reset_timestamps @var{1|0}
 Reset timestamps at the begin of each segment, so that each segment
 will start with near-zero timestamps. It is meant to ease the playback
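The removed @option{strftime} option expands a @code{strftime} template into each new segment's filename. The expansion it describes can be sketched with Python's standard @code{time.strftime} (`segment_name` and the template are hypothetical, shown with UTC for reproducibility):

```python
import time

# Expand a strftime template into a segment filename, as the removed
# strftime option does. Hypothetical helper; gmtime used for determinism.
def segment_name(template="out-%Y%m%d-%H%M%S.ts", t=0):
    return time.strftime(template, time.gmtime(t))
```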
@@ -1045,7 +900,7 @@ argument must be a time duration specification, and defaults to 0.
 @itemize
 @item
-Remux the content of file @file{in.mkv} to a list of segments
+To remux the content of file @file{in.mkv} to a list of segments
 @file{out-000.nut}, @file{out-001.nut}, etc., and write the list of
 generated segments to @file{out.list}:
 @example
@@ -1053,20 +908,14 @@ ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.list out%03d.nu
 @end example
 @item
-Segment input and set output format options for the output segments:
-@example
-ffmpeg -i in.mkv -f segment -segment_time 10 -segment_format_options movflags=+faststart out%03d.mp4
-@end example
-@item
-Segment the input file according to the split points specified by the
-@var{segment_times} option:
+As the example above, but segment the input file according to the split
+points specified by the @var{segment_times} option:
 @example
 ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.csv -segment_times 1,2,3,5,8,13,21 out%03d.nut
 @end example
 @item
-Use the @command{ffmpeg} @option{force_key_frames}
+As the example above, but use the @command{ffmpeg} @option{force_key_frames}
 option to force key frames in the input at the specified location, together
 with the segment option @option{segment_time_delta} to account for
 possible roundings operated when setting key frame times.
@@ -1085,7 +934,7 @@ ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.csv -segment_fr
 @end example
 @item
-Convert the @file{in.mkv} to TS segments using the @code{libx264}
+To convert the @file{in.mkv} to TS segments using the @code{libx264}
 and @code{libfaac} encoders:
 @example
 ffmpeg -i in.mkv -map 0 -codec:v libx264 -codec:a libfaac -f ssegment -segment_list out.list out%03d.ts
@@ -1100,28 +949,6 @@ ffmpeg -re -i in.mkv -codec copy -map 0 -f segment -segment_list playlist.m3u8 \
 @end example
 @end itemize
-@section smoothstreaming
-Smooth Streaming muxer generates a set of files (Manifest, chunks) suitable for serving with conventional web server.
-@table @option
-@item window_size
-Specify the number of fragments kept in the manifest. Default 0 (keep all).
-@item extra_window_size
-Specify the number of fragments kept outside of the manifest before removing from disk. Default 5.
-@item lookahead_count
-Specify the number of lookahead fragments. Default 2.
-@item min_frag_duration
-Specify the minimum fragment duration (in microseconds). Default 5000000.
-@item remove_at_exit
-Specify whether to remove all fragments when finished. Default 0 (do not remove).
-@end table
 @section tee
 The tee muxer can be used to write the same data to several files or any
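The smoothstreaming options removed in the hunk above describe a sliding window: @option{window_size} fragments stay in the manifest, and up to @option{extra_window_size} older ones remain on disk before deletion. That bookkeeping can be sketched as follows (`prune` is a hypothetical helper, not the muxer's implementation):

```python
# Sliding-window pruning as described by window_size / extra_window_size.
# window_size == 0 means "keep all" (the documented default).
def prune(fragments, window_size, extra_window_size):
    if not window_size:
        return list(fragments), list(fragments)
    manifest = fragments[-window_size:]
    on_disk = fragments[-(window_size + extra_window_size):]
    return manifest, on_disk
```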
@@ -1159,7 +986,7 @@ It is possible to specify to which streams a given bitstream filter
 applies, by appending a stream specifier to the option separated by
 @code{/}. @var{spec} must be a stream specifier (see @ref{Format
 stream specifiers}). If the stream specifier is not specified, the
-bitstream filters will be applied to all streams in the output.
+bistream filters will be applied to all streams in the output.
 Several bitstream filters can be specified, separated by ",".
@@ -1206,34 +1033,4 @@ Note: some codecs may need different options depending on the output format;
 the auto-detection of this can not work with the tee muxer. The main example
 is the @option{global_header} flag.
-@section webm_dash_manifest
-WebM DASH Manifest muxer.
-This muxer implements the WebM DASH Manifest specification to generate the DASH manifest XML.
-@subsection Options
-This muxer supports the following options:
-@table @option
-@item adaptation_sets
-This option has the following syntax: "id=x,streams=a,b,c id=y,streams=d,e" where x and y are the
-unique identifiers of the adaptation sets and a,b,c,d and e are the indices of the corresponding
-audio and video streams. Any number of adaptation sets can be added using this option.
-@end table
-@subsection Example
-@example
-ffmpeg -f webm_dash_manifest -i video1.webm \
-       -f webm_dash_manifest -i video2.webm \
-       -f webm_dash_manifest -i audio1.webm \
-       -f webm_dash_manifest -i audio2.webm \
-       -map 0 -map 1 -map 2 -map 3 \
-       -c copy \
-       -f webm_dash_manifest \
-       -adaptation_sets "id=0,streams=0,1 id=1,streams=2,3" \
-       manifest.xml
-@end example
 @c man end MUXERS
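The @option{adaptation_sets} syntax removed above ("id=x,streams=a,b,c id=y,streams=d,e") splits into space-separated groups, each pairing a set id with stream indices. A sketch of parsing it (illustrative only; FFmpeg's own parser is in C):

```python
# Parse the adaptation_sets option string into {set_id: [stream indices]}.
# Hypothetical helper mirroring the documented syntax, not FFmpeg code.
def parse_adaptation_sets(value):
    sets = {}
    for group in value.split(" "):
        # the first comma separates "id=x" from "streams=a,b,c"
        fields = dict(item.split("=", 1) for item in group.split(",", 1))
        sets[int(fields["id"])] = [int(s) for s in fields["streams"].split(",")]
    return sets
```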


@@ -1,5 +1,4 @@
 \input texinfo @c -*- texinfo -*-
-@documentencoding UTF-8
 @settitle NUT
@@ -22,27 +21,6 @@ The official nut specification is at svn://svn.mplayerhq.hu/nut
 In case of any differences between this text and the official specification,
 the official specification shall prevail.
-@chapter Modes
-NUT has some variants signaled by using the flags field in its main header.
-@multitable @columnfractions .4 .4
-@item BROADCAST @tab Extend the syncpoint to report the sender wallclock
-@item PIPE @tab Omit completely the syncpoint
-@end multitable
-@section BROADCAST
-The BROADCAST variant provides a secondary time reference to facilitate
-detecting endpoint latency and network delays.
-It assumes all the endpoint clocks are syncronized.
-To be used in real-time scenarios.
-@section PIPE
-The PIPE variant assumes NUT is used as non-seekable intermediate container,
-by not using syncpoint removes unneeded overhead and reduces the overall
-memory usage.
 @chapter Container-specific codec tags
 @section Generic raw YUVA formats


@@ -79,6 +79,9 @@ qpel{8,16}_mc??_old_c / *pixels{8,16}_l4
 Just used to work around a bug in an old libavcodec encoder version.
 Don't optimize them.
+tpel_mc_func {put,avg}_tpel_pixels_tab
+Used only for SVQ3, so only optimize them if you need fast SVQ3 decoding.
 add_bytes/diff_bytes
 For huffyuv only, optimize if you want a faster ffhuffyuv codec.
@@ -136,6 +139,9 @@ dct_unquantize_mpeg2
 dct_unquantize_h263
 Used in MPEG-4/H.263 en/decoding.
+FIXME remaining functions?
+BTW, most of these functions are in dsputil.c/.h, some are in mpegvideo.c/.h.
 Alignment:
@@ -191,11 +197,6 @@ __asm__() block.
 Use external asm (nasm/yasm) or inline asm (__asm__()), do not use intrinsics.
 The latter requires a good optimizing compiler which gcc is not.
-When debugging a x86 external asm compilation issue, if lost in the macro
-expansions, add DBG=1 to your make command-line: the input file will be
-preprocessed, stripped of the debug/empty lines, then compiled, showing the
-actual lines causing issues.
 Inline asm vs. external asm
 ---------------------------
 Both inline asm (__asm__("..") in a .c file, handled by a compiler such as gcc)
@@ -267,6 +268,17 @@ CELL/SPU:
 http://www-01.ibm.com/chips/techlib/techlib.nsf/techdocs/30B3520C93F437AB87257060006FFE5E/$file/Language_Extensions_for_CBEA_2.4.pdf
 http://www-01.ibm.com/chips/techlib/techlib.nsf/techdocs/9F820A5FFA3ECE8C8725716A0062585F/$file/CBE_Handbook_v1.1_24APR2007_pub.pdf
+SPARC-specific:
+---------------
+SPARC Joint Programming Specification (JPS1): Commonality
+http://www.fujitsu.com/downloads/PRMPWR/JPS1-R1.0.4-Common-pub.pdf
+UltraSPARC III Processor User's Manual (contains instruction timings)
+http://www.sun.com/processors/manuals/USIIIv2.pdf
+VIS Whitepaper (contains optimization guidelines)
+http://www.sun.com/processors/vis/download/vis/vis_whitepaper.pdf
 GCC asm links:
 --------------
 official doc but quite ugly


@@ -13,8 +13,8 @@ You can disable all the output devices using the configure option
 option "--enable-outdev=@var{OUTDEV}", or you can disable a particular
 input device using the option "--disable-outdev=@var{OUTDEV}".
-The option "-devices" of the ff* tools will display the list of
-enabled output devices.
+The option "-formats" of the ff* tools will display the list of
+enabled output devices (amongst the muxers).
 A description of the currently available output devices follows.
@@ -42,7 +42,7 @@ ffmpeg -i INPUT -f alsa hw:1,7
 CACA output device.
-This output device allows one to show a video stream in CACA window.
+This output device allows to show a video stream in CACA window.
 Only one CACA window is allowed per application, so you can
 have only one instance of this output device in an application.
@@ -216,15 +216,15 @@ OpenGL output device.
 To enable this output device you need to configure FFmpeg with @code{--enable-opengl}.
-This output device allows one to render to OpenGL context.
+Device allows to render to OpenGL context.
 Context may be provided by application or default SDL window is created.
 When device renders to external context, application must implement handlers for following messages:
-@code{AV_DEV_TO_APP_CREATE_WINDOW_BUFFER} - create OpenGL context on current thread.
-@code{AV_DEV_TO_APP_PREPARE_WINDOW_BUFFER} - make OpenGL context current.
-@code{AV_DEV_TO_APP_DISPLAY_WINDOW_BUFFER} - swap buffers.
-@code{AV_DEV_TO_APP_DESTROY_WINDOW_BUFFER} - destroy OpenGL context.
-Application is also required to inform a device about current resolution by sending @code{AV_APP_TO_DEV_WINDOW_SIZE} message.
+@code{AV_CTL_MESSAGE_CREATE_WINDOW_BUFFER} - create OpenGL context on current thread.
+@code{AV_CTL_MESSAGE_PREPARE_WINDOW_BUFFER} - make OpenGL context current.
+@code{AV_CTL_MESSAGE_DISPLAY_WINDOW_BUFFER} - swap buffers.
+@code{AV_CTL_MESSAGE_DESTROY_WINDOW_BUFFER} - destroy OpenGL context.
+Application is also required to inform a device about current resolution by sending @code{AV_DEVICE_WINDOW_RESIZED} message.
 @subsection Options
 @table @option
@@ -237,10 +237,6 @@ Application must provide OpenGL context and both @code{window_size_cb} and @code
 @item window_title
 Set the SDL window title, if not specified default to the filename specified for the output device.
 Ignored when @option{no_window} is set.
-@item window_size
-Set preferred window size, can be a string of the form widthxheight or a video size abbreviation.
-If not specified it defaults to the size of the input video, downscaled according to the aspect ratio.
-Mostly usable when @option{no_window} is not set.
 @end table
@@ -294,20 +290,6 @@ When both options are provided then the highest value is used
 are set to 0 (which is default), the device will use the default
 PulseAudio duration value. By default PulseAudio set buffer duration
 to around 2 seconds.
-@item prebuf
-Specify pre-buffering size in bytes. The server does not start with
-playback before at least @option{prebuf} bytes are available in the
-buffer. By default this option is initialized to the same value as
-@option{buffer_size} or @option{buffer_duration} (whichever is bigger).
-@item minreq
-Specify minimum request size in bytes. The server does not request less
-than @option{minreq} bytes from the client, instead waits until the buffer
-is free enough to request more bytes at once. It is recommended to not set
-this option, which will initialize this to a value that is deemed sensible
-by the server.
 @end table
 @subsection Examples
@@ -320,7 +302,7 @@ ffmpeg -i INPUT -f pulse "stream name"
 SDL (Simple DirectMedia Layer) output device.
-This output device allows one to show a video stream in an SDL
+This output device allows to show a video stream in an SDL
 window. Only one SDL window is allowed per application, so you can
 have only one instance of this output device in an application.
@@ -379,7 +361,7 @@ sndio audio output device.
 XV (XVideo) output device.
-This output device allows one to show a video stream in a X Window System
+This output device allows to show a video stream in a X Window System
 window.
 @subsection Options
@@ -406,26 +388,19 @@ For example, @code{dual-headed:0.1} would specify screen 1 of display
 Check the X11 specification for more detailed information about the
 display name format.
-@item window_id
-When set to non-zero value then device doesn't create new window,
-but uses existing one with provided @var{window_id}. By default
-this options is set to zero and device creates its own window.
 @item window_size
 Set the created window size, can be a string of the form
 @var{width}x@var{height} or a video size abbreviation. If not
 specified it defaults to the size of the input video.
-Ignored when @var{window_id} is set.
 @item window_x
 @item window_y
 Set the X and Y window offsets for the created window. They are both
 set to 0 by default. The values may be ignored by the window manager.
-Ignored when @var{window_id} is set.
 @item window_title
 Set the window title, if not specified default to the filename
-specified for the output device. Ignored when @var{window_id} is set.
+specified for the output device.
 @end table
 For more information about XVideo see @url{http://www.x.org/}.


@@ -1,5 +1,4 @@
 \input texinfo @c -*- texinfo -*-
-@documentencoding UTF-8
 @settitle Platform Specific Information
 @titlepage
@@ -25,20 +24,6 @@ If not, then you should install a different compiler that has no
 hard-coded path to gas. In the worst case pass @code{--disable-asm}
 to configure.
-@section Advanced linking configuration
-If you compiled FFmpeg libraries statically and you want to use them to
-build your own shared library, you may need to force PIC support (with
-@code{--enable-pic} during FFmpeg configure) and add the following option
-to your project LDFLAGS:
-@example
--Wl,-Bsymbolic
-@end example
-If your target platform requires position independent binaries, you should
-pass the correct linking flag (e.g. @code{-pie}) to @code{--extra-ldexeflags}.
 @section BSD
 BSD make will not build FFmpeg, you need to install and use GNU Make
@@ -66,15 +51,14 @@ The toolchain provided with Xcode is sufficient to build the basic
 unacelerated code.
 Mac OS X on PowerPC or ARM (iPhone) requires a preprocessor from
-@url{https://github.com/FFmpeg/gas-preprocessor} or
-@url{https://github.com/yuvi/gas-preprocessor}(currently outdated) to build the optimized
-assembly functions. Put the Perl script somewhere
+@url{http://github.com/yuvi/gas-preprocessor} to build the optimized
+assembler functions. Just download the Perl script and put it somewhere
 in your PATH, FFmpeg's configure will pick it up automatically.
 Mac OS X on amd64 and x86 requires @command{yasm} to build most of the
-optimized assembly functions. @uref{http://www.finkproject.org/, Fink},
+optimized assembler functions. @uref{http://www.finkproject.org/, Fink},
 @uref{http://www.gentoo.org/proj/en/gentoo-alt/prefix/bootstrap-macos.xml, Gentoo Prefix},
-@uref{https://mxcl.github.com/homebrew/, Homebrew}
+@uref{http://mxcl.github.com/homebrew/, Homebrew}
 or @uref{http://www.macports.org, MacPorts} can easily provide it.
@@ -97,9 +81,9 @@ the FFmpeg Windows Help Forum at @url{http://ffmpeg.zeranoe.com/forum/}.
 @section Native Windows compilation using MinGW or MinGW-w64
-FFmpeg can be built to run natively on Windows using the MinGW-w64
-toolchain. Install the latest versions of MSYS2 and MinGW-w64 from
-@url{http://msys2.github.io/} and/or @url{http://mingw-w64.sourceforge.net/}.
+FFmpeg can be built to run natively on Windows using the MinGW or MinGW-w64
+toolchains. Install the latest versions of MSYS and MinGW or MinGW-w64 from
+@url{http://www.mingw.org/} or @url{http://mingw-w64.sourceforge.net/}.
 You can find detailed installation instructions in the download section and
 the FAQ.
@@ -107,7 +91,7 @@ Notes:
 @itemize
-@item Building natively using MSYS2 can be sped up by disabling implicit rules
+@item Building natively using MSYS can be sped up by disabling implicit rules
 in the Makefile by calling @code{make -r} instead of plain @code{make}. This
 speed up is close to non-existent for normal one-off builds and is only
 noticeable when running make for a second time (for example during
@@ -134,12 +118,13 @@ You will need the following prerequisites:
 (if using MSVC 2012 or earlier)
 @item @uref{http://code.google.com/p/msinttypes/, msinttypes}
 (if using MSVC 2012 or earlier)
-@item @uref{http://msys2.github.io/, MSYS2}
+@item @uref{http://www.mingw.org/, MSYS}
 @item @uref{http://yasm.tortall.net/, YASM}
-(Also available via MSYS2's package manager.)
+@item @uref{http://gnuwin32.sourceforge.net/packages/bc.htm, bc for Windows} if
+you want to run @uref{fate.html, FATE}.
 @end itemize
-To set up a proper environment in MSYS2, you need to run @code{msys_shell.bat} from
+To set up a proper environment in MSYS, you need to run @code{msys.bat} from
 the Visual Studio or Intel Compiler command prompt.
 Place @code{yasm.exe} somewhere in your @code{PATH}. If using MSVC 2012 or
@@ -278,12 +263,12 @@ llrint() in its C library.
 Install your Cygwin with all the "Base" packages, plus the
 following "Devel" ones:
 @example
-binutils, gcc4-core, make, git, mingw-runtime, texinfo
+binutils, gcc4-core, make, git, mingw-runtime, texi2html
 @end example
 In order to run FATE you will also need the following "Utils" packages:
 @example
-diffutils
+bc, diffutils
 @end example
 If you want to build FFmpeg with additional libraries, download Cygwin


@@ -26,10 +26,6 @@
 #include <string.h>
 #include <float.h>
-// print_options is build for the host, os_support.h isn't needed and is setup
-// for the target. without this build breaks on mingw
-#define AVFORMAT_OS_SUPPORT_H
 #include "libavformat/avformat.h"
 #include "libavformat/options_table.h"
 #include "libavcodec/avcodec.h"


@@ -166,7 +166,7 @@ This protocol accepts the following options.
 @table @option
 @item timeout
-Set timeout in microseconds of socket I/O operations used by the underlying low level
+Set timeout of socket I/O operations used by the underlying low level
 operation. By default it is set to -1, which means that the timeout is
 not specified.
@@ -213,7 +213,7 @@ m3u8 files.
 HTTP (Hyper Text Transfer Protocol).
-This protocol accepts the following options:
+This protocol accepts the following options.
 @table @option
 @item seekable
@@ -223,60 +223,51 @@ if set to -1 it will try to autodetect if it is seekable. Default
value is -1. value is -1.
@item chunked_post @item chunked_post
If set to 1 use chunked Transfer-Encoding for posts, default is 1. If set to 1 use chunked transfer-encoding for posts, default is 1.
@item content_type
Set a specific content type for the POST messages.
@item headers
Set custom HTTP headers, can override built in default headers. The
value must be a string encoding the headers.
@item multiple_requests
Use persistent connections if set to 1, default is 0.
@item post_data
Set custom HTTP post data.
@item user-agent
@item user_agent
Override the User-Agent header. If not specified the protocol will use a
string describing the libavformat build. ("Lavf/<version>")
@item timeout
Set timeout in microseconds of socket I/O operations used by the underlying low level
operation. By default it is set to -1, which means that the timeout is
not specified.
@item mime_type
Export the MIME type.
@item icy
If set to 1 request ICY (SHOUTcast) metadata from the server. If the server
supports this, the metadata has to be retrieved by the application by reading
the @option{icy_metadata_headers} and @option{icy_metadata_packet} options.
The default is 1.
@item icy_metadata_headers
If the server supports ICY metadata, this contains the ICY-specific HTTP reply
headers, separated by newline characters.
@item icy_metadata_packet
If the server supports ICY metadata, and @option{icy} was set to 1, this
contains the last non-empty metadata packet sent by the server. It should be
polled in regular intervals by applications interested in mid-stream metadata
updates.
@item cookies
Set the cookies to be sent in future requests. The format of each cookie is the
same as the value of a Set-Cookie HTTP response field. Multiple cookies can be
delimited by a newline character.
@item offset
Set initial byte offset.
@item end_offset
Try to limit the request to bytes preceding this offset.
@end table
@subsection HTTP Cookies
@@ -293,50 +284,6 @@ The required syntax to play a stream specifying a cookie is:
ffplay -cookies "nlqptid=nltid=tsn; path=/; domain=somedomain.com;" http://somedomain.com/somestream.m3u8
@end example
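The cookie syntax above lends itself to a tiny helper. This is an illustrative sketch, not FFmpeg code; @code{build_cookies_option} is a made-up name, and the joining rule (one Set-Cookie style string per line) comes straight from the option description above.

```python
# Sketch: build the value for the -cookies option from several
# Set-Cookie style strings. Per the docs above, each cookie uses the
# Set-Cookie response-field syntax and multiple cookies are delimited
# by a newline character.
def build_cookies_option(set_cookie_values):
    """Join Set-Cookie style strings into one -cookies option value."""
    return "\n".join(v.strip() for v in set_cookie_values)

opt = build_cookies_option([
    "nlqptid=nltid=tsn; path=/; domain=somedomain.com;",
    "session=abc123; path=/; domain=somedomain.com;",
])
# Pass `opt` as a single quoted argument:
#   ffplay -cookies "<opt>" http://somedomain.com/somestream.m3u8
```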
@section Icecast
Icecast protocol (stream to Icecast servers)
This protocol accepts the following options:
@table @option
@item ice_genre
Set the stream genre.
@item ice_name
Set the stream name.
@item ice_description
Set the stream description.
@item ice_url
Set the stream website URL.
@item ice_public
Set if the stream should be public.
The default is 0 (not public).
@item user_agent
Override the User-Agent header. If not specified a string of the form
"Lavf/<version>" will be used.
@item password
Set the Icecast mountpoint password.
@item content_type
Set the stream content type. This must be set if it is different from
audio/mpeg.
@item legacy_icecast
This enables support for Icecast versions < 2.4.0, which do not support the
HTTP PUT method but only the SOURCE method.
@end table
@example
icecast://[@var{username}[:@var{password}]@@]@var{server}:@var{port}/@var{mountpoint}
@end example
@section mmst
MMS (Microsoft Media Server) protocol over TCP.
@@ -581,35 +528,6 @@ The Real-Time Messaging Protocol tunneled through HTTPS (RTMPTS) is used
for streaming multimedia content within HTTPS requests to traverse
firewalls.
@section libsmbclient
libsmbclient permits one to manipulate CIFS/SMB network resources.
The following syntax is required.
@example
smb://[[domain:]user[:password@@]]server[/share[/path[/file]]]
@end example
This protocol accepts the following options.
@table @option
@item timeout
Set timeout in milliseconds of socket I/O operations used by the underlying
low level operation. By default it is set to -1, which means that the timeout
is not specified.
@item truncate
Truncate existing files on write, if set to 1. A value of 0 prevents
truncating. Default value is 1.
@item workgroup
Set the workgroup used for making connections. By default workgroup is not specified.
@end table
For more information see: @url{http://www.samba.org/}.
@section libssh
Secure File Transfer Protocol via libssh
@@ -750,7 +668,7 @@ port will be used for the local RTP and RTCP ports.
@item
If @option{localrtcpport} (the local RTCP port) is not set it will be
set to the local RTP port value plus 1.
@end enumerate
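The port-defaulting rules enumerated above can be sketched as follows. This is an illustration of the documented behavior, not libavformat's actual code, and @code{effective_local_ports} is a hypothetical name.

```python
# Sketch of the documented defaulting rule: if localrtcpport (the local
# RTCP port) is not set, it becomes the local RTP port value plus 1.
def effective_local_ports(localrtpport, localrtcpport=None):
    if localrtcpport is None:
        localrtcpport = localrtpport + 1
    return localrtpport, localrtcpport
```

For example, with only the RTP port given, @code{effective_local_ports(5000)} yields the pair (5000, 5001).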
@section rtsp
@@ -764,7 +682,7 @@ data transferred over RDT).
The muxer can be used to send a stream using RTSP ANNOUNCE to a server
supporting it (currently Darwin Streaming Server and Mischa Spiegelmock's
@uref{https://github.com/revmischa/rtsp-server, RTSP server}).
The required syntax for an RTSP URL is:
@example
@@ -783,7 +701,7 @@ Do not start playing the stream immediately if set to 1. Default value
is 0.
@item rtsp_transport
Set RTSP transport protocols.
It accepts the following values:
@table @samp
@@ -815,8 +733,6 @@ The following values are accepted:
Accept packets only from negotiated peer address and port.
@item listen
Act as a server, listening for an incoming connection.
@item prefer_tcp
Try TCP for RTP transport first, if TCP is available as RTSP RTP transport.
@end table
Default value is @samp{none}.
@@ -842,17 +758,17 @@ Set maximum local UDP port. Default value is 65000.
@item timeout
Set maximum timeout (in seconds) to wait for incoming connections.
A value of -1 means infinite (default). This option implies the
@option{rtsp_flags} set to @samp{listen}.
@item reorder_queue_size
Set number of packets to buffer for handling of reordered packets.
@item stimeout
Set socket TCP I/O timeout in microseconds.
@item user-agent
Override User-Agent header. If not specified, it defaults to the
libavformat identifier string.
@end table
@@ -1033,33 +949,9 @@ this binary block are used as master key, the following 14 bytes are
used as master salt.
@end table
@section subfile
Virtually extract a segment of a file or another stream.
The underlying stream must be seekable.
Accepted options:
@table @option
@item start
Start offset of the extracted segment, in bytes.
@item end
End offset of the extracted segment, in bytes.
@end table
Examples:
Extract a chapter from a DVD VOB file (start and end sectors obtained
externally and multiplied by 2048):
@example
subfile,,start,153391104,end,268142592,,:/media/dvd/VIDEO_TS/VTS_08_1.VOB
@end example
Play an AVI file directly from a TAR archive:
@example
subfile,,start,183241728,end,366490624,,:archive.tar
@end example
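The DVD example above obtains start and end sectors externally and multiplies by 2048 (the DVD sector size). That arithmetic can be sketched as a small helper; @code{subfile_spec} is a hypothetical name, not part of FFmpeg.

```python
# Sketch: derive a subfile option string from DVD sector numbers, as in
# the VOB example above. DVD sectors are 2048 bytes each.
SECTOR_SIZE = 2048

def subfile_spec(start_sector, end_sector, path):
    start = start_sector * SECTOR_SIZE
    end = end_sector * SECTOR_SIZE
    return f"subfile,,start,{start},end,{end},,:{path}"

spec = subfile_spec(74898, 130929, "/media/dvd/VIDEO_TS/VTS_08_1.VOB")
# → "subfile,,start,153391104,end,268142592,,:/media/dvd/VIDEO_TS/VTS_08_1.VOB"
```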
@section tcp
Transmission Control Protocol.
The required syntax for a TCP URL is:
@example
@@ -1081,8 +973,8 @@ Set raise error timeout, expressed in microseconds.
This option is only relevant in read mode: if no data arrived in more
than this time interval, raise error.
@item listen_timeout=@var{milliseconds}
Set listen timeout, expressed in milliseconds.
@end table @end table
The following example shows how to set up a listening TCP connection
@@ -1165,7 +1057,7 @@ udp://@var{hostname}:@var{port}[?@var{options}]
@var{options} contains a list of &-separated options of the form @var{key}=@var{val}.
In case threading is enabled on the system, a circular buffer is used
to store the incoming data, which allows one to reduce loss of data due to
UDP socket buffer overruns. The @var{fifo_size} and
@var{overrun_nonfatal} options are related to this buffer.
@@ -1173,9 +1065,8 @@ The list of supported options follows.
@table @option
@item buffer_size=@var{size}
Set the UDP maximum socket buffer size in bytes. This is used to set either
the receive or send buffer size, depending on what the socket is used for.
Default is 64KB. See also @var{fifo_size}.
@item localport=@var{port}
Override the local UDP port to bind with.
@@ -1226,12 +1117,6 @@ Set raise error timeout, expressed in microseconds.
This option is only relevant in read mode: if no data arrived in more
than this time interval, raise error.
@item broadcast=@var{1|0}
Explicitly allow or disallow UDP broadcasting.
Note that broadcasting may not work properly on networks having
a broadcast storm protection.
@end table
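When choosing @option{buffer_size} (and @option{fifo_size}) for a given stream, a back-of-envelope calculation helps. This is a generic illustration, not a formula from FFmpeg: to absorb a given amount of receiver-side jitter without overruns, the buffer must hold at least bitrate/8 * latency bytes.

```python
# Back-of-envelope sizing for a UDP receive buffer (an illustration, not
# an ffmpeg formula): to ride out `latency_s` seconds of scheduling
# jitter on a stream of `bitrate_bps` bits per second, the buffer needs
# at least bitrate/8 * latency bytes.
def min_udp_buffer_bytes(bitrate_bps, latency_s):
    return int(bitrate_bps / 8 * latency_s)

# A 10 Mb/s MPEG-TS stream with 0.5 s of jitter tolerance:
# min_udp_buffer_bytes(10_000_000, 0.5) → 625000 bytes
```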
@subsection Examples

View File

@@ -35,7 +35,7 @@ Select nearest neighbor rescaling algorithm.
@item area
Select averaging area rescaling algorithm.
@item bicublin
Select bicubic scaling algorithm for the luma component, bilinear for
chroma components.
@@ -112,14 +112,6 @@ bayer dither
@item ed
error diffusion dither
@item a_dither
arithmetic dither, based on addition
@item x_dither
arithmetic dither, based on xor (more random/less apparent patterning than
a_dither).
@end table
@end table

View File

@@ -618,6 +618,7 @@ flip wavelet?
try to use the wavelet transformed predicted image (motion compensated image) as context for coding the residual coefficients
try the MV length as context for coding the residual coefficients
use extradata for stuff which is in the keyframes now?
the MV median predictor is patented IIRC
implement per picture halfpel interpolation
try different range coder state transition tables for different contexts

doc/style.min.css vendored (23 lines)
File diff suppressed because one or more lines are too long

View File

@@ -1,35 +1,26 @@
# Init file for texi2html.
# This is deprecated, and the makeinfo/texi2any version is doc/t2h.pm
# no horiz rules between sections
$end_section = \&FFmpeg_end_section;
sub FFmpeg_end_section($$)
{
}
my $TEMPLATE_HEADER1 = $ENV{"FFMPEG_HEADER1"} || <<EOT;
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<title>FFmpeg documentation</title>
<link rel="stylesheet" href="bootstrap.min.css" />
<link rel="stylesheet" href="style.min.css" />
EOT
my $TEMPLATE_HEADER2 = $ENV{"FFMPEG_HEADER2"} || <<EOT;
</head>
<body>
<div style="width: 95%; margin: auto">
EOT
my $TEMPLATE_FOOTER = $ENV{"FFMPEG_FOOTER"} || <<EOT;
</div>
</body>
</html>
EOT
$SMALL_RULE = '';
$BODYTEXT = '';
@@ -91,25 +82,21 @@ sub FFmpeg_print_page_head($$)
$longtitle = "FFmpeg documentation : " . $longtitle;
print $fh <<EOT;
$TEMPLATE_HEADER1
$description
<meta name="keywords" content="$longtitle">
<meta name="Generator" content="$Texi2HTML::THISDOC{program}">
$Texi2HTML::THISDOC{'copying'}<!-- Created on $Texi2HTML::THISDOC{today} by $Texi2HTML::THISDOC{program} -->
<!--
$Texi2HTML::THISDOC{program_authors}
-->
$encoding
$TEMPLATE_HEADER2
EOT
}
$print_page_foot = \&FFmpeg_print_page_foot;
sub FFmpeg_print_page_foot($$)
{
    my $fh = shift;
    print $fh <<EOT;
$TEMPLATE_FOOTER
EOT
}

View File

@@ -1,339 +0,0 @@
# makeinfo HTML output init file
#
# Copyright (c) 2011, 2012 Free Software Foundation, Inc.
# Copyright (c) 2014 Andreas Cadhalpun
# Copyright (c) 2014 Tiancheng "Timothy" Gu
#
# This file is part of FFmpeg.
#
# FFmpeg is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3 of the License, or
# (at your option) any later version.
#
# FFmpeg is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public
# License along with FFmpeg; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
# no navigation elements
set_from_init_file('HEADERS', 0);
sub ffmpeg_heading_command($$$$$)
{
my $self = shift;
my $cmdname = shift;
my $command = shift;
my $args = shift;
my $content = shift;
my $result = '';
# not clear that it may really happen
if ($self->in_string) {
$result .= $self->command_string($command) ."\n" if ($cmdname ne 'node');
$result .= $content if (defined($content));
return $result;
}
my $element_id = $self->command_id($command);
$result .= "<a name=\"$element_id\"></a>\n"
if (defined($element_id) and $element_id ne '');
print STDERR "Process $command "
.Texinfo::Structuring::_print_root_command_texi($command)."\n"
if ($self->get_conf('DEBUG'));
my $element;
if ($Texinfo::Common::root_commands{$command->{'cmdname'}}
and $command->{'parent'}
and $command->{'parent'}->{'type'}
and $command->{'parent'}->{'type'} eq 'element') {
$element = $command->{'parent'};
}
if ($element) {
$result .= &{$self->{'format_element_header'}}($self, $cmdname,
$command, $element);
}
my $heading_level;
# node is used as heading if there is nothing else.
if ($cmdname eq 'node') {
if (!$element or (!$element->{'extra'}->{'section'}
and $element->{'extra'}->{'node'}
and $element->{'extra'}->{'node'} eq $command
# bogus node may not have been normalized
and defined($command->{'extra'}->{'normalized'}))) {
if ($command->{'extra'}->{'normalized'} eq 'Top') {
$heading_level = 0;
} else {
$heading_level = 3;
}
}
} else {
$heading_level = $command->{'level'};
}
my $heading = $self->command_text($command);
# $heading not defined may happen if the command is a @node, for example
# if there is an error in the node.
if (defined($heading) and $heading ne '' and defined($heading_level)) {
if ($Texinfo::Common::root_commands{$cmdname}
and $Texinfo::Common::sectioning_commands{$cmdname}) {
my $content_href = $self->command_contents_href($command, 'contents',
$self->{'current_filename'});
if ($content_href) {
my $this_href = $content_href =~ s/^\#toc-/\#/r;
$heading .= '<span class="pull-right">'.
'<a class="anchor hidden-xs" '.
"href=\"$this_href\" aria-hidden=\"true\">".
($ENV{"FA_ICONS"} ? '<i class="fa fa-link"></i>'
: '#').
'</a> '.
'<a class="anchor hidden-xs"'.
"href=\"$content_href\" aria-hidden=\"true\">".
($ENV{"FA_ICONS"} ? '<i class="fa fa-navicon"></i>'
: 'TOC').
'</a>'.
'</span>';
}
}
if ($self->in_preformatted()) {
$result .= $heading."\n";
} else {
# if the level was changed, set the command name right
if ($cmdname ne 'node'
and $heading_level ne $Texinfo::Common::command_structuring_level{$cmdname}) {
$cmdname
= $Texinfo::Common::level_to_structuring_command{$cmdname}->[$heading_level];
}
$result .= &{$self->{'format_heading_text'}}(
$self, $cmdname, $heading,
$heading_level +
$self->get_conf('CHAPTER_HEADER_LEVEL') - 1, $command);
}
}
$result .= $content if (defined($content));
return $result;
}
foreach my $command (keys(%Texinfo::Common::sectioning_commands), 'node') {
texinfo_register_command_formatting($command, \&ffmpeg_heading_command);
}
# print the TOC where @contents is used
set_from_init_file('INLINE_CONTENTS', 1);
# make chapters <h2>
set_from_init_file('CHAPTER_HEADER_LEVEL', 2);
# Do not add <hr>
set_from_init_file('DEFAULT_RULE', '');
set_from_init_file('BIG_RULE', '');
# Customized file beginning
sub ffmpeg_begin_file($$$)
{
my $self = shift;
my $filename = shift;
my $element = shift;
my $command;
if ($element and $self->get_conf('SPLIT')) {
$command = $self->element_command($element);
}
my ($title, $description, $encoding, $date, $css_lines,
$doctype, $bodytext, $copying_comment, $after_body_open,
$extra_head, $program_and_version, $program_homepage,
$program, $generator) = $self->_file_header_informations($command);
my $links = $self->_get_links ($filename, $element);
my $head1 = $ENV{"FFMPEG_HEADER1"} || <<EOT;
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<!-- Created by $program_and_version, $program_homepage -->
<head>
<meta charset="utf-8">
<title>
EOT
my $head_title = <<EOT;
$title
EOT
my $head2 = $ENV{"FFMPEG_HEADER2"} || <<EOT;
</title>
<meta name="viewport" content="width=device-width,initial-scale=1.0">
<link rel="stylesheet" type="text/css" href="bootstrap.min.css">
<link rel="stylesheet" type="text/css" href="style.min.css">
</head>
<body>
<div style="width: 95%; margin: auto">
<h1>
EOT
my $head3 = $ENV{"FFMPEG_HEADER3"} || <<EOT;
</h1>
EOT
return $head1 . $head_title . $head2 . $head_title . $head3;
}
texinfo_register_formatting_function('begin_file', \&ffmpeg_begin_file);
sub ffmpeg_program_string($)
{
my $self = shift;
if (defined($self->get_conf('PROGRAM'))
and $self->get_conf('PROGRAM') ne ''
and defined($self->get_conf('PACKAGE_URL'))) {
return $self->convert_tree(
$self->gdt('This document was generated using @uref{{program_homepage}, @emph{{program}}}.',
{ 'program_homepage' => $self->get_conf('PACKAGE_URL'),
'program' => $self->get_conf('PROGRAM') }));
} else {
return $self->convert_tree(
$self->gdt('This document was generated automatically.'));
}
}
texinfo_register_formatting_function('program_string', \&ffmpeg_program_string);
# Customized file ending
sub ffmpeg_end_file($)
{
my $self = shift;
my $program_string = &{$self->{'format_program_string'}}($self);
my $program_text = <<EOT;
<p style="font-size: small;">
$program_string
</p>
EOT
my $footer = $ENV{FFMPEG_FOOTER} || <<EOT;
</div>
</body>
</html>
EOT
return $program_text . $footer;
}
texinfo_register_formatting_function('end_file', \&ffmpeg_end_file);
# Dummy title command
# Ignore title. Title is handled through ffmpeg_begin_file().
set_from_init_file('USE_TITLEPAGE_FOR_TITLE', 1);
sub ffmpeg_title($$$$)
{
return '';
}
texinfo_register_command_formatting('titlefont',
\&ffmpeg_title);
# Customized float command. Part of code borrowed from GNU Texinfo.
sub ffmpeg_float($$$$$)
{
my $self = shift;
my $cmdname = shift;
my $command = shift;
my $args = shift;
my $content = shift;
my ($caption, $prepended) = Texinfo::Common::float_name_caption($self,
$command);
my $caption_text = '';
my $prepended_text;
my $prepended_save = '';
if ($self->in_string()) {
if ($prepended) {
$prepended_text = $self->convert_tree_new_formatting_context(
$prepended, 'float prepended');
} else {
$prepended_text = '';
}
if ($caption) {
$caption_text = $self->convert_tree_new_formatting_context(
{'contents' => $caption->{'args'}->[0]->{'contents'}},
'float caption');
}
return $prepended.$content.$caption_text;
}
my $id = $self->command_id($command);
my $label;
if (defined($id) and $id ne '') {
$label = "<a name=\"$id\"></a>";
} else {
$label = '';
}
if ($prepended) {
if ($caption) {
# prepend the prepended tree to the first paragraph
my @caption_original_contents = @{$caption->{'args'}->[0]->{'contents'}};
my @caption_contents;
my $new_paragraph;
while (@caption_original_contents) {
my $content = shift @caption_original_contents;
if ($content->{'type'} and $content->{'type'} eq 'paragraph') {
%{$new_paragraph} = %{$content};
$new_paragraph->{'contents'} = [@{$content->{'contents'}}];
unshift (@{$new_paragraph->{'contents'}}, {'cmdname' => 'strong',
'args' => [{'type' => 'brace_command_arg',
'contents' => [$prepended]}]});
push @caption_contents, $new_paragraph;
last;
} else {
push @caption_contents, $content;
}
}
push @caption_contents, @caption_original_contents;
if ($new_paragraph) {
$caption_text = $self->convert_tree_new_formatting_context(
{'contents' => \@caption_contents}, 'float caption');
$prepended_text = '';
}
}
if ($caption_text eq '') {
$prepended_text = $self->convert_tree_new_formatting_context(
$prepended, 'float prepended');
if ($prepended_text ne '') {
$prepended_save = $prepended_text;
$prepended_text = '<p><strong>'.$prepended_text.'</strong></p>';
}
}
} else {
$prepended_text = '';
}
if ($caption and $caption_text eq '') {
$caption_text = $self->convert_tree_new_formatting_context(
$caption->{'args'}->[0], 'float caption');
}
if ($prepended_text.$caption_text ne '') {
$prepended_text = $self->_attribute_class('div','float-caption'). '>'
. $prepended_text;
$caption_text .= '</div>';
}
my $html_class = '';
if ($prepended_save =~ /NOTE/) {
$html_class = 'info';
$prepended_text = '';
$caption_text = '';
} elsif ($prepended_save =~ /IMPORTANT/) {
$html_class = 'warning';
$prepended_text = '';
$caption_text = '';
}
return $self->_attribute_class('div', $html_class). '>' . "\n" .
$prepended_text . $caption_text . $content . '</div>';
}
texinfo_register_command_formatting('float',
\&ffmpeg_float);
1;

View File

@@ -282,14 +282,6 @@ INF: while(<$inf>) {
$_ = "\n=over 4\n";
};
/^\@(multitable)\s+{.*/ and do {
push @endwstack, $endw;
push @icstack, $ic;
$endw = $1;
$ic = "";
$_ = "\n=over 4\n";
};
/^\@((?:small)?example|display)/ and do {
push @endwstack, $endw;
$endw = $1;
@@ -306,10 +298,10 @@ INF: while(<$inf>) {
/^\@tab\s+(.*\S)\s*$/ and $endw eq "multitable" and do {
my $columns = $1;
$columns =~ s/\@tab//;
$_ = $columns;
$chapter =~ s/$//;
};
/^\@itemx?\s*(.+)?$/ and do {
@@ -332,14 +324,13 @@ $inf = pop @instack;
die "No filename or title\n" unless defined $fn && defined $tl;
# always use utf8
print "=encoding utf8\n\n";
$chapters{NAME} = "$fn \- $tl\n";
$chapters{FOOTNOTES} .= "=back\n" if exists $chapters{FOOTNOTES};
unshift @chapters_sequence, "NAME";
for $chapter (@chapters_sequence) {
if (exists $chapters{$chapter}) {
$head = uc($chapter);
print "=head1 $head\n\n";

View File

@@ -782,9 +782,6 @@ large numbers (usually 2^53 and larger).
Round the value of expression @var{expr} upwards to the nearest
integer. For example, "ceil(1.5)" is "2.0".
@item clip(x, min, max)
Return the value of @var{x} clipped between @var{min} and @var{max}.
@item cos(x)
Compute cosine of @var{x}.
@@ -1034,7 +1031,7 @@ indication of the corresponding powers of 10 and of 2.
10^24 / 2^70
@end table
@c man end EXPRESSION EVALUATION
@chapter OpenCL Options
@c man begin OPENCL OPTIONS
@@ -1059,7 +1056,7 @@ which can be obtained with @code{ffmpeg -opencl_bench} or @code{av_opencl_get_de
@item device_idx
Select the index of the device used to run OpenCL code.
The specified index must be one of the indexes in the device list which
can be obtained with @code{ffmpeg -opencl_bench} or @code{av_opencl_get_device_list()}.
@end table

View File

@@ -1,423 +0,0 @@
This document is a tutorial/initiation for writing simple filters in
libavfilter.
Foreword: just like everything else in FFmpeg, libavfilter is monolithic, which
means that it is highly recommended that you submit your filters to the FFmpeg
development mailing-list and make sure it is applied. Otherwise, your filter is
likely to have a very short lifetime due to more or less regular internal API
changes, and a limited distribution, review, and testing.
Bootstrap
=========
Let's say you want to write a new simple video filter called "foobar" which
takes one frame in input, changes the pixels in whatever fashion you fancy, and
outputs the modified frame. The simplest way of doing this is to take a
similar filter. We'll pick edgedetect, but any other should do. You can look
for others using the `./ffmpeg -v 0 -filters|grep ' V->V '` command.
- sed 's/edgedetect/foobar/g;s/EdgeDetect/Foobar/g' libavfilter/vf_edgedetect.c > libavfilter/vf_foobar.c
- edit libavfilter/Makefile, and add an entry for "foobar" following the
pattern of the other filters.
- edit libavfilter/allfilters.c, and add an entry for "foobar" following the
pattern of the other filters.
- ./configure ...
- make -j<whatever> ffmpeg
- ./ffmpeg -i http://samples.ffmpeg.org/image-samples/lena.pnm -vf foobar foobar.png
Note here: you can obviously use a random local image instead of a remote URL.
If everything went right, you should get a foobar.png with Lena edge-detected.
That's it, your new playground is ready.
Some little details about what's going on:
libavfilter/allfilters.c:avfilter_register_all() is called at runtime to create
a list of the available filters, but it's important to know that this file is
also parsed by the configure script, which in turn will define variables for
the build system and the C code:
--- after running configure ---
$ grep FOOBAR config.mak
CONFIG_FOOBAR_FILTER=yes
$ grep FOOBAR config.h
#define CONFIG_FOOBAR_FILTER 1
CONFIG_FOOBAR_FILTER=yes from the config.mak is later used to enable the filter in
libavfilter/Makefile and CONFIG_FOOBAR_FILTER=1 from the config.h will be used
for registering the filter in libavfilter/allfilters.c.
Filter code layout
==================
You now need some theory about the general code layout of a filter. Open your
libavfilter/vf_foobar.c. This section will detail the important parts of the
code you need to understand before messing with it.
Copyright
---------
First chunk is the copyright. Most filters are LGPL, and we are assuming
vf_foobar is as well. We are also assuming vf_foobar is not an edge detector
filter, so you can update the boilerplate with your credits.
Doxy
----
Next chunk is the Doxygen about the file. See http://ffmpeg.org/doxygen/trunk/.
Detail here what the filter is, does, and add some references if you feel like
it.
Context
-------
Skip the headers and scroll down to the definition of FoobarContext. This is
your local state context. It is already zero-filled when you get it, so do not
worry about uninitialized reads from this context. This is where you put all the
"global" information you need, typically the variables storing the user options.
You'll notice the first field "const AVClass *class"; it's the only field you
need to keep, assuming you have a context. There is some magic you don't care
about around this field; just leave it in first position for now.
Options
-------
Then comes the options array. This is what will define the user accessible
options. For example, -vf foobar=mode=colormix:high=0.4:low=0.1. Most options
have the following pattern:
name, description, offset, type, default value, minimum value, maximum value, flags
- name is the option name, keep it simple, lowercase
- descriptions are short, in lowercase, without a period, and describe what they
  do, for example "set the foo of the bar"
- offset is the offset of the field in your local context, see the OFFSET()
macro; the option parser will use that information to fill the fields
according to the user input
- type is any of AV_OPT_TYPE_* defined in libavutil/opt.h
- default value is a union where you pick the appropriate type; "{.dbl=0.3}",
"{.i64=0x234}", "{.str=NULL}", ...
- min and max values define the range of available values, inclusive
- flags are AVOption generic flags. See AV_OPT_FLAG_* definitions
In doubt, just look at the other AVOption definitions all around the codebase,
there are tons of examples.
Class
-----
AVFILTER_DEFINE_CLASS(foobar) will define a unique foobar_class with some kind
of signature referencing the options, etc. which will be referenced in the
definition of the AVFilter.
Filter definition
-----------------
At the end of the file, you will find foobar_inputs, foobar_outputs and
the AVFilter ff_vf_foobar. Don't forget to update the AVFilter.description with
a description of what the filter does, starting with a capital letter and
ending with a period. You'd better drop the AVFilter.flags entry for now, and
re-add it later depending on the capabilities of your filter.
Callbacks
---------
Let's now study the common callbacks. Before going into detail, note that all
these callbacks are explained in depth in libavfilter/avfilter.h, so when in
doubt, refer to the doxy in that file.
init()
~~~~~~
The first one to be called is init(). It's flagged as cold because it is not
called often. Look for "cold" on
http://gcc.gnu.org/onlinedocs/gcc/Function-Attributes.html for more
information.
As the name suggests, init() is where you initialize and allocate your
buffers, pre-compute your data, etc., if needed. Note that at this point your
local context already has the user options initialized, but you still have no
clue about the kind of data input you will get, so this function is often
mainly used to sanitize the user options.
Some init()s will also define the number of inputs or outputs dynamically
according to the user options. A good example of this is the split filter, but
we won't cover this here since vf_foobar is just a simple 1:1 filter.
uninit()
~~~~~~~~
Similarly, there is the uninit() callback, doing what the name suggests. Free
everything you allocated here.
query_formats()
~~~~~~~~~~~~~~~
This callback follows init() and is used for format negotiation. It is
basically where you say what pixel format(s) (gray, rgb 32, yuv 4:2:0, ...) you
accept for your inputs, and what you can output. All pixel formats are defined
in libavutil/pixfmt.h. If you don't change the pixel format between the input
and the output, you just have to define a pixel formats array and call
ff_set_common_formats(). For more complex negotiation, you can refer to other
filters such as vf_scale.
config_props()
~~~~~~~~~~~~~~
This callback is not mandatory, but you will probably have one or more
config_props() anyway. It's not a callback for the filter itself but for its
inputs or outputs (they're called "pads" - AVFilterPad - in libavfilter's
lexicon).
Inside the input config_props(), you are at a point where you know which pixel
format has been picked after query_formats(), along with more information such
as the video width and height (inlink->{w,h}). So if you need to update your
internal context state depending on your input, you can do it here. In the
edgedetect filter you can see that this callback is used to allocate buffers
depending on this information. They will be freed in uninit().
Inside the output config_props(), you can define what you want to change in the
output. Typically, if your filter is going to double the size of the video, you
will update outlink->w and outlink->h.
filter_frame()
~~~~~~~~~~~~~~
This is the callback you have been waiting for since the beginning: it is
where you process the received frames. Along with the frame, you get the input
link the frame comes from:

    static int filter_frame(AVFilterLink *inlink, AVFrame *in) { ... }

You can get the filter context through that input link:

    AVFilterContext *ctx = inlink->dst;

Then access your internal state context:

    FoobarContext *foobar = ctx->priv;

And also the output link, where you will send your frame when you are done:

    AVFilterLink *outlink = ctx->outputs[0];

Here, we are picking the first output. You can have several, but in our case we
only have one since we are in a 1:1 input-output situation.
If you want to define a simple pass-through filter, you can just do:

    return ff_filter_frame(outlink, in);

But of course, you probably want to change the data of that frame.
This can be done by accessing frame->data[] and frame->linesize[]. An
important note here: the width does NOT match the linesize. The linesize is
always greater than or equal to the width. The padding should not be changed,
or even read. Typically, keep in mind that a previous filter in your chain
might have altered the frame dimensions but not the linesize. Imagine a crop
filter that halves the video size: the linesizes won't be changed, just the
width.
    <-------------- linesize ------------------------>
    +-------------------------------+----------------+ ^
    |                               |                | |
    |                               |                | |
    |            picture            |    padding     | | height
    |                               |                | |
    |                               |                | |
    +-------------------------------+----------------+ v
    <----------- width ------------->
Before modifying the "in" frame, you have to make sure it is writable, or get a
new one. Multiple scenarios are possible here depending on the kind of
processing you are doing.
Let's say you want to change one pixel depending on multiple pixels (typically
the surrounding ones) of the input. In that case, you can't do in-place
processing of the input, so you will need to allocate a new frame with the
same properties as the input one, and send that new frame to the next filter:
    AVFrame *out = ff_get_video_buffer(outlink, outlink->w, outlink->h);
    if (!out) {
        av_frame_free(&in);
        return AVERROR(ENOMEM);
    }
    av_frame_copy_props(out, in);

    // out->data[...] = foobar(in->data[...])

    av_frame_free(&in);
    return ff_filter_frame(outlink, out);
In-place processing
~~~~~~~~~~~~~~~~~~~
If you can just alter the input frame, you probably want to do that instead
(note that av_frame_make_writable() can fail, so check its return value):

    int ret = av_frame_make_writable(in);
    if (ret < 0) {
        av_frame_free(&in);
        return ret;
    }

    // in->data[...] = foobar(in->data[...])

    return ff_filter_frame(outlink, in);
You may wonder why a frame might not be writable. The answer is that, for
example, a previous filter might still own the frame data: imagine a filter
prior to yours in the filtergraph that needs to cache the frame. You must not
alter that frame, or you would break that previous filter. This is where
av_frame_make_writable() helps (it has no effect if the frame is already
writable).
The problem with using av_frame_make_writable() is that, in the worst case, it
will copy the whole input frame before you overwrite it entirely with your
filter: if the frame is not writable, av_frame_make_writable() will allocate
new buffers and copy the input frame data. You don't want that, and you can
avoid it by allocating a new buffer yourself only when necessary, and
processing from in to out in your filter, saving the memcpy. Generally, this
is done following this scheme:
    int direct = 0;
    AVFrame *out;

    if (av_frame_is_writable(in)) {
        direct = 1;
        out = in;
    } else {
        out = ff_get_video_buffer(outlink, outlink->w, outlink->h);
        if (!out) {
            av_frame_free(&in);
            return AVERROR(ENOMEM);
        }
        av_frame_copy_props(out, in);
    }

    // out->data[...] = foobar(in->data[...])

    if (!direct)
        av_frame_free(&in);
    return ff_filter_frame(outlink, out);
Of course, this will only work if you can do in-place processing. To test
whether your filter handles frame permissions correctly, you can use the perms
filter, for example with:

    -vf perms=random,foobar

Make sure no automatic pixel conversion is inserted between perms and foobar,
otherwise the frame permissions might change again and the test will be
meaningless: add av_log(0,0,"direct=%d\n",direct) in your code to check that.
You can avoid the issue with something like:

    -vf format=rgb24,perms=random,foobar

...assuming your filter accepts rgb24, of course. This will make sure the
necessary conversion is inserted before the perms filter.
Timeline
~~~~~~~~
Adding timeline support
(http://ffmpeg.org/ffmpeg-filters.html#Timeline-editing) is often an easy
feature to add. In the simplest case, you just have to add
AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC to the AVFilter.flags. You can typically
do this when your filter does not need to save the previous context frames, or
basically if your filter just alters whatever goes in and doesn't need
previous/future information. See for instance commit 86cb986ce that adds
timeline support to the fieldorder filter.
In some cases, you might need to reset your context somehow. This is handled by
the AVFILTER_FLAG_SUPPORT_TIMELINE_INTERNAL flag which is used if the filter
must not process the frames but still wants to keep track of the frames going
through (to keep them in cache for when it's enabled again). See for example
commit 69d72140a that adds timeline support to the phase filter.
Threading
~~~~~~~~~
libavfilter does not yet support frame threading, but you can add slice
threading to your filters.
Let's say the foobar filter has the following frame processing function:

    dst = out->data[0];
    src = in ->data[0];

    for (y = 0; y < inlink->h; y++) {
        for (x = 0; x < inlink->w; x++)
            dst[x] = foobar(src[x]);
        dst += out->linesize[0];
        src += in ->linesize[0];
    }
The first thing is to make this function work on slices. The new code will
look like this:

    for (y = slice_start; y < slice_end; y++) {
        for (x = 0; x < inlink->w; x++)
            dst[x] = foobar(src[x]);
        dst += out->linesize[0];
        src += in ->linesize[0];
    }
The source and destination pointers, and slice_start/slice_end, will be
defined according to the number of jobs. Generally, it looks like this:

    const int slice_start = (in->height *  jobnr   ) / nb_jobs;
    const int slice_end   = (in->height * (jobnr+1)) / nb_jobs;
    uint8_t       *dst = out->data[0] + slice_start * out->linesize[0];
    const uint8_t *src =  in->data[0] + slice_start *  in->linesize[0];
This new code will be isolated in a new filter_slice() function:

    static int filter_slice(AVFilterContext *ctx, void *arg, int jobnr, int nb_jobs) { ... }
Note that we need our input and output frames to define slice_{start,end} and
dst/src, and they are not available in that callback. They will be transmitted
through the opaque void *arg. You have to define a structure which contains
everything you need:

    typedef struct ThreadData {
        AVFrame *in, *out;
    } ThreadData;

If you need some more information from your local context, put it here.
In your filter_slice() function, you access the structure like this:

    const ThreadData *td = arg;
Then in your filter_frame() callback, you need to call the threading
distributor with something like this:

    ThreadData td;

    // ...

    td.in  = in;
    td.out = out;
    ctx->internal->execute(ctx, filter_slice, &td, NULL, FFMIN(outlink->h, ctx->graph->nb_threads));

    // ...

    return ff_filter_frame(outlink, out);
The last step is to add the AVFILTER_FLAG_SLICE_THREADS flag to AVFilter.flags.
For more examples of slice threading additions, you can try running:

    git log -p --grep 'slice threading' libavfilter/
Finalization
~~~~~~~~~~~~
When your awesome filter is finished, you have a few more steps before you're
done:
- write its documentation in doc/filters.texi, and test the output with make
  doc/ffmpeg-filters.html
- add a FATE test, generally by adding an entry in
  tests/fate/filter-video.mak, and running make fate-filter-foobar GEN=1 to
  generate the reference data
- add an entry in the Changelog
- edit libavfilter/version.h and increase LIBAVFILTER_VERSION_MINOR by one
  (and reset LIBAVFILTER_VERSION_MICRO to 100)
- git add ... && git commit -m "avfilter: add foobar filter." && git format-patch -1
When all of this is done, you can submit your patch to the ffmpeg-devel
mailing-list for review. If you need any help, feel free to come on our IRC
channel, #ffmpeg-devel on irc.freenode.net.