Compare commits
1 commit: 35a7b73590

.gitattributes (vendored) | 1

@@ -1 +0,0 @@
*.pnm -diff -text

.gitignore (vendored) | 17

@@ -15,7 +15,6 @@
*.pdb
*.so
*.so.*
*.swp
*.ver
*-example
*-test
@@ -28,6 +27,7 @@
/ffserver
/config.*
/coverage.info
/version.h
/doc/*.1
/doc/*.3
/doc/*.html
@@ -35,36 +35,27 @@
/doc/config.texi
/doc/avoptions_codec.texi
/doc/avoptions_format.texi
/doc/doxy/html/
/doc/examples/avio_reading
/doc/examples/decoding_encoding
/doc/examples/demuxing_decoding
/doc/examples/extract_mvs
/doc/examples/filter_audio
/doc/examples/demuxing
/doc/examples/filtering_audio
/doc/examples/filtering_video
/doc/examples/metadata
/doc/examples/muxing
/doc/examples/pc-uninstalled
/doc/examples/remuxing
/doc/examples/resampling_audio
/doc/examples/scaling_video
/doc/examples/transcode_aac
/doc/examples/transcoding
/doc/fate.txt
/doc/doxy/html/
/doc/print_options
/lcov/
/libavcodec/*_tablegen
/libavcodec/*_tables.c
/libavcodec/*_tables.h
/libavutil/avconfig.h
/libavutil/ffversion.h
/tests/audiogen
/tests/base64
/tests/data/
/tests/pixfmts.mak
/tests/rotozoom
/tests/test_copy.ffmeta
/tests/tiny_psnr
/tests/tiny_ssim
/tests/videogen
@@ -83,8 +74,6 @@
/tools/pktdumper
/tools/probetest
/tools/qt-faststart
/tools/sidxindex
/tools/trasher
/tools/seek_print
/tools/uncoded_frame
/tools/zmqsend

Changelog | 153

@@ -1,144 +1,7 @@
Entries are sorted chronologically from oldest to youngest within each release,
releases are sorted from youngest to oldest.

version 2.6:
- nvenc encoder
- 10bit spp filter
- colorlevels filter
- RIFX format for *.wav files
- RTP/mpegts muxer
- non continuous cache protocol support
- tblend filter
- cropdetect support for non 8bpp, absolute (if limit >= 1) and relative (if limit < 1.0) threshold
- Camellia symmetric block cipher
- OpenH264 encoder wrapper
- VOC seeking support
- Closed caption Decoder
- fspp, uspp, pp7 MPlayer postprocessing filters ported to native filters
- showpalette filter
- Twofish symmetric block cipher
- Support DNx100 (960x720@8)
- eq2 filter ported from libmpcodecs as eq filter
- removed libmpcodecs
- Changed default DNxHD colour range in QuickTime .mov derivatives to mpeg range
- ported softpulldown filter from libmpcodecs as repeatfields filter
- dcshift filter
- RTP depacketizer for loss tolerant payload format for MP3 audio (RFC 5219)
- RTP depacketizer for AC3 payload format (RFC 4184)
- palettegen and paletteuse filters
- VP9 RTP payload format (draft 0) experimental depacketizer
- RTP depacketizer for DV (RFC 6469)
- DXVA2-accelerated HEVC decoding
- AAC ELD 480 decoding
- Intel QSV-accelerated H.264 decoding
- DSS SP decoder and DSS demuxer
- Fix stsd atom corruption in DNxHD QuickTimes
- Canopus HQX decoder
- RTP depacketization of T.140 text (RFC 4103)
- VP9 RTP payload format (draft 0) experimental depacketizer
- Port MIPS opttimizations to 64-bit


version 2.5:
- HEVC/H.265 RTP payload format (draft v6) packetizer
- SUP/PGS subtitle demuxer
- ffprobe -show_pixel_formats option
- CAST128 symmetric block cipher, ECB mode
- STL subtitle demuxer and decoder
- libutvideo YUV 4:2:2 10bit support
- XCB-based screen-grabber
- UDP-Lite support (RFC 3828)
- xBR scaling filter
- AVFoundation screen capturing support
- ffserver supports codec private options
- creating DASH compatible fragmented MP4, MPEG-DASH segmenting muxer
- WebP muxer with animated WebP support
- zygoaudio decoding support
- APNG demuxer
- postproc visualization support


version 2.4:
- Icecast protocol
- ported lenscorrection filter from frei0r filter
- large optimizations in dctdnoiz to make it usable
- ICY metadata are now requested by default with the HTTP protocol
- support for using metadata in stream specifiers in fftools
- LZMA compression support in TIFF decoder
- H.261 RTP payload format (RFC 4587) depacketizer and experimental packetizer
- HEVC/H.265 RTP payload format (draft v6) depacketizer
- added codecview filter to visualize information exported by some codecs
- Matroska 3D support thorugh side data
- HTML generation using texi2html is deprecated in favor of makeinfo/texi2any
- silenceremove filter


version 2.3:
- AC3 fixed-point decoding
- shuffleplanes filter
- subfile protocol
- Phantom Cine demuxer
- replaygain data export
- VP7 video decoder
- Alias PIX image encoder and decoder
- Improvements to the BRender PIX image decoder
- Improvements to the XBM decoder
- QTKit input device
- improvements to OpenEXR image decoder
- support decoding 16-bit RLE SGI images
- GDI screen grabbing for Windows
- alternative rendition support for HTTP Live Streaming
- AVFoundation input device
- Direct Stream Digital (DSD) decoder
- Magic Lantern Video (MLV) demuxer
- On2 AVC (Audio for Video) decoder
- support for decoding through DXVA2 in ffmpeg
- libbs2b-based stereo-to-binaural audio filter
- libx264 reference frames count limiting depending on level
- native Opus decoder
- display matrix export and rotation API
- WebVTT encoder
- showcqt multimedia filter
- zoompan filter
- signalstats filter
- hqx filter (hq2x, hq3x, hq4x)
- flanger filter
- Image format auto-detection
- LRC demuxer and muxer
- Samba protocol (via libsmbclient)
- WebM DASH Manifest muxer
- libfribidi support in drawtext


version 2.2:

- HNM version 4 demuxer and video decoder
- Live HDS muxer
- setsar/setdar filters now support variables in ratio expressions
- elbg filter
- string validation in ffprobe
- support for decoding through VDPAU in ffmpeg (the -hwaccel option)
- complete Voxware MetaSound decoder
- remove mp3_header_compress bitstream filter
- Windows resource files for shared libraries
- aeval filter
- stereoscopic 3d metadata handling
- WebP encoding via libwebp
- ATRAC3+ decoder
- VP8 in Ogg demuxing
- side & metadata support in NUT
- framepack filter
- XYZ12 rawvideo support in NUT
- Exif metadata support in WebP decoder
- OpenGL device
- Use metadata_header_padding to control padding in ID3 tags (currently used in
MP3, AIFF, and OMA files), FLAC header, and the AVI "junk" block.
- Mirillis FIC video decoder
- Support DNx444
- libx265 encoder
- dejudder filter
- Autodetect VDA like all other hardware accelerations
- aliases and defaults for Ogg subtypes (opus, spx)
version <next>


version 2.1:
@@ -183,8 +46,7 @@ version 2.1:
- ReplayGain scanner
- Enhanced Low Delay AAC (ER AAC ELD) decoding (no LD SBR support)
- Linux framebuffer output device
- HEVC decoder
- raw HEVC, HEVC in MOV/MP4, HEVC in Matroska, HEVC in MPEG-TS demuxing
- HEVC decoder, raw HEVC demuxer, HEVC demuxing in TS, Matroska and MP4
- mergeplanes filter


@@ -200,7 +62,7 @@ version 2.0:
- 10% faster aac encoding on x86 and MIPS
- sine audio filter source
- WebP demuxing and decoding support
- ffmpeg options -filter_script and -filter_complex_script, which allow a
- new ffmpeg options -filter_script and -filter_complex_script, which allow a
filtergraph description to be read from a file
- OpenCL support
- audio phaser filter
@@ -208,7 +70,7 @@ version 2.0:
- libquvi demuxer
- uniform options syntax across all filters
- telecine filter
- interlace filter
- new interlace filter
- smptehdbars source
- inverse telecine filters (fieldmatch and decimate)
- colorbalance filter
@@ -326,7 +188,7 @@ version 1.1:
- JSON captions for TED talks decoding support
- SOX Resampler support in libswresample
- aselect filter
- SGI RLE 8-bit / Silicon Graphics RLE 8-bit video decoder
- SGI RLE 8-bit decoder
- Silicon Graphics Motion Video Compressor 1 & 2 decoder
- Silicon Graphics Movie demuxer
- apad filter
@@ -370,9 +232,7 @@ version 1.0:
- RTMPE protocol support
- RTMPTE protocol support
- showwaves and showspectrum filter
- LucasArts SMUSH SANM playback support
- LucasArts SMUSH VIMA audio decoder (ADPCM)
- LucasArts SMUSH demuxer
- LucasArts SMUSH playback support
- SAMI, RealText and SubViewer demuxers and decoders
- Heart Of Darkness PAF playback support
- iec61883 device
@@ -496,7 +356,6 @@ version 0.10:
- ffwavesynth decoder
- aviocat tool
- ffeval tool
- support encoding and decoding 4-channel SGI images


version 0.9:

INSTALL (normal file) | 15

@@ -0,0 +1,15 @@

1) Type './configure' to create the configuration. A list of configure
options is printed by running 'configure --help'.

'configure' can be launched from a directory different from the FFmpeg
sources to build the objects out of tree. To do this, use an absolute
path when launching 'configure', e.g. '/ffmpegdir/ffmpeg/configure'.

2) Then type 'make' to build FFmpeg. GNU Make 3.81 or later is required.

3) Type 'make install' to install all binaries and libraries you built.

NOTICE

- Non system dependencies (e.g. libx264, libvpx) are disabled by default.

INSTALL.md | 17

@@ -1,17 +0,0 @@
#Installing FFmpeg:

1. Type `./configure` to create the configuration. A list of configure
options is printed by running `configure --help`.

`configure` can be launched from a directory different from the FFmpeg
sources to build the objects out of tree. To do this, use an absolute
path when launching `configure`, e.g. `/ffmpegdir/ffmpeg/configure`.

2. Then type `make` to build FFmpeg. GNU Make 3.81 or later is required.

3. Type `make install` to install all binaries and libraries you built.

NOTICE
------

- Non system dependencies (e.g. libx264, libvpx) are disabled by default.

@@ -1,73 +1,68 @@
#FFmpeg:
FFmpeg:

Most files in FFmpeg are under the GNU Lesser General Public License version 2.1
or later (LGPL v2.1+). Read the file `COPYING.LGPLv2.1` for details. Some other
or later (LGPL v2.1+). Read the file COPYING.LGPLv2.1 for details. Some other
files have MIT/X11/BSD-style licenses. In combination the LGPL v2.1+ applies to
FFmpeg.

Some optional parts of FFmpeg are licensed under the GNU General Public License
version 2 or later (GPL v2+). See the file `COPYING.GPLv2` for details. None of
these parts are used by default, you have to explicitly pass `--enable-gpl` to
version 2 or later (GPL v2+). See the file COPYING.GPLv2 for details. None of
these parts are used by default, you have to explicitly pass --enable-gpl to
configure to activate them. In this case, FFmpeg's license changes to GPL v2+.

Specifically, the GPL parts of FFmpeg are:
Specifically, the GPL parts of FFmpeg are

- libpostproc
- libmpcodecs
- optional x86 optimizations in the files
- `libavcodec/x86/flac_dsp_gpl.asm`
- `libavcodec/x86/idct_mmx.c`
libavcodec/x86/idct_mmx.c
- libutvideo encoding/decoding wrappers in
`libavcodec/libutvideo*.cpp`
- the X11 grabber in `libavdevice/x11grab.c`
libavcodec/libutvideo*.cpp
- the X11 grabber in libavdevice/x11grab.c
- the swresample test app in
`libswresample/swresample-test.c`
- the `texi2pod.pl` tool
libswresample/swresample-test.c
- the texi2pod.pl tool
- the following filters in libavfilter:
- `f_ebur128.c`
- `vf_blackframe.c`
- `vf_boxblur.c`
- `vf_colormatrix.c`
- `vf_cropdetect.c`
- `vf_delogo.c`
- `vf_eq.c`
- `vf_fspp.c`
- `vf_geq.c`
- `vf_histeq.c`
- `vf_hqdn3d.c`
- `vf_interlace.c`
- `vf_kerndeint.c`
- `vf_mcdeint.c`
- `vf_mpdecimate.c`
- `vf_owdenoise.c`
- `vf_perspective.c`
- `vf_phase.c`
- `vf_pp.c`
- `vf_pp7.c`
- `vf_pullup.c`
- `vf_sab.c`
- `vf_smartblur.c`
- `vf_repeatfields.c`
- `vf_spp.c`
- `vf_stereo3d.c`
- `vf_super2xsai.c`
- `vf_tinterlace.c`
- `vf_uspp.c`
- `vsrc_mptestsrc.c`
- f_ebur128.c
- vf_blackframe.c
- vf_boxblur.c
- vf_colormatrix.c
- vf_cropdetect.c
- vf_decimate.c
- vf_delogo.c
- vf_geq.c
- vf_histeq.c
- vf_hqdn3d.c
- vf_kerndeint.c
- vf_mcdeint.c
- vf_mp.c
- vf_owdenoise.c
- vf_perspective.c
- vf_phase.c
- vf_pp.c
- vf_pullup.c
- vf_sab.c
- vf_smartblur.c
- vf_spp.c
- vf_stereo3d.c
- vf_super2xsai.c
- vf_tinterlace.c
- vf_yadif.c
- vsrc_mptestsrc.c

Should you, for whatever reason, prefer to use version 3 of the (L)GPL, then
the configure parameter `--enable-version3` will activate this licensing option
for you. Read the file `COPYING.LGPLv3` or, if you have enabled GPL parts,
`COPYING.GPLv3` to learn the exact legal terms that apply in this case.
the configure parameter --enable-version3 will activate this licensing option
for you. Read the file COPYING.LGPLv3 or, if you have enabled GPL parts,
COPYING.GPLv3 to learn the exact legal terms that apply in this case.

There are a handful of files under other licensing terms, namely:

* The files `libavcodec/jfdctfst.c`, `libavcodec/jfdctint_template.c` and
`libavcodec/jrevdct.c` are taken from libjpeg, see the top of the files for
* The files libavcodec/jfdctfst.c, libavcodec/jfdctint_template.c and
libavcodec/jrevdct.c are taken from libjpeg, see the top of the files for
licensing details. Specifically note that you must credit the IJG in the
documentation accompanying your program if you only distribute executables.
You must also indicate any changes including additions and deletions to
those three files in the documentation.
* `tests/reference.pnm` is under the expat license.


external libraries
@@ -80,22 +75,20 @@ compatible libraries
--------------------

The following libraries are under GPL:
- frei0r
- libcdio
- libutvideo
- libvidstab
- libx264
- libx265
- libxavs
- libxvid

- frei0r
- libcdio
- libutvideo
- libvidstab
- libx264
- libxavs
- libxvid
When combining them with FFmpeg, FFmpeg needs to be licensed as GPL as well by
passing `--enable-gpl` to configure.
passing --enable-gpl to configure.

The OpenCORE and VisualOn libraries are under the Apache License 2.0. That
license is incompatible with the LGPL v2.1 and the GPL v2, but not with
version 3 of those licenses. So to combine these libraries with FFmpeg, the
license version needs to be upgraded by passing `--enable-version3` to configure.
license version needs to be upgraded by passing --enable-version3 to configure.

incompatible libraries
----------------------
@@ -103,7 +96,7 @@ incompatible libraries
The Fraunhofer AAC library, FAAC and aacplus are under licenses which
are incompatible with the GPLv2 and v3. We do not know for certain if their
licenses are compatible with the LGPL.
If you wish to enable these libraries, pass `--enable-nonfree` to configure.
If you wish to enable these libraries, pass --enable-nonfree to configure.
But note that if you enable any of these libraries the resulting binary will
be under a complex license mix that is more restrictive than the LGPL and that
may result in additional obligations. It is possible that these

MAINTAINERS | 88

@@ -31,7 +31,7 @@ ffprobe:
ffprobe.c Stefano Sabatini

ffserver:
ffserver.c Reynaldo H. Verdejo Pinochet
ffserver.c, ffserver.h Baptiste Coudurier

Commandline utility code:
cmdutils.c, cmdutils.h Michael Niedermayer
@@ -44,8 +44,8 @@ Miscellaneous Areas
===================

documentation Stefano Sabatini, Mike Melanson, Timothy Gu
build system (configure, makefiles) Diego Biurrun, Mans Rullgard
project server Árpád Gereöffy, Michael Niedermayer, Reimar Doeffinger, Alexander Strasser
build system (configure,Makefiles) Diego Biurrun, Mans Rullgard
project server Árpád Gereöffy, Michael Niedermayer, Reimar Döffinger, Alexander Strasser
presets Robert Swain
metadata subsystem Aurelien Jacobs
release management Michael Niedermayer
@@ -54,10 +54,8 @@ release management Michael Niedermayer
Communication
=============

website Deby Barbara Lepage
fate.ffmpeg.org Timothy Gu
Trac bug tracker Alexander Strasser, Michael Niedermayer, Carl Eugen Hoyos, Lou Logan
mailing lists Michael Niedermayer, Baptiste Coudurier, Lou Logan
website Robert Swain, Lou Logan
mailinglists Michael Niedermayer, Baptiste Coudurier, Lou Logan
Google+ Paul B Mahol, Michael Niedermayer, Alexander Strasser
Twitter Lou Logan
Launchpad Timothy Gu
@@ -75,7 +73,6 @@ Other:
bprint Nicolas George
bswap.h
des Reimar Doeffinger
dynarray.h Nicolas George
eval.c, eval.h Michael Niedermayer
float_dsp Loren Merritt
hash Reimar Doeffinger
@@ -132,7 +129,6 @@ Generic Parts:
tableprint.c, tableprint.h Reimar Doeffinger
fixed point FFT:
fft* Zeljko Lukac
Text Subtitles Clément Bœsch

Codecs:
4xm.c Michael Niedermayer
@@ -146,7 +142,6 @@ Codecs:
ass* Aurelien Jacobs
asv* Michael Niedermayer
atrac3* Benjamin Larsson
atrac3plus* Maxim Poliakovski
bgmc.c, bgmc.h Thilo Borgmann
bink.c Kostya Shishkov
binkaudio.c Peter Ross
@@ -155,8 +150,6 @@ Codecs:
cdxl.c Paul B Mahol
celp_filters.* Vitor Sessak
cinepak.c Roberto Togni
cinepakenc.c Rl / Aetey G.T. AB
ccaption_dec.c Anshul Maheshwari
cljr Alex Beregszaszi
cllc.c Derek Buitenhuis
cook.c, cookdata.h Benjamin Larsson
@@ -166,15 +159,12 @@ Codecs:
dca.c Kostya Shishkov, Benjamin Larsson
dnxhd* Baptiste Coudurier
dpcm.c Mike Melanson
dss_sp.c Oleksij Rempel, Michael Niedermayer
dv.c Roman Shaposhnik
dvbsubdec.c Anshul Maheshwari
dxa.c Kostya Shishkov
eacmv*, eaidct*, eat* Peter Ross
exif.c, exif.h Thilo Borgmann
ffv1* Michael Niedermayer
ffv1.c Michael Niedermayer
ffwavesynth.c Nicolas George
fic.c Derek Buitenhuis
flac* Justin Ruggles
flashsv* Benjamin Larsson
flicvideo.c Mike Melanson
@@ -184,7 +174,7 @@ Codecs:
h261* Michael Niedermayer
h263* Michael Niedermayer
h264* Loren Merritt, Michael Niedermayer
huffyuv* Michael Niedermayer, Christophe Gisquet
huffyuv.c Michael Niedermayer
idcinvideo.c Mike Melanson
imc* Benjamin Larsson
indeo2* Kostya Shishkov
@@ -207,11 +197,8 @@ Codecs:
libtheoraenc.c David Conrad
libutvideo* Derek Buitenhuis
libvorbis.c David Conrad
libvpx* James Zern
libx264.c Mans Rullgard, Jason Garrett-Glaser
libx265.c Derek Buitenhuis
libxavs.c Stefan Gehrer
libzvbi-teletextdec.c Marton Balint
loco.c Kostya Shishkov
lzo.h, lzo.c Reimar Doeffinger
mdec.c Michael Niedermayer
@@ -228,7 +215,6 @@ Codecs:
msvideo1.c Mike Melanson
nellymoserdec.c Benjamin Larsson
nuv.c Reimar Doeffinger
nvenc.c Timo Rothenpieler
paf.* Paul B Mahol
pcx.c Ivo van Poorten
pgssubdec.c Reimar Doeffinger
@@ -245,12 +231,12 @@ Codecs:
rtjpeg.c, rtjpeg.h Reimar Doeffinger
rv10.c Michael Niedermayer
rv3* Kostya Shishkov
rv4* Kostya Shishkov, Christophe Gisquet
rv4* Kostya Shishkov
s3tc* Ivo van Poorten
smacker.c Kostya Shishkov
smc.c Mike Melanson
smvjpegdec.c Ash Hughes
snow* Michael Niedermayer, Loren Merritt
snow.c Michael Niedermayer, Loren Merritt
sonic.c Alex Beregszaszi
srt* Aurelien Jacobs
sunrast.c Ivo van Poorten
@@ -269,18 +255,17 @@ Codecs:
v410*.c Derek Buitenhuis
vb.c Kostya Shishkov
vble.c Derek Buitenhuis
vc1* Kostya Shishkov, Christophe Gisquet
vc1* Kostya Shishkov
vcr1.c Michael Niedermayer
vda_h264_dec.c Xidorn Quan
vima.c Paul B Mahol
vmnc.c Kostya Shishkov
vorbisdec.c Denes Balatoni, David Conrad
vorbisenc.c Oded Shimon
vorbis_dec.c Denes Balatoni, David Conrad
vorbis_enc.c Oded Shimon
vp3* Mike Melanson
vp5 Aurelien Jacobs
vp6 Aurelien Jacobs
vp8 David Conrad, Jason Garrett-Glaser, Ronald Bultje
vp9 Ronald Bultje, Clément Bœsch
vqavideo.c Mike Melanson
wavpack.c Kostya Shishkov
wmaprodec.c Sascha Sommer
@@ -311,21 +296,15 @@ libavdevice
libavdevice/avdevice.h


avfoundation.m Thilo Borgmann
decklink* Deti Fliegl
dshow.c Roger Pack (CC rogerdpack@gmail.com)
dshow.c Roger Pack
fbdev_enc.c Lukasz Marek
gdigrab.c Roger Pack (CC rogerdpack@gmail.com)
iec61883.c Georg Lippitsch
lavfi Stefano Sabatini
libdc1394.c Roman Shaposhnik
opengl_enc.c Lukasz Marek
pulse_audio_enc.c Lukasz Marek
qtkit.m Thilo Borgmann
sdl Stefano Sabatini
v4l2.c Giorgio Vazzana
v4l2.c Luca Abeni
vfwcap.c Ramiro Polla
xv.c Lukasz Marek

libavfilter
===========
@@ -347,20 +326,14 @@ Filters:
af_compand.c Paul B Mahol
af_ladspa.c Paul B Mahol
af_pan.c Nicolas George
af_silenceremove.c Paul B Mahol
avf_avectorscope.c Paul B Mahol
avf_showcqt.c Muhammad Faiz
vf_blend.c Paul B Mahol
vf_colorbalance.c Paul B Mahol
vf_dejudder.c Nicholas Robbins
vf_delogo.c Jean Delvare (CC <khali@linux-fr.org>)
vf_drawbox.c/drawgrid Andrey Utkin
vf_extractplanes.c Paul B Mahol
vf_histogram.c Paul B Mahol
vf_hqx.c Clément Bœsch
vf_idet.c Pascal Massimino
vf_il.c Paul B Mahol
vf_lenscorrection.c Daniel Oberhoff
vf_mergeplanes.c Paul B Mahol
vf_psnr.c Paul B Mahol
vf_scale.c Michael Niedermayer
@@ -389,7 +362,6 @@ Muxers/Demuxers:
aiffdec.c Baptiste Coudurier, Matthieu Bouron
aiffenc.c Baptiste Coudurier, Matthieu Bouron
ape.c Kostya Shishkov
apngdec.c Benoit Fouet
ass* Aurelien Jacobs
astdec.c Paul B Mahol
astenc.c James Almer
@@ -402,7 +374,6 @@ Muxers/Demuxers:
cdxl.c Paul B Mahol
crc.c Michael Niedermayer
daud.c Reimar Doeffinger
dss.c Oleksij Rempel, Michael Niedermayer
dtshddec.c Paul B Mahol
dv.c Roman Shaposhnik
dxa.c Kostya Shishkov
@@ -414,7 +385,6 @@ Muxers/Demuxers:
flvdec.c, flvenc.c Michael Niedermayer
gxf.c Reimar Doeffinger
gxfenc.c Baptiste Coudurier
hls.c Anssi Hannula
idcin.c Mike Melanson
idroqdec.c Mike Melanson
iff.c Jaikrishnan Menon
@@ -432,7 +402,6 @@ Muxers/Demuxers:
matroska.c Aurelien Jacobs
matroskadec.c Aurelien Jacobs
matroskaenc.c David Conrad
matroska subtitles (matroskaenc.c) John Peebles
metadata* Aurelien Jacobs
mgsts.c Paul B Mahol
microdvd* Aurelien Jacobs
@@ -442,15 +411,14 @@ Muxers/Demuxers:
mpc.c Kostya Shishkov
mpeg.c Michael Niedermayer
mpegenc.c Michael Niedermayer
mpegts.c Marton Balint
mpegtsenc.c Baptiste Coudurier
mpegts* Baptiste Coudurier
msnwc_tcp.c Ramiro Polla
mtv.c Reynaldo H. Verdejo Pinochet
mxf* Baptiste Coudurier
mxfdec.c Tomas Härdin
nistspheredec.c Paul B Mahol
nsvdec.c Francois Revol
nut* Michael Niedermayer
nut.c Michael Niedermayer
nuv.c Reimar Doeffinger
oggdec.c, oggdec.h David Conrad
oggenc.c Baptiste Coudurier
@@ -467,23 +435,15 @@ Muxers/Demuxers:
rmdec.c, rmenc.c Ronald S. Bultje, Kostya Shishkov
rtmp* Kostya Shishkov
rtp.c, rtpenc.c Martin Storsjo
rtpdec_ac3.* Gilles Chanteperdrix
rtpdec_dv.* Thomas Volkert
rtpdec_h261.*, rtpenc_h261.* Thomas Volkert
rtpdec_hevc.*, rtpenc_hevc.* Thomas Volkert
rtpdec_mpa_robust.* Gilles Chanteperdrix
rtpdec_asf.* Ronald S. Bultje
rtpdec_vp9.c Thomas Volkert
rtpenc_mpv.*, rtpenc_aac.* Martin Storsjo
rtsp.c Luca Barbato
sbgdec.c Nicolas George
sdp.c Martin Storsjo
segafilm.c Mike Melanson
segment.c Stefano Sabatini
siff.c Kostya Shishkov
smacker.c Kostya Shishkov
smjpeg* Paul B Mahol
spdif* Anssi Hannula
srtdec.c Aurelien Jacobs
swf.c Baptiste Coudurier
takdec.c Paul B Mahol
@@ -492,7 +452,6 @@ Muxers/Demuxers:
voc.c Aurelien Jacobs
wav.c Michael Niedermayer
wc3movie.c Mike Melanson
webm dash (matroskaenc.c) Vignesh Venkatasubramanian
webvtt* Matthew J Heaney
westwood.c Mike Melanson
wtv.c Peter Ross
@@ -506,7 +465,6 @@ Protocols:
libssh.c Lukasz Marek
mms*.c Ronald S. Bultje
udp.c Luca Abeni
icecast.c Marvin Scholz


libswresample
@@ -535,8 +493,6 @@ Amiga / PowerPC Colin Ward
Linux / PowerPC Luca Barbato
Windows MinGW Alex Beregszaszi, Ramiro Polla
Windows Cygwin Victor Paesa
Windows MSVC Matthew Oliver
Windows ICL Matthew Oliver
ADI/Blackfin DSP Marc Hoffman
Sparc Roman Shaposhnik
x86 Michael Niedermayer
@@ -545,10 +501,8 @@ x86 Michael Niedermayer
Releases
========

2.6 Michael Niedermayer
2.5 Michael Niedermayer
2.4 Michael Niedermayer
2.2 Michael Niedermayer
2.1 Michael Niedermayer
2.0 Michael Niedermayer

If you want to maintain an older release, please contact us

@@ -564,7 +518,7 @@ Attila Kinali 11F0 F9A6 A1D2 11F6 C745 D10C 6520 BCDD F2DF E765
Baptiste Coudurier 8D77 134D 20CC 9220 201F C5DB 0AC9 325C 5C1A BAAA
Ben Littler 3EE3 3723 E560 3214 A8CD 4DEB 2CDB FCE7 768C 8D2C
Benoit Fouet B22A 4F4F 43EF 636B BB66 FCDC 0023 AE1E 2985 49C8
Clément Bœsch 52D0 3A82 D445 F194 DB8B 2B16 87EE 2CB8 F4B8 FCF9
Bœsch Clément 52D0 3A82 D445 F194 DB8B 2B16 87EE 2CB8 F4B8 FCF9
Daniel Verkamp 78A6 07ED 782C 653E C628 B8B9 F0EB 8DD8 2F0E 21C7
Diego Biurrun 8227 1E31 B6D9 4994 7427 E220 9CAE D6CC 4757 FCC5
FFmpeg release signing key FCF9 86EA 15E6 E293 A564 4F10 B432 2F04 D676 58D8
@@ -579,14 +533,12 @@ Michael Niedermayer 9FF2 128B 147E F673 0BAD F133 611E C787 040B 0FAB
Nicolas George 24CE 01CE 9ACC 5CEB 74D8 8D9D B063 D997 36E5 4C93
Panagiotis Issaris 6571 13A3 33D9 3726 F728 AA98 F643 B12E ECF3 E029
Peter Ross A907 E02F A6E5 0CD2 34CD 20D2 6760 79C5 AC40 DD6B
Reimar Doeffinger C61D 16E5 9E2C D10C 8958 38A4 0899 A2B9 06D4 D9C7
Reimar Döffinger C61D 16E5 9E2C D10C 8958 38A4 0899 A2B9 06D4 D9C7
Reinhard Tartler 9300 5DC2 7E87 6C37 ED7B CA9A 9808 3544 9453 48A4
Reynaldo H. Verdejo Pinochet 6E27 CD34 170C C78E 4D4F 5F40 C18E 077F 3114 452A
Robert Swain EE7A 56EA 4A81 A7B5 2001 A521 67FA 362D A2FC 3E71
Sascha Sommer 38A0 F88B 868E 9D3A 97D4 D6A0 E823 706F 1E07 0D3C
Stefano Sabatini 0D0B AD6B 5330 BBAD D3D6 6A0C 719C 2839 FC43 2D5F
Stephan Hilb 4F38 0B3A 5F39 B99B F505 E562 8D5C 5554 4E17 8863
Tiancheng "Timothy" Gu 9456 AFC0 814A 8139 E994 8351 7FE6 B095 B582 B0D4
Tim Nicholson 38CF DB09 3ED0 F607 8B67 6CED 0C0B FC44 8B0B FC83
Tomas Härdin A79D 4E3D F38F 763F 91F5 8B33 A01E 8AE0 41BB 2551
Wei Gao 4269 7741 857A 0E60 9EC5 08D2 4744 4EFA 62C1 87B9

Makefile | 100

@@ -4,50 +4,39 @@ include config.mak
vpath %.c $(SRC_PATH)
vpath %.cpp $(SRC_PATH)
vpath %.h $(SRC_PATH)
vpath %.m $(SRC_PATH)
vpath %.S $(SRC_PATH)
vpath %.asm $(SRC_PATH)
vpath %.rc $(SRC_PATH)
vpath %.v $(SRC_PATH)
vpath %.texi $(SRC_PATH)
vpath %/fate_config.sh.template $(SRC_PATH)

AVPROGS-$(CONFIG_FFMPEG) += ffmpeg
AVPROGS-$(CONFIG_FFPLAY) += ffplay
AVPROGS-$(CONFIG_FFPROBE) += ffprobe
AVPROGS-$(CONFIG_FFSERVER) += ffserver
PROGS-$(CONFIG_FFMPEG) += ffmpeg
PROGS-$(CONFIG_FFPLAY) += ffplay
PROGS-$(CONFIG_FFPROBE) += ffprobe
PROGS-$(CONFIG_FFSERVER) += ffserver

AVPROGS := $(AVPROGS-yes:%=%$(PROGSSUF)$(EXESUF))
INSTPROGS = $(AVPROGS-yes:%=%$(PROGSSUF)$(EXESUF))
PROGS += $(AVPROGS)

AVBASENAMES = ffmpeg ffplay ffprobe ffserver
ALLAVPROGS = $(AVBASENAMES:%=%$(PROGSSUF)$(EXESUF))
ALLAVPROGS_G = $(AVBASENAMES:%=%$(PROGSSUF)_g$(EXESUF))

$(foreach prog,$(AVBASENAMES),$(eval OBJS-$(prog) += cmdutils.o))
$(foreach prog,$(AVBASENAMES),$(eval OBJS-$(prog)-$(CONFIG_OPENCL) += cmdutils_opencl.o))

OBJS-ffmpeg += ffmpeg_opt.o ffmpeg_filter.o
OBJS-ffmpeg-$(HAVE_VDPAU_X11) += ffmpeg_vdpau.o
OBJS-ffmpeg-$(HAVE_DXVA2_LIB) += ffmpeg_dxva2.o
OBJS-ffmpeg-$(CONFIG_VDA) += ffmpeg_vda.o
OBJS-ffserver += ffserver_config.o
PROGS := $(PROGS-yes:%=%$(PROGSSUF)$(EXESUF))
INSTPROGS = $(PROGS-yes:%=%$(PROGSSUF)$(EXESUF))

OBJS = cmdutils.o $(EXEOBJS)
OBJS-ffmpeg = ffmpeg_opt.o ffmpeg_filter.o
TESTTOOLS = audiogen videogen rotozoom tiny_psnr tiny_ssim base64
HOSTPROGS := $(TESTTOOLS:%=tests/%) doc/print_options
TOOLS = qt-faststart trasher uncoded_frame
TOOLS = qt-faststart trasher
TOOLS-$(CONFIG_ZLIB) += cws2fws

# $(FFLIBS-yes) needs to be in linking order
FFLIBS-$(CONFIG_AVDEVICE) += avdevice
FFLIBS-$(CONFIG_AVFILTER) += avfilter
FFLIBS-$(CONFIG_AVFORMAT) += avformat
FFLIBS-$(CONFIG_AVCODEC) += avcodec
BASENAMES = ffmpeg ffplay ffprobe ffserver
ALLPROGS = $(BASENAMES:%=%$(PROGSSUF)$(EXESUF))
ALLPROGS_G = $(BASENAMES:%=%$(PROGSSUF)_g$(EXESUF))

FFLIBS-$(CONFIG_AVDEVICE) += avdevice
FFLIBS-$(CONFIG_AVFILTER) += avfilter
FFLIBS-$(CONFIG_AVFORMAT) += avformat
FFLIBS-$(CONFIG_AVRESAMPLE) += avresample
FFLIBS-$(CONFIG_POSTPROC) += postproc
FFLIBS-$(CONFIG_SWRESAMPLE) += swresample
FFLIBS-$(CONFIG_SWSCALE) += swscale
FFLIBS-$(CONFIG_AVCODEC) += avcodec
FFLIBS-$(CONFIG_POSTPROC) += postproc
FFLIBS-$(CONFIG_SWRESAMPLE)+= swresample
FFLIBS-$(CONFIG_SWSCALE) += swscale

FFLIBS := avutil

@@ -61,14 +50,16 @@ include $(SRC_PATH)/common.mak
FF_EXTRALIBS := $(FFEXTRALIBS)
FF_DEP_LIBS := $(DEP_LIBS)

all: $(AVPROGS)
all: $(PROGS)

$(PROGS): %$(EXESUF): %_g$(EXESUF)
$(CP) $< $@
$(STRIP) $@

$(TOOLS): %$(EXESUF): %.o $(EXEOBJS)
$(LD) $(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $^ $(ELIBS)
$(LD) $(LDFLAGS) $(LD_O) $^ $(ELIBS)

tools/cws2fws$(EXESUF): ELIBS = $(ZLIB)
tools/uncoded_frame$(EXESUF): $(FF_DEP_LIBS)
tools/uncoded_frame$(EXESUF): ELIBS = $(FF_EXTRALIBS)

config.h: .config
.config: $(wildcard $(FFLIBS:%=$(SRC_PATH)/lib%/all*.c))
@@ -78,10 +69,11 @@ config.h: .config

SUBDIR_VARS := CLEANFILES EXAMPLES FFLIBS HOSTPROGS TESTPROGS TOOLS \
HEADERS ARCH_HEADERS BUILT_HEADERS SKIPHEADERS \
ARMV5TE-OBJS ARMV6-OBJS ARMV8-OBJS VFP-OBJS NEON-OBJS \
ALTIVEC-OBJS MMX-OBJS YASM-OBJS \
MIPSFPU-OBJS MIPSDSPR2-OBJS MIPSDSPR1-OBJS \
OBJS SLIBOBJS HOSTOBJS TESTOBJS
ARMV5TE-OBJS ARMV6-OBJS VFP-OBJS NEON-OBJS \
ALTIVEC-OBJS VIS-OBJS \
MMX-OBJS YASM-OBJS \
MIPSFPU-OBJS MIPSDSPR2-OBJS MIPSDSPR1-OBJS MIPS32R2-OBJS \
OBJS HOSTOBJS TESTOBJS

define RESET
$(1) :=
@@ -93,16 +85,13 @@ $(foreach V,$(SUBDIR_VARS),$(eval $(call RESET,$(V))))
SUBDIR := $(1)/
include $(SRC_PATH)/$(1)/Makefile
-include $(SRC_PATH)/$(1)/$(ARCH)/Makefile
-include $(SRC_PATH)/$(1)/$(INTRINSICS)/Makefile
include $(SRC_PATH)/library.mak
endef

$(foreach D,$(FFLIBS),$(eval $(call DOSUBDIR,lib$(D))))

include $(SRC_PATH)/doc/Makefile

define DOPROG
OBJS-$(1) += $(1).o $(EXEOBJS) $(OBJS-$(1)-yes)
OBJS-$(1) += $(1).o cmdutils.o $(EXEOBJS)
$(1)$(PROGSSUF)_g$(EXESUF): $$(OBJS-$(1))
$$(OBJS-$(1)): CFLAGS += $(CFLAGS-$(1))
$(1)$(PROGSSUF)_g$(EXESUF): LDFLAGS += $(LDFLAGS-$(1))
@@ -110,16 +99,10 @@ $(1)$(PROGSSUF)_g$(EXESUF): FF_EXTRALIBS += $(LIBS-$(1))
-include $$(OBJS-$(1):.o=.d)
endef

$(foreach P,$(PROGS),$(eval $(call DOPROG,$(P:$(PROGSSUF)$(EXESUF)=))))

ffprobe.o cmdutils.o libavcodec/utils.o libavformat/utils.o libavdevice/avdevice.o libavfilter/avfilter.o libavutil/utils.o libpostproc/postprocess.o libswresample/swresample.o libswscale/utils.o : libavutil/ffversion.h

$(PROGS): %$(PROGSSUF)$(EXESUF): %$(PROGSSUF)_g$(EXESUF)
$(CP) $< $@
$(STRIP) $@
$(foreach P,$(PROGS-yes),$(eval $(call DOPROG,$(P))))

%$(PROGSSUF)_g$(EXESUF): %.o $(FF_DEP_LIBS)
$(LD) $(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $(OBJS-$*) $(FF_EXTRALIBS)
$(LD) $(LDFLAGS) $(LD_O) $(OBJS-$*) $(FF_EXTRALIBS)

OBJDIRS += tools

@@ -131,14 +114,14 @@ GIT_LOG = $(SRC_PATH)/.git/logs/HEAD
.version: $(wildcard $(GIT_LOG)) $(VERSION_SH) config.mak
.version: M=@

libavutil/ffversion.h .version:
$(M)$(VERSION_SH) $(SRC_PATH) libavutil/ffversion.h $(EXTRA_VERSION)
version.h .version:
$(M)$(VERSION_SH) $(SRC_PATH) version.h $(EXTRA_VERSION)
$(Q)touch .version

# force version.sh to run whenever version might have changed
-include .version

ifdef AVPROGS
ifdef PROGS
install: install-progs install-data
endif

@@ -149,7 +132,7 @@ install-libs: install-libs-yes
install-progs-yes:
install-progs-$(CONFIG_SHARED): install-libs

install-progs: install-progs-yes $(AVPROGS)
install-progs: install-progs-yes $(PROGS)
$(Q)mkdir -p "$(BINDIR)"
$(INSTALL) -c -m 755 $(INSTPROGS) "$(BINDIR)"

@@ -161,13 +144,13 @@ install-data: $(DATA_FILES) $(EXAMPLES_FILES)
uninstall: uninstall-libs uninstall-headers uninstall-progs uninstall-data

uninstall-progs:
$(RM) $(addprefix "$(BINDIR)/", $(ALLAVPROGS))
$(RM) $(addprefix "$(BINDIR)/", $(ALLPROGS))

uninstall-data:
$(RM) -r "$(DATADIR)"

clean::
$(RM) $(ALLAVPROGS) $(ALLAVPROGS_G)
$(RM) $(ALLPROGS) $(ALLPROGS_G)
$(RM) $(CLEANSUFFIXES)
$(RM) $(CLEANSUFFIXES:%=tools/%)
$(RM) -r coverage-html
@@ -175,13 +158,14 @@ clean::

distclean::
$(RM) $(DISTCLEANSUFFIXES)
$(RM) config.* .config libavutil/avconfig.h .version version.h libavutil/ffversion.h libavcodec/codec_names.h
$(RM) config.* .config libavutil/avconfig.h .version version.h libavcodec/codec_names.h

config:
$(SRC_PATH)/configure $(value FFMPEG_CONFIGURATION)

check: all alltools examples testprogs fate

include $(SRC_PATH)/doc/Makefile
include $(SRC_PATH)/tests/Makefile

$(sort $(OBJDIRS)):

README (normal file) | 18

@@ -0,0 +1,18 @@
FFmpeg README
-------------

1) Documentation
----------------

* Read the documentation in the doc/ directory in git.
You can also view it online at http://ffmpeg.org/documentation.html

2) Licensing
------------

* See the LICENSE file.

3) Build and Install
--------------------

* See the INSTALL file.

README.md | 42

@@ -1,42 +0,0 @@
FFmpeg README
=============

FFmpeg is a collection of libraries and tools to process multimedia content
such as audio, video, subtitles and related metadata.

## Libraries

* `libavcodec` provides implementation of a wider range of codecs.
* `libavformat` implements streaming protocols, container formats and basic I/O access.
* `libavutil` includes hashers, decompressors and miscellaneous utility functions.
* `libavfilter` provides a mean to alter decoded Audio and Video through chain of filters.
* `libavdevice` provides an abstraction to access capture and playback devices.
* `libswresample` implements audio mixing and resampling routines.
* `libswscale` implements color conversion and scaling routines.

## Tools

* [ffmpeg](http://ffmpeg.org/ffmpeg.html) is a command line toolbox to
manipulate, convert and stream multimedia content.
* [ffplay](http://ffmpeg.org/ffplay.html) is a minimalistic multimedia player.
* [ffprobe](http://ffmpeg.org/ffprobe.html) is a simple analysis tool to inspect
multimedia content.
* [ffserver](http://ffmpeg.org/ffserver.html) is a multimedia streaming server
for live broadcasts.
* Additional small tools such as `aviocat`, `ismindex` and `qt-faststart`.

## Documentation

The offline documentation is available in the **doc/** directory.

The online documentation is available in the main [website](http://ffmpeg.org)
and in the [wiki](http://trac.ffmpeg.org).

### Examples

Coding examples are available in the **doc/examples** directory.

## License

FFmpeg codebase is mainly LGPL-licensed with optional components licensed under
GPL. Please refer to the LICENSE file for detailed information.

arch.mak | 4

@@ -1,14 +1,16 @@
OBJS-$(HAVE_ARMV5TE) += $(ARMV5TE-OBJS) $(ARMV5TE-OBJS-yes)
OBJS-$(HAVE_ARMV6) += $(ARMV6-OBJS) $(ARMV6-OBJS-yes)
OBJS-$(HAVE_ARMV8) += $(ARMV8-OBJS) $(ARMV8-OBJS-yes)
OBJS-$(HAVE_VFP) += $(VFP-OBJS) $(VFP-OBJS-yes)
OBJS-$(HAVE_NEON) += $(NEON-OBJS) $(NEON-OBJS-yes)

OBJS-$(HAVE_MIPSFPU) += $(MIPSFPU-OBJS) $(MIPSFPU-OBJS-yes)
OBJS-$(HAVE_MIPS32R2) += $(MIPS32R2-OBJS) $(MIPS32R2-OBJS-yes)
OBJS-$(HAVE_MIPSDSPR1) += $(MIPSDSPR1-OBJS) $(MIPSDSPR1-OBJS-yes)
OBJS-$(HAVE_MIPSDSPR2) += $(MIPSDSPR2-OBJS) $(MIPSDSPR2-OBJS-yes)

OBJS-$(HAVE_ALTIVEC) += $(ALTIVEC-OBJS) $(ALTIVEC-OBJS-yes)

OBJS-$(HAVE_VIS) += $(VIS-OBJS) $(VIS-OBJS-yes)

OBJS-$(HAVE_MMX) += $(MMX-OBJS) $(MMX-OBJS-yes)
OBJS-$(HAVE_YASM) += $(YASM-OBJS) $(YASM-OBJS-yes)
393
cmdutils.c
393
cmdutils.c
@@ -20,7 +20,6 @@
|
||||
*/
|
||||
|
||||
#include <string.h>
|
||||
#include <stdint.h>
|
||||
#include <stdlib.h>
|
||||
#include <errno.h>
|
||||
#include <math.h>
|
||||
@@ -49,8 +48,8 @@
|
||||
#include "libavutil/dict.h"
|
||||
#include "libavutil/opt.h"
|
||||
#include "libavutil/cpu.h"
|
||||
#include "libavutil/ffversion.h"
|
||||
#include "cmdutils.h"
|
||||
#include "version.h"
|
||||
#if CONFIG_NETWORK
|
||||
#include "libavformat/network.h"
|
||||
#endif
|
||||
@@ -58,6 +57,10 @@
|
||||
#include <sys/time.h>
|
||||
#include <sys/resource.h>
|
||||
#endif
|
||||
#if CONFIG_OPENCL
|
||||
#include "libavutil/opencl.h"
|
||||
#endif
|
||||
|
||||
|
||||
static int init_report(const char *env);
|
||||
|
||||
@@ -65,9 +68,9 @@ struct SwsContext *sws_opts;
|
||||
AVDictionary *swr_opts;
|
||||
AVDictionary *format_opts, *codec_opts, *resample_opts;
|
||||
|
||||
const int this_year = 2013;
|
||||
|
||||
static FILE *report_file;
|
||||
static int report_file_level = AV_LOG_DEBUG;
|
||||
int hide_banner = 0;
|
||||
|
||||
void init_opts(void)
|
||||
{
|
||||
@@ -105,10 +108,8 @@ static void log_callback_report(void *ptr, int level, const char *fmt, va_list v
|
||||
av_log_default_callback(ptr, level, fmt, vl);
|
||||
av_log_format_line(ptr, level, fmt, vl2, line, sizeof(line), &print_prefix);
|
||||
va_end(vl2);
|
||||
if (report_file_level >= level) {
|
||||
fputs(line, report_file);
|
||||
fflush(report_file);
|
||||
}
|
||||
fputs(line, report_file);
|
||||
fflush(report_file);
|
||||
}
|
||||
|
||||
static void (*program_exit)(int ret);
|
||||
@@ -166,7 +167,7 @@ void show_help_options(const OptionDef *options, const char *msg, int req_flags,
|
||||
int first;
|
||||
|
||||
first = 1;
|
||||
for (po = options; po->name; po++) {
|
||||
for (po = options; po->name != NULL; po++) {
|
||||
char buf[64];
|
||||
|
||||
if (((po->flags & req_flags) != req_flags) ||
|
||||
@@ -205,7 +206,7 @@ static const OptionDef *find_option(const OptionDef *po, const char *name)
|
||||
const char *p = strchr(name, ':');
|
||||
int len = p ? p - name : strlen(name);
|
||||
|
||||
while (po->name) {
|
||||
while (po->name != NULL) {
|
||||
if (!strncmp(name, po->name, len) && strlen(po->name) == len)
|
||||
break;
|
||||
po++;
|
||||
@@ -254,7 +255,7 @@ static void prepare_app_arguments(int *argc_ptr, char ***argv_ptr)
|
||||
|
||||
win32_argv_utf8 = av_mallocz(sizeof(char *) * (win32_argc + 1) + buffsize);
|
||||
argstr_flat = (char *)win32_argv_utf8 + sizeof(char *) * (win32_argc + 1);
|
||||
if (!win32_argv_utf8) {
|
||||
if (win32_argv_utf8 == NULL) {
|
||||
LocalFree(argv_w);
|
||||
return;
|
||||
}
|
||||
@@ -290,14 +291,10 @@ static int write_option(void *optctx, const OptionDef *po, const char *opt,
|
||||
if (po->flags & OPT_SPEC) {
|
||||
SpecifierOpt **so = dst;
|
||||
char *p = strchr(opt, ':');
|
||||
char *str;
|
||||
|
||||
dstcount = (int *)(so + 1);
|
||||
*so = grow_array(*so, sizeof(**so), dstcount, *dstcount + 1);
|
||||
str = av_strdup(p ? p + 1 : "");
|
||||
if (!str)
|
||||
return AVERROR(ENOMEM);
|
||||
(*so)[*dstcount - 1].specifier = str;
|
||||
(*so)[*dstcount - 1].specifier = av_strdup(p ? p + 1 : "");
|
||||
dst = &(*so)[*dstcount - 1].u;
|
||||
}
|
||||
|
||||
@@ -305,8 +302,6 @@ static int write_option(void *optctx, const OptionDef *po, const char *opt,
|
||||
char *str;
|
||||
str = av_strdup(arg);
|
||||
av_freep(dst);
|
||||
if (!str)
|
||||
return AVERROR(ENOMEM);
|
||||
*(char **)dst = str;
|
||||
} else if (po->flags & OPT_BOOL || po->flags & OPT_INT) {
|
||||
*(int *)dst = parse_number_or_die(opt, arg, OPT_INT64, INT_MIN, INT_MAX);
|
||||
@@ -450,7 +445,7 @@ int locate_option(int argc, char **argv, const OptionDef *options,
|
||||
(po->name && !strcmp(optname, po->name)))
|
||||
return i;
|
||||
|
||||
if (!po->name || po->flags & HAS_ARG)
|
||||
if (po->flags & HAS_ARG)
|
||||
i++;
|
||||
}
|
||||
return 0;
|
||||
@@ -501,9 +496,6 @@ void parse_loglevel(int argc, char **argv, const OptionDef *options)
|
||||
fflush(report_file);
|
||||
}
|
||||
}
|
||||
idx = locate_option(argc, argv, options, "hide_banner");
|
||||
if (idx)
|
||||
hide_banner = 1;
|
||||
}
|
||||
|
||||
static const AVOption *opt_find(void *obj, const char *name, const char *unit,
|
||||
@@ -561,11 +553,6 @@ int opt_default(void *optctx, const char *opt, const char *arg)
|
||||
}
|
||||
consumed = 1;
|
||||
}
|
||||
#else
|
||||
if (!consumed && !strcmp(opt, "sws_flags")) {
|
||||
av_log(NULL, AV_LOG_WARNING, "Ignoring %s %s, due to disabled swscale\n", opt, arg);
|
||||
consumed = 1;
|
||||
}
|
||||
#endif
|
||||
#if CONFIG_SWRESAMPLE
|
||||
swr_class = swr_get_class();
|
||||
@@ -676,7 +663,7 @@ static void init_parse_context(OptionParseContext *octx,
|
||||
memset(octx, 0, sizeof(*octx));
|
||||
|
||||
octx->nb_groups = nb_groups;
|
||||
octx->groups = av_mallocz_array(octx->nb_groups, sizeof(*octx->groups));
|
||||
octx->groups = av_mallocz(sizeof(*octx->groups) * octx->nb_groups);
|
||||
if (!octx->groups)
|
||||
exit_program(1);
|
||||
|
||||
@@ -848,17 +835,10 @@ int opt_loglevel(void *optctx, const char *opt, const char *arg)
|
||||
};
|
||||
char *tail;
|
||||
int level;
|
||||
int flags;
|
||||
int i;
|
||||
|
||||
flags = av_log_get_flags();
|
||||
tail = strstr(arg, "repeat");
|
||||
if (tail)
|
||||
flags &= ~AV_LOG_SKIP_REPEATED;
|
||||
else
|
||||
flags |= AV_LOG_SKIP_REPEATED;
|
||||
|
||||
av_log_set_flags(flags);
|
||||
av_log_set_flags(tail ? 0 : AV_LOG_SKIP_REPEATED);
|
||||
if (tail == arg)
|
||||
arg += 6 + (arg[6]=='+');
|
||||
if(tail && !*arg)
|
||||
@@ -940,13 +920,6 @@ static int init_report(const char *env)
|
||||
av_free(filename_template);
|
||||
filename_template = val;
|
||||
val = NULL;
|
||||
} else if (!strcmp(key, "level")) {
|
||||
char *tail;
|
||||
report_file_level = strtol(val, &tail, 10);
|
||||
if (*tail) {
|
||||
av_log(NULL, AV_LOG_FATAL, "Invalid report file level\n");
|
||||
exit_program(1);
|
||||
}
|
||||
} else {
|
||||
av_log(NULL, AV_LOG_ERROR, "Unknown key '%s' in FFREPORT\n", key);
|
||||
}
|
||||
@@ -965,10 +938,9 @@ static int init_report(const char *env)
|
||||
|
||||
report_file = fopen(filename.str, "w");
|
||||
if (!report_file) {
|
||||
int ret = AVERROR(errno);
|
||||
av_log(NULL, AV_LOG_ERROR, "Failed to open report \"%s\": %s\n",
|
||||
filename.str, strerror(errno));
|
||||
return ret;
|
||||
return AVERROR(errno);
|
||||
}
|
||||
av_log_set_callback(log_callback_report);
|
||||
av_log(NULL, AV_LOG_INFO,
|
||||
@@ -1014,6 +986,26 @@ int opt_timelimit(void *optctx, const char *opt, const char *arg)
|
||||
return 0;
|
||||
}
|
||||
|
||||
#if CONFIG_OPENCL
|
||||
int opt_opencl(void *optctx, const char *opt, const char *arg)
|
||||
{
|
||||
char *key, *value;
|
||||
const char *opts = arg;
|
||||
int ret = 0;
|
||||
while (*opts) {
|
||||
ret = av_opt_get_key_value(&opts, "=", ":", 0, &key, &value);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
ret = av_opencl_set_option(key, value);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
if (*opts)
|
||||
opts++;
|
||||
}
|
||||
return ret;
|
||||
}
|
||||
#endif
|
||||
|
||||
void print_error(const char *filename, int err)
|
||||
{
|
||||
char errbuf[128];
|
||||
@@ -1079,43 +1071,18 @@ static void print_program_info(int flags, int level)
|
||||
av_log(NULL, level, "%s version " FFMPEG_VERSION, program_name);
|
||||
if (flags & SHOW_COPYRIGHT)
|
||||
av_log(NULL, level, " Copyright (c) %d-%d the FFmpeg developers",
|
||||
program_birth_year, CONFIG_THIS_YEAR);
|
||||
program_birth_year, this_year);
|
||||
av_log(NULL, level, "\n");
|
||||
av_log(NULL, level, "%sbuilt with %s\n", indent, CC_IDENT);
|
||||
av_log(NULL, level, "%sbuilt on %s %s with %s\n",
|
||||
indent, __DATE__, __TIME__, CC_IDENT);
|
||||
|
||||
av_log(NULL, level, "%sconfiguration: " FFMPEG_CONFIGURATION "\n", indent);
|
||||
}
|
||||
|
||||
static void print_buildconf(int flags, int level)
|
||||
{
|
||||
const char *indent = flags & INDENT ? " " : "";
|
||||
char str[] = { FFMPEG_CONFIGURATION };
|
||||
char *conflist, *remove_tilde, *splitconf;
|
||||
|
||||
// Change all the ' --' strings to '~--' so that
|
||||
// they can be identified as tokens.
|
||||
while ((conflist = strstr(str, " --")) != NULL) {
|
||||
strncpy(conflist, "~--", 3);
|
||||
}
|
||||
|
||||
// Compensate for the weirdness this would cause
|
||||
// when passing 'pkg-config --static'.
|
||||
while ((remove_tilde = strstr(str, "pkg-config~")) != NULL) {
|
||||
strncpy(remove_tilde, "pkg-config ", 11);
|
||||
}
|
||||
|
||||
splitconf = strtok(str, "~");
|
||||
av_log(NULL, level, "\n%sconfiguration:\n", indent);
|
||||
while (splitconf != NULL) {
|
||||
av_log(NULL, level, "%s%s%s\n", indent, indent, splitconf);
|
||||
splitconf = strtok(NULL, "~");
|
||||
}
|
||||
}
|
||||
|
||||
void show_banner(int argc, char **argv, const OptionDef *options)
|
||||
{
|
||||
int idx = locate_option(argc, argv, options, "version");
|
||||
if (hide_banner || idx)
|
||||
if (idx)
|
||||
return;
|
||||
|
||||
print_program_info (INDENT|SHOW_COPYRIGHT, AV_LOG_INFO);
|
||||
@@ -1126,20 +1093,12 @@ void show_banner(int argc, char **argv, const OptionDef *options)
|
||||
int show_version(void *optctx, const char *opt, const char *arg)
|
||||
{
|
||||
av_log_set_callback(log_callback_help);
|
||||
print_program_info (SHOW_COPYRIGHT, AV_LOG_INFO);
|
||||
print_program_info (0 , AV_LOG_INFO);
|
||||
print_all_libs_info(SHOW_VERSION, AV_LOG_INFO);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int show_buildconf(void *optctx, const char *opt, const char *arg)
|
||||
{
|
||||
av_log_set_callback(log_callback_help);
|
||||
print_buildconf (INDENT|0, AV_LOG_INFO);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int show_license(void *optctx, const char *opt, const char *arg)
|
||||
{
|
||||
#if CONFIG_NONFREE
|
||||
@@ -1214,24 +1173,16 @@ int show_license(void *optctx, const char *opt, const char *arg)
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int is_device(const AVClass *avclass)
|
||||
{
|
||||
if (!avclass)
|
||||
return 0;
|
||||
return AV_IS_INPUT_DEVICE(avclass->category) || AV_IS_OUTPUT_DEVICE(avclass->category);
|
||||
}
|
||||
|
||||
static int show_formats_devices(void *optctx, const char *opt, const char *arg, int device_only)
|
||||
int show_formats(void *optctx, const char *opt, const char *arg)
|
||||
{
|
||||
AVInputFormat *ifmt = NULL;
|
||||
AVOutputFormat *ofmt = NULL;
|
||||
const char *last_name;
|
||||
int is_dev;
|
||||
|
||||
printf("%s\n"
|
||||
printf("File formats:\n"
|
||||
" D. = Demuxing supported\n"
|
||||
" .E = Muxing supported\n"
|
||||
" --\n", device_only ? "Devices:" : "File formats:");
|
||||
" --\n");
|
||||
last_name = "000";
|
||||
for (;;) {
|
||||
int decode = 0;
|
||||
@@ -1240,10 +1191,7 @@ static int show_formats_devices(void *optctx, const char *opt, const char *arg,
|
||||
const char *long_name = NULL;
|
||||
|
||||
while ((ofmt = av_oformat_next(ofmt))) {
|
||||
is_dev = is_device(ofmt->priv_class);
|
||||
if (!is_dev && device_only)
|
||||
continue;
|
||||
if ((!name || strcmp(ofmt->name, name) < 0) &&
|
||||
if ((name == NULL || strcmp(ofmt->name, name) < 0) &&
|
||||
strcmp(ofmt->name, last_name) > 0) {
|
||||
name = ofmt->name;
|
||||
long_name = ofmt->long_name;
|
||||
@@ -1251,10 +1199,7 @@ static int show_formats_devices(void *optctx, const char *opt, const char *arg,
|
||||
}
|
||||
}
|
||||
while ((ifmt = av_iformat_next(ifmt))) {
|
||||
is_dev = is_device(ifmt->priv_class);
|
||||
if (!is_dev && device_only)
|
||||
continue;
|
||||
if ((!name || strcmp(ifmt->name, name) < 0) &&
|
||||
if ((name == NULL || strcmp(ifmt->name, name) < 0) &&
|
||||
strcmp(ifmt->name, last_name) > 0) {
|
||||
name = ifmt->name;
|
||||
long_name = ifmt->long_name;
|
||||
@@ -1263,7 +1208,7 @@ static int show_formats_devices(void *optctx, const char *opt, const char *arg,
|
||||
if (name && strcmp(ifmt->name, name) == 0)
|
||||
decode = 1;
|
||||
}
|
||||
if (!name)
|
||||
if (name == NULL)
|
||||
break;
|
||||
last_name = name;
|
||||
|
||||
@@ -1276,16 +1221,6 @@ static int show_formats_devices(void *optctx, const char *opt, const char *arg,
|
||||
return 0;
|
||||
}
|
||||
|
||||
int show_formats(void *optctx, const char *opt, const char *arg)
|
||||
{
|
||||
return show_formats_devices(optctx, opt, arg, 0);
|
||||
}
|
||||
|
||||
int show_devices(void *optctx, const char *opt, const char *arg)
|
||||
{
|
||||
return show_formats_devices(optctx, opt, arg, 1);
|
||||
}
|
||||
|
||||
#define PRINT_CODEC_SUPPORTED(codec, field, type, list_name, term, get_name) \
|
||||
if (codec->field) { \
|
||||
const type *p = codec->field; \
|
||||
@@ -1430,9 +1365,6 @@ int show_codecs(void *optctx, const char *opt, const char *arg)
|
||||
const AVCodecDescriptor *desc = codecs[i];
|
||||
const AVCodec *codec = NULL;
|
||||
|
||||
if (strstr(desc->name, "_deprecated"))
|
||||
continue;
|
||||
|
||||
printf(" ");
|
||||
printf(avcodec_find_decoder(desc->id) ? "D" : ".");
|
||||
printf(avcodec_find_encoder(desc->id) ? "E" : ".");
|
||||
@@ -1544,8 +1476,7 @@ int show_protocols(void *optctx, const char *opt, const char *arg)
|
||||
|
||||
int show_filters(void *optctx, const char *opt, const char *arg)
|
||||
{
|
||||
#if CONFIG_AVFILTER
|
||||
const AVFilter *filter = NULL;
|
||||
const AVFilter av_unused(*filter) = NULL;
|
||||
char descr[64], *descr_cur;
|
||||
int i, j;
|
||||
const AVFilterPad *pad;
|
||||
@@ -1558,6 +1489,7 @@ int show_filters(void *optctx, const char *opt, const char *arg)
|
||||
" V = Video input/output\n"
|
||||
" N = Dynamic number and/or type of input/output\n"
|
||||
" | = Source or sink filter\n");
|
||||
#if CONFIG_AVFILTER
|
||||
while ((filter = avfilter_next(filter))) {
|
||||
descr_cur = descr;
|
||||
for (i = 0; i < 2; i++) {
|
||||
@@ -1582,13 +1514,11 @@ int show_filters(void *optctx, const char *opt, const char *arg)
|
||||
filter->process_command ? 'C' : '.',
|
||||
filter->name, descr, filter->description);
|
||||
}
|
||||
#else
|
||||
printf("No filters available: libavfilter disabled\n");
|
||||
#endif
|
||||
return 0;
|
||||
}
|
||||
|
||||
int show_colors(void *optctx, const char *opt, const char *arg)
|
||||
void show_colors(void *optctx, const char *opt, const char *arg)
|
||||
{
|
||||
const char *name;
|
||||
const uint8_t *rgb;
|
||||
@@ -1598,8 +1528,6 @@ int show_colors(void *optctx, const char *opt, const char *arg)
|
||||
|
||||
for (i = 0; name = av_get_known_color_name(i, &rgb); i++)
|
||||
printf("%-32s #%02x%02x%02x\n", name, rgb[0], rgb[1], rgb[2]);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int show_pix_fmts(void *optctx, const char *opt, const char *arg)
|
||||
@@ -1642,19 +1570,19 @@ int show_layouts(void *optctx, const char *opt, const char *arg)
|
||||
const char *name, *descr;
|
||||
|
||||
printf("Individual channels:\n"
|
||||
"NAME DESCRIPTION\n");
|
||||
"NAME DESCRIPTION\n");
|
||||
for (i = 0; i < 63; i++) {
|
||||
name = av_get_channel_name((uint64_t)1 << i);
|
||||
if (!name)
|
||||
continue;
|
||||
descr = av_get_channel_description((uint64_t)1 << i);
|
||||
printf("%-14s %s\n", name, descr);
|
||||
printf("%-12s%s\n", name, descr);
|
||||
}
|
||||
printf("\nStandard channel layouts:\n"
|
||||
"NAME DECOMPOSITION\n");
|
||||
"NAME DECOMPOSITION\n");
|
||||
for (i = 0; !av_get_standard_channel_layout(i, &layout, &name); i++) {
|
||||
if (name) {
|
||||
printf("%-14s ", name);
|
||||
printf("%-12s", name);
|
||||
for (j = 1; j; j <<= 1)
|
||||
if ((layout & j))
|
||||
printf("%s%s", (layout & (j - 1)) ? "+" : "", av_get_channel_name(j));
|
||||
@@ -1821,8 +1749,6 @@ int show_help(void *optctx, const char *opt, const char *arg)
|
||||
av_log_set_callback(log_callback_help);
|
||||
|
||||
topic = av_strdup(arg ? arg : "");
|
||||
if (!topic)
|
||||
return AVERROR(ENOMEM);
|
||||
par = strchr(topic, '=');
|
||||
if (par)
|
||||
*par++ = 0;
|
||||
@@ -1862,48 +1788,35 @@ int read_yesno(void)
|
||||
|
||||
int cmdutils_read_file(const char *filename, char **bufptr, size_t *size)
|
||||
{
|
||||
int64_t ret;
|
||||
FILE *f = av_fopen_utf8(filename, "rb");
|
||||
int ret;
|
||||
FILE *f = fopen(filename, "rb");
|
||||
|
||||
if (!f) {
|
||||
ret = AVERROR(errno);
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot read file '%s': %s\n", filename,
|
||||
strerror(errno));
|
||||
return ret;
|
||||
return AVERROR(errno);
|
||||
}
|
||||
|
||||
ret = fseek(f, 0, SEEK_END);
|
||||
if (ret == -1) {
|
||||
ret = AVERROR(errno);
|
||||
goto out;
|
||||
fseek(f, 0, SEEK_END);
|
||||
*size = ftell(f);
|
||||
fseek(f, 0, SEEK_SET);
|
||||
if (*size == (size_t)-1) {
|
||||
av_log(NULL, AV_LOG_ERROR, "IO error: %s\n", strerror(errno));
|
||||
fclose(f);
|
||||
return AVERROR(errno);
|
||||
}
|
||||
|
||||
ret = ftell(f);
|
||||
if (ret < 0) {
|
||||
ret = AVERROR(errno);
|
||||
goto out;
|
||||
}
|
||||
*size = ret;
|
||||
|
||||
ret = fseek(f, 0, SEEK_SET);
|
||||
if (ret == -1) {
|
||||
ret = AVERROR(errno);
|
||||
goto out;
|
||||
}
|
||||
|
||||
*bufptr = av_malloc(*size + 1);
|
||||
if (!*bufptr) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Could not allocate file buffer\n");
|
||||
ret = AVERROR(ENOMEM);
|
||||
goto out;
|
||||
fclose(f);
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
ret = fread(*bufptr, 1, *size, f);
|
||||
if (ret < *size) {
|
||||
av_free(*bufptr);
|
||||
if (ferror(f)) {
|
||||
ret = AVERROR(errno);
|
||||
av_log(NULL, AV_LOG_ERROR, "Error while reading file '%s': %s\n",
|
||||
filename, strerror(errno));
|
||||
ret = AVERROR(errno);
|
||||
} else
|
||||
ret = AVERROR_EOF;
|
||||
} else {
|
||||
@@ -1911,9 +1824,6 @@ int cmdutils_read_file(const char *filename, char **bufptr, size_t *size)
|
||||
(*bufptr)[(*size)++] = '\0';
|
||||
}
|
||||
|
||||
out:
|
||||
if (ret < 0)
|
||||
av_log(NULL, AV_LOG_ERROR, "IO error: %s\n", av_err2str(ret));
|
||||
fclose(f);
|
||||
return ret;
|
||||
}
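
One side of this hunk checks every fseek()/ftell() call and propagates errno; the other trusts them. A minimal standalone sketch of the checked pattern, using plain stdio rather than FFmpeg's helpers and with error reporting simplified (names are illustrative, not the project's):

```c
#include <stdio.h>
#include <stdlib.h>

/* Read a whole file into a NUL-terminated buffer; returns 0 on success,
 * -1 on any I/O or allocation failure. Sketch only. */
static int read_file(const char *filename, char **bufptr, size_t *size)
{
    long end;
    FILE *f = fopen(filename, "rb");

    if (!f)
        return -1;
    if (fseek(f, 0, SEEK_END) < 0 ||      /* every step is checked ...        */
        (end = ftell(f)) < 0 ||           /* ... instead of trusting ftell()  */
        fseek(f, 0, SEEK_SET) < 0) {
        fclose(f);
        return -1;
    }
    *size = (size_t)end;
    *bufptr = malloc(*size + 1);
    if (!*bufptr || fread(*bufptr, 1, *size, f) != *size) {
        free(*bufptr);
        fclose(f);
        return -1;
    }
    (*bufptr)[*size] = '\0';
    fclose(f);
    return 0;
}
```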
|
||||
@@ -2013,12 +1923,11 @@ AVDictionary *filter_codec_opts(AVDictionary *opts, enum AVCodecID codec_id,
|
||||
switch (check_stream_specifier(s, st, p + 1)) {
|
||||
case 1: *p = 0; break;
|
||||
case 0: continue;
|
||||
default: exit_program(1);
|
||||
default: return NULL;
|
||||
}
|
||||
|
||||
if (av_opt_find(&cc, t->key, NULL, flags, AV_OPT_SEARCH_FAKE_OBJ) ||
|
||||
!codec ||
|
||||
(codec->priv_class &&
|
||||
(codec && codec->priv_class &&
|
||||
av_opt_find(&codec->priv_class, t->key, NULL, flags,
|
||||
AV_OPT_SEARCH_FAKE_OBJ)))
|
||||
av_dict_set(&ret, t->key, t->value, 0);
|
||||
@@ -2041,7 +1950,7 @@ AVDictionary **setup_find_stream_info_opts(AVFormatContext *s,
|
||||
|
||||
if (!s->nb_streams)
|
||||
return NULL;
|
||||
opts = av_mallocz_array(s->nb_streams, sizeof(*opts));
|
||||
opts = av_mallocz(s->nb_streams * sizeof(*opts));
|
||||
if (!opts) {
|
||||
av_log(NULL, AV_LOG_ERROR,
|
||||
"Could not alloc memory for stream options.\n");
|
||||
@@ -2060,7 +1969,7 @@ void *grow_array(void *array, int elem_size, int *size, int new_size)
|
||||
exit_program(1);
|
||||
}
|
||||
if (*size < new_size) {
|
||||
uint8_t *tmp = av_realloc_array(array, new_size, elem_size);
|
||||
uint8_t *tmp = av_realloc(array, new_size*elem_size);
|
||||
if (!tmp) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Could not alloc buffer.\n");
|
||||
exit_program(1);
|
||||
@@ -2071,161 +1980,3 @@ void *grow_array(void *array, int elem_size, int *size, int new_size)
|
||||
}
|
||||
return array;
|
||||
}
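
The change between av_realloc() with a hand-multiplied size and av_realloc_array() (and, in the hunk before this one, between av_mallocz() and av_mallocz_array()) matters because the product nmemb * size can overflow. A hedged sketch of the same idea in plain C, with a hypothetical xgrow() helper:

```c
#include <stdint.h>
#include <stdlib.h>

/* Grow an array to new_count elements, refusing requests whose byte size
 * would overflow size_t. Sketch only; the name is illustrative. */
static void *xgrow(void *array, size_t elem_size, size_t new_count)
{
    if (elem_size && new_count > SIZE_MAX / elem_size)
        return NULL;                    /* new_count * elem_size would wrap */
    return realloc(array, new_count * elem_size);
}
```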
|
||||
|
||||
#if CONFIG_AVDEVICE
static int print_device_sources(AVInputFormat *fmt, AVDictionary *opts)
{
int ret, i;
AVDeviceInfoList *device_list = NULL;

if (!fmt || !fmt->priv_class || !AV_IS_INPUT_DEVICE(fmt->priv_class->category))
return AVERROR(EINVAL);

printf("Auto-detected sources for %s:\n", fmt->name);
if (!fmt->get_device_list) {
ret = AVERROR(ENOSYS);
printf("Cannot list sources. Not implemented.\n");
goto fail;
}

if ((ret = avdevice_list_input_sources(fmt, NULL, opts, &device_list)) < 0) {
printf("Cannot list sources.\n");
goto fail;
}

for (i = 0; i < device_list->nb_devices; i++) {
printf("%s %s [%s]\n", device_list->default_device == i ? "*" : " ",
device_list->devices[i]->device_name, device_list->devices[i]->device_description);
}

fail:
avdevice_free_list_devices(&device_list);
return ret;
}

static int print_device_sinks(AVOutputFormat *fmt, AVDictionary *opts)
{
int ret, i;
AVDeviceInfoList *device_list = NULL;

if (!fmt || !fmt->priv_class || !AV_IS_OUTPUT_DEVICE(fmt->priv_class->category))
return AVERROR(EINVAL);

printf("Auto-detected sinks for %s:\n", fmt->name);
if (!fmt->get_device_list) {
ret = AVERROR(ENOSYS);
printf("Cannot list sinks. Not implemented.\n");
goto fail;
}

if ((ret = avdevice_list_output_sinks(fmt, NULL, opts, &device_list)) < 0) {
printf("Cannot list sinks.\n");
goto fail;
}

for (i = 0; i < device_list->nb_devices; i++) {
printf("%s %s [%s]\n", device_list->default_device == i ? "*" : " ",
device_list->devices[i]->device_name, device_list->devices[i]->device_description);
}

fail:
avdevice_free_list_devices(&device_list);
return ret;
}
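
For context, a minimal caller of the device-listing API used above might look as follows (assumes libavdevice has been initialized with avdevice_register_all(); error handling trimmed, function name illustrative):

```c
#include <stdio.h>
#include <libavdevice/avdevice.h>

static void list_first_audio_input_sources(void)
{
    AVInputFormat *fmt = av_input_audio_device_next(NULL);
    AVDeviceInfoList *devices = NULL;
    int i;

    if (!fmt || avdevice_list_input_sources(fmt, NULL, NULL, &devices) < 0)
        return;
    for (i = 0; i < devices->nb_devices; i++)
        printf("%s: %s\n", devices->devices[i]->device_name,
               devices->devices[i]->device_description);
    avdevice_free_list_devices(&devices);
}
```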
|
||||
|
||||
static int show_sinks_sources_parse_arg(const char *arg, char **dev, AVDictionary **opts)
|
||||
{
|
||||
int ret;
|
||||
if (arg) {
|
||||
char *opts_str = NULL;
|
||||
av_assert0(dev && opts);
|
||||
*dev = av_strdup(arg);
|
||||
if (!*dev)
|
||||
return AVERROR(ENOMEM);
|
||||
if ((opts_str = strchr(*dev, ','))) {
|
||||
*(opts_str++) = '\0';
|
||||
if (opts_str[0] && ((ret = av_dict_parse_string(opts, opts_str, "=", ":", 0)) < 0)) {
|
||||
av_freep(dev);
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
} else
|
||||
printf("\nDevice name is not provided.\n"
|
||||
"You can pass devicename[,opt1=val1[,opt2=val2...]] as an argument.\n\n");
|
||||
return 0;
|
||||
}
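
show_sinks_sources_parse_arg() above splits the argument at the first ',' and hands the remainder to av_dict_parse_string() with '=' and ':' as separators. A small usage sketch of that parsing call on its own (the input string is a hypothetical example):

```c
#include <libavutil/dict.h>

/* Parses e.g. "server=192.168.0.4:timeout=5" into an AVDictionary.
 * Sketch only; returns NULL on parse failure. */
static AVDictionary *parse_device_opts(const char *opts_str)
{
    AVDictionary *opts = NULL;
    if (av_dict_parse_string(&opts, opts_str, "=", ":", 0) < 0) {
        av_dict_free(&opts);
        return NULL;
    }
    return opts;
}
```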
|
||||
|
||||
int show_sources(void *optctx, const char *opt, const char *arg)
|
||||
{
|
||||
AVInputFormat *fmt = NULL;
|
||||
char *dev = NULL;
|
||||
AVDictionary *opts = NULL;
|
||||
int ret = 0;
|
||||
int error_level = av_log_get_level();
|
||||
|
||||
av_log_set_level(AV_LOG_ERROR);
|
||||
|
||||
if ((ret = show_sinks_sources_parse_arg(arg, &dev, &opts)) < 0)
|
||||
goto fail;
|
||||
|
||||
do {
|
||||
fmt = av_input_audio_device_next(fmt);
|
||||
if (fmt) {
|
||||
if (!strcmp(fmt->name, "lavfi"))
|
||||
continue; //it's pointless to probe lavfi
|
||||
if (dev && !av_match_name(dev, fmt->name))
|
||||
continue;
|
||||
print_device_sources(fmt, opts);
|
||||
}
|
||||
} while (fmt);
|
||||
do {
|
||||
fmt = av_input_video_device_next(fmt);
|
||||
if (fmt) {
|
||||
if (dev && !av_match_name(dev, fmt->name))
|
||||
continue;
|
||||
print_device_sources(fmt, opts);
|
||||
}
|
||||
} while (fmt);
|
||||
fail:
|
||||
av_dict_free(&opts);
|
||||
av_free(dev);
|
||||
av_log_set_level(error_level);
|
||||
return ret;
|
||||
}
|
||||
|
||||
int show_sinks(void *optctx, const char *opt, const char *arg)
|
||||
{
|
||||
AVOutputFormat *fmt = NULL;
|
||||
char *dev = NULL;
|
||||
AVDictionary *opts = NULL;
|
||||
int ret = 0;
|
||||
int error_level = av_log_get_level();
|
||||
|
||||
av_log_set_level(AV_LOG_ERROR);
|
||||
|
||||
if ((ret = show_sinks_sources_parse_arg(arg, &dev, &opts)) < 0)
|
||||
goto fail;
|
||||
|
||||
do {
|
||||
fmt = av_output_audio_device_next(fmt);
|
||||
if (fmt) {
|
||||
if (dev && !av_match_name(dev, fmt->name))
|
||||
continue;
|
||||
print_device_sinks(fmt, opts);
|
||||
}
|
||||
} while (fmt);
|
||||
do {
|
||||
fmt = av_output_video_device_next(fmt);
|
||||
if (fmt) {
|
||||
if (dev && !av_match_name(dev, fmt->name))
|
||||
continue;
|
||||
print_device_sinks(fmt, opts);
|
||||
}
|
||||
} while (fmt);
|
||||
fail:
|
||||
av_dict_free(&opts);
|
||||
av_free(dev);
|
||||
av_log_set_level(error_level);
|
||||
return ret;
|
||||
}
|
||||
#endif
|
||||
|
47
cmdutils.h
47
cmdutils.h
@@ -24,13 +24,12 @@
|
||||
|
||||
#include <stdint.h>
|
||||
|
||||
#include "config.h"
|
||||
#include "libavcodec/avcodec.h"
|
||||
#include "libavfilter/avfilter.h"
|
||||
#include "libavformat/avformat.h"
|
||||
#include "libswscale/swscale.h"
|
||||
|
||||
#ifdef _WIN32
|
||||
#ifdef __MINGW32__
|
||||
#undef main /* We don't want SDL to override our main() */
|
||||
#endif
|
||||
|
||||
@@ -44,12 +43,16 @@ extern const char program_name[];
|
||||
*/
|
||||
extern const int program_birth_year;
|
||||
|
||||
/**
|
||||
* this year, defined by the program for show_banner()
|
||||
*/
|
||||
extern const int this_year;
|
||||
|
||||
extern AVCodecContext *avcodec_opts[AVMEDIA_TYPE_NB];
|
||||
extern AVFormatContext *avformat_opts;
|
||||
extern struct SwsContext *sws_opts;
|
||||
extern AVDictionary *swr_opts;
|
||||
extern AVDictionary *format_opts, *codec_opts, *resample_opts;
|
||||
extern int hide_banner;
|
||||
|
||||
/**
|
||||
* Register a program-specific cleanup routine.
|
||||
@@ -59,7 +62,7 @@ void register_exit(void (*cb)(int ret));
|
||||
/**
|
||||
* Wraps exit with a program-specific cleanup routine.
|
||||
*/
|
||||
void exit_program(int ret) av_noreturn;
|
||||
void exit_program(int ret);
|
||||
|
||||
/**
|
||||
* Initialize the cmdutils option system, in particular
|
||||
@@ -100,12 +103,8 @@ int opt_max_alloc(void *optctx, const char *opt, const char *arg);
|
||||
|
||||
int opt_codec_debug(void *optctx, const char *opt, const char *arg);
|
||||
|
||||
#if CONFIG_OPENCL
|
||||
int opt_opencl(void *optctx, const char *opt, const char *arg);
|
||||
|
||||
int opt_opencl_bench(void *optctx, const char *opt, const char *arg);
|
||||
#endif
|
||||
|
||||
/**
|
||||
* Limit the execution time.
|
||||
*/
|
||||
@@ -415,13 +414,6 @@ void show_banner(int argc, char **argv, const OptionDef *options);
|
||||
*/
|
||||
int show_version(void *optctx, const char *opt, const char *arg);
|
||||
|
||||
/**
|
||||
* Print the build configuration of the program to stdout. The contents
|
||||
* depend on the definition of FFMPEG_CONFIGURATION.
|
||||
* This option processing function does not utilize the arguments.
|
||||
*/
|
||||
int show_buildconf(void *optctx, const char *opt, const char *arg);
|
||||
|
||||
/**
|
||||
* Print the license of the program to stdout. The license depends on
|
||||
* the license of the libraries compiled into the program.
|
||||
@@ -431,31 +423,10 @@ int show_license(void *optctx, const char *opt, const char *arg);
|
||||
|
||||
/**
|
||||
* Print a listing containing all the formats supported by the
|
||||
* program (including devices).
|
||||
* This option processing function does not utilize the arguments.
|
||||
*/
|
||||
int show_formats(void *optctx, const char *opt, const char *arg);
|
||||
|
||||
/**
|
||||
* Print a listing containing all the devices supported by the
|
||||
* program.
|
||||
* This option processing function does not utilize the arguments.
|
||||
*/
|
||||
int show_devices(void *optctx, const char *opt, const char *arg);
|
||||
|
||||
#if CONFIG_AVDEVICE
/**
* Print a listing containing autodetected sinks of the output device.
* Device name with options may be passed as an argument to limit results.
*/
int show_sinks(void *optctx, const char *opt, const char *arg);

/**
* Print a listing containing autodetected sources of the input device.
* Device name with options may be passed as an argument to limit results.
*/
int show_sources(void *optctx, const char *opt, const char *arg);
#endif
|
||||
int show_formats(void *optctx, const char *opt, const char *arg);
|
||||
|
||||
/**
|
||||
* Print a listing containing all the codecs supported by the
|
||||
@@ -521,7 +492,7 @@ int show_sample_fmts(void *optctx, const char *opt, const char *arg);
|
||||
* Print a listing containing all the color names and values recognized
|
||||
* by the program.
|
||||
*/
|
||||
int show_colors(void *optctx, const char *opt, const char *arg);
|
||||
void show_colors(void *optctx, const char *opt, const char *arg);
|
||||
|
||||
/**
|
||||
* Return a positive value if a line read from standard input
|
||||
|
@@ -4,9 +4,7 @@
|
||||
{ "help" , OPT_EXIT, {.func_arg = show_help}, "show help", "topic" },
|
||||
{ "-help" , OPT_EXIT, {.func_arg = show_help}, "show help", "topic" },
|
||||
{ "version" , OPT_EXIT, {.func_arg = show_version}, "show version" },
|
||||
{ "buildconf" , OPT_EXIT, {.func_arg = show_buildconf}, "show build configuration" },
|
||||
{ "formats" , OPT_EXIT, {.func_arg = show_formats }, "show available formats" },
|
||||
{ "devices" , OPT_EXIT, {.func_arg = show_devices }, "show available devices" },
|
||||
{ "codecs" , OPT_EXIT, {.func_arg = show_codecs }, "show available codecs" },
|
||||
{ "decoders" , OPT_EXIT, {.func_arg = show_decoders }, "show available decoders" },
|
||||
{ "encoders" , OPT_EXIT, {.func_arg = show_encoders }, "show available encoders" },
|
||||
@@ -22,14 +20,6 @@
|
||||
{ "report" , 0, {(void*)opt_report}, "generate a report" },
|
||||
{ "max_alloc" , HAS_ARG, {.func_arg = opt_max_alloc}, "set maximum size of a single allocated block", "bytes" },
|
||||
{ "cpuflags" , HAS_ARG | OPT_EXPERT, { .func_arg = opt_cpuflags }, "force specific cpu flags", "flags" },
|
||||
{ "hide_banner", OPT_BOOL | OPT_EXPERT, {&hide_banner}, "do not show program banner", "hide_banner" },
|
||||
#if CONFIG_OPENCL
|
||||
{ "opencl_bench", OPT_EXIT, {.func_arg = opt_opencl_bench}, "run benchmark on all OpenCL devices and show results" },
|
||||
{ "opencl_options", HAS_ARG, {.func_arg = opt_opencl}, "set OpenCL environment options" },
|
||||
#endif
|
||||
#if CONFIG_AVDEVICE
|
||||
{ "sources" , OPT_EXIT | HAS_ARG, { .func_arg = show_sources },
|
||||
"list sources of the input device", "device" },
|
||||
{ "sinks" , OPT_EXIT | HAS_ARG, { .func_arg = show_sinks },
|
||||
"list sinks of the output device", "device" },
|
||||
#endif
|
||||
|
@@ -1,274 +0,0 @@
|
||||
/*
|
||||
* Copyright (C) 2013 Lenny Wang
|
||||
*
|
||||
* This file is part of FFmpeg.
|
||||
*
|
||||
* FFmpeg is free software; you can redistribute it and/or
|
||||
* modify it under the terms of the GNU Lesser General Public
|
||||
* License as published by the Free Software Foundation; either
|
||||
* version 2.1 of the License, or (at your option) any later version.
|
||||
*
|
||||
* FFmpeg is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
* Lesser General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU Lesser General Public
|
||||
* License along with FFmpeg; if not, write to the Free Software
|
||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
|
||||
*/
|
||||
|
||||
#include "libavutil/opt.h"
|
||||
#include "libavutil/time.h"
|
||||
#include "libavutil/log.h"
|
||||
#include "libavutil/opencl.h"
|
||||
#include "cmdutils.h"
|
||||
|
||||
typedef struct {
|
||||
int platform_idx;
|
||||
int device_idx;
|
||||
char device_name[64];
|
||||
int64_t runtime;
|
||||
} OpenCLDeviceBenchmark;
|
||||
|
||||
const char *ocl_bench_source = AV_OPENCL_KERNEL(
|
||||
inline unsigned char clip_uint8(int a)
|
||||
{
|
||||
if (a & (~0xFF))
|
||||
return (-a)>>31;
|
||||
else
|
||||
return a;
|
||||
}
|
||||
|
||||
kernel void unsharp_bench(
|
||||
global unsigned char *src,
|
||||
global unsigned char *dst,
|
||||
global int *mask,
|
||||
int width,
|
||||
int height)
|
||||
{
|
||||
int i, j, local_idx, lc_idx, sum = 0;
|
||||
int2 thread_idx, block_idx, global_idx, lm_idx;
|
||||
thread_idx.x = get_local_id(0);
|
||||
thread_idx.y = get_local_id(1);
|
||||
block_idx.x = get_group_id(0);
|
||||
block_idx.y = get_group_id(1);
|
||||
global_idx.x = get_global_id(0);
|
||||
global_idx.y = get_global_id(1);
|
||||
local uchar data[32][32];
|
||||
local int lc[128];
|
||||
|
||||
for (i = 0; i <= 1; i++) {
|
||||
lm_idx.y = -8 + (block_idx.y + i) * 16 + thread_idx.y;
|
||||
lm_idx.y = lm_idx.y < 0 ? 0 : lm_idx.y;
|
||||
lm_idx.y = lm_idx.y >= height ? height - 1: lm_idx.y;
|
||||
for (j = 0; j <= 1; j++) {
|
||||
lm_idx.x = -8 + (block_idx.x + j) * 16 + thread_idx.x;
|
||||
lm_idx.x = lm_idx.x < 0 ? 0 : lm_idx.x;
|
||||
lm_idx.x = lm_idx.x >= width ? width - 1: lm_idx.x;
|
||||
data[i*16 + thread_idx.y][j*16 + thread_idx.x] = src[lm_idx.y*width + lm_idx.x];
|
||||
}
|
||||
}
|
||||
local_idx = thread_idx.y*16 + thread_idx.x;
|
||||
if (local_idx < 128)
|
||||
lc[local_idx] = mask[local_idx];
|
||||
barrier(CLK_LOCAL_MEM_FENCE);
|
||||
|
||||
\n#pragma unroll\n
|
||||
for (i = -4; i <= 4; i++) {
|
||||
lm_idx.y = 8 + i + thread_idx.y;
|
||||
\n#pragma unroll\n
|
||||
for (j = -4; j <= 4; j++) {
|
||||
lm_idx.x = 8 + j + thread_idx.x;
|
||||
lc_idx = (i + 4)*8 + j + 4;
|
||||
sum += (int)data[lm_idx.y][lm_idx.x] * lc[lc_idx];
|
||||
}
|
||||
}
|
||||
int temp = (int)data[thread_idx.y + 8][thread_idx.x + 8];
|
||||
int res = temp + (((temp - (int)((sum + 1<<15) >> 16))) >> 16);
|
||||
if (global_idx.x < width && global_idx.y < height)
|
||||
dst[global_idx.x + global_idx.y*width] = clip_uint8(res);
|
||||
}
|
||||
);
|
||||
|
||||
#define OCLCHECK(method, ... ) \
|
||||
do { \
|
||||
status = method(__VA_ARGS__); \
|
||||
if (status != CL_SUCCESS) { \
|
||||
av_log(NULL, AV_LOG_ERROR, # method " error '%s'\n", \
|
||||
av_opencl_errstr(status)); \
|
||||
ret = AVERROR_EXTERNAL; \
|
||||
goto end; \
|
||||
} \
|
||||
} while (0)
|
||||
|
||||
#define CREATEBUF(out, flags, size) \
|
||||
do { \
|
||||
out = clCreateBuffer(ext_opencl_env->context, flags, size, NULL, &status); \
|
||||
if (status != CL_SUCCESS) { \
|
||||
av_log(NULL, AV_LOG_ERROR, "Could not create OpenCL buffer\n"); \
|
||||
ret = AVERROR_EXTERNAL; \
|
||||
goto end; \
|
||||
} \
|
||||
} while (0)
|
||||
|
||||
static void fill_rand_int(int *data, int n)
|
||||
{
|
||||
int i;
|
||||
srand(av_gettime());
|
||||
for (i = 0; i < n; i++)
|
||||
data[i] = rand();
|
||||
}
|
||||
|
||||
#define OPENCL_NB_ITER 5
|
||||
static int64_t run_opencl_bench(AVOpenCLExternalEnv *ext_opencl_env)
|
||||
{
|
||||
int i, arg = 0, width = 1920, height = 1088;
|
||||
int64_t start, ret = 0;
|
||||
cl_int status;
|
||||
size_t kernel_len;
|
||||
char *inbuf;
|
||||
int *mask;
|
||||
int buf_size = width * height * sizeof(char);
|
||||
int mask_size = sizeof(uint32_t) * 128;
|
||||
|
||||
cl_mem cl_mask, cl_inbuf, cl_outbuf;
|
||||
cl_kernel kernel = NULL;
|
||||
cl_program program = NULL;
|
||||
size_t local_work_size_2d[2] = {16, 16};
|
||||
size_t global_work_size_2d[2] = {(size_t)width, (size_t)height};
|
||||
|
||||
if (!(inbuf = av_malloc(buf_size)) || !(mask = av_malloc(mask_size))) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Out of memory\n");
|
||||
ret = AVERROR(ENOMEM);
|
||||
goto end;
|
||||
}
|
||||
fill_rand_int((int*)inbuf, buf_size/4);
|
||||
fill_rand_int(mask, mask_size/4);
|
||||
|
||||
CREATEBUF(cl_mask, CL_MEM_READ_ONLY, mask_size);
|
||||
CREATEBUF(cl_inbuf, CL_MEM_READ_ONLY, buf_size);
|
||||
CREATEBUF(cl_outbuf, CL_MEM_READ_WRITE, buf_size);
|
||||
|
||||
kernel_len = strlen(ocl_bench_source);
|
||||
program = clCreateProgramWithSource(ext_opencl_env->context, 1, &ocl_bench_source,
|
||||
&kernel_len, &status);
|
||||
if (status != CL_SUCCESS || !program) {
|
||||
av_log(NULL, AV_LOG_ERROR, "OpenCL unable to create benchmark program\n");
|
||||
ret = AVERROR_EXTERNAL;
|
||||
goto end;
|
||||
}
|
||||
status = clBuildProgram(program, 1, &(ext_opencl_env->device_id), NULL, NULL, NULL);
|
||||
if (status != CL_SUCCESS) {
|
||||
av_log(NULL, AV_LOG_ERROR, "OpenCL unable to build benchmark program\n");
|
||||
ret = AVERROR_EXTERNAL;
|
||||
goto end;
|
||||
}
|
||||
kernel = clCreateKernel(program, "unsharp_bench", &status);
|
||||
if (status != CL_SUCCESS) {
|
||||
av_log(NULL, AV_LOG_ERROR, "OpenCL unable to create benchmark kernel\n");
|
||||
ret = AVERROR_EXTERNAL;
|
||||
goto end;
|
||||
}
|
||||
|
||||
OCLCHECK(clEnqueueWriteBuffer, ext_opencl_env->command_queue, cl_inbuf, CL_TRUE, 0,
|
||||
buf_size, inbuf, 0, NULL, NULL);
|
||||
OCLCHECK(clEnqueueWriteBuffer, ext_opencl_env->command_queue, cl_mask, CL_TRUE, 0,
|
||||
mask_size, mask, 0, NULL, NULL);
|
||||
OCLCHECK(clSetKernelArg, kernel, arg++, sizeof(cl_mem), &cl_inbuf);
|
||||
OCLCHECK(clSetKernelArg, kernel, arg++, sizeof(cl_mem), &cl_outbuf);
|
||||
OCLCHECK(clSetKernelArg, kernel, arg++, sizeof(cl_mem), &cl_mask);
|
||||
OCLCHECK(clSetKernelArg, kernel, arg++, sizeof(cl_int), &width);
|
||||
OCLCHECK(clSetKernelArg, kernel, arg++, sizeof(cl_int), &height);
|
||||
|
||||
start = av_gettime_relative();
|
||||
for (i = 0; i < OPENCL_NB_ITER; i++)
|
||||
OCLCHECK(clEnqueueNDRangeKernel, ext_opencl_env->command_queue, kernel, 2, NULL,
|
||||
global_work_size_2d, local_work_size_2d, 0, NULL, NULL);
|
||||
clFinish(ext_opencl_env->command_queue);
|
||||
ret = (av_gettime_relative() - start)/OPENCL_NB_ITER;
|
||||
end:
|
||||
if (kernel)
|
||||
clReleaseKernel(kernel);
|
||||
if (program)
|
||||
clReleaseProgram(program);
|
||||
if (cl_inbuf)
|
||||
clReleaseMemObject(cl_inbuf);
|
||||
if (cl_outbuf)
|
||||
clReleaseMemObject(cl_outbuf);
|
||||
if (cl_mask)
|
||||
clReleaseMemObject(cl_mask);
|
||||
av_free(inbuf);
|
||||
av_free(mask);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int compare_ocl_device_desc(const void *a, const void *b)
|
||||
{
|
||||
return ((OpenCLDeviceBenchmark*)a)->runtime - ((OpenCLDeviceBenchmark*)b)->runtime;
|
||||
}
|
||||
|
||||
int opt_opencl_bench(void *optctx, const char *opt, const char *arg)
|
||||
{
|
||||
int i, j, nb_devices = 0, count = 0;
|
||||
int64_t score = 0;
|
||||
AVOpenCLDeviceList *device_list;
|
||||
AVOpenCLDeviceNode *device_node = NULL;
|
||||
OpenCLDeviceBenchmark *devices = NULL;
|
||||
cl_platform_id platform;
|
||||
|
||||
av_opencl_get_device_list(&device_list);
|
||||
for (i = 0; i < device_list->platform_num; i++)
|
||||
nb_devices += device_list->platform_node[i]->device_num;
|
||||
if (!nb_devices) {
|
||||
av_log(NULL, AV_LOG_ERROR, "No OpenCL device detected!\n");
|
||||
return AVERROR(EINVAL);
|
||||
}
|
||||
if (!(devices = av_malloc_array(nb_devices, sizeof(OpenCLDeviceBenchmark)))) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Could not allocate buffer\n");
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
|
||||
for (i = 0; i < device_list->platform_num; i++) {
|
||||
for (j = 0; j < device_list->platform_node[i]->device_num; j++) {
|
||||
device_node = device_list->platform_node[i]->device_node[j];
|
||||
platform = device_list->platform_node[i]->platform_id;
|
||||
score = av_opencl_benchmark(device_node, platform, run_opencl_bench);
|
||||
if (score > 0) {
|
||||
devices[count].platform_idx = i;
|
||||
devices[count].device_idx = j;
|
||||
devices[count].runtime = score;
|
||||
strcpy(devices[count].device_name, device_node->device_name);
|
||||
count++;
|
||||
}
|
||||
}
|
||||
}
|
||||
qsort(devices, count, sizeof(OpenCLDeviceBenchmark), compare_ocl_device_desc);
|
||||
fprintf(stderr, "platform_idx\tdevice_idx\tdevice_name\truntime\n");
|
||||
for (i = 0; i < count; i++)
|
||||
fprintf(stdout, "%d\t%d\t%s\t%"PRId64"\n",
|
||||
devices[i].platform_idx, devices[i].device_idx,
|
||||
devices[i].device_name, devices[i].runtime);
|
||||
|
||||
av_opencl_free_device_list(&device_list);
|
||||
av_free(devices);
|
||||
return 0;
|
||||
}
|
||||
|
||||
int opt_opencl(void *optctx, const char *opt, const char *arg)
|
||||
{
|
||||
char *key, *value;
|
||||
const char *opts = arg;
|
||||
int ret = 0;
|
||||
while (*opts) {
|
||||
ret = av_opt_get_key_value(&opts, "=", ":", 0, &key, &value);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
ret = av_opencl_set_option(key, value);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
if (*opts)
|
||||
opts++;
|
||||
}
|
||||
return ret;
|
||||
}
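
opt_opencl() above walks a "key=value:key=value" option string with av_opt_get_key_value(). A standalone sketch of that iteration pattern, printing the pairs instead of forwarding them to av_opencl_set_option() (assumes the returned strings are av_malloc'd and must be freed by the caller):

```c
#include <stdio.h>
#include <libavutil/mem.h>
#include <libavutil/opt.h>

static int print_key_value_pairs(const char *opts)
{
    while (*opts) {
        char *key, *value;
        int ret = av_opt_get_key_value(&opts, "=", ":", 0, &key, &value);
        if (ret < 0)
            return ret;
        printf("%s -> %s\n", key, value);
        av_free(key);
        av_free(value);
        if (*opts)
            opts++;          /* skip the ':' separator, as the original does */
    }
    return 0;
}
```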
|
38
common.mak
38
common.mak
@@ -5,20 +5,12 @@
|
||||
# first so "all" becomes default target
|
||||
all: all-yes
|
||||
|
||||
DEFAULT_YASMD=.dbg

ifeq ($(DBG),1)
YASMD=$(DEFAULT_YASMD)
else
YASMD=
endif
|
||||
|
||||
ifndef SUBDIR
|
||||
|
||||
ifndef V
|
||||
Q = @
|
||||
ECHO = printf "$(1)\t%s\n" $(2)
|
||||
BRIEF = CC CXX HOSTCC HOSTLD AS YASM AR LD STRIP CP WINDRES
|
||||
BRIEF = CC CXX HOSTCC HOSTLD AS YASM AR LD STRIP CP
|
||||
SILENT = DEPCC DEPHOSTCC DEPAS DEPYASM RANLIB RM
|
||||
|
||||
MSG = $@
|
||||
@@ -51,7 +43,6 @@ endef
|
||||
COMPILE_C = $(call COMPILE,CC)
|
||||
COMPILE_CXX = $(call COMPILE,CXX)
|
||||
COMPILE_S = $(call COMPILE,AS)
|
||||
COMPILE_HOSTC = $(call COMPILE,HOSTCC)
|
||||
|
||||
%.o: %.c
|
||||
$(COMPILE_C)
|
||||
@@ -59,21 +50,12 @@ COMPILE_HOSTC = $(call COMPILE,HOSTCC)
|
||||
%.o: %.cpp
|
||||
$(COMPILE_CXX)
|
||||
|
||||
%.o: %.m
|
||||
$(COMPILE_C)
|
||||
|
||||
%.s: %.c
|
||||
$(CC) $(CPPFLAGS) $(CFLAGS) -S -o $@ $<
|
||||
|
||||
%.o: %.S
|
||||
$(COMPILE_S)
|
||||
|
||||
%_host.o: %.c
|
||||
$(COMPILE_HOSTC)
|
||||
|
||||
%.o: %.rc
|
||||
$(WINDRES) $(IFLAGS) --preprocessor "$(DEPWINDRES) -E -xc-header -DRC_INVOKED $(CC_DEPFLAGS)" -o $@ $<
|
||||
|
||||
%.i: %.c
|
||||
$(CC) $(CCFLAGS) $(CC_E) $<
|
||||
|
||||
@@ -100,15 +82,14 @@ endif
|
||||
include $(SRC_PATH)/arch.mak
|
||||
|
||||
OBJS += $(OBJS-yes)
|
||||
SLIBOBJS += $(SLIBOBJS-yes)
|
||||
FFLIBS := $($(NAME)_FFLIBS) $(FFLIBS-yes) $(FFLIBS)
|
||||
FFLIBS := $(FFLIBS-yes) $(FFLIBS)
|
||||
TESTPROGS += $(TESTPROGS-yes)
|
||||
|
||||
LDLIBS = $(FFLIBS:%=%$(BUILDSUF))
|
||||
FFEXTRALIBS := $(LDLIBS:%=$(LD_LIB)) $(EXTRALIBS)
|
||||
|
||||
EXAMPLES := $(EXAMPLES:%=$(SUBDIR)%-example$(EXESUF))
|
||||
OBJS := $(sort $(OBJS:%=$(SUBDIR)%))
|
||||
SLIBOBJS := $(sort $(SLIBOBJS:%=$(SUBDIR)%))
|
||||
TESTOBJS := $(TESTOBJS:%=$(SUBDIR)%) $(TESTPROGS:%=$(SUBDIR)%-test.o)
|
||||
TESTPROGS := $(TESTPROGS:%=$(SUBDIR)%-test$(EXESUF))
|
||||
HOSTOBJS := $(HOSTPROGS:%=$(SUBDIR)%.o)
|
||||
@@ -132,31 +113,30 @@ checkheaders: $(HOBJS)
|
||||
alltools: $(TOOLS)
|
||||
|
||||
$(HOSTOBJS): %.o: %.c
|
||||
$(COMPILE_HOSTC)
|
||||
$(call COMPILE,HOSTCC)
|
||||
|
||||
$(HOSTPROGS): %$(HOSTEXESUF): %.o
|
||||
$(HOSTLD) $(HOSTLDFLAGS) $(HOSTLD_O) $^ $(HOSTLIBS)
|
||||
$(HOSTLD) $(HOSTLDFLAGS) $(HOSTLD_O) $< $(HOSTLIBS)
|
||||
|
||||
$(OBJS): | $(sort $(dir $(OBJS)))
|
||||
$(HOBJS): | $(sort $(dir $(HOBJS)))
|
||||
$(HOSTOBJS): | $(sort $(dir $(HOSTOBJS)))
|
||||
$(SLIBOBJS): | $(sort $(dir $(SLIBOBJS)))
|
||||
$(TESTOBJS): | $(sort $(dir $(TESTOBJS)))
|
||||
$(TOOLOBJS): | tools
|
||||
|
||||
OBJDIRS := $(OBJDIRS) $(dir $(OBJS) $(HOBJS) $(HOSTOBJS) $(SLIBOBJS) $(TESTOBJS))
|
||||
OBJDIRS := $(OBJDIRS) $(dir $(OBJS) $(HOBJS) $(HOSTOBJS) $(TESTOBJS))
|
||||
|
||||
CLEANSUFFIXES = *.d *.o *~ *.h.c *.map *.ver *.ho *.gcno *.gcda *$(DEFAULT_YASMD).asm
|
||||
CLEANSUFFIXES = *.d *.o *~ *.h.c *.map *.ver *.ho *.gcno *.gcda
|
||||
DISTCLEANSUFFIXES = *.pc
|
||||
LIBSUFFIXES = *.a *.lib *.so *.so.* *.dylib *.dll *.def *.dll.a
|
||||
|
||||
define RULES
|
||||
clean::
|
||||
$(RM) $(OBJS) $(OBJS:.o=.d) $(OBJS:.o=$(DEFAULT_YASMD).d)
|
||||
$(RM) $(OBJS) $(OBJS:.o=.d)
|
||||
$(RM) $(HOSTPROGS)
|
||||
$(RM) $(TOOLS)
|
||||
endef
|
||||
|
||||
$(eval $(RULES))
|
||||
|
||||
-include $(wildcard $(OBJS:.o=.d) $(HOSTOBJS:.o=.d) $(TESTOBJS:.o=.d) $(HOBJS:.o=.d) $(SLIBOBJS:.o=.d)) $(OBJS:.o=$(DEFAULT_YASMD).d)
|
||||
-include $(wildcard $(OBJS:.o=.d) $(HOSTOBJS:.o=.d) $(TESTOBJS:.o=.d) $(HOBJS:.o=.d))
|
||||
|
@@ -13,8 +13,7 @@
|
||||
//
|
||||
// You should have received a copy of the GNU General Public License
|
||||
// along with this program; if not, write to the Free Software
|
||||
// Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
|
||||
// MA 02110-1301 USA, or visit
|
||||
// Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA, or visit
|
||||
// http://www.gnu.org/copyleft/gpl.html .
|
||||
//
|
||||
// As a special exception, I give you permission to link to the
|
||||
@@ -805,7 +804,7 @@ struct AVS_Library {
|
||||
|
||||
AVSC_INLINE AVS_Library * avs_load_library() {
|
||||
AVS_Library *library = (AVS_Library *)malloc(sizeof(AVS_Library));
|
||||
if (!library)
|
||||
if (library == NULL)
|
||||
return NULL;
|
||||
library->handle = LoadLibrary("avisynth");
|
||||
if (library->handle == NULL)
|
||||
@@ -870,7 +869,7 @@ fail:
|
||||
}
|
||||
|
||||
AVSC_INLINE void avs_free_library(AVS_Library *library) {
|
||||
if (!library)
|
||||
if (library == NULL)
|
||||
return;
|
||||
FreeLibrary(library->handle);
|
||||
free(library);
|
||||
|
@@ -13,8 +13,7 @@
|
||||
//
|
||||
// You should have received a copy of the GNU General Public License
|
||||
// along with this program; if not, write to the Free Software
|
||||
// Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
|
||||
// MA 02110-1301 USA, or visit
|
||||
// Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA, or visit
|
||||
// http://www.gnu.org/copyleft/gpl.html .
|
||||
//
|
||||
// As a special exception, I give you permission to link to the
|
||||
@@ -513,21 +512,21 @@ AVSC_INLINE AVS_Value avs_array_elt(AVS_Value v, int index)
// only use these functions on an AVS_Value that does not already have
// an active value. Remember, treat AVS_Value as a fat pointer.
AVSC_INLINE AVS_Value avs_new_value_bool(int v0)
{ AVS_Value v = {0}; v.type = 'b'; v.d.boolean = v0 == 0 ? 0 : 1; return v; }
{ AVS_Value v; v.type = 'b'; v.d.boolean = v0 == 0 ? 0 : 1; return v; }
AVSC_INLINE AVS_Value avs_new_value_int(int v0)
{ AVS_Value v = {0}; v.type = 'i'; v.d.integer = v0; return v; }
{ AVS_Value v; v.type = 'i'; v.d.integer = v0; return v; }
AVSC_INLINE AVS_Value avs_new_value_string(const char * v0)
{ AVS_Value v = {0}; v.type = 's'; v.d.string = v0; return v; }
{ AVS_Value v; v.type = 's'; v.d.string = v0; return v; }
AVSC_INLINE AVS_Value avs_new_value_float(float v0)
{ AVS_Value v = {0}; v.type = 'f'; v.d.floating_pt = v0; return v;}
{ AVS_Value v; v.type = 'f'; v.d.floating_pt = v0; return v;}
AVSC_INLINE AVS_Value avs_new_value_error(const char * v0)
{ AVS_Value v = {0}; v.type = 'e'; v.d.string = v0; return v; }
{ AVS_Value v; v.type = 'e'; v.d.string = v0; return v; }
#ifndef AVSC_NO_DECLSPEC
AVSC_INLINE AVS_Value avs_new_value_clip(AVS_Clip * v0)
{ AVS_Value v = {0}; avs_set_to_clip(&v, v0); return v; }
{ AVS_Value v; avs_set_to_clip(&v, v0); return v; }
#endif
AVSC_INLINE AVS_Value avs_new_value_array(AVS_Value * v0, int size)
{ AVS_Value v = {0}; v.type = 'a'; v.d.array = v0; v.array_size = size; return v; }
{ AVS_Value v; v.type = 'a'; v.d.array = v0; v.array_size = size; return v; }
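
The point of the '= {0}' initializers in this hunk is that the value is returned by copy, so any member the constructor does not set explicitly (and any padding) is otherwise left indeterminate. A tiny illustration with a hypothetical struct, not the real AVS_Value layout:

```c
typedef struct Val {
    short type;
    short array_size;
    union { int integer; float floating_pt; const char *string; } d;
} Val;

static Val make_int(int x)
{
    Val v = {0};         /* every field starts at zero ...                    */
    v.type = 'i';
    v.d.integer = x;     /* ... so array_size is not returned uninitialized   */
    return v;
}
```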
|
||||
|
||||
/////////////////////////////////////////////////////////////////////
|
||||
//
|
||||
|
@@ -52,8 +52,8 @@ namespace avxsynth {
|
||||
//
|
||||
// Functions
|
||||
//
|
||||
#define MAKEDWORD(a,b,c,d) (((a) << 24) | ((b) << 16) | ((c) << 8) | (d))
|
||||
#define MAKEWORD(a,b) (((a) << 8) | (b))
|
||||
#define MAKEDWORD(a,b,c,d) ((a << 24) | (b << 16) | (c << 8) | (d))
|
||||
#define MAKEWORD(a,b) ((a << 8) | (b))
|
||||
|
||||
#define lstrlen strlen
|
||||
#define lstrcpy strcpy
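
The difference between the two MAKEWORD/MAKEDWORD variants in this hunk is only the parenthesization of the macro arguments, which starts to matter once an argument is itself an expression. A minimal illustration (macro names suffixed here to show both forms side by side):

```c
#define MAKEWORD_BAD(a,b)  ((a << 8) | (b))
#define MAKEWORD_GOOD(a,b) (((a) << 8) | (b))

/* With x = 1, y = 2:
 *   MAKEWORD_BAD(x | 1, y)  expands to ((x | 1 << 8) | (y))   -> 1 | 256 | 2 = 259
 *   MAKEWORD_GOOD(x | 1, y) expands to (((x | 1) << 8) | (y)) -> (1 << 8) | 2 = 258
 */
```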
|
||||
|
@@ -1,35 +0,0 @@
|
||||
/*
|
||||
* Work around broken floating point limits on some systems.
|
||||
*
|
||||
* This file is part of FFmpeg.
|
||||
*
|
||||
* FFmpeg is free software; you can redistribute it and/or
|
||||
* modify it under the terms of the GNU Lesser General Public
|
||||
* License as published by the Free Software Foundation; either
|
||||
* version 2.1 of the License, or (at your option) any later version.
|
||||
*
|
||||
* FFmpeg is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
* Lesser General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU Lesser General Public
|
||||
* License along with FFmpeg; if not, write to the Free Software
|
||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
|
||||
*/
|
||||
|
||||
#include_next <float.h>
|
||||
|
||||
#ifdef FLT_MAX
|
||||
#undef FLT_MAX
|
||||
#define FLT_MAX 3.40282346638528859812e+38F
|
||||
|
||||
#undef FLT_MIN
|
||||
#define FLT_MIN 1.17549435082228750797e-38F
|
||||
|
||||
#undef DBL_MAX
|
||||
#define DBL_MAX ((double)1.79769313486231570815e+308L)
|
||||
|
||||
#undef DBL_MIN
|
||||
#define DBL_MIN ((double)2.22507385850720138309e-308L)
|
||||
#endif
|
@@ -1,22 +0,0 @@
|
||||
/*
|
||||
* Work around broken floating point limits on some systems.
|
||||
*
|
||||
* This file is part of FFmpeg.
|
||||
*
|
||||
* FFmpeg is free software; you can redistribute it and/or
|
||||
* modify it under the terms of the GNU Lesser General Public
|
||||
* License as published by the Free Software Foundation; either
|
||||
* version 2.1 of the License, or (at your option) any later version.
|
||||
*
|
||||
* FFmpeg is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
* Lesser General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU Lesser General Public
|
||||
* License along with FFmpeg; if not, write to the Free Software
|
||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
|
||||
*/
|
||||
|
||||
#include_next <limits.h>
|
||||
#include <float.h>
|
@@ -38,6 +38,8 @@ static int optind = 1;
|
||||
static int optopt;
|
||||
static char *optarg;
|
||||
|
||||
#undef fprintf
|
||||
|
||||
static int getopt(int argc, char *argv[], char *opts)
|
||||
{
|
||||
static int sp = 1;
|
||||
@@ -54,7 +56,7 @@ static int getopt(int argc, char *argv[], char *opts)
|
||||
}
|
||||
}
|
||||
optopt = c = argv[optind][sp];
|
||||
if (c == ':' || !(cp = strchr(opts, c))) {
|
||||
if (c == ':' || (cp = strchr(opts, c)) == NULL) {
|
||||
fprintf(stderr, ": illegal option -- %c\n", c);
|
||||
if (argv[optind][++sp] == '\0') {
|
||||
optind++;
|
||||
|
@@ -32,8 +32,6 @@
|
||||
#undef __STRICT_ANSI__ /* for _beginthread() */
|
||||
#include <stdlib.h>
|
||||
|
||||
#include "libavutil/mem.h"
|
||||
|
||||
typedef TID pthread_t;
|
||||
typedef void pthread_attr_t;
|
||||
|
||||
|
@@ -24,6 +24,3 @@
|
||||
#if !defined(va_copy) && defined(_MSC_VER)
|
||||
#define va_copy(dst, src) ((dst) = (src))
|
||||
#endif
|
||||
#if !defined(va_copy) && defined(__GNUC__) && __GNUC__ < 3
|
||||
#define va_copy(dst, src) __va_copy(dst, src)
|
||||
#endif
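
For reference, va_copy() is what makes it legal to walk the same variadic argument list twice; the fallbacks above only define it where the toolchain lacks one. A minimal sketch of the idiom it enables:

```c
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

/* Format into a freshly allocated buffer; needs two passes over the args. */
static char *vformat(const char *fmt, va_list args)
{
    va_list tmp;
    int len;
    char *buf;

    va_copy(tmp, args);                  /* first pass: measure the length */
    len = vsnprintf(NULL, 0, fmt, tmp);
    va_end(tmp);
    if (len < 0 || !(buf = malloc(len + 1)))
        return NULL;
    vsnprintf(buf, len + 1, fmt, args);  /* second pass: actually format   */
    return buf;
}
```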
|
||||
|
@@ -39,7 +39,6 @@
|
||||
#include <windows.h>
|
||||
#include <process.h>
|
||||
|
||||
#include "libavutil/attributes.h"
|
||||
#include "libavutil/common.h"
|
||||
#include "libavutil/internal.h"
|
||||
#include "libavutil/mem.h"
|
||||
@@ -55,30 +54,36 @@ typedef struct pthread_t {
|
||||
* not mutexes */
|
||||
typedef CRITICAL_SECTION pthread_mutex_t;
|
||||
|
||||
/* This is the CONDITION_VARIABLE typedef for using Windows' native
|
||||
* conditional variables on kernels 6.0+. */
|
||||
#if HAVE_CONDITION_VARIABLE_PTR
|
||||
typedef CONDITION_VARIABLE pthread_cond_t;
|
||||
#else
|
||||
/* This is the CONDITIONAL_VARIABLE typedef for using Window's native
|
||||
* conditional variables on kernels 6.0+.
|
||||
* MinGW does not currently have this typedef. */
|
||||
typedef struct pthread_cond_t {
|
||||
void *Ptr;
|
||||
void *ptr;
|
||||
} pthread_cond_t;
|
||||
|
||||
/* function pointers to conditional variable API on windows 6.0+ kernels */
|
||||
#if _WIN32_WINNT < 0x0600
|
||||
static void (WINAPI *cond_broadcast)(pthread_cond_t *cond);
|
||||
static void (WINAPI *cond_init)(pthread_cond_t *cond);
|
||||
static void (WINAPI *cond_signal)(pthread_cond_t *cond);
|
||||
static BOOL (WINAPI *cond_wait)(pthread_cond_t *cond, pthread_mutex_t *mutex,
|
||||
DWORD milliseconds);
|
||||
#else
|
||||
#define cond_init InitializeConditionVariable
|
||||
#define cond_broadcast WakeAllConditionVariable
|
||||
#define cond_signal WakeConditionVariable
|
||||
#define cond_wait SleepConditionVariableCS
|
||||
#endif
|
||||
|
||||
#if _WIN32_WINNT >= 0x0600
|
||||
#define InitializeCriticalSection(x) InitializeCriticalSectionEx(x, 0, 0)
|
||||
#define WaitForSingleObject(a, b) WaitForSingleObjectEx(a, b, FALSE)
|
||||
#endif
|
||||
|
||||
static av_unused unsigned __stdcall attribute_align_arg win32thread_worker(void *arg)
|
||||
static unsigned __stdcall attribute_align_arg win32thread_worker(void *arg)
|
||||
{
|
||||
pthread_t *h = arg;
|
||||
h->ret = h->func(h->arg);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static av_unused int pthread_create(pthread_t *thread, const void *unused_attr,
|
||||
void *(*start_routine)(void*), void *arg)
|
||||
static int pthread_create(pthread_t *thread, const void *unused_attr,
|
||||
void *(*start_routine)(void*), void *arg)
|
||||
{
|
||||
thread->func = start_routine;
|
||||
thread->arg = arg;
|
||||
@@ -87,7 +92,7 @@ static av_unused int pthread_create(pthread_t *thread, const void *unused_attr,
|
||||
return !thread->handle;
|
||||
}
|
||||
|
||||
static av_unused void pthread_join(pthread_t thread, void **value_ptr)
|
||||
static void pthread_join(pthread_t thread, void **value_ptr)
|
||||
{
|
||||
DWORD ret = WaitForSingleObject(thread.handle, INFINITE);
|
||||
if (ret != WAIT_OBJECT_0)
|
||||
@@ -118,36 +123,6 @@ static inline int pthread_mutex_unlock(pthread_mutex_t *m)
|
||||
return 0;
|
||||
}
|
||||
|
||||
#if _WIN32_WINNT >= 0x0600
|
||||
static inline int pthread_cond_init(pthread_cond_t *cond, const void *unused_attr)
|
||||
{
|
||||
InitializeConditionVariable(cond);
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* native condition variables do not destroy */
|
||||
static inline void pthread_cond_destroy(pthread_cond_t *cond)
|
||||
{
|
||||
return;
|
||||
}
|
||||
|
||||
static inline void pthread_cond_broadcast(pthread_cond_t *cond)
|
||||
{
|
||||
WakeAllConditionVariable(cond);
|
||||
}
|
||||
|
||||
static inline int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex)
|
||||
{
|
||||
SleepConditionVariableCS(cond, mutex, INFINITE);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static inline void pthread_cond_signal(pthread_cond_t *cond)
|
||||
{
|
||||
WakeConditionVariable(cond);
|
||||
}
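
These wrappers exist so that callers can keep using the usual pthread condition-variable idiom on Windows. For context, the idiom they are meant to support looks roughly like this (hypothetical queue type; the pthread-style calls are the ones this header provides):

```c
typedef struct Queue {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    int             nb_items;
} Queue;

static void queue_wait_for_item(Queue *q)
{
    pthread_mutex_lock(&q->lock);
    while (q->nb_items == 0)             /* always re-check after waking up */
        pthread_cond_wait(&q->cond, &q->lock);
    q->nb_items--;
    pthread_mutex_unlock(&q->lock);
}
```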
|
||||
|
||||
#else // _WIN32_WINNT < 0x0600
|
||||
/* for pre-Windows 6.0 platforms we need to define and use our own condition
|
||||
* variable and api */
|
||||
typedef struct win32_cond_t {
|
||||
@@ -159,41 +134,33 @@ typedef struct win32_cond_t {
|
||||
volatile int is_broadcast;
|
||||
} win32_cond_t;
|
||||
|
||||
/* function pointers to conditional variable API on windows 6.0+ kernels */
|
||||
static void (WINAPI *cond_broadcast)(pthread_cond_t *cond);
|
||||
static void (WINAPI *cond_init)(pthread_cond_t *cond);
|
||||
static void (WINAPI *cond_signal)(pthread_cond_t *cond);
|
||||
static BOOL (WINAPI *cond_wait)(pthread_cond_t *cond, pthread_mutex_t *mutex,
|
||||
DWORD milliseconds);
|
||||
|
||||
static av_unused int pthread_cond_init(pthread_cond_t *cond, const void *unused_attr)
|
||||
static void pthread_cond_init(pthread_cond_t *cond, const void *unused_attr)
|
||||
{
|
||||
win32_cond_t *win32_cond = NULL;
|
||||
if (cond_init) {
|
||||
cond_init(cond);
|
||||
return 0;
|
||||
return;
|
||||
}
|
||||
|
||||
/* non native condition variables */
|
||||
win32_cond = av_mallocz(sizeof(win32_cond_t));
|
||||
if (!win32_cond)
|
||||
return ENOMEM;
|
||||
cond->Ptr = win32_cond;
|
||||
return;
|
||||
cond->ptr = win32_cond;
|
||||
win32_cond->semaphore = CreateSemaphore(NULL, 0, 0x7fffffff, NULL);
|
||||
if (!win32_cond->semaphore)
|
||||
return ENOMEM;
|
||||
return;
|
||||
win32_cond->waiters_done = CreateEvent(NULL, TRUE, FALSE, NULL);
|
||||
if (!win32_cond->waiters_done)
|
||||
return ENOMEM;
|
||||
return;
|
||||
|
||||
pthread_mutex_init(&win32_cond->mtx_waiter_count, NULL);
|
||||
pthread_mutex_init(&win32_cond->mtx_broadcast, NULL);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static av_unused void pthread_cond_destroy(pthread_cond_t *cond)
|
||||
static void pthread_cond_destroy(pthread_cond_t *cond)
|
||||
{
|
||||
win32_cond_t *win32_cond = cond->Ptr;
|
||||
win32_cond_t *win32_cond = cond->ptr;
|
||||
/* native condition variables do not destroy */
|
||||
if (cond_init)
|
||||
return;
|
||||
@@ -204,12 +171,12 @@ static av_unused void pthread_cond_destroy(pthread_cond_t *cond)
|
||||
pthread_mutex_destroy(&win32_cond->mtx_waiter_count);
|
||||
pthread_mutex_destroy(&win32_cond->mtx_broadcast);
|
||||
av_freep(&win32_cond);
|
||||
cond->Ptr = NULL;
|
||||
cond->ptr = NULL;
|
||||
}
|
||||
|
||||
static av_unused void pthread_cond_broadcast(pthread_cond_t *cond)
|
||||
static void pthread_cond_broadcast(pthread_cond_t *cond)
|
||||
{
|
||||
win32_cond_t *win32_cond = cond->Ptr;
|
||||
win32_cond_t *win32_cond = cond->ptr;
|
||||
int have_waiter;
|
||||
|
||||
if (cond_broadcast) {
|
||||
@@ -238,9 +205,9 @@ static av_unused void pthread_cond_broadcast(pthread_cond_t *cond)
|
||||
pthread_mutex_unlock(&win32_cond->mtx_broadcast);
|
||||
}
|
||||
|
||||
static av_unused int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex)
|
||||
static int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex)
|
||||
{
|
||||
win32_cond_t *win32_cond = cond->Ptr;
|
||||
win32_cond_t *win32_cond = cond->ptr;
|
||||
int last_waiter;
|
||||
if (cond_wait) {
|
||||
cond_wait(cond, mutex, INFINITE);
|
||||
@@ -270,9 +237,9 @@ static av_unused int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mu
|
||||
return pthread_mutex_lock(mutex);
|
||||
}
|
||||
|
||||
static av_unused void pthread_cond_signal(pthread_cond_t *cond)
|
||||
static void pthread_cond_signal(pthread_cond_t *cond)
|
||||
{
|
||||
win32_cond_t *win32_cond = cond->Ptr;
|
||||
win32_cond_t *win32_cond = cond->ptr;
|
||||
int have_waiter;
|
||||
if (cond_signal) {
|
||||
cond_signal(cond);
|
||||
@@ -294,9 +261,8 @@ static av_unused void pthread_cond_signal(pthread_cond_t *cond)
|
||||
|
||||
pthread_mutex_unlock(&win32_cond->mtx_broadcast);
|
||||
}
|
||||
#endif
|
||||
|
||||
static av_unused void w32thread_init(void)
|
||||
static void w32thread_init(void)
|
||||
{
|
||||
#if _WIN32_WINNT < 0x0600
|
||||
HANDLE kernel_dll = GetModuleHandle(TEXT("kernel32.dll"));
|
||||
|
@@ -1,132 +0,0 @@
|
||||
#!/bin/sh
|
||||
|
||||
# Copyright (c) 2013, Derek Buitenhuis
|
||||
#
|
||||
# Permission to use, copy, modify, and/or distribute this software for any
|
||||
# purpose with or without fee is hereby granted, provided that the above
|
||||
# copyright notice and this permission notice appear in all copies.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
|
||||
# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
|
||||
# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
|
||||
# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
|
||||
# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
|
||||
# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
|
||||
# OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
|
||||
|
||||
# mktemp isn't POSIX, so supply an implementation
|
||||
mktemp() {
|
||||
echo "${2%%XXX*}.${HOSTNAME}.${UID}.$$"
|
||||
}
|
||||
|
||||
if [ $# -lt 2 ]; then
|
||||
echo "Usage: makedef <version_script> <objects>" >&2
|
||||
exit 0
|
||||
fi
|
||||
|
||||
vscript=$1
|
||||
shift
|
||||
|
||||
if [ ! -f "$vscript" ]; then
|
||||
echo "Version script does not exist" >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
for object in "$@"; do
|
||||
if [ ! -f "$object" ]; then
|
||||
echo "Object does not exist: ${object}" >&2
|
||||
exit 1
|
||||
fi
|
||||
done
|
||||
|
||||
# Create a lib temporarily to dump symbols from.
|
||||
# It's just much easier to do it this way
|
||||
libname=$(mktemp -u "library").lib
|
||||
|
||||
trap 'rm -f -- $libname' EXIT
|
||||
|
||||
lib -out:${libname} $@ >/dev/null
|
||||
if [ $? != 0 ]; then
|
||||
echo "Could not create temporary library." >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
IFS='
|
||||
'
|
||||
|
||||
# Determine if we're building for x86 or x86_64 and
|
||||
# set the symbol prefix accordingly.
|
||||
prefix=""
|
||||
arch=$(dumpbin -headers ${libname} |
|
||||
tr '\t' ' ' |
|
||||
grep '^ \+.\+machine \+(.\+)' |
|
||||
head -1 |
|
||||
sed -e 's/^ \{1,\}.\{1,\} \{1,\}machine \{1,\}(\(...\)).*/\1/')
|
||||
|
||||
if [ "${arch}" = "x86" ]; then
|
||||
prefix="_"
|
||||
else
|
||||
if [ "${arch}" != "ARM" ] && [ "${arch}" != "x64" ]; then
|
||||
echo "Unknown machine type." >&2
|
||||
exit 1
|
||||
fi
|
||||
fi
|
||||
|
||||
started=0
|
||||
regex="none"
|
||||
|
||||
for line in $(cat ${vscript} | tr '\t' ' '); do
|
||||
# We only care about global symbols
|
||||
echo "${line}" | grep -q '^ \+global:'
|
||||
if [ $? = 0 ]; then
|
||||
started=1
|
||||
line=$(echo "${line}" | sed -e 's/^ \{1,\}global: *//')
|
||||
else
|
||||
echo "${line}" | grep -q '^ \+local:'
|
||||
if [ $? = 0 ]; then
|
||||
started=0
|
||||
fi
|
||||
fi
|
||||
|
||||
if [ ${started} = 0 ]; then
|
||||
continue
|
||||
fi
|
||||
|
||||
# Handle multiple symbols on one line
|
||||
IFS=';'
|
||||
|
||||
# Work around stupid expansion to filenames
|
||||
line=$(echo "${line}" | sed -e 's/\*/.\\+/g')
|
||||
for exp in ${line}; do
|
||||
# Remove leading and trailing whitespace
|
||||
exp=$(echo "${exp}" | sed -e 's/^ *//' -e 's/ *$//')
|
||||
|
||||
if [ "${regex}" = "none" ]; then
|
||||
regex="${exp}"
|
||||
else
|
||||
regex="${regex};${exp}"
|
||||
fi
|
||||
done
|
||||
|
||||
IFS='
|
||||
'
|
||||
done
|
||||
|
||||
dump=$(dumpbin -linkermember:1 ${libname})
|
||||
|
||||
rm ${libname}
|
||||
|
||||
IFS=';'
|
||||
list=""
|
||||
for exp in ${regex}; do
|
||||
list="${list}"'
|
||||
'$(echo "${dump}" |
|
||||
sed -e '/public symbols/,$!d' -e '/^ \{1,\}Summary/,$d' -e "s/ \{1,\}${prefix}/ /" -e 's/ \{1,\}/ /g' |
|
||||
tail -n +2 |
|
||||
cut -d' ' -f3 |
|
||||
grep "^${exp}" |
|
||||
sed -e 's/^/ /')
|
||||
done
|
||||
|
||||
echo "EXPORTS"
|
||||
echo "${list}" | sort | uniq | tail -n +2
|
610
doc/APIchanges
610
doc/APIchanges
@@ -2,573 +2,44 @@ Never assume the API of libav* to be stable unless at least 1 month has passed
|
||||
since the last major version increase or the API was added.
|
||||
|
||||
The last version increases were:
|
||||
libavcodec: 2014-08-09
|
||||
libavdevice: 2014-08-09
|
||||
libavfilter: 2014-08-09
|
||||
libavformat: 2014-08-09
|
||||
libavresample: 2014-08-09
|
||||
libpostproc: 2014-08-09
|
||||
libswresample: 2014-08-09
|
||||
libswscale: 2014-08-09
|
||||
libavutil: 2014-08-09
|
||||
libavcodec: 2013-03-xx
|
||||
libavdevice: 2013-03-xx
|
||||
libavfilter: 2012-06-22
|
||||
libavformat: 2013-03-xx
|
||||
libavresample: 2012-10-05
|
||||
libpostproc: 2011-04-18
|
||||
libswresample: 2011-09-19
|
||||
libswscale: 2011-06-20
|
||||
libavutil: 2012-10-22
|
||||
|
||||
|
||||
API changes, most recent first:
|
||||
|
||||
-------- 8< --------- FFmpeg 2.6 was cut here -------- 8< ---------
|
||||
|
||||
2015-03-04 - cca4476 - lavf 56.25.100
|
||||
Add avformat_flush()
|
||||
|
||||
2015-03-03 - 81a9126 - lavf 56.24.100
|
||||
Add avio_put_str16be()
|
||||
|
||||
2015-02-19 - 560eb71 / 31d2039 - lavc 56.23.100 / 56.13.0
|
||||
Add width, height, coded_width, coded_height and format to
|
||||
AVCodecParserContext.
|
||||
|
||||
2015-02-19 - e375511 / 5b1d9ce - lavu 54.19.100 / 54.9.0
|
||||
Add AV_PIX_FMT_QSV for QSV hardware acceleration.
|
||||
|
||||
2015-02-14 - ba22295 - lavc 56.21.102
|
||||
Deprecate VIMA decoder.
|
||||
|
||||
2015-01-27 - 62a82c6 / 728685f - lavc 56.21.100 / 56.12.0, lavu 54.18.100 / 54.8.0 - avcodec.h, frame.h
|
||||
Add AV_PKT_DATA_AUDIO_SERVICE_TYPE and AV_FRAME_DATA_AUDIO_SERVICE_TYPE for
|
||||
storing the audio service type as side data.
|
||||
|
||||
2015-01-16 - a47c933 - lavf 56.19.100 - avformat.h
|
||||
Add data_codec and data_codec_id for storing codec of data stream
|
||||
|
||||
2015-01-11 - 007c33d - lavd 56.4.100 - avdevice.h
|
||||
Add avdevice_list_input_sources().
|
||||
Add avdevice_list_output_sinks().
|
||||
|
||||
2014-12-25 - d7aaeea / c220a60 - lavc 56.19.100 / 56.10.0 - vdpau.h
|
||||
Add av_vdpau_get_surface_parameters().
|
||||
|
||||
2014-12-25 - ddb9a24 / 6c99c92 - lavc 56.18.100 / 56.9.0 - avcodec.h
|
||||
Add AV_HWACCEL_FLAG_ALLOW_HIGH_DEPTH flag to av_vdpau_bind_context().
|
||||
|
||||
2014-12-25 - d16079a / 57b6704 - lavc 56.17.100 / 56.8.0 - avcodec.h
|
||||
Add AVCodecContext.sw_pix_fmt.
|
||||
|
||||
2014-12-04 - 6e9ac02 - lavc 56.14.100 - dv_profile.h
|
||||
Add av_dv_codec_profile2().
|
||||
|
||||
-------- 8< --------- FFmpeg 2.5 was cut here -------- 8< ---------
|
||||
|
||||
2014-11-21 - ab922f9 - lavu 54.15.100 - dict.h
|
||||
Add av_dict_get_string().
|
||||
|
||||
2014-11-18 - a54a51c - lavu 54.14.100 - float_dsp.h
|
||||
Add avpriv_float_dsp_alloc().
|
||||
|
||||
2014-11-16 - 6690d4c3 - lavf 56.13.100 - avformat.h
|
||||
Add AVStream.recommended_encoder_configuration with accessors.
|
||||
|
||||
2014-11-16 - bee5844d - lavu 54.13.100 - opt.h
|
||||
Add av_opt_serialize().
|
||||
|
||||
2014-11-16 - eec69332 - lavu 54.12.100 - opt.h
|
||||
Add av_opt_is_set_to_default().
|
||||
|
||||
2014-11-06 - 44fa267 / 5e80fb7 - lavc 56.11.100 / 56.6.0 - vorbis_parser.h
|
||||
Add a public API for parsing vorbis packets.
|
||||
|
||||
2014-10-15 - 17085a0 / 7ea1b34 - lavc 56.7.100 / 56.5.0 - avcodec.h
|
||||
Replace AVCodecContext.time_base used for decoding
|
||||
with AVCodecContext.framerate.
|
||||
|
||||
2014-10-15 - 51c810e / d565fef1 - lavc 56.6.100 / 56.4.0 - avcodec.h
|
||||
Add AV_HWACCEL_FLAG_IGNORE_LEVEL flag to av_vdpau_bind_context().
|
||||
|
||||
2014-10-13 - da21895 / 2df0c32e - lavc 56.5.100 / 56.3.0 - avcodec.h
|
||||
Add AVCodecContext.initial_padding. Deprecate the use of AVCodecContext.delay
|
||||
for audio encoding.
|
||||
|
||||
2014-10-08 - bb44f7d / 5a419b2 - lavu 54.10.100 / 54.4.0 - pixdesc.h
|
||||
Add API to return the name of frame and context color properties.
|
||||
|
||||
2014-10-06 - a61899a / e3e158e - lavc 56.3.100 / 56.2.0 - vdpau.h
|
||||
Add av_vdpau_bind_context(). This function should now be used for creating
|
||||
(or resetting) a AVVDPAUContext instead of av_vdpau_alloc_context().
|
||||
|
||||
2014-10-02 - cdd6f05 - lavc 56.2.100 - avcodec.h
|
||||
2014-10-02 - cdd6f05 - lavu 54.9.100 - frame.h
|
||||
Add AV_FRAME_DATA_SKIP_SAMPLES. Add lavc CODEC_FLAG2_SKIP_MANUAL and
|
||||
AVOption "skip_manual", which makes lavc export skip information via
|
||||
AV_FRAME_DATA_SKIP_SAMPLES AVFrame side data, instead of skipping and
|
||||
discarding samples automatically.
|
||||
|
||||
2014-10-02 - 0d92b0d - lavu 54.8.100 - avstring.h
|
||||
Add av_match_list()
|
||||
|
||||
2014-09-24 - ac68295 - libpostproc 53.1.100
|
||||
Add visualization support
|
||||
|
||||
2014-09-19 - 6edd6a4 - lavc 56.1.101 - dv_profile.h
|
||||
deprecate avpriv_dv_frame_profile2(), which was made public by accident.
|
||||
|
||||
|
||||
-------- 8< --------- FFmpeg 2.4 was cut here -------- 8< ---------
|
||||
|
||||
2014-08-25 - 215db29 / b263f8f - lavf 56.3.100 / 56.3.0 - avformat.h
|
||||
Add AVFormatContext.max_ts_probe.
|
||||
|
||||
2014-08-28 - f30a815 / 9301486 - lavc 56.1.100 / 56.1.0 - avcodec.h
|
||||
Add AV_PKT_DATA_STEREO3D to export container-level stereo3d information.
|
||||
|
||||
2014-08-23 - 8fc9bd0 - lavu 54.7.100 - dict.h
|
||||
AV_DICT_DONT_STRDUP_KEY and AV_DICT_DONT_STRDUP_VAL arguments are now
|
||||
freed even on error. This is consistent with the behaviour all users
|
||||
of it we could find expect.
|
||||
|
||||
2014-08-21 - 980a5b0 - lavu 54.6.100 - frame.h motion_vector.h
|
||||
Add AV_FRAME_DATA_MOTION_VECTORS side data and AVMotionVector structure
|
||||
|
||||
2014-08-16 - b7d5e01 - lswr 1.1.100 - swresample.h
|
||||
Add AVFrame based API
|
||||
|
||||
2014-08-16 - c2829dc - lavu 54.4.100 - dict.h
|
||||
Add av_dict_set_int helper function.
|
||||
|
||||
2014-08-13 - c8571c6 / 8ddc326 - lavu 54.3.100 / 54.3.0 - mem.h
|
||||
Add av_strndup().
|
||||
|
||||
2014-08-13 - 2ba4577 / a8c104a - lavu 54.2.100 / 54.2.0 - opt.h
|
||||
Add av_opt_get_dict_val/set_dict_val with AV_OPT_TYPE_DICT to support
|
||||
dictionary types being set as options.
|
||||
|
||||
2014-08-13 - afbd4b8 - lavf 56.01.0 - avformat.h
|
||||
Add AVFormatContext.event_flags and AVStream.event_flags for signaling to
|
||||
the user when events happen in the file/stream.
|
||||
|
||||
2014-08-10 - 78eaaa8 / fb1ddcd - lavr 2.1.0 - avresample.h
|
||||
Add avresample_convert_frame() and avresample_config().
|
||||
|
||||
2014-08-10 - 78eaaa8 / fb1ddcd - lavu 54.1.100 / 54.1.0 - error.h
|
||||
Add AVERROR_INPUT_CHANGED and AVERROR_OUTPUT_CHANGED.
|
||||
|
||||
2014-08-08 - 3841f2a / d35b94f - lavc 55.73.102 / 55.57.4 - avcodec.h
|
||||
Deprecate FF_IDCT_XVIDMMX define and xvidmmx idct option.
|
||||
Replaced by FF_IDCT_XVID and xvid respectively.
|
||||
|
||||
2014-08-08 - 5c3c671 - lavf 55.53.100 - avio.h
|
||||
Add avio_feof() and deprecate url_feof().
|
||||
|
||||
2014-08-07 - bb78903 - lsws 2.1.3 - swscale.h
|
||||
sws_getContext is not going to be removed in the future.
|
||||
|
||||
2014-08-07 - a561662 / ad1ee5f - lavc 55.73.101 / 55.57.3 - avcodec.h
|
||||
reordered_opaque is not going to be removed in the future.
|
||||
|
||||
2014-08-02 - 28a2107 - lavu 52.98.100 - pixelutils.h
|
||||
Add pixelutils API with SAD functions
|
||||
|
||||
2014-08-04 - 6017c98 / e9abafc - lavu 52.97.100 / 53.22.0 - pixfmt.h
|
||||
Add AV_PIX_FMT_YA16 pixel format for 16 bit packed gray with alpha.
|
||||
|
||||
2014-08-04 - 4c8bc6f / e96c3b8 - lavu 52.96.101 / 53.21.1 - avstring.h
|
||||
Rename AV_PIX_FMT_Y400A to AV_PIX_FMT_YA8 to better identify the format.
|
||||
An alias pixel format and color space name are provided for compatibility.
|
||||
|
||||
2014-08-04 - 073c074 / d2962e9 - lavu 52.96.100 / 53.21.0 - pixdesc.h
|
||||
Support name aliases for pixel formats.
|
||||
|
||||
2014-08-03 - 71d008e / 1ef9e83 - lavc 55.72.101 / 55.57.2 - avcodec.h
|
||||
2014-08-03 - 71d008e / 1ef9e83 - lavu 52.95.100 / 53.20.0 - frame.h
|
||||
Deprecate AVCodecContext.dtg_active_format and use side-data instead.
|
||||
|
||||
2014-08-03 - e680c73 - lavc 55.72.100 - avcodec.h
|
||||
Add get_pixels() to AVDCT
|
||||
|
||||
2014-08-03 - 9400603 / 9f17685 - lavc 55.71.101 / 55.57.1 - avcodec.h
|
||||
Deprecate unused FF_IDCT_IPP define and ipp avcodec option.
|
||||
Deprecate unused FF_DEBUG_PTS define and pts avcodec option.
|
||||
Deprecate unused FF_CODER_TYPE_DEFLATE define and deflate avcodec option.
|
||||
Deprecate unused FF_DCT_INT define and int avcodec option.
|
||||
Deprecate unused avcodec option scenechange_factor.
|
||||
|
||||
2014-07-30 - ba3e331 - lavu 52.94.100 - frame.h
|
||||
Add av_frame_side_data_name()
|
||||
|
||||
2014-07-29 - 80a3a66 / 3a19405 - lavf 56.01.100 / 56.01.0 - avformat.h
|
||||
Add mime_type field to AVProbeData, which now MUST be initialized in
|
||||
order to avoid uninitialized reads of the mime_type pointer, likely
|
||||
leading to crashes.
|
||||
Typically, this means you will do 'AVProbeData pd = { 0 };' instead of
|
||||
'AVProbeData pd;'.
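
A short sketch of the initialization the entry above asks for (function and variable names are illustrative; the buffer is assumed to be padded as probing requires):

```c
#include <libavformat/avformat.h>

static AVInputFormat *probe(uint8_t *buf, int buf_size, const char *name)
{
    AVProbeData pd = { 0 };   /* zero-init so the new mime_type field is NULL */
    pd.filename = name;
    pd.buf      = buf;        /* assumed padded with AVPROBE_PADDING_SIZE zeros */
    pd.buf_size = buf_size;
    return av_probe_input_format(&pd, 1);
}
```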

2014-07-29 - 31e0b5d / 69e7336 - lavu 52.92.100 / 53.19.0 - avstring.h
  Make name matching function from lavf public as av_match_name().

2014-07-28 - 2e5c8b0 / c5fca01 - lavc 55.71.100 / 55.57.0 - avcodec.h
  Add AV_CODEC_PROP_REORDER to mark codecs supporting frame reordering.

2014-07-27 - ff9a154 - lavf 55.50.100 - avformat.h
  New field int64_t probesize2 instead of deprecated
  field int probesize.

2014-07-27 - 932ff70 - lavc 55.70.100 - avdct.h
  Add AVDCT / avcodec_dct_alloc() / avcodec_dct_init().

2014-07-23 - 8a4c086 - lavf 55.49.100 - avio.h
  Add avio_read_to_bprint()


-------- 8< --------- FFmpeg 2.3 was cut here -------- 8< ---------

2014-07-14 - 62227a7 - lavf 55.47.100 - avformat.h
  Add av_stream_get_parser()

2014-07-09 - c67690f / a54f03b - lavu 52.92.100 / 53.18.0 - display.h
  Add av_display_matrix_flip() to flip the transformation matrix.

2014-07-09 - 1b58f13 / f6ee61f - lavc 55.69.100 / 55.56.0 - dv_profile.h
  Add a public API for DV profile handling.

2014-06-20 - 0dceefc / 9e500ef - lavu 52.90.100 / 53.17.0 - imgutils.h
  Add av_image_check_sar().

2014-06-20 - 4a99333 / 874390e - lavc 55.68.100 / 55.55.0 - avcodec.h
  Add av_packet_rescale_ts() to simplify timestamp conversion.
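A minimal sketch, assuming a packet in the decoder time base that is about to
be written to a muxer stream:

    #include <libavcodec/avcodec.h>

    static void to_stream_tb(AVPacket *pkt, AVRational dec_tb, AVRational st_tb)
    {
        /* One call now rescales pts, dts and duration together. */
        av_packet_rescale_ts(pkt, dec_tb, st_tb);
    }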

2014-06-18 - ac293b6 / 194be1f - lavf 55.44.100 / 55.20.0 - avformat.h
  The proper way for providing a hint about the desired timebase to the muxers
  is now setting AVStream.time_base, instead of AVStream.codec.time_base as was
  done previously. The old method is now deprecated.
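A minimal sketch of the new hint, assuming a freshly created output stream
(the muxer may still adjust the value in avformat_write_header()):

    #include <libavformat/avformat.h>

    static void hint_timebase(AVStream *st)
    {
        st->time_base = (AVRational){ 1, 90000 };   /* hypothetical 90 kHz timebase */
        /* ...instead of the deprecated st->codec->time_base. */
    }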

2014-06-11 - 67d29da - lavc 55.66.101 - avcodec.h
  Increase FF_INPUT_BUFFER_PADDING_SIZE to 32 due to some corner cases needing
  it

2014-06-10 - 5482780 - lavf 55.43.100 - avformat.h
  New field int64_t max_analyze_duration2 instead of deprecated
  int max_analyze_duration.

2014-05-30 - 00759d7 - lavu 52.89.100 - opt.h
  Add av_opt_copy()

2014-06-01 - 03bb99a / 0957b27 - lavc 55.66.100 / 55.54.0 - avcodec.h
  Add AVCodecContext.side_data_only_packets to allow encoders to output packets
  with only side data. This option may become mandatory in the future, so all
  users are recommended to update their code and enable this option.

2014-06-01 - 6e8e9f1 / 8c02adc - lavu 52.88.100 / 53.16.0 - frame.h, pixfmt.h
  Move all color-related enums (AVColorPrimaries, AVColorSpace, AVColorRange,
  AVColorTransferCharacteristic, and AVChromaLocation) inside lavu.
  And add AVFrame fields for them.

2014-05-29 - bdb2e80 / b2d4565 - lavr 1.3.0 - avresample.h
  Add avresample_max_output_samples

2014-05-28 - d858ee7 / 6d21259 - lavf 55.42.100 / 55.19.0 - avformat.h
  Add strict_std_compliance and related AVOptions to support experimental
  muxing.

2014-05-26 - 55cc60c - lavu 52.87.100 - threadmessage.h
  Add thread message queue API.

2014-05-26 - c37d179 - lavf 55.41.100 - avformat.h
  Add format_probesize to AVFormatContext.

2014-05-20 - 7d25af1 / c23c96b - lavf 55.39.100 / 55.18.0 - avformat.h
  Add av_stream_get_side_data() to access stream-level side data
  in the same way as av_packet_get_side_data().
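A minimal sketch, assuming the stream may carry the display matrix side data
mentioned elsewhere in this file:

    #include <stdint.h>
    #include <libavformat/avformat.h>

    static const int32_t *display_matrix(AVStream *st)
    {
        int size = 0;
        uint8_t *sd = av_stream_get_side_data(st, AV_PKT_DATA_DISPLAYMATRIX, &size);
        return size == 9 * sizeof(int32_t) ? (const int32_t *)sd : NULL;
    }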

2014-05-20 - 7336e39 - lavu 52.86.100 - fifo.h
  Add av_fifo_alloc_array() function.

2014-05-19 - ef1d4ee / bddd8cb - lavu 52.85.100 / 53.15.0 - frame.h, display.h
  Add AV_FRAME_DATA_DISPLAYMATRIX for exporting frame-level
  spatial rendering on video frames for proper display.

2014-05-19 - ef1d4ee / bddd8cb - lavc 55.64.100 / 55.53.0 - avcodec.h
  Add AV_PKT_DATA_DISPLAYMATRIX for exporting packet-level
  spatial rendering on video frames for proper display.

2014-05-19 - 999a99c / a312f71 - lavf 55.38.101 / 55.17.1 - avformat.h
  Deprecate AVStream.pts and the AVFrac struct, which was its only use case.
  See use av_stream_get_end_pts()

2014-05-18 - 68c0518 / fd05602 - lavc 55.63.100 / 55.52.0 - avcodec.h
  Add avcodec_free_context(). From now on it should be used for freeing
  AVCodecContext.
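A minimal sketch of the intended pairing, assuming the caller already found a
decoder:

    #include <libavcodec/avcodec.h>

    static int probe_codec(const AVCodec *codec)
    {
        AVCodecContext *ctx = avcodec_alloc_context3(codec);
        if (!ctx)
            return AVERROR(ENOMEM);
        /* ...configure and use ctx... */
        avcodec_free_context(&ctx);   /* frees the context and sets ctx to NULL */
        return 0;
    }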

2014-05-17 - 0eec06e / 1bd0bdc - lavu 52.84.100 / 54.5.0 - time.h
  Add av_gettime_relative() av_gettime_relative_is_monotonic()
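A minimal sketch, timing an interval with the new clock:

    #include <libavutil/time.h>

    static int64_t time_work(void (*work)(void))
    {
        int64_t start = av_gettime_relative();   /* microseconds */
        work();
        /* Monotonic wherever av_gettime_relative_is_monotonic() returns 1. */
        return av_gettime_relative() - start;
    }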

2014-05-15 - eacf7d6 / 0c1959b - lavf 55.38.100 / 55.17.0 - avformat.h
  Add AVFMT_FLAG_BITEXACT flag. Muxers now use it instead of checking
  CODEC_FLAG_BITEXACT on the first stream.

2014-05-15 - 96cb4c8 - lswr 0.19.100 - swresample.h
  Add swr_close()

2014-05-11 - 14aef38 / 66e6c8a - lavu 52.83.100 / 53.14.0 - pixfmt.h
  Add AV_PIX_FMT_VDA for new-style VDA acceleration.

2014-05-07 - 351f611 - lavu 52.82.100 - fifo.h
  Add av_fifo_freep() function.

2014-05-02 - ba52fb11 - lavu 52.81.100 - opt.h
  Add av_opt_set_dict2() function.

2014-05-01 - e77b985 / a2941c8 - lavc 55.60.103 / 55.50.3 - avcodec.h
  Deprecate CODEC_FLAG_MV0. It is replaced by the flag "mv0" in the
  "mpv_flags" private option of the mpegvideo encoders.

2014-05-01 - e40ae8c / 6484149 - lavc 55.60.102 / 55.50.2 - avcodec.h
  Deprecate CODEC_FLAG_GMC. It is replaced by the "gmc" private option of the
  libxvid encoder.

2014-05-01 - 1851643 / b2c3171 - lavc 55.60.101 / 55.50.1 - avcodec.h
  Deprecate CODEC_FLAG_NORMALIZE_AQP. It is replaced by the flag "naq" in the
  "mpv_flags" private option of the mpegvideo encoders.

2014-05-01 - cac07d0 / 5fcceda - avcodec.h
  Deprecate CODEC_FLAG_INPUT_PRESERVED. Its functionality is replaced by passing
  reference-counted frames to encoders.

2014-04-30 - 617e866 - lavu 52.81.100 - pixdesc.h
  Add av_find_best_pix_fmt_of_2(), av_get_pix_fmt_loss()
  Deprecate avcodec_get_pix_fmt_loss(), avcodec_find_best_pix_fmt_of_2()

2014-04-29 - 1bf6396 - lavc 55.60.100 - avcodec.h
  Add AVCodecDescriptor.mime_types field.

2014-04-29 - b804eb4 - lavu 52.80.100 - hash.h
  Add av_hash_final_bin(), av_hash_final_hex() and av_hash_final_b64().

2014-03-07 - 8b2a130 - lavc 55.50.0 / 55.53.100 - dxva2.h
  Add FF_DXVA2_WORKAROUND_INTEL_CLEARVIDEO for old Intel GPUs.

2014-04-22 - 502512e /dac7e8a - lavu 53.13.0 / 52.78.100 - avutil.h
  Add av_get_time_base_q().

2014-04-17 - a8d01a7 / 0983d48 - lavu 53.12.0 / 52.77.100 - crc.h
  Add AV_CRC_16_ANSI_LE crc variant.

2014-04-15 - ef818d8 - lavf 55.37.101 - avformat.h
  Add av_format_inject_global_side_data()

2014-04-12 - 4f698be - lavu 52.76.100 - log.h
  Add av_log_get_flags()

2014-04-11 - 6db42a2b - lavd 55.12.100 - avdevice.h
  Add avdevice_capabilities_create() function.
  Add avdevice_capabilities_free() function.

2014-04-07 - 0a1cc04 / 8b17243 - lavu 52.75.100 / 53.11.0 - pixfmt.h
  Add AV_PIX_FMT_YVYU422 pixel format.

2014-04-04 - c1d0536 / 8542f9c - lavu 52.74.100 / 53.10.0 - replaygain.h
  Full scale for peak values is now 100000 (instead of UINT32_MAX) and values
  may overflow.

2014-04-03 - c16e006 / 7763118 - lavu 52.73.100 / 53.9.0 - log.h
  Add AV_LOG(c) macro to have 256 color debug messages.

2014-04-03 - eaed4da9 - lavu 52.72.100 - opt.h
  Add AV_OPT_MULTI_COMPONENT_RANGE define to allow return
  multi-component option ranges.

2014-03-29 - cd50a44b - lavu 52.70.100 - mem.h
  Add av_dynarray_add_nofree() function.

2014-02-24 - 3e1f241 / d161ae0 - lavu 52.69.100 / 53.8.0 - frame.h
  Add av_frame_remove_side_data() for removing a single side data
  instance from a frame.

2014-03-24 - 83e8978 / 5a7e35d - lavu 52.68.100 / 53.7.0 - frame.h, replaygain.h
  Add AV_FRAME_DATA_REPLAYGAIN for exporting replaygain tags.
  Add a new header replaygain.h with the AVReplayGain struct.

2014-03-24 - 83e8978 / 5a7e35d - lavc 55.54.100 / 55.36.0 - avcodec.h
  Add AV_PKT_DATA_REPLAYGAIN for exporting replaygain tags.

2014-03-24 - 595ba3b / 25b3258 - lavf 55.35.100 / 55.13.0 - avformat.h
  Add AVStream.side_data and AVStream.nb_side_data for exporting stream-global
  side data (e.g. replaygain tags, video rotation)

2014-03-24 - bd34e26 / 0e2c3ee - lavc 55.53.100 / 55.35.0 - avcodec.h
  Give the name AVPacketSideData to the previously anonymous struct used for
  AVPacket.side_data.


-------- 8< --------- FFmpeg 2.2 was cut here -------- 8< ---------

2014-03-18 - 37c07d4 - lsws 2.5.102
  Make gray16 full-scale.

2014-03-16 - 6b1ca17 / 1481d24 - lavu 52.67.100 / 53.6.0 - pixfmt.h
  Add RGBA64_LIBAV pixel format and variants for compatibility

2014-03-11 - 3f3229c - lavf 55.34.101 - avformat.h
  Set AVFormatContext.start_time_realtime when demuxing.

2014-03-03 - 06fed440 - lavd 55.11.100 - avdevice.h
  Add av_input_audio_device_next().
  Add av_input_video_device_next().
  Add av_output_audio_device_next().
  Add av_output_video_device_next().

2014-02-24 - fff5262 / 1155fd0 - lavu 52.66.100 / 53.5.0 - frame.h
  Add av_frame_copy() for copying the frame data.

2014-02-24 - a66be60 - lswr 0.18.100 - swresample.h
  Add swr_is_initialized() for checking whether a resample context is initialized.

2014-02-22 - 5367c0b / 7e86c27 - lavr 1.2.0 - avresample.h
  Add avresample_is_open() for checking whether a resample context is open.

2014-02-19 - 6a24d77 / c3ecd96 - lavu 52.65.100 / 53.4.0 - opt.h
  Add AV_OPT_FLAG_EXPORT and AV_OPT_FLAG_READONLY to mark options meant (only)
  for reading.

2014-02-19 - f4c8d00 / 6bb8720 - lavu 52.64.101 / 53.3.1 - opt.h
  Deprecate unused AV_OPT_FLAG_METADATA.

2014-02-16 - 81c3f81 - lavd 55.10.100 - avdevice.h
  Add avdevice_list_devices() and avdevice_free_list_devices()

2014-02-16 - db3c970 - lavf 55.33.100 - avio.h
  Add avio_find_protocol_name() to find out the name of the protocol that would
  be selected for a given URL.

2014-02-15 - a2bc6c1 / c98f316 - lavu 52.64.100 / 53.3.0 - frame.h
  Add AV_FRAME_DATA_DOWNMIX_INFO value to the AVFrameSideDataType enum and
  downmix_info.h API, which identify downmix-related metadata.

2014-02-11 - 1b05ac2 - lavf 55.32.100 - avformat.h
  Add av_write_uncoded_frame() and av_interleaved_write_uncoded_frame().

2014-02-04 - 3adb5f8 / d9ae103 - lavf 55.30.100 / 55.11.0 - avformat.h
  Add AVFormatContext.max_interleave_delta for controlling amount of buffering
  when interleaving.

2014-02-02 - 5871ee5 - lavf 55.29.100 - avformat.h
  Add output_ts_offset muxing option to AVFormatContext.

2014-01-27 - 102bd64 - lavd 55.7.100 - avdevice.h
                       lavf 55.28.100 - avformat.h
  Add avdevice_dev_to_app_control_message() function.

2014-01-27 - 7151411 - lavd 55.6.100 - avdevice.h
                       lavf 55.27.100 - avformat.h
  Add avdevice_app_to_dev_control_message() function.

2014-01-24 - 86bee79 - lavf 55.26.100 - avformat.h
  Add AVFormatContext option metadata_header_padding to allow control over the
  amount of padding added.

2014-01-20 - eef74b2 / 93c553c - lavc 55.48.102 / 55.32.1 - avcodec.h
  Edges are not required anymore on video buffers allocated by get_buffer2()
  (i.e. as if the CODEC_FLAG_EMU_EDGE flag was always on). Deprecate
  CODEC_FLAG_EMU_EDGE and avcodec_get_edge_width().

2014-01-19 - 1a193c4 - lavf 55.25.100 - avformat.h
  Add avformat_get_mov_video_tags() and avformat_get_mov_audio_tags().

2014-01-19 - 3532dd5 - lavu 52.63.100 - rational.h
  Add av_make_q() function.
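A minimal sketch:

    #include <libavutil/rational.h>

    AVRational ntsc_fps = av_make_q(30000, 1001);   /* builds the rational 30000/1001 */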

2014-01-05 - 4cf4da9 / 5b4797a - lavu 52.62.100 / 53.2.0 - frame.h
  Add AV_FRAME_DATA_MATRIXENCODING value to the AVFrameSideDataType enum, which
  identifies AVMatrixEncoding data.

2014-01-05 - 751385f / 5c437fb - lavu 52.61.100 / 53.1.0 - channel_layout.h
  Add values for various Dolby flags to the AVMatrixEncoding enum.

2014-01-04 - b317f94 - lavu 52.60.100 - mathematics.h
  Add av_add_stable() function.

2013-12-22 - 911676c - lavu 52.59.100 - avstring.h
  Add av_strnlen() function.

2013-12-09 - 64f73ac - lavu 52.57.100 - opencl.h
  Add av_opencl_benchmark() function.

2013-11-30 - 82b2e9c - lavu 52.56.100 - ffversion.h
  Moves version.h to libavutil/ffversion.h.
  Install ffversion.h and make it public.

2013-12-11 - 29c83d2 / b9fb59d,409a143 / 9431356,44967ab / d7b3ee9 - lavc 55.45.101 / 55.28.1 - avcodec.h
  av_frame_alloc(), av_frame_unref() and av_frame_free() now can and should be
  used instead of avcodec_alloc_frame(), avcodec_get_frame_defaults() and
  avcodec_free_frame() respectively. The latter three functions are deprecated.
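A minimal sketch of the replacement lifecycle:

    #include <libavutil/frame.h>

    static void frame_lifecycle(void)
    {
        AVFrame *frame = av_frame_alloc();   /* was avcodec_alloc_frame() */
        if (!frame)
            return;
        /* ...decode into frame, then recycle it... */
        av_frame_unref(frame);               /* was avcodec_get_frame_defaults() */
        av_frame_free(&frame);               /* was avcodec_free_frame() */
    }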

2013-12-09 - 7a60348 / 7e244c6- - lavu 52.58.100 / 52.20.0 - frame.h
  Add AV_FRAME_DATA_STEREO3D value to the AVFrameSideDataType enum and
  stereo3d.h API, that identify codec-independent stereo3d information.

2013-11-26 - 625b290 / 1eaac1d- - lavu 52.55.100 / 52.19.0 - frame.h
  Add AV_FRAME_DATA_A53_CC value to the AVFrameSideDataType enum, which
  identifies ATSC A53 Part 4 Closed Captions data.

2013-11-22 - 6859065 - lavu 52.54.100 - avstring.h
  Add av_utf8_decode() function.

2013-11-22 - fb7d70c - lavc 55.44.100 - avcodec.h
  Add HEVC profiles

2013-11-20 - c28b61c - lavc 55.44.100 - avcodec.h
  Add av_packet_{un,}pack_dictionary()
  Add AV_PKT_METADATA_UPDATE side data type, used to transmit key/value
  strings between a stream and the application.

2013-11-14 - 7c888ae / cce3e0a - lavu 52.53.100 / 52.18.0 - mem.h
  Move av_fast_malloc() and av_fast_realloc() for libavcodec to libavutil.

2013-11-14 - b71e4d8 / 8941971 - lavc 55.43.100 / 55.27.0 - avcodec.h
  Deprecate AVCodecContext.error_rate, it is replaced by the 'error_rate'
  private option of the mpegvideo encoder family.

2013-11-14 - 31c09b7 / 728c465 - lavc 55.42.100 / 55.26.0 - vdpau.h
  Add av_vdpau_get_profile().
  Add av_vdpau_alloc_context(). This function must from now on be
  used for allocating AVVDPAUContext.

2013-11-04 - be41f21 / cd8f772 - lavc 55.41.100 / 55.25.0 - avcodec.h
                                 lavu 52.51.100 - frame.h
  Add ITU-R BT.2020 and other not yet included values to color primaries,
  transfer characteristics and colorspaces.

2013-11-04 - 85cabf1 - lavu 52.50.100 - avutil.h
  Add av_fopen_utf8()

2013-10-31 - 78265fc / 28096e0 - lavu 52.49.100 / 52.17.0 - frame.h
  Add AVFrame.flags and AV_FRAME_FLAG_CORRUPT.


-------- 8< --------- FFmpeg 2.1 was cut here -------- 8< ---------

2013-10-27 - dbe6f9f - lavc 55.39.100 - avcodec.h
2013-10-27 - xxxxxxx - lavc 55.39.100 - avcodec.h
  Add CODEC_CAP_DELAY support to avcodec_decode_subtitle2.

2013-10-27 - d61617a - lavu 52.48.100 - parseutils.h
2013-10-27 - xxxxxxx - lavu 52.48.100 - parseutils.h
  Add av_get_known_color_name().

2013-10-17 - 8696e51 - lavu 52.47.100 - opt.h
2013-10-17 - xxxxxxx - lavu 52.47.100 - opt.h
  Add AV_OPT_TYPE_CHANNEL_LAYOUT and channel layout option handlers
  av_opt_get_channel_layout() and av_opt_set_channel_layout().

2013-10-06 - ccf96f8 -libswscale 2.5.101 - options.c
2013-10-xx - xxxxxxx -libswscale 2.5.101 - options.c
  Change default scaler to bicubic

2013-10-03 - e57dba0 - lavc 55.34.100 - avcodec.h
2013-10-03 - xxxxxxx - lavc 55.34.100 - avcodec.h
  Add av_codec_get_max_lowres()

2013-10-02 - 5082fcc - lavf 55.19.100 - avformat.h
2013-10-02 - xxxxxxx - lavf 55.19.100 - avformat.h
  Add audio/video/subtitle AVCodec fields to AVFormatContext to force specific
  decoders

2013-09-28 - 7381d31 / 0767bfd - lavfi 3.88.100 / 3.11.0 - avfilter.h
2013-08-xx - xxxxxxx - lavfi 3.11.0 - avfilter.h
  Add AVFilterGraph.execute and AVFilterGraph.opaque for custom slice threading
  implementations.

2013-09-21 - 85f8a3c / e208e6d - lavu 52.46.100 / 52.16.0 - pixfmt.h
2013-09-21 - xxxxxxx - lavu 52.16.0 - pixfmt.h
  Add interleaved 4:2:2 8/10-bit formats AV_PIX_FMT_NV16 and
  AV_PIX_FMT_NV20.

@@ -578,7 +49,7 @@ API changes, most recent first:
2013-09-04 - 3e1f507 - lavc 55.31.101 - avcodec.h
  avcodec_close() argument can be NULL.

2013-09-04 - 36cd017a - lavf 55.16.101 - avformat.h
2013-09-04 - 36cd017 - lavf 55.16.101 - avformat.h
  avformat_close_input() argument can be NULL and point on NULL.

2013-08-29 - e31db62 - lavf 55.15.100 - avformat.h
@@ -587,10 +58,10 @@ API changes, most recent first:
2013-08-15 - 1e0e193 - lsws 2.5.100 -
  Add a sws_dither AVOption, allowing to set the dither algorithm used

2013-08-11 - d404fe35 - lavc 55.27.100 - vdpau.h
2013-08-xx - xxxxxxx - lavc 55.27.100 - vdpau.h
  Add a render2 alternative to the render callback function.

2013-08-11 - af05edc - lavc 55.26.100 - vdpau.h
2013-08-xx - xxxxxxx - lavc 55.26.100 - vdpau.h
  Add allocation function for AVVDPAUContext, allowing
  to extend it in the future without breaking ABI/API.

@@ -600,7 +71,7 @@ API changes, most recent first:

2013-08-05 - 9547e3e / f824535 - lavc 55.22.100 / 55.13.0 - avcodec.h
  Deprecate the bitstream-related members from struct AVVDPAUContext.
  The bitstream buffers no longer need to be explicitly freed.
  The bistream buffers no longer need to be explicitly freed.

2013-08-05 - 3b805dc / 549294f - lavc 55.21.100 / 55.12.0 - avcodec.h
  Deprecate the CODEC_CAP_HWACCEL_VDPAU codec capability. Use CODEC_CAP_HWACCEL
@@ -616,9 +87,6 @@ API changes, most recent first:
  Add avcodec_chroma_pos_to_enum()
  Add avcodec_enum_to_chroma_pos()


-------- 8< --------- FFmpeg 2.0 was cut here -------- 8< ---------

2013-07-03 - 838bd73 - lavfi 3.78.100 - avfilter.h
  Deprecate avfilter_graph_parse() in favor of the equivalent
  avfilter_graph_parse_ptr().
@@ -691,9 +159,6 @@ API changes, most recent first:
2013-03-17 - 7aa9af5 - lavu 52.20.100 - opt.h
  Add AV_OPT_TYPE_VIDEO_RATE value to AVOptionType enum.


-------- 8< --------- FFmpeg 1.2 was cut here -------- 8< ---------

2013-03-07 - 9767ec6 - lavu 52.18.100 - avstring.h,bprint.h
  Add av_escape() and av_bprint_escape() API.

@@ -706,9 +171,6 @@ API changes, most recent first:
2013-01-01 - 2eb2e17 - lavfi 3.34.100
  Add avfilter_get_audio_buffer_ref_from_arrays_channels.


-------- 8< --------- FFmpeg 1.1 was cut here -------- 8< ---------

2012-12-20 - 34de47aa - lavfi 3.29.100 - avfilter.h
  Add AVFilterLink.channels, avfilter_link_get_channels()
  and avfilter_ref_get_channels().
@@ -754,9 +216,6 @@ API changes, most recent first:
  Add LIBSWRESAMPLE_VERSION, LIBSWRESAMPLE_BUILD
  and LIBSWRESAMPLE_IDENT symbols.


-------- 8< --------- FFmpeg 1.0 was cut here -------- 8< ---------

2012-09-06 - 29e972f - lavu 51.72.100 - parseutils.h
  Add av_small_strptime() time parsing function.

@@ -961,9 +420,6 @@ lavd 54.4.100 / 54.0.0, lavfi 3.5.0
  avresample_read() are now uint8_t** instead of void**.
  Libavresample is now stable.

2012-09-26 - 3ba0dab7 / 1384df64 - lavf 54.29.101 / 56.06.3 - avformat.h
  Add AVFormatContext.avoid_negative_ts.

2012-09-24 - 46a3595 / a42aada - lavc 54.59.100 / 54.28.0 - avcodec.h
  Add avcodec_free_frame(). This function must now
  be used for freeing an AVFrame.
@@ -1178,9 +634,6 @@ lavd 54.4.100 / 54.0.0, lavfi 3.5.0
2012-01-12 - b18e17e / 3167dc9 - lavfi 2.59.100 / 2.15.0
  Add a new installed header -- libavfilter/version.h -- with version macros.


-------- 8< --------- FFmpeg 0.9 was cut here -------- 8< ---------

2011-12-08 - a502939 - lavfi 2.52.0
  Add av_buffersink_poll_frame() to buffersink.h.

@@ -1209,9 +662,6 @@ lavd 54.4.100 / 54.0.0, lavfi 3.5.0
  Add avformat_close_input().
  Deprecate av_close_input_file() and av_close_input_stream().
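A minimal sketch of the replacement pairing, assuming a hypothetical input URL
and that av_register_all() has already run:

    #include <libavformat/avformat.h>

    static int open_probe(const char *url)
    {
        AVFormatContext *ic = NULL;
        int ret = avformat_open_input(&ic, url, NULL, NULL);
        if (ret < 0)
            return ret;
        avformat_close_input(&ic);   /* replaces av_close_input_file(); sets ic to NULL */
        return 0;
    }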

2011-12-09 - c59b80c / b2890f5 - lavu 51.32.0 / 51.20.0 - audioconvert.h
  Expand the channel layout list.

2011-12-02 - e4de716 / 0eea212 - lavc 53.40.0 / 53.25.0
  Add nb_samples and extended_data fields to AVFrame.
  Deprecate AVCODEC_MAX_AUDIO_FRAME_SIZE.
@@ -1225,10 +675,6 @@ lavd 54.4.100 / 54.0.0, lavfi 3.5.0
  Change AVCodecContext.error[4] to [8] at next major bump.
  Add AV_NUM_DATA_POINTERS to simplify the bump transition.

2011-11-24 - lavu 51.29.0 / 51.19.0
  92afb43 / bd97b2e - add planar RGB pixel formats
  92afb43 / 6b0768e - add PIX_FMT_PLANAR and PIX_FMT_RGB pixel descriptions

2011-11-23 - 8e576d5 / bbb46f3 - lavu 51.27.0 / 51.18.0
  Add av_samples_get_buffer_size(), av_samples_fill_arrays(), and
  av_samples_alloc(), to samplefmt.h.
@@ -1390,13 +836,6 @@ lavd 54.4.100 / 54.0.0, lavfi 3.5.0
2011-06-28 - 5129336 - lavu 51.11.0 - avutil.h
  Define the AV_PICTURE_TYPE_NONE value in AVPictureType enum.


-------- 8< --------- FFmpeg 0.7 was cut here -------- 8< ---------


-------- 8< --------- FFmpeg 0.8 was cut here -------- 8< ---------

2011-06-19 - fd2c0a5 - lavfi 2.23.0 - avfilter.h
  Add layout negotiation fields and helper functions.

@@ -2074,9 +1513,6 @@ lavd 54.4.100 / 54.0.0, lavfi 3.5.0
2010-06-02 - 7e566bb - lavc 52.73.0 - av_get_codec_tag_string()
  Add av_get_codec_tag_string().


-------- 8< --------- FFmpeg 0.6 was cut here -------- 8< ---------

2010-06-01 - 2b99142 - lsws 0.11.0 - convertPalette API
  Add sws_convertPalette8ToPacked32() and sws_convertPalette8ToPacked24().

@@ -2094,6 +1530,10 @@ lavd 54.4.100 / 54.0.0, lavfi 3.5.0
2010-05-09 - b6bc205 - lavfi 1.20.0 - AVFilterPicRef
  Add interlaced and top_field_first fields to AVFilterPicRef.

------------------------------8<-------------------------------------
0.6 branch was cut here
----------------------------->8--------------------------------------

2010-05-01 - 8e2ee18 - lavf 52.62.0 - probe function
  Add av_probe_input_format2 to API, it allows ignoring probe
  results below given score and returns the actual probe score.
14
doc/Doxyfile
14
doc/Doxyfile
@@ -31,7 +31,7 @@ PROJECT_NAME = FFmpeg
# This could be handy for archiving the generated documentation or
# if some version control system is used.

PROJECT_NUMBER =
PROJECT_NUMBER = 2.1

# With the PROJECT_LOGO tag one can specify a logo or icon that is included
# in the documentation. The maximum height of the logo should not exceed 55
@@ -759,7 +759,7 @@ ALPHABETICAL_INDEX = YES
# the COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns
# in which this list will be split (can be a number in the range [1..20])

COLS_IN_ALPHA_INDEX = 5
COLS_IN_ALPHA_INDEX = 2

# In case all classes in a project start with a common prefix, all
# classes will be put under the same header in the alphabetical index.
@@ -793,13 +793,13 @@ HTML_FILE_EXTENSION = .html
# each generated HTML page. If it is left blank doxygen will generate a
# standard header.

HTML_HEADER =
#HTML_HEADER = doc/doxy/header.html

# The HTML_FOOTER tag can be used to specify a personal HTML footer for
# each generated HTML page. If it is left blank doxygen will generate a
# standard footer.

HTML_FOOTER =
#HTML_FOOTER = doc/doxy/footer.html

# The HTML_STYLESHEET tag can be used to specify a user-defined cascading
# style sheet that is used by each HTML page. It can be used to
@@ -808,7 +808,7 @@ HTML_FOOTER =
# the style sheet file to the HTML output directory, so don't put your own
# stylesheet in the HTML output directory as well, or it will be erased!

HTML_STYLESHEET =
#HTML_STYLESHEET = doc/doxy/doxy_stylesheet.css

# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output.
# Doxygen will adjust the colors in the stylesheet and background images
@@ -1056,7 +1056,7 @@ FORMULA_TRANSPARENT = YES
# typically be disabled. For large projects the javascript based search engine
# can be slow, then enabling SERVER_BASED_SEARCH may provide a better solution.

SEARCHENGINE = YES
SEARCHENGINE = NO

# When the SERVER_BASED_SEARCH tag is enabled the search engine will be
# implemented using a PHP enabled web server instead of at the web client
@@ -1359,8 +1359,6 @@ PREDEFINED = "__attribute__(x)=" \
"DECLARE_ALIGNED(a,t,n)=t n" \
"offsetof(x,y)=0x42" \
av_alloc_size \
AV_GCC_VERSION_AT_LEAST(x,y)=1 \
__GNUC__=1 \

# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then
# this tag can be used to specify a list of macro names that should be expanded.
70
doc/Makefile
70
doc/Makefile
@@ -14,11 +14,11 @@ COMPONENTS-$(CONFIG_AVFORMAT) += ffmpeg-formats ffmpeg-protocols
COMPONENTS-$(CONFIG_AVDEVICE) += ffmpeg-devices
COMPONENTS-$(CONFIG_AVFILTER) += ffmpeg-filters

MANPAGES1 = $(AVPROGS-yes:%=doc/%.1) $(AVPROGS-yes:%=doc/%-all.1) $(COMPONENTS-yes:%=doc/%.1)
MANPAGES1 = $(PROGS-yes:%=doc/%.1) $(PROGS-yes:%=doc/%-all.1) $(COMPONENTS-yes:%=doc/%.1)
MANPAGES3 = $(LIBRARIES-yes:%=doc/%.3)
MANPAGES = $(MANPAGES1) $(MANPAGES3)
PODPAGES = $(AVPROGS-yes:%=doc/%.pod) $(AVPROGS-yes:%=doc/%-all.pod) $(COMPONENTS-yes:%=doc/%.pod) $(LIBRARIES-yes:%=doc/%.pod)
HTMLPAGES = $(AVPROGS-yes:%=doc/%.html) $(AVPROGS-yes:%=doc/%-all.html) $(COMPONENTS-yes:%=doc/%.html) $(LIBRARIES-yes:%=doc/%.html) \
PODPAGES = $(PROGS-yes:%=doc/%.pod) $(PROGS-yes:%=doc/%-all.pod) $(COMPONENTS-yes:%=doc/%.pod) $(LIBRARIES-yes:%=doc/%.pod)
HTMLPAGES = $(PROGS-yes:%=doc/%.html) $(PROGS-yes:%=doc/%-all.html) $(COMPONENTS-yes:%=doc/%.html) $(LIBRARIES-yes:%=doc/%.html) \
doc/developer.html \
doc/faq.html \
doc/fate.html \
@@ -36,29 +36,6 @@ DOCS-$(CONFIG_MANPAGES) += $(MANPAGES)
DOCS-$(CONFIG_TXTPAGES) += $(TXTPAGES)
DOCS = $(DOCS-yes)

DOC_EXAMPLES-$(CONFIG_AVIO_READING_EXAMPLE) += avio_reading
DOC_EXAMPLES-$(CONFIG_AVCODEC_EXAMPLE) += avcodec
DOC_EXAMPLES-$(CONFIG_DECODING_ENCODING_EXAMPLE) += decoding_encoding
DOC_EXAMPLES-$(CONFIG_DEMUXING_DECODING_EXAMPLE) += demuxing_decoding
DOC_EXAMPLES-$(CONFIG_EXTRACT_MVS_EXAMPLE) += extract_mvs
DOC_EXAMPLES-$(CONFIG_FILTER_AUDIO_EXAMPLE) += filter_audio
DOC_EXAMPLES-$(CONFIG_FILTERING_AUDIO_EXAMPLE) += filtering_audio
DOC_EXAMPLES-$(CONFIG_FILTERING_VIDEO_EXAMPLE) += filtering_video
DOC_EXAMPLES-$(CONFIG_METADATA_EXAMPLE) += metadata
DOC_EXAMPLES-$(CONFIG_MUXING_EXAMPLE) += muxing
DOC_EXAMPLES-$(CONFIG_QSVDEC_EXAMPLE) += qsvdec
DOC_EXAMPLES-$(CONFIG_REMUXING_EXAMPLE) += remuxing
DOC_EXAMPLES-$(CONFIG_RESAMPLING_AUDIO_EXAMPLE) += resampling_audio
DOC_EXAMPLES-$(CONFIG_SCALING_VIDEO_EXAMPLE) += scaling_video
DOC_EXAMPLES-$(CONFIG_TRANSCODE_AAC_EXAMPLE) += transcode_aac
DOC_EXAMPLES-$(CONFIG_TRANSCODING_EXAMPLE) += transcoding
ALL_DOC_EXAMPLES_LIST = $(DOC_EXAMPLES-) $(DOC_EXAMPLES-yes)

DOC_EXAMPLES := $(DOC_EXAMPLES-yes:%=doc/examples/%$(PROGSSUF)$(EXESUF))
ALL_DOC_EXAMPLES := $(ALL_DOC_EXAMPLES_LIST:%=doc/examples/%$(PROGSSUF)$(EXESUF))
ALL_DOC_EXAMPLES_G := $(ALL_DOC_EXAMPLES_LIST:%=doc/examples/%$(PROGSSUF)_g$(EXESUF))
PROGS += $(DOC_EXAMPLES)

all-$(CONFIG_DOC): doc

doc: documentation
@@ -66,9 +43,7 @@ doc: documentation
apidoc: doc/doxy/html
documentation: $(DOCS)

examples: $(DOC_EXAMPLES)

TEXIDEP = perl $(SRC_PATH)/doc/texidep.pl $(SRC_PATH) $< $@ >$(@:%=%.d)
TEXIDEP = awk '/^@(verbatim)?include/ { printf "$@: $(@D)/%s\n", $$2 }' <$< >$(@:%=%.d)

doc/%.txt: TAG = TXT
doc/%.txt: doc/%.texi
@@ -83,25 +58,14 @@ $(GENTEXI): doc/avoptions_%.texi: doc/print_options$(HOSTEXESUF)
$(M)doc/print_options $* > $@

doc/%.html: TAG = HTML
doc/%-all.html: TAG = HTML

ifdef HAVE_MAKEINFO_HTML
doc/%.html: doc/%.texi $(SRC_PATH)/doc/t2h.pm $(GENTEXI)
$(Q)$(TEXIDEP)
$(M)makeinfo --html -I doc --no-split -D config-not-all --init-file=$(SRC_PATH)/doc/t2h.pm --output $@ $<

doc/%-all.html: doc/%.texi $(SRC_PATH)/doc/t2h.pm $(GENTEXI)
$(Q)$(TEXIDEP)
$(M)makeinfo --html -I doc --no-split -D config-all --init-file=$(SRC_PATH)/doc/t2h.pm --output $@ $<
else
doc/%.html: doc/%.texi $(SRC_PATH)/doc/t2h.init $(GENTEXI)
$(Q)$(TEXIDEP)
$(M)texi2html -I doc -monolithic --D=config-not-all --init-file $(SRC_PATH)/doc/t2h.init --output $@ $<

doc/%-all.html: TAG = HTML
doc/%-all.html: doc/%.texi $(SRC_PATH)/doc/t2h.init $(GENTEXI)
$(Q)$(TEXIDEP)
$(M)texi2html -I doc -monolithic --D=config-all --init-file $(SRC_PATH)/doc/t2h.init --output $@ $<
endif

doc/%.pod: TAG = POD
doc/%.pod: doc/%.texi $(SRC_PATH)/doc/texi2pod.pl $(GENTEXI)
@@ -115,19 +79,14 @@ doc/%-all.pod: doc/%.texi $(SRC_PATH)/doc/texi2pod.pl $(GENTEXI)

doc/%.1 doc/%.3: TAG = MAN
doc/%.1: doc/%.pod $(GENTEXI)
$(M)pod2man --section=1 --center=" " --release=" " --date=" " $< > $@
$(M)pod2man --section=1 --center=" " --release=" " $< > $@
doc/%.3: doc/%.pod $(GENTEXI)
$(M)pod2man --section=3 --center=" " --release=" " --date=" " $< > $@
$(M)pod2man --section=3 --center=" " --release=" " $< > $@

$(DOCS) doc/doxy/html: | doc/
$(DOC_EXAMPLES:%$(EXESUF)=%.o): | doc/examples
OBJDIRS += doc/examples

DOXY_INPUT = $(addprefix $(SRC_PATH)/, $(INSTHEADERS) $(DOC_EXAMPLES:%$(EXESUF)=%.c) $(LIB_EXAMPLES:%$(EXESUF)=%.c))

doc/doxy/html: TAG = DOXY
doc/doxy/html: $(SRC_PATH)/doc/Doxyfile $(SRC_PATH)/doc/doxy-wrapper.sh $(DOXY_INPUT)
$(M)$(SRC_PATH)/doc/doxy-wrapper.sh $(SRC_PATH) $< $(DOXYGEN) $(DOXY_INPUT)
doc/doxy/html: $(SRC_PATH)/doc/Doxyfile $(INSTHEADERS)
$(M)$(SRC_PATH)/doc/doxy-wrapper.sh $(SRC_PATH) $^

install-doc: install-html install-man

@@ -161,7 +120,7 @@ uninstall-html:
$(RM) -r "$(DOCDIR)"

uninstall-man:
$(RM) $(addprefix "$(MANDIR)/man1/",$(AVPROGS-yes:%=%.1) $(AVPROGS-yes:%=%-all.1) $(COMPONENTS-yes:%=%.1))
$(RM) $(addprefix "$(MANDIR)/man1/",$(PROGS-yes:%=%.1) $(PROGS-yes:%=%-all.1) $(COMPONENTS-yes:%=%.1))
$(RM) $(addprefix "$(MANDIR)/man3/",$(LIBRARIES-yes:%=%.3))

clean:: docclean
@@ -169,13 +128,8 @@ clean:: docclean
distclean:: docclean
$(RM) doc/config.texi

examplesclean:
$(RM) $(ALL_DOC_EXAMPLES) $(ALL_DOC_EXAMPLES_G)
$(RM) $(CLEANSUFFIXES:%=doc/examples/%)

docclean: examplesclean
$(RM) $(CLEANSUFFIXES:%=doc/%)
$(RM) $(TXTPAGES) doc/*.html doc/*.pod doc/*.1 doc/*.3 doc/avoptions_*.texi
docclean:
$(RM) $(TXTPAGES) doc/*.html doc/*.pod doc/*.1 doc/*.3 $(CLEANSUFFIXES:%=doc/%) doc/avoptions_*.texi
$(RM) -r doc/doxy/html

-include $(wildcard $(DOCS:%=%.d))
16
doc/RELEASE_NOTES
Normal file
16
doc/RELEASE_NOTES
Normal file
@@ -0,0 +1,16 @@
Release Notes
=============

* 2.1 "Fourier" October, 2013


General notes
-------------
See the Changelog file for a list of significant changes. Note, there
are many more new features and bugfixes than whats listed there.

Bugreports against FFmpeg git master or the most recent FFmpeg release are
accepted. If you are experiencing issues with any formally released version of
FFmpeg, please try git master to check if the issue still exists. If it does,
make your report against the development code following the usual bug reporting
guidelines.
36
doc/avutil.txt
Normal file
36
doc/avutil.txt
Normal file
@@ -0,0 +1,36 @@
AVUtil
======
libavutil is a small lightweight library of generally useful functions.
It is not a library for code needed by both libavcodec and libavformat.


Overview:
=========
adler32.c adler32 checksum
aes.c AES encryption and decryption
fifo.c resizeable first in first out buffer
intfloat_readwrite.c portable reading and writing of floating point values
log.c "printf" with context and level
md5.c MD5 Message-Digest Algorithm
rational.c code to perform exact calculations with rational numbers
tree.c generic AVL tree
crc.c generic CRC checksumming code
integer.c 128bit integer math
lls.c
mathematics.c greatest common divisor, integer sqrt, integer log2, ...
mem.c memory allocation routines with guaranteed alignment

Headers:
bswap.h big/little/native-endian conversion code
x86_cpu.h a few useful macros for unifying x86-64 and x86-32 code
avutil.h
common.h
intreadwrite.h reading and writing of unaligned big/little/native-endian integers


Goals:
======
* Modular (few interdependencies and the possibility of disabling individual parts during ./configure)
* Small (source and object)
* Efficient (low CPU and memory usage)
* Useful (avoid useless features almost no one needs)
@@ -13,16 +13,7 @@ bitstream filter using the option @code{--disable-bsf=BSF}.
The option @code{-bsfs} of the ff* tools will display the list of
all the supported bitstream filters included in your build.

The ff* tools have a -bsf option applied per stream, taking a
comma-separated list of filters, whose parameters follow the filter
name after a '='.

@example
ffmpeg -i INPUT -c:v copy -bsf:v filter1[=opt1=str1/opt2=str2][,filter2] OUTPUT
@end example

Below is a description of the currently available bitstream filters,
with their parameters, if any.
Below is a description of the currently available bitstream filters.

@section aac_adtstoasc

@@ -83,18 +74,7 @@ format with @command{ffmpeg}, you can use the command:
ffmpeg -i INPUT.mp4 -codec copy -bsf:v h264_mp4toannexb OUTPUT.ts
@end example

@section imxdump

Modifies the bitstream to fit in MOV and to be usable by the Final Cut
Pro decoder. This filter only applies to the mpeg2video codec, and is
likely not needed for Final Cut Pro 7 and newer with the appropriate
@option{-tag:v}.

For example, to remux 30 MB/sec NTSC IMX to MOV:

@example
ffmpeg -i input.mxf -c copy -bsf:v imxdump -tag:v mx3n output.mov
@end example
@section imx_dump_header

@section mjpeg2jpeg

@@ -137,24 +117,12 @@ ffmpeg -i frame_%d.jpg -c:v copy rotated.avi

@section movsub

@section mp3_header_compress

@section mp3_header_decompress

@section noise

Damages the contents of packets without damaging the container. Can be
used for fuzzing or testing error resilience/concealment.

Parameters:
A numeral string, whose value is related to how often output bytes will
be modified. Therefore, values below or equal to 0 are forbidden, and
the lower the more frequent bytes will be modified, with 1 meaning
every byte is modified.

@example
ffmpeg -i INPUT -c copy -bsf noise[=1] output.mkv
@end example
applies the modification to every byte.

@section remove_extra

@c man end BITSTREAM FILTERS
5
doc/bootstrap.min.css
vendored
5
doc/bootstrap.min.css
vendored
File diff suppressed because one or more lines are too long
@@ -7,11 +7,6 @@ V
Disable the default terse mode, the full command issued by make and its
output will be shown on the screen.

DBG
Preprocess x86 external assembler files to a .dbg.asm file in the object
directory, which then gets compiled. Helps developping those assembler
files.

DESTDIR
Destination directory for the install targets, useful to prepare packages
or install FFmpeg in cross-environments.
@@ -30,9 +25,6 @@ fate-list
install
Install headers, libraries and programs.

examples
Build all examples located in doc/examples.

libavformat/output-example
Build the libavformat basic example.

@@ -42,9 +34,6 @@ libavcodec/api-example
libswscale/swscale-test
Build the swscale self-test (useful also as example).

config
Reconfigure the project with current configuration.


Useful standard make commands:
make -t <target>
@@ -7,7 +7,7 @@ all the encoders and decoders. In addition each codec may support
so-called private options, which are specific for a given codec.

Sometimes, a global option may only affect a specific kind of codec,
and may be nonsensical or ignored by another, so you need to be aware
and may be unsensical or ignored by another, so you need to be aware
of the meaning of the specified options. Also some options are
meant only for decoding or encoding.

@@ -71,9 +71,7 @@ Force low delay.
@item global_header
Place global headers in extradata instead of every keyframe.
@item bitexact
Only write platform-, build- and time-independent data. (except (I)DCT).
This ensures that file and data checksums are reproducible and match between
platforms. Its primary use is for regression testing.
Use only bitexact stuff (except (I)DCT).
@item aic
Apply H263 advanced intra coding / mpeg4 ac prediction.
@item cbp
@@ -174,13 +172,7 @@ Set max video quantizer scale (VBR). Must be included between -1 and
Set max difference between the quantizer scale (VBR).

@item bf @var{integer} (@emph{encoding,video})
Set max number of B frames between non-B-frames.

Must be an integer between -1 and 16. 0 means that B-frames are
disabled. If a value of -1 is used, it will choose an automatic value
depending on the encoder.

Default value is 0.
Set max number of B frames.

@item b_qfactor @var{float} (@emph{encoding,video})
Set qp factor between P and B frames.
@@ -287,11 +279,6 @@ detect bitstream specification deviations
detect improper bitstream length
@item explode
abort decoding on minor error detection
@item ignore_err
ignore decoding errors, and continue decoding.
This is useful if you want to analyze the content of a video and thus want
everything to be decoded no matter what. This option will not result in a video
that is pleasing to watch in case of errors.
@item careful
consider things that violate the spec and have not been seen in the wild as errors
@item compliant
@@ -396,9 +383,6 @@ Possible values:

@item simplemmx

@item simpleauto
Automatically pick a IDCT compatible with the simple one

@item arm

@item altivec
@@ -434,8 +418,6 @@ Possible values:
iterative motion vector (MV) search (slow)
@item deblock
use strong deblock filter for damaged MBs
@item favor_inter
favor predicting from the previous frame instead of the current
@end table

@item bits_per_coded_sample @var{integer}
@@ -495,15 +477,11 @@ visualize block types
picture buffer allocations
@item thread_ops
threading operations
@item nomc
skip motion compensation
@end table

@item vismv @var{integer} (@emph{decoding,video})
Visualize motion vectors (MVs).

This option is deprecated, see the codecview filter instead.

Possible values:
@table @samp
@item pf
@@ -803,9 +781,6 @@ Frame data might be split into multiple chunks.
Show all frames before the first keyframe.
@item skiprd
Deprecated, use mpegvideo private options instead.
@item export_mvs
Export motion vectors into frame side-data (see @code{AV_FRAME_DATA_MOTION_VECTORS})
for codecs that support it. See also @file{doc/examples/export_mvs.c}.
@end table

@item error @var{integer} (@emph{encoding,video})
@@ -865,14 +840,6 @@ Possible values:

@item mpeg2_aac_he

@item mpeg4_sp

@item mpeg4_core

@item mpeg4_main

@item mpeg4_asp

@item dts

@item dts_es
@@ -904,9 +871,6 @@ Set frame skip factor.

@item skip_exp @var{integer} (@emph{encoding,video})
Set frame skip exponent.
Negative values behave identical to the corresponding positive ones, except
that the score is normalized.
Positive values exist primarily for compatibility reasons and are not so useful.

@item skipcmp @var{integer} (@emph{encoding,video})
Set frame skip compare function.
@@ -1052,26 +1016,15 @@ Set the log level offset.
Number of slices, used in parallelized encoding.

@item thread_type @var{flags} (@emph{decoding/encoding,video})
Select which multithreading methods to use.

Use of @samp{frame} will increase decoding delay by one frame per
thread, so clients which cannot provide future frames should not use
it.
Select multithreading type.

Possible values:
@table @samp
@item slice
Decode more than one part of a single frame at once.

Multithreading using slices works only when the video was encoded with
slices.

@item frame
Decode more than one frame at once.

@end table

Default value is @samp{slice+frame}.

@item audio_service_type @var{integer} (@emph{encoding,audio})
Set audio service type.

@@ -1126,26 +1079,9 @@ Interlaced video, bottom coded first, top displayed first
Set to 1 to disable processing alpha (transparency). This works like the
@samp{gray} flag in the @option{flags} option which skips chroma information
instead of alpha. Default is 0.

@item codec_whitelist @var{list} (@emph{input})
"," separated List of allowed decoders. By default all are allowed.

@item dump_separator @var{string} (@emph{input})
Separator used to separate the fields printed on the command line about the
Stream parameters.
For example to separate the fields with newlines and indention:
@example
ffprobe -dump_separator "
" -i ~/videos/matrixbench_mpeg2.mpg
@end example

@end table

@c man end CODEC OPTIONS

@ifclear config-writeonly
@include decoders.texi
@end ifclear
@ifclear config-readonly
@include encoders.texi
@end ifclear
@@ -14,7 +14,7 @@ You can disable all the decoders with the configure option
with the options @code{--enable-decoder=@var{DECODER}} /
@code{--disable-decoder=@var{DECODER}}.

The option @code{-decoders} of the ff* tools will display the list of
The option @code{-codecs} of the ff* tools will display the list of
enabled decoders.

@c man end DECODERS
@@ -52,37 +52,6 @@ top-field-first is assumed
@chapter Audio Decoders
@c man begin AUDIO DECODERS

A description of some of the currently available audio decoders
follows.

@section ac3

AC-3 audio decoder.

This decoder implements part of ATSC A/52:2010 and ETSI TS 102 366, as well as
the undocumented RealAudio 3 (a.k.a. dnet).

@subsection AC-3 Decoder Options

@table @option

@item -drc_scale @var{value}
Dynamic Range Scale Factor. The factor to apply to dynamic range values
from the AC-3 stream. This factor is applied exponentially.
There are 3 notable scale factor ranges:
@table @option
@item drc_scale == 0
DRC disabled. Produces full range audio.
@item 0 < drc_scale <= 1
DRC enabled. Applies a fraction of the stream DRC value.
Audio reproduction is between full range and full compression.
@item drc_scale > 1
DRC enabled. Applies drc_scale asymmetrically.
Loud sounds are fully compressed. Soft sounds are enhanced.
@end table

@end table

@section ffwavesynth

Internal wave synthetizer.
@@ -163,9 +132,6 @@ Requires the presence of the libopus headers and library during
configuration. You need to explicitly configure the build with
@code{--enable-libopus}.

An FFmpeg native decoder for Opus exists, so users can decode Opus
without this library.

@c man end AUDIO DECODERS

@chapter Subtitles Decoders
@@ -190,15 +156,6 @@ The format for this option is a string containing 16 24-bits hexadecimal
numbers (without 0x prefix) separated by comas, for example @code{0d00ee,
ee450d, 101010, eaeaea, 0ce60b, ec14ed, ebff0b, 0d617a, 7b7b7b, d1d1d1,
7b2a0e, 0d950c, 0f007b, cf0dec, cfa80c, 7c127b}.

@item ifo_palette
Specify the IFO file from which the global palette is obtained.
(experimental)

@item forced_subs_only
Only decode subtitle entries marked as forced. Some titles have forced
and non-forced subtitles in the same track. Setting this flag to @code{1}
will only keep the forced subtitles. Default value is @code{0}.
@end table

@section libzvbi-teletext
@@ -17,8 +17,8 @@ a:visited {
}

#banner img {
margin-bottom: 1px;
margin-top: 5px;
padding-bottom: 1px;
padding-top: 5px;
}

#body {
@@ -29,26 +29,6 @@ the caller can decide which variant streams to actually receive.
The total bitrate of the variant that the stream belongs to is
available in a metadata key named "variant_bitrate".

@section apng

Animated Portable Network Graphics demuxer.

This demuxer is used to demux APNG files.
All headers, but the PNG signature, up to (but not including) the first
fcTL chunk are transmitted as extradata.
Frames are then split as being all the chunks between two fcTL ones, or
between the last fcTL and IEND chunks.

@table @option
@item -ignore_loop @var{bool}
Ignore the loop variable in the file if set.
@item -max_fps @var{int}
Maximum framerate in frames per second (0 for no limit).
@item -default_fps @var{int}
Default framerate in frames per second when none is specified in the file
(0 meaning as fast as possible).
@end table

@section asf

Advanced Systems Format demuxer.
@@ -94,7 +74,7 @@ following directive is recognized:
Path to a file to read; special characters and spaces must be escaped with
backslash or single quotes.

All subsequent file-related directives apply to that file.
All subsequent directives apply to that file.

@item @code{ffconcat version 1.0}
Identify the script type and version. It also sets the @option{safe} option
@@ -112,22 +92,6 @@ file is not available or accurate.
If the duration is set for all files, then it is possible to seek in the
whole concatenated video.

@item @code{stream}
Introduce a stream in the virtual file.
All subsequent stream-related directives apply to the last introduced
stream.
Some streams properties must be set in order to allow identifying the
matching streams in the subfiles.
If no streams are defined in the script, the streams from the first file are
copied.

@item @code{exact_stream_id @var{id}}
Set the id of the stream.
If this directive is given, the string with the corresponding id in the
subfiles will be used.
This is especially useful for MPEG-PS (VOB) files, where the order of the
streams is not reliable.

@end table

@subsection Options
@@ -148,14 +112,6 @@ If set to 0, any file name is accepted.
The default is -1, it is equivalent to 1 if the format was automatically
probed and 0 otherwise.

@item auto_convert
If set to 1, try to perform automatic conversions on packet data to make the
streams concatenable.

Currently, the only conversion is adding the h264_mp4toannexb bitstream
filter to H.264 streams in MP4 format. This is necessary in particular if
there are resolution changes.

@end table

@section flv
@@ -194,40 +150,6 @@ See @url{http://quvi.sourceforge.net/} for more information.
FFmpeg needs to be built with @code{--enable-libquvi} for this demuxer to be
enabled.

@section gif

Animated GIF demuxer.

It accepts the following options:

@table @option
@item min_delay
Set the minimum valid delay between frames in hundredths of seconds.
Range is 0 to 6000. Default value is 2.

@item default_delay
Set the default delay between frames in hundredths of seconds.
Range is 0 to 6000. Default value is 10.

@item ignore_loop
GIF files can contain information to loop a certain number of times (or
infinitely). If @option{ignore_loop} is set to 1, then the loop setting
from the input will be ignored and looping will not occur. If set to 0,
then looping will occur and will cycle the number of times according to
the GIF. Default value is 1.
@end table

For example, with the overlay filter, place an infinitely looping GIF
over another video:
@example
ffmpeg -i input.mp4 -ignore_loop 0 -i input.gif -filter_complex overlay=shortest=1 out.mkv
@end example

Note that in the above example the shortest option for overlay filter is
used to end the output video at the length of the shortest input file,
which in this case is @file{input.mp4} as the GIF in this example loops
infinitely.

@section image2

Image file demuxer.
@@ -327,8 +249,6 @@ is 5.
If set to 1, will set frame timestamp to modification time of image file. Note
that monotonity of timestamps is not provided: images go in the same order as
without this option. Default value is 0.
If set to 2, will set frame timestamp to the modification time of the image file in
nanosecond precision.
@item video_size
Set the video size of the images to read. If not specified the video
size is guessed from the first image file in the sequence.
@@ -376,7 +296,7 @@ teletext packet PTS and DTS values untouched.

Raw video demuxer.

This demuxer allows one to read raw video data. Since there is no header
This demuxer allows to read raw video data. Since there is no header
specifying the assumed video parameters, the user must specify them
in order to be able to decode the data correctly.

@@ -1,5 +1,4 @@
\input texinfo @c -*- texinfo -*-
@documentencoding UTF-8

@settitle Developer Documentation
@titlepage
@@ -93,7 +92,7 @@ for markup commands, i.e. use @code{@@param} and not @code{\param}.
* more text ...
* ...
*/
typedef struct Foobar @{
typedef struct Foobar@{
int var1; /**< var1 description */
int var2; ///< var2 description
/** var3 description */
@@ -249,7 +248,7 @@ Contributions should be licensed under the
@uref{http://www.gnu.org/licenses/lgpl-2.1.html, LGPL 2.1},
including an "or any later version" clause, or, if you prefer
a gift-style license, the
@uref{http://opensource.org/licenses/isc-license.txt, ISC} or
@uref{http://www.isc.org/software/license/, ISC} or
@uref{http://mit-license.org/, MIT} license.
@uref{http://www.gnu.org/licenses/gpl-2.0.html, GPL 2} including
an "or any later version" clause is also acceptable, but LGPL is
@@ -324,12 +323,9 @@ Always fill out the commit log message. Describe in a few lines what you
changed and why. You can refer to mailing list postings if you fix a
particular bug. Comments such as "fixed!" or "Changed it." are unacceptable.
Recommended format:

@example
area changed: Short 1 line description

details describing what and why and giving references.
@end example

@item
Make sure the author of the commit is set correctly. (see git commit --author)
@@ -648,12 +644,12 @@ accordingly].
@subsection Adding files to the fate-suite dataset

When there is no muxer or encoder available to generate test media for a
specific test then the media has to be included in the fate-suite.
specific test then the media has to be inlcuded in the fate-suite.
First please make sure that the sample file is as small as possible to test the
respective decoder or demuxer sufficiently. Large files increase network
bandwidth and disk space requirements.
Once you have a working fate test and fate sample, provide in the commit
message or introductory message for the patch series that you post to
message or introductionary message for the patch series that you post to
the ffmpeg-devel mailing list, a direct link to download the sample media.

@@ -17,9 +17,5 @@ for programmatic use.

@c man end DEVICE OPTIONS

@ifclear config-writeonly
@include indevs.texi
@end ifclear
@ifclear config-readonly
@include outdevs.texi
@end ifclear
@@ -2,20 +2,13 @@

SRC_PATH="${1}"
DOXYFILE="${2}"
DOXYGEN="${3}"

shift 3
shift 2

if [ -e "$SRC_PATH/VERSION" ]; then
VERSION=`cat "$SRC_PATH/VERSION"`
else
VERSION=`cd "$SRC_PATH"; git describe`
fi

$DOXYGEN - <<EOF
doxygen - <<EOF
@INCLUDE = ${DOXYFILE}
INPUT = $@
EXAMPLE_PATH = ${SRC_PATH}/doc/examples
HTML_TIMESTAMP = NO
PROJECT_NUMBER = $VERSION
HTML_HEADER = ${SRC_PATH}/doc/doxy/header.html
HTML_FOOTER = ${SRC_PATH}/doc/doxy/footer.html
HTML_STYLESHEET = ${SRC_PATH}/doc/doxy/doxy_stylesheet.css
EOF
2019
doc/doxy/doxy_stylesheet.css
Normal file
2019
doc/doxy/doxy_stylesheet.css
Normal file
File diff suppressed because it is too large
Load Diff
9
doc/doxy/footer.html
Normal file
9
doc/doxy/footer.html
Normal file
@@ -0,0 +1,9 @@

<footer class="footer pagination-right">
<span class="label label-info">
Generated on $datetime for $projectname by <a href="http://www.doxygen.org/index.html">doxygen</a> $doxygenversion
</span>
</footer>
</div>
</body>
</html>
16
doc/doxy/header.html
Normal file
16
doc/doxy/header.html
Normal file
@@ -0,0 +1,16 @@
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<meta http-equiv="X-UA-Compatible" content="IE=9"/>
<!--BEGIN PROJECT_NAME--><title>$projectname: $title</title><!--END PROJECT_NAME-->
<!--BEGIN !PROJECT_NAME--><title>$title</title><!--END !PROJECT_NAME-->
<link href="$relpath$doxy_stylesheet.css" rel="stylesheet" type="text/css" />
<!--Header replace -->

</head>

<div class="container">

<!--Header replace -->
<div class="menu">
|
@@ -14,7 +14,7 @@ You can disable all the encoders with the configure option
|
||||
with the options @code{--enable-encoder=@var{ENCODER}} /
|
||||
@code{--disable-encoder=@var{ENCODER}}.
|
||||
|
||||
The option @code{-encoders} of the ff* tools will display the list of
|
||||
The option @code{-codecs} of the ff* tools will display the list of
|
||||
enabled encoders.
|
||||
|
||||
@c man end ENCODERS
|
||||
@@ -38,8 +38,8 @@ As this encoder is experimental, unexpected behavior may exist from time to
|
||||
time. For a more stable AAC encoder, see @ref{libvo-aacenc}. However, be warned
|
||||
that it has a worse quality reported by some users.
|
||||
|
||||
@c todo @ref{libaacplus}
|
||||
See also @ref{libfdk-aac-enc,,libfdk_aac} and @ref{libfaac}.
|
||||
@c Comment this out until somebody writes the respective documentation.
|
||||
@c See also @ref{libfaac}, @ref{libaacplus}, and @ref{libfdk-aac-enc}.
|
||||
|
||||
@subsection Options
|
||||
|
||||
@@ -80,7 +80,7 @@ thresholds with quantizer steps to find the appropriate quantization with
|
||||
distortion below threshold band by band.
|
||||
|
||||
The quality of this method is comparable to the two loop searching method
|
||||
described below, but somewhat a little better and slower.
|
||||
described below, but somewhat a little better and slower.
|
||||
|
||||
@item anmr
|
||||
Average noise to mask ratio (ANMR) trellis-based solution.
|
||||
@@ -494,285 +494,6 @@ Selected by Encoder (default)
|
||||
|
||||
@end table
|
||||
|
||||
@anchor{libfaac}
|
||||
@section libfaac
|
||||
|
||||
libfaac AAC (Advanced Audio Coding) encoder wrapper.
|
||||
|
||||
Requires the presence of the libfaac headers and library during
|
||||
configuration. You need to explicitly configure the build with
|
||||
@code{--enable-libfaac --enable-nonfree}.
|
||||
|
||||
This encoder is considered to be of higher quality with respect to the
|
||||
@ref{aacenc,,the native experimental FFmpeg AAC encoder}.
|
||||
|
||||
For more information see the libfaac project at
|
||||
@url{http://www.audiocoding.com/faac.html/}.
|
||||
|
||||
@subsection Options
|
||||
|
||||
The following shared FFmpeg codec options are recognized.
|
||||
|
||||
The following options are supported by the libfaac wrapper. The
|
||||
@command{faac} equivalents of the options are listed in parentheses.
|
||||
|
||||
@table @option
|
||||
@item b (@emph{-b})
|
||||
Set bit rate in bits/s for ABR (Average Bit Rate) mode. If the bit rate
|
||||
is not explicitly specified, it is automatically set to a suitable
|
||||
value depending on the selected profile. @command{faac} bitrate is
|
||||
expressed in kilobits/s.
|
||||
|
||||
Note that libfaac does not support CBR (Constant Bit Rate) but only
|
||||
ABR (Average Bit Rate).
|
||||
|
||||
If VBR mode is enabled this option is ignored.
|
||||
|
||||
@item ar (@emph{-R})
|
||||
Set audio sampling rate (in Hz).
|
||||
|
||||
@item ac (@emph{-c})
|
||||
Set the number of audio channels.
|
||||
|
||||
@item cutoff (@emph{-C})
|
||||
Set cutoff frequency. If not specified (or explicitly set to 0) it
|
||||
will use a value automatically computed by the library. Default value
|
||||
is 0.
|
||||
|
||||
@item profile
|
||||
Set audio profile.
|
||||
|
||||
The following profiles are recognized:
|
||||
@table @samp
|
||||
@item aac_main
|
||||
Main AAC (Main)
|
||||
|
||||
@item aac_low
|
||||
Low Complexity AAC (LC)
|
||||
|
||||
@item aac_ssr
|
||||
Scalable Sample Rate (SSR)
|
||||
|
||||
@item aac_ltp
|
||||
Long Term Prediction (LTP)
|
||||
@end table
|
||||
|
||||
If not specified it is set to @samp{aac_low}.
|
||||
|
||||
@item flags +qscale
|
||||
Set constant quality VBR (Variable Bit Rate) mode.
|
||||
|
||||
@item global_quality
|
||||
Set quality in VBR mode as an integer number of lambda units.
|
||||
|
||||
Only relevant when VBR mode is enabled with @code{flags +qscale}. The
|
||||
value is converted to QP units by dividing it by @code{FF_QP2LAMBDA},
|
||||
and used to set the quality value used by libfaac. A reasonable range
|
||||
for the option value in QP units is [10-500], the higher the value the
|
||||
higher the quality.
|
||||
|
||||
@item q (@emph{-q})
|
||||
Enable VBR mode when set to a non-negative value, and set constant
|
||||
quality value as a double floating point value in QP units.
|
||||
|
||||
The value sets the quality value used by libfaac. A reasonable range
|
||||
for the option value is [10-500], the higher the value the higher the
|
||||
quality.
|
||||
|
||||
This option is valid only using the @command{ffmpeg} command-line
|
||||
tool. For library interface users, use @option{global_quality}.
|
||||
@end table
|
||||
|
||||
@subsection Examples
|
||||
|
||||
@itemize
|
||||
@item
|
||||
Use @command{ffmpeg} to convert an audio file to ABR 128 kbps AAC in an M4A (MP4)
|
||||
container:
|
||||
@example
|
||||
ffmpeg -i input.wav -codec:a libfaac -b:a 128k output.m4a
|
||||
@end example
|
||||
|
||||
@item
|
||||
Use @command{ffmpeg} to convert an audio file to VBR AAC, using the
|
||||
LTP AAC profile:
|
||||
@example
|
||||
ffmpeg -i input.wav -c:a libfaac -profile:a aac_ltp -q:a 100 output.m4a
|
||||
@end example
|
||||
@end itemize
|
||||
|
||||
@anchor{libfdk-aac-enc}
|
||||
@section libfdk_aac
|
||||
|
||||
libfdk-aac AAC (Advanced Audio Coding) encoder wrapper.
|
||||
|
||||
The libfdk-aac library is based on the Fraunhofer FDK AAC code from
|
||||
the Android project.
|
||||
|
||||
Requires the presence of the libfdk-aac headers and library during
|
||||
configuration. You need to explicitly configure the build with
|
||||
@code{--enable-libfdk-aac}. The library is also incompatible with GPL,
|
||||
so if you allow the use of GPL, you should configure with
|
||||
@code{--enable-gpl --enable-nonfree --enable-libfdk-aac}.
|
||||
|
||||
This encoder is considered to be of higher quality with respect to
|
||||
both @ref{aacenc,,the native experimental FFmpeg AAC encoder} and
|
||||
@ref{libfaac}.
|
||||
|
||||
VBR encoding, enabled through the @option{vbr} or @option{flags
|
||||
+qscale} options, is experimental and only works with some
|
||||
combinations of parameters.
|
||||
|
||||
Support for encoding 7.1 audio is only available with libfdk-aac 0.1.3 or
|
||||
higher.
|
||||
|
||||
For more information see the fdk-aac project at
|
||||
@url{http://sourceforge.net/p/opencore-amr/fdk-aac/}.
|
||||
|
||||
@subsection Options
|
||||
|
||||
The following options are mapped on the shared FFmpeg codec options.
|
||||
|
||||
@table @option
|
||||
@item b
|
||||
Set bit rate in bits/s. If the bitrate is not explicitly specified, it
|
||||
is automatically set to a suitable value depending on the selected
|
||||
profile.
|
||||
|
||||
In case VBR mode is enabled the option is ignored.
|
||||
|
||||
@item ar
|
||||
Set audio sampling rate (in Hz).
|
||||
|
||||
@item channels
|
||||
Set the number of audio channels.
|
||||
|
||||
@item flags +qscale
|
||||
Enable fixed quality, VBR (Variable Bit Rate) mode.
|
||||
Note that VBR is implicitly enabled when the @option{vbr} value is
|
||||
positive.
|
||||
|
||||
@item cutoff
|
||||
Set cutoff frequency. If not specified (or explicitly set to 0) it
|
||||
will use a value automatically computed by the library. Default value
|
||||
is 0.
|
||||
|
||||
@item profile
|
||||
Set audio profile.
|
||||
|
||||
The following profiles are recognized:
|
||||
@table @samp
|
||||
@item aac_low
|
||||
Low Complexity AAC (LC)
|
||||
|
||||
@item aac_he
|
||||
High Efficiency AAC (HE-AAC)
|
||||
|
||||
@item aac_he_v2
|
||||
High Efficiency AAC version 2 (HE-AACv2)
|
||||
|
||||
@item aac_ld
|
||||
Low Delay AAC (LD)
|
||||
|
||||
@item aac_eld
|
||||
Enhanced Low Delay AAC (ELD)
|
||||
@end table
|
||||
|
||||
If not specified it is set to @samp{aac_low}.
|
||||
@end table
|
||||
|
||||
The following are private options of the libfdk_aac encoder.
|
||||
|
||||
@table @option
|
||||
@item afterburner
|
||||
Enable afterburner feature if set to 1, disabled if set to 0. This
|
||||
improves the quality but also increases the required processing power.
|
||||
|
||||
Default value is 1.
|
||||
|
||||
@item eld_sbr
|
||||
Enable SBR (Spectral Band Replication) for ELD if set to 1, disabled
|
||||
if set to 0.
|
||||
|
||||
Default value is 0.
|
||||
|
||||
@item signaling
|
||||
Set SBR/PS signaling style.
|
||||
|
||||
It can assume one of the following values:
|
||||
@table @samp
|
||||
@item default
|
||||
choose signaling implicitly (explicit hierarchical by default,
|
||||
implicit if global header is disabled)
|
||||
|
||||
@item implicit
|
||||
implicit backwards compatible signaling
|
||||
|
||||
@item explicit_sbr
|
||||
explicit SBR, implicit PS signaling
|
||||
|
||||
@item explicit_hierarchical
|
||||
explicit hierarchical signaling
|
||||
@end table
|
||||
|
||||
Default value is @samp{default}.
|
||||
|
||||
@item latm
|
||||
Output LATM/LOAS encapsulated data if set to 1, disabled if set to 0.
|
||||
|
||||
Default value is 0.
|
||||
|
||||
@item header_period
|
||||
Set StreamMuxConfig and PCE repetition period (in frames) for sending
|
||||
in-band configuration buffers within LATM/LOAS transport layer.
|
||||
|
||||
Must be a 16-bit non-negative integer.
|
||||
|
||||
Default value is 0.
|
||||
|
||||
@item vbr
|
||||
Set VBR mode, from 1 to 5. 1 is lowest quality (though still pretty
|
||||
good) and 5 is highest quality. A value of 0 disables VBR and enables
|
||||
CBR (Constant Bit Rate) mode.
|
||||
|
||||
Currently only the @samp{aac_low} profile supports VBR encoding.
|
||||
|
||||
VBR modes 1-5 correspond to roughly the following average bit rates:
|
||||
|
||||
@table @samp
|
||||
@item 1
|
||||
32 kbps/channel
|
||||
@item 2
|
||||
40 kbps/channel
|
||||
@item 3
|
||||
48-56 kbps/channel
|
||||
@item 4
|
||||
64 kbps/channel
|
||||
@item 5
|
||||
about 80-96 kbps/channel
|
||||
@end table
|
||||
|
||||
Default value is 0.
|
||||
@end table
|
||||
|
||||
@subsection Examples
|
||||
|
||||
@itemize
|
||||
@item
|
||||
Use @command{ffmpeg} to convert an audio file to VBR AAC in an M4A (MP4)
|
||||
container:
|
||||
@example
|
||||
ffmpeg -i input.wav -codec:a libfdk_aac -vbr 3 output.m4a
|
||||
@end example
|
||||
|
||||
@item
|
||||
Use @command{ffmpeg} to convert an audio file to CBR 64 kbps AAC, using the
|
||||
High-Efficiency AAC profile:
|
||||
@example
|
||||
ffmpeg -i input.wav -c:a libfdk_aac -profile:a aac_he -b:a 64k output.m4a
|
||||
@end example
|
||||
@end itemize
|
||||
|
||||
@anchor{libmp3lame}
|
||||
@section libmp3lame
|
||||
|
||||
@@ -792,7 +513,7 @@ The following options are supported by the libmp3lame wrapper. The
|
||||
|
||||
@table @option
|
||||
@item b (@emph{-b})
|
||||
Set bitrate expressed in bits/s for CBR or ABR. LAME @code{bitrate} is
|
||||
Set bitrate expressed in bits/s for CBR. LAME @code{bitrate} is
|
||||
expressed in kilobits/s.
|
||||
|
||||
@item q (@emph{-V})
|
||||
@@ -807,18 +528,13 @@ while producing the worst quality.
|
||||
|
||||
@item reservoir
|
||||
Enable use of bit reservoir when set to 1. Default value is 1. LAME
|
||||
has this enabled by default, but can be overridden by use
|
||||
has this enabled by default, but can be overridden by use
|
||||
of the @option{--nores} option.
|
||||
|
||||
@item joint_stereo (@emph{-m j})
|
||||
Enable the encoder to use (on a frame by frame basis) either L/R
|
||||
stereo or mid/side stereo. Default value is 1.
|
||||
|
||||
@item abr (@emph{--abr})
|
||||
Enable the encoder to use ABR when set to 1. The @command{lame}
|
||||
@option{--abr} option sets the target bitrate, while this option only
|
||||
tells FFmpeg to use ABR and still relies on @option{b} to set the bitrate. See the example after this table.
|
||||
|
||||
@end table
|
||||
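As referenced above, a hedged sketch of ABR encoding with the libmp3lame wrapper, where @option{abr} only switches the mode and @option{b} supplies the target bitrate; file names are illustrative:
@example
ffmpeg -i input.wav -c:a libmp3lame -abr 1 -b:a 192k output.mp3
@end example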
|
||||
@section libopencore-amrnb
|
||||
@@ -1032,7 +748,7 @@ configuration. You need to explicitly configure the build with
|
||||
|
||||
@subsection Option Mapping
|
||||
|
||||
Most libopus options are modelled after the @command{opusenc} utility from
|
||||
Most libopus options are modeled after the @command{opusenc} utility from
|
||||
opus-tools. The following is an option mapping chart describing options
|
||||
supported by the libopus wrapper, and their @command{opusenc}-equivalent
|
||||
in parentheses.
|
||||
@@ -1070,7 +786,7 @@ Set maximum frame size, or duration of a frame in milliseconds. The
|
||||
argument must be exactly the following: 2.5, 5, 10, 20, 40, 60. Smaller
|
||||
frame sizes achieve lower latency but less quality at a given bitrate.
|
||||
Sizes greater than 20ms are only interesting at fairly low bitrates.
|
||||
The default is 20ms.
|
||||
The FFmpeg default is 10ms, but it is 20ms in @command{opusenc}.
|
||||
|
||||
@item packet_loss (@emph{expect-loss})
|
||||
Set expected packet loss percentage. The default is 0.
|
||||
@@ -1147,111 +863,32 @@ transient response is a higher bitrate.
|
||||
|
||||
@end table
|
||||
|
||||
@anchor{libwavpack}
|
||||
@section libwavpack
|
||||
|
||||
A wrapper providing WavPack encoding through libwavpack.
|
||||
|
||||
Only lossless mode using 32-bit integer samples is supported currently.
|
||||
|
||||
Requires the presence of the libwavpack headers and library during
|
||||
configuration. You need to explicitly configure the build with
|
||||
@code{--enable-libwavpack}.
|
||||
|
||||
Note that a libavcodec-native encoder for the WavPack codec exists so users can
|
||||
encode audio with this codec without using this encoder. See @ref{wavpackenc}.
|
||||
|
||||
@subsection Options
|
||||
|
||||
@command{wavpack} command line utility's corresponding options are listed in
|
||||
parentheses, if any.
|
||||
The @option{compression_level} option can be used to control speed vs.
|
||||
compression tradeoff, with the values mapped to libwavpack as follows:
|
||||
|
||||
@table @option
|
||||
@item frame_size (@emph{--blocksize})
|
||||
Default is 32768.
|
||||
|
||||
@item compression_level
|
||||
Set speed vs. compression tradeoff. Acceptable arguments are listed below:
|
||||
|
||||
@table @samp
|
||||
@item 0 (@emph{-f})
|
||||
Fast mode.
|
||||
@item 0
|
||||
Fast mode - corresponding to the wavpack @option{-f} option.
|
||||
|
||||
@item 1
|
||||
Normal (default) settings.
|
||||
|
||||
@item 2 (@emph{-h})
|
||||
High quality.
|
||||
@item 2
|
||||
High quality - corresponding to the wavpack @option{-h} option.
|
||||
|
||||
@item 3 (@emph{-hh})
|
||||
Very high quality.
|
||||
@item 3
|
||||
Very high quality - corresponding to the wavpack @option{-hh} option.
|
||||
|
||||
@item 4-8 (@emph{-hh -x}@var{EXTRAPROC})
|
||||
Same as @samp{3}, but with extra processing enabled.
|
||||
|
||||
@samp{4} is the same as @option{-x2} and @samp{8} is the same as @option{-x6}.
|
||||
|
||||
@end table
|
||||
@end table
|
||||
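A minimal usage sketch for the @option{compression_level} mapping above, selecting the equivalent of the wavpack @option{-h} mode; file names are illustrative:
@example
ffmpeg -i input.wav -c:a libwavpack -compression_level 2 output.wv
@end example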
|
||||
@anchor{wavpackenc}
|
||||
@section wavpack
|
||||
|
||||
WavPack lossless audio encoder.
|
||||
|
||||
This is a libavcodec-native WavPack encoder. There is also an encoder based on
|
||||
libwavpack, but there is virtually no reason to use that encoder.
|
||||
|
||||
See also @ref{libwavpack}.
|
||||
|
||||
@subsection Options
|
||||
|
||||
The equivalent options for the @command{wavpack} command line utility are listed in
|
||||
parentheses.
|
||||
|
||||
@subsubsection Shared options
|
||||
|
||||
The following shared options are effective for this encoder. Only special notes
|
||||
about this particular encoder will be documented here. For the general meaning
|
||||
of the options, see @ref{codec-options,,the Codec Options chapter}.
|
||||
|
||||
@table @option
|
||||
@item frame_size (@emph{--blocksize})
|
||||
For this encoder, the range for this option is between 128 and 131072. Default
|
||||
is automatically decided based on sample rate and number of channels.
|
||||
|
||||
For the complete formula of calculating default, see
|
||||
@file{libavcodec/wavpackenc.c}.
|
||||
|
||||
@item compression_level (@emph{-f}, @emph{-h}, @emph{-hh}, and @emph{-x})
|
||||
This option's syntax is consistent with @ref{libwavpack}'s.
|
||||
@end table
|
||||
|
||||
@subsubsection Private options
|
||||
|
||||
@table @option
|
||||
@item joint_stereo (@emph{-j})
|
||||
Set whether to enable joint stereo. Valid values are:
|
||||
|
||||
@table @samp
|
||||
@item on (@emph{1})
|
||||
Force mid/side audio encoding.
|
||||
@item off (@emph{0})
|
||||
Force left/right audio encoding.
|
||||
@item auto
|
||||
Let the encoder decide automatically.
|
||||
@end table
|
||||
|
||||
@item optimize_mono
|
||||
Set whether to enable optimization for mono. This option is only effective for
|
||||
non-mono streams. Available values:
|
||||
|
||||
@table @samp
|
||||
@item on
|
||||
enabled
|
||||
@item off
|
||||
disabled
|
||||
@end table
|
||||
@item 4-8
|
||||
Same as 3, but with extra processing enabled - corresponding to the wavpack
|
||||
@option{-x} option. I.e. 4 is the same as @option{-x2} and 8 is the same as
|
||||
@option{-x6}.
|
||||
|
||||
@end table
|
||||
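A hedged sketch combining the private options documented above for the native encoder; the chosen values and file names are illustrative:
@example
ffmpeg -i input.wav -c:a wavpack -joint_stereo auto -optimize_mono off output.wv
@end example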
|
||||
@@ -1265,15 +902,12 @@ follows.
|
||||
|
||||
@section libtheora
|
||||
|
||||
libtheora Theora encoder wrapper.
|
||||
Theora format supported through libtheora.
|
||||
|
||||
Requires the presence of the libtheora headers and library during
|
||||
configuration. You need to explicitly configure the build with
|
||||
@code{--enable-libtheora}.
|
||||
|
||||
For more information about the libtheora project see
|
||||
@url{http://www.theora.org/}.
|
||||
|
||||
@subsection Options
|
||||
|
||||
The following global options are mapped to internal libtheora options
|
||||
@@ -1281,11 +915,11 @@ which affect the quality and the bitrate of the encoded stream.
|
||||
|
||||
@table @option
|
||||
@item b
|
||||
Set the video bitrate in bit/s for CBR (Constant Bit Rate) mode. In
|
||||
case VBR (Variable Bit Rate) mode is enabled this option is ignored.
|
||||
Set the video bitrate, only works if the @code{qscale} flag in
|
||||
@option{flags} is not enabled.
|
||||
|
||||
@item flags
|
||||
Used to enable constant quality mode (VBR) encoding through the
|
||||
Used to enable constant quality mode encoding through the
|
||||
@option{qscale} flag, and to enable the @code{pass1} and @code{pass2}
|
||||
modes.
|
||||
|
||||
@@ -1293,44 +927,22 @@ modes.
|
||||
Set the GOP size.
|
||||
|
||||
@item global_quality
|
||||
Set the global quality as an integer in lambda units.
|
||||
Set the global quality in lambda units, only works if the
|
||||
@code{qscale} flag in @option{flags} is enabled. The value is clipped
|
||||
in the [0 - 10*@code{FF_QP2LAMBDA}] range, and then multiplied by 6.3
|
||||
to get a value in the native libtheora range [0-63]. A higher value
|
||||
corresponds to a higher quality.
|
||||
|
||||
Only relevant when VBR mode is enabled with @code{flags +qscale}. The
|
||||
value is converted to QP units by dividing it by @code{FF_QP2LAMBDA},
|
||||
clipped in the [0 - 10] range, and then multiplied by 6.3 to get a
|
||||
value in the native libtheora range [0-63]. A higher value corresponds
|
||||
to a higher quality.
|
||||
|
||||
@item q
|
||||
Enable VBR mode when set to a non-negative value, and set constant
|
||||
quality value as a double floating point value in QP units.
|
||||
|
||||
The value is clipped in the [0-10] range, and then multiplied by 6.3
|
||||
to get a value in the native libtheora range [0-63].
|
||||
|
||||
This option is valid only using the @command{ffmpeg} command-line
|
||||
tool. For library interface users, use @option{global_quality}.
|
||||
For example, to set maximum constant quality encoding with
|
||||
@command{ffmpeg}:
|
||||
@example
|
||||
ffmpeg -i INPUT -flags:v qscale -global_quality:v "10*QP2LAMBDA" -codec:v libtheora OUTPUT.ogg
|
||||
@end example
|
||||
@end table
|
||||
|
||||
@subsection Examples
|
||||
|
||||
@itemize
|
||||
@item
|
||||
Set maximum constant quality (VBR) encoding with @command{ffmpeg}:
|
||||
@example
|
||||
ffmpeg -i INPUT -codec:v libtheora -q:v 10 OUTPUT.ogg
|
||||
@end example
|
||||
|
||||
@item
|
||||
Use @command{ffmpeg} to convert a CBR 1000 kbps Theora video stream:
|
||||
@example
|
||||
ffmpeg -i INPUT -codec:v libtheora -b:v 1000k OUTPUT.ogg
|
||||
@end example
|
||||
@end itemize
|
||||
|
||||
@section libvpx
|
||||
|
||||
VP8/VP9 format supported through libvpx.
|
||||
VP8 format supported through libvpx.
|
||||
|
||||
Requires the presence of the libvpx headers and library during configuration.
|
||||
You need to explicitly configure the build with @code{--enable-libvpx}.
|
||||
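A minimal example of invoking the wrapper described above; the bitrate and file names are illustrative:
@example
ffmpeg -i INPUT -c:v libvpx -b:v 1M OUTPUT.webm
@end example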
@@ -1442,77 +1054,12 @@ g_lag_in_frames
|
||||
@item vp8flags error_resilient
|
||||
g_error_resilient
|
||||
|
||||
@item aq_mode
|
||||
@code{VP9E_SET_AQ_MODE}
|
||||
|
||||
@end table
|
||||
|
||||
For more information about libvpx see:
|
||||
@url{http://www.webmproject.org/}
|
||||
|
||||
|
||||
@section libwebp
|
||||
|
||||
libwebp WebP Image encoder wrapper
|
||||
|
||||
libwebp is Google's official encoder for WebP images. It can encode in either
|
||||
lossy or lossless mode. Lossy images are essentially a wrapper around a VP8
|
||||
frame. Lossless images are a separate codec developed by Google.
|
||||
|
||||
@subsection Pixel Format
|
||||
|
||||
Currently, libwebp only supports YUV420 for lossy and RGB for lossless due
|
||||
to limitations of the format and libwebp. Alpha is supported for either mode.
|
||||
Because of API limitations, if RGB is passed in when encoding lossy or YUV is
|
||||
passed in for encoding lossless, the pixel format will automatically be
|
||||
converted using functions from libwebp. This is not ideal and is done only for
|
||||
convenience.
|
||||
|
||||
@subsection Options
|
||||
|
||||
@table @option
|
||||
|
||||
@item -lossless @var{boolean}
|
||||
Enables/Disables use of lossless mode. Default is 0.
|
||||
|
||||
@item -compression_level @var{integer}
|
||||
For lossy, this is a quality/speed tradeoff. Higher values give better quality
|
||||
for a given size at the cost of increased encoding time. For lossless, this is
|
||||
a size/speed tradeoff. Higher values give smaller size at the cost of increased
|
||||
encoding time. More specifically, it controls the number of extra algorithms
|
||||
and compression tools used, and varies the combination of these tools. This
|
||||
maps to the @var{method} option in libwebp. The valid range is 0 to 6.
|
||||
Default is 4.
|
||||
|
||||
@item -qscale @var{float}
|
||||
For lossy encoding, this controls image quality, 0 to 100. For lossless
|
||||
encoding, this controls the effort and time spent at compressing more. The
|
||||
default value is 75. Note that for usage via libavcodec, this option is called
|
||||
@var{global_quality} and must be multiplied by @var{FF_QP2LAMBDA}.
|
||||
|
||||
@item -preset @var{type}
|
||||
Configuration preset. This does some automatic settings based on the general
|
||||
type of the image.
|
||||
@table @option
|
||||
@item none
|
||||
Do not use a preset.
|
||||
@item default
|
||||
Use the encoder default.
|
||||
@item picture
|
||||
Digital picture, like portrait, inner shot
|
||||
@item photo
|
||||
Outdoor photograph, with natural lighting
|
||||
@item drawing
|
||||
Hand or line drawing, with high-contrast details
|
||||
@item icon
|
||||
Small-sized colorful images
|
||||
@item text
|
||||
Text-like
|
||||
@end table
|
||||
|
||||
@end table
|
||||
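A short sketch combining the options described above, lossless mode with a higher @option{compression_level}; file names are illustrative:
@example
ffmpeg -i INPUT.png -c:v libwebp -lossless 1 -compression_level 6 OUTPUT.webp
@end example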
|
||||
@section libx264, libx264rgb
|
||||
@section libx264
|
||||
|
||||
x264 H.264/MPEG-4 AVC encoder wrapper.
|
||||
|
||||
@@ -1528,22 +1075,12 @@ for detail retention (adaptive quantization, psy-RD, psy-trellis).
|
||||
Many libx264 encoder options are mapped to FFmpeg global codec
|
||||
options, while unique encoder options are provided through private
|
||||
options. Additionally the @option{x264opts} and @option{x264-params}
|
||||
private options allows one to pass a list of key=value tuples as accepted
|
||||
private options allows one to pass a list of key=value tuples as accepted
|
||||
by the libx264 @code{x264_param_parse} function.
|
||||
|
||||
The x264 project website is at
|
||||
@url{http://www.videolan.org/developers/x264.html}.
|
||||
|
||||
The libx264rgb encoder is the same as libx264, except it accepts packed RGB
|
||||
pixel formats as input instead of YUV.
|
||||
|
||||
@subsection Supported Pixel Formats
|
||||
|
||||
x264 supports 8- to 10-bit color spaces. The exact bit depth is controlled at
|
||||
x264's configure time. FFmpeg only supports one bit depth in one particular
|
||||
build. In other words, it is not possible to build one FFmpeg with multiple
|
||||
versions of x264 with different bit depths.
|
||||
|
||||
@subsection Options
|
||||
|
||||
The following options are supported by the libx264 wrapper. The
|
||||
@@ -1569,34 +1106,25 @@ kilobits/s.
|
||||
|
||||
@item g (@emph{keyint})
|
||||
|
||||
@item qmin (@emph{qpmin})
|
||||
Minimum quantizer scale.
|
||||
|
||||
@item qmax (@emph{qpmax})
|
||||
Maximum quantizer scale.
|
||||
|
||||
@item qmin (@emph{qpmin})
|
||||
|
||||
@item qdiff (@emph{qpstep})
|
||||
Maximum difference between quantizer scales.
|
||||
|
||||
@item qblur (@emph{qblur})
|
||||
Quantizer curve blur
|
||||
|
||||
@item qcomp (@emph{qcomp})
|
||||
Quantizer curve compression factor
|
||||
|
||||
@item refs (@emph{ref})
|
||||
Number of reference frames each P-frame can use. The range is from @var{0-16}.
|
||||
|
||||
@item sc_threshold (@emph{scenecut})
|
||||
Sets the threshold for the scene change detection.
|
||||
|
||||
@item trellis (@emph{trellis})
|
||||
Performs Trellis quantization to increase efficiency. Enabled by default.
|
||||
|
||||
@item nr (@emph{nr})
|
||||
|
||||
@item me_range (@emph{merange})
|
||||
Maximum range of the motion search in pixels.
|
||||
|
||||
@item me_method (@emph{me})
|
||||
Set motion estimation method. Possible values in the decreasing order
|
||||
@@ -1618,13 +1146,10 @@ Hadamard exhaustive search (slowest).
|
||||
@end table
|
||||
|
||||
@item subq (@emph{subme})
|
||||
Sub-pixel motion estimation method.
|
||||
|
||||
@item b_strategy (@emph{b-adapt})
|
||||
Adaptive B-frame placement decision algorithm. Use only on first-pass.
|
||||
|
||||
@item keyint_min (@emph{min-keyint})
|
||||
Minimum GOP size.
|
||||
|
||||
@item coder
|
||||
Set entropy encoder. Possible values:
|
||||
@@ -1651,7 +1176,6 @@ Ignore chroma in motion estimation. It generates the same effect as
|
||||
@end table
|
||||
|
||||
@item threads (@emph{threads})
|
||||
Number of encoding threads.
|
||||
|
||||
@item thread_type
|
||||
Set multithreading technique. Possible values:
|
||||
@@ -1745,10 +1269,6 @@ Enable calculation and printing SSIM stats after the encoding.
|
||||
Enable the use of Periodic Intra Refresh instead of IDR frames when set
|
||||
to 1.
|
||||
|
||||
@item avcintra-class (@emph{class})
|
||||
Configure the encoder to generate AVC-Intra.
|
||||
Valid values are 50, 100 and 200.
|
||||
|
||||
@item bluray-compat (@emph{bluray-compat})
|
||||
Configure the encoder to be compatible with the Blu-ray standard.
|
||||
It is a shorthand for setting "bluray-compat=1 force-cfr=1".
|
||||
@@ -1873,7 +1393,7 @@ Override the x264 configuration using a :-separated list of key=value
|
||||
parameters.
|
||||
|
||||
This option is functionally the same as the @option{x264opts}, but is
|
||||
duplicated for compatibility with the Libav fork.
|
||||
duplicated for compatibility with the Libav fork.
|
||||
|
||||
For example to specify libx264 encoding options with @command{ffmpeg}:
|
||||
@example
|
||||
@@ -1886,34 +1406,6 @@ no-fast-pskip=1:subq=6:8x8dct=0:trellis=0 OUTPUT
|
||||
Encoding ffpresets for common usages are provided so they can be used with the
|
||||
general presets system (e.g. passing the @option{pre} option).
|
||||
|
||||
@section libx265
|
||||
|
||||
x265 H.265/HEVC encoder wrapper.
|
||||
|
||||
This encoder requires the presence of the libx265 headers and library
|
||||
during configuration. You need to explicitly configure the build with
|
||||
@option{--enable-libx265}.
|
||||
|
||||
@subsection Options
|
||||
|
||||
@table @option
|
||||
@item preset
|
||||
Set the x265 preset.
|
||||
|
||||
@item tune
|
||||
Set the x265 tune parameter.
|
||||
|
||||
@item x265-params
|
||||
Set x265 options using a list of @var{key}=@var{value} couples separated
|
||||
by ":". See @command{x265 --help} for a list of options.
|
||||
|
||||
For example to specify libx265 encoding options with @option{-x265-params}:
|
||||
|
||||
@example
|
||||
ffmpeg -i input -c:v libx265 -x265-params crf=26:psy-rd=1 output.mp4
|
||||
@end example
|
||||
@end table
|
||||
|
||||
@section libxvid
|
||||
|
||||
Xvid MPEG-4 Part 2 encoder wrapper.
|
||||
@@ -2077,30 +1569,6 @@ fastest.
|
||||
|
||||
@end table
|
||||
|
||||
@section mpeg2
|
||||
|
||||
MPEG-2 video encoder.
|
||||
|
||||
@subsection Options
|
||||
|
||||
@table @option
|
||||
@item seq_disp_ext @var{integer}
|
||||
Specifies if the encoder should write a sequence_display_extension to the
|
||||
output.
|
||||
@table @option
|
||||
@item -1
|
||||
@itemx auto
|
||||
Decide automatically to write it or not (this is the default) by checking if
|
||||
the data to be written is different from the default or unspecified values.
|
||||
@item 0
|
||||
@itemx never
|
||||
Never write it.
|
||||
@item 1
|
||||
@itemx always
|
||||
Always write it.
|
||||
@end table
|
||||
@end table
|
||||
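A minimal sketch of the @option{seq_disp_ext} option above, forcing the extension to always be written; the encoder selection @code{mpeg2video} and the file names are given for illustration:
@example
ffmpeg -i INPUT -c:v mpeg2video -seq_disp_ext always OUTPUT.mpg
@end example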
|
||||
@section png
|
||||
|
||||
PNG image encoder.
|
||||
@@ -2119,7 +1587,7 @@ Set physical density of pixels, in dots per meter, unset by default
|
||||
Apple ProRes encoder.
|
||||
|
||||
FFmpeg contains 2 ProRes encoders, the prores-aw and prores-ks encoder.
|
||||
The used encoder can be chosen with the @code{-vcodec} option.
|
||||
The used encoder can be chosen with the @code{-vcodec} option.
|
||||
|
||||
@subsection Private Options for prores-ks
|
||||
|
||||
@@ -2171,7 +1639,7 @@ Use @var{0} to disable alpha plane coding.
|
||||
@subsection Speed considerations
|
||||
|
||||
In the default mode of operation the encoder has to honor frame constraints
|
||||
(i.e. not produce frames with size bigger than requested) while still making
|
||||
(i.e. not produce frames with size bigger than requested) while still making
|
||||
output picture as good as possible.
|
||||
A frame containing a lot of small details is harder to compress and the encoder
|
||||
would spend more time searching for appropriate quantizers for each slice.
|
||||
@@ -2182,27 +1650,3 @@ For the fastest encoding speed set the @option{qscale} parameter (4 is the
|
||||
recommended value) and do not set a size constraint.
|
||||
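Following the speed note above, a hedged example that sets only @option{qscale} and no size constraint; the encoder selection @code{prores_ks} and the file names are illustrative:
@example
ffmpeg -i INPUT -c:v prores_ks -q:v 4 OUTPUT.mov
@end example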
|
||||
@c man end VIDEO ENCODERS
|
||||
|
||||
@chapter Subtitles Encoders
|
||||
@c man begin SUBTITLES ENCODERS
|
||||
|
||||
@section dvdsub
|
||||
|
||||
This codec encodes the bitmap subtitle format that is used in DVDs.
|
||||
Typically they are stored in VOBSUB file pairs (*.idx + *.sub),
|
||||
and they can also be used in Matroska files.
|
||||
|
||||
@subsection Options
|
||||
|
||||
@table @option
|
||||
@item even_rows_fix
|
||||
When set to 1, enable a work-around that makes the number of pixel rows
|
||||
even in all subtitles. This fixes a problem with some players that
|
||||
cut off the bottom row if the number is odd. The work-around just adds
|
||||
a fully transparent row if needed. The overhead is low, typically
|
||||
one byte per subtitle on average.
|
||||
|
||||
By default, this work-around is disabled.
|
||||
@end table
|
||||
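A minimal sketch enabling the work-around described above when writing DVD subtitles into Matroska; stream selection and file names are illustrative:
@example
ffmpeg -i INPUT -c:s dvdsub -even_rows_fix 1 OUTPUT.mkv
@end example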
|
||||
@c man end SUBTITLES ENCODERS
|
||||
|
@@ -11,24 +11,18 @@ CFLAGS += -Wall -g
|
||||
CFLAGS := $(shell pkg-config --cflags $(FFMPEG_LIBS)) $(CFLAGS)
|
||||
LDLIBS := $(shell pkg-config --libs $(FFMPEG_LIBS)) $(LDLIBS)
|
||||
|
||||
EXAMPLES= avio_reading \
|
||||
decoding_encoding \
|
||||
demuxing_decoding \
|
||||
extract_mvs \
|
||||
EXAMPLES= decoding_encoding \
|
||||
demuxing \
|
||||
filtering_video \
|
||||
filtering_audio \
|
||||
metadata \
|
||||
muxing \
|
||||
remuxing \
|
||||
resampling_audio \
|
||||
scaling_video \
|
||||
transcode_aac \
|
||||
transcoding \
|
||||
|
||||
OBJS=$(addsuffix .o,$(EXAMPLES))
|
||||
|
||||
# the following examples make explicit use of the math library
|
||||
avcodec: LDLIBS += -lm
|
||||
decoding_encoding: LDLIBS += -lm
|
||||
muxing: LDLIBS += -lm
|
||||
resampling_audio: LDLIBS += -lm
|
||||
|
@@ -5,19 +5,14 @@ Both following use cases rely on pkg-config and make, thus make sure
|
||||
that you have them installed and working on your system.
|
||||
|
||||
|
||||
Method 1: build the installed examples in a generic read/write user directory
|
||||
1) Build the installed examples in a generic read/write user directory
|
||||
|
||||
Copy to a read/write user directory and just use "make"; it will link
|
||||
to the libraries on your system, assuming the PKG_CONFIG_PATH is
|
||||
correctly configured.
|
||||
|
||||
Method 2: build the examples in-tree
|
||||
2) Build the examples in-tree
|
||||
|
||||
Assuming you are in the source FFmpeg checkout directory, you need to build
|
||||
FFmpeg (no need to make install in any prefix). Then just run "make examples".
|
||||
This will build the examples using the FFmpeg build system. You can clean those
|
||||
examples using "make examplesclean".
|
||||
|
||||
If you want to try the dedicated Makefile examples (to emulate the first
|
||||
method), go into doc/examples and run a command such as
|
||||
PKG_CONFIG_PATH=pc-uninstalled make.
|
||||
FFmpeg (no need to make install in any prefix). Then you can go into
|
||||
doc/examples and run a command such as PKG_CONFIG_PATH=pc-uninstalled make.
|
||||
|
@@ -1,134 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2014 Stefano Sabatini
|
||||
*
|
||||
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
* of this software and associated documentation files (the "Software"), to deal
|
||||
* in the Software without restriction, including without limitation the rights
|
||||
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
* copies of the Software, and to permit persons to whom the Software is
|
||||
* furnished to do so, subject to the following conditions:
|
||||
*
|
||||
* The above copyright notice and this permission notice shall be included in
|
||||
* all copies or substantial portions of the Software.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
|
||||
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
||||
* THE SOFTWARE.
|
||||
*/
|
||||
|
||||
/**
|
||||
* @file
|
||||
* libavformat AVIOContext API example.
|
||||
*
|
||||
* Make libavformat demuxer access media content through a custom
|
||||
* AVIOContext read callback.
|
||||
* @example avio_reading.c
|
||||
*/
|
||||
|
||||
#include <libavcodec/avcodec.h>
|
||||
#include <libavformat/avformat.h>
|
||||
#include <libavformat/avio.h>
|
||||
#include <libavutil/file.h>
|
||||
|
||||
struct buffer_data {
|
||||
uint8_t *ptr;
|
||||
size_t size; ///< size left in the buffer
|
||||
};
|
||||
|
||||
static int read_packet(void *opaque, uint8_t *buf, int buf_size)
|
||||
{
|
||||
struct buffer_data *bd = (struct buffer_data *)opaque;
|
||||
buf_size = FFMIN(buf_size, bd->size);
|
||||
|
||||
printf("ptr:%p size:%zu\n", bd->ptr, bd->size);
|
||||
|
||||
/* copy internal buffer data to buf */
|
||||
memcpy(buf, bd->ptr, buf_size);
|
||||
bd->ptr += buf_size;
|
||||
bd->size -= buf_size;
|
||||
|
||||
return buf_size;
|
||||
}
|
||||
|
||||
int main(int argc, char *argv[])
|
||||
{
|
||||
AVFormatContext *fmt_ctx = NULL;
|
||||
AVIOContext *avio_ctx = NULL;
|
||||
uint8_t *buffer = NULL, *avio_ctx_buffer = NULL;
|
||||
size_t buffer_size, avio_ctx_buffer_size = 4096;
|
||||
char *input_filename = NULL;
|
||||
int ret = 0;
|
||||
struct buffer_data bd = { 0 };
|
||||
|
||||
if (argc != 2) {
|
||||
fprintf(stderr, "usage: %s input_file\n"
|
||||
"API example program to show how to read from a custom buffer "
|
||||
"accessed through AVIOContext.\n", argv[0]);
|
||||
return 1;
|
||||
}
|
||||
input_filename = argv[1];
|
||||
|
||||
/* register codecs and formats and other lavf/lavc components*/
|
||||
av_register_all();
|
||||
|
||||
/* slurp file content into buffer */
|
||||
ret = av_file_map(input_filename, &buffer, &buffer_size, 0, NULL);
|
||||
if (ret < 0)
|
||||
goto end;
|
||||
|
||||
/* fill opaque structure used by the AVIOContext read callback */
|
||||
bd.ptr = buffer;
|
||||
bd.size = buffer_size;
|
||||
|
||||
if (!(fmt_ctx = avformat_alloc_context())) {
|
||||
ret = AVERROR(ENOMEM);
|
||||
goto end;
|
||||
}
|
||||
|
||||
avio_ctx_buffer = av_malloc(avio_ctx_buffer_size);
|
||||
if (!avio_ctx_buffer) {
|
||||
ret = AVERROR(ENOMEM);
|
||||
goto end;
|
||||
}
|
||||
avio_ctx = avio_alloc_context(avio_ctx_buffer, avio_ctx_buffer_size,
|
||||
0, &bd, &read_packet, NULL, NULL);
|
||||
if (!avio_ctx) {
|
||||
ret = AVERROR(ENOMEM);
|
||||
goto end;
|
||||
}
|
||||
fmt_ctx->pb = avio_ctx;
|
||||
|
||||
ret = avformat_open_input(&fmt_ctx, NULL, NULL, NULL);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Could not open input\n");
|
||||
goto end;
|
||||
}
|
||||
|
||||
ret = avformat_find_stream_info(fmt_ctx, NULL);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Could not find stream information\n");
|
||||
goto end;
|
||||
}
|
||||
|
||||
av_dump_format(fmt_ctx, 0, input_filename, 0);
|
||||
|
||||
end:
|
||||
avformat_close_input(&fmt_ctx);
|
||||
/* note: the internal buffer could have changed, and be != avio_ctx_buffer */
|
||||
if (avio_ctx) {
|
||||
av_freep(&avio_ctx->buffer);
|
||||
av_freep(&avio_ctx);
|
||||
}
|
||||
av_file_unmap(buffer, buffer_size);
|
||||
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
|
||||
return 1;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
@@ -24,10 +24,10 @@
|
||||
* @file
|
||||
* libavcodec API use example.
|
||||
*
|
||||
* @example decoding_encoding.c
|
||||
* Note that libavcodec only handles codecs (mpeg, mpeg4, etc...),
|
||||
* not file formats (avi, vob, mp4, mov, mkv, mxf, flv, mpegts, mpegps, etc...). See library 'libavformat' for the
|
||||
* format handling
|
||||
* @example doc/examples/decoding_encoding.c
|
||||
*/
|
||||
|
||||
#include <math.h>
|
||||
@@ -156,7 +156,7 @@ static void audio_encode_example(const char *filename)
|
||||
}
|
||||
|
||||
/* frame containing input raw audio */
|
||||
frame = av_frame_alloc();
|
||||
frame = avcodec_alloc_frame();
|
||||
if (!frame) {
|
||||
fprintf(stderr, "Could not allocate audio frame\n");
|
||||
exit(1);
|
||||
@@ -170,10 +170,6 @@ static void audio_encode_example(const char *filename)
|
||||
* we calculate the size of the samples buffer in bytes */
|
||||
buffer_size = av_samples_get_buffer_size(NULL, c->channels, c->frame_size,
|
||||
c->sample_fmt, 0);
|
||||
if (buffer_size < 0) {
|
||||
fprintf(stderr, "Could not get sample buffer size\n");
|
||||
exit(1);
|
||||
}
|
||||
samples = av_malloc(buffer_size);
|
||||
if (!samples) {
|
||||
fprintf(stderr, "Could not allocate %d bytes for samples buffer\n",
|
||||
@@ -191,7 +187,7 @@ static void audio_encode_example(const char *filename)
|
||||
/* encode a single tone sound */
|
||||
t = 0;
|
||||
tincr = 2 * M_PI * 440.0 / c->sample_rate;
|
||||
for (i = 0; i < 200; i++) {
|
||||
for(i=0;i<200;i++) {
|
||||
av_init_packet(&pkt);
|
||||
pkt.data = NULL; // packet data will be allocated by the encoder
|
||||
pkt.size = 0;
|
||||
@@ -231,7 +227,7 @@ static void audio_encode_example(const char *filename)
|
||||
fclose(f);
|
||||
|
||||
av_freep(&samples);
|
||||
av_frame_free(&frame);
|
||||
avcodec_free_frame(&frame);
|
||||
avcodec_close(c);
|
||||
av_free(c);
|
||||
}
|
||||
@@ -288,15 +284,15 @@ static void audio_decode_example(const char *outfilename, const char *filename)
|
||||
avpkt.size = fread(inbuf, 1, AUDIO_INBUF_SIZE, f);
|
||||
|
||||
while (avpkt.size > 0) {
|
||||
int i, ch;
|
||||
int got_frame = 0;
|
||||
|
||||
if (!decoded_frame) {
|
||||
if (!(decoded_frame = av_frame_alloc())) {
|
||||
if (!(decoded_frame = avcodec_alloc_frame())) {
|
||||
fprintf(stderr, "Could not allocate audio frame\n");
|
||||
exit(1);
|
||||
}
|
||||
}
|
||||
} else
|
||||
avcodec_get_frame_defaults(decoded_frame);
|
||||
|
||||
len = avcodec_decode_audio4(c, decoded_frame, &got_frame, &avpkt);
|
||||
if (len < 0) {
|
||||
@@ -305,15 +301,10 @@ static void audio_decode_example(const char *outfilename, const char *filename)
|
||||
}
|
||||
if (got_frame) {
|
||||
/* if a frame has been decoded, output it */
|
||||
int data_size = av_get_bytes_per_sample(c->sample_fmt);
|
||||
if (data_size < 0) {
|
||||
/* This should not occur, checking just for paranoia */
|
||||
fprintf(stderr, "Failed to calculate data size\n");
|
||||
exit(1);
|
||||
}
|
||||
for (i=0; i<decoded_frame->nb_samples; i++)
|
||||
for (ch=0; ch<c->channels; ch++)
|
||||
fwrite(decoded_frame->data[ch] + data_size*i, 1, data_size, outfile);
|
||||
int data_size = av_samples_get_buffer_size(NULL, c->channels,
|
||||
decoded_frame->nb_samples,
|
||||
c->sample_fmt, 1);
|
||||
fwrite(decoded_frame->data[0], 1, data_size, outfile);
|
||||
}
|
||||
avpkt.size -= len;
|
||||
avpkt.data += len;
|
||||
@@ -338,7 +329,7 @@ static void audio_decode_example(const char *outfilename, const char *filename)
|
||||
|
||||
avcodec_close(c);
|
||||
av_free(c);
|
||||
av_frame_free(&decoded_frame);
|
||||
avcodec_free_frame(&decoded_frame);
|
||||
}
|
||||
|
||||
/*
|
||||
@@ -375,18 +366,12 @@ static void video_encode_example(const char *filename, int codec_id)
|
||||
c->width = 352;
|
||||
c->height = 288;
|
||||
/* frames per second */
|
||||
c->time_base = (AVRational){1,25};
|
||||
/* emit one intra frame every ten frames
|
||||
* check frame pict_type before passing frame
|
||||
* to encoder, if frame->pict_type is AV_PICTURE_TYPE_I
|
||||
* then gop_size is ignored and the output of encoder
|
||||
* will always be an I frame irrespective of gop_size
|
||||
*/
|
||||
c->gop_size = 10;
|
||||
c->max_b_frames = 1;
|
||||
c->time_base= (AVRational){1,25};
|
||||
c->gop_size = 10; /* emit one intra frame every ten frames */
|
||||
c->max_b_frames=1;
|
||||
c->pix_fmt = AV_PIX_FMT_YUV420P;
|
||||
|
||||
if (codec_id == AV_CODEC_ID_H264)
|
||||
if(codec_id == AV_CODEC_ID_H264)
|
||||
av_opt_set(c->priv_data, "preset", "slow", 0);
|
||||
|
||||
/* open it */
|
||||
@@ -401,7 +386,7 @@ static void video_encode_example(const char *filename, int codec_id)
|
||||
exit(1);
|
||||
}
|
||||
|
||||
frame = av_frame_alloc();
|
||||
frame = avcodec_alloc_frame();
|
||||
if (!frame) {
|
||||
fprintf(stderr, "Could not allocate video frame\n");
|
||||
exit(1);
|
||||
@@ -420,7 +405,7 @@ static void video_encode_example(const char *filename, int codec_id)
|
||||
}
|
||||
|
||||
/* encode 1 second of video */
|
||||
for (i = 0; i < 25; i++) {
|
||||
for(i=0;i<25;i++) {
|
||||
av_init_packet(&pkt);
|
||||
pkt.data = NULL; // packet data will be allocated by the encoder
|
||||
pkt.size = 0;
|
||||
@@ -428,15 +413,15 @@ static void video_encode_example(const char *filename, int codec_id)
|
||||
fflush(stdout);
|
||||
/* prepare a dummy image */
|
||||
/* Y */
|
||||
for (y = 0; y < c->height; y++) {
|
||||
for (x = 0; x < c->width; x++) {
|
||||
for(y=0;y<c->height;y++) {
|
||||
for(x=0;x<c->width;x++) {
|
||||
frame->data[0][y * frame->linesize[0] + x] = x + y + i * 3;
|
||||
}
|
||||
}
|
||||
|
||||
/* Cb and Cr */
|
||||
for (y = 0; y < c->height/2; y++) {
|
||||
for (x = 0; x < c->width/2; x++) {
|
||||
for(y=0;y<c->height/2;y++) {
|
||||
for(x=0;x<c->width/2;x++) {
|
||||
frame->data[1][y * frame->linesize[1] + x] = 128 + y + i * 2;
|
||||
frame->data[2][y * frame->linesize[2] + x] = 64 + x + i * 5;
|
||||
}
|
||||
@@ -482,7 +467,7 @@ static void video_encode_example(const char *filename, int codec_id)
|
||||
avcodec_close(c);
|
||||
av_free(c);
|
||||
av_freep(&frame->data[0]);
|
||||
av_frame_free(&frame);
|
||||
avcodec_free_frame(&frame);
|
||||
printf("\n");
|
||||
}
|
||||
|
||||
@@ -496,10 +481,10 @@ static void pgm_save(unsigned char *buf, int wrap, int xsize, int ysize,
|
||||
FILE *f;
|
||||
int i;
|
||||
|
||||
f = fopen(filename,"w");
|
||||
fprintf(f, "P5\n%d %d\n%d\n", xsize, ysize, 255);
|
||||
for (i = 0; i < ysize; i++)
|
||||
fwrite(buf + i * wrap, 1, xsize, f);
|
||||
f=fopen(filename,"w");
|
||||
fprintf(f,"P5\n%d %d\n%d\n",xsize,ysize,255);
|
||||
for(i=0;i<ysize;i++)
|
||||
fwrite(buf + i * wrap,1,xsize,f);
|
||||
fclose(f);
|
||||
}
|
||||
|
||||
@@ -580,14 +565,14 @@ static void video_decode_example(const char *outfilename, const char *filename)
|
||||
exit(1);
|
||||
}
|
||||
|
||||
frame = av_frame_alloc();
|
||||
frame = avcodec_alloc_frame();
|
||||
if (!frame) {
|
||||
fprintf(stderr, "Could not allocate video frame\n");
|
||||
exit(1);
|
||||
}
|
||||
|
||||
frame_count = 0;
|
||||
for (;;) {
|
||||
for(;;) {
|
||||
avpkt.size = fread(inbuf, 1, INBUF_SIZE, f);
|
||||
if (avpkt.size == 0)
|
||||
break;
|
||||
@@ -624,7 +609,7 @@ static void video_decode_example(const char *outfilename, const char *filename)
|
||||
|
||||
avcodec_close(c);
|
||||
av_free(c);
|
||||
av_frame_free(&frame);
|
||||
avcodec_free_frame(&frame);
|
||||
printf("\n");
|
||||
}
|
||||
|
||||
@@ -641,7 +626,7 @@ int main(int argc, char **argv)
|
||||
"This program generates a synthetic stream and encodes it to a file\n"
|
||||
"named test.h264, test.mp2 or test.mpg depending on output_type.\n"
|
||||
"The encoded stream is then decoded and written to a raw data output.\n"
|
||||
"output_type must be chosen between 'h264', 'mp2', 'mpg'.\n",
|
||||
"output_type must be choosen between 'h264', 'mp2', 'mpg'.\n",
|
||||
argv[0]);
|
||||
return 1;
|
||||
}
|
||||
@@ -651,7 +636,7 @@ int main(int argc, char **argv)
|
||||
video_encode_example("test.h264", AV_CODEC_ID_H264);
|
||||
} else if (!strcmp(output_type, "mp2")) {
|
||||
audio_encode_example("test.mp2");
|
||||
audio_decode_example("test.pcm", "test.mp2");
|
||||
audio_decode_example("test.sw", "test.mp2");
|
||||
} else if (!strcmp(output_type, "mpg")) {
|
||||
video_encode_example("test.mpg", AV_CODEC_ID_MPEG1VIDEO);
|
||||
video_decode_example("test%02d.pgm", "test.mpg");
|
||||
|
@@ -22,11 +22,11 @@
|
||||
|
||||
/**
|
||||
* @file
|
||||
* Demuxing and decoding example.
|
||||
* libavformat demuxing API use example.
|
||||
*
|
||||
* Show how to use the libavformat and libavcodec API to demux and
|
||||
* decode audio and video data.
|
||||
* @example demuxing_decoding.c
|
||||
* @example doc/examples/demuxing.c
|
||||
*/
|
||||
|
||||
#include <libavutil/imgutils.h>
|
||||
@@ -36,8 +36,6 @@
|
||||
|
||||
static AVFormatContext *fmt_ctx = NULL;
|
||||
static AVCodecContext *video_dec_ctx = NULL, *audio_dec_ctx;
|
||||
static int width, height;
|
||||
static enum AVPixelFormat pix_fmt;
|
||||
static AVStream *video_stream = NULL, *audio_stream = NULL;
|
||||
static const char *src_filename = NULL;
|
||||
static const char *video_dst_filename = NULL;
|
||||
@@ -55,46 +53,18 @@ static AVPacket pkt;
|
||||
static int video_frame_count = 0;
|
||||
static int audio_frame_count = 0;
|
||||
|
||||
/* The different ways of decoding and managing data memory. You are not
|
||||
* supposed to support all the modes in your application but pick the one most
|
||||
* appropriate to your needs. Look for the use of api_mode in this example to
|
||||
* see the differences in API usage between them */
|
||||
enum {
|
||||
API_MODE_OLD = 0, /* old method, deprecated */
|
||||
API_MODE_NEW_API_REF_COUNT = 1, /* new method, using the frame reference counting */
|
||||
API_MODE_NEW_API_NO_REF_COUNT = 2, /* new method, without reference counting */
|
||||
};
|
||||
|
||||
static int api_mode = API_MODE_OLD;
|
||||
|
||||
static int decode_packet(int *got_frame, int cached)
|
||||
{
|
||||
int ret = 0;
|
||||
int decoded = pkt.size;
|
||||
|
||||
*got_frame = 0;
|
||||
|
||||
if (pkt.stream_index == video_stream_idx) {
|
||||
/* decode video frame */
|
||||
ret = avcodec_decode_video2(video_dec_ctx, frame, got_frame, &pkt);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error decoding video frame (%s)\n", av_err2str(ret));
|
||||
fprintf(stderr, "Error decoding video frame\n");
|
||||
return ret;
|
||||
}
|
||||
if (video_dec_ctx->width != width || video_dec_ctx->height != height ||
|
||||
video_dec_ctx->pix_fmt != pix_fmt) {
|
||||
/* To handle this change, one could call av_image_alloc again and
|
||||
* decode the following frames into another rawvideo file. */
|
||||
fprintf(stderr, "Error: Width, height and pixel format have to be "
|
||||
"constant in a rawvideo file, but the width, height or "
|
||||
"pixel format of the input video changed:\n"
|
||||
"old: width = %d, height = %d, format = %s\n"
|
||||
"new: width = %d, height = %d, format = %s\n",
|
||||
width, height, av_get_pix_fmt_name(pix_fmt),
|
||||
video_dec_ctx->width, video_dec_ctx->height,
|
||||
av_get_pix_fmt_name(video_dec_ctx->pix_fmt));
|
||||
return -1;
|
||||
}
|
||||
|
||||
if (*got_frame) {
|
||||
printf("video_frame%s n:%d coded_n:%d pts:%s\n",
|
||||
@@ -106,7 +76,7 @@ static int decode_packet(int *got_frame, int cached)
|
||||
* this is required since rawvideo expects non aligned data */
|
||||
av_image_copy(video_dst_data, video_dst_linesize,
|
||||
(const uint8_t **)(frame->data), frame->linesize,
|
||||
pix_fmt, width, height);
|
||||
video_dec_ctx->pix_fmt, video_dec_ctx->width, video_dec_ctx->height);
|
||||
|
||||
/* write to rawvideo file */
|
||||
fwrite(video_dst_data[0], 1, video_dst_bufsize, video_dst_file);
|
||||
@@ -115,7 +85,7 @@ static int decode_packet(int *got_frame, int cached)
|
||||
/* decode audio frame */
|
||||
ret = avcodec_decode_audio4(audio_dec_ctx, frame, got_frame, &pkt);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error decoding audio frame (%s)\n", av_err2str(ret));
|
||||
fprintf(stderr, "Error decoding audio frame\n");
|
||||
return ret;
|
||||
}
|
||||
/* Some audio decoders decode only part of the packet, and have to be
|
||||
@@ -143,22 +113,16 @@ static int decode_packet(int *got_frame, int cached)
|
||||
}
|
||||
}
|
||||
|
||||
/* If we use the new API with reference counting, we own the data and need
|
||||
* to de-reference it when we don't use it anymore */
|
||||
if (*got_frame && api_mode == API_MODE_NEW_API_REF_COUNT)
|
||||
av_frame_unref(frame);
|
||||
|
||||
return decoded;
|
||||
}
|
||||
|
||||
static int open_codec_context(int *stream_idx,
|
||||
AVFormatContext *fmt_ctx, enum AVMediaType type)
|
||||
{
|
||||
int ret, stream_index;
|
||||
int ret;
|
||||
AVStream *st;
|
||||
AVCodecContext *dec_ctx = NULL;
|
||||
AVCodec *dec = NULL;
|
||||
AVDictionary *opts = NULL;
|
||||
|
||||
ret = av_find_best_stream(fmt_ctx, type, -1, -1, NULL, 0);
|
||||
if (ret < 0) {
|
||||
@@ -166,8 +130,8 @@ static int open_codec_context(int *stream_idx,
|
||||
av_get_media_type_string(type), src_filename);
|
||||
return ret;
|
||||
} else {
|
||||
stream_index = ret;
|
||||
st = fmt_ctx->streams[stream_index];
|
||||
*stream_idx = ret;
|
||||
st = fmt_ctx->streams[*stream_idx];
|
||||
|
||||
/* find decoder for the stream */
|
||||
dec_ctx = st->codec;
|
||||
@@ -175,18 +139,14 @@ static int open_codec_context(int *stream_idx,
|
||||
if (!dec) {
|
||||
fprintf(stderr, "Failed to find %s codec\n",
|
||||
av_get_media_type_string(type));
|
||||
return AVERROR(EINVAL);
|
||||
return ret;
|
||||
}
|
||||
|
||||
/* Init the decoders, with or without reference counting */
|
||||
if (api_mode == API_MODE_NEW_API_REF_COUNT)
|
||||
av_dict_set(&opts, "refcounted_frames", "1", 0);
|
||||
if ((ret = avcodec_open2(dec_ctx, dec, &opts)) < 0) {
|
||||
if ((ret = avcodec_open2(dec_ctx, dec, NULL)) < 0) {
|
||||
fprintf(stderr, "Failed to open %s codec\n",
|
||||
av_get_media_type_string(type));
|
||||
return ret;
|
||||
}
|
||||
*stream_idx = stream_index;
|
||||
}
|
||||
|
||||
return 0;
|
||||
@@ -225,31 +185,15 @@ int main (int argc, char **argv)
|
||||
{
|
||||
int ret = 0, got_frame;
|
||||
|
||||
if (argc != 4 && argc != 5) {
|
||||
fprintf(stderr, "usage: %s [-refcount=<old|new_norefcount|new_refcount>] "
|
||||
"input_file video_output_file audio_output_file\n"
|
||||
if (argc != 4) {
|
||||
fprintf(stderr, "usage: %s input_file video_output_file audio_output_file\n"
|
||||
"API example program to show how to read frames from an input file.\n"
|
||||
"This program reads frames from a file, decodes them, and writes decoded\n"
|
||||
"video frames to a rawvideo file named video_output_file, and decoded\n"
|
||||
"audio frames to a rawaudio file named audio_output_file.\n\n"
|
||||
"If the -refcount option is specified, the program use the\n"
|
||||
"reference counting frame system which allows keeping a copy of\n"
|
||||
"the data for longer than one decode call. If unset, it's using\n"
|
||||
"the classic old method.\n"
|
||||
"audio frames to a rawaudio file named audio_output_file.\n"
|
||||
"\n", argv[0]);
|
||||
exit(1);
|
||||
}
|
||||
if (argc == 5) {
|
||||
const char *mode = argv[1] + strlen("-refcount=");
|
||||
if (!strcmp(mode, "old")) api_mode = API_MODE_OLD;
|
||||
else if (!strcmp(mode, "new_norefcount")) api_mode = API_MODE_NEW_API_NO_REF_COUNT;
|
||||
else if (!strcmp(mode, "new_refcount")) api_mode = API_MODE_NEW_API_REF_COUNT;
|
||||
else {
|
||||
fprintf(stderr, "unknow mode '%s'\n", mode);
|
||||
exit(1);
|
||||
}
|
||||
argv++;
|
||||
}
|
||||
src_filename = argv[1];
|
||||
video_dst_filename = argv[2];
|
||||
audio_dst_filename = argv[3];
|
||||
@@ -281,11 +225,9 @@ int main (int argc, char **argv)
|
||||
}
|
||||
|
||||
/* allocate image where the decoded image will be put */
|
||||
width = video_dec_ctx->width;
|
||||
height = video_dec_ctx->height;
|
||||
pix_fmt = video_dec_ctx->pix_fmt;
|
||||
ret = av_image_alloc(video_dst_data, video_dst_linesize,
|
||||
width, height, pix_fmt, 1);
|
||||
video_dec_ctx->width, video_dec_ctx->height,
|
||||
video_dec_ctx->pix_fmt, 1);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Could not allocate raw video buffer\n");
|
||||
goto end;
|
||||
@@ -298,7 +240,7 @@ int main (int argc, char **argv)
|
||||
audio_dec_ctx = audio_stream->codec;
|
||||
audio_dst_file = fopen(audio_dst_filename, "wb");
|
||||
if (!audio_dst_file) {
|
||||
fprintf(stderr, "Could not open destination file %s\n", audio_dst_filename);
|
||||
fprintf(stderr, "Could not open destination file %s\n", video_dst_filename);
|
||||
ret = 1;
|
||||
goto end;
|
||||
}
|
||||
@@ -313,12 +255,7 @@ int main (int argc, char **argv)
|
||||
goto end;
|
||||
}
|
||||
|
||||
/* When using the new API, you need to use the libavutil/frame.h API, while
|
||||
* the classic frame management is available in libavcodec */
|
||||
if (api_mode == API_MODE_OLD)
|
||||
frame = avcodec_alloc_frame();
|
||||
else
|
||||
frame = av_frame_alloc();
|
||||
frame = avcodec_alloc_frame();
|
||||
if (!frame) {
|
||||
fprintf(stderr, "Could not allocate frame\n");
|
||||
ret = AVERROR(ENOMEM);
|
||||
@@ -360,7 +297,7 @@ int main (int argc, char **argv)
|
||||
if (video_stream) {
|
||||
printf("Play the output video file with the command:\n"
|
||||
"ffplay -f rawvideo -pix_fmt %s -video_size %dx%d %s\n",
|
||||
av_get_pix_fmt_name(pix_fmt), width, height,
|
||||
av_get_pix_fmt_name(video_dec_ctx->pix_fmt), video_dec_ctx->width, video_dec_ctx->height,
|
||||
video_dst_filename);
|
||||
}
|
||||
|
||||
@@ -388,17 +325,16 @@ int main (int argc, char **argv)
|
||||
}
|
||||
|
||||
end:
|
||||
avcodec_close(video_dec_ctx);
|
||||
avcodec_close(audio_dec_ctx);
|
||||
if (video_dec_ctx)
|
||||
avcodec_close(video_dec_ctx);
|
||||
if (audio_dec_ctx)
|
||||
avcodec_close(audio_dec_ctx);
|
||||
avformat_close_input(&fmt_ctx);
|
||||
if (video_dst_file)
|
||||
fclose(video_dst_file);
|
||||
if (audio_dst_file)
|
||||
fclose(audio_dst_file);
|
||||
if (api_mode == API_MODE_OLD)
|
||||
avcodec_free_frame(&frame);
|
||||
else
|
||||
av_frame_free(&frame);
|
||||
av_free(frame);
|
||||
av_free(video_dst_data[0]);
|
||||
|
||||
return ret < 0;
|
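The hunks above toggle between the deprecated avcodec_alloc_frame()/avcodec_free_frame() pair and the libavutil frame API that sits behind the -refcount option. A minimal illustrative sketch of the two pairings follows; it is not taken from the example itself, only a hedged aside using the public frame APIs:

#include <libavcodec/avcodec.h>
#include <libavutil/frame.h>

/* Illustrative sketch only: the two allocation/free pairings that the
 * -refcount hunks above switch between. A frame from av_frame_alloc() can hold
 * reference-counted data that stays valid until av_frame_unref()/av_frame_free();
 * the classic libavcodec frame is only valid until the next decode call. */
static void frame_api_sketch(void)
{
    AVFrame *classic    = avcodec_alloc_frame(); /* old libavcodec allocator */
    AVFrame *refcounted = av_frame_alloc();      /* libavutil allocator */

    if (classic)
        avcodec_free_frame(&classic);   /* pairs with avcodec_alloc_frame() */
    if (refcounted)
        av_frame_free(&refcounted);     /* pairs with av_frame_alloc() */
}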
@@ -1,185 +0,0 @@
/*
 * Copyright (c) 2012 Stefano Sabatini
 * Copyright (c) 2014 Clément Bœsch
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 * THE SOFTWARE.
 */

#include <libavutil/motion_vector.h>
#include <libavformat/avformat.h>

static AVFormatContext *fmt_ctx = NULL;
static AVCodecContext *video_dec_ctx = NULL;
static AVStream *video_stream = NULL;
static const char *src_filename = NULL;

static int video_stream_idx = -1;
static AVFrame *frame = NULL;
static AVPacket pkt;
static int video_frame_count = 0;

static int decode_packet(int *got_frame, int cached)
{
    int decoded = pkt.size;

    *got_frame = 0;

    if (pkt.stream_index == video_stream_idx) {
        int ret = avcodec_decode_video2(video_dec_ctx, frame, got_frame, &pkt);
        if (ret < 0) {
            fprintf(stderr, "Error decoding video frame (%s)\n", av_err2str(ret));
            return ret;
        }

        if (*got_frame) {
            int i;
            AVFrameSideData *sd;

            video_frame_count++;
            sd = av_frame_get_side_data(frame, AV_FRAME_DATA_MOTION_VECTORS);
            if (sd) {
                const AVMotionVector *mvs = (const AVMotionVector *)sd->data;
                for (i = 0; i < sd->size / sizeof(*mvs); i++) {
                    const AVMotionVector *mv = &mvs[i];
                    printf("%d,%2d,%2d,%2d,%4d,%4d,%4d,%4d,0x%"PRIx64"\n",
                           video_frame_count, mv->source,
                           mv->w, mv->h, mv->src_x, mv->src_y,
                           mv->dst_x, mv->dst_y, mv->flags);
                }
            }
        }
    }

    return decoded;
}

static int open_codec_context(int *stream_idx,
                              AVFormatContext *fmt_ctx, enum AVMediaType type)
{
    int ret;
    AVStream *st;
    AVCodecContext *dec_ctx = NULL;
    AVCodec *dec = NULL;
    AVDictionary *opts = NULL;

    ret = av_find_best_stream(fmt_ctx, type, -1, -1, NULL, 0);
    if (ret < 0) {
        fprintf(stderr, "Could not find %s stream in input file '%s'\n",
                av_get_media_type_string(type), src_filename);
        return ret;
    } else {
        *stream_idx = ret;
        st = fmt_ctx->streams[*stream_idx];

        /* find decoder for the stream */
        dec_ctx = st->codec;
        dec = avcodec_find_decoder(dec_ctx->codec_id);
        if (!dec) {
            fprintf(stderr, "Failed to find %s codec\n",
                    av_get_media_type_string(type));
            return AVERROR(EINVAL);
        }

        /* Init the video decoder */
        av_dict_set(&opts, "flags2", "+export_mvs", 0);
        if ((ret = avcodec_open2(dec_ctx, dec, &opts)) < 0) {
            fprintf(stderr, "Failed to open %s codec\n",
                    av_get_media_type_string(type));
            return ret;
        }
    }

    return 0;
}

int main(int argc, char **argv)
{
    int ret = 0, got_frame;

    if (argc != 2) {
        fprintf(stderr, "Usage: %s <video>\n", argv[0]);
        exit(1);
    }
    src_filename = argv[1];

    av_register_all();

    if (avformat_open_input(&fmt_ctx, src_filename, NULL, NULL) < 0) {
        fprintf(stderr, "Could not open source file %s\n", src_filename);
        exit(1);
    }

    if (avformat_find_stream_info(fmt_ctx, NULL) < 0) {
        fprintf(stderr, "Could not find stream information\n");
        exit(1);
    }

    if (open_codec_context(&video_stream_idx, fmt_ctx, AVMEDIA_TYPE_VIDEO) >= 0) {
        video_stream = fmt_ctx->streams[video_stream_idx];
        video_dec_ctx = video_stream->codec;
    }

    av_dump_format(fmt_ctx, 0, src_filename, 0);

    if (!video_stream) {
        fprintf(stderr, "Could not find video stream in the input, aborting\n");
        ret = 1;
        goto end;
    }

    frame = av_frame_alloc();
    if (!frame) {
        fprintf(stderr, "Could not allocate frame\n");
        ret = AVERROR(ENOMEM);
        goto end;
    }

    printf("framenum,source,blockw,blockh,srcx,srcy,dstx,dsty,flags\n");

    /* initialize packet, set data to NULL, let the demuxer fill it */
    av_init_packet(&pkt);
    pkt.data = NULL;
    pkt.size = 0;

    /* read frames from the file */
    while (av_read_frame(fmt_ctx, &pkt) >= 0) {
        AVPacket orig_pkt = pkt;
        do {
            ret = decode_packet(&got_frame, 0);
            if (ret < 0)
                break;
            pkt.data += ret;
            pkt.size -= ret;
        } while (pkt.size > 0);
        av_free_packet(&orig_pkt);
    }

    /* flush cached frames */
    pkt.data = NULL;
    pkt.size = 0;
    do {
        decode_packet(&got_frame, 1);
    } while (got_frame);

end:
    avcodec_close(video_dec_ctx);
    avformat_close_input(&fmt_ctx);
    av_frame_free(&frame);
    return ret < 0;
}
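The deleted extract_mvs.c above reads AVMotionVector records out of the AV_FRAME_DATA_MOTION_VECTORS frame side data and prints them as CSV. As a hedged aside, a small helper like the following could interpret one such record; it is not part of the example, only a sketch built on the public AVMotionVector fields:

#include <stdio.h>
#include <libavutil/motion_vector.h>

/* Illustrative helper (not part of the example above): print the displacement
 * described by one exported motion vector. A negative mv->source means the
 * block is predicted from a past frame, a positive one from a future frame. */
static void print_mv_delta(const AVMotionVector *mv)
{
    int dx = mv->dst_x - mv->src_x;
    int dy = mv->dst_y - mv->src_y;

    printf("%dx%d block ending at (%d,%d) moved by (%d,%d), source=%d\n",
           mv->w, mv->h, mv->dst_x, mv->dst_y, dx, dy, mv->source);
}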
@@ -1,365 +0,0 @@
|
||||
/*
|
||||
* copyright (c) 2013 Andrew Kelley
|
||||
*
|
||||
* This file is part of FFmpeg.
|
||||
*
|
||||
* FFmpeg is free software; you can redistribute it and/or
|
||||
* modify it under the terms of the GNU Lesser General Public
|
||||
* License as published by the Free Software Foundation; either
|
||||
* version 2.1 of the License, or (at your option) any later version.
|
||||
*
|
||||
* FFmpeg is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
* Lesser General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU Lesser General Public
|
||||
* License along with FFmpeg; if not, write to the Free Software
|
||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
|
||||
*/
|
||||
|
||||
/**
|
||||
* @file
|
||||
* libavfilter API usage example.
|
||||
*
|
||||
* @example filter_audio.c
|
||||
* This example will generate a sine wave audio,
|
||||
* pass it through a simple filter chain, and then compute the MD5 checksum of
|
||||
* the output data.
|
||||
*
|
||||
* The filter chain it uses is:
|
||||
* (input) -> abuffer -> volume -> aformat -> abuffersink -> (output)
|
||||
*
|
||||
* abuffer: This provides the endpoint where you can feed the decoded samples.
|
||||
* volume: In this example we hardcode it to 0.90.
|
||||
* aformat: This converts the samples to the samplefreq, channel layout,
|
||||
* and sample format required by the audio device.
|
||||
* abuffersink: This provides the endpoint where you can read the samples after
|
||||
* they have passed through the filter chain.
|
||||
*/
|
||||
|
||||
#include <inttypes.h>
|
||||
#include <math.h>
|
||||
#include <stdio.h>
|
||||
#include <stdlib.h>
|
||||
|
||||
#include "libavutil/channel_layout.h"
|
||||
#include "libavutil/md5.h"
|
||||
#include "libavutil/mem.h"
|
||||
#include "libavutil/opt.h"
|
||||
#include "libavutil/samplefmt.h"
|
||||
|
||||
#include "libavfilter/avfilter.h"
|
||||
#include "libavfilter/buffersink.h"
|
||||
#include "libavfilter/buffersrc.h"
|
||||
|
||||
#define INPUT_SAMPLERATE 48000
|
||||
#define INPUT_FORMAT AV_SAMPLE_FMT_FLTP
|
||||
#define INPUT_CHANNEL_LAYOUT AV_CH_LAYOUT_5POINT0
|
||||
|
||||
#define VOLUME_VAL 0.90
|
||||
|
||||
static int init_filter_graph(AVFilterGraph **graph, AVFilterContext **src,
|
||||
AVFilterContext **sink)
|
||||
{
|
||||
AVFilterGraph *filter_graph;
|
||||
AVFilterContext *abuffer_ctx;
|
||||
AVFilter *abuffer;
|
||||
AVFilterContext *volume_ctx;
|
||||
AVFilter *volume;
|
||||
AVFilterContext *aformat_ctx;
|
||||
AVFilter *aformat;
|
||||
AVFilterContext *abuffersink_ctx;
|
||||
AVFilter *abuffersink;
|
||||
|
||||
AVDictionary *options_dict = NULL;
|
||||
uint8_t options_str[1024];
|
||||
uint8_t ch_layout[64];
|
||||
|
||||
int err;
|
||||
|
||||
/* Create a new filtergraph, which will contain all the filters. */
|
||||
filter_graph = avfilter_graph_alloc();
|
||||
if (!filter_graph) {
|
||||
fprintf(stderr, "Unable to create filter graph.\n");
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
|
||||
/* Create the abuffer filter;
|
||||
* it will be used for feeding the data into the graph. */
|
||||
abuffer = avfilter_get_by_name("abuffer");
|
||||
if (!abuffer) {
|
||||
fprintf(stderr, "Could not find the abuffer filter.\n");
|
||||
return AVERROR_FILTER_NOT_FOUND;
|
||||
}
|
||||
|
||||
abuffer_ctx = avfilter_graph_alloc_filter(filter_graph, abuffer, "src");
|
||||
if (!abuffer_ctx) {
|
||||
fprintf(stderr, "Could not allocate the abuffer instance.\n");
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
|
||||
/* Set the filter options through the AVOptions API. */
|
||||
av_get_channel_layout_string(ch_layout, sizeof(ch_layout), 0, INPUT_CHANNEL_LAYOUT);
|
||||
av_opt_set (abuffer_ctx, "channel_layout", ch_layout, AV_OPT_SEARCH_CHILDREN);
|
||||
av_opt_set (abuffer_ctx, "sample_fmt", av_get_sample_fmt_name(INPUT_FORMAT), AV_OPT_SEARCH_CHILDREN);
|
||||
av_opt_set_q (abuffer_ctx, "time_base", (AVRational){ 1, INPUT_SAMPLERATE }, AV_OPT_SEARCH_CHILDREN);
|
||||
av_opt_set_int(abuffer_ctx, "sample_rate", INPUT_SAMPLERATE, AV_OPT_SEARCH_CHILDREN);
|
||||
|
||||
/* Now initialize the filter; we pass NULL options, since we have already
|
||||
* set all the options above. */
|
||||
err = avfilter_init_str(abuffer_ctx, NULL);
|
||||
if (err < 0) {
|
||||
fprintf(stderr, "Could not initialize the abuffer filter.\n");
|
||||
return err;
|
||||
}
|
||||
|
||||
/* Create volume filter. */
|
||||
volume = avfilter_get_by_name("volume");
|
||||
if (!volume) {
|
||||
fprintf(stderr, "Could not find the volume filter.\n");
|
||||
return AVERROR_FILTER_NOT_FOUND;
|
||||
}
|
||||
|
||||
volume_ctx = avfilter_graph_alloc_filter(filter_graph, volume, "volume");
|
||||
if (!volume_ctx) {
|
||||
fprintf(stderr, "Could not allocate the volume instance.\n");
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
|
||||
/* A different way of passing the options is as key/value pairs in a
|
||||
* dictionary. */
|
||||
av_dict_set(&options_dict, "volume", AV_STRINGIFY(VOLUME_VAL), 0);
|
||||
err = avfilter_init_dict(volume_ctx, &options_dict);
|
||||
av_dict_free(&options_dict);
|
||||
if (err < 0) {
|
||||
fprintf(stderr, "Could not initialize the volume filter.\n");
|
||||
return err;
|
||||
}
|
||||
|
||||
/* Create the aformat filter;
|
||||
* it ensures that the output is of the format we want. */
|
||||
aformat = avfilter_get_by_name("aformat");
|
||||
if (!aformat) {
|
||||
fprintf(stderr, "Could not find the aformat filter.\n");
|
||||
return AVERROR_FILTER_NOT_FOUND;
|
||||
}
|
||||
|
||||
aformat_ctx = avfilter_graph_alloc_filter(filter_graph, aformat, "aformat");
|
||||
if (!aformat_ctx) {
|
||||
fprintf(stderr, "Could not allocate the aformat instance.\n");
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
|
||||
/* A third way of passing the options is in a string of the form
|
||||
* key1=value1:key2=value2.... */
|
||||
snprintf(options_str, sizeof(options_str),
|
||||
"sample_fmts=%s:sample_rates=%d:channel_layouts=0x%"PRIx64,
|
||||
av_get_sample_fmt_name(AV_SAMPLE_FMT_S16), 44100,
|
||||
(uint64_t)AV_CH_LAYOUT_STEREO);
|
||||
err = avfilter_init_str(aformat_ctx, options_str);
|
||||
if (err < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Could not initialize the aformat filter.\n");
|
||||
return err;
|
||||
}
|
||||
|
||||
/* Finally create the abuffersink filter;
|
||||
* it will be used to get the filtered data out of the graph. */
|
||||
abuffersink = avfilter_get_by_name("abuffersink");
|
||||
if (!abuffersink) {
|
||||
fprintf(stderr, "Could not find the abuffersink filter.\n");
|
||||
return AVERROR_FILTER_NOT_FOUND;
|
||||
}
|
||||
|
||||
abuffersink_ctx = avfilter_graph_alloc_filter(filter_graph, abuffersink, "sink");
|
||||
if (!abuffersink_ctx) {
|
||||
fprintf(stderr, "Could not allocate the abuffersink instance.\n");
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
|
||||
/* This filter takes no options. */
|
||||
err = avfilter_init_str(abuffersink_ctx, NULL);
|
||||
if (err < 0) {
|
||||
fprintf(stderr, "Could not initialize the abuffersink instance.\n");
|
||||
return err;
|
||||
}
|
||||
|
||||
/* Connect the filters;
|
||||
* in this simple case the filters just form a linear chain. */
|
||||
err = avfilter_link(abuffer_ctx, 0, volume_ctx, 0);
|
||||
if (err >= 0)
|
||||
err = avfilter_link(volume_ctx, 0, aformat_ctx, 0);
|
||||
if (err >= 0)
|
||||
err = avfilter_link(aformat_ctx, 0, abuffersink_ctx, 0);
|
||||
if (err < 0) {
|
||||
fprintf(stderr, "Error connecting filters\n");
|
||||
return err;
|
||||
}
|
||||
|
||||
/* Configure the graph. */
|
||||
err = avfilter_graph_config(filter_graph, NULL);
|
||||
if (err < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Error configuring the filter graph\n");
|
||||
return err;
|
||||
}
|
||||
|
||||
*graph = filter_graph;
|
||||
*src = abuffer_ctx;
|
||||
*sink = abuffersink_ctx;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* Do something useful with the filtered data: this simple
|
||||
* example just prints the MD5 checksum of each plane to stdout. */
|
||||
static int process_output(struct AVMD5 *md5, AVFrame *frame)
|
||||
{
|
||||
int planar = av_sample_fmt_is_planar(frame->format);
|
||||
int channels = av_get_channel_layout_nb_channels(frame->channel_layout);
|
||||
int planes = planar ? channels : 1;
|
||||
int bps = av_get_bytes_per_sample(frame->format);
|
||||
int plane_size = bps * frame->nb_samples * (planar ? 1 : channels);
|
||||
int i, j;
|
||||
|
||||
for (i = 0; i < planes; i++) {
|
||||
uint8_t checksum[16];
|
||||
|
||||
av_md5_init(md5);
|
||||
av_md5_sum(checksum, frame->extended_data[i], plane_size);
|
||||
|
||||
fprintf(stdout, "plane %d: 0x", i);
|
||||
for (j = 0; j < sizeof(checksum); j++)
|
||||
fprintf(stdout, "%02X", checksum[j]);
|
||||
fprintf(stdout, "\n");
|
||||
}
|
||||
fprintf(stdout, "\n");
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* Construct a frame of audio data to be filtered;
|
||||
* this simple example just synthesizes a sine wave. */
|
||||
static int get_input(AVFrame *frame, int frame_num)
|
||||
{
|
||||
int err, i, j;
|
||||
|
||||
#define FRAME_SIZE 1024
|
||||
|
||||
/* Set up the frame properties and allocate the buffer for the data. */
|
||||
frame->sample_rate = INPUT_SAMPLERATE;
|
||||
frame->format = INPUT_FORMAT;
|
||||
frame->channel_layout = INPUT_CHANNEL_LAYOUT;
|
||||
frame->nb_samples = FRAME_SIZE;
|
||||
frame->pts = frame_num * FRAME_SIZE;
|
||||
|
||||
err = av_frame_get_buffer(frame, 0);
|
||||
if (err < 0)
|
||||
return err;
|
||||
|
||||
/* Fill the data for each channel. */
|
||||
for (i = 0; i < 5; i++) {
|
||||
float *data = (float*)frame->extended_data[i];
|
||||
|
||||
for (j = 0; j < frame->nb_samples; j++)
|
||||
data[j] = sin(2 * M_PI * (frame_num + j) * (i + 1) / FRAME_SIZE);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int main(int argc, char *argv[])
|
||||
{
|
||||
struct AVMD5 *md5;
|
||||
AVFilterGraph *graph;
|
||||
AVFilterContext *src, *sink;
|
||||
AVFrame *frame;
|
||||
uint8_t errstr[1024];
|
||||
float duration;
|
||||
int err, nb_frames, i;
|
||||
|
||||
if (argc < 2) {
|
||||
fprintf(stderr, "Usage: %s <duration>\n", argv[0]);
|
||||
return 1;
|
||||
}
|
||||
|
||||
duration = atof(argv[1]);
|
||||
nb_frames = duration * INPUT_SAMPLERATE / FRAME_SIZE;
|
||||
if (nb_frames <= 0) {
|
||||
fprintf(stderr, "Invalid duration: %s\n", argv[1]);
|
||||
return 1;
|
||||
}
|
||||
|
||||
avfilter_register_all();
|
||||
|
||||
/* Allocate the frame we will be using to store the data. */
|
||||
frame = av_frame_alloc();
|
||||
if (!frame) {
|
||||
fprintf(stderr, "Error allocating the frame\n");
|
||||
return 1;
|
||||
}
|
||||
|
||||
md5 = av_md5_alloc();
|
||||
if (!md5) {
|
||||
fprintf(stderr, "Error allocating the MD5 context\n");
|
||||
return 1;
|
||||
}
|
||||
|
||||
/* Set up the filtergraph. */
|
||||
err = init_filter_graph(&graph, &src, &sink);
|
||||
if (err < 0) {
|
||||
fprintf(stderr, "Unable to init filter graph:");
|
||||
goto fail;
|
||||
}
|
||||
|
||||
/* the main filtering loop */
|
||||
for (i = 0; i < nb_frames; i++) {
|
||||
/* get an input frame to be filtered */
|
||||
err = get_input(frame, i);
|
||||
if (err < 0) {
|
||||
fprintf(stderr, "Error generating input frame:");
|
||||
goto fail;
|
||||
}
|
||||
|
||||
/* Send the frame to the input of the filtergraph. */
|
||||
err = av_buffersrc_add_frame(src, frame);
|
||||
if (err < 0) {
|
||||
av_frame_unref(frame);
|
||||
fprintf(stderr, "Error submitting the frame to the filtergraph:");
|
||||
goto fail;
|
||||
}
|
||||
|
||||
/* Get all the filtered output that is available. */
|
||||
while ((err = av_buffersink_get_frame(sink, frame)) >= 0) {
|
||||
/* now do something with our filtered frame */
|
||||
err = process_output(md5, frame);
|
||||
if (err < 0) {
|
||||
fprintf(stderr, "Error processing the filtered frame:");
|
||||
goto fail;
|
||||
}
|
||||
av_frame_unref(frame);
|
||||
}
|
||||
|
||||
if (err == AVERROR(EAGAIN)) {
|
||||
/* Need to feed more frames in. */
|
||||
continue;
|
||||
} else if (err == AVERROR_EOF) {
|
||||
/* Nothing more to do, finish. */
|
||||
break;
|
||||
} else if (err < 0) {
|
||||
/* An error occurred. */
|
||||
fprintf(stderr, "Error filtering the data:");
|
||||
goto fail;
|
||||
}
|
||||
}
|
||||
|
||||
avfilter_graph_free(&graph);
|
||||
av_frame_free(&frame);
|
||||
av_freep(&md5);
|
||||
|
||||
return 0;
|
||||
|
||||
fail:
|
||||
av_strerror(err, errstr, sizeof(errstr));
|
||||
fprintf(stderr, "%s\n", errstr);
|
||||
return 1;
|
||||
}
|
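The deleted filter_audio.c above wires abuffer -> volume -> aformat -> abuffersink by hand and demonstrates three ways of passing filter options. For comparison, here is a sketch of how the middle of that chain could instead be described as a single string and parsed with avfilter_graph_parse_ptr(); graph, src and sink are assumed to be the already-created filtergraph, abuffer and abuffersink instances from the example, and the option string values mirror its hardcoded settings:

#include <libavfilter/avfilter.h>
#include <libavutil/error.h>
#include <libavutil/mem.h>

/* Sketch only: parse "volume,aformat" from a description string into an
 * existing graph, connecting it between the abuffer source and the
 * abuffersink, instead of allocating and linking each filter by hand. */
static int parse_middle_of_chain(AVFilterGraph *graph,
                                 AVFilterContext *src, AVFilterContext *sink)
{
    AVFilterInOut *outputs = avfilter_inout_alloc();
    AVFilterInOut *inputs  = avfilter_inout_alloc();
    const char *descr =
        "volume=0.90,aformat=sample_fmts=s16:sample_rates=44100:channel_layouts=stereo";
    int ret;

    if (!outputs || !inputs) {
        ret = AVERROR(ENOMEM);
        goto end;
    }

    /* "in" is fed by the abuffer source, "out" drains into the abuffersink. */
    outputs->name       = av_strdup("in");
    outputs->filter_ctx = src;
    outputs->pad_idx    = 0;
    outputs->next       = NULL;

    inputs->name        = av_strdup("out");
    inputs->filter_ctx  = sink;
    inputs->pad_idx     = 0;
    inputs->next        = NULL;

    ret = avfilter_graph_parse_ptr(graph, descr, &inputs, &outputs, NULL);
end:
    avfilter_inout_free(&inputs);
    avfilter_inout_free(&outputs);
    return ret;
}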
@@ -25,7 +25,7 @@
|
||||
/**
|
||||
* @file
|
||||
* API example for audio decoding and filtering
|
||||
* @example filtering_audio.c
|
||||
* @example doc/examples/filtering_audio.c
|
||||
*/
|
||||
|
||||
#include <unistd.h>
|
||||
@@ -85,7 +85,7 @@ static int open_input_file(const char *filename)
|
||||
static int init_filters(const char *filters_descr)
|
||||
{
|
||||
char args[512];
|
||||
int ret = 0;
|
||||
int ret;
|
||||
AVFilter *abuffersrc = avfilter_get_by_name("abuffer");
|
||||
AVFilter *abuffersink = avfilter_get_by_name("abuffersink");
|
||||
AVFilterInOut *outputs = avfilter_inout_alloc();
|
||||
@@ -97,10 +97,6 @@ static int init_filters(const char *filters_descr)
|
||||
AVRational time_base = fmt_ctx->streams[audio_stream_index]->time_base;
|
||||
|
||||
filter_graph = avfilter_graph_alloc();
|
||||
if (!outputs || !inputs || !filter_graph) {
|
||||
ret = AVERROR(ENOMEM);
|
||||
goto end;
|
||||
}
|
||||
|
||||
/* buffer audio source: the decoded frames from the decoder will be inserted here. */
|
||||
if (!dec_ctx->channel_layout)
|
||||
@@ -113,7 +109,7 @@ static int init_filters(const char *filters_descr)
|
||||
args, NULL, filter_graph);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer source\n");
|
||||
goto end;
|
||||
return ret;
|
||||
}
|
||||
|
||||
/* buffer audio sink: to terminate the filter chain. */
|
||||
@@ -121,63 +117,47 @@ static int init_filters(const char *filters_descr)
|
||||
NULL, NULL, filter_graph);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer sink\n");
|
||||
goto end;
|
||||
return ret;
|
||||
}
|
||||
|
||||
ret = av_opt_set_int_list(buffersink_ctx, "sample_fmts", out_sample_fmts, -1,
|
||||
AV_OPT_SEARCH_CHILDREN);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot set output sample format\n");
|
||||
goto end;
|
||||
return ret;
|
||||
}
|
||||
|
||||
ret = av_opt_set_int_list(buffersink_ctx, "channel_layouts", out_channel_layouts, -1,
|
||||
AV_OPT_SEARCH_CHILDREN);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot set output channel layout\n");
|
||||
goto end;
|
||||
return ret;
|
||||
}
|
||||
|
||||
ret = av_opt_set_int_list(buffersink_ctx, "sample_rates", out_sample_rates, -1,
|
||||
AV_OPT_SEARCH_CHILDREN);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot set output sample rate\n");
|
||||
goto end;
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* Set the endpoints for the filter graph. The filter_graph will
|
||||
* be linked to the graph described by filters_descr.
|
||||
*/
|
||||
|
||||
/*
|
||||
* The buffer source output must be connected to the input pad of
|
||||
* the first filter described by filters_descr; since the first
|
||||
* filter input label is not specified, it is set to "in" by
|
||||
* default.
|
||||
*/
|
||||
/* Endpoints for the filter graph. */
|
||||
outputs->name = av_strdup("in");
|
||||
outputs->filter_ctx = buffersrc_ctx;
|
||||
outputs->pad_idx = 0;
|
||||
outputs->next = NULL;
|
||||
|
||||
/*
|
||||
* The buffer sink input must be connected to the output pad of
|
||||
* the last filter described by filters_descr; since the last
|
||||
* filter output label is not specified, it is set to "out" by
|
||||
* default.
|
||||
*/
|
||||
inputs->name = av_strdup("out");
|
||||
inputs->filter_ctx = buffersink_ctx;
|
||||
inputs->pad_idx = 0;
|
||||
inputs->next = NULL;
|
||||
|
||||
if ((ret = avfilter_graph_parse_ptr(filter_graph, filters_descr,
|
||||
&inputs, &outputs, NULL)) < 0)
|
||||
goto end;
|
||||
&inputs, &outputs, NULL)) < 0)
|
||||
return ret;
|
||||
|
||||
if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
|
||||
goto end;
|
||||
return ret;
|
||||
|
||||
/* Print summary of the sink buffer
|
||||
* Note: args buffer is reused to store channel layout string */
|
||||
@@ -188,11 +168,7 @@ static int init_filters(const char *filters_descr)
|
||||
(char *)av_x_if_null(av_get_sample_fmt_name(outlink->format), "?"),
|
||||
args);
|
||||
|
||||
end:
|
||||
avfilter_inout_free(&inputs);
|
||||
avfilter_inout_free(&outputs);
|
||||
|
||||
return ret;
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void print_frame(const AVFrame *frame)
|
||||
@@ -212,7 +188,7 @@ static void print_frame(const AVFrame *frame)
|
||||
int main(int argc, char **argv)
|
||||
{
|
||||
int ret;
|
||||
AVPacket packet0, packet;
|
||||
AVPacket packet;
|
||||
AVFrame *frame = av_frame_alloc();
|
||||
AVFrame *filt_frame = av_frame_alloc();
|
||||
int got_frame;
|
||||
@@ -226,6 +202,7 @@ int main(int argc, char **argv)
|
||||
exit(1);
|
||||
}
|
||||
|
||||
avcodec_register_all();
|
||||
av_register_all();
|
||||
avfilter_register_all();
|
||||
|
||||
@@ -235,24 +212,18 @@ int main(int argc, char **argv)
|
||||
goto end;
|
||||
|
||||
/* read all packets */
|
||||
packet0.data = NULL;
|
||||
packet.data = NULL;
|
||||
while (1) {
|
||||
if (!packet0.data) {
|
||||
if ((ret = av_read_frame(fmt_ctx, &packet)) < 0)
|
||||
break;
|
||||
packet0 = packet;
|
||||
}
|
||||
if ((ret = av_read_frame(fmt_ctx, &packet)) < 0)
|
||||
break;
|
||||
|
||||
if (packet.stream_index == audio_stream_index) {
|
||||
avcodec_get_frame_defaults(frame);
|
||||
got_frame = 0;
|
||||
ret = avcodec_decode_audio4(dec_ctx, frame, &got_frame, &packet);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Error decoding audio\n");
|
||||
continue;
|
||||
}
|
||||
packet.size -= ret;
|
||||
packet.data += ret;
|
||||
|
||||
if (got_frame) {
|
||||
/* push the audio data from decoded frame into the filtergraph */
|
||||
@@ -264,31 +235,29 @@ int main(int argc, char **argv)
|
||||
/* pull filtered audio from the filtergraph */
|
||||
while (1) {
|
||||
ret = av_buffersink_get_frame(buffersink_ctx, filt_frame);
|
||||
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
|
||||
if(ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
|
||||
break;
|
||||
if (ret < 0)
|
||||
if(ret < 0)
|
||||
goto end;
|
||||
print_frame(filt_frame);
|
||||
av_frame_unref(filt_frame);
|
||||
}
|
||||
}
|
||||
|
||||
if (packet.size <= 0)
|
||||
av_free_packet(&packet0);
|
||||
} else {
|
||||
/* discard non-wanted packets */
|
||||
av_free_packet(&packet0);
|
||||
}
|
||||
av_free_packet(&packet);
|
||||
}
|
||||
end:
|
||||
avfilter_graph_free(&filter_graph);
|
||||
avcodec_close(dec_ctx);
|
||||
if (dec_ctx)
|
||||
avcodec_close(dec_ctx);
|
||||
avformat_close_input(&fmt_ctx);
|
||||
av_frame_free(&frame);
|
||||
av_frame_free(&filt_frame);
|
||||
|
||||
if (ret < 0 && ret != AVERROR_EOF) {
|
||||
fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
|
||||
char buf[1024];
|
||||
av_strerror(ret, buf, sizeof(buf));
|
||||
fprintf(stderr, "Error occurred: %s\n", buf);
|
||||
exit(1);
|
||||
}
|
||||
|
||||
|
@@ -24,7 +24,7 @@
|
||||
/**
|
||||
* @file
|
||||
* API example for decoding and filtering
|
||||
* @example filtering_video.c
|
||||
* @example doc/examples/filtering_video.c
|
||||
*/
|
||||
|
||||
#define _XOPEN_SOURCE 600 /* for usleep */
|
||||
@@ -36,7 +36,6 @@
|
||||
#include <libavfilter/avcodec.h>
|
||||
#include <libavfilter/buffersink.h>
|
||||
#include <libavfilter/buffersrc.h>
|
||||
#include <libavutil/opt.h>
|
||||
|
||||
const char *filter_descr = "scale=78:24";
|
||||
|
||||
@@ -71,7 +70,6 @@ static int open_input_file(const char *filename)
|
||||
}
|
||||
video_stream_index = ret;
|
||||
dec_ctx = fmt_ctx->streams[video_stream_index]->codec;
|
||||
av_opt_set_int(dec_ctx, "refcounted_frames", 1, 0);
|
||||
|
||||
/* init the video decoder */
|
||||
if ((ret = avcodec_open2(dec_ctx, dec, NULL)) < 0) {
|
||||
@@ -85,71 +83,47 @@ static int open_input_file(const char *filename)
|
||||
static int init_filters(const char *filters_descr)
|
||||
{
|
||||
char args[512];
|
||||
int ret = 0;
|
||||
int ret;
|
||||
AVFilter *buffersrc = avfilter_get_by_name("buffer");
|
||||
AVFilter *buffersink = avfilter_get_by_name("buffersink");
|
||||
AVFilterInOut *outputs = avfilter_inout_alloc();
|
||||
AVFilterInOut *inputs = avfilter_inout_alloc();
|
||||
AVRational time_base = fmt_ctx->streams[video_stream_index]->time_base;
|
||||
enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_GRAY8, AV_PIX_FMT_NONE };
|
||||
AVBufferSinkParams *buffersink_params;
|
||||
|
||||
filter_graph = avfilter_graph_alloc();
|
||||
if (!outputs || !inputs || !filter_graph) {
|
||||
ret = AVERROR(ENOMEM);
|
||||
goto end;
|
||||
}
|
||||
|
||||
/* buffer video source: the decoded frames from the decoder will be inserted here. */
|
||||
snprintf(args, sizeof(args),
|
||||
"video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
|
||||
dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
|
||||
time_base.num, time_base.den,
|
||||
dec_ctx->time_base.num, dec_ctx->time_base.den,
|
||||
dec_ctx->sample_aspect_ratio.num, dec_ctx->sample_aspect_ratio.den);
|
||||
|
||||
ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
|
||||
args, NULL, filter_graph);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot create buffer source\n");
|
||||
goto end;
|
||||
return ret;
|
||||
}
|
||||
|
||||
/* buffer video sink: to terminate the filter chain. */
|
||||
buffersink_params = av_buffersink_params_alloc();
|
||||
buffersink_params->pixel_fmts = pix_fmts;
|
||||
ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
|
||||
NULL, NULL, filter_graph);
|
||||
NULL, buffersink_params, filter_graph);
|
||||
av_free(buffersink_params);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot create buffer sink\n");
|
||||
goto end;
|
||||
return ret;
|
||||
}
|
||||
|
||||
ret = av_opt_set_int_list(buffersink_ctx, "pix_fmts", pix_fmts,
|
||||
AV_PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot set output pixel format\n");
|
||||
goto end;
|
||||
}
|
||||
|
||||
/*
|
||||
* Set the endpoints for the filter graph. The filter_graph will
|
||||
* be linked to the graph described by filters_descr.
|
||||
*/
|
||||
|
||||
/*
|
||||
* The buffer source output must be connected to the input pad of
|
||||
* the first filter described by filters_descr; since the first
|
||||
* filter input label is not specified, it is set to "in" by
|
||||
* default.
|
||||
*/
|
||||
/* Endpoints for the filter graph. */
|
||||
outputs->name = av_strdup("in");
|
||||
outputs->filter_ctx = buffersrc_ctx;
|
||||
outputs->pad_idx = 0;
|
||||
outputs->next = NULL;
|
||||
|
||||
/*
|
||||
* The buffer sink input must be connected to the output pad of
|
||||
* the last filter described by filters_descr; since the last
|
||||
* filter output label is not specified, it is set to "out" by
|
||||
* default.
|
||||
*/
|
||||
inputs->name = av_strdup("out");
|
||||
inputs->filter_ctx = buffersink_ctx;
|
||||
inputs->pad_idx = 0;
|
||||
@@ -157,16 +131,11 @@ static int init_filters(const char *filters_descr)
|
||||
|
||||
if ((ret = avfilter_graph_parse_ptr(filter_graph, filters_descr,
|
||||
&inputs, &outputs, NULL)) < 0)
|
||||
goto end;
|
||||
return ret;
|
||||
|
||||
if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
|
||||
goto end;
|
||||
|
||||
end:
|
||||
avfilter_inout_free(&inputs);
|
||||
avfilter_inout_free(&outputs);
|
||||
|
||||
return ret;
|
||||
return ret;
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void display_frame(const AVFrame *frame, AVRational time_base)
|
||||
@@ -217,6 +186,7 @@ int main(int argc, char **argv)
|
||||
exit(1);
|
||||
}
|
||||
|
||||
avcodec_register_all();
|
||||
av_register_all();
|
||||
avfilter_register_all();
|
||||
|
||||
@@ -231,6 +201,7 @@ int main(int argc, char **argv)
|
||||
break;
|
||||
|
||||
if (packet.stream_index == video_stream_index) {
|
||||
avcodec_get_frame_defaults(frame);
|
||||
got_frame = 0;
|
||||
ret = avcodec_decode_video2(dec_ctx, frame, &got_frame, &packet);
|
||||
if (ret < 0) {
|
||||
@@ -257,20 +228,22 @@ int main(int argc, char **argv)
|
||||
display_frame(filt_frame, buffersink_ctx->inputs[0]->time_base);
|
||||
av_frame_unref(filt_frame);
|
||||
}
|
||||
av_frame_unref(frame);
|
||||
}
|
||||
}
|
||||
av_free_packet(&packet);
|
||||
}
|
||||
end:
|
||||
avfilter_graph_free(&filter_graph);
|
||||
avcodec_close(dec_ctx);
|
||||
if (dec_ctx)
|
||||
avcodec_close(dec_ctx);
|
||||
avformat_close_input(&fmt_ctx);
|
||||
av_frame_free(&frame);
|
||||
av_frame_free(&filt_frame);
|
||||
|
||||
if (ret < 0 && ret != AVERROR_EOF) {
|
||||
fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
|
||||
char buf[1024];
|
||||
av_strerror(ret, buf, sizeof(buf));
|
||||
fprintf(stderr, "Error occurred: %s\n", buf);
|
||||
exit(1);
|
||||
}
|
||||
|
||||
|
@@ -23,7 +23,7 @@
|
||||
/**
|
||||
* @file
|
||||
* Shows how the metadata API can be used in application programs.
|
||||
* @example metadata.c
|
||||
* @example doc/examples/metadata.c
|
||||
*/
|
||||
|
||||
#include <stdio.h>
|
||||
|
@@ -24,9 +24,9 @@
|
||||
* @file
|
||||
* libavformat API example.
|
||||
*
|
||||
* Output a media file in any supported libavformat format. The default
|
||||
* codecs are used.
|
||||
* @example muxing.c
|
||||
* Output a media file in any supported libavformat format.
|
||||
* The default codecs are used.
|
||||
* @example doc/examples/muxing.c
|
||||
*/
|
||||
|
||||
#include <stdlib.h>
|
||||
@@ -34,67 +34,26 @@
|
||||
#include <string.h>
|
||||
#include <math.h>
|
||||
|
||||
#include <libavutil/avassert.h>
|
||||
#include <libavutil/channel_layout.h>
|
||||
#include <libavutil/opt.h>
|
||||
#include <libavutil/mathematics.h>
|
||||
#include <libavutil/timestamp.h>
|
||||
#include <libavformat/avformat.h>
|
||||
#include <libswscale/swscale.h>
|
||||
#include <libswresample/swresample.h>
|
||||
|
||||
#define STREAM_DURATION 10.0
|
||||
/* 5 seconds stream duration */
|
||||
#define STREAM_DURATION 200.0
|
||||
#define STREAM_FRAME_RATE 25 /* 25 images/s */
|
||||
#define STREAM_NB_FRAMES ((int)(STREAM_DURATION * STREAM_FRAME_RATE))
|
||||
#define STREAM_PIX_FMT AV_PIX_FMT_YUV420P /* default pix_fmt */
|
||||
|
||||
#define SCALE_FLAGS SWS_BICUBIC
|
||||
|
||||
// a wrapper around a single output AVStream
|
||||
typedef struct OutputStream {
|
||||
AVStream *st;
|
||||
|
||||
/* pts of the next frame that will be generated */
|
||||
int64_t next_pts;
|
||||
int samples_count;
|
||||
|
||||
AVFrame *frame;
|
||||
AVFrame *tmp_frame;
|
||||
|
||||
float t, tincr, tincr2;
|
||||
|
||||
struct SwsContext *sws_ctx;
|
||||
struct SwrContext *swr_ctx;
|
||||
} OutputStream;
|
||||
|
||||
static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt)
|
||||
{
|
||||
AVRational *time_base = &fmt_ctx->streams[pkt->stream_index]->time_base;
|
||||
|
||||
printf("pts:%s pts_time:%s dts:%s dts_time:%s duration:%s duration_time:%s stream_index:%d\n",
|
||||
av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, time_base),
|
||||
av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, time_base),
|
||||
av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, time_base),
|
||||
pkt->stream_index);
|
||||
}
|
||||
|
||||
static int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base, AVStream *st, AVPacket *pkt)
|
||||
{
|
||||
/* rescale output packet timestamp values from codec to stream timebase */
|
||||
av_packet_rescale_ts(pkt, *time_base, st->time_base);
|
||||
pkt->stream_index = st->index;
|
||||
|
||||
/* Write the compressed frame to the media file. */
|
||||
log_packet(fmt_ctx, pkt);
|
||||
return av_interleaved_write_frame(fmt_ctx, pkt);
|
||||
}
|
||||
static int sws_flags = SWS_BICUBIC;
|
||||
|
||||
/* Add an output stream. */
|
||||
static void add_stream(OutputStream *ost, AVFormatContext *oc,
|
||||
AVCodec **codec,
|
||||
enum AVCodecID codec_id)
|
||||
static AVStream *add_stream(AVFormatContext *oc, AVCodec **codec,
|
||||
enum AVCodecID codec_id)
|
||||
{
|
||||
AVCodecContext *c;
|
||||
int i;
|
||||
AVStream *st;
|
||||
|
||||
/* find the encoder */
|
||||
*codec = avcodec_find_encoder(codec_id);
|
||||
@@ -104,38 +63,20 @@ static void add_stream(OutputStream *ost, AVFormatContext *oc,
|
||||
exit(1);
|
||||
}
|
||||
|
||||
ost->st = avformat_new_stream(oc, *codec);
|
||||
if (!ost->st) {
|
||||
st = avformat_new_stream(oc, *codec);
|
||||
if (!st) {
|
||||
fprintf(stderr, "Could not allocate stream\n");
|
||||
exit(1);
|
||||
}
|
||||
ost->st->id = oc->nb_streams-1;
|
||||
c = ost->st->codec;
|
||||
st->id = oc->nb_streams-1;
|
||||
c = st->codec;
|
||||
|
||||
switch ((*codec)->type) {
|
||||
case AVMEDIA_TYPE_AUDIO:
|
||||
c->sample_fmt = (*codec)->sample_fmts ?
|
||||
(*codec)->sample_fmts[0] : AV_SAMPLE_FMT_FLTP;
|
||||
c->sample_fmt = AV_SAMPLE_FMT_FLTP;
|
||||
c->bit_rate = 64000;
|
||||
c->sample_rate = 44100;
|
||||
if ((*codec)->supported_samplerates) {
|
||||
c->sample_rate = (*codec)->supported_samplerates[0];
|
||||
for (i = 0; (*codec)->supported_samplerates[i]; i++) {
|
||||
if ((*codec)->supported_samplerates[i] == 44100)
|
||||
c->sample_rate = 44100;
|
||||
}
|
||||
}
|
||||
c->channels = av_get_channel_layout_nb_channels(c->channel_layout);
|
||||
c->channel_layout = AV_CH_LAYOUT_STEREO;
|
||||
if ((*codec)->channel_layouts) {
|
||||
c->channel_layout = (*codec)->channel_layouts[0];
|
||||
for (i = 0; (*codec)->channel_layouts[i]; i++) {
|
||||
if ((*codec)->channel_layouts[i] == AV_CH_LAYOUT_STEREO)
|
||||
c->channel_layout = AV_CH_LAYOUT_STEREO;
|
||||
}
|
||||
}
|
||||
c->channels = av_get_channel_layout_nb_channels(c->channel_layout);
|
||||
ost->st->time_base = (AVRational){ 1, c->sample_rate };
|
||||
c->channels = 2;
|
||||
break;
|
||||
|
||||
case AVMEDIA_TYPE_VIDEO:
|
||||
@@ -149,9 +90,8 @@ static void add_stream(OutputStream *ost, AVFormatContext *oc,
|
||||
* of which frame timestamps are represented. For fixed-fps content,
|
||||
* timebase should be 1/framerate and timestamp increments should be
|
||||
* identical to 1. */
|
||||
ost->st->time_base = (AVRational){ 1, STREAM_FRAME_RATE };
|
||||
c->time_base = ost->st->time_base;
|
||||
|
||||
c->time_base.den = STREAM_FRAME_RATE;
|
||||
c->time_base.num = 1;
|
||||
c->gop_size = 12; /* emit one intra frame every twelve frames at most */
|
||||
c->pix_fmt = STREAM_PIX_FMT;
|
||||
if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
|
||||
@@ -173,262 +113,237 @@ static void add_stream(OutputStream *ost, AVFormatContext *oc,
|
||||
/* Some formats want stream headers to be separate. */
|
||||
if (oc->oformat->flags & AVFMT_GLOBALHEADER)
|
||||
c->flags |= CODEC_FLAG_GLOBAL_HEADER;
|
||||
|
||||
return st;
|
||||
}
|
||||
|
||||
/**************************************************************/
|
||||
/* audio output */
|
||||
|
||||
static AVFrame *alloc_audio_frame(enum AVSampleFormat sample_fmt,
|
||||
uint64_t channel_layout,
|
||||
int sample_rate, int nb_samples)
|
||||
{
|
||||
AVFrame *frame = av_frame_alloc();
|
||||
int ret;
|
||||
static float t, tincr, tincr2;
|
||||
|
||||
if (!frame) {
|
||||
fprintf(stderr, "Error allocating an audio frame\n");
|
||||
exit(1);
|
||||
}
|
||||
static uint8_t **src_samples_data;
|
||||
static int src_samples_linesize;
|
||||
static int src_nb_samples;
|
||||
|
||||
frame->format = sample_fmt;
|
||||
frame->channel_layout = channel_layout;
|
||||
frame->sample_rate = sample_rate;
|
||||
frame->nb_samples = nb_samples;
|
||||
static int max_dst_nb_samples;
|
||||
uint8_t **dst_samples_data;
|
||||
int dst_samples_linesize;
|
||||
int dst_samples_size;
|
||||
|
||||
if (nb_samples) {
|
||||
ret = av_frame_get_buffer(frame, 0);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error allocating an audio buffer\n");
|
||||
exit(1);
|
||||
}
|
||||
}
|
||||
struct SwrContext *swr_ctx = NULL;
|
||||
|
||||
return frame;
|
||||
}
|
||||
|
||||
static void open_audio(AVFormatContext *oc, AVCodec *codec, OutputStream *ost, AVDictionary *opt_arg)
|
||||
static void open_audio(AVFormatContext *oc, AVCodec *codec, AVStream *st)
|
||||
{
|
||||
AVCodecContext *c;
|
||||
int nb_samples;
|
||||
int ret;
|
||||
AVDictionary *opt = NULL;
|
||||
|
||||
c = ost->st->codec;
|
||||
c = st->codec;
|
||||
|
||||
/* open it */
|
||||
av_dict_copy(&opt, opt_arg, 0);
|
||||
ret = avcodec_open2(c, codec, &opt);
|
||||
av_dict_free(&opt);
|
||||
ret = avcodec_open2(c, codec, NULL);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Could not open audio codec: %s\n", av_err2str(ret));
|
||||
exit(1);
|
||||
}
|
||||
|
||||
/* init signal generator */
|
||||
ost->t = 0;
|
||||
ost->tincr = 2 * M_PI * 110.0 / c->sample_rate;
|
||||
t = 0;
|
||||
tincr = 2 * M_PI * 110.0 / c->sample_rate;
|
||||
/* increment frequency by 110 Hz per second */
|
||||
ost->tincr2 = 2 * M_PI * 110.0 / c->sample_rate / c->sample_rate;
|
||||
tincr2 = 2 * M_PI * 110.0 / c->sample_rate / c->sample_rate;
|
||||
|
||||
if (c->codec->capabilities & CODEC_CAP_VARIABLE_FRAME_SIZE)
|
||||
nb_samples = 10000;
|
||||
else
|
||||
nb_samples = c->frame_size;
|
||||
src_nb_samples = c->codec->capabilities & CODEC_CAP_VARIABLE_FRAME_SIZE ?
|
||||
10000 : c->frame_size;
|
||||
|
||||
ost->frame = alloc_audio_frame(c->sample_fmt, c->channel_layout,
|
||||
c->sample_rate, nb_samples);
|
||||
ost->tmp_frame = alloc_audio_frame(AV_SAMPLE_FMT_S16, c->channel_layout,
|
||||
c->sample_rate, nb_samples);
|
||||
ret = av_samples_alloc_array_and_samples(&src_samples_data, &src_samples_linesize, c->channels,
|
||||
src_nb_samples, c->sample_fmt, 0);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Could not allocate source samples\n");
|
||||
exit(1);
|
||||
}
|
||||
|
||||
/* create resampler context */
|
||||
ost->swr_ctx = swr_alloc();
|
||||
if (!ost->swr_ctx) {
|
||||
if (c->sample_fmt != AV_SAMPLE_FMT_S16) {
|
||||
swr_ctx = swr_alloc();
|
||||
if (!swr_ctx) {
|
||||
fprintf(stderr, "Could not allocate resampler context\n");
|
||||
exit(1);
|
||||
}
|
||||
|
||||
/* set options */
|
||||
av_opt_set_int (ost->swr_ctx, "in_channel_count", c->channels, 0);
|
||||
av_opt_set_int (ost->swr_ctx, "in_sample_rate", c->sample_rate, 0);
|
||||
av_opt_set_sample_fmt(ost->swr_ctx, "in_sample_fmt", AV_SAMPLE_FMT_S16, 0);
|
||||
av_opt_set_int (ost->swr_ctx, "out_channel_count", c->channels, 0);
|
||||
av_opt_set_int (ost->swr_ctx, "out_sample_rate", c->sample_rate, 0);
|
||||
av_opt_set_sample_fmt(ost->swr_ctx, "out_sample_fmt", c->sample_fmt, 0);
|
||||
av_opt_set_int (swr_ctx, "in_channel_count", c->channels, 0);
|
||||
av_opt_set_int (swr_ctx, "in_sample_rate", c->sample_rate, 0);
|
||||
av_opt_set_sample_fmt(swr_ctx, "in_sample_fmt", AV_SAMPLE_FMT_S16, 0);
|
||||
av_opt_set_int (swr_ctx, "out_channel_count", c->channels, 0);
|
||||
av_opt_set_int (swr_ctx, "out_sample_rate", c->sample_rate, 0);
|
||||
av_opt_set_sample_fmt(swr_ctx, "out_sample_fmt", c->sample_fmt, 0);
|
||||
|
||||
/* initialize the resampling context */
|
||||
if ((ret = swr_init(ost->swr_ctx)) < 0) {
|
||||
if ((ret = swr_init(swr_ctx)) < 0) {
|
||||
fprintf(stderr, "Failed to initialize the resampling context\n");
|
||||
exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
/* compute the number of converted samples: buffering is avoided
|
||||
* ensuring that the output buffer will contain at least all the
|
||||
* converted input samples */
|
||||
max_dst_nb_samples = src_nb_samples;
|
||||
ret = av_samples_alloc_array_and_samples(&dst_samples_data, &dst_samples_linesize, c->channels,
|
||||
max_dst_nb_samples, c->sample_fmt, 0);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Could not allocate destination samples\n");
|
||||
exit(1);
|
||||
}
|
||||
dst_samples_size = av_samples_get_buffer_size(NULL, c->channels, max_dst_nb_samples,
|
||||
c->sample_fmt, 0);
|
||||
}
|
||||
|
||||
/* Prepare a 16 bit dummy audio frame of 'frame_size' samples and
|
||||
* 'nb_channels' channels. */
|
||||
static AVFrame *get_audio_frame(OutputStream *ost)
|
||||
static void get_audio_frame(int16_t *samples, int frame_size, int nb_channels)
|
||||
{
|
||||
AVFrame *frame = ost->tmp_frame;
|
||||
int j, i, v;
|
||||
int16_t *q = (int16_t*)frame->data[0];
|
||||
int16_t *q;
|
||||
|
||||
/* check if we want to generate more frames */
|
||||
if (av_compare_ts(ost->next_pts, ost->st->codec->time_base,
|
||||
STREAM_DURATION, (AVRational){ 1, 1 }) >= 0)
|
||||
return NULL;
|
||||
|
||||
for (j = 0; j <frame->nb_samples; j++) {
|
||||
v = (int)(sin(ost->t) * 10000);
|
||||
for (i = 0; i < ost->st->codec->channels; i++)
|
||||
q = samples;
|
||||
for (j = 0; j < frame_size; j++) {
|
||||
v = (int)(sin(t) * 10000);
|
||||
for (i = 0; i < nb_channels; i++)
|
||||
*q++ = v;
|
||||
ost->t += ost->tincr;
|
||||
ost->tincr += ost->tincr2;
|
||||
t += tincr;
|
||||
tincr += tincr2;
|
||||
}
|
||||
|
||||
frame->pts = ost->next_pts;
|
||||
ost->next_pts += frame->nb_samples;
|
||||
|
||||
return frame;
|
||||
}
|
||||
|
||||
/*
|
||||
* encode one audio frame and send it to the muxer
|
||||
* return 1 when encoding is finished, 0 otherwise
|
||||
*/
|
||||
static int write_audio_frame(AVFormatContext *oc, OutputStream *ost)
|
||||
static void write_audio_frame(AVFormatContext *oc, AVStream *st)
|
||||
{
|
||||
AVCodecContext *c;
|
||||
AVPacket pkt = { 0 }; // data and size must be 0;
|
||||
AVFrame *frame;
|
||||
int ret;
|
||||
int got_packet;
|
||||
int dst_nb_samples;
|
||||
AVFrame *frame = avcodec_alloc_frame();
|
||||
int got_packet, ret, dst_nb_samples;
|
||||
|
||||
av_init_packet(&pkt);
|
||||
c = ost->st->codec;
|
||||
c = st->codec;
|
||||
|
||||
frame = get_audio_frame(ost);
|
||||
get_audio_frame((int16_t *)src_samples_data[0], src_nb_samples, c->channels);
|
||||
|
||||
if (frame) {
|
||||
/* convert samples from native format to destination codec format, using the resampler */
|
||||
/* compute destination number of samples */
|
||||
dst_nb_samples = av_rescale_rnd(swr_get_delay(ost->swr_ctx, c->sample_rate) + frame->nb_samples,
|
||||
c->sample_rate, c->sample_rate, AV_ROUND_UP);
|
||||
av_assert0(dst_nb_samples == frame->nb_samples);
|
||||
|
||||
/* when we pass a frame to the encoder, it may keep a reference to it
|
||||
* internally;
|
||||
* make sure we do not overwrite it here
|
||||
*/
|
||||
ret = av_frame_make_writable(ost->frame);
|
||||
if (ret < 0)
|
||||
exit(1);
|
||||
|
||||
/* convert to destination format */
|
||||
ret = swr_convert(ost->swr_ctx,
|
||||
ost->frame->data, dst_nb_samples,
|
||||
(const uint8_t **)frame->data, frame->nb_samples);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error while converting\n");
|
||||
/* convert samples from native format to destination codec format, using the resampler */
|
||||
if (swr_ctx) {
|
||||
/* compute destination number of samples */
|
||||
dst_nb_samples = av_rescale_rnd(swr_get_delay(swr_ctx, c->sample_rate) + src_nb_samples,
|
||||
c->sample_rate, c->sample_rate, AV_ROUND_UP);
|
||||
if (dst_nb_samples > max_dst_nb_samples) {
|
||||
av_free(dst_samples_data[0]);
|
||||
ret = av_samples_alloc(dst_samples_data, &dst_samples_linesize, c->channels,
|
||||
dst_nb_samples, c->sample_fmt, 0);
|
||||
if (ret < 0)
|
||||
exit(1);
|
||||
}
|
||||
frame = ost->frame;
|
||||
max_dst_nb_samples = dst_nb_samples;
|
||||
dst_samples_size = av_samples_get_buffer_size(NULL, c->channels, dst_nb_samples,
|
||||
c->sample_fmt, 0);
|
||||
}
|
||||
|
||||
frame->pts = av_rescale_q(ost->samples_count, (AVRational){1, c->sample_rate}, c->time_base);
|
||||
ost->samples_count += dst_nb_samples;
|
||||
/* convert to destination format */
|
||||
ret = swr_convert(swr_ctx,
|
||||
dst_samples_data, dst_nb_samples,
|
||||
(const uint8_t **)src_samples_data, src_nb_samples);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error while converting\n");
|
||||
exit(1);
|
||||
}
|
||||
} else {
|
||||
dst_samples_data[0] = src_samples_data[0];
|
||||
dst_nb_samples = src_nb_samples;
|
||||
}
|
||||
|
||||
frame->nb_samples = dst_nb_samples;
|
||||
avcodec_fill_audio_frame(frame, c->channels, c->sample_fmt,
|
||||
dst_samples_data[0], dst_samples_size, 0);
|
||||
|
||||
ret = avcodec_encode_audio2(c, &pkt, frame, &got_packet);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error encoding audio frame: %s\n", av_err2str(ret));
|
||||
exit(1);
|
||||
}
|
||||
|
||||
if (got_packet) {
|
||||
ret = write_frame(oc, &c->time_base, ost->st, &pkt);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error while writing audio frame: %s\n",
|
||||
av_err2str(ret));
|
||||
exit(1);
|
||||
}
|
||||
}
|
||||
if (!got_packet)
|
||||
return;
|
||||
|
||||
return (frame || got_packet) ? 0 : 1;
|
||||
pkt.stream_index = st->index;
|
||||
|
||||
/* Write the compressed frame to the media file. */
|
||||
ret = av_interleaved_write_frame(oc, &pkt);
|
||||
if (ret != 0) {
|
||||
fprintf(stderr, "Error while writing audio frame: %s\n",
|
||||
av_err2str(ret));
|
||||
exit(1);
|
||||
}
|
||||
avcodec_free_frame(&frame);
|
||||
}
|
||||
|
||||
static void close_audio(AVFormatContext *oc, AVStream *st)
|
||||
{
|
||||
avcodec_close(st->codec);
|
||||
av_free(src_samples_data[0]);
|
||||
av_free(dst_samples_data[0]);
|
||||
}
|
||||
|
||||
/**************************************************************/
|
||||
/* video output */
|
||||
|
||||
static AVFrame *alloc_picture(enum AVPixelFormat pix_fmt, int width, int height)
|
||||
{
|
||||
AVFrame *picture;
|
||||
int ret;
|
||||
static AVFrame *frame;
|
||||
static AVPicture src_picture, dst_picture;
|
||||
static int frame_count;
|
||||
|
||||
picture = av_frame_alloc();
|
||||
if (!picture)
|
||||
return NULL;
|
||||
|
||||
picture->format = pix_fmt;
|
||||
picture->width = width;
|
||||
picture->height = height;
|
||||
|
||||
/* allocate the buffers for the frame data */
|
||||
ret = av_frame_get_buffer(picture, 32);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Could not allocate frame data.\n");
|
||||
exit(1);
|
||||
}
|
||||
|
||||
return picture;
|
||||
}
|
||||
|
||||
static void open_video(AVFormatContext *oc, AVCodec *codec, OutputStream *ost, AVDictionary *opt_arg)
|
||||
static void open_video(AVFormatContext *oc, AVCodec *codec, AVStream *st)
|
||||
{
|
||||
int ret;
|
||||
AVCodecContext *c = ost->st->codec;
|
||||
AVDictionary *opt = NULL;
|
||||
|
||||
av_dict_copy(&opt, opt_arg, 0);
|
||||
AVCodecContext *c = st->codec;
|
||||
|
||||
/* open the codec */
|
||||
ret = avcodec_open2(c, codec, &opt);
|
||||
av_dict_free(&opt);
|
||||
ret = avcodec_open2(c, codec, NULL);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Could not open video codec: %s\n", av_err2str(ret));
|
||||
exit(1);
|
||||
}
|
||||
|
||||
/* allocate and init a re-usable frame */
|
||||
ost->frame = alloc_picture(c->pix_fmt, c->width, c->height);
|
||||
if (!ost->frame) {
|
||||
frame = avcodec_alloc_frame();
|
||||
if (!frame) {
|
||||
fprintf(stderr, "Could not allocate video frame\n");
|
||||
exit(1);
|
||||
}
|
||||
|
||||
/* Allocate the encoded raw picture. */
|
||||
ret = avpicture_alloc(&dst_picture, c->pix_fmt, c->width, c->height);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Could not allocate picture: %s\n", av_err2str(ret));
|
||||
exit(1);
|
||||
}
|
||||
|
||||
/* If the output format is not YUV420P, then a temporary YUV420P
|
||||
* picture is needed too. It is then converted to the required
|
||||
* output format. */
|
||||
ost->tmp_frame = NULL;
|
||||
if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
|
||||
ost->tmp_frame = alloc_picture(AV_PIX_FMT_YUV420P, c->width, c->height);
|
||||
if (!ost->tmp_frame) {
|
||||
fprintf(stderr, "Could not allocate temporary picture\n");
|
||||
ret = avpicture_alloc(&src_picture, AV_PIX_FMT_YUV420P, c->width, c->height);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Could not allocate temporary picture: %s\n",
|
||||
av_err2str(ret));
|
||||
exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
/* copy data and linesize picture pointers to frame */
|
||||
*((AVPicture *)frame) = dst_picture;
|
||||
}
|
||||
|
||||
/* Prepare a dummy image. */
|
||||
static void fill_yuv_image(AVFrame *pict, int frame_index,
|
||||
static void fill_yuv_image(AVPicture *pict, int frame_index,
|
||||
int width, int height)
|
||||
{
|
||||
int x, y, i, ret;
|
||||
|
||||
/* when we pass a frame to the encoder, it may keep a reference to it
|
||||
* internally;
|
||||
* make sure we do not overwrite it here
|
||||
*/
|
||||
ret = av_frame_make_writable(pict);
|
||||
if (ret < 0)
|
||||
exit(1);
|
||||
int x, y, i;
|
||||
|
||||
i = frame_index;
|
||||
|
||||
@@ -446,77 +361,53 @@ static void fill_yuv_image(AVFrame *pict, int frame_index,
|
||||
}
|
||||
}
|
||||
|
||||
static AVFrame *get_video_frame(OutputStream *ost)
|
||||
{
|
||||
AVCodecContext *c = ost->st->codec;
|
||||
|
||||
/* check if we want to generate more frames */
|
||||
if (av_compare_ts(ost->next_pts, ost->st->codec->time_base,
|
||||
STREAM_DURATION, (AVRational){ 1, 1 }) >= 0)
|
||||
return NULL;
|
||||
|
||||
if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
|
||||
/* as we only generate a YUV420P picture, we must convert it
|
||||
* to the codec pixel format if needed */
|
||||
if (!ost->sws_ctx) {
|
||||
ost->sws_ctx = sws_getContext(c->width, c->height,
|
||||
AV_PIX_FMT_YUV420P,
|
||||
c->width, c->height,
|
||||
c->pix_fmt,
|
||||
SCALE_FLAGS, NULL, NULL, NULL);
|
||||
if (!ost->sws_ctx) {
|
||||
fprintf(stderr,
|
||||
"Could not initialize the conversion context\n");
|
||||
exit(1);
|
||||
}
|
||||
}
|
||||
fill_yuv_image(ost->tmp_frame, ost->next_pts, c->width, c->height);
|
||||
sws_scale(ost->sws_ctx,
|
||||
(const uint8_t * const *)ost->tmp_frame->data, ost->tmp_frame->linesize,
|
||||
0, c->height, ost->frame->data, ost->frame->linesize);
|
||||
} else {
|
||||
fill_yuv_image(ost->frame, ost->next_pts, c->width, c->height);
|
||||
}
|
||||
|
||||
ost->frame->pts = ost->next_pts++;
|
||||
|
||||
return ost->frame;
|
||||
}
|
||||
|
||||
/*
|
||||
* encode one video frame and send it to the muxer
|
||||
* return 1 when encoding is finished, 0 otherwise
|
||||
*/
|
||||
static int write_video_frame(AVFormatContext *oc, OutputStream *ost)
|
||||
static void write_video_frame(AVFormatContext *oc, AVStream *st)
|
||||
{
|
||||
int ret;
|
||||
AVCodecContext *c;
|
||||
AVFrame *frame;
|
||||
int got_packet = 0;
|
||||
static struct SwsContext *sws_ctx;
|
||||
AVCodecContext *c = st->codec;
|
||||
|
||||
c = ost->st->codec;
|
||||
|
||||
frame = get_video_frame(ost);
|
||||
if (frame_count >= STREAM_NB_FRAMES) {
|
||||
/* No more frames to compress. The codec has a latency of a few
|
||||
* frames if using B-frames, so we get the last frames by
|
||||
* passing the same picture again. */
|
||||
} else {
|
||||
if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
|
||||
/* as we only generate a YUV420P picture, we must convert it
|
||||
* to the codec pixel format if needed */
|
||||
if (!sws_ctx) {
|
||||
sws_ctx = sws_getContext(c->width, c->height, AV_PIX_FMT_YUV420P,
|
||||
c->width, c->height, c->pix_fmt,
|
||||
sws_flags, NULL, NULL, NULL);
|
||||
if (!sws_ctx) {
|
||||
fprintf(stderr,
|
||||
"Could not initialize the conversion context\n");
|
||||
exit(1);
|
||||
}
|
||||
}
|
||||
fill_yuv_image(&src_picture, frame_count, c->width, c->height);
|
||||
sws_scale(sws_ctx,
|
||||
(const uint8_t * const *)src_picture.data, src_picture.linesize,
|
||||
0, c->height, dst_picture.data, dst_picture.linesize);
|
||||
} else {
|
||||
fill_yuv_image(&dst_picture, frame_count, c->width, c->height);
|
||||
}
|
||||
}
|
||||
|
||||
if (oc->oformat->flags & AVFMT_RAWPICTURE) {
|
||||
/* a hack to avoid data copy with some raw video muxers */
|
||||
/* Raw video case - directly store the picture in the packet */
|
||||
AVPacket pkt;
|
||||
av_init_packet(&pkt);
|
||||
|
||||
if (!frame)
|
||||
return 1;
|
||||
|
||||
pkt.flags |= AV_PKT_FLAG_KEY;
|
||||
pkt.stream_index = ost->st->index;
|
||||
pkt.data = (uint8_t *)frame;
|
||||
pkt.stream_index = st->index;
|
||||
pkt.data = dst_picture.data[0];
|
||||
pkt.size = sizeof(AVPicture);
|
||||
|
||||
pkt.pts = pkt.dts = frame->pts;
|
||||
av_packet_rescale_ts(&pkt, c->time_base, ost->st->time_base);
|
||||
|
||||
ret = av_interleaved_write_frame(oc, &pkt);
|
||||
} else {
|
||||
AVPacket pkt = { 0 };
|
||||
int got_packet;
|
||||
av_init_packet(&pkt);
|
||||
|
||||
/* encode the image */
|
||||
@@ -525,29 +416,30 @@ static int write_video_frame(AVFormatContext *oc, OutputStream *ost)
|
||||
fprintf(stderr, "Error encoding video frame: %s\n", av_err2str(ret));
|
||||
exit(1);
|
||||
}
|
||||
/* If size is zero, it means the image was buffered. */
|
||||
|
||||
if (got_packet) {
|
||||
ret = write_frame(oc, &c->time_base, ost->st, &pkt);
|
||||
if (!ret && got_packet && pkt.size) {
|
||||
pkt.stream_index = st->index;
|
||||
|
||||
/* Write the compressed frame to the media file. */
|
||||
ret = av_interleaved_write_frame(oc, &pkt);
|
||||
} else {
|
||||
ret = 0;
|
||||
}
|
||||
}
|
||||
|
||||
if (ret < 0) {
|
||||
if (ret != 0) {
|
||||
fprintf(stderr, "Error while writing video frame: %s\n", av_err2str(ret));
|
||||
exit(1);
|
||||
}
|
||||
|
||||
return (frame || got_packet) ? 0 : 1;
|
||||
frame_count++;
|
||||
}
|
||||
|
||||
static void close_stream(AVFormatContext *oc, OutputStream *ost)
|
||||
static void close_video(AVFormatContext *oc, AVStream *st)
|
||||
{
|
||||
avcodec_close(ost->st->codec);
|
||||
av_frame_free(&ost->frame);
|
||||
av_frame_free(&ost->tmp_frame);
|
||||
sws_freeContext(ost->sws_ctx);
|
||||
swr_free(&ost->swr_ctx);
|
||||
avcodec_close(st->codec);
|
||||
av_free(src_picture.data[0]);
|
||||
av_free(dst_picture.data[0]);
|
||||
av_free(frame);
|
||||
}
|
||||
|
||||
/**************************************************************/
|
||||
@@ -555,20 +447,18 @@ static void close_stream(AVFormatContext *oc, OutputStream *ost)
|
||||
|
||||
int main(int argc, char **argv)
|
||||
{
|
||||
OutputStream video_st = { 0 }, audio_st = { 0 };
|
||||
const char *filename;
|
||||
AVOutputFormat *fmt;
|
||||
AVFormatContext *oc;
|
||||
AVStream *audio_st, *video_st;
|
||||
AVCodec *audio_codec, *video_codec;
|
||||
double audio_time, video_time;
|
||||
int ret;
|
||||
int have_video = 0, have_audio = 0;
|
||||
int encode_video = 0, encode_audio = 0;
|
||||
AVDictionary *opt = NULL;
|
||||
|
||||
/* Initialize libavcodec, and register all codecs and formats. */
|
||||
av_register_all();
|
||||
|
||||
if (argc < 2) {
|
||||
if (argc != 2) {
|
||||
printf("usage: %s output_file\n"
|
||||
"API example program to output a media file with libavformat.\n"
|
||||
"This program generates a synthetic audio and video stream, encodes and\n"
|
||||
@@ -580,9 +470,6 @@ int main(int argc, char **argv)
|
||||
}
|
||||
|
||||
filename = argv[1];
|
||||
if (argc > 3 && !strcmp(argv[2], "-flags")) {
|
||||
av_dict_set(&opt, argv[2]+1, argv[3], 0);
|
||||
}
|
||||
|
||||
/* allocate the output media context */
|
||||
avformat_alloc_output_context2(&oc, NULL, NULL, filename);
|
||||
@@ -590,31 +477,29 @@ int main(int argc, char **argv)
|
||||
printf("Could not deduce output format from file extension: using MPEG.\n");
|
||||
avformat_alloc_output_context2(&oc, NULL, "mpeg", filename);
|
||||
}
|
||||
if (!oc)
|
||||
if (!oc) {
|
||||
return 1;
|
||||
|
||||
}
|
||||
fmt = oc->oformat;
|
||||
|
||||
/* Add the audio and video streams using the default format codecs
|
||||
* and initialize the codecs. */
|
||||
video_st = NULL;
|
||||
audio_st = NULL;
|
||||
|
||||
if (fmt->video_codec != AV_CODEC_ID_NONE) {
|
||||
add_stream(&video_st, oc, &video_codec, fmt->video_codec);
|
||||
have_video = 1;
|
||||
encode_video = 1;
|
||||
video_st = add_stream(oc, &video_codec, fmt->video_codec);
|
||||
}
|
||||
if (fmt->audio_codec != AV_CODEC_ID_NONE) {
|
||||
add_stream(&audio_st, oc, &audio_codec, fmt->audio_codec);
|
||||
have_audio = 1;
|
||||
encode_audio = 1;
|
||||
audio_st = add_stream(oc, &audio_codec, fmt->audio_codec);
|
||||
}
|
||||
|
||||
/* Now that all the parameters are set, we can open the audio and
|
||||
* video codecs and allocate the necessary encode buffers. */
|
||||
if (have_video)
|
||||
open_video(oc, video_codec, &video_st, opt);
|
||||
|
||||
if (have_audio)
|
||||
open_audio(oc, audio_codec, &audio_st, opt);
|
||||
if (video_st)
|
||||
open_video(oc, video_codec, video_st);
|
||||
if (audio_st)
|
||||
open_audio(oc, audio_codec, audio_st);
|
||||
|
||||
av_dump_format(oc, 0, filename, 1);
|
||||
|
||||
@@ -629,21 +514,30 @@ int main(int argc, char **argv)
|
||||
}
|
||||
|
||||
/* Write the stream header, if any. */
|
||||
ret = avformat_write_header(oc, &opt);
|
||||
ret = avformat_write_header(oc, NULL);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error occurred when opening output file: %s\n",
|
||||
av_err2str(ret));
|
||||
return 1;
|
||||
}
|
||||
|
||||
while (encode_video || encode_audio) {
|
||||
/* select the stream to encode */
|
||||
if (encode_video &&
|
||||
(!encode_audio || av_compare_ts(video_st.next_pts, video_st.st->codec->time_base,
|
||||
audio_st.next_pts, audio_st.st->codec->time_base) <= 0)) {
|
||||
encode_video = !write_video_frame(oc, &video_st);
|
||||
if (frame)
|
||||
frame->pts = 0;
|
||||
for (;;) {
|
||||
/* Compute current audio and video time. */
|
||||
audio_time = audio_st ? audio_st->pts.val * av_q2d(audio_st->time_base) : 0.0;
|
||||
video_time = video_st ? video_st->pts.val * av_q2d(video_st->time_base) : 0.0;
|
||||
|
||||
if ((!audio_st || audio_time >= STREAM_DURATION) &&
|
||||
(!video_st || video_time >= STREAM_DURATION))
|
||||
break;
|
||||
|
||||
/* write interleaved audio and video frames */
|
||||
if (!video_st || (video_st && audio_st && audio_time < video_time)) {
|
||||
write_audio_frame(oc, audio_st);
|
||||
} else {
|
||||
encode_audio = !write_audio_frame(oc, &audio_st);
|
||||
write_video_frame(oc, video_st);
|
||||
frame->pts += av_rescale_q(1, video_st->codec->time_base, video_st->time_base);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -654,14 +548,14 @@ int main(int argc, char **argv)
|
||||
av_write_trailer(oc);
|
||||
|
||||
/* Close each codec. */
|
||||
if (have_video)
|
||||
close_stream(oc, &video_st);
|
||||
if (have_audio)
|
||||
close_stream(oc, &audio_st);
|
||||
if (video_st)
|
||||
close_video(oc, video_st);
|
||||
if (audio_st)
|
||||
close_audio(oc, audio_st);
|
||||
|
||||
if (!(fmt->flags & AVFMT_NOFILE))
|
||||
/* Close the output file. */
|
||||
avio_closep(&oc->pb);
|
||||
avio_close(oc->pb);
|
||||
|
||||
/* free the stream */
|
||||
avformat_free_context(oc);
|
||||
|
@@ -1,484 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2015 Anton Khirnov
|
||||
*
|
||||
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
* of this software and associated documentation files (the "Software"), to deal
|
||||
* in the Software without restriction, including without limitation the rights
|
||||
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
* copies of the Software, and to permit persons to whom the Software is
|
||||
* furnished to do so, subject to the following conditions:
|
||||
*
|
||||
* The above copyright notice and this permission notice shall be included in
|
||||
* all copies or substantial portions of the Software.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
|
||||
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
||||
* THE SOFTWARE.
|
||||
*/
|
||||
|
||||
/**
|
||||
* @file
|
||||
* Intel QSV-accelerated H.264 decoding example.
|
||||
*
|
||||
* @example qsvdec.c
|
||||
* This example shows how to do QSV-accelerated H.264 decoding with output
|
||||
* frames in the VA-API video surfaces.
|
||||
*/
|
||||
|
||||
#include "config.h"
|
||||
|
||||
#include <stdio.h>
|
||||
|
||||
#include <mfx/mfxvideo.h>
|
||||
|
||||
#include <va/va.h>
|
||||
#include <va/va_x11.h>
|
||||
#include <X11/Xlib.h>
|
||||
|
||||
#include "libavformat/avformat.h"
|
||||
#include "libavformat/avio.h"
|
||||
|
||||
#include "libavcodec/avcodec.h"
|
||||
#include "libavcodec/qsv.h"
|
||||
|
||||
#include "libavutil/error.h"
|
||||
#include "libavutil/mem.h"
|
||||
|
||||
typedef struct DecodeContext {
|
||||
mfxSession mfx_session;
|
||||
VADisplay va_dpy;
|
||||
|
||||
VASurfaceID *surfaces;
|
||||
mfxMemId *surface_ids;
|
||||
int *surface_used;
|
||||
int nb_surfaces;
|
||||
|
||||
mfxFrameInfo frame_info;
|
||||
} DecodeContext;
|
||||
|
||||
static mfxStatus frame_alloc(mfxHDL pthis, mfxFrameAllocRequest *req,
|
||||
mfxFrameAllocResponse *resp)
|
||||
{
|
||||
DecodeContext *decode = pthis;
|
||||
int err, i;
|
||||
|
||||
if (decode->surfaces) {
|
||||
fprintf(stderr, "Multiple allocation requests.\n");
|
||||
return MFX_ERR_MEMORY_ALLOC;
|
||||
}
|
||||
if (!(req->Type & MFX_MEMTYPE_VIDEO_MEMORY_DECODER_TARGET)) {
|
||||
fprintf(stderr, "Unsupported surface type: %d\n", req->Type);
|
||||
return MFX_ERR_UNSUPPORTED;
|
||||
}
|
||||
if (req->Info.BitDepthLuma != 8 || req->Info.BitDepthChroma != 8 ||
|
||||
req->Info.Shift || req->Info.FourCC != MFX_FOURCC_NV12 ||
|
||||
req->Info.ChromaFormat != MFX_CHROMAFORMAT_YUV420) {
|
||||
fprintf(stderr, "Unsupported surface properties.\n");
|
||||
return MFX_ERR_UNSUPPORTED;
|
||||
}
|
||||
|
||||
decode->surfaces = av_malloc_array (req->NumFrameSuggested, sizeof(*decode->surfaces));
|
||||
decode->surface_ids = av_malloc_array (req->NumFrameSuggested, sizeof(*decode->surface_ids));
|
||||
decode->surface_used = av_mallocz_array(req->NumFrameSuggested, sizeof(*decode->surface_used));
|
||||
if (!decode->surfaces || !decode->surface_ids || !decode->surface_used)
|
||||
goto fail;
|
||||
|
||||
err = vaCreateSurfaces(decode->va_dpy, VA_RT_FORMAT_YUV420,
|
||||
req->Info.Width, req->Info.Height,
|
||||
decode->surfaces, req->NumFrameSuggested,
|
||||
NULL, 0);
|
||||
if (err != VA_STATUS_SUCCESS) {
|
||||
fprintf(stderr, "Error allocating VA surfaces\n");
|
||||
goto fail;
|
||||
}
|
||||
decode->nb_surfaces = req->NumFrameSuggested;
|
||||
|
||||
for (i = 0; i < decode->nb_surfaces; i++)
|
||||
decode->surface_ids[i] = &decode->surfaces[i];
|
||||
|
||||
resp->mids = decode->surface_ids;
|
||||
resp->NumFrameActual = decode->nb_surfaces;
|
||||
|
||||
decode->frame_info = req->Info;
|
||||
|
||||
return MFX_ERR_NONE;
|
||||
fail:
|
||||
av_freep(&decode->surfaces);
|
||||
av_freep(&decode->surface_ids);
|
||||
av_freep(&decode->surface_used);
|
||||
|
||||
return MFX_ERR_MEMORY_ALLOC;
|
||||
}
|
||||
|
||||
static mfxStatus frame_free(mfxHDL pthis, mfxFrameAllocResponse *resp)
|
||||
{
|
||||
DecodeContext *decode = pthis;
|
||||
|
||||
if (decode->surfaces)
|
||||
vaDestroySurfaces(decode->va_dpy, decode->surfaces, decode->nb_surfaces);
|
||||
av_freep(&decode->surfaces);
|
||||
av_freep(&decode->surface_ids);
|
||||
av_freep(&decode->surface_used);
|
||||
decode->nb_surfaces = 0;
|
||||
|
||||
return MFX_ERR_NONE;
|
||||
}
|
||||
|
||||
static mfxStatus frame_lock(mfxHDL pthis, mfxMemId mid, mfxFrameData *ptr)
|
||||
{
|
||||
return MFX_ERR_UNSUPPORTED;
|
||||
}
|
||||
|
||||
static mfxStatus frame_unlock(mfxHDL pthis, mfxMemId mid, mfxFrameData *ptr)
|
||||
{
|
||||
return MFX_ERR_UNSUPPORTED;
|
||||
}
|
||||
|
||||
static mfxStatus frame_get_hdl(mfxHDL pthis, mfxMemId mid, mfxHDL *hdl)
|
||||
{
|
||||
*hdl = mid;
|
||||
return MFX_ERR_NONE;
|
||||
}
|
||||
|
||||
static void free_buffer(void *opaque, uint8_t *data)
|
||||
{
|
||||
int *used = opaque;
|
||||
*used = 0;
|
||||
av_freep(&data);
|
||||
}
|
||||
|
||||
static int get_buffer(AVCodecContext *avctx, AVFrame *frame, int flags)
|
||||
{
|
||||
DecodeContext *decode = avctx->opaque;
|
||||
|
||||
mfxFrameSurface1 *surf;
|
||||
AVBufferRef *surf_buf;
|
||||
int idx;
|
||||
|
||||
for (idx = 0; idx < decode->nb_surfaces; idx++) {
|
||||
if (!decode->surface_used[idx])
|
||||
break;
|
||||
}
|
||||
if (idx == decode->nb_surfaces) {
|
||||
fprintf(stderr, "No free surfaces\n");
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
|
||||
surf = av_mallocz(sizeof(*surf));
|
||||
if (!surf)
|
||||
return AVERROR(ENOMEM);
|
||||
surf_buf = av_buffer_create((uint8_t*)surf, sizeof(*surf), free_buffer,
|
||||
&decode->surface_used[idx], AV_BUFFER_FLAG_READONLY);
|
||||
if (!surf_buf) {
|
||||
av_freep(&surf);
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
|
||||
surf->Info = decode->frame_info;
|
||||
surf->Data.MemId = &decode->surfaces[idx];
|
||||
|
||||
frame->buf[0] = surf_buf;
|
||||
frame->data[3] = (uint8_t*)surf;
|
||||
|
||||
decode->surface_used[idx] = 1;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int get_format(AVCodecContext *avctx, const enum AVPixelFormat *pix_fmts)
|
||||
{
|
||||
while (*pix_fmts != AV_PIX_FMT_NONE) {
|
||||
if (*pix_fmts == AV_PIX_FMT_QSV) {
|
||||
if (!avctx->hwaccel_context) {
|
||||
DecodeContext *decode = avctx->opaque;
|
||||
AVQSVContext *qsv = av_qsv_alloc_context();
|
||||
if (!qsv)
|
||||
return AV_PIX_FMT_NONE;
|
||||
|
||||
qsv->session = decode->mfx_session;
|
||||
qsv->iopattern = MFX_IOPATTERN_OUT_VIDEO_MEMORY;
|
||||
|
||||
avctx->hwaccel_context = qsv;
|
||||
}
|
||||
|
||||
return AV_PIX_FMT_QSV;
|
||||
}
|
||||
|
||||
pix_fmts++;
|
||||
}
|
||||
|
||||
fprintf(stderr, "The QSV pixel format not offered in get_format()\n");
|
||||
|
||||
return AV_PIX_FMT_NONE;
|
||||
}
|
||||
|
||||
static int decode_packet(DecodeContext *decode, AVCodecContext *decoder_ctx,
|
||||
AVFrame *frame, AVPacket *pkt,
|
||||
AVIOContext *output_ctx)
|
||||
{
|
||||
int ret = 0;
|
||||
int got_frame = 1;
|
||||
|
||||
while (pkt->size > 0 || (!pkt->data && got_frame)) {
|
||||
ret = avcodec_decode_video2(decoder_ctx, frame, &got_frame, pkt);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error during decoding\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
pkt->data += ret;
|
||||
pkt->size -= ret;
|
||||
|
||||
/* A real program would do something useful with the decoded frame here.
|
||||
* We just retrieve the raw data and write it to a file, which is rather
|
||||
* useless but pedagogic. */
|
||||
if (got_frame) {
|
||||
mfxFrameSurface1 *surf = (mfxFrameSurface1*)frame->data[3];
|
||||
VASurfaceID surface = *(VASurfaceID*)surf->Data.MemId;
|
||||
|
||||
VAImageFormat img_fmt = {
|
||||
.fourcc = VA_FOURCC_NV12,
|
||||
.byte_order = VA_LSB_FIRST,
|
||||
.bits_per_pixel = 8,
|
||||
.depth = 8,
|
||||
};
|
||||
|
||||
VAImage img;
|
||||
|
||||
VAStatus err;
|
||||
uint8_t *data;
|
||||
int i, j;
|
||||
|
||||
img.buf = VA_INVALID_ID;
|
||||
img.image_id = VA_INVALID_ID;
|
||||
|
||||
err = vaCreateImage(decode->va_dpy, &img_fmt,
|
||||
frame->width, frame->height, &img);
|
||||
if (err != VA_STATUS_SUCCESS) {
|
||||
fprintf(stderr, "Error creating an image: %s\n",
|
||||
vaErrorStr(err));
|
||||
ret = AVERROR_UNKNOWN;
|
||||
goto fail;
|
||||
}
|
||||
|
||||
err = vaGetImage(decode->va_dpy, surface, 0, 0,
|
||||
frame->width, frame->height,
|
||||
img.image_id);
|
||||
if (err != VA_STATUS_SUCCESS) {
|
||||
fprintf(stderr, "Error getting an image: %s\n",
|
||||
vaErrorStr(err));
|
||||
ret = AVERROR_UNKNOWN;
|
||||
goto fail;
|
||||
}
|
||||
|
||||
err = vaMapBuffer(decode->va_dpy, img.buf, (void**)&data);
|
||||
if (err != VA_STATUS_SUCCESS) {
|
||||
fprintf(stderr, "Error mapping the image buffer: %s\n",
|
||||
vaErrorStr(err));
|
||||
ret = AVERROR_UNKNOWN;
|
||||
goto fail;
|
||||
}
|
||||
|
||||
for (i = 0; i < img.num_planes; i++)
|
||||
for (j = 0; j < (img.height >> (i > 0)); j++)
|
||||
avio_write(output_ctx, data + img.offsets[i] + j * img.pitches[i], img.width);
|
||||
|
||||
fail:
|
||||
if (img.buf != VA_INVALID_ID)
|
||||
vaUnmapBuffer(decode->va_dpy, img.buf);
|
||||
if (img.image_id != VA_INVALID_ID)
|
||||
vaDestroyImage(decode->va_dpy, img.image_id);
|
||||
av_frame_unref(frame);
|
||||
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int main(int argc, char **argv)
|
||||
{
|
||||
AVFormatContext *input_ctx = NULL;
|
||||
AVStream *video_st = NULL;
|
||||
AVCodecContext *decoder_ctx = NULL;
|
||||
const AVCodec *decoder;
|
||||
|
||||
AVPacket pkt = { 0 };
|
||||
AVFrame *frame = NULL;
|
||||
|
||||
DecodeContext decode = { NULL };
|
||||
|
||||
Display *dpy = NULL;
|
||||
int va_ver_major, va_ver_minor;
|
||||
|
||||
mfxIMPL mfx_impl = MFX_IMPL_AUTO_ANY;
|
||||
mfxVersion mfx_ver = { { 1, 1 } };
|
||||
|
||||
mfxFrameAllocator frame_allocator = {
|
||||
.pthis = &decode,
|
||||
.Alloc = frame_alloc,
|
||||
.Lock = frame_lock,
|
||||
.Unlock = frame_unlock,
|
||||
.GetHDL = frame_get_hdl,
|
||||
.Free = frame_free,
|
||||
};
|
||||
|
||||
AVIOContext *output_ctx = NULL;
|
||||
|
||||
int ret, i, err;
|
||||
|
||||
av_register_all();
|
||||
|
||||
if (argc < 3) {
|
||||
fprintf(stderr, "Usage: %s <input file> <output file>\n", argv[0]);
|
||||
return 1;
|
||||
}
|
||||
|
||||
/* open the input file */
|
||||
ret = avformat_open_input(&input_ctx, argv[1], NULL, NULL);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Cannot open input file '%s': ", argv[1]);
|
||||
goto finish;
|
||||
}
|
||||
|
||||
/* find the first H.264 video stream */
|
||||
for (i = 0; i < input_ctx->nb_streams; i++) {
|
||||
AVStream *st = input_ctx->streams[i];
|
||||
|
||||
if (st->codec->codec_id == AV_CODEC_ID_H264 && !video_st)
|
||||
video_st = st;
|
||||
else
|
||||
st->discard = AVDISCARD_ALL;
|
||||
}
|
||||
if (!video_st) {
|
||||
fprintf(stderr, "No H.264 video stream in the input file\n");
|
||||
goto finish;
|
||||
}
|
||||
|
||||
/* initialize VA-API */
|
||||
dpy = XOpenDisplay(NULL);
|
||||
if (!dpy) {
|
||||
fprintf(stderr, "Cannot open the X display\n");
|
||||
goto finish;
|
||||
}
|
||||
decode.va_dpy = vaGetDisplay(dpy);
|
||||
if (!decode.va_dpy) {
|
||||
fprintf(stderr, "Cannot open the VA display\n");
|
||||
goto finish;
|
||||
}
|
||||
|
||||
err = vaInitialize(decode.va_dpy, &va_ver_major, &va_ver_minor);
|
||||
if (err != VA_STATUS_SUCCESS) {
|
||||
fprintf(stderr, "Cannot initialize VA: %s\n", vaErrorStr(err));
|
||||
goto finish;
|
||||
}
|
||||
fprintf(stderr, "Initialized VA v%d.%d\n", va_ver_major, va_ver_minor);
|
||||
|
||||
/* initialize an MFX session */
|
||||
err = MFXInit(mfx_impl, &mfx_ver, &decode.mfx_session);
|
||||
if (err != MFX_ERR_NONE) {
|
||||
fprintf(stderr, "Error initializing an MFX session\n");
|
||||
goto finish;
|
||||
}
|
||||
|
||||
MFXVideoCORE_SetHandle(decode.mfx_session, MFX_HANDLE_VA_DISPLAY, decode.va_dpy);
|
||||
MFXVideoCORE_SetFrameAllocator(decode.mfx_session, &frame_allocator);
|
||||
|
||||
/* initialize the decoder */
|
||||
decoder = avcodec_find_decoder_by_name("h264_qsv");
|
||||
if (!decoder) {
|
||||
fprintf(stderr, "The QSV decoder is not present in libavcodec\n");
|
||||
goto finish;
|
||||
}
|
||||
|
||||
decoder_ctx = avcodec_alloc_context3(decoder);
|
||||
if (!decoder_ctx) {
|
||||
ret = AVERROR(ENOMEM);
|
||||
goto finish;
|
||||
}
|
||||
decoder_ctx->codec_id = AV_CODEC_ID_H264;
|
||||
if (video_st->codec->extradata_size) {
|
||||
decoder_ctx->extradata = av_mallocz(video_st->codec->extradata_size +
|
||||
FF_INPUT_BUFFER_PADDING_SIZE);
|
||||
if (!decoder_ctx->extradata) {
|
||||
ret = AVERROR(ENOMEM);
|
||||
goto finish;
|
||||
}
|
||||
memcpy(decoder_ctx->extradata, video_st->codec->extradata,
|
||||
video_st->codec->extradata_size);
|
||||
decoder_ctx->extradata_size = video_st->codec->extradata_size;
|
||||
}
|
||||
decoder_ctx->refcounted_frames = 1;
|
||||
|
||||
decoder_ctx->opaque = &decode;
|
||||
decoder_ctx->get_buffer2 = get_buffer;
|
||||
decoder_ctx->get_format = get_format;
|
||||
|
||||
ret = avcodec_open2(decoder_ctx, NULL, NULL);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error opening the decoder: ");
|
||||
goto finish;
|
||||
}
|
||||
|
||||
/* open the output stream */
|
||||
ret = avio_open(&output_ctx, argv[2], AVIO_FLAG_WRITE);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error opening the output context: ");
|
||||
goto finish;
|
||||
}
|
||||
|
||||
frame = av_frame_alloc();
|
||||
if (!frame) {
|
||||
ret = AVERROR(ENOMEM);
|
||||
goto finish;
|
||||
}
|
||||
|
||||
/* actual decoding */
|
||||
while (ret >= 0) {
|
||||
ret = av_read_frame(input_ctx, &pkt);
|
||||
if (ret < 0)
|
||||
break;
|
||||
|
||||
if (pkt.stream_index == video_st->index)
|
||||
ret = decode_packet(&decode, decoder_ctx, frame, &pkt, output_ctx);
|
||||
|
||||
av_packet_unref(&pkt);
|
||||
}
|
||||
|
||||
/* flush the decoder */
|
||||
pkt.data = NULL;
|
||||
pkt.size = 0;
|
||||
ret = decode_packet(&decode, decoder_ctx, frame, &pkt, output_ctx);
|
||||
|
||||
finish:
|
||||
if (ret < 0) {
|
||||
char buf[1024];
|
||||
av_strerror(ret, buf, sizeof(buf));
|
||||
fprintf(stderr, "%s\n", buf);
|
||||
}
|
||||
|
||||
avformat_close_input(&input_ctx);
|
||||
|
||||
av_frame_free(&frame);
|
||||
|
||||
if (decode.mfx_session)
|
||||
MFXClose(decode.mfx_session);
|
||||
if (decode.va_dpy)
|
||||
vaTerminate(decode.va_dpy);
|
||||
if (dpy)
|
||||
XCloseDisplay(dpy);
|
||||
|
||||
if (decoder_ctx)
|
||||
av_freep(&decoder_ctx->hwaccel_context);
|
||||
avcodec_free_context(&decoder_ctx);
|
||||
|
||||
avio_close(output_ctx);
|
||||
|
||||
return ret;
|
||||
}
|
@@ -1,165 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2013 Stefano Sabatini
|
||||
*
|
||||
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
* of this software and associated documentation files (the "Software"), to deal
|
||||
* in the Software without restriction, including without limitation the rights
|
||||
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
* copies of the Software, and to permit persons to whom the Software is
|
||||
* furnished to do so, subject to the following conditions:
|
||||
*
|
||||
* The above copyright notice and this permission notice shall be included in
|
||||
* all copies or substantial portions of the Software.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
|
||||
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
||||
* THE SOFTWARE.
|
||||
*/
|
||||
|
||||
/**
|
||||
* @file
|
||||
* libavformat/libavcodec demuxing and muxing API example.
|
||||
*
|
||||
* Remux streams from one container format to another.
|
||||
* @example remuxing.c
|
||||
*/
|
||||
|
||||
#include <libavutil/timestamp.h>
|
||||
#include <libavformat/avformat.h>
|
||||
|
||||
static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt, const char *tag)
|
||||
{
|
||||
AVRational *time_base = &fmt_ctx->streams[pkt->stream_index]->time_base;
|
||||
|
||||
printf("%s: pts:%s pts_time:%s dts:%s dts_time:%s duration:%s duration_time:%s stream_index:%d\n",
|
||||
tag,
|
||||
av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, time_base),
|
||||
av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, time_base),
|
||||
av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, time_base),
|
||||
pkt->stream_index);
|
||||
}
|
||||
|
||||
int main(int argc, char **argv)
|
||||
{
|
||||
AVOutputFormat *ofmt = NULL;
|
||||
AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
|
||||
AVPacket pkt;
|
||||
const char *in_filename, *out_filename;
|
||||
int ret, i;
|
||||
|
||||
if (argc < 3) {
|
||||
printf("usage: %s input output\n"
|
||||
"API example program to remux a media file with libavformat and libavcodec.\n"
|
||||
"The output format is guessed according to the file extension.\n"
|
||||
"\n", argv[0]);
|
||||
return 1;
|
||||
}
|
||||
|
||||
in_filename = argv[1];
|
||||
out_filename = argv[2];
|
||||
|
||||
av_register_all();
|
||||
|
||||
if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
|
||||
fprintf(stderr, "Could not open input file '%s'", in_filename);
|
||||
goto end;
|
||||
}
|
||||
|
||||
if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
|
||||
fprintf(stderr, "Failed to retrieve input stream information");
|
||||
goto end;
|
||||
}
|
||||
|
||||
av_dump_format(ifmt_ctx, 0, in_filename, 0);
|
||||
|
||||
avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
|
||||
if (!ofmt_ctx) {
|
||||
fprintf(stderr, "Could not create output context\n");
|
||||
ret = AVERROR_UNKNOWN;
|
||||
goto end;
|
||||
}
|
||||
|
||||
ofmt = ofmt_ctx->oformat;
|
||||
|
||||
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
|
||||
AVStream *in_stream = ifmt_ctx->streams[i];
|
||||
AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
|
||||
if (!out_stream) {
|
||||
fprintf(stderr, "Failed allocating output stream\n");
|
||||
ret = AVERROR_UNKNOWN;
|
||||
goto end;
|
||||
}
|
||||
|
||||
ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Failed to copy context from input to output stream codec context\n");
|
||||
goto end;
|
||||
}
|
||||
out_stream->codec->codec_tag = 0;
|
||||
if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
|
||||
out_stream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
|
||||
}
|
||||
av_dump_format(ofmt_ctx, 0, out_filename, 1);
|
||||
|
||||
if (!(ofmt->flags & AVFMT_NOFILE)) {
|
||||
ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Could not open output file '%s'", out_filename);
|
||||
goto end;
|
||||
}
|
||||
}
|
||||
|
||||
ret = avformat_write_header(ofmt_ctx, NULL);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error occurred when opening output file\n");
|
||||
goto end;
|
||||
}
|
||||
|
||||
while (1) {
|
||||
AVStream *in_stream, *out_stream;
|
||||
|
||||
ret = av_read_frame(ifmt_ctx, &pkt);
|
||||
if (ret < 0)
|
||||
break;
|
||||
|
||||
in_stream = ifmt_ctx->streams[pkt.stream_index];
|
||||
out_stream = ofmt_ctx->streams[pkt.stream_index];
|
||||
|
||||
log_packet(ifmt_ctx, &pkt, "in");
|
||||
|
||||
/* copy packet */
|
||||
pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
|
||||
pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
|
||||
pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
|
||||
pkt.pos = -1;
|
||||
log_packet(ofmt_ctx, &pkt, "out");
|
||||
|
||||
ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error muxing packet\n");
|
||||
break;
|
||||
}
|
||||
av_free_packet(&pkt);
|
||||
}
|
||||
|
||||
av_write_trailer(ofmt_ctx);
|
||||
end:
|
||||
|
||||
avformat_close_input(&ifmt_ctx);
|
||||
|
||||
/* close output */
|
||||
if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
|
||||
avio_closep(&ofmt_ctx->pb);
|
||||
avformat_free_context(ofmt_ctx);
|
||||
|
||||
if (ret < 0 && ret != AVERROR_EOF) {
|
||||
fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
|
||||
return 1;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
@@ -21,7 +21,7 @@
|
||||
*/
|
||||
|
||||
/**
|
||||
* @example resampling_audio.c
|
||||
* @example doc/examples/resampling_audio.c
|
||||
* libswresample API use example.
|
||||
*/
|
||||
|
||||
@@ -62,7 +62,7 @@ static int get_format_from_sample_fmt(const char **fmt,
|
||||
/**
|
||||
* Fill dst buffer with nb_samples, generated starting from t.
|
||||
*/
|
||||
static void fill_samples(double *dst, int nb_samples, int nb_channels, int sample_rate, double *t)
|
||||
void fill_samples(double *dst, int nb_samples, int nb_channels, int sample_rate, double *t)
|
||||
{
|
||||
int i, j;
|
||||
double tincr = 1.0 / sample_rate, *dstp = dst;
|
||||
@@ -168,7 +168,7 @@ int main(int argc, char **argv)
|
||||
dst_nb_samples = av_rescale_rnd(swr_get_delay(swr_ctx, src_rate) +
|
||||
src_nb_samples, dst_rate, src_rate, AV_ROUND_UP);
|
||||
if (dst_nb_samples > max_dst_nb_samples) {
|
||||
av_freep(&dst_data[0]);
|
||||
av_free(dst_data[0]);
|
||||
ret = av_samples_alloc(dst_data, &dst_linesize, dst_nb_channels,
|
||||
dst_nb_samples, dst_sample_fmt, 1);
|
||||
if (ret < 0)
|
||||
@@ -184,10 +184,6 @@ int main(int argc, char **argv)
|
||||
}
|
||||
dst_bufsize = av_samples_get_buffer_size(&dst_linesize, dst_nb_channels,
|
||||
ret, dst_sample_fmt, 1);
|
||||
if (dst_bufsize < 0) {
|
||||
fprintf(stderr, "Could not get sample buffer size\n");
|
||||
goto end;
|
||||
}
|
||||
printf("t:%f in:%d out:%d\n", t, src_nb_samples, ret);
|
||||
fwrite(dst_data[0], 1, dst_bufsize, dst_file);
|
||||
} while (t < 10);
|
||||
@@ -199,7 +195,8 @@ int main(int argc, char **argv)
|
||||
fmt, dst_ch_layout, dst_nb_channels, dst_rate, dst_filename);
|
||||
|
||||
end:
|
||||
fclose(dst_file);
|
||||
if (dst_file)
|
||||
fclose(dst_file);
|
||||
|
||||
if (src_data)
|
||||
av_freep(&src_data[0]);
|
||||
|
@@ -23,7 +23,7 @@
|
||||
/**
|
||||
* @file
|
||||
* libswscale API use example.
|
||||
* @example scaling_video.c
|
||||
* @example doc/examples/scaling_video.c
|
||||
*/
|
||||
|
||||
#include <libavutil/imgutils.h>
|
||||
@@ -132,7 +132,8 @@ int main(int argc, char **argv)
|
||||
av_get_pix_fmt_name(dst_pix_fmt), dst_w, dst_h, dst_filename);
|
||||
|
||||
end:
|
||||
fclose(dst_file);
|
||||
if (dst_file)
|
||||
fclose(dst_file);
|
||||
av_freep(&src_data[0]);
|
||||
av_freep(&dst_data[0]);
|
||||
sws_freeContext(sws_ctx);
|
||||
|
@@ -1,770 +0,0 @@
|
||||
/*
|
||||
* This file is part of FFmpeg.
|
||||
*
|
||||
* FFmpeg is free software; you can redistribute it and/or
|
||||
* modify it under the terms of the GNU Lesser General Public
|
||||
* License as published by the Free Software Foundation; either
|
||||
* version 2.1 of the License, or (at your option) any later version.
|
||||
*
|
||||
* FFmpeg is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
* Lesser General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU Lesser General Public
|
||||
* License along with FFmpeg; if not, write to the Free Software
|
||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
|
||||
*/
|
||||
|
||||
/**
|
||||
* @file
|
||||
* simple audio converter
|
||||
*
|
||||
* @example transcode_aac.c
|
||||
* Convert an input audio file to AAC in an MP4 container using FFmpeg.
|
||||
* @author Andreas Unterweger (dustsigns@gmail.com)
|
||||
*/
|
||||
|
||||
#include <stdio.h>
|
||||
|
||||
#include "libavformat/avformat.h"
|
||||
#include "libavformat/avio.h"
|
||||
|
||||
#include "libavcodec/avcodec.h"
|
||||
|
||||
#include "libavutil/audio_fifo.h"
|
||||
#include "libavutil/avassert.h"
|
||||
#include "libavutil/avstring.h"
|
||||
#include "libavutil/frame.h"
|
||||
#include "libavutil/opt.h"
|
||||
|
||||
#include "libswresample/swresample.h"
|
||||
|
||||
/** The output bit rate in kbit/s */
|
||||
#define OUTPUT_BIT_RATE 96000
|
||||
/** The number of output channels */
|
||||
#define OUTPUT_CHANNELS 2
|
||||
|
||||
/**
|
||||
* Convert an error code into a text message.
|
||||
* @param error Error code to be converted
|
||||
* @return Corresponding error text (not thread-safe)
|
||||
*/
|
||||
static const char *get_error_text(const int error)
|
||||
{
|
||||
static char error_buffer[255];
|
||||
av_strerror(error, error_buffer, sizeof(error_buffer));
|
||||
return error_buffer;
|
||||
}
|
||||
|
||||
/** Open an input file and the required decoder. */
|
||||
static int open_input_file(const char *filename,
|
||||
AVFormatContext **input_format_context,
|
||||
AVCodecContext **input_codec_context)
|
||||
{
|
||||
AVCodec *input_codec;
|
||||
int error;
|
||||
|
||||
/** Open the input file to read from it. */
|
||||
if ((error = avformat_open_input(input_format_context, filename, NULL,
|
||||
NULL)) < 0) {
|
||||
fprintf(stderr, "Could not open input file '%s' (error '%s')\n",
|
||||
filename, get_error_text(error));
|
||||
*input_format_context = NULL;
|
||||
return error;
|
||||
}
|
||||
|
||||
/** Get information on the input file (number of streams etc.). */
|
||||
if ((error = avformat_find_stream_info(*input_format_context, NULL)) < 0) {
|
||||
fprintf(stderr, "Could not open find stream info (error '%s')\n",
|
||||
get_error_text(error));
|
||||
avformat_close_input(input_format_context);
|
||||
return error;
|
||||
}
|
||||
|
||||
/** Make sure that there is only one stream in the input file. */
|
||||
if ((*input_format_context)->nb_streams != 1) {
|
||||
fprintf(stderr, "Expected one audio input stream, but found %d\n",
|
||||
(*input_format_context)->nb_streams);
|
||||
avformat_close_input(input_format_context);
|
||||
return AVERROR_EXIT;
|
||||
}
|
||||
|
||||
/** Find a decoder for the audio stream. */
|
||||
if (!(input_codec = avcodec_find_decoder((*input_format_context)->streams[0]->codec->codec_id))) {
|
||||
fprintf(stderr, "Could not find input codec\n");
|
||||
avformat_close_input(input_format_context);
|
||||
return AVERROR_EXIT;
|
||||
}
|
||||
|
||||
/** Open the decoder for the audio stream to use it later. */
|
||||
if ((error = avcodec_open2((*input_format_context)->streams[0]->codec,
|
||||
input_codec, NULL)) < 0) {
|
||||
fprintf(stderr, "Could not open input codec (error '%s')\n",
|
||||
get_error_text(error));
|
||||
avformat_close_input(input_format_context);
|
||||
return error;
|
||||
}
|
||||
|
||||
/** Save the decoder context for easier access later. */
|
||||
*input_codec_context = (*input_format_context)->streams[0]->codec;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* Open an output file and the required encoder.
|
||||
* Also set some basic encoder parameters.
|
||||
* Some of these parameters are based on the input file's parameters.
|
||||
*/
|
||||
static int open_output_file(const char *filename,
|
||||
AVCodecContext *input_codec_context,
|
||||
AVFormatContext **output_format_context,
|
||||
AVCodecContext **output_codec_context)
|
||||
{
|
||||
AVIOContext *output_io_context = NULL;
|
||||
AVStream *stream = NULL;
|
||||
AVCodec *output_codec = NULL;
|
||||
int error;
|
||||
|
||||
/** Open the output file to write to it. */
|
||||
if ((error = avio_open(&output_io_context, filename,
|
||||
AVIO_FLAG_WRITE)) < 0) {
|
||||
fprintf(stderr, "Could not open output file '%s' (error '%s')\n",
|
||||
filename, get_error_text(error));
|
||||
return error;
|
||||
}
|
||||
|
||||
/** Create a new format context for the output container format. */
|
||||
if (!(*output_format_context = avformat_alloc_context())) {
|
||||
fprintf(stderr, "Could not allocate output format context\n");
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
|
||||
/** Associate the output file (pointer) with the container format context. */
|
||||
(*output_format_context)->pb = output_io_context;
|
||||
|
||||
/** Guess the desired container format based on the file extension. */
|
||||
if (!((*output_format_context)->oformat = av_guess_format(NULL, filename,
|
||||
NULL))) {
|
||||
fprintf(stderr, "Could not find output file format\n");
|
||||
goto cleanup;
|
||||
}
|
||||
|
||||
av_strlcpy((*output_format_context)->filename, filename,
|
||||
sizeof((*output_format_context)->filename));
|
||||
|
||||
/** Find the encoder to be used by its name. */
|
||||
if (!(output_codec = avcodec_find_encoder(AV_CODEC_ID_AAC))) {
|
||||
fprintf(stderr, "Could not find an AAC encoder.\n");
|
||||
goto cleanup;
|
||||
}
|
||||
|
||||
/** Create a new audio stream in the output file container. */
|
||||
if (!(stream = avformat_new_stream(*output_format_context, output_codec))) {
|
||||
fprintf(stderr, "Could not create new stream\n");
|
||||
error = AVERROR(ENOMEM);
|
||||
goto cleanup;
|
||||
}
|
||||
|
||||
/** Save the encoder context for easier access later. */
|
||||
*output_codec_context = stream->codec;
|
||||
|
||||
/**
|
||||
* Set the basic encoder parameters.
|
||||
* The input file's sample rate is used to avoid a sample rate conversion.
|
||||
*/
|
||||
(*output_codec_context)->channels = OUTPUT_CHANNELS;
|
||||
(*output_codec_context)->channel_layout = av_get_default_channel_layout(OUTPUT_CHANNELS);
|
||||
(*output_codec_context)->sample_rate = input_codec_context->sample_rate;
|
||||
(*output_codec_context)->sample_fmt = output_codec->sample_fmts[0];
|
||||
(*output_codec_context)->bit_rate = OUTPUT_BIT_RATE;
|
||||
|
||||
/** Allow the use of the experimental AAC encoder */
|
||||
(*output_codec_context)->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL;
|
||||
|
||||
/** Set the sample rate for the container. */
|
||||
stream->time_base.den = input_codec_context->sample_rate;
|
||||
stream->time_base.num = 1;
|
||||
|
||||
/**
|
||||
* Some container formats (like MP4) require global headers to be present
|
||||
* Mark the encoder so that it behaves accordingly.
|
||||
*/
|
||||
if ((*output_format_context)->oformat->flags & AVFMT_GLOBALHEADER)
|
||||
(*output_codec_context)->flags |= CODEC_FLAG_GLOBAL_HEADER;
|
||||
|
||||
/** Open the encoder for the audio stream to use it later. */
|
||||
if ((error = avcodec_open2(*output_codec_context, output_codec, NULL)) < 0) {
|
||||
fprintf(stderr, "Could not open output codec (error '%s')\n",
|
||||
get_error_text(error));
|
||||
goto cleanup;
|
||||
}
|
||||
|
||||
return 0;
|
||||
|
||||
cleanup:
|
||||
avio_closep(&(*output_format_context)->pb);
|
||||
avformat_free_context(*output_format_context);
|
||||
*output_format_context = NULL;
|
||||
return error < 0 ? error : AVERROR_EXIT;
|
||||
}
|
||||
|
||||
/** Initialize one data packet for reading or writing. */
|
||||
static void init_packet(AVPacket *packet)
|
||||
{
|
||||
av_init_packet(packet);
|
||||
/** Set the packet data and size so that it is recognized as being empty. */
|
||||
packet->data = NULL;
|
||||
packet->size = 0;
|
||||
}
|
||||
|
||||
/** Initialize one audio frame for reading from the input file */
|
||||
static int init_input_frame(AVFrame **frame)
|
||||
{
|
||||
if (!(*frame = av_frame_alloc())) {
|
||||
fprintf(stderr, "Could not allocate input frame\n");
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* Initialize the audio resampler based on the input and output codec settings.
|
||||
* If the input and output sample formats differ, a conversion is required
|
||||
* libswresample takes care of this, but requires initialization.
|
||||
*/
|
||||
static int init_resampler(AVCodecContext *input_codec_context,
|
||||
AVCodecContext *output_codec_context,
|
||||
SwrContext **resample_context)
|
||||
{
|
||||
int error;
|
||||
|
||||
/**
|
||||
* Create a resampler context for the conversion.
|
||||
* Set the conversion parameters.
|
||||
* Default channel layouts based on the number of channels
|
||||
* are assumed for simplicity (they are sometimes not detected
|
||||
* properly by the demuxer and/or decoder).
|
||||
*/
|
||||
*resample_context = swr_alloc_set_opts(NULL,
|
||||
av_get_default_channel_layout(output_codec_context->channels),
|
||||
output_codec_context->sample_fmt,
|
||||
output_codec_context->sample_rate,
|
||||
av_get_default_channel_layout(input_codec_context->channels),
|
||||
input_codec_context->sample_fmt,
|
||||
input_codec_context->sample_rate,
|
||||
0, NULL);
|
||||
if (!*resample_context) {
|
||||
fprintf(stderr, "Could not allocate resample context\n");
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
/**
|
||||
* Perform a sanity check so that the number of converted samples is
|
||||
* not greater than the number of samples to be converted.
|
||||
* If the sample rates differ, this case has to be handled differently
|
||||
*/
|
||||
av_assert0(output_codec_context->sample_rate == input_codec_context->sample_rate);
|
||||
|
||||
/** Open the resampler with the specified parameters. */
|
||||
if ((error = swr_init(*resample_context)) < 0) {
|
||||
fprintf(stderr, "Could not open resample context\n");
|
||||
swr_free(resample_context);
|
||||
return error;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
/** Initialize a FIFO buffer for the audio samples to be encoded. */
|
||||
static int init_fifo(AVAudioFifo **fifo, AVCodecContext *output_codec_context)
|
||||
{
|
||||
/** Create the FIFO buffer based on the specified output sample format. */
|
||||
if (!(*fifo = av_audio_fifo_alloc(output_codec_context->sample_fmt,
|
||||
output_codec_context->channels, 1))) {
|
||||
fprintf(stderr, "Could not allocate FIFO\n");
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
/** Write the header of the output file container. */
|
||||
static int write_output_file_header(AVFormatContext *output_format_context)
|
||||
{
|
||||
int error;
|
||||
if ((error = avformat_write_header(output_format_context, NULL)) < 0) {
|
||||
fprintf(stderr, "Could not write output file header (error '%s')\n",
|
||||
get_error_text(error));
|
||||
return error;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
/** Decode one audio frame from the input file. */
|
||||
static int decode_audio_frame(AVFrame *frame,
|
||||
AVFormatContext *input_format_context,
|
||||
AVCodecContext *input_codec_context,
|
||||
int *data_present, int *finished)
|
||||
{
|
||||
/** Packet used for temporary storage. */
|
||||
AVPacket input_packet;
|
||||
int error;
|
||||
init_packet(&input_packet);
|
||||
|
||||
/** Read one audio frame from the input file into a temporary packet. */
|
||||
if ((error = av_read_frame(input_format_context, &input_packet)) < 0) {
|
||||
/** If we are at the end of the file, flush the decoder below. */
|
||||
if (error == AVERROR_EOF)
|
||||
*finished = 1;
|
||||
else {
|
||||
fprintf(stderr, "Could not read frame (error '%s')\n",
|
||||
get_error_text(error));
|
||||
return error;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Decode the audio frame stored in the temporary packet.
|
||||
* The input audio stream decoder is used to do this.
|
||||
* If we are at the end of the file, pass an empty packet to the decoder
|
||||
* to flush it.
|
||||
*/
|
||||
if ((error = avcodec_decode_audio4(input_codec_context, frame,
|
||||
data_present, &input_packet)) < 0) {
|
||||
fprintf(stderr, "Could not decode frame (error '%s')\n",
|
||||
get_error_text(error));
|
||||
av_free_packet(&input_packet);
|
||||
return error;
|
||||
}
|
||||
|
||||
/**
|
||||
* If the decoder has not been flushed completely, we are not finished,
|
||||
* so that this function has to be called again.
|
||||
*/
|
||||
if (*finished && *data_present)
|
||||
*finished = 0;
|
||||
av_free_packet(&input_packet);
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* Initialize a temporary storage for the specified number of audio samples.
|
||||
* The conversion requires temporary storage due to the different format.
|
||||
* The number of audio samples to be allocated is specified in frame_size.
|
||||
*/
|
||||
static int init_converted_samples(uint8_t ***converted_input_samples,
|
||||
AVCodecContext *output_codec_context,
|
||||
int frame_size)
|
||||
{
|
||||
int error;
|
||||
|
||||
/**
|
||||
* Allocate as many pointers as there are audio channels.
|
||||
* Each pointer will later point to the audio samples of the corresponding
|
||||
* channels (although it may be NULL for interleaved formats).
|
||||
*/
|
||||
if (!(*converted_input_samples = calloc(output_codec_context->channels,
|
||||
sizeof(**converted_input_samples)))) {
|
||||
fprintf(stderr, "Could not allocate converted input sample pointers\n");
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
|
||||
/**
|
||||
* Allocate memory for the samples of all channels in one consecutive
|
||||
* block for convenience.
|
||||
*/
|
||||
if ((error = av_samples_alloc(*converted_input_samples, NULL,
|
||||
output_codec_context->channels,
|
||||
frame_size,
|
||||
output_codec_context->sample_fmt, 0)) < 0) {
|
||||
fprintf(stderr,
|
||||
"Could not allocate converted input samples (error '%s')\n",
|
||||
get_error_text(error));
|
||||
av_freep(&(*converted_input_samples)[0]);
|
||||
free(*converted_input_samples);
|
||||
return error;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* Convert the input audio samples into the output sample format.
|
||||
* The conversion happens on a per-frame basis, the size of which is specified
|
||||
* by frame_size.
|
||||
*/
|
||||
static int convert_samples(const uint8_t **input_data,
|
||||
uint8_t **converted_data, const int frame_size,
|
||||
SwrContext *resample_context)
|
||||
{
|
||||
int error;
|
||||
|
||||
/** Convert the samples using the resampler. */
|
||||
if ((error = swr_convert(resample_context,
|
||||
converted_data, frame_size,
|
||||
input_data , frame_size)) < 0) {
|
||||
fprintf(stderr, "Could not convert input samples (error '%s')\n",
|
||||
get_error_text(error));
|
||||
return error;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/** Add converted input audio samples to the FIFO buffer for later processing. */
|
||||
static int add_samples_to_fifo(AVAudioFifo *fifo,
|
||||
uint8_t **converted_input_samples,
|
||||
const int frame_size)
|
||||
{
|
||||
int error;
|
||||
|
||||
/**
|
||||
* Make the FIFO as large as it needs to be to hold both,
|
||||
* the old and the new samples.
|
||||
*/
|
||||
if ((error = av_audio_fifo_realloc(fifo, av_audio_fifo_size(fifo) + frame_size)) < 0) {
|
||||
fprintf(stderr, "Could not reallocate FIFO\n");
|
||||
return error;
|
||||
}
|
||||
|
||||
/** Store the new samples in the FIFO buffer. */
|
||||
if (av_audio_fifo_write(fifo, (void **)converted_input_samples,
|
||||
frame_size) < frame_size) {
|
||||
fprintf(stderr, "Could not write data to FIFO\n");
|
||||
return AVERROR_EXIT;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* Read one audio frame from the input file, decodes, converts and stores
|
||||
* it in the FIFO buffer.
|
||||
*/
|
||||
static int read_decode_convert_and_store(AVAudioFifo *fifo,
|
||||
AVFormatContext *input_format_context,
|
||||
AVCodecContext *input_codec_context,
|
||||
AVCodecContext *output_codec_context,
|
||||
SwrContext *resampler_context,
|
||||
int *finished)
|
||||
{
|
||||
/** Temporary storage of the input samples of the frame read from the file. */
|
||||
AVFrame *input_frame = NULL;
|
||||
/** Temporary storage for the converted input samples. */
|
||||
uint8_t **converted_input_samples = NULL;
|
||||
int data_present;
|
||||
int ret = AVERROR_EXIT;
|
||||
|
||||
/** Initialize temporary storage for one input frame. */
|
||||
if (init_input_frame(&input_frame))
|
||||
goto cleanup;
|
||||
/** Decode one frame worth of audio samples. */
|
||||
if (decode_audio_frame(input_frame, input_format_context,
|
||||
input_codec_context, &data_present, finished))
|
||||
goto cleanup;
|
||||
/**
|
||||
* If we are at the end of the file and there are no more samples
|
||||
* in the decoder which are delayed, we are actually finished.
|
||||
* This must not be treated as an error.
|
||||
*/
|
||||
if (*finished && !data_present) {
|
||||
ret = 0;
|
||||
goto cleanup;
|
||||
}
|
||||
/** If there is decoded data, convert and store it */
|
||||
if (data_present) {
|
||||
/** Initialize the temporary storage for the converted input samples. */
|
||||
if (init_converted_samples(&converted_input_samples, output_codec_context,
|
||||
input_frame->nb_samples))
|
||||
goto cleanup;
|
||||
|
||||
/**
|
||||
* Convert the input samples to the desired output sample format.
|
||||
* This requires a temporary storage provided by converted_input_samples.
|
||||
*/
|
||||
if (convert_samples((const uint8_t**)input_frame->extended_data, converted_input_samples,
|
||||
input_frame->nb_samples, resampler_context))
|
||||
goto cleanup;
|
||||
|
||||
/** Add the converted input samples to the FIFO buffer for later processing. */
|
||||
if (add_samples_to_fifo(fifo, converted_input_samples,
|
||||
input_frame->nb_samples))
|
||||
goto cleanup;
|
||||
ret = 0;
|
||||
}
|
||||
ret = 0;
|
||||
|
||||
cleanup:
|
||||
if (converted_input_samples) {
|
||||
av_freep(&converted_input_samples[0]);
|
||||
free(converted_input_samples);
|
||||
}
|
||||
av_frame_free(&input_frame);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
/**
|
||||
* Initialize one input frame for writing to the output file.
|
||||
* The frame will be exactly frame_size samples large.
|
||||
*/
|
||||
static int init_output_frame(AVFrame **frame,
|
||||
AVCodecContext *output_codec_context,
|
||||
int frame_size)
|
||||
{
|
||||
int error;
|
||||
|
||||
/** Create a new frame to store the audio samples. */
|
||||
if (!(*frame = av_frame_alloc())) {
|
||||
fprintf(stderr, "Could not allocate output frame\n");
|
||||
return AVERROR_EXIT;
|
||||
}
|
||||
|
||||
/**
|
||||
* Set the frame's parameters, especially its size and format.
|
||||
* av_frame_get_buffer needs this to allocate memory for the
|
||||
* audio samples of the frame.
|
||||
* Default channel layouts based on the number of channels
|
||||
* are assumed for simplicity.
|
||||
*/
|
||||
(*frame)->nb_samples = frame_size;
|
||||
(*frame)->channel_layout = output_codec_context->channel_layout;
|
||||
(*frame)->format = output_codec_context->sample_fmt;
|
||||
(*frame)->sample_rate = output_codec_context->sample_rate;
|
||||
|
||||
/**
|
||||
* Allocate the samples of the created frame. This call will make
|
||||
* sure that the audio frame can hold as many samples as specified.
|
||||
*/
|
||||
if ((error = av_frame_get_buffer(*frame, 0)) < 0) {
|
||||
fprintf(stderr, "Could allocate output frame samples (error '%s')\n",
|
||||
get_error_text(error));
|
||||
av_frame_free(frame);
|
||||
return error;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/** Global timestamp for the audio frames */
|
||||
static int64_t pts = 0;
|
||||
|
||||
/** Encode one frame worth of audio to the output file. */
|
||||
static int encode_audio_frame(AVFrame *frame,
|
||||
AVFormatContext *output_format_context,
|
||||
AVCodecContext *output_codec_context,
|
||||
int *data_present)
|
||||
{
|
||||
/** Packet used for temporary storage. */
|
||||
AVPacket output_packet;
|
||||
int error;
|
||||
init_packet(&output_packet);
|
||||
|
||||
/** Set a timestamp based on the sample rate for the container. */
|
||||
if (frame) {
|
||||
frame->pts = pts;
|
||||
pts += frame->nb_samples;
|
||||
}
|
||||
|
||||
/**
|
||||
* Encode the audio frame and store it in the temporary packet.
|
||||
* The output audio stream encoder is used to do this.
|
||||
*/
|
||||
if ((error = avcodec_encode_audio2(output_codec_context, &output_packet,
|
||||
frame, data_present)) < 0) {
|
||||
fprintf(stderr, "Could not encode frame (error '%s')\n",
|
||||
get_error_text(error));
|
||||
av_free_packet(&output_packet);
|
||||
return error;
|
||||
}
|
||||
|
||||
/** Write one audio frame from the temporary packet to the output file. */
|
||||
if (*data_present) {
|
||||
if ((error = av_write_frame(output_format_context, &output_packet)) < 0) {
|
||||
fprintf(stderr, "Could not write frame (error '%s')\n",
|
||||
get_error_text(error));
|
||||
av_free_packet(&output_packet);
|
||||
return error;
|
||||
}
|
||||
|
||||
av_free_packet(&output_packet);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* Load one audio frame from the FIFO buffer, encode and write it to the
|
||||
* output file.
|
||||
*/
|
||||
static int load_encode_and_write(AVAudioFifo *fifo,
|
||||
AVFormatContext *output_format_context,
|
||||
AVCodecContext *output_codec_context)
|
||||
{
|
||||
/** Temporary storage of the output samples of the frame written to the file. */
|
||||
AVFrame *output_frame;
|
||||
/**
|
||||
* Use the maximum number of possible samples per frame.
|
||||
* If there is less than the maximum possible frame size in the FIFO
|
||||
* buffer use this number. Otherwise, use the maximum possible frame size
|
||||
*/
|
||||
const int frame_size = FFMIN(av_audio_fifo_size(fifo),
|
||||
output_codec_context->frame_size);
|
||||
int data_written;
|
||||
|
||||
/** Initialize temporary storage for one output frame. */
|
||||
if (init_output_frame(&output_frame, output_codec_context, frame_size))
|
||||
return AVERROR_EXIT;
|
||||
|
||||
/**
|
||||
* Read as many samples from the FIFO buffer as required to fill the frame.
|
||||
* The samples are stored in the frame temporarily.
|
||||
*/
|
||||
if (av_audio_fifo_read(fifo, (void **)output_frame->data, frame_size) < frame_size) {
|
||||
fprintf(stderr, "Could not read data from FIFO\n");
|
||||
av_frame_free(&output_frame);
|
||||
return AVERROR_EXIT;
|
||||
}
|
||||
|
||||
/** Encode one frame worth of audio samples. */
|
||||
if (encode_audio_frame(output_frame, output_format_context,
|
||||
output_codec_context, &data_written)) {
|
||||
av_frame_free(&output_frame);
|
||||
return AVERROR_EXIT;
|
||||
}
|
||||
av_frame_free(&output_frame);
|
||||
return 0;
|
||||
}
|
||||
|
||||
/** Write the trailer of the output file container. */
|
||||
static int write_output_file_trailer(AVFormatContext *output_format_context)
|
||||
{
|
||||
int error;
|
||||
if ((error = av_write_trailer(output_format_context)) < 0) {
|
||||
fprintf(stderr, "Could not write output file trailer (error '%s')\n",
|
||||
get_error_text(error));
|
||||
return error;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
/** Convert an audio file to an AAC file in an MP4 container. */
|
||||
int main(int argc, char **argv)
|
||||
{
|
||||
AVFormatContext *input_format_context = NULL, *output_format_context = NULL;
|
||||
AVCodecContext *input_codec_context = NULL, *output_codec_context = NULL;
|
||||
SwrContext *resample_context = NULL;
|
||||
AVAudioFifo *fifo = NULL;
|
||||
int ret = AVERROR_EXIT;
|
||||
|
||||
if (argc < 3) {
|
||||
fprintf(stderr, "Usage: %s <input file> <output file>\n", argv[0]);
|
||||
exit(1);
|
||||
}
|
||||
|
||||
/** Register all codecs and formats so that they can be used. */
|
||||
av_register_all();
|
||||
/** Open the input file for reading. */
|
||||
if (open_input_file(argv[1], &input_format_context,
|
||||
&input_codec_context))
|
||||
goto cleanup;
|
||||
/** Open the output file for writing. */
|
||||
if (open_output_file(argv[2], input_codec_context,
|
||||
&output_format_context, &output_codec_context))
|
||||
goto cleanup;
|
||||
/** Initialize the resampler to be able to convert audio sample formats. */
|
||||
if (init_resampler(input_codec_context, output_codec_context,
|
||||
&resample_context))
|
||||
goto cleanup;
|
||||
/** Initialize the FIFO buffer to store audio samples to be encoded. */
|
||||
if (init_fifo(&fifo, output_codec_context))
|
||||
goto cleanup;
|
||||
/** Write the header of the output file container. */
|
||||
if (write_output_file_header(output_format_context))
|
||||
goto cleanup;
|
||||
|
||||
/**
|
||||
* Loop as long as we have input samples to read or output samples
|
||||
* to write; abort as soon as we have neither.
|
||||
*/
|
||||
while (1) {
|
||||
/** Use the encoder's desired frame size for processing. */
|
||||
const int output_frame_size = output_codec_context->frame_size;
|
||||
int finished = 0;
|
||||
|
||||
/**
|
||||
* Make sure that there is one frame worth of samples in the FIFO
|
||||
* buffer so that the encoder can do its work.
|
||||
* Since the decoder's and the encoder's frame size may differ, we
|
||||
* need to FIFO buffer to store as many frames worth of input samples
|
||||
* that they make up at least one frame worth of output samples.
|
||||
*/
|
||||
while (av_audio_fifo_size(fifo) < output_frame_size) {
|
||||
/**
|
||||
* Decode one frame worth of audio samples, convert it to the
|
||||
* output sample format and put it into the FIFO buffer.
|
||||
*/
|
||||
if (read_decode_convert_and_store(fifo, input_format_context,
|
||||
input_codec_context,
|
||||
output_codec_context,
|
||||
resample_context, &finished))
|
||||
goto cleanup;
|
||||
|
||||
/**
|
||||
* If we are at the end of the input file, we continue
|
||||
* encoding the remaining audio samples to the output file.
|
||||
*/
|
||||
if (finished)
|
||||
break;
|
||||
}
|
||||
|
||||
/**
|
||||
* If we have enough samples for the encoder, we encode them.
|
||||
* At the end of the file, we pass the remaining samples to
|
||||
* the encoder.
|
||||
*/
|
||||
while (av_audio_fifo_size(fifo) >= output_frame_size ||
|
||||
(finished && av_audio_fifo_size(fifo) > 0))
|
||||
/**
|
||||
* Take one frame worth of audio samples from the FIFO buffer,
|
||||
* encode it and write it to the output file.
|
||||
*/
|
||||
if (load_encode_and_write(fifo, output_format_context,
|
||||
output_codec_context))
|
||||
goto cleanup;
|
||||
|
||||
/**
|
||||
* If we are at the end of the input file and have encoded
|
||||
* all remaining samples, we can exit this loop and finish.
|
||||
*/
|
||||
if (finished) {
|
||||
int data_written;
|
||||
/** Flush the encoder as it may have delayed frames. */
|
||||
do {
|
||||
if (encode_audio_frame(NULL, output_format_context,
|
||||
output_codec_context, &data_written))
|
||||
goto cleanup;
|
||||
} while (data_written);
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
/** Write the trailer of the output file container. */
|
||||
if (write_output_file_trailer(output_format_context))
|
||||
goto cleanup;
|
||||
ret = 0;
|
||||
|
||||
cleanup:
|
||||
if (fifo)
|
||||
av_audio_fifo_free(fifo);
|
||||
swr_free(&resample_context);
|
||||
if (output_codec_context)
|
||||
avcodec_close(output_codec_context);
|
||||
if (output_format_context) {
|
||||
avio_closep(&output_format_context->pb);
|
||||
avformat_free_context(output_format_context);
|
||||
}
|
||||
if (input_codec_context)
|
||||
avcodec_close(input_codec_context);
|
||||
if (input_format_context)
|
||||
avformat_close_input(&input_format_context);
|
||||
|
||||
return ret;
|
||||
}
|
@@ -1,583 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2010 Nicolas George
|
||||
* Copyright (c) 2011 Stefano Sabatini
|
||||
* Copyright (c) 2014 Andrey Utkin
|
||||
*
|
||||
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
* of this software and associated documentation files (the "Software"), to deal
|
||||
* in the Software without restriction, including without limitation the rights
|
||||
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
* copies of the Software, and to permit persons to whom the Software is
|
||||
* furnished to do so, subject to the following conditions:
|
||||
*
|
||||
* The above copyright notice and this permission notice shall be included in
|
||||
* all copies or substantial portions of the Software.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
|
||||
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
||||
* THE SOFTWARE.
|
||||
*/
|
||||
|
||||
/**
|
||||
* @file
|
||||
* API example for demuxing, decoding, filtering, encoding and muxing
|
||||
* @example transcoding.c
|
||||
*/
|
||||
|
||||
#include <libavcodec/avcodec.h>
|
||||
#include <libavformat/avformat.h>
|
||||
#include <libavfilter/avfiltergraph.h>
|
||||
#include <libavfilter/avcodec.h>
|
||||
#include <libavfilter/buffersink.h>
|
||||
#include <libavfilter/buffersrc.h>
|
||||
#include <libavutil/opt.h>
|
||||
#include <libavutil/pixdesc.h>
|
||||
|
||||
static AVFormatContext *ifmt_ctx;
|
||||
static AVFormatContext *ofmt_ctx;
|
||||
typedef struct FilteringContext {
|
||||
AVFilterContext *buffersink_ctx;
|
||||
AVFilterContext *buffersrc_ctx;
|
||||
AVFilterGraph *filter_graph;
|
||||
} FilteringContext;
|
||||
static FilteringContext *filter_ctx;
|
||||
|
||||
static int open_input_file(const char *filename)
|
||||
{
|
||||
int ret;
|
||||
unsigned int i;
|
||||
|
||||
ifmt_ctx = NULL;
|
||||
if ((ret = avformat_open_input(&ifmt_ctx, filename, NULL, NULL)) < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot open input file\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
if ((ret = avformat_find_stream_info(ifmt_ctx, NULL)) < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot find stream information\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
|
||||
AVStream *stream;
|
||||
AVCodecContext *codec_ctx;
|
||||
stream = ifmt_ctx->streams[i];
|
||||
codec_ctx = stream->codec;
|
||||
/* Reencode video & audio and remux subtitles etc. */
|
||||
if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
|
||||
|| codec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
|
||||
/* Open decoder */
|
||||
ret = avcodec_open2(codec_ctx,
|
||||
avcodec_find_decoder(codec_ctx->codec_id), NULL);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Failed to open decoder for stream #%u\n", i);
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
av_dump_format(ifmt_ctx, 0, filename, 0);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int open_output_file(const char *filename)
|
||||
{
|
||||
AVStream *out_stream;
|
||||
AVStream *in_stream;
|
||||
AVCodecContext *dec_ctx, *enc_ctx;
|
||||
AVCodec *encoder;
|
||||
int ret;
|
||||
unsigned int i;
|
||||
|
||||
ofmt_ctx = NULL;
|
||||
avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, filename);
|
||||
if (!ofmt_ctx) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Could not create output context\n");
|
||||
return AVERROR_UNKNOWN;
|
||||
}
|
||||
|
||||
|
||||
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
|
||||
out_stream = avformat_new_stream(ofmt_ctx, NULL);
|
||||
if (!out_stream) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Failed allocating output stream\n");
|
||||
return AVERROR_UNKNOWN;
|
||||
}
|
||||
|
||||
in_stream = ifmt_ctx->streams[i];
|
||||
dec_ctx = in_stream->codec;
|
||||
enc_ctx = out_stream->codec;
|
||||
|
||||
if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
|
||||
|| dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
|
||||
/* in this example, we choose transcoding to same codec */
|
||||
encoder = avcodec_find_encoder(dec_ctx->codec_id);
|
||||
if (!encoder) {
|
||||
av_log(NULL, AV_LOG_FATAL, "Neccessary encoder not found\n");
|
||||
return AVERROR_INVALIDDATA;
|
||||
}
|
||||
|
||||
/* In this example, we transcode to same properties (picture size,
|
||||
* sample rate etc.). These properties can be changed for output
|
||||
* streams easily using filters */
|
||||
if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
|
||||
enc_ctx->height = dec_ctx->height;
|
||||
enc_ctx->width = dec_ctx->width;
|
||||
enc_ctx->sample_aspect_ratio = dec_ctx->sample_aspect_ratio;
|
||||
/* take first format from list of supported formats */
|
||||
enc_ctx->pix_fmt = encoder->pix_fmts[0];
|
||||
/* video time_base can be set to whatever is handy and supported by encoder */
|
||||
enc_ctx->time_base = dec_ctx->time_base;
|
||||
} else {
|
||||
enc_ctx->sample_rate = dec_ctx->sample_rate;
|
||||
enc_ctx->channel_layout = dec_ctx->channel_layout;
|
||||
enc_ctx->channels = av_get_channel_layout_nb_channels(enc_ctx->channel_layout);
|
||||
/* take first format from list of supported formats */
|
||||
enc_ctx->sample_fmt = encoder->sample_fmts[0];
|
||||
enc_ctx->time_base = (AVRational){1, enc_ctx->sample_rate};
|
||||
}
|
||||
|
||||
/* Third parameter can be used to pass settings to encoder */
|
||||
ret = avcodec_open2(enc_ctx, encoder, NULL);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot open video encoder for stream #%u\n", i);
|
||||
return ret;
|
||||
}
|
||||
} else if (dec_ctx->codec_type == AVMEDIA_TYPE_UNKNOWN) {
|
||||
av_log(NULL, AV_LOG_FATAL, "Elementary stream #%d is of unknown type, cannot proceed\n", i);
|
||||
return AVERROR_INVALIDDATA;
|
||||
} else {
|
||||
/* if this stream must be remuxed */
|
||||
ret = avcodec_copy_context(ofmt_ctx->streams[i]->codec,
|
||||
ifmt_ctx->streams[i]->codec);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Copying stream context failed\n");
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
|
||||
if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
|
||||
enc_ctx->flags |= CODEC_FLAG_GLOBAL_HEADER;
|
||||
|
||||
}
|
||||
av_dump_format(ofmt_ctx, 0, filename, 1);
|
||||
|
||||
if (!(ofmt_ctx->oformat->flags & AVFMT_NOFILE)) {
|
||||
ret = avio_open(&ofmt_ctx->pb, filename, AVIO_FLAG_WRITE);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Could not open output file '%s'", filename);
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
|
||||
/* init muxer, write output file header */
|
||||
ret = avformat_write_header(ofmt_ctx, NULL);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Error occurred when opening output file\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int init_filter(FilteringContext* fctx, AVCodecContext *dec_ctx,
|
||||
AVCodecContext *enc_ctx, const char *filter_spec)
|
||||
{
|
||||
char args[512];
|
||||
int ret = 0;
|
||||
AVFilter *buffersrc = NULL;
|
||||
AVFilter *buffersink = NULL;
|
||||
AVFilterContext *buffersrc_ctx = NULL;
|
||||
AVFilterContext *buffersink_ctx = NULL;
|
||||
AVFilterInOut *outputs = avfilter_inout_alloc();
|
||||
AVFilterInOut *inputs = avfilter_inout_alloc();
|
||||
AVFilterGraph *filter_graph = avfilter_graph_alloc();
|
||||
|
||||
if (!outputs || !inputs || !filter_graph) {
|
||||
ret = AVERROR(ENOMEM);
|
||||
goto end;
|
||||
}
|
||||
|
||||
if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
|
||||
buffersrc = avfilter_get_by_name("buffer");
|
||||
buffersink = avfilter_get_by_name("buffersink");
|
||||
if (!buffersrc || !buffersink) {
|
||||
av_log(NULL, AV_LOG_ERROR, "filtering source or sink element not found\n");
|
||||
ret = AVERROR_UNKNOWN;
|
||||
goto end;
|
||||
}
|
||||
|
||||
snprintf(args, sizeof(args),
|
||||
"video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
|
||||
dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
|
||||
dec_ctx->time_base.num, dec_ctx->time_base.den,
|
||||
dec_ctx->sample_aspect_ratio.num,
|
||||
dec_ctx->sample_aspect_ratio.den);
|
||||
|
||||
ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
|
||||
args, NULL, filter_graph);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot create buffer source\n");
|
||||
goto end;
|
||||
}
|
||||
|
||||
ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
|
||||
NULL, NULL, filter_graph);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot create buffer sink\n");
|
||||
goto end;
|
||||
}
|
||||
|
||||
ret = av_opt_set_bin(buffersink_ctx, "pix_fmts",
|
||||
(uint8_t*)&enc_ctx->pix_fmt, sizeof(enc_ctx->pix_fmt),
|
||||
AV_OPT_SEARCH_CHILDREN);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot set output pixel format\n");
|
||||
goto end;
|
||||
}
|
||||
} else if (dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
|
||||
buffersrc = avfilter_get_by_name("abuffer");
|
||||
buffersink = avfilter_get_by_name("abuffersink");
|
||||
if (!buffersrc || !buffersink) {
|
||||
av_log(NULL, AV_LOG_ERROR, "filtering source or sink element not found\n");
|
||||
ret = AVERROR_UNKNOWN;
|
||||
goto end;
|
||||
}
|
||||
|
||||
if (!dec_ctx->channel_layout)
|
||||
dec_ctx->channel_layout =
|
||||
av_get_default_channel_layout(dec_ctx->channels);
|
||||
snprintf(args, sizeof(args),
|
||||
"time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=0x%"PRIx64,
|
||||
dec_ctx->time_base.num, dec_ctx->time_base.den, dec_ctx->sample_rate,
|
||||
av_get_sample_fmt_name(dec_ctx->sample_fmt),
|
||||
dec_ctx->channel_layout);
|
||||
ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
|
||||
args, NULL, filter_graph);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer source\n");
|
||||
goto end;
|
||||
}
|
||||
|
||||
ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
|
||||
NULL, NULL, filter_graph);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer sink\n");
|
||||
goto end;
|
||||
}
|
||||
|
||||
ret = av_opt_set_bin(buffersink_ctx, "sample_fmts",
|
||||
(uint8_t*)&enc_ctx->sample_fmt, sizeof(enc_ctx->sample_fmt),
|
||||
AV_OPT_SEARCH_CHILDREN);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot set output sample format\n");
|
||||
goto end;
|
||||
}
|
||||
|
||||
ret = av_opt_set_bin(buffersink_ctx, "channel_layouts",
|
||||
(uint8_t*)&enc_ctx->channel_layout,
|
||||
sizeof(enc_ctx->channel_layout), AV_OPT_SEARCH_CHILDREN);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot set output channel layout\n");
|
||||
goto end;
|
||||
}
|
||||
|
||||
ret = av_opt_set_bin(buffersink_ctx, "sample_rates",
|
||||
(uint8_t*)&enc_ctx->sample_rate, sizeof(enc_ctx->sample_rate),
|
||||
AV_OPT_SEARCH_CHILDREN);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot set output sample rate\n");
|
||||
goto end;
|
||||
}
|
||||
} else {
|
||||
ret = AVERROR_UNKNOWN;
|
||||
goto end;
|
||||
}
|
||||
|
||||
/* Endpoints for the filter graph. */
|
||||
outputs->name = av_strdup("in");
|
||||
outputs->filter_ctx = buffersrc_ctx;
|
||||
outputs->pad_idx = 0;
|
||||
outputs->next = NULL;
|
||||
|
||||
inputs->name = av_strdup("out");
|
||||
inputs->filter_ctx = buffersink_ctx;
|
||||
inputs->pad_idx = 0;
|
||||
inputs->next = NULL;
|
||||
|
||||
if (!outputs->name || !inputs->name) {
|
||||
ret = AVERROR(ENOMEM);
|
||||
goto end;
|
||||
}
|
||||
|
||||
if ((ret = avfilter_graph_parse_ptr(filter_graph, filter_spec,
|
||||
&inputs, &outputs, NULL)) < 0)
|
||||
goto end;
|
||||
|
||||
if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
|
||||
goto end;
|
||||
|
||||
/* Fill FilteringContext */
|
||||
fctx->buffersrc_ctx = buffersrc_ctx;
|
||||
fctx->buffersink_ctx = buffersink_ctx;
|
||||
fctx->filter_graph = filter_graph;
|
||||
|
||||
end:
|
||||
avfilter_inout_free(&inputs);
|
||||
avfilter_inout_free(&outputs);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int init_filters(void)
|
||||
{
|
||||
const char *filter_spec;
|
||||
unsigned int i;
|
||||
int ret;
|
||||
filter_ctx = av_malloc_array(ifmt_ctx->nb_streams, sizeof(*filter_ctx));
|
||||
if (!filter_ctx)
|
||||
return AVERROR(ENOMEM);
|
||||
|
||||
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
|
||||
filter_ctx[i].buffersrc_ctx = NULL;
|
||||
filter_ctx[i].buffersink_ctx = NULL;
|
||||
filter_ctx[i].filter_graph = NULL;
|
||||
if (!(ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO
|
||||
|| ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO))
|
||||
continue;
|
||||
|
||||
|
||||
if (ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
|
||||
filter_spec = "null"; /* passthrough (dummy) filter for video */
|
||||
else
|
||||
filter_spec = "anull"; /* passthrough (dummy) filter for audio */
|
||||
ret = init_filter(&filter_ctx[i], ifmt_ctx->streams[i]->codec,
|
||||
ofmt_ctx->streams[i]->codec, filter_spec);
|
||||
if (ret)
|
||||
return ret;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int encode_write_frame(AVFrame *filt_frame, unsigned int stream_index, int *got_frame) {
|
||||
int ret;
|
||||
int got_frame_local;
|
||||
AVPacket enc_pkt;
|
||||
int (*enc_func)(AVCodecContext *, AVPacket *, const AVFrame *, int *) =
|
||||
(ifmt_ctx->streams[stream_index]->codec->codec_type ==
|
||||
AVMEDIA_TYPE_VIDEO) ? avcodec_encode_video2 : avcodec_encode_audio2;
|
||||
|
||||
if (!got_frame)
|
||||
got_frame = &got_frame_local;
|
||||
|
||||
av_log(NULL, AV_LOG_INFO, "Encoding frame\n");
|
||||
/* encode filtered frame */
|
||||
enc_pkt.data = NULL;
|
||||
enc_pkt.size = 0;
|
||||
av_init_packet(&enc_pkt);
|
||||
ret = enc_func(ofmt_ctx->streams[stream_index]->codec, &enc_pkt,
|
||||
filt_frame, got_frame);
|
||||
av_frame_free(&filt_frame);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
if (!(*got_frame))
|
||||
return 0;
|
||||
|
||||
/* prepare packet for muxing */
|
||||
enc_pkt.stream_index = stream_index;
|
||||
av_packet_rescale_ts(&enc_pkt,
|
||||
ofmt_ctx->streams[stream_index]->codec->time_base,
|
||||
ofmt_ctx->streams[stream_index]->time_base);
|
||||
|
||||
av_log(NULL, AV_LOG_DEBUG, "Muxing frame\n");
|
||||
/* mux encoded frame */
|
||||
ret = av_interleaved_write_frame(ofmt_ctx, &enc_pkt);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int filter_encode_write_frame(AVFrame *frame, unsigned int stream_index)
|
||||
{
|
||||
int ret;
|
||||
AVFrame *filt_frame;
|
||||
|
||||
av_log(NULL, AV_LOG_INFO, "Pushing decoded frame to filters\n");
|
||||
/* push the decoded frame into the filtergraph */
|
||||
ret = av_buffersrc_add_frame_flags(filter_ctx[stream_index].buffersrc_ctx,
|
||||
frame, 0);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Error while feeding the filtergraph\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
/* pull filtered frames from the filtergraph */
|
||||
while (1) {
|
||||
filt_frame = av_frame_alloc();
|
||||
if (!filt_frame) {
|
||||
ret = AVERROR(ENOMEM);
|
||||
break;
|
||||
}
|
||||
av_log(NULL, AV_LOG_INFO, "Pulling filtered frame from filters\n");
|
||||
ret = av_buffersink_get_frame(filter_ctx[stream_index].buffersink_ctx,
|
||||
filt_frame);
|
||||
if (ret < 0) {
|
||||
/* if no more frames for output - returns AVERROR(EAGAIN)
|
||||
* if flushed and no more frames for output - returns AVERROR_EOF
|
||||
* rewrite retcode to 0 to show it as normal procedure completion
|
||||
*/
|
||||
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
|
||||
ret = 0;
|
||||
av_frame_free(&filt_frame);
|
||||
break;
|
||||
}
|
||||
|
||||
filt_frame->pict_type = AV_PICTURE_TYPE_NONE;
|
||||
ret = encode_write_frame(filt_frame, stream_index, NULL);
|
||||
if (ret < 0)
|
||||
break;
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int flush_encoder(unsigned int stream_index)
|
||||
{
|
||||
int ret;
|
||||
int got_frame;
|
||||
|
||||
if (!(ofmt_ctx->streams[stream_index]->codec->codec->capabilities &
|
||||
CODEC_CAP_DELAY))
|
||||
return 0;
|
||||
|
||||
while (1) {
|
||||
av_log(NULL, AV_LOG_INFO, "Flushing stream #%u encoder\n", stream_index);
|
||||
ret = encode_write_frame(NULL, stream_index, &got_frame);
|
||||
if (ret < 0)
|
||||
break;
|
||||
if (!got_frame)
|
||||
return 0;
|
||||
}
|
||||
return ret;
|
||||
}
|
||||
|
||||
int main(int argc, char **argv)
|
||||
{
|
||||
int ret;
|
||||
AVPacket packet = { .data = NULL, .size = 0 };
|
||||
AVFrame *frame = NULL;
|
||||
enum AVMediaType type;
|
||||
unsigned int stream_index;
|
||||
unsigned int i;
|
||||
int got_frame;
|
||||
int (*dec_func)(AVCodecContext *, AVFrame *, int *, const AVPacket *);
|
||||
|
||||
if (argc != 3) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Usage: %s <input file> <output file>\n", argv[0]);
|
||||
return 1;
|
||||
}
|
||||
|
||||
av_register_all();
|
||||
avfilter_register_all();
|
||||
|
||||
if ((ret = open_input_file(argv[1])) < 0)
|
||||
goto end;
|
||||
if ((ret = open_output_file(argv[2])) < 0)
|
||||
goto end;
|
||||
if ((ret = init_filters()) < 0)
|
||||
goto end;
|
||||
|
||||
/* read all packets */
|
||||
while (1) {
|
||||
if ((ret = av_read_frame(ifmt_ctx, &packet)) < 0)
|
||||
break;
|
||||
stream_index = packet.stream_index;
|
||||
type = ifmt_ctx->streams[packet.stream_index]->codec->codec_type;
|
||||
av_log(NULL, AV_LOG_DEBUG, "Demuxer gave frame of stream_index %u\n",
|
||||
stream_index);
|
||||
|
||||
if (filter_ctx[stream_index].filter_graph) {
|
||||
av_log(NULL, AV_LOG_DEBUG, "Going to reencode&filter the frame\n");
|
||||
frame = av_frame_alloc();
|
||||
if (!frame) {
|
||||
ret = AVERROR(ENOMEM);
|
||||
break;
|
||||
}
|
||||
av_packet_rescale_ts(&packet,
|
||||
ifmt_ctx->streams[stream_index]->time_base,
|
||||
ifmt_ctx->streams[stream_index]->codec->time_base);
|
||||
dec_func = (type == AVMEDIA_TYPE_VIDEO) ? avcodec_decode_video2 :
|
||||
avcodec_decode_audio4;
|
||||
ret = dec_func(ifmt_ctx->streams[stream_index]->codec, frame,
|
||||
&got_frame, &packet);
|
||||
if (ret < 0) {
|
||||
av_frame_free(&frame);
|
||||
av_log(NULL, AV_LOG_ERROR, "Decoding failed\n");
|
||||
break;
|
||||
}
|
||||
|
||||
if (got_frame) {
|
||||
frame->pts = av_frame_get_best_effort_timestamp(frame);
|
||||
ret = filter_encode_write_frame(frame, stream_index);
|
||||
av_frame_free(&frame);
|
||||
if (ret < 0)
|
||||
goto end;
|
||||
} else {
|
||||
av_frame_free(&frame);
|
||||
}
|
||||
} else {
|
||||
/* remux this frame without reencoding */
|
||||
av_packet_rescale_ts(&packet,
|
||||
ifmt_ctx->streams[stream_index]->time_base,
|
||||
ofmt_ctx->streams[stream_index]->time_base);
|
||||
|
||||
ret = av_interleaved_write_frame(ofmt_ctx, &packet);
|
||||
if (ret < 0)
|
||||
goto end;
|
||||
}
|
||||
av_free_packet(&packet);
|
||||
}
|
||||
|
||||
/* flush filters and encoders */
|
||||
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
|
||||
/* flush filter */
|
||||
if (!filter_ctx[i].filter_graph)
|
||||
continue;
|
||||
ret = filter_encode_write_frame(NULL, i);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Flushing filter failed\n");
|
||||
goto end;
|
||||
}
|
||||
|
||||
/* flush encoder */
|
||||
ret = flush_encoder(i);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Flushing encoder failed\n");
|
||||
goto end;
|
||||
}
|
||||
}
|
||||
|
||||
av_write_trailer(ofmt_ctx);
|
||||
end:
|
||||
av_free_packet(&packet);
|
||||
av_frame_free(&frame);
|
||||
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
|
||||
avcodec_close(ifmt_ctx->streams[i]->codec);
|
||||
if (ofmt_ctx && ofmt_ctx->nb_streams > i && ofmt_ctx->streams[i] && ofmt_ctx->streams[i]->codec)
|
||||
avcodec_close(ofmt_ctx->streams[i]->codec);
|
||||
if (filter_ctx && filter_ctx[i].filter_graph)
|
||||
avfilter_graph_free(&filter_ctx[i].filter_graph);
|
||||
}
|
||||
av_free(filter_ctx);
|
||||
avformat_close_input(&ifmt_ctx);
|
||||
if (ofmt_ctx && !(ofmt_ctx->oformat->flags & AVFMT_NOFILE))
|
||||
avio_closep(&ofmt_ctx->pb);
|
||||
avformat_free_context(ofmt_ctx);
|
||||
|
||||
if (ret < 0)
|
||||
av_log(NULL, AV_LOG_ERROR, "Error occurred: %s\n", av_err2str(ret));
|
||||
|
||||
return ret ? 1 : 0;
|
||||
}
|
107 doc/faq.texi
@@ -1,5 +1,4 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle FFmpeg FAQ
|
||||
@titlepage
|
||||
@@ -91,56 +90,6 @@ To build FFmpeg, you need to install the development package. It is usually
|
||||
called @file{libfoo-dev} or @file{libfoo-devel}. You can remove it after the
|
||||
build is finished, but be sure to keep the main package.
|
||||
|
||||
@section How do I make @command{pkg-config} find my libraries?
|
||||
|
||||
Somewhere along with your libraries, there is a @file{.pc} file (or several)
|
||||
in a @file{pkgconfig} directory. You need to set environment variables to
|
||||
point @command{pkg-config} to these files.
|
||||
|
||||
If you need to @emph{add} directories to @command{pkg-config}'s search list
|
||||
(typical use case: library installed separately), add it to
|
||||
@code{$PKG_CONFIG_PATH}:
|
||||
|
||||
@example
|
||||
export PKG_CONFIG_PATH=/opt/x264/lib/pkgconfig:/opt/opus/lib/pkgconfig
|
||||
@end example
|
||||
|
||||
If you need to @emph{replace} @command{pkg-config}'s search list
|
||||
(typical use case: cross-compiling), set it in
|
||||
@code{$PKG_CONFIG_LIBDIR}:
|
||||
|
||||
@example
|
||||
export PKG_CONFIG_LIBDIR=/home/me/cross/usr/lib/pkgconfig:/home/me/cross/usr/local/lib/pkgconfig
|
||||
@end example
|
||||
|
||||
If you need to know the library's internal dependencies (typical use: static
|
||||
linking), add the @code{--static} option to @command{pkg-config}:
|
||||
|
||||
@example
|
||||
./configure --pkg-config-flags=--static
|
||||
@end example
|
||||
|
||||
@section How do I use @command{pkg-config} when cross-compiling?
|
||||
|
||||
The best way is to install @command{pkg-config} in your cross-compilation
|
||||
environment. It will automatically use the cross-compilation libraries.
|
||||
|
||||
You can also use @command{pkg-config} from the host environment by
|
||||
specifying explicitly @code{--pkg-config=pkg-config} to @command{configure}.
|
||||
In that case, you must point @command{pkg-config} to the correct directories
|
||||
using the @code{PKG_CONFIG_LIBDIR}, as explained in the previous entry.
|
||||
|
||||
As an intermediate solution, you can place in your cross-compilation
|
||||
environment a script that calls the host @command{pkg-config} with
|
||||
@code{PKG_CONFIG_LIBDIR} set. That script can look like that:
|
||||
|
||||
@example
|
||||
#!/bin/sh
|
||||
PKG_CONFIG_LIBDIR=/path/to/cross/lib/pkgconfig
|
||||
export PKG_CONFIG_LIBDIR
|
||||
exec /usr/bin/pkg-config "$@@"
|
||||
@end example
|
||||
|
||||
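For illustration (the wrapper path is only a placeholder), @command{configure}
can then be pointed at that script:

@example
./configure --pkg-config=/path/to/cross-pkg-config
@end example
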
@chapter Usage
|
||||
|
||||
@section ffmpeg does not work; what is wrong?
|
||||
@@ -419,6 +368,26 @@ ffmpeg -f u16le -acodec pcm_s16le -ac 2 -ar 44100 -i all.a \
|
||||
rm temp[12].[av] all.[av]
|
||||
@end example
|
||||
|
||||
@section -profile option fails when encoding H.264 video with AAC audio

@command{ffmpeg} prints an error like

@example
Undefined constant or missing '(' in 'baseline'
Unable to parse option value "baseline"
Error setting option profile to value baseline.
@end example

Short answer: write @option{-profile:v} instead of @option{-profile}.

Long answer: this happens because the @option{-profile} option can apply to both
video and audio. Specifically the AAC encoder also defines some profiles, none
of which are named @var{baseline}.

The solution is to apply the @option{-profile} option to the video stream only
by using @url{http://ffmpeg.org/ffmpeg.html#Stream-specifiers-1, Stream specifiers}.
Appending @code{:v} to it will do exactly that.
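
A working command line could therefore look like this (file names are
placeholders):

@example
ffmpeg -i input.mov -c:v libx264 -profile:v baseline -c:a copy output.mp4
@end example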
|
||||
|
||||
@section Using @option{-f lavfi}, audio becomes mono for no apparent reason.
|
||||
|
||||
Use @option{-dumpgraph -} to find out exactly where the channel layout is
|
||||
@@ -443,7 +412,7 @@ VOB and a few other formats do not have a global header that describes
|
||||
everything present in the file. Instead, applications are supposed to scan
|
||||
the file to see what it contains. Since VOB files are frequently large, only
|
||||
the beginning is scanned. If the subtitles happen only later in the file,
|
||||
they will not be initially detected.
|
||||
they will not be initally detected.
|
||||
|
||||
Some applications, including the @code{ffmpeg} command-line tool, can only
|
||||
work with streams that were detected during the initial scan; streams that
|
||||
@@ -467,40 +436,6 @@ point acceptable for your tastes. The most common options to do that are
|
||||
@option{-qscale} and @option{-qmax}, but you should peruse the documentation
|
||||
of the encoder you chose.
|
||||
|
||||
@section I have a stretched video, why does scaling not fix it?
|
||||
|
||||
A lot of video codecs and formats can store the @emph{aspect ratio} of the
|
||||
video: this is the ratio between the width and the height of either the full
|
||||
image (DAR, display aspect ratio) or individual pixels (SAR, sample aspect
|
||||
ratio). For example, EGA screens at resolution 640×350 had 4:3 DAR and 35:48
|
||||
SAR.
|
||||
|
||||
Most still image processing works with square pixels, i.e. 1:1 SAR, but a lot
|
||||
of video standards, especially from the analogic-numeric transition era, use
|
||||
non-square pixels.
|
||||
|
||||
Most processing filters in FFmpeg handle the aspect ratio to avoid
|
||||
stretching the image: cropping adjusts the DAR to keep the SAR constant,
|
||||
scaling adjusts the SAR to keep the DAR constant.
|
||||
|
||||
If you want to stretch, or “unstretch”, the image, you need to override the
|
||||
information with the
|
||||
@url{http://ffmpeg.org/ffmpeg-filters.html#setdar_002c-setsar, @code{setdar or setsar filters}}.
|
||||
|
||||
Do not forget to examine carefully the original video to check whether the
|
||||
stretching comes from the image or from the aspect ratio information.
|
||||
|
||||
For example, to fix a badly encoded EGA capture, use the following commands,
|
||||
either the first one to upscale to square pixels or the second one to set
|
||||
the correct aspect ratio or the third one to avoid transcoding (may not work
|
||||
depending on the format / codec / player / phase of the moon):
|
||||
|
||||
@example
|
||||
ffmpeg -i ega_screen.nut -vf scale=640:480,setsar=1 ega_screen_scaled.nut
|
||||
ffmpeg -i ega_screen.nut -vf setdar=4/3 ega_screen_anamorphic.nut
|
||||
ffmpeg -i ega_screen.nut -aspect 4/3 -c copy ega_screen_overridden.nut
|
||||
@end example
|
||||
|
||||
@chapter Development
|
||||
|
||||
@section Are there examples illustrating how to use the FFmpeg libraries, particularly libavcodec and libavformat?
|
||||
|
@@ -1,5 +1,4 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle FFmpeg Automated Testing Environment
|
||||
@titlepage
|
||||
|
@@ -1,5 +1,4 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle FFmpeg Bitstream Filters Documentation
|
||||
@titlepage
|
||||
|
@@ -1,5 +1,4 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle FFmpeg Codecs Documentation
|
||||
@titlepage
|
||||
|
@@ -1,5 +1,4 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle FFmpeg Devices Documentation
|
||||
@titlepage
|
||||
|
@@ -1,5 +1,4 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle FFmpeg Filters Documentation
|
||||
@titlepage
|
||||
|
@@ -1,5 +1,4 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle FFmpeg Formats Documentation
|
||||
@titlepage
|
||||
|
@@ -1,5 +1,4 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle FFmpeg Protocols Documentation
|
||||
@titlepage
|
||||
|
@@ -1,5 +1,4 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle FFmpeg Resampler Documentation
|
||||
@titlepage
|
||||
@@ -15,7 +14,7 @@
|
||||
|
||||
The FFmpeg resampler provides a high-level interface to the
|
||||
libswresample library audio resampling utilities. In particular it
|
||||
allows one to perform audio resampling, audio channel layout rematrixing,
|
||||
allows to perform audio resampling, audio channel layout rematrixing,
|
||||
and convert audio format and packing layout.
|
||||
|
||||
@c man end DESCRIPTION
|
||||
|
@@ -1,5 +1,4 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle FFmpeg Scaler Documentation
|
||||
@titlepage
|
||||
@@ -14,7 +13,7 @@
|
||||
@c man begin DESCRIPTION
|
||||
|
||||
The FFmpeg rescaler provides a high-level interface to the libswscale
|
||||
library image conversion utilities. In particular it allows one to perform
|
||||
library image conversion utilities. In particular it allows to perform
|
||||
image rescaling and pixel format conversion.
|
||||
|
||||
@c man end DESCRIPTION
|
||||
|
@@ -1,5 +1,4 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle FFmpeg Utilities Documentation
|
||||
@titlepage
|
||||
|
272 doc/ffmpeg.texi
@@ -1,5 +1,4 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle ffmpeg Documentation
|
||||
@titlepage
|
||||
@@ -81,23 +80,11 @@ The transcoding process in @command{ffmpeg} for each output can be described by
|
||||
the following diagram:
|
||||
|
||||
@example
|
||||
_______ ______________
|
||||
| | | |
|
||||
| input | demuxer | encoded data | decoder
|
||||
| file | ---------> | packets | -----+
|
||||
|_______| |______________| |
|
||||
v
|
||||
_________
|
||||
| |
|
||||
| decoded |
|
||||
| frames |
|
||||
|_________|
|
||||
________ ______________ |
|
||||
| | | | |
|
||||
| output | <-------- | encoded data | <----+
|
||||
| file | muxer | packets | encoder
|
||||
|________| |______________|
|
||||
|
||||
_______ ______________ _________ ______________ ________
|
||||
| | | | | | | | | |
|
||||
| input | demuxer | encoded data | decoder | decoded | encoder | encoded data | muxer | output |
|
||||
| file | ---------> | packets | ---------> | frames | ---------> | packets | -------> | file |
|
||||
|_______| |______________| |_________| |______________| |________|
|
||||
|
||||
@end example
|
||||
|
||||
@@ -125,16 +112,11 @@ the same type. In the above diagram they can be represented by simply inserting
|
||||
an additional step between decoding and encoding:
|
||||
|
||||
@example
|
||||
_________ ______________
|
||||
| | | |
|
||||
| decoded | | encoded data |
|
||||
| frames |\ _ | packets |
|
||||
|_________| \ /||______________|
|
||||
\ __________ /
|
||||
simple _\|| | / encoder
|
||||
filtergraph | filtered |/
|
||||
| frames |
|
||||
|__________|
|
||||
_________ __________ ______________
|
||||
| | | | | |
|
||||
| decoded | simple filtergraph | filtered | encoder | encoded data |
|
||||
| frames | -------------------> | frames | ---------> | packets |
|
||||
|_________| |__________| |______________|
|
||||
|
||||
@end example
|
||||
|
||||
@@ -143,10 +125,10 @@ Simple filtergraphs are configured with the per-stream @option{-filter} option
|
||||
A simple filtergraph for video can look for example like this:
|
||||
|
||||
@example
|
||||
_______ _____________ _______ ________
|
||||
| | | | | | | |
|
||||
| input | ---> | deinterlace | ---> | scale | ---> | output |
|
||||
|_______| |_____________| |_______| |________|
|
||||
_______ _____________ _______ _____ ________
|
||||
| | | | | | | | | |
|
||||
| input | ---> | deinterlace | ---> | scale | ---> | fps | ---> | output |
|
||||
|_______| |_____________| |_______| |_____| |________|
|
||||
|
||||
@end example
|
||||
|
||||
@@ -273,13 +255,8 @@ ffmpeg -i INPUT -map 0 -c copy -c:v:1 libx264 -c:a:137 libvorbis OUTPUT
|
||||
will copy all the streams except the second video, which will be encoded with
|
||||
libx264, and the 138th audio, which will be encoded with libvorbis.
|
||||
|
||||
@item -t @var{duration} (@emph{input/output})
|
||||
When used as an input option (before @code{-i}), limit the @var{duration} of
|
||||
data read from the input file.
|
||||
|
||||
When used as an output option (before an output filename), stop writing the
|
||||
output after its duration reaches @var{duration}.
|
||||
|
||||
@item -t @var{duration} (@emph{output})
|
||||
Stop writing the output after its duration reaches @var{duration}.
|
||||
@var{duration} may be a number in seconds, or in @code{hh:mm:ss[.xxx]} form.
|
||||
|
||||
-to and -t are mutually exclusive and -t has priority.
|
||||
@@ -308,20 +285,23 @@ input until the timestamps reach @var{position}.
|
||||
@var{position} may be either in seconds or in @code{hh:mm:ss[.xxx]} form.
|
||||
|
||||
@item -itsoffset @var{offset} (@emph{input})
|
||||
Set the input time offset.
|
||||
Set the input time offset in seconds.
|
||||
@code{[-]hh:mm:ss[.xxx]} syntax is also supported.
|
||||
The offset is added to the timestamps of the input files.
|
||||
Specifying a positive offset means that the corresponding
|
||||
streams are delayed by @var{offset} seconds.
|
||||
|
||||
@var{offset} must be a time duration specification,
|
||||
see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
|
||||
|
||||
The offset is added to the timestamps of the input files. Specifying
|
||||
a positive offset means that the corresponding streams are delayed by
|
||||
the time duration specified in @var{offset}.
|
||||
|
||||
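For example (file names are placeholders), to delay every stream of an input
by 5 seconds while stream copying:

@example
ffmpeg -itsoffset 5 -i input.avi -c copy delayed.avi
@end example
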
@item -timestamp @var{date} (@emph{output})
|
||||
@item -timestamp @var{time} (@emph{output})
|
||||
Set the recording timestamp in the container.
|
||||
|
||||
@var{date} must be a time duration specification,
|
||||
see @ref{date syntax,,the Date section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
|
||||
The syntax for @var{time} is:
|
||||
@example
|
||||
now|([(YYYY-MM-DD|YYYYMMDD)[T|t| ]]((HH:MM:SS[.m...])|(HHMMSS[.m...]))[Z|z])
|
||||
@end example
|
||||
If the value is "now" it takes the current time.
|
||||
Time is local time unless 'Z' or 'z' is appended, in which case it is
|
||||
interpreted as UTC.
|
||||
If the year-month-day part is not specified it takes the current
|
||||
year-month-day.
|
||||
|
||||
@item -metadata[:metadata_specifier] @var{key}=@var{value} (@emph{output,per-metadata})
|
||||
Set a metadata key/value pair.
|
||||
@@ -340,7 +320,7 @@ ffmpeg -i in.avi -metadata title="my title" out.flv
|
||||
|
||||
To set the language of the first audio stream:
|
||||
@example
|
||||
ffmpeg -i INPUT -metadata:s:a:0 language=eng OUTPUT
|
||||
ffmpeg -i INPUT -metadata:s:a:1 language=eng OUTPUT
|
||||
@end example
|
||||
|
||||
@item -target @var{type} (@emph{output})
|
||||
@@ -361,20 +341,15 @@ ffmpeg -i myfile.avi -target vcd -bf 2 /tmp/vcd.mpg
|
||||
@end example
|
||||
|
||||
@item -dframes @var{number} (@emph{output})
|
||||
Set the number of data frames to output. This is an alias for @code{-frames:d}.
|
||||
Set the number of data frames to record. This is an alias for @code{-frames:d}.
|
||||
|
||||
@item -frames[:@var{stream_specifier}] @var{framecount} (@emph{output,per-stream})
|
||||
Stop writing to the stream after @var{framecount} frames.
|
||||
|
||||
@item -q[:@var{stream_specifier}] @var{q} (@emph{output,per-stream})
|
||||
@itemx -qscale[:@var{stream_specifier}] @var{q} (@emph{output,per-stream})
|
||||
Use fixed quality scale (VBR). The meaning of @var{q}/@var{qscale} is
|
||||
Use fixed quality scale (VBR). The meaning of @var{q} is
|
||||
codec-dependent.
|
||||
If @var{qscale} is used without a @var{stream_specifier} then it applies only
to the video stream. This is to maintain compatibility with previous behavior,
and because specifying the same codec-specific value to two different codecs
(that is, audio and video) is generally not what is intended when no
stream_specifier is used.
|
||||
|
||||
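For example (an illustrative MPEG-4 encode; the quality value is arbitrary):

@example
ffmpeg -i input.avi -c:v mpeg4 -qscale:v 3 -c:a copy output.avi
@end example
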
@anchor{filter_option}
|
||||
@item -filter[:@var{stream_specifier}] @var{filtergraph} (@emph{output,per-stream})
|
||||
@@ -468,15 +443,12 @@ attachments.
|
||||
|
||||
@table @option
|
||||
@item -vframes @var{number} (@emph{output})
|
||||
Set the number of video frames to output. This is an alias for @code{-frames:v}.
|
||||
Set the number of video frames to record. This is an alias for @code{-frames:v}.
|
||||
@item -r[:@var{stream_specifier}] @var{fps} (@emph{input/output,per-stream})
|
||||
Set frame rate (Hz value, fraction or abbreviation).
|
||||
|
||||
As an input option, ignore any timestamps stored in the file and instead
|
||||
generate timestamps assuming constant frame rate @var{fps}.
|
||||
This is not the same as the @option{-framerate} option used for some input formats
|
||||
like image2 or v4l2 (it used to be the same in older versions of FFmpeg).
|
||||
If in doubt use @option{-framerate} instead of the input option @option{-r}.
|
||||
|
||||
As an output option, duplicate or drop input frames to achieve constant output
|
||||
frame rate @var{fps}.
|
||||
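For example (an illustrative case: a raw H.264 elementary stream carries no
timestamps, so the input rate has to be assumed):

@example
ffmpeg -r 25 -i input.h264 -r 30 output.mp4
@end example
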
@@ -531,6 +503,9 @@ prefix is ``ffmpeg2pass''. The complete file name will be
|
||||
@file{PREFIX-N.log}, where N is a number specific to the output
|
||||
stream
|
||||
|
||||
@item -vlang @var{code}
|
||||
Set the ISO 639 language code (3 letters) of the current video stream.
|
||||
|
||||
@item -vf @var{filtergraph} (@emph{output})
|
||||
Create the filtergraph specified by @var{filtergraph} and use it to
|
||||
filter the stream.
|
||||
@@ -538,7 +513,7 @@ filter the stream.
|
||||
This is an alias for @code{-filter:v}, see the @ref{filter_option,,-filter option}.
|
||||
@end table
|
||||
|
||||
@section Advanced Video options
|
||||
@section Advanced Video Options
|
||||
|
||||
@table @option
|
||||
@item -pix_fmt[:@var{stream_specifier}] @var{format} (@emph{input/output,per-stream})
|
||||
@@ -641,59 +616,13 @@ would be more efficient.
|
||||
@item -copyinkf[:@var{stream_specifier}] (@emph{output,per-stream})
|
||||
When doing stream copy, copy also non-key frames found at the
|
||||
beginning.
|
||||
|
||||
@item -hwaccel[:@var{stream_specifier}] @var{hwaccel} (@emph{input,per-stream})
|
||||
Use hardware acceleration to decode the matching stream(s). The allowed values
|
||||
of @var{hwaccel} are:
|
||||
@table @option
|
||||
@item none
|
||||
Do not use any hardware acceleration (the default).
|
||||
|
||||
@item auto
|
||||
Automatically select the hardware acceleration method.
|
||||
|
||||
@item vda
|
||||
Use Apple VDA hardware acceleration.
|
||||
|
||||
@item vdpau
|
||||
Use VDPAU (Video Decode and Presentation API for Unix) hardware acceleration.
|
||||
|
||||
@item dxva2
|
||||
Use DXVA2 (DirectX Video Acceleration) hardware acceleration.
|
||||
@end table
|
||||
|
||||
This option has no effect if the selected hwaccel is not available or not
|
||||
supported by the chosen decoder.
|
||||
|
||||
Note that most acceleration methods are intended for playback and will not be
|
||||
faster than software decoding on modern CPUs. Additionally, @command{ffmpeg}
|
||||
will usually need to copy the decoded frames from the GPU memory into the system
|
||||
memory, resulting in further performance loss. This option is thus mainly
|
||||
useful for testing.
|
||||
|
||||
@item -hwaccel_device[:@var{stream_specifier}] @var{hwaccel_device} (@emph{input,per-stream})
|
||||
Select a device to use for hardware acceleration.
|
||||
|
||||
This option only makes sense when the @option{-hwaccel} option is also
|
||||
specified. Its exact meaning depends on the specific hardware acceleration
|
||||
method chosen.
|
||||
|
||||
@table @option
|
||||
@item vdpau
|
||||
For VDPAU, this option specifies the X11 display/screen to use. If this option
|
||||
is not specified, the value of the @var{DISPLAY} environment variable is used
|
||||
|
||||
@item dxva2
|
||||
For DXVA2, this option should contain the number of the display adapter to use.
|
||||
If this option is not specified, the default adapter is used.
|
||||
@end table
|
||||
@end table
|
||||
|
||||
@section Audio Options
|
||||
|
||||
@table @option
|
||||
@item -aframes @var{number} (@emph{output})
|
||||
Set the number of audio frames to output. This is an alias for @code{-frames:a}.
|
||||
Set the number of audio frames to record. This is an alias for @code{-frames:a}.
|
||||
@item -ar[:@var{stream_specifier}] @var{freq} (@emph{input/output,per-stream})
|
||||
Set the audio sampling frequency. For output streams it is set by
|
||||
default to the frequency of the corresponding input stream. For input
|
||||
@@ -721,7 +650,7 @@ filter the stream.
|
||||
This is an alias for @code{-filter:a}, see the @ref{filter_option,,-filter option}.
|
||||
@end table
|
||||
|
||||
@section Advanced Audio options
|
||||
@section Advanced Audio options:
|
||||
|
||||
@table @option
|
||||
@item -atag @var{fourcc/tag} (@emph{output})
|
||||
@@ -736,9 +665,11 @@ stereo but not 6 channels as 5.1. The default is to always try to guess. Use
|
||||
0 to disable all guessing.
|
||||
@end table
|
||||
|
||||
@section Subtitle options
|
||||
@section Subtitle options:
|
||||
|
||||
@table @option
|
||||
@item -slang @var{code}
|
||||
Set the ISO 639 language code (3 letters) of the current subtitle stream.
|
||||
@item -scodec @var{codec} (@emph{input/output})
|
||||
Set the subtitle codec. This is an alias for @code{-codec:s}.
|
||||
@item -sn (@emph{output})
|
||||
@@ -747,7 +678,7 @@ Disable subtitle recording.
|
||||
Deprecated, see -bsf
|
||||
@end table
|
||||
|
||||
@section Advanced Subtitle options
|
||||
@section Advanced Subtitle options:
|
||||
|
||||
@table @option
|
||||
|
||||
@@ -825,11 +756,6 @@ To map all the streams except the second audio, use negative mappings
|
||||
ffmpeg -i INPUT -map 0 -map -0:a:1 OUTPUT
|
||||
@end example
|
||||
|
||||
To pick the English audio stream:
|
||||
@example
|
||||
ffmpeg -i INPUT -map 0:m:language:eng OUTPUT
|
||||
@end example
|
||||
|
||||
Note that using this option disables the default mappings for this output file.
|
||||
|
||||
@item -map_channel [@var{input_file_id}.@var{stream_specifier}.@var{channel_id}|-1][:@var{output_file_id}.@var{stream_specifier}]
|
||||
@@ -995,13 +921,6 @@ With -map you can select from which stream the timestamps should be
|
||||
taken. You can leave either video or audio unchanged and sync the
|
||||
remaining stream(s) to the unchanged one.
|
||||
|
||||
@item -frame_drop_threshold @var{parameter}
|
||||
Frame drop threshold, which specifies how much behind video frames can
|
||||
be before they are dropped. In frame rate units, so 1.0 is one frame.
|
||||
The default is -1.1. One possible use case is to avoid frame drops in case
|
||||
of noisy timestamps or to increase frame drop precision in case of exact
|
||||
timestamps.
|
||||
|
||||
@item -async @var{samples_per_second}
|
||||
Audio sync method. "Stretches/squeezes" the audio stream to match the timestamps,
|
||||
the parameter is the maximum samples per second by which the audio is changed.
|
||||
@@ -1024,12 +943,6 @@ processing (e.g. in case the format option @option{avoid_negative_ts}
|
||||
is enabled) the output timestamps may mismatch with the input
|
||||
timestamps even when this option is selected.
|
||||
|
||||
@item -start_at_zero
|
||||
When used with @option{copyts}, shift input timestamps so they start at zero.
|
||||
|
||||
This means that using e.g. @code{-ss 50} will make output timestamps start at
|
||||
50 seconds, regardless of what timestamp the input file started at.
|
||||
|
||||
@item -copytb @var{mode}
|
||||
Specify how to set the encoder timebase when stream copying. @var{mode} is an
|
||||
integer numeric value, and can assume one of the following values:
|
||||
@@ -1085,7 +998,7 @@ ffmpeg -i h264.mp4 -c:v copy -bsf:v h264_mp4toannexb -an out.h264
|
||||
ffmpeg -i file.mov -an -vn -bsf:s mov2textsub -c:s copy -f rawvideo sub.txt
|
||||
@end example
|
||||
|
||||
@item -tag[:@var{stream_specifier}] @var{codec_tag} (@emph{input/output,per-stream})
|
||||
@item -tag[:@var{stream_specifier}] @var{codec_tag} (@emph{per-stream})
|
||||
Force a tag/fourcc for matching streams.
|
||||
|
||||
@item -timecode @var{hh}:@var{mm}:@var{ss}SEP@var{ff}
|
||||
@@ -1158,50 +1071,13 @@ This option enables or disables accurate seeking in input files with the
|
||||
transcoding. Use @option{-noaccurate_seek} to disable it, which may be useful
|
||||
e.g. when copying some streams and transcoding the others.
|
||||
|
||||
@item -thread_message_queue @var{size} (@emph{input})
|
||||
This option sets the maximum number of queued packets when reading from the
|
||||
file or device. With low latency / high rate live streams, packets may be
|
||||
discarded if they are not read in a timely manner; raising this value can
|
||||
avoid it.
|
||||
|
||||
@item -override_ffserver (@emph{global})
|
||||
Overrides the input specifications from @command{ffserver}. Using this
|
||||
option you can map any input stream to @command{ffserver} and control
|
||||
many aspects of the encoding from @command{ffmpeg}. Without this
|
||||
option @command{ffmpeg} will transmit to @command{ffserver} what is
|
||||
requested by @command{ffserver}.
|
||||
|
||||
Overrides the input specifications from ffserver. Using this option you can
|
||||
map any input stream to ffserver and control many aspects of the encoding from
|
||||
ffmpeg. Without this option ffmpeg will transmit to ffserver what is requested by
|
||||
ffserver.
|
||||
The option is intended for cases where features are needed that cannot be
|
||||
specified to @command{ffserver} but can be to @command{ffmpeg}.
|
||||
|
||||
@item -sdp_file @var{file} (@emph{global})
|
||||
Print sdp information to @var{file}.
|
||||
This allows dumping sdp information when at least one output isn't an
|
||||
rtp stream.
|
||||
|
||||
@item -discard (@emph{input})
|
||||
Allows discarding specific streams or frames of streams at the demuxer.
|
||||
Not all demuxers support this.
|
||||
|
||||
@table @option
|
||||
@item none
|
||||
Discard no frame.
|
||||
|
||||
@item default
|
||||
Default, which discards no frames.
|
||||
|
||||
@item noref
|
||||
Discard all non-reference frames.
|
||||
|
||||
@item bidir
|
||||
Discard all bidirectional frames.
|
||||
|
||||
@item nokey
|
||||
Discard all frames excepts keyframes.
|
||||
|
||||
@item all
|
||||
Discard all frames.
|
||||
@end table
|
||||
specified to ffserver but can be to ffmpeg.
|
||||
|
||||
@end table
|
||||
|
||||
@@ -1228,10 +1104,7 @@ awkward to specify on the command line. Lines starting with the hash
|
||||
('#') character are ignored and are used to provide comments. Check
|
||||
the @file{presets} directory in the FFmpeg source tree for examples.
|
||||
|
||||
There are two types of preset files: ffpreset and avpreset files.
|
||||
|
||||
@subsection ffpreset files
|
||||
ffpreset files are specified with the @code{vpre}, @code{apre},
|
||||
Preset files are specified with the @code{vpre}, @code{apre},
|
||||
@code{spre}, and @code{fpre} options. The @code{fpre} option takes the
|
||||
filename of the preset instead of a preset name as input and can be
|
||||
used for any kind of codec. For the @code{vpre}, @code{apre}, and
|
||||
@@ -1256,26 +1129,6 @@ directories, where @var{codec_name} is the name of the codec to which
|
||||
the preset file options will be applied. For example, if you select
|
||||
the video codec with @code{-vcodec libvpx} and use @code{-vpre 1080p},
|
||||
then it will search for the file @file{libvpx-1080p.ffpreset}.
|
||||
|
||||
@subsection avpreset files
|
||||
avpreset files are specified with the @code{pre} option. They work similar to
|
||||
ffpreset files, but they only allow encoder- specific options. Therefore, an
|
||||
@var{option}=@var{value} pair specifying an encoder cannot be used.
|
||||
|
||||
When the @code{pre} option is specified, ffmpeg will look for files with the
|
||||
suffix .avpreset in the directories @file{$AVCONV_DATADIR} (if set), and
|
||||
@file{$HOME/.avconv}, and in the datadir defined at configuration time (usually
|
||||
@file{PREFIX/share/ffmpeg}), in that order.
|
||||
|
||||
First ffmpeg searches for a file named @var{codec_name}-@var{arg}.avpreset in
|
||||
the above-mentioned directories, where @var{codec_name} is the name of the codec
|
||||
to which the preset file options will be applied. For example, if you select the
|
||||
video codec with @code{-vcodec libvpx} and use @code{-pre 1080p}, then it will
|
||||
search for the file @file{libvpx-1080p.avpreset}.
|
||||
|
||||
If no such file is found, then ffmpeg will search for a file named
|
||||
@var{arg}.avpreset in the same directories.
|
||||
|
||||
@c man end OPTIONS
|
||||
|
||||
@chapter Tips
|
||||
@@ -1322,6 +1175,21 @@ quality).
|
||||
@chapter Examples
|
||||
@c man begin EXAMPLES
|
||||
|
||||
@section Preset files

A preset file contains a sequence of @var{option=value} pairs, one for
each line, specifying a sequence of options which can be specified also on
the command line. Lines starting with the hash ('#') character are ignored and
are used to provide comments. Empty lines are also ignored. Check the
@file{presets} directory in the FFmpeg source tree for examples.

Preset files are specified with the @code{pre} option; this option takes a
preset name as input. FFmpeg searches for a file named @var{preset_name}.avpreset in
the directories @file{$AVCONV_DATADIR} (if set), and @file{$HOME/.ffmpeg}, and in
the data directory defined at configuration time (usually @file{$PREFIX/share/ffmpeg}),
in that order. For example, if the argument is @code{libx264-max}, it will
search for the file @file{libx264-max.avpreset}.
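
As an illustration (the option names below are libx264 private options and the
values are arbitrary), such a @file{libx264-max.avpreset} file could contain:

@example
preset=veryslow
crf=18
@end example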
|
||||
|
||||
@section Video and Audio grabbing
|
||||
|
||||
If you specify the input format and device then ffmpeg can grab video
|
||||
@@ -1491,11 +1359,11 @@ ffmpeg -f image2 -pattern_type glob -i 'foo-*.jpeg' -r 12 -s WxH foo.avi
|
||||
You can put many streams of the same type in the output:
|
||||
|
||||
@example
|
||||
ffmpeg -i test1.avi -i test2.avi -map 1:1 -map 1:0 -map 0:1 -map 0:0 -c copy -y test12.nut
|
||||
ffmpeg -i test1.avi -i test2.avi -map 0:3 -map 0:2 -map 0:1 -map 0:0 -c copy test12.nut
|
||||
@end example
|
||||
|
||||
The resulting output file @file{test12.nut} will contain the first four streams
|
||||
from the input files in reverse order.
|
||||
The resulting output file @file{test12.avi} will contain first four streams from
|
||||
the input file in reverse order.
|
||||
|
||||
@item
|
||||
To force CBR video output:
|
||||
|
@@ -1,5 +1,4 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle ffplay Documentation
|
||||
@titlepage
|
||||
@@ -38,14 +37,10 @@ Force displayed height.
|
||||
Set frame size (WxH or abbreviation), needed for videos which do
|
||||
not contain a header with the frame size like raw YUV. This option
|
||||
has been deprecated in favor of private options, try -video_size.
|
||||
@item -fs
|
||||
Start in fullscreen mode.
|
||||
@item -an
|
||||
Disable audio.
|
||||
@item -vn
|
||||
Disable video.
|
||||
@item -sn
|
||||
Disable subtitles.
|
||||
@item -ss @var{pos}
|
||||
Seek to a given position in seconds.
|
||||
@item -t @var{duration}
|
||||
@@ -89,9 +84,6 @@ output. In the filtergraph, the input is associated to the label
|
||||
ffmpeg-filters manual for more information about the filtergraph
|
||||
syntax.
|
||||
|
||||
You can specify this parameter multiple times and cycle through the specified
|
||||
filtergraphs along with the show modes by pressing the key @key{w}.
|
||||
|
||||
@item -af @var{filtergraph}
|
||||
@var{filtergraph} is a description of the filtergraph to apply to
|
||||
the input audio.
|
||||
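For example (an illustrative filtergraph):

@example
ffplay -af volume=0.5 input.mp3
@end example
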
@@ -114,10 +106,15 @@ duration, the codec parameters, the current position in the stream and
|
||||
the audio/video synchronisation drift. It is on by default, to
|
||||
explicitly disable it you need to specify @code{-nostats}.
|
||||
|
||||
@item -bug
|
||||
Work around bugs.
|
||||
@item -fast
|
||||
Non-spec-compliant optimizations.
|
||||
@item -genpts
|
||||
Generate pts.
|
||||
@item -rtp_tcp
|
||||
Force RTP/TCP protocol usage instead of RTP/UDP. It is only meaningful
|
||||
if you are streaming with the RTSP protocol.
|
||||
@item -sync @var{type}
|
||||
Set the master clock to audio (@code{type=audio}), video
|
||||
(@code{type=video}) or external (@code{type=ext}). Default is audio. The
|
||||
@@ -125,20 +122,23 @@ master clock is used to control audio-video synchronization. Most media
|
||||
players use audio as master clock, but in some cases (streaming or high
|
||||
quality broadcast) it is necessary to change that. This option is mainly
|
||||
used for debugging purposes.
|
||||
@item -ast @var{audio_stream_specifier}
|
||||
Select the desired audio stream using the given stream specifier. The stream
|
||||
specifiers are described in the @ref{Stream specifiers} chapter. If this option
|
||||
is not specified, the "best" audio stream is selected in the program of the
|
||||
already selected video stream.
|
||||
@item -vst @var{video_stream_specifier}
|
||||
Select the desired video stream using the given stream specifier. The stream
|
||||
specifiers are described in the @ref{Stream specifiers} chapter. If this option
|
||||
is not specified, the "best" video stream is selected.
|
||||
@item -sst @var{subtitle_stream_specifier}
|
||||
Select the desired subtitle stream using the given stream specifier. The stream
|
||||
specifiers are described in the @ref{Stream specifiers} chapter. If this option
|
||||
is not specified, the "best" subtitle stream is selected in the program of the
|
||||
already selected video or audio stream.
|
||||
@item -threads @var{count}
|
||||
Set the thread count.
|
||||
@item -ast @var{audio_stream_number}
|
||||
Select the desired audio stream number, counting from 0. The number
|
||||
refers to the list of all the input audio streams. If it is greater
|
||||
than the number of audio streams minus one, then the last one is
|
||||
selected, if it is negative the audio playback is disabled.
|
||||
@item -vst @var{video_stream_number}
|
||||
Select the desired video stream number, counting from 0. The number
|
||||
refers to the list of all the input video streams. If it is greater
|
||||
than the number of video streams minus one, then the last one is
|
||||
selected, if it is negative the video playback is disabled.
|
||||
@item -sst @var{subtitle_stream_number}
|
||||
Select the desired subtitle stream number, counting from 0. The number
|
||||
refers to the list of all the input subtitle streams. If it is greater
|
||||
than the number of subtitle streams minus one, then the last one is
|
||||
selected, if it is negative the subtitle rendering is disabled.
|
||||
@item -autoexit
|
||||
Exit when video is done playing.
|
||||
@item -exitonkeydown
|
||||
@@ -159,22 +159,6 @@ Force a specific video decoder.
|
||||
|
||||
@item -scodec @var{codec_name}
|
||||
Force a specific subtitle decoder.
|
||||
|
||||
@item -autorotate
|
||||
Automatically rotate the video according to presentation metadata. Enabled by
|
||||
default, use @option{-noautorotate} to disable it.
|
||||
|
||||
@item -framedrop
|
||||
Drop video frames if video is out of sync. Enabled by default if the master
|
||||
clock is not set to video. Use this option to enable frame dropping for all
|
||||
master clock sources, use @option{-noframedrop} to disable it.
|
||||
|
||||
@item -infbuf
|
||||
Do not limit the input buffer size, read as much data as possible from the
|
||||
input as soon as possible. Enabled by default for realtime streams, where data
|
||||
may be dropped if not read in time. Use this option to enable infinite buffers
|
||||
for all inputs, use @option{-noinfbuf} to disable it.
|
||||
|
||||
@end table
|
||||
|
||||
@section While playing
|
||||
@@ -190,7 +174,7 @@ Toggle full screen.
|
||||
Pause.
|
||||
|
||||
@item a
|
||||
Cycle audio channel in the current program.
|
||||
Cycle audio channel in the curret program.
|
||||
|
||||
@item v
|
||||
Cycle video channel.
|
||||
@@ -202,13 +186,7 @@ Cycle subtitle channel in the current program.
|
||||
Cycle program.
|
||||
|
||||
@item w
|
||||
Cycle video filters or show modes.
|
||||
|
||||
@item s
|
||||
Step to the next frame.
|
||||
|
||||
Pause if the stream is not already paused, step to the next video
|
||||
frame, and pause.
|
||||
Show audio waves.
|
||||
|
||||
@item left/right
|
||||
Seek backward/forward 10 seconds.
|
||||
@@ -217,8 +195,6 @@ Seek backward/forward 10 seconds.
|
||||
Seek backward/forward 1 minute.
|
||||
|
||||
@item page down/page up
|
||||
Seek to the previous/next chapter.
|
||||
or if there are no chapters
|
||||
Seek backward/forward 10 minutes.
|
||||
|
||||
@item mouse click
|
||||
@@ -230,7 +206,6 @@ Seek to percentage in file corresponding to fraction of width.
|
||||
|
||||
@include config.texi
|
||||
@ifset config-all
|
||||
@set config-readonly
|
||||
@ifset config-avutil
|
||||
@include utils.texi
|
||||
@end ifset
|
||||
|
@@ -1,5 +1,4 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle ffprobe Documentation
|
||||
@titlepage
|
||||
@@ -120,10 +119,6 @@ Show payload data, as a hexadecimal and ASCII dump. Coupled with
|
||||
|
||||
The dump is printed as the "data" field. It may contain newlines.
|
||||
|
||||
@item -show_data_hash @var{algorithm}
|
||||
Show a hash of payload data, for packets with @option{-show_packets} and for
|
||||
codec extradata with @option{-show_streams}.
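
For instance (a sketch; the @samp{MD5} algorithm name and the input file are
assumptions, not taken from this section), payload hashes can be requested
together with @option{-show_packets}:

@example
ffprobe -show_packets -show_data_hash MD5 input.mp4
@end example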
|
||||
|
||||
@item -show_error
|
||||
Show information about the error found when trying to probe the input.
|
||||
|
||||
@@ -185,7 +180,7 @@ format : stream=codec_type
|
||||
|
||||
To show all the tags in the stream and format sections:
|
||||
@example
|
||||
stream_tags : format_tags
|
||||
format_tags : format_tags
|
||||
@end example
|
||||
|
||||
To show only the @code{title} tag (if available) in the stream
|
||||
@@ -202,11 +197,11 @@ The information for each single packet is printed within a dedicated
|
||||
section with name "PACKET".
|
||||
|
||||
@item -show_frames
|
||||
Show information about each frame and subtitle contained in the input
|
||||
multimedia stream.
|
||||
Show information about each frame contained in the input multimedia
|
||||
stream.
|
||||
|
||||
The information for each single frame is printed within a dedicated
|
||||
section with name "FRAME" or "SUBTITLE".
|
||||
section with name "FRAME".
|
||||
|
||||
@item -show_streams
|
||||
Show information about each media stream contained in the input
|
||||
@@ -322,12 +317,6 @@ Show information related to program and library versions. This is the
|
||||
equivalent of setting both @option{-show_program_version} and
|
||||
@option{-show_library_versions} options.
|
||||
|
||||
@item -show_pixel_formats
|
||||
Show information about all pixel formats supported by FFmpeg.
|
||||
|
||||
Pixel format information for each format is printed within a section
|
||||
with name "PIXEL_FORMAT".
|
||||
|
||||
@item -bitexact
|
||||
Force bitexact output, useful to produce output which is not dependent
|
||||
on the specific build.
|
||||
@@ -348,39 +337,6 @@ A writer may accept one or more arguments, which specify the options
|
||||
to adopt. The options are specified as a list of @var{key}=@var{value}
|
||||
pairs, separated by ":".
|
||||
|
||||
All writers support the following options:
|
||||
|
||||
@table @option
|
||||
@item string_validation, sv
|
||||
Set string validation mode.
|
||||
|
||||
The following values are accepted.
|
||||
@table @samp
|
||||
@item fail
|
||||
The writer will fail immediately in case an invalid string (UTF-8)
|
||||
sequence or code point is found in the input. This is especially
|
||||
useful to validate input metadata.
|
||||
|
||||
@item ignore
|
||||
Any validation error will be ignored. This will result in possibly
|
||||
broken output, especially with the json or xml writer.
|
||||
|
||||
@item replace
|
||||
The writer will substitute invalid UTF-8 sequences or code points with
|
||||
the string specified with the @option{string_validation_replacement}.
|
||||
@end table
|
||||
|
||||
Default value is @samp{replace}.
|
||||
|
||||
@item string_validation_replacement, svr
|
||||
Set replacement string to use in case @option{string_validation} is
|
||||
set to @samp{replace}.
|
||||
|
||||
In case the option is not specified, the writer will assume the empty
string, that is, it will remove the invalid sequences from the input
strings.
|
||||
@end table
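
As a hedged example (the JSON writer and the input file name are assumed here,
not mandated by this section), the validation behaviour is selected when
specifying the writer:

@example
# fail on the first invalid UTF-8 sequence found in the probed metadata
ffprobe -of json=string_validation=fail -show_format input.mkv
@end example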
|
||||
|
||||
A description of the currently available writers follows.
|
||||
|
||||
@section default
|
||||
@@ -610,7 +566,6 @@ DV, GXF and AVI timecodes are available in format metadata
|
||||
|
||||
@include config.texi
|
||||
@ifset config-all
|
||||
@set config-readonly
|
||||
@ifset config-avutil
|
||||
@include utils.texi
|
||||
@end ifset
|
||||
|
106 doc/ffprobe.xsd
@@ -8,17 +8,15 @@
|
||||
|
||||
<xsd:complexType name="ffprobeType">
|
||||
<xsd:sequence>
|
||||
<xsd:element name="program_version" type="ffprobe:programVersionType" minOccurs="0" maxOccurs="1" />
|
||||
<xsd:element name="library_versions" type="ffprobe:libraryVersionsType" minOccurs="0" maxOccurs="1" />
|
||||
<xsd:element name="pixel_formats" type="ffprobe:pixelFormatsType" minOccurs="0" maxOccurs="1" />
|
||||
<xsd:element name="packets" type="ffprobe:packetsType" minOccurs="0" maxOccurs="1" />
|
||||
<xsd:element name="frames" type="ffprobe:framesType" minOccurs="0" maxOccurs="1" />
|
||||
<xsd:element name="packets_and_frames" type="ffprobe:packetsAndFramesType" minOccurs="0" maxOccurs="1" />
|
||||
<xsd:element name="programs" type="ffprobe:programsType" minOccurs="0" maxOccurs="1" />
|
||||
<xsd:element name="streams" type="ffprobe:streamsType" minOccurs="0" maxOccurs="1" />
|
||||
<xsd:element name="programs" type="ffprobe:programsType" minOccurs="0" maxOccurs="1" />
|
||||
<xsd:element name="chapters" type="ffprobe:chaptersType" minOccurs="0" maxOccurs="1" />
|
||||
<xsd:element name="format" type="ffprobe:formatType" minOccurs="0" maxOccurs="1" />
|
||||
<xsd:element name="error" type="ffprobe:errorType" minOccurs="0" maxOccurs="1" />
|
||||
<xsd:element name="program_version" type="ffprobe:programVersionType" minOccurs="0" maxOccurs="1" />
|
||||
<xsd:element name="library_versions" type="ffprobe:libraryVersionsType" minOccurs="0" maxOccurs="1" />
|
||||
</xsd:sequence>
|
||||
</xsd:complexType>
|
||||
|
||||
@@ -30,20 +28,7 @@
|
||||
|
||||
<xsd:complexType name="framesType">
|
||||
<xsd:sequence>
|
||||
<xsd:choice minOccurs="0" maxOccurs="unbounded">
|
||||
<xsd:element name="frame" type="ffprobe:frameType" minOccurs="0" maxOccurs="unbounded"/>
|
||||
<xsd:element name="subtitle" type="ffprobe:subtitleType" minOccurs="0" maxOccurs="unbounded"/>
|
||||
</xsd:choice>
|
||||
</xsd:sequence>
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="packetsAndFramesType">
|
||||
<xsd:sequence>
|
||||
<xsd:choice minOccurs="0" maxOccurs="unbounded">
|
||||
<xsd:element name="packet" type="ffprobe:packetType" minOccurs="0" maxOccurs="unbounded"/>
|
||||
<xsd:element name="frame" type="ffprobe:frameType" minOccurs="0" maxOccurs="unbounded"/>
|
||||
<xsd:element name="subtitle" type="ffprobe:subtitleType" minOccurs="0" maxOccurs="unbounded"/>
|
||||
</xsd:choice>
|
||||
<xsd:element name="frame" type="ffprobe:frameType" minOccurs="0" maxOccurs="unbounded"/>
|
||||
</xsd:sequence>
|
||||
</xsd:complexType>
|
||||
|
||||
@@ -62,15 +47,9 @@
|
||||
<xsd:attribute name="pos" type="xsd:long" />
|
||||
<xsd:attribute name="flags" type="xsd:string" use="required" />
|
||||
<xsd:attribute name="data" type="xsd:string" />
|
||||
<xsd:attribute name="data_hash" type="xsd:string" />
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="frameType">
|
||||
<xsd:sequence>
|
||||
<xsd:element name="tag" type="ffprobe:tagType" minOccurs="0" maxOccurs="unbounded"/>
|
||||
<xsd:element name="side_data_list" type="ffprobe:frameSideDataListType" minOccurs="0" maxOccurs="1" />
|
||||
</xsd:sequence>
|
||||
|
||||
<xsd:attribute name="media_type" type="xsd:string" use="required"/>
|
||||
<xsd:attribute name="key_frame" type="xsd:int" use="required"/>
|
||||
<xsd:attribute name="pts" type="xsd:long" />
|
||||
@@ -79,8 +58,6 @@
|
||||
<xsd:attribute name="pkt_pts_time" type="xsd:float"/>
|
||||
<xsd:attribute name="pkt_dts" type="xsd:long" />
|
||||
<xsd:attribute name="pkt_dts_time" type="xsd:float"/>
|
||||
<xsd:attribute name="best_effort_timestamp" type="xsd:long" />
|
||||
<xsd:attribute name="best_effort_timestamp_time" type="xsd:float" />
|
||||
<xsd:attribute name="pkt_duration" type="xsd:long" />
|
||||
<xsd:attribute name="pkt_duration_time" type="xsd:float"/>
|
||||
<xsd:attribute name="pkt_pos" type="xsd:long" />
|
||||
@@ -105,26 +82,6 @@
|
||||
<xsd:attribute name="repeat_pict" type="xsd:int" />
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="frameSideDataListType">
|
||||
<xsd:sequence>
|
||||
<xsd:element name="side_data" type="ffprobe:frameSideDataType" minOccurs="1" maxOccurs="unbounded"/>
|
||||
</xsd:sequence>
|
||||
</xsd:complexType>
|
||||
<xsd:complexType name="frameSideDataType">
|
||||
<xsd:attribute name="side_data_type" type="xsd:string"/>
|
||||
<xsd:attribute name="side_data_size" type="xsd:int" />
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="subtitleType">
|
||||
<xsd:attribute name="media_type" type="xsd:string" fixed="subtitle" use="required"/>
|
||||
<xsd:attribute name="pts" type="xsd:long" />
|
||||
<xsd:attribute name="pts_time" type="xsd:float"/>
|
||||
<xsd:attribute name="format" type="xsd:int" />
|
||||
<xsd:attribute name="start_display_time" type="xsd:int" />
|
||||
<xsd:attribute name="end_display_time" type="xsd:int" />
|
||||
<xsd:attribute name="num_rects" type="xsd:int" />
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="streamsType">
|
||||
<xsd:sequence>
|
||||
<xsd:element name="stream" type="ffprobe:streamType" minOccurs="0" maxOccurs="unbounded"/>
|
||||
@@ -166,7 +123,6 @@
|
||||
<xsd:attribute name="codec_tag" type="xsd:string" use="required"/>
|
||||
<xsd:attribute name="codec_tag_string" type="xsd:string" use="required"/>
|
||||
<xsd:attribute name="extradata" type="xsd:string" />
|
||||
<xsd:attribute name="extradata_hash" type="xsd:string" />
|
||||
|
||||
<!-- video attributes -->
|
||||
<xsd:attribute name="width" type="xsd:int"/>
|
||||
@@ -176,13 +132,7 @@
|
||||
<xsd:attribute name="display_aspect_ratio" type="xsd:string"/>
|
||||
<xsd:attribute name="pix_fmt" type="xsd:string"/>
|
||||
<xsd:attribute name="level" type="xsd:int"/>
|
||||
<xsd:attribute name="color_range" type="xsd:string"/>
|
||||
<xsd:attribute name="color_space" type="xsd:string"/>
|
||||
<xsd:attribute name="color_transfer" type="xsd:string"/>
|
||||
<xsd:attribute name="color_primaries" type="xsd:string"/>
|
||||
<xsd:attribute name="chroma_location" type="xsd:string"/>
|
||||
<xsd:attribute name="timecode" type="xsd:string"/>
|
||||
<xsd:attribute name="refs" type="xsd:int"/>
|
||||
|
||||
<!-- audio attributes -->
|
||||
<xsd:attribute name="sample_fmt" type="xsd:string"/>
|
||||
@@ -200,8 +150,6 @@
|
||||
<xsd:attribute name="duration_ts" type="xsd:long"/>
|
||||
<xsd:attribute name="duration" type="xsd:float"/>
|
||||
<xsd:attribute name="bit_rate" type="xsd:int"/>
|
||||
<xsd:attribute name="max_bit_rate" type="xsd:int"/>
|
||||
<xsd:attribute name="bits_per_raw_sample" type="xsd:int"/>
|
||||
<xsd:attribute name="nb_frames" type="xsd:int"/>
|
||||
<xsd:attribute name="nb_read_frames" type="xsd:int"/>
|
||||
<xsd:attribute name="nb_read_packets" type="xsd:int"/>
|
||||
@@ -254,7 +202,10 @@
|
||||
<xsd:complexType name="programVersionType">
|
||||
<xsd:attribute name="version" type="xsd:string" use="required"/>
|
||||
<xsd:attribute name="copyright" type="xsd:string" use="required"/>
|
||||
<xsd:attribute name="compiler_ident" type="xsd:string" use="required"/>
|
||||
<xsd:attribute name="build_date" type="xsd:string" use="required"/>
|
||||
<xsd:attribute name="build_time" type="xsd:string" use="required"/>
|
||||
<xsd:attribute name="compiler_type" type="xsd:string" use="required"/>
|
||||
<xsd:attribute name="compiler_version" type="xsd:string" use="required"/>
|
||||
<xsd:attribute name="configuration" type="xsd:string" use="required"/>
|
||||
</xsd:complexType>
|
||||
|
||||
@@ -291,45 +242,4 @@
|
||||
<xsd:element name="library_version" type="ffprobe:libraryVersionType" minOccurs="0" maxOccurs="unbounded"/>
|
||||
</xsd:sequence>
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="pixelFormatFlagsType">
|
||||
<xsd:attribute name="big_endian" type="xsd:int" use="required"/>
|
||||
<xsd:attribute name="palette" type="xsd:int" use="required"/>
|
||||
<xsd:attribute name="bitstream" type="xsd:int" use="required"/>
|
||||
<xsd:attribute name="hwaccel" type="xsd:int" use="required"/>
|
||||
<xsd:attribute name="planar" type="xsd:int" use="required"/>
|
||||
<xsd:attribute name="rgb" type="xsd:int" use="required"/>
|
||||
<xsd:attribute name="pseudopal" type="xsd:int" use="required"/>
|
||||
<xsd:attribute name="alpha" type="xsd:int" use="required"/>
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="pixelFormatComponentType">
|
||||
<xsd:attribute name="index" type="xsd:int" use="required"/>
|
||||
<xsd:attribute name="bit_depth" type="xsd:int" use="required"/>
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="pixelFormatComponentsType">
|
||||
<xsd:sequence>
|
||||
<xsd:element name="component" type="ffprobe:pixelFormatComponentType" minOccurs="0" maxOccurs="unbounded"/>
|
||||
</xsd:sequence>
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="pixelFormatType">
|
||||
<xsd:sequence>
|
||||
<xsd:element name="flags" type="ffprobe:pixelFormatFlagsType" minOccurs="0" maxOccurs="1"/>
|
||||
<xsd:element name="components" type="ffprobe:pixelFormatComponentsType" minOccurs="0" maxOccurs="1"/>
|
||||
</xsd:sequence>
|
||||
|
||||
<xsd:attribute name="name" type="xsd:string" use="required"/>
|
||||
<xsd:attribute name="nb_components" type="xsd:int" use="required"/>
|
||||
<xsd:attribute name="log2_chroma_w" type="xsd:int"/>
|
||||
<xsd:attribute name="log2_chroma_h" type="xsd:int"/>
|
||||
<xsd:attribute name="bits_per_pixel" type="xsd:int"/>
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="pixelFormatsType">
|
||||
<xsd:sequence>
|
||||
<xsd:element name="pixel_format" type="ffprobe:pixelFormatType" minOccurs="0" maxOccurs="unbounded"/>
|
||||
</xsd:sequence>
|
||||
</xsd:complexType>
|
||||
</xsd:schema>
|
||||
|
@@ -1,11 +1,11 @@
|
||||
# Port on which the server is listening. You must select a different
|
||||
# port from your standard HTTP web server if it is running on the same
|
||||
# computer.
|
||||
HTTPPort 8090
|
||||
Port 8090
|
||||
|
||||
# Address on which the server is bound. Only useful if you have
|
||||
# several network interfaces.
|
||||
HTTPBindAddress 0.0.0.0
|
||||
BindAddress 0.0.0.0
|
||||
|
||||
# Number of simultaneous HTTP connections that can be handled. It has
|
||||
# to be defined *before* the MaxClients parameter, since it defines the
|
||||
@@ -235,7 +235,7 @@ StartSendOnKey
|
||||
|
||||
#<Stream test.ogg>
|
||||
#Feed feed1.ffm
|
||||
#Metadata title "Stream title"
|
||||
#Title "Stream title"
|
||||
#AudioBitRate 64
|
||||
#AudioChannels 2
|
||||
#AudioSampleRate 44100
|
||||
@@ -280,10 +280,10 @@ StartSendOnKey
|
||||
#<Stream file.asf>
|
||||
#File "/usr/local/httpd/htdocs/test.asf"
|
||||
#NoAudio
|
||||
#Metadata author "Me"
|
||||
#Metadata copyright "Super MegaCorp"
|
||||
#Metadata title "Test stream from disk"
|
||||
#Metadata comment "Test comment"
|
||||
#Author "Me"
|
||||
#Copyright "Super MegaCorp"
|
||||
#Title "Test stream from disk"
|
||||
#Comment "Test comment"
|
||||
#</Stream>
|
||||
|
||||
|
||||
|
@@ -1,5 +1,4 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle ffserver Documentation
|
||||
@titlepage
|
||||
@@ -17,14 +16,11 @@ ffserver [@var{options}]
|
||||
@chapter Description
|
||||
@c man begin DESCRIPTION
|
||||
|
||||
@command{ffserver} is a streaming server for both audio and video.
|
||||
It supports several live feeds, streaming from files and time shifting
|
||||
on live feeds. You can seek to positions in the past on each live
|
||||
feed, provided you specify a big enough feed storage.
|
||||
|
||||
@command{ffserver} is configured through a configuration file, which
|
||||
is read at startup. If not explicitly specified, it will read from
|
||||
@file{/etc/ffserver.conf}.
|
||||
@command{ffserver} is a streaming server for both audio and video. It
|
||||
supports several live feeds, streaming from files and time shifting on
|
||||
live feeds (you can seek to positions in the past on each live feed,
|
||||
provided you specify a big enough feed storage in
|
||||
@file{ffserver.conf}).
|
||||
|
||||
@command{ffserver} receives prerecorded files or FFM streams from some
|
||||
@command{ffmpeg} instance as input, then streams them over
|
||||
@@ -43,126 +39,10 @@ For each feed you can have different output streams in various
|
||||
formats, each one specified by a @code{<Stream>} section in the
|
||||
configuration file.
|
||||
|
||||
@chapter Detailed description
|
||||
|
||||
@command{ffserver} works by forwarding streams encoded by
|
||||
@command{ffmpeg}, or pre-recorded streams which are read from disk.
|
||||
|
||||
Precisely, @command{ffserver} acts as an HTTP server, accepting POST
|
||||
requests from @command{ffmpeg} to acquire the stream to publish, and
|
||||
serving RTSP clients or HTTP clients GET requests with the stream
|
||||
media content.
|
||||
|
||||
A feed is an @ref{FFM} stream created by @command{ffmpeg}, and sent to
|
||||
a port where @command{ffserver} is listening.
|
||||
|
||||
Each feed is identified by a unique name, corresponding to the name
|
||||
of the resource published on @command{ffserver}, and is configured by
|
||||
a dedicated @code{Feed} section in the configuration file.
|
||||
|
||||
The feed publish URL is given by:
|
||||
@example
|
||||
http://@var{ffserver_ip_address}:@var{http_port}/@var{feed_name}
|
||||
@end example
|
||||
|
||||
where @var{ffserver_ip_address} is the IP address of the machine where
|
||||
@command{ffserver} is installed, @var{http_port} is the port number of
|
||||
the HTTP server (configured through the @option{HTTPPort} option), and
|
||||
@var{feed_name} is the name of the corresponding feed defined in the
|
||||
configuration file.
|
||||
|
||||
Each feed is associated with a file which is stored on disk. This stored
file is used to allow sending pre-recorded data to a player as fast as
possible when new content is added in real-time to the stream.
|
||||
|
||||
A "live-stream" or "stream" is a resource published by
|
||||
@command{ffserver}, and made accessible through the HTTP protocol to
|
||||
clients.
|
||||
|
||||
A stream can be connected to a feed, or to a file. In the first case,
|
||||
the published stream is forwarded from the corresponding feed
|
||||
generated by a running instance of @command{ffmpeg}, in the second
|
||||
case the stream is read from a pre-recorded file.
|
||||
|
||||
Each stream is identified by a unique name, corresponding to the name
|
||||
of the resource served by @command{ffserver}, and is configured by
|
||||
a dedicated @code{Stream} section in the configuration file.
|
||||
|
||||
The stream access HTTP URL is given by:
|
||||
@example
|
||||
http://@var{ffserver_ip_address}:@var{http_port}/@var{stream_name}[@var{options}]
|
||||
@end example
|
||||
|
||||
The stream access RTSP URL is given by:
|
||||
@example
|
||||
http://@var{ffserver_ip_address}:@var{rtsp_port}/@var{stream_name}[@var{options}]
|
||||
@end example
|
||||
|
||||
@var{stream_name} is the name of the corresponding stream defined in
|
||||
the configuration file. @var{options} is a list of options specified
|
||||
after the URL which affects how the stream is served by
|
||||
@command{ffserver}. @var{http_port} and @var{rtsp_port} are the HTTP
|
||||
and RTSP ports configured with the options @var{HTTPPort} and
|
||||
@var{RTSPPort} respectively.
|
||||
|
||||
In case the stream is associated to a feed, the encoding parameters
|
||||
must be configured in the stream configuration. They are sent to
|
||||
@command{ffmpeg} when setting up the encoding. This allows
|
||||
@command{ffserver} to define the encoding parameters used by
|
||||
the @command{ffmpeg} encoders.
|
||||
|
||||
The @command{ffmpeg} @option{override_ffserver} commandline option
|
||||
allows one to override the encoding parameters set by the server.
|
||||
|
||||
Multiple streams can be connected to the same feed.
|
||||
|
||||
For example, you can have a situation described by the following
|
||||
graph:
|
||||
@example
|
||||
_________ __________
|
||||
| | | |
|
||||
ffmpeg 1 -----| feed 1 |-----| stream 1 |
|
||||
\ |_________|\ |__________|
|
||||
\ \
|
||||
\ \ __________
|
||||
\ \ | |
|
||||
\ \| stream 2 |
|
||||
\ |__________|
|
||||
\
|
||||
\ _________ __________
|
||||
\ | | | |
|
||||
\| feed 2 |-----| stream 3 |
|
||||
|_________| |__________|
|
||||
|
||||
_________ __________
|
||||
| | | |
|
||||
ffmpeg 2 -----| feed 3 |-----| stream 4 |
|
||||
|_________| |__________|
|
||||
|
||||
_________ __________
|
||||
| | | |
|
||||
| file 1 |-----| stream 5 |
|
||||
|_________| |__________|
|
||||
@end example
|
||||
|
||||
@anchor{FFM}
|
||||
@section FFM, FFM2 formats
|
||||
|
||||
FFM and FFM2 are formats used by ffserver. They allow storing a wide variety of
|
||||
video and audio streams and encoding options, and can store a moving time segment
|
||||
of an infinite movie or a whole movie.
|
||||
|
||||
FFM is version specific, and there is limited compatibility of FFM files
|
||||
generated by one version of ffmpeg/ffserver and another version of
|
||||
ffmpeg/ffserver. It may work but it is not guaranteed to work.
|
||||
|
||||
FFM2 is extensible while maintaining compatibility and should work between
|
||||
differing versions of tools. FFM2 is the default.
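
As a sketch (the feed name @file{feed1.ffm} and port 8090 are example values,
not defaults mandated here), a feed in FFM format is typically published by
pointing an @command{ffmpeg} encoder at the feed URL described above:

@example
ffmpeg -i INPUTFILE http://localhost:8090/feed1.ffm
@end example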
|
||||
|
||||
@section Status stream
|
||||
|
||||
@command{ffserver} supports an HTTP interface which exposes the
|
||||
current status of the server.
|
||||
ffserver supports an HTTP interface which exposes the current status
|
||||
of the server.
|
||||
|
||||
Simply point your browser to the address of the special status stream
|
||||
specified in the configuration file.
|
||||
@@ -181,8 +61,27 @@ ACL allow 192.168.0.0 192.168.255.255
|
||||
then the server will post a page with the status information when
|
||||
the special stream @file{status.html} is requested.
|
||||
|
||||
@section What can this do?
|
||||
|
||||
When properly configured and running, you can capture video and audio in real
|
||||
time from a suitable capture card, and stream it out over the Internet to
|
||||
either Windows Media Player or RealAudio player (with some restrictions).
|
||||
|
||||
It can also stream from files, though that is currently broken. Very often, a
|
||||
web server can be used to serve up the files just as well.
|
||||
|
||||
It can stream prerecorded video from .ffm files, though it is somewhat tricky
|
||||
to make it work correctly.
|
||||
|
||||
@section How do I make it work?
|
||||
|
||||
First, build the kit. It *really* helps to have installed LAME first. Then when
|
||||
you run the ffserver ./configure, make sure that you have the
|
||||
@code{--enable-libmp3lame} flag turned on.
|
||||
|
||||
LAME is important as it allows for streaming audio to Windows Media Player.
|
||||
Don't ask why the other audio types do not work.
|
||||
|
||||
As a simple test, just run the following two command lines where INPUTFILE
|
||||
is some file which you can decode with ffmpeg:
|
||||
|
||||
@@ -204,9 +103,40 @@ WARNING: trying to stream test1.mpg doesn't work with WMP as it tries to
|
||||
transfer the entire file before starting to play.
|
||||
The same is true of AVI files.
|
||||
|
||||
You should edit the @file{ffserver.conf} file to suit your needs (in
|
||||
terms of frame rates etc). Then install @command{ffserver} and
|
||||
@command{ffmpeg}, write a script to start them up, and off you go.
|
||||
@section What happens next?
|
||||
|
||||
You should edit the ffserver.conf file to suit your needs (in terms of
|
||||
frame rates etc). Then install ffserver and ffmpeg, write a script to start
|
||||
them up, and off you go.
|
||||
|
||||
@section Troubleshooting
|
||||
|
||||
@subsection I don't hear any audio, but video is fine.
|
||||
|
||||
Maybe you didn't install LAME, or got your ./configure statement wrong. Check
|
||||
the ffmpeg output to see if a line referring to MP3 is present. If not, then
|
||||
your configuration was incorrect. If it is, then maybe your wiring is not
|
||||
set up correctly. Maybe the sound card is not getting data from the right
|
||||
input source. Maybe you have a really awful audio interface (like I do)
|
||||
that only captures in stereo and also requires that one channel be flipped.
|
||||
If you are one of these people, then export 'AUDIO_FLIP_LEFT=1' before
|
||||
starting ffmpeg.
|
||||
|
||||
@subsection The audio and video lose sync after a while.
|
||||
|
||||
Yes, they do.
|
||||
|
||||
@subsection After a long while, the video update rate goes way down in WMP.
|
||||
|
||||
Yes, it does. Who knows why?
|
||||
|
||||
@subsection WMP 6.4 behaves differently to WMP 7.
|
||||
|
||||
Yes, it does. Any thoughts on this would be gratefully received. These
|
||||
differences extend to embedding WMP into a web page. [There are two
|
||||
object IDs that you can use: The old one, which does not play well, and
|
||||
the new one, which does (both tested on the same system). However,
|
||||
I suspect that the new one is not available unless you have installed WMP 7].
|
||||
|
||||
@section What else can it do?
|
||||
|
||||
@@ -247,6 +177,9 @@ specify a time. In addition, ffserver will skip frames until a key_frame
|
||||
is found. This further reduces the startup delay by not transferring data
|
||||
that will be discarded.
|
||||
|
||||
* You may want to adjust the MaxBandwidth in the ffserver.conf to limit
|
||||
the amount of bandwidth consumed by live streams.
|
||||
|
||||
@section Why does the ?buffer / Preroll stop working after a time?
|
||||
|
||||
It turns out that (on my machine at least) the number of frames successfully
|
||||
@@ -280,6 +213,19 @@ You use this by adding the ?date= to the end of the URL for the stream.
|
||||
For example: @samp{http://localhost:8080/test.asf?date=2002-07-26T23:05:00}.
|
||||
@c man end
|
||||
|
||||
@section What is FFM, FFM2
|
||||
|
||||
FFM and FFM2 are formats used by ffserver. They allow storing a wide variety of
|
||||
video and audio streams and encoding options, and can store a moving time segment
|
||||
of an infinite movie or a whole movie.
|
||||
|
||||
FFM is version specific, and there is limited compatibility of FFM files
|
||||
generated by one version of ffmpeg/ffserver and another version of
|
||||
ffmpeg/ffserver. It may work but it is not guaranteed to work.
|
||||
|
||||
FFM2 is extensible while maintaining compatibility and should work between
|
||||
differing versions of tools. FFM2 is the default.
|
||||
|
||||
@chapter Options
|
||||
@c man begin OPTIONS
|
||||
|
||||
@@ -289,562 +235,15 @@ For example: @samp{http://localhost:8080/test.asf?date=2002-07-26T23:05:00}.
|
||||
|
||||
@table @option
|
||||
@item -f @var{configfile}
|
||||
Read configuration file @file{configfile}. If not specified it will
|
||||
read by default from @file{/etc/ffserver.conf}.
|
||||
|
||||
Use @file{configfile} instead of @file{/etc/ffserver.conf}.
|
||||
@item -n
|
||||
Enable no-launch mode. This option disables all the @code{Launch}
|
||||
directives within the various @code{<Feed>} sections. Since
|
||||
@command{ffserver} will not launch any @command{ffmpeg} instances, you
|
||||
will have to launch them manually.
|
||||
|
||||
Enable no-launch mode. This option disables all the Launch directives
|
||||
within the various <Stream> sections. Since ffserver will not launch
|
||||
any ffmpeg instances, you will have to launch them manually.
|
||||
@item -d
|
||||
Enable debug mode. This option increases log verbosity, and directs
|
||||
log messages to stdout. When specified, the @option{CustomLog} option
|
||||
is ignored.
|
||||
Enable debug mode. This option increases log verbosity, directs log
|
||||
messages to stdout.
|
||||
@end table
|
||||
|
||||
@chapter Configuration file syntax
|
||||
|
||||
@command{ffserver} reads a configuration file containing global
|
||||
options and settings for each stream and feed.
|
||||
|
||||
The configuration file consists of global options and dedicated
|
||||
sections, which must be introduced by "<@var{SECTION_NAME}
|
||||
@var{ARGS}>" on a separate line and must be terminated by a line in
|
||||
the form "</@var{SECTION_NAME}>". @var{ARGS} is optional.
|
||||
|
||||
Currently the following sections are recognized: @samp{Feed},
|
||||
@samp{Stream}, @samp{Redirect}.
|
||||
|
||||
A line starting with @code{#} is ignored and treated as a comment.
|
||||
|
||||
Name of options and sections are case-insensitive.
|
||||
|
||||
@section ACL syntax
|
||||
An ACL (Access Control List) specifies the addresses which are allowed
to access a given stream, or to write a given feed.
|
||||
|
||||
It accepts the following forms:
|
||||
@itemize
|
||||
@item
|
||||
Allow/deny access to @var{address}.
|
||||
@example
|
||||
ACL ALLOW <address>
|
||||
ACL DENY <address>
|
||||
@end example
|
||||
|
||||
@item
|
||||
Allow/deny access to ranges of addresses from @var{first_address} to
|
||||
@var{last_address}.
|
||||
@example
|
||||
ACL ALLOW <first_address> <last_address>
|
||||
ACL DENY <first_address> <last_address>
|
||||
@end example
|
||||
@end itemize
|
||||
|
||||
You can repeat the ACL allow/deny as often as you like. It is on a per
|
||||
stream basis. The first match defines the action. If there are no matches,
|
||||
then the default is the inverse of the last ACL statement.
|
||||
|
||||
Thus 'ACL allow localhost' only allows access from localhost.
|
||||
'ACL deny 1.0.0.0 1.255.255.255' would deny the whole of network 1 and
|
||||
allow everybody else.
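
Putting this together, a hedged sketch of per-stream ACLs (the stream name and
addresses are arbitrary examples) could be:

@example
<Stream restricted.asf>
Feed feed1.ffm
Format asf
# first match wins: block one host, then allow the rest of the LAN;
# everybody else falls through to the inverse of the last statement (deny)
ACL deny 192.168.0.10
ACL allow 192.168.0.0 192.168.255.255
</Stream>
@end example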
|
||||
|
||||
@section Global options
|
||||
@table @option
|
||||
@item HTTPPort @var{port_number}
|
||||
@item Port @var{port_number}
|
||||
@item RTSPPort @var{port_number}
|
||||
|
||||
@var{HTTPPort} sets the HTTP server listening TCP port number,
|
||||
@var{RTSPPort} sets the RTSP server listening TCP port number.
|
||||
|
||||
@var{Port} is the equivalent of @var{HTTPPort} and is deprecated.
|
||||
|
||||
You must select a different port from your standard HTTP web server if
|
||||
it is running on the same computer.
|
||||
|
||||
If not specified, no corresponding server will be created.
|
||||
|
||||
@item HTTPBindAddress @var{ip_address}
|
||||
@item BindAddress @var{ip_address}
|
||||
@item RTSPBindAddress @var{ip_address}
|
||||
Set address on which the HTTP/RTSP server is bound. Only useful if you
|
||||
have several network interfaces.
|
||||
|
||||
@var{BindAddress} is the equivalent of @var{HTTPBindAddress} and is
|
||||
deprecated.
|
||||
|
||||
@item MaxHTTPConnections @var{n}
|
||||
Set number of simultaneous HTTP connections that can be handled. It
|
||||
has to be defined @emph{before} the @option{MaxClients} parameter,
|
||||
since it defines the @option{MaxClients} maximum limit.
|
||||
|
||||
Default value is 2000.
|
||||
|
||||
@item MaxClients @var{n}
|
||||
Set number of simultaneous requests that can be handled. Since
|
||||
@command{ffserver} is very fast, it is more likely that you will want
|
||||
to leave this high and use @option{MaxBandwidth}.
|
||||
|
||||
Default value is 5.
|
||||
|
||||
@item MaxBandwidth @var{kbps}
|
||||
Set the maximum amount of kbit/sec that you are prepared to consume
|
||||
when streaming to clients.
|
||||
|
||||
Default value is 1000.
|
||||
|
||||
@item CustomLog @var{filename}
|
||||
Set access log file (uses standard Apache log file format). '-' is the
|
||||
standard output.
|
||||
|
||||
If not specified @command{ffserver} will produce no log.
|
||||
|
||||
In case the commandline option @option{-d} is specified this option is
|
||||
ignored, and the log is written to standard output.
|
||||
|
||||
@item NoDaemon
|
||||
Set no-daemon mode. This option is currently ignored since now
|
||||
@command{ffserver} will always work in no-daemon mode, and is
|
||||
deprecated.
|
||||
|
||||
@item UseDefaults
|
||||
@item NoDefaults
|
||||
Control whether default codec options are used for all the streams or not.
Each stream may override this setting on its own. Default is @var{UseDefaults}.
The latest occurrence overrides the previous ones if there are multiple definitions.
|
||||
@end table
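
A hedged sketch of a global section using these options (all values are
examples, not recommended settings beyond the defaults stated above):

@example
HTTPPort 8090
HTTPBindAddress 0.0.0.0
MaxHTTPConnections 2000
MaxClients 1000
MaxBandwidth 2000
CustomLog -
@end example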
|
||||
|
||||
@section Feed section
|
||||
|
||||
A Feed section defines a feed provided to @command{ffserver}.
|
||||
|
||||
Each live feed contains one video and/or audio sequence coming from an
|
||||
@command{ffmpeg} encoder or another @command{ffserver}. This sequence
|
||||
may be encoded simultaneously with several codecs at several
|
||||
resolutions.
|
||||
|
||||
A feed instance specification is introduced by a line in the form:
|
||||
@example
|
||||
<Feed FEED_FILENAME>
|
||||
@end example
|
||||
|
||||
where @var{FEED_FILENAME} specifies the unique name of the FFM stream.
|
||||
|
||||
The following options are recognized within a Feed section.
|
||||
|
||||
@table @option
|
||||
@item File @var{filename}
|
||||
@item ReadOnlyFile @var{filename}
|
||||
Set the path where the feed file is stored on disk.
|
||||
|
||||
If not specified, the @file{/tmp/FEED.ffm} is assumed, where
|
||||
@var{FEED} is the feed name.
|
||||
|
||||
If @option{ReadOnlyFile} is used the file is marked as read-only and
|
||||
it will not be deleted or updated.
|
||||
|
||||
@item Truncate
|
||||
Truncate the feed file, rather than appending to it. By default
|
||||
@command{ffserver} will append data to the file, until the maximum
|
||||
file size value is reached (see @option{FileMaxSize} option).
|
||||
|
||||
@item FileMaxSize @var{size}
|
||||
Set maximum size of the feed file in bytes. 0 means unlimited. The
|
||||
postfixes @code{K} (2^10), @code{M} (2^20), and @code{G} (2^30) are
|
||||
recognized.
|
||||
|
||||
Default value is 5M.
|
||||
|
||||
@item Launch @var{args}
|
||||
Launch an @command{ffmpeg} command when creating @command{ffserver}.
|
||||
|
||||
@var{args} must be a sequence of arguments to be provided to an
|
||||
@command{ffmpeg} instance. The first provided argument is ignored, and
|
||||
it is replaced by a path with the same dirname of the @command{ffserver}
|
||||
instance, followed by the remaining arguments and terminated with a
|
||||
path corresponding to the feed.
|
||||
|
||||
When the launched process exits, @command{ffserver} will launch
|
||||
another program instance.
|
||||
|
||||
In case you need a more complex @command{ffmpeg} configuration,
|
||||
e.g. if you need to generate multiple FFM feeds with a single
|
||||
@command{ffmpeg} instance, you should launch @command{ffmpeg} by hand.
|
||||
|
||||
This option is ignored in case the commandline option @option{-n} is
|
||||
specified.
|
||||
|
||||
@item ACL @var{spec}
|
||||
Specify the list of IP addresses which are allowed or denied to write to
the feed. Multiple ACL options can be specified.
|
||||
@end table
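
A hedged Feed section sketch combining the options above (the file names and
the capture device path are assumptions):

@example
<Feed feed1.ffm>
File /tmp/feed1.ffm
FileMaxSize 5M
# optionally have ffserver launch the encoder itself
Launch ffmpeg -i /dev/video0
ACL allow 127.0.0.1
</Feed>
@end example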
|
||||
|
||||
@section Stream section
|
||||
|
||||
A Stream section defines a stream provided by @command{ffserver}, and
|
||||
identified by a single name.
|
||||
|
||||
The stream is sent when answering a request containing the stream
|
||||
name.
|
||||
|
||||
A stream section must be introduced by the line:
|
||||
@example
|
||||
<Stream STREAM_NAME>
|
||||
@end example
|
||||
|
||||
where @var{STREAM_NAME} specifies the unique name of the stream.
|
||||
|
||||
The following options are recognized within a Stream section.
|
||||
|
||||
Encoding options are marked with the @emph{encoding} tag, and they are
|
||||
used to set the encoding parameters, and are mapped to libavcodec
|
||||
encoding options. Not all encoding options are supported, in
|
||||
particular it is not possible to set encoder private options. In order
|
||||
to override the encoding options specified by @command{ffserver}, you
|
||||
can use the @command{ffmpeg} @option{override_ffserver} commandline
|
||||
option.
|
||||
|
||||
Only one of the @option{Feed} and @option{File} options should be set.
|
||||
|
||||
@table @option
|
||||
@item Feed @var{feed_name}
|
||||
Set the input feed. @var{feed_name} must correspond to an existing
|
||||
feed defined in a @code{Feed} section.
|
||||
|
||||
When this option is set, encoding options are used to setup the
|
||||
encoding operated by the remote @command{ffmpeg} process.
|
||||
|
||||
@item File @var{filename}
|
||||
Set the filename of the pre-recorded input file to stream.
|
||||
|
||||
When this option is set, encoding options are ignored and the input
|
||||
file content is re-streamed as is.
|
||||
|
||||
@item Format @var{format_name}
|
||||
Set the format of the output stream.
|
||||
|
||||
Must be the name of a format recognized by FFmpeg. If set to
|
||||
@samp{status}, it is treated as a status stream.
|
||||
|
||||
@item InputFormat @var{format_name}
|
||||
Set input format. If not specified, it is automatically guessed.
|
||||
|
||||
@item Preroll @var{n}
|
||||
Set this to the number of seconds backwards in time to start. Note that
|
||||
most players will buffer 5-10 seconds of video, and also you need to allow
|
||||
for a keyframe to appear in the data stream.
|
||||
|
||||
Default value is 0.
|
||||
|
||||
@item StartSendOnKey
|
||||
Do not send stream until it gets the first key frame. By default
|
||||
@command{ffserver} will send data immediately.
|
||||
|
||||
@item MaxTime @var{n}
|
||||
Set the number of seconds to run. This value sets the maximum duration
of the stream a client will be able to receive.
|
||||
|
||||
A value of 0 means that no limit is set on the stream duration.
|
||||
|
||||
@item ACL @var{spec}
|
||||
Set ACL for the stream.
|
||||
|
||||
@item DynamicACL @var{spec}
|
||||
|
||||
@item RTSPOption @var{option}
|
||||
|
||||
@item MulticastAddress @var{address}
|
||||
|
||||
@item MulticastPort @var{port}
|
||||
|
||||
@item MulticastTTL @var{integer}
|
||||
|
||||
@item NoLoop
|
||||
|
||||
@item FaviconURL @var{url}
|
||||
Set favicon (favourite icon) for the server status page. It is ignored
|
||||
for regular streams.
|
||||
|
||||
@item Author @var{value}
|
||||
@item Comment @var{value}
|
||||
@item Copyright @var{value}
|
||||
@item Title @var{value}
|
||||
Set metadata corresponding to the option. All these options are
|
||||
deprecated in favor of @option{Metadata}.
|
||||
|
||||
@item Metadata @var{key} @var{value}
|
||||
Set metadata value on the output stream.
|
||||
|
||||
@item UseDefaults
|
||||
@item NoDefaults
|
||||
Control whether default codec options are used for the stream or not.
|
||||
Default is @var{UseDefaults} unless disabled globally.
|
||||
|
||||
@item NoAudio
|
||||
@item NoVideo
|
||||
Suppress audio/video.
|
||||
|
||||
@item AudioCodec @var{codec_name} (@emph{encoding,audio})
|
||||
Set audio codec.
|
||||
|
||||
@item AudioBitRate @var{rate} (@emph{encoding,audio})
|
||||
Set bitrate for the audio stream in kbits per second.
|
||||
|
||||
@item AudioChannels @var{n} (@emph{encoding,audio})
|
||||
Set number of audio channels.
|
||||
|
||||
@item AudioSampleRate @var{n} (@emph{encoding,audio})
|
||||
Set sampling frequency for audio. When using low bitrates, you should
|
||||
lower this frequency to 22050 or 11025. The supported frequencies
|
||||
depend on the selected audio codec.
|
||||
|
||||
@item AVOptionAudio [@var{codec}:]@var{option} @var{value} (@emph{encoding,audio})
|
||||
Set generic or private option for audio stream.
|
||||
Private option must be prefixed with codec name or codec must be defined before.
|
||||
|
||||
@item AVPresetAudio @var{preset} (@emph{encoding,audio})
|
||||
Set preset for audio stream.
|
||||
|
||||
@item VideoCodec @var{codec_name} (@emph{encoding,video})
|
||||
Set video codec.
|
||||
|
||||
@item VideoBitRate @var{n} (@emph{encoding,video})
|
||||
Set bitrate for the video stream in kbits per second.
|
||||
|
||||
@item VideoBitRateRange @var{range} (@emph{encoding,video})
|
||||
Set video bitrate range.
|
||||
|
||||
A range must be specified in the form @var{minrate}-@var{maxrate}, and
|
||||
specifies the @option{minrate} and @option{maxrate} encoding options
|
||||
expressed in kbits per second.
|
||||
|
||||
@item VideoBitRateRangeTolerance @var{n} (@emph{encoding,video})
|
||||
Set video bitrate tolerance in kbits per second.
|
||||
|
||||
@item PixelFormat @var{pixel_format} (@emph{encoding,video})
|
||||
Set video pixel format.
|
||||
|
||||
@item Debug @var{integer} (@emph{encoding,video})
|
||||
Set video @option{debug} encoding option.
|
||||
|
||||
@item Strict @var{integer} (@emph{encoding,video})
|
||||
Set video @option{strict} encoding option.
|
||||
|
||||
@item VideoBufferSize @var{n} (@emph{encoding,video})
|
||||
Set ratecontrol buffer size, expressed in KB.
|
||||
|
||||
@item VideoFrameRate @var{n} (@emph{encoding,video})
|
||||
Set number of video frames per second.
|
||||
|
||||
@item VideoSize (@emph{encoding,video})
|
||||
Set size of the video frame, must be an abbreviation or in the form
|
||||
@var{W}x@var{H}. See @ref{video size syntax,,the Video size section
|
||||
in the ffmpeg-utils(1) manual,ffmpeg-utils}.
|
||||
|
||||
Default value is @code{160x128}.
|
||||
|
||||
@item VideoIntraOnly (@emph{encoding,video})
|
||||
Transmit only intra frames (useful for low bitrates, but kills frame rate).
|
||||
|
||||
@item VideoGopSize @var{n} (@emph{encoding,video})
|
||||
If non-intra only, an intra frame is transmitted every VideoGopSize
|
||||
frames. Video synchronization can only begin at an intra frame.
|
||||
|
||||
@item VideoTag @var{tag} (@emph{encoding,video})
|
||||
Set video tag.
|
||||
|
||||
@item VideoHighQuality (@emph{encoding,video})
|
||||
@item Video4MotionVector (@emph{encoding,video})
|
||||
|
||||
@item BitExact (@emph{encoding,video})
|
||||
Set bitexact encoding flag.
|
||||
|
||||
@item IdctSimple (@emph{encoding,video})
|
||||
Set simple IDCT algorithm.
|
||||
|
||||
@item Qscale @var{n} (@emph{encoding,video})
|
||||
Enable constant quality encoding, and set video qscale (quantization
|
||||
scale) value, expressed in @var{n} QP units.
|
||||
|
||||
@item VideoQMin @var{n} (@emph{encoding,video})
|
||||
@item VideoQMax @var{n} (@emph{encoding,video})
|
||||
Set video qmin/qmax.
|
||||
|
||||
@item VideoQDiff @var{integer} (@emph{encoding,video})
|
||||
Set video @option{qdiff} encoding option.
|
||||
|
||||
@item LumiMask @var{float} (@emph{encoding,video})
|
||||
@item DarkMask @var{float} (@emph{encoding,video})
|
||||
Set @option{lumi_mask}/@option{dark_mask} encoding options.
|
||||
|
||||
@item AVOptionVideo [@var{codec}:]@var{option} @var{value} (@emph{encoding,video})
|
||||
Set generic or private option for video stream.
|
||||
Private option must be prefixed with codec name or codec must be defined before.
|
||||
|
||||
@item AVPresetVideo @var{preset} (@emph{encoding,video})
|
||||
Set preset for video stream.
|
||||
|
||||
@var{preset} must be the path of a preset file.
|
||||
@end table
|
||||
|
||||
@subsection Server status stream
|
||||
|
||||
A server status stream is a special stream which is used to show
|
||||
statistics about the @command{ffserver} operations.
|
||||
|
||||
It must be specified setting the option @option{Format} to
|
||||
@samp{status}.
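
For example (the stream name and ACL are chosen arbitrarily), a status stream
is declared as:

@example
<Stream status.html>
Format status
ACL allow localhost
</Stream>
@end example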
|
||||
|
||||
@section Redirect section
|
||||
|
||||
A redirect section specifies to which page a requested URL is
redirected.
|
||||
|
||||
A redirect section must be introduced by the line:
|
||||
@example
|
||||
<Redirect NAME>
|
||||
@end example
|
||||
|
||||
where @var{NAME} is the name of the page which should be redirected.
|
||||
|
||||
It only accepts the option @option{URL}, which specifies the redirection
URL.
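
A short sketch (the page name and the target URL are placeholders):

@example
<Redirect index.html>
URL http://www.ffmpeg.org/
</Redirect>
@end example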
|
||||
|
||||
@chapter Stream examples
|
||||
|
||||
@itemize
|
||||
@item
|
||||
Multipart JPEG
|
||||
@example
|
||||
<Stream test.mjpg>
|
||||
Feed feed1.ffm
|
||||
Format mpjpeg
|
||||
VideoFrameRate 2
|
||||
VideoIntraOnly
|
||||
NoAudio
|
||||
Strict -1
|
||||
</Stream>
|
||||
@end example
|
||||
|
||||
@item
|
||||
Single JPEG
|
||||
@example
|
||||
<Stream test.jpg>
|
||||
Feed feed1.ffm
|
||||
Format jpeg
|
||||
VideoFrameRate 2
|
||||
VideoIntraOnly
|
||||
VideoSize 352x240
|
||||
NoAudio
|
||||
Strict -1
|
||||
</Stream>
|
||||
@end example
|
||||
|
||||
@item
|
||||
Flash
|
||||
@example
|
||||
<Stream test.swf>
|
||||
Feed feed1.ffm
|
||||
Format swf
|
||||
VideoFrameRate 2
|
||||
VideoIntraOnly
|
||||
NoAudio
|
||||
</Stream>
|
||||
@end example
|
||||
|
||||
@item
|
||||
ASF compatible
|
||||
@example
|
||||
<Stream test.asf>
|
||||
Feed feed1.ffm
|
||||
Format asf
|
||||
VideoFrameRate 15
|
||||
VideoSize 352x240
|
||||
VideoBitRate 256
|
||||
VideoBufferSize 40
|
||||
VideoGopSize 30
|
||||
AudioBitRate 64
|
||||
StartSendOnKey
|
||||
</Stream>
|
||||
@end example
|
||||
|
||||
@item
|
||||
MP3 audio
|
||||
@example
|
||||
<Stream test.mp3>
|
||||
Feed feed1.ffm
|
||||
Format mp2
|
||||
AudioCodec mp3
|
||||
AudioBitRate 64
|
||||
AudioChannels 1
|
||||
AudioSampleRate 44100
|
||||
NoVideo
|
||||
</Stream>
|
||||
@end example
|
||||
|
||||
@item
|
||||
Ogg Vorbis audio
|
||||
@example
|
||||
<Stream test.ogg>
|
||||
Feed feed1.ffm
|
||||
Metadata title "Stream title"
|
||||
AudioBitRate 64
|
||||
AudioChannels 2
|
||||
AudioSampleRate 44100
|
||||
NoVideo
|
||||
</Stream>
|
||||
@end example
|
||||
|
||||
@item
|
||||
Real with audio only at 32 kbits
|
||||
@example
|
||||
<Stream test.ra>
|
||||
Feed feed1.ffm
|
||||
Format rm
|
||||
AudioBitRate 32
|
||||
NoVideo
|
||||
</Stream>
|
||||
@end example
|
||||
|
||||
@item
|
||||
Real with audio and video at 64 kbits
|
||||
@example
|
||||
<Stream test.rm>
|
||||
Feed feed1.ffm
|
||||
Format rm
|
||||
AudioBitRate 32
|
||||
VideoBitRate 128
|
||||
VideoFrameRate 25
|
||||
VideoGopSize 25
|
||||
</Stream>
|
||||
@end example
|
||||
|
||||
@item
|
||||
For a stream coming from a file you only need to set the input filename
and optionally a new format.
|
||||
|
||||
@example
|
||||
<Stream file.rm>
|
||||
File "/usr/local/httpd/htdocs/tlive.rm"
|
||||
NoAudio
|
||||
</Stream>
|
||||
@end example
|
||||
|
||||
@example
|
||||
<Stream file.asf>
|
||||
File "/usr/local/httpd/htdocs/test.asf"
|
||||
NoAudio
|
||||
Metadata author "Me"
|
||||
Metadata copyright "Super MegaCorp"
|
||||
Metadata title "Test stream from disk"
|
||||
Metadata comment "Test comment"
|
||||
</Stream>
|
||||
@end example
|
||||
@end itemize
|
||||
|
||||
@c man end
|
||||
|
||||
@include config.texi
|
||||
|
@@ -3,7 +3,7 @@ representing a number as input, which may be followed by one of the SI
|
||||
unit prefixes, for example: 'K', 'M', or 'G'.
|
||||
|
||||
If 'i' is appended to the SI unit prefix, the complete prefix will be
|
||||
interpreted as a unit prefix for binary multiples, which are based on
|
||||
interpreted as a unit prefix for binary multiples, which are based on
|
||||
powers of 1024 instead of powers of 1000. Appending 'B' to the SI unit
|
||||
prefix multiplies the value by 8. This allows using, for example:
|
||||
'KB', 'MiB', 'G' and 'B' as number suffixes.
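
As a worked illustration of these rules (derived directly from the description
above):

@example
1K   = 1000          1Ki  = 1024
1KB  = 8000          1KiB = 8192
1M   = 1000000       1Mi  = 1048576
@end example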
|
||||
@@ -44,15 +44,8 @@ streams of this type.
|
||||
If @var{stream_index} is given, then it matches the stream with number @var{stream_index}
|
||||
in the program with the id @var{program_id}. Otherwise, it matches all streams in the
|
||||
program.
|
||||
@item #@var{stream_id} or i:@var{stream_id}
|
||||
Match the stream by stream id (e.g. PID in MPEG-TS container).
|
||||
@item m:@var{key}[:@var{value}]
|
||||
Matches streams with the metadata tag @var{key} having the specified value. If
|
||||
@var{value} is not given, matches streams that contain the given tag with any
|
||||
value.
|
||||
|
||||
Note that in @command{ffmpeg}, matching by metadata will only work properly for
|
||||
input files.
|
||||
@item #@var{stream_id}
|
||||
Matches the stream by a format-specific ID.
|
||||
@end table
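
As a hedged example (the codec choices and file names are assumptions, not part
of the specifier syntax itself), stream specifiers are appended to per-stream
options such as @option{-c}:

@example
# copy the video stream, re-encode all audio streams to MP2
ffmpeg -i input.mkv -c:v copy -c:a mp2 output.mkv
@end example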
|
||||
|
||||
@section Generic options
|
||||
@@ -103,10 +96,7 @@ Print detailed information about the filter name @var{filter_name}. Use the
|
||||
Show version.
|
||||
|
||||
@item -formats
|
||||
Show available formats (including devices).
|
||||
|
||||
@item -devices
|
||||
Show available devices.
|
||||
Show available formats.
|
||||
|
||||
@item -codecs
|
||||
Show all codecs known to libavcodec.
|
||||
@@ -141,22 +131,6 @@ Show channel names and standard channel layouts.
|
||||
@item -colors
|
||||
Show recognized color names.
|
||||
|
||||
@item -sources @var{device}[,@var{opt1}=@var{val1}[,@var{opt2}=@var{val2}]...]
|
||||
Show autodetected sources of the input device.
|
||||
Some devices may provide system-dependent source names that cannot be autodetected.
|
||||
The returned list cannot be assumed to be always complete.
|
||||
@example
|
||||
ffmpeg -sources pulse,server=192.168.0.4
|
||||
@end example
|
||||
|
||||
@item -sinks @var{device}[,@var{opt1}=@var{val1}[,@var{opt2}=@var{val2}]...]
|
||||
Show autodetected sinks of the output device.
|
||||
Some devices may provide system-dependent sink names that cannot be autodetected.
|
||||
The returned list cannot be assumed to be always complete.
|
||||
@example
|
||||
ffmpeg -sinks pulse,server=192.168.0.4
|
||||
@end example
|
||||
|
||||
@item -loglevel [repeat+]@var{loglevel} | -v [repeat+]@var{loglevel}
|
||||
Set the logging level used by the library.
|
||||
Adding "repeat+" indicates that repeated log output should not be compressed
|
||||
@@ -165,27 +139,27 @@ omitted. "repeat" can also be used alone.
|
||||
If "repeat" is used alone, and with no prior loglevel set, the default
|
||||
loglevel will be used. If multiple loglevel parameters are given, using
|
||||
'repeat' will not change the loglevel.
|
||||
@var{loglevel} is a string or a number containing one of the following values:
|
||||
@var{loglevel} is a number or a string containing one of the following values:
|
||||
@table @samp
|
||||
@item quiet, -8
|
||||
@item quiet
|
||||
Show nothing at all; be silent.
|
||||
@item panic, 0
|
||||
@item panic
|
||||
Only show fatal errors which could lead the process to crash, such as
an assert failure. This is not currently used for anything.
|
||||
@item fatal, 8
|
||||
@item fatal
|
||||
Only show fatal errors. These are errors after which the process absolutely
cannot continue.
|
||||
@item error, 16
|
||||
@item error
|
||||
Show all errors, including ones which can be recovered from.
|
||||
@item warning, 24
|
||||
@item warning
|
||||
Show all warnings and errors. Any message related to possibly
|
||||
incorrect or unexpected events will be shown.
|
||||
@item info, 32
|
||||
@item info
|
||||
Show informative messages during processing. This is in addition to
|
||||
warnings and errors. This is the default value.
|
||||
@item verbose, 40
|
||||
@item verbose
|
||||
Same as @code{info}, except more verbose.
|
||||
@item debug, 48
|
||||
@item debug
|
||||
Show everything, including debugging information.
|
||||
@end table
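
For instance (command and file names are placeholders), the level can be given
by name or, per the list above, by its numerical value:

@example
ffmpeg -loglevel warning -i input.avi output.mp4
# equivalent, using the numerical value listed above
ffmpeg -v 24 -i input.avi output.mp4
@end example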
|
||||
|
||||
@@ -204,39 +178,22 @@ directory.
|
||||
This file can be useful for bug reports.
|
||||
It also implies @code{-loglevel verbose}.
|
||||
|
||||
Setting the environment variable @env{FFREPORT} to any value has the
|
||||
Setting the environment variable @code{FFREPORT} to any value has the
|
||||
same effect. If the value is a ':'-separated key=value sequence, these
|
||||
options will affect the report; option values must be escaped if they
|
||||
options will affect the report; option values must be escaped if they
|
||||
contain special characters or the options delimiter ':' (see the
|
||||
``Quoting and escaping'' section in the ffmpeg-utils manual).
|
||||
|
||||
The following options are recognized:
|
||||
``Quoting and escaping'' section in the ffmpeg-utils manual). The
|
||||
following option is recognized:
|
||||
@table @option
|
||||
@item file
|
||||
set the file name to use for the report; @code{%p} is expanded to the name
|
||||
of the program, @code{%t} is expanded to a timestamp, @code{%%} is expanded
|
||||
to a plain @code{%}
|
||||
@item level
|
||||
set the log verbosity level using a numerical value (see @code{-loglevel}).
|
||||
@end table
|
||||
|
||||
For example, to output a report to a file named @file{ffreport.log}
|
||||
using a log level of @code{32} (alias for log level @code{info}):
|
||||
|
||||
@example
|
||||
FFREPORT=file=ffreport.log:level=32 ffmpeg -i input output
|
||||
@end example
|
||||
|
||||
Errors in parsing the environment variable are not fatal, and will not
|
||||
appear in the report.
|
||||
|
||||
@item -hide_banner
|
||||
Suppress printing banner.
|
||||
|
||||
All FFmpeg tools will normally show a copyright notice, build options
|
||||
and library versions. This option can be used to suppress printing
|
||||
this information.
|
||||
|
||||
@item -cpuflags flags (@emph{global})
|
||||
Allows setting and clearing cpu flags. This option is intended
|
||||
for testing. Do not use it unless you know what you're doing.
|
||||
@@ -293,43 +250,6 @@ Possible flags for this option are:
|
||||
@end table
|
||||
@end table
|
||||
|
||||
@item -opencl_bench
|
||||
This option is used to benchmark all available OpenCL devices and print the
|
||||
results. This option is only available when FFmpeg has been compiled with
|
||||
@code{--enable-opencl}.
|
||||
|
||||
When FFmpeg is configured with @code{--enable-opencl}, the options for the
|
||||
global OpenCL context are set via @option{-opencl_options}. See the
|
||||
"OpenCL Options" section in the ffmpeg-utils manual for the complete list of
|
||||
supported options. Amongst others, these options include the ability to select
|
||||
a specific platform and device to run the OpenCL code on. By default, FFmpeg
|
||||
will run on the first device of the first platform. While the options for the
|
||||
global OpenCL context provide flexibility to the user in selecting the OpenCL
|
||||
device of their choice, most users would probably want to select the fastest
|
||||
OpenCL device for their system.
|
||||
|
||||
This option assists the selection of the most efficient configuration by
|
||||
identifying the appropriate device for the user's system. The built-in
|
||||
benchmark is run on all the OpenCL devices and the performance is measured for
|
||||
each device. The devices in the results list are sorted based on their
|
||||
performance with the fastest device listed first. The user can subsequently
|
||||
invoke @command{ffmpeg} using the device deemed most appropriate via
|
||||
@option{-opencl_options} to obtain the best performance for the OpenCL
|
||||
accelerated code.
|
||||
|
||||
Typical usage to use the fastest OpenCL device involves the following steps.
|
||||
|
||||
Run the command:
|
||||
@example
|
||||
ffmpeg -opencl_bench
|
||||
@end example
|
||||
Note down the platform ID (@var{pidx}) and device ID (@var{didx}) of the first
|
||||
i.e. fastest device in the list.
|
||||
Select the platform and device using the command:
|
||||
@example
|
||||
ffmpeg -opencl_options platform_idx=@var{pidx}:device_idx=@var{didx} ...
|
||||
@end example
|
||||
|
||||
@item -opencl_options options (@emph{global})
|
||||
Set OpenCL environment options. This option is only available when
|
||||
FFmpeg has been compiled with @code{--enable-opencl}.
|
||||
|
3491 doc/filters.texi
File diff suppressed because it is too large
@@ -55,10 +55,6 @@ Do not merge side data.
|
||||
Enable RTP MP4A-LATM payload.
|
||||
@item nobuffer
|
||||
Reduce the latency introduced by optional buffering.
|
||||
@item bitexact
|
||||
Only write platform-, build- and time-independent data.
|
||||
This ensures that file and data checksums are reproducible and match between
|
||||
platforms. Its primary use is for regression testing.
|
||||
@end table
|
||||
|
||||
@item seek2any @var{integer} (@emph{input})
|
||||
@@ -129,26 +125,18 @@ Consider things that a sane encoder should not do as an error.
|
||||
Use wallclock as timestamps.
|
||||
|
||||
@item avoid_negative_ts @var{integer} (@emph{output})
|
||||
|
||||
Possible values:
|
||||
@table @samp
|
||||
@item make_non_negative
|
||||
Shift timestamps to make them non-negative.
|
||||
Also note that this affects only leading negative timestamps, and not
|
||||
non-monotonic negative timestamps.
|
||||
@item make_zero
|
||||
Shift timestamps so that the first timestamp is 0.
|
||||
@item auto (default)
|
||||
Enables shifting when required by the target format.
|
||||
@item disabled
|
||||
Disables shifting of timestamps.
|
||||
@end table
|
||||
Shift timestamps to make them non-negative. A value of 1 enables shifting,
|
||||
a value of 0 disables it, the default value of -1 enables shifting
|
||||
when required by the target format.
|
||||
|
||||
When shifting is enabled, all output timestamps are shifted by the
|
||||
same amount. Audio, video, and subtitles desynching and relative
|
||||
timestamp differences are preserved compared to how they would have
|
||||
been without shifting.
|
||||
|
||||
Also note that this affects only leading negative timestamps, and not
|
||||
non-monotonic negative timestamps.
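For example, assuming the symbolic values above can be passed directly on the
command line, shifting the first timestamp to 0 during a stream copy could
look like:
@example
ffmpeg -i INPUT -c copy -avoid_negative_ts make_zero OUTPUT
@end example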
|
||||
|
||||
@item skip_initial_bytes @var{integer} (@emph{input})
|
||||
Set the number of bytes to skip before reading the header and frames.
Default is 0.
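For example, assuming the input starts with a hypothetical 1024-byte
proprietary header that should be ignored:
@example
ffmpeg -skip_initial_bytes 1024 -i INPUT OUTPUT
@end example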
|
||||
@@ -160,30 +148,6 @@ Correct single timestamp overflows if set to 1. Default is 1.
|
||||
Flush the underlying I/O stream after each packet. Default 1 enables it, and
|
||||
has the effect of reducing the latency; 0 disables it and may slightly
|
||||
increase performance in some cases.
|
||||
|
||||
@item output_ts_offset @var{offset} (@emph{output})
|
||||
Set the output time offset.
|
||||
|
||||
@var{offset} must be a time duration specification,
|
||||
see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
|
||||
|
||||
The offset is added by the muxer to the output timestamps.
|
||||
|
||||
Specifying a positive offset means that the corresponding streams are
delayed by the time duration specified in @var{offset}. Default value
is @code{0} (meaning that no offset is applied).
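For example, assuming a plain stream copy, all output timestamps could be
delayed by 10 seconds with:
@example
ffmpeg -i INPUT -c copy -output_ts_offset 10 OUTPUT
@end example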
|
||||
|
||||
@item format_whitelist @var{list} (@emph{input})
"," separated list of allowed demuxers. By default all are allowed.
|
||||
|
||||
@item dump_separator @var{string} (@emph{input})
Separator used to separate the fields printed on the command line about the
stream parameters.
For example, to separate the fields with newlines and indentation:
|
||||
@example
|
||||
ffprobe -dump_separator "
|
||||
" -i ~/videos/matrixbench_mpeg2.mpg
|
||||
@end example
|
||||
@end table
|
||||
|
||||
@c man end FORMAT OPTIONS
|
||||
@@ -219,10 +183,6 @@ The exact semantics of stream specifiers is defined by the
|
||||
@code{avformat_match_stream_specifier()} function declared in the
|
||||
@file{libavformat/avformat.h} header.
|
||||
|
||||
@ifclear config-writeonly
|
||||
@include demuxers.texi
|
||||
@end ifclear
|
||||
@ifclear config-readonly
|
||||
@include muxers.texi
|
||||
@end ifclear
|
||||
@include metadata.texi
|
||||
|
111 doc/general.texi
@@ -1,5 +1,4 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle General Documentation
|
||||
@titlepage
|
||||
@@ -109,14 +108,6 @@ Go to @url{http://www.wavpack.com/} and follow the instructions for
|
||||
installing the library. Then pass @code{--enable-libwavpack} to configure to
|
||||
enable it.
|
||||
|
||||
@section OpenH264
|
||||
|
||||
FFmpeg can make use of the OpenH264 library for H.264 encoding.
|
||||
|
||||
Go to @url{http://www.openh264.org/} and follow the instructions for
|
||||
installing the library. Then pass @code{--enable-libopenh264} to configure to
|
||||
enable it.
|
||||
|
||||
@section x264
|
||||
|
||||
FFmpeg can make use of the x264 library for H.264 encoding.
|
||||
@@ -131,20 +122,6 @@ x264 is under the GNU Public License Version 2 or later
|
||||
details), you must upgrade FFmpeg's license to GPL in order to use it.
|
||||
@end float
|
||||
|
||||
@section x265
|
||||
|
||||
FFmpeg can make use of the x265 library for HEVC encoding.
|
||||
|
||||
Go to @url{http://x265.org/developers.html} and follow the instructions
|
||||
for installing the library. Then pass @code{--enable-libx265} to configure
|
||||
to enable it.
|
||||
|
||||
@float NOTE
|
||||
x265 is under the GNU Public License Version 2 or later
|
||||
(see @url{http://www.gnu.org/licenses/old-licenses/gpl-2.0.html} for
|
||||
details), you must upgrade FFmpeg's license to GPL in order to use it.
|
||||
@end float
|
||||
|
||||
@section libilbc
|
||||
|
||||
iLBC is a narrowband speech codec that has been made freely available
|
||||
@@ -152,7 +129,7 @@ by Google as part of the WebRTC project. libilbc is a packaging friendly
|
||||
copy of the iLBC codec. FFmpeg can make use of the libilbc library for
|
||||
iLBC encoding and decoding.
|
||||
|
||||
Go to @url{https://github.com/TimothyGu/libilbc} and follow the instructions for
|
||||
Go to @url{https://github.com/dekkers/libilbc} and follow the instructions for
|
||||
installing the library. Then pass @code{--enable-libilbc} to configure to
|
||||
enable it.
|
||||
|
||||
@@ -171,27 +148,6 @@ libzvbi is licensed under the GNU General Public License Version 2 or later
|
||||
you must upgrade FFmpeg's license to GPL in order to use it.
|
||||
@end float
|
||||
|
||||
@section AviSynth
|
||||
|
||||
FFmpeg can read AviSynth scripts as input. To enable support, pass
|
||||
@code{--enable-avisynth} to configure. The correct headers are
|
||||
included in compat/avisynth/, which allows the user to enable support
|
||||
without needing to search for these headers themselves.
|
||||
|
||||
For Windows, supported AviSynth variants are
|
||||
@url{http://avisynth.nl, AviSynth 2.5 or 2.6} for 32-bit builds and
|
||||
@url{http://avs-plus.net, AviSynth+ 0.1} for 32-bit and 64-bit builds.
|
||||
|
||||
For Linux and OS X, the supported AviSynth variant is
|
||||
@url{https://github.com/avxsynth/avxsynth, AvxSynth}.
|
||||
|
||||
@float NOTE
|
||||
AviSynth and AvxSynth are loaded dynamically. Distributors can build FFmpeg
|
||||
with @code{--enable-avisynth}, and the binaries will work regardless of the
|
||||
end user having AviSynth or AvxSynth installed - they'll only need to be
|
||||
installed to use AviSynth scripts (obviously).
|
||||
@end float
|
||||
|
||||
|
||||
@chapter Supported File Formats, Codecs or Features
|
||||
|
||||
@@ -214,7 +170,7 @@ library:
|
||||
@item American Laser Games MM @tab @tab X
|
||||
@tab Multimedia format used in games like Mad Dog McCree.
|
||||
@item 3GPP AMR @tab X @tab X
|
||||
@item Amazing Studio Packed Animation File @tab @tab X
|
||||
@item Amazing Studio Packed Animation File @tab @tab X
|
||||
@tab Multimedia format used in game Heart Of Darkness.
|
||||
@item Apple HTTP Live Streaming @tab @tab X
|
||||
@item Artworx Data Format @tab @tab X
|
||||
@@ -252,11 +208,8 @@ library:
|
||||
@tab Used in the game Cyberia from Interplay.
|
||||
@item Delphine Software International CIN @tab @tab X
|
||||
@tab Multimedia format used by Delphine Software games.
|
||||
@item Digital Speech Standard (DSS) @tab @tab X
|
||||
@item Canopus HQX @tab @tab X
|
||||
@item CD+G @tab @tab X
|
||||
@tab Video format used by CD+G karaoke disks
|
||||
@item Phantom Cine @tab @tab X
|
||||
@item Commodore CDXL @tab @tab X
|
||||
@tab Amiga CD video format
|
||||
@item Core Audio Format @tab X @tab X
|
||||
@@ -270,7 +223,6 @@ library:
|
||||
@item Deluxe Paint Animation @tab @tab X
|
||||
@item DFA @tab @tab X
|
||||
@tab This format is used in Chronomaster game
|
||||
@item DSD Stream File (DSF) @tab @tab X
|
||||
@item DV video @tab X @tab X
|
||||
@item DXA @tab @tab X
|
||||
@tab This format is used in the non-Windows version of the Feeble Files
|
||||
@@ -297,8 +249,6 @@ library:
|
||||
@item GXF @tab X @tab X
|
||||
@tab General eXchange Format SMPTE 360M, used by Thomson Grass Valley
|
||||
playout servers.
|
||||
@item HNM @tab @tab X
|
||||
@tab Only version 4 supported, used in some games from Cryo Interactive
|
||||
@item iCEDraw File @tab @tab X
|
||||
@item ICO @tab X @tab X
|
||||
@tab Microsoft Windows ICO
|
||||
@@ -321,11 +271,9 @@ library:
|
||||
@tab Used by Linux Media Labs MPEG-4 PCI boards
|
||||
@item LOAS @tab @tab X
|
||||
@tab contains LATM multiplexed AAC audio
|
||||
@item LRC @tab X @tab X
|
||||
@item LVF @tab @tab X
|
||||
@item LXF @tab @tab X
|
||||
@tab VR native stream format, used by Leitch/Harris' video servers.
|
||||
@item Magic Lantern Video (MLV) @tab @tab X
|
||||
@item Matroska @tab X @tab X
|
||||
@item Matroska audio @tab X @tab
|
||||
@item FFmpeg metadata @tab X @tab X
|
||||
@@ -351,8 +299,6 @@ library:
|
||||
@tab also known as DVB Transport Stream
|
||||
@item MPEG-4 @tab X @tab X
|
||||
@tab MPEG-4 is a variant of QuickTime.
|
||||
@item Mirillis FIC video @tab @tab X
|
||||
@tab No cursor rendering.
|
||||
@item MIME multipart JPEG @tab X @tab
|
||||
@item MSN TCP webcam @tab @tab X
|
||||
@tab Used by MSN Messenger webcam streams.
|
||||
@@ -392,7 +338,6 @@ library:
|
||||
@item raw H.261 @tab X @tab X
|
||||
@item raw H.263 @tab X @tab X
|
||||
@item raw H.264 @tab X @tab X
|
||||
@item raw HEVC @tab X @tab X
|
||||
@item raw Ingenient MJPEG @tab @tab X
|
||||
@item raw MJPEG @tab X @tab X
|
||||
@item raw MLP @tab @tab X
|
||||
@@ -465,7 +410,6 @@ library:
|
||||
@item Sony Wave64 (W64) @tab X @tab X
|
||||
@item SoX native format @tab X @tab X
|
||||
@item SUN AU format @tab X @tab X
|
||||
@item SUP raw PGS subtitles @tab @tab X
|
||||
@item Text files @tab @tab X
|
||||
@item THP @tab @tab X
|
||||
@tab Used on the Nintendo GameCube.
|
||||
@@ -504,13 +448,11 @@ following image formats are supported:
|
||||
@item Name @tab Encoding @tab Decoding @tab Comments
|
||||
@item .Y.U.V @tab X @tab X
|
||||
@tab one raw file per component
|
||||
@item Alias PIX @tab X @tab X
|
||||
@tab Alias/Wavefront PIX image format
|
||||
@item animated GIF @tab X @tab X
|
||||
@item BMP @tab X @tab X
|
||||
@tab Microsoft BMP image
|
||||
@item BRender PIX @tab @tab X
|
||||
@tab Argonaut BRender 3D engine image format.
|
||||
@item PIX @tab @tab X
|
||||
@tab PIX is an image format used in the Argonaut BRender engine.
|
||||
@item DPX @tab X @tab X
|
||||
@tab Digital Picture Exchange
|
||||
@item EXR @tab @tab X
|
||||
@@ -546,8 +488,8 @@ following image formats are supported:
|
||||
@tab YUV, JPEG and some extensions are not supported yet.
|
||||
@item Truevision Targa @tab X @tab X
|
||||
@tab Targa (.TGA) image format
|
||||
@item WebP @tab E @tab X
|
||||
@tab WebP image format, encoding supported through external library libwebp
|
||||
@item WebP @tab @tab X
|
||||
@tab WebP image format
|
||||
@item XBM @tab X @tab X
|
||||
@tab X BitMap image format
|
||||
@item XFace @tab X @tab X
|
||||
@@ -664,10 +606,7 @@ following image formats are supported:
|
||||
@item H.263 / H.263-1996 @tab X @tab X
|
||||
@item H.263+ / H.263-1998 / H.263 version 2 @tab X @tab X
|
||||
@item H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 @tab E @tab X
|
||||
@tab encoding supported through external library libx264 and OpenH264
|
||||
@item HEVC @tab X @tab X
|
||||
@tab encoding supported through the external library libx265
|
||||
@item HNM version 4 @tab @tab X
|
||||
@tab encoding supported through external library libx264
|
||||
@item HuffYUV @tab X @tab X
|
||||
@item HuffYUV FFmpeg variant @tab X @tab X
|
||||
@item IBM Ultimotion @tab @tab X
|
||||
@@ -698,8 +637,8 @@ following image formats are supported:
|
||||
@item LCL (LossLess Codec Library) MSZH @tab @tab X
|
||||
@item LCL (LossLess Codec Library) ZLIB @tab E @tab E
|
||||
@item LOCO @tab @tab X
|
||||
@item LucasArts SANM/Smush @tab @tab X
|
||||
@tab Used in LucasArts games / SMUSH animations.
|
||||
@item LucasArts Smush @tab @tab X
|
||||
@tab Used in LucasArts games.
|
||||
@item lossless MJPEG @tab X @tab X
|
||||
@item Microsoft ATC Screen @tab @tab X
|
||||
@tab Also known as Microsoft Screen 3.
|
||||
@@ -719,6 +658,7 @@ following image formats are supported:
|
||||
@item Mobotix MxPEG video @tab @tab X
|
||||
@item Motion Pixels video @tab @tab X
|
||||
@item MPEG-1 video @tab X @tab X
|
||||
@item MPEG-1/2 video XvMC (X-Video Motion Compensation) @tab @tab X
|
||||
@item MPEG-2 video @tab X @tab X
|
||||
@item MPEG-4 part 2 @tab X @tab X
|
||||
@tab libxvidcore can be used alternatively for encoding.
|
||||
@@ -734,8 +674,6 @@ following image formats are supported:
|
||||
@tab fourcc: VP50
|
||||
@item On2 VP6 @tab @tab X
|
||||
@tab fourcc: VP60,VP61,VP62
|
||||
@item On2 VP7 @tab @tab X
|
||||
@tab fourcc: VP70,VP71
|
||||
@item VP8 @tab E @tab X
|
||||
@tab fourcc: VP80, encoding supported through external library libvpx
|
||||
@item VP9 @tab E @tab X
|
||||
@@ -765,11 +703,11 @@ following image formats are supported:
|
||||
@tab Texture dictionaries used by the Renderware Engine.
|
||||
@item RL2 video @tab @tab X
|
||||
@tab used in some games by Entertainment Software Partners
|
||||
@item SGI RLE 8-bit @tab @tab X
|
||||
@item Sierra VMD video @tab @tab X
|
||||
@tab Used in Sierra VMD files.
|
||||
@item Silicon Graphics Motion Video Compressor 1 (MVC1) @tab @tab X
|
||||
@item Silicon Graphics Motion Video Compressor 2 (MVC2) @tab @tab X
|
||||
@item Silicon Graphics RLE 8-bit video @tab @tab X
|
||||
@item Smacker video @tab @tab X
|
||||
@tab Video encoding used in Smacker.
|
||||
@item SMPTE VC-1 @tab @tab X
|
||||
@@ -835,7 +773,7 @@ following image formats are supported:
|
||||
@tab encoding supported through external library libaacplus
|
||||
@item AAC @tab E @tab X
|
||||
@tab encoding supported through external library libfaac and libvo-aacenc
|
||||
@item AC-3 @tab IX @tab IX
|
||||
@item AC-3 @tab IX @tab X
|
||||
@item ADPCM 4X Movie @tab @tab X
|
||||
@item ADPCM CDROM XA @tab @tab X
|
||||
@item ADPCM Creative Technology @tab @tab X
|
||||
@@ -879,8 +817,6 @@ following image formats are supported:
|
||||
@item ADPCM Sound Blaster Pro 2-bit @tab @tab X
|
||||
@item ADPCM Sound Blaster Pro 2.6-bit @tab @tab X
|
||||
@item ADPCM Sound Blaster Pro 4-bit @tab @tab X
|
||||
@item ADPCM VIMA
|
||||
@tab Used in LucasArts SMUSH animations.
|
||||
@item ADPCM Westwood Studios IMA @tab @tab X
|
||||
@tab Used in Westwood Studios games like Command and Conquer.
|
||||
@item ADPCM Yamaha @tab X @tab X
|
||||
@@ -893,14 +829,12 @@ following image formats are supported:
|
||||
@tab QuickTime fourcc 'alac'
|
||||
@item ATRAC1 @tab @tab X
|
||||
@item ATRAC3 @tab @tab X
|
||||
@item ATRAC3+ @tab @tab X
|
||||
@item Bink Audio @tab @tab X
|
||||
@tab Used in Bink and Smacker files in many games.
|
||||
@item CELT @tab @tab E
|
||||
@tab decoding supported through external library libcelt
|
||||
@item Delphine Software International CIN audio @tab @tab X
|
||||
@tab Codec used in Delphine Software International games.
|
||||
@item Digital Speech Standard - Standard Play mode (DSS SP) @tab @tab X
|
||||
@item Discworld II BMV Audio @tab @tab X
|
||||
@item COOK @tab @tab X
|
||||
@tab All versions except 5.1 are supported.
|
||||
@@ -914,10 +848,6 @@ following image formats are supported:
|
||||
@item DPCM Sol @tab @tab X
|
||||
@item DPCM Xan @tab @tab X
|
||||
@tab Used in Origin's Wing Commander IV AVI files.
|
||||
@item DSD (Direct Stream Digital), least significant bit first @tab @tab X
@item DSD (Direct Stream Digital), most significant bit first @tab @tab X
@item DSD (Direct Stream Digital), least significant bit first, planar @tab @tab X
@item DSD (Direct Stream Digital), most significant bit first, planar @tab @tab X
|
||||
@item DSP Group TrueSpeech @tab @tab X
|
||||
@item DV audio @tab @tab X
|
||||
@item Enhanced AC-3 @tab X @tab X
|
||||
@@ -940,14 +870,13 @@ following image formats are supported:
|
||||
@item Monkey's Audio @tab @tab X
|
||||
@item MP1 (MPEG audio layer 1) @tab @tab IX
|
||||
@item MP2 (MPEG audio layer 2) @tab IX @tab IX
|
||||
@tab encoding supported also through external library TwoLAME
|
||||
@tab libtwolame can be used alternatively for encoding.
|
||||
@item MP3 (MPEG audio layer 3) @tab E @tab IX
|
||||
@tab encoding supported through external library LAME, ADU MP3 and MP3onMP4 also supported
|
||||
@item MPEG-4 Audio Lossless Coding (ALS) @tab @tab X
|
||||
@item Musepack SV7 @tab @tab X
|
||||
@item Musepack SV8 @tab @tab X
|
||||
@item Nellymoser Asao @tab X @tab X
|
||||
@item On2 AVC (Audio for Video Codec) @tab @tab X
|
||||
@item Opus @tab E @tab E
|
||||
@tab supported through external library libopus
|
||||
@item PCM A-law @tab X @tab X
|
||||
@@ -1010,6 +939,7 @@ following image formats are supported:
|
||||
@item Vorbis @tab E @tab X
|
||||
@tab A native but very primitive encoder exists.
|
||||
@item Voxware MetaSound @tab @tab X
|
||||
@tab imperfect and incomplete support
|
||||
@item WavPack @tab X @tab X
|
||||
@item Westwood Audio (SND1) @tab @tab X
|
||||
@item Windows Media Audio 1 @tab X @tab X
|
||||
@@ -1043,7 +973,6 @@ performance on systems without hardware floating point support).
|
||||
@item PJS (Phoenix) @tab @tab X @tab @tab X
|
||||
@item RealText @tab @tab X @tab @tab X
|
||||
@item SAMI @tab @tab X @tab @tab X
|
||||
@item Spruce format (STL) @tab @tab X @tab @tab X
|
||||
@item SSA/ASS @tab X @tab X @tab X @tab X
|
||||
@item SubRip (SRT) @tab X @tab X @tab X @tab X
|
||||
@item SubViewer v1 @tab @tab X @tab @tab X
|
||||
@@ -1051,7 +980,7 @@ performance on systems without hardware floating point support).
|
||||
@item TED Talks captions @tab @tab X @tab @tab X
|
||||
@item VobSub (IDX+SUB) @tab @tab X @tab @tab X
|
||||
@item VPlayer @tab @tab X @tab @tab X
|
||||
@item WebVTT @tab X @tab X @tab X @tab X
|
||||
@item WebVTT @tab X @tab X @tab @tab X
|
||||
@item XSUB @tab @tab @tab X @tab X
|
||||
@end multitable
|
||||
|
||||
@@ -1064,12 +993,10 @@ performance on systems without hardware floating point support).
|
||||
@multitable @columnfractions .4 .1
|
||||
@item Name @tab Support
|
||||
@item file @tab X
|
||||
@item FTP @tab X
|
||||
@item Gopher @tab X
|
||||
@item HLS @tab X
|
||||
@item HTTP @tab X
|
||||
@item HTTPS @tab X
|
||||
@item Icecast @tab X
|
||||
@item MMSH @tab X
|
||||
@item MMST @tab X
|
||||
@item pipe @tab X
|
||||
@@ -1080,9 +1007,7 @@ performance on systems without hardware floating point support).
|
||||
@item RTMPTE @tab X
|
||||
@item RTMPTS @tab X
|
||||
@item RTP @tab X
|
||||
@item SAMBA @tab E
|
||||
@item SCTP @tab X
|
||||
@item SFTP @tab E
|
||||
@item TCP @tab X
|
||||
@item TLS @tab X
|
||||
@item UDP @tab X
|
||||
@@ -1102,19 +1027,17 @@ performance on systems without hardware floating point support).
|
||||
@item caca @tab @tab X
|
||||
@item DV1394 @tab X @tab
|
||||
@item Lavfi virtual device @tab X @tab
|
||||
@item Linux framebuffer @tab X @tab X
|
||||
@item Linux framebuffer @tab X @tab
|
||||
@item JACK @tab X @tab
|
||||
@item LIBCDIO @tab X
|
||||
@item LIBDC1394 @tab X @tab
|
||||
@item OpenAL @tab X
|
||||
@item OpenGL @tab @tab X
|
||||
@item OSS @tab X @tab X
|
||||
@item PulseAudio @tab X @tab X
|
||||
@item Pulseaudio @tab X @tab
|
||||
@item SDL @tab @tab X
|
||||
@item Video4Linux2 @tab X @tab X
|
||||
@item VfW capture @tab X @tab
|
||||
@item X11 grabbing @tab X @tab
|
||||
@item Win32 grabbing @tab X @tab
|
||||
@end multitable
|
||||
|
||||
@code{X} means that input/output is supported.
|
||||
|
@@ -1,5 +1,4 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle Using git to develop FFmpeg
|
||||
|
||||
@@ -300,7 +299,7 @@ the current branch history.
|
||||
git commit --amend
|
||||
@end example
|
||||
|
||||
allows one to amend the last commit details quickly.
|
||||
allows to amend the last commit details quickly.
|
||||
|
||||
@example
|
||||
git rebase -i origin/master
|
||||
|
273 doc/git-howto.txt (new file)
@@ -0,0 +1,273 @@
|
||||
|
||||
About Git write access:
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Before everything else, you should know how to use GIT properly.
|
||||
Luckily Git comes with excellent documentation.
|
||||
|
||||
git --help
|
||||
man git
|
||||
|
||||
shows you the available subcommands,
|
||||
|
||||
git <command> --help
|
||||
man git-<command>
|
||||
|
||||
shows information about the subcommand <command>.
|
||||
|
||||
The most comprehensive manual is the website Git Reference
|
||||
|
||||
http://gitref.org/
|
||||
|
||||
For more information about the Git project, visit
|
||||
|
||||
http://git-scm.com/
|
||||
|
||||
Consult these resources whenever you have problems, they are quite exhaustive.
|
||||
|
||||
You do not need a special username or password.
|
||||
All you need is to provide a ssh public key to the Git server admin.
|
||||
|
||||
What follows now is a basic introduction to Git and some FFmpeg-specific
|
||||
guidelines. Read it at least once, if you are granted commit privileges to the
|
||||
FFmpeg project you are expected to be familiar with these rules.
|
||||
|
||||
|
||||
|
||||
I. BASICS:
|
||||
==========
|
||||
|
||||
0. Get GIT:
|
||||
|
||||
Most distributions have a git package, if not
|
||||
You can get git from http://git-scm.com/
|
||||
|
||||
|
||||
1. Cloning the source tree:
|
||||
|
||||
git clone git://source.ffmpeg.org/ffmpeg <target>
|
||||
|
||||
This will put the FFmpeg sources into the directory <target>.
|
||||
|
||||
git clone git@source.ffmpeg.org:ffmpeg <target>
|
||||
|
||||
This will put the FFmpeg sources into the directory <target> and let
|
||||
you push back your changes to the remote repository.
|
||||
|
||||
|
||||
2. Updating the source tree to the latest revision:
|
||||
|
||||
git pull (--ff-only)
|
||||
|
||||
pulls in the latest changes from the tracked branch. The tracked branch
|
||||
can be remote. By default the master branch tracks the branch master in
|
||||
the remote origin.
|
||||
Caveat: Since merge commits are forbidden, at least for the initial
months of git, --ff-only or --rebase (see below) are recommended.
--ff-only will fail and not create merge commits if your branch
has diverged (has a different history) from the tracked branch.
|
||||
|
||||
2.a Rebasing your local branches:
|
||||
|
||||
git pull --rebase
|
||||
|
||||
fetches the changes from the main repository and replays your local commits
|
||||
over it. This is required to keep all your local changes at the top of
|
||||
FFmpeg's master tree. The master tree will reject pushes with merge commits.
|
||||
|
||||
|
||||
3. Adding/removing files/directories:
|
||||
|
||||
git add [-A] <filename/dirname>
|
||||
git rm [-r] <filename/dirname>
|
||||
|
||||
GIT needs to be notified of all changes you make to your working
directory that make files appear or disappear.
|
||||
Line moves across files are automatically tracked.
|
||||
|
||||
|
||||
4. Showing modifications:
|
||||
|
||||
git diff <filename(s)>
|
||||
|
||||
will show all local modifications in your working directory as unified diff.
|
||||
|
||||
|
||||
5. Inspecting the changelog:
|
||||
|
||||
git log <filename(s)>
|
||||
|
||||
You may also use the graphical tools like gitview or gitk or the web
|
||||
interface available at http://source.ffmpeg.org
|
||||
|
||||
6. Checking source tree status:
|
||||
|
||||
git status
|
||||
|
||||
detects all the changes you made and lists what actions will be taken in case
|
||||
of a commit (additions, modifications, deletions, etc.).
|
||||
|
||||
|
||||
7. Committing:
|
||||
|
||||
git diff --check
|
||||
|
||||
to double check your changes before committing them to avoid trouble later
|
||||
on. All experienced developers do this on each and every commit, no matter
|
||||
how small.
|
||||
Every one of them has been saved from looking like a fool by this many times.
|
||||
It's very easy for stray debug output or cosmetic modifications to slip in,
|
||||
please avoid problems through this extra level of scrutiny.
|
||||
|
||||
For cosmetics-only commits you should get (almost) empty output from
|
||||
|
||||
git diff -w -b <filename(s)>
|
||||
|
||||
Also check the output of
|
||||
|
||||
git status
|
||||
|
||||
to make sure you don't have untracked files or deletions.
|
||||
|
||||
git add [-i|-p|-A] <filenames/dirnames>
|
||||
|
||||
Make sure you have told git your name and email address, e.g. by running
|
||||
git config --global user.name "My Name"
|
||||
git config --global user.email my@email.invalid
|
||||
(--global to set the global configuration for all your git checkouts).
|
||||
|
||||
Git will select the changes to the files for commit. Optionally you can use
|
||||
the interactive or the patch mode to select hunk by hunk what should be
|
||||
added to the commit.
|
||||
|
||||
git commit
|
||||
|
||||
Git will commit the selected changes to your current local branch.
|
||||
|
||||
You will be prompted for a log message in an editor, which is either
|
||||
set in your personal configuration file through
|
||||
|
||||
git config core.editor
|
||||
|
||||
or set by one of the following environment variables:
|
||||
GIT_EDITOR, VISUAL or EDITOR.
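For example, to use vim as the commit message editor (any other editor name
works the same way):

git config --global core.editor vim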
|
||||
|
||||
Log messages should be concise but descriptive. Explain why you made a change;
what you did will be obvious from the changes themselves most of the time.
|
||||
Saying just "bug fix" or "10l" is bad. Remember that people of varying skill
|
||||
levels look at and educate themselves while reading through your code. Don't
|
||||
include filenames in log messages, Git provides that information.
|
||||
|
||||
Possibly make the commit message have a terse, descriptive first line, an
|
||||
empty line and then a full description. The first line will be used to name
|
||||
the patch by git format-patch.
|
||||
|
||||
|
||||
8. Renaming/moving/copying files or contents of files:
|
||||
|
||||
Git automatically tracks such changes, making those normal commits.
|
||||
|
||||
mv/cp path/file otherpath/otherfile
|
||||
|
||||
git add [-A] .
|
||||
|
||||
git commit
|
||||
|
||||
Do not move, rename or copy files of which you are not the maintainer without
|
||||
discussing it on the mailing list first!
|
||||
|
||||
9. Reverting broken commits
|
||||
|
||||
git revert <commit>
|
||||
|
||||
git revert will generate a revert commit. This will not make the faulty
|
||||
commit disappear from the history.
|
||||
|
||||
git reset <commit>
|
||||
|
||||
git reset will uncommit the changes up to <commit>, rewriting the current
branch history.
|
||||
|
||||
git commit --amend
|
||||
|
||||
allows to amend the last commit details quickly.
|
||||
|
||||
git rebase -i origin/master
|
||||
|
||||
will replay local commits over the main repository, allowing you to edit,
merge or remove some of them in the process.
|
||||
|
||||
Note that the reset, commit --amend and rebase rewrite history, so you
|
||||
should use them ONLY on your local or topic branches.
|
||||
|
||||
The main repository will reject those changes.
|
||||
|
||||
10. Preparing a patchset.
|
||||
|
||||
git format-patch <commit> [-o directory]
|
||||
|
||||
will generate a set of patches for each commit between <commit> and
|
||||
current HEAD. E.g.
|
||||
|
||||
git format-patch origin/master
|
||||
|
||||
will generate patches for all commits on the current branch which are not
present upstream.
|
||||
A useful shortcut is also
|
||||
|
||||
git format-patch -n
|
||||
|
||||
which will generate patches from the last n commits.
|
||||
By default the patches are created in the current directory.
|
||||
|
||||
11. Sending patches for review
|
||||
|
||||
git send-email <commit list|directory>
|
||||
|
||||
will send the patches created by git format-patch or directly generate
them. All the email fields can be configured in the global/local
|
||||
configuration or overridden by command line.
|
||||
Note that this tool must often be installed separately (e.g. git-email
|
||||
package on Debian-based distros).
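For example, assuming a patch file 0001-my-change.patch produced by
git format-patch and the ffmpeg-devel mailing list as the destination:

git send-email --to ffmpeg-devel@ffmpeg.org 0001-my-change.patch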
|
||||
|
||||
12. Pushing changes to remote trees
|
||||
|
||||
git push
|
||||
|
||||
Will push the changes to the default remote (origin).
|
||||
Git will prevent you from pushing changes if the local and remote trees are
|
||||
out of sync. Refer to 2 and 2.a to sync the local tree.
|
||||
|
||||
git remote add <name> <url>
|
||||
|
||||
Will add an additional remote with a name reference. It is useful if you want
to push your local branch for review on a remote host.
|
||||
|
||||
git push <remote> <refspec>
|
||||
|
||||
Will push the changes to the remote repository. Omitting refspec makes git
|
||||
push update all the remote branches matching the local ones.
|
||||
|
||||
13. Finding a specific svn revision
|
||||
|
||||
Since version 1.7.1 git supports the ':/foo' syntax for specifying commits
based on a regular expression. See man gitrevisions.
|
||||
|
||||
git show :/'as revision 23456'
|
||||
|
||||
will show the svn changeset r23456. With older git versions searching in
|
||||
the git log output is the easiest option (especially if a pager with
|
||||
search capabilities is used).
|
||||
This commit can be checked out with
|
||||
|
||||
git checkout -b svn_23456 :/'as revision 23456'
|
||||
|
||||
or for git < 1.7.1 with
|
||||
|
||||
git checkout -b svn_23456 $SHA1
|
||||
|
||||
where $SHA1 is the commit SHA1 from the 'git log' output.
|
||||
|
||||
|
||||
Contact the project admins <root at ffmpeg dot org> if you have technical
|
||||
problems with the GIT server.
|
454 doc/indevs.texi
@@ -13,8 +13,8 @@ You can disable all the input devices using the configure option
|
||||
option "--enable-indev=@var{INDEV}", or you can disable a particular
|
||||
input device using the option "--disable-indev=@var{INDEV}".
|
||||
|
||||
The option "-devices" of the ff* tools will display the list of
|
||||
supported input devices.
|
||||
The option "-formats" of the ff* tools will display the list of
|
||||
supported input devices (amongst the demuxers).
|
||||
|
||||
A description of the currently available input devices follows.
|
||||
|
||||
@@ -51,101 +51,6 @@ ffmpeg -f alsa -i hw:0 alsaout.wav
|
||||
For more information see:
|
||||
@url{http://www.alsa-project.org/alsa-doc/alsa-lib/pcm.html}
|
||||
|
||||
@section avfoundation
|
||||
|
||||
AVFoundation input device.
|
||||
|
||||
AVFoundation is the currently recommended framework by Apple for stream grabbing on OSX >= 10.7 as well as on iOS.
|
||||
The older QTKit framework has been marked deprecated since OSX version 10.7.
|
||||
|
||||
The input filename has to be given in the following syntax:
|
||||
@example
|
||||
-i "[[VIDEO]:[AUDIO]]"
|
||||
@end example
|
||||
The first entry selects the video input while the latter selects the audio input.
The stream has to be specified by the device name or the device index as shown by the device list.
Alternatively, the video and/or audio input device can be chosen by index using
@option{-video_device_index <INDEX>} and/or @option{-audio_device_index <INDEX>},
overriding any device name or index given in the input filename.
|
||||
|
||||
All available devices can be enumerated by using @option{-list_devices true}, listing
|
||||
all device names and corresponding indices.
|
||||
|
||||
There are two device name aliases:
|
||||
@table @code
|
||||
|
||||
@item default
|
||||
Select the AVFoundation default device of the corresponding type.
|
||||
|
||||
@item none
|
||||
Do not record the corresponding media type.
|
||||
This is equivalent to specifying an empty device name or index.
|
||||
|
||||
@end table
|
||||
|
||||
@subsection Options
|
||||
|
||||
AVFoundation supports the following options:
|
||||
|
||||
@table @option
|
||||
|
||||
@item -list_devices <TRUE|FALSE>
|
||||
If set to true, a list of all available input devices is given showing all
|
||||
device names and indices.
|
||||
|
||||
@item -video_device_index <INDEX>
|
||||
Specify the video device by its index. Overrides anything given in the input filename.
|
||||
|
||||
@item -audio_device_index <INDEX>
|
||||
Specify the audio device by its index. Overrides anything given in the input filename.
|
||||
|
||||
@item -pixel_format <FORMAT>
|
||||
Request the video device to use a specific pixel format.
|
||||
If the specified format is not supported, a list of available formats is given
and the first one in this list is used instead. Available pixel formats are:
|
||||
@code{monob, rgb555be, rgb555le, rgb565be, rgb565le, rgb24, bgr24, 0rgb, bgr0, 0bgr, rgb0,
|
||||
bgr48be, uyvy422, yuva444p, yuva444p16le, yuv444p, yuv422p16, yuv422p10, yuv444p10,
|
||||
yuv420p, nv12, yuyv422, gray}
|
||||
|
||||
@end table
|
||||
|
||||
@subsection Examples
|
||||
|
||||
@itemize
|
||||
|
||||
@item
|
||||
Print the list of AVFoundation supported devices and exit:
|
||||
@example
|
||||
$ ffmpeg -f avfoundation -list_devices true -i ""
|
||||
@end example
|
||||
|
||||
@item
|
||||
Record video from video device 0 and audio from audio device 0 into out.avi:
|
||||
@example
|
||||
$ ffmpeg -f avfoundation -i "0:0" out.avi
|
||||
@end example
|
||||
|
||||
@item
|
||||
Record video from video device 2 and audio from audio device 1 into out.avi:
|
||||
@example
|
||||
$ ffmpeg -f avfoundation -video_device_index 2 -i ":1" out.avi
|
||||
@end example
|
||||
|
||||
@item
|
||||
Record video from the system default video device using the pixel format bgr0 and do not record any audio into out.avi:
|
||||
@example
|
||||
$ ffmpeg -f avfoundation -pixel_format bgr0 -i "default:none" out.avi
|
||||
@end example
|
||||
|
||||
@end itemize
|
||||
|
||||
@section bktr
|
||||
|
||||
BSD video input device.
|
||||
@@ -167,7 +72,7 @@ The input name should be in the format:
|
||||
@end example
|
||||
|
||||
where @var{TYPE} can be either @var{audio} or @var{video},
|
||||
and @var{NAME} is the device's name or alternative name..
|
||||
and @var{NAME} is the device's name.
|
||||
|
||||
@subsection Options
|
||||
|
||||
@@ -220,61 +125,6 @@ Setting this value too low can degrade performance.
|
||||
See also
|
||||
@url{http://msdn.microsoft.com/en-us/library/windows/desktop/dd377582(v=vs.85).aspx}
|
||||
|
||||
@item video_pin_name
|
||||
Select video capture pin to use by name or alternative name.
|
||||
|
||||
@item audio_pin_name
|
||||
Select audio capture pin to use by name or alternative name.
|
||||
|
||||
@item crossbar_video_input_pin_number
|
||||
Select video input pin number for crossbar device. This will be
|
||||
routed to the crossbar device's Video Decoder output pin.
|
||||
Note that changing this value can affect future invocations
|
||||
(sets a new default) until system reboot occurs.
|
||||
|
||||
@item crossbar_audio_input_pin_number
|
||||
Select audio input pin number for crossbar device. This will be
|
||||
routed to the crossbar device's Audio Decoder output pin.
|
||||
Note that changing this value can affect future invocations
|
||||
(sets a new default) until system reboot occurs.
|
||||
|
||||
@item show_video_device_dialog
|
||||
If set to @option{true}, before capture starts, popup a display dialog
|
||||
to the end user, allowing them to change video filter properties
|
||||
and configurations manually.
|
||||
Note that for crossbar devices, adjusting values in this dialog
|
||||
may be needed at times to toggle between PAL (25 fps) and NTSC (29.97)
|
||||
input frame rates, sizes, interlacing, etc. Changing these values can
|
||||
enable different scan rates/frame rates and avoiding green bars at
|
||||
the bottom, flickering scan lines, etc.
|
||||
Note that with some devices, changing these properties can also affect future
|
||||
invocations (sets new defaults) until system reboot occurs.
|
||||
|
||||
@item show_audio_device_dialog
|
||||
If set to @option{true}, before capture starts, popup a display dialog
|
||||
to the end user, allowing them to change audio filter properties
|
||||
and configurations manually.
|
||||
|
||||
@item show_video_crossbar_connection_dialog
|
||||
If set to @option{true}, before capture starts, popup a display
|
||||
dialog to the end user, allowing them to manually
|
||||
modify crossbar pin routings, when it opens a video device.
|
||||
|
||||
@item show_audio_crossbar_connection_dialog
|
||||
If set to @option{true}, before capture starts, popup a display
|
||||
dialog to the end user, allowing them to manually
|
||||
modify crossbar pin routings, when it opens an audio device.
|
||||
|
||||
@item show_analog_tv_tuner_dialog
|
||||
If set to @option{true}, before capture starts, popup a display
|
||||
dialog to the end user, allowing them to manually
|
||||
modify TV channels and frequencies.
|
||||
|
||||
@item show_analog_tv_tuner_audio_dialog
|
||||
If set to @option{true}, before capture starts, popup a display
|
||||
dialog to the end user, allowing them to manually
|
||||
modify TV audio (like mono vs. stereo, Language A,B or C).
|
||||
|
||||
@end table
|
||||
|
||||
@subsection Examples
|
||||
@@ -311,19 +161,6 @@ Print the list of supported options in selected device and exit:
|
||||
$ ffmpeg -list_options true -f dshow -i video="Camera"
|
||||
@end example
|
||||
|
||||
@item
|
||||
Specify pin names to capture by name or alternative name, specify alternative device name:
|
||||
@example
|
||||
$ ffmpeg -f dshow -audio_pin_name "Audio Out" -video_pin_name 2 -i video=video="@@device_pnp_\\?\pci#ven_1a0a&dev_6200&subsys_62021461&rev_01#4&e2c7dd6&0&00e1#@{65e8773d-8f56-11d0-a3b9-00a0c9223196@}\@{ca465100-deb0-4d59-818f-8c477184adf6@}":audio="Microphone"
|
||||
@end example
|
||||
|
||||
@item
|
||||
Configure a crossbar device, specifying crossbar pins, allow user to adjust video capture properties at startup:
|
||||
@example
|
||||
$ ffmpeg -f dshow -show_video_device_dialog true -crossbar_video_input_pin_number 0
|
||||
-crossbar_audio_input_pin_number 3 -i video="AVerMedia BDA Analog Capture":audio="AVerMedia BDA Analog Capture"
|
||||
@end example
|
||||
|
||||
@end itemize
|
||||
|
||||
@section dv1394
|
||||
@@ -355,81 +192,6 @@ ffmpeg -f fbdev -frames:v 1 -r 1 -i /dev/fb0 screenshot.jpeg
|
||||
|
||||
See also @url{http://linux-fbdev.sourceforge.net/}, and fbset(1).
|
||||
|
||||
@section gdigrab
|
||||
|
||||
Win32 GDI-based screen capture device.
|
||||
|
||||
This device allows you to capture a region of the display on Windows.
|
||||
|
||||
There are two options for the input filename:
|
||||
@example
|
||||
desktop
|
||||
@end example
|
||||
or
|
||||
@example
|
||||
title=@var{window_title}
|
||||
@end example
|
||||
|
||||
The first option will capture the entire desktop, or a fixed region of the
|
||||
desktop. The second option will instead capture the contents of a single
|
||||
window, regardless of its position on the screen.
|
||||
|
||||
For example, to grab the entire desktop using @command{ffmpeg}:
|
||||
@example
|
||||
ffmpeg -f gdigrab -framerate 6 -i desktop out.mpg
|
||||
@end example
|
||||
|
||||
Grab a 640x480 region at position @code{10,20}:
|
||||
@example
|
||||
ffmpeg -f gdigrab -framerate 6 -offset_x 10 -offset_y 20 -video_size vga -i desktop out.mpg
|
||||
@end example
|
||||
|
||||
Grab the contents of the window named "Calculator"
|
||||
@example
|
||||
ffmpeg -f gdigrab -framerate 6 -i title=Calculator out.mpg
|
||||
@end example
|
||||
|
||||
@subsection Options
|
||||
|
||||
@table @option
|
||||
@item draw_mouse
|
||||
Specify whether to draw the mouse pointer. Use the value @code{0} to
|
||||
not draw the pointer. Default value is @code{1}.
|
||||
|
||||
@item framerate
|
||||
Set the grabbing frame rate. Default value is @code{ntsc},
|
||||
corresponding to a frame rate of @code{30000/1001}.
|
||||
|
||||
@item show_region
|
||||
Show grabbed region on screen.
|
||||
|
||||
If @var{show_region} is specified with @code{1}, then the grabbing
|
||||
region will be indicated on screen. With this option, it is easy to
|
||||
know what is being grabbed if only a portion of the screen is grabbed.
|
||||
|
||||
Note that @var{show_region} is incompatible with grabbing the contents
|
||||
of a single window.
|
||||
|
||||
For example:
|
||||
@example
|
||||
ffmpeg -f gdigrab -show_region 1 -framerate 6 -video_size cif -offset_x 10 -offset_y 20 -i desktop out.mpg
|
||||
@end example
|
||||
|
||||
@item video_size
|
||||
Set the video frame size. The default is to capture the full screen if @file{desktop} is selected, or the full window size if @file{title=@var{window_title}} is selected.
|
||||
|
||||
@item offset_x
|
||||
When capturing a region with @var{video_size}, set the distance from the left edge of the screen or desktop.
|
||||
|
||||
Note that the offset calculation is from the top left corner of the primary monitor on Windows. If you have a monitor positioned to the left of your primary monitor, you will need to use a negative @var{offset_x} value to move the region to that monitor.
|
||||
|
||||
@item offset_y
|
||||
When capturing a region with @var{video_size}, set the distance from the top edge of the screen or desktop.
|
||||
|
||||
Note that the offset calculation is from the top left corner of the primary monitor on Windows. If you have a monitor positioned above your primary monitor, you will need to use a negative @var{offset_y} value to move the region to that monitor.
|
||||
|
||||
@end table
|
||||
|
||||
@section iec61883
|
||||
|
||||
FireWire DV/HDV input device using libiec61883.
|
||||
@@ -458,7 +220,7 @@ not work and result in undefined behavior.
|
||||
The values @option{auto}, @option{dv} and @option{hdv} are supported.
|
||||
|
||||
@item dvbuffer
|
||||
Set maximum size of buffer for incoming data, in frames. For DV, this
|
||||
Set maxiumum size of buffer for incoming data, in frames. For DV, this
|
||||
is an exact value. For HDV, it is not frame exact, since HDV does
|
||||
not have a fixed frame size.
|
||||
|
||||
@@ -563,14 +325,6 @@ generated by the device.
|
||||
The first unlabelled output is automatically assigned to the "out0"
|
||||
label, but all the others need to be specified explicitly.
|
||||
|
||||
The suffix "+subcc" can be appended to the output label to create an extra
|
||||
stream with the closed captions packets attached to that output
|
||||
(experimental; only for EIA-608 / CEA-708 for now).
|
||||
The subcc streams are created after all the normal streams, in the order of
|
||||
the corresponding stream.
|
||||
For example, if there is "out19+subcc", "out7+subcc" and up to "out42", the
|
||||
stream #43 is subcc for stream #7 and stream #44 is subcc for stream #19.
|
||||
|
||||
If not specified defaults to the filename specified for the input
|
||||
device.
|
||||
|
||||
@@ -617,63 +371,12 @@ Read an audio stream and a video stream and play it back with
|
||||
ffplay -f lavfi "movie=test.avi[out0];amovie=test.wav[out1]"
|
||||
@end example
|
||||
|
||||
@item
|
||||
Dump decoded frames to images and closed captions to a file (experimental):
|
||||
@example
|
||||
ffmpeg -f lavfi -i "movie=test.ts[out0+subcc]" -map v frame%08d.png -map s -c copy -f rawvideo subcc.bin
|
||||
@end example
|
||||
|
||||
@end itemize
|
||||
|
||||
@section libcdio
|
||||
|
||||
Audio-CD input device based on libcdio.
|
||||
|
||||
To enable this input device during configuration you need libcdio
|
||||
installed on your system. It requires the configure option
|
||||
@code{--enable-libcdio}.
|
||||
|
||||
This device allows playing and grabbing from an Audio-CD.
|
||||
|
||||
For example to copy with @command{ffmpeg} the entire Audio-CD in @file{/dev/sr0},
|
||||
you may run the command:
|
||||
@example
|
||||
ffmpeg -f libcdio -i /dev/sr0 cd.wav
|
||||
@end example
|
||||
|
||||
@subsection Options
|
||||
@table @option
|
||||
@item speed
|
||||
Set drive reading speed. Default value is 0.
|
||||
|
||||
The speed is specified CD-ROM speed units. The speed is set through
|
||||
the libcdio @code{cdio_cddap_speed_set} function. On many CD-ROM
|
||||
drives, specifying a value too large will result in using the fastest
|
||||
speed.
|
||||
|
||||
@item paranoia_mode
|
||||
Set paranoia recovery mode flags. It accepts one of the following values:
|
||||
|
||||
@table @samp
|
||||
@item disable
|
||||
@item verify
|
||||
@item overlap
|
||||
@item neverskip
|
||||
@item full
|
||||
@end table
|
||||
|
||||
Default value is @samp{disable}.
|
||||
|
||||
For more information about the available recovery modes, consult the
|
||||
paranoia project documentation.
|
||||
@end table
|
||||
|
||||
@section libdc1394
|
||||
|
||||
IIDC1394 input device, based on libdc1394 and libraw1394.
|
||||
|
||||
Requires the configure option @code{--enable-libdc1394}.
|
||||
|
||||
@section openal
|
||||
|
||||
The OpenAL input device provides audio capture on all systems with a
|
||||
@@ -706,7 +409,7 @@ OpenAL is part of Core Audio, the official Mac OS X Audio interface.
|
||||
See @url{http://developer.apple.com/technologies/mac/audio-and-video.html}
|
||||
@end table
|
||||
|
||||
This device allows one to capture from an audio input device handled
|
||||
This device allows to capture from an audio input device handled
|
||||
through OpenAL.
|
||||
|
||||
You need to specify the name of the device to capture in the provided
|
||||
@@ -828,33 +531,6 @@ Record a stream from default device:
|
||||
ffmpeg -f pulse -i default /tmp/pulse.wav
|
||||
@end example
|
||||
|
||||
@section qtkit
|
||||
|
||||
QTKit input device.
|
||||
|
||||
The filename passed as input is parsed to contain either a device name or index.
|
||||
The device index can also be given by using -video_device_index.
|
||||
A given device index will override any given device name.
|
||||
If the desired device consists of numbers only, use -video_device_index to identify it.
|
||||
The default device will be chosen if an empty string or the device name "default" is given.
|
||||
The available devices can be enumerated by using -list_devices.
|
||||
|
||||
@example
|
||||
ffmpeg -f qtkit -i "0" out.mpg
|
||||
@end example
|
||||
|
||||
@example
|
||||
ffmpeg -f qtkit -video_device_index 0 -i "" out.mpg
|
||||
@end example
|
||||
|
||||
@example
|
||||
ffmpeg -f qtkit -i "default" out.mpg
|
||||
@end example
|
||||
|
||||
@example
|
||||
ffmpeg -f qtkit -list_devices true -i ""
|
||||
@end example
|
||||
|
||||
@section sndio
|
||||
|
||||
sndio input device.
|
||||
@@ -941,7 +617,7 @@ Select the pixel format (only valid for raw video input).
|
||||
|
||||
@item input_format
|
||||
Set the preferred pixel format (for raw video) or a codec name.
|
||||
This option allows one to select the input format, when several are
|
||||
This option allows to select the input format, when several are
|
||||
available.
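For example, assuming a webcam on @file{/dev/video0} that exposes an MJPEG
stream, that format could be requested with:
@example
ffmpeg -f v4l2 -input_format mjpeg -i /dev/video0 out.mkv
@end example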
|
||||
|
||||
@item framerate
|
||||
@@ -1002,14 +678,7 @@ other filename will be interpreted as device number 0.
|
||||
|
||||
X11 video input device.
|
||||
|
||||
To enable this input device during configuration you need libxcb
|
||||
installed on your system. It will be automatically detected during
|
||||
configuration.
|
||||
|
||||
Alternatively, the configure option @option{--enable-x11grab} exists
|
||||
for legacy Xlib users.
|
||||
|
||||
This device allows one to capture a region of an X11 display.
|
||||
This device allows to capture a region of an X11 display.
|
||||
|
||||
The filename passed as input has the syntax:
|
||||
@example
|
||||
@@ -1025,12 +694,10 @@ omitted, and defaults to "localhost". The environment variable
|
||||
area with respect to the top-left border of the X11 screen. They
|
||||
default to 0.
|
||||
|
||||
Check the X11 documentation (e.g. @command{man X}) for more detailed
|
||||
information.
|
||||
Check the X11 documentation (e.g. man X) for more detailed information.
|
||||
|
||||
Use the @command{xdpyinfo} program for getting basic information about
|
||||
the properties of your X11 display (e.g. grep for "name" or
|
||||
"dimensions").
|
||||
Use the @command{dpyinfo} program for getting basic information about the
|
||||
properties of your X11 display (e.g. grep for "name" or "dimensions").
|
||||
|
||||
For example to grab from @file{:0.0} using @command{ffmpeg}:
|
||||
@example
|
||||
@@ -1079,10 +746,6 @@ If @var{show_region} is specified with @code{1}, then the grabbing
|
||||
region will be indicated on screen. With this option, it is easy to
|
||||
know what is being grabbed if only a portion of the screen is grabbed.
|
||||
|
||||
@item region_border
|
||||
Set the region border thickness if @option{-show_region 1} is used.
|
||||
Range is 1 to 128 and default is 3 (XCB-based x11grab only).
|
||||
|
||||
For example:
|
||||
@example
|
||||
ffmpeg -f x11grab -show_region 1 -framerate 25 -video_size cif -i :0.0+10,20 out.mpg
|
||||
@@ -1095,103 +758,6 @@ ffmpeg -f x11grab -follow_mouse centered -show_region 1 -framerate 25 -video_siz
|
||||
|
||||
@item video_size
|
||||
Set the video frame size. Default value is @code{vga}.
|
||||
|
||||
@item use_shm
|
||||
Use the MIT-SHM extension for shared memory. Default value is @code{1}.
|
||||
It may be necessary to disable it for remote displays (legacy x11grab
|
||||
only).
|
||||
@end table
|
||||
|
||||
@subsection @var{grab_x} @var{grab_y} AVOption
|
||||
|
||||
The syntax is:
|
||||
@example
|
||||
-grab_x @var{x_offset} -grab_y @var{y_offset}
|
||||
@end example
|
||||
|
||||
Set the grabbing region coordinates. They are expressed as offsets from the top left
corner of the X11 window. The default value is 0.
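For example, assuming these AVOptions are combined with the usual x11grab
options, a CIF-sized region offset by 10,20 could be grabbed with:
@example
ffmpeg -f x11grab -grab_x 10 -grab_y 20 -video_size cif -i :0.0 out.mpg
@end example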
|
||||
|
||||
@section decklink
|
||||
|
||||
The decklink input device provides capture capabilities for Blackmagic
|
||||
DeckLink devices.
|
||||
|
||||
To enable this input device, you need the Blackmagic DeckLink SDK and you
|
||||
need to configure with the appropriate @code{--extra-cflags}
|
||||
and @code{--extra-ldflags}.
|
||||
On Windows, you need to run the IDL files through @command{widl}.
|
||||
|
||||
DeckLink is very picky about the formats it supports. Pixel format is
|
||||
uyvy422 or v210, framerate and video size must be determined for your device with
|
||||
@command{-list_formats 1}. Audio sample rate is always 48 kHz and the number
|
||||
of channels can be 2, 8 or 16.
|
||||
|
||||
@subsection Options
|
||||
|
||||
@table @option
|
||||
|
||||
@item list_devices
|
||||
If set to @option{true}, print a list of devices and exit.
|
||||
Defaults to @option{false}.
|
||||
|
||||
@item list_formats
|
||||
If set to @option{true}, print a list of supported formats and exit.
|
||||
Defaults to @option{false}.
|
||||
|
||||
@item bm_v210
|
||||
If set to @samp{1}, video is captured in 10 bit v210 instead
|
||||
of uyvy422. Not all Blackmagic devices support this option.
|
||||
|
||||
@item bm_channels <CHANNELS>
|
||||
Number of audio channels, can be 2, 8 or 16
|
||||
|
||||
@item bm_audiodepth <BITDEPTH>
|
||||
Audio bit depth, can be 16 or 32.
|
||||
|
||||
@end table
|
||||
|
||||
@subsection Examples
|
||||
|
||||
@itemize
|
||||
|
||||
@item
|
||||
List input devices:
|
||||
@example
|
||||
ffmpeg -f decklink -list_devices 1 -i dummy
|
||||
@end example
|
||||
|
||||
@item
|
||||
List supported formats:
|
||||
@example
|
||||
ffmpeg -f decklink -list_formats 1 -i 'Intensity Pro'
|
||||
@end example
|
||||
|
||||
@item
|
||||
Capture video clip at 1080i50 (format 11):
|
||||
@example
|
||||
ffmpeg -f decklink -i 'Intensity Pro@@11' -acodec copy -vcodec copy output.avi
|
||||
@end example
|
||||
|
||||
@item
|
||||
Capture video clip at 1080i50 10 bit:
|
||||
@example
|
||||
ffmpeg -bm_v210 1 -f decklink -i 'UltraStudio Mini Recorder@@11' -acodec copy -vcodec copy output.avi
|
||||
@end example
|
||||
|
||||
@item
|
||||
Capture video clip at 720p50 with 32bit audio:
|
||||
@example
|
||||
ffmpeg -bm_audiodepth 32 -f decklink -i 'UltraStudio Mini Recorder@@14' -acodec copy -vcodec copy output.avi
|
||||
@end example
|
||||
|
||||
@item
|
||||
Capture video clip at 576i50 with 8 audio channels:
|
||||
@example
|
||||
ffmpeg -bm_channels 8 -f decklink -i 'UltraStudio Mini Recorder@@3' -acodec copy -vcodec copy output.avi
|
||||
@end example
|
||||
|
||||
@end itemize
|
||||
|
||||
|
||||
@c man end INPUT DEVICES
|
||||
|
@@ -22,7 +22,7 @@ a mail for every change to every issue.
|
||||
(the above does all work already after light testing)
|
||||
|
||||
The subscription URL for the ffmpeg-trac list is:
|
||||
http(s)://lists.ffmpeg.org/mailman/listinfo/ffmpeg-trac
|
||||
http(s)://ffmpeg.org/mailman/listinfo/ffmpeg-trac
|
||||
The URL of the webinterface of the tracker is:
|
||||
http(s)://trac.ffmpeg.org
|
||||
|
||||
|
@@ -1,5 +1,4 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle Libavcodec Documentation
|
||||
@titlepage
|
||||
|
@@ -1,5 +1,4 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle Libavdevice Documentation
|
||||
@titlepage
|
||||
|
@@ -1,5 +1,4 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle Libavfilter Documentation
|
||||
@titlepage
|
||||
|
@@ -1,5 +1,4 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle Libavformat Documentation
|
||||
@titlepage
|
||||
|
@@ -1,5 +1,4 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle Libavutil Documentation
|
||||
@titlepage
|
||||
@@ -17,25 +16,7 @@ The libavutil library is a utility library to aid portable
|
||||
multimedia programming. It contains safe portable string functions,
|
||||
random number generators, data structures, additional mathematics
|
||||
functions, cryptography and multimedia related functionality (like
|
||||
enumerations for pixel and sample formats). It is not a library for
|
||||
code needed by both libavcodec and libavformat.
|
||||
|
||||
The goals for this library is to be:
|
||||
|
||||
@table @strong
|
||||
@item Modular
|
||||
It should have few interdependencies and the possibility of disabling individual
|
||||
parts during @command{./configure}.
|
||||
|
||||
@item Small
|
||||
Both sources and objects should be small.
|
||||
|
||||
@item Efficient
|
||||
It should have low CPU and memory usage.
|
||||
|
||||
@item Useful
|
||||
It should avoid useless features that almost no one needs.
|
||||
@end table
|
||||
enumerations for pixel and sample formats).
|
||||
|
||||
@c man end DESCRIPTION
|
||||
|
||||
|
@@ -1,5 +1,4 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle Libswresample Documentation
|
||||
@titlepage
|
||||
|
@@ -1,5 +1,4 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle Libswscale Documentation
|
||||
@titlepage
|
||||
|
487 doc/muxers.texi
@@ -23,8 +23,6 @@ A description of some of the currently available muxers follows.
|
||||
|
||||
Audio Interchange File Format muxer.
|
||||
|
||||
@subsection Options
|
||||
|
||||
It accepts the following options:
|
||||
|
||||
@table @option
|
||||
@@ -51,10 +49,6 @@ The output of the muxer consists of a single line of the form:
|
||||
CRC=0x@var{CRC}, where @var{CRC} is a hexadecimal number 0-padded to
|
||||
8 digits containing the CRC for all the decoded input frames.
|
||||
|
||||
See also the @ref{framecrc} muxer.
|
||||
|
||||
@subsection Examples
|
||||
|
||||
For example to compute the CRC of the input, and store it in the file
|
||||
@file{out.crc}:
|
||||
@example
|
||||
@@ -74,6 +68,8 @@ and the input video converted to MPEG-2 video, use the command:
|
||||
ffmpeg -i INPUT -c:a pcm_u8 -c:v mpeg2video -f crc -
|
||||
@end example
|
||||
|
||||
See also the @ref{framecrc} muxer.
|
||||
|
||||
@anchor{framecrc}
|
||||
@section framecrc
|
||||
|
||||
@@ -93,8 +89,6 @@ packet of the form:
|
||||
@var{CRC} is a hexadecimal number 0-padded to 8 digits containing the
|
||||
CRC of the packet.
|
||||
|
||||
@subsection Examples
|
||||
|
||||
For example to compute the CRC of the audio and video frames in
|
||||
@file{INPUT}, converted to raw audio and video packets, and store it
|
||||
in the file @file{out.crc}:
|
||||
@@ -138,8 +132,6 @@ packet of the form:
|
||||
@var{MD5} is a hexadecimal number representing the computed MD5 hash
|
||||
for the packet.
|
||||
|
||||
@subsection Examples
|
||||
|
||||
For example to compute the MD5 of the audio and video frames in
|
||||
@file{INPUT}, converted to raw audio and video packets, and store it
|
||||
in the file @file{out.md5}:
|
||||
@@ -154,129 +146,30 @@ ffmpeg -i INPUT -f framemd5 -
|
||||
|
||||
See also the @ref{md5} muxer.
|
||||
|
||||
@anchor{gif}
|
||||
@section gif
|
||||
|
||||
Animated GIF muxer.
|
||||
|
||||
It accepts the following options:
|
||||
|
||||
@table @option
|
||||
@item loop
|
||||
Set the number of times to loop the output. Use @code{-1} for no loop, @code{0}
|
||||
for looping indefinitely (default).
|
||||
|
||||
@item final_delay
|
||||
Force the delay (expressed in centiseconds) after the last frame. Each frame
|
||||
ends with a delay until the next frame. The default is @code{-1}, which is a
|
||||
special value to tell the muxer to re-use the previous delay. In case of a
|
||||
loop, you might want to customize this value to mark a pause for instance.
|
||||
@end table
|
||||
|
||||
For example, to encode a gif looping 10 times, with a 5-second delay between
the loops:
|
||||
@example
|
||||
ffmpeg -i INPUT -loop 10 -final_delay 500 out.gif
|
||||
@end example
|
||||
|
||||
Note 1: if you wish to extract the frames in separate GIF files, you need to
|
||||
force the @ref{image2} muxer:
|
||||
@example
|
||||
ffmpeg -i INPUT -c:v gif -f image2 "out%d.gif"
|
||||
@end example
|
||||
|
||||
Note 2: the GIF format has a very small time base: the delay between two frames
|
||||
can not be smaller than one centi second.
|
||||
|
||||
@anchor{hls}
@section hls

Apple HTTP Live Streaming muxer that segments MPEG-TS according to
the HTTP Live Streaming (HLS) specification.
the HTTP Live Streaming specification.

It creates a playlist file, and one or more segment files. The output filename
specifies the playlist filename.
It creates a playlist file and numbered segment files. The output
filename specifies the playlist filename; the segment filenames
receive the same basename as the playlist, a sequential number and
a .ts extension.

By default, the muxer creates a file for each segment produced. These files
have the same name as the playlist, followed by a sequential number and a
.ts extension.

For example, to convert an input file with @command{ffmpeg}:
@example
ffmpeg -i in.nut out.m3u8
@end example
This example will produce the playlist, @file{out.m3u8}, and segment files:
@file{out0.ts}, @file{out1.ts}, @file{out2.ts}, etc.

See also the @ref{segment} muxer, which provides a more generic and
flexible implementation of a segmenter, and can be used to perform HLS
segmentation.

@subsection Options

This muxer supports the following options:

@table @option
@item hls_time @var{seconds}
Set the segment length in seconds. Default value is 2.

@item hls_list_size @var{size}
Set the maximum number of playlist entries. If set to 0 the list file
will contain all the segments. Default value is 5.

@item hls_ts_options @var{options_list}
Set output format options using a :-separated list of key=value
parameters. Values containing @code{:} special characters must be
escaped.

@item hls_wrap @var{wrap}
Set the number after which the segment filename number (the number
specified in each segment file) wraps. If set to 0 the number is
never wrapped. Default value is 0.

This option is useful to avoid filling the disk with many segment
files, and limits the maximum number of segment files written to disk
to @var{wrap}.

@item start_number @var{number}
Start the playlist sequence number from @var{number}. Default value is
0.

@item hls_allow_cache @var{allowcache}
Explicitly set whether the client MAY (1) or MUST NOT (0) cache media segments.

@item hls_base_url @var{baseurl}
Append @var{baseurl} to every entry in the playlist.
Useful to generate playlists with absolute paths.

Note that the playlist sequence number must be unique for each segment
and it is not to be confused with the segment filename sequence number
which can be cyclic, for example if the @option{wrap} option is
specified.

@item hls_segment_filename @var{filename}
Set the segment filename. Unless @code{hls_flags single_file} is set,
@var{filename} is used as a string format with the segment number:
@example
ffmpeg -i in.nut -hls_segment_filename 'file%03d.ts' out.m3u8
@end example
This example will produce the playlist, @file{out.m3u8}, and segment files:
@file{file000.ts}, @file{file001.ts}, @file{file002.ts}, etc.

@item hls_flags single_file
If this flag is set, the muxer will store all segments in a single MPEG-TS
file, and will use byte ranges in the playlist. HLS playlists generated
this way will have version number 4.
For example:
@example
ffmpeg -i in.nut -hls_flags single_file out.m3u8
@end example
Will produce the playlist, @file{out.m3u8}, and a single segment file,
@file{out.ts}.

@item hls_flags delete_segments
Segment files removed from the playlist are deleted after a period of time
equal to the duration of the segment plus the duration of the playlist.
@item -hls_time @var{seconds}
Set the segment length in seconds.
@item -hls_list_size @var{size}
Set the maximum number of playlist entries.
@item -hls_wrap @var{wrap}
Set the number after which the index wraps.
@item -start_number @var{number}
Start the sequence from @var{number}.
@end table
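To illustrate how the options above combine, the following sketch (the numeric
values are arbitrary) produces 4-second segments, keeps 10 playlist entries and
wraps the segment filename number after 20 files:
@example
ffmpeg -i in.nut -hls_time 4 -hls_list_size 10 -hls_wrap 20 out.m3u8
@end example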

@anchor{ico}
@@ -342,8 +235,6 @@ The pattern "img%%-%d.jpg" will specify a sequence of filenames of the
form @file{img%-1.jpg}, @file{img%-2.jpg}, ..., @file{img%-10.jpg},
etc.

@subsection Examples

The following example shows how to use @command{ffmpeg} for creating a
sequence of files @file{img-001.jpeg}, @file{img-002.jpeg}, ...,
taking one image every second from the input video:
@@ -366,31 +257,16 @@ Note also that the pattern must not necessarily contain "%d" or
ffmpeg -i in.avi -f image2 -frames:v 1 img.jpeg
@end example

The @option{strftime} option allows you to expand the filename with
date and time information. Check the documentation of
the @code{strftime()} function for the syntax.

For example, to generate image files from the @code{strftime()}
"%Y-%m-%d_%H-%M-%S" pattern, the following @command{ffmpeg} command
can be used:
@example
ffmpeg -f v4l2 -r 1 -i /dev/video0 -f image2 -strftime 1 "%Y-%m-%d_%H-%M-%S.jpg"
@end example

@subsection Options

@table @option
@item start_number
Start the sequence from the specified number. Default value is 0.
@item start_number @var{number}
Start the sequence from @var{number}. Default value is 1. Must be a
non-negative number.

@item update
If set to 1, the filename will always be interpreted as just a
filename, not a pattern, and the corresponding file will be continuously
overwritten with new images. Default value is 0.
@item -update @var{number}
If @var{number} is nonzero, the filename will always be interpreted as just a
filename, not a pattern, and this file will be continuously overwritten with new
images.

@item strftime
If set to 1, expand the filename with date and time information from
@code{strftime()}. Default value is 0.
@end table
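For instance, combining the @option{update} option above with a capture input,
a single image file can be continuously overwritten (a sketch; the input device
and filename are illustrative):
@example
ffmpeg -f v4l2 -r 1 -i /dev/video0 -f image2 -update 1 preview.jpg
@end example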

The image muxer supports the .Y.U.V image file format. This format is
@@ -405,27 +281,25 @@ Matroska container muxer.

This muxer implements the matroska and webm container specs.

@subsection Metadata

The recognized metadata settings in this muxer are:

@table @option
@item title
Set title name provided to a single track.

@item language
Specify the language of the track in the Matroska languages form.
@item title=@var{title name}
Name provided to a single track
@end table

The language can be either the 3-letter bibliographic ISO-639-2 (ISO
639-2/B) form (like "fre" for French), or a language code mixed with a
country code for specialities in languages (like "fre-ca" for Canadian
French).
@table @option

@item stereo_mode
Set stereo 3D video layout of two views in a single video track.
@item language=@var{language name}
Specifies the language of the track in the Matroska languages form
@end table

The following values are recognized:
@table @samp
@table @option

@item stereo_mode=@var{mode}
Stereo 3D video layout of two views in a single video track
@table @option
@item mono
video is not stereo
@item left_right
@@ -464,11 +338,10 @@ For example a 3D WebM clip can be created using the following command line:
ffmpeg -i sample_left_right_clip.mpg -an -c:v libvpx -metadata stereo_mode=left_right -y stereo_clip.webm
@end example

@subsection Options

This muxer supports the following options:

@table @option

@item reserve_index_space
By default, this muxer writes the index for seeking (called cues in Matroska
terms) at the end of the file, because it cannot know in advance how much space
@@ -483,6 +356,7 @@ for most use cases should be about 50kB per hour of video.

Note that cues are only written if the output is seekable and this option will
have no effect if it is not.

@end table
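As a sketch of how @option{reserve_index_space} might be used, based on the
rough 50kB-per-hour figure mentioned above (the value and file names are
illustrative):
@example
ffmpeg -i in.avi -c copy -reserve_index_space 50000 out.mkv
@end example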

@anchor{md5}
@@ -512,9 +386,7 @@ ffmpeg -i INPUT -f md5 -

See also the @ref{framemd5} muxer.

@section mov, mp4, ismv

MOV/MP4/ISMV (Smooth Streaming) muxer.
@section MOV/MP4/ISMV

The mov/mp4/ismv muxer supports fragmentation. Normally, a MOV/MP4
file has all the metadata about all packets stored in one location
@@ -530,8 +402,6 @@ very long files (since writing normal MOV/MP4 files stores info about
every single packet in memory until the file is closed). The downside
is that it is less compatible with other applications.

@subsection Options

Fragmentation is enabled by setting one of the AVOptions that define
how to cut the file into fragments:

@@ -571,6 +441,7 @@ a short portion of the file. With this option set, there is no initial
mdat atom, and the moov atom only describes the tracks but has
a zero duration.

Files written with this option set do not work in QuickTime.
This option is implicitly set when writing ismv (Smooth Streaming) files.
@item -movflags separate_moof
Write a separate moof (movie fragment) atom for each track. Normally,
@@ -585,26 +456,8 @@ This operation can take a while, and will not work in various situations such
as fragmented output, thus it is not enabled by default.
@item -movflags rtphint
Add RTP hinting tracks to the output file.
@item -movflags disable_chpl
Disable Nero chapter markers (chpl atom). Normally, both Nero chapters
and a QuickTime chapter track are written to the file. With this option
set, only the QuickTime chapter track will be written. Nero chapters can
cause failures when the file is reprocessed with certain tagging programs,
such as mp3Tag 2.61a and iTunes 11.3; other versions are most likely affected as well.
@item -movflags omit_tfhd_offset
Do not write any absolute base_data_offset in tfhd atoms. This avoids
tying fragments to absolute byte positions in the file/streams.
@item -movflags default_base_moof
Similarly to omit_tfhd_offset, this flag avoids writing the
absolute base_data_offset field in tfhd atoms, but does so by using
the new default-base-is-moof flag instead. This flag is new from
14496-12:2012. This may make the fragments easier to parse in certain
circumstances (avoiding basing track fragment location calculations
on the implicit end of the previous track fragment).
@end table
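For instance, a fragmented MP4 cut at keyframes could be produced along these
lines (a minimal sketch; the file names are illustrative):
@example
ffmpeg -i in.mov -c copy -movflags frag_keyframe+empty_moov out.mp4
@end example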

@subsection Example

Smooth Streaming content can be pushed in real time to a publishing
point on IIS with this muxer. Example:
@example
@@ -613,38 +466,26 @@ ffmpeg -re @var{<normal input/transcoding options>} -movflags isml+frag_keyframe

@section mp3

The MP3 muxer writes a raw MP3 stream with the following optional features:
@itemize @bullet
@item
An ID3v2 metadata header at the beginning (enabled by default). Versions 2.3 and
2.4 are supported, the @code{id3v2_version} private option controls which one is
used (3 or 4). Setting @code{id3v2_version} to 0 disables the ID3v2 header
completely.
The MP3 muxer writes a raw MP3 stream with an ID3v2 header at the beginning and
optionally an ID3v1 tag at the end. ID3v2.3 and ID3v2.4 are supported, the
@code{id3v2_version} option controls which one is used. The legacy ID3v1 tag is
not written by default, but may be enabled with the @code{write_id3v1} option.

The muxer supports writing attached pictures (APIC frames) to the ID3v2 header.
The pictures are supplied to the muxer in the form of a video stream with a single
packet. There can be any number of those streams, each will correspond to a
single APIC frame. The stream metadata tags @var{title} and @var{comment} map
to APIC @var{description} and @var{picture type} respectively. See
For seekable output the muxer also writes a Xing frame at the beginning, which
contains the number of frames in the file. It is useful for computing duration
of VBR files.

The muxer supports writing ID3v2 attached pictures (APIC frames). The pictures
are supplied to the muxer in the form of a video stream with a single packet. There
can be any number of those streams, each will correspond to a single APIC frame.
The stream metadata tags @var{title} and @var{comment} map to APIC
@var{description} and @var{picture type} respectively. See
@url{http://id3.org/id3v2.4.0-frames} for allowed picture types.

Note that the APIC frames must be written at the beginning, so the muxer will
buffer the audio frames until it gets all the pictures. It is therefore advised
to provide the pictures as soon as possible to avoid excessive buffering.

@item
A Xing/LAME frame right after the ID3v2 header (if present). It is enabled by
default, but will be written only if the output is seekable. The
@code{write_xing} private option can be used to disable it. The frame contains
various information that may be useful to the decoder, like the audio duration
or encoder delay.

@item
A legacy ID3v1 tag at the end of the file (disabled by default). It may be
enabled with the @code{write_id3v1} private option, but as its capabilities are
very limited, its usage is not recommended.
@end itemize

Examples:

Write an mp3 with an ID3v2.3 header and an ID3v1 footer.
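A minimal sketch of such a command, using the @code{id3v2_version} and
@code{write_id3v1} options described above (file names are illustrative):
@example
ffmpeg -i INPUT -id3v2_version 3 -write_id3v1 1 out.mp3
@end example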
@@ -659,24 +500,12 @@ ffmpeg -i input.mp3 -i cover.png -c copy -map 0 -map 1
-metadata:s:v title="Album cover" -metadata:s:v comment="Cover (Front)" out.mp3
@end example

Write a "clean" MP3 without any extra features:
@example
ffmpeg -i input.wav -write_xing 0 -id3v2_version 0 out.mp3
@end example

@section mpegts

MPEG transport stream muxer.

This muxer implements ISO 13818-1 and part of ETSI EN 300 468.

The recognized metadata settings in the mpegts muxer are @code{service_provider}
and @code{service_name}. If they are not set the default for
@code{service_provider} is "FFmpeg" and the default for
@code{service_name} is "Service01".

@subsection Options

The muxer options are:

@table @option
@@ -689,9 +518,6 @@ Set the transport_stream_id (default 0x0001). This identifies a
transponder in DVB.
@item -mpegts_service_id @var{number}
Set the service_id (default 0x0001), also known as program in DVB.
@item -mpegts_service_type @var{number}
Set the program service_type (default @var{digital_tv}); see below for
a list of predefined values.
@item -mpegts_pmt_start_pid @var{number}
Set the first PID for PMT (default 0x1000, max 0x1f00).
@item -mpegts_start_pid @var{number}
@@ -699,10 +525,7 @@ Set the first PID for data packets (default 0x0100, max 0x0f00).
@item -mpegts_m2ts_mode @var{number}
Enable m2ts mode if set to 1. Default value is -1 which disables m2ts mode.
@item -muxrate @var{number}
Set a constant muxrate (default VBR).
@item -pcr_period @var{number}
Override the default PCR retransmission time (default 20ms), ignored
if variable muxrate is selected.
Set muxrate.
@item -pes_payload_size @var{number}
Set minimum PES packet payload in bytes.
@item -mpegts_flags @var{flags}
@@ -726,27 +549,6 @@ ffmpeg -i source2.ts -codec copy -f mpegts -tables_version 1 udp://1.1.1.1:1111
@end example
@end table

Option mpegts_service_type accepts the following values:

@table @option
@item hex_value
Any hexadecimal value between 0x01 and 0xff as defined in ETSI 300 468.
@item digital_tv
Digital TV service.
@item digital_radio
Digital Radio service.
@item teletext
Teletext service.
@item advanced_codec_digital_radio
Advanced Codec Digital Radio service.
@item mpeg2_digital_hdtv
MPEG2 Digital HDTV service.
@item advanced_codec_digital_sdtv
Advanced Codec Digital SDTV service.
@item advanced_codec_digital_hdtv
Advanced Codec Digital HDTV service.
@end table
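For instance, to label the produced program as a radio service (a sketch; file
names are illustrative):
@example
ffmpeg -i INPUT -c copy -f mpegts -mpegts_service_type digital_radio out.ts
@end example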

Option mpegts_flags may take a set of such flags:

@table @option
@@ -756,7 +558,10 @@ Reemit PAT/PMT before writing the next packet.
Use LATM packetization for AAC.
@end table

@subsection Example
The recognized metadata settings in the mpegts muxer are @code{service_provider}
and @code{service_name}. If they are not set the default for
@code{service_provider} is "FFmpeg" and the default for
@code{service_name} is "Service01".

@example
ffmpeg -i file.mpg -c copy \
@@ -792,30 +597,6 @@ Alternatively you can write the command as:
ffmpeg -benchmark -i INPUT -f null -
@end example

@section nut

@table @option
@item -syncpoints @var{flags}
Change the syncpoint usage in nut:
@table @option
@item @var{default} use the normal low-overhead seeking aids.
@item @var{none} do not use the syncpoints at all, reducing the overhead but making the stream non-seekable;
Use of this option is not recommended, as the resulting files are very
sensitive to damage and seeking is not possible. Also, in general the overhead from
syncpoints is negligible. Note that @code{-write_index 0} can be used to disable
all growing data tables, allowing muxing of endless streams with limited memory
and without these disadvantages.
@item @var{timestamped} extend the syncpoint with a wallclock field.
@end table
The @var{none} and @var{timestamped} flags are experimental.
@item -write_index @var{bool}
Write the index at the end of the file; the default is to write an index.
@end table

@example
ffmpeg -i INPUT -f_strict experimental -syncpoints none - | processor
@end example
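Similarly, the growing index can be disabled with the @option{-write_index}
option described above, which is mainly useful for endless streams (a sketch;
the file names are illustrative):
@example
ffmpeg -i INPUT -write_index 0 out.nut
@end example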

@section ogg

Ogg container muxer.
@@ -829,22 +610,15 @@ is 1 second. A value of 0 will fill all segments, making pages as large as
possible. A value of 1 will effectively use 1 packet-per-page in most
situations, giving a small seek granularity at the cost of additional container
overhead.
@item -serial_offset @var{value}
Serial value from which to set the streams' serial numbers.
Setting it to different and sufficiently large values ensures that the produced
ogg files can be safely chained.

@end table
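As a sketch, chaining-friendly Ogg files could be produced by giving each
output a distinct, sufficiently large serial offset (the value is arbitrary):
@example
ffmpeg -i INPUT -serial_offset 1000 out.ogg
@end example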

@anchor{segment}
@section segment, stream_segment, ssegment

Basic stream segmenter.

This muxer outputs streams to a number of separate files of nearly
fixed duration. Output filename pattern can be set in a fashion
similar to @ref{image2}, or by using a @code{strftime} template if
the @option{strftime} option is enabled.
The segmenter muxer outputs streams to a number of separate files of nearly
fixed duration. Output filename pattern can be set in a fashion similar to
@ref{image2}.

@code{stream_segment} is a variant of the muxer used to write to
streaming output formats, i.e. which do not require global headers,
@@ -864,21 +638,14 @@ The segment muxer works best with a single constant frame rate video.

Optionally it can generate a list of the created segments, by setting
the option @var{segment_list}. The list type is specified by the
@var{segment_list_type} option. The entry filenames in the segment
list are set by default to the basename of the corresponding segment
files.

See also the @ref{hls} muxer, which provides a more specific
implementation for HLS segmentation.

@subsection Options
@var{segment_list_type} option.

The segment muxer supports the following options:

@table @option
@item reference_stream @var{specifier}
Set the reference stream, as specified by the string @var{specifier}.
If @var{specifier} is set to @code{auto}, the reference is chosen
automatically. Otherwise it must be a stream specifier (see the ``Stream
specifiers'' chapter in the ffmpeg manual) which specifies the
reference stream. The default value is @code{auto}.
@@ -887,11 +654,6 @@ reference stream. The default value is @code{auto}.
Override the inner container format, by default it is guessed by the filename
extension.

@item segment_format_options @var{options_list}
Set output format options using a :-separated list of key=value
parameters. Values containing the @code{:} special character must be
escaped.

@item segment_list @var{name}
Generate also a listfile named @var{name}. If not specified no
listfile is generated.
@@ -908,21 +670,15 @@ Allow caching (only affects M3U8 list files).

Allow live-friendly file generation.
@end table

@item segment_list_type @var{type}
Select the listing format.
@table @option
@item @var{flat} use a simple flat list of entries.
@item @var{hls} use an m3u8-like structure.
@end table
Default value is @code{samp}.

@item segment_list_size @var{size}
Update the list file so that it contains at most @var{size}
Update the list file so that it contains at most the last @var{size}
segments. If 0 the list file will contain all the segments. Default
value is 0.

@item segment_list_entry_prefix @var{prefix}
Prepend @var{prefix} to each entry. Useful to generate absolute paths.
By default no prefix is applied.
@item segment_list_type @var{type}
Specify the format for the segment list file.

The following values are recognized:
@table @samp
@@ -973,16 +729,6 @@ Note that splitting may not be accurate, unless you force the
reference stream key-frames at the given time. See the introductory
notice and the examples below.

@item segment_atclocktime @var{1|0}
If set to "1" split at regular clock time intervals starting from 00:00
o'clock. The @var{time} value specified in @option{segment_time} is
used for setting the length of the splitting interval.

For example with @option{segment_time} set to "900" this makes it possible
to create files at 12:00 o'clock, 12:15, 12:30, etc.

Default value is "0".

@item segment_time_delta @var{delta}
Specify the accuracy time when selecting the start time for a
segment, expressed as a duration specification. Default value is "0".
@@ -1002,7 +748,7 @@ In particular may be used in combination with the @file{ffmpeg} option
@var{force_key_frames} may not be set accurately because of rounding
issues, with the consequence that a key frame time may end up set just
before the specified time. For constant frame rate videos a value of
1/(2*@var{frame_rate}) should address the worst case mismatch between
1/2*@var{frame_rate} should address the worst case mismatch between
the specified time and the time set by @var{force_key_frames}.

@item segment_times @var{times}
@@ -1024,12 +770,6 @@ Wrap around segment index once it reaches @var{limit}.
@item segment_start_number @var{number}
Set the sequence number of the first segment. Defaults to @code{0}.

@item strftime @var{1|0}
Use the @code{strftime} function to define the name of the new
segments to write. If this is selected, the output segment name must
contain a @code{strftime} function template. Default value is
@code{0}.

@item reset_timestamps @var{1|0}
Reset timestamps at the beginning of each segment, so that each segment
will start with near-zero timestamps. It is meant to ease the playback
@@ -1045,7 +785,7 @@ argument must be a time duration specification, and defaults to 0.

@itemize
@item
Remux the content of file @file{in.mkv} to a list of segments
To remux the content of file @file{in.mkv} to a list of segments
@file{out-000.nut}, @file{out-001.nut}, etc., and write the list of
generated segments to @file{out.list}:
@example
@@ -1053,20 +793,14 @@ ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.list out%03d.nu

@end example

@item
Segment input and set output format options for the output segments:
@example
ffmpeg -i in.mkv -f segment -segment_time 10 -segment_format_options movflags=+faststart out%03d.mp4
@end example

@item
Segment the input file according to the split points specified by the
@var{segment_times} option:
As the example above, but segment the input file according to the split
points specified by the @var{segment_times} option:
@example
ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.csv -segment_times 1,2,3,5,8,13,21 out%03d.nut
@end example

@item
Use the @command{ffmpeg} @option{force_key_frames}
As the example above, but use the @command{ffmpeg} @option{force_key_frames}
option to force key frames in the input at the specified location, together
with the segment option @option{segment_time_delta} to account for
possible rounding performed when setting key frame times.
@@ -1085,7 +819,7 @@ ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.csv -segment_fr
@end example

@item
Convert the @file{in.mkv} to TS segments using the @code{libx264}
To convert the @file{in.mkv} to TS segments using the @code{libx264}
and @code{libfaac} encoders:
@example
ffmpeg -i in.mkv -map 0 -codec:v libx264 -codec:a libfaac -f ssegment -segment_list out.list out%03d.ts
@@ -1100,28 +834,6 @@ ffmpeg -re -i in.mkv -codec copy -map 0 -f segment -segment_list playlist.m3u8 \
@end example
@end itemize
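In addition, when the @option{strftime} option described above is enabled, the
segment names can embed the wall-clock time (a sketch; the pattern is
illustrative):
@example
ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_time 10 -strftime 1 out-%Y%m%d-%H%M%S.mkv
@end example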

@section smoothstreaming

The Smooth Streaming muxer generates a set of files (Manifest, chunks) suitable for serving with a conventional web server.

@table @option
@item window_size
Specify the number of fragments kept in the manifest. Default 0 (keep all).

@item extra_window_size
Specify the number of fragments kept outside of the manifest before removing from disk. Default 5.

@item lookahead_count
Specify the number of lookahead fragments. Default 2.

@item min_frag_duration
Specify the minimum fragment duration (in microseconds). Default 5000000.

@item remove_at_exit
Specify whether to remove all fragments when finished. Default 0 (do not remove).

@end table
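A possible invocation is sketched below; it assumes the output path names a
directory into which the Manifest and chunk files are written, and the encoder
choices are illustrative:
@example
ffmpeg -re -i INPUT -c:v libx264 -c:a aac -strict experimental -f smoothstreaming -window_size 10 -remove_at_exit 1 ./publish
@end example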

@section tee

The tee muxer can be used to write the same data to several files or any
@@ -1137,8 +849,8 @@ to feed the same packets to several muxers directly.
The slave outputs are specified in the file name given to the muxer,
separated by '|'. If any of the slave names contains the '|' separator,
leading or trailing spaces or any special character, it must be
escaped (see @ref{quoting_and_escaping,,the "Quoting and escaping"
section in the ffmpeg-utils(1) manual,ffmpeg-utils}).
escaped (see the ``Quoting and escaping'' section in the ffmpeg-utils
manual).

Muxer options can be specified for each slave by prepending them as a list of
@var{key}=@var{value} pairs separated by ':', between square brackets. If
@@ -1153,13 +865,10 @@ output name suffix.

@item bsfs[/@var{spec}]
Specify a list of bitstream filters to apply to the specified
output.

It is possible to specify to which streams a given bitstream filter
applies, by appending a stream specifier to the option separated by
@code{/}. @var{spec} must be a stream specifier (see @ref{Format
stream specifiers}). If the stream specifier is not specified, the
bitstream filters will be applied to all streams in the output.
output. It is possible to specify to which streams a given bitstream
filter applies, by appending a stream specifier to the option
separated by @code{/}. If the stream specifier is not specified, the
bitstream filters will be applied to all streams in the output.

Several bitstream filters can be specified, separated by ",".

@@ -1169,8 +878,7 @@ specified by a stream specifier. If not specified, this defaults to
all the input streams.
@end table

@subsection Examples

Some examples follow.
@itemize
@item
Encode something and both archive it in a WebM file and stream it
@@ -1191,49 +899,10 @@ audio packets.
ffmpeg -i ... -map 0 -flags +global_header -c:v libx264 -c:a aac -strict experimental
-f tee "[bsfs/v=dump_extra]out.ts|[movflags=+faststart]out.mp4|[select=a]out.aac"
@end example

@item
As below, but select only stream @code{a:1} for the audio output. Note
that a second-level escaping must be performed, as ":" is a special
character used to separate options.
@example
ffmpeg -i ... -map 0 -flags +global_header -c:v libx264 -c:a aac -strict experimental
-f tee "[bsfs/v=dump_extra]out.ts|[movflags=+faststart]out.mp4|[select=\'a:1\']out.aac"
@end example
@end itemize

Note: some codecs may need different options depending on the output format;
the auto-detection of this cannot work with the tee muxer. The main example
is the @option{global_header} flag.

@section webm_dash_manifest

WebM DASH Manifest muxer.

This muxer implements the WebM DASH Manifest specification to generate the DASH manifest XML.

@subsection Options

This muxer supports the following options:

@table @option
@item adaptation_sets
This option has the following syntax: "id=x,streams=a,b,c id=y,streams=d,e" where x and y are the
unique identifiers of the adaptation sets and a,b,c,d and e are the indices of the corresponding
audio and video streams. Any number of adaptation sets can be added using this option.
@end table

@subsection Example
@example
ffmpeg -f webm_dash_manifest -i video1.webm \
-f webm_dash_manifest -i video2.webm \
-f webm_dash_manifest -i audio1.webm \
-f webm_dash_manifest -i audio2.webm \
-map 0 -map 1 -map 2 -map 3 \
-c copy \
-f webm_dash_manifest \
-adaptation_sets "id=0,streams=0,1 id=1,streams=2,3" \
manifest.xml
@end example

@c man end MUXERS

22
doc/nut.texi
22
doc/nut.texi
@@ -1,5 +1,4 @@
\input texinfo @c -*- texinfo -*-
@documentencoding UTF-8

@settitle NUT

@@ -22,27 +21,6 @@ The official nut specification is at svn://svn.mplayerhq.hu/nut
In case of any differences between this text and the official specification,
the official specification shall prevail.

@chapter Modes
NUT has some variants signaled by using the flags field in its main header.

@multitable @columnfractions .4 .4
@item BROADCAST @tab Extend the syncpoint to report the sender wallclock
@item PIPE @tab Omit the syncpoint completely
@end multitable

@section BROADCAST

The BROADCAST variant provides a secondary time reference to facilitate
detecting endpoint latency and network delays.
It assumes all the endpoint clocks are synchronized.
To be used in real-time scenarios.

@section PIPE

The PIPE variant assumes NUT is used as a non-seekable intermediate container;
omitting syncpoints removes unneeded overhead and reduces the overall
memory usage.

@chapter Container-specific codec tags

@section Generic raw YUVA formats

@@ -79,6 +79,9 @@ qpel{8,16}_mc??_old_c / *pixels{8,16}_l4
Just used to work around a bug in an old libavcodec encoder version.
Don't optimize them.

tpel_mc_func {put,avg}_tpel_pixels_tab
Used only for SVQ3, so only optimize them if you need fast SVQ3 decoding.

add_bytes/diff_bytes
For huffyuv only, optimize if you want a faster ffhuffyuv codec.

@@ -136,6 +139,9 @@ dct_unquantize_mpeg2
dct_unquantize_h263
Used in MPEG-4/H.263 en/decoding.

FIXME remaining functions?
BTW, most of these functions are in dsputil.c/.h, some are in mpegvideo.c/.h.


Alignment:
@@ -191,11 +197,6 @@ __asm__() block.
Use external asm (nasm/yasm) or inline asm (__asm__()), do not use intrinsics.
The latter requires a good optimizing compiler which gcc is not.

When debugging an x86 external asm compilation issue, if lost in the macro
expansions, add DBG=1 to your make command-line: the input file will be
preprocessed, stripped of the debug/empty lines, then compiled, showing the
actual lines causing issues.
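For instance (the object file name below is only an illustration, not taken
from the text above):

    make DBG=1 libavcodec/x86/fpel.o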

Inline asm vs. external asm
---------------------------
Both inline asm (__asm__("..") in a .c file, handled by a compiler such as gcc)
@@ -267,6 +268,17 @@ CELL/SPU:
http://www-01.ibm.com/chips/techlib/techlib.nsf/techdocs/30B3520C93F437AB87257060006FFE5E/$file/Language_Extensions_for_CBEA_2.4.pdf
http://www-01.ibm.com/chips/techlib/techlib.nsf/techdocs/9F820A5FFA3ECE8C8725716A0062585F/$file/CBE_Handbook_v1.1_24APR2007_pub.pdf

SPARC-specific:
---------------
SPARC Joint Programming Specification (JPS1): Commonality
http://www.fujitsu.com/downloads/PRMPWR/JPS1-R1.0.4-Common-pub.pdf

UltraSPARC III Processor User's Manual (contains instruction timings)
http://www.sun.com/processors/manuals/USIIIv2.pdf

VIS Whitepaper (contains optimization guidelines)
http://www.sun.com/processors/vis/download/vis/vis_whitepaper.pdf

GCC asm links:
--------------
official doc but quite ugly