Compare commits


160 Commits

Author SHA1 Message Date
Michael Niedermayer
b4552cc9b8 update for FFmpeg 2.0.3
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-12-24 03:22:00 +01:00
Michael Niedermayer
172f929767 avutil/log: skip IO calls on empty strings
These occur, for example, when no context is set, so they are common.
(A sketch of the early-return follows this entry.)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit a044a183a3)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-12-24 01:05:47 +01:00
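
A minimal C sketch of the early-return described above, with a hypothetical
log sink (this is not the actual av_log callback): an empty message triggers
no I/O call at all.

    #include <stdio.h>

    static void log_write(const char *msg)
    {
        if (!msg[0])
            return;          /* empty string: skip the I/O call entirely */
        fputs(msg, stderr);  /* otherwise hand it to the usual sink */
    }
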
Michael Niedermayer
66a9edfcf6 do O(1) instead of O(n) atomic operations in register functions
About 1 ms faster startup time (a sketch of the idea follows this entry).

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 133fbfc781)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-12-24 01:05:47 +01:00
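
A minimal C11 sketch of the registration idea above, with hypothetical type and
function names (this is not the actual FFmpeg code): the list is walked with
cheap relaxed loads and only the final append is published with a
compare-and-swap, so each registration performs a small, bounded number of
expensive atomic operations instead of one per list element.

    #include <stdatomic.h>
    #include <stddef.h>

    typedef struct Codec {
        _Atomic(struct Codec *) next;
        const char *name;
    } Codec;

    static _Atomic(Codec *) codec_list;

    void register_codec(Codec *c)
    {
        Codec *expected = NULL;

        atomic_store_explicit(&c->next, NULL, memory_order_relaxed);
        /* Empty list: a single CAS installs the new head. */
        if (atomic_compare_exchange_strong(&codec_list, &expected, c))
            return;
        for (;;) {
            /* Walk to the tail with non-serializing loads ... */
            Codec *tail = atomic_load_explicit(&codec_list, memory_order_acquire);
            Codec *next, *expect_null = NULL;

            while ((next = atomic_load_explicit(&tail->next, memory_order_relaxed)))
                tail = next;
            /* ... and publish the new element with one CAS. */
            if (atomic_compare_exchange_strong(&tail->next, &expect_null, c))
                return;
            /* Another thread appended concurrently; rescan and retry. */
        }
    }
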
Paul B Mahol
a9382fc15c avcodec/libopusenc: change default frame duration to 20 ms
20 ms is used by libopus encoder.

Signed-off-by: Paul B Mahol <onemda@gmail.com>
(cherry picked from commit 74906d3727)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-12-24 01:05:47 +01:00
Michael Niedermayer
bd9dcb411d avcodec/jpeg2000dec: Check precno before using it in JPEG2000_PGOD_CPRL
Fixes out of array reads
Fixes: asan_heap-oob_f0de57_6823_mjp2.mov

Found-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 3d5a5e86be)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-12-24 01:05:47 +01:00
Michael Niedermayer
ae81a0e32d avcodec: move end zeroing code from av_packet_split_side_data() to avcodec_decode_subtitle2()
This code changes the input packet, which is read-only, and can in
rare circumstances lead to decoder errors. (I ran into one of these in
the audio decoder: the packet was corrupted during av_find_stream_info(),
so actually decoding that single packet failed later.)
Until a better fix is implemented, this commit limits the problem.
A better fix might be to make the subtitle decoders not depend on
data[size] == 0, or to copy their input when this is not the case
(a sketch of the copy-with-padding approach follows this entry).
(cherry picked from commit 01923bab98)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-12-24 01:05:47 +01:00
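
A short sketch of the "copy instead of mutate" alternative mentioned above. The
helper name and the padding constant are stand-ins, not the real libavcodec
API: the point is to give the subtitle decoder a private, zero-padded copy of
the payload instead of writing a 0 byte into the shared, read-only packet.

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    #define PAYLOAD_PADDING 32   /* stand-in for the real input-buffer padding */

    /* Returns a private copy of data[0..size) followed by zeroed padding, so a
     * decoder that relies on data[size] == 0 can read safely.  Caller frees. */
    static uint8_t *padded_copy(const uint8_t *data, size_t size)
    {
        uint8_t *buf = malloc(size + PAYLOAD_PADDING);

        if (!buf)
            return NULL;
        memcpy(buf, data, size);
        memset(buf + size, 0, PAYLOAD_PADDING);
        return buf;
    }
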
Michael Niedermayer
4f93400db1 avutil: reintroduce lls1 as the 52 ABI needs it
lls1 taken from ff130d7

This is incompatible with libavcodec versions
55.18.100 to 55.43.100, except 55.39.101.
This incompatibility is caused by those libavcodec versions depending on
a libavutil 52 that is ABI-incompatible with the previous ABI 52.

You can avoid this incompatibility by upgrading your libavcodec so that it
no longer depends on the invalid ABI.

See: 502ab21af0
See: cc6714bb16
See: 41578f70cf
See: Ticket3136
Tested-by: marillat
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit b382d09d29)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-12-24 01:05:47 +01:00
Michael Niedermayer
0cd61c7f7d rename new lls code to lls2 to avoid conflict with the old which has a different ABI
Also remove the failed attempt at a compatibility layer; the code simply cannot work.

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit c3814ab654)

Conflicts:

	libavcodec/version.h
2013-12-24 01:05:47 +01:00
Michael Niedermayer
28ac4e91dc avutil: rename lls to lls2
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit bbe66ef912)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-12-24 01:05:47 +01:00
Diego Biurrun
6b683be641 mpeg12dec: Remove incomplete and wrong UV swapping code for VCR2
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 3215140425)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-12-24 01:05:47 +01:00
Kostya Shishkov
3a3b5ae4c0 mpegvideo: Fix swapping of UV planes for VCR2
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit bae14f38d9)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-12-24 01:05:47 +01:00
Michael Niedermayer
70fcea3b77 h264: Do not treat the initial frame specially in the handling of frame gaps
Not handling frame gaps led to the lack of a dummy reference frame,
which led to the failure of decode_slice_header(), which led to one SEI
recovery message being skipped, which introduced a slightly suboptimal
recovery point for at least one h264 file compared to JM.

Found-by: Carl & BugMaster
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 9e5ef1c5c3)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-12-24 01:05:47 +01:00
Michael Niedermayer
b545d11d49 avcodec/jpeg2000dec: non zero image offsets are not supported
Fixes out of array accesses
Fixes Ticket3080
Found-by: ami_stuff
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 780669ef7c)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-12-24 01:05:47 +01:00
Michael Niedermayer
4324d7bade avformat/thp: force moving forward
Fixes an infinite loop (a sketch of the forced-forward-progress check follows this entry)
Fixes Ticket3098

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 6c4b87d3d6)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-12-24 01:05:47 +01:00
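
A generic C sketch of the forced-forward-progress check (hypothetical names,
not the thp demuxer code): if the next computed offset does not advance, the
input is treated as invalid instead of seeking to the same position forever.

    #include <stdint.h>

    /* Returns 0 and updates *pos when next_pos makes progress, -1 otherwise. */
    static int advance_pos(int64_t *pos, int64_t next_pos)
    {
        if (next_pos <= *pos)
            return -1;     /* would loop forever on the same offset */
        *pos = next_pos;
        return 0;
    }
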
Michael Niedermayer
c88bdac460 avformat/thp: fix variable types to avoid overflows
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 2b1056e4e2)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-12-24 01:05:47 +01:00
Michael Niedermayer
bdf6e6fff4 avcodec/jpeglsdec: check err value for ls_get_code_runterm()
Fixes infinite loop
Fixes Ticket3086

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit cc0e47b550)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-12-24 01:05:47 +01:00
Michael Niedermayer
83e4aa3e7c avutil/opt: initialize ret
Fixes CID1108610
Fixes use of uninitialized variable

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 2d8ccf0adc)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-12-24 01:05:47 +01:00
Michael Niedermayer
fce2cfbdcf avcodec/utils: add some safety checks to add_metadata_from_side_data()
This fixes potential overreads with crafted files.

Found-by: wm4
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 838f461b07)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-12-24 01:05:47 +01:00
Michael Niedermayer
72f1907c96 avcodec/avpacket/av_packet_split_side_data: ensure that side data padding is initialized
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 240fd8c96f)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-12-24 01:05:46 +01:00
Michael Niedermayer
074ebfacf4 avfilter/ff_insert_pad: fix order of operations
Fixes out of bounds access
Fixes CID732170
Fixes CID732169

No filter is known to use this function in a way that would allow the issue to be reproduced.

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit ab2bfb85d4)

Conflicts:

	libavfilter/avfilter.c
2013-12-24 01:05:46 +01:00
Michael Niedermayer
47f8497837 avcodec/jpeg2000dec: fix context consistency with too large lowres
Fixes out of array accesses
Fixes Ticket2898

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit a1b9004b76)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-12-24 01:05:46 +01:00
Michael Niedermayer
93f26b7992 avcodec/jpeg2000dec: prevent out of array accesses in pixel addressing
Fixes Ticket2921

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit fe448cd28d)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-12-24 01:05:46 +01:00
Michael Niedermayer
f684bbf224 avcodec/jpeg2000dec: check transform equality in MCT
Fixes null pointer dereference
Fixes Ticket2843

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit ac3b01a9c0)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-12-24 01:05:46 +01:00
Michael Niedermayer
edca16f1af ffserver: strip odd chars from html error messages before sending them back
Fixes Ticket3034

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 885739f3b4)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-12-24 01:05:46 +01:00
Mason Carter
aeac212fda VC1: Fix intensity compensation performance regression
Fix https://trac.ffmpeg.org/ticket/3204

The problem was that intensity compensation was always used once it was
encountered. This is because v->next_use_ic was never set back to zero.
To fix this, when resetting v->next_luty/uv, also reset v->next_use_ic
(a sketch of this reset follows this entry).

This improved (restored) performance by 85% when decoding
http://bit.ly/bbbwmv

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit ed5bed4152)
2013-12-22 16:09:44 +01:00
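
A sketch of the reset described above, using the field names quoted in the
commit message (next_luty/next_lutuv/next_use_ic) on a hypothetical stand-in
struct rather than the real VC1Context:

    typedef struct {
        unsigned char next_luty[256];    /* hypothetical stand-in fields */
        unsigned char next_lutuv[256];
        int next_use_ic;
    } ICState;

    static void reset_next_ic(ICState *v)
    {
        for (int i = 0; i < 256; i++) {
            v->next_luty[i]  = i;        /* identity LUTs: no compensation */
            v->next_lutuv[i] = i;
        }
        /* The crucial part of the fix: clear the flag together with the LUTs,
         * otherwise intensity compensation stays enabled once it was seen. */
        v->next_use_ic = 0;
    }
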
Martin Storsjö
62f05d6309 arm: Don't clobber callee saved registers in scalarproduct
q4-q7/d8-d15 are not supposed to be clobbered by the callee.

CC: libav-stable@libav.org
Signed-off-by: Martin Storsjö <martin@martin.st>
(cherry picked from commit d307e408d4)
2013-12-21 09:58:12 +01:00
Michael Niedermayer
bd339d4882 swscale/utils: check chroma width for fast bilinear scaler
Fixes artifacts where fast bilinear was used for downscaling chroma

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 037fc3b054)
2013-12-16 02:24:13 +01:00
Michael Niedermayer
9de71b0eb2 swscale/utils: remove useless ()
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 554e913fd7)
2013-12-16 02:24:12 +01:00
Michael Niedermayer
7e73760950 avcodec/cabac: force get_cabac to be not inlined
Works around a bug in gcc's inline asm register assignment.
Fixes Ticket3177

At least gcc 4.4 through 4.6 is affected; no unaffected gcc version is known.
clang seems unaffected.

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 0538b29ae8)
2013-12-09 10:36:00 +01:00
Michael Niedermayer
0639e403be avcodec/error_resilience: check that er is supported before attempting to read the status of the previous slice
Fixes incorrectly set error_occured and improves speed

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 90539cea33)
2013-12-07 11:34:42 +01:00
Michael Niedermayer
5c7d6be5f9 avcodec/error_resilience: factor er_supported() check out
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit afb18c5578)
2013-12-07 11:34:25 +01:00
Nicolas George
2005887707 lavfi/af_pan: support unknown layouts on input.
Fix trac ticket #2899.
(cherry picked from commit 7b0a587393)
2013-11-28 01:09:19 +01:00
Nicolas George
cf8462ce00 lavfi/af_pan: support unknown layouts on output.
(cherry picked from commit 4e9adc9b73)
2013-11-28 01:09:12 +01:00
Nicolas George
02c8c064ea lswr: fix assert failure on unknown layouts.
(cherry picked from commit 4a640a6ac8)
2013-11-28 01:09:06 +01:00
Nicolas George
9a4acedf31 lavfi: parsing helper for unknown channel layouts.
Make ff_parse_channel_layout() accept unknown layouts too.
(cherry picked from commit 6e2473edfd)
2013-11-28 01:08:59 +01:00
Nicolas George
c2d37e7364 lavfi/avfiltergraph: do not reduce incompatible lists.
A list of "all channel layouts" but not "all channel counts"
cannot be reduced to a single unknown channel count.
(cherry picked from commit d300f5f6f5)
2013-11-28 01:08:52 +01:00
Nicolas George
62baf22ec0 lavfi/avfiltergraph: suggest a solution when format selection fails.
Format selection can fail if unknown channel layouts are used
with filters that do not support them.
(cherry picked from commit f775eb3fb4)
2013-11-28 01:08:46 +01:00
Nicolas George
8f7d839e15 lavd/lavfi: support unknown channel layouts.
(cherry picked from commit 863fb11f63)
2013-11-28 01:08:37 +01:00
Clément Bœsch
93716f7bea avformat/image2: allow muxing gif files.
Fixes Ticket #2936.
(cherry picked from commit f70db22999)

Conflicts:
	libavformat/img2enc.c
2013-11-18 14:34:59 +01:00
Clément Bœsch
ff4c53e8b3 build: avoid stdin stall with GNU AS probing.
a758c5e added probing for various tools, such as AS. Unfortunately, GNU
AS reads stdin when invoked with -v, so configure stalls with
arguments such as --as=as.

Fixes Ticket #1898.
(cherry picked from commit dbb41f93c1)
2013-11-18 14:33:46 +01:00
Michael Niedermayer
9a22d6dd63 avformat/utils: don't count attached pics toward the probesize
Such pics behave more like headers, which we also don't count.
Fixes Ticket3146

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit a8dec360c5)
2013-11-18 14:33:09 +01:00
Michael Niedermayer
beb28bc55d avcodec/bink: fix seeking to frame 0
Fixes Ticket3088

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit cb52d6da0a)

Conflicts:
	libavcodec/bink.c
2013-10-31 00:57:11 +01:00
Michael Niedermayer
1fa7ad2e20 ffmpeg_filter: Fix non jpeg yuv in jpeg support
This is a regression; I did not bisect, so I don't know what caused it, but
it is likely some change to the command-line handling code.

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit d9bc251d39)
2013-10-27 20:24:31 +01:00
Michael Niedermayer
f514834917 avformat/utils: do not override pts in h264 when they are provided from the demuxer
Fixes Ticket2143

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 1e5271a9fd)
2013-10-27 19:40:25 +01:00
Michael Niedermayer
1d0e583728 h264: make flush_change() set mmco_reset
This ensures that frames do not get mixed on context reinits

Fixes Ticket2836

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 3c9dd93faa)
2013-10-26 02:41:31 +02:00
Michael Niedermayer
782331be1e avcodec/h264: reduce noisiness of "mmco: unref short failure"
Do not consider it an error if we have no frames and should discard one.
This condition can easily happen when decoding is started from an I frame

Fixes Ticket2811

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 08a8976196)
2013-10-26 00:52:34 +02:00
Michael Niedermayer
94c7ee4d9e avformat/mp3dec: perform seek resync in the correct direction
Fixes seeking to the last frame in CBR files
Fixes Ticket2773

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit ba8716df7f)
2013-10-26 00:51:50 +02:00
Michael Niedermayer
b7154758de avformat/wavdec: Fix smv packet interleaving
This strips the relative timestamp "flag" off.

Fixes Ticket2849

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 6abb9eb525)
2013-10-26 00:50:44 +02:00
Michael Niedermayer
cd7d575e90 avcodec/h264: do not trust last_pic_droppable when marking pictures as done
This simplifies the code and fixes a deadlock

Fixes Ticket2927
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 29ffeef5e7)
2013-10-26 00:50:00 +02:00
Stefano Sabatini
17d169ce0f doc/Makefile: fix man pages uninstall path
Fix trac ticket #3054.
(cherry picked from commit af1c538850)
2013-10-26 00:49:18 +02:00
Michael Niedermayer
fb1fb462e5 avformat/mov: force parsing of headers if stts is absent
Fixes Ticket2991

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit e41ea866fc)
2013-10-24 10:35:34 +02:00
Michael Niedermayer
7da810e68b avformat/gifdec: make GIF_APP_EXT_LABEL parsing more robust
Fixes Ticket3021

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit e1f8184a1a)
2013-10-24 10:33:43 +02:00
Michael Niedermayer
6de6d9e2d3 avformat/matroskadec: only set r_frame_rate if the value is within reasonable limits
Fixes Ticket2451

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 6853e40106)
2013-10-24 10:30:33 +02:00
Michael Niedermayer
736851264b avformat/wavdec: Don't trust the fact chunk for PCM
Fixes Ticket3033

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 83fc6c822b)
2013-10-24 10:29:37 +02:00
Michael Niedermayer
59431fc841 avcodec/h264_refs: modify key frame detection heuristic to detect more cases
Fixes Ticket2968

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 5ac6b6028f)
2013-10-24 10:28:20 +02:00
mrlika
f581e25a69 lavd/v4l2: do not fail when VIDIOC_ENUMSTD returns EINVAL without a valid match
With some (buggy) drivers, the VIDIOC_G_STD ioctl returns a std_id that cannot
be matched with any of the enumerated v4l2_standard structures (for example
std_id = 0 or std_id = 0xffffff). Do not fail when we reach the end of the
enumeration without a valid match (a sketch of this tolerant enumeration
follows this entry).

Fixes ticket #2370

Note: This commit message has been modified by Giorgio Vazzana, the original
commit message was:

"Fixed regression for mandatory VIDIOC_ENUMSTD support by v4l2"

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit ed72542539)
2013-10-24 10:09:30 +02:00
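
A sketch of the tolerant enumeration described above (hypothetical helper name,
error handling trimmed): reaching the end of the VIDIOC_ENUMSTD enumeration
(EINVAL) is reported as "no match" rather than treated as a fatal error.

    #include <errno.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* Returns 1 and fills *out if a standard matching id is found,
     * 0 if the enumeration ends without a match, or -errno on real errors. */
    static int find_standard(int fd, v4l2_std_id id, struct v4l2_standard *out)
    {
        for (unsigned i = 0; ; i++) {
            struct v4l2_standard std;

            memset(&std, 0, sizeof(std));
            std.index = i;
            if (ioctl(fd, VIDIOC_ENUMSTD, &std) < 0) {
                if (errno == EINVAL)
                    return 0;          /* end of enumeration: not fatal */
                return -errno;
            }
            if (std.id & id) {
                *out = std;
                return 1;
            }
        }
    }
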
Stefano Sabatini
9d0bb7fc39 doc/codecs: fix dangling reference to codec-options chapter
(cherry picked from commit b4bd21b7fe)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-10-08 20:57:07 +02:00
Michael Niedermayer
a53fd4b758 update for 2.0.2
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-10-08 19:39:58 +02:00
Paul B Mahol
b4ccdf5e68 avcodec/ffv1dec: fix format detection
Fixes a crash with carefully crafted files.

Signed-off-by: Paul B Mahol <onemda@gmail.com>
(cherry picked from commit a27227d401)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-10-08 18:13:55 +02:00
Michael Niedermayer
9b02aa2593 avcodec/parser: reset indexes on realloc failure
Fixes Ticket2982

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit f31011e9ab)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-10-08 18:13:55 +02:00
Michael Niedermayer
f089e67d51 avcodec/imgconvert/get_color_type: fix type for PAL8
Fixes Ticket3008

Fate changes as PAL8 gets used instead of BGR8

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 95666b2298)
2013-10-02 23:47:41 +02:00
Thilo Borgmann
842d7c9b3a configure: fix logic for threads in case of OpenCL is enabled.
Fixes ticket 3004.

Signed-off-by: Thilo Borgmann <thilo.borgmann@mail.de>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit d3a03d90a3)
2013-10-02 23:44:39 +02:00
Michael Niedermayer
187297b871 Merge remote-tracking branch 'TimothyGu/release/2.0' into release/2.0
* TimothyGu/release/2.0:
  doc/encoders: add doc for AAC encoder
  doc/formats: Add documentation for 3 parameters that have been missing
  doc/encoders: improve libvo-aacenc doc
  doc/encoders: reformat and add some clarification in libtwolame doc
  doc/encoders: reformat libmp3lame doc
  doc/encoders: add libxvid doc
  doc/encoders: partially rewrite and reformat libx264 docs

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2013-09-23 22:31:44 +02:00
Michael Niedermayer
211ad5042a Merge remote-tracking branch 'jamrial/release/2.0' into release/2.0
* jamrial/release/2.0:
  avformat/matroskadec: check out_samplerate before using it in av_rescale()
  matroskadec: Improve TTA duration calculation
  avformat/oggparsevorbis: fix leak of tt
  avformat/oggparsevorbis: fix leak of ct

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2013-09-23 22:30:49 +02:00
Michael Niedermayer
2b06f5f8f1 avcodec/g2meet: Fix framebuf size
Currently the code can in some cases draw tiles that hang outside the
allocated buffer. This patch increases the buffer size to avoid out
of array accesses. An alternative would be to fail if such tiles are
encountered.
I do not know whether any valid files use such hanging tiles
(a sketch of the rounded-up allocation follows this entry).

Fixes Ticket2971
Found-by: ami_stuff
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit e07ac727c1)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-09-23 21:46:15 +02:00
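
An illustrative sketch of the "allocate for whole tiles" approach (hypothetical
names, not the g2meet code): rounding the frame buffer up to a multiple of the
tile size means a tile hanging over the nominal picture edge still lands inside
the allocation.

    #include <stdint.h>
    #include <stdlib.h>

    static uint8_t *alloc_framebuf(unsigned width, unsigned height,
                                   unsigned tile_w, unsigned tile_h,
                                   unsigned bytes_per_pixel, size_t *stride_out)
    {
        /* Round both dimensions up to whole tiles. */
        size_t aw = ((size_t)width  + tile_w - 1) / tile_w * tile_w;
        size_t ah = ((size_t)height + tile_h - 1) / tile_h * tile_h;
        size_t stride = aw * bytes_per_pixel;

        *stride_out = stride;
        return calloc(ah, stride);   /* zero-filled, so the padding is defined */
    }
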
Michael Niedermayer
0a64b25c77 avcodec/g2meet: Fix order of align and pixel size multiplication.
Fixes out of array accesses
Fixes Ticket2922

Found-by: ami_stuff
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 821a5938d1)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-09-23 21:46:15 +02:00
Michael Niedermayer
40e52bbb63 avcodec/ffv1enc: update buffer check for 16bps
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 3728603f18)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-09-23 21:46:15 +02:00
Michael Niedermayer
aeec1a6430 avcodec/truemotion2: Fix av_freep arguments
Fixes null pointer dereference
Fixes Ticket2944

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit c54aa2fb0f)

Conflicts:

	libavcodec/truemotion2.c
2013-09-23 21:46:15 +02:00
Michael Niedermayer
ef121a88d5 avcodec/mjpegdec: Add some sanity checks to ljpeg_decode_rgb_scan()
These prevent the RGB LJPEG code from being run with parameters that it doesn't
support. No test case is available, but it seems possible to trigger these.

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 61c68000ed)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-09-23 21:46:15 +02:00
Michael Niedermayer
db99c41567 avfilter/vf_fps: make sure the fifo is not empty before using it
Fixes Ticket2905

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit cdd5df8189)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-09-23 21:46:15 +02:00
Michael Niedermayer
54bdb5fc86 avcodec/dsputil: fix signedness in sizeof() comparisons
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 454a11a1c9)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-09-23 21:46:15 +02:00
Michael Niedermayer
2fd824b466 ffv1dec: Check bits_per_raw_sample and colorspace for equality in ver 0/1 headers
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit b05cd1ea7e)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-09-23 21:46:15 +02:00
Michael Niedermayer
005b38f8f1 avcodec/ffv1dec: check global header version
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 20b965a1a4)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-09-23 21:46:15 +02:00
Timothy Gu
6911d9e1b0 doc/encoders: add doc for AAC encoder
Thanks-to: Kostya Shishkov <kostya.shishkov@gmail.com>
Signed-off-by: Timothy Gu <timothygu99@gmail.com>
Signed-off-by: Stefano Sabatini <stefasab@gmail.com>
(cherry picked from commit 0e11790cf7)

Signed-off-by: Timothy Gu <timothygu99@gmail.com>
2013-09-22 15:29:53 -07:00
Timothy Gu
cecb2b39ce doc/formats: Add documentation for 3 parameters that have been missing
Signed-off-by: Timothy Gu <timothygu99@gmail.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit b7dd459863)

Signed-off-by: Timothy Gu <timothygu99@gmail.com>
2013-09-22 15:29:22 -07:00
Timothy Gu
9ebfee7ac0 doc/encoders: improve libvo-aacenc doc
Signed-off-by: Timothy Gu <timothygu99@gmail.com>
Signed-off-by: Stefano Sabatini <stefasab@gmail.com>
(cherry picked from commit 81bbe49a0e)

Signed-off-by: Timothy Gu <timothygu99@gmail.com>
2013-09-22 15:29:12 -07:00
Timothy Gu
01b39884c7 doc/encoders: reformat and add some clarification in libtwolame doc
Signed-off-by: Timothy Gu <timothygu99@gmail.com>
Signed-off-by: Stefano Sabatini <stefasab@gmail.com>
(cherry picked from commit e45e72f5f8)

Signed-off-by: Timothy Gu <timothygu99@gmail.com>
2013-09-22 15:28:56 -07:00
Timothy Gu
c08e8ab715 doc/encoders: reformat libmp3lame doc
Signed-off-by: Timothy Gu <timothygu99@gmail.com>
Signed-off-by: Stefano Sabatini <stefasab@gmail.com>
(cherry picked from commit 40b8350b57)

Signed-off-by: Timothy Gu <timothygu99@gmail.com>
2013-09-22 15:28:40 -07:00
Timothy Gu
68ae344b5e doc/encoders: add libxvid doc
Signed-off-by: Stefano Sabatini <stefasab@gmail.com>
(cherry picked from commit 6b255e5e70)

Signed-off-by: Timothy Gu <timothygu99@gmail.com>
2013-09-22 15:27:25 -07:00
Timothy Gu
5753d780b4 doc/encoders: partially rewrite and reformat libx264 docs
Format is based on the thread:
"[PATCH] doc/encoders: Add libopus encoder doc" (06-28-2013)
http://thread.gmane.org/gmane.comp.video.ffmpeg.devel/165368/

Also merge the two option sections (Mapping and Private options).

Patch partially edited by Stefano Sabatini.

Signed-off-by: Stefano Sabatini <stefasab@gmail.com>
(cherry picked from commit 11cb697501)

Signed-off-by: Timothy Gu <timothygu99@gmail.com>
2013-09-22 15:26:26 -07:00
Michael Niedermayer
dbb534cea6 avformat/matroskadec: check out_samplerate before using it in av_rescale()
Prevent assertion failure with damaged input

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 338f8b2eaf)
2013-09-18 15:07:49 -03:00
James Almer
414d75b8bc matroskadec: Improve TTA duration calculation
Calculate the duration as accurately as possible to improve decoding of samples
where the last frame is smaller than the rest.

Signed-off-by: James Almer <jamrial@gmail.com>
Approved-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit af248fa117)
2013-09-18 15:06:31 -03:00
Michael Niedermayer
c52a25e03b avformat/oggparsevorbis: fix leak of tt
Fixes CID1061059
Fixes fate
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>

(cherry picked from commit f3b7f47070)
2013-09-18 14:53:25 -03:00
Michael Niedermayer
c7966bf795 avformat/oggparsevorbis: fix leak of ct
Fixes CID1061058
Fixes fate

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit d0a882ab1d)
2013-09-18 14:52:58 -03:00
Clément Bœsch
0f429392cf avcodec/assenc: fix potential overread.
(cherry picked from commit 860a081058)

Signed-off-by: Alexander Strasser <eclipse7@gmx.net>
2013-09-15 22:21:05 +02:00
Clément Bœsch
2dc6c5d462 avcodec/srtdec: fix potential overread.
(cherry picked from commit 3a54c221d5)

Signed-off-by: Alexander Strasser <eclipse7@gmx.net>
2013-09-15 22:21:05 +02:00
Clément Bœsch
dc0403530e avformat/subtitles: add a next line jumper and use it.
This fixes a bunch of possible overreads in avformat caused by the idiom p +=
strcspn(p, "\n") + 1 (strcspn() can stop at the terminating '\0' if no
'\n' is found, so the +1 leads to an overread; a sketch of a safe helper
follows this entry).

Note on lavf/matroskaenc: no extra subtitles.o Makefile dependency is
added because only the header is required for ff_subtitles_next_line().

Note on lavf/mpsubdec: the code gets slightly more complex to avoid an infinite
loop in the probing, since there is no longer a forced increment.

NOTE:
Code of function ff_subtitles_next_line fixed by Alexander Strasser.

The original code from master tested the wrong character, but was
corrected by a subsequent commit. That commit, however, is not backported,
so it had to be fixed in this commit for the backport.

Conflicts:
	libavformat/mpl2dec.c

(cherry picked from commit 90fc00a623)

Signed-off-by: Alexander Strasser <eclipse7@gmx.net>
2013-09-15 22:21:05 +02:00
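
A sketch of the overread and of a safe helper, written from the description
above as a stand-in for ff_subtitles_next_line (not the actual implementation):
the idiom p += strcspn(p, "\n") + 1 steps past the terminating '\0' whenever
the buffer has no trailing newline, because strcspn() then returns the offset
of the '\0' itself.

    #include <string.h>

    /* Advance to the start of the next line without ever stepping past the
     * terminating '\0' (unlike the p += strcspn(p, "\n") + 1 idiom). */
    static const char *next_line(const char *p)
    {
        p += strcspn(p, "\n");   /* now at '\n' or at the terminating '\0' */
        if (*p == '\n')
            p++;
        return p;
    }
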
Clément Bœsch
59147be24f avformat/srtdec: skip initial random line breaks.
I found a bunch of (recent) SRT files in the wild with 3 to 10 line
breaks at the beginning.

(cherry picked from commit cfcd55db16)

Signed-off-by: Alexander Strasser <eclipse7@gmx.net>
2013-09-15 22:21:04 +02:00
Carl Eugen Hoyos
09bc4be3db Use rc_max_rate if no video bit_rate was specified when muxing mxf_d10.
Fixes ticket #2945.

Reviewed-by: Matthieu Bouron
(cherry picked from commit d73565d5dd)
2013-09-12 23:11:47 +02:00
Carl Eugen Hoyos
aaef59d535 Store the video bit_rate in the context when muxing mxf.
This will allow using rc_max_rate if no bit_rate is specified (on remuxing).

Reviewed-by: Matthieu Bouron
(cherry picked from commit 52cf08b4c8)
2013-09-12 23:06:38 +02:00
Clément Bœsch
283e070877 avformat/subtitles: check lower bound for duration overlap seeking.
(cherry picked from commit 1ca4bf930b)
2013-09-10 21:32:56 +02:00
Clément Bœsch
4e0c29451b avformat/vobsub: fix seeking.
(cherry picked from commit f8678dcef3)
2013-09-10 21:32:52 +02:00
Paul B Mahol
f5ae34250a avformat/matroskaenc: remove bogus prores tag
Fixes: ffmpeg -i input -c:v prores output.mkv

Signed-off-by: Paul B Mahol <onemda@gmail.com>
(cherry picked from commit 14851ca5f5)

Conflicts:
	libavformat/matroskaenc.c
2013-09-08 21:44:06 +02:00
Carl Eugen Hoyos
fc4c29bc6e Read h264 headers from v4l2 to allow stream-copying.
Fixes ticket #2882.
Analyzed and tested by William C Bonner.
(cherry picked from commit e337c9d564)
2013-09-05 22:49:52 +02:00
Paul B Mahol
6158eec53f w64dec: fix end position of summarylist guid
Noticed-by: James Almer

Signed-off-by: Paul B Mahol <onemda@gmail.com>
(cherry picked from commit 3e36dc8626)
2013-09-05 22:49:28 +02:00
Paul B Mahol
6c0fef5762 w64dec: fix skipping of unknown guids
Regression since 14d50c1.
Fixes #2932.

Signed-off-by: Paul B Mahol <onemda@gmail.com>
(cherry picked from commit 79b70e47a4)
2013-09-05 22:49:16 +02:00
Carl Eugen Hoyos
f5a4bd23e9 Avoid a deadlock when decoding wma.
Fixes ticket #2925.
(cherry picked from commit ec8a4841f7)
2013-09-02 08:54:32 +02:00
Carl Eugen Hoyos
8e1760f37f avcodec/ffv1dec: reorganize thread init/update
This moves some allocations to init, reducing possible failure modes in update.
It always copies from the previous context instead of doing so only during init.

Fixes Ticket2923

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 21dc3a3cc2)
2013-09-02 08:49:59 +02:00
Michael Niedermayer
1da5ab751f avcodec/ffv1dec: move initial_states init to init_thread_copy()
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit c72cca5a44)
2013-09-02 08:47:50 +02:00
Michael Niedermayer
237ef710a1 avformat/lxfdec: use a parser to parse video frame headers
lxf needs a parser (or would need to set a few fields explicitly).
Fixes Ticket2917

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 8349be852b)
2013-09-01 09:59:14 +02:00
Michael Niedermayer
be47e93134 avcodec/h264: set er.ref_count earlier
Fixes Ticket2910

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 93cf7b0195)
2013-09-01 09:58:55 +02:00
Michael Niedermayer
1821c849da avformat/avidec: match first index and first packet size=0 handling
Fixes Ticket2861

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 227a0eb5a9)

Conflicts:
	libavformat/avidec.c
2013-08-31 09:47:23 +02:00
Michael Niedermayer
a4522ae516 avcodec/pngdsp: fix (un)signed type in end comparison
Fixes out of array accesses
Fixes Ticket2919

Found_by: ami_stuff
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 86736f59d6)
2013-08-30 23:35:47 +02:00
Michael Niedermayer
c7ee4bc016 ffv1dec: check that global parameters don't change in version 0/1
Such changes are neither allowed nor supported.

Fixes Ticket2906

Found-by: ami_stuff
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 547d690d67)
2013-08-30 11:23:25 +02:00
Lukasz Marek
0cabb95811 lavf/ftp: fix possible crash
(cherry picked from commit f3ace37a3b)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-08-29 02:16:41 +02:00
Michael Niedermayer
a1b32533aa jpeg2000: fix dereferencing invalid pointers
Found-by: Laurent Butti <laurentb@gmail.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 912ce9dd20)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-08-29 02:13:25 +02:00
Michael Niedermayer
e2eb0d2326 avcodec/jpeg2000dec: Check cdx/y values more carefully
Some invalid values were not handled correctly in the later pixel
format matching code.
Fixes out of array accesses
Fixes Ticket2848

Found-by: Piotr Bandurski <ami_stuff@o2.pl>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 8bb11c3ca7)

Conflicts:

	libavcodec/jpeg2000dec.c
2013-08-29 02:13:25 +02:00
Michael Niedermayer
acac6b0d69 avcodec/rpza: Perform pointer advance and checks before using the pointers
Fixes out of array accesses
Fixes Ticket2850

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 3819db745d)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-08-29 02:13:25 +02:00
Michael Niedermayer
deb8d0d6a1 avcodec/flashsv: check diff_start/height
Fixes out of array accesses
Fixes Ticket2844

Found-by: ami_stuff
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 880c73cd76)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-08-29 02:13:25 +02:00
Michael Niedermayer
3e817d91ef jpeg2000: check log2_cblk dimensions
Fixes out of array access
Fixes Ticket2895

Found-by: Piotr Bandurski <ami_stuff@o2.pl>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 9a271a9368)
2013-08-25 15:57:13 +02:00
Michael Niedermayer
e231f0fade swr/rematrix: Fix handling of AV_CH_LAYOUT_STEREO_DOWNMIX output
Fixes Ticket2859

Note: test cases related to the downmix channels are welcome.
(I'd like to make sure this is working correctly now, as it obviously didn't
work before ...)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit c56d4dab03)
2013-08-20 18:39:23 +02:00
Michael Niedermayer
f0f55e6726 swr: clean layouts before checking sanity
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 6dfffe9200)
2013-08-20 18:39:19 +02:00
Michael Niedermayer
61dc8494d7 movenc: ilbc needs audio_vbr set.
Without this the block_align or bitrate value is not available to the decoder

Fixes Ticket2858

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 3d64845600)
2013-08-20 18:38:05 +02:00
Stephen Hutchinson
423b87d621 Revert "doc/RELEASE_NOTES: add a note about AVISynth"
This reverts commit 3aa2257d24.
2013-08-18 23:14:04 -04:00
Stephen Hutchinson
8d9568b4a1 avisynth: Support video input from AviSynth 2.5 properly.
Uses the 2.5 compatibility header included with the variant of
FFMS2 that uses AviSynth's C-interface. A copy of this header is
now provided in compat/avisynth.

avs_get_row_size_p and avs_get_height_p changed between versions
2.5 and 2.6. Since the avisynth_c.h header that avformat uses
assumes AviSynth 2.6, it would cause 2.5 to crash if given any
kind of real video (the Version() function was known to work,
though).

AvxSynth was unaffected by this issue because, despite being based
on AviSynth 2.5.8 and using 2.5.8's interface version number of 3,
it actually uses 2.6's versions of these functions.

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-08-18 23:13:27 -04:00
Michael Niedermayer
acf511de34 avcodec/g2meet: fix src pointer checks in kempf_decode_tile()
Fixes Ticket2842

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 2960576378)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-08-11 00:14:17 +02:00
Michael Niedermayer
fd2951bb53 update for 2.0.1
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-08-10 23:58:48 +02:00
Michael Niedermayer
fa004f4854 MAINTAINERS: add Alexander Strasser for the server
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit c11c180132)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-08-10 23:58:48 +02:00
Michael Niedermayer
1155bdb754 MAINTAINERS: remove myself from movenc, 2 maintainers should be enough
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 55a88daf6f)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-08-10 23:58:48 +02:00
Matthieu Bouron
01838c5732 MAINTAINERS: add myself as maintainer for lavf/aiff* and lavf/movenc.c
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 88a1ff2233)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-08-10 23:58:48 +02:00
Michael Niedermayer
f09f33031b MAINTAINERS: alphabetical sort
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 81f4d55c36)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-08-10 23:58:48 +02:00
Michael Niedermayer
b2a9f64e1b MAINTAINERS: Add some maintainers for parts of libavutil
Developers added are active and appear in the copyright of the specified files.

If anyone wants to maintain anything else, send a patch that adds you to
MAINTAINERS.

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit b45b1d7af9)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-08-10 23:58:48 +02:00
Michael Niedermayer
15ea618ef6 MAINTAINERS: order libavutil entries alphabetically
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 48188a5120)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-08-10 23:58:48 +02:00
Michael Niedermayer
ec33423273 MAINTAINERS: drop 1.1 from the releases that i maintain
There seems to be no need to continue maintaining it; people can easily
upgrade to 1.2.

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 5fe5f020b6)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-08-10 23:58:48 +02:00
Michael Niedermayer
d5dd54df69 MAINTAINERS: add myself as maintainer for the interface code to swresample & swscale in libavfilter
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 5ad4e29337)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-08-10 23:58:48 +02:00
Stephen Hutchinson
e7a4c34e7c avisynth: Exit gracefully when trying to serve video from v2.5.8.
'Fixes' ticket #2526 insofar as it stops 2.5.8 from crashing and
tells the user to upgrade to 2.6 if they want to make video input
work. A real solution to #2526 would be to get video input from
2.5.8 to work right.

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 9db353bc47)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-08-10 21:45:11 +02:00
Stephen Hutchinson
2881bfbfd6 avisynth: Cosmetics
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit f277d6bf42)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-08-10 21:45:03 +02:00
Paul B Mahol
80fb38153e sgidec: safer check for buffer overflow
Signed-off-by: Paul B Mahol <onemda@gmail.com>
(cherry picked from commit 86e722ab97)

Conflicts:

	libavcodec/sgidec.c

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-08-10 02:03:37 +02:00
Paul B Mahol
b79f337f8a ttaenc: fix packet size
Signed-off-by: Paul B Mahol <onemda@gmail.com>
(cherry picked from commit bc2187cfdb)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-08-10 02:01:40 +02:00
Michael Niedermayer
f593ac1c21 matroskaenc: simplify mkv_check_tag()
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 066111bf19)
2013-08-09 16:28:39 -03:00
James Almer
baf92305a6 lavf/matroskaenc: Check for valid metadata before creating tags
Tags must have at least one SimpleTag element to be spec conformant.
Updated lavf-mkv and seek-lavf-mkv FATE references as the tests were affected by
this.

Fixes ticket #2785

Signed-off-by: James Almer <jamrial@gmail.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 088ed53146)
2013-08-09 16:28:31 -03:00
Michael Niedermayer
d6d168e87b avfilter/vf_separatefields: fix ;;
Found-by: llogan
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 74561680cd)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-08-09 13:58:45 +02:00
Michael Niedermayer
50f9c4acc3 avformat/paf: Fix integer overflow and out of array read
Found-by:  Laurent Butti <laurentb@gmail.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit f58cd2867a)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-08-09 13:53:27 +02:00
Michael Niedermayer
211374e52a avutil/mem: Fix flipped condition
Fixes return code and later null pointer dereference

Found-by: Laurent Butti <laurentb@gmail.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit c94f9e8542)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-08-09 13:53:25 +02:00
Michael Niedermayer
1bf2461765 avfilter: fix plane validity checks
Fixes out of array accesses

(cherry picked from commit e43a0a232d)
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-08-09 13:53:14 +02:00
Michael Niedermayer
64444cd578 avcodec/kmvc: fix MV checks
Fixes Ticket2813
Fixes regression since 70b5583

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 3cd8aaa2b2)
2013-07-31 02:54:26 +02:00
Paul B Mahol
0047a31090 Revert "pnm: remove nonsense code"
Breaks decoding pgms with 255 < maxval < 65535.

Found-by: Carl Eugen Hoyos <cehoyos@ag.or.at>.

This reverts commit a0348d0966.
(cherry picked from commit 768e40b451)
2013-07-29 00:05:01 +02:00
Michael Niedermayer
d73ce6cb56 jpeg2000dec: Support non subsampled 9-16bit planar pixel formats
This applies changes similar to fc6de70c44
to the >8bit codepath

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 1434df3b93)
2013-07-28 04:09:21 +02:00
Michael Niedermayer
9a6d3eee59 jpeg2000dec: silence unused variable warning
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit db33010483)
2013-07-28 04:09:11 +02:00
Michael Niedermayer
8b221d60fa jpeg2000dec: Support non subsampled 8bit planar pixel formats
Fixes file2.jp2

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit fc6de70c44)
2013-07-28 04:08:10 +02:00
Michael Niedermayer
9da9b36435 jpeg2000dec: parse CDEF
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>

Conflicts:

	libavcodec/jpeg2000dec.c

Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 99de97cabf)
2013-07-28 04:07:36 +02:00
Carl Eugen Hoyos
09b33f9a82 Fix pix_fmt detection in the native jpeg2000 decoder.
Based on b7a928b by Michael Bradshaw.
Fixes ticket #2683.

Reviewed-by: Nicolas Bertrand
(cherry picked from commit b39a6bbe7f)
2013-07-28 04:07:20 +02:00
Michael Niedermayer
fa6b6dad3d jpeg2000: fix overflow in dequantization
Fixes decoding of file generated with:
ffmpeg -f lavfi -i smptehdbars=hd720 -pix_fmt rgb48 /tmp/o.jp2

Reviewed-by: Nicolas BERTRAND <nicoinattendu@gmail.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit f57119b8e5)
2013-07-28 04:07:10 +02:00
Nicolas Bertrand
e0d88cfd18 jpeg2000: Initialize mqc arrays only once
Increases encoding and decoding speed

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit dd1382ac95)
2013-07-28 04:07:02 +02:00
Michael Niedermayer
18043e3d22 avformat/dtsdec: Improve probe, reject things looking like analog signals
Fixes Ticket2810

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 6663205338)
2013-07-26 12:18:46 +02:00
Rémi Denis-Courmont
ccf470fdb6 mpeg12: Ignore slice threading if hwaccel is active
Slice threading does not work with hardware acceleration, as decoding
is per-picture.  This fixes Bugzilla #542.

Signed-off-by: Diego Biurrun <diego@biurrun.de>
(cherry picked from commit 93a51984a2)

Conflicts:
	libavcodec/mpeg12dec.c
2013-07-26 11:35:36 +02:00
Michael Niedermayer
8f9bc6f2ce swscale/input: fix 16bit gbrp input
Fixes Ticket2793

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit a4b55bbb6f)
2013-07-19 14:50:16 +02:00
Lukasz Marek
fcab45f39b ftp: fix interrupt callback misuse
The FTP protocol used the interrupt callback to simulate non-blocking
operation, which is a misuse of this callback.

This commit makes the FTP protocol fully blocking and removes the
invalid usage of the interrupt callback.

It also adds support for multiline responses delimited with dashes.
(cherry picked from commit 247e658784)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-07-18 01:54:24 +02:00
Michael Niedermayer
bc44d06c3d avformat/matroskadec: Detect conflicting sample rate/default_duration
Fixes Ticket2508

Thanks-to: Moritz Bunkus
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 6158a3bcdf)
2013-07-16 11:43:21 +02:00
Michael Niedermayer
7740e36a89 mjpegdec: Fix used quant index for gbr
Fixes Ticket1651

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 15cee5e562)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-07-11 12:06:57 +02:00
Michael Niedermayer
6127f792f9 mjpegdec: initialize source variables before gbr remap
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 94e86ae15a)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-07-11 12:06:57 +02:00
Carl Eugen Hoyos
fd2cf9c45d Suggest recompilation with openssl or gnutls if the https protocol is not found.
Fixes ticket #2765.
(cherry picked from commit 1db88c33f2)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-07-11 12:06:57 +02:00
Carl Eugen Hoyos
fc3dec8b62 lavf/utils.c: Avoid a null pointer dereference on oom after duration_error allocation.
(cherry picked from commit c9eb5c9751)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-07-11 12:06:56 +02:00
Luca Barbato
a7315116dd wmavoice: conceal clearly corrupted blocks
Reported-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind
(cherry picked from commit d14a26edb7)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-07-11 12:06:56 +02:00
Michael Niedermayer
37268dcc86 avcodec/qdm2: initialize sign_bits
Fixes non-deterministic output

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 8f09957194)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-07-11 12:03:30 +02:00
Michael Niedermayer
ea28e74205 avcodec/qdm2: store bits in an integer instead of float variable
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit fbe159e850)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-07-11 12:02:56 +02:00
Piotr Bandurski
c1c84f0a55 avformat/utils: avformat_find_stream_info set value for ret in case of oom
Without it, FFmpeg didn't display any error message when an OOM event occurred.

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit b050956334)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-07-11 12:02:33 +02:00
Paul B Mahol
56bf38859b lavfi/aconvert: unbreak
Even if it's deprecated, it should still work correctly.

Signed-off-by: Paul B Mahol <onemda@gmail.com>
(cherry picked from commit bc95b94289)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-07-11 12:02:00 +02:00
Piotr Bandurski
1cda4aa1e0 avformat/utils: avformat_find_stream_info fix a crash in case of oom
fixes ticket #2767

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit ccf9211e29)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-07-11 12:00:39 +02:00
Michael Niedermayer
9711b52739 avfilter/af_earwax: Fix out of array accesses on odd packets
Found-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 0a3a0edd52)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2013-07-10 19:07:50 +02:00
3456 changed files with 101338 additions and 280861 deletions

.gitignore

@@ -15,7 +15,6 @@
*.pdb
*.so
*.so.*
*.swp
*.ver
*-example
*-test
@@ -28,6 +27,7 @@
/ffserver
/config.*
/coverage.info
/version.h
/doc/*.1
/doc/*.3
/doc/*.html
@@ -35,34 +35,26 @@
/doc/config.texi
/doc/avoptions_codec.texi
/doc/avoptions_format.texi
/doc/doxy/html/
/doc/examples/avio_reading
/doc/examples/decoding_encoding
/doc/examples/demuxing_decoding
/doc/examples/extract_mvs
/doc/examples/filter_audio
/doc/examples/demuxing
/doc/examples/filtering_audio
/doc/examples/filtering_video
/doc/examples/metadata
/doc/examples/muxing
/doc/examples/pc-uninstalled
/doc/examples/remuxing
/doc/examples/resampling_audio
/doc/examples/scaling_video
/doc/examples/transcode_aac
/doc/examples/transcoding
/doc/fate.txt
/doc/doxy/html/
/doc/print_options
/lcov/
/libavcodec/*_tablegen
/libavcodec/*_tables.c
/libavcodec/*_tables.h
/libavutil/avconfig.h
/libavutil/ffversion.h
/tests/audiogen
/tests/base64
/tests/data/
/tests/pixfmts.mak
/tests/rotozoom
/tests/tiny_psnr
/tests/tiny_ssim
@@ -71,7 +63,6 @@
/tools/aviocat
/tools/ffbisect
/tools/bisect.need
/tools/crypto_bench
/tools/cws2fws
/tools/fourcc2pixfmt
/tools/ffescape
@@ -84,5 +75,4 @@
/tools/qt-faststart
/tools/trasher
/tools/seek_print
/tools/uncoded_frame
/tools/zmqsend

Changelog

@@ -1,184 +1,6 @@
Entries are sorted chronologically from oldest to youngest within each release,
releases are sorted from youngest to oldest.
version <next>:
version 2.4.2:
- avcodec/on2avc: Check number of channels
- avcodec/hevc: fix chroma transform_add size
- avcodec/h264: Check mode before considering mixed mode intra prediction
- avformat/mpegts: use a padded buffer in read_sl_header()
- avformat/mpegts: Check desc_len / get8() return code
- avcodec/vorbisdec: Fix off by 1 error in ptns_to_read
- sdp: add support for H.261
- avcodec/svq3: Do not memcpy AVFrame
- avcodec/smc: fix off by 1 error
- avcodec/qpeg: fix off by 1 error in MV bounds check
- avcodec/gifdec: factorize interleave end handling out
- avcodec/cinepak: fix integer underflow
- avcodec/pngdec: Check bits per pixel before setting monoblack pixel format
- avcodec/pngdec: Calculate MPNG bytewidth more defensively
- avcodec/tiff: more completely check bpp/bppcount
- avcodec/mmvideo: Bounds check 2nd line of HHV Intra blocks
- avcodec/h263dec: Fix decoding messenger.h263
- avcodec/utils: Add case for jv to avcodec_align_dimensions2()
- avcodec/mjpegdec: check bits per pixel for changes similar to dimensions
- avcodec/jpeglsdec: Check run value more completely in ls_decode_line()
- avformat/hlsenc: export inner muxer timebase
- configure: add noexecstack to linker options if supported.
- avcodec/ac3enc_template: fix out of array read
- avutil/x86/cpu: fix cpuid sub-leaf selection
- avformat/img2dec: enable generic seeking for image pipes
- avformat/img2dec: initialize pkt->pos for image pipes
- avformat/img2dec: pass error code and signal EOF
- avformat/img2dec: fix error code at EOF for pipes
- libavutil/opt: fix av_opt_set_channel_layout() to access correct memory address
- tests/fate-run.sh: Cat .err file in case of error with V>0
- avformat/riffenc: Filter out "BottomUp" in ff_put_bmp_header()
- avcodec/webp: fix default palette color 0xff000000 -> 0x00000000
- avcodec/asvenc: fix AAN scaling
- Fix compile error on arm4/arm5 platform
version 2.4.1:
- swscale: Allow chroma samples to be above and to the left of luma samples
- avcodec/libilbc: support for latest git of libilbc
- avcodec/webp: treat out-of-bound palette index as translucent black
- vf_deshake: rename Transform.vector to Transform.vec to avoid compiler confusion
- apetag: Fix APE tag size check
- tools/crypto_bench: fix build when AV_READ_TIME is unavailable
version 2.4:
- Icecast protocol
- ported lenscorrection filter from frei0r filter
- large optimizations in dctdnoiz to make it usable
- ICY metadata are now requested by default with the HTTP protocol
- support for using metadata in stream specifiers in fftools
- LZMA compression support in TIFF decoder
- support for H.261 RTP payload format (RFC 4587)
- HEVC/H.265 RTP payload format (draft v6) depacketizer
- added codecview filter to visualize information exported by some codecs
- Matroska 3D support through side data
- HTML generation using texi2html is deprecated in favor of makeinfo/texi2any
- silenceremove filter
version 2.3:
- AC3 fixed-point decoding
- shuffleplanes filter
- subfile protocol
- Phantom Cine demuxer
- replaygain data export
- VP7 video decoder
- Alias PIX image encoder and decoder
- Improvements to the BRender PIX image decoder
- Improvements to the XBM decoder
- QTKit input device
- improvements to OpenEXR image decoder
- support decoding 16-bit RLE SGI images
- GDI screen grabbing for Windows
- alternative rendition support for HTTP Live Streaming
- AVFoundation input device
- Direct Stream Digital (DSD) decoder
- Magic Lantern Video (MLV) demuxer
- On2 AVC (Audio for Video) decoder
- support for decoding through DXVA2 in ffmpeg
- libbs2b-based stereo-to-binaural audio filter
- libx264 reference frames count limiting depending on level
- native Opus decoder
- display matrix export and rotation API
- WebVTT encoder
- showcqt multimedia filter
- zoompan filter
- signalstats filter
- hqx filter (hq2x, hq3x, hq4x)
- flanger filter
- Image format auto-detection
- LRC demuxer and muxer
- Samba protocol (via libsmbclient)
- WebM DASH Manifest muxer
- libfribidi support in drawtext
version 2.2:
- HNM version 4 demuxer and video decoder
- Live HDS muxer
- setsar/setdar filters now support variables in ratio expressions
- elbg filter
- string validation in ffprobe
- support for decoding through VDPAU in ffmpeg (the -hwaccel option)
- complete Voxware MetaSound decoder
- remove mp3_header_compress bitstream filter
- Windows resource files for shared libraries
- aeval filter
- stereoscopic 3d metadata handling
- WebP encoding via libwebp
- ATRAC3+ decoder
- VP8 in Ogg demuxing
- side & metadata support in NUT
- framepack filter
- XYZ12 rawvideo support in NUT
- Exif metadata support in WebP decoder
- OpenGL device
- Use metadata_header_padding to control padding in ID3 tags (currently used in
MP3, AIFF, and OMA files), FLAC header, and the AVI "junk" block.
- Mirillis FIC video decoder
- Support DNx444
- libx265 encoder
- dejudder filter
- Autodetect VDA like all other hardware accelerations
- aliases and defaults for Ogg subtypes (opus, spx)
version 2.1:
- aecho filter
- perspective filter ported from libmpcodecs
- ffprobe -show_programs option
- compand filter
- RTMP seek support
- when transcoding with ffmpeg (i.e. not streamcopying), -ss is now accurate
even when used as an input option. Previous behavior can be restored with
the -noaccurate_seek option.
- ffmpeg -t option can now be used for inputs, to limit the duration of
data read from an input file
- incomplete Voxware MetaSound decoder
- read EXIF metadata from JPEG
- DVB teletext decoder
- phase filter ported from libmpcodecs
- w3fdif filter
- Opus support in Matroska
- FFV1 version 1.3 is stable and no longer experimental
- FFV1: YUVA(444,422,420) 9, 10 and 16 bit support
- changed DTS stream id in lavf mpeg ps muxer from 0x8a to 0x88, to be
more consistent with other muxers.
- adelay filter
- pullup filter ported from libmpcodecs
- ffprobe -read_intervals option
- Lossless and alpha support for WebP decoder
- Error Resilient AAC syntax (ER AAC LC) decoding
- Low Delay AAC (ER AAC LD) decoding
- mux chapters in ASF files
- SFTP protocol (via libssh)
- libx264: add ability to encode in YUVJ422P and YUVJ444P
- Fraps: use BT.709 colorspace by default for yuv, as reference fraps decoder does
- make decoding alpha optional for prores, ffv1 and vp6 by setting
the skip_alpha flag.
- ladspa wrapper filter
- native VP9 decoder
- dpx parser
- max_error_rate parameter in ffmpeg
- PulseAudio output device
- ReplayGain scanner
- Enhanced Low Delay AAC (ER AAC ELD) decoding (no LD SBR support)
- Linux framebuffer output device
- HEVC decoder
- raw HEVC, HEVC in MOV/MP4, HEVC in Matroska, HEVC in MPEG-TS demuxing
- mergeplanes filter
version 2.0:
- curves filter
@@ -191,7 +13,7 @@ version 2.0:
- 10% faster aac encoding on x86 and MIPS
- sine audio filter source
- WebP demuxing and decoding support
- ffmpeg options -filter_script and -filter_complex_script, which allow a
- new ffmpeg options -filter_script and -filter_complex_script, which allow a
filtergraph description to be read from a file
- OpenCL support
- audio phaser filter
@@ -199,7 +21,7 @@ version 2.0:
- libquvi demuxer
- uniform options syntax across all filters
- telecine filter
- interlace filter
- new interlace filter
- smptehdbars source
- inverse telecine filters (fieldmatch and decimate)
- colorbalance filter
@@ -317,7 +139,7 @@ version 1.1:
- JSON captions for TED talks decoding support
- SOX Resampler support in libswresample
- aselect filter
- SGI RLE 8-bit / Silicon Graphics RLE 8-bit video decoder
- SGI RLE 8-bit decoder
- Silicon Graphics Motion Video Compressor 1 & 2 decoder
- Silicon Graphics Movie demuxer
- apad filter
@@ -361,9 +183,7 @@ version 1.0:
- RTMPE protocol support
- RTMPTE protocol support
- showwaves and showspectrum filter
- LucasArts SMUSH SANM playback support
- LucasArts SMUSH VIMA audio decoder (ADPCM)
- LucasArts SMUSH demuxer
- LucasArts SMUSH playback support
- SAMI, RealText and SubViewer demuxers and decoders
- Heart Of Darkness PAF playback support
- iec61883 device
@@ -487,7 +307,6 @@ version 0.10:
- ffwavesynth decoder
- aviocat tool
- ffeval tool
- support encoding and decoding 4-channel SGI images
version 0.9:
@@ -788,7 +607,7 @@ version 0.6:
- LPCM support in MPEG-TS (HDMV RID as found on Blu-ray disks)
- WMA Pro decoder
- Core Audio Format demuxer
- ATRAC1 decoder
- Atrac1 decoder
- MD STUDIO audio demuxer
- RF64 support in WAV demuxer
- MPEG-4 Audio Lossless Coding (ALS) decoder
@@ -888,7 +707,7 @@ version 0.5:
- MXF demuxer
- VC-1/WMV3/WMV9 video decoder
- MacIntel support
- AviSynth support
- AVISynth support
- VMware video decoder
- VP5 video decoder
- VP6 video decoder
@@ -916,7 +735,7 @@ version 0.5:
- Interplay C93 demuxer and video decoder
- Bethsoft VID demuxer and video decoder
- CRYO APC demuxer
- ATRAC3 decoder
- Atrac3 decoder
- V.Flash PTX decoder
- RoQ muxer, RoQ audio encoder
- Renderware TXD demuxer and decoder

INSTALL (new file, 15 lines)

@@ -0,0 +1,15 @@
1) Type './configure' to create the configuration. A list of configure
options is printed by running 'configure --help'.
'configure' can be launched from a directory different from the FFmpeg
sources to build the objects out of tree. To do this, use an absolute
path when launching 'configure', e.g. '/ffmpegdir/ffmpeg/configure'.
2) Then type 'make' to build FFmpeg. GNU Make 3.81 or later is required.
3) Type 'make install' to install all binaries and libraries you built.
NOTICE
- Non-system dependencies (e.g. libx264, libvpx) are disabled by default.


@@ -1,17 +0,0 @@
#Installing FFmpeg:
1. Type `./configure` to create the configuration. A list of configure
options is printed by running `configure --help`.
`configure` can be launched from a directory different from the FFmpeg
sources to build the objects out of tree. To do this, use an absolute
path when launching `configure`, e.g. `/ffmpegdir/ffmpeg/configure`.
2. Then type `make` to build FFmpeg. GNU Make 3.81 or later is required.
3. Type `make install` to install all binaries and libraries you built.
NOTICE
------
- Non-system dependencies (e.g. libx264, libvpx) are disabled by default.


@@ -1,4 +1,4 @@
#FFmpeg:
FFmpeg:
Most files in FFmpeg are under the GNU Lesser General Public License version 2.1
or later (LGPL v2.1+). Read the file COPYING.LGPLv2.1 for details. Some other
@@ -10,12 +10,11 @@ version 2 or later (GPL v2+). See the file COPYING.GPLv2 for details. None of
these parts are used by default, you have to explicitly pass --enable-gpl to
configure to activate them. In this case, FFmpeg's license changes to GPL v2+.
Specifically, the GPL parts of FFmpeg are:
Specifically, the GPL parts of FFmpeg are
- libpostproc
- libmpcodecs
- optional x86 optimizations in the files
libavcodec/x86/flac_dsp_gpl.asm
libavcodec/x86/idct_mmx.c
- libutvideo encoding/decoding wrappers in
libavcodec/libutvideo*.cpp
@@ -34,21 +33,20 @@ Specifically, the GPL parts of FFmpeg are:
- vf_geq.c
- vf_histeq.c
- vf_hqdn3d.c
- vf_interlace.c
- vf_hue.c
- vf_kerndeint.c
- vf_mcdeint.c
- vf_mp.c
- vf_noise.c
- vf_owdenoise.c
- vf_perspective.c
- vf_phase.c
- vf_pp.c
- vf_pullup.c
- vf_sab.c
- vf_smartblur.c
- vf_spp.c
- vf_stereo3d.c
- vf_super2xsai.c
- vf_tinterlace.c
- vf_yadif.c
- vsrc_mptestsrc.c
Should you, for whatever reason, prefer to use version 3 of the (L)GPL, then
@@ -81,7 +79,6 @@ The following libraries are under GPL:
- libutvideo
- libvidstab
- libx264
- libx265
- libxavs
- libxvid
When combining them with FFmpeg, FFmpeg needs to be licensed as GPL as well by


@@ -31,7 +31,7 @@ ffprobe:
ffprobe.c Stefano Sabatini
ffserver:
ffserver.c Reynaldo H. Verdejo Pinochet
ffserver.c, ffserver.h Baptiste Coudurier
Commandline utility code:
cmdutils.c, cmdutils.h Michael Niedermayer
@@ -43,26 +43,16 @@ QuickTime faststart:
Miscellaneous Areas
===================
documentation Stefano Sabatini, Mike Melanson, Timothy Gu
build system (configure, makefiles) Diego Biurrun, Mans Rullgard
project server Árpád Gereöffy, Michael Niedermayer, Reimar Doeffinger, Alexander Strasser
documentation Mike Melanson
website Robert Swain, Lou Logan
build system (configure,Makefiles) Diego Biurrun, Mans Rullgard
project server Árpád Gereöffy, Michael Niedermayer, Reimar Döffinger, Alexander Strasser
mailinglists Michael Niedermayer, Baptiste Coudurier, Lou Logan
presets Robert Swain
metadata subsystem Aurelien Jacobs
release management Michael Niedermayer
Communication
=============
website Deby Barbara Lepage
fate.ffmpeg.org Timothy Gu
Trac bug tracker Alexander Strasser, Michael Niedermayer, Carl Eugen Hoyos, Lou Logan
mailing lists Michael Niedermayer, Baptiste Coudurier, Lou Logan
Google+ Paul B Mahol, Michael Niedermayer, Alexander Strasser
Twitter Lou Logan
Launchpad Timothy Gu
libavutil
=========
@@ -75,17 +65,13 @@ Other:
bprint Nicolas George
bswap.h
des Reimar Doeffinger
dynarray.h Nicolas George
eval.c, eval.h Michael Niedermayer
float_dsp Loren Merritt
hash Reimar Doeffinger
intfloat* Michael Niedermayer
integer.c, integer.h Michael Niedermayer
lzo Reimar Doeffinger
mathematics.c, mathematics.h Michael Niedermayer
mem.c, mem.h Michael Niedermayer
opencl.c, opencl.h Wei Gao
opt.c, opt.h Michael Niedermayer
rational.c, rational.h Michael Niedermayer
rc4 Reimar Doeffinger
ripemd.c, ripemd.h James Almer
@@ -100,6 +86,10 @@ Generic Parts:
avcodec.h Michael Niedermayer
utility code:
utils.c Michael Niedermayer
mem.c Michael Niedermayer
opt.c, opt.h Michael Niedermayer
arithmetic expression evaluator:
eval.c Michael Niedermayer
audio and video frame extraction:
parser.c Michael Niedermayer
bitstream reading:
@@ -130,9 +120,6 @@ Generic Parts:
libpostproc/* Michael Niedermayer
table generation:
tableprint.c, tableprint.h Reimar Doeffinger
fixed point FFT:
fft* Zeljko Lukac
Text Subtitles Clément Bœsch
Codecs:
4xm.c Michael Niedermayer
@@ -146,7 +133,6 @@ Codecs:
ass* Aurelien Jacobs
asv* Michael Niedermayer
atrac3* Benjamin Larsson
atrac3plus* Maxim Poliakovski
bgmc.c, bgmc.h Thilo Borgmann
bink.c Kostya Shishkov
binkaudio.c Peter Ross
@@ -155,7 +141,6 @@ Codecs:
cdxl.c Paul B Mahol
celp_filters.* Vitor Sessak
cinepak.c Roberto Togni
cinepakenc.c Rl / Aetey G.T. AB
cljr Alex Beregszaszi
cllc.c Derek Buitenhuis
cook.c, cookdata.h Benjamin Larsson
@@ -166,13 +151,10 @@ Codecs:
dnxhd* Baptiste Coudurier
dpcm.c Mike Melanson
dv.c Roman Shaposhnik
dvbsubdec.c Anshul Maheshwari
dxa.c Kostya Shishkov
eacmv*, eaidct*, eat* Peter Ross
exif.c, exif.h Thilo Borgmann
ffv1* Michael Niedermayer
ffv1.c Michael Niedermayer
ffwavesynth.c Nicolas George
fic.c Derek Buitenhuis
flac* Justin Ruggles
flashsv* Benjamin Larsson
flicvideo.c Mike Melanson
@@ -182,7 +164,7 @@ Codecs:
h261* Michael Niedermayer
h263* Michael Niedermayer
h264* Loren Merritt, Michael Niedermayer
huffyuv* Michael Niedermayer, Christophe Gisquet
huffyuv.c Michael Niedermayer
idcinvideo.c Mike Melanson
imc* Benjamin Larsson
indeo2* Kostya Shishkov
@@ -205,11 +187,8 @@ Codecs:
libtheoraenc.c David Conrad
libutvideo* Derek Buitenhuis
libvorbis.c David Conrad
libvpx* James Zern
libx264.c Mans Rullgard, Jason Garrett-Glaser
libx265.c Derek Buitenhuis
libxavs.c Stefan Gehrer
libzvbi-teletextdec.c Marton Balint
loco.c Kostya Shishkov
lzo.h, lzo.c Reimar Doeffinger
mdec.c Michael Niedermayer
@@ -242,12 +221,12 @@ Codecs:
rtjpeg.c, rtjpeg.h Reimar Doeffinger
rv10.c Michael Niedermayer
rv3* Kostya Shishkov
rv4* Kostya Shishkov, Christophe Gisquet
rv4* Kostya Shishkov
s3tc* Ivo van Poorten
smacker.c Kostya Shishkov
smc.c Mike Melanson
smvjpegdec.c Ash Hughes
snow* Michael Niedermayer, Loren Merritt
snow.c Michael Niedermayer, Loren Merritt
sonic.c Alex Beregszaszi
srt* Aurelien Jacobs
sunrast.c Ivo van Poorten
@@ -266,18 +245,17 @@ Codecs:
v410*.c Derek Buitenhuis
vb.c Kostya Shishkov
vble.c Derek Buitenhuis
vc1* Kostya Shishkov, Christophe Gisquet
vc1* Kostya Shishkov
vcr1.c Michael Niedermayer
vda_h264_dec.c Xidorn Quan
vima.c Paul B Mahol
vmnc.c Kostya Shishkov
vorbisdec.c Denes Balatoni, David Conrad
vorbisenc.c Oded Shimon
vorbis_dec.c Denes Balatoni, David Conrad
vorbis_enc.c Oded Shimon
vp3* Mike Melanson
vp5 Aurelien Jacobs
vp6 Aurelien Jacobs
vp8 David Conrad, Jason Garrett-Glaser, Ronald Bultje
vp9 Ronald Bultje, Clément Bœsch
vqavideo.c Mike Melanson
wavpack.c Kostya Shishkov
wmaprodec.c Sascha Sommer
@@ -286,7 +264,6 @@ Codecs:
wnv1.c Kostya Shishkov
xan.c Mike Melanson
xbm* Paul B Mahol
xface Stefano Sabatini
xl.c Kostya Shishkov
xvmc.c Ivan Kalvachev
xwd* Paul B Mahol
@@ -308,20 +285,11 @@ libavdevice
libavdevice/avdevice.h
avfoundation.m Thilo Borgmann
dshow.c Roger Pack (CC rogerdpack@gmail.com)
fbdev_enc.c Lukasz Marek
gdigrab.c Roger Pack (CC rogerdpack@gmail.com)
dshow.c Roger Pack
iec61883.c Georg Lippitsch
lavfi Stefano Sabatini
libdc1394.c Roman Shaposhnik
opengl_enc.c Lukasz Marek
pulse_audio_enc.c Lukasz Marek
qtkit.m Thilo Borgmann
sdl Stefano Sabatini
v4l2.c Giorgio Vazzana
v4l2.c Luca Abeni
vfwcap.c Ramiro Polla
xv.c Lukasz Marek
libavfilter
===========
@@ -330,39 +298,14 @@ Generic parts:
graphdump.c Nicolas George
Filters:
af_adelay.c Paul B Mahol
af_aecho.c Paul B Mahol
af_afade.c Paul B Mahol
af_amerge.c Nicolas George
af_aphaser.c Paul B Mahol
af_aresample.c Michael Niedermayer
af_astats.c Paul B Mahol
af_astreamsync.c Nicolas George
af_atempo.c Pavel Koshevoy
af_biquads.c Paul B Mahol
af_compand.c Paul B Mahol
af_ladspa.c Paul B Mahol
af_pan.c Nicolas George
af_silenceremove.c Paul B Mahol
avf_avectorscope.c Paul B Mahol
avf_showcqt.c Muhammad Faiz
vf_blend.c Paul B Mahol
vf_colorbalance.c Paul B Mahol
vf_dejudder.c Nicholas Robbins
vf_delogo.c Jean Delvare (CC <khali@linux-fr.org>)
vf_drawbox.c/drawgrid Andrey Utkin
vf_extractplanes.c Paul B Mahol
vf_histogram.c Paul B Mahol
vf_hqx.c Clément Bœsch
vf_idet.c Pascal Massimino
vf_il.c Paul B Mahol
vf_lenscorrection.c Daniel Oberhoff
vf_mergeplanes.c Paul B Mahol
vf_psnr.c Paul B Mahol
vf_scale.c Michael Niedermayer
vf_separatefields.c Paul B Mahol
vf_stereo3d.c Paul B Mahol
vf_telecine.c Paul B Mahol
vf_yadif.c Michael Niedermayer
Sources:
@@ -408,7 +351,6 @@ Muxers/Demuxers:
flvdec.c, flvenc.c Michael Niedermayer
gxf.c Reimar Doeffinger
gxfenc.c Baptiste Coudurier
hls.c Anssi Hannula
idcin.c Mike Melanson
idroqdec.c Mike Melanson
iff.c Jaikrishnan Menon
@@ -426,7 +368,6 @@ Muxers/Demuxers:
matroska.c Aurelien Jacobs
matroskadec.c Aurelien Jacobs
matroskaenc.c David Conrad
matroska subtitles (matroskaenc.c) John Peebles
metadata* Aurelien Jacobs
mgsts.c Paul B Mahol
microdvd* Aurelien Jacobs
@@ -436,15 +377,14 @@ Muxers/Demuxers:
mpc.c Kostya Shishkov
mpeg.c Michael Niedermayer
mpegenc.c Michael Niedermayer
mpegts.c Marton Balint
mpegtsenc.c Baptiste Coudurier
mpegts* Baptiste Coudurier
msnwc_tcp.c Ramiro Polla
mtv.c Reynaldo H. Verdejo Pinochet
mxf* Baptiste Coudurier
mxfdec.c Tomas Härdin
nistspheredec.c Paul B Mahol
nsvdec.c Francois Revol
nut* Michael Niedermayer
nut.c Michael Niedermayer
nuv.c Reimar Doeffinger
oggdec.c, oggdec.h David Conrad
oggenc.c Baptiste Coudurier
@@ -461,19 +401,15 @@ Muxers/Demuxers:
rmdec.c, rmenc.c Ronald S. Bultje, Kostya Shishkov
rtmp* Kostya Shishkov
rtp.c, rtpenc.c Martin Storsjo
rtpdec_h261.*, rtpenc_h261.* Thomas Volkert
rtpdec_hevc.* Thomas Volkert
rtpdec_asf.* Ronald S. Bultje
rtpenc_mpv.*, rtpenc_aac.* Martin Storsjo
rtsp.c Luca Barbato
sbgdec.c Nicolas George
sdp.c Martin Storsjo
segafilm.c Mike Melanson
segment.c Stefano Sabatini
siff.c Kostya Shishkov
smacker.c Kostya Shishkov
smjpeg* Paul B Mahol
spdif* Anssi Hannula
srtdec.c Aurelien Jacobs
swf.c Baptiste Coudurier
takdec.c Paul B Mahol
@@ -482,7 +418,6 @@ Muxers/Demuxers:
voc.c Aurelien Jacobs
wav.c Michael Niedermayer
wc3movie.c Mike Melanson
webm dash (matroskaenc.c) Vignesh Venkatasubramanian
webvtt* Matthew J Heaney
westwood.c Mike Melanson
wtv.c Peter Ross
@@ -493,7 +428,6 @@ Protocols:
bluray.c Petri Hintukainen
ftp.c Lukasz Marek
http.c Ronald S. Bultje
libssh.c Lukasz Marek
mms*.c Ronald S. Bultje
udp.c Luca Abeni
@@ -518,14 +452,12 @@ Operating systems / CPU architectures
Alpha Mans Rullgard, Falk Hueffner
ARM Mans Rullgard
AVR32 Mans Rullgard
MIPS Mans Rullgard, Nedeljko Babic
MIPS Mans Rullgard
Mac OS X / PowerPC Romain Dolbeau, Guillaume Poirier
Amiga / PowerPC Colin Ward
Linux / PowerPC Luca Barbato
Windows MinGW Alex Beregszaszi, Ramiro Polla
Windows Cygwin Victor Paesa
Windows MSVC Matthew Oliver
Windows ICL Matthew Oliver
ADI/Blackfin DSP Marc Hoffman
Sparc Roman Shaposhnik
x86 Michael Niedermayer
@@ -534,8 +466,7 @@ x86 Michael Niedermayer
Releases
========
2.4 Michael Niedermayer
2.2 Michael Niedermayer
2.0 Michael Niedermayer
1.2 Michael Niedermayer
If you want to maintain an older release, please contact us
@@ -544,7 +475,6 @@ If you want to maintain an older release, please contact us
GnuPG Fingerprints of maintainers and contributors
==================================================
Alexander Strasser 1C96 78B7 83CB 8AA7 9AF5 D1EB A7D8 A57B A876 E58F
Anssi Hannula 1A92 FF42 2DD9 8D2E 8AF7 65A9 4278 C520 513D F3CB
Anton Khirnov 6D0C 6625 56F8 65D1 E5F5 814B B50A 1241 C067 07AB
Ash Hughes 694D 43D2 D180 C7C7 6421 ABD3 A641 D0B7 623D 6029
@@ -552,7 +482,7 @@ Attila Kinali 11F0 F9A6 A1D2 11F6 C745 D10C 6520 BCDD F2DF E765
Baptiste Coudurier 8D77 134D 20CC 9220 201F C5DB 0AC9 325C 5C1A BAAA
Ben Littler 3EE3 3723 E560 3214 A8CD 4DEB 2CDB FCE7 768C 8D2C
Benoit Fouet B22A 4F4F 43EF 636B BB66 FCDC 0023 AE1E 2985 49C8
Clément Bœsch 52D0 3A82 D445 F194 DB8B 2B16 87EE 2CB8 F4B8 FCF9
Bœsch Clément 52D0 3A82 D445 F194 DB8B 2B16 87EE 2CB8 F4B8 FCF9
Daniel Verkamp 78A6 07ED 782C 653E C628 B8B9 F0EB 8DD8 2F0E 21C7
Diego Biurrun 8227 1E31 B6D9 4994 7427 E220 9CAE D6CC 4757 FCC5
FFmpeg release signing key FCF9 86EA 15E6 E293 A564 4F10 B432 2F04 D676 58D8
@@ -567,14 +497,12 @@ Michael Niedermayer 9FF2 128B 147E F673 0BAD F133 611E C787 040B 0FAB
Nicolas George 24CE 01CE 9ACC 5CEB 74D8 8D9D B063 D997 36E5 4C93
Panagiotis Issaris 6571 13A3 33D9 3726 F728 AA98 F643 B12E ECF3 E029
Peter Ross A907 E02F A6E5 0CD2 34CD 20D2 6760 79C5 AC40 DD6B
Reimar Doeffinger C61D 16E5 9E2C D10C 8958 38A4 0899 A2B9 06D4 D9C7
Reimar Döffinger C61D 16E5 9E2C D10C 8958 38A4 0899 A2B9 06D4 D9C7
Reinhard Tartler 9300 5DC2 7E87 6C37 ED7B CA9A 9808 3544 9453 48A4
Reynaldo H. Verdejo Pinochet 6E27 CD34 170C C78E 4D4F 5F40 C18E 077F 3114 452A
Robert Swain EE7A 56EA 4A81 A7B5 2001 A521 67FA 362D A2FC 3E71
Sascha Sommer 38A0 F88B 868E 9D3A 97D4 D6A0 E823 706F 1E07 0D3C
Stefano Sabatini 0D0B AD6B 5330 BBAD D3D6 6A0C 719C 2839 FC43 2D5F
Stephan Hilb 4F38 0B3A 5F39 B99B F505 E562 8D5C 5554 4E17 8863
Tiancheng "Timothy" Gu 9456 AFC0 814A 8139 E994 8351 7FE6 B095 B582 B0D4
Tim Nicholson 38CF DB09 3ED0 F607 8B67 6CED 0C0B FC44 8B0B FC83
Tomas Härdin A79D 4E3D F38F 763F 91F5 8B33 A01E 8AE0 41BB 2551
Wei Gao 4269 7741 857A 0E60 9EC5 08D2 4744 4EFA 62C1 87B9


@@ -4,49 +4,39 @@ include config.mak
vpath %.c $(SRC_PATH)
vpath %.cpp $(SRC_PATH)
vpath %.h $(SRC_PATH)
vpath %.m $(SRC_PATH)
vpath %.S $(SRC_PATH)
vpath %.asm $(SRC_PATH)
vpath %.rc $(SRC_PATH)
vpath %.v $(SRC_PATH)
vpath %.texi $(SRC_PATH)
vpath %/fate_config.sh.template $(SRC_PATH)
AVPROGS-$(CONFIG_FFMPEG) += ffmpeg
AVPROGS-$(CONFIG_FFPLAY) += ffplay
AVPROGS-$(CONFIG_FFPROBE) += ffprobe
AVPROGS-$(CONFIG_FFSERVER) += ffserver
PROGS-$(CONFIG_FFMPEG) += ffmpeg
PROGS-$(CONFIG_FFPLAY) += ffplay
PROGS-$(CONFIG_FFPROBE) += ffprobe
PROGS-$(CONFIG_FFSERVER) += ffserver
AVPROGS := $(AVPROGS-yes:%=%$(PROGSSUF)$(EXESUF))
INSTPROGS = $(AVPROGS-yes:%=%$(PROGSSUF)$(EXESUF))
PROGS += $(AVPROGS)
AVBASENAMES = ffmpeg ffplay ffprobe ffserver
ALLAVPROGS = $(AVBASENAMES:%=%$(PROGSSUF)$(EXESUF))
ALLAVPROGS_G = $(AVBASENAMES:%=%$(PROGSSUF)_g$(EXESUF))
$(foreach prog,$(AVBASENAMES),$(eval OBJS-$(prog) += cmdutils.o))
$(foreach prog,$(AVBASENAMES),$(eval OBJS-$(prog)-$(CONFIG_OPENCL) += cmdutils_opencl.o))
OBJS-ffmpeg += ffmpeg_opt.o ffmpeg_filter.o
OBJS-ffmpeg-$(HAVE_VDPAU_X11) += ffmpeg_vdpau.o
OBJS-ffmpeg-$(HAVE_DXVA2_LIB) += ffmpeg_dxva2.o
OBJS-ffmpeg-$(CONFIG_VDA) += ffmpeg_vda.o
PROGS := $(PROGS-yes:%=%$(PROGSSUF)$(EXESUF))
INSTPROGS = $(PROGS-yes:%=%$(PROGSSUF)$(EXESUF))
OBJS = cmdutils.o $(EXEOBJS)
OBJS-ffmpeg = ffmpeg_opt.o ffmpeg_filter.o
TESTTOOLS = audiogen videogen rotozoom tiny_psnr tiny_ssim base64
HOSTPROGS := $(TESTTOOLS:%=tests/%) doc/print_options
TOOLS = qt-faststart trasher uncoded_frame
TOOLS = qt-faststart trasher
TOOLS-$(CONFIG_ZLIB) += cws2fws
# $(FFLIBS-yes) needs to be in linking order
FFLIBS-$(CONFIG_AVDEVICE) += avdevice
FFLIBS-$(CONFIG_AVFILTER) += avfilter
FFLIBS-$(CONFIG_AVFORMAT) += avformat
FFLIBS-$(CONFIG_AVCODEC) += avcodec
BASENAMES = ffmpeg ffplay ffprobe ffserver
ALLPROGS = $(BASENAMES:%=%$(PROGSSUF)$(EXESUF))
ALLPROGS_G = $(BASENAMES:%=%$(PROGSSUF)_g$(EXESUF))
FFLIBS-$(CONFIG_AVDEVICE) += avdevice
FFLIBS-$(CONFIG_AVFILTER) += avfilter
FFLIBS-$(CONFIG_AVFORMAT) += avformat
FFLIBS-$(CONFIG_AVRESAMPLE) += avresample
FFLIBS-$(CONFIG_POSTPROC) += postproc
FFLIBS-$(CONFIG_SWRESAMPLE) += swresample
FFLIBS-$(CONFIG_SWSCALE) += swscale
FFLIBS-$(CONFIG_AVCODEC) += avcodec
FFLIBS-$(CONFIG_POSTPROC) += postproc
FFLIBS-$(CONFIG_SWRESAMPLE)+= swresample
FFLIBS-$(CONFIG_SWSCALE) += swscale
FFLIBS := avutil
@@ -60,14 +50,16 @@ include $(SRC_PATH)/common.mak
FF_EXTRALIBS := $(FFEXTRALIBS)
FF_DEP_LIBS := $(DEP_LIBS)
all: $(AVPROGS)
all: $(PROGS)
$(PROGS): %$(EXESUF): %_g$(EXESUF)
$(CP) $< $@
$(STRIP) $@
$(TOOLS): %$(EXESUF): %.o $(EXEOBJS)
$(LD) $(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $^ $(ELIBS)
$(LD) $(LDFLAGS) $(LD_O) $^ $(ELIBS)
tools/cws2fws$(EXESUF): ELIBS = $(ZLIB)
tools/uncoded_frame$(EXESUF): $(FF_DEP_LIBS)
tools/uncoded_frame$(EXESUF): ELIBS = $(FF_EXTRALIBS)
config.h: .config
.config: $(wildcard $(FFLIBS:%=$(SRC_PATH)/lib%/all*.c))
@@ -77,10 +69,11 @@ config.h: .config
SUBDIR_VARS := CLEANFILES EXAMPLES FFLIBS HOSTPROGS TESTPROGS TOOLS \
HEADERS ARCH_HEADERS BUILT_HEADERS SKIPHEADERS \
ARMV5TE-OBJS ARMV6-OBJS ARMV8-OBJS VFP-OBJS NEON-OBJS \
ALTIVEC-OBJS MMX-OBJS YASM-OBJS \
ARMV5TE-OBJS ARMV6-OBJS VFP-OBJS NEON-OBJS \
ALTIVEC-OBJS VIS-OBJS \
MMX-OBJS YASM-OBJS \
MIPSFPU-OBJS MIPSDSPR2-OBJS MIPSDSPR1-OBJS MIPS32R2-OBJS \
OBJS SLIBOBJS HOSTOBJS TESTOBJS
OBJS HOSTOBJS TESTOBJS
define RESET
$(1) :=
@@ -92,16 +85,13 @@ $(foreach V,$(SUBDIR_VARS),$(eval $(call RESET,$(V))))
SUBDIR := $(1)/
include $(SRC_PATH)/$(1)/Makefile
-include $(SRC_PATH)/$(1)/$(ARCH)/Makefile
-include $(SRC_PATH)/$(1)/$(INTRINSICS)/Makefile
include $(SRC_PATH)/library.mak
endef
$(foreach D,$(FFLIBS),$(eval $(call DOSUBDIR,lib$(D))))
include $(SRC_PATH)/doc/Makefile
define DOPROG
OBJS-$(1) += $(1).o $(EXEOBJS) $(OBJS-$(1)-yes)
OBJS-$(1) += $(1).o cmdutils.o $(EXEOBJS)
$(1)$(PROGSSUF)_g$(EXESUF): $$(OBJS-$(1))
$$(OBJS-$(1)): CFLAGS += $(CFLAGS-$(1))
$(1)$(PROGSSUF)_g$(EXESUF): LDFLAGS += $(LDFLAGS-$(1))
@@ -109,16 +99,10 @@ $(1)$(PROGSSUF)_g$(EXESUF): FF_EXTRALIBS += $(LIBS-$(1))
-include $$(OBJS-$(1):.o=.d)
endef
$(foreach P,$(PROGS),$(eval $(call DOPROG,$(P:$(PROGSSUF)$(EXESUF)=))))
ffprobe.o cmdutils.o : libavutil/ffversion.h
$(PROGS): %$(PROGSSUF)$(EXESUF): %$(PROGSSUF)_g$(EXESUF)
$(CP) $< $@
$(STRIP) $@
$(foreach P,$(PROGS-yes),$(eval $(call DOPROG,$(P))))
%$(PROGSSUF)_g$(EXESUF): %.o $(FF_DEP_LIBS)
$(LD) $(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $(OBJS-$*) $(FF_EXTRALIBS)
$(LD) $(LDFLAGS) $(LD_O) $(OBJS-$*) $(FF_EXTRALIBS)
OBJDIRS += tools
@@ -130,14 +114,14 @@ GIT_LOG = $(SRC_PATH)/.git/logs/HEAD
.version: $(wildcard $(GIT_LOG)) $(VERSION_SH) config.mak
.version: M=@
libavutil/ffversion.h .version:
$(M)$(VERSION_SH) $(SRC_PATH) libavutil/ffversion.h $(EXTRA_VERSION)
version.h .version:
$(M)$(VERSION_SH) $(SRC_PATH) version.h $(EXTRA_VERSION)
$(Q)touch .version
# force version.sh to run whenever version might have changed
-include .version
ifdef AVPROGS
ifdef PROGS
install: install-progs install-data
endif
@@ -148,7 +132,7 @@ install-libs: install-libs-yes
install-progs-yes:
install-progs-$(CONFIG_SHARED): install-libs
install-progs: install-progs-yes $(AVPROGS)
install-progs: install-progs-yes $(PROGS)
$(Q)mkdir -p "$(BINDIR)"
$(INSTALL) -c -m 755 $(INSTPROGS) "$(BINDIR)"
@@ -160,13 +144,13 @@ install-data: $(DATA_FILES) $(EXAMPLES_FILES)
uninstall: uninstall-libs uninstall-headers uninstall-progs uninstall-data
uninstall-progs:
$(RM) $(addprefix "$(BINDIR)/", $(ALLAVPROGS))
$(RM) $(addprefix "$(BINDIR)/", $(ALLPROGS))
uninstall-data:
$(RM) -r "$(DATADIR)"
clean::
$(RM) $(ALLAVPROGS) $(ALLAVPROGS_G)
$(RM) $(ALLPROGS) $(ALLPROGS_G)
$(RM) $(CLEANSUFFIXES)
$(RM) $(CLEANSUFFIXES:%=tools/%)
$(RM) -r coverage-html
@@ -174,13 +158,14 @@ clean::
distclean::
$(RM) $(DISTCLEANSUFFIXES)
$(RM) config.* .config libavutil/avconfig.h .version version.h libavutil/ffversion.h libavcodec/codec_names.h
$(RM) config.* .config libavutil/avconfig.h .version version.h libavcodec/codec_names.h
config:
$(SRC_PATH)/configure $(value FFMPEG_CONFIGURATION)
check: all alltools examples testprogs fate
include $(SRC_PATH)/doc/Makefile
include $(SRC_PATH)/tests/Makefile
$(sort $(OBJDIRS)):

README (new file, 18 lines)

@@ -0,0 +1,18 @@
FFmpeg README
-------------
1) Documentation
----------------
* Read the documentation in the doc/ directory in git.
You can also view it online at http://ffmpeg.org/documentation.html
2) Licensing
------------
* See the LICENSE file.
3) Build and Install
--------------------
* See the INSTALL file.


@@ -1,40 +0,0 @@
FFmpeg README
=============
FFmpeg is a collection of libraries and tools to process multimedia content
such as audio, video, subtitles and related metadata.
## Libraries
* `libavcodec` provides implementations of a wide range of codecs.
* `libavformat` implements streaming protocols, container formats and basic I/O access.
* `libavutil` includes hashers, decompressors and miscellaneous utility functions.
* `libavfilter` provides a means to alter decoded audio and video through a chain of filters.
* `libavdevice` provides an abstraction to access capture and playback devices.
* `libswresample` implements audio mixing and resampling routines.
* `libswscale` implements color conversion and scaling routines.
## Tools
* [ffmpeg](http://ffmpeg.org/ffmpeg.html) is a command line toolbox to
manipulate, convert and stream multimedia content.
* [ffplay](http://ffmpeg.org/ffplay.html) is a minimalistic multimedia player.
* [ffprobe](http://ffmpeg.org/ffprobe.html) is a simple analysis tool to inspect
multimedia content.
* Additional small tools such as `aviocat`, `ismindex` and `qt-faststart`.
## Documentation
The offline documentation is available in the **doc/** directory.
The online documentation is available on the main [website](http://ffmpeg.org)
and in the [wiki](http://trac.ffmpeg.org).
### Examples
Coding examples are available in the **doc/example** directory.
## License
The FFmpeg codebase is mainly LGPL-licensed, with optional components licensed under
GPL. Please refer to the LICENSE file for detailed information.


@@ -1 +1 @@
2.4.2
2.0.3


@@ -1,83 +0,0 @@
┌────────────────────────────────────────┐
│ RELEASE NOTES for FFmpeg 2.4 "Fresnel" │
└────────────────────────────────────────┘
The FFmpeg Project proudly presents FFmpeg 2.4 "Fresnel", just 2 months
after the release of 2.3. Since this wasn't a long time ago, the Changelog
is a bit short this time.
The most important thing in this release is the major version bump of the
libraries. This means that this release is neither ABI-compatible nor
fully API-compatible. But on the other hand it is aligned with the Libav
11 release series, and will as a result probably end up being maintained for
a long time.
As usual, if you have any question on this release or any FFmpeg related
topic, feel free to join us on the #ffmpeg IRC channel (on
irc.freenode.net).
┌────────────────────────────┐
│ 🔨 API Information │
└────────────────────────────┘
FFmpeg 2.4 includes the following library versions:
• libavutil 54.7.100
• libavcodec 56.1.100
• libavformat 56.4.101
• libavdevice 56.0.100
• libavfilter 5.1.100
• libswscale 3.0.100
• libswresample 1.1.100
• libpostproc 53.0.100
Important API changes since 2.3:
• The new field mime_type was added to AVProbeData; leaving it
uninitialized can cause crashes (see the initialization sketch after this
list of API changes).
• Some deprecated functions were removed.
• The avfilter_graph_parse function was made compatible with Libav.
• The Matroska demuxer now outputs verbatim ASS packets.
Please refer to the doc/APIchanges file for more information.
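As a hedged illustration of the AVProbeData point above, the minimal sketch
below (an editor's example against the public libavformat API, not code from
this release; the helper name probe_buffer is hypothetical) zero-initializes
the struct so the new mime_type field starts out NULL instead of containing
stack garbage.

    #include <string.h>
    #include <libavformat/avformat.h>

    /* Hypothetical helper: zero-init AVProbeData so mime_type (and any
     * future fields) are NULL rather than uninitialized memory. */
    static AVInputFormat *probe_buffer(const char *filename,
                                       unsigned char *buf, int buf_size)
    {
        AVProbeData pd;
        memset(&pd, 0, sizeof(pd));   /* mime_type == NULL */
        pd.filename = filename;
        pd.buf      = buf;            /* assumed padded with AVPROBE_PADDING_SIZE zero bytes */
        pd.buf_size = buf_size;
        return av_probe_input_format(&pd, 1);
    }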
┌────────────────────────────┐
│ ★ List of New Features │
└────────────────────────────┘
┌────────────────────────────┐
│ libavformat │
└────────────────────────────┘
• Icecast protocol.
• API for live metadata updates through event flags.
• UTF-16 support in text subtitle formats.
• The ASS muxer now reorders the Dialogue events properly.
• support for H.261 RTP payload format (RFC 4587)
• HEVC/H.265 RTP payload format (draft v6) depacketizer
┌────────────────────────────┐
│ libavfilter │
└────────────────────────────┘
• Ported lenscorrection filter from frei0r filter.
• Large optimizations in dctdnoiz to make it usable.
• Added codecview filter to visualize information exported by some codecs.
• Added silenceremove filter.
┌────────────────────────────┐
│ libavutil │
└────────────────────────────┘
• Added clip() function in eval.
┌────────────────────────────┐
│ ⚠ Behaviour changes │
└────────────────────────────┘
• dctdnoiz filter now uses a block size of 8x8 instead of 16x16 by default
• -vismv option is deprecated in favor of the codecview filter
• libmodplug is now detected through pkg-config
• HTML documentation generation through texi2html is deprecated in
favor of makeinfo/texi2any
• ICY metadata are now requested by default with the HTTP protocol

VERSION (new file, 1 line)

@@ -0,0 +1 @@
2.0.3


@@ -1,6 +1,5 @@
OBJS-$(HAVE_ARMV5TE) += $(ARMV5TE-OBJS) $(ARMV5TE-OBJS-yes)
OBJS-$(HAVE_ARMV6) += $(ARMV6-OBJS) $(ARMV6-OBJS-yes)
OBJS-$(HAVE_ARMV8) += $(ARMV8-OBJS) $(ARMV8-OBJS-yes)
OBJS-$(HAVE_VFP) += $(VFP-OBJS) $(VFP-OBJS-yes)
OBJS-$(HAVE_NEON) += $(NEON-OBJS) $(NEON-OBJS-yes)
@@ -11,5 +10,7 @@ OBJS-$(HAVE_MIPSDSPR2) += $(MIPSDSPR2-OBJS) $(MIPSDSPR2-OBJS-yes)
OBJS-$(HAVE_ALTIVEC) += $(ALTIVEC-OBJS) $(ALTIVEC-OBJS-yes)
OBJS-$(HAVE_VIS) += $(VIS-OBJS) $(VIS-OBJS-yes)
OBJS-$(HAVE_MMX) += $(MMX-OBJS) $(MMX-OBJS-yes)
OBJS-$(HAVE_YASM) += $(YASM-OBJS) $(YASM-OBJS-yes)


@@ -20,7 +20,6 @@
*/
#include <string.h>
#include <stdint.h>
#include <stdlib.h>
#include <errno.h>
#include <math.h>
@@ -48,9 +47,8 @@
#include "libavutil/eval.h"
#include "libavutil/dict.h"
#include "libavutil/opt.h"
#include "libavutil/cpu.h"
#include "libavutil/ffversion.h"
#include "cmdutils.h"
#include "version.h"
#if CONFIG_NETWORK
#include "libavformat/network.h"
#endif
@@ -58,6 +56,10 @@
#include <sys/time.h>
#include <sys/resource.h>
#endif
#if CONFIG_OPENCL
#include "libavutil/opencl.h"
#endif
static int init_report(const char *env);
@@ -65,9 +67,9 @@ struct SwsContext *sws_opts;
AVDictionary *swr_opts;
AVDictionary *format_opts, *codec_opts, *resample_opts;
const int this_year = 2013;
static FILE *report_file;
static int report_file_level = AV_LOG_DEBUG;
int hide_banner = 0;
void init_opts(void)
{
@@ -105,10 +107,8 @@ static void log_callback_report(void *ptr, int level, const char *fmt, va_list v
av_log_default_callback(ptr, level, fmt, vl);
av_log_format_line(ptr, level, fmt, vl2, line, sizeof(line), &print_prefix);
va_end(vl2);
if (report_file_level >= level) {
fputs(line, report_file);
fflush(report_file);
}
fputs(line, report_file);
fflush(report_file);
}
static void (*program_exit)(int ret);
@@ -166,7 +166,7 @@ void show_help_options(const OptionDef *options, const char *msg, int req_flags,
int first;
first = 1;
for (po = options; po->name; po++) {
for (po = options; po->name != NULL; po++) {
char buf[64];
if (((po->flags & req_flags) != req_flags) ||
@@ -205,7 +205,7 @@ static const OptionDef *find_option(const OptionDef *po, const char *name)
const char *p = strchr(name, ':');
int len = p ? p - name : strlen(name);
while (po->name) {
while (po->name != NULL) {
if (!strncmp(name, po->name, len) && strlen(po->name) == len)
break;
po++;
@@ -213,10 +213,7 @@ static const OptionDef *find_option(const OptionDef *po, const char *name)
return po;
}
/* _WIN32 means using the windows libc - cygwin doesn't define that
* by default. HAVE_COMMANDLINETOARGVW is true on cygwin, while
* it doesn't provide the actual command line via GetCommandLineW(). */
#if HAVE_COMMANDLINETOARGVW && defined(_WIN32)
#if HAVE_COMMANDLINETOARGVW
#include <windows.h>
#include <shellapi.h>
/* Will be leaked on exit */
@@ -254,7 +251,7 @@ static void prepare_app_arguments(int *argc_ptr, char ***argv_ptr)
win32_argv_utf8 = av_mallocz(sizeof(char *) * (win32_argc + 1) + buffsize);
argstr_flat = (char *)win32_argv_utf8 + sizeof(char *) * (win32_argc + 1);
if (!win32_argv_utf8) {
if (win32_argv_utf8 == NULL) {
LocalFree(argv_w);
return;
}
@@ -495,18 +492,6 @@ void parse_loglevel(int argc, char **argv, const OptionDef *options)
fflush(report_file);
}
}
idx = locate_option(argc, argv, options, "hide_banner");
if (idx)
hide_banner = 1;
}
static const AVOption *opt_find(void *obj, const char *name, const char *unit,
int opt_flags, int search_flags)
{
const AVOption *o = av_opt_find(obj, name, unit, opt_flags, search_flags);
if(o && !o->flags)
return NULL;
return o;
}
#define FLAGS (o->type == AV_OPT_TYPE_FLAGS) ? AV_DICT_APPEND : 0
@@ -529,14 +514,14 @@ int opt_default(void *optctx, const char *opt, const char *arg)
p = opt + strlen(opt);
av_strlcpy(opt_stripped, opt, FFMIN(sizeof(opt_stripped), p - opt + 1));
if ((o = opt_find(&cc, opt_stripped, NULL, 0,
if ((o = av_opt_find(&cc, opt_stripped, NULL, 0,
AV_OPT_SEARCH_CHILDREN | AV_OPT_SEARCH_FAKE_OBJ)) ||
((opt[0] == 'v' || opt[0] == 'a' || opt[0] == 's') &&
(o = opt_find(&cc, opt + 1, NULL, 0, AV_OPT_SEARCH_FAKE_OBJ)))) {
(o = av_opt_find(&cc, opt + 1, NULL, 0, AV_OPT_SEARCH_FAKE_OBJ)))) {
av_dict_set(&codec_opts, opt, arg, FLAGS);
consumed = 1;
}
if ((o = opt_find(&fc, opt, NULL, 0,
if ((o = av_opt_find(&fc, opt, NULL, 0,
AV_OPT_SEARCH_CHILDREN | AV_OPT_SEARCH_FAKE_OBJ))) {
av_dict_set(&format_opts, opt, arg, FLAGS);
if (consumed)
@@ -545,7 +530,7 @@ int opt_default(void *optctx, const char *opt, const char *arg)
}
#if CONFIG_SWSCALE
sc = sws_get_class();
if (!consumed && opt_find(&sc, opt, NULL, 0,
if (!consumed && av_opt_find(&sc, opt, NULL, 0,
AV_OPT_SEARCH_CHILDREN | AV_OPT_SEARCH_FAKE_OBJ)) {
// XXX we only support sws_flags, not arbitrary sws options
int ret = av_opt_set(sws_opts, opt, arg, 0);
@@ -555,15 +540,10 @@ int opt_default(void *optctx, const char *opt, const char *arg)
}
consumed = 1;
}
#else
if (!consumed && !strcmp(opt, "sws_flags")) {
av_log(NULL, AV_LOG_WARNING, "Ignoring %s %s, due to disabled swscale\n", opt, arg);
consumed = 1;
}
#endif
#if CONFIG_SWRESAMPLE
swr_class = swr_get_class();
if (!consumed && (o=opt_find(&swr_class, opt, NULL, 0,
if (!consumed && (o=av_opt_find(&swr_class, opt, NULL, 0,
AV_OPT_SEARCH_CHILDREN | AV_OPT_SEARCH_FAKE_OBJ))) {
struct SwrContext *swr = swr_alloc();
int ret = av_opt_set(swr, opt, arg, 0);
@@ -577,7 +557,7 @@ int opt_default(void *optctx, const char *opt, const char *arg)
}
#endif
#if CONFIG_AVRESAMPLE
if ((o=opt_find(&rc, opt, NULL, 0,
if ((o=av_opt_find(&rc, opt, NULL, 0,
AV_OPT_SEARCH_CHILDREN | AV_OPT_SEARCH_FAKE_OBJ))) {
av_dict_set(&resample_opts, opt, arg, FLAGS);
consumed = 1;
@@ -670,7 +650,7 @@ static void init_parse_context(OptionParseContext *octx,
memset(octx, 0, sizeof(*octx));
octx->nb_groups = nb_groups;
octx->groups = av_mallocz_array(octx->nb_groups, sizeof(*octx->groups));
octx->groups = av_mallocz(sizeof(*octx->groups) * octx->nb_groups);
if (!octx->groups)
exit_program(1);
@@ -816,18 +796,6 @@ do { \
return 0;
}
int opt_cpuflags(void *optctx, const char *opt, const char *arg)
{
int ret;
unsigned flags = av_get_cpu_flags();
if ((ret = av_parse_cpu_caps(&flags, arg)) < 0)
return ret;
av_force_cpu_flags(flags);
return 0;
}
int opt_loglevel(void *optctx, const char *opt, const char *arg)
{
const struct { const char *name; int level; } log_levels[] = {
@@ -842,17 +810,10 @@ int opt_loglevel(void *optctx, const char *opt, const char *arg)
};
char *tail;
int level;
int flags;
int i;
flags = av_log_get_flags();
tail = strstr(arg, "repeat");
if (tail)
flags &= ~AV_LOG_SKIP_REPEATED;
else
flags |= AV_LOG_SKIP_REPEATED;
av_log_set_flags(flags);
av_log_set_flags(tail ? 0 : AV_LOG_SKIP_REPEATED);
if (tail == arg)
arg += 6 + (arg[6]=='+');
if(tail && !*arg)
@@ -934,13 +895,6 @@ static int init_report(const char *env)
av_free(filename_template);
filename_template = val;
val = NULL;
} else if (!strcmp(key, "level")) {
char *tail;
report_file_level = strtol(val, &tail, 10);
if (*tail) {
av_log(NULL, AV_LOG_FATAL, "Invalid report file level\n");
exit_program(1);
}
} else {
av_log(NULL, AV_LOG_ERROR, "Unknown key '%s' in FFREPORT\n", key);
}
@@ -994,6 +948,18 @@ int opt_max_alloc(void *optctx, const char *opt, const char *arg)
return 0;
}
int opt_cpuflags(void *optctx, const char *opt, const char *arg)
{
int ret;
unsigned flags = av_get_cpu_flags();
if ((ret = av_parse_cpu_caps(&flags, arg)) < 0)
return ret;
av_force_cpu_flags(flags);
return 0;
}
int opt_timelimit(void *optctx, const char *opt, const char *arg)
{
#if HAVE_SETRLIMIT
@@ -1007,6 +973,26 @@ int opt_timelimit(void *optctx, const char *opt, const char *arg)
return 0;
}
#if CONFIG_OPENCL
int opt_opencl(void *optctx, const char *opt, const char *arg)
{
char *key, *value;
const char *opts = arg;
int ret = 0;
while (*opts) {
ret = av_opt_get_key_value(&opts, "=", ":", 0, &key, &value);
if (ret < 0)
return ret;
ret = av_opencl_set_option(key, value);
if (ret < 0)
return ret;
if (*opts)
opts++;
}
return ret;
}
#endif
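For readers unfamiliar with av_opt_get_key_value(), the opt_opencl loop above
walks a "key=value:key=value" string one pair at a time. The standalone sketch
below is an editor's illustration of the same pattern (parse_pairs is a
hypothetical name, not part of cmdutils.c); it additionally frees the returned
key/value strings, which libavutil allocates for the caller.

    #include <stdio.h>
    #include "libavutil/opt.h"
    #include "libavutil/mem.h"

    static int parse_pairs(const char *arg)
    {
        const char *opts = arg;
        while (*opts) {
            char *key, *value;
            int ret = av_opt_get_key_value(&opts, "=", ":", 0, &key, &value);
            if (ret < 0)
                return ret;           /* malformed pair */
            printf("key='%s' value='%s'\n", key, value);
            av_free(key);
            av_free(value);
            if (*opts)
                opts++;               /* step over the ':' pairs separator */
        }
        return 0;
    }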
void print_error(const char *filename, int err)
{
char errbuf[128];
@@ -1072,7 +1058,7 @@ static void print_program_info(int flags, int level)
av_log(NULL, level, "%s version " FFMPEG_VERSION, program_name);
if (flags & SHOW_COPYRIGHT)
av_log(NULL, level, " Copyright (c) %d-%d the FFmpeg developers",
program_birth_year, CONFIG_THIS_YEAR);
program_birth_year, this_year);
av_log(NULL, level, "\n");
av_log(NULL, level, "%sbuilt on %s %s with %s\n",
indent, __DATE__, __TIME__, CC_IDENT);
@@ -1080,36 +1066,10 @@ static void print_program_info(int flags, int level)
av_log(NULL, level, "%sconfiguration: " FFMPEG_CONFIGURATION "\n", indent);
}
static void print_buildconf(int flags, int level)
{
const char *indent = flags & INDENT ? " " : "";
char str[] = { FFMPEG_CONFIGURATION };
char *conflist, *remove_tilde, *splitconf;
// Change all the ' --' strings to '~--' so that
// they can be identified as tokens.
while ((conflist = strstr(str, " --")) != NULL) {
strncpy(conflist, "~--", 3);
}
// Compensate for the weirdness this would cause
// when passing 'pkg-config --static'.
while ((remove_tilde = strstr(str, "pkg-config~")) != NULL) {
strncpy(remove_tilde, "pkg-config ", 11);
}
splitconf = strtok(str, "~");
av_log(NULL, level, "\n%sconfiguration:\n", indent);
while (splitconf != NULL) {
av_log(NULL, level, "%s%s%s\n", indent, indent, splitconf);
splitconf = strtok(NULL, "~");
}
}
void show_banner(int argc, char **argv, const OptionDef *options)
{
int idx = locate_option(argc, argv, options, "version");
if (hide_banner || idx)
if (idx)
return;
print_program_info (INDENT|SHOW_COPYRIGHT, AV_LOG_INFO);
@@ -1120,20 +1080,12 @@ void show_banner(int argc, char **argv, const OptionDef *options)
int show_version(void *optctx, const char *opt, const char *arg)
{
av_log_set_callback(log_callback_help);
print_program_info (SHOW_COPYRIGHT, AV_LOG_INFO);
print_program_info (0 , AV_LOG_INFO);
print_all_libs_info(SHOW_VERSION, AV_LOG_INFO);
return 0;
}
int show_buildconf(void *optctx, const char *opt, const char *arg)
{
av_log_set_callback(log_callback_help);
print_buildconf (INDENT|0, AV_LOG_INFO);
return 0;
}
int show_license(void *optctx, const char *opt, const char *arg)
{
#if CONFIG_NONFREE
@@ -1208,29 +1160,16 @@ int show_license(void *optctx, const char *opt, const char *arg)
return 0;
}
static int is_device(const AVClass *avclass)
{
if (!avclass)
return 0;
return avclass->category == AV_CLASS_CATEGORY_DEVICE_VIDEO_OUTPUT ||
avclass->category == AV_CLASS_CATEGORY_DEVICE_VIDEO_INPUT ||
avclass->category == AV_CLASS_CATEGORY_DEVICE_AUDIO_OUTPUT ||
avclass->category == AV_CLASS_CATEGORY_DEVICE_AUDIO_INPUT ||
avclass->category == AV_CLASS_CATEGORY_DEVICE_OUTPUT ||
avclass->category == AV_CLASS_CATEGORY_DEVICE_INPUT;
}
static int show_formats_devices(void *optctx, const char *opt, const char *arg, int device_only)
int show_formats(void *optctx, const char *opt, const char *arg)
{
AVInputFormat *ifmt = NULL;
AVOutputFormat *ofmt = NULL;
const char *last_name;
int is_dev;
printf("%s\n"
printf("File formats:\n"
" D. = Demuxing supported\n"
" .E = Muxing supported\n"
" --\n", device_only ? "Devices:" : "File formats:");
" --\n");
last_name = "000";
for (;;) {
int decode = 0;
@@ -1239,10 +1178,7 @@ static int show_formats_devices(void *optctx, const char *opt, const char *arg,
const char *long_name = NULL;
while ((ofmt = av_oformat_next(ofmt))) {
is_dev = is_device(ofmt->priv_class);
if (!is_dev && device_only)
continue;
if ((!name || strcmp(ofmt->name, name) < 0) &&
if ((name == NULL || strcmp(ofmt->name, name) < 0) &&
strcmp(ofmt->name, last_name) > 0) {
name = ofmt->name;
long_name = ofmt->long_name;
@@ -1250,10 +1186,7 @@ static int show_formats_devices(void *optctx, const char *opt, const char *arg,
}
}
while ((ifmt = av_iformat_next(ifmt))) {
is_dev = is_device(ifmt->priv_class);
if (!is_dev && device_only)
continue;
if ((!name || strcmp(ifmt->name, name) < 0) &&
if ((name == NULL || strcmp(ifmt->name, name) < 0) &&
strcmp(ifmt->name, last_name) > 0) {
name = ifmt->name;
long_name = ifmt->long_name;
@@ -1262,7 +1195,7 @@ static int show_formats_devices(void *optctx, const char *opt, const char *arg,
if (name && strcmp(ifmt->name, name) == 0)
decode = 1;
}
if (!name)
if (name == NULL)
break;
last_name = name;
@@ -1275,16 +1208,6 @@ static int show_formats_devices(void *optctx, const char *opt, const char *arg,
return 0;
}
int show_formats(void *optctx, const char *opt, const char *arg)
{
return show_formats_devices(optctx, opt, arg, 0);
}
int show_devices(void *optctx, const char *opt, const char *arg)
{
return show_formats_devices(optctx, opt, arg, 1);
}
#define PRINT_CODEC_SUPPORTED(codec, field, type, list_name, term, get_name) \
if (codec->field) { \
const type *p = codec->field; \
@@ -1429,9 +1352,6 @@ int show_codecs(void *optctx, const char *opt, const char *arg)
const AVCodecDescriptor *desc = codecs[i];
const AVCodec *codec = NULL;
if (strstr(desc->name, "_deprecated"))
continue;
printf(" ");
printf(avcodec_find_decoder(desc->id) ? "D" : ".");
printf(avcodec_find_encoder(desc->id) ? "E" : ".");
@@ -1549,9 +1469,8 @@ int show_filters(void *optctx, const char *opt, const char *arg)
const AVFilterPad *pad;
printf("Filters:\n"
" T.. = Timeline support\n"
" .S. = Slice threading\n"
" ..C = Commmand support\n"
" T. = Timeline support\n"
" .S = Slice threading\n"
" A = Audio input/output\n"
" V = Video input/output\n"
" N = Dynamic number and/or type of input/output\n"
@@ -1575,30 +1494,15 @@ int show_filters(void *optctx, const char *opt, const char *arg)
( i && (filter->flags & AVFILTER_FLAG_DYNAMIC_OUTPUTS))) ? 'N' : '|';
}
*descr_cur = 0;
printf(" %c%c%c %-16s %-10s %s\n",
printf(" %c%c %-16s %-10s %s\n",
filter->flags & AVFILTER_FLAG_SUPPORT_TIMELINE ? 'T' : '.',
filter->flags & AVFILTER_FLAG_SLICE_THREADS ? 'S' : '.',
filter->process_command ? 'C' : '.',
filter->name, descr, filter->description);
}
#endif
return 0;
}
int show_colors(void *optctx, const char *opt, const char *arg)
{
const char *name;
const uint8_t *rgb;
int i;
printf("%-32s #RRGGBB\n", "name");
for (i = 0; name = av_get_known_color_name(i, &rgb); i++)
printf("%-32s #%02x%02x%02x\n", name, rgb[0], rgb[1], rgb[2]);
return 0;
}
int show_pix_fmts(void *optctx, const char *opt, const char *arg)
{
const AVPixFmtDescriptor *pix_desc = NULL;
@@ -1639,19 +1543,19 @@ int show_layouts(void *optctx, const char *opt, const char *arg)
const char *name, *descr;
printf("Individual channels:\n"
"NAME DESCRIPTION\n");
"NAME DESCRIPTION\n");
for (i = 0; i < 63; i++) {
name = av_get_channel_name((uint64_t)1 << i);
if (!name)
continue;
descr = av_get_channel_description((uint64_t)1 << i);
printf("%-14s %s\n", name, descr);
printf("%-12s%s\n", name, descr);
}
printf("\nStandard channel layouts:\n"
"NAME DECOMPOSITION\n");
"NAME DECOMPOSITION\n");
for (i = 0; !av_get_standard_channel_layout(i, &layout, &name); i++) {
if (name) {
printf("%-14s ", name);
printf("%-12s", name);
for (j = 1; j; j <<= 1)
if ((layout & j))
printf("%s%s", (layout & (j - 1)) ? "+" : "", av_get_channel_name(j));
@@ -1858,7 +1762,7 @@ int read_yesno(void)
int cmdutils_read_file(const char *filename, char **bufptr, size_t *size)
{
int ret;
FILE *f = av_fopen_utf8(filename, "rb");
FILE *f = fopen(filename, "rb");
if (!f) {
av_log(NULL, AV_LOG_ERROR, "Cannot read file '%s': %s\n", filename,
@@ -1996,8 +1900,7 @@ AVDictionary *filter_codec_opts(AVDictionary *opts, enum AVCodecID codec_id,
}
if (av_opt_find(&cc, t->key, NULL, flags, AV_OPT_SEARCH_FAKE_OBJ) ||
!codec ||
(codec->priv_class &&
(codec && codec->priv_class &&
av_opt_find(&codec->priv_class, t->key, NULL, flags,
AV_OPT_SEARCH_FAKE_OBJ)))
av_dict_set(&ret, t->key, t->value, 0);
@@ -2020,7 +1923,7 @@ AVDictionary **setup_find_stream_info_opts(AVFormatContext *s,
if (!s->nb_streams)
return NULL;
opts = av_mallocz_array(s->nb_streams, sizeof(*opts));
opts = av_mallocz(s->nb_streams * sizeof(*opts));
if (!opts) {
av_log(NULL, AV_LOG_ERROR,
"Could not alloc memory for stream options.\n");


@@ -24,13 +24,12 @@
#include <stdint.h>
#include "config.h"
#include "libavcodec/avcodec.h"
#include "libavfilter/avfilter.h"
#include "libavformat/avformat.h"
#include "libswscale/swscale.h"
#ifdef _WIN32
#ifdef __MINGW32__
#undef main /* We don't want SDL to override our main() */
#endif
@@ -44,12 +43,16 @@ extern const char program_name[];
*/
extern const int program_birth_year;
/**
* this year, defined by the program for show_banner()
*/
extern const int this_year;
extern AVCodecContext *avcodec_opts[AVMEDIA_TYPE_NB];
extern AVFormatContext *avformat_opts;
extern struct SwsContext *sws_opts;
extern AVDictionary *swr_opts;
extern AVDictionary *format_opts, *codec_opts, *resample_opts;
extern int hide_banner;
/**
* Register a program-specific cleanup routine.
@@ -59,7 +62,7 @@ void register_exit(void (*cb)(int ret));
/**
* Wraps exit with a program-specific cleanup routine.
*/
void exit_program(int ret) av_noreturn;
void exit_program(int ret);
/**
* Initialize the cmdutils option system, in particular
@@ -78,11 +81,6 @@ void uninit_opts(void);
*/
void log_callback_help(void* ptr, int level, const char* fmt, va_list vl);
/**
* Override the cpuflags.
*/
int opt_cpuflags(void *optctx, const char *opt, const char *arg);
/**
* Fallback for options that are not explicitly handled, these will be
* parsed through AVOptions.
@@ -98,14 +96,12 @@ int opt_report(const char *opt);
int opt_max_alloc(void *optctx, const char *opt, const char *arg);
int opt_cpuflags(void *optctx, const char *opt, const char *arg);
int opt_codec_debug(void *optctx, const char *opt, const char *arg);
#if CONFIG_OPENCL
int opt_opencl(void *optctx, const char *opt, const char *arg);
int opt_opencl_bench(void *optctx, const char *opt, const char *arg);
#endif
/**
* Limit the execution time.
*/
@@ -415,13 +411,6 @@ void show_banner(int argc, char **argv, const OptionDef *options);
*/
int show_version(void *optctx, const char *opt, const char *arg);
/**
* Print the build configuration of the program to stdout. The contents
* depend on the definition of FFMPEG_CONFIGURATION.
* This option processing function does not utilize the arguments.
*/
int show_buildconf(void *optctx, const char *opt, const char *arg);
/**
* Print the license of the program to stdout. The license depends on
* the license of the libraries compiled into the program.
@@ -431,17 +420,10 @@ int show_license(void *optctx, const char *opt, const char *arg);
/**
* Print a listing containing all the formats supported by the
* program (including devices).
* This option processing function does not utilize the arguments.
*/
int show_formats(void *optctx, const char *opt, const char *arg);
/**
* Print a listing containing all the devices supported by the
* program.
* This option processing function does not utilize the arguments.
*/
int show_devices(void *optctx, const char *opt, const char *arg);
int show_formats(void *optctx, const char *opt, const char *arg);
/**
* Print a listing containing all the codecs supported by the
@@ -503,12 +485,6 @@ int show_layouts(void *optctx, const char *opt, const char *arg);
*/
int show_sample_fmts(void *optctx, const char *opt, const char *arg);
/**
* Print a listing containing all the color names and values recognized
* by the program.
*/
int show_colors(void *optctx, const char *opt, const char *arg);
/**
* Return a positive value if a line read from standard input
* starts with [yY], otherwise return 0.
@@ -522,7 +498,7 @@ int read_yesno(void);
* @param filename file to read from
* @param bufptr location where pointer to buffer is returned
* @param size location where size of buffer is returned
* @return >= 0 in case of success, a negative value corresponding to an
* @return 0 in case of success, a negative value corresponding to an
* AVERROR error code in case of failure.
*/
int cmdutils_read_file(const char *filename, char **bufptr, size_t *size);
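A minimal usage sketch for cmdutils_read_file() (an editor's example; it
assumes, as the cmdutils implementation suggests, that the buffer is allocated
by the callee and released by the caller with av_free, and that failure is
reported as a negative AVERROR code):

    #include <stdio.h>
    #include "cmdutils.h"
    #include "libavutil/mem.h"

    static int print_file_size(const char *path)
    {
        char  *buf  = NULL;
        size_t size = 0;
        int ret = cmdutils_read_file(path, &buf, &size);
        if (ret < 0)
            return ret;               /* negative AVERROR code on failure */
        printf("read %zu bytes from %s\n", size, path);
        av_free(buf);                 /* caller owns the buffer */
        return 0;
    }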


@@ -4,9 +4,7 @@
{ "help" , OPT_EXIT, {.func_arg = show_help}, "show help", "topic" },
{ "-help" , OPT_EXIT, {.func_arg = show_help}, "show help", "topic" },
{ "version" , OPT_EXIT, {.func_arg = show_version}, "show version" },
{ "buildconf" , OPT_EXIT, {.func_arg = show_buildconf}, "show build configuration" },
{ "formats" , OPT_EXIT, {.func_arg = show_formats }, "show available formats" },
{ "devices" , OPT_EXIT, {.func_arg = show_devices }, "show available devices" },
{ "codecs" , OPT_EXIT, {.func_arg = show_codecs }, "show available codecs" },
{ "decoders" , OPT_EXIT, {.func_arg = show_decoders }, "show available decoders" },
{ "encoders" , OPT_EXIT, {.func_arg = show_encoders }, "show available encoders" },
@@ -16,14 +14,11 @@
{ "pix_fmts" , OPT_EXIT, {.func_arg = show_pix_fmts }, "show available pixel formats" },
{ "layouts" , OPT_EXIT, {.func_arg = show_layouts }, "show standard channel layouts" },
{ "sample_fmts", OPT_EXIT, {.func_arg = show_sample_fmts }, "show available audio sample formats" },
{ "colors" , OPT_EXIT, {.func_arg = show_colors }, "show available color names" },
{ "loglevel" , HAS_ARG, {.func_arg = opt_loglevel}, "set logging level", "loglevel" },
{ "v", HAS_ARG, {.func_arg = opt_loglevel}, "set logging level", "loglevel" },
{ "report" , 0, {(void*)opt_report}, "generate a report" },
{ "max_alloc" , HAS_ARG, {.func_arg = opt_max_alloc}, "set maximum size of a single allocated block", "bytes" },
{ "cpuflags" , HAS_ARG | OPT_EXPERT, { .func_arg = opt_cpuflags }, "force specific cpu flags", "flags" },
{ "hide_banner", OPT_BOOL | OPT_EXPERT, {&hide_banner}, "do not show program banner", "hide_banner" },
{ "cpuflags" , HAS_ARG | OPT_EXPERT, {.func_arg = opt_cpuflags}, "force specific cpu flags", "flags" },
#if CONFIG_OPENCL
{ "opencl_bench", OPT_EXIT, {.func_arg = opt_opencl_bench}, "run benchmark on all OpenCL devices and show results" },
{ "opencl_options", HAS_ARG, {.func_arg = opt_opencl}, "set OpenCL environment options" },
#endif


@@ -1,274 +0,0 @@
/*
* Copyright (C) 2013 Lenny Wang
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "libavutil/opt.h"
#include "libavutil/time.h"
#include "libavutil/log.h"
#include "libavutil/opencl.h"
#include "cmdutils.h"
typedef struct {
int platform_idx;
int device_idx;
char device_name[64];
int64_t runtime;
} OpenCLDeviceBenchmark;
const char *ocl_bench_source = AV_OPENCL_KERNEL(
inline unsigned char clip_uint8(int a)
{
if (a & (~0xFF))
return (-a)>>31;
else
return a;
}
kernel void unsharp_bench(
global unsigned char *src,
global unsigned char *dst,
global int *mask,
int width,
int height)
{
int i, j, local_idx, lc_idx, sum = 0;
int2 thread_idx, block_idx, global_idx, lm_idx;
thread_idx.x = get_local_id(0);
thread_idx.y = get_local_id(1);
block_idx.x = get_group_id(0);
block_idx.y = get_group_id(1);
global_idx.x = get_global_id(0);
global_idx.y = get_global_id(1);
local uchar data[32][32];
local int lc[128];
for (i = 0; i <= 1; i++) {
lm_idx.y = -8 + (block_idx.y + i) * 16 + thread_idx.y;
lm_idx.y = lm_idx.y < 0 ? 0 : lm_idx.y;
lm_idx.y = lm_idx.y >= height ? height - 1: lm_idx.y;
for (j = 0; j <= 1; j++) {
lm_idx.x = -8 + (block_idx.x + j) * 16 + thread_idx.x;
lm_idx.x = lm_idx.x < 0 ? 0 : lm_idx.x;
lm_idx.x = lm_idx.x >= width ? width - 1: lm_idx.x;
data[i*16 + thread_idx.y][j*16 + thread_idx.x] = src[lm_idx.y*width + lm_idx.x];
}
}
local_idx = thread_idx.y*16 + thread_idx.x;
if (local_idx < 128)
lc[local_idx] = mask[local_idx];
barrier(CLK_LOCAL_MEM_FENCE);
\n#pragma unroll\n
for (i = -4; i <= 4; i++) {
lm_idx.y = 8 + i + thread_idx.y;
\n#pragma unroll\n
for (j = -4; j <= 4; j++) {
lm_idx.x = 8 + j + thread_idx.x;
lc_idx = (i + 4)*8 + j + 4;
sum += (int)data[lm_idx.y][lm_idx.x] * lc[lc_idx];
}
}
int temp = (int)data[thread_idx.y + 8][thread_idx.x + 8];
int res = temp + (((temp - (int)((sum + 1<<15) >> 16))) >> 16);
if (global_idx.x < width && global_idx.y < height)
dst[global_idx.x + global_idx.y*width] = clip_uint8(res);
}
);
#define OCLCHECK(method, ... ) \
do { \
status = method(__VA_ARGS__); \
if (status != CL_SUCCESS) { \
av_log(NULL, AV_LOG_ERROR, # method " error '%s'\n", \
av_opencl_errstr(status)); \
ret = AVERROR_EXTERNAL; \
goto end; \
} \
} while (0)
#define CREATEBUF(out, flags, size) \
do { \
out = clCreateBuffer(ext_opencl_env->context, flags, size, NULL, &status); \
if (status != CL_SUCCESS) { \
av_log(NULL, AV_LOG_ERROR, "Could not create OpenCL buffer\n"); \
ret = AVERROR_EXTERNAL; \
goto end; \
} \
} while (0)
static void fill_rand_int(int *data, int n)
{
int i;
srand(av_gettime());
for (i = 0; i < n; i++)
data[i] = rand();
}
#define OPENCL_NB_ITER 5
static int64_t run_opencl_bench(AVOpenCLExternalEnv *ext_opencl_env)
{
int i, arg = 0, width = 1920, height = 1088;
int64_t start, ret = 0;
cl_int status;
size_t kernel_len;
char *inbuf;
int *mask;
int buf_size = width * height * sizeof(char);
int mask_size = sizeof(uint32_t) * 128;
cl_mem cl_mask, cl_inbuf, cl_outbuf;
cl_kernel kernel = NULL;
cl_program program = NULL;
size_t local_work_size_2d[2] = {16, 16};
size_t global_work_size_2d[2] = {(size_t)width, (size_t)height};
if (!(inbuf = av_malloc(buf_size)) || !(mask = av_malloc(mask_size))) {
av_log(NULL, AV_LOG_ERROR, "Out of memory\n");
ret = AVERROR(ENOMEM);
goto end;
}
fill_rand_int((int*)inbuf, buf_size/4);
fill_rand_int(mask, mask_size/4);
CREATEBUF(cl_mask, CL_MEM_READ_ONLY, mask_size);
CREATEBUF(cl_inbuf, CL_MEM_READ_ONLY, buf_size);
CREATEBUF(cl_outbuf, CL_MEM_READ_WRITE, buf_size);
kernel_len = strlen(ocl_bench_source);
program = clCreateProgramWithSource(ext_opencl_env->context, 1, &ocl_bench_source,
&kernel_len, &status);
if (status != CL_SUCCESS || !program) {
av_log(NULL, AV_LOG_ERROR, "OpenCL unable to create benchmark program\n");
ret = AVERROR_EXTERNAL;
goto end;
}
status = clBuildProgram(program, 1, &(ext_opencl_env->device_id), NULL, NULL, NULL);
if (status != CL_SUCCESS) {
av_log(NULL, AV_LOG_ERROR, "OpenCL unable to build benchmark program\n");
ret = AVERROR_EXTERNAL;
goto end;
}
kernel = clCreateKernel(program, "unsharp_bench", &status);
if (status != CL_SUCCESS) {
av_log(NULL, AV_LOG_ERROR, "OpenCL unable to create benchmark kernel\n");
ret = AVERROR_EXTERNAL;
goto end;
}
OCLCHECK(clEnqueueWriteBuffer, ext_opencl_env->command_queue, cl_inbuf, CL_TRUE, 0,
buf_size, inbuf, 0, NULL, NULL);
OCLCHECK(clEnqueueWriteBuffer, ext_opencl_env->command_queue, cl_mask, CL_TRUE, 0,
mask_size, mask, 0, NULL, NULL);
OCLCHECK(clSetKernelArg, kernel, arg++, sizeof(cl_mem), &cl_inbuf);
OCLCHECK(clSetKernelArg, kernel, arg++, sizeof(cl_mem), &cl_outbuf);
OCLCHECK(clSetKernelArg, kernel, arg++, sizeof(cl_mem), &cl_mask);
OCLCHECK(clSetKernelArg, kernel, arg++, sizeof(cl_int), &width);
OCLCHECK(clSetKernelArg, kernel, arg++, sizeof(cl_int), &height);
start = av_gettime_relative();
for (i = 0; i < OPENCL_NB_ITER; i++)
OCLCHECK(clEnqueueNDRangeKernel, ext_opencl_env->command_queue, kernel, 2, NULL,
global_work_size_2d, local_work_size_2d, 0, NULL, NULL);
clFinish(ext_opencl_env->command_queue);
ret = (av_gettime_relative() - start)/OPENCL_NB_ITER;
end:
if (kernel)
clReleaseKernel(kernel);
if (program)
clReleaseProgram(program);
if (cl_inbuf)
clReleaseMemObject(cl_inbuf);
if (cl_outbuf)
clReleaseMemObject(cl_outbuf);
if (cl_mask)
clReleaseMemObject(cl_mask);
av_free(inbuf);
av_free(mask);
return ret;
}
static int compare_ocl_device_desc(const void *a, const void *b)
{
return ((OpenCLDeviceBenchmark*)a)->runtime - ((OpenCLDeviceBenchmark*)b)->runtime;
}
int opt_opencl_bench(void *optctx, const char *opt, const char *arg)
{
int i, j, nb_devices = 0, count = 0;
int64_t score = 0;
AVOpenCLDeviceList *device_list;
AVOpenCLDeviceNode *device_node = NULL;
OpenCLDeviceBenchmark *devices = NULL;
cl_platform_id platform;
av_opencl_get_device_list(&device_list);
for (i = 0; i < device_list->platform_num; i++)
nb_devices += device_list->platform_node[i]->device_num;
if (!nb_devices) {
av_log(NULL, AV_LOG_ERROR, "No OpenCL device detected!\n");
return AVERROR(EINVAL);
}
if (!(devices = av_malloc_array(nb_devices, sizeof(OpenCLDeviceBenchmark)))) {
av_log(NULL, AV_LOG_ERROR, "Could not allocate buffer\n");
return AVERROR(ENOMEM);
}
for (i = 0; i < device_list->platform_num; i++) {
for (j = 0; j < device_list->platform_node[i]->device_num; j++) {
device_node = device_list->platform_node[i]->device_node[j];
platform = device_list->platform_node[i]->platform_id;
score = av_opencl_benchmark(device_node, platform, run_opencl_bench);
if (score > 0) {
devices[count].platform_idx = i;
devices[count].device_idx = j;
devices[count].runtime = score;
strcpy(devices[count].device_name, device_node->device_name);
count++;
}
}
}
qsort(devices, count, sizeof(OpenCLDeviceBenchmark), compare_ocl_device_desc);
fprintf(stderr, "platform_idx\tdevice_idx\tdevice_name\truntime\n");
for (i = 0; i < count; i++)
fprintf(stdout, "%d\t%d\t%s\t%"PRId64"\n",
devices[i].platform_idx, devices[i].device_idx,
devices[i].device_name, devices[i].runtime);
av_opencl_free_device_list(&device_list);
av_free(devices);
return 0;
}
int opt_opencl(void *optctx, const char *opt, const char *arg)
{
char *key, *value;
const char *opts = arg;
int ret = 0;
while (*opts) {
ret = av_opt_get_key_value(&opts, "=", ":", 0, &key, &value);
if (ret < 0)
return ret;
ret = av_opencl_set_option(key, value);
if (ret < 0)
return ret;
if (*opts)
opts++;
}
return ret;
}


@@ -10,7 +10,7 @@ ifndef SUBDIR
ifndef V
Q = @
ECHO = printf "$(1)\t%s\n" $(2)
BRIEF = CC CXX HOSTCC HOSTLD AS YASM AR LD STRIP CP WINDRES
BRIEF = CC CXX HOSTCC HOSTLD AS YASM AR LD STRIP CP
SILENT = DEPCC DEPHOSTCC DEPAS DEPYASM RANLIB RM
MSG = $@
@@ -43,7 +43,6 @@ endef
COMPILE_C = $(call COMPILE,CC)
COMPILE_CXX = $(call COMPILE,CXX)
COMPILE_S = $(call COMPILE,AS)
COMPILE_HOSTC = $(call COMPILE,HOSTCC)
%.o: %.c
$(COMPILE_C)
@@ -51,21 +50,12 @@ COMPILE_HOSTC = $(call COMPILE,HOSTCC)
%.o: %.cpp
$(COMPILE_CXX)
%.o: %.m
$(COMPILE_C)
%.s: %.c
$(CC) $(CPPFLAGS) $(CFLAGS) -S -o $@ $<
%.o: %.S
$(COMPILE_S)
%_host.o: %.c
$(COMPILE_HOSTC)
%.o: %.rc
$(WINDRES) $(IFLAGS) --preprocessor "$(DEPWINDRES) -E -xc-header -DRC_INVOKED $(CC_DEPFLAGS)" -o $@ $<
%.i: %.c
$(CC) $(CCFLAGS) $(CC_E) $<
@@ -92,15 +82,14 @@ endif
include $(SRC_PATH)/arch.mak
OBJS += $(OBJS-yes)
SLIBOBJS += $(SLIBOBJS-yes)
FFLIBS := $($(NAME)_FFLIBS) $(FFLIBS-yes) $(FFLIBS)
FFLIBS := $(FFLIBS-yes) $(FFLIBS)
TESTPROGS += $(TESTPROGS-yes)
LDLIBS = $(FFLIBS:%=%$(BUILDSUF))
FFEXTRALIBS := $(LDLIBS:%=$(LD_LIB)) $(EXTRALIBS)
EXAMPLES := $(EXAMPLES:%=$(SUBDIR)%-example$(EXESUF))
OBJS := $(sort $(OBJS:%=$(SUBDIR)%))
SLIBOBJS := $(sort $(SLIBOBJS:%=$(SUBDIR)%))
TESTOBJS := $(TESTOBJS:%=$(SUBDIR)%) $(TESTPROGS:%=$(SUBDIR)%-test.o)
TESTPROGS := $(TESTPROGS:%=$(SUBDIR)%-test$(EXESUF))
HOSTOBJS := $(HOSTPROGS:%=$(SUBDIR)%.o)
@@ -110,8 +99,7 @@ TOOLOBJS := $(TOOLS:%=tools/%.o)
TOOLS := $(TOOLS:%=tools/%$(EXESUF))
HEADERS += $(HEADERS-yes)
PATH_LIBNAME = $(foreach NAME,$(1),lib$(NAME)/$($(CONFIG_SHARED:yes=S)LIBNAME))
DEP_LIBS := $(foreach lib,$(FFLIBS),$(call PATH_LIBNAME,$(lib)))
DEP_LIBS := $(foreach NAME,$(FFLIBS),lib$(NAME)/$($(CONFIG_SHARED:yes=S)LIBNAME))
SRC_DIR := $(SRC_PATH)/lib$(NAME)
ALLHEADERS := $(subst $(SRC_DIR)/,$(SUBDIR),$(wildcard $(SRC_DIR)/*.h $(SRC_DIR)/$(ARCH)/*.h))
@@ -124,19 +112,18 @@ checkheaders: $(HOBJS)
alltools: $(TOOLS)
$(HOSTOBJS): %.o: %.c
$(COMPILE_HOSTC)
$(call COMPILE,HOSTCC)
$(HOSTPROGS): %$(HOSTEXESUF): %.o
$(HOSTLD) $(HOSTLDFLAGS) $(HOSTLD_O) $^ $(HOSTLIBS)
$(HOSTLD) $(HOSTLDFLAGS) $(HOSTLD_O) $< $(HOSTLIBS)
$(OBJS): | $(sort $(dir $(OBJS)))
$(HOBJS): | $(sort $(dir $(HOBJS)))
$(HOSTOBJS): | $(sort $(dir $(HOSTOBJS)))
$(SLIBOBJS): | $(sort $(dir $(SLIBOBJS)))
$(TESTOBJS): | $(sort $(dir $(TESTOBJS)))
$(TOOLOBJS): | tools
OBJDIRS := $(OBJDIRS) $(dir $(OBJS) $(HOBJS) $(HOSTOBJS) $(SLIBOBJS) $(TESTOBJS))
OBJDIRS := $(OBJDIRS) $(dir $(OBJS) $(HOBJS) $(HOSTOBJS) $(TESTOBJS))
CLEANSUFFIXES = *.d *.o *~ *.h.c *.map *.ver *.ho *.gcno *.gcda
DISTCLEANSUFFIXES = *.pc
@@ -151,4 +138,4 @@ endef
$(eval $(RULES))
-include $(wildcard $(OBJS:.o=.d) $(HOSTOBJS:.o=.d) $(TESTOBJS:.o=.d) $(HOBJS:.o=.d) $(SLIBOBJS:.o=.d))
-include $(wildcard $(OBJS:.o=.d) $(HOSTOBJS:.o=.d) $(TESTOBJS:.o=.d) $(HOBJS:.o=.d))

View File

@@ -1,26 +1,9 @@
/*
* Work around the class() function in AIX math.h clashing with
* identifiers named "class".
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
* Workaround aix-specific class() function clashing with ffmpeg class usage
*/
#ifndef FFMPEG_COMPAT_AIX_MATH_H
#define FFMPEG_COMPAT_AIX_MATH_H
#ifndef COMPAT_AIX_MATH_H
#define COMPAT_AIX_MATH_H
#define class class_in_math_h_causes_problems
@@ -28,4 +11,4 @@
#undef class
#endif /* FFMPEG_COMPAT_AIX_MATH_H */
#endif /* COMPAT_AIX_MATH_H */

View File

@@ -13,8 +13,7 @@
//
// You should have received a copy of the GNU General Public License
// along with this program; if not, write to the Free Software
// Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
// MA 02110-1301 USA, or visit
// Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA, or visit
// http://www.gnu.org/copyleft/gpl.html .
//
// As a special exception, I give you permission to link to the
@@ -805,7 +804,7 @@ struct AVS_Library {
AVSC_INLINE AVS_Library * avs_load_library() {
AVS_Library *library = (AVS_Library *)malloc(sizeof(AVS_Library));
if (!library)
if (library == NULL)
return NULL;
library->handle = LoadLibrary("avisynth");
if (library->handle == NULL)
@@ -870,7 +869,7 @@ fail:
}
AVSC_INLINE void avs_free_library(AVS_Library *library) {
if (!library)
if (library == NULL)
return;
FreeLibrary(library->handle);
free(library);

View File

@@ -13,8 +13,7 @@
//
// You should have received a copy of the GNU General Public License
// along with this program; if not, write to the Free Software
// Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
// MA 02110-1301 USA, or visit
// Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA, or visit
// http://www.gnu.org/copyleft/gpl.html .
//
// As a special exception, I give you permission to link to the

View File

@@ -1,35 +0,0 @@
/*
* Work around broken floating point limits on some systems.
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include_next <float.h>
#ifdef FLT_MAX
#undef FLT_MAX
#define FLT_MAX 3.40282346638528859812e+38F
#undef FLT_MIN
#define FLT_MIN 1.17549435082228750797e-38F
#undef DBL_MAX
#define DBL_MAX ((double)1.79769313486231570815e+308L)
#undef DBL_MIN
#define DBL_MIN ((double)2.22507385850720138309e-308L)
#endif

View File

@@ -1,22 +0,0 @@
/*
* Work around broken floating point limits on some systems.
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include_next <limits.h>
#include <float.h>

View File

@@ -38,6 +38,8 @@ static int optind = 1;
static int optopt;
static char *optarg;
#undef fprintf
static int getopt(int argc, char *argv[], char *opts)
{
static int sp = 1;
@@ -54,7 +56,7 @@ static int getopt(int argc, char *argv[], char *opts)
}
}
optopt = c = argv[optind][sp];
if (c == ':' || !(cp = strchr(opts, c))) {
if (c == ':' || (cp = strchr(opts, c)) == NULL) {
fprintf(stderr, ": illegal option -- %c\n", c);
if (argv[optind][++sp] == '\0') {
optind++;
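
The hunk above adjusts the bundled getopt() fallback used on platforms without a native implementation. For orientation, this is the classic interface it reimplements; a minimal, generic usage sketch with arbitrarily chosen option letters (not FFmpeg code):

#include <stdio.h>
#include <unistd.h>   /* POSIX getopt(); the compat version mirrors this API */

int main(int argc, char *argv[])
{
    int c;
    int verbose = 0;
    const char *output = NULL;

    /* "vo:" means: -v takes no argument, -o takes one (note the colon). */
    while ((c = getopt(argc, argv, "vo:")) != -1) {
        switch (c) {
        case 'v': verbose = 1;      break;
        case 'o': output  = optarg; break;   /* optarg points at -o's argument */
        default:  return 1;                  /* getopt already printed an error */
        }
    }
    printf("verbose=%d output=%s\n", verbose, output ? output : "(none)");
    return 0;
}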

View File

@@ -32,8 +32,6 @@
#undef __STRICT_ANSI__ /* for _beginthread() */
#include <stdlib.h>
#include "libavutil/mem.h"
typedef TID pthread_t;
typedef void pthread_attr_t;

View File

@@ -1,24 +1,3 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef FFMPEG_COMPAT_TMS470_MATH_H
#define FFMPEG_COMPAT_TMS470_MATH_H
#include_next <math.h>
#undef INFINITY
@@ -26,5 +5,3 @@
#define INFINITY (*(const float*)((const unsigned []){ 0x7f800000 }))
#define NAN (*(const float*)((const unsigned []){ 0x7fc00000 }))
#endif /* FFMPEG_COMPAT_TMS470_MATH_H */

View File

@@ -24,6 +24,3 @@
#if !defined(va_copy) && defined(_MSC_VER)
#define va_copy(dst, src) ((dst) = (src))
#endif
#if !defined(va_copy) && defined(__GNUC__) && __GNUC__ < 3
#define va_copy(dst, src) __va_copy(dst, src)
#endif
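
These fallback va_copy definitions matter whenever a va_list has to be traversed more than once, since a plain assignment is not portable across ABIs. A generic example of the two-pass pattern the macro enables (an illustration only, not code from this tree):

#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

/* Format into a freshly allocated buffer. The argument list is consumed
 * twice (once to measure, once to print), which is exactly the situation
 * va_copy exists for; a plain "ap2 = ap" is not portable. */
static char *format_alloc(const char *fmt, ...)
{
    va_list ap, ap2;
    int len;
    char *buf;

    va_start(ap, fmt);
    va_copy(ap2, ap);                    /* keep a second copy for pass two */
    len = vsnprintf(NULL, 0, fmt, ap);   /* pass one: measure the result */
    va_end(ap);

    if (len < 0) {
        va_end(ap2);
        return NULL;
    }
    buf = malloc(len + 1);
    if (buf)
        vsnprintf(buf, len + 1, fmt, ap2);   /* pass two: actually format */
    va_end(ap2);
    return buf;
}

int main(void)
{
    char *s = format_alloc("%s %d", "answer:", 42);
    puts(s ? s : "(allocation failed)");
    free(s);
    return 0;
}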

View File

@@ -26,8 +26,8 @@
* w32threads to pthreads wrapper
*/
#ifndef FFMPEG_COMPAT_W32PTHREADS_H
#define FFMPEG_COMPAT_W32PTHREADS_H
#ifndef LIBAV_W32PTHREADS_H
#define LIBAV_W32PTHREADS_H
/* Build up a pthread-like API using underlying Windows API. Have only static
* methods so as to not conflict with a potentially linked in pthread-win32
@@ -39,7 +39,6 @@
#include <windows.h>
#include <process.h>
#include "libavutil/attributes.h"
#include "libavutil/common.h"
#include "libavutil/internal.h"
#include "libavutil/mem.h"
@@ -63,40 +62,21 @@ typedef struct pthread_cond_t {
} pthread_cond_t;
/* function pointers to conditional variable API on windows 6.0+ kernels */
#if _WIN32_WINNT < 0x0600
static void (WINAPI *cond_broadcast)(pthread_cond_t *cond);
static void (WINAPI *cond_init)(pthread_cond_t *cond);
static void (WINAPI *cond_signal)(pthread_cond_t *cond);
static BOOL (WINAPI *cond_wait)(pthread_cond_t *cond, pthread_mutex_t *mutex,
DWORD milliseconds);
#else
#define cond_init InitializeConditionVariable
#define cond_broadcast WakeAllConditionVariable
#define cond_signal WakeConditionVariable
#define cond_wait SleepConditionVariableCS
#define CreateEvent(a, reset, init, name) \
CreateEventEx(a, name, \
(reset ? CREATE_EVENT_MANUAL_RESET : 0) | \
(init ? CREATE_EVENT_INITIAL_SET : 0), \
EVENT_ALL_ACCESS)
// CreateSemaphoreExA seems to be desktop-only, but as long as we don't
// use named semaphores, it doesn't matter if we use the W version.
#define CreateSemaphore(a, b, c, d) \
CreateSemaphoreExW(a, b, c, d, 0, SEMAPHORE_ALL_ACCESS)
#define InitializeCriticalSection(x) InitializeCriticalSectionEx(x, 0, 0)
#define WaitForSingleObject(a, b) WaitForSingleObjectEx(a, b, FALSE)
#endif
static av_unused unsigned __stdcall attribute_align_arg win32thread_worker(void *arg)
static unsigned __stdcall attribute_align_arg win32thread_worker(void *arg)
{
pthread_t *h = arg;
h->ret = h->func(h->arg);
return 0;
}
static av_unused int pthread_create(pthread_t *thread, const void *unused_attr,
void *(*start_routine)(void*), void *arg)
static int pthread_create(pthread_t *thread, const void *unused_attr,
void *(*start_routine)(void*), void *arg)
{
thread->func = start_routine;
thread->arg = arg;
@@ -105,7 +85,7 @@ static av_unused int pthread_create(pthread_t *thread, const void *unused_attr,
return !thread->handle;
}
static av_unused void pthread_join(pthread_t thread, void **value_ptr)
static void pthread_join(pthread_t thread, void **value_ptr)
{
DWORD ret = WaitForSingleObject(thread.handle, INFINITE);
if (ret != WAIT_OBJECT_0)
@@ -147,32 +127,31 @@ typedef struct win32_cond_t {
volatile int is_broadcast;
} win32_cond_t;
static av_unused int pthread_cond_init(pthread_cond_t *cond, const void *unused_attr)
static void pthread_cond_init(pthread_cond_t *cond, const void *unused_attr)
{
win32_cond_t *win32_cond = NULL;
if (cond_init) {
cond_init(cond);
return 0;
return;
}
/* non native condition variables */
win32_cond = av_mallocz(sizeof(win32_cond_t));
if (!win32_cond)
return ENOMEM;
return;
cond->ptr = win32_cond;
win32_cond->semaphore = CreateSemaphore(NULL, 0, 0x7fffffff, NULL);
if (!win32_cond->semaphore)
return ENOMEM;
return;
win32_cond->waiters_done = CreateEvent(NULL, TRUE, FALSE, NULL);
if (!win32_cond->waiters_done)
return ENOMEM;
return;
pthread_mutex_init(&win32_cond->mtx_waiter_count, NULL);
pthread_mutex_init(&win32_cond->mtx_broadcast, NULL);
return 0;
}
static av_unused void pthread_cond_destroy(pthread_cond_t *cond)
static void pthread_cond_destroy(pthread_cond_t *cond)
{
win32_cond_t *win32_cond = cond->ptr;
/* native condition variables do not destroy */
@@ -188,7 +167,7 @@ static av_unused void pthread_cond_destroy(pthread_cond_t *cond)
cond->ptr = NULL;
}
static av_unused void pthread_cond_broadcast(pthread_cond_t *cond)
static void pthread_cond_broadcast(pthread_cond_t *cond)
{
win32_cond_t *win32_cond = cond->ptr;
int have_waiter;
@@ -219,7 +198,7 @@ static av_unused void pthread_cond_broadcast(pthread_cond_t *cond)
pthread_mutex_unlock(&win32_cond->mtx_broadcast);
}
static av_unused int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex)
static int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex)
{
win32_cond_t *win32_cond = cond->ptr;
int last_waiter;
@@ -251,7 +230,7 @@ static av_unused int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mu
return pthread_mutex_lock(mutex);
}
static av_unused void pthread_cond_signal(pthread_cond_t *cond)
static void pthread_cond_signal(pthread_cond_t *cond)
{
win32_cond_t *win32_cond = cond->ptr;
int have_waiter;
@@ -276,7 +255,7 @@ static av_unused void pthread_cond_signal(pthread_cond_t *cond)
pthread_mutex_unlock(&win32_cond->mtx_broadcast);
}
static av_unused void w32thread_init(void)
static void w32thread_init(void)
{
#if _WIN32_WINNT < 0x0600
HANDLE kernel_dll = GetModuleHandle(TEXT("kernel32.dll"));
@@ -289,8 +268,13 @@ static av_unused void w32thread_init(void)
(void*)GetProcAddress(kernel_dll, "WakeConditionVariable");
cond_wait =
(void*)GetProcAddress(kernel_dll, "SleepConditionVariableCS");
#else
cond_init = InitializeConditionVariable;
cond_broadcast = WakeAllConditionVariable;
cond_signal = WakeConditionVariable;
cond_wait = SleepConditionVariableCS;
#endif
}
#endif /* FFMPEG_COMPAT_W32PTHREADS_H */
#endif /* LIBAV_W32PTHREADS_H */
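
For context on what the wrapper above has to emulate: callers elsewhere in the tree use the standard POSIX condition-variable idiom, sketched below with plain pthreads (a generic example, not taken from FFmpeg). On Windows the same calls land in these w32threads shims, backed either by the pre-Vista event/semaphore emulation or by the native condition-variable functions resolved at runtime.

#include <pthread.h>
#include <stdio.h>

/* Classic handshake: the consumer waits on a condition variable until the
 * predicate ("ready") becomes true. This is the usage pattern the
 * w32threads wrapper reproduces on top of Win32 primitives. */
static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond  = PTHREAD_COND_INITIALIZER;
static int             ready = 0;

static void *producer(void *arg)
{
    pthread_mutex_lock(&lock);
    ready = 1;
    pthread_cond_signal(&cond);      /* wake one waiter */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t th;
    pthread_create(&th, NULL, producer, NULL);

    pthread_mutex_lock(&lock);
    while (!ready)                          /* always re-check the predicate:  */
        pthread_cond_wait(&cond, &lock);    /* wakeups may be spurious         */
    pthread_mutex_unlock(&lock);

    pthread_join(th, NULL);
    puts("consumer saw ready == 1");
    return 0;
}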

View File

@@ -1,132 +0,0 @@
#!/bin/sh
# Copyright (c) 2013, Derek Buitenhuis
#
# Permission to use, copy, modify, and/or distribute this software for any
# purpose with or without fee is hereby granted, provided that the above
# copyright notice and this permission notice appear in all copies.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
# OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
# mktemp isn't POSIX, so supply an implementation
mktemp() {
echo "${2%%XXX*}.${HOSTNAME}.${UID}.$$"
}
if [ $# -lt 2 ]; then
echo "Usage: makedef <version_script> <objects>" >&2
exit 0
fi
vscript=$1
shift
if [ ! -f "$vscript" ]; then
echo "Version script does not exist" >&2
exit 1
fi
for object in "$@"; do
if [ ! -f "$object" ]; then
echo "Object does not exist: ${object}" >&2
exit 1
fi
done
# Create a lib temporarily to dump symbols from.
# It's just much easier to do it this way
libname=$(mktemp -u "library").lib
trap 'rm -f -- $libname' EXIT
lib -out:${libname} $@ >/dev/null
if [ $? != 0 ]; then
echo "Could not create temporary library." >&2
exit 1
fi
IFS='
'
# Determine if we're building for x86 or x86_64 and
# set the symbol prefix accordingly.
prefix=""
arch=$(dumpbin -headers ${libname} |
tr '\t' ' ' |
grep '^ \+.\+machine \+(.\+)' |
head -1 |
sed -e 's/^ \{1,\}.\{1,\} \{1,\}machine \{1,\}(\(...\)).*/\1/')
if [ "${arch}" = "x86" ]; then
prefix="_"
else
if [ "${arch}" != "ARM" ] && [ "${arch}" != "x64" ]; then
echo "Unknown machine type." >&2
exit 1
fi
fi
started=0
regex="none"
for line in $(cat ${vscript} | tr '\t' ' '); do
# We only care about global symbols
echo "${line}" | grep -q '^ \+global:'
if [ $? = 0 ]; then
started=1
line=$(echo "${line}" | sed -e 's/^ \{1,\}global: *//')
else
echo "${line}" | grep -q '^ \+local:'
if [ $? = 0 ]; then
started=0
fi
fi
if [ ${started} = 0 ]; then
continue
fi
# Handle multiple symbols on one line
IFS=';'
# Work around stupid expansion to filenames
line=$(echo "${line}" | sed -e 's/\*/.\\+/g')
for exp in ${line}; do
# Remove leading and trailing whitespace
exp=$(echo "${exp}" | sed -e 's/^ *//' -e 's/ *$//')
if [ "${regex}" = "none" ]; then
regex="${exp}"
else
regex="${regex};${exp}"
fi
done
IFS='
'
done
dump=$(dumpbin -linkermember:1 ${libname})
rm ${libname}
IFS=';'
list=""
for exp in ${regex}; do
list="${list}"'
'$(echo "${dump}" |
sed -e '/public symbols/,$!d' -e '/^ \{1,\}Summary/,$d' -e "s/ \{1,\}${prefix}/ /" -e 's/ \{1,\}/ /g' |
tail -n +2 |
cut -d' ' -f3 |
grep "^${exp}" |
sed -e 's/^/ /')
done
echo "EXPORTS"
echo "${list}" | sort | uniq | tail -n +2

configure (vendored): 1959 lines changed

File diff suppressed because it is too large.

View File

@@ -2,542 +2,37 @@ Never assume the API of libav* to be stable unless at least 1 month has passed
since the last major version increase or the API was added.
The last version increases were:
libavcodec: 2014-08-09
libavdevice: 2014-08-09
libavfilter: 2014-08-09
libavformat: 2014-08-09
libavresample: 2014-08-09
libpostproc: 2014-08-09
libswresample: 2014-08-09
libswscale: 2014-08-09
libavutil: 2014-08-09
libavcodec: 2013-03-xx
libavdevice: 2013-03-xx
libavfilter: 2012-06-22
libavformat: 2013-03-xx
libavresample: 2012-10-05
libpostproc: 2011-04-18
libswresample: 2011-09-19
libswscale: 2011-06-20
libavutil: 2012-10-22
API changes, most recent first:
-------- 8< --------- FFmpeg 2.4 was cut here -------- 8< ---------
2014-08-28 - f30a815 / 9301486 - lavc 56.1.100 / 56.1.0 - avcodec.h
Add AV_PKT_DATA_STEREO3D to export container-level stereo3d information.
2014-08-25 - 215db29 / b263f8f - lavf 56.3.100 / 56.3.0 - avformat.h
Add AVFormatContext.max_ts_probe.
2014-08-23 - 8fc9bd0 - lavu 54.7.100 - dict.h
AV_DICT_DONT_STRDUP_KEY and AV_DICT_DONT_STRDUP_VAL arguments are now
freed even on error. This is consistent with the behaviour all users
of it we could find expect.
2014-08-21 - 980a5b0 - lavu 54.6.100 - frame.h motion_vector.h
Add AV_FRAME_DATA_MOTION_VECTORS side data and AVMotionVector structure
2014-08-16 - b7d5e01 - lswr 1.1.100 - swresample.h
Add AVFrame based API
2014-08-16 - c2829dc - lavu 54.4.100 - dict.h
Add av_dict_set_int helper function.
2014-08-13 - c8571c6 / 8ddc326 - lavu 54.3.100 / 54.3.0 - mem.h
Add av_strndup().
2014-08-13 - 2ba4577 / a8c104a - lavu 54.2.100 / 54.2.0 - opt.h
Add av_opt_get_dict_val/set_dict_val with AV_OPT_TYPE_DICT to support
dictionary types being set as options.
2014-08-13 - afbd4b8 - lavf 56.01.0 - avformat.h
Add AVFormatContext.event_flags and AVStream.event_flags for signaling to
the user when events happen in the file/stream.
2014-08-10 - 78eaaa8 / fb1ddcd - lavr 2.1.0 - avresample.h
Add avresample_convert_frame() and avresample_config().
2014-08-10 - 78eaaa8 / fb1ddcd - lavu 54.1.100 / 54.1.0 - error.h
Add AVERROR_INPUT_CHANGED and AVERROR_OUTPUT_CHANGED.
2014-08-08 - 3841f2a / d35b94f - lavc 55.73.102 / 55.57.4 - avcodec.h
Deprecate FF_IDCT_XVIDMMX define and xvidmmx idct option.
Replaced by FF_IDCT_XVID and xvid respectively.
2014-08-08 - 5c3c671 - lavf 55.53.100 - avio.h
Add avio_feof() and deprecate url_feof().
2014-08-07 - bb78903 - lsws 2.1.3 - swscale.h
sws_getContext is not going to be removed in the future.
2014-08-07 - a561662 / ad1ee5f - lavc 55.73.101 / 55.57.3 - avcodec.h
reordered_opaque is not going to be removed in the future.
2014-08-02 - 28a2107 - lavu 52.98.100 - pixelutils.h
Add pixelutils API with SAD functions
2014-08-04 - 6017c98 / e9abafc - lavu 52.97.100 / 53.22.0 - pixfmt.h
Add AV_PIX_FMT_YA16 pixel format for 16 bit packed gray with alpha.
2014-08-04 - 4c8bc6f / e96c3b8 - lavu 52.96.101 / 53.21.1 - avstring.h
Rename AV_PIX_FMT_Y400A to AV_PIX_FMT_YA8 to better identify the format.
An alias pixel format and color space name are provided for compatibility.
2014-08-04 - 073c074 / d2962e9 - lavu 52.96.100 / 53.21.0 - pixdesc.h
Support name aliases for pixel formats.
2014-08-03 - 71d008e / 1ef9e83 - lavc 55.72.101 / 55.57.2 - avcodec.h
2014-08-03 - 71d008e / 1ef9e83 - lavu 52.95.100 / 53.20.0 - frame.h
Deprecate AVCodecContext.dtg_active_format and use side-data instead.
2014-08-03 - e680c73 - lavc 55.72.100 - avcodec.h
Add get_pixels() to AVDCT
2014-08-03 - 9400603 / 9f17685 - lavc 55.71.101 / 55.57.1 - avcodec.h
Deprecate unused FF_IDCT_IPP define and ipp avcodec option.
Deprecate unused FF_DEBUG_PTS define and pts avcodec option.
Deprecate unused FF_CODER_TYPE_DEFLATE define and deflate avcodec option.
Deprecate unused FF_DCT_INT define and int avcodec option.
Deprecate unused avcodec option scenechange_factor.
2014-07-30 - ba3e331 - lavu 52.94.100 - frame.h
Add av_frame_side_data_name()
2014-07-29 - 80a3a66 / 3a19405 - lavf 56.01.100 / 56.01.0 - avformat.h
Add mime_type field to AVProbeData, which now MUST be initialized in
order to avoid uninitialized reads of the mime_type pointer, likely
leading to crashes.
Typically, this means you will do 'AVProbeData pd = { 0 };' instead of
'AVProbeData pd;'.
2014-07-29 - 31e0b5d / 69e7336 - lavu 52.92.100 / 53.19.0 - avstring.h
Make name matching function from lavf public as av_match_name().
2014-07-28 - 2e5c8b0 / c5fca01 - lavc 55.71.100 / 55.57.0 - avcodec.h
Add AV_CODEC_PROP_REORDER to mark codecs supporting frame reordering.
2014-07-27 - ff9a154 - lavf 55.50.100 - avformat.h
New field int64_t probesize2 instead of deprecated
field int probesize.
2014-07-27 - 932ff70 - lavc 55.70.100 - avdct.h
Add AVDCT / avcodec_dct_alloc() / avcodec_dct_init().
2014-07-23 - 8a4c086 - lavf 55.49.100 - avio.h
Add avio_read_to_bprint()
-------- 8< --------- FFmpeg 2.3 was cut here -------- 8< ---------
2014-07-14 - 62227a7 - lavf 55.47.100 - avformat.h
Add av_stream_get_parser()
2014-07-09 - c67690f / a54f03b - lavu 52.92.100 / 53.18.0 - display.h
Add av_display_matrix_flip() to flip the transformation matrix.
2014-07-09 - 1b58f13 / f6ee61f - lavc 55.69.100 / 55.56.0 - dv_profile.h
Add a public API for DV profile handling.
2014-06-20 - 0dceefc / 9e500ef - lavu 52.90.100 / 53.17.0 - imgutils.h
Add av_image_check_sar().
2014-06-20 - 4a99333 / 874390e - lavc 55.68.100 / 55.55.0 - avcodec.h
Add av_packet_rescale_ts() to simplify timestamp conversion.
2014-06-18 - ac293b6 / 194be1f - lavf 55.44.100 / 55.20.0 - avformat.h
The proper way for providing a hint about the desired timebase to the muxers
is now setting AVStream.time_base, instead of AVStream.codec.time_base as was
done previously. The old method is now deprecated.
2014-06-11 - 67d29da - lavc 55.66.101 - avcodec.h
Increase FF_INPUT_BUFFER_PADDING_SIZE to 32 due to some corner cases needing
it
2014-06-10 - xxxxxxx - lavf 55.43.100 - avformat.h
New field int64_t max_analyze_duration2 instead of deprecated
int max_analyze_duration.
2014-05-30 - 00759d7 - lavu 52.89.100 - opt.h
Add av_opt_copy()
2014-06-01 - 03bb99a / 0957b27 - lavc 55.66.100 / 55.54.0 - avcodec.h
Add AVCodecContext.side_data_only_packets to allow encoders to output packets
with only side data. This option may become mandatory in the future, so all
users are recommended to update their code and enable this option.
2014-06-01 - 6e8e9f1 / 8c02adc - lavu 52.88.100 / 53.16.0 - frame.h, pixfmt.h
Move all color-related enums (AVColorPrimaries, AVColorSpace, AVColorRange,
AVColorTransferCharacteristic, and AVChromaLocation) inside lavu.
And add AVFrame fields for them.
2014-05-29 - bdb2e80 / b2d4565 - lavr 1.3.0 - avresample.h
Add avresample_max_output_samples
2014-05-28 - d858ee7 / 6d21259 - lavf 55.42.100 / 55.19.0 - avformat.h
Add strict_std_compliance and related AVOptions to support experimental
muxing.
2014-05-26 - xxxxxxx - lavu 52.87.100 - threadmessage.h
Add thread message queue API.
2014-05-26 - c37d179 - lavf 55.41.100 - avformat.h
Add format_probesize to AVFormatContext.
2014-05-20 - 7d25af1 / c23c96b - lavf 55.39.100 / 55.18.0 - avformat.h
Add av_stream_get_side_data() to access stream-level side data
in the same way as av_packet_get_side_data().
2014-05-xx - xxxxxxx - lavu 52.86.100 - fifo.h
Add av_fifo_alloc_array() function.
2014-05-19 - ef1d4ee / bddd8cb - lavu 52.85.100 / 53.15.0 - frame.h, display.h
Add AV_FRAME_DATA_DISPLAYMATRIX for exporting frame-level
spatial rendering on video frames for proper display.
2014-05-19 - ef1d4ee / bddd8cb - lavc 55.64.100 / 55.53.0 - avcodec.h
Add AV_PKT_DATA_DISPLAYMATRIX for exporting packet-level
spatial rendering on video frames for proper display.
2014-05-19 - 999a99c / a312f71 - lavf 55.38.101 / 55.17.1 - avformat.h
Deprecate AVStream.pts and the AVFrac struct, which was its only use case.
See use av_stream_get_end_pts()
2014-05-18 - 68c0518 / fd05602 - lavc 55.63.100 / 55.52.0 - avcodec.h
Add avcodec_free_context(). From now on it should be used for freeing
AVCodecContext.
2014-05-17 - 0eec06e - lavu 52.84.100 - time.h
Add av_gettime_relative() av_gettime_relative_is_monotonic()
2014-05-15 - eacf7d6 / 0c1959b - lavf 55.38.100 / 55.17.0 - avformat.h
Add AVFMT_FLAG_BITEXACT flag. Muxers now use it instead of checking
CODEC_FLAG_BITEXACT on the first stream.
2014-05-15 - 96cb4c8 - lswr 0.19.100 - swresample.h
Add swr_close()
2014-05-11 - 14aef38 / 66e6c8a - lavu 52.83.100 / 53.14.0 - pixfmt.h
Add AV_PIX_FMT_VDA for new-style VDA acceleration.
2014-05-xx - xxxxxxx - lavu 52.82.0 - fifo.h
Add av_fifo_freep() function.
2014-05-02 - ba52fb11 - lavu 52.81.0 - opt.h
Add av_opt_set_dict2() function.
2014-05-01 - e77b985 / a2941c8 - lavc 55.60.103 / 55.50.3 - avcodec.h
Deprecate CODEC_FLAG_MV0. It is replaced by the flag "mv0" in the
"mpv_flags" private option of the mpegvideo encoders.
2014-05-01 - e40ae8c / 6484149 - lavc 55.60.102 / 55.50.2 - avcodec.h
Deprecate CODEC_FLAG_GMC. It is replaced by the "gmc" private option of the
libxvid encoder.
2014-05-01 - 1851643 / b2c3171 - lavc 55.60.101 / 55.50.1 - avcodec.h
Deprecate CODEC_FLAG_NORMALIZE_AQP. It is replaced by the flag "naq" in the
"mpv_flags" private option of the mpegvideo encoders.
2014-05-01 - cac07d0 / 5fcceda - avcodec.h
Deprecate CODEC_FLAG_INPUT_PRESERVED. Its functionality is replaced by passing
reference-counted frames to encoders.
2014-04-29 - 1bf6396 - lavc 55.60.100 - avcodec.h
Add AVCodecDescriptor.mime_types field.
2014-04-29 - xxxxxxx - lavu 52.80.0 - hash.h
Add av_hash_final_bin(), av_hash_final_hex() and av_hash_final_b64().
2014-03-07 - 8b2a130 - lavc 55.50.0 / 55.53.100 - dxva2.h
Add FF_DXVA2_WORKAROUND_INTEL_CLEARVIDEO for old Intel GPUs.
2014-04-22 - 502512e /dac7e8a - lavu 53.13.0 / 52.78.100 - avutil.h
Add av_get_time_base_q().
2014-04-17 - a8d01a7 / 0983d48 - lavu 53.12.0 / 52.77.100 - crc.h
Add AV_CRC_16_ANSI_LE crc variant.
2014-04-XX - xxxxxxx - lavf xx.xx.1xx - avformat.h
Add av_format_inject_global_side_data()
2014-04-12 - 4f698be - lavu 52.76.100 - log.h
Add av_log_get_flags()
2014-04-11 - 6db42a2b - lavd 55.12.100 - avdevice.h
Add avdevice_capabilities_create() function.
Add avdevice_capabilities_free() function.
2014-04-07 - 0a1cc04 / 8b17243 - lavu 52.75.100 / 53.11.0 - pixfmt.h
Add AV_PIX_FMT_YVYU422 pixel format.
2014-04-04 - c1d0536 / 8542f9c - lavu 52.74.100 / 53.10.0 - replaygain.h
Full scale for peak values is now 100000 (instead of UINT32_MAX) and values
may overflow.
2014-04-03 - c16e006 / 7763118 - lavu 52.73.100 / 53.9.0 - log.h
Add AV_LOG(c) macro to have 256 color debug messages.
2014-04-03 - eaed4da9 - lavu 52.72.100 - opt.h
Add AV_OPT_MULTI_COMPONENT_RANGE define to allow return
multi-component option ranges.
2014-03-29 - cd50a44b - lavu 52.70.100 - mem.h
Add av_dynarray_add_nofree() function.
2014-02-24 - 3e1f241 / d161ae0 - lavu 52.69.100 / 53.8.0 - frame.h
Add av_frame_remove_side_data() for removing a single side data
instance from a frame.
2014-03-24 - 83e8978 / 5a7e35d - lavu 52.68.100 / 53.7.0 - frame.h, replaygain.h
Add AV_FRAME_DATA_REPLAYGAIN for exporting replaygain tags.
Add a new header replaygain.h with the AVReplayGain struct.
2014-03-24 - 83e8978 / 5a7e35d - lavc 55.54.100 / 55.36.0 - avcodec.h
Add AV_PKT_DATA_REPLAYGAIN for exporting replaygain tags.
2014-03-24 - 595ba3b / 25b3258 - lavf 55.35.100 / 55.13.0 - avformat.h
Add AVStream.side_data and AVStream.nb_side_data for exporting stream-global
side data (e.g. replaygain tags, video rotation)
2014-03-24 - bd34e26 / 0e2c3ee - lavc 55.53.100 / 55.35.0 - avcodec.h
Give the name AVPacketSideData to the previously anonymous struct used for
AVPacket.side_data.
-------- 8< --------- FFmpeg 2.2 was cut here -------- 8< ---------
2014-03-18 - 37c07d4 - lsws 2.5.102
Make gray16 full-scale.
2014-03-16 - 6b1ca17 / 1481d24 - lavu 52.67.100 / 53.6.0 - pixfmt.h
Add RGBA64_LIBAV pixel format and variants for compatibility
2014-03-11 - 3f3229c - lavf 55.34.101 - avformat.h
Set AVFormatContext.start_time_realtime when demuxing.
2014-03-03 - 06fed440 - lavd 55.11.100 - avdevice.h
Add av_input_audio_device_next().
Add av_input_video_device_next().
Add av_output_audio_device_next().
Add av_output_video_device_next().
2014-02-24 - fff5262 / 1155fd0 - lavu 52.66.100 / 53.5.0 - frame.h
Add av_frame_copy() for copying the frame data.
2014-02-24 - a66be60 - lswr 0.18.100 - swresample.h
Add swr_is_initialized() for checking whether a resample context is initialized.
2014-02-22 - 5367c0b / 7e86c27 - lavr 1.2.0 - avresample.h
Add avresample_is_open() for checking whether a resample context is open.
2014-02-19 - 6a24d77 / c3ecd96 - lavu 52.65.100 / 53.4.0 - opt.h
Add AV_OPT_FLAG_EXPORT and AV_OPT_FLAG_READONLY to mark options meant (only)
for reading.
2014-02-19 - f4c8d00 / 6bb8720 - lavu 52.64.101 / 53.3.1 - opt.h
Deprecate unused AV_OPT_FLAG_METADATA.
2014-02-xx - xxxxxxx - lavd 55.10.100 - avdevice.h
Add avdevice_list_devices() and avdevice_free_list_devices()
2014-02-16 - db3c970 - lavf 55.33.100 - avio.h
Add avio_find_protocol_name() to find out the name of the protocol that would
be selected for a given URL.
2014-02-15 - a2bc6c1 / c98f316 - lavu 52.64.100 / 53.3.0 - frame.h
Add AV_FRAME_DATA_DOWNMIX_INFO value to the AVFrameSideDataType enum and
downmix_info.h API, which identify downmix-related metadata.
2014-02-11 - 1b05ac2 - lavf 55.32.100 - avformat.h
Add av_write_uncoded_frame() and av_interleaved_write_uncoded_frame().
2014-02-04 - 3adb5f8 / d9ae103 - lavf 55.30.100 / 55.11.0 - avformat.h
Add AVFormatContext.max_interleave_delta for controlling amount of buffering
when interleaving.
2014-02-02 - 5871ee5 - lavf 55.29.100 - avformat.h
Add output_ts_offset muxing option to AVFormatContext.
2014-01-27 - 102bd64 - lavd 55.7.100 - avdevice.h
lavf 55.28.100 - avformat.h
Add avdevice_dev_to_app_control_message() function.
2014-01-27 - 7151411 - lavd 55.6.100 - avdevice.h
lavf 55.27.100 - avformat.h
Add avdevice_app_to_dev_control_message() function.
2014-01-24 - 86bee79 - lavf 55.26.100 - avformat.h
Add AVFormatContext option metadata_header_padding to allow control over the
amount of padding added.
2014-01-20 - eef74b2 / 93c553c - lavc 55.48.102 / 55.32.1 - avcodec.h
Edges are not required anymore on video buffers allocated by get_buffer2()
(i.e. as if the CODEC_FLAG_EMU_EDGE flag was always on). Deprecate
CODEC_FLAG_EMU_EDGE and avcodec_get_edge_width().
2014-01-19 - 1a193c4 - lavf 55.25.100 - avformat.h
Add avformat_get_mov_video_tags() and avformat_get_mov_audio_tags().
2014-01-19 - xxxxxxx - lavu 52.63.100 - rational.h
Add av_make_q() function.
2014-01-05 - 4cf4da9 / 5b4797a - lavu 52.62.100 / 53.2.0 - frame.h
Add AV_FRAME_DATA_MATRIXENCODING value to the AVFrameSideDataType enum, which
identifies AVMatrixEncoding data.
2014-01-05 - 751385f / 5c437fb - lavu 52.61.100 / 53.1.0 - channel_layout.h
Add values for various Dolby flags to the AVMatrixEncoding enum.
2014-01-04 - b317f94 - lavu 52.60.100 - mathematics.h
Add av_add_stable() function.
2013-12-22 - 911676c - lavu 52.59.100 - avstring.h
Add av_strnlen() function.
2013-12-09 - 64f73ac - lavu 52.57.100 - opencl.h
Add av_opencl_benchmark() function.
2013-11-30 - 82b2e9c - lavu 52.56.100 - ffversion.h
Moves version.h to libavutil/ffversion.h.
Install ffversion.h and make it public.
2013-12-11 - 29c83d2 / b9fb59d,409a143 / 9431356,44967ab / d7b3ee9 - lavc 55.45.101 / 55.28.1 - avcodec.h
av_frame_alloc(), av_frame_unref() and av_frame_free() now can and should be
used instead of avcodec_alloc_frame(), avcodec_get_frame_defaults() and
avcodec_free_frame() respectively. The latter three functions are deprecated.
2013-12-09 - 7a60348 / 7e244c6- - lavu 52.58.100 / 52.20.0 - frame.h
Add AV_FRAME_DATA_STEREO3D value to the AVFrameSideDataType enum and
stereo3d.h API, that identify codec-independent stereo3d information.
2013-11-26 - 625b290 / 1eaac1d- - lavu 52.55.100 / 52.19.0 - frame.h
Add AV_FRAME_DATA_A53_CC value to the AVFrameSideDataType enum, which
identifies ATSC A53 Part 4 Closed Captions data.
2013-11-22 - 6859065 - lavu 52.54.100 - avstring.h
Add av_utf8_decode() function.
2013-11-22 - fb7d70c - lavc 55.44.100 - avcodec.h
Add HEVC profiles
2013-11-20 - c28b61c - lavc 55.44.100 - avcodec.h
Add av_packet_{un,}pack_dictionary()
Add AV_PKT_METADATA_UPDATE side data type, used to transmit key/value
strings between a stream and the application.
2013-11-14 - 7c888ae / cce3e0a - lavu 52.53.100 / 52.18.0 - mem.h
Move av_fast_malloc() and av_fast_realloc() for libavcodec to libavutil.
2013-11-14 - b71e4d8 / 8941971 - lavc 55.43.100 / 55.27.0 - avcodec.h
Deprecate AVCodecContext.error_rate, it is replaced by the 'error_rate'
private option of the mpegvideo encoder family.
2013-11-14 - 31c09b7 / 728c465 - lavc 55.42.100 / 55.26.0 - vdpau.h
Add av_vdpau_get_profile().
Add av_vdpau_alloc_context(). This function must from now on be
used for allocating AVVDPAUContext.
2013-11-04 - be41f21 / cd8f772 - lavc 55.41.100 / 55.25.0 - avcodec.h
lavu 52.51.100 - frame.h
Add ITU-R BT.2020 and other not yet included values to color primaries,
transfer characteristics and colorspaces.
2013-11-04 - 85cabf1 - lavu 52.50.100 - avutil.h
Add av_fopen_utf8()
2013-10-31 - 78265fc / 28096e0 - lavu 52.49.100 / 52.17.0 - frame.h
Add AVFrame.flags and AV_FRAME_FLAG_CORRUPT.
-------- 8< --------- FFmpeg 2.1 was cut here -------- 8< ---------
2013-10-27 - dbe6f9f - lavc 55.39.100 - avcodec.h
Add CODEC_CAP_DELAY support to avcodec_decode_subtitle2.
2013-10-27 - d61617a - lavu 52.48.100 - parseutils.h
Add av_get_known_color_name().
2013-10-17 - 8696e51 - lavu 52.47.100 - opt.h
Add AV_OPT_TYPE_CHANNEL_LAYOUT and channel layout option handlers
av_opt_get_channel_layout() and av_opt_set_channel_layout().
2013-10-06 - ccf96f8 -libswscale 2.5.101 - options.c
Change default scaler to bicubic
2013-10-03 - e57dba0 - lavc 55.34.100 - avcodec.h
Add av_codec_get_max_lowres()
2013-10-02 - 5082fcc - lavf 55.19.100 - avformat.h
Add audio/video/subtitle AVCodec fields to AVFormatContext to force specific
decoders
2013-09-28 - 7381d31 / 0767bfd - lavfi 3.88.100 / 3.11.0 - avfilter.h
Add AVFilterGraph.execute and AVFilterGraph.opaque for custom slice threading
implementations.
2013-09-21 - 85f8a3c / e208e6d - lavu 52.46.100 / 52.16.0 - pixfmt.h
Add interleaved 4:2:2 8/10-bit formats AV_PIX_FMT_NV16 and
AV_PIX_FMT_NV20.
2013-09-16 - c74c3fb / 3feb3d6 - lavu 52.44.100 / 52.15.0 - mem.h
Add av_reallocp.
2013-09-04 - 3e1f507 - lavc 55.31.101 - avcodec.h
avcodec_close() argument can be NULL.
2013-09-04 - 36cd017a - lavf 55.16.101 - avformat.h
avformat_close_input() argument can be NULL and point on NULL.
2013-08-29 - e31db62 - lavf 55.15.100 - avformat.h
Add av_format_get_probe_score().
2013-08-15 - 1e0e193 - lsws 2.5.100 -
Add a sws_dither AVOption, allowing to set the dither algorithm used
2013-08-11 - d404fe35 - lavc 55.27.100 - vdpau.h
Add a render2 alternative to the render callback function.
2013-08-11 - af05edc - lavc 55.26.100 - vdpau.h
Add allocation function for AVVDPAUContext, allowing
to extend it in the future without breaking ABI/API.
2013-08-10 - 67a580f / 5a9a9d4 - lavc 55.25.100 / 55.16.0 - avcodec.h
Extend AVPacket API with av_packet_unref, av_packet_ref,
av_packet_move_ref, av_packet_copy_props, av_packet_free_side_data.
2013-08-05 - 9547e3e / f824535 - lavc 55.22.100 / 55.13.0 - avcodec.h
Deprecate the bitstream-related members from struct AVVDPAUContext.
The bitstream buffers no longer need to be explicitly freed.
2013-08-05 - 3b805dc / 549294f - lavc 55.21.100 / 55.12.0 - avcodec.h
Deprecate the CODEC_CAP_HWACCEL_VDPAU codec capability. Use CODEC_CAP_HWACCEL
and select the AV_PIX_FMT_VDPAU format with get_format() instead.
2013-08-05 - 4ee0984 / a0ad5d0 - lavu 52.41.100 / 52.14.0 - pixfmt.h
Deprecate AV_PIX_FMT_VDPAU_*. Use AV_PIX_FMT_VDPAU instead.
2013-08-02 - 82fdfe8 / a8b1927 - lavc 55.20.100 / 55.11.0 - avcodec.h
Add output_picture_number to AVCodecParserContext.
2013-07-23 - abc8110 - lavc 55.19.100 - avcodec.h
Add avcodec_chroma_pos_to_enum()
Add avcodec_enum_to_chroma_pos()
-------- 8< --------- FFmpeg 2.0 was cut here -------- 8< ---------
2013-07-03 - 838bd73 - lavfi 3.78.100 - avfilter.h
2013-07-03 - xxxxxxx - lavfi 3.78.100 - avfilter.h
Deprecate avfilter_graph_parse() in favor of the equivalent
avfilter_graph_parse_ptr().
2013-06-24 - af5f9c0 / 95d5246 - lavc 55.17.100 / 55.10.0 - avcodec.h
2013-06-xx - xxxxxxx - lavc 55.10.0 - avcodec.h
Add MPEG-2 AAC profiles
2013-06-25 - af5f9c0 / 95d5246 - lavf 55.10.100 - avformat.h
2013-06-xx - xxxxxxx - lavf 55.10.100 - avformat.h
Add AV_DISPOSITION_* flags to indicate text track kind.
2013-06-15 - 99b8cd0 - lavu 52.36.100
2013-06-xx - xxxxxxx - lavu 52.36.100
Add AVRIPEMD:
av_ripemd_alloc()
av_ripemd_init()
av_ripemd_update()
av_ripemd_final()
2013-06-04 - 30b491f / fc962d4 - lavu 52.35.100 / 52.13.0 - mem.h
2013-06-05 - fc962d4 - lavu 52.13.0 - mem.h
Add av_realloc_array and av_reallocp_array
2013-05-30 - 682b227 - lavu 52.35.100
@@ -547,58 +42,55 @@ API changes, most recent first:
av_sha512_update()
av_sha512_final()
2013-05-24 - 8d4e969 / 129bb23 - lavfi 3.10.0 / 3.70.100 - avfilter.h
2013-05-24 - xxxxxxx - lavfi 3.70.100 - avfilter.h
Add support for slice multithreading to lavfi. Filters supporting threading
are marked with AVFILTER_FLAG_SLICE_THREADS.
New fields AVFilterContext.thread_type, AVFilterGraph.thread_type and
AVFilterGraph.nb_threads (accessible directly or through AVOptions) may be
used to configure multithreading.
2013-05-24 - fe40a9f / 2a6eaea - lavu 52.12.0 / 52.34.100 - cpu.h
2013-05-24 - xxxxxxx - lavu 52.34.100 - cpu.h
Add av_cpu_count() function for getting the number of logical CPUs.
2013-05-24 - 0c25c39 / b493847 - lavc 55.7.0 / 55.12.100 - avcodec.h
2013-05-24 - xxxxxxx - lavc 55.12.100 - avcodec.h
Add picture_structure to AVCodecParserContext.
2013-05-17 - 3a751ea - lavu 52.33.100 - opt.h
2013-05-17 - xxxxxxx - lavu 52.33.100 - opt.h
Add AV_OPT_TYPE_COLOR value to AVOptionType enum.
2013-05-13 - e398416 - lavu 52.31.100 - mem.h
2013-05-13 - xxxxxxx - lavu 52.31.100 - mem.h
Add av_dynarray2_add().
2013-05-12 - 1776177 - lavfi 3.65.100
2013-05-12 - xxxxxxx - lavfi 3.65.100
Add AVFILTER_FLAG_SUPPORT_TIMELINE* filter flags.
2013-04-19 - 380cfce - lavc 55.4.100
2013-04-19 - xxxxxxx - lavc 55.4.100
Add AV_CODEC_PROP_TEXT_SUB property for text based subtitles codec.
2013-04-18 - 7c1a002 - lavf 55.3.100
2013-04-18 - xxxxxxx - lavf 55.3.100
The matroska demuxer can now output proper verbatim ASS packets. It will
become the default starting lavf 56.0.100.
2013-04-10 - af0d270 - lavu 25.26.100 - avutil.h,opt.h
2013-04-10 - xxxxxxx - lavu 25.26.100 - avutil.h,opt.h
Add av_int_list_length()
and av_opt_set_int_list().
2013-03-30 - 5c73645 - lavu 52.24.100 - samplefmt.h
2013-03-30 - xxxxxxx - lavu 52.24.100 - samplefmt.h
Add av_samples_alloc_array_and_samples().
2013-03-29 - ef7b6b4 - lavf 55.1.100 - avformat.h
2013-03-29 - xxxxxxx - lavf 55.1.100 - avformat.h
Add av_guess_frame_rate()
2013-03-20 - 8d928a9 - lavu 52.22.100 - opt.h
2013-03-20 - xxxxxxx - lavu 52.22.100 - opt.h
Add AV_OPT_TYPE_DURATION value to AVOptionType enum.
2013-03-17 - 7aa9af5 - lavu 52.20.100 - opt.h
2013-03-17 - xxxxxx - lavu 52.20.100 - opt.h
Add AV_OPT_TYPE_VIDEO_RATE value to AVOptionType enum.
-------- 8< --------- FFmpeg 1.2 was cut here -------- 8< ---------
2013-03-07 - 9767ec6 - lavu 52.18.100 - avstring.h,bprint.h
2013-03-07 - xxxxxx - lavu 52.18.100 - avstring.h,bprint.h
Add av_escape() and av_bprint_escape() API.
2013-02-24 - b59cd08 - lavfi 3.41.100 - buffersink.h
2013-02-24 - xxxxxx - lavfi 3.41.100 - buffersink.h
Add sample_rates field to AVABufferSinkParams.
2013-01-17 - a1a707f - lavf 54.61.100
@@ -607,14 +99,11 @@ API changes, most recent first:
2013-01-01 - 2eb2e17 - lavfi 3.34.100
Add avfilter_get_audio_buffer_ref_from_arrays_channels.
-------- 8< --------- FFmpeg 1.1 was cut here -------- 8< ---------
2012-12-20 - 34de47aa - lavfi 3.29.100 - avfilter.h
Add AVFilterLink.channels, avfilter_link_get_channels()
and avfilter_ref_get_channels().
2012-12-15 - 96d815fc - lavc 54.80.100 - avcodec.h
2012-12-15 - 2ada584d - lavc 54.80.100 - avcodec.h
Add pkt_size field to AVFrame.
2012-11-25 - c70ec631 - lavu 52.9.100 - opt.h
@@ -655,9 +144,6 @@ API changes, most recent first:
Add LIBSWRESAMPLE_VERSION, LIBSWRESAMPLE_BUILD
and LIBSWRESAMPLE_IDENT symbols.
-------- 8< --------- FFmpeg 1.0 was cut here -------- 8< ---------
2012-09-06 - 29e972f - lavu 51.72.100 - parseutils.h
Add av_small_strptime() time parsing function.
@@ -730,52 +216,52 @@ API changes, most recent first:
2012-03-26 - a67d9cf - lavfi 2.66.100
Add avfilter_fill_frame_from_{audio_,}buffer_ref() functions.
2013-05-15 - ff46809 / e6c4ac7 - lavu 52.32.100 / 52.11.0 - pixdesc.h
2013-05-xx - xxxxxxx - lavu 52.11.0 - pixdesc.h
Replace PIX_FMT_* flags with AV_PIX_FMT_FLAG_*.
2013-04-03 - 6fc58a8 / 507b1e4 - lavc 55.7.100 / 55.4.0 - avcodec.h
2013-04-xx - xxxxxxx - lavc 55.4.0 - avcodec.h
Add field_order to AVCodecParserContext.
2013-04-19 - f4b05cd / 5e83d9a - lavc 55.5.100 / 55.2.0 - avcodec.h
2013-03-xx - xxxxxxx - lavc 55.2.0 - avcodec.h
Add CODEC_FLAG_UNALIGNED to allow decoders to produce unaligned output.
2013-04-11 - lavfi 3.53.100 / 3.8.0
231fd44 / 38f0c07 - Move all content from avfiltergraph.h to avfilter.h. Deprecate
2013-04-11 - lavfi 3.8.0
38f0c07 - Move all content from avfiltergraph.h to avfilter.h. Deprecate
avfiltergraph.h, user applications should include just avfilter.h
86070b8 / bc1a985 - Add avfilter_graph_alloc_filter(), deprecate avfilter_open() and
bc1a985 - Add avfilter_graph_alloc_filter(), deprecate avfilter_open() and
avfilter_graph_add_filter().
4fde705 / 1113672 - Add AVFilterContext.graph pointing to the AVFilterGraph that contains the
1113672 - Add AVFilterContext.graph pointing to the AVFilterGraph that contains the
filter.
710b0aa / 48a5ada - Add avfilter_init_str(), deprecate avfilter_init_filter().
46de9ba / 1ba95a9 - Add avfilter_init_dict().
16fc24b / 7cdd737 - Add AVFilter.flags field and AVFILTER_FLAG_DYNAMIC_{INPUTS,OUTPUTS} flags.
f4db6bf / 7e8fe4b - Add avfilter_pad_count() for counting filter inputs/outputs.
835cc0f / fa2a34c - Add avfilter_next(), deprecate av_filter_next().
48a5ada - Add avfilter_init_str(), deprecate avfilter_init_filter().
1ba95a9 - Add avfilter_init_dict().
7cdd737 - Add AVFilter.flags field and AVFILTER_FLAG_DYNAMIC_{INPUTS,OUTPUTS} flags.
7e8fe4b - Add avfilter_pad_count() for counting filter inputs/outputs.
fa2a34c - Add avfilter_next(), deprecate av_filter_next().
Deprecate avfilter_uninit().
2013-04-09 - lavfi 3.51.100 / 3.7.0 - avfilter.h
0594ef0 / b439c99 - Add AVFilter.priv_class for exporting filter options through the
2013-04-09 - lavfi 3.7.0 - avfilter.h
b439c99 - Add AVFilter.priv_class for exporting filter options through the
AVOptions API in the similar way private options work in lavc and lavf.
44d4488 / 8114c10 - Add avfilter_get_class().
8114c10 - Add avfilter_get_class().
Switch all filters to use AVOptions.
2013-03-19 - 17ebef2 / 2c328a9 - lavu 52.20.100 / 52.9.0 - pixdesc.h
2013-03-19 - 2c328a9 - lavu 52.9.0 - pixdesc.h
Add av_pix_fmt_count_planes() function for counting planes in a pixel format.
2013-03-16 - ecade98 / 42c7c61 - lavfi 3.47.100 / 3.6.0
2013-03-16 - 42c7c61 - lavfi 3.6.0
Add AVFilterGraph.nb_filters, deprecate AVFilterGraph.filter_count.
2013-03-08 - Reference counted buffers - lavu 52.8.0, lavc 55.0.100 / 55.0.0, lavf 55.0.100 / 55.0.0,
lavd 54.4.100 / 54.0.0, lavfi 3.5.0
36099df / 8e401db, 532f31a / 1cec062 - add a new API for reference counted buffers and buffer
2013-03-08 - Reference counted buffers - lavu 52.8.0, lavc 55.0.0, lavf 55.0.0,
lavd 54.0.0, lavfi 3.5.0
8e401db, 1cec062 - add a new API for reference counted buffers and buffer
pools (new header libavutil/buffer.h).
2653e12 / 1afddbe - add AVPacket.buf to allow reference counting for the AVPacket data.
1afddbe - add AVPacket.buf to allow reference counting for the AVPacket data.
Add av_packet_from_data() function for constructing packets from
av_malloc()ed data.
c4e8821 / 7ecc2d4 - move AVFrame from lavc to lavu (new header libavutil/frame.h), add
7ecc2d4 - move AVFrame from lavc to lavu (new header libavutil/frame.h), add
AVFrame.buf/extended_buf to allow reference counting for the AVFrame
data. Add new API for working with reference-counted AVFrames.
80e9e63 / 759001c - add the refcounted_frames field to AVCodecContext to make audio and
759001c - add the refcounted_frames field to AVCodecContext to make audio and
video decoders return reference-counted frames. Add get_buffer2()
callback to AVCodecContext which allocates reference-counted frames.
Add avcodec_default_get_buffer2() as the default get_buffer2()
@@ -793,30 +279,30 @@ lavd 54.4.100 / 54.0.0, lavfi 3.5.0
* qscale_table, qstride, qscale_type, mbskip_table, motion_val,
mb_type, dct_coeff, ref_index -- mpegvideo-specific tables,
which are not exported anymore.
a05a44e / 7e35037 - switch libavfilter to use AVFrame instead of AVFilterBufferRef. Add
7e35037 - switch libavfilter to use AVFrame instead of AVFilterBufferRef. Add
av_buffersrc_add_frame(), deprecate av_buffersrc_buffer().
Add av_buffersink_get_frame() and av_buffersink_get_samples(),
deprecate av_buffersink_read() and av_buffersink_read_samples().
Deprecate AVFilterBufferRef and all functions for working with it.
2013-03-17 - 6c17ff8 / 12c5c1d - lavu 52.19.100 / 52.8.0 - avstring.h
2013-03-17 - 12c5c1d - lavu 52.8.0 - avstring.h
Add av_isdigit, av_isgraph, av_isspace, av_isxdigit.
2013-02-23 - 71cf094 / 9f12235 - lavfi 3.40.100 / 3.4.0 - avfiltergraph.h
2013-02-23 - 9f12235 - lavfi 3.4.0 - avfiltergraph.h
Add resample_lavr_opts to AVFilterGraph for setting libavresample options
for auto-inserted resample filters.
2013-01-25 - e7e14bc / 38c1466 - lavu 52.17.100 / 52.7.0 - dict.h
2013-01-25 - 38c1466 - lavu 52.7.0 - dict.h
Add av_dict_parse_string() to set multiple key/value pairs at once from a
string.
2013-01-25 - 25be630 / b85a5e8 - lavu 52.16.100 / 52.6.0 - avstring.h
2013-01-25 - b85a5e8 - lavu 52.6.0 - avstring.h
Add av_strnstr()
2013-01-15 - e7e0186 / 8ee288d - lavu 52.15.100 / 52.5.0 - hmac.h
2013-01-15 - 8ee288d - lavu 52.5.0 - hmac.h
Add AVHMAC.
2013-01-13 - 8ee7b38 / 44e065d - lavc 54.87.100 / 54.36.0 - vdpau.h
2013-01-13 - 44e065d - lavc 54.87.100 / 54.36.0 - vdpau.h
Add AVVDPAUContext struct for VDPAU hardware-accelerated decoding.
2013-01-12 - dae382b / 169fb94 - lavu 52.14.100 / 52.4.0 - pixdesc.h
@@ -904,7 +390,7 @@ lavd 54.4.100 / 54.0.0, lavfi 3.5.0
2012-07-29 - 7c26761 / 681ed00 - lavf 54.22.100 / 54.13.0 - avformat.h
Add AVFMT_FLAG_NOBUFFER for low latency use cases.
2012-07-10 - fbe0245 / f3e5e6f - lavu 51.65.100 / 51.37.0
2012-07-10 - 5fade8a - lavu 51.37.0
Add av_malloc_array() and av_mallocz_array()
2012-06-22 - e847f41 / d3d3a32 - lavu 51.61.100 / 51.34.0
@@ -1076,9 +562,6 @@ lavd 54.4.100 / 54.0.0, lavfi 3.5.0
2012-01-12 - b18e17e / 3167dc9 - lavfi 2.59.100 / 2.15.0
Add a new installed header -- libavfilter/version.h -- with version macros.
-------- 8< --------- FFmpeg 0.9 was cut here -------- 8< ---------
2011-12-08 - a502939 - lavfi 2.52.0
Add av_buffersink_poll_frame() to buffersink.h.
@@ -1107,9 +590,6 @@ lavd 54.4.100 / 54.0.0, lavfi 3.5.0
Add avformat_close_input().
Deprecate av_close_input_file() and av_close_input_stream().
2011-12-09 - c59b80c / b2890f5 - lavu 51.32.0 / 51.20.0 - audioconvert.h
Expand the channel layout list.
2011-12-02 - e4de716 / 0eea212 - lavc 53.40.0 / 53.25.0
Add nb_samples and extended_data fields to AVFrame.
Deprecate AVCODEC_MAX_AUDIO_FRAME_SIZE.
@@ -1123,10 +603,6 @@ lavd 54.4.100 / 54.0.0, lavfi 3.5.0
Change AVCodecContext.error[4] to [8] at next major bump.
Add AV_NUM_DATA_POINTERS to simplify the bump transition.
2011-11-24 - lavu 51.29.0 / 51.19.0
92afb43 / bd97b2e - add planar RGB pixel formats
92afb43 / 6b0768e - add PIX_FMT_PLANAR and PIX_FMT_RGB pixel descriptions
2011-11-23 - 8e576d5 / bbb46f3 - lavu 51.27.0 / 51.18.0
Add av_samples_get_buffer_size(), av_samples_fill_arrays(), and
av_samples_alloc(), to samplefmt.h.
@@ -1288,13 +764,6 @@ lavd 54.4.100 / 54.0.0, lavfi 3.5.0
2011-06-28 - 5129336 - lavu 51.11.0 - avutil.h
Define the AV_PICTURE_TYPE_NONE value in AVPictureType enum.
-------- 8< --------- FFmpeg 0.7 was cut here -------- 8< ---------
-------- 8< --------- FFmpeg 0.8 was cut here -------- 8< ---------
2011-06-19 - fd2c0a5 - lavfi 2.23.0 - avfilter.h
Add layout negotiation fields and helper functions.
@@ -1972,9 +1441,6 @@ lavd 54.4.100 / 54.0.0, lavfi 3.5.0
2010-06-02 - 7e566bb - lavc 52.73.0 - av_get_codec_tag_string()
Add av_get_codec_tag_string().
-------- 8< --------- FFmpeg 0.6 was cut here -------- 8< ---------
2010-06-01 - 2b99142 - lsws 0.11.0 - convertPalette API
Add sws_convertPalette8ToPacked32() and sws_convertPalette8ToPacked24().
@@ -1992,6 +1458,10 @@ lavd 54.4.100 / 54.0.0, lavfi 3.5.0
2010-05-09 - b6bc205 - lavfi 1.20.0 - AVFilterPicRef
Add interlaced and top_field_first fields to AVFilterPicRef.
------------------------------8<-------------------------------------
0.6 branch was cut here
----------------------------->8--------------------------------------
2010-05-01 - 8e2ee18 - lavf 52.62.0 - probe function
Add av_probe_input_format2 to API, it allows ignoring probe
results below given score and returns the actual probe score.
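
Several of the entries above revolve around the reference-counted AVFrame API; in particular, the 2013-12-11 note marks avcodec_alloc_frame(), avcodec_get_frame_defaults() and avcodec_free_frame() as deprecated in favour of the libavutil frame functions. A minimal lifecycle sketch of the replacement API (assuming a decoder fills the frame elsewhere; link against libavutil):

#include <libavutil/frame.h>

/* Minimal lifecycle of a reference-counted AVFrame, per the APIchanges
 * entries above: av_frame_alloc() replaces avcodec_alloc_frame(),
 * av_frame_unref() drops the data references between uses, and
 * av_frame_free() releases the frame itself. */
int main(void)
{
    AVFrame *frame = av_frame_alloc();
    if (!frame)
        return 1;

    /* ... hand `frame` to a decoder, e.g. avcodec_decode_video2(), which
     * attaches reference-counted buffers to it ... */

    av_frame_unref(frame);   /* release the buffers, keep the frame reusable */
    av_frame_free(&frame);   /* free the AVFrame struct; sets frame to NULL */
    return 0;
}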

View File

@@ -31,9 +31,9 @@ PROJECT_NAME = FFmpeg
# This could be handy for archiving the generated documentation or
# if some version control system is used.
PROJECT_NUMBER = 2.4.2
PROJECT_NUMBER = 2.0.3
# With the PROJECT_LOGO tag one can specify a logo or icon that is included
# With the PROJECT_LOGO tag one can specify an logo or icon that is included
# in the documentation. The maximum height of the logo should not exceed 55
# pixels and the maximum width should not exceed 200 pixels. Doxygen will
# copy the logo to the output directory.
@@ -759,7 +759,7 @@ ALPHABETICAL_INDEX = YES
# the COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns
# in which this list will be split (can be a number in the range [1..20])
COLS_IN_ALPHA_INDEX = 5
COLS_IN_ALPHA_INDEX = 2
# In case all classes in a project start with a common prefix, all
# classes will be put under the same header in the alphabetical index.
@@ -793,13 +793,13 @@ HTML_FILE_EXTENSION = .html
# each generated HTML page. If it is left blank doxygen will generate a
# standard header.
HTML_HEADER =
#HTML_HEADER = doc/doxy/header.html
# The HTML_FOOTER tag can be used to specify a personal HTML footer for
# each generated HTML page. If it is left blank doxygen will generate a
# standard footer.
HTML_FOOTER =
#HTML_FOOTER = doc/doxy/footer.html
# The HTML_STYLESHEET tag can be used to specify a user-defined cascading
# style sheet that is used by each HTML page. It can be used to
@@ -808,7 +808,7 @@ HTML_FOOTER =
# the style sheet file to the HTML output directory, so don't put your own
# stylesheet in the HTML output directory as well, or it will be erased!
HTML_STYLESHEET =
#HTML_STYLESHEET = doc/doxy/doxy_stylesheet.css
# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output.
# Doxygen will adjust the colors in the stylesheet and background images
@@ -1056,7 +1056,7 @@ FORMULA_TRANSPARENT = YES
# typically be disabled. For large projects the javascript based search engine
# can be slow, then enabling SERVER_BASED_SEARCH may provide a better solution.
SEARCHENGINE = YES
SEARCHENGINE = NO
# When the SERVER_BASED_SEARCH tag is enabled the search engine will be
# implemented using a PHP enabled web server instead of at the web client
@@ -1359,8 +1359,6 @@ PREDEFINED = "__attribute__(x)=" \
"DECLARE_ALIGNED(a,t,n)=t n" \
"offsetof(x,y)=0x42" \
av_alloc_size \
AV_GCC_VERSION_AT_LEAST(x,y)=1 \
__GNUC__=1 \
# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then
# this tag can be used to specify a list of macro names that should be expanded.

View File

@@ -14,11 +14,11 @@ COMPONENTS-$(CONFIG_AVFORMAT) += ffmpeg-formats ffmpeg-protocols
COMPONENTS-$(CONFIG_AVDEVICE) += ffmpeg-devices
COMPONENTS-$(CONFIG_AVFILTER) += ffmpeg-filters
MANPAGES1 = $(AVPROGS-yes:%=doc/%.1) $(AVPROGS-yes:%=doc/%-all.1) $(COMPONENTS-yes:%=doc/%.1)
MANPAGES1 = $(PROGS-yes:%=doc/%.1) $(PROGS-yes:%=doc/%-all.1) $(COMPONENTS-yes:%=doc/%.1)
MANPAGES3 = $(LIBRARIES-yes:%=doc/%.3)
MANPAGES = $(MANPAGES1) $(MANPAGES3)
PODPAGES = $(AVPROGS-yes:%=doc/%.pod) $(AVPROGS-yes:%=doc/%-all.pod) $(COMPONENTS-yes:%=doc/%.pod) $(LIBRARIES-yes:%=doc/%.pod)
HTMLPAGES = $(AVPROGS-yes:%=doc/%.html) $(AVPROGS-yes:%=doc/%-all.html) $(COMPONENTS-yes:%=doc/%.html) $(LIBRARIES-yes:%=doc/%.html) \
PODPAGES = $(PROGS-yes:%=doc/%.pod) $(PROGS-yes:%=doc/%-all.pod) $(COMPONENTS-yes:%=doc/%.pod) $(LIBRARIES-yes:%=doc/%.pod)
HTMLPAGES = $(PROGS-yes:%=doc/%.html) $(PROGS-yes:%=doc/%-all.html) $(COMPONENTS-yes:%=doc/%.html) $(LIBRARIES-yes:%=doc/%.html) \
doc/developer.html \
doc/faq.html \
doc/fate.html \
@@ -36,28 +36,6 @@ DOCS-$(CONFIG_MANPAGES) += $(MANPAGES)
DOCS-$(CONFIG_TXTPAGES) += $(TXTPAGES)
DOCS = $(DOCS-yes)
DOC_EXAMPLES-$(CONFIG_AVIO_READING_EXAMPLE) += avio_reading
DOC_EXAMPLES-$(CONFIG_AVCODEC_EXAMPLE) += avcodec
DOC_EXAMPLES-$(CONFIG_DECODING_ENCODING_EXAMPLE) += decoding_encoding
DOC_EXAMPLES-$(CONFIG_DEMUXING_DECODING_EXAMPLE) += demuxing_decoding
DOC_EXAMPLES-$(CONFIG_EXTRACT_MVS_EXAMPLE) += extract_mvs
DOC_EXAMPLES-$(CONFIG_FILTER_AUDIO_EXAMPLE) += filter_audio
DOC_EXAMPLES-$(CONFIG_FILTERING_AUDIO_EXAMPLE) += filtering_audio
DOC_EXAMPLES-$(CONFIG_FILTERING_VIDEO_EXAMPLE) += filtering_video
DOC_EXAMPLES-$(CONFIG_METADATA_EXAMPLE) += metadata
DOC_EXAMPLES-$(CONFIG_MUXING_EXAMPLE) += muxing
DOC_EXAMPLES-$(CONFIG_REMUXING_EXAMPLE) += remuxing
DOC_EXAMPLES-$(CONFIG_RESAMPLING_AUDIO_EXAMPLE) += resampling_audio
DOC_EXAMPLES-$(CONFIG_SCALING_VIDEO_EXAMPLE) += scaling_video
DOC_EXAMPLES-$(CONFIG_TRANSCODE_AAC_EXAMPLE) += transcode_aac
DOC_EXAMPLES-$(CONFIG_TRANSCODING_EXAMPLE) += transcoding
ALL_DOC_EXAMPLES_LIST = $(DOC_EXAMPLES-) $(DOC_EXAMPLES-yes)
DOC_EXAMPLES := $(DOC_EXAMPLES-yes:%=doc/examples/%$(PROGSSUF)$(EXESUF))
ALL_DOC_EXAMPLES := $(ALL_DOC_EXAMPLES_LIST:%=doc/examples/%$(PROGSSUF)$(EXESUF))
ALL_DOC_EXAMPLES_G := $(ALL_DOC_EXAMPLES_LIST:%=doc/examples/%$(PROGSSUF)_g$(EXESUF))
PROGS += $(DOC_EXAMPLES)
all-$(CONFIG_DOC): doc
doc: documentation
@@ -65,9 +43,7 @@ doc: documentation
apidoc: doc/doxy/html
documentation: $(DOCS)
examples: $(DOC_EXAMPLES)
TEXIDEP = perl $(SRC_PATH)/doc/texidep.pl $(SRC_PATH) $< $@ >$(@:%=%.d)
TEXIDEP = awk '/^@(verbatim)?include/ { printf "$@: $(@D)/%s\n", $$2 }' <$< >$(@:%=%.d)
doc/%.txt: TAG = TXT
doc/%.txt: doc/%.texi
@@ -82,25 +58,14 @@ $(GENTEXI): doc/avoptions_%.texi: doc/print_options$(HOSTEXESUF)
$(M)doc/print_options $* > $@
doc/%.html: TAG = HTML
doc/%-all.html: TAG = HTML
ifdef HAVE_MAKEINFO_HTML
doc/%.html: doc/%.texi $(SRC_PATH)/doc/t2h.pm $(GENTEXI)
$(Q)$(TEXIDEP)
$(M)makeinfo --html -I doc --no-split -D config-not-all --init-file=$(SRC_PATH)/doc/t2h.pm --output $@ $<
doc/%-all.html: doc/%.texi $(SRC_PATH)/doc/t2h.pm $(GENTEXI)
$(Q)$(TEXIDEP)
$(M)makeinfo --html -I doc --no-split -D config-all --init-file=$(SRC_PATH)/doc/t2h.pm --output $@ $<
else
doc/%.html: doc/%.texi $(SRC_PATH)/doc/t2h.init $(GENTEXI)
$(Q)$(TEXIDEP)
$(M)texi2html -I doc -monolithic --D=config-not-all --init-file $(SRC_PATH)/doc/t2h.init --output $@ $<
doc/%-all.html: TAG = HTML
doc/%-all.html: doc/%.texi $(SRC_PATH)/doc/t2h.init $(GENTEXI)
$(Q)$(TEXIDEP)
$(M)texi2html -I doc -monolithic --D=config-all --init-file $(SRC_PATH)/doc/t2h.init --output $@ $<
endif
doc/%.pod: TAG = POD
doc/%.pod: doc/%.texi $(SRC_PATH)/doc/texi2pod.pl $(GENTEXI)
@@ -119,29 +84,12 @@ doc/%.3: doc/%.pod $(GENTEXI)
$(M)pod2man --section=3 --center=" " --release=" " $< > $@
$(DOCS) doc/doxy/html: | doc/
$(DOC_EXAMPLES:%$(EXESUF)=%.o): | doc/examples
OBJDIRS += doc/examples
DOXY_INPUT = $(addprefix $(SRC_PATH)/, $(INSTHEADERS) $(DOC_EXAMPLES:%$(EXESUF)=%.c) $(LIB_EXAMPLES:%$(EXESUF)=%.c))
doc/doxy/html: TAG = DOXY
doc/doxy/html: $(SRC_PATH)/doc/Doxyfile $(SRC_PATH)/doc/doxy-wrapper.sh $(DOXY_INPUT)
$(M)$(SRC_PATH)/doc/doxy-wrapper.sh $(SRC_PATH) $< $(DOXYGEN) $(DOXY_INPUT)
install-doc: install-html install-man
install-html:
doc/doxy/html: $(SRC_PATH)/doc/Doxyfile $(INSTHEADERS)
$(M)$(SRC_PATH)/doc/doxy-wrapper.sh $(SRC_PATH) $^
install-man:
ifdef CONFIG_HTMLPAGES
install-progs-$(CONFIG_DOC): install-html
install-html: $(HTMLPAGES)
$(Q)mkdir -p "$(DOCDIR)"
$(INSTALL) -m 644 $(HTMLPAGES) "$(DOCDIR)"
endif
ifdef CONFIG_MANPAGES
install-progs-$(CONFIG_DOC): install-man
@@ -152,15 +100,10 @@ install-man: $(MANPAGES)
$(INSTALL) -m 644 $(MANPAGES3) "$(MANDIR)/man3"
endif
uninstall: uninstall-doc
uninstall-doc: uninstall-html uninstall-man
uninstall-html:
$(RM) -r "$(DOCDIR)"
uninstall: uninstall-man
uninstall-man:
$(RM) $(addprefix "$(MANDIR)/man1/",$(AVPROGS-yes:%=%.1) $(AVPROGS-yes:%=%-all.1) $(COMPONENTS-yes:%=%.1))
$(RM) $(addprefix "$(MANDIR)/man1/",$(PROGS-yes:%=%.1) $(PROGS-yes:%=%-all.1) $(COMPONENTS-yes:%=%.1))
$(RM) $(addprefix "$(MANDIR)/man3/",$(LIBRARIES-yes:%=%.3))
clean:: docclean
@@ -168,13 +111,8 @@ clean:: docclean
distclean:: docclean
$(RM) doc/config.texi
examplesclean:
$(RM) $(ALL_DOC_EXAMPLES) $(ALL_DOC_EXAMPLES_G)
$(RM) $(CLEANSUFFIXES:%=doc/examples/%)
docclean: examplesclean
$(RM) $(CLEANSUFFIXES:%=doc/%)
$(RM) $(TXTPAGES) doc/*.html doc/*.pod doc/*.1 doc/*.3 doc/avoptions_*.texi
docclean:
$(RM) $(TXTPAGES) doc/*.html doc/*.pod doc/*.1 doc/*.3 $(CLEANSUFFIXES:%=doc/%) doc/avoptions_*.texi
$(RM) -r doc/doxy/html
-include $(wildcard $(DOCS:%=%.d))

doc/RELEASE_NOTES Normal file

@@ -0,0 +1,16 @@
Release Notes
=============
* 2.0 "Nameless" July, 2013
General notes
-------------
See the Changelog file for a list of significant changes. Note, there
are many more new features and bugfixes than what's listed there.
Bug reports against FFmpeg git master or the most recent FFmpeg release are
accepted. If you are experiencing issues with any formally released version of
FFmpeg, please try git master to check if the issue still exists. If it does,
make your report against the development code following the usual bug reporting
guidelines.


@@ -3,7 +3,7 @@ representing a number as input, which may be followed by one of the SI
unit prefixes, for example: 'K', 'M', or 'G'.
If 'i' is appended to the SI unit prefix, the complete prefix will be
interpreted as a unit prefix for binary multiples, which are based on
powers of 1024 instead of powers of 1000. Appending 'B' to the SI unit
prefix multiplies the value by 8. This allows using, for example:
'KB', 'MiB', 'G' and 'B' as number suffixes.
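As an illustration (the file names are hypothetical), @samp{2M} is read as
2000000 and @samp{128k} as 128000 when given as bitrates:
@example
ffmpeg -i input.mkv -b:v 2M -b:a 128k output.mp4
@end example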
@@ -44,15 +44,8 @@ streams of this type.
If @var{stream_index} is given, then it matches the stream with number @var{stream_index}
in the program with the id @var{program_id}. Otherwise, it matches all streams in the
program.
@item #@var{stream_id} or i:@var{stream_id}
Match the stream by stream id (e.g. PID in MPEG-TS container).
@item m:@var{key}[:@var{value}]
Matches streams with the metadata tag @var{key} having the specified value. If
@var{value} is not given, matches streams that contain the given tag with any
value.
Note that in @command{ffmpeg}, matching by metadata will only work properly for
input files.
@end table
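A sketch of the specifiers above (file names and metadata tags are
hypothetical): select the second audio stream and every stream tagged with
@code{language=eng}, copying them unchanged:
@example
ffmpeg -i input.mkv -map 0:a:1 -map 0:m:language:eng -c copy output.mkv
@end example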
@section Generic options
@@ -66,18 +59,10 @@ Show license.
@item -h, -?, -help, --help [@var{arg}]
Show help. An optional parameter may be specified to print help about a specific
item. If no argument is specified, only basic (non advanced) tool
options are shown.
Possible values of @var{arg} are:
@table @option
@item long
Print advanced tool options in addition to the basic tool options.
@item full
Print complete list of options, including shared and private options
for encoders, decoders, demuxers, muxers, filters, etc.
@item decoder=@var{decoder_name}
Print detailed information about the decoder named @var{decoder_name}. Use the
@option{-decoders} option to get a list of all decoders.
@@ -97,6 +82,7 @@ Print detailed information about the muxer named @var{muxer_name}. Use the
@item filter=@var{filter_name}
Print detailed information about the filter name @var{filter_name}. Use the
@option{-filters} option to get a list of all filters.
@end table
@item -version
@@ -135,9 +121,6 @@ Show available sample formats.
@item -layouts
Show channel names and standard channel layouts.
@item -colors
Show recognized color names.
@item -loglevel [repeat+]@var{loglevel} | -v [repeat+]@var{loglevel}
Set the logging level used by the library.
Adding "repeat+" indicates that repeated log output should not be compressed
@@ -196,20 +179,11 @@ following option is recognized:
set the file name to use for the report; @code{%p} is expanded to the name
of the program, @code{%t} is expanded to a timestamp, @code{%%} is expanded
to a plain @code{%}
@item level
set the log level
@end table
Errors in parsing the environment variable are not fatal, and will not
appear in the report.
@item -hide_banner
Suppress printing banner.
All FFmpeg tools will normally show a copyright notice, build options
and library versions. This option can be used to suppress printing
this information.
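A possible combination of the report settings and @option{-hide_banner}
described above (the file name pattern and the numeric log level are
illustrative only):
@example
FFREPORT=file=ffreport-%p-%t.log:level=32 ffmpeg -hide_banner -i input.mkv -f null -
@end example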
@item -cpuflags flags (@emph{global})
Allows setting and clearing cpu flags. This option is intended
for testing. Do not use it unless you know what you're doing.
@@ -266,10 +240,6 @@ Possible flags for this option are:
@end table
@end table
@item -opencl_bench
Benchmark all available OpenCL devices and show the results. This option
is only available when FFmpeg has been compiled with @code{--enable-opencl}.
@item -opencl_options options (@emph{global})
Set OpenCL environment options. This option is only available when
FFmpeg has been compiled with @code{--enable-opencl}.
@@ -301,12 +271,11 @@ muxer:
ffmpeg -i input.flac -id3v2_version 3 out.mp3
@end example
All codec AVOptions are per-stream, and thus a stream specifier
should be attached to them.

Note: the @option{-nooption} syntax cannot be used for boolean
AVOptions, use @option{-option 0}/@option{-option 1}.

Note: the old undocumented way of specifying per-stream AVOptions by
prepending v/a/s to the options name is now obsolete and will be
removed soon.
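A minimal sketch of attaching a stream specifier to a codec AVOption
(encoder, option and file names are only examples): apply the preset to the
first video stream only, leaving audio untouched:
@example
ffmpeg -i input.mov -c:v libx264 -preset:v:0 slow -c:a copy output.mkv
@end example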

doc/avutil.txt Normal file

@@ -0,0 +1,36 @@
AVUtil
======
libavutil is a small lightweight library of generally useful functions.
It is not a library for code needed by both libavcodec and libavformat.
Overview:
=========
adler32.c adler32 checksum
aes.c AES encryption and decryption
fifo.c resizeable first in first out buffer
intfloat_readwrite.c portable reading and writing of floating point values
log.c "printf" with context and level
md5.c MD5 Message-Digest Algorithm
rational.c code to perform exact calculations with rational numbers
tree.c generic AVL tree
crc.c generic CRC checksumming code
integer.c 128bit integer math
lls.c
mathematics.c greatest common divisor, integer sqrt, integer log2, ...
mem.c memory allocation routines with guaranteed alignment
Headers:
bswap.h big/little/native-endian conversion code
x86_cpu.h a few useful macros for unifying x86-64 and x86-32 code
avutil.h
common.h
intreadwrite.h reading and writing of unaligned big/little/native-endian integers
Goals:
======
* Modular (few interdependencies and the possibility of disabling individual parts during ./configure)
* Small (source and object)
* Efficient (low CPU and memory usage)
* Useful (avoid useless features almost no one needs)


@@ -30,33 +30,7 @@ ADTS AAC container to a FLV or a MOV/MP4 file.
Remove zero padding at the end of a packet.
@section dump_extra
Add extradata to the beginning of the filtered packets.
The additional argument specifies which packets should be filtered.
It accepts the values:
@table @samp
@item a
add extradata to all key packets, but only if @var{local_header} is
set in the @option{flags2} codec context field
@item k
add extradata to all key packets
@item e
add extradata to all packets
@end table
If not specified it is assumed @samp{k}.
For example the following @command{ffmpeg} command forces a global
header (thus disabling individual packet headers) in the H.264 packets
generated by the @code{libx264} encoder, but corrects them by adding
the header stored in extradata to the key packets:
@example
ffmpeg -i INPUT -map 0 -flags:v +global_header -c:v libx264 -bsf:v dump_extra out.ts
@end example
@section dump_extradata
@section h264_mp4toannexb
@@ -74,18 +48,7 @@ format with @command{ffmpeg}, you can use the command:
ffmpeg -i INPUT.mp4 -codec copy -bsf:v h264_mp4toannexb OUTPUT.ts
@end example
@section imxdump
Modifies the bitstream to fit in MOV and to be usable by the Final Cut
Pro decoder. This filter only applies to the mpeg2video codec, and is
likely not needed for Final Cut Pro 7 and newer with the appropriate
@option{-tag:v}.
For example, to remux 30 MB/sec NTSC IMX to MOV:
@example
ffmpeg -i input.mxf -c copy -bsf:v imxdump -tag:v mx3n output.mov
@end example
@section imx_dump_header
@section mjpeg2jpeg
@@ -128,17 +91,12 @@ ffmpeg -i frame_%d.jpg -c:v copy rotated.avi
@section movsub
@section mp3_header_compress
@section mp3_header_decompress
@section noise
Damages the contents of packets without damaging the container. Can be
used for fuzzing or testing error resilience/concealment.
@example
ffmpeg -i INPUT -c copy -bsf noise output.mkv
@end example
@section remove_extra
@section remove_extradata
@c man end BITSTREAM FILTERS


@@ -25,9 +25,6 @@ fate-list
install
Install headers, libraries and programs.
examples
Build all examples located in doc/examples.
libavformat/output-example
Build the libavformat basic example.
@@ -37,9 +34,6 @@ libavcodec/api-example
libswscale/swscale-test
Build the swscale self-test (useful also as example).
config
Reconfigure the project with current configuration.
Useful standard make commands:
make -t <target>


@@ -84,6 +84,9 @@ Apply interlaced motion estimation.
Use closed gop.
@end table
@item sub_id @var{integer}
Deprecated, currently unused.
@item me_method @var{integer} (@emph{encoding,video})
Set motion estimation method.
@@ -172,13 +175,7 @@ Set max video quantizer scale (VBR). Must be included between -1 and
Set max difference between the quantizer scale (VBR).
@item bf @var{integer} (@emph{encoding,video})
Set max number of B frames between non-B-frames.
Must be an integer between -1 and 16. 0 means that B-frames are
disabled. If a value of -1 is used, it will choose an automatic value
depending on the encoder.
Default value is 0.
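For instance (hypothetical file names, assuming an encoder that honours this
option), allow up to 2 B-frames with the mpeg4 encoder:
@example
ffmpeg -i input.avi -c:v mpeg4 -bf 2 output.avi
@end example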
@item b_qfactor @var{float} (@emph{encoding,video})
Set qp factor between P and B frames.
@@ -285,11 +282,6 @@ detect bitstream specification deviations
detect improper bitstream length
@item explode
abort decoding on minor error detection
@item ignore_err
ignore decoding errors, and continue decoding.
This is useful if you want to analyze the content of a video and thus want
everything to be decoded no matter what. This option will not result in a video
that is pleasing to watch in case of errors.
@item careful
consider things that violate the spec and have not been seen in the wild as errors
@item compliant
@@ -394,8 +386,9 @@ Possible values:
@item simplemmx
@item simpleauto
Automatically pick an IDCT compatible with the simple one
@item libmpeg2mmx
@item mmi
@item arm
@@ -413,6 +406,10 @@ Automatically pick a IDCT compatible with the simple one
@item simplealpha
@item h264
@item vp3
@item ipp
@item xvidmmx
@@ -432,8 +429,6 @@ Possible values:
iterative motion vector (MV) search (slow)
@item deblock
use strong deblock filter for damaged MBs
@item favor_inter
favor predicting from the previous frame instead of the current
@end table
@item bits_per_coded_sample @var{integer}
@@ -498,8 +493,6 @@ threading operations
@item vismv @var{integer} (@emph{decoding,video})
Visualize motion vectors (MVs).
This option is deprecated, see the codecview filter instead.
Possible values:
@table @samp
@item pf
@@ -779,29 +772,26 @@ Set noise reduction.
Set number of bits which should be loaded into the rc buffer before
decoding starts.
@item inter_threshold @var{integer} (@emph{encoding,video})
@item flags2 @var{flags} (@emph{decoding/encoding,audio,video})
Possible values:
@table @samp
@item fast
Allow non spec compliant speedup tricks.
@item sgop
Deprecated, use mpegvideo private options instead.
@item noout
Skip bitstream encoding.
@item ignorecrop
Ignore cropping information from sps.
@item local_header
Place global headers at every keyframe instead of in extradata.
@item chunks
Frame data might be split into multiple chunks.
@item showall
Show all frames before the first keyframe.
@item skiprd
Deprecated, use mpegvideo private options instead.
@item export_mvs
Export motion vectors into frame side-data (see @code{AV_FRAME_DATA_MOTION_VECTORS})
for codecs that support it. See also @file{doc/examples/export_mvs.c}.
@end table
@item error @var{integer} (@emph{encoding,video})
@@ -857,10 +847,6 @@ Possible values:
@item aac_eld
@item mpeg2_aac_low
@item mpeg2_aac_he
@item dts
@item dts_es
@@ -892,9 +878,6 @@ Set frame skip factor.
@item skip_exp @var{integer} (@emph{encoding,video})
Set frame skip exponent.
Negative values behave identically to the corresponding positive ones, except
that the score is normalized.
Positive values exist primarily for compatibility reasons and are not so useful.
@item skipcmp @var{integer} (@emph{encoding,video})
Set frame skip compare function.
@@ -1040,26 +1023,15 @@ Set the log level offset.
Number of slices, used in parallelized encoding.
@item thread_type @var{flags} (@emph{decoding/encoding,video})
Select which multithreading methods to use.
Use of @samp{frame} will increase decoding delay by one frame per
thread, so clients which cannot provide future frames should not use
it.
Possible values:
@table @samp
@item slice
Decode more than one part of a single frame at once.
Multithreading using slices works only when the video was encoded with
slices.
@item frame
Decode more than one frame at once.
@end table
Default value is @samp{slice+frame}.
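A decoding sketch (hypothetical input; @code{-f null -} simply discards the
output): decode with four threads using frame-level threading only:
@example
ffmpeg -threads 4 -thread_type frame -i input.mkv -f null -
@end example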
@item audio_service_type @var{integer} (@emph{encoding,audio})
Set audio service type.
@@ -1093,34 +1065,9 @@ Set sample format audio decoders should prefer. Default value is
@item sub_charenc @var{encoding} (@emph{decoding,subtitles})
Set the input subtitles character encoding.
@item field_order @var{field_order} (@emph{video})
Set/override the field order of the video.
Possible values:
@table @samp
@item progressive
Progressive video
@item tt
Interlaced video, top field coded and displayed first
@item bb
Interlaced video, bottom field coded and displayed first
@item tb
Interlaced video, top coded first, bottom displayed first
@item bt
Interlaced video, bottom coded first, top displayed first
@end table
@item skip_alpha @var{integer} (@emph{decoding,video})
Set to 1 to disable processing alpha (transparency). This works like the
@samp{gray} flag in the @option{flags} option which skips chroma information
instead of alpha. Default is 0.
@end table
@c man end CODEC OPTIONS
@ifclear config-writeonly
@include decoders.texi
@end ifclear
@ifclear config-readonly
@include encoders.texi
@end ifclear


@@ -14,7 +14,7 @@ You can disable all the decoders with the configure option
with the options @code{--enable-decoder=@var{DECODER}} /
@code{--disable-decoder=@var{DECODER}}.
The option @code{-decoders} of the ff* tools will display the list of
enabled decoders.
@c man end DECODERS
@@ -52,37 +52,6 @@ top-field-first is assumed
@chapter Audio Decoders
@c man begin AUDIO DECODERS
A description of some of the currently available audio decoders
follows.
@section ac3
AC-3 audio decoder.
This decoder implements part of ATSC A/52:2010 and ETSI TS 102 366, as well as
the undocumented RealAudio 3 (a.k.a. dnet).
@subsection AC-3 Decoder Options
@table @option
@item -drc_scale @var{value}
Dynamic Range Scale Factor. The factor to apply to dynamic range values
from the AC-3 stream. This factor is applied exponentially.
There are 3 notable scale factor ranges:
@table @option
@item drc_scale == 0
DRC disabled. Produces full range audio.
@item 0 < drc_scale <= 1
DRC enabled. Applies a fraction of the stream DRC value.
Audio reproduction is between full range and full compression.
@item drc_scale > 1
DRC enabled. Applies drc_scale asymmetrically.
Loud sounds are fully compressed. Soft sounds are enhanced.
@end table
@end table
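For example, to decode AC-3 with dynamic range compression disabled
(file names are hypothetical):
@example
ffmpeg -drc_scale 0 -i input.ac3 output.wav
@end example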
@section ffwavesynth
Internal wave synthesizer.
@@ -163,9 +132,6 @@ Requires the presence of the libopus headers and library during
configuration. You need to explicitly configure the build with
@code{--enable-libopus}.
An FFmpeg native decoder for Opus exists, so users can decode Opus
without this library.
@c man end AUDIO DECODERS
@chapter Subtitles Decoders
@@ -192,45 +158,4 @@ ee450d, 101010, eaeaea, 0ce60b, ec14ed, ebff0b, 0d617a, 7b7b7b, d1d1d1,
7b2a0e, 0d950c, 0f007b, cf0dec, cfa80c, 7c127b}.
@end table
@section libzvbi-teletext
Libzvbi allows libavcodec to decode DVB teletext pages and DVB teletext
subtitles. Requires the presence of the libzvbi headers and library during
configuration. You need to explicitly configure the build with
@code{--enable-libzvbi}.
@subsection Options
@table @option
@item txt_page
List of teletext page numbers to decode. You may use the special * string to
match all pages. Pages that do not match the specified list are dropped.
Default value is *.
@item txt_chop_top
Discards the top teletext line. Default value is 1.
@item txt_format
Specifies the format of the decoded subtitles. The teletext decoder is capable
of decoding the teletext pages to bitmaps or to simple text. You should use
"bitmap" for teletext pages, because certain graphics and colors cannot be
expressed in simple text. You might use "text" for teletext based subtitles if
your application can handle simple text based subtitles. Default value is
bitmap.
@item txt_left
X offset of generated bitmaps, default is 0.
@item txt_top
Y offset of generated bitmaps, default is 0.
@item txt_chop_spaces
Chops leading and trailing spaces and removes empty lines from the generated
text. This option is useful for teletext based subtitles where empty spaces may
be present at the start or at the end of the lines or empty lines may be
present between the subtitle lines because of double-sized teletext characters.
Default value is 1.
@item txt_duration
Sets the display duration of the decoded teletext pages or subtitles in
milliseconds. Default value is 30000 which is 30 seconds.
@item txt_transparent
Force transparent background of the generated teletext bitmaps. Default value
is 0 which means an opaque (black) background.
@end table
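An illustrative invocation (the page number, file names and subtitle output
format are hypothetical and depend on your build): decode one teletext page as
text and write the first subtitle stream out:
@example
ffmpeg -txt_page 888 -txt_format text -i input.ts -map 0:s:0 output.srt
@end example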
@c man end SUBTITLES DECODERS


@@ -1,7 +1,3 @@
a.summary-letter {
text-decoration: none;
}
a {
color: #2D6198;
}
@@ -17,8 +13,8 @@ a:visited {
}
#banner img {
margin-bottom: 1px;
margin-top: 5px;
padding-bottom: 1px;
padding-top: 5px;
}
#body {
@@ -49,16 +45,11 @@ body {
text-align: center;
}
h1 a, h2 a, h3 a, h4 a {
text-decoration: inherit;
color: inherit;
}
h1, h2, h3, h4 {
h1, h2, h3 {
padding-left: 0.4em;
border-radius: 4px;
padding-bottom: 0.25em;
padding-top: 0.25em;
padding-bottom: 0.2em;
padding-top: 0.2em;
border: 1px solid #6A996A;
}
@@ -72,22 +63,15 @@ h1 {
h2 {
color: #313131;
font-size: 1.0em;
font-size: 0.9em;
background-color: #ABE3AB;
}
h3 {
color: #313131;
font-size: 0.9em;
margin-bottom: -6px;
background-color: #BBF3BB;
}
h4 {
color: #313131;
font-size: 0.8em;
margin-bottom: -8px;
background-color: #D1FDD1;
background-color: #BBF3BB;
}
img {


@@ -1,7 +1,7 @@
@chapter Demuxers
@c man begin DEMUXERS
Demuxers are configured elements in FFmpeg that can read the
multimedia streams from a particular type of file.
When you configure your FFmpeg build, all the supported demuxers
@@ -29,17 +29,6 @@ the caller can decide which variant streams to actually receive.
The total bitrate of the variant that the stream belongs to is
available in a metadata key named "variant_bitrate".
@section asf
Advanced Systems Format demuxer.
This demuxer is used to demux ASF files and MMS network streams.
@table @option
@item -no_resync_search @var{bool}
Do not try to resynchronize by looking for a certain optional start code.
@end table
@anchor{concat}
@section concat
@@ -74,7 +63,7 @@ following directive is recognized:
Path to a file to read; special characters and spaces must be escaped with
backslash or single quotes.
All subsequent file-related directives apply to that file.
@item @code{ffconcat version 1.0}
Identify the script type and version. It also sets the @option{safe} option
@@ -92,22 +81,6 @@ file is not available or accurate.
If the duration is set for all files, then it is possible to seek in the
whole concatenated video.
@item @code{stream}
Introduce a stream in the virtual file.
All subsequent stream-related directives apply to the last introduced
stream.
Some stream properties must be set in order to allow identifying the
matching streams in the subfiles.
If no streams are defined in the script, the streams from the first file are
copied.
@item @code{exact_stream_id @var{id}}
Set the id of the stream.
If this directive is given, the stream with the corresponding id in the
subfiles will be used.
This is especially useful for MPEG-PS (VOB) files, where the order of the
streams is not reliable.
@end table
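A minimal sketch of a concat script and its use with the @code{file}
directive (file names are hypothetical):
@example
cat > mylist.ffconcat << EOF
ffconcat version 1.0
file intro.mkv
file main.mkv
EOF
ffmpeg -f concat -i mylist.ffconcat -c copy output.mkv
@end example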
@subsection Options
@@ -128,25 +101,6 @@ If set to 0, any file name is accepted.
The default is -1, it is equivalent to 1 if the format was automatically
probed and 0 otherwise.
@item auto_convert
If set to 1, try to perform automatic conversions on packet data to make the
streams concatenable.
Currently, the only conversion is adding the h264_mp4toannexb bitstream
filter to H.264 streams in MP4 format. This is necessary in particular if
there are resolution changes.
@end table
@section flv
Adobe Flash Video Format demuxer.
This demuxer is used to demux FLV files and RTMP network streams.
@table @option
@item -flv_metadata @var{bool}
Allocate the streams according to the onMetaData array content.
@end table
@section libgme
@@ -174,40 +128,6 @@ See @url{http://quvi.sourceforge.net/} for more information.
FFmpeg needs to be built with @code{--enable-libquvi} for this demuxer to be
enabled.
@section gif
Animated GIF demuxer.
It accepts the following options:
@table @option
@item min_delay
Set the minimum valid delay between frames in hundredths of seconds.
Range is 0 to 6000. Default value is 2.
@item default_delay
Set the default delay between frames in hundredths of seconds.
Range is 0 to 6000. Default value is 10.
@item ignore_loop
GIF files can contain information to loop a certain number of times (or
infinitely). If @option{ignore_loop} is set to 1, then the loop setting
from the input will be ignored and looping will not occur. If set to 0,
then looping will occur and will cycle the number of times according to
the GIF. Default value is 1.
@end table
For example, with the overlay filter, place an infinitely looping GIF
over another video:
@example
ffmpeg -i input.mp4 -ignore_loop 0 -i input.gif -filter_complex overlay=shortest=1 out.mkv
@end example
Note that in the above example the shortest option for overlay filter is
used to end the output video at the length of the shortest input file,
which in this case is @file{input.mp4} as the GIF in this example loops
infinitely.
@section image2
Image file demuxer.
@@ -307,8 +227,6 @@ is 5.
If set to 1, will set frame timestamp to modification time of image file. Note
that monotonicity of timestamps is not provided: images go in the same order as
without this option. Default value is 0.
If set to 2, will set frame timestamp to the modification time of the image file in
nanosecond precision.
@item video_size
Set the video size of the images to read. If not specified the video
size is guessed from the first image file in the sequence.
@@ -322,41 +240,28 @@ Use @command{ffmpeg} for creating a video from the images in the file
sequence @file{img-001.jpeg}, @file{img-002.jpeg}, ..., assuming an
input frame rate of 10 frames per second:
@example
ffmpeg -framerate 10 -i 'img-%03d.jpeg' out.mkv
@end example
@item
As above, but start by reading from a file with index 100 in the sequence:
@example
ffmpeg -framerate 10 -start_number 100 -i 'img-%03d.jpeg' out.mkv
@end example
@item
Read images matching the "*.png" glob pattern, that is all the files
terminating with the ".png" suffix:
@example
ffmpeg -framerate 10 -pattern_type glob -i "*.png" out.mkv
@end example
@end itemize
@section mpegts
MPEG-2 transport stream demuxer.
@table @option
@item fix_teletext_pts
Overrides teletext packet PTS and DTS values with the timestamps calculated
from the PCR of the first program which the teletext stream is part of and is
not discarded. Default value is 1, set this option to 0 if you want your
teletext packet PTS and DTS values untouched.
@end table
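For example, to keep the stored teletext timestamps untouched while copying
the streams (hypothetical file names):
@example
ffmpeg -fix_teletext_pts 0 -i input.ts -c copy output.ts
@end example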
@section rawvideo
Raw video demuxer.
This demuxer allows one to read raw video data. Since there is no header
specifying the assumed video parameters, the user must specify them
in order to be able to decode the data correctly.
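A typical invocation might look like this (the pixel format, size, rate and
file names are only an example and must match your data):
@example
ffmpeg -f rawvideo -pixel_format yuv420p -video_size 320x240 -framerate 25 -i input.yuv output.mp4
@end example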


@@ -11,23 +11,29 @@
@chapter Developers Guide
@section Notes for external developers
@section API
@itemize @bullet
@item libavcodec is the library containing the codecs (both encoding and
decoding). Look at @file{doc/examples/decoding_encoding.c} to see how to use
it.
This document is mostly useful for internal FFmpeg developers.
External developers who need to use the API in their application should
refer to the API doxygen documentation in the public headers, and
check the examples in @file{doc/examples} and in the source code to
see how the public API is employed.
@item libavformat is the library containing the file format handling (mux and
demux code for several formats). Look at @file{ffplay.c} to use it in a
player. See @file{doc/examples/muxing.c} to use it to generate audio or video
streams.
You can use the FFmpeg libraries in your commercial program, but you
are encouraged to @emph{publish any patch you make}. In this case the
best way to proceed is to send your patches to the ffmpeg-devel
mailing list following the guidelines illustrated in the remainder of
this document.
@end itemize
For more detailed legal information about the use of FFmpeg in
external programs read the @file{LICENSE} file in the source tree and
consult @url{http://ffmpeg.org/legal.html}.
@section Integrating libavcodec or libavformat in your program
You can integrate all the source code of the libraries to link them
statically to avoid any version problem. All you need is to provide a
'config.mak' and a 'config.h' in the parent directory. See the defines
generated by ./configure to understand what is needed.
You can use libavcodec or libavformat in your commercial program, but
@emph{any patch you make must be published}. The best way to proceed is
to send your patches to the FFmpeg mailing list.
@section Contributing
@@ -51,16 +57,13 @@ and should try to fix issues their commit causes.
@subsection Code formatting conventions
There are the following guidelines regarding the indentation in files:
@itemize @bullet
@item
Indent size is 4.
@item
The TAB character is forbidden outside of Makefiles as is any
form of trailing whitespace. Commits containing either will be
rejected by the git repository.
@item
You should try to limit your code lines to 80 characters; however, do so if
and only if this improves readability.
@@ -92,7 +95,7 @@ for markup commands, i.e. use @code{@@param} and not @code{\param}.
* more text ...
* ...
*/
typedef struct Foobar @{
int var1; /**< var1 description */
int var2; ///< var2 description
/** var3 description */
@@ -114,17 +117,13 @@ int myfunc(int my_parameter)
FFmpeg is programmed in the ISO C90 language with a few additional
features from ISO C99, namely:
@itemize @bullet
@item
the @samp{inline} keyword;
@item
@samp{//} comments;
@item
designated struct initializers (@samp{struct s x = @{ .i = 17 @};})
@item
compound literals (@samp{x = (struct s) @{ 17, 23 @};})
@end itemize
@@ -136,17 +135,13 @@ clarity and performance.
All code must compile with recent versions of GCC and a number of other
currently supported compilers. To ensure compatibility, please do not use
additional C99 features or GCC extensions. Especially watch out for:
@itemize @bullet
@item
mixing statements and declarations;
@item
@samp{long long} (use @samp{int64_t} instead);
@item
@samp{__attribute__} not protected by @samp{#ifdef __GNUC__} or similar;
@item
GCC statement expressions (@samp{(x = (@{ int y = 4; y; @})}).
@end itemize
@@ -158,25 +153,20 @@ All names should be composed with underscores (_), not CamelCase. For example,
for example structs and enums; they should always be in the CamelCase
There are the following conventions for naming variables and functions:
@itemize @bullet
@item
For local variables no prefix is required.
@item
For file-scope variables and functions declared as @code{static}, no prefix
is required.
@item
For variables and functions visible outside of file scope, but only used
internally by a library, an @code{ff_} prefix should be used,
e.g. @samp{ff_w64_demuxer}.
@item
For variables and functions visible outside of file scope, used internally
across multiple libraries, use @code{avpriv_} as prefix, for example,
@samp{avpriv_aac_parse_header}.
@item
Each library has its own prefix for public symbols, in addition to the
commonly used @code{av_} (@code{avformat_} for libavformat,
@@ -196,12 +186,10 @@ are reserved at the file level and may not be used for externally visible
symbols. If in doubt, just avoid names starting with @code{_} altogether.
@subsection Miscellaneous conventions
@itemize @bullet
@item
fprintf and printf are forbidden in libavformat and libavcodec,
please use av_log() instead.
@item
Casts should be used only when necessary. Unneeded parentheses
should also be avoided if they don't make the code easier to understand.
@@ -244,157 +232,131 @@ For Emacs, add these roughly equivalent lines to your @file{.emacs.d/init.el}:
@enumerate
@item
Contributions should be licensed under the
@uref{http://www.gnu.org/licenses/lgpl-2.1.html, LGPL 2.1},
including an "or any later version" clause, or, if you prefer
a gift-style license, the
@uref{http://opensource.org/licenses/isc-license.txt, ISC} or
@uref{http://mit-license.org/, MIT} license.
@uref{http://www.gnu.org/licenses/gpl-2.0.html, GPL 2} including
an "or any later version" clause is also acceptable, but LGPL is
preferred.
If you add a new file, give it a proper license header. Do not copy and
paste it from a random place, use an existing file as template.
@item
You must not commit code which breaks FFmpeg! (Meaning unfinished but
enabled code which breaks compilation or compiles but does not work or
breaks the regression tests)
You can commit unfinished stuff (for testing etc), but it must be disabled
(#ifdef etc) by default so it does not interfere with other developers'
work.
@item
The commit message should have a short first line in the form of
a @samp{topic: short description} as a header, separated by a newline
from the body consisting of an explanation of why the change is necessary.
If the commit fixes a known bug on the bug tracker, the commit message
should include its bug ID. Referring to the issue on the bug tracker does
not exempt you from writing an excerpt of the bug in the commit message.
@item
You do not have to over-test things. If it works for you, and you think it
should work for others, then commit. If your code has problems
(portability, triggers compiler bugs, unusual environment etc) they will be
reported and eventually fixed.
@item
Do not commit unrelated changes together, split them into self-contained
pieces. Also do not forget that if part B depends on part A, but A does not
depend on B, then A can and should be committed first and separate from B.
Keeping changes well split into self-contained parts makes reviewing and
understanding them on the commit log mailing list easier. This also helps
in case of debugging later on.
Also if you have doubts about splitting or not splitting, do not hesitate to
ask/discuss it on the developer mailing list.
@item
Do not change behavior of the programs (renaming options etc) or public
API or ABI without first discussing it on the ffmpeg-devel mailing list.
Do not remove functionality from the code. Just improve!
Note: Redundant code can be removed.
@item
Do not commit changes to the build system (Makefiles, configure script)
which change behavior, defaults etc, without asking first. The same
applies to compiler warning fixes, trivial looking fixes and to code
maintained by other developers. We usually have a reason for doing things
the way we do. Send your changes as patches to the ffmpeg-devel mailing
list, and if the code maintainers say OK, you may commit. This does not
apply to files you wrote and/or maintain.
@item
We refuse source indentation and other cosmetic changes if they are mixed
with functional changes, such commits will be rejected and removed. Every
developer has his own indentation style, you should not change it. Of course
if you (re)write something, you can use your own style, even though we would
prefer if the indentation throughout FFmpeg was consistent (Many projects
force a given indentation style - we do not.). If you really need to make
indentation changes (try to avoid this), separate them strictly from real
changes.
NOTE: If you had to put if()@{ .. @} over a large (> 5 lines) chunk of code,
then either do NOT change the indentation of the inner part within (do not
move it to the right)! or do so in a separate commit
@item
Always fill out the commit log message. Describe in a few lines what you
changed and why. You can refer to mailing list postings if you fix a
particular bug. Comments such as "fixed!" or "Changed it." are unacceptable.
Recommended format:
@example
area changed: Short 1 line description
details describing what and why and giving references.
@end example
@item
Make sure the author of the commit is set correctly. (see git commit --author)
If you apply a patch, send an
answer to ffmpeg-devel (or wherever you got the patch from) saying that
you applied the patch.
@item
When applying patches that have been discussed (at length) on the mailing
list, reference the thread in the log message.
@item
Do NOT commit to code actively maintained by others without permission.
Send a patch to ffmpeg-devel instead. If no one answers within a reasonable
timeframe (12h for build failures and security fixes, 3 days small changes,
1 week for big patches) then commit your patch if you think it is OK.
Also note, the maintainer can simply ask for more time to review!
@item
Subscribe to the ffmpeg-cvslog mailing list. The diffs of all commits
are sent there and reviewed by all the other developers. Bugs and possible
improvements or general questions regarding commits are discussed there. We
expect you to react if problems with your code are uncovered.
@item
Update the documentation if you change behavior or add features. If you are
unsure how best to do this, send a patch to ffmpeg-devel, the documentation
maintainer(s) will review and commit your stuff.
@item
Try to keep important discussions and requests (also) on the public
developer mailing list, so that all developers can benefit from them.
@item
Never write to unallocated memory, never write over the end of arrays,
always check values read from some untrusted source before using them
as array index or other risky things.
@item
Remember to check if you need to bump versions for the specific libav*
parts (libavutil, libavcodec, libavformat) you are changing. You need
to change the version integer.
Incrementing the first component means no backward compatibility to
previous versions (e.g. removal of a function from the public API).
Incrementing the second component means backward compatible change
(e.g. addition of a function to the public API or extension of an
existing data structure).
Incrementing the third component means a noteworthy binary compatible
change (e.g. encoder bug fix that matters for the decoder). The third
component always starts at 100 to distinguish FFmpeg from Libav.
@item
Compiler warnings indicate potential bugs or code with bad style. If a type of
warning always points to correct and clean code, that warning should
be disabled, not the code changed.
Thus the remaining warnings can either be bugs or correct code.
If it is a bug, the bug has to be fixed. If it is not, the code should
be changed to not generate a warning unless that causes a slowdown
or obfuscates the code.
@item
Make sure that no parts of the codebase that you maintain are missing from the
@file{MAINTAINERS} file. If something that you want to maintain is missing add it with
your name after it.
If at some point you no longer want to maintain some code, then please help
finding a new maintainer and also don't forget updating the @file{MAINTAINERS} file.
@end enumerate
We think our rules are not too hard. If you have comments, contact us.
@@ -449,51 +411,40 @@ send a reminder by email. Your patch should eventually be dealt with.
@enumerate
@item
Did you use av_cold for codec initialization and close functions?
@item
Did you add a long_name under NULL_IF_CONFIG_SMALL to the AVCodec or
AVInputFormat/AVOutputFormat struct?
@item
Did you bump the minor version number (and reset the micro version
number) in @file{libavcodec/version.h} or @file{libavformat/version.h}?
@item
Did you register it in @file{allcodecs.c} or @file{allformats.c}?
@item
Did you add the AVCodecID to @file{avcodec.h}?
When adding new codec IDs, also add an entry to the codec descriptor
list in @file{libavcodec/codec_desc.c}.
@item
If it has a FourCC, did you add it to @file{libavformat/riff.c},
even if it is only a decoder?
@item
Did you add a rule to compile the appropriate files in the Makefile?
Remember to do this even if you're just adding a format to a file that is
already being compiled by some other rule, like a raw demuxer.
@item
Did you add an entry to the table of supported formats or codecs in
@file{doc/general.texi}?
@item
Did you add an entry in the Changelog?
@item
If it depends on a parser or a library, did you add that dependency in
configure?
@item
Did you @code{git add} the appropriate files before committing?
@item
Did you make sure it compiles standalone, i.e. with
@code{configure --disable-everything --enable-decoder=foo}
(or @code{--enable-demuxer} or whatever your component is)?
@end enumerate
@@ -501,109 +452,82 @@ Did you make sure it compiles standalone, i.e. with
@enumerate
@item
Does @code{make fate} pass with the patch applied?
@item
Was the patch generated with git format-patch or send-email?
@item
Did you sign off your patch? (git commit -s)
See @url{http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob_plain;f=Documentation/SubmittingPatches} for the meaning
of sign off.
@item
Did you provide a clear git commit log message?
@item
Is the patch against latest FFmpeg git master branch?
@item
Are you subscribed to ffmpeg-devel?
(the list is subscribers only due to spam)
@item
Have you checked that the changes are minimal, so that the same cannot be
achieved with a smaller patch and/or simpler final code?
@item
If the change is to speed critical code, did you benchmark it?
@item
If you did any benchmarks, did you provide them in the mail?
@item
Have you checked that the patch does not introduce buffer overflows or
other security issues?
@item
Did you test your decoder or demuxer against damaged data? If no, see
tools/trasher, the noise bitstream filter, and
@uref{http://caca.zoy.org/wiki/zzuf, zzuf}. Your decoder or demuxer
should not crash, end in a (near) infinite loop, or allocate ridiculous
amounts of memory when fed damaged data.
@item
Does the patch not mix functional and cosmetic changes?
@item
Did you add tabs or trailing whitespace to the code? Both are forbidden.
@item
Is the patch attached to the email you send?
@item
Is the mime type of the patch correct? It should be text/x-diff or
text/x-patch or at least text/plain and not application/octet-stream.
@item
If the patch fixes a bug, did you provide a verbose analysis of the bug?
@item
If the patch fixes a bug, did you provide enough information, including
a sample, so the bug can be reproduced and the fix can be verified?
Note please do not attach samples >100k to mails but rather provide a
URL, you can upload to ftp://upload.ffmpeg.org
@item
Did you provide a verbose summary about what the patch does change?
@item
Did you provide a verbose explanation why it changes things like it does?
@item
Did you provide a verbose summary of the user visible advantages and
disadvantages if the patch is applied?
@item
Did you provide an example so we can verify the new feature added by the
patch easily?
@item
If you added a new file, did you insert a license header? It should be
taken from FFmpeg, not randomly copied and pasted from somewhere else.
@item
You should maintain alphabetical order in alphabetically ordered lists as
long as doing so does not break API/ABI compatibility.
@item
Lines with similar content should be aligned vertically when doing so
improves readability.
@item
Consider to add a regression test for your code.
@item
If you added YASM code please check that things still work with --disable-yasm
@item
Make sure you check the return values of function and return appropriate
error codes. Especially memory allocation functions like @code{av_malloc()}
are notoriously left unchecked, which is a serious problem.
@item
Test your code with valgrind and or Address Sanitizer to ensure it's free
of leaks, out of array accesses, etc.
@end enumerate
@section Patch review process
@@ -666,15 +590,12 @@ the following steps:
@item
Configure to compile with instrumentation enabled:
@code{configure --toolchain=gcov}.
@item
Run your test case, either manually or via FATE. This can be either
the full FATE regression suite, or any arbitrary invocation of any
front-end tool provided by FFmpeg, in any combination.
@item
Run @code{make lcov} to generate coverage data in HTML format.
@item
View @code{lcov/index.html} in your preferred HTML viewer.
@end enumerate
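Put together, a coverage run could look roughly like this (the job count and
HTML viewer are illustrative, the targets are those described above):
@example
./configure --toolchain=gcov
make -j8 fate
make lcov
xdg-open lcov/index.html
@end example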
@@ -709,13 +630,12 @@ There are two kinds of releases:
@enumerate
@item
@strong{Major releases} always include the latest and greatest
features and functionality.
@item
@strong{Point releases} are cut from @strong{release} branches,
which are named @code{release/X}, with @code{X} being the release
version number.
@end enumerate
Note that we promise to our users that shared libraries from any FFmpeg
@@ -736,18 +656,15 @@ inclusion into a point release:
@enumerate
@item
Fixes a security issue, preferably identified by a @strong{CVE
number} issued by @url{http://cve.mitre.org/}.
@item
Fixes a documented bug in @url{https://trac.ffmpeg.org}.
@item
Improves the included documentation.
@item
Retains both source code and binary compatibility with previous
point releases of the same release branch.
@end enumerate
The order for checking the rules is (1 OR 2 OR 3) AND 4.
@@ -759,42 +676,33 @@ The release process involves the following steps:
@enumerate
@item
Ensure that the @file{RELEASE} file contains the version number for
the upcoming release.
@item
Add the release at @url{https://trac.ffmpeg.org/admin/ticket/versions}.
@item
Announce the intent to do a release to the mailing list.
@item
Make sure all relevant security fixes have been backported. See
@url{https://ffmpeg.org/security.html}.
@item
Ensure that the FATE regression suite still passes in the release
branch on at least @strong{i386} and @strong{amd64}
(cf. @ref{Regression tests}).
@item
Prepare the release tarballs in @code{bz2} and @code{gz} formats, and
supplementing files that contain @code{gpg} signatures
@item
Publish the tarballs at @url{http://ffmpeg.org/releases}. Create and
push an annotated tag in the form @code{nX}, with @code{X}
containing the version number.
@item
Propose and send a patch to the @strong{ffmpeg-devel} mailing list
with a news entry for the website.
@item
Publish the news entry.
@item
Send announcement to the mailing list.
@end enumerate
@bye


@@ -17,9 +17,5 @@ for programmatic use.
@c man end DEVICE OPTIONS
@ifclear config-writeonly
@include indevs.texi
@end ifclear
@ifclear config-readonly
@include outdevs.texi
@end ifclear


@@ -2,12 +2,13 @@
SRC_PATH="${1}"
DOXYFILE="${2}"
DOXYGEN="${3}"
shift 3
shift 2
$DOXYGEN - <<EOF
doxygen - <<EOF
@INCLUDE = ${DOXYFILE}
INPUT = $@
EXAMPLE_PATH = ${SRC_PATH}/doc/examples
HTML_HEADER = ${SRC_PATH}/doc/doxy/header.html
HTML_FOOTER = ${SRC_PATH}/doc/doxy/footer.html
HTML_STYLESHEET = ${SRC_PATH}/doc/doxy/doxy_stylesheet.css
EOF

doc/doxy/doxy_stylesheet.css Normal file

File diff suppressed because it is too large

doc/doxy/footer.html Normal file

@@ -0,0 +1,9 @@
<footer class="footer pagination-right">
<span class="label label-info">
Generated on $datetime for $projectname by&#160;<a href="http://www.doxygen.org/index.html">doxygen</a> $doxygenversion
</span>
</footer>
</div>
</body>
</html>

doc/doxy/header.html Normal file

@@ -0,0 +1,16 @@
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<meta http-equiv="X-UA-Compatible" content="IE=9"/>
<!--BEGIN PROJECT_NAME--><title>$projectname: $title</title><!--END PROJECT_NAME-->
<!--BEGIN !PROJECT_NAME--><title>$title</title><!--END !PROJECT_NAME-->
<link href="$relpath$doxy_stylesheet.css" rel="stylesheet" type="text/css" />
<!--Header replace -->
</head>
<div class="container">
<!--Header replace -->
<div class="menu">


@@ -14,7 +14,7 @@ You can disable all the encoders with the configure option
with the options @code{--enable-encoder=@var{ENCODER}} /
@code{--disable-encoder=@var{ENCODER}}.
The option @code{-encoders} of the ff* tools will display the list of
The option @code{-codecs} of the ff* tools will display the list of
enabled encoders.
@c man end ENCODERS
@@ -38,8 +38,8 @@ As this encoder is experimental, unexpected behavior may exist from time to
time. For a more stable AAC encoder, see @ref{libvo-aacenc}. However, be warned
that it has a worse quality reported by some users.
@c todo @ref{libaacplus}
See also @ref{libfdk-aac-enc,,libfdk_aac} and @ref{libfaac}.
@c Comment this out until somebody writes the respective documentation.
@c See also @ref{libfaac}, @ref{libaacplus}, and @ref{libfdk-aac-enc}.
@subsection Options
@@ -71,7 +71,7 @@ Force middle/side encoding.
Set AAC encoder coding method. Possible values:
@table @samp
@item faac
@item 0
FAAC-inspired method.
This method is a simplified reimplementation of the method used in FAAC, which
@@ -80,15 +80,15 @@ thresholds with quantizer steps to find the appropriate quantization with
distortion below threshold band by band.
The quality of this method is comparable to the two loop searching method
described below, but somewhat a little better and slower.
descibed below, but somewhat a little better and slower.
@item anmr
@item 1
Average noise to mask ratio (ANMR) trellis-based solution.
This has a theoretic best quality out of all the coding methods, but at the
cost of the slowest speed.
@item twoloop
@item 2
Two loop searching (TLS) method.
This method first sets quantizers depending on band thresholds and then tries
@@ -97,7 +97,7 @@ all quantizers and adjusting some individual quantizer a little.
This method produces similar quality with the FAAC method and is the default.
@item fast
@item 3
Constant quantizer method.
This method sets a constant quantizer for all bands. This is the fastest of all
@@ -107,6 +107,13 @@ the methods, yet produces the worst quality.
@end table
@subsection Tips and Tricks
According to some reports
(e.g. @url{http://d.hatena.ne.jp/kamedo2/20120729/1343545890}), setting the
@option{cutoff} option to 15000 Hz greatly improves the quality of the
output. As a result, we encourage you to do the same.
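For instance, a possible invocation applying this tip might look as
follows (note that, depending on the FFmpeg version, the native AAC
encoder may additionally require @code{-strict experimental}):
@example
ffmpeg -i input.wav -c:a aac -strict experimental -b:a 128k -cutoff 15000 output.m4a
@end example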
@section ac3 and ac3_fixed
AC-3 audio encoders.
@@ -494,286 +501,6 @@ Selected by Encoder (default)
@end table
@anchor{libfaac}
@section libfaac
libfaac AAC (Advanced Audio Coding) encoder wrapper.
Requires the presence of the libfaac headers and library during
configuration. You need to explicitly configure the build with
@code{--enable-libfaac --enable-nonfree}.
This encoder is considered to be of higher quality with respect to
@ref{aacenc,,the native experimental FFmpeg AAC encoder}.
For more information see the libfaac project at
@url{http://www.audiocoding.com/faac.html/}.
@subsection Options
The following shared FFmpeg codec options are recognized.
The following options are supported by the libfaac wrapper. The
@command{faac}-equivalent of the options are listed in parentheses.
@table @option
@item b (@emph{-b})
Set bit rate in bits/s for ABR (Average Bit Rate) mode. If the bit rate
is not explicitly specified, it is automatically set to a suitable
value depending on the selected profile. @command{faac} bitrate is
expressed in kilobits/s.
Note that libfaac does not support CBR (Constant Bit Rate) but only
ABR (Average Bit Rate).
If VBR mode is enabled this option is ignored.
@item ar (@emph{-R})
Set audio sampling rate (in Hz).
@item ac (@emph{-c})
Set the number of audio channels.
@item cutoff (@emph{-C})
Set cutoff frequency. If not specified (or explicitly set to 0) it
will use a value automatically computed by the library. Default value
is 0.
@item profile
Set audio profile.
The following profiles are recognized:
@table @samp
@item aac_main
Main AAC (Main)
@item aac_low
Low Complexity AAC (LC)
@item aac_ssr
Scalable Sample Rate (SSR)
@item aac_ltp
Long Term Prediction (LTP)
@end table
If not specified it is set to @samp{aac_low}.
@item flags +qscale
Set constant quality VBR (Variable Bit Rate) mode.
@item global_quality
Set quality in VBR mode as an integer number of lambda units.
Only relevant when VBR mode is enabled with @code{flags +qscale}. The
value is converted to QP units by dividing it by @code{FF_QP2LAMBDA},
and used to set the quality value used by libfaac. A reasonable range
for the option value in QP units is [10-500], the higher the value the
higher the quality.
@item q (@emph{-q})
Enable VBR mode when set to a non-negative value, and set constant
quality value as a double floating point value in QP units.
The value sets the quality value used by libfaac. A reasonable range
for the option value is [10-500], the higher the value the higher the
quality.
This option is valid only using the @command{ffmpeg} command-line
tool. For library interface users, use @option{global_quality}.
@end table
@subsection Examples
@itemize
@item
Use @command{ffmpeg} to convert an audio file to ABR 128 kbps AAC in an M4A (MP4)
container:
@example
ffmpeg -i input.wav -codec:a libfaac -b:a 128k output.m4a
@end example
@item
Use @command{ffmpeg} to convert an audio file to VBR AAC, using the
LTP AAC profile:
@example
ffmpeg -i input.wav -c:a libfaac -profile:a aac_ltp -q:a 100 output.m4a
@end example
@end itemize
@anchor{libfdk-aac-enc}
@section libfdk_aac
libfdk-aac AAC (Advanced Audio Coding) encoder wrapper.
The libfdk-aac library is based on the Fraunhofer FDK AAC code from
the Android project.
Requires the presence of the libfdk-aac headers and library during
configuration. You need to explicitly configure the build with
@code{--enable-libfdk-aac}. The library is also incompatible with GPL,
so if you allow the use of GPL, you should configure with
@code{--enable-gpl --enable-nonfree --enable-libfdk-aac}.
This encoder is considered to be of higher quality with respect to
both @ref{aacenc,,the native experimental FFmpeg AAC encoder} and
@ref{libfaac}.
VBR encoding, enabled through the @option{vbr} or @option{flags
+qscale} options, is experimental and only works with some
combinations of parameters.
Support for encoding 7.1 audio is only available with libfdk-aac 0.1.3 or
higher.
For more information see the fdk-aac project at
@url{http://sourceforge.net/p/opencore-amr/fdk-aac/}.
@subsection Options
The following options are mapped on the shared FFmpeg codec options.
@table @option
@item b
Set bit rate in bits/s. If the bitrate is not explicitly specified, it
is automatically set to a suitable value depending on the selected
profile.
In case VBR mode is enabled the option is ignored.
@item ar
Set audio sampling rate (in Hz).
@item channels
Set the number of audio channels.
@item flags +qscale
Enable fixed quality, VBR (Variable Bit Rate) mode.
Note that VBR is implicitly enabled when the @option{vbr} value is
positive.
@item cutoff
Set cutoff frequency. If not specified (or explicitly set to 0) it
will use a value automatically computed by the library. Default value
is 0.
@item profile
Set audio profile.
The following profiles are recognized:
@table @samp
@item aac_low
Low Complexity AAC (LC)
@item aac_he
High Efficiency AAC (HE-AAC)
@item aac_he_v2
High Efficiency AAC version 2 (HE-AACv2)
@item aac_ld
Low Delay AAC (LD)
@item aac_eld
Enhanced Low Delay AAC (ELD)
@end table
If not specified it is set to @samp{aac_low}.
@end table
The following are private options of the libfdk_aac encoder.
@table @option
@item afterburner
Enable afterburner feature if set to 1, disabled if set to 0. This
improves the quality but also the required processing power.
Default value is 1.
@item eld_sbr
Enable SBR (Spectral Band Replication) for ELD if set to 1, disabled
if set to 0.
Default value is 0.
@item signaling
Set SBR/PS signaling style.
It can assume one of the following values:
@table @samp
@item default
choose signaling implicitly (explicit hierarchical by default,
implicit if global header is disabled)
@item implicit
implicit backwards compatible signaling
@item explicit_sbr
explicit SBR, implicit PS signaling
@item explicit_hierarchical
explicit hierarchical signaling
@end table
Default value is @samp{default}.
@item latm
Output LATM/LOAS encapsulated data if set to 1, disabled if set to 0.
Default value is 0.
@item header_period
Set StreamMuxConfig and PCE repetition period (in frames) for sending
in-band configuration buffers within LATM/LOAS transport layer.
Must be a 16-bits non-negative integer.
Default value is 0.
@item vbr
Set VBR mode, from 1 to 5. 1 is lowest quality (though still pretty
good) and 5 is highest quality. A value of 0 will disable VBR, and CBR
(Constant Bit Rate) is enabled.
Currently only the @samp{aac_low} profile supports VBR encoding.
VBR modes 1-5 correspond to roughly the following average bit rates:
@table @samp
@item 1
32 kbps/channel
@item 2
40 kbps/channel
@item 3
48-56 kbps/channel
@item 4
64 kbps/channel
@item 5
about 80-96 kbps/channel
@end table
Default value is 0.
@end table
@subsection Examples
@itemize
@item
Use @command{ffmpeg} to convert an audio file to VBR AAC in an M4A (MP4)
container:
@example
ffmpeg -i input.wav -codec:a libfdk_aac -vbr 3 output.m4a
@end example
@item
Use @command{ffmpeg} to convert an audio file to CBR 64 kbps AAC, using the
High-Efficiency AAC profile:
@example
ffmpeg -i input.wav -c:a libfdk_aac -profile:a aac_he -b:a 64k output.m4a
@end example
@end itemize
@anchor{libmp3lame}
@section libmp3lame
LAME (Lame Ain't an MP3 Encoder) MP3 encoder wrapper.
@@ -782,9 +509,6 @@ Requires the presence of the libmp3lame headers and library during
configuration. You need to explicitly configure the build with
@code{--enable-libmp3lame}.
See @ref{libshine} for a fixed-point MP3 encoder, although with a
lower quality.
@subsection Options
The following options are supported by the libmp3lame wrapper. The
@@ -792,7 +516,7 @@ The following options are supported by the libmp3lame wrapper. The
@table @option
@item b (@emph{-b})
Set bitrate expressed in bits/s for CBR or ABR. LAME @code{bitrate} is
Set bitrate expressed in bits/s for CBR. LAME @code{bitrate} is
expressed in kilobits/s.
@item q (@emph{-V})
@@ -807,18 +531,13 @@ while producing the worst quality.
@item reservoir
Enable use of bit reservoir when set to 1. Default value is 1. LAME
has this enabled by default, but can be overridden by use
has this enabled by default, but can be overriden by use
@option{--nores} option.
@item joint_stereo (@emph{-m j})
Enable the encoder to use (on a frame by frame basis) either L/R
stereo or mid/side stereo. Default value is 1.
@item abr (@emph{--abr})
Enable the encoder to use ABR when set to 1. The @command{lame}
@option{--abr} option sets the target bitrate, while this option only
tells FFmpeg to use ABR and still relies on @option{b} to set the bitrate.
@end table
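As an illustration, constant-quality (VBR) encoding with this wrapper
might be invoked as follows (the quality value 2 is only an
illustration):
@example
ffmpeg -i input.wav -codec:a libmp3lame -q:a 2 output.mp3
@end example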
@section libopencore-amrnb
@@ -858,42 +577,6 @@ default value is 0 (disabled).
@end table
@anchor{libshine}
@section libshine
Shine Fixed-Point MP3 encoder wrapper.
Shine is a fixed-point MP3 encoder. It has a far better performance on
platforms without an FPU, e.g. armel CPUs, and some phones and tablets.
However, as it is more targeted on performance than quality, it is not on par
with LAME and other production-grade encoders quality-wise. Also, according to
the project's homepage, this encoder may not be free of bugs as the code was
written a long time ago and the project was dead for at least 5 years.
This encoder only supports stereo and mono input. This is also CBR-only.
The original project (last updated in early 2007) is at
@url{http://sourceforge.net/projects/libshine-fxp/}. We only support the
updated fork by the Savonet/Liquidsoap project at @url{https://github.com/savonet/shine}.
Requires the presence of the libshine headers and library during
configuration. You need to explicitly configure the build with
@code{--enable-libshine}.
See also @ref{libmp3lame}.
@subsection Options
The following options are supported by the libshine wrapper. The
@command{shineenc}-equivalent of the options are listed in parentheses.
@table @option
@item b (@emph{-b})
Set bitrate expressed in bits/s for CBR. @command{shineenc} @option{-b} option
is expressed in kilobits/s.
@end table
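For example, a possible CBR invocation might be (the bitrate value is
only an illustration):
@example
ffmpeg -i input.wav -codec:a libshine -b:a 128k output.mp3
@end example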
@section libtwolame
TwoLAME MP2 encoder wrapper.
@@ -1032,7 +715,7 @@ configuration. You need to explicitly configure the build with
@subsection Option Mapping
Most libopus options are modelled after the @command{opusenc} utility from
Most libopus options are modeled after the @command{opusenc} utility from
opus-tools. The following is an option mapping chart describing options
supported by the libopus wrapper, and their @command{opusenc}-equivalent
in parentheses.
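For example, a possible invocation might be (assuming the chosen output
container, here Matroska, accepts Opus; the bitrate value is only an
illustration):
@example
ffmpeg -i input.wav -c:a libopus -b:a 96k output.mka
@end example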
@@ -1095,163 +778,32 @@ respectively. The default is 0 (cutoff disabled).
@end table
@section libvorbis
libvorbis encoder wrapper.
Requires the presence of the libvorbisenc headers and library during
configuration. You need to explicitly configure the build with
@code{--enable-libvorbis}.
@subsection Options
The following options are supported by the libvorbis wrapper. The
@command{oggenc}-equivalent of the options are listed in parentheses.
To get a more accurate and extensive documentation of the libvorbis
options, consult the libvorbisenc's and @command{oggenc}'s documentations.
See @url{http://xiph.org/vorbis/},
@url{http://wiki.xiph.org/Vorbis-tools}, and oggenc(1).
@table @option
@item b (@emph{-b})
Set bitrate expressed in bits/s for ABR. @command{oggenc} @option{-b} is
expressed in kilobits/s.
@item q (@emph{-q})
Set constant quality setting for VBR. The value should be a float
number in the range of -1.0 to 10.0. The higher the value, the better
the quality. The default value is @samp{3.0}.
This option is valid only using the @command{ffmpeg} command-line tool.
For library interface users, use @option{global_quality}.
@item cutoff (@emph{--advanced-encode-option lowpass_frequency=N})
Set cutoff bandwidth in Hz, a value of 0 disables cutoff. @command{oggenc}'s
related option is expressed in kHz. The default value is @samp{0} (cutoff
disabled).
@item minrate (@emph{-m})
Set minimum bitrate expressed in bits/s. @command{oggenc} @option{-m} is
expressed in kilobits/s.
@item maxrate (@emph{-M})
Set maximum bitrate expressed in bits/s. @command{oggenc} @option{-M} is
expressed in kilobits/s. This only has effect on ABR mode.
@item iblock (@emph{--advanced-encode-option impulse_noisetune=N})
Set noise floor bias for impulse blocks. The value is a float number from
-15.0 to 0.0. A negative bias instructs the encoder to pay special attention
to the crispness of transients in the encoded audio. The tradeoff for better
transient response is a higher bitrate.
@end table
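For example, a possible VBR invocation using the @option{q} option
described above might be:
@example
ffmpeg -i input.wav -c:a libvorbis -q:a 5 output.ogg
@end example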
@anchor{libwavpack}
@section libwavpack
A wrapper providing WavPack encoding through libwavpack.
Only lossless mode using 32-bit integer samples is supported currently.
Requires the presence of the libwavpack headers and library during
configuration. You need to explicitly configure the build with
@code{--enable-libwavpack}.
Note that a libavcodec-native encoder for the WavPack codec exists so users can
encode audio with this codec without using this encoder. See @ref{wavpackenc}.
@subsection Options
@command{wavpack} command line utility's corresponding options are listed in
parentheses, if any.
The @option{compression_level} option can be used to control speed vs.
compression tradeoff, with the values mapped to libwavpack as follows:
@table @option
@item frame_size (@emph{--blocksize})
Default is 32768.
@item compression_level
Set speed vs. compression tradeoff. Acceptable arguments are listed below:
@table @samp
@item 0 (@emph{-f})
Fast mode.
@item 0
Fast mode - corresponding to the wavpack @option{-f} option.
@item 1
Normal (default) settings.
@item 2 (@emph{-h})
High quality.
@item 2
High quality - corresponding to the wavpack @option{-h} option.
@item 3 (@emph{-hh})
Very high quality.
@item 3
Very high quality - corresponding to the wavpack @option{-hh} option.
@item 4-8 (@emph{-hh -x}@var{EXTRAPROC})
Same as @samp{3}, but with extra processing enabled.
@samp{4} is the same as @option{-x2} and @samp{8} is the same as @option{-x6}.
@end table
@end table
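As an illustration, selecting the high quality mode described above
might look like this (assuming the output is written to a native
WavPack file):
@example
ffmpeg -i input.wav -c:a libwavpack -compression_level 2 output.wv
@end example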
@anchor{wavpackenc}
@section wavpack
WavPack lossless audio encoder.
This is a libavcodec-native WavPack encoder. There is also an encoder based on
libwavpack, but there is virtually no reason to use that encoder.
See also @ref{libwavpack}.
@subsection Options
The equivalent options for @command{wavpack} command line utility are listed in
parentheses.
@subsubsection Shared options
The following shared options are effective for this encoder. Only special notes
about this particular encoder will be documented here. For the general meaning
of the options, see @ref{codec-options,,the Codec Options chapter}.
@table @option
@item frame_size (@emph{--blocksize})
For this encoder, the range for this option is between 128 and 131072. Default
is automatically decided based on the sample rate and the number of channels.
For the complete formula used to calculate the default, see
@file{libavcodec/wavpackenc.c}.
@item compression_level (@emph{-f}, @emph{-h}, @emph{-hh}, and @emph{-x})
This option's syntax is consistent with @ref{libwavpack}'s.
@end table
@subsubsection Private options
@table @option
@item joint_stereo (@emph{-j})
Set whether to enable joint stereo. Valid values are:
@table @samp
@item on (@emph{1})
Force mid/side audio encoding.
@item off (@emph{0})
Force left/right audio encoding.
@item auto
Let the encoder decide automatically.
@end table
@item optimize_mono
Set whether to enable optimization for mono. This option is only effective for
non-mono streams. Available values:
@table @samp
@item on
enabled
@item off
disabled
@end table
@item 4-8
Same as 3, but with extra processing enabled - corresponding to the wavpack
@option{-x} option. I.e. 4 is the same as @option{-x2} and 8 is the same as
@option{-x6}.
@end table
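As an illustration, forcing mid/side encoding with the native encoder
might look like:
@example
ffmpeg -i input.wav -c:a wavpack -joint_stereo on output.wv
@end example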
@@ -1265,15 +817,12 @@ follows.
@section libtheora
libtheora Theora encoder wrapper.
Theora format supported through libtheora.
Requires the presence of the libtheora headers and library during
configuration. You need to explicitly configure the build with
@code{--enable-libtheora}.
For more information about the libtheora project see
@url{http://www.theora.org/}.
@subsection Options
The following global options are mapped to internal libtheora options
@@ -1281,11 +830,11 @@ which affect the quality and the bitrate of the encoded stream.
@table @option
@item b
Set the video bitrate in bit/s for CBR (Constant Bit Rate) mode. In
case VBR (Variable Bit Rate) mode is enabled this option is ignored.
Set the video bitrate, only works if the @code{qscale} flag in
@option{flags} is not enabled.
@item flags
Used to enable constant quality mode (VBR) encoding through the
Used to enable constant quality mode encoding through the
@option{qscale} flag, and to enable the @code{pass1} and @code{pass2}
modes.
@@ -1293,44 +842,22 @@ modes.
Set the GOP size.
@item global_quality
Set the global quality as an integer in lambda units.
Set the global quality in lambda units, only works if the
@code{qscale} flag in @option{flags} is enabled. The value is clipped
in the [0 - 10*@code{FF_QP2LAMBDA}] range, and then multiplied for 6.3
to get a value in the native libtheora range [0-63]. A higher value
corresponds to a higher quality.
Only relevant when VBR mode is enabled with @code{flags +qscale}. The
value is converted to QP units by dividing it by @code{FF_QP2LAMBDA},
clipped in the [0 - 10] range, and then multiplied by 6.3 to get a
value in the native libtheora range [0-63]. A higher value corresponds
to a higher quality.
@item q
Enable VBR mode when set to a non-negative value, and set constant
quality value as a double floating point value in QP units.
The value is clipped in the [0-10] range, and then multiplied by 6.3
to get a value in the native libtheora range [0-63].
This option is valid only using the @command{ffmpeg} command-line
tool. For library interface users, use @option{global_quality}.
For example, to set maximum constant quality encoding with
@command{ffmpeg}:
@example
ffmpeg -i INPUT -flags:v qscale -global_quality:v "10*QP2LAMBDA" -codec:v libtheora OUTPUT.ogg
@end example
@end table
@subsection Examples
@itemize
@item
Set maximum constant quality (VBR) encoding with @command{ffmpeg}:
@example
ffmpeg -i INPUT -codec:v libtheora -q:v 10 OUTPUT.ogg
@end example
@item
Use @command{ffmpeg} to convert a CBR 1000 kbps Theora video stream:
@example
ffmpeg -i INPUT -codec:v libtheora -b:v 1000k OUTPUT.ogg
@end example
@end itemize
@section libvpx
VP8/VP9 format supported through libvpx.
VP8 format supported through libvpx.
Requires the presence of the libvpx headers and library during configuration.
You need to explicitly configure the build with @code{--enable-libvpx}.
@@ -1442,77 +969,12 @@ g_lag_in_frames
@item vp8flags error_resilient
g_error_resilient
@item aq_mode
@code{VP9E_SET_AQ_MODE}
@end table
For more information about libvpx see:
@url{http://www.webmproject.org/}
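As an illustration, producing a WebM file might look like this (audio
is re-encoded to Vorbis here, since WebM does not accept arbitrary
audio codecs; the bitrate value is only an illustration):
@example
ffmpeg -i input.mp4 -c:v libvpx -b:v 1M -c:a libvorbis output.webm
@end example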
@section libwebp
libwebp WebP Image encoder wrapper
libwebp is Google's official encoder for WebP images. It can encode in either
lossy or lossless mode. Lossy images are essentially a wrapper around a VP8
frame. Lossless images are a separate codec developed by Google.
@subsection Pixel Format
Currently, libwebp only supports YUV420 for lossy and RGB for lossless due
to limitations of the format and libwebp. Alpha is supported for either mode.
Because of API limitations, if RGB is passed in when encoding lossy or YUV is
passed in for encoding lossless, the pixel format will automatically be
converted using functions from libwebp. This is not ideal and is done only for
convenience.
@subsection Options
@table @option
@item -lossless @var{boolean}
Enables/Disables use of lossless mode. Default is 0.
@item -compression_level @var{integer}
For lossy, this is a quality/speed tradeoff. Higher values give better quality
for a given size at the cost of increased encoding time. For lossless, this is
a size/speed tradeoff. Higher values give smaller size at the cost of increased
encoding time. More specifically, it controls the number of extra algorithms
and compression tools used, and varies the combination of these tools. This
maps to the @var{method} option in libwebp. The valid range is 0 to 6.
Default is 4.
@item -qscale @var{float}
For lossy encoding, this controls image quality, 0 to 100. For lossless
encoding, this controls the effort and time spent at compressing more. The
default value is 75. Note that for usage via libavcodec, this option is called
@var{global_quality} and must be multiplied by @var{FF_QP2LAMBDA}.
@item -preset @var{type}
Configuration preset. This does some automatic settings based on the general
type of the image.
@table @option
@item none
Do not use a preset.
@item default
Use the encoder default.
@item picture
Digital picture, like portrait, inner shot
@item photo
Outdoor photograph, with natural lighting
@item drawing
Hand or line drawing, with high-contrast details
@item icon
Small-sized colorful images
@item text
Text-like
@end table
@end table
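As an illustration, lossless encoding with the options described above
might look like:
@example
ffmpeg -i input.png -c:v libwebp -lossless 1 -compression_level 6 output.webp
@end example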
@section libx264, libx264rgb
@section libx264
x264 H.264/MPEG-4 AVC encoder wrapper.
@@ -1528,22 +990,12 @@ for detail retention (adaptive quantization, psy-RD, psy-trellis).
Many libx264 encoder options are mapped to FFmpeg global codec
options, while unique encoder options are provided through private
options. Additionally the @option{x264opts} and @option{x264-params}
private options allows one to pass a list of key=value tuples as accepted
private options allows to pass a list of key=value tuples as accepted
by the libx264 @code{x264_param_parse} function.
The x264 project website is at
@url{http://www.videolan.org/developers/x264.html}.
The libx264rgb encoder is the same as libx264, except it accepts packed RGB
pixel formats as input instead of YUV.
@subsection Supported Pixel Formats
x264 supports 8- to 10-bit color spaces. The exact bit depth is controlled at
x264's configure time. FFmpeg only supports one bit depth in one particular
build. In other words, it is not possible to build one FFmpeg with multiple
versions of x264 with different bit depths.
@subsection Options
The following options are supported by the libx264 wrapper. The
@@ -1569,34 +1021,25 @@ kilobits/s.
@item g (@emph{keyint})
@item qmin (@emph{qpmin})
Minimum quantizer scale.
@item qmax (@emph{qpmax})
Maximum quantizer scale.
@item qmin (@emph{qpmin})
@item qdiff (@emph{qpstep})
Maximum difference between quantizer scales.
@item qblur (@emph{qblur})
Quantizer curve blur
@item qcomp (@emph{qcomp})
Quantizer curve compression factor
@item refs (@emph{ref})
Number of reference frames each P-frame can use. The range is from @var{0-16}.
@item sc_threshold (@emph{scenecut})
Sets the threshold for the scene change detection.
@item trellis (@emph{trellis})
Performs Trellis quantization to increase efficiency. Enabled by default.
@item nr (@emph{nr})
@item me_range (@emph{merange})
Maximum range of the motion search in pixels.
@item me_method (@emph{me})
Set motion estimation method. Possible values in the decreasing order
@@ -1618,13 +1061,10 @@ Hadamard exhaustive search (slowest).
@end table
@item subq (@emph{subme})
Sub-pixel motion estimation method.
@item b_strategy (@emph{b-adapt})
Adaptive B-frame placement decision algorithm. Use only on first-pass.
@item keyint_min (@emph{min-keyint})
Minimum GOP size.
@item coder
Set entropy encoder. Possible values:
@@ -1651,7 +1091,6 @@ Ignore chroma in motion estimation. It generates the same effect as
@end table
@item threads (@emph{threads})
Number of encoding threads.
@item thread_type
Set multithreading technique. Possible values:
@@ -1745,10 +1184,6 @@ Enable calculation and printing SSIM stats after the encoding.
Enable the use of Periodic Intra Refresh instead of IDR frames when set
to 1.
@item bluray-compat (@emph{bluray-compat})
Configure the encoder to be compatible with the bluray standard.
It is a shorthand for setting "bluray-compat=1 force-cfr=1".
@item b-bias (@emph{b-bias})
Set the influence on how often B-frames are used.
@@ -1869,7 +1304,7 @@ Override the x264 configuration using a :-separated list of key=value
parameters.
This option is functionally the same as the @option{x264opts}, but is
duplicated for compatibility with the Libav fork.
duplicated for compability with the Libav fork.
For example to specify libx264 encoding options with @command{ffmpeg}:
@example
@@ -1993,80 +1428,6 @@ half pixel and quarter pixel refinement for 8x8 blocks, and rate
distortion-based search using square pattern.
@end table
@item lumi_aq
Enable lumi masking adaptive quantization when set to 1. Default is 0
(disabled).
@item variance_aq
Enable variance adaptive quantization when set to 1. Default is 0
(disabled).
When combined with @option{lumi_aq}, the resulting quality will not
be better than either of the two options used individually. In other
words, the resulting quality will be the worse of the two
effects.
@item ssim
Set structural similarity (SSIM) displaying method. Possible values:
@table @samp
@item off
Disable displaying of SSIM information.
@item avg
Output average SSIM at the end of encoding to stdout. The format of
showing the average SSIM is:
@example
Average SSIM: %f
@end example
For users who are not familiar with C, %f means a float number, or
a decimal (e.g. 0.939232).
@item frame
Output both per-frame SSIM data during encoding and average SSIM at
the end of encoding to stdout. The format of per-frame information
is:
@example
SSIM: avg: %1.3f min: %1.3f max: %1.3f
@end example
For users who are not familiar with C, %1.3f means a float number
rounded to 3 digits after the dot (e.g. 0.932).
@end table
@item ssim_acc
Set SSIM accuracy. Valid options are integers within the range of
0-4, while 0 gives the most accurate result and 4 computes the
fastest.
@end table
@section mpeg2
MPEG-2 video encoder.
@subsection Options
@table @option
@item seq_disp_ext @var{integer}
Specifies if the encoder should write a sequence_display_extension to the
output.
@table @option
@item -1
@itemx auto
Decide automatically to write it or not (this is the default) by checking if
the data to be written is different from the default or unspecified values.
@item 0
@itemx never
Never write it.
@item 1
@itemx always
Always write it.
@end table
@end table
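As an illustration, a possible invocation might look like this
(assuming the encoder is selected as @code{mpeg2video}; the bitrate
value is only an illustration):
@example
ffmpeg -i input.mov -c:v mpeg2video -b:v 5M -seq_disp_ext always output.mpg
@end example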
@section png
@@ -2087,7 +1448,7 @@ Set physical density of pixels, in dots per meter, unset by default
Apple ProRes encoder.
FFmpeg contains 2 ProRes encoders, the prores-aw and prores-ks encoder.
The used encoder can be chosen with the @code{-vcodec} option.
The used encoder can be choosen with the @code{-vcodec} option.
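For example, a possible invocation might be (assuming the encoders are
registered as @code{prores_aw} and @code{prores_ks}):
@example
ffmpeg -i input.mov -vcodec prores_ks output.mov
@end example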
@subsection Private Options for prores-ks
@@ -2150,27 +1511,3 @@ For the fastest encoding speed set the @option{qscale} parameter (4 is the
recommended value) and do not set a size constraint.
@c man end VIDEO ENCODERS
@chapter Subtitles Encoders
@c man begin SUBTITLES ENCODERS
@section dvdsub
This codec encodes the bitmap subtitle format that is used in DVDs.
Typically they are stored in VOBSUB file pairs (*.idx + *.sub),
and they can also be used in Matroska files.
@subsection Options
@table @option
@item even_rows_fix
When set to 1, enable a work-around that makes the number of pixel rows
even in all subtitles. This fixes a problem with some players that
cut off the bottom row if the number is odd. The work-around just adds
a fully transparent row if needed. The overhead is low, typically
one byte per subtitle on average.
By default, this work-around is disabled.
@end table
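As an illustration, re-encoding an existing bitmap subtitle stream to
dvdsub with the work-around enabled might look like this (copying the
audio and video streams is only an illustration):
@example
ffmpeg -i input.mkv -map 0 -c copy -c:s dvdsub -even_rows_fix 1 output.mkv
@end example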
@c man end SUBTITLES ENCODERS


@@ -11,26 +11,20 @@ CFLAGS += -Wall -g
CFLAGS := $(shell pkg-config --cflags $(FFMPEG_LIBS)) $(CFLAGS)
LDLIBS := $(shell pkg-config --libs $(FFMPEG_LIBS)) $(LDLIBS)
EXAMPLES= avio_reading \
decoding_encoding \
demuxing_decoding \
extract_mvs \
EXAMPLES= decoding_encoding \
demuxing \
filtering_video \
filtering_audio \
metadata \
muxing \
remuxing \
resampling_audio \
scaling_video \
transcode_aac \
transcoding \
OBJS=$(addsuffix .o,$(EXAMPLES))
# the following examples make explicit use of the math library
avcodec: LDLIBS += -lm
decoding_encoding: LDLIBS += -lm
muxing: LDLIBS += -lm
resampling_audio: LDLIBS += -lm
.phony: all clean-test clean


@@ -5,19 +5,14 @@ Both following use cases rely on pkg-config and make, thus make sure
that you have them installed and working on your system.
Method 1: build the installed examples in a generic read/write user directory
1) Build the installed examples in a generic read/write user directory
Copy to a read/write user directory and just use "make", it will link
to the libraries on your system, assuming the PKG_CONFIG_PATH is
correctly configured.
Method 2: build the examples in-tree
2) Build the examples in-tree
Assuming you are in the source FFmpeg checkout directory, you need to build
FFmpeg (no need to make install in any prefix). Then just run "make examples".
This will build the examples using the FFmpeg build system. You can clean those
examples using "make examplesclean"
If you want to try the dedicated Makefile examples (to emulate the first
method), go into doc/examples and run a command such as
PKG_CONFIG_PATH=pc-uninstalled make.
FFmpeg (no need to make install in any prefix). Then you can go into the
doc/examples and run a command such as PKG_CONFIG_PATH=pc-uninstalled make.


@@ -1,134 +0,0 @@
/*
* Copyright (c) 2014 Stefano Sabatini
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/**
* @file
* libavformat AVIOContext API example.
*
* Make libavformat demuxer access media content through a custom
* AVIOContext read callback.
* @example avio_reading.c
*/
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavformat/avio.h>
#include <libavutil/file.h>
struct buffer_data {
uint8_t *ptr;
size_t size; ///< size left in the buffer
};
static int read_packet(void *opaque, uint8_t *buf, int buf_size)
{
struct buffer_data *bd = (struct buffer_data *)opaque;
buf_size = FFMIN(buf_size, bd->size);
printf("ptr:%p size:%zu\n", bd->ptr, bd->size);
/* copy internal buffer data to buf */
memcpy(buf, bd->ptr, buf_size);
bd->ptr += buf_size;
bd->size -= buf_size;
return buf_size;
}
int main(int argc, char *argv[])
{
AVFormatContext *fmt_ctx = NULL;
AVIOContext *avio_ctx = NULL;
uint8_t *buffer = NULL, *avio_ctx_buffer = NULL;
size_t buffer_size, avio_ctx_buffer_size = 4096;
char *input_filename = NULL;
int ret = 0;
struct buffer_data bd = { 0 };
if (argc != 2) {
fprintf(stderr, "usage: %s input_file\n"
"API example program to show how to read from a custom buffer "
"accessed through AVIOContext.\n", argv[0]);
return 1;
}
input_filename = argv[1];
/* register codecs and formats and other lavf/lavc components*/
av_register_all();
/* slurp file content into buffer */
ret = av_file_map(input_filename, &buffer, &buffer_size, 0, NULL);
if (ret < 0)
goto end;
/* fill opaque structure used by the AVIOContext read callback */
bd.ptr = buffer;
bd.size = buffer_size;
if (!(fmt_ctx = avformat_alloc_context())) {
ret = AVERROR(ENOMEM);
goto end;
}
avio_ctx_buffer = av_malloc(avio_ctx_buffer_size);
if (!avio_ctx_buffer) {
ret = AVERROR(ENOMEM);
goto end;
}
avio_ctx = avio_alloc_context(avio_ctx_buffer, avio_ctx_buffer_size,
0, &bd, &read_packet, NULL, NULL);
if (!avio_ctx) {
ret = AVERROR(ENOMEM);
goto end;
}
fmt_ctx->pb = avio_ctx;
ret = avformat_open_input(&fmt_ctx, NULL, NULL, NULL);
if (ret < 0) {
fprintf(stderr, "Could not open input\n");
goto end;
}
ret = avformat_find_stream_info(fmt_ctx, NULL);
if (ret < 0) {
fprintf(stderr, "Could not find stream information\n");
goto end;
}
av_dump_format(fmt_ctx, 0, input_filename, 0);
end:
avformat_close_input(&fmt_ctx);
/* note: the internal buffer could have changed, and be != avio_ctx_buffer */
if (avio_ctx) {
av_freep(&avio_ctx->buffer);
av_freep(&avio_ctx);
}
av_file_unmap(buffer, buffer_size);
if (ret < 0) {
fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
return 1;
}
return 0;
}


@@ -24,10 +24,10 @@
* @file
* libavcodec API use example.
*
* @example decoding_encoding.c
* Note that libavcodec only handles codecs (mpeg, mpeg4, etc...),
* not file formats (avi, vob, mp4, mov, mkv, mxf, flv, mpegts, mpegps, etc...). See library 'libavformat' for the
* format handling
* @example doc/examples/decoding_encoding.c
*/
#include <math.h>
@@ -156,7 +156,7 @@ static void audio_encode_example(const char *filename)
}
/* frame containing input raw audio */
frame = av_frame_alloc();
frame = avcodec_alloc_frame();
if (!frame) {
fprintf(stderr, "Could not allocate audio frame\n");
exit(1);
@@ -170,10 +170,6 @@ static void audio_encode_example(const char *filename)
* we calculate the size of the samples buffer in bytes */
buffer_size = av_samples_get_buffer_size(NULL, c->channels, c->frame_size,
c->sample_fmt, 0);
if (buffer_size < 0) {
fprintf(stderr, "Could not get sample buffer size\n");
exit(1);
}
samples = av_malloc(buffer_size);
if (!samples) {
fprintf(stderr, "Could not allocate %d bytes for samples buffer\n",
@@ -191,7 +187,7 @@ static void audio_encode_example(const char *filename)
/* encode a single tone sound */
t = 0;
tincr = 2 * M_PI * 440.0 / c->sample_rate;
for (i = 0; i < 200; i++) {
for(i=0;i<200;i++) {
av_init_packet(&pkt);
pkt.data = NULL; // packet data will be allocated by the encoder
pkt.size = 0;
@@ -231,7 +227,7 @@ static void audio_encode_example(const char *filename)
fclose(f);
av_freep(&samples);
av_frame_free(&frame);
avcodec_free_frame(&frame);
avcodec_close(c);
av_free(c);
}
@@ -291,11 +287,12 @@ static void audio_decode_example(const char *outfilename, const char *filename)
int got_frame = 0;
if (!decoded_frame) {
if (!(decoded_frame = av_frame_alloc())) {
if (!(decoded_frame = avcodec_alloc_frame())) {
fprintf(stderr, "Could not allocate audio frame\n");
exit(1);
}
}
} else
avcodec_get_frame_defaults(decoded_frame);
len = avcodec_decode_audio4(c, decoded_frame, &got_frame, &avpkt);
if (len < 0) {
@@ -307,11 +304,6 @@ static void audio_decode_example(const char *outfilename, const char *filename)
int data_size = av_samples_get_buffer_size(NULL, c->channels,
decoded_frame->nb_samples,
c->sample_fmt, 1);
if (data_size < 0) {
/* This should not occur, checking just for paranoia */
fprintf(stderr, "Failed to calculate data size\n");
exit(1);
}
fwrite(decoded_frame->data[0], 1, data_size, outfile);
}
avpkt.size -= len;
@@ -337,7 +329,7 @@ static void audio_decode_example(const char *outfilename, const char *filename)
avcodec_close(c);
av_free(c);
av_frame_free(&decoded_frame);
avcodec_free_frame(&decoded_frame);
}
/*
@@ -374,18 +366,12 @@ static void video_encode_example(const char *filename, int codec_id)
c->width = 352;
c->height = 288;
/* frames per second */
c->time_base = (AVRational){1,25};
/* emit one intra frame every ten frames
* check frame pict_type before passing frame
* to encoder, if frame->pict_type is AV_PICTURE_TYPE_I
* then gop_size is ignored and the output of encoder
* will always be I frame irrespective to gop_size
*/
c->gop_size = 10;
c->max_b_frames = 1;
c->time_base= (AVRational){1,25};
c->gop_size = 10; /* emit one intra frame every ten frames */
c->max_b_frames=1;
c->pix_fmt = AV_PIX_FMT_YUV420P;
if (codec_id == AV_CODEC_ID_H264)
if(codec_id == AV_CODEC_ID_H264)
av_opt_set(c->priv_data, "preset", "slow", 0);
/* open it */
@@ -400,7 +386,7 @@ static void video_encode_example(const char *filename, int codec_id)
exit(1);
}
frame = av_frame_alloc();
frame = avcodec_alloc_frame();
if (!frame) {
fprintf(stderr, "Could not allocate video frame\n");
exit(1);
@@ -419,7 +405,7 @@ static void video_encode_example(const char *filename, int codec_id)
}
/* encode 1 second of video */
for (i = 0; i < 25; i++) {
for(i=0;i<25;i++) {
av_init_packet(&pkt);
pkt.data = NULL; // packet data will be allocated by the encoder
pkt.size = 0;
@@ -427,15 +413,15 @@ static void video_encode_example(const char *filename, int codec_id)
fflush(stdout);
/* prepare a dummy image */
/* Y */
for (y = 0; y < c->height; y++) {
for (x = 0; x < c->width; x++) {
for(y=0;y<c->height;y++) {
for(x=0;x<c->width;x++) {
frame->data[0][y * frame->linesize[0] + x] = x + y + i * 3;
}
}
/* Cb and Cr */
for (y = 0; y < c->height/2; y++) {
for (x = 0; x < c->width/2; x++) {
for(y=0;y<c->height/2;y++) {
for(x=0;x<c->width/2;x++) {
frame->data[1][y * frame->linesize[1] + x] = 128 + y + i * 2;
frame->data[2][y * frame->linesize[2] + x] = 64 + x + i * 5;
}
@@ -481,7 +467,7 @@ static void video_encode_example(const char *filename, int codec_id)
avcodec_close(c);
av_free(c);
av_freep(&frame->data[0]);
av_frame_free(&frame);
avcodec_free_frame(&frame);
printf("\n");
}
@@ -495,10 +481,10 @@ static void pgm_save(unsigned char *buf, int wrap, int xsize, int ysize,
FILE *f;
int i;
f = fopen(filename,"w");
fprintf(f, "P5\n%d %d\n%d\n", xsize, ysize, 255);
for (i = 0; i < ysize; i++)
fwrite(buf + i * wrap, 1, xsize, f);
f=fopen(filename,"w");
fprintf(f,"P5\n%d %d\n%d\n",xsize,ysize,255);
for(i=0;i<ysize;i++)
fwrite(buf + i * wrap,1,xsize,f);
fclose(f);
}
@@ -579,14 +565,14 @@ static void video_decode_example(const char *outfilename, const char *filename)
exit(1);
}
frame = av_frame_alloc();
frame = avcodec_alloc_frame();
if (!frame) {
fprintf(stderr, "Could not allocate video frame\n");
exit(1);
}
frame_count = 0;
for (;;) {
for(;;) {
avpkt.size = fread(inbuf, 1, INBUF_SIZE, f);
if (avpkt.size == 0)
break;
@@ -623,7 +609,7 @@ static void video_decode_example(const char *outfilename, const char *filename)
avcodec_close(c);
av_free(c);
av_frame_free(&frame);
avcodec_free_frame(&frame);
printf("\n");
}
@@ -640,7 +626,7 @@ int main(int argc, char **argv)
"This program generates a synthetic stream and encodes it to a file\n"
"named test.h264, test.mp2 or test.mpg depending on output_type.\n"
"The encoded stream is then decoded and written to a raw data output.\n"
"output_type must be chosen between 'h264', 'mp2', 'mpg'.\n",
"output_type must be choosen between 'h264', 'mp2', 'mpg'.\n",
argv[0]);
return 1;
}


@@ -22,11 +22,11 @@
/**
* @file
* Demuxing and decoding example.
* libavformat demuxing API use example.
*
* Show how to use the libavformat and libavcodec API to demux and
* decode audio and video data.
* @example demuxing_decoding.c
* @example doc/examples/demuxing.c
*/
#include <libavutil/imgutils.h>
@@ -47,36 +47,25 @@ static uint8_t *video_dst_data[4] = {NULL};
static int video_dst_linesize[4];
static int video_dst_bufsize;
static uint8_t **audio_dst_data = NULL;
static int audio_dst_linesize;
static int audio_dst_bufsize;
static int video_stream_idx = -1, audio_stream_idx = -1;
static AVFrame *frame = NULL;
static AVPacket pkt;
static int video_frame_count = 0;
static int audio_frame_count = 0;
/* The different ways of decoding and managing data memory. You are not
* supposed to support all the modes in your application but pick the one most
* appropriate to your needs. Look for the use of api_mode in this example to
* see what are the differences of API usage between them */
enum {
API_MODE_OLD = 0, /* old method, deprecated */
API_MODE_NEW_API_REF_COUNT = 1, /* new method, using the frame reference counting */
API_MODE_NEW_API_NO_REF_COUNT = 2, /* new method, without reference counting */
};
static int api_mode = API_MODE_OLD;
static int decode_packet(int *got_frame, int cached)
{
int ret = 0;
int decoded = pkt.size;
*got_frame = 0;
if (pkt.stream_index == video_stream_idx) {
/* decode video frame */
ret = avcodec_decode_video2(video_dec_ctx, frame, got_frame, &pkt);
if (ret < 0) {
fprintf(stderr, "Error decoding video frame (%s)\n", av_err2str(ret));
fprintf(stderr, "Error decoding video frame\n");
return ret;
}
@@ -99,40 +88,40 @@ static int decode_packet(int *got_frame, int cached)
/* decode audio frame */
ret = avcodec_decode_audio4(audio_dec_ctx, frame, got_frame, &pkt);
if (ret < 0) {
fprintf(stderr, "Error decoding audio frame (%s)\n", av_err2str(ret));
fprintf(stderr, "Error decoding audio frame\n");
return ret;
}
/* Some audio decoders decode only part of the packet, and have to be
* called again with the remainder of the packet data.
* Sample: fate-suite/lossless-audio/luckynight-partial.shn
* Also, some decoders might over-read the packet. */
decoded = FFMIN(ret, pkt.size);
if (*got_frame) {
size_t unpadded_linesize = frame->nb_samples * av_get_bytes_per_sample(frame->format);
printf("audio_frame%s n:%d nb_samples:%d pts:%s\n",
cached ? "(cached)" : "",
audio_frame_count++, frame->nb_samples,
av_ts2timestr(frame->pts, &audio_dec_ctx->time_base));
/* Write the raw audio data samples of the first plane. This works
* fine for packed formats (e.g. AV_SAMPLE_FMT_S16). However,
* most audio decoders output planar audio, which uses a separate
* plane of audio samples for each channel (e.g. AV_SAMPLE_FMT_S16P).
* In other words, this code will write only the first audio channel
* in these cases.
* You should use libswresample or libavfilter to convert the frame
* to packed data. */
fwrite(frame->extended_data[0], 1, unpadded_linesize, audio_dst_file);
ret = av_samples_alloc(audio_dst_data, &audio_dst_linesize, av_frame_get_channels(frame),
frame->nb_samples, frame->format, 1);
if (ret < 0) {
fprintf(stderr, "Could not allocate audio buffer\n");
return AVERROR(ENOMEM);
}
/* TODO: extend return code of the av_samples_* functions so that this call is not needed */
audio_dst_bufsize =
av_samples_get_buffer_size(NULL, av_frame_get_channels(frame),
frame->nb_samples, frame->format, 1);
/* copy audio data to destination buffer:
* this is required since rawaudio expects non aligned data */
av_samples_copy(audio_dst_data, frame->data, 0, 0,
frame->nb_samples, av_frame_get_channels(frame), frame->format);
/* write to rawaudio file */
fwrite(audio_dst_data[0], 1, audio_dst_bufsize, audio_dst_file);
av_freep(&audio_dst_data[0]);
}
}
/* If we use the new API with reference counting, we own the data and need
* to de-reference it when we don't use it anymore */
if (*got_frame && api_mode == API_MODE_NEW_API_REF_COUNT)
av_frame_unref(frame);
return decoded;
return ret;
}
static int open_codec_context(int *stream_idx,
@@ -142,7 +131,6 @@ static int open_codec_context(int *stream_idx,
AVStream *st;
AVCodecContext *dec_ctx = NULL;
AVCodec *dec = NULL;
AVDictionary *opts = NULL;
ret = av_find_best_stream(fmt_ctx, type, -1, -1, NULL, 0);
if (ret < 0) {
@@ -159,13 +147,10 @@ static int open_codec_context(int *stream_idx,
if (!dec) {
fprintf(stderr, "Failed to find %s codec\n",
av_get_media_type_string(type));
return AVERROR(EINVAL);
return ret;
}
/* Init the decoders, with or without reference counting */
if (api_mode == API_MODE_NEW_API_REF_COUNT)
av_dict_set(&opts, "refcounted_frames", "1", 0);
if ((ret = avcodec_open2(dec_ctx, dec, &opts)) < 0) {
if ((ret = avcodec_open2(dec_ctx, dec, NULL)) < 0) {
fprintf(stderr, "Failed to open %s codec\n",
av_get_media_type_string(type));
return ret;
@@ -208,31 +193,15 @@ int main (int argc, char **argv)
{
int ret = 0, got_frame;
if (argc != 4 && argc != 5) {
fprintf(stderr, "usage: %s [-refcount=<old|new_norefcount|new_refcount>] "
"input_file video_output_file audio_output_file\n"
if (argc != 4) {
fprintf(stderr, "usage: %s input_file video_output_file audio_output_file\n"
"API example program to show how to read frames from an input file.\n"
"This program reads frames from a file, decodes them, and writes decoded\n"
"video frames to a rawvideo file named video_output_file, and decoded\n"
"audio frames to a rawaudio file named audio_output_file.\n\n"
"If the -refcount option is specified, the program use the\n"
"reference counting frame system which allows keeping a copy of\n"
"the data for longer than one decode call. If unset, it's using\n"
"the classic old method.\n"
"audio frames to a rawaudio file named audio_output_file.\n"
"\n", argv[0]);
exit(1);
}
if (argc == 5) {
const char *mode = argv[1] + strlen("-refcount=");
if (!strcmp(mode, "old")) api_mode = API_MODE_OLD;
else if (!strcmp(mode, "new_norefcount")) api_mode = API_MODE_NEW_API_NO_REF_COUNT;
else if (!strcmp(mode, "new_refcount")) api_mode = API_MODE_NEW_API_REF_COUNT;
else {
fprintf(stderr, "unknow mode '%s'\n", mode);
exit(1);
}
argv++;
}
src_filename = argv[1];
video_dst_filename = argv[2];
audio_dst_filename = argv[3];
@@ -275,14 +244,25 @@ int main (int argc, char **argv)
}
if (open_codec_context(&audio_stream_idx, fmt_ctx, AVMEDIA_TYPE_AUDIO) >= 0) {
int nb_planes;
audio_stream = fmt_ctx->streams[audio_stream_idx];
audio_dec_ctx = audio_stream->codec;
audio_dst_file = fopen(audio_dst_filename, "wb");
if (!audio_dst_file) {
fprintf(stderr, "Could not open destination file %s\n", audio_dst_filename);
fprintf(stderr, "Could not open destination file %s\n", video_dst_filename);
ret = 1;
goto end;
}
nb_planes = av_sample_fmt_is_planar(audio_dec_ctx->sample_fmt) ?
audio_dec_ctx->channels : 1;
audio_dst_data = av_mallocz(sizeof(uint8_t *) * nb_planes);
if (!audio_dst_data) {
fprintf(stderr, "Could not allocate audio data buffers\n");
ret = AVERROR(ENOMEM);
goto end;
}
}
/* dump input information to stderr */
@@ -294,12 +274,7 @@ int main (int argc, char **argv)
goto end;
}
/* When using the new API, you need to use the libavutil/frame.h API, while
* the classic frame management is available in libavcodec */
if (api_mode == API_MODE_OLD)
frame = avcodec_alloc_frame();
else
frame = av_frame_alloc();
frame = avcodec_alloc_frame();
if (!frame) {
fprintf(stderr, "Could not allocate frame\n");
ret = AVERROR(ENOMEM);
@@ -318,15 +293,8 @@ int main (int argc, char **argv)
/* read frames from the file */
while (av_read_frame(fmt_ctx, &pkt) >= 0) {
AVPacket orig_pkt = pkt;
do {
ret = decode_packet(&got_frame, 0);
if (ret < 0)
break;
pkt.data += ret;
pkt.size -= ret;
} while (pkt.size > 0);
av_free_packet(&orig_pkt);
decode_packet(&got_frame, 0);
av_free_packet(&pkt);
}
/* flush cached frames */
@@ -346,41 +314,29 @@ int main (int argc, char **argv)
}
if (audio_stream) {
enum AVSampleFormat sfmt = audio_dec_ctx->sample_fmt;
int n_channels = audio_dec_ctx->channels;
const char *fmt;
if (av_sample_fmt_is_planar(sfmt)) {
const char *packed = av_get_sample_fmt_name(sfmt);
printf("Warning: the sample format the decoder produced is planar "
"(%s). This example will output the first channel only.\n",
packed ? packed : "?");
sfmt = av_get_packed_sample_fmt(sfmt);
n_channels = 1;
}
if ((ret = get_format_from_sample_fmt(&fmt, sfmt)) < 0)
if ((ret = get_format_from_sample_fmt(&fmt, audio_dec_ctx->sample_fmt)) < 0)
goto end;
printf("Play the output audio file with the command:\n"
"ffplay -f %s -ac %d -ar %d %s\n",
fmt, n_channels, audio_dec_ctx->sample_rate,
fmt, audio_dec_ctx->channels, audio_dec_ctx->sample_rate,
audio_dst_filename);
}
end:
avcodec_close(video_dec_ctx);
avcodec_close(audio_dec_ctx);
if (video_dec_ctx)
avcodec_close(video_dec_ctx);
if (audio_dec_ctx)
avcodec_close(audio_dec_ctx);
avformat_close_input(&fmt_ctx);
if (video_dst_file)
fclose(video_dst_file);
if (audio_dst_file)
fclose(audio_dst_file);
if (api_mode == API_MODE_OLD)
avcodec_free_frame(&frame);
else
av_frame_free(&frame);
av_free(frame);
av_free(video_dst_data[0]);
av_free(audio_dst_data);
return ret < 0;
}


@@ -1,185 +0,0 @@
/*
* Copyright (c) 2012 Stefano Sabatini
* Copyright (c) 2014 Clément Bœsch
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include <libavutil/motion_vector.h>
#include <libavformat/avformat.h>
static AVFormatContext *fmt_ctx = NULL;
static AVCodecContext *video_dec_ctx = NULL;
static AVStream *video_stream = NULL;
static const char *src_filename = NULL;
static int video_stream_idx = -1;
static AVFrame *frame = NULL;
static AVPacket pkt;
static int video_frame_count = 0;
static int decode_packet(int *got_frame, int cached)
{
int decoded = pkt.size;
*got_frame = 0;
if (pkt.stream_index == video_stream_idx) {
int ret = avcodec_decode_video2(video_dec_ctx, frame, got_frame, &pkt);
if (ret < 0) {
fprintf(stderr, "Error decoding video frame (%s)\n", av_err2str(ret));
return ret;
}
if (*got_frame) {
int i;
AVFrameSideData *sd;
video_frame_count++;
sd = av_frame_get_side_data(frame, AV_FRAME_DATA_MOTION_VECTORS);
if (sd) {
const AVMotionVector *mvs = (const AVMotionVector *)sd->data;
for (i = 0; i < sd->size / sizeof(*mvs); i++) {
const AVMotionVector *mv = &mvs[i];
printf("%d,%2d,%2d,%2d,%4d,%4d,%4d,%4d,0x%"PRIx64"\n",
video_frame_count, mv->source,
mv->w, mv->h, mv->src_x, mv->src_y,
mv->dst_x, mv->dst_y, mv->flags);
}
}
}
}
return decoded;
}
static int open_codec_context(int *stream_idx,
AVFormatContext *fmt_ctx, enum AVMediaType type)
{
int ret;
AVStream *st;
AVCodecContext *dec_ctx = NULL;
AVCodec *dec = NULL;
AVDictionary *opts = NULL;
ret = av_find_best_stream(fmt_ctx, type, -1, -1, NULL, 0);
if (ret < 0) {
fprintf(stderr, "Could not find %s stream in input file '%s'\n",
av_get_media_type_string(type), src_filename);
return ret;
} else {
*stream_idx = ret;
st = fmt_ctx->streams[*stream_idx];
/* find decoder for the stream */
dec_ctx = st->codec;
dec = avcodec_find_decoder(dec_ctx->codec_id);
if (!dec) {
fprintf(stderr, "Failed to find %s codec\n",
av_get_media_type_string(type));
return AVERROR(EINVAL);
}
/* Init the video decoder */
av_dict_set(&opts, "flags2", "+export_mvs", 0);
if ((ret = avcodec_open2(dec_ctx, dec, &opts)) < 0) {
fprintf(stderr, "Failed to open %s codec\n",
av_get_media_type_string(type));
return ret;
}
}
return 0;
}
int main(int argc, char **argv)
{
int ret = 0, got_frame;
if (argc != 2) {
fprintf(stderr, "Usage: %s <video>\n", argv[0]);
exit(1);
}
src_filename = argv[1];
av_register_all();
if (avformat_open_input(&fmt_ctx, src_filename, NULL, NULL) < 0) {
fprintf(stderr, "Could not open source file %s\n", src_filename);
exit(1);
}
if (avformat_find_stream_info(fmt_ctx, NULL) < 0) {
fprintf(stderr, "Could not find stream information\n");
exit(1);
}
if (open_codec_context(&video_stream_idx, fmt_ctx, AVMEDIA_TYPE_VIDEO) >= 0) {
video_stream = fmt_ctx->streams[video_stream_idx];
video_dec_ctx = video_stream->codec;
}
av_dump_format(fmt_ctx, 0, src_filename, 0);
if (!video_stream) {
fprintf(stderr, "Could not find video stream in the input, aborting\n");
ret = 1;
goto end;
}
frame = av_frame_alloc();
if (!frame) {
fprintf(stderr, "Could not allocate frame\n");
ret = AVERROR(ENOMEM);
goto end;
}
printf("framenum,source,blockw,blockh,srcx,srcy,dstx,dsty,flags\n");
/* initialize packet, set data to NULL, let the demuxer fill it */
av_init_packet(&pkt);
pkt.data = NULL;
pkt.size = 0;
/* read frames from the file */
while (av_read_frame(fmt_ctx, &pkt) >= 0) {
AVPacket orig_pkt = pkt;
do {
ret = decode_packet(&got_frame, 0);
if (ret < 0)
break;
pkt.data += ret;
pkt.size -= ret;
} while (pkt.size > 0);
av_free_packet(&orig_pkt);
}
/* flush cached frames */
pkt.data = NULL;
pkt.size = 0;
do {
decode_packet(&got_frame, 1);
} while (got_frame);
end:
avcodec_close(video_dec_ctx);
avformat_close_input(&fmt_ctx);
av_frame_free(&frame);
return ret < 0;
}


@@ -1,365 +0,0 @@
/*
* copyright (c) 2013 Andrew Kelley
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file
* libavfilter API usage example.
*
* @example filter_audio.c
* This example will generate sine wave audio,
* pass it through a simple filter chain, and then compute the MD5 checksum of
* the output data.
*
* The filter chain it uses is:
* (input) -> abuffer -> volume -> aformat -> abuffersink -> (output)
*
* abuffer: This provides the endpoint where you can feed the decoded samples.
* volume: In this example we hardcode it to 0.90.
* aformat: This converts the samples to the samplefreq, channel layout,
* and sample format required by the audio device.
* abuffersink: This provides the endpoint where you can read the samples after
* they have passed through the filter chain.
*/
#include <inttypes.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include "libavutil/channel_layout.h"
#include "libavutil/md5.h"
#include "libavutil/mem.h"
#include "libavutil/opt.h"
#include "libavutil/samplefmt.h"
#include "libavfilter/avfilter.h"
#include "libavfilter/buffersink.h"
#include "libavfilter/buffersrc.h"
#define INPUT_SAMPLERATE 48000
#define INPUT_FORMAT AV_SAMPLE_FMT_FLTP
#define INPUT_CHANNEL_LAYOUT AV_CH_LAYOUT_5POINT0
#define VOLUME_VAL 0.90
static int init_filter_graph(AVFilterGraph **graph, AVFilterContext **src,
AVFilterContext **sink)
{
AVFilterGraph *filter_graph;
AVFilterContext *abuffer_ctx;
AVFilter *abuffer;
AVFilterContext *volume_ctx;
AVFilter *volume;
AVFilterContext *aformat_ctx;
AVFilter *aformat;
AVFilterContext *abuffersink_ctx;
AVFilter *abuffersink;
AVDictionary *options_dict = NULL;
uint8_t options_str[1024];
uint8_t ch_layout[64];
int err;
/* Create a new filtergraph, which will contain all the filters. */
filter_graph = avfilter_graph_alloc();
if (!filter_graph) {
fprintf(stderr, "Unable to create filter graph.\n");
return AVERROR(ENOMEM);
}
/* Create the abuffer filter;
* it will be used for feeding the data into the graph. */
abuffer = avfilter_get_by_name("abuffer");
if (!abuffer) {
fprintf(stderr, "Could not find the abuffer filter.\n");
return AVERROR_FILTER_NOT_FOUND;
}
abuffer_ctx = avfilter_graph_alloc_filter(filter_graph, abuffer, "src");
if (!abuffer_ctx) {
fprintf(stderr, "Could not allocate the abuffer instance.\n");
return AVERROR(ENOMEM);
}
/* Set the filter options through the AVOptions API. */
av_get_channel_layout_string(ch_layout, sizeof(ch_layout), 0, INPUT_CHANNEL_LAYOUT);
av_opt_set (abuffer_ctx, "channel_layout", ch_layout, AV_OPT_SEARCH_CHILDREN);
av_opt_set (abuffer_ctx, "sample_fmt", av_get_sample_fmt_name(INPUT_FORMAT), AV_OPT_SEARCH_CHILDREN);
av_opt_set_q (abuffer_ctx, "time_base", (AVRational){ 1, INPUT_SAMPLERATE }, AV_OPT_SEARCH_CHILDREN);
av_opt_set_int(abuffer_ctx, "sample_rate", INPUT_SAMPLERATE, AV_OPT_SEARCH_CHILDREN);
/* Now initialize the filter; we pass NULL options, since we have already
* set all the options above. */
err = avfilter_init_str(abuffer_ctx, NULL);
if (err < 0) {
fprintf(stderr, "Could not initialize the abuffer filter.\n");
return err;
}
/* Create volume filter. */
volume = avfilter_get_by_name("volume");
if (!volume) {
fprintf(stderr, "Could not find the volume filter.\n");
return AVERROR_FILTER_NOT_FOUND;
}
volume_ctx = avfilter_graph_alloc_filter(filter_graph, volume, "volume");
if (!volume_ctx) {
fprintf(stderr, "Could not allocate the volume instance.\n");
return AVERROR(ENOMEM);
}
/* A different way of passing the options is as key/value pairs in a
* dictionary. */
av_dict_set(&options_dict, "volume", AV_STRINGIFY(VOLUME_VAL), 0);
err = avfilter_init_dict(volume_ctx, &options_dict);
av_dict_free(&options_dict);
if (err < 0) {
fprintf(stderr, "Could not initialize the volume filter.\n");
return err;
}
/* Create the aformat filter;
* it ensures that the output is of the format we want. */
aformat = avfilter_get_by_name("aformat");
if (!aformat) {
fprintf(stderr, "Could not find the aformat filter.\n");
return AVERROR_FILTER_NOT_FOUND;
}
aformat_ctx = avfilter_graph_alloc_filter(filter_graph, aformat, "aformat");
if (!aformat_ctx) {
fprintf(stderr, "Could not allocate the aformat instance.\n");
return AVERROR(ENOMEM);
}
/* A third way of passing the options is in a string of the form
* key1=value1:key2=value2.... */
snprintf(options_str, sizeof(options_str),
"sample_fmts=%s:sample_rates=%d:channel_layouts=0x%"PRIx64,
av_get_sample_fmt_name(AV_SAMPLE_FMT_S16), 44100,
(uint64_t)AV_CH_LAYOUT_STEREO);
err = avfilter_init_str(aformat_ctx, options_str);
if (err < 0) {
av_log(NULL, AV_LOG_ERROR, "Could not initialize the aformat filter.\n");
return err;
}
/* Finally create the abuffersink filter;
* it will be used to get the filtered data out of the graph. */
abuffersink = avfilter_get_by_name("abuffersink");
if (!abuffersink) {
fprintf(stderr, "Could not find the abuffersink filter.\n");
return AVERROR_FILTER_NOT_FOUND;
}
abuffersink_ctx = avfilter_graph_alloc_filter(filter_graph, abuffersink, "sink");
if (!abuffersink_ctx) {
fprintf(stderr, "Could not allocate the abuffersink instance.\n");
return AVERROR(ENOMEM);
}
/* This filter takes no options. */
err = avfilter_init_str(abuffersink_ctx, NULL);
if (err < 0) {
fprintf(stderr, "Could not initialize the abuffersink instance.\n");
return err;
}
/* Connect the filters;
* in this simple case the filters just form a linear chain. */
err = avfilter_link(abuffer_ctx, 0, volume_ctx, 0);
if (err >= 0)
err = avfilter_link(volume_ctx, 0, aformat_ctx, 0);
if (err >= 0)
err = avfilter_link(aformat_ctx, 0, abuffersink_ctx, 0);
if (err < 0) {
fprintf(stderr, "Error connecting filters\n");
return err;
}
/* Configure the graph. */
err = avfilter_graph_config(filter_graph, NULL);
if (err < 0) {
av_log(NULL, AV_LOG_ERROR, "Error configuring the filter graph\n");
return err;
}
*graph = filter_graph;
*src = abuffer_ctx;
*sink = abuffersink_ctx;
return 0;
}
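/* The three option-setting styles used in init_filter_graph() above are
 * interchangeable. As an illustration only, the volume filter could equally
 * have been configured with any of:
 *
 *     av_opt_set(volume_ctx, "volume", "0.90", AV_OPT_SEARCH_CHILDREN);
 *     avfilter_init_str(volume_ctx, NULL);
 *
 *     av_dict_set(&options_dict, "volume", "0.90", 0);
 *     avfilter_init_dict(volume_ctx, &options_dict);
 *
 *     avfilter_init_str(volume_ctx, "volume=0.90");
 */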
/* Do something useful with the filtered data: this simple
* example just prints the MD5 checksum of each plane to stdout. */
static int process_output(struct AVMD5 *md5, AVFrame *frame)
{
int planar = av_sample_fmt_is_planar(frame->format);
int channels = av_get_channel_layout_nb_channels(frame->channel_layout);
int planes = planar ? channels : 1;
int bps = av_get_bytes_per_sample(frame->format);
int plane_size = bps * frame->nb_samples * (planar ? 1 : channels);
int i, j;
for (i = 0; i < planes; i++) {
uint8_t checksum[16];
av_md5_init(md5);
av_md5_sum(checksum, frame->extended_data[i], plane_size);
fprintf(stdout, "plane %d: 0x", i);
for (j = 0; j < sizeof(checksum); j++)
fprintf(stdout, "%02X", checksum[j]);
fprintf(stdout, "\n");
}
fprintf(stdout, "\n");
return 0;
}
/* Construct a frame of audio data to be filtered;
* this simple example just synthesizes a sine wave. */
static int get_input(AVFrame *frame, int frame_num)
{
int err, i, j;
#define FRAME_SIZE 1024
/* Set up the frame properties and allocate the buffer for the data. */
frame->sample_rate = INPUT_SAMPLERATE;
frame->format = INPUT_FORMAT;
frame->channel_layout = INPUT_CHANNEL_LAYOUT;
frame->nb_samples = FRAME_SIZE;
frame->pts = frame_num * FRAME_SIZE;
err = av_frame_get_buffer(frame, 0);
if (err < 0)
return err;
/* Fill the data for each channel. */
for (i = 0; i < 5; i++) {
float *data = (float*)frame->extended_data[i];
for (j = 0; j < frame->nb_samples; j++)
data[j] = sin(2 * M_PI * (frame_num + j) * (i + 1) / FRAME_SIZE);
}
return 0;
}
int main(int argc, char *argv[])
{
struct AVMD5 *md5;
AVFilterGraph *graph;
AVFilterContext *src, *sink;
AVFrame *frame;
uint8_t errstr[1024];
float duration;
int err, nb_frames, i;
if (argc < 2) {
fprintf(stderr, "Usage: %s <duration>\n", argv[0]);
return 1;
}
duration = atof(argv[1]);
nb_frames = duration * INPUT_SAMPLERATE / FRAME_SIZE;
if (nb_frames <= 0) {
fprintf(stderr, "Invalid duration: %s\n", argv[1]);
return 1;
}
avfilter_register_all();
/* Allocate the frame we will be using to store the data. */
frame = av_frame_alloc();
if (!frame) {
fprintf(stderr, "Error allocating the frame\n");
return 1;
}
md5 = av_md5_alloc();
if (!md5) {
fprintf(stderr, "Error allocating the MD5 context\n");
return 1;
}
/* Set up the filtergraph. */
err = init_filter_graph(&graph, &src, &sink);
if (err < 0) {
fprintf(stderr, "Unable to init filter graph:");
goto fail;
}
/* the main filtering loop */
for (i = 0; i < nb_frames; i++) {
/* get an input frame to be filtered */
err = get_input(frame, i);
if (err < 0) {
fprintf(stderr, "Error generating input frame:");
goto fail;
}
/* Send the frame to the input of the filtergraph. */
err = av_buffersrc_add_frame(src, frame);
if (err < 0) {
av_frame_unref(frame);
fprintf(stderr, "Error submitting the frame to the filtergraph:");
goto fail;
}
/* Get all the filtered output that is available. */
while ((err = av_buffersink_get_frame(sink, frame)) >= 0) {
/* now do something with our filtered frame */
err = process_output(md5, frame);
if (err < 0) {
fprintf(stderr, "Error processing the filtered frame:");
goto fail;
}
av_frame_unref(frame);
}
if (err == AVERROR(EAGAIN)) {
/* Need to feed more frames in. */
continue;
} else if (err == AVERROR_EOF) {
/* Nothing more to do, finish. */
break;
} else if (err < 0) {
/* An error occurred. */
fprintf(stderr, "Error filtering the data:");
goto fail;
}
}
avfilter_graph_free(&graph);
av_frame_free(&frame);
av_freep(&md5);
return 0;
fail:
av_strerror(err, errstr, sizeof(errstr));
fprintf(stderr, "%s\n", errstr);
return 1;
}
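/* A short summary of the av_buffersink_get_frame() contract used in the loop
 * above: it returns 0 and fills the frame when filtered data is available,
 * AVERROR(EAGAIN) when the graph needs more input before it can produce
 * output, AVERROR_EOF once the graph has been fully flushed, and any other
 * negative value on a real error. */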


@@ -25,7 +25,7 @@
/**
* @file
* API example for audio decoding and filtering
* @example filtering_audio.c
* @example doc/examples/filtering_audio.c
*/
#include <unistd.h>
@@ -38,8 +38,8 @@
#include <libavfilter/buffersrc.h>
#include <libavutil/opt.h>
static const char *filter_descr = "aresample=8000,aformat=sample_fmts=s16:channel_layouts=mono";
static const char *player = "ffplay -f s16le -ar 8000 -ac 1 -";
const char *filter_descr = "aresample=8000,aformat=sample_fmts=s16:channel_layouts=mono";
const char *player = "ffplay -f s16le -ar 8000 -ac 1 -";
static AVFormatContext *fmt_ctx;
static AVCodecContext *dec_ctx;
@@ -85,22 +85,18 @@ static int open_input_file(const char *filename)
static int init_filters(const char *filters_descr)
{
char args[512];
int ret = 0;
int ret;
AVFilter *abuffersrc = avfilter_get_by_name("abuffer");
AVFilter *abuffersink = avfilter_get_by_name("abuffersink");
AVFilterInOut *outputs = avfilter_inout_alloc();
AVFilterInOut *inputs = avfilter_inout_alloc();
static const enum AVSampleFormat out_sample_fmts[] = { AV_SAMPLE_FMT_S16, -1 };
static const int64_t out_channel_layouts[] = { AV_CH_LAYOUT_MONO, -1 };
static const int out_sample_rates[] = { 8000, -1 };
const enum AVSampleFormat out_sample_fmts[] = { AV_SAMPLE_FMT_S16, -1 };
const int64_t out_channel_layouts[] = { AV_CH_LAYOUT_MONO, -1 };
const int out_sample_rates[] = { 8000, -1 };
const AVFilterLink *outlink;
AVRational time_base = fmt_ctx->streams[audio_stream_index]->time_base;
filter_graph = avfilter_graph_alloc();
if (!outputs || !inputs || !filter_graph) {
ret = AVERROR(ENOMEM);
goto end;
}
/* buffer audio source: the decoded frames from the decoder will be inserted here. */
if (!dec_ctx->channel_layout)
@@ -113,7 +109,7 @@ static int init_filters(const char *filters_descr)
args, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer source\n");
goto end;
return ret;
}
/* buffer audio sink: to terminate the filter chain. */
@@ -121,28 +117,28 @@ static int init_filters(const char *filters_descr)
NULL, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer sink\n");
goto end;
return ret;
}
ret = av_opt_set_int_list(buffersink_ctx, "sample_fmts", out_sample_fmts, -1,
AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output sample format\n");
goto end;
return ret;
}
ret = av_opt_set_int_list(buffersink_ctx, "channel_layouts", out_channel_layouts, -1,
AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output channel layout\n");
goto end;
return ret;
}
ret = av_opt_set_int_list(buffersink_ctx, "sample_rates", out_sample_rates, -1,
AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output sample rate\n");
goto end;
return ret;
}
/* Endpoints for the filter graph. */
@@ -157,11 +153,11 @@ static int init_filters(const char *filters_descr)
inputs->next = NULL;
if ((ret = avfilter_graph_parse_ptr(filter_graph, filters_descr,
&inputs, &outputs, NULL)) < 0)
goto end;
&inputs, &outputs, NULL)) < 0)
return ret;
if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
goto end;
return ret;
/* Print summary of the sink buffer
* Note: args buffer is reused to store channel layout string */
@@ -172,11 +168,7 @@ static int init_filters(const char *filters_descr)
(char *)av_x_if_null(av_get_sample_fmt_name(outlink->format), "?"),
args);
end:
avfilter_inout_free(&inputs);
avfilter_inout_free(&outputs);
return ret;
return 0;
}
static void print_frame(const AVFrame *frame)
@@ -196,7 +188,7 @@ static void print_frame(const AVFrame *frame)
int main(int argc, char **argv)
{
int ret;
AVPacket packet0, packet;
AVPacket packet;
AVFrame *frame = av_frame_alloc();
AVFrame *filt_frame = av_frame_alloc();
int got_frame;
@@ -210,6 +202,7 @@ int main(int argc, char **argv)
exit(1);
}
avcodec_register_all();
av_register_all();
avfilter_register_all();
@@ -219,24 +212,18 @@ int main(int argc, char **argv)
goto end;
/* read all packets */
packet0.data = NULL;
packet.data = NULL;
while (1) {
if (!packet0.data) {
if ((ret = av_read_frame(fmt_ctx, &packet)) < 0)
break;
packet0 = packet;
}
if ((ret = av_read_frame(fmt_ctx, &packet)) < 0)
break;
if (packet.stream_index == audio_stream_index) {
avcodec_get_frame_defaults(frame);
got_frame = 0;
ret = avcodec_decode_audio4(dec_ctx, frame, &got_frame, &packet);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error decoding audio\n");
continue;
}
packet.size -= ret;
packet.data += ret;
if (got_frame) {
/* push the audio data from decoded frame into the filtergraph */
@@ -248,31 +235,29 @@ int main(int argc, char **argv)
/* pull filtered audio from the filtergraph */
while (1) {
ret = av_buffersink_get_frame(buffersink_ctx, filt_frame);
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
if(ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
break;
if (ret < 0)
if(ret < 0)
goto end;
print_frame(filt_frame);
av_frame_unref(filt_frame);
}
}
if (packet.size <= 0)
av_free_packet(&packet0);
} else {
/* discard non-wanted packets */
av_free_packet(&packet0);
}
av_free_packet(&packet);
}
end:
avfilter_graph_free(&filter_graph);
avcodec_close(dec_ctx);
if (dec_ctx)
avcodec_close(dec_ctx);
avformat_close_input(&fmt_ctx);
av_frame_free(&frame);
av_frame_free(&filt_frame);
if (ret < 0 && ret != AVERROR_EOF) {
fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
char buf[1024];
av_strerror(ret, buf, sizeof(buf));
fprintf(stderr, "Error occurred: %s\n", buf);
exit(1);
}


@@ -24,7 +24,7 @@
/**
* @file
* API example for decoding and filtering
* @example filtering_video.c
* @example doc/examples/filtering_video.c
*/
#define _XOPEN_SOURCE 600 /* for usleep */
@@ -36,7 +36,6 @@
#include <libavfilter/avcodec.h>
#include <libavfilter/buffersink.h>
#include <libavfilter/buffersrc.h>
#include <libavutil/opt.h>
const char *filter_descr = "scale=78:24";
@@ -71,7 +70,6 @@ static int open_input_file(const char *filename)
}
video_stream_index = ret;
dec_ctx = fmt_ctx->streams[video_stream_index]->codec;
av_opt_set_int(dec_ctx, "refcounted_frames", 1, 0);
/* init the video decoder */
if ((ret = avcodec_open2(dec_ctx, dec, NULL)) < 0) {
@@ -85,18 +83,15 @@ static int open_input_file(const char *filename)
static int init_filters(const char *filters_descr)
{
char args[512];
int ret = 0;
int ret;
AVFilter *buffersrc = avfilter_get_by_name("buffer");
AVFilter *buffersink = avfilter_get_by_name("buffersink");
AVFilterInOut *outputs = avfilter_inout_alloc();
AVFilterInOut *inputs = avfilter_inout_alloc();
enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_GRAY8, AV_PIX_FMT_NONE };
AVBufferSinkParams *buffersink_params;
filter_graph = avfilter_graph_alloc();
if (!outputs || !inputs || !filter_graph) {
ret = AVERROR(ENOMEM);
goto end;
}
/* buffer video source: the decoded frames from the decoder will be inserted here. */
snprintf(args, sizeof(args),
@@ -109,22 +104,18 @@ static int init_filters(const char *filters_descr)
args, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create buffer source\n");
goto end;
return ret;
}
/* buffer video sink: to terminate the filter chain. */
buffersink_params = av_buffersink_params_alloc();
buffersink_params->pixel_fmts = pix_fmts;
ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
NULL, NULL, filter_graph);
NULL, buffersink_params, filter_graph);
av_free(buffersink_params);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create buffer sink\n");
goto end;
}
ret = av_opt_set_int_list(buffersink_ctx, "pix_fmts", pix_fmts,
AV_PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output pixel format\n");
goto end;
return ret;
}
/* Endpoints for the filter graph. */
@@ -140,16 +131,11 @@ static int init_filters(const char *filters_descr)
if ((ret = avfilter_graph_parse_ptr(filter_graph, filters_descr,
&inputs, &outputs, NULL)) < 0)
goto end;
return ret;
if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
goto end;
end:
avfilter_inout_free(&inputs);
avfilter_inout_free(&outputs);
return ret;
return ret;
return 0;
}
static void display_frame(const AVFrame *frame, AVRational time_base)
@@ -200,6 +186,7 @@ int main(int argc, char **argv)
exit(1);
}
avcodec_register_all();
av_register_all();
avfilter_register_all();
@@ -214,6 +201,7 @@ int main(int argc, char **argv)
break;
if (packet.stream_index == video_stream_index) {
avcodec_get_frame_defaults(frame);
got_frame = 0;
ret = avcodec_decode_video2(dec_ctx, frame, &got_frame, &packet);
if (ret < 0) {
@@ -240,20 +228,22 @@ int main(int argc, char **argv)
display_frame(filt_frame, buffersink_ctx->inputs[0]->time_base);
av_frame_unref(filt_frame);
}
av_frame_unref(frame);
}
}
av_free_packet(&packet);
}
end:
avfilter_graph_free(&filter_graph);
avcodec_close(dec_ctx);
if (dec_ctx)
avcodec_close(dec_ctx);
avformat_close_input(&fmt_ctx);
av_frame_free(&frame);
av_frame_free(&filt_frame);
if (ret < 0 && ret != AVERROR_EOF) {
fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
char buf[1024];
av_strerror(ret, buf, sizeof(buf));
fprintf(stderr, "Error occurred: %s\n", buf);
exit(1);
}


@@ -23,7 +23,7 @@
/**
* @file
* Shows how the metadata API can be used in application programs.
* @example metadata.c
* @example doc/examples/metadata.c
*/
#include <stdio.h>


@@ -24,9 +24,9 @@
* @file
* libavformat API example.
*
* Output a media file in any supported libavformat format. The default
* codecs are used.
* @example muxing.c
* Output a media file in any supported libavformat format.
* The default codecs are used.
* @example doc/examples/muxing.c
*/
#include <stdlib.h>
@@ -34,67 +34,26 @@
#include <string.h>
#include <math.h>
#include <libavutil/avassert.h>
#include <libavutil/channel_layout.h>
#include <libavutil/opt.h>
#include <libavutil/mathematics.h>
#include <libavutil/timestamp.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libswresample/swresample.h>
#define STREAM_DURATION 10.0
/* 5 seconds stream duration */
#define STREAM_DURATION 200.0
#define STREAM_FRAME_RATE 25 /* 25 images/s */
#define STREAM_NB_FRAMES ((int)(STREAM_DURATION * STREAM_FRAME_RATE))
#define STREAM_PIX_FMT AV_PIX_FMT_YUV420P /* default pix_fmt */
#define SCALE_FLAGS SWS_BICUBIC
// a wrapper around a single output AVStream
typedef struct OutputStream {
AVStream *st;
/* pts of the next frame that will be generated */
int64_t next_pts;
int samples_count;
AVFrame *frame;
AVFrame *tmp_frame;
float t, tincr, tincr2;
struct SwsContext *sws_ctx;
struct SwrContext *swr_ctx;
} OutputStream;
static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt)
{
AVRational *time_base = &fmt_ctx->streams[pkt->stream_index]->time_base;
printf("pts:%s pts_time:%s dts:%s dts_time:%s duration:%s duration_time:%s stream_index:%d\n",
av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, time_base),
av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, time_base),
av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, time_base),
pkt->stream_index);
}
static int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base, AVStream *st, AVPacket *pkt)
{
/* rescale output packet timestamp values from codec to stream timebase */
av_packet_rescale_ts(pkt, *time_base, st->time_base);
pkt->stream_index = st->index;
/* Write the compressed frame to the media file. */
log_packet(fmt_ctx, pkt);
return av_interleaved_write_frame(fmt_ctx, pkt);
}
static int sws_flags = SWS_BICUBIC;
/* Add an output stream. */
static void add_stream(OutputStream *ost, AVFormatContext *oc,
AVCodec **codec,
enum AVCodecID codec_id)
static AVStream *add_stream(AVFormatContext *oc, AVCodec **codec,
enum AVCodecID codec_id)
{
AVCodecContext *c;
int i;
AVStream *st;
/* find the encoder */
*codec = avcodec_find_encoder(codec_id);
@@ -104,38 +63,20 @@ static void add_stream(OutputStream *ost, AVFormatContext *oc,
exit(1);
}
ost->st = avformat_new_stream(oc, *codec);
if (!ost->st) {
st = avformat_new_stream(oc, *codec);
if (!st) {
fprintf(stderr, "Could not allocate stream\n");
exit(1);
}
ost->st->id = oc->nb_streams-1;
c = ost->st->codec;
st->id = oc->nb_streams-1;
c = st->codec;
switch ((*codec)->type) {
case AVMEDIA_TYPE_AUDIO:
c->sample_fmt = (*codec)->sample_fmts ?
(*codec)->sample_fmts[0] : AV_SAMPLE_FMT_FLTP;
c->sample_fmt = AV_SAMPLE_FMT_FLTP;
c->bit_rate = 64000;
c->sample_rate = 44100;
if ((*codec)->supported_samplerates) {
c->sample_rate = (*codec)->supported_samplerates[0];
for (i = 0; (*codec)->supported_samplerates[i]; i++) {
if ((*codec)->supported_samplerates[i] == 44100)
c->sample_rate = 44100;
}
}
c->channels = av_get_channel_layout_nb_channels(c->channel_layout);
c->channel_layout = AV_CH_LAYOUT_STEREO;
if ((*codec)->channel_layouts) {
c->channel_layout = (*codec)->channel_layouts[0];
for (i = 0; (*codec)->channel_layouts[i]; i++) {
if ((*codec)->channel_layouts[i] == AV_CH_LAYOUT_STEREO)
c->channel_layout = AV_CH_LAYOUT_STEREO;
}
}
c->channels = av_get_channel_layout_nb_channels(c->channel_layout);
ost->st->time_base = (AVRational){ 1, c->sample_rate };
c->channels = 2;
break;
case AVMEDIA_TYPE_VIDEO:
@@ -149,9 +90,8 @@ static void add_stream(OutputStream *ost, AVFormatContext *oc,
* of which frame timestamps are represented. For fixed-fps content,
* timebase should be 1/framerate and timestamp increments should be
* identical to 1. */
ost->st->time_base = (AVRational){ 1, STREAM_FRAME_RATE };
c->time_base = ost->st->time_base;
c->time_base.den = STREAM_FRAME_RATE;
c->time_base.num = 1;
c->gop_size = 12; /* emit one intra frame every twelve frames at most */
c->pix_fmt = STREAM_PIX_FMT;
if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
@@ -173,262 +113,237 @@ static void add_stream(OutputStream *ost, AVFormatContext *oc,
/* Some formats want stream headers to be separate. */
if (oc->oformat->flags & AVFMT_GLOBALHEADER)
c->flags |= CODEC_FLAG_GLOBAL_HEADER;
return st;
}
/**************************************************************/
/* audio output */
static AVFrame *alloc_audio_frame(enum AVSampleFormat sample_fmt,
uint64_t channel_layout,
int sample_rate, int nb_samples)
{
AVFrame *frame = av_frame_alloc();
int ret;
static float t, tincr, tincr2;
if (!frame) {
fprintf(stderr, "Error allocating an audio frame\n");
exit(1);
}
static uint8_t **src_samples_data;
static int src_samples_linesize;
static int src_nb_samples;
frame->format = sample_fmt;
frame->channel_layout = channel_layout;
frame->sample_rate = sample_rate;
frame->nb_samples = nb_samples;
static int max_dst_nb_samples;
uint8_t **dst_samples_data;
int dst_samples_linesize;
int dst_samples_size;
if (nb_samples) {
ret = av_frame_get_buffer(frame, 0);
if (ret < 0) {
fprintf(stderr, "Error allocating an audio buffer\n");
exit(1);
}
}
struct SwrContext *swr_ctx = NULL;
return frame;
}
static void open_audio(AVFormatContext *oc, AVCodec *codec, OutputStream *ost, AVDictionary *opt_arg)
static void open_audio(AVFormatContext *oc, AVCodec *codec, AVStream *st)
{
AVCodecContext *c;
int nb_samples;
int ret;
AVDictionary *opt = NULL;
c = ost->st->codec;
c = st->codec;
/* open it */
av_dict_copy(&opt, opt_arg, 0);
ret = avcodec_open2(c, codec, &opt);
av_dict_free(&opt);
ret = avcodec_open2(c, codec, NULL);
if (ret < 0) {
fprintf(stderr, "Could not open audio codec: %s\n", av_err2str(ret));
exit(1);
}
/* init signal generator */
ost->t = 0;
ost->tincr = 2 * M_PI * 110.0 / c->sample_rate;
t = 0;
tincr = 2 * M_PI * 110.0 / c->sample_rate;
/* increment frequency by 110 Hz per second */
ost->tincr2 = 2 * M_PI * 110.0 / c->sample_rate / c->sample_rate;
tincr2 = 2 * M_PI * 110.0 / c->sample_rate / c->sample_rate;
if (c->codec->capabilities & CODEC_CAP_VARIABLE_FRAME_SIZE)
nb_samples = 10000;
else
nb_samples = c->frame_size;
src_nb_samples = c->codec->capabilities & CODEC_CAP_VARIABLE_FRAME_SIZE ?
10000 : c->frame_size;
ost->frame = alloc_audio_frame(c->sample_fmt, c->channel_layout,
c->sample_rate, nb_samples);
ost->tmp_frame = alloc_audio_frame(AV_SAMPLE_FMT_S16, c->channel_layout,
c->sample_rate, nb_samples);
ret = av_samples_alloc_array_and_samples(&src_samples_data, &src_samples_linesize, c->channels,
src_nb_samples, c->sample_fmt, 0);
if (ret < 0) {
fprintf(stderr, "Could not allocate source samples\n");
exit(1);
}
/* create resampler context */
ost->swr_ctx = swr_alloc();
if (!ost->swr_ctx) {
if (c->sample_fmt != AV_SAMPLE_FMT_S16) {
swr_ctx = swr_alloc();
if (!swr_ctx) {
fprintf(stderr, "Could not allocate resampler context\n");
exit(1);
}
/* set options */
av_opt_set_int (ost->swr_ctx, "in_channel_count", c->channels, 0);
av_opt_set_int (ost->swr_ctx, "in_sample_rate", c->sample_rate, 0);
av_opt_set_sample_fmt(ost->swr_ctx, "in_sample_fmt", AV_SAMPLE_FMT_S16, 0);
av_opt_set_int (ost->swr_ctx, "out_channel_count", c->channels, 0);
av_opt_set_int (ost->swr_ctx, "out_sample_rate", c->sample_rate, 0);
av_opt_set_sample_fmt(ost->swr_ctx, "out_sample_fmt", c->sample_fmt, 0);
av_opt_set_int (swr_ctx, "in_channel_count", c->channels, 0);
av_opt_set_int (swr_ctx, "in_sample_rate", c->sample_rate, 0);
av_opt_set_sample_fmt(swr_ctx, "in_sample_fmt", AV_SAMPLE_FMT_S16, 0);
av_opt_set_int (swr_ctx, "out_channel_count", c->channels, 0);
av_opt_set_int (swr_ctx, "out_sample_rate", c->sample_rate, 0);
av_opt_set_sample_fmt(swr_ctx, "out_sample_fmt", c->sample_fmt, 0);
/* initialize the resampling context */
if ((ret = swr_init(ost->swr_ctx)) < 0) {
if ((ret = swr_init(swr_ctx)) < 0) {
fprintf(stderr, "Failed to initialize the resampling context\n");
exit(1);
}
}
/* compute the number of converted samples: buffering is avoided
* ensuring that the output buffer will contain at least all the
* converted input samples */
max_dst_nb_samples = src_nb_samples;
ret = av_samples_alloc_array_and_samples(&dst_samples_data, &dst_samples_linesize, c->channels,
max_dst_nb_samples, c->sample_fmt, 0);
if (ret < 0) {
fprintf(stderr, "Could not allocate destination samples\n");
exit(1);
}
dst_samples_size = av_samples_get_buffer_size(NULL, c->channels, max_dst_nb_samples,
c->sample_fmt, 0);
}
/* Prepare a 16 bit dummy audio frame of 'frame_size' samples and
* 'nb_channels' channels. */
static AVFrame *get_audio_frame(OutputStream *ost)
static void get_audio_frame(int16_t *samples, int frame_size, int nb_channels)
{
AVFrame *frame = ost->tmp_frame;
int j, i, v;
int16_t *q = (int16_t*)frame->data[0];
int16_t *q;
/* check if we want to generate more frames */
if (av_compare_ts(ost->next_pts, ost->st->codec->time_base,
STREAM_DURATION, (AVRational){ 1, 1 }) >= 0)
return NULL;
for (j = 0; j <frame->nb_samples; j++) {
v = (int)(sin(ost->t) * 10000);
for (i = 0; i < ost->st->codec->channels; i++)
q = samples;
for (j = 0; j < frame_size; j++) {
v = (int)(sin(t) * 10000);
for (i = 0; i < nb_channels; i++)
*q++ = v;
ost->t += ost->tincr;
ost->tincr += ost->tincr2;
t += tincr;
tincr += tincr2;
}
frame->pts = ost->next_pts;
ost->next_pts += frame->nb_samples;
return frame;
}
/*
* encode one audio frame and send it to the muxer
* return 1 when encoding is finished, 0 otherwise
*/
static int write_audio_frame(AVFormatContext *oc, OutputStream *ost)
static void write_audio_frame(AVFormatContext *oc, AVStream *st)
{
AVCodecContext *c;
AVPacket pkt = { 0 }; // data and size must be 0;
AVFrame *frame;
int ret;
int got_packet;
int dst_nb_samples;
AVFrame *frame = avcodec_alloc_frame();
int got_packet, ret, dst_nb_samples;
av_init_packet(&pkt);
c = ost->st->codec;
c = st->codec;
frame = get_audio_frame(ost);
get_audio_frame((int16_t *)src_samples_data[0], src_nb_samples, c->channels);
if (frame) {
/* convert samples from native format to destination codec format, using the resampler */
/* compute destination number of samples */
dst_nb_samples = av_rescale_rnd(swr_get_delay(ost->swr_ctx, c->sample_rate) + frame->nb_samples,
c->sample_rate, c->sample_rate, AV_ROUND_UP);
av_assert0(dst_nb_samples == frame->nb_samples);
/* when we pass a frame to the encoder, it may keep a reference to it
* internally;
* make sure we do not overwrite it here
*/
ret = av_frame_make_writable(ost->frame);
if (ret < 0)
exit(1);
/* convert to destination format */
ret = swr_convert(ost->swr_ctx,
ost->frame->data, dst_nb_samples,
(const uint8_t **)frame->data, frame->nb_samples);
if (ret < 0) {
fprintf(stderr, "Error while converting\n");
/* convert samples from native format to destination codec format, using the resampler */
if (swr_ctx) {
/* compute destination number of samples */
dst_nb_samples = av_rescale_rnd(swr_get_delay(swr_ctx, c->sample_rate) + src_nb_samples,
c->sample_rate, c->sample_rate, AV_ROUND_UP);
if (dst_nb_samples > max_dst_nb_samples) {
av_free(dst_samples_data[0]);
ret = av_samples_alloc(dst_samples_data, &dst_samples_linesize, c->channels,
dst_nb_samples, c->sample_fmt, 0);
if (ret < 0)
exit(1);
}
frame = ost->frame;
max_dst_nb_samples = dst_nb_samples;
dst_samples_size = av_samples_get_buffer_size(NULL, c->channels, dst_nb_samples,
c->sample_fmt, 0);
}
frame->pts = av_rescale_q(ost->samples_count, (AVRational){1, c->sample_rate}, c->time_base);
ost->samples_count += dst_nb_samples;
/* convert to destination format */
ret = swr_convert(swr_ctx,
dst_samples_data, dst_nb_samples,
(const uint8_t **)src_samples_data, src_nb_samples);
if (ret < 0) {
fprintf(stderr, "Error while converting\n");
exit(1);
}
} else {
dst_samples_data[0] = src_samples_data[0];
dst_nb_samples = src_nb_samples;
}
frame->nb_samples = dst_nb_samples;
avcodec_fill_audio_frame(frame, c->channels, c->sample_fmt,
dst_samples_data[0], dst_samples_size, 0);
ret = avcodec_encode_audio2(c, &pkt, frame, &got_packet);
if (ret < 0) {
fprintf(stderr, "Error encoding audio frame: %s\n", av_err2str(ret));
exit(1);
}
if (got_packet) {
ret = write_frame(oc, &c->time_base, ost->st, &pkt);
if (ret < 0) {
fprintf(stderr, "Error while writing audio frame: %s\n",
av_err2str(ret));
exit(1);
}
}
if (!got_packet)
return;
return (frame || got_packet) ? 0 : 1;
pkt.stream_index = st->index;
/* Write the compressed frame to the media file. */
ret = av_interleaved_write_frame(oc, &pkt);
if (ret != 0) {
fprintf(stderr, "Error while writing audio frame: %s\n",
av_err2str(ret));
exit(1);
}
avcodec_free_frame(&frame);
}
static void close_audio(AVFormatContext *oc, AVStream *st)
{
avcodec_close(st->codec);
av_free(src_samples_data[0]);
av_free(dst_samples_data[0]);
}
/**************************************************************/
/* video output */
static AVFrame *alloc_picture(enum AVPixelFormat pix_fmt, int width, int height)
{
AVFrame *picture;
int ret;
static AVFrame *frame;
static AVPicture src_picture, dst_picture;
static int frame_count;
picture = av_frame_alloc();
if (!picture)
return NULL;
picture->format = pix_fmt;
picture->width = width;
picture->height = height;
/* allocate the buffers for the frame data */
ret = av_frame_get_buffer(picture, 32);
if (ret < 0) {
fprintf(stderr, "Could not allocate frame data.\n");
exit(1);
}
return picture;
}
static void open_video(AVFormatContext *oc, AVCodec *codec, OutputStream *ost, AVDictionary *opt_arg)
static void open_video(AVFormatContext *oc, AVCodec *codec, AVStream *st)
{
int ret;
AVCodecContext *c = ost->st->codec;
AVDictionary *opt = NULL;
av_dict_copy(&opt, opt_arg, 0);
AVCodecContext *c = st->codec;
/* open the codec */
ret = avcodec_open2(c, codec, &opt);
av_dict_free(&opt);
ret = avcodec_open2(c, codec, NULL);
if (ret < 0) {
fprintf(stderr, "Could not open video codec: %s\n", av_err2str(ret));
exit(1);
}
/* allocate and init a re-usable frame */
ost->frame = alloc_picture(c->pix_fmt, c->width, c->height);
if (!ost->frame) {
frame = avcodec_alloc_frame();
if (!frame) {
fprintf(stderr, "Could not allocate video frame\n");
exit(1);
}
/* Allocate the encoded raw picture. */
ret = avpicture_alloc(&dst_picture, c->pix_fmt, c->width, c->height);
if (ret < 0) {
fprintf(stderr, "Could not allocate picture: %s\n", av_err2str(ret));
exit(1);
}
/* If the output format is not YUV420P, then a temporary YUV420P
* picture is needed too. It is then converted to the required
* output format. */
ost->tmp_frame = NULL;
if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
ost->tmp_frame = alloc_picture(AV_PIX_FMT_YUV420P, c->width, c->height);
if (!ost->tmp_frame) {
fprintf(stderr, "Could not allocate temporary picture\n");
ret = avpicture_alloc(&src_picture, AV_PIX_FMT_YUV420P, c->width, c->height);
if (ret < 0) {
fprintf(stderr, "Could not allocate temporary picture: %s\n",
av_err2str(ret));
exit(1);
}
}
/* copy data and linesize picture pointers to frame */
*((AVPicture *)frame) = dst_picture;
}
/* Prepare a dummy image. */
static void fill_yuv_image(AVFrame *pict, int frame_index,
static void fill_yuv_image(AVPicture *pict, int frame_index,
int width, int height)
{
int x, y, i, ret;
/* when we pass a frame to the encoder, it may keep a reference to it
* internally;
* make sure we do not overwrite it here
*/
ret = av_frame_make_writable(pict);
if (ret < 0)
exit(1);
int x, y, i;
i = frame_index;
@@ -446,77 +361,53 @@ static void fill_yuv_image(AVFrame *pict, int frame_index,
}
}
static AVFrame *get_video_frame(OutputStream *ost)
{
AVCodecContext *c = ost->st->codec;
/* check if we want to generate more frames */
if (av_compare_ts(ost->next_pts, ost->st->codec->time_base,
STREAM_DURATION, (AVRational){ 1, 1 }) >= 0)
return NULL;
if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
/* as we only generate a YUV420P picture, we must convert it
* to the codec pixel format if needed */
if (!ost->sws_ctx) {
ost->sws_ctx = sws_getContext(c->width, c->height,
AV_PIX_FMT_YUV420P,
c->width, c->height,
c->pix_fmt,
SCALE_FLAGS, NULL, NULL, NULL);
if (!ost->sws_ctx) {
fprintf(stderr,
"Could not initialize the conversion context\n");
exit(1);
}
}
fill_yuv_image(ost->tmp_frame, ost->next_pts, c->width, c->height);
sws_scale(ost->sws_ctx,
(const uint8_t * const *)ost->tmp_frame->data, ost->tmp_frame->linesize,
0, c->height, ost->frame->data, ost->frame->linesize);
} else {
fill_yuv_image(ost->frame, ost->next_pts, c->width, c->height);
}
ost->frame->pts = ost->next_pts++;
return ost->frame;
}
/*
* encode one video frame and send it to the muxer
* return 1 when encoding is finished, 0 otherwise
*/
static int write_video_frame(AVFormatContext *oc, OutputStream *ost)
static void write_video_frame(AVFormatContext *oc, AVStream *st)
{
int ret;
AVCodecContext *c;
AVFrame *frame;
int got_packet = 0;
static struct SwsContext *sws_ctx;
AVCodecContext *c = st->codec;
c = ost->st->codec;
frame = get_video_frame(ost);
if (frame_count >= STREAM_NB_FRAMES) {
/* No more frames to compress. The codec has a latency of a few
* frames if using B-frames, so we get the last frames by
* passing the same picture again. */
} else {
if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
/* as we only generate a YUV420P picture, we must convert it
* to the codec pixel format if needed */
if (!sws_ctx) {
sws_ctx = sws_getContext(c->width, c->height, AV_PIX_FMT_YUV420P,
c->width, c->height, c->pix_fmt,
sws_flags, NULL, NULL, NULL);
if (!sws_ctx) {
fprintf(stderr,
"Could not initialize the conversion context\n");
exit(1);
}
}
fill_yuv_image(&src_picture, frame_count, c->width, c->height);
sws_scale(sws_ctx,
(const uint8_t * const *)src_picture.data, src_picture.linesize,
0, c->height, dst_picture.data, dst_picture.linesize);
} else {
fill_yuv_image(&dst_picture, frame_count, c->width, c->height);
}
}
if (oc->oformat->flags & AVFMT_RAWPICTURE) {
/* a hack to avoid data copy with some raw video muxers */
/* Raw video case - directly store the picture in the packet */
AVPacket pkt;
av_init_packet(&pkt);
if (!frame)
return 1;
pkt.flags |= AV_PKT_FLAG_KEY;
pkt.stream_index = ost->st->index;
pkt.data = (uint8_t *)frame;
pkt.stream_index = st->index;
pkt.data = dst_picture.data[0];
pkt.size = sizeof(AVPicture);
pkt.pts = pkt.dts = frame->pts;
av_packet_rescale_ts(&pkt, c->time_base, ost->st->time_base);
ret = av_interleaved_write_frame(oc, &pkt);
} else {
AVPacket pkt = { 0 };
int got_packet;
av_init_packet(&pkt);
/* encode the image */
@@ -525,29 +416,30 @@ static int write_video_frame(AVFormatContext *oc, OutputStream *ost)
fprintf(stderr, "Error encoding video frame: %s\n", av_err2str(ret));
exit(1);
}
/* If size is zero, it means the image was buffered. */
if (got_packet) {
ret = write_frame(oc, &c->time_base, ost->st, &pkt);
if (!ret && got_packet && pkt.size) {
pkt.stream_index = st->index;
/* Write the compressed frame to the media file. */
ret = av_interleaved_write_frame(oc, &pkt);
} else {
ret = 0;
}
}
if (ret < 0) {
if (ret != 0) {
fprintf(stderr, "Error while writing video frame: %s\n", av_err2str(ret));
exit(1);
}
return (frame || got_packet) ? 0 : 1;
frame_count++;
}
static void close_stream(AVFormatContext *oc, OutputStream *ost)
static void close_video(AVFormatContext *oc, AVStream *st)
{
avcodec_close(ost->st->codec);
av_frame_free(&ost->frame);
av_frame_free(&ost->tmp_frame);
sws_freeContext(ost->sws_ctx);
swr_free(&ost->swr_ctx);
avcodec_close(st->codec);
av_free(src_picture.data[0]);
av_free(dst_picture.data[0]);
av_free(frame);
}
/**************************************************************/
@@ -555,20 +447,18 @@ static void close_stream(AVFormatContext *oc, OutputStream *ost)
int main(int argc, char **argv)
{
OutputStream video_st = { 0 }, audio_st = { 0 };
const char *filename;
AVOutputFormat *fmt;
AVFormatContext *oc;
AVStream *audio_st, *video_st;
AVCodec *audio_codec, *video_codec;
double audio_time, video_time;
int ret;
int have_video = 0, have_audio = 0;
int encode_video = 0, encode_audio = 0;
AVDictionary *opt = NULL;
/* Initialize libavcodec, and register all codecs and formats. */
av_register_all();
if (argc < 2) {
if (argc != 2) {
printf("usage: %s output_file\n"
"API example program to output a media file with libavformat.\n"
"This program generates a synthetic audio and video stream, encodes and\n"
@@ -580,9 +470,6 @@ int main(int argc, char **argv)
}
filename = argv[1];
if (argc > 3 && !strcmp(argv[2], "-flags")) {
av_dict_set(&opt, argv[2]+1, argv[3], 0);
}
/* allocate the output media context */
avformat_alloc_output_context2(&oc, NULL, NULL, filename);
@@ -590,31 +477,29 @@ int main(int argc, char **argv)
printf("Could not deduce output format from file extension: using MPEG.\n");
avformat_alloc_output_context2(&oc, NULL, "mpeg", filename);
}
if (!oc)
if (!oc) {
return 1;
}
fmt = oc->oformat;
/* Add the audio and video streams using the default format codecs
* and initialize the codecs. */
video_st = NULL;
audio_st = NULL;
if (fmt->video_codec != AV_CODEC_ID_NONE) {
add_stream(&video_st, oc, &video_codec, fmt->video_codec);
have_video = 1;
encode_video = 1;
video_st = add_stream(oc, &video_codec, fmt->video_codec);
}
if (fmt->audio_codec != AV_CODEC_ID_NONE) {
add_stream(&audio_st, oc, &audio_codec, fmt->audio_codec);
have_audio = 1;
encode_audio = 1;
audio_st = add_stream(oc, &audio_codec, fmt->audio_codec);
}
/* Now that all the parameters are set, we can open the audio and
* video codecs and allocate the necessary encode buffers. */
if (have_video)
open_video(oc, video_codec, &video_st, opt);
if (have_audio)
open_audio(oc, audio_codec, &audio_st, opt);
if (video_st)
open_video(oc, video_codec, video_st);
if (audio_st)
open_audio(oc, audio_codec, audio_st);
av_dump_format(oc, 0, filename, 1);
@@ -629,21 +514,30 @@ int main(int argc, char **argv)
}
/* Write the stream header, if any. */
ret = avformat_write_header(oc, &opt);
ret = avformat_write_header(oc, NULL);
if (ret < 0) {
fprintf(stderr, "Error occurred when opening output file: %s\n",
av_err2str(ret));
return 1;
}
while (encode_video || encode_audio) {
/* select the stream to encode */
if (encode_video &&
(!encode_audio || av_compare_ts(video_st.next_pts, video_st.st->codec->time_base,
audio_st.next_pts, audio_st.st->codec->time_base) <= 0)) {
encode_video = !write_video_frame(oc, &video_st);
if (frame)
frame->pts = 0;
for (;;) {
/* Compute current audio and video time. */
audio_time = audio_st ? audio_st->pts.val * av_q2d(audio_st->time_base) : 0.0;
video_time = video_st ? video_st->pts.val * av_q2d(video_st->time_base) : 0.0;
if ((!audio_st || audio_time >= STREAM_DURATION) &&
(!video_st || video_time >= STREAM_DURATION))
break;
/* write interleaved audio and video frames */
if (!video_st || (video_st && audio_st && audio_time < video_time)) {
write_audio_frame(oc, audio_st);
} else {
encode_audio = !write_audio_frame(oc, &audio_st);
write_video_frame(oc, video_st);
frame->pts += av_rescale_q(1, video_st->codec->time_base, video_st->time_base);
}
}
@@ -654,10 +548,10 @@ int main(int argc, char **argv)
av_write_trailer(oc);
/* Close each codec. */
if (have_video)
close_stream(oc, &video_st);
if (have_audio)
close_stream(oc, &audio_st);
if (video_st)
close_video(oc, video_st);
if (audio_st)
close_audio(oc, audio_st);
if (!(fmt->flags & AVFMT_NOFILE))
/* Close the output file. */


@@ -1,165 +0,0 @@
/*
* Copyright (c) 2013 Stefano Sabatini
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/**
* @file
* libavformat/libavcodec demuxing and muxing API example.
*
* Remux streams from one container format to another.
* @example remuxing.c
*/
#include <libavutil/timestamp.h>
#include <libavformat/avformat.h>
static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt, const char *tag)
{
AVRational *time_base = &fmt_ctx->streams[pkt->stream_index]->time_base;
printf("%s: pts:%s pts_time:%s dts:%s dts_time:%s duration:%s duration_time:%s stream_index:%d\n",
tag,
av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, time_base),
av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, time_base),
av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, time_base),
pkt->stream_index);
}
int main(int argc, char **argv)
{
AVOutputFormat *ofmt = NULL;
AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
AVPacket pkt;
const char *in_filename, *out_filename;
int ret, i;
if (argc < 3) {
printf("usage: %s input output\n"
"API example program to remux a media file with libavformat and libavcodec.\n"
"The output format is guessed according to the file extension.\n"
"\n", argv[0]);
return 1;
}
in_filename = argv[1];
out_filename = argv[2];
av_register_all();
if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
fprintf(stderr, "Could not open input file '%s'", in_filename);
goto end;
}
if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
fprintf(stderr, "Failed to retrieve input stream information");
goto end;
}
av_dump_format(ifmt_ctx, 0, in_filename, 0);
avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
if (!ofmt_ctx) {
fprintf(stderr, "Could not create output context\n");
ret = AVERROR_UNKNOWN;
goto end;
}
ofmt = ofmt_ctx->oformat;
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
AVStream *in_stream = ifmt_ctx->streams[i];
AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
if (!out_stream) {
fprintf(stderr, "Failed allocating output stream\n");
ret = AVERROR_UNKNOWN;
goto end;
}
ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
if (ret < 0) {
fprintf(stderr, "Failed to copy context from input to output stream codec context\n");
goto end;
}
out_stream->codec->codec_tag = 0;
if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
out_stream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
}
av_dump_format(ofmt_ctx, 0, out_filename, 1);
if (!(ofmt->flags & AVFMT_NOFILE)) {
ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
if (ret < 0) {
fprintf(stderr, "Could not open output file '%s'", out_filename);
goto end;
}
}
ret = avformat_write_header(ofmt_ctx, NULL);
if (ret < 0) {
fprintf(stderr, "Error occurred when opening output file\n");
goto end;
}
while (1) {
AVStream *in_stream, *out_stream;
ret = av_read_frame(ifmt_ctx, &pkt);
if (ret < 0)
break;
in_stream = ifmt_ctx->streams[pkt.stream_index];
out_stream = ofmt_ctx->streams[pkt.stream_index];
log_packet(ifmt_ctx, &pkt, "in");
/* copy packet */
pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
pkt.pos = -1;
log_packet(ofmt_ctx, &pkt, "out");
ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
if (ret < 0) {
fprintf(stderr, "Error muxing packet\n");
break;
}
av_free_packet(&pkt);
}
av_write_trailer(ofmt_ctx);
end:
avformat_close_input(&ifmt_ctx);
/* close output */
if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
avio_close(ofmt_ctx->pb);
avformat_free_context(ofmt_ctx);
if (ret < 0 && ret != AVERROR_EOF) {
fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
return 1;
}
return 0;
}
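/* av_rescale_q_rnd(ts, src_tb, dst_tb, rnd) converts a timestamp between time
 * bases, i.e. it computes ts * src_tb / dst_tb with the requested rounding;
 * for example, a pts of 9000 in a 1/90000 time base becomes 100 in a 1/1000
 * time base. AV_ROUND_PASS_MINMAX lets INT64_MIN/INT64_MAX (and therefore
 * AV_NOPTS_VALUE) pass through unchanged, so packets without timestamps
 * survive the conversion intact. */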


@@ -21,7 +21,7 @@
*/
/**
* @example resampling_audio.c
* @example doc/examples/resampling_audio.c
* libswresample API use example.
*/
@@ -62,7 +62,7 @@ static int get_format_from_sample_fmt(const char **fmt,
/**
* Fill dst buffer with nb_samples, generated starting from t.
*/
static void fill_samples(double *dst, int nb_samples, int nb_channels, int sample_rate, double *t)
void fill_samples(double *dst, int nb_samples, int nb_channels, int sample_rate, double *t)
{
int i, j;
double tincr = 1.0 / sample_rate, *dstp = dst;
@@ -168,7 +168,7 @@ int main(int argc, char **argv)
dst_nb_samples = av_rescale_rnd(swr_get_delay(swr_ctx, src_rate) +
src_nb_samples, dst_rate, src_rate, AV_ROUND_UP);
if (dst_nb_samples > max_dst_nb_samples) {
av_freep(&dst_data[0]);
av_free(dst_data[0]);
ret = av_samples_alloc(dst_data, &dst_linesize, dst_nb_channels,
dst_nb_samples, dst_sample_fmt, 1);
if (ret < 0)
@@ -184,10 +184,6 @@ int main(int argc, char **argv)
}
dst_bufsize = av_samples_get_buffer_size(&dst_linesize, dst_nb_channels,
ret, dst_sample_fmt, 1);
if (dst_bufsize < 0) {
fprintf(stderr, "Could not get sample buffer size\n");
goto end;
}
printf("t:%f in:%d out:%d\n", t, src_nb_samples, ret);
fwrite(dst_data[0], 1, dst_bufsize, dst_file);
} while (t < 10);
@@ -199,7 +195,8 @@ int main(int argc, char **argv)
fmt, dst_ch_layout, dst_nb_channels, dst_rate, dst_filename);
end:
fclose(dst_file);
if (dst_file)
fclose(dst_file);
if (src_data)
av_freep(&src_data[0]);


@@ -23,7 +23,7 @@
/**
* @file
* libswscale API use example.
* @example scaling_video.c
* @example doc/examples/scaling_video.c
*/
#include <libavutil/imgutils.h>
@@ -132,7 +132,8 @@ int main(int argc, char **argv)
av_get_pix_fmt_name(dst_pix_fmt), dst_w, dst_h, dst_filename);
end:
fclose(dst_file);
if (dst_file)
fclose(dst_file);
av_freep(&src_data[0]);
av_freep(&dst_data[0]);
sws_freeContext(sws_ctx);


@@ -1,755 +0,0 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file
* simple audio converter
*
* @example transcode_aac.c
* Convert an input audio file to AAC in an MP4 container using FFmpeg.
* @author Andreas Unterweger (dustsigns@gmail.com)
*/
#include <stdio.h>
#include "libavformat/avformat.h"
#include "libavformat/avio.h"
#include "libavcodec/avcodec.h"
#include "libavutil/audio_fifo.h"
#include "libavutil/avassert.h"
#include "libavutil/avstring.h"
#include "libavutil/frame.h"
#include "libavutil/opt.h"
#include "libswresample/swresample.h"
/** The output bit rate in kbit/s */
#define OUTPUT_BIT_RATE 48000
/** The number of output channels */
#define OUTPUT_CHANNELS 2
/** The audio sample output format */
#define OUTPUT_SAMPLE_FORMAT AV_SAMPLE_FMT_S16
/**
* Convert an error code into a text message.
* @param error Error code to be converted
* @return Corresponding error text (not thread-safe)
*/
static const char *get_error_text(const int error)
{
static char error_buffer[255];
av_strerror(error, error_buffer, sizeof(error_buffer));
return error_buffer;
}
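/* The static buffer is what makes this helper non-thread-safe; the
 * av_err2str() convenience macro from libavutil/error.h formats the message
 * into a fresh buffer at each call site instead. */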
/** Open an input file and the required decoder. */
static int open_input_file(const char *filename,
AVFormatContext **input_format_context,
AVCodecContext **input_codec_context)
{
AVCodec *input_codec;
int error;
/** Open the input file to read from it. */
if ((error = avformat_open_input(input_format_context, filename, NULL,
NULL)) < 0) {
fprintf(stderr, "Could not open input file '%s' (error '%s')\n",
filename, get_error_text(error));
*input_format_context = NULL;
return error;
}
/** Get information on the input file (number of streams etc.). */
if ((error = avformat_find_stream_info(*input_format_context, NULL)) < 0) {
fprintf(stderr, "Could not open find stream info (error '%s')\n",
get_error_text(error));
avformat_close_input(input_format_context);
return error;
}
/** Make sure that there is only one stream in the input file. */
if ((*input_format_context)->nb_streams != 1) {
fprintf(stderr, "Expected one audio input stream, but found %d\n",
(*input_format_context)->nb_streams);
avformat_close_input(input_format_context);
return AVERROR_EXIT;
}
/** Find a decoder for the audio stream. */
if (!(input_codec = avcodec_find_decoder((*input_format_context)->streams[0]->codec->codec_id))) {
fprintf(stderr, "Could not find input codec\n");
avformat_close_input(input_format_context);
return AVERROR_EXIT;
}
/** Open the decoder for the audio stream to use it later. */
if ((error = avcodec_open2((*input_format_context)->streams[0]->codec,
input_codec, NULL)) < 0) {
fprintf(stderr, "Could not open input codec (error '%s')\n",
get_error_text(error));
avformat_close_input(input_format_context);
return error;
}
/** Save the decoder context for easier access later. */
*input_codec_context = (*input_format_context)->streams[0]->codec;
return 0;
}
/**
* Open an output file and the required encoder.
* Also set some basic encoder parameters.
* Some of these parameters are based on the input file's parameters.
*/
static int open_output_file(const char *filename,
AVCodecContext *input_codec_context,
AVFormatContext **output_format_context,
AVCodecContext **output_codec_context)
{
AVIOContext *output_io_context = NULL;
AVStream *stream = NULL;
AVCodec *output_codec = NULL;
int error;
/** Open the output file to write to it. */
if ((error = avio_open(&output_io_context, filename,
AVIO_FLAG_WRITE)) < 0) {
fprintf(stderr, "Could not open output file '%s' (error '%s')\n",
filename, get_error_text(error));
return error;
}
/** Create a new format context for the output container format. */
if (!(*output_format_context = avformat_alloc_context())) {
fprintf(stderr, "Could not allocate output format context\n");
return AVERROR(ENOMEM);
}
/** Associate the output file (pointer) with the container format context. */
(*output_format_context)->pb = output_io_context;
/** Guess the desired container format based on the file extension. */
if (!((*output_format_context)->oformat = av_guess_format(NULL, filename,
NULL))) {
fprintf(stderr, "Could not find output file format\n");
goto cleanup;
}
av_strlcpy((*output_format_context)->filename, filename,
sizeof((*output_format_context)->filename));
/** Find the encoder to be used by its name. */
if (!(output_codec = avcodec_find_encoder(AV_CODEC_ID_AAC))) {
fprintf(stderr, "Could not find an AAC encoder.\n");
goto cleanup;
}
/** Create a new audio stream in the output file container. */
if (!(stream = avformat_new_stream(*output_format_context, output_codec))) {
fprintf(stderr, "Could not create new stream\n");
error = AVERROR(ENOMEM);
goto cleanup;
}
/** Save the encoder context for easier access later. */
*output_codec_context = stream->codec;
/**
* Set the basic encoder parameters.
* The input file's sample rate is used to avoid a sample rate conversion.
*/
(*output_codec_context)->channels = OUTPUT_CHANNELS;
(*output_codec_context)->channel_layout = av_get_default_channel_layout(OUTPUT_CHANNELS);
(*output_codec_context)->sample_rate = input_codec_context->sample_rate;
(*output_codec_context)->sample_fmt = AV_SAMPLE_FMT_S16;
(*output_codec_context)->bit_rate = OUTPUT_BIT_RATE;
/**
* Some container formats (like MP4) require global headers to be present
* Mark the encoder so that it behaves accordingly.
*/
if ((*output_format_context)->oformat->flags & AVFMT_GLOBALHEADER)
(*output_codec_context)->flags |= CODEC_FLAG_GLOBAL_HEADER;
/** Open the encoder for the audio stream to use it later. */
if ((error = avcodec_open2(*output_codec_context, output_codec, NULL)) < 0) {
fprintf(stderr, "Could not open output codec (error '%s')\n",
get_error_text(error));
goto cleanup;
}
return 0;
cleanup:
avio_close((*output_format_context)->pb);
avformat_free_context(*output_format_context);
*output_format_context = NULL;
return error < 0 ? error : AVERROR_EXIT;
}
/** Initialize one data packet for reading or writing. */
static void init_packet(AVPacket *packet)
{
av_init_packet(packet);
/** Set the packet data and size so that it is recognized as being empty. */
packet->data = NULL;
packet->size = 0;
}
/** Initialize one audio frame for reading from the input file */
static int init_input_frame(AVFrame **frame)
{
if (!(*frame = av_frame_alloc())) {
fprintf(stderr, "Could not allocate input frame\n");
return AVERROR(ENOMEM);
}
return 0;
}
/**
* Initialize the audio resampler based on the input and output codec settings.
* If the input and output sample formats differ, a conversion is required
* libswresample takes care of this, but requires initialization.
*/
static int init_resampler(AVCodecContext *input_codec_context,
AVCodecContext *output_codec_context,
SwrContext **resample_context)
{
int error;
/**
* Create a resampler context for the conversion.
* Set the conversion parameters.
* Default channel layouts based on the number of channels
* are assumed for simplicity (they are sometimes not detected
* properly by the demuxer and/or decoder).
*/
*resample_context = swr_alloc_set_opts(NULL,
av_get_default_channel_layout(output_codec_context->channels),
output_codec_context->sample_fmt,
output_codec_context->sample_rate,
av_get_default_channel_layout(input_codec_context->channels),
input_codec_context->sample_fmt,
input_codec_context->sample_rate,
0, NULL);
if (!*resample_context) {
fprintf(stderr, "Could not allocate resample context\n");
return AVERROR(ENOMEM);
}
/**
* Perform a sanity check so that the number of converted samples is
* not greater than the number of samples to be converted.
* If the sample rates differ, this case has to be handled differently
*/
av_assert0(output_codec_context->sample_rate == input_codec_context->sample_rate);
/** Open the resampler with the specified parameters. */
if ((error = swr_init(*resample_context)) < 0) {
fprintf(stderr, "Could not open resample context\n");
swr_free(resample_context);
return error;
}
return 0;
}
/** Initialize a FIFO buffer for the audio samples to be encoded. */
static int init_fifo(AVAudioFifo **fifo)
{
/** Create the FIFO buffer based on the specified output sample format. */
if (!(*fifo = av_audio_fifo_alloc(OUTPUT_SAMPLE_FORMAT, OUTPUT_CHANNELS, 1))) {
fprintf(stderr, "Could not allocate FIFO\n");
return AVERROR(ENOMEM);
}
return 0;
}
/** Write the header of the output file container. */
static int write_output_file_header(AVFormatContext *output_format_context)
{
int error;
if ((error = avformat_write_header(output_format_context, NULL)) < 0) {
fprintf(stderr, "Could not write output file header (error '%s')\n",
get_error_text(error));
return error;
}
return 0;
}
/** Decode one audio frame from the input file. */
static int decode_audio_frame(AVFrame *frame,
AVFormatContext *input_format_context,
AVCodecContext *input_codec_context,
int *data_present, int *finished)
{
/** Packet used for temporary storage. */
AVPacket input_packet;
int error;
init_packet(&input_packet);
/** Read one audio frame from the input file into a temporary packet. */
if ((error = av_read_frame(input_format_context, &input_packet)) < 0) {
/** If we are at the end of the file, flush the decoder below. */
if (error == AVERROR_EOF)
*finished = 1;
else {
fprintf(stderr, "Could not read frame (error '%s')\n",
get_error_text(error));
return error;
}
}
/**
* Decode the audio frame stored in the temporary packet.
* The input audio stream decoder is used to do this.
* If we are at the end of the file, pass an empty packet to the decoder
* to flush it.
*/
if ((error = avcodec_decode_audio4(input_codec_context, frame,
data_present, &input_packet)) < 0) {
fprintf(stderr, "Could not decode frame (error '%s')\n",
get_error_text(error));
av_free_packet(&input_packet);
return error;
}
/**
* If the decoder has not been flushed completely, we are not finished,
* so this function has to be called again.
*/
if (*finished && *data_present)
*finished = 0;
av_free_packet(&input_packet);
return 0;
}
/**
* Initialize a temporary storage for the specified number of audio samples.
* The conversion requires temporary storage due to the different format.
* The number of audio samples to be allocated is specified in frame_size.
*/
static int init_converted_samples(uint8_t ***converted_input_samples,
AVCodecContext *output_codec_context,
int frame_size)
{
int error;
/**
* Allocate as many pointers as there are audio channels.
* Each pointer will later point to the audio samples of the corresponding
* channels (although it may be NULL for interleaved formats).
*/
if (!(*converted_input_samples = calloc(output_codec_context->channels,
sizeof(**converted_input_samples)))) {
fprintf(stderr, "Could not allocate converted input sample pointers\n");
return AVERROR(ENOMEM);
}
/**
* Allocate memory for the samples of all channels in one consecutive
* block for convenience.
*/
if ((error = av_samples_alloc(*converted_input_samples, NULL,
output_codec_context->channels,
frame_size,
output_codec_context->sample_fmt, 0)) < 0) {
fprintf(stderr,
"Could not allocate converted input samples (error '%s')\n",
get_error_text(error));
av_freep(&(*converted_input_samples)[0]);
free(*converted_input_samples);
return error;
}
return 0;
}
/**
* Convert the input audio samples into the output sample format.
* The conversion happens on a per-frame basis, the size of which is specified
* by frame_size.
*/
static int convert_samples(const uint8_t **input_data,
uint8_t **converted_data, const int frame_size,
SwrContext *resample_context)
{
int error;
/** Convert the samples using the resampler. */
if ((error = swr_convert(resample_context,
converted_data, frame_size,
input_data , frame_size)) < 0) {
fprintf(stderr, "Could not convert input samples (error '%s')\n",
get_error_text(error));
return error;
}
return 0;
}
/** Add converted input audio samples to the FIFO buffer for later processing. */
static int add_samples_to_fifo(AVAudioFifo *fifo,
uint8_t **converted_input_samples,
const int frame_size)
{
int error;
/**
* Make the FIFO as large as it needs to be to hold both
* the old and the new samples.
*/
if ((error = av_audio_fifo_realloc(fifo, av_audio_fifo_size(fifo) + frame_size)) < 0) {
fprintf(stderr, "Could not reallocate FIFO\n");
return error;
}
/** Store the new samples in the FIFO buffer. */
if (av_audio_fifo_write(fifo, (void **)converted_input_samples,
frame_size) < frame_size) {
fprintf(stderr, "Could not write data to FIFO\n");
return AVERROR_EXIT;
}
return 0;
}
/**
* Read one audio frame from the input file, decode, convert and store
* it in the FIFO buffer.
*/
static int read_decode_convert_and_store(AVAudioFifo *fifo,
AVFormatContext *input_format_context,
AVCodecContext *input_codec_context,
AVCodecContext *output_codec_context,
SwrContext *resampler_context,
int *finished)
{
/** Temporary storage of the input samples of the frame read from the file. */
AVFrame *input_frame = NULL;
/** Temporary storage for the converted input samples. */
uint8_t **converted_input_samples = NULL;
int data_present;
int ret = AVERROR_EXIT;
/** Initialize temporary storage for one input frame. */
if (init_input_frame(&input_frame))
goto cleanup;
/** Decode one frame worth of audio samples. */
if (decode_audio_frame(input_frame, input_format_context,
input_codec_context, &data_present, finished))
goto cleanup;
/**
* If we are at the end of the file and there are no more samples
* in the decoder which are delayed, we are actually finished.
* This must not be treated as an error.
*/
if (*finished && !data_present) {
ret = 0;
goto cleanup;
}
/** If there is decoded data, convert and store it */
if (data_present) {
/** Initialize the temporary storage for the converted input samples. */
if (init_converted_samples(&converted_input_samples, output_codec_context,
input_frame->nb_samples))
goto cleanup;
/**
* Convert the input samples to the desired output sample format.
* This requires a temporary storage provided by converted_input_samples.
*/
if (convert_samples((const uint8_t**)input_frame->extended_data, converted_input_samples,
input_frame->nb_samples, resampler_context))
goto cleanup;
/** Add the converted input samples to the FIFO buffer for later processing. */
if (add_samples_to_fifo(fifo, converted_input_samples,
input_frame->nb_samples))
goto cleanup;
}
ret = 0;
cleanup:
if (converted_input_samples) {
av_freep(&converted_input_samples[0]);
free(converted_input_samples);
}
av_frame_free(&input_frame);
return ret;
}
/**
* Initialize one input frame for writing to the output file.
* The frame will be exactly frame_size samples large.
*/
static int init_output_frame(AVFrame **frame,
AVCodecContext *output_codec_context,
int frame_size)
{
int error;
/** Create a new frame to store the audio samples. */
if (!(*frame = av_frame_alloc())) {
fprintf(stderr, "Could not allocate output frame\n");
return AVERROR_EXIT;
}
/**
* Set the frame's parameters, especially its size and format.
* av_frame_get_buffer needs this to allocate memory for the
* audio samples of the frame.
* Default channel layouts based on the number of channels
* are assumed for simplicity.
*/
(*frame)->nb_samples = frame_size;
(*frame)->channel_layout = output_codec_context->channel_layout;
(*frame)->format = output_codec_context->sample_fmt;
(*frame)->sample_rate = output_codec_context->sample_rate;
/**
* Allocate the samples of the created frame. This call will make
* sure that the audio frame can hold as many samples as specified.
*/
if ((error = av_frame_get_buffer(*frame, 0)) < 0) {
fprintf(stderr, "Could allocate output frame samples (error '%s')\n",
get_error_text(error));
av_frame_free(frame);
return error;
}
return 0;
}
/** Encode one frame worth of audio to the output file. */
static int encode_audio_frame(AVFrame *frame,
AVFormatContext *output_format_context,
AVCodecContext *output_codec_context,
int *data_present)
{
/** Packet used for temporary storage. */
AVPacket output_packet;
int error;
init_packet(&output_packet);
/**
* Encode the audio frame and store it in the temporary packet.
* The output audio stream encoder is used to do this.
*/
if ((error = avcodec_encode_audio2(output_codec_context, &output_packet,
frame, data_present)) < 0) {
fprintf(stderr, "Could not encode frame (error '%s')\n",
get_error_text(error));
av_free_packet(&output_packet);
return error;
}
/** Write one audio frame from the temporary packet to the output file. */
if (*data_present) {
if ((error = av_write_frame(output_format_context, &output_packet)) < 0) {
fprintf(stderr, "Could not write frame (error '%s')\n",
get_error_text(error));
av_free_packet(&output_packet);
return error;
}
av_free_packet(&output_packet);
}
return 0;
}
/**
* Load one audio frame from the FIFO buffer, encode and write it to the
* output file.
*/
static int load_encode_and_write(AVAudioFifo *fifo,
AVFormatContext *output_format_context,
AVCodecContext *output_codec_context)
{
/** Temporary storage of the output samples of the frame written to the file. */
AVFrame *output_frame;
/**
* Use the maximum number of possible samples per frame.
* If there is less than the maximum possible frame size in the FIFO
* buffer, use this number. Otherwise, use the maximum possible frame size.
*/
const int frame_size = FFMIN(av_audio_fifo_size(fifo),
output_codec_context->frame_size);
int data_written;
/** Initialize temporary storage for one output frame. */
if (init_output_frame(&output_frame, output_codec_context, frame_size))
return AVERROR_EXIT;
/**
* Read as many samples from the FIFO buffer as required to fill the frame.
* The samples are stored in the frame temporarily.
*/
if (av_audio_fifo_read(fifo, (void **)output_frame->data, frame_size) < frame_size) {
fprintf(stderr, "Could not read data from FIFO\n");
av_frame_free(&output_frame);
return AVERROR_EXIT;
}
/** Encode one frame worth of audio samples. */
if (encode_audio_frame(output_frame, output_format_context,
output_codec_context, &data_written)) {
av_frame_free(&output_frame);
return AVERROR_EXIT;
}
av_frame_free(&output_frame);
return 0;
}
/** Write the trailer of the output file container. */
static int write_output_file_trailer(AVFormatContext *output_format_context)
{
int error;
if ((error = av_write_trailer(output_format_context)) < 0) {
fprintf(stderr, "Could not write output file trailer (error '%s')\n",
get_error_text(error));
return error;
}
return 0;
}
/** Convert an audio file to an AAC file in an MP4 container. */
int main(int argc, char **argv)
{
AVFormatContext *input_format_context = NULL, *output_format_context = NULL;
AVCodecContext *input_codec_context = NULL, *output_codec_context = NULL;
SwrContext *resample_context = NULL;
AVAudioFifo *fifo = NULL;
int ret = AVERROR_EXIT;
if (argc < 3) {
fprintf(stderr, "Usage: %s <input file> <output file>\n", argv[0]);
exit(1);
}
/** Register all codecs and formats so that they can be used. */
av_register_all();
/** Open the input file for reading. */
if (open_input_file(argv[1], &input_format_context,
&input_codec_context))
goto cleanup;
/** Open the output file for writing. */
if (open_output_file(argv[2], input_codec_context,
&output_format_context, &output_codec_context))
goto cleanup;
/** Initialize the resampler to be able to convert audio sample formats. */
if (init_resampler(input_codec_context, output_codec_context,
&resample_context))
goto cleanup;
/** Initialize the FIFO buffer to store audio samples to be encoded. */
if (init_fifo(&fifo))
goto cleanup;
/** Write the header of the output file container. */
if (write_output_file_header(output_format_context))
goto cleanup;
/**
* Loop as long as we have input samples to read or output samples
* to write; abort as soon as we have neither.
*/
while (1) {
/** Use the encoder's desired frame size for processing. */
const int output_frame_size = output_codec_context->frame_size;
int finished = 0;
/**
* Make sure that there is one frame worth of samples in the FIFO
* buffer so that the encoder can do its work.
* Since the decoder's and the encoder's frame size may differ, we
* need the FIFO buffer to store as many frames worth of input samples
* as it takes to make up at least one frame worth of output samples.
*/
while (av_audio_fifo_size(fifo) < output_frame_size) {
/**
* Decode one frame worth of audio samples, convert it to the
* output sample format and put it into the FIFO buffer.
*/
if (read_decode_convert_and_store(fifo, input_format_context,
input_codec_context,
output_codec_context,
resample_context, &finished))
goto cleanup;
/**
* If we are at the end of the input file, we continue
* encoding the remaining audio samples to the output file.
*/
if (finished)
break;
}
/**
* If we have enough samples for the encoder, we encode them.
* At the end of the file, we pass the remaining samples to
* the encoder.
*/
while (av_audio_fifo_size(fifo) >= output_frame_size ||
(finished && av_audio_fifo_size(fifo) > 0))
/**
* Take one frame worth of audio samples from the FIFO buffer,
* encode it and write it to the output file.
*/
if (load_encode_and_write(fifo, output_format_context,
output_codec_context))
goto cleanup;
/**
* If we are at the end of the input file and have encoded
* all remaining samples, we can exit this loop and finish.
*/
if (finished) {
int data_written;
/** Flush the encoder as it may have delayed frames. */
do {
if (encode_audio_frame(NULL, output_format_context,
output_codec_context, &data_written))
goto cleanup;
} while (data_written);
break;
}
}
/** Write the trailer of the output file container. */
if (write_output_file_trailer(output_format_context))
goto cleanup;
ret = 0;
cleanup:
if (fifo)
av_audio_fifo_free(fifo);
swr_free(&resample_context);
if (output_codec_context)
avcodec_close(output_codec_context);
if (output_format_context) {
avio_close(output_format_context->pb);
avformat_free_context(output_format_context);
}
if (input_codec_context)
avcodec_close(input_codec_context);
if (input_format_context)
avformat_close_input(&input_format_context);
return ret;
}

View File

@@ -1,597 +0,0 @@
/*
* Copyright (c) 2010 Nicolas George
* Copyright (c) 2011 Stefano Sabatini
* Copyright (c) 2014 Andrey Utkin
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/**
* @file
* API example for demuxing, decoding, filtering, encoding and muxing
* @example transcoding.c
*/
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavfilter/avfiltergraph.h>
#include <libavfilter/avcodec.h>
#include <libavfilter/buffersink.h>
#include <libavfilter/buffersrc.h>
#include <libavutil/opt.h>
#include <libavutil/pixdesc.h>
static AVFormatContext *ifmt_ctx;
static AVFormatContext *ofmt_ctx;
typedef struct FilteringContext {
AVFilterContext *buffersink_ctx;
AVFilterContext *buffersrc_ctx;
AVFilterGraph *filter_graph;
} FilteringContext;
static FilteringContext *filter_ctx;
static int open_input_file(const char *filename)
{
int ret;
unsigned int i;
ifmt_ctx = NULL;
if ((ret = avformat_open_input(&ifmt_ctx, filename, NULL, NULL)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open input file\n");
return ret;
}
if ((ret = avformat_find_stream_info(ifmt_ctx, NULL)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot find stream information\n");
return ret;
}
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
AVStream *stream;
AVCodecContext *codec_ctx;
stream = ifmt_ctx->streams[i];
codec_ctx = stream->codec;
/* Reencode video & audio and remux subtitles etc. */
if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
|| codec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
/* Open decoder */
ret = avcodec_open2(codec_ctx,
avcodec_find_decoder(codec_ctx->codec_id), NULL);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Failed to open decoder for stream #%u\n", i);
return ret;
}
}
}
av_dump_format(ifmt_ctx, 0, filename, 0);
return 0;
}
static int open_output_file(const char *filename)
{
AVStream *out_stream;
AVStream *in_stream;
AVCodecContext *dec_ctx, *enc_ctx;
AVCodec *encoder;
int ret;
unsigned int i;
ofmt_ctx = NULL;
avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, filename);
if (!ofmt_ctx) {
av_log(NULL, AV_LOG_ERROR, "Could not create output context\n");
return AVERROR_UNKNOWN;
}
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
out_stream = avformat_new_stream(ofmt_ctx, NULL);
if (!out_stream) {
av_log(NULL, AV_LOG_ERROR, "Failed allocating output stream\n");
return AVERROR_UNKNOWN;
}
in_stream = ifmt_ctx->streams[i];
dec_ctx = in_stream->codec;
enc_ctx = out_stream->codec;
if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
|| dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
/* in this example, we choose transcoding to same codec */
encoder = avcodec_find_encoder(dec_ctx->codec_id);
/* In this example, we transcode to same properties (picture size,
* sample rate etc.). These properties can be changed for output
* streams easily using filters */
if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
enc_ctx->height = dec_ctx->height;
enc_ctx->width = dec_ctx->width;
enc_ctx->sample_aspect_ratio = dec_ctx->sample_aspect_ratio;
/* take first format from list of supported formats */
enc_ctx->pix_fmt = encoder->pix_fmts[0];
/* video time_base can be set to whatever is handy and supported by encoder */
enc_ctx->time_base = dec_ctx->time_base;
} else {
enc_ctx->sample_rate = dec_ctx->sample_rate;
enc_ctx->channel_layout = dec_ctx->channel_layout;
enc_ctx->channels = av_get_channel_layout_nb_channels(enc_ctx->channel_layout);
/* take first format from list of supported formats */
enc_ctx->sample_fmt = encoder->sample_fmts[0];
enc_ctx->time_base = (AVRational){1, enc_ctx->sample_rate};
}
/* Third parameter can be used to pass settings to encoder */
ret = avcodec_open2(enc_ctx, encoder, NULL);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open video encoder for stream #%u\n", i);
return ret;
}
} else if (dec_ctx->codec_type == AVMEDIA_TYPE_UNKNOWN) {
av_log(NULL, AV_LOG_FATAL, "Elementary stream #%d is of unknown type, cannot proceed\n", i);
return AVERROR_INVALIDDATA;
} else {
/* if this stream must be remuxed */
ret = avcodec_copy_context(ofmt_ctx->streams[i]->codec,
ifmt_ctx->streams[i]->codec);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Copying stream context failed\n");
return ret;
}
}
if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
enc_ctx->flags |= CODEC_FLAG_GLOBAL_HEADER;
}
av_dump_format(ofmt_ctx, 0, filename, 1);
if (!(ofmt_ctx->oformat->flags & AVFMT_NOFILE)) {
ret = avio_open(&ofmt_ctx->pb, filename, AVIO_FLAG_WRITE);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Could not open output file '%s'", filename);
return ret;
}
}
/* init muxer, write output file header */
ret = avformat_write_header(ofmt_ctx, NULL);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error occurred when opening output file\n");
return ret;
}
return 0;
}
static int init_filter(FilteringContext* fctx, AVCodecContext *dec_ctx,
AVCodecContext *enc_ctx, const char *filter_spec)
{
char args[512];
int ret = 0;
AVFilter *buffersrc = NULL;
AVFilter *buffersink = NULL;
AVFilterContext *buffersrc_ctx = NULL;
AVFilterContext *buffersink_ctx = NULL;
AVFilterInOut *outputs = avfilter_inout_alloc();
AVFilterInOut *inputs = avfilter_inout_alloc();
AVFilterGraph *filter_graph = avfilter_graph_alloc();
if (!outputs || !inputs || !filter_graph) {
ret = AVERROR(ENOMEM);
goto end;
}
if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
buffersrc = avfilter_get_by_name("buffer");
buffersink = avfilter_get_by_name("buffersink");
if (!buffersrc || !buffersink) {
av_log(NULL, AV_LOG_ERROR, "filtering source or sink element not found\n");
ret = AVERROR_UNKNOWN;
goto end;
}
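/* The buffer source must be told the format of the decoded frames it will
 * receive: picture size, pixel format, time base and sample aspect ratio
 * are all taken from the decoder context. */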
snprintf(args, sizeof(args),
"video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
dec_ctx->time_base.num, dec_ctx->time_base.den,
dec_ctx->sample_aspect_ratio.num,
dec_ctx->sample_aspect_ratio.den);
ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
args, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create buffer source\n");
goto end;
}
ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
NULL, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create buffer sink\n");
goto end;
}
ret = av_opt_set_bin(buffersink_ctx, "pix_fmts",
(uint8_t*)&enc_ctx->pix_fmt, sizeof(enc_ctx->pix_fmt),
AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output pixel format\n");
goto end;
}
} else if (dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
buffersrc = avfilter_get_by_name("abuffer");
buffersink = avfilter_get_by_name("abuffersink");
if (!buffersrc || !buffersink) {
av_log(NULL, AV_LOG_ERROR, "filtering source or sink element not found\n");
ret = AVERROR_UNKNOWN;
goto end;
}
if (!dec_ctx->channel_layout)
dec_ctx->channel_layout =
av_get_default_channel_layout(dec_ctx->channels);
snprintf(args, sizeof(args),
"time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=0x%"PRIx64,
dec_ctx->time_base.num, dec_ctx->time_base.den, dec_ctx->sample_rate,
av_get_sample_fmt_name(dec_ctx->sample_fmt),
dec_ctx->channel_layout);
ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
args, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer source\n");
goto end;
}
ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
NULL, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer sink\n");
goto end;
}
ret = av_opt_set_bin(buffersink_ctx, "sample_fmts",
(uint8_t*)&enc_ctx->sample_fmt, sizeof(enc_ctx->sample_fmt),
AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output sample format\n");
goto end;
}
ret = av_opt_set_bin(buffersink_ctx, "channel_layouts",
(uint8_t*)&enc_ctx->channel_layout,
sizeof(enc_ctx->channel_layout), AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output channel layout\n");
goto end;
}
ret = av_opt_set_bin(buffersink_ctx, "sample_rates",
(uint8_t*)&enc_ctx->sample_rate, sizeof(enc_ctx->sample_rate),
AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output sample rate\n");
goto end;
}
} else {
ret = AVERROR_UNKNOWN;
goto end;
}
/* Endpoints for the filter graph. */
outputs->name = av_strdup("in");
outputs->filter_ctx = buffersrc_ctx;
outputs->pad_idx = 0;
outputs->next = NULL;
inputs->name = av_strdup("out");
inputs->filter_ctx = buffersink_ctx;
inputs->pad_idx = 0;
inputs->next = NULL;
if (!outputs->name || !inputs->name) {
ret = AVERROR(ENOMEM);
goto end;
}
if ((ret = avfilter_graph_parse_ptr(filter_graph, filter_spec,
&inputs, &outputs, NULL)) < 0)
goto end;
if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
goto end;
/* Fill FilteringContext */
fctx->buffersrc_ctx = buffersrc_ctx;
fctx->buffersink_ctx = buffersink_ctx;
fctx->filter_graph = filter_graph;
end:
avfilter_inout_free(&inputs);
avfilter_inout_free(&outputs);
return ret;
}
static int init_filters(void)
{
const char *filter_spec;
unsigned int i;
int ret;
filter_ctx = av_malloc_array(ifmt_ctx->nb_streams, sizeof(*filter_ctx));
if (!filter_ctx)
return AVERROR(ENOMEM);
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
filter_ctx[i].buffersrc_ctx = NULL;
filter_ctx[i].buffersink_ctx = NULL;
filter_ctx[i].filter_graph = NULL;
if (!(ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO
|| ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO))
continue;
if (ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
filter_spec = "null"; /* passthrough (dummy) filter for video */
else
filter_spec = "anull"; /* passthrough (dummy) filter for audio */
ret = init_filter(&filter_ctx[i], ifmt_ctx->streams[i]->codec,
ofmt_ctx->streams[i]->codec, filter_spec);
if (ret)
return ret;
}
return 0;
}
static int encode_write_frame(AVFrame *filt_frame, unsigned int stream_index, int *got_frame) {
int ret;
int got_frame_local;
AVPacket enc_pkt;
int (*enc_func)(AVCodecContext *, AVPacket *, const AVFrame *, int *) =
(ifmt_ctx->streams[stream_index]->codec->codec_type ==
AVMEDIA_TYPE_VIDEO) ? avcodec_encode_video2 : avcodec_encode_audio2;
if (!got_frame)
got_frame = &got_frame_local;
av_log(NULL, AV_LOG_INFO, "Encoding frame\n");
/* encode filtered frame */
enc_pkt.data = NULL;
enc_pkt.size = 0;
av_init_packet(&enc_pkt);
ret = enc_func(ofmt_ctx->streams[stream_index]->codec, &enc_pkt,
filt_frame, got_frame);
av_frame_free(&filt_frame);
if (ret < 0)
return ret;
if (!(*got_frame))
return 0;
/* prepare packet for muxing */
enc_pkt.stream_index = stream_index;
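/* Rescale the packet timestamps from the encoder's time base to the output
 * stream's time base; AV_ROUND_PASS_MINMAX leaves AV_NOPTS_VALUE untouched. */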
enc_pkt.dts = av_rescale_q_rnd(enc_pkt.dts,
ofmt_ctx->streams[stream_index]->codec->time_base,
ofmt_ctx->streams[stream_index]->time_base,
AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
enc_pkt.pts = av_rescale_q_rnd(enc_pkt.pts,
ofmt_ctx->streams[stream_index]->codec->time_base,
ofmt_ctx->streams[stream_index]->time_base,
AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
enc_pkt.duration = av_rescale_q(enc_pkt.duration,
ofmt_ctx->streams[stream_index]->codec->time_base,
ofmt_ctx->streams[stream_index]->time_base);
av_log(NULL, AV_LOG_DEBUG, "Muxing frame\n");
/* mux encoded frame */
ret = av_interleaved_write_frame(ofmt_ctx, &enc_pkt);
return ret;
}
static int filter_encode_write_frame(AVFrame *frame, unsigned int stream_index)
{
int ret;
AVFrame *filt_frame;
av_log(NULL, AV_LOG_INFO, "Pushing decoded frame to filters\n");
/* push the decoded frame into the filtergraph */
ret = av_buffersrc_add_frame_flags(filter_ctx[stream_index].buffersrc_ctx,
frame, 0);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error while feeding the filtergraph\n");
return ret;
}
/* pull filtered frames from the filtergraph */
while (1) {
filt_frame = av_frame_alloc();
if (!filt_frame) {
ret = AVERROR(ENOMEM);
break;
}
av_log(NULL, AV_LOG_INFO, "Pulling filtered frame from filters\n");
ret = av_buffersink_get_frame(filter_ctx[stream_index].buffersink_ctx,
filt_frame);
if (ret < 0) {
/* if no more frames for output - returns AVERROR(EAGAIN)
* if flushed and no more frames for output - returns AVERROR_EOF
* rewrite retcode to 0 to show it as normal procedure completion
*/
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
ret = 0;
av_frame_free(&filt_frame);
break;
}
filt_frame->pict_type = AV_PICTURE_TYPE_NONE;
ret = encode_write_frame(filt_frame, stream_index, NULL);
if (ret < 0)
break;
}
return ret;
}
static int flush_encoder(unsigned int stream_index)
{
int ret;
int got_frame;
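/* Encoders without the CODEC_CAP_DELAY capability buffer no frames,
 * so there is nothing to flush for them. */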
if (!(ofmt_ctx->streams[stream_index]->codec->codec->capabilities &
CODEC_CAP_DELAY))
return 0;
while (1) {
av_log(NULL, AV_LOG_INFO, "Flushing stream #%u encoder\n", stream_index);
ret = encode_write_frame(NULL, stream_index, &got_frame);
if (ret < 0)
break;
if (!got_frame)
return 0;
}
return ret;
}
int main(int argc, char **argv)
{
int ret;
AVPacket packet = { .data = NULL, .size = 0 };
AVFrame *frame = NULL;
enum AVMediaType type;
unsigned int stream_index;
unsigned int i;
int got_frame;
int (*dec_func)(AVCodecContext *, AVFrame *, int *, const AVPacket *);
if (argc != 3) {
av_log(NULL, AV_LOG_ERROR, "Usage: %s <input file> <output file>\n", argv[0]);
return 1;
}
av_register_all();
avfilter_register_all();
if ((ret = open_input_file(argv[1])) < 0)
goto end;
if ((ret = open_output_file(argv[2])) < 0)
goto end;
if ((ret = init_filters()) < 0)
goto end;
/* read all packets */
while (1) {
if ((ret = av_read_frame(ifmt_ctx, &packet)) < 0)
break;
stream_index = packet.stream_index;
type = ifmt_ctx->streams[packet.stream_index]->codec->codec_type;
av_log(NULL, AV_LOG_DEBUG, "Demuxer gave frame of stream_index %u\n",
stream_index);
if (filter_ctx[stream_index].filter_graph) {
av_log(NULL, AV_LOG_DEBUG, "Going to reencode&filter the frame\n");
frame = av_frame_alloc();
if (!frame) {
ret = AVERROR(ENOMEM);
break;
}
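/* Rescale the packet timestamps from the demuxer's stream time base
 * to the decoder's codec time base before decoding. */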
packet.dts = av_rescale_q_rnd(packet.dts,
ifmt_ctx->streams[stream_index]->time_base,
ifmt_ctx->streams[stream_index]->codec->time_base,
AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
packet.pts = av_rescale_q_rnd(packet.pts,
ifmt_ctx->streams[stream_index]->time_base,
ifmt_ctx->streams[stream_index]->codec->time_base,
AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
dec_func = (type == AVMEDIA_TYPE_VIDEO) ? avcodec_decode_video2 :
avcodec_decode_audio4;
ret = dec_func(ifmt_ctx->streams[stream_index]->codec, frame,
&got_frame, &packet);
if (ret < 0) {
av_frame_free(&frame);
av_log(NULL, AV_LOG_ERROR, "Decoding failed\n");
break;
}
if (got_frame) {
frame->pts = av_frame_get_best_effort_timestamp(frame);
ret = filter_encode_write_frame(frame, stream_index);
av_frame_free(&frame);
if (ret < 0)
goto end;
} else {
av_frame_free(&frame);
}
} else {
/* remux this frame without reencoding */
packet.dts = av_rescale_q_rnd(packet.dts,
ifmt_ctx->streams[stream_index]->time_base,
ofmt_ctx->streams[stream_index]->time_base,
AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
packet.pts = av_rescale_q_rnd(packet.pts,
ifmt_ctx->streams[stream_index]->time_base,
ofmt_ctx->streams[stream_index]->time_base,
AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
ret = av_interleaved_write_frame(ofmt_ctx, &packet);
if (ret < 0)
goto end;
}
av_free_packet(&packet);
}
/* flush filters and encoders */
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
/* flush filter */
if (!filter_ctx[i].filter_graph)
continue;
ret = filter_encode_write_frame(NULL, i);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Flushing filter failed\n");
goto end;
}
/* flush encoder */
ret = flush_encoder(i);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Flushing encoder failed\n");
goto end;
}
}
av_write_trailer(ofmt_ctx);
end:
av_free_packet(&packet);
av_frame_free(&frame);
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
avcodec_close(ifmt_ctx->streams[i]->codec);
if (ofmt_ctx && ofmt_ctx->nb_streams > i && ofmt_ctx->streams[i] && ofmt_ctx->streams[i]->codec)
avcodec_close(ofmt_ctx->streams[i]->codec);
if (filter_ctx && filter_ctx[i].filter_graph)
avfilter_graph_free(&filter_ctx[i].filter_graph);
}
av_free(filter_ctx);
avformat_close_input(&ifmt_ctx);
if (ofmt_ctx && !(ofmt_ctx->oformat->flags & AVFMT_NOFILE))
avio_close(ofmt_ctx->pb);
avformat_free_context(ofmt_ctx);
if (ret < 0)
av_log(NULL, AV_LOG_ERROR, "Error occurred: %s\n", av_err2str(ret));
return ret ? 1 : 0;
}

View File

@@ -105,7 +105,7 @@ For example, img1.jpg, img2.jpg, img3.jpg,...
Then you may run:
@example
ffmpeg -f image2 -i img%d.jpg /tmp/a.mpg
@end example
Notice that @samp{%d} is replaced by the image number.
@@ -118,7 +118,7 @@ the sequence. This is useful if your sequence does not start with
example will start with @file{img100.jpg}:
@example
ffmpeg -f image2 -start_number 100 -i img%d.jpg /tmp/a.mpg
@end example
If you have large number of pictures to rename, you can use the
@@ -128,7 +128,7 @@ that match @code{*jpg} to the @file{/tmp} directory in the sequence of
@file{img001.jpg}, @file{img002.jpg} and so on.
@example
x=1; for i in *jpg; do counter=$(printf %03d $x); ln -s "$i" /tmp/img"$counter".jpg; x=$(($x+1)); done
@end example
If you want to sequence them by oldest modified first, substitute
@@ -137,7 +137,7 @@ If you want to sequence them by oldest modified first, substitute
Then run:
@example
ffmpeg -f image2 -i /tmp/img%03d.jpg /tmp/a.mpg
@end example
The same logic is used for any image format that ffmpeg reads.
@@ -145,7 +145,7 @@ The same logic is used for any image format that ffmpeg reads.
You can also use @command{cat} to pipe images to ffmpeg:
@example
cat *.jpg | ffmpeg -f image2pipe -c:v mjpeg -i - output.mpg
@end example
@section How do I encode movie to single pictures?
@@ -153,7 +153,7 @@ cat *.jpg | ffmpeg -f image2pipe -c:v mjpeg -i - output.mpg
Use:
@example
ffmpeg -i movie.mpg movie%d.jpg
@end example
The @file{movie.mpg} used as input will be converted to
@@ -169,7 +169,7 @@ to force the encoding.
Applying that to the previous example:
@example
ffmpeg -i movie.mpg -f image2 -c:v mjpeg menu%d.jpg
@end example
Beware that there is no "jpeg" codec. Use "mjpeg" instead.
@@ -227,15 +227,15 @@ then you may use any file that DirectShow can read as input.
Just create an "input.avs" text file with this single line ...
@example
DirectShowSource("C:\path to your file\yourfile.asf")
DirectShowSource("C:\path to your file\yourfile.asf")
@end example
... and then feed that text file to ffmpeg:
@example
ffmpeg -i input.avs
@end example
For ANY other help on AviSynth, please visit the
@uref{http://www.avisynth.org/, AviSynth homepage}.
@section How can I join video files?
@@ -368,6 +368,26 @@ ffmpeg -f u16le -acodec pcm_s16le -ac 2 -ar 44100 -i all.a \
rm temp[12].[av] all.[av]
@end example
@section -profile option fails when encoding H.264 video with AAC audio
@command{ffmpeg} prints an error like
@example
Undefined constant or missing '(' in 'baseline'
Unable to parse option value "baseline"
Error setting option profile to value baseline.
@end example
Short answer: write @option{-profile:v} instead of @option{-profile}.
Long answer: this happens because the @option{-profile} option can apply to both
video and audio. Specifically the AAC encoder also defines some profiles, none
of which are named @var{baseline}.
The solution is to apply the @option{-profile} option to the video stream only
by using @url{http://ffmpeg.org/ffmpeg.html#Stream-specifiers-1, Stream specifiers}.
Appending @code{:v} to it will do exactly that.
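For example (the file names here are placeholders, assuming libx264 and the experimental native AAC encoder are available):
@example
ffmpeg -i input.mov -c:v libx264 -profile:v baseline -c:a aac -strict experimental output.mp4
@end example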
@section Using @option{-f lavfi}, audio becomes mono for no apparent reason.
Use @option{-dumpgraph -} to find out exactly where the channel layout is
@@ -392,7 +412,7 @@ VOB and a few other formats do not have a global header that describes
everything present in the file. Instead, applications are supposed to scan
the file to see what it contains. Since VOB files are frequently large, only
the beginning is scanned. If the subtitles happen only later in the file,
they will not be initially detected.
Some applications, including the @code{ffmpeg} command-line tool, can only
work with streams that were detected during the initial scan; streams that
@@ -455,10 +475,9 @@ read @uref{http://www.tux.org/lkml/#s15, "Programming Religion"}.
@section Why are the ffmpeg programs devoid of debugging symbols?
The build process creates @command{ffmpeg_g}, @command{ffplay_g}, etc. which
contain full debug information. Those binaries are stripped to create
@command{ffmpeg}, @command{ffplay}, etc. If you need the debug information, use
the *_g versions.
@section I do not like the LGPL, can I contribute code under the GPL instead?
@@ -478,7 +497,7 @@ An easy way to get the full list of required libraries in dependency order
is to use @code{pkg-config}.
@example
c99 -o program program.c $(pkg-config --cflags --libs libavformat libavcodec)
@end example
See @file{doc/example/Makefile} and @file{doc/example/pc-uninstalled} for
@@ -502,6 +521,10 @@ to use them you have to append -D__STDC_CONSTANT_MACROS to your CXXFLAGS
You have to create a custom AVIOContext using @code{avio_alloc_context},
see @file{libavformat/aviobuf.c} in FFmpeg and @file{libmpdemux/demux_lavf.c} in MPlayer or MPlayer2 sources.
@section Where can I find libav* headers for Pascal/Delphi?
see @url{http://www.iversenit.dk/dev/ffmpeg-headers/}
@section Where is the documentation about ffv1, msmpeg4, asv1, 4xm?
see @url{http://www.ffmpeg.org/~michael/}
@@ -514,12 +537,11 @@ In this specific case please look at RFC 4629 to see how it should be done.
@section AVStream.r_frame_rate is wrong, it is much larger than the frame rate.
@code{r_frame_rate} is NOT the average frame rate, it is the smallest frame rate
that can accurately represent all timestamps. So no, it is not
wrong if it is larger than the average!
For example, if you have mixed 25 and 30 fps content, then @code{r_frame_rate}
will be 150 (it is the least common multiple).
If you are looking for the average frame rate, see @code{AVStream.avg_frame_rate}.
@section Why is @code{make fate} not running all tests?

View File

@@ -153,20 +153,20 @@ the synchronisation of the samples directory.
@table @option
@item fate-rsync
Download/synchronize sample files to the configured samples directory.
@item fate-list
Will list all fate/regression test targets.
@item fate
Run the FATE test suite (requires the fate-suite dataset).
@end table
@section Makefile variables
@table @option
@item V
Verbosity level, can be set to 0, 1 or 2.
@itemize
@item 0: show just the test arguments
@item 1: show just the command used in the test
@@ -174,26 +174,22 @@ Verbosity level, can be set to 0, 1 or 2.
@end itemize
@item SAMPLES
Specify or override the path to the FATE samples at make time; it has a
meaning only while running the regression tests.
@item THREADS
Specify how many threads to use while running regression tests; it is
quite useful to detect thread-related regressions.
@item THREAD_TYPE
Specify which threading strategy to test, either @var{slice} or @var{frame};
the default is @var{slice+frame}.
@item CPUFLAGS
Specify CPU flags.
@item TARGET_EXEC
Specify or override the wrapper used to run the tests.
The @var{TARGET_EXEC} option provides a way to run FATE wrapped in
@command{valgrind}, @command{qemu-user} or @command{wine} or on remote targets
through @command{ssh}.
@item GEN
Set to @var{1} to generate the missing or mismatched references.
@end table
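As an illustration (the samples path below is just an example location), a typical FATE run might look like:
@example
make fate-rsync SAMPLES=fate-suite/
make fate SAMPLES=fate-suite/ THREADS=4
@end example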

View File

@@ -12,9 +12,9 @@
@chapter Description
@c man begin DESCRIPTION
The FFmpeg resampler provides a high-level interface to the
libswresample library audio resampling utilities. In particular it
allows one to perform audio resampling, audio channel layout rematrixing,
and convert audio format and packing layout.
@c man end DESCRIPTION

View File

@@ -12,8 +12,8 @@
@chapter Description
@c man begin DESCRIPTION
The FFmpeg rescaler provides a high-level interface to the libswscale
library image conversion utilities. In particular it allows one to perform
image rescaling and pixel format conversion.
@c man end DESCRIPTION

View File

@@ -80,23 +80,11 @@ The transcoding process in @command{ffmpeg} for each output can be described by
the following diagram:
@example
 _______              ______________
|       |            |              |
| input |  demuxer   | encoded data |   decoder
| file  | ---------> | packets      | -----+
|_______|            |______________|      |
                                            v
                                        _________
                                       |         |
                                       | decoded |
                                       | frames  |
                                       |_________|
 ________             ______________       |
|        |           |              |      |
| output | <-------- | encoded data | <----+
| file   |   muxer   | packets      |   encoder
|________|           |______________|
 _______              ______________              _________              ______________              ________
|       |            |              |            |         |            |              |            |        |
| input |  demuxer   | encoded data |  decoder   | decoded |  encoder   | encoded data |    muxer   | output |
| file  | ---------> | packets      | ---------> | frames  | ---------> | packets      | ---------> | file   |
|_______|            |______________|            |_________|            |______________|            |________|
@end example
@@ -124,16 +112,11 @@ the same type. In the above diagram they can be represented by simply inserting
an additional step between decoding and encoding:
@example
 _________                        ______________
|         |                      |              |
| decoded |                      | encoded data |
| frames  |\                   _ | packets      |
|_________| \                  /||______________|
             \   __________   /
  simple     _\||          | /  encoder
  filtergraph   | filtered |/
                | frames   |
                |__________|
 _________                        __________              ______________
|         |                      |          |            |              |
| decoded |  simple filtergraph  | filtered |  encoder   | encoded data |
| frames  | -------------------> | frames   | ---------> | packets      |
|_________|                      |__________|            |______________|
@end example
@@ -142,10 +125,10 @@ Simple filtergraphs are configured with the per-stream @option{-filter} option
A simple filtergraph for video can look for example like this:
@example
 _______        _____________        _______        ________
|       |      |             |      |       |      |        |
| input | ---> | deinterlace | ---> | scale | ---> | output |
|_______|      |_____________|      |_______|      |________|
 _______        _____________        _______        _____        ________
|       |      |             |      |       |      |     |      |        |
| input | ---> | deinterlace | ---> | scale | ---> | fps | ---> | output |
|_______|      |_____________|      |_______|      |_____|      |________|
@end example
@@ -231,7 +214,7 @@ described.
@chapter Options
@c man begin OPTIONS
@include fftools-common-opts.texi
@include avtools-common-opts.texi
@section Main options
@@ -272,13 +255,8 @@ ffmpeg -i INPUT -map 0 -c copy -c:v:1 libx264 -c:a:137 libvorbis OUTPUT
will copy all the streams except the second video, which will be encoded with
libx264, and the 138th audio, which will be encoded with libvorbis.
@item -t @var{duration} (@emph{input/output})
When used as an input option (before @code{-i}), limit the @var{duration} of
data read from the input file.
When used as an output option (before an output filename), stop writing the
output after its duration reaches @var{duration}.
@var{duration} may be a number in seconds, or in @code{hh:mm:ss[.xxx]} form.
-to and -t are mutually exclusive and -t has priority.
@@ -294,33 +272,30 @@ Set the file size limit, expressed in bytes.
@item -ss @var{position} (@emph{input/output})
When used as an input option (before @code{-i}), seeks in this input file to
@var{position}. Note that in most formats it is not possible to seek exactly, so
@command{ffmpeg} will seek to the closest seek point before @var{position}.
When transcoding and @option{-accurate_seek} is enabled (the default), this
extra segment between the seek point and @var{position} will be decoded and
discarded. When doing stream copy or when @option{-noaccurate_seek} is used, it
will be preserved.
When used as an output option (before an output filename), decodes but discards
input until the timestamps reach @var{position}.
@var{position} may be either in seconds or in @code{hh:mm:ss[.xxx]} form.
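For instance (file names are placeholders), to extract 30 seconds starting one minute into the input without re-encoding:
@example
ffmpeg -ss 00:01:00 -i input.mp4 -t 30 -c copy clip.mp4
@end example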
@item -itsoffset @var{offset} (@emph{input})
Set the input time offset.
Set the input time offset in seconds.
@code{[-]hh:mm:ss[.xxx]} syntax is also supported.
The offset is added to the timestamps of the input files.
Specifying a positive offset means that the corresponding
streams are delayed by @var{offset} seconds.
@var{offset} must be a time duration specification,
see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
The offset is added to the timestamps of the input files. Specifying
a positive offset means that the corresponding streams are delayed by
the time duration specified in @var{offset}.
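As an illustration (file names are placeholders), to delay the audio taken from a second input by half a second while copying both streams:
@example
ffmpeg -i video.mp4 -itsoffset 0.5 -i audio.wav -map 0:v -map 1:a -c copy output.mkv
@end example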
@item -timestamp @var{date} (@emph{output})
@item -timestamp @var{time} (@emph{output})
Set the recording timestamp in the container.
@var{date} must be a time duration specification,
see @ref{date syntax,,the Date section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
The syntax for @var{time} is:
@example
now|([(YYYY-MM-DD|YYYYMMDD)[T|t| ]]((HH:MM:SS[.m...])|(HHMMSS[.m...]))[Z|z])
@end example
If the value is "now" it takes the current time.
Time is local time unless 'Z' or 'z' is appended, in which case it is
interpreted as UTC.
If the year-month-day part is not specified it takes the current
year-month-day.
@item -metadata[:metadata_specifier] @var{key}=@var{value} (@emph{output,per-metadata})
Set a metadata key/value pair.
@@ -339,7 +314,7 @@ ffmpeg -i in.avi -metadata title="my title" out.flv
To set the language of the first audio stream:
@example
ffmpeg -i INPUT -metadata:s:a:0 language=eng OUTPUT
@end example
@item -target @var{type} (@emph{output})
@@ -367,13 +342,8 @@ Stop writing to the stream after @var{framecount} frames.
@item -q[:@var{stream_specifier}] @var{q} (@emph{output,per-stream})
@itemx -qscale[:@var{stream_specifier}] @var{q} (@emph{output,per-stream})
Use fixed quality scale (VBR). The meaning of @var{q}/@var{qscale} is
codec-dependent.
If @var{qscale} is used without a @var{stream_specifier} then it applies only
to the video stream. This is to maintain compatibility with previous behavior,
and because specifying the same codec-specific value to two different codecs
(audio and video) is generally not what is intended when no stream_specifier
is used.
@anchor{filter_option}
@item -filter[:@var{stream_specifier}] @var{filtergraph} (@emph{output,per-stream})
@@ -473,9 +443,6 @@ Set frame rate (Hz value, fraction or abbreviation).
As an input option, ignore any timestamps stored in the file and instead
generate timestamps assuming constant frame rate @var{fps}.
This is not the same as the @option{-framerate} option used for some input formats
like image2 or v4l2 (it used to be the same in older versions of FFmpeg).
If in doubt use @option{-framerate} instead of the input option @option{-r}.
As an output option, duplicate or drop input frames to achieve constant output
frame rate @var{fps}.
@@ -530,6 +497,9 @@ prefix is ``ffmpeg2pass''. The complete file name will be
@file{PREFIX-N.log}, where N is a number specific to the output
stream
@item -vlang @var{code}
Set the ISO 639 language code (3 letters) of the current video stream.
@item -vf @var{filtergraph} (@emph{output})
Create the filtergraph specified by @var{filtergraph} and use it to
filter the stream.
@@ -537,7 +507,7 @@ filter the stream.
This is an alias for @code{-filter:v}, see the @ref{filter_option,,-filter option}.
@end table
@section Advanced Video options
@table @option
@item -pix_fmt[:@var{stream_specifier}] @var{format} (@emph{input/output,per-stream})
@@ -640,52 +610,6 @@ would be more efficient.
@item -copyinkf[:@var{stream_specifier}] (@emph{output,per-stream})
When doing stream copy, copy also non-key frames found at the
beginning.
@item -hwaccel[:@var{stream_specifier}] @var{hwaccel} (@emph{input,per-stream})
Use hardware acceleration to decode the matching stream(s). The allowed values
of @var{hwaccel} are:
@table @option
@item none
Do not use any hardware acceleration (the default).
@item auto
Automatically select the hardware acceleration method.
@item vda
Use Apple VDA hardware acceleration.
@item vdpau
Use VDPAU (Video Decode and Presentation API for Unix) hardware acceleration.
@item dxva2
Use DXVA2 (DirectX Video Acceleration) hardware acceleration.
@end table
This option has no effect if the selected hwaccel is not available or not
supported by the chosen decoder.
Note that most acceleration methods are intended for playback and will not be
faster than software decoding on modern CPUs. Additionally, @command{ffmpeg}
will usually need to copy the decoded frames from the GPU memory into the system
memory, resulting in further performance loss. This option is thus mainly
useful for testing.
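For example, to check whether a stream decodes correctly with VDPAU (the input name is a placeholder):
@example
ffmpeg -hwaccel vdpau -i input.mkv -f null -
@end example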
@item -hwaccel_device[:@var{stream_specifier}] @var{hwaccel_device} (@emph{input,per-stream})
Select a device to use for hardware acceleration.
This option only makes sense when the @option{-hwaccel} option is also
specified. Its exact meaning depends on the specific hardware acceleration
method chosen.
@table @option
@item vdpau
For VDPAU, this option specifies the X11 display/screen to use. If this option
is not specified, the value of the @var{DISPLAY} environment variable is used
@item dxva2
For DXVA2, this option should contain the number of the display adapter to use.
If this option is not specified, the default adapter is used.
@end table
@end table
@section Audio Options
@@ -720,7 +644,7 @@ filter the stream.
This is an alias for @code{-filter:a}, see the @ref{filter_option,,-filter option}.
@end table
@section Advanced Audio options
@table @option
@item -atag @var{fourcc/tag} (@emph{output})
@@ -735,9 +659,11 @@ stereo but not 6 channels as 5.1. The default is to always try to guess. Use
0 to disable all guessing.
@end table
@section Subtitle options
@table @option
@item -slang @var{code}
Set the ISO 639 language code (3 letters) of the current subtitle stream.
@item -scodec @var{codec} (@emph{input/output})
Set the subtitle codec. This is an alias for @code{-codec:s}.
@item -sn (@emph{output})
@@ -746,7 +672,7 @@ Disable subtitle recording.
Deprecated, see -bsf
@end table
@section Advanced Subtitle options
@table @option
@@ -824,11 +750,6 @@ To map all the streams except the second audio, use negative mappings
ffmpeg -i INPUT -map 0 -map -0:a:1 OUTPUT
@end example
To pick the English audio stream:
@example
ffmpeg -i INPUT -map 0:m:language:eng OUTPUT
@end example
Note that using this option disables the default mappings for this output file.
@item -map_channel [@var{input_file_id}.@var{stream_specifier}.@var{channel_id}|-1][:@var{output_file_id}.@var{stream_specifier}]
@@ -950,12 +871,13 @@ Dump each input packet to stderr.
When dumping packets, also dump the payload.
@item -re (@emph{input})
Read input at native frame rate. Mainly used to simulate a grab device,
or live input stream (e.g. when reading from a file). Should not be used
with actual grab devices or live input streams (where it can cause packet
loss).
By default @command{ffmpeg} attempts to read the input(s) as fast as possible.
This option will slow down the reading of the input(s) to the native frame rate
of the input(s). It is useful for real-time output (e.g. live streaming). If
your input(s) is coming from some other live streaming source (through HTTP or
UDP for example) the server might already be in real-time, thus the option will
likely not be required. On the other hand, this is meaningful if your input(s)
is a file you are trying to push in real-time.
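A typical live-streaming style invocation might look like this (the RTMP URL is purely a placeholder):
@example
ffmpeg -re -i input.mp4 -c copy -f flv rtmp://example.com/live/stream
@end example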
@item -loop_input
Loop over the input stream. Currently it works only for image
streams. This option is used for automatic FFserver testing.
@@ -1071,7 +993,7 @@ ffmpeg -i h264.mp4 -c:v copy -bsf:v h264_mp4toannexb -an out.h264
ffmpeg -i file.mov -an -vn -bsf:s mov2textsub -c:s copy -f rawvideo sub.txt
@end example
@item -tag[:@var{stream_specifier}] @var{codec_tag} (@emph{input/output,per-stream})
Force a tag/fourcc for matching streams.
@item -timecode @var{hh}:@var{mm}:@var{ss}SEP@var{ff}
@@ -1138,45 +1060,13 @@ This option is similar to @option{-filter_complex}, the only difference is that
its argument is the name of the file from which a complex filtergraph
description is to be read.
@item -accurate_seek (@emph{input})
This option enables or disables accurate seeking in input files with the
@option{-ss} option. It is enabled by default, so seeking is accurate when
transcoding. Use @option{-noaccurate_seek} to disable it, which may be useful
e.g. when copying some streams and transcoding the others.
@item -override_ffserver (@emph{global})
Overrides the input specifications from @command{ffserver}. Using this
option you can map any input stream to @command{ffserver} and control
many aspects of the encoding from @command{ffmpeg}. Without this
option @command{ffmpeg} will transmit to @command{ffserver} what is
requested by @command{ffserver}.
The option is intended for cases where features are needed that cannot be
specified to @command{ffserver} but can be to @command{ffmpeg}.
@item -discard (@emph{input})
Allows discarding specific streams or frames of streams at the demuxer.
Not all demuxers support this.
@table @option
@item none
Discard no frame.
@item default
Default, which discards no frames.
@item noref
Discard all non-reference frames.
@item bidir
Discard all bidirectional frames.
@item nokey
Discard all frames excepts keyframes.
@item all
Discard all frames.
@end table
@end table
@@ -1235,7 +1125,7 @@ then it will search for the file @file{libvpx-1080p.ffpreset}.
@itemize
@item
For streaming at very low bitrates, use a low frame rate
and a small GOP size. This is especially true for RealVideo where
the Linux player does not seem to be very fast, so it can miss
frames. An example is:
@@ -1314,14 +1204,14 @@ standard mixer.
Grab the X11 display with ffmpeg via
@example
ffmpeg -f x11grab -video_size cif -framerate 25 -i :0.0 /tmp/out.mpg
@end example
0.0 is display.screen number of your X11 server, same as
the DISPLAY environment variable.
@example
ffmpeg -f x11grab -video_size cif -framerate 25 -i :0.0+10,20 /tmp/out.mpg
@end example
0.0 is display.screen number of your X11 server, same as the DISPLAY environment
@@ -1458,11 +1348,11 @@ ffmpeg -f image2 -pattern_type glob -i 'foo-*.jpeg' -r 12 -s WxH foo.avi
You can put many streams of the same type in the output:
@example
ffmpeg -i test1.avi -i test2.avi -map 1:1 -map 1:0 -map 0:1 -map 0:0 -c copy -y test12.nut
ffmpeg -i test1.avi -i test2.avi -map 0:3 -map 0:2 -map 0:1 -map 0:0 -c copy test12.nut
@end example
The resulting output file @file{test12.nut} will contain the first four streams
from the input files in reverse order.
The resulting output file @file{test12.avi} will contain first four streams from
the input file in reverse order.
@item
To force CBR video output:


@@ -24,7 +24,7 @@ various FFmpeg APIs.
@chapter Options
@c man begin OPTIONS
@include fftools-common-opts.texi
@include avtools-common-opts.texi
@section Main options
@@ -84,9 +84,6 @@ output. In the filtergraph, the input is associated to the label
ffmpeg-filters manual for more information about the filtergraph
syntax.
You can specify this parameter multiple times and cycle through the specified
filtergraphs along with the show modes by pressing the key @key{w}.
@item -af @var{filtergraph}
@var{filtergraph} is a description of the filtergraph to apply to
the input audio.
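For instance (the file name is a placeholder), playing a file with its audio
attenuated through a filtergraph:
@example
# input.mkv is a placeholder name
ffplay -af "volume=0.5" input.mkv
@end example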
@@ -162,10 +159,6 @@ Force a specific video decoder.
@item -scodec @var{codec_name}
Force a specific subtitle decoder.
@item -autorotate
Automatically rotate the video according to presentation metadata. Set by
default, use -noautorotate to disable.
@end table
@section While playing
@@ -181,25 +174,16 @@ Toggle full screen.
Pause.
@item a
Cycle audio channel in the current program.
Cycle audio channel.
@item v
Cycle video channel.
@item t
Cycle subtitle channel in the current program.
@item c
Cycle program.
Cycle subtitle channel.
@item w
Cycle video filters or show modes.
@item s
Step to the next frame.
Pause if the stream is not already paused, step to the next video
frame, and pause.
Show audio waves.
@item left/right
Seek backward/forward 10 seconds.
@@ -208,8 +192,6 @@ Seek backward/forward 10 seconds.
Seek backward/forward 1 minute.
@item page down/page up
Seek to the previous/next chapter.
or if there are no chapters
Seek backward/forward 10 minutes.
@item mouse click
@@ -221,7 +203,6 @@ Seek to percentage in file corresponding to fraction of width.
@include config.texi
@ifset config-all
@set config-readonly
@ifset config-avutil
@include utils.texi
@end ifset


@@ -44,15 +44,14 @@ name (which may be shared by other sections), and an unique
name. See the output of @option{sections}.
Metadata tags stored in the container or in the streams are recognized
and printed in the corresponding "FORMAT", "STREAM" or "PROGRAM_STREAM"
section.
and printed in the corresponding "FORMAT" or "STREAM" section.
@c man end
@chapter Options
@c man begin OPTIONS
@include fftools-common-opts.texi
@include avtools-common-opts.texi
@section Main options
@@ -113,16 +112,12 @@ ffprobe -show_packets -select_streams v:1 INPUT
@end example
@item -show_data
Show payload data, as a hexadecimal and ASCII dump. Coupled with
Show payload data, as an hexadecimal and ASCII dump. Coupled with
@option{-show_packets}, it will dump the packets' data. Coupled with
@option{-show_streams}, it will dump the codec extradata.
The dump is printed as the "data" field. It may contain newlines.
@item -show_data_hash @var{algorithm}
Show a hash of payload data, for packets with @option{-show_packets} and for
codec extradata with @option{-show_streams}.
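For example (INPUT is a placeholder), to dump packet payloads, or only a hash
of them:
@example
ffprobe -show_packets -show_data INPUT
ffprobe -show_packets -show_data_hash MD5 INPUT
@end example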
@item -show_error
Show information about the error found when trying to probe the input.
@@ -184,7 +179,7 @@ format : stream=codec_type
To show all the tags in the stream and format sections:
@example
stream_tags : format_tags
format_tags : format_tags
@end example
To show only the @code{title} tag (if available) in the stream
@@ -201,11 +196,11 @@ The information for each single packet is printed within a dedicated
section with name "PACKET".
@item -show_frames
Show information about each frame and subtitle contained in the input
multimedia stream.
Show information about each frame contained in the input multimedia
stream.
The information for each single frame is printed within a dedicated
section with name "FRAME" or "SUBTITLE".
section with name "FRAME".
@item -show_streams
Show information about each media stream contained in the input
@@ -214,13 +209,6 @@ multimedia stream.
Each media stream information is printed within a dedicated section
with name "STREAM".
@item -show_programs
Show information about programs and their streams contained in the input
multimedia stream.
Each media stream information is printed within a dedicated section
with name "PROGRAM_STREAM".
@item -show_chapters
Show information about chapters stored in the format.
@@ -234,70 +222,6 @@ corresponding stream section.
Count the number of packets per stream and report it in the
corresponding stream section.
@item -read_intervals @var{read_intervals}
Read only the specified intervals. @var{read_intervals} must be a
sequence of interval specifications separated by ",".
@command{ffprobe} will seek to the interval starting point, and will
continue reading from that.
Each interval is specified by two optional parts, separated by "%".
The first part specifies the interval start position. It is
interpreted as an absolute position, or as a relative offset from the
current position if it is preceded by the "+" character. If this first
part is not specified, no seeking will be performed when reading this
interval.
The second part specifies the interval end position. It is interpreted
as an absolute position, or as a relative offset from the current
position if it is preceded by the "+" character. If the offset
specification starts with "#", it is interpreted as the number of
packets to read (not including the flushing packets) from the interval
start. If no second part is specified, the program will read until the
end of the input.
Note that seeking is not accurate, thus the actual interval start
point may be different from the specified position. Also, when an
interval duration is specified, the absolute end time will be computed
by adding the duration to the interval start point found by seeking
the file, rather than to the specified start value.
The formal syntax is given by:
@example
@var{INTERVAL} ::= [@var{START}|+@var{START_OFFSET}][%[@var{END}|+@var{END_OFFSET}]]
@var{INTERVALS} ::= @var{INTERVAL}[,@var{INTERVALS}]
@end example
A few examples follow.
@itemize
@item
Seek to time 10, read packets until 20 seconds after the found seek
point, then seek to position @code{01:30} (1 minute and thirty
seconds) and read packets until position @code{01:45}.
@example
10%+20,01:30%01:45
@end example
@item
Read only 42 packets after seeking to position @code{01:23}:
@example
01:23%+#42
@end example
@item
Read only the first 20 seconds from the start:
@example
%+20
@end example
@item
Read from the start until position @code{02:30}:
@example
%02:30
@end example
@end itemize
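Putting one of the specifications above into a full command (INPUT is a
placeholder), reading only 42 packets after seeking to position @code{01:23}:
@example
ffprobe -read_intervals "01:23%+#42" -show_packets INPUT
@end example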
@item -show_private_data, -private
Show private data, that is data depending on the format of the
particular shown element.
@@ -341,39 +265,6 @@ A writer may accept one or more arguments, which specify the options
to adopt. The options are specified as a list of @var{key}=@var{value}
pairs, separated by ":".
All writers support the following options:
@table @option
@item string_validation, sv
Set string validation mode.
The following values are accepted.
@table @samp
@item fail
The writer will fail immediately in case an invalid string (UTF-8)
sequence or code point is found in the input. This is especially
useful to validate input metadata.
@item ignore
Any validation error will be ignored. This will result in possibly
broken output, especially with the json or xml writer.
@item replace
The writer will substitute invalid UTF-8 sequences or code points with
the string specified with the @option{string_validation_replacement}.
@end table
Default value is @samp{replace}.
@item string_validation_replacement, svr
Set replacement string to use in case @option{string_validation} is
set to @samp{replace}.
If the option is not specified, the writer assumes the empty string,
that is, it removes the invalid sequences from the input strings.
@end table
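For example (INPUT is a placeholder), making the JSON writer fail on invalid
UTF-8 metadata could look like:
@example
ffprobe -print_format json=string_validation=fail -show_format -show_streams INPUT
@end example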
A description of the currently available writers follows.
@section default
@@ -388,8 +279,8 @@ keyN=valN
[/SECTION]
@end example
Metadata tags are printed as a line in the corresponding FORMAT, STREAM or
PROGRAM_STREAM section, and are prefixed by the string "TAG:".
Metadata tags are printed as a line in the corresponding FORMAT or
STREAM section, and are prefixed by the string "TAG:".
A description of the accepted options follows.
@@ -603,7 +494,6 @@ DV, GXF and AVI timecodes are available in format metadata
@include config.texi
@ifset config-all
@set config-readonly
@ifset config-avutil
@include utils.texi
@end ifset


@@ -8,15 +8,14 @@
<xsd:complexType name="ffprobeType">
<xsd:sequence>
<xsd:element name="program_version" type="ffprobe:programVersionType" minOccurs="0" maxOccurs="1" />
<xsd:element name="library_versions" type="ffprobe:libraryVersionsType" minOccurs="0" maxOccurs="1" />
<xsd:element name="packets" type="ffprobe:packetsType" minOccurs="0" maxOccurs="1" />
<xsd:element name="frames" type="ffprobe:framesType" minOccurs="0" maxOccurs="1" />
<xsd:element name="programs" type="ffprobe:programsType" minOccurs="0" maxOccurs="1" />
<xsd:element name="streams" type="ffprobe:streamsType" minOccurs="0" maxOccurs="1" />
<xsd:element name="chapters" type="ffprobe:chaptersType" minOccurs="0" maxOccurs="1" />
<xsd:element name="format" type="ffprobe:formatType" minOccurs="0" maxOccurs="1" />
<xsd:element name="error" type="ffprobe:errorType" minOccurs="0" maxOccurs="1" />
<xsd:element name="program_version" type="ffprobe:programVersionType" minOccurs="0" maxOccurs="1" />
<xsd:element name="library_versions" type="ffprobe:libraryVersionsType" minOccurs="0" maxOccurs="1" />
</xsd:sequence>
</xsd:complexType>
@@ -28,10 +27,7 @@
<xsd:complexType name="framesType">
<xsd:sequence>
<xsd:choice minOccurs="0" maxOccurs="unbounded">
<xsd:element name="frame" type="ffprobe:frameType" minOccurs="0" maxOccurs="unbounded"/>
<xsd:element name="subtitle" type="ffprobe:subtitleType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:choice>
<xsd:element name="frame" type="ffprobe:frameType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
@@ -50,15 +46,9 @@
<xsd:attribute name="pos" type="xsd:long" />
<xsd:attribute name="flags" type="xsd:string" use="required" />
<xsd:attribute name="data" type="xsd:string" />
<xsd:attribute name="data_hash" type="xsd:string" />
</xsd:complexType>
<xsd:complexType name="frameType">
<xsd:sequence>
<xsd:element name="tag" type="ffprobe:tagType" minOccurs="0" maxOccurs="unbounded"/>
<xsd:element name="side_data_list" type="ffprobe:frameSideDataListType" minOccurs="0" maxOccurs="1" />
</xsd:sequence>
<xsd:attribute name="media_type" type="xsd:string" use="required"/>
<xsd:attribute name="key_frame" type="xsd:int" use="required"/>
<xsd:attribute name="pts" type="xsd:long" />
@@ -67,8 +57,6 @@
<xsd:attribute name="pkt_pts_time" type="xsd:float"/>
<xsd:attribute name="pkt_dts" type="xsd:long" />
<xsd:attribute name="pkt_dts_time" type="xsd:float"/>
<xsd:attribute name="best_effort_timestamp" type="xsd:long" />
<xsd:attribute name="best_effort_timestamp_time" type="xsd:float" />
<xsd:attribute name="pkt_duration" type="xsd:long" />
<xsd:attribute name="pkt_duration_time" type="xsd:float"/>
<xsd:attribute name="pkt_pos" type="xsd:long" />
@@ -93,38 +81,12 @@
<xsd:attribute name="repeat_pict" type="xsd:int" />
</xsd:complexType>
<xsd:complexType name="frameSideDataListType">
<xsd:sequence>
<xsd:element name="side_data" type="ffprobe:frameSideDataType" minOccurs="1" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="frameSideDataType">
<xsd:attribute name="side_data_type" type="xsd:string"/>
<xsd:attribute name="side_data_size" type="xsd:int" />
</xsd:complexType>
<xsd:complexType name="subtitleType">
<xsd:attribute name="media_type" type="xsd:string" fixed="subtitle" use="required"/>
<xsd:attribute name="pts" type="xsd:long" />
<xsd:attribute name="pts_time" type="xsd:float"/>
<xsd:attribute name="format" type="xsd:int" />
<xsd:attribute name="start_display_time" type="xsd:int" />
<xsd:attribute name="end_display_time" type="xsd:int" />
<xsd:attribute name="num_rects" type="xsd:int" />
</xsd:complexType>
<xsd:complexType name="streamsType">
<xsd:sequence>
<xsd:element name="stream" type="ffprobe:streamType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="programsType">
<xsd:sequence>
<xsd:element name="program" type="ffprobe:programType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="streamDispositionType">
<xsd:attribute name="default" type="xsd:int" use="required" />
<xsd:attribute name="dub" type="xsd:int" use="required" />
@@ -141,8 +103,8 @@
<xsd:complexType name="streamType">
<xsd:sequence>
<xsd:element name="disposition" type="ffprobe:streamDispositionType" minOccurs="0" maxOccurs="1"/>
<xsd:element name="tag" type="ffprobe:tagType" minOccurs="0" maxOccurs="unbounded"/>
<xsd:element name="disposition" type="ffprobe:streamDispositionType" minOccurs="0" maxOccurs="1"/>
</xsd:sequence>
<xsd:attribute name="index" type="xsd:int" use="required"/>
@@ -154,7 +116,6 @@
<xsd:attribute name="codec_tag" type="xsd:string" use="required"/>
<xsd:attribute name="codec_tag_string" type="xsd:string" use="required"/>
<xsd:attribute name="extradata" type="xsd:string" />
<xsd:attribute name="extradata_hash" type="xsd:string" />
<!-- video attributes -->
<xsd:attribute name="width" type="xsd:int"/>
@@ -164,15 +125,12 @@
<xsd:attribute name="display_aspect_ratio" type="xsd:string"/>
<xsd:attribute name="pix_fmt" type="xsd:string"/>
<xsd:attribute name="level" type="xsd:int"/>
<xsd:attribute name="color_range" type="xsd:string"/>
<xsd:attribute name="color_space" type="xsd:string"/>
<xsd:attribute name="timecode" type="xsd:string"/>
<!-- audio attributes -->
<xsd:attribute name="sample_fmt" type="xsd:string"/>
<xsd:attribute name="sample_rate" type="xsd:int"/>
<xsd:attribute name="channels" type="xsd:int"/>
<xsd:attribute name="channel_layout" type="xsd:string"/>
<xsd:attribute name="bits_per_sample" type="xsd:int"/>
<xsd:attribute name="id" type="xsd:string"/>
@@ -184,30 +142,11 @@
<xsd:attribute name="duration_ts" type="xsd:long"/>
<xsd:attribute name="duration" type="xsd:float"/>
<xsd:attribute name="bit_rate" type="xsd:int"/>
<xsd:attribute name="max_bit_rate" type="xsd:int"/>
<xsd:attribute name="bits_per_raw_sample" type="xsd:int"/>
<xsd:attribute name="nb_frames" type="xsd:int"/>
<xsd:attribute name="nb_read_frames" type="xsd:int"/>
<xsd:attribute name="nb_read_packets" type="xsd:int"/>
</xsd:complexType>
<xsd:complexType name="programType">
<xsd:sequence>
<xsd:element name="tag" type="ffprobe:tagType" minOccurs="0" maxOccurs="unbounded"/>
<xsd:element name="streams" type="ffprobe:streamsType" minOccurs="0" maxOccurs="1"/>
</xsd:sequence>
<xsd:attribute name="program_id" type="xsd:int" use="required"/>
<xsd:attribute name="program_num" type="xsd:int" use="required"/>
<xsd:attribute name="nb_streams" type="xsd:int" use="required"/>
<xsd:attribute name="start_time" type="xsd:float"/>
<xsd:attribute name="start_pts" type="xsd:long"/>
<xsd:attribute name="end_time" type="xsd:float"/>
<xsd:attribute name="end_pts" type="xsd:long"/>
<xsd:attribute name="pmt_pid" type="xsd:int" use="required"/>
<xsd:attribute name="pcr_pid" type="xsd:int" use="required"/>
</xsd:complexType>
<xsd:complexType name="formatType">
<xsd:sequence>
<xsd:element name="tag" type="ffprobe:tagType" minOccurs="0" maxOccurs="unbounded"/>
@@ -215,14 +154,12 @@
<xsd:attribute name="filename" type="xsd:string" use="required"/>
<xsd:attribute name="nb_streams" type="xsd:int" use="required"/>
<xsd:attribute name="nb_programs" type="xsd:int" use="required"/>
<xsd:attribute name="format_name" type="xsd:string" use="required"/>
<xsd:attribute name="format_long_name" type="xsd:string"/>
<xsd:attribute name="start_time" type="xsd:float"/>
<xsd:attribute name="duration" type="xsd:float"/>
<xsd:attribute name="size" type="xsd:long"/>
<xsd:attribute name="bit_rate" type="xsd:long"/>
<xsd:attribute name="probe_score" type="xsd:int"/>
</xsd:complexType>
<xsd:complexType name="tagType">
@@ -240,7 +177,8 @@
<xsd:attribute name="copyright" type="xsd:string" use="required"/>
<xsd:attribute name="build_date" type="xsd:string" use="required"/>
<xsd:attribute name="build_time" type="xsd:string" use="required"/>
<xsd:attribute name="compiler_ident" type="xsd:string" use="required"/>
<xsd:attribute name="compiler_type" type="xsd:string" use="required"/>
<xsd:attribute name="compiler_version" type="xsd:string" use="required"/>
<xsd:attribute name="configuration" type="xsd:string" use="required"/>
</xsd:complexType>


@@ -1,11 +1,11 @@
# Port on which the server is listening. You must select a different
# port from your standard HTTP web server if it is running on the same
# computer.
HTTPPort 8090
Port 8090
# Address on which the server is bound. Only useful if you have
# several network interfaces.
HTTPBindAddress 0.0.0.0
BindAddress 0.0.0.0
# Number of simultaneous HTTP connections that can be handled. It has
# to be defined *before* the MaxClients parameter, since it defines the
@@ -235,7 +235,7 @@ StartSendOnKey
#<Stream test.ogg>
#Feed feed1.ffm
#Metadata title "Stream title"
#Title "Stream title"
#AudioBitRate 64
#AudioChannels 2
#AudioSampleRate 44100
@@ -280,10 +280,10 @@ StartSendOnKey
#<Stream file.asf>
#File "/usr/local/httpd/htdocs/test.asf"
#NoAudio
#Metadata author "Me"
#Metadata copyright "Super MegaCorp"
#Metadata title "Test stream from disk"
#Metadata comment "Test comment"
#Author "Me"
#Copyright "Super MegaCorp"
#Title "Test stream from disk"
#Comment "Test comment"
#</Stream>


@@ -16,14 +16,11 @@ ffserver [@var{options}]
@chapter Description
@c man begin DESCRIPTION
@command{ffserver} is a streaming server for both audio and video.
It supports several live feeds, streaming from files and time shifting
on live feeds. You can seek to positions in the past on each live
feed, provided you specify a big enough feed storage.
@command{ffserver} is configured through a configuration file, which
is read at startup. If not explicitly specified, it will read from
@file{/etc/ffserver.conf}.
@command{ffserver} is a streaming server for both audio and video. It
supports several live feeds, streaming from files and time shifting on
live feeds (you can seek to positions in the past on each live feed,
provided you specify a big enough feed storage in
@file{ffserver.conf}).
@command{ffserver} receives prerecorded files or FFM streams from some
@command{ffmpeg} instance as input, then streams them over
@@ -42,126 +39,10 @@ For each feed you can have different output streams in various
formats, each one specified by a @code{<Stream>} section in the
configuration file.
@chapter Detailed description
@command{ffserver} works by forwarding streams encoded by
@command{ffmpeg}, or pre-recorded streams which are read from disk.
More precisely, @command{ffserver} acts as an HTTP server, accepting POST
requests from @command{ffmpeg} to acquire the stream to publish, and
serving the stream media content to HTTP and RTSP clients.
A feed is an @ref{FFM} stream created by @command{ffmpeg}, and sent to
a port where @command{ffserver} is listening.
Each feed is identified by a unique name, corresponding to the name
of the resource published on @command{ffserver}, and is configured by
a dedicated @code{Feed} section in the configuration file.
The feed publish URL is given by:
@example
http://@var{ffserver_ip_address}:@var{http_port}/@var{feed_name}
@end example
where @var{ffserver_ip_address} is the IP address of the machine where
@command{ffserver} is installed, @var{http_port} is the port number of
the HTTP server (configured through the @option{HTTPPort} option), and
@var{feed_name} is the name of the corresponding feed defined in the
configuration file.
Each feed is associated with a file stored on disk. This stored file is
used to send pre-recorded data to a player as quickly as possible when
new content is added to the stream in real time.
A "live-stream" or "stream" is a resource published by
@command{ffserver}, and made accessible through the HTTP protocol to
clients.
A stream can be connected to a feed, or to a file. In the first case,
the published stream is forwarded from the corresponding feed generated
by a running instance of @command{ffmpeg}; in the second case the stream
is read from a pre-recorded file.
Each stream is identified by a unique name, corresponding to the name
of the resource served by @command{ffserver}, and is configured by
a dedicated @code{Stream} section in the configuration file.
The stream access HTTP URL is given by:
@example
http://@var{ffserver_ip_address}:@var{http_port}/@var{stream_name}[@var{options}]
@end example
The stream access RTSP URL is given by:
@example
http://@var{ffserver_ip_address}:@var{rtsp_port}/@var{stream_name}[@var{options}]
@end example
@var{stream_name} is the name of the corresponding stream defined in
the configuration file. @var{options} is a list of options specified
after the URL which affects how the stream is served by
@command{ffserver}. @var{http_port} and @var{rtsp_port} are the HTTP
and RTSP ports configured with the options @var{HTTPPort} and
@var{RTSPPort} respectively.
In case the stream is associated to a feed, the encoding parameters
must be configured in the stream configuration. They are sent to
@command{ffmpeg} when setting up the encoding. This allows
@command{ffserver} to define the encoding parameters used by
the @command{ffmpeg} encoders.
The @command{ffmpeg} @option{override_ffserver} commandline option
allows one to override the encoding parameters set by the server.
Multiple streams can be connected to the same feed.
For example, you can have a situation described by the following
graph:
@example
               _________       __________
              |         |     |          |
ffmpeg 1 -----| feed 1  |-----| stream 1 |
    \         |_________|\    |__________|
     \                    \
      \                    \   __________
       \                    \ |          |
        \                    \| stream 2 |
         \                    |__________|
          \
           \   _________       __________
            \ |         |     |          |
             \| feed 2  |-----| stream 3 |
              |_________|     |__________|

               _________       __________
              |         |     |          |
ffmpeg 2 -----| feed 3  |-----| stream 4 |
              |_________|     |__________|

               _________       __________
              |         |     |          |
              | file 1  |-----| stream 5 |
              |_________|     |__________|
@end example
@anchor{FFM}
@section FFM, FFM2 formats
FFM and FFM2 are formats used by ffserver. They allow storing a wide variety of
video and audio streams and encoding options, and can store a moving time segment
of an infinite movie or a whole movie.
FFM is version specific, and there is limited compatibility of FFM files
generated by one version of ffmpeg/ffserver and another version of
ffmpeg/ffserver. It may work but it is not guaranteed to work.
FFM2 is extensible while maintaining compatibility and should work between
differing versions of tools. FFM2 is the default.
@section Status stream
@command{ffserver} supports an HTTP interface which exposes the
current status of the server.
ffserver supports an HTTP interface which exposes the current status
of the server.
Simply point your browser to the address of the special status stream
specified in the configuration file.
@@ -180,8 +61,27 @@ ACL allow 192.168.0.0 192.168.255.255
then the server will post a page with the status information when
the special stream @file{status.html} is requested.
@section What can this do?
When properly configured and running, you can capture video and audio in real
time from a suitable capture card, and stream it out over the Internet to
either Windows Media Player or RealAudio player (with some restrictions).
It can also stream from files, though that is currently broken. Very often, a
web server can be used to serve up the files just as well.
It can stream prerecorded video from .ffm files, though it is somewhat tricky
to make it work correctly.
@section How do I make it work?
First, build the kit. It *really* helps to have installed LAME first. Then when
you run the ffserver ./configure, make sure that you have the
@code{--enable-libmp3lame} flag turned on.
LAME is important as it allows for streaming audio to Windows Media Player.
Don't ask why the other audio types do not work.
As a simple test, just run the following two command lines where INPUTFILE
is some file which you can decode with ffmpeg:
@@ -203,9 +103,40 @@ WARNING: trying to stream test1.mpg doesn't work with WMP as it tries to
transfer the entire file before starting to play.
The same is true of AVI files.
You should edit the @file{ffserver.conf} file to suit your needs (in
terms of frame rates etc). Then install @command{ffserver} and
@command{ffmpeg}, write a script to start them up, and off you go.
@section What happens next?
You should edit the ffserver.conf file to suit your needs (in terms of
frame rates etc). Then install ffserver and ffmpeg, write a script to start
them up, and off you go.
@section Troubleshooting
@subsection I don't hear any audio, but video is fine.
Maybe you didn't install LAME, or got your ./configure statement wrong. Check
the ffmpeg output to see if a line referring to MP3 is present. If not, then
your configuration was incorrect. If it is, then maybe your wiring is not
set up correctly. Maybe the sound card is not getting data from the right
input source. Maybe you have a really awful audio interface (like I do)
that only captures in stereo and also requires that one channel be flipped.
If you are one of these people, then export 'AUDIO_FLIP_LEFT=1' before
starting ffmpeg.
@subsection The audio and video lose sync after a while.
Yes, they do.
@subsection After a long while, the video update rate goes way down in WMP.
Yes, it does. Who knows why?
@subsection WMP 6.4 behaves differently to WMP 7.
Yes, it does. Any thoughts on this would be gratefully received. These
differences extend to embedding WMP into a web page. [There are two
object IDs that you can use: The old one, which does not play well, and
the new one, which does (both tested on the same system). However,
I suspect that the new one is not available unless you have installed WMP 7].
@section What else can it do?
@@ -246,6 +177,9 @@ specify a time. In addition, ffserver will skip frames until a key_frame
is found. This further reduces the startup delay by not transferring data
that will be discarded.
* You may want to adjust the MaxBandwidth in the ffserver.conf to limit
the amount of bandwidth consumed by live streams.
@section Why does the ?buffer / Preroll stop working after a time?
It turns out that (on my machine at least) the number of frames successfully
@@ -279,558 +213,37 @@ You use this by adding the ?date= to the end of the URL for the stream.
For example: @samp{http://localhost:8080/test.asf?date=2002-07-26T23:05:00}.
@c man end
@section What is FFM, FFM2
FFM and FFM2 are formats used by ffserver. They allow storing a wide variety of
video and audio streams and encoding options, and can store a moving time segment
of an infinite movie or a whole movie.
FFM is version specific, and there is limited compatibility of FFM files
generated by one version of ffmpeg/ffserver and another version of
ffmpeg/ffserver. It may work but it is not guaranteed to work.
FFM2 is extensible while maintaining compatibility and should work between
differing versions of tools. FFM2 is the default.
@chapter Options
@c man begin OPTIONS
@include fftools-common-opts.texi
@include avtools-common-opts.texi
@section Main options
@table @option
@item -f @var{configfile}
Read configuration file @file{configfile}. If not specified it will
read by default from @file{/etc/ffserver.conf}.
Use @file{configfile} instead of @file{/etc/ffserver.conf}.
@item -n
Enable no-launch mode. This option disables all the @code{Launch}
directives within the various @code{<Feed>} sections. Since
@command{ffserver} will not launch any @command{ffmpeg} instances, you
will have to launch them manually.
Enable no-launch mode. This option disables all the Launch directives
within the various <Stream> sections. Since ffserver will not launch
any ffmpeg instances, you will have to launch them manually.
@item -d
Enable debug mode. This option increases log verbosity, and directs
log messages to stdout. When specified, the @option{CustomLog} option
is ignored.
Enable debug mode. This option increases log verbosity, directs log
messages to stdout.
@end table
@chapter Configuration file syntax
@command{ffserver} reads a configuration file containing global
options and settings for each stream and feed.
The configuration file consists of global options and dedicated
sections, which must be introduced by "<@var{SECTION_NAME}
@var{ARGS}>" on a separate line and must be terminated by a line in
the form "</@var{SECTION_NAME}>". @var{ARGS} is optional.
Currently the following sections are recognized: @samp{Feed},
@samp{Stream}, @samp{Redirect}.
A line starting with @code{#} is ignored and treated as a comment.
Names of options and sections are case-insensitive.
@section ACL syntax
An ACL (Access Control List) specifies the addresses which are allowed
to access a given stream, or to write to a given feed.
It accepts the following forms:
@itemize
@item
Allow/deny access to @var{address}.
@example
ACL ALLOW <address>
ACL DENY <address>
@end example
@item
Allow/deny access to ranges of addresses from @var{first_address} to
@var{last_address}.
@example
ACL ALLOW <first_address> <last_address>
ACL DENY <first_address> <last_address>
@end example
@end itemize
You can repeat the ACL allow/deny as often as you like. It is on a per
stream basis. The first match defines the action. If there are no matches,
then the default is the inverse of the last ACL statement.
Thus 'ACL allow localhost' only allows access from localhost.
'ACL deny 1.0.0.0 1.255.255.255' would deny the whole of network 1 and
allow everybody else.
@section Global options
@table @option
@item HTTPPort @var{port_number}
@item Port @var{port_number}
@item RTSPPort @var{port_number}
@var{HTTPPort} sets the HTTP server listening TCP port number, while
@var{RTSPPort} sets the RTSP server listening TCP port number.
@var{Port} is the equivalent of @var{HTTPPort} and is deprecated.
You must select a different port from your standard HTTP web server if
it is running on the same computer.
If not specified, no corresponding server will be created.
@item HTTPBindAddress @var{ip_address}
@item BindAddress @var{ip_address}
@item RTSPBindAddress @var{ip_address}
Set address on which the HTTP/RTSP server is bound. Only useful if you
have several network interfaces.
@var{BindAddress} is the equivalent of @var{HTTPBindAddress} and is
deprecated.
@item MaxHTTPConnections @var{n}
Set number of simultaneous HTTP connections that can be handled. It
has to be defined @emph{before} the @option{MaxClients} parameter,
since it defines the @option{MaxClients} maximum limit.
Default value is 2000.
@item MaxClients @var{n}
Set number of simultaneous requests that can be handled. Since
@command{ffserver} is very fast, it is more likely that you will want
to leave this high and use @option{MaxBandwidth}.
Default value is 5.
@item MaxBandwidth @var{kbps}
Set the maximum amount of kbit/sec that you are prepared to consume
when streaming to clients.
Default value is 1000.
@item CustomLog @var{filename}
Set access log file (uses standard Apache log file format). '-' is the
standard output.
If not specified @command{ffserver} will produce no log.
In case the commandline option @option{-d} is specified this option is
ignored, and the log is written to standard output.
@item NoDaemon
Set no-daemon mode. This option is currently ignored since now
@command{ffserver} will always work in no-daemon mode, and is
deprecated.
@end table
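A minimal global section using these options might look like this (the values
are purely illustrative):
@example
# illustrative values only
HTTPPort 8090
HTTPBindAddress 0.0.0.0
MaxHTTPConnections 2000
MaxClients 1000
MaxBandwidth 10000
CustomLog -
@end example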
@section Feed section
A Feed section defines a feed provided to @command{ffserver}.
Each live feed contains one video and/or audio sequence coming from an
@command{ffmpeg} encoder or another @command{ffserver}. This sequence
may be encoded simultaneously with several codecs at several
resolutions.
A feed instance specification is introduced by a line in the form:
@example
<Feed FEED_FILENAME>
@end example
where @var{FEED_FILENAME} specifies the unique name of the FFM stream.
The following options are recognized within a Feed section.
@table @option
@item File @var{filename}
@item ReadOnlyFile @var{filename}
Set the path where the feed file is stored on disk.
If not specified, @file{/tmp/FEED.ffm} is assumed, where @var{FEED} is
the feed name.
If @option{ReadOnlyFile} is used the file is marked as read-only and
it will not be deleted or updated.
@item Truncate
Truncate the feed file, rather than appending to it. By default
@command{ffserver} will append data to the file, until the maximum
file size value is reached (see @option{FileMaxSize} option).
@item FileMaxSize @var{size}
Set maximum size of the feed file in bytes. 0 means unlimited. The
postfixes @code{K} (2^10), @code{M} (2^20), and @code{G} (2^30) are
recognized.
Default value is 5M.
@item Launch @var{args}
Launch an @command{ffmpeg} command when creating @command{ffserver}.
@var{args} must be a sequence of arguments to be provided to an
@command{ffmpeg} instance. The first provided argument is ignored and
replaced by a path with the same dirname as the @command{ffserver}
instance, followed by the remaining arguments and terminated with a
path corresponding to the feed.
When the launched process exits, @command{ffserver} will launch
another program instance.
In case you need a more complex @command{ffmpeg} configuration,
e.g. if you need to generate multiple FFM feeds with a single
@command{ffmpeg} instance, you should launch @command{ffmpeg} by hand.
This option is ignored in case the commandline option @option{-n} is
specified.
@item ACL @var{spec}
Specify the list of IP addresses which are allowed or denied to write
to the feed. Multiple ACL options can be specified.
@end table
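A minimal Feed section along these lines (feed name, path and address are
placeholders) could be:
@example
# placeholder feed name, path and address
<Feed feed1.ffm>
File /tmp/feed1.ffm
FileMaxSize 1G
ACL allow 127.0.0.1
</Feed>
@end example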
@section Stream section
A Stream section defines a stream provided by @command{ffserver}, and
identified by a single name.
The stream is sent when answering a request containing the stream
name.
A stream section must be introduced by the line:
@example
<Stream STREAM_NAME>
@end example
where @var{STREAM_NAME} specifies the unique name of the stream.
The following options are recognized within a Stream section.
Encoding options are marked with the @emph{encoding} tag; they are used
to set the encoding parameters and are mapped to libavcodec encoding
options. Not all encoding options are supported; in particular, it is not
possible to set encoder private options. In order
to override the encoding options specified by @command{ffserver}, you
can use the @command{ffmpeg} @option{override_ffserver} commandline
option.
Only one of the @option{Feed} and @option{File} options should be set.
@table @option
@item Feed @var{feed_name}
Set the input feed. @var{feed_name} must correspond to an existing
feed defined in a @code{Feed} section.
When this option is set, encoding options are used to setup the
encoding operated by the remote @command{ffmpeg} process.
@item File @var{filename}
Set the filename of the pre-recorded input file to stream.
When this option is set, encoding options are ignored and the input
file content is re-streamed as is.
@item Format @var{format_name}
Set the format of the output stream.
Must be the name of a format recognized by FFmpeg. If set to
@samp{status}, it is treated as a status stream.
@item InputFormat @var{format_name}
Set input format. If not specified, it is automatically guessed.
@item Preroll @var{n}
Set this to the number of seconds backwards in time to start. Note that
most players will buffer 5-10 seconds of video, and also you need to allow
for a keyframe to appear in the data stream.
Default value is 0.
@item StartSendOnKey
Do not send stream until it gets the first key frame. By default
@command{ffserver} will send data immediately.
@item MaxTime @var{n}
Set the number of seconds to run. This value sets the maximum duration
of the stream a client will be able to receive.
A value of 0 means that no limit is set on the stream duration.
@item ACL @var{spec}
Set ACL for the stream.
@item DynamicACL @var{spec}
@item RTSPOption @var{option}
@item MulticastAddress @var{address}
@item MulticastPort @var{port}
@item MulticastTTL @var{integer}
@item NoLoop
@item FaviconURL @var{url}
Set favicon (favourite icon) for the server status page. It is ignored
for regular streams.
@item Author @var{value}
@item Comment @var{value}
@item Copyright @var{value}
@item Title @var{value}
Set metadata corresponding to the option. All these options are
deprecated in favor of @option{Metadata}.
@item Metadata @var{key} @var{value}
Set metadata value on the output stream.
@item NoAudio
@item NoVideo
Suppress audio/video.
@item AudioCodec @var{codec_name} (@emph{encoding,audio})
Set audio codec.
@item AudioBitRate @var{rate} (@emph{encoding,audio})
Set bitrate for the audio stream in kbits per second.
@item AudioChannels @var{n} (@emph{encoding,audio})
Set number of audio channels.
@item AudioSampleRate @var{n} (@emph{encoding,audio})
Set sampling frequency for audio. When using low bitrates, you should
lower this frequency to 22050 or 11025. The supported frequencies
depend on the selected audio codec.
@item AVOptionAudio @var{option} @var{value} (@emph{encoding,audio})
Set generic option for audio stream.
@item AVPresetAudio @var{preset} (@emph{encoding,audio})
Set preset for audio stream.
@item VideoCodec @var{codec_name} (@emph{encoding,video})
Set video codec.
@item VideoBitRate @var{n} (@emph{encoding,video})
Set bitrate for the video stream in kbits per second.
@item VideoBitRateRange @var{range} (@emph{encoding,video})
Set video bitrate range.
A range must be specified in the form @var{minrate}-@var{maxrate}, and
specifies the @option{minrate} and @option{maxrate} encoding options
expressed in kbits per second.
@item VideoBitRateRangeTolerance @var{n} (@emph{encoding,video})
Set video bitrate tolerance in kbits per second.
@item PixelFormat @var{pixel_format} (@emph{encoding,video})
Set video pixel format.
@item Debug @var{integer} (@emph{encoding,video})
Set video @option{debug} encoding option.
@item Strict @var{integer} (@emph{encoding,video})
Set video @option{strict} encoding option.
@item VideoBufferSize @var{n} (@emph{encoding,video})
Set ratecontrol buffer size, expressed in KB.
@item VideoFrameRate @var{n} (@emph{encoding,video})
Set number of video frames per second.
@item VideoSize (@emph{encoding,video})
Set size of the video frame, must be an abbreviation or in the form
@var{W}x@var{H}. See @ref{video size syntax,,the Video size section
in the ffmpeg-utils(1) manual,ffmpeg-utils}.
Default value is @code{160x128}.
@item VideoIntraOnly (@emph{encoding,video})
Transmit only intra frames (useful for low bitrates, but kills frame rate).
@item VideoGopSize @var{n} (@emph{encoding,video})
If non-intra only, an intra frame is transmitted every VideoGopSize
frames. Video synchronization can only begin at an intra frame.
@item VideoTag @var{tag} (@emph{encoding,video})
Set video tag.
@item VideoHighQuality (@emph{encoding,video})
@item Video4MotionVector (@emph{encoding,video})
@item BitExact (@emph{encoding,video})
Set bitexact encoding flag.
@item IdctSimple (@emph{encoding,video})
Set simple IDCT algorithm.
@item Qscale @var{n} (@emph{encoding,video})
Enable constant quality encoding, and set video qscale (quantization
scale) value, expressed in @var{n} QP units.
@item VideoQMin @var{n} (@emph{encoding,video})
@item VideoQMax @var{n} (@emph{encoding,video})
Set video qmin/qmax.
@item VideoQDiff @var{integer} (@emph{encoding,video})
Set video @option{qdiff} encoding option.
@item LumiMask @var{float} (@emph{encoding,video})
@item DarkMask @var{float} (@emph{encoding,video})
Set @option{lumi_mask}/@option{dark_mask} encoding options.
@item AVOptionVideo @var{option} @var{value} (@emph{encoding,video})
Set generic option for video stream.
@item AVPresetVideo @var{preset} (@emph{encoding,video})
Set preset for video stream.
@var{preset} must be the path of a preset file.
@end table
@subsection Server status stream
A server status stream is a special stream which is used to show
statistics about the @command{ffserver} operations.
It must be specified setting the option @option{Format} to
@samp{status}.
@section Redirect section
A redirect section specifies that a requested URL should be redirected
to another page.
A redirect section must be introduced by the line:
@example
<Redirect NAME>
@end example
where @var{NAME} is the name of the page which should be redirected.
It only accepts the option @option{URL}, which specifies the redirection
URL.
@chapter Stream examples
@itemize
@item
Multipart JPEG
@example
<Stream test.mjpg>
Feed feed1.ffm
Format mpjpeg
VideoFrameRate 2
VideoIntraOnly
NoAudio
Strict -1
</Stream>
@end example
@item
Single JPEG
@example
<Stream test.jpg>
Feed feed1.ffm
Format jpeg
VideoFrameRate 2
VideoIntraOnly
VideoSize 352x240
NoAudio
Strict -1
</Stream>
@end example
@item
Flash
@example
<Stream test.swf>
Feed feed1.ffm
Format swf
VideoFrameRate 2
VideoIntraOnly
NoAudio
</Stream>
@end example
@item
ASF compatible
@example
<Stream test.asf>
Feed feed1.ffm
Format asf
VideoFrameRate 15
VideoSize 352x240
VideoBitRate 256
VideoBufferSize 40
VideoGopSize 30
AudioBitRate 64
StartSendOnKey
</Stream>
@end example
@item
MP3 audio
@example
<Stream test.mp3>
Feed feed1.ffm
Format mp2
AudioCodec mp3
AudioBitRate 64
AudioChannels 1
AudioSampleRate 44100
NoVideo
</Stream>
@end example
@item
Ogg Vorbis audio
@example
<Stream test.ogg>
Feed feed1.ffm
Metadata title "Stream title"
AudioBitRate 64
AudioChannels 2
AudioSampleRate 44100
NoVideo
</Stream>
@end example
@item
Real with audio only at 32 kbits
@example
<Stream test.ra>
Feed feed1.ffm
Format rm
AudioBitRate 32
NoVideo
</Stream>
@end example
@item
Real with audio and video at 64 kbits
@example
<Stream test.rm>
Feed feed1.ffm
Format rm
AudioBitRate 32
VideoBitRate 128
VideoFrameRate 25
VideoGopSize 25
</Stream>
@end example
@item
For stream coming from a file: you only need to set the input filename
and optionally a new format.
@example
<Stream file.rm>
File "/usr/local/httpd/htdocs/tlive.rm"
NoAudio
</Stream>
@end example
@example
<Stream file.asf>
File "/usr/local/httpd/htdocs/test.asf"
NoAudio
Metadata author "Me"
Metadata copyright "Super MegaCorp"
Metadata title "Test stream from disk"
Metadata comment "Test comment"
</Stream>
@end example
@end itemize
@c man end
@include config.texi


@@ -166,7 +166,7 @@ Buffer references ownership and permissions
WRITE permission.
* Filters that read their input to produce a new frame on output (like
scale) need the READ permission on input and must request a buffer
scale) need the READ permission on input and and must request a buffer
with the WRITE permission.
* Filters that intend to keep a reference after the filtering process

File diff suppressed because it is too large


@@ -58,8 +58,7 @@ Reduce the latency introduced by optional buffering
@end table
@item seek2any @var{integer} (@emph{input})
Allow seeking to non-keyframes on demuxer level when supported if set to 1.
Default is 0.
Forces seeking to enable seek to any mode if set to 1. Default is 0.
@item analyzeduration @var{integer} (@emph{input})
Specify how many microseconds are analyzed to probe the input. A
@@ -125,29 +124,20 @@ Consider things that a sane encoder should not do as an error.
Use wallclock as timestamps.
@item avoid_negative_ts @var{integer} (@emph{output})
Possible values:
@table @samp
@item make_non_negative
Shift timestamps to make them non-negative.
Also note that this affects only leading negative timestamps, and not
non-monotonic negative timestamps.
@item make_zero
Shift timestamps so that the first timestamp is 0.
@item auto (default)
Enables shifting when required by the target format.
@item disabled
Disables shifting of timestamp.
@end table
Shift timestamps to make them non-negative. A value of 1 enables shifting,
a value of 0 disables it, the default value of -1 enables shifting
when required by the target format.
When shifting is enabled, all output timestamps are shifted by the
same amount. Audio, video, and subtitles desynching and relative
timestamp differences are preserved compared to how they would have
been without shifting.
Also note that this affects only leading negative timestamps, and not
non-monotonic negative timestamps.
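For example (file names are placeholders), shifting timestamps so that the
output starts at zero while stream copying:
@example
# in.flv/out.mp4 are placeholder names
ffmpeg -i in.flv -c copy -avoid_negative_ts make_zero out.mp4
@end example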
@item skip_initial_bytes @var{integer} (@emph{input})
Set the number of bytes to skip before reading the header and frames.
Default is 0.
Set number initial bytes to skip. Default is 0.
@item correct_ts_overflow @var{integer} (@emph{input})
Correct single timestamp overflows if set to 1. Default is 1.
@@ -156,57 +146,10 @@ Correct single timestamp overflows if set to 1. Default is 1.
Flush the underlying I/O stream after each packet. Default 1 enables it, and
has the effect of reducing the latency; 0 disables it and may slightly
increase performance in some cases.
@item output_ts_offset @var{offset} (@emph{output})
Set the output time offset.
@var{offset} must be a time duration specification,
see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
The offset is added by the muxer to the output timestamps.
Specifying a positive offset means that the corresponding streams are
delayed by the time duration specified in @var{offset}. Default value
is @code{0} (meaning that no offset is applied).
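For example (file names are placeholders), delaying all output timestamps by
10 seconds while stream copying:
@example
# in.ts/out.ts are placeholder names
ffmpeg -i in.ts -c copy -output_ts_offset 10 out.ts
@end example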
@end table
@c man end FORMAT OPTIONS
@anchor{Format stream specifiers}
@section Format stream specifiers
Format stream specifiers allow selection of one or more streams that
match specific properties.
Possible forms of stream specifiers are:
@table @option
@item @var{stream_index}
Matches the stream with this index.
@item @var{stream_type}[:@var{stream_index}]
@var{stream_type} is one of following: 'v' for video, 'a' for audio,
's' for subtitle, 'd' for data, and 't' for attachments. If
@var{stream_index} is given, then it matches the stream number
@var{stream_index} of this type. Otherwise, it matches all streams of
this type.
@item p:@var{program_id}[:@var{stream_index}]
If @var{stream_index} is given, then it matches the stream with number
@var{stream_index} in the program with the id
@var{program_id}. Otherwise, it matches all streams in the program.
@item #@var{stream_id}
Matches the stream by a format-specific ID.
@end table
The exact semantics of stream specifiers is defined by the
@code{avformat_match_stream_specifier()} function declared in the
@file{libavformat/avformat.h} header.
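For instance (INPUT is a placeholder), @command{ffprobe} accepts these
specifiers in its @option{-select_streams} option, e.g. to select the second
audio stream:
@example
ffprobe -show_streams -select_streams a:1 INPUT
@end example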
@ifclear config-writeonly
@include demuxers.texi
@end ifclear
@ifclear config-readonly
@include muxers.texi
@end ifclear
@include metadata.texi


@@ -94,7 +94,7 @@ Then pass @code{--enable-libtwolame} to configure to enable it.
@section libvpx
FFmpeg can make use of the libvpx library for VP8/VP9 encoding.
FFmpeg can make use of the libvpx library for VP8 encoding.
Go to @url{http://www.webmproject.org/} and follow the instructions for
installing the library. Then pass @code{--enable-libvpx} to configure to
@@ -122,20 +122,6 @@ x264 is under the GNU Public License Version 2 or later
details), you must upgrade FFmpeg's license to GPL in order to use it.
@end float
@section x265
FFmpeg can make use of the x265 library for HEVC encoding.
Go to @url{http://x265.org/developers.html} and follow the instructions
for installing the library. Then pass @code{--enable-libx265} to configure
to enable it.
@float NOTE
x265 is under the GNU Public License Version 2 or later
(see @url{http://www.gnu.org/licenses/old-licenses/gpl-2.0.html} for
details), you must upgrade FFmpeg's license to GPL in order to use it.
@end float
@section libilbc
iLBC is a narrowband speech codec that has been made freely available
@@ -147,41 +133,6 @@ Go to @url{https://github.com/dekkers/libilbc} and follow the instructions for
installing the library. Then pass @code{--enable-libilbc} to configure to
enable it.
@section libzvbi
libzvbi is a VBI decoding library which can be used by FFmpeg to decode DVB
teletext pages and DVB teletext subtitles.
Go to @url{http://sourceforge.net/projects/zapping/} and follow the instructions for
installing the library. Then pass @code{--enable-libzvbi} to configure to
enable it.
@float NOTE
libzvbi is licensed under the GNU General Public License Version 2 or later
(see @url{http://www.gnu.org/licenses/old-licenses/gpl-2.0.html} for details),
you must upgrade FFmpeg's license to GPL in order to use it.
@end float
@section AviSynth
FFmpeg can read AviSynth scripts as input. To enable support, pass
@code{--enable-avisynth} to configure. The correct headers are
included in compat/avisynth/, which allows the user to enable support
without needing to search for these headers themselves.
For Windows, supported AviSynth variants are
@url{http://avisynth.nl, AviSynth 2.5 or 2.6} for 32-bit builds and
@url{http://avs-plus.net, AviSynth+ 0.1} for 32-bit and 64-bit builds.
For Linux and OS X, the supported AviSynth variant is
@url{https://github.com/avxsynth/avxsynth, AvxSynth}.
@float NOTE
AviSynth and AvxSynth are loaded dynamically. Distributors can build FFmpeg
with @code{--enable-avisynth}, and the binaries will work regardless of the
end user having AviSynth or AvxSynth installed - they'll only need to be
installed to use AviSynth scripts (obviously).
@end float
@chapter Supported File Formats, Codecs or Features
@@ -205,7 +156,7 @@ library:
@item American Laser Games MM @tab @tab X
@tab Multimedia format used in games like Mad Dog McCree.
@item 3GPP AMR @tab X @tab X
@item Amazing Studio Packed Animation File @tab @tab X
@item Amazing Studio Packed Animation File @tab @tab X
@tab Multimedia format used in game Heart Of Darkness.
@item Apple HTTP Live Streaming @tab @tab X
@item Artworx Data Format @tab @tab X
@@ -217,7 +168,7 @@ library:
@item AST @tab X @tab X
@tab Audio format used on the Nintendo Wii.
@item AVI @tab X @tab X
@item AviSynth @tab @tab X
@item AVISynth @tab @tab X
@item AVR @tab @tab X
@tab Audio format used on Mac.
@item AVS @tab @tab X
@@ -245,7 +196,6 @@ library:
@tab Multimedia format used by Delphine Software games.
@item CD+G @tab @tab X
@tab Video format used by CD+G karaoke disks
@item Phantom Cine @tab @tab X
@item Commodore CDXL @tab @tab X
@tab Amiga CD video format
@item Core Audio Format @tab X @tab X
@@ -259,7 +209,6 @@ library:
@item Deluxe Paint Animation @tab @tab X
@item DFA @tab @tab X
@tab This format is used in Chronomaster game
@item DSD Stream File (DSF) @tab @tab X
@item DV video @tab X @tab X
@item DXA @tab @tab X
@tab This format is used in the non-Windows version of the Feeble Files
@@ -286,8 +235,6 @@ library:
@item GXF @tab X @tab X
@tab General eXchange Format SMPTE 360M, used by Thomson Grass Valley
playout servers.
@item HNM @tab @tab X
@tab Only version 4 supported, used in some games from Cryo Interactive
@item iCEDraw File @tab @tab X
@item ICO @tab X @tab X
@tab Microsoft Windows ICO
@@ -310,11 +257,9 @@ library:
@tab Used by Linux Media Labs MPEG-4 PCI boards
@item LOAS @tab @tab X
@tab contains LATM multiplexed AAC audio
@item LRC @tab X @tab X
@item LVF @tab @tab X
@item LXF @tab @tab X
@tab VR native stream format, used by Leitch/Harris' video servers.
@item Magic Lantern Video (MLV) @tab @tab X
@item Matroska @tab X @tab X
@item Matroska audio @tab X @tab
@item FFmpeg metadata @tab X @tab X
@@ -340,8 +285,6 @@ library:
@tab also known as DVB Transport Stream
@item MPEG-4 @tab X @tab X
@tab MPEG-4 is a variant of QuickTime.
@item Mirillis FIC video @tab @tab X
@tab No cursor rendering.
@item MIME multipart JPEG @tab X @tab
@item MSN TCP webcam @tab @tab X
@tab Used by MSN Messenger webcam streams.
@@ -381,7 +324,6 @@ library:
@item raw H.261 @tab X @tab X
@item raw H.263 @tab X @tab X
@item raw H.264 @tab X @tab X
@item raw HEVC @tab X @tab X
@item raw Ingenient MJPEG @tab @tab X
@item raw MJPEG @tab X @tab X
@item raw MLP @tab @tab X
@@ -492,13 +434,11 @@ following image formats are supported:
@item Name @tab Encoding @tab Decoding @tab Comments
@item .Y.U.V @tab X @tab X
@tab one raw file per component
@item Alias PIX @tab X @tab X
@tab Alias/Wavefront PIX image format
@item animated GIF @tab X @tab X
@item BMP @tab X @tab X
@tab Microsoft BMP image
@item BRender PIX @tab @tab X
@tab Argonaut BRender 3D engine image format.
@item PIX @tab @tab X
@tab PIX is an image format used in the Argonaut BRender engine.
@item DPX @tab X @tab X
@tab Digital Picture Exchange
@item EXR @tab @tab X
@@ -534,8 +474,6 @@ following image formats are supported:
@tab YUV, JPEG and some extension is not supported yet.
@item Truevision Targa @tab X @tab X
@tab Targa (.TGA) image format
@item WebP @tab E @tab X
@tab WebP image format, encoding supported through external library libwebp
@item XBM @tab X @tab X
@tab X BitMap image format
@item XFace @tab X @tab X
@@ -653,9 +591,7 @@ following image formats are supported:
@item H.263+ / H.263-1998 / H.263 version 2 @tab X @tab X
@item H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 @tab E @tab X
@tab encoding supported through external library libx264
@item HEVC @tab X @tab X
@tab encoding supported through the external library libx265
@item HNM version 4 @tab @tab X
@item H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (VDPAU acceleration) @tab E @tab X
@item HuffYUV @tab X @tab X
@item HuffYUV FFmpeg variant @tab X @tab X
@item IBM Ultimotion @tab @tab X
@@ -686,8 +622,8 @@ following image formats are supported:
@item LCL (LossLess Codec Library) MSZH @tab @tab X
@item LCL (LossLess Codec Library) ZLIB @tab E @tab E
@item LOCO @tab @tab X
@item LucasArts SANM/Smush @tab @tab X
@tab Used in LucasArts games / SMUSH animations.
@item LucasArts Smush @tab @tab X
@tab Used in LucasArts games.
@item lossless MJPEG @tab X @tab X
@item Microsoft ATC Screen @tab @tab X
@tab Also known as Microsoft Screen 3.
@@ -707,6 +643,8 @@ following image formats are supported:
@item Mobotix MxPEG video @tab @tab X
@item Motion Pixels video @tab @tab X
@item MPEG-1 video @tab X @tab X
@item MPEG-1/2 video XvMC (X-Video Motion Compensation) @tab @tab X
@item MPEG-1/2 video (VDPAU acceleration) @tab @tab X
@item MPEG-2 video @tab X @tab X
@item MPEG-4 part 2 @tab X @tab X
@tab libxvidcore can be used alternatively for encoding.
@@ -722,12 +660,8 @@ following image formats are supported:
@tab fourcc: VP50
@item On2 VP6 @tab @tab X
@tab fourcc: VP60,VP61,VP62
@item On2 VP7 @tab @tab X
@tab fourcc: VP70,VP71
@item VP8 @tab E @tab X
@tab fourcc: VP80, encoding supported through external library libvpx
@item VP9 @tab E @tab X
@tab encoding supported through external library libvpx
@item Pinnacle TARGA CineWave YUV16 @tab @tab X
@tab fourcc: Y216
@item Prores @tab @tab X
@@ -753,11 +687,11 @@ following image formats are supported:
@tab Texture dictionaries used by the Renderware Engine.
@item RL2 video @tab @tab X
@tab used in some games by Entertainment Software Partners
@item SGI RLE 8-bit @tab @tab X
@item Sierra VMD video @tab @tab X
@tab Used in Sierra VMD files.
@item Silicon Graphics Motion Video Compressor 1 (MVC1) @tab @tab X
@item Silicon Graphics Motion Video Compressor 2 (MVC2) @tab @tab X
@item Silicon Graphics RLE 8-bit video @tab @tab X
@item Smacker video @tab @tab X
@tab Video encoding used in Smacker.
@item SMPTE VC-1 @tab @tab X
@@ -823,7 +757,7 @@ following image formats are supported:
@tab encoding supported through external library libaacplus
@item AAC @tab E @tab X
@tab encoding supported through external library libfaac and libvo-aacenc
@item AC-3 @tab IX @tab IX
@item AC-3 @tab IX @tab X
@item ADPCM 4X Movie @tab @tab X
@item ADPCM CDROM XA @tab @tab X
@item ADPCM Creative Technology @tab @tab X
@@ -867,8 +801,6 @@ following image formats are supported:
@item ADPCM Sound Blaster Pro 2-bit @tab @tab X
@item ADPCM Sound Blaster Pro 2.6-bit @tab @tab X
@item ADPCM Sound Blaster Pro 4-bit @tab @tab X
@item ADPCM VIMA
@tab Used in LucasArts SMUSH animations.
@item ADPCM Westwood Studios IMA @tab @tab X
@tab Used in Westwood Studios games like Command and Conquer.
@item ADPCM Yamaha @tab X @tab X
@@ -879,9 +811,8 @@ following image formats are supported:
@item Amazing Studio PAF Audio @tab @tab X
@item Apple lossless audio @tab X @tab X
@tab QuickTime fourcc 'alac'
@item ATRAC1 @tab @tab X
@item ATRAC3 @tab @tab X
@item ATRAC3+ @tab @tab X
@item Atrac 1 @tab @tab X
@item Atrac 3 @tab @tab X
@item Bink Audio @tab @tab X
@tab Used in Bink and Smacker files in many games.
@item CELT @tab @tab E
@@ -901,10 +832,6 @@ following image formats are supported:
@item DPCM Sol @tab @tab X
@item DPCM Xan @tab @tab X
@tab Used in Origin's Wing Commander IV AVI files.
@item DSD (Direct Stream Digital), least significant bit first @tab @tab X
@item DSD (Direct Stream Digital), most significant bit first @tab @tab X
@item DSD (Direct Stream Digital), least significant bit first, planar @tab @tab X
@item DSD (Direct Stream Digital), most significant bit first, planar @tab @tab X
@item DSP Group TrueSpeech @tab @tab X
@item DV audio @tab @tab X
@item Enhanced AC-3 @tab X @tab X
@@ -927,14 +854,13 @@ following image formats are supported:
@item Monkey's Audio @tab @tab X
@item MP1 (MPEG audio layer 1) @tab @tab IX
@item MP2 (MPEG audio layer 2) @tab IX @tab IX
@tab encoding supported also through external library TwoLAME
@tab libtwolame can be used alternatively for encoding.
@item MP3 (MPEG audio layer 3) @tab E @tab IX
@tab encoding supported through external library LAME, ADU MP3 and MP3onMP4 also supported
@item MPEG-4 Audio Lossless Coding (ALS) @tab @tab X
@item Musepack SV7 @tab @tab X
@item Musepack SV8 @tab @tab X
@item Nellymoser Asao @tab X @tab X
@item On2 AVC (Audio for Video Codec) @tab @tab X
@item Opus @tab E @tab E
@tab supported through external library libopus
@item PCM A-law @tab X @tab X
@@ -996,8 +922,8 @@ following image formats are supported:
@tab Used in LucasArts SMUSH animations.
@item Vorbis @tab E @tab X
@tab A native but very primitive encoder exists.
@item Voxware MetaSound @tab @tab X
@item WavPack @tab X @tab X
@item WavPack @tab E @tab X
@tab supported through external library libwavpack
@item Westwood Audio (SND1) @tab @tab X
@item Windows Media Audio 1 @tab X @tab X
@item Windows Media Audio 2 @tab X @tab X
@@ -1020,7 +946,6 @@ performance on systems without hardware floating point support).
@item 3GPP Timed Text @tab @tab @tab X @tab X
@item AQTitle @tab @tab X @tab @tab X
@item DVB @tab X @tab X @tab X @tab X
@item DVB teletext @tab @tab X @tab @tab E
@item DVD @tab X @tab X @tab X @tab X
@item JACOsub @tab X @tab X @tab @tab X
@item MicroDVD @tab X @tab X @tab @tab X
@@ -1037,25 +962,21 @@ performance on systems without hardware floating point support).
@item TED Talks captions @tab @tab X @tab @tab X
@item VobSub (IDX+SUB) @tab @tab X @tab @tab X
@item VPlayer @tab @tab X @tab @tab X
@item WebVTT @tab X @tab X @tab X @tab X
@item WebVTT @tab X @tab X @tab @tab X
@item XSUB @tab @tab @tab X @tab X
@end multitable
@code{X} means that the feature is supported.
@code{E} means that support is provided through an external library.
@section Network Protocols
@multitable @columnfractions .4 .1
@item Name @tab Support
@item file @tab X
@item FTP @tab X
@item Gopher @tab X
@item HLS @tab X
@item HTTP @tab X
@item HTTPS @tab X
@item Icecast @tab X
@item MMSH @tab X
@item MMST @tab X
@item pipe @tab X
@@ -1066,9 +987,7 @@ performance on systems without hardware floating point support).
@item RTMPTE @tab X
@item RTMPTS @tab X
@item RTP @tab X
@item SAMBA @tab E
@item SCTP @tab X
@item SFTP @tab E
@item TCP @tab X
@item TLS @tab X
@item UDP @tab X
@@ -1088,19 +1007,17 @@ performance on systems without hardware floating point support).
@item caca @tab @tab X
@item DV1394 @tab X @tab
@item Lavfi virtual device @tab X @tab
@item Linux framebuffer @tab X @tab X
@item Linux framebuffer @tab X @tab
@item JACK @tab X @tab
@item LIBCDIO @tab X
@item LIBDC1394 @tab X @tab
@item OpenAL @tab X
@item OpenGL @tab @tab X
@item OSS @tab X @tab X
@item PulseAudio @tab X @tab X
@item Pulseaudio @tab X @tab
@item SDL @tab @tab X
@item Video4Linux2 @tab X @tab X
@item VfW capture @tab X @tab
@item X11 grabbing @tab X @tab
@item Win32 grabbing @tab X @tab
@end multitable
@code{X} means that input/output is supported.

View File

@@ -299,7 +299,7 @@ the current branch history.
git commit --amend
@end example
allows one to amend the last commit details quickly.
allows to amend the last commit details quickly.
@example
git rebase -i origin/master

273
doc/git-howto.txt Normal file
View File

@@ -0,0 +1,273 @@
About Git write access:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Before everything else, you should know how to use GIT properly.
Luckily Git comes with excellent documentation.
git --help
man git
shows you the available subcommands,
git <command> --help
man git-<command>
shows information about the subcommand <command>.
The most comprehensive manual is the website Git Reference
http://gitref.org/
For more information about the Git project, visit
http://git-scm.com/
Consult these resources whenever you have problems; they are quite exhaustive.
You do not need a special username or password.
All you need is to provide an ssh public key to the Git server admin.
What follows now is a basic introduction to Git and some FFmpeg-specific
guidelines. Read it at least once; if you are granted commit privileges to the
FFmpeg project, you are expected to be familiar with these rules.
I. BASICS:
==========
0. Get GIT:
Most distributions have a git package; if not,
you can get git from http://git-scm.com/
1. Cloning the source tree:
git clone git://source.ffmpeg.org/ffmpeg <target>
This will put the FFmpeg sources into the directory <target>.
git clone git@source.ffmpeg.org:ffmpeg <target>
This will put the FFmpeg sources into the directory <target> and let
you push back your changes to the remote repository.
2. Updating the source tree to the latest revision:
git pull (--ff-only)
pulls in the latest changes from the tracked branch. The tracked branch
can be remote. By default the master branch tracks the branch master in
the remote origin.
Caveat: Since merge commits are forbidden, at least for the initial
months of git, --ff-only or --rebase (see below) are recommended.
--ff-only will fail and not create merge commits if your branch
has diverged (has a different history) from the tracked branch.
2.a Rebasing your local branches:
git pull --rebase
fetches the changes from the main repository and replays your local commits
over it. This is required to keep all your local changes at the top of
FFmpeg's master tree. The master tree will reject pushes with merge commits.
3. Adding/removing files/directories:
git add [-A] <filename/dirname>
git rm [-r] <filename/dirname>
GIT needs to get notified of all changes you make to your working
directory that make files appear or disappear.
Line moves across files are automatically tracked.
4. Showing modifications:
git diff <filename(s)>
will show all local modifications in your working directory as a unified diff.
5. Inspecting the changelog:
git log <filename(s)>
You may also use the graphical tools like gitview or gitk or the web
interface available at http://source.ffmpeg.org
6. Checking source tree status:
git status
detects all the changes you made and lists what actions will be taken in case
of a commit (additions, modifications, deletions, etc.).
7. Committing:
git diff --check
to double check your changes before committing them to avoid trouble later
on. All experienced developers do this on each and every commit, no matter
how small.
Every one of them has been saved from looking like a fool by this many times.
It's very easy for stray debug output or cosmetic modifications to slip in,
please avoid problems through this extra level of scrutiny.
For cosmetics-only commits you should get (almost) empty output from
git diff -w -b <filename(s)>
Also check the output of
git status
to make sure you don't have untracked files or deletions.
git add [-i|-p|-A] <filenames/dirnames>
Make sure you have told git your name and email address, e.g. by running
git config --global user.name "My Name"
git config --global user.email my@email.invalid
(--global to set the global configuration for all your git checkouts).
Git will select the changes to the files for commit. Optionally you can use
the interactive or the patch mode to select hunk by hunk what should be
added to the commit.
git commit
Git will commit the selected changes to your current local branch.
You will be prompted for a log message in an editor, which is either
set in your personal configuration file through
git config core.editor
or set by one of the following environment variables:
GIT_EDITOR, VISUAL or EDITOR.
Log messages should be concise but descriptive. Explain why you made a change;
what you did will be obvious from the changes themselves most of the time.
Saying just "bug fix" or "10l" is bad. Remember that people of varying skill
levels look at and educate themselves while reading through your code. Don't
include filenames in log messages, Git provides that information.
Possibly make the commit message have a terse, descriptive first line, an
empty line and then a full description. The first line will be used to name
the patch by git format-patch.
8. Renaming/moving/copying files or contents of files:
Git automatically tracks such changes, making those normal commits.
mv/cp path/file otherpath/otherfile
git add [-A] .
git commit
Do not move, rename or copy files of which you are not the maintainer without
discussing it on the mailing list first!
9. Reverting broken commits
git revert <commit>
git revert will generate a revert commit. This will not make the faulty
commit disappear from the history.
git reset <commit>
git reset will uncommit the changes up to <commit>, rewriting the current
branch history.
git commit --amend
allows one to amend the last commit details quickly.
git rebase -i origin/master
will replay local commits over the main repository, allowing you to edit,
merge or remove some of them in the process.
Note that the reset, commit --amend and rebase rewrite history, so you
should use them ONLY on your local or topic branches.
The main repository will reject those changes.
10. Preparing a patchset.
git format-patch <commit> [-o directory]
will generate a set of patches for each commit between <commit> and
current HEAD. E.g.
git format-patch origin/master
will generate patches for all commits on the current branch which are not
present in upstream.
A useful shortcut is also
git format-patch -n
which will generate patches from the last n commits.
By default the patches are created in the current directory.
11. Sending patches for review
git send-email <commit list|directory>
will send the patches created by git format-patch or generate them
directly. All the email fields can be configured in the global/local
configuration or overridden on the command line.
Note that this tool must often be installed separately (e.g. git-email
package on Debian-based distros).
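Putting steps 10 and 11 together, a typical patch round trip might look like
the following sketch (branch and directory names are only illustrative, and
the destination is assumed to be the ffmpeg-devel mailing list):
git checkout -b my-topic origin/master
(edit files)
git add -p
git diff --check
git commit
git format-patch origin/master -o outgoing/
git send-email --to ffmpeg-devel@ffmpeg.org outgoing/*.patch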
12. Pushing changes to remote trees
git push
Will push the changes to the default remote (origin).
Git will prevent you from pushing changes if the local and remote trees are
out of sync. Refer to 2 and 2.a to sync the local tree.
git remote add <name> <url>
Will add an additional remote with a name reference; it is useful if you want
to push your local branch for review on a remote host.
git push <remote> <refspec>
Will push the changes to the remote repository. Omitting refspec makes git
push update all the remote branches matching the local ones.
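For instance (remote name, URL and branch name are only illustrative), pushing
a local topic branch to a personal review remote could look like:
git remote add myreview git@example.org:me/ffmpeg.git
git push myreview my-topic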
13. Finding a specific svn revision
Since version 1.7.1 git supports the ':/foo' syntax for specifying commits
based on a regular expression. See man gitrevisions.
git show :/'as revision 23456'
will show the svn changeset r23456. With older git versions searching in
the git log output is the easiest option (especially if a pager with
search capabilities is used).
This commit can be checked out with
git checkout -b svn_23456 :/'as revision 23456'
or for git < 1.7.1 with
git checkout -b svn_23456 $SHA1
where $SHA1 is the commit SHA1 from the 'git log' output.
Contact the project admins <root at ffmpeg dot org> if you have technical
problems with the GIT server.

View File

@@ -13,8 +13,8 @@ You can disable all the input devices using the configure option
option "--enable-indev=@var{INDEV}", or you can disable a particular
input device using the option "--disable-indev=@var{INDEV}".
The option "-devices" of the ff* tools will display the list of
supported input devices.
The option "-formats" of the ff* tools will display the list of
supported input devices (amongst the demuxers).
A description of the currently available input devices follows.
@@ -51,41 +51,6 @@ ffmpeg -f alsa -i hw:0 alsaout.wav
For more information see:
@url{http://www.alsa-project.org/alsa-doc/alsa-lib/pcm.html}
@section avfoundation
AVFoundation input device.
AVFoundation is the framework currently recommended by Apple for stream grabbing on OSX >= 10.7 as well as on iOS.
The older QTKit framework has been marked deprecated since OSX version 10.7.
The filename passed as input is parsed to contain either a device name or index.
The device index can also be given by using -video_device_index.
A given device index will override any given device name.
If the desired device consists of numbers only, use -video_device_index to identify it.
The default device will be chosen if an empty string or the device name "default" is given.
The available devices can be enumerated by using -list_devices.
The pixel format can be set using -pixel_format.
Available formats:
monob, rgb555be, rgb555le, rgb565be, rgb565le, rgb24, bgr24, 0rgb, bgr0, 0bgr, rgb0,
bgr48be, uyvy422, yuva444p, yuva444p16le, yuv444p, yuv422p16, yuv422p10, yuv444p10,
yuv420p, nv12, yuyv422, gray
@example
ffmpeg -f avfoundation -i "0" out.mpg
@end example
@example
ffmpeg -f avfoundation -video_device_index 0 -i "" out.mpg
@end example
@example
ffmpeg -f avfoundation -pixel_format bgr0 -i "default" out.mpg
@end example
@example
ffmpeg -f avfoundation -list_devices true -i ""
@end example
@section bktr
BSD video input device.
@@ -227,81 +192,6 @@ ffmpeg -f fbdev -frames:v 1 -r 1 -i /dev/fb0 screenshot.jpeg
See also @url{http://linux-fbdev.sourceforge.net/}, and fbset(1).
@section gdigrab
Win32 GDI-based screen capture device.
This device allows you to capture a region of the display on Windows.
There are two options for the input filename:
@example
desktop
@end example
or
@example
title=@var{window_title}
@end example
The first option will capture the entire desktop, or a fixed region of the
desktop. The second option will instead capture the contents of a single
window, regardless of its position on the screen.
For example, to grab the entire desktop using @command{ffmpeg}:
@example
ffmpeg -f gdigrab -framerate 6 -i desktop out.mpg
@end example
Grab a 640x480 region at position @code{10,20}:
@example
ffmpeg -f gdigrab -framerate 6 -offset_x 10 -offset_y 20 -video_size vga -i desktop out.mpg
@end example
Grab the contents of the window named "Calculator"
@example
ffmpeg -f gdigrab -framerate 6 -i title=Calculator out.mpg
@end example
@subsection Options
@table @option
@item draw_mouse
Specify whether to draw the mouse pointer. Use the value @code{0} to
not draw the pointer. Default value is @code{1}.
@item framerate
Set the grabbing frame rate. Default value is @code{ntsc},
corresponding to a frame rate of @code{30000/1001}.
@item show_region
Show grabbed region on screen.
If @var{show_region} is specified with @code{1}, then the grabbing
region will be indicated on screen. With this option, it is easy to
know what is being grabbed if only a portion of the screen is grabbed.
Note that @var{show_region} is incompatible with grabbing the contents
of a single window.
For example:
@example
ffmpeg -f gdigrab -show_region 1 -framerate 6 -video_size cif -offset_x 10 -offset_y 20 -i desktop out.mpg
@end example
@item video_size
Set the video frame size. The default is to capture the full screen if @file{desktop} is selected, or the full window size if @file{title=@var{window_title}} is selected.
@item offset_x
When capturing a region with @var{video_size}, set the distance from the left edge of the screen or desktop.
Note that the offset calculation is from the top left corner of the primary monitor on Windows. If you have a monitor positioned to the left of your primary monitor, you will need to use a negative @var{offset_x} value to move the region to that monitor.
@item offset_y
When capturing a region with @var{video_size}, set the distance from the top edge of the screen or desktop.
Note that the offset calculation is from the top left corner of the primary monitor on Windows. If you have a monitor positioned above your primary monitor, you will need to use a negative @var{offset_y} value to move the region to that monitor.
@end table
@section iec61883
FireWire DV/HDV input device using libiec61883.
@@ -483,28 +373,10 @@ ffplay -f lavfi "movie=test.avi[out0];amovie=test.wav[out1]"
@end itemize
@section libcdio
Audio-CD input device based on cdio.
To enable this input device during configuration you need libcdio
installed on your system. Requires the configure option
@code{--enable-libcdio}.
This device allows playing and grabbing from an Audio-CD.
For example to copy with @command{ffmpeg} the entire Audio-CD in /dev/sr0,
you may run the command:
@example
ffmpeg -f libcdio -i /dev/sr0 cd.wav
@end example
@section libdc1394
IIDC1394 input device, based on libdc1394 and libraw1394.
Requires the configure option @code{--enable-libdc1394}.
@section openal
The OpenAL input device provides audio capture on all systems with a
@@ -537,7 +409,7 @@ OpenAL is part of Core Audio, the official Mac OS X Audio interface.
See @url{http://developer.apple.com/technologies/mac/audio-and-video.html}
@end table
This device allows one to capture from an audio input device handled
This device allows to capture from an audio input device handled
through OpenAL.
You need to specify the name of the device to capture in the provided
@@ -613,79 +485,87 @@ For more information about OSS see:
@section pulse
PulseAudio input device.
pulseaudio input device.
To enable this input device you need to configure FFmpeg with @code{--enable-libpulse}.
To enable this input device during configuration you need libpulse-simple
installed in your system.
The filename to provide to the input device is a source device or the
string "default"
To list the PulseAudio source devices and their properties you can invoke
To list the pulse source devices and their properties you can invoke
the command @command{pactl list sources}.
More information about PulseAudio can be found on @url{http://www.pulseaudio.org}.
@subsection Options
@table @option
@item server
Connect to a specific PulseAudio server, specified by an IP address.
Default server is used when not provided.
@item name
Specify the application name PulseAudio will use when showing active clients,
by default it is the @code{LIBAVFORMAT_IDENT} string.
@item stream_name
Specify the stream name PulseAudio will use when showing active streams,
by default it is "record".
@item sample_rate
Specify the samplerate in Hz, by default 48kHz is used.
@item channels
Specify the channels in use, by default 2 (stereo) is set.
@item frame_size
Specify the number of bytes per frame, by default it is set to 1024.
@item fragment_size
Specify the minimal buffering fragment in PulseAudio, it will affect the
audio latency. By default it is unset.
@end table
@subsection Examples
Record a stream from default device:
@example
ffmpeg -f pulse -i default /tmp/pulse.wav
@end example
@section qtkit
QTKit input device.
The filename passed as input is parsed to contain either a device name or index.
The device index can also be given by using -video_device_index.
A given device index will override any given device name.
If the desired device consists of numbers only, use -video_device_index to identify it.
The default device will be chosen if an empty string or the device name "default" is given.
The available devices can be enumerated by using -list_devices.
@subsection @var{server} AVOption
The syntax is:
@example
ffmpeg -f qtkit -i "0" out.mpg
-server @var{server name}
@end example
Connects to a specific server.
@subsection @var{name} AVOption
The syntax is:
@example
ffmpeg -f qtkit -video_device_index 0 -i "" out.mpg
-name @var{application name}
@end example
Specify the application name pulse will use when showing active clients,
by default it is the LIBAVFORMAT_IDENT string
@subsection @var{stream_name} AVOption
The syntax is:
@example
ffmpeg -f qtkit -i "default" out.mpg
-stream_name @var{stream name}
@end example
Specify the stream name pulse will use when showing active streams,
by default it is "record"
@subsection @var{sample_rate} AVOption
The syntax is:
@example
ffmpeg -f qtkit -list_devices true -i ""
-sample_rate @var{samplerate}
@end example
Specify the samplerate in Hz, by default 48kHz is used.
@subsection @var{channels} AVOption
The syntax is:
@example
-channels @var{N}
@end example
Specify the channels in use, by default 2 (stereo) is set.
@subsection @var{frame_size} AVOption
The syntax is:
@example
-frame_size @var{bytes}
@end example
Specify the number of byte per frame, by default it is set to 1024.
@subsection @var{fragment_size} AVOption
The syntax is:
@example
-fragment_size @var{bytes}
@end example
Specify the minimal buffering fragment in pulseaudio, it will affect the
audio latency. By default it is unset.
@section sndio
sndio input device.
@@ -772,7 +652,7 @@ Select the pixel format (only valid for raw video input).
@item input_format
Set the preferred pixel format (for raw video) or a codec name.
This option allows one to select the input format, when several are
This option allows to select the input format, when several are
available.
@item framerate
@@ -833,10 +713,7 @@ other filename will be interpreted as device number 0.
X11 video input device.
Depends on X11, Xext, and Xfixes. Requires the configure option
@code{--enable-x11grab}.
This device allows one to capture a region of an X11 display.
This device allows to capture a region of an X11 display.
The filename passed as input has the syntax:
@example
@@ -859,12 +736,12 @@ properties of your X11 display (e.g. grep for "name" or "dimensions").
For example to grab from @file{:0.0} using @command{ffmpeg}:
@example
ffmpeg -f x11grab -framerate 25 -video_size cif -i :0.0 out.mpg
ffmpeg -f x11grab -r 25 -s cif -i :0.0 out.mpg
@end example
Grab at position @code{10,20}:
@example
ffmpeg -f x11grab -framerate 25 -video_size cif -i :0.0+10,20 out.mpg
ffmpeg -f x11grab -r 25 -s cif -i :0.0+10,20 out.mpg
@end example
@subsection Options
@@ -885,12 +762,12 @@ zero) to the edge of region.
For example:
@example
ffmpeg -f x11grab -follow_mouse centered -framerate 25 -video_size cif -i :0.0 out.mpg
ffmpeg -f x11grab -follow_mouse centered -r 25 -s cif -i :0.0 out.mpg
@end example
To follow only when the mouse pointer reaches within 100 pixels to edge:
@example
ffmpeg -f x11grab -follow_mouse 100 -framerate 25 -video_size cif -i :0.0 out.mpg
ffmpeg -f x11grab -follow_mouse 100 -r 25 -s cif -i :0.0 out.mpg
@end example
@item framerate
@@ -906,20 +783,16 @@ know what is being grabbed if only a portion of the screen is grabbed.
For example:
@example
ffmpeg -f x11grab -show_region 1 -framerate 25 -video_size cif -i :0.0+10,20 out.mpg
ffmpeg -f x11grab -show_region 1 -r 25 -s cif -i :0.0+10,20 out.mpg
@end example
With @var{follow_mouse}:
@example
ffmpeg -f x11grab -follow_mouse centered -show_region 1 -framerate 25 -video_size cif -i :0.0 out.mpg
ffmpeg -f x11grab -follow_mouse centered -show_region 1 -r 25 -s cif -i :0.0 out.mpg
@end example
@item video_size
Set the video frame size. Default value is @code{vga}.
@item use_shm
Use the MIT-SHM extension for shared memory. Default value is @code{1}.
It may be necessary to disable it for remote displays.
@end table
@c man end INPUT DEVICES

View File

@@ -1,4 +1,4 @@
FFmpeg's bug/feature request tracker manual
FFmpeg's bug/patch/feature request tracker manual
=================================================
NOTE: This is a draft.
@@ -11,7 +11,7 @@ existing issues can be done through a web interface.
Issues can be different kinds of things we want to keep track of
but that do not belong into the source tree itself. This includes
bug reports, feature requests and license violations. We
bug reports, patches, feature requests and license violations. We
might add more items to this list in the future, so feel free to
propose a new `type of issue' on the ffmpeg-devel mailing list if
you feel it is worth tracking.
@@ -22,15 +22,12 @@ a mail for every change to every issue.
(the above does all work already after light testing)
The subscription URL for the ffmpeg-trac list is:
http(s)://lists.ffmpeg.org/mailman/listinfo/ffmpeg-trac
http(s)://ffmpeg.org/mailman/listinfo/ffmpeg-trac
The URL of the webinterface of the tracker is:
http(s)://trac.ffmpeg.org
Type:
-----
art
Artwork such as photos, music, banners, and logos.
bug / defect
An error, flaw, mistake, failure, or fault in FFmpeg or libav* that
prevents it from behaving as intended.
@@ -44,18 +41,20 @@ feature request / enhancement
license violation
ticket to keep track of (L)GPL violations of ffmpeg by others
sponsoring request
Developer requests for hardware, software, specifications, money,
refunds, etc.
patch
A patch as generated by diff which conforms to the patch submission and
development policy.
Priority:
---------
critical
Bugs about data loss and security issues.
Bugs and patches which deal with data loss and security issues.
No feature request can be critical.
important
Bugs which make FFmpeg unusable for a significant number of users.
Bugs which make FFmpeg unusable for a significant number of users, and
patches fixing them.
Examples here might be completely broken MPEG-4 decoding or a build issue
on Linux.
While broken 4xm decoding or a broken OS/2 build would not be important,
@@ -69,7 +68,7 @@ normal
minor
Bugs about things like spelling errors, "mp2" instead of
Bugs and patches about things like spelling errors, "mp2" instead of
"mp3" being shown and such.
Feature requests about things few people want or which do not make a big
difference.
This state implies that the bug either has been reproduced or that
reproduction is not needed as the bug is already understood.
Type/Status:
Type/Status/Substatus:
----------
*/new
Initial state of new bugs and feature requests submitted by
*/new/new
Initial state of new bugs, patches and feature requests submitted by
users.
*/open
*/open/open
Issues which have been briefly looked at and which did not look outright
invalid.
This implies that no more detailed state applies yet. Conversely,
@@ -118,7 +117,9 @@ Type/Status:
looked at.
*/closed/duplicate
Bugs or feature requests which are duplicates.
Bugs, patches or feature requests which are duplicates.
Note that patches dealing with the same thing in a different way are not
duplicates.
Note, if you mark something as duplicate, do not forget setting the
superseder so bug reports are properly linked.
@@ -133,7 +134,7 @@ Type/Status:
bug/closed/fixed
Bugs which have to the best of our knowledge been fixed.
bug/closed/wontfix
bug/closed/wont_fix
Bugs which we will not fix. Possible reasons include legality, high
complexity for the sake of supporting obscure corner cases, speed loss
for similarly esoteric purposes, et cetera.
@@ -147,15 +148,33 @@ bug/closed/works_for_me
reproduction failed - that is the code seems to work correctly to the
best of our knowledge.
feature_request/closed/fixed
patch/open/approved
Patches which have been reviewed and approved by a developer.
Such patches can be applied anytime by any other developer after some
reasonable testing (compile + regression tests + does the patch do
what the author claimed).
patch/open/needs_changes
Patches which have been reviewed and need changes to be accepted.
patch/closed/applied
Patches which have been applied.
patch/closed/rejected
Patches which have been rejected.
feature_request/closed/implemented
Feature requests which have been implemented.
feature_request/closed/wontfix
feature_request/closed/wont_implement
Feature requests which will not be implemented. The reasons here could
be legal, philosophical or others.
Note, please do not use type-status-substatus combinations other than the
above without asking on ffmpeg-dev first!
Note2, if you provide the requested info do not forget to remove the
needs_more_info resolution.
needs_more_info substatus.
Component:
----------

View File

@@ -16,25 +16,7 @@ The libavutil library is a utility library to aid portable
multimedia programming. It contains safe portable string functions,
random number generators, data structures, additional mathematics
functions, cryptography and multimedia related functionality (like
enumerations for pixel and sample formats). It is not a library for
code needed by both libavcodec and libavformat.
The goals for this library are to be:
@table @strong
@item Modular
It should have few interdependencies and the possibility of disabling individual
parts during @command{./configure}.
@item Small
Both sources and objects should be small.
@item Efficient
It should have low CPU and memory usage.
@item Useful
It should avoid useless features that almost no one needs.
@end table
enumerations for pixel and sample formats).
@c man end DESCRIPTION

View File

@@ -20,7 +20,7 @@ Specifically, this library performs the following conversions:
@itemize
@item
@emph{Resampling}: is the process of changing the audio rate, for
example from a high sample rate of 44100Hz to 8000Hz. Audio
example from an high sample rate of 44100Hz to 8000Hz. Audio
conversion from high to low sample rate is a lossy process. Several
resampling options and algorithms are available.

View File

@@ -47,11 +47,6 @@ Files that have MIPS copyright notice in them:
* libavutil/mips/
float_dsp_mips.c
libm_mips.h
* libavcodec/
fft_fixed_32.c
fft_init_table.c
fft_table.h
mdct_fixed_32.c
* libavcodec/mips/
aaccoder_mips.c
aacpsy_mips.h

View File

@@ -23,8 +23,6 @@ A description of some of the currently available muxers follows.
Audio Interchange File Format muxer.
@subsection Options
It accepts the following options:
@table @option
@@ -51,10 +49,6 @@ The output of the muxer consists of a single line of the form:
CRC=0x@var{CRC}, where @var{CRC} is a hexadecimal number 0-padded to
8 digits containing the CRC for all the decoded input frames.
See also the @ref{framecrc} muxer.
@subsection Examples
For example to compute the CRC of the input, and store it in the file
@file{out.crc}:
@example
@@ -74,6 +68,8 @@ and the input video converted to MPEG-2 video, use the command:
ffmpeg -i INPUT -c:a pcm_u8 -c:v mpeg2video -f crc -
@end example
See also the @ref{framecrc} muxer.
@anchor{framecrc}
@section framecrc
@@ -93,8 +89,6 @@ packet of the form:
@var{CRC} is a hexadecimal number 0-padded to 8 digits containing the
CRC of the packet.
@subsection Examples
For example to compute the CRC of the audio and video frames in
@file{INPUT}, converted to raw audio and video packets, and store it
in the file @file{out.crc}:
@@ -138,8 +132,6 @@ packet of the form:
@var{MD5} is a hexadecimal number representing the computed MD5 hash
for the packet.
@subsection Examples
For example to compute the MD5 of the audio and video frames in
@file{INPUT}, converted to raw audio and video packets, and store it
in the file @file{out.md5}:
@@ -154,93 +146,30 @@ ffmpeg -i INPUT -f framemd5 -
See also the @ref{md5} muxer.
@anchor{gif}
@section gif
Animated GIF muxer.
It accepts the following options:
@table @option
@item loop
Set the number of times to loop the output. Use @code{-1} for no loop, @code{0}
for looping indefinitely (default).
@item final_delay
Force the delay (expressed in centiseconds) after the last frame. Each frame
ends with a delay until the next frame. The default is @code{-1}, which is a
special value to tell the muxer to re-use the previous delay. In case of a
loop, you might want to customize this value to mark a pause for instance.
@end table
For example, to encode a gif looping 10 times, with a 5-second delay between
the loops:
@example
ffmpeg -i INPUT -loop 10 -final_delay 500 out.gif
@end example
Note 1: if you wish to extract the frames in separate GIF files, you need to
force the @ref{image2} muxer:
@example
ffmpeg -i INPUT -c:v gif -f image2 "out%d.gif"
@end example
Note 2: the GIF format has a very small time base: the delay between two frames
cannot be smaller than one centisecond.
@anchor{hls}
@section hls
Apple HTTP Live Streaming muxer that segments MPEG-TS according to
the HTTP Live Streaming (HLS) specification.
the HTTP Live Streaming specification.
It creates a playlist file and numbered segment files. The output
filename specifies the playlist filename; the segment filenames
receive the same basename as the playlist, a sequential number and
a .ts extension.
For example, to convert an input file with @command{ffmpeg}:
@example
ffmpeg -i in.nut out.m3u8
@end example
See also the @ref{segment} muxer, which provides a more generic and
flexible implementation of a segmenter, and can be used to perform HLS
segmentation.
@subsection Options
This muxer supports the following options:
@table @option
@item hls_time @var{seconds}
Set the segment length in seconds. Default value is 2.
@item hls_list_size @var{size}
Set the maximum number of playlist entries. If set to 0 the list file
will contain all the segments. Default value is 5.
@item hls_wrap @var{wrap}
Set the number after which the segment filename number (the number
specified in each segment file) wraps. If set to 0 the number will
never be wrapped. Default value is 0.
This option is useful to avoid filling the disk with many segment
files, and limits the maximum number of segment files written to disk
to @var{wrap}.
@item start_number @var{number}
Start the playlist sequence number from @var{number}. Default value is
0.
@item hls_base_url @var{baseurl}
Append @var{baseurl} to every entry in the playlist.
Useful to generate playlists with absolute paths.
Note that the playlist sequence number must be unique for each segment
and it is not to be confused with the segment filename sequence number
which can be cyclic, for example if the @option{wrap} option is
specified.
@item -hls_time @var{seconds}
Set the segment length in seconds.
@item -hls_list_size @var{size}
Set the maximum number of playlist entries.
@item -hls_wrap @var{wrap}
Set the number after which index wraps.
@item -start_number @var{number}
Start the sequence from @var{number}.
@end table
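For instance, a minimal sketch (the option values are only illustrative, and
the input is assumed to already use MPEG-TS compatible codecs):
@example
ffmpeg -i in.mp4 -c copy -hls_time 10 -hls_list_size 0 out.m3u8
@end example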
@anchor{ico}
@@ -306,8 +235,6 @@ The pattern "img%%-%d.jpg" will specify a sequence of filenames of the
form @file{img%-1.jpg}, @file{img%-2.jpg}, ..., @file{img%-10.jpg},
etc.
@subsection Examples
The following example shows how to use @command{ffmpeg} for creating a
sequence of files @file{img-001.jpeg}, @file{img-002.jpeg}, ...,
taking one image every second from the input video:
@@ -330,32 +257,16 @@ Note also that the pattern must not necessarily contain "%d" or
ffmpeg -i in.avi -f image2 -frames:v 1 img.jpeg
@end example
The @option{strftime} option allows you to expand the filename with
date and time information. Check the documentation of
the @code{strftime()} function for the syntax.
For example to generate image files from the @code{strftime()}
"%Y-%m-%d_%H-%M-%S" pattern, the following @command{ffmpeg} command
can be used:
@example
ffmpeg -f v4l2 -r 1 -i /dev/video0 -f image2 -strftime 1 "%Y-%m-%d_%H-%M-%S.jpg"
@end example
@subsection Options
@table @option
@item start_number
Start the sequence from the specified number. Default value is 1. Must
be a non-negative number.
@item start_number @var{number}
Start the sequence from @var{number}. Default value is 1. Must be a
positive number.
@item update
If set to 1, the filename will always be interpreted as just a
filename, not a pattern, and the corresponding file will be continuously
overwritten with new images. Default value is 0.
@item -update @var{number}
If @var{number} is nonzero, the filename will always be interpreted as just a
filename, not a pattern, and this file will be continuously overwritten with new
images.
@item strftime
If set to 1, expand the filename with date and time information from
@code{strftime()}. Default value is 0.
@end table
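For instance, a minimal sketch (file names are only illustrative) that keeps
overwriting a single image with the most recent frame via the @option{update}
option:
@example
ffmpeg -i in.avi -f image2 -update 1 img.jpeg
@end example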
The image muxer supports the .Y.U.V image file format. This format is
@@ -370,27 +281,25 @@ Matroska container muxer.
This muxer implements the matroska and webm container specs.
@subsection Metadata
The recognized metadata settings in this muxer are:
@table @option
@item title
Set title name provided to a single track.
@item language
Specify the language of the track in the Matroska languages form.
@item title=@var{title name}
Name provided to a single track
@end table
The language can be either the 3-letter bibliographic ISO-639-2 (ISO
639-2/B) form (like "fre" for French), or a language code mixed with a
country code for specialities in languages (like "fre-ca" for Canadian
French).
@table @option
@item stereo_mode
Set stereo 3D video layout of two views in a single video track.
@item language=@var{language name}
Specifies the language of the track in the Matroska languages form
@end table
The following values are recognized:
@table @samp
@table @option
@item stereo_mode=@var{mode}
Stereo 3D video layout of two views in a single video track
@table @option
@item mono
video is not stereo
@item left_right
@@ -429,11 +338,10 @@ For example a 3D WebM clip can be created using the following command line:
ffmpeg -i sample_left_right_clip.mpg -an -c:v libvpx -metadata stereo_mode=left_right -y stereo_clip.webm
@end example
@subsection Options
This muxer supports the following options:
@table @option
@item reserve_index_space
By default, this muxer writes the index for seeking (called cues in Matroska
terms) at the end of the file, because it cannot know in advance how much space
@@ -448,6 +356,7 @@ for most use cases should be about 50kB per hour of video.
Note that cues are only written if the output is seekable and this option will
have no effect if it is not.
@end table
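A minimal sketch (the value is only illustrative; the text above suggests
roughly 50kB per hour of video):
@example
ffmpeg -i in.mkv -c copy -reserve_index_space 50000 out.mkv
@end example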
@anchor{md5}
@@ -477,9 +386,7 @@ ffmpeg -i INPUT -f md5 -
See also the @ref{framemd5} muxer.
@section mov, mp4, ismv
MOV/MP4/ISMV (Smooth Streaming) muxer.
@section MOV/MP4/ISMV
The mov/mp4/ismv muxer supports fragmentation. Normally, a MOV/MP4
file has all the metadata about all packets stored in one location
@@ -495,8 +402,6 @@ very long files (since writing normal MOV/MP4 files stores info about
every single packet in memory until the file is closed). The downside
is that it is less compatible with other applications.
@subsection Options
Fragmentation is enabled by setting one of the AVOptions that define
how to cut the file into fragments:
@@ -546,21 +451,13 @@ pair for each track, making it easier to separate tracks.
This option is implicitly set when writing ismv (Smooth Streaming) files.
@item -movflags faststart
Run a second pass moving the index (moov atom) to the beginning of the file.
This operation can take a while, and will not work in various situations such
Run a second pass moving the moov atom on top of the file. This
operation can take a while, and will not work in various situations such
as fragmented output, thus it is not enabled by default.
@item -movflags rtphint
Add RTP hinting tracks to the output file.
@item -movflags disable_chpl
Disable Nero chapter markers (chpl atom). Normally, both Nero chapters
and a QuickTime chapter track are written to the file. With this option
set, only the QuickTime chapter track will be written. Nero chapters can
cause failures when the file is reprocessed with certain tagging programs, like
mp3Tag 2.61a and iTunes 11.3, most likely other versions are affected as well.
@end table
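For example, a minimal sketch (assuming an already encoded @file{in.mp4}) that
remuxes the file so the index sits at the beginning for progressive playback:
@example
ffmpeg -i in.mp4 -c copy -movflags faststart out.mp4
@end example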
@subsection Example
Smooth Streaming content can be pushed in real time to a publishing
point on IIS with this muxer. Example:
@example
@@ -571,15 +468,12 @@ ffmpeg -re @var{<normal input/transcoding options>} -movflags isml+frag_keyframe
The MP3 muxer writes a raw MP3 stream with an ID3v2 header at the beginning and
optionally an ID3v1 tag at the end. ID3v2.3 and ID3v2.4 are supported, the
@code{id3v2_version} option controls which one is used. Setting
@code{id3v2_version} to 0 will disable the ID3v2 header completely. The legacy
ID3v1 tag is not written by default, but may be enabled with the
@code{write_id3v1} option.
@code{id3v2_version} option controls which one is used. The legacy ID3v1 tag is
not written by default, but may be enabled with the @code{write_id3v1} option.
The muxer may also write a Xing frame at the beginning, which contains the
number of frames in the file. It is useful for computing duration of VBR files.
The Xing frame is written if the output stream is seekable and if the
@code{write_xing} option is set to 1 (the default).
For seekable output the muxer also writes a Xing frame at the beginning, which
contains the number of frames in the file. It is useful for computing duration
of VBR files.
The muxer supports writing ID3v2 attached pictures (APIC frames). The pictures
are supplied to the muxer in form of a video stream with a single packet. There
@@ -606,24 +500,12 @@ ffmpeg -i input.mp3 -i cover.png -c copy -map 0 -map 1
-metadata:s:v title="Album cover" -metadata:s:v comment="Cover (Front)" out.mp3
@end example
Write a "clean" MP3 without any extra features:
@example
ffmpeg -i input.wav -write_xing 0 -id3v2_version 0 out.mp3
@end example
@section mpegts
MPEG transport stream muxer.
This muxer implements ISO 13818-1 and part of ETSI EN 300 468.
The recognized metadata settings in mpegts muxer are @code{service_provider}
and @code{service_name}. If they are not set the default for
@code{service_provider} is "FFmpeg" and the default for
@code{service_name} is "Service01".
@subsection Options
The muxer options are:
@table @option
@@ -640,46 +522,12 @@ Set the service_id (default 0x0001) also known as program in DVB.
Set the first PID for PMT (default 0x1000, max 0x1f00).
@item -mpegts_start_pid @var{number}
Set the first PID for data packets (default 0x0100, max 0x0f00).
@item -mpegts_m2ts_mode @var{number}
Enable m2ts mode if set to 1. Default value is -1 which disables m2ts mode.
@item -muxrate @var{number}
Set a constant muxrate (default VBR).
@item -pcr_period @var{numer}
Override the default PCR retransmission time (default 20ms), ignored
if variable muxrate is selected.
@item -pes_payload_size @var{number}
Set minimum PES packet payload in bytes.
@item -mpegts_flags @var{flags}
Set flags (see below).
@item -mpegts_copyts @var{number}
Preserve original timestamps, if value is set to 1. Default value is -1, which
results in shifting timestamps so that they start from 0.
@item -tables_version @var{number}
Set PAT, PMT and SDT version (default 0, valid values are from 0 to 31, inclusively).
This option allows updating stream structure so that standard consumer may
detect the change. To do so, reopen output AVFormatContext (in case of API
usage) or restart ffmpeg instance, cyclically changing tables_version value:
@example
ffmpeg -i source1.ts -codec copy -f mpegts -tables_version 0 udp://1.1.1.1:1111
ffmpeg -i source2.ts -codec copy -f mpegts -tables_version 1 udp://1.1.1.1:1111
...
ffmpeg -i source3.ts -codec copy -f mpegts -tables_version 31 udp://1.1.1.1:1111
ffmpeg -i source1.ts -codec copy -f mpegts -tables_version 0 udp://1.1.1.1:1111
ffmpeg -i source2.ts -codec copy -f mpegts -tables_version 1 udp://1.1.1.1:1111
...
@end example
@end table
Option mpegts_flags may take a set of such flags:
@table @option
@item resend_headers
Reemit PAT/PMT before writing the next packet.
@item latm
Use LATM packetization for AAC.
@end table
@subsection Example
The recognized metadata settings in mpegts muxer are @code{service_provider}
and @code{service_name}. If they are not set the default for
@code{service_provider} is "FFmpeg" and the default for
@code{service_name} is "Service01".
@example
ffmpeg -i file.mpg -c copy \
@@ -715,30 +563,6 @@ Alternatively you can write the command as:
ffmpeg -benchmark -i INPUT -f null -
@end example
@section nut
@table @option
@item -syncpoints @var{flags}
Change the syncpoint usage in nut:
@table @option
@item @var{default} use the normal low-overhead seeking aids.
@item @var{none} do not use the syncpoints at all, reducing the overhead but making the stream non-seekable;
Use of this option is not recommended, as the resulting files are very damage
sensitive and seeking is not possible. Also in general the overhead from
syncpoints is negligible. Note, -@code{write_index} 0 can be used to disable
all growing data tables, allowing one to mux endless streams with limited memory
and without these disadvantages.
@item @var{timestamped} extend the syncpoint with a wallclock field.
@end table
The @var{none} and @var{timestamped} flags are experimental.
@item -write_index @var{bool}
Write index at the end, the default is to write an index.
@end table
@example
ffmpeg -i INPUT -f_strict experimental -syncpoints none - | processor
@end example
@section ogg
Ogg container muxer.
@@ -754,12 +578,11 @@ situations, giving a small seek granularity at the cost of additional container
overhead.
@end table
@anchor{segment}
@section segment, stream_segment, ssegment
Basic stream segmenter.
This muxer outputs streams to a number of separate files of nearly
The segmenter muxer outputs streams to a number of separate files of nearly
fixed duration. Output filename pattern can be set in a fashion similar to
@ref{image2}.
@@ -781,21 +604,14 @@ The segment muxer works best with a single constant frame rate video.
Optionally it can generate a list of the created segments, by setting
the option @var{segment_list}. The list type is specified by the
@var{segment_list_type} option. The entry filenames in the segment
list are set by default to the basename of the corresponding segment
files.
See also the @ref{hls} muxer, which provides a more specific
implementation for HLS segmentation.
@subsection Options
@var{segment_list_type} option.
The segment muxer supports the following options:
@table @option
@item reference_stream @var{specifier}
Set the reference stream, as specified by the string @var{specifier}.
If @var{specifier} is set to @code{auto}, the reference is chosen
If @var{specifier} is set to @code{auto}, the reference is choosen
automatically. Otherwise it must be a stream specifier (see the ``Stream
specifiers'' chapter in the ffmpeg manual) which specifies the
reference stream. The default value is @code{auto}.
@@ -804,11 +620,6 @@ reference stream. The default value is @code{auto}.
Override the inner container format, by default it is guessed by the filename
extension.
@item segment_format_options @var{options_list}
Set output format options using a :-separated list of key=value
parameters. Values containing the @code{:} special character must be
escaped.
@item segment_list @var{name}
Generate also a listfile named @var{name}. If not specified no
listfile is generated.
@@ -825,21 +636,15 @@ Allow caching (only affects M3U8 list files).
Allow live-friendly file generation.
@end table
@item segment_list_type @var{type}
Select the listing format.
@table @option
@item @var{flat} use a simple flat list of entries.
@item @var{hls} use a m3u8-like structure.
@end table
Default value is @code{samp}.
@item segment_list_size @var{size}
Update the list file so that it contains at most @var{size}
Update the list file so that it contains at most the last @var{size}
segments. If 0 the list file will contain all the segments. Default
value is 0.
@item segment_list_entry_prefix @var{prefix}
Prepend @var{prefix} to each entry. Useful to generate absolute paths.
By default no prefix is applied.
@item segment_list_type @var{type}
Specify the format for the segment list file.
The following values are recognized:
@table @samp
@@ -890,16 +695,6 @@ Note that splitting may not be accurate, unless you force the
reference stream key-frames at the given time. See the introductory
notice and the examples below.
@item segment_atclocktime @var{1|0}
If set to "1" split at regular clock time intervals starting from 00:00
o'clock. The @var{time} value specified in @option{segment_time} is
used for setting the length of the splitting interval.
For example with @option{segment_time} set to "900" this makes it possible
to create files at 12:00 o'clock, 12:15, 12:30, etc.
Default value is "0".
@item segment_time_delta @var{delta}
Specify the accuracy time when selecting the start time for a
segment, expressed as a duration specification. Default value is "0".
@@ -919,7 +714,7 @@ In particular may be used in combination with the @file{ffmpeg} option
@var{force_key_frames} may not be set accurately because of rounding
issues, with the consequence that a key frame time may result set just
before the specified time. For constant frame rate videos a value of
1/(2*@var{frame_rate}) should address the worst case mismatch between
1/2*@var{frame_rate} should address the worst case mismatch between
the specified time and the time set by @var{force_key_frames}.
@item segment_times @var{times}
Reset timestamps at the beginning of each segment, so that each segment
will start with near-zero timestamps. It is meant to ease the playback
of the generated segments. May not work with some combinations of
muxers/codecs. It is set to @code{0} by default.
@item initial_offset @var{offset}
Specify timestamp offset to apply to the output packet timestamps. The
argument must be a time duration specification, and defaults to 0.
@end table
@subsection Examples
@itemize
@item
Remux the content of file @file{in.mkv} to a list of segments
To remux the content of file @file{in.mkv} to a list of segments
@file{out-000.nut}, @file{out-001.nut}, etc., and write the list of
generated segments to @file{out.list}:
@example
@@ -964,20 +755,14 @@ ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.list out%03d.nu
@end example
@item
Segment input and set output format options for the output segments:
@example
ffmpeg -i in.mkv -f segment -segment_time 10 -segment_format_options movflags=+faststart out%03d.mp4
@end example
@item
Segment the input file according to the split points specified by the
@var{segment_times} option:
As the example above, but segment the input file according to the split
points specified by the @var{segment_times} option:
@example
ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.csv -segment_times 1,2,3,5,8,13,21 out%03d.nut
@end example
@item
Use the @command{ffmpeg} @option{force_key_frames}
As the example above, but use the @command{ffmpeg} @option{force_key_frames}
option to force key frames in the input at the specified location, together
with the segment option @option{segment_time_delta} to account for
possible rounding performed when setting key frame times.
@@ -996,7 +781,7 @@ ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.csv -segment_fr
@end example
@item
Convert the @file{in.mkv} to TS segments using the @code{libx264}
To convert the @file{in.mkv} to TS segments using the @code{libx264}
and @code{libfaac} encoders:
@example
ffmpeg -i in.mkv -map 0 -codec:v libx264 -codec:a libfaac -f ssegment -segment_list out.list out%03d.ts
@@ -1011,28 +796,6 @@ ffmpeg -re -i in.mkv -codec copy -map 0 -f segment -segment_list playlist.m3u8 \
@end example
@end itemize
@section smoothstreaming
The Smooth Streaming muxer generates a set of files (Manifest, chunks) suitable for serving with a conventional web server.
@table @option
@item window_size
Specify the number of fragments kept in the manifest. Default 0 (keep all).
@item extra_window_size
Specify the number of fragments kept outside of the manifest before removing from disk. Default 5.
@item lookahead_count
Specify the number of lookahead fragments. Default 2.
@item min_frag_duration
Specify the minimum fragment duration (in microseconds). Default 5000000.
@item remove_at_exit
Specify whether to remove all fragments when finished. Default 0 (do not remove).
@end table
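A minimal sketch (assuming the input codecs are already compatible and that
the output path names a directory where the Manifest and chunk files are
written):
@example
ffmpeg -re -i in.ismv -c copy -f smoothstreaming /var/www/stream
@end example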
@section tee
The tee muxer can be used to write the same data to several files or any
@@ -1048,103 +811,24 @@ to feed the same packets to several muxers directly.
The slave outputs are specified in the file name given to the muxer,
separated by '|'. If any of the slave names contains the '|' separator,
leading or trailing spaces or any special character, it must be
escaped (see @ref{quoting_and_escaping,,the "Quoting and escaping"
section in the ffmpeg-utils(1) manual,ffmpeg-utils}).
escaped (see the ``Quoting and escaping'' section in the ffmpeg-utils
manual).
Muxer options can be specified for each slave by prepending them as a list of
Options can be specified for each slave by prepending them as a list of
@var{key}=@var{value} pairs separated by ':', between square brackets. If
the options values contain a special character or the ':' separator, they
must be escaped; note that this is a second level escaping.
The following special options are also recognized:
@table @option
@item f
Specify the format name. Useful if it cannot be guessed from the
output name suffix.
@item bsfs[/@var{spec}]
Specify a list of bitstream filters to apply to the specified
output.
It is possible to specify to which streams a given bitstream filter
applies, by appending a stream specifier to the option separated by
@code{/}. @var{spec} must be a stream specifier (see @ref{Format
stream specifiers}). If the stream specifier is not specified, the
bitstream filters will be applied to all streams in the output.
Several bitstream filters can be specified, separated by ",".
@item select
Select the streams that should be mapped to the slave output,
specified by a stream specifier. If not specified, this defaults to
all the input streams.
@end table
@subsection Examples
@itemize
@item
Encode something and both archive it in a WebM file and stream it
Example: encode something and both archive it in a WebM file and stream it
as MPEG-TS over UDP (the streams need to be explicitly mapped):
@example
ffmpeg -i ... -c:v libx264 -c:a mp2 -f tee -map 0:v -map 0:a
"archive-20121107.mkv|[f=mpegts]udp://10.0.1.255:1234/"
@end example
@item
Use @command{ffmpeg} to encode the input, and send the output
to three different destinations. The @code{dump_extra} bitstream
filter is used to add extradata information to all the output video
keyframes packets, as requested by the MPEG-TS format. The select
option is applied to @file{out.aac} in order to make it contain only
audio packets.
@example
ffmpeg -i ... -map 0 -flags +global_header -c:v libx264 -c:a aac -strict experimental
-f tee "[bsfs/v=dump_extra]out.ts|[movflags=+faststart]out.mp4|[select=a]out.aac"
@end example
@item
As above, but select only stream @code{a:1} for the audio output. Note
that a second level escaping must be performed, as ":" is a special
character used to separate options.
@example
ffmpeg -i ... -map 0 -flags +global_header -c:v libx264 -c:a aac -strict experimental
-f tee "[bsfs/v=dump_extra]out.ts|[movflags=+faststart]out.mp4|[select=\'a:1\']out.aac"
@end example
@end itemize
Note: some codecs may need different options depending on the output format;
the auto-detection of this can not work with the tee muxer. The main example
is the @option{global_header} flag.
@section webm_dash_manifest
WebM DASH Manifest muxer.
This muxer implements the WebM DASH Manifest specification to generate the DASH manifest XML.
@subsection Options
This muxer supports the following options:
@table @option
@item adaptation_sets
This option has the following syntax: "id=x,streams=a,b,c id=y,streams=d,e" where x and y are the
unique identifiers of the adaptation sets and a,b,c,d and e are the indices of the corresponding
audio and video streams. Any number of adaptation sets can be added using this option.
@end table
@subsection Example
@example
ffmpeg -f webm_dash_manifest -i video1.webm \
-f webm_dash_manifest -i video2.webm \
-f webm_dash_manifest -i audio1.webm \
-f webm_dash_manifest -i audio2.webm \
-map 0 -map 1 -map 2 -map 3 \
-c copy \
-f webm_dash_manifest \
-adaptation_sets "id=0,streams=0,1 id=1,streams=2,3" \
manifest.xml
@end example
@c man end MUXERS

View File

@@ -21,27 +21,6 @@ The official nut specification is at svn://svn.mplayerhq.hu/nut
In case of any differences between this text and the official specification,
the official specification shall prevail.
@chapter Modes
NUT has some variants signaled by using the flags field in its main header.
@multitable @columnfractions .4 .4
@item BROADCAST @tab Extend the syncpoint to report the sender wallclock
@item PIPE @tab Omit completely the syncpoint
@end multitable
@section BROADCAST
The BROADCAST variant provides a secondary time reference to facilitate
detecting endpoint latency and network delays.
It assumes all the endpoint clocks are synchronized.
To be used in real-time scenarios.
@section PIPE
The PIPE variant assumes NUT is used as a non-seekable intermediate container;
by not using syncpoints it removes unneeded overhead and reduces the overall
memory usage.
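These variants appear to correspond to the @option{syncpoints} flags of the
nut muxer (an inference from the matching descriptions, not something stated
here): @samp{timestamped} for BROADCAST and @samp{none} for PIPE. Since both
flags are experimental, a sketch would be:
@example
ffmpeg -i INPUT -f_strict experimental -f nut -syncpoints timestamped out.nut
@end example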
@chapter Container-specific codec tags
@section Generic raw YUVA formats

View File

@@ -79,6 +79,9 @@ qpel{8,16}_mc??_old_c / *pixels{8,16}_l4
Just used to work around a bug in an old libavcodec encoder version.
Don't optimize them.
tpel_mc_func {put,avg}_tpel_pixels_tab
Used only for SVQ3, so only optimize them if you need fast SVQ3 decoding.
add_bytes/diff_bytes
For huffyuv only, optimize if you want a faster ffhuffyuv codec.
@@ -136,6 +139,9 @@ dct_unquantize_mpeg2
dct_unquantize_h263
Used in MPEG-4/H.263 en/decoding.
FIXME remaining functions?
BTW, most of these functions are in dsputil.c/.h, some are in mpegvideo.c/.h.
Alignment:
@@ -262,6 +268,17 @@ CELL/SPU:
http://www-01.ibm.com/chips/techlib/techlib.nsf/techdocs/30B3520C93F437AB87257060006FFE5E/$file/Language_Extensions_for_CBEA_2.4.pdf
http://www-01.ibm.com/chips/techlib/techlib.nsf/techdocs/9F820A5FFA3ECE8C8725716A0062585F/$file/CBE_Handbook_v1.1_24APR2007_pub.pdf
SPARC-specific:
---------------
SPARC Joint Programming Specification (JPS1): Commonality
http://www.fujitsu.com/downloads/PRMPWR/JPS1-R1.0.4-Common-pub.pdf
UltraSPARC III Processor User's Manual (contains instruction timings)
http://www.sun.com/processors/manuals/USIIIv2.pdf
VIS Whitepaper (contains optimization guidelines)
http://www.sun.com/processors/vis/download/vis/vis_whitepaper.pdf
GCC asm links:
--------------
official doc but quite ugly


@@ -1,7 +1,7 @@
@chapter Output Devices
@c man begin OUTPUT DEVICES
Output devices are configured elements in FFmpeg that can write
Output devices are configured elements in FFmpeg which allow to write
multimedia data to an output device attached to your system.
When you configure your FFmpeg build, all the supported output devices
@@ -13,8 +13,8 @@ You can disable all the output devices using the configure option
option "--enable-outdev=@var{OUTDEV}", or you can disable a particular
input device using the option "--disable-outdev=@var{OUTDEV}".
The option "-devices" of the ff* tools will display the list of
enabled output devices.
The option "-formats" of the ff* tools will display the list of
enabled output devices (amongst the muxers).
A description of the currently available output devices follows.
@@ -22,27 +22,11 @@ A description of the currently available output devices follows.
ALSA (Advanced Linux Sound Architecture) output device.
@subsection Examples
@itemize
@item
Play a file on default ALSA device:
@example
ffmpeg -i INPUT -f alsa default
@end example
@item
Play a file on soundcard 1, audio device 7:
@example
ffmpeg -i INPUT -f alsa hw:1,7
@end example
@end itemize
@section caca
CACA output device.
This output device allows one to show a video stream in CACA window.
This output device allows to show a video stream in CACA window.
Only one CACA window is allowed per application, so you can
have only one instance of this output device in an application.
@@ -120,207 +104,15 @@ ffmpeg -i INPUT -pix_fmt rgb24 -f caca -list_dither colors -
@end example
@end itemize
@section decklink
The decklink output device provides playback capabilities for Blackmagic
DeckLink devices.
To enable this output device, you need the Blackmagic DeckLink SDK and you
need to configure with the appropriate @code{--extra-cflags}
and @code{--extra-ldflags}.
On Windows, you need to run the IDL files through @command{widl}.
DeckLink is very picky about the formats it supports. Pixel format is always
uyvy422; framerate and video size must be determined for your device with
@command{-list_formats 1}. Audio sample rate is always 48 kHz.
@subsection Options
@table @option
@item list_devices
If set to @option{true}, print a list of devices and exit.
Defaults to @option{false}.
@item list_formats
If set to @option{true}, print a list of supported formats and exit.
Defaults to @option{false}.
@item preroll
Amount of time to preroll video in seconds.
Defaults to @option{0.5}.
@end table
@subsection Examples
@itemize
@item
List output devices:
@example
ffmpeg -i test.avi -f decklink -list_devices 1 dummy
@end example
@item
List supported formats:
@example
ffmpeg -i test.avi -f decklink -list_formats 1 'DeckLink Mini Monitor'
@end example
@item
Play video clip:
@example
ffmpeg -i test.avi -f decklink -pix_fmt uyvy422 'DeckLink Mini Monitor'
@end example
@item
Play video clip with non-standard framerate or video size:
@example
ffmpeg -i test.avi -f decklink -pix_fmt uyvy422 -s 720x486 -r 24000/1001 'DeckLink Mini Monitor'
@end example
@end itemize
@section fbdev
Linux framebuffer output device.
The Linux framebuffer is a graphic hardware-independent abstraction
layer to show graphics on a computer monitor, typically on the
console. It is accessed through a file device node, usually
@file{/dev/fb0}.
For more detailed information read the file
@file{Documentation/fb/framebuffer.txt} included in the Linux source tree.
@subsection Options
@table @option
@item xoffset
@item yoffset
Set x/y coordinate of top left corner. Default is 0.
@end table
@subsection Examples
Play a file on framebuffer device @file{/dev/fb0}.
Required pixel format depends on current framebuffer settings.
@example
ffmpeg -re -i INPUT -vcodec rawvideo -pix_fmt bgra -f fbdev /dev/fb0
@end example
See also @url{http://linux-fbdev.sourceforge.net/}, and fbset(1).
@section opengl
OpenGL output device.
To enable this output device you need to configure FFmpeg with @code{--enable-opengl}.
This output device allows one to render to OpenGL context.
The context may be provided by the application, or a default SDL window is created.
When the device renders to an external context, the application must implement handlers for the following messages:
@code{AV_DEV_TO_APP_CREATE_WINDOW_BUFFER} - create OpenGL context on current thread.
@code{AV_DEV_TO_APP_PREPARE_WINDOW_BUFFER} - make OpenGL context current.
@code{AV_DEV_TO_APP_DISPLAY_WINDOW_BUFFER} - swap buffers.
@code{AV_DEV_TO_APP_DESTROY_WINDOW_BUFFER} - destroy OpenGL context.
The application is also required to inform the device about the current resolution by sending the @code{AV_APP_TO_DEV_WINDOW_SIZE} message.
@subsection Options
@table @option
@item background
Set the background color. Black is the default.
@item no_window
Disables the default SDL window when set to a non-zero value.
The application must provide an OpenGL context and both @code{window_size_cb} and @code{window_swap_buffers_cb} callbacks when set.
@item window_title
Set the SDL window title. If not specified, it defaults to the filename specified for the output device.
Ignored when @option{no_window} is set.
@item window_size
Set preferred window size, can be a string of the form widthxheight or a video size abbreviation.
If not specified it defaults to the size of the input video, downscaled according to the aspect ratio.
Mostly usable when @option{no_window} is not set.
@end table
@subsection Examples
Play a file on SDL window using OpenGL rendering:
@example
ffmpeg -i INPUT -f opengl "window title"
@end example
@section oss
OSS (Open Sound System) output device.
@section pulse
PulseAudio output device.
To enable this output device you need to configure FFmpeg with @code{--enable-libpulse}.
More information about PulseAudio can be found on @url{http://www.pulseaudio.org}
@subsection Options
@table @option
@item server
Connect to a specific PulseAudio server, specified by an IP address.
Default server is used when not provided.
@item name
Specify the application name PulseAudio will use when showing active clients,
by default it is the @code{LIBAVFORMAT_IDENT} string.
@item stream_name
Specify the stream name PulseAudio will use when showing active streams,
by default it is set to the specified output name.
@item device
Specify the device to use. Default device is used when not provided.
List of output devices can be obtained with command @command{pactl list sinks}.
@item buffer_size
@item buffer_duration
Control the size and duration of the PulseAudio buffer. A small buffer
gives more control, but requires more frequent updates.
@option{buffer_size} specifies size in bytes while
@option{buffer_duration} specifies duration in milliseconds.
When both options are provided, the highest value is used
(the duration is recalculated to bytes using stream parameters). If they
are set to 0 (the default), the device will use the default
PulseAudio duration value. By default PulseAudio sets the buffer duration
to around 2 seconds.
@item prebuf
Specify pre-buffering size in bytes. The server does not start playback
before at least @option{prebuf} bytes are available in the
buffer. By default this option is initialized to the same value as
@option{buffer_size} or @option{buffer_duration} (whichever is bigger).
@item minreq
Specify minimum request size in bytes. The server does not request less
than @option{minreq} bytes from the client; instead it waits until the buffer
is free enough to request more bytes at once. It is recommended not to set
this option; the server will then initialize it to a value it deems sensible.
@end table
@subsection Examples
Play a file on default device on default server:
@example
ffmpeg -i INPUT -f pulse "stream name"
@end example
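The buffering options above can also be given on the @command{ffmpeg} command
line; the following is only a rough sketch, and the client name, buffer
duration and stream name are arbitrary placeholders:
@example
ffmpeg -i INPUT -f pulse -name "ffmpeg" -buffer_duration 50 "stream name"
@end example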
@section sdl
SDL (Simple DirectMedia Layer) output device.
This output device allows one to show a video stream in an SDL
This output device allows to show a video stream in an SDL
window. Only one SDL window is allowed per application, so you can
have only one instance of this output device in an application.
@@ -347,20 +139,6 @@ Set the SDL window size, can be a string of the form
@var{width}x@var{height} or a video size abbreviation.
If not specified it defaults to the size of the input video,
downscaled according to the aspect ratio.
@item window_fullscreen
Set fullscreen mode when non-zero value is provided.
Default value is zero.
@end table
@subsection Interactive commands
The window created by the device can be controlled through the
following interactive commands.
@table @key
@item q, ESC
Quit the device immediately.
@end table
@subsection Examples
@@ -379,7 +157,7 @@ sndio audio output device.
XV (XVideo) output device.
This output device allows one to show a video stream in a X Window System
This output device allows to show a video stream in a X Window System
window.
@subsection Options
@@ -406,26 +184,19 @@ For example, @code{dual-headed:0.1} would specify screen 1 of display
Check the X11 specification for more detailed information about the
display name format.
@item window_id
When set to a non-zero value the device does not create a new window,
but uses the existing one with the provided @var{window_id}. By default
this option is set to zero and the device creates its own window.
@item window_size
Set the created window size, can be a string of the form
@var{width}x@var{height} or a video size abbreviation. If not
specified it defaults to the size of the input video.
Ignored when @var{window_id} is set.
@item window_x
@item window_y
Set the X and Y window offsets for the created window. They are both
set to 0 by default. The values may be ignored by the window manager.
Ignored when @var{window_id} is set.
@item window_title
Set the window title, if not specified default to the filename
specified for the output device. Ignored when @var{window_id} is set.
specified for the output device.
@end table
For more information about XVideo see @url{http://www.x.org/}.
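As a rough illustration of the options above, assuming a local X display
@code{:0.0} that accepts yuv420p; the sizes, offsets and title are
placeholders:
@example
ffmpeg -re -i INPUT -vcodec rawvideo -pix_fmt yuv420p -f xv \
       -window_size 640x480 -window_x 100 -window_y 100 -window_title "FFmpeg xv" :0.0
@end example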


@@ -24,20 +24,6 @@ If not, then you should install a different compiler that has no
hard-coded path to gas. In the worst case pass @code{--disable-asm}
to configure.
@section Advanced linking configuration
If you compiled FFmpeg libraries statically and you want to use them to
build your own shared library, you may need to force PIC support (with
@code{--enable-pic} during FFmpeg configure) and add the following option
to your project LDFLAGS:
@example
-Wl,-Bsymbolic
@end example
If your target platform requires position independent binaries, you should
pass the correct linking flag (e.g. @code{-pie}) to @code{--extra-ldexeflags}.
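As a rough sketch of such a setup (the library and object names below are
made up for illustration):
@example
./configure --enable-pic --enable-static --disable-shared
make && make install
gcc -shared -o libmyplayer.so myplayer.o -Wl,-Bsymbolic \
    -lavformat -lavcodec -lavutil
@end example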
@section BSD
BSD make will not build FFmpeg; you need to install and use GNU Make
@@ -65,15 +51,14 @@ The toolchain provided with Xcode is sufficient to build the basic
unaccelerated code.
Mac OS X on PowerPC or ARM (iPhone) requires a preprocessor from
@url{https://github.com/FFmpeg/gas-preprocessor} or
@url{https://github.com/yuvi/gas-preprocessor} (currently outdated) to build the optimized
assembly functions. Put the Perl script somewhere
@url{http://github.com/yuvi/gas-preprocessor} to build the optimized
assembler functions. Just download the Perl script and put it somewhere
in your PATH, FFmpeg's configure will pick it up automatically.
Mac OS X on amd64 and x86 requires @command{yasm} to build most of the
optimized assembly functions. @uref{http://www.finkproject.org/, Fink},
optimized assembler functions. @uref{http://www.finkproject.org/, Fink},
@uref{http://www.gentoo.org/proj/en/gentoo-alt/prefix/bootstrap-macos.xml, Gentoo Prefix},
@uref{https://mxcl.github.com/homebrew/, Homebrew}
@uref{http://mxcl.github.com/homebrew/, Homebrew}
or @uref{http://www.macports.org, MacPorts} can easily provide it.
@@ -123,16 +108,14 @@ libavformat) as DLLs.
@section Microsoft Visual C++ or Intel C++ Compiler for Windows
FFmpeg can be built with MSVC 2012 or earlier using a C99-to-C89 conversion utility
and wrapper, or with MSVC 2013 and ICL natively.
FFmpeg can be built with MSVC or ICL using a C99-to-C89 conversion utility and
wrapper. For ICL, only the wrapper is used, since ICL supports C99.
You will need the following prerequisites:
@itemize
@item @uref{https://github.com/libav/c99-to-c89/, C99-to-C89 Converter & Wrapper}
(if using MSVC 2012 or earlier)
@item @uref{http://download.videolan.org/pub/contrib/c99-to-c89/, C99-to-C89 Converter & Wrapper}
@item @uref{http://code.google.com/p/msinttypes/, msinttypes}
(if using MSVC 2012 or earlier)
@item @uref{http://www.mingw.org/, MSYS}
@item @uref{http://yasm.tortall.net/, YASM}
@item @uref{http://gnuwin32.sourceforge.net/packages/bc.htm, bc for Windows} if
@@ -142,16 +125,14 @@ you want to run @uref{fate.html, FATE}.
To set up a proper environment in MSYS, you need to run @code{msys.bat} from
the Visual Studio or Intel Compiler command prompt.
Place @code{yasm.exe} somewhere in your @code{PATH}. If using MSVC 2012 or
earlier, place @code{c99wrap.exe} and @code{c99conv.exe} somewhere in your
@code{PATH} as well.
Place @code{makedef}, @code{c99wrap.exe}, @code{c99conv.exe}, and @code{yasm.exe}
somewhere in your @code{PATH}.
Next, make sure any other headers and libs you want to use, such as zlib, are
located in a spot that the compiler can see. Do so by modifying the @code{LIB}
and @code{INCLUDE} environment variables to include the @strong{Windows-style}
paths to these directories. Alternatively, you can try and use the
@code{--extra-cflags}/@code{--extra-ldflags} configure options. If using MSVC
2012 or earlier, place @code{inttypes.h} somewhere the compiler can see too.
Next, make sure @code{inttypes.h} and any other headers and libs you want to use
are located in a spot that the compiler can see. Do so by modifying the @code{LIB}
and @code{INCLUDE} environment variables to include the @strong{Windows} paths to
these directories. Alternatively, you can try and use the
@code{--extra-cflags}/@code{--extra-ldflags} configure options.
Finally, run:
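(A typical invocation looks like the sketch below; use @code{--toolchain=icl}
instead when building with ICL, and add whatever extra flags your setup needs.)
@example
./configure --toolchain=msvc
make
make install
@end example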
@@ -201,9 +182,7 @@ can see.
@itemize
@item Visual Studio 2010 Pro and Express
@item Visual Studio 2012 Pro and Express
@item Visual Studio 2013 Pro and Express
@item Intel Composer XE 2013
@item Intel Composer XE 2013 SP1
@end itemize
Anything else is not officially supported.
@@ -278,7 +257,7 @@ llrint() in its C library.
Install your Cygwin with all the "Base" packages, plus the
following "Devel" ones:
@example
binutils, gcc4-core, make, git, mingw-runtime, texinfo
binutils, gcc4-core, make, git, mingw-runtime, texi2html
@end example
In order to run FATE you will also need the following "Utils" packages:


@@ -1,20 +1,20 @@
/*
* Copyright (c) 2012 Anton Khirnov
*
* This file is part of FFmpeg.
* This file is part of Libav.
*
* FFmpeg is free software; you can redistribute it and/or
* Libav is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* Libav is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* License along with Libav; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
@@ -27,9 +27,7 @@
#include <float.h>
#include "libavformat/avformat.h"
#include "libavformat/options_table.h"
#include "libavcodec/avcodec.h"
#include "libavcodec/options_table.h"
#include "libavutil/opt.h"
static void print_usage(void)
@@ -98,14 +96,18 @@ static void show_opts(const AVOption *opts, int per_stream)
static void show_format_opts(void)
{
#include "libavformat/options_table.h"
printf("@section Format AVOptions\n");
show_opts(avformat_options, 0);
show_opts(options, 0);
}
static void show_codec_opts(void)
{
#include "libavcodec/options_table.h"
printf("@section Codec AVOptions\n");
show_opts(avcodec_options, 1);
show_opts(options, 1);
}
int main(int argc, char **argv)


@@ -1,8 +1,8 @@
@chapter Protocols
@c man begin PROTOCOLS
Protocols are configured elements in FFmpeg that enable access to
resources that require specific protocols.
Protocols are configured elements in FFmpeg which allow to access
resources which require the use of a particular protocol.
When you configure your FFmpeg build, all the supported protocols are
enabled by default. You can list all available ones using the
@@ -117,19 +117,7 @@ ffmpeg -i "data:image/gif;base64,R0lGODdhCAAIAMIEAAAAAAAA//8AAP//AP/////////////
File access protocol.
Allow to read from or write to a file.
A file URL can have the form:
@example
file:@var{filename}
@end example
where @var{filename} is the path of the file to read.
An URL that does not have a protocol prefix will be assumed to be a
file URL. Depending on the build, an URL that looks like a Windows
path with the drive letter at the beginning will also be assumed to be
a file URL (usually not the case in builds for unix-like systems).
Allow to read from or read to a file.
For example to read from a file @file{input.mpeg} with @command{ffmpeg}
use the command:
@@ -137,19 +125,9 @@ use the command:
ffmpeg -i file:input.mpeg output.mpeg
@end example
This protocol accepts the following options:
@table @option
@item truncate
Truncate existing files on write, if set to 1. A value of 0 prevents
truncating. Default value is 1.
@item blocksize
Set I/O operation maximum block size, in bytes. Default value is
@code{INT_MAX}, which results in not limiting the requested block size.
Setting this value reasonably low improves user termination request reaction
time, which is valuable for files on slow medium.
@end table
The ff* tools default to the file protocol, that is, a resource
specified with the name "FILE.mpeg" is interpreted as the URL
"file:FILE.mpeg".
@section ftp
@@ -166,7 +144,7 @@ This protocol accepts the following options.
@table @option
@item timeout
Set timeout in microseconds of socket I/O operations used by the underlying low level
Set timeout of socket I/O operations used by the underlying low level
operation. By default it is set to -1, which means that the timeout is
not specified.
@@ -213,7 +191,7 @@ m3u8 files.
HTTP (Hyper Text Transfer Protocol).
This protocol accepts the following options:
This protocol accepts the following options.
@table @option
@item seekable
@@ -223,60 +201,51 @@ if set to -1 it will try to autodetect if it is seekable. Default
value is -1.
@item chunked_post
If set to 1 use chunked Transfer-Encoding for posts, default is 1.
@item content_type
Set a specific content type for the POST messages.
If set to 1 use chunked transfer-encoding for posts, default is 1.
@item headers
Set custom HTTP headers, can override built in default headers. The
value must be a string encoding the headers.
@item content_type
Force a content type.
@item user-agent
Override User-Agent header. If not specified the protocol will use a
string describing the libavformat build.
@item multiple_requests
Use persistent connections if set to 1, default is 0.
Use persistent connections if set to 1. By default it is 0.
@item post_data
Set custom HTTP post data.
@item user-agent
@item user_agent
Override the User-Agent header. If not specified the protocol will use a
string describing the libavformat build. ("Lavf/<version>")
@item timeout
Set timeout in microseconds of socket I/O operations used by the underlying low level
Set timeout of socket I/O operations used by the underlying low level
operation. By default it is set to -1, which means that the timeout is
not specified.
@item mime_type
Export the MIME type.
Set MIME type.
@item icy
If set to 1 request ICY (SHOUTcast) metadata from the server. If the server
supports this, the metadata has to be retrieved by the application by reading
the @option{icy_metadata_headers} and @option{icy_metadata_packet} options.
The default is 1.
The default is 0.
@item icy_metadata_headers
If the server supports ICY metadata, this contains the ICY-specific HTTP reply
headers, separated by newline characters.
If the server supports ICY metadata, this contains the ICY specific HTTP reply
headers, separated with newline characters.
@item icy_metadata_packet
If the server supports ICY metadata, and @option{icy} was set to 1, this
contains the last non-empty metadata packet sent by the server. It should be
polled in regular intervals by applications interested in mid-stream metadata
updates.
contains the last non-empty metadata packet sent by the server.
@item cookies
Set the cookies to be sent in future requests. The format of each cookie is the
same as the value of a Set-Cookie HTTP response field. Multiple cookies can be
delimited by a newline character.
@item offset
Set initial byte offset.
@item end_offset
Try to limit the request to bytes preceding this offset.
@end table
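Several of these options are commonly given directly on the @command{ffmpeg}
or @command{ffplay} command line; a hedged sketch, with a placeholder URL and
header value, assuming the build passes such options through to the protocol:
@example
ffplay -user_agent "MyPlayer/1.0" -headers "Referer: http://example.com/" \
       http://example.com/stream.m3u8
@end example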
@subsection HTTP Cookies
@@ -293,50 +262,6 @@ The required syntax to play a stream specifying a cookie is:
ffplay -cookies "nlqptid=nltid=tsn; path=/; domain=somedomain.com;" http://somedomain.com/somestream.m3u8
@end example
@section Icecast
Icecast protocol (stream to Icecast servers)
This protocol accepts the following options:
@table @option
@item ice_genre
Set the stream genre.
@item ice_name
Set the stream name.
@item ice_description
Set the stream description.
@item ice_url
Set the stream website URL.
@item ice_public
Set if the stream should be public.
The default is 0 (not public).
@item user_agent
Override the User-Agent header. If not specified a string of the form
"Lavf/<version>" will be used.
@item password
Set the Icecast mountpoint password.
@item content_type
Set the stream content type. This must be set if it is different from
audio/mpeg.
@item legacy_icecast
This enables support for Icecast versions < 2.4.0, which do not support the
HTTP PUT method but only the SOURCE method.
@end table
@example
icecast://[@var{username}[:@var{password}]@@]@var{server}:@var{port}/@var{mountpoint}
@end example
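A rough end-to-end sketch, assuming the protocol options can be passed on the
command line as shown; the server, port, mountpoint and password are
placeholders:
@example
ffmpeg -re -i input.mp3 -c:a libmp3lame -content_type audio/mpeg -f mp3 \
       icecast://source:hackme@@myserver:8000/live.mp3
@end example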
@section mmst
MMS (Microsoft Media Server) protocol over TCP.
@@ -400,16 +325,6 @@ ffmpeg -i test.wav -f avi pipe:1 | cat > test.avi
ffmpeg -i test.wav -f avi pipe: | cat > test.avi
@end example
This protocol accepts the following options:
@table @option
@item blocksize
Set I/O operation maximum block size, in bytes. Default value is
@code{INT_MAX}, which results in not limiting the requested block size.
Setting this value reasonably low improves user termination request reaction
time, which is valuable if data transmission is slow.
@end table
Note that some formats (typically MOV) require the output protocol to
be seekable, so they will fail with the pipe output protocol.
@@ -422,18 +337,12 @@ content across a TCP/IP network.
The required syntax is:
@example
rtmp://[@var{username}:@var{password}@@]@var{server}[:@var{port}][/@var{app}][/@var{instance}][/@var{playpath}]
rtmp://@var{server}[:@var{port}][/@var{app}][/@var{instance}][/@var{playpath}]
@end example
The accepted parameters are:
@table @option
@item username
An optional username (mostly for publishing).
@item password
An optional password (mostly for publishing).
@item server
The address of the RTMP server.
@@ -484,8 +393,7 @@ times to construct arbitrary AMF sequences.
@item rtmp_flashver
Version of the Flash plugin used to run the SWF player. The default
is LNX 9,0,124,2. (When publishing, the default is FMLE/3.0 (compatible;
<libavformat version>).)
is LNX 9,0,124,2.
@item rtmp_flush_interval
Number of packets flushed in the same request (RTMPT only). The default
@@ -535,12 +443,6 @@ For example to read with @command{ffplay} a multimedia resource named
ffplay rtmp://myserver/vod/sample
@end example
To publish to a password protected server, passing the playpath and
app names separately:
@example
ffmpeg -re -i <input> -f flv -rtmp_playpath some/long/path -rtmp_app long/app/name rtmp://username:password@@myserver/
@end example
@section rtmpe
Encrypted Real-Time Messaging Protocol.
@@ -581,72 +483,7 @@ The Real-Time Messaging Protocol tunneled through HTTPS (RTMPTS) is used
for streaming multimedia content within HTTPS requests to traverse
firewalls.
@section libsmbclient
libsmbclient permits one to manipulate CIFS/SMB network resources.
The following syntax is required.
@example
smb://[[domain:]user[:password@@]]server[/share[/path[/file]]]
@end example
This protocol accepts the following options.
@table @option
@item timeout
Set timeout in milliseconds of socket I/O operations used by the underlying
low level operation. By default it is set to -1, which means that the timeout
is not specified.
@item truncate
Truncate existing files on write, if set to 1. A value of 0 prevents
truncating. Default value is 1.
@item workgroup
Set the workgroup used for making connections. By default workgroup is not specified.
@end table
For more information see: @url{http://www.samba.org/}.
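For instance, to play a file from a share (every name below is a placeholder):
@example
ffplay smb://mydomain:myuser:mypassword@@fileserver/share/path/movie.avi
@end example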
@section libssh
Secure File Transfer Protocol via libssh
Allow to read from or write to remote resources using SFTP protocol.
The following syntax is required.
@example
sftp://[user[:password]@@]server[:port]/path/to/remote/resource.mpeg
@end example
This protocol accepts the following options.
@table @option
@item timeout
Set timeout of socket I/O operations used by the underlying low level
operation. By default it is set to -1, which means that the timeout
is not specified.
@item truncate
Truncate existing files on write, if set to 1. A value of 0 prevents
truncating. Default value is 1.
@item private_key
Specify the path of the file containing the private key to use during authorization.
By default libssh searches for keys in the @file{~/.ssh/} directory.
@end table
Example: Play a file stored on remote server.
@example
ffplay sftp://user:password@@server_address:22/home/user/resource.mpeg
@end example
@section librtmp rtmp, rtmpe, rtmps, rtmpt, rtmpte
@section rtmp, rtmpe, rtmps, rtmpt, rtmpte
Real-Time Messaging Protocol and its variants supported through
librtmp.
@@ -688,75 +525,10 @@ ffplay "rtmp://myserver/live/mystream live=1"
@section rtp
Real-time Transport Protocol.
The required syntax for an RTP URL is:
rtp://@var{hostname}[:@var{port}][?@var{option}=@var{val}...]
@var{port} specifies the RTP port to use.
The following URL options are supported:
@table @option
@item ttl=@var{n}
Set the TTL (Time-To-Live) value (for multicast only).
@item rtcpport=@var{n}
Set the remote RTCP port to @var{n}.
@item localrtpport=@var{n}
Set the local RTP port to @var{n}.
@item localrtcpport=@var{n}
Set the local RTCP port to @var{n}.
@item pkt_size=@var{n}
Set max packet size (in bytes) to @var{n}.
@item connect=0|1
Do a @code{connect()} on the UDP socket (if set to 1) or not (if set
to 0).
@item sources=@var{ip}[,@var{ip}]
List allowed source IP addresses.
@item block=@var{ip}[,@var{ip}]
List disallowed (blocked) source IP addresses.
@item write_to_source=0|1
Send packets to the source address of the latest received packet (if
set to 1) or to a default remote address (if set to 0).
@item localport=@var{n}
Set the local RTP port to @var{n}.
This is a deprecated option. Instead, @option{localrtpport} should be
used.
@end table
Important notes:
@enumerate
@item
If @option{rtcpport} is not set the RTCP port will be set to the RTP
port value plus 1.
@item
If @option{localrtpport} (the local RTP port) is not set any available
port will be used for the local RTP and RTCP ports.
@item
If @option{localrtcpport} (the local RTCP port) is not set it will be
set to the local RTP port value plus 1.
@end enumerate
Real-Time Protocol.
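A hedged sketch combining a few of the URL options above; the multicast
address, port and encoder choice are placeholders:
@example
ffmpeg -re -i input.mp4 -an -c:v libx264 -f rtp \
       "rtp://239.255.0.1:5004?ttl=16&pkt_size=1400"
@end example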
@section rtsp
Real-Time Streaming Protocol.
RTSP is not technically a protocol handler in libavformat; it is a demuxer
and muxer. The demuxer supports both normal RTSP (with data transferred
over RTP; this is used by e.g. Apple and Microsoft) and Real-RTSP (with
@@ -764,29 +536,21 @@ data transferred over RDT).
The muxer can be used to send a stream using RTSP ANNOUNCE to a server
supporting it (currently Darwin Streaming Server and Mischa Spiegelmock's
@uref{https://github.com/revmischa/rtsp-server, RTSP server}).
@uref{http://github.com/revmischa/rtsp-server, RTSP server}).
The required syntax for a RTSP url is:
@example
rtsp://@var{hostname}[:@var{port}]/@var{path}
@end example
Options can be set on the @command{ffmpeg}/@command{ffplay} command
line, or set in code via @code{AVOption}s or in
@code{avformat_open_input}.
The following options (set on the @command{ffmpeg}/@command{ffplay} command
line, or set in code via @code{AVOption}s or in @code{avformat_open_input}),
are supported:
The following options are supported.
Flags for @code{rtsp_transport}:
@table @option
@item initial_pause
Do not start playing the stream immediately if set to 1. Default value
is 0.
@item rtsp_transport
Set RTSP transport protocols.
It accepts the following values:
@table @samp
@item udp
Use UDP as lower transport protocol.
@@ -804,56 +568,15 @@ passing proxies.
Multiple lower transport protocols may be specified, in that case they are
tried one at a time (if the setup of one fails, the next one is tried).
For the muxer, only the @samp{tcp} and @samp{udp} options are supported.
For the muxer, only the @code{tcp} and @code{udp} options are supported.
@item rtsp_flags
Set RTSP flags.
Flags for @code{rtsp_flags}:
The following values are accepted:
@table @samp
@table @option
@item filter_src
Accept packets only from negotiated peer address and port.
@item listen
Act as a server, listening for an incoming connection.
@item prefer_tcp
Try TCP for RTP transport first, if TCP is available as RTSP RTP transport.
@end table
Default value is @samp{none}.
@item allowed_media_types
Set media types to accept from the server.
The following flags are accepted:
@table @samp
@item video
@item audio
@item data
@end table
By default it accepts all media types.
@item min_port
Set minimum local UDP port. Default value is 5000.
@item max_port
Set maximum local UDP port. Default value is 65000.
@item timeout
Set maximum timeout (in seconds) to wait for incoming connections.
A value of -1 means infinite (default). This option implies the
@option{rtsp_flags} set to @samp{listen}.
@item reorder_queue_size
Set number of packets to buffer for handling of reordered packets.
@item stimeout
Set socket TCP I/O timeout in microseconds.
@item user-agent
Override User-Agent header. If not specified, it defaults to the
libavformat identifier string.
@end table
When receiving data over UDP, the demuxer tries to reorder received packets
@@ -866,36 +589,36 @@ streams to display can be chosen with @code{-vst} @var{n} and
@code{-ast} @var{n} for video and audio respectively, and can be switched
on the fly by pressing @code{v} and @code{a}.
@subsection Examples
Example command lines:
The following examples all make use of the @command{ffplay} and
@command{ffmpeg} tools.
To watch a stream over UDP, with a max reordering delay of 0.5 seconds:
@itemize
@item
Watch a stream over UDP, with a max reordering delay of 0.5 seconds:
@example
ffplay -max_delay 500000 -rtsp_transport udp rtsp://server/video.mp4
@end example
@item
Watch a stream tunneled over HTTP:
To watch a stream tunneled over HTTP:
@example
ffplay -rtsp_transport http rtsp://server/video.mp4
@end example
@item
Send a stream in realtime to a RTSP server, for others to watch:
To send a stream in realtime to a RTSP server, for others to watch:
@example
ffmpeg -re -i @var{input} -f rtsp -muxdelay 0.1 rtsp://server/live.sdp
@end example
@item
Receive a stream in realtime:
To receive a stream in realtime:
@example
ffmpeg -rtsp_flags listen -i rtsp://ownaddress/live.sdp @var{output}
@end example
@end itemize
@table @option
@item stimeout
Socket IO timeout in micro seconds.
@end table
@section sap
@@ -1033,109 +756,57 @@ this binary block are used as master key, the following 14 bytes are
used as master salt.
@end table
@section subfile
Virtually extract a segment of a file or another stream.
The underlying stream must be seekable.
Accepted options:
@table @option
@item start
Start offset of the extracted segment, in bytes.
@item end
End offset of the extracted segment, in bytes.
@end table
Examples:
Extract a chapter from a DVD VOB file (start and end sectors obtained
externally and multiplied by 2048):
@example
subfile,,start,153391104,end,268142592,,:/media/dvd/VIDEO_TS/VTS_08_1.VOB
@end example
Play an AVI file directly from a TAR archive:
@example
subfile,,start,183241728,end,366490624,,:archive.tar
@end example
@section tcp
Transmission Control Protocol.
Trasmission Control Protocol.
The required syntax for a TCP url is:
@example
tcp://@var{hostname}:@var{port}[?@var{options}]
@end example
@var{options} contains a list of &-separated options of the form
@var{key}=@var{val}.
The list of supported options follows.
@table @option
@item listen=@var{1|0}
Listen for an incoming connection. Default value is 0.
@item listen
Listen for an incoming connection
@item timeout=@var{microseconds}
Set raise error timeout, expressed in microseconds.
In read mode: if no data arrived in more than this time interval, raise error.
In write mode: if socket cannot be written in more than this time interval, raise error.
This also sets timeout on TCP connection establishing.
This option is only relevant in read mode: if no data arrived in more
than this time interval, raise error.
@item listen_timeout=@var{microseconds}
Set listen timeout, expressed in microseconds.
@end table
The following example shows how to setup a listening TCP connection
with @command{ffmpeg}, which is then accessed with @command{ffplay}:
@example
ffmpeg -i @var{input} -f @var{format} tcp://@var{hostname}:@var{port}?listen
ffplay tcp://@var{hostname}:@var{port}
@end example
@end table
@section tls
Transport Layer Security (TLS) / Secure Sockets Layer (SSL)
Transport Layer Security/Secure Sockets Layer
The required syntax for a TLS/SSL url is:
@example
tls://@var{hostname}:@var{port}[?@var{options}]
@end example
The following parameters can be set via command line options
(or in code via @code{AVOption}s):
@table @option
@item ca_file, cafile=@var{filename}
A file containing certificate authority (CA) root certificates to treat
as trusted. If the linked TLS library contains a default this might not
need to be specified for verification to work, but not all libraries and
setups have defaults built in.
The file must be in OpenSSL PEM format.
@item listen
Act as a server, listening for an incoming connection.
@item tls_verify=@var{1|0}
If enabled, try to verify the peer that we are communicating with.
Note, if using OpenSSL, this currently only makes sure that the
peer certificate is signed by one of the root certificates in the CA
database, but it does not validate that the certificate actually
matches the host name we are trying to connect to. (With GnuTLS,
the host name is validated as well.)
@item cafile=@var{filename}
Certificate authority file. The file must be in OpenSSL PEM format.
This is disabled by default since it requires a CA database to be
provided by the caller in many cases.
@item cert=@var{filename}
Certificate file. The file must be in OpenSSL PEM format.
@item cert_file, cert=@var{filename}
A file containing a certificate to use in the handshake with the peer.
(When operating as server, in listen mode, this is more often required
by the peer, while client certificates only are mandated in certain
setups.)
@item key=@var{filename}
Private key file.
@item key_file, key=@var{filename}
A file containing the private key for the certificate.
@item listen=@var{1|0}
If enabled, listen for connections on the provided port, and assume
the server role in the handshake instead of the client role.
@item verify=@var{0|1}
Verify the peer's certificate.
@end table
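For example, the @option{listen}, @option{cert} and @option{key} options can
also be given as URL options; a server-side sketch with placeholder
certificate paths:
@example
ffmpeg -i @var{input} -f @var{format} "tls://@var{hostname}:@var{port}?listen&cert=server.crt&key=server.key"
@end example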
@@ -1157,7 +828,7 @@ ffplay tls://@var{hostname}:@var{port}
User Datagram Protocol.
The required syntax for an UDP URL is:
The required syntax for a UDP url is:
@example
udp://@var{hostname}:@var{port}[?@var{options}]
@end example
@@ -1165,17 +836,17 @@ udp://@var{hostname}:@var{port}[?@var{options}]
@var{options} contains a list of &-separated options of the form @var{key}=@var{val}.
In case threading is enabled on the system, a circular buffer is used
to store the incoming data, which allows one to reduce loss of data due to
to store the incoming data, which allows to reduce loss of data due to
UDP socket buffer overruns. The @var{fifo_size} and
@var{overrun_nonfatal} options are related to this buffer.
The list of supported options follows.
@table @option
@item buffer_size=@var{size}
Set the UDP maximum socket buffer size in bytes. This is used to set either
the receive or send buffer size, depending on what the socket is used for.
Default is 64KB. See also @var{fifo_size}.
Set the UDP socket buffer size in bytes. This is used both for the
receiving and the sending buffer size.
@item localport=@var{port}
Override the local UDP port to bind with.
@@ -1222,59 +893,24 @@ Survive in case of UDP receiving circular buffer overrun. Default
value is 0.
@item timeout=@var{microseconds}
Set raise error timeout, expressed in microseconds.
This option is only relevant in read mode: if no data arrived in more
than this time interval, raise error.
@item broadcast=@var{1|0}
Explicitly allow or disallow UDP broadcasting.
Note that broadcasting may not work properly on networks having
a broadcast storm protection.
In read mode: if no data arrived in more than this time interval, raise error.
@end table
@subsection Examples
Some usage examples of the UDP protocol with @command{ffmpeg} follow.
@itemize
@item
Use @command{ffmpeg} to stream over UDP to a remote endpoint:
To stream over UDP to a remote endpoint:
@example
ffmpeg -i @var{input} -f @var{format} udp://@var{hostname}:@var{port}
@end example
@item
Use @command{ffmpeg} to stream in mpegts format over UDP using 188
sized UDP packets, using a large input buffer:
To stream in mpegts format over UDP using 188 sized UDP packets, using a large input buffer:
@example
ffmpeg -i @var{input} -f mpegts udp://@var{hostname}:@var{port}?pkt_size=188&buffer_size=65535
@end example
@item
Use @command{ffmpeg} to receive over UDP from a remote endpoint:
To receive over UDP from a remote endpoint:
@example
ffmpeg -i udp://[@var{multicast-address}]:@var{port} ...
ffmpeg -i udp://[@var{multicast-address}]:@var{port}
@end example
@end itemize
@section unix
Unix local socket
The required syntax for a Unix socket URL is:
@example
unix://@var{filepath}
@end example
The following parameters can be set via command line options
(or in code via @code{AVOption}s):
@table @option
@item timeout
Timeout in ms.
@item listen
Create the Unix socket in listening mode.
@end table
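A rough usage sketch, sending MPEG-TS to a socket created beforehand by a
separate listener; the socket path is a placeholder, and OpenBSD netcat is
just one way to create the listening end:
@example
nc -lU /tmp/ffmpeg.sock &
ffmpeg -i INPUT -f mpegts unix:///tmp/ffmpeg.sock
@end example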
@c man end PROTOCOLS


@@ -42,11 +42,10 @@ Set the internal sample format. Default value is @code{none}.
This will automatically be chosen when it is not explicitly set.
@item icl, in_channel_layout
@item ocl, out_channel_layout
Set the input/output channel layout.
Set the input channel layout.
See @ref{channel layout syntax,,the Channel Layout section in the ffmpeg-utils(1) manual,ffmpeg-utils}
for the required syntax.
@item ocl, out_channel_layout
Set the output channel layout.
@item clev, center_mix_level
Set the center mix level. It is a value expressed in deciBel, and must be
@@ -64,11 +63,6 @@ be in the interval [-32,32].
@item rmvol, rematrix_volume
Set rematrix volume. Default value is 1.0.
@item rematrix_maxval
Set maximum output value for rematrixing.
This can be used to prevent clipping, at the cost of reducing the volume.
A value of 1.0 prevents clipping.
@item flags, swr_flags
Set flags used by the converter. Default value is 0.


@@ -1,4 +1,3 @@
@anchor{scaler_options}
@chapter Scaler Options
@c man begin SCALER OPTIONS
@@ -10,7 +9,6 @@ FFmpeg tools. For programmatic use, they can be set explicitly in the
@table @option
@anchor{sws_flags}
@item sws_flags
Set the scaler flags. This is also used to set the scaling
algorithm. Only a single algorithm should be selected.
@@ -35,7 +33,7 @@ Select nearest neighbor rescaling algorithm.
@item area
Select averaging area rescaling algorithm.
@item bicublin
@item bicubiclin
Select bicubic scaling algorithm for the luma component, bilinear for
chroma components.
@@ -96,32 +94,6 @@ Set scaling algorithm parameters. The specified values are specific of
some scaling algorithms and ignored by others. The specified values
are floating point number values.
@item sws_dither
Set the dithering algorithm. Accepts one of the following
values. Default value is @samp{auto}.
@table @samp
@item auto
automatic choice
@item none
no dithering
@item bayer
bayer dither
@item ed
error diffusion dither
@item a_dither
arithmetic dither, based on addition
@item x_dither
arithmetic dither, based on xor (more random/less apparent patterning than
a_dither).
@end table
@end table
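A hedged command-line sketch using a couple of the options above; the
filenames are placeholders, and the options are assumed to be accepted
directly by the @command{ffmpeg} tool:
@example
ffmpeg -i input.mp4 -s 1280x720 -sws_flags lanczos -sws_dither a_dither output.mp4
@end example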
@c man end SCALER OPTIONS


@@ -50,10 +50,8 @@ header:
temporal_decomposition_count u header_state
spatial_decomposition_count u header_state
colorspace_type u header_state
if (nb_planes > 2) {
chroma_h_shift u header_state
chroma_v_shift u header_state
}
chroma_h_shift u header_state
chroma_v_shift u header_state
spatial_scalability b header_state
max_ref_frames-1 u header_state
qlogs
@@ -61,7 +59,7 @@ header:
if(!keyframe){
update_mc b header_state
if(update_mc){
for(plane=0; plane<nb_plane_types; plane++){
for(plane=0; plane<2; plane++){
diag_mc b header_state
htaps/2-1 u header_state
for(i= p->htaps/2; i; i--)
@@ -82,7 +80,7 @@ header:
block_max_depth s header_state
qlogs:
for(plane=0; plane<nb_plane_types; plane++){
for(plane=0; plane<2; plane++){
quant_table[plane][0][0] s header_state
for(level=0; level < spatial_decomposition_count; level++){
quant_table[plane][level][1]s header_state
@@ -133,10 +131,8 @@ block(level):
residual:
residual2(luma)
if (nb_planes > 2) {
residual2(chroma_cr)
residual2(chroma_cb)
}
residual2(chroma_cr)
residual2(chroma_cb)
residual2:
for(level=0; level<spatial_decomposition_count; level++){
@@ -150,7 +146,7 @@ residual2:
subband:
FIXME
nb_plane_types = gray ? 1 : 2;
Tag description:
----------------
@@ -172,11 +168,7 @@ spatial_decomposition_count
FIXME
colorspace_type
0 unspecified YcbCr
1 Gray
2 Gray + Alpha
3 GBR
4 GBRA
0
this MUST NOT change within a bitstream
chroma_h_shift
@@ -618,6 +610,7 @@ flip wavelet?
try to use the wavelet transformed predicted image (motion compensated image) as context for coding the residual coefficients
try the MV length as context for coding the residual coefficients
use extradata for stuff which is in the keyframes now?
the MV median predictor is patented IIRC
implement per picture halfpel interpolation
try different range coder state transition tables for different contexts

doc/soc.txt (new file)

@@ -0,0 +1,24 @@
Google Summer of Code and similar project guidelines
Summer of Code is a project by Google in which students are paid to implement
some nice new features for various participating open source projects ...
This text is a collection of things to take care of for the next soc as
it's a little late for this year's soc (2006).
The Goal:
Our goal with respect to SoC is, and must be, of course exactly one thing:
to improve FFmpeg. To reach this goal, code must
* conform to the development policy and patch submission guidelines
* must improve FFmpeg somehow (faster, smaller, "better",
more codecs supported, fewer bugs, cleaner, ...)
For mentors and other developers to help students reach that goal, it is
essential that changes to their codebase are publicly visible, clean and
easily reviewable. That again leads us to:
* use of a revision control system like git
* separation of cosmetic from non-cosmetic changes (this was almost entirely
ignored by mentors and students in SoC 2006, which might lead to a surprise
when the code is reviewed at the end before a possible inclusion in
FFmpeg; individual changes were generally not reviewable due to cosmetics).
* frequent commits, so that comments can be provided early

doc/style.min.css (vendored): diff suppressed because one or more lines are too long


@@ -1,35 +1,25 @@
# Init file for texi2html.
# This is deprecated, and the makeinfo/texi2any version is doc/t2h.pm
# no horiz rules between sections
$end_section = \&FFmpeg_end_section;
sub FFmpeg_end_section($$)
{
}
my $TEMPLATE_HEADER1 = $ENV{"FFMPEG_HEADER1"} || <<EOT;
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<title>FFmpeg documentation</title>
<link rel="stylesheet" href="bootstrap.min.css" />
<link rel="stylesheet" href="style.min.css" />
$EXTRA_HEAD =
'<link rel="icon" href="favicon.png" type="image/png" />
';
$CSS_LINES = $ENV{"FFMPEG_CSS"} || <<EOT;
<link rel="stylesheet" type="text/css" href="default.css" />
EOT
my $TEMPLATE_HEADER2 = $ENV{"FFMPEG_HEADER2"} || <<EOT;
</head>
<body>
<div style="width: 95%; margin: auto">
my $TEMPLATE_HEADER = $ENV{"FFMPEG_HEADER"} || <<EOT;
<link rel="icon" href="favicon.png" type="image/png" />
</head>
<body>
<div id="container">
EOT
my $TEMPLATE_FOOTER = $ENV{"FFMPEG_FOOTER"} || <<EOT;
</div>
</body>
</html>
EOT
$PRE_BODY_CLOSE = '</div></div>';
$SMALL_RULE = '';
$BODYTEXT = '';
@@ -42,7 +32,7 @@ sub FFmpeg_print_page_foot($$)
T2H_DEFAULT_program_string() : program_string();
print $fh '<footer class="footer pagination-right">' . "\n";
print $fh '<span class="label label-info">' . $program_string;
print $fh "</span></footer></div></div></body>\n";
print $fh "</span></footer></div>\n";
}
$float = \&FFmpeg_float;
@@ -91,25 +81,23 @@ sub FFmpeg_print_page_head($$)
$longtitle = "FFmpeg documentation : " . $longtitle;
print $fh <<EOT;
$TEMPLATE_HEADER1
$description
<meta name="keywords" content="$longtitle">
<meta name="Generator" content="$Texi2HTML::THISDOC{program}">
<!DOCTYPE html>
<html>
$Texi2HTML::THISDOC{'copying'}<!-- Created on $Texi2HTML::THISDOC{today} by $Texi2HTML::THISDOC{program} -->
<!--
$Texi2HTML::THISDOC{program_authors}
-->
$encoding
$TEMPLATE_HEADER2
EOT
}
<head>
<title>$longtitle</title>
$print_page_foot = \&FFmpeg_print_page_foot;
sub FFmpeg_print_page_foot($$)
{
my $fh = shift;
print $fh <<EOT;
$TEMPLATE_FOOTER
$description
<meta name="keywords" content="$longtitle">
<meta name="resource-type" content="document">
<meta name="distribution" content="global">
<meta name="Generator" content="$Texi2HTML::THISDOC{program}">
$encoding
$CSS_LINES
$TEMPLATE_HEADER
EOT
}

Some files were not shown because too many files have changed in this diff.