Compare commits

..

132 Commits

Author SHA1 Message Date
Michael Niedermayer
6ba07e9948 update for 2.2-rc2
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-15 03:08:20 +01:00
Michael Niedermayer
125bea15d1 avcodec/libx265: fill headers in extradata
Fixes Ticket3457

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit dded5ed9c5)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-15 02:42:11 +01:00
Michael Niedermayer
70e3cc282b avutil/timestamp: Warn about missing __STDC_FORMAT_MACROS for C++ use
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 8b02dfd37c)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-15 02:42:11 +01:00
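
A minimal sketch of the point behind the warning (illustrative code, not part of the commit): libavutil/timestamp.h formats timestamps with the PRId64-style macros, and C++ translation units only get those from <inttypes.h> when __STDC_FORMAT_MACROS is defined before the first include.

    /* Defining __STDC_FORMAT_MACROS is what matters when the header is consumed
     * from C++; in plain C the PRI* macros are available unconditionally. */
    #define __STDC_FORMAT_MACROS
    #include <stdio.h>
    #include <libavutil/timestamp.h>

    int main(void)
    {
        printf("pts: %s\n", av_ts2str(1234567));  /* formats an int64_t timestamp */
        return 0;
    }
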
Michael Niedermayer
242df26b44 avformat/movenc: fix IMX
fixes Ticket3351

Tested-by: carl
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 72d44f1583)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-15 02:42:11 +01:00
Michael Niedermayer
46c2dba20e doc/texi2pod: fix encoding type
docs say:
'A document having more than one "=encoding" line should be considered an error. '

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 12ce58bebd)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-15 02:42:11 +01:00
Carl Eugen Hoyos
3caa6a5a57 Revert "Allow stream-copying grayscale mov files."
This reverts commit 691dec6201.

The commit did not fix ticket #3215, it was fixed one commit earlier.
The revert may break other use-cases but they should be fixed differently,
the offending commit introduced too many problems.

Fixes ticket #3377.
Fixes ticket #3378.
(cherry picked from commit 54bbe3e2a6)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-15 02:42:11 +01:00
Matt Oliver
bf08665e2e Fix modplug linkage on Windows.
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 99b48fd448)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-15 02:42:11 +01:00
Stephen Hutchinson
c4f5f4dbd3 doc/general.texi: Adjust the notes on AviSynth
FFmpeg provides local copies of these headers in compat/avisynth/,
and there is no restriction against using 2.5.

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 5336cd6374)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-15 02:41:27 +01:00
Michael Niedermayer
29df24252a Merge commit '2b9ee7d5b901e0d7ba617511e4ed31d3043894d3' into release/2.2
* commit '2b9ee7d5b901e0d7ba617511e4ed31d3043894d3':
  doc: Add section about AviSynth support

Conflicts:
	doc/general.texi

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-15 02:40:44 +01:00
Stephen Hutchinson
2b9ee7d5b9 doc: Add section about AviSynth support
Signed-off-by: Diego Biurrun <diego@biurrun.de>
(cherry picked from commit 908836e207)
2014-03-11 20:40:47 +01:00
Michael Niedermayer
f800cacada Merge remote-tracking branch 'qatar/release/10' into release/2.2
* qatar/release/10:
  lavf: always use av_free

See: 88c8e4afea
Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 13:21:56 +01:00
Michael Niedermayer
5227eac5b0 Merge commit '6d56bc9a6d853a33fe53ab63db580c4facaba420' into release/2.2
* commit '6d56bc9a6d853a33fe53ab63db580c4facaba420':
  lavf: simplify ff_hevc_annexb2mp4_buf

Conflicts:
	libavformat/hevc.c
	libavformat/hevc.h

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 13:13:42 +01:00
Michael Niedermayer
bb40f8f5e2 Merge commit '2c5e1d0933facc20c6926a788cce05d3e6cad149' into release/2.2
* commit '2c5e1d0933facc20c6926a788cce05d3e6cad149':
  configure: Use the right pkgconf file for openjpeg

Conflicts:
	configure

No change as the incorrect code wasn't in FFmpeg's configure

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 13:13:13 +01:00
Michael Niedermayer
ad8bf22086 Merge commit 'b37b83214ae3a462df1e8d3cc765ddbd2bfc73aa' into release/2.2
* commit 'b37b83214ae3a462df1e8d3cc765ddbd2bfc73aa':
  hevc: Use get_se_golomb_long

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 13:12:41 +01:00
Michael Niedermayer
7f8804296d Merge commit '6d7ab09788bdafffb3f3fc4f7feb262eb8cdf0b1' into release/2.2
* commit '6d7ab09788bdafffb3f3fc4f7feb262eb8cdf0b1':
  golomb: Add a get_se_golomb_long

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 12:16:27 +01:00
Michael Niedermayer
f67e75b5dc Merge commit '227cfc1f10a940c88ad3742ec805c07b6a5e7abb' into release/2.2
* commit '227cfc1f10a940c88ad3742ec805c07b6a5e7abb':
  vf_frei0r: adjust error messages

Conflicts:
	libavfilter/vf_frei0r.c

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 12:16:05 +01:00
Michael Niedermayer
35e63f35b0 Merge commit '416847d19593e87ee1704c26a9a638fd6b0d977c' into release/2.2
* commit '416847d19593e87ee1704c26a9a638fd6b0d977c':
  vf_frei0r: prevent a segfault when filter parameters are not set

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 12:15:51 +01:00
Michael Niedermayer
3bfb7a2537 Merge commit 'bd4ad1a1d52b8882df016826b8bdcf7b1009cb97' into release/2.2
* commit 'bd4ad1a1d52b8882df016826b8bdcf7b1009cb97':
  vf_frei0r: fix missing end of line character

Conflicts:
	libavfilter/vf_frei0r.c

No change as the token parsing change was not merged

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 12:15:21 +01:00
Michael Niedermayer
4a1e7a6fb7 Merge commit '6230de03aad9f26d5843afb913d196622e0b5b98' into release/2.2
* commit '6230de03aad9f26d5843afb913d196622e0b5b98':
  vf_frei0r: refactor library loading from env variable

Conflicts:
	configure
	libavfilter/vf_frei0r.c

Not merged, we use av_strtok() which leads to simpler code

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 12:14:41 +01:00
Michael Niedermayer
ff1e982205 Merge commit '45acc228a6d5f1e7d6c5ce6da63b293bd5eda57d' into release/2.2
* commit '45acc228a6d5f1e7d6c5ce6da63b293bd5eda57d':
  doc: fix a couple of typos in frame.h

Conflicts:
	libavutil/frame.h

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 12:09:13 +01:00
Michael Niedermayer
bb116e6ba3 Merge commit 'd37fac6dbbdddb76225aa691b83ffd9a0c7dae6b' into release/2.2
* commit 'd37fac6dbbdddb76225aa691b83ffd9a0c7dae6b':
  isom: lpcm in mov default to big endian
  movdec: handle 0x7fff langcode as macintosh per the specs

No change as these have been part of the branch previously

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 12:08:21 +01:00
Michael Niedermayer
ebe356bf1c avformat/hevc: fix mix of av_malloc() with free()
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 88c8e4afea)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 12:06:13 +01:00
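
A small sketch of the allocator-pairing rule the fix enforces (illustrative code, not taken from hevc.c): buffers obtained with av_malloc() must be released with av_free(); handing them to the libc free() is not guaranteed to be safe.

    #include <stdint.h>
    #include <libavutil/mem.h>

    int main(void)
    {
        uint8_t *buf = av_malloc(1024);
        if (!buf)
            return 1;
        /* ... fill and use buf ... */
        av_free(buf);  /* pair av_malloc() with av_free(), never with free() */
        return 0;
    }
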
Michael Niedermayer
30099413ec Merge commit 'eabefe83f40a65d0f0c2a9a0521f6d96c3932545' into release/2.2
* commit 'eabefe83f40a65d0f0c2a9a0521f6d96c3932545':
  movenc: allow muxing HEVC in MODE_MP4.

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 12:05:53 +01:00
Michael Niedermayer
186e0ff067 avformat/hevc: Make return codes consistent and more flexible
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 5d5e2bd862)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 12:05:29 +01:00
Michael Niedermayer
2642ad9f55 Merge commit 'eaa79b79b25ac0ceaf44fe575a3ae724b87285b2' into release/2.2
* commit 'eaa79b79b25ac0ceaf44fe575a3ae724b87285b2':
  movenc: enable Annex B to MP4 conversion for HEVC tracks.

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 12:04:54 +01:00
Michael Niedermayer
95ddd2227b avformat: fix hevc's use of golomb from avformat
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit cb403b2570)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 11:49:51 +01:00
Michael Niedermayer
3faebed6fa Merge commit 'c761379825ff0bf9dd191e244c4b2f7697fb2b3c' into release/2.2
* commit 'c761379825ff0bf9dd191e244c4b2f7697fb2b3c':
  movenc: write hvcC tag for HEVC.

Conflicts:
	libavformat/Makefile

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 11:48:10 +01:00
Michael Niedermayer
3aee1fa5b6 Merge commit 'ea3309eba715e83027e8ece4a226e39a4bf2a6ce' into release/2.2
* commit 'ea3309eba715e83027e8ece4a226e39a4bf2a6ce':
  movenc: use 'hev1' tag for HEVC in MODE_MOV.

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 11:47:51 +01:00
Michael Niedermayer
89a9c84ebb Merge commit '1c1e252cd1cbd5f59fe118c49f6d7207dbdfdbd4' into release/2.2
* commit '1c1e252cd1cbd5f59fe118c49f6d7207dbdfdbd4':
  movenc: Add a fallback fragmentation method for plain mp4 as well

Conflicts:
	libavformat/movenc.c

See: ef1aae6ea9
Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 11:46:55 +01:00
Michael Niedermayer
0adde39e04 Merge commit 'ca2c9d6b9bfadb64e1502594fdf745a391699890' into release/2.2
* commit 'ca2c9d6b9bfadb64e1502594fdf745a391699890':
  hevc: make pps/sps ids unsigned where necessary

Conflicts:
	libavcodec/hevc.h
	libavcodec/hevc_ps.c

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 11:39:46 +01:00
Michael Niedermayer
03ae616b19 Merge commit 'fa6b99d351ed483766a875054676a56fd8459774' into release/2.2
* commit 'fa6b99d351ed483766a875054676a56fd8459774':
  hevc: Do not turn 32bit timebases into negative numbers

Conflicts:
	libavcodec/hevc.c

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 11:39:31 +01:00
Michael Niedermayer
830c3058ff Merge commit 'd79cb6947e4a9c42ac20925dd920d3a0910d9a26' into release/2.2
* commit 'd79cb6947e4a9c42ac20925dd920d3a0910d9a26':
  hevc: use av_mallocz() for allocating tab_ipm

See: 26568c04a8
Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 11:29:06 +01:00
Michael Niedermayer
b12c5cbbb2 Merge commit '5aa4b29bbefc06fc2bbcb52af7a14393a1bcf504' into release/2.2
* commit '5aa4b29bbefc06fc2bbcb52af7a14393a1bcf504':
  hevc: Use get_bits_long() in decode_vui()

See: e15a57b67a
Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 11:28:23 +01:00
Michael Niedermayer
82c96b5ad8 Merge commit 'e4cbd0d6e5a7b3b850d72f4f4ef0124b27dbdcbd' into release/2.2
* commit 'e4cbd0d6e5a7b3b850d72f4f4ef0124b27dbdcbd':
  changelog: Cleanups and prepare for v10_beta2

Conflicts:
	Changelog

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 11:23:30 +01:00
Michael Niedermayer
3e4b957847 Merge commit '0ede7b534483c5c90f404a8f11f776d2f2da4e7e' into release/2.2
* commit '0ede7b534483c5c90f404a8f11f776d2f2da4e7e':
  float_dsp: fix errors in documentation

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 11:22:10 +01:00
Michael Niedermayer
cbabbe8220 Merge commit '5b933be089ab2657eb754ebf5b804ae43badf13d' into release/2.2
* commit '5b933be089ab2657eb754ebf5b804ae43badf13d':
  arm: vp3: remove incorrect const in ff_vp3_idct_dc_add_neon declaration

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 11:21:54 +01:00
Michael Niedermayer
80122a3af3 Merge commit 'f2693e98b449592ec0ed4979220814bf54e60a16' into release/2.2
* commit 'f2693e98b449592ec0ed4979220814bf54e60a16':
  build: Use pkg-config for openjpeg

Conflicts:
	configure

Not merged / merge just for metadata at the request of Carl

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 11:21:18 +01:00
Michael Niedermayer
a475755b3a Merge commit 'c3861e14ceace7ee69820091871173b4abcae311' into release/2.2
* commit 'c3861e14ceace7ee69820091871173b4abcae311':
  movenc: allow override of "writing application" tag

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 11:04:13 +01:00
Michael Niedermayer
66030e8133 Merge commit 'daa5a988e2ec8275ad8b724ea68f78306c271ae7' into release/2.2
* commit 'daa5a988e2ec8275ad8b724ea68f78306c271ae7':
  matroskaenc: allow override of "writing application" tag

Conflicts:
	libavformat/matroskaenc.c

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 11:03:30 +01:00
Michael Niedermayer
46f8d838b3 Merge commit 'db67b7c31b6fdd3747e2b5328945ad2091533698' into release/2.2
* commit 'db67b7c31b6fdd3747e2b5328945ad2091533698':
  rv10: Forward error from rv10_decode_packet

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 11:03:17 +01:00
Michael Niedermayer
bc3648d4b4 Merge commit 'a643a47d41f4924b66fce339e4b82aaee20825be' into release/2.2
* commit 'a643a47d41f4924b66fce339e4b82aaee20825be':
  fic: Properly handle skip frames

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 11:03:01 +01:00
Michael Niedermayer
27e6b4a3ff Merge commit '23af29e8825ac112877b9ac0572ef11e5f0539f2' into release/2.2
* commit '23af29e8825ac112877b9ac0572ef11e5f0539f2':
  arm: hpeldsp: fix put_pixels8_y2_{,no_rnd_}armv6

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 11:02:36 +01:00
Michael Niedermayer
b82860caa7 Merge commit '72a58c0772450993d375c6cf4b187a068f5bc765' into release/2.2
* commit '72a58c0772450993d375c6cf4b187a068f5bc765':
  Update default FATE URL for release/10

Conflicts:
	tests/Makefile

Merge for metadata only as we don't duplicate the FATE samples per release branch.
There's no need for that currently.

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 10:49:37 +01:00
Michael Niedermayer
3d05625136 Merge commit 'd5254230068e196a2496618c0d89cdfbc41f7478' into release/2.2
* commit 'd5254230068e196a2496618c0d89cdfbc41f7478':
  Revert "Add libx265 encoder"

Conflicts:
	Changelog
	LICENSE
	configure
	doc/general.texi
	libavcodec/allcodecs.c
	libavcodec/libx265.c

Not merged, release branches should only contain bugfixes;
a feature removal like this has to be discussed on ffmpeg-devel first

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 10:46:52 +01:00
Michael Niedermayer
ddd3301bad Merge commit '4b476e6aa4b830f919cf3c67ba2caa039ff285b9' into release/2.2
* commit '4b476e6aa4b830f919cf3c67ba2caa039ff285b9':
  configure: enable PIC on s390(x)

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 10:32:26 +01:00
Michael Niedermayer
123981930f Merge commit 'a1ab3300c83a16c2d5f5d29c51393668b9d92667' into release/2.2
* commit 'a1ab3300c83a16c2d5f5d29c51393668b9d92667':
  arm: hpeldsp: prevent overreads in armv6 asm

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 10:31:47 +01:00
Lukasz Marek
3171e2360a Revert "lavu/buffer: add release function"
This reverts commit 3144440004.

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit bba7b6fc41)

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-11 09:06:51 +01:00
Michael Niedermayer
3533a850e7 lavf: always use av_free
Signed-off-by: Tim Walker <tdskywalker@gmail.com>
(cherry picked from commit 77e9123fe5)
2014-03-10 19:53:51 -04:00
Tim Walker
6d56bc9a6d lavf: simplify ff_hevc_annexb2mp4_buf
Use ff_hevc_annexb2mp4 instead of duplicating
its functionality, and update the documentation
to match the new behavior.

(cherry picked from commit 34bbc81de8)
2014-03-10 19:53:44 -04:00
Luca Barbato
2c5e1d0933 configure: Use the right pkgconf file for openjpeg
The current release of version 1 uses libopenjpeg1.

(cherry picked from commit 4a8562394b)
2014-03-10 19:53:27 -04:00
Luca Barbato
b37b83214a hevc: Use get_se_golomb_long
Do not use inline functions that refer to tables present in other
libraries.

(cherry picked from commit ee17be3fdd)
2014-03-10 19:53:26 -04:00
Luca Barbato
6d7ab09788 golomb: Add a get_se_golomb_long
Useful in libavformat mostly.

(cherry picked from commit 5eacbb5328)
2014-03-10 19:53:04 -04:00
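
A hedged sketch of how such a helper would be used (the wrapper below is made up for illustration; get_bits.h and golomb.h are FFmpeg-internal headers, so this only builds inside the source tree): get_se_golomb_long() reads a signed Exp-Golomb value without the table-based fast path, which is what makes it safe to call from libavformat.

    #include "libavcodec/get_bits.h"
    #include "libavcodec/golomb.h"

    /* Hypothetical helper: parse one signed Exp-Golomb value from a raw buffer. */
    static int read_se_value(const uint8_t *buf, int size)
    {
        GetBitContext gb;
        if (init_get_bits8(&gb, buf, size) < 0)
            return 0;
        return get_se_golomb_long(&gb);
    }
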
Vittorio Giovara
227cfc1f10 vf_frei0r: adjust error messages
(cherry picked from commit 8accddeb58)
2014-03-09 17:54:01 -04:00
Vittorio Giovara
416847d195 vf_frei0r: prevent a segfault when filter parameters are not set
(cherry picked from commit 4e0be9c86f)
2014-03-09 17:53:47 -04:00
Vittorio Giovara
bd4ad1a1d5 vf_frei0r: fix missing end of line character
Error introduced in 61b323ce7c.

(cherry picked from commit 4c41a7a179)
2014-03-09 17:53:33 -04:00
Vittorio Giovara
6230de03aa vf_frei0r: refactor library loading from env variable
strtok_r is not needed any more, so remove it from configure.

(cherry picked from commit 61b323ce7c)
2014-03-09 17:53:24 -04:00
Janne Grunau
45acc228a6 doc: fix a couple of typos in frame.h
(cherry picked from commit a18ef7a76c)
2014-03-09 17:53:09 -04:00
Mark Himsley
d37fac6dbb isom: lpcm in mov default to big endian
It is my understanding that "Unless otherwise stated, all data in a
QuickTime movie is stored in big-endian byte ordering" [1] in MOV files.

I have a couple of thousand files, which technically are invalid because
their sound sample description element 4CC is 'lpcm' but its version is
0 - and "Version 0 supports only uncompressed audio in raw ('raw ') or
twos-complement ('twos') format" [2]

Because isom.c only contains a mapping for 4CC 'lpcm' to
AV_CODEC_ID_PCM_S16LE, these files have their audio decoded as LE when
it is actually BE.

This commit adds AV_CODEC_ID_PCM_S16BE as the first match for 4CC 'lpcm'.

[1]
https://developer.apple.com/library/mac/documentation/quicktime/QTFF/qtff.pdf
page 21
[2]
https://developer.apple.com/library/mac/documentation/quicktime/QTFF/qtff.pdf
page 178

Reviewed-by: Yusuke Nakamura <muken.the.vfrmaniac@gmail.com>
(cherry picked from commit 360022bd3b)
2014-03-09 17:50:54 -04:00
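
For illustration, a hedged sketch of the kind of codec-tag mapping this describes (a stand-in table, not the literal ff_codec_movaudio_tags entries): listing the big-endian PCM codec first makes a bare 'lpcm' sample description resolve to AV_CODEC_ID_PCM_S16BE.

    #include <stdint.h>
    #include <libavcodec/avcodec.h>  /* AV_CODEC_ID_* */
    #include <libavutil/common.h>    /* MKTAG */

    /* Stand-in table: the first match for 'lpcm' is the big-endian codec ID. */
    static const struct { enum AVCodecID id; uint32_t tag; } audio_tags[] = {
        { AV_CODEC_ID_PCM_S16BE, MKTAG('l', 'p', 'c', 'm') },
        { AV_CODEC_ID_PCM_S16LE, MKTAG('l', 'p', 'c', 'm') },
        { AV_CODEC_ID_NONE,      0 },
    };
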
Baptiste Coudurier
7940306a47 movdec: handle 0x7fff langcode as macintosh per the specs
The correct point that separates ISO and MAC language codes is 0x400
according to the current QT spec. Old QT specs did not list where this
separation is but apparently only defined the meaning of the first 137.

(cherry picked from commit 9e71cc81f3)
2014-03-09 17:50:41 -04:00
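
A minimal sketch of the rule stated above (hypothetical helper, not the actual movdec code): codes below 0x400, plus the special value 0x7fff, are treated as Macintosh language codes, and everything from 0x400 up as ISO 639.

    /* Hypothetical helper illustrating the 0x400 split described in the QT spec. */
    static int mov_lang_is_macintosh(unsigned code)
    {
        return code < 0x400 || code == 0x7fff;
    }
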
Tim Walker
eabefe83f4 movenc: allow muxing HEVC in MODE_MP4.
(cherry picked from commit 4f3db5d341)
2014-03-09 16:58:28 -04:00
Tim Walker
eaa79b79b2 movenc: enable Annex B to MP4 conversion for HEVC tracks.
(cherry picked from commit b6c61fb83e)
2014-03-09 16:58:26 -04:00
Tim Walker
c761379825 movenc: write hvcC tag for HEVC.
(cherry picked from commit 20b40a597c)
2014-03-09 16:58:21 -04:00
Tim Walker
ea3309eba7 movenc: use 'hev1' tag for HEVC in MODE_MOV.
'hvc1' requires that parameter set NAL units be
present only in the sample entry, but not in the
samples themselves, requiring that additional
parameter sets, if present, be filtered out of the
samples and placed in new, additional sample entries
if they override or otherwise conflict with the
parameter sets present in the first sample entry.
We do not have any way of doing this at present, so
the files we produce can only comply with the
restrictions set for the 'hev1' sample entry name in
ISO/IEC 14496-15.

(cherry picked from commit 1d9014f0b0)
2014-03-09 16:58:15 -04:00
Martin Storsjö
1c1e252cd1 movenc: Add a fallback fragmentation method for plain mp4 as well
Previously the default fragmentation method was only enabled
if writing an ISM file.

Signed-off-by: Martin Storsjö <martin@martin.st>
(cherry picked from commit 1e142d5b48)
2014-03-09 16:57:53 -04:00
Vittorio Giovara
ca2c9d6b9b hevc: make pps/sps ids unsigned where necessary
Fixes integer overflow and out of array accesses.
Found-by: Mateusz j00ru Jurczyk and Gynvael Coldwind

(cherry picked from commit 4d33873c29)
2014-03-09 14:55:38 -04:00
Michael Niedermayer
fa6b99d351 hevc: Do not turn 32bit timebases into negative numbers
Found-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind
(cherry picked from commit ed06e5d92b)
2014-03-09 14:55:36 -04:00
Michael Niedermayer
d79cb6947e hevc: use av_mallocz() for allocating tab_ipm
Fixes use of uninitialized memory and out of stack array read.
Found-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind

(cherry picked from commit 6cc94e9719)
2014-03-09 14:55:35 -04:00
Michael Niedermayer
5aa4b29bbe hevc: Use get_bits_long() in decode_vui()
Fix assertion failure.
Found-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind

(cherry picked from commit 920c01adce)
2014-03-09 14:55:34 -04:00
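
A hedged sketch of why get_bits_long() is needed (illustrative function built on FFmpeg-internal headers): get_bits() only supports reads of up to 25 bits and asserts on larger requests, while the VUI timing fields are full 32-bit values, which get_bits_long() can handle.

    #include <stdint.h>
    #include "libavcodec/get_bits.h"

    /* Illustrative only: read the two 32-bit VUI timing fields. */
    static void read_vui_timing(GetBitContext *gb,
                                uint32_t *num_units_in_tick, uint32_t *time_scale)
    {
        *num_units_in_tick = get_bits_long(gb, 32);  /* too wide for get_bits() */
        *time_scale        = get_bits_long(gb, 32);
    }
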
Reinhard Tartler
e4cbd0d6e5 changelog: Cleanups and prepare for v10_beta2 2014-03-08 20:50:36 -05:00
Janne Grunau
0ede7b5344 float_dsp: fix errors in documentation
(cherry picked from commit 74cc901905)
2014-03-08 19:36:20 -05:00
Janne Grunau
5b933be089 arm: vp3: remove incorrect const in ff_vp3_idct_dc_add_neon declaration
Was missed in aeaf268e52 when integrating
clear_blocks into the idct.

(cherry picked from commit 4506a854a4)
2014-03-08 19:36:02 -05:00
Pierre Lejeune
f2693e98b4 build: Use pkg-config for openjpeg
Bug-Id: 387
CC: libav-stable@libav.org
(cherry picked from commit 0e0cefb222)
2014-03-08 19:34:44 -05:00
John Stebbins
c3861e14ce movenc: allow override of "writing application" tag
Signed-off-by: Tim Walker <tdskywalker@gmail.com>

CC: libav-stable@libav.org
(cherry picked from commit 565e0c6d86)
2014-03-08 19:34:42 -05:00
John Stebbins
daa5a988e2 matroskaenc: allow override of "writing application" tag
Signed-off-by: Tim Walker <tdskywalker@gmail.com>

CC: libav-stable@libav.org
(cherry picked from commit 0092c1dd8d)
2014-03-08 19:34:39 -05:00
Keiji Costantini
db67b7c31b rv10: Forward error from rv10_decode_packet
Signed-off-by: Diego Biurrun <diego@biurrun.de>
(cherry picked from commit b4d372e091)
2014-03-08 19:34:29 -05:00
Derek Buitenhuis
a643a47d41 fic: Properly handle skip frames
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
(cherry picked from commit f87a6e500b)
2014-03-08 19:33:41 -05:00
Janne Grunau
23af29e882 arm: hpeldsp: fix put_pixels8_y2_{,no_rnd_}armv6
The overread avoidance fix in cbddee1cca
broke the computation for the last row since it prevented the safe
reading from the height+1-th row.
2014-03-08 21:48:21 +01:00
Nicolas George
7d995cd1b8 lavfi/af_atempo: clear references before returning error.
Once the frame has been given to ff_filter_frame(), it can
no longer be used, even on error.

Fix trac ticket #3430.
(cherry picked from commit bc6901c949)
2014-03-08 15:17:14 +01:00
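
A hedged sketch of the ownership rule described above (illustrative wrapper, not the actual af_atempo code): once a frame has been handed to ff_filter_frame(), the caller must not unref or otherwise touch it again, even when an error is returned.

    #include "libavfilter/avfilter.h"
    #include "libavfilter/internal.h"  /* ff_filter_frame(), internal header */

    /* Illustrative wrapper: hand the frame off and forget about it. */
    static int push_and_forget(AVFilterLink *outlink, AVFrame *frame)
    {
        int ret = ff_filter_frame(outlink, frame);  /* ownership transfers here */
        /* do NOT free or unref frame on error: it is no longer ours */
        return ret;
    }
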
Reinhard Tartler
72a58c0772 Update default FATE URL for release/10 2014-03-07 08:32:55 -05:00
Reinhard Tartler
d525423006 Revert "Add libx265 encoder"
cf. the discussion following
https://lists.libav.org/pipermail/libav-devel/2014-March/056894.html

This reverts commit 50ea93158d.

Conflicts:
	doc/general.texi
	libavcodec/version.h
2014-03-07 08:32:55 -05:00
Reinhard Tartler
4b476e6aa4 configure: enable PIC on s390(x)
The s390 architecture requires shared libraries to be built in PIC mode.
Otherwise applications will get wrong relocations at run-time, leading
to confusing segmentation faults.

CC: libav-stable@libav.org
(cherry picked from commit 5ddc9f5052)
2014-03-07 08:32:55 -05:00
Michael Niedermayer
124c78fd44 avformat/oggparsevorbis: don't use invalid granules
Fixes Ticket3437

Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
(cherry picked from commit 12b97dd375)
2014-03-06 09:31:49 +01:00
Janne Grunau
a1ab3300c8 arm: hpeldsp: prevent overreads in armv6 asm
Based on a patch by Russel King <rmk+libav@arm.linux.org.uk>

Bug-Id: 646
CC: libav-stable@libav.org
2014-03-05 16:21:52 +01:00
Michael Niedermayer
1af235f6b3 Merge remote-tracking branch 'qatar/release/10' into release/2.2
* qatar/release/10:
  ituh263: reject b-frame with pp_time = 0
  af_compand: replace strtok_r() with av_get_token()
  latm: Always reconfigure if no extradata was set previously
  af_compand: add a dependency on strtok_r
  lavfi: add compand audio filter

Conflicts:
	Changelog
	doc/filters.texi
	libavfilter/Makefile
	libavfilter/af_compand.c
	libavfilter/allfilters.c
	libavfilter/version.h

All changes are already in our 2.2 branch, this is just for metadata

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-03 23:39:42 +01:00
Michael Niedermayer
82031e41f8 Merge commit '596d3e20ae69a278d562eea08f5e8c0ae5a5bfc4' into release/2.2
* commit '596d3e20ae69a278d562eea08f5e8c0ae5a5bfc4':
  parser: cosmetics: Drop some unnecessary parentheses
  parser: K&R formatting cosmetics
  parser: Remove commented-out cruft
  doc: name correct header
  af_volume: preserve frame properties

Conflicts:
	doc/APIchanges
	libavcodec/parser.c
	libavfilter/af_volume.c

All changes are already in our 2.2 branch, this is just for metadata

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-03 23:30:47 +01:00
Michael Niedermayer
222e7549a7 Merge commit '7933039ade01b39638ec3d9e638b6ae06ee84984' into release/2.2
* commit '7933039ade01b39638ec3d9e638b6ae06ee84984':
  af_resample: preserve frame properties
  avconv: Do not divide by zero
  dca: replace some memcpy by AV_COPY128
  h264: avoid undefined behavior in chroma motion compensation
  x86: dsputil: Use correct file name as multiple inclusion guard

Conflicts:
	ffmpeg.c

All changes are already in our 2.2 branch, this is just for metadata

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-03 23:16:15 +01:00
Michael Niedermayer
eb2244ece9 Merge commit '4015829accc2382393d42d62654eb96d896d1326' into release/2.2
* commit '4015829accc2382393d42d62654eb96d896d1326':
  bit_depth_template: Use file name as multiple inclusion guard
  svq3: Adjust #endif comment
  hevc: Mention the missing SPS in the error message
  doc: Name the MOV muxer as it should be called
  doc: Sort the muxer documentation

Conflicts:
	doc/muxers.texi

All changes are already in our 2.2 branch, this is just for metadata

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-03 23:03:44 +01:00
Michael Niedermayer
b967c10029 Merge commit '39dc4a6bb34baf833ce1e5eabad7d0dbf933237d' into release/2.2
* commit '39dc4a6bb34baf833ce1e5eabad7d0dbf933237d':
  x86: dca: Add missing multiple inclusion guards
  gitignore: Add all examples below doc/examples
  arm: Mark the stack as non-executable
  doxygen: Replace @parblock syntax with manual linebreaks
  doxygen: Add a number of missing function parameter descriptions

Conflicts:
	.gitignore
	libavformat/avformat.h

All changes are already in our 2.2 branch, this is just for metadata

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-03 22:53:25 +01:00
Michael Niedermayer
7ff4cd2acc Merge commit 'a6a2d8eb8f125a2edb512a7a47df33dbd70d6b35' into release/2.2
* commit 'a6a2d8eb8f125a2edb512a7a47df33dbd70d6b35':
  qt-faststart: Add a note about the -movflags +faststart feature
  qt-faststart: Avoid unintentionally sign extending BE_32
  qt-faststart: Check offset_count before reading from the moov_atom buffer
  qt-faststart: Check the ftello() return codes
  qt-faststart: Fix the signedness of variables keeping the ftello return values
  qt-faststart: Check fseeko() return codes
  qt-faststart: Simplify code by using a MIN() macro
  qt-faststart: Increase the copy buffer size to 64 KB

Conflicts:
	tools/qt-faststart.c

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-03 22:42:25 +01:00
Michael Niedermayer
c4149c4d54 Merge commit '1d1df82093fdacb2cbc443c70c80f8f801002d28' into release/2.2
* commit '1d1df82093fdacb2cbc443c70c80f8f801002d28':
  pthread_frame: flush all threads on flush, not just the first one

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-03 22:30:54 +01:00
Michael Niedermayer
8ad2f45964 Merge commit 'de187e3e9ec4803575deb1c293ccad84d2a88da8' into release/2.2
* commit 'de187e3e9ec4803575deb1c293ccad84d2a88da8':
  h264: Fix a typo from the previous commit
  h264: Lower bound check for slice offsets
  hevc: Always consider VLC NALU type mismatch fatal
  Prepare for 10_beta2 Release
  build: The MPEG-4 video parser depends on h263dsp

Conflicts:
	RELEASE
	configure

All changes are already in our 2.2 branch, this is just for metadata

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-03 22:22:15 +01:00
Keiji Costantini
5df52b0131 ituh263: reject b-frame with pp_time = 0
Avoid a division by 0 in ff_mpeg4_set_one_direct_mv.

Sample-Id: 00000168-google
Reported-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind

Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
(cherry picked from commit 9514440337)
2014-03-02 11:42:38 -05:00
Diego Biurrun
596d3e20ae parser: cosmetics: Drop some unnecessary parentheses
(cherry picked from commit 4ec336484d)
2014-03-02 11:42:38 -05:00
Anton Khirnov
00d5ff6431 af_compand: replace strtok_r() with av_get_token()
(cherry picked from commit bc6461c286)
2014-03-02 11:42:38 -05:00
Luca Barbato
437179e9c8 parser: K&R formatting cosmetics
Signed-off-by: Diego Biurrun <diego@biurrun.de>
(cherry picked from commit a1c699659d)
2014-03-02 11:42:38 -05:00
Hendrik Leppkes
031d3b66c2 latm: Always reconfigure if no extradata was set previously
AAC LOAS can have new audio config objects in the stream itself.

Make sure the decoder reconfigures itself when the first one arrives
midstream.

Bug-Id: 644
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
(cherry picked from commit 3aca10bf76)
2014-03-02 11:42:38 -05:00
Diego Biurrun
b76871d870 parser: Remove commented-out cruft
(cherry picked from commit ed61f3ca8a)
2014-03-02 11:42:38 -05:00
Anton Khirnov
15ae305007 af_compand: add a dependency on strtok_r
(cherry picked from commit 291e49d4e7)
2014-03-02 11:42:38 -05:00
Vittorio Giovara
3c72204ae0 doc: name correct header
(cherry picked from commit 48d1ed9c83)
2014-03-02 11:42:38 -05:00
Andrew Kelley
ba21499648 lavfi: add compand audio filter
Signed-off-by: Anton Khirnov <anton@khirnov.net>
(cherry picked from commit 738f83582a)

Conflicts:
	libavfilter/version.h
2014-03-02 11:42:38 -05:00
Anton Khirnov
7933039ade af_resample: preserve frame properties
(cherry picked from commit dcc7e4bf1d)
2014-03-02 11:42:37 -05:00
Diego Biurrun
4015829acc bit_depth_template: Use file name as multiple inclusion guard
(cherry picked from commit ba42c85247)
2014-03-02 11:42:37 -05:00
Diego Biurrun
39dc4a6bb3 x86: dca: Add missing multiple inclusion guards
(cherry picked from commit b23bc95920)
2014-03-02 11:42:37 -05:00
Lou Logan
a6a2d8eb8f qt-faststart: Add a note about the -movflags +faststart feature
Signed-off-by: Martin Storsjö <martin@martin.st>
(cherry picked from commit 700687ebe0)
2014-03-02 11:42:37 -05:00
Anton Khirnov
58556826a8 af_volume: preserve frame properties
(cherry picked from commit 39c2880eea)
2014-03-02 11:42:37 -05:00
Luca Barbato
bc2c9a479a avconv: Do not divide by zero
(cherry picked from commit 5c79d2e12d)
2014-03-02 11:42:37 -05:00
Diego Biurrun
9cc22be032 svq3: Adjust #endif comment
(cherry picked from commit 61e7c7f27b)
2014-03-02 11:42:37 -05:00
Diego Biurrun
33e1bca651 gitignore: Add all examples below doc/examples
(cherry picked from commit 294a51e18a)
2014-03-02 11:42:37 -05:00
Martin Storsjö
9841617b7f qt-faststart: Avoid unintentionally sign extending BE_32
Without this cast, the BE_32() expression is sign extended when
assigned to a uint64_t, since the uint8_t|uint8_t expression
is promoted to an int.

Also avoid undefined behaviour when left shifting a uint8_t
by 24 by casting it to a uint32_t explicitly before shifting.

Based on a patch by Michael Niedermayer.

Signed-off-by: Martin Storsjö <martin@martin.st>
(cherry picked from commit ea7f79f937)
2014-03-02 11:42:37 -05:00
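
A generic illustration of the promotion pitfall described above (not the exact qt-faststart macro): OR-ing uint8_t values promotes each operand to a signed int, so the 32-bit result can be sign extended when stored in a uint64_t, and shifting a byte left by 24 can overflow that int; casting each byte to uint32_t first avoids both.

    #include <stdint.h>

    /* Read a big-endian 32-bit value without sign extension or shift overflow. */
    static uint64_t read_be32(const uint8_t *p)
    {
        return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
               ((uint32_t)p[2] <<  8) |  (uint32_t)p[3];
    }
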
Christophe Gisquet
2897481f64 dca: replace some memcpy by AV_COPY128
Signed-off-by: Janne Grunau <janne-libav@jannau.net>
(cherry picked from commit ef010f08ae)
2014-03-02 11:42:37 -05:00
Luca Barbato
646c564de5 hevc: Mention the missing SPS in the error message
(cherry picked from commit 175e506332)
2014-03-02 11:42:37 -05:00
Martin Storsjö
cd6281abef arm: Mark the stack as non-executable
If linking in an object file without this attribute set, the
linker will assume that an executable stack might be needed.

Signed-off-by: Martin Storsjö <martin@martin.st>
(cherry picked from commit 543156d751)
2014-03-02 11:42:37 -05:00
Janne Grunau
697be8173b h264: avoid undefined behavior in chroma motion compensation
Makes fate-h264 pass under valgrind --undef-value-errors=yes with
-cpuflags none. {avg,put}_h264_chroma_mc8_8 approximately 5% faster,
{avg,put}_h264_chroma_mc4_8 2% faster both on x86 and arm.

(cherry picked from commit 982b596ea6)
2014-03-02 11:42:37 -05:00
Luca Barbato
1853d8bb7a doc: Name the MOV muxer as it should be called
The section name is the muxer, not the format.

(cherry picked from commit 93632a70f9)
2014-03-02 11:42:37 -05:00
Diego Biurrun
1779cd7695 doxygen: Replace @parblock syntax with manual linebreaks
@parblock is only supported in very recent Doxygen versions.

(cherry picked from commit 2f2b2efd31)
2014-03-02 11:42:37 -05:00
Diego Biurrun
bb4820727f x86: dsputil: Use correct file name as multiple inclusion guard
(cherry picked from commit 017a06a9ee)
2014-03-02 11:42:37 -05:00
Luca Barbato
affc7687d3 doc: Sort the muxer documentation
Keep the sections alphabetically sorted.

(cherry picked from commit a7b3216cbd)
2014-03-02 11:42:37 -05:00
Diego Biurrun
3569470693 doxygen: Add a number of missing function parameter descriptions
(cherry picked from commit 4d7ab5cfeb)
2014-03-02 11:42:37 -05:00
Anton Khirnov
1d1df82093 pthread_frame: flush all threads on flush, not just the first one
avcodec_flush_buffers() must release all internally held references
according to its documentation, for which all the threads need to be
flushed.

CC:libav-stable@libav.org
Bug-Id: vlc/9665
(cherry picked from commit d1f9563d50)
2014-03-02 11:42:36 -05:00
Luca Barbato
de187e3e9e h264: Fix a typo from the previous commit
f777504f64 changed a - into a +

CC: libav-stable@libav.org
(cherry picked from commit d922c5a5fb)
2014-03-02 11:42:36 -05:00
Michael Niedermayer
7754d48381 qt-faststart: Check offset_count before reading from the moov_atom buffer
CC: libav-stable@libav.org
Signed-off-by: Martin Storsjö <martin@martin.st>
(cherry picked from commit bb95334c34)
2014-03-02 11:42:36 -05:00
Vittorio Giovara
63169474b3 h264: Lower bound check for slice offsets
And use the value from the specification.

Sample-Id: 00000451-google
Found-by: Mateusz j00ru Jurczyk and Gynvael Coldwind
CC: libav-stable@libav.org

Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
(cherry picked from commit f777504f64)
2014-03-02 11:42:36 -05:00
Michael Niedermayer
b3f106cb1f qt-faststart: Check the ftello() return codes
This silences a warning in the coverity static analyzer.

Signed-off-by: Martin Storsjö <martin@martin.st>
(cherry picked from commit 6384885425)
2014-03-02 11:42:36 -05:00
Luca Barbato
9b6ccf0f24 hevc: Always consider VLC NALU type mismatch fatal
Sample-Id: 00001667-google
Reported-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind
CC: libav-stable@libav.org
(cherry picked from commit 521726ff57)
2014-03-02 11:42:36 -05:00
Michael Niedermayer
298d66c8de qt-faststart: Fix the signedness of variables keeping the ftello return values
These variables are assigned the return values of ftello, which
returns an off_t, which is a signed type. On errors, ftello returns
-1, thus make sure this error return value can be stored properly.

Signed-off-by: Martin Storsjö <martin@martin.st>
(cherry picked from commit 03c2a66fcf)
2014-03-02 11:42:36 -05:00
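
A minimal sketch of the point above: ftello() returns a signed off_t and signals errors with -1, so the variable receiving the result must be signed or the error value is silently lost.

    #include <stdio.h>
    #include <sys/types.h>

    /* Store the current file position, propagating ftello() errors. */
    static int save_position(FILE *f, off_t *out)
    {
        off_t pos = ftello(f);
        if (pos < 0)  /* -1 on error; an unsigned variable would hide this */
            return -1;
        *out = pos;
        return 0;
    }
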
Reinhard Tartler
4be1b68d52 Prepare for 10_beta2 Release 2014-03-02 11:42:36 -05:00
Michael Niedermayer
92edc13d69 qt-faststart: Check fseeko() return codes
Signed-off-by: Martin Storsjö <martin@martin.st>
(cherry picked from commit 5612244351)
2014-03-02 11:42:36 -05:00
Michael Niedermayer
c9f015f1c6 qt-faststart: Simplify code by using a MIN() macro
qt-faststart doesn't use the normal libav headers at all since
it's supposed to be a completely standalone tool, so we implement
the macro locally in this file.

Signed-off-by: Martin Storsjö <martin@martin.st>
(cherry picked from commit ea15a9a5d8)
2014-03-02 11:42:36 -05:00
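
For illustration, the typical shape of such a locally defined macro and its use for bounded copies (a sketch, not necessarily the exact definition in qt-faststart.c):

    #include <stdint.h>

    #define MIN(a, b) ((a) > (b) ? (b) : (a))

    /* Copy at most one buffer's worth of the remaining bytes per pass. */
    static uint64_t next_chunk_size(uint64_t bytes_left, uint64_t buf_size)
    {
        return MIN(bytes_left, buf_size);
    }
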
Martin Storsjö
db6b2ca0b3 qt-faststart: Increase the copy buffer size to 64 KB
Copying data in chunks of 1 KB is a little wasteful.

64 KB should still easily fit on the stack, so there's no need
to allocate it dynamically.

Signed-off-by: Martin Storsjö <martin@martin.st>
(cherry picked from commit 3cbc7ef3d6)
2014-03-02 11:42:36 -05:00
Michael Niedermayer
3503ec8461 Changelog: remove <next>
Found-by: Timothy
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-01 18:26:04 +01:00
Michael Niedermayer
ecc5e42d92 Update for 2.2-rc1
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
2014-03-01 04:03:08 +01:00
Diego Biurrun
f87ce262f6 build: The MPEG-4 video parser depends on h263dsp
The dependency is indirect through the h263/mpegvideo code.
CC: libav-stable@libav.org
(cherry picked from commit 192ccc5034)
2014-02-20 12:06:39 +01:00
5439 changed files with 170196 additions and 560240 deletions

1
.gitattributes vendored

@@ -1 +0,0 @@
*.pnm -diff -text

62
.gitignore vendored

@@ -1,6 +1,5 @@
*.a
*.o
*.o.*
*.d
*.def
*.dll
@@ -16,11 +15,10 @@
*.pdb
*.so
*.so.*
*.swp
*.ver
*-example
*-test
*_g
\#*
.\#*
/.config
/.version
/ffmpeg
@@ -29,7 +27,57 @@
/ffserver
/config.*
/coverage.info
/avversion.h
/doc/*.1
/doc/*.3
/doc/*.html
/doc/*.pod
/doc/config.texi
/doc/avoptions_codec.texi
/doc/avoptions_format.texi
/doc/doxy/html/
/doc/examples/avio_reading
/doc/examples/avcodec
/doc/examples/demuxing_decoding
/doc/examples/filter_audio
/doc/examples/filtering_audio
/doc/examples/filtering_video
/doc/examples/metadata
/doc/examples/muxing
/doc/examples/pc-uninstalled
/doc/examples/remuxing
/doc/examples/resampling_audio
/doc/examples/scaling_video
/doc/examples/transcode_aac
/doc/fate.txt
/doc/print_options
/lcov/
/src
/mapfile
/libavcodec/*_tablegen
/libavcodec/*_tables.c
/libavcodec/*_tables.h
/libavutil/avconfig.h
/libavutil/ffversion.h
/tests/audiogen
/tests/base64
/tests/data/
/tests/rotozoom
/tests/tiny_psnr
/tests/tiny_ssim
/tests/videogen
/tests/vsynth1/
/tools/aviocat
/tools/ffbisect
/tools/bisect.need
/tools/crypto_bench
/tools/cws2fws
/tools/fourcc2pixfmt
/tools/ffescape
/tools/ffeval
/tools/ffhash
/tools/graph2dot
/tools/ismindex
/tools/pktdumper
/tools/probetest
/tools/qt-faststart
/tools/trasher
/tools/seek_print
/tools/zmqsend

.travis.yml

@@ -1,26 +0,0 @@
language: c
sudo: false
os:
- linux
- osx
addons:
apt:
packages:
- yasm
- diffutils
compiler:
- clang
- gcc
cache:
directories:
- ffmpeg-samples
before_install:
- if [ "$TRAVIS_OS_NAME" == "osx" ]; then brew update --all; fi
install:
- if [ "$TRAVIS_OS_NAME" == "osx" ]; then brew install yasm; fi
script:
- mkdir -p ffmpeg-samples
- ./configure --samples=ffmpeg-samples --cc=$CC
- make -j 8
- make fate-rsync
- make check -j 8

327
Changelog

@@ -1,316 +1,6 @@
Entries are sorted chronologically from oldest to youngest within each release,
releases are sorted from youngest to oldest.
version <next>:
version 3.1.1:
- doc/APIchanges: document the lavu/lavf field moves
- avformat/avformat: Move new field to the end of AVStream
- avformat/utils: update deprecated AVStream->codec when the context is updated
- avutil/frame: Move new field to the end of AVFrame
- libavcodec/exr : fix decoding piz float file.
- avformat/mov: Check sample size
- lavfi: Move new field to the end of AVFilterContext
- lavfi: Move new field to the end of AVFilterLink
- ffplay: Fix usage of private lavfi API
- lavc/mediacodecdec_h264: add missing NAL headers to SPS/PPS buffers
- lavc/pnm_parser: disable parsing for text based PNMs
version 3.1:
- DXVA2-accelerated HEVC Main10 decoding
- fieldhint filter
- loop video filter and aloop audio filter
- Bob Weaver deinterlacing filter
- firequalizer filter
- datascope filter
- bench and abench filters
- ciescope filter
- protocol blacklisting API
- MediaCodec H264 decoding
- VC-2 HQ RTP payload format (draft v1) depacketizer and packetizer
- VP9 RTP payload format (draft v2) packetizer
- AudioToolbox audio decoders
- AudioToolbox audio encoders
- coreimage filter (GPU based image filtering on OSX)
- libdcadec removed
- bitstream filter for extracting DTS core
- ADPCM IMA DAT4 decoder
- musx demuxer
- aix demuxer
- remap filter
- hash and framehash muxers
- colorspace filter
- hdcd filter
- readvitc filter
- VAAPI-accelerated format conversion and scaling
- libnpp/CUDA-accelerated format conversion and scaling
- Duck TrueMotion 2.0 Real Time decoder
- Wideband Single-bit Data (WSD) demuxer
- VAAPI-accelerated H.264/HEVC/MJPEG encoding
- DTS Express (LBR) decoder
- Generic OpenMAX IL encoder with support for Raspberry Pi
- IFF ANIM demuxer & decoder
- Direct Stream Transfer (DST) decoder
- loudnorm filter
- MTAF demuxer and decoder
- MagicYUV decoder
- OpenExr improvements (tile data and B44/B44A support)
- BitJazz SheerVideo decoder
- CUDA CUVID H264/HEVC decoder
- 10-bit depth support in native utvideo decoder
- libutvideo wrapper removed
- YUY2 Lossless Codec decoder
- VideoToolbox H.264 encoder
version 3.0:
- Common Encryption (CENC) MP4 encoding and decoding support
- DXV decoding
- extrastereo filter
- ocr filter
- alimiter filter
- stereowiden filter
- stereotools filter
- rubberband filter
- tremolo filter
- agate filter
- chromakey filter
- maskedmerge filter
- Screenpresso SPV1 decoding
- chromaprint fingerprinting muxer
- ffplay dynamic volume control
- displace filter
- selectivecolor filter
- extensive native AAC encoder improvements and removal of experimental flag
- ADPCM PSX decoder
- 3dostr, dcstr, fsb, genh, vag, xvag, ads, msf, svag & vpk demuxer
- zscale filter
- wve demuxer
- zero-copy Intel QSV transcoding in ffmpeg
- shuffleframes filter
- SDX2 DPCM decoder
- vibrato filter
- innoHeim/Rsupport Screen Capture Codec decoder
- ADPCM AICA decoder
- Interplay ACM demuxer and audio decoder
- XMA1 & XMA2 decoder
- realtime filter
- anoisesrc audio filter source
- IVR demuxer
- compensationdelay filter
- acompressor filter
- support encoding 16-bit RLE SGI images
- apulsator filter
- sidechaingate audio filter
- mipsdspr1 option has been renamed to mipsdsp
- aemphasis filter
- mips32r5 option has been removed
- mips64r6 option has been removed
- DXVA2-accelerated VP9 decoding
- SOFAlizer: virtual binaural acoustics filter
- VAAPI VP9 hwaccel
- audio high-order multiband parametric equalizer
- automatic bitstream filtering
- showspectrumpic filter
- libstagefright support removed
- spectrumsynth filter
- ahistogram filter
- only seek with the right mouse button in ffplay
- toggle full screen when double-clicking with the left mouse button in ffplay
- afftfilt filter
- convolution filter
- libquvi support removed
- support for dvaudio in wav and avi
- libaacplus and libvo-aacenc support removed
- Cineform HD decoder
- new DCA decoder with full support for DTS-HD extensions
- significant performance improvements in Windows Television (WTV) demuxer
- nnedi deinterlacer
- streamselect video and astreamselect audio filter
- swaprect filter
- metadata video and ametadata audio filter
- SMPTE VC-2 HQ profile support for the Dirac decoder
- SMPTE VC-2 native encoder supporting the HQ profile
version 2.8:
- colorkey video filter
- BFSTM/BCSTM demuxer
- little-endian ADPCM_THP decoder
- Hap decoder and encoder
- DirectDraw Surface image/texture decoder
- ssim filter
- optional new ASF demuxer
- showvolume filter
- Many improvements to the JPEG 2000 decoder
- Go2Meeting decoding support
- adrawgraph audio and drawgraph video filter
- removegrain video filter
- Intel QSV-accelerated MPEG-2 video and HEVC encoding
- Intel QSV-accelerated MPEG-2 video and HEVC decoding
- Intel QSV-accelerated VC-1 video decoding
- libkvazaar HEVC encoder
- erosion, dilation, deflate and inflate video filters
- Dynamic Audio Normalizer as dynaudnorm filter
- Reverse video and areverse audio filter
- Random filter
- deband filter
- AAC fixed-point decoding
- sidechaincompress audio filter
- bitstream filter for converting HEVC from MP4 to Annex B
- acrossfade audio filter
- allyuv and allrgb video sources
- atadenoise video filter
- OS X VideoToolbox support
- aphasemeter filter
- showfreqs filter
- vectorscope filter
- waveform filter
- hstack and vstack filter
- Support DNx100 (1440x1080@8)
- VAAPI hevc hwaccel
- VDPAU hevc hwaccel
- framerate filter
- Switched default encoders for webm to VP9 and Opus
- Removed experimental flag from the JPEG 2000 encoder
version 2.7:
- FFT video filter
- TDSC decoder
- DTS lossless extension (XLL) decoding (not lossless, disabled by default)
- showwavespic filter
- DTS decoding through libdcadec
- Drop support for nvenc API before 5.0
- nvenc HEVC encoder
- Detelecine filter
- Intel QSV-accelerated H.264 encoding
- MMAL-accelerated H.264 decoding
- basic APNG encoder and muxer with default extension "apng"
- unpack DivX-style packed B-frames in MPEG-4 bitstream filter
- WebM Live Chunk Muxer
- nvenc level and tier options
- chorus filter
- Canopus HQ/HQA decoder
- Automatically rotate videos based on metadata in ffmpeg
- improved Quickdraw compatibility
- VP9 high bit-depth and extended colorspaces decoding support
- WebPAnimEncoder API when available for encoding and muxing WebP
- Direct3D11-accelerated decoding
- Support Secure Transport
- Multipart JPEG demuxer
version 2.6:
- nvenc encoder
- 10bit spp filter
- colorlevels filter
- RIFX format for *.wav files
- RTP/mpegts muxer
- non continuous cache protocol support
- tblend filter
- cropdetect support for non 8bpp, absolute (if limit >= 1) and relative (if limit < 1.0) threshold
- Camellia symmetric block cipher
- OpenH264 encoder wrapper
- VOC seeking support
- Closed caption Decoder
- fspp, uspp, pp7 MPlayer postprocessing filters ported to native filters
- showpalette filter
- Twofish symmetric block cipher
- Support DNx100 (960x720@8)
- eq2 filter ported from libmpcodecs as eq filter
- removed libmpcodecs
- Changed default DNxHD colour range in QuickTime .mov derivatives to mpeg range
- ported softpulldown filter from libmpcodecs as repeatfields filter
- dcshift filter
- RTP depacketizer for loss tolerant payload format for MP3 audio (RFC 5219)
- RTP depacketizer for AC3 payload format (RFC 4184)
- palettegen and paletteuse filters
- VP9 RTP payload format (draft 0) experimental depacketizer
- RTP depacketizer for DV (RFC 6469)
- DXVA2-accelerated HEVC decoding
- AAC ELD 480 decoding
- Intel QSV-accelerated H.264 decoding
- DSS SP decoder and DSS demuxer
- Fix stsd atom corruption in DNxHD QuickTimes
- Canopus HQX decoder
- RTP depacketization of T.140 text (RFC 4103)
- Port MIPS optimizations to 64-bit
version 2.5:
- HEVC/H.265 RTP payload format (draft v6) packetizer
- SUP/PGS subtitle demuxer
- ffprobe -show_pixel_formats option
- CAST128 symmetric block cipher, ECB mode
- STL subtitle demuxer and decoder
- libutvideo YUV 4:2:2 10bit support
- XCB-based screen-grabber
- UDP-Lite support (RFC 3828)
- xBR scaling filter
- AVFoundation screen capturing support
- ffserver supports codec private options
- creating DASH compatible fragmented MP4, MPEG-DASH segmenting muxer
- WebP muxer with animated WebP support
- zygoaudio decoding support
- APNG demuxer
- postproc visualization support
version 2.4:
- Icecast protocol
- ported lenscorrection filter from frei0r filter
- large optimizations in dctdnoiz to make it usable
- ICY metadata are now requested by default with the HTTP protocol
- support for using metadata in stream specifiers in fftools
- LZMA compression support in TIFF decoder
- H.261 RTP payload format (RFC 4587) depacketizer and experimental packetizer
- HEVC/H.265 RTP payload format (draft v6) depacketizer
- added codecview filter to visualize information exported by some codecs
- Matroska 3D support through side data
- HTML generation using texi2html is deprecated in favor of makeinfo/texi2any
- silenceremove filter
version 2.3:
- AC3 fixed-point decoding
- shuffleplanes filter
- subfile protocol
- Phantom Cine demuxer
- replaygain data export
- VP7 video decoder
- Alias PIX image encoder and decoder
- Improvements to the BRender PIX image decoder
- Improvements to the XBM decoder
- QTKit input device
- improvements to OpenEXR image decoder
- support decoding 16-bit RLE SGI images
- GDI screen grabbing for Windows
- alternative rendition support for HTTP Live Streaming
- AVFoundation input device
- Direct Stream Digital (DSD) decoder
- Magic Lantern Video (MLV) demuxer
- On2 AVC (Audio for Video) decoder
- support for decoding through DXVA2 in ffmpeg
- libbs2b-based stereo-to-binaural audio filter
- libx264 reference frames count limiting depending on level
- native Opus decoder
- display matrix export and rotation API
- WebVTT encoder
- showcqt multimedia filter
- zoompan filter
- signalstats filter
- hqx filter (hq2x, hq3x, hq4x)
- flanger filter
- Image format auto-detection
- LRC demuxer and muxer
- Samba protocol (via libsmbclient)
- WebM DASH Manifest muxer
- libfribidi support in drawtext
version 2.2:
- HNM version 4 demuxer and video decoder
@@ -338,8 +28,6 @@ version 2.2:
- Support DNx444
- libx265 encoder
- dejudder filter
- Autodetect VDA like all other hardware accelerations
- aliases and defaults for Ogg subtypes (opus, spx)
version 2.1:
@@ -527,7 +215,7 @@ version 1.1:
- JSON captions for TED talks decoding support
- SOX Resampler support in libswresample
- aselect filter
- SGI RLE 8-bit / Silicon Graphics RLE 8-bit video decoder
- SGI RLE 8-bit decoder
- Silicon Graphics Motion Video Compressor 1 & 2 decoder
- Silicon Graphics Movie demuxer
- apad filter
@@ -571,9 +259,7 @@ version 1.0:
- RTMPE protocol support
- RTMPTE protocol support
- showwaves and showspectrum filter
- LucasArts SMUSH SANM playback support
- LucasArts SMUSH playback support
- LucasArts SMUSH VIMA audio decoder (ADPCM)
- LucasArts SMUSH demuxer
- SAMI, RealText and SubViewer demuxers and decoders
- Heart Of Darkness PAF playback support
- iec61883 device
@@ -697,7 +383,6 @@ version 0.10:
- ffwavesynth decoder
- aviocat tool
- ffeval tool
- support encoding and decoding 4-channel SGI images
version 0.9:
@@ -746,7 +431,7 @@ easier to use. The changes are:
all the stream in the first input file, except for the second audio
stream'.
* There is a new option -c (or -codec) for choosing the decoder/encoder to
use, which makes it possible to precisely specify target stream(s) consistently with
use, which allows to precisely specify target stream(s) consistently with
other options. E.g. -c:v lib264 sets the codec for all video streams, -c:a:0
libvorbis sets the codec for the first audio stream and -c copy copies all
the streams without reencoding. Old -vcodec/-acodec/-scodec options are now
@@ -935,8 +620,8 @@ version 0.8:
- showinfo filter added
- SMPTE 302M AES3 audio decoder
- Apple Core Audio Format muxer
- 9 bits and 10 bits per sample support in the H.264 decoder
- 9bit and 10bit per sample support in the H.264 decoder
- 9 bits and 10 bits FFV1 encoding / decoding
- 9bit and 10bit FFV1 encoding / decoding
- split filter added
- select filter added
- sdl output device added
@@ -1229,7 +914,7 @@ version 0.4.9-pre1:
- rate distorted optimal lambda->qp support
- AAC encoding with libfaac
- Sunplus JPEG codec (SP5X) support
- use Lagrange multiplier instead of QP for ratecontrol
- use Lagrange multipler instead of QP for ratecontrol
- Theora/VP3 decoding support
- XA and ADX ADPCM codecs
- export MPEG-2 active display area / pan scan

15
INSTALL Normal file

@@ -0,0 +1,15 @@
1) Type './configure' to create the configuration. A list of configure
options is printed by running 'configure --help'.
'configure' can be launched from a directory different from the FFmpeg
sources to build the objects out of tree. To do this, use an absolute
path when launching 'configure', e.g. '/ffmpegdir/ffmpeg/configure'.
2) Then type 'make' to build FFmpeg. GNU Make 3.81 or later is required.
3) Type 'make install' to install all binaries and libraries you built.
NOTICE
- Non system dependencies (e.g. libx264, libvpx) are disabled by default.

INSTALL.md

@@ -1,17 +0,0 @@
#Installing FFmpeg:
1. Type `./configure` to create the configuration. A list of configure
options is printed by running `configure --help`.
`configure` can be launched from a directory different from the FFmpeg
sources to build the objects out of tree. To do this, use an absolute
path when launching `configure`, e.g. `/ffmpegdir/ffmpeg/configure`.
2. Then type `make` to build FFmpeg. GNU Make 3.81 or later is required.
3. Type `make install` to install all binaries and libraries you built.
NOTICE
------
- Non system dependencies (e.g. libx264, libvpx) are disabled by default.

103
LICENSE Normal file

@@ -0,0 +1,103 @@
FFmpeg:
Most files in FFmpeg are under the GNU Lesser General Public License version 2.1
or later (LGPL v2.1+). Read the file COPYING.LGPLv2.1 for details. Some other
files have MIT/X11/BSD-style licenses. In combination the LGPL v2.1+ applies to
FFmpeg.
Some optional parts of FFmpeg are licensed under the GNU General Public License
version 2 or later (GPL v2+). See the file COPYING.GPLv2 for details. None of
these parts are used by default, you have to explicitly pass --enable-gpl to
configure to activate them. In this case, FFmpeg's license changes to GPL v2+.
Specifically, the GPL parts of FFmpeg are
- libpostproc
- libmpcodecs
- optional x86 optimizations in the files
libavcodec/x86/idct_mmx.c
- libutvideo encoding/decoding wrappers in
libavcodec/libutvideo*.cpp
- the X11 grabber in libavdevice/x11grab.c
- the swresample test app in
libswresample/swresample-test.c
- the texi2pod.pl tool
- the following filters in libavfilter:
- f_ebur128.c
- vf_blackframe.c
- vf_boxblur.c
- vf_colormatrix.c
- vf_cropdetect.c
- vf_decimate.c
- vf_delogo.c
- vf_geq.c
- vf_histeq.c
- vf_hqdn3d.c
- vf_kerndeint.c
- vf_mcdeint.c
- vf_mp.c
- vf_owdenoise.c
- vf_perspective.c
- vf_phase.c
- vf_pp.c
- vf_pullup.c
- vf_sab.c
- vf_smartblur.c
- vf_spp.c
- vf_stereo3d.c
- vf_super2xsai.c
- vf_tinterlace.c
- vsrc_mptestsrc.c
Should you, for whatever reason, prefer to use version 3 of the (L)GPL, then
the configure parameter --enable-version3 will activate this licensing option
for you. Read the file COPYING.LGPLv3 or, if you have enabled GPL parts,
COPYING.GPLv3 to learn the exact legal terms that apply in this case.
There are a handful of files under other licensing terms, namely:
* The files libavcodec/jfdctfst.c, libavcodec/jfdctint_template.c and
libavcodec/jrevdct.c are taken from libjpeg, see the top of the files for
licensing details. Specifically note that you must credit the IJG in the
documentation accompanying your program if you only distribute executables.
You must also indicate any changes including additions and deletions to
those three files in the documentation.
external libraries
==================
FFmpeg can be combined with a number of external libraries, which sometimes
affect the licensing of binaries resulting from the combination.
compatible libraries
--------------------
The following libraries are under GPL:
- frei0r
- libcdio
- libutvideo
- libvidstab
- libx264
- libx265
- libxavs
- libxvid
When combining them with FFmpeg, FFmpeg needs to be licensed as GPL as well by
passing --enable-gpl to configure.
The OpenCORE and VisualOn libraries are under the Apache License 2.0. That
license is incompatible with the LGPL v2.1 and the GPL v2, but not with
version 3 of those licenses. So to combine these libraries with FFmpeg, the
license version needs to be upgraded by passing --enable-version3 to configure.
incompatible libraries
----------------------
The Fraunhofer AAC library, FAAC and aacplus are under licenses which
are incompatible with the GPLv2 and v3. We do not know for certain if their
licenses are compatible with the LGPL.
If you wish to enable these libraries, pass --enable-nonfree to configure.
But note that if you enable any of these libraries the resulting binary will
be under a complex license mix that is more restrictive than the LGPL and that
may result in additional obligations. It is possible that these
restrictions cause the resulting binary to be unredistributable.
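As a rough illustration of the configure switches described above (the library choices below are only examples; the flag names and licensing consequences are those stated in this file):

    # default: LGPL v2.1+ build
    ./configure

    # GPL v2+ build, needed when enabling GPL libraries such as libx264
    ./configure --enable-gpl --enable-libx264

    # (L)GPL version 3 build, needed for the Apache-licensed OpenCORE/VisualOn libraries
    ./configure --enable-version3 --enable-libopencore-amrnb

    # non-free build; the resulting binary may be unredistributable
    ./configure --enable-nonfree --enable-libfdk-aac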


@@ -1,124 +0,0 @@
# License
Most files in FFmpeg are under the GNU Lesser General Public License version 2.1
or later (LGPL v2.1+). Read the file `COPYING.LGPLv2.1` for details. Some other
files have MIT/X11/BSD-style licenses. In combination the LGPL v2.1+ applies to
FFmpeg.
Some optional parts of FFmpeg are licensed under the GNU General Public License
version 2 or later (GPL v2+). See the file `COPYING.GPLv2` for details. None of
these parts are used by default, you have to explicitly pass `--enable-gpl` to
configure to activate them. In this case, FFmpeg's license changes to GPL v2+.
Specifically, the GPL parts of FFmpeg are:
- libpostproc
- optional x86 optimization in the files
- `libavcodec/x86/flac_dsp_gpl.asm`
- `libavcodec/x86/idct_mmx.c`
- `libavfilter/x86/vf_removegrain.asm`
- the X11 grabber in `libavdevice/x11grab.c`
- the following building and testing tools
- `compat/solaris/make_sunver.pl`
- `doc/t2h.pm`
- `doc/texi2pod.pl`
- `libswresample/swresample-test.c`
- `tests/checkasm/*`
- `tests/tiny_ssim.c`
- the following filters in libavfilter:
- `f_ebur128.c`
- `vf_blackframe.c`
- `vf_boxblur.c`
- `vf_colormatrix.c`
- `vf_cover_rect.c`
- `vf_cropdetect.c`
- `vf_delogo.c`
- `vf_eq.c`
- `vf_find_rect.c`
- `vf_fspp.c`
- `vf_geq.c`
- `vf_histeq.c`
- `vf_hqdn3d.c`
- `vf_interlace.c`
- `vf_kerndeint.c`
- `vf_mcdeint.c`
- `vf_mpdecimate.c`
- `vf_owdenoise.c`
- `vf_perspective.c`
- `vf_phase.c`
- `vf_pp.c`
- `vf_pp7.c`
- `vf_pullup.c`
- `vf_repeatfields.c`
- `vf_sab.c`
- `vf_smartblur.c`
- `vf_spp.c`
- `vf_stereo3d.c`
- `vf_super2xsai.c`
- `vf_tinterlace.c`
- `vf_uspp.c`
- `vsrc_mptestsrc.c`
Should you, for whatever reason, prefer to use version 3 of the (L)GPL, then
the configure parameter `--enable-version3` will activate this licensing option
for you. Read the file `COPYING.LGPLv3` or, if you have enabled GPL parts,
`COPYING.GPLv3` to learn the exact legal terms that apply in this case.
There are a handful of files under other licensing terms, namely:
* The files `libavcodec/jfdctfst.c`, `libavcodec/jfdctint_template.c` and
`libavcodec/jrevdct.c` are taken from libjpeg, see the top of the files for
licensing details. Specifically note that you must credit the IJG in the
documentation accompanying your program if you only distribute executables.
You must also indicate any changes including additions and deletions to
those three files in the documentation.
* `tests/reference.pnm` is under the expat license.
## External libraries
FFmpeg can be combined with a number of external libraries, which sometimes
affect the licensing of binaries resulting from the combination.
### Compatible libraries
The following libraries are under GPL:
- frei0r
- libcdio
- librubberband
- libvidstab
- libx264
- libx265
- libxavs
- libxvid
When combining them with FFmpeg, FFmpeg needs to be licensed as GPL as well by
passing `--enable-gpl` to configure.
The OpenCORE and VisualOn libraries are under the Apache License 2.0. That
license is incompatible with the LGPL v2.1 and the GPL v2, but not with
version 3 of those licenses. So to combine these libraries with FFmpeg, the
license version needs to be upgraded by passing `--enable-version3` to configure.
### Incompatible libraries
There are certain libraries you can combine with FFmpeg whose licenses are not
compatible with the GPL and/or the LGPL. If you wish to enable these
libraries, even in circumstances where their license may be incompatible, pass
`--enable-nonfree` to configure. But note that if you enable any of these
libraries the resulting binary will be under a complex license mix that is
more restrictive than the LGPL and that may result in additional obligations.
It is possible that these restrictions cause the resulting binary to be
unredistributable.
The Fraunhofer FDK AAC and OpenSSL libraries are under licenses which are
incompatible with the GPLv2 and v3. To the best of our knowledge, they are
compatible with the LGPL.
The FAAC library is incompatible with all versions of GPL and LGPL.
The NVENC library, while its header file is licensed under the compatible MIT
license, requires a proprietary binary blob at run time, and is deemed to be
incompatible with the GPL. We are not certain if it is compatible with the
LGPL, but we require `--enable-nonfree` even with LGPL configurations in case
it is not.


@@ -14,6 +14,7 @@ patches and related discussions.
Project Leader Project Leader
============== ==============
Michael Niedermayer
final design decisions final design decisions
@@ -42,8 +43,9 @@ QuickTime faststart:
Miscellaneous Areas Miscellaneous Areas
=================== ===================
documentation Stefano Sabatini, Mike Melanson, Timothy Gu, Lou Logan documentation Stefano Sabatini, Mike Melanson, Timothy Gu
project server Árpád Gereöffy, Michael Niedermayer, Reimar Doeffinger, Alexander Strasser build system (configure,Makefiles) Diego Biurrun, Mans Rullgard
project server Árpád Gereöffy, Michael Niedermayer, Reimar Döffinger, Alexander Strasser
presets Robert Swain presets Robert Swain
metadata subsystem Aurelien Jacobs metadata subsystem Aurelien Jacobs
release management Michael Niedermayer release management Michael Niedermayer
@@ -52,12 +54,10 @@ release management Michael Niedermayer
Communication Communication
============= =============
website Deby Barbara Lepage website Robert Swain, Lou Logan
fate.ffmpeg.org Timothy Gu mailinglists Michael Niedermayer, Baptiste Coudurier, Lou Logan
Trac bug tracker Alexander Strasser, Michael Niedermayer, Carl Eugen Hoyos, Lou Logan
mailing lists Baptiste Coudurier, Lou Logan
Google+ Paul B Mahol, Michael Niedermayer, Alexander Strasser Google+ Paul B Mahol, Michael Niedermayer, Alexander Strasser
Twitter Lou Logan, Reynaldo H. Verdejo Pinochet Twitter Lou Logan
Launchpad Timothy Gu Launchpad Timothy Gu
@@ -70,11 +70,9 @@ Internal Interfaces:
libavutil/common.h Michael Niedermayer libavutil/common.h Michael Niedermayer
Other: Other:
aes_ctr.c, aes_ctr.h Eran Kornblau
bprint Nicolas George bprint Nicolas George
bswap.h bswap.h
des Reimar Doeffinger des Reimar Doeffinger
dynarray.h Nicolas George
eval.c, eval.h Michael Niedermayer eval.c, eval.h Michael Niedermayer
float_dsp Loren Merritt float_dsp Loren Merritt
hash Reimar Doeffinger hash Reimar Doeffinger
@@ -88,6 +86,7 @@ Other:
rational.c, rational.h Michael Niedermayer rational.c, rational.h Michael Niedermayer
rc4 Reimar Doeffinger rc4 Reimar Doeffinger
ripemd.c, ripemd.h James Almer ripemd.c, ripemd.h James Almer
timecode Clément Bœsch
libavcodec libavcodec
@@ -115,6 +114,8 @@ Generic Parts:
faandct.c, faandct.h Michael Niedermayer faandct.c, faandct.h Michael Niedermayer
Golomb coding: Golomb coding:
golomb.c, golomb.h Michael Niedermayer golomb.c, golomb.h Michael Niedermayer
LPC:
lpc.c, lpc.h Justin Ruggles
motion estimation: motion estimation:
motion* Michael Niedermayer motion* Michael Niedermayer
rate control: rate control:
@@ -128,42 +129,46 @@ Generic Parts:
tableprint.c, tableprint.h Reimar Doeffinger tableprint.c, tableprint.h Reimar Doeffinger
fixed point FFT: fixed point FFT:
fft* Zeljko Lukac fft* Zeljko Lukac
Text Subtitles Clément Bœsch
Codecs: Codecs:
4xm.c Michael Niedermayer 4xm.c Michael Niedermayer
8bps.c Roberto Togni 8bps.c Roberto Togni
8svx.c Jaikrishnan Menon 8svx.c Jaikrishnan Menon
aacenc*, aaccoder.c Rostislav Pehlivanov aasc.c Kostya Shishkov
ac3* Justin Ruggles
alacenc.c Jaikrishnan Menon alacenc.c Jaikrishnan Menon
alsdec.c Thilo Borgmann alsdec.c Thilo Borgmann
apedec.c Kostya Shishkov
ass* Aurelien Jacobs ass* Aurelien Jacobs
asv* Michael Niedermayer asv* Michael Niedermayer
atrac3* Benjamin Larsson
atrac3plus* Maxim Poliakovski atrac3plus* Maxim Poliakovski
bgmc.c, bgmc.h Thilo Borgmann bgmc.c, bgmc.h Thilo Borgmann
bink.c Kostya Shishkov
binkaudio.c Peter Ross binkaudio.c Peter Ross
bmp.c Mans Rullgard, Kostya Shishkov
cavs* Stefan Gehrer cavs* Stefan Gehrer
cdxl.c Paul B Mahol cdxl.c Paul B Mahol
celp_filters.* Vitor Sessak celp_filters.* Vitor Sessak
cinepak.c Roberto Togni cinepak.c Roberto Togni
cinepakenc.c Rl / Aetey G.T. AB cinepakenc.c Rl / Aetey G.T. AB
ccaption_dec.c Anshul Maheshwari
cljr Alex Beregszaszi cljr Alex Beregszaszi
cllc.c Derek Buitenhuis
cook.c, cookdata.h Benjamin Larsson
cpia.c Stephan Hilb cpia.c Stephan Hilb
crystalhd.c Philip Langdale crystalhd.c Philip Langdale
cscd.c Reimar Doeffinger cscd.c Reimar Doeffinger
cuvid.c Timo Rothenpieler dca.c Kostya Shishkov, Benjamin Larsson
dirac* Rostislav Pehlivanov
dnxhd* Baptiste Coudurier dnxhd* Baptiste Coudurier
dpcm.c Mike Melanson dpcm.c Mike Melanson
dss_sp.c Oleksij Rempel
dv.c Roman Shaposhnik dv.c Roman Shaposhnik
dvbsubdec.c Anshul Maheshwari dxa.c Kostya Shishkov
eacmv*, eaidct*, eat* Peter Ross eacmv*, eaidct*, eat* Peter Ross
evrc* Paul B Mahol
exif.c, exif.h Thilo Borgmann exif.c, exif.h Thilo Borgmann
ffv1* Michael Niedermayer ffv1.c Michael Niedermayer
ffwavesynth.c Nicolas George ffwavesynth.c Nicolas George
flac* Justin Ruggles
flashsv* Benjamin Larsson
flicvideo.c Mike Melanson flicvideo.c Mike Melanson
g722.c Martin Storsjo g722.c Martin Storsjo
g726.c Roman Shaposhnik g726.c Roman Shaposhnik
@@ -171,47 +176,58 @@ Codecs:
h261* Michael Niedermayer h261* Michael Niedermayer
h263* Michael Niedermayer h263* Michael Niedermayer
h264* Loren Merritt, Michael Niedermayer h264* Loren Merritt, Michael Niedermayer
hap* Tom Butterworth huffyuv.c Michael Niedermayer
huffyuv* Michael Niedermayer, Christophe Gisquet
idcinvideo.c Mike Melanson idcinvideo.c Mike Melanson
imc* Benjamin Larsson
indeo2* Kostya Shishkov
indeo5* Kostya Shishkov
interplayvideo.c Mike Melanson interplayvideo.c Mike Melanson
jni*, ffjni* Matthieu Bouron ivi* Kostya Shishkov
jacosub* Clément Bœsch
jpeg2000* Nicolas Bertrand jpeg2000* Nicolas Bertrand
jpeg_ls.c Kostya Shishkov
jvdec.c Peter Ross jvdec.c Peter Ross
kmvc.c Kostya Shishkov
lcl*.c Roberto Togni, Reimar Doeffinger lcl*.c Roberto Togni, Reimar Doeffinger
libcelt_dec.c Nicolas George libcelt_dec.c Nicolas George
libdirac* David Conrad libdirac* David Conrad
libgsm.c Michel Bardiaux libgsm.c Michel Bardiaux
libkvazaar.c Arttu Ylä-Outinen
libopenjpeg.c Jaikrishnan Menon libopenjpeg.c Jaikrishnan Menon
libopenjpegenc.c Michael Bradshaw libopenjpegenc.c Michael Bradshaw
libschroedinger* David Conrad libschroedinger* David Conrad
libspeexdec.c Justin Ruggles
libtheoraenc.c David Conrad libtheoraenc.c David Conrad
libutvideo* Derek Buitenhuis
libvorbis.c David Conrad libvorbis.c David Conrad
libvpx* James Zern libvpx* James Zern
libx264.c Mans Rullgard, Jason Garrett-Glaser
libx265.c Derek Buitenhuis
libxavs.c Stefan Gehrer libxavs.c Stefan Gehrer
libzvbi-teletextdec.c Marton Balint libzvbi-teletextdec.c Marton Balint
loco.c Kostya Shishkov
lzo.h, lzo.c Reimar Doeffinger lzo.h, lzo.c Reimar Doeffinger
mdec.c Michael Niedermayer mdec.c Michael Niedermayer
mimic.c Ramiro Polla mimic.c Ramiro Polla
mjpeg*.c Michael Niedermayer mjpeg*.c Michael Niedermayer
mlp* Ramiro Polla mlp* Ramiro Polla
mmvideo.c Peter Ross mmvideo.c Peter Ross
mpc* Kostya Shishkov
mpeg12.c, mpeg12data.h Michael Niedermayer mpeg12.c, mpeg12data.h Michael Niedermayer
mpegvideo.c, mpegvideo.h Michael Niedermayer mpegvideo.c, mpegvideo.h Michael Niedermayer
mqc* Nicolas Bertrand mqc* Nicolas Bertrand
msmpeg4.c, msmpeg4data.h Michael Niedermayer msmpeg4.c, msmpeg4data.h Michael Niedermayer
msrle.c Mike Melanson msrle.c Mike Melanson
msvideo1.c Mike Melanson msvideo1.c Mike Melanson
nellymoserdec.c Benjamin Larsson
nuv.c Reimar Doeffinger nuv.c Reimar Doeffinger
nvenc* Timo Rothenpieler
paf.* Paul B Mahol paf.* Paul B Mahol
pcx.c Ivo van Poorten pcx.c Ivo van Poorten
pgssubdec.c Reimar Doeffinger pgssubdec.c Reimar Doeffinger
ptx.c Ivo van Poorten ptx.c Ivo van Poorten
qcelp* Reynaldo H. Verdejo Pinochet qcelp* Reynaldo H. Verdejo Pinochet
qdm2.c, qdm2data.h Roberto Togni qdm2.c, qdm2data.h Roberto Togni, Benjamin Larsson
qsv* Ivan Uskov qdrw.c Kostya Shishkov
qpeg.c Kostya Shishkov
qtrle.c Mike Melanson qtrle.c Mike Melanson
ra144.c, ra144.h, ra288.c, ra288.h Roberto Togni ra144.c, ra144.h, ra288.c, ra288.h Roberto Togni
resample2.c Michael Niedermayer resample2.c Michael Niedermayer
@@ -219,51 +235,65 @@ Codecs:
rpza.c Roberto Togni rpza.c Roberto Togni
rtjpeg.c, rtjpeg.h Reimar Doeffinger rtjpeg.c, rtjpeg.h Reimar Doeffinger
rv10.c Michael Niedermayer rv10.c Michael Niedermayer
rv4* Christophe Gisquet rv3* Kostya Shishkov
rv4* Kostya Shishkov
s3tc* Ivo van Poorten s3tc* Ivo van Poorten
smacker.c Kostya Shishkov
smc.c Mike Melanson smc.c Mike Melanson
smvjpegdec.c Ash Hughes smvjpegdec.c Ash Hughes
snow* Michael Niedermayer, Loren Merritt snow.c Michael Niedermayer, Loren Merritt
sonic.c Alex Beregszaszi sonic.c Alex Beregszaszi
srt* Aurelien Jacobs srt* Aurelien Jacobs
sunrast.c Ivo van Poorten sunrast.c Ivo van Poorten
svq3.c Michael Niedermayer svq3.c Michael Niedermayer
tak* Paul B Mahol tak* Paul B Mahol
targa.c Kostya Shishkov
tiff.c Kostya Shishkov
truemotion1* Mike Melanson truemotion1* Mike Melanson
truemotion2* Kostya Shishkov
truespeech.c Kostya Shishkov
tscc.c Kostya Shishkov
tta.c Alex Beregszaszi, Jaikrishnan Menon tta.c Alex Beregszaszi, Jaikrishnan Menon
ttaenc.c Paul B Mahol ttaenc.c Paul B Mahol
txd.c Ivo van Poorten txd.c Ivo van Poorten
vc1* Christophe Gisquet ulti* Kostya Shishkov
vc2* Rostislav Pehlivanov v410*.c Derek Buitenhuis
vb.c Kostya Shishkov
vble.c Derek Buitenhuis
vc1* Kostya Shishkov
vcr1.c Michael Niedermayer vcr1.c Michael Niedermayer
vda_h264_dec.c Xidorn Quan vda_h264_dec.c Xidorn Quan
videotoolboxenc.c Rick Kern
vima.c Paul B Mahol vima.c Paul B Mahol
vorbisdec.c Denes Balatoni, David Conrad vmnc.c Kostya Shishkov
vorbisenc.c Oded Shimon vorbis_dec.c Denes Balatoni, David Conrad
vorbis_enc.c Oded Shimon
vp3* Mike Melanson vp3* Mike Melanson
vp5 Aurelien Jacobs vp5 Aurelien Jacobs
vp6 Aurelien Jacobs vp6 Aurelien Jacobs
vp8 David Conrad, Ronald Bultje vp8 David Conrad, Jason Garrett-Glaser, Ronald Bultje
vp9 Ronald Bultje vp9 Ronald Bultje, Clément Bœsch
vqavideo.c Mike Melanson vqavideo.c Mike Melanson
wavpack.c Kostya Shishkov
wmaprodec.c Sascha Sommer wmaprodec.c Sascha Sommer
wmavoice.c Ronald S. Bultje wmavoice.c Ronald S. Bultje
wmv2.c Michael Niedermayer wmv2.c Michael Niedermayer
wnv1.c Kostya Shishkov
xan.c Mike Melanson xan.c Mike Melanson
xbm* Paul B Mahol xbm* Paul B Mahol
xface Stefano Sabatini xface Stefano Sabatini
xl.c Kostya Shishkov
xvmc.c Ivan Kalvachev xvmc.c Ivan Kalvachev
xwd* Paul B Mahol xwd* Paul B Mahol
zerocodec.c Derek Buitenhuis
zmbv* Kostya Shishkov
Hardware acceleration: Hardware acceleration:
crystalhd.c Philip Langdale crystalhd.c Philip Langdale
dxva2* Hendrik Leppkes, Laurent Aimar dxva2* Laurent Aimar
mediacodec* Matthieu Bouron libstagefright.cpp Mohamed Naufal
vaapi* Gwenole Beauchesne vaapi* Gwenole Beauchesne
vaapi_encode* Mark Thompson vda* Sebastien Zwickert
vdpau* Philip Langdale, Carl Eugen Hoyos vdpau* Carl Eugen Hoyos
videotoolbox* Rick Kern
libavdevice libavdevice
@@ -272,21 +302,16 @@ libavdevice
libavdevice/avdevice.h libavdevice/avdevice.h
avfoundation.m Thilo Borgmann dshow.c Roger Pack
decklink* Deti Fliegl
dshow.c Roger Pack (CC rogerdpack@gmail.com)
fbdev_enc.c Lukasz Marek fbdev_enc.c Lukasz Marek
gdigrab.c Roger Pack (CC rogerdpack@gmail.com)
iec61883.c Georg Lippitsch iec61883.c Georg Lippitsch
lavfi Stefano Sabatini lavfi Stefano Sabatini
libdc1394.c Roman Shaposhnik libdc1394.c Roman Shaposhnik
opengl_enc.c Lukasz Marek opengl_enc.c Lukasz Marek
pulse_audio_enc.c Lukasz Marek pulse_audio_enc.c Lukasz Marek
qtkit.m Thilo Borgmann
sdl Stefano Sabatini sdl Stefano Sabatini
v4l2.c Giorgio Vazzana v4l2.c Luca Abeni
vfwcap.c Ramiro Polla vfwcap.c Ramiro Polla
xv.c Lukasz Marek
libavfilter libavfilter
=========== ===========
@@ -295,7 +320,6 @@ Generic parts:
graphdump.c Nicolas George graphdump.c Nicolas George
Filters: Filters:
f_drawgraph.c Paul B Mahol
af_adelay.c Paul B Mahol af_adelay.c Paul B Mahol
af_aecho.c Paul B Mahol af_aecho.c Paul B Mahol
af_afade.c Paul B Mahol af_afade.c Paul B Mahol
@@ -303,48 +327,28 @@ Filters:
af_aphaser.c Paul B Mahol af_aphaser.c Paul B Mahol
af_aresample.c Michael Niedermayer af_aresample.c Michael Niedermayer
af_astats.c Paul B Mahol af_astats.c Paul B Mahol
af_astreamsync.c Nicolas George
af_atempo.c Pavel Koshevoy af_atempo.c Pavel Koshevoy
af_biquads.c Paul B Mahol af_biquads.c Paul B Mahol
af_chorus.c Paul B Mahol
af_compand.c Paul B Mahol af_compand.c Paul B Mahol
af_firequalizer.c Muhammad Faiz
af_ladspa.c Paul B Mahol af_ladspa.c Paul B Mahol
af_loudnorm.c Kyle Swanson
af_pan.c Nicolas George af_pan.c Nicolas George
af_sidechaincompress.c Paul B Mahol
af_silenceremove.c Paul B Mahol
avf_aphasemeter.c Paul B Mahol
avf_avectorscope.c Paul B Mahol avf_avectorscope.c Paul B Mahol
avf_showcqt.c Muhammad Faiz
vf_blend.c Paul B Mahol vf_blend.c Paul B Mahol
vf_chromakey.c Timo Rothenpieler
vf_colorchannelmixer.c Paul B Mahol
vf_colorbalance.c Paul B Mahol vf_colorbalance.c Paul B Mahol
vf_colorkey.c Timo Rothenpieler
vf_colorlevels.c Paul B Mahol
vf_coreimage.m Thilo Borgmann
vf_deband.c Paul B Mahol
vf_dejudder.c Nicholas Robbins vf_dejudder.c Nicholas Robbins
vf_delogo.c Jean Delvare (CC <jdelvare@suse.com>) vf_delogo.c Jean Delvare (CC <khali@linux-fr.org>)
vf_drawbox.c/drawgrid Andrey Utkin vf_drawbox.c/drawgrid Andrey Utkin
vf_extractplanes.c Paul B Mahol vf_extractplanes.c Paul B Mahol
vf_histogram.c Paul B Mahol vf_histogram.c Paul B Mahol
vf_hqx.c Clément Bœsch
vf_idet.c Pascal Massimino
vf_il.c Paul B Mahol vf_il.c Paul B Mahol
vf_lenscorrection.c Daniel Oberhoff
vf_mergeplanes.c Paul B Mahol vf_mergeplanes.c Paul B Mahol
vf_neighbor.c Paul B Mahol
vf_psnr.c Paul B Mahol vf_psnr.c Paul B Mahol
vf_random.c Paul B Mahol
vf_readvitc.c Tobias Rapp (CC t.rapp at noa-archive dot com)
vf_scale.c Michael Niedermayer vf_scale.c Michael Niedermayer
vf_separatefields.c Paul B Mahol vf_separatefields.c Paul B Mahol
vf_ssim.c Paul B Mahol
vf_stereo3d.c Paul B Mahol vf_stereo3d.c Paul B Mahol
vf_telecine.c Paul B Mahol vf_telecine.c Paul B Mahol
vf_yadif.c Michael Niedermayer vf_yadif.c Michael Niedermayer
vf_zoompan.c Paul B Mahol
Sources: Sources:
vsrc_mandelbrot.c Michael Niedermayer vsrc_mandelbrot.c Michael Niedermayer
@@ -357,17 +361,15 @@ Generic parts:
libavformat/avformat.h Michael Niedermayer libavformat/avformat.h Michael Niedermayer
Utility Code: Utility Code:
libavformat/utils.c Michael Niedermayer libavformat/utils.c Michael Niedermayer
Text Subtitles Clément Bœsch
Muxers/Demuxers: Muxers/Demuxers:
4xm.c Mike Melanson 4xm.c Mike Melanson
aadec.c Vesselin Bontchev (vesselin.bontchev at yandex dot com)
adtsenc.c Robert Swain adtsenc.c Robert Swain
afc.c Paul B Mahol afc.c Paul B Mahol
aiffdec.c Baptiste Coudurier, Matthieu Bouron aiffdec.c Baptiste Coudurier, Matthieu Bouron
aiffenc.c Baptiste Coudurier, Matthieu Bouron aiffenc.c Baptiste Coudurier, Matthieu Bouron
apngdec.c Benoit Fouet ape.c Kostya Shishkov
ass* Aurelien Jacobs ass* Aurelien Jacobs
astdec.c Paul B Mahol astdec.c Paul B Mahol
astenc.c James Almer astenc.c James Almer
@@ -380,18 +382,18 @@ Muxers/Demuxers:
cdxl.c Paul B Mahol cdxl.c Paul B Mahol
crc.c Michael Niedermayer crc.c Michael Niedermayer
daud.c Reimar Doeffinger daud.c Reimar Doeffinger
dss.c Oleksij Rempel
dtshddec.c Paul B Mahol dtshddec.c Paul B Mahol
dv.c Roman Shaposhnik dv.c Roman Shaposhnik
dxa.c Kostya Shishkov
electronicarts.c Peter Ross electronicarts.c Peter Ross
epafdec.c Paul B Mahol epafdec.c Paul B Mahol
ffm* Baptiste Coudurier ffm* Baptiste Coudurier
flac* Justin Ruggles
flic.c Mike Melanson flic.c Mike Melanson
flvdec.c, flvenc.c Michael Niedermayer flvdec.c, flvenc.c Michael Niedermayer
gxf.c Reimar Doeffinger gxf.c Reimar Doeffinger
gxfenc.c Baptiste Coudurier gxfenc.c Baptiste Coudurier
hls.c Anssi Hannula hls.c Anssi Hannula
hls encryption (hlsenc.c) Christian Suloway
idcin.c Mike Melanson idcin.c Mike Melanson
idroqdec.c Mike Melanson idroqdec.c Mike Melanson
iff.c Jaikrishnan Menon iff.c Jaikrishnan Menon
@@ -399,6 +401,7 @@ Muxers/Demuxers:
ipmovie.c Mike Melanson ipmovie.c Mike Melanson
ircam* Paul B Mahol ircam* Paul B Mahol
iss.c Stefan Gehrer iss.c Stefan Gehrer
jacosub* Clément Bœsch
jvdec.c Peter Ross jvdec.c Peter Ross
libmodplug.c Clément Bœsch libmodplug.c Clément Bœsch
libnut.c Oded Shimon libnut.c Oded Shimon
@@ -408,30 +411,27 @@ Muxers/Demuxers:
matroska.c Aurelien Jacobs matroska.c Aurelien Jacobs
matroskadec.c Aurelien Jacobs matroskadec.c Aurelien Jacobs
matroskaenc.c David Conrad matroskaenc.c David Conrad
matroska subtitles (matroskaenc.c) John Peebles
metadata* Aurelien Jacobs metadata* Aurelien Jacobs
mgsts.c Paul B Mahol mgsts.c Paul B Mahol
microdvd* Aurelien Jacobs microdvd* Aurelien Jacobs
mm.c Peter Ross mm.c Peter Ross
mov.c Baptiste Coudurier mov.c Michael Niedermayer, Baptiste Coudurier
movenc.c Baptiste Coudurier, Matthieu Bouron movenc.c Baptiste Coudurier, Matthieu Bouron
movenccenc.c Eran Kornblau mpc.c Kostya Shishkov
mpeg.c Michael Niedermayer mpeg.c Michael Niedermayer
mpegenc.c Michael Niedermayer mpegenc.c Michael Niedermayer
mpegts.c Marton Balint mpegts* Baptiste Coudurier
mpegtsenc.c Baptiste Coudurier
msnwc_tcp.c Ramiro Polla msnwc_tcp.c Ramiro Polla
mtv.c Reynaldo H. Verdejo Pinochet mtv.c Reynaldo H. Verdejo Pinochet
mxf* Baptiste Coudurier mxf* Baptiste Coudurier
mxfdec.c Tomas Härdin mxfdec.c Tomas Härdin
nistspheredec.c Paul B Mahol nistspheredec.c Paul B Mahol
nsvdec.c Francois Revol nsvdec.c Francois Revol
nut* Michael Niedermayer nut.c Michael Niedermayer
nuv.c Reimar Doeffinger nuv.c Reimar Doeffinger
oggdec.c, oggdec.h David Conrad oggdec.c, oggdec.h David Conrad
oggenc.c Baptiste Coudurier oggenc.c Baptiste Coudurier
oggparse*.c David Conrad oggparse*.c David Conrad
oggparsedaala* Rostislav Pehlivanov
oma.c Maxim Poliakovski oma.c Maxim Poliakovski
paf.c Paul B Mahol paf.c Paul B Mahol
psxstr.c Mike Melanson psxstr.c Mike Melanson
@@ -441,21 +441,17 @@ Muxers/Demuxers:
raw.c Michael Niedermayer raw.c Michael Niedermayer
rdt.c Ronald S. Bultje rdt.c Ronald S. Bultje
rl2.c Sascha Sommer rl2.c Sascha Sommer
rmdec.c, rmenc.c Ronald S. Bultje rmdec.c, rmenc.c Ronald S. Bultje, Kostya Shishkov
rtmp* Kostya Shishkov
rtp.c, rtpenc.c Martin Storsjo rtp.c, rtpenc.c Martin Storsjo
rtpdec_ac3.* Gilles Chanteperdrix
rtpdec_dv.* Thomas Volkert
rtpdec_h261.*, rtpenc_h261.* Thomas Volkert
rtpdec_hevc.*, rtpenc_hevc.* Thomas Volkert
rtpdec_mpa_robust.* Gilles Chanteperdrix
rtpdec_asf.* Ronald S. Bultje rtpdec_asf.* Ronald S. Bultje
rtpdec_vc2hq.*, rtpenc_vc2hq.* Thomas Volkert
rtpdec_vp9.c Thomas Volkert
rtpenc_mpv.*, rtpenc_aac.* Martin Storsjo rtpenc_mpv.*, rtpenc_aac.* Martin Storsjo
rtsp.c Luca Barbato
sbgdec.c Nicolas George sbgdec.c Nicolas George
sdp.c Martin Storsjo sdp.c Martin Storsjo
segafilm.c Mike Melanson segafilm.c Mike Melanson
segment.c Stefano Sabatini siff.c Kostya Shishkov
smacker.c Kostya Shishkov
smjpeg* Paul B Mahol smjpeg* Paul B Mahol
spdif* Anssi Hannula spdif* Anssi Hannula
srtdec.c Aurelien Jacobs srtdec.c Aurelien Jacobs
@@ -466,21 +462,19 @@ Muxers/Demuxers:
voc.c Aurelien Jacobs voc.c Aurelien Jacobs
wav.c Michael Niedermayer wav.c Michael Niedermayer
wc3movie.c Mike Melanson wc3movie.c Mike Melanson
webm dash (matroskaenc.c) Vignesh Venkatasubramanian
webvtt* Matthew J Heaney webvtt* Matthew J Heaney
westwood.c Mike Melanson westwood.c Mike Melanson
wtv.c Peter Ross wtv.c Peter Ross
wv.c Kostya Shishkov
wvenc.c Paul B Mahol wvenc.c Paul B Mahol
Protocols: Protocols:
async.c Zhang Rui
bluray.c Petri Hintukainen bluray.c Petri Hintukainen
ftp.c Lukasz Marek ftp.c Lukasz Marek
http.c Ronald S. Bultje http.c Ronald S. Bultje
libssh.c Lukasz Marek libssh.c Lukasz Marek
mms*.c Ronald S. Bultje mms*.c Ronald S. Bultje
udp.c Luca Abeni udp.c Luca Abeni
icecast.c Marvin Scholz
libswresample libswresample
@@ -500,27 +494,26 @@ Resamplers:
Operating systems / CPU architectures Operating systems / CPU architectures
===================================== =====================================
Alpha Falk Hueffner Alpha Mans Rullgard, Falk Hueffner
MIPS Nedeljko Babic ARM Mans Rullgard
AVR32 Mans Rullgard
MIPS Mans Rullgard, Nedeljko Babic
Mac OS X / PowerPC Romain Dolbeau, Guillaume Poirier Mac OS X / PowerPC Romain Dolbeau, Guillaume Poirier
Amiga / PowerPC Colin Ward Amiga / PowerPC Colin Ward
Linux / PowerPC Luca Barbato
Windows MinGW Alex Beregszaszi, Ramiro Polla Windows MinGW Alex Beregszaszi, Ramiro Polla
Windows Cygwin Victor Paesa Windows Cygwin Victor Paesa
Windows MSVC Matthew Oliver, Hendrik Leppkes
Windows ICL Matthew Oliver
ADI/Blackfin DSP Marc Hoffman ADI/Blackfin DSP Marc Hoffman
Sparc Roman Shaposhnik Sparc Roman Shaposhnik
OS/2 KO Myung-Hun x86 Michael Niedermayer
Releases Releases
======== ========
2.8 Michael Niedermayer 2.2 Michael Niedermayer
2.7 Michael Niedermayer 2.1 Michael Niedermayer
2.6 Michael Niedermayer 1.2 Michael Niedermayer
2.5 Michael Niedermayer
2.4 Michael Niedermayer
If you want to maintain an older release, please contact us If you want to maintain an older release, please contact us
@@ -530,33 +523,33 @@ GnuPG Fingerprints of maintainers and contributors
Alexander Strasser 1C96 78B7 83CB 8AA7 9AF5 D1EB A7D8 A57B A876 E58F Alexander Strasser 1C96 78B7 83CB 8AA7 9AF5 D1EB A7D8 A57B A876 E58F
Anssi Hannula 1A92 FF42 2DD9 8D2E 8AF7 65A9 4278 C520 513D F3CB Anssi Hannula 1A92 FF42 2DD9 8D2E 8AF7 65A9 4278 C520 513D F3CB
Anton Khirnov 6D0C 6625 56F8 65D1 E5F5 814B B50A 1241 C067 07AB
Ash Hughes 694D 43D2 D180 C7C7 6421 ABD3 A641 D0B7 623D 6029 Ash Hughes 694D 43D2 D180 C7C7 6421 ABD3 A641 D0B7 623D 6029
Attila Kinali 11F0 F9A6 A1D2 11F6 C745 D10C 6520 BCDD F2DF E765 Attila Kinali 11F0 F9A6 A1D2 11F6 C745 D10C 6520 BCDD F2DF E765
Baptiste Coudurier 8D77 134D 20CC 9220 201F C5DB 0AC9 325C 5C1A BAAA Baptiste Coudurier 8D77 134D 20CC 9220 201F C5DB 0AC9 325C 5C1A BAAA
Ben Littler 3EE3 3723 E560 3214 A8CD 4DEB 2CDB FCE7 768C 8D2C Ben Littler 3EE3 3723 E560 3214 A8CD 4DEB 2CDB FCE7 768C 8D2C
Benoit Fouet B22A 4F4F 43EF 636B BB66 FCDC 0023 AE1E 2985 49C8 Benoit Fouet B22A 4F4F 43EF 636B BB66 FCDC 0023 AE1E 2985 49C8
Clément Bœsch 52D0 3A82 D445 F194 DB8B 2B16 87EE 2CB8 F4B8 FCF9 Bœsch Clément 52D0 3A82 D445 F194 DB8B 2B16 87EE 2CB8 F4B8 FCF9
Daniel Verkamp 78A6 07ED 782C 653E C628 B8B9 F0EB 8DD8 2F0E 21C7 Daniel Verkamp 78A6 07ED 782C 653E C628 B8B9 F0EB 8DD8 2F0E 21C7
Diego Biurrun 8227 1E31 B6D9 4994 7427 E220 9CAE D6CC 4757 FCC5
FFmpeg release signing key FCF9 86EA 15E6 E293 A564 4F10 B432 2F04 D676 58D8 FFmpeg release signing key FCF9 86EA 15E6 E293 A564 4F10 B432 2F04 D676 58D8
Ganesh Ajjanagadde C96A 848E 97C3 CEA2 AB72 5CE4 45F9 6A2D 3C36 FB1B
Gwenole Beauchesne 2E63 B3A6 3E44 37E2 017D 2704 53C7 6266 B153 99C4 Gwenole Beauchesne 2E63 B3A6 3E44 37E2 017D 2704 53C7 6266 B153 99C4
Jaikrishnan Menon 61A1 F09F 01C9 2D45 78E1 C862 25DC 8831 AF70 D368 Jaikrishnan Menon 61A1 F09F 01C9 2D45 78E1 C862 25DC 8831 AF70 D368
Jean Delvare 7CA6 9F44 60F1 BDC4 1FD2 C858 A552 6B9B B3CD 4E6A Jean Delvare 7CA6 9F44 60F1 BDC4 1FD2 C858 A552 6B9B B3CD 4E6A
Justin Ruggles 3136 ECC0 C10D 6C04 5F43 CA29 FCBE CD2A 3787 1EBF
Loren Merritt ABD9 08F4 C920 3F65 D8BE 35D7 1540 DAA7 060F 56DE Loren Merritt ABD9 08F4 C920 3F65 D8BE 35D7 1540 DAA7 060F 56DE
Lou Logan 7D68 DC73 CBEF EABB 671A B6CF 621C 2E28 82F8 DC3A Lou Logan 7D68 DC73 CBEF EABB 671A B6CF 621C 2E28 82F8 DC3A
Luca Barbato 6677 4209 213C 8843 5B67 29E7 E84C 78C2 84E9 0E34
Michael Niedermayer 9FF2 128B 147E F673 0BAD F133 611E C787 040B 0FAB Michael Niedermayer 9FF2 128B 147E F673 0BAD F133 611E C787 040B 0FAB
Nicolas George 24CE 01CE 9ACC 5CEB 74D8 8D9D B063 D997 36E5 4C93 Nicolas George 24CE 01CE 9ACC 5CEB 74D8 8D9D B063 D997 36E5 4C93
Panagiotis Issaris 6571 13A3 33D9 3726 F728 AA98 F643 B12E ECF3 E029 Panagiotis Issaris 6571 13A3 33D9 3726 F728 AA98 F643 B12E ECF3 E029
Peter Ross A907 E02F A6E5 0CD2 34CD 20D2 6760 79C5 AC40 DD6B Peter Ross A907 E02F A6E5 0CD2 34CD 20D2 6760 79C5 AC40 DD6B
Philip Langdale 5DC5 8D66 5FBA 3A43 18EC 045E F8D6 B194 6A75 682E Reimar Döffinger C61D 16E5 9E2C D10C 8958 38A4 0899 A2B9 06D4 D9C7
Reimar Doeffinger C61D 16E5 9E2C D10C 8958 38A4 0899 A2B9 06D4 D9C7
Reinhard Tartler 9300 5DC2 7E87 6C37 ED7B CA9A 9808 3544 9453 48A4 Reinhard Tartler 9300 5DC2 7E87 6C37 ED7B CA9A 9808 3544 9453 48A4
Reynaldo H. Verdejo Pinochet 6E27 CD34 170C C78E 4D4F 5F40 C18E 077F 3114 452A Reynaldo H. Verdejo Pinochet 6E27 CD34 170C C78E 4D4F 5F40 C18E 077F 3114 452A
Robert Swain EE7A 56EA 4A81 A7B5 2001 A521 67FA 362D A2FC 3E71 Robert Swain EE7A 56EA 4A81 A7B5 2001 A521 67FA 362D A2FC 3E71
Sascha Sommer 38A0 F88B 868E 9D3A 97D4 D6A0 E823 706F 1E07 0D3C Sascha Sommer 38A0 F88B 868E 9D3A 97D4 D6A0 E823 706F 1E07 0D3C
Stefano Sabatini 0D0B AD6B 5330 BBAD D3D6 6A0C 719C 2839 FC43 2D5F Stefano Sabatini 0D0B AD6B 5330 BBAD D3D6 6A0C 719C 2839 FC43 2D5F
Stephan Hilb 4F38 0B3A 5F39 B99B F505 E562 8D5C 5554 4E17 8863 Stephan Hilb 4F38 0B3A 5F39 B99B F505 E562 8D5C 5554 4E17 8863
Tiancheng "Timothy" Gu 9456 AFC0 814A 8139 E994 8351 7FE6 B095 B582 B0D4
Tim Nicholson 38CF DB09 3ED0 F607 8B67 6CED 0C0B FC44 8B0B FC83
Tomas Härdin A79D 4E3D F38F 763F 91F5 8B33 A01E 8AE0 41BB 2551 Tomas Härdin A79D 4E3D F38F 763F 91F5 8B33 A01E 8AE0 41BB 2551
Wei Gao 4269 7741 857A 0E60 9EC5 08D2 4744 4EFA 62C1 87B9 Wei Gao 4269 7741 857A 0E60 9EC5 08D2 4744 4EFA 62C1 87B9


@@ -4,8 +4,6 @@ include config.mak
vpath %.c $(SRC_PATH) vpath %.c $(SRC_PATH)
vpath %.cpp $(SRC_PATH) vpath %.cpp $(SRC_PATH)
vpath %.h $(SRC_PATH) vpath %.h $(SRC_PATH)
vpath %.inc $(SRC_PATH)
vpath %.m $(SRC_PATH)
vpath %.S $(SRC_PATH) vpath %.S $(SRC_PATH)
vpath %.asm $(SRC_PATH) vpath %.asm $(SRC_PATH)
vpath %.rc $(SRC_PATH) vpath %.rc $(SRC_PATH)
@@ -30,28 +28,17 @@ $(foreach prog,$(AVBASENAMES),$(eval OBJS-$(prog) += cmdutils.o))
$(foreach prog,$(AVBASENAMES),$(eval OBJS-$(prog)-$(CONFIG_OPENCL) += cmdutils_opencl.o)) $(foreach prog,$(AVBASENAMES),$(eval OBJS-$(prog)-$(CONFIG_OPENCL) += cmdutils_opencl.o))
OBJS-ffmpeg += ffmpeg_opt.o ffmpeg_filter.o OBJS-ffmpeg += ffmpeg_opt.o ffmpeg_filter.o
OBJS-ffmpeg-$(CONFIG_VIDEOTOOLBOX) += ffmpeg_videotoolbox.o
OBJS-ffmpeg-$(CONFIG_LIBMFX) += ffmpeg_qsv.o
OBJS-ffmpeg-$(CONFIG_VAAPI) += ffmpeg_vaapi.o
ifndef CONFIG_VIDEOTOOLBOX
OBJS-ffmpeg-$(CONFIG_VDA) += ffmpeg_videotoolbox.o
endif
OBJS-ffmpeg-$(CONFIG_CUVID) += ffmpeg_cuvid.o
OBJS-ffmpeg-$(HAVE_DXVA2_LIB) += ffmpeg_dxva2.o
OBJS-ffmpeg-$(HAVE_VDPAU_X11) += ffmpeg_vdpau.o OBJS-ffmpeg-$(HAVE_VDPAU_X11) += ffmpeg_vdpau.o
OBJS-ffserver += ffserver_config.o TESTTOOLS = audiogen videogen rotozoom tiny_psnr tiny_ssim base64
TESTTOOLS = audiogen videogen rotozoom tiny_psnr tiny_ssim base64 audiomatch
HOSTPROGS := $(TESTTOOLS:%=tests/%) doc/print_options HOSTPROGS := $(TESTTOOLS:%=tests/%) doc/print_options
TOOLS = qt-faststart trasher uncoded_frame TOOLS = qt-faststart trasher uncoded_frame
TOOLS-$(CONFIG_ZLIB) += cws2fws TOOLS-$(CONFIG_ZLIB) += cws2fws
# $(FFLIBS-yes) needs to be in linking order
FFLIBS-$(CONFIG_AVDEVICE) += avdevice FFLIBS-$(CONFIG_AVDEVICE) += avdevice
FFLIBS-$(CONFIG_AVFILTER) += avfilter FFLIBS-$(CONFIG_AVFILTER) += avfilter
FFLIBS-$(CONFIG_AVFORMAT) += avformat FFLIBS-$(CONFIG_AVFORMAT) += avformat
FFLIBS-$(CONFIG_AVCODEC) += avcodec
FFLIBS-$(CONFIG_AVRESAMPLE) += avresample FFLIBS-$(CONFIG_AVRESAMPLE) += avresample
FFLIBS-$(CONFIG_AVCODEC) += avcodec
FFLIBS-$(CONFIG_POSTPROC) += postproc FFLIBS-$(CONFIG_POSTPROC) += postproc
FFLIBS-$(CONFIG_SWRESAMPLE)+= swresample FFLIBS-$(CONFIG_SWRESAMPLE)+= swresample
FFLIBS-$(CONFIG_SWSCALE) += swscale FFLIBS-$(CONFIG_SWSCALE) += swscale
@@ -61,19 +48,17 @@ FFLIBS := avutil
DATA_FILES := $(wildcard $(SRC_PATH)/presets/*.ffpreset) $(SRC_PATH)/doc/ffprobe.xsd DATA_FILES := $(wildcard $(SRC_PATH)/presets/*.ffpreset) $(SRC_PATH)/doc/ffprobe.xsd
EXAMPLES_FILES := $(wildcard $(SRC_PATH)/doc/examples/*.c) $(SRC_PATH)/doc/examples/Makefile $(SRC_PATH)/doc/examples/README EXAMPLES_FILES := $(wildcard $(SRC_PATH)/doc/examples/*.c) $(SRC_PATH)/doc/examples/Makefile $(SRC_PATH)/doc/examples/README
SKIPHEADERS = cmdutils_common_opts.h \ SKIPHEADERS = cmdutils_common_opts.h compat/w32pthreads.h
compat/w32pthreads.h
include $(SRC_PATH)/common.mak include $(SRC_PATH)/common.mak
FF_EXTRALIBS := $(FFEXTRALIBS) FF_EXTRALIBS := $(FFEXTRALIBS)
FF_DEP_LIBS := $(DEP_LIBS) FF_DEP_LIBS := $(DEP_LIBS)
FF_STATIC_DEP_LIBS := $(STATIC_DEP_LIBS)
all: $(AVPROGS) all: $(AVPROGS)
$(TOOLS): %$(EXESUF): %.o $(EXEOBJS) $(TOOLS): %$(EXESUF): %.o $(EXEOBJS)
$(LD) $(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $^ $(ELIBS) $(LD) $(LDFLAGS) $(LD_O) $^ $(ELIBS)
tools/cws2fws$(EXESUF): ELIBS = $(ZLIB) tools/cws2fws$(EXESUF): ELIBS = $(ZLIB)
tools/uncoded_frame$(EXESUF): $(FF_DEP_LIBS) tools/uncoded_frame$(EXESUF): $(FF_DEP_LIBS)
@@ -87,10 +72,11 @@ config.h: .config
SUBDIR_VARS := CLEANFILES EXAMPLES FFLIBS HOSTPROGS TESTPROGS TOOLS \ SUBDIR_VARS := CLEANFILES EXAMPLES FFLIBS HOSTPROGS TESTPROGS TOOLS \
HEADERS ARCH_HEADERS BUILT_HEADERS SKIPHEADERS \ HEADERS ARCH_HEADERS BUILT_HEADERS SKIPHEADERS \
ARMV5TE-OBJS ARMV6-OBJS ARMV8-OBJS VFP-OBJS NEON-OBJS \ ARMV5TE-OBJS ARMV6-OBJS VFP-OBJS NEON-OBJS \
ALTIVEC-OBJS MMX-OBJS YASM-OBJS \ ALTIVEC-OBJS VIS-OBJS \
MIPSFPU-OBJS MIPSDSPR2-OBJS MIPSDSP-OBJS MSA-OBJS \ MMX-OBJS YASM-OBJS \
MMI-OBJS OBJS SLIBOBJS HOSTOBJS TESTOBJS MIPSFPU-OBJS MIPSDSPR2-OBJS MIPSDSPR1-OBJS MIPS32R2-OBJS \
OBJS SLIBOBJS HOSTOBJS TESTOBJS
define RESET define RESET
$(1) := $(1) :=
@@ -102,7 +88,6 @@ $(foreach V,$(SUBDIR_VARS),$(eval $(call RESET,$(V))))
SUBDIR := $(1)/ SUBDIR := $(1)/
include $(SRC_PATH)/$(1)/Makefile include $(SRC_PATH)/$(1)/Makefile
-include $(SRC_PATH)/$(1)/$(ARCH)/Makefile -include $(SRC_PATH)/$(1)/$(ARCH)/Makefile
-include $(SRC_PATH)/$(1)/$(INTRINSICS)/Makefile
include $(SRC_PATH)/library.mak include $(SRC_PATH)/library.mak
endef endef
@@ -121,14 +106,14 @@ endef
$(foreach P,$(PROGS),$(eval $(call DOPROG,$(P:$(PROGSSUF)$(EXESUF)=)))) $(foreach P,$(PROGS),$(eval $(call DOPROG,$(P:$(PROGSSUF)$(EXESUF)=))))
ffprobe.o cmdutils.o libavcodec/utils.o libavformat/utils.o libavdevice/avdevice.o libavfilter/avfilter.o libavutil/utils.o libpostproc/postprocess.o libswresample/swresample.o libswscale/utils.o : libavutil/ffversion.h ffprobe.o cmdutils.o : libavutil/ffversion.h
$(PROGS): %$(PROGSSUF)$(EXESUF): %$(PROGSSUF)_g$(EXESUF) $(PROGS): %$(PROGSSUF)$(EXESUF): %$(PROGSSUF)_g$(EXESUF)
$(CP) $< $@ $(CP) $< $@
$(STRIP) $@ $(STRIP) $@
%$(PROGSSUF)_g$(EXESUF): %.o $(FF_DEP_LIBS) %$(PROGSSUF)_g$(EXESUF): %.o $(FF_DEP_LIBS)
$(LD) $(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $(OBJS-$*) $(FF_EXTRALIBS) $(LD) $(LDFLAGS) $(LD_O) $(OBJS-$*) $(FF_EXTRALIBS)
OBJDIRS += tools OBJDIRS += tools
@@ -180,15 +165,11 @@ clean::
$(RM) $(CLEANSUFFIXES) $(RM) $(CLEANSUFFIXES)
$(RM) $(CLEANSUFFIXES:%=tools/%) $(RM) $(CLEANSUFFIXES:%=tools/%)
$(RM) -r coverage-html $(RM) -r coverage-html
$(RM) -rf coverage.info coverage.info.in lcov $(RM) -rf coverage.info lcov
distclean:: distclean::
$(RM) $(DISTCLEANSUFFIXES) $(RM) $(DISTCLEANSUFFIXES)
$(RM) config.* .config libavutil/avconfig.h .version mapfile avversion.h version.h libavutil/ffversion.h libavcodec/codec_names.h libavcodec/bsf_list.c libavformat/protocol_list.c $(RM) config.* .config libavutil/avconfig.h .version version.h libavutil/ffversion.h libavcodec/codec_names.h
ifeq ($(SRC_LINK),src)
$(RM) src
endif
$(RM) -rf doc/examples/pc-uninstalled
config: config:
$(SRC_PATH)/configure $(value FFMPEG_CONFIGURATION) $(SRC_PATH)/configure $(value FFMPEG_CONFIGURATION)

README Normal file

@@ -0,0 +1,18 @@
FFmpeg README
-------------
1) Documentation
----------------
* Read the documentation in the doc/ directory in git.
You can also view it online at http://ffmpeg.org/documentation.html
2) Licensing
------------
* See the LICENSE file.
3) Build and Install
--------------------
* See the INSTALL file.


@@ -1,49 +0,0 @@
FFmpeg README
=============
FFmpeg is a collection of libraries and tools to process multimedia content
such as audio, video, subtitles and related metadata.
## Libraries
* `libavcodec` provides implementations of a wide range of codecs.
* `libavformat` implements streaming protocols, container formats and basic I/O access.
* `libavutil` includes hashers, decompressors and miscellaneous utility functions.
* `libavfilter` provides a means to alter decoded audio and video through a chain of filters.
* `libavdevice` provides an abstraction to access capture and playback devices.
* `libswresample` implements audio mixing and resampling routines.
* `libswscale` implements color conversion and scaling routines.
## Tools
* [ffmpeg](https://ffmpeg.org/ffmpeg.html) is a command line toolbox to
manipulate, convert and stream multimedia content.
* [ffplay](https://ffmpeg.org/ffplay.html) is a minimalistic multimedia player.
* [ffprobe](https://ffmpeg.org/ffprobe.html) is a simple analysis tool to inspect
multimedia content.
* [ffserver](https://ffmpeg.org/ffserver.html) is a multimedia streaming server
for live broadcasts.
* Additional small tools such as `aviocat`, `ismindex` and `qt-faststart`.
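A few typical invocations of these tools, with purely illustrative file names and codec choices (the encoders used must of course be enabled in your build):

    # inspect the streams of a file
    ffprobe -hide_banner input.mov

    # transcode it (codec choices are examples only)
    ffmpeg -i input.mov -c:v libx264 -c:a aac output.mp4

    # play the result
    ffplay output.mp4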
## Documentation
The offline documentation is available in the **doc/** directory.
The online documentation is available in the main [website](https://ffmpeg.org)
and in the [wiki](https://trac.ffmpeg.org).
### Examples
Coding examples are available in the **doc/examples** directory.
## License
FFmpeg codebase is mainly LGPL-licensed with optional components licensed under
GPL. Please refer to the LICENSE file for detailed information.
## Contributing
Patches should be submitted to the ffmpeg-devel mailing list using
`git format-patch` or `git send-email`. Github pull requests should be
avoided because they are not part of our review process. Few developers
follow pull requests so they will likely be ignored.
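The usual submission flow looks roughly like this (the output directory is arbitrary; the list address is the ffmpeg-devel list named above):

    # one patch file per commit not yet in master
    git format-patch -o outgoing origin/master

    # mail the series to the ffmpeg-devel list for review
    git send-email --to=ffmpeg-devel@ffmpeg.org outgoing/*.patch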


@@ -1 +1 @@
3.1.1 2.2-rc2


@@ -1,15 +0,0 @@
┌────────────────────────────────────────┐
│ RELEASE NOTES for FFmpeg 3.1 "Laplace" │
└────────────────────────────────────────┘
The FFmpeg Project proudly presents FFmpeg 3.1 "Laplace", about 4
months after the release of FFmpeg 3.0.
A complete Changelog is available at the root of the project, and the
complete Git history on http://source.ffmpeg.org.
We hope you will like this release as much as we enjoyed working on it, and
as usual, if you have any questions about it, or any FFmpeg related topic,
feel free to join us on the #ffmpeg IRC channel (on irc.freenode.net) or ask
on the mailing-lists.

VERSION Normal file

@@ -0,0 +1 @@
2.2-rc2


@@ -1,17 +1,16 @@
OBJS-$(HAVE_ARMV5TE) += $(ARMV5TE-OBJS) $(ARMV5TE-OBJS-yes) OBJS-$(HAVE_ARMV5TE) += $(ARMV5TE-OBJS) $(ARMV5TE-OBJS-yes)
OBJS-$(HAVE_ARMV6) += $(ARMV6-OBJS) $(ARMV6-OBJS-yes) OBJS-$(HAVE_ARMV6) += $(ARMV6-OBJS) $(ARMV6-OBJS-yes)
OBJS-$(HAVE_ARMV8) += $(ARMV8-OBJS) $(ARMV8-OBJS-yes)
OBJS-$(HAVE_VFP) += $(VFP-OBJS) $(VFP-OBJS-yes) OBJS-$(HAVE_VFP) += $(VFP-OBJS) $(VFP-OBJS-yes)
OBJS-$(HAVE_NEON) += $(NEON-OBJS) $(NEON-OBJS-yes) OBJS-$(HAVE_NEON) += $(NEON-OBJS) $(NEON-OBJS-yes)
OBJS-$(HAVE_MIPSFPU) += $(MIPSFPU-OBJS) $(MIPSFPU-OBJS-yes) OBJS-$(HAVE_MIPSFPU) += $(MIPSFPU-OBJS) $(MIPSFPU-OBJS-yes)
OBJS-$(HAVE_MIPSDSP) += $(MIPSDSP-OBJS) $(MIPSDSP-OBJS-yes) OBJS-$(HAVE_MIPS32R2) += $(MIPS32R2-OBJS) $(MIPS32R2-OBJS-yes)
OBJS-$(HAVE_MIPSDSPR1) += $(MIPSDSPR1-OBJS) $(MIPSDSPR1-OBJS-yes)
OBJS-$(HAVE_MIPSDSPR2) += $(MIPSDSPR2-OBJS) $(MIPSDSPR2-OBJS-yes) OBJS-$(HAVE_MIPSDSPR2) += $(MIPSDSPR2-OBJS) $(MIPSDSPR2-OBJS-yes)
OBJS-$(HAVE_MSA) += $(MSA-OBJS) $(MSA-OBJS-yes)
OBJS-$(HAVE_MMI) += $(MMI-OBJS) $(MMI-OBJS-yes)
OBJS-$(HAVE_ALTIVEC) += $(ALTIVEC-OBJS) $(ALTIVEC-OBJS-yes) OBJS-$(HAVE_ALTIVEC) += $(ALTIVEC-OBJS) $(ALTIVEC-OBJS-yes)
OBJS-$(HAVE_VSX) += $(VSX-OBJS) $(VSX-OBJS-yes)
OBJS-$(HAVE_VIS) += $(VIS-OBJS) $(VIS-OBJS-yes)
OBJS-$(HAVE_MMX) += $(MMX-OBJS) $(MMX-OBJS-yes) OBJS-$(HAVE_MMX) += $(MMX-OBJS) $(MMX-OBJS-yes)
OBJS-$(HAVE_YASM) += $(YASM-OBJS) $(YASM-OBJS-yes) OBJS-$(HAVE_YASM) += $(YASM-OBJS) $(YASM-OBJS-yes)


@@ -41,10 +41,8 @@
#include "libavutil/avassert.h" #include "libavutil/avassert.h"
#include "libavutil/avstring.h" #include "libavutil/avstring.h"
#include "libavutil/bprint.h" #include "libavutil/bprint.h"
#include "libavutil/display.h"
#include "libavutil/mathematics.h" #include "libavutil/mathematics.h"
#include "libavutil/imgutils.h" #include "libavutil/imgutils.h"
#include "libavutil/libm.h"
#include "libavutil/parseutils.h" #include "libavutil/parseutils.h"
#include "libavutil/pixdesc.h" #include "libavutil/pixdesc.h"
#include "libavutil/eval.h" #include "libavutil/eval.h"
@@ -52,7 +50,6 @@
#include "libavutil/opt.h" #include "libavutil/opt.h"
#include "libavutil/cpu.h" #include "libavutil/cpu.h"
#include "libavutil/ffversion.h" #include "libavutil/ffversion.h"
#include "libavutil/version.h"
#include "cmdutils.h" #include "cmdutils.h"
#if CONFIG_NETWORK #if CONFIG_NETWORK
#include "libavformat/network.h" #include "libavformat/network.h"
@@ -64,23 +61,29 @@
static int init_report(const char *env); static int init_report(const char *env);
AVDictionary *sws_dict; struct SwsContext *sws_opts;
AVDictionary *swr_opts; AVDictionary *swr_opts;
AVDictionary *format_opts, *codec_opts, *resample_opts; AVDictionary *format_opts, *codec_opts, *resample_opts;
static FILE *report_file; static FILE *report_file;
static int report_file_level = AV_LOG_DEBUG;
int hide_banner = 0; int hide_banner = 0;
void init_opts(void) void init_opts(void)
{ {
av_dict_set(&sws_dict, "flags", "bicubic", 0);
if(CONFIG_SWSCALE)
sws_opts = sws_getContext(16, 16, 0, 16, 16, 0, SWS_BICUBIC,
NULL, NULL, NULL);
} }
void uninit_opts(void) void uninit_opts(void)
{ {
#if CONFIG_SWSCALE
sws_freeContext(sws_opts);
sws_opts = NULL;
#endif
av_dict_free(&swr_opts); av_dict_free(&swr_opts);
av_dict_free(&sws_dict);
av_dict_free(&format_opts); av_dict_free(&format_opts);
av_dict_free(&codec_opts); av_dict_free(&codec_opts);
av_dict_free(&resample_opts); av_dict_free(&resample_opts);
@@ -101,11 +104,9 @@ static void log_callback_report(void *ptr, int level, const char *fmt, va_list v
av_log_default_callback(ptr, level, fmt, vl); av_log_default_callback(ptr, level, fmt, vl);
av_log_format_line(ptr, level, fmt, vl2, line, sizeof(line), &print_prefix); av_log_format_line(ptr, level, fmt, vl2, line, sizeof(line), &print_prefix);
va_end(vl2); va_end(vl2);
if (report_file_level >= level) {
fputs(line, report_file); fputs(line, report_file);
fflush(report_file); fflush(report_file);
} }
}
static void (*program_exit)(int ret); static void (*program_exit)(int ret);
@@ -162,7 +163,7 @@ void show_help_options(const OptionDef *options, const char *msg, int req_flags,
int first; int first;
first = 1; first = 1;
for (po = options; po->name; po++) { for (po = options; po->name != NULL; po++) {
char buf[64]; char buf[64];
if (((po->flags & req_flags) != req_flags) || if (((po->flags & req_flags) != req_flags) ||
@@ -201,7 +202,7 @@ static const OptionDef *find_option(const OptionDef *po, const char *name)
const char *p = strchr(name, ':'); const char *p = strchr(name, ':');
int len = p ? p - name : strlen(name); int len = p ? p - name : strlen(name);
while (po->name) { while (po->name != NULL) {
if (!strncmp(name, po->name, len) && strlen(po->name) == len) if (!strncmp(name, po->name, len) && strlen(po->name) == len)
break; break;
po++; po++;
@@ -250,7 +251,7 @@ static void prepare_app_arguments(int *argc_ptr, char ***argv_ptr)
win32_argv_utf8 = av_mallocz(sizeof(char *) * (win32_argc + 1) + buffsize); win32_argv_utf8 = av_mallocz(sizeof(char *) * (win32_argc + 1) + buffsize);
argstr_flat = (char *)win32_argv_utf8 + sizeof(char *) * (win32_argc + 1); argstr_flat = (char *)win32_argv_utf8 + sizeof(char *) * (win32_argc + 1);
if (!win32_argv_utf8) { if (win32_argv_utf8 == NULL) {
LocalFree(argv_w); LocalFree(argv_w);
return; return;
} }
@@ -286,14 +287,10 @@ static int write_option(void *optctx, const OptionDef *po, const char *opt,
if (po->flags & OPT_SPEC) { if (po->flags & OPT_SPEC) {
SpecifierOpt **so = dst; SpecifierOpt **so = dst;
char *p = strchr(opt, ':'); char *p = strchr(opt, ':');
char *str;
dstcount = (int *)(so + 1); dstcount = (int *)(so + 1);
*so = grow_array(*so, sizeof(**so), dstcount, *dstcount + 1); *so = grow_array(*so, sizeof(**so), dstcount, *dstcount + 1);
str = av_strdup(p ? p + 1 : ""); (*so)[*dstcount - 1].specifier = av_strdup(p ? p + 1 : "");
if (!str)
return AVERROR(ENOMEM);
(*so)[*dstcount - 1].specifier = str;
dst = &(*so)[*dstcount - 1].u; dst = &(*so)[*dstcount - 1].u;
} }
@@ -301,8 +298,6 @@ static int write_option(void *optctx, const OptionDef *po, const char *opt,
char *str; char *str;
str = av_strdup(arg); str = av_strdup(arg);
av_freep(dst); av_freep(dst);
if (!str)
return AVERROR(ENOMEM);
*(char **)dst = str; *(char **)dst = str;
} else if (po->flags & OPT_BOOL || po->flags & OPT_INT) { } else if (po->flags & OPT_BOOL || po->flags & OPT_INT) {
*(int *)dst = parse_number_or_die(opt, arg, OPT_INT64, INT_MIN, INT_MAX); *(int *)dst = parse_number_or_die(opt, arg, OPT_INT64, INT_MIN, INT_MAX);
@@ -446,7 +441,7 @@ int locate_option(int argc, char **argv, const OptionDef *options,
(po->name && !strcmp(optname, po->name))) (po->name && !strcmp(optname, po->name)))
return i; return i;
if (!po->name || po->flags & HAS_ARG) if (po->flags & HAS_ARG)
i++; i++;
} }
return 0; return 0;
@@ -476,22 +471,10 @@ static void dump_argument(const char *a)
fputc('"', report_file); fputc('"', report_file);
} }
static void check_options(const OptionDef *po)
{
while (po->name) {
if (po->flags & OPT_PERFILE)
av_assert0(po->flags & (OPT_INPUT | OPT_OUTPUT));
po++;
}
}
void parse_loglevel(int argc, char **argv, const OptionDef *options) void parse_loglevel(int argc, char **argv, const OptionDef *options)
{ {
int idx = locate_option(argc, argv, options, "loglevel"); int idx = locate_option(argc, argv, options, "loglevel");
const char *env; const char *env;
check_options(options);
if (!idx) if (!idx)
idx = locate_option(argc, argv, options, "v"); idx = locate_option(argc, argv, options, "v");
if (idx && argv[idx + 1]) if (idx && argv[idx + 1])
@@ -523,7 +506,7 @@ static const AVOption *opt_find(void *obj, const char *name, const char *unit,
return o; return o;
} }
#define FLAGS (o->type == AV_OPT_TYPE_FLAGS && (arg[0]=='-' || arg[0]=='+')) ? AV_DICT_APPEND : 0 #define FLAGS (o->type == AV_OPT_TYPE_FLAGS) ? AV_DICT_APPEND : 0
int opt_default(void *optctx, const char *opt, const char *arg) int opt_default(void *optctx, const char *opt, const char *arg)
{ {
const AVOption *o; const AVOption *o;
@@ -534,12 +517,7 @@ int opt_default(void *optctx, const char *opt, const char *arg)
#if CONFIG_AVRESAMPLE #if CONFIG_AVRESAMPLE
const AVClass *rc = avresample_get_class(); const AVClass *rc = avresample_get_class();
#endif #endif
#if CONFIG_SWSCALE const AVClass *sc, *swr_class;
const AVClass *sc = sws_get_class();
#endif
#if CONFIG_SWRESAMPLE
const AVClass *swr_class = swr_get_class();
#endif
if (!strcmp(opt, "debug") || !strcmp(opt, "fdebug")) if (!strcmp(opt, "debug") || !strcmp(opt, "fdebug"))
av_log_set_level(AV_LOG_DEBUG); av_log_set_level(AV_LOG_DEBUG);
@@ -563,33 +541,20 @@ int opt_default(void *optctx, const char *opt, const char *arg)
consumed = 1; consumed = 1;
} }
#if CONFIG_SWSCALE #if CONFIG_SWSCALE
if (!consumed && (o = opt_find(&sc, opt, NULL, 0, sc = sws_get_class();
AV_OPT_SEARCH_CHILDREN | AV_OPT_SEARCH_FAKE_OBJ))) { if (!consumed && opt_find(&sc, opt, NULL, 0,
struct SwsContext *sws = sws_alloc_context(); AV_OPT_SEARCH_CHILDREN | AV_OPT_SEARCH_FAKE_OBJ)) {
int ret = av_opt_set(sws, opt, arg, 0); // XXX we only support sws_flags, not arbitrary sws options
sws_freeContext(sws); int ret = av_opt_set(sws_opts, opt, arg, 0);
if (!strcmp(opt, "srcw") || !strcmp(opt, "srch") ||
!strcmp(opt, "dstw") || !strcmp(opt, "dsth") ||
!strcmp(opt, "src_format") || !strcmp(opt, "dst_format")) {
av_log(NULL, AV_LOG_ERROR, "Directly using swscale dimensions/format options is not supported, please use the -s or -pix_fmt options\n");
return AVERROR(EINVAL);
}
if (ret < 0) { if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error setting option %s.\n", opt); av_log(NULL, AV_LOG_ERROR, "Error setting option %s.\n", opt);
return ret; return ret;
} }
av_dict_set(&sws_dict, opt, arg, FLAGS);
consumed = 1;
}
#else
if (!consumed && !strcmp(opt, "sws_flags")) {
av_log(NULL, AV_LOG_WARNING, "Ignoring %s %s, due to disabled swscale\n", opt, arg);
consumed = 1; consumed = 1;
} }
#endif #endif
#if CONFIG_SWRESAMPLE #if CONFIG_SWRESAMPLE
swr_class = swr_get_class();
if (!consumed && (o=opt_find(&swr_class, opt, NULL, 0, if (!consumed && (o=opt_find(&swr_class, opt, NULL, 0,
AV_OPT_SEARCH_CHILDREN | AV_OPT_SEARCH_FAKE_OBJ))) { AV_OPT_SEARCH_CHILDREN | AV_OPT_SEARCH_FAKE_OBJ))) {
struct SwrContext *swr = swr_alloc(); struct SwrContext *swr = swr_alloc();
@@ -653,7 +618,9 @@ static void finish_group(OptionParseContext *octx, int group_idx,
*g = octx->cur_group; *g = octx->cur_group;
g->arg = arg; g->arg = arg;
g->group_def = l->group_def; g->group_def = l->group_def;
g->sws_dict = sws_dict; #if CONFIG_SWSCALE
g->sws_opts = sws_opts;
#endif
g->swr_opts = swr_opts; g->swr_opts = swr_opts;
g->codec_opts = codec_opts; g->codec_opts = codec_opts;
g->format_opts = format_opts; g->format_opts = format_opts;
@@ -662,7 +629,9 @@ static void finish_group(OptionParseContext *octx, int group_idx,
codec_opts = NULL; codec_opts = NULL;
format_opts = NULL; format_opts = NULL;
resample_opts = NULL; resample_opts = NULL;
sws_dict = NULL; #if CONFIG_SWSCALE
sws_opts = NULL;
#endif
swr_opts = NULL; swr_opts = NULL;
init_opts(); init_opts();
@@ -693,7 +662,7 @@ static void init_parse_context(OptionParseContext *octx,
memset(octx, 0, sizeof(*octx)); memset(octx, 0, sizeof(*octx));
octx->nb_groups = nb_groups; octx->nb_groups = nb_groups;
octx->groups = av_mallocz_array(octx->nb_groups, sizeof(*octx->groups)); octx->groups = av_mallocz(sizeof(*octx->groups) * octx->nb_groups);
if (!octx->groups) if (!octx->groups)
exit_program(1); exit_program(1);
@@ -718,8 +687,9 @@ void uninit_parse_context(OptionParseContext *octx)
av_dict_free(&l->groups[j].codec_opts); av_dict_free(&l->groups[j].codec_opts);
av_dict_free(&l->groups[j].format_opts); av_dict_free(&l->groups[j].format_opts);
av_dict_free(&l->groups[j].resample_opts); av_dict_free(&l->groups[j].resample_opts);
#if CONFIG_SWSCALE
av_dict_free(&l->groups[j].sws_dict); sws_freeContext(l->groups[j].sws_opts);
#endif
av_dict_free(&l->groups[j].swr_opts); av_dict_free(&l->groups[j].swr_opts);
} }
av_freep(&l->groups); av_freep(&l->groups);
@@ -861,21 +831,13 @@ int opt_loglevel(void *optctx, const char *opt, const char *arg)
{ "info" , AV_LOG_INFO }, { "info" , AV_LOG_INFO },
{ "verbose", AV_LOG_VERBOSE }, { "verbose", AV_LOG_VERBOSE },
{ "debug" , AV_LOG_DEBUG }, { "debug" , AV_LOG_DEBUG },
{ "trace" , AV_LOG_TRACE },
}; };
char *tail; char *tail;
int level; int level;
int flags;
int i; int i;
flags = av_log_get_flags();
tail = strstr(arg, "repeat"); tail = strstr(arg, "repeat");
if (tail) av_log_set_flags(tail ? 0 : AV_LOG_SKIP_REPEATED);
flags &= ~AV_LOG_SKIP_REPEATED;
else
flags |= AV_LOG_SKIP_REPEATED;
av_log_set_flags(flags);
if (tail == arg) if (tail == arg)
arg += 6 + (arg[6]=='+'); arg += 6 + (arg[6]=='+');
if(tail && !*arg) if(tail && !*arg)
@@ -957,13 +919,6 @@ static int init_report(const char *env)
av_free(filename_template); av_free(filename_template);
filename_template = val; filename_template = val;
val = NULL; val = NULL;
} else if (!strcmp(key, "level")) {
char *tail;
report_file_level = strtol(val, &tail, 10);
if (*tail) {
av_log(NULL, AV_LOG_FATAL, "Invalid report file level\n");
exit_program(1);
}
} else { } else {
av_log(NULL, AV_LOG_ERROR, "Unknown key '%s' in FFREPORT\n", key); av_log(NULL, AV_LOG_ERROR, "Unknown key '%s' in FFREPORT\n", key);
} }
@@ -982,10 +937,9 @@ static int init_report(const char *env)
report_file = fopen(filename.str, "w"); report_file = fopen(filename.str, "w");
if (!report_file) { if (!report_file) {
int ret = AVERROR(errno);
av_log(NULL, AV_LOG_ERROR, "Failed to open report \"%s\": %s\n", av_log(NULL, AV_LOG_ERROR, "Failed to open report \"%s\": %s\n",
filename.str, strerror(errno)); filename.str, strerror(errno));
return ret; return AVERROR(errno);
} }
av_log_set_callback(log_callback_report); av_log_set_callback(log_callback_report);
av_log(NULL, AV_LOG_INFO, av_log(NULL, AV_LOG_INFO,
@@ -1059,8 +1013,7 @@ static int warned_cfg = 0;
LIB##LIBNAME##_VERSION_MAJOR, \ LIB##LIBNAME##_VERSION_MAJOR, \
LIB##LIBNAME##_VERSION_MINOR, \ LIB##LIBNAME##_VERSION_MINOR, \
LIB##LIBNAME##_VERSION_MICRO, \ LIB##LIBNAME##_VERSION_MICRO, \
AV_VERSION_MAJOR(version), AV_VERSION_MINOR(version),\ version >> 16, version >> 8 & 0xff, version & 0xff); \
AV_VERSION_MICRO(version)); \
} \ } \
if (flags & SHOW_CONFIG) { \ if (flags & SHOW_CONFIG) { \
const char *cfg = libname##_configuration(); \ const char *cfg = libname##_configuration(); \
@@ -1099,7 +1052,8 @@ static void print_program_info(int flags, int level)
av_log(NULL, level, " Copyright (c) %d-%d the FFmpeg developers", av_log(NULL, level, " Copyright (c) %d-%d the FFmpeg developers",
program_birth_year, CONFIG_THIS_YEAR); program_birth_year, CONFIG_THIS_YEAR);
av_log(NULL, level, "\n"); av_log(NULL, level, "\n");
av_log(NULL, level, "%sbuilt with %s\n", indent, CC_IDENT); av_log(NULL, level, "%sbuilt on %s %s with %s\n",
indent, __DATE__, __TIME__, CC_IDENT);
av_log(NULL, level, "%sconfiguration: " FFMPEG_CONFIGURATION "\n", indent); av_log(NULL, level, "%sconfiguration: " FFMPEG_CONFIGURATION "\n", indent);
} }
@@ -1144,7 +1098,7 @@ void show_banner(int argc, char **argv, const OptionDef *options)
int show_version(void *optctx, const char *opt, const char *arg) int show_version(void *optctx, const char *opt, const char *arg)
{ {
av_log_set_callback(log_callback_help); av_log_set_callback(log_callback_help);
print_program_info (SHOW_COPYRIGHT, AV_LOG_INFO); print_program_info (0 , AV_LOG_INFO);
print_all_libs_info(SHOW_VERSION, AV_LOG_INFO); print_all_libs_info(SHOW_VERSION, AV_LOG_INFO);
return 0; return 0;
@@ -1232,24 +1186,16 @@ int show_license(void *optctx, const char *opt, const char *arg)
return 0; return 0;
} }
static int is_device(const AVClass *avclass) int show_formats(void *optctx, const char *opt, const char *arg)
{
if (!avclass)
return 0;
return AV_IS_INPUT_DEVICE(avclass->category) || AV_IS_OUTPUT_DEVICE(avclass->category);
}
static int show_formats_devices(void *optctx, const char *opt, const char *arg, int device_only)
{ {
AVInputFormat *ifmt = NULL; AVInputFormat *ifmt = NULL;
AVOutputFormat *ofmt = NULL; AVOutputFormat *ofmt = NULL;
const char *last_name; const char *last_name;
int is_dev;
printf("%s\n" printf("File formats:\n"
" D. = Demuxing supported\n" " D. = Demuxing supported\n"
" .E = Muxing supported\n" " .E = Muxing supported\n"
" --\n", device_only ? "Devices:" : "File formats:"); " --\n");
last_name = "000"; last_name = "000";
for (;;) { for (;;) {
int decode = 0; int decode = 0;
@@ -1258,10 +1204,7 @@ static int show_formats_devices(void *optctx, const char *opt, const char *arg,
const char *long_name = NULL; const char *long_name = NULL;
while ((ofmt = av_oformat_next(ofmt))) { while ((ofmt = av_oformat_next(ofmt))) {
is_dev = is_device(ofmt->priv_class); if ((name == NULL || strcmp(ofmt->name, name) < 0) &&
if (!is_dev && device_only)
continue;
if ((!name || strcmp(ofmt->name, name) < 0) &&
strcmp(ofmt->name, last_name) > 0) { strcmp(ofmt->name, last_name) > 0) {
name = ofmt->name; name = ofmt->name;
long_name = ofmt->long_name; long_name = ofmt->long_name;
@@ -1269,10 +1212,7 @@ static int show_formats_devices(void *optctx, const char *opt, const char *arg,
} }
} }
while ((ifmt = av_iformat_next(ifmt))) { while ((ifmt = av_iformat_next(ifmt))) {
is_dev = is_device(ifmt->priv_class); if ((name == NULL || strcmp(ifmt->name, name) < 0) &&
if (!is_dev && device_only)
continue;
if ((!name || strcmp(ifmt->name, name) < 0) &&
strcmp(ifmt->name, last_name) > 0) { strcmp(ifmt->name, last_name) > 0) {
name = ifmt->name; name = ifmt->name;
long_name = ifmt->long_name; long_name = ifmt->long_name;
@@ -1281,7 +1221,7 @@ static int show_formats_devices(void *optctx, const char *opt, const char *arg,
if (name && strcmp(ifmt->name, name) == 0) if (name && strcmp(ifmt->name, name) == 0)
decode = 1; decode = 1;
} }
if (!name) if (name == NULL)
break; break;
last_name = name; last_name = name;
@@ -1294,16 +1234,6 @@ static int show_formats_devices(void *optctx, const char *opt, const char *arg,
return 0; return 0;
} }
int show_formats(void *optctx, const char *opt, const char *arg)
{
return show_formats_devices(optctx, opt, arg, 0);
}
int show_devices(void *optctx, const char *opt, const char *arg)
{
return show_formats_devices(optctx, opt, arg, 1);
}
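
Not part of the patch: a minimal sketch of how the AVClass category test used by is_device() classifies muxers, assuming libavformat/libavdevice of roughly this era (av_register_all(), av_oformat_next()).

```c
#include <stdio.h>
#include "libavformat/avformat.h"
#include "libavdevice/avdevice.h"
#include "libavutil/log.h"

static void list_output_devices(void)
{
    AVOutputFormat *ofmt = NULL;

    av_register_all();
    avdevice_register_all();   /* devices only show up once libavdevice is registered */
    while ((ofmt = av_oformat_next(ofmt))) {
        const AVClass *c = ofmt->priv_class;
        if (c && AV_IS_OUTPUT_DEVICE(c->category))
            printf("device: %-12s %s\n", ofmt->name,
                   ofmt->long_name ? ofmt->long_name : "");
    }
}
```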
#define PRINT_CODEC_SUPPORTED(codec, field, type, list_name, term, get_name) \ #define PRINT_CODEC_SUPPORTED(codec, field, type, list_name, term, get_name) \
if (codec->field) { \ if (codec->field) { \
const type *p = codec->field; \ const type *p = codec->field; \
@@ -1324,47 +1254,16 @@ static void print_codec(const AVCodec *c)
printf("%s %s [%s]:\n", encoder ? "Encoder" : "Decoder", c->name, printf("%s %s [%s]:\n", encoder ? "Encoder" : "Decoder", c->name,
c->long_name ? c->long_name : ""); c->long_name ? c->long_name : "");
printf(" General capabilities: ");
if (c->capabilities & AV_CODEC_CAP_DRAW_HORIZ_BAND)
printf("horizband ");
if (c->capabilities & AV_CODEC_CAP_DR1)
printf("dr1 ");
if (c->capabilities & AV_CODEC_CAP_TRUNCATED)
printf("trunc ");
if (c->capabilities & AV_CODEC_CAP_DELAY)
printf("delay ");
if (c->capabilities & AV_CODEC_CAP_SMALL_LAST_FRAME)
printf("small ");
if (c->capabilities & AV_CODEC_CAP_SUBFRAMES)
printf("subframes ");
if (c->capabilities & AV_CODEC_CAP_EXPERIMENTAL)
printf("exp ");
if (c->capabilities & AV_CODEC_CAP_CHANNEL_CONF)
printf("chconf ");
if (c->capabilities & AV_CODEC_CAP_PARAM_CHANGE)
printf("paramchange ");
if (c->capabilities & AV_CODEC_CAP_VARIABLE_FRAME_SIZE)
printf("variable ");
if (c->capabilities & (AV_CODEC_CAP_FRAME_THREADS |
AV_CODEC_CAP_SLICE_THREADS |
AV_CODEC_CAP_AUTO_THREADS))
printf("threads ");
if (!c->capabilities)
printf("none");
printf("\n");
if (c->type == AVMEDIA_TYPE_VIDEO || if (c->type == AVMEDIA_TYPE_VIDEO ||
c->type == AVMEDIA_TYPE_AUDIO) { c->type == AVMEDIA_TYPE_AUDIO) {
printf(" Threading capabilities: "); printf(" Threading capabilities: ");
switch (c->capabilities & (AV_CODEC_CAP_FRAME_THREADS | switch (c->capabilities & (CODEC_CAP_FRAME_THREADS |
AV_CODEC_CAP_SLICE_THREADS | CODEC_CAP_SLICE_THREADS)) {
AV_CODEC_CAP_AUTO_THREADS)) { case CODEC_CAP_FRAME_THREADS |
case AV_CODEC_CAP_FRAME_THREADS | CODEC_CAP_SLICE_THREADS: printf("frame and slice"); break;
AV_CODEC_CAP_SLICE_THREADS: printf("frame and slice"); break; case CODEC_CAP_FRAME_THREADS: printf("frame"); break;
case AV_CODEC_CAP_FRAME_THREADS: printf("frame"); break; case CODEC_CAP_SLICE_THREADS: printf("slice"); break;
case AV_CODEC_CAP_SLICE_THREADS: printf("slice"); break; default: printf("no"); break;
case AV_CODEC_CAP_AUTO_THREADS : printf("auto"); break;
default: printf("none"); break;
} }
printf("\n"); printf("\n");
} }
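
The capability flags printed above can be queried the same way from any application; a small sketch, assuming the AV_CODEC_CAP_* names used on the left-hand side of the diff.

```c
#include <stdio.h>
#include "libavcodec/avcodec.h"

static void report_threading(const char *name)
{
    const AVCodec *c;

    avcodec_register_all();
    c = avcodec_find_encoder_by_name(name);
    if (!c) {
        printf("no encoder named %s\n", name);
        return;
    }
    printf("%s: frame threads: %s, slice threads: %s\n", c->name,
           c->capabilities & AV_CODEC_CAP_FRAME_THREADS ? "yes" : "no",
           c->capabilities & AV_CODEC_CAP_SLICE_THREADS ? "yes" : "no");
}
```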
@@ -1423,7 +1322,7 @@ static int compare_codec_desc(const void *a, const void *b)
const AVCodecDescriptor * const *da = a; const AVCodecDescriptor * const *da = a;
const AVCodecDescriptor * const *db = b; const AVCodecDescriptor * const *db = b;
return (*da)->type != (*db)->type ? FFDIFFSIGN((*da)->type, (*db)->type) : return (*da)->type != (*db)->type ? (*da)->type - (*db)->type :
strcmp((*da)->name, (*db)->name); strcmp((*da)->name, (*db)->name);
} }
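
The left-hand side replaces the plain subtraction comparator with FFDIFFSIGN. A subtraction comparator can overflow for large operands and then report the wrong sign to qsort(); FFDIFFSIGN(x,y) expands to ((x) > (y)) - ((x) < (y)) and cannot. A standalone illustration, assuming the macro comes from libavutil/macros.h:

```c
#include <limits.h>
#include <stdio.h>
#include "libavutil/macros.h"

int main(void)
{
    int a = INT_MIN, b = 1;

    /* a - b would overflow (undefined behaviour); the comparison-based
     * macro always yields -1, 0 or 1. */
    printf("%d\n", FFDIFFSIGN(a, b));   /* -1 */
    return 0;
}
```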
@@ -1479,9 +1378,6 @@ int show_codecs(void *optctx, const char *opt, const char *arg)
const AVCodecDescriptor *desc = codecs[i]; const AVCodecDescriptor *desc = codecs[i];
const AVCodec *codec = NULL; const AVCodec *codec = NULL;
if (strstr(desc->name, "_deprecated"))
continue;
printf(" "); printf(" ");
printf(avcodec_find_decoder(desc->id) ? "D" : "."); printf(avcodec_find_decoder(desc->id) ? "D" : ".");
printf(avcodec_find_encoder(desc->id) ? "E" : "."); printf(avcodec_find_encoder(desc->id) ? "E" : ".");
@@ -1537,11 +1433,11 @@ static void print_codecs(int encoder)
while ((codec = next_codec_for_id(desc->id, codec, encoder))) { while ((codec = next_codec_for_id(desc->id, codec, encoder))) {
printf(" %c", get_media_type_char(desc->type)); printf(" %c", get_media_type_char(desc->type));
printf((codec->capabilities & AV_CODEC_CAP_FRAME_THREADS) ? "F" : "."); printf((codec->capabilities & CODEC_CAP_FRAME_THREADS) ? "F" : ".");
printf((codec->capabilities & AV_CODEC_CAP_SLICE_THREADS) ? "S" : "."); printf((codec->capabilities & CODEC_CAP_SLICE_THREADS) ? "S" : ".");
printf((codec->capabilities & AV_CODEC_CAP_EXPERIMENTAL) ? "X" : "."); printf((codec->capabilities & CODEC_CAP_EXPERIMENTAL) ? "X" : ".");
printf((codec->capabilities & AV_CODEC_CAP_DRAW_HORIZ_BAND)?"B" : "."); printf((codec->capabilities & CODEC_CAP_DRAW_HORIZ_BAND)?"B" : ".");
printf((codec->capabilities & AV_CODEC_CAP_DR1) ? "D" : "."); printf((codec->capabilities & CODEC_CAP_DR1) ? "D" : ".");
printf(" %-20s %s", codec->name, codec->long_name ? codec->long_name : ""); printf(" %-20s %s", codec->name, codec->long_name ? codec->long_name : "");
if (strcmp(codec->name, desc->name)) if (strcmp(codec->name, desc->name))
@@ -1593,8 +1489,7 @@ int show_protocols(void *optctx, const char *opt, const char *arg)
int show_filters(void *optctx, const char *opt, const char *arg) int show_filters(void *optctx, const char *opt, const char *arg)
{ {
#if CONFIG_AVFILTER const AVFilter av_unused(*filter) = NULL;
const AVFilter *filter = NULL;
char descr[64], *descr_cur; char descr[64], *descr_cur;
int i, j; int i, j;
const AVFilterPad *pad; const AVFilterPad *pad;
@@ -1602,11 +1497,12 @@ int show_filters(void *optctx, const char *opt, const char *arg)
printf("Filters:\n" printf("Filters:\n"
" T.. = Timeline support\n" " T.. = Timeline support\n"
" .S. = Slice threading\n" " .S. = Slice threading\n"
" ..C = Command support\n" " ..C = Commmand support\n"
" A = Audio input/output\n" " A = Audio input/output\n"
" V = Video input/output\n" " V = Video input/output\n"
" N = Dynamic number and/or type of input/output\n" " N = Dynamic number and/or type of input/output\n"
" | = Source or sink filter\n"); " | = Source or sink filter\n");
#if CONFIG_AVFILTER
while ((filter = avfilter_next(filter))) { while ((filter = avfilter_next(filter))) {
descr_cur = descr; descr_cur = descr;
for (i = 0; i < 2; i++) { for (i = 0; i < 2; i++) {
@@ -1615,24 +1511,22 @@ int show_filters(void *optctx, const char *opt, const char *arg)
*(descr_cur++) = '>'; *(descr_cur++) = '>';
} }
pad = i ? filter->outputs : filter->inputs; pad = i ? filter->outputs : filter->inputs;
for (j = 0; pad && avfilter_pad_get_name(pad, j); j++) { for (j = 0; pad && pad[j].name; j++) {
if (descr_cur >= descr + sizeof(descr) - 4) if (descr_cur >= descr + sizeof(descr) - 4)
break; break;
*(descr_cur++) = get_media_type_char(avfilter_pad_get_type(pad, j)); *(descr_cur++) = get_media_type_char(pad[j].type);
} }
if (!j) if (!j)
*(descr_cur++) = ((!i && (filter->flags & AVFILTER_FLAG_DYNAMIC_INPUTS)) || *(descr_cur++) = ((!i && (filter->flags & AVFILTER_FLAG_DYNAMIC_INPUTS)) ||
( i && (filter->flags & AVFILTER_FLAG_DYNAMIC_OUTPUTS))) ? 'N' : '|'; ( i && (filter->flags & AVFILTER_FLAG_DYNAMIC_OUTPUTS))) ? 'N' : '|';
} }
*descr_cur = 0; *descr_cur = 0;
printf(" %c%c%c %-17s %-10s %s\n", printf(" %c%c%c %-16s %-10s %s\n",
filter->flags & AVFILTER_FLAG_SUPPORT_TIMELINE ? 'T' : '.', filter->flags & AVFILTER_FLAG_SUPPORT_TIMELINE ? 'T' : '.',
filter->flags & AVFILTER_FLAG_SLICE_THREADS ? 'S' : '.', filter->flags & AVFILTER_FLAG_SLICE_THREADS ? 'S' : '.',
filter->process_command ? 'C' : '.', filter->process_command ? 'C' : '.',
filter->name, descr, filter->description); filter->name, descr, filter->description);
} }
#else
printf("No filters available: libavfilter disabled\n");
#endif #endif
return 0; return 0;
} }
@@ -1697,13 +1591,13 @@ int show_layouts(void *optctx, const char *opt, const char *arg)
if (!name) if (!name)
continue; continue;
descr = av_get_channel_description((uint64_t)1 << i); descr = av_get_channel_description((uint64_t)1 << i);
printf("%-14s %s\n", name, descr); printf("%-12s%s\n", name, descr);
} }
printf("\nStandard channel layouts:\n" printf("\nStandard channel layouts:\n"
"NAME DECOMPOSITION\n"); "NAME DECOMPOSITION\n");
for (i = 0; !av_get_standard_channel_layout(i, &layout, &name); i++) { for (i = 0; !av_get_standard_channel_layout(i, &layout, &name); i++) {
if (name) { if (name) {
printf("%-14s ", name); printf("%-12s", name);
for (j = 1; j; j <<= 1) for (j = 1; j; j <<= 1)
if ((layout & j)) if ((layout & j))
printf("%s%s", (layout & (j - 1)) ? "+" : "", av_get_channel_name(j)); printf("%s%s", (layout & (j - 1)) ? "+" : "", av_get_channel_name(j));
@@ -1870,8 +1764,6 @@ int show_help(void *optctx, const char *opt, const char *arg)
av_log_set_callback(log_callback_help); av_log_set_callback(log_callback_help);
topic = av_strdup(arg ? arg : ""); topic = av_strdup(arg ? arg : "");
if (!topic)
return AVERROR(ENOMEM);
par = strchr(topic, '='); par = strchr(topic, '=');
if (par) if (par)
*par++ = 0; *par++ = 0;
@@ -1909,6 +1801,48 @@ int read_yesno(void)
return yesno; return yesno;
} }
int cmdutils_read_file(const char *filename, char **bufptr, size_t *size)
{
int ret;
FILE *f = av_fopen_utf8(filename, "rb");
if (!f) {
av_log(NULL, AV_LOG_ERROR, "Cannot read file '%s': %s\n", filename,
strerror(errno));
return AVERROR(errno);
}
fseek(f, 0, SEEK_END);
*size = ftell(f);
fseek(f, 0, SEEK_SET);
if (*size == (size_t)-1) {
av_log(NULL, AV_LOG_ERROR, "IO error: %s\n", strerror(errno));
fclose(f);
return AVERROR(errno);
}
*bufptr = av_malloc(*size + 1);
if (!*bufptr) {
av_log(NULL, AV_LOG_ERROR, "Could not allocate file buffer\n");
fclose(f);
return AVERROR(ENOMEM);
}
ret = fread(*bufptr, 1, *size, f);
if (ret < *size) {
av_free(*bufptr);
if (ferror(f)) {
av_log(NULL, AV_LOG_ERROR, "Error while reading file '%s': %s\n",
filename, strerror(errno));
ret = AVERROR(errno);
} else
ret = AVERROR_EOF;
} else {
ret = 0;
(*bufptr)[(*size)++] = '\0';
}
fclose(f);
return ret;
}
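
A hypothetical caller of the cmdutils_read_file() helper restored above might look like this; note that the returned size includes the terminating NUL the helper appends (sketch only, function name illustrative).

```c
#include <stdio.h>
#include "cmdutils.h"

static int dump_file(const char *path)
{
    char  *buf  = NULL;
    size_t size = 0;
    int    ret  = cmdutils_read_file(path, &buf, &size);

    if (ret < 0)
        return ret;                     /* AVERROR code, buf not allocated */
    printf("read %zu bytes from %s\n", size, path);
    av_free(buf);                       /* caller owns the buffer */
    return 0;
}
```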
FILE *get_preset_file(char *filename, size_t filename_size, FILE *get_preset_file(char *filename, size_t filename_size,
const char *preset_name, int is_path, const char *preset_name, int is_path,
const char *codec_name) const char *codec_name)
@@ -2004,12 +1938,11 @@ AVDictionary *filter_codec_opts(AVDictionary *opts, enum AVCodecID codec_id,
switch (check_stream_specifier(s, st, p + 1)) { switch (check_stream_specifier(s, st, p + 1)) {
case 1: *p = 0; break; case 1: *p = 0; break;
case 0: continue; case 0: continue;
default: exit_program(1); default: return NULL;
} }
if (av_opt_find(&cc, t->key, NULL, flags, AV_OPT_SEARCH_FAKE_OBJ) || if (av_opt_find(&cc, t->key, NULL, flags, AV_OPT_SEARCH_FAKE_OBJ) ||
!codec || (codec && codec->priv_class &&
(codec->priv_class &&
av_opt_find(&codec->priv_class, t->key, NULL, flags, av_opt_find(&codec->priv_class, t->key, NULL, flags,
AV_OPT_SEARCH_FAKE_OBJ))) AV_OPT_SEARCH_FAKE_OBJ)))
av_dict_set(&ret, t->key, t->value, 0); av_dict_set(&ret, t->key, t->value, 0);
@@ -2032,7 +1965,7 @@ AVDictionary **setup_find_stream_info_opts(AVFormatContext *s,
if (!s->nb_streams) if (!s->nb_streams)
return NULL; return NULL;
opts = av_mallocz_array(s->nb_streams, sizeof(*opts)); opts = av_mallocz(s->nb_streams * sizeof(*opts));
if (!opts) { if (!opts) {
av_log(NULL, AV_LOG_ERROR, av_log(NULL, AV_LOG_ERROR,
"Could not alloc memory for stream options.\n"); "Could not alloc memory for stream options.\n");
@@ -2051,7 +1984,7 @@ void *grow_array(void *array, int elem_size, int *size, int new_size)
exit_program(1); exit_program(1);
} }
if (*size < new_size) { if (*size < new_size) {
uint8_t *tmp = av_realloc_array(array, new_size, elem_size); uint8_t *tmp = av_realloc(array, new_size*elem_size);
if (!tmp) { if (!tmp) {
av_log(NULL, AV_LOG_ERROR, "Could not alloc buffer.\n"); av_log(NULL, AV_LOG_ERROR, "Could not alloc buffer.\n");
exit_program(1); exit_program(1);
@@ -2062,189 +1995,3 @@ void *grow_array(void *array, int elem_size, int *size, int new_size)
} }
return array; return array;
} }
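
Note the switch on the left to av_realloc_array(), which checks the nmemb * size multiplication for overflow inside libavutil instead of trusting the caller. A sketch of the same pattern (illustrative helper, not from the patch):

```c
#include <errno.h>
#include "libavutil/error.h"
#include "libavutil/mem.h"

static int append_int(int **array, int *nb, int value)
{
    int *tmp = av_realloc_array(*array, *nb + 1, sizeof(**array));

    if (!tmp)                   /* NULL also if the multiplication would overflow */
        return AVERROR(ENOMEM);
    tmp[(*nb)++] = value;
    *array = tmp;
    return 0;
}
```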
double get_rotation(AVStream *st)
{
AVDictionaryEntry *rotate_tag = av_dict_get(st->metadata, "rotate", NULL, 0);
uint8_t* displaymatrix = av_stream_get_side_data(st,
AV_PKT_DATA_DISPLAYMATRIX, NULL);
double theta = 0;
if (rotate_tag && *rotate_tag->value && strcmp(rotate_tag->value, "0")) {
char *tail;
theta = av_strtod(rotate_tag->value, &tail);
if (*tail)
theta = 0;
}
if (displaymatrix && !theta)
theta = -av_display_rotation_get((int32_t*) displaymatrix);
theta -= 360*floor(theta/360 + 0.9/360);
if (fabs(theta - 90*round(theta/90)) > 2)
av_log(NULL, AV_LOG_WARNING, "Odd rotation angle.\n"
"If you want to help, upload a sample "
"of this file to ftp://upload.ffmpeg.org/incoming/ "
"and contact the ffmpeg-devel mailing list. (ffmpeg-devel@ffmpeg.org)");
return theta;
}
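
A sketch of how a caller might turn the angle from get_rotation() into an auto-rotation filter chain; the filter names below are an assumption modelled on what ffmpeg/ffplay insert, not part of this diff.

```c
#include <math.h>
#include "cmdutils.h"

static const char *autorotate_filter(AVStream *st)
{
    double theta = get_rotation(st);    /* normalized to [0, 360) above */

    if (fabs(theta -  90) < 1.0) return "transpose=clock";
    if (fabs(theta - 180) < 1.0) return "hflip,vflip";
    if (fabs(theta - 270) < 1.0) return "transpose=cclock";
    return NULL;                        /* nothing to do */
}
```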
#if CONFIG_AVDEVICE
static int print_device_sources(AVInputFormat *fmt, AVDictionary *opts)
{
int ret, i;
AVDeviceInfoList *device_list = NULL;
if (!fmt || !fmt->priv_class || !AV_IS_INPUT_DEVICE(fmt->priv_class->category))
return AVERROR(EINVAL);
printf("Audo-detected sources for %s:\n", fmt->name);
if (!fmt->get_device_list) {
ret = AVERROR(ENOSYS);
printf("Cannot list sources. Not implemented.\n");
goto fail;
}
if ((ret = avdevice_list_input_sources(fmt, NULL, opts, &device_list)) < 0) {
printf("Cannot list sources.\n");
goto fail;
}
for (i = 0; i < device_list->nb_devices; i++) {
printf("%s %s [%s]\n", device_list->default_device == i ? "*" : " ",
device_list->devices[i]->device_name, device_list->devices[i]->device_description);
}
fail:
avdevice_free_list_devices(&device_list);
return ret;
}
static int print_device_sinks(AVOutputFormat *fmt, AVDictionary *opts)
{
int ret, i;
AVDeviceInfoList *device_list = NULL;
if (!fmt || !fmt->priv_class || !AV_IS_OUTPUT_DEVICE(fmt->priv_class->category))
return AVERROR(EINVAL);
printf("Audo-detected sinks for %s:\n", fmt->name);
if (!fmt->get_device_list) {
ret = AVERROR(ENOSYS);
printf("Cannot list sinks. Not implemented.\n");
goto fail;
}
if ((ret = avdevice_list_output_sinks(fmt, NULL, opts, &device_list)) < 0) {
printf("Cannot list sinks.\n");
goto fail;
}
for (i = 0; i < device_list->nb_devices; i++) {
printf("%s %s [%s]\n", device_list->default_device == i ? "*" : " ",
device_list->devices[i]->device_name, device_list->devices[i]->device_description);
}
fail:
avdevice_free_list_devices(&device_list);
return ret;
}
static int show_sinks_sources_parse_arg(const char *arg, char **dev, AVDictionary **opts)
{
int ret;
if (arg) {
char *opts_str = NULL;
av_assert0(dev && opts);
*dev = av_strdup(arg);
if (!*dev)
return AVERROR(ENOMEM);
if ((opts_str = strchr(*dev, ','))) {
*(opts_str++) = '\0';
if (opts_str[0] && ((ret = av_dict_parse_string(opts, opts_str, "=", ":", 0)) < 0)) {
av_freep(dev);
return ret;
}
}
} else
printf("\nDevice name is not provided.\n"
"You can pass devicename[,opt1=val1[,opt2=val2...]] as an argument.\n\n");
return 0;
}
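
show_sinks_sources_parse_arg() splits "devicename[,opt1=val1[:opt2=val2]]" with av_dict_parse_string(); the same parsing works standalone (sketch, illustrative function name).

```c
#include <errno.h>
#include <string.h>
#include "libavutil/dict.h"
#include "libavutil/error.h"
#include "libavutil/mem.h"

static int split_device_arg(const char *arg, char **dev, AVDictionary **opts)
{
    char *sep;

    *dev = av_strdup(arg);
    if (!*dev)
        return AVERROR(ENOMEM);
    if ((sep = strchr(*dev, ','))) {
        *sep++ = '\0';
        if (*sep) {                     /* e.g. "alsa,channels=2:sample_rate=48000" */
            int ret = av_dict_parse_string(opts, sep, "=", ":", 0);
            if (ret < 0)
                av_freep(dev);
            return ret;
        }
    }
    return 0;
}
```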
int show_sources(void *optctx, const char *opt, const char *arg)
{
AVInputFormat *fmt = NULL;
char *dev = NULL;
AVDictionary *opts = NULL;
int ret = 0;
int error_level = av_log_get_level();
av_log_set_level(AV_LOG_ERROR);
if ((ret = show_sinks_sources_parse_arg(arg, &dev, &opts)) < 0)
goto fail;
do {
fmt = av_input_audio_device_next(fmt);
if (fmt) {
if (!strcmp(fmt->name, "lavfi"))
continue; //it's pointless to probe lavfi
if (dev && !av_match_name(dev, fmt->name))
continue;
print_device_sources(fmt, opts);
}
} while (fmt);
do {
fmt = av_input_video_device_next(fmt);
if (fmt) {
if (dev && !av_match_name(dev, fmt->name))
continue;
print_device_sources(fmt, opts);
}
} while (fmt);
fail:
av_dict_free(&opts);
av_free(dev);
av_log_set_level(error_level);
return ret;
}
int show_sinks(void *optctx, const char *opt, const char *arg)
{
AVOutputFormat *fmt = NULL;
char *dev = NULL;
AVDictionary *opts = NULL;
int ret = 0;
int error_level = av_log_get_level();
av_log_set_level(AV_LOG_ERROR);
if ((ret = show_sinks_sources_parse_arg(arg, &dev, &opts)) < 0)
goto fail;
do {
fmt = av_output_audio_device_next(fmt);
if (fmt) {
if (dev && !av_match_name(dev, fmt->name))
continue;
print_device_sinks(fmt, opts);
}
} while (fmt);
do {
fmt = av_output_video_device_next(fmt);
if (fmt) {
if (dev && !av_match_name(dev, fmt->name))
continue;
print_device_sinks(fmt, opts);
}
} while (fmt);
fail:
av_dict_free(&opts);
av_free(dev);
av_log_set_level(error_level);
return ret;
}
#endif
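
The wrappers above iterate every registered input/output device; querying a single one directly through libavdevice looks roughly like this (the device name passed in is just an example and depends on the build).

```c
#include <stdio.h>
#include "libavdevice/avdevice.h"

static void list_sources_of(const char *dev_name)
{
    AVInputFormat *fmt;
    AVDeviceInfoList *list = NULL;
    int i;

    avdevice_register_all();
    fmt = av_find_input_format(dev_name);
    if (!fmt || avdevice_list_input_sources(fmt, NULL, NULL, &list) < 0)
        return;
    for (i = 0; i < list->nb_devices; i++)
        printf("%s %s [%s]\n", list->default_device == i ? "*" : " ",
               list->devices[i]->device_name,
               list->devices[i]->device_description);
    avdevice_free_list_devices(&list);
}
```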

@@ -19,18 +19,17 @@
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/ */
#ifndef CMDUTILS_H #ifndef FFMPEG_CMDUTILS_H
#define CMDUTILS_H #define FFMPEG_CMDUTILS_H
#include <stdint.h> #include <stdint.h>
#include "config.h"
#include "libavcodec/avcodec.h" #include "libavcodec/avcodec.h"
#include "libavfilter/avfilter.h" #include "libavfilter/avfilter.h"
#include "libavformat/avformat.h" #include "libavformat/avformat.h"
#include "libswscale/swscale.h" #include "libswscale/swscale.h"
#ifdef _WIN32 #ifdef __MINGW32__
#undef main /* We don't want SDL to override our main() */ #undef main /* We don't want SDL to override our main() */
#endif #endif
@@ -46,7 +45,7 @@ extern const int program_birth_year;
extern AVCodecContext *avcodec_opts[AVMEDIA_TYPE_NB]; extern AVCodecContext *avcodec_opts[AVMEDIA_TYPE_NB];
extern AVFormatContext *avformat_opts; extern AVFormatContext *avformat_opts;
extern AVDictionary *sws_dict; extern struct SwsContext *sws_opts;
extern AVDictionary *swr_opts; extern AVDictionary *swr_opts;
extern AVDictionary *format_opts, *codec_opts, *resample_opts; extern AVDictionary *format_opts, *codec_opts, *resample_opts;
extern int hide_banner; extern int hide_banner;
@@ -59,7 +58,7 @@ void register_exit(void (*cb)(int ret));
/** /**
* Wraps exit with a program-specific cleanup routine. * Wraps exit with a program-specific cleanup routine.
*/ */
void exit_program(int ret) av_noreturn; void exit_program(int ret);
/** /**
* Initialize the cmdutils option system, in particular * Initialize the cmdutils option system, in particular
@@ -277,7 +276,7 @@ typedef struct OptionGroup {
AVDictionary *codec_opts; AVDictionary *codec_opts;
AVDictionary *format_opts; AVDictionary *format_opts;
AVDictionary *resample_opts; AVDictionary *resample_opts;
AVDictionary *sws_dict; struct SwsContext *sws_opts;
AVDictionary *swr_opts; AVDictionary *swr_opts;
} OptionGroup; } OptionGroup;
@@ -431,31 +430,10 @@ int show_license(void *optctx, const char *opt, const char *arg);
/** /**
* Print a listing containing all the formats supported by the * Print a listing containing all the formats supported by the
* program (including devices).
* This option processing function does not utilize the arguments.
*/
int show_formats(void *optctx, const char *opt, const char *arg);
/**
* Print a listing containing all the devices supported by the
* program. * program.
* This option processing function does not utilize the arguments. * This option processing function does not utilize the arguments.
*/ */
int show_devices(void *optctx, const char *opt, const char *arg); int show_formats(void *optctx, const char *opt, const char *arg);
#if CONFIG_AVDEVICE
/**
* Print a listing containing autodetected sinks of the output device.
* Device name with options may be passed as an argument to limit results.
*/
int show_sinks(void *optctx, const char *opt, const char *arg);
/**
* Print a listing containing autodetected sources of the input device.
* Device name with options may be passed as an argument to limit results.
*/
int show_sources(void *optctx, const char *opt, const char *arg);
#endif
/** /**
* Print a listing containing all the codecs supported by the * Print a listing containing all the codecs supported by the
@@ -529,6 +507,18 @@ int show_colors(void *optctx, const char *opt, const char *arg);
*/ */
int read_yesno(void); int read_yesno(void);
/**
* Read the file with name filename, and put its content in a newly
* allocated 0-terminated buffer.
*
* @param filename file to read from
* @param bufptr location where pointer to buffer is returned
* @param size location where size of buffer is returned
* @return >= 0 in case of success, a negative value corresponding to an
* AVERROR error code in case of failure.
*/
int cmdutils_read_file(const char *filename, char **bufptr, size_t *size);
/** /**
* Get a file corresponding to a preset file. * Get a file corresponding to a preset file.
* *
@@ -585,6 +575,4 @@ void *grow_array(void *array, int elem_size, int *size, int new_size);
char name[128];\ char name[128];\
av_get_channel_layout_string(name, sizeof(name), 0, ch_layout); av_get_channel_layout_string(name, sizeof(name), 0, ch_layout);
double get_rotation(AVStream *st);
#endif /* CMDUTILS_H */ #endif /* CMDUTILS_H */

@@ -6,7 +6,6 @@
{ "version" , OPT_EXIT, {.func_arg = show_version}, "show version" }, { "version" , OPT_EXIT, {.func_arg = show_version}, "show version" },
{ "buildconf" , OPT_EXIT, {.func_arg = show_buildconf}, "show build configuration" }, { "buildconf" , OPT_EXIT, {.func_arg = show_buildconf}, "show build configuration" },
{ "formats" , OPT_EXIT, {.func_arg = show_formats }, "show available formats" }, { "formats" , OPT_EXIT, {.func_arg = show_formats }, "show available formats" },
{ "devices" , OPT_EXIT, {.func_arg = show_devices }, "show available devices" },
{ "codecs" , OPT_EXIT, {.func_arg = show_codecs }, "show available codecs" }, { "codecs" , OPT_EXIT, {.func_arg = show_codecs }, "show available codecs" },
{ "decoders" , OPT_EXIT, {.func_arg = show_decoders }, "show available decoders" }, { "decoders" , OPT_EXIT, {.func_arg = show_decoders }, "show available decoders" },
{ "encoders" , OPT_EXIT, {.func_arg = show_encoders }, "show available encoders" }, { "encoders" , OPT_EXIT, {.func_arg = show_encoders }, "show available encoders" },
@@ -27,9 +26,3 @@
{ "opencl_bench", OPT_EXIT, {.func_arg = opt_opencl_bench}, "run benchmark on all OpenCL devices and show results" }, { "opencl_bench", OPT_EXIT, {.func_arg = opt_opencl_bench}, "run benchmark on all OpenCL devices and show results" },
{ "opencl_options", HAS_ARG, {.func_arg = opt_opencl}, "set OpenCL environment options" }, { "opencl_options", HAS_ARG, {.func_arg = opt_opencl}, "set OpenCL environment options" },
#endif #endif
#if CONFIG_AVDEVICE
{ "sources" , OPT_EXIT | HAS_ARG, { .func_arg = show_sources },
"list sources of the input device", "device" },
{ "sinks" , OPT_EXIT | HAS_ARG, { .func_arg = show_sinks },
"list sinks of the output device", "device" },
#endif

@@ -22,7 +22,6 @@
#include "libavutil/time.h" #include "libavutil/time.h"
#include "libavutil/log.h" #include "libavutil/log.h"
#include "libavutil/opencl.h" #include "libavutil/opencl.h"
#include "libavutil/avstring.h"
#include "cmdutils.h" #include "cmdutils.h"
typedef struct { typedef struct {
@@ -182,12 +181,12 @@ static int64_t run_opencl_bench(AVOpenCLExternalEnv *ext_opencl_env)
OCLCHECK(clSetKernelArg, kernel, arg++, sizeof(cl_int), &width); OCLCHECK(clSetKernelArg, kernel, arg++, sizeof(cl_int), &width);
OCLCHECK(clSetKernelArg, kernel, arg++, sizeof(cl_int), &height); OCLCHECK(clSetKernelArg, kernel, arg++, sizeof(cl_int), &height);
start = av_gettime_relative(); start = av_gettime();
for (i = 0; i < OPENCL_NB_ITER; i++) for (i = 0; i < OPENCL_NB_ITER; i++)
OCLCHECK(clEnqueueNDRangeKernel, ext_opencl_env->command_queue, kernel, 2, NULL, OCLCHECK(clEnqueueNDRangeKernel, ext_opencl_env->command_queue, kernel, 2, NULL,
global_work_size_2d, local_work_size_2d, 0, NULL, NULL); global_work_size_2d, local_work_size_2d, 0, NULL, NULL);
clFinish(ext_opencl_env->command_queue); clFinish(ext_opencl_env->command_queue);
ret = (av_gettime_relative() - start)/OPENCL_NB_ITER; ret = (av_gettime() - start)/OPENCL_NB_ITER;
end: end:
if (kernel) if (kernel)
clReleaseKernel(kernel); clReleaseKernel(kernel);
@@ -206,9 +205,7 @@ end:
static int compare_ocl_device_desc(const void *a, const void *b) static int compare_ocl_device_desc(const void *a, const void *b)
{ {
const OpenCLDeviceBenchmark* va = (const OpenCLDeviceBenchmark*)a; return ((OpenCLDeviceBenchmark*)a)->runtime - ((OpenCLDeviceBenchmark*)b)->runtime;
const OpenCLDeviceBenchmark* vb = (const OpenCLDeviceBenchmark*)b;
return FFDIFFSIGN(va->runtime , vb->runtime);
} }
int opt_opencl_bench(void *optctx, const char *opt, const char *arg) int opt_opencl_bench(void *optctx, const char *opt, const char *arg)
@@ -227,7 +224,7 @@ int opt_opencl_bench(void *optctx, const char *opt, const char *arg)
av_log(NULL, AV_LOG_ERROR, "No OpenCL device detected!\n"); av_log(NULL, AV_LOG_ERROR, "No OpenCL device detected!\n");
return AVERROR(EINVAL); return AVERROR(EINVAL);
} }
if (!(devices = av_malloc_array(nb_devices, sizeof(OpenCLDeviceBenchmark)))) { if (!(devices = av_malloc(sizeof(OpenCLDeviceBenchmark) * nb_devices))) {
av_log(NULL, AV_LOG_ERROR, "Could not allocate buffer\n"); av_log(NULL, AV_LOG_ERROR, "Could not allocate buffer\n");
return AVERROR(ENOMEM); return AVERROR(ENOMEM);
} }
@@ -241,8 +238,7 @@ int opt_opencl_bench(void *optctx, const char *opt, const char *arg)
devices[count].platform_idx = i; devices[count].platform_idx = i;
devices[count].device_idx = j; devices[count].device_idx = j;
devices[count].runtime = score; devices[count].runtime = score;
av_strlcpy(devices[count].device_name, device_node->device_name, strcpy(devices[count].device_name, device_node->device_name);
sizeof(devices[count].device_name));
count++; count++;
} }
} }

@@ -5,20 +5,12 @@
# first so "all" becomes default target # first so "all" becomes default target
all: all-yes all: all-yes
DEFAULT_YASMD=.dbg
ifeq ($(DBG),1)
YASMD=$(DEFAULT_YASMD)
else
YASMD=
endif
ifndef SUBDIR ifndef SUBDIR
ifndef V ifndef V
Q = @ Q = @
ECHO = printf "$(1)\t%s\n" $(2) ECHO = printf "$(1)\t%s\n" $(2)
BRIEF = CC CXX OBJCC HOSTCC HOSTLD AS YASM AR LD STRIP CP WINDRES BRIEF = CC CXX HOSTCC HOSTLD AS YASM AR LD STRIP CP WINDRES
SILENT = DEPCC DEPHOSTCC DEPAS DEPYASM RANLIB RM SILENT = DEPCC DEPHOSTCC DEPAS DEPYASM RANLIB RM
MSG = $@ MSG = $@
@@ -32,14 +24,12 @@ endif
ALLFFLIBS = avcodec avdevice avfilter avformat avresample avutil postproc swscale swresample ALLFFLIBS = avcodec avdevice avfilter avformat avresample avutil postproc swscale swresample
# NASM requires -I path terminated with / # NASM requires -I path terminated with /
IFLAGS := -I. -I$(SRC_LINK)/ IFLAGS := -I. -I$(SRC_PATH)/
CPPFLAGS := $(IFLAGS) $(CPPFLAGS) CPPFLAGS := $(IFLAGS) $(CPPFLAGS)
CFLAGS += $(ECFLAGS) CFLAGS += $(ECFLAGS)
CCFLAGS = $(CPPFLAGS) $(CFLAGS) CCFLAGS = $(CPPFLAGS) $(CFLAGS)
OBJCFLAGS += $(EOBJCFLAGS)
OBJCCFLAGS = $(CPPFLAGS) $(CFLAGS) $(OBJCFLAGS)
ASFLAGS := $(CPPFLAGS) $(ASFLAGS) ASFLAGS := $(CPPFLAGS) $(ASFLAGS)
CXXFLAGS := $(CPPFLAGS) $(CFLAGS) $(CXXFLAGS) CXXFLAGS += $(CPPFLAGS) $(CFLAGS)
YASMFLAGS += $(IFLAGS:%=%/) -Pconfig.asm YASMFLAGS += $(IFLAGS:%=%/) -Pconfig.asm
HOSTCCFLAGS = $(IFLAGS) $(HOSTCPPFLAGS) $(HOSTCFLAGS) HOSTCCFLAGS = $(IFLAGS) $(HOSTCPPFLAGS) $(HOSTCFLAGS)
@@ -47,13 +37,12 @@ LDFLAGS := $(ALLFFLIBS:%=$(LD_PATH)lib%) $(LDFLAGS)
define COMPILE define COMPILE
$(call $(1)DEP,$(1)) $(call $(1)DEP,$(1))
$($(1)) $($(1)FLAGS) $($(1)_DEPFLAGS) $($(1)_C) $($(1)_O) $(patsubst $(SRC_PATH)/%,$(SRC_LINK)/%,$<) $($(1)) $($(1)FLAGS) $($(1)_DEPFLAGS) $($(1)_C) $($(1)_O) $<
endef endef
COMPILE_C = $(call COMPILE,CC) COMPILE_C = $(call COMPILE,CC)
COMPILE_CXX = $(call COMPILE,CXX) COMPILE_CXX = $(call COMPILE,CXX)
COMPILE_S = $(call COMPILE,AS) COMPILE_S = $(call COMPILE,AS)
COMPILE_M = $(call COMPILE,OBJCC)
COMPILE_HOSTC = $(call COMPILE,HOSTCC) COMPILE_HOSTC = $(call COMPILE,HOSTCC)
%.o: %.c %.o: %.c
@@ -62,11 +51,8 @@ COMPILE_HOSTC = $(call COMPILE,HOSTCC)
%.o: %.cpp %.o: %.cpp
$(COMPILE_CXX) $(COMPILE_CXX)
%.o: %.m
$(COMPILE_M)
%.s: %.c %.s: %.c
$(CC) $(CCFLAGS) -S -o $@ $< $(CC) $(CPPFLAGS) $(CFLAGS) -S -o $@ $<
%.o: %.S %.o: %.S
$(COMPILE_S) $(COMPILE_S)
@@ -84,9 +70,7 @@ COMPILE_HOSTC = $(call COMPILE,HOSTCC)
$(Q)echo '#include "$*.h"' >$@ $(Q)echo '#include "$*.h"' >$@
%.ver: %.v %.ver: %.v
$(Q)sed 's/$$MAJOR/$($(basename $(@F))_VERSION_MAJOR)/' $^ | sed -e 's/:/:\ $(Q)sed 's/$$MAJOR/$($(basename $(@F))_VERSION_MAJOR)/' $^ > $@
/' -e 's/; /;\
/g' > $@
%.c %.h: TAG = GEN %.c %.h: TAG = GEN
@@ -106,7 +90,7 @@ include $(SRC_PATH)/arch.mak
OBJS += $(OBJS-yes) OBJS += $(OBJS-yes)
SLIBOBJS += $(SLIBOBJS-yes) SLIBOBJS += $(SLIBOBJS-yes)
FFLIBS := $($(NAME)_FFLIBS) $(FFLIBS-yes) $(FFLIBS) FFLIBS := $(FFLIBS-yes) $(FFLIBS)
TESTPROGS += $(TESTPROGS-yes) TESTPROGS += $(TESTPROGS-yes)
LDLIBS = $(FFLIBS:%=%$(BUILDSUF)) LDLIBS = $(FFLIBS:%=%$(BUILDSUF))
@@ -114,8 +98,8 @@ FFEXTRALIBS := $(LDLIBS:%=$(LD_LIB)) $(EXTRALIBS)
OBJS := $(sort $(OBJS:%=$(SUBDIR)%)) OBJS := $(sort $(OBJS:%=$(SUBDIR)%))
SLIBOBJS := $(sort $(SLIBOBJS:%=$(SUBDIR)%)) SLIBOBJS := $(sort $(SLIBOBJS:%=$(SUBDIR)%))
TESTOBJS := $(TESTOBJS:%=$(SUBDIR)tests/%) $(TESTPROGS:%=$(SUBDIR)tests/%.o) TESTOBJS := $(TESTOBJS:%=$(SUBDIR)%) $(TESTPROGS:%=$(SUBDIR)%-test.o)
TESTPROGS := $(TESTPROGS:%=$(SUBDIR)tests/%$(EXESUF)) TESTPROGS := $(TESTPROGS:%=$(SUBDIR)%-test$(EXESUF))
HOSTOBJS := $(HOSTPROGS:%=$(SUBDIR)%.o) HOSTOBJS := $(HOSTPROGS:%=$(SUBDIR)%.o)
HOSTPROGS := $(HOSTPROGS:%=$(SUBDIR)%$(HOSTEXESUF)) HOSTPROGS := $(HOSTPROGS:%=$(SUBDIR)%$(HOSTEXESUF))
TOOLS += $(TOOLS-yes) TOOLS += $(TOOLS-yes)
@@ -123,9 +107,8 @@ TOOLOBJS := $(TOOLS:%=tools/%.o)
TOOLS := $(TOOLS:%=tools/%$(EXESUF)) TOOLS := $(TOOLS:%=tools/%$(EXESUF))
HEADERS += $(HEADERS-yes) HEADERS += $(HEADERS-yes)
PATH_LIBNAME = $(foreach NAME,$(1),lib$(NAME)/$($(2)LIBNAME)) PATH_LIBNAME = $(foreach NAME,$(1),lib$(NAME)/$($(CONFIG_SHARED:yes=S)LIBNAME))
DEP_LIBS := $(foreach lib,$(FFLIBS),$(call PATH_LIBNAME,$(lib),$(CONFIG_SHARED:yes=S))) DEP_LIBS := $(foreach lib,$(FFLIBS),$(call PATH_LIBNAME,$(lib)))
STATIC_DEP_LIBS := $(foreach lib,$(FFLIBS),$(call PATH_LIBNAME,$(lib)))
SRC_DIR := $(SRC_PATH)/lib$(NAME) SRC_DIR := $(SRC_PATH)/lib$(NAME)
ALLHEADERS := $(subst $(SRC_DIR)/,$(SUBDIR),$(wildcard $(SRC_DIR)/*.h $(SRC_DIR)/$(ARCH)/*.h)) ALLHEADERS := $(subst $(SRC_DIR)/,$(SUBDIR),$(wildcard $(SRC_DIR)/*.h $(SRC_DIR)/$(ARCH)/*.h))
@@ -152,15 +135,17 @@ $(TOOLOBJS): | tools
OBJDIRS := $(OBJDIRS) $(dir $(OBJS) $(HOBJS) $(HOSTOBJS) $(SLIBOBJS) $(TESTOBJS)) OBJDIRS := $(OBJDIRS) $(dir $(OBJS) $(HOBJS) $(HOSTOBJS) $(SLIBOBJS) $(TESTOBJS))
CLEANSUFFIXES = *.d *.o *~ *.h.c *.map *.ver *.ver-sol2 *.ho *.gcno *.gcda *$(DEFAULT_YASMD).asm CLEANSUFFIXES = *.d *.o *~ *.h.c *.map *.ver *.ho *.gcno *.gcda
DISTCLEANSUFFIXES = *.pc DISTCLEANSUFFIXES = *.pc
LIBSUFFIXES = *.a *.lib *.so *.so.* *.dylib *.dll *.def *.dll.a LIBSUFFIXES = *.a *.lib *.so *.so.* *.dylib *.dll *.def *.dll.a
define RULES define RULES
clean:: clean::
$(RM) $(HOSTPROGS) $(TESTPROGS) $(TOOLS) $(RM) $(OBJS) $(OBJS:.o=.d)
$(RM) $(HOSTPROGS)
$(RM) $(TOOLS)
endef endef
$(eval $(RULES)) $(eval $(RULES))
-include $(wildcard $(OBJS:.o=.d) $(HOSTOBJS:.o=.d) $(TESTOBJS:.o=.d) $(HOBJS:.o=.d) $(SLIBOBJS:.o=.d)) $(OBJS:.o=$(DEFAULT_YASMD).d) -include $(wildcard $(OBJS:.o=.d) $(HOSTOBJS:.o=.d) $(TESTOBJS:.o=.d) $(HOBJS:.o=.d) $(SLIBOBJS:.o=.d))

@@ -19,8 +19,8 @@
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/ */
#ifndef COMPAT_AIX_MATH_H #ifndef FFMPEG_COMPAT_AIX_MATH_H
#define COMPAT_AIX_MATH_H #define FFMPEG_COMPAT_AIX_MATH_H
#define class class_in_math_h_causes_problems #define class class_in_math_h_causes_problems
@@ -28,4 +28,4 @@
#undef class #undef class
#endif /* COMPAT_AIX_MATH_H */ #endif /* FFMPEG_COMPAT_AIX_MATH_H */

@@ -13,8 +13,7 @@
// //
// You should have received a copy of the GNU General Public License // You should have received a copy of the GNU General Public License
// along with this program; if not, write to the Free Software // along with this program; if not, write to the Free Software
// Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, // Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA, or visit
// MA 02110-1301 USA, or visit
// http://www.gnu.org/copyleft/gpl.html . // http://www.gnu.org/copyleft/gpl.html .
// //
// As a special exception, I give you permission to link to the // As a special exception, I give you permission to link to the
@@ -38,9 +37,40 @@
#ifndef __AVISYNTH_C__ #ifndef __AVISYNTH_C__
#define __AVISYNTH_C__ #define __AVISYNTH_C__
#include "avs/config.h" #ifdef __cplusplus
#include "avs/capi.h" # define EXTERN_C extern "C"
#include "avs/types.h" #else
# define EXTERN_C
#endif
#define AVSC_USE_STDCALL 1
#ifndef AVSC_USE_STDCALL
# define AVSC_CC __cdecl
#else
# define AVSC_CC __stdcall
#endif
#define AVSC_INLINE static __inline
#ifdef AVISYNTH_C_EXPORTS
# define AVSC_EXPORT EXTERN_C
# define AVSC_API(ret, name) EXTERN_C __declspec(dllexport) ret AVSC_CC name
#else
# define AVSC_EXPORT EXTERN_C __declspec(dllexport)
# ifndef AVSC_NO_DECLSPEC
# define AVSC_API(ret, name) EXTERN_C __declspec(dllimport) ret AVSC_CC name
# else
# define AVSC_API(ret, name) typedef ret (AVSC_CC *name##_func)
# endif
#endif
typedef unsigned char BYTE;
#ifdef __GNUC__
typedef long long int INT64;
#else
typedef __int64 INT64;
#endif
///////////////////////////////////////////////////////////////////// /////////////////////////////////////////////////////////////////////
@@ -48,8 +78,8 @@
// Constants // Constants
// //
#ifndef __AVISYNTH_6_H__ #ifndef __AVISYNTH_H__
enum { AVISYNTH_INTERFACE_VERSION = 6 }; enum { AVISYNTH_INTERFACE_VERSION = 4 };
#endif #endif
enum {AVS_SAMPLE_INT8 = 1<<0, enum {AVS_SAMPLE_INT8 = 1<<0,
@@ -81,8 +111,8 @@ enum {AVS_CS_BGR = 1<<28,
AVS_CS_PLANAR = 1<<31, AVS_CS_PLANAR = 1<<31,
AVS_CS_SHIFT_SUB_WIDTH = 0, AVS_CS_SHIFT_SUB_WIDTH = 0,
AVS_CS_SHIFT_SUB_HEIGHT = 8, AVS_CS_SHIFT_SUB_HEIGHT = 1 << 3,
AVS_CS_SHIFT_SAMPLE_BITS = 16, AVS_CS_SHIFT_SAMPLE_BITS = 1 << 4,
AVS_CS_SUB_WIDTH_MASK = 7 << AVS_CS_SHIFT_SUB_WIDTH, AVS_CS_SUB_WIDTH_MASK = 7 << AVS_CS_SHIFT_SUB_WIDTH,
AVS_CS_SUB_WIDTH_1 = 3 << AVS_CS_SHIFT_SUB_WIDTH, // YV24 AVS_CS_SUB_WIDTH_1 = 3 << AVS_CS_SHIFT_SUB_WIDTH, // YV24
@@ -149,66 +179,15 @@ enum { //SUBTYPES
AVS_FILTER_OUTPUT_TYPE_DIFFERENT=4}; AVS_FILTER_OUTPUT_TYPE_DIFFERENT=4};
enum { enum {
// New 2.6 explicitly defined cache hints. AVS_CACHE_NOTHING=0,
AVS_CACHE_NOTHING=10, // Do not cache video. AVS_CACHE_RANGE=1,
AVS_CACHE_WINDOW=11, // Hard protect upto X frames within a range of X from the current frame N. AVS_CACHE_ALL=2,
AVS_CACHE_GENERIC=12, // LRU cache upto X frames. AVS_CACHE_AUDIO=3,
AVS_CACHE_FORCE_GENERIC=13, // LRU cache upto X frames, override any previous CACHE_WINDOW. AVS_CACHE_AUDIO_NONE=4,
AVS_CACHE_AUDIO_AUTO=5
AVS_CACHE_GET_POLICY=30, // Get the current policy.
AVS_CACHE_GET_WINDOW=31, // Get the current window h_span.
AVS_CACHE_GET_RANGE=32, // Get the current generic frame range.
AVS_CACHE_AUDIO=50, // Explicitly do cache audio, X byte cache.
AVS_CACHE_AUDIO_NOTHING=51, // Explicitly do not cache audio.
AVS_CACHE_AUDIO_NONE=52, // Audio cache off (auto mode), X byte intial cache.
AVS_CACHE_AUDIO_AUTO=53, // Audio cache on (auto mode), X byte intial cache.
AVS_CACHE_GET_AUDIO_POLICY=70, // Get the current audio policy.
AVS_CACHE_GET_AUDIO_SIZE=71, // Get the current audio cache size.
AVS_CACHE_PREFETCH_FRAME=100, // Queue request to prefetch frame N.
AVS_CACHE_PREFETCH_GO=101, // Action video prefetches.
AVS_CACHE_PREFETCH_AUDIO_BEGIN=120, // Begin queue request transaction to prefetch audio (take critical section).
AVS_CACHE_PREFETCH_AUDIO_STARTLO=121, // Set low 32 bits of start.
AVS_CACHE_PREFETCH_AUDIO_STARTHI=122, // Set high 32 bits of start.
AVS_CACHE_PREFETCH_AUDIO_COUNT=123, // Set low 32 bits of length.
AVS_CACHE_PREFETCH_AUDIO_COMMIT=124, // Enqueue request transaction to prefetch audio (release critical section).
AVS_CACHE_PREFETCH_AUDIO_GO=125, // Action audio prefetches.
AVS_CACHE_GETCHILD_CACHE_MODE=200, // Cache ask Child for desired video cache mode.
AVS_CACHE_GETCHILD_CACHE_SIZE=201, // Cache ask Child for desired video cache size.
AVS_CACHE_GETCHILD_AUDIO_MODE=202, // Cache ask Child for desired audio cache mode.
AVS_CACHE_GETCHILD_AUDIO_SIZE=203, // Cache ask Child for desired audio cache size.
AVS_CACHE_GETCHILD_COST=220, // Cache ask Child for estimated processing cost.
AVS_CACHE_COST_ZERO=221, // Child response of zero cost (ptr arithmetic only).
AVS_CACHE_COST_UNIT=222, // Child response of unit cost (less than or equal 1 full frame blit).
AVS_CACHE_COST_LOW=223, // Child response of light cost. (Fast)
AVS_CACHE_COST_MED=224, // Child response of medium cost. (Real time)
AVS_CACHE_COST_HI=225, // Child response of heavy cost. (Slow)
AVS_CACHE_GETCHILD_THREAD_MODE=240, // Cache ask Child for thread safetyness.
AVS_CACHE_THREAD_UNSAFE=241, // Only 1 thread allowed for all instances. 2.5 filters default!
AVS_CACHE_THREAD_CLASS=242, // Only 1 thread allowed for each instance. 2.6 filters default!
AVS_CACHE_THREAD_SAFE=243, // Allow all threads in any instance.
AVS_CACHE_THREAD_OWN=244, // Safe but limit to 1 thread, internally threaded.
AVS_CACHE_GETCHILD_ACCESS_COST=260, // Cache ask Child for preferred access pattern.
AVS_CACHE_ACCESS_RAND=261, // Filter is access order agnostic.
AVS_CACHE_ACCESS_SEQ0=262, // Filter prefers sequential access (low cost)
AVS_CACHE_ACCESS_SEQ1=263, // Filter needs sequential access (high cost)
}; };
#ifdef BUILDING_AVSCORE #define AVS_FRAME_ALIGN 16
struct AVS_ScriptEnvironment {
IScriptEnvironment * env;
const char * error;
AVS_ScriptEnvironment(IScriptEnvironment * e = 0)
: env(e), error(0) {}
};
#endif
typedef struct AVS_Clip AVS_Clip; typedef struct AVS_Clip AVS_Clip;
typedef struct AVS_ScriptEnvironment AVS_ScriptEnvironment; typedef struct AVS_ScriptEnvironment AVS_ScriptEnvironment;
@@ -258,23 +237,29 @@ AVSC_INLINE int avs_is_yuv(const AVS_VideoInfo * p)
AVSC_INLINE int avs_is_yuy2(const AVS_VideoInfo * p) AVSC_INLINE int avs_is_yuy2(const AVS_VideoInfo * p)
{ return (p->pixel_type & AVS_CS_YUY2) == AVS_CS_YUY2; } { return (p->pixel_type & AVS_CS_YUY2) == AVS_CS_YUY2; }
AVSC_API(int, avs_is_yv24)(const AVS_VideoInfo * p); AVSC_INLINE int avs_is_yv24(const AVS_VideoInfo * p)
{ return (p->pixel_type & AVS_CS_PLANAR_MASK) == (AVS_CS_YV24 & AVS_CS_PLANAR_FILTER); }
AVSC_API(int, avs_is_yv16)(const AVS_VideoInfo * p); AVSC_INLINE int avs_is_yv16(const AVS_VideoInfo * p)
{ return (p->pixel_type & AVS_CS_PLANAR_MASK) == (AVS_CS_YV16 & AVS_CS_PLANAR_FILTER); }
AVSC_API(int, avs_is_yv12)(const AVS_VideoInfo * p) ; AVSC_INLINE int avs_is_yv12(const AVS_VideoInfo * p)
{ return (p->pixel_type & AVS_CS_PLANAR_MASK) == (AVS_CS_YV12 & AVS_CS_PLANAR_FILTER); }
AVSC_API(int, avs_is_yv411)(const AVS_VideoInfo * p); AVSC_INLINE int avs_is_yv411(const AVS_VideoInfo * p)
{ return (p->pixel_type & AVS_CS_PLANAR_MASK) == (AVS_CS_YV411 & AVS_CS_PLANAR_FILTER); }
AVSC_API(int, avs_is_y8)(const AVS_VideoInfo * p); AVSC_INLINE int avs_is_y8(const AVS_VideoInfo * p)
{ return (p->pixel_type & AVS_CS_PLANAR_MASK) == (AVS_CS_Y8 & AVS_CS_PLANAR_FILTER); }
AVSC_INLINE int avs_is_property(const AVS_VideoInfo * p, int property) AVSC_INLINE int avs_is_property(const AVS_VideoInfo * p, int property)
{ return ((p->image_type & property)==property ); } { return ((p->pixel_type & property)==property ); }
AVSC_INLINE int avs_is_planar(const AVS_VideoInfo * p) AVSC_INLINE int avs_is_planar(const AVS_VideoInfo * p)
{ return !!(p->pixel_type & AVS_CS_PLANAR); } { return !!(p->pixel_type & AVS_CS_PLANAR); }
AVSC_API(int, avs_is_color_space)(const AVS_VideoInfo * p, int c_space); AVSC_INLINE int avs_is_color_space(const AVS_VideoInfo * p, int c_space)
{ return avs_is_planar(p) ? ((p->pixel_type & AVS_CS_PLANAR_MASK) == (c_space & AVS_CS_PLANAR_FILTER)) : ((p->pixel_type & c_space) == c_space); }
AVSC_INLINE int avs_is_field_based(const AVS_VideoInfo * p) AVSC_INLINE int avs_is_field_based(const AVS_VideoInfo * p)
{ return !!(p->image_type & AVS_IT_FIELDBASED); } { return !!(p->image_type & AVS_IT_FIELDBASED); }
@@ -288,18 +273,25 @@ AVSC_INLINE int avs_is_bff(const AVS_VideoInfo * p)
AVSC_INLINE int avs_is_tff(const AVS_VideoInfo * p) AVSC_INLINE int avs_is_tff(const AVS_VideoInfo * p)
{ return !!(p->image_type & AVS_IT_TFF); } { return !!(p->image_type & AVS_IT_TFF); }
AVSC_API(int, avs_get_plane_width_subsampling)(const AVS_VideoInfo * p, int plane); AVSC_INLINE int avs_bits_per_pixel(const AVS_VideoInfo * p)
{
switch (p->pixel_type) {
case AVS_CS_BGR24: return 24;
case AVS_CS_BGR32: return 32;
case AVS_CS_YUY2: return 16;
case AVS_CS_YV12:
case AVS_CS_I420: return 12;
default: return 0;
}
}
AVSC_INLINE int avs_bytes_from_pixels(const AVS_VideoInfo * p, int pixels)
{ return pixels * (avs_bits_per_pixel(p)>>3); } // Will work on planar images, but will return only luma planes
AVSC_API(int, avs_get_plane_height_subsampling)(const AVS_VideoInfo * p, int plane); AVSC_INLINE int avs_row_size(const AVS_VideoInfo * p)
{ return avs_bytes_from_pixels(p,p->width); } // Also only returns first plane on planar images
AVSC_INLINE int avs_bmp_size(const AVS_VideoInfo * vi)
AVSC_API(int, avs_bits_per_pixel)(const AVS_VideoInfo * p); { if (avs_is_planar(vi)) {int p = vi->height * ((avs_row_size(vi)+3) & ~3); p+=p>>1; return p; } return vi->height * ((avs_row_size(vi)+3) & ~3); }
AVSC_API(int, avs_bytes_from_pixels)(const AVS_VideoInfo * p, int pixels);
AVSC_API(int, avs_row_size)(const AVS_VideoInfo * p, int plane);
AVSC_API(int, avs_bmp_size)(const AVS_VideoInfo * vi);
AVSC_INLINE int avs_samples_per_second(const AVS_VideoInfo * p) AVSC_INLINE int avs_samples_per_second(const AVS_VideoInfo * p)
{ return p->audio_samples_per_second; } { return p->audio_samples_per_second; }
@@ -357,13 +349,11 @@ AVSC_INLINE void avs_set_fps(AVS_VideoInfo * p, unsigned numerator, unsigned den
p->fps_denominator = denominator/x; p->fps_denominator = denominator/x;
} }
#ifdef AVS_IMPLICIT_FUNCTION_DECLARATION_ERROR
AVSC_INLINE int avs_is_same_colorspace(AVS_VideoInfo * x, AVS_VideoInfo * y) AVSC_INLINE int avs_is_same_colorspace(AVS_VideoInfo * x, AVS_VideoInfo * y)
{ {
return (x->pixel_type == y->pixel_type) return (x->pixel_type == y->pixel_type)
|| (avs_is_yv12(x) && avs_is_yv12(y)); || (avs_is_yv12(x) && avs_is_yv12(y));
} }
#endif
///////////////////////////////////////////////////////////////////// /////////////////////////////////////////////////////////////////////
// //
@@ -400,38 +390,89 @@ typedef struct AVS_VideoFrame {
} AVS_VideoFrame; } AVS_VideoFrame;
// Access functions for AVS_VideoFrame // Access functions for AVS_VideoFrame
AVSC_API(int, avs_get_pitch_p)(const AVS_VideoFrame * p, int plane);
#ifdef AVS_IMPLICIT_FUNCTION_DECLARATION_ERROR
AVSC_INLINE int avs_get_pitch(const AVS_VideoFrame * p) { AVSC_INLINE int avs_get_pitch(const AVS_VideoFrame * p) {
return avs_get_pitch_p(p, 0);} return p->pitch;}
#endif
AVSC_API(int, avs_get_row_size_p)(const AVS_VideoFrame * p, int plane); AVSC_INLINE int avs_get_pitch_p(const AVS_VideoFrame * p, int plane) {
switch (plane) {
case AVS_PLANAR_U: case AVS_PLANAR_V: return p->pitchUV;}
return p->pitch;}
AVSC_INLINE int avs_get_row_size(const AVS_VideoFrame * p) { AVSC_INLINE int avs_get_row_size(const AVS_VideoFrame * p) {
return p->row_size; } return p->row_size; }
AVSC_API(int, avs_get_height_p)(const AVS_VideoFrame * p, int plane); AVSC_INLINE int avs_get_row_size_p(const AVS_VideoFrame * p, int plane) {
int r;
switch (plane) {
case AVS_PLANAR_U: case AVS_PLANAR_V:
if (p->pitchUV) return p->row_sizeUV;
else return 0;
case AVS_PLANAR_U_ALIGNED: case AVS_PLANAR_V_ALIGNED:
if (p->pitchUV) {
r = (p->row_sizeUV+AVS_FRAME_ALIGN-1)&(~(AVS_FRAME_ALIGN-1)); // Aligned rowsize
if (r < p->pitchUV)
return r;
return p->row_sizeUV;
} else return 0;
case AVS_PLANAR_Y_ALIGNED:
r = (p->row_size+AVS_FRAME_ALIGN-1)&(~(AVS_FRAME_ALIGN-1)); // Aligned rowsize
if (r <= p->pitch)
return r;
return p->row_size;
}
return p->row_size;
}
AVSC_INLINE int avs_get_height(const AVS_VideoFrame * p) { AVSC_INLINE int avs_get_height(const AVS_VideoFrame * p) {
return p->height;} return p->height;}
AVSC_API(const BYTE *, avs_get_read_ptr_p)(const AVS_VideoFrame * p, int plane); AVSC_INLINE int avs_get_height_p(const AVS_VideoFrame * p, int plane) {
switch (plane) {
case AVS_PLANAR_U: case AVS_PLANAR_V:
if (p->pitchUV) return p->heightUV;
return 0;
}
return p->height;}
#ifdef AVS_IMPLICIT_FUNCTION_DECLARATION_ERROR
AVSC_INLINE const BYTE* avs_get_read_ptr(const AVS_VideoFrame * p) { AVSC_INLINE const BYTE* avs_get_read_ptr(const AVS_VideoFrame * p) {
return avs_get_read_ptr_p(p, 0);} return p->vfb->data + p->offset;}
#endif
AVSC_API(int, avs_is_writable)(const AVS_VideoFrame * p); AVSC_INLINE const BYTE* avs_get_read_ptr_p(const AVS_VideoFrame * p, int plane)
{
switch (plane) {
case AVS_PLANAR_U: return p->vfb->data + p->offsetU;
case AVS_PLANAR_V: return p->vfb->data + p->offsetV;
default: return p->vfb->data + p->offset;}
}
AVSC_API(BYTE *, avs_get_write_ptr_p)(const AVS_VideoFrame * p, int plane); AVSC_INLINE int avs_is_writable(const AVS_VideoFrame * p) {
return (p->refcount == 1 && p->vfb->refcount == 1);}
AVSC_INLINE BYTE* avs_get_write_ptr(const AVS_VideoFrame * p)
{
if (avs_is_writable(p)) {
++p->vfb->sequence_number;
return p->vfb->data + p->offset;
} else
return 0;
}
AVSC_INLINE BYTE* avs_get_write_ptr_p(const AVS_VideoFrame * p, int plane)
{
if (plane==AVS_PLANAR_Y && avs_is_writable(p)) {
++p->vfb->sequence_number;
return p->vfb->data + p->offset;
} else if (plane==AVS_PLANAR_Y) {
return 0;
} else {
switch (plane) {
case AVS_PLANAR_U: return p->vfb->data + p->offsetU;
case AVS_PLANAR_V: return p->vfb->data + p->offsetV;
default: return p->vfb->data + p->offset;
}
}
}
#ifdef AVS_IMPLICIT_FUNCTION_DECLARATION_ERROR
AVSC_INLINE BYTE* avs_get_write_ptr(const AVS_VideoFrame * p) {
return avs_get_write_ptr_p(p, 0);}
#endif
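
The *_ALIGNED row-size paths above all rely on the standard round-up-to-a-power-of-two idiom, (x + A - 1) & ~(A - 1); with AVS_FRAME_ALIGN == 16, a 100-byte row rounds up to 112. As a standalone check:

```c
#include <stdio.h>

#define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
    printf("%d\n", ALIGN_UP(100, 16));  /* prints 112 */
    return 0;
}
```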
AVSC_API(void, avs_release_video_frame)(AVS_VideoFrame *); AVSC_API(void, avs_release_video_frame)(AVS_VideoFrame *);
// makes a shallow copy of a video frame // makes a shallow copy of a video frame
@@ -616,16 +657,12 @@ enum {
AVS_CPUF_SSSE3 = 0x200, // Core 2 AVS_CPUF_SSSE3 = 0x200, // Core 2
AVS_CPUF_SSE4 = 0x400, // Penryn, Wolfdale, Yorkfield AVS_CPUF_SSE4 = 0x400, // Penryn, Wolfdale, Yorkfield
AVS_CPUF_SSE4_1 = 0x400, AVS_CPUF_SSE4_1 = 0x400,
//AVS_CPUF_AVX = 0x800, // Sandy Bridge, Bulldozer AVS_CPUF_SSE4_2 = 0x800, // Nehalem
AVS_CPUF_SSE4_2 = 0x1000, // Nehalem
//AVS_CPUF_AVX2 = 0x2000, // Haswell
//AVS_CPUF_AVX512 = 0x4000, // Knights Landing
}; };
AVSC_API(const char *, avs_get_error)(AVS_ScriptEnvironment *); // return 0 if no error AVSC_API(const char *, avs_get_error)(AVS_ScriptEnvironment *); // return 0 if no error
AVSC_API(int, avs_get_cpu_flags)(AVS_ScriptEnvironment *); AVSC_API(long, avs_get_cpu_flags)(AVS_ScriptEnvironment *);
AVSC_API(int, avs_check_version)(AVS_ScriptEnvironment *, int version); AVSC_API(int, avs_check_version)(AVS_ScriptEnvironment *, int version);
AVSC_API(char *, avs_save_string)(AVS_ScriptEnvironment *, const char* s, int length); AVSC_API(char *, avs_save_string)(AVS_ScriptEnvironment *, const char* s, int length);
@@ -662,12 +699,12 @@ AVSC_API(AVS_VideoFrame *, avs_new_video_frame_a)(AVS_ScriptEnvironment *,
AVSC_INLINE AVSC_INLINE
AVS_VideoFrame * avs_new_video_frame(AVS_ScriptEnvironment * env, AVS_VideoFrame * avs_new_video_frame(AVS_ScriptEnvironment * env,
const AVS_VideoInfo * vi) const AVS_VideoInfo * vi)
{return avs_new_video_frame_a(env,vi,FRAME_ALIGN);} {return avs_new_video_frame_a(env,vi,AVS_FRAME_ALIGN);}
AVSC_INLINE AVSC_INLINE
AVS_VideoFrame * avs_new_frame(AVS_ScriptEnvironment * env, AVS_VideoFrame * avs_new_frame(AVS_ScriptEnvironment * env,
const AVS_VideoInfo * vi) const AVS_VideoInfo * vi)
{return avs_new_video_frame_a(env,vi,FRAME_ALIGN);} {return avs_new_video_frame_a(env,vi,AVS_FRAME_ALIGN);}
#endif #endif
@@ -735,6 +772,7 @@ struct AVS_Library {
AVSC_DECLARE_FUNC(avs_function_exists); AVSC_DECLARE_FUNC(avs_function_exists);
AVSC_DECLARE_FUNC(avs_get_audio); AVSC_DECLARE_FUNC(avs_get_audio);
AVSC_DECLARE_FUNC(avs_get_cpu_flags); AVSC_DECLARE_FUNC(avs_get_cpu_flags);
AVSC_DECLARE_FUNC(avs_get_error);
AVSC_DECLARE_FUNC(avs_get_frame); AVSC_DECLARE_FUNC(avs_get_frame);
AVSC_DECLARE_FUNC(avs_get_parity); AVSC_DECLARE_FUNC(avs_get_parity);
AVSC_DECLARE_FUNC(avs_get_var); AVSC_DECLARE_FUNC(avs_get_var);
@@ -759,27 +797,6 @@ struct AVS_Library {
AVSC_DECLARE_FUNC(avs_subframe_planar); AVSC_DECLARE_FUNC(avs_subframe_planar);
AVSC_DECLARE_FUNC(avs_take_clip); AVSC_DECLARE_FUNC(avs_take_clip);
AVSC_DECLARE_FUNC(avs_vsprintf); AVSC_DECLARE_FUNC(avs_vsprintf);
AVSC_DECLARE_FUNC(avs_get_error);
AVSC_DECLARE_FUNC(avs_is_yv24);
AVSC_DECLARE_FUNC(avs_is_yv16);
AVSC_DECLARE_FUNC(avs_is_yv12);
AVSC_DECLARE_FUNC(avs_is_yv411);
AVSC_DECLARE_FUNC(avs_is_y8);
AVSC_DECLARE_FUNC(avs_is_color_space);
AVSC_DECLARE_FUNC(avs_get_plane_width_subsampling);
AVSC_DECLARE_FUNC(avs_get_plane_height_subsampling);
AVSC_DECLARE_FUNC(avs_bits_per_pixel);
AVSC_DECLARE_FUNC(avs_bytes_from_pixels);
AVSC_DECLARE_FUNC(avs_row_size);
AVSC_DECLARE_FUNC(avs_bmp_size);
AVSC_DECLARE_FUNC(avs_get_pitch_p);
AVSC_DECLARE_FUNC(avs_get_row_size_p);
AVSC_DECLARE_FUNC(avs_get_height_p);
AVSC_DECLARE_FUNC(avs_get_read_ptr_p);
AVSC_DECLARE_FUNC(avs_is_writable);
AVSC_DECLARE_FUNC(avs_get_write_ptr_p);
}; };
#undef AVSC_DECLARE_FUNC #undef AVSC_DECLARE_FUNC
@@ -814,6 +831,7 @@ AVSC_INLINE AVS_Library * avs_load_library() {
AVSC_LOAD_FUNC(avs_function_exists); AVSC_LOAD_FUNC(avs_function_exists);
AVSC_LOAD_FUNC(avs_get_audio); AVSC_LOAD_FUNC(avs_get_audio);
AVSC_LOAD_FUNC(avs_get_cpu_flags); AVSC_LOAD_FUNC(avs_get_cpu_flags);
AVSC_LOAD_FUNC(avs_get_error);
AVSC_LOAD_FUNC(avs_get_frame); AVSC_LOAD_FUNC(avs_get_frame);
AVSC_LOAD_FUNC(avs_get_parity); AVSC_LOAD_FUNC(avs_get_parity);
AVSC_LOAD_FUNC(avs_get_var); AVSC_LOAD_FUNC(avs_get_var);
@@ -839,27 +857,6 @@ AVSC_INLINE AVS_Library * avs_load_library() {
AVSC_LOAD_FUNC(avs_take_clip); AVSC_LOAD_FUNC(avs_take_clip);
AVSC_LOAD_FUNC(avs_vsprintf); AVSC_LOAD_FUNC(avs_vsprintf);
AVSC_LOAD_FUNC(avs_get_error);
AVSC_LOAD_FUNC(avs_is_yv24);
AVSC_LOAD_FUNC(avs_is_yv16);
AVSC_LOAD_FUNC(avs_is_yv12);
AVSC_LOAD_FUNC(avs_is_yv411);
AVSC_LOAD_FUNC(avs_is_y8);
AVSC_LOAD_FUNC(avs_is_color_space);
AVSC_LOAD_FUNC(avs_get_plane_width_subsampling);
AVSC_LOAD_FUNC(avs_get_plane_height_subsampling);
AVSC_LOAD_FUNC(avs_bits_per_pixel);
AVSC_LOAD_FUNC(avs_bytes_from_pixels);
AVSC_LOAD_FUNC(avs_row_size);
AVSC_LOAD_FUNC(avs_bmp_size);
AVSC_LOAD_FUNC(avs_get_pitch_p);
AVSC_LOAD_FUNC(avs_get_row_size_p);
AVSC_LOAD_FUNC(avs_get_height_p);
AVSC_LOAD_FUNC(avs_get_read_ptr_p);
AVSC_LOAD_FUNC(avs_is_writable);
AVSC_LOAD_FUNC(avs_get_write_ptr_p);
#undef __AVSC_STRINGIFY #undef __AVSC_STRINGIFY
#undef AVSC_STRINGIFY #undef AVSC_STRINGIFY
#undef AVSC_LOAD_FUNC #undef AVSC_LOAD_FUNC
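
avs_load_library() resolves every avs_* symbol from avisynth.dll at run time through AVSC_LOAD_FUNC; stripped down to a single entry point, the pattern is roughly the following (Windows-only sketch, prototype simplified — the real avs_get_error() takes an AVS_ScriptEnvironment *).

```c
#include <windows.h>

typedef const char * (__stdcall *avs_get_error_func)(void *env);  /* simplified */

static avs_get_error_func load_avs_get_error(void)
{
    HMODULE lib = LoadLibraryA("avisynth");

    if (!lib)
        return NULL;
    /* Each AVSC_LOAD_FUNC(name) expands to a GetProcAddress() lookup like this. */
    return (avs_get_error_func)GetProcAddress(lib, "avs_get_error");
}
```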

@@ -0,0 +1,68 @@
// Copyright (c) 2011 FFmpegSource Project
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
/* these are defines/functions that are used and were changed in the switch to 2.6
* and are needed to maintain full compatibility with 2.5 */
enum {
AVS_CS_YV12_25 = 1<<3 | AVS_CS_YUV | AVS_CS_PLANAR, // y-v-u, planar
AVS_CS_I420_25 = 1<<4 | AVS_CS_YUV | AVS_CS_PLANAR, // y-u-v, planar
};
AVSC_INLINE int avs_get_height_p_25(const AVS_VideoFrame * p, int plane) {
switch (plane)
{
case AVS_PLANAR_U: case AVS_PLANAR_V:
if (p->pitchUV)
return p->height>>1;
return 0;
}
return p->height;}
AVSC_INLINE int avs_get_row_size_p_25(const AVS_VideoFrame * p, int plane) {
int r;
switch (plane)
{
case AVS_PLANAR_U: case AVS_PLANAR_V:
if (p->pitchUV)
return p->row_size>>1;
else
return 0;
case AVS_PLANAR_U_ALIGNED: case AVS_PLANAR_V_ALIGNED:
if (p->pitchUV)
{
r = ((p->row_size+AVS_FRAME_ALIGN-1)&(~(AVS_FRAME_ALIGN-1)) )>>1; // Aligned rowsize
if (r < p->pitchUV)
return r;
return p->row_size>>1;
}
else
return 0;
case AVS_PLANAR_Y_ALIGNED:
r = (p->row_size+AVS_FRAME_ALIGN-1)&(~(AVS_FRAME_ALIGN-1)); // Aligned rowsize
if (r <= p->pitch)
return r;
return p->row_size;
}
return p->row_size;
}
AVSC_INLINE int avs_is_yv12_25(const AVS_VideoInfo * p)
{ return ((p->pixel_type & AVS_CS_YV12_25) == AVS_CS_YV12_25)||((p->pixel_type & AVS_CS_I420_25) == AVS_CS_I420_25); }

@@ -1,62 +0,0 @@
// Avisynth C Interface Version 0.20
// Copyright 2003 Kevin Atkinson
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program; if not, write to the Free Software
// Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA, or visit
// http://www.gnu.org/copyleft/gpl.html .
//
// As a special exception, I give you permission to link to the
// Avisynth C interface with independent modules that communicate with
// the Avisynth C interface solely through the interfaces defined in
// avisynth_c.h, regardless of the license terms of these independent
// modules, and to copy and distribute the resulting combined work
// under terms of your choice, provided that every copy of the
// combined work is accompanied by a complete copy of the source code
// of the Avisynth C interface and Avisynth itself (with the version
// used to produce the combined work), being distributed under the
// terms of the GNU General Public License plus this exception. An
// independent module is a module which is not derived from or based
// on Avisynth C Interface, such as 3rd-party filters, import and
// export plugins, or graphical user interfaces.
#ifndef AVS_CAPI_H
#define AVS_CAPI_H
#ifdef __cplusplus
# define EXTERN_C extern "C"
#else
# define EXTERN_C
#endif
#ifndef AVSC_USE_STDCALL
# define AVSC_CC __cdecl
#else
# define AVSC_CC __stdcall
#endif
#define AVSC_INLINE static __inline
#ifdef BUILDING_AVSCORE
# define AVSC_EXPORT EXTERN_C
# define AVSC_API(ret, name) EXTERN_C __declspec(dllexport) ret AVSC_CC name
#else
# define AVSC_EXPORT EXTERN_C __declspec(dllexport)
# ifndef AVSC_NO_DECLSPEC
# define AVSC_API(ret, name) EXTERN_C __declspec(dllimport) ret AVSC_CC name
# else
# define AVSC_API(ret, name) typedef ret (AVSC_CC *name##_func)
# endif
#endif
#endif //AVS_CAPI_H

@@ -1,55 +0,0 @@
// Avisynth C Interface Version 0.20
// Copyright 2003 Kevin Atkinson
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program; if not, write to the Free Software
// Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA, or visit
// http://www.gnu.org/copyleft/gpl.html .
//
// As a special exception, I give you permission to link to the
// Avisynth C interface with independent modules that communicate with
// the Avisynth C interface solely through the interfaces defined in
// avisynth_c.h, regardless of the license terms of these independent
// modules, and to copy and distribute the resulting combined work
// under terms of your choice, provided that every copy of the
// combined work is accompanied by a complete copy of the source code
// of the Avisynth C interface and Avisynth itself (with the version
// used to produce the combined work), being distributed under the
// terms of the GNU General Public License plus this exception. An
// independent module is a module which is not derived from or based
// on Avisynth C Interface, such as 3rd-party filters, import and
// export plugins, or graphical user interfaces.
#ifndef AVS_CONFIG_H
#define AVS_CONFIG_H
// Undefine this to get cdecl calling convention
#define AVSC_USE_STDCALL 1
// NOTE TO PLUGIN AUTHORS:
// Because FRAME_ALIGN can be substantially higher than the alignment
// a plugin actually needs, plugins should not use FRAME_ALIGN to check for
// alignment. They should always request the exact alignment value they need.
// This is to make sure that plugins work over the widest range of AviSynth
// builds possible.
#define FRAME_ALIGN 32
#if defined(_M_AMD64) || defined(__x86_64)
# define X86_64
#elif defined(_M_IX86) || defined(__i386__)
# define X86_32
#else
# error Unsupported CPU architecture.
#endif
#endif //AVS_CONFIG_H

View File

@@ -1,51 +0,0 @@
// Avisynth C Interface Version 0.20
// Copyright 2003 Kevin Atkinson
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program; if not, write to the Free Software
// Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA, or visit
// http://www.gnu.org/copyleft/gpl.html .
//
// As a special exception, I give you permission to link to the
// Avisynth C interface with independent modules that communicate with
// the Avisynth C interface solely through the interfaces defined in
// avisynth_c.h, regardless of the license terms of these independent
// modules, and to copy and distribute the resulting combined work
// under terms of your choice, provided that every copy of the
// combined work is accompanied by a complete copy of the source code
// of the Avisynth C interface and Avisynth itself (with the version
// used to produce the combined work), being distributed under the
// terms of the GNU General Public License plus this exception. An
// independent module is a module which is not derived from or based
// on Avisynth C Interface, such as 3rd-party filters, import and
// export plugins, or graphical user interfaces.
#ifndef AVS_TYPES_H
#define AVS_TYPES_H
// Define all types necessary for interfacing with avisynth.dll
// Raster types used by VirtualDub & Avisynth
typedef unsigned int Pixel32;
typedef unsigned char BYTE;
// Audio Sample information
typedef float SFLOAT;
#ifdef __GNUC__
typedef long long int INT64;
#else
typedef __int64 INT64;
#endif
#endif //AVS_TYPES_H

View File

@@ -13,8 +13,7 @@
//
// You should have received a copy of the GNU General Public License
// along with this program; if not, write to the Free Software
- // Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
- // MA 02110-1301 USA, or visit
+ // Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA, or visit
// http://www.gnu.org/copyleft/gpl.html .
//
// As a special exception, I give you permission to link to the
@@ -513,21 +512,21 @@ AVSC_INLINE AVS_Value avs_array_elt(AVS_Value v, int index)
// only use these functions on am AVS_Value that does not already have
// an active value. Remember, treat AVS_Value as a fat pointer.
AVSC_INLINE AVS_Value avs_new_value_bool(int v0)
- { AVS_Value v = {0}; v.type = 'b'; v.d.boolean = v0 == 0 ? 0 : 1; return v; }
+ { AVS_Value v; v.type = 'b'; v.d.boolean = v0 == 0 ? 0 : 1; return v; }
AVSC_INLINE AVS_Value avs_new_value_int(int v0)
- { AVS_Value v = {0}; v.type = 'i'; v.d.integer = v0; return v; }
+ { AVS_Value v; v.type = 'i'; v.d.integer = v0; return v; }
AVSC_INLINE AVS_Value avs_new_value_string(const char * v0)
- { AVS_Value v = {0}; v.type = 's'; v.d.string = v0; return v; }
+ { AVS_Value v; v.type = 's'; v.d.string = v0; return v; }
AVSC_INLINE AVS_Value avs_new_value_float(float v0)
- { AVS_Value v = {0}; v.type = 'f'; v.d.floating_pt = v0; return v;}
+ { AVS_Value v; v.type = 'f'; v.d.floating_pt = v0; return v;}
AVSC_INLINE AVS_Value avs_new_value_error(const char * v0)
- { AVS_Value v = {0}; v.type = 'e'; v.d.string = v0; return v; }
+ { AVS_Value v; v.type = 'e'; v.d.string = v0; return v; }
#ifndef AVSC_NO_DECLSPEC
AVSC_INLINE AVS_Value avs_new_value_clip(AVS_Clip * v0)
- { AVS_Value v = {0}; avs_set_to_clip(&v, v0); return v; }
+ { AVS_Value v; avs_set_to_clip(&v, v0); return v; }
#endif
AVSC_INLINE AVS_Value avs_new_value_array(AVS_Value * v0, int size)
- { AVS_Value v = {0}; v.type = 'a'; v.d.array = v0; v.array_size = size; return v; }
+ { AVS_Value v; v.type = 'a'; v.d.array = v0; v.array_size = size; return v; }
/////////////////////////////////////////////////////////////////////
//
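
The second hunk above is the AVS_Value constructor family; the only change is dropping the = {0} zero-initialization of the local v. Either way, callers build small tagged "fat pointer" values and hand them to the script environment. A minimal, hedged usage sketch follows; avs_invoke, avs_is_error, avs_as_string and avs_release_value are assumed to be declared elsewhere in avisynth_c.h, and the script name is made up.

    #include <stdio.h>
    #include "avisynth_c.h"

    /* Build an AVS_Value with one of the constructors shown above and pass
     * it to Import(); env is assumed to come from
     * avs_create_script_environment(). */
    static void import_script(AVS_ScriptEnvironment *env)
    {
        AVS_Value arg = avs_new_value_string("input.avs");
        AVS_Value ret = avs_invoke(env, "Import", arg, NULL);

        if (avs_is_error(ret))
            fprintf(stderr, "Import failed: %s\n", avs_as_string(ret));
        avs_release_value(ret);
    }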

View File

@@ -52,8 +52,8 @@ namespace avxsynth {
//
// Functions
//
- #define MAKEDWORD(a,b,c,d) (((a) << 24) | ((b) << 16) | ((c) << 8) | (d))
- #define MAKEWORD(a,b) (((a) << 8) | (b))
+ #define MAKEDWORD(a,b,c,d) ((a << 24) | (b << 16) | (c << 8) | (d))
+ #define MAKEWORD(a,b) ((a << 8) | (b))
#define lstrlen strlen
#define lstrcpy strcpy
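
The only change in this hunk is parenthesizing the macro parameters of MAKEDWORD and MAKEWORD. The toy program below (not part of the diff) shows why that matters as soon as an argument is itself an expression:

    #include <stdio.h>

    /* Without parentheses around the parameters, an expression argument
     * binds to the shift operator instead of being evaluated first. */
    #define MAKEWORD_OLD(a,b) ((a << 8) | (b))
    #define MAKEWORD_NEW(a,b) (((a) << 8) | (b))

    int main(void)
    {
        printf("%d\n", MAKEWORD_OLD(1 | 2, 0)); /* expands to (1 | 2 << 8) | 0 = 513 */
        printf("%d\n", MAKEWORD_NEW(1 | 2, 0)); /* expands to ((1 | 2) << 8) | 0 = 768 */
        return 0;
    }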

View File

@@ -1,42 +0,0 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef COMPAT_DISPATCH_SEMAPHORE_SEMAPHORE_H
#define COMPAT_DISPATCH_SEMAPHORE_SEMAPHORE_H
#include <dispatch/dispatch.h>
#include <errno.h>
#define sem_t dispatch_semaphore_t
#define sem_post(psem) dispatch_semaphore_signal(*psem)
#define sem_wait(psem) dispatch_semaphore_wait(*psem, DISPATCH_TIME_FOREVER)
#define sem_timedwait(psem, val) dispatch_semaphore_wait(*psem, dispatch_walltime(val, 0))
#define sem_destroy(psem) dispatch_release(*psem)
static inline int compat_sem_init(dispatch_semaphore_t *psem,
int unused, int val)
{
int ret = !!(*psem = dispatch_semaphore_create(val)) - 1;
if (ret < 0)
errno = ENOMEM;
return ret;
}
#define sem_init compat_sem_init
#endif /* COMPAT_DISPATCH_SEMAPHORE_SEMAPHORE_H */
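
The removed header above maps the POSIX sem_* API onto libdispatch semaphores (OS X does not implement unnamed POSIX semaphores), so callers keep writing ordinary semaphore code and the #defines do the translation. A small, self-contained sketch of the calling pattern the shim has to cover; the producer/consumer names are illustrative only.

    #include <semaphore.h>   /* redirected to dispatch_semaphore_* by the compat header */
    #include <stdio.h>

    static sem_t ready;

    static void producer(void)
    {
        /* ... produce something ... */
        sem_post(&ready);
    }

    static void consumer(void)
    {
        sem_wait(&ready);          /* blocks until producer posts */
        printf("item consumed\n");
    }

    int main(void)
    {
        if (sem_init(&ready, 0, 0) < 0)   /* becomes compat_sem_init() above */
            return 1;
        producer();
        consumer();
        sem_destroy(&ready);
        return 0;
    }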

View File

@@ -1,35 +0,0 @@
/*
* Work around broken floating point limits on some systems.
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include_next <float.h>
#ifdef FLT_MAX
#undef FLT_MAX
#define FLT_MAX 3.40282346638528859812e+38F
#undef FLT_MIN
#define FLT_MIN 1.17549435082228750797e-38F
#undef DBL_MAX
#define DBL_MAX ((double)1.79769313486231570815e+308L)
#undef DBL_MIN
#define DBL_MIN ((double)2.22507385850720138309e-308L)
#endif
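
This removed float-limits header forces FLT_MAX, FLT_MIN, DBL_MAX and DBL_MIN to the correct IEEE-754 extremes on systems whose own <float.h> gets them wrong. A quick way to eyeball the values is the illustrative check below (not part of the tree):

    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
        /* %a prints the exact binary value; expect 0x1.fffffep+127 and
         * 0x1p-1022 on an IEEE-754 platform. */
        printf("FLT_MAX = %a\n", FLT_MAX);
        printf("DBL_MIN = %a\n", DBL_MIN);
        return 0;
    }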

View File

@@ -1,22 +0,0 @@
/*
* Work around broken floating point limits on some systems.
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include_next <limits.h>
#include <float.h>

View File

@@ -54,7 +54,7 @@ static int getopt(int argc, char *argv[], char *opts)
}
}
optopt = c = argv[optind][sp];
- if (c == ':' || !(cp = strchr(opts, c))) {
+ if (c == ':' || (cp = strchr(opts, c)) == NULL) {
fprintf(stderr, ": illegal option -- %c\n", c);
if (argv[optind][++sp] == '\0') {
optind++;
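
The getopt() hunk above only rewrites the option-lookup condition of the bundled replacement used on systems without a native getopt(). For context, the caller side looks like any other getopt() loop; the option string "o:v" below is made up for illustration.

    #include <stdio.h>
    #include <unistd.h>   /* or the compat getopt.c on systems that lack getopt() */

    int main(int argc, char *argv[])
    {
        int c, verbose = 0;
        const char *out = NULL;

        while ((c = getopt(argc, argv, "o:v")) != -1) {
            switch (c) {
            case 'o': out = optarg; break;
            case 'v': verbose = 1;  break;
            case '?': return 1;     /* getopt already printed "illegal option" */
            }
        }
        printf("out=%s verbose=%d\n", out ? out : "(none)", verbose);
        return 0;
    }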

View File

@@ -19,8 +19,8 @@
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
- #ifndef COMPAT_MSVCRT_SNPRINTF_H
- #define COMPAT_MSVCRT_SNPRINTF_H
+ #ifndef COMPAT_SNPRINTF_H
+ #define COMPAT_SNPRINTF_H
#include <stdarg.h>
#include <stdio.h>
@@ -35,4 +35,4 @@ int avpriv_vsnprintf(char *s, size_t n, const char *fmt, va_list ap);
#define _snprintf avpriv_snprintf
#define vsnprintf avpriv_vsnprintf
- #endif /* COMPAT_MSVCRT_SNPRINTF_H */
+ #endif /* COMPAT_SNPRINTF_H */
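
Only the include guard changes in this hunk; the header keeps redirecting snprintf/vsnprintf (and the _-prefixed MSVC spellings) to avpriv_snprintf/avpriv_vsnprintf so that truncated output is still NUL-terminated on the MSVC runtime. Callers are unaffected and just use the standard API, roughly as in this illustrative fragment:

    #include <stdio.h>

    /* With the compat header in effect this call is rewritten to
     * avpriv_snprintf(); the format string is purely illustrative. */
    static void format_tag(char *buf, size_t size, unsigned tag)
    {
        snprintf(buf, size, "tag=0x%08x", tag);
    }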

View File

@@ -23,8 +23,8 @@
* os2threads to pthreads wrapper * os2threads to pthreads wrapper
*/ */
#ifndef COMPAT_OS2THREADS_H #ifndef AVCODEC_OS2PTHREADS_H
#define COMPAT_OS2THREADS_H #define AVCODEC_OS2PTHREADS_H
#define INCL_DOS #define INCL_DOS
#include <os2.h> #include <os2.h>
@@ -32,18 +32,9 @@
#undef __STRICT_ANSI__ /* for _beginthread() */ #undef __STRICT_ANSI__ /* for _beginthread() */
#include <stdlib.h> #include <stdlib.h>
#include <sys/builtin.h> #include "libavutil/mem.h"
#include <sys/fmutex.h>
#include "libavutil/attributes.h"
typedef struct {
TID tid;
void *(*start_routine)(void *);
void *arg;
void *result;
} pthread_t;
typedef TID pthread_t;
typedef void pthread_attr_t; typedef void pthread_attr_t;
typedef HMTX pthread_mutex_t; typedef HMTX pthread_mutex_t;
@@ -51,52 +42,47 @@ typedef void pthread_mutexattr_t;
typedef struct { typedef struct {
HEV event_sem; HEV event_sem;
HEV ack_sem; int wait_count;
volatile unsigned wait_count;
} pthread_cond_t; } pthread_cond_t;
typedef void pthread_condattr_t; typedef void pthread_condattr_t;
typedef struct { struct thread_arg {
volatile int done; void *(*start_routine)(void *);
_fmutex mtx; void *arg;
} pthread_once_t; };
#define PTHREAD_ONCE_INIT {0, _FMUTEX_INITIALIZER}
static void thread_entry(void *arg) static void thread_entry(void *arg)
{ {
pthread_t *thread = arg; struct thread_arg *thread_arg = arg;
thread->result = thread->start_routine(thread->arg); thread_arg->start_routine(thread_arg->arg);
av_free(thread_arg);
} }
static av_always_inline int pthread_create(pthread_t *thread, static av_always_inline int pthread_create(pthread_t *thread, const pthread_attr_t *attr, void *(*start_routine)(void*), void *arg)
const pthread_attr_t *attr,
void *(*start_routine)(void*),
void *arg)
{ {
thread->start_routine = start_routine; struct thread_arg *thread_arg;
thread->arg = arg;
thread->result = NULL;
thread->tid = _beginthread(thread_entry, NULL, 1024 * 1024, thread); thread_arg = av_mallocz(sizeof(struct thread_arg));
thread_arg->start_routine = start_routine;
thread_arg->arg = arg;
*thread = _beginthread(thread_entry, NULL, 256 * 1024, thread_arg);
return 0; return 0;
} }
static av_always_inline int pthread_join(pthread_t thread, void **value_ptr) static av_always_inline int pthread_join(pthread_t thread, void **value_ptr)
{ {
DosWaitThread(&thread.tid, DCWW_WAIT); DosWaitThread((PTID)&thread, DCWW_WAIT);
if (value_ptr)
*value_ptr = thread.result;
return 0; return 0;
} }
static av_always_inline int pthread_mutex_init(pthread_mutex_t *mutex, static av_always_inline int pthread_mutex_init(pthread_mutex_t *mutex, const pthread_mutexattr_t *attr)
const pthread_mutexattr_t *attr)
{ {
DosCreateMutexSem(NULL, (PHMTX)mutex, 0, FALSE); DosCreateMutexSem(NULL, (PHMTX)mutex, 0, FALSE);
@@ -124,11 +110,9 @@ static av_always_inline int pthread_mutex_unlock(pthread_mutex_t *mutex)
return 0; return 0;
} }
static av_always_inline int pthread_cond_init(pthread_cond_t *cond, static av_always_inline int pthread_cond_init(pthread_cond_t *cond, const pthread_condattr_t *attr)
const pthread_condattr_t *attr)
{ {
DosCreateEventSem(NULL, &cond->event_sem, DCE_POSTONE, FALSE); DosCreateEventSem(NULL, &cond->event_sem, DCE_POSTONE, FALSE);
DosCreateEventSem(NULL, &cond->ack_sem, DCE_POSTONE, FALSE);
cond->wait_count = 0; cond->wait_count = 0;
@@ -138,16 +122,16 @@ static av_always_inline int pthread_cond_init(pthread_cond_t *cond,
static av_always_inline int pthread_cond_destroy(pthread_cond_t *cond) static av_always_inline int pthread_cond_destroy(pthread_cond_t *cond)
{ {
DosCloseEventSem(cond->event_sem); DosCloseEventSem(cond->event_sem);
DosCloseEventSem(cond->ack_sem);
return 0; return 0;
} }
static av_always_inline int pthread_cond_signal(pthread_cond_t *cond) static av_always_inline int pthread_cond_signal(pthread_cond_t *cond)
{ {
if (!__atomic_cmpxchg32(&cond->wait_count, 0, 0)) { if (cond->wait_count > 0) {
DosPostEventSem(cond->event_sem); DosPostEventSem(cond->event_sem);
DosWaitEventSem(cond->ack_sem, SEM_INDEFINITE_WAIT);
cond->wait_count--;
} }
return 0; return 0;
@@ -155,47 +139,26 @@ static av_always_inline int pthread_cond_signal(pthread_cond_t *cond)
static av_always_inline int pthread_cond_broadcast(pthread_cond_t *cond) static av_always_inline int pthread_cond_broadcast(pthread_cond_t *cond)
{ {
while (!__atomic_cmpxchg32(&cond->wait_count, 0, 0)) while (cond->wait_count > 0) {
pthread_cond_signal(cond); DosPostEventSem(cond->event_sem);
cond->wait_count--;
}
return 0; return 0;
} }
static av_always_inline int pthread_cond_wait(pthread_cond_t *cond, static av_always_inline int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex)
pthread_mutex_t *mutex)
{ {
__atomic_increment(&cond->wait_count); cond->wait_count++;
pthread_mutex_unlock(mutex); pthread_mutex_unlock(mutex);
DosWaitEventSem(cond->event_sem, SEM_INDEFINITE_WAIT); DosWaitEventSem(cond->event_sem, SEM_INDEFINITE_WAIT);
__atomic_decrement(&cond->wait_count);
DosPostEventSem(cond->ack_sem);
pthread_mutex_lock(mutex); pthread_mutex_lock(mutex);
return 0; return 0;
} }
static av_always_inline int pthread_once(pthread_once_t *once_control, #endif /* AVCODEC_OS2PTHREADS_H */
void (*init_routine)(void))
{
if (!once_control->done)
{
_fmutex_request(&once_control->mtx, 0);
if (!once_control->done)
{
init_routine();
once_control->done = 1;
}
_fmutex_release(&once_control->mtx);
}
return 0;
}
#endif /* COMPAT_OS2THREADS_H */
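
The large hunk above is the OS/2 pthreads shim (compat/os2threads.h, per the include guards shown): one side stores the worker's return value in the pthread_t struct and adds pthread_once plus an acknowledged condition signal, the other uses a heap-allocated thread_arg, discards the return value, and keeps simpler wait counters. From the caller's point of view both are written against ordinary pthreads usage; the snippet below is standard pthreads code shown only to make that contract concrete (the return-value propagation is exactly what the left-hand variant adds).

    #include <pthread.h>   /* or "compat/os2threads.h" when building for OS/2 */
    #include <stdio.h>

    static pthread_once_t once = PTHREAD_ONCE_INIT;

    static void init_tables(void)
    {
        printf("initialized exactly once\n");
    }

    static void *worker(void *arg)
    {
        pthread_once(&once, init_tables);
        return arg;   /* propagated by the newer wrapper; the 2.2 one discards it */
    }

    int main(void)
    {
        pthread_t t;
        void *ret = NULL;

        pthread_create(&t, NULL, worker, "done");
        pthread_join(t, &ret);
        printf("worker says: %s\n", ret ? (const char *)ret : "(return value discarded)");
        return 0;
    }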

View File

@@ -1,352 +0,0 @@
#!/usr/bin/env perl
# make_sunver.pl
#
# Copyright (C) 2010, 2011, 2012, 2013
# Free Software Foundation, Inc.
#
# This file is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; see the file COPYING.GPLv3. If not see
# <http://www.gnu.org/licenses/>.
# This script takes at least two arguments, a GNU style version script and
# a list of object and archive files, and generates a corresponding Sun
# style version script as follows:
#
# Each glob pattern, C++ mangled pattern or literal in the input script is
# matched against all global symbols in the input objects, emitting those
# that matched (or nothing if no match was found).
# A comment with the original pattern and its type is left in the output
# file to make it easy to understand the matches.
#
# It uses elfdump when present (native), GNU readelf otherwise.
# It depends on the GNU version of c++filt, since it must understand the
# GNU mangling style.
use FileHandle;
use IPC::Open2;
# Enforce C locale.
$ENV{'LC_ALL'} = "C";
$ENV{'LANG'} = "C";
# Input version script, GNU style.
my $symvers = shift;
##########
# Get all the symbols from the library, match them, and add them to a hash.
my %sym_hash = ();
# List of objects and archives to process.
my @OBJECTS = ();
# List of shared objects to omit from processing.
my @SHAREDOBJS = ();
# Filter out those input archives that have corresponding shared objects to
# avoid adding all symbols matched in the archive to the output map.
foreach $file (@ARGV) {
if (($so = $file) =~ s/\.a$/.so/ && -e $so) {
printf STDERR "omitted $file -> $so\n";
push (@SHAREDOBJS, $so);
} else {
push (@OBJECTS, $file);
}
}
# We need to detect and ignore hidden symbols. Solaris nm can only detect
# this in the harder to parse default output format, and GNU nm not at all,
# so use elfdump -s in the native case and GNU readelf -s otherwise.
# GNU objdump -t cannot be used since it produces a variable number of
# columns.
# The path to elfdump.
my $elfdump = "/usr/ccs/bin/elfdump";
if (-f $elfdump) {
open ELFDUMP,$elfdump.' -s '.(join ' ',@OBJECTS).'|' or die $!;
my $skip_arsym = 0;
while (<ELFDUMP>) {
chomp;
# Ignore empty lines.
if (/^$/) {
# End of archive symbol table, stop skipping.
$skip_arsym = 0 if $skip_arsym;
next;
}
# Keep skipping until end of archive symbol table.
next if ($skip_arsym);
# Ignore object name header for individual objects and archives.
next if (/:$/);
# Ignore table header lines.
next if (/^Symbol Table Section:/);
next if (/index.*value.*size/);
# Start of archive symbol table: start skipping.
if (/^Symbol Table: \(archive/) {
$skip_arsym = 1;
next;
}
# Split table.
(undef, undef, undef, undef, $bind, $oth, undef, $shndx, $name) = split;
# Error out for unknown input.
die "unknown input line:\n$_" unless defined($bind);
# Ignore local symbols.
next if ($bind eq "LOCL");
# Ignore hidden symbols.
next if ($oth eq "H");
# Ignore undefined symbols.
next if ($shndx eq "UNDEF");
# Error out for unhandled cases.
if ($bind !~ /^(GLOB|WEAK)/ or $oth ne "D") {
die "unhandled symbol:\n$_";
}
# Remember symbol.
$sym_hash{$name}++;
}
close ELFDUMP or die "$elfdump error";
} else {
open READELF, 'readelf -s -W '.(join ' ',@OBJECTS).'|' or die $!;
# Process each symbol.
while (<READELF>) {
chomp;
# Ignore empty lines.
next if (/^$/);
# Ignore object name header.
next if (/^File: .*$/);
# Ignore table header lines.
next if (/^Symbol table.*contains.*:/);
next if (/Num:.*Value.*Size/);
# Split table.
(undef, undef, undef, undef, $bind, $vis, $ndx, $name) = split;
# Error out for unknown input.
die "unknown input line:\n$_" unless defined($bind);
# Ignore local symbols.
next if ($bind eq "LOCAL");
# Ignore hidden symbols.
next if ($vis eq "HIDDEN");
# Ignore undefined symbols.
next if ($ndx eq "UND");
# Error out for unhandled cases.
if ($bind !~ /^(GLOBAL|WEAK)/ or $vis ne "DEFAULT") {
die "unhandled symbol:\n$_";
}
# Remember symbol.
$sym_hash{$name}++;
}
close READELF or die "readelf error";
}
##########
# The various types of glob patterns.
#
# A glob pattern that is to be applied to the demangled name: 'cxx'.
# A glob patterns that applies directly to the name in the .o files: 'glob'.
# This pattern is ignored; used for local variables (usually just '*'): 'ign'.
# The type of the current pattern.
my $glob = 'glob';
# We're currently inside `extern "C++"', which Sun ld doesn't understand.
my $in_extern = 0;
# The c++filt command to use. This *must* be GNU c++filt; the Sun Studio
# c++filt doesn't handle the GNU mangling style.
my $cxxfilt = $ENV{'CXXFILT'} || "c++filt";
# The current version name.
my $current_version = "";
# Was there any attempt to match a symbol to this version?
my $matches_attempted;
# The number of versions which matched this symbol.
my $matched_symbols;
open F,$symvers or die $!;
# Print information about generating this file
print "# This file was generated by make_sunver.pl. DO NOT EDIT!\n";
print "# It was generated by:\n";
printf "# %s %s %s\n", $0, $symvers, (join ' ',@ARGV);
printf "# Omitted archives with corresponding shared libraries: %s\n",
(join ' ', @SHAREDOBJS) if $#SHAREDOBJS >= 0;
print "#\n\n";
print "\$mapfile_version 2\n";
while (<F>) {
# Lines of the form '};'
if (/^([ \t]*)(\}[ \t]*;[ \t]*)$/) {
$glob = 'glob';
if ($in_extern) {
$in_extern--;
print "$1##$2\n";
} else {
print;
}
next;
}
# Lines of the form '} SOME_VERSION_NAME_1.0;'
if (/^[ \t]*\}[ \tA-Z0-9_.a-z]+;[ \t]*$/) {
$glob = 'glob';
# We tried to match symbols against this version, but none matched.
# Emit dummy hidden symbol to avoid marking this version WEAK.
if ($matches_attempted && $matched_symbols == 0) {
print " hidden:\n";
print " .force_WEAK_off_$current_version = DATA S0x0 V0x0;\n";
}
print; next;
}
# Comment and blank lines
if (/^[ \t]*\#/) { print; next; }
if (/^[ \t]*$/) { print; next; }
# Lines of the form '{'
if (/^([ \t]*){$/) {
if ($in_extern) {
print "$1##{\n";
} else {
print;
}
next;
}
# Lines of the form 'SOME_VERSION_NAME_1.1 {'
if (/^([A-Z0-9_.]+)[ \t]+{$/) {
# Record version name.
$current_version = $1;
# Reset match attempts, #matched symbols for this version.
$matches_attempted = 0;
$matched_symbols = 0;
print "SYMBOL_VERSION $1 {\n";
next;
}
# Ignore 'global:'
if (/^[ \t]*global:$/) { print; next; }
# After 'local:', globs should be ignored, they won't be exported.
if (/^[ \t]*local:$/) {
$glob = 'ign';
print;
next;
}
# After 'extern "C++"', globs are C++ patterns
if (/^([ \t]*)(extern \"C\+\+\"[ \t]*)$/) {
$in_extern++;
$glob = 'cxx';
# Need to comment, Sun ld cannot handle this.
print "$1##$2\n"; next;
}
# Chomp newline now we're done with passing through the input file.
chomp;
# Catch globs. Note that '{}' is not allowed in globs by this script,
# so only '*' and '[]' are available.
if (/^([ \t]*)([^ \t;{}#]+);?[ \t]*$/) {
my $ws = $1;
my $ptn = $2;
# Turn the glob into a regex by replacing '*' with '.*', '?' with '.'.
# Keep $ptn so we can still print the original form.
($pattern = $ptn) =~ s/\*/\.\*/g;
$pattern =~ s/\?/\./g;
if ($glob eq 'ign') {
# We're in a local: * section; just continue.
print "$_\n";
next;
}
# Print the glob commented for human readers.
print "$ws##$ptn ($glob)\n";
# We tried to match a symbol to this version.
$matches_attempted++;
if ($glob eq 'glob') {
my %ptn_syms = ();
# Match ptn against symbols in %sym_hash.
foreach my $sym (keys %sym_hash) {
# Maybe it matches one of the patterns based on the symbol in
# the .o file.
$ptn_syms{$sym}++ if ($sym =~ /^$pattern$/);
}
foreach my $sym (sort keys(%ptn_syms)) {
$matched_symbols++;
print "$ws$sym;\n";
}
} elsif ($glob eq 'cxx') {
my %dem_syms = ();
# Verify that we're actually using GNU c++filt. Other versions
# most likely cannot handle GNU style symbol mangling.
my $cxxout = `$cxxfilt --version 2>&1`;
$cxxout =~ m/GNU/ or die "$0 requires GNU c++filt to function";
# Talk to c++filt through a pair of file descriptors.
# Need to start a fresh instance per pattern, otherwise the
# process grows to 500+ MB.
my $pid = open2(*FILTIN, *FILTOUT, $cxxfilt) or die $!;
# Match ptn against symbols in %sym_hash.
foreach my $sym (keys %sym_hash) {
# No? Well, maybe its demangled form matches one of those
# patterns.
printf FILTOUT "%s\n",$sym;
my $dem = <FILTIN>;
chomp $dem;
$dem_syms{$sym}++ if ($dem =~ /^$pattern$/);
}
close FILTOUT or die "c++filt error";
close FILTIN or die "c++filt error";
# Need to wait for the c++filt process to avoid lots of zombies.
waitpid $pid, 0;
foreach my $sym (sort keys(%dem_syms)) {
$matched_symbols++;
print "$ws$sym;\n";
}
} else {
# No? Well, then ignore it.
}
next;
}
# Important sanity check. This script can't handle lots of formats
# that GNU ld can, so be sure to error out if one is seen!
die "strange line `$_'";
}
close F;

View File

@@ -16,8 +16,8 @@
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
- #ifndef COMPAT_TMS470_MATH_H
- #define COMPAT_TMS470_MATH_H
+ #ifndef FFMPEG_COMPAT_TMS470_MATH_H
+ #define FFMPEG_COMPAT_TMS470_MATH_H
#include_next <math.h>
@@ -27,4 +27,4 @@
#define INFINITY (*(const float*)((const unsigned []){ 0x7f800000 }))
#define NAN (*(const float*)((const unsigned []){ 0x7fc00000 }))
- #endif /* COMPAT_TMS470_MATH_H */
+ #endif /* FFMPEG_COMPAT_TMS470_MATH_H */

View File

@@ -19,9 +19,6 @@
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
- #ifndef COMPAT_VA_COPY_H
- #define COMPAT_VA_COPY_H
#include <stdarg.h>
#if !defined(va_copy) && defined(_MSC_VER)
@@ -30,5 +27,3 @@
#if !defined(va_copy) && defined(__GNUC__) && __GNUC__ < 3
#define va_copy(dst, src) __va_copy(dst, src)
#endif
- #endif /* COMPAT_VA_COPY_H */
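
Apart from the guard lines being removed, this header only has to supply va_copy on MSVC and on GCC older than 3, where the C99 macro is missing. The usual reason a project needs va_copy at all is a two-pass walk over the same va_list, as in this illustrative helper (not part of the diff):

    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Sizes the output with a first vsnprintf() pass, then formats it;
     * the second pass needs its own copy of the va_list. */
    static char *format_alloc(const char *fmt, ...)
    {
        va_list ap, ap2;
        int len;
        char *buf;

        va_start(ap, fmt);
        va_copy(ap2, ap);
        len = vsnprintf(NULL, 0, fmt, ap);
        buf = malloc(len + 1);
        if (buf)
            vsnprintf(buf, len + 1, fmt, ap2);
        va_end(ap2);
        va_end(ap);
        return buf;
    }

    int main(void)
    {
        char *s = format_alloc("pi ~ %.2f", 3.14159);
        puts(s ? s : "(alloc failed)");
        free(s);
        return 0;
    }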

View File

@@ -26,8 +26,8 @@
* w32threads to pthreads wrapper * w32threads to pthreads wrapper
*/ */
#ifndef COMPAT_W32PTHREADS_H #ifndef FFMPEG_COMPAT_W32PTHREADS_H
#define COMPAT_W32PTHREADS_H #define FFMPEG_COMPAT_W32PTHREADS_H
/* Build up a pthread-like API using underlying Windows API. Have only static /* Build up a pthread-like API using underlying Windows API. Have only static
* methods so as to not conflict with a potentially linked in pthread-win32 * methods so as to not conflict with a potentially linked in pthread-win32
@@ -39,12 +39,6 @@
#include <windows.h> #include <windows.h>
#include <process.h> #include <process.h>
#if _WIN32_WINNT < 0x0600 && defined(__MINGW32__)
#undef MemoryBarrier
#define MemoryBarrier __sync_synchronize
#endif
#include "libavutil/attributes.h"
#include "libavutil/common.h" #include "libavutil/common.h"
#include "libavutil/internal.h" #include "libavutil/internal.h"
#include "libavutil/mem.h" #include "libavutil/mem.h"
@@ -60,56 +54,52 @@ typedef struct pthread_t {
* not mutexes */ * not mutexes */
typedef CRITICAL_SECTION pthread_mutex_t; typedef CRITICAL_SECTION pthread_mutex_t;
/* This is the CONDITION_VARIABLE typedef for using Windows' native /* This is the CONDITIONAL_VARIABLE typedef for using Window's native
* conditional variables on kernels 6.0+. */ * conditional variables on kernels 6.0+.
#if HAVE_CONDITION_VARIABLE_PTR * MinGW does not currently have this typedef. */
typedef CONDITION_VARIABLE pthread_cond_t;
#else
typedef struct pthread_cond_t { typedef struct pthread_cond_t {
void *Ptr; void *ptr;
} pthread_cond_t; } pthread_cond_t;
/* function pointers to conditional variable API on windows 6.0+ kernels */
#if _WIN32_WINNT < 0x0600
static void (WINAPI *cond_broadcast)(pthread_cond_t *cond);
static void (WINAPI *cond_init)(pthread_cond_t *cond);
static void (WINAPI *cond_signal)(pthread_cond_t *cond);
static BOOL (WINAPI *cond_wait)(pthread_cond_t *cond, pthread_mutex_t *mutex,
DWORD milliseconds);
#else
#define cond_init InitializeConditionVariable
#define cond_broadcast WakeAllConditionVariable
#define cond_signal WakeConditionVariable
#define cond_wait SleepConditionVariableCS
#endif #endif
#if _WIN32_WINNT >= 0x0600 static unsigned __stdcall attribute_align_arg win32thread_worker(void *arg)
#define InitializeCriticalSection(x) InitializeCriticalSectionEx(x, 0, 0)
#define WaitForSingleObject(a, b) WaitForSingleObjectEx(a, b, FALSE)
#endif
static av_unused unsigned __stdcall attribute_align_arg win32thread_worker(void *arg)
{ {
pthread_t *h = arg; pthread_t *h = arg;
h->ret = h->func(h->arg); h->ret = h->func(h->arg);
return 0; return 0;
} }
static av_unused int pthread_create(pthread_t *thread, const void *unused_attr, static int pthread_create(pthread_t *thread, const void *unused_attr,
void *(*start_routine)(void*), void *arg) void *(*start_routine)(void*), void *arg)
{ {
thread->func = start_routine; thread->func = start_routine;
thread->arg = arg; thread->arg = arg;
#if HAVE_WINRT
thread->handle = (void*)CreateThread(NULL, 0, win32thread_worker, thread,
0, NULL);
#else
thread->handle = (void*)_beginthreadex(NULL, 0, win32thread_worker, thread, thread->handle = (void*)_beginthreadex(NULL, 0, win32thread_worker, thread,
0, NULL); 0, NULL);
#endif
return !thread->handle; return !thread->handle;
} }
static av_unused int pthread_join(pthread_t thread, void **value_ptr) static void pthread_join(pthread_t thread, void **value_ptr)
{ {
DWORD ret = WaitForSingleObject(thread.handle, INFINITE); DWORD ret = WaitForSingleObject(thread.handle, INFINITE);
if (ret != WAIT_OBJECT_0) { if (ret != WAIT_OBJECT_0)
if (ret == WAIT_ABANDONED) return;
return EINVAL;
else
return EDEADLK;
}
if (value_ptr) if (value_ptr)
*value_ptr = thread.ret; *value_ptr = thread.ret;
CloseHandle(thread.handle); CloseHandle(thread.handle);
return 0;
} }
static inline int pthread_mutex_init(pthread_mutex_t *m, void* attr) static inline int pthread_mutex_init(pthread_mutex_t *m, void* attr)
@@ -133,115 +123,8 @@ static inline int pthread_mutex_unlock(pthread_mutex_t *m)
return 0; return 0;
} }
#if _WIN32_WINNT >= 0x0600
typedef INIT_ONCE pthread_once_t;
#define PTHREAD_ONCE_INIT INIT_ONCE_STATIC_INIT
static av_unused int pthread_once(pthread_once_t *once_control, void (*init_routine)(void))
{
BOOL pending = FALSE;
InitOnceBeginInitialize(once_control, 0, &pending, NULL);
if (pending)
init_routine();
InitOnceComplete(once_control, 0, NULL);
return 0;
}
static inline int pthread_cond_init(pthread_cond_t *cond, const void *unused_attr)
{
InitializeConditionVariable(cond);
return 0;
}
/* native condition variables do not destroy */
static inline int pthread_cond_destroy(pthread_cond_t *cond)
{
return 0;
}
static inline int pthread_cond_broadcast(pthread_cond_t *cond)
{
WakeAllConditionVariable(cond);
return 0;
}
static inline int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex)
{
SleepConditionVariableCS(cond, mutex, INFINITE);
return 0;
}
static inline int pthread_cond_signal(pthread_cond_t *cond)
{
WakeConditionVariable(cond);
return 0;
}
#else // _WIN32_WINNT < 0x0600
/* atomic init state of dynamically loaded functions */
static LONG w32thread_init_state = 0;
static av_unused void w32thread_init(void);
/* for pre-Windows 6.0 platforms, define INIT_ONCE struct,
* compatible to the one used in the native API */
typedef union pthread_once_t {
void * Ptr; ///< For the Windows 6.0+ native functions
LONG state; ///< For the pre-Windows 6.0 compat code
} pthread_once_t;
#define PTHREAD_ONCE_INIT {0}
/* function pointers to init once API on windows 6.0+ kernels */
static BOOL (WINAPI *initonce_begin)(pthread_once_t *lpInitOnce, DWORD dwFlags, BOOL *fPending, void **lpContext);
static BOOL (WINAPI *initonce_complete)(pthread_once_t *lpInitOnce, DWORD dwFlags, void *lpContext);
/* pre-Windows 6.0 compat using a spin-lock */
static inline void w32thread_once_fallback(LONG volatile *state, void (*init_routine)(void))
{
switch (InterlockedCompareExchange(state, 1, 0)) {
/* Initial run */
case 0:
init_routine();
InterlockedExchange(state, 2);
break;
/* Another thread is running init */
case 1:
while (1) {
MemoryBarrier();
if (*state == 2)
break;
Sleep(0);
}
break;
/* Initialization complete */
case 2:
break;
}
}
static av_unused int pthread_once(pthread_once_t *once_control, void (*init_routine)(void))
{
w32thread_once_fallback(&w32thread_init_state, w32thread_init);
/* Use native functions on Windows 6.0+ */
if (initonce_begin && initonce_complete) {
BOOL pending = FALSE;
initonce_begin(once_control, 0, &pending, NULL);
if (pending)
init_routine();
initonce_complete(once_control, 0, NULL);
return 0;
}
w32thread_once_fallback(&once_control->state, init_routine);
return 0;
}
/* for pre-Windows 6.0 platforms we need to define and use our own condition /* for pre-Windows 6.0 platforms we need to define and use our own condition
* variable and api */ * variable and api */
typedef struct win32_cond_t { typedef struct win32_cond_t {
pthread_mutex_t mtx_broadcast; pthread_mutex_t mtx_broadcast;
pthread_mutex_t mtx_waiter_count; pthread_mutex_t mtx_waiter_count;
@@ -251,47 +134,36 @@ typedef struct win32_cond_t {
volatile int is_broadcast; volatile int is_broadcast;
} win32_cond_t; } win32_cond_t;
/* function pointers to conditional variable API on windows 6.0+ kernels */ static void pthread_cond_init(pthread_cond_t *cond, const void *unused_attr)
static void (WINAPI *cond_broadcast)(pthread_cond_t *cond);
static void (WINAPI *cond_init)(pthread_cond_t *cond);
static void (WINAPI *cond_signal)(pthread_cond_t *cond);
static BOOL (WINAPI *cond_wait)(pthread_cond_t *cond, pthread_mutex_t *mutex,
DWORD milliseconds);
static av_unused int pthread_cond_init(pthread_cond_t *cond, const void *unused_attr)
{ {
win32_cond_t *win32_cond = NULL; win32_cond_t *win32_cond = NULL;
w32thread_once_fallback(&w32thread_init_state, w32thread_init);
if (cond_init) { if (cond_init) {
cond_init(cond); cond_init(cond);
return 0; return;
} }
/* non native condition variables */ /* non native condition variables */
win32_cond = av_mallocz(sizeof(win32_cond_t)); win32_cond = av_mallocz(sizeof(win32_cond_t));
if (!win32_cond) if (!win32_cond)
return ENOMEM; return;
cond->Ptr = win32_cond; cond->ptr = win32_cond;
win32_cond->semaphore = CreateSemaphore(NULL, 0, 0x7fffffff, NULL); win32_cond->semaphore = CreateSemaphore(NULL, 0, 0x7fffffff, NULL);
if (!win32_cond->semaphore) if (!win32_cond->semaphore)
return ENOMEM; return;
win32_cond->waiters_done = CreateEvent(NULL, TRUE, FALSE, NULL); win32_cond->waiters_done = CreateEvent(NULL, TRUE, FALSE, NULL);
if (!win32_cond->waiters_done) if (!win32_cond->waiters_done)
return ENOMEM; return;
pthread_mutex_init(&win32_cond->mtx_waiter_count, NULL); pthread_mutex_init(&win32_cond->mtx_waiter_count, NULL);
pthread_mutex_init(&win32_cond->mtx_broadcast, NULL); pthread_mutex_init(&win32_cond->mtx_broadcast, NULL);
return 0;
} }
static av_unused int pthread_cond_destroy(pthread_cond_t *cond) static void pthread_cond_destroy(pthread_cond_t *cond)
{ {
win32_cond_t *win32_cond = cond->Ptr; win32_cond_t *win32_cond = cond->ptr;
/* native condition variables do not destroy */ /* native condition variables do not destroy */
if (cond_init) if (cond_init)
return 0; return;
/* non native condition variables */ /* non native condition variables */
CloseHandle(win32_cond->semaphore); CloseHandle(win32_cond->semaphore);
@@ -299,18 +171,17 @@ static av_unused int pthread_cond_destroy(pthread_cond_t *cond)
pthread_mutex_destroy(&win32_cond->mtx_waiter_count); pthread_mutex_destroy(&win32_cond->mtx_waiter_count);
pthread_mutex_destroy(&win32_cond->mtx_broadcast); pthread_mutex_destroy(&win32_cond->mtx_broadcast);
av_freep(&win32_cond); av_freep(&win32_cond);
cond->Ptr = NULL; cond->ptr = NULL;
return 0;
} }
static av_unused int pthread_cond_broadcast(pthread_cond_t *cond) static void pthread_cond_broadcast(pthread_cond_t *cond)
{ {
win32_cond_t *win32_cond = cond->Ptr; win32_cond_t *win32_cond = cond->ptr;
int have_waiter; int have_waiter;
if (cond_broadcast) { if (cond_broadcast) {
cond_broadcast(cond); cond_broadcast(cond);
return 0; return;
} }
/* non native condition variables */ /* non native condition variables */
@@ -332,12 +203,11 @@ static av_unused int pthread_cond_broadcast(pthread_cond_t *cond)
} else } else
pthread_mutex_unlock(&win32_cond->mtx_waiter_count); pthread_mutex_unlock(&win32_cond->mtx_waiter_count);
pthread_mutex_unlock(&win32_cond->mtx_broadcast); pthread_mutex_unlock(&win32_cond->mtx_broadcast);
return 0;
} }
static av_unused int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex) static int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex)
{ {
win32_cond_t *win32_cond = cond->Ptr; win32_cond_t *win32_cond = cond->ptr;
int last_waiter; int last_waiter;
if (cond_wait) { if (cond_wait) {
cond_wait(cond, mutex, INFINITE); cond_wait(cond, mutex, INFINITE);
@@ -367,13 +237,13 @@ static av_unused int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mu
return pthread_mutex_lock(mutex); return pthread_mutex_lock(mutex);
} }
static av_unused int pthread_cond_signal(pthread_cond_t *cond) static void pthread_cond_signal(pthread_cond_t *cond)
{ {
win32_cond_t *win32_cond = cond->Ptr; win32_cond_t *win32_cond = cond->ptr;
int have_waiter; int have_waiter;
if (cond_signal) { if (cond_signal) {
cond_signal(cond); cond_signal(cond);
return 0; return;
} }
pthread_mutex_lock(&win32_cond->mtx_broadcast); pthread_mutex_lock(&win32_cond->mtx_broadcast);
@@ -390,11 +260,9 @@ static av_unused int pthread_cond_signal(pthread_cond_t *cond)
} }
pthread_mutex_unlock(&win32_cond->mtx_broadcast); pthread_mutex_unlock(&win32_cond->mtx_broadcast);
return 0;
} }
#endif
static av_unused void w32thread_init(void) static void w32thread_init(void)
{ {
#if _WIN32_WINNT < 0x0600 #if _WIN32_WINNT < 0x0600
HANDLE kernel_dll = GetModuleHandle(TEXT("kernel32.dll")); HANDLE kernel_dll = GetModuleHandle(TEXT("kernel32.dll"));
@@ -407,12 +275,8 @@ static av_unused void w32thread_init(void)
(void*)GetProcAddress(kernel_dll, "WakeConditionVariable"); (void*)GetProcAddress(kernel_dll, "WakeConditionVariable");
cond_wait = cond_wait =
(void*)GetProcAddress(kernel_dll, "SleepConditionVariableCS"); (void*)GetProcAddress(kernel_dll, "SleepConditionVariableCS");
initonce_begin =
(void*)GetProcAddress(kernel_dll, "InitOnceBeginInitialize");
initonce_complete =
(void*)GetProcAddress(kernel_dll, "InitOnceComplete");
#endif #endif
} }
#endif /* COMPAT_W32PTHREADS_H */ #endif /* FFMPEG_COMPAT_W32PTHREADS_H */
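
The biggest hunk in this section is the w32pthreads wrapper (compat/w32pthreads.h by its guards). The left-hand side adds proper error codes from pthread_join, native CONDITION_VARIABLE and INIT_ONCE support on Vista+ kernels, and a WinRT thread-creation path; the 2.2 side keeps only the dynamically loaded pre-Vista fallback and returns void from several functions. The calling pattern both variants are written against is ordinary pthreads condition-variable code, roughly as below (plain pthreads, shown for reference only; it also compiles against a real pthreads implementation):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock;
    static pthread_cond_t  cond;
    static int ready;

    static void *producer(void *arg)
    {
        pthread_mutex_lock(&lock);
        ready = 1;
        pthread_cond_signal(&cond);       /* wakes the waiter below */
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        /* Explicit init, since the wrapper has no static initializers. */
        pthread_mutex_init(&lock, NULL);
        pthread_cond_init(&cond, NULL);

        pthread_create(&t, NULL, producer, NULL);

        pthread_mutex_lock(&lock);
        while (!ready)                    /* spurious wakeups are allowed */
            pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);

        pthread_join(t, NULL);
        pthread_cond_destroy(&cond);
        pthread_mutex_destroy(&lock);
        printf("done\n");
        return 0;
    }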

View File

@@ -1,9 +0,0 @@
#!/bin/sh
LINK_EXE_PATH=$(dirname "$(command -v cl)")/link
if [ -x "$LINK_EXE_PATH" ]; then
"$LINK_EXE_PATH" $@
else
link $@
fi
exit $?

2971
configure vendored

File diff suppressed because it is too large.

9
doc/.gitignore vendored
View File

@@ -1,9 +0,0 @@
/*.1
/*.3
/*.html
/*.pod
/config.texi
/avoptions_codec.texi
/avoptions_format.texi
/fate.txt
/print_options

File diff suppressed because it is too large.

View File

@@ -31,7 +31,7 @@ PROJECT_NAME = FFmpeg
# This could be handy for archiving the generated documentation or
# if some version control system is used.
- PROJECT_NUMBER = 3.1.1
+ PROJECT_NUMBER = 2.2-rc2
# With the PROJECT_LOGO tag one can specify a logo or icon that is included
# in the documentation. The maximum height of the logo should not exceed 55
@@ -759,7 +759,7 @@ ALPHABETICAL_INDEX = YES
# the COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns
# in which this list will be split (can be a number in the range [1..20])
- COLS_IN_ALPHA_INDEX = 5
+ COLS_IN_ALPHA_INDEX = 2
# In case all classes in a project start with a common prefix, all
# classes will be put under the same header in the alphabetical index.
@@ -1056,7 +1056,7 @@ FORMULA_TRANSPARENT = YES
# typically be disabled. For large projects the javascript based search engine
# can be slow, then enabling SERVER_BASED_SEARCH may provide a better solution.
- SEARCHENGINE = YES
+ SEARCHENGINE = NO
# When the SERVER_BASED_SEARCH tag is enabled the search engine will be
# implemented using a PHP enabled web server instead of at the web client
@@ -1359,9 +1359,6 @@ PREDEFINED = "__attribute__(x)=" \
"DECLARE_ALIGNED(a,t,n)=t n" \
"offsetof(x,y)=0x42" \
av_alloc_size \
- AV_GCC_VERSION_AT_LEAST(x,y)=1 \
- AV_GCC_VERSION_AT_MOST(x,y)=0 \
- __GNUC__=1 \
# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then
# this tag can be used to specify a list of macro names that should be expanded.
@@ -1429,7 +1426,7 @@ PERL_PATH = /usr/bin/perl
#---------------------------------------------------------------------------
# If the CLASS_DIAGRAMS tag is set to YES (the default) Doxygen will
- # generate an inheritance diagram (in HTML, RTF and LaTeX) for classes with base
+ # generate a inheritance diagram (in HTML, RTF and LaTeX) for classes with base
# or super classes. Setting the tag to NO turns the diagrams off. Note that
# this option is superseded by the HAVE_DOT option below. This is only a
# fallback. It is recommended to install and use dot, since it yields more

View File

@@ -36,23 +36,18 @@ DOCS-$(CONFIG_MANPAGES) += $(MANPAGES)
DOCS-$(CONFIG_TXTPAGES) += $(TXTPAGES) DOCS-$(CONFIG_TXTPAGES) += $(TXTPAGES)
DOCS = $(DOCS-yes) DOCS = $(DOCS-yes)
DOC_EXAMPLES-$(CONFIG_AVIO_DIR_CMD_EXAMPLE) += avio_dir_cmd
DOC_EXAMPLES-$(CONFIG_AVIO_READING_EXAMPLE) += avio_reading DOC_EXAMPLES-$(CONFIG_AVIO_READING_EXAMPLE) += avio_reading
DOC_EXAMPLES-$(CONFIG_AVCODEC_EXAMPLE) += avcodec DOC_EXAMPLES-$(CONFIG_AVCODEC_EXAMPLE) += avcodec
DOC_EXAMPLES-$(CONFIG_DECODING_ENCODING_EXAMPLE) += decoding_encoding
DOC_EXAMPLES-$(CONFIG_DEMUXING_DECODING_EXAMPLE) += demuxing_decoding DOC_EXAMPLES-$(CONFIG_DEMUXING_DECODING_EXAMPLE) += demuxing_decoding
DOC_EXAMPLES-$(CONFIG_EXTRACT_MVS_EXAMPLE) += extract_mvs
DOC_EXAMPLES-$(CONFIG_FILTER_AUDIO_EXAMPLE) += filter_audio DOC_EXAMPLES-$(CONFIG_FILTER_AUDIO_EXAMPLE) += filter_audio
DOC_EXAMPLES-$(CONFIG_FILTERING_AUDIO_EXAMPLE) += filtering_audio DOC_EXAMPLES-$(CONFIG_FILTERING_AUDIO_EXAMPLE) += filtering_audio
DOC_EXAMPLES-$(CONFIG_FILTERING_VIDEO_EXAMPLE) += filtering_video DOC_EXAMPLES-$(CONFIG_FILTERING_VIDEO_EXAMPLE) += filtering_video
DOC_EXAMPLES-$(CONFIG_METADATA_EXAMPLE) += metadata DOC_EXAMPLES-$(CONFIG_METADATA_EXAMPLE) += metadata
DOC_EXAMPLES-$(CONFIG_MUXING_EXAMPLE) += muxing DOC_EXAMPLES-$(CONFIG_MUXING_EXAMPLE) += muxing
DOC_EXAMPLES-$(CONFIG_QSVDEC_EXAMPLE) += qsvdec
DOC_EXAMPLES-$(CONFIG_REMUXING_EXAMPLE) += remuxing DOC_EXAMPLES-$(CONFIG_REMUXING_EXAMPLE) += remuxing
DOC_EXAMPLES-$(CONFIG_RESAMPLING_AUDIO_EXAMPLE) += resampling_audio DOC_EXAMPLES-$(CONFIG_RESAMPLING_AUDIO_EXAMPLE) += resampling_audio
DOC_EXAMPLES-$(CONFIG_SCALING_VIDEO_EXAMPLE) += scaling_video DOC_EXAMPLES-$(CONFIG_SCALING_VIDEO_EXAMPLE) += scaling_video
DOC_EXAMPLES-$(CONFIG_TRANSCODE_AAC_EXAMPLE) += transcode_aac DOC_EXAMPLES-$(CONFIG_TRANSCODE_AAC_EXAMPLE) += transcode_aac
DOC_EXAMPLES-$(CONFIG_TRANSCODING_EXAMPLE) += transcoding
ALL_DOC_EXAMPLES_LIST = $(DOC_EXAMPLES-) $(DOC_EXAMPLES-yes) ALL_DOC_EXAMPLES_LIST = $(DOC_EXAMPLES-) $(DOC_EXAMPLES-yes)
DOC_EXAMPLES := $(DOC_EXAMPLES-yes:%=doc/examples/%$(PROGSSUF)$(EXESUF)) DOC_EXAMPLES := $(DOC_EXAMPLES-yes:%=doc/examples/%$(PROGSSUF)$(EXESUF))
@@ -84,25 +79,14 @@ $(GENTEXI): doc/avoptions_%.texi: doc/print_options$(HOSTEXESUF)
$(M)doc/print_options $* > $@ $(M)doc/print_options $* > $@
doc/%.html: TAG = HTML doc/%.html: TAG = HTML
doc/%-all.html: TAG = HTML
ifdef HAVE_MAKEINFO_HTML
doc/%.html: doc/%.texi $(SRC_PATH)/doc/t2h.pm $(GENTEXI)
$(Q)$(TEXIDEP)
$(M)makeinfo --html -I doc --no-split -D config-not-all --init-file=$(SRC_PATH)/doc/t2h.pm --output $@ $<
doc/%-all.html: doc/%.texi $(SRC_PATH)/doc/t2h.pm $(GENTEXI)
$(Q)$(TEXIDEP)
$(M)makeinfo --html -I doc --no-split -D config-all --init-file=$(SRC_PATH)/doc/t2h.pm --output $@ $<
else
doc/%.html: doc/%.texi $(SRC_PATH)/doc/t2h.init $(GENTEXI) doc/%.html: doc/%.texi $(SRC_PATH)/doc/t2h.init $(GENTEXI)
$(Q)$(TEXIDEP) $(Q)$(TEXIDEP)
$(M)texi2html -I doc -monolithic --D=config-not-all --init-file $(SRC_PATH)/doc/t2h.init --output $@ $< $(M)texi2html -I doc -monolithic --D=config-not-all --init-file $(SRC_PATH)/doc/t2h.init --output $@ $<
doc/%-all.html: TAG = HTML
doc/%-all.html: doc/%.texi $(SRC_PATH)/doc/t2h.init $(GENTEXI) doc/%-all.html: doc/%.texi $(SRC_PATH)/doc/t2h.init $(GENTEXI)
$(Q)$(TEXIDEP) $(Q)$(TEXIDEP)
$(M)texi2html -I doc -monolithic --D=config-all --init-file $(SRC_PATH)/doc/t2h.init --output $@ $< $(M)texi2html -I doc -monolithic --D=config-all --init-file $(SRC_PATH)/doc/t2h.init --output $@ $<
endif
doc/%.pod: TAG = POD doc/%.pod: TAG = POD
doc/%.pod: doc/%.texi $(SRC_PATH)/doc/texi2pod.pl $(GENTEXI) doc/%.pod: doc/%.texi $(SRC_PATH)/doc/texi2pod.pl $(GENTEXI)
@@ -116,20 +100,18 @@ doc/%-all.pod: doc/%.texi $(SRC_PATH)/doc/texi2pod.pl $(GENTEXI)
doc/%.1 doc/%.3: TAG = MAN doc/%.1 doc/%.3: TAG = MAN
doc/%.1: doc/%.pod $(GENTEXI) doc/%.1: doc/%.pod $(GENTEXI)
$(M)pod2man --section=1 --center=" " --release=" " --date=" " $< > $@ $(M)pod2man --section=1 --center=" " --release=" " $< > $@
doc/%.3: doc/%.pod $(GENTEXI) doc/%.3: doc/%.pod $(GENTEXI)
$(M)pod2man --section=3 --center=" " --release=" " --date=" " $< > $@ $(M)pod2man --section=3 --center=" " --release=" " $< > $@
$(DOCS) doc/doxy/html: | doc/ $(DOCS) doc/doxy/html: | doc/
$(DOC_EXAMPLES:%$(EXESUF)=%.o): | doc/examples $(DOC_EXAMPLES:%$(EXESUF)=%.o): | doc/examples
OBJDIRS += doc/examples OBJDIRS += doc/examples
DOXY_INPUT = $(INSTHEADERS) $(DOC_EXAMPLES:%$(EXESUF)=%.c) $(LIB_EXAMPLES:%$(EXESUF)=%.c) DOXY_INPUT = $(addprefix $(SRC_PATH)/, $(INSTHEADERS) $(DOC_EXAMPLES:%$(EXESUF)=%.c) $(LIB_EXAMPLES:%$(EXESUF)=%.c))
DOXY_INPUT_DEPS = $(addprefix $(SRC_PATH)/, $(DOXY_INPUT))
doc/doxy/html: TAG = DOXY doc/doxy/html: $(SRC_PATH)/doc/Doxyfile $(DOXY_INPUT)
doc/doxy/html: $(SRC_PATH)/doc/Doxyfile $(SRC_PATH)/doc/doxy-wrapper.sh $(DOXY_INPUT_DEPS) $(M)$(SRC_PATH)/doc/doxy-wrapper.sh $(SRC_PATH) $< $(DOXY_INPUT)
$(M)OUT_DIR=$$PWD/doc/doxy; cd $(SRC_PATH); ./doc/doxy-wrapper.sh $$OUT_DIR $< $(DOXYGEN) $(DOXY_INPUT);
install-doc: install-html install-man install-doc: install-html install-man

16
doc/RELEASE_NOTES Normal file
View File

@@ -0,0 +1,16 @@
Release Notes
=============
* 2.2 "Muybridge" March, 2014
General notes
-------------
See the Changelog file for a list of significant changes. Note that there
are many more new features and bug fixes than what's listed there.
Bug reports against FFmpeg git master or the most recent FFmpeg release are
accepted. If you are experiencing issues with any formally released version of
FFmpeg, please try git master to check if the issue still exists. If it does,
make your report against the development code following the usual bug reporting
guidelines.

View File

@@ -13,16 +13,7 @@ bitstream filter using the option @code{--disable-bsf=BSF}.
The option @code{-bsfs} of the ff* tools will display the list of
all the supported bitstream filters included in your build.
- The ff* tools have a -bsf option applied per stream, taking a
- comma-separated list of filters, whose parameters follow the filter
- name after a '='.
- @example
- ffmpeg -i INPUT -c:v copy -bsf:v filter1[=opt1=str1/opt2=str2][,filter2] OUTPUT
- @end example
- Below is a description of the currently available bitstream filters,
- with their parameters, if any.
+ Below is a description of the currently available bitstream filters.
@section aac_adtstoasc
@@ -67,10 +58,6 @@ the header stored in extradata to the key packets:
ffmpeg -i INPUT -map 0 -flags:v +global_header -c:v libx264 -bsf:v dump_extra out.ts
@end example
- @section dca_core
- Extract DCA core from DTS-HD streams.
@section h264_mp4toannexb
Convert an H.264 bitstream from length prefixed mode to start code
@@ -87,18 +74,7 @@ format with @command{ffmpeg}, you can use the command:
ffmpeg -i INPUT.mp4 -codec copy -bsf:v h264_mp4toannexb OUTPUT.ts
@end example
- @section imxdump
- Modifies the bitstream to fit in MOV and to be usable by the Final Cut
- Pro decoder. This filter only applies to the mpeg2video codec, and is
- likely not needed for Final Cut Pro 7 and newer with the appropriate
- @option{-tag:v}.
- For example, to remux 30 MB/sec NTSC IMX to MOV:
- @example
- ffmpeg -i input.mxf -c copy -bsf:v imxdump -tag:v mx3n output.mov
- @end example
+ @section imx_dump_header
@section mjpeg2jpeg
@@ -143,42 +119,8 @@ ffmpeg -i frame_%d.jpg -c:v copy rotated.avi
@section mp3_header_decompress
- @section mpeg4_unpack_bframes
- Unpack DivX-style packed B-frames.
- DivX-style packed B-frames are not valid MPEG-4 and were only a
- workaround for the broken Video for Windows subsystem.
- They use more space, can cause minor AV sync issues, require more
- CPU power to decode (unless the player has some decoded picture queue
- to compensate the 2,0,2,0 frame per packet style) and cause
- trouble if copied into a standard container like mp4 or mpeg-ps/ts,
- because MPEG-4 decoders may not be able to decode them, since they are
- not valid MPEG-4.
- For example to fix an AVI file containing an MPEG-4 stream with
- DivX-style packed B-frames using @command{ffmpeg}, you can use the command:
- @example
- ffmpeg -i INPUT.avi -codec copy -bsf:v mpeg4_unpack_bframes OUTPUT.avi
- @end example
@section noise
- Damages the contents of packets without damaging the container. Can be
- used for fuzzing or testing error resilience/concealment.
- Parameters:
- A numeral string, whose value is related to how often output bytes will
- be modified. Therefore, values below or equal to 0 are forbidden, and
- the lower the more frequent bytes will be modified, with 1 meaning
- every byte is modified.
- @example
- ffmpeg -i INPUT -c copy -bsf noise[=1] output.mkv
- @end example
- applies the modification to every byte.
@section remove_extra
@c man end BITSTREAM FILTERS

File diff suppressed because one or more lines are too long

View File

@@ -7,55 +7,44 @@ V
Disable the default terse mode, the full command issued by make and its
output will be shown on the screen.
- DBG
- Preprocess x86 external assembler files to a .dbg.asm file in the object
- directory, which then gets compiled. Helps in developing those assembler
- files.
DESTDIR
Destination directory for the install targets, useful to prepare packages
or install FFmpeg in cross-environments.
- GEN
- Set to 1 to generate the missing or mismatched references.
Makefile targets:
all
Default target, builds all the libraries and the executables.
fate
- Run the fate test suite, note that you must have installed it.
+ Run the fate test suite, note you must have installed it
fate-list
- List all fate/regression test targets.
+ Will list all fate/regression test targets
install
Install headers, libraries and programs.
- examples
- Build all examples located in doc/examples.
libavformat/output-example
Build the libavformat basic example.
- libswscale/swscale-test
- Build the swscale self-test (useful also as an example).
- config
- Reconfigure the project with the current configuration.
+ libavcodec/api-example
+ Build the libavcodec basic example.
+ libswscale/swscale-test
+ Build the swscale self-test (useful also as example).
Useful standard make commands:
make -t <target>
- Touch all files that otherwise would be built, this is useful to reduce
- unneeded rebuilding when changing headers, but note that you must force rebuilds
+ Touch all files that otherwise would be build, this is useful to reduce
+ unneeded rebuilding when changing headers, but note you must force rebuilds
of files that actually need it by hand then.
make -j<num>
- Rebuild with multiple jobs at the same time. Faster on multi processor systems.
+ rebuild with multiple jobs at the same time. Faster on multi processor systems
make -k
- Continue build in case of errors, this is useful for the regression tests
- sometimes but note that it will still not run all reg tests.
+ continue build in case of errors, this is useful for the regression tests
+ sometimes but note it will still not run all reg tests.

View File

@@ -7,7 +7,7 @@ all the encoders and decoders. In addition each codec may support
so-called private options, which are specific for a given codec. so-called private options, which are specific for a given codec.
Sometimes, a global option may only affect a specific kind of codec, Sometimes, a global option may only affect a specific kind of codec,
and may be nonsensical or ignored by another, so you need to be aware and may be unsensical or ignored by another, so you need to be aware
of the meaning of the specified options. Also some options are of the meaning of the specified options. Also some options are
meant only for decoding or encoding. meant only for decoding or encoding.
@@ -71,9 +71,7 @@ Force low delay.
@item global_header @item global_header
Place global headers in extradata instead of every keyframe. Place global headers in extradata instead of every keyframe.
@item bitexact @item bitexact
Only write platform-, build- and time-independent data. (except (I)DCT). Use only bitexact stuff (except (I)DCT).
This ensures that file and data checksums are reproducible and match between
platforms. Its primary use is for regression testing.
@item aic @item aic
Apply H263 advanced intra coding / mpeg4 ac prediction. Apply H263 advanced intra coding / mpeg4 ac prediction.
@item cbp @item cbp
@@ -129,7 +127,7 @@ should be @code{1 / frame_rate} and timestamp increments should be
identically 1.
@item g @var{integer} (@emph{encoding,video})
Set the group of picture (GOP) size. Default value is 12.
@item ar @var{integer} (@emph{decoding/encoding,audio})
Set audio sampling rate (in Hz).
@@ -257,7 +255,7 @@ Specify how strictly to follow the standards.
Possible values:
@table @samp
@item very
strictly conform to an older more strict version of the spec or reference software
@item strict
strictly conform to all the things in the spec no matter what consequences
@item normal
@@ -287,11 +285,6 @@ detect bitstream specification deviations
detect improper bitstream length
@item explode
abort decoding on minor error detection
@item ignore_err
ignore decoding errors, and continue decoding.
This is useful if you want to analyze the content of a video and thus want
everything to be decoded no matter what. This option will not result in a video
that is pleasing to watch in case of errors.
@item careful
consider things that violate the spec and have not been seen in the wild as errors
@item compliant
@@ -396,9 +389,6 @@ Possible values:
@item simplemmx
@item simpleauto
Automatically pick an IDCT compatible with the simple one
@item arm
@item altivec
@@ -434,8 +424,6 @@ Possible values:
iterative motion vector (MV) search (slow)
@item deblock
use strong deblock filter for damaged MBs
@item favor_inter
favor predicting from the previous frame instead of the current
@end table
@item bits_per_coded_sample @var{integer}
@@ -456,9 +444,6 @@ Possible values:
@item aspect @var{rational number} (@emph{encoding,video})
Set sample aspect ratio.
@item sar @var{rational number} (@emph{encoding,video})
Set sample aspect ratio. Alias to @var{aspect}.
@item debug @var{flags} (@emph{decoding/encoding,audio,video,subtitles})
Print specific debug info.
@@ -478,9 +463,6 @@ per-block quantization parameter (QP)
motion vector
@item dct_coeff
@item green_metadata
display complexity metadata for the upcoming frame, GoP or for a given duration.
@item skip
@item startcode
@@ -501,15 +483,11 @@ visualize block types
picture buffer allocations
@item thread_ops
threading operations
@item nomc
skip motion compensation
@end table
@item vismv @var{integer} (@emph{decoding,video})
Visualize motion vectors (MVs).
This option is deprecated, see the codecview filter instead.
Possible values:
@table @samp
@item pf
@@ -809,9 +787,6 @@ Frame data might be split into multiple chunks.
Show all frames before the first keyframe.
@item skiprd
Deprecated, use mpegvideo private options instead.
@item export_mvs
Export motion vectors into frame side-data (see @code{AV_FRAME_DATA_MOTION_VECTORS})
for codecs that support it. See also @file{doc/examples/export_mvs.c}.
@end table
@item error @var{integer} (@emph{encoding,video})
@@ -820,17 +795,13 @@ for codecs that support it. See also @file{doc/examples/export_mvs.c}.
Deprecated, use mpegvideo private options instead.
@item threads @var{integer} (@emph{decoding/encoding,video})
Set the number of threads to be used, in case the selected codec
implementation supports multi-threading.
Possible values:
@table @samp
@item auto, 0
automatically select the number of threads to set
@end table
Default value is @samp{auto}.
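For example, a sketch of letting the codec pick the thread count itself (file names and the libx264 encoder are only illustrative; @samp{0} is equivalent to @samp{auto}):
@example
ffmpeg -i input.mkv -c:v libx264 -threads 0 output.mkv
@end example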
@item me_threshold @var{integer} (@emph{encoding,video})
Set motion estimation threshold.
@@ -875,14 +846,6 @@ Possible values:
@item mpeg2_aac_he
@item mpeg4_sp
@item mpeg4_core
@item mpeg4_main
@item mpeg4_asp
@item dts
@item dts_es
@@ -916,7 +879,7 @@ Set frame skip factor.
Set frame skip exponent.
Negative values behave identically to the corresponding positive ones, except
that the score is normalized.
Positive values exist primarily for compatibility reasons and are not so useful.
@item skipcmp @var{integer} (@emph{encoding,video})
Set frame skip compare function.
@@ -1050,44 +1013,9 @@ Possible values:
@item rc_min_vbv_use @var{float} (@emph{encoding,video})
@item ticks_per_frame @var{integer} (@emph{decoding/encoding,audio,video})
@item color_primaries @var{integer} (@emph{decoding/encoding,video})
@item color_trc @var{integer} (@emph{decoding/encoding,video})
Possible values:
@table @samp
@item bt709
BT.709
@item gamma22
BT.470 M
@item gamma28
BT.470 BG
@item smpte170m
SMPTE 170 M
@item smpte240m
SMPTE 240 M
@item linear
Linear
@item log
Log
@item log_sqrt
Log square root
@item iec61966_2_4
IEC 61966-2-4
@item bt1361
BT.1361
@item iec61966_2_1
IEC 61966-2-1
@item bt2020_10bit
BT.2020 - 10 bit
@item bt2020_12bit
BT.2020 - 12 bit
@item smpte2084
SMPTE ST 2084
@item smpte428_1
SMPTE ST 428-1
@end table
@item colorspace @var{integer} (@emph{decoding/encoding,video})
@item color_range @var{integer} (@emph{decoding/encoding,video})
If used as an input parameter, it serves as a hint to the decoder about the
color_range of the input.
@item chroma_sample_location @var{integer} (@emph{decoding/encoding,video})
@item log_level_offset @var{integer}
@@ -1097,26 +1025,15 @@ Set the log level offset.
Number of slices, used in parallelized encoding.
@item thread_type @var{flags} (@emph{decoding/encoding,video})
Select which multithreading methods to use.
Use of @samp{frame} will increase decoding delay by one frame per
thread, so clients which cannot provide future frames should not use
it.
Possible values:
@table @samp
@item slice
Decode more than one part of a single frame at once.
Multithreading using slices works only when the video was encoded with
slices.
@item frame
Decode more than one frame at once.
@end table
Default value is @samp{slice+frame}.
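To keep decoding latency down one might restrict the decoder to slice threading only; a minimal sketch (the input name is hypothetical):
@example
ffmpeg -thread_type slice -i input.mkv -f null -
@end example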
@item audio_service_type @var{integer} (@emph{encoding,audio})
Set audio service type.
@@ -1171,19 +1088,6 @@ Interlaced video, bottom coded first, top displayed first
Set to 1 to disable processing alpha (transparency). This works like the
@samp{gray} flag in the @option{flags} option which skips chroma information
instead of alpha. Default is 0.
@item codec_whitelist @var{list} (@emph{input})
"," separated List of allowed decoders. By default all are allowed.
@item dump_separator @var{string} (@emph{input})
Separator used to separate the fields printed on the command line about the
Stream parameters.
For example to separate the fields with newlines and indention:
@example
ffprobe -dump_separator "
" -i ~/videos/matrixbench_mpeg2.mpg
@end example
@end table
@c man end CODEC OPTIONS


@@ -25,13 +25,6 @@ enabled decoders.
A description of some of the currently available video decoders
follows.
@section hevc
HEVC / H.265 decoder.
Note: the @option{skip_loop_filter} option has effect only at level
@code{all}.
@section rawvideo
Raw video decoder.
@@ -69,7 +62,7 @@ AC-3 audio decoder.
This decoder implements part of ATSC A/52:2010 and ETSI TS 102 366, as well as
the undocumented RealAudio 3 (a.k.a. dnet).
@subsection AC-3 Decoder Options
@table @option
@@ -90,23 +83,6 @@ Loud sounds are fully compressed. Soft sounds are enhanced.
@end table
@section flac
FLAC audio decoder.
This decoder aims to implement the complete FLAC specification from Xiph.
@subsection FLAC Decoder options
@table @option
@item -use_buggy_lpc
The lavc FLAC encoder used to produce buggy streams with high lpc values
(like the default value). This option makes it possible to decode such streams
correctly by using lavc's old buggy lpc logic for decoding.
@end table
@section ffwavesynth
Internal wave synthesizer.
@@ -187,33 +163,11 @@ Requires the presence of the libopus headers and library during
configuration. You need to explicitly configure the build with
@code{--enable-libopus}.
An FFmpeg native decoder for Opus exists, so users can decode Opus
without this library.
@c man end AUDIO DECODERS
@chapter Subtitles Decoders
@c man begin SUBTILES DECODERS
@section dvbsub
@subsection Options
@table @option
@item compute_clut
@table @option
@item -1
Compute clut if no matching CLUT is in the stream.
@item 0
Never compute CLUT
@item 1
Always compute CLUT and override the one provided in the stream.
@end table
@item dvb_substream
Selects the dvb substream, or all substreams if -1 which is default.
@end table
@section dvdsub
This codec decodes the bitmap subtitles used in DVDs; the same subtitles can
@@ -233,15 +187,6 @@ The format for this option is a string containing 16 24-bits hexadecimal
numbers (without 0x prefix) separated by commas, for example @code{0d00ee,
ee450d, 101010, eaeaea, 0ce60b, ec14ed, ebff0b, 0d617a, 7b7b7b, d1d1d1,
7b2a0e, 0d950c, 0f007b, cf0dec, cfa80c, 7c127b}.
@item ifo_palette
Specify the IFO file from which the global palette is obtained.
(experimental)
@item forced_subs_only
Only decode subtitle entries marked as forced. Some titles have forced
and non-forced subtitles in the same track. Setting this flag to @code{1}
will only keep the forced subtitles. Default value is @code{0}.
@end table
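As a rough sketch of overriding the palette while burning DVD subtitles into the video (the file name, the palette values and the filter graph are only illustrative):
@example
ffmpeg -palette 0d00ee,ee450d,101010,eaeaea,0ce60b,ec14ed,ebff0b,0d617a,7b7b7b,d1d1d1,7b2a0e,0d950c,0f007b,cf0dec,cfa80c,7c127b -i input.vob -filter_complex "[0:v][0:s:0]overlay" -an out.mp4
@end example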
@section libzvbi-teletext
@@ -282,13 +227,7 @@ Sets the display duration of the decoded teletext pages or subtitles in
milliseconds. Default value is 30000 which is 30 seconds.
@item txt_transparent
Force transparent background of the generated teletext bitmaps. Default value
is 0 which means an opaque background.
@item txt_opacity
Sets the opacity (0-255) of the teletext background. If
@option{txt_transparent} is not set, it only affects characters between a start
box and an end box, typically subtitles. Default value is 0 if
@option{txt_transparent} is set, 255 otherwise.
@end table
@c man end SUBTILES DECODERS


@@ -18,12 +18,6 @@ enabled demuxers.
The description of some of the currently available demuxers follows.
@section aa
Audible Format 2, 3, and 4 demuxer.
This demuxer is used to demux Audible Format 2, 3, and 4 (.aa) files.
@section applehttp
Apple HTTP Live Streaming demuxer.
@@ -35,26 +29,6 @@ the caller can decide which variant streams to actually receive.
The total bitrate of the variant that the stream belongs to is
available in a metadata key named "variant_bitrate".
@section apng
Animated Portable Network Graphics demuxer.
This demuxer is used to demux APNG files.
All headers, but the PNG signature, up to (but not including) the first
fcTL chunk are transmitted as extradata.
Frames are then split as being all the chunks between two fcTL ones, or
between the last fcTL and IEND chunks.
@table @option
@item -ignore_loop @var{bool}
Ignore the loop variable in the file if set.
@item -max_fps @var{int}
Maximum framerate in frames per second (0 for no limit).
@item -default_fps @var{int}
Default framerate in frames per second when none is specified in the file
(0 meaning as fast as possible).
@end table
@section asf
Advanced Systems Format demuxer.
@@ -100,11 +74,11 @@ following directive is recognized:
Path to a file to read; special characters and spaces must be escaped with
backslash or single quotes.
All subsequent file-related directives apply to that file.
@item @code{ffconcat version 1.0}
Identify the script type and version. It also sets the @option{safe} option
to 1 if it was -1.
To make FFmpeg recognize the format automatically, this directive must
appear exactly as is (no extra space or byte-order-mark) on the very first
@@ -118,63 +92,6 @@ file is not available or accurate.
If the duration is set for all files, then it is possible to seek in the
whole concatenated video.
@item @code{inpoint @var{timestamp}}
In point of the file. When the demuxer opens the file it instantly seeks to the
specified timestamp. Seeking is done so that all streams can be presented
successfully at In point.
This directive works best with intra frame codecs, because for non-intra frame
ones you will usually get extra packets before the actual In point and the
decoded content will most likely contain frames before In point too.
For each file, packets before the file In point will have timestamps less than
the calculated start timestamp of the file (negative in case of the first
file), and the duration of the files (if not specified by the @code{duration}
directive) will be reduced based on their specified In point.
Because of potential packets before the specified In point, packet timestamps
may overlap between two concatenated files.
@item @code{outpoint @var{timestamp}}
Out point of the file. When the demuxer reaches the specified decoding
timestamp in any of the streams, it handles it as an end of file condition and
skips the current and all the remaining packets from all streams.
Out point is exclusive, which means that the demuxer will not output packets
with a decoding timestamp greater or equal to Out point.
This directive works best with intra frame codecs and formats where all streams
are tightly interleaved. For non-intra frame codecs you will usually get
additional packets with presentation timestamp after Out point therefore the
decoded content will most likely contain frames after Out point too. If your
streams are not tightly interleaved you may not get all the packets from all
streams before Out point and you may only be able to decode the earliest
stream until Out point.
The duration of the files (if not specified by the @code{duration}
directive) will be reduced based on their specified Out point.
@item @code{file_packet_metadata @var{key=value}}
Metadata of the packets of the file. The specified metadata will be set for
each file packet. You can specify this directive multiple times to add multiple
metadata entries.
@item @code{stream}
Introduce a stream in the virtual file.
All subsequent stream-related directives apply to the last introduced
stream.
Some streams properties must be set in order to allow identifying the
matching streams in the subfiles.
If no streams are defined in the script, the streams from the first file are
copied.
@item @code{exact_stream_id @var{id}}
Set the id of the stream.
If this directive is given, the string with the corresponding id in the
subfiles will be used.
This is especially useful for MPEG-PS (VOB) files, where the order of the
streams is not reliable.
@end table
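A rough sketch of a script using these directives (the file names and timestamps are made up):
@example
ffconcat version 1.0
file intro.mp4
inpoint 1.5
outpoint 10.0
file main.mp4
@end example
It would typically be read with something like @code{ffmpeg -f concat -i list.ffconcat -c copy out.mp4}.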
@subsection Options
@@ -192,57 +109,11 @@ component.
If set to 0, any file name is accepted.
The default is 1.
-1 is equivalent to 1 if the format was automatically
probed and 0 otherwise.
@item auto_convert
If set to 1, try to perform automatic conversions on packet data to make the
streams concatenable.
The default is 1.
Currently, the only conversion is adding the h264_mp4toannexb bitstream
filter to H.264 streams in MP4 format. This is necessary in particular if
there are resolution changes.
@item segment_time_metadata
If set to 1, every packet will contain the @var{lavf.concat.start_time} and the
@var{lavf.concat.duration} packet metadata values which are the start_time and
the duration of the respective file segments in the concatenated output
expressed in microseconds. The duration metadata is only set if it is known
based on the concat file.
The default is 0.
@end table
@subsection Examples
@itemize
@item
Use absolute filenames and include some comments:
@example
# my first filename
file /mnt/share/file-1.wav
# my second filename including whitespace
file '/mnt/share/file 2.wav'
# my third filename including whitespace plus single quote
file '/mnt/share/file 3'\''.wav'
@end example
@item
Allow for input format auto-probing, use safe filenames and set the duration of
the first file:
@example
ffconcat version 1.0
file file-1.wav
duration 20.0
file subdir/file-2.wav
@end example
@end itemize
@section flv
Adobe Flash Video Format demuxer.
@@ -267,44 +138,17 @@ track. Track indexes start at 0. The demuxer exports the number of tracks as
For very large files, the @option{max_size} option may have to be adjusted.
@section gif
Animated GIF demuxer.
It accepts the following options:
@table @option
@item min_delay
Set the minimum valid delay between frames in hundredths of seconds.
Range is 0 to 6000. Default value is 2.
@item max_gif_delay
Set the maximum valid delay between frames in hundredths of seconds.
Range is 0 to 65535. Default value is 65535 (nearly eleven minutes),
the maximum value allowed by the specification.
@item default_delay
Set the default delay between frames in hundredths of seconds.
Range is 0 to 6000. Default value is 10.
@item ignore_loop
GIF files can contain information to loop a certain number of times (or
infinitely). If @option{ignore_loop} is set to 1, then the loop setting
from the input will be ignored and looping will not occur. If set to 0,
then looping will occur and will cycle the number of times according to
the GIF. Default value is 1.
@end table
For example, with the overlay filter, place an infinitely looping GIF
over another video:
@example
ffmpeg -i input.mp4 -ignore_loop 0 -i input.gif -filter_complex overlay=shortest=1 out.mkv
@end example
Note that in the above example the shortest option for overlay filter is
used to end the output video at the length of the shortest input file,
which in this case is @file{input.mp4} as the GIF in this example loops
infinitely.
@section image2
@@ -331,10 +175,6 @@ Select the pattern type used to interpret the provided filename.
@var{pattern_type} accepts one of the following values.
@table @option
@item none
Disable pattern matching, therefore the video will only contain the specified
image. You should use this option if you do not want to create sequences from
multiple images and your filenames may contain special pattern characters.
@item sequence
Select a sequence pattern type, used to specify a sequence of files
indexed by sequential numbers.
@@ -409,8 +249,6 @@ is 5.
If set to 1, will set frame timestamp to modification time of image file. Note
that monotonicity of timestamps is not provided: images go in the same order as
without this option. Default value is 0.
If set to 2, will set frame timestamp to the modification time of the image file in
nanosecond precision.
@item video_size
Set the video size of the images to read. If not specified the video
size is guessed from the first image file in the sequence.
@@ -441,69 +279,24 @@ ffmpeg -framerate 10 -pattern_type glob -i "*.png" out.mkv
@end example
@end itemize
@section mov/mp4/3gp/QuickTime
QuickTime / MP4 demuxer.
This demuxer accepts the following options:
@table @option
@item enable_drefs
Enable loading of external tracks, disabled by default.
Enabling this can theoretically leak information in some use cases.
@item use_absolute_path
Allows loading of external tracks via absolute paths, disabled by default.
Enabling this poses a security risk. It should only be enabled if the source
is known to be non malicious.
@end table
@section mpegts
MPEG-2 transport stream demuxer.
This demuxer accepts the following options:
@table @option
@item resync_size
Set size limit for looking up a new synchronization. Default value is
65536.
@item fix_teletext_pts
Override teletext packet PTS and DTS values with the timestamps calculated
from the PCR of the first program which the teletext stream is part of and is
not discarded. Default value is 1, set this option to 0 if you want your
teletext packet PTS and DTS values untouched.
@item ts_packetsize
Output option carrying the raw packet size in bytes.
Show the detected raw packet size, cannot be set by the user.
@item scan_all_pmts
Scan and combine all PMTs. The value is an integer with value from -1
to 1 (-1 means automatic setting, 1 means enabled, 0 means
disabled). Default value is -1.
@end table
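For instance, to keep the original teletext timestamps while remuxing (the input name is only an illustration):
@example
ffmpeg -fix_teletext_pts 0 -i input.ts -c copy output.ts
@end example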
@section mpjpeg
MJPEG encapsulated in multi-part MIME demuxer.
This demuxer allows reading of MJPEG, where each frame is represented as a part of
multipart/x-mixed-replace stream.
@table @option
@item strict_mime_boundary
Default implementation applies a relaxed standard to multi-part MIME boundary detection,
to prevent regression with numerous existing endpoints not generating a proper MIME
MJPEG stream. Turning this option on by setting it to 1 will result in a stricter check
of the boundary value.
@end table
@section rawvideo
Raw video demuxer.
This demuxer allows one to read raw video data. Since there is no header
specifying the assumed video parameters, the user must specify them
in order to be able to decode the data correctly.
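A minimal sketch of supplying those parameters (the geometry, pixel format, frame rate and file names are assumptions about the particular input):
@example
ffmpeg -f rawvideo -pixel_format yuv420p -video_size 320x240 -framerate 25 -i input.yuv output.mp4
@end example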


@@ -1,5 +1,4 @@
\input texinfo @c -*- texinfo -*-
@documentencoding UTF-8
@settitle Developer Documentation
@titlepage
@@ -28,14 +27,14 @@ this document.
For more detailed legal information about the use of FFmpeg in
external programs read the @file{LICENSE} file in the source tree and
consult @url{https://ffmpeg.org/legal.html}.
@section Contributing
There are 3 ways by which code gets into FFmpeg.
@itemize @bullet
@item Submitting patches to the main developer mailing list.
See @ref{Submitting patches} for details.
@item Directly committing changes to the main tree.
@item Committing changes to a git clone, for example on github.com or
gitorious.org, and asking us to merge these changes.
@@ -65,9 +64,6 @@ rejected by the git repository.
@item
You should try to limit your code lines to 80 characters; however, do so if
and only if this improves readability.
@item
K&R coding style is used.
@end itemize
The presentation is one inspired by 'indent -i4 -kr -nut'.
@@ -127,10 +123,10 @@ the @samp{inline} keyword;
@samp{//} comments;
@item
designated struct initializers (@samp{struct s x = @{ .i = 17 @};});
@item
compound literals (@samp{x = (struct s) @{ 17, 23 @};}).
@end itemize
These features are supported by all compilers we care about, so we will not
@@ -159,7 +155,7 @@ GCC statement expressions (@samp{(x = (@{ int y = 4; y; @})}).
All names should be composed with underscores (_), not CamelCase. For example,
@samp{avfilter_get_video_buffer} is an acceptable function name and
@samp{AVFilterGetVideo} is not. The exceptions to this are type names, like
for example structs and enums; they should always be in CamelCase.
There are the following conventions for naming variables and functions:
@@ -231,7 +227,7 @@ autocmd InsertEnter * match ForbiddenWhitespace /\t\|\s\+\%#\@@<!$/
@end example
For Emacs, add these roughly equivalent lines to your @file{.emacs.d/init.el}:
@lisp
(c-add-style "ffmpeg"
'("k&r"
(c-basic-offset . 4)
@@ -242,7 +238,7 @@ For Emacs, add these roughly equivalent lines to your @file{.emacs.d/init.el}:
)
)
(setq c-default-style "ffmpeg")
@end lisp
@section Development Policy
@@ -327,12 +323,9 @@ Always fill out the commit log message. Describe in a few lines what you
changed and why. You can refer to mailing list postings if you fix a
particular bug. Comments such as "fixed!" or "Changed it." are unacceptable.
Recommended format:
@example
area changed: Short 1 line description
details describing what and why and giving references.
@end example
@item
Make sure the author of the commit is set correctly. (see git commit --author)
@@ -397,41 +390,12 @@ or obfuscates the code.
Make sure that no parts of the codebase that you maintain are missing from the
@file{MAINTAINERS} file. If something that you want to maintain is missing add it with
your name after it.
If at some point you no longer want to maintain some code, then please help in
finding a new maintainer and also don't forget to update the @file{MAINTAINERS} file.
@end enumerate
We think our rules are not too hard. If you have comments, contact us.
@section Code of conduct
Be friendly and respectful towards others and third parties.
Treat others the way you yourself want to be treated.
Be considerate. Not everyone shares the same viewpoint and priorities as you do.
Different opinions and interpretations help the project.
Looking at issues from a different perspective assists development.
Do not assume malice for things that can be attributed to incompetence. Even if
it is malice, it's rarely good to start with that as the initial assumption.
Stay friendly even if someone acts contrarily. Everyone has a bad day
once in a while.
If you yourself have a bad day or are angry then try to take a break and reply
once you are calm and without anger if you have to.
Try to help other team members and cooperate if you can.
The goal of software development is to create technical excellence, not for any
individual to be better and "win" against the others. Large software projects
are only possible and successful through teamwork.
If someone struggles do not put them down. Give them a helping hand
instead and point them in the right direction.
Finally, keep in mind the immortal words of Bill and Ted,
"Be excellent to each other."
@anchor{Submitting patches}
@section Submitting patches
@@ -439,7 +403,7 @@ First, read the @ref{Coding Rules} above if you did not yet, in particular
the rules regarding patch submission.
When you submit your patch, please use @code{git format-patch} or
@code{git send-email}. We cannot read other diffs :-).
Also please do not submit a patch which contains several unrelated changes.
Split it into separate, self-contained pieces. This does not mean splitting
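A minimal sketch of that workflow (the patch file name depends on your commit subject):
@example
git format-patch -1 HEAD
git send-email --to ffmpeg-devel@ffmpeg.org 0001-*.patch
@end example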
@@ -462,7 +426,7 @@ Also please if you send several patches, send each patch as a separate mail,
do not attach several unrelated patches to the same mail.
Patches should be posted to the
@uref{https://lists.ffmpeg.org/mailman/listinfo/ffmpeg-devel, ffmpeg-devel}
mailing list. Use @code{git send-email} when possible since it will properly
send patches without requiring extra care. If you cannot, then send patches
as base64-encoded attachments, so your patch is not trashed during
@@ -575,10 +539,6 @@ tools/trasher, the noise bitstream filter, and
should not crash, end in a (near) infinite loop, or allocate ridiculous
amounts of memory when fed damaged data.
@item
Did you test your decoder or demuxer against sample files?
Samples may be obtained at @url{https://samples.ffmpeg.org}.
@item
Does the patch not mix functional and cosmetic changes?
@@ -599,7 +559,7 @@ If the patch fixes a bug, did you provide a verbose analysis of the bug?
If the patch fixes a bug, did you provide enough information, including
a sample, so the bug can be reproduced and the fix can be verified?
Note please do not attach samples >100k to mails but rather provide a
URL, you can upload to ftp://upload.ffmpeg.org.
@item
Did you provide a verbose summary about what the patch does change?
@@ -628,10 +588,10 @@ Lines with similar content should be aligned vertically when doing so
improves readability.
@item
Consider adding a regression test for your code.
@item
If you added YASM code please check that things still work with --disable-yasm.
@item
Make sure you check the return values of functions and return appropriate
@@ -669,10 +629,6 @@ not related to the comments received during review. Such patches will
be rejected. Instead, submit significant changes or new features as
separate patches.
Everyone is welcome to review patches. Also if you are waiting for your patch
to be reviewed, please consider helping to review other patches; that is a great
way to get everyone's patches reviewed sooner.
@anchor{Regression tests}
@section Regression tests
@@ -688,14 +644,15 @@ accordingly].
@subsection Adding files to the fate-suite dataset
When there is no muxer or encoder available to generate test media for a
specific test then the media has to be included in the fate-suite.
First please make sure that the sample file is as small as possible to test the
respective decoder or demuxer sufficiently. Large files increase network
bandwidth and disk space requirements.
Once you have a working fate test and fate sample, provide in the commit
message or introductory message for the patch series that you post to
the ffmpeg-devel mailing list, a direct link to download the sample media.
@subsection Visualizing Test Coverage
The FFmpeg build system allows visualizing the test coverage in an easy
@@ -743,7 +700,7 @@ FFmpeg maintains a set of @strong{release branches}, which are the
recommended deliverable for system integrators and distributors (such as
Linux distributions, etc.). At regular times, a @strong{release
manager} prepares, tests and publishes tarballs on the
@url{https://ffmpeg.org} website.
There are two kinds of releases:
@@ -822,7 +779,7 @@ Prepare the release tarballs in @code{bz2} and @code{gz} formats, and
supplementing files that contain @code{gpg} signatures
@item
Publish the tarballs at @url{https://ffmpeg.org/releases}. Create and
push an annotated tag in the form @code{nX}, with @code{X}
containing the version number.
@@ -834,7 +791,7 @@ with a news entry for the website.
Publish the news entry.
@item
Send an announcement to the mailing list.
@end enumerate
@bye


@@ -1,21 +1,12 @@
#!/bin/sh
OUT_DIR="${1}"
DOXYFILE="${2}"
DOXYGEN="${3}"
shift 3
if [ -e "VERSION" ]; then
VERSION=`cat "VERSION"`
else
VERSION=`git describe`
fi
$DOXYGEN - <<EOF
@INCLUDE = ${DOXYFILE}
INPUT = $@
HTML_TIMESTAMP = NO
PROJECT_NUMBER = $VERSION
OUTPUT_DIRECTORY = $OUT_DIR
EOF
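For reference, a hypothetical invocation of this wrapper (the real arguments come from the build system and are not shown here):

doc/doxy-wrapper.sh doc/doxy doc/Doxyfile doxygen libavutil libavcodec

The first three arguments become OUT_DIR, DOXYFILE and DOXYGEN; everything remaining after the shift is passed to doxygen as INPUT.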

doc/doxy/.gitignore

@@ -1 +0,0 @@
/html/

File diff suppressed because it is too large


@@ -1,16 +0,0 @@
/avio_dir_cmd
/avio_reading
/decoding_encoding
/demuxing_decoding
/extract_mvs
/filter_audio
/filtering_audio
/filtering_video
/metadata
/muxing
/pc-uninstalled
/remuxing
/resampling_audio
/scaling_video
/transcode_aac
/transcoding


@@ -11,27 +11,22 @@ CFLAGS += -Wall -g
CFLAGS := $(shell pkg-config --cflags $(FFMPEG_LIBS)) $(CFLAGS)
LDLIBS := $(shell pkg-config --libs $(FFMPEG_LIBS)) $(LDLIBS)
EXAMPLES= avio_dir_cmd \
avio_reading \
decoding_encoding \
demuxing_decoding \
extract_mvs \
filtering_video \
filtering_audio \
http_multiclient \
metadata \
muxing \
remuxing \
resampling_audio \
scaling_video \
transcode_aac \
transcoding \
OBJS=$(addsuffix .o,$(EXAMPLES))
# the following examples make explicit use of the math library
avcodec: LDLIBS += -lm
decoding_encoding: LDLIBS += -lm
muxing: LDLIBS += -lm
resampling_audio: LDLIBS += -lm
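Alternatively, a single example can be built by hand against an installed FFmpeg; a sketch under the assumption that pkg-config can find the libraries (the example name and library list are illustrative):

cc -o demuxing_decoding demuxing_decoding.c $(pkg-config --cflags --libs libavformat libavcodec libswscale libavutil) -lm

which mirrors what the pkg-config based CFLAGS/LDLIBS above do.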


@@ -24,10 +24,10 @@
* @file
* libavcodec API use example.
*
* @example decoding_encoding.c
* Note that libavcodec only handles codecs (MPEG, MPEG-4, etc...),
* not file formats (AVI, VOB, MP4, MOV, MKV, MXF, FLV, MPEG-TS, MPEG-PS, etc...).
* See library 'libavformat' for the format handling
*/
#include <math.h>
@@ -211,7 +211,7 @@ static void audio_encode_example(const char *filename)
}
if (got_output) {
fwrite(pkt.data, 1, pkt.size, f);
av_packet_unref(&pkt);
}
}
@@ -225,7 +225,7 @@ static void audio_encode_example(const char *filename)
if (got_output) {
fwrite(pkt.data, 1, pkt.size, f);
av_packet_unref(&pkt);
}
}
fclose(f);
@@ -245,7 +245,7 @@ static void audio_decode_example(const char *outfilename, const char *filename)
AVCodecContext *c= NULL;
int len;
FILE *f, *outfile;
uint8_t inbuf[AUDIO_INBUF_SIZE + AV_INPUT_BUFFER_PADDING_SIZE];
AVPacket avpkt;
AVFrame *decoded_frame = NULL;
@@ -253,7 +253,7 @@ static void audio_decode_example(const char *outfilename, const char *filename)
printf("Decode audio file %s to %s\n", filename, outfilename); printf("Decode audio file %s to %s\n", filename, outfilename);
/* find the MPEG audio decoder */ /* find the mpeg audio decoder */
codec = avcodec_find_decoder(AV_CODEC_ID_MP2); codec = avcodec_find_decoder(AV_CODEC_ID_MP2);
if (!codec) { if (!codec) {
fprintf(stderr, "Codec not found\n"); fprintf(stderr, "Codec not found\n");
@@ -288,7 +288,6 @@ static void audio_decode_example(const char *outfilename, const char *filename)
avpkt.size = fread(inbuf, 1, AUDIO_INBUF_SIZE, f);
while (avpkt.size > 0) {
int i, ch;
int got_frame = 0;
if (!decoded_frame) {
@@ -305,15 +304,15 @@ static void audio_decode_example(const char *outfilename, const char *filename)
}
if (got_frame) {
/* if a frame has been decoded, output it */
int data_size = av_get_bytes_per_sample(c->sample_fmt);
if (data_size < 0) {
/* This should not occur, checking just for paranoia */
fprintf(stderr, "Failed to calculate data size\n");
exit(1);
}
for (i=0; i<decoded_frame->nb_samples; i++)
for (ch=0; ch<c->channels; ch++)
fwrite(decoded_frame->data[ch] + data_size*i, 1, data_size, outfile);
}
avpkt.size -= len;
avpkt.data += len;
@@ -356,7 +355,7 @@ static void video_encode_example(const char *filename, int codec_id)
printf("Encode video file %s\n", filename); printf("Encode video file %s\n", filename);
/* find the video encoder */ /* find the mpeg1 video encoder */
codec = avcodec_find_encoder(codec_id); codec = avcodec_find_encoder(codec_id);
if (!codec) { if (!codec) {
fprintf(stderr, "Codec not found\n"); fprintf(stderr, "Codec not found\n");
@@ -376,13 +375,7 @@ static void video_encode_example(const char *filename, int codec_id)
c->height = 288;
/* frames per second */
c->time_base = (AVRational){1,25};
/* emit one intra frame every ten frames
* check frame pict_type before passing frame
* to encoder, if frame->pict_type is AV_PICTURE_TYPE_I
* then gop_size is ignored and the output of encoder
* will always be I frame irrespective to gop_size
*/
c->gop_size = 10;
c->max_b_frames = 1;
c->pix_fmt = AV_PIX_FMT_YUV420P;
@@ -454,7 +447,7 @@ static void video_encode_example(const char *filename, int codec_id)
if (got_output) {
printf("Write frame %3d (size=%5d)\n", i, pkt.size);
fwrite(pkt.data, 1, pkt.size, f);
av_packet_unref(&pkt);
}
}
@@ -471,11 +464,11 @@ static void video_encode_example(const char *filename, int codec_id)
if (got_output) {
printf("Write frame %3d (size=%5d)\n", i, pkt.size);
fwrite(pkt.data, 1, pkt.size, f);
av_packet_unref(&pkt);
}
}
/* add sequence end code to have a real MPEG file */
fwrite(endcode, 1, sizeof(endcode), f);
fclose(f);
@@ -521,7 +514,7 @@ static int decode_write_frame(const char *outfilename, AVCodecContext *avctx,
/* the picture is allocated by the decoder, no need to free it */
snprintf(buf, sizeof(buf), outfilename, *frame_count);
pgm_save(frame->data[0], frame->linesize[0],
frame->width, frame->height, buf);
(*frame_count)++;
}
if (pkt->data) {
@@ -538,17 +531,17 @@ static void video_decode_example(const char *outfilename, const char *filename)
int frame_count;
FILE *f;
AVFrame *frame;
uint8_t inbuf[INBUF_SIZE + AV_INPUT_BUFFER_PADDING_SIZE];
AVPacket avpkt;
av_init_packet(&avpkt);
/* set end of buffer to 0 (this ensures that no overreading happens for damaged MPEG streams) */
memset(inbuf + INBUF_SIZE, 0, AV_INPUT_BUFFER_PADDING_SIZE);
printf("Decode video file %s to %s\n", filename, outfilename);
/* find the MPEG-1 video decoder */
codec = avcodec_find_decoder(AV_CODEC_ID_MPEG1VIDEO);
if (!codec) {
fprintf(stderr, "Codec not found\n");
@@ -561,8 +554,8 @@ static void video_decode_example(const char *outfilename, const char *filename)
exit(1);
}
if (codec->capabilities & AV_CODEC_CAP_TRUNCATED)
c->flags |= AV_CODEC_FLAG_TRUNCATED; // we do not send complete frames
/* For some codecs, such as msmpeg4 and mpeg4, width and height
MUST be initialized there because this information is not
@@ -613,9 +606,9 @@ static void video_decode_example(const char *outfilename, const char *filename)
exit(1);
}
/* Some codecs, such as MPEG, transmit the I- and P-frame with a
latency of one frame. You must do the following to have a
chance to get the last frame of the video. */
avpkt.data = NULL;
avpkt.size = 0;
decode_write_frame(outfilename, c, frame, &frame_count, &avpkt, 1);
@@ -641,7 +634,7 @@ int main(int argc, char **argv)
"This program generates a synthetic stream and encodes it to a file\n" "This program generates a synthetic stream and encodes it to a file\n"
"named test.h264, test.mp2 or test.mpg depending on output_type.\n" "named test.h264, test.mp2 or test.mpg depending on output_type.\n"
"The encoded stream is then decoded and written to a raw data output.\n" "The encoded stream is then decoded and written to a raw data output.\n"
"output_type must be chosen between 'h264', 'mp2', 'mpg'.\n", "output_type must be choosen between 'h264', 'mp2', 'mpg'.\n",
argv[0]); argv[0]);
return 1; return 1;
} }
@@ -651,7 +644,7 @@ int main(int argc, char **argv)
video_encode_example("test.h264", AV_CODEC_ID_H264); video_encode_example("test.h264", AV_CODEC_ID_H264);
} else if (!strcmp(output_type, "mp2")) { } else if (!strcmp(output_type, "mp2")) {
audio_encode_example("test.mp2"); audio_encode_example("test.mp2");
audio_decode_example("test.pcm", "test.mp2"); audio_decode_example("test.sw", "test.mp2");
} else if (!strcmp(output_type, "mpg")) { } else if (!strcmp(output_type, "mpg")) {
video_encode_example("test.mpg", AV_CODEC_ID_MPEG1VIDEO); video_encode_example("test.mpg", AV_CODEC_ID_MPEG1VIDEO);
video_decode_example("test%02d.pgm", "test.mpg"); video_decode_example("test%02d.pgm", "test.mpg");


@@ -1,180 +0,0 @@
/*
* Copyright (c) 2014 Lukasz Marek
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavformat/avio.h>
static const char *type_string(int type)
{
switch (type) {
case AVIO_ENTRY_DIRECTORY:
return "<DIR>";
case AVIO_ENTRY_FILE:
return "<FILE>";
case AVIO_ENTRY_BLOCK_DEVICE:
return "<BLOCK DEVICE>";
case AVIO_ENTRY_CHARACTER_DEVICE:
return "<CHARACTER DEVICE>";
case AVIO_ENTRY_NAMED_PIPE:
return "<PIPE>";
case AVIO_ENTRY_SYMBOLIC_LINK:
return "<LINK>";
case AVIO_ENTRY_SOCKET:
return "<SOCKET>";
case AVIO_ENTRY_SERVER:
return "<SERVER>";
case AVIO_ENTRY_SHARE:
return "<SHARE>";
case AVIO_ENTRY_WORKGROUP:
return "<WORKGROUP>";
case AVIO_ENTRY_UNKNOWN:
default:
break;
}
return "<UNKNOWN>";
}
static int list_op(const char *input_dir)
{
AVIODirEntry *entry = NULL;
AVIODirContext *ctx = NULL;
int cnt, ret;
char filemode[4], uid_and_gid[20];
if ((ret = avio_open_dir(&ctx, input_dir, NULL)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open directory: %s.\n", av_err2str(ret));
goto fail;
}
cnt = 0;
for (;;) {
if ((ret = avio_read_dir(ctx, &entry)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot list directory: %s.\n", av_err2str(ret));
goto fail;
}
if (!entry)
break;
if (entry->filemode == -1) {
snprintf(filemode, 4, "???");
} else {
snprintf(filemode, 4, "%3"PRIo64, entry->filemode);
}
snprintf(uid_and_gid, 20, "%"PRId64"(%"PRId64")", entry->user_id, entry->group_id);
if (cnt == 0)
av_log(NULL, AV_LOG_INFO, "%-9s %12s %30s %10s %s %16s %16s %16s\n",
"TYPE", "SIZE", "NAME", "UID(GID)", "UGO", "MODIFIED",
"ACCESSED", "STATUS_CHANGED");
av_log(NULL, AV_LOG_INFO, "%-9s %12"PRId64" %30s %10s %s %16"PRId64" %16"PRId64" %16"PRId64"\n",
type_string(entry->type),
entry->size,
entry->name,
uid_and_gid,
filemode,
entry->modification_timestamp,
entry->access_timestamp,
entry->status_change_timestamp);
avio_free_directory_entry(&entry);
cnt++;
};
fail:
avio_close_dir(&ctx);
return ret;
}
static int del_op(const char *url)
{
int ret = avpriv_io_delete(url);
if (ret < 0)
av_log(NULL, AV_LOG_ERROR, "Cannot delete '%s': %s.\n", url, av_err2str(ret));
return ret;
}
static int move_op(const char *src, const char *dst)
{
int ret = avpriv_io_move(src, dst);
if (ret < 0)
av_log(NULL, AV_LOG_ERROR, "Cannot move '%s' into '%s': %s.\n", src, dst, av_err2str(ret));
return ret;
}
static void usage(const char *program_name)
{
fprintf(stderr, "usage: %s OPERATION entry1 [entry2]\n"
"API example program to show how to manipulate resources "
"accessed through AVIOContext.\n"
"OPERATIONS:\n"
"list list content of the directory\n"
"move rename content in directory\n"
"del delete content in directory\n",
program_name);
}
int main(int argc, char *argv[])
{
const char *op = NULL;
int ret;
av_log_set_level(AV_LOG_DEBUG);
if (argc < 2) {
usage(argv[0]);
return 1;
}
/* register codecs and formats and other lavf/lavc components*/
av_register_all();
avformat_network_init();
op = argv[1];
if (strcmp(op, "list") == 0) {
if (argc < 3) {
av_log(NULL, AV_LOG_INFO, "Missing argument for list operation.\n");
ret = AVERROR(EINVAL);
} else {
ret = list_op(argv[2]);
}
} else if (strcmp(op, "del") == 0) {
if (argc < 3) {
av_log(NULL, AV_LOG_INFO, "Missing argument for del operation.\n");
ret = AVERROR(EINVAL);
} else {
ret = del_op(argv[2]);
}
} else if (strcmp(op, "move") == 0) {
if (argc < 4) {
av_log(NULL, AV_LOG_INFO, "Missing argument for move operation.\n");
ret = AVERROR(EINVAL);
} else {
ret = move_op(argv[2], argv[3]);
}
} else {
av_log(NULL, AV_LOG_INFO, "Invalid operation %s\n", op);
ret = AVERROR(EINVAL);
}
avformat_network_deinit();
return ret < 0 ? 1 : 0;
}
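For reference, the operations accepted by usage() above would be exercised roughly like this once the example is built (the paths are only illustrative):

./avio_dir_cmd list /tmp
./avio_dir_cmd move /tmp/old_name /tmp/new_name
./avio_dir_cmd del /tmp/unwanted_file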


@@ -119,10 +119,8 @@ int main(int argc, char *argv[])
end:
avformat_close_input(&fmt_ctx);
/* note: the internal buffer could have changed, and be != avio_ctx_buffer */
if (avio_ctx) {
av_freep(&avio_ctx->buffer);
av_freep(&avio_ctx);
}
av_file_unmap(buffer, buffer_size);
if (ret < 0) {


@@ -36,8 +36,6 @@
static AVFormatContext *fmt_ctx = NULL;
static AVCodecContext *video_dec_ctx = NULL, *audio_dec_ctx;
static int width, height;
static enum AVPixelFormat pix_fmt;
static AVStream *video_stream = NULL, *audio_stream = NULL;
static const char *src_filename = NULL;
static const char *video_dst_filename = NULL;
@@ -55,11 +53,17 @@ static AVPacket pkt;
static int video_frame_count = 0; static int video_frame_count = 0;
static int audio_frame_count = 0; static int audio_frame_count = 0;
/* Enable or disable frame reference counting. You are not supposed to support /* The different ways of decoding and managing data memory. You are not
* both paths in your application but pick the one most appropriate to your * supposed to support all the modes in your application but pick the one most
* needs. Look for the use of refcount in this example to see what are the * appropriate to your needs. Look for the use of api_mode in this example to
* differences of API usage between them. */ * see what are the differences of API usage between them */
static int refcount = 0; enum {
API_MODE_OLD = 0, /* old method, deprecated */
API_MODE_NEW_API_REF_COUNT = 1, /* new method, using the frame reference counting */
API_MODE_NEW_API_NO_REF_COUNT = 2, /* new method, without reference counting */
};
static int api_mode = API_MODE_OLD;
static int decode_packet(int *got_frame, int cached) static int decode_packet(int *got_frame, int cached)
{ {
@@ -77,22 +81,6 @@ static int decode_packet(int *got_frame, int cached)
} }
if (*got_frame) { if (*got_frame) {
if (frame->width != width || frame->height != height ||
frame->format != pix_fmt) {
/* To handle this change, one could call av_image_alloc again and
* decode the following frames into another rawvideo file. */
fprintf(stderr, "Error: Width, height and pixel format have to be "
"constant in a rawvideo file, but the width, height or "
"pixel format of the input video changed:\n"
"old: width = %d, height = %d, format = %s\n"
"new: width = %d, height = %d, format = %s\n",
width, height, av_get_pix_fmt_name(pix_fmt),
frame->width, frame->height,
av_get_pix_fmt_name(frame->format));
return -1;
}
printf("video_frame%s n:%d coded_n:%d pts:%s\n", printf("video_frame%s n:%d coded_n:%d pts:%s\n",
cached ? "(cached)" : "", cached ? "(cached)" : "",
video_frame_count++, frame->coded_picture_number, video_frame_count++, frame->coded_picture_number,
@@ -102,7 +90,7 @@ static int decode_packet(int *got_frame, int cached)
* this is required since rawvideo expects non aligned data */ * this is required since rawvideo expects non aligned data */
av_image_copy(video_dst_data, video_dst_linesize, av_image_copy(video_dst_data, video_dst_linesize,
(const uint8_t **)(frame->data), frame->linesize, (const uint8_t **)(frame->data), frame->linesize,
pix_fmt, width, height); video_dec_ctx->pix_fmt, video_dec_ctx->width, video_dec_ctx->height);
/* write to rawvideo file */ /* write to rawvideo file */
fwrite(video_dst_data[0], 1, video_dst_bufsize, video_dst_file); fwrite(video_dst_data[0], 1, video_dst_bufsize, video_dst_file);
@@ -139,9 +127,9 @@ static int decode_packet(int *got_frame, int cached)
} }
} }
/* If we use frame reference counting, we own the data and need /* If we use the new API with reference counting, we own the data and need
* to de-reference it when we don't use it anymore */ * to de-reference it when we don't use it anymore */
if (*got_frame && refcount) if (*got_frame && api_mode == API_MODE_NEW_API_REF_COUNT)
av_frame_unref(frame); av_frame_unref(frame);
return decoded; return decoded;
@@ -150,7 +138,7 @@ static int decode_packet(int *got_frame, int cached)
static int open_codec_context(int *stream_idx, static int open_codec_context(int *stream_idx,
AVFormatContext *fmt_ctx, enum AVMediaType type) AVFormatContext *fmt_ctx, enum AVMediaType type)
{ {
int ret, stream_index; int ret;
AVStream *st; AVStream *st;
AVCodecContext *dec_ctx = NULL; AVCodecContext *dec_ctx = NULL;
AVCodec *dec = NULL; AVCodec *dec = NULL;
@@ -162,8 +150,8 @@ static int open_codec_context(int *stream_idx,
av_get_media_type_string(type), src_filename); av_get_media_type_string(type), src_filename);
return ret; return ret;
} else { } else {
stream_index = ret; *stream_idx = ret;
st = fmt_ctx->streams[stream_index]; st = fmt_ctx->streams[*stream_idx];
/* find decoder for the stream */ /* find decoder for the stream */
dec_ctx = st->codec; dec_ctx = st->codec;
@@ -175,13 +163,13 @@ static int open_codec_context(int *stream_idx,
} }
/* Init the decoders, with or without reference counting */ /* Init the decoders, with or without reference counting */
av_dict_set(&opts, "refcounted_frames", refcount ? "1" : "0", 0); if (api_mode == API_MODE_NEW_API_REF_COUNT)
av_dict_set(&opts, "refcounted_frames", "1", 0);
if ((ret = avcodec_open2(dec_ctx, dec, &opts)) < 0) { if ((ret = avcodec_open2(dec_ctx, dec, &opts)) < 0) {
fprintf(stderr, "Failed to open %s codec\n", fprintf(stderr, "Failed to open %s codec\n",
av_get_media_type_string(type)); av_get_media_type_string(type));
return ret; return ret;
} }
*stream_idx = stream_index;
} }
return 0; return 0;
@@ -221,19 +209,28 @@ int main (int argc, char **argv)
int ret = 0, got_frame; int ret = 0, got_frame;
if (argc != 4 && argc != 5) { if (argc != 4 && argc != 5) {
fprintf(stderr, "usage: %s [-refcount] input_file video_output_file audio_output_file\n" fprintf(stderr, "usage: %s [-refcount=<old|new_norefcount|new_refcount>] "
"input_file video_output_file audio_output_file\n"
"API example program to show how to read frames from an input file.\n" "API example program to show how to read frames from an input file.\n"
"This program reads frames from a file, decodes them, and writes decoded\n" "This program reads frames from a file, decodes them, and writes decoded\n"
"video frames to a rawvideo file named video_output_file, and decoded\n" "video frames to a rawvideo file named video_output_file, and decoded\n"
"audio frames to a rawaudio file named audio_output_file.\n\n" "audio frames to a rawaudio file named audio_output_file.\n\n"
"If the -refcount option is specified, the program use the\n" "If the -refcount option is specified, the program use the\n"
"reference counting frame system which allows keeping a copy of\n" "reference counting frame system which allows keeping a copy of\n"
"the data for longer than one decode call.\n" "the data for longer than one decode call. If unset, it's using\n"
"the classic old method.\n"
"\n", argv[0]); "\n", argv[0]);
exit(1); exit(1);
} }
if (argc == 5 && !strcmp(argv[1], "-refcount")) { if (argc == 5) {
refcount = 1; const char *mode = argv[1] + strlen("-refcount=");
if (!strcmp(mode, "old")) api_mode = API_MODE_OLD;
else if (!strcmp(mode, "new_norefcount")) api_mode = API_MODE_NEW_API_NO_REF_COUNT;
else if (!strcmp(mode, "new_refcount")) api_mode = API_MODE_NEW_API_REF_COUNT;
else {
fprintf(stderr, "unknow mode '%s'\n", mode);
exit(1);
}
argv++; argv++;
} }
src_filename = argv[1]; src_filename = argv[1];
@@ -267,11 +264,9 @@ int main (int argc, char **argv)
} }
/* allocate image where the decoded image will be put */ /* allocate image where the decoded image will be put */
width = video_dec_ctx->width;
height = video_dec_ctx->height;
pix_fmt = video_dec_ctx->pix_fmt;
ret = av_image_alloc(video_dst_data, video_dst_linesize, ret = av_image_alloc(video_dst_data, video_dst_linesize,
width, height, pix_fmt, 1); video_dec_ctx->width, video_dec_ctx->height,
video_dec_ctx->pix_fmt, 1);
if (ret < 0) { if (ret < 0) {
fprintf(stderr, "Could not allocate raw video buffer\n"); fprintf(stderr, "Could not allocate raw video buffer\n");
goto end; goto end;
@@ -284,7 +279,7 @@ int main (int argc, char **argv)
audio_dec_ctx = audio_stream->codec; audio_dec_ctx = audio_stream->codec;
audio_dst_file = fopen(audio_dst_filename, "wb"); audio_dst_file = fopen(audio_dst_filename, "wb");
if (!audio_dst_file) { if (!audio_dst_file) {
fprintf(stderr, "Could not open destination file %s\n", audio_dst_filename); fprintf(stderr, "Could not open destination file %s\n", video_dst_filename);
ret = 1; ret = 1;
goto end; goto end;
} }
@@ -299,6 +294,11 @@ int main (int argc, char **argv)
goto end; goto end;
} }
/* When using the new API, you need to use the libavutil/frame.h API, while
* the classic frame management is available in libavcodec */
if (api_mode == API_MODE_OLD)
frame = avcodec_alloc_frame();
else
frame = av_frame_alloc(); frame = av_frame_alloc();
if (!frame) { if (!frame) {
fprintf(stderr, "Could not allocate frame\n"); fprintf(stderr, "Could not allocate frame\n");
@@ -326,7 +326,7 @@ int main (int argc, char **argv)
pkt.data += ret; pkt.data += ret;
pkt.size -= ret; pkt.size -= ret;
} while (pkt.size > 0); } while (pkt.size > 0);
av_packet_unref(&orig_pkt); av_free_packet(&orig_pkt);
} }
/* flush cached frames */ /* flush cached frames */
@@ -341,7 +341,7 @@ int main (int argc, char **argv)
if (video_stream) { if (video_stream) {
printf("Play the output video file with the command:\n" printf("Play the output video file with the command:\n"
"ffplay -f rawvideo -pix_fmt %s -video_size %dx%d %s\n", "ffplay -f rawvideo -pix_fmt %s -video_size %dx%d %s\n",
av_get_pix_fmt_name(pix_fmt), width, height, av_get_pix_fmt_name(video_dec_ctx->pix_fmt), video_dec_ctx->width, video_dec_ctx->height,
video_dst_filename); video_dst_filename);
} }
@@ -376,6 +376,9 @@ end:
fclose(video_dst_file); fclose(video_dst_file);
if (audio_dst_file) if (audio_dst_file)
fclose(audio_dst_file); fclose(audio_dst_file);
if (api_mode == API_MODE_OLD)
avcodec_free_frame(&frame);
else
av_frame_free(&frame); av_frame_free(&frame);
av_free(video_dst_data[0]); av_free(video_dst_data[0]);
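
The hunks above switch the example between refcounted and non-refcounted frame handling through the "refcounted_frames" codec option. A minimal sketch of the refcounted variant, assuming the decoder was opened with av_dict_set(&opts, "refcounted_frames", "1", 0) as shown in the hunk (decode_one is an illustrative helper name, and the decode call is the same deprecated API these examples use):

#include <libavcodec/avcodec.h>
#include <libavutil/frame.h>

static int decode_one(AVCodecContext *dec_ctx, AVFrame *frame, AVPacket *pkt)
{
    int got_frame = 0;
    int ret = avcodec_decode_video2(dec_ctx, frame, &got_frame, pkt);
    if (ret < 0)
        return ret;
    if (got_frame) {
        /* ... consume frame->data / frame->linesize here ... */
        av_frame_unref(frame); /* with refcounted_frames=1 we own the buffers */
    }
    return ret; /* number of bytes consumed from pkt */
}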

View File

@@ -1,185 +0,0 @@
/*
* Copyright (c) 2012 Stefano Sabatini
* Copyright (c) 2014 Clément Bœsch
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include <libavutil/motion_vector.h>
#include <libavformat/avformat.h>
static AVFormatContext *fmt_ctx = NULL;
static AVCodecContext *video_dec_ctx = NULL;
static AVStream *video_stream = NULL;
static const char *src_filename = NULL;
static int video_stream_idx = -1;
static AVFrame *frame = NULL;
static AVPacket pkt;
static int video_frame_count = 0;
static int decode_packet(int *got_frame, int cached)
{
int decoded = pkt.size;
*got_frame = 0;
if (pkt.stream_index == video_stream_idx) {
int ret = avcodec_decode_video2(video_dec_ctx, frame, got_frame, &pkt);
if (ret < 0) {
fprintf(stderr, "Error decoding video frame (%s)\n", av_err2str(ret));
return ret;
}
if (*got_frame) {
int i;
AVFrameSideData *sd;
video_frame_count++;
sd = av_frame_get_side_data(frame, AV_FRAME_DATA_MOTION_VECTORS);
if (sd) {
const AVMotionVector *mvs = (const AVMotionVector *)sd->data;
for (i = 0; i < sd->size / sizeof(*mvs); i++) {
const AVMotionVector *mv = &mvs[i];
printf("%d,%2d,%2d,%2d,%4d,%4d,%4d,%4d,0x%"PRIx64"\n",
video_frame_count, mv->source,
mv->w, mv->h, mv->src_x, mv->src_y,
mv->dst_x, mv->dst_y, mv->flags);
}
}
}
}
return decoded;
}
static int open_codec_context(int *stream_idx,
AVFormatContext *fmt_ctx, enum AVMediaType type)
{
int ret;
AVStream *st;
AVCodecContext *dec_ctx = NULL;
AVCodec *dec = NULL;
AVDictionary *opts = NULL;
ret = av_find_best_stream(fmt_ctx, type, -1, -1, NULL, 0);
if (ret < 0) {
fprintf(stderr, "Could not find %s stream in input file '%s'\n",
av_get_media_type_string(type), src_filename);
return ret;
} else {
*stream_idx = ret;
st = fmt_ctx->streams[*stream_idx];
/* find decoder for the stream */
dec_ctx = st->codec;
dec = avcodec_find_decoder(dec_ctx->codec_id);
if (!dec) {
fprintf(stderr, "Failed to find %s codec\n",
av_get_media_type_string(type));
return AVERROR(EINVAL);
}
/* Init the video decoder */
av_dict_set(&opts, "flags2", "+export_mvs", 0);
if ((ret = avcodec_open2(dec_ctx, dec, &opts)) < 0) {
fprintf(stderr, "Failed to open %s codec\n",
av_get_media_type_string(type));
return ret;
}
}
return 0;
}
int main(int argc, char **argv)
{
int ret = 0, got_frame;
if (argc != 2) {
fprintf(stderr, "Usage: %s <video>\n", argv[0]);
exit(1);
}
src_filename = argv[1];
av_register_all();
if (avformat_open_input(&fmt_ctx, src_filename, NULL, NULL) < 0) {
fprintf(stderr, "Could not open source file %s\n", src_filename);
exit(1);
}
if (avformat_find_stream_info(fmt_ctx, NULL) < 0) {
fprintf(stderr, "Could not find stream information\n");
exit(1);
}
if (open_codec_context(&video_stream_idx, fmt_ctx, AVMEDIA_TYPE_VIDEO) >= 0) {
video_stream = fmt_ctx->streams[video_stream_idx];
video_dec_ctx = video_stream->codec;
}
av_dump_format(fmt_ctx, 0, src_filename, 0);
if (!video_stream) {
fprintf(stderr, "Could not find video stream in the input, aborting\n");
ret = 1;
goto end;
}
frame = av_frame_alloc();
if (!frame) {
fprintf(stderr, "Could not allocate frame\n");
ret = AVERROR(ENOMEM);
goto end;
}
printf("framenum,source,blockw,blockh,srcx,srcy,dstx,dsty,flags\n");
/* initialize packet, set data to NULL, let the demuxer fill it */
av_init_packet(&pkt);
pkt.data = NULL;
pkt.size = 0;
/* read frames from the file */
while (av_read_frame(fmt_ctx, &pkt) >= 0) {
AVPacket orig_pkt = pkt;
do {
ret = decode_packet(&got_frame, 0);
if (ret < 0)
break;
pkt.data += ret;
pkt.size -= ret;
} while (pkt.size > 0);
av_packet_unref(&orig_pkt);
}
/* flush cached frames */
pkt.data = NULL;
pkt.size = 0;
do {
decode_packet(&got_frame, 1);
} while (got_frame);
end:
avcodec_close(video_dec_ctx);
avformat_close_input(&fmt_ctx);
av_frame_free(&frame);
return ret < 0;
}
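
open_codec_context() above requests motion-vector export through the "flags2" option string ("+export_mvs"). As an alternative sketch — an assumption, not what the example itself does — the same request can be made by setting the codec flag directly before avcodec_open2(); on older libavcodec versions the constant is spelled CODEC_FLAG2_EXPORT_MVS:

#include <libavcodec/avcodec.h>

static int open_decoder_with_mvs(AVCodecContext *dec_ctx, const AVCodec *dec)
{
    dec_ctx->flags2 |= AV_CODEC_FLAG2_EXPORT_MVS; /* export AV_FRAME_DATA_MOTION_VECTORS */
    return avcodec_open2(dec_ctx, dec, NULL);
}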

View File

@@ -45,7 +45,6 @@
#include "libavutil/channel_layout.h" #include "libavutil/channel_layout.h"
#include "libavutil/md5.h" #include "libavutil/md5.h"
#include "libavutil/mem.h"
#include "libavutil/opt.h" #include "libavutil/opt.h"
#include "libavutil/samplefmt.h" #include "libavutil/samplefmt.h"

View File

@@ -33,6 +33,7 @@
#include <libavcodec/avcodec.h> #include <libavcodec/avcodec.h>
#include <libavformat/avformat.h> #include <libavformat/avformat.h>
#include <libavfilter/avfiltergraph.h> #include <libavfilter/avfiltergraph.h>
#include <libavfilter/avcodec.h>
#include <libavfilter/buffersink.h> #include <libavfilter/buffersink.h>
#include <libavfilter/buffersrc.h> #include <libavfilter/buffersrc.h>
#include <libavutil/opt.h> #include <libavutil/opt.h>
@@ -65,7 +66,7 @@ static int open_input_file(const char *filename)
/* select the audio stream */ /* select the audio stream */
ret = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_AUDIO, -1, -1, &dec, 0); ret = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_AUDIO, -1, -1, &dec, 0);
if (ret < 0) { if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot find an audio stream in the input file\n"); av_log(NULL, AV_LOG_ERROR, "Cannot find a audio stream in the input file\n");
return ret; return ret;
} }
audio_stream_index = ret; audio_stream_index = ret;
@@ -144,28 +145,12 @@ static int init_filters(const char *filters_descr)
goto end; goto end;
} }
/* /* Endpoints for the filter graph. */
* Set the endpoints for the filter graph. The filter_graph will
* be linked to the graph described by filters_descr.
*/
/*
* The buffer source output must be connected to the input pad of
* the first filter described by filters_descr; since the first
* filter input label is not specified, it is set to "in" by
* default.
*/
outputs->name = av_strdup("in"); outputs->name = av_strdup("in");
outputs->filter_ctx = buffersrc_ctx; outputs->filter_ctx = buffersrc_ctx;
outputs->pad_idx = 0; outputs->pad_idx = 0;
outputs->next = NULL; outputs->next = NULL;
/*
* The buffer sink input must be connected to the output pad of
* the last filter described by filters_descr; since the last
* filter output label is not specified, it is set to "out" by
* default.
*/
inputs->name = av_strdup("out"); inputs->name = av_strdup("out");
inputs->filter_ctx = buffersink_ctx; inputs->filter_ctx = buffersink_ctx;
inputs->pad_idx = 0; inputs->pad_idx = 0;
@@ -273,10 +258,10 @@ int main(int argc, char **argv)
} }
if (packet.size <= 0) if (packet.size <= 0)
av_packet_unref(&packet0); av_free_packet(&packet0);
} else { } else {
/* discard non-wanted packets */ /* discard non-wanted packets */
av_packet_unref(&packet0); av_free_packet(&packet0);
} }
} }
end: end:
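
The comment blocks removed above describe how the "in"/"out" endpoints tie the buffer source and buffer sink to the parsed filter description. A consolidated sketch of that wiring (link_endpoints is an illustrative helper name):

#include <libavfilter/avfiltergraph.h>
#include <libavutil/error.h>
#include <libavutil/mem.h>

static int link_endpoints(AVFilterGraph *graph,
                          AVFilterContext *buffersrc_ctx,
                          AVFilterContext *buffersink_ctx,
                          const char *filters_descr)
{
    AVFilterInOut *outputs = avfilter_inout_alloc();
    AVFilterInOut *inputs  = avfilter_inout_alloc();
    int ret = 0;

    if (!outputs || !inputs) {
        ret = AVERROR(ENOMEM);
        goto end;
    }

    /* "in": the buffer source feeds the first filter of filters_descr */
    outputs->name       = av_strdup("in");
    outputs->filter_ctx = buffersrc_ctx;
    outputs->pad_idx    = 0;
    outputs->next       = NULL;

    /* "out": the buffer sink consumes the last filter's output */
    inputs->name        = av_strdup("out");
    inputs->filter_ctx  = buffersink_ctx;
    inputs->pad_idx     = 0;
    inputs->next        = NULL;

    ret = avfilter_graph_parse_ptr(graph, filters_descr, &inputs, &outputs, NULL);
end:
    avfilter_inout_free(&inputs);
    avfilter_inout_free(&outputs);
    return ret;
}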

View File

@@ -33,14 +33,12 @@
#include <libavcodec/avcodec.h> #include <libavcodec/avcodec.h>
#include <libavformat/avformat.h> #include <libavformat/avformat.h>
#include <libavfilter/avfiltergraph.h> #include <libavfilter/avfiltergraph.h>
#include <libavfilter/avcodec.h>
#include <libavfilter/buffersink.h> #include <libavfilter/buffersink.h>
#include <libavfilter/buffersrc.h> #include <libavfilter/buffersrc.h>
#include <libavutil/opt.h> #include <libavutil/opt.h>
const char *filter_descr = "scale=78:24,transpose=cclock"; const char *filter_descr = "scale=78:24";
/* other way:
scale=78:24 [scl]; [scl] transpose=cclock // assumes "[in]" and "[out]" to be input output pads respectively
*/
static AVFormatContext *fmt_ctx; static AVFormatContext *fmt_ctx;
static AVCodecContext *dec_ctx; static AVCodecContext *dec_ctx;
@@ -92,7 +90,6 @@ static int init_filters(const char *filters_descr)
AVFilter *buffersink = avfilter_get_by_name("buffersink"); AVFilter *buffersink = avfilter_get_by_name("buffersink");
AVFilterInOut *outputs = avfilter_inout_alloc(); AVFilterInOut *outputs = avfilter_inout_alloc();
AVFilterInOut *inputs = avfilter_inout_alloc(); AVFilterInOut *inputs = avfilter_inout_alloc();
AVRational time_base = fmt_ctx->streams[video_stream_index]->time_base;
enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_GRAY8, AV_PIX_FMT_NONE }; enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_GRAY8, AV_PIX_FMT_NONE };
filter_graph = avfilter_graph_alloc(); filter_graph = avfilter_graph_alloc();
@@ -105,7 +102,7 @@ static int init_filters(const char *filters_descr)
snprintf(args, sizeof(args), snprintf(args, sizeof(args),
"video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d", "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt, dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
time_base.num, time_base.den, dec_ctx->time_base.num, dec_ctx->time_base.den,
dec_ctx->sample_aspect_ratio.num, dec_ctx->sample_aspect_ratio.den); dec_ctx->sample_aspect_ratio.num, dec_ctx->sample_aspect_ratio.den);
ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in", ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
@@ -130,28 +127,12 @@ static int init_filters(const char *filters_descr)
goto end; goto end;
} }
/* /* Endpoints for the filter graph. */
* Set the endpoints for the filter graph. The filter_graph will
* be linked to the graph described by filters_descr.
*/
/*
* The buffer source output must be connected to the input pad of
* the first filter described by filters_descr; since the first
* filter input label is not specified, it is set to "in" by
* default.
*/
outputs->name = av_strdup("in"); outputs->name = av_strdup("in");
outputs->filter_ctx = buffersrc_ctx; outputs->filter_ctx = buffersrc_ctx;
outputs->pad_idx = 0; outputs->pad_idx = 0;
outputs->next = NULL; outputs->next = NULL;
/*
* The buffer sink input must be connected to the output pad of
* the last filter described by filters_descr; since the last
* filter output label is not specified, it is set to "out" by
* default.
*/
inputs->name = av_strdup("out"); inputs->name = av_strdup("out");
inputs->filter_ctx = buffersink_ctx; inputs->filter_ctx = buffersink_ctx;
inputs->pad_idx = 0; inputs->pad_idx = 0;
@@ -262,7 +243,7 @@ int main(int argc, char **argv)
av_frame_unref(frame); av_frame_unref(frame);
} }
} }
av_packet_unref(&packet); av_free_packet(&packet);
} }
end: end:
avfilter_graph_free(&filter_graph); avfilter_graph_free(&filter_graph);
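
One of the hunks above changes where the buffer source takes its time base from: the stream's time_base in the newer code versus dec_ctx->time_base in the older one. A sketch of building the buffer source arguments from the stream's time base, using the deprecated st->codec field as these examples do (build_buffersrc_args is an illustrative name):

#include <stdio.h>
#include <libavformat/avformat.h>

static void build_buffersrc_args(char *args, size_t size,
                                 AVFormatContext *fmt_ctx, int video_stream_index)
{
    AVStream *st         = fmt_ctx->streams[video_stream_index];
    AVCodecContext *dec  = st->codec;      /* deprecated field, but what these examples use */
    AVRational tb        = st->time_base;  /* stream time base, not the codec one */

    snprintf(args, size,
             "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
             dec->width, dec->height, dec->pix_fmt,
             tb.num, tb.den,
             dec->sample_aspect_ratio.num, dec->sample_aspect_ratio.den);
}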

View File

@@ -1,155 +0,0 @@
/*
* Copyright (c) 2015 Stephan Holljes
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/**
* @file
* libavformat multi-client network API usage example.
*
* @example http_multiclient.c
* This example will serve a file without decoding or demuxing it over http.
* Multiple clients can connect and will receive the same file.
*/
#include <libavformat/avformat.h>
#include <libavutil/opt.h>
#include <unistd.h>
void process_client(AVIOContext *client, const char *in_uri)
{
AVIOContext *input = NULL;
uint8_t buf[1024];
int ret, n, reply_code;
char *resource = NULL;
while ((ret = avio_handshake(client)) > 0) {
av_opt_get(client, "resource", AV_OPT_SEARCH_CHILDREN, &resource);
// check for strlen(resource) is necessary, because av_opt_get()
// may return empty string.
if (resource && strlen(resource))
break;
}
if (ret < 0)
goto end;
av_log(client, AV_LOG_TRACE, "resource=%p\n", resource);
if (resource && resource[0] == '/' && !strcmp((resource + 1), in_uri)) {
reply_code = 200;
} else {
reply_code = AVERROR_HTTP_NOT_FOUND;
}
if ((ret = av_opt_set_int(client, "reply_code", reply_code, AV_OPT_SEARCH_CHILDREN)) < 0) {
av_log(client, AV_LOG_ERROR, "Failed to set reply_code: %s.\n", av_err2str(ret));
goto end;
}
av_log(client, AV_LOG_TRACE, "Set reply code to %d\n", reply_code);
while ((ret = avio_handshake(client)) > 0);
if (ret < 0)
goto end;
fprintf(stderr, "Handshake performed.\n");
if (reply_code != 200)
goto end;
fprintf(stderr, "Opening input file.\n");
if ((ret = avio_open2(&input, in_uri, AVIO_FLAG_READ, NULL, NULL)) < 0) {
av_log(input, AV_LOG_ERROR, "Failed to open input: %s: %s.\n", in_uri,
av_err2str(ret));
goto end;
}
for(;;) {
n = avio_read(input, buf, sizeof(buf));
if (n < 0) {
if (n == AVERROR_EOF)
break;
av_log(input, AV_LOG_ERROR, "Error reading from input: %s.\n",
av_err2str(n));
break;
}
avio_write(client, buf, n);
avio_flush(client);
}
end:
fprintf(stderr, "Flushing client\n");
avio_flush(client);
fprintf(stderr, "Closing client\n");
avio_close(client);
fprintf(stderr, "Closing input\n");
avio_close(input);
}
int main(int argc, char **argv)
{
av_log_set_level(AV_LOG_TRACE);
AVDictionary *options = NULL;
AVIOContext *client = NULL, *server = NULL;
const char *in_uri, *out_uri;
int ret, pid;
if (argc < 3) {
printf("usage: %s input http://hostname[:port]\n"
"API example program to serve http to multiple clients.\n"
"\n", argv[0]);
return 1;
}
in_uri = argv[1];
out_uri = argv[2];
av_register_all();
avformat_network_init();
if ((ret = av_dict_set(&options, "listen", "2", 0)) < 0) {
fprintf(stderr, "Failed to set listen mode for server: %s\n", av_err2str(ret));
return ret;
}
if ((ret = avio_open2(&server, out_uri, AVIO_FLAG_WRITE, NULL, &options)) < 0) {
fprintf(stderr, "Failed to open server: %s\n", av_err2str(ret));
return ret;
}
fprintf(stderr, "Entering main loop.\n");
for(;;) {
if ((ret = avio_accept(server, &client)) < 0)
goto end;
fprintf(stderr, "Accepted client, forking process.\n");
// XXX: Since we don't reap our children and don't ignore signals
// this produces zombie processes.
pid = fork();
if (pid < 0) {
perror("Fork failed");
ret = AVERROR(errno);
goto end;
}
if (pid == 0) {
fprintf(stderr, "In child.\n");
process_client(client, in_uri);
avio_close(server);
exit(0);
}
if (pid > 0)
avio_close(client);
}
end:
avio_close(server);
if (ret < 0 && ret != AVERROR_EOF) {
fprintf(stderr, "Some errors occurred: %s\n", av_err2str(ret));
return 1;
}
return 0;
}
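
The XXX comment in main() above notes that the example produces zombie processes because finished children are never reaped. A minimal sketch of one way to address that, installed before the accept loop (handler and helper names are illustrative, not part of the example):

#include <signal.h>
#include <string.h>
#include <sys/wait.h>

static void reap_children(int sig)
{
    (void)sig;
    while (waitpid(-1, NULL, WNOHANG) > 0)
        ;
}

static void install_sigchld_handler(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = reap_children;
    sa.sa_flags   = SA_RESTART | SA_NOCLDSTOP;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGCHLD, &sa, NULL);
}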

View File

@@ -34,8 +34,6 @@
#include <string.h> #include <string.h>
#include <math.h> #include <math.h>
#include <libavutil/avassert.h>
#include <libavutil/channel_layout.h>
#include <libavutil/opt.h> #include <libavutil/opt.h>
#include <libavutil/mathematics.h> #include <libavutil/mathematics.h>
#include <libavutil/timestamp.h> #include <libavutil/timestamp.h>
@@ -43,29 +41,13 @@
#include <libswscale/swscale.h> #include <libswscale/swscale.h>
#include <libswresample/swresample.h> #include <libswresample/swresample.h>
static int audio_is_eof, video_is_eof;
#define STREAM_DURATION 10.0 #define STREAM_DURATION 10.0
#define STREAM_FRAME_RATE 25 /* 25 images/s */ #define STREAM_FRAME_RATE 25 /* 25 images/s */
#define STREAM_PIX_FMT AV_PIX_FMT_YUV420P /* default pix_fmt */ #define STREAM_PIX_FMT AV_PIX_FMT_YUV420P /* default pix_fmt */
#define SCALE_FLAGS SWS_BICUBIC static int sws_flags = SWS_BICUBIC;
// a wrapper around a single output AVStream
typedef struct OutputStream {
AVStream *st;
AVCodecContext *enc;
/* pts of the next frame that will be generated */
int64_t next_pts;
int samples_count;
AVFrame *frame;
AVFrame *tmp_frame;
float t, tincr, tincr2;
struct SwsContext *sws_ctx;
struct SwrContext *swr_ctx;
} OutputStream;
static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt) static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt)
{ {
@@ -81,7 +63,9 @@ static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt)
static int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base, AVStream *st, AVPacket *pkt) static int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base, AVStream *st, AVPacket *pkt)
{ {
/* rescale output packet timestamp values from codec to stream timebase */ /* rescale output packet timestamp values from codec to stream timebase */
av_packet_rescale_ts(pkt, *time_base, st->time_base); pkt->pts = av_rescale_q_rnd(pkt->pts, *time_base, st->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
pkt->dts = av_rescale_q_rnd(pkt->dts, *time_base, st->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
pkt->duration = av_rescale_q(pkt->duration, *time_base, st->time_base);
pkt->stream_index = st->index; pkt->stream_index = st->index;
/* Write the compressed frame to the media file. */ /* Write the compressed frame to the media file. */
@@ -90,12 +74,11 @@ static int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base, AV
} }
/* Add an output stream. */ /* Add an output stream. */
static void add_stream(OutputStream *ost, AVFormatContext *oc, static AVStream *add_stream(AVFormatContext *oc, AVCodec **codec,
AVCodec **codec,
enum AVCodecID codec_id) enum AVCodecID codec_id)
{ {
AVCodecContext *c; AVCodecContext *c;
int i; AVStream *st;
/* find the encoder */ /* find the encoder */
*codec = avcodec_find_encoder(codec_id); *codec = avcodec_find_encoder(codec_id);
@@ -105,18 +88,13 @@ static void add_stream(OutputStream *ost, AVFormatContext *oc,
exit(1); exit(1);
} }
ost->st = avformat_new_stream(oc, NULL); st = avformat_new_stream(oc, *codec);
if (!ost->st) { if (!st) {
fprintf(stderr, "Could not allocate stream\n"); fprintf(stderr, "Could not allocate stream\n");
exit(1); exit(1);
} }
ost->st->id = oc->nb_streams-1; st->id = oc->nb_streams-1;
c = avcodec_alloc_context3(*codec); c = st->codec;
if (!c) {
fprintf(stderr, "Could not alloc an encoding context\n");
exit(1);
}
ost->enc = c;
switch ((*codec)->type) { switch ((*codec)->type) {
case AVMEDIA_TYPE_AUDIO: case AVMEDIA_TYPE_AUDIO:
@@ -124,24 +102,7 @@ static void add_stream(OutputStream *ost, AVFormatContext *oc,
(*codec)->sample_fmts[0] : AV_SAMPLE_FMT_FLTP; (*codec)->sample_fmts[0] : AV_SAMPLE_FMT_FLTP;
c->bit_rate = 64000; c->bit_rate = 64000;
c->sample_rate = 44100; c->sample_rate = 44100;
if ((*codec)->supported_samplerates) { c->channels = 2;
c->sample_rate = (*codec)->supported_samplerates[0];
for (i = 0; (*codec)->supported_samplerates[i]; i++) {
if ((*codec)->supported_samplerates[i] == 44100)
c->sample_rate = 44100;
}
}
c->channels = av_get_channel_layout_nb_channels(c->channel_layout);
c->channel_layout = AV_CH_LAYOUT_STEREO;
if ((*codec)->channel_layouts) {
c->channel_layout = (*codec)->channel_layouts[0];
for (i = 0; (*codec)->channel_layouts[i]; i++) {
if ((*codec)->channel_layouts[i] == AV_CH_LAYOUT_STEREO)
c->channel_layout = AV_CH_LAYOUT_STEREO;
}
}
c->channels = av_get_channel_layout_nb_channels(c->channel_layout);
ost->st->time_base = (AVRational){ 1, c->sample_rate };
break; break;
case AVMEDIA_TYPE_VIDEO: case AVMEDIA_TYPE_VIDEO:
@@ -155,13 +116,12 @@ static void add_stream(OutputStream *ost, AVFormatContext *oc,
* of which frame timestamps are represented. For fixed-fps content, * of which frame timestamps are represented. For fixed-fps content,
* timebase should be 1/framerate and timestamp increments should be * timebase should be 1/framerate and timestamp increments should be
* identical to 1. */ * identical to 1. */
ost->st->time_base = (AVRational){ 1, STREAM_FRAME_RATE }; c->time_base.den = STREAM_FRAME_RATE;
c->time_base = ost->st->time_base; c->time_base.num = 1;
c->gop_size = 12; /* emit one intra frame every twelve frames at most */ c->gop_size = 12; /* emit one intra frame every twelve frames at most */
c->pix_fmt = STREAM_PIX_FMT; c->pix_fmt = STREAM_PIX_FMT;
if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) { if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
/* just for testing, we also add B-frames */ /* just for testing, we also add B frames */
c->max_b_frames = 2; c->max_b_frames = 2;
} }
if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) { if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
@@ -178,185 +138,183 @@ static void add_stream(OutputStream *ost, AVFormatContext *oc,
/* Some formats want stream headers to be separate. */ /* Some formats want stream headers to be separate. */
if (oc->oformat->flags & AVFMT_GLOBALHEADER) if (oc->oformat->flags & AVFMT_GLOBALHEADER)
c->flags |= AV_CODEC_FLAG_GLOBAL_HEADER; c->flags |= CODEC_FLAG_GLOBAL_HEADER;
return st;
} }
/**************************************************************/ /**************************************************************/
/* audio output */ /* audio output */
static AVFrame *alloc_audio_frame(enum AVSampleFormat sample_fmt, static float t, tincr, tincr2;
uint64_t channel_layout,
int sample_rate, int nb_samples)
{
AVFrame *frame = av_frame_alloc();
int ret;
if (!frame) { AVFrame *audio_frame;
fprintf(stderr, "Error allocating an audio frame\n"); static uint8_t **src_samples_data;
exit(1); static int src_samples_linesize;
} static int src_nb_samples;
frame->format = sample_fmt; static int max_dst_nb_samples;
frame->channel_layout = channel_layout; uint8_t **dst_samples_data;
frame->sample_rate = sample_rate; int dst_samples_linesize;
frame->nb_samples = nb_samples; int dst_samples_size;
int samples_count;
if (nb_samples) { struct SwrContext *swr_ctx = NULL;
ret = av_frame_get_buffer(frame, 0);
if (ret < 0) {
fprintf(stderr, "Error allocating an audio buffer\n");
exit(1);
}
}
return frame; static void open_audio(AVFormatContext *oc, AVCodec *codec, AVStream *st)
}
static void open_audio(AVFormatContext *oc, AVCodec *codec, OutputStream *ost, AVDictionary *opt_arg)
{ {
AVCodecContext *c; AVCodecContext *c;
int nb_samples;
int ret; int ret;
AVDictionary *opt = NULL;
c = ost->enc; c = st->codec;
/* allocate and init a re-usable frame */
audio_frame = av_frame_alloc();
if (!audio_frame) {
fprintf(stderr, "Could not allocate audio frame\n");
exit(1);
}
/* open it */ /* open it */
av_dict_copy(&opt, opt_arg, 0); ret = avcodec_open2(c, codec, NULL);
ret = avcodec_open2(c, codec, &opt);
av_dict_free(&opt);
if (ret < 0) { if (ret < 0) {
fprintf(stderr, "Could not open audio codec: %s\n", av_err2str(ret)); fprintf(stderr, "Could not open audio codec: %s\n", av_err2str(ret));
exit(1); exit(1);
} }
/* init signal generator */ /* init signal generator */
ost->t = 0; t = 0;
ost->tincr = 2 * M_PI * 110.0 / c->sample_rate; tincr = 2 * M_PI * 110.0 / c->sample_rate;
/* increment frequency by 110 Hz per second */ /* increment frequency by 110 Hz per second */
ost->tincr2 = 2 * M_PI * 110.0 / c->sample_rate / c->sample_rate; tincr2 = 2 * M_PI * 110.0 / c->sample_rate / c->sample_rate;
if (c->codec->capabilities & AV_CODEC_CAP_VARIABLE_FRAME_SIZE) src_nb_samples = c->codec->capabilities & CODEC_CAP_VARIABLE_FRAME_SIZE ?
nb_samples = 10000; 10000 : c->frame_size;
else
nb_samples = c->frame_size;
ost->frame = alloc_audio_frame(c->sample_fmt, c->channel_layout, ret = av_samples_alloc_array_and_samples(&src_samples_data, &src_samples_linesize, c->channels,
c->sample_rate, nb_samples); src_nb_samples, AV_SAMPLE_FMT_S16, 0);
ost->tmp_frame = alloc_audio_frame(AV_SAMPLE_FMT_S16, c->channel_layout,
c->sample_rate, nb_samples);
/* copy the stream parameters to the muxer */
ret = avcodec_parameters_from_context(ost->st->codecpar, c);
if (ret < 0) { if (ret < 0) {
fprintf(stderr, "Could not copy the stream parameters\n"); fprintf(stderr, "Could not allocate source samples\n");
exit(1); exit(1);
} }
/* compute the number of converted samples: buffering is avoided
* ensuring that the output buffer will contain at least all the
* converted input samples */
max_dst_nb_samples = src_nb_samples;
/* create resampler context */ /* create resampler context */
ost->swr_ctx = swr_alloc(); if (c->sample_fmt != AV_SAMPLE_FMT_S16) {
if (!ost->swr_ctx) { swr_ctx = swr_alloc();
if (!swr_ctx) {
fprintf(stderr, "Could not allocate resampler context\n"); fprintf(stderr, "Could not allocate resampler context\n");
exit(1); exit(1);
} }
/* set options */ /* set options */
av_opt_set_int (ost->swr_ctx, "in_channel_count", c->channels, 0); av_opt_set_int (swr_ctx, "in_channel_count", c->channels, 0);
av_opt_set_int (ost->swr_ctx, "in_sample_rate", c->sample_rate, 0); av_opt_set_int (swr_ctx, "in_sample_rate", c->sample_rate, 0);
av_opt_set_sample_fmt(ost->swr_ctx, "in_sample_fmt", AV_SAMPLE_FMT_S16, 0); av_opt_set_sample_fmt(swr_ctx, "in_sample_fmt", AV_SAMPLE_FMT_S16, 0);
av_opt_set_int (ost->swr_ctx, "out_channel_count", c->channels, 0); av_opt_set_int (swr_ctx, "out_channel_count", c->channels, 0);
av_opt_set_int (ost->swr_ctx, "out_sample_rate", c->sample_rate, 0); av_opt_set_int (swr_ctx, "out_sample_rate", c->sample_rate, 0);
av_opt_set_sample_fmt(ost->swr_ctx, "out_sample_fmt", c->sample_fmt, 0); av_opt_set_sample_fmt(swr_ctx, "out_sample_fmt", c->sample_fmt, 0);
/* initialize the resampling context */ /* initialize the resampling context */
if ((ret = swr_init(ost->swr_ctx)) < 0) { if ((ret = swr_init(swr_ctx)) < 0) {
fprintf(stderr, "Failed to initialize the resampling context\n"); fprintf(stderr, "Failed to initialize the resampling context\n");
exit(1); exit(1);
} }
ret = av_samples_alloc_array_and_samples(&dst_samples_data, &dst_samples_linesize, c->channels,
max_dst_nb_samples, c->sample_fmt, 0);
if (ret < 0) {
fprintf(stderr, "Could not allocate destination samples\n");
exit(1);
}
} else {
dst_samples_data = src_samples_data;
}
dst_samples_size = av_samples_get_buffer_size(NULL, c->channels, max_dst_nb_samples,
c->sample_fmt, 0);
} }
/* Prepare a 16 bit dummy audio frame of 'frame_size' samples and /* Prepare a 16 bit dummy audio frame of 'frame_size' samples and
* 'nb_channels' channels. */ * 'nb_channels' channels. */
static AVFrame *get_audio_frame(OutputStream *ost) static void get_audio_frame(int16_t *samples, int frame_size, int nb_channels)
{ {
AVFrame *frame = ost->tmp_frame;
int j, i, v; int j, i, v;
int16_t *q = (int16_t*)frame->data[0]; int16_t *q;
/* check if we want to generate more frames */ q = samples;
if (av_compare_ts(ost->next_pts, ost->enc->time_base, for (j = 0; j < frame_size; j++) {
STREAM_DURATION, (AVRational){ 1, 1 }) >= 0) v = (int)(sin(t) * 10000);
return NULL; for (i = 0; i < nb_channels; i++)
for (j = 0; j <frame->nb_samples; j++) {
v = (int)(sin(ost->t) * 10000);
for (i = 0; i < ost->enc->channels; i++)
*q++ = v; *q++ = v;
ost->t += ost->tincr; t += tincr;
ost->tincr += ost->tincr2; tincr += tincr2;
}
} }
frame->pts = ost->next_pts; static void write_audio_frame(AVFormatContext *oc, AVStream *st, int flush)
ost->next_pts += frame->nb_samples;
return frame;
}
/*
* encode one audio frame and send it to the muxer
* return 1 when encoding is finished, 0 otherwise
*/
static int write_audio_frame(AVFormatContext *oc, OutputStream *ost)
{ {
AVCodecContext *c; AVCodecContext *c;
AVPacket pkt = { 0 }; // data and size must be 0; AVPacket pkt = { 0 }; // data and size must be 0;
AVFrame *frame; int got_packet, ret, dst_nb_samples;
int ret;
int got_packet;
int dst_nb_samples;
av_init_packet(&pkt); av_init_packet(&pkt);
c = ost->enc; c = st->codec;
frame = get_audio_frame(ost); if (!flush) {
get_audio_frame((int16_t *)src_samples_data[0], src_nb_samples, c->channels);
if (frame) {
/* convert samples from native format to destination codec format, using the resampler */ /* convert samples from native format to destination codec format, using the resampler */
if (swr_ctx) {
/* compute destination number of samples */ /* compute destination number of samples */
dst_nb_samples = av_rescale_rnd(swr_get_delay(ost->swr_ctx, c->sample_rate) + frame->nb_samples, dst_nb_samples = av_rescale_rnd(swr_get_delay(swr_ctx, c->sample_rate) + src_nb_samples,
c->sample_rate, c->sample_rate, AV_ROUND_UP); c->sample_rate, c->sample_rate, AV_ROUND_UP);
av_assert0(dst_nb_samples == frame->nb_samples); if (dst_nb_samples > max_dst_nb_samples) {
av_free(dst_samples_data[0]);
/* when we pass a frame to the encoder, it may keep a reference to it ret = av_samples_alloc(dst_samples_data, &dst_samples_linesize, c->channels,
* internally; dst_nb_samples, c->sample_fmt, 0);
* make sure we do not overwrite it here
*/
ret = av_frame_make_writable(ost->frame);
if (ret < 0) if (ret < 0)
exit(1); exit(1);
max_dst_nb_samples = dst_nb_samples;
dst_samples_size = av_samples_get_buffer_size(NULL, c->channels, dst_nb_samples,
c->sample_fmt, 0);
}
/* convert to destination format */ /* convert to destination format */
ret = swr_convert(ost->swr_ctx, ret = swr_convert(swr_ctx,
ost->frame->data, dst_nb_samples, dst_samples_data, dst_nb_samples,
(const uint8_t **)frame->data, frame->nb_samples); (const uint8_t **)src_samples_data, src_nb_samples);
if (ret < 0) { if (ret < 0) {
fprintf(stderr, "Error while converting\n"); fprintf(stderr, "Error while converting\n");
exit(1); exit(1);
} }
frame = ost->frame; } else {
dst_nb_samples = src_nb_samples;
frame->pts = av_rescale_q(ost->samples_count, (AVRational){1, c->sample_rate}, c->time_base);
ost->samples_count += dst_nb_samples;
} }
ret = avcodec_encode_audio2(c, &pkt, frame, &got_packet); audio_frame->nb_samples = dst_nb_samples;
audio_frame->pts = av_rescale_q(samples_count, (AVRational){1, c->sample_rate}, c->time_base);
avcodec_fill_audio_frame(audio_frame, c->channels, c->sample_fmt,
dst_samples_data[0], dst_samples_size, 0);
samples_count += dst_nb_samples;
}
ret = avcodec_encode_audio2(c, &pkt, flush ? NULL : audio_frame, &got_packet);
if (ret < 0) { if (ret < 0) {
fprintf(stderr, "Error encoding audio frame: %s\n", av_err2str(ret)); fprintf(stderr, "Error encoding audio frame: %s\n", av_err2str(ret));
exit(1); exit(1);
} }
if (got_packet) { if (!got_packet) {
ret = write_frame(oc, &c->time_base, ost->st, &pkt); if (flush)
audio_is_eof = 1;
return;
}
ret = write_frame(oc, &c->time_base, st, &pkt);
if (ret < 0) { if (ret < 0) {
fprintf(stderr, "Error while writing audio frame: %s\n", fprintf(stderr, "Error while writing audio frame: %s\n",
av_err2str(ret)); av_err2str(ret));
@@ -364,91 +322,75 @@ static int write_audio_frame(AVFormatContext *oc, OutputStream *ost)
} }
} }
return (frame || got_packet) ? 0 : 1; static void close_audio(AVFormatContext *oc, AVStream *st)
{
avcodec_close(st->codec);
if (dst_samples_data != src_samples_data) {
av_free(dst_samples_data[0]);
av_free(dst_samples_data);
}
av_free(src_samples_data[0]);
av_free(src_samples_data);
av_frame_free(&audio_frame);
} }
/**************************************************************/ /**************************************************************/
/* video output */ /* video output */
static AVFrame *alloc_picture(enum AVPixelFormat pix_fmt, int width, int height) static AVFrame *frame;
{ static AVPicture src_picture, dst_picture;
AVFrame *picture; static int frame_count;
int ret;
picture = av_frame_alloc(); static void open_video(AVFormatContext *oc, AVCodec *codec, AVStream *st)
if (!picture)
return NULL;
picture->format = pix_fmt;
picture->width = width;
picture->height = height;
/* allocate the buffers for the frame data */
ret = av_frame_get_buffer(picture, 32);
if (ret < 0) {
fprintf(stderr, "Could not allocate frame data.\n");
exit(1);
}
return picture;
}
static void open_video(AVFormatContext *oc, AVCodec *codec, OutputStream *ost, AVDictionary *opt_arg)
{ {
int ret; int ret;
AVCodecContext *c = ost->enc; AVCodecContext *c = st->codec;
AVDictionary *opt = NULL;
av_dict_copy(&opt, opt_arg, 0);
/* open the codec */ /* open the codec */
ret = avcodec_open2(c, codec, &opt); ret = avcodec_open2(c, codec, NULL);
av_dict_free(&opt);
if (ret < 0) { if (ret < 0) {
fprintf(stderr, "Could not open video codec: %s\n", av_err2str(ret)); fprintf(stderr, "Could not open video codec: %s\n", av_err2str(ret));
exit(1); exit(1);
} }
/* allocate and init a re-usable frame */ /* allocate and init a re-usable frame */
ost->frame = alloc_picture(c->pix_fmt, c->width, c->height); frame = av_frame_alloc();
if (!ost->frame) { if (!frame) {
fprintf(stderr, "Could not allocate video frame\n"); fprintf(stderr, "Could not allocate video frame\n");
exit(1); exit(1);
} }
frame->format = c->pix_fmt;
frame->width = c->width;
frame->height = c->height;
/* Allocate the encoded raw picture. */
ret = avpicture_alloc(&dst_picture, c->pix_fmt, c->width, c->height);
if (ret < 0) {
fprintf(stderr, "Could not allocate picture: %s\n", av_err2str(ret));
exit(1);
}
/* If the output format is not YUV420P, then a temporary YUV420P /* If the output format is not YUV420P, then a temporary YUV420P
* picture is needed too. It is then converted to the required * picture is needed too. It is then converted to the required
* output format. */ * output format. */
ost->tmp_frame = NULL;
if (c->pix_fmt != AV_PIX_FMT_YUV420P) { if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
ost->tmp_frame = alloc_picture(AV_PIX_FMT_YUV420P, c->width, c->height); ret = avpicture_alloc(&src_picture, AV_PIX_FMT_YUV420P, c->width, c->height);
if (!ost->tmp_frame) { if (ret < 0) {
fprintf(stderr, "Could not allocate temporary picture\n"); fprintf(stderr, "Could not allocate temporary picture: %s\n",
av_err2str(ret));
exit(1); exit(1);
} }
} }
/* copy the stream parameters to the muxer */ /* copy data and linesize picture pointers to frame */
ret = avcodec_parameters_from_context(ost->st->codecpar, c); *((AVPicture *)frame) = dst_picture;
if (ret < 0) {
fprintf(stderr, "Could not copy the stream parameters\n");
exit(1);
}
} }
/* Prepare a dummy image. */ /* Prepare a dummy image. */
static void fill_yuv_image(AVFrame *pict, int frame_index, static void fill_yuv_image(AVPicture *pict, int frame_index,
int width, int height) int width, int height)
{ {
int x, y, i, ret; int x, y, i;
/* when we pass a frame to the encoder, it may keep a reference to it
* internally;
* make sure we do not overwrite it here
*/
ret = av_frame_make_writable(pict);
if (ret < 0)
exit(1);
i = frame_index; i = frame_index;
@@ -466,89 +408,82 @@ static void fill_yuv_image(AVFrame *pict, int frame_index,
} }
} }
static AVFrame *get_video_frame(OutputStream *ost) static void write_video_frame(AVFormatContext *oc, AVStream *st, int flush)
{ {
AVCodecContext *c = ost->enc; int ret;
static struct SwsContext *sws_ctx;
/* check if we want to generate more frames */ AVCodecContext *c = st->codec;
if (av_compare_ts(ost->next_pts, c->time_base,
STREAM_DURATION, (AVRational){ 1, 1 }) >= 0)
return NULL;
if (!flush) {
if (c->pix_fmt != AV_PIX_FMT_YUV420P) { if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
/* as we only generate a YUV420P picture, we must convert it /* as we only generate a YUV420P picture, we must convert it
* to the codec pixel format if needed */ * to the codec pixel format if needed */
if (!ost->sws_ctx) { if (!sws_ctx) {
ost->sws_ctx = sws_getContext(c->width, c->height, sws_ctx = sws_getContext(c->width, c->height, AV_PIX_FMT_YUV420P,
AV_PIX_FMT_YUV420P, c->width, c->height, c->pix_fmt,
c->width, c->height, sws_flags, NULL, NULL, NULL);
c->pix_fmt, if (!sws_ctx) {
SCALE_FLAGS, NULL, NULL, NULL);
if (!ost->sws_ctx) {
fprintf(stderr, fprintf(stderr,
"Could not initialize the conversion context\n"); "Could not initialize the conversion context\n");
exit(1); exit(1);
} }
} }
fill_yuv_image(ost->tmp_frame, ost->next_pts, c->width, c->height); fill_yuv_image(&src_picture, frame_count, c->width, c->height);
sws_scale(ost->sws_ctx, sws_scale(sws_ctx,
(const uint8_t * const *)ost->tmp_frame->data, ost->tmp_frame->linesize, (const uint8_t * const *)src_picture.data, src_picture.linesize,
0, c->height, ost->frame->data, ost->frame->linesize); 0, c->height, dst_picture.data, dst_picture.linesize);
} else { } else {
fill_yuv_image(ost->frame, ost->next_pts, c->width, c->height); fill_yuv_image(&dst_picture, frame_count, c->width, c->height);
}
} }
ost->frame->pts = ost->next_pts++; if (oc->oformat->flags & AVFMT_RAWPICTURE && !flush) {
/* Raw video case - directly store the picture in the packet */
AVPacket pkt;
av_init_packet(&pkt);
return ost->frame; pkt.flags |= AV_PKT_FLAG_KEY;
} pkt.stream_index = st->index;
pkt.data = dst_picture.data[0];
pkt.size = sizeof(AVPicture);
/* ret = av_interleaved_write_frame(oc, &pkt);
* encode one video frame and send it to the muxer } else {
* return 1 when encoding is finished, 0 otherwise
*/
static int write_video_frame(AVFormatContext *oc, OutputStream *ost)
{
int ret;
AVCodecContext *c;
AVFrame *frame;
int got_packet = 0;
AVPacket pkt = { 0 }; AVPacket pkt = { 0 };
int got_packet;
c = ost->enc;
frame = get_video_frame(ost);
av_init_packet(&pkt); av_init_packet(&pkt);
/* encode the image */ /* encode the image */
ret = avcodec_encode_video2(c, &pkt, frame, &got_packet); frame->pts = frame_count;
ret = avcodec_encode_video2(c, &pkt, flush ? NULL : frame, &got_packet);
if (ret < 0) { if (ret < 0) {
fprintf(stderr, "Error encoding video frame: %s\n", av_err2str(ret)); fprintf(stderr, "Error encoding video frame: %s\n", av_err2str(ret));
exit(1); exit(1);
} }
/* If size is zero, it means the image was buffered. */
if (got_packet) { if (got_packet) {
ret = write_frame(oc, &c->time_base, ost->st, &pkt); ret = write_frame(oc, &c->time_base, st, &pkt);
} else { } else {
if (flush)
video_is_eof = 1;
ret = 0; ret = 0;
} }
}
if (ret < 0) { if (ret < 0) {
fprintf(stderr, "Error while writing video frame: %s\n", av_err2str(ret)); fprintf(stderr, "Error while writing video frame: %s\n", av_err2str(ret));
exit(1); exit(1);
} }
frame_count++;
return (frame || got_packet) ? 0 : 1;
} }
static void close_stream(AVFormatContext *oc, OutputStream *ost) static void close_video(AVFormatContext *oc, AVStream *st)
{ {
avcodec_free_context(&ost->enc); avcodec_close(st->codec);
av_frame_free(&ost->frame); av_free(src_picture.data[0]);
av_frame_free(&ost->tmp_frame); av_free(dst_picture.data[0]);
sws_freeContext(ost->sws_ctx); av_frame_free(&frame);
swr_free(&ost->swr_ctx);
} }
/**************************************************************/ /**************************************************************/
@@ -556,21 +491,18 @@ static void close_stream(AVFormatContext *oc, OutputStream *ost)
int main(int argc, char **argv) int main(int argc, char **argv)
{ {
OutputStream video_st = { 0 }, audio_st = { 0 };
const char *filename; const char *filename;
AVOutputFormat *fmt; AVOutputFormat *fmt;
AVFormatContext *oc; AVFormatContext *oc;
AVStream *audio_st, *video_st;
AVCodec *audio_codec, *video_codec; AVCodec *audio_codec, *video_codec;
int ret; double audio_time, video_time;
int have_video = 0, have_audio = 0; int flush, ret;
int encode_video = 0, encode_audio = 0;
AVDictionary *opt = NULL;
int i;
/* Initialize libavcodec, and register all codecs and formats. */ /* Initialize libavcodec, and register all codecs and formats. */
av_register_all(); av_register_all();
if (argc < 2) { if (argc != 2) {
printf("usage: %s output_file\n" printf("usage: %s output_file\n"
"API example program to output a media file with libavformat.\n" "API example program to output a media file with libavformat.\n"
"This program generates a synthetic audio and video stream, encodes and\n" "This program generates a synthetic audio and video stream, encodes and\n"
@@ -582,10 +514,6 @@ int main(int argc, char **argv)
} }
filename = argv[1]; filename = argv[1];
for (i = 2; i+1 < argc; i+=2) {
if (!strcmp(argv[i], "-flags") || !strcmp(argv[i], "-fflags"))
av_dict_set(&opt, argv[i]+1, argv[i+1], 0);
}
/* allocate the output media context */ /* allocate the output media context */
avformat_alloc_output_context2(&oc, NULL, NULL, filename); avformat_alloc_output_context2(&oc, NULL, NULL, filename);
@@ -600,24 +528,20 @@ int main(int argc, char **argv)
/* Add the audio and video streams using the default format codecs /* Add the audio and video streams using the default format codecs
* and initialize the codecs. */ * and initialize the codecs. */
if (fmt->video_codec != AV_CODEC_ID_NONE) { video_st = NULL;
add_stream(&video_st, oc, &video_codec, fmt->video_codec); audio_st = NULL;
have_video = 1;
encode_video = 1; if (fmt->video_codec != AV_CODEC_ID_NONE)
} video_st = add_stream(oc, &video_codec, fmt->video_codec);
if (fmt->audio_codec != AV_CODEC_ID_NONE) { if (fmt->audio_codec != AV_CODEC_ID_NONE)
add_stream(&audio_st, oc, &audio_codec, fmt->audio_codec); audio_st = add_stream(oc, &audio_codec, fmt->audio_codec);
have_audio = 1;
encode_audio = 1;
}
/* Now that all the parameters are set, we can open the audio and /* Now that all the parameters are set, we can open the audio and
* video codecs and allocate the necessary encode buffers. */ * video codecs and allocate the necessary encode buffers. */
if (have_video) if (video_st)
open_video(oc, video_codec, &video_st, opt); open_video(oc, video_codec, video_st);
if (audio_st)
if (have_audio) open_audio(oc, audio_codec, audio_st);
open_audio(oc, audio_codec, &audio_st, opt);
av_dump_format(oc, 0, filename, 1); av_dump_format(oc, 0, filename, 1);
@@ -632,21 +556,30 @@ int main(int argc, char **argv)
} }
/* Write the stream header, if any. */ /* Write the stream header, if any. */
ret = avformat_write_header(oc, &opt); ret = avformat_write_header(oc, NULL);
if (ret < 0) { if (ret < 0) {
fprintf(stderr, "Error occurred when opening output file: %s\n", fprintf(stderr, "Error occurred when opening output file: %s\n",
av_err2str(ret)); av_err2str(ret));
return 1; return 1;
} }
while (encode_video || encode_audio) { flush = 0;
/* select the stream to encode */ while ((video_st && !video_is_eof) || (audio_st && !audio_is_eof)) {
if (encode_video && /* Compute current audio and video time. */
(!encode_audio || av_compare_ts(video_st.next_pts, video_st.enc->time_base, audio_time = (audio_st && !audio_is_eof) ? audio_st->pts.val * av_q2d(audio_st->time_base) : INFINITY;
audio_st.next_pts, audio_st.enc->time_base) <= 0)) { video_time = (video_st && !video_is_eof) ? video_st->pts.val * av_q2d(video_st->time_base) : INFINITY;
encode_video = !write_video_frame(oc, &video_st);
} else { if (!flush &&
encode_audio = !write_audio_frame(oc, &audio_st); (!audio_st || audio_time >= STREAM_DURATION) &&
(!video_st || video_time >= STREAM_DURATION)) {
flush = 1;
}
/* write interleaved audio and video frames */
if (audio_st && !audio_is_eof && audio_time <= video_time) {
write_audio_frame(oc, audio_st, flush);
} else if (video_st && !video_is_eof && video_time < audio_time) {
write_video_frame(oc, video_st, flush);
} }
} }
@@ -657,14 +590,14 @@ int main(int argc, char **argv)
av_write_trailer(oc); av_write_trailer(oc);
/* Close each codec. */ /* Close each codec. */
if (have_video) if (video_st)
close_stream(oc, &video_st); close_video(oc, video_st);
if (have_audio) if (audio_st)
close_stream(oc, &audio_st); close_audio(oc, audio_st);
if (!(fmt->flags & AVFMT_NOFILE)) if (!(fmt->flags & AVFMT_NOFILE))
/* Close the output file. */ /* Close the output file. */
avio_closep(&oc->pb); avio_close(oc->pb);
/* free the stream */ /* free the stream */
avformat_free_context(oc); avformat_free_context(oc);
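
The write_frame() hunk near the top of this file's diff swaps av_packet_rescale_ts() for explicit av_rescale_q_rnd()/av_rescale_q() calls; both forms perform essentially the same conversion of packet timestamps from the codec time base to the stream time base. A sketch of the explicit form (rescale_pkt is an illustrative name):

#include <libavcodec/avcodec.h>
#include <libavutil/mathematics.h>

static void rescale_pkt(AVPacket *pkt, AVRational codec_tb, AVRational stream_tb)
{
    /* Roughly what av_packet_rescale_ts(pkt, codec_tb, stream_tb) does: */
    pkt->pts      = av_rescale_q_rnd(pkt->pts, codec_tb, stream_tb,
                                     AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
    pkt->dts      = av_rescale_q_rnd(pkt->dts, codec_tb, stream_tb,
                                     AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
    pkt->duration = av_rescale_q(pkt->duration, codec_tb, stream_tb);
}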

View File

@@ -1,487 +0,0 @@
/*
* Copyright (c) 2015 Anton Khirnov
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/**
* @file
* Intel QSV-accelerated H.264 decoding example.
*
* @example qsvdec.c
* This example shows how to do QSV-accelerated H.264 decoding with output
* frames in the VA-API video surfaces.
*/
#include "config.h"
#include <stdio.h>
#include <mfx/mfxvideo.h>
#include <va/va.h>
#include <va/va_x11.h>
#include <X11/Xlib.h>
#include "libavformat/avformat.h"
#include "libavformat/avio.h"
#include "libavcodec/avcodec.h"
#include "libavcodec/qsv.h"
#include "libavutil/error.h"
#include "libavutil/mem.h"
typedef struct DecodeContext {
mfxSession mfx_session;
VADisplay va_dpy;
VASurfaceID *surfaces;
mfxMemId *surface_ids;
int *surface_used;
int nb_surfaces;
mfxFrameInfo frame_info;
} DecodeContext;
static mfxStatus frame_alloc(mfxHDL pthis, mfxFrameAllocRequest *req,
mfxFrameAllocResponse *resp)
{
DecodeContext *decode = pthis;
int err, i;
if (decode->surfaces) {
fprintf(stderr, "Multiple allocation requests.\n");
return MFX_ERR_MEMORY_ALLOC;
}
if (!(req->Type & MFX_MEMTYPE_VIDEO_MEMORY_DECODER_TARGET)) {
fprintf(stderr, "Unsupported surface type: %d\n", req->Type);
return MFX_ERR_UNSUPPORTED;
}
if (req->Info.BitDepthLuma != 8 || req->Info.BitDepthChroma != 8 ||
req->Info.Shift || req->Info.FourCC != MFX_FOURCC_NV12 ||
req->Info.ChromaFormat != MFX_CHROMAFORMAT_YUV420) {
fprintf(stderr, "Unsupported surface properties.\n");
return MFX_ERR_UNSUPPORTED;
}
decode->surfaces = av_malloc_array (req->NumFrameSuggested, sizeof(*decode->surfaces));
decode->surface_ids = av_malloc_array (req->NumFrameSuggested, sizeof(*decode->surface_ids));
decode->surface_used = av_mallocz_array(req->NumFrameSuggested, sizeof(*decode->surface_used));
if (!decode->surfaces || !decode->surface_ids || !decode->surface_used)
goto fail;
err = vaCreateSurfaces(decode->va_dpy, VA_RT_FORMAT_YUV420,
req->Info.Width, req->Info.Height,
decode->surfaces, req->NumFrameSuggested,
NULL, 0);
if (err != VA_STATUS_SUCCESS) {
fprintf(stderr, "Error allocating VA surfaces\n");
goto fail;
}
decode->nb_surfaces = req->NumFrameSuggested;
for (i = 0; i < decode->nb_surfaces; i++)
decode->surface_ids[i] = &decode->surfaces[i];
resp->mids = decode->surface_ids;
resp->NumFrameActual = decode->nb_surfaces;
decode->frame_info = req->Info;
return MFX_ERR_NONE;
fail:
av_freep(&decode->surfaces);
av_freep(&decode->surface_ids);
av_freep(&decode->surface_used);
return MFX_ERR_MEMORY_ALLOC;
}
static mfxStatus frame_free(mfxHDL pthis, mfxFrameAllocResponse *resp)
{
return MFX_ERR_NONE;
}
static mfxStatus frame_lock(mfxHDL pthis, mfxMemId mid, mfxFrameData *ptr)
{
return MFX_ERR_UNSUPPORTED;
}
static mfxStatus frame_unlock(mfxHDL pthis, mfxMemId mid, mfxFrameData *ptr)
{
return MFX_ERR_UNSUPPORTED;
}
static mfxStatus frame_get_hdl(mfxHDL pthis, mfxMemId mid, mfxHDL *hdl)
{
*hdl = mid;
return MFX_ERR_NONE;
}
static void free_surfaces(DecodeContext *decode)
{
if (decode->surfaces)
vaDestroySurfaces(decode->va_dpy, decode->surfaces, decode->nb_surfaces);
av_freep(&decode->surfaces);
av_freep(&decode->surface_ids);
av_freep(&decode->surface_used);
decode->nb_surfaces = 0;
}
static void free_buffer(void *opaque, uint8_t *data)
{
int *used = opaque;
*used = 0;
av_freep(&data);
}
static int get_buffer(AVCodecContext *avctx, AVFrame *frame, int flags)
{
DecodeContext *decode = avctx->opaque;
mfxFrameSurface1 *surf;
AVBufferRef *surf_buf;
int idx;
for (idx = 0; idx < decode->nb_surfaces; idx++) {
if (!decode->surface_used[idx])
break;
}
if (idx == decode->nb_surfaces) {
fprintf(stderr, "No free surfaces\n");
return AVERROR(ENOMEM);
}
surf = av_mallocz(sizeof(*surf));
if (!surf)
return AVERROR(ENOMEM);
surf_buf = av_buffer_create((uint8_t*)surf, sizeof(*surf), free_buffer,
&decode->surface_used[idx], AV_BUFFER_FLAG_READONLY);
if (!surf_buf) {
av_freep(&surf);
return AVERROR(ENOMEM);
}
surf->Info = decode->frame_info;
surf->Data.MemId = &decode->surfaces[idx];
frame->buf[0] = surf_buf;
frame->data[3] = (uint8_t*)surf;
decode->surface_used[idx] = 1;
return 0;
}
static int get_format(AVCodecContext *avctx, const enum AVPixelFormat *pix_fmts)
{
while (*pix_fmts != AV_PIX_FMT_NONE) {
if (*pix_fmts == AV_PIX_FMT_QSV) {
if (!avctx->hwaccel_context) {
DecodeContext *decode = avctx->opaque;
AVQSVContext *qsv = av_qsv_alloc_context();
if (!qsv)
return AV_PIX_FMT_NONE;
qsv->session = decode->mfx_session;
qsv->iopattern = MFX_IOPATTERN_OUT_VIDEO_MEMORY;
avctx->hwaccel_context = qsv;
}
return AV_PIX_FMT_QSV;
}
pix_fmts++;
}
fprintf(stderr, "The QSV pixel format not offered in get_format()\n");
return AV_PIX_FMT_NONE;
}
static int decode_packet(DecodeContext *decode, AVCodecContext *decoder_ctx,
AVFrame *frame, AVPacket *pkt,
AVIOContext *output_ctx)
{
int ret = 0;
int got_frame = 1;
while (pkt->size > 0 || (!pkt->data && got_frame)) {
ret = avcodec_decode_video2(decoder_ctx, frame, &got_frame, pkt);
if (ret < 0) {
fprintf(stderr, "Error during decoding\n");
return ret;
}
pkt->data += ret;
pkt->size -= ret;
/* A real program would do something useful with the decoded frame here.
* We just retrieve the raw data and write it to a file, which is rather
* useless but pedagogic. */
if (got_frame) {
mfxFrameSurface1 *surf = (mfxFrameSurface1*)frame->data[3];
VASurfaceID surface = *(VASurfaceID*)surf->Data.MemId;
VAImageFormat img_fmt = {
.fourcc = VA_FOURCC_NV12,
.byte_order = VA_LSB_FIRST,
.bits_per_pixel = 8,
.depth = 8,
};
VAImage img;
VAStatus err;
uint8_t *data;
int i, j;
img.buf = VA_INVALID_ID;
img.image_id = VA_INVALID_ID;
err = vaCreateImage(decode->va_dpy, &img_fmt,
frame->width, frame->height, &img);
if (err != VA_STATUS_SUCCESS) {
fprintf(stderr, "Error creating an image: %s\n",
vaErrorStr(err));
ret = AVERROR_UNKNOWN;
goto fail;
}
err = vaGetImage(decode->va_dpy, surface, 0, 0,
frame->width, frame->height,
img.image_id);
if (err != VA_STATUS_SUCCESS) {
fprintf(stderr, "Error getting an image: %s\n",
vaErrorStr(err));
ret = AVERROR_UNKNOWN;
goto fail;
}
err = vaMapBuffer(decode->va_dpy, img.buf, (void**)&data);
if (err != VA_STATUS_SUCCESS) {
fprintf(stderr, "Error mapping the image buffer: %s\n",
vaErrorStr(err));
ret = AVERROR_UNKNOWN;
goto fail;
}
for (i = 0; i < img.num_planes; i++)
for (j = 0; j < (img.height >> (i > 0)); j++)
avio_write(output_ctx, data + img.offsets[i] + j * img.pitches[i], img.width);
fail:
if (img.buf != VA_INVALID_ID)
vaUnmapBuffer(decode->va_dpy, img.buf);
if (img.image_id != VA_INVALID_ID)
vaDestroyImage(decode->va_dpy, img.image_id);
av_frame_unref(frame);
if (ret < 0)
return ret;
}
}
return 0;
}
int main(int argc, char **argv)
{
AVFormatContext *input_ctx = NULL;
AVStream *video_st = NULL;
AVCodecContext *decoder_ctx = NULL;
const AVCodec *decoder;
AVPacket pkt = { 0 };
AVFrame *frame = NULL;
DecodeContext decode = { NULL };
Display *dpy = NULL;
int va_ver_major, va_ver_minor;
mfxIMPL mfx_impl = MFX_IMPL_AUTO_ANY;
mfxVersion mfx_ver = { { 1, 1 } };
mfxFrameAllocator frame_allocator = {
.pthis = &decode,
.Alloc = frame_alloc,
.Lock = frame_lock,
.Unlock = frame_unlock,
.GetHDL = frame_get_hdl,
.Free = frame_free,
};
AVIOContext *output_ctx = NULL;
int ret, i, err;
av_register_all();
if (argc < 3) {
fprintf(stderr, "Usage: %s <input file> <output file>\n", argv[0]);
return 1;
}
/* open the input file */
ret = avformat_open_input(&input_ctx, argv[1], NULL, NULL);
if (ret < 0) {
fprintf(stderr, "Cannot open input file '%s': ", argv[1]);
goto finish;
}
/* find the first H.264 video stream */
for (i = 0; i < input_ctx->nb_streams; i++) {
AVStream *st = input_ctx->streams[i];
if (st->codecpar->codec_id == AV_CODEC_ID_H264 && !video_st)
video_st = st;
else
st->discard = AVDISCARD_ALL;
}
if (!video_st) {
fprintf(stderr, "No H.264 video stream in the input file\n");
goto finish;
}
/* initialize VA-API */
dpy = XOpenDisplay(NULL);
if (!dpy) {
fprintf(stderr, "Cannot open the X display\n");
goto finish;
}
decode.va_dpy = vaGetDisplay(dpy);
if (!decode.va_dpy) {
fprintf(stderr, "Cannot open the VA display\n");
goto finish;
}
err = vaInitialize(decode.va_dpy, &va_ver_major, &va_ver_minor);
if (err != VA_STATUS_SUCCESS) {
fprintf(stderr, "Cannot initialize VA: %s\n", vaErrorStr(err));
goto finish;
}
fprintf(stderr, "Initialized VA v%d.%d\n", va_ver_major, va_ver_minor);
/* initialize an MFX session */
err = MFXInit(mfx_impl, &mfx_ver, &decode.mfx_session);
if (err != MFX_ERR_NONE) {
fprintf(stderr, "Error initializing an MFX session\n");
goto finish;
}
MFXVideoCORE_SetHandle(decode.mfx_session, MFX_HANDLE_VA_DISPLAY, decode.va_dpy);
MFXVideoCORE_SetFrameAllocator(decode.mfx_session, &frame_allocator);
/* initialize the decoder */
decoder = avcodec_find_decoder_by_name("h264_qsv");
if (!decoder) {
fprintf(stderr, "The QSV decoder is not present in libavcodec\n");
goto finish;
}
decoder_ctx = avcodec_alloc_context3(decoder);
if (!decoder_ctx) {
ret = AVERROR(ENOMEM);
goto finish;
}
decoder_ctx->codec_id = AV_CODEC_ID_H264;
if (video_st->codecpar->extradata_size) {
decoder_ctx->extradata = av_mallocz(video_st->codecpar->extradata_size +
AV_INPUT_BUFFER_PADDING_SIZE);
if (!decoder_ctx->extradata) {
ret = AVERROR(ENOMEM);
goto finish;
}
memcpy(decoder_ctx->extradata, video_st->codecpar->extradata,
video_st->codecpar->extradata_size);
decoder_ctx->extradata_size = video_st->codecpar->extradata_size;
}
decoder_ctx->refcounted_frames = 1;
decoder_ctx->opaque = &decode;
decoder_ctx->get_buffer2 = get_buffer;
decoder_ctx->get_format = get_format;
ret = avcodec_open2(decoder_ctx, NULL, NULL);
if (ret < 0) {
fprintf(stderr, "Error opening the decoder: ");
goto finish;
}
/* open the output stream */
ret = avio_open(&output_ctx, argv[2], AVIO_FLAG_WRITE);
if (ret < 0) {
fprintf(stderr, "Error opening the output context: ");
goto finish;
}
frame = av_frame_alloc();
if (!frame) {
ret = AVERROR(ENOMEM);
goto finish;
}
/* actual decoding */
while (ret >= 0) {
ret = av_read_frame(input_ctx, &pkt);
if (ret < 0)
break;
if (pkt.stream_index == video_st->index)
ret = decode_packet(&decode, decoder_ctx, frame, &pkt, output_ctx);
av_packet_unref(&pkt);
}
/* flush the decoder */
pkt.data = NULL;
pkt.size = 0;
ret = decode_packet(&decode, decoder_ctx, frame, &pkt, output_ctx);
finish:
if (ret < 0) {
char buf[1024];
av_strerror(ret, buf, sizeof(buf));
fprintf(stderr, "%s\n", buf);
}
avformat_close_input(&input_ctx);
av_frame_free(&frame);
if (decoder_ctx)
av_freep(&decoder_ctx->hwaccel_context);
avcodec_free_context(&decoder_ctx);
free_surfaces(&decode);
if (decode.mfx_session)
MFXClose(decode.mfx_session);
if (decode.va_dpy)
vaTerminate(decode.va_dpy);
if (dpy)
XCloseDisplay(dpy);
avio_close(output_ctx);
return ret;
}
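The decode_packet() loop in the file above is written against the old avcodec_decode_video2() API. For reference only, a rough equivalent using the send/receive API that later replaced it might look like this (a sketch, not part of the original qsvdec.c):

    #include <libavcodec/avcodec.h>

    static int decode_packet_sendrecv(AVCodecContext *decoder_ctx, AVFrame *frame,
                                      const AVPacket *pkt)
    {
        /* Feed one packet to the decoder (pkt == NULL flushes it). */
        int ret = avcodec_send_packet(decoder_ctx, pkt);
        if (ret < 0)
            return ret;

        /* Drain every frame that became available. */
        while (ret >= 0) {
            ret = avcodec_receive_frame(decoder_ctx, frame);
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                return 0;      /* needs more input, or fully flushed */
            if (ret < 0)
                return ret;    /* real decoding error */

            /* ... retrieve the surface from frame->data[3] here ... */
            av_frame_unref(frame);
        }
        return 0;
    }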


@@ -99,9 +99,8 @@ int main(int argc, char **argv)
 fprintf(stderr, "Failed to copy context from input to output stream codec context\n");
 goto end;
 }
-out_stream->codec->codec_tag = 0;
 if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
-    out_stream->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
+    out_stream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
 }
 av_dump_format(ofmt_ctx, 0, out_filename, 1);
@@ -143,7 +142,7 @@ int main(int argc, char **argv)
 fprintf(stderr, "Error muxing packet\n");
 break;
 }
-av_packet_unref(&pkt);
+av_free_packet(&pkt);
 }
 av_write_trailer(ofmt_ctx);
@@ -153,7 +152,7 @@ end:
 /* close output */
 if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
-    avio_closep(&ofmt_ctx->pb);
+    avio_close(ofmt_ctx->pb);
 avformat_free_context(ofmt_ctx);
 if (ret < 0 && ret != AVERROR_EOF) {


@@ -168,7 +168,7 @@ int main(int argc, char **argv)
 dst_nb_samples = av_rescale_rnd(swr_get_delay(swr_ctx, src_rate) +
     src_nb_samples, dst_rate, src_rate, AV_ROUND_UP);
 if (dst_nb_samples > max_dst_nb_samples) {
-    av_freep(&dst_data[0]);
+    av_free(dst_data[0]);
     ret = av_samples_alloc(dst_data, &dst_linesize, dst_nb_channels,
         dst_nb_samples, dst_sample_fmt, 1);
     if (ret < 0)
@@ -199,6 +199,7 @@ int main(int argc, char **argv)
 fmt, dst_ch_layout, dst_nb_channels, dst_rate, dst_filename);
 end:
+if (dst_file)
 fclose(dst_file);
 if (src_data)


@@ -132,6 +132,7 @@ int main(int argc, char **argv)
 av_get_pix_fmt_name(dst_pix_fmt), dst_w, dst_h, dst_filename);
 end:
+if (dst_file)
 fclose(dst_file);
 av_freep(&src_data[0]);
 av_freep(&dst_data[0]);


@@ -41,16 +41,18 @@
 #include "libswresample/swresample.h"
 /** The output bit rate in kbit/s */
-#define OUTPUT_BIT_RATE 96000
+#define OUTPUT_BIT_RATE 48000
 /** The number of output channels */
 #define OUTPUT_CHANNELS 2
+/** The audio sample output format */
+#define OUTPUT_SAMPLE_FORMAT AV_SAMPLE_FMT_S16
 /**
  * Convert an error code into a text message.
  * @param error Error code to be converted
  * @return Corresponding error text (not thread-safe)
  */
-static const char *get_error_text(const int error)
+static char *const get_error_text(const int error)
 {
     static char error_buffer[255];
     av_strerror(error, error_buffer, sizeof(error_buffer));
@@ -62,7 +64,6 @@ static int open_input_file(const char *filename,
 AVFormatContext **input_format_context,
 AVCodecContext **input_codec_context)
 {
-    AVCodecContext *avctx;
     AVCodec *input_codec;
     int error;
@@ -92,39 +93,23 @@ static int open_input_file(const char *filename,
 }
 /** Find a decoder for the audio stream. */
-if (!(input_codec = avcodec_find_decoder((*input_format_context)->streams[0]->codecpar->codec_id))) {
+if (!(input_codec = avcodec_find_decoder((*input_format_context)->streams[0]->codec->codec_id))) {
     fprintf(stderr, "Could not find input codec\n");
     avformat_close_input(input_format_context);
     return AVERROR_EXIT;
 }
-/** allocate a new decoding context */
-avctx = avcodec_alloc_context3(input_codec);
-if (!avctx) {
-    fprintf(stderr, "Could not allocate a decoding context\n");
-    avformat_close_input(input_format_context);
-    return AVERROR(ENOMEM);
-}
-/** initialize the stream parameters with demuxer information */
-error = avcodec_parameters_to_context(avctx, (*input_format_context)->streams[0]->codecpar);
-if (error < 0) {
-    avformat_close_input(input_format_context);
-    avcodec_free_context(&avctx);
-    return error;
-}
 /** Open the decoder for the audio stream to use it later. */
-if ((error = avcodec_open2(avctx, input_codec, NULL)) < 0) {
+if ((error = avcodec_open2((*input_format_context)->streams[0]->codec,
+                           input_codec, NULL)) < 0) {
     fprintf(stderr, "Could not open input codec (error '%s')\n",
             get_error_text(error));
-    avcodec_free_context(&avctx);
     avformat_close_input(input_format_context);
     return error;
 }
 /** Save the decoder context for easier access later. */
-*input_codec_context = avctx;
+*input_codec_context = (*input_format_context)->streams[0]->codec;
 return 0;
 }
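The `-' side of this hunk (the newer revision of this file) replaces direct use of AVStream.codec with a context allocated from the stream's codecpar. Condensed into a standalone helper — the name and shape are illustrative, not taken from the file, and error handling is trimmed — the pattern is:

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    /* Hypothetical helper: open a decoder without touching AVStream.codec. */
    static int open_stream_decoder(AVStream *st, AVCodecContext **dec_ctx)
    {
        const AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);
        if (!dec)
            return AVERROR_DECODER_NOT_FOUND;

        *dec_ctx = avcodec_alloc_context3(dec);
        if (!*dec_ctx)
            return AVERROR(ENOMEM);

        /* Copy the demuxer-provided parameters into the new context ... */
        int ret = avcodec_parameters_to_context(*dec_ctx, st->codecpar);
        if (ret < 0)
            return ret;

        /* ... then open it as usual. */
        return avcodec_open2(*dec_ctx, dec, NULL);
    }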
@@ -139,7 +124,6 @@ static int open_output_file(const char *filename,
 AVFormatContext **output_format_context,
 AVCodecContext **output_codec_context)
 {
-    AVCodecContext *avctx = NULL;
     AVIOContext *output_io_context = NULL;
     AVStream *stream = NULL;
     AVCodec *output_codec = NULL;
@@ -179,64 +163,43 @@ static int open_output_file(const char *filename,
 }
 /** Create a new audio stream in the output file container. */
-if (!(stream = avformat_new_stream(*output_format_context, NULL))) {
+if (!(stream = avformat_new_stream(*output_format_context, output_codec))) {
     fprintf(stderr, "Could not create new stream\n");
     error = AVERROR(ENOMEM);
     goto cleanup;
 }
-avctx = avcodec_alloc_context3(output_codec);
-if (!avctx) {
-    fprintf(stderr, "Could not allocate an encoding context\n");
-    error = AVERROR(ENOMEM);
-    goto cleanup;
-}
+/** Save the encoder context for easiert access later. */
+*output_codec_context = stream->codec;
 /**
  * Set the basic encoder parameters.
  * The input file's sample rate is used to avoid a sample rate conversion.
  */
-avctx->channels = OUTPUT_CHANNELS;
-avctx->channel_layout = av_get_default_channel_layout(OUTPUT_CHANNELS);
-avctx->sample_rate = input_codec_context->sample_rate;
-avctx->sample_fmt = output_codec->sample_fmts[0];
-avctx->bit_rate = OUTPUT_BIT_RATE;
+(*output_codec_context)->channels = OUTPUT_CHANNELS;
+(*output_codec_context)->channel_layout = av_get_default_channel_layout(OUTPUT_CHANNELS);
+(*output_codec_context)->sample_rate = input_codec_context->sample_rate;
+(*output_codec_context)->sample_fmt = AV_SAMPLE_FMT_S16;
+(*output_codec_context)->bit_rate = OUTPUT_BIT_RATE;
-/** Allow the use of the experimental AAC encoder */
-avctx->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL;
-/** Set the sample rate for the container. */
-stream->time_base.den = input_codec_context->sample_rate;
-stream->time_base.num = 1;
 /**
  * Some container formats (like MP4) require global headers to be present
  * Mark the encoder so that it behaves accordingly.
  */
 if ((*output_format_context)->oformat->flags & AVFMT_GLOBALHEADER)
-    avctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
+    (*output_codec_context)->flags |= CODEC_FLAG_GLOBAL_HEADER;
 /** Open the encoder for the audio stream to use it later. */
-if ((error = avcodec_open2(avctx, output_codec, NULL)) < 0) {
+if ((error = avcodec_open2(*output_codec_context, output_codec, NULL)) < 0) {
     fprintf(stderr, "Could not open output codec (error '%s')\n",
             get_error_text(error));
     goto cleanup;
 }
-error = avcodec_parameters_from_context(stream->codecpar, avctx);
-if (error < 0) {
-    fprintf(stderr, "Could not initialize stream parameters\n");
-    goto cleanup;
-}
-/** Save the encoder context for easier access later. */
-*output_codec_context = avctx;
 return 0;
 cleanup:
-avcodec_free_context(&avctx);
-avio_closep(&(*output_format_context)->pb);
+avio_close((*output_format_context)->pb);
 avformat_free_context(*output_format_context);
 *output_format_context = NULL;
 return error < 0 ? error : AVERROR_EXIT;
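On the encoder side the same migration runs in the opposite direction: after avcodec_open2() the settings are exported into the stream with avcodec_parameters_from_context() instead of being written through stream->codec. A compressed sketch (hypothetical helper, error paths shortened):

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    /* Hypothetical helper: publish an opened encoder context on a new stream. */
    static int add_encoded_stream(AVFormatContext *ofmt_ctx, AVCodecContext *enc_ctx)
    {
        AVStream *st = avformat_new_stream(ofmt_ctx, NULL);
        if (!st)
            return AVERROR(ENOMEM);

        /* Export the encoder settings (including extradata) to the muxer. */
        int ret = avcodec_parameters_from_context(st->codecpar, enc_ctx);
        if (ret < 0)
            return ret;

        st->time_base = (AVRational){ 1, enc_ctx->sample_rate };
        return 0;
    }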
@@ -308,11 +271,10 @@ static int init_resampler(AVCodecContext *input_codec_context,
 }
 /** Initialize a FIFO buffer for the audio samples to be encoded. */
-static int init_fifo(AVAudioFifo **fifo, AVCodecContext *output_codec_context)
+static int init_fifo(AVAudioFifo **fifo)
 {
     /** Create the FIFO buffer based on the specified output sample format. */
-    if (!(*fifo = av_audio_fifo_alloc(output_codec_context->sample_fmt,
-                                      output_codec_context->channels, 1))) {
+    if (!(*fifo = av_audio_fifo_alloc(OUTPUT_SAMPLE_FORMAT, OUTPUT_CHANNELS, 1))) {
         fprintf(stderr, "Could not allocate FIFO\n");
         return AVERROR(ENOMEM);
     }
@@ -344,7 +306,7 @@ static int decode_audio_frame(AVFrame *frame,
 /** Read one audio frame from the input file into a temporary packet. */
 if ((error = av_read_frame(input_format_context, &input_packet)) < 0) {
-    /** If we are at the end of the file, flush the decoder below. */
+    /** If we are the the end of the file, flush the decoder below. */
     if (error == AVERROR_EOF)
         *finished = 1;
     else {
@@ -364,7 +326,7 @@ static int decode_audio_frame(AVFrame *frame,
 data_present, &input_packet)) < 0) {
     fprintf(stderr, "Could not decode frame (error '%s')\n",
             get_error_text(error));
-    av_packet_unref(&input_packet);
+    av_free_packet(&input_packet);
     return error;
 }
@@ -374,7 +336,7 @@ static int decode_audio_frame(AVFrame *frame,
 */
 if (*finished && *data_present)
     *finished = 0;
-av_packet_unref(&input_packet);
+av_free_packet(&input_packet);
 return 0;
 }
@@ -575,9 +537,6 @@ static int init_output_frame(AVFrame **frame,
 return 0;
 }
-/** Global timestamp for the audio frames */
-static int64_t pts = 0;
 /** Encode one frame worth of audio to the output file. */
 static int encode_audio_frame(AVFrame *frame,
                               AVFormatContext *output_format_context,
@@ -589,12 +548,6 @@ static int encode_audio_frame(AVFrame *frame,
 int error;
 init_packet(&output_packet);
-/** Set a timestamp based on the sample rate for the container. */
-if (frame) {
-    frame->pts = pts;
-    pts += frame->nb_samples;
-}
 /**
  * Encode the audio frame and store it in the temporary packet.
  * The output audio stream encoder is used to do this.
@@ -603,7 +556,7 @@ static int encode_audio_frame(AVFrame *frame,
 frame, data_present)) < 0) {
     fprintf(stderr, "Could not encode frame (error '%s')\n",
             get_error_text(error));
-    av_packet_unref(&output_packet);
+    av_free_packet(&output_packet);
     return error;
 }
@@ -612,11 +565,11 @@ static int encode_audio_frame(AVFrame *frame,
 if ((error = av_write_frame(output_format_context, &output_packet)) < 0) {
     fprintf(stderr, "Could not write frame (error '%s')\n",
             get_error_text(error));
-    av_packet_unref(&output_packet);
+    av_free_packet(&output_packet);
     return error;
 }
-av_packet_unref(&output_packet);
+av_free_packet(&output_packet);
 }
 return 0;
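Several hunks in this file only swap av_free_packet() for av_packet_unref(): both drop the packet's payload, but av_packet_unref() is the reference-counted replacement and also resets the remaining packet fields to their defaults. A tiny usage sketch (hypothetical loop, not taken from the example):

    #include <libavformat/avformat.h>

    /* Hypothetical read loop showing the reference-counted cleanup call. */
    static void drain_packets(AVFormatContext *fmt_ctx)
    {
        AVPacket pkt;
        while (av_read_frame(fmt_ctx, &pkt) >= 0) {
            /* ... consume pkt ... */
            av_packet_unref(&pkt);  /* frees the data, resets the fields */
        }
    }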
@@ -706,7 +659,7 @@ int main(int argc, char **argv)
 &resample_context))
     goto cleanup;
 /** Initialize the FIFO buffer to store audio samples to be encoded. */
-if (init_fifo(&fifo, output_codec_context))
+if (init_fifo(&fifo))
     goto cleanup;
 /** Write the header of the output file container. */
 if (write_output_file_header(output_format_context))
@@ -788,13 +741,13 @@ cleanup:
 av_audio_fifo_free(fifo);
 swr_free(&resample_context);
 if (output_codec_context)
-    avcodec_free_context(&output_codec_context);
+    avcodec_close(output_codec_context);
 if (output_format_context) {
-    avio_closep(&output_format_context->pb);
+    avio_close(output_format_context->pb);
     avformat_free_context(output_format_context);
 }
 if (input_codec_context)
-    avcodec_free_context(&input_codec_context);
+    avcodec_close(input_codec_context);
 if (input_format_context)
     avformat_close_input(&input_format_context);


@@ -1,585 +0,0 @@
/*
* Copyright (c) 2010 Nicolas George
* Copyright (c) 2011 Stefano Sabatini
* Copyright (c) 2014 Andrey Utkin
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
/**
* @file
* API example for demuxing, decoding, filtering, encoding and muxing
* @example transcoding.c
*/
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavfilter/avfiltergraph.h>
#include <libavfilter/buffersink.h>
#include <libavfilter/buffersrc.h>
#include <libavutil/opt.h>
#include <libavutil/pixdesc.h>
static AVFormatContext *ifmt_ctx;
static AVFormatContext *ofmt_ctx;
typedef struct FilteringContext {
AVFilterContext *buffersink_ctx;
AVFilterContext *buffersrc_ctx;
AVFilterGraph *filter_graph;
} FilteringContext;
static FilteringContext *filter_ctx;
static int open_input_file(const char *filename)
{
int ret;
unsigned int i;
ifmt_ctx = NULL;
if ((ret = avformat_open_input(&ifmt_ctx, filename, NULL, NULL)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open input file\n");
return ret;
}
if ((ret = avformat_find_stream_info(ifmt_ctx, NULL)) < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot find stream information\n");
return ret;
}
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
AVStream *stream;
AVCodecContext *codec_ctx;
stream = ifmt_ctx->streams[i];
codec_ctx = stream->codec;
/* Reencode video & audio and remux subtitles etc. */
if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
|| codec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
/* Open decoder */
ret = avcodec_open2(codec_ctx,
avcodec_find_decoder(codec_ctx->codec_id), NULL);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Failed to open decoder for stream #%u\n", i);
return ret;
}
}
}
av_dump_format(ifmt_ctx, 0, filename, 0);
return 0;
}
static int open_output_file(const char *filename)
{
AVStream *out_stream;
AVStream *in_stream;
AVCodecContext *dec_ctx, *enc_ctx;
AVCodec *encoder;
int ret;
unsigned int i;
ofmt_ctx = NULL;
avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, filename);
if (!ofmt_ctx) {
av_log(NULL, AV_LOG_ERROR, "Could not create output context\n");
return AVERROR_UNKNOWN;
}
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
out_stream = avformat_new_stream(ofmt_ctx, NULL);
if (!out_stream) {
av_log(NULL, AV_LOG_ERROR, "Failed allocating output stream\n");
return AVERROR_UNKNOWN;
}
in_stream = ifmt_ctx->streams[i];
dec_ctx = in_stream->codec;
enc_ctx = out_stream->codec;
if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
|| dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
/* in this example, we choose transcoding to same codec */
encoder = avcodec_find_encoder(dec_ctx->codec_id);
if (!encoder) {
av_log(NULL, AV_LOG_FATAL, "Necessary encoder not found\n");
return AVERROR_INVALIDDATA;
}
/* In this example, we transcode to same properties (picture size,
* sample rate etc.). These properties can be changed for output
* streams easily using filters */
if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
enc_ctx->height = dec_ctx->height;
enc_ctx->width = dec_ctx->width;
enc_ctx->sample_aspect_ratio = dec_ctx->sample_aspect_ratio;
/* take first format from list of supported formats */
if (encoder->pix_fmts)
enc_ctx->pix_fmt = encoder->pix_fmts[0];
else
enc_ctx->pix_fmt = dec_ctx->pix_fmt;
/* video time_base can be set to whatever is handy and supported by encoder */
enc_ctx->time_base = dec_ctx->time_base;
} else {
enc_ctx->sample_rate = dec_ctx->sample_rate;
enc_ctx->channel_layout = dec_ctx->channel_layout;
enc_ctx->channels = av_get_channel_layout_nb_channels(enc_ctx->channel_layout);
/* take first format from list of supported formats */
enc_ctx->sample_fmt = encoder->sample_fmts[0];
enc_ctx->time_base = (AVRational){1, enc_ctx->sample_rate};
}
/* Third parameter can be used to pass settings to encoder */
ret = avcodec_open2(enc_ctx, encoder, NULL);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot open video encoder for stream #%u\n", i);
return ret;
}
} else if (dec_ctx->codec_type == AVMEDIA_TYPE_UNKNOWN) {
av_log(NULL, AV_LOG_FATAL, "Elementary stream #%d is of unknown type, cannot proceed\n", i);
return AVERROR_INVALIDDATA;
} else {
/* if this stream must be remuxed */
ret = avcodec_copy_context(ofmt_ctx->streams[i]->codec,
ifmt_ctx->streams[i]->codec);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Copying stream context failed\n");
return ret;
}
}
if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
enc_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}
av_dump_format(ofmt_ctx, 0, filename, 1);
if (!(ofmt_ctx->oformat->flags & AVFMT_NOFILE)) {
ret = avio_open(&ofmt_ctx->pb, filename, AVIO_FLAG_WRITE);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Could not open output file '%s'", filename);
return ret;
}
}
/* init muxer, write output file header */
ret = avformat_write_header(ofmt_ctx, NULL);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error occurred when opening output file\n");
return ret;
}
return 0;
}
static int init_filter(FilteringContext* fctx, AVCodecContext *dec_ctx,
AVCodecContext *enc_ctx, const char *filter_spec)
{
char args[512];
int ret = 0;
AVFilter *buffersrc = NULL;
AVFilter *buffersink = NULL;
AVFilterContext *buffersrc_ctx = NULL;
AVFilterContext *buffersink_ctx = NULL;
AVFilterInOut *outputs = avfilter_inout_alloc();
AVFilterInOut *inputs = avfilter_inout_alloc();
AVFilterGraph *filter_graph = avfilter_graph_alloc();
if (!outputs || !inputs || !filter_graph) {
ret = AVERROR(ENOMEM);
goto end;
}
if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
buffersrc = avfilter_get_by_name("buffer");
buffersink = avfilter_get_by_name("buffersink");
if (!buffersrc || !buffersink) {
av_log(NULL, AV_LOG_ERROR, "filtering source or sink element not found\n");
ret = AVERROR_UNKNOWN;
goto end;
}
snprintf(args, sizeof(args),
"video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
dec_ctx->time_base.num, dec_ctx->time_base.den,
dec_ctx->sample_aspect_ratio.num,
dec_ctx->sample_aspect_ratio.den);
ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
args, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create buffer source\n");
goto end;
}
ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
NULL, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create buffer sink\n");
goto end;
}
ret = av_opt_set_bin(buffersink_ctx, "pix_fmts",
(uint8_t*)&enc_ctx->pix_fmt, sizeof(enc_ctx->pix_fmt),
AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output pixel format\n");
goto end;
}
} else if (dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
buffersrc = avfilter_get_by_name("abuffer");
buffersink = avfilter_get_by_name("abuffersink");
if (!buffersrc || !buffersink) {
av_log(NULL, AV_LOG_ERROR, "filtering source or sink element not found\n");
ret = AVERROR_UNKNOWN;
goto end;
}
if (!dec_ctx->channel_layout)
dec_ctx->channel_layout =
av_get_default_channel_layout(dec_ctx->channels);
snprintf(args, sizeof(args),
"time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=0x%"PRIx64,
dec_ctx->time_base.num, dec_ctx->time_base.den, dec_ctx->sample_rate,
av_get_sample_fmt_name(dec_ctx->sample_fmt),
dec_ctx->channel_layout);
ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
args, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer source\n");
goto end;
}
ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
NULL, NULL, filter_graph);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer sink\n");
goto end;
}
ret = av_opt_set_bin(buffersink_ctx, "sample_fmts",
(uint8_t*)&enc_ctx->sample_fmt, sizeof(enc_ctx->sample_fmt),
AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output sample format\n");
goto end;
}
ret = av_opt_set_bin(buffersink_ctx, "channel_layouts",
(uint8_t*)&enc_ctx->channel_layout,
sizeof(enc_ctx->channel_layout), AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output channel layout\n");
goto end;
}
ret = av_opt_set_bin(buffersink_ctx, "sample_rates",
(uint8_t*)&enc_ctx->sample_rate, sizeof(enc_ctx->sample_rate),
AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Cannot set output sample rate\n");
goto end;
}
} else {
ret = AVERROR_UNKNOWN;
goto end;
}
/* Endpoints for the filter graph. */
outputs->name = av_strdup("in");
outputs->filter_ctx = buffersrc_ctx;
outputs->pad_idx = 0;
outputs->next = NULL;
inputs->name = av_strdup("out");
inputs->filter_ctx = buffersink_ctx;
inputs->pad_idx = 0;
inputs->next = NULL;
if (!outputs->name || !inputs->name) {
ret = AVERROR(ENOMEM);
goto end;
}
if ((ret = avfilter_graph_parse_ptr(filter_graph, filter_spec,
&inputs, &outputs, NULL)) < 0)
goto end;
if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
goto end;
/* Fill FilteringContext */
fctx->buffersrc_ctx = buffersrc_ctx;
fctx->buffersink_ctx = buffersink_ctx;
fctx->filter_graph = filter_graph;
end:
avfilter_inout_free(&inputs);
avfilter_inout_free(&outputs);
return ret;
}
static int init_filters(void)
{
const char *filter_spec;
unsigned int i;
int ret;
filter_ctx = av_malloc_array(ifmt_ctx->nb_streams, sizeof(*filter_ctx));
if (!filter_ctx)
return AVERROR(ENOMEM);
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
filter_ctx[i].buffersrc_ctx = NULL;
filter_ctx[i].buffersink_ctx = NULL;
filter_ctx[i].filter_graph = NULL;
if (!(ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO
|| ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO))
continue;
if (ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
filter_spec = "null"; /* passthrough (dummy) filter for video */
else
filter_spec = "anull"; /* passthrough (dummy) filter for audio */
ret = init_filter(&filter_ctx[i], ifmt_ctx->streams[i]->codec,
ofmt_ctx->streams[i]->codec, filter_spec);
if (ret)
return ret;
}
return 0;
}
static int encode_write_frame(AVFrame *filt_frame, unsigned int stream_index, int *got_frame) {
int ret;
int got_frame_local;
AVPacket enc_pkt;
int (*enc_func)(AVCodecContext *, AVPacket *, const AVFrame *, int *) =
(ifmt_ctx->streams[stream_index]->codec->codec_type ==
AVMEDIA_TYPE_VIDEO) ? avcodec_encode_video2 : avcodec_encode_audio2;
if (!got_frame)
got_frame = &got_frame_local;
av_log(NULL, AV_LOG_INFO, "Encoding frame\n");
/* encode filtered frame */
enc_pkt.data = NULL;
enc_pkt.size = 0;
av_init_packet(&enc_pkt);
ret = enc_func(ofmt_ctx->streams[stream_index]->codec, &enc_pkt,
filt_frame, got_frame);
av_frame_free(&filt_frame);
if (ret < 0)
return ret;
if (!(*got_frame))
return 0;
/* prepare packet for muxing */
enc_pkt.stream_index = stream_index;
av_packet_rescale_ts(&enc_pkt,
ofmt_ctx->streams[stream_index]->codec->time_base,
ofmt_ctx->streams[stream_index]->time_base);
av_log(NULL, AV_LOG_DEBUG, "Muxing frame\n");
/* mux encoded frame */
ret = av_interleaved_write_frame(ofmt_ctx, &enc_pkt);
return ret;
}
static int filter_encode_write_frame(AVFrame *frame, unsigned int stream_index)
{
int ret;
AVFrame *filt_frame;
av_log(NULL, AV_LOG_INFO, "Pushing decoded frame to filters\n");
/* push the decoded frame into the filtergraph */
ret = av_buffersrc_add_frame_flags(filter_ctx[stream_index].buffersrc_ctx,
frame, 0);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error while feeding the filtergraph\n");
return ret;
}
/* pull filtered frames from the filtergraph */
while (1) {
filt_frame = av_frame_alloc();
if (!filt_frame) {
ret = AVERROR(ENOMEM);
break;
}
av_log(NULL, AV_LOG_INFO, "Pulling filtered frame from filters\n");
ret = av_buffersink_get_frame(filter_ctx[stream_index].buffersink_ctx,
filt_frame);
if (ret < 0) {
/* if no more frames for output - returns AVERROR(EAGAIN)
* if flushed and no more frames for output - returns AVERROR_EOF
* rewrite retcode to 0 to show it as normal procedure completion
*/
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
ret = 0;
av_frame_free(&filt_frame);
break;
}
filt_frame->pict_type = AV_PICTURE_TYPE_NONE;
ret = encode_write_frame(filt_frame, stream_index, NULL);
if (ret < 0)
break;
}
return ret;
}
static int flush_encoder(unsigned int stream_index)
{
int ret;
int got_frame;
if (!(ofmt_ctx->streams[stream_index]->codec->codec->capabilities &
AV_CODEC_CAP_DELAY))
return 0;
while (1) {
av_log(NULL, AV_LOG_INFO, "Flushing stream #%u encoder\n", stream_index);
ret = encode_write_frame(NULL, stream_index, &got_frame);
if (ret < 0)
break;
if (!got_frame)
return 0;
}
return ret;
}
int main(int argc, char **argv)
{
int ret;
AVPacket packet = { .data = NULL, .size = 0 };
AVFrame *frame = NULL;
enum AVMediaType type;
unsigned int stream_index;
unsigned int i;
int got_frame;
int (*dec_func)(AVCodecContext *, AVFrame *, int *, const AVPacket *);
if (argc != 3) {
av_log(NULL, AV_LOG_ERROR, "Usage: %s <input file> <output file>\n", argv[0]);
return 1;
}
av_register_all();
avfilter_register_all();
if ((ret = open_input_file(argv[1])) < 0)
goto end;
if ((ret = open_output_file(argv[2])) < 0)
goto end;
if ((ret = init_filters()) < 0)
goto end;
/* read all packets */
while (1) {
if ((ret = av_read_frame(ifmt_ctx, &packet)) < 0)
break;
stream_index = packet.stream_index;
type = ifmt_ctx->streams[packet.stream_index]->codec->codec_type;
av_log(NULL, AV_LOG_DEBUG, "Demuxer gave frame of stream_index %u\n",
stream_index);
if (filter_ctx[stream_index].filter_graph) {
av_log(NULL, AV_LOG_DEBUG, "Going to reencode&filter the frame\n");
frame = av_frame_alloc();
if (!frame) {
ret = AVERROR(ENOMEM);
break;
}
av_packet_rescale_ts(&packet,
ifmt_ctx->streams[stream_index]->time_base,
ifmt_ctx->streams[stream_index]->codec->time_base);
dec_func = (type == AVMEDIA_TYPE_VIDEO) ? avcodec_decode_video2 :
avcodec_decode_audio4;
ret = dec_func(ifmt_ctx->streams[stream_index]->codec, frame,
&got_frame, &packet);
if (ret < 0) {
av_frame_free(&frame);
av_log(NULL, AV_LOG_ERROR, "Decoding failed\n");
break;
}
if (got_frame) {
frame->pts = av_frame_get_best_effort_timestamp(frame);
ret = filter_encode_write_frame(frame, stream_index);
av_frame_free(&frame);
if (ret < 0)
goto end;
} else {
av_frame_free(&frame);
}
} else {
/* remux this frame without reencoding */
av_packet_rescale_ts(&packet,
ifmt_ctx->streams[stream_index]->time_base,
ofmt_ctx->streams[stream_index]->time_base);
ret = av_interleaved_write_frame(ofmt_ctx, &packet);
if (ret < 0)
goto end;
}
av_packet_unref(&packet);
}
/* flush filters and encoders */
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
/* flush filter */
if (!filter_ctx[i].filter_graph)
continue;
ret = filter_encode_write_frame(NULL, i);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Flushing filter failed\n");
goto end;
}
/* flush encoder */
ret = flush_encoder(i);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Flushing encoder failed\n");
goto end;
}
}
av_write_trailer(ofmt_ctx);
end:
av_packet_unref(&packet);
av_frame_free(&frame);
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
avcodec_close(ifmt_ctx->streams[i]->codec);
if (ofmt_ctx && ofmt_ctx->nb_streams > i && ofmt_ctx->streams[i] && ofmt_ctx->streams[i]->codec)
avcodec_close(ofmt_ctx->streams[i]->codec);
if (filter_ctx && filter_ctx[i].filter_graph)
avfilter_graph_free(&filter_ctx[i].filter_graph);
}
av_free(filter_ctx);
avformat_close_input(&ifmt_ctx);
if (ofmt_ctx && !(ofmt_ctx->oformat->flags & AVFMT_NOFILE))
avio_closep(&ofmt_ctx->pb);
avformat_free_context(ofmt_ctx);
if (ret < 0)
av_log(NULL, AV_LOG_ERROR, "Error occurred: %s\n", av_err2str(ret));
return ret ? 1 : 0;
}
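init_filters() in the file above wires every stream through a passthrough filter ("null" / "anull"). Only the filter_spec strings passed to init_filter() have to change to do real work; an illustrative substitution follows (not part of the original file; note that filters altering frame properties such as width or height would also require matching encoder settings):

    /* Illustrative filter_spec choices inside the init_filters() loop. */
    if (ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
        filter_spec = "hflip";        /* mirror the picture horizontally */
    else
        filter_spec = "volume=0.5";   /* halve the audio level */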


@@ -1,5 +1,4 @@
 \input texinfo @c -*- texinfo -*-
-@documentencoding UTF-8
 @settitle FFmpeg FAQ
 @titlepage
@@ -91,63 +90,13 @@ To build FFmpeg, you need to install the development package. It is usually
called @file{libfoo-dev} or @file{libfoo-devel}. You can remove it after the called @file{libfoo-dev} or @file{libfoo-devel}. You can remove it after the
build is finished, but be sure to keep the main package. build is finished, but be sure to keep the main package.
@section How do I make @command{pkg-config} find my libraries?
Somewhere along with your libraries, there is a @file{.pc} file (or several)
in a @file{pkgconfig} directory. You need to set environment variables to
point @command{pkg-config} to these files.
If you need to @emph{add} directories to @command{pkg-config}'s search list
(typical use case: library installed separately), add it to
@code{$PKG_CONFIG_PATH}:
@example
export PKG_CONFIG_PATH=/opt/x264/lib/pkgconfig:/opt/opus/lib/pkgconfig
@end example
If you need to @emph{replace} @command{pkg-config}'s search list
(typical use case: cross-compiling), set it in
@code{$PKG_CONFIG_LIBDIR}:
@example
export PKG_CONFIG_LIBDIR=/home/me/cross/usr/lib/pkgconfig:/home/me/cross/usr/local/lib/pkgconfig
@end example
If you need to know the library's internal dependencies (typical use: static
linking), add the @code{--static} option to @command{pkg-config}:
@example
./configure --pkg-config-flags=--static
@end example
@section How do I use @command{pkg-config} when cross-compiling?
The best way is to install @command{pkg-config} in your cross-compilation
environment. It will automatically use the cross-compilation libraries.
You can also use @command{pkg-config} from the host environment by
specifying explicitly @code{--pkg-config=pkg-config} to @command{configure}.
In that case, you must point @command{pkg-config} to the correct directories
using the @code{PKG_CONFIG_LIBDIR}, as explained in the previous entry.
As an intermediate solution, you can place in your cross-compilation
environment a script that calls the host @command{pkg-config} with
@code{PKG_CONFIG_LIBDIR} set. That script can look like that:
@example
#!/bin/sh
PKG_CONFIG_LIBDIR=/path/to/cross/lib/pkgconfig
export PKG_CONFIG_LIBDIR
exec /usr/bin/pkg-config "$@@"
@end example
@chapter Usage @chapter Usage
@section ffmpeg does not work; what is wrong? @section ffmpeg does not work; what is wrong?
Try a @code{make distclean} in the ffmpeg source directory before the build. Try a @code{make distclean} in the ffmpeg source directory before the build.
If this does not help see If this does not help see
(@url{https://ffmpeg.org/bugreports.html}). (@url{http://ffmpeg.org/bugreports.html}).
@section How do I encode single pictures into movies? @section How do I encode single pictures into movies?
@@ -311,18 +260,18 @@ invoking ffmpeg with several @option{-i} options.
For audio, to put all channels together in a single stream (example: two For audio, to put all channels together in a single stream (example: two
mono streams into one stereo stream): this is sometimes called to mono streams into one stereo stream): this is sometimes called to
@emph{merge} them, and can be done using the @emph{merge} them, and can be done using the
@url{https://ffmpeg.org/ffmpeg-filters.html#amerge, @code{amerge}} filter. @url{http://ffmpeg.org/ffmpeg-filters.html#amerge, @code{amerge}} filter.
@item @item
For audio, to play one on top of the other: this is called to @emph{mix} For audio, to play one on top of the other: this is called to @emph{mix}
them, and can be done by first merging them into a single stream and then them, and can be done by first merging them into a single stream and then
using the @url{https://ffmpeg.org/ffmpeg-filters.html#pan, @code{pan}} filter to mix using the @url{http://ffmpeg.org/ffmpeg-filters.html#pan, @code{pan}} filter to mix
the channels at will. the channels at will.
@item @item
For video, to display both together, side by side or one on top of a part of For video, to display both together, side by side or one on top of a part of
the other; it can be done using the the other; it can be done using the
@url{https://ffmpeg.org/ffmpeg-filters.html#overlay, @code{overlay}} video filter. @url{http://ffmpeg.org/ffmpeg-filters.html#overlay, @code{overlay}} video filter.
@end itemize @end itemize
@@ -333,23 +282,23 @@ There are several solutions, depending on the exact circumstances.
@subsection Concatenating using the concat @emph{filter} @subsection Concatenating using the concat @emph{filter}
FFmpeg has a @url{https://ffmpeg.org/ffmpeg-filters.html#concat, FFmpeg has a @url{http://ffmpeg.org/ffmpeg-filters.html#concat,
@code{concat}} filter designed specifically for that, with examples in the @code{concat}} filter designed specifically for that, with examples in the
documentation. This operation is recommended if you need to re-encode. documentation. This operation is recommended if you need to re-encode.
@subsection Concatenating using the concat @emph{demuxer} @subsection Concatenating using the concat @emph{demuxer}
FFmpeg has a @url{https://www.ffmpeg.org/ffmpeg-formats.html#concat, FFmpeg has a @url{http://www.ffmpeg.org/ffmpeg-formats.html#concat,
@code{concat}} demuxer which you can use when you want to avoid a re-encode and @code{concat}} demuxer which you can use when you want to avoid a re-encode and
your format doesn't support file level concatenation. your format doesn't support file level concatenation.
@subsection Concatenating using the concat @emph{protocol} (file level) @subsection Concatenating using the concat @emph{protocol} (file level)
FFmpeg has a @url{https://ffmpeg.org/ffmpeg-protocols.html#concat, FFmpeg has a @url{http://ffmpeg.org/ffmpeg-protocols.html#concat,
@code{concat}} protocol designed specifically for that, with examples in the @code{concat}} protocol designed specifically for that, with examples in the
documentation. documentation.
A few multimedia containers (MPEG-1, MPEG-2 PS, DV) allow one to concatenate A few multimedia containers (MPEG-1, MPEG-2 PS, DV) allow to concatenate
video by merely concatenating the files containing them. video by merely concatenating the files containing them.
Hence you may concatenate your multimedia files by first transcoding them to Hence you may concatenate your multimedia files by first transcoding them to
@@ -443,7 +392,7 @@ VOB and a few other formats do not have a global header that describes
everything present in the file. Instead, applications are supposed to scan everything present in the file. Instead, applications are supposed to scan
the file to see what it contains. Since VOB files are frequently large, only the file to see what it contains. Since VOB files are frequently large, only
the beginning is scanned. If the subtitles happen only later in the file, the beginning is scanned. If the subtitles happen only later in the file,
they will not be initially detected. they will not be initally detected.
Some applications, including the @code{ffmpeg} command-line tool, can only Some applications, including the @code{ffmpeg} command-line tool, can only
work with streams that were detected during the initial scan; streams that work with streams that were detected during the initial scan; streams that
@@ -467,40 +416,6 @@ point acceptable for your tastes. The most common options to do that are
@option{-qscale} and @option{-qmax}, but you should peruse the documentation @option{-qscale} and @option{-qmax}, but you should peruse the documentation
of the encoder you chose. of the encoder you chose.
@section I have a stretched video, why does scaling does not fix it?
A lot of video codecs and formats can store the @emph{aspect ratio} of the
video: this is the ratio between the width and the height of either the full
image (DAR, display aspect ratio) or individual pixels (SAR, sample aspect
ratio). For example, EGA screens at resolution 640×350 had 4:3 DAR and 35:48
SAR.
Most still image processing work with square pixels, i.e. 1:1 SAR, but a lot
of video standards, especially from the analogic-numeric transition era, use
non-square pixels.
Most processing filters in FFmpeg handle the aspect ratio to avoid
stretching the image: cropping adjusts the DAR to keep the SAR constant,
scaling adjusts the SAR to keep the DAR constant.
If you want to stretch, or “unstretch”, the image, you need to override the
information with the
@url{https://ffmpeg.org/ffmpeg-filters.html#setdar_002c-setsar, @code{setdar or setsar filters}}.
Do not forget to examine carefully the original video to check whether the
stretching comes from the image or from the aspect ratio information.
For example, to fix a badly encoded EGA capture, use the following commands,
either the first one to upscale to square pixels or the second one to set
the correct aspect ratio or the third one to avoid transcoding (may not work
depending on the format / codec / player / phase of the moon):
@example
ffmpeg -i ega_screen.nut -vf scale=640:480,setsar=1 ega_screen_scaled.nut
ffmpeg -i ega_screen.nut -vf setdar=4/3 ega_screen_anamorphic.nut
ffmpeg -i ega_screen.nut -aspect 4/3 -c copy ega_screen_overridden.nut
@end example
@chapter Development @chapter Development
@section Are there examples illustrating how to use the FFmpeg libraries, particularly libavcodec and libavformat? @section Are there examples illustrating how to use the FFmpeg libraries, particularly libavcodec and libavformat?
@@ -589,7 +504,7 @@ see @file{libavformat/aviobuf.c} in FFmpeg and @file{libmpdemux/demux_lavf.c} in
@section Where is the documentation about ffv1, msmpeg4, asv1, 4xm? @section Where is the documentation about ffv1, msmpeg4, asv1, 4xm?
see @url{https://www.ffmpeg.org/~michael/} see @url{http://www.ffmpeg.org/~michael/}
@section How do I feed H.263-RTP (and other codecs in RTP) to libavcodec? @section How do I feed H.263-RTP (and other codecs in RTP) to libavcodec?


@@ -1,5 +1,4 @@
 \input texinfo @c -*- texinfo -*-
-@documentencoding UTF-8
 @settitle FFmpeg Automated Testing Environment
 @titlepage
@@ -54,8 +53,8 @@ make fate SAMPLES=fate-suite/
The above commands set the samples location by passing a makefile The above commands set the samples location by passing a makefile
variable via command line. It is also possible to set the samples variable via command line. It is also possible to set the samples
location at source configuration time by invoking configure with location at source configuration time by invoking configure with
@option{--samples=<path to the samples directory>}. Afterwards you can `--samples=<path to the samples directory>'. Afterwards you can
invoke the makefile targets without setting the @var{SAMPLES} makefile invoke the makefile targets without setting the SAMPLES makefile
variable. This is illustrated by the following commands: variable. This is illustrated by the following commands:
@example @example
@@ -102,14 +101,14 @@ The mentioned configuration template is also available here:
@end ifhtml @end ifhtml
Create a configuration that suits your needs, based on the configuration Create a configuration that suits your needs, based on the configuration
template. The @env{slot} configuration variable can be any string that is not template. The `slot' configuration variable can be any string that is not
yet used, but it is suggested that you name it adhering to the following yet used, but it is suggested that you name it adhering to the following
pattern @samp{@var{arch}-@var{os}-@var{compiler}-@var{compiler version}}. The pattern <arch>-<os>-<compiler>-<compiler version>. The configuration file
configuration file itself will be sourced in a shell script, therefore all itself will be sourced in a shell script, therefore all shell features may
shell features may be used. This enables you to setup the environment as you be used. This enables you to setup the environment as you need it for your
need it for your build. build.
For your first test runs the @env{fate_recv} variable should be empty or For your first test runs the `fate_recv' variable should be empty or
commented out. This will run everything as normal except that it will omit commented out. This will run everything as normal except that it will omit
the submission of the results to the server. The following files should be the submission of the results to the server. The following files should be
present in $workdir as specified in the configuration file: present in $workdir as specified in the configuration file:
@@ -132,7 +131,7 @@ of the server and to accept its host key. This can usually be achieved by
running your SSH client manually and killing it after you accepted the key. running your SSH client manually and killing it after you accepted the key.
The FATE server's fingerprint is: The FATE server's fingerprint is:
@table @samp @table @option
@item RSA @item RSA
d3:f1:83:97:a4:75:2b:a6:fb:d6:e8:aa:81:93:97:51 d3:f1:83:97:a4:75:2b:a6:fb:d6:e8:aa:81:93:97:51
@item ECDSA @item ECDSA
@@ -165,7 +164,7 @@ Run the FATE test suite (requires the fate-suite dataset).
@section Makefile variables @section Makefile variables
@table @env @table @option
@item V @item V
Verbosity level, can be set to 0, 1 or 2. Verbosity level, can be set to 0, 1 or 2.
@itemize @itemize
@@ -183,20 +182,20 @@ Specify how many threads to use while running regression tests, it is
quite useful to detect thread-related regressions. quite useful to detect thread-related regressions.
@item THREAD_TYPE @item THREAD_TYPE
Specify which threading strategy test, either @samp{slice} or @samp{frame}, Specify which threading strategy test, either @var{slice} or @var{frame},
by default @samp{slice+frame} by default @var{slice+frame}
@item CPUFLAGS @item CPUFLAGS
Specify CPU flags. Specify CPU flags.
@item TARGET_EXEC @item TARGET_EXEC
Specify or override the wrapper used to run the tests. Specify or override the wrapper used to run the tests.
The @env{TARGET_EXEC} option provides a way to run FATE wrapped in The @var{TARGET_EXEC} option provides a way to run FATE wrapped in
@command{valgrind}, @command{qemu-user} or @command{wine} or on remote targets @command{valgrind}, @command{qemu-user} or @command{wine} or on remote targets
through @command{ssh}. through @command{ssh}.
@item GEN @item GEN
Set to @samp{1} to generate the missing or mismatched references. Set to @var{1} to generate the missing or mismatched references.
@end table @end table
@section Examples @section Examples


@@ -1,6 +1,5 @@
 slot=                                    # some unique identifier
 repo=git://source.ffmpeg.org/ffmpeg.git  # the source repository
-#branch=release/2.6                      # the branch to test
 samples=                                 # path to samples directory
 workdir=                                 # directory in which to do all the work
 #fate_recv="ssh -T fate@fate.ffmpeg.org" # command to submit report


@@ -1,5 +1,4 @@
 \input texinfo @c -*- texinfo -*-
-@documentencoding UTF-8
 @settitle FFmpeg Bitstream Filters Documentation
 @titlepage


@@ -1,5 +1,4 @@
 \input texinfo @c -*- texinfo -*-
-@documentencoding UTF-8
 @settitle FFmpeg Codecs Documentation
 @titlepage


@@ -1,5 +1,4 @@
 \input texinfo @c -*- texinfo -*-
-@documentencoding UTF-8
 @settitle FFmpeg Devices Documentation
 @titlepage


@@ -1,5 +1,4 @@
 \input texinfo @c -*- texinfo -*-
-@documentencoding UTF-8
 @settitle FFmpeg Filters Documentation
 @titlepage


@@ -1,5 +1,4 @@
 \input texinfo @c -*- texinfo -*-
-@documentencoding UTF-8
 @settitle FFmpeg Formats Documentation
 @titlepage


@@ -1,5 +1,4 @@
 \input texinfo @c -*- texinfo -*-
-@documentencoding UTF-8
 @settitle FFmpeg Protocols Documentation
 @titlepage


@@ -1,5 +1,4 @@
 \input texinfo @c -*- texinfo -*-
-@documentencoding UTF-8
 @settitle FFmpeg Resampler Documentation
 @titlepage
@@ -15,7 +14,7 @@
 The FFmpeg resampler provides a high-level interface to the
 libswresample library audio resampling utilities. In particular it
-allows one to perform audio resampling, audio channel layout rematrixing,
+allows to perform audio resampling, audio channel layout rematrixing,
 and convert audio format and packing layout.
 @c man end DESCRIPTION


@@ -1,5 +1,4 @@
 \input texinfo @c -*- texinfo -*-
-@documentencoding UTF-8
 @settitle FFmpeg Scaler Documentation
 @titlepage
@@ -14,7 +13,7 @@
 @c man begin DESCRIPTION
 
 The FFmpeg rescaler provides a high-level interface to the libswscale
-library image conversion utilities. In particular it allows one to perform
+library image conversion utilities. In particular it allows to perform
 image rescaling and pixel format conversion.
 
 @c man end DESCRIPTION


@@ -1,5 +1,4 @@
 \input texinfo @c -*- texinfo -*-
-@documentencoding UTF-8
 @settitle FFmpeg Utilities Documentation
 @titlepage


@@ -1,5 +1,4 @@
 \input texinfo @c -*- texinfo -*-
-@documentencoding UTF-8
 @settitle ffmpeg Documentation
 @titlepage
@@ -80,7 +79,7 @@ The format option may be needed for raw input files.
 The transcoding process in @command{ffmpeg} for each output can be described by
 the following diagram:
 
-@verbatim
+@example
  _______              ______________
 |       |            |              |
 | input |  demuxer   | encoded data |   decoder
@@ -91,15 +90,14 @@ the following diagram:
                                          |         |
                                          | decoded |
                                          | frames  |
-                                         |_________|
- ________             ______________          |
+ ________             ______________    |_________|
 |        |           |              |         |
 | output | <-------- | encoded data | <----+
 | file   |   muxer   | packets      |   encoder
 |________|           |______________|
-@end verbatim
+@end example
 
 @command{ffmpeg} calls the libavformat library (containing demuxers) to read
 input files and get packets containing encoded data from them. When there are
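A minimal command that walks through the whole pipeline sketched above (demux, decode, encode, mux) could look like the following; input.avi and output.mp4 are placeholder names and libx264 is only usable if the build includes it:

    ffmpeg -i input.avi -c:v libx264 output.mp4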
@@ -124,31 +122,26 @@ Simple filtergraphs are those that have exactly one input and output, both of
 the same type. In the above diagram they can be represented by simply inserting
 an additional step between decoding and encoding:
 
-@verbatim
- _________                        ______________
-|         |                      |              |
-| decoded |                      | encoded data |
-| frames  |\                   _ | packets      |
-|_________| \                  /||______________|
-             \   __________   /
-  simple     _\||          | /  encoder
-  filtergraph   | filtered |/
-                | frames   |
-                |__________|
-@end verbatim
+@example
+ _________                __________             ______________
+|         |   simple     |          |           |              |
+| decoded |  fltrgrph    | filtered |  encoder  | encoded data |
+| frames  | -----------> | frames   | --------> | packets      |
+|_________|              |__________|           |______________|
+@end example
 
 Simple filtergraphs are configured with the per-stream @option{-filter} option
 (with @option{-vf} and @option{-af} aliases for video and audio respectively).
 A simple filtergraph for video can look for example like this:
 
-@verbatim
+@example
  _______        _____________        _______        ________
 |       |      |             |      |       |      |        |
 | input | ---> | deinterlace | ---> | scale | ---> | output |
 |_______|      |_____________|      |_______|      |________|
-@end verbatim
+@end example
 
 Note that some filters change frame properties but not frame contents. E.g. the
 @code{fps} filter in the example above changes number of frames, but does not
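As a hedged illustration of such a per-stream chain (file names are placeholders), deinterlacing and scaling in one simple filtergraph could be written as:

    ffmpeg -i input.mp4 -vf "yadif,scale=1280:720" output.mp4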
@@ -161,7 +154,7 @@ processing chain applied to one stream. This is the case, for example, when the
 more than one input and/or output, or when output stream type is different from
 input. They can be represented with the following diagram:
 
-@verbatim
+@example
  _________
 |         |
 | input 0 |\    __________
@@ -179,7 +172,7 @@ input. They can be represented with the following diagram:
 | input 2 |/
 |_________|
-@end verbatim
+@end example
 
 Complex filtergraphs are configured with the @option{-filter_complex} option.
 Note that this option is global, since a complex filtergraph, by its nature,
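A small sketch of a complex filtergraph with two inputs (placeholder file names), overlaying the second video on the first:

    ffmpeg -i main.mp4 -i logo.png -filter_complex "[0:v][1:v]overlay=10:10" -c:a copy out.mp4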
@@ -198,14 +191,14 @@ step for the specified stream, so it does only demuxing and muxing. It is useful
 for changing the container format or modifying container-level metadata. The
 diagram above will, in this case, simplify to this:
 
-@verbatim
+@example
  _______              ______________            ________
 |       |            |              |          |        |
 | input |  demuxer   | encoded data |  muxer   | output |
 | file  | ---------> | packets      | -------> | file   |
 |_______|            |______________|          |________|
-@end verbatim
+@end example
 
 Since there is no decoding or encoding, it is very fast and there is no quality
 loss. However, it might not work in some cases because of many factors. Applying
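For instance, remuxing from Matroska to MP4 without re-encoding (placeholder names; it only succeeds when the codecs are allowed in the target container):

    ffmpeg -i input.mkv -c copy output.mp4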
@@ -253,10 +246,6 @@ Overwrite output files without asking.
 Do not overwrite output files, and exit immediately if a specified
 output file already exists.
 
-@item -stream_loop @var{number} (@emph{input})
-Set number of times input stream shall be looped. Loop 0 means no loop,
-loop -1 means infinite loop.
-
 @item -c[:@var{stream_specifier}] @var{codec} (@emph{input/output,per-stream})
 @itemx -codec[:@var{stream_specifier}] @var{codec} (@emph{input/output,per-stream})
 Select an encoder (when used before an output file) or a decoder (when used
@@ -277,34 +266,25 @@ ffmpeg -i INPUT -map 0 -c copy -c:v:1 libx264 -c:a:137 libvorbis OUTPUT
 will copy all the streams except the second video, which will be encoded with
 libx264, and the 138th audio, which will be encoded with libvorbis.
 
-@item -t @var{duration} (@emph{input/output})
-When used as an input option (before @code{-i}), limit the @var{duration} of
-data read from the input file.
-
-When used as an output option (before an output filename), stop writing the
-output after its duration reaches @var{duration}.
-
-@var{duration} must be a time duration specification,
-see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
+@item -t @var{duration} (@emph{output})
+Stop writing the output after its duration reaches @var{duration}.
+@var{duration} may be a number in seconds, or in @code{hh:mm:ss[.xxx]} form.
 
 -to and -t are mutually exclusive and -t has priority.
 
 @item -to @var{position} (@emph{output})
 Stop writing the output at @var{position}.
-@var{position} must be a time duration specification,
-see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
+@var{position} may be a number in seconds, or in @code{hh:mm:ss[.xxx]} form.
 
 -to and -t are mutually exclusive and -t has priority.
 
 @item -fs @var{limit_size} (@emph{output})
-Set the file size limit, expressed in bytes. No further chunk of bytes is written
-after the limit is exceeded. The size of the output file is slightly more than the
-requested file size.
+Set the file size limit, expressed in bytes.
 
 @item -ss @var{position} (@emph{input/output})
 When used as an input option (before @code{-i}), seeks in this input file to
-@var{position}. Note that in most formats it is not possible to seek exactly,
-so @command{ffmpeg} will seek to the closest seek point before @var{position}.
+@var{position}. Note the in most formats it is not possible to seek exactly, so
+@command{ffmpeg} will seek to the closest seek point before @var{position}.
 When transcoding and @option{-accurate_seek} is enabled (the default), this
 extra segment between the seek point and @var{position} will be decoded and
 discarded. When doing stream copy or when @option{-noaccurate_seek} is used, it
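A typical combination of these options, cutting roughly thirty seconds starting at the one-minute mark without re-encoding (placeholder names):

    ffmpeg -ss 00:01:00 -i input.mp4 -t 30 -c copy clip.mp4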
@@ -313,13 +293,7 @@ will be preserved.
 When used as an output option (before an output filename), decodes but discards
 input until the timestamps reach @var{position}.
-@var{position} must be a time duration specification,
-see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
-
-@item -sseof @var{position} (@emph{input/output})
-Like the @code{-ss} option but relative to the "end of file". That is negative
-values are earlier in the file, 0 is at EOF.
+@var{position} may be either in seconds or in @code{hh:mm:ss[.xxx]} form.
 
 @item -itsoffset @var{offset} (@emph{input})
 Set the input time offset.
@@ -334,15 +308,15 @@ the time duration specified in @var{offset}.
 @item -timestamp @var{date} (@emph{output})
 Set the recording timestamp in the container.
-@var{date} must be a date specification,
+@var{date} must be a time duration specification,
 see @ref{date syntax,,the Date section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
 
 @item -metadata[:metadata_specifier] @var{key}=@var{value} (@emph{output,per-metadata})
 Set a metadata key/value pair.
 
 An optional @var{metadata_specifier} may be given to set metadata
-on streams, chapters or programs. See @code{-map_metadata}
-documentation for details.
+on streams or chapters. See @code{-map_metadata} documentation for
+details.
 
 This option overrides metadata set with @code{-map_metadata}. It is
 also possible to delete metadata by using an empty value.
@@ -354,14 +328,9 @@ ffmpeg -i in.avi -metadata title="my title" out.flv
 To set the language of the first audio stream:
 @example
-ffmpeg -i INPUT -metadata:s:a:0 language=eng OUTPUT
+ffmpeg -i INPUT -metadata:s:a:1 language=eng OUTPUT
 @end example
 
-@item -program [title=@var{title}:][program_num=@var{program_num}:]st=@var{stream}[:st=@var{stream}...] (@emph{output})
-Creates a program with the specified @var{title}, @var{program_num} and adds the specified
-@var{stream}(s) to it.
-
 @item -target @var{type} (@emph{output})
 Specify target file type (@code{vcd}, @code{svcd}, @code{dvd}, @code{dv},
 @code{dv50}). @var{type} may be prefixed with @code{pal-}, @code{ntsc-} or
@@ -380,7 +349,7 @@ ffmpeg -i myfile.avi -target vcd -bf 2 /tmp/vcd.mpg
 @end example
 
 @item -dframes @var{number} (@emph{output})
-Set the number of data frames to output. This is an alias for @code{-frames:d}.
+Set the number of data frames to record. This is an alias for @code{-frames:d}.
 
 @item -frames[:@var{stream_specifier}] @var{framecount} (@emph{output,per-stream})
 Stop writing to the stream after @var{framecount} frames.
@@ -481,24 +450,18 @@ Technical note -- attachments are implemented as codec extradata, so this
 option can actually be used to extract extradata from any stream, not just
 attachments.
 
-@item -noautorotate
-Disable automatically rotating video based on file metadata.
-
 @end table
 
 @section Video Options
 
 @table @option
 @item -vframes @var{number} (@emph{output})
-Set the number of video frames to output. This is an alias for @code{-frames:v}.
+Set the number of video frames to record. This is an alias for @code{-frames:v}.
 @item -r[:@var{stream_specifier}] @var{fps} (@emph{input/output,per-stream})
 Set frame rate (Hz value, fraction or abbreviation).
 
 As an input option, ignore any timestamps stored in the file and instead
 generate timestamps assuming constant frame rate @var{fps}.
-This is not the same as the @option{-framerate} option used for some input formats
-like image2 or v4l2 (it used to be the same in older versions of FFmpeg).
-If in doubt use @option{-framerate} instead of the input option @option{-r}.
 
 As an output option, duplicate or drop input frames to achieve constant output
 frame rate @var{fps}.
@@ -560,7 +523,7 @@ filter the stream.
 This is an alias for @code{-filter:v}, see the @ref{filter_option,,-filter option}.
 @end table
 
-@section Advanced Video options
+@section Advanced Video Options
 @table @option
 @item -pix_fmt[:@var{stream_specifier}] @var{format} (@emph{input/output,per-stream})
@@ -674,24 +637,8 @@ Do not use any hardware acceleration (the default).
 @item auto
 Automatically select the hardware acceleration method.
 
-@item vda
-Use Apple VDA hardware acceleration.
-
 @item vdpau
 Use VDPAU (Video Decode and Presentation API for Unix) hardware acceleration.
-
-@item dxva2
-Use DXVA2 (DirectX Video Acceleration) hardware acceleration.
-
-@item qsv
-Use the Intel QuickSync Video acceleration for video transcoding.
-Unlike most other values, this option does not enable accelerated decoding (that
-is used automatically whenever a qsv decoder is selected), but accelerated
-transcoding, without copying the frames into the system memory.
-For it to work, both the decoder and the encoder must support QSV acceleration
-and no filters must be used.
 @end table
 
 This option has no effect if the selected hwaccel is not available or not
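A hedged example of requesting one of the methods listed above for decoding only (the input name is a placeholder, and the method must be compiled in and supported for the input codec):

    ffmpeg -hwaccel vdpau -i input.mp4 -f null -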
@@ -714,36 +661,14 @@ method chosen.
 @item vdpau
 For VDPAU, this option specifies the X11 display/screen to use. If this option
 is not specified, the value of the @var{DISPLAY} environment variable is used
-
-@item dxva2
-For DXVA2, this option should contain the number of the display adapter to use.
-If this option is not specified, the default adapter is used.
-
-@item qsv
-For QSV, this option corresponds to the values of MFX_IMPL_* . Allowed values
-are:
-@table @option
-@item auto
-@item sw
-@item hw
-@item auto_any
-@item hw_any
-@item hw2
-@item hw3
-@item hw4
 @end table
 @end table
-
-@item -hwaccels
-List all hardware acceleration methods supported in this build of ffmpeg.
-
-@end table
 
 @section Audio Options
 
 @table @option
 @item -aframes @var{number} (@emph{output})
-Set the number of audio frames to output. This is an alias for @code{-frames:a}.
+Set the number of audio frames to record. This is an alias for @code{-frames:a}.
 @item -ar[:@var{stream_specifier}] @var{freq} (@emph{input/output,per-stream})
 Set the audio sampling frequency. For output streams it is set by
 default to the frequency of the corresponding input stream. For input
@@ -771,7 +696,7 @@ filter the stream.
 This is an alias for @code{-filter:a}, see the @ref{filter_option,,-filter option}.
 @end table
 
-@section Advanced Audio options
+@section Advanced Audio options:
 @table @option
 @item -atag @var{fourcc/tag} (@emph{output})
@@ -786,7 +711,7 @@ stereo but not 6 channels as 5.1. The default is to always try to guess. Use
 0 to disable all guessing.
 @end table
 
-@section Subtitle options
+@section Subtitle options:
 @table @option
 @item -scodec @var{codec} (@emph{input/output})
@@ -797,7 +722,7 @@ Disable subtitle recording.
 Deprecated, see -bsf
 @end table
 
-@section Advanced Subtitle options
+@section Advanced Subtitle options:
 @table @option
@@ -875,21 +800,8 @@ To map all the streams except the second audio, use negative mappings
 ffmpeg -i INPUT -map 0 -map -0:a:1 OUTPUT
 @end example
 
-To pick the English audio stream:
-@example
-ffmpeg -i INPUT -map 0:m:language:eng OUTPUT
-@end example
-
 Note that using this option disables the default mappings for this output file.
 
-@item -ignore_unknown
-Ignore input streams with unknown type instead of failing if copying
-such streams is attempted.
-
-@item -copy_unknown
-Allow input streams with unknown type to be copied instead of failing if copying
-such streams is attempted.
-
 @item -map_channel [@var{input_file_id}.@var{stream_specifier}.@var{channel_id}|-1][:@var{output_file_id}.@var{stream_specifier}]
 Map an audio channel from a given input to an output. If
 @var{output_file_id}.@var{stream_specifier} is not set, the audio channel will
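As a short sketch of explicit stream selection with -map (placeholder names), keeping only the first video stream and the second audio stream:

    ffmpeg -i INPUT -map 0:v:0 -map 0:a:1 -c copy OUTPUT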
@@ -1053,13 +965,6 @@ With -map you can select from which stream the timestamps should be
 taken. You can leave either video or audio unchanged and sync the
 remaining stream(s) to the unchanged one.
 
-@item -frame_drop_threshold @var{parameter}
-Frame drop threshold, which specifies how much behind video frames can
-be before they are dropped. In frame rate units, so 1.0 is one frame.
-The default is -1.1. One possible usecase is to avoid framedrops in case
-of noisy timestamps or to increase frame drop precision in case of exact
-timestamps.
-
 @item -async @var{samples_per_second}
 Audio sync method. "Stretches/squeezes" the audio stream to match the timestamps,
 the parameter is the maximum samples per second by which the audio is changed.
@@ -1082,12 +987,6 @@ processing (e.g. in case the format option @option{avoid_negative_ts}
 is enabled) the output timestamps may mismatch with the input
 timestamps even when this option is selected.
 
-@item -start_at_zero
-When used with @option{copyts}, shift input timestamps so they start at zero.
-This means that using e.g. @code{-ss 50} will make output timestamps start at
-50 seconds, regardless of what timestamp the input file started at.
-
 @item -copytb @var{mode}
 Specify how to set the encoder timebase when stream copying. @var{mode} is an
 integer numeric value, and can assume one of the following values:
@@ -1216,19 +1115,6 @@ This option enables or disables accurate seeking in input files with the
 transcoding. Use @option{-noaccurate_seek} to disable it, which may be useful
 e.g. when copying some streams and transcoding the others.
 
-@item -seek_timestamp (@emph{input})
-This option enables or disables seeking by timestamp in input files with the
-@option{-ss} option. It is disabled by default. If enabled, the argument
-to the @option{-ss} option is considered an actual timestamp, and is not
-offset by the start time of the file. This matters only for files which do
-not start from timestamp 0, such as transport streams.
-
-@item -thread_queue_size @var{size} (@emph{input})
-This option sets the maximum number of queued packets when reading from the
-file or device. With low latency / high rate live streams, packets may be
-discarded if they are not read in a timely manner; raising this value can
-avoid it.
-
 @item -override_ffserver (@emph{global})
 Overrides the input specifications from @command{ffserver}. Using this
 option you can map any input stream to @command{ffserver} and control
@@ -1239,46 +1125,6 @@ requested by @command{ffserver}.
 The option is intended for cases where features are needed that cannot be
 specified to @command{ffserver} but can be to @command{ffmpeg}.
 
-@item -sdp_file @var{file} (@emph{global})
-Print sdp information for an output stream to @var{file}.
-This allows dumping sdp information when at least one output isn't an
-rtp stream. (Requires at least one of the output formats to be rtp).
-
-@item -discard (@emph{input})
-Allows discarding specific streams or frames of streams at the demuxer.
-Not all demuxers support this.
-
-@table @option
-@item none
-Discard no frame.
-
-@item default
-Default, which discards no frames.
-
-@item noref
-Discard all non-reference frames.
-
-@item bidir
-Discard all bidirectional frames.
-
-@item nokey
-Discard all frames excepts keyframes.
-
-@item all
-Discard all frames.
-@end table
-
-@item -abort_on @var{flags} (@emph{global})
-Stop and abort on various conditions. The following flags are available:
-
-@table @option
-@item empty_output
-No packets were passed to the muxer, the output is empty.
-@end table
-
-@item -xerror (@emph{global})
-Stop and exit on error
-
 @end table
 
 As a special exception, you can use a bitmap subtitle stream as input: it
@@ -1304,10 +1150,7 @@ awkward to specify on the command line. Lines starting with the hash
 ('#') character are ignored and are used to provide comments. Check
 the @file{presets} directory in the FFmpeg source tree for examples.
 
-There are two types of preset files: ffpreset and avpreset files.
-
-@subsection ffpreset files
-
-ffpreset files are specified with the @code{vpre}, @code{apre},
+Preset files are specified with the @code{vpre}, @code{apre},
 @code{spre}, and @code{fpre} options. The @code{fpre} option takes the
 filename of the preset instead of a preset name as input and can be
 used for any kind of codec. For the @code{vpre}, @code{apre}, and
@@ -1332,31 +1175,67 @@ directories, where @var{codec_name} is the name of the codec to which
 the preset file options will be applied. For example, if you select
 the video codec with @code{-vcodec libvpx} and use @code{-vpre 1080p},
 then it will search for the file @file{libvpx-1080p.ffpreset}.
-
-@subsection avpreset files
-
-avpreset files are specified with the @code{pre} option. They work similar to
-ffpreset files, but they only allow encoder- specific options. Therefore, an
-@var{option}=@var{value} pair specifying an encoder cannot be used.
-
-When the @code{pre} option is specified, ffmpeg will look for files with the
-suffix .avpreset in the directories @file{$AVCONV_DATADIR} (if set), and
-@file{$HOME/.avconv}, and in the datadir defined at configuration time (usually
-@file{PREFIX/share/ffmpeg}), in that order.
-
-First ffmpeg searches for a file named @var{codec_name}-@var{arg}.avpreset in
-the above-mentioned directories, where @var{codec_name} is the name of the codec
-to which the preset file options will be applied. For example, if you select the
-video codec with @code{-vcodec libvpx} and use @code{-pre 1080p}, then it will
-search for the file @file{libvpx-1080p.avpreset}.
-
-If no such file is found, then ffmpeg will search for a file named
-@var{arg}.avpreset in the same directories.
-
 @c man end OPTIONS
 
+@chapter Tips
+@c man begin TIPS
+
+@itemize
+@item
+For streaming at very low bitrates, use a low frame rate
+and a small GOP size. This is especially true for RealVideo where
+the Linux player does not seem to be very fast, so it can miss
+frames. An example is:
+
+@example
+ffmpeg -g 3 -r 3 -t 10 -b:v 50k -s qcif -f rv10 /tmp/b.rm
+@end example
+
+@item
+The parameter 'q' which is displayed while encoding is the current
+quantizer. The value 1 indicates that a very good quality could
+be achieved. The value 31 indicates the worst quality. If q=31 appears
+too often, it means that the encoder cannot compress enough to meet
+your bitrate. You must either increase the bitrate, decrease the
+frame rate or decrease the frame size.
+
+@item
+If your computer is not fast enough, you can speed up the
+compression at the expense of the compression ratio. You can use
+'-me zero' to speed up motion estimation, and '-g 0' to disable
+motion estimation completely (you have only I-frames, which means it
+is about as good as JPEG compression).
+
+@item
+To have very low audio bitrates, reduce the sampling frequency
+(down to 22050 Hz for MPEG audio, 22050 or 11025 for AC-3).
+
+@item
+To have a constant quality (but a variable bitrate), use the option
+'-qscale n' when 'n' is between 1 (excellent quality) and 31 (worst
+quality).
+@end itemize
+@c man end TIPS
+
 @chapter Examples
 @c man begin EXAMPLES
 
+@section Preset files
+
+A preset file contains a sequence of @var{option=value} pairs, one for
+each line, specifying a sequence of options which can be specified also on
+the command line. Lines starting with the hash ('#') character are ignored and
+are used to provide comments. Empty lines are also ignored. Check the
+@file{presets} directory in the FFmpeg source tree for examples.
+
+Preset files are specified with the @code{pre} option, this option takes a
+preset name as input. FFmpeg searches for a file named @var{preset_name}.avpreset in
+the directories @file{$AVCONV_DATADIR} (if set), and @file{$HOME/.ffmpeg}, and in
+the data directory defined at configuration time (usually @file{$PREFIX/share/ffmpeg})
+in that order. For example, if the argument is @code{libx264-max}, it will
+search for the file @file{libx264-max.avpreset}.
+
 @section Video and Audio grabbing
 
 If you specify the input format and device then ffmpeg can grab video
@@ -1504,7 +1383,7 @@ combination with -ss to start extracting from a certain point in time.
 For creating a video from many images:
 @example
-ffmpeg -f image2 -framerate 12 -i foo-%03d.jpeg -s WxH foo.avi
+ffmpeg -f image2 -i foo-%03d.jpeg -r 12 -s WxH foo.avi
 @end example
 
 The syntax @code{foo-%03d.jpeg} specifies to use a decimal number
@@ -1519,18 +1398,18 @@ image2-specific @code{-pattern_type glob} option.
 For example, for creating a video from filenames matching the glob pattern
 @code{foo-*.jpeg}:
 @example
-ffmpeg -f image2 -pattern_type glob -framerate 12 -i 'foo-*.jpeg' -s WxH foo.avi
+ffmpeg -f image2 -pattern_type glob -i 'foo-*.jpeg' -r 12 -s WxH foo.avi
 @end example
 
 @item
 You can put many streams of the same type in the output:
 @example
-ffmpeg -i test1.avi -i test2.avi -map 1:1 -map 1:0 -map 0:1 -map 0:0 -c copy -y test12.nut
+ffmpeg -i test1.avi -i test2.avi -map 0:3 -map 0:2 -map 0:1 -map 0:0 -c copy test12.nut
 @end example
-The resulting output file @file{test12.nut} will contain the first four streams
-from the input files in reverse order.
+The resulting output file @file{test12.avi} will contain first four streams from
+the input file in reverse order.
 
 @item
 To force CBR video output:
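The command itself falls outside this hunk; one commonly used form (the bitrate and buffer figures are illustrative only) is roughly:

    ffmpeg -i input.avi -b:v 4000k -minrate 4000k -maxrate 4000k -bufsize 1835k output.m2v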


@@ -1,5 +1,4 @@
 \input texinfo @c -*- texinfo -*-
-@documentencoding UTF-8
 @settitle ffplay Documentation
 @titlepage
@@ -38,26 +37,14 @@ Force displayed height.
 Set frame size (WxH or abbreviation), needed for videos which do
 not contain a header with the frame size like raw YUV. This option
 has been deprecated in favor of private options, try -video_size.
-@item -fs
-Start in fullscreen mode.
 @item -an
 Disable audio.
 @item -vn
 Disable video.
-@item -sn
-Disable subtitles.
 @item -ss @var{pos}
-Seek to @var{pos}. Note that in most formats it is not possible to seek
-exactly, so @command{ffplay} will seek to the nearest seek point to
-@var{pos}.
-@var{pos} must be a time duration specification,
-see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
+Seek to a given position in seconds.
 @item -t @var{duration}
-Play @var{duration} seconds of audio/video.
-@var{duration} must be a time duration specification,
-see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
+play <duration> seconds of audio/video
 @item -bytes
 Seek by bytes.
 @item -nodisp
@@ -97,9 +84,6 @@ output. In the filtergraph, the input is associated to the label
 ffmpeg-filters manual for more information about the filtergraph
 syntax.
-
-You can specify this parameter multiple times and cycle through the specified
-filtergraphs along with the show modes by pressing the key @key{w}.
 @item -af @var{filtergraph}
 @var{filtergraph} is a description of the filtergraph to apply to
 the input audio.
@@ -122,10 +106,15 @@ duration, the codec parameters, the current position in the stream and
 the audio/video synchronisation drift. It is on by default, to
 explicitly disable it you need to specify @code{-nostats}.
+@item -bug
+Work around bugs.
 @item -fast
 Non-spec-compliant optimizations.
 @item -genpts
 Generate pts.
+@item -rtp_tcp
+Force RTP/TCP protocol usage instead of RTP/UDP. It is only meaningful
+if you are streaming with the RTSP protocol.
 @item -sync @var{type}
 Set the master clock to audio (@code{type=audio}), video
 (@code{type=video}) or external (@code{type=ext}). Default is audio. The
@@ -133,20 +122,23 @@ master clock is used to control audio-video synchronization. Most media
 players use audio as master clock, but in some cases (streaming or high
 quality broadcast) it is necessary to change that. This option is mainly
 used for debugging purposes.
-@item -ast @var{audio_stream_specifier}
-Select the desired audio stream using the given stream specifier. The stream
-specifiers are described in the @ref{Stream specifiers} chapter. If this option
-is not specified, the "best" audio stream is selected in the program of the
-already selected video stream.
-@item -vst @var{video_stream_specifier}
-Select the desired video stream using the given stream specifier. The stream
-specifiers are described in the @ref{Stream specifiers} chapter. If this option
-is not specified, the "best" video stream is selected.
-@item -sst @var{subtitle_stream_specifier}
-Select the desired subtitle stream using the given stream specifier. The stream
-specifiers are described in the @ref{Stream specifiers} chapter. If this option
-is not specified, the "best" subtitle stream is selected in the program of the
-already selected video or audio stream.
+@item -threads @var{count}
+Set the thread count.
+@item -ast @var{audio_stream_number}
+Select the desired audio stream number, counting from 0. The number
+refers to the list of all the input audio streams. If it is greater
+than the number of audio streams minus one, then the last one is
+selected, if it is negative the audio playback is disabled.
+@item -vst @var{video_stream_number}
+Select the desired video stream number, counting from 0. The number
+refers to the list of all the input video streams. If it is greater
+than the number of video streams minus one, then the last one is
+selected, if it is negative the video playback is disabled.
+@item -sst @var{subtitle_stream_number}
+Select the desired subtitle stream number, counting from 0. The number
+refers to the list of all the input subtitle streams. If it is greater
+than the number of subtitle streams minus one, then the last one is
+selected, if it is negative the subtitle rendering is disabled.
 @item -autoexit
 Exit when video is done playing.
 @item -exitonkeydown
@@ -167,22 +159,6 @@ Force a specific video decoder.
 @item -scodec @var{codec_name}
 Force a specific subtitle decoder.
-
-@item -autorotate
-Automatically rotate the video according to file metadata. Enabled by
-default, use @option{-noautorotate} to disable it.
-
-@item -framedrop
-Drop video frames if video is out of sync. Enabled by default if the master
-clock is not set to video. Use this option to enable frame dropping for all
-master clock sources, use @option{-noframedrop} to disable it.
-
-@item -infbuf
-Do not limit the input buffer size, read as much data as possible from the
-input as soon as possible. Enabled by default for realtime streams, where data
-may be dropped if not read in time. Use this option to enable infinite buffers
-for all inputs, use @option{-noinfbuf} to disable it.
 @end table
 
 @section While playing
@@ -197,17 +173,8 @@ Toggle full screen.
 @item p, SPC
 Pause.
-@item m
-Toggle mute.
-@item 9, 0
-Decrease and increase volume respectively.
-@item /, *
-Decrease and increase volume respectively.
 @item a
-Cycle audio channel in the current program.
+Cycle audio channel in the curret program.
 @item v
 Cycle video channel.
@@ -219,7 +186,7 @@ Cycle subtitle channel in the current program.
 Cycle program.
 @item w
-Cycle video filters or show modes.
+Show audio waves.
 @item s
 Step to the next frame.
@@ -238,12 +205,9 @@ Seek to the previous/next chapter.
 or if there are no chapters
 Seek backward/forward 10 minutes.
-@item right mouse click
+@item mouse click
 Seek to percentage in file corresponding to fraction of width.
-@item left mouse double-click
-Toggle full screen.
 @end table
 
 @c man end
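Tying a few of the options above together, a hedged ffplay invocation that seeks, limits playback time and applies a video filter (placeholder file name):

    ffplay -ss 30 -t 10 -vf "scale=640:-1" input.mp4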


@@ -1,5 +1,4 @@
 \input texinfo @c -*- texinfo -*-
-@documentencoding UTF-8
 @settitle ffprobe Documentation
 @titlepage
@@ -120,10 +119,6 @@ Show payload data, as a hexadecimal and ASCII dump. Coupled with
 The dump is printed as the "data" field. It may contain newlines.
 
-@item -show_data_hash @var{algorithm}
-Show a hash of payload data, for packets with @option{-show_packets} and for
-codec extradata with @option{-show_streams}.
-
 @item -show_error
 Show information about the error found when trying to probe the input.
@@ -185,7 +180,7 @@ format : stream=codec_type
 To show all the tags in the stream and format sections:
 @example
-stream_tags : format_tags
+format_tags : format_tags
 @end example
 
 To show only the @code{title} tag (if available) in the stream
@@ -322,12 +317,6 @@ Show information related to program and library versions. This is the
 equivalent of setting both @option{-show_program_version} and
 @option{-show_library_versions} options.
 
-@item -show_pixel_formats
-Show information about all pixel formats supported by FFmpeg.
-Pixel format information for each format is printed within a section
-with name "PIXEL_FORMAT".
-
 @item -bitexact
 Force bitexact output, useful to produce output which is not dependent
 on the specific build.
@@ -447,17 +436,17 @@ writer).
 It can assume one of the following values:
 @table @option
 @item c
-Perform C-like escaping. Strings containing a newline (@samp{\n}), carriage
-return (@samp{\r}), a tab (@samp{\t}), a form feed (@samp{\f}), the escaping
-character (@samp{\}) or the item separator character @var{SEP} are escaped
-using C-like fashioned escaping, so that a newline is converted to the
-sequence @samp{\n}, a carriage return to @samp{\r}, @samp{\} to @samp{\\} and
-the separator @var{SEP} is converted to @samp{\@var{SEP}}.
+Perform C-like escaping. Strings containing a newline ('\n'), carriage
+return ('\r'), a tab ('\t'), a form feed ('\f'), the escaping
+character ('\') or the item separator character @var{SEP} are escaped using C-like fashioned
+escaping, so that a newline is converted to the sequence "\n", a
+carriage return to "\r", '\' to "\\" and the separator @var{SEP} is
+converted to "\@var{SEP}".
 
 @item csv
 Perform CSV-like escaping, as described in RFC4180. Strings
-containing a newline (@samp{\n}), a carriage return (@samp{\r}), a double quote
-(@samp{"}), or @var{SEP} are enclosed in double-quotes.
+containing a newline ('\n'), a carriage return ('\r'), a double quote
+('"'), or @var{SEP} are enclosed in double-quotes.
 
 @item none
 Perform no escaping.
@@ -485,7 +474,7 @@ The description of the accepted options follows.
 Separator character used to separate the chapter, the section name, IDs and
 potential tags in the printed field key.
-Default value is @samp{.}.
+Default value is '.'.
 @item hierarchical, h
 Specify if the section name specification should be hierarchical. If
@@ -507,22 +496,21 @@ The following conventions are adopted:
 @item
 all key and values are UTF-8
 @item
-@samp{.} is the subgroup separator
+'.' is the subgroup separator
 @item
-newline, @samp{\t}, @samp{\f}, @samp{\b} and the following characters are
-escaped
+newline, '\t', '\f', '\b' and the following characters are escaped
 @item
-@samp{\} is the escape character
+'\' is the escape character
 @item
-@samp{#} is the comment indicator
+'#' is the comment indicator
 @item
-@samp{=} is the key/value separator
+'=' is the key/value separator
 @item
-@samp{:} is not used but usually parsed as key/value separator
+':' is not used but usually parsed as key/value separator
 @end itemize
 
 This writer accepts options as a list of @var{key}=@var{value} pairs,
-separated by @samp{:}.
+separated by ":".
 The description of the accepted options follows.
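As a small usage sketch combining the entry selection and a writer discussed above (placeholder input name):

    ffprobe -v error -show_entries stream=codec_type:stream_tags -of ini input.mp4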


@@ -8,17 +8,15 @@
   <xsd:complexType name="ffprobeType">
     <xsd:sequence>
-      <xsd:element name="program_version" type="ffprobe:programVersionType" minOccurs="0" maxOccurs="1" />
-      <xsd:element name="library_versions" type="ffprobe:libraryVersionsType" minOccurs="0" maxOccurs="1" />
-      <xsd:element name="pixel_formats" type="ffprobe:pixelFormatsType" minOccurs="0" maxOccurs="1" />
       <xsd:element name="packets" type="ffprobe:packetsType" minOccurs="0" maxOccurs="1" />
       <xsd:element name="frames" type="ffprobe:framesType" minOccurs="0" maxOccurs="1" />
-      <xsd:element name="packets_and_frames" type="ffprobe:packetsAndFramesType" minOccurs="0" maxOccurs="1" />
-      <xsd:element name="programs" type="ffprobe:programsType" minOccurs="0" maxOccurs="1" />
       <xsd:element name="streams" type="ffprobe:streamsType" minOccurs="0" maxOccurs="1" />
+      <xsd:element name="programs" type="ffprobe:programsType" minOccurs="0" maxOccurs="1" />
       <xsd:element name="chapters" type="ffprobe:chaptersType" minOccurs="0" maxOccurs="1" />
       <xsd:element name="format" type="ffprobe:formatType" minOccurs="0" maxOccurs="1" />
       <xsd:element name="error" type="ffprobe:errorType" minOccurs="0" maxOccurs="1" />
+      <xsd:element name="program_version" type="ffprobe:programVersionType" minOccurs="0" maxOccurs="1" />
+      <xsd:element name="library_versions" type="ffprobe:libraryVersionsType" minOccurs="0" maxOccurs="1" />
     </xsd:sequence>
   </xsd:complexType>
@@ -37,22 +35,7 @@
     </xsd:sequence>
   </xsd:complexType>
 
-  <xsd:complexType name="packetsAndFramesType">
-    <xsd:sequence>
-      <xsd:choice minOccurs="0" maxOccurs="unbounded">
-        <xsd:element name="packet" type="ffprobe:packetType" minOccurs="0" maxOccurs="unbounded"/>
-        <xsd:element name="frame" type="ffprobe:frameType" minOccurs="0" maxOccurs="unbounded"/>
-        <xsd:element name="subtitle" type="ffprobe:subtitleType" minOccurs="0" maxOccurs="unbounded"/>
-      </xsd:choice>
-    </xsd:sequence>
-  </xsd:complexType>
-
   <xsd:complexType name="packetType">
-    <xsd:sequence>
-      <xsd:element name="tag" type="ffprobe:tagType" minOccurs="0" maxOccurs="unbounded"/>
-      <xsd:element name="side_data_list" type="ffprobe:packetSideDataListType" minOccurs="0" maxOccurs="1" />
-    </xsd:sequence>
     <xsd:attribute name="codec_type" type="xsd:string" use="required" />
     <xsd:attribute name="stream_index" type="xsd:int" use="required" />
     <xsd:attribute name="pts" type="xsd:long" />
@@ -67,27 +50,10 @@
     <xsd:attribute name="pos" type="xsd:long" />
     <xsd:attribute name="flags" type="xsd:string" use="required" />
     <xsd:attribute name="data" type="xsd:string" />
-    <xsd:attribute name="data_hash" type="xsd:string" />
-  </xsd:complexType>
-
-  <xsd:complexType name="packetSideDataListType">
-    <xsd:sequence>
-      <xsd:element name="side_data" type="ffprobe:packetSideDataType" minOccurs="1" maxOccurs="unbounded"/>
-    </xsd:sequence>
-  </xsd:complexType>
-  <xsd:complexType name="packetSideDataType">
-    <xsd:attribute name="side_data_type" type="xsd:string"/>
-    <xsd:attribute name="side_data_size" type="xsd:int" />
   </xsd:complexType>
 
   <xsd:complexType name="frameType">
-    <xsd:sequence>
-      <xsd:element name="tag" type="ffprobe:tagType" minOccurs="0" maxOccurs="unbounded"/>
-      <xsd:element name="side_data_list" type="ffprobe:frameSideDataListType" minOccurs="0" maxOccurs="1" />
-    </xsd:sequence>
     <xsd:attribute name="media_type" type="xsd:string" use="required"/>
-    <xsd:attribute name="stream_index" type="xsd:int" />
     <xsd:attribute name="key_frame" type="xsd:int" use="required"/>
     <xsd:attribute name="pts" type="xsd:long" />
     <xsd:attribute name="pts_time" type="xsd:float"/>
@@ -121,16 +87,6 @@
     <xsd:attribute name="repeat_pict" type="xsd:int" />
   </xsd:complexType>
 
-  <xsd:complexType name="frameSideDataListType">
-    <xsd:sequence>
-      <xsd:element name="side_data" type="ffprobe:frameSideDataType" minOccurs="1" maxOccurs="unbounded"/>
-    </xsd:sequence>
-  </xsd:complexType>
-  <xsd:complexType name="frameSideDataType">
-    <xsd:attribute name="side_data_type" type="xsd:string"/>
-    <xsd:attribute name="side_data_size" type="xsd:int" />
-  </xsd:complexType>
-
   <xsd:complexType name="subtitleType">
     <xsd:attribute name="media_type" type="xsd:string" fixed="subtitle" use="required"/>
     <xsd:attribute name="pts" type="xsd:long" />
@@ -171,7 +127,6 @@
     <xsd:sequence>
       <xsd:element name="disposition" type="ffprobe:streamDispositionType" minOccurs="0" maxOccurs="1"/>
       <xsd:element name="tag" type="ffprobe:tagType" minOccurs="0" maxOccurs="unbounded"/>
-      <xsd:element name="side_data_list" type="ffprobe:packetSideDataListType" minOccurs="0" maxOccurs="1" />
     </xsd:sequence>
 
     <xsd:attribute name="index" type="xsd:int" use="required"/>
@@ -183,25 +138,16 @@
     <xsd:attribute name="codec_tag" type="xsd:string" use="required"/>
     <xsd:attribute name="codec_tag_string" type="xsd:string" use="required"/>
     <xsd:attribute name="extradata" type="xsd:string" />
-    <xsd:attribute name="extradata_hash" type="xsd:string" />
 
     <!-- video attributes -->
     <xsd:attribute name="width" type="xsd:int"/>
     <xsd:attribute name="height" type="xsd:int"/>
-    <xsd:attribute name="coded_width" type="xsd:int"/>
-    <xsd:attribute name="coded_height" type="xsd:int"/>
     <xsd:attribute name="has_b_frames" type="xsd:int"/>
     <xsd:attribute name="sample_aspect_ratio" type="xsd:string"/>
     <xsd:attribute name="display_aspect_ratio" type="xsd:string"/>
     <xsd:attribute name="pix_fmt" type="xsd:string"/>
     <xsd:attribute name="level" type="xsd:int"/>
-    <xsd:attribute name="color_range" type="xsd:string"/>
-    <xsd:attribute name="color_space" type="xsd:string"/>
-    <xsd:attribute name="color_transfer" type="xsd:string"/>
-    <xsd:attribute name="color_primaries" type="xsd:string"/>
-    <xsd:attribute name="chroma_location" type="xsd:string"/>
     <xsd:attribute name="timecode" type="xsd:string"/>
-    <xsd:attribute name="refs" type="xsd:int"/>
 
     <!-- audio attributes -->
     <xsd:attribute name="sample_fmt" type="xsd:string"/>
@@ -219,8 +165,6 @@
     <xsd:attribute name="duration_ts" type="xsd:long"/>
     <xsd:attribute name="duration" type="xsd:float"/>
     <xsd:attribute name="bit_rate" type="xsd:int"/>
-    <xsd:attribute name="max_bit_rate" type="xsd:int"/>
-    <xsd:attribute name="bits_per_raw_sample" type="xsd:int"/>
     <xsd:attribute name="nb_frames" type="xsd:int"/>
     <xsd:attribute name="nb_read_frames" type="xsd:int"/>
     <xsd:attribute name="nb_read_packets" type="xsd:int"/>
@@ -273,9 +217,10 @@
   <xsd:complexType name="programVersionType">
     <xsd:attribute name="version" type="xsd:string" use="required"/>
     <xsd:attribute name="copyright" type="xsd:string" use="required"/>
-    <xsd:attribute name="build_date" type="xsd:string"/>
-    <xsd:attribute name="build_time" type="xsd:string"/>
-    <xsd:attribute name="compiler_ident" type="xsd:string" use="required"/>
+    <xsd:attribute name="build_date" type="xsd:string" use="required"/>
+    <xsd:attribute name="build_time" type="xsd:string" use="required"/>
+    <xsd:attribute name="compiler_type" type="xsd:string" use="required"/>
+    <xsd:attribute name="compiler_version" type="xsd:string" use="required"/>
     <xsd:attribute name="configuration" type="xsd:string" use="required"/>
   </xsd:complexType>
@@ -312,45 +257,4 @@
<xsd:element name="library_version" type="ffprobe:libraryVersionType" minOccurs="0" maxOccurs="unbounded"/> <xsd:element name="library_version" type="ffprobe:libraryVersionType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence> </xsd:sequence>
</xsd:complexType> </xsd:complexType>
<xsd:complexType name="pixelFormatFlagsType">
<xsd:attribute name="big_endian" type="xsd:int" use="required"/>
<xsd:attribute name="palette" type="xsd:int" use="required"/>
<xsd:attribute name="bitstream" type="xsd:int" use="required"/>
<xsd:attribute name="hwaccel" type="xsd:int" use="required"/>
<xsd:attribute name="planar" type="xsd:int" use="required"/>
<xsd:attribute name="rgb" type="xsd:int" use="required"/>
<xsd:attribute name="pseudopal" type="xsd:int" use="required"/>
<xsd:attribute name="alpha" type="xsd:int" use="required"/>
</xsd:complexType>
<xsd:complexType name="pixelFormatComponentType">
<xsd:attribute name="index" type="xsd:int" use="required"/>
<xsd:attribute name="bit_depth" type="xsd:int" use="required"/>
</xsd:complexType>
<xsd:complexType name="pixelFormatComponentsType">
<xsd:sequence>
<xsd:element name="component" type="ffprobe:pixelFormatComponentType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
<xsd:complexType name="pixelFormatType">
<xsd:sequence>
<xsd:element name="flags" type="ffprobe:pixelFormatFlagsType" minOccurs="0" maxOccurs="1"/>
<xsd:element name="components" type="ffprobe:pixelFormatComponentsType" minOccurs="0" maxOccurs="1"/>
</xsd:sequence>
<xsd:attribute name="name" type="xsd:string" use="required"/>
<xsd:attribute name="nb_components" type="xsd:int" use="required"/>
<xsd:attribute name="log2_chroma_w" type="xsd:int"/>
<xsd:attribute name="log2_chroma_h" type="xsd:int"/>
<xsd:attribute name="bits_per_pixel" type="xsd:int"/>
</xsd:complexType>
<xsd:complexType name="pixelFormatsType">
<xsd:sequence>
<xsd:element name="pixel_format" type="ffprobe:pixelFormatType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:complexType>
</xsd:schema>
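For orientation only (not part of the diff): the schema above describes @command{ffprobe}'s XML output, which can be generated and checked roughly as follows. The input and output file names, and the use of @command{xmllint}, are illustrative assumptions.

@example
ffprobe -of xml=fully_qualified=1:xsd_strict=1 -show_program_version -show_format input.mp4 > probe.xml
xmllint --schema ffprobe.xsd --noout probe.xml
@end example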


@@ -1,11 +1,11 @@
# Port on which the server is listening. You must select a different
# port from your standard HTTP web server if it is running on the same
# computer.
-HTTPPort 8090
+Port 8090
# Address on which the server is bound. Only useful if you have
# several network interfaces.
-HTTPBindAddress 0.0.0.0
+BindAddress 0.0.0.0
# Number of simultaneous HTTP connections that can be handled. It has
# to be defined *before* the MaxClients parameter, since it defines the
@@ -82,7 +82,6 @@ Feed feed1.ffm
# ra : RealNetworks-compatible stream. Audio only.
# mpjpeg : Multipart JPEG (works with Netscape without any plugin)
# jpeg : Generate a single JPEG image.
-# mjpeg : Generate a M-JPEG stream.
# asf : ASF compatible streaming (Windows Media Player format).
# swf : Macromedia Flash compatible stream
# avi : AVI format (MPEG-4 video, MPEG audio sound)
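As a usage sketch only (not part of the diff): a configuration like the one above is typically exercised by starting @command{ffserver} and pushing a test feed at it with @command{ffmpeg}; the config path, port and feed name below are assumptions based on the sample values shown.

@example
ffserver -f ./ffserver.conf &
ffmpeg -i input.mp4 http://localhost:8090/feed1.ffm
@end example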


@@ -1,5 +1,4 @@
\input texinfo @c -*- texinfo -*-
-@documentencoding UTF-8
@settitle ffserver Documentation
@titlepage
@@ -67,12 +66,12 @@ http://@var{ffserver_ip_address}:@var{http_port}/@var{feed_name}
where @var{ffserver_ip_address} is the IP address of the machine where
@command{ffserver} is installed, @var{http_port} is the port number of
-the HTTP server (configured through the @option{HTTPPort} option), and
+the HTTP server (configured through the @option{Port} option), and
@var{feed_name} is the name of the corresponding feed defined in the
configuration file.
Each feed is associated to a file which is stored on disk. This stored
-file is used to send pre-recorded data to a player as fast as
+file is used to allow to send pre-recorded data to a player as fast as
possible when new content is added in real-time to the stream.
A "live-stream" or "stream" is a resource published by
@@ -102,7 +101,7 @@ http://@var{ffserver_ip_address}:@var{rtsp_port}/@var{stream_name}[@var{options}
the configuration file. @var{options} is a list of options specified
after the URL which affects how the stream is served by
@command{ffserver}. @var{http_port} and @var{rtsp_port} are the HTTP
-and RTSP ports configured with the options @var{HTTPPort} and
+and RTSP ports configured with the options @var{Port} and
@var{RTSPPort} respectively.
In case the stream is associated to a feed, the encoding parameters
@@ -112,14 +111,13 @@ must be configured in the stream configuration. They are sent to
the @command{ffmpeg} encoders.
The @command{ffmpeg} @option{override_ffserver} commandline option
-allows one to override the encoding parameters set by the server.
+allows to override the encoding parameters set by the server.
Multiple streams can be connected to the same feed.
For example, you can have a situation described by the following
graph:
-@example
+@verbatim
               _________       __________
              |         |     |          |
ffmpeg 1 -----| feed 1  |-----| stream 1 |
@@ -144,8 +142,7 @@ ffmpeg 2 -----| feed 3 |-----| stream 4 |
              |         |     |          |
              | file 1  |-----| stream 5 |
              |_________|     |__________|
-@end example
+@end verbatim
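To make the graph above concrete, a minimal one-feed/one-stream setup could be driven as follows; the URLs, file names and stream name are illustrative assumptions, not taken from the diff.

@example
ffmpeg -i camera.mp4 http://localhost:8090/feed1.ffm
ffplay http://localhost:8090/stream1.asf
@end example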
@anchor{FFM}
@section FFM, FFM2 formats
@@ -206,9 +203,11 @@ WARNING: trying to stream test1.mpg doesn't work with WMP as it tries to
transfer the entire file before starting to play.
The same is true of AVI files.
-You should edit the @file{ffserver.conf} file to suit your needs (in
-terms of frame rates etc). Then install @command{ffserver} and
-@command{ffmpeg}, write a script to start them up, and off you go.
+@section What happens next?
+
+You should edit the ffserver.conf file to suit your needs (in terms of
+frame rates etc). Then install ffserver and ffmpeg, write a script to start
+them up, and off you go.
@section What else can it do?
@@ -355,29 +354,20 @@ allow everybody else.
@section Global options
@table @option
-@item HTTPPort @var{port_number}
@item Port @var{port_number}
@item RTSPPort @var{port_number}
-@var{HTTPPort} sets the HTTP server listening TCP port number,
-@var{RTSPPort} sets the RTSP server listening TCP port number.
-@var{Port} is the equivalent of @var{HTTPPort} and is deprecated.
-You must select a different port from your standard HTTP web server if
-it is running on the same computer.
+Set TCP port number on which the HTTP/RTSP server is listening. You
+must select a different port from your standard HTTP web server if it
+is running on the same computer.
If not specified, no corresponding server will be created.
-@item HTTPBindAddress @var{ip_address}
@item BindAddress @var{ip_address}
@item RTSPBindAddress @var{ip_address}
Set address on which the HTTP/RTSP server is bound. Only useful if you
have several network interfaces.
-@var{BindAddress} is the equivalent of @var{HTTPBindAddress} and is
-deprecated.
@item MaxHTTPConnections @var{n}
Set number of simultaneous HTTP connections that can be handled. It
has to be defined @emph{before} the @option{MaxClients} parameter,
@@ -411,12 +401,6 @@ ignored, and the log is written to standard output.
Set no-daemon mode. This option is currently ignored since now
@command{ffserver} will always work in no-daemon mode, and is
deprecated.
-@item UseDefaults
-@item NoDefaults
-Control whether default codec options are used for the all streams or not.
-Each stream may overwrite this setting for its own. Default is @var{UseDefaults}.
-The lastest occurrence overrides previous if multiple definitions.
@end table
@section Feed section
@@ -580,11 +564,6 @@ deprecated in favor of @option{Metadata}.
@item Metadata @var{key} @var{value}
Set metadata value on the output stream.
-@item UseDefaults
-@item NoDefaults
-Control whether default codec options are used for the stream or not.
-Default is @var{UseDefaults} unless disabled globally.
@item NoAudio
@item NoVideo
Suppress audio/video.
@@ -603,9 +582,8 @@ Set sampling frequency for audio. When using low bitrates, you should
lower this frequency to 22050 or 11025. The supported frequencies
depend on the selected audio codec.
-@item AVOptionAudio [@var{codec}:]@var{option} @var{value} (@emph{encoding,audio})
-Set generic or private option for audio stream.
-Private option must be prefixed with codec name or codec must be defined before.
+@item AVOptionAudio @var{option} @var{value} (@emph{encoding,audio})
+Set generic option for audio stream.
@item AVPresetAudio @var{preset} (@emph{encoding,audio})
Set preset for audio stream.
@@ -682,9 +660,8 @@ Set video @option{qdiff} encoding option.
@item DarkMask @var{float} (@emph{encoding,video})
Set @option{lumi_mask}/@option{dark_mask} encoding options.
-@item AVOptionVideo [@var{codec}:]@var{option} @var{value} (@emph{encoding,video})
-Set generic or private option for video stream.
-Private option must be prefixed with codec name or codec must be defined before.
+@item AVOptionVideo @var{option} @var{value} (@emph{encoding,video})
+Set generic option for video stream.
@item AVPresetVideo @var{preset} (@emph{encoding,video})
Set preset for video stream.
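A hypothetical stream section illustrating the two @option{AVOptionVideo} spellings being diffed above (plain option name vs. the newer codec-prefixed form); the codec, preset, stream and file names are placeholders, not part of the diff.

@example
cat >> ffserver.conf <<'EOF'
<Stream test.asf>
Feed feed1.ffm
Format asf
VideoCodec libx264
AVOptionVideo flags +global_header
AVOptionVideo libx264:preset veryfast
</Stream>
EOF
@end example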


@@ -3,7 +3,7 @@ representing a number as input, which may be followed by one of the SI
unit prefixes, for example: 'K', 'M', or 'G'.
If 'i' is appended to the SI unit prefix, the complete prefix will be
-interpreted as a unit prefix for binary multiples, which are based on
+interpreted as a unit prefix for binary multiplies, which are based on
powers of 1024 instead of powers of 1000. Appending 'B' to the SI unit
prefix multiplies the value by 8. This allows using, for example:
'KB', 'MiB', 'G' and 'B' as number suffixes.
@@ -36,28 +36,16 @@ Possible forms of stream specifiers are:
Matches the stream with this index. E.g. @code{-threads:1 4} would set the
thread count for the second stream to 4.
@item @var{stream_type}[:@var{stream_index}]
-@var{stream_type} is one of following: 'v' or 'V' for video, 'a' for audio, 's'
-for subtitle, 'd' for data, and 't' for attachments. 'v' matches all video
-streams, 'V' only matches video streams which are not attached pictures, video
-thumbnails or cover arts. If @var{stream_index} is given, then it matches
+@var{stream_type} is one of following: 'v' for video, 'a' for audio, 's' for subtitle,
+'d' for data, and 't' for attachments. If @var{stream_index} is given, then it matches
stream number @var{stream_index} of this type. Otherwise, it matches all
streams of this type.
@item p:@var{program_id}[:@var{stream_index}]
If @var{stream_index} is given, then it matches the stream with number @var{stream_index}
in the program with the id @var{program_id}. Otherwise, it matches all streams in the
program.
-@item #@var{stream_id} or i:@var{stream_id}
-Match the stream by stream id (e.g. PID in MPEG-TS container).
-@item m:@var{key}[:@var{value}]
-Matches streams with the metadata tag @var{key} having the specified value. If
-@var{value} is not given, matches streams that contain the given tag with any
-value.
-@item u
-Matches streams with usable configuration, the codec must be defined and the
-essential information such as video dimension or audio sample rate must be present.
-Note that in @command{ffmpeg}, matching by metadata will only work properly for
-input files.
+@item #@var{stream_id}
+Matches the stream by a format-specific ID.
@end table
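As a quick illustration of the specifiers listed above (note that the 'V' and 'm:' forms exist only on the left-hand, newer side of this diff); file names are placeholders.

@example
# keep the first video stream plus any streams tagged with English metadata
ffmpeg -i input.mkv -map 0:v:0 -map 0:m:language:eng -c copy output.mkv
# re-encode only the second audio stream, copy everything else
ffmpeg -i input.mkv -c copy -c:a:1 libvorbis output.mkv
@end example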
@section Generic options @section Generic options
@@ -108,10 +96,7 @@ Print detailed information about the filter name @var{filter_name}. Use the
Show version.
@item -formats
-Show available formats (including devices).
-@item -devices
-Show available devices.
+Show available formats.
@item -codecs
Show all codecs known to libavcodec.
@@ -146,22 +131,6 @@ Show channel names and standard channel layouts.
@item -colors
Show recognized color names.
@item -sources @var{device}[,@var{opt1}=@var{val1}[,@var{opt2}=@var{val2}]...]
Show autodetected sources of the intput device.
Some devices may provide system-dependent source names that cannot be autodetected.
The returned list cannot be assumed to be always complete.
@example
ffmpeg -sources pulse,server=192.168.0.4
@end example
@item -sinks @var{device}[,@var{opt1}=@var{val1}[,@var{opt2}=@var{val2}]...]
Show autodetected sinks of the output device.
Some devices may provide system-dependent sink names that cannot be autodetected.
The returned list cannot be assumed to be always complete.
@example
ffmpeg -sinks pulse,server=192.168.0.4
@end example
@item -loglevel [repeat+]@var{loglevel} | -v [repeat+]@var{loglevel}
Set the logging level used by the library.
Adding "repeat+" indicates that repeated log output should not be compressed
@@ -170,29 +139,28 @@ omitted. "repeat" can also be used alone.
If "repeat" is used alone, and with no prior loglevel set, the default
loglevel will be used. If multiple loglevel parameters are given, using
'repeat' will not change the loglevel.
-@var{loglevel} is a string or a number containing one of the following values:
+@var{loglevel} is a number or a string containing one of the following values:
@table @samp
-@item quiet, -8
+@item quiet
Show nothing at all; be silent.
-@item panic, 0
+@item panic
Only show fatal errors which could lead the process to crash, such as
and assert failure. This is not currently used for anything.
-@item fatal, 8
+@item fatal
Only show fatal errors. These are errors after which the process absolutely
cannot continue after.
-@item error, 16
+@item error
Show all errors, including ones which can be recovered from.
-@item warning, 24
+@item warning
Show all warnings and errors. Any message related to possibly
incorrect or unexpected events will be shown.
-@item info, 32
+@item info
Show informative messages during processing. This is in addition to
warnings and errors. This is the default value.
-@item verbose, 40
+@item verbose
Same as @code{info}, except more verbose.
-@item debug, 48
+@item debug
Show everything, including debugging information.
-@item trace, 56
@end table
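For instance, the symbolic names above (and, on the newer side of the diff, their numeric aliases) are passed like this; file names are placeholders.

@example
ffmpeg -loglevel warning -i input.mp4 output.avi
ffmpeg -loglevel repeat+verbose -i input.mp4 output.avi
@end example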
By default the program logs to stderr, if coloring is supported by the
@@ -210,29 +178,19 @@ directory.
This file can be useful for bug reports.
It also implies @code{-loglevel verbose}.
-Setting the environment variable @env{FFREPORT} to any value has the
+Setting the environment variable @code{FFREPORT} to any value has the
same effect. If the value is a ':'-separated key=value sequence, these
-options will affect the report; option values must be escaped if they
+options will affect the report; options values must be escaped if they
contain special characters or the options delimiter ':' (see the
-``Quoting and escaping'' section in the ffmpeg-utils manual).
-The following options are recognized:
+``Quoting and escaping'' section in the ffmpeg-utils manual). The
+following option is recognized:
@table @option
@item file
set the file name to use for the report; @code{%p} is expanded to the name
of the program, @code{%t} is expanded to a timestamp, @code{%%} is expanded
to a plain @code{%}
-@item level
-set the log verbosity level using a numerical value (see @code{-loglevel}).
@end table
-For example, to output a report to a file named @file{ffreport.log}
-using a log level of @code{32} (alias for log level @code{info}):
-@example
-FFREPORT=file=ffreport.log:level=32 ffmpeg -i input output
-@end example
Errors in parsing the environment variable are not fatal, and will not
appear in the report.
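A minimal report invocation matching the 2.2-era behaviour documented on the right-hand side (only the @option{file} key); the file names are placeholders.

@example
FFREPORT=file=ffreport.log ffmpeg -i input.mp4 output.avi
@end example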
@@ -267,14 +225,10 @@ Possible flags for this option are:
@item sse4.1
@item sse4.2
@item avx
-@item avx2
@item xop
-@item fma3
@item fma4
@item 3dnow
@item 3dnowext
-@item bmi1
-@item bmi2
@item cmov
@end table
@item ARM
@@ -285,13 +239,6 @@ Possible flags for this option are:
@item vfp
@item vfpv3
@item neon
-@item setend
-@end table
-@item AArch64
-@table @samp
-@item armv8
-@item vfp
-@item neon
@end table
@item PowerPC
@table @samp
@@ -311,41 +258,8 @@ Possible flags for this option are:
@end table
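A short usage sketch for @option{-cpuflags} (mainly useful for testing, as the surrounding text notes); the flag combination and input name are illustrative.

@example
ffmpeg -cpuflags -sse+mmx -i input.mp4 -f null -
@end example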
@item -opencl_bench
-This option is used to benchmark all available OpenCL devices and print the
-results. This option is only available when FFmpeg has been compiled with
-@code{--enable-opencl}.
+Benchmark all available OpenCL devices and show the results. This option
+is only available when FFmpeg has been compiled with @code{--enable-opencl}.
When FFmpeg is configured with @code{--enable-opencl}, the options for the
global OpenCL context are set via @option{-opencl_options}. See the
"OpenCL Options" section in the ffmpeg-utils manual for the complete list of
supported options. Amongst others, these options include the ability to select
a specific platform and device to run the OpenCL code on. By default, FFmpeg
will run on the first device of the first platform. While the options for the
global OpenCL context provide flexibility to the user in selecting the OpenCL
device of their choice, most users would probably want to select the fastest
OpenCL device for their system.
This option assists the selection of the most efficient configuration by
identifying the appropriate device for the user's system. The built-in
benchmark is run on all the OpenCL devices and the performance is measured for
each device. The devices in the results list are sorted based on their
performance with the fastest device listed first. The user can subsequently
invoke @command{ffmpeg} using the device deemed most appropriate via
@option{-opencl_options} to obtain the best performance for the OpenCL
accelerated code.
Typical usage to use the fastest OpenCL device involve the following steps.
Run the command:
@example
ffmpeg -opencl_bench
@end example
Note down the platform ID (@var{pidx}) and device ID (@var{didx}) of the first
i.e. fastest device in the list.
Select the platform and device using the command:
@example
ffmpeg -opencl_options platform_idx=@var{pidx}:device_idx=@var{didx} ...
@end example
@item -opencl_options options (@emph{global})
Set OpenCL environment options. This option is only available when


@@ -98,7 +98,7 @@ Buffer references ownership and permissions
The AVFilterLink structure has a few AVFilterBufferRef fields. The
cur_buf and out_buf were used with the deprecated
start_frame/draw_slice/end_frame API and should no longer be used.
-src_buf and partial_buf are used by libavfilter internally
+src_buf, cur_buf_copy and partial_buf are used by libavfilter internally
and must not be accessed by filters.
Reference permissions
@@ -232,8 +232,7 @@ Frame scheduling
one of its inputs, repeatedly until at least one frame has been pushed.
Return values:
-if request_frame could produce a frame, or at least make progress
-towards producing a frame, it should return 0;
+if request_frame could produce a frame, it should return 0;
if it could not for temporary reasons, it should return AVERROR(EAGAIN);
if it could not because there are no more frames, it should return
AVERROR_EOF.
@@ -245,6 +244,7 @@ Frame scheduling
push_one_frame();
return 0;
}
+while (!frame_pushed) {
input = input_where_a_frame_is_most_needed();
ret = ff_request_frame(input);
if (ret == AVERROR_EOF) {
@@ -252,11 +252,12 @@ Frame scheduling
} else if (ret < 0) {
return ret;
}
+}
return 0;
Note that, except for filters that can have queued frames, request_frame
does not push frames: it requests them to its input, and as a reaction,
-the filter_frame method possibly will be called and do the work.
+the filter_frame method will be called and do the work.
Legacy API
==========

File diff suppressed because it is too large


@@ -23,7 +23,7 @@ Reduce buffering.
@item probesize @var{integer} (@emph{input})
Set probing size in bytes, i.e. the size of the data to analyze to get
-stream information. A higher value will enable detecting more
+stream information. A higher value will allow to detect more
information in case it is dispersed into the stream, but will increase
latency. Must be an integer not lesser than 32. It is 5000000 by default.
@@ -37,8 +37,6 @@ Possible values:
@table @samp
@item ignidx
Ignore index.
-@item fastseek
-Enable fast, but inaccurate seeks for some formats.
@item genpts
Generate PTS.
@item nofillin
@@ -57,10 +55,6 @@ Do not merge side data.
Enable RTP MP4A-LATM payload.
@item nobuffer
Reduce the latency introduced by optional buffering
@item bitexact
Only write platform-, build- and time-independent data.
This ensures that file and data checksums are reproducible and match between
platforms. Its primary use is for regression testing.
@end table
@item seek2any @var{integer} (@emph{input})
@@ -69,7 +63,7 @@ Default is 0.
@item analyzeduration @var{integer} (@emph{input})
Specify how many microseconds are analyzed to probe the input. A
-higher value will enable detecting more accurate information, but will
+higher value will allow to detect more accurate information, but will
increase latency. It defaults to 5,000,000 microseconds = 5 seconds.
@item cryptokey @var{hexadecimal string} (@emph{input})
@@ -127,27 +121,8 @@ Consider all spec non compliancies as errors.
Consider things that a sane encoder should not do as an error.
@end table
@item max_interleave_delta @var{integer} (@emph{output})
Set maximum buffering duration for interleaving. The duration is
expressed in microseconds, and defaults to 1000000 (1 second).
To ensure all the streams are interleaved correctly, libavformat will
wait until it has at least one packet for each stream before actually
writing any packets to the output file. When some streams are
"sparse" (i.e. there are large gaps between successive packets), this
can result in excessive buffering.
This field specifies the maximum difference between the timestamps of the
first and the last packet in the muxing queue, above which libavformat
will output a packet regardless of whether it has queued a packet for all
the streams.
If set to 0, libavformat will continue buffering packets until it has
a packet for each stream, regardless of the maximum timestamp
difference between the buffered packets.
@item use_wallclock_as_timestamps @var{integer} (@emph{input})
-Use wallclock as timestamps if set to 1. Default is 0.
+Use wallclock as timestamps.
@item avoid_negative_ts @var{integer} (@emph{output})
@@ -193,18 +168,6 @@ The offset is added by the muxer to the output timestamps.
Specifying a positive offset means that the corresponding streams are
delayed bt the time duration specified in @var{offset}. Default value
is @code{0} (meaning that no offset is applied).
@item format_whitelist @var{list} (@emph{input})
"," separated List of allowed demuxers. By default all are allowed.
@item dump_separator @var{string} (@emph{input})
Separator used to separate the fields printed on the command line about the
Stream parameters.
For example to separate the fields with newlines and indention:
@example
ffprobe -dump_separator "
" -i ~/videos/matrixbench_mpeg2.mpg
@end example
@end table
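Purely as an illustration of the probing and flag options documented above (integer values accept the SI suffixes described at the top of this file); file names are placeholders.

@example
ffmpeg -probesize 10M -analyzeduration 10M -fflags +genpts -i input.ts -c copy output.mkv
@end example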
@c man end FORMAT OPTIONS


@@ -1,5 +1,4 @@
\input texinfo @c -*- texinfo -*-
-@documentencoding UTF-8
@settitle General Documentation
@titlepage
@@ -53,6 +52,14 @@ instructions for installing the libraries.
Then pass @code{--enable-libopencore-amrnb} and/or
@code{--enable-libopencore-amrwb} to configure to enable them.
@subsection VisualOn AAC encoder library
FFmpeg can make use of the VisualOn AACenc library for AAC encoding.
Go to @url{http://sourceforge.net/projects/opencore-amr/} and follow the
instructions for installing the library.
Then pass @code{--enable-libvo-aacenc} to configure to enable it.
@subsection VisualOn AMR-WB encoder library
FFmpeg can make use of the VisualOn AMR-WBenc library for AMR-WB encoding.
@@ -101,14 +108,6 @@ Go to @url{http://www.wavpack.com/} and follow the instructions for
installing the library. Then pass @code{--enable-libwavpack} to configure to
enable it.
@section OpenH264
FFmpeg can make use of the OpenH264 library for H.264 encoding.
Go to @url{http://www.openh264.org/} and follow the instructions for
installing the library. Then pass @code{--enable-libopenh264} to configure to
enable it.
@section x264
FFmpeg can make use of the x264 library for H.264 encoding.
@@ -131,20 +130,12 @@ Go to @url{http://x265.org/developers.html} and follow the instructions
for installing the library. Then pass @code{--enable-libx265} to configure
to enable it.
-@float NOTE
+@float note
x265 is under the GNU Public License Version 2 or later
(see @url{http://www.gnu.org/licenses/old-licenses/gpl-2.0.html} for
details), you must upgrade FFmpeg's license to GPL in order to use it.
@end float
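For completeness, the build steps implied by the note above (GPL licensing plus libx265) usually look like the following; the exact flag set is only an illustration.

@example
./configure --enable-gpl --enable-libx265
make && make install
@end example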
@section kvazaar
FFmpeg can make use of the kvazaar library for HEVC encoding.
Go to @url{https://github.com/ultravideo/kvazaar} and follow the
instructions for installing the library. Then pass
@code{--enable-libkvazaar} to configure to enable it.
@section libilbc
iLBC is a narrowband speech codec that has been made freely available
@@ -152,7 +143,7 @@ by Google as part of the WebRTC project. libilbc is a packaging friendly
copy of the iLBC codec. FFmpeg can make use of the libilbc library for
iLBC encoding and decoding.
-Go to @url{https://github.com/TimothyGu/libilbc} and follow the instructions for
+Go to @url{https://github.com/dekkers/libilbc} and follow the instructions for
installing the library. Then pass @code{--enable-libilbc} to configure to
enable it.
@@ -165,6 +156,12 @@ Go to @url{http://sourceforge.net/projects/zapping/} and follow the instructions
installing the library. Then pass @code{--enable-libzvbi} to configure to
enable it.
@float NOTE
libzvbi is licensed under the GNU General Public License Version 2 or later
(see @url{http://www.gnu.org/licenses/old-licenses/gpl-2.0.html} for details),
you must upgrade FFmpeg's license to GPL in order to use it.
@end float
@section AviSynth
FFmpeg can read AviSynth scripts as input. To enable support, pass
@@ -173,8 +170,8 @@ included in compat/avisynth/, which allows the user to enable support
without needing to search for these headers themselves.
For Windows, supported AviSynth variants are
-@url{http://avisynth.nl, AviSynth 2.6 RC1 or higher} for 32-bit builds and
-@url{http://avs-plus.net, AviSynth+ r1718 or higher} for 32-bit and 64-bit builds.
+@url{http://avisynth.nl, AviSynth 2.5 or 2.6} for 32-bit builds and
+@url{http://avs-plus.net, AviSynth+ 0.1} for 32-bit and 64-bit builds.
For Linux and OS X, the supported AviSynth variant is
@url{https://github.com/avxsynth/avxsynth, AvxSynth}.
end user having AviSynth or AvxSynth installed - they'll only need to be
installed to use AviSynth scripts (obviously).
@end float
@section Intel QuickSync Video
FFmpeg can use Intel QuickSync Video (QSV) for accelerated encoding and decoding
of multiple codecs. To use QSV, FFmpeg must be linked against the @code{libmfx}
dispatcher, which loads the actual decoding libraries.
The dispatcher is open source and can be downloaded from
@url{https://github.com/lu-zero/mfx_dispatch.git}. FFmpeg needs to be configured
with the @code{--enable-libmfx} option and @code{pkg-config} needs to be able to
locate the dispatcher's @code{.pc} files.
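Assuming the QSV paragraph above (present only on the newer side of the diff), enabling it follows the same pattern as the other external libraries; the clone location and build steps are illustrative.

@example
git clone https://github.com/lu-zero/mfx_dispatch.git
# build and install the dispatcher so pkg-config can find its .pc file, then:
./configure --enable-libmfx
@end example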
@chapter Supported File Formats, Codecs or Features
@@ -209,14 +195,9 @@ library:
@multitable @columnfractions .4 .1 .1 .4
@item Name @tab Encoding @tab Decoding @tab Comments
@item 3dostr @tab @tab X
@item 4xm @tab @tab X @item 4xm @tab @tab X
@tab 4X Technologies format, used in some games. @tab 4X Technologies format, used in some games.
@item 8088flex TMV @tab @tab X @item 8088flex TMV @tab @tab X
@item AAX @tab @tab X
@tab Audible Enhanced Audio format, used in audiobooks.
@item AA @tab @tab X
@tab Audible Format 2, 3, and 4, used in audiobooks.
@item ACT Voice @tab @tab X @item ACT Voice @tab @tab X
@tab contains G.729 audio @tab contains G.729 audio
@item Adobe Filmstrip @tab X @tab X @item Adobe Filmstrip @tab X @tab X
@@ -228,15 +209,10 @@ library:
@tab Multimedia format used in game Heart Of Darkness. @tab Multimedia format used in game Heart Of Darkness.
@item Apple HTTP Live Streaming @tab @tab X @item Apple HTTP Live Streaming @tab @tab X
@item Artworx Data Format @tab @tab X @item Artworx Data Format @tab @tab X
@item Interplay ACM @tab @tab X
@tab Audio only format used in some Interplay games.
@item ADP @tab @tab X @item ADP @tab @tab X
@tab Audio format used on the Nintendo Gamecube. @tab Audio format used on the Nintendo Gamecube.
@item AFC @tab @tab X @item AFC @tab @tab X
@tab Audio format used on the Nintendo Gamecube. @tab Audio format used on the Nintendo Gamecube.
@item ADS/SS2 @tab @tab X
@tab Audio format used on the PS2.
@item APNG @tab X @tab X
@item ASF @tab X @tab X @item ASF @tab X @tab X
@item AST @tab X @tab X @item AST @tab X @tab X
@tab Audio format used on the Nintendo Wii. @tab Audio format used on the Nintendo Wii.
@@ -257,8 +233,6 @@ library:
@tab Used in Z and Z95 games. @tab Used in Z and Z95 games.
@item Brute Force & Ignorance @tab @tab X @item Brute Force & Ignorance @tab @tab X
@tab Used in the game Flash Traffic: City of Angels. @tab Used in the game Flash Traffic: City of Angels.
@item BFSTM @tab @tab X
@tab Audio format used on the Nintendo WiiU (based on BRSTM).
@item BRSTM @tab @tab X @item BRSTM @tab @tab X
@tab Audio format used on the Nintendo Wii. @tab Audio format used on the Nintendo Wii.
@item BWF @tab X @tab X @item BWF @tab X @tab X
@@ -269,14 +243,8 @@ library:
@tab Used in the game Cyberia from Interplay. @tab Used in the game Cyberia from Interplay.
@item Delphine Software International CIN @tab @tab X @item Delphine Software International CIN @tab @tab X
@tab Multimedia format used by Delphine Software games. @tab Multimedia format used by Delphine Software games.
@item Digital Speech Standard (DSS) @tab @tab X
@item Canopus HQ @tab @tab X
@item Canopus HQA @tab @tab X
@item Canopus HQX @tab @tab X
@item CD+G @tab @tab X @item CD+G @tab @tab X
@tab Video format used by CD+G karaoke disks @tab Video format used by CD+G karaoke disks
@item Phantom Cine @tab @tab X
@item Cineform HD @tab @tab X
@item Commodore CDXL @tab @tab X @item Commodore CDXL @tab @tab X
@tab Amiga CD video format @tab Amiga CD video format
@item Core Audio Format @tab X @tab X @item Core Audio Format @tab X @tab X
@@ -288,11 +256,8 @@ library:
@tab Audio format used in some games by CRYO Interactive Entertainment. @tab Audio format used in some games by CRYO Interactive Entertainment.
@item D-Cinema audio @tab X @tab X @item D-Cinema audio @tab X @tab X
@item Deluxe Paint Animation @tab @tab X @item Deluxe Paint Animation @tab @tab X
@item DCSTR @tab @tab X
@item DFA @tab @tab X @item DFA @tab @tab X
@tab This format is used in Chronomaster game @tab This format is used in Chronomaster game
@item DirectDraw Surface @tab @tab X
@item DSD Stream File (DSF) @tab @tab X
@item DV video @tab X @tab X @item DV video @tab X @tab X
@item DXA @tab @tab X @item DXA @tab @tab X
@tab This format is used in the non-Windows version of the Feeble Files @tab This format is used in the non-Windows version of the Feeble Files
@@ -315,8 +280,6 @@ library:
@item G.723.1 @tab X @tab X @item G.723.1 @tab X @tab X
@item G.729 BIT @tab X @tab X @item G.729 BIT @tab X @tab X
@item G.729 raw @tab @tab X @item G.729 raw @tab @tab X
@item GENH @tab @tab X
@tab Audio format for various games.
@item GIF Animation @tab X @tab X @item GIF Animation @tab X @tab X
@item GXF @tab X @tab X @item GXF @tab X @tab X
@tab General eXchange Format SMPTE 360M, used by Thomson Grass Valley @tab General eXchange Format SMPTE 360M, used by Thomson Grass Valley
@@ -339,18 +302,15 @@ library:
@tab A format generated by IndigoVision 8000 video server. @tab A format generated by IndigoVision 8000 video server.
@item IVF (On2) @tab X @tab X @item IVF (On2) @tab X @tab X
@tab A format used by libvpx @tab A format used by libvpx
@item Internet Video Recording @tab @tab X
@item IRCAM @tab X @tab X @item IRCAM @tab X @tab X
@item LATM @tab X @tab X @item LATM @tab X @tab X
@item LMLM4 @tab @tab X @item LMLM4 @tab @tab X
@tab Used by Linux Media Labs MPEG-4 PCI boards @tab Used by Linux Media Labs MPEG-4 PCI boards
@item LOAS @tab @tab X @item LOAS @tab @tab X
@tab contains LATM multiplexed AAC audio @tab contains LATM multiplexed AAC audio
@item LRC @tab X @tab X
@item LVF @tab @tab X @item LVF @tab @tab X
@item LXF @tab @tab X @item LXF @tab @tab X
@tab VR native stream format, used by Leitch/Harris' video servers. @tab VR native stream format, used by Leitch/Harris' video servers.
@item Magic Lantern Video (MLV) @tab @tab X
@item Matroska @tab X @tab X @item Matroska @tab X @tab X
@item Matroska audio @tab X @tab @item Matroska audio @tab X @tab
@item FFmpeg metadata @tab X @tab X @item FFmpeg metadata @tab X @tab X
@@ -376,8 +336,6 @@ library:
@tab also known as DVB Transport Stream @tab also known as DVB Transport Stream
@item MPEG-4 @tab X @tab X @item MPEG-4 @tab X @tab X
@tab MPEG-4 is a variant of QuickTime. @tab MPEG-4 is a variant of QuickTime.
@item MSF @tab @tab X
@tab Audio format used on the PS3.
@item Mirillis FIC video @tab @tab X @item Mirillis FIC video @tab @tab X
@tab No cursor rendering. @tab No cursor rendering.
@item MIME multipart JPEG @tab X @tab @item MIME multipart JPEG @tab X @tab
@@ -461,7 +419,6 @@ library:
@item Redirector @tab @tab X @item Redirector @tab @tab X
@item RedSpark @tab @tab X @item RedSpark @tab @tab X
@item Renderware TeXture Dictionary @tab @tab X @item Renderware TeXture Dictionary @tab @tab X
@item Resolume DXV @tab @tab X
@item RL2 @tab @tab X @item RL2 @tab @tab X
@tab Audio and video format used in some games by Entertainment Software Partners. @tab Audio and video format used in some games by Entertainment Software Partners.
@item RPL/ARMovie @tab @tab X @item RPL/ARMovie @tab @tab X
@@ -493,23 +450,14 @@ library:
@item Sony Wave64 (W64) @tab X @tab X @item Sony Wave64 (W64) @tab X @tab X
@item SoX native format @tab X @tab X @item SoX native format @tab X @tab X
@item SUN AU format @tab X @tab X @item SUN AU format @tab X @tab X
@item SUP raw PGS subtitles @tab @tab X
@item SVAG @tab @tab X
@tab Audio format used in Konami PS2 games.
@item TDSC @tab @tab X
@item Text files @tab @tab X @item Text files @tab @tab X
@item THP @tab @tab X @item THP @tab @tab X
@tab Used on the Nintendo GameCube. @tab Used on the Nintendo GameCube.
@item Tiertex Limited SEQ @tab @tab X @item Tiertex Limited SEQ @tab @tab X
@tab Tiertex .seq files used in the DOS CD-ROM version of the game Flashback. @tab Tiertex .seq files used in the DOS CD-ROM version of the game Flashback.
@item True Audio @tab @tab X @item True Audio @tab @tab X
@item VAG @tab @tab X
@tab Audio format used in many Sony PS2 games.
@item VC-1 test bitstream @tab X @tab X @item VC-1 test bitstream @tab X @tab X
@item Vidvox Hap @tab X @tab X
@item Vivo @tab @tab X @item Vivo @tab @tab X
@item VPK @tab @tab X
@tab Audio format used in Sony PS games.
@item WAV @tab X @tab X @item WAV @tab X @tab X
@item WavPack @tab X @tab X @item WavPack @tab X @tab X
@item WebM @tab X @tab X @item WebM @tab X @tab X
@@ -520,12 +468,8 @@ library:
@tab Multimedia format used in Westwood Studios games. @tab Multimedia format used in Westwood Studios games.
@item Westwood Studios VQA @tab @tab X @item Westwood Studios VQA @tab @tab X
@tab Multimedia format used in Westwood Studios games. @tab Multimedia format used in Westwood Studios games.
@item Wideband Single-bit Data (WSD) @tab @tab X
@item WVE @tab @tab X
@item XMV @tab @tab X @item XMV @tab @tab X
@tab Microsoft video container used in Xbox games. @tab Microsoft video container used in Xbox games.
@item XVAG @tab @tab X
@tab Audio format used on the PS3.
@item xWMA @tab @tab X @item xWMA @tab @tab X
@tab Microsoft audio container used by XAudio 2. @tab Microsoft audio container used by XAudio 2.
@item eXtended BINary text (XBIN) @tab @tab X @item eXtended BINary text (XBIN) @tab @tab X
@@ -544,14 +488,11 @@ following image formats are supported:
@item Name @tab Encoding @tab Decoding @tab Comments @item Name @tab Encoding @tab Decoding @tab Comments
@item .Y.U.V @tab X @tab X @item .Y.U.V @tab X @tab X
@tab one raw file per component @tab one raw file per component
@item Alias PIX @tab X @tab X
@tab Alias/Wavefront PIX image format
@item animated GIF @tab X @tab X @item animated GIF @tab X @tab X
@item APNG @tab X @tab X
@item BMP @tab X @tab X @item BMP @tab X @tab X
@tab Microsoft BMP image @tab Microsoft BMP image
-@item BRender PIX @tab @tab X
-@tab Argonaut BRender 3D engine image format.
+@item PIX @tab @tab X
+@tab PIX is an image format used in the Argonaut BRender engine.
@item DPX @tab X @tab X @item DPX @tab X @tab X
@tab Digital Picture Exchange @tab Digital Picture Exchange
@item EXR @tab @tab X @item EXR @tab @tab X
@@ -644,7 +585,6 @@ following image formats are supported:
@item Bethesda VID video @tab @tab X @item Bethesda VID video @tab @tab X
@tab Used in some games from Bethesda Softworks. @tab Used in some games from Bethesda Softworks.
@item Bink Video @tab @tab X @item Bink Video @tab @tab X
@item BitJazz SheerVideo @tab @tab X
@item Bitmap Brothers JV video @tab @tab X @item Bitmap Brothers JV video @tab @tab X
@item y41p Brooktree uncompressed 4:1:1 12-bit @tab X @tab X @item y41p Brooktree uncompressed 4:1:1 12-bit @tab X @tab X
@item Brute Force & Ignorance @tab @tab X @item Brute Force & Ignorance @tab @tab X
@@ -700,17 +640,15 @@ following image formats are supported:
@tab Sorenson H.263 used in Flash @tab Sorenson H.263 used in Flash
@item Forward Uncompressed @tab @tab X @item Forward Uncompressed @tab @tab X
@item Fraps @tab @tab X @item Fraps @tab @tab X
@item Go2Meeting @tab @tab X
@tab fourcc: G2M2, G2M3
@item Go2Webinar @tab @tab X @item Go2Webinar @tab @tab X
@tab fourcc: G2M4 @tab fourcc: G2M4
@item H.261 @tab X @tab X @item H.261 @tab X @tab X
@item H.263 / H.263-1996 @tab X @tab X @item H.263 / H.263-1996 @tab X @tab X
@item H.263+ / H.263-1998 / H.263 version 2 @tab X @tab X @item H.263+ / H.263-1998 / H.263 version 2 @tab X @tab X
@item H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 @tab E @tab X
-@tab encoding supported through external library libx264 and OpenH264
+@tab encoding supported through external library libx264
@item HEVC @tab X @tab X
-@tab encoding supported through external library libx265 and libkvazaar
+@tab encoding supported through the external library libx265
@item HNM version 4 @tab @tab X @item HNM version 4 @tab @tab X
@item HuffYUV @tab X @tab X @item HuffYUV @tab X @tab X
@item HuffYUV FFmpeg variant @tab X @tab X @item HuffYUV FFmpeg variant @tab X @tab X
@@ -742,10 +680,9 @@ following image formats are supported:
@item LCL (LossLess Codec Library) MSZH @tab @tab X @item LCL (LossLess Codec Library) MSZH @tab @tab X
@item LCL (LossLess Codec Library) ZLIB @tab E @tab E @item LCL (LossLess Codec Library) ZLIB @tab E @tab E
@item LOCO @tab @tab X @item LOCO @tab @tab X
-@item LucasArts SANM/Smush @tab @tab X
-@tab Used in LucasArts games / SMUSH animations.
+@item LucasArts Smush @tab @tab X
+@tab Used in LucasArts games.
@item lossless MJPEG @tab X @tab X
@item MagicYUV Lossless Video @tab @tab X
@item Microsoft ATC Screen @tab @tab X @item Microsoft ATC Screen @tab @tab X
@tab Also known as Microsoft Screen 3. @tab Also known as Microsoft Screen 3.
@item Microsoft Expression Encoder Screen @tab @tab X @item Microsoft Expression Encoder Screen @tab @tab X
@@ -779,8 +716,6 @@ following image formats are supported:
@tab fourcc: VP50 @tab fourcc: VP50
@item On2 VP6 @tab @tab X @item On2 VP6 @tab @tab X
@tab fourcc: VP60,VP61,VP62 @tab fourcc: VP60,VP61,VP62
@item On2 VP7 @tab @tab X
@tab fourcc: VP70,VP71
@item VP8 @tab E @tab X @item VP8 @tab E @tab X
@tab fourcc: VP80, encoding supported through external library libvpx @tab fourcc: VP80, encoding supported through external library libvpx
@item VP9 @tab E @tab X @item VP9 @tab E @tab X
@@ -810,12 +745,11 @@ following image formats are supported:
@tab Texture dictionaries used by the Renderware Engine. @tab Texture dictionaries used by the Renderware Engine.
@item RL2 video @tab @tab X @item RL2 video @tab @tab X
@tab used in some games by Entertainment Software Partners @tab used in some games by Entertainment Software Partners
-@item Screenpresso @tab @tab X
+@item SGI RLE 8-bit @tab @tab X
@item Sierra VMD video @tab @tab X
@tab Used in Sierra VMD files.
@item Silicon Graphics Motion Video Compressor 1 (MVC1) @tab @tab X
@item Silicon Graphics Motion Video Compressor 2 (MVC2) @tab @tab X
@item Silicon Graphics RLE 8-bit video @tab @tab X
@item Smacker video @tab @tab X @item Smacker video @tab @tab X
@tab Video encoding used in Smacker. @tab Video encoding used in Smacker.
@item SMPTE VC-1 @tab @tab X @item SMPTE VC-1 @tab @tab X
@@ -877,13 +811,12 @@ following image formats are supported:
@item Name @tab Encoding @tab Decoding @tab Comments
@item 8SVX exponential @tab @tab X
@item 8SVX fibonacci @tab @tab X
-@item AAC @tab EX @tab X
-@tab encoding supported through internal encoder and external libraries libfaac and libfdk-aac
-@item AAC+ @tab E @tab IX
-@tab encoding supported through external library libfdk-aac
-@item AC-3 @tab IX @tab IX
+@item AAC+ @tab E @tab X
+@tab encoding supported through external library libaacplus
+@item AAC @tab E @tab X
+@tab encoding supported through external library libfaac and libvo-aacenc
+@item AC-3 @tab IX @tab X
@item ADPCM 4X Movie @tab @tab X
@item APDCM Yamaha AICA @tab @tab X
@item ADPCM CDROM XA @tab @tab X @item ADPCM CDROM XA @tab @tab X
@item ADPCM Creative Technology @tab @tab X @item ADPCM Creative Technology @tab @tab X
@tab 16 -> 4, 8 -> 4, 8 -> 3, 8 -> 2 @tab 16 -> 4, 8 -> 4, 8 -> 3, 8 -> 2
@@ -918,8 +851,7 @@ following image formats are supported:
@item ADPCM MS IMA @tab X @tab X
@item ADPCM Nintendo Gamecube AFC @tab @tab X
@item ADPCM Nintendo Gamecube DTK @tab @tab X
-@item ADPCM Nintendo THP @tab @tab X
+@item ADPCM Nintendo Gamecube THP @tab @tab X
@item APDCM Playstation @tab @tab X
@item ADPCM QT IMA @tab X @tab X @item ADPCM QT IMA @tab X @tab X
@item ADPCM SEGA CRI ADX @tab X @tab X @item ADPCM SEGA CRI ADX @tab X @tab X
@tab Used in Sega Dreamcast games. @tab Used in Sega Dreamcast games.
@@ -927,8 +859,6 @@ following image formats are supported:
@item ADPCM Sound Blaster Pro 2-bit @tab @tab X @item ADPCM Sound Blaster Pro 2-bit @tab @tab X
@item ADPCM Sound Blaster Pro 2.6-bit @tab @tab X @item ADPCM Sound Blaster Pro 2.6-bit @tab @tab X
@item ADPCM Sound Blaster Pro 4-bit @tab @tab X @item ADPCM Sound Blaster Pro 4-bit @tab @tab X
@item ADPCM VIMA @tab @tab X
@tab Used in LucasArts SMUSH animations.
@item ADPCM Westwood Studios IMA @tab @tab X @item ADPCM Westwood Studios IMA @tab @tab X
@tab Used in Westwood Studios games like Command and Conquer. @tab Used in Westwood Studios games like Command and Conquer.
@item ADPCM Yamaha @tab X @tab X @item ADPCM Yamaha @tab X @tab X
@@ -948,29 +878,20 @@ following image formats are supported:
@tab decoding supported through external library libcelt @tab decoding supported through external library libcelt
@item Delphine Software International CIN audio @tab @tab X @item Delphine Software International CIN audio @tab @tab X
@tab Codec used in Delphine Software International games. @tab Codec used in Delphine Software International games.
@item Digital Speech Standard - Standard Play mode (DSS SP) @tab @tab X
@item Discworld II BMV Audio @tab @tab X @item Discworld II BMV Audio @tab @tab X
@item COOK @tab @tab X @item COOK @tab @tab X
@tab All versions except 5.1 are supported. @tab All versions except 5.1 are supported.
@item DCA (DTS Coherent Acoustics) @tab X @tab X @item DCA (DTS Coherent Acoustics) @tab X @tab X
@tab supported extensions: XCh, XXCH, X96, XBR, XLL, LBR (partially)
@item DPCM id RoQ @tab X @tab X @item DPCM id RoQ @tab X @tab X
@tab Used in Quake III, Jedi Knight 2 and other computer games. @tab Used in Quake III, Jedi Knight 2 and other computer games.
@item DPCM Interplay @tab @tab X @item DPCM Interplay @tab @tab X
@tab Used in various Interplay computer games. @tab Used in various Interplay computer games.
@item DPCM Squareroot-Delta-Exact @tab @tab X
@tab Used in various games.
@item DPCM Sierra Online @tab @tab X @item DPCM Sierra Online @tab @tab X
@tab Used in Sierra Online game audio files. @tab Used in Sierra Online game audio files.
@item DPCM Sol @tab @tab X @item DPCM Sol @tab @tab X
@item DPCM Xan @tab @tab X @item DPCM Xan @tab @tab X
@tab Used in Origin's Wing Commander IV AVI files. @tab Used in Origin's Wing Commander IV AVI files.
@item DSD (Direct Stream Digitial), least significant bit first @tab @tab X
@item DSD (Direct Stream Digitial), most significant bit first @tab @tab X
@item DSD (Direct Stream Digitial), least significant bit first, planar @tab @tab X
@item DSD (Direct Stream Digitial), most significant bit first, planar @tab @tab X
@item DSP Group TrueSpeech @tab @tab X @item DSP Group TrueSpeech @tab @tab X
@item DST (Direct Stream Transfer) @tab @tab X
@item DV audio @tab @tab X @item DV audio @tab @tab X
@item Enhanced AC-3 @tab X @tab X @item Enhanced AC-3 @tab X @tab X
@item EVRC (Enhanced Variable Rate Codec) @tab @tab X @item EVRC (Enhanced Variable Rate Codec) @tab @tab X
@@ -985,7 +906,6 @@ following image formats are supported:
@item iLBC (Internet Low Bitrate Codec) @tab E @tab E @item iLBC (Internet Low Bitrate Codec) @tab E @tab E
@tab encoding and decoding supported through external library libilbc @tab encoding and decoding supported through external library libilbc
@item IMC (Intel Music Coder) @tab @tab X @item IMC (Intel Music Coder) @tab @tab X
@item Interplay ACM @tab @tab X
@item MACE (Macintosh Audio Compression/Expansion) 3:1 @tab @tab X @item MACE (Macintosh Audio Compression/Expansion) 3:1 @tab @tab X
@item MACE (Macintosh Audio Compression/Expansion) 6:1 @tab @tab X @item MACE (Macintosh Audio Compression/Expansion) 6:1 @tab @tab X
@item MLP (Meridian Lossless Packing) @tab @tab X @item MLP (Meridian Lossless Packing) @tab @tab X
@@ -993,16 +913,15 @@ following image formats are supported:
@item Monkey's Audio @tab @tab X @item Monkey's Audio @tab @tab X
@item MP1 (MPEG audio layer 1) @tab @tab IX @item MP1 (MPEG audio layer 1) @tab @tab IX
@item MP2 (MPEG audio layer 2) @tab IX @tab IX
-@tab encoding supported also through external library TwoLAME
+@tab libtwolame can be used alternatively for encoding.
@item MP3 (MPEG audio layer 3) @tab E @tab IX
@tab encoding supported through external library LAME, ADU MP3 and MP3onMP4 also supported
@item MPEG-4 Audio Lossless Coding (ALS) @tab @tab X @item MPEG-4 Audio Lossless Coding (ALS) @tab @tab X
@item Musepack SV7 @tab @tab X @item Musepack SV7 @tab @tab X
@item Musepack SV8 @tab @tab X @item Musepack SV8 @tab @tab X
@item Nellymoser Asao @tab X @tab X
-@item On2 AVC (Audio for Video Codec) @tab @tab X
-@item Opus @tab E @tab X
-@tab encoding supported through external library libopus
+@item Opus @tab E @tab E
+@tab supported through external library libopus
@item PCM A-law @tab X @tab X
@item PCM mu-law @tab X @tab X @item PCM mu-law @tab X @tab X
@item PCM signed 8-bit planar @tab X @tab X @item PCM signed 8-bit planar @tab X @tab X
@@ -1070,8 +989,6 @@ following image formats are supported:
@item Windows Media Audio Lossless @tab @tab X @item Windows Media Audio Lossless @tab @tab X
@item Windows Media Audio Pro @tab @tab X @item Windows Media Audio Pro @tab @tab X
@item Windows Media Audio Voice @tab @tab X @item Windows Media Audio Voice @tab @tab X
@item Xbox Media Audio 1 @tab @tab X
@item Xbox Media Audio 2 @tab @tab X
@end multitable
@code{X} means that encoding (resp. decoding) is supported.
@@ -1098,7 +1015,6 @@ performance on systems without hardware floating point support).
@item PJS (Phoenix) @tab @tab X @tab @tab X @item PJS (Phoenix) @tab @tab X @tab @tab X
@item RealText @tab @tab X @tab @tab X @item RealText @tab @tab X @tab @tab X
@item SAMI @tab @tab X @tab @tab X @item SAMI @tab @tab X @tab @tab X
@item Spruce format (STL) @tab @tab X @tab @tab X
@item SSA/ASS @tab X @tab X @tab X @tab X @item SSA/ASS @tab X @tab X @tab X @tab X
@item SubRip (SRT) @tab X @tab X @tab X @tab X @item SubRip (SRT) @tab X @tab X @tab X @tab X
@item SubViewer v1 @tab @tab X @tab @tab X @item SubViewer v1 @tab @tab X @tab @tab X
@@ -1106,7 +1022,7 @@ performance on systems without hardware floating point support).
@item TED Talks captions @tab @tab X @tab @tab X
@item VobSub (IDX+SUB) @tab @tab X @tab @tab X
@item VPlayer @tab @tab X @tab @tab X
-@item WebVTT @tab X @tab X @tab X @tab X
+@item WebVTT @tab X @tab X @tab @tab X
@item XSUB @tab @tab @tab X @tab X
@end multitable @end multitable
@@ -1124,7 +1040,6 @@ performance on systems without hardware floating point support).
@item HLS @tab X
@item HTTP @tab X
@item HTTPS @tab X
@item Icecast @tab X
@item MMSH @tab X
@item MMST @tab X
@item pipe @tab X
@@ -1135,7 +1050,6 @@ performance on systems without hardware floating point support).
@item RTMPTE @tab X
@item RTMPTS @tab X
@item RTP @tab X
@item SAMBA @tab E
@item SCTP @tab X
@item SFTP @tab E
@item TCP @tab X
@@ -1169,7 +1083,6 @@ performance on systems without hardware floating point support).
@item Video4Linux2 @tab X @tab X
@item VfW capture @tab X @tab
@item X11 grabbing @tab X @tab
@item Win32 grabbing @tab X @tab
@end multitable
@code{X} means that input/output is supported.
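Similarly, the protocols compiled into a particular build can be listed at run time:
@example
ffmpeg -protocols
@end example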


@@ -1,10 +1,9 @@
\input texinfo @c -*- texinfo -*-
@documentencoding UTF-8
@settitle Using Git to develop FFmpeg
@titlepage
@center @titlefont{Using Git to develop FFmpeg}
@end titlepage
@top
@@ -13,9 +12,9 @@
@chapter Introduction
This document aims to give some quick references for a set of useful Git
commands. You should always use the extensive and detailed documentation
provided directly by Git:
@example
git --help
@@ -32,21 +31,22 @@ man git-<command>
shows information about the subcommand <command>.
Additional information can be found on the
@url{http://gitref.org, Git Reference} website.
For more information about the Git project, visit the
@url{http://git-scm.com/, Git website}.
Consult these resources whenever you have problems; they are quite exhaustive.
What follows is a basic introduction to Git and some FFmpeg-specific
guidelines to ease contributing to the project.
@chapter Basic Usage
@section Get Git
You can get Git from @url{http://git-scm.com/}
Most distributions and operating systems provide a package for it.
@@ -65,21 +65,6 @@ git clone git@@source.ffmpeg.org:ffmpeg <target>
This will put the FFmpeg sources into the directory @var{<target>} and let
you push back your changes to the remote repository.
@example
git clone gil@@ffmpeg.org:ffmpeg-web <target>
@end example
This will put the source of the FFmpeg website into the directory
@var{<target>} and let you push back your changes to the remote repository.
(Note that @var{gil} stands for GItoLite and is not a typo of @var{git}.)
If you don't have write-access to the ffmpeg-web repository, you can
create patches after making a read-only ffmpeg-web clone:
@example
git clone git://ffmpeg.org/ffmpeg-web <target>
@end example
Make sure that you do not have Windows line endings in your checkouts,
otherwise you may experience spurious compilation failures. One way to
achieve this is to run
@@ -89,7 +74,6 @@ git config --global core.autocrlf false
@end example
@anchor{Updating the source tree to the latest revision}
@section Updating the source tree to the latest revision
@example
@@ -122,7 +106,7 @@ git add [-A] <filename/dirname>
git rm [-r] <filename/dirname>
@end example
Git needs to get notified of all changes you make to your working
directory that make files appear or disappear.
Line moves across files are automatically tracked.
@@ -142,8 +126,8 @@ will show all local modifications in your working directory as unified diff.
git log <filename(s)>
@end example
You may also use graphical tools like @command{gitview} or @command{gitk}
or the web interface available at @url{http://source.ffmpeg.org/}.
@section Checking source tree status
@@ -164,7 +148,6 @@ git diff --check
to double check your changes before committing them to avoid trouble later
on. All experienced developers do this on each and every commit, no matter
how small.
Every one of them has been saved from looking like a fool by this many times.
It's very easy for stray debug output or cosmetic modifications to slip in;
please avoid such problems through this extra level of scrutiny.
@@ -187,14 +170,14 @@ to make sure you don't have untracked files or deletions.
git add [-i|-p|-A] <filenames/dirnames>
@end example
Make sure you have told Git your name and email address:
@example
git config --global user.name "My Name"
git config --global user.email my@@email.invalid
@end example
Use @option{--global} to set the global configuration for all your Git checkouts.
Git will select the changes to the files for commit. Optionally you can use
the interactive or the patch mode to select hunk by hunk what should be
@@ -225,7 +208,7 @@ include filenames in log messages, Git provides that information.
Preferably, make the commit message have a terse, descriptive first line, an
empty line and then a full description. The first line will be used to name
the patch by @command{git format-patch}.
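A purely illustrative, made-up message following this layout (component prefix and terse summary, blank line, full description) could look like:
@example
lavf/somefile: fix short description of the change

Longer explanation of what was wrong and how this commit addresses
it, possibly referencing a ticket, e.g. "Fixes Ticket1234".
@end example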
@section Preparing a patchset
@@ -316,7 +299,7 @@ the current branch history.
git commit --amend
@end example
allows one to amend the last commit details quickly.
@example
git rebase -i origin/master
@@ -341,14 +324,12 @@ faulty commit disappear from the history.
@section Pushing changes to remote trees
@example
git push origin master --dry-run
@end example
Will simulate a push of the local master branch to the default remote
(@var{origin}) and list which branches, ranges or commits would have been
pushed.
Git will prevent you from pushing changes if the local and remote trees are
out of sync. Refer to @ref{Updating the source tree to the latest revision}.
@example
git remote add <name> <url>
@@ -367,24 +348,23 @@ branches matching the local ones.
@section Finding a specific svn revision
Since version 1.7.1, Git supports the @samp{:/foo} syntax for specifying commits
based on a regular expression; see @command{man gitrevisions}.
@example
git show :/'as revision 23456'
@end example
will show the svn changeset @samp{r23456}. With older Git versions, searching in
the @command{git log} output is the easiest option (especially if a pager with
search capabilities is used).
This commit can be checked out with
@example
git checkout -b svn_23456 :/'as revision 23456'
@end example
or for Git < 1.7.1 with
@example
git checkout -b svn_23456 $SHA1
@@ -393,7 +373,7 @@ git checkout -b svn_23456 $SHA1
where @var{$SHA1} is the commit hash from the @command{git log} output.
@chapter Pre-push checklist
Once you have a set of commits that you feel are ready for pushing,
work through the following checklist to double-check that everything is in
@@ -404,7 +384,7 @@ Apply your common sense, but if in doubt, err on the side of caution.
First, make sure that the commits and branches you are going to push
match what you want pushed and that nothing is missing, extraneous or
wrong. You can see what will be pushed by running @command{git push}
with @option{--dry-run} first, and then inspecting the commits listed with
@command{git log -p 1234567..987654}. The @command{git status} command
may help in finding local changes that you have forgotten to add.
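As a minimal sketch of this inspection step, assuming the local branch is @var{master} and the remote is @var{origin}:
@example
git push --dry-run
git log -p origin/master..master
git status
@end example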
@@ -413,7 +393,7 @@ Next let the code pass through a full run of our test suite.
@itemize
@item @command{make distclean}
@item @command{/path/to/ffmpeg/configure}
@item @command{make fate}
@item if fate fails due to missing samples run @command{make fate-rsync} and retry
@end itemize
@@ -431,5 +411,5 @@ recommended.
@chapter Server Issues
Contact the project admins at @email{root@@ffmpeg.org} if you have technical
problems with the Git server.


@@ -1,7 +1,7 @@
@chapter Input Devices
@c man begin INPUT DEVICES
Input devices are configured elements in FFmpeg which enable accessing
the data coming from a multimedia device attached to your system.
When you configure your FFmpeg build, all the supported input devices
@@ -13,8 +13,8 @@ You can disable all the input devices using the configure option
option "--enable-indev=@var{INDEV}", or you can disable a particular option "--enable-indev=@var{INDEV}", or you can disable a particular
input device using the option "--disable-indev=@var{INDEV}". input device using the option "--disable-indev=@var{INDEV}".
The option "-devices" of the ff* tools will display the list of The option "-formats" of the ff* tools will display the list of
supported input devices. supported input devices (amongst the demuxers).
A description of the currently available input devices follows. A description of the currently available input devices follows.
@@ -51,244 +51,10 @@ ffmpeg -f alsa -i hw:0 alsaout.wav
For more information see:
@url{http://www.alsa-project.org/alsa-doc/alsa-lib/pcm.html}
@subsection Options
@table @option
@item sample_rate
Set the sample rate in Hz. Default is 48000.
@item channels
Set the number of channels. Default is 2.
@end table
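For instance, combining these options with a capture card (the card identifier @samp{hw:0} is only an example and depends on your system), a command could look like:
@example
ffmpeg -f alsa -sample_rate 44100 -channels 1 -i hw:0 out.wav
@end example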
@section avfoundation
AVFoundation input device.
AVFoundation is the framework currently recommended by Apple for stream grabbing on OS X >= 10.7 as well as on iOS.
The older QTKit framework has been marked deprecated since OS X version 10.7.
The input filename has to be given in the following syntax:
@example
-i "[[VIDEO]:[AUDIO]]"
@end example
The first entry selects the video input, while the second selects the audio input.
The stream has to be specified by the device name or the device index as shown by the device list.
Alternatively, the video and/or audio input device can be chosen by index
using the @option{-video_device_index <INDEX>} and/or
@option{-audio_device_index <INDEX>} options, overriding any device name
or index given in the input filename.
All available devices can be enumerated by using @option{-list_devices true}, listing
all device names and corresponding indices.
There are two device name aliases:
@table @code
@item default
Select the AVFoundation default device of the corresponding type.
@item none
Do not record the corresponding media type.
This is equivalent to specifying an empty device name or index.
@end table
@subsection Options
AVFoundation supports the following options:
@table @option
@item -list_devices <TRUE|FALSE>
If set to true, a list of all available input devices is given showing all
device names and indices.
@item -video_device_index <INDEX>
Specify the video device by its index. Overrides anything given in the input filename.
@item -audio_device_index <INDEX>
Specify the audio device by its index. Overrides anything given in the input filename.
@item -pixel_format <FORMAT>
Request the video device to use a specific pixel format.
If the specified format is not supported, a list of available formats is given
and the first one in this list is used instead. Available pixel formats are:
@code{monob, rgb555be, rgb555le, rgb565be, rgb565le, rgb24, bgr24, 0rgb, bgr0, 0bgr, rgb0,
bgr48be, uyvy422, yuva444p, yuva444p16le, yuv444p, yuv422p16, yuv422p10, yuv444p10,
yuv420p, nv12, yuyv422, gray}
@item -framerate
Set the grabbing frame rate. Default is @code{ntsc}, corresponding to a
frame rate of @code{30000/1001}.
@item -video_size
Set the video frame size.
@item -capture_cursor
Capture the mouse pointer. Default is 0.
@item -capture_mouse_clicks
Capture the screen mouse clicks. Default is 0.
@end table
@subsection Examples
@itemize
@item
Print the list of AVFoundation supported devices and exit:
@example
$ ffmpeg -f avfoundation -list_devices true -i ""
@end example
@item
Record video from video device 0 and audio from audio device 0 into out.avi:
@example
$ ffmpeg -f avfoundation -i "0:0" out.avi
@end example
@item
Record video from video device 2 and audio from audio device 1 into out.avi:
@example
$ ffmpeg -f avfoundation -video_device_index 2 -i ":1" out.avi
@end example
@item
Record video from the system default video device using the pixel format bgr0 and do not record any audio into out.avi:
@example
$ ffmpeg -f avfoundation -pixel_format bgr0 -i "default:none" out.avi
@end example
@end itemize
@section bktr
BSD video input device.
@subsection Options
@table @option
@item framerate
Set the frame rate.
@item video_size
Set the video frame size. Default is @code{vga}.
@item standard
Available values are:
@table @samp
@item pal
@item ntsc
@item secam
@item paln
@item palm
@item ntscj
@end table
@end table
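As a sketch only, assuming the capture device node is @file{/dev/bktr0} (the actual path depends on your BSD setup), a grab using these options could be:
@example
ffmpeg -f bktr -standard ntsc -framerate 30 -video_size vga -i /dev/bktr0 out.avi
@end example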
@section decklink
The decklink input device provides capture capabilities for Blackmagic
DeckLink devices.
To enable this input device, you need the Blackmagic DeckLink SDK and you
need to configure with the appropriate @code{--extra-cflags}
and @code{--extra-ldflags}.
On Windows, you need to run the IDL files through @command{widl}.
DeckLink is very picky about the formats it supports. Pixel format is
uyvy422 or v210, framerate and video size must be determined for your device with
@command{-list_formats 1}. Audio sample rate is always 48 kHz and the number
of channels can be 2, 8 or 16. Note that all audio channels are bundled in one single
audio track.
@subsection Options
@table @option
@item list_devices
If set to @option{true}, print a list of devices and exit.
Defaults to @option{false}.
@item list_formats
If set to @option{true}, print a list of supported formats and exit.
Defaults to @option{false}.
@item bm_v210
If set to @samp{1}, video is captured in 10 bit v210 instead
of uyvy422. Not all Blackmagic devices support this option.
@item teletext_lines
If set to nonzero, an additional teletext stream will be captured from the
vertical ancillary data. This option is a bitmask of the VBI lines checked,
specifically lines 6 to 22, and lines 318 to 335. Line 6 is the LSB in the mask.
Selected lines which do not contain teletext information will be ignored. You
can use the special @option{all} constant to select all possible lines, or
@option{standard} to skip lines 6, 318 and 319, which are not compatible with all
receivers. Capturing teletext only works for SD PAL sources in 8 bit mode.
To use this option, ffmpeg needs to be compiled with @code{--enable-libzvbi}.
@item channels
Defines number of audio channels to capture. Must be @samp{2}, @samp{8} or @samp{16}.
Defaults to @samp{2}.
@end table
@subsection Examples
@itemize
@item
List input devices:
@example
ffmpeg -f decklink -list_devices 1 -i dummy
@end example
@item
List supported formats:
@example
ffmpeg -f decklink -list_formats 1 -i 'Intensity Pro'
@end example
@item
Capture video clip at 1080i50 (format 11):
@example
ffmpeg -f decklink -i 'Intensity Pro@@11' -acodec copy -vcodec copy output.avi
@end example
@item
Capture video clip at 1080i50 10 bit:
@example
ffmpeg -bm_v210 1 -f decklink -i 'UltraStudio Mini Recorder@@11' -acodec copy -vcodec copy output.avi
@end example
@item
Capture video clip at 1080i50 with 16 audio channels:
@example
ffmpeg -channels 16 -f decklink -i 'UltraStudio Mini Recorder@@11' -acodec copy -vcodec copy output.avi
@end example
@end itemize
@section dshow
Windows DirectShow input device.
@@ -306,7 +72,7 @@ The input name should be in the format:
@end example
where @var{TYPE} can be either @var{audio} or @var{video},
and @var{NAME} is the device's name or alternative name.
@subsection Options
@@ -339,11 +105,11 @@ If set to @option{true}, print a list of selected device's options
and exit.
@item video_device_number
Set video device number for devices with the same name (starts at 0,
defaults to 0).
@item audio_device_number
Set audio device number for devices with the same name (starts at 0,
defaults to 0).
@item pixel_format
@@ -359,85 +125,6 @@ Setting this value too low can degrade performance.
See also
@url{http://msdn.microsoft.com/en-us/library/windows/desktop/dd377582(v=vs.85).aspx}
@item video_pin_name
Select video capture pin to use by name or alternative name.
@item audio_pin_name
Select audio capture pin to use by name or alternative name.
@item crossbar_video_input_pin_number
Select video input pin number for crossbar device. This will be
routed to the crossbar device's Video Decoder output pin.
Note that changing this value can affect future invocations
(sets a new default) until system reboot occurs.
@item crossbar_audio_input_pin_number
Select audio input pin number for crossbar device. This will be
routed to the crossbar device's Audio Decoder output pin.
Note that changing this value can affect future invocations
(sets a new default) until system reboot occurs.
@item show_video_device_dialog
If set to @option{true}, before capture starts, popup a display dialog
to the end user, allowing them to change video filter properties
and configurations manually.
Note that for crossbar devices, adjusting values in this dialog
may be needed at times to toggle between PAL (25 fps) and NTSC (29.97)
input frame rates, sizes, interlacing, etc. Changing these values can
enable different scan rates/frame rates and avoid green bars at
the bottom, flickering scan lines, etc.
Note that with some devices, changing these properties can also affect future
invocations (sets new defaults) until system reboot occurs.
@item show_audio_device_dialog
If set to @option{true}, before capture starts, popup a display dialog
to the end user, allowing them to change audio filter properties
and configurations manually.
@item show_video_crossbar_connection_dialog
If set to @option{true}, before capture starts, popup a display
dialog to the end user, allowing them to manually
modify crossbar pin routings, when it opens a video device.
@item show_audio_crossbar_connection_dialog
If set to @option{true}, before capture starts, popup a display
dialog to the end user, allowing them to manually
modify crossbar pin routings, when it opens an audio device.
@item show_analog_tv_tuner_dialog
If set to @option{true}, before capture starts, popup a display
dialog to the end user, allowing them to manually
modify TV channels and frequencies.
@item show_analog_tv_tuner_audio_dialog
If set to @option{true}, before capture starts, popup a display
dialog to the end user, allowing them to manually
modify TV audio (like mono vs. stereo, Language A,B or C).
@item audio_device_load
Load an audio capture filter device from a file instead of searching for
it by name. It may load additional parameters too, if the filter
supports the serialization of its properties to a file.
To use this, an audio capture source has to be specified, but it can
be anything, even a fake one.
@item audio_device_save
Save the currently used audio capture filter device and its
parameters (if the filter supports it) to a file.
If a file with the same name exists it will be overwritten.
@item video_device_load
Load a video capture filter device from a file instead of searching for
it by name. It may load additional parameters too, if the filter
supports the serialization of its properties to a file.
To use this, a video capture source has to be specified, but it can
be anything, even a fake one.
@item video_device_save
Save the currently used video capture filter device and its
parameters (if the filter supports it) to a file.
If a file with the same name exists it will be overwritten.
@end table
@subsection Examples
@@ -474,46 +161,12 @@ Print the list of supported options in selected device and exit:
$ ffmpeg -list_options true -f dshow -i video="Camera"
@end example
@item
Specify pin names to capture by name or alternative name, specify alternative device name:
@example
$ ffmpeg -f dshow -audio_pin_name "Audio Out" -video_pin_name 2 -i video=video="@@device_pnp_\\?\pci#ven_1a0a&dev_6200&subsys_62021461&rev_01#4&e2c7dd6&0&00e1#@{65e8773d-8f56-11d0-a3b9-00a0c9223196@}\@{ca465100-deb0-4d59-818f-8c477184adf6@}":audio="Microphone"
@end example
@item
Configure a crossbar device, specifying crossbar pins, allow user to adjust video capture properties at startup:
@example
$ ffmpeg -f dshow -show_video_device_dialog true -crossbar_video_input_pin_number 0
-crossbar_audio_input_pin_number 3 -i video="AVerMedia BDA Analog Capture":audio="AVerMedia BDA Analog Capture"
@end example
@end itemize
@section dv1394
Linux DV 1394 input device.
@subsection Options
@table @option
@item framerate
Set the frame rate. Default is 25.
@item standard
Available values are:
@table @samp
@item pal
@item ntsc
@end table
Default value is @code{ntsc}.
@end table
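A minimal sketch, assuming the DV device node is @file{/dev/dv1394/0} (the actual path depends on your kernel and udev setup):
@example
ffmpeg -f dv1394 -standard pal -i /dev/dv1394/0 out.avi
@end example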
@section fbdev
Linux framebuffer input device.
@@ -526,102 +179,18 @@ console. It is accessed through a file device node, usually
For more detailed information read the file
Documentation/fb/framebuffer.txt included in the Linux source tree.
See also @url{http://linux-fbdev.sourceforge.net/}, and fbset(1).
To record from the framebuffer device @file{/dev/fb0} with
@command{ffmpeg}:
@example
ffmpeg -f fbdev -framerate 10 -i /dev/fb0 out.avi
@end example
You can take a single screenshot image with the command:
@example
ffmpeg -f fbdev -framerate 1 -i /dev/fb0 -frames:v 1 screenshot.jpeg
@end example
@subsection Options
@table @option
@item framerate
Set the frame rate. Default is 25.
@end table
@section gdigrab
Win32 GDI-based screen capture device.
This device allows you to capture a region of the display on Windows.
There are two options for the input filename:
@example
desktop
@end example
or
@example
title=@var{window_title}
@end example
The first option will capture the entire desktop, or a fixed region of the
desktop. The second option will instead capture the contents of a single
window, regardless of its position on the screen.
For example, to grab the entire desktop using @command{ffmpeg}:
@example
ffmpeg -f gdigrab -framerate 6 -i desktop out.mpg
@end example
Grab a 640x480 region at position @code{10,20}:
@example
ffmpeg -f gdigrab -framerate 6 -offset_x 10 -offset_y 20 -video_size vga -i desktop out.mpg
@end example
Grab the contents of the window named "Calculator":
@example
ffmpeg -f gdigrab -framerate 6 -i title=Calculator out.mpg
@end example
@subsection Options
@table @option
@item draw_mouse
Specify whether to draw the mouse pointer. Use the value @code{0} to
not draw the pointer. Default value is @code{1}.
@item framerate
Set the grabbing frame rate. Default value is @code{ntsc},
corresponding to a frame rate of @code{30000/1001}.
@item show_region
Show grabbed region on screen.
If @var{show_region} is specified with @code{1}, then the grabbing
region will be indicated on screen. With this option, it is easy to
know what is being grabbed if only a portion of the screen is grabbed.
Note that @var{show_region} is incompatible with grabbing the contents
of a single window.
For example:
@example
ffmpeg -f gdigrab -show_region 1 -framerate 6 -video_size cif -offset_x 10 -offset_y 20 -i desktop out.mpg
@end example
@item video_size
Set the video frame size. The default is to capture the full screen if @file{desktop} is selected, or the full window size if @file{title=@var{window_title}} is selected.
@item offset_x
When capturing a region with @var{video_size}, set the distance from the left edge of the screen or desktop.
Note that the offset calculation is from the top left corner of the primary monitor on Windows. If you have a monitor positioned to the left of your primary monitor, you will need to use a negative @var{offset_x} value to move the region to that monitor.
@item offset_y
When capturing a region with @var{video_size}, set the distance from the top edge of the screen or desktop.
Note that the offset calculation is from the top left corner of the primary monitor on Windows. If you have a monitor positioned above your primary monitor, you will need to use a negative @var{offset_y} value to move the region to that monitor.
@end table
@section iec61883
@@ -651,7 +220,7 @@ not work and result in undefined behavior.
The values @option{auto}, @option{dv} and @option{hdv} are supported.
@item dvbuffer
Set maximum size of buffer for incoming data, in frames. For DV, this
is an exact value. For HDV, it is not frame exact, since HDV does
not have a fixed frame size.
@@ -732,15 +301,6 @@ $ jack_connect metro:120_bpm ffmpeg:input_1
For more information read:
@url{http://jackaudio.org/}
@subsection Options
@table @option
@item channels
Set the number of channels. Default is 2.
@end table
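For example, to create a JACK writable client named @samp{ffmpeg} with these options and record it, one could run:
@example
ffmpeg -f jack -channels 2 -i ffmpeg /tmp/jack.wav
@end example
and then, from another terminal, route some source into it (reusing the port names shown earlier, which are only illustrative):
@example
jack_connect metro:120_bpm ffmpeg:input_1
@end example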
@section lavfi
Libavfilter input virtual device.
@@ -765,14 +325,6 @@ generated by the device.
The first unlabelled output is automatically assigned to the "out0"
label, but all the others need to be specified explicitly.
The suffix "+subcc" can be appended to the output label to create an extra
stream with the closed captions packets attached to that output
(experimental; only for EIA-608 / CEA-708 for now).
The subcc streams are created after all the normal streams, in the order of
the corresponding stream.
For example, if there is "out19+subcc", "out7+subcc" and up to "out42", the
stream #43 is subcc for stream #7 and stream #44 is subcc for stream #19.
If not specified, it defaults to the filename specified for the input
device.
@@ -781,9 +333,6 @@ Set the filename of the filtergraph to be read and sent to the other
filters. Syntax of the filtergraph is the same as the one specified by
the option @var{graph}.
@item dumpgraph
Dump graph to stderr.
@end table
@subsection Examples
@@ -822,63 +371,12 @@ Read an audio stream and a video stream and play it back with
ffplay -f lavfi "movie=test.avi[out0];amovie=test.wav[out1]"
@end example
@item
Dump decoded frames to images and closed captions to a file (experimental):
@example
ffmpeg -f lavfi -i "movie=test.ts[out0+subcc]" -map v frame%08d.png -map s -c copy -f rawvideo subcc.bin
@end example
@end itemize
@section libcdio
Audio-CD input device based on libcdio.
To enable this input device during configuration you need libcdio
installed on your system. It requires the configure option
@code{--enable-libcdio}.
This device allows playing and grabbing from an Audio-CD.
For example to copy with @command{ffmpeg} the entire Audio-CD in @file{/dev/sr0},
you may run the command:
@example
ffmpeg -f libcdio -i /dev/sr0 cd.wav
@end example
@subsection Options
@table @option
@item speed
Set drive reading speed. Default value is 0.
The speed is specified in CD-ROM speed units. The speed is set through
the libcdio @code{cdio_cddap_speed_set} function. On many CD-ROM
drives, specifying a value too large will result in using the fastest
speed.
@item paranoia_mode
Set paranoia recovery mode flags. It accepts one of the following values:
@table @samp
@item disable
@item verify
@item overlap
@item neverskip
@item full
@end table
Default value is @samp{disable}.
For more information about the available recovery modes, consult the
paranoia project documentation.
@end table
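For instance, combining the options above with the drive from the earlier example (a sketch only; the options before @option{-i} apply to the input device):
@example
ffmpeg -f libcdio -speed 4 -paranoia_mode verify -i /dev/sr0 cd.wav
@end example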
@section libdc1394
IIDC1394 input device, based on libdc1394 and libraw1394.
Requires the configure option @code{--enable-libdc1394}.
@section openal
The OpenAL input device provides audio capture on all systems with a
@@ -911,7 +409,7 @@ OpenAL is part of Core Audio, the official Mac OS X Audio interface.
See @url{http://developer.apple.com/technologies/mac/audio-and-video.html}
@end table
This device allows one to capture from an audio input device handled
through OpenAL.
You need to specify the name of the device to capture in the provided
@@ -985,19 +483,6 @@ ffmpeg -f oss -i /dev/dsp /tmp/oss.wav
For more information about OSS see:
@url{http://manuals.opensound.com/usersguide/dsp.html}
@subsection Options
@table @option
@item sample_rate
Set the sample rate in Hz. Default is 48000.
@item channels
Set the number of channels. Default is 2.
@end table
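For example, using these options with the device from the example above:
@example
ffmpeg -f oss -sample_rate 44100 -channels 1 -i /dev/dsp oss.wav
@end example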
@section pulse
PulseAudio input device.
@@ -1038,10 +523,6 @@ Specify the number of bytes per frame, by default it is set to 1024.
@item fragment_size
Specify the minimal buffering fragment in PulseAudio; it will affect the
audio latency. By default it is unset.
@item wallclock
Set the initial PTS using the current time. Default is 1.
@end table
@subsection Examples
@@ -1050,49 +531,6 @@ Record a stream from default device:
ffmpeg -f pulse -i default /tmp/pulse.wav
@end example
@section qtkit
QTKit input device.
The filename passed as input is parsed to contain either a device name or index.
The device index can also be given by using -video_device_index.
A given device index will override any given device name.
If the desired device consists of numbers only, use -video_device_index to identify it.
The default device will be chosen if an empty string or the device name "default" is given.
The available devices can be enumerated by using -list_devices.
@example
ffmpeg -f qtkit -i "0" out.mpg
@end example
@example
ffmpeg -f qtkit -video_device_index 0 -i "" out.mpg
@end example
@example
ffmpeg -f qtkit -i "default" out.mpg
@end example
@example
ffmpeg -f qtkit -list_devices true -i ""
@end example
@subsection Options
@table @option
@item frame_rate
Set frame rate. Default is 30.
@item list_devices
If set to @code{true}, print a list of devices and exit. Default is
@code{false}.
@item video_device_index
Select the video device by index for devices with the same name (starts at 0).
@end table
@section sndio
sndio input device.
@@ -1110,18 +548,6 @@ command:
ffmpeg -f sndio -i /dev/audio0 /tmp/oss.wav
@end example
@subsection Options
@table @option
@item sample_rate
Set the sample rate in Hz. Default is 48000.
@item channels
Set the number of channels. Default is 2.
@end table
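For example, combining these options with the device shown above:
@example
ffmpeg -f sndio -sample_rate 44100 -channels 2 -i /dev/audio0 out.wav
@end example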
@section video4linux2, v4l2
Video4Linux2 input video device.
@@ -1154,12 +580,6 @@ conversion into the real time clock.
Some usage examples of the video4linux2 device with @command{ffmpeg}
and @command{ffplay}:
@itemize
@item
List supported formats for a video4linux2 device:
@example
ffplay -f video4linux2 -list_formats all /dev/video0
@end example
@item
Grab and show the input of a video4linux2 device:
@example
@@ -1197,7 +617,7 @@ Select the pixel format (only valid for raw video input).
@item input_format
Set the preferred pixel format (for raw video) or a codec name.
This option allows one to select the input format, when several are
available.
@item framerate
@@ -1244,10 +664,6 @@ Force conversion from monotonic to absolute timestamps.
@end table
Default value is @code{default}.
@item use_libv4l2
Use libv4l2 (v4l-utils) conversion functions. Default is 0.
@end table
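As an illustrative sketch combining several of the options above (the device must actually support the requested format, size and rate; @file{/dev/video0} and @samp{mjpeg} are assumptions):
@example
ffmpeg -f video4linux2 -input_format mjpeg -framerate 30 -video_size hd720 -i /dev/video0 out.mkv
@end example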
@section vfwcap
@@ -1258,31 +674,11 @@ The filename passed as input is the capture driver number, ranging from
0 to 9. You may use "list" as filename to print a list of drivers. Any
other filename will be interpreted as device number 0.
@subsection Options
@table @option
@item video_size
Set the video frame size.
@item framerate
Set the grabbing frame rate. Default value is @code{ntsc},
corresponding to a frame rate of @code{30000/1001}.
@end table
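For example, capturing from driver number 0 with these options (frame rate and size values are only illustrative):
@example
ffmpeg -f vfwcap -framerate 25 -video_size cif -i 0 out.avi
@end example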
@section x11grab
X11 video input device.
To enable this input device during configuration you need libxcb
installed on your system. It will be automatically detected during
configuration.
Alternatively, the configure option @option{--enable-x11grab} exists
for legacy Xlib users.
This device allows one to capture a region of an X11 display.
The filename passed as input has the syntax:
@example
@@ -1298,12 +694,10 @@ omitted, and defaults to "localhost". The environment variable
area with respect to the top-left border of the X11 screen. They
default to 0.
Check the X11 documentation (e.g. @command{man X}) for more detailed
information.
Use the @command{xdpyinfo} program for getting basic information about
the properties of your X11 display (e.g. grep for "name" or
"dimensions").
For example, to grab from @file{:0.0} using @command{ffmpeg}:
@example
@@ -1352,10 +746,6 @@ If @var{show_region} is specified with @code{1}, then the grabbing
region will be indicated on screen. With this option, it is easy to
know what is being grabbed if only a portion of the screen is grabbed.
@item region_border
Set the region border thickness if @option{-show_region 1} is used.
Range is 1 to 128 and default is 3 (XCB-based x11grab only).
For example:
@example
ffmpeg -f x11grab -show_region 1 -framerate 25 -video_size cif -i :0.0+10,20 out.mpg
@@ -1368,18 +758,6 @@ ffmpeg -f x11grab -follow_mouse centered -show_region 1 -framerate 25 -video_siz
@item video_size
Set the video frame size. Default value is @code{vga}.
@item use_shm
Use the MIT-SHM extension for shared memory. Default value is @code{1}.
It may be necessary to disable it for remote displays (legacy x11grab
only).
@item grab_x
@item grab_y
Set the grabbing region coordinates. They are expressed as offset from
the top left corner of the X11 window and correspond to the
@var{x_offset} and @var{y_offset} parameters in the device name. The
default value for both options is 0.
@end table
@c man end INPUT DEVICES


@@ -1,6 +1,8 @@
FFmpeg's bug/feature request tracker manual
=================================================
NOTE: This is a draft.
Overview:
---------
@@ -20,9 +22,9 @@ a mail for every change to every issue.
(the above does all work already after light testing)
The subscription URL for the ffmpeg-trac list is:
https://lists.ffmpeg.org/mailman/listinfo/ffmpeg-trac
The URL of the web interface of the tracker is:
https://trac.ffmpeg.org
Type:
-----
@@ -40,16 +42,12 @@ feature request / enhancement
where the current implementation cannot be considered wrong.
license violation
Ticket to keep track of (L)GPL violations of ffmpeg by others.
sponsoring request
Developer requests for hardware, software, specifications, money,
refunds, etc.
task
A task/reminder such as setting up a FATE client, adding filters to
Trac, etc.
Priority:
---------
critical
@@ -68,8 +66,7 @@ important
don't exist in a past revision or another branch.
normal
Default setting. Use this if the bug does not match the other
priorities or if you are unsure of what priority to choose.
minor
Bugs about things like spelling errors, "mp2" instead of
@@ -166,23 +163,14 @@ Component:
avcodec
issues in libavcodec/*
avdevice
issues in libavdevice/*
avfilter
issues in libavfilter/*
avformat
issues in libavformat/*
avutil
issues in libavutil/*
build system
issues in or related to configure/Makefile
documentation
issues in or related to doc/*
ffmpeg
issues in or related to ffmpeg.c
@@ -196,23 +184,11 @@ ffprobe
ffserver
issues in or related to ffserver.c
postproc
issues in libpostproc/*
swresample
issues in libswresample/*
swscale
issues in libswscale/*
trac
issues related to our issue tracker
undetermined
default component; choose this if unsure
website
issues related to the website
wiki
issues related to the wiki
