add yuv444p, yuv422p, and yuv420p output formats (lower cpu usage on
ffplay playback because ffplay does not need to do format conversion)
custom size with the size/s option (the fullhd option is deprecated)
custom layout with the bar_h, axis_h, and sono_h options
support rational frame rates (with the fps/r/rate option)
relaxed frame rate restriction (supports fractional sample steps)
support all input sample rates
separate sonogram and bargraph volumes (with volume/sono_v and
volume2/bar_v)
timeclamp option alias (timeclamp/tc)
fcount option
gamma option aliases (gamma/sono_g and gamma2/bar_g)
support custom frequency ranges (basefreq and endfreq)
support drawing the axis using an external image file (axisfile option)
alias for disabling axis drawing (text/axis)
possibility to optimize it using arch-specific asm code
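For illustration, a minimal libavfilter sketch that instantiates the
filter with a few of the option names listed above (the values are
arbitrary and only meant as an example, not a recommended setting):

    #include <libavfilter/avfilter.h>

    int main(void)
    {
        avfilter_register_all();              /* needed on older libavfilter */
        AVFilterGraph   *graph = avfilter_graph_alloc();
        AVFilterContext *cqt   = NULL;
        if (!graph)
            return 1;
        /* the option string only uses names introduced or aliased above */
        int ret = avfilter_graph_create_filter(&cqt,
                avfilter_get_by_name("showcqt"), "cqt",
                "size=1280x720:rate=30000/1001:bar_v=10:sono_v=14:"
                "basefreq=20:endfreq=20000:tc=0.24",
                NULL, graph);
        avfilter_graph_free(&graph);
        return ret < 0;
    }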
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The commit 932ff70, which introduced this header, mentions that it should be public.
Reviewed-by: Ronald S. Bultje <rsbultje@gmail.com>
Signed-off-by: Andreas Cadhalpun <Andreas.Cadhalpun@googlemail.com>
For protocols other than local files, ff_rename() is not implemented.
For split plane support, the implementation is simply wrong.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
* commit 'fe66671bd5f446f8d0a9c70968ba8fe891efe028':
cmdutils: Check for and report the correct codec capability
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
* commit '447b5b278c689b21bbb7b5747c8773145cbd9448':
mpegvideo_enc: Fix encoding videos with less frames than the delay of the encoder
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
* commit '18f9308e6a96bbeb034ee5213a6d41e0b6c2ae74':
mpjpeg: Cope with multipart lacking the initial CRLF
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
* commit '22f4d9c303ede1a240538fd105c97047db40dc86':
img2enc: Make sure the images are atomically written
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
* commit '5ea5a24eb70646a9061b85af407fcbb5dd4f89fd':
movenc: Honor flush requests with delay_moov, when some tracks lack samples
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
* commit 'e02dcdf6bb6835ef4b49986b85a67efcb3495a7f':
rtsp: Allow $ as interleaved packet indicator before a complete response header
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
* commit 'dca23ffbc7568c9af5c5fbaa86e6a0761ecae50c':
lavc: Deprecate AVPicture structure and related functions
The deprecation flag on the AVPicture struct is replaced by a comment, as it
causes excess deprecation warnings for every include of avcodec.h
Merged-by: Hendrik Leppkes <h.leppkes@gmail.com>
HWAccels with frame threads are fundamentally flawed in avcodec's current
design, and there are several known problems ranging from image corruption
to driver crashes.
These problems come down to two design issues in the interaction of
threads and HWAccel decoding:
(1)
While avcodec prevents parallel decoding, and thus simultaneous access
to the hardware accelerator from the decoding threads, it cannot account
for the user code and its access to the hardware surfaces or the hardware
itself.
This can result in image corruption or even driver crashes if the
user code locks image surfaces while they are being used by the decoder
threads as reference frames.
The current HWAccel API does not offer any way to ensure exclusive access
to the hardware or the surfaces if frame threading is used.
(2)
Initialization of the HWAccel with frame threads is non-trivial, and many
decoders had and still have issues that cause excess calls to the
get_format callback.
This will potentially cause duplicate HWAccel initialization, which in
extreme cases can even lead to driver crashes if the HWAccel is
re-initialized while the user code is actively accessing the hardware
surfaces associated with it, or lead to image corruption due to lost
reference frames.
While both of these issues are solvable, fixing (1) would at least require
a huge API redesign which would move a lot of complexity into the user
code.
The only reason the combination of frame threads and HWAccel was
considered useful is that it allows a seamless fallback to multi-threaded
software decoding if the HWAccel is not available; however, the issues
outlined above far outweigh this benefit.
The proper solution for a fallback is to re-open the AVCodecContext with
threading enabled if the HWAccel failed, a practice already commonly used
by various user applications built on avcodec today.
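As an illustration, a rough sketch of that fallback pattern follows;
open_decoder(), open_with_fallback(), and pick_hw_format() are
hypothetical helper names, not part of the avcodec API, and real code
would also have to copy the stream parameters (extradata etc.) into each
freshly allocated context:

    #include <libavcodec/avcodec.h>

    /* user-supplied get_format callback selecting the hardware pixel format */
    enum AVPixelFormat pick_hw_format(struct AVCodecContext *s,
                                      const enum AVPixelFormat *fmts);

    static AVCodecContext *open_decoder(const AVCodec *codec, int use_hwaccel)
    {
        AVCodecContext *avctx = avcodec_alloc_context3(codec);
        if (!avctx)
            return NULL;
        if (use_hwaccel) {
            avctx->thread_count = 1;              /* no frame threads with HWAccel */
            avctx->get_format   = pick_hw_format;
        } else {
            avctx->thread_count = 0;              /* auto-detect the thread count */
            avctx->thread_type  = FF_THREAD_FRAME;
        }
        if (avcodec_open2(avctx, codec, NULL) < 0) {
            avcodec_free_context(&avctx);         /* discard the failed context */
            return NULL;
        }
        return avctx;
    }

    /* try HWAccel first, then re-open with frame-threaded software decoding */
    static AVCodecContext *open_with_fallback(const AVCodec *codec)
    {
        AVCodecContext *dec = open_decoder(codec, 1);
        return dec ? dec : open_decoder(codec, 0);
    }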
Reviewed-by: Gwenole Beauchesne <gb.devel@gmail.com>
Reviewed-by: wm4 <nfxjfg@googlemail.com>
Signed-off-by: Hendrik Leppkes <h.leppkes@gmail.com>
The Apple developer specification:
https://developer.apple.com/library/mac/documentation/QuickTime/QTFF/Metadata/Metadata.html
Basically, the structure is:
|--meta
|----hdlr
|----keys
|----ilst
1) The handler type in the metadata handler atom is 'mdta'.
2) The key and value are stored separately for each key-value pair.
The 'keys' atom stores the key table, while the 'ilst' atom stores the
values corresponding to the indices in the key table.
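For illustration only, a rough sketch of walking the 'keys' atom payload
as laid out in the linked specification (an entry count followed by
size/namespace/name entries); read_u32() and dump_keys() are hypothetical
helpers, not part of the mov demuxer, and bounds checking is simplified:

    #include <stdint.h>
    #include <stdio.h>

    /* hypothetical big-endian 32-bit reader */
    static uint32_t read_u32(const uint8_t *p)
    {
        return (uint32_t)p[0] << 24 | p[1] << 16 | p[2] << 8 | p[3];
    }

    /* buf points at the 'keys' atom payload: version/flags, entry count,
     * then one entry per key (size, namespace such as 'mdta', key name) */
    static void dump_keys(const uint8_t *buf, size_t len)
    {
        if (len < 8)
            return;
        uint32_t count = read_u32(buf + 4);
        const uint8_t *p = buf + 8;
        for (uint32_t i = 1; i <= count && p + 8 <= buf + len; i++) {
            uint32_t key_size = read_u32(p);   /* includes this 8-byte header */
            if (key_size < 8 || key_size > (size_t)(buf + len - p))
                break;
            printf("key %u: %.*s\n", i, (int)(key_size - 8), p + 8);
            /* items in the 'ilst' atom reference this key by its
             * 1-based index i */
            p += key_size;
        }
    }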
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>