The WebPAnimEncoder API is a combination of an encoder (WebPEncoder) and a
muxer (WebPMux). It performs several optimizations that make it more
efficient than the combination of WebPEncode() and the native ffmpeg muxer.
When the WebPAnimEncoder API is used:
- In the encoder layer: WebPAnimEncoderAdd() is used instead of
  WebPEncode().
- In the muxer layer: it works like a raw muxer.
When the WebPAnimEncoder API isn't available, the old code is used
unchanged:
- In the codec layer: WebPEncode() is used to encode each frame.
- In the muxer layer: the ffmpeg muxer is used.
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
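A minimal sketch (not the actual libwebp_anim wrapper code) of the call
pattern described above, using the public libwebp WebPAnimEncoder API;
timestamps and error handling are simplified:

    #include <webp/encode.h>
    #include <webp/mux.h>

    static int encode_anim(WebPPicture *frames, const int *timestamps_ms,
                           int nb_frames, WebPData *out)
    {
        WebPAnimEncoderOptions opts;
        WebPConfig config;
        WebPAnimEncoder *enc;
        int i, ok = 1;

        if (!WebPAnimEncoderOptionsInit(&opts) || !WebPConfigInit(&config))
            return -1;
        enc = WebPAnimEncoderNew(frames[0].width, frames[0].height, &opts);
        if (!enc)
            return -1;

        /* Encoder layer: WebPAnimEncoderAdd() replaces per-frame WebPEncode(). */
        for (i = 0; i < nb_frames && ok; i++)
            ok = WebPAnimEncoderAdd(enc, &frames[i], timestamps_ms[i], &config);

        /* A NULL frame marks the end of the stream; the timestamp should be the
         * end time of the last frame (simplified here). The muxer layer then
         * only has to write out the assembled bitstream. */
        ok = ok && WebPAnimEncoderAdd(enc, NULL, timestamps_ms[nb_frames - 1], NULL) &&
             WebPAnimEncoderAssemble(enc, out);

        WebPAnimEncoderDelete(enc);
        return ok ? 0 : -1;
    }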
This is useful for client programs that want to ask for nv12 surfaces
instead of the current default (uyvy), since those are more efficient to
decode to.
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
* commit '34efb8a169e4551326e069be47125c6c2cb7ab90':
quickdraw: Support direct pixel blocks
Conflicts:
Changelog
libavcodec/version.h
Merged-by: Michael Niedermayer <michaelni@gmx.at>
* commit '31d2039cb42668ebcf08248bc48bbad44aa05f49':
h264_parser: export video format and dimensions
Conflicts:
libavcodec/h264_parser.c
libavcodec/version.h
Merged-by: Michael Niedermayer <michaelni@gmx.at>
* commit '728685f37ab333ca35980bd01766c78d197f784a':
Add a side data type for audio service type.
Conflicts:
doc/APIchanges
libavcodec/avcodec.h
libavcodec/version.h
libavutil/frame.h
libavutil/version.h
Merged-by: Michael Niedermayer <michaelni@gmx.at>
Compared to existing, common open source H.264 encoders, this can be
useful since it has a different license (BSD instead of GPL). In terms of
performance and quality it is comparable to x264 in ultrafast mode.
Hooking it up as an encoder in libavcodec also simplifies comparing
it against other common encoders.
This requires OpenH264 1.3 or newer. Since the OpenH264 API and ABI
changes frequently, only releases are supported.
To take advantage of the OpenH264 patent offer, the OpenH264 library
must not be redistributed, but rather downloaded at runtime on the
end-user's system.
Signed-off-by: Martin Storsjö <martin@martin.st>
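As a rough illustration (not part of this commit), the wrapper can be
opened through the generic libavcodec API like any other encoder;
"libopenh264" is the encoder name assumed here:

    #include <libavcodec/avcodec.h>

    static AVCodecContext *open_openh264(int width, int height)
    {
        AVCodec *codec = avcodec_find_encoder_by_name("libopenh264");
        AVCodecContext *ctx;

        if (!codec)
            return NULL;              /* built without libopenh264 support */
        ctx = avcodec_alloc_context3(codec);
        if (!ctx)
            return NULL;
        ctx->width     = width;
        ctx->height    = height;
        ctx->pix_fmt   = AV_PIX_FMT_YUV420P;
        ctx->time_base = (AVRational){ 1, 25 };
        if (avcodec_open2(ctx, codec, NULL) < 0) {
            avcodec_free_context(&ctx);
            return NULL;
        }
        return ctx;
    }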
* commit 'c220a60f92dde9c7c118fc4deddff5c1f617cda9':
vdpau: add helper for surface chroma type and size
Conflicts:
libavcodec/vdpau.c
libavcodec/version.h
Merged-by: Michael Niedermayer <michaelni@gmx.at>
Since the VDPAU pixel format does not distinguish between different
VDPAU video surface chroma types, we need another way to pass this
data to the application.
Originally VDPAU in libavcodec only supported decoding to 8-bit YUV
with 4:2:0 chroma sampling. Correspondingly, applications assumed that
libavcodec expected VDP_CHROMA_TYPE_420 video surfaces for output.
However some of the new HEVC profiles proposed for addition to VDPAU
would require different depth and/or sampling:
http://lists.freedesktop.org/archives/vdpau/2014-July/000167.html
...as would lossless AVC profiles:
http://lists.freedesktop.org/archives/vdpau/2014-November/000241.html
To preserve backward binary compatibility with existing applications,
a new av_vdpau_bind_context() flag is introduced in a further change.
Signed-off-by: Rémi Denis-Courmont <remi@remlab.net>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
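A hedged sketch of the intended application-side usage; it assumes the
helper added in the merged commit is av_vdpau_get_surface_parameters()
as declared in libavcodec/vdpau.h:

    #include <libavcodec/avcodec.h>
    #include <libavcodec/vdpau.h>

    static int create_output_surface(AVCodecContext *avctx, VdpDevice device,
                                     VdpVideoSurfaceCreate *surface_create,
                                     VdpVideoSurface *surface)
    {
        VdpChromaType chroma;
        uint32_t width, height;

        /* Ask libavcodec which chroma type and surface size the codec needs
         * instead of hardcoding VDP_CHROMA_TYPE_420. */
        if (av_vdpau_get_surface_parameters(avctx, &chroma, &width, &height) < 0)
            return -1;

        if (surface_create(device, chroma, width, height, surface) != VDP_STATUS_OK)
            return -1;
        return 0;
    }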
This can be used by the application to signal its ability to cope with
video surfaces of types other than 8-bit YUV 4:2:0.
Signed-off-by: Rémi Denis-Courmont <remi@remlab.net>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
This carries the pixel format that would be used if it were not for
hardware acceleration. This is equal to AVCodecContext.pix_fmt if
hardware acceleration is not in use.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
* commit '5e80fb7ff226f136dbcf3fed00a2966bf8e9bd70':
lavc: add a public API for parsing vorbis packets.
Conflicts:
doc/APIchanges
libavcodec/Makefile
libavcodec/version.h
libavcodec/vorbis_parser.c
Merged-by: Michael Niedermayer <michaelni@gmx.at>
Previously, quality could only be set through qscale/global_quality, but the
scale was inverted. Using a separate option avoids the confusion caused by
qscale working backwards.
Reviewed-by: Benoit Fouet <benoit.fouet@free.fr>
Reviewed-by: Clément Bœsch <u@pkh.me>
Reviewed-by: Nicolas George <george@nsup.org>
Reviewed-by: James Almer <jamrial@gmail.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
* commit 'a6e4ce9fd50897dc6d9c2ada4b6b8090de7de5bf':
lavc: make rc_qsquish a private option of mpegvideo encoders
Conflicts:
libavcodec/avcodec.h
Merged-by: Michael Niedermayer <michaelni@gmx.at>
* commit '66e9f839536238945fbfe9d2041b6891cb150e45':
libfdk-aacdec: Enable Dynamic Range Control Metadata Support
Conflicts:
libavcodec/version.h
Merged-by: Michael Niedermayer <michaelni@gmx.at>
For streams that contain DRC metadata, the FDK decoder is able to
control rendering of the decoded output. The rendering parameters
are detailed in fdk_aac_dec_options[].
The default behavior is left up to the decoder.
Signed-off-by: Martin Storsjö <martin@martin.st>
The FDK decoder is capable of producing a mono or stereo downmix from
multichannel streams. These streams may contain metadata that controls
the downmix process. The decoder requires an ancillary buffer in order to
correctly apply the downmix in streams containing downmix metadata. The
decoder has no API to report the presence of such metadata in the stream,
and therefore the ancillary buffer is always allocated whenever a downmix
is requested.
When downmixing multichannel streams, the decoder requires the output
buffer passed to the aacDecoder_DecodeFrame() call to be large enough to
hold the actual number of channels contained in the stream. For example,
for a 5.1 to stereo downmix, the output buffer must be allocated for
6 channels, even though the output is in fact only two channels.
Due to this requirement, the output buffer is allocated at the maximum
size whenever a downmix is requested (and also during decoder init).
When a downmix is requested, the buffer used for output during init is
also used for the entire time the decoder is open. Otherwise, the initial
decoder output buffer is freed and the decoder decodes straight into the
output AVFrame.
Signed-off-by: Martin Storsjö <martin@martin.st>
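A minimal sketch of the buffer-sizing rule described above; the constants
are illustrative and this is not the actual libfdk-aacdec wrapper code:

    #include <stdint.h>
    #include <stdlib.h>

    #define ASSUMED_MAX_CHANNELS 8    /* worst-case channel count          */
    #define ASSUMED_FRAME_SIZE   2048 /* max samples per channel and frame */

    static int16_t *alloc_max_decode_buffer(void)
    {
        /* Even when a stereo downmix is requested, aacDecoder_DecodeFrame()
         * still needs room for the stream's native channel count, so the
         * buffer is sized for the maximum rather than for two channels. */
        size_t nb_samples = (size_t)ASSUMED_MAX_CHANNELS * ASSUMED_FRAME_SIZE;
        return malloc(nb_samples * sizeof(int16_t));
    }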
* commit '7ea1b3472a61de4aa4d41b571e99418e4997ad41':
lavc: deprecate the use of AVCodecContext.time_base for decoding
Conflicts:
libavcodec/avcodec.h
libavcodec/h264.c
libavcodec/mpegvideo_parser.c
libavcodec/utils.c
libavcodec/version.h
Merged-by: Michael Niedermayer <michaelni@gmx.at>
When decoding, this field holds the inverse of the framerate that can be
written in the headers for some codecs. Using a field called 'time_base'
for this is very misleading, as there are no timestamps associated with
it. Furthermore, this field is used for a very different purpose during
encoding.
Add a new field, called 'framerate', to replace the use of time_base for
decoding.
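A hedged sketch of using the new field instead of the deprecated decoding
meaning of time_base:

    #include <libavcodec/avcodec.h>
    #include <libavutil/rational.h>

    static double frame_duration_sec(const AVCodecContext *avctx)
    {
        /* framerate is set by the decoder when the bitstream signals one;
         * return 0 when it is unknown instead of guessing from time_base. */
        if (avctx->framerate.num && avctx->framerate.den)
            return av_q2d(av_inv_q(avctx->framerate));
        return 0.0;
    }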
Decoding acceleration may work even if the codec level is higher than
the stated limit of the VDPAU driver, or the problem may be considered
acceptable by the user. This flag allows skipping the codec level
capability checks and proceeding with decoding.
Applications should obviously not set this flag by default, but only if
the user explicitly requested this behavior (and presumably knows how
to turn it back off if it fails).
Signed-off-by: Anton Khirnov <anton@khirnov.net>
* commit '2df0c32ea12ddfa72ba88309812bfb13b674130f':
lavc: use a separate field for exporting audio encoder padding
Conflicts:
libavcodec/audio_frame_queue.c
libavcodec/avcodec.h
libavcodec/libvorbisenc.c
libavcodec/utils.c
libavcodec/version.h
libavcodec/wmaenc.c
Merged-by: Michael Niedermayer <michaelni@gmx.at>
Currently, the amount of padding inserted at the beginning by some audio
encoders is exported through AVCodecContext.delay. However:
- the term 'delay' is heavily overloaded and can have multiple different
meanings even in the case of audio encoding.
- this field has entirely different meanings, depending on whether the
codec context is used for encoding or decoding (and has yet another
different meaning for video), preventing generic handling of the codec
context.
Therefore, add a new field -- AVCodecContext.initial_padding. It could
conceivably be used for decoding as well at a later point.
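A hedged sketch of how an application might use the new field when
encoding; it assumes, for this example only, that timestamps are expressed
in samples (a 1/sample_rate time base):

    #include <stdint.h>
    #include <libavcodec/avcodec.h>

    static int64_t output_start_pts(const AVCodecContext *enc, int64_t input_pts)
    {
        /* initial_padding priming samples are inserted before the real audio,
         * so shift the stream start back by that amount (or signal it via an
         * edit list / skip_samples side data in the container). */
        return input_pts - enc->initial_padding;
    }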
The register function now specifies that the user callback should
leave things in the same state it found them in on failure, but
that a failure to destroy is ignored by the library. The register
function is now explicit about its behavior on failure
(it unregisters the previous callback and destroys all mutexes).
Signed-off-by: Manfred Georg <mgeorg@google.com>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
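A hedged example of a user callback that follows the documented contract;
this is application-side, pthread-based code, not code from this commit:

    #include <pthread.h>
    #include <stdlib.h>
    #include <libavcodec/avcodec.h>

    static int lockmgr_cb(void **mutex, enum AVLockOp op)
    {
        pthread_mutex_t *m;

        switch (op) {
        case AV_LOCK_CREATE:
            m = malloc(sizeof(*m));
            if (!m || pthread_mutex_init(m, NULL)) {
                free(m);
                return 1;       /* *mutex is left untouched on failure */
            }
            *mutex = m;
            return 0;
        case AV_LOCK_OBTAIN:
            return !!pthread_mutex_lock(*mutex);
        case AV_LOCK_RELEASE:
            return !!pthread_mutex_unlock(*mutex);
        case AV_LOCK_DESTROY:
            if (*mutex) {
                pthread_mutex_destroy(*mutex);
                free(*mutex);
            }
            *mutex = NULL;
            return 0;           /* destroy failures are ignored by the library */
        }
        return 1;
    }

    /* Registered once at startup: av_lockmgr_register(lockmgr_cb); */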
This function provides an explicit VDPAU device and VDPAU driver to
libavcodec, so that the application is relieved from codec specifics
and VdpDevice life cycle management.
A stub flags parameter is added for future extension. For instance, it
could be used to ignore codec level capabilities (if someone is feeling
adventurous).
Signed-off-by: Anton Khirnov <anton@khirnov.net>
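A hedged sketch of the intended application-side call, assuming the
declaration in libavcodec/vdpau.h (device handle, its VdpGetProcAddress,
and a flags word):

    #include <libavcodec/avcodec.h>
    #include <libavcodec/vdpau.h>

    static int attach_vdpau(AVCodecContext *avctx, VdpDevice device,
                            VdpGetProcAddress *get_proc_address)
    {
        /* 0 for the stub flags parameter; future flags would be OR'ed in here. */
        return av_vdpau_bind_context(avctx, device, get_proc_address, 0);
    }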
Add CODEC_FLAG2_SKIP_MANUAL (exposed as "skip_manual"), which makes
the decoder export sample skip information via side data, instead
of applying it automatically. The format of the side data is the
same as AV_PKT_DATA_SKIP_SAMPLES, but since AVPacket and AVFrame
side data constants overlap, AV_FRAME_DATA_SKIP_SAMPLES needs to
be introduced.
This is useful for applications which want to do the timestamp
calculations manually, or which actually want to retrieve the
padding.
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
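A hedged sketch of reading the new side data from a decoded frame,
assuming the same u32le start/end sample-count layout as
AV_PKT_DATA_SKIP_SAMPLES:

    #include <stdint.h>
    #include <libavutil/frame.h>
    #include <libavutil/intreadwrite.h>

    static void get_skip_samples(const AVFrame *frame,
                                 uint32_t *skip_start, uint32_t *skip_end)
    {
        AVFrameSideData *sd =
            av_frame_get_side_data(frame, AV_FRAME_DATA_SKIP_SAMPLES);

        *skip_start = *skip_end = 0;
        if (sd && sd->size >= 8) {
            *skip_start = AV_RL32(sd->data);     /* samples to drop at the start */
            *skip_end   = AV_RL32(sd->data + 4); /* samples to drop at the end   */
        }
    }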