* commit '728685f37ab333ca35980bd01766c78d197f784a':
Add a side data type for audio service type.
Conflicts:
doc/APIchanges
libavcodec/avcodec.h
libavcodec/version.h
libavutil/frame.h
libavutil/version.h
Merged-by: Michael Niedermayer <michaelni@gmx.at>
also add deprecation note for avcodec_get_pix_fmt_loss(), avcodec_find_best_pix_fmt_of_2()
Found-by: wm4
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
Move the lavc/imgconvert functions and rename them as follows:
avpicture_get_size -> av_image_get_buffer_size()
avpicture_fill -> av_image_fill_arrays()
avpicture_layout -> av_image_copy_to_buffer()
The new functions have an align parameter, which allows defining the
linesize alignment assumed in the buffer (which is set or read).
The names of the functions are consistent with the lavu/samples API
(av_samples_get_buffer_size(), av_samples_fill_arrays()).
A redundant check has been dropped from av_image_fill_arrays().
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
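A minimal sketch of how the three renamed functions might be used together
(alignment of 1 chosen arbitrarily; error handling trimmed):

    #include <libavutil/error.h>
    #include <libavutil/imgutils.h>
    #include <libavutil/mem.h>

    /* Sketch: compute the size of a packed image buffer, wrap it with
     * data[]/linesize[] pointers, then copy the picture back out into
     * another flat buffer. align = 1 means "no extra padding". */
    static int image_buffer_roundtrip(enum AVPixelFormat pix_fmt, int w, int h)
    {
        uint8_t *data[4];
        int linesize[4];
        int size = av_image_get_buffer_size(pix_fmt, w, h, 1);
        if (size < 0)
            return size;

        uint8_t *buf    = av_malloc(size);
        uint8_t *packed = av_malloc(size);
        if (!buf || !packed) {
            av_free(buf);
            av_free(packed);
            return AVERROR(ENOMEM);
        }

        /* Point data[]/linesize[] into buf, using the same alignment. */
        av_image_fill_arrays(data, linesize, buf, pix_fmt, w, h, 1);

        /* ... decode or draw into data[]/linesize[] here ... */

        /* Copy the picture into a tightly packed buffer
         * (roughly what avpicture_layout() used to do). */
        av_image_copy_to_buffer(packed, size, (const uint8_t * const *)data,
                                linesize, pix_fmt, w, h, 1);

        av_free(buf);
        av_free(packed);
        return 0;
    }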
* commit 'c220a60f92dde9c7c118fc4deddff5c1f617cda9':
vdpau: add helper for surface chroma type and size
Conflicts:
libavcodec/vdpau.c
libavcodec/version.h
Merged-by: Michael Niedermayer <michaelni@gmx.at>
Since the VDPAU pixel format does not distinguish between different
VDPAU video surface chroma types, we need another way to pass this
data to the application.
Originally, VDPAU in libavcodec only supported decoding to 8-bit YUV
with 4:2:0 chroma sampling. Correspondingly, applications assumed that
libavcodec expected VDP_CHROMA_TYPE_420 video surfaces for output.
However some of the new HEVC profiles proposed for addition to VDPAU
would require different depth and/or sampling:
http://lists.freedesktop.org/archives/vdpau/2014-July/000167.html
...as would lossless AVC profiles:
http://lists.freedesktop.org/archives/vdpau/2014-November/000241.html
To preserve backward binary compatibility with existing applications,
a new av_vdpau_bind_context() flag is introduced in a further change.
Signed-off-by: Rémi Denis-Courmont <remi@remlab.net>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
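Assuming the helper referred to is av_vdpau_get_surface_parameters() (the
function name is not stated in the message above), an application could query
it roughly like this when allocating a matching VdpVideoSurface:

    #include <vdpau/vdpau.h>
    #include <libavcodec/avcodec.h>
    #include <libavcodec/vdpau.h>

    /* Sketch only: ask libavcodec which chroma type and surface size it
     * expects, then create a matching VDPAU video surface. The VdpDevice
     * setup and proc-address lookup are not shown. */
    static int create_matching_surface(AVCodecContext *avctx, VdpDevice device,
                                       VdpVideoSurfaceCreate *surface_create,
                                       VdpVideoSurface *surface)
    {
        VdpChromaType chroma;
        uint32_t width, height;
        int ret = av_vdpau_get_surface_parameters(avctx, &chroma, &width, &height);
        if (ret < 0)
            return ret;
        if (surface_create(device, chroma, width, height, surface) != VDP_STATUS_OK)
            return AVERROR_EXTERNAL;
        return 0;
    }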
This can be used by the application to signal its ability to cope with
video surfaces of types other than 8-bit YUV 4:2:0.
Signed-off-by: Rémi Denis-Courmont <remi@remlab.net>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
This carries the pixel format that would be used if it were not for
hardware acceleration. This is equal to AVCodecContext.pix_fmt if
hardware acceleration is not in use.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
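Assuming the field described is AVCodecContext.sw_pix_fmt (the message does
not name it), a get_format callback might consult it roughly like this:

    #include <libavcodec/avcodec.h>

    /* Sketch: accept the VDPAU hwaccel format only when the underlying
     * (software) pixel format is one this application can handle;
     * otherwise fall back to the default selection. */
    static enum AVPixelFormat get_format(AVCodecContext *avctx,
                                         const enum AVPixelFormat *fmts)
    {
        const enum AVPixelFormat *p;
        for (p = fmts; *p != AV_PIX_FMT_NONE; p++) {
            if (*p == AV_PIX_FMT_VDPAU &&
                avctx->sw_pix_fmt == AV_PIX_FMT_YUV420P) /* the only layout we support */
                return AV_PIX_FMT_VDPAU;
        }
        return avcodec_default_get_format(avctx, fmts);
    }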
* commit '1eec9bfc383f6dca29d83a2bfb45433dd66561c9':
APIchanges: mark the release 11 branch point
Conflicts:
doc/APIchanges
Not merged as the contents of our APIchanges differ and it could be confusing
Merged-by: Michael Niedermayer <michaelni@gmx.at>
This function allows creating a string containing an object's serialized options.
Such a string may be passed back to av_set_options_string() in order to restore the options.
Signed-off-by: Lukasz Marek <lukasz.m.luki2@gmail.com>
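Assuming the function in question is av_opt_serialize(), a rough usage sketch
(the objects are any AVOptions-enabled structs; error handling trimmed):

    #include <libavutil/mem.h>
    #include <libavutil/opt.h>

    /* Sketch: serialize an object's non-default options to a string and
     * later apply that string to another instance of the same kind. */
    static int save_and_restore_options(void *src_obj, void *dst_obj)
    {
        char *opts = NULL;
        int ret = av_opt_serialize(src_obj, 0, AV_OPT_SERIALIZE_SKIP_DEFAULTS,
                                   &opts, '=', ':');
        if (ret < 0)
            return ret;

        /* The resulting "key=value:key=value" string round-trips. */
        ret = av_set_options_string(dst_obj, opts, "=", ":");
        av_free(opts);
        return ret;
    }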
This is the same logic as is invoked for AVFMT_TS_NEGATIVE,
but it can be enabled manually, or enabled
in muxers which only need it under certain conditions.
Also allow using the same mechanism to force streams to start
at 0.
Signed-off-by: Martin Storsjö <martin@martin.st>
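Assuming the manual switch referred to is AVFormatContext.avoid_negative_ts
with the AVFMT_AVOID_NEG_TS_* values (not named above), enabling it on an
output context might look like:

    #include <libavformat/avformat.h>

    /* Sketch: request timestamp shifting explicitly on a muxing context,
     * either to avoid negative timestamps or to force streams to start at 0. */
    static void request_ts_shift(AVFormatContext *oc, int start_at_zero)
    {
        oc->avoid_negative_ts = start_at_zero ? AVFMT_AVOID_NEG_TS_MAKE_ZERO
                                              : AVFMT_AVOID_NEG_TS_MAKE_NON_NEGATIVE;
    }

The same thing can presumably be requested through the avoid_negative_ts
muxer option rather than by touching the field directly.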
* commit '5e80fb7ff226f136dbcf3fed00a2966bf8e9bd70':
lavc: add a public API for parsing vorbis packets.
Conflicts:
doc/APIchanges
libavcodec/Makefile
libavcodec/version.h
libavcodec/vorbis_parser.c
Merged-by: Michael Niedermayer <michaelni@gmx.at>
Since av_gettime() is used in a number of places where the actual
real-time clock is required, the monotonic clock introduced in
ebef9f5a5 would have consequences that are hard to handle. Instead
split it into a separate function that can be used in the cases
where only relative time is desired.
On platforms where no monotonic clock is available, the difference
between the two av_gettime functions is not clear, and one could
mistakenly use the relative clock where an absolute one is
required. Therefore, add an offset to make it evident that the
time returned from av_gettime_relative() is never the actual
current real time, even though it is based on av_gettime().
Based on a patch by Olivier Langlois.
Signed-off-by: Martin Storsjö <martin@martin.st>
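A small sketch of the intended split: av_gettime() stays reserved for
wall-clock needs, while durations are measured with av_gettime_relative():

    #include <stdint.h>
    #include <libavutil/time.h>

    /* Sketch: time an operation using the relative (monotonic-when-available)
     * clock; the returned value is a duration in microseconds, never a
     * wall-clock timestamp. */
    static int64_t time_operation_us(void (*op)(void))
    {
        int64_t start = av_gettime_relative();
        op();
        return av_gettime_relative() - start;
    }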
* commit '7ea1b3472a61de4aa4d41b571e99418e4997ad41':
lavc: deprecate the use of AVCodecContext.time_base for decoding
Conflicts:
libavcodec/avcodec.h
libavcodec/h264.c
libavcodec/mpegvideo_parser.c
libavcodec/utils.c
libavcodec/version.h
Merged-by: Michael Niedermayer <michaelni@gmx.at>
When decoding, this field holds the inverse of the framerate that can be
written in the headers for some codecs. Using a field called 'time_base'
for this is very misleading, as there are no timestamps associated with
it. Furthermore, this field is used for a very different purpose during
encoding.
Add a new field, called 'framerate', to replace the use of time_base for
decoding.
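With this change, a decoding application that wants the coded frame rate would
read the new field instead of inverting time_base; a minimal sketch, assuming
the field is the AVRational AVCodecContext.framerate:

    #include <libavcodec/avcodec.h>
    #include <libavutil/rational.h>

    /* Sketch: report the frame rate exported by the decoder, if any. */
    static double decoder_framerate(const AVCodecContext *avctx)
    {
        if (avctx->framerate.num && avctx->framerate.den)
            return av_q2d(avctx->framerate);
        return 0.0; /* not signalled in the codec headers */
    }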
Decoding acceleration may work even if the codec level is higher than
the stated limit of the VDPAU driver. Or the problem may be considered
acceptable by the user. This flag allows skipping the codec level
capability checks and proceeding with decoding.
Applications should obviously not set this flag by default, but only if
the user explicitly requested this behavior (and presumably knows how
to turn it back off if it fails).
Signed-off-by: Anton Khirnov <anton@khirnov.net>
* commit '2df0c32ea12ddfa72ba88309812bfb13b674130f':
lavc: use a separate field for exporting audio encoder padding
Conflicts:
libavcodec/audio_frame_queue.c
libavcodec/avcodec.h
libavcodec/libvorbisenc.c
libavcodec/utils.c
libavcodec/version.h
libavcodec/wmaenc.c
Merged-by: Michael Niedermayer <michaelni@gmx.at>
Currently, the amount of padding inserted at the beginning by some audio
encoders is exported through AVCodecContext.delay. However:
- the term 'delay' is heavily overloaded and can have multiple different
meanings even in the case of audio encoding.
- this field has entirely different meanings, depending on whether the
codec context is used for encoding or decoding (and has yet another
different meaning for video), preventing generic handling of the codec
context.
Therefore, add a new field -- AVCodecContext.initial_padding. It could
conceivably be used for decoding as well at a later point.
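A sketch of how a caller might use the new field after opening an audio
encoder, for example to express the priming samples in the output stream's
time base (the conversion shown is an illustration, not part of this change):

    #include <libavcodec/avcodec.h>
    #include <libavutil/mathematics.h>

    /* Sketch: AVCodecContext.initial_padding counts samples, so
     * 1/sample_rate is its natural unit; rescale it into the stream
     * time base so it can be subtracted from output timestamps. */
    static int64_t priming_offset_in_tb(const AVCodecContext *enc,
                                        AVRational stream_tb)
    {
        return av_rescale_q(enc->initial_padding,
                            (AVRational){1, enc->sample_rate}, stream_tb);
    }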
This function provides an explicit VDPAU device and VDPAU driver to
libavcodec, so that the application is relieved from codec specifics
and VdpDevice life cycle management.
A stub flags parameter is added for future extension. For instance, it
could be used to ignore codec level capabilities (if someone feels
dangerous).
Signed-off-by: Anton Khirnov <anton@khirnov.net>
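A sketch of the intended call site, after the application has created its
VdpDevice and before avcodec_open2(); the use of AV_HWACCEL_FLAG_IGNORE_LEVEL
for the flags argument is an assumption based on the level-check flag
described above:

    #include <vdpau/vdpau.h>
    #include <libavcodec/avcodec.h>
    #include <libavcodec/vdpau.h>

    /* Sketch: hand the application's VDPAU device and proc-address getter
     * to libavcodec; the flags argument is where options such as ignoring
     * the driver's codec level limit would go. */
    static int bind_vdpau(AVCodecContext *avctx, VdpDevice device,
                          VdpGetProcAddress *get_proc_address, int ignore_level)
    {
        unsigned flags = ignore_level ? AV_HWACCEL_FLAG_IGNORE_LEVEL : 0;
        return av_vdpau_bind_context(avctx, device, get_proc_address, flags);
    }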
Add CODEC_FLAG2_SKIP_MANUAL (exposed as "skip_manual"), which makes
the decoder export sample skip information via side data, instead
of applying it automatically. The format of the side data is the
same as AV_PKT_DATA_SKIP_SAMPLES, but since AVPacket and AVFrame
side data constants overlap, AV_FRAME_DATA_SKIP_SAMPLES needs to
be introduced.
This is useful for applications which want to do the timestamp
calculations manually, or which actually want to retrieve the
padding.
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
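A sketch of reading the exported skip information once the decoder has been
opened with the skip_manual flag; the byte layout is assumed to mirror
AV_PKT_DATA_SKIP_SAMPLES (two little-endian 32-bit sample counts followed by
two reason bytes):

    #include <libavcodec/avcodec.h>
    #include <libavutil/frame.h>
    #include <libavutil/intreadwrite.h>

    /* Sketch: with flags2 "+skip_manual", skip information is attached to
     * decoded frames instead of being applied silently by the decoder. */
    static void report_skip_samples(const AVFrame *frame)
    {
        AVFrameSideData *sd =
            av_frame_get_side_data(frame, AV_FRAME_DATA_SKIP_SAMPLES);
        if (sd && sd->size >= 10) {
            uint32_t skip_start = AV_RL32(sd->data);     /* samples to drop at start */
            uint32_t skip_end   = AV_RL32(sd->data + 4); /* samples to drop at end */
            av_log(NULL, AV_LOG_INFO, "skip %u start / %u end samples\n",
                   skip_start, skip_end);
        }
    }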
* commit 'b263f8ffe7599d9cd27ec477a12700da8eb2790d':
lavf: add AVFormatContext.max_ts_probe
Conflicts:
doc/APIchanges
libavformat/avformat.h
libavformat/utils.c
libavformat/version.h
The lavf-fate/mp3 reference changes because the estimated input bitrate changes
and that value is copied to the output.
Merged-by: Michael Niedermayer <michaelni@gmx.at>
* commit '6ca11f7157d0ffd11ea9a4211b04981b46dc75d6':
doc/APIchanges: fill in missing hashes and dates
Conflicts:
doc/APIchanges
Merged-by: Michael Niedermayer <michaelni@gmx.at>
Unfortunately this was not explicitly documented and thus
might be risky.
But all uses I could find in FFmpeg and one in VLC had a memleak
in these cases, and I could not find any that relied on the previous
behaviour.
Signed-off-by: Reimar Döffinger <Reimar.Doeffinger@gmx.de>
The reasoning behind this addition is that various third party
applications are interested in getting some motion information out of a
video "for free" when it is available.
Exporting other information as well (such as the intra information
about the block, or the quantization) was considered, but the structure
might have ended up half fully generic, half full of codec-specific
cruft. If more information is necessary, it should either be added to
the "flags" field of the AVMotionVector structure, or exported as
separate side data.
This commit also includes an example exporting them in a CSV stream.
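A sketch of consuming the exported vectors on the application side, assuming
the decoder was opened with the motion-vector export flag enabled (e.g. a
flags2 option such as "+export_mvs"):

    #include <libavcodec/avcodec.h>
    #include <libavutil/frame.h>
    #include <libavutil/motion_vector.h>

    /* Sketch: print each motion vector attached to a decoded frame. */
    static void dump_motion_vectors(const AVFrame *frame)
    {
        AVFrameSideData *sd =
            av_frame_get_side_data(frame, AV_FRAME_DATA_MOTION_VECTORS);
        if (!sd)
            return;
        const AVMotionVector *mvs = (const AVMotionVector *)sd->data;
        size_t count = sd->size / sizeof(*mvs);
        for (size_t i = 0; i < count; i++)
            av_log(NULL, AV_LOG_INFO, "mv %zu: %dx%d block, (%d,%d) -> (%d,%d)\n",
                   i, mvs[i].w, mvs[i].h,
                   mvs[i].src_x, mvs[i].src_y, mvs[i].dst_x, mvs[i].dst_y);
    }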
Based on commit fb1ddcdc8f by Luca Barbato <lu_zero@gentoo.org>
Adapted for libswresample by Michael Niedermayer
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
This allows getting rid of the many slightly differing implementations
of basically the same thing.
Signed-off-by: Reimar Döffinger <Reimar.Doeffinger@gmx.de>
* commit 'afbd4b7e093adf6d7a830b32759ca3ba8500363d':
lavf: add AVFormatContext/AVStream fields for signaling to the user when events happen.
Conflicts:
libavformat/avformat.h
libavformat/version.h
Merged-by: Michael Niedermayer <michaelni@gmx.at>
In order to support metadata being set as an option, it's necessary to be able
to set dictionaries as values.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
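Assuming this is the dictionary-typed AVOption support set through
av_opt_set_dict_val(), a sketch of setting such an option follows; the option
name "metadata" is purely illustrative:

    #include <libavutil/dict.h>
    #include <libavutil/opt.h>

    /* Sketch: build a dictionary and assign it to a hypothetical
     * dictionary-typed option named "metadata" on an AVOptions-enabled object. */
    static int set_metadata_option(void *obj)
    {
        AVDictionary *meta = NULL;
        int ret;

        av_dict_set(&meta, "title",  "Example title", 0);
        av_dict_set(&meta, "artist", "Example artist", 0);

        ret = av_opt_set_dict_val(obj, "metadata", meta, AV_OPT_SEARCH_CHILDREN);
        av_dict_free(&meta); /* the option stores its own copy */
        return ret;
    }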
The only flags, for now, indicate whether metadata was updated; they are set after each call to
av_read_frame(). This comes with the caveat that, at stream start, they might not be set properly
as packets might be buffered in AVFormatContext.packet_buffer before being given to the user
in av_read_frame().
Signed-off-by: Anton Khirnov <anton@khirnov.net>
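A sketch of how a caller might poll the new flags around av_read_frame(),
assuming the metadata event is the AVFMT_EVENT_FLAG_METADATA_UPDATED bit:

    #include <libavformat/avformat.h>

    /* Sketch: read packets and react when the demuxer signals that the
     * context-level metadata was updated; the caller clears the bit. */
    static int read_loop(AVFormatContext *ic)
    {
        AVPacket pkt;
        int ret;

        while ((ret = av_read_frame(ic, &pkt)) >= 0) {
            if (ic->event_flags & AVFMT_EVENT_FLAG_METADATA_UPDATED) {
                ic->event_flags &= ~AVFMT_EVENT_FLAG_METADATA_UPDATED;
                /* ... re-read ic->metadata here ... */
            }
            av_packet_unref(&pkt);
        }
        return ret == AVERROR_EOF ? 0 : ret;
    }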
It's a public function and should use the avio_ namespace
Signed-off-by: James Almer <jamrial@gmail.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
It allows attaching other external, opaque data to the frame and passing it
through the reordering process, for cases when the caller wants data other
than just the plain packet pts. There is no way to cleanly achieve this
without the field.
The rationale is that you have a packed format of the form
<greyscale sample> <alpha sample> <greyscale sample> <alpha sample>
and shortening greyscale to 'G' might make one think of Greenscale instead.
An alias pixel format and color space name are provided for compatibility.