This function intrinsically cannot handle codec profile fallback
(needed especially for H.264 Constrained Baseline), and was made
redundant by av_vdpau_bind_context().
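For reference, a minimal sketch of the replacement call (the VdpDevice and
VdpGetProcAddress values come from the application's own VDPAU setup and are
assumptions here):

    #include <libavcodec/vdpau.h>

    /* av_vdpau_bind_context() lets libavcodec pick the VDPAU decoder
     * profile itself, so profile fallback (e.g. for H.264 Constrained
     * Baseline) is handled internally. */
    static int bind_vdpau(AVCodecContext *avctx, VdpDevice device,
                          VdpGetProcAddress *get_proc_address)
    {
        return av_vdpau_bind_context(avctx, device, get_proc_address,
                                     0 /* no AV_HWACCEL_FLAG_* flags */);
    }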
Signed-off-by: Anton Khirnov <anton@khirnov.net>
AVBufferRef.data and AVPacket.data don't need to have the same value.
AVPacket.data could point anywhere into the buffer. Likewise, the sizes
don't need to be the same.
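A minimal illustration (the sizes are arbitrary):

    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.buf  = av_buffer_alloc(4096);   /* owns 4096 bytes; check for NULL */
    pkt.data = pkt.buf->data + 128;     /* != pkt.buf->data */
    pkt.size = 1024;                    /* != pkt.buf->size */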
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
This works only for extradata sizes up to 128 bytes. Additionally, I
could never actually see it doing anything. The new code using
MMAL_BUFFER_HEADER_FLAG_CONFIG now takes care of this.
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
We can send mp4-style data directly. But for some reason, this requires
sending the extradata as a buffer with MMAL_BUFFER_HEADER_FLAG_CONFIG
set. Reuse the infrastructure for sending AVPackets to do this.
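Roughly, the config path looks like this (the pool/port variable names are
illustrative rather than the actual mmaldec ones, and a real implementation
would check extradata_size against buffer->alloc_size):

    MMAL_BUFFER_HEADER_T *buffer = mmal_queue_get(pool_in->queue);
    if (!buffer)
        return AVERROR(ENOMEM);
    /* send the mp4-style extradata through the normal buffer path,
     * but flagged as codec configuration data */
    memcpy(buffer->data, avctx->extradata, avctx->extradata_size);
    buffer->length = avctx->extradata_size;
    buffer->flags  = MMAL_BUFFER_HEADER_FLAG_CONFIG;
    if (mmal_port_send_buffer(decoder->input[0], buffer) != MMAL_SUCCESS)
        return AVERROR_UNKNOWN;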
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Add an optional parameter to the -h and --help avconv options to print the
private options of a codec, format, or filter whose name matches the
provided value.
In case multiple classes are found (e.g. mov demuxer and mov muxer, or
h264 decoder and h264 demuxer), print all options from all classes.
It is possible to select the type of class to print by adding it
before the name (e.g. demuxer:mov and muxer:mov, or decoder:h264 and
demuxer:h264).
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
Packets are queued for reordering until the queue reaches its maximum
size or the max delay is reached.
This commit adds a warning trace when the max delay is reached.
Signed-off-by: Eloi BAIL <eloi.bail@savoirfairelinux.com>
Signed-off-by: Martin Storsjö <martin@martin.st>
This commit prints the jitter buffer size at AV_LOG_VERBOSE level. It
might be the default value or the value set by the application.
Signed-off-by: Eloi BAIL <eloi.bail@savoirfairelinux.com>
Signed-off-by: Martin Storsjö <martin@martin.st>
This commit adds a warning trace when the jitter buffer is full. It
helps to understand the decoding issues this leads to.
Signed-off-by: Eloi BAIL <eloi.bail@savoirfairelinux.com>
Signed-off-by: Martin Storsjö <martin@martin.st>
Since the actual max length of the jitter buffer is restricted by
max_delay, this shouldn't harm the overall latency (assuming that
max_delay is set properly), while allowing packet reordering with
a larger number of packets (which may be required with high bitrate
video).
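For example, an application could bound the reordering latency while still
allowing a deep queue roughly as follows (the option values are arbitrary;
max_delay is in microseconds and reorder_queue_size is the rtsp demuxer
option):

    AVDictionary *opts = NULL;
    AVFormatContext *ic = NULL;
    av_dict_set(&opts, "max_delay", "500000", 0);        /* 0.5 seconds */
    av_dict_set(&opts, "reorder_queue_size", "500", 0);  /* packets     */
    if (avformat_open_input(&ic, "rtsp://example/stream", NULL, &opts) < 0)
        { /* handle error */ }
    av_dict_free(&opts);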
Signed-off-by: Martin Storsjö <martin@martin.st>
"int" is useful in testing because provides accurate results across
different plaftforms, so remove it from the scheduled FF_API_UNUSED_MEMBERS
deprecation.
This introduces a slight timebase computation difference in the zmbv-8bit
fate test. This is expected, since the new options are doubles instead
of ints, and the additional precision skews the results in a
non-meaningful way.
Deprecate the now-unused option, but temporarily retain the capability
to disable the now-default behaviour.
Mention this change in the AVPacket documentation.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
This also drops setting the frame->pts field. This is usually not set by
decoders, so this would be an inconsistency that's at worst a danger to
the API user.
It appears the buffer->dts field is normally not set by the MMAL
decoder, so don't use it. If it's ever going to be set by MMAL, we
don't know whether the value will be what we want.
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
The generic code in utils.c sets the AVFrame.pkt_dts field from the
packet it was supposedly decoded from. This does not have to be true for a
fully asynchronous decoder like mmaldec. It could be overwritten with an
incorrect value. Even if the decoder doesn't determine the DTS (but sets
it to AV_NOPTS_VALUE), it's impossible to determine a correct value in
utils.c.
Decoders can now be marked with FF_CODEC_CAP_SETS_PKT_DTS, in which case
utils.c won't overwrite the field. The decoders are expected to set this
field (even if they only set it to AV_NOPTS_VALUE).
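A sketch of how a decoder opts in (the codec name and callback are
placeholders; the decoder is then expected to fill frame->pkt_dts itself,
even if only with AV_NOPTS_VALUE):

    AVCodec ff_example_decoder = {
        .name          = "example",
        .type          = AVMEDIA_TYPE_VIDEO,
        .decode        = example_decode,
        /* tell utils.c not to overwrite AVFrame.pkt_dts */
        .caps_internal = FF_CODEC_CAP_SETS_PKT_DTS,
    };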
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
This MMAL feature fills in missing timestamps from the framerate set on
the input port. This is generally unwanted, since libavcodec decoders
merely pass through timestamps without ever "fixing" them. The framerate
is also unknown, and even the timebase doesn't have to be set.
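The feature gets switched off on the input port, roughly as below (treat the
exact parameter name as an assumption):

    /* disable MMAL's own timestamp interpolation on the input port */
    mmal_port_parameter_set_boolean(decoder->input[0],
                                    MMAL_PARAMETER_VIDEO_INTERPOLATE_TIMESTAMPS,
                                    MMAL_FALSE);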
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Don't try to do a blocking wait for MMAL output if we haven't even sent
a single real packet, but only flush packets. Obviously we can't expect
to get anything back.
Additionally, don't send a flush packet to MMAL in the same case. It
appears the MMAL decoder will sometimes hang in mmal_vc_port_disable()
(called from ffmmal_close_decoder()), waiting for a reply from the GPU
which never arrives. Either MMAL disallows sending flush packets without
preceding real data, or it's a MMAL bug.
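Conceptually the guard is simple (ctx->packets_sent is a hypothetical
counter of real, non-flush packets fed to MMAL so far):

    if (avpkt->size == 0 && !ctx->packets_sent) {
        /* nothing was ever sent, so there is nothing to flush or drain */
        *got_frame = 0;
        return 0;
    }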
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
I can't come up with a nice way to handle this. It's hard to keep the
lock-stepped input/output in this case. You can't predict whether the
MMAL decoder will output a picture (because it's asynchronous), so
you have to assume in general that any packet could produce 0 or 1
frames. You can't continue to write input packets to the decoder,
because then you might get too many output frames, which you can't
get rid of because the lavc decoding API does not allow the decoder
to return an output frame without consuming an input frame (except
when flushing).
The ideal fix is a M:N decoding API (preferably asynchronous), which
would make this code potentially much cleaner. For now, this hack
will do.
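To illustrate, such an API could decouple input from output along these
lines (hypothetical names, not an existing lavc API at the time of this
commit):

    /* feed input whenever the decoder can accept it ... */
    int example_send_packet(AVCodecContext *avctx, const AVPacket *pkt);
    /* ... and pull zero or more frames out independently, as they
     * become ready */
    int example_receive_frame(AVCodecContext *avctx, AVFrame *frame);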
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>