Previously the message was cut off at the 256th byte.
This will allow dumping the complete x264 encode info when needed.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
Drop the need to set -debug bugs, since it's not a bug and the
message is already under an AV_LOG_DEBUG log level. Instead, only
print it when there is an actual string in it.
Use the optional sTER chunk, which defines a side-by-side stereo pair,
as described in "Extensions to the PNG 1.2 Specification", version 1.3.0.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
This allows enabling frame skipping, which is required for the
encoder to properly hit the target bitrate.
Signed-off-by: Martin Storsjö <martin@martin.st>
When not all the Opus streams have the same amount of decoded samples,
process the smallest amount and store what is left over from the other
streams.
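A minimal sketch of the bookkeeping (the SubStream type and its fields
are hypothetical, not the actual libavcodec structures):

    /* Hypothetical per-substream state; only illustrates the buffering idea. */
    typedef struct SubStream {
        int decoded_samples;   /* samples produced by this call */
        int buffered_samples;  /* samples carried over to the next call */
    } SubStream;

    static int samples_to_output(SubStream *s, int nb_streams)
    {
        int i, min_samples;
        if (nb_streams <= 0)
            return 0;
        min_samples = s[0].decoded_samples;
        for (i = 1; i < nb_streams; i++)
            if (s[i].decoded_samples < min_samples)
                min_samples = s[i].decoded_samples;
        /* output min_samples from every substream, keep the remainder */
        for (i = 0; i < nb_streams; i++)
            s[i].buffered_samples = s[i].decoded_samples - min_samples;
        return min_samples;
    }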
Bug-Id: 909
CC: libav-stable@libav.org
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
When the encoder is fed fewer frames than its delay, the picture list
looks like { NULL, NULL, ..., frame, frame, frame }. When flushing the
encoder (input frame == NULL), we need to ensure the picture list is
shifted enough so that we do not return an empty packet, which would
mean the encoder has finished, while it has not encoded any frame.
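A sketch of the intended flush behaviour (illustrative only; the actual
encoder keeps its own picture list and bookkeeping):

    #include <libavutil/frame.h>

    /* Hypothetical helper: with input == NULL (flushing), shift the delay
     * line until a real frame, if any, reaches the head, so that an empty
     * packet is not returned while frames are still queued. */
    static void shift_picture_list(const AVFrame **pics, int nb_pics,
                                   const AVFrame *input)
    {
        if (input)
            return;                      /* normal encoding path */
        while (nb_pics > 0 && !pics[0]) {
            for (int i = 0; i < nb_pics - 1; i++)
                pics[i] = pics[i + 1];
            pics[nb_pics - 1] = NULL;
            nb_pics--;
        }
    }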
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
This structure served as a bridge between data pointers and frames,
but it suffers from several limitations:
- it is not refcounted and data must be copied every time
- it cannot be expanded without ABI break due to being used on the stack
- its functions are just wrappers around imgutils, which add a layer of
  unneeded indirection and a maintenance burden
- it allows hacks like embedding uncompressed data in packets
- its use is often confusing to our users
AVFrame provides a much better API, and, if a full blown frame is not
needed, it is just as simple and more straightforward to use data and
linesize arrays directly.
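For example, an image can be allocated and freed with bare data/linesize
arrays through the public imgutils API (a sketch; width, height and the
pixel format are assumed to come from the caller):

    #include <libavutil/imgutils.h>
    #include <libavutil/mem.h>

    static int alloc_example(int width, int height)
    {
        uint8_t *data[4];
        int      linesize[4];
        int ret = av_image_alloc(data, linesize, width, height,
                                 AV_PIX_FMT_YUV420P, 32);
        if (ret < 0)
            return ret;
        /* ... fill the planes or copy with av_image_copy() ... */
        av_freep(&data[0]);
        return 0;
    }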
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
Use the new fields directly instead of the ones from AVPicture.
This removes a layer of indirection which serves no practical purpose
whatsoever, and will help in removing the AVPicture structure completely
later.
Every subtitle encoder/decoder seamlessly points to the new arrays,
so it is possible to deprecate AVSubtitleRect.pict.
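For example, a caller that used to read the bitmap through the embedded
AVPicture can use the plain arrays instead (a sketch; 'sub' is assumed
to be a decoded AVSubtitle, and the old and new fields point to the same
planes during the deprecation period):

    AVSubtitleRect *rect = sub->rects[0];
    /* before (deprecated):                      */
    /*   uint8_t *bitmap = rect->pict.data[0];   */
    /*   int      stride = rect->pict.linesize[0]; */
    /* after: */
    uint8_t *bitmap = rect->data[0];
    int      stride = rect->linesize[0];
    /* ... blend/convert the bitmap as before ... */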
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
Callers always use a frame and cast it to AVPicture, so change
ff_msrle_decode() to accept an AVFrame directly instead.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
This makes the H.264 decoder thread-safe to initialize.
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Work on the AVFrame references directly.
Instead of setting up a flipped/swapped "view" on the pictures,
flip/swap them when returning decoded frames to the API user.
Rather than copying data buffers around, allocate a proper frame, and
use the standard AVFrame functions. This effectively makes the decoder
capable of direct rendering.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
There is not much reason to generate such a small table at runtime.
Signed-off-by: Derek Buitenhuis <derekb@vimeo.com>
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Place the primary audio coding header data into the DCAAudioHeader
structure to make DCAContext clearer, and move channel-related data
into the DCAChan structure to make it easier to use by extensions.
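A rough illustration of the intended layout (the field names below are
hypothetical; see dca.h for the real ones):

    /* Hypothetical sketch of the split described above. */
    typedef struct DCAAudioHeader {
        int crc_present;
        int frame_size;
        int sample_rate;
        /* ... other primary audio coding header fields ... */
    } DCAAudioHeader;

    typedef struct DCAChan {
        int prediction_mode;
        int scale_factor[2];
        /* ... other per-channel data used by the extensions ... */
    } DCAChan;

    typedef struct DCAContext {
        DCAAudioHeader audio_header;
        DCAChan        dca_chan[7];   /* primary channels max */
        /* ... */
    } DCAContext;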
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Do not fail when the original resolution is smaller than the current
one, as the frame buffer is resized automatically.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
In some situations, MMAL won't return a decoded frame for certain input
frames. This can happen if a frame fails to decode, or if a packet does
not actually contain a complete frame. In these situations, we would
deadlock (or, in practice, time out) waiting for an expected output frame,
which is not ideal. On the other hand, there are situations where we
definitely have to block to avoid deadlocks. (This mess is a
consequence of trying to map MMAL's asynchronous and flexible
dataflow to libavcodec, which is more static and rigid.)
Solve this by doing a blocking wait only if the amount of buffered data
is too big. The whole purpose of the blocking wait is to avoid excessive
buffering of input data, so we can skip it if the buffered amount appears
to be low. The
consequence is that libavcodec can gracefully return no frame to the
API user.
We want to track the number of full packets to make our heuristic work.
But MMAL buffers are fixed-size, so large packets have to be split, which
is why the previous commit is needed. We use the ..._FRAME_END flag to
remember packet boundaries, but MMAL does not preserve these buffer
flags when returning buffers to the user.
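A sketch of the heuristic (the counters and the threshold are
illustrative, not the actual mmaldec fields):

    /* Hypothetical: only block waiting for output while "too many" full
     * packets are still queued inside MMAL; otherwise poll and let
     * libavcodec return no frame for this call. */
    #define MAX_DELAYED_PACKETS 16   /* illustrative threshold */

    static int must_block_for_output(int packets_sent, int packets_returned)
    {
        return packets_sent - packets_returned > MAX_DELAYED_PACKETS;
    }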
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
The next commit needs 1 bit of additional information per MMAL buffer
sent to the MMAL input port. This information will be needed when the
buffer is recycled (i.e. returned by the input port's callback).
Normally, we could use MMAL_BUFFER_HEADER_FLAG_USER0, but that is
unexpectedly not preserved.
Do this by storing a pointer to FFBufferEntry in the MMAL buffer's
user data, instead of an AVBufferRef. This also changes the lifetime
of FFBufferEntry.
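Roughly, the buffer's user data now points at the entry itself (a sketch;
the callback body is simplified and FFBufferEntry's internals are omitted):

    /* When sending: buffer->user_data = entry;  (was: an AVBufferRef) */
    static void input_callback(MMAL_PORT_T *port, MMAL_BUFFER_HEADER_T *buffer)
    {
        FFBufferEntry *entry = buffer->user_data;
        /* ... free the entry here; this is what now determines the
         * FFBufferEntry lifetime ... */
        mmal_buffer_header_release(buffer);
    }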
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
The intended meaning is "if this block is the first block in a slice then
its left boundary is a slice boundary". Silence a logical-not-parentheses
warning from gcc.
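The warning pattern, in generic form (illustrative code only, not the
actual decoder source):

    static int silence_example(int x, int y)
    {
        /* gcc -Wlogical-not-parentheses warns on the commented form,
         * because '!' applies only to 'x' and not to the comparison: */
        /* return !x == y; */
        /* Here that is exactly what is meant, so spelling out the
         * parentheses keeps the behaviour and silences the warning: */
        return (!x) == y;
    }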
Silence a warning due to frame assignment in dvenc. All uses of the
reference in dvdec are read only, except the ones in the main decoding
function, so use the frame pointer directly there.
CID 1256 is specified as using the same table for luma and chroma,
which is the same as the CID 1235 luma table. This is consistent with
the format supposedly being RGB, although most sequences seem to
actually be YCbCr-encoded.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
Tables 1258 and 1259 were not zigzagged when added, so it was not
possible to notice the equivalence.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
Convert them to zigzag order, as the rest of them are.
When I was adding support for 10-bit DNxHD, I just copy-pasted the
missing quant matrices from the spec. Now it turns out the existing
matrices in dnxhddata.c were in zigzag order. This resulted in wrong
quantization for 10-bit DNxHD. The attached patch fixes the problem by
converting 10-bit quant matrices to zigzag order.
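The conversion itself is just a permutation through the zigzag scan
table; a sketch (assuming libavcodec's ff_zigzag_direct table and 8-bit
matrix entries):

    #include <stdint.h>

    /* from libavcodec: maps scan position -> raster (natural) position */
    extern const uint8_t ff_zigzag_direct[64];

    /* Reorder a quant matrix given in natural (raster) order into the
     * zigzag order used by the other tables in dnxhddata.c. */
    static void matrix_to_zigzag(uint8_t *zz, const uint8_t *natural)
    {
        for (int i = 0; i < 64; i++)
            zz[i] = natural[ff_zigzag_direct[i]];
    }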
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
When forwarding the frame type information, by default x264 can
decide which kind of keyframe to output. Add an option to force it
to output IDR frames, in order to support use cases such as preparing
the content for segmented stream formats.
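A sketch of what such an option might look like in the libx264 wrapper's
AVOption table (the option name, help text and context field are
assumptions, and OFFSET/VE are the usual local convenience macros):

    /* hypothetical entry for the options[] array in libx264.c */
    { "forced-idr", "If forcing keyframes, force them as IDR frames",
      OFFSET(forced_idr), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 1, VE },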
x264 build 147 adds native support for NV21.
Useful to avoid an additional pixel format conversion when encoding
from a wide range of capture devices, Android among them.
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
These fields are difficult to interpret, and are provided by a single
encoder (mpegvideoenc). In general they do not belong to a structure
containing raw data only, so remove them from AVFrame.
Mpegvideoenc now uses a private field in Picture for its internal
computations.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
This bit is 1 in some samples, and seems to coincide with interlaced
mbs and CID 1260. The 2008 spec does not know about it, and maintains
that qscale is 11 bits. This looks oversized, but may help larger bit
depths. Currently, this bit leads to an obviously incorrect qscale
value, meaning its syntax is shifted by 1. However, reading 11 bits
also leads to obviously incorrect decoding: qscale seems to be 10 bits.
As most profiles still have an 11-bit qscale, the feature is restricted
to the CID 1260 profile (this flag is dependent on a higher-level flag
located in the header).
The encoder writes 12 bits of syntax, with the first and last bits
always 0, which is now somewhat inconsistent with the decoder but ends
up with the same effect (progressive + reserved bit).
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
Note that convergence_duration had another meaning, one which was in
practice never used. The only real use for it was as a 64-bit
replacement for the duration field. It's better just to make duration
64 bits and to get rid of convergence_duration.
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
File libopenh264enc.c has been modified so that the encoder uses av_log()
to log messages (error, warning, info, etc.) instead of logging them
directly to stderr. At the time the encoder is created, the current
libav log level is mapped to an equivalent libopenh264 log level. This
log level, and a message logging function that invokes av_log() to
actually log messages, are then set on the encoder.
This contains further changes and simplifications by Michael Niedermayer
and Martin Storsjö.
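Roughly, the level mapping boils down to a small translation like the
following sketch (the WELS_LOG_* trace levels and the header path are
assumed from the OpenH264 API; the function name is illustrative):

    #include <libavutil/log.h>
    #include <wels/codec_app_def.h>   /* WELS_LOG_* (assumed header path) */

    /* Map the current libav log level to an OpenH264 trace level. */
    static int av_to_wels_log_level(int av_level)
    {
        if (av_level >= AV_LOG_DEBUG)   return WELS_LOG_DETAIL;
        if (av_level >= AV_LOG_VERBOSE) return WELS_LOG_DEBUG;
        if (av_level >= AV_LOG_INFO)    return WELS_LOG_INFO;
        if (av_level >= AV_LOG_WARNING) return WELS_LOG_WARNING;
        return WELS_LOG_ERROR;
    }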
Signed-off-by: Martin Storsjö <martin@martin.st>
It appears VDPAU drivers can report Constrained Baseline as unsupported,
even if libvdpau knows about the symbol and the Main profile is
supported.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
This function can intrinsically not deal with codec profile fallback
(for H.264 Constrained Baseline especially), and was made redundant
by av_vdpau_bind_context().
Signed-off-by: Anton Khirnov <anton@khirnov.net>
AVBufferRef.data and AVPacket.data don't need to have the same value.
AVPacket.data could point anywhere into the buffer. Likewise, the sizes
don't need to be the same.
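In other words, the only safe assumption is that the packet data lies
somewhere within its buffer; a sketch of that relationship:

    #include <assert.h>
    #include <libavcodec/avcodec.h>

    /* Illustrative check: pkt->data may start anywhere inside pkt->buf and
     * pkt->size may be smaller than the buffer's size. */
    static void check_packet_buffer(const AVPacket *pkt)
    {
        if (!pkt->buf)
            return;  /* non-refcounted packet */
        assert(pkt->data >= pkt->buf->data);
        assert(pkt->data + pkt->size <= pkt->buf->data + pkt->buf->size);
    }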
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
This works only for extradata sizes up to 128 bytes. Additionally, I
could never actually see it doing anything. The new code using
MMAL_BUFFER_HEADER_FLAG_CONFIG now takes care of this.
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
We can send mp4-style data directly. But for some reason, this requires
sending the extradata as buffer with MMAL_BUFFER_HEADER_FLAG_CONFIG
set. Reuse the infrastructure for sending AVPackets to do this.
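A sketch of the idea (error handling and pool management simplified; the
function name is hypothetical):

    #include <string.h>
    #include <interface/mmal/mmal.h>

    /* Hypothetical: push the codec extradata through the input port as a
     * config buffer instead of converting the bitstream. */
    static int send_extradata(MMAL_PORT_T *input, MMAL_POOL_T *pool,
                              const uint8_t *extradata, int size)
    {
        MMAL_BUFFER_HEADER_T *buf = mmal_queue_get(pool->queue);
        if (!buf || (int)buf->alloc_size < size)
            return -1;
        memcpy(buf->data, extradata, size);
        buf->length = size;
        buf->flags  = MMAL_BUFFER_HEADER_FLAG_CONFIG;
        return mmal_port_send_buffer(input, buf) == MMAL_SUCCESS ? 0 : -1;
    }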
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>