This simplifies testing arbitrary code fragments within a function
body.
Signed-off-by: Mans Rullgard <mans@mansr.com>
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Input on/off state can change in request_samples(), which can result in
a state where only the first input is active. get_available_samples()
will then return 0, and request_frame() will fail with EAGAIN even
though there is data on the single active input.
Take this into account and check the number of active inputs again after
calling request_samples().
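Below is a standalone sketch of the fixed control flow. The two-input
state and the helper names are invented for illustration; only the
ordering of the checks (re-counting the active inputs after
request_samples()) mirrors the actual change.

    /* Mock of the request_frame() flow described above; not the real filter. */
    #include <stdio.h>
    #include <errno.h>

    #define NB_INPUTS 2

    static int input_on[NB_INPUTS] = { 1, 1 };    /* both inputs start active */
    static int queued[NB_INPUTS]   = { 1024, 0 }; /* second input has run dry */

    static int active_inputs(void)
    {
        int i, n = 0;
        for (i = 0; i < NB_INPUTS; i++)
            n += input_on[i];
        return n;
    }

    /* Samples available for mixing across the inputs other than the first;
     * returns 0 when none of them is active. */
    static int get_available_samples(void)
    {
        int i, min = -1;
        for (i = 1; i < NB_INPUTS; i++)
            if (input_on[i] && (min < 0 || queued[i] < min))
                min = queued[i];
        return min < 0 ? 0 : min;
    }

    /* May switch a drained input off -- the state change referred to above. */
    static void request_samples(void)
    {
        int i;
        for (i = 0; i < NB_INPUTS; i++)
            if (input_on[i] && !queued[i])
                input_on[i] = 0;
    }

    static int request_frame(void)
    {
        if (active_inputs() > 1)
            request_samples();

        /* The fix: re-count here.  Reusing the count taken before
         * request_samples() would send us into the mixing branch below,
         * where get_available_samples() == 0 means EAGAIN even though
         * the first input still has 1024 queued samples. */
        if (active_inputs() == 1)
            return queued[0] ? 0 : -EAGAIN;   /* single-input passthrough */

        return get_available_samples() ? 0 : -EAGAIN;
    }

    int main(void)
    {
        printf("request_frame() returned %d\n", request_frame());
        return 0;
    }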
This avoids creating new AVStreams for them when switching
between different variants, since changes between different
nellymoser sample rates can be handled within the same stream.
Signed-off-by: Martin Storsjö <martin@martin.st>
Instead of inlining everything into ff_h264_hl_decode_mb(), use
explicit templating to create versions of the called functions
with constant parameters filled in. This greatly speeds up
compilation of h264.c and reduces the code size without any
measurable impact on performance.
Compilation time for h264.c on an i7 goes from 30s to 5.5s.
Code size is reduced by 430kB.
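The following is a minimal, self-contained illustration of the
technique; the file, macro and function names are made up for this
sketch and are not the actual h264.c template names. The file includes
itself once per parameter value, producing separately specialized
functions in which the parameter is a compile-time constant.

    /* Save as templating_demo.c -- the self-includes below rely on the name. */
    #ifndef TEMPLATE_PASS

    #include <stdio.h>

    #define TEMPLATE_PASS

    #define SAMPLE_BITS 8
    #include "templating_demo.c"   /* defines decode_block_8()  */
    #undef SAMPLE_BITS

    #define SAMPLE_BITS 16
    #include "templating_demo.c"   /* defines decode_block_16() */
    #undef SAMPLE_BITS

    int main(void)
    {
        printf("%d %d\n", decode_block_8(3), decode_block_16(3));
        return 0;
    }

    #else /* template body, compiled once per SAMPLE_BITS value */

    #define JOIN(a, b)  a ## b
    #define FUNC2(n, b) JOIN(n, b)
    #define FUNC(name)  FUNC2(name ## _, SAMPLE_BITS)

    static int FUNC(decode_block)(int x)
    {
        /* SAMPLE_BITS is a constant in each expansion, so the compiler can
         * fold it instead of branching on a runtime parameter. */
        return x << (SAMPLE_BITS - 8);
    }

    #undef FUNC
    #undef FUNC2
    #undef JOIN

    #endif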
Signed-off-by: Mans Rullgard <mans@mansr.com>
Previously it was interpreted as the number of bytes, while the
documentation stated that it was the number of 8-byte blocks.
This makes it behave similarly to the existing AES code.
Signed-off-by: Martin Storsjö <martin@martin.st>
Previously it was interpreted as the number of bytes, while the
documentation stated that it was the number of 8-byte blocks.
This makes it behave similarly to the existing AES code.
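A hedged usage sketch (assuming the function in question is
av_des_crypt(); the commit text does not name it), showing the count
argument given in 8-byte blocks rather than bytes:

    #include <stdint.h>
    #include <stdio.h>
    #include "libavutil/des.h"

    int main(void)
    {
        struct AVDES des;
        const uint8_t key[8] = { 0x01, 0x23, 0x45, 0x89, 0xab, 0xcd, 0xef, 0x10 };
        uint8_t in[16]  = { 0 };   /* two 8-byte blocks of plaintext */
        uint8_t out[16];

        if (av_des_init(&des, key, 64, 0) < 0)
            return 1;
        /* count is 2 (blocks), not 16 (bytes); NULL iv selects ECB mode */
        av_des_crypt(&des, out, in, 2, NULL, 0);
        printf("first output byte: 0x%02x\n", out[0]);
        return 0;
    }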
Signed-off-by: Martin Storsjö <martin@martin.st>
Using ff_mspel_motion assumes that s (a MpegEncContext
pointer) really is a Wmv2Context.
This fixes crashes in error resilience on vc1/wmv3 videos.
CC: libav-stable@libav.org
Signed-off-by: Martin Storsjö <martin@martin.st>
If the output frame size is smaller than the input frame size,
and the input stream time base corresponds exactly to the input
frame size (giving input packet timestamps like 0, 1, 2, 3, 4 etc.),
the output timestamps from the filter will be like
0, 1, 2, 3, 4, 4, 5 ..., leading to non-monotonic timestamps later.
A concrete example is input mp3 data having frame sizes of 1152
samples, transcoded to aac with 1024 sample frames.
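As a worked example (assuming a 48 kHz stream; the exact rate is
irrelevant): with the filter time base left at one tick per
1152-sample input frame, rescaling the start sample of each
1024-sample output frame yields the duplicated timestamps described
above.

    #include <stdio.h>
    #include <inttypes.h>
    #include "libavutil/mathematics.h"

    int main(void)
    {
        AVRational sample_tb = { 1,    48000 };  /* one tick per sample      */
        AVRational frame_tb  = { 1152, 48000 };  /* one tick per input frame */
        int n;

        for (n = 0; n < 7; n++)
            printf("%"PRId64" ", av_rescale_q(n * 1024, sample_tb, frame_tb));
        printf("\n");                            /* prints: 0 1 2 3 4 4 5 */
        return 0;
    }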
By setting the audio filter time base to the sample rate, we will
get sensible timestamps for all output packets, regardless of
the ratio between the input and output frame sizes.
Signed-off-by: Martin Storsjö <martin@martin.st>
This tool uses lavfi internal symbols not accessible in shared
libraries. TESTPROGS are linked statically, allowing them to use
library internals that are not normally exported.
Signed-off-by: Mans Rullgard <mans@mansr.com>
For reading from normal files on disk, the queue limits for
demuxed data work fine, but for reading data from realtime
streams, they mean we're not reading from the input stream
at all once the queue limit has been reached. For TCP streams,
this means that writing to the socket from the peer side blocks
(potentially leading to the peer dropping data), and for UDP
streams it means that our kernel might drop data.
For some protocols/servers, the server initially sends a
large burst of data to fill client-side buffers, but once
they are filled, we should keep reading to avoid dropping data.
For all realtime streams, it IMO makes sense to just buffer
as much as we get (in buffers in avplay.c rather than in
OS-level buffers). With this option set, the input thread
should always be blocked waiting for more input data,
never sleeping while waiting for the decoder to consume data.
Signed-off-by: Martin Storsjö <martin@martin.st>
The buffers are only allocated once, although the allocation can
happen from any of a few different places, so there is no need to
use realloc.
Using av_malloc() ensures they are aligned suitably for SIMD
optimisations.
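A small sketch of the resulting pattern, with invented buffer names and
an invented size: allocate once with av_malloc() (which returns memory
aligned for the SIMD code) and simply skip the call when the pointers
are already set, so realloc semantics are never needed.

    #include <stdint.h>
    #include <stdlib.h>
    #include "libavutil/mem.h"

    #define BUF_SAMPLES 1024            /* illustrative size */

    static int16_t *buf0, *buf1;

    static int ensure_buffers(void)
    {
        if (buf0)                       /* already allocated on an earlier call */
            return 0;
        buf0 = av_malloc(BUF_SAMPLES * sizeof(*buf0));
        buf1 = av_malloc(BUF_SAMPLES * sizeof(*buf1));
        if (!buf0 || !buf1) {
            av_freep(&buf0);
            av_freep(&buf1);
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        return ensure_buffers() < 0 ? EXIT_FAILURE : EXIT_SUCCESS;
    }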
Signed-off-by: Mans Rullgard <mans@mansr.com>