This patch modifies the frame encoding function to retry encoding the
frame when the resulting bit count is too far off target, adjusting
lambda only in small, incremental steps. It also makes the logic more
conservative; otherwise it would contend with bit-reservoir-related
variations in bit allocation and produce artifacts when frames have to
be truncated (usually at high bit rates, when transitioning from low
complexity to high complexity).
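Roughly, the retry looks like the following sketch (illustrative names
and step/tolerance values, not the actual encoder code):
    #include <math.h>
    #define LAMBDA_STEP 0.05f   /* small incremental adjustment (assumed value) */
    #define TOLERANCE   0.10f   /* allowed relative deviation from the target   */
    #define MAX_RETRIES 5
    /* Stand-in for the real per-frame encoding routine: returns the number
     * of bits produced when encoding the frame with the given lambda. */
    typedef int (*encode_cb)(void *ctx, float lambda);
    static float encode_with_retry(void *ctx, encode_cb encode, float lambda,
                                   int target_bits)
    {
        for (int i = 0; i < MAX_RETRIES; i++) {
            int bits = encode(ctx, lambda);
            if (fabsf(bits - (float)target_bits) <= target_bits * TOLERANCE)
                break;           /* close enough to the target, keep the frame */
            /* Nudge lambda by a small step instead of a large jump, so the
             * retry does not fight normal bit-reservoir variations. */
            lambda *= bits > target_bits ? 1.0f + LAMBDA_STEP : 1.0f - LAMBDA_STEP;
        }
        return lambda;
    }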
Binutils always strips the relocation information from executable
files, even when it is needed (dynamicbase/ASLR). We can work around
this by using the --pic-executable flag combined with setting the
correct entry point, since ld apparently forgets what it should be.
This problem affects both 32- and 64-bit mingw-w64.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Reduces the number of times the vbv retry code is used; this should
have no effect on quality.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
If there is no #EXT-X-BYTERANGE specified, there is no need to seek.
Seeking fails anyway for rtmp, because this protocol does not support
url_seek.
This fixes CNN.m3u from trac ticket 4797 (i.e. Debian bug #798189).
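The intended check boils down to something like this (sketch with
made-up helper names, not the actual hls.c code):
    #include <stdint.h>
    typedef struct Segment {
        const char *url;
        int64_t     url_offset;  /* non-zero only if #EXT-X-BYTERANGE was parsed */
    } Segment;
    /* Hypothetical I/O helpers standing in for the real protocol layer. */
    int io_open(void **pb, const char *url);
    int io_seek(void *pb, int64_t offset);
    static int open_segment(void **pb, const Segment *seg)
    {
        int ret = io_open(pb, seg->url);
        if (ret < 0)
            return ret;
        /* Only seek when the playlist gave a byte range; protocols such as
         * rtmp cannot seek, and without a range there is nothing to skip. */
        if (seg->url_offset > 0)
            ret = io_seek(*pb, seg->url_offset);
        return ret < 0 ? ret : 0;
    }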
Reviewed-by: wm4 <nfxjfg@googlemail.com>
Signed-off-by: Andreas Cadhalpun <Andreas.Cadhalpun@googlemail.com>
If cmd_pos is broken, the decoder would just keep accumulating packets
in the reassembly buffer until it fails and flushes the buffer on
overflow. Since packets are usually rather small, it takes a lot of
subtitle packets to reach that point. The perceived effect is that no
subtitles are displayed anymore after the faulty packet has been passed
to the decoder.
I'm not terribly sure about this, but on the other hand this code is
only active when fragmented packets need to be reassembled.
Fixes sample file in trac issue #4872.
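For illustration only (simplified, not necessarily how the decoder is
structured), the point is to reject a command offset that can never
fall inside the packet instead of caching more and more data:
    /* Sketch: cmd_pos is the command-sequence offset declared in the packet,
     * buf_size the bytes accumulated so far, total_size the size declared in
     * the packet header. */
    static int check_cmd_pos(int cmd_pos, int buf_size, int total_size)
    {
        if (cmd_pos < 0 || cmd_pos > total_size)
            return 0;    /* can never become valid: discard instead of caching */
        if (cmd_pos >= buf_size)
            return -1;   /* may still be a fragment: cache and wait for more   */
        return 1;        /* command sequence is available: decode it           */
    }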
If the first and second packets are partial, the second
append_to_cached_buf() call would append the reassembly buffer
(ctx->buf) to itself, because buf has been set to ctx->buf.
I do not know a valid sample file which triggers this, and do not know
if packets can be split into more than 2 sub-packets, but it triggered
with a (differently) broken sample file in trac issue #4872.
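One way to guard against the self-append, sketched with a simplified
context (the real structures and error codes differ):
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    typedef struct SubContext {
        uint8_t *buf;
        int      buf_size;
    } SubContext;
    static int append_to_cached_buf(SubContext *ctx, const uint8_t *buf,
                                    int buf_size)
    {
        /* Appending ctx->buf to itself duplicates the data and reads from
         * memory that realloc() may have just moved, so skip that case. */
        if (buf == ctx->buf)
            return 0;
        uint8_t *tmp = realloc(ctx->buf, ctx->buf_size + buf_size);
        if (!tmp)
            return -1;
        ctx->buf = tmp;
        memcpy(ctx->buf + ctx->buf_size, buf, buf_size);
        ctx->buf_size += buf_size;
        return 0;
    }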
Broken by commit ba12ba859a. This only
happens with HLS streams which use encryption and require preserving
cookies sent by the server.
Fixes trac issue #4846.
Some .idx files actually contain duplicate subtitle events:
timestamp: 00:07:52:600, filepos: 00004e800
timestamp: 00:07:52:600, filepos: 00004f800
The second will be dropped, because it has the same pts, duration, and text
(the text is just a dummy empty string; the real data is retrieved when
actually reading vobsub subtitle packets).
Dropping this is probably not intended/safe, so avoid it.
See trac issue #4872 for a sample. This patch doesn't fix decoding of
the sample, though.
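The idea, sketched with a simplified event record (illustrative, not
the actual demuxer code): take the file position into account, so the
two events above stay distinct even though pts, duration and the dummy
text match.
    #include <stdint.h>
    #include <string.h>
    typedef struct SubEvent {
        int64_t     pts;
        int64_t     duration;
        int64_t     filepos;   /* offset of the packet data in the .sub file */
        const char *text;      /* dummy empty string for vobsub              */
    } SubEvent;
    /* Treat two events as duplicates only if they also point at the same
     * data; identical pts/duration/text alone is not enough for vobsub. */
    static int is_duplicate(const SubEvent *a, const SubEvent *b)
    {
        return a->pts      == b->pts      &&
               a->duration == b->duration &&
               a->filepos  == b->filepos  &&
               !strcmp(a->text, b->text);
    }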
The stream ID is essentially an arbitrary number defined by the .idx
file headers. They have to match the IDs in the .sub stream. The vobsub
demuxer assumed the IDs would simply start from 0, increasing by 1 for
each stream. This is not correct. In the sample I had, the IDs started
from 1, leading to no subtitles being displayed at all.
Fix this by using the correct stream ID.
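For illustration, the ID can be taken from the index declared in the
.idx header rather than from a running counter (sketch only, with a
hypothetical helper):
    #include <stdio.h>
    /* An .idx stream header looks like: "id: en, index: 1" */
    static int parse_idx_stream_id(const char *line, char lang[3])
    {
        int index;
        if (sscanf(line, "id: %2s, index: %d", lang, &index) != 2)
            return -1;
        /* Use the declared index as the stream ID, so packets from the .sub
         * stream (which carry this ID) match the right stream, instead of
         * assuming IDs start at 0 and increase by 1. */
        return index;
    }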
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
If tput is not found for colorizing, the error message should be squashed.
Signed-off-by: Ganesh Ajjanagadde <gajjanagadde@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
It requires a loop in filters or in the framework, which makes the
scheduling less efficient and more complex. This is purely an internal
change, since the loop is now present in buffersink.
Note that no filter except buffersink relied on the requirement.
Do not assume that ff_request_frame() returning success implies a
frame has arrived in the FIFO. Instead, just loop until a frame is in
the FIFO. This does not change anything, since the same loop is present
in ff_request_frame(), as confirmed by an assertion.
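The loop amounts to something like this (sketch with stand-in names for
the FIFO query and ff_request_frame(); not the actual buffersink code):
    /* Hypothetical stand-ins for the real FIFO query and frame request. */
    int fifo_size(void *sink);
    int request_frame(void *inlink);
    static int wait_for_frame(void *sink, void *inlink)
    {
        /* A successful request no longer guarantees that a frame has landed
         * in the FIFO, so keep requesting until one actually has. */
        while (!fifo_size(sink)) {
            int ret = request_frame(inlink);
            if (ret < 0)
                return ret;   /* EOF or error: propagate */
        }
        return 0;
    }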
The randomize_buffer() implementation ensures that "most of the time"
we get a good mix of wide16/wide8/hev/regular/no filters for complete
code coverage. However, this is not mathematically guaranteed, because
guaranteeing it would make the code either much more complex or much
less random.
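As a rough sketch of what "a good mix" means here (simplified and
hypothetical, not the actual checkasm code; E/I/H are the usual
edge/interior/hev thresholds):
    #include <stdint.h>
    #include <stdlib.h>
    /* Pick a per-edge "roughness" class at random, then draw pixel steps
     * from a range that usually (but not provably always) exercises the
     * matching path: no filtering, regular, hev, or one of the wide filters. */
    static void randomize_edge(uint8_t *px, int n, int E, int I, int H)
    {
        int cls = rand() % 4;
        int max_step;
        switch (cls) {
        case 0:  max_step = 4 * E;  break;   /* big jumps: filter should skip  */
        case 1:  max_step = I;      break;   /* moderate edge: regular filter  */
        case 2:  max_step = H + 4;  break;   /* just above the hev threshold   */
        default: max_step = 1;      break;   /* very flat: wide filters likely */
        }
        px[0] = rand() % 256;
        for (int i = 1; i < n; i++) {
            int step = rand() % (2 * max_step + 1) - max_step;
            int v    = px[i - 1] + step;
            px[i]    = v < 0 ? 0 : v > 255 ? 255 : v;   /* clamp to 8-bit range */
        }
    }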