This only returns the bits per sample when it is exactly correct, i.e. when
the bitstream contains only raw samples with no frame headers or padding.
This applies to basically all PCM codecs and a small subset of ADPCM codecs.
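A minimal sketch of the idea, assuming current AV_CODEC_ID_* names (the helper
name and the codec list here are illustrative, not the actual implementation):

    #include <libavcodec/avcodec.h>

    /* Return the exact bits per sample for codecs whose packets contain
     * nothing but raw samples, or 0 if it cannot be known exactly. */
    static int exact_bits_per_sample(enum AVCodecID codec_id)
    {
        switch (codec_id) {
        case AV_CODEC_ID_PCM_U8:
        case AV_CODEC_ID_PCM_S8:
            return 8;
        case AV_CODEC_ID_PCM_S16LE:
        case AV_CODEC_ID_PCM_S16BE:
            return 16;
        case AV_CODEC_ID_PCM_S24LE:
        case AV_CODEC_ID_PCM_S24BE:
            return 24;
        case AV_CODEC_ID_PCM_S32LE:
        case AV_CODEC_ID_PCM_S32BE:
            return 32;
        default:
            return 0; /* frame headers/padding present or layout unknown */
        }
    }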
Split off packet parsing into a separate function. Parse full packets at
once and store them in a queue, eliminating the need for tracking
parsing state in AVStream.
The horrible unreadable loop in read_frame_internal() now isn't weirdly
ordered and doesn't contain evil gotos, so it should be much easier to
understand.
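A rough sketch of the queue approach (PktNode and queue_put are illustrative
names; the real code uses lavf's own packet list):

    #include <libavcodec/avcodec.h>
    #include <libavutil/mem.h>

    typedef struct PktNode {
        AVPacket pkt;
        struct PktNode *next;
    } PktNode;

    /* Append one fully parsed packet to the queue; the queue takes
     * ownership of the packet data. */
    static int queue_put(PktNode **head, PktNode **tail, AVPacket *pkt)
    {
        PktNode *n = av_mallocz(sizeof(*n));
        if (!n)
            return AVERROR(ENOMEM);
        n->pkt = *pkt;
        if (*tail)
            (*tail)->next = n;
        else
            *head = n;
        *tail = n;
        return 0;
    }

The parsing function runs the parser over one demuxed packet, pushes every
complete output packet onto the queue, and read_frame_internal() simply pops
the first entry once the queue is non-empty.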
compute_pkt_fields() now invents slightly different timestamps for two
raw vc1 tests, due to has_b_frames being set a bit later. They shouldn't
be more wrong (or right) than previous ones.
Make the packet buffer a parameter instead of hardcoding it to
AVFormatContext.packet_buffer.
Also move the function higher in the file, since it will be called from
read_frame_internal().
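A sketch of the shape of the change (the helper name add_pkt is hypothetical):

    /* before: the destination list is hardcoded to s->packet_buffer */
    static int add_pkt(AVFormatContext *s, AVPacket *pkt);

    /* after: the caller picks the list, so read_frame_internal() can reuse
     * the same helper with a different buffer */
    static int add_pkt(AVPacketList **packet_buffer, AVPacket *pkt);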
This splits ff_dsputil_init_mmx() into multiple functions, one for
each MMX/SSE level, somewhat simplifying the nested conditions.
Signed-off-by: Mans Rullgard <mans@mansr.com>
Signed-off-by: Diego Biurrun <diego@biurrun.de>
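A rough illustration of the resulting shape (helper names are simplified and
the real code covers more SIMD levels):

    #include <libavutil/cpu.h>

    /* one helper per SIMD level (bodies omitted) */
    static void dsputil_init_mmx (DSPContext *c, AVCodecContext *avctx, int flags);
    static void dsputil_init_sse (DSPContext *c, AVCodecContext *avctx, int flags);
    static void dsputil_init_sse2(DSPContext *c, AVCodecContext *avctx, int flags);

    void ff_dsputil_init_mmx(DSPContext *c, AVCodecContext *avctx)
    {
        int cpu_flags = av_get_cpu_flags();

        if (cpu_flags & AV_CPU_FLAG_MMX)
            dsputil_init_mmx(c, avctx, cpu_flags);
        if (cpu_flags & AV_CPU_FLAG_SSE)
            dsputil_init_sse(c, avctx, cpu_flags);
        if (cpu_flags & AV_CPU_FLAG_SSE2)
            dsputil_init_sse2(c, avctx, cpu_flags);
        /* ...and so on, one flat check per level instead of nested ifs... */
    }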
In most places where it is used, it is a pointless write-only field. Only the
rv10 decoder actually reads it, and there it merely holds some internal
version info. There is no reason for this to be a public field.
Use CODEC_CAP_DELAY and CODEC_CAP_SMALL_LAST_FRAME to properly pad and flush
the encoder at the end of encoding. This is needed in order to have all input
samples decoded.
Use CODEC_CAP_DELAY and CODEC_CAP_SMALL_LAST_FRAME to properly pad and flush
the encoder at the end of encoding. This is needed in order to have all input
samples decoded.
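On the caller side the flush behind CODEC_CAP_DELAY looks roughly like this,
assuming the avcodec_encode_audio2() API of the time (write_packet() is a
hypothetical muxing helper):

    AVPacket pkt;
    int got_packet;

    /* after the last real frame, keep feeding NULL frames; the encoder
     * drains its buffered samples, and CODEC_CAP_SMALL_LAST_FRAME allows
     * the final frame to be shorter than frame_size */
    do {
        av_init_packet(&pkt);
        pkt.data = NULL;
        pkt.size = 0;
        if (avcodec_encode_audio2(avctx, &pkt, NULL, &got_packet) < 0)
            break;
        if (got_packet)
            write_packet(&pkt);
    } while (got_packet);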
We need to set ms_stereo in encode_init() in order to avoid incorrectly
encoding the first frame as non-m/s while flagging it as m/s. Fixes an
uncomfortable pop in the left channel at the start of playback.
CC: libav-stable@libav.org
Currently we have an assert() that prevents the frame from being too large,
but it is more user-friendly to give an error message instead of aborting on
assert(). This condition is quite unlikely due to the minimum bit rate check
in encode_init(), but it is still worth having.
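A sketch of the kind of check meant here (variable names, the message and the
error code are illustrative):

    if (frame_size > buf_size) {
        av_log(avctx, AV_LOG_ERROR,
               "output buffer is too small for the encoded frame\n");
        return AVERROR(EINVAL);
    }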
The maximum theoretical frame size is around 17000 bytes. Although in
practice it will generally be much smaller, we require a larger buffer
just to be safe.
CC: libav-stable@libav.org
ff_wma_init() allows up to 50kHz, but this generates an exponent band
size table that requires 65 bands. The code assumes 25 bands in many
places, and using sample rates higher than 48kHz will lead to buffer
overwrites.
CC: libav-stable@libav.org
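A sketch of the corresponding guard (the log message is illustrative):

    if (avctx->sample_rate > 48000) {
        av_log(avctx, AV_LOG_ERROR,
               "sample rate %d Hz is not supported, maximum is 48000 Hz\n",
               avctx->sample_rate);
        return AVERROR(EINVAL);
    }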
This is near the theoretical limit for wma frame size and is the most that
our decoder can handle. Allowing higher bit rates will just end up padding
each frame with empty bytes.
Fixes invalid writes for avconv when using very high bit rates.
CC: libav-stable@libav.org
The time base is 1 / sample_rate, not 90000.
Several more codecs encode the sample count in the first 4 bytes of the
chunk, so we set the durations accordingly. Also, we can set start_time and
packet duration instead of keeping track of the sample count in the demuxer.
Fixes timestamp calculation.
The FATE reference is updated because timestamp calculations are now more
accurate. Previous timestamps were based on average bit rate.
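A sketch combining the two points above (variable names are illustrative;
avpriv_set_pts_info() and AV_RL32() are the usual lavf/lavu helpers):

    /* one tick per sample, not a 90 kHz clock */
    avpriv_set_pts_info(st, 64, 1, sample_rate);

    /* when the first 4 bytes of the chunk carry the sample count, use it
     * directly as the packet duration and accumulate the pts */
    pkt->duration = AV_RL32(chunk_header);
    pkt->pts      = next_pts;
    next_pts     += pkt->duration;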
When reading sequentially, we are using the actual flag from the previous
frame, but when seeking we do not know what the previous window flag was, so
we need to read it from the bitstream.