WAV is not a NOHEADER format, and thus should not change stream codec IDs
or re-probe in read_packet.
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This fixes issues if the permutation changes, as the quantization tables would need to be re-read.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Currently, if the movie source filter is used and a seek_point is
specified on a file that has a negative start time, ffmpeg will fail.
An easy way to reproduce this is as follows:
$ ffmpeg -vsync passthrough -filter_complex 'color=d=10,setpts=PTS-1/TB' test.mp4
$ ffmpeg -filter_complex 'movie=filename=test.mp4:seek_point=2' -f null -
The problem is caused by checking for int64_t overflow the wrong way.
In general, to check whether a + b overflows, it is not enough to do:
a > INT64_MAX - b
because b might be negative; the correct way is:
b > 0 && a > INT64_MAX - b
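
A minimal C sketch of the check described above (the helper name is
illustrative, not code from the tree):

#include <stdint.h>

/* Return 1 if a + b would overflow int64_t, 0 otherwise. */
static int add_overflows_int64(int64_t a, int64_t b)
{
    if (b > 0 && a > INT64_MAX - b)
        return 1; /* overflow towards +inf */
    if (b < 0 && a < INT64_MIN - b)
        return 1; /* symmetric case, overflow towards -inf */
    return 0;
}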
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The AVCodecContext is only used for logging, so take any valid log context instead.
This allows reusing the exif functions more easily in avformat.
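
As a rough illustration (names are illustrative, not the actual exif.c
prototypes): av_log() only needs a pointer to a struct whose first field
is an AVClass *, so a helper that merely logs can take a generic context:

#include "libavutil/log.h"

/* Illustrative helper: any AVClass-enabled context (AVCodecContext,
 * AVFormatContext, ...) can be passed as logctx. */
static void report_bad_tag(void *logctx, unsigned tag)
{
    av_log(logctx, AV_LOG_WARNING, "Ignoring unknown EXIF tag 0x%04x\n", tag);
}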
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
CMVideoFormatDescriptionGetH264ParameterSetAtIndex() fails on some
hardware/OS versions when retrieving the parameter set count alone.
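
A sketch of the workaround described above (not the literal vtenc code):
request parameter set 0 together with the count in one call, instead of
asking for the count alone.

#include <CoreMedia/CoreMedia.h>

/* Query the H.264 parameter set count by also retrieving parameter set 0,
 * since a count-only query fails on some hardware/OS versions. */
static OSStatus get_ps_count(CMFormatDescriptionRef fmt, size_t *count)
{
    const uint8_t *ps;
    size_t ps_size;

    return CMVideoFormatDescriptionGetH264ParameterSetAtIndex(
        fmt, 0, &ps, &ps_size, count, NULL);
}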
Signed-off-by: Rick Kern <kernrj@gmail.com>
Signed-off-by: wm4 <nfxjfg@googlemail.com>
Fixes crash in #5352. VTCompressionSessionInvalidate() crashes if the
internal encoder hasn't completed, but hasn't experienced an error.
The function call isn't needed since the encoder is invalidated when
the reference count reaches 0 anyway.
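
Roughly, the cleanup reduces to the following sketch (not the exact vtenc
code):

#include <VideoToolbox/VideoToolbox.h>

/* Releasing the last reference invalidates the session anyway, so the
 * explicit VTCompressionSessionInvalidate() call can simply be dropped. */
static void close_session(VTCompressionSessionRef session)
{
    if (!session)
        return;
    CFRelease(session);
}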
Signed-off-by: Rick Kern <kernrj@gmail.com>
Signed-off-by: wm4 <nfxjfg@googlemail.com>
lavc/utils already rescales avpkt->pts to sub->pts in AV_TIME_BASE_Q
before calling the decode callback. This removes the need to rescale again
inside the decoder, and avoids using avctx->time_base, which will
disappear in the upcoming codecpar merge.
This commit also replaces the use of "20 centisecond" (ass time base)
with "200 ms".
This is added in 10.11, so we add a #define when building against older SDKs.
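
The pattern is roughly the following (the key name below is a placeholder,
not the constant actually added in the 10.11 SDK):

#include <AvailabilityMacros.h>
#include <CoreFoundation/CoreFoundation.h>

/* Placeholder: when building against an SDK older than 10.11 the header
 * does not declare the key, so provide the literal string ourselves. */
#if MAC_OS_X_VERSION_MAX_ALLOWED < 101100
#define kVTExampleNewKey CFSTR("ExampleNewKey")
#endif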
The decoder actually supports 7.1-channel eac3, but since the parser only
reports 6 channels, we end up decoding the 5.1 downmix (same as the internal
decoder) for now.
- ADTS-formatted AAC didn't work
- Channel layouts were never exported
- Channel mappings were incorrect beyond stereo
- Channel counts weren't updated after packets were decoded
- Timestamps were exported incorrectly
The build failure here is caused by the enum value not being defined, but
as long as we're on a newer SDK that has it, it's safe to use it even
when our deployment target is older. Setting the property will error, but
we're not failing on errors there.
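
In practice that looks something like this sketch (not the literal vtenc
code): the status from VTSessionSetProperty() is reported but not treated
as fatal.

#include <stdio.h>
#include <VideoToolbox/VideoToolbox.h>

/* Best-effort property set: warn on failure instead of aborting init,
 * so running on an older OS that rejects the key keeps working. */
static void set_property_best_effort(VTCompressionSessionRef session,
                                     CFStringRef key, CFTypeRef value)
{
    OSStatus status = VTSessionSetProperty(session, key, value);
    if (status != noErr)
        fprintf(stderr, "Could not set property, error %d (ignored)\n",
                (int)status);
}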
- size variables were used in a confusing way
- incorrect size var use led to channel layouts not being set properly
- channel layouts were incorrectly mapped for >2-channel AAC
- bitrates not accepted by the encoder were discarded instead of being clamped
- some minor style/indentation fixes
* commit 'a8068346e48e123f8d3bdf4d64464d81e53e5fc7':
lavc: add a variant of av_get_audio_frame_duration working with AVCodecParameters
Fixes from jamrial incorporated.
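
A hedged usage sketch of the new variant (assuming st->codecpar has been
filled by the demuxer):

#include "libavformat/avformat.h"

/* Duration of a compressed audio packet in samples, computed from
 * AVCodecParameters instead of an AVCodecContext. */
static int packet_duration_samples(const AVStream *st, const AVPacket *pkt)
{
    return av_get_audio_frame_duration2(st->codecpar, pkt->size);
}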
Merged-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
* commit '998e1b8f521b73e1ed3a13caaabcf79eb401cf0d':
lavc: add codec parameters API
Fixes added in:
- bit_rate has been made int64_t to match.
- profile and level are properly initialized.
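
For context, a minimal caller-side sketch of the new API (hedged example,
not code from the merge itself):

#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"

/* Open a decoder for a demuxed stream from its AVCodecParameters instead
 * of copying fields out of the deprecated AVStream->codec. */
static AVCodecContext *open_decoder_from_stream(const AVStream *st)
{
    AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);
    AVCodecContext *ctx = avcodec_alloc_context3(dec);

    if (!ctx)
        return NULL;
    if (avcodec_parameters_to_context(ctx, st->codecpar) < 0 ||
        avcodec_open2(ctx, dec, NULL) < 0) {
        avcodec_free_context(&ctx);
        return NULL;
    }
    return ctx;
}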
Merged-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>