Compare commits

110 Commits

| SHA1 |
|---|
| faac8e4331 |
| 5be683d687 |
| 96c1421627 |
| 8a59dbbc68 |
| 0683ef4b50 |
| b420354a8b |
| 1d0f9c92c5 |
| d846d3e88e |
| 621f1a2e63 |
| 9a241d95ef |
| 41b15817ce |
| 07ea57c5bb |
| 84fedd3be7 |
| 74fb9df48b |
| 8c0fd44317 |
| 6f02d93c0f |
| 9333ee7c0d |
| 3e30424961 |
| fe2df122b4 |
| f96fdb46b7 |
| 7fa861dfe0 |
| 48f616ceee |
| 8968de6c61 |
| 043cb40bec |
| 76b289bcf2 |
| 52ba406b94 |
| 071eb56a6a |
| 2f67222780 |
| a376ef4a17 |
| 43fdd89a3f |
| 60f2f332a3 |
| 7e05c70bb0 |
| b46840475e |
| d0599a3516 |
| 742d7e9a6e |
| eb6f2a183a |
| 1e86b7108e |
| 4be1cc7b1d |
| 61dbd3f3d0 |
| 38d6ff31b7 |
| b0cd6fb590 |
| 50fd06ea32 |
| 808d5444c4 |
| 6915dd49c7 |
| c657b08fd7 |
| 6d14bea8b5 |
| 749cd89ca9 |
| 2b408d257f |
| 8d853dc341 |
| 86960b1101 |
| 94354e368d |
| 93a0682b1d |
| 0e16c3843a |
| 819955f0c6 |
| b36bda3c82 |
| 25b8d52fdd |
| 1a2aaa7497 |
| f18fc45d18 |
| 2684ff3573 |
| ea0f616a57 |
| 51b911e948 |
| bb508ddb8b |
| 9246eb1ec5 |
| c5b2ef3bdf |
| 07df052d8d |
| 9bb7e2bd90 |
| 90fa2460c0 |
| 50f5037947 |
| 21533730fc |
| 032476f830 |
| 57c7922331 |
| 73dd8f0a24 |
| 55637b2e5e |
| e6bc1fe10c |
| e6b18f5700 |
| 3791436eb5 |
| 61147f58ab |
| 29e435ca33 |
| 0540d5c5fc |
| 7f97231d97 |
| 4005a71def |
| 429347afa7 |
| a81b6a662a |
| 6168fe32f1 |
| 711374b626 |
| 9a63a36dc6 |
| db1a99a209 |
| fe8c81a0f3 |
| 728051d9b1 |
| 99d58a0da4 |
| 9783f9fb98 |
| 2ed0a77b7b |
| 804e1e1610 |
| 99d2d1404c |
| cb1c9294f3 |
| 84341627d7 |
| 4f694182e0 |
| a2cfb784fb |
| 6faf18acbd |
| 96807933d8 |
| 4ef32aa2a6 |
| 727730e279 |
| 5a829ee69e |
| 7fe22c3fe6 |
| c7565b143c |
| 303ecfc373 |
| 1456ed2dd5 |
| 79c9d9b134 |
| d0aa3d13fa |
| e5cc73e0a5 |

Changelog (83 changed lines)
@@ -1,6 +1,89 @@
Entries are sorted chronologically from oldest to youngest within each release,
releases are sorted from youngest to oldest.

version 2.5.6
- avcodec/atrac3plusdsp: fix on stack alignment
- ac3: validate end in ff_ac3_bit_alloc_calc_mask
- aacpsy: avoid psy_band->threshold becoming NaN
- aasc: return correct buffer size from aasc_decode_frame
- msrledec: use signed pixel_ptr in msrle_decode_pal4
- swresample: Allow reinitialization without ever setting channel layouts (cherry picked from commit 80a28c7509a11114e1aea5b208d56c6646d69c07)
- swresample: Allow reinitialization without ever setting channel counts
- avcodec/h264: Do not fail with randomly truncated VUIs
- avcodec/h264_ps: Move truncation check from VUI to SPS
- avcodec/h264: Be more tolerant to changing pps id between slices
- avcodec/aacdec: Fix storing state before PCE decode
- avcodec/h264: reset the counts in the correct context
- avcodec/h264_slice: Do not reset mb_aff_frame per slice
- avcodec/h264: finish previous slices before switching to single thread mode
- avcodec/h264: Fix race between slices where one overwrites data from the next
- avcodec/h264_refs: Do not set reference to things which do not exist
- avcodec/h264: Fail for invalid mixed IDR / non IDR frames in slice threading mode
- h264: avoid unnecessary calls to get_format
- avcodec/msrledec: restructure msrle_decode_pal4() based on the line number instead of the pixel pointer

version 2.5.5:
- vp9: make above buffer pointer 32-byte aligned.
- avcodec/dnxhddec: Check that the frame is interlaced before using cur_field
- avformat/mov: Disallow ".." in dref unless use_absolute_path is set
- avformat/mov: Check for string truncation in mov_open_dref()
- avformat/mov: Use sizeof(filename) instead of a literal number
- eac3dec: fix scaling
- ac3_fixed: fix computation of spx_noise_blend
- ac3_fixed: fix out-of-bound read
- ac3dec_fixed: always use the USE_FIXED=1 variant of the AC3DecodeContext
- avcodec/012v: redesign main loop
- avcodec/012v: Check dimensions more completely
- asfenc: fix leaking asf->index_ptr on error
- avcodec/options_table: remove extradata_size from the AVOptions table
- ffmdec: limit the backward seek to the last resync position
- ffmdec: make sure the time base is valid
- ffmdec: fix infinite loop at EOF
- ffmdec: initialize f_cprv, f_stvi and f_stau
- avformat/rm: limit packet size
- avcodec/webp: validate the distance prefix code
- avcodec/rv10: check size of s->mb_width * s->mb_height
- eamad: check for out of bounds read
- mdec: check for out of bounds read
- arm: Suppress tags about used cpu arch and extensions
- aic: Fix decoding files with odd dimensions
- avcodec/tiff: move bpp check to after "end:"
- mxfdec: Fix the error handling for when strftime fails
- avcodec/opusdec: Fix delayed sample value
- avcodec/opusdec: Clear out pointers per packet
- avcodec/utils: Align YUV411 by as much as the other YUV variants
- vp9: fix segmentation map retention with threading enabled.
- webp: ensure that each transform is only used once
- doc/protocols/tcp: fix units of listen_timeout option value, from microseconds to milliseconds
- fix VP9 packet decoder returning 0 instead of the used data size
- avformat/flvenc: check that the codec_tag fits in the available bits
- avcodec/utils: use correct printf specifier in ff_set_sar
- avutil/imgutils: correctly check for negative SAR components
- swscale/utils: clear formatConvBuffer on allocation
- avformat/bit: only accept the g729 codec and 1 channel
- avformat/bit: check that pkt->size is 10 in write_packet
- avformat/adxdec: check avctx->channels for invalid values
- avformat/adxdec: set avctx->channels in adx_read_header
- Fix buffer_size argument to init_put_bits() in multiple encoders.
- mips/acelp_filters: fix incorrect register constraint
- avcodec/hevc_ps: Sanity checks for some log2_* values
- avcodec/zmbv: Check len before reading in decode_frame()
- avcodec/h264: Only reinit quant tables if a new PPS is allowed
- avcodec/snowdec: Fix ref value check
- swscale/utils: More carefully merge and clear coefficients outside the input
- avcodec/a64multienc: Assert that the Packet size does not grow
- avcodec/a64multienc: simplify frame handling code
- avcodec/a64multienc: fix use of uninitialized values in to_meta_with_crop
- avcodec/a64multienc: initialize mc_meta_charset to zero
- avcodec/a64multienc: don't set incorrect packet size
- avcodec/a64multienc: use av_frame_ref instead of copying the frame
- avcodec/x86/mlpdsp_init: Simplify mlp_filter_channel_x86()
- h264: initialize H264Context.avctx in init_thread_copy
- wtvdec: fix integer overflow resulting in errors with large files
- avcodec/gif: fix off by one in column offsetting finding

version 2.5.4:
- avcodec/arm/videodsp_armv5te: Fix linking failure with shared libs
- avcodec/mjpegdec: Skip blocks which are outside the visible area

configure (6 changed lines, vendored)

@@ -1760,6 +1760,7 @@ SYSTEM_FUNCS="
TOOLCHAIN_FEATURES="
as_dn_directive
as_func
as_object_arch
asm_mod_q
attribute_may_alias
attribute_packed
@@ -4519,6 +4520,11 @@ EOF
check_as <<EOF && enable as_dn_directive
ra .dn d0.i16
.unreq ra
EOF

# llvm's integrated assembler supports .object_arch from llvm 3.5
[ "$objformat" = elf ] && check_as <<EOF && enable as_object_arch
.object_arch armv4
EOF

[ $target_os != win32 ] && enabled_all armv6t2 shared !pic && enable_weak_pic

@@ -31,7 +31,7 @@ PROJECT_NAME = FFmpeg
# This could be handy for archiving the generated documentation or
# if some version control system is used.

PROJECT_NUMBER = 2.5.4
PROJECT_NUMBER = 2.5.6

# With the PROJECT_LOGO tag one can specify a logo or icon that is included
# in the documentation. The maximum height of the logo should not exceed 55

@@ -298,7 +298,7 @@ FFmpeg has a @url{http://ffmpeg.org/ffmpeg-protocols.html#concat,
@code{concat}} protocol designed specifically for that, with examples in the
documentation.

A few multimedia containers (MPEG-1, MPEG-2 PS, DV) allow to concatenate
A few multimedia containers (MPEG-1, MPEG-2 PS, DV) allow one to concatenate
video by merely concatenating the files containing them.

Hence you may concatenate your multimedia files by first transcoding them to

@@ -71,7 +71,7 @@ the HTTP server (configured through the @option{HTTPPort} option), and
configuration file.

Each feed is associated to a file which is stored on disk. This stored
file is used to allow to send pre-recorded data to a player as fast as
file is used to send pre-recorded data to a player as fast as
possible when new content is added in real-time to the stream.

A "live-stream" or "stream" is a resource published by

@@ -253,10 +253,14 @@ Possible flags for this option are:
@item sse4.1
@item sse4.2
@item avx
@item avx2
@item xop
@item fma3
@item fma4
@item 3dnow
@item 3dnowext
@item bmi1
@item bmi2
@item cmov
@end table
@item ARM
@@ -267,6 +271,13 @@ Possible flags for this option are:
@item vfp
@item vfpv3
@item neon
@item setend
@end table
@item AArch64
@table @samp
@item armv8
@item vfp
@item neon
@end table
@item PowerPC
@table @samp

@@ -3378,7 +3378,7 @@ Set number overlapping pixels for each block. Since the filter can be slow, you
may want to reduce this value, at the cost of a less effective filter and the
risk of various artefacts.

If the overlapping value doesn't allow to process the whole input width or
If the overlapping value doesn't permit processing the whole input width or
height, a warning will be displayed and according borders won't be denoised.

Default value is @var{blocksize}-1, which is the best possible setting.

@@ -23,7 +23,7 @@ Reduce buffering.

@item probesize @var{integer} (@emph{input})
Set probing size in bytes, i.e. the size of the data to analyze to get
stream information. A higher value will allow to detect more
stream information. A higher value will enable detecting more
information in case it is dispersed into the stream, but will increase
latency. Must be an integer not lesser than 32. It is 5000000 by default.

@@ -67,7 +67,7 @@ Default is 0.

@item analyzeduration @var{integer} (@emph{input})
Specify how many microseconds are analyzed to probe the input. A
higher value will allow to detect more accurate information, but will
higher value will enable detecting more accurate information, but will
increase latency. It defaults to 5,000,000 microseconds = 5 seconds.

@item cryptokey @var{hexadecimal string} (@emph{input})
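As a purely illustrative command (not part of this patch set), both options can be raised on an input that is hard to probe, e.g. `ffmpeg -probesize 5000000 -analyzeduration 5000000 -i input.ts out.mp4`; the values shown simply restate the documented defaults of 5000000 bytes and 5000000 microseconds.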
@@ -1,7 +1,7 @@
@chapter Input Devices
@c man begin INPUT DEVICES

Input devices are configured elements in FFmpeg which allow to access
Input devices are configured elements in FFmpeg which enable accessing
the data coming from a multimedia device attached to your system.

When you configure your FFmpeg build, all the supported input devices

@@ -1081,8 +1081,8 @@ Set raise error timeout, expressed in microseconds.
This option is only relevant in read mode: if no data arrived in more
than this time interval, raise error.

@item listen_timeout=@var{microseconds}
Set listen timeout, expressed in microseconds.
@item listen_timeout=@var{milliseconds}
Set listen timeout, expressed in milliseconds.
@end table

The following example shows how to setup a listening TCP connection
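As an illustrative, hypothetical URL for the corrected wording above (not taken from the patch), opening `tcp://127.0.0.1:1234?listen=1&listen_timeout=5000` would wait five seconds for an incoming connection, since the value is now documented as milliseconds rather than microseconds.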
@@ -844,7 +844,7 @@ Return 1.0 if @var{x} is +/-INFINITY, 0.0 otherwise.
Return 1.0 if @var{x} is NAN, 0.0 otherwise.

@item ld(var)
Allow to load the value of the internal variable with number
Load the value of the internal variable with number
@var{var}, which was previously stored with st(@var{var}, @var{expr}).
The function returns the loaded value.

@@ -912,7 +912,7 @@ Compute the square root of @var{expr}. This is equivalent to
Compute expression @code{1/(1 + exp(4*x))}.

@item st(var, expr)
Allow to store the value of the expression @var{expr} in an internal
Store the value of the expression @var{expr} in an internal
variable. @var{var} specifies the number of the variable where to
store the value, and it is a value ranging from 0 to 9. The function
returns the value stored in the internal variable.
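A short worked example for the reworded st()/ld() descriptions above (not taken from the patch): the expression `st(0, 2) + ld(0)` evaluates to 4, because `st(0, 2)` stores 2 in internal variable 0 and returns it, and `ld(0)` then loads that same value back.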
ffmpeg.c (10 changed lines)

@@ -2620,11 +2620,13 @@ static int transcode_init(void)
enc_ctx->rc_max_rate = dec_ctx->rc_max_rate;
enc_ctx->rc_buffer_size = dec_ctx->rc_buffer_size;
enc_ctx->field_order = dec_ctx->field_order;
enc_ctx->extradata = av_mallocz(extra_size);
if (!enc_ctx->extradata) {
return AVERROR(ENOMEM);
if (dec_ctx->extradata_size) {
enc_ctx->extradata = av_mallocz(extra_size);
if (!enc_ctx->extradata) {
return AVERROR(ENOMEM);
}
memcpy(enc_ctx->extradata, dec_ctx->extradata, dec_ctx->extradata_size);
}
memcpy(enc_ctx->extradata, dec_ctx->extradata, dec_ctx->extradata_size);
enc_ctx->extradata_size= dec_ctx->extradata_size;
enc_ctx->bits_per_coded_sample = dec_ctx->bits_per_coded_sample;

@@ -38,15 +38,15 @@ static av_cold int zero12v_decode_init(AVCodecContext *avctx)
static int zero12v_decode_frame(AVCodecContext *avctx, void *data,
int *got_frame, AVPacket *avpkt)
{
int line = 0, ret;
int line, ret;
const int width = avctx->width;
AVFrame *pic = data;
uint16_t *y, *u, *v;
const uint8_t *line_end, *src = avpkt->data;
int stride = avctx->width * 8 / 3;

if (width == 1) {
av_log(avctx, AV_LOG_ERROR, "Width 1 not supported.\n");
if (width <= 1 || avctx->height <= 0) {
av_log(avctx, AV_LOG_ERROR, "Dimensions %dx%d not supported.\n", width, avctx->height);
return AVERROR_INVALIDDATA;
}

@@ -67,45 +67,45 @@ static int zero12v_decode_frame(AVCodecContext *avctx, void *data,
pic->pict_type = AV_PICTURE_TYPE_I;
pic->key_frame = 1;

y = (uint16_t *)pic->data[0];
u = (uint16_t *)pic->data[1];
v = (uint16_t *)pic->data[2];
line_end = avpkt->data + stride;
for (line = 0; line < avctx->height; line++) {
uint16_t y_temp[6] = {0x8000, 0x8000, 0x8000, 0x8000, 0x8000, 0x8000};
uint16_t u_temp[3] = {0x8000, 0x8000, 0x8000};
uint16_t v_temp[3] = {0x8000, 0x8000, 0x8000};
int x;
y = (uint16_t *)(pic->data[0] + line * pic->linesize[0]);
u = (uint16_t *)(pic->data[1] + line * pic->linesize[1]);
v = (uint16_t *)(pic->data[2] + line * pic->linesize[2]);

while (line++ < avctx->height) {
while (1) {
uint32_t t = AV_RL32(src);
for (x = 0; x < width; x += 6) {
uint32_t t;

if (width - x < 6 || line_end - src < 16) {
y = y_temp;
u = u_temp;
v = v_temp;
}

if (line_end - src < 4)
break;

t = AV_RL32(src);
src += 4;
*u++ = t << 6 & 0xFFC0;
*y++ = t >> 4 & 0xFFC0;
*v++ = t >> 14 & 0xFFC0;

if (src >= line_end - 1) {
*y = 0x80;
src++;
line_end += stride;
y = (uint16_t *)(pic->data[0] + line * pic->linesize[0]);
u = (uint16_t *)(pic->data[1] + line * pic->linesize[1]);
v = (uint16_t *)(pic->data[2] + line * pic->linesize[2]);
if (line_end - src < 4)
break;
}

t = AV_RL32(src);
src += 4;
*y++ = t << 6 & 0xFFC0;
*u++ = t >> 4 & 0xFFC0;
*y++ = t >> 14 & 0xFFC0;
if (src >= line_end - 2) {
if (!(width & 1)) {
*y = 0x80;
src += 2;
}
line_end += stride;
y = (uint16_t *)(pic->data[0] + line * pic->linesize[0]);
u = (uint16_t *)(pic->data[1] + line * pic->linesize[1]);
v = (uint16_t *)(pic->data[2] + line * pic->linesize[2]);

if (line_end - src < 4)
break;
}

t = AV_RL32(src);
src += 4;
@@ -113,15 +113,8 @@ static int zero12v_decode_frame(AVCodecContext *avctx, void *data,
*y++ = t >> 4 & 0xFFC0;
*u++ = t >> 14 & 0xFFC0;

if (src >= line_end - 1) {
*y = 0x80;
src++;
line_end += stride;
y = (uint16_t *)(pic->data[0] + line * pic->linesize[0]);
u = (uint16_t *)(pic->data[1] + line * pic->linesize[1]);
v = (uint16_t *)(pic->data[2] + line * pic->linesize[2]);
if (line_end - src < 4)
break;
}

t = AV_RL32(src);
src += 4;
@@ -129,18 +122,21 @@ static int zero12v_decode_frame(AVCodecContext *avctx, void *data,
*v++ = t >> 4 & 0xFFC0;
*y++ = t >> 14 & 0xFFC0;

if (src >= line_end - 2) {
if (width & 1) {
*y = 0x80;
src += 2;
}
line_end += stride;
y = (uint16_t *)(pic->data[0] + line * pic->linesize[0]);
u = (uint16_t *)(pic->data[1] + line * pic->linesize[1]);
v = (uint16_t *)(pic->data[2] + line * pic->linesize[2]);
if (width - x < 6)
break;
}
}

if (x < width) {
y = x + (uint16_t *)(pic->data[0] + line * pic->linesize[0]);
u = x/2 + (uint16_t *)(pic->data[1] + line * pic->linesize[1]);
v = x/2 + (uint16_t *)(pic->data[2] + line * pic->linesize[2]);
memcpy(y, y_temp, sizeof(*y) * (width - x));
memcpy(u, u_temp, sizeof(*u) * (width - x + 1) / 2);
memcpy(v, v_temp, sizeof(*v) * (width - x + 1) / 2);
}

line_end += stride;
src = line_end - stride;
}

*got_frame = 1;

@@ -210,7 +210,7 @@ OBJS-$(CONFIG_DVVIDEO_DECODER) += dvdec.o dv.o dvdata.o
OBJS-$(CONFIG_DVVIDEO_ENCODER) += dvenc.o dv.o dvdata.o
OBJS-$(CONFIG_DXA_DECODER) += dxa.o
OBJS-$(CONFIG_DXTORY_DECODER) += dxtory.o
OBJS-$(CONFIG_EAC3_DECODER) += eac3dec.o eac3_data.o
OBJS-$(CONFIG_EAC3_DECODER) += eac3_data.o
OBJS-$(CONFIG_EAC3_ENCODER) += eac3enc.o eac3_data.o
OBJS-$(CONFIG_EACMV_DECODER) += eacmv.o
OBJS-$(CONFIG_EAMAD_DECODER) += eamad.o eaidct.o mpeg12.o \

@@ -28,6 +28,7 @@
#include "a64tables.h"
#include "elbg.h"
#include "internal.h"
#include "libavutil/avassert.h"
#include "libavutil/common.h"
#include "libavutil/intreadwrite.h"

@@ -65,7 +66,7 @@ static const int mc_colors[5]={0x0,0xb,0xc,0xf,0x1};
//static const int mc_colors[5]={0x0,0x8,0xa,0xf,0x7};
//static const int mc_colors[5]={0x0,0x9,0x8,0xa,0x3};

static void to_meta_with_crop(AVCodecContext *avctx, AVFrame *p, int *dest)
static void to_meta_with_crop(AVCodecContext *avctx, const AVFrame *p, int *dest)
{
int blockx, blocky, x, y;
int luma = 0;
@@ -78,9 +79,13 @@ static void to_meta_with_crop(AVCodecContext *avctx, AVFrame *p, int *dest)
for (y = blocky; y < blocky + 8 && y < C64YRES; y++) {
for (x = blockx; x < blockx + 8 && x < C64XRES; x += 2) {
if(x < width && y < height) {
/* build average over 2 pixels */
luma = (src[(x + 0 + y * p->linesize[0])] +
src[(x + 1 + y * p->linesize[0])]) / 2;
if (x + 1 < width) {
/* build average over 2 pixels */
luma = (src[(x + 0 + y * p->linesize[0])] +
src[(x + 1 + y * p->linesize[0])]) / 2;
} else {
luma = src[(x + y * p->linesize[0])];
}
/* write blocks as linear data now so they are suitable for elbg */
dest[0] = luma;
}
@@ -186,7 +191,6 @@ static void render_charset(AVCodecContext *avctx, uint8_t *charset,
static av_cold int a64multi_close_encoder(AVCodecContext *avctx)
{
A64Context *c = avctx->priv_data;
av_frame_free(&avctx->coded_frame);
av_freep(&c->mc_meta_charset);
av_freep(&c->mc_best_cb);
av_freep(&c->mc_charset);
@@ -220,7 +224,7 @@ static av_cold int a64multi_encode_init(AVCodecContext *avctx)
a64_palette[mc_colors[a]][2] * 0.11;
}

if (!(c->mc_meta_charset = av_malloc_array(c->mc_lifetime, 32000 * sizeof(int))) ||
if (!(c->mc_meta_charset = av_mallocz_array(c->mc_lifetime, 32000 * sizeof(int))) ||
!(c->mc_best_cb = av_malloc(CHARSET_CHARS * 32 * sizeof(int))) ||
!(c->mc_charmap = av_mallocz_array(c->mc_lifetime, 1000 * sizeof(int))) ||
!(c->mc_colram = av_mallocz(CHARSET_CHARS * sizeof(uint8_t))) ||
@@ -238,14 +242,6 @@ static av_cold int a64multi_encode_init(AVCodecContext *avctx)
AV_WB32(avctx->extradata, c->mc_lifetime);
AV_WB32(avctx->extradata + 16, INTERLACED);

avctx->coded_frame = av_frame_alloc();
if (!avctx->coded_frame) {
a64multi_close_encoder(avctx);
return AVERROR(ENOMEM);
}

avctx->coded_frame->pict_type = AV_PICTURE_TYPE_I;
avctx->coded_frame->key_frame = 1;
if (!avctx->codec_tag)
avctx->codec_tag = AV_RL32("a64m");

@@ -270,10 +266,9 @@ static void a64_compress_colram(unsigned char *buf, int *charmap, uint8_t *colra
}

static int a64multi_encode_frame(AVCodecContext *avctx, AVPacket *pkt,
const AVFrame *pict, int *got_packet)
const AVFrame *p, int *got_packet)
{
A64Context *c = avctx->priv_data;
AVFrame *const p = avctx->coded_frame;

int frame;
int x, y;
@@ -304,7 +299,7 @@ static int a64multi_encode_frame(AVCodecContext *avctx, AVPacket *pkt,
}

/* no data, means end encoding asap */
if (!pict) {
if (!p) {
/* all done, end encoding */
if (!c->mc_lifetime) return 0;
/* no more frames in queue, prepare to flush remaining frames */
@@ -317,13 +312,10 @@ static int a64multi_encode_frame(AVCodecContext *avctx, AVPacket *pkt,
} else {
/* fill up mc_meta_charset with data until lifetime exceeds */
if (c->mc_frame_counter < c->mc_lifetime) {
*p = *pict;
p->pict_type = AV_PICTURE_TYPE_I;
p->key_frame = 1;
to_meta_with_crop(avctx, p, meta + 32000 * c->mc_frame_counter);
c->mc_frame_counter++;
if (c->next_pts == AV_NOPTS_VALUE)
c->next_pts = pict->pts;
c->next_pts = p->pts;
/* lifetime is not reached so wait for next frame first */
return 0;
}
@@ -334,8 +326,8 @@ static int a64multi_encode_frame(AVCodecContext *avctx, AVPacket *pkt,
req_size = 0;
/* any frames to encode? */
if (c->mc_lifetime) {
req_size = charset_size + c->mc_lifetime*(screen_size + colram_size);
if ((ret = ff_alloc_packet2(avctx, pkt, req_size)) < 0)
int alloc_size = charset_size + c->mc_lifetime*(screen_size + colram_size);
if ((ret = ff_alloc_packet2(avctx, pkt, alloc_size)) < 0)
return ret;
buf = pkt->data;

@@ -351,6 +343,7 @@ static int a64multi_encode_frame(AVCodecContext *avctx, AVPacket *pkt,

/* advance pointers */
buf += charset_size;
req_size += charset_size;
}

/* write x frames to buf */
@@ -387,6 +380,7 @@ static int a64multi_encode_frame(AVCodecContext *avctx, AVPacket *pkt,
pkt->pts = pkt->dts = c->next_pts;
c->next_pts = AV_NOPTS_VALUE;

av_assert0(pkt->size >= req_size);
pkt->size = req_size;
pkt->flags |= AV_PKT_FLAG_KEY;
*got_packet = !!req_size;

@@ -425,7 +425,7 @@ static uint64_t sniff_channel_order(uint8_t (*layout_map)[3], int tags)
* Save current output configuration if and only if it has been locked.
*/
static void push_output_configuration(AACContext *ac) {
if (ac->oc[1].status == OC_LOCKED) {
if (ac->oc[1].status == OC_LOCKED || ac->oc[0].status == OC_NONE) {
ac->oc[0] = ac->oc[1];
}
ac->oc[1].status = OC_NONE;
@@ -904,7 +904,7 @@ static int decode_eld_specific_config(AACContext *ac, AVCodecContext *avctx,
if (len == 15 + 255)
len += get_bits(gb, 16);
if (get_bits_left(gb) < len * 8 + 4) {
av_log(ac->avctx, AV_LOG_ERROR, overread_err);
av_log(avctx, AV_LOG_ERROR, overread_err);
return AVERROR_INVALIDDATA;
}
skip_bits_long(gb, 8 * len);

@@ -165,7 +165,7 @@ static void put_audio_specific_config(AVCodecContext *avctx)
PutBitContext pb;
AACEncContext *s = avctx->priv_data;

init_put_bits(&pb, avctx->extradata, avctx->extradata_size*8);
init_put_bits(&pb, avctx->extradata, avctx->extradata_size);
put_bits(&pb, 5, 2); //object type - AAC-LC
put_bits(&pb, 4, s->samplerate_index); //sample rate index
put_bits(&pb, 4, s->channels);
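Several hunks in this compare change `init_put_bits()` calls to pass the buffer size directly rather than the size multiplied by eight. A minimal sketch of the corrected usage, assuming the internal `libavcodec/put_bits.h` API where the third argument is the writable size in bytes; the helper name, field widths, and values below are made up for illustration:

```c
#include "libavcodec/put_bits.h"

/* Hypothetical helper: pack two small fields into dst.
 * init_put_bits() takes the buffer size in BYTES; passing size * 8
 * would overstate the writable area eightfold and defeat the
 * internal bounds checking. */
static void write_example_fields(uint8_t *dst, int dst_size)
{
    PutBitContext pb;

    init_put_bits(&pb, dst, dst_size); /* bytes, not bits */
    put_bits(&pb, 5, 2);               /* a 5-bit field */
    put_bits(&pb, 4, 3);               /* a 4-bit field */
    flush_put_bits(&pb);               /* flush pending bits to dst */
}
```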
@@ -727,7 +727,10 @@ static void psy_3gpp_analyze_channel(FFPsyContext *ctx, int channel,
|
||||
if (active_lines > 0.0f)
|
||||
band->thr = calc_reduced_thr_3gpp(band, coeffs[g].min_snr, reduction);
|
||||
pe += calc_pe_3gpp(band);
|
||||
band->norm_fac = band->active_lines / band->thr;
|
||||
if (band->thr > 0.0f)
|
||||
band->norm_fac = band->active_lines / band->thr;
|
||||
else
|
||||
band->norm_fac = 0.0f;
|
||||
norm_fac += band->norm_fac;
|
||||
}
|
||||
}
|
||||
|
@@ -137,7 +137,7 @@ static int aasc_decode_frame(AVCodecContext *avctx,
|
||||
return ret;
|
||||
|
||||
/* report that the buffer was completely consumed */
|
||||
return buf_size;
|
||||
return avpkt->size;
|
||||
}
|
||||
|
||||
static av_cold int aasc_decode_end(AVCodecContext *avctx)
|
||||
|
@@ -131,6 +131,9 @@ int ff_ac3_bit_alloc_calc_mask(AC3BitAllocParameters *s, int16_t *band_psd,
|
||||
int band_start, band_end, begin, end1;
|
||||
int lowcomp, fastleak, slowleak;
|
||||
|
||||
if (end <= 0)
|
||||
return AVERROR_INVALIDDATA;
|
||||
|
||||
/* excitation function */
|
||||
band_start = ff_ac3_bin_to_band_tab[start];
|
||||
band_end = ff_ac3_bin_to_band_tab[end-1] + 1;
|
||||
|
@@ -872,7 +872,7 @@ static int decode_audio_block(AC3DecodeContext *s, int blk)
|
||||
start_subband += start_subband - 7;
|
||||
end_subband = get_bits(gbc, 3) + 5;
|
||||
#if USE_FIXED
|
||||
s->spx_dst_end_freq = end_freq_inv_tab[end_subband];
|
||||
s->spx_dst_end_freq = end_freq_inv_tab[end_subband-5];
|
||||
#endif
|
||||
if (end_subband > 7)
|
||||
end_subband += end_subband - 7;
|
||||
@@ -939,7 +939,7 @@ static int decode_audio_block(AC3DecodeContext *s, int blk)
|
||||
nblend = 0;
|
||||
sblend = 0x800000;
|
||||
} else if (nratio > 0x7fffff) {
|
||||
nblend = 0x800000;
|
||||
nblend = 14529495; // sqrt(3) in FP.23
|
||||
sblend = 0;
|
||||
} else {
|
||||
nblend = fixed_sqrt(nratio, 23);
|
||||
|
@@ -243,19 +243,19 @@ typedef struct AC3DecodeContext {
|
||||
* Parse the E-AC-3 frame header.
|
||||
* This parses both the bit stream info and audio frame header.
|
||||
*/
|
||||
int ff_eac3_parse_header(AC3DecodeContext *s);
|
||||
static int ff_eac3_parse_header(AC3DecodeContext *s);
|
||||
|
||||
/**
|
||||
* Decode mantissas in a single channel for the entire frame.
|
||||
* This is used when AHT mode is enabled.
|
||||
*/
|
||||
void ff_eac3_decode_transform_coeffs_aht_ch(AC3DecodeContext *s, int ch);
|
||||
static void ff_eac3_decode_transform_coeffs_aht_ch(AC3DecodeContext *s, int ch);
|
||||
|
||||
/**
|
||||
* Apply spectral extension to each channel by copying lower frequency
|
||||
* coefficients to higher frequency bins and applying side information to
|
||||
* approximate the original high frequency signal.
|
||||
*/
|
||||
void ff_eac3_apply_spectral_extension(AC3DecodeContext *s);
|
||||
static void ff_eac3_apply_spectral_extension(AC3DecodeContext *s);
|
||||
|
||||
#endif /* AVCODEC_AC3DEC_H */
|
||||
|
@@ -164,6 +164,7 @@ static void ac3_downmix_c_fixed16(int16_t **samples, int16_t (*matrix)[2],
|
||||
}
|
||||
}
|
||||
|
||||
#include "eac3dec.c"
|
||||
#include "ac3dec.c"
|
||||
|
||||
static const AVOption options[] = {
|
||||
|
@@ -28,6 +28,7 @@
|
||||
* Upmix delay samples from stereo to original channel layout.
|
||||
*/
|
||||
#include "ac3dec.h"
|
||||
#include "eac3dec.c"
|
||||
#include "ac3dec.c"
|
||||
|
||||
static const AVOption options[] = {
|
||||
|
@@ -541,7 +541,7 @@ static int adpcm_encode_frame(AVCodecContext *avctx, AVPacket *avpkt,
|
||||
case AV_CODEC_ID_ADPCM_IMA_QT:
|
||||
{
|
||||
PutBitContext pb;
|
||||
init_put_bits(&pb, dst, pkt_size * 8);
|
||||
init_put_bits(&pb, dst, pkt_size);
|
||||
|
||||
for (ch = 0; ch < avctx->channels; ch++) {
|
||||
ADPCMChannelStatus *status = &c->status[ch];
|
||||
@@ -571,7 +571,7 @@ static int adpcm_encode_frame(AVCodecContext *avctx, AVPacket *avpkt,
|
||||
case AV_CODEC_ID_ADPCM_SWF:
|
||||
{
|
||||
PutBitContext pb;
|
||||
init_put_bits(&pb, dst, pkt_size * 8);
|
||||
init_put_bits(&pb, dst, pkt_size);
|
||||
|
||||
n = frame->nb_samples - 1;
|
||||
|
||||
|
@@ -438,8 +438,8 @@ static av_cold int aic_decode_init(AVCodecContext *avctx)
|
||||
ctx->mb_width = FFALIGN(avctx->width, 16) >> 4;
|
||||
ctx->mb_height = FFALIGN(avctx->height, 16) >> 4;
|
||||
|
||||
ctx->num_x_slices = 16;
|
||||
ctx->slice_width = ctx->mb_width / 16;
|
||||
ctx->num_x_slices = (ctx->mb_width + 15) >> 4;
|
||||
ctx->slice_width = 16;
|
||||
for (i = 1; i < 32; i++) {
|
||||
if (!(ctx->mb_width % i) && (ctx->mb_width / i < 32)) {
|
||||
ctx->slice_width = ctx->mb_width / i;
|
||||
|
@@ -357,11 +357,15 @@ static av_cold int read_specific_config(ALSDecContext *ctx)
|
||||
|
||||
ctx->cs_switch = 1;
|
||||
|
||||
for (i = 0; i < avctx->channels; i++) {
|
||||
sconf->chan_pos[i] = -1;
|
||||
}
|
||||
|
||||
for (i = 0; i < avctx->channels; i++) {
|
||||
int idx;
|
||||
|
||||
idx = get_bits(&gb, chan_pos_bits);
|
||||
if (idx >= avctx->channels) {
|
||||
if (idx >= avctx->channels || sconf->chan_pos[idx] != -1) {
|
||||
av_log(avctx, AV_LOG_WARNING, "Invalid channel reordering.\n");
|
||||
ctx->cs_switch = 0;
|
||||
break;
|
||||
@@ -1286,8 +1290,16 @@ static int revert_channel_correlation(ALSDecContext *ctx, ALSBlockData *bd,
|
||||
|
||||
if (ch[dep].time_diff_sign) {
|
||||
t = -t;
|
||||
if (t > 0 && begin < t) {
|
||||
av_log(ctx->avctx, AV_LOG_ERROR, "begin %u smaller than time diff index %d.\n", begin, t);
|
||||
return AVERROR_INVALIDDATA;
|
||||
}
|
||||
begin -= t;
|
||||
} else {
|
||||
if (t > 0 && end < t) {
|
||||
av_log(ctx->avctx, AV_LOG_ERROR, "end %u smaller than time diff index %d.\n", end, t);
|
||||
return AVERROR_INVALIDDATA;
|
||||
}
|
||||
end -= t;
|
||||
}
|
||||
|
||||
@@ -1727,9 +1739,9 @@ static av_cold int decode_init(AVCodecContext *avctx)
|
||||
|
||||
// allocate and assign channel data buffer for mcc mode
|
||||
if (sconf->mc_coding) {
|
||||
ctx->chan_data_buffer = av_malloc(sizeof(*ctx->chan_data_buffer) *
|
||||
ctx->chan_data_buffer = av_mallocz(sizeof(*ctx->chan_data_buffer) *
|
||||
num_buffers * num_buffers);
|
||||
ctx->chan_data = av_malloc(sizeof(*ctx->chan_data) *
|
||||
ctx->chan_data = av_mallocz(sizeof(*ctx->chan_data) *
|
||||
num_buffers);
|
||||
ctx->reverted_channels = av_malloc(sizeof(*ctx->reverted_channels) *
|
||||
num_buffers);
|
||||
|
@@ -599,8 +599,8 @@ void ff_atrac3p_ipqf(FFTContext *dct_ctx, Atrac3pIPQFChannelCtx *hist,
|
||||
const float *in, float *out)
|
||||
{
|
||||
int i, s, sb, t, pos_now, pos_next;
|
||||
DECLARE_ALIGNED(32, float, idct_in)[ATRAC3P_SUBBANDS];
|
||||
DECLARE_ALIGNED(32, float, idct_out)[ATRAC3P_SUBBANDS];
|
||||
LOCAL_ALIGNED(32, float, idct_in, [ATRAC3P_SUBBANDS]);
|
||||
LOCAL_ALIGNED(32, float, idct_out, [ATRAC3P_SUBBANDS]);
|
||||
|
||||
memset(out, 0, ATRAC3P_FRAME_SAMPLES * sizeof(*out));
|
||||
|
||||
|
@@ -363,7 +363,7 @@ static int dnxhd_decode_macroblock(DNXHDContext *ctx, AVFrame *frame,
|
||||
dest_u = frame->data[1] + ((y * dct_linesize_chroma) << 4) + (x << (3 + shift1 + ctx->is_444));
|
||||
dest_v = frame->data[2] + ((y * dct_linesize_chroma) << 4) + (x << (3 + shift1 + ctx->is_444));
|
||||
|
||||
if (ctx->cur_field) {
|
||||
if (frame->interlaced_frame && ctx->cur_field) {
|
||||
dest_y += frame->linesize[0];
|
||||
dest_u += frame->linesize[1];
|
||||
dest_v += frame->linesize[2];
|
||||
|
@@ -63,7 +63,7 @@ typedef enum {
|
||||
|
||||
#define EAC3_SR_CODE_REDUCED 3
|
||||
|
||||
void ff_eac3_apply_spectral_extension(AC3DecodeContext *s)
|
||||
static void ff_eac3_apply_spectral_extension(AC3DecodeContext *s)
|
||||
{
|
||||
int bin, bnd, ch, i;
|
||||
uint8_t wrapflag[SPX_MAX_BANDS]={1,0,}, num_copy_sections, copy_sizes[SPX_MAX_BANDS];
|
||||
@@ -101,7 +101,7 @@ void ff_eac3_apply_spectral_extension(AC3DecodeContext *s)
|
||||
for (i = 0; i < num_copy_sections; i++) {
|
||||
memcpy(&s->transform_coeffs[ch][bin],
|
||||
&s->transform_coeffs[ch][s->spx_dst_start_freq],
|
||||
copy_sizes[i]*sizeof(float));
|
||||
copy_sizes[i]*sizeof(INTFLOAT));
|
||||
bin += copy_sizes[i];
|
||||
}
|
||||
|
||||
@@ -124,7 +124,7 @@ void ff_eac3_apply_spectral_extension(AC3DecodeContext *s)
|
||||
bin = s->spx_src_start_freq - 2;
|
||||
for (bnd = 0; bnd < s->num_spx_bands; bnd++) {
|
||||
if (wrapflag[bnd]) {
|
||||
float *coeffs = &s->transform_coeffs[ch][bin];
|
||||
INTFLOAT *coeffs = &s->transform_coeffs[ch][bin];
|
||||
coeffs[0] *= atten_tab[0];
|
||||
coeffs[1] *= atten_tab[1];
|
||||
coeffs[2] *= atten_tab[2];
|
||||
@@ -142,6 +142,11 @@ void ff_eac3_apply_spectral_extension(AC3DecodeContext *s)
|
||||
for (bnd = 0; bnd < s->num_spx_bands; bnd++) {
|
||||
float nscale = s->spx_noise_blend[ch][bnd] * rms_energy[bnd] * (1.0f / INT32_MIN);
|
||||
float sscale = s->spx_signal_blend[ch][bnd];
|
||||
#if USE_FIXED
|
||||
// spx_noise_blend and spx_signal_blend are both FP.23
|
||||
nscale *= 1.0 / (1<<23);
|
||||
sscale *= 1.0 / (1<<23);
|
||||
#endif
|
||||
for (i = 0; i < s->spx_band_sizes[bnd]; i++) {
|
||||
float noise = nscale * (int32_t)av_lfg_get(&s->dith_state);
|
||||
s->transform_coeffs[ch][bin] *= sscale;
|
||||
@@ -195,7 +200,7 @@ static void idct6(int pre_mant[6])
|
||||
pre_mant[5] = even0 - odd0;
|
||||
}
|
||||
|
||||
void ff_eac3_decode_transform_coeffs_aht_ch(AC3DecodeContext *s, int ch)
|
||||
static void ff_eac3_decode_transform_coeffs_aht_ch(AC3DecodeContext *s, int ch)
|
||||
{
|
||||
int bin, blk, gs;
|
||||
int end_bap, gaq_mode;
|
||||
@@ -288,7 +293,7 @@ void ff_eac3_decode_transform_coeffs_aht_ch(AC3DecodeContext *s, int ch)
|
||||
}
|
||||
}
|
||||
|
||||
int ff_eac3_parse_header(AC3DecodeContext *s)
|
||||
static int ff_eac3_parse_header(AC3DecodeContext *s)
|
||||
{
|
||||
int i, blk, ch;
|
||||
int ac3_exponent_strategy, parse_aht_info, parse_spx_atten_data;
|
||||
|
@@ -151,6 +151,11 @@ static inline int decode_block_intra(MadContext *s, int16_t * block)
|
||||
break;
|
||||
} else if (level != 0) {
|
||||
i += run;
|
||||
if (i > 63) {
|
||||
av_log(s->avctx, AV_LOG_ERROR,
|
||||
"ac-tex damaged at %d %d\n", s->mb_x, s->mb_y);
|
||||
return -1;
|
||||
}
|
||||
j = scantable[i];
|
||||
level = (level*quant_matrix[j]) >> 4;
|
||||
level = (level-1)|1;
|
||||
@@ -165,6 +170,11 @@ static inline int decode_block_intra(MadContext *s, int16_t * block)
|
||||
run = SHOW_UBITS(re, &s->gb, 6)+1; LAST_SKIP_BITS(re, &s->gb, 6);
|
||||
|
||||
i += run;
|
||||
if (i > 63) {
|
||||
av_log(s->avctx, AV_LOG_ERROR,
|
||||
"ac-tex damaged at %d %d\n", s->mb_x, s->mb_y);
|
||||
return -1;
|
||||
}
|
||||
j = scantable[i];
|
||||
if (level < 0) {
|
||||
level = -level;
|
||||
@@ -176,10 +186,6 @@ static inline int decode_block_intra(MadContext *s, int16_t * block)
|
||||
level = (level-1)|1;
|
||||
}
|
||||
}
|
||||
if (i > 63) {
|
||||
av_log(s->avctx, AV_LOG_ERROR, "ac-tex damaged at %d %d\n", s->mb_x, s->mb_y);
|
||||
return -1;
|
||||
}
|
||||
|
||||
block[j] = level;
|
||||
}
|
||||
|
@@ -251,7 +251,7 @@ static void put_line(uint8_t *dst, int size, int width, const int *runs)
|
||||
PutBitContext pb;
|
||||
int run, mode = ~0, pix_left = width, run_idx = 0;
|
||||
|
||||
init_put_bits(&pb, dst, size * 8);
|
||||
init_put_bits(&pb, dst, size);
|
||||
while (pix_left > 0) {
|
||||
run = runs[run_idx++];
|
||||
mode = ~mode;
|
||||
|
@@ -287,7 +287,7 @@ static int write_header(FlashSV2Context * s, uint8_t * buf, int buf_size)
|
||||
if (buf_size < 5)
|
||||
return -1;
|
||||
|
||||
init_put_bits(&pb, buf, buf_size * 8);
|
||||
init_put_bits(&pb, buf, buf_size);
|
||||
|
||||
put_bits(&pb, 4, (s->block_width >> 4) - 1);
|
||||
put_bits(&pb, 12, s->image_width);
|
||||
|
@@ -151,7 +151,7 @@ static int encode_bitstream(FlashSVContext *s, const AVFrame *p, uint8_t *buf,
|
||||
int buf_pos, res;
|
||||
int pred_blocks = 0;
|
||||
|
||||
init_put_bits(&pb, buf, buf_size * 8);
|
||||
init_put_bits(&pb, buf, buf_size);
|
||||
|
||||
put_bits(&pb, 4, block_width / 16 - 1);
|
||||
put_bits(&pb, 12, s->image_width);
|
||||
|
@@ -105,7 +105,7 @@ static int gif_image_write_image(AVCodecContext *avctx,
|
||||
/* skip common columns */
|
||||
while (x_start < x_end) {
|
||||
int same_column = 1;
|
||||
for (y = y_start; y < y_end; y++) {
|
||||
for (y = y_start; y <= y_end; y++) {
|
||||
if (ref[y*ref_linesize + x_start] != buf[y*linesize + x_start]) {
|
||||
same_column = 0;
|
||||
break;
|
||||
@@ -117,7 +117,7 @@ static int gif_image_write_image(AVCodecContext *avctx,
|
||||
}
|
||||
while (x_end > x_start) {
|
||||
int same_column = 1;
|
||||
for (y = y_start; y < y_end; y++) {
|
||||
for (y = y_start; y <= y_end; y++) {
|
||||
if (ref[y*ref_linesize + x_end] != buf[y*linesize + x_end]) {
|
||||
same_column = 0;
|
||||
break;
|
||||
|
@@ -727,6 +727,7 @@ static int decode_init_thread_copy(AVCodecContext *avctx)
|
||||
memset(h->sps_buffers, 0, sizeof(h->sps_buffers));
|
||||
memset(h->pps_buffers, 0, sizeof(h->pps_buffers));
|
||||
|
||||
h->avctx = avctx;
|
||||
h->rbsp_buffer[0] = NULL;
|
||||
h->rbsp_buffer[1] = NULL;
|
||||
h->rbsp_buffer_size[0] = 0;
|
||||
@@ -1515,9 +1516,6 @@ static int decode_nal_units(H264Context *h, const uint8_t *buf, int buf_size,
|
||||
continue;
|
||||
|
||||
again:
|
||||
if ( (!(avctx->active_thread_type & FF_THREAD_FRAME) || nals_needed >= nal_index)
|
||||
&& !h->current_slice)
|
||||
h->au_pps_id = -1;
|
||||
/* Ignore per frame NAL unit type during extradata
|
||||
* parsing. Decoding slices is not possible in codec init
|
||||
* with frame-mt */
|
||||
@@ -1553,8 +1551,14 @@ again:
|
||||
ret = -1;
|
||||
goto end;
|
||||
}
|
||||
if(!idr_cleared)
|
||||
if(!idr_cleared) {
|
||||
if (h->current_slice && (avctx->active_thread_type & FF_THREAD_SLICE)) {
|
||||
av_log(h, AV_LOG_ERROR, "invalid mixed IDR / non IDR frames cannot be decoded in slice multithreading mode\n");
|
||||
ret = AVERROR_INVALIDDATA;
|
||||
goto end;
|
||||
}
|
||||
idr(h); // FIXME ensure we don't lose some frames if there is reordering
|
||||
}
|
||||
idr_cleared = 1;
|
||||
h->has_recovery_point = 1;
|
||||
case NAL_SLICE:
|
||||
@@ -1563,6 +1567,10 @@ again:
|
||||
hx->inter_gb_ptr = &hx->gb;
|
||||
hx->data_partitioning = 0;
|
||||
|
||||
if ( nals_needed >= nal_index
|
||||
|| (!(avctx->active_thread_type & FF_THREAD_FRAME) && !context_count))
|
||||
h->au_pps_id = -1;
|
||||
|
||||
if ((err = ff_h264_decode_slice_header(hx, h)))
|
||||
break;
|
||||
|
||||
@@ -1684,7 +1692,9 @@ again:
|
||||
break;
|
||||
case NAL_SPS:
|
||||
init_get_bits(&h->gb, ptr, bit_length);
|
||||
if (ff_h264_decode_seq_parameter_set(h) < 0 && (h->is_avc ? nalsize : 1)) {
|
||||
if (ff_h264_decode_seq_parameter_set(h, 0) >= 0)
|
||||
break;
|
||||
if (h->is_avc ? nalsize : 1) {
|
||||
av_log(h->avctx, AV_LOG_DEBUG,
|
||||
"SPS decoding failure, trying again with the complete NAL\n");
|
||||
if (h->is_avc)
|
||||
@@ -1693,8 +1703,11 @@ again:
|
||||
break;
|
||||
init_get_bits(&h->gb, &buf[buf_index + 1 - consumed],
|
||||
8*(next_avc - buf_index + consumed - 1));
|
||||
ff_h264_decode_seq_parameter_set(h);
|
||||
if (ff_h264_decode_seq_parameter_set(h, 0) >= 0)
|
||||
break;
|
||||
}
|
||||
init_get_bits(&h->gb, ptr, bit_length);
|
||||
ff_h264_decode_seq_parameter_set(h, 1);
|
||||
|
||||
break;
|
||||
case NAL_PPS:
|
||||
@@ -1727,8 +1740,14 @@ again:
|
||||
if (err < 0 || err == SLICE_SKIPED) {
|
||||
if (err < 0)
|
||||
av_log(h->avctx, AV_LOG_ERROR, "decode_slice_header error\n");
|
||||
h->ref_count[0] = h->ref_count[1] = h->list_count = 0;
|
||||
hx->ref_count[0] = hx->ref_count[1] = hx->list_count = 0;
|
||||
} else if (err == SLICE_SINGLETHREAD) {
|
||||
if (context_count > 1) {
|
||||
ret = ff_h264_execute_decode_slices(h, context_count - 1);
|
||||
if (ret < 0 && (h->avctx->err_recognition & AV_EF_EXPLODE))
|
||||
goto end;
|
||||
context_count = 0;
|
||||
}
|
||||
/* Slice could not be decoded in parallel mode, copy down
|
||||
* NAL unit stuff to context 0 and restart. Note that
|
||||
* rbsp_buffer is not transferred, but since we no longer
|
||||
|
@@ -539,6 +539,7 @@ typedef struct H264Context {
|
||||
int mb_x, mb_y;
|
||||
int resync_mb_x;
|
||||
int resync_mb_y;
|
||||
int mb_index_end;
|
||||
int mb_skip_run;
|
||||
int mb_height, mb_width;
|
||||
int mb_stride;
|
||||
@@ -778,7 +779,7 @@ int ff_h264_decode_sei(H264Context *h);
|
||||
/**
|
||||
* Decode SPS
|
||||
*/
|
||||
int ff_h264_decode_seq_parameter_set(H264Context *h);
|
||||
int ff_h264_decode_seq_parameter_set(H264Context *h, int ignore_truncation);
|
||||
|
||||
/**
|
||||
* compute profile from sps
|
||||
|
@@ -271,7 +271,7 @@ static inline int parse_nal_units(AVCodecParserContext *s,
|
||||
init_get_bits(&h->gb, ptr, 8 * dst_length);
|
||||
switch (h->nal_unit_type) {
|
||||
case NAL_SPS:
|
||||
ff_h264_decode_seq_parameter_set(h);
|
||||
ff_h264_decode_seq_parameter_set(h, 0);
|
||||
break;
|
||||
case NAL_PPS:
|
||||
ff_h264_decode_picture_parameter_set(h, h->gb.size_in_bits);
|
||||
|
@@ -241,12 +241,6 @@ static inline int decode_vui_parameters(H264Context *h, SPS *sps)
|
||||
}
|
||||
}
|
||||
|
||||
if (get_bits_left(&h->gb) < 0) {
|
||||
av_log(h->avctx, AV_LOG_ERROR,
|
||||
"Overread VUI by %d bits\n", -get_bits_left(&h->gb));
|
||||
return AVERROR_INVALIDDATA;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
@@ -303,7 +297,7 @@ static void decode_scaling_matrices(H264Context *h, SPS *sps,
|
||||
}
|
||||
}
|
||||
|
||||
int ff_h264_decode_seq_parameter_set(H264Context *h)
|
||||
int ff_h264_decode_seq_parameter_set(H264Context *h, int ignore_truncation)
|
||||
{
|
||||
int profile_idc, level_idc, constraint_set_flags = 0;
|
||||
unsigned int sps_id;
|
||||
@@ -523,6 +517,13 @@ int ff_h264_decode_seq_parameter_set(H264Context *h)
|
||||
goto fail;
|
||||
}
|
||||
|
||||
if (get_bits_left(&h->gb) < 0) {
|
||||
av_log(h->avctx, ignore_truncation ? AV_LOG_WARNING : AV_LOG_ERROR,
|
||||
"Overread %s by %d bits\n", sps->vui_parameters_present_flag ? "VUI" : "SPS", -get_bits_left(&h->gb));
|
||||
if (!ignore_truncation)
|
||||
goto fail;
|
||||
}
|
||||
|
||||
if (!sps->sar.den)
|
||||
sps->sar.den = 1;
|
||||
|
||||
|
@@ -707,7 +707,7 @@ int ff_h264_execute_ref_pic_marking(H264Context *h, MMCO *mmco, int mmco_count)
|
||||
*/
|
||||
if (h->short_ref_count && h->short_ref[0] == h->cur_pic_ptr) {
|
||||
/* Just mark the second field valid */
|
||||
h->cur_pic_ptr->reference = PICT_FRAME;
|
||||
h->cur_pic_ptr->reference |= h->picture_structure;
|
||||
} else if (h->cur_pic_ptr->long_ref) {
|
||||
av_log(h->avctx, AV_LOG_ERROR, "illegal short term reference "
|
||||
"assignment for second field "
|
||||
|
@@ -1307,6 +1307,7 @@ int ff_h264_decode_slice_header(H264Context *h, H264Context *h0)
|
||||
int field_pic_flag, bottom_field_flag;
|
||||
int first_slice = h == h0 && !h0->current_slice;
|
||||
int frame_num, picture_structure, droppable;
|
||||
int mb_aff_frame, last_mb_aff_frame;
|
||||
PPS *pps;
|
||||
|
||||
h->qpel_put = h->h264qpel.put_h264_qpel_pixels_tab;
|
||||
@@ -1434,7 +1435,8 @@ int ff_h264_decode_slice_header(H264Context *h, H264Context *h0)
|
||||
|| h->mb_width != h->sps.mb_width
|
||||
|| h->mb_height != h->sps.mb_height * (2 - h->sps.frame_mbs_only_flag)
|
||||
));
|
||||
if (non_j_pixfmt(h0->avctx->pix_fmt) != non_j_pixfmt(get_pixel_format(h0, 0)))
|
||||
if (h0->avctx->pix_fmt == AV_PIX_FMT_NONE
|
||||
|| (non_j_pixfmt(h0->avctx->pix_fmt) != non_j_pixfmt(get_pixel_format(h0, 0))))
|
||||
must_reinit = 1;
|
||||
|
||||
if (first_slice && av_cmp_q(h->sps.sar, h->avctx->sample_aspect_ratio))
|
||||
@@ -1515,7 +1517,7 @@ int ff_h264_decode_slice_header(H264Context *h, H264Context *h0)
|
||||
}
|
||||
}
|
||||
|
||||
if (h == h0 && h->dequant_coeff_pps != pps_id) {
|
||||
if (first_slice && h->dequant_coeff_pps != pps_id) {
|
||||
h->dequant_coeff_pps = pps_id;
|
||||
h264_init_dequant_tables(h);
|
||||
}
|
||||
@@ -1530,7 +1532,8 @@ int ff_h264_decode_slice_header(H264Context *h, H264Context *h0)
|
||||
}
|
||||
|
||||
h->mb_mbaff = 0;
|
||||
h->mb_aff_frame = 0;
|
||||
mb_aff_frame = 0;
|
||||
last_mb_aff_frame = h0->mb_aff_frame;
|
||||
last_pic_structure = h0->picture_structure;
|
||||
last_pic_droppable = h0->droppable;
|
||||
droppable = h->nal_ref_idc == 0;
|
||||
@@ -1548,12 +1551,13 @@ int ff_h264_decode_slice_header(H264Context *h, H264Context *h0)
|
||||
picture_structure = PICT_TOP_FIELD + bottom_field_flag;
|
||||
} else {
|
||||
picture_structure = PICT_FRAME;
|
||||
h->mb_aff_frame = h->sps.mb_aff;
|
||||
mb_aff_frame = h->sps.mb_aff;
|
||||
}
|
||||
}
|
||||
if (h0->current_slice) {
|
||||
if (last_pic_structure != picture_structure ||
|
||||
last_pic_droppable != droppable) {
|
||||
last_pic_droppable != droppable ||
|
||||
last_mb_aff_frame != mb_aff_frame) {
|
||||
av_log(h->avctx, AV_LOG_ERROR,
|
||||
"Changing field mode (%d -> %d) between slices is not allowed\n",
|
||||
last_pic_structure, h->picture_structure);
|
||||
@@ -1569,6 +1573,7 @@ int ff_h264_decode_slice_header(H264Context *h, H264Context *h0)
|
||||
h->picture_structure = picture_structure;
|
||||
h->droppable = droppable;
|
||||
h->frame_num = frame_num;
|
||||
h->mb_aff_frame = mb_aff_frame;
|
||||
h->mb_field_decoding_flag = picture_structure != PICT_FRAME;
|
||||
|
||||
if (h0->current_slice == 0) {
|
||||
@@ -2444,8 +2449,17 @@ static int decode_slice(struct AVCodecContext *avctx, void *arg)
|
||||
|
||||
for (;;) {
|
||||
// START_TIMER
|
||||
int ret = ff_h264_decode_mb_cabac(h);
|
||||
int eos;
|
||||
int ret, eos;
|
||||
|
||||
if (h->mb_x + h->mb_y * h->mb_width >= h->mb_index_end) {
|
||||
av_log(h->avctx, AV_LOG_ERROR, "Slice overlaps next at %d\n",
|
||||
h->mb_index_end);
|
||||
er_add_slice(h, h->resync_mb_x, h->resync_mb_y, h->mb_x,
|
||||
h->mb_y, ER_MB_ERROR);
|
||||
return AVERROR_INVALIDDATA;
|
||||
}
|
||||
|
||||
ret = ff_h264_decode_mb_cabac(h);
|
||||
// STOP_TIMER("decode_mb_cabac")
|
||||
|
||||
if (ret >= 0)
|
||||
@@ -2507,7 +2521,17 @@ static int decode_slice(struct AVCodecContext *avctx, void *arg)
|
||||
}
|
||||
} else {
|
||||
for (;;) {
|
||||
int ret = ff_h264_decode_mb_cavlc(h);
|
||||
int ret;
|
||||
|
||||
if (h->mb_x + h->mb_y * h->mb_width >= h->mb_index_end) {
|
||||
av_log(h->avctx, AV_LOG_ERROR, "Slice overlaps next at %d\n",
|
||||
h->mb_index_end);
|
||||
er_add_slice(h, h->resync_mb_x, h->resync_mb_y, h->mb_x,
|
||||
h->mb_y, ER_MB_ERROR);
|
||||
return AVERROR_INVALIDDATA;
|
||||
}
|
||||
|
||||
ret = ff_h264_decode_mb_cavlc(h);
|
||||
|
||||
if (ret >= 0)
|
||||
ff_h264_hl_decode_mb(h);
|
||||
@@ -2595,19 +2619,33 @@ int ff_h264_execute_decode_slices(H264Context *h, unsigned context_count)
|
||||
|
||||
av_assert0(h->mb_y < h->mb_height);
|
||||
|
||||
h->mb_index_end = INT_MAX;
|
||||
|
||||
if (h->avctx->hwaccel ||
|
||||
h->avctx->codec->capabilities & CODEC_CAP_HWACCEL_VDPAU)
|
||||
return 0;
|
||||
if (context_count == 1) {
|
||||
return decode_slice(avctx, &h);
|
||||
} else {
|
||||
int j, mb_index;
|
||||
av_assert0(context_count > 0);
|
||||
for (i = 1; i < context_count; i++) {
|
||||
for (i = 0; i < context_count; i++) {
|
||||
int mb_index_end = h->mb_width * h->mb_height;
|
||||
hx = h->thread_context[i];
|
||||
if (CONFIG_ERROR_RESILIENCE) {
|
||||
mb_index = hx->resync_mb_x + hx->resync_mb_y * h->mb_width;
|
||||
if (CONFIG_ERROR_RESILIENCE && i) {
|
||||
hx->er.error_count = 0;
|
||||
}
|
||||
hx->x264_build = h->x264_build;
|
||||
for (j = 0; j < context_count; j++) {
|
||||
H264Context *sl2 = h->thread_context[j];
|
||||
int mb_index2 = sl2->resync_mb_x + sl2->resync_mb_y * h->mb_width;
|
||||
|
||||
if (i==j || mb_index > mb_index2)
|
||||
continue;
|
||||
mb_index_end = FFMIN(mb_index_end, mb_index2);
|
||||
}
|
||||
hx->mb_index_end = mb_index_end;
|
||||
}
|
||||
|
||||
avctx->execute(avctx, decode_slice, h->thread_context,
|
||||
|
@@ -298,10 +298,10 @@ typedef struct RefPicListTab {
|
||||
} RefPicListTab;
|
||||
|
||||
typedef struct HEVCWindow {
|
||||
int left_offset;
|
||||
int right_offset;
|
||||
int top_offset;
|
||||
int bottom_offset;
|
||||
unsigned int left_offset;
|
||||
unsigned int right_offset;
|
||||
unsigned int top_offset;
|
||||
unsigned int bottom_offset;
|
||||
} HEVCWindow;
|
||||
|
||||
typedef struct VUI {
|
||||
|
@@ -895,11 +895,30 @@ int ff_hevc_decode_nal_sps(HEVCContext *s)
|
||||
sps->log2_max_trafo_size = log2_diff_max_min_transform_block_size +
|
||||
sps->log2_min_tb_size;
|
||||
|
||||
if (sps->log2_min_tb_size >= sps->log2_min_cb_size) {
|
||||
if (sps->log2_min_cb_size < 3 || sps->log2_min_cb_size > 30) {
|
||||
av_log(s->avctx, AV_LOG_ERROR, "Invalid value %d for log2_min_cb_size", sps->log2_min_cb_size);
|
||||
ret = AVERROR_INVALIDDATA;
|
||||
goto err;
|
||||
}
|
||||
|
||||
if (sps->log2_diff_max_min_coding_block_size > 30) {
|
||||
av_log(s->avctx, AV_LOG_ERROR, "Invalid value %d for log2_diff_max_min_coding_block_size", sps->log2_diff_max_min_coding_block_size);
|
||||
ret = AVERROR_INVALIDDATA;
|
||||
goto err;
|
||||
}
|
||||
|
||||
if (sps->log2_min_tb_size >= sps->log2_min_cb_size || sps->log2_min_tb_size < 2) {
|
||||
av_log(s->avctx, AV_LOG_ERROR, "Invalid value for log2_min_tb_size");
|
||||
ret = AVERROR_INVALIDDATA;
|
||||
goto err;
|
||||
}
|
||||
|
||||
if (log2_diff_max_min_transform_block_size < 0 || log2_diff_max_min_transform_block_size > 30) {
|
||||
av_log(s->avctx, AV_LOG_ERROR, "Invalid value %d for log2_diff_max_min_transform_block_size", log2_diff_max_min_transform_block_size);
|
||||
ret = AVERROR_INVALIDDATA;
|
||||
goto err;
|
||||
}
|
||||
|
||||
sps->max_transform_hierarchy_depth_inter = get_ue_golomb_long(gb);
|
||||
sps->max_transform_hierarchy_depth_intra = get_ue_golomb_long(gb);
|
||||
|
||||
@@ -1021,7 +1040,8 @@ int ff_hevc_decode_nal_sps(HEVCContext *s)
|
||||
(sps->output_window.left_offset + sps->output_window.right_offset);
|
||||
sps->output_height = sps->height -
|
||||
(sps->output_window.top_offset + sps->output_window.bottom_offset);
|
||||
if (sps->output_width <= 0 || sps->output_height <= 0) {
|
||||
if (sps->width <= sps->output_window.left_offset + (int64_t)sps->output_window.right_offset ||
|
||||
sps->height <= sps->output_window.top_offset + (int64_t)sps->output_window.bottom_offset) {
|
||||
av_log(s->avctx, AV_LOG_WARNING, "Invalid visible frame dimensions: %dx%d.\n",
|
||||
sps->output_width, sps->output_height);
|
||||
if (s->avctx->err_recognition & AV_EF_EXPLODE) {
|
||||
|
@@ -88,7 +88,12 @@ static inline int mdec_decode_block_intra(MDECContext *a, int16_t *block, int n)
|
||||
if (level == 127) {
|
||||
break;
|
||||
} else if (level != 0) {
|
||||
i += run;
|
||||
i += run;
|
||||
if (i > 63) {
|
||||
av_log(a->avctx, AV_LOG_ERROR,
|
||||
"ac-tex damaged at %d %d\n", a->mb_x, a->mb_y);
|
||||
return AVERROR_INVALIDDATA;
|
||||
}
|
||||
j = scantable[i];
|
||||
level = (level * qscale * quant_matrix[j]) >> 3;
|
||||
level = (level ^ SHOW_SBITS(re, &a->gb, 1)) - SHOW_SBITS(re, &a->gb, 1);
|
||||
@@ -98,8 +103,13 @@ static inline int mdec_decode_block_intra(MDECContext *a, int16_t *block, int n)
|
||||
run = SHOW_UBITS(re, &a->gb, 6)+1; LAST_SKIP_BITS(re, &a->gb, 6);
|
||||
UPDATE_CACHE(re, &a->gb);
|
||||
level = SHOW_SBITS(re, &a->gb, 10); SKIP_BITS(re, &a->gb, 10);
|
||||
i += run;
|
||||
j = scantable[i];
|
||||
i += run;
|
||||
if (i > 63) {
|
||||
av_log(a->avctx, AV_LOG_ERROR,
|
||||
"ac-tex damaged at %d %d\n", a->mb_x, a->mb_y);
|
||||
return AVERROR_INVALIDDATA;
|
||||
}
|
||||
j = scantable[i];
|
||||
if (level < 0) {
|
||||
level = -level;
|
||||
level = (level * qscale * quant_matrix[j]) >> 3;
|
||||
@@ -110,10 +120,6 @@ static inline int mdec_decode_block_intra(MDECContext *a, int16_t *block, int n)
|
||||
level = (level - 1) | 1;
|
||||
}
|
||||
}
|
||||
if (i > 63) {
|
||||
av_log(a->avctx, AV_LOG_ERROR, "ac-tex damaged at %d %d\n", a->mb_x, a->mb_y);
|
||||
return AVERROR_INVALIDDATA;
|
||||
}
|
||||
|
||||
block[j] = level;
|
||||
}
|
||||
|
@@ -89,7 +89,7 @@ static void ff_acelp_interpolatef_mips(float *out, const float *in,
|
||||
"addu %[p_filter_coeffs_m], %[p_filter_coeffs_m], %[prec] \n\t"
|
||||
"madd.s %[v],%[v],%[in_val_m], %[fc_val_m] \n\t"
|
||||
|
||||
: [v] "=&f" (v),[p_in_p] "+r" (p_in_p), [p_in_m] "+r" (p_in_m),
|
||||
: [v] "+&f" (v),[p_in_p] "+r" (p_in_p), [p_in_m] "+r" (p_in_m),
|
||||
[p_filter_coeffs_p] "+r" (p_filter_coeffs_p),
|
||||
[in_val_p] "=&f" (in_val_p), [in_val_m] "=&f" (in_val_m),
|
||||
[fc_val_p] "=&f" (fc_val_p), [fc_val_m] "=&f" (fc_val_m),
|
||||
|
@@ -883,7 +883,7 @@ extern const uint8_t ff_aic_dc_scale_table[32];
|
||||
extern const uint8_t ff_h263_chroma_qscale_table[32];
|
||||
|
||||
/* rv10.c */
|
||||
void ff_rv10_encode_picture_header(MpegEncContext *s, int picture_number);
|
||||
int ff_rv10_encode_picture_header(MpegEncContext *s, int picture_number);
|
||||
int ff_rv_decode_dc(MpegEncContext *s, int n);
|
||||
void ff_rv20_encode_picture_header(MpegEncContext *s, int picture_number);
|
||||
|
||||
|
@@ -3706,8 +3706,11 @@ static int encode_picture(MpegEncContext *s, int picture_number)
|
||||
ff_msmpeg4_encode_picture_header(s, picture_number);
|
||||
else if (CONFIG_MPEG4_ENCODER && s->h263_pred)
|
||||
ff_mpeg4_encode_picture_header(s, picture_number);
|
||||
else if (CONFIG_RV10_ENCODER && s->codec_id == AV_CODEC_ID_RV10)
|
||||
ff_rv10_encode_picture_header(s, picture_number);
|
||||
else if (CONFIG_RV10_ENCODER && s->codec_id == AV_CODEC_ID_RV10) {
|
||||
ret = ff_rv10_encode_picture_header(s, picture_number);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
}
|
||||
else if (CONFIG_RV20_ENCODER && s->codec_id == AV_CODEC_ID_RV20)
|
||||
ff_rv20_encode_picture_header(s, picture_number);
|
||||
else if (CONFIG_FLV_ENCODER && s->codec_id == AV_CODEC_ID_FLV1)
|
||||
|
@@ -36,17 +36,15 @@ static int msrle_decode_pal4(AVCodecContext *avctx, AVPicture *pic,
unsigned char rle_code;
unsigned char extra_byte, odd_pixel;
unsigned char stream_byte;
unsigned int pixel_ptr = 0;
int row_dec = pic->linesize[0];
int row_ptr = (avctx->height - 1) * row_dec;
int frame_size = row_dec * avctx->height;
int pixel_ptr = 0;
int line = avctx->height - 1;
int i;

while (row_ptr >= 0) {
while (line >= 0 && pixel_ptr <= avctx->width) {
if (bytestream2_get_bytes_left(gb) <= 0) {
av_log(avctx, AV_LOG_ERROR,
"MS RLE: bytestream overrun, %d rows left\n",
row_ptr);
"MS RLE: bytestream overrun, %dx%d left\n",
avctx->width - pixel_ptr, line);
return AVERROR_INVALIDDATA;
}
rle_code = stream_byte = bytestream2_get_byteu(gb);
@@ -55,7 +53,7 @@ static int msrle_decode_pal4(AVCodecContext *avctx, AVPicture *pic,
stream_byte = bytestream2_get_byte(gb);
if (stream_byte == 0) {
/* line is done, goto the next one */
row_ptr -= row_dec;
line--;
pixel_ptr = 0;
} else if (stream_byte == 1) {
/* decode is done */
@@ -65,13 +63,12 @@ static int msrle_decode_pal4(AVCodecContext *avctx, AVPicture *pic,
stream_byte = bytestream2_get_byte(gb);
pixel_ptr += stream_byte;
stream_byte = bytestream2_get_byte(gb);
row_ptr -= stream_byte * row_dec;
} else {
// copy pixels from encoded stream
odd_pixel = stream_byte & 1;
rle_code = (stream_byte + 1) / 2;
extra_byte = rle_code & 0x01;
if (row_ptr + pixel_ptr + stream_byte > frame_size ||
if (pixel_ptr + 2*rle_code - odd_pixel > avctx->width ||
bytestream2_get_bytes_left(gb) < rle_code) {
av_log(avctx, AV_LOG_ERROR,
"MS RLE: frame/stream ptr just went out of bounds (copy)\n");
@@ -82,13 +79,13 @@ static int msrle_decode_pal4(AVCodecContext *avctx, AVPicture *pic,
if (pixel_ptr >= avctx->width)
break;
stream_byte = bytestream2_get_byteu(gb);
pic->data[0][row_ptr + pixel_ptr] = stream_byte >> 4;
pic->data[0][line * pic->linesize[0] + pixel_ptr] = stream_byte >> 4;
pixel_ptr++;
if (i + 1 == rle_code && odd_pixel)
break;
if (pixel_ptr >= avctx->width)
break;
pic->data[0][row_ptr + pixel_ptr] = stream_byte & 0x0F;
pic->data[0][line * pic->linesize[0] + pixel_ptr] = stream_byte & 0x0F;
pixel_ptr++;
}

@@ -98,7 +95,7 @@ static int msrle_decode_pal4(AVCodecContext *avctx, AVPicture *pic,
}
} else {
// decode a run of data
if (row_ptr + pixel_ptr + stream_byte > frame_size) {
if (pixel_ptr + rle_code > avctx->width + 1) {
av_log(avctx, AV_LOG_ERROR,
"MS RLE: frame ptr just went out of bounds (run)\n");
return AVERROR_INVALIDDATA;
@@ -108,9 +105,9 @@ static int msrle_decode_pal4(AVCodecContext *avctx, AVPicture *pic,
if (pixel_ptr >= avctx->width)
break;
if ((i & 1) == 0)
pic->data[0][row_ptr + pixel_ptr] = stream_byte >> 4;
pic->data[0][line * pic->linesize[0] + pixel_ptr] = stream_byte >> 4;
else
pic->data[0][row_ptr + pixel_ptr] = stream_byte & 0x0F;
pic->data[0][line * pic->linesize[0] + pixel_ptr] = stream_byte & 0x0F;
pixel_ptr++;
}
}
@@ -308,7 +308,7 @@ static void encode_block(NellyMoserEncodeContext *s, unsigned char *output, int

apply_mdct(s);

init_put_bits(&pb, output, output_size * 8);
init_put_bits(&pb, output, output_size);

i = 0;
for (band = 0; band < NELLY_BANDS; band++) {
@@ -103,7 +103,6 @@ static const AVOption avcodec_options[] = {
{"hex", "hex motion estimation", 0, AV_OPT_TYPE_CONST, {.i64 = ME_HEX }, INT_MIN, INT_MAX, V|E, "me_method" },
{"umh", "umh motion estimation", 0, AV_OPT_TYPE_CONST, {.i64 = ME_UMH }, INT_MIN, INT_MAX, V|E, "me_method" },
{"iter", "iter motion estimation", 0, AV_OPT_TYPE_CONST, {.i64 = ME_ITER }, INT_MIN, INT_MAX, V|E, "me_method" },
{"extradata_size", NULL, OFFSET(extradata_size), AV_OPT_TYPE_INT, {.i64 = DEFAULT }, INT_MIN, INT_MAX},
{"time_base", NULL, OFFSET(time_base), AV_OPT_TYPE_RATIONAL, {.dbl = 0}, INT_MIN, INT_MAX},
{"g", "set the group of picture (GOP) size", OFFSET(gop_size), AV_OPT_TYPE_INT, {.i64 = 12 }, INT_MIN, INT_MAX, V|E},
{"ar", "set audio sampling rate (in Hz)", OFFSET(sample_rate), AV_OPT_TYPE_INT, {.i64 = DEFAULT }, 0, INT_MAX, A|D|E},
@@ -451,6 +451,14 @@ static int opus_decode_packet(AVCodecContext *avctx, void *data,
int coded_samples = 0;
int decoded_samples = 0;
int i, ret;
int delayed_samples = 0;

for (i = 0; i < c->nb_streams; i++) {
OpusStreamContext *s = &c->streams[i];
s->out[0] =
s->out[1] = NULL;
delayed_samples = FFMAX(delayed_samples, s->delayed_samples);
}

/* decode the header of the first sub-packet to find out the sample count */
if (buf) {
@@ -464,7 +472,7 @@ static int opus_decode_packet(AVCodecContext *avctx, void *data,
c->streams[0].silk_samplerate = get_silk_samplerate(pkt->config);
}

frame->nb_samples = coded_samples + c->streams[0].delayed_samples;
frame->nb_samples = coded_samples + delayed_samples;

/* no input or buffered data => nothing to do */
if (!frame->nb_samples) {
@@ -304,7 +304,7 @@ static int encode_slice_plane(AVCodecContext *avctx, int mb_count,
}

blocks_per_slice = mb_count << (2 - chroma);
init_put_bits(&pb, buf, buf_size << 3);
init_put_bits(&pb, buf, buf_size);

encode_dc_coeffs(&pb, blocks, blocks_per_slice, qmat);
encode_ac_coeffs(avctx, &pb, blocks, blocks_per_slice, qmat);
@@ -1058,7 +1058,7 @@ static int encode_frame(AVCodecContext *avctx, AVPacket *pkt,
slice_hdr = pkt->data + (slice_hdr - start);
tmp = pkt->data + (tmp - start);
}
init_put_bits(&pb, buf, (pkt_size - (buf - orig_buf)) * 8);
init_put_bits(&pb, buf, (pkt_size - (buf - orig_buf)));
ret = encode_slice(avctx, pic, &pb, sizes, x, y, q,
mbs_per_slice);
if (ret < 0)
@@ -966,6 +966,8 @@ static av_cold int roq_encode_init(AVCodecContext *avctx)

av_lfg_init(&enc->randctx, 1);

enc->avctx = avctx;

enc->framesSinceKeyframe = 0;
if ((avctx->width & 0xf) || (avctx->height & 0xf)) {
av_log(avctx, AV_LOG_ERROR, "Dimensions must be divisible by 16\n");
@@ -28,7 +28,7 @@
#include "mpegvideo.h"
#include "put_bits.h"

void ff_rv10_encode_picture_header(MpegEncContext *s, int picture_number)
int ff_rv10_encode_picture_header(MpegEncContext *s, int picture_number)
{
int full_frame= 0;

@@ -48,12 +48,17 @@ void ff_rv10_encode_picture_header(MpegEncContext *s, int picture_number)
/* if multiple packets per frame are sent, the position at which
to display the macroblocks is coded here */
if(!full_frame){
if (s->mb_width * s->mb_height >= (1U << 12)) {
avpriv_report_missing_feature(s, "Encoding frames with 4096 or more macroblocks");
return AVERROR(ENOSYS);
}
put_bits(&s->pb, 6, 0); /* mb_x */
put_bits(&s->pb, 6, 0); /* mb_y */
put_bits(&s->pb, 12, s->mb_width * s->mb_height);
}

put_bits(&s->pb, 3, 0); /* ignored */
return 0;
}

FF_MPV_GENERIC_CLASS(rv10)
@@ -82,7 +82,7 @@ static int s302m_encode2_frame(AVCodecContext *avctx, AVPacket *avpkt,
return ret;

o = avpkt->data;
init_put_bits(&pb, o, buf_size * 8);
init_put_bits(&pb, o, buf_size);
put_bits(&pb, 16, buf_size - AES3_HEADER_LEN);
put_bits(&pb, 2, (avctx->channels - 2) >> 1); // number of channels
put_bits(&pb, 8, 0); // channel ID
@@ -152,7 +152,7 @@ static int decode_q_branch(SnowContext *s, int level, int x, int y){
int l = left->color[0];
int cb= left->color[1];
int cr= left->color[2];
int ref = 0;
unsigned ref = 0;
int ref_context= av_log2(2*left->ref) + av_log2(2*top->ref);
int mx_context= av_log2(2*FFABS(left->mx - top->mx)) + 0*av_log2(2*FFABS(tr->mx - top->mx));
int my_context= av_log2(2*FFABS(left->my - top->my)) + 0*av_log2(2*FFABS(tr->my - top->my));
@@ -839,13 +839,6 @@ static int tiff_decode_tag(TiffContext *s, AVFrame *frame)
s->bpp = -1;
}
}
if (s->bpp > 64U) {
av_log(s->avctx, AV_LOG_ERROR,
"This format is not supported (bpp=%d, %d components)\n",
s->bpp, count);
s->bpp = 0;
return AVERROR_INVALIDDATA;
}
break;
case TIFF_SAMPLES_PER_PIXEL:
if (count != 1) {
@@ -1158,6 +1151,13 @@ static int tiff_decode_tag(TiffContext *s, AVFrame *frame)
}
}
end:
if (s->bpp > 64U) {
av_log(s->avctx, AV_LOG_ERROR,
"This format is not supported (bpp=%d, %d components)\n",
s->bpp, count);
s->bpp = 0;
return AVERROR_INVALIDDATA;
}
bytestream2_seek(&s->gb, start, SEEK_SET);
return 0;
}
@@ -246,7 +246,7 @@ int ff_set_sar(AVCodecContext *avctx, AVRational sar)
int ret = av_image_check_sar(avctx->width, avctx->height, sar);

if (ret < 0) {
av_log(avctx, AV_LOG_WARNING, "ignoring invalid SAR: %u/%u\n",
av_log(avctx, AV_LOG_WARNING, "ignoring invalid SAR: %d/%d\n",
sar.num, sar.den);
avctx->sample_aspect_ratio = (AVRational){ 0, 1 };
return ret;
@@ -374,7 +374,7 @@ void avcodec_align_dimensions2(AVCodecContext *s, int *width, int *height,
case AV_PIX_FMT_YUVJ411P:
case AV_PIX_FMT_UYYVYY411:
w_align = 32;
h_align = 8;
h_align = 16 * 2;
break;
case AV_PIX_FMT_YUV410P:
if (s->codec_id == AV_CODEC_ID_SVQ1) {
@@ -279,7 +279,8 @@ static int vp9_alloc_frame(AVCodecContext *ctx, VP9Frame *f)

// retain segmentation map if it doesn't update
if (s->segmentation.enabled && !s->segmentation.update_map &&
!s->intraonly && !s->keyframe && !s->errorres) {
!s->intraonly && !s->keyframe && !s->errorres &&
ctx->active_thread_type != FF_THREAD_FRAME) {
memcpy(f->segmentation_map, s->frames[LAST_FRAME].segmentation_map, sz);
}

@@ -1351,9 +1352,18 @@ static void decode_mode(AVCodecContext *ctx)

if (!s->last_uses_2pass)
ff_thread_await_progress(&s->frames[LAST_FRAME].tf, row >> 3, 0);
for (y = 0; y < h4; y++)
for (y = 0; y < h4; y++) {
int idx_base = (y + row) * 8 * s->sb_cols + col;
for (x = 0; x < w4; x++)
pred = FFMIN(pred, refsegmap[(y + row) * 8 * s->sb_cols + x + col]);
pred = FFMIN(pred, refsegmap[idx_base + x]);
if (!s->segmentation.update_map && ctx->active_thread_type == FF_THREAD_FRAME) {
// FIXME maybe retain reference to previous frame as
// segmap reference instead of copying the whole map
// into a new buffer
memcpy(&s->frames[CUR_FRAME].segmentation_map[idx_base],
&refsegmap[idx_base], w4);
}
}
av_assert1(pred < 8);
b->seg_id = pred;
} else {
@@ -2496,7 +2506,7 @@ static void intra_recon(AVCodecContext *ctx, ptrdiff_t y_off, ptrdiff_t uv_off)
for (x = 0; x < end_x; x += uvstep1d, ptr += 4 * uvstep1d,
ptr_r += 4 * uvstep1d, n += step) {
int mode = b->uvmode;
uint8_t *a = &a_buf[16];
uint8_t *a = &a_buf[32];
int eob = b->skip ? 0 : b->uvtx > TX_8X8 ? AV_RN16A(&s->uveob[p][n]) : s->uveob[p][n];

mode = check_intra_mode(s, mode, &a, ptr_r,
@@ -3748,7 +3758,7 @@ static int vp9_decode_frame(AVCodecContext *ctx, void *frame,
if ((res = av_frame_ref(frame, s->refs[ref].f)) < 0)
return res;
*got_frame = 1;
return 0;
return pkt->size;
}
data += res;
size -= res;
@@ -3972,7 +3982,7 @@ static int vp9_decode_frame(AVCodecContext *ctx, void *frame,
*got_frame = 1;
}

return 0;
return pkt->size;
}

static void vp9_decode_flush(AVCodecContext *ctx)
@@ -694,6 +694,11 @@ static int decode_entropy_coded_image(WebPContext *s, enum ImageRole role,
length = offset + get_bits(&s->gb, extra_bits) + 1;
}
prefix_code = huff_reader_get_symbol(&hg[HUFF_IDX_DIST], &s->gb);
if (prefix_code > 39) {
av_log(s->avctx, AV_LOG_ERROR,
"distance prefix code too large: %d\n", prefix_code);
return AVERROR_INVALIDDATA;
}
if (prefix_code < 4) {
distance = prefix_code + 1;
} else {
@@ -1099,7 +1104,7 @@ static int vp8_lossless_decode_frame(AVCodecContext *avctx, AVFrame *p,
unsigned int data_size, int is_alpha_chunk)
{
WebPContext *s = avctx->priv_data;
int w, h, ret, i;
int w, h, ret, i, used;

if (!is_alpha_chunk) {
s->lossless = 1;
@@ -1149,8 +1154,16 @@ static int vp8_lossless_decode_frame(AVCodecContext *avctx, AVFrame *p,
/* parse transformations */
s->nb_transforms = 0;
s->reduced_width = 0;
used = 0;
while (get_bits1(&s->gb)) {
enum TransformType transform = get_bits(&s->gb, 2);
if (used & (1 << transform)) {
av_log(avctx, AV_LOG_ERROR, "Transform %d used more than once\n",
transform);
ret = AVERROR_INVALIDDATA;
goto free_and_return;
}
used |= (1 << transform);
s->transforms[s->nb_transforms++] = transform;
switch (transform) {
case PREDICTOR_TRANSFORM:
@@ -148,8 +148,8 @@ static void mlp_filter_channel_x86(int32_t *state, const int32_t *coeff,
FIRMUL (ff_mlp_firorder_6, 0x14 )
FIRMUL (ff_mlp_firorder_5, 0x10 )
FIRMUL (ff_mlp_firorder_4, 0x0c )
FIRMULREG(ff_mlp_firorder_3, 0x08,10)
FIRMULREG(ff_mlp_firorder_2, 0x04, 9)
FIRMUL (ff_mlp_firorder_3, 0x08 )
FIRMUL (ff_mlp_firorder_2, 0x04 )
FIRMULREG(ff_mlp_firorder_1, 0x00, 8)
LABEL_MANGLE(ff_mlp_firorder_0)":\n\t"
"jmp *%6 \n\t"
@@ -178,8 +178,6 @@ static void mlp_filter_channel_x86(int32_t *state, const int32_t *coeff,
: /* 4*/"r"((x86_reg)mask), /* 5*/"r"(firjump),
/* 6*/"r"(iirjump) , /* 7*/"c"(filter_shift)
, /* 8*/"r"((int64_t)coeff[0])
, /* 9*/"r"((int64_t)coeff[1])
, /*10*/"r"((int64_t)coeff[2])
: "rax", "rdx", "rsi"
#else /* ARCH_X86_32 */
/* 3*/"+m"(blocksize)
@@ -410,11 +410,16 @@ static int decode_frame(AVCodecContext *avctx, void *data, int *got_frame, AVPac
int hi_ver, lo_ver, ret;

/* parse header */
if (len < 1)
return AVERROR_INVALIDDATA;
c->flags = buf[0];
buf++; len--;
if (c->flags & ZMBV_KEYFRAME) {
void *decode_intra = NULL;
c->decode_intra= NULL;

if (len < 6)
return AVERROR_INVALIDDATA;
hi_ver = buf[0];
lo_ver = buf[1];
c->comp = buf[2];
@@ -40,6 +40,11 @@ static int adx_read_packet(AVFormatContext *s, AVPacket *pkt)
AVCodecContext *avctx = s->streams[0]->codec;
int ret, size;

if (avctx->channels <= 0) {
av_log(s, AV_LOG_ERROR, "invalid number of channels %d\n", avctx->channels);
return AVERROR_INVALIDDATA;
}

size = BLOCK_SIZE * avctx->channels;

pkt->pos = avio_tell(s->pb);
@@ -83,8 +88,14 @@ static int adx_read_header(AVFormatContext *s)
av_log(s, AV_LOG_ERROR, "Invalid extradata size.\n");
return AVERROR_INVALIDDATA;
}
avctx->channels = AV_RB8(avctx->extradata + 7);
avctx->sample_rate = AV_RB32(avctx->extradata + 8);

if (avctx->channels <= 0) {
av_log(s, AV_LOG_ERROR, "invalid number of channels %d\n", avctx->channels);
return AVERROR_INVALIDDATA;
}

st->codec->codec_type = AVMEDIA_TYPE_AUDIO;
st->codec->codec_id = s->iformat->raw_codec_id;
@@ -150,7 +150,8 @@ static int apng_read_header(AVFormatContext *s)
AVIOContext *pb = s->pb;
uint32_t len, tag;
AVStream *st;
int ret = AVERROR_INVALIDDATA, acTL_found = 0;
int acTL_found = 0;
int64_t ret = AVERROR_INVALIDDATA;

/* verify PNGSIG */
if (avio_rb64(pb) != PNGSIG)
@@ -321,7 +322,7 @@ static int decode_fctl_chunk(AVFormatContext *s, APNGDemuxContext *ctx, AVPacket
static int apng_read_packet(AVFormatContext *s, AVPacket *pkt)
{
APNGDemuxContext *ctx = s->priv_data;
int ret;
int64_t ret;
int64_t size;
AVIOContext *pb = s->pb;
uint32_t len, tag;
@@ -1484,7 +1484,7 @@ static int asf_build_simple_index(AVFormatContext *s, int stream_index)
ff_asf_guid g;
ASFContext *asf = s->priv_data;
int64_t current_pos = avio_tell(s->pb);
int ret = 0;
int64_t ret;

if((ret = avio_seek(s->pb, asf->data_object_offset + asf->data_object_size, SEEK_SET)) < 0) {
return ret;
@@ -1554,7 +1554,7 @@ static int asf_read_seek(AVFormatContext *s, int stream_index,

/* Try using the protocol's read_seek if available */
if (s->pb) {
int ret = avio_seek_time(s->pb, stream_index, pts, flags);
int64_t ret = avio_seek_time(s->pb, stream_index, pts, flags);
if (ret >= 0)
asf_reset_header(s);
if (ret != AVERROR(ENOSYS))
@@ -664,6 +664,7 @@ static int asf_write_header(AVFormatContext *s)
* It is needed to use asf as a streamable format. */
if (asf_write_header1(s, 0, DATA_HEADER_SIZE) < 0) {
//av_free(asf);
av_freep(&asf->index_ptr);
return -1;
}
@@ -36,6 +36,7 @@
#include "riff.h"
#include "libavcodec/bytestream.h"
#include "libavcodec/exif.h"
#include "libavformat/isom.h"

typedef struct AVIStream {
int64_t frame_offset; /* current frame (video) or byte (audio) counter
@@ -771,6 +772,12 @@ static int avi_read_header(AVFormatContext *s)
st->codec->codec_tag = tag1;
st->codec->codec_id = ff_codec_get_id(ff_codec_bmp_tags,
tag1);
if (!st->codec->codec_id) {
st->codec->codec_id = ff_codec_get_id(ff_codec_movvideo_tags,
tag1);
if (st->codec->codec_id)
av_log(s, AV_LOG_WARNING, "mov tag found in avi\n");
}
/* This is needed to get the pict type which is necessary
* for generating correct pts. */
st->need_parsing = AVSTREAM_PARSE_HEADERS;
@@ -119,8 +119,12 @@ static int write_header(AVFormatContext *s)
{
AVCodecContext *enc = s->streams[0]->codec;

enc->codec_id = AV_CODEC_ID_G729;
enc->channels = 1;
if ((enc->codec_id != AV_CODEC_ID_G729) || enc->channels != 1) {
av_log(s, AV_LOG_ERROR,
"only codec g729 with 1 channel is supported by this format\n");
return AVERROR(EINVAL);
}

enc->bits_per_coded_sample = 16;
enc->block_align = (enc->bits_per_coded_sample * enc->channels) >> 3;

@@ -133,6 +137,9 @@ static int write_packet(AVFormatContext *s, AVPacket *pkt)
GetBitContext gb;
int i;

if (pkt->size != 10)
return AVERROR(EINVAL);

avio_wl16(pb, SYNC_WORD);
avio_wl16(pb, 8 * 10);
@@ -82,6 +82,7 @@ static int ffm_read_data(AVFormatContext *s,
FFMContext *ffm = s->priv_data;
AVIOContext *pb = s->pb;
int len, fill_size, size1, frame_offset, id;
int64_t last_pos = -1;

size1 = size;
while (size > 0) {
@@ -101,9 +102,11 @@ static int ffm_read_data(AVFormatContext *s,
avio_seek(pb, tell, SEEK_SET);
}
id = avio_rb16(pb); /* PACKET_ID */
if (id != PACKET_ID)
if (id != PACKET_ID) {
if (ffm_resync(s, id) < 0)
return -1;
last_pos = avio_tell(pb);
}
fill_size = avio_rb16(pb);
ffm->dts = avio_rb64(pb);
frame_offset = avio_rb16(pb);
@@ -117,7 +120,9 @@ static int ffm_read_data(AVFormatContext *s,
if (!frame_offset) {
/* This packet has no frame headers in it */
if (avio_tell(pb) >= ffm->packet_size * 3LL) {
avio_seek(pb, -ffm->packet_size * 2LL, SEEK_CUR);
int64_t seekback = FFMIN(ffm->packet_size * 2LL, avio_tell(pb) - last_pos);
seekback = FFMAX(seekback, 0);
avio_seek(pb, -seekback, SEEK_CUR);
goto retry_read;
}
/* This is bad, we cannot find a valid frame header */
@@ -261,7 +266,7 @@ static int ffm2_read_header(AVFormatContext *s)
AVIOContext *pb = s->pb;
AVCodecContext *codec;
int ret;
int f_main = 0, f_cprv, f_stvi, f_stau;
int f_main = 0, f_cprv = -1, f_stvi = -1, f_stau = -1;
AVCodec *enc;
char *buffer;

@@ -331,6 +336,12 @@ static int ffm2_read_header(AVFormatContext *s)
}
codec->time_base.num = avio_rb32(pb);
codec->time_base.den = avio_rb32(pb);
if (codec->time_base.num <= 0 || codec->time_base.den <= 0) {
av_log(s, AV_LOG_ERROR, "Invalid time base %d/%d\n",
codec->time_base.num, codec->time_base.den);
ret = AVERROR_INVALIDDATA;
goto fail;
}
codec->width = avio_rb16(pb);
codec->height = avio_rb16(pb);
codec->gop_size = avio_rb16(pb);
@@ -434,7 +445,7 @@ static int ffm2_read_header(AVFormatContext *s)
}

/* get until end of block reached */
while ((avio_tell(pb) % ffm->packet_size) != 0)
while ((avio_tell(pb) % ffm->packet_size) != 0 && !pb->eof_reached)
avio_r8(pb);

/* init packet demux */
@@ -503,6 +514,11 @@ static int ffm_read_header(AVFormatContext *s)
case AVMEDIA_TYPE_VIDEO:
codec->time_base.num = avio_rb32(pb);
codec->time_base.den = avio_rb32(pb);
if (codec->time_base.num <= 0 || codec->time_base.den <= 0) {
av_log(s, AV_LOG_ERROR, "Invalid time base %d/%d\n",
codec->time_base.num, codec->time_base.den);
goto fail;
}
codec->width = avio_rb16(pb);
codec->height = avio_rb16(pb);
codec->gop_size = avio_rb16(pb);
@@ -561,7 +577,7 @@ static int ffm_read_header(AVFormatContext *s)
}

/* get until end of block reached */
while ((avio_tell(pb) % ffm->packet_size) != 0)
while ((avio_tell(pb) % ffm->packet_size) != 0 && !pb->eof_reached)
avio_r8(pb);

/* init packet demux */
@@ -521,7 +521,7 @@ static int flv_write_packet(AVFormatContext *s, AVPacket *pkt)
avio_w8(pb, FLV_TAG_TYPE_VIDEO);

flags = enc->codec_tag;
if (flags == 0) {
if (flags <= 0 || flags > 15) {
av_log(s, AV_LOG_ERROR,
"Video codec '%s' is not compatible with FLV\n",
avcodec_get_name(enc->codec_id));
@@ -560,7 +560,7 @@ static int gxf_packet(AVFormatContext *s, AVPacket *pkt) {
}

static int gxf_seek(AVFormatContext *s, int stream_index, int64_t timestamp, int flags) {
int res = 0;
int64_t res = 0;
uint64_t pos;
uint64_t maxlen = 100 * 1024 * 1024;
AVStream *st = s->streams[0];
@@ -359,7 +359,7 @@ static int idcin_read_seek(AVFormatContext *s, int stream_index,
IdcinDemuxContext *idcin = s->priv_data;

if (idcin->first_pkt_pos > 0) {
int ret = avio_seek(s->pb, idcin->first_pkt_pos, SEEK_SET);
int64_t ret = avio_seek(s->pb, idcin->first_pkt_pos, SEEK_SET);
if (ret < 0)
return ret;
ff_update_cur_dts(s, s->streams[idcin->video_stream_index], 0);
@@ -2460,7 +2460,7 @@ static int mov_open_dref(AVIOContext **pb, const char *src, MOVDref *ref,
/* try relative path, we do not try the absolute because it can leak information about our
system to an attacker */
if (ref->nlvl_to > 0 && ref->nlvl_from > 0) {
char filename[1024];
char filename[1025];
const char *src_path;
int i, l;

@@ -2486,10 +2486,15 @@ static int mov_open_dref(AVIOContext **pb, const char *src, MOVDref *ref,
filename[src_path - src] = 0;

for (i = 1; i < ref->nlvl_from; i++)
av_strlcat(filename, "../", 1024);
av_strlcat(filename, "../", sizeof(filename));

av_strlcat(filename, ref->path + l + 1, 1024);
av_strlcat(filename, ref->path + l + 1, sizeof(filename));
if (!use_absolute_path)
if(strstr(ref->path + l + 1, "..") || ref->nlvl_from > 1)
return AVERROR(ENOENT);

if (strlen(filename) + 1 == sizeof(filename))
return AVERROR(ENOENT);
if (!avio_open2(pb, filename, AVIO_FLAG_READ, int_cb, NULL))
return 0;
}
@@ -408,7 +408,7 @@ static int mv_read_packet(AVFormatContext *avctx, AVPacket *pkt)
AVStream *st = avctx->streams[mv->stream_index];
const AVIndexEntry *index;
int frame = mv->frame[mv->stream_index];
int ret;
int64_t ret;
uint64_t pos;

if (frame < st->nb_index_entries) {
@@ -1976,7 +1976,7 @@ static int mxf_timestamp_to_str(uint64_t timestamp, char **str)
if (!*str)
return AVERROR(ENOMEM);
if (!strftime(*str, 32, "%Y-%m-%d %H:%M:%S", &time))
str[0] = '\0';
(*str)[0] = '\0';

return 0;
}
@@ -464,7 +464,7 @@ static int oma_read_seek(struct AVFormatContext *s,
int stream_index, int64_t timestamp, int flags)
{
OMAContext *oc = s->priv_data;
int err = ff_pcm_read_seek(s, stream_index, timestamp, flags);
int64_t err = ff_pcm_read_seek(s, stream_index, timestamp, flags);

if (!oc->encrypted)
return err;
@@ -362,7 +362,6 @@ const AVCodecTag ff_codec_bmp_tags[] = {
{ AV_CODEC_ID_G2M, MKTAG('G', '2', 'M', '4') },
{ AV_CODEC_ID_G2M, MKTAG('G', '2', 'M', '5') },
{ AV_CODEC_ID_FIC, MKTAG('F', 'I', 'C', 'V') },
{ AV_CODEC_ID_PRORES, MKTAG('A', 'P', 'C', 'N') },
{ AV_CODEC_ID_NONE, 0 }
};
@@ -394,6 +394,11 @@ static int rm_write_video(AVFormatContext *s, const uint8_t *buf, int size, int
/* Well, I spent some time finding the meaning of these bits. I am
not sure I understood everything, but it works !! */
#if 1
/* 0xFFFF is the maximal chunk size; header needs at most 7 + 4 + 12 B */
if (size > 0xFFFF - 7 - 4 - 12) {
av_log(s, AV_LOG_ERROR, "large packet size %d not supported\n", size);
return AVERROR_PATCHWELCOME;
}
write_packet_header(s, stream, size + 7 + (size >= 0x4000)*4, key_frame);
/* bit 7: '1' if final packet of a frame converted in several packets */
avio_w8(pb, 0x81);
@@ -1583,6 +1583,9 @@ int av_find_default_stream_index(AVFormatContext *s)
score += 50;
}

if (st->discard != AVDISCARD_ALL)
score += 200;

if (score > best_score) {
best_score = score;
best_stream = i;
@@ -261,7 +261,7 @@ static int vqf_read_seek(AVFormatContext *s,
{
VqfContext *c = s->priv_data;
AVStream *st;
int ret;
int64_t ret;
int64_t pos;

st = s->streams[stream_index];
@@ -767,7 +767,7 @@ static int recover(WtvContext *wtv, uint64_t broken_pos)
int i;
for (i = 0; i < wtv->nb_index_entries; i++) {
if (wtv->index_entries[i].pos > broken_pos) {
int ret = avio_seek(pb, wtv->index_entries[i].pos, SEEK_SET);
int64_t ret = avio_seek(pb, wtv->index_entries[i].pos, SEEK_SET);
if (ret < 0)
return ret;
wtv->pts = wtv->index_entries[i].timestamp;
@@ -965,7 +965,7 @@ static int read_header(AVFormatContext *s)
uint8_t root[WTV_SECTOR_SIZE];
AVIOContext *pb;
int64_t timeline_pos;
int ret;
int64_t ret;

wtv->epoch =
wtv->pts =
@@ -49,11 +49,17 @@
#elif HAVE_ARMV5TE
.arch armv5te
#endif
#if HAVE_AS_OBJECT_ARCH
ELF .object_arch armv4
#endif

#if HAVE_NEON
.fpu neon
ELF .eabi_attribute 10, 0 @ suppress Tag_FP_arch
ELF .eabi_attribute 12, 0 @ suppress Tag_Advanced_SIMD_arch
#elif HAVE_VFP
.fpu vfp
ELF .eabi_attribute 10, 0 @ suppress Tag_FP_arch
#endif

.syntax unified
@@ -49,6 +49,7 @@ static int flags, checked;
void av_force_cpu_flags(int arg){
if ( (arg & ( AV_CPU_FLAG_3DNOW |
AV_CPU_FLAG_3DNOWEXT |
AV_CPU_FLAG_MMXEXT |
AV_CPU_FLAG_SSE |
AV_CPU_FLAG_SSE2 |
AV_CPU_FLAG_SSE2SLOW |
@@ -245,7 +245,7 @@ int av_image_check_sar(unsigned int w, unsigned int h, AVRational sar)
{
int64_t scaled_dim;

if (!sar.den)
if (sar.den <= 0 || sar.num < 0)
return AVERROR(EINVAL);

if (!sar.num || sar.num == sar.den)
@@ -41,12 +41,20 @@ PCA *ff_pca_init(int n){
return NULL;

pca= av_mallocz(sizeof(*pca));
if (!pca)
return NULL;

pca->n= n;
pca->z = av_malloc_array(n, sizeof(*pca->z));
pca->count=0;
pca->covariance= av_calloc(n*n, sizeof(double));
pca->mean= av_calloc(n, sizeof(double));

if (!pca->z || !pca->covariance || !pca->mean) {
ff_pca_free(pca);
return NULL;
}

return pca;
}
@@ -35,12 +35,12 @@
#define PARAM AV_OPT_FLAG_AUDIO_PARAM

static const AVOption options[]={
{"ich" , "set input channel count" , OFFSET( in.ch_count ), AV_OPT_TYPE_INT , {.i64=0 }, 0 , SWR_CH_MAX, PARAM},
{"in_channel_count" , "set input channel count" , OFFSET( in.ch_count ), AV_OPT_TYPE_INT , {.i64=0 }, 0 , SWR_CH_MAX, PARAM},
{"och" , "set output channel count" , OFFSET(out.ch_count ), AV_OPT_TYPE_INT , {.i64=0 }, 0 , SWR_CH_MAX, PARAM},
{"out_channel_count" , "set output channel count" , OFFSET(out.ch_count ), AV_OPT_TYPE_INT , {.i64=0 }, 0 , SWR_CH_MAX, PARAM},
{"uch" , "set used channel count" , OFFSET(used_ch_count ), AV_OPT_TYPE_INT , {.i64=0 }, 0 , SWR_CH_MAX, PARAM},
{"used_channel_count" , "set used channel count" , OFFSET(used_ch_count ), AV_OPT_TYPE_INT , {.i64=0 }, 0 , SWR_CH_MAX, PARAM},
{"ich" , "set input channel count" , OFFSET(user_in_ch_count ), AV_OPT_TYPE_INT, {.i64=0 }, 0 , SWR_CH_MAX, PARAM},
{"in_channel_count" , "set input channel count" , OFFSET(user_in_ch_count ), AV_OPT_TYPE_INT, {.i64=0 }, 0 , SWR_CH_MAX, PARAM},
{"och" , "set output channel count" , OFFSET(user_out_ch_count ), AV_OPT_TYPE_INT, {.i64=0 }, 0 , SWR_CH_MAX, PARAM},
{"out_channel_count" , "set output channel count" , OFFSET(user_out_ch_count ), AV_OPT_TYPE_INT, {.i64=0 }, 0 , SWR_CH_MAX, PARAM},
{"uch" , "set used channel count" , OFFSET(user_used_ch_count), AV_OPT_TYPE_INT, {.i64=0 }, 0 , SWR_CH_MAX, PARAM},
{"used_channel_count" , "set used channel count" , OFFSET(user_used_ch_count), AV_OPT_TYPE_INT, {.i64=0 }, 0 , SWR_CH_MAX, PARAM},
{"isr" , "set input sample rate" , OFFSET( in_sample_rate), AV_OPT_TYPE_INT , {.i64=0 }, 0 , INT_MAX , PARAM},
{"in_sample_rate" , "set input sample rate" , OFFSET( in_sample_rate), AV_OPT_TYPE_INT , {.i64=0 }, 0 , INT_MAX , PARAM},
{"osr" , "set output sample rate" , OFFSET(out_sample_rate), AV_OPT_TYPE_INT , {.i64=0 }, 0 , INT_MAX , PARAM},
@@ -51,10 +51,10 @@ static const AVOption options[]={
{"out_sample_fmt" , "set output sample format" , OFFSET(out_sample_fmt ), AV_OPT_TYPE_SAMPLE_FMT , {.i64=AV_SAMPLE_FMT_NONE}, -1 , INT_MAX, PARAM},
{"tsf" , "set internal sample format" , OFFSET(int_sample_fmt ), AV_OPT_TYPE_SAMPLE_FMT , {.i64=AV_SAMPLE_FMT_NONE}, -1 , INT_MAX, PARAM},
{"internal_sample_fmt" , "set internal sample format" , OFFSET(int_sample_fmt ), AV_OPT_TYPE_SAMPLE_FMT , {.i64=AV_SAMPLE_FMT_NONE}, -1 , INT_MAX, PARAM},
{"icl" , "set input channel layout" , OFFSET( in_ch_layout ), AV_OPT_TYPE_CHANNEL_LAYOUT, {.i64=0 }, 0 , INT64_MAX , PARAM, "channel_layout"},
{"in_channel_layout" , "set input channel layout" , OFFSET( in_ch_layout ), AV_OPT_TYPE_CHANNEL_LAYOUT, {.i64=0 }, 0 , INT64_MAX , PARAM, "channel_layout"},
{"ocl" , "set output channel layout" , OFFSET(out_ch_layout ), AV_OPT_TYPE_CHANNEL_LAYOUT, {.i64=0 }, 0 , INT64_MAX , PARAM, "channel_layout"},
{"out_channel_layout" , "set output channel layout" , OFFSET(out_ch_layout ), AV_OPT_TYPE_CHANNEL_LAYOUT, {.i64=0 }, 0 , INT64_MAX , PARAM, "channel_layout"},
{"icl" , "set input channel layout" , OFFSET(user_in_ch_layout ), AV_OPT_TYPE_CHANNEL_LAYOUT, {.i64=0 }, 0 , INT64_MAX , PARAM, "channel_layout"},
{"in_channel_layout" , "set input channel layout" , OFFSET(user_in_ch_layout ), AV_OPT_TYPE_CHANNEL_LAYOUT, {.i64=0 }, 0 , INT64_MAX , PARAM, "channel_layout"},
{"ocl" , "set output channel layout" , OFFSET(user_out_ch_layout), AV_OPT_TYPE_CHANNEL_LAYOUT, {.i64=0 }, 0 , INT64_MAX , PARAM, "channel_layout"},
{"out_channel_layout" , "set output channel layout" , OFFSET(user_out_ch_layout), AV_OPT_TYPE_CHANNEL_LAYOUT, {.i64=0 }, 0 , INT64_MAX , PARAM, "channel_layout"},
{"clev" , "set center mix level" , OFFSET(clev ), AV_OPT_TYPE_FLOAT, {.dbl=C_30DB }, -32 , 32 , PARAM},
{"center_mix_level" , "set center mix level" , OFFSET(clev ), AV_OPT_TYPE_FLOAT, {.dbl=C_30DB }, -32 , 32 , PARAM},
{"slev" , "set surround mix level" , OFFSET(slev ), AV_OPT_TYPE_FLOAT, {.dbl=C_30DB }, -32 , 32 , PARAM},
@@ -65,8 +65,8 @@ int swr_set_matrix(struct SwrContext *s, const double *matrix, int stride)
if (!s || s->in_convert) // s needs to be allocated but not initialized
return AVERROR(EINVAL);
memset(s->matrix, 0, sizeof(s->matrix));
nb_in = av_get_channel_layout_nb_channels(s->in_ch_layout);
nb_out = av_get_channel_layout_nb_channels(s->out_ch_layout);
nb_in = av_get_channel_layout_nb_channels(s->user_in_ch_layout);
nb_out = av_get_channel_layout_nb_channels(s->user_out_ch_layout);
for (out = 0; out < nb_out; out++) {
for (in = 0; in < nb_in; in++)
s->matrix[out][in] = matrix[in];
@@ -314,6 +314,11 @@ int main(int argc, char **argv){
fprintf(stderr, "Failed to init backw_ctx\n");
return 1;
}
if (uint_rand(rand_seed) % 3 == 0)
av_opt_set_int(forw_ctx, "ich", 0, 0);
if (uint_rand(rand_seed) % 3 == 0)
av_opt_set_int(forw_ctx, "och", 0, 0);

if(swr_init( forw_ctx) < 0)
fprintf(stderr, "swr_init(->) failed\n");
if(swr_init(backw_ctx) < 0)
@@ -86,10 +86,10 @@ struct SwrContext *swr_alloc_set_opts(struct SwrContext *s,
if (av_opt_set_int(s, "tsf", AV_SAMPLE_FMT_NONE, 0) < 0)
goto fail;

if (av_opt_set_int(s, "ich", av_get_channel_layout_nb_channels(s-> in_ch_layout), 0) < 0)
if (av_opt_set_int(s, "ich", av_get_channel_layout_nb_channels(s-> user_in_ch_layout), 0) < 0)
goto fail;

if (av_opt_set_int(s, "och", av_get_channel_layout_nb_channels(s->out_ch_layout), 0) < 0)
if (av_opt_set_int(s, "och", av_get_channel_layout_nb_channels(s->user_out_ch_layout), 0) < 0)
goto fail;

av_opt_set_int(s, "uch", 0, 0);
@@ -152,6 +152,7 @@ av_cold void swr_close(SwrContext *s){

av_cold int swr_init(struct SwrContext *s){
int ret;
char l1[1024], l2[1024];

clear_context(s);

@@ -164,6 +165,13 @@ av_cold int swr_init(struct SwrContext *s){
return AVERROR(EINVAL);
}

s->out.ch_count = s-> user_out_ch_count;
s-> in.ch_count = s-> user_in_ch_count;
s->used_ch_count = s->user_used_ch_count;

s-> in_ch_layout = s-> user_in_ch_layout;
s->out_ch_layout = s->user_out_ch_layout;

if(av_get_channel_layout_nb_channels(s-> in_ch_layout) > SWR_CH_MAX) {
av_log(s, AV_LOG_WARNING, "Input channel layout 0x%"PRIx64" is invalid or unsupported.\n", s-> in_ch_layout);
s->in_ch_layout = 0;
@@ -271,10 +279,18 @@ av_cold int swr_init(struct SwrContext *s){
return -1;
}

av_get_channel_layout_string(l1, sizeof(l1), s-> in.ch_count, s-> in_ch_layout);
av_get_channel_layout_string(l2, sizeof(l2), s->out.ch_count, s->out_ch_layout);
if (s->out_ch_layout && s->out.ch_count != av_get_channel_layout_nb_channels(s->out_ch_layout)) {
av_log(s, AV_LOG_ERROR, "Output channel layout %s mismatches specified channel count %d\n", l2, s->out.ch_count);
return AVERROR(EINVAL);
}
if (s->in_ch_layout && s->used_ch_count != av_get_channel_layout_nb_channels(s->in_ch_layout)) {
av_log(s, AV_LOG_ERROR, "Input channel layout %s mismatches specified channel count %d\n", l1, s->used_ch_count);
return AVERROR(EINVAL);
}

if ((!s->out_ch_layout || !s->in_ch_layout) && s->used_ch_count != s->out.ch_count && !s->rematrix_custom) {
char l1[1024], l2[1024];
av_get_channel_layout_string(l1, sizeof(l1), s-> in.ch_count, s-> in_ch_layout);
av_get_channel_layout_string(l2, sizeof(l2), s->out.ch_count, s->out_ch_layout);
av_log(s, AV_LOG_ERROR, "Rematrix is needed between %s and %s "
"but there is not enough information to do it\n", l1, l2);
return -1;
@@ -90,6 +90,12 @@ struct SwrContext {
int used_ch_count; ///< number of used input channels (mapped channel count if channel_map, otherwise in.ch_count)
enum SwrEngine engine;

int user_in_ch_count; ///< User set input channel count
int user_out_ch_count; ///< User set output channel count
int user_used_ch_count; ///< User set used channel count
int64_t user_in_ch_layout; ///< User set input channel layout
int64_t user_out_ch_layout; ///< User set output channel layout

struct DitherContext dither;

int filter_size; /**< length of each FIR filter in the resampling filterbank relative to the cutoff frequency */
@@ -612,14 +612,24 @@ static av_cold int initFilter(int16_t **outFilter, int32_t **filterPos,

if ((*filterPos)[i] + filterSize > srcW) {
int shift = (*filterPos)[i] + FFMIN(filterSize - srcW, 0);
int64_t acc = 0;

// move filter coefficients right to compensate for filterPos
for (j = filterSize - 2; j >= 0; j--) {
int right = FFMIN(j + shift, filterSize - 1);
filter[i * filterSize + right] += filter[i * filterSize + j];
filter[i * filterSize + j] = 0;
for (j = filterSize - 1; j >= 0; j--) {
if ((*filterPos)[i] + j >= srcW) {
acc += filter[i * filterSize + j];
filter[i * filterSize + j] = 0;
}
}
for (j = filterSize - 1; j >= 0; j--) {
if (j < shift) {
filter[i * filterSize + j] = 0;
} else {
filter[i * filterSize + j] = filter[i * filterSize + j - shift];
}
}

(*filterPos)[i]-= shift;
filter[i * filterSize + srcW - 1 - (*filterPos)[i]] += acc;
}
}

@@ -1167,7 +1177,7 @@ av_cold int sws_init_context(SwsContext *c, SwsFilter *srcFilter,
c->chrDstW = FF_CEIL_RSHIFT(dstW, c->chrDstHSubSample);
c->chrDstH = FF_CEIL_RSHIFT(dstH, c->chrDstVSubSample);

FF_ALLOC_OR_GOTO(c, c->formatConvBuffer, FFALIGN(srcW*2+78, 16) * 2, fail);
FF_ALLOCZ_OR_GOTO(c, c->formatConvBuffer, FFALIGN(srcW*2+78, 16) * 2, fail);

c->srcBpc = 1 + desc_src->comp[0].depth_minus1;
if (c->srcBpc < 8)
@@ -35,15 +35,15 @@
0, 33, 33, 1, 4295, 0xf71b0b38, S=1, 1024, 0xf351799f
0, 34, 34, 1, 2044, 0x5adcb93b, S=1, 1024, 0xf351799f
0, 35, 35, 1, 3212, 0xcf79eeed, S=1, 1024, 0xf351799f
0, 36, 36, 1, 2281, 0x68464d30, S=1, 1024, 0xf351799f
0, 36, 36, 1, 2292, 0xb4386334, S=1, 1024, 0xf351799f
0, 37, 37, 1, 3633, 0x0010992f, S=1, 1024, 0xf351799f
0, 38, 38, 1, 3552, 0x23697490, S=1, 1024, 0xf351799f
0, 39, 39, 1, 3690, 0x62afdbb8, S=1, 1024, 0xf351799f
0, 40, 40, 1, 1558, 0x7a13e53b, S=1, 1024, 0xf351799f
0, 41, 41, 1, 940, 0xb1b6cba2, S=1, 1024, 0xf351799f
0, 40, 40, 1, 1559, 0x5baef54a, S=1, 1024, 0xf351799f
0, 41, 41, 1, 954, 0xca75ca79, S=1, 1024, 0xf351799f
0, 42, 42, 1, 273, 0x3687799b, S=1, 1024, 0xf351799f
0, 43, 43, 1, 930, 0x29f3b0c4, S=1, 1024, 0xf351799f
0, 44, 44, 1, 271, 0xe7af807c, S=1, 1024, 0xf351799f
0, 44, 44, 1, 271, 0x305e8094, S=1, 1024, 0xf351799f
0, 45, 45, 1, 196, 0xf5ab51ee, S=1, 1024, 0xf351799f
0, 46, 46, 1, 4299, 0x67ec0d55, S=1, 1024, 0xf351799f
0, 47, 47, 1, 4895, 0xb394406c, S=1, 1024, 0xf351799f
@@ -56,7 +56,7 @@
0, 54, 54, 1, 5179, 0x860fc6a1, S=1, 1024, 0xf351799f
0, 55, 55, 1, 5046, 0xce9183d3, S=1, 1024, 0xf351799f
0, 56, 56, 1, 5140, 0xa6d7b9af, S=1, 1024, 0xf351799f
0, 57, 57, 1, 4289, 0xb415f717, S=1, 1024, 0xf351799f
0, 57, 57, 1, 4301, 0x03b6ef3f, S=1, 1024, 0xf351799f
0, 58, 58, 1, 5079, 0xa8d59e01, S=1, 1024, 0xf351799f
0, 59, 59, 1, 5284, 0xea34e3b3, S=1, 1024, 0xf351799f
0, 60, 60, 1, 5426, 0x556a15cd, S=1, 1024, 0xf351799f
@@ -35,15 +35,15 @@
0, 33, 33, 1, 4295, 0xc1850a80, S=1, 1024, 0xcfc8799f
0, 34, 34, 1, 2044, 0x0440c072, S=1, 1024, 0xcfc8799f
0, 35, 35, 1, 3212, 0xe91af08f, S=1, 1024, 0xcfc8799f
0, 36, 36, 1, 2281, 0x6a414aa1, S=1, 1024, 0xcfc8799f
0, 36, 36, 1, 2292, 0x6765633e, S=1, 1024, 0xcfc8799f
0, 37, 37, 1, 3633, 0xac779aa3, S=1, 1024, 0xcfc8799f
0, 38, 38, 1, 3552, 0xed2c75b2, S=1, 1024, 0xcfc8799f
0, 39, 39, 1, 3690, 0x2020dd0d, S=1, 1024, 0xcfc8799f
0, 40, 40, 1, 1558, 0x2c14e4b2, S=1, 1024, 0xcfc8799f
0, 41, 41, 1, 940, 0x4927cd90, S=1, 1024, 0xcfc8799f
0, 40, 40, 1, 1559, 0x596ef330, S=1, 1024, 0xcfc8799f
0, 41, 41, 1, 954, 0xac12c9c5, S=1, 1024, 0xcfc8799f
0, 42, 42, 1, 273, 0x138c7831, S=1, 1024, 0xcfc8799f
0, 43, 43, 1, 930, 0xf1c3ae3f, S=1, 1024, 0xcfc8799f
0, 44, 44, 1, 271, 0x6d338044, S=1, 1024, 0xcfc8799f
0, 44, 44, 1, 271, 0x921a80af, S=1, 1024, 0xcfc8799f
0, 45, 45, 1, 196, 0xa5de5322, S=1, 1024, 0xcfc8799f
0, 46, 46, 1, 4299, 0x5bac0d86, S=1, 1024, 0xcfc8799f
0, 47, 47, 1, 4895, 0xc43639a6, S=1, 1024, 0xcfc8799f
@@ -56,7 +56,7 @@
0, 54, 54, 1, 5179, 0x97aac3a1, S=1, 1024, 0xcfc8799f
0, 55, 55, 1, 5046, 0x836a80cd, S=1, 1024, 0xcfc8799f
0, 56, 56, 1, 5140, 0xa725c1e7, S=1, 1024, 0xcfc8799f
0, 57, 57, 1, 4289, 0x7b3afbc0, S=1, 1024, 0xcfc8799f
0, 57, 57, 1, 4301, 0x0203f239, S=1, 1024, 0xcfc8799f
0, 58, 58, 1, 5079, 0xb2e7a2de, S=1, 1024, 0xcfc8799f
0, 59, 59, 1, 5284, 0xb757dfe1, S=1, 1024, 0xcfc8799f
0, 60, 60, 1, 5426, 0xf9f11e57, S=1, 1024, 0xcfc8799f