Compare commits

..

11 Commits

Author SHA1 Message Date
Michael Niedermayer
01fcbdf9ce Merge remote-tracking branch 'qatar/master'
* qatar/master:
  smacker: Sanity check huffman tables found in the headers.
  smacker: remove dead store
  qdm2: Check data block size for bytes to bits overflow.
  mxfdec: Fix files with essence containers larger than 2 GiB.
  mxfdec: Employ correct printf conversion specifiers for POSIX int types.
  vc1: always read the bfraction element for interlaced fields
  fate: add XWD image regression test
  lavf: prevent infinite loops while flushing in avformat_find_stream_info
  matroskadec: Pad AAC extradata.
  ismindex: Fix build on mingw

Conflicts:
	libavformat/mxfdec.c
	libavformat/utils.c
	tests/lavf-regression.sh

Merged-by: Michael Niedermayer <michaelni@gmx.at>
2012-01-27 02:09:58 +01:00
Alex Converse
9adf25c1cf smacker: Sanity check huffman tables found in the headers.
Found-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind

CC: libav-stable@libav.org
2012-01-26 10:18:00 -08:00
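The pattern behind this fix is validating header-declared table sizes before they are allocated or read. A minimal sketch of the idea, with hypothetical field and limit names (not the actual smacker.c code; assumes libavcodec/avcodec.h for av_log() and AVERROR_INVALIDDATA):

    /* Reject a header-declared Huffman table size that exceeds what the
     * decoder is prepared to allocate. Hypothetical helper, illustrative only. */
    static int validate_huff_size(AVCodecContext *avctx,
                                  unsigned declared_size, unsigned max_size)
    {
        if (declared_size > max_size) {
            av_log(avctx, AV_LOG_ERROR,
                   "Huffman table size %u exceeds limit %u\n",
                   declared_size, max_size);
            return AVERROR_INVALIDDATA;
        }
        return 0;
    }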
Alex Converse
90c0c83e14 smacker: remove dead store
2012-01-26 10:17:04 -08:00
Alex Converse
dac56d9ce0 qdm2: Check data block size for bytes to bits overflow.
Found-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind

CC: libav-stable@libav.org
2012-01-26 10:17:04 -08:00
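The overflow named in the subject is the bytes-to-bits conversion: multiplying a data block size by 8 in a 32-bit int can wrap and defeat later bounds checks. A minimal sketch of such a guard, under that assumption (not the actual qdm2.c code):

    #include <limits.h>

    /* Reject block sizes whose bit count would overflow an int before
     * anything computes size * 8 for a bit reader. */
    static int check_block_size(unsigned size_in_bytes)
    {
        if (size_in_bytes > INT_MAX / 8)
            return -1;          /* size_in_bytes * 8 would not fit in an int */
        return 0;
    }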
Tomas Härdin
62271c4c9a mxfdec: Fix files with essence containers larger than 2 GiB.
For such files, accumulating into an int would cause an overflow.

Signed-off-by: Diego Biurrun <diego@biurrun.de>
2012-01-26 15:47:50 +01:00
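The commit body describes a plain C pitfall: summing per-segment lengths in an int wraps once the total passes 2 GiB. A sketch of the corrected accumulation pattern, widened to int64_t (illustrative only, not the mxfdec.c hunk):

    #include <stdint.h>

    /* Accumulate essence-container segment sizes in 64 bits so totals
     * beyond 2 GiB cannot overflow the running sum. */
    static int64_t total_essence_size(const uint32_t *seg_len, int nb_segs)
    {
        int64_t total = 0;
        for (int i = 0; i < nb_segs; i++)
            total += seg_len[i];    /* each addend is promoted to int64_t */
        return total;
    }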
Jean First
4fbd3e89e7 mxfdec: Employ correct printf conversion specifiers for POSIX int types.
Signed-off-by: Jean First <jeanfirst@gmail.com>
Signed-off-by: Diego Biurrun <diego@biurrun.de>
2012-01-26 15:31:55 +01:00
Hendrik Leppkes
feaa40020b vc1: always read the bfraction element for interlaced fields
Previously, it would not be read if refdist_flag was not set; however, according
to the spec and the reference decoder, it should always be read.

Signed-off-by: Kostya Shishkov <kostya.shishkov@gmail.com>
2012-01-26 15:19:27 +01:00
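The change is about when a bitstream element is consumed: if BFRACTION is present but not read, the bit reader is misaligned for everything that follows. A schematic of the control-flow change, with hypothetical helper names (not the real vc1.c parser; VC1Context and GetBitContext come from FFmpeg's headers):

    /* Hypothetical helpers standing in for the real header-parsing code. */
    void read_bfraction(VC1Context *v, GetBitContext *gb);
    void read_refdist(VC1Context *v, GetBitContext *gb);

    /* After the fix: BFRACTION is always consumed; refdist_flag only gates
     * REFDIST. Before, the BFRACTION read happened only when refdist_flag
     * was set, leaving the bit reader misaligned. */
    static void parse_field_header(VC1Context *v, GetBitContext *gb)
    {
        read_bfraction(v, gb);
        if (v->refdist_flag)
            read_refdist(v, gb);
    }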
Paul B Mahol
7de9af65c7 fate: add XWD image regression test
Signed-off-by: Diego Biurrun <diego@biurrun.de>
2012-01-26 01:51:26 +01:00
Janne Grunau
b3461c29c1 lavf: prevent infinite loops while flushing in avformat_find_stream_info
If no data was seen for a stream, decoders return 0 when fed empty packets
for flushing. We can stop flushing once the decoder no longer returns
delayed frames. Changes the try_decode_frame() return value to got_picture
or a negative error.

CC: libav-stable@libav.org
2012-01-26 00:45:05 +01:00
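The fix follows the standard decoder drain pattern: keep feeding an empty packet and stop as soon as no delayed frame comes back, rather than looping on a condition a data-less stream can never satisfy. A hedged sketch of that loop using the return convention the message describes (assumed call shape, not the real libavformat/utils.c code; types come from avformat.h):

    /* Assumed signature for the static helper named in the commit message:
     * returns got_picture (0 or 1) on success, a negative AVERROR on failure. */
    static int try_decode_frame(AVStream *st, AVPacket *pkt, AVDictionary **options);

    static void drain_stream(AVStream *st)
    {
        AVPacket empty_pkt = { .data = NULL, .size = 0 };   /* flush request */
        int ret;
        do {
            ret = try_decode_frame(st, &empty_pkt, NULL);
        } while (ret > 0);      /* 0: no delayed frames left, <0: error -> stop */
    }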
Alex Converse
d2ee8c1779 matroskadec: Pad AAC extradata.
Found-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind

CC: libav-stable@libav.org
2012-01-25 14:46:06 -08:00
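Padding extradata follows a libavcodec convention: buffers handed to a decoder need FF_INPUT_BUFFER_PADDING_SIZE zeroed bytes at the end so optimized bitstream readers can overread safely. A minimal sketch of the allocation pattern (illustrative, not the matroskadec.c change itself; assumes libavcodec/avcodec.h and libavutil/mem.h):

    #include <string.h>

    /* Copy codec private data into extradata with the required zeroed tail. */
    static int set_padded_extradata(AVCodecContext *avctx,
                                    const uint8_t *src, int size)
    {
        avctx->extradata = av_mallocz(size + FF_INPUT_BUFFER_PADDING_SIZE);
        if (!avctx->extradata)
            return AVERROR(ENOMEM);
        memcpy(avctx->extradata, src, size);
        avctx->extradata_size = size;
        return 0;
    }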
Martin Storsjö
8801fac365 ismindex: Fix build on mingw
Signed-off-by: Martin Storsjö <martin@martin.st>
2012-01-26 00:04:28 +02:00
344 changed files with 8600 additions and 6256 deletions

129
Changelog
View File

@@ -3,135 +3,6 @@ releases are sorted from youngest to oldest.
version next:
version 0.10.8
- kmvc: Clip pixel position to valid range
- kmvc: use fixed sized arrays in the context
- indeo: use a typedef for the mc function pointer
- lavc: check for overflow in init_get_bits
- mjpegdec: properly report unsupported disabled features
- jpegls: return meaningful errors
- jpegls: factorize return paths
- jpegls: check the scan offset
- wavpack: validate samples size parsed in wavpack_decode_block
- ljpeg: use the correct number of components in yuv
- mjpeg: Validate sampling factors
- mjpegdec: validate parameters in mjpeg_decode_scan_progressive_ac
- wavpack: check packet size early
- wavpack: return meaningful errors
- apetag: use int64_t for filesize
- tiff: do not overread the source buffer
- Prepare for 0.8.8 Release
- smacker: fix an off by one in huff.length computation
- smacker: check the return value of smacker_decode_tree
- smacker: pad the extradata allocation
- smacker: check frame size validity
- vmdav: convert to bytestream2
- 4xm: don't rely on get_buffer() initializing the frame.
- 4xm: check the return value of read_huffman_tables().
- 4xm: use the correct logging context
- 4xm: reject frames not compatible with the declared version
- 4xm: check bitstream_size boundary before using it
- 4xm: do not overread the source buffer in decode_p_block
- avfiltergraph: check for sws opts being non-NULL before using them
- bmv: check for len being valid in bmv_decode_frame()
- dfa: check for invalid access in decode_wdlt()
- indeo3: check motion vectors
- indeo3: fix data size check
- indeo3: switch parsing the header to bytestream2
- lavf: make sure stream probe data gets freed.
- oggdec: fix faulty cleanup prototype
- oma: Validate sample rates
- qdm2: check that the FFT size is a power of 2
- rv10: check that extradata is large enough
- xmv: check audio track parameters validity
- xmv: do not leak memory in the error paths in xmv_read_header()
- aac: check the maximum number of channels
- indeo3: fix off by one in MV validity check, Bug #503
- id3v2: check for end of file while unescaping tags
- wav: Always seek to an even offset, Bug #500, LP: #1174737
- proresdec: support mixed interlaced/non-interlaced content
version 0.10.6:
- many bug fixes that were found with Coverity
- The following CVE fixes were backported:
CVE-2012-2796, CVE-2012-2775, CVE-2012-2772, CVE-2012-2776,
CVE-2012-2779, CVE-2012-2787, CVE-2012-2794, CVE-2012-2800,
CVE-2012-2802, CVE-2012-2801, CVE-2012-2786, CVE-2012-2798,
CVE-2012-2793, CVE-2012-2789, CVE-2012-2788, CVE-2012-2790,
CVE-2012-2777, CVE-2012-2784
- hundreds of other bug fixes, some possibly security relevant,
see the git log for details.
version 0.10.5:
- Several bugs and crashes have been fixed as well as build problems
with recent mingw64
version 0.10.4:
- Several bugs and crashes have been fixed
Note, CVE-2012-0851 and CVE-2011-3937 have been fixed in previous releases
version 0.10.3:
- Security fixes in the 4xm demuxer, avi demuxer, cook decoder,
mm demuxer, mpegvideo decoder, vqavideo decoder (CVE-2012-0947) and
xmv demuxer.
- Several bugs and crashes have been fixed in the following codecs: AAC,
APE, H.263, H.264, Indeo 4, Mimic, MJPEG, Motion Pixels Video, RAW,
TTA, VC1, VQA, WMA Voice, vqavideo.
- Several bugs and crashes have been fixed in the following formats:
ASF, ID3v2, MOV, xWMA
- This release additionally updates the following codecs to the
bytestream2 API; they therefore benefit from additional overflow
checks: truemotion2, utvideo, vqavideo
version 0.10.1
- Several security fixes and many bugfixes affecting many formats and
codecs; the list below is not complete.
- swapuv filter
- Several bugs and crashes have been fixed in the following codecs: AAC,
AC-3, ADPCM, AMR (both NB and WB), ATRAC3, CAVS, Cook, camstudio, DCA,
DPCM, DSI CIN, DV, EA TGQ, FLAC, fraps, G.722 (both encoder and
decoder), H.264, huffyuv, BB JV decoder, Indeo 3, KGV1, LCL, the
libx264 wrapper, MJPEG, mp3on4, Musepack, MPEG1/2, PNG, QDM2, Qt RLE,
ROQ, RV10, RV30/RV34/RV40, shorten, smacker, subrip, SVQ3, TIFF,
Truemotion2, TTA, VC1, VMware Screen codec, Vorbis, VP5, VP6, WMA,
Westwood SNDx, XXAN.
- This release additionally updates the following codecs to the
bytestream2 API; they therefore benefit from additional overflow
checks: XXAN, ALG MM, TGQ, SMC, Qt SMC, ROQ, PNG
- Several bugs and crashes have been fixed in the following formats:
AIFF, ASF, DV, Matroska, NSV, MOV, MPEG-TS, Smacker, Sony OpenMG, RM,
SWF.
- A potential overflow in libswscale with large image sizes has been fixed.
- The following APIs have been added:
avcodec_is_open()
avformat_get_riff_video_tags()
avformat_get_riff_audio_tags()
Please see the file doc/APIchanges and the Doxygen documentation for
further information.
version 0.10:
- Fixes: CVE-2011-3929, CVE-2011-3934, CVE-2011-3935, CVE-2011-3936,
CVE-2011-3937, CVE-2011-3940, CVE-2011-3941, CVE-2011-3944,

View File

@@ -31,7 +31,7 @@ PROJECT_NAME = FFmpeg
# This could be handy for archiving the generated documentation or
# if some version control system is used.
PROJECT_NUMBER = 0.10.9
PROJECT_NUMBER =
# With the PROJECT_LOGO tag one can specify a logo or icon that is included
# in the documentation. The maximum height of the logo should not exceed 55

View File

@@ -137,6 +137,7 @@ uninstall-data:
clean::
$(RM) $(ALLPROGS) $(ALLPROGS_G)
$(RM) $(CLEANSUFFIXES)
$(RM) $(TOOLS)
$(RM) $(CLEANSUFFIXES:%=tools/%)
$(RM) coverage.info
$(RM) -r coverage-html
@@ -157,8 +158,6 @@ coverage-html: coverage.info
$(Q)genhtml -o $@ $<
$(Q)touch $@
check: all alltools checkheaders examples testprogs fate
include $(SRC_PATH)/doc/Makefile
include $(SRC_PATH)/tests/Makefile
@@ -173,5 +172,5 @@ $(sort $(OBJDIRS)):
# so this saves some time on slow systems.
.SUFFIXES:
.PHONY: all all-yes alltools check *clean config examples install*
.PHONY: all all-yes alltools *clean config examples install*
.PHONY: testprogs uninstall*

View File

@@ -1 +1 @@
0.10.9
0.9.1.git

View File

@@ -1 +0,0 @@
0.10.9

View File

@@ -56,7 +56,7 @@
struct SwsContext *sws_opts;
AVDictionary *format_opts, *codec_opts;
const int this_year = 2013;
const int this_year = 2012;
static FILE *report_file;
@@ -806,7 +806,7 @@ int opt_codecs(const char *opt, const char *arg)
if (p2 && strcmp(p->name, p2->name) == 0) {
if (p->decode)
decode = 1;
if (p->encode || p->encode2)
if (p->encode)
encode = 1;
cap |= p->capabilities;
}

View File

@@ -117,13 +117,4 @@ CLEANSUFFIXES = *.d *.o *~ *.ho *.map *.ver *.gcno *.gcda
DISTCLEANSUFFIXES = *.pc
LIBSUFFIXES = *.a *.lib *.so *.so.* *.dylib *.dll *.def *.dll.a *.exp
define RULES
clean::
$(RM) $(OBJS) $(OBJS:.o=.d)
$(RM) $(HOSTPROGS)
$(RM) $(TOOLS)
endef
$(eval $(RULES))
-include $(wildcard $(OBJS:.o=.d) $(TESTOBJS:.o=.d))

26
configure vendored
View File

@@ -1168,7 +1168,6 @@ HAVE_LIST="
dlfcn_h
dlopen
dos_paths
dxva_h
ebp_available
ebx_available
exp2
@@ -2560,7 +2559,7 @@ check_host_cflags -std=c99
check_host_cflags -Wall
case "$arch" in
alpha|ia64|mips|parisc|ppc|sparc)
alpha|ia64|mips|parisc|sparc)
spic=$shared
;;
x86)
@@ -2900,14 +2899,17 @@ elif enabled ppc; then
check_cc <<EOF || disable altivec
$inc_altivec_h
int main(void) {
vector signed int v1 = (vector signed int) { 0 };
vector signed int v2 = (vector signed int) { 1 };
v1 = vec_add(v1, v2);
vector signed int v1, v2, v3;
v1 = vec_add(v2,v3);
return 0;
}
EOF
enabled altivec || warn "Altivec disabled, possibly missing --cpu flag"
# check if our compiler supports braces for vector declarations
check_cc <<EOF || die "You need a compiler that supports {} in AltiVec vector declarations."
$inc_altivec_h
int main (void) { (vector int) {1}; return 0; }
EOF
fi
elif enabled sparc; then
@@ -3045,7 +3047,6 @@ check_func_headers windows.h MapViewOfFile
check_func_headers windows.h VirtualAlloc
check_header dlfcn.h
check_header dxva.h
check_header dxva2api.h -D_WIN32_WINNT=0x0600
check_header libcrystalhd/libcrystalhd_if.h
check_header malloc.h
@@ -3200,14 +3201,7 @@ makeinfo --version > /dev/null 2>&1 && enable makeinfo || disable makeinfo
check_header linux/fb.h
check_header linux/videodev.h
check_header linux/videodev2.h
check_cc <<EOF && enable_safe struct_v4l2_frmivalenum_discrete
#include <linux/videodev2.h>
int main(void) {
struct v4l2_frmsizeenum vfse;
vfse.discrete.width = 0;
return 0;
}
EOF
check_struct linux/videodev2.h "struct v4l2_frmivalenum" discrete
check_header sys/videoio.h
@@ -3360,13 +3354,11 @@ elif enabled gcc; then
check_cflags -fno-tree-vectorize
check_cflags -Werror=implicit-function-declaration
check_cflags -Werror=missing-prototypes
check_cflags -Werror=return-type
elif enabled llvm_gcc; then
check_cflags -mllvm -stack-alignment=16
elif enabled clang; then
check_cflags -mllvm -stack-alignment=16
check_cflags -Qunused-arguments
check_cflags -Werror=return-type
elif enabled armcc; then
# 2523: use of inline assembler is deprecated
add_cflags -W${armcc_opt},--diag_suppress=2523

View File

@@ -16,34 +16,21 @@ API changes, most recent first:
2012-01-24 - xxxxxxx - lavfi 2.60.100
Add avfilter_graph_dump.
2012-01-25 - lavf 53.31.100 / 53.22.0
3c5fe5b / f1caf01 Allow doing av_write_frame(ctx, NULL) for flushing possible
2012-01-25 - lavf 53.22.0
f1caf01 Allow doing av_write_frame(ctx, NULL) for flushing possible
buffered data within a muxer. Added AVFMT_ALLOW_FLUSH for
muxers supporting it (av_write_frame makes sure it is called
only for muxers with this flag).
2012-03-04 - 7f3f855 - lavu 51.22.1 - error.h
Add AVERROR_UNKNOWN
2012-02-29 - 2ad77c6 - lavf 53.21.1
Add avformat_get_riff_video_tags() and avformat_get_riff_audio_tags().
2012-02-29 - a1556d3 - lavu 51.22.0 - intfloat.h
Add a new installed header libavutil/intfloat.h with int/float punning
functions.
2012-02-17 - 350d06d - lavc 53.35.0
Add avcodec_is_open() function.
2012-01-15 - lavc 53.56.105 / 53.34.0
2012-01-15 - lavc 53.34.0
New audio encoding API:
67f5650 / b2c75b6 Add CODEC_CAP_VARIABLE_FRAME_SIZE capability for use by audio
b2c75b6 Add CODEC_CAP_VARIABLE_FRAME_SIZE capability for use by audio
encoders.
67f5650 / 5ee5fa0 Add avcodec_fill_audio_frame() as a convenience function.
67f5650 / b2c75b6 Add avcodec_encode_audio2() and deprecate avcodec_encode_audio().
5ee5fa0 Add avcodec_fill_audio_frame() as a convenience function.
b2c75b6 Add avcodec_encode_audio2() and deprecate avcodec_encode_audio().
Add AVCodec.encode2().
2012-01-12 - b18e17e / 3167dc9 - lavfi 2.59.100 / 2.15.0
2012-01-12 - 3167dc9 - lavfi 2.15.0
Add a new installed header -- libavfilter/version.h -- with version macros.
2011-12-08 - a502939 - lavfi 2.52.0
@@ -64,37 +51,37 @@ API changes, most recent first:
2011-10-20 - b35e9e1 - lavu 51.22.0
Add av_strtok() to avstring.h.
2012-01-03 - ad1c8dd / b73ec05 - lavu 51.34.100 / 51.21.0
2011-01-03 - b73ec05 - lavu 51.21.0
Add av_popcount64
2011-12-18 - 7c29313 / 8400b12 - lavc 53.46.1 / 53.28.1
2011-12-18 - 8400b12 - lavc 53.28.1
Deprecate AVFrame.age. The field is unused.
2011-12-12 - 8bc7fe4 / 5266045 - lavf 53.25.0 / 53.17.0
2011-12-12 - 5266045 - lavf 53.17.0
Add avformat_close_input().
Deprecate av_close_input_file() and av_close_input_stream().
2011-12-02 - e4de716 / 0eea212 - lavc 53.40.0 / 53.25.0
2011-12-02 - 0eea212 - lavc 53.25.0
Add nb_samples and extended_data fields to AVFrame.
Deprecate AVCODEC_MAX_AUDIO_FRAME_SIZE.
Deprecate avcodec_decode_audio3() in favor of avcodec_decode_audio4().
avcodec_decode_audio4() writes output samples to an AVFrame, which allows
audio decoders to use get_buffer().
2011-12-04 - e4de716 / 560f773 - lavc 53.40.0 / 53.24.0
2011-12-04 - 560f773 - lavc 53.24.0
Change AVFrame.data[4]/base[4]/linesize[4]/error[4] to [8] at next major bump.
Change AVPicture.data[4]/linesize[4] to [8] at next major bump.
Change AVCodecContext.error[4] to [8] at next major bump.
Add AV_NUM_DATA_POINTERS to simplify the bump transition.
2011-11-23 - 8e576d5 / bbb46f3 - lavu 51.27.0 / 51.18.0
2011-11-23 - bbb46f3 - lavu 51.18.0
Add av_samples_get_buffer_size(), av_samples_fill_arrays(), and
av_samples_alloc(), to samplefmt.h.
2011-11-23 - 8e576d5 / 8889cc4 - lavu 51.27.0 / 51.17.0
2011-11-23 - 8889cc4 - lavu 51.17.0
Add planar sample formats and av_sample_fmt_is_planar() to samplefmt.h.
2011-11-19 - dbb38bc / f3a29b7 - lavc 53.36.0 / 53.21.0
2011-11-19 - f3a29b7 - lavc 53.21.0
Move some AVCodecContext fields to a new private struct, AVCodecInternal,
which is accessed from a new field, AVCodecContext.internal.
- fields moved:
@@ -102,55 +89,55 @@ API changes, most recent first:
AVCodecContext.internal_buffer_count --> AVCodecInternal.buffer_count
AVCodecContext.is_copy --> AVCodecInternal.is_copy
2011-11-16 - 8709ba9 / 6270671 - lavu 51.26.0 / 51.16.0
2011-11-16 - 6270671 - lavu 51.16.0
Add av_timegm()
2011-11-13 - lavf 53.21.0 / 53.15.0
2011-11-13 - lavf 53.15.0
New interrupt callback API, allowing per-AVFormatContext/AVIOContext
interrupt callbacks.
5f268ca / 6aa0b98 Add AVIOInterruptCB struct and the interrupt_callback field to
6aa0b98 Add AVIOInterruptCB struct and the interrupt_callback field to
AVFormatContext.
5f268ca / 1dee0ac Add avio_open2() with additional parameters. Those are
1dee0ac Add avio_open2() with additional parameters. Those are
an interrupt callback and an options AVDictionary.
This will allow passing AVOptions to protocols after lavf
54.0.
2011-11-06 - 13b7781 / ba04ecf - lavu 51.24.0 / 51.14.0
2011-11-06 - ba04ecf - lavu 51.14.0
Add av_strcasecmp() and av_strncasecmp() to avstring.h.
2011-11-06 - 13b7781 / 07b172f - lavu 51.24.0 / 51.13.0
2011-11-06 - 07b172f - lavu 51.13.0
Add av_toupper()/av_tolower()
2011-11-05 - d8cab5c / b6d08f4 - lavf 53.19.0 / 53.13.0
2011-11-05 - b6d08f4 - lavf 53.13.0
Add avformat_network_init()/avformat_network_uninit()
2011-10-27 - 6faf0a2 / 512557b - lavc 53.24.0 / 53.15.0
2011-10-27 - 512557b - lavc 53.15.0
Remove avcodec_parse_frame.
Deprecate AVCodecContext.parse_only and CODEC_CAP_PARSE_ONLY.
2011-10-19 - d049257 / 569129a - lavf 53.17.0 / 53.10.0
2011-10-19 - 569129a - lavf 53.10.0
Add avformat_new_stream(). Deprecate av_new_stream().
2011-10-13 - 91eb1b1 / b631fba - lavf 53.16.0 / 53.9.0
2011-10-13 - b631fba - lavf 53.9.0
Add AVFMT_NO_BYTE_SEEK AVInputFormat flag.
2011-10-12 - lavu 51.21.0 / 51.12.0
2011-10-12 - lavu 51.12.0
AVOptions API rewrite.
- f884ef0 / 145f741 FF_OPT_TYPE* renamed to AV_OPT_TYPE_*
- 145f741 FF_OPT_TYPE* renamed to AV_OPT_TYPE_*
- new setting/getting functions with slightly different semantics:
f884ef0 / dac66da av_set_string3 -> av_opt_set
dac66da av_set_string3 -> av_opt_set
av_set_double -> av_opt_set_double
av_set_q -> av_opt_set_q
av_set_int -> av_opt_set_int
f884ef0 / 41d9d51 av_get_string -> av_opt_get
41d9d51 av_get_string -> av_opt_get
av_get_double -> av_opt_get_double
av_get_q -> av_opt_get_q
av_get_int -> av_opt_get_int
- f884ef0 / 8c5dcaa trivial rename av_next_option -> av_opt_next
- f884ef0 / 641c7af new functions - av_opt_child_next, av_opt_child_class_next
- 8c5dcaa trivial rename av_next_option -> av_opt_next
- 641c7af new functions - av_opt_child_next, av_opt_child_class_next
and av_opt_find2()
2011-09-22 - a70e787 - lavu 51.17.0
@@ -196,31 +183,31 @@ API changes, most recent first:
2011-08-20 - 69e2c1a - lavu 51.13.0
Add av_get_media_type_string().
2011-09-03 - 1889c67 / fb4ca26 - lavc 53.13.0
2011-09-03 - fb4ca26 - lavc 53.13.0
lavf 53.11.0
lsws 2.1.0
Add {avcodec,avformat,sws}_get_class().
2011-08-03 - 1889c67 / c11fb82 - lavu 51.15.0
2011-08-03 - c11fb82 - lavu 51.15.0
Add AV_OPT_SEARCH_FAKE_OBJ flag for av_opt_find() function.
2011-08-14 - 323b930 - lavu 51.12.0
Add av_fifo_peek2(), deprecate av_fifo_peek().
2011-08-26 - lavu 51.14.0 / 51.9.0
- 976a8b2 / add41de..976a8b2 / abc78a5 Do not include intfloat_readwrite.h,
2011-08-26 - lavu 51.9.0
- add41de..abc78a5 Do not include intfloat_readwrite.h,
mathematics.h, rational.h, pixfmt.h, or log.h from avutil.h.
2011-08-16 - 27fbe31 / 48f9e45 - lavf 53.11.0 / 53.8.0
2011-08-16 - 48f9e45 - lavf 53.8.0
Add avformat_query_codec().
2011-08-16 - 27fbe31 / bca06e7 - lavc 53.11.0
2011-08-16 - bca06e7 - lavc 53.11.0
Add avcodec_get_type().
2011-08-06 - 0cb233c / 2f63440 - lavf 53.7.0
2011-08-06 - 2f63440 - lavf 53.7.0
Add error_recognition to AVFormatContext.
2011-08-02 - 1d186e9 / 9d39cbf - lavc 53.9.1
2011-08-02 - 9d39cbf - lavc 53.9.1
Add AV_PKT_FLAG_CORRUPT AVPacket flag.
2011-07-16 - b57df29 - lavfi 2.27.0
@@ -231,10 +218,10 @@ API changes, most recent first:
avfilter_set_common_packing_formats()
avfilter_all_packing_formats()
2011-07-10 - 3602ad7 / a67c061 - lavf 53.6.0
2011-07-10 - a67c061 - lavf 53.6.0
Add avformat_find_stream_info(), deprecate av_find_stream_info().
2011-07-10 - 3602ad7 / 0b950fe - lavc 53.8.0
2011-07-10 - 0b950fe - lavc 53.8.0
Add avcodec_open2(), deprecate avcodec_open().
2011-07-01 - b442ca6 - lavf 53.5.0 - avformat.h
@@ -273,35 +260,35 @@ API changes, most recent first:
2011-06-12 - 6119b23 - lavfi 2.16.0 - avfilter_graph_parse()
Change avfilter_graph_parse() signature.
2011-06-23 - 686959e / 67e9ae1 - lavu 51.10.0 / 51.8.0 - attributes.h
2011-06-23 - 67e9ae1 - lavu 51.8.0 - attributes.h
Add av_printf_format().
2011-06-16 - 2905e3f / 05e84c9, 2905e3f / 25de595 - lavf 53.4.0 / 53.2.0 - avformat.h
2011-06-16 - 05e84c9, 25de595 - lavf 53.2.0 - avformat.h
Add avformat_open_input and avformat_write_header().
Deprecate av_open_input_stream, av_open_input_file,
AVFormatParameters and av_write_header.
2011-06-16 - 2905e3f / 7e83e1c, 2905e3f / dc59ec5 - lavu 51.9.0 / 51.7.0 - opt.h
2011-06-16 - 7e83e1c, dc59ec5 - lavu 51.7.0 - opt.h
Add av_opt_set_dict() and av_opt_find().
Deprecate av_find_opt().
Add AV_DICT_APPEND flag.
2011-06-10 - 45fb647 / cb7c11c - lavu 51.6.0 - opt.h
2011-06-10 - cb7c11c - lavu 51.6.0 - opt.h
Add av_opt_flag_is_set().
2011-06-10 - c381960 - lavfi 2.15.0 - avfilter_get_audio_buffer_ref_from_arrays
Add avfilter_get_audio_buffer_ref_from_arrays() to avfilter.h.
2011-06-09 - f9ecb84 / d9f80ea - lavu 51.8.0 - AVMetadata
2011-06-09 - d9f80ea - lavu 51.8.0 - AVMetadata
Move AVMetadata from lavf to lavu and rename it to
AVDictionary -- new installed header dict.h.
All av_metadata_* functions renamed to av_dict_*.
2011-06-07 - d552f61 / a6703fa - lavu 51.8.0 - av_get_bytes_per_sample()
2011-06-07 - a6703fa - lavu 51.8.0 - av_get_bytes_per_sample()
Add av_get_bytes_per_sample() in libavutil/samplefmt.h.
Deprecate av_get_bits_per_sample_fmt().
2011-06-05 - f956924 / b39b062 - lavu 51.8.0 - opt.h
2011-06-05 - b39b062 - lavu 51.8.0 - opt.h
Add av_opt_free convenience function.
2011-06-06 - 95a0242 - lavfi 2.14.0 - AVFilterBufferRefAudioProps
@@ -331,7 +318,7 @@ API changes, most recent first:
Add av_get_pix_fmt_name() in libavutil/pixdesc.h, and deprecate
avcodec_get_pix_fmt_name() in libavcodec/avcodec.h in its favor.
2011-05-25 - 39e4206 / 30315a8 - lavf 53.3.0 - avformat.h
2011-05-25 - 30315a8 - lavf 53.3.0 - avformat.h
Add fps_probe_size to AVFormatContext.
2011-05-22 - 5ecdfd0 - lavf 53.2.0 - avformat.h
@@ -347,10 +334,10 @@ API changes, most recent first:
2011-05-14 - 9fdf772 - lavfi 2.6.0 - avcodec.h
Add avfilter_get_video_buffer_ref_from_frame() to libavfilter/avcodec.h.
2011-05-18 - 75a37b5 / 64150ff - lavc 53.7.0 - AVCodecContext.request_sample_fmt
2011-05-18 - 64150ff - lavc 53.7.0 - AVCodecContext.request_sample_fmt
Add request_sample_fmt field to AVCodecContext.
2011-05-10 - 59eb12f / 188dea1 - lavc 53.6.0 - avcodec.h
2011-05-10 - 188dea1 - lavc 53.6.0 - avcodec.h
Deprecate AVLPCType and the following fields in
AVCodecContext: lpc_coeff_precision, prediction_order_method,
min_partition_order, max_partition_order, lpc_type, lpc_passes.
@@ -380,81 +367,81 @@ API changes, most recent first:
Add av_dynarray_add function for adding
an element to a dynamic array.
2011-04-26 - d7e5aeb / bebe72f - lavu 51.1.0 - avutil.h
2011-04-26 - bebe72f - lavu 51.1.0 - avutil.h
Add AVPictureType enum and av_get_picture_type_char(), deprecate
FF_*_TYPE defines and av_get_pict_type_char() defined in
libavcodec/avcodec.h.
2011-04-26 - d7e5aeb / 10d3940 - lavfi 2.3.0 - avfilter.h
2011-04-26 - 10d3940 - lavfi 2.3.0 - avfilter.h
Add pict_type and key_frame fields to AVFilterBufferRefVideo.
2011-04-26 - d7e5aeb / 7a11c82 - lavfi 2.2.0 - vsrc_buffer
2011-04-26 - 7a11c82 - lavfi 2.2.0 - vsrc_buffer
Add sample_aspect_ratio fields to vsrc_buffer arguments
2011-04-21 - 8772156 / 94f7451 - lavc 53.1.0 - avcodec.h
2011-04-21 - 94f7451 - lavc 53.1.0 - avcodec.h
Add CODEC_CAP_SLICE_THREADS for codecs supporting sliced threading.
2011-04-15 - lavc 52.120.0 - avcodec.h
AVPacket structure got additional members for passing side information:
c407984 / 4de339e introduce side information for AVPacket
c407984 / 2d8591c make containers pass palette change in AVPacket
4de339e introduce side information for AVPacket
2d8591c make containers pass palette change in AVPacket
2011-04-12 - lavf 52.107.0 - avio.h
Avio cleanup, part II - deprecate the entire URLContext API:
c55780d / 175389c add avio_check as a replacement for url_exist
9891004 / ff1ec0c add avio_pause and avio_seek_time as replacements
175389c add avio_check as a replacement for url_exist
ff1ec0c add avio_pause and avio_seek_time as replacements
for _av_url_read_fseek/fpause
d4d0932 / cdc6a87 deprecate av_protocol_next(), avio_enum_protocols
cdc6a87 deprecate av_protocol_next(), avio_enum_protocols
should be used instead.
c88caa5 / 80c6e23 rename url_set_interrupt_cb->avio_set_interrupt_cb.
c88caa5 / f87b1b3 rename open flags: URL_* -> AVIO_*
d4d0932 / f8270bb add avio_enum_protocols.
d4d0932 / 5593f03 deprecate URLProtocol.
d4d0932 / c486dad deprecate URLContext.
d4d0932 / 026e175 deprecate the typedef for URLInterruptCB
c88caa5 / 8e76a19 deprecate av_register_protocol2.
11d7841 / b840484 deprecate URL_PROTOCOL_FLAG_NESTED_SCHEME
11d7841 / 1305d93 deprecate av_url_read_seek
11d7841 / fa104e1 deprecate av_url_read_pause
434f248 / 727c7aa deprecate url_get_filename().
434f248 / 5958df3 deprecate url_max_packet_size().
434f248 / 1869ea0 deprecate url_get_file_handle().
434f248 / 32a97d4 deprecate url_filesize().
434f248 / e52a914 deprecate url_close().
434f248 / 58a48c6 deprecate url_seek().
434f248 / 925e908 deprecate url_write().
434f248 / dce3756 deprecate url_read_complete().
434f248 / bc371ac deprecate url_read().
434f248 / 0589da0 deprecate url_open().
434f248 / 62eaaea deprecate url_connect.
434f248 / 5652bb9 deprecate url_alloc.
434f248 / 333e894 deprecate url_open_protocol
434f248 / e230705 deprecate url_poll and URLPollEntry
80c6e23 rename url_set_interrupt_cb->avio_set_interrupt_cb.
f87b1b3 rename open flags: URL_* -> AVIO_*
f8270bb add avio_enum_protocols.
5593f03 deprecate URLProtocol.
c486dad deprecate URLContext.
026e175 deprecate the typedef for URLInterruptCB
8e76a19 deprecate av_register_protocol2.
b840484 deprecate URL_PROTOCOL_FLAG_NESTED_SCHEME
1305d93 deprecate av_url_read_seek
fa104e1 deprecate av_url_read_pause
727c7aa deprecate url_get_filename().
5958df3 deprecate url_max_packet_size().
1869ea0 deprecate url_get_file_handle().
32a97d4 deprecate url_filesize().
e52a914 deprecate url_close().
58a48c6 deprecate url_seek().
925e908 deprecate url_write().
dce3756 deprecate url_read_complete().
bc371ac deprecate url_read().
0589da0 deprecate url_open().
62eaaea deprecate url_connect.
5652bb9 deprecate url_alloc.
333e894 deprecate url_open_protocol
e230705 deprecate url_poll and URLPollEntry
2011-04-08 - lavf 52.106.0 - avformat.h
Minor avformat.h cleanup:
d4d0932 / a9bf9d8 deprecate av_guess_image2_codec
d4d0932 / c3675df rename avf_sdp_create->av_sdp_create
a9bf9d8 deprecate av_guess_image2_codec
c3675df rename avf_sdp_create->av_sdp_create
2011-04-03 - lavf 52.105.0 - avio.h
Large-scale renaming/deprecating of AVIOContext-related functions:
2cae980 / 724f6a0 deprecate url_fdopen
2cae980 / 403ee83 deprecate url_open_dyn_packet_buf
2cae980 / 6dc7d80 rename url_close_dyn_buf -> avio_close_dyn_buf
2cae980 / b92c545 rename url_open_dyn_buf -> avio_open_dyn_buf
2cae980 / 8978fed introduce an AVIOContext.seekable field as a replacement for
724f6a0 deprecate url_fdopen
403ee83 deprecate url_open_dyn_packet_buf
6dc7d80 rename url_close_dyn_buf -> avio_close_dyn_buf
b92c545 rename url_open_dyn_buf -> avio_open_dyn_buf
8978fed introduce an AVIOContext.seekable field as a replacement for
AVIOContext.is_streamed and url_is_streamed()
1caa412 / b64030f deprecate get_checksum()
1caa412 / 4c4427a deprecate init_checksum()
2fd41c9 / 4ec153b deprecate udp_set_remote_url/get_local_port
4fa0e24 / 933e90a deprecate av_url_read_fseek/fpause
4fa0e24 / 8d9769a deprecate url_fileno
0fecf26 / b7f2fdd rename put_flush_packet -> avio_flush
0fecf26 / 35f1023 deprecate url_close_buf
0fecf26 / 83fddae deprecate url_open_buf
0fecf26 / d9d86e0 rename url_fprintf -> avio_printf
0fecf26 / 59f65d9 deprecate url_setbufsize
6947b0c / 3e68b3b deprecate url_ferror
b64030f deprecate get_checksum()
4c4427a deprecate init_checksum()
4ec153b deprecate udp_set_remote_url/get_local_port
933e90a deprecate av_url_read_fseek/fpause
8d9769a deprecate url_fileno
b7f2fdd rename put_flush_packet -> avio_flush
35f1023 deprecate url_close_buf
83fddae deprecate url_open_buf
d9d86e0 rename url_fprintf -> avio_printf
59f65d9 deprecate url_setbufsize
3e68b3b deprecate url_ferror
e8bb2e2 deprecate url_fget_max_packet_size
76aa876 rename url_fsize -> avio_size
e519753 deprecate url_fgetc
@@ -475,7 +462,7 @@ API changes, most recent first:
b3db9ce deprecate get_partial_buffer
8d9ac96 rename av_alloc_put_byte -> avio_alloc_context
2011-03-25 - 27ef7b1 / 34b47d7 - lavc 52.115.0 - AVCodecContext.audio_service_type
2011-03-25 - 34b47d7 - lavc 52.115.0 - AVCodecContext.audio_service_type
Add audio_service_type field to AVCodecContext.
2011-03-17 - e309fdc - lavu 50.40.0 - pixfmt.h
@@ -513,11 +500,11 @@ API changes, most recent first:
2011-02-10 - 12c14cd - lavf 52.99.0 - AVStream.disposition
Add AV_DISPOSITION_HEARING_IMPAIRED and AV_DISPOSITION_VISUAL_IMPAIRED.
2011-02-09 - c0b102c - lavc 52.112.0 - avcodec_thread_init()
2011-02-09 - 5592734 - lavc 52.112.0 - avcodec_thread_init()
Deprecate avcodec_thread_init()/avcodec_thread_free() use; instead
set thread_count before calling avcodec_open.
2011-02-09 - 37b00b4 - lavc 52.111.0 - threading API
2011-02-09 - 778b08a - lavc 52.111.0 - threading API
Add CODEC_CAP_FRAME_THREADS with new restrictions on get_buffer()/
release_buffer()/draw_horiz_band() callbacks for appropriate codecs.
Add thread_type and active_thread_type fields to AVCodecContext.

View File

@@ -187,8 +187,8 @@ the following snippet into your @file{.vimrc}:
set expandtab
set shiftwidth=4
set softtabstop=4
" Allow tabs in Makefiles.
autocmd FileType make,automake set noexpandtab shiftwidth=8 softtabstop=8
" allow tabs in Makefiles
autocmd FileType make set noexpandtab shiftwidth=8 softtabstop=8
" Trailing whitespace and tabs are forbidden, so highlight them.
highlight ForbiddenWhitespace ctermbg=red guibg=red
match ForbiddenWhitespace /\s\+$\|\t/

4561
doc/ffmpeg-mt-authorship.txt Normal file

File diff suppressed because it is too large

View File

@@ -407,10 +407,6 @@ prefix is ``ffmpeg2pass''. The complete file name will be
@file{PREFIX-N.log}, where N is a number specific to the output
stream
Note that this option is overwritten by a local option of the same name
when using @code{-vcodec libx264}. That option maps to the x264 option stats
which has a different syntax.
@item -vlang @var{code}
Set the ISO 639 language code (3 letters) of the current video stream.

View File

@@ -82,7 +82,7 @@ Follows a BNF description for the filtergraph syntax:
@var{LINKLABEL} ::= "[" @var{NAME} "]"
@var{LINKLABELS} ::= @var{LINKLABEL} [@var{LINKLABELS}]
@var{FILTER_ARGUMENTS} ::= sequence of chars (eventually quoted)
@var{FILTER} ::= [@var{LINKLABELS}] @var{NAME} ["=" @var{FILTER_ARGUMENTS}] [@var{LINKLABELS}]
@var{FILTER} ::= [@var{LINKNAMES}] @var{NAME} ["=" @var{ARGUMENTS}] [@var{LINKNAMES}]
@var{FILTERCHAIN} ::= @var{FILTER} [,@var{FILTERCHAIN}]
@var{FILTERGRAPH} ::= @var{FILTERCHAIN} [;@var{FILTERGRAPH}]
@end example
@@ -2549,9 +2549,6 @@ For example:
will create two separate outputs from the same input, one cropped and
one padded.
@section swapuv
Swap U & V plane.
@section thumbnail
Select the most representative frame in a given sequence of consecutive frames.

View File

@@ -24,7 +24,7 @@ a mail for every change to every issue.
The subscription URL for the ffmpeg-trac list is:
http(s)://ffmpeg.org/mailman/listinfo/ffmpeg-trac
The URL of the webinterface of the tracker is:
http(s)://trac.ffmpeg.org
http(s)://ffmpeg.org/trac/ffmpeg
Type:
-----

View File

@@ -505,7 +505,7 @@ static int alloc_buffer(AVCodecContext *s, InputStream *ist, FrameBuffer **pbuf)
const int v_shift = i==0 ? 0 : v_chroma_shift;
if (s->flags & CODEC_FLAG_EMU_EDGE)
buf->data[i] = buf->base[i];
else if (buf->base[i])
else
buf->data[i] = buf->base[i] +
FFALIGN((buf->linesize[i]*edge >> v_shift) +
(pixel_size*edge >> h_shift), 32);
@@ -2626,35 +2626,32 @@ static int transcode_init(OutputFile *output_files, int nb_output_files,
break;
}
/* two pass mode */
if (codec->flags & (CODEC_FLAG_PASS1 | CODEC_FLAG_PASS2)) {
if (codec->codec_id != CODEC_ID_H264 &&
(codec->flags & (CODEC_FLAG_PASS1 | CODEC_FLAG_PASS2))) {
char logfilename[1024];
FILE *f;
snprintf(logfilename, sizeof(logfilename), "%s-%d.log",
pass_logfilename_prefix ? pass_logfilename_prefix : DEFAULT_PASS_LOGFILENAME_PREFIX,
i);
if (!strcmp(ost->enc->name, "libx264")) {
av_dict_set(&ost->opts, "stats", logfilename, AV_DICT_DONT_OVERWRITE);
} else {
if (codec->flags & CODEC_FLAG_PASS2) {
char *logbuffer;
size_t logbuffer_size;
if (cmdutils_read_file(logfilename, &logbuffer, &logbuffer_size) < 0) {
av_log(NULL, AV_LOG_FATAL, "Error reading log file '%s' for pass-2 encoding\n",
logfilename);
exit_program(1);
}
codec->stats_in = logbuffer;
if (codec->flags & CODEC_FLAG_PASS2) {
char *logbuffer;
size_t logbuffer_size;
if (cmdutils_read_file(logfilename, &logbuffer, &logbuffer_size) < 0) {
av_log(NULL, AV_LOG_FATAL, "Error reading log file '%s' for pass-2 encoding\n",
logfilename);
exit_program(1);
}
if (codec->flags & CODEC_FLAG_PASS1) {
f = fopen(logfilename, "wb");
if (!f) {
av_log(NULL, AV_LOG_FATAL, "Cannot write log file '%s' for pass-1 encoding: %s\n",
logfilename, strerror(errno));
exit_program(1);
}
ost->logfile = f;
codec->stats_in = logbuffer;
}
if (codec->flags & CODEC_FLAG_PASS1) {
f = fopen(logfilename, "wb");
if (!f) {
av_log(NULL, AV_LOG_FATAL, "Cannot write log file '%s' for pass-1 encoding: %s\n",
logfilename, strerror(errno));
exit_program(1);
}
ost->logfile = f;
}
}
}
@@ -4068,11 +4065,9 @@ static OutputStream *new_video_stream(OptionsContext *o, AVFormatContext *oc)
if (do_pass) {
if (do_pass & 1) {
video_enc->flags |= CODEC_FLAG_PASS1;
av_dict_set(&ost->opts, "flags", "+pass1", AV_DICT_APPEND);
}
if (do_pass & 2) {
video_enc->flags |= CODEC_FLAG_PASS2;
av_dict_set(&ost->opts, "flags", "+pass2", AV_DICT_APPEND);
}
}

View File

@@ -1790,10 +1790,6 @@ int main(int argc, char **argv)
if (!print_format)
print_format = av_strdup("default");
if (!print_format) {
ret = AVERROR(ENOMEM);
goto end;
}
w_name = av_strtok(print_format, "=", &buf);
w_args = buf;

View File

@@ -562,11 +562,9 @@ static void start_multicast(void)
default_port = 6000;
for(stream = first_stream; stream != NULL; stream = stream->next) {
if (stream->is_multicast) {
unsigned random0 = av_lfg_get(&random_state);
unsigned random1 = av_lfg_get(&random_state);
/* open the RTP connection */
snprintf(session_id, sizeof(session_id), "%08x%08x",
random0, random1);
av_lfg_get(&random_state), av_lfg_get(&random_state));
/* choose a port if none given */
if (stream->multicast_port == 0) {
@@ -3088,12 +3086,9 @@ static void rtsp_cmd_setup(HTTPContext *c, const char *url,
found:
/* generate session id if needed */
if (h->session_id[0] == '\0') {
unsigned random0 = av_lfg_get(&random_state);
unsigned random1 = av_lfg_get(&random_state);
if (h->session_id[0] == '\0')
snprintf(h->session_id, sizeof(h->session_id), "%08x%08x",
random0, random1);
}
av_lfg_get(&random_state), av_lfg_get(&random_state));
/* find rtp session, and create it if none found */
rtp_c = find_rtp_session(h->session_id);
@@ -3462,9 +3457,6 @@ static AVStream *add_av_stream1(FFStream *stream, AVCodecContext *codec, int cop
{
AVStream *fst;
if(stream->nb_streams >= FF_ARRAY_ELEMS(stream->streams))
return NULL;
fst = av_mallocz(sizeof(AVStream));
if (!fst)
return NULL;
@@ -3810,9 +3802,6 @@ static void add_codec(FFStream *stream, AVCodecContext *av)
{
AVStream *st;
if(stream->nb_streams >= FF_ARRAY_ELEMS(stream->streams))
return;
/* compute default parameters */
switch(av->codec_type) {
case AVMEDIA_TYPE_AUDIO:

View File

@@ -25,7 +25,6 @@
*/
#include "libavutil/intreadwrite.h"
#include "libavutil/avassert.h"
#include "avcodec.h"
#include "dsputil.h"
#include "get_bits.h"
@@ -348,10 +347,6 @@ static void decode_p_block(FourXContext *f, uint16_t *dst, uint16_t *src, int lo
decode_p_block(f, dst , src , log2w, log2h, stride);
decode_p_block(f, dst + (1<<log2w), src + (1<<log2w), log2w, log2h, stride);
}else if(code == 3 && f->version<2){
if (start > src || src > end) {
av_log(f->avctx, AV_LOG_ERROR, "mv out of pic\n");
return;
}
mcdc(dst, src, log2w, h, stride, 1, 0);
}else if(code == 4){
if (f->g.buffer_end - f->g.buffer < 1){
@@ -373,10 +368,6 @@ static void decode_p_block(FourXContext *f, uint16_t *dst, uint16_t *src, int lo
av_log(f->avctx, AV_LOG_ERROR, "wordstream overread\n");
return;
}
if (start > src || src > end) {
av_log(f->avctx, AV_LOG_ERROR, "mv out of pic\n");
return;
}
mcdc(dst, src, log2w, h, stride, 0, bytestream2_get_le16(&f->g2));
}else if(code == 6){
if (f->g2.buffer_end - f->g2.buffer < 2){
@@ -674,8 +665,8 @@ static int decode_i2_frame(FourXContext *f, const uint8_t *buf, int length){
color[0]= bytestream2_get_le16u(&g3);
color[1]= bytestream2_get_le16u(&g3);
if(color[0]&0x8000) av_log(f->avctx, AV_LOG_ERROR, "unk bit 1\n");
if(color[1]&0x8000) av_log(f->avctx, AV_LOG_ERROR, "unk bit 2\n");
if(color[0]&0x8000) av_log(NULL, AV_LOG_ERROR, "unk bit 1\n");
if(color[1]&0x8000) av_log(NULL, AV_LOG_ERROR, "unk bit 2\n");
color[2]= mix(color[0], color[1]);
color[3]= mix(color[1], color[0]);
@@ -703,10 +694,7 @@ static int decode_i_frame(FourXContext *f, const uint8_t *buf, int length){
unsigned int prestream_size;
const uint8_t *prestream;
if (bitstream_size > (1 << 26))
return AVERROR_INVALIDDATA;
if (length < bitstream_size + 12) {
if (bitstream_size > (1<<26) || length < bitstream_size + 12) {
av_log(f->avctx, AV_LOG_ERROR, "packet size too small\n");
return AVERROR_INVALIDDATA;
}
@@ -714,19 +702,15 @@ static int decode_i_frame(FourXContext *f, const uint8_t *buf, int length){
prestream_size = 4 * AV_RL32(buf + bitstream_size + 4);
prestream = buf + bitstream_size + 12;
if(prestream_size + bitstream_size + 12 != length
|| prestream_size > (1<<26)){
if (prestream_size > (1<<26) ||
prestream_size != length - (bitstream_size + 12)){
av_log(f->avctx, AV_LOG_ERROR, "size mismatch %d %d %d\n", prestream_size, bitstream_size, length);
return -1;
}
prestream = read_huffman_tables(f, prestream, prestream_size);
if (!prestream) {
av_log(f->avctx, AV_LOG_ERROR, "Error reading Huffman tables.\n");
return AVERROR_INVALIDDATA;
}
av_assert0(prestream <= buf + length);
prestream= read_huffman_tables(f, prestream, buf + length - prestream);
if (!prestream)
return -1;
init_get_bits(&f->gb, buf + 4, 8*bitstream_size);
@@ -821,9 +805,6 @@ static int decode_frame(AVCodecContext *avctx,
av_log(f->avctx, AV_LOG_ERROR, "cframe id mismatch %d %d\n", id, avctx->frame_number);
}
if (f->version <= 1)
return AVERROR_INVALIDDATA;
cfrm->size= cfrm->id= 0;
frame_4cc= AV_RL32("pfrm");
}else
@@ -867,7 +848,6 @@ static int decode_frame(AVCodecContext *avctx,
av_log(avctx, AV_LOG_ERROR, "get_buffer() failed\n");
return -1;
}
memset(f->last_picture.data[0], 0, avctx->height * FFABS(f->last_picture.linesize[0]));
}
p->pict_type= AV_PICTURE_TYPE_P;
@@ -935,7 +915,7 @@ static av_cold int decode_end(AVCodecContext *avctx){
av_freep(&f->cfrm[i].data);
f->cfrm[i].allocated_size= 0;
}
ff_free_vlc(&f->pre_vlc);
free_vlc(&f->pre_vlc);
if(f->current_picture.data[0])
avctx->release_buffer(avctx, &f->current_picture);
if(f->last_picture.data[0])

View File

@@ -47,7 +47,7 @@ typedef struct EightSvxContext {
/* buffer used to store the whole audio decoded/interleaved chunk,
* which is sent with the first packet */
uint8_t *samples;
int64_t samples_size;
size_t samples_size;
int samples_idx;
} EightSvxContext;

View File

@@ -578,8 +578,7 @@ OBJS-$(CONFIG_ADPCM_YAMAHA_ENCODER) += adpcmenc.o adpcm_data.o
# libavformat dependencies
OBJS-$(CONFIG_ADTS_MUXER) += mpeg4audio.o
OBJS-$(CONFIG_ADX_DEMUXER) += adx.o
OBJS-$(CONFIG_CAF_DEMUXER) += mpeg4audio.o mpegaudiodata.o \
ac3tab.o
OBJS-$(CONFIG_CAF_DEMUXER) += mpeg4audio.o mpegaudiodata.o
OBJS-$(CONFIG_DV_DEMUXER) += dvdata.o
OBJS-$(CONFIG_DV_MUXER) += dvdata.o timecode.o
OBJS-$(CONFIG_FLAC_DEMUXER) += flacdec.o flacdata.o flac.o vorbis_data.o
@@ -595,7 +594,7 @@ OBJS-$(CONFIG_MATROSKA_MUXER) += xiph.o mpeg4audio.o \
flacdec.o flacdata.o flac.o \
mpegaudiodata.o vorbis_data.o
OBJS-$(CONFIG_MP3_MUXER) += mpegaudiodata.o mpegaudiodecheader.o
OBJS-$(CONFIG_MOV_DEMUXER) += mpeg4audio.o mpegaudiodata.o ac3tab.o timecode.o
OBJS-$(CONFIG_MOV_DEMUXER) += mpeg4audio.o mpegaudiodata.o timecode.o
OBJS-$(CONFIG_MOV_MUXER) += mpeg4audio.o mpegaudiodata.o
OBJS-$(CONFIG_MPEGTS_MUXER) += mpegvideo.o mpeg4audio.o
OBJS-$(CONFIG_MPEGTS_DEMUXER) += mpeg4audio.o mpegaudiodata.o

View File

@@ -28,13 +28,13 @@
#include "parser.h"
typedef enum {
AAC_AC3_PARSE_ERROR_SYNC = -0x1030c0a,
AAC_AC3_PARSE_ERROR_BSID = -0x2030c0a,
AAC_AC3_PARSE_ERROR_SAMPLE_RATE = -0x3030c0a,
AAC_AC3_PARSE_ERROR_FRAME_SIZE = -0x4030c0a,
AAC_AC3_PARSE_ERROR_FRAME_TYPE = -0x5030c0a,
AAC_AC3_PARSE_ERROR_CRC = -0x6030c0a,
AAC_AC3_PARSE_ERROR_CHANNEL_CFG = -0x7030c0a,
AAC_AC3_PARSE_ERROR_SYNC = -1,
AAC_AC3_PARSE_ERROR_BSID = -2,
AAC_AC3_PARSE_ERROR_SAMPLE_RATE = -3,
AAC_AC3_PARSE_ERROR_FRAME_SIZE = -4,
AAC_AC3_PARSE_ERROR_FRAME_TYPE = -5,
AAC_AC3_PARSE_ERROR_CRC = -6,
AAC_AC3_PARSE_ERROR_CHANNEL_CFG = -7,
} AACAC3ParseError;
typedef struct AACAC3ParseContext {

View File

@@ -713,7 +713,7 @@ static void search_for_quantizers_twoloop(AVCodecContext *avctx,
const float lambda)
{
int start = 0, i, w, w2, g;
int destbits = avctx->bit_rate * 1024.0 / avctx->sample_rate / avctx->channels * (lambda / 120.f);
int destbits = avctx->bit_rate * 1024.0 / avctx->sample_rate / avctx->channels;
float dists[128], uplims[128];
float maxvals[128];
int fflag, minscaler;

View File

@@ -192,8 +192,6 @@ static av_cold int che_configure(AACContext *ac,
enum ChannelPosition che_pos[4][MAX_ELEM_ID],
int type, int id, int *channels)
{
if (*channels >= MAX_CHANNELS)
return AVERROR_INVALIDDATA;
if (che_pos[type][id]) {
if (!ac->che[type][id]) {
if (!(ac->che[type][id] = av_mallocz(sizeof(ChannelElement))))
@@ -828,20 +826,19 @@ static int decode_band_types(AACContext *ac, enum BandType band_type[120],
av_log(ac->avctx, AV_LOG_ERROR, "invalid band type\n");
return -1;
}
do {
sect_len_incr = get_bits(gb, bits);
while ((sect_len_incr = get_bits(gb, bits)) == (1 << bits) - 1 && get_bits_left(gb) >= bits)
sect_end += sect_len_incr;
if (get_bits_left(gb) < 0) {
av_log(ac->avctx, AV_LOG_ERROR, overread_err);
return -1;
}
if (sect_end > ics->max_sfb) {
av_log(ac->avctx, AV_LOG_ERROR,
"Number of bands (%d) exceeds limit (%d).\n",
sect_end, ics->max_sfb);
return -1;
}
} while (sect_len_incr == (1 << bits) - 1);
sect_end += sect_len_incr;
if (get_bits_left(gb) < 0 || sect_len_incr == (1 << bits) - 1) {
av_log(ac->avctx, AV_LOG_ERROR, overread_err);
return -1;
}
if (sect_end > ics->max_sfb) {
av_log(ac->avctx, AV_LOG_ERROR,
"Number of bands (%d) exceeds limit (%d).\n",
sect_end, ics->max_sfb);
return -1;
}
for (; k < sect_end; k++) {
band_type [idx] = sect_band_type;
band_type_run_end[idx++] = sect_end;
@@ -1768,7 +1765,7 @@ static void apply_tns(float coef[1024], TemporalNoiseShaping *tns,
int w, filt, m, i;
int bottom, top, order, start, end, size, inc;
float lpc[TNS_MAX_ORDER];
float tmp[TNS_MAX_ORDER + 1];
float tmp[TNS_MAX_ORDER];
for (w = 0; w < ics->num_windows; w++) {
bottom = ics->num_swb;

View File

@@ -200,8 +200,8 @@ WINDOW_FUNC(long_start)
float *out = sce->ret;
dsp->vector_fmul(out, audio, lwindow, 1024);
memcpy(out + 1024, audio + 1024, sizeof(out[0]) * 448);
dsp->vector_fmul_reverse(out + 1024 + 448, audio + 1024 + 448, swindow, 128);
memcpy(out + 1024, audio, sizeof(out[0]) * 448);
dsp->vector_fmul_reverse(out + 1024 + 448, audio, swindow, 128);
memset(out + 1024 + 576, 0, sizeof(out[0]) * 448);
}
@@ -487,10 +487,10 @@ static void deinterleave_input_samples(AACEncContext *s,
const float *sptr = samples + channel_map[ch];
/* copy last 1024 samples of previous frame to the start of the current frame */
memcpy(&s->planar_samples[ch][1024], &s->planar_samples[ch][2048], 1024 * sizeof(s->planar_samples[0][0]));
memcpy(&s->planar_samples[ch][0], &s->planar_samples[ch][1024], 1024 * sizeof(s->planar_samples[0][0]));
/* deinterleave */
for (i = 2048; i < 3072; i++) {
for (i = 1024; i < 1024 * 2; i++) {
s->planar_samples[ch][i] = *sptr;
sptr += sinc;
}

View File

@@ -275,10 +275,6 @@ int ff_ps_read_data(AVCodecContext *avctx, GetBitContext *gb_host, PSContext *ps
err:
ps->start = 0;
skip_bits_long(gb_host, bits_left);
memset(ps->iid_par, 0, sizeof(ps->iid_par));
memset(ps->icc_par, 0, sizeof(ps->icc_par));
memset(ps->ipd_par, 0, sizeof(ps->ipd_par));
memset(ps->opd_par, 0, sizeof(ps->opd_par));
return bits_left;
}

View File

@@ -542,7 +542,7 @@ static int sbr_hf_calc_npatches(AACContext *ac, SpectralBandReplication *sbr)
k = sbr->n_master;
} while (sb != sbr->kx[1] + sbr->m[1]);
if (sbr->num_patches > 1 && sbr->patch_num_subbands[sbr->num_patches-1] < 3)
if (sbr->patch_num_subbands[sbr->num_patches-1] < 3 && sbr->num_patches > 1)
sbr->num_patches--;
return 0;

View File

@@ -34,10 +34,17 @@
typedef struct AascContext {
AVCodecContext *avctx;
GetByteContext gb;
AVFrame frame;
} AascContext;
#define FETCH_NEXT_STREAM_BYTE() \
if (stream_ptr >= buf_size) \
{ \
av_log(s->avctx, AV_LOG_ERROR, " AASC: stream ptr just went out of bounds (fetch)\n"); \
break; \
} \
stream_byte = buf[stream_ptr++];
static av_cold int aasc_decode_init(AVCodecContext *avctx)
{
AascContext *s = avctx->priv_data;
@@ -82,8 +89,7 @@ static int aasc_decode_frame(AVCodecContext *avctx,
}
break;
case 1:
bytestream2_init(&s->gb, buf - 4, buf_size + 4);
ff_msrle_decode(avctx, (AVPicture*)&s->frame, 8, &s->gb);
ff_msrle_decode(avctx, (AVPicture*)&s->frame, 8, buf - 4, buf_size + 4);
break;
default:
av_log(avctx, AV_LOG_ERROR, "Unknown compression type %d\n", compr);

View File

@@ -134,7 +134,7 @@ int avpriv_ac3_parse_header(GetBitContext *gbc, AC3HeaderInfo *hdr)
(hdr->num_blocks * 256.0));
hdr->channels = ff_ac3_channels_tab[hdr->channel_mode] + hdr->lfe_on;
}
hdr->channel_layout = avpriv_ac3_channel_layout_tab[hdr->channel_mode];
hdr->channel_layout = ff_ac3_channel_layout_tab[hdr->channel_mode];
if (hdr->lfe_on)
hdr->channel_layout |= AV_CH_LOW_FREQUENCY;

View File

@@ -297,7 +297,7 @@ static int parse_frame_header(AC3DecodeContext *s)
return ff_eac3_parse_header(s);
} else {
av_log(s->avctx, AV_LOG_ERROR, "E-AC-3 support not compiled in\n");
return AVERROR(ENOSYS);
return -1;
}
}
@@ -822,12 +822,12 @@ static int decode_audio_block(AC3DecodeContext *s, int blk)
if (start_subband >= end_subband) {
av_log(s->avctx, AV_LOG_ERROR, "invalid spectral extension "
"range (%d >= %d)\n", start_subband, end_subband);
return AVERROR_INVALIDDATA;
return -1;
}
if (dst_start_freq >= src_start_freq) {
av_log(s->avctx, AV_LOG_ERROR, "invalid spectral extension "
"copy start bin (%d >= %d)\n", dst_start_freq, src_start_freq);
return AVERROR_INVALIDDATA;
return -1;
}
s->spx_dst_start_freq = dst_start_freq;
@@ -904,7 +904,7 @@ static int decode_audio_block(AC3DecodeContext *s, int blk)
if (channel_mode < AC3_CHMODE_STEREO) {
av_log(s->avctx, AV_LOG_ERROR, "coupling not allowed in mono or dual-mono\n");
return AVERROR_INVALIDDATA;
return -1;
}
/* check for enhanced coupling */
@@ -934,7 +934,7 @@ static int decode_audio_block(AC3DecodeContext *s, int blk)
if (cpl_start_subband >= cpl_end_subband) {
av_log(s->avctx, AV_LOG_ERROR, "invalid coupling range (%d >= %d)\n",
cpl_start_subband, cpl_end_subband);
return AVERROR_INVALIDDATA;
return -1;
}
s->start_freq[CPL_CH] = cpl_start_subband * 12 + 37;
s->end_freq[CPL_CH] = cpl_end_subband * 12 + 37;
@@ -956,7 +956,7 @@ static int decode_audio_block(AC3DecodeContext *s, int blk)
if (!blk) {
av_log(s->avctx, AV_LOG_ERROR, "new coupling strategy must "
"be present in block 0\n");
return AVERROR_INVALIDDATA;
return -1;
} else {
s->cpl_in_use[blk] = s->cpl_in_use[blk-1];
}
@@ -986,7 +986,7 @@ static int decode_audio_block(AC3DecodeContext *s, int blk)
} else if (!blk) {
av_log(s->avctx, AV_LOG_ERROR, "new coupling coordinates must "
"be present in block 0\n");
return AVERROR_INVALIDDATA;
return -1;
}
} else {
/* channel not in coupling */
@@ -1041,7 +1041,7 @@ static int decode_audio_block(AC3DecodeContext *s, int blk)
int bandwidth_code = get_bits(gbc, 6);
if (bandwidth_code > 60) {
av_log(s->avctx, AV_LOG_ERROR, "bandwidth code = %d > 60\n", bandwidth_code);
return AVERROR_INVALIDDATA;
return -1;
}
s->end_freq[ch] = bandwidth_code * 3 + 73;
}
@@ -1064,7 +1064,7 @@ static int decode_audio_block(AC3DecodeContext *s, int blk)
s->num_exp_groups[ch], s->dexps[ch][0],
&s->dexps[ch][s->start_freq[ch]+!!ch])) {
av_log(s->avctx, AV_LOG_ERROR, "exponent out-of-range\n");
return AVERROR_INVALIDDATA;
return -1;
}
if (ch != CPL_CH && ch != s->lfe_ch)
skip_bits(gbc, 2); /* skip gainrng */
@@ -1084,7 +1084,7 @@ static int decode_audio_block(AC3DecodeContext *s, int blk)
} else if (!blk) {
av_log(s->avctx, AV_LOG_ERROR, "new bit allocation info must "
"be present in block 0\n");
return AVERROR_INVALIDDATA;
return -1;
}
}
@@ -1115,7 +1115,7 @@ static int decode_audio_block(AC3DecodeContext *s, int blk)
}
} else if (!s->eac3 && !blk) {
av_log(s->avctx, AV_LOG_ERROR, "new snr offsets must be present in block 0\n");
return AVERROR_INVALIDDATA;
return -1;
}
}
@@ -1154,7 +1154,7 @@ static int decode_audio_block(AC3DecodeContext *s, int blk)
} else if (!s->eac3 && !blk) {
av_log(s->avctx, AV_LOG_ERROR, "new coupling leak info must "
"be present in block 0\n");
return AVERROR_INVALIDDATA;
return -1;
}
s->first_cpl_leak = 0;
}
@@ -1166,7 +1166,7 @@ static int decode_audio_block(AC3DecodeContext *s, int blk)
s->dba_mode[ch] = get_bits(gbc, 2);
if (s->dba_mode[ch] == DBA_RESERVED) {
av_log(s->avctx, AV_LOG_ERROR, "delta bit allocation strategy reserved\n");
return AVERROR_INVALIDDATA;
return -1;
}
bit_alloc_stages[ch] = FFMAX(bit_alloc_stages[ch], 2);
}
@@ -1207,7 +1207,7 @@ static int decode_audio_block(AC3DecodeContext *s, int blk)
s->dba_offsets[ch], s->dba_lengths[ch],
s->dba_values[ch], s->mask[ch])) {
av_log(s->avctx, AV_LOG_ERROR, "error in bit allocation\n");
return AVERROR_INVALIDDATA;
return -1;
}
}
if (bit_alloc_stages[ch] > 0) {
@@ -1328,7 +1328,7 @@ static int ac3_decode_frame(AVCodecContext * avctx, void *data,
switch (err) {
case AAC_AC3_PARSE_ERROR_SYNC:
av_log(avctx, AV_LOG_ERROR, "frame sync error\n");
return AVERROR_INVALIDDATA;
return -1;
case AAC_AC3_PARSE_ERROR_BSID:
av_log(avctx, AV_LOG_ERROR, "invalid bitstream id\n");
break;
@@ -1342,20 +1342,17 @@ static int ac3_decode_frame(AVCodecContext * avctx, void *data,
/* skip frame if CRC is ok. otherwise use error concealment. */
/* TODO: add support for substreams and dependent frames */
if (s->frame_type == EAC3_FRAME_TYPE_DEPENDENT || s->substreamid) {
av_log(avctx, AV_LOG_WARNING, "unsupported frame type : "
av_log(avctx, AV_LOG_ERROR, "unsupported frame type : "
"skipping frame\n");
*got_frame_ptr = 0;
return buf_size;
return s->frame_size;
} else {
av_log(avctx, AV_LOG_ERROR, "invalid frame type\n");
}
break;
case AAC_AC3_PARSE_ERROR_CRC:
case AAC_AC3_PARSE_ERROR_CHANNEL_CFG:
default:
av_log(avctx, AV_LOG_ERROR, "invalid header\n");
break;
default: // Normal AVERROR do not try to recover.
*got_frame_ptr = 0;
return err;
}
} else {
/* check that reported frame size fits in input buffer */
@@ -1376,10 +1373,8 @@ static int ac3_decode_frame(AVCodecContext * avctx, void *data,
if (!err) {
avctx->sample_rate = s->sample_rate;
avctx->bit_rate = s->bit_rate;
}
/* channel config */
if (!err || (s->channels && s->out_channels != s->channels)) {
/* channel config */
s->out_channels = s->channels;
s->output_mode = s->channel_mode;
if (s->lfe_on)
@@ -1388,7 +1383,7 @@ static int ac3_decode_frame(AVCodecContext * avctx, void *data,
avctx->request_channels < s->channels) {
s->out_channels = avctx->request_channels;
s->output_mode = avctx->request_channels == 1 ? AC3_CHMODE_MONO : AC3_CHMODE_STEREO;
s->channel_layout = avpriv_ac3_channel_layout_tab[s->output_mode];
s->channel_layout = ff_ac3_channel_layout_tab[s->output_mode];
}
avctx->channels = s->out_channels;
avctx->channel_layout = s->channel_layout;
@@ -1402,12 +1397,11 @@ static int ac3_decode_frame(AVCodecContext * avctx, void *data,
s->fbw_channels == s->out_channels)) {
set_downmix_coeffs(s);
}
} else if (!s->channels) {
av_log(avctx, AV_LOG_ERROR, "unable to determine channel mode\n");
return AVERROR_INVALIDDATA;
} else if (!s->out_channels) {
s->out_channels = avctx->channels;
if (s->out_channels < s->channels)
s->output_mode = s->out_channels == 1 ? AC3_CHMODE_MONO : AC3_CHMODE_STEREO;
}
avctx->channels = s->out_channels;
/* set audio service type based on bitstream mode for AC-3 */
avctx->audio_service_type = s->bitstream_mode;
if (s->bitstream_mode == 0x7 && s->channels > 1)

View File

@@ -109,7 +109,7 @@ static void ac3_bit_alloc_calc_bap_c(int16_t *mask, int16_t *psd,
int snr_offset, int floor,
const uint8_t *bap_tab, uint8_t *bap)
{
int bin, band, band_end;
int bin, band;
/* special case, if snr offset is -960, set all bap's to zero */
if (snr_offset == -960) {
@@ -121,14 +121,12 @@ static void ac3_bit_alloc_calc_bap_c(int16_t *mask, int16_t *psd,
band = ff_ac3_bin_to_band_tab[start];
do {
int m = (FFMAX(mask[band] - snr_offset - floor, 0) & 0x1FE0) + floor;
band_end = ff_ac3_band_start_tab[++band];
band_end = FFMIN(band_end, end);
int band_end = FFMIN(ff_ac3_band_start_tab[band+1], end);
for (; bin < band_end; bin++) {
int address = av_clip((psd[bin] - m) >> 5, 0, 63);
bap[bin] = bap_tab[address];
}
} while (end > band_end);
} while (end > ff_ac3_band_start_tab[band++]);
}
static void ac3_update_bap_counts_c(uint16_t mant_cnt[16], uint8_t *bap,

View File

@@ -84,7 +84,7 @@ const uint8_t ff_ac3_channels_tab[8] = {
/**
* Map audio coding mode (acmod) to channel layout mask.
*/
const uint16_t avpriv_ac3_channel_layout_tab[8] = {
const uint16_t ff_ac3_channel_layout_tab[8] = {
AV_CH_LAYOUT_STEREO,
AV_CH_LAYOUT_MONO,
AV_CH_LAYOUT_STEREO,

View File

@@ -33,7 +33,7 @@
extern const uint16_t ff_ac3_frame_size_tab[38][3];
extern const uint8_t ff_ac3_channels_tab[8];
extern const uint16_t avpriv_ac3_channel_layout_tab[8];
extern const uint16_t ff_ac3_channel_layout_tab[8];
extern const uint8_t ff_ac3_enc_channel_map[8][2][6];
extern const uint8_t ff_ac3_dec_channel_map[8][2][6];
extern const uint16_t ff_ac3_sample_rate_tab[3];

View File

@@ -265,9 +265,8 @@ static inline short adpcm_yamaha_expand_nibble(ADPCMChannelStatus *c, unsigned c
return c->predictor;
}
static int xa_decode(AVCodecContext *avctx,
short *out, const unsigned char *in,
ADPCMChannelStatus *left, ADPCMChannelStatus *right, int inc)
static void xa_decode(short *out, const unsigned char *in,
ADPCMChannelStatus *left, ADPCMChannelStatus *right, int inc)
{
int i, j;
int shift,filter,f0,f1;
@@ -278,12 +277,6 @@ static int xa_decode(AVCodecContext *avctx,
shift = 12 - (in[4+i*2] & 15);
filter = in[4+i*2] >> 4;
if (filter > 4) {
av_log(avctx, AV_LOG_ERROR,
"Invalid XA-ADPCM filter %d (max. allowed is 4)\n",
filter);
return AVERROR_INVALIDDATA;
}
f0 = xa_adpcm_table[filter][0];
f1 = xa_adpcm_table[filter][1];
@@ -311,12 +304,7 @@ static int xa_decode(AVCodecContext *avctx,
shift = 12 - (in[5+i*2] & 15);
filter = in[5+i*2] >> 4;
if (filter > 4) {
av_log(avctx, AV_LOG_ERROR,
"Invalid XA-ADPCM filter %d (max. allowed is 4)\n",
filter);
return AVERROR_INVALIDDATA;
}
f0 = xa_adpcm_table[filter][0];
f1 = xa_adpcm_table[filter][1];
@@ -340,8 +328,6 @@ static int xa_decode(AVCodecContext *avctx,
left->sample2 = s_2;
}
}
return 0;
}
/**
@@ -713,11 +699,11 @@ static int adpcm_decode_frame(AVCodecContext *avctx, void *data,
for (channel = 0; channel < avctx->channels; channel++) {
cs = &c->status[channel];
cs->predictor = (int16_t)bytestream_get_le16(&src);
cs->step_index = av_clip(*src++, 0, 88);
cs->step_index = *src++;
src++;
*samples++ = cs->predictor;
}
for (n = (nb_samples >> (1 - st)) - 1; n > 0; n--, src++) {
for (n = nb_samples >> (1 - st); n > 0; n--, src++) {
uint8_t v = *src;
*samples++ = adpcm_ima_expand_nibble(&c->status[0 ], v >> 4 , 3);
*samples++ = adpcm_ima_expand_nibble(&c->status[st], v & 0x0F, 3);
@@ -736,8 +722,8 @@ static int adpcm_decode_frame(AVCodecContext *avctx, void *data,
c->status[0].predictor = (int16_t)AV_RL16(src + 10);
c->status[1].predictor = (int16_t)AV_RL16(src + 12);
c->status[0].step_index = av_clip(src[14], 0, 88);
c->status[1].step_index = av_clip(src[15], 0, 88);
c->status[0].step_index = src[14];
c->status[1].step_index = src[15];
/* sign extend the predictors */
src += 16;
diff_channel = c->status[1].predictor;
@@ -777,7 +763,7 @@ static int adpcm_decode_frame(AVCodecContext *avctx, void *data,
for (channel = 0; channel < avctx->channels; channel++) {
cs = &c->status[channel];
cs->predictor = (int16_t)bytestream_get_le16(&src);
cs->step_index = av_clip(*src++, 0, 88);
cs->step_index = *src++;
src++;
}
@@ -829,9 +815,8 @@ static int adpcm_decode_frame(AVCodecContext *avctx, void *data,
break;
case CODEC_ID_ADPCM_XA:
while (buf_size >= 128) {
if ((ret = xa_decode(avctx, samples, src, &c->status[0],
&c->status[1], avctx->channels)) < 0)
return ret;
xa_decode(samples, src, &c->status[0], &c->status[1],
avctx->channels);
src += 128;
samples += 28 * 8;
buf_size -= 128;
@@ -841,7 +826,7 @@ static int adpcm_decode_frame(AVCodecContext *avctx, void *data,
src += 4; // skip sample count (already read)
for (i=0; i<=st; i++)
c->status[i].step_index = av_clip(bytestream_get_le32(&src), 0, 88);
c->status[i].step_index = bytestream_get_le32(&src);
for (i=0; i<=st; i++)
c->status[i].predictor = bytestream_get_le32(&src);
@@ -1058,11 +1043,11 @@ static int adpcm_decode_frame(AVCodecContext *avctx, void *data,
case CODEC_ID_ADPCM_IMA_SMJPEG:
if (avctx->codec->id == CODEC_ID_ADPCM_IMA_AMV) {
c->status[0].predictor = sign_extend(bytestream_get_le16(&src), 16);
c->status[0].step_index = av_clip(bytestream_get_le16(&src), 0, 88);
c->status[0].step_index = bytestream_get_le16(&src);
src += 4;
} else {
c->status[0].predictor = sign_extend(bytestream_get_be16(&src), 16);
c->status[0].step_index = av_clip(bytestream_get_byte(&src), 0, 88);
c->status[0].step_index = bytestream_get_byte(&src);
src += 1;
}
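
The ADPCM hunks above repeatedly pair an av_clip(..., 0, 88) line with its unclamped counterpart. As a standalone illustration (not code from this patch; clip_int() is a local stand-in for av_clip()), this is the range check that keeps a hostile byte from indexing past the 89-entry IMA step table:

/* Standalone sketch, not taken from the patch above: the IMA ADPCM step
 * table has 89 entries, so a step index read from the bitstream has to be
 * clamped to [0, 88] before it is used as an array index. */
#include <stdint.h>
#include <stdio.h>

static int clip_int(int v, int lo, int hi)
{
    return v < lo ? lo : v > hi ? hi : v;
}

int main(void)
{
    uint8_t from_bitstream = 0xC8;                    /* corrupt input: 200 */
    int step_index = clip_int(from_bitstream, 0, 88);
    printf("step index %d clamped to %d\n", from_bitstream, step_index);
    return 0;
}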

View File

@@ -636,9 +636,10 @@ static av_cold int alac_decode_init(AVCodecContext * avctx)
alac->avctx = avctx;
/* initialize from the extradata */
if (alac->avctx->extradata_size < ALAC_EXTRADATA_SIZE) {
av_log(avctx, AV_LOG_ERROR, "alac: extradata is too small\n");
return AVERROR_INVALIDDATA;
if (alac->avctx->extradata_size != ALAC_EXTRADATA_SIZE) {
av_log(avctx, AV_LOG_ERROR, "alac: expected %d extradata bytes\n",
ALAC_EXTRADATA_SIZE);
return -1;
}
if (alac_set_info(alac)) {
av_log(avctx, AV_LOG_ERROR, "alac: set_info failed\n");

View File

@@ -259,7 +259,7 @@ static void alac_linear_predictor(AlacEncodeContext *s, int ch)
// generate warm-up samples
residual[0] = samples[0];
for (i = 1; i <= lpc.lpc_order; i++)
residual[i] = sign_extend(samples[i] - samples[i-1], s->write_sample_size);
residual[i] = samples[i] - samples[i-1];
// perform lpc on remaining samples
for (i = lpc.lpc_order + 1; i < s->avctx->frame_size; i++) {

View File

@@ -651,11 +651,6 @@ static int read_var_block_data(ALSDecContext *ctx, ALSBlockData *bd)
for (k = 1; k < sub_blocks; k++)
s[k] = s[k - 1] + decode_rice(gb, 0);
}
for (k = 1; k < sub_blocks; k++)
if (s[k] > 32) {
av_log(avctx, AV_LOG_ERROR, "k invalid for rice code.\n");
return AVERROR_INVALIDDATA;
}
if (get_bits1(gb))
*bd->shift_lsbs = get_bits(gb, 4) + 1;
@@ -668,11 +663,6 @@ static int read_var_block_data(ALSDecContext *ctx, ALSBlockData *bd)
int opt_order_length = av_ceil_log2(av_clip((bd->block_length >> 3) - 1,
2, sconf->max_order + 1));
*bd->opt_order = get_bits(gb, opt_order_length);
if (*bd->opt_order > sconf->max_order) {
*bd->opt_order = sconf->max_order;
av_log(avctx, AV_LOG_ERROR, "Predictor order too large!\n");
return AVERROR_INVALIDDATA;
}
} else {
*bd->opt_order = sconf->max_order;
}
@@ -705,10 +695,6 @@ static int read_var_block_data(ALSDecContext *ctx, ALSBlockData *bd)
int rice_param = parcor_rice_table[sconf->coef_table][k][1];
int offset = parcor_rice_table[sconf->coef_table][k][0];
quant_cof[k] = decode_rice(gb, rice_param) + offset;
if (quant_cof[k] < -64 || quant_cof[k] > 63) {
av_log(avctx, AV_LOG_ERROR, "quant_cof %d is out of range\n", quant_cof[k]);
return AVERROR_INVALIDDATA;
}
}
// read coefficients 20 to 126
@@ -741,7 +727,7 @@ static int read_var_block_data(ALSDecContext *ctx, ALSBlockData *bd)
bd->ltp_gain[0] = decode_rice(gb, 1) << 3;
bd->ltp_gain[1] = decode_rice(gb, 2) << 3;
r = get_unary(gb, 0, 3);
r = get_unary(gb, 0, 4);
c = get_bits(gb, 2);
bd->ltp_gain[2] = ltp_gain_values[r][c];
@@ -770,6 +756,7 @@ static int read_var_block_data(ALSDecContext *ctx, ALSBlockData *bd)
int delta[8];
unsigned int k [8];
unsigned int b = av_clip((av_ceil_log2(bd->block_length) - 3) >> 1, 0, 5);
unsigned int i = start;
// read most significant bits
unsigned int high;
@@ -780,30 +767,29 @@ static int read_var_block_data(ALSDecContext *ctx, ALSBlockData *bd)
current_res = bd->raw_samples + start;
for (sb = 0; sb < sub_blocks; sb++) {
unsigned int sb_len = sb_length - (sb ? 0 : start);
for (sb = 0; sb < sub_blocks; sb++, i = 0) {
k [sb] = s[sb] > b ? s[sb] - b : 0;
delta[sb] = 5 - s[sb] + k[sb];
ff_bgmc_decode(gb, sb_len, current_res,
ff_bgmc_decode(gb, sb_length, current_res,
delta[sb], sx[sb], &high, &low, &value, ctx->bgmc_lut, ctx->bgmc_lut_status);
current_res += sb_len;
current_res += sb_length;
}
ff_bgmc_decode_end(gb);
// read least significant bits and tails
i = start;
current_res = bd->raw_samples + start;
for (sb = 0; sb < sub_blocks; sb++, start = 0) {
for (sb = 0; sb < sub_blocks; sb++, i = 0) {
unsigned int cur_tail_code = tail_code[sx[sb]][delta[sb]];
unsigned int cur_k = k[sb];
unsigned int cur_s = s[sb];
for (; start < sb_length; start++) {
for (; i < sb_length; i++) {
int32_t res = *current_res;
if (res == cur_tail_code) {
@@ -1159,12 +1145,6 @@ static int decode_blocks(ALSDecContext *ctx, unsigned int ra_frame,
return 0;
}
static inline int als_weighting(GetBitContext *gb, int k, int off)
{
int idx = av_clip(decode_rice(gb, k) + off,
0, FF_ARRAY_ELEMS(mcc_weightings) - 1);
return mcc_weightings[idx];
}
/** Read the channel data.
*/
@@ -1185,14 +1165,14 @@ static int read_channel_data(ALSDecContext *ctx, ALSChannelData *cd, int c)
if (current->master_channel != c) {
current->time_diff_flag = get_bits1(gb);
current->weighting[0] = als_weighting(gb, 1, 16);
current->weighting[1] = als_weighting(gb, 2, 14);
current->weighting[2] = als_weighting(gb, 1, 16);
current->weighting[0] = mcc_weightings[av_clip(decode_rice(gb, 1) + 16, 0, 32)];
current->weighting[1] = mcc_weightings[av_clip(decode_rice(gb, 2) + 14, 0, 32)];
current->weighting[2] = mcc_weightings[av_clip(decode_rice(gb, 1) + 16, 0, 32)];
if (current->time_diff_flag) {
current->weighting[3] = als_weighting(gb, 1, 16);
current->weighting[4] = als_weighting(gb, 1, 16);
current->weighting[5] = als_weighting(gb, 1, 16);
current->weighting[3] = mcc_weightings[av_clip(decode_rice(gb, 1) + 16, 0, 32)];
current->weighting[4] = mcc_weightings[av_clip(decode_rice(gb, 1) + 16, 0, 32)];
current->weighting[5] = mcc_weightings[av_clip(decode_rice(gb, 1) + 16, 0, 32)];
current->time_diff_sign = get_bits1(gb);
current->time_diff_index = get_bits(gb, ctx->ltp_lag_length - 3) + 3;

View File

@@ -200,10 +200,6 @@ static enum Mode unpack_bitstream(AMRContext *p, const uint8_t *buf,
p->bad_frame_indicator = !get_bits1(&gb); // quality bit
skip_bits(&gb, 2); // two padding bits
if (mode >= N_MODES || buf_size < frame_sizes_nb[mode] + 1) {
return NO_DATA;
}
if (mode < MODE_DTX)
ff_amr_bit_reorder((uint16_t *) &p->frame, sizeof(AMRNBFrame), buf + 1,
amr_unpacking_bitmaps_per_mode[mode]);
@@ -951,10 +947,6 @@ static int amrnb_decode_frame(AVCodecContext *avctx, void *data,
buf_out = (float *)p->avframe.data[0];
p->cur_frame_mode = unpack_bitstream(p, buf, buf_size);
if (p->cur_frame_mode == NO_DATA) {
av_log(avctx, AV_LOG_ERROR, "Corrupt bitstream\n");
return AVERROR_INVALIDDATA;
}
if (p->cur_frame_mode == MODE_DTX) {
av_log_missing_feature(avctx, "dtx mode", 0);
av_log(avctx, AV_LOG_INFO, "Note: libopencore_amrnb supports dtx\n");

View File

@@ -898,10 +898,10 @@ static float auto_correlation(float *diff_isf, float mean, int lag)
* Extrapolate a ISF vector to the 16kHz range (20th order LP)
* used at mode 6k60 LP filter for the high frequency band.
*
* @param[out] isf Buffer for extrapolated isf; contains LP_ORDER
* values on input
* @param[out] out Buffer for extrapolated isf
* @param[in] isf Input isf vector
*/
static void extrapolate_isf(float isf[LP_ORDER_16k])
static void extrapolate_isf(float out[LP_ORDER_16k], float isf[LP_ORDER])
{
float diff_isf[LP_ORDER - 2], diff_mean;
float *diff_hi = diff_isf - LP_ORDER + 1; // diff array for extrapolated indexes
@@ -909,7 +909,8 @@ static void extrapolate_isf(float isf[LP_ORDER_16k])
float est, scale;
int i, i_max_corr;
isf[LP_ORDER_16k - 1] = isf[LP_ORDER - 1];
memcpy(out, isf, (LP_ORDER - 1) * sizeof(float));
out[LP_ORDER_16k - 1] = isf[LP_ORDER - 1];
/* Calculate the difference vector */
for (i = 0; i < LP_ORDER - 2; i++)
@@ -930,16 +931,16 @@ static void extrapolate_isf(float isf[LP_ORDER_16k])
i_max_corr++;
for (i = LP_ORDER - 1; i < LP_ORDER_16k - 1; i++)
isf[i] = isf[i - 1] + isf[i - 1 - i_max_corr]
out[i] = isf[i - 1] + isf[i - 1 - i_max_corr]
- isf[i - 2 - i_max_corr];
/* Calculate an estimate for ISF(18) and scale ISF based on the error */
est = 7965 + (isf[2] - isf[3] - isf[4]) / 6.0;
scale = 0.5 * (FFMIN(est, 7600) - isf[LP_ORDER - 2]) /
(isf[LP_ORDER_16k - 2] - isf[LP_ORDER - 2]);
est = 7965 + (out[2] - out[3] - out[4]) / 6.0;
scale = 0.5 * (FFMIN(est, 7600) - out[LP_ORDER - 2]) /
(out[LP_ORDER_16k - 2] - out[LP_ORDER - 2]);
for (i = LP_ORDER - 1; i < LP_ORDER_16k - 1; i++)
diff_hi[i] = scale * (isf[i] - isf[i - 1]);
diff_hi[i] = scale * (out[i] - out[i - 1]);
/* Stability insurance */
for (i = LP_ORDER; i < LP_ORDER_16k - 1; i++)
@@ -951,11 +952,11 @@ static void extrapolate_isf(float isf[LP_ORDER_16k])
}
for (i = LP_ORDER - 1; i < LP_ORDER_16k - 1; i++)
isf[i] = isf[i - 1] + diff_hi[i] * (1.0f / (1 << 15));
out[i] = out[i - 1] + diff_hi[i] * (1.0f / (1 << 15));
/* Scale the ISF vector for 16000 Hz */
for (i = 0; i < LP_ORDER_16k - 1; i++)
isf[i] *= 0.8;
out[i] *= 0.8;
}
/**
@@ -1002,7 +1003,7 @@ static void hb_synthesis(AMRWBContext *ctx, int subframe, float *samples,
ff_weighted_vector_sumf(e_isf, isf_past, isf, isfp_inter[subframe],
1.0 - isfp_inter[subframe], LP_ORDER);
extrapolate_isf(e_isf);
extrapolate_isf(e_isf, e_isf);
e_isf[LP_ORDER_16k - 1] *= 2.0;
ff_acelp_lsf2lspd(e_isp, e_isf, LP_ORDER_16k);
@@ -1094,27 +1095,23 @@ static int amrwb_decode_frame(AVCodecContext *avctx, void *data,
buf_out = (float *)ctx->avframe.data[0];
header_size = decode_mime_header(ctx, buf);
if (ctx->fr_cur_mode > MODE_SID) {
av_log(avctx, AV_LOG_ERROR,
"Invalid mode %d\n", ctx->fr_cur_mode);
return AVERROR_INVALIDDATA;
}
expected_fr_size = ((cf_sizes_wb[ctx->fr_cur_mode] + 7) >> 3) + 1;
if (buf_size < expected_fr_size) {
av_log(avctx, AV_LOG_ERROR,
"Frame too small (%d bytes). Truncated file?\n", buf_size);
*got_frame_ptr = 0;
return AVERROR_INVALIDDATA;
return buf_size;
}
if (!ctx->fr_quality || ctx->fr_cur_mode > MODE_SID)
av_log(avctx, AV_LOG_ERROR, "Encountered a bad or corrupted frame\n");
if (ctx->fr_cur_mode == MODE_SID) { /* Comfort noise frame */
if (ctx->fr_cur_mode == MODE_SID) /* Comfort noise frame */
av_log_missing_feature(avctx, "SID mode", 1);
if (ctx->fr_cur_mode >= MODE_SID)
return -1;
}
ff_amr_bit_reorder((uint16_t *) &ctx->frame, sizeof(AMRWBFrame),
buf + header_size, amr_bit_orderings_by_mode[ctx->fr_cur_mode]);

View File

@@ -404,12 +404,9 @@ static inline int ape_decode_value(APEContext *ctx, APERice *rice)
if (tmpk <= 16)
x = range_decode_bits(ctx, tmpk);
else if (tmpk <= 32) {
else {
x = range_decode_bits(ctx, 16);
x |= (range_decode_bits(ctx, tmpk - 16) << 16);
} else {
av_log(ctx->avctx, AV_LOG_ERROR, "Too many bits: %d\n", tmpk);
return AVERROR_INVALIDDATA;
}
x += overflow << tmpk;
} else {

View File

@@ -366,7 +366,7 @@ int ff_ass_split_override_codes(const ASSCodesCallbacks *callbacks, void *priv,
char new_line[2];
int text_len = 0;
while (buf && *buf) {
while (*buf) {
if (text && callbacks->text &&
(sscanf(buf, "\\%1[nN]", new_line) == 1 ||
!strncmp(buf, "{\\", 2))) {

View File

@@ -184,11 +184,8 @@ static int decode_bytes(const uint8_t* inbuffer, uint8_t* out, int bytes){
uint32_t* obuf = (uint32_t*) out;
off = (intptr_t)inbuffer & 3;
buf = (const uint32_t *)(inbuffer - off);
if (off)
c = av_be2ne32((0x537F6103U >> (off * 8)) | (0x537F6103U << (32 - (off * 8))));
else
c = av_be2ne32(0x537F6103U);
buf = (const uint32_t*) (inbuffer - off);
c = av_be2ne32((0x537F6103 >> (off*8)) | (0x537F6103 << (32-(off*8))));
bytes += 3 + off;
for (i = 0; i < bytes/4; i++)
obuf[i] = c ^ buf[i];
@@ -405,7 +402,7 @@ static int decodeTonalComponents (GetBitContext *gb, tonal_component *pComponent
for (k=0; k<coded_components; k++) {
sfIndx = get_bits(gb,6);
if (component_count >= 64)
if(component_count>=64)
return AVERROR_INVALIDDATA;
pComponent[component_count].pos = j * 64 + (get_bits(gb,6));
max_coded_values = SAMPLES_PER_FRAME - pComponent[component_count].pos;
@@ -690,8 +687,7 @@ static int decodeChannelSoundUnit (ATRAC3Context *q, GetBitContext *gb, channel_
if (result) return result;
pSnd->numComponents = decodeTonalComponents (gb, pSnd->components, pSnd->bandsCoded);
if (pSnd->numComponents < 0)
return pSnd->numComponents;
if (pSnd->numComponents == -1) return -1;
numSubbands = decodeSpectrum (gb, pSnd->spectrum);
@@ -773,7 +769,7 @@ static int decodeFrame(ATRAC3Context *q, const uint8_t* databuf,
/* set the bitstream reader at the start of the second Sound Unit*/
init_get_bits(&q->gb, ptr1, (q->bytes_per_frame - i) * 8);
init_get_bits(&q->gb,ptr1,q->bits_per_frame);
/* Fill the Weighting coeffs delay buffer */
memmove(q->weighting_delay,&(q->weighting_delay[2]),4*sizeof(int));
@@ -976,8 +972,6 @@ static av_cold int atrac3_decode_init(AVCodecContext *avctx)
if (q->codingMode == STEREO) {
av_log(avctx,AV_LOG_DEBUG,"Normal stereo detected.\n");
} else if (q->codingMode == JOINT_STEREO) {
if (avctx->channels != 2)
return AVERROR_INVALIDDATA;
av_log(avctx,AV_LOG_DEBUG,"Joint stereo detected.\n");
} else {
av_log(avctx,AV_LOG_ERROR,"Unknown channel coding mode %x!\n",q->codingMode);

View File

@@ -4032,8 +4032,7 @@ AVCodecContext *avcodec_alloc_context2(enum AVMediaType);
/**
* Allocate an AVCodecContext and set its fields to default values. The
* resulting struct can be deallocated by calling avcodec_close() on it followed
* by av_free().
* resulting struct can be deallocated by simply calling av_free().
*
* @param codec if non-NULL, allocate private data and initialize defaults
* for the given codec. It is illegal to then call avcodec_open2()
@@ -4179,11 +4178,6 @@ int avcodec_open(AVCodecContext *avctx, AVCodec *codec);
* @endcode
*
* @param avctx The context to initialize.
* @param codec The codec to open this context for. If a non-NULL codec has been
* previously passed to avcodec_alloc_context3() or
* avcodec_get_context_defaults3() for this context, then this
* parameter MUST be either NULL or equal to the previously passed
* codec.
* @param options A dictionary filled with AVCodecContext and codec-private options.
* On return this object will be filled with options that were not found.
*
@@ -4469,15 +4463,6 @@ int avcodec_encode_video(AVCodecContext *avctx, uint8_t *buf, int buf_size,
int avcodec_encode_subtitle(AVCodecContext *avctx, uint8_t *buf, int buf_size,
const AVSubtitle *sub);
/**
* Close a given AVCodecContext and free all the data associated with it
* (but not the AVCodecContext itself).
*
* Calling this function on an AVCodecContext that hasn't been opened will free
* the codec-specific data allocated in avcodec_alloc_context3() /
* avcodec_get_context_defaults3() with a non-NULL codec. Subsequent calls will
* do nothing.
*/
int avcodec_close(AVCodecContext *avctx);
/**
@@ -4889,10 +4874,4 @@ const AVClass *avcodec_get_class(void);
*/
const AVClass *avcodec_get_frame_class(void);
/**
* @return a positive value if s is open (i.e. avcodec_open2() was called on it
* with no corresponding avcodec_close()), 0 otherwise.
*/
int avcodec_is_open(AVCodecContext *s);
#endif /* AVCODEC_AVCODEC_H */
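
The removed comments above document how a context produced by avcodec_alloc_context3() is torn down: avcodec_close(), then av_free(). The sketch below only illustrates that sequence as described; it is not code from the header, it assumes the codecs have already been registered, and it keeps error handling to a minimum.

/* Usage sketch of the alloc/open/close lifecycle described in the doc
 * comments above; avcodec_register_all() is assumed to have been called. */
#include <libavcodec/avcodec.h>
#include <libavutil/mem.h>

int open_and_close(enum CodecID id)
{
    AVCodec *codec = avcodec_find_decoder(id);
    AVCodecContext *ctx;

    if (!codec)
        return -1;
    ctx = avcodec_alloc_context3(codec);
    if (!ctx)
        return -1;
    if (avcodec_open2(ctx, codec, NULL) < 0) {
        av_free(ctx);
        return -1;
    }
    /* ... decode with the open context ... */
    avcodec_close(ctx);   /* releases the codec-private data */
    av_free(ctx);         /* then frees the context itself   */
    return 0;
}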

View File

@@ -162,7 +162,6 @@ static av_cold int avs_decode_init(AVCodecContext * avctx)
AvsContext *const avs = avctx->priv_data;
avctx->pix_fmt = PIX_FMT_PAL8;
avcodec_get_frame_defaults(&avs->picture);
avcodec_set_dimensions(avctx, 318, 198);
return 0;
}

View File

@@ -167,7 +167,7 @@ static void init_lengths(BinkContext *c, int width, int bw)
*
* @param c decoder context
*/
static av_cold int init_bundles(BinkContext *c)
static av_cold void init_bundles(BinkContext *c)
{
int bw, bh, blocks;
int i;
@@ -178,12 +178,8 @@ static av_cold int init_bundles(BinkContext *c)
for (i = 0; i < BINKB_NB_SRC; i++) {
c->bundle[i].data = av_malloc(blocks * 64);
if (!c->bundle[i].data)
return AVERROR(ENOMEM);
c->bundle[i].data_end = c->bundle[i].data + blocks * 64;
}
return 0;
}
/**
@@ -679,9 +675,6 @@ static int read_dct_coeffs(GetBitContext *gb, int32_t block[64], const uint8_t *
quant_idx = q;
}
if (quant_idx >= 16)
return AVERROR_INVALIDDATA;
quant = quant_matrices[quant_idx];
block[0] = (block[0] * quant[0]) >> 11;
@@ -1273,7 +1266,7 @@ static av_cold int decode_init(AVCodecContext *avctx)
BinkContext * const c = avctx->priv_data;
static VLC_TYPE table[16 * 128][2];
static int binkb_initialised = 0;
int i, ret;
int i;
int flags;
c->version = avctx->codec_tag >> 24;
@@ -1308,10 +1301,7 @@ static av_cold int decode_init(AVCodecContext *avctx)
dsputil_init(&c->dsp, avctx);
ff_binkdsp_init(&c->bdsp);
if ((ret = init_bundles(c)) < 0) {
free_bundles(c);
return ret;
}
init_bundles(c);
if (c->version == 'b') {
if (!binkb_initialised) {
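
The init_bundles()/free_bundles() hunks above turn a silent allocation loop into one that reports failure and lets the caller clean up. A standalone sketch of that pattern, with an invented struct and names:

/* Standalone sketch of the pattern in the init_bundles() change: the init
 * helper stops at the first failed allocation and returns a negative code,
 * and the caller frees whatever was already allocated before bailing out. */
#include <errno.h>
#include <stdlib.h>

typedef struct {
    unsigned char *data[4];
} Bundles;

static int init_bundles(Bundles *b, size_t n)
{
    int i;
    for (i = 0; i < 4; i++) {
        b->data[i] = malloc(n);
        if (!b->data[i])
            return -ENOMEM;          /* partial state is left for the caller */
    }
    return 0;
}

static void free_bundles(Bundles *b)
{
    int i;
    for (i = 0; i < 4; i++)
        free(b->data[i]);            /* free(NULL) is a harmless no-op */
}

int main(void)
{
    Bundles b = { { NULL } };
    if (init_bundles(&b, 1 << 16) < 0) {
        free_bundles(&b);
        return 1;
    }
    free_bundles(&b);
    return 0;
}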

View File

@@ -91,9 +91,9 @@ static av_cold int decode_init(AVCodecContext *avctx)
frame_len_bits = 11;
}
if (avctx->channels < 1 || avctx->channels > MAX_CHANNELS) {
av_log(avctx, AV_LOG_ERROR, "invalid number of channels: %d\n", avctx->channels);
return AVERROR_INVALIDDATA;
if (avctx->channels > MAX_CHANNELS) {
av_log(avctx, AV_LOG_ERROR, "too many channels: %d\n", avctx->channels);
return -1;
}
s->version_b = avctx->extradata && avctx->extradata[3] == 'b';

View File

@@ -253,9 +253,9 @@ static int build_table(VLC *vlc, int table_nb_bits, int nb_codes,
(byte/word/long) to store the 'bits', 'codes', and 'symbols' tables.
'use_static' should be set to 1 for tables, which should be freed
with av_free_static(), 0 if ff_free_vlc() will be used.
with av_free_static(), 0 if free_vlc() will be used.
*/
int ff_init_vlc_sparse(VLC *vlc, int nb_bits, int nb_codes,
int init_vlc_sparse(VLC *vlc, int nb_bits, int nb_codes,
const void *bits, int bits_wrap, int bits_size,
const void *codes, int codes_wrap, int codes_size,
const void *symbols, int symbols_wrap, int symbols_size,
@@ -318,7 +318,7 @@ int ff_init_vlc_sparse(VLC *vlc, int nb_bits, int nb_codes,
}
void ff_free_vlc(VLC *vlc)
void free_vlc(VLC *vlc)
{
av_freep(&vlc->table);
}

View File

@@ -53,7 +53,6 @@ static int bmp_decode_frame(AVCodecContext *avctx,
uint8_t *ptr;
int dsize;
const uint8_t *buf0 = buf;
GetByteContext gb;
if(buf_size < 14){
av_log(avctx, AV_LOG_ERROR, "buf size too small (%d)\n", buf_size);
@@ -118,7 +117,7 @@ static int bmp_decode_frame(AVCodecContext *avctx,
depth = bytestream_get_le16(&buf);
if (ihsize >= 40)
if(ihsize == 40 || ihsize == 64 || ihsize == 56)
comp = bytestream_get_le32(&buf);
else
comp = BMP_RGB;
@@ -133,7 +132,8 @@ static int bmp_decode_frame(AVCodecContext *avctx,
rgb[0] = bytestream_get_le32(&buf);
rgb[1] = bytestream_get_le32(&buf);
rgb[2] = bytestream_get_le32(&buf);
alpha = bytestream_get_le32(&buf);
if (ihsize >= 108)
alpha = bytestream_get_le32(&buf);
}
avctx->width = width;
@@ -231,6 +231,9 @@ static int bmp_decode_frame(AVCodecContext *avctx,
if(comp == BMP_RLE4 || comp == BMP_RLE8)
memset(p->data[0], 0, avctx->height * p->linesize[0]);
if(depth == 4 || depth == 8)
memset(p->data[1], 0, 1024);
if(height > 0){
ptr = p->data[0] + (avctx->height - 1) * p->linesize[0];
linesize = -p->linesize[0];
@@ -241,9 +244,6 @@ static int bmp_decode_frame(AVCodecContext *avctx,
if(avctx->pix_fmt == PIX_FMT_PAL8){
int colors = 1 << depth;
memset(p->data[1], 0, 1024);
if(ihsize >= 36){
int t;
buf = buf0 + 46;
@@ -269,8 +269,7 @@ static int bmp_decode_frame(AVCodecContext *avctx,
p->data[0] += p->linesize[0] * (avctx->height - 1);
p->linesize[0] = -p->linesize[0];
}
bytestream2_init(&gb, buf, dsize);
ff_msrle_decode(avctx, (AVPicture*)p, depth, &gb);
ff_msrle_decode(avctx, (AVPicture*)p, depth, buf, dsize);
if(height < 0){
p->data[0] += p->linesize[0] * (avctx->height - 1);
p->linesize[0] = -p->linesize[0];

View File

@@ -21,7 +21,6 @@
#include "avcodec.h"
#include "bytestream.h"
#include "libavutil/avassert.h"
enum BMVFlags{
BMV_NOP = 0,
@@ -53,7 +52,7 @@ typedef struct BMVDecContext {
static int decode_bmv_frame(const uint8_t *source, int src_len, uint8_t *frame, int frame_off)
{
unsigned val, saved_val = 0;
int val, saved_val = 0;
int tmplen = src_len;
const uint8_t *src, *source_end = source + src_len;
uint8_t *frame_end = frame + SCREEN_WIDE * SCREEN_HIGH;
@@ -99,8 +98,6 @@ static int decode_bmv_frame(const uint8_t *source, int src_len, uint8_t *frame,
}
if (!(val & 0xC)) {
for (;;) {
if(shift>22)
return -1;
if (!read_two_nibbles) {
if (src < source || src >= source_end)
return -1;
@@ -134,16 +131,15 @@ static int decode_bmv_frame(const uint8_t *source, int src_len, uint8_t *frame,
}
advance_mode = val & 1;
len = (val >> 1) - 1;
av_assert0(len>0);
mode += 1 + advance_mode;
if (mode >= 4)
mode -= 3;
if (len <= 0 || FFABS(dst_end - dst) < len)
if (FFABS(dst_end - dst) < len)
return -1;
switch (mode) {
case 1:
if (forward) {
if (dst - frame + SCREEN_WIDE < -frame_off ||
if (dst - frame + SCREEN_WIDE < frame_off ||
frame_end - dst < frame_off + len)
return -1;
for (i = 0; i < len; i++)
@@ -151,7 +147,7 @@ static int decode_bmv_frame(const uint8_t *source, int src_len, uint8_t *frame,
dst += len;
} else {
dst -= len;
if (dst - frame + SCREEN_WIDE < -frame_off ||
if (dst - frame + SCREEN_WIDE < frame_off ||
frame_end - dst < frame_off + len)
return -1;
for (i = len - 1; i >= 0; i--)
@@ -268,11 +264,6 @@ static av_cold int decode_init(AVCodecContext *avctx)
c->avctx = avctx;
avctx->pix_fmt = PIX_FMT_PAL8;
if (avctx->width != SCREEN_WIDE || avctx->height != SCREEN_HIGH) {
av_log(avctx, AV_LOG_ERROR, "Invalid dimension %dx%d\n", avctx->width, avctx->height);
return AVERROR_INVALIDDATA;
}
c->pic.reference = 1;
if (avctx->get_buffer(avctx, &c->pic) < 0) {
av_log(avctx, AV_LOG_ERROR, "get_buffer() failed\n");

View File

@@ -1,7 +1,6 @@
/*
* Bytestream functions
* copyright (c) 2006 Baptiste Coudurier <baptiste.coudurier@free.fr>
* Copyright (c) 2012 Aneesh Dogra (lionaneesh) <lionaneesh@gmail.com>
*
* This file is part of FFmpeg.
*
@@ -24,7 +23,6 @@
#define AVCODEC_BYTESTREAM_H
#include <string.h>
#include "libavutil/common.h"
#include "libavutil/intreadwrite.h"
@@ -32,57 +30,35 @@ typedef struct {
const uint8_t *buffer, *buffer_end, *buffer_start;
} GetByteContext;
typedef struct {
uint8_t *buffer, *buffer_end, *buffer_start;
int eof;
} PutByteContext;
#define DEF_T(type, name, bytes, read, write) \
static av_always_inline type bytestream_get_ ## name(const uint8_t **b) \
{ \
(*b) += bytes; \
return read(*b - bytes); \
} \
static av_always_inline void bytestream_put_ ## name(uint8_t **b, \
const type value) \
{ \
write(*b, value); \
(*b) += bytes; \
} \
static av_always_inline void bytestream2_put_ ## name ## u(PutByteContext *p, \
const type value) \
{ \
bytestream_put_ ## name(&p->buffer, value); \
} \
static av_always_inline void bytestream2_put_ ## name(PutByteContext *p, \
const type value) \
{ \
if (!p->eof && (p->buffer_end - p->buffer >= bytes)) { \
write(p->buffer, value); \
p->buffer += bytes; \
} else \
p->eof = 1; \
} \
static av_always_inline type bytestream2_get_ ## name ## u(GetByteContext *g) \
{ \
return bytestream_get_ ## name(&g->buffer); \
} \
static av_always_inline type bytestream2_get_ ## name(GetByteContext *g) \
{ \
if (g->buffer_end - g->buffer < bytes) \
return 0; \
return bytestream2_get_ ## name ## u(g); \
} \
static av_always_inline type bytestream2_peek_ ## name(GetByteContext *g) \
{ \
if (g->buffer_end - g->buffer < bytes) \
return 0; \
return read(g->buffer); \
#define DEF_T(type, name, bytes, read, write) \
static av_always_inline type bytestream_get_ ## name(const uint8_t **b){\
(*b) += bytes;\
return read(*b - bytes);\
}\
static av_always_inline void bytestream_put_ ##name(uint8_t **b, const type value){\
write(*b, value);\
(*b) += bytes;\
}\
static av_always_inline type bytestream2_get_ ## name ## u(GetByteContext *g)\
{\
return bytestream_get_ ## name(&g->buffer);\
}\
static av_always_inline type bytestream2_get_ ## name(GetByteContext *g)\
{\
if (g->buffer_end - g->buffer < bytes)\
return 0;\
return bytestream2_get_ ## name ## u(g);\
}\
static av_always_inline type bytestream2_peek_ ## name(GetByteContext *g)\
{\
if (g->buffer_end - g->buffer < bytes)\
return 0;\
return read(g->buffer);\
}
#define DEF(name, bytes, read, write) \
#define DEF(name, bytes, read, write) \
DEF_T(unsigned int, name, bytes, read, write)
#define DEF64(name, bytes, read, write) \
#define DEF64(name, bytes, read, write) \
DEF_T(uint64_t, name, bytes, read, write)
DEF64(le64, 8, AV_RL64, AV_WL64)
@@ -136,22 +112,11 @@ DEF (byte, 1, AV_RB8 , AV_WB8 )
#endif
static av_always_inline void bytestream2_init(GetByteContext *g,
const uint8_t *buf,
int buf_size)
const uint8_t *buf, int buf_size)
{
g->buffer = buf;
g->buffer = buf;
g->buffer_start = buf;
g->buffer_end = buf + buf_size;
}
static av_always_inline void bytestream2_init_writer(PutByteContext *p,
uint8_t *buf,
int buf_size)
{
p->buffer = buf;
p->buffer_start = buf;
p->buffer_end = buf + buf_size;
p->eof = 0;
g->buffer_end = buf + buf_size;
}
static av_always_inline unsigned int bytestream2_get_bytes_left(GetByteContext *g)
@@ -159,61 +124,32 @@ static av_always_inline unsigned int bytestream2_get_bytes_left(GetByteContext *
return g->buffer_end - g->buffer;
}
static av_always_inline unsigned int bytestream2_get_bytes_left_p(PutByteContext *p)
{
return p->buffer_end - p->buffer;
}
static av_always_inline void bytestream2_skip(GetByteContext *g,
unsigned int size)
{
g->buffer += FFMIN(g->buffer_end - g->buffer, size);
}
static av_always_inline void bytestream2_skipu(GetByteContext *g,
unsigned int size)
{
g->buffer += size;
}
static av_always_inline void bytestream2_skip_p(PutByteContext *p,
unsigned int size)
{
int size2;
if (p->eof)
return;
size2 = FFMIN(p->buffer_end - p->buffer, size);
if (size2 != size)
p->eof = 1;
p->buffer += size2;
}
static av_always_inline int bytestream2_tell(GetByteContext *g)
{
return (int)(g->buffer - g->buffer_start);
}
static av_always_inline int bytestream2_tell_p(PutByteContext *p)
{
return (int)(p->buffer - p->buffer_start);
}
static av_always_inline int bytestream2_seek(GetByteContext *g,
int offset,
static av_always_inline int bytestream2_seek(GetByteContext *g, int offset,
int whence)
{
switch (whence) {
case SEEK_CUR:
offset = av_clip(offset, -(g->buffer - g->buffer_start),
g->buffer_end - g->buffer);
offset = av_clip(offset, -(g->buffer - g->buffer_start),
g->buffer_end - g->buffer);
g->buffer += offset;
break;
case SEEK_END:
offset = av_clip(offset, -(g->buffer_end - g->buffer_start), 0);
offset = av_clip(offset, -(g->buffer_end - g->buffer_start), 0);
g->buffer = g->buffer_end + offset;
break;
case SEEK_SET:
offset = av_clip(offset, 0, g->buffer_end - g->buffer_start);
offset = av_clip(offset, 0, g->buffer_end - g->buffer_start);
g->buffer = g->buffer_start + offset;
break;
default:
@@ -222,37 +158,6 @@ static av_always_inline int bytestream2_seek(GetByteContext *g,
return bytestream2_tell(g);
}
static av_always_inline int bytestream2_seek_p(PutByteContext *p,
int offset,
int whence)
{
p->eof = 0;
switch (whence) {
case SEEK_CUR:
if (p->buffer_end - p->buffer < offset)
p->eof = 1;
offset = av_clip(offset, -(p->buffer - p->buffer_start),
p->buffer_end - p->buffer);
p->buffer += offset;
break;
case SEEK_END:
if (offset > 0)
p->eof = 1;
offset = av_clip(offset, -(p->buffer_end - p->buffer_start), 0);
p->buffer = p->buffer_end + offset;
break;
case SEEK_SET:
if (p->buffer_end - p->buffer_start < offset)
p->eof = 1;
offset = av_clip(offset, 0, p->buffer_end - p->buffer_start);
p->buffer = p->buffer_start + offset;
break;
default:
return AVERROR(EINVAL);
}
return bytestream2_tell_p(p);
}
static av_always_inline unsigned int bytestream2_get_buffer(GetByteContext *g,
uint8_t *dst,
unsigned int size)
@@ -263,78 +168,14 @@ static av_always_inline unsigned int bytestream2_get_buffer(GetByteContext *g,
return size2;
}
static av_always_inline unsigned int bytestream2_get_bufferu(GetByteContext *g,
uint8_t *dst,
unsigned int size)
{
memcpy(dst, g->buffer, size);
g->buffer += size;
return size;
}
static av_always_inline unsigned int bytestream2_put_buffer(PutByteContext *p,
const uint8_t *src,
unsigned int size)
{
int size2;
if (p->eof)
return 0;
size2 = FFMIN(p->buffer_end - p->buffer, size);
if (size2 != size)
p->eof = 1;
memcpy(p->buffer, src, size2);
p->buffer += size2;
return size2;
}
static av_always_inline unsigned int bytestream2_put_bufferu(PutByteContext *p,
const uint8_t *src,
unsigned int size)
{
memcpy(p->buffer, src, size);
p->buffer += size;
return size;
}
static av_always_inline void bytestream2_set_buffer(PutByteContext *p,
const uint8_t c,
unsigned int size)
{
int size2;
if (p->eof)
return;
size2 = FFMIN(p->buffer_end - p->buffer, size);
if (size2 != size)
p->eof = 1;
memset(p->buffer, c, size2);
p->buffer += size2;
}
static av_always_inline void bytestream2_set_bufferu(PutByteContext *p,
const uint8_t c,
unsigned int size)
{
memset(p->buffer, c, size);
p->buffer += size;
}
static av_always_inline unsigned int bytestream2_get_eof(PutByteContext *p)
{
return p->eof;
}
static av_always_inline unsigned int bytestream_get_buffer(const uint8_t **b,
uint8_t *dst,
unsigned int size)
static av_always_inline unsigned int bytestream_get_buffer(const uint8_t **b, uint8_t *dst, unsigned int size)
{
memcpy(dst, *b, size);
(*b) += size;
return size;
}
static av_always_inline void bytestream_put_buffer(uint8_t **b,
const uint8_t *src,
unsigned int size)
static av_always_inline void bytestream_put_buffer(uint8_t **b, const uint8_t *src, unsigned int size)
{
memcpy(*b, src, size);
(*b) += size;
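
Most of this header change is the checked GetByteContext readers and their unchecked "u" twins. A compressed standalone sketch of that idea, with invented names rather than the real bytestream2_* API:

/* Standalone sketch of the checked-reader idea: the checked getter returns 0
 * once fewer bytes remain than requested, while the unchecked "u" variant
 * trusts the caller to have already verified the size. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    const uint8_t *cur, *end;
} Reader;

static unsigned checked_get_le16(Reader *r)
{
    unsigned v;
    if (r->end - r->cur < 2)
        return 0;                          /* never reads past the buffer */
    v = r->cur[0] | (r->cur[1] << 8);
    r->cur += 2;
    return v;
}

static unsigned unchecked_get_le16u(Reader *r)
{
    unsigned v = r->cur[0] | (r->cur[1] << 8);   /* caller guarantees room */
    r->cur += 2;
    return v;
}

int main(void)
{
    const uint8_t buf[3] = { 0x34, 0x12, 0xFF };
    Reader r = { buf, buf + sizeof(buf) };
    printf("%#x\n", checked_get_le16(&r));   /* 0x1234 */
    printf("%#x\n", checked_get_le16(&r));   /* 0: only one byte is left */
    (void)unchecked_get_le16u;
    return 0;
}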

View File

@@ -609,21 +609,12 @@ static int decode_pic(AVSContext *h) {
static int decode_seq_header(AVSContext *h) {
MpegEncContext *s = &h->s;
int frame_rate_code;
int width, height;
h->profile = get_bits(&s->gb,8);
h->level = get_bits(&s->gb,8);
skip_bits1(&s->gb); //progressive sequence
width = get_bits(&s->gb, 14);
height = get_bits(&s->gb, 14);
if ((s->width || s->height) && (s->width != width || s->height != height)) {
av_log_missing_feature(s, "Width/height changing in CAVS is", 0);
return AVERROR_PATCHWELCOME;
}
s->width = width;
s->height = height;
s->width = get_bits(&s->gb,14);
s->height = get_bits(&s->gb,14);
skip_bits(&s->gb,2); //chroma format
skip_bits(&s->gb,3); //sample_precision
h->aspect_ratio = get_bits(&s->gb,4);
@@ -665,8 +656,7 @@ static int cavs_decode_frame(AVCodecContext * avctx,void *data, int *data_size,
if (buf_size == 0) {
if (!s->low_delay && h->DPB[0].f.data[0]) {
*data_size = sizeof(AVPicture);
*picture = h->DPB[0].f;
memset(&h->DPB[0], 0, sizeof(h->DPB[0]));
*picture = *(AVFrame *) &h->DPB[0];
}
return 0;
}

View File

@@ -280,10 +280,6 @@ static int cdg_decode_frame(AVCodecContext *avctx,
av_log(avctx, AV_LOG_ERROR, "buffer too small for decoder\n");
return AVERROR(EINVAL);
}
if (buf_size > CDG_HEADER_SIZE + CDG_DATA_SIZE) {
av_log(avctx, AV_LOG_ERROR, "buffer too big for decoder\n");
return AVERROR(EINVAL);
}
ret = avctx->reget_buffer(avctx, &cc->frame);
if (ret) {

View File

@@ -133,8 +133,9 @@ void ff_celp_lp_synthesis_filterf(float *out, const float *filter_coeffs,
out2 -= val * old_out2;
out3 -= val * old_out3;
old_out3 = out[-5];
for (i = 5; i <= filter_length; i += 2) {
old_out3 = out[-i];
val = filter_coeffs[i-1];
out0 -= val * old_out3;
@@ -153,6 +154,7 @@ void ff_celp_lp_synthesis_filterf(float *out, const float *filter_coeffs,
FFSWAP(float, old_out0, old_out2);
old_out1 = old_out3;
old_out3 = out[-i-2];
}
tmp0 = out0;

View File

@@ -321,11 +321,11 @@ static av_cold int cook_decode_close(AVCodecContext *avctx)
/* Free the VLC tables. */
for (i = 0; i < 13; i++)
ff_free_vlc(&q->envelope_quant_index[i]);
free_vlc(&q->envelope_quant_index[i]);
for (i = 0; i < 7; i++)
ff_free_vlc(&q->sqvh[i]);
free_vlc(&q->sqvh[i]);
for (i = 0; i < q->num_subpackets; i++)
ff_free_vlc(&q->subpacket[i].ccpl);
free_vlc(&q->subpacket[i].ccpl);
av_log(avctx, AV_LOG_DEBUG, "Memory deallocated.\n");
@@ -366,8 +366,8 @@ static void decode_gain_info(GetBitContext *gb, int *gaininfo)
* @param q pointer to the COOKContext
* @param quant_index_table pointer to the array
*/
static int decode_envelope(COOKContext *q, COOKSubpacket *p,
int *quant_index_table)
static void decode_envelope(COOKContext *q, COOKSubpacket *p,
int *quant_index_table)
{
int i, j, vlc_index;
@@ -388,15 +388,7 @@ static int decode_envelope(COOKContext *q, COOKSubpacket *p,
j = get_vlc2(&q->gb, q->envelope_quant_index[vlc_index - 1].table,
q->envelope_quant_index[vlc_index - 1].bits, 2);
quant_index_table[i] = quant_index_table[i - 1] + j - 12; // differential encoding
if (quant_index_table[i] > 63 || quant_index_table[i] < -63) {
av_log(q->avctx, AV_LOG_ERROR,
"Invalid quantizer %d at position %d, outside [-63, 63] range\n",
quant_index_table[i], i);
return AVERROR_INVALIDDATA;
}
}
return 0;
}
/**
@@ -515,11 +507,7 @@ static inline void expand_category(COOKContext *q, int *category,
{
int i;
for (i = 0; i < q->num_vectors; i++)
{
int idx = category_index[i];
if (++category[idx] >= FF_ARRAY_ELEMS(dither_tab))
--category[idx];
}
++category[category_index[i]];
}
/**
@@ -647,24 +635,20 @@ static void decode_vectors(COOKContext *q, COOKSubpacket *p, int *category,
* @param q pointer to the COOKContext
* @param mlt_buffer pointer to mlt coefficients
*/
static int mono_decode(COOKContext *q, COOKSubpacket *p, float *mlt_buffer)
static void mono_decode(COOKContext *q, COOKSubpacket *p, float *mlt_buffer)
{
int category_index[128];
int quant_index_table[102];
int category[128];
int res;
memset(&category, 0, sizeof(category));
memset(&category_index, 0, sizeof(category_index));
if ((res = decode_envelope(q, p, quant_index_table)) < 0)
return res;
decode_envelope(q, p, quant_index_table);
q->num_vectors = get_bits(&q->gb, p->log2_numvector_size);
categorize(q, p, quant_index_table, category, category_index);
expand_category(q, category, category_index);
decode_vectors(q, p, category, quant_index_table, mlt_buffer);
return 0;
}
@@ -814,10 +798,10 @@ static void decouple_float(COOKContext *q,
* @param mlt_buffer1 pointer to left channel mlt coefficients
* @param mlt_buffer2 pointer to right channel mlt coefficients
*/
static int joint_decode(COOKContext *q, COOKSubpacket *p, float *mlt_buffer1,
float *mlt_buffer2)
static void joint_decode(COOKContext *q, COOKSubpacket *p, float *mlt_buffer1,
float *mlt_buffer2)
{
int i, j, res;
int i, j;
int decouple_tab[SUBBAND_SIZE];
float *decode_buffer = q->decode_buffer_0;
int idx, cpl_tmp;
@@ -831,8 +815,7 @@ static int joint_decode(COOKContext *q, COOKSubpacket *p, float *mlt_buffer1,
memset(mlt_buffer1, 0, 1024 * sizeof(*mlt_buffer1));
memset(mlt_buffer2, 0, 1024 * sizeof(*mlt_buffer2));
decouple_info(q, p, decouple_tab);
if ((res = mono_decode(q, p, decode_buffer)) < 0)
return res;
mono_decode(q, p, decode_buffer);
/* The two channels are stored interleaved in decode_buffer. */
for (i = 0; i < p->js_subband_start; i++) {
@@ -849,13 +832,11 @@ static int joint_decode(COOKContext *q, COOKSubpacket *p, float *mlt_buffer1,
cpl_tmp = cplband[i];
idx -= decouple_tab[cpl_tmp];
cplscale = q->cplscales[p->js_vlc_bits - 2]; // choose decoupler table
f1 = cplscale[decouple_tab[cpl_tmp] + 1];
f2 = cplscale[idx];
f1 = cplscale[decouple_tab[cpl_tmp]];
f2 = cplscale[idx - 1];
q->decouple(q, p, i, f1, f2, decode_buffer, mlt_buffer1, mlt_buffer2);
idx = (1 << p->js_vlc_bits) - 1;
}
return 0;
}
/**
@@ -928,11 +909,10 @@ static inline void mlt_compensate_output(COOKContext *q, float *decode_buffer,
* @param inbuffer pointer to the inbuffer
* @param outbuffer pointer to the outbuffer
*/
static int decode_subpacket(COOKContext *q, COOKSubpacket *p,
const uint8_t *inbuffer, float *outbuffer)
static void decode_subpacket(COOKContext *q, COOKSubpacket *p,
const uint8_t *inbuffer, float *outbuffer)
{
int sub_packet_size = p->size;
int res;
/* packet dump */
// for (i = 0; i < sub_packet_size ; i++)
// av_log(q->avctx, AV_LOG_ERROR, "%02x", inbuffer[i]);
@@ -941,16 +921,13 @@ static int decode_subpacket(COOKContext *q, COOKSubpacket *p,
decode_bytes_and_gain(q, p, inbuffer, &p->gains1);
if (p->joint_stereo) {
if ((res = joint_decode(q, p, q->decode_buffer_1, q->decode_buffer_2)) < 0)
return res;
joint_decode(q, p, q->decode_buffer_1, q->decode_buffer_2);
} else {
if ((res = mono_decode(q, p, q->decode_buffer_1)) < 0)
return res;
mono_decode(q, p, q->decode_buffer_1);
if (p->num_channels == 2) {
decode_bytes_and_gain(q, p, inbuffer + sub_packet_size / 2, &p->gains2);
if ((res = mono_decode(q, p, q->decode_buffer_2)) < 0)
return res;
mono_decode(q, p, q->decode_buffer_2);
}
}
@@ -964,8 +941,6 @@ static int decode_subpacket(COOKContext *q, COOKSubpacket *p,
else
mlt_compensate_output(q, q->decode_buffer_2, &p->gains2,
p->mono_previous_buffer2, outbuffer, p->ch_idx + 1);
return 0;
}
@@ -1021,8 +996,7 @@ static int cook_decode_frame(AVCodecContext *avctx, void *data,
i, q->subpacket[i].size, q->subpacket[i].joint_stereo, offset,
avctx->block_align);
if ((ret = decode_subpacket(q, &q->subpacket[i], buf + offset, samples)) < 0)
return ret;
decode_subpacket(q, &q->subpacket[i], buf + offset, samples);
offset += q->subpacket[i].size;
chidx += q->subpacket[i].num_channels;
av_log(avctx, AV_LOG_DEBUG, "subpacket[%i] %i %i\n",
@@ -1104,10 +1078,6 @@ static av_cold int cook_decode_init(AVCodecContext *avctx)
q->sample_rate = avctx->sample_rate;
q->nb_channels = avctx->channels;
q->bit_rate = avctx->bit_rate;
if (!q->nb_channels) {
av_log(avctx, AV_LOG_ERROR, "Invalid number of channels\n");
return AVERROR_INVALIDDATA;
}
/* Initialize RNG. */
av_lfg_init(&q->random_state, 0);
@@ -1235,11 +1205,6 @@ static av_cold int cook_decode_init(AVCodecContext *avctx)
q->subpacket[s].gains2.now = q->subpacket[s].gain_3;
q->subpacket[s].gains2.previous = q->subpacket[s].gain_4;
if (q->num_subpackets + q->subpacket[s].num_channels > q->nb_channels) {
av_log(avctx, AV_LOG_ERROR, "Too many subpackets %d for channels %d\n", q->num_subpackets, q->nb_channels);
return AVERROR_INVALIDDATA;
}
q->num_subpackets++;
s++;
if (s > MAX_SUBPACKETS) {

View File

@@ -36,8 +36,8 @@ static const int expbits_tab[8] = {
52,47,43,37,29,22,16,0,
};
static const float dither_tab[9] = {
0.0, 0.0, 0.0, 0.0, 0.0, 0.176777, 0.25, 0.707107, 1.0
static const float dither_tab[8] = {
0.0, 0.0, 0.0, 0.0, 0.0, 0.176777, 0.25, 0.707107,
};
static const float quant_centroid_tab[7][14] = {
@@ -510,37 +510,23 @@ static const int cplband[51] = {
19,
};
// The 1 and 0 at the beginning/end are to prevent overflows with
// bitstream-read indexes. E.g. if n_bits=5, we can access any
// index from [1, (1<<n_bits)] for the first decoupling coeff,
// and (1<<n_bits)-coeff1 as index for coeff2, i.e.:
// coeff1_idx = [1, 32], and coeff2_idx = [0, 31].
// These values aren't part of the tables in the original binary.
static const float cplscale2[5] = {
1,
static const float cplscale2[3] = {
0.953020632266998,0.70710676908493,0.302905440330505,
0,
};
static const float cplscale3[9] = {
1,
static const float cplscale3[7] = {
0.981279790401459,0.936997592449188,0.875934481620789,0.70710676908493,
0.482430040836334,0.349335819482803,0.192587479948997,
0,
};
static const float cplscale4[17] = {
1,
static const float cplscale4[15] = {
0.991486728191376,0.973249018192291,0.953020632266998,0.930133521556854,
0.903453230857849,0.870746195316315,0.826180458068848,0.70710676908493,
0.563405573368073,0.491732746362686,0.428686618804932,0.367221474647522,
0.302905440330505,0.229752898216248,0.130207896232605,
0,
};
static const float cplscale5[33] = {
1,
static const float cplscale5[31] = {
0.995926380157471,0.987517595291138,0.978726446628571,0.969505727291107,
0.95979779958725,0.949531257152557,0.938616216182709,0.926936149597168,
0.914336204528809,0.900602877140045,0.885426938533783,0.868331849575043,
@@ -549,11 +535,9 @@ static const float cplscale5[33] = {
0.464778542518616,0.434642940759659,0.404955863952637,0.375219136476517,
0.344963222742081,0.313672333955765,0.280692428350449,0.245068684220314,
0.205169528722763,0.157508864998817,0.0901700109243393,
0,
};
static const float cplscale6[65] = {
1,
static const float cplscale6[63] = {
0.998005926609039,0.993956744670868,0.989822506904602,0.985598564147949,
0.981279790401459,0.976860702037811,0.972335040569305,0.967696130275726,
0.962936460971832,0.958047747612000,0.953020632266998,0.947844684123993,
@@ -570,7 +554,6 @@ static const float cplscale6[65] = {
0.302905440330505,0.286608695983887,0.269728302955627,0.252119421958923,
0.233590632677078,0.213876649737358,0.192587479948997,0.169101938605309,
0.142307326197624,0.109772264957428,0.0631198287010193,
0,
};
static const float* const cplscales[5] = {
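
The comment in the hunk above explains why the cplscale tables gain a leading 1 and a trailing 0. A standalone sketch of that guard-padding idea with an invented 3-bit table (the values are made up):

/* Standalone sketch: pad a lookup table at both ends so that an index taken
 * straight from the bitstream (1 .. 1<<N_BITS) and its complement
 * ((1<<N_BITS) - index, i.e. 0 .. (1<<N_BITS)-1) both stay inside the array. */
#include <stdio.h>

#define N_BITS 3
static const float scale[(1 << N_BITS) + 1] = {
    1.0f,                                            /* guard for index 0 */
    0.95f, 0.87f, 0.71f, 0.50f, 0.35f, 0.19f, 0.08f, /* the 7 real values */
    0.0f                                             /* guard for index 8 */
};

int main(void)
{
    int coeff1;

    for (coeff1 = 1; coeff1 <= 1 << N_BITS; coeff1++) {
        float f1 = scale[coeff1];                    /* index 1..8, in range */
        float f2 = scale[(1 << N_BITS) - coeff1];    /* index 0..7, in range */
        printf("%d: f1=%.2f f2=%.2f\n", coeff1, f1, f2);
    }
    return 0;
}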

View File

@@ -228,7 +228,7 @@ static av_cold int decode_init(AVCodecContext *avctx) {
av_log(avctx, AV_LOG_ERROR,
"CamStudio codec error: invalid depth %i bpp\n",
avctx->bits_per_coded_sample);
return AVERROR_INVALIDDATA;
return 1;
}
c->bpp = avctx->bits_per_coded_sample;
avcodec_get_frame_defaults(&c->pic);
@@ -242,7 +242,7 @@ static av_cold int decode_init(AVCodecContext *avctx) {
c->decomp_buf = av_malloc(c->decomp_size + AV_LZO_OUTPUT_PADDING);
if (!c->decomp_buf) {
av_log(avctx, AV_LOG_ERROR, "Can't allocate decompression buffer.\n");
return AVERROR(ENOMEM);
return 1;
}
return 0;
}

View File

@@ -29,7 +29,6 @@
#include "libavutil/common.h"
#include "libavutil/intmath.h"
#include "libavutil/intreadwrite.h"
#include "libavutil/mathematics.h"
#include "libavutil/audioconvert.h"
#include "avcodec.h"
#include "dsputil.h"
@@ -639,20 +638,13 @@ static int dca_parse_frame_header(DCAContext *s)
}
static inline int get_scale(GetBitContext *gb, int level, int value, int log2range)
static inline int get_scale(GetBitContext *gb, int level, int value)
{
if (level < 5) {
/* huffman encoded */
value += get_bitalloc(gb, &dca_scalefactor, level);
value = av_clip(value, 0, (1 << log2range) - 1);
} else if (level < 8) {
if (level + 1 > log2range) {
skip_bits(gb, level + 1 - log2range);
value = get_bits(gb, log2range);
} else {
value = get_bits(gb, level + 1);
}
}
} else if (level < 8)
value = get_bits(gb, level + 1);
return value;
}
@@ -725,31 +717,28 @@ static int dca_subframe_header(DCAContext *s, int base_channel, int block_index)
for (j = base_channel; j < s->prim_channels; j++) {
const uint32_t *scale_table;
int scale_sum, log_size;
int scale_sum;
memset(s->scale_factor[j], 0,
s->subband_activity[j] * sizeof(s->scale_factor[0][0][0]) * 2);
if (s->scalefactor_huffman[j] == 6) {
if (s->scalefactor_huffman[j] == 6)
scale_table = scale_factor_quant7;
log_size = 7;
} else {
else
scale_table = scale_factor_quant6;
log_size = 6;
}
/* When huffman coded, only the difference is encoded */
scale_sum = 0;
for (k = 0; k < s->subband_activity[j]; k++) {
if (k >= s->vq_start_subband[j] || s->bitalloc[j][k] > 0) {
scale_sum = get_scale(&s->gb, s->scalefactor_huffman[j], scale_sum, log_size);
scale_sum = get_scale(&s->gb, s->scalefactor_huffman[j], scale_sum);
s->scale_factor[j][k][0] = scale_table[scale_sum];
}
if (k < s->vq_start_subband[j] && s->transition_mode[j][k]) {
/* Get second scale factor */
scale_sum = get_scale(&s->gb, s->scalefactor_huffman[j], scale_sum, log_size);
scale_sum = get_scale(&s->gb, s->scalefactor_huffman[j], scale_sum);
s->scale_factor[j][k][1] = scale_table[scale_sum];
}
}
@@ -778,7 +767,8 @@ static int dca_subframe_header(DCAContext *s, int base_channel, int block_index)
* (is this valid as well for joint scales ???) */
for (k = s->subband_activity[j]; k < s->subband_activity[source_channel]; k++) {
scale = get_scale(&s->gb, s->joint_huff[j], 64 /* bias */, 7);
scale = get_scale(&s->gb, s->joint_huff[j], 0);
scale += 64; /* bias */
s->joint_scale_factor[j][k] = scale; /*joint_scale_table[scale]; */
}
@@ -799,18 +789,6 @@ static int dca_subframe_header(DCAContext *s, int base_channel, int block_index)
}
} else {
int am = s->amode & DCA_CHANNEL_MASK;
if (am >= FF_ARRAY_ELEMS(dca_default_coeffs)) {
av_log(s->avctx, AV_LOG_ERROR,
"Invalid channel mode %d\n", am);
return AVERROR_INVALIDDATA;
}
if (s->prim_channels > FF_ARRAY_ELEMS(dca_default_coeffs[0])) {
av_log_ask_for_sample(s->avctx, "Downmixing %d channels",
s->prim_channels);
return AVERROR_PATCHWELCOME;
}
for (j = base_channel; j < s->prim_channels; j++) {
s->downmix_coef[j][0] = dca_default_coeffs[am][j][0];
s->downmix_coef[j][1] = dca_default_coeffs[am][j][1];
@@ -850,8 +828,7 @@ static int dca_subframe_header(DCAContext *s, int base_channel, int block_index)
}
/* Scale factor index */
skip_bits(&s->gb, 1);
s->lfe_scale_factor = scale_factor_quant7[get_bits(&s->gb, 7)];
s->lfe_scale_factor = scale_factor_quant7[get_bits(&s->gb, 8)];
/* Quantization step size * scale factor */
lfe_scale = 0.035 * s->lfe_scale_factor;
@@ -1260,7 +1237,6 @@ static int dca_subsubframe(DCAContext *s, int base_channel, int block_index)
#endif
} else {
av_log(s->avctx, AV_LOG_ERROR, "Didn't get subframe DSYNC\n");
return AVERROR_INVALIDDATA;
}
}

View File

@@ -7528,7 +7528,7 @@ static const float dca_downmix_coeffs[65] = {
0.001412537544623, 0.001000000000000, 0.000501187233627, 0.000251188643151, 0.000000000000000,
};
static const uint8_t dca_default_coeffs[10][5][2] = {
static const uint8_t dca_default_coeffs[16][5][2] = {
{ { 13, 13 }, },
{ { 0, 64 }, { 64, 0 }, },
{ { 0, 64 }, { 64, 0 }, },

View File

@@ -234,10 +234,8 @@ static void init_block(DCTELEM block[64], int test, int is_idct, AVLFG *prng, in
break;
case 1:
j = av_lfg_get(prng) % 10 + 1;
for (i = 0; i < j; i++) {
int idx = av_lfg_get(prng) % 64;
block[idx] = av_lfg_get(prng) % (2*vals) -vals;
}
for (i = 0; i < j; i++)
block[av_lfg_get(prng) % 64] = av_lfg_get(prng) % (2*vals) -vals;
break;
case 2:
block[ 0] = av_lfg_get(prng) % (16*vals) - (8*vals);

View File

@@ -21,9 +21,8 @@
*/
#include "avcodec.h"
#include "libavutil/intreadwrite.h"
#include "bytestream.h"
#include "libavutil/imgutils.h"
#include "libavutil/lzo.h" // for av_memcpy_backptr
typedef struct DfaContext {
@@ -36,13 +35,9 @@ typedef struct DfaContext {
static av_cold int dfa_decode_init(AVCodecContext *avctx)
{
DfaContext *s = avctx->priv_data;
int ret;
avctx->pix_fmt = PIX_FMT_PAL8;
if ((ret = av_image_check_size(avctx->width, avctx->height, 0, avctx)) < 0)
return ret;
s->frame_buf = av_mallocz(avctx->width * avctx->height + AV_LZO_OUTPUT_PADDING);
if (!s->frame_buf)
return AVERROR(ENOMEM);
@@ -50,16 +45,19 @@ static av_cold int dfa_decode_init(AVCodecContext *avctx)
return 0;
}
static int decode_copy(GetByteContext *gb, uint8_t *frame, int width, int height)
static int decode_copy(uint8_t *frame, int width, int height,
const uint8_t *src, const uint8_t *src_end)
{
const int size = width * height;
if (bytestream2_get_buffer(gb, frame, size) != size)
return AVERROR_INVALIDDATA;
if (src_end - src < size)
return -1;
bytestream_get_buffer(&src, frame, size);
return 0;
}
static int decode_tsw1(GetByteContext *gb, uint8_t *frame, int width, int height)
static int decode_tsw1(uint8_t *frame, int width, int height,
const uint8_t *src, const uint8_t *src_end)
{
const uint8_t *frame_start = frame;
const uint8_t *frame_end = frame + width * height;
@@ -67,31 +65,31 @@ static int decode_tsw1(GetByteContext *gb, uint8_t *frame, int width, int height
int v, count, segments;
unsigned offset;
segments = bytestream2_get_le32(gb);
offset = bytestream2_get_le32(gb);
segments = bytestream_get_le32(&src);
offset = bytestream_get_le32(&src);
if (frame_end - frame <= offset)
return AVERROR_INVALIDDATA;
return -1;
frame += offset;
while (segments--) {
if (bytestream2_get_bytes_left(gb) < 2)
return AVERROR_INVALIDDATA;
if (mask == 0x10000) {
bitbuf = bytestream2_get_le16u(gb);
if (src >= src_end)
return -1;
bitbuf = bytestream_get_le16(&src);
mask = 1;
}
if (frame_end - frame < 2)
return AVERROR_INVALIDDATA;
if (src_end - src < 2 || frame_end - frame < 2)
return -1;
if (bitbuf & mask) {
v = bytestream2_get_le16(gb);
v = bytestream_get_le16(&src);
offset = (v & 0x1FFF) << 1;
count = ((v >> 13) + 2) << 1;
if (frame - frame_start < offset || frame_end - frame < count)
return AVERROR_INVALIDDATA;
return -1;
av_memcpy_backptr(frame, offset, count);
frame += count;
} else {
*frame++ = bytestream2_get_byte(gb);
*frame++ = bytestream2_get_byte(gb);
*frame++ = *src++;
*frame++ = *src++;
}
mask <<= 1;
}
@@ -99,38 +97,39 @@ static int decode_tsw1(GetByteContext *gb, uint8_t *frame, int width, int height
return 0;
}
static int decode_dsw1(GetByteContext *gb, uint8_t *frame, int width, int height)
static int decode_dsw1(uint8_t *frame, int width, int height,
const uint8_t *src, const uint8_t *src_end)
{
const uint8_t *frame_start = frame;
const uint8_t *frame_end = frame + width * height;
int mask = 0x10000, bitbuf = 0;
int v, offset, count, segments;
segments = bytestream2_get_le16(gb);
segments = bytestream_get_le16(&src);
while (segments--) {
if (bytestream2_get_bytes_left(gb) < 2)
return AVERROR_INVALIDDATA;
if (mask == 0x10000) {
bitbuf = bytestream2_get_le16u(gb);
if (src >= src_end)
return -1;
bitbuf = bytestream_get_le16(&src);
mask = 1;
}
if (frame_end - frame < 2)
return AVERROR_INVALIDDATA;
if (src_end - src < 2 || frame_end - frame < 2)
return -1;
if (bitbuf & mask) {
v = bytestream2_get_le16(gb);
v = bytestream_get_le16(&src);
offset = (v & 0x1FFF) << 1;
count = ((v >> 13) + 2) << 1;
if (frame - frame_start < offset || frame_end - frame < count)
return AVERROR_INVALIDDATA;
return -1;
// can't use av_memcpy_backptr() since it can overwrite following pixels
for (v = 0; v < count; v++)
frame[v] = frame[v - offset];
frame += count;
} else if (bitbuf & (mask << 1)) {
frame += bytestream2_get_le16(gb);
frame += bytestream_get_le16(&src);
} else {
*frame++ = bytestream2_get_byte(gb);
*frame++ = bytestream2_get_byte(gb);
*frame++ = *src++;
*frame++ = *src++;
}
mask <<= 2;
}
@@ -138,28 +137,30 @@ static int decode_dsw1(GetByteContext *gb, uint8_t *frame, int width, int height
return 0;
}
static int decode_dds1(GetByteContext *gb, uint8_t *frame, int width, int height)
static int decode_dds1(uint8_t *frame, int width, int height,
const uint8_t *src, const uint8_t *src_end)
{
const uint8_t *frame_start = frame;
const uint8_t *frame_end = frame + width * height;
int mask = 0x10000, bitbuf = 0;
int i, v, offset, count, segments;
segments = bytestream2_get_le16(gb);
segments = bytestream_get_le16(&src);
while (segments--) {
if (bytestream2_get_bytes_left(gb) < 2)
return AVERROR_INVALIDDATA;
if (mask == 0x10000) {
bitbuf = bytestream2_get_le16u(gb);
if (src >= src_end)
return -1;
bitbuf = bytestream_get_le16(&src);
mask = 1;
}
if (src_end - src < 2 || frame_end - frame < 2)
return -1;
if (bitbuf & mask) {
v = bytestream2_get_le16(gb);
v = bytestream_get_le16(&src);
offset = (v & 0x1FFF) << 2;
count = ((v >> 13) + 2) << 1;
if (frame - frame_start < offset || frame_end - frame < count*2 + width)
return AVERROR_INVALIDDATA;
return -1;
for (i = 0; i < count; i++) {
frame[0] = frame[1] =
frame[width] = frame[width + 1] = frame[-offset];
@@ -167,18 +168,13 @@ static int decode_dds1(GetByteContext *gb, uint8_t *frame, int width, int height
frame += 2;
}
} else if (bitbuf & (mask << 1)) {
v = bytestream2_get_le16(gb)*2;
if (frame - frame_end < v)
return AVERROR_INVALIDDATA;
frame += v;
frame += bytestream_get_le16(&src) * 2;
} else {
if (frame_end - frame < width + 3)
return AVERROR_INVALIDDATA;
frame[0] = frame[1] =
frame[width] = frame[width + 1] = bytestream2_get_byte(gb);
frame[width] = frame[width + 1] = *src++;
frame += 2;
frame[0] = frame[1] =
frame[width] = frame[width + 1] = bytestream2_get_byte(gb);
frame[width] = frame[width + 1] = *src++;
frame += 2;
}
mask <<= 2;
@@ -187,40 +183,40 @@ static int decode_dds1(GetByteContext *gb, uint8_t *frame, int width, int height
return 0;
}
static int decode_bdlt(GetByteContext *gb, uint8_t *frame, int width, int height)
static int decode_bdlt(uint8_t *frame, int width, int height,
const uint8_t *src, const uint8_t *src_end)
{
uint8_t *line_ptr;
int count, lines, segments;
count = bytestream2_get_le16(gb);
count = bytestream_get_le16(&src);
if (count >= height)
return AVERROR_INVALIDDATA;
return -1;
frame += width * count;
lines = bytestream2_get_le16(gb);
if (count + lines > height)
return AVERROR_INVALIDDATA;
lines = bytestream_get_le16(&src);
if (count + lines > height || src >= src_end)
return -1;
while (lines--) {
if (bytestream2_get_bytes_left(gb) < 1)
return AVERROR_INVALIDDATA;
line_ptr = frame;
frame += width;
segments = bytestream2_get_byteu(gb);
segments = *src++;
while (segments--) {
if (frame - line_ptr <= bytestream2_peek_byte(gb))
return AVERROR_INVALIDDATA;
line_ptr += bytestream2_get_byte(gb);
count = (int8_t)bytestream2_get_byte(gb);
if (src_end - src < 3)
return -1;
if (frame - line_ptr <= *src)
return -1;
line_ptr += *src++;
count = (int8_t)*src++;
if (count >= 0) {
if (frame - line_ptr < count)
return AVERROR_INVALIDDATA;
if (bytestream2_get_buffer(gb, line_ptr, count) != count)
return AVERROR_INVALIDDATA;
if (frame - line_ptr < count || src_end - src < count)
return -1;
bytestream_get_buffer(&src, line_ptr, count);
} else {
count = -count;
if (frame - line_ptr < count)
return AVERROR_INVALIDDATA;
memset(line_ptr, bytestream2_get_byte(gb), count);
if (frame - line_ptr < count || src >= src_end)
return -1;
memset(line_ptr, *src++, count);
}
line_ptr += count;
}
@@ -229,55 +225,49 @@ static int decode_bdlt(GetByteContext *gb, uint8_t *frame, int width, int height
return 0;
}
static int decode_wdlt(GetByteContext *gb, uint8_t *frame, int width, int height)
static int decode_wdlt(uint8_t *frame, int width, int height,
const uint8_t *src, const uint8_t *src_end)
{
const uint8_t *frame_end = frame + width * height;
uint8_t *line_ptr;
int count, i, v, lines, segments;
int y = 0;
lines = bytestream2_get_le16(gb);
if (lines > height)
return AVERROR_INVALIDDATA;
lines = bytestream_get_le16(&src);
if (lines > height || src >= src_end)
return -1;
while (lines--) {
if (bytestream2_get_bytes_left(gb) < 2)
return AVERROR_INVALIDDATA;
segments = bytestream2_get_le16u(gb);
segments = bytestream_get_le16(&src);
while ((segments & 0xC000) == 0xC000) {
unsigned skip_lines = -(int16_t)segments;
unsigned delta = -((int16_t)segments * width);
if (frame_end - frame <= delta || y + lines + skip_lines > height)
return AVERROR_INVALIDDATA;
if (frame_end - frame <= delta)
return -1;
frame += delta;
y += skip_lines;
segments = bytestream2_get_le16(gb);
segments = bytestream_get_le16(&src);
}
if (segments & 0x8000) {
frame[width - 1] = segments & 0xFF;
segments = bytestream2_get_le16(gb);
segments = bytestream_get_le16(&src);
}
line_ptr = frame;
if (frame_end - frame < width)
return AVERROR_INVALIDDATA;
frame += width;
y++;
while (segments--) {
if (frame - line_ptr <= bytestream2_peek_byte(gb))
return AVERROR_INVALIDDATA;
line_ptr += bytestream2_get_byte(gb);
count = (int8_t)bytestream2_get_byte(gb);
if (src_end - src < 2)
return -1;
if (frame - line_ptr <= *src)
return -1;
line_ptr += *src++;
count = (int8_t)*src++;
if (count >= 0) {
if (frame - line_ptr < count * 2)
return AVERROR_INVALIDDATA;
if (bytestream2_get_buffer(gb, line_ptr, count * 2) != count * 2)
return AVERROR_INVALIDDATA;
if (frame - line_ptr < count*2 || src_end - src < count*2)
return -1;
bytestream_get_buffer(&src, line_ptr, count*2);
line_ptr += count * 2;
} else {
count = -count;
if (frame - line_ptr < count * 2)
return AVERROR_INVALIDDATA;
v = bytestream2_get_le16(gb);
if (frame - line_ptr < count*2 || src_end - src < 2)
return -1;
v = bytestream_get_le16(&src);
for (i = 0; i < count; i++)
bytestream_put_le16(&line_ptr, v);
}
@@ -287,19 +277,22 @@ static int decode_wdlt(GetByteContext *gb, uint8_t *frame, int width, int height
return 0;
}
static int decode_unk6(GetByteContext *gb, uint8_t *frame, int width, int height)
static int decode_unk6(uint8_t *frame, int width, int height,
const uint8_t *src, const uint8_t *src_end)
{
return AVERROR_PATCHWELCOME;
return -1;
}
static int decode_blck(GetByteContext *gb, uint8_t *frame, int width, int height)
static int decode_blck(uint8_t *frame, int width, int height,
const uint8_t *src, const uint8_t *src_end)
{
memset(frame, 0, width * height);
return 0;
}
typedef int (*chunk_decoder)(GetByteContext *gb, uint8_t *frame, int width, int height);
typedef int (*chunk_decoder)(uint8_t *frame, int width, int height,
const uint8_t *src, const uint8_t *src_end);
static const chunk_decoder decoder[8] = {
decode_copy, decode_tsw1, decode_bdlt, decode_wdlt,
@@ -315,8 +308,9 @@ static int dfa_decode_frame(AVCodecContext *avctx,
AVPacket *avpkt)
{
DfaContext *s = avctx->priv_data;
GetByteContext gb;
const uint8_t *buf = avpkt->data;
const uint8_t *buf_end = avpkt->data + avpkt->size;
const uint8_t *tmp_buf;
uint32_t chunk_type, chunk_size;
uint8_t *dst;
int ret;
@@ -330,25 +324,30 @@ static int dfa_decode_frame(AVCodecContext *avctx,
return ret;
}
bytestream2_init(&gb, avpkt->data, avpkt->size);
while (bytestream2_get_bytes_left(&gb) > 0) {
bytestream2_skip(&gb, 4);
chunk_size = bytestream2_get_le32(&gb);
chunk_type = bytestream2_get_le32(&gb);
while (buf < buf_end) {
chunk_size = AV_RL32(buf + 4);
chunk_type = AV_RL32(buf + 8);
buf += 12;
if (buf_end - buf < chunk_size) {
av_log(avctx, AV_LOG_ERROR, "Chunk size is too big (%d bytes)\n", chunk_size);
return -1;
}
if (!chunk_type)
break;
if (chunk_type == 1) {
pal_elems = FFMIN(chunk_size / 3, 256);
tmp_buf = buf;
for (i = 0; i < pal_elems; i++) {
s->pal[i] = bytestream2_get_be24(&gb) << 2;
s->pal[i] = bytestream_get_be24(&tmp_buf) << 2;
s->pal[i] |= 0xFF << 24 | (s->pal[i] >> 6) & 0x30303;
}
s->pic.palette_has_changed = 1;
} else if (chunk_type <= 9) {
if (decoder[chunk_type - 2](&gb, s->frame_buf, avctx->width, avctx->height)) {
if (decoder[chunk_type - 2](s->frame_buf, avctx->width, avctx->height,
buf, buf + chunk_size)) {
av_log(avctx, AV_LOG_ERROR, "Error decoding %s chunk\n",
chunk_name[chunk_type - 2]);
return AVERROR_INVALIDDATA;
return -1;
}
} else {
av_log(avctx, AV_LOG_WARNING, "Ignoring unknown chunk type %d\n",

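The decode_wdlt()/dfa_decode_frame() hunks above move between raw src/src_end pointer arithmetic and the GetByteContext (bytestream2_*) reader. As a rough, self-contained sketch of the bounds-checked pattern both variants aim for — hypothetical names, not FFmpeg's API — a reader that refuses to run past the end of the buffer:

/* Hypothetical minimal byte reader; reads past the end return 0
 * instead of overrunning the buffer. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    const uint8_t *cur, *end;
} ByteReader;

static unsigned reader_bytes_left(const ByteReader *r)
{
    return (unsigned)(r->end - r->cur);
}

static unsigned reader_get_le16(ByteReader *r)
{
    unsigned v;
    if (reader_bytes_left(r) < 2)      /* bounds check before every read */
        return 0;
    v = r->cur[0] | (r->cur[1] << 8);  /* little-endian 16-bit value */
    r->cur += 2;
    return v;
}

int main(void)
{
    static const uint8_t buf[3] = { 0x34, 0x12, 0xff };
    ByteReader r = { buf, buf + sizeof(buf) };
    printf("%04x\n", reader_get_le16(&r));  /* 1234 */
    printf("%04x\n", reader_get_le16(&r));  /* 0000: only one byte left */
    return 0;
}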

@@ -491,16 +491,10 @@ static inline void codeblock(DiracContext *s, SubBand *b,
}
if (s->codeblock_mode && !(s->old_delta_quant && blockcnt_one)) {
int quant = b->quant;
if (is_arith)
quant += dirac_get_arith_int(c, CTX_DELTA_Q_F, CTX_DELTA_Q_DATA);
b->quant += dirac_get_arith_int(c, CTX_DELTA_Q_F, CTX_DELTA_Q_DATA);
else
quant += dirac_get_se_golomb(gb);
if (quant < 0) {
av_log(s->avctx, AV_LOG_ERROR, "Invalid quant\n");
return;
}
b->quant = quant;
b->quant += dirac_get_se_golomb(gb);
}
b->quant = FFMIN(b->quant, MAX_QUANT);
@@ -625,7 +619,7 @@ static void decode_component(DiracContext *s, int comp)
b->quant = svq3_get_ue_golomb(&s->gb);
align_get_bits(&s->gb);
b->coeff_data = s->gb.buffer + get_bits_count(&s->gb)/8;
b->length = FFMIN(b->length, FFMAX(get_bits_left(&s->gb)/8, 0));
b->length = FFMIN(b->length, get_bits_left(&s->gb)/8);
skip_bits_long(&s->gb, b->length*8);
}
}
@@ -1178,7 +1172,7 @@ static void propagate_block_data(DiracBlock *block, int stride, int size)
* Dirac Specification ->
* 12. Block motion data syntax
*/
static int dirac_unpack_block_motion_data(DiracContext *s)
static void dirac_unpack_block_motion_data(DiracContext *s)
{
GetBitContext *gb = &s->gb;
uint8_t *sbsplit = s->sbsplit;
@@ -1198,9 +1192,7 @@ static int dirac_unpack_block_motion_data(DiracContext *s)
ff_dirac_init_arith_decoder(arith, gb, svq3_get_ue_golomb(gb)); /* svq3_get_ue_golomb(gb) is the length */
for (y = 0; y < s->sbheight; y++) {
for (x = 0; x < s->sbwidth; x++) {
unsigned int split = dirac_get_arith_uint(arith, CTX_SB_F1, CTX_SB_DATA);
if (split > 2)
return -1;
int split = dirac_get_arith_uint(arith, CTX_SB_F1, CTX_SB_DATA);
sbsplit[x] = (split + pred_sbsplit(sbsplit+x, s->sbwidth, x, y)) % 3;
}
sbsplit += s->sbwidth;
@@ -1229,8 +1221,6 @@ static int dirac_unpack_block_motion_data(DiracContext *s)
propagate_block_data(block, s->blwidth, step);
}
}
return 0;
}
static int weight(int i, int blen, int offset)
@@ -1685,8 +1675,7 @@ static int dirac_decode_picture_header(DiracContext *s)
if (s->num_refs) {
if (dirac_unpack_prediction_parameters(s)) /* [DIRAC_STD] 11.2 Picture Prediction Data. picture_prediction() */
return -1;
if (dirac_unpack_block_motion_data(s)) /* [DIRAC_STD] 12. Block motion data syntax */
return -1;
dirac_unpack_block_motion_data(s); /* [DIRAC_STD] 12. Block motion data syntax */
}
if (dirac_unpack_idwt_params(s)) /* [DIRAC_STD] 11.3 Wavelet transform data */
return -1;


@@ -1038,7 +1038,7 @@ int ff_dnxhd_find_cid(AVCodecContext *avctx, int bit_depth)
if (cid->width == avctx->width && cid->height == avctx->height &&
cid->interlaced == !!(avctx->flags & CODEC_FLAG_INTERLACED_DCT) &&
cid->bit_depth == bit_depth) {
for (j = 0; j < FF_ARRAY_ELEMS(cid->bit_rates); j++) {
for (j = 0; j < sizeof(cid->bit_rates); j++) {
if (cid->bit_rates[j] == mbs)
return cid->cid;
}

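The loop bound in ff_dnxhd_find_cid() above hinges on FF_ARRAY_ELEMS(cid->bit_rates) versus sizeof(cid->bit_rates): sizeof counts bytes, not elements, so it overstates the number of int entries. A tiny sketch with made-up values:

#include <stdio.h>

#define ARRAY_ELEMS(a) (sizeof(a) / sizeof((a)[0]))   /* same idea as FF_ARRAY_ELEMS */

int main(void)
{
    int bit_rates[5] = { 120, 145, 185, 220, 365 };   /* hypothetical values */
    printf("sizeof        = %zu\n", sizeof(bit_rates));       /* 20 with 4-byte int */
    printf("element count = %zu\n", ARRAY_ELEMS(bit_rates));  /* 5 */
    return 0;
}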

@@ -84,9 +84,9 @@ static int dnxhd_init_vlc(DNXHDContext *ctx, int cid)
}
ctx->cid_table = &ff_dnxhd_cid_table[index];
ff_free_vlc(&ctx->ac_vlc);
ff_free_vlc(&ctx->dc_vlc);
ff_free_vlc(&ctx->run_vlc);
free_vlc(&ctx->ac_vlc);
free_vlc(&ctx->dc_vlc);
free_vlc(&ctx->run_vlc);
init_vlc(&ctx->ac_vlc, DNXHD_VLC_BITS, 257,
ctx->cid_table->ac_bits, 1, 1,
@@ -416,9 +416,9 @@ static av_cold int dnxhd_decode_close(AVCodecContext *avctx)
if (ctx->picture.data[0])
ff_thread_release_buffer(avctx, &ctx->picture);
ff_free_vlc(&ctx->ac_vlc);
ff_free_vlc(&ctx->dc_vlc);
ff_free_vlc(&ctx->run_vlc);
free_vlc(&ctx->ac_vlc);
free_vlc(&ctx->dc_vlc);
free_vlc(&ctx->run_vlc);
return 0;
}


@@ -183,11 +183,6 @@ static int dpcm_decode_frame(AVCodecContext *avctx, void *data,
int stereo = s->channels - 1;
int16_t *output_samples;
if (stereo && (buf_size & 1)) {
buf_size--;
buf_end--;
}
/* calculate output size */
switch(avctx->codec->id) {
case CODEC_ID_ROQ_DPCM:
@@ -325,7 +320,7 @@ static int dpcm_decode_frame(AVCodecContext *avctx, void *data,
*got_frame_ptr = 1;
*(AVFrame *)data = s->frame;
return avpkt->size;
return buf_size;
}
#define DPCM_DECODER(id_, name_, long_name_) \


@@ -147,11 +147,11 @@ static int cin_decode_huffman(const unsigned char *src, int src_size, unsigned c
return dst_cur - dst;
}
static int cin_decode_lzss(const unsigned char *src, int src_size, unsigned char *dst, int dst_size)
static void cin_decode_lzss(const unsigned char *src, int src_size, unsigned char *dst, int dst_size)
{
uint16_t cmd;
int i, sz, offset, code;
unsigned char *dst_end = dst + dst_size, *dst_start = dst;
unsigned char *dst_end = dst + dst_size;
const unsigned char *src_end = src + src_size;
while (src < src_end && dst < dst_end) {
@@ -162,8 +162,6 @@ static int cin_decode_lzss(const unsigned char *src, int src_size, unsigned char
} else {
cmd = AV_RL16(src); src += 2;
offset = cmd >> 4;
if ((int) (dst - dst_start) < offset + 1)
return AVERROR_INVALIDDATA;
sz = (cmd & 0xF) + 2;
/* don't use memcpy/memmove here as the decoding routine (ab)uses */
/* buffer overlappings to repeat bytes in the destination */
@@ -175,8 +173,6 @@ static int cin_decode_lzss(const unsigned char *src, int src_size, unsigned char
}
}
}
return 0;
}
static void cin_decode_rle(const unsigned char *src, int src_size, unsigned char *dst, int dst_size)
@@ -188,13 +184,11 @@ static void cin_decode_rle(const unsigned char *src, int src_size, unsigned char
while (src < src_end && dst < dst_end) {
code = *src++;
if (code & 0x80) {
if (src >= src_end)
break;
len = code - 0x7F;
memset(dst, *src++, FFMIN(len, dst_end - dst));
} else {
len = code + 1;
memcpy(dst, src, FFMIN3(len, dst_end - dst, src_end - src));
memcpy(dst, src, FFMIN(len, dst_end - dst));
src += len;
}
dst += len;
@@ -208,7 +202,13 @@ static int cinvideo_decode_frame(AVCodecContext *avctx,
const uint8_t *buf = avpkt->data;
int buf_size = avpkt->size;
CinVideoContext *cin = avctx->priv_data;
int i, y, palette_type, palette_colors_count, bitmap_frame_type, bitmap_frame_size, res = 0;
int i, y, palette_type, palette_colors_count, bitmap_frame_type, bitmap_frame_size;
cin->frame.buffer_hints = FF_BUFFER_HINTS_VALID | FF_BUFFER_HINTS_PRESERVE | FF_BUFFER_HINTS_REUSABLE;
if (avctx->reget_buffer(avctx, &cin->frame)) {
av_log(cin->avctx, AV_LOG_ERROR, "delphinecinvideo: reget_buffer() failed to allocate a frame\n");
return -1;
}
palette_type = buf[0];
palette_colors_count = AV_RL16(buf+1);
@@ -234,6 +234,8 @@ static int cinvideo_decode_frame(AVCodecContext *avctx,
bitmap_frame_size -= 4;
}
}
memcpy(cin->frame.data[1], cin->palette, sizeof(cin->palette));
cin->frame.palette_has_changed = 1;
/* note: the decoding routines below assumes that surface.width = surface.pitch */
switch (bitmap_frame_type) {
@@ -266,31 +268,17 @@ static int cinvideo_decode_frame(AVCodecContext *avctx,
cin->bitmap_table[CIN_CUR_BMP], cin->bitmap_size);
break;
case 38:
res = cin_decode_lzss(buf, bitmap_frame_size,
cin->bitmap_table[CIN_CUR_BMP],
cin->bitmap_size);
if (res < 0)
return res;
cin_decode_lzss(buf, bitmap_frame_size,
cin->bitmap_table[CIN_CUR_BMP], cin->bitmap_size);
break;
case 39:
res = cin_decode_lzss(buf, bitmap_frame_size,
cin->bitmap_table[CIN_CUR_BMP],
cin->bitmap_size);
if (res < 0)
return res;
cin_decode_lzss(buf, bitmap_frame_size,
cin->bitmap_table[CIN_CUR_BMP], cin->bitmap_size);
cin_apply_delta_data(cin->bitmap_table[CIN_PRE_BMP],
cin->bitmap_table[CIN_CUR_BMP], cin->bitmap_size);
break;
}
cin->frame.buffer_hints = FF_BUFFER_HINTS_VALID | FF_BUFFER_HINTS_PRESERVE | FF_BUFFER_HINTS_REUSABLE;
if (avctx->reget_buffer(avctx, &cin->frame)) {
av_log(cin->avctx, AV_LOG_ERROR, "delphinecinvideo: reget_buffer() failed to allocate a frame\n");
return -1;
}
memcpy(cin->frame.data[1], cin->palette, sizeof(cin->palette));
cin->frame.palette_has_changed = 1;
for (y = 0; y < cin->avctx->height; ++y)
memcpy(cin->frame.data[0] + (cin->avctx->height - 1 - y) * cin->frame.linesize[0],
cin->bitmap_table[CIN_CUR_BMP] + y * cin->avctx->width,


@@ -367,17 +367,18 @@ void ff_put_pixels_clamped_c(const DCTELEM *block, uint8_t *restrict pixels,
int line_size)
{
int i;
uint8_t *cm = ff_cropTbl + MAX_NEG_CROP;
/* read the pixels */
for(i=0;i<8;i++) {
pixels[0] = av_clip_uint8(block[0]);
pixels[1] = av_clip_uint8(block[1]);
pixels[2] = av_clip_uint8(block[2]);
pixels[3] = av_clip_uint8(block[3]);
pixels[4] = av_clip_uint8(block[4]);
pixels[5] = av_clip_uint8(block[5]);
pixels[6] = av_clip_uint8(block[6]);
pixels[7] = av_clip_uint8(block[7]);
pixels[0] = cm[block[0]];
pixels[1] = cm[block[1]];
pixels[2] = cm[block[2]];
pixels[3] = cm[block[3]];
pixels[4] = cm[block[4]];
pixels[5] = cm[block[5]];
pixels[6] = cm[block[6]];
pixels[7] = cm[block[7]];
pixels += line_size;
block += 8;
@@ -388,13 +389,14 @@ static void put_pixels_clamped4_c(const DCTELEM *block, uint8_t *restrict pixels
int line_size)
{
int i;
uint8_t *cm = ff_cropTbl + MAX_NEG_CROP;
/* read the pixels */
for(i=0;i<4;i++) {
pixels[0] = av_clip_uint8(block[0]);
pixels[1] = av_clip_uint8(block[1]);
pixels[2] = av_clip_uint8(block[2]);
pixels[3] = av_clip_uint8(block[3]);
pixels[0] = cm[block[0]];
pixels[1] = cm[block[1]];
pixels[2] = cm[block[2]];
pixels[3] = cm[block[3]];
pixels += line_size;
block += 8;
@@ -405,11 +407,12 @@ static void put_pixels_clamped2_c(const DCTELEM *block, uint8_t *restrict pixels
int line_size)
{
int i;
uint8_t *cm = ff_cropTbl + MAX_NEG_CROP;
/* read the pixels */
for(i=0;i<2;i++) {
pixels[0] = av_clip_uint8(block[0]);
pixels[1] = av_clip_uint8(block[1]);
pixels[0] = cm[block[0]];
pixels[1] = cm[block[1]];
pixels += line_size;
block += 8;
@@ -441,17 +444,18 @@ void ff_add_pixels_clamped_c(const DCTELEM *block, uint8_t *restrict pixels,
int line_size)
{
int i;
uint8_t *cm = ff_cropTbl + MAX_NEG_CROP;
/* read the pixels */
for(i=0;i<8;i++) {
pixels[0] = av_clip_uint8(pixels[0] + block[0]);
pixels[1] = av_clip_uint8(pixels[1] + block[1]);
pixels[2] = av_clip_uint8(pixels[2] + block[2]);
pixels[3] = av_clip_uint8(pixels[3] + block[3]);
pixels[4] = av_clip_uint8(pixels[4] + block[4]);
pixels[5] = av_clip_uint8(pixels[5] + block[5]);
pixels[6] = av_clip_uint8(pixels[6] + block[6]);
pixels[7] = av_clip_uint8(pixels[7] + block[7]);
pixels[0] = cm[pixels[0] + block[0]];
pixels[1] = cm[pixels[1] + block[1]];
pixels[2] = cm[pixels[2] + block[2]];
pixels[3] = cm[pixels[3] + block[3]];
pixels[4] = cm[pixels[4] + block[4]];
pixels[5] = cm[pixels[5] + block[5]];
pixels[6] = cm[pixels[6] + block[6]];
pixels[7] = cm[pixels[7] + block[7]];
pixels += line_size;
block += 8;
}
@@ -461,13 +465,14 @@ static void add_pixels_clamped4_c(const DCTELEM *block, uint8_t *restrict pixels
int line_size)
{
int i;
uint8_t *cm = ff_cropTbl + MAX_NEG_CROP;
/* read the pixels */
for(i=0;i<4;i++) {
pixels[0] = av_clip_uint8(pixels[0] + block[0]);
pixels[1] = av_clip_uint8(pixels[1] + block[1]);
pixels[2] = av_clip_uint8(pixels[2] + block[2]);
pixels[3] = av_clip_uint8(pixels[3] + block[3]);
pixels[0] = cm[pixels[0] + block[0]];
pixels[1] = cm[pixels[1] + block[1]];
pixels[2] = cm[pixels[2] + block[2]];
pixels[3] = cm[pixels[3] + block[3]];
pixels += line_size;
block += 8;
}
@@ -477,11 +482,12 @@ static void add_pixels_clamped2_c(const DCTELEM *block, uint8_t *restrict pixels
int line_size)
{
int i;
uint8_t *cm = ff_cropTbl + MAX_NEG_CROP;
/* read the pixels */
for(i=0;i<2;i++) {
pixels[0] = av_clip_uint8(pixels[0] + block[0]);
pixels[1] = av_clip_uint8(pixels[1] + block[1]);
pixels[0] = cm[pixels[0] + block[0]];
pixels[1] = cm[pixels[1] + block[1]];
pixels += line_size;
block += 8;
}
@@ -1912,7 +1918,7 @@ void ff_set_cmp(DSPContext* c, me_cmp_func *cmp, int type){
static void add_bytes_c(uint8_t *dst, uint8_t *src, int w){
long i;
for(i=0; i<=w-(int)sizeof(long); i+=sizeof(long)){
for(i=0; i<=w-sizeof(long); i+=sizeof(long)){
long a = *(long*)(src+i);
long b = *(long*)(dst+i);
*(long*)(dst+i) = ((a&pb_7f) + (b&pb_7f)) ^ ((a^b)&pb_80);
@@ -1937,7 +1943,7 @@ static void diff_bytes_c(uint8_t *dst, uint8_t *src1, uint8_t *src2, int w){
}
}else
#endif
for(i=0; i<=w-(int)sizeof(long); i+=sizeof(long)){
for(i=0; i<=w-sizeof(long); i+=sizeof(long)){
long a = *(long*)(src1+i);
long b = *(long*)(src2+i);
*(long*)(dst+i) = ((a|pb_80) - (b&pb_7f)) ^ ((a^b^pb_80)&pb_80);
@@ -2773,11 +2779,15 @@ static void ff_jref_idct2_add(uint8_t *dest, int line_size, DCTELEM *block)
static void ff_jref_idct1_put(uint8_t *dest, int line_size, DCTELEM *block)
{
dest[0] = av_clip_uint8((block[0] + 4)>>3);
uint8_t *cm = ff_cropTbl + MAX_NEG_CROP;
dest[0] = cm[(block[0] + 4)>>3];
}
static void ff_jref_idct1_add(uint8_t *dest, int line_size, DCTELEM *block)
{
dest[0] = av_clip_uint8(dest[0] + ((block[0] + 4)>>3));
uint8_t *cm = ff_cropTbl + MAX_NEG_CROP;
dest[0] = cm[dest[0] + ((block[0] + 4)>>3)];
}
static void just_return(void *mem av_unused, int stride av_unused, int h av_unused) { return; }
@@ -2822,7 +2832,7 @@ int ff_check_alignment(void){
av_cold void dsputil_init(DSPContext* c, AVCodecContext *avctx)
{
int i, j;
int i;
ff_check_alignment();
@@ -3184,15 +3194,11 @@ av_cold void dsputil_init(DSPContext* c, AVCodecContext *avctx)
if (ARCH_SH4) dsputil_init_sh4 (c, avctx);
if (ARCH_BFIN) dsputil_init_bfin (c, avctx);
for (i = 0; i < 4; i++) {
for (j = 0; j < 16; j++) {
if(!c->put_2tap_qpel_pixels_tab[i][j])
c->put_2tap_qpel_pixels_tab[i][j] =
c->put_h264_qpel_pixels_tab[i][j];
if(!c->avg_2tap_qpel_pixels_tab[i][j])
c->avg_2tap_qpel_pixels_tab[i][j] =
c->avg_h264_qpel_pixels_tab[i][j];
}
for(i=0; i<64; i++){
if(!c->put_2tap_qpel_pixels_tab[0][i])
c->put_2tap_qpel_pixels_tab[0][i]= c->put_h264_qpel_pixels_tab[0][i];
if(!c->avg_2tap_qpel_pixels_tab[0][i])
c->avg_2tap_qpel_pixels_tab[0][i]= c->avg_h264_qpel_pixels_tab[0][i];
}
ff_init_scantable_permutation(c->idct_permutation,

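The *_pixels_clamped and ff_jref_idct1 hunks above trade av_clip_uint8() against a crop-table lookup (ff_cropTbl + MAX_NEG_CROP). A self-contained sketch of the two clamping strategies; the table size and names below are assumptions, not FFmpeg's definitions:

#include <stdint.h>
#include <stdio.h>

#define NEG_CROP 1024   /* stand-in for MAX_NEG_CROP; the value is an assumption */

static uint8_t crop_tbl[256 + 2 * NEG_CROP];

static void init_crop_tbl(void)
{
    /* entries below 0 clamp to 0, entries above 255 clamp to 255 */
    for (int i = 0; i < 256 + 2 * NEG_CROP; i++) {
        int v = i - NEG_CROP;
        crop_tbl[i] = v < 0 ? 0 : v > 255 ? 255 : v;
    }
}

static uint8_t clip_uint8(int v)        /* branchy clamp to [0,255] */
{
    return v < 0 ? 0 : v > 255 ? 255 : v;
}

int main(void)
{
    const uint8_t *cm = crop_tbl + NEG_CROP;  /* like ff_cropTbl + MAX_NEG_CROP */
    init_crop_tbl();
    printf("%d %d\n", clip_uint8(300), cm[300]);  /* 255 255 */
    printf("%d %d\n", clip_uint8(-5),  cm[-5]);   /* 0 0   */
    return 0;
}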

@@ -33,7 +33,6 @@
#include "libavutil/intreadwrite.h"
#include "avcodec.h"
typedef int emuedge_linesize_type;
//#define DEBUG
/* dct code */


@@ -312,7 +312,7 @@ static av_cold int dvvideo_init(AVCodecContext *avctx)
dv_rl_vlc[i].level = level;
dv_rl_vlc[i].run = run;
}
ff_free_vlc(&dv_vlc);
free_vlc(&dv_vlc);
dv_vlc_map_tableinit();
}
@@ -372,6 +372,11 @@ typedef struct BlockInfo {
static const int vs_total_ac_bits = (100 * 4 + 68*2) * 5;
static const int mb_area_start[5] = { 1, 6, 21, 43, 64 };
static inline int put_bits_left(PutBitContext* s)
{
return (s->buf_end - s->buf) * 8 - put_bits_count(s);
}
/* decode AC coefficients */
static void dv_decode_ac(GetBitContext *gb, BlockInfo *mb, DCTELEM *block)
{


@@ -255,12 +255,6 @@ static int decode_frame(AVCodecContext *avctx, void *data, int *data_size, AVPac
case 5:
c->pic.key_frame = !(compr & 1);
c->pic.pict_type = (compr & 1) ? AV_PICTURE_TYPE_P : AV_PICTURE_TYPE_I;
if (!tmpptr && !c->pic.key_frame) {
av_log(avctx, AV_LOG_ERROR, "Missing reference frame.\n");
return AVERROR_INVALIDDATA;
}
for(j = 0; j < avctx->height; j++){
if(compr & 1){
for(i = 0; i < avctx->width; i++)


@@ -25,14 +25,7 @@
#define _WIN32_WINNT 0x0600
#define COBJMACROS
#include "config.h"
#include "dxva2.h"
#if HAVE_DXVA_H
#include <dxva.h>
#endif
#include "avcodec.h"
#include "mpegvideo.h"


@@ -43,7 +43,6 @@ typedef struct TgqContext {
ScanTable scantable;
int qtable[64];
DECLARE_ALIGNED(16, DCTELEM, block)[6][64];
GetByteContext gb;
} TgqContext;
static av_cold int tgq_decode_init(AVCodecContext *avctx){
@@ -142,36 +141,39 @@ static void tgq_idct_put_mb_dconly(TgqContext *s, int mb_x, int mb_y, const int8
}
}
static void tgq_decode_mb(TgqContext *s, int mb_y, int mb_x){
static void tgq_decode_mb(TgqContext *s, int mb_y, int mb_x, const uint8_t **bs, const uint8_t *buf_end){
int mode;
int i;
int8_t dc[6];
mode = bytestream2_get_byte(&s->gb);
mode = bytestream_get_byte(bs);
if (mode>buf_end-*bs) {
av_log(s->avctx, AV_LOG_ERROR, "truncated macroblock\n");
return;
}
if (mode>12) {
GetBitContext gb;
init_get_bits(&gb, s->gb.buffer, FFMIN(s->gb.buffer_end - s->gb.buffer, mode) * 8);
init_get_bits(&gb, *bs, mode*8);
for(i=0; i<6; i++)
tgq_decode_block(s, s->block[i], &gb);
tgq_idct_put_mb(s, s->block, mb_x, mb_y);
bytestream2_skip(&s->gb, mode);
}else{
if (mode==3) {
memset(dc, bytestream2_get_byte(&s->gb), 4);
dc[4] = bytestream2_get_byte(&s->gb);
dc[5] = bytestream2_get_byte(&s->gb);
memset(dc, (*bs)[0], 4);
dc[4] = (*bs)[1];
dc[5] = (*bs)[2];
}else if (mode==6) {
bytestream2_get_buffer(&s->gb, dc, 6);
memcpy(dc, *bs, 6);
}else if (mode==12) {
for (i = 0; i < 6; i++) {
dc[i] = bytestream2_get_byte(&s->gb);
bytestream2_skip(&s->gb, 1);
}
for(i=0; i<6; i++)
dc[i] = (*bs)[i*2];
}else{
av_log(s->avctx, AV_LOG_ERROR, "unsupported mb mode %i\n", mode);
}
tgq_idct_put_mb_dconly(s, mb_x, mb_y, dc);
}
*bs += mode;
}
static void tgq_calculate_qtable(TgqContext *s, int quant){
@@ -191,30 +193,28 @@ static int tgq_decode_frame(AVCodecContext *avctx,
AVPacket *avpkt){
const uint8_t *buf = avpkt->data;
int buf_size = avpkt->size;
const uint8_t *buf_start = buf;
const uint8_t *buf_end = buf + buf_size;
TgqContext *s = avctx->priv_data;
int x,y;
int big_endian = AV_RL32(&buf[4]) > 0x000FFFFF;
if (buf_size < 16) {
int big_endian = AV_RL32(&buf[4]) > 0x000FFFFF;
buf += 8;
if(8>buf_end-buf) {
av_log(avctx, AV_LOG_WARNING, "truncated header\n");
return -1;
}
bytestream2_init(&s->gb, buf + 8, buf_size - 8);
if (big_endian) {
s->width = bytestream2_get_be16u(&s->gb);
s->height = bytestream2_get_be16u(&s->gb);
} else {
s->width = bytestream2_get_le16u(&s->gb);
s->height = bytestream2_get_le16u(&s->gb);
}
s->width = big_endian ? AV_RB16(&buf[0]) : AV_RL16(&buf[0]);
s->height = big_endian ? AV_RB16(&buf[2]) : AV_RL16(&buf[2]);
if (s->avctx->width!=s->width || s->avctx->height!=s->height) {
avcodec_set_dimensions(s->avctx, s->width, s->height);
if (s->frame.data[0])
avctx->release_buffer(avctx, &s->frame);
}
tgq_calculate_qtable(s, bytestream2_get_byteu(&s->gb));
bytestream2_skip(&s->gb, 3);
tgq_calculate_qtable(s, buf[4]);
buf += 8;
if (!s->frame.data[0]) {
s->frame.key_frame = 1;
@@ -226,14 +226,14 @@ static int tgq_decode_frame(AVCodecContext *avctx,
}
}
for (y = 0; y < FFALIGN(avctx->height, 16) >> 4; y++)
for (x = 0; x < FFALIGN(avctx->width, 16) >> 4; x++)
tgq_decode_mb(s, y, x);
for (y=0; y<(avctx->height+15)/16; y++)
for (x=0; x<(avctx->width+15)/16; x++)
tgq_decode_mb(s, y, x, &buf, buf_end);
*data_size = sizeof(AVFrame);
*(AVFrame*)data = s->frame;
return avpkt->size;
return buf-buf_start;
}
static av_cold int tgq_decode_end(AVCodecContext *avctx){


@@ -62,7 +62,7 @@ static int tqi_decode_mb(MpegEncContext *s, DCTELEM (*block)[64])
int n;
s->dsp.clear_blocks(block[0]);
for (n=0; n<6; n++)
if (ff_mpeg1_decode_block_intra(s, block[n], n) < 0)
if(ff_mpeg1_decode_block_intra(s, block[n], n)<0)
return -1;
return 0;
@@ -137,7 +137,7 @@ static int tqi_decode_frame(AVCodecContext *avctx,
for (s->mb_y=0; s->mb_y<(avctx->height+15)/16; s->mb_y++)
for (s->mb_x=0; s->mb_x<(avctx->width+15)/16; s->mb_x++)
{
if (tqi_decode_mb(s, t->block) < 0)
if(tqi_decode_mb(s, t->block) < 0)
break;
tqi_idct_put(t, t->block);
}


@@ -440,14 +440,9 @@ static void guess_mv(MpegEncContext *s)
if ((!(s->avctx->error_concealment&FF_EC_GUESS_MVS)) ||
num_avail <= mb_width / 2) {
for (mb_y = 0; mb_y < s->mb_height; mb_y++) {
s->mb_x = 0;
s->mb_y = mb_y;
ff_init_block_index(s);
for (mb_x = 0; mb_x < s->mb_width; mb_x++) {
const int mb_xy = mb_x + mb_y * s->mb_stride;
ff_update_block_index(s);
if (IS_INTRA(s->current_picture.f.mb_type[mb_xy]))
continue;
if (!(s->error_status_table[mb_xy] & ER_MV_ERROR))
@@ -482,9 +477,6 @@ static void guess_mv(MpegEncContext *s)
changed = 0;
for (mb_y = 0; mb_y < s->mb_height; mb_y++) {
s->mb_x = 0;
s->mb_y = mb_y;
ff_init_block_index(s);
for (mb_x = 0; mb_x < s->mb_width; mb_x++) {
const int mb_xy = mb_x + mb_y * s->mb_stride;
int mv_predictor[8][2] = { { 0 } };
@@ -496,8 +488,6 @@ static void guess_mv(MpegEncContext *s)
const int mot_index = (mb_x + mb_y * mot_stride) * mot_step;
int prev_x, prev_y, prev_ref;
ff_update_block_index(s);
if ((mb_x ^ mb_y ^ pass) & 1)
continue;
@@ -1108,16 +1098,11 @@ void ff_er_frame_end(MpegEncContext *s)
/* handle inter blocks with damaged AC */
for (mb_y = 0; mb_y < s->mb_height; mb_y++) {
s->mb_x = 0;
s->mb_y = mb_y;
ff_init_block_index(s);
for (mb_x = 0; mb_x < s->mb_width; mb_x++) {
const int mb_xy = mb_x + mb_y * s->mb_stride;
const int mb_type = s->current_picture.f.mb_type[mb_xy];
int dir = !s->last_picture.f.data[0];
ff_update_block_index(s);
error = s->error_status_table[mb_xy];
if (IS_INTRA(mb_type))
@@ -1155,16 +1140,11 @@ void ff_er_frame_end(MpegEncContext *s)
/* guess MVs */
if (s->pict_type == AV_PICTURE_TYPE_B) {
for (mb_y = 0; mb_y < s->mb_height; mb_y++) {
s->mb_x = 0;
s->mb_y = mb_y;
ff_init_block_index(s);
for (mb_x = 0; mb_x < s->mb_width; mb_x++) {
int xy = mb_x * 2 + mb_y * 2 * s->b8_stride;
const int mb_xy = mb_x + mb_y * s->mb_stride;
const int mb_type = s->current_picture.f.mb_type[mb_xy];
ff_update_block_index(s);
error = s->error_status_table[mb_xy];
if (IS_INTRA(mb_type))


@@ -48,8 +48,8 @@ typedef struct Escape124Context {
CodeBook codebooks[3];
} Escape124Context;
static int can_safely_read(GetBitContext* gb, uint64_t bits) {
return get_bits_left(gb) >= bits;
static int can_safely_read(GetBitContext* gb, int bits) {
return get_bits_count(gb) + bits <= gb->size_in_bits;
}
/**
@@ -90,7 +90,7 @@ static CodeBook unpack_codebook(GetBitContext* gb, unsigned depth,
unsigned i, j;
CodeBook cb = { 0 };
if (!can_safely_read(gb, size * 34L))
if (!can_safely_read(gb, size * 34))
return cb;
if (size >= INT_MAX / sizeof(MacroBlock))


@@ -110,11 +110,11 @@ av_cold void ff_ccitt_unpack_init(void)
ccitt_vlc[1].table = code_table2;
ccitt_vlc[1].table_allocated = 648;
for(i = 0; i < 2; i++){
ff_init_vlc_sparse(&ccitt_vlc[i], 9, CCITT_SYMS,
ccitt_codes_lens[i], 1, 1,
ccitt_codes_bits[i], 1, 1,
ccitt_syms, 2, 2,
INIT_VLC_USE_NEW_STATIC);
init_vlc_sparse(&ccitt_vlc[i], 9, CCITT_SYMS,
ccitt_codes_lens[i], 1, 1,
ccitt_codes_bits[i], 1, 1,
ccitt_syms, 2, 2,
INIT_VLC_USE_NEW_STATIC);
}
INIT_VLC_STATIC(&ccitt_group3_2d_vlc, 9, 11,
ccitt_group3_2d_lens, 1, 1,
@@ -228,7 +228,7 @@ static int decode_group3_2d_line(AVCodecContext *avctx, GetBitContext *gb,
mode = !mode;
}
//sync line pointers
while(offs < width && run_off <= offs){
while(run_off <= offs){
run_off += *ref++;
run_off += *ref++;
}


@@ -255,7 +255,7 @@ static void find_best_state(uint8_t best_state[256][256], const uint8_t one_stat
occ[j]=1.0;
for(k=0; k<256; k++){
double newocc[256]={0};
for(m=1; m<256; m++){
for(m=0; m<256; m++){
if(occ[m]){
len -=occ[m]*( p *l2tab[ m]
+ (1-p)*l2tab[256-m]);
@@ -451,7 +451,7 @@ static av_always_inline int encode_line(FFV1Context *s, int w,
int run_mode=0;
if(s->ac){
if(c->bytestream_end - c->bytestream < w*35){
if(c->bytestream_end - c->bytestream < w*20){
av_log(s->avctx, AV_LOG_ERROR, "encoded frame too large\n");
return -1;
}
@@ -993,7 +993,7 @@ static av_cold int encode_init(AVCodecContext *avctx)
}
}
gob_count= strtol(p, &next, 0);
if(next==p || gob_count <=0){
if(next==p || gob_count <0){
av_log(avctx, AV_LOG_ERROR, "2Pass file invalid\n");
return -1;
}


@@ -27,7 +27,7 @@ const int ff_flac_sample_rate_table[16] =
8000, 16000, 22050, 24000, 32000, 44100, 48000, 96000,
0, 0, 0, 0 };
const int32_t ff_flac_blocksize_table[16] = {
const int16_t ff_flac_blocksize_table[16] = {
0, 192, 576<<0, 576<<1, 576<<2, 576<<3, 0, 0,
256<<0, 256<<1, 256<<2, 256<<3, 256<<4, 256<<5, 256<<6, 256<<7
};

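The element type of ff_flac_blocksize_table above (int16_t versus int32_t) matters because the largest entry, 256 << 7, does not fit in 16 bits. A quick check of the arithmetic:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    printf("256 << 7            = %d\n", 256 << 7);    /* 32768 */
    printf("INT16_MAX           = %d\n", INT16_MAX);   /* 32767 */
    /* converting 32768 to int16_t is implementation-defined;
     * on the usual two's-complement targets it wraps to -32768 */
    printf("(int16_t)(256 << 7) = %d\n", (int16_t)(256 << 7));
    return 0;
}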

@@ -26,6 +26,6 @@
extern const int ff_flac_sample_rate_table[16];
extern const int32_t ff_flac_blocksize_table[16];
extern const int16_t ff_flac_blocksize_table[16];
#endif /* AVCODEC_FLACDATA_H */


@@ -422,16 +422,7 @@ static inline int decode_subframe(FLACContext *s, int channel)
type = get_bits(&s->gb, 6);
if (get_bits1(&s->gb)) {
int left = get_bits_left(&s->gb);
wasted = 1;
if ( left < 0 ||
(left < s->curr_bps && !show_bits_long(&s->gb, left)) ||
!show_bits_long(&s->gb, s->curr_bps)) {
av_log(s->avctx, AV_LOG_ERROR,
"Invalid number of wasted bits > available bits (%d) - left=%d\n",
s->curr_bps, left);
return AVERROR_INVALIDDATA;
}
while (!get_bits1(&s->gb))
wasted++;
s->curr_bps -= wasted;


@@ -937,16 +937,14 @@ static int encode_residual_ch(FlacEncodeContext *s, int ch)
omethod == ORDER_METHOD_8LEVEL) {
int levels = 1 << omethod;
uint32_t bits[1 << ORDER_METHOD_8LEVEL];
int order = -1;
int order;
int opt_index = levels-1;
opt_order = max_order-1;
bits[opt_index] = UINT32_MAX;
for (i = levels-1; i >= 0; i--) {
int last_order = order;
order = min_order + (((max_order-min_order+1) * (i+1)) / levels)-1;
order = av_clip(order, min_order - 1, max_order - 1);
if (order == last_order)
continue;
if (order < 0)
order = 0;
encode_residual_lpc(res, smp, n, order+1, coefs[order], shift[order]);
bits[i] = find_subframe_rice_params(s, sub, order+1);
if (bits[i] < bits[opt_index]) {


@@ -122,11 +122,10 @@ static av_cold int flashsv_decode_init(AVCodecContext *avctx)
}
static int flashsv2_prime(FlashSVContext *s, uint8_t *src,
int size, int unp_size)
static void flashsv2_prime(FlashSVContext *s, uint8_t *src,
int size, int unp_size)
{
z_stream zs;
int zret; // Zlib return code
zs.zalloc = NULL;
zs.zfree = NULL;
@@ -138,8 +137,7 @@ static int flashsv2_prime(FlashSVContext *s, uint8_t *src,
s->zstream.avail_out = s->block_size * 3;
inflate(&s->zstream, Z_SYNC_FLUSH);
if (deflateInit(&zs, 0) != Z_OK)
return -1;
deflateInit(&zs, 0);
zs.next_in = s->tmpblock;
zs.avail_in = s->block_size * 3 - s->zstream.avail_out;
zs.next_out = s->deflate_block;
@@ -147,18 +145,13 @@ static int flashsv2_prime(FlashSVContext *s, uint8_t *src,
deflate(&zs, Z_SYNC_FLUSH);
deflateEnd(&zs);
if ((zret = inflateReset(&s->zstream)) != Z_OK) {
av_log(s->avctx, AV_LOG_ERROR, "Inflate reset error: %d\n", zret);
return AVERROR_UNKNOWN;
}
inflateReset(&s->zstream);
s->zstream.next_in = s->deflate_block;
s->zstream.avail_in = s->deflate_block_size - zs.avail_out;
s->zstream.next_out = s->tmpblock;
s->zstream.avail_out = s->block_size * 3;
inflate(&s->zstream, Z_SYNC_FLUSH);
return 0;
}
static int flashsv_decode_block(AVCodecContext *avctx, AVPacket *avpkt,
@@ -171,14 +164,11 @@ static int flashsv_decode_block(AVCodecContext *avctx, AVPacket *avpkt,
int k;
int ret = inflateReset(&s->zstream);
if (ret != Z_OK) {
av_log(avctx, AV_LOG_ERROR, "Inflate reset error: %d\n", ret);
return AVERROR_UNKNOWN;
//return -1;
}
if (s->zlibprime_curr || s->zlibprime_prev) {
ret = flashsv2_prime(s, s->blocks[blk_idx].pos, s->blocks[blk_idx].size,
flashsv2_prime(s, s->blocks[blk_idx].pos, s->blocks[blk_idx].size,
s->blocks[blk_idx].unp_size);
if (ret < 0)
return ret;
}
s->zstream.next_in = avpkt->data + get_bits_count(gb) / 8;
s->zstream.avail_in = block_size;
@@ -381,17 +371,8 @@ static int flashsv_decode_frame(AVCodecContext *avctx, void *data,
}
if (has_diff) {
if (!s->keyframe) {
av_log(avctx, AV_LOG_ERROR,
"inter frame without keyframe\n");
return AVERROR_INVALIDDATA;
}
s->diff_start = get_bits(&gb, 8);
s->diff_height = get_bits(&gb, 8);
if (s->diff_start + s->diff_height > cur_blk_height) {
av_log(avctx, AV_LOG_ERROR, "Block parameters invalid\n");
return AVERROR_INVALIDDATA;
}
av_log(avctx, AV_LOG_DEBUG,
"%dx%d diff start %d height %d\n",
i, j, s->diff_start, s->diff_height);
@@ -409,11 +390,6 @@ static int flashsv_decode_frame(AVCodecContext *avctx, void *data,
av_log_missing_feature(avctx, "zlibprime_curr", 1);
return AVERROR_PATCHWELCOME;
}
if (!s->blocks && (s->zlibprime_curr || s->zlibprime_prev)) {
av_log(avctx, AV_LOG_ERROR, "no data available for zlib "
"priming\n");
return AVERROR_INVALIDDATA;
}
size--; // account for flags byte
}


@@ -113,13 +113,13 @@ static int fraps2_decode_plane(FrapsContext *s, uint8_t *dst, int stride, int w,
if(j) dst[i] += dst[i - stride];
else if(Uoff) dst[i] += 0x80;
if (get_bits_left(&gb) < 0) {
ff_free_vlc(&vlc);
free_vlc(&vlc);
return AVERROR_INVALIDDATA;
}
}
dst += stride;
}
ff_free_vlc(&vlc);
free_vlc(&vlc);
return 0;
}
@@ -140,7 +140,7 @@ static int decode_frame(AVCodecContext *avctx,
uint32_t offs[4];
int i, j, is_chroma;
const int planes = 3;
enum PixelFormat pix_fmt;
header = AV_RL32(buf);
version = header & 0xff;
@@ -155,6 +155,8 @@ static int decode_frame(AVCodecContext *avctx,
buf += header_size;
avctx->pix_fmt = version & 1 ? PIX_FMT_BGR24 : PIX_FMT_YUVJ420P;
if (version < 2) {
unsigned needed_size = avctx->width*avctx->height*3;
if (version == 0) needed_size /= 2;
@@ -174,12 +176,6 @@ static int decode_frame(AVCodecContext *avctx,
FF_BUFFER_HINTS_PRESERVE |
FF_BUFFER_HINTS_REUSABLE;
pix_fmt = version & 1 ? PIX_FMT_BGR24 : PIX_FMT_YUVJ420P;
if (avctx->pix_fmt != pix_fmt && f->data[0]) {
avctx->release_buffer(avctx, f);
}
avctx->pix_fmt = pix_fmt;
switch(version) {
case 0:
default:


@@ -126,8 +126,8 @@ static int g722_decode_frame(AVCodecContext *avctx, void *data,
c->prev_samples[c->prev_samples_pos++] = rlow - rhigh;
ff_g722_apply_qmf(c->prev_samples + c->prev_samples_pos - 24,
&xout1, &xout2);
*out_buf++ = av_clip_int16(xout1 >> 11);
*out_buf++ = av_clip_int16(xout2 >> 11);
*out_buf++ = av_clip_int16(xout1 >> 12);
*out_buf++ = av_clip_int16(xout2 >> 12);
if (c->prev_samples_pos >= PREV_SAMPLES_BUF_SIZE) {
memmove(c->prev_samples, c->prev_samples + c->prev_samples_pos - 22,
22 * sizeof(c->prev_samples[0]));


@@ -128,8 +128,8 @@ static inline void filter_samples(G722Context *c, const int16_t *samples,
c->prev_samples[c->prev_samples_pos++] = samples[0];
c->prev_samples[c->prev_samples_pos++] = samples[1];
ff_g722_apply_qmf(c->prev_samples + c->prev_samples_pos - 24, &xout1, &xout2);
*xlow = xout1 + xout2 >> 14;
*xhigh = xout1 - xout2 >> 14;
*xlow = xout1 + xout2 >> 13;
*xhigh = xout1 - xout2 >> 13;
if (c->prev_samples_pos >= PREV_SAMPLES_BUF_SIZE) {
memmove(c->prev_samples,
c->prev_samples + c->prev_samples_pos - 22,
@@ -174,7 +174,7 @@ static void g722_encode_trellis(G722Context *c, int trellis,
for (i = 0; i < 2; i++) {
nodes[i] = c->nodep_buf[i];
nodes_next[i] = c->nodep_buf[i] + frontier;
memset(c->nodep_buf[i], 0, 2 * frontier * sizeof(*c->nodep_buf[i]));
memset(c->nodep_buf[i], 0, 2 * frontier * sizeof(*c->nodep_buf));
nodes[i][0] = c->node_buf[i] + frontier;
nodes[i][0]->ssd = 0;
nodes[i][0]->path = 0;


@@ -118,23 +118,10 @@ for examples see get_bits, show_bits, skip_bits, get_vlc
# define MIN_CACHE_BITS 25
#endif
#if UNCHECKED_BITSTREAM_READER
#define OPEN_READER(name, gb) \
unsigned int name##_index = (gb)->index; \
av_unused unsigned int name##_cache
#define HAVE_BITS_REMAINING(name, gb) 1
#else
#define OPEN_READER(name, gb) \
unsigned int name##_index = (gb)->index; \
unsigned int av_unused name##_cache = 0; \
unsigned int av_unused name##_size_plus8 = \
(gb)->size_in_bits_plus8
#define HAVE_BITS_REMAINING(name, gb) \
name##_index < name##_size_plus8
#endif
#define CLOSE_READER(name, gb) (gb)->index = name##_index
#ifdef BITSTREAM_READER_LE
@@ -167,7 +154,7 @@ for examples see get_bits, show_bits, skip_bits, get_vlc
# define SKIP_COUNTER(name, gb, num) name##_index += (num)
#else
# define SKIP_COUNTER(name, gb, num) \
name##_index = FFMIN(name##_size_plus8, name##_index + (num))
name##_index = FFMIN((gb)->size_in_bits_plus8, name##_index + (num))
#endif
#define SKIP_BITS(name, gb, num) do { \
@@ -342,33 +329,25 @@ static inline int check_marker(GetBitContext *s, const char *msg)
}
/**
* Initialize GetBitContext.
* @param buffer bitstream buffer, must be FF_INPUT_BUFFER_PADDING_SIZE bytes
* larger than the actual read bits because some optimized bitstream
* readers read 32 or 64 bit at once and could read over the end
* Inititalize GetBitContext.
* @param buffer bitstream buffer, must be FF_INPUT_BUFFER_PADDING_SIZE bytes larger than the actual read bits
* because some optimized bitstream readers read 32 or 64 bit at once and could read over the end
* @param bit_size the size of the buffer in bits
* @return 0 on success, AVERROR_INVALIDDATA if the buffer_size would overflow.
*/
static inline int init_get_bits(GetBitContext *s, const uint8_t *buffer,
int bit_size)
static inline void init_get_bits(GetBitContext *s, const uint8_t *buffer,
int bit_size)
{
int buffer_size;
int ret = 0;
if (bit_size > INT_MAX - 7 || bit_size < 0) {
int buffer_size = (bit_size+7)>>3;
if (buffer_size < 0 || bit_size < 0) {
buffer_size = bit_size = 0;
buffer = NULL;
ret = AVERROR_INVALIDDATA;
}
buffer_size = (bit_size + 7) >> 3;
s->buffer = buffer;
s->size_in_bits = bit_size;
s->size_in_bits_plus8 = bit_size + 8;
s->buffer_end = buffer + buffer_size;
s->index = 0;
return ret;
}
static inline void align_get_bits(GetBitContext *s)
@@ -381,19 +360,19 @@ static inline void align_get_bits(GetBitContext *s)
bits, bits_wrap, bits_size, \
codes, codes_wrap, codes_size, \
flags) \
ff_init_vlc_sparse(vlc, nb_bits, nb_codes, \
bits, bits_wrap, bits_size, \
codes, codes_wrap, codes_size, \
NULL, 0, 0, flags)
init_vlc_sparse(vlc, nb_bits, nb_codes, \
bits, bits_wrap, bits_size, \
codes, codes_wrap, codes_size, \
NULL, 0, 0, flags)
int ff_init_vlc_sparse(VLC *vlc, int nb_bits, int nb_codes,
int init_vlc_sparse(VLC *vlc, int nb_bits, int nb_codes,
const void *bits, int bits_wrap, int bits_size,
const void *codes, int codes_wrap, int codes_size,
const void *symbols, int symbols_wrap, int symbols_size,
int flags);
#define INIT_VLC_LE 2
#define INIT_VLC_USE_NEW_STATIC 4
void ff_free_vlc(VLC *vlc);
void free_vlc(VLC *vlc);
#define INIT_VLC_STATIC(vlc, bits, a,b,c,d,e,f,g, static_size) do { \
static VLC_TYPE table[static_size][2]; \

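One side of the init_get_bits() hunk above rejects bit_size > INT_MAX - 7, since evaluating (bit_size + 7) >> 3 in int overflows for sizes near INT_MAX. A short, self-contained check of the arithmetic (the intended value is computed in 64 bits, because the int addition itself would be undefined behaviour):

#include <inttypes.h>
#include <limits.h>
#include <stdio.h>

int main(void)
{
    int bit_size = INT_MAX - 3;                     /* pathological size */
    int64_t bytes = ((int64_t)bit_size + 7) >> 3;   /* intended byte count */
    printf("bit_size               = %d\n", bit_size);
    printf("intended buffer bytes  = %" PRId64 "\n", bytes);
    printf("bit_size > INT_MAX - 7 -> %s\n",
           bit_size > INT_MAX - 7 ? "rejected" : "accepted");
    return 0;
}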

@@ -135,7 +135,7 @@ static inline int svq3_get_ue_golomb(GetBitContext *gb){
ret = (ret << 4) | ff_interleaved_dirac_golomb_vlc_code[buf];
UPDATE_CACHE(re, gb);
buf = GET_CACHE(re, gb);
} while (ret<0x8000000U && HAVE_BITS_REMAINING(re, gb));
} while(ret<0x8000000U);
CLOSE_READER(re, gb);
return ret - 1;
@@ -301,7 +301,7 @@ static inline int get_ur_golomb_jpegls(GetBitContext *gb, int k, int limit, int
return buf;
}else{
int i;
for (i = 0; i < limit && SHOW_UBITS(re, gb, 1) == 0; i++) {
for(i=0; SHOW_UBITS(re, gb, 1) == 0; i++){
if (gb->size_in_bits <= re_index)
return -1;
LAST_SKIP_BITS(re, gb, 1);


@@ -66,7 +66,7 @@ static av_cold void h261_decode_init_vlc(H261Context *h){
INIT_VLC_STATIC(&h261_cbp_vlc, H261_CBP_VLC_BITS, 63,
&h261_cbp_tab[0][1], 2, 1,
&h261_cbp_tab[0][0], 2, 1, 512);
ff_init_rl(&h261_rl_tcoeff, ff_h261_rl_table_store);
init_rl(&h261_rl_tcoeff, ff_h261_rl_table_store);
INIT_VLC_RL(h261_rl_tcoeff, 552);
}
}
@@ -265,7 +265,7 @@ static int h261_decode_mb(H261Context *h){
while( h->mba_diff == MBA_STUFFING ); // stuffing
if ( h->mba_diff < 0 ){
if (get_bits_left(&s->gb) <= 7)
if ( get_bits_count(&s->gb) + 7 >= s->gb.size_in_bits )
return SLICE_END;
av_log(s->avctx, AV_LOG_ERROR, "illegal mba at %d %d\n", s->mb_x, s->mb_y);
@@ -286,11 +286,6 @@ static int h261_decode_mb(H261Context *h){
// Read mtype
h->mtype = get_vlc2(&s->gb, h261_mtype_vlc.table, H261_MTYPE_VLC_BITS, 2);
if (h->mtype < 0) {
av_log(s->avctx, AV_LOG_ERROR, "Invalid mtype index %d\n",
h->mtype);
return SLICE_ERROR;
}
h->mtype = h261_mtype_map[h->mtype];
// Read mquant


@@ -240,7 +240,7 @@ void ff_h261_encode_init(MpegEncContext *s){
if (!done) {
done = 1;
ff_init_rl(&h261_rl_tcoeff, ff_h261_rl_table_store);
init_rl(&h261_rl_tcoeff, ff_h261_rl_table_store);
}
s->min_qcoeff= -127;


@@ -98,7 +98,7 @@ void ff_h263_update_motion_val(MpegEncContext * s){
}
}
int ff_h263_pred_dc(MpegEncContext * s, int n, int16_t **dc_val_ptr)
int h263_pred_dc(MpegEncContext * s, int n, int16_t **dc_val_ptr)
{
int x, y, wrap, a, c, pred_dc;
int16_t *dc_val;
@@ -226,7 +226,7 @@ void ff_h263_loop_filter(MpegEncContext * s){
}
}
void ff_h263_pred_acdc(MpegEncContext * s, DCTELEM *block, int n)
void h263_pred_acdc(MpegEncContext * s, DCTELEM *block, int n)
{
int x, y, wrap, a, c, pred_dc, scale, i;
int16_t *dc_val, *ac_val, *ac_val1;
@@ -313,8 +313,8 @@ void ff_h263_pred_acdc(MpegEncContext * s, DCTELEM *block, int n)
ac_val1[8 + i] = block[s->dsp.idct_permutation[i ]];
}
int16_t *ff_h263_pred_motion(MpegEncContext * s, int block, int dir,
int *px, int *py)
int16_t *h263_pred_motion(MpegEncContext * s, int block, int dir,
int *px, int *py)
{
int wrap;
int16_t *A, *B, *C, (*mot_val)[2];


@@ -38,16 +38,16 @@
extern const AVRational ff_h263_pixel_aspect[16];
extern const uint8_t ff_h263_cbpy_tab[16][2];
extern const uint8_t ff_cbpc_b_tab[4][2];
extern const uint8_t cbpc_b_tab[4][2];
extern const uint8_t ff_mvtab[33][2];
extern const uint8_t mvtab[33][2];
extern const uint8_t ff_h263_intra_MCBPC_code[9];
extern const uint8_t ff_h263_intra_MCBPC_bits[9];
extern const uint8_t ff_h263_inter_MCBPC_code[28];
extern const uint8_t ff_h263_inter_MCBPC_bits[28];
extern const uint8_t ff_h263_mbtype_b_tab[15][2];
extern const uint8_t h263_mbtype_b_tab[15][2];
extern VLC ff_h263_intra_MCBPC_vlc;
extern VLC ff_h263_inter_MCBPC_vlc;
@@ -55,41 +55,41 @@ extern VLC ff_h263_cbpy_vlc;
extern RLTable ff_h263_rl_inter;
extern RLTable ff_rl_intra_aic;
extern RLTable rl_intra_aic;
extern const uint16_t ff_h263_format[8][2];
extern const uint8_t ff_modified_quant_tab[2][32];
extern const uint16_t h263_format[8][2];
extern const uint8_t modified_quant_tab[2][32];
extern const uint16_t ff_mba_max[6];
extern const uint8_t ff_mba_length[7];
extern uint8_t ff_h263_static_rl_table_store[2][2][2*MAX_RUN + MAX_LEVEL + 3];
int ff_h263_decode_motion(MpegEncContext * s, int pred, int f_code);
int h263_decode_motion(MpegEncContext * s, int pred, int f_code);
av_const int ff_h263_aspect_to_info(AVRational aspect);
int ff_h263_decode_init(AVCodecContext *avctx);
int ff_h263_decode_frame(AVCodecContext *avctx,
void *data, int *data_size,
AVPacket *avpkt);
int ff_h263_decode_end(AVCodecContext *avctx);
void ff_h263_encode_mb(MpegEncContext *s,
DCTELEM block[6][64],
int motion_x, int motion_y);
void ff_h263_encode_picture_header(MpegEncContext *s, int picture_number);
void ff_h263_encode_gob_header(MpegEncContext * s, int mb_line);
int16_t *ff_h263_pred_motion(MpegEncContext * s, int block, int dir,
int *px, int *py);
void ff_h263_encode_init(MpegEncContext *s);
void ff_h263_decode_init_vlc(MpegEncContext *s);
int ff_h263_decode_picture_header(MpegEncContext *s);
void h263_encode_mb(MpegEncContext *s,
DCTELEM block[6][64],
int motion_x, int motion_y);
void h263_encode_picture_header(MpegEncContext *s, int picture_number);
void h263_encode_gob_header(MpegEncContext * s, int mb_line);
int16_t *h263_pred_motion(MpegEncContext * s, int block, int dir,
int *px, int *py);
void h263_encode_init(MpegEncContext *s);
void h263_decode_init_vlc(MpegEncContext *s);
int h263_decode_picture_header(MpegEncContext *s);
int ff_h263_decode_gob_header(MpegEncContext *s);
void ff_h263_update_motion_val(MpegEncContext * s);
void ff_h263_loop_filter(MpegEncContext * s);
int ff_h263_decode_mba(MpegEncContext *s);
void ff_h263_encode_mba(MpegEncContext *s);
void ff_init_qscale_tab(MpegEncContext *s);
int ff_h263_pred_dc(MpegEncContext * s, int n, int16_t **dc_val_ptr);
void ff_h263_pred_acdc(MpegEncContext * s, DCTELEM *block, int n);
int h263_pred_dc(MpegEncContext * s, int n, int16_t **dc_val_ptr);
void h263_pred_acdc(MpegEncContext * s, DCTELEM *block, int n);
/**
@@ -119,7 +119,7 @@ static inline int h263_get_motion_length(MpegEncContext * s, int val, int f_code
int l, bit_size, code;
if (val == 0) {
return ff_mvtab[0][1];
return mvtab[0][1];
} else {
bit_size = f_code - 1;
/* modulo encoding */
@@ -128,7 +128,7 @@ static inline int h263_get_motion_length(MpegEncContext * s, int val, int f_code
val--;
code = (val >> bit_size) + 1;
return ff_mvtab[code][1] + 1 + bit_size;
return mvtab[code][1] + 1 + bit_size;
}
}


@@ -57,7 +57,7 @@ const uint8_t ff_h263_inter_MCBPC_bits[28] = {
11, 13, 13, 13,/* inter4Q*/
};
const uint8_t ff_h263_mbtype_b_tab[15][2] = {
const uint8_t h263_mbtype_b_tab[15][2] = {
{1, 1},
{3, 3},
{1, 5},
@@ -75,7 +75,7 @@ const uint8_t ff_h263_mbtype_b_tab[15][2] = {
{1, 8},
};
const uint8_t ff_cbpc_b_tab[4][2] = {
const uint8_t cbpc_b_tab[4][2] = {
{0, 1},
{2, 2},
{7, 3},
@@ -88,7 +88,7 @@ const uint8_t ff_h263_cbpy_tab[16][2] =
{2,5}, {3,6}, {5,4}, {10,4}, {4,4}, {8,4}, {6,4}, {3,2}
};
const uint8_t ff_mvtab[33][2] =
const uint8_t mvtab[33][2] =
{
{1,1}, {1,2}, {1,3}, {1,4}, {3,6}, {5,7}, {4,7}, {3,7},
{11,9}, {10,9}, {9,9}, {17,10}, {16,10}, {15,10}, {14,10}, {13,10},
@@ -98,7 +98,7 @@ const uint8_t ff_mvtab[33][2] =
};
/* third non intra table */
const uint16_t ff_inter_vlc[103][2] = {
const uint16_t inter_vlc[103][2] = {
{ 0x2, 2 },{ 0xf, 4 },{ 0x15, 6 },{ 0x17, 7 },
{ 0x1f, 8 },{ 0x25, 9 },{ 0x24, 9 },{ 0x21, 10 },
{ 0x20, 10 },{ 0x7, 11 },{ 0x6, 11 },{ 0x20, 11 },
@@ -127,7 +127,7 @@ const uint16_t ff_inter_vlc[103][2] = {
{ 0x5e, 12 },{ 0x5f, 12 },{ 0x3, 7 },
};
const int8_t ff_inter_level[102] = {
const int8_t inter_level[102] = {
1, 2, 3, 4, 5, 6, 7, 8,
9, 10, 11, 12, 1, 2, 3, 4,
5, 6, 1, 2, 3, 4, 1, 2,
@@ -143,7 +143,7 @@ const int8_t ff_inter_level[102] = {
1, 1, 1, 1, 1, 1,
};
const int8_t ff_inter_run[102] = {
const int8_t inter_run[102] = {
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 1, 1, 1,
1, 1, 2, 2, 2, 2, 3, 3,
@@ -162,9 +162,9 @@ const int8_t ff_inter_run[102] = {
RLTable ff_h263_rl_inter = {
102,
58,
ff_inter_vlc,
ff_inter_run,
ff_inter_level,
inter_vlc,
inter_run,
inter_level,
};
static const uint16_t intra_vlc_aic[103][2] = {
@@ -228,7 +228,7 @@ static const int8_t intra_level_aic[102] = {
1, 1, 1, 1, 1, 1,
};
RLTable ff_rl_intra_aic = {
RLTable rl_intra_aic = {
102,
58,
intra_vlc_aic,
@@ -236,7 +236,7 @@ RLTable ff_rl_intra_aic = {
intra_level_aic,
};
const uint16_t ff_h263_format[8][2] = {
const uint16_t h263_format[8][2] = {
{ 0, 0 },
{ 128, 96 },
{ 176, 144 },
@@ -250,7 +250,7 @@ const uint8_t ff_aic_dc_scale_table[32]={
0, 2, 4, 6, 8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62
};
const uint8_t ff_modified_quant_tab[2][32]={
const uint8_t modified_quant_tab[2][32]={
// 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
{
0, 3, 1, 2, 3, 4, 5, 6, 7, 8, 9, 9,10,11,12,13,14,15,16,17,18,18,19,20,21,22,23,24,25,26,27,28


@@ -115,7 +115,7 @@ av_cold int ff_h263_decode_init(AVCodecContext *avctx)
if (MPV_common_init(s) < 0)
return -1;
ff_h263_decode_init_vlc(s);
h263_decode_init_vlc(s);
return 0;
}
@@ -435,7 +435,7 @@ retry:
} else if (CONFIG_FLV_DECODER && s->h263_flv) {
ret = ff_flv_decode_picture_header(s);
} else {
ret = ff_h263_decode_picture_header(s);
ret = h263_decode_picture_header(s);
}
if(ret==FRAME_SKIPPED) return get_consumed_bytes(s, buf_size);
@@ -444,13 +444,6 @@ retry:
if (ret < 0){
av_log(s->avctx, AV_LOG_ERROR, "header damaged\n");
return -1;
} else if ((s->width != avctx->coded_width ||
s->height != avctx->coded_height ||
(s->width + 15) >> 4 != s->mb_width ||
(s->height + 15) >> 4 != s->mb_height) &&
(HAVE_THREADS && (s->avctx->active_thread_type & FF_THREAD_FRAME))) {
av_log_missing_feature(s->avctx, "Width/height/bit depth/chroma idc changing with threads is", 0);
return AVERROR_PATCHWELCOME; // width / height changed during parallelized decoding
}
avctx->has_b_frames= !s->low_delay;
@@ -578,6 +571,7 @@ retry:
if (s->codec_id == CODEC_ID_MPEG4 && s->xvid_build>=0 && avctx->idct_algo == FF_IDCT_AUTO && (av_get_cpu_flags() & AV_CPU_FLAG_MMX)) {
avctx->idct_algo= FF_IDCT_XVIDMMX;
ff_dct_common_init(s);
s->picture_number=0;
}
#endif
@@ -670,7 +664,7 @@ retry:
ret = decode_slice(s);
while(s->mb_y<s->mb_height){
if(s->msmpeg4_version){
if(s->slice_height==0 || s->mb_x!=0 || (s->mb_y%s->slice_height)!=0 || get_bits_left(&s->gb)<0)
if(s->slice_height==0 || s->mb_x!=0 || (s->mb_y%s->slice_height)!=0 || get_bits_count(&s->gb) > s->gb.size_in_bits)
break;
}else{
int prev_x=s->mb_x, prev_y=s->mb_y;


@@ -104,7 +104,7 @@ int ff_h264_check_intra4x4_pred_mode(H264Context *h){
return 0;
} //FIXME cleanup like check_intra_pred_mode
int ff_h264_check_intra_pred_mode(H264Context *h, int mode, int is_chroma){
static int check_intra_pred_mode(H264Context *h, int mode, int is_chroma){
MpegEncContext * const s = &h->s;
static const int8_t top [7]= {LEFT_DC_PRED8x8, 1,-1,-1};
static const int8_t left[7]= { TOP_DC_PRED8x8,-1, 2,-1,DC_128_PRED8x8};
@@ -136,6 +136,22 @@ int ff_h264_check_intra_pred_mode(H264Context *h, int mode, int is_chroma){
return mode;
}
/**
* checks if the top & left blocks are available if needed & changes the dc mode so it only uses the available blocks.
*/
int ff_h264_check_intra16x16_pred_mode(H264Context *h, int mode)
{
return check_intra_pred_mode(h, mode, 0);
}
/**
* checks if the top & left blocks are available if needed & changes the dc mode so it only uses the available blocks.
*/
int ff_h264_check_intra_chroma_pred_mode(H264Context *h, int mode)
{
return check_intra_pred_mode(h, mode, 1);
}
const uint8_t *ff_h264_decode_nal(H264Context *h, const uint8_t *src, int *dst_length, int *consumed, int length){
int i, si, di;
@@ -2506,8 +2522,8 @@ static int field_end(H264Context *h, int in_setup){
s->mb_y= 0;
if (!in_setup && !s->dropable)
ff_thread_report_progress(&s->current_picture_ptr->f, INT_MAX,
s->picture_structure == PICT_BOTTOM_FIELD);
ff_thread_report_progress((AVFrame*)s->current_picture_ptr, (16*s->mb_height >> FIELD_PICTURE) - 1,
s->picture_structure==PICT_BOTTOM_FIELD);
if (CONFIG_H264_VDPAU_DECODER && s->avctx->codec->capabilities&CODEC_CAP_HWACCEL_VDPAU)
ff_vdpau_h264_set_reference_frames(s);
@@ -2624,7 +2640,9 @@ static int decode_slice_header(H264Context *h, H264Context *h0){
int num_ref_idx_active_override_flag;
unsigned int slice_type, tmp, i, j;
int default_ref_list_done = 0;
int last_pic_structure, last_pic_dropable;
int last_pic_structure;
s->dropable= h->nal_ref_idc == 0;
/* FIXME: 2tap qpel isn't implemented for high bit depth. */
if((s->avctx->flags2 & CODEC_FLAG2_FAST) && !h->nal_ref_idc && !h->pixel_shift){
@@ -2643,14 +2661,8 @@ static int decode_slice_header(H264Context *h, H264Context *h0){
}
h0->current_slice = 0;
if (!s0->first_field) {
if (s->current_picture_ptr && !s->dropable &&
s->current_picture_ptr->owner2 == s) {
ff_thread_report_progress(&s->current_picture_ptr->f, INT_MAX,
s->picture_structure == PICT_BOTTOM_FIELD);
}
s->current_picture_ptr = NULL;
}
if (!s0->first_field)
s->current_picture_ptr= NULL;
}
slice_type= get_ue_golomb_31(&s->gb);
@@ -2695,6 +2707,11 @@ static int decode_slice_header(H264Context *h, H264Context *h0){
s->avctx->level = h->sps.level_idc;
s->avctx->refs = h->sps.ref_frame_count;
if(h == h0 && h->dequant_coeff_pps != pps_id){
h->dequant_coeff_pps = pps_id;
init_dequant_tables(h);
}
s->mb_width= h->sps.mb_width;
s->mb_height= h->sps.mb_height * (2 - h->sps.frame_mbs_only_flag);
@@ -2710,9 +2727,9 @@ static int decode_slice_header(H264Context *h, H264Context *h0){
|| s->avctx->bits_per_raw_sample != h->sps.bit_depth_luma
|| h->cur_chroma_format_idc != h->sps.chroma_format_idc
|| av_cmp_q(h->sps.sar, s->avctx->sample_aspect_ratio))) {
if(h != h0 || (HAVE_THREADS && h->s.avctx->active_thread_type & FF_THREAD_FRAME)) {
if(h != h0 || (s->avctx->active_thread_type & FF_THREAD_FRAME)) {
av_log_missing_feature(s->avctx, "Width/height/bit depth/chroma idc changing with threads is", 0);
return AVERROR_PATCHWELCOME; // width / height changed during parallelized decoding
return -1; // width / height changed during parallelized decoding
}
free_tables(h, 0);
flush_dpb(s->avctx);
@@ -2789,7 +2806,7 @@ static int decode_slice_header(H264Context *h, H264Context *h0){
else
s->avctx->pix_fmt = PIX_FMT_YUV420P10;
break;
case 8:
default:
if (CHROMA444){
s->avctx->pix_fmt = s->avctx->color_range == AVCOL_RANGE_JPEG ? PIX_FMT_YUVJ444P : PIX_FMT_YUV444P;
if (s->avctx->colorspace == AVCOL_SPC_RGB) {
@@ -2808,11 +2825,6 @@ static int decode_slice_header(H264Context *h, H264Context *h0){
hwaccel_pixfmt_list_h264_jpeg_420 :
ff_hwaccel_pixfmt_list_420);
}
break;
default:
av_log(s->avctx, AV_LOG_ERROR,
"Unsupported bit depth: %d\n", h->sps.bit_depth_luma);
return AVERROR_INVALIDDATA;
}
s->avctx->hwaccel = ff_find_hwaccel(s->avctx->codec->id, s->avctx->pix_fmt);
@@ -2858,18 +2870,11 @@ static int decode_slice_header(H264Context *h, H264Context *h0){
}
}
if(h == h0 && h->dequant_coeff_pps != pps_id){
h->dequant_coeff_pps = pps_id;
init_dequant_tables(h);
}
h->frame_num= get_bits(&s->gb, h->sps.log2_max_frame_num);
h->mb_mbaff = 0;
h->mb_aff_frame = 0;
last_pic_structure = s0->picture_structure;
last_pic_dropable = s0->dropable;
s->dropable = h->nal_ref_idc == 0;
if(h->sps.frame_mbs_only_flag){
s->picture_structure= PICT_FRAME;
}else{
@@ -2886,27 +2891,10 @@ static int decode_slice_header(H264Context *h, H264Context *h0){
}
h->mb_field_decoding_flag= s->picture_structure != PICT_FRAME;
if (h0->current_slice != 0) {
if (last_pic_structure != s->picture_structure ||
last_pic_dropable != s->dropable) {
av_log(h->s.avctx, AV_LOG_ERROR,
"Changing field mode (%d -> %d) between slices is not allowed\n",
last_pic_structure, s->picture_structure);
s->picture_structure = last_pic_structure;
s->dropable = last_pic_dropable;
return AVERROR_INVALIDDATA;
} else if (!s0->current_picture_ptr) {
av_log(s->avctx, AV_LOG_ERROR,
"unset current_picture_ptr on %d. slice\n",
h0->current_slice + 1);
return AVERROR_INVALIDDATA;
}
} else {
/* Shorten frame num gaps so we don't have to allocate reference
* frames just to throw them away */
if (h->frame_num != h->prev_frame_num && h->prev_frame_num >= 0) {
int unwrap_prev_frame_num = h->prev_frame_num;
int max_frame_num = 1 << h->sps.log2_max_frame_num;
if(h0->current_slice == 0){
// Shorten frame num gaps so we don't have to allocate reference frames just to throw them away
if(h->frame_num != h->prev_frame_num && h->prev_frame_num >= 0) {
int unwrap_prev_frame_num = h->prev_frame_num, max_frame_num = 1<<h->sps.log2_max_frame_num;
if (unwrap_prev_frame_num > h->frame_num) unwrap_prev_frame_num -= max_frame_num;
@@ -2919,74 +2907,8 @@ static int decode_slice_header(H264Context *h, H264Context *h0){
}
}
/* See if we have a decoded first field looking for a pair...
* Here, we're using that to see if we should mark previously
* decode frames as "finished".
* We have to do that before the "dummy" in-between frame allocation,
* since that can modify s->current_picture_ptr. */
if (s0->first_field) {
assert(s0->current_picture_ptr);
assert(s0->current_picture_ptr->f.data[0]);
assert(s0->current_picture_ptr->f.reference != DELAYED_PIC_REF);
/* Mark old field/frame as completed */
if (!last_pic_dropable && s0->current_picture_ptr->owner2 == s0) {
ff_thread_report_progress(&s0->current_picture_ptr->f, INT_MAX,
last_pic_structure == PICT_BOTTOM_FIELD);
}
/* figure out if we have a complementary field pair */
if (!FIELD_PICTURE || s->picture_structure == last_pic_structure) {
/* Previous field is unmatched. Don't display it, but let it
* remain for reference if marked as such. */
if (!last_pic_dropable && last_pic_structure != PICT_FRAME) {
ff_thread_report_progress(&s0->current_picture_ptr->f, INT_MAX,
last_pic_structure == PICT_TOP_FIELD);
}
} else {
if (s0->current_picture_ptr->frame_num != h->frame_num) {
/* This and previous field were reference, but had
* different frame_nums. Consider this field first in
* pair. Throw away previous field except for reference
* purposes. */
if (!last_pic_dropable && last_pic_structure != PICT_FRAME) {
ff_thread_report_progress(&s0->current_picture_ptr->f, INT_MAX,
last_pic_structure == PICT_TOP_FIELD);
}
} else {
/* Second field in complementary pair */
if (!((last_pic_structure == PICT_TOP_FIELD &&
s->picture_structure == PICT_BOTTOM_FIELD) ||
(last_pic_structure == PICT_BOTTOM_FIELD &&
s->picture_structure == PICT_TOP_FIELD))) {
av_log(s->avctx, AV_LOG_ERROR,
"Invalid field mode combination %d/%d\n",
last_pic_structure, s->picture_structure);
s->picture_structure = last_pic_structure;
s->dropable = last_pic_dropable;
return AVERROR_INVALIDDATA;
} else if (last_pic_dropable != s->dropable) {
av_log(s->avctx, AV_LOG_ERROR,
"Cannot combine reference and non-reference fields in the same frame\n");
av_log_ask_for_sample(s->avctx, NULL);
s->picture_structure = last_pic_structure;
s->dropable = last_pic_dropable;
return AVERROR_INVALIDDATA;
}
/* Take ownership of this buffer. Note that if another thread owned
* the first field of this buffer, we're not operating on that pointer,
* so the original thread is still responsible for reporting progress
* on that first field (or if that was us, we just did that above).
* By taking ownership, we assign responsibility to ourselves to
* report progress on the second field. */
s0->current_picture_ptr->owner2 = s0;
}
}
}
while (h->frame_num != h->prev_frame_num && h->prev_frame_num >= 0 &&
h->frame_num != (h->prev_frame_num + 1) % (1 << h->sps.log2_max_frame_num)) {
while(h->frame_num != h->prev_frame_num && h->prev_frame_num >= 0 &&
h->frame_num != (h->prev_frame_num+1)%(1<<h->sps.log2_max_frame_num)){
Picture *prev = h->short_ref_count ? h->short_ref[0] : NULL;
av_log(h->s.avctx, AV_LOG_DEBUG, "Frame num gap %d %d\n", h->frame_num, h->prev_frame_num);
if (ff_h264_frame_start(h) < 0)
@@ -3017,9 +2939,7 @@ static int decode_slice_header(H264Context *h, H264Context *h0){
}
}
/* See if we have a decoded first field looking for a pair...
* We're using that to see whether to continue decoding in that
* frame, or to allocate a new one. */
/* See if we have a decoded first field looking for a pair... */
if (s0->first_field) {
assert(s0->current_picture_ptr);
assert(s0->current_picture_ptr->f.data[0]);
@@ -3036,10 +2956,13 @@ static int decode_slice_header(H264Context *h, H264Context *h0){
} else {
if (s0->current_picture_ptr->frame_num != h->frame_num) {
/* This and the previous field had different frame_nums.
* Consider this field first in pair. Throw away previous
* one except for reference purposes. */
s0->first_field = 1;
/*
* This and previous field had
* different frame_nums. Consider this field first in
* pair. Throw away previous field except for reference
* purposes.
*/
s0->first_field = 1;
s0->current_picture_ptr = NULL;
} else {
@@ -3118,8 +3041,7 @@ static int decode_slice_header(H264Context *h, H264Context *h0){
h->ref_count[1]= h->pps.ref_count[1];
if(h->slice_type_nos != AV_PICTURE_TYPE_I){
unsigned max= s->picture_structure == PICT_FRAME ? 15 : 31;
unsigned max= (16<<(s->picture_structure != PICT_FRAME))-1;
if(h->slice_type_nos == AV_PICTURE_TYPE_B){
h->direct_spatial_mv_pred= get_bits1(&s->gb);
}
@@ -3127,21 +3049,15 @@ static int decode_slice_header(H264Context *h, H264Context *h0){
if(num_ref_idx_active_override_flag){
h->ref_count[0]= get_ue_golomb(&s->gb) + 1;
if (h->ref_count[0] < 1)
return AVERROR_INVALIDDATA;
if (h->slice_type_nos == AV_PICTURE_TYPE_B) {
if(h->slice_type_nos==AV_PICTURE_TYPE_B)
h->ref_count[1]= get_ue_golomb(&s->gb) + 1;
if (h->ref_count[1] < 1)
return AVERROR_INVALIDDATA;
}
}
if (h->ref_count[0]-1 > max || h->ref_count[1]-1 > max){
}
if(h->ref_count[0]-1 > max || h->ref_count[1]-1 > max){
av_log(h->s.avctx, AV_LOG_ERROR, "reference overflow\n");
h->ref_count[0] = h->ref_count[1] = 1;
return AVERROR_INVALIDDATA;
h->ref_count[0]= h->ref_count[1]= 1;
return -1;
}
if(h->slice_type_nos == AV_PICTURE_TYPE_B)
h->list_count= 2;
else
@@ -3778,23 +3694,22 @@ static int decode_slice(struct AVCodecContext *avctx, void *arg){
if(s->mb_y >= s->mb_height){
tprintf(s->avctx, "slice end %d %d\n", get_bits_count(&s->gb), s->gb.size_in_bits);
if ( get_bits_left(&s->gb) == 0
|| get_bits_left(&s->gb) > 0 && !(s->avctx->err_recognition & AV_EF_AGGRESSIVE)) {
if( get_bits_count(&s->gb) == s->gb.size_in_bits
|| get_bits_count(&s->gb) < s->gb.size_in_bits && s->avctx->error_recognition < FF_ER_AGGRESSIVE) {
ff_er_add_slice(s, s->resync_mb_x, s->resync_mb_y, s->mb_x-1, s->mb_y, ER_MB_END&part_mask);
return 0;
} else {
ff_er_add_slice(s, s->resync_mb_x, s->resync_mb_y,
s->mb_x, s->mb_y,
ER_MB_END & part_mask);
}else{
ff_er_add_slice(s, s->resync_mb_x, s->resync_mb_y, s->mb_x, s->mb_y, ER_MB_END&part_mask);
return -1;
}
}
}
if (get_bits_left(&s->gb) <= 0 && s->mb_skip_run <= 0){
if(get_bits_count(&s->gb) >= s->gb.size_in_bits && s->mb_skip_run<=0){
tprintf(s->avctx, "slice end %d %d\n", get_bits_count(&s->gb), s->gb.size_in_bits);
if (get_bits_left(&s->gb) == 0) {
if(get_bits_count(&s->gb) == s->gb.size_in_bits ){
ff_er_add_slice(s, s->resync_mb_x, s->resync_mb_y, s->mb_x-1, s->mb_y, ER_MB_END&part_mask);
if (s->mb_x > lf_x_start) loop_filter(h, lf_x_start, s->mb_x);
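For reference on the end-of-slice tests above: the bit-position halves of the two variants are the same check, since get_bits_left() is simply the count of unread bits. A standalone sketch of that identity (the struct below is a stand-in, not lavc's GetBitContext):

#include <assert.h>

struct gbc_sketch { int size_in_bits; int index; };  /* index = bits read so far */

static int sketch_bits_count(const struct gbc_sketch *gb) { return gb->index; }
static int sketch_bits_left(const struct gbc_sketch *gb)
{
    return gb->size_in_bits - sketch_bits_count(gb);
}

int main(void)
{
    struct gbc_sketch gb = { 128, 128 };
    /* "no bits left" is the same test as "read position == buffer size" */
    assert((sketch_bits_left(&gb) == 0) == (sketch_bits_count(&gb) == gb.size_in_bits));
    return 0;
}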
@@ -3883,7 +3798,7 @@ static int decode_nal_units(H264Context *h, const uint8_t *buf, int buf_size){
int consumed;
int dst_length;
int bit_length;
const uint8_t *ptr;
uint8_t *ptr;
int i, nalsize = 0;
int err;
@@ -3905,11 +3820,7 @@ static int decode_nal_units(H264Context *h, const uint8_t *buf, int buf_size){
break;
}
if (buf_index + 3 >= buf_size) {
buf_index = buf_size;
break;
}
if(buf_index+3 >= buf_size) break;
buf_index+=3;
if(buf_index >= next_avc) continue;
@@ -3918,9 +3829,8 @@ static int decode_nal_units(H264Context *h, const uint8_t *buf, int buf_size){
hx = h->thread_context[context_count];
ptr= ff_h264_decode_nal(hx, buf + buf_index, &dst_length, &consumed, next_avc - buf_index);
if (ptr == NULL || dst_length < 0) {
buf_index = -1;
goto end;
if (ptr==NULL || dst_length < 0){
return -1;
}
i= buf_index + consumed;
if((s->workaround_bugs & FF_BUG_AUTODETECT) && i+3<next_avc &&
@@ -3972,8 +3882,7 @@ static int decode_nal_units(H264Context *h, const uint8_t *buf, int buf_size){
case NAL_IDR_SLICE:
if (h->nal_unit_type != NAL_IDR_SLICE) {
av_log(h->s.avctx, AV_LOG_ERROR, "Invalid mix of idr and non-idr slices\n");
buf_index = -1;
goto end;
return -1;
}
idr(h); // FIXME ensure we don't lose some frames if there is reordering
case NAL_SLICE:
@@ -4052,7 +3961,6 @@ static int decode_nal_units(H264Context *h, const uint8_t *buf, int buf_size){
hx->inter_gb_ptr= &hx->inter_gb;
if(hx->redundant_pic_count==0 && hx->intra_gb_ptr && hx->s.data_partitioning
&& s->current_picture_ptr
&& s->context_initialized
&& (avctx->skip_frame < AVDISCARD_NONREF || hx->nal_ref_idc)
&& (avctx->skip_frame < AVDISCARD_BIDIR || hx->slice_type_nos!=AV_PICTURE_TYPE_B)
@@ -4066,32 +3974,19 @@ static int decode_nal_units(H264Context *h, const uint8_t *buf, int buf_size){
break;
case NAL_SPS:
init_get_bits(&s->gb, ptr, bit_length);
if (ff_h264_decode_seq_parameter_set(h) < 0 && (h->is_avc ? (nalsize != consumed) && nalsize : 1)){
if(ff_h264_decode_seq_parameter_set(h) < 0 && (h->is_avc ? (nalsize != consumed) && nalsize : 1)){
av_log(h->s.avctx, AV_LOG_DEBUG, "SPS decoding failure, trying alternative mode\n");
if(h->is_avc) av_assert0(next_avc - buf_index + consumed == nalsize);
init_get_bits(&s->gb, &buf[buf_index + 1 - consumed], 8*(next_avc - buf_index + consumed - 1));
init_get_bits(&s->gb, &buf[buf_index + 1 - consumed], 8*(next_avc - buf_index + consumed));
ff_h264_decode_seq_parameter_set(h);
}
if (s->flags & CODEC_FLAG_LOW_DELAY ||
(h->sps.bitstream_restriction_flag &&
!h->sps.num_reorder_frames)) {
if (s->avctx->has_b_frames > 1 || h->delayed_pic[0])
av_log(avctx, AV_LOG_WARNING, "Delayed frames seen "
"reenabling low delay requires a codec "
"flush.\n");
else
s->low_delay = 1;
}
if (s->flags& CODEC_FLAG_LOW_DELAY ||
(h->sps.bitstream_restriction_flag && !h->sps.num_reorder_frames))
s->low_delay=1;
if(avctx->has_b_frames < 2)
avctx->has_b_frames= !s->low_delay;
if (h->sps.bit_depth_luma != h->sps.bit_depth_chroma) {
av_log_missing_feature(s->avctx,
"Different bit depth between chroma and luma", 1);
return AVERROR_PATCHWELCOME;
}
break;
case NAL_PPS:
init_get_bits(&s->gb, ptr, bit_length);
@@ -4131,15 +4026,6 @@ static int decode_nal_units(H264Context *h, const uint8_t *buf, int buf_size){
}
if(context_count)
execute_decode_slices(h, context_count);
end:
/* clean up */
if (s->current_picture_ptr && s->current_picture_ptr->owner2 == s &&
!s->dropable) {
ff_thread_report_progress(&s->current_picture_ptr->f, INT_MAX,
s->picture_structure == PICT_BOTTOM_FIELD);
}
return buf_index;
}
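One side of the hunk above funnels the error paths of decode_nal_units() through a single end: label so the progress-reporting cleanup runs on every exit. A generic sketch of that single-exit pattern, with made-up helper names rather than the decoder's own:

#include <stdlib.h>

static void *acquire(void)     { return malloc(16); }  /* placeholder resource */
static int   step_one(void *r) { return r ? 0 : -1; }  /* placeholder work */
static int   step_two(void *r) { return r ? 0 : -1; }
static void  cleanup(void *r)  { free(r); }

static int run(void)
{
    int ret = -1;
    void *res = acquire();
    if (!res)
        goto end;            /* every failure jumps to the common exit */
    if (step_one(res) < 0)
        goto end;
    if (step_two(res) < 0)
        goto end;
    ret = 0;                 /* success */
end:
    cleanup(res);            /* cleanup runs on success and on every error path */
    return ret;
}

int main(void) { return run() < 0; }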


@@ -671,7 +671,15 @@ void ff_generate_sliding_window_mmcos(H264Context *h);
*/
int ff_h264_check_intra4x4_pred_mode(H264Context *h);
int ff_h264_check_intra_pred_mode(H264Context *h, int mode, int is_chroma);
/**
* Check if the top & left blocks are available if needed & change the dc mode so it only uses the available blocks.
*/
int ff_h264_check_intra16x16_pred_mode(H264Context *h, int mode);
/**
* Check if the top & left blocks are available if needed & change the dc mode so it only uses the available blocks.
*/
int ff_h264_check_intra_chroma_pred_mode(H264Context *h, int mode);
void ff_h264_hl_decode_mb(H264Context *h);
int ff_h264_frame_start(H264Context *h);
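The two prototypes above split the combined availability check into a 16x16-luma and a chroma variant. Conceptually, such a check downgrades a DC-style mode when the top or left neighbour is missing; a self-contained sketch of that idea (the enum values and function are illustrative, not libavcodec's own):

#include <stdio.h>

enum { DC_PRED_S, LEFT_DC_PRED_S, TOP_DC_PRED_S, DC_128_PRED_S };

/* Pick a DC mode that only averages neighbours that actually exist. */
static int check_dc_mode(int mode, int top_avail, int left_avail)
{
    if (mode != DC_PRED_S)
        return mode;                   /* directional modes handled elsewhere */
    if (!top_avail && !left_avail)
        return DC_128_PRED_S;          /* no neighbours: predict mid-grey */
    if (!top_avail)
        return LEFT_DC_PRED_S;         /* only the left column is available */
    if (!left_avail)
        return TOP_DC_PRED_S;          /* only the top row is available */
    return DC_PRED_S;
}

int main(void)
{
    printf("%d\n", check_dc_mode(DC_PRED_S, 0, 1));  /* prints LEFT_DC_PRED_S (1) */
    return 0;
}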


@@ -1998,8 +1998,6 @@ decode_intra_mb:
}
// The pixels are stored in the same order as levels in h->mb array.
if ((int) (h->cabac.bytestream_end - ptr) < mb_size)
return -1;
memcpy(h->mb, ptr, mb_size); ptr+=mb_size;
ff_init_cabac_decoder(&h->cabac, ptr, h->cabac.bytestream_end - ptr);
@@ -2044,14 +2042,14 @@ decode_intra_mb:
write_back_intra_pred_mode(h);
if( ff_h264_check_intra4x4_pred_mode(h) < 0 ) return -1;
} else {
h->intra16x16_pred_mode= ff_h264_check_intra_pred_mode( h, h->intra16x16_pred_mode, 0 );
h->intra16x16_pred_mode= ff_h264_check_intra16x16_pred_mode( h, h->intra16x16_pred_mode );
if( h->intra16x16_pred_mode < 0 ) return -1;
}
if(decode_chroma){
h->chroma_pred_mode_table[mb_xy] =
pred_mode = decode_cabac_mb_chroma_pre_mode( h );
pred_mode= ff_h264_check_intra_pred_mode( h, pred_mode, 1 );
pred_mode= ff_h264_check_intra_chroma_pred_mode( h, pred_mode );
if( pred_mode < 0 ) return -1;
h->chroma_pred_mode= pred_mode;
} else {
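The size check in this file's first hunk ((int)(h->cabac.bytestream_end - ptr) < mb_size) keeps the following memcpy from reading past the CABAC bytestream. The same guard as a standalone sketch (buffer names are made up):

#include <string.h>

/* Copy n bytes only if they really fit inside [ptr, end). */
static int copy_bounded(unsigned char *dst, const unsigned char *ptr,
                        const unsigned char *end, size_t n)
{
    if ((size_t)(end - ptr) < n)
        return -1;            /* would read past the input buffer */
    memcpy(dst, ptr, n);
    return 0;
}

int main(void)
{
    unsigned char in[8] = {0}, out[8];
    /* asking for 16 bytes from an 8-byte buffer must fail */
    return copy_bounded(out, in, in + sizeof(in), 16) == -1 ? 0 : 1;
}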


@@ -708,7 +708,7 @@ int ff_h264_decode_mb_cavlc(H264Context *h){
down the code */
if(h->slice_type_nos != AV_PICTURE_TYPE_I){
if(s->mb_skip_run==-1)
s->mb_skip_run= get_ue_golomb_long(&s->gb);
s->mb_skip_run= get_ue_golomb(&s->gb);
if (s->mb_skip_run--) {
if(FRAME_MBAFF && (s->mb_y&1) == 0){
@@ -823,12 +823,12 @@ decode_intra_mb:
if( ff_h264_check_intra4x4_pred_mode(h) < 0)
return -1;
}else{
h->intra16x16_pred_mode= ff_h264_check_intra_pred_mode(h, h->intra16x16_pred_mode, 0);
h->intra16x16_pred_mode= ff_h264_check_intra16x16_pred_mode(h, h->intra16x16_pred_mode);
if(h->intra16x16_pred_mode < 0)
return -1;
}
if(decode_chroma){
pred_mode= ff_h264_check_intra_pred_mode(h, get_ue_golomb_31(&s->gb), 1);
pred_mode= ff_h264_check_intra_chroma_pred_mode(h, get_ue_golomb_31(&s->gb));
if(pred_mode < 0)
return -1;
h->chroma_pred_mode= pred_mode;
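mb_skip_run above is an unsigned Exp-Golomb value, and for very large frames the skip run can exceed the range a small fixed-window reader handles, which is what the _long variant is for. For reference, ue(v) decoding looks roughly like this (a standalone sketch over a caller-supplied bit source, not lavc's get_ue_golomb):

/* Decode one ue(v) value: count k leading zero bits, then read k info
 * bits; the result is (2^k - 1) + info. */
static unsigned ue_golomb_sketch(unsigned (*get_bit)(void *), void *ctx)
{
    int k = 0;
    while (k < 31 && get_bit(ctx) == 0)
        k++;
    unsigned info = 0;
    for (int i = 0; i < k; i++)
        info = (info << 1) | get_bit(ctx);
    return ((1u << k) - 1) + info;
}

/* Tiny bit source for demonstration: walks an array of 0/1 values. */
struct bitsrc { const unsigned char *bits; unsigned pos; };
static unsigned next_bit(void *v)
{
    struct bitsrc *s = v;
    return s->bits[s->pos++];
}

int main(void)
{
    /* "00101" codes the value 4: k = 2, info = 01b = 1, (2^2 - 1) + 1 = 4 */
    const unsigned char code[] = { 0, 0, 1, 0, 1 };
    struct bitsrc s = { code, 0 };
    return ue_golomb_sketch(next_bit, &s) == 4 ? 0 : 1;
}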


@@ -253,7 +253,7 @@ static void pred_spatial_direct_motion(H264Context * const h, int *mb_type){
mb_type_col[1] = h->ref_list[1][0].f.mb_type[mb_xy + s->mb_stride];
b8_stride = 2+4*s->mb_stride;
b4_stride *= 6;
if (IS_INTERLACED(mb_type_col[0]) != IS_INTERLACED(mb_type_col[1])) {
if(IS_INTERLACED(mb_type_col[0]) != IS_INTERLACED(mb_type_col[1])){
mb_type_col[0] &= ~MB_TYPE_INTERLACED;
mb_type_col[1] &= ~MB_TYPE_INTERLACED;
}
@@ -443,10 +443,6 @@ static void pred_temp_direct_motion(H264Context * const h, int *mb_type){
mb_type_col[1] = h->ref_list[1][0].f.mb_type[mb_xy + s->mb_stride];
b8_stride = 2+4*s->mb_stride;
b4_stride *= 6;
if (IS_INTERLACED(mb_type_col[0]) != IS_INTERLACED(mb_type_col[1])) {
mb_type_col[0] &= ~MB_TYPE_INTERLACED;
mb_type_col[1] &= ~MB_TYPE_INTERLACED;
}
sub_mb_type = MB_TYPE_16x16|MB_TYPE_P0L0|MB_TYPE_P0L1|MB_TYPE_DIRECT2; /* B_SUB_8x8 */


@@ -37,9 +37,6 @@
//#undef NDEBUG
#include <assert.h>
#define MAX_LOG2_MAX_FRAME_NUM (12 + 4)
#define MIN_LOG2_MAX_FRAME_NUM 4
static const AVRational pixel_aspect[17]={
{0, 1},
{1, 1},
@@ -244,7 +241,7 @@ static inline int decode_vui_parameters(H264Context *h, SPS *sps){
sps->num_reorder_frames= get_ue_golomb(&s->gb);
get_ue_golomb(&s->gb); /*max_dec_frame_buffering*/
if (get_bits_left(&s->gb) < 0) {
if(get_bits_left(&s->gb) < 0){
sps->num_reorder_frames=0;
sps->bitstream_restriction_flag= 0;
}
@@ -254,9 +251,9 @@ static inline int decode_vui_parameters(H264Context *h, SPS *sps){
return -1;
}
}
if (get_bits_left(&s->gb) < 0) {
if(get_bits_left(&s->gb) < 0){
av_log(h->s.avctx, AV_LOG_ERROR, "Overread VUI by %d bits\n", -get_bits_left(&s->gb));
return AVERROR_INVALIDDATA;
return -1;
}
return 0;
@@ -318,7 +315,7 @@ int ff_h264_decode_seq_parameter_set(H264Context *h){
MpegEncContext * const s = &h->s;
int profile_idc, level_idc, constraint_set_flags = 0;
unsigned int sps_id;
int i, log2_max_frame_num_minus4;
int i;
SPS *sps;
profile_idc= get_bits(&s->gb, 8);
@@ -349,18 +346,14 @@ int ff_h264_decode_seq_parameter_set(H264Context *h){
sps->scaling_matrix_present = 0;
sps->colorspace = 2; //AVCOL_SPC_UNSPECIFIED
if (sps->profile_idc == 100 || sps->profile_idc == 110 ||
sps->profile_idc == 122 || sps->profile_idc == 244 ||
sps->profile_idc == 44 || sps->profile_idc == 83 ||
sps->profile_idc == 86 || sps->profile_idc == 118 ||
sps->profile_idc == 128 || sps->profile_idc == 144) {
if(sps->profile_idc >= 100){ //high profile
sps->chroma_format_idc= get_ue_golomb_31(&s->gb);
if (sps->chroma_format_idc > 3U) {
av_log(h->s.avctx, AV_LOG_ERROR, "chroma_format_idc %d is illegal\n", sps->chroma_format_idc);
goto fail;
} else if(sps->chroma_format_idc == 3) {
sps->residual_color_transform_flag = get_bits1(&s->gb);
}
if(sps->chroma_format_idc == 3)
sps->residual_color_transform_flag = get_bits1(&s->gb);
sps->bit_depth_luma = get_ue_golomb(&s->gb) + 8;
sps->bit_depth_chroma = get_ue_golomb(&s->gb) + 8;
if (sps->bit_depth_luma > 12U || sps->bit_depth_chroma > 12U) {
@@ -376,16 +369,7 @@ int ff_h264_decode_seq_parameter_set(H264Context *h){
sps->bit_depth_chroma = 8;
}
log2_max_frame_num_minus4 = get_ue_golomb(&s->gb);
if (log2_max_frame_num_minus4 < MIN_LOG2_MAX_FRAME_NUM - 4 ||
log2_max_frame_num_minus4 > MAX_LOG2_MAX_FRAME_NUM - 4) {
av_log(h->s.avctx, AV_LOG_ERROR,
"log2_max_frame_num_minus4 out of range (0-12): %d\n",
log2_max_frame_num_minus4);
return AVERROR_INVALIDDATA;
}
sps->log2_max_frame_num = log2_max_frame_num_minus4 + 4;
sps->log2_max_frame_num= get_ue_golomb(&s->gb) + 4;
sps->poc_type= get_ue_golomb_31(&s->gb);
if(sps->poc_type == 0){ //FIXME #define
@@ -531,9 +515,6 @@ int ff_h264_decode_picture_parameter_set(H264Context *h, int bit_length){
if(pps_id >= MAX_PPS_COUNT) {
av_log(h->s.avctx, AV_LOG_ERROR, "pps_id (%d) out of range\n", pps_id);
return -1;
} else if (h->sps.bit_depth_luma > 10) {
av_log(h->s.avctx, AV_LOG_ERROR, "Unimplemented luma bit depth=%d (max=10)\n", h->sps.bit_depth_luma);
return AVERROR_PATCHWELCOME;
}
pps= av_mallocz(sizeof(PPS));
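The bounds on log2_max_frame_num_minus4 shown above correspond to the spec limit of 0..12, i.e. frame_num is coded with 4..16 bits and wraps at 1 << log2_max_frame_num. A compact sketch of the same validation (the struct and names are illustrative, not the real SPS):

struct sps_sketch { int log2_max_frame_num; unsigned max_frame_num; };

static int set_log2_max_frame_num(struct sps_sketch *sps, int minus4)
{
    if (minus4 < 0 || minus4 > 12)
        return -1;                               /* outside the spec range 0..12 */
    sps->log2_max_frame_num = minus4 + 4;        /* 4..16 */
    sps->max_frame_num      = 1u << sps->log2_max_frame_num;
    return 0;
}

int main(void)
{
    struct sps_sketch sps;
    /* 13 would give log2_max_frame_num = 17, which the parser rejects */
    return set_log2_max_frame_num(&sps, 13) == -1 ? 0 : 1;
}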


@@ -655,8 +655,6 @@ int ff_h264_execute_ref_pic_marking(H264Context *h, MMCO *mmco, int mmco_count){
if(err >= 0 && h->long_ref_count==0 && h->short_ref_count<=2 && h->pps.ref_count[0]<=1 + (s->picture_structure != PICT_FRAME) && s->current_picture_ptr->f.pict_type == AV_PICTURE_TYPE_I){
s->current_picture_ptr->sync |= 1;
if(!h->s.avctx->has_b_frames)
h->sync = 2;
}
return (h->s.avctx->err_recognition & AV_EF_EXPLODE) ? err : 0;


@@ -164,7 +164,7 @@ static int decode_buffering_period(H264Context *h){
int ff_h264_decode_sei(H264Context *h){
MpegEncContext * const s = &h->s;
while (get_bits_left(&s->gb) > 16) {
while(get_bits_count(&s->gb) + 16 < s->gb.size_in_bits){
int size, type;
type=0;
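Inside that loop, SEI payload type and size are each coded as a run of 0xFF bytes plus a final byte, all summed together. A standalone reader for that coding (illustrative only, not the decoder's bit-reader based version):

#include <stdint.h>

/* Read one ff(8)-coded SEI value: add 255 for every 0xFF byte, then the
 * final byte. Returns the value, or -1 if the buffer runs out. */
static int read_sei_value(const uint8_t **p, const uint8_t *end)
{
    int v = 0;
    while (*p < end && **p == 0xFF) {
        v += 255;
        (*p)++;
    }
    if (*p >= end)
        return -1;
    v += *(*p)++;
    return v;
}

int main(void)
{
    const uint8_t buf[] = { 0xFF, 0xFF, 0x04 };   /* 255 + 255 + 4 = 514 */
    const uint8_t *p = buf;
    return read_sei_value(&p, buf + sizeof(buf)) == 514 ? 0 : 1;
}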


@@ -49,6 +49,7 @@ static const uint8_t scan8[16*3]={
void FUNCC(ff_h264_idct_add)(uint8_t *_dst, DCTELEM *_block, int stride)
{
int i;
INIT_CLIP
pixel *dst = (pixel*)_dst;
dctcoef *block = (dctcoef*)_block;
stride >>= sizeof(pixel)-1;
@@ -73,15 +74,16 @@ void FUNCC(ff_h264_idct_add)(uint8_t *_dst, DCTELEM *_block, int stride)
const int z2= (block[1 + 4*i]>>1) - block[3 + 4*i];
const int z3= block[1 + 4*i] + (block[3 + 4*i]>>1);
dst[i + 0*stride]= av_clip_pixel(dst[i + 0*stride] + ((z0 + z3) >> 6));
dst[i + 1*stride]= av_clip_pixel(dst[i + 1*stride] + ((z1 + z2) >> 6));
dst[i + 2*stride]= av_clip_pixel(dst[i + 2*stride] + ((z1 - z2) >> 6));
dst[i + 3*stride]= av_clip_pixel(dst[i + 3*stride] + ((z0 - z3) >> 6));
dst[i + 0*stride]= CLIP(dst[i + 0*stride] + ((z0 + z3) >> 6));
dst[i + 1*stride]= CLIP(dst[i + 1*stride] + ((z1 + z2) >> 6));
dst[i + 2*stride]= CLIP(dst[i + 2*stride] + ((z1 - z2) >> 6));
dst[i + 3*stride]= CLIP(dst[i + 3*stride] + ((z0 - z3) >> 6));
}
}
void FUNCC(ff_h264_idct8_add)(uint8_t *_dst, DCTELEM *_block, int stride){
int i;
INIT_CLIP
pixel *dst = (pixel*)_dst;
dctcoef *block = (dctcoef*)_block;
stride >>= sizeof(pixel)-1;
@@ -141,14 +143,14 @@ void FUNCC(ff_h264_idct8_add)(uint8_t *_dst, DCTELEM *_block, int stride){
const int b5 = (a3>>2) - a5;
const int b7 = a7 - (a1>>2);
dst[i + 0*stride] = av_clip_pixel( dst[i + 0*stride] + ((b0 + b7) >> 6) );
dst[i + 1*stride] = av_clip_pixel( dst[i + 1*stride] + ((b2 + b5) >> 6) );
dst[i + 2*stride] = av_clip_pixel( dst[i + 2*stride] + ((b4 + b3) >> 6) );
dst[i + 3*stride] = av_clip_pixel( dst[i + 3*stride] + ((b6 + b1) >> 6) );
dst[i + 4*stride] = av_clip_pixel( dst[i + 4*stride] + ((b6 - b1) >> 6) );
dst[i + 5*stride] = av_clip_pixel( dst[i + 5*stride] + ((b4 - b3) >> 6) );
dst[i + 6*stride] = av_clip_pixel( dst[i + 6*stride] + ((b2 - b5) >> 6) );
dst[i + 7*stride] = av_clip_pixel( dst[i + 7*stride] + ((b0 - b7) >> 6) );
dst[i + 0*stride] = CLIP( dst[i + 0*stride] + ((b0 + b7) >> 6) );
dst[i + 1*stride] = CLIP( dst[i + 1*stride] + ((b2 + b5) >> 6) );
dst[i + 2*stride] = CLIP( dst[i + 2*stride] + ((b4 + b3) >> 6) );
dst[i + 3*stride] = CLIP( dst[i + 3*stride] + ((b6 + b1) >> 6) );
dst[i + 4*stride] = CLIP( dst[i + 4*stride] + ((b6 - b1) >> 6) );
dst[i + 5*stride] = CLIP( dst[i + 5*stride] + ((b4 - b3) >> 6) );
dst[i + 6*stride] = CLIP( dst[i + 6*stride] + ((b2 - b5) >> 6) );
dst[i + 7*stride] = CLIP( dst[i + 7*stride] + ((b0 - b7) >> 6) );
}
}
@@ -156,12 +158,13 @@ void FUNCC(ff_h264_idct8_add)(uint8_t *_dst, DCTELEM *_block, int stride){
void FUNCC(ff_h264_idct_dc_add)(uint8_t *p_dst, DCTELEM *block, int stride){
int i, j;
int dc = (((dctcoef*)block)[0] + 32) >> 6;
INIT_CLIP
pixel *dst = (pixel*)p_dst;
stride >>= sizeof(pixel)-1;
for( j = 0; j < 4; j++ )
{
for( i = 0; i < 4; i++ )
dst[i] = av_clip_pixel( dst[i] + dc );
dst[i] = CLIP( dst[i] + dc );
dst += stride;
}
}
@@ -169,12 +172,13 @@ void FUNCC(ff_h264_idct_dc_add)(uint8_t *p_dst, DCTELEM *block, int stride){
void FUNCC(ff_h264_idct8_dc_add)(uint8_t *p_dst, DCTELEM *block, int stride){
int i, j;
int dc = (((dctcoef*)block)[0] + 32) >> 6;
INIT_CLIP
pixel *dst = (pixel*)p_dst;
stride >>= sizeof(pixel)-1;
for( j = 0; j < 8; j++ )
{
for( i = 0; i < 8; i++ )
dst[i] = av_clip_pixel( dst[i] + dc );
dst[i] = CLIP( dst[i] + dc );
dst += stride;
}
}
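Each 1-D pass of the 4x4 transform above uses the same integer butterfly before the >> 6 rounding and the clip back to the valid pixel range. The butterfly on its own, as a reference sketch (not the template code itself):

/* One 1-D pass of the H.264 4x4 inverse transform (no scaling or clipping). */
static void idct4_1d_sketch(const int in[4], int out[4])
{
    const int z0 = in[0] + in[2];
    const int z1 = in[0] - in[2];
    const int z2 = (in[1] >> 1) - in[3];
    const int z3 = in[1] + (in[3] >> 1);

    out[0] = z0 + z3;
    out[1] = z1 + z2;
    out[2] = z1 - z2;
    out[3] = z0 - z3;
}

int main(void)
{
    /* A pure DC input spreads equally to all four outputs. */
    const int in[4] = { 64, 0, 0, 0 };
    int out[4];
    idct4_1d_sketch(in, out);
    return (out[0] == 64 && out[3] == 64) ? 0 : 1;
}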

Some files were not shown because too many files have changed in this diff.