Compare commits

...

33 Commits

Author SHA1 Message Date
Yunqing Wang
fbad09f465 Hide global symbols for macho32/64
Added hiding of global symbols for macho32 and macho64 in x86inc.asm.
This was done to fix an exported-symbol issue in the Chrome build.

Change-Id: I08d5c559b985b82f655b537469fee125615e78c0
2013-10-30 13:12:15 -07:00
Dmitry Kovalev
300ae1b668 Correct handling of show_bit in uncompressed header.
"keyframe" variable in the current code actually means that previous
frame is a keyframe because cm->frame_type has not been initialized
in read_uncompressed_header.

Change-Id: I5645b0816c70abdef5dfc70113018d06276dac77
2013-10-29 13:59:35 -07:00
Yaowu Xu
91745bcff9 changed to comply with strict aliasing rule
The clamp operation may not affect the value of the finally assigned mv,
because the compiler may use the strict aliasing rule to optimize the
clamp operation out. This change makes the code segments comply better
with the strict aliasing rule.

Change-Id: I24502ff18bd4f9e62507a879cc8760a91a0fd07e
2013-10-29 11:34:19 -07:00
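A minimal, self-contained sketch of the aliasing hazard described in the
commit above, using an illustrative libvpx-style int_mv union (names and
limits here are stand-ins, not the decoder's actual code): clamping the
source through the struct view and then copying through the 32-bit view
leaves room, under strict aliasing, for the compiler to reorder or drop
the clamp; assigning first and then clamping the destination that is
later read avoids that.

#include <stdint.h>

typedef struct { int16_t row, col; } MV;
typedef union { uint32_t as_int; MV as_mv; } int_mv;  /* illustrative */

static void clamp_mv(MV *mv, int16_t mn, int16_t mx) {
  if (mv->row < mn) mv->row = mn; else if (mv->row > mx) mv->row = mx;
  if (mv->col < mn) mv->col = mn; else if (mv->col > mx) mv->col = mx;
}

/* Old pattern: clamp through the MV view, then copy the 32-bit view.
 * A compiler applying strict aliasing may treat the int16_t stores and
 * the uint32_t load as unrelated and elide or reorder the clamp. */
void assign_old(int_mv *dst, int_mv *src) {
  clamp_mv(&src->as_mv, -128, 127);
  dst->as_int = src->as_int;
}

/* Pattern after the change: copy the raw value first, then clamp the
 * object that is subsequently read. */
void assign_new(int_mv *dst, const int_mv *src) {
  dst->as_int = src->as_int;
  clamp_mv(&dst->as_mv, -128, 127);
}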
Yaowu Xu
07b0dab5df Converted assert to error checking
Change-Id: Icb8c677f910f588cc7c97e70f024787fe6789257
2013-10-29 10:28:25 -07:00
Yaowu Xu
518904926d Added checking for invalid size
Change-Id: I9672a61e60a26e2934796f088880ce4cb49605be
2013-10-29 10:28:25 -07:00
Yaowu Xu
225aafcf9e Converted assertion to returning error
The assertion fires on invalid input data; this commit replaces the
assertion with returning an error.

Change-Id: I1b73ae752d64882d984cd23936efe75a757c2b41
2013-10-29 10:28:25 -07:00
Yaowu Xu
b90d51fee6 Added trap for invalid key frame
Change-Id: I698e8df9b336d38bffe01e656acba00d4003695f
2013-10-29 10:28:25 -07:00
Yaowu Xu
cd4bac3004 Prevent access to invalid pointer
This commit adds a check to make sure there is no invalid memory access
even when the decoder instance is never initialized.

Change-Id: I4da343d0b3c78c27777ac7f5ce7688562c69f0c5
2013-10-29 10:28:25 -07:00
Yaowu Xu
2f4a2a1268 Add clamp to prevent out of bound access
For bad input data, the decoder may access the array out of bounds. This
commit adds a clamp to prevent such out-of-bound access.

Change-Id: I0a1cfd9b8786ea7113a998053c76605c963b077a
2013-10-29 10:28:08 -07:00
Dmitry Kovalev
e36425e637 Adding assign_mv() function to reduce code duplication.
Change-Id: I2b4e5b842c19f64749b18946ad215c0caa57e7b7
2013-10-28 16:38:28 -07:00
Dmitry Kovalev
9281f7c278 Reading diff update flag inside vp9_diff_update_prob.
Change-Id: I5ae659c1bfb132428a7272d094b5287d144ec7c8
2013-10-28 16:37:25 -07:00
Dmitry Kovalev
3680f98b1b BITSTREAM - "update_map" SEMANTICS BROKEN IN 398ddafb62
This patch reverts old commit 398ddafb62
"New way of updating last frame segmentation map.".

Change-Id: Iba730f433c30ed7f5e5449d6768049cbf9a2b2c5
2013-10-28 16:37:21 -07:00
Dmitry Kovalev
8fa0ca3b20 BITSTREAM - RESTORING BILINEAR INTERPOLATION FILTER SUPPORT
Adding appropriate test vector vp90-2-06-bilinear.webm.

Change-Id: Ia3bbf57318e0cc61a1b724fe751e3f9c7e11b337
2013-10-28 16:37:18 -07:00
Jingning Han
42f31923b0 BITSTREAM - CLARIFICATION OF MV SIZE RANGE
The codec should effectively run with motion vectors in the range
(-2048, 2047) in full pixels for sequences of 1080p and below. Add
assertions to clarify this behavior.

Change-Id: Ia0cac28249f587d8f8882205228fa480263ab313
2013-10-28 16:37:14 -07:00
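As a quick sanity check of the stated range, the MV_LOW / MV_UPP bounds
added by this change (the #defines visible in the diff below) work out
to the quoted full-pel limits, assuming motion vectors are stored in
1/8-pel units:

#include <stdio.h>

#define MV_IN_USE_BITS 14
#define MV_UPP ((1 << MV_IN_USE_BITS) - 1)   /*  16383 in 1/8 pel */
#define MV_LOW (-(1 << MV_IN_USE_BITS))      /* -16384 in 1/8 pel */

int main(void) {
  /* dividing the 1/8-pel bounds by 8 gives full pixels: -2048 .. 2047 */
  printf("full-pel range: %d .. %d\n", MV_LOW / 8, MV_UPP / 8);
  return 0;
}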
Dmitry Kovalev
a8a3966174 Adding read_intra_mode_{y, uv} functions for clarity.
Change-Id: I92fd32476c472e54f52b8d7602a98262b25e6eaf
2013-10-28 16:37:05 -07:00
Yaowu Xu
8ab0a2031b fix build with MSVC
near is a keyword in MSVC; changed to use nearmv instead.

Change-Id: Ib54438c431b2b2521a62fc7b61a9c127dd7bc01e
2013-10-28 16:36:41 -07:00
Dmitry Kovalev
16bb20631b Using array of motion vectors instead of separate variables.
Change-Id: I7380a089105f658257bbb3e30a525da168e76952
2013-10-28 16:36:06 -07:00
Dmitry Kovalev
8f83c132ff New way of updating last frame segmentation map.
Implementing a more natural (and faster) way of updating the last frame
segmentation map.

Change-Id: I9fefa8f78e77bd7948133b04173da45edc15a17e
2013-10-28 16:32:50 -07:00
Jingning Han
4beb889f99 Remove redundant mode update in sub8x8 decoding
The probability model used to code prediction mode is conditioned
on the immediate above and left 8x8 blocks' prediction modes. When
the above/left block is coded in sub8x8 mode, we use the prediction
mode of the bottom-right sub8x8 block as the reference to generate
the context.

This commit moves the update of mbmi.mode out of the sub8x8 decoding
loop, removing redundant update steps and keeping the bottom-right
block's mode for the decoding of subsequent blocks.

Change-Id: I1e8d749684d201c1a1151697621efa5d569218b6
2013-10-28 16:31:21 -07:00
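A minimal sketch of the restructuring described above, with stand-in
names rather than the decoder's actual code: the per-sub-block write of
mbmi.mode is hoisted out of the loop, so a single assignment afterwards
keeps the bottom-right sub-block's mode for later context derivation.

#include <stdio.h>

/* stand-in for decoding one sub-block's prediction mode */
static int read_inter_mode_stub(int j) { return 10 + j; }

int main(void) {
  const int num_4x4_w = 1, num_4x4_h = 1;  /* sub8x8: visit all four blocks */
  int b_mode = 0, idx, idy;
  for (idy = 0; idy < 2; idy += num_4x4_h)
    for (idx = 0; idx < 2; idx += num_4x4_w)
      b_mode = read_inter_mode_stub(idy * 2 + idx);
  /* single update after the loop: the surviving value is the bottom-right
   * sub-block's mode (j == 3) */
  printf("mbmi.mode = %d\n", b_mode);
  return 0;
}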
Yunqing Wang
a7b7f94ae8 Merge "Fix x86inc.asm to build PIC code correctly" 2013-09-18 14:51:31 -07:00
Yunqing Wang
9d901217c6 Fix x86inc.asm to build PIC code correctly
The current x86inc.asm didn't handle 32-bit PIC builds properly:
TEXTRELs were seen in the built library. The PIC macros from libvpx's
x86_abi_support.asm were used to fix this problem, and the assembly
code was modified to use those macros.

Notes: We need this fix in for the decoder build. Functions in the
encoder will be fixed later.

Change-Id: Ifa548d37b1d0bc7d0528db75009cc18cd5eb1838
2013-09-18 13:45:46 -07:00
Adrian Grange
bb30fff978 Merge "Modified resize unit test to output test vector" 2013-09-18 08:36:00 -07:00
Yaowu Xu
85fd8bdb01 Merge "Silence a bunch of MSVC warnings" 2013-09-17 17:10:58 -07:00
Jingning Han
c437bbcde0 Clean up second ref check in sub8x8 rd loop
This commit cleans up the second reference check in the
rate-distortion optimization loop of sub8x8 blocks.

Change-Id: Ife68feaa4cddbfad2878c9b44d3012788d634f97
2013-09-17 15:59:49 -07:00
Adrian Grange
88c8ff2508 Modified resize unit test to output test vector
Modified the resize unit test so that it optionally writes the encoded
bitstream to a file. The macro WRITE_COMPRESSED_STREAM should be set to
1 to enable output of the test bitstream; it is set to 0 by default.

Change-Id: I7d436b1942f935da97db6d84574a98d379f57fb1
2013-09-17 15:38:30 -07:00
Yaowu Xu
a783da80e7 Silence a bunch of MSVC warnings
Change-Id: I16633269582a640809dca27572bbe99efa6369fc
2013-09-17 12:08:51 -07:00
Jingning Han
2b3bfaa9ce Remove redundant argument in get_sub_block_mv
The sub8x8 check can be directly inferred from block_idx, hence it is
removed from the arguments of get_sub_block_mv.

Change-Id: Ib766d57e81248fb92df0f6d9b163e6c77b933ccd
2013-09-17 12:08:45 -07:00
Paul Wilkins
84758960db Merge "Minor clean up." 2013-09-17 03:39:24 -07:00
Paul Wilkins
90a52694f3 Merge "Adjustment to mode_skip_start." 2013-09-17 03:39:15 -07:00
Adrian Grange
f582aa6eda Merge "Fix failure to copy data files if content changes" 2013-09-16 17:20:59 -07:00
Adrian Grange
5b23666e67 Fix failure to copy data files if content changes
Jenkins was failing to detect the case where an existing
file is recreated with new content. In this case, thinking
that the file already existed, Jenkins did not re-copy the
file as it should have.

By adding the file test-data.sha1 as a dependency to the
LIBVPX_TEST_DATA build target, the files will be re-copied if the MD5
of an existing file changes.

This could be further improved to only copy files that
have changed rather than copying the whole set as done in
this patch.

(Thanks to jzern@, who diagnosed the problem and suggested this fix.)

Change-Id: Icea7c61a95189bc639fec83020c28c70da5b2b41
2013-09-16 16:14:39 -07:00
Paul Wilkins
cb50dc7f33 Minor clean up.
Removed some unused code, plus minor cleanup and reordering.

Change-Id: I4083ae56aeb8edfe9b85aa2f42a16aa28d19da94
2013-09-16 13:45:20 +01:00
Paul Wilkins
3b01778450 Adjustment to mode_skip_start.
Corrected values relating to modified mode order.

Change-Id: I24fccba3af4bc16721d5e7e51888a66305bfa7fe
2013-09-16 13:44:48 +01:00
25 changed files with 474 additions and 258 deletions

View File

@@ -395,7 +395,7 @@ libvpx_test_srcs.txt:
@echo $(LIBVPX_TEST_SRCS) | xargs -n1 echo | sort -u > $@
CLEAN-OBJS += libvpx_test_srcs.txt
$(LIBVPX_TEST_DATA):
$(LIBVPX_TEST_DATA): $(SRC_PATH_BARE)/test/test-data.sha1
@echo " [DOWNLOAD] $@"
$(qexec)trap 'rm -f $@' INT TERM &&\
curl -L -o $@ $(call libvpx_test_data_url,$(@F))

View File

@@ -16,8 +16,68 @@
#include "test/video_source.h"
#include "test/util.h"
// Enable(1) or Disable(0) writing of the compressed bitstream.
#define WRITE_COMPRESSED_STREAM 0
namespace {
#if WRITE_COMPRESSED_STREAM
static void mem_put_le16(char *const mem, const unsigned int val) {
mem[0] = val;
mem[1] = val >> 8;
}
static void mem_put_le32(char *const mem, const unsigned int val) {
mem[0] = val;
mem[1] = val >> 8;
mem[2] = val >> 16;
mem[3] = val >> 24;
}
static void write_ivf_file_header(const vpx_codec_enc_cfg_t *const cfg,
int frame_cnt, FILE *const outfile) {
char header[32];
header[0] = 'D';
header[1] = 'K';
header[2] = 'I';
header[3] = 'F';
mem_put_le16(header + 4, 0); /* version */
mem_put_le16(header + 6, 32); /* headersize */
mem_put_le32(header + 8, 0x30395056); /* fourcc (vp9) */
mem_put_le16(header + 12, cfg->g_w); /* width */
mem_put_le16(header + 14, cfg->g_h); /* height */
mem_put_le32(header + 16, cfg->g_timebase.den); /* rate */
mem_put_le32(header + 20, cfg->g_timebase.num); /* scale */
mem_put_le32(header + 24, frame_cnt); /* length */
mem_put_le32(header + 28, 0); /* unused */
(void)fwrite(header, 1, 32, outfile);
}
static void write_ivf_frame_size(FILE *const outfile, const size_t size) {
char header[4];
mem_put_le32(header, static_cast<unsigned int>(size));
(void)fwrite(header, 1, 4, outfile);
}
static void write_ivf_frame_header(const vpx_codec_cx_pkt_t *const pkt,
FILE *const outfile) {
char header[12];
vpx_codec_pts_t pts;
if (pkt->kind != VPX_CODEC_CX_FRAME_PKT)
return;
pts = pkt->data.frame.pts;
mem_put_le32(header, static_cast<unsigned int>(pkt->data.frame.sz));
mem_put_le32(header + 4, pts & 0xFFFFFFFF);
mem_put_le32(header + 8, pts >> 32);
(void)fwrite(header, 1, 12, outfile);
}
#endif // WRITE_COMPRESSED_STREAM
const unsigned int kInitialWidth = 320;
const unsigned int kInitialHeight = 240;
@@ -42,6 +102,8 @@ class ResizingVideoSource : public ::libvpx_test::DummyVideoSource {
limit_ = 60;
}
virtual ~ResizingVideoSource() {}
protected:
virtual void Next() {
++frame_;
@@ -56,13 +118,15 @@ class ResizeTest : public ::libvpx_test::EncoderTest,
protected:
ResizeTest() : EncoderTest(GET_PARAM(0)) {}
virtual ~ResizeTest() {}
struct FrameInfo {
FrameInfo(vpx_codec_pts_t _pts, unsigned int _w, unsigned int _h)
: pts(_pts), w(_w), h(_h) {}
vpx_codec_pts_t pts;
unsigned int w;
unsigned int h;
unsigned int w;
unsigned int h;
};
virtual void SetUp() {
@@ -95,17 +159,47 @@ TEST_P(ResizeTest, TestExternalResizeWorks) {
}
}
const unsigned int kStepDownFrame = 3;
const unsigned int kStepUpFrame = 6;
class ResizeInternalTest : public ResizeTest {
protected:
#if WRITE_COMPRESSED_STREAM
ResizeInternalTest()
: ResizeTest(),
frame0_psnr_(0.0),
outfile_(NULL),
out_frames_(0) {}
#else
ResizeInternalTest() : ResizeTest(), frame0_psnr_(0.0) {}
#endif
virtual ~ResizeInternalTest() {}
virtual void BeginPassHook(unsigned int /*pass*/) {
#if WRITE_COMPRESSED_STREAM
outfile_ = fopen("vp90-2-05-resize.ivf", "wb");
#endif
}
virtual void EndPassHook() {
#if WRITE_COMPRESSED_STREAM
if (outfile_) {
if (!fseek(outfile_, 0, SEEK_SET))
write_ivf_file_header(&cfg_, out_frames_, outfile_);
fclose(outfile_);
outfile_ = NULL;
}
#endif
}
virtual void PreEncodeFrameHook(libvpx_test::VideoSource *video,
libvpx_test::Encoder *encoder) {
if (video->frame() == 3) {
if (video->frame() == kStepDownFrame) {
struct vpx_scaling_mode mode = {VP8E_FOURFIVE, VP8E_THREEFIVE};
encoder->Control(VP8E_SET_SCALEMODE, &mode);
}
if (video->frame() == 6) {
if (video->frame() == kStepUpFrame) {
struct vpx_scaling_mode mode = {VP8E_NORMAL, VP8E_NORMAL};
encoder->Control(VP8E_SET_SCALEMODE, &mode);
}
@@ -117,7 +211,25 @@ class ResizeInternalTest : public ResizeTest {
EXPECT_NEAR(pkt->data.psnr.psnr[0], frame0_psnr_, 1.0);
}
virtual void FramePktHook(const vpx_codec_cx_pkt_t *pkt) {
#if WRITE_COMPRESSED_STREAM
++out_frames_;
// Write initial file header if first frame.
if (pkt->data.frame.pts == 0)
write_ivf_file_header(&cfg_, 0, outfile_);
// Write frame header and data.
write_ivf_frame_header(pkt, outfile_);
(void)fwrite(pkt->data.frame.buf, 1, pkt->data.frame.sz, outfile_);
#endif
}
double frame0_psnr_;
#if WRITE_COMPRESSED_STREAM
FILE *outfile_;
unsigned int out_frames_;
#endif
};
TEST_P(ResizeInternalTest, TestInternalResizeWorks) {
@@ -125,20 +237,20 @@ TEST_P(ResizeInternalTest, TestInternalResizeWorks) {
30, 1, 0, 10);
init_flags_ = VPX_CODEC_USE_PSNR;
// q picked such that initial keyframe on this clip is ~30dB PSNR
cfg_.rc_min_quantizer = cfg_.rc_max_quantizer = 48;
// If the number of frames being encoded is smaller than g_lag_in_frames
// the encoded frame is unavailable using the current API. Comparing
// frames to detect mismatch would then not be possible. Set
// g_lag_in_frames = 0 to get around this.
cfg_.g_lag_in_frames = 0;
// q picked such that initial keyframe on this clip is ~30dB PSNR
cfg_.rc_min_quantizer = cfg_.rc_max_quantizer = 48;
ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
for (std::vector<FrameInfo>::iterator info = frame_info_list_.begin();
info != frame_info_list_.end(); ++info) {
const vpx_codec_pts_t pts = info->pts;
if (pts >= 3 && pts < 6) {
if (pts >= kStepDownFrame && pts < kStepUpFrame) {
ASSERT_EQ(282U, info->w) << "Frame " << pts << " had unexpected width";
ASSERT_EQ(173U, info->h) << "Frame " << pts << " had unexpected height";
} else {

View File

@@ -520,7 +520,12 @@ d17bc08eedfc60c4c23d576a6c964a21bf854d1f vp90-2-03-size-226x202.webm
83c6d8f2969b759e10e5c6542baca1265c874c29 vp90-2-03-size-226x224.webm.md5
fe0af2ee47b1e5f6a66db369e2d7e9d870b38dce vp90-2-03-size-226x226.webm
94ad19b8b699cea105e2ff18f0df2afd7242bcf7 vp90-2-03-size-226x226.webm.md5
495256cfd123fe777b2c0406862ed8468a1f4677 vp91-2-04-yv444.webm
65e3a7ffef61ab340d9140f335ecc49125970c2c vp91-2-04-yv444.webm.md5
b6524e4084d15b5d0caaa3d3d1368db30cbee69c vp90-2-03-deltaq.webm
65f45ec9a55537aac76104818278e0978f94a678 vp90-2-03-deltaq.webm.md5
4dbb87494c7f565ffc266c98d17d0d8c7a5c5aba vp90-2-05-resize.ivf
7f6d8879336239a43dbb6c9f13178cb11cf7ed09 vp90-2-05-resize.ivf.md5
bf61ddc1f716eba58d4c9837d4e91031d9ce4ffe vp90-2-06-bilinear.webm
f6235f937552e11d8eb331ec55da6b3aa596b9ac vp90-2-06-bilinear.webm.md5
495256cfd123fe777b2c0406862ed8468a1f4677 vp91-2-04-yv444.webm
65e3a7ffef61ab340d9140f335ecc49125970c2c vp91-2-04-yv444.webm.md5

View File

@@ -631,5 +631,9 @@ LIBVPX_TEST_DATA-$(CONFIG_VP9_DECODER) += vp90-2-03-size-226x226.webm
LIBVPX_TEST_DATA-$(CONFIG_VP9_DECODER) += vp90-2-03-size-226x226.webm.md5
LIBVPX_TEST_DATA-$(CONFIG_VP9_DECODER) += vp90-2-03-deltaq.webm
LIBVPX_TEST_DATA-$(CONFIG_VP9_DECODER) += vp90-2-03-deltaq.webm.md5
LIBVPX_TEST_DATA-$(CONFIG_VP9_DECODER) += vp90-2-05-resize.ivf
LIBVPX_TEST_DATA-$(CONFIG_VP9_DECODER) += vp90-2-05-resize.ivf.md5
LIBVPX_TEST_DATA-$(CONFIG_VP9_DECODER) += vp90-2-06-bilinear.webm
LIBVPX_TEST_DATA-$(CONFIG_VP9_DECODER) += vp90-2-06-bilinear.webm.md5
LIBVPX_TEST_DATA-$(CONFIG_VP9_DECODER) += vp91-2-04-yv444.webm
LIBVPX_TEST_DATA-$(CONFIG_VP9_DECODER) += vp91-2-04-yv444.webm.md5

View File

@@ -160,6 +160,7 @@ const char *kVP9TestVectors[] = {
"vp90-2-03-size-226x202.webm", "vp90-2-03-size-226x208.webm",
"vp90-2-03-size-226x210.webm", "vp90-2-03-size-226x224.webm",
"vp90-2-03-size-226x226.webm", "vp90-2-03-deltaq.webm",
"vp90-2-05-resize.ivf", "vp90-2-06-bilinear.webm",
#if CONFIG_NON420
"vp91-2-04-yv444.webm"
#endif

View File

@@ -97,21 +97,91 @@
%endif
%endmacro
%if WIN64
%define PIC
%elifidn __OUTPUT_FORMAT__,macho64
%define PIC
%elif ARCH_X86_64 == 0
; x86_32 doesn't require PIC.
; Some distros prefer shared objects to be PIC, but nothing breaks if
; the code contains a few textrels, so we'll skip that complexity.
%undef PIC
%elif CONFIG_PIC
%define PIC
; PIC macros are copied from vpx_ports/x86_abi_support.asm. The "define PIC"
; from original code is added in for 64bit.
%ifidn __OUTPUT_FORMAT__,elf32
%define ABI_IS_32BIT 1
%elifidn __OUTPUT_FORMAT__,macho32
%define ABI_IS_32BIT 1
%elifidn __OUTPUT_FORMAT__,win32
%define ABI_IS_32BIT 1
%elifidn __OUTPUT_FORMAT__,aout
%define ABI_IS_32BIT 1
%else
%define ABI_IS_32BIT 0
%endif
%if ABI_IS_32BIT
%if CONFIG_PIC=1
%ifidn __OUTPUT_FORMAT__,elf32
%define GET_GOT_SAVE_ARG 1
%define WRT_PLT wrt ..plt
%macro GET_GOT 1
extern _GLOBAL_OFFSET_TABLE_
push %1
call %%get_got
%%sub_offset:
jmp %%exitGG
%%get_got:
mov %1, [esp]
add %1, _GLOBAL_OFFSET_TABLE_ + $$ - %%sub_offset wrt ..gotpc
ret
%%exitGG:
%undef GLOBAL
%define GLOBAL(x) x + %1 wrt ..gotoff
%undef RESTORE_GOT
%define RESTORE_GOT pop %1
%endmacro
%elifidn __OUTPUT_FORMAT__,macho32
%define GET_GOT_SAVE_ARG 1
%macro GET_GOT 1
push %1
call %%get_got
%%get_got:
pop %1
%undef GLOBAL
%define GLOBAL(x) x + %1 - %%get_got
%undef RESTORE_GOT
%define RESTORE_GOT pop %1
%endmacro
%endif
%endif
%if ARCH_X86_64 == 0
%undef PIC
%endif
%else
%macro GET_GOT 1
%endmacro
%define GLOBAL(x) rel x
%define WRT_PLT wrt ..plt
%if WIN64
%define PIC
%elifidn __OUTPUT_FORMAT__,macho64
%define PIC
%elif CONFIG_PIC
%define PIC
%endif
%endif
%ifnmacro GET_GOT
%macro GET_GOT 1
%endmacro
%define GLOBAL(x) x
%endif
%ifndef RESTORE_GOT
%define RESTORE_GOT
%endif
%ifndef WRT_PLT
%define WRT_PLT
%endif
%ifdef PIC
default rel
%endif
; Done with PIC macros
; Always use long nops (reduces 0x90 spam in disassembly on x86_32)
%ifndef __NASM_VER__
@@ -528,6 +598,10 @@ DECLARE_ARG 7, 8, 9, 10, 11, 12, 13, 14
global %1:function hidden
%elifidn __OUTPUT_FORMAT__,elf64
global %1:function hidden
%elifidn __OUTPUT_FORMAT__,macho32
global %1:private_extern
%elifidn __OUTPUT_FORMAT__,macho64
global %1:private_extern
%else
global %1
%endif

View File

@@ -512,15 +512,15 @@ static void read_mb_modes_mv(VP8D_COMP *pbi, MODE_INFO *mi, MB_MODE_INFO *mbmi)
else
{
mbmi->mode = NEARMV;
vp8_clamp_mv2(&near_mvs[CNT_NEAR], &pbi->mb);
mbmi->mv.as_int = near_mvs[CNT_NEAR].as_int;
vp8_clamp_mv2(&mbmi->mv, &pbi->mb);
}
}
else
{
mbmi->mode = NEARESTMV;
vp8_clamp_mv2(&near_mvs[CNT_NEAREST], &pbi->mb);
mbmi->mv.as_int = near_mvs[CNT_NEAREST].as_int;
vp8_clamp_mv2(&mbmi->mv, &pbi->mb);
}
}
else

View File

@@ -1026,7 +1026,7 @@ int vp8_decode_frame(VP8D_COMP *pbi)
const unsigned char *clear = data;
if (pbi->decrypt_cb)
{
int n = data_end - data;
int n = (int)(data_end - data);
if (n > 10) n = 10;
pbi->decrypt_cb(pbi->decrypt_state, data, clear_buffer, n);
clear = clear_buffer;

View File

@@ -35,7 +35,7 @@ static void convolve_horiz_c(const uint8_t *src, ptrdiff_t src_stride,
for (y = 0; y < h; ++y) {
/* Initial phase offset */
int x_q4 = (filter_x0 - filter_x_base) / taps;
int x_q4 = (int)(filter_x0 - filter_x_base) / taps;
for (x = 0; x < w; ++x) {
/* Per-pixel src offset */
@@ -76,7 +76,7 @@ static void convolve_avg_horiz_c(const uint8_t *src, ptrdiff_t src_stride,
for (y = 0; y < h; ++y) {
/* Initial phase offset */
int x_q4 = (filter_x0 - filter_x_base) / taps;
int x_q4 = (int)(filter_x0 - filter_x_base) / taps;
for (x = 0; x < w; ++x) {
/* Per-pixel src offset */
@@ -118,7 +118,7 @@ static void convolve_vert_c(const uint8_t *src, ptrdiff_t src_stride,
for (x = 0; x < w; ++x) {
/* Initial phase offset */
int y_q4 = (filter_y0 - filter_y_base) / taps;
int y_q4 = (int)(filter_y0 - filter_y_base) / taps;
for (y = 0; y < h; ++y) {
/* Per-pixel src offset */
@@ -160,7 +160,7 @@ static void convolve_avg_vert_c(const uint8_t *src, ptrdiff_t src_stride,
for (x = 0; x < w; ++x) {
/* Initial phase offset */
int y_q4 = (filter_y0 - filter_y_base) / taps;
int y_q4 = (int)(filter_y0 - filter_y_base) / taps;
for (y = 0; y < h; ++y) {
/* Per-pixel src offset */

View File

@@ -73,6 +73,10 @@ extern struct vp9_token vp9_mv_class_encodings[MV_CLASSES];
#define MV_MAX ((1 << MV_MAX_BITS) - 1)
#define MV_VALS ((MV_MAX << 1) + 1)
#define MV_IN_USE_BITS 14
#define MV_UPP ((1 << MV_IN_USE_BITS) - 1)
#define MV_LOW (-(1 << MV_IN_USE_BITS))
extern const vp9_tree_index vp9_mv_class0_tree[2 * CLASS0_SIZE - 2];
extern struct vp9_token vp9_mv_class0_encodings[CLASS0_SIZE];

View File

@@ -119,10 +119,9 @@ static void clamp_mv_ref(MV *mv, const MACROBLOCKD *xd) {
// This function returns either the appropriate sub block or block's mv
// on whether the block_size < 8x8 and we have check_sub_blocks set.
static INLINE int_mv get_sub_block_mv(const MODE_INFO *candidate,
int check_sub_blocks, int which_mv,
static INLINE int_mv get_sub_block_mv(const MODE_INFO *candidate, int which_mv,
int search_col, int block_idx) {
return check_sub_blocks && candidate->mbmi.sb_type < BLOCK_8X8
return block_idx >= 0 && candidate->mbmi.sb_type < BLOCK_8X8
? candidate->bmi[idx_n_column_to_subblock[block_idx][search_col == 0]]
.as_mv[which_mv]
: candidate->mbmi.mv[which_mv];
@@ -203,7 +202,6 @@ void vp9_find_mv_refs_idx(const VP9_COMMON *cm, const MACROBLOCKD *xd,
for (i = 0; i < 2; ++i) {
const MV *const mv_ref = &mv_ref_search[i];
if (is_inside(cm, mi_col, mi_row, mv_ref)) {
const int check_sub_blocks = block_idx >= 0;
const MODE_INFO *const candidate_mi = xd->mi_8x8[mv_ref->col + mv_ref->row
* xd->mode_info_stride];
const MB_MODE_INFO *const candidate = &candidate_mi->mbmi;
@@ -212,13 +210,13 @@ void vp9_find_mv_refs_idx(const VP9_COMMON *cm, const MACROBLOCKD *xd,
// Check if the candidate comes from the same reference frame.
if (candidate->ref_frame[0] == ref_frame) {
ADD_MV_REF_LIST(get_sub_block_mv(candidate_mi, check_sub_blocks, 0,
ADD_MV_REF_LIST(get_sub_block_mv(candidate_mi, 0,
mv_ref->col, block_idx));
different_ref_found = candidate->ref_frame[1] != ref_frame;
} else {
if (candidate->ref_frame[1] == ref_frame)
// Add second motion vector if it has the same ref_frame.
ADD_MV_REF_LIST(get_sub_block_mv(candidate_mi, check_sub_blocks, 1,
ADD_MV_REF_LIST(get_sub_block_mv(candidate_mi, 1,
mv_ref->col, block_idx));
different_ref_found = 1;
}

View File

@@ -19,12 +19,14 @@ pw_32: times 8 dw 32
SECTION .text
INIT_MMX sse
cglobal dc_predictor_4x4, 4, 4, 2, dst, stride, above, left
cglobal dc_predictor_4x4, 4, 5, 2, dst, stride, above, left, goffset
GET_GOT goffsetq
pxor m1, m1
movd m0, [aboveq]
punpckldq m0, [leftq]
psadbw m0, m1
paddw m0, [pw_4]
paddw m0, [GLOBAL(pw_4)]
psraw m0, 3
pshufw m0, m0, 0x0
packuswb m0, m0
@@ -33,10 +35,14 @@ cglobal dc_predictor_4x4, 4, 4, 2, dst, stride, above, left
lea dstq, [dstq+strideq*2]
movd [dstq ], m0
movd [dstq+strideq], m0
RESTORE_GOT
RET
INIT_MMX sse
cglobal dc_predictor_8x8, 4, 4, 3, dst, stride, above, left
cglobal dc_predictor_8x8, 4, 5, 3, dst, stride, above, left, goffset
GET_GOT goffsetq
pxor m1, m1
movq m0, [aboveq]
movq m2, [leftq]
@@ -45,7 +51,7 @@ cglobal dc_predictor_8x8, 4, 4, 3, dst, stride, above, left
psadbw m0, m1
psadbw m2, m1
paddw m0, m2
paddw m0, [pw_8]
paddw m0, [GLOBAL(pw_8)]
psraw m0, 4
pshufw m0, m0, 0x0
packuswb m0, m0
@@ -58,10 +64,14 @@ cglobal dc_predictor_8x8, 4, 4, 3, dst, stride, above, left
movq [dstq+strideq ], m0
movq [dstq+strideq*2], m0
movq [dstq+stride3q ], m0
RESTORE_GOT
RET
INIT_XMM sse2
cglobal dc_predictor_16x16, 4, 4, 3, dst, stride, above, left
cglobal dc_predictor_16x16, 4, 5, 3, dst, stride, above, left, goffset
GET_GOT goffsetq
pxor m1, m1
mova m0, [aboveq]
mova m2, [leftq]
@@ -73,7 +83,7 @@ cglobal dc_predictor_16x16, 4, 4, 3, dst, stride, above, left
paddw m0, m2
movhlps m2, m0
paddw m0, m2
paddw m0, [pw_16]
paddw m0, [GLOBAL(pw_16)]
psraw m0, 5
pshuflw m0, m0, 0x0
punpcklqdq m0, m0
@@ -86,10 +96,14 @@ cglobal dc_predictor_16x16, 4, 4, 3, dst, stride, above, left
lea dstq, [dstq+strideq*4]
dec lines4d
jnz .loop
RESTORE_GOT
REP_RET
INIT_XMM sse2
cglobal dc_predictor_32x32, 4, 4, 5, dst, stride, above, left
cglobal dc_predictor_32x32, 4, 5, 5, dst, stride, above, left, goffset
GET_GOT goffsetq
pxor m1, m1
mova m0, [aboveq]
mova m2, [aboveq+16]
@@ -107,7 +121,7 @@ cglobal dc_predictor_32x32, 4, 4, 5, dst, stride, above, left
paddw m0, m4
movhlps m2, m0
paddw m0, m2
paddw m0, [pw_32]
paddw m0, [GLOBAL(pw_32)]
psraw m0, 6
pshuflw m0, m0, 0x0
punpcklqdq m0, m0
@@ -124,6 +138,8 @@ cglobal dc_predictor_32x32, 4, 4, 5, dst, stride, above, left
lea dstq, [dstq+strideq*4]
dec lines4d
jnz .loop
RESTORE_GOT
REP_RET
INIT_MMX sse

View File

@@ -112,14 +112,16 @@ cglobal h_predictor_32x32, 2, 4, 3, dst, stride, line, left
REP_RET
INIT_MMX ssse3
cglobal d45_predictor_4x4, 3, 3, 4, dst, stride, above
cglobal d45_predictor_4x4, 3, 4, 4, dst, stride, above, goffset
GET_GOT goffsetq
movq m0, [aboveq]
pshufb m2, m0, [sh_b23456777]
pshufb m1, m0, [sh_b01234577]
pshufb m0, [sh_b12345677]
pshufb m2, m0, [GLOBAL(sh_b23456777)]
pshufb m1, m0, [GLOBAL(sh_b01234577)]
pshufb m0, [GLOBAL(sh_b12345677)]
pavgb m3, m2, m1
pxor m2, m1
pand m2, [pb_1]
pand m2, [GLOBAL(pb_1)]
psubb m3, m2
pavgb m0, m3
@@ -132,19 +134,23 @@ cglobal d45_predictor_4x4, 3, 3, 4, dst, stride, above
movd [dstq ], m0
psrlq m0, 8
movd [dstq+strideq], m0
RESTORE_GOT
RET
INIT_MMX ssse3
cglobal d45_predictor_8x8, 3, 3, 4, dst, stride, above
cglobal d45_predictor_8x8, 3, 4, 4, dst, stride, above, goffset
GET_GOT goffsetq
movq m0, [aboveq]
mova m1, [sh_b12345677]
DEFINE_ARGS dst, stride, stride3, line
mova m1, [GLOBAL(sh_b12345677)]
DEFINE_ARGS dst, stride, stride3
lea stride3q, [strideq*3]
pshufb m2, m0, [sh_b23456777]
pshufb m2, m0, [GLOBAL(sh_b23456777)]
pavgb m3, m2, m0
pxor m2, m0
pshufb m0, m1
pand m2, [pb_1]
pand m2, [GLOBAL(pb_1)]
psubb m3, m2
pavgb m0, m3
@@ -167,20 +173,24 @@ cglobal d45_predictor_8x8, 3, 3, 4, dst, stride, above
movq [dstq+strideq*2], m0
pshufb m0, m1
movq [dstq+stride3q ], m0
RESTORE_GOT
RET
INIT_XMM ssse3
cglobal d45_predictor_16x16, 3, 5, 4, dst, stride, above, dst8, line
cglobal d45_predictor_16x16, 3, 6, 4, dst, stride, above, dst8, line, goffset
GET_GOT goffsetq
mova m0, [aboveq]
DEFINE_ARGS dst, stride, stride3, dst8, line
lea stride3q, [strideq*3]
lea dst8q, [dstq+strideq*8]
mova m1, [sh_b123456789abcdeff]
pshufb m2, m0, [sh_b23456789abcdefff]
mova m1, [GLOBAL(sh_b123456789abcdeff)]
pshufb m2, m0, [GLOBAL(sh_b23456789abcdefff)]
pavgb m3, m2, m0
pxor m2, m0
pshufb m0, m1
pand m2, [pb_1]
pand m2, [GLOBAL(pb_1)]
psubb m3, m2
pavgb m0, m3
@@ -214,29 +224,33 @@ cglobal d45_predictor_16x16, 3, 5, 4, dst, stride, above, dst8, line
movhps [dstq+strideq +8], m0
movhps [dstq+strideq*2+8], m0
movhps [dstq+stride3q +8], m0
RESTORE_GOT
RET
INIT_XMM ssse3
cglobal d45_predictor_32x32, 3, 5, 7, dst, stride, above, dst16, line
cglobal d45_predictor_32x32, 3, 6, 7, dst, stride, above, dst16, line, goffset
GET_GOT goffsetq
mova m0, [aboveq]
mova m4, [aboveq+16]
DEFINE_ARGS dst, stride, stride3, dst16, line
lea stride3q, [strideq*3]
lea dst16q, [dstq +strideq*8]
lea dst16q, [dst16q+strideq*8]
mova m1, [sh_b123456789abcdeff]
pshufb m2, m4, [sh_b23456789abcdefff]
mova m1, [GLOBAL(sh_b123456789abcdeff)]
pshufb m2, m4, [GLOBAL(sh_b23456789abcdefff)]
pavgb m3, m2, m4
pxor m2, m4
palignr m5, m4, m0, 1
palignr m6, m4, m0, 2
pshufb m4, m1
pand m2, [pb_1]
pand m2, [GLOBAL(pb_1)]
psubb m3, m2
pavgb m4, m3
pavgb m3, m0, m6
pxor m0, m6
pand m0, [pb_1]
pand m0, [GLOBAL(pb_1)]
psubb m3, m0
pavgb m5, m3
@@ -288,4 +302,6 @@ cglobal d45_predictor_32x32, 3, 5, 7, dst, stride, above, dst16, line
mova [dstq +strideq +16], m4
mova [dstq +strideq*2+16], m4
mova [dstq +stride3q +16], m4
RESTORE_GOT
RET

View File

@@ -30,10 +30,26 @@ static MB_PREDICTION_MODE read_intra_mode(vp9_reader *r, const vp9_prob *p) {
return (MB_PREDICTION_MODE)treed_read(r, vp9_intra_mode_tree, p);
}
static MB_PREDICTION_MODE read_intra_mode_y(VP9_COMMON *cm, vp9_reader *r,
int size_group) {
const MB_PREDICTION_MODE y_mode = read_intra_mode(r,
cm->fc.y_mode_prob[size_group]);
++cm->counts.y_mode[size_group][y_mode];
return y_mode;
}
static MB_PREDICTION_MODE read_intra_mode_uv(VP9_COMMON *cm, vp9_reader *r,
MB_PREDICTION_MODE y_mode) {
const MB_PREDICTION_MODE uv_mode = read_intra_mode(r,
cm->fc.uv_mode_prob[y_mode]);
++cm->counts.uv_mode[y_mode][uv_mode];
return uv_mode;
}
static MB_PREDICTION_MODE read_inter_mode(VP9_COMMON *cm, vp9_reader *r,
uint8_t context) {
MB_PREDICTION_MODE mode = treed_read(r, vp9_inter_mode_tree,
cm->fc.inter_mode_probs[context]);
const MB_PREDICTION_MODE mode = treed_read(r, vp9_inter_mode_tree,
cm->fc.inter_mode_probs[context]);
++cm->counts.inter_mode[context][inter_mode_offset(mode)];
return mode;
}
@@ -348,16 +364,15 @@ static void read_switchable_interp_probs(FRAME_CONTEXT *fc, vp9_reader *r) {
int i, j;
for (j = 0; j < SWITCHABLE_FILTERS + 1; ++j)
for (i = 0; i < SWITCHABLE_FILTERS - 1; ++i)
if (vp9_read(r, MODE_UPDATE_PROB))
vp9_diff_update_prob(r, &fc->switchable_interp_prob[j][i]);
vp9_diff_update_prob(r, MODE_UPDATE_PROB,
&fc->switchable_interp_prob[j][i]);
}
static void read_inter_mode_probs(FRAME_CONTEXT *fc, vp9_reader *r) {
int i, j;
for (i = 0; i < INTER_MODE_CONTEXTS; ++i)
for (j = 0; j < INTER_MODES - 1; ++j)
if (vp9_read(r, MODE_UPDATE_PROB))
vp9_diff_update_prob(r, &fc->inter_mode_probs[i][j]);
vp9_diff_update_prob(r, MODE_UPDATE_PROB, &fc->inter_mode_probs[i][j]);
}
static INLINE COMPPREDMODE_TYPE read_comp_pred_mode(vp9_reader *r) {
@@ -388,9 +403,7 @@ static void read_intra_block_mode_info(VP9D_COMP *pbi, MODE_INFO *mi,
mbmi->ref_frame[1] = NONE;
if (bsize >= BLOCK_8X8) {
const int size_group = size_group_lookup[bsize];
mbmi->mode = read_intra_mode(r, cm->fc.y_mode_prob[size_group]);
cm->counts.y_mode[size_group][mbmi->mode]++;
mbmi->mode = read_intra_mode_y(cm, r, size_group_lookup[bsize]);
} else {
// Only 4x4, 4x8, 8x4 blocks
const int num_4x4_w = num_4x4_blocks_wide_lookup[bsize]; // 1 or 2
@@ -400,10 +413,8 @@ static void read_intra_block_mode_info(VP9D_COMP *pbi, MODE_INFO *mi,
for (idy = 0; idy < 2; idy += num_4x4_h) {
for (idx = 0; idx < 2; idx += num_4x4_w) {
const int ib = idy * 2 + idx;
const int b_mode = read_intra_mode(r, cm->fc.y_mode_prob[0]);
const int b_mode = read_intra_mode_y(cm, r, 0);
mi->bmi[ib].as_mode = b_mode;
cm->counts.y_mode[0][b_mode]++;
if (num_4x4_h == 2)
mi->bmi[ib + 2].as_mode = b_mode;
if (num_4x4_w == 2)
@@ -413,8 +424,47 @@ static void read_intra_block_mode_info(VP9D_COMP *pbi, MODE_INFO *mi,
mbmi->mode = mi->bmi[3].as_mode;
}
mbmi->uv_mode = read_intra_mode(r, cm->fc.uv_mode_prob[mbmi->mode]);
cm->counts.uv_mode[mbmi->mode][mbmi->uv_mode]++;
mbmi->uv_mode = read_intra_mode_uv(cm, r, mbmi->mode);
}
static INLINE int assign_mv(VP9_COMMON *cm, MB_PREDICTION_MODE mode,
int_mv mv[2], int_mv best_mv[2],
int_mv nearest_mv[2], int_mv near_mv[2],
int is_compound, int allow_hp, vp9_reader *r) {
int i;
int ret = 1;
switch (mode) {
case NEWMV:
read_mv(r, &mv[0].as_mv, &best_mv[0].as_mv,
&cm->fc.nmvc, &cm->counts.mv, allow_hp);
if (is_compound)
read_mv(r, &mv[1].as_mv, &best_mv[1].as_mv,
&cm->fc.nmvc, &cm->counts.mv, allow_hp);
for (i = 0; i < 1 + is_compound; ++i) {
ret = ret && mv[i].as_mv.row < MV_UPP && mv[i].as_mv.row > MV_LOW;
ret = ret && mv[i].as_mv.col < MV_UPP && mv[i].as_mv.col > MV_LOW;
}
break;
case NEARESTMV:
mv[0].as_int = nearest_mv[0].as_int;
if (is_compound)
mv[1].as_int = nearest_mv[1].as_int;
break;
case NEARMV:
mv[0].as_int = near_mv[0].as_int;
if (is_compound)
mv[1].as_int = near_mv[1].as_int;
break;
case ZEROMV:
mv[0].as_int = 0;
if (is_compound)
mv[1].as_int = 0;
break;
default:
return 0;
}
return ret;
}
static int read_is_inter_block(VP9D_COMP *pbi, int segment_id, vp9_reader *r) {
@@ -436,15 +486,11 @@ static void read_inter_block_mode_info(VP9D_COMP *pbi, MODE_INFO *mi,
int mi_row, int mi_col, vp9_reader *r) {
VP9_COMMON *const cm = &pbi->common;
MACROBLOCKD *const xd = &pbi->mb;
nmv_context *const nmvc = &cm->fc.nmvc;
MB_MODE_INFO *const mbmi = &mi->mbmi;
int_mv *const mv0 = &mbmi->mv[0];
int_mv *const mv1 = &mbmi->mv[1];
const BLOCK_SIZE bsize = mbmi->sb_type;
const int allow_hp = xd->allow_high_precision_mv;
int_mv nearest, nearby, best_mv;
int_mv nearest_second, nearby_second, best_mv_second;
int_mv nearest[2], nearmv[2], best[2];
uint8_t inter_mode_ctx;
MV_REFERENCE_FRAME ref0;
int is_compound;
@@ -461,7 +507,11 @@ static void read_inter_block_mode_info(VP9D_COMP *pbi, MODE_INFO *mi,
if (vp9_segfeature_active(&cm->seg, mbmi->segment_id, SEG_LVL_SKIP)) {
mbmi->mode = ZEROMV;
assert(bsize >= BLOCK_8X8);
if (bsize < BLOCK_8X8) {
vpx_internal_error(&cm->error, VPX_CODEC_UNSUP_BITSTREAM,
"Invalid usage of segement feature on small blocks");
return;
}
} else {
if (bsize >= BLOCK_8X8)
mbmi->mode = read_inter_mode(cm, r, inter_mode_ctx);
@@ -469,8 +519,8 @@ static void read_inter_block_mode_info(VP9D_COMP *pbi, MODE_INFO *mi,
// nearest, nearby
if (bsize < BLOCK_8X8 || mbmi->mode != ZEROMV) {
vp9_find_best_ref_mvs(xd, mbmi->ref_mvs[ref0], &nearest, &nearby);
best_mv.as_int = mbmi->ref_mvs[ref0][0].as_int;
vp9_find_best_ref_mvs(xd, mbmi->ref_mvs[ref0], &nearest[0], &nearmv[0]);
best[0].as_int = nearest[0].as_int;
}
if (is_compound) {
@@ -479,9 +529,8 @@ static void read_inter_block_mode_info(VP9D_COMP *pbi, MODE_INFO *mi,
ref1, mbmi->ref_mvs[ref1], mi_row, mi_col);
if (bsize < BLOCK_8X8 || mbmi->mode != ZEROMV) {
vp9_find_best_ref_mvs(xd, mbmi->ref_mvs[ref1],
&nearest_second, &nearby_second);
best_mv_second.as_int = mbmi->ref_mvs[ref1][0].as_int;
vp9_find_best_ref_mvs(xd, mbmi->ref_mvs[ref1], &nearest[1], &nearmv[1]);
best[1].as_int = nearest[1].as_int;
}
}
@@ -493,92 +542,50 @@ static void read_inter_block_mode_info(VP9D_COMP *pbi, MODE_INFO *mi,
const int num_4x4_w = num_4x4_blocks_wide_lookup[bsize]; // 1 or 2
const int num_4x4_h = num_4x4_blocks_high_lookup[bsize]; // 1 or 2
int idx, idy;
int b_mode;
for (idy = 0; idy < 2; idy += num_4x4_h) {
for (idx = 0; idx < 2; idx += num_4x4_w) {
int_mv blockmv, secondmv;
int_mv block[2];
const int j = idy * 2 + idx;
const int b_mode = read_inter_mode(cm, r, inter_mode_ctx);
b_mode = read_inter_mode(cm, r, inter_mode_ctx);
if (b_mode == NEARESTMV || b_mode == NEARMV) {
vp9_append_sub8x8_mvs_for_idx(cm, xd, &nearest, &nearby, j, 0,
vp9_append_sub8x8_mvs_for_idx(cm, xd, &nearest[0],
&nearmv[0], j, 0,
mi_row, mi_col);
if (is_compound)
vp9_append_sub8x8_mvs_for_idx(cm, xd, &nearest_second,
&nearby_second, j, 1,
mi_row, mi_col);
vp9_append_sub8x8_mvs_for_idx(cm, xd, &nearest[1],
&nearmv[1], j, 1,
mi_row, mi_col);
}
switch (b_mode) {
case NEWMV:
read_mv(r, &blockmv.as_mv, &best_mv.as_mv, nmvc,
&cm->counts.mv, allow_hp);
if (!assign_mv(cm, b_mode, block, best, nearest, nearmv,
is_compound, allow_hp, r)) {
xd->corrupted |= 1;
break;
};
if (is_compound)
read_mv(r, &secondmv.as_mv, &best_mv_second.as_mv, nmvc,
&cm->counts.mv, allow_hp);
break;
case NEARESTMV:
blockmv.as_int = nearest.as_int;
if (is_compound)
secondmv.as_int = nearest_second.as_int;
break;
case NEARMV:
blockmv.as_int = nearby.as_int;
if (is_compound)
secondmv.as_int = nearby_second.as_int;
break;
case ZEROMV:
blockmv.as_int = 0;
if (is_compound)
secondmv.as_int = 0;
break;
default:
assert(!"Invalid inter mode value");
}
mi->bmi[j].as_mv[0].as_int = blockmv.as_int;
mi->bmi[j].as_mv[0].as_int = block[0].as_int;
if (is_compound)
mi->bmi[j].as_mv[1].as_int = secondmv.as_int;
mi->bmi[j].as_mv[1].as_int = block[1].as_int;
if (num_4x4_h == 2)
mi->bmi[j + 2] = mi->bmi[j];
if (num_4x4_w == 2)
mi->bmi[j + 1] = mi->bmi[j];
mi->mbmi.mode = b_mode;
}
}
mv0->as_int = mi->bmi[3].as_mv[0].as_int;
mv1->as_int = mi->bmi[3].as_mv[1].as_int;
mi->mbmi.mode = b_mode;
mbmi->mv[0].as_int = mi->bmi[3].as_mv[0].as_int;
mbmi->mv[1].as_int = mi->bmi[3].as_mv[1].as_int;
} else {
switch (mbmi->mode) {
case NEARMV:
mv0->as_int = nearby.as_int;
if (is_compound)
mv1->as_int = nearby_second.as_int;
break;
case NEARESTMV:
mv0->as_int = nearest.as_int;
if (is_compound)
mv1->as_int = nearest_second.as_int;
break;
case ZEROMV:
mv0->as_int = 0;
if (is_compound)
mv1->as_int = 0;
break;
case NEWMV:
read_mv(r, &mv0->as_mv, &best_mv.as_mv, nmvc, &cm->counts.mv, allow_hp);
if (is_compound)
read_mv(r, &mv1->as_mv, &best_mv_second.as_mv, nmvc, &cm->counts.mv,
allow_hp);
break;
default:
assert(!"Invalid inter mode value");
}
xd->corrupted |= !assign_mv(cm, mbmi->mode, mbmi->mv,
best, nearest, nearmv,
is_compound, allow_hp, r);
}
}
@@ -610,21 +617,17 @@ static void read_comp_pred(VP9_COMMON *cm, vp9_reader *r) {
if (cm->comp_pred_mode == HYBRID_PREDICTION)
for (i = 0; i < COMP_INTER_CONTEXTS; i++)
if (vp9_read(r, MODE_UPDATE_PROB))
vp9_diff_update_prob(r, &cm->fc.comp_inter_prob[i]);
vp9_diff_update_prob(r, MODE_UPDATE_PROB, &cm->fc.comp_inter_prob[i]);
if (cm->comp_pred_mode != COMP_PREDICTION_ONLY)
for (i = 0; i < REF_CONTEXTS; i++) {
if (vp9_read(r, MODE_UPDATE_PROB))
vp9_diff_update_prob(r, &cm->fc.single_ref_prob[i][0]);
if (vp9_read(r, MODE_UPDATE_PROB))
vp9_diff_update_prob(r, &cm->fc.single_ref_prob[i][1]);
vp9_diff_update_prob(r, MODE_UPDATE_PROB, &cm->fc.single_ref_prob[i][0]);
vp9_diff_update_prob(r, MODE_UPDATE_PROB, &cm->fc.single_ref_prob[i][1]);
}
if (cm->comp_pred_mode != SINGLE_PREDICTION_ONLY)
for (i = 0; i < REF_CONTEXTS; i++)
if (vp9_read(r, MODE_UPDATE_PROB))
vp9_diff_update_prob(r, &cm->fc.comp_ref_prob[i]);
vp9_diff_update_prob(r, MODE_UPDATE_PROB, &cm->fc.comp_ref_prob[i]);
}
void vp9_prepare_read_mode_info(VP9D_COMP* pbi, vp9_reader *r) {
@@ -634,8 +637,7 @@ void vp9_prepare_read_mode_info(VP9D_COMP* pbi, vp9_reader *r) {
// TODO(jkoleszar): does this clear more than MBSKIP_CONTEXTS? Maybe remove.
// vpx_memset(cm->fc.mbskip_probs, 0, sizeof(cm->fc.mbskip_probs));
for (k = 0; k < MBSKIP_CONTEXTS; ++k)
if (vp9_read(r, MODE_UPDATE_PROB))
vp9_diff_update_prob(r, &cm->fc.mbskip_probs[k]);
vp9_diff_update_prob(r, MODE_UPDATE_PROB, &cm->fc.mbskip_probs[k]);
if (cm->frame_type != KEY_FRAME && !cm->intra_only) {
nmv_context *const nmvc = &pbi->common.fc.nmvc;
@@ -648,20 +650,18 @@ void vp9_prepare_read_mode_info(VP9D_COMP* pbi, vp9_reader *r) {
read_switchable_interp_probs(&cm->fc, r);
for (i = 0; i < INTRA_INTER_CONTEXTS; i++)
if (vp9_read(r, MODE_UPDATE_PROB))
vp9_diff_update_prob(r, &cm->fc.intra_inter_prob[i]);
vp9_diff_update_prob(r, MODE_UPDATE_PROB, &cm->fc.intra_inter_prob[i]);
read_comp_pred(cm, r);
for (j = 0; j < BLOCK_SIZE_GROUPS; j++)
for (i = 0; i < INTRA_MODES - 1; ++i)
if (vp9_read(r, MODE_UPDATE_PROB))
vp9_diff_update_prob(r, &cm->fc.y_mode_prob[j][i]);
vp9_diff_update_prob(r, MODE_UPDATE_PROB, &cm->fc.y_mode_prob[j][i]);
for (j = 0; j < NUM_PARTITION_CONTEXTS; ++j)
for (i = 0; i < PARTITION_TYPES - 1; ++i)
if (vp9_read(r, MODE_UPDATE_PROB))
vp9_diff_update_prob(r, &cm->fc.partition_prob[INTER_FRAME][j][i]);
vp9_diff_update_prob(r, MODE_UPDATE_PROB,
&cm->fc.partition_prob[INTER_FRAME][j][i]);
read_mv_probs(r, nmvc, xd->allow_high_precision_mv);
}

View File

@@ -63,18 +63,15 @@ static void read_tx_probs(struct tx_probs *tx_probs, vp9_reader *r) {
for (i = 0; i < TX_SIZE_CONTEXTS; ++i)
for (j = 0; j < TX_SIZES - 3; ++j)
if (vp9_read(r, MODE_UPDATE_PROB))
vp9_diff_update_prob(r, &tx_probs->p8x8[i][j]);
vp9_diff_update_prob(r, MODE_UPDATE_PROB, &tx_probs->p8x8[i][j]);
for (i = 0; i < TX_SIZE_CONTEXTS; ++i)
for (j = 0; j < TX_SIZES - 2; ++j)
if (vp9_read(r, MODE_UPDATE_PROB))
vp9_diff_update_prob(r, &tx_probs->p16x16[i][j]);
vp9_diff_update_prob(r, MODE_UPDATE_PROB, &tx_probs->p16x16[i][j]);
for (i = 0; i < TX_SIZE_CONTEXTS; ++i)
for (j = 0; j < TX_SIZES - 1; ++j)
if (vp9_read(r, MODE_UPDATE_PROB))
vp9_diff_update_prob(r, &tx_probs->p32x32[i][j]);
vp9_diff_update_prob(r, MODE_UPDATE_PROB, &tx_probs->p32x32[i][j]);
}
static void setup_plane_dequants(VP9_COMMON *cm, MACROBLOCKD *xd, int q_index) {
@@ -364,8 +361,8 @@ static void read_coef_probs_common(vp9_coeff_probs_model *coef_probs,
for (l = 0; l < PREV_COEF_CONTEXTS; l++)
if (k > 0 || l < 3)
for (m = 0; m < UNCONSTRAINED_NODES; m++)
if (vp9_read(r, VP9_COEF_UPDATE_PROB))
vp9_diff_update_prob(r, &coef_probs[i][j][k][l][m]);
vp9_diff_update_prob(r, VP9_COEF_UPDATE_PROB,
&coef_probs[i][j][k][l][m]);
}
static void read_coef_probs(FRAME_CONTEXT *fc, TX_MODE tx_mode,
@@ -492,7 +489,8 @@ static INTERPOLATIONFILTERTYPE read_interp_filter_type(
struct vp9_read_bit_buffer *rb) {
const INTERPOLATIONFILTERTYPE literal_to_type[] = { EIGHTTAP_SMOOTH,
EIGHTTAP,
EIGHTTAP_SHARP };
EIGHTTAP_SHARP,
BILINEAR };
return vp9_rb_read_bit(rb) ? SWITCHABLE
: literal_to_type[vp9_rb_read_literal(rb, 2)];
}
@@ -794,6 +792,7 @@ static size_t read_uncompressed_header(VP9D_COMP *pbi,
struct vp9_read_bit_buffer *rb) {
VP9_COMMON *const cm = &pbi->common;
MACROBLOCKD *const xd = &pbi->mb;
size_t sz;
int i;
cm->last_frame_type = cm->frame_type;
@@ -899,8 +898,13 @@ static size_t read_uncompressed_header(VP9D_COMP *pbi,
setup_segmentation(&cm->seg, rb);
setup_tile_info(cm, rb);
sz = vp9_rb_read_literal(rb, 16);
return vp9_rb_read_literal(rb, 16);
if (sz == 0)
vpx_internal_error(&cm->error, VPX_CODEC_CORRUPT_FRAME,
"Invalid header size");
return sz;
}
static int read_compressed_header(VP9D_COMP *pbi, const uint8_t *data,
@@ -950,9 +954,9 @@ int vp9_decode_frame(VP9D_COMP *pbi, const uint8_t **p_data_end) {
YV12_BUFFER_CONFIG *new_fb = &cm->yv12_fb[cm->new_fb_idx];
if (!first_partition_size) {
// showing a frame directly
*p_data_end = data + 1;
return 0;
// showing a frame directly
*p_data_end = data + 1;
return 0;
}
data += vp9_rb_bytes_read(&rb);
xd->corrupted = 0;

View File

@@ -48,8 +48,6 @@ static int merge_index(int v, int n, int modulus) {
static int inv_remap_prob(int v, int m) {
static int inv_map_table[MAX_PROB - 1] = {
// generated by:
// inv_map_table[j] = merge_index(j, MAX_PROB - 1, MODULUS_PARAM);
6, 19, 32, 45, 58, 71, 84, 97, 110, 123, 136, 149, 162, 175, 188,
201, 214, 227, 240, 253, 0, 1, 2, 3, 4, 5, 7, 8, 9, 10,
11, 12, 13, 14, 15, 16, 17, 18, 20, 21, 22, 23, 24, 25, 26,
@@ -66,10 +64,11 @@ static int inv_remap_prob(int v, int m) {
190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 202, 203, 204, 205,
206, 207, 208, 209, 210, 211, 212, 213, 215, 216, 217, 218, 219, 220, 221,
222, 223, 224, 225, 226, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237,
238, 239, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252,
238, 239, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252
};
// v = merge_index(v, MAX_PROBS - 1, MODULUS_PARAM);
// The clamp is not necessary for conforming VP9 stream, it is added to
// prevent out of bound access for bad input data
v = clamp(v, 0, 253);
v = inv_map_table[v];
m--;
if ((m << 1) <= MAX_PROB) {
@@ -100,7 +99,9 @@ static int decode_term_subexp(vp9_reader *r, int k, int num_syms) {
return word;
}
void vp9_diff_update_prob(vp9_reader *r, vp9_prob* p) {
int delp = decode_term_subexp(r, SUBEXP_PARAM, 255);
*p = (vp9_prob)inv_remap_prob(delp, *p);
void vp9_diff_update_prob(vp9_reader *r, int update_prob, vp9_prob* p) {
if (vp9_read(r, update_prob)) {
const int delp = decode_term_subexp(r, SUBEXP_PARAM, 255);
*p = (vp9_prob)inv_remap_prob(delp, *p);
}
}

View File

@@ -14,6 +14,6 @@
#include "vp9/decoder/vp9_dboolhuff.h"
void vp9_diff_update_prob(vp9_reader *r, vp9_prob* p);
void vp9_diff_update_prob(vp9_reader *r, int update_prob, vp9_prob* p);
#endif // VP9_DECODER_VP9_DSUBEXP_H_

View File

@@ -1162,7 +1162,7 @@ static void encode_txfm_probs(VP9_COMP *cpi, vp9_writer *w) {
static void write_interp_filter_type(INTERPOLATIONFILTERTYPE type,
struct vp9_write_bit_buffer *wb) {
const int type_to_literal[] = { 1, 0, 2 };
const int type_to_literal[] = { 1, 0, 2, 3 };
vp9_wb_write_bit(wb, type == SWITCHABLE);
if (type != SWITCHABLE)

View File

@@ -936,7 +936,7 @@ static void copy_partitioning(VP9_COMP *cpi, MODE_INFO **mi_8x8,
for (block_col = 0; block_col < 8; ++block_col) {
MODE_INFO * prev_mi = prev_mi_8x8[block_row * mis + block_col];
BLOCK_SIZE sb_type = prev_mi ? prev_mi->mbmi.sb_type : 0;
int offset;
ptrdiff_t offset;
if (prev_mi) {
offset = prev_mi - cm->prev_mi;
@@ -1044,9 +1044,9 @@ static void fill_variance(var *v, int64_t s2, int64_t s, int c) {
v->sum_error = s;
v->count = c;
if (c > 0)
v->variance = 256
v->variance = (int)(256
* (v->sum_square_error - v->sum_error * v->sum_error / v->count)
/ v->count;
/ v->count);
else
v->variance = 0;
}

View File

@@ -565,16 +565,16 @@ static void set_rd_speed_thresholds(VP9_COMP *cpi, int mode) {
sf->thresh_mult[THR_NEARESTG] = 0;
sf->thresh_mult[THR_NEARESTA] = 0;
sf->thresh_mult[THR_NEWMV] += 1000;
sf->thresh_mult[THR_COMP_NEARESTLA] += 1000;
sf->thresh_mult[THR_NEARMV] += 1000;
sf->thresh_mult[THR_COMP_NEARESTGA] += 1000;
sf->thresh_mult[THR_DC] += 1000;
sf->thresh_mult[THR_NEWG] += 1000;
sf->thresh_mult[THR_NEWMV] += 1000;
sf->thresh_mult[THR_NEWA] += 1000;
sf->thresh_mult[THR_NEWG] += 1000;
sf->thresh_mult[THR_NEARMV] += 1000;
sf->thresh_mult[THR_NEARA] += 1000;
sf->thresh_mult[THR_COMP_NEARESTLA] += 1000;
sf->thresh_mult[THR_COMP_NEARESTGA] += 1000;
sf->thresh_mult[THR_TM] += 1000;
@@ -606,28 +606,6 @@ static void set_rd_speed_thresholds(VP9_COMP *cpi, int mode) {
sf->thresh_mult[THR_D207_PRED] += 2500;
sf->thresh_mult[THR_D63_PRED] += 2500;
if (cpi->sf.skip_lots_of_modes) {
for (i = 0; i < MAX_MODES; ++i)
sf->thresh_mult[i] = INT_MAX;
sf->thresh_mult[THR_DC] = 2000;
sf->thresh_mult[THR_TM] = 2000;
sf->thresh_mult[THR_NEWMV] = 4000;
sf->thresh_mult[THR_NEWG] = 4000;
sf->thresh_mult[THR_NEWA] = 4000;
sf->thresh_mult[THR_NEARESTMV] = 0;
sf->thresh_mult[THR_NEARESTG] = 0;
sf->thresh_mult[THR_NEARESTA] = 0;
sf->thresh_mult[THR_NEARMV] = 2000;
sf->thresh_mult[THR_NEARG] = 2000;
sf->thresh_mult[THR_NEARA] = 2000;
sf->thresh_mult[THR_COMP_NEARESTLA] = 2000;
sf->thresh_mult[THR_SPLITMV] = 2500;
sf->thresh_mult[THR_SPLITG] = 2500;
sf->thresh_mult[THR_SPLITA] = 2500;
sf->recode_loop = 0;
}
/* disable frame modes if flags not set */
if (!(cpi->ref_frame_flags & VP9_LAST_FLAG)) {
sf->thresh_mult[THR_NEWMV ] = INT_MAX;
@@ -714,7 +692,6 @@ void vp9_set_speed_features(VP9_COMP *cpi) {
sf->adaptive_motion_search = 0;
sf->use_avoid_tested_higherror = 0;
sf->reference_masking = 0;
sf->skip_lots_of_modes = 0;
sf->partition_by_variance = 0;
sf->use_one_partition_size_always = 0;
sf->less_rectangular_check = 0;
@@ -796,7 +773,7 @@ void vp9_set_speed_features(VP9_COMP *cpi) {
sf->intra_y_mode_mask = INTRA_DC_TM_H_V;
sf->intra_uv_mode_mask = INTRA_DC_TM_H_V;
sf->use_fast_coef_updates = 1;
sf->mode_skip_start = 9;
sf->mode_skip_start = 11;
}
if (speed == 2) {
sf->less_rectangular_check = 1;
@@ -835,7 +812,7 @@ void vp9_set_speed_features(VP9_COMP *cpi) {
sf->disable_split_var_thresh = 32;
sf->disable_filter_search_var_thresh = 32;
sf->use_fast_coef_updates = 2;
sf->mode_skip_start = 9;
sf->mode_skip_start = 6;
}
if (speed == 3) {
sf->comp_inter_joint_search_thresh = BLOCK_SIZES;
@@ -863,7 +840,7 @@ void vp9_set_speed_features(VP9_COMP *cpi) {
sf->intra_y_mode_mask = INTRA_DC_ONLY;
sf->intra_uv_mode_mask = INTRA_DC_ONLY;
sf->use_fast_coef_updates = 2;
sf->mode_skip_start = 9;
sf->mode_skip_start = 6;
}
if (speed == 4) {
sf->comp_inter_joint_search_thresh = BLOCK_SIZES;
@@ -895,7 +872,7 @@ void vp9_set_speed_features(VP9_COMP *cpi) {
sf->intra_y_mode_mask = INTRA_DC_ONLY;
sf->intra_uv_mode_mask = INTRA_DC_ONLY;
sf->use_fast_coef_updates = 2;
sf->mode_skip_start = 9;
sf->mode_skip_start = 6;
}
break;

View File

@@ -267,7 +267,6 @@ typedef struct {
TX_SIZE_SEARCH_METHOD tx_size_search_method;
int use_lp32x32fdct;
int use_avoid_tested_higherror;
int skip_lots_of_modes;
int partition_by_variance;
int use_one_partition_size_always;
int less_rectangular_check;

View File

@@ -369,8 +369,8 @@ static void model_rd_from_var_lapndz(int var, int n, int qstep,
double s2 = (double) var / n;
double x = qstep / sqrt(s2);
model_rd_norm(x, &R, &D);
*rate = ((n << 8) * R + 0.5);
*dist = (var * D + 0.5);
*rate = (int)((n << 8) * R + 0.5);
*dist = (int)(var * D + 0.5);
}
vp9_clear_system_state();
}
@@ -397,7 +397,7 @@ static void model_rd_for_sb(VP9_COMP *cpi, BLOCK_SIZE bsize,
pd->dequant[1] >> 3, &rate, &dist);
rate_sum += rate;
dist_sum += dist;
dist_sum += (int)dist;
}
*out_rate_sum = rate_sum;
@@ -868,8 +868,8 @@ static void choose_txfm_size_from_modelrd(VP9_COMP *cpi, MACROBLOCK *x,
}
}
for (n = TX_4X4; n <= max_txfm_size; n++) {
rd[n][0] = (scale_rd[n] * rd[n][0]);
rd[n][1] = (scale_rd[n] * rd[n][1]);
rd[n][0] = (int64_t)(scale_rd[n] * rd[n][0]);
rd[n][1] = (int64_t)(scale_rd[n] * rd[n][1]);
}
if (max_txfm_size == TX_32X32 &&
@@ -1446,6 +1446,7 @@ static int labels2mode(MACROBLOCK *x, int i,
int idx, idy;
const int num_4x4_blocks_wide = num_4x4_blocks_wide_lookup[mbmi->sb_type];
const int num_4x4_blocks_high = num_4x4_blocks_high_lookup[mbmi->sb_type];
const int has_second_rf = has_second_ref(mbmi);
/* We have to be careful retrieving previously-encoded motion vectors.
Ones from this macroblock have to be pulled from the BLOCKD array
@@ -1459,7 +1460,7 @@ static int labels2mode(MACROBLOCK *x, int i,
this_mv->as_int = seg_mvs[mbmi->ref_frame[0]].as_int;
thismvcost = vp9_mv_bit_cost(this_mv, best_ref_mv, mvjcost, mvcost,
102);
if (mbmi->ref_frame[1] > 0) {
if (has_second_rf) {
this_second_mv->as_int = seg_mvs[mbmi->ref_frame[1]].as_int;
thismvcost += vp9_mv_bit_cost(this_second_mv, second_best_ref_mv,
mvjcost, mvcost, 102);
@@ -1467,19 +1468,19 @@ static int labels2mode(MACROBLOCK *x, int i,
break;
case NEARESTMV:
this_mv->as_int = frame_mv[NEARESTMV][mbmi->ref_frame[0]].as_int;
if (mbmi->ref_frame[1] > 0)
if (has_second_rf)
this_second_mv->as_int =
frame_mv[NEARESTMV][mbmi->ref_frame[1]].as_int;
break;
case NEARMV:
this_mv->as_int = frame_mv[NEARMV][mbmi->ref_frame[0]].as_int;
if (mbmi->ref_frame[1] > 0)
if (has_second_rf)
this_second_mv->as_int =
frame_mv[NEARMV][mbmi->ref_frame[1]].as_int;
break;
case ZEROMV:
this_mv->as_int = 0;
if (mbmi->ref_frame[1] > 0)
if (has_second_rf)
this_second_mv->as_int = 0;
break;
default:
@@ -1490,7 +1491,7 @@ static int labels2mode(MACROBLOCK *x, int i,
mbmi->mode_context[mbmi->ref_frame[0]]);
mic->bmi[i].as_mv[0].as_int = this_mv->as_int;
if (mbmi->ref_frame[1] > 0)
if (has_second_rf)
mic->bmi[i].as_mv[1].as_int = this_second_mv->as_int;
x->partition_info->bmi[i].mode = m;
@@ -1623,7 +1624,7 @@ static INLINE void mi_buf_shift(MACROBLOCK *x, int i) {
assert(((intptr_t)pd->pre[0].buf & 0x7) == 0);
pd->pre[0].buf = raster_block_offset_uint8(BLOCK_8X8, i, pd->pre[0].buf,
pd->pre[0].stride);
if (mbmi->ref_frame[1])
if (has_second_ref(mbmi))
pd->pre[1].buf = raster_block_offset_uint8(BLOCK_8X8, i, pd->pre[1].buf,
pd->pre[1].stride);
}
@@ -1633,7 +1634,7 @@ static INLINE void mi_buf_restore(MACROBLOCK *x, struct buf_2d orig_src,
MB_MODE_INFO *mbmi = &x->e_mbd.mi_8x8[0]->mbmi;
x->plane[0].src = orig_src;
x->e_mbd.plane[0].pre[0] = orig_pre[0];
if (mbmi->ref_frame[1])
if (has_second_ref(mbmi))
x->e_mbd.plane[0].pre[1] = orig_pre[1];
}
@@ -1658,6 +1659,7 @@ static void rd_check_segment_txsize(VP9_COMP *cpi, MACROBLOCK *x,
BEST_SEG_INFO *bsi = bsi_buf + filter_idx;
int mode_idx;
int subpelmv = 1, have_ref = 0;
const int has_second_rf = has_second_ref(mbmi);
vpx_memcpy(t_above, x->e_mbd.plane[0].above_context, sizeof(t_above));
vpx_memcpy(t_left, x->e_mbd.plane[0].left_context, sizeof(t_left));
@@ -1687,7 +1689,7 @@ static void rd_check_segment_txsize(VP9_COMP *cpi, MACROBLOCK *x,
&frame_mv[NEARESTMV][mbmi->ref_frame[0]],
&frame_mv[NEARMV][mbmi->ref_frame[0]],
i, 0, mi_row, mi_col);
if (mbmi->ref_frame[1] > 0)
if (has_second_rf)
vp9_append_sub8x8_mvs_for_idx(&cpi->common, &x->e_mbd,
&frame_mv[NEARESTMV][mbmi->ref_frame[1]],
&frame_mv[NEARMV][mbmi->ref_frame[1]],
@@ -1705,7 +1707,7 @@ static void rd_check_segment_txsize(VP9_COMP *cpi, MACROBLOCK *x,
if ((this_mode == NEARMV || this_mode == NEARESTMV ||
this_mode == ZEROMV) &&
frame_mv[this_mode][mbmi->ref_frame[0]].as_int == 0 &&
(mbmi->ref_frame[1] <= 0 ||
(!has_second_rf ||
frame_mv[this_mode][mbmi->ref_frame[1]].as_int == 0)) {
int rfc = mbmi->mode_context[mbmi->ref_frame[0]];
int c1 = cost_mv_ref(cpi, NEARMV, rfc);
@@ -1720,7 +1722,7 @@ static void rd_check_segment_txsize(VP9_COMP *cpi, MACROBLOCK *x,
continue;
} else {
assert(this_mode == ZEROMV);
if (mbmi->ref_frame[1] <= 0) {
if (!has_second_rf) {
if ((c3 >= c2 &&
frame_mv[NEARESTMV][mbmi->ref_frame[0]].as_int == 0) ||
(c3 >= c1 &&
@@ -1745,7 +1747,7 @@ static void rd_check_segment_txsize(VP9_COMP *cpi, MACROBLOCK *x,
sizeof(bsi->rdstat[i][mode_idx].tl));
// motion search for newmv (single predictor case only)
if (mbmi->ref_frame[1] <= 0 && this_mode == NEWMV &&
if (!has_second_rf && this_mode == NEWMV &&
seg_mvs[i][mbmi->ref_frame[0]].as_int == INVALID_MV) {
int step_param = 0;
int further_steps;
@@ -1856,7 +1858,7 @@ static void rd_check_segment_txsize(VP9_COMP *cpi, MACROBLOCK *x,
mi_buf_restore(x, orig_src, orig_pre);
}
if (mbmi->ref_frame[1] > 0 && this_mode == NEWMV &&
if (has_second_rf && this_mode == NEWMV &&
mbmi->interp_filter == EIGHTTAP) {
if (seg_mvs[i][mbmi->ref_frame[1]].as_int == INVALID_MV ||
seg_mvs[i][mbmi->ref_frame[0]].as_int == INVALID_MV)
@@ -1891,7 +1893,7 @@ static void rd_check_segment_txsize(VP9_COMP *cpi, MACROBLOCK *x,
if (num_4x4_blocks_high > 1)
bsi->rdstat[i + 2][mode_idx].mvs[0].as_int =
mode_mv[this_mode].as_int;
if (mbmi->ref_frame[1] > 0) {
if (has_second_rf) {
bsi->rdstat[i][mode_idx].mvs[1].as_int =
second_mode_mv[this_mode].as_int;
if (num_4x4_blocks_wide > 1)
@@ -1905,7 +1907,7 @@ static void rd_check_segment_txsize(VP9_COMP *cpi, MACROBLOCK *x,
// Trap vectors that reach beyond the UMV borders
if (mv_check_bounds(x, &mode_mv[this_mode]))
continue;
if (mbmi->ref_frame[1] > 0 &&
if (has_second_rf &&
mv_check_bounds(x, &second_mode_mv[this_mode]))
continue;
@@ -1915,7 +1917,7 @@ static void rd_check_segment_txsize(VP9_COMP *cpi, MACROBLOCK *x,
(mode_mv[this_mode].as_mv.col & 0x0f);
have_ref = mode_mv[this_mode].as_int ==
ref_bsi->rdstat[i][mode_idx].mvs[0].as_int;
if (mbmi->ref_frame[1] > 0) {
if (has_second_rf) {
subpelmv |= (second_mode_mv[this_mode].as_mv.row & 0x0f) ||
(second_mode_mv[this_mode].as_mv.col & 0x0f);
have_ref &= second_mode_mv[this_mode].as_int ==
@@ -1926,7 +1928,7 @@ static void rd_check_segment_txsize(VP9_COMP *cpi, MACROBLOCK *x,
ref_bsi = bsi_buf + 1;
have_ref = mode_mv[this_mode].as_int ==
ref_bsi->rdstat[i][mode_idx].mvs[0].as_int;
if (mbmi->ref_frame[1] > 0) {
if (has_second_rf) {
have_ref &= second_mode_mv[this_mode].as_int ==
ref_bsi->rdstat[i][mode_idx].mvs[1].as_int;
}
@@ -2059,7 +2061,7 @@ static int64_t rd_pick_best_mbsegmentation(VP9_COMP *cpi, MACROBLOCK *x,
for (i = 0; i < 4; i++) {
mode_idx = inter_mode_offset(bsi->modes[i]);
mi->bmi[i].as_mv[0].as_int = bsi->rdstat[i][mode_idx].mvs[0].as_int;
if (mbmi->ref_frame[1] > 0)
if (has_second_ref(mbmi))
mi->bmi[i].as_mv[1].as_int = bsi->rdstat[i][mode_idx].mvs[1].as_int;
xd->plane[0].eobs[i] = bsi->rdstat[i][mode_idx].eobs;
x->partition_info->bmi[i].mode = bsi->modes[i];
@@ -2904,7 +2906,7 @@ static int64_t handle_inter_mode(VP9_COMP *cpi, MACROBLOCK *x,
unsigned int thresh_ac;
// The encode_breakout input
unsigned int encode_breakout = x->encode_breakout << 4;
int max_thresh = 36000;
unsigned int max_thresh = 36000;
// Use extreme low threshold for static frames to limit skipping.
if (cpi->enable_encode_breakout == 2)
@@ -3251,7 +3253,7 @@ int64_t vp9_rd_pick_inter_mode_sb(VP9_COMP *cpi, MACROBLOCK *x,
assert(!"Invalid Reference frame");
}
}
if (cpi->mode_skip_mask & (1 << mode_index))
if (cpi->mode_skip_mask & ((int64_t)1 << mode_index))
continue;
}
@@ -3448,7 +3450,7 @@ int64_t vp9_rd_pick_inter_mode_sb(VP9_COMP *cpi, MACROBLOCK *x,
// Disable intra modes other than DC_PRED for blocks with low variance
// Threshold for intra skipping based on source variance
// TODO(debargha): Specialize the threshold for super block sizes
static const int skip_intra_var_thresh[BLOCK_SIZES] = {
static const unsigned int skip_intra_var_thresh[BLOCK_SIZES] = {
64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64,
};
if ((cpi->sf.mode_search_skip_flags & FLAG_SKIP_INTRA_LOWVAR) &&

View File

@@ -589,7 +589,8 @@ static void pick_quickcompress_mode(vpx_codec_alg_priv_t *ctx,
static int write_superframe_index(vpx_codec_alg_priv_t *ctx) {
uint8_t marker = 0xc0;
int mag, mask, index_sz;
unsigned int mask;
int mag, index_sz;
assert(ctx->pending_frame_count);
assert(ctx->pending_frame_count <= 8);

View File

@@ -655,8 +655,10 @@ static vpx_codec_err_t get_frame_corrupted(vpx_codec_alg_priv_t *ctx,
if (corrupted) {
VP9D_COMP *pbi = (VP9D_COMP *)ctx->pbi;
*corrupted = pbi->common.frame_to_show->corrupted;
if (pbi)
*corrupted = pbi->common.frame_to_show->corrupted;
else
return VPX_CODEC_ERROR;
return VPX_CODEC_OK;
} else
return VPX_CODEC_INVALID_PARAM;

View File

@@ -215,11 +215,11 @@ vpx_codec_err_t vpx_codec_encode(vpx_codec_ctx_t *ctx,
else if (!(ctx->iface->caps & VPX_CODEC_CAP_ENCODER))
res = VPX_CODEC_INCAPABLE;
else {
unsigned int num_enc = ctx->priv->enc.total_encoders;
/* Execute in a normalized floating point environment, if the platform
* requires it.
*/
unsigned int num_enc = ctx->priv->enc.total_encoders;
FLOATING_POINT_INIT();
if (num_enc == 1)