Yaowu found that this function failed to compile with MSVC because
of the use of _mm_storel_pi((__m64 *)(dest + 0 * stride), (__m128)p0).
To be safe, changed back to using an integer store instruction.
Also, for some builds, diff is not always 16-byte aligned; changed
the code to account for that.
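A minimal sketch of the integer-store workaround (store_low8 is an
illustrative helper name, not the actual libvpx function; the real change
only swaps the store intrinsic):

    #include <emmintrin.h>  /* SSE2 intrinsics */
    #include <stdint.h>

    /* The float-typed store MSVC rejected looked roughly like
     *   _mm_storel_pi((__m64 *)dest, (__m128)p0);
     * The integer MOVQ store below avoids the __m64/__m128 casts entirely. */
    static void store_low8(uint8_t *dest, __m128i p0) {
      _mm_storel_epi64((__m128i *)dest, p0);  /* store low 64 bits of p0 */
    }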
Change-Id: I9995e5446af15dad18f3c5c0bad1ae68abef6c0d
This patch revamps the entropy coding of coefficients to first code
a non-zero count per coded block and correspondingly remove the EOB
token from the token set.
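For illustration only (count_nonzero_coeffs is a hypothetical helper, not
the actual tokenizer code): the encoder counts the non-zero quantized
coefficients of a block and codes that count up front, so the token stream
no longer needs an EOB token to mark where the coefficients end.

    #include <stddef.h>
    #include <stdint.h>

    /* Count the non-zero quantized coefficients of a block; this is the
     * value coded once per block under the nzc scheme. */
    static int count_nonzero_coeffs(const int16_t *qcoeff, size_t n) {
      int nzc = 0;
      for (size_t i = 0; i < n; ++i) nzc += (qcoeff[i] != 0);
      return nzc;
    }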
STATUS:
Main encode/decode code achieving encode/decode sync - done.
Forward and backward probability updates to the nzcs - done.
Rd costing updates for nzcs - done.
Note: The dynamic programming approach used in trellis quantization
is not exactly compatible with nzcs. A suboptimal approach has been
used instead where branch costs are updated to account for changes
in the nzcs.
TODO:
Training the default probs/counts for nzcs
Change-Id: I951bc1e22f47885077a7453a09b0493daa77883d
Added a variant of the one-shot maxQ flag
for two-pass encoding that forces a fixed Q for
the normal inter frames. Disabled by default.
Also a small adjustment to the bits-per-MB
estimation.
Change-Id: I87efdfb2d094fe1340ca9ddae37470d7b278c8b8
Split macroblock and superblock tokenization and detokenization
functions and coefficient-related data structs so that the bitstream
layout and related code of superblock coefficients looks less like it's
a hack to fit macroblocks in superblocks.
In addition, unify chroma transform size selection with the luma transform
size (i.e. always use the same size, as long as it fits the predictor);
in practice, this means 32x32 and 64x64 superblocks using the 16x16 luma
transform will now use the 16x16 (instead of the 8x8) chroma transform,
and 64x64 superblocks using the 32x32 luma transform will now use the
32x32 (instead of the 16x16) chroma transform.
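A rough sketch of the unified selection rule (the enum and function below
are illustrative, not the actual libvpx types): chroma reuses the luma
transform size, capped at the largest transform that still fits the
half-sized (4:2:0) chroma predictor.

    typedef enum { TX_4X4 = 0, TX_8X8, TX_16X16, TX_32X32 } TX_SIZE;

    static TX_SIZE chroma_tx_size(TX_SIZE luma_tx, int block_size_log2) {
      /* The chroma block is half the luma block in each direction, so the
       * largest transform that fits is one step below the block size:
       * 64x64 -> up to TX_32X32, 32x32 -> up to TX_16X16, 16x16 -> TX_8X8. */
      TX_SIZE max_chroma_tx;
      switch (block_size_log2) {
        case 6:  max_chroma_tx = TX_32X32; break;  /* 64x64 superblock */
        case 5:  max_chroma_tx = TX_16X16; break;  /* 32x32 superblock */
        default: max_chroma_tx = TX_8X8;   break;  /* 16x16 macroblock */
      }
      return luma_tx < max_chroma_tx ? luma_tx : max_chroma_tx;
    }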
Lastly, add a trellis optimize function for 32x32 transform blocks.
HD gains about 0.3%, STDHD about 0.15% and derf about 0.1%. There are
a few negative points here and there that I might want to analyze
a little more closely.
Change-Id: Ibad7c3ddfe1acfc52771dfc27c03e9783e054430
Fixed a couple of variable/function definitions, as well as header
handling to support 16K sequence coding at high bit-rates.
The width and height are each specified by two bytes in the header.
Use an extra byte to explicitly indicate the scaling factors in
both directions, each ranging from 0 to 15.
Tested coding up to 16400x16400 dimension.
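Illustrative header write matching the layout described above (the function
and byte order are stand-ins, not the actual bitstream writer): two bytes
each for width and height, plus one byte packing the two 4-bit scaling
factors.

    #include <stdint.h>

    static uint8_t *write_size_and_scale(uint8_t *p, int width, int height,
                                         int hscale, int vscale) {
      p[0] = width & 0xff;          /* width, low byte                 */
      p[1] = (width >> 8) & 0xff;   /* width, high byte: covers 16400  */
      p[2] = height & 0xff;
      p[3] = (height >> 8) & 0xff;
      p[4] = (hscale & 0x0f) | ((vscale & 0x0f) << 4);  /* two 4-bit factors */
      return p + 5;
    }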
Change-Id: Ibc2225c6036620270f2c0cf5172d1760aaec10ec
Simplified the idct32x32 calculation when there are 10 or fewer
non-zero coefficients in a 32x32 block. This helps decoder
performance.
Change-Id: If7f8893d27b64a9892b4b2621a37fdf4ac0c2a6d
The ABOVESPREFMV experiment uses four pixels to the left of the
current block, which don't exist for the left-most column.
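A minimal sketch of the implied guard (names and the fallback value are
illustrative, not the actual ABOVESPREFMV code): only read the four pixels
to the left when the block is not in the left-most column.

    #include <stdint.h>

    /* recon points at the top-left reconstructed sample of the current block. */
    static void get_left_pixels(const uint8_t *recon, int stride, int mb_col,
                                uint8_t left4[4]) {
      if (mb_col > 0) {
        for (int i = 0; i < 4; ++i)
          left4[i] = recon[i * stride - 1];  /* pixels just left of the block */
      } else {
        for (int i = 0; i < 4; ++i)
          left4[i] = 128;                    /* no left neighbor available    */
      }
    }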
Change-Id: I4cf0b42ae8f54c0b3e7b1ed8755704b74fafc39c
Removing redundant variables, using x *= y instead of x = x * y, and
moving variable declarations into inner blocks.
Change-Id: I884f95c755f55d51b7c1c6585f10296919063e41
The width and height stored in the reference frames are padded out to
a multiple of 16. The Width and Height variables in common are the
displayed size, which may be smaller. The incorrect comparison was
causing scaling-related code to be called when it shouldn't have
been. A notable case where this happens is 1080p, since 1088 != 1080.
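A worked illustration of the mismatch (variable names are stand-ins for
the actual fields): the reference buffer stores dimensions padded to a
multiple of 16, so comparing them against the displayed size falsely
signals that scaling is needed.

    #include <stdio.h>

    static int aligned_to_16(int v) { return (v + 15) & ~15; }

    int main(void) {
      const int display_h = 1080;      /* displayed height (Height in common)  */
      const int ref_display_h = 1080;  /* reference frame, same displayed size */
      const int ref_buffer_h = aligned_to_16(ref_display_h);  /* padded: 1088  */

      printf("buggy check: %d\n", ref_buffer_h != display_h);   /* 1: scales   */
      printf("fixed check: %d\n", ref_display_h != display_h);  /* 0: correct  */
      return 0;
    }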
Change-Id: I55f743eeeeaefbf2e777e193bc9a77ff726e16b5
The sse4_1 code used uint16_t for returning the SAD, but that
won't work for 32x32 or 64x64 blocks. This change fixes the
assembly for those sizes and also re-enables sse4_1 on Linux.
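A worked check of the overflow (this is just the arithmetic, not the
assembly itself): with 8-bit pixels the per-pixel absolute difference is
at most 255, so a 64x64 SAD can reach 64 * 64 * 255 = 1,044,480 and a
32x32 SAD can reach 261,120, both above the 65,535 a uint16_t can hold.

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
      const uint32_t max_sad_32x32 = 32u * 32u * 255u;  /* 261120  */
      const uint32_t max_sad_64x64 = 64u * 64u * 255u;  /* 1044480 */
      printf("32x32 max SAD: %u (uint16_t max: %u)\n",
             max_sad_32x32, (unsigned)UINT16_MAX);
      printf("64x64 max SAD: %u\n", max_sad_64x64);
      return 0;
    }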
Change-Id: I5ce7288d581db870a148e5f7c5092826f59edd81