Moving code from the vp9_decode_frame function into setup_loopfilter and
setup_segmentation functions, plus a little bit of cleanup.
Change-Id: I2cce1813e4d7aeec701ccf752bf57e3bdd41b51c
Adds a per-frame, strength-adjustable, in-loop deringing filter. Uses
the existing vp9_post_proc_down_and_across 5-tap thresholded blur
code, with a brute-force search for the threshold.
Results are almost strictly positive on the YT HD set, either having no
effect or helping PSNR in the range of 1-3% (overall average 0.8%).
Results are more mixed for the CIF set (-0.5% min, 1.4% max, 0.1% avg).
This has an almost strictly negative impact on SSIM, so examining a
different filter or a more balanced search heuristic is in order.
Other test set results pending.
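A minimal sketch of the brute-force search, assuming hypothetical
helpers (apply_dering standing in for the thresholded blur driven by
vp9_post_proc_down_and_across, compute_psnr for the distortion
measurement); this is not the actual encoder code:

    #include <stdint.h>

    #define MAX_DERING_LEVEL 16  /* assumed search range */

    /* Hypothetical stand-ins for the thresholded 5-tap blur and the
     * encoder's PSNR measurement. */
    void apply_dering(const uint8_t *rec, uint8_t *tmp,
                      int w, int h, int stride, int level);
    double compute_psnr(const uint8_t *src, const uint8_t *rec,
                        int w, int h, int stride);

    int search_dering_level(const uint8_t *src, const uint8_t *rec,
                            uint8_t *tmp, int w, int h, int stride) {
      int level, best_level = 0;
      double p, best_psnr = compute_psnr(src, rec, w, h, stride);
      for (level = 1; level <= MAX_DERING_LEVEL; ++level) {
        /* Filter the reconstruction into tmp at this strength. */
        apply_dering(rec, tmp, w, h, stride, level);
        p = compute_psnr(src, tmp, w, h, stride);
        if (p > best_psnr) {
          best_psnr = p;
          best_level = level;
        }
      }
      return best_level;  /* 0 disables the filter for this frame */
    }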
Change-Id: I5ca6ee8fe292dfa3f2eab7f65332423fa1710b58
Replaces the default tables for single coefficient magnitudes with
those obtained from an appropriate distribution. The EOB node
is left unchanged. The model is represented as a 256-size codebook
where the index corresponds to the probability of the Zero or the
One node. Two variations are implemented, corresponding to whether
the Zero node or the One node is used as the peg. The main advantage
is that the default prob tables will become considerably smaller and
more manageable. Besides, there is substantially less risk of
over-fitting to a training set.
Various distributions were tried, and the one that gives the best
results is the family of Generalized Gaussian distributions with
shape parameter 0.75. The results are within about 0.2% of fully
trained tables for the Zero peg variant, and within 0.1% for the
One peg variant.
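For reference, one standard parameterization of the Generalized
Gaussian density, with scale \alpha, shape \beta (here \beta = 0.75),
and Gamma function \Gamma:

    f(x) = \frac{\beta}{2\alpha\,\Gamma(1/\beta)}
           \exp\!\left(-\left(\frac{|x|}{\alpha}\right)^{\beta}\right)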
The forward updates are optionally (controlled by a macro)
model-based, i.e. restricted to only convey probabilities from the
codebook. Backward updates can also optionally (controlled by
another macro) be model-based, but this is turned off by default.
Currently, model-based forward updates work about as well as
unconstrained updates, but there is a drop in performance with
model-based backward updates.
The model-based approach also allows the probabilities for the key
frames to be adjusted from the defaults based on the base_qindex of
the frame. Currently the adjustment function is a placeholder that
adjusts the probs of the EOB and Zero nodes away from their nominal
values at the higher-quality (lower qindex) and lower-quality (higher
qindex) ends of the range. The rest of the probabilities are then
derived from the model, based on the adjusted prob of zero.
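A rough sketch of what such a placeholder adjustment could look like;
vp9_prob is the usual 8-bit probability type, while MID_QINDEX,
ADJUST_STEP, the linear form, and the direction of the shift are all
assumptions for illustration:

    #include <stdint.h>

    typedef uint8_t vp9_prob;
    #define MID_QINDEX 128   /* hypothetical midpoint of the qindex range */
    #define ADJUST_STEP 64   /* hypothetical slope */

    /* Sketch only: shift the zero-node probability with base_qindex,
     * then rederive the other node probabilities from the codebook
     * model. */
    static vp9_prob adjust_prob_zero(vp9_prob nominal, int base_qindex) {
      const int delta = ((base_qindex - MID_QINDEX) * ADJUST_STEP) >> 8;
      const int p = nominal + delta;
      return (vp9_prob)(p < 1 ? 1 : (p > 255 ? 255 : p));
    }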
Change-Id: Iae050f3cbcc6d8b3f204e8dc395ae47b3b2192c9
Lower-case variable names; simplify code by using the already defined
clamp and read_le16 functions.
Change-Id: I8fd544365bd8d1daed86d7b2ae0843e4ef80df08
Wrote an SSE2 version of the vp9_short_idct10_16x16 function. Compared
to the C version, the SSE2 version is 2.3x faster.
Change-Id: I314c4f09369648721798321eeed6f58e38857f26
Making initialization of the mb_to_{top,bottom,left,right}_edge
variables consistent after set_mb_row & set_mb_col calls, plus a
little bit of code cleanup.
Change-Id: I245bfe32c5701e9836956dc25cf8c770d109cbc1
Wrote an SSE2 version of the vp9_short_idct16x16 function. Compared to
the C version, the SSE2 version is over 2.5x faster.
Change-Id: I38536e2b846427a2cc5c5423aaf305fd0e605d61
Renaming Width to width, Height to height and Version to version in
several structs and function signatures.
Change-Id: I084c3f7e747cb2ce3345aff27a3dff9b13a87543
Wrote SSE2 versions of the vp9_short_idct8x8 and vp9_short_idct10_8x8
functions. Compared to the C version, the SSE2 version is 2x faster.
The decoder test didn't show a noticeable gain, since the 8x8 idct
takes little decoding time (less than 1% in my test).
Change-Id: I56313e18cd481700b3b52c4eda5ca204ca6365f3
If the intended display size is different from the size the frame is
coded at, then send that size explicitly in the bitstream. Adds a new
bit to the frame header to indicate whether the extra size fields
are present.
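A hedged sketch of the resulting header layout; the writer calls,
field widths, and struct fields below are illustrative rather than the
actual bitstream API:

    struct writer;  /* opaque hypothetical bit writer */
    struct frame { int width, height, display_width, display_height; };

    void write_bit(struct writer *bw, int bit);
    void write_literal(struct writer *bw, int value, int bits);

    /* Illustrative only: the new bit gates the extra size fields. */
    void write_frame_size(struct writer *bw, const struct frame *cm) {
      write_literal(bw, cm->width, 16);
      write_literal(bw, cm->height, 16);
      if (cm->display_width != cm->width ||
          cm->display_height != cm->height) {
        write_bit(bw, 1);  /* extra size fields present */
        write_literal(bw, cm->display_width, 16);
        write_literal(bw, cm->display_height, 16);
      } else {
        write_bit(bw, 0);  /* display size equals coded size */
      }
    }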
Change-Id: I525c66f22d207efaf1e5f903c6a2a91b80245854
This fix resolves some of the mismatch issues being seen
recently. While this is the right thing to do when tiling
is used for this experiment, it is not the underlying cause
of the mismatches.
Something else is causing writes outside the allowable frame
area in the encoder, leading to this mismatch.
Change-Id: If52c6f67555aa18ab8762865384e323b47237277
Moving identical code to separate functions; putting variable
declarations and initialization on the same line.
Change-Id: Ifa6474a64189f9d8051e88e19850453b0227752c
Updates the YV12_BUFFER_CONFIG structure to be crop-aware. The
existing width/height parameters are left unchanged, storing the
width and height aligned to a 16-byte boundary. The cropped
dimensions are added as new fields.
This fixes a nasty visual pulse when switching between scaled and
unscaled frame dimensions due to a mismatch between the scaling
ratio and the 16-byte aligned sizes.
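In struct terms the change is along these lines; the field names
follow the usual y_width / y_crop_width pattern but should be read as
a sketch, not the exact definition:

    typedef struct yv12_buffer_config {
      int y_width;        /* luma width, aligned to a 16-byte boundary */
      int y_height;       /* luma height, aligned likewise */
      int y_crop_width;   /* new: actual (cropped) width */
      int y_crop_height;  /* new: actual (cropped) height */
      /* ... strides, chroma dimensions, plane pointers, etc. ... */
    } YV12_BUFFER_CONFIG;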
Change-Id: Id4a3f6aea6b9b9ae38bdfa1b87b7eb2cfcdd57b6
This is like VP8_COPY_REFERENCE, but returns a pointer to the reference
frame rather than a copy of it. This is useful when the application
doesn't know what the size of the reference is, as is the case when
scaling is in effect.
Change-Id: I63667109f65510364d0e397ebe56217140772085
When coding the frame that corresponds to the midpoint frame
defining an ARF, do not update the last reference frame buffer.
Previously this buffer was updated, meaning that when coding the next
ARF, all the reference buffers were the same (or nearly so).
Turning the update off means that the frame before is still available
as an alternative predictor and for use in compound prediction.
Also fixed inconsistency in test for mismatch (patch from JK).
Net average gains (derf 0.049, yt 0.163, yt-hd 0.207, std-hd 0.286)
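In encoder terms the change boils down to something like the
following when coding the midpoint frame; the predicate and the
refresh flag name are assumptions modeled on the usual libvpx
refresh flags, not the actual patch:

    /* Sketch: skip the last-frame buffer update for the ARF midpoint
     * frame so the previous frame remains available as an alternative
     * predictor and for compound prediction. */
    if (is_arf_midpoint_frame(cpi))   /* hypothetical predicate */
      cpi->refresh_last_frame = 0;    /* assumed refresh flag */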
Change-Id: Ifee21da21ccbb1648ac2eafe890d3ce60562c7bc
Removing redundant code, introducing new functions for better
decomposition, and adding a 'clamp' function to vp9_common.h.
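The added helper is presumably the usual three-way clamp, roughly
(INLINE being the libvpx portability macro):

    static INLINE int clamp(int value, int low, int high) {
      return value < low ? low : (value > high ? high : value);
    }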
Change-Id: Ic3b8ca13bbc38f60f0c9c43910b5802005e31aaf
Adds probability updates for extra bits for the nzcs, code for
getting nzc stats, plus some minor cleanups and fixes.
Change-Id: If2814e7f04fb52f5025ad9f400f3e6c50a00b543
This also changes the RD search to take account of the correct block
index when searching (this is required for ADST positioning to work
correctly in combination with tx_select).
Change-Id: Ie50d05b3a024a64ecd0b376887aa38ac5f7b6af6
Yaowu found that this function had a compile issue with MSVC because
of the use of _mm_storel_pi((__m64 *)(dest + 0 * stride), (__m128)p0).
To be safe, changed back to using an integer store instruction.
Also, for some builds, diff was not always 16-byte aligned; changed
the code to account for that.
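Concretely, the change swaps a float-typed 64-bit store (whose
__m128 cast MSVC rejects) for the integer equivalent; a minimal
illustration, with store_low8 a made-up wrapper name:

    #include <emmintrin.h>
    #include <stdint.h>

    void store_low8(uint8_t *dest, __m128i p0) {
      /* Before (MSVC rejects the direct __m128i -> __m128 cast):
       *   _mm_storel_pi((__m64 *)dest, (__m128)p0);
       * After: the same 64-bit store through the integer domain. */
      _mm_storel_epi64((__m128i *)dest, p0);
    }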
Change-Id: I9995e5446af15dad18f3c5c0bad1ae68abef6c0d
This patch revamps the entropy coding of coefficients to code first
a non-zero count per coded block and correspondingly remove the EOB
token from the token set.
STATUS:
Main encode/decode code achieving encode/decode sync - done.
Forward and backward probability updates to the nzcs - done.
RD costing updates for nzcs - done.
Note: The dynamic programming approach used in trellis quantization
is not exactly compatible with nzcs. A suboptimal approach has been
used instead, where branch costs are updated to account for changes
in the nzcs.
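The per-block count itself is simply the number of non-zero quantized
coefficients; a minimal sketch:

    #include <stdint.h>

    /* Sketch: non-zero count (nzc) for a coded block, signalled in
     * place of the removed EOB token. */
    static int count_nzc(const int16_t *qcoeff, int n_coeffs) {
      int i, nzc = 0;
      for (i = 0; i < n_coeffs; ++i)
        nzc += (qcoeff[i] != 0);
      return nzc;
    }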
TODO:
Training the default probs/counts for nzcs
Change-Id: I951bc1e22f47885077a7453a09b0493daa77883d
Split macroblock and superblock tokenization and detokenization
functions and coefficient-related data structs so that the bitstream
layout and related code for superblock coefficients look less like a
hack to fit macroblocks into superblocks.
In addition, derive the chroma transform size from the luma transform
size (i.e. always use the same size, as long as it fits the predictor);
in practice, this means 32x32 and 64x64 superblocks using the 16x16 luma
transform will now use the 16x16 (instead of the 8x8) chroma transform,
and 64x64 superblocks using the 32x32 luma transform will now use the
32x32 (instead of the 16x16) chroma transform.
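The unified selection reduces to clamping the luma transform size to
the largest transform that fits the half-resolution (4:2:0) chroma
predictor; a sketch with an illustrative function name:

    typedef enum { TX_4X4, TX_8X8, TX_16X16, TX_32X32 } TX_SIZE;

    /* Sketch: chroma uses the luma transform size, clamped to what
     * fits the chroma block. E.g. a 64x64 superblock has 32x32 chroma
     * blocks, so a 32x32 luma transform maps to a 32x32 chroma
     * transform; a 32x32 superblock (16x16 chroma) with a 16x16 luma
     * transform maps to a 16x16 chroma transform. */
    static TX_SIZE uv_tx_size(TX_SIZE luma_tx, TX_SIZE max_uv_tx) {
      return luma_tx < max_uv_tx ? luma_tx : max_uv_tx;
    }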
Lastly, add a trellis optimize function for 32x32 transform blocks.
HD gains about 0.3%, STDHD about 0.15%, and derf about 0.1%. There are
a few negative points here and there that I might want to analyze
a little closer.
Change-Id: Ibad7c3ddfe1acfc52771dfc27c03e9783e054430
Fixed a couple of variable/function definitions, as well as header
handling to support 16K sequence coding at high bit-rates.
The width and height are each specified by two bytes in the header.
Use an extra byte to explicitly indicate the scaling factors in
both directions, each ranging from 0 to 15.
Tested coding up to 16400x16400 dimensions.
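A sketch of reading these fields, in the style of the read_le16
helper mentioned earlier; the field order and which nibble holds
which direction's scale factor are assumptions:

    /* Sketch: 2 bytes each for width and height, then one byte
     * holding both 4-bit scale factors (0-15). */
    width       = read_le16(data + 0);  /* data[0] | (data[1] << 8) */
    height      = read_le16(data + 2);
    horiz_scale = data[4] & 0x0f;       /* assumed low nibble  */
    vert_scale  = data[4] >> 4;         /* assumed high nibble */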
Change-Id: Ibc2225c6036620270f2c0cf5172d1760aaec10ec
Simplified the idct32x32 calculation when there are only 10 or fewer
non-zero coefficients in a 32x32 block. This helps decoder
performance.
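The decode-side dispatch is presumably along these lines; with
eob <= 10 the non-zero coefficients are confined to the first few
scan positions, so most of the row/column passes can be skipped. The
function names follow the vp9_short_idct10_* pattern seen above but
are illustrative here:

    /* Sketch: pick a reduced idct path when at most 10 coefficients
     * are non-zero. */
    if (eob <= 10)
      vp9_short_idct10_32x32_add(qcoeff, dest, stride);  /* reduced work */
    else
      vp9_short_idct32x32_add(qcoeff, dest, stride);     /* full path */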
Change-Id: If7f8893d27b64a9892b4b2621a37fdf4ac0c2a6d