Allow for 3 quant profiles from entropy context
Refactored dq_offset bands to allow for re-optimization based on number
of quantization profiles
Change-Id: Ib8d7e8854ad4e0bf8745038df28833d91efcfbea
Added dq_off_index attribute to mbmi to allow for switching between
dequantization modes.
Reduced number of different dequantization modes from 5 to 3.
Changed dequant_val_nuq to allow for 3 dequant levels instead of 1.
Fixed lint errors
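A minimal sketch of the idea (only dq_off_index is from this patch;
the struct, table, and values below are illustrative):

  /* Per-block index, stored in mbmi, selecting one of 3 profiles. */
  #define QUANT_PROFILES 3  /* reduced from 5 */

  typedef struct {
    int dq_off_index;  /* chosen from the entropy context */
  } MB_MODE_INFO_SKETCH;

  /* one illustrative dequant value per profile */
  static const int dequant_val_sketch[QUANT_PROFILES] = { 32, 34, 36 };

  static int dequant(int level, const MB_MODE_INFO_SKETCH *mbmi) {
    return level * dequant_val_sketch[mbmi->dq_off_index];
  }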
Change-Id: I7aee3548011aa4eee18adb09d835051c3108d2ee
This is a port of 01bb4a318dc0f9069264b7fd5641bc3014f47f32
This commit also fixes a bug where FLIPADST transforms, when combined
with a DST (that is, FLIPADST_DST and DST_FLIPADST), did not actually
perform a flipped transform but a straight ADST instead. This was
because the C implementation they fell back on did not implement
flipping. This is now fixed as well, and FLIPADST_DST and DST_FLIPADST
do what they are supposed to do.
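A sketch of the previously missing fallback step: flipping the input
along the relevant dimension turns a plain ADST into a FLIPADST. The
names below are illustrative, and adst1d() is only a placeholder for
the codec's 1-D ADST.

  #define N 8

  static void adst1d(const int *in, int *out) {
    for (int i = 0; i < N; ++i) out[i] = in[i];  /* placeholder body */
  }

  static void flipadst1d(const int *in, int *out) {
    int flipped[N];
    /* the flip the old C fallback skipped */
    for (int i = 0; i < N; ++i) flipped[i] = in[N - 1 - i];
    adst1d(flipped, out);
  }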
Change-Id: I89c67ca1d5e06808a1567c51e7d6bec4998182bd
Also fixes a valgrind error when optimizations are disabled.
Done in preparation for the work on the extended coding unit size
experiment.
Change-Id: Ib074c5a02c94ebed7dd61ff0465d26fa89834545
CONFIG_SR_MODE=1, enable SR mode
USE_POST_F=1, enable SR post filter
SR_USE_MULTI_F=1, enable SR post filter family
Not compatible with other experiments yet
Change-Id: I116f1d898cc2ff7dd114d7379664304907afe0ec
Adds an additional transform in the ext_tx experiment that
is a 2d DST1-DST1 combination.
To enable use --enable-ext-tx --enable-dst1.
This needs to be later extended to combine DST1 with DCT
or ADST.
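For reference, a textbook floating-point 1-D DST-1 (the codec uses a
scaled fixed-point version); applying it to rows and then columns
gives the 2-D DST1-DST1 combination.

  #include <math.h>
  #ifndef M_PI
  #define M_PI 3.14159265358979323846
  #endif

  static void dst1_1d(const double *in, double *out, int n) {
    for (int k = 0; k < n; ++k) {
      double sum = 0.0;
      for (int i = 0; i < n; ++i)
        sum += in[i] * sin(M_PI * (i + 1) * (k + 1) / (n + 1));
      out[k] = sum;
    }
  }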
Change-Id: I6d29f1b778ef8294bcfb6a512a78fc5eda20723b
Framework for alternate transforms for inter 32x32 and larger based
on dwt-dct hybrid is implemented.
Further experiments are to be conducted with different
variations of hybrid dct/dwt or plain dwt, as well as super-resolution
mode.
Change-Id: I9a2bf49ba317e7668002cf1499211d7da6fa14ad
Test VP9/EndToEndTestLarge.EndtoEndPSNRTest/1 (422 stream) failed when
supertx enabled. This was because 4x8 and 8x4 blocks were not being
split into 4x4s during tokenization in the encoder. This patch
uses vp9_foreach_transformed_block() to fix this.
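Illustrative sketch of the shape of the fix (the real helper is
vp9_foreach_transformed_block(); the names and signature below are
made up): tokenization visits each transform block at its transform
size, so an 8x4 or 4x8 partition decomposes into 4x4 blocks.

  typedef void (*tx_block_visitor)(int row, int col, int tx_size,
                                   void *arg);

  static void foreach_tx_block_sketch(int bw, int bh, int tx_size,
                                      tx_block_visitor visit, void *arg) {
    for (int r = 0; r < bh; r += tx_size)
      for (int c = 0; c < bw; c += tx_size)
        visit(r, c, tx_size, arg);  /* tokenize one tx block */
  }

  /* e.g. an 8x4 block with 4x4 transforms is visited as two 4x4
   * blocks instead of being tokenized in one pass. */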
Change-Id: I1f1cb27474eb9e04347067f5c4aff1942bbea8d9
Use separate token probabilities and counters for non-transform
blocks (pixel domain). Initial probabilities are trained with
screen_content clips. On screen_content, this improves coding
performance by about 2% (from +16.4% to +18.45%).
The initial probabilities are not optimized for natural videos, so
this mode should not be used for them. Set FOR_SCREEN_CONTENT to 0/1
to specify whether or not to enable this patch.
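Roughly how the selection could look (a sketch; the table names are
illustrative, FOR_SCREEN_CONTENT is the switch mentioned above):

  #define FOR_SCREEN_CONTENT 1

  #if FOR_SCREEN_CONTENT
  /* Pick the token probability table based on whether the block is
   * coded in the pixel domain (transform skipped). */
  static const unsigned char *choose_token_probs(
      int tx_skipped, const unsigned char *coef_probs,
      const unsigned char *pixel_probs) {
    return tx_skipped ? pixel_probs   /* trained on screen_content clips */
                      : coef_probs;   /* regular coefficient probs */
  }
  #endif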
Change-Id: Ifa361c94bb62aa4b783cbfa50de08c3fecae0984
Do not treat first element (dc) differently.
on screen_content
tx-skip only: +16.4% (was +15.45%)
no significant impact on natural videos
Change-Id: I79415a9e948ebbb4a69109311c10126d8a0b96ab
Use raster scan order for non-transform blocks
+15.45% (+2.1%) on screen_content
no significant change on natural videos
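Sketch of what the change amounts to for a pixel-domain block:
residues are read out row by row rather than through the coefficient
scan table.

  static void read_residues_raster(const int *residue, int bw, int bh,
                                   int *out) {
    int k = 0;
    for (int r = 0; r < bh; ++r)
      for (int c = 0; c < bw; ++c)
        out[k++] = residue[r * bw + c];  /* raster == storage order */
  }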
Change-Id: I0e264cb69e8624540639302d131f7de9c31c3ba7
+0.3% on 10-bit
+0.3% on 12-bit
With other high bit compatible experiments on 12-bit
+12.44% (+0.17) over 8-bit baseline
Change-Id: I40b4c382fa54ba4640d08d9d01950ea8c1200bc9
This patch allows the prediction residues of tx-skipped blocks
to use probs that are different from those of regular transform
coefficients for token entropy coding. Prediction residues are
treated as being in band 6.
The initial value of probs is obtained with stats from limited
tests. The statistic model for constrained token nodes has not
been optimized. The probs for token extra bits have not been
optimized. These can be future work.
Some coding improvement is observed:
derflr with all experiments: +6.26% (+0.10%)
screen_content with palette: +22.48% (+1.28%)
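Sketch of the band assignment described above (the lookup below is a
placeholder for the codec's position-to-band table):

  static int coef_band_sketch(int coeff_index) {
    /* placeholder for the real position -> band table (bands 0..5) */
    if (coeff_index == 0) return 0;
    if (coeff_index < 3) return 1;
    if (coeff_index < 6) return 2;
    return 3;
  }

  static int token_band(int tx_skipped, int coeff_index) {
    if (tx_skipped) return 6;  /* all pixel-domain residues use band 6 */
    return coef_band_sketch(coeff_index);
  }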
Change-Id: I1c0d78178ee9f3655febb6f30cdaef8ee9f8e3cc
This framework allows lower quantization bins to be shrunk or
expanded to match the source distribution more closely (assuming a
generalized gaussian-like central peaky model for the coefficients)
in an entropy-constrained sense. Specifically, the widths of bins 0-4
are modified as a factor of the nominal quantization step size, and
from bin 5 onwards all bins are the same as the nominal quantization
step size.
Further, different bin width profiles as well as reconstruction values
can be used based on the coefficient band and on the quantization step
size, which is divided into 5 ranges.
A small gain of about 0.16% is currently observed on derflr with the
same parameters for all q values.
Optimizing the parameters based on qstep value is left as a TODO for now.
Results on derflr with all expts on are +6.08% (up from +5.88%).
Experiments are in progress to tune the parameters for different
coefficient bands and quantization step ranges.
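A floating-point sketch of the bin geometry (the real dequant_val_nuq
tables are fixed-point and per band/q-range; the factors and the
mid-bin reconstruction rule below are made up):

  /* Bins 0..4 have widths that are a factor of the nominal step q;
   * from bin 5 onwards every bin is exactly q wide. */
  static const double bin_factor[5] = { 1.20, 0.95, 0.95, 1.00, 1.00 };

  static double reconstruct(int level, double q) {
    double lo = 0.0, width;
    int b;
    for (b = 0; b < level && b < 5; ++b) lo += bin_factor[b] * q;
    if (level > 5) lo += (level - 5) * q;  /* uniform bins from 5 on */
    width = (level < 5) ? bin_factor[level] * q : q;
    return (level == 0) ? 0.0 : lo + 0.5 * width;  /* mid-bin value */
  }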
Change-Id: I88429d8cb0777021bfbb689ef69b764eafb3a1de
All stats look fine.
derflr: +0.912% with respect to the 10-bit internal baseline
(was +0.747% w.r.t. the 8-bit baseline)
+5.545% with respect to the 8-bit baseline
Change-Id: I3c14fd17718a640ea2f6bd39534e0b5cbe04fb66
Implements vertical, horizontal, and tm dpcm intra prediction for
blocks in tx_skip mode. Typical coding gain on screen content video
is 2%~5%.
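Sketch of the vertical variant (horizontal and TM are analogous):
each row is predicted from the reconstructed row directly above it,
which is only possible because tx_skip codes residues in the pixel
domain. Names are illustrative.

  static void dpcm_v_pred_recon(unsigned char *dst, int stride,
                                const int *residue, int bw, int bh) {
    for (int r = 0; r < bh; ++r) {
      for (int c = 0; c < bw; ++c) {
        /* row -1 is the block's top border in the frame buffer */
        const int pred = dst[(r - 1) * stride + c];
        int v = pred + residue[r * bw + c];
        if (v < 0) v = 0;
        if (v > 255) v = 255;
        dst[r * stride + c] = (unsigned char)v;
      }
    }
  }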
Change-Id: Idd5bd84ac59daa586ec0cd724680cef695981651
16x16 and 8x8 transform sizes, both for the regular case and the high
bit depth (CONFIG_VP9_HIGHBITDEPTH) case.
Change-Id: I34a9d3c73c3687f967105194ce4def48c3ec435c
This patch improves the non-transform coding mode. At this
point, the coding gain on screen content videos is about
12% for the lossless case and 15% for the lossy case.
1. Encode tx_skip flags with context. Y tx_skip flag context is
whether the prediction mode is inter or intra. UV flag context
is Y flag.
2. Transform skipping is less helpful when the Q-index is high.
So it is enabled only when the Q-index is smaller than a
threshold. Currently the threshold is set as 255 for intra blocks,
and 0 for inter blocks.
3. The shift of the prediction residue, when copying them to the
coeff buffer, is set as 3 when the Q-index is larger than a
threshold (currently set as 0), and 2 otherwise.
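Sketch of the rules in points 2 and 3 above, with the thresholds as
quoted (names are illustrative):

  #define TX_SKIP_Q_THRESH_INTRA 255
  #define TX_SKIP_Q_THRESH_INTER 0
  #define TX_SKIP_SHIFT_Q_THRESH 0

  static int tx_skip_allowed(int q_index, int is_inter) {
    const int thresh =
        is_inter ? TX_SKIP_Q_THRESH_INTER : TX_SKIP_Q_THRESH_INTRA;
    return q_index < thresh;
  }

  static int tx_skip_residue_shift(int q_index) {
    /* shift used when copying residues into the coeff buffer */
    return (q_index > TX_SKIP_SHIFT_Q_THRESH) ? 3 : 2;
  }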
Change-Id: I372973c7518cf385f6e542b22d0f803016e693b0
Reimplements the supertx experiment from the playground branch.
Makes it work with other experiments.
Results:
With --enable-supertx
derflr: +0.958%
With --enable-supertx --enable-ext-tx
derflr: +2.25%
With --enable-supertx --enable-ext-tx --enable-filterintra
derflr: +2.73%
Change-Id: I5012418ef2556bf2758146d90c4e2fb8a14610c7
Extends the ext-tx experiment to include regular and flipped
DST variants. A total of 9 transforms are thus possible for
each inter block with transform size <= 16x16.
In this patch currently only the four ADST_ADST variants
(flipped or non-flipped in both dimensions) are enabled
for inter blocks.
The gain with the ext-tx experiment grows to +1.12% on derflr.
Further experiments are underway.
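The nine combinations come from three 1-D choices (DCT, ADST, flipped
ADST) in each dimension; an illustrative enumeration (identifiers in
the actual patch may differ):

  typedef enum {
    SK_DCT_DCT, SK_ADST_DCT, SK_DCT_ADST, SK_ADST_ADST,
    SK_FLIPADST_DCT, SK_DCT_FLIPADST, SK_FLIPADST_FLIPADST,
    SK_ADST_FLIPADST, SK_FLIPADST_ADST,
    SK_TX_TYPES  /* == 9 */
  } SKETCH_TX_TYPE;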
Change-Id: Ia2ed19a334face6135b064748f727fdc9db278ec
Non-transform option is enabled in both intra and inter modes.
In lossless case, the average coding gain on screen content
clips is 11.3% in my test.
Change-Id: I2e8de515fb39e74c61bb86ce0f682d5f79e15188
Extends the quantizer range and includes some other fixes.
Change-Id: Ia9adf7848e772783365d6501efd07585bff80c15
stdhd: +0.166%, turns slightly positive
derf: -0.036% (very few blocks use the 64x64 mode)
Preliminary 64x64 transform implementation.
Includes all code changes.
All mismatches resolved.
Coding results for derf and stdhd are within noise. stdhd is slightly
higher, derf is slightly lower.
To be further refined.
Change-Id: I091c183f62b156d23ed6f648202eb96c82e69b4b
The functions b_width_log2 and b_height_log2 only perform a direct
table lookup. This commit unifies their use cases by using the table
directly and removes these functions.
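Sketch of the pattern (table name and contents are illustrative):

  /* The removed wrappers only indexed a table ... */
  static const unsigned char b_width_log2_table_sketch[4] = { 0, 0, 1, 1 };
  /* static int b_width_log2(int bsize) {
   *   return b_width_log2_table_sketch[bsize];
   * }
   * ... so callers now index the table directly:
   *   bwl = b_width_log2_table_sketch[bsize];
   */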
Change-Id: I3103fc6ba959c1182886a2799d21b8b77c8a7b6b
mi_grid_* are arrays of pointers that point to the MIs in cm->mi.
They are unnecessary and complicated. The original goal was to remove
the MODE_INFO_t copy, but with an extra MODE_INFO_t pointer inside
MODE_INFO_t, the same goal can be achieved.
This commit removes the mi_grid_* structures entirely. There are
still many dummy MODE_INFO_t inside cm->mi, which are a waste of
memory. The next commit will do on-demand MODE_INFO_t allocation to
save this memory.
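Sketch of the extra pointer mentioned above (field name illustrative):

  /* Instead of a separate grid of MODE_INFO pointers, each MODE_INFO
   * points at the MODE_INFO that actually carries its data (itself
   * for the top-left block of a partition). */
  typedef struct mode_info_sketch {
    int mode;                         /* ...other per-block fields... */
    struct mode_info_sketch *src_mi;  /* where the real data lives */
  } MODE_INFO_SKETCH;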
Change-Id: I3a05cf1610679fed26e0b2eadd315a9ae91afdd6
Adds various high bitdepth transform functions and tests.
Many of the changes are related to using the typedefs tran_low_t
and tran_high_t for the final transform coefficients and the
intermediate stages of the transform computation respectively, rather
than the fixed types int16_t/int. When the vp9_highbitdepth configure
flag is off, these map to int16_t/int32_t, but when the flag is on,
they map to int32_t/int64_t to make room for the needed extra
precision.
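The mapping described above, as a sketch (the real typedefs live in
the codec headers):

  #include <stdint.h>

  #if CONFIG_VP9_HIGHBITDEPTH
  typedef int32_t tran_low_t;   /* final transform coefficients */
  typedef int64_t tran_high_t;  /* intermediate computation */
  #else
  typedef int16_t tran_low_t;
  typedef int32_t tran_high_t;
  #endif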
Change-Id: I3c56de79e15b904d6f655b62ffae170729befdd8