This commit adds logic to initialize the segmentation map and to disable
temporal updates of the segmentation map when error-resilient mode is on.
It fixes the enc/dec mismatches (release build) and assertion failures
(debug build) seen when both aq-mode and error-resilient mode are enabled.
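A minimal sketch of the idea, using libvpx-style names that are assumptions
here rather than the exact code:

  /* Sketch only: field names follow libvpx conventions but are assumed. */
  if (cm->error_resilient_mode) {
    /* Temporal prediction of the segmentation map depends on the previous
       frame's map, which may be unreliable after a loss. */
    cm->seg.temporal_update = 0;
  }
  /* Start the map used for temporal prediction from a known state rather
     than from uninitialized memory. */
  memset(cm->last_frame_seg_map, 0, cm->mi_rows * cm->mi_cols);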
Change-Id: Id2155e8b28962cf1f64494f4df0c8d79499b6890
The tx_type nominally associated with a given intra prediction mode is used
as a context to encode the actual tx_type for intra blocks.
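As an illustration, the context is simply the tx_type nominally implied by
the prediction mode; the table name and contents below mirror the VP9-style
mapping and are illustrative, not the exact code:

  static const TX_TYPE mode_to_nominal_tx_type[INTRA_MODES] = {
    DCT_DCT,    /* DC_PRED   */
    ADST_DCT,   /* V_PRED    */
    DCT_ADST,   /* H_PRED    */
    DCT_DCT,    /* D45_PRED  */
    ADST_ADST,  /* D135_PRED */
    ADST_DCT,   /* D117_PRED */
    DCT_ADST,   /* D153_PRED */
    DCT_ADST,   /* D207_PRED */
    ADST_DCT,   /* D63_PRED  */
    ADST_ADST,  /* TM_PRED   */
  };
  /* The probabilities for the actual tx_type are then indexed by
     [tx_size][mode_to_nominal_tx_type[mode]]. */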
Results:
derflr: -0.241% BDRATE
hevcmr: -0.366% BDRATE
Change-Id: Icfe7b0a58d79bc6497a06e3441779afec6e01e21
Otherwise, with per-segment lossless, some segments may not be lossless and
could still want to use another transform mode. The per-block tx syntax
remains uncoded on blocks where (per the segment id) the Q value implies
lossless.
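A sketch of the intended behaviour, with assumed names (select_tx_mode()
and write_selected_tx_size() are hypothetical helpers):

  /* Force the all-4x4 transform setup only when every segment is lossless;
     otherwise keep the normally selected tx_mode. */
  int i, all_lossless = 1;
  for (i = 0; i < MAX_SEGMENTS; ++i) all_lossless &= xd->lossless[i];
  cm->tx_mode = all_lossless ? ONLY_4X4 : select_tx_mode(cpi);

  /* Per block, the tx size is simply not coded when the block's segment
     implies lossless. */
  if (!xd->lossless[mbmi->segment_id])
    write_selected_tx_size(cm, xd, w);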
Change-Id: If210206ab1fe3dd11976797370c77f961f13dfa0
For testing, a fixed pattern and delta were implemented for 1-pass,
fixed-Q, low-delay mode.
This has not in any way been tuned or optimized.
Change-Id: Icf9b57c3bb16cc5c0726d5229009212af36eb6d9
(copied from VP9)
The one-pass VBR mode selects a Q range based on a moving average of
recent Q values. This calculation should have excluded ARF overlay
frames, as these are usually coded at the highest allowed value. Their
inclusion skews the average and can cause it to drift upwards even when
the clip as a whole is undershooting, which undermines correct adaptation
of the allowed Q range, especially for easy content.
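A sketch of the adjustment; the field names and the weighting are assumed
from VP9-style rate control code rather than quoted from it:

  /* Update the moving average of recent inter-frame Q only for frames that
     are not ARF overlays, so their near-maximum Q does not skew it. */
  if (!cpi->rc.is_src_frame_alt_ref) {
    cpi->rc.avg_frame_qindex[INTER_FRAME] =
        (3 * cpi->rc.avg_frame_qindex[INTER_FRAME] + qindex) / 4;
  }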
Change-Id: I9e12da84e12917e836b6e53ca4dfe4f150b9efb1
The culprit: on the decoder side, the xd->lossless[i] setup was in the
wrong location, before the segment features had been decoded.
Also, on the encoder side, the transform mode was not set consistently
between where tx_mode is selected and where tx_mode is enforced in
tx size selection.
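On the decoder side the fix amounts to deriving the per-segment lossless
flags only after the segmentation and quantization data have been read; a
sketch with assumed helper names:

  setup_segmentation(cm, rb);   /* assumed helpers for reading the header */
  setup_quantization(cm, rb);
  /* Only now is the per-segment qindex known. */
  for (i = 0; i < MAX_SEGMENTS; ++i) {
    const int qindex = get_segment_qindex(cm, i);  /* hypothetical */
    xd->lossless[i] = qindex == 0 && cm->y_dc_delta_q == 0 &&
                      cm->uv_dc_delta_q == 0 && cm->uv_ac_delta_q == 0;
  }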
Change-Id: I4c4c32188fda7530cadab9b46d4201f33f7ceca3
This change has been imported from VP9 and
alters the nature and use of exhaustive motion search.
Firstly, any exhaustive search is preceded by a normal step search.
The exhaustive search is only carried out if the distortion resulting
from the step search is above a threshold value.
Secondly, the simple +/- 64 exhaustive search is replaced by a multi-stage,
mesh-based search where each stage has a range and a step/interval size.
Subsequent stages use the best position from the previous stage as the
center of the search but use a reduced range and interval size.
For example:
stage 1: Range +/- 64 interval 4
stage 2: Range +/- 32 interval 2
stage 3: Range +/- 15 interval 1
This process, especially when it follows on from a normal step
search, has shown itself to be almost as effective as a full range
exhaustive search with step 1 but greatly lowers the computational
complexity such that it can be used in some cases for speeds 0-2.
This patch also removes a double exhaustive search for sub 8x8 blocks
which also contained a bug (the two searches used different distortion
metrics).
At best quality, on my test animation sequence, this patch has almost no
impact on quality but improves encode speed by more than 5X.
Restricted use in good-quality speeds 0-2 yields significant quality gains
of 0.2-0.5 dB on the animation test with only a small impact on encode
speed. On most natural video clips, however, where the step search is
performing well, the quality gain and speed impact are small.
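The staged configuration above can be expressed as a small table; the
struct and array names here are assumptions, not the actual encoder code:

  typedef struct {
    int range;     /* +/- search range around the current best MV */
    int interval;  /* step between sampled positions */
  } MeshPattern;

  static const MeshPattern mesh_patterns[] = {
    { 64, 4 },  /* stage 1 */
    { 32, 2 },  /* stage 2 */
    { 15, 1 },  /* stage 3: exhaustive at interval 1 */
  };
  /* Each stage re-centers on the best position found by the previous one. */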
Change-Id: Iac24152ae239f42a246f39ee5f00fe62d193cb98
Fix copied over from VP9 master to VP10 master.
Do not reset the alt ref active flag when overlaying the middle
arf(s) of a multi-arf group.
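Sketch of the intended condition; the field names are assumptions and
is_middle_arf_overlay() is a hypothetical helper:

  /* Clear the alt-ref active flag only when the overlay corresponds to the
     final ARF of the group, not a middle ARF of a multi-arf group. */
  if (cpi->rc.is_src_frame_alt_ref && !is_middle_arf_overlay(cpi))
    cpi->rc.source_alt_ref_active = 0;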
Change-Id: I1b7392107e7c675640d5ee1624012f39cc374c58
The range_check is not used because the bit range
in fdct# is not correct. Since we are going to merge in a new version
of fdct# from nextgenv2, we won't fix the incorrect bit range now.
Change-Id: I54f27a6507f27bf475af302b4dbedc71c5385118
The old workaround "p = 0 ? 0 : p - 1" is misleading: ?: binds more tightly
than =, so the expression assigns p - 1 back to p, and that assignment
truncates the result to one byte. It is therefore equivalent to
(p - 1) & 0xFF, but the check only exists to work around a first pass bug,
so let's make the workaround clearer.
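A standalone illustration of why the old expression reduces to
(p - 1) & 0xFF when p has a one-byte type:

  #include <stdint.h>
  #include <stdio.h>

  int main(void) {
    uint8_t p = 0;
    p = 0 ? 0 : p - 1;  /* parses as p = (0 ? 0 : p - 1), i.e. p = p - 1 */
    printf("%d\n", p);  /* prints 255: the byte assignment wrapped 0 - 1 */
    return 0;
  }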
https://bugs.chromium.org/p/webm/issues/detail?id=1089
Change-Id: I587c44dd61c1f3767543c0126376f881889935af
This reverts commit 7f56cb2978.
It causes uninitialized reads in the first pass setting up later cost tables.
Change-Id: I2df498df3f5c03eff359f79edf045aed0c618dc9
Remove delta index 254 from the probability remapping and subexp coding.
This saves 1 bit when the delta index is 129.
Change-Id: I88aba565fc766b1769165be458d2efd3ce45817e
The old workaround "p = 0 ? 0 : p - 1" is misleading: ?: binds more tightly
than =, so the expression assigns p - 1 back to p, and that assignment
truncates the result to one byte. It is therefore equivalent to
(p - 1) & 0xFF, but the check only exists to work around a first pass bug,
so let's make the workaround clearer.
https://code.google.com/p/webm/issues/detail?id=1089
Change-Id: Ia6dcc8922e1acbac0eeca23a4d564a355c489572
The custom LCG is based on the POSIX-recommended constants for a 16-bit
rand(). This implementation uses less computation than typical standard
library procedures, which have been extended for 32-bit support, is
guaranteed to be reentrant, and behaves identically everywhere.
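A self-contained sketch of the recurrence, using the constants from the
POSIX rand() example referenced below (the function name is illustrative):

  #include <stdio.h>

  /* POSIX example rand() constants; returns values in [0, 32767]. */
  static unsigned int lcg_rand16(unsigned int *state) {
    *state = (unsigned int)(*state * 1103515245ULL + 12345);
    return (*state / 65536) % 32768;
  }

  int main(void) {
    unsigned int seed = 1;
    unsigned int a = lcg_rand16(&seed);
    unsigned int b = lcg_rand16(&seed);
    unsigned int c = lcg_rand16(&seed);
    printf("%u %u %u\n", a, b, c);
    return 0;
  }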
Change-Id: I3140bbd566f44ab820d131c584a5d4ec6134c5a0
Ref: http://pubs.opengroup.org/onlinepubs/9699919799/functions/rand.html
Add the row and column indices to the argument list of the unit functions
called by the foreach_transformed_block wrapper. This avoids repeatedly
re-deriving them internally from the block index.
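A sketch of the resulting visitor signature; the typedef name follows the
existing wrapper's conventions and is an assumption here:

  /* The transform block's row/column are passed directly instead of being
     re-derived from the block index inside each visitor. */
  typedef void (*foreach_transformed_block_visitor)(int plane, int block,
                                                    int blk_row, int blk_col,
                                                    BLOCK_SIZE plane_bsize,
                                                    TX_SIZE tx_size,
                                                    void *arg);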
Change-Id: Ie7508acdac0b498487564639bc5cc6378a8a0df7