This commit removes two arrays, which contain the probabilities
of how likely each probability in the coef_probs table is to be
updated, and uses a fixed number "252" instead.
Surprisingly, the overall impact on quality is close to zero, which
basically says the two big static arrays are not helpful at all.
derf: -0.016%, -0.020%
std-hd: 0.000%, -0.013%
yt: -0.022%, +0.007%
yt-hd: -0.038%, +0.034%
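A minimal sketch of the idea, using hypothetical identifiers rather than
the actual libvpx code: the probability used to code the "is this
coef_probs entry updated?" flag becomes a constant instead of a
per-entry table lookup.

    #define COEF_UPDATE_PROB 252  /* fixed value replacing the two tables */

    /* Probability that a given coef_probs entry is signalled as updated. */
    static int coef_update_prob(int band, int ctx, int token) {
      (void)band; (void)ctx; (void)token;  /* indices are no longer needed */
      return COEF_UPDATE_PROB;  /* was: coef_update_probs[band][ctx][token] */
    }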
Change-Id: Ifee94d28a37dcab4f1d2b994bd5b07575be42b72
This commit added the ability to accumulate the coef stats across
different encodings using an intermediate binary stats file. The
accumulation happens only if the binary stats file exists in the
current directory. The encoder needs to be built with "ENTROPY_STATS"
to allow the output. The commit also fixed a few formatting issues in
the output stats file.
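A hedged sketch of the accumulate-and-save pattern described above; the
struct layout, dimensions and function name are illustrative, not the
actual ENTROPY_STATS code.

    #include <stdio.h>

    typedef struct {
      unsigned int counts[4][8][3][12];  /* illustrative dimensions */
    } coef_stats;

    static void accumulate_and_save(const char *path, coef_stats *cur) {
      FILE *f = fopen(path, "rb");
      if (f) {  /* accumulate only if a stats file already exists */
        coef_stats prev;
        if (fread(&prev, sizeof(prev), 1, f) == 1) {
          unsigned int *dst = &cur->counts[0][0][0][0];
          const unsigned int *src = &prev.counts[0][0][0][0];
          size_t i, n = sizeof(prev.counts) / sizeof(dst[0]);
          for (i = 0; i < n; ++i) dst[i] += src[i];
        }
        fclose(f);
      }
      f = fopen(path, "wb");  /* write back the accumulated totals */
      if (f) {
        fwrite(cur, sizeof(*cur), 1, f);
        fclose(f);
      }
    }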
Change-Id: Ib1a41180aa554845cf51e4421a230b128a3a82b4
Changes to calculation of sr_coded_error to include 0,0 case.
Experimental use of sr_coded_error in calculating correction factor
for estimating the allowable Q range.
Reinstated some code needed for calculating section_intra_rating.
Added flash detection to the calculation of KF boost.
Increased tolerance in testing candidate key frames (needed with
longer motion search, as this tends to slightly increase the inter %).
Zbin changes for 8x8.
Other minor adjustments, refactoring and bug fixes.
Reinstated some motion break out clauses in boost loop
as their removal hurt a few 50fps clips badly in the std set.
It may be possible to remove them again later if a better way
can be found of preventing overly long gf intervals.
Change-Id: Iee686d0c31072828bb1ccd2bc63f5f1c7c548ea2
Variables m & mi were being dereferenced when they might
hold invalid values.
The fix is simply to move these dereferences to after the
point at which mb_row and mb_col are tested for validity.
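A minimal sketch of the pattern behind the fix (the struct and function
are simplified stand-ins, not the actual decoder code): the row/column
validity test now runs before the pointer is dereferenced.

    typedef struct { int mode; } MODE_INFO;  /* stand-in for the real struct */

    static int read_mode(const MODE_INFO *mi, int mb_row, int mb_col,
                         int mb_rows, int mb_cols) {
      /* Test mb_row / mb_col first ... */
      if (mb_row < 0 || mb_row >= mb_rows || mb_col < 0 || mb_col >= mb_cols)
        return -1;  /* outside the frame: never touch mi */
      /* ... and only then dereference the (now known valid) pointer. */
      return mi[mb_row * mb_cols + mb_col].mode;
    }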
Change-Id: Ib16561efa9792dc469759936189ea379d374ad20
This fix addresses some problems with very complex clips, such as
the handling of flashes on clips like crew (which was made worse
by an earlier patch, on derf and std-hd).
Most clips see only a small effect, but some change by between 1 and 2%:
Derf +0.039%, +0.211%
YT +0.042%, +0.083%
Change-Id: I65fc7c13afc31482040068544dd65b8808f5cb4a
Removed the local scaling factor est_max_qcorrection_factor
and related code to simplify the estimateq calculation (it had little
effect anyway).
Capped the range of the total correction factor.
Slight change to the break out case for turning off arf.
Change-Id: I748187737ba93cfadf016f3dfdf8d2741934067f
This commit fixed a number of compile issues that occur when some
experiments are turned on at the same time.
Change-Id: Idb15b215e2d2a7d25f2707f99ef55a34e7301ce7
The commit changed how baseline 8x8 coefficient probabilities are
initialized, to be consistent with the initialization of baseline
4x4 coefficient probabilities.
The commit does not have any effect on compression.
Change-Id: Ifb3902b5dc0b0c2e6dc3aa5d4a6589d528e58355
Remove dependency on amount and speed of motion as this
may not behave well across different image sizes.
Tweak impact of % inter.
Add in experimental adjustment based on relative quality of an
older second reference frame.
Cap range of decay values allowed.
Some small positive effect on derf but negative on yt & hd at this stage.
Change-Id: I390d6f6ebe67a2eb0b834980d0d4650124980d3e
I now see I didn't write a very long description, so let's do it
here then. We took a pretty big quality hit (0.1-0.2%) from my
recent fix of the inversion of arguments to vp8_cost_bit() in the
RD reference frame costing. I looked into it and basically the
costing prevented us from switching reference frames. This is of
course silly, since each frame codes its own prob_intra_coded, so
using last frame cost indications as a limiting factor can never
be right.
Here, I've rewritten that code to estimate costs based partly on
statistics gathered while encoding the current frame. Overall,
this gives us a ~0.2%-0.3% improvement over what we had before my
argument-inversion fix, and thus about ~0.4% over current
git (on the derf set), and a little more (0.5-1.0%) on HD/STD-HD/YT.
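A hedged sketch of the approach (names and scaling are illustrative, not
the actual code): derive the intra-vs-inter probability used for RD
costing from the counts seen so far in the current frame, rather than
from the previous frame's signalled probability.

    /* Estimate of prob(intra) in [1, 255] from counts gathered so far
       while encoding the current frame; falls back to an even split
       when no blocks have been coded yet. */
    static int estimate_intra_prob(int intra_count, int inter_count) {
      const int total = intra_count + inter_count;
      int p;
      if (total == 0) return 128;
      p = (255 * intra_count + total / 2) / total;
      if (p < 1) p = 1;
      if (p > 255) p = 255;
      return p;
    }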
Change-Id: I79ebd4ccec4d6edbf0e152d9590d103ba2747775
Base the static image test on a measure of 0,0 motion
instead of the decay accumulator value.
Change "transition to still detection" to compare the
decay rate of successive frames.
Minor tweak to the extra arf boost given, based on the
number of frames affected.
Removed unused variable mod_err_per_mb_accumulator.
Change-Id: Idd8360083ad409e45f133ce97dd2488259003e64
The commit added an integer version of the 8x8 forward DCT, based on
the original forward DCT from VP6. The constants, roundings, and shifts
were adjusted to improve the accuracy. The latest patch has very
similar accuracy, in terms of round trip error, to the floating
point version.
It should be noted here that the purpose of the patch is to help
encoding speed and facilitate all other experiments. There will be
further review in combination with the inverse DCT before finalization.
Configure with "--enable-int_8x8fdct" to use the integer version.
Change-Id: I5a4f80507429f0e07cf02a13768ec81cbfddc5bc
Some marginal impact due to the fact that this change makes the use
of arf more likely / stable, even in hard sections.
Change-Id: Ic72fda0f63eefc9433914b5d9cd374d515810129
Removed unused function.
Added tentative code to take error score of an older frame
into account when calculating Q range. However, for now
it is disabled pending merging other changes and testing.
Change-Id: Ie89955e70319dac31b79e3b833e3352712a061ec
Remove testing of whether we estimate that it will be possible
to code an arf at a lower Q than the ambient Q. This adds quite
a bit of extra code and complexity for marginal gain.
Factored out some code relating to ARNR selection to a separate
function as this is likely to be changed / simplified soon.
Change-Id: Ia1cf060405637ef5bbf7018355437be21d12375f
Removed the odd *100 >> 4 factor (i.e. a scaling by 100/16 = 6.25)
from the boost calculations. Not all the calculations exactly match
what was there before, so there may be some minor impact on results.
Some other minor tidying up with regard to coding conventions.
The specific values of factors and thresholds will likely change as
part of subsequent patches.
Change-Id: Id976321484ac02ba50294cf54fafbc17dda85686
These frames can force reference frame (arf), mode (zeromv) and skip,
which means that if we use compound prediction (i.e. arf+last), we
might use a blend of a perfect (arf) and an imperfect (last) predictor,
leading to semi-garbage display and thus a huge drop in SSIM/PSNR (up
to 10dB for some frames I analyzed).
Gives a +0.2% gain on YT.
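A hedged sketch of the guard this change amounts to (the flag names are
illustrative): when the frame forces the reference frame, mode and skip,
compound prediction is simply not allowed, so the perfect arf predictor
is never blended with the imperfect last-frame predictor.

    /* Returns nonzero if compound (arf+last) prediction may be used. */
    static int allow_comp_pred(int frame_forces_ref_mode_and_skip,
                               int comp_pred_enabled) {
      if (frame_forces_ref_mode_and_skip)
        return 0;  /* use the single (arf) predictor only */
      return comp_pred_enabled;
    }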
Change-Id: If1f2b7899ad165684af3808fd379295e82558cbb
This is the first patch in a series of changes to the first
pass code. (Broken down for ease of testing/merging/review).
This patch introduces a new stats element "sr_coded_error".
This is the coded error recorded vs the second reference
frame (which is updated such that it lags by at least one frame).
No use is made of the new element in this change, so this patch
should have no material effect.
Removed some ifdefs and deprecated code (#if NEW_BOOST).
Removed twopass.gf_decay_rate (no longer used).
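A sketch of where the new stats element lives; the struct is shown in a
heavily reduced, illustrative form (only sr_coded_error comes from this
patch, the other fields are placeholders).

    typedef struct {
      double frame;
      double coded_error;     /* error vs. the most recent reference */
      double sr_coded_error;  /* new: error vs. the second, older reference,
                                 which lags by at least one frame */
      /* ... remaining first pass fields elided ... */
    } FIRSTPASS_STATS;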
Change-Id: I1be672a73017f7c13fd50fb4f99236aa2ed30916
This commit changed the forward and the inverse 4x4 Walsh Hadamard
transform to a new pair, where the inverse transform can perfectly
reconstruct the input to the forward transform. It also does so without
changing the input and output value range, and it does not
change the complexity of the transforms.
While it was not expected to improve the results of our current tests,
it does improve the std-hd set by 0.2% on all metrics. No change on derf.
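The exact coefficients of the new pair are not given in this message, so
the following is only a generic, hedged illustration of the perfect
reconstruction property for a 4-point Walsh Hadamard pair: the forward
butterfly is an orthogonal +/-1 matrix H with H*H = 4*I, so the inverse
is the same butterfly followed by an exact division by 4. Unlike the
pair in this commit, this generic form expands the intermediate value
range.

    /* Generic 4-point Hadamard: out = H*in with H entries of +/-1. */
    static void wht4_fwd(const int in[4], int out[4]) {
      const int a = in[0] + in[1], b = in[2] + in[3];
      const int c = in[0] - in[1], d = in[2] - in[3];
      out[0] = a + b; out[1] = a - b; out[2] = c + d; out[3] = c - d;
    }

    /* Inverse: in = H*out / 4, exact because H*H = 4*I over the integers. */
    static void wht4_inv(const int in[4], int out[4]) {
      int t[4];
      int i;
      wht4_fwd(in, t);  /* H is symmetric, so the same butterfly is reused */
      for (i = 0; i < 4; ++i) out[i] = t[i] / 4;
    }

    /* Round-trip check: recovers the original input for any integers. */
    static int wht4_roundtrip_ok(const int in[4]) {
      int f[4], r[4];
      int i;
      wht4_fwd(in, f);
      wht4_inv(f, r);
      for (i = 0; i < 4; ++i) if (r[i] != in[i]) return 0;
      return 1;
    }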
Change-Id: Ie4f23ddd3a0f3c5fbe97fb58399f860031f99337
1. block types
There are only three types of blocks for 8x8 transformed MBs, i.e. the
Y block with DC type does not exist for 8x8 transformed MBs, as all MBs
using the 8x8 transform have the 2nd order haar transform. This commit
introduces a new macro, BLOCK_TYPES_8X8, to reflect this fact (see the
sketch after this list).
2. context counters
This commit also fixed the mixing of context_counters between 4x4 and
8x8 transformed MBs. The mixed use of the counters leads me to think
that the existing context probabilities were not properly generated
from 8x8 transformed MBs.
3. redundant collecting in recoding
The commit also corrected the code that accumulates entropy stats by
making sure stats are only collected for the final packing, not during
the recode loop.
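A sketch of the macro from item 1; the 4x4 type labels in the comment
are descriptive only.

    /* 4x4 transformed MBs use four block types (Y after Y2, Y2, UV,
       Y with DC). 8x8 transformed MBs always have the 2nd order
       transform, so the Y-with-DC type cannot occur and only three
       types remain. */
    #define BLOCK_TYPES      4
    #define BLOCK_TYPES_8X8  3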
Change-Id: I029f09f8f60bd0c3240cc392ff5c6d05435e322c
This commit slightly adjusted the 4x4 coefficient band definition to
better classify coefficients with similar distributions and usages.
It helps the derf set by about 0.1%, and it is also slightly positive
for the std-hd set, where 4x4 blocks are used less frequently.
The commit also removed a const array that is not in use.
Change-Id: I78d16905d4036641ec905b0c32c190c1def5b249
The ARNR filter uses a motion compensated temporal filter,
but the motion estimation implementation accounts for the
cost of the mv in its decision making process. The ARNR
filter uses a dummy cost table initialized to 0 as a way
to ignore the mv costs (which are irrelevant to the filter).
This CL modifies the ARNR filter implementation so that
the mv costing is ignored without the need for
dummy tables.
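One possible shape of the change, as a hedged sketch (the helper and its
scaling are illustrative, not the actual motion search API): the mv cost
helper accepts NULL cost tables and returns zero, instead of requiring
zero-filled dummy tables.

    #include <stddef.h>

    /* mv cost term used by the motion search; ARNR filtering can pass
       NULL tables so the term drops out entirely. */
    static int mv_err_cost(int row, int col,
                           const int *row_cost, const int *col_cost,
                           int error_per_bit) {
      if (row_cost == NULL || col_cost == NULL)
        return 0;  /* ARNR: mv cost is irrelevant to the filter */
      return ((row_cost[row] + col_cost[col]) * error_per_bit) >> 8;
    }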
Change-Id: I0dd9620c3b70682f938b2a70912c11d4d7c9284c
Adds a speed feature to conduct a brute force search among a set of
available interpolation filters for the best one in an RD sense.
There is a gain of 0.4% on derf, 1.0% on Std-HD.
Patch 2: Added a macro to determine whether the encoder state is reset
for each new filter tried.
Patch 3: rebase; also fixes a bug (decodframe.c) introduced by a
couple of missing function pointer assignments.
Patch 4: rebase.
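A hedged sketch of the brute force loop (the callback and cost type are
illustrative): each available filter is tried, the RD cost of coding
with it is measured, and the cheapest filter is kept.

    #include <limits.h>

    /* Callback that re-encodes with the given filter and returns RD cost. */
    typedef long long (*try_filter_fn)(int filter, void *ctx);

    static int pick_interp_filter(int num_filters, try_filter_fn try_filter,
                                  void *ctx) {
      int best_filter = 0, filter;
      long long best_rd = LLONG_MAX;
      for (filter = 0; filter < num_filters; ++filter) {
        const long long rd = try_filter(filter, ctx);
        if (rd < best_rd) {
          best_rd = rd;
          best_filter = filter;
        }
      }
      return best_filter;
    }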
Change-Id: Ic9ccca9d8c35c6af557449ae867391a2f996cc29
This commit merges the QI mode experiment. As the experiment affects
the encoding of intra coding modes on key frames only, the overall
effect of the experiment on encoding tests is insignificant.
Change-Id: I9e4e3933adface88867ad429cee3986e529c511d