Allows for switching/setting interpolation filters at the MB
level. A frame-level flag indicates whether to use a specific
filter for the entire frame or to signal the interpolation
filter for each MB. When switchable filters are used, the
encoder chooses between the 8-tap and 8-tap sharp filters. The
code currently has options to explore other variations as well,
which will be cleaned up subsequently.
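As a minimal sketch of the intended signalling (the type names and
the write_mb_filter helper below are hypothetical, not the actual
bitstream code):

    /* Minimal sketch only: interp_filter_t, frame_hdr_t and
     * write_mb_filter are made-up names, not the real codec API. */
    #include <stdio.h>

    typedef enum { EIGHTTAP = 0, EIGHTTAP_SHARP = 1 } interp_filter_t;

    typedef struct {
      int filter_is_switchable;      /* frame-level flag */
      interp_filter_t frame_filter;  /* used when not switchable */
    } frame_hdr_t;

    /* Either one filter applies to the whole frame, or the chosen
     * filter is signalled once per macroblock. */
    static void write_mb_filter(const frame_hdr_t *hdr,
                                interp_filter_t mb_filter) {
      if (hdr->filter_is_switchable)
        printf("per-MB filter: %d\n", (int)mb_filter);
      /* otherwise nothing is coded per MB; frame_filter applies */
    }

    int main(void) {
      frame_hdr_t hdr = { 1, EIGHTTAP };
      write_mb_filter(&hdr, EIGHTTAP_SHARP);
      return 0;
    }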
One issue with the framework is that encoding is slow. I have
tried some tricks to speed things up, but it is still slow.
Decoding speed should not be affected, since the number of
filter taps remains unchanged.
With the current version, we are up 0.5% on derf on average,
but some videos improve by more: city and mobile are up by
close to 4% and 2% respectively. If we do a full search by
turning on the SEARCH_BEST_FILTER flag, the results are
somewhat better.
The framework can be combined with filtered prediction, and I
seek feedback regarding that.
Rebased.
Change-Id: I8f632cb2c111e76284140a2bd480945d6d42b77a
The following five experiments are merged:
newentropy
newupdate
adaptive_entropy (also includes a couple of parameter changes in
  common/entropymode.c and encoder/modecosts.c that improve
  results a little and were not merged from the internal branch)
newintramodes
expanded_coef_context
Change-Id: I8a142a831786ee9dc936f22be1d42a8bced7d270
I now see I didn't write a very long description, so let's do it
here then. We took a pretty big quality hit (0.1-0.2%) from my
recent fix of the inversion of arguments to vp8_cost_bit() in the
RD reference frame costing. I looked into it and basically the
costing prevented us from switching reference frames. This is of
course silly, since each frame codes its own prob_intra_coded, so
using last frame cost indications as a limiting factor can never
be right.
Here, I've rewritten that code to estimate costs based partially
on statistics gathered while encoding the current frame. Overall,
this gives us a ~0.2-0.3% improvement over what we had before my
argument-inversion fix, and thus about ~0.4% over current git
(on the derf set), and a little more (0.5-1.0%) on HD/STD-HD/YT.
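As a rough sketch of the idea (the helper names below are made up;
this is not the actual vp8_cost_bit / RD code): the intra/inter
probability is re-estimated from macroblocks already coded in the
current frame, blended with the frame-level probability, and only
then turned into a bit cost.

    #include <math.h>
    #include <stdio.h>

    /* Hypothetical sketch: blend a running estimate from the current
     * frame with the previous frame-level probability (1..255). */
    static int estimate_intra_prob(int intra_count, int inter_count,
                                   int frame_prob) {
      int total = intra_count + inter_count;
      int running;
      if (total == 0) return frame_prob;        /* nothing coded yet */
      running = (255 * intra_count + total / 2) / total;
      if (running < 1) running = 1;
      if (running > 255) running = 255;
      return (running + frame_prob + 1) / 2;    /* simple 50/50 blend */
    }

    /* prob/256 is taken as the probability that the bit is 0 (intra). */
    static double cost_in_bits(int prob, int bit) {
      double p = prob / 256.0;
      return -log2(bit ? 1.0 - p : p);
    }

    int main(void) {
      int prob = estimate_intra_prob(10, 90, 200);
      printf("intra: %.2f bits, inter: %.2f bits\n",
             cost_in_bits(prob, 0), cost_in_bits(prob, 1));
      return 0;
    }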
Change-Id: I79ebd4ccec4d6edbf0e152d9590d103ba2747775
Merged in most of the current common prediction changes
that were under the #if CONFIG_COMPRED option.
Change-Id: If4e6f61dbe7b86dd449f6effbe93b5eb7e893885
Further changes to make it easier to experiment with the context
used for coding the dual pred flag.
The current best-performing method tested on derf is a two-element
context based on the reference frame. I also tried various
combinations of mode and reference frame, as shown in the
commented-out case using up to 6 contexts.
Derf: +0.26% overall PSNR, +0.15% SSIM vs. the original method.
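For illustration, a two-element context keyed on the neighbouring
reference frames could look like the sketch below (the names and
the exact rule are assumptions, not the committed code):

    #include <stdio.h>

    typedef enum { INTRA_FRAME, LAST_FRAME, GOLDEN_FRAME,
                   ALTREF_FRAME } ref_frame_t;

    /* Hypothetical: pick one of two probability contexts for the
     * dual pred flag from the already-coded neighbours' references.
     * A larger (up to 6-element) context could also fold in the
     * neighbours' prediction modes. */
    static int dual_pred_context(ref_frame_t above, ref_frame_t left) {
      return (above == LAST_FRAME || left == LAST_FRAME) ? 1 : 0;
    }

    int main(void) {
      printf("context = %d\n",
             dual_pred_context(LAST_FRAME, GOLDEN_FRAME));
      return 0;
    }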
Change-Id: I64c21ddec0abbb27feaaeaa1da2e9f164ebaca03
Further use of common prediction functions and experiments
with alternate contexts based on mode and reference frame.
For the Derf set, using the reference frame as the basis of the
context gives +0.18% overall PSNR and +0.08% SSIM.
Change-Id: Ie7eb76f329f74c9c698614f01ece31de0b6bfc9e
Initial modifications to make limited use of the common prediction
functions.
The only functional change so far is that updates to the
probabilities are no longer "damped". The damping was a testing
convenience, and removing it in fact seems to help by a little
over 0.1% over the derf set.
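To make the "damped" terminology concrete, a tiny hypothetical
helper (not the removed code): a damped update only moves part of
the way from the old probability towards the newly measured one,
while the undamped form adopts the measurement directly.

    /* Hypothetical illustration of damped vs. direct updates. */
    static int update_prob(int old_prob, int measured_prob, int damped) {
      if (!damped)
        return measured_prob;                    /* direct update */
      return (old_prob + measured_prob + 1) / 2; /* move half way */
    }

For example, update_prob(128, 200, 0) returns 200, whereas the
damped form would return 164.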
Change-Id: I8b82907d9d6b6a4a075728b60b31ce93392a5f2e
Trial of a modified prediction function that ranks each possible
reference frame based on a combination of local usage and
frame-level probability. The code is a bit cleaner and simpler.
In a direct comparison with the old unpredicted method, with
segment-level coding turned off for mode, ref & EOB, the
prediction gives a gain on derf of around 0.4%. There is some
further gain from bug fixes over earlier code.
With segment coding on, the prediction method is slightly
negative on some very easy clips (at low rates) due to slightly
higher overheads, but better on harder clips. Overall it is
neutral on derf in a direct comparison on the latest code base,
but compared to earlier code without the bug fixes it is about
+0.7% overall PSNR, +0.3% SSIM.
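Conceptually, the ranking could look something like the sketch
below (all names and weights are made up for illustration; this is
not the committed function):

    #include <stdio.h>

    #define NUM_REFS 4  /* intra, last, golden, altref */

    /* Hypothetical: score each reference by neighbour usage plus its
     * frame-level probability, then sort highest score first. */
    static void rank_refs(const int neighbor_count[NUM_REFS],
                          const int frame_prob[NUM_REFS],  /* 0..255 */
                          int order[NUM_REFS]) {
      int score[NUM_REFS], i, j;
      for (i = 0; i < NUM_REFS; i++) {
        score[i] = 4 * 255 * neighbor_count[i] + frame_prob[i];
        order[i] = i;
      }
      for (i = 0; i < NUM_REFS; i++)       /* simple selection sort */
        for (j = i + 1; j < NUM_REFS; j++)
          if (score[order[j]] > score[order[i]]) {
            int t = order[i]; order[i] = order[j]; order[j] = t;
          }
    }

    int main(void) {
      int counts[NUM_REFS] = { 0, 2, 1, 0 };  /* neighbours favour LAST */
      int probs[NUM_REFS] = { 30, 120, 60, 45 };
      int order[NUM_REFS];
      rank_refs(counts, probs, order);
      printf("most likely reference index: %d\n", order[0]);
      return 0;
    }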
Change-Id: I5b8474658b208134d352d24f6517f25795490789
Extended prediction and coding of the reference frame where a
subset of options is flagged as available at the segment level.
Updated copyright notices.
Switched to SAD in the mbgraph code, as SATD is problematic for
foreground/background separation because it can ignore large DC
shifts.
Change-Id: I661dbbb2f94f3ec0f96bb928c1655e5e415a7de1
This commit adds the common prediction modules, some data
structures and a config option, but does not yet use them.
It also corrects a bug in clearing down the MODE_INFO border and
introduces a new element that indicates whether an entry
corresponds to an "in image" macroblock or is part of the border.
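A minimal sketch of the "in image" marking (made-up struct and
layout, not the actual MODE_INFO definition): border entries are
cleared and left flagged as outside the image, while real
macroblock positions are marked in-image.

    #include <string.h>

    typedef struct {
      int mb_in_image;  /* 1 = real macroblock, 0 = border padding */
      /* ... other mode info fields ... */
    } mode_info_t;

    /* Hypothetical layout: one border row on top and one border
     * column on the left, so stride == cols + 1. */
    static void init_mode_info(mode_info_t *mi, int stride,
                               int rows, int cols) {
      int r, c;
      memset(mi, 0, sizeof(*mi) * stride * (rows + 1));
      for (r = 0; r < rows; r++)
        for (c = 0; c < cols; c++)
          mi[(r + 1) * stride + c + 1].mb_in_image = 1;
    }

    int main(void) {
      mode_info_t mi[(2 + 1) * (3 + 1)];  /* 2x3 MBs plus border */
      init_mode_info(mi, 3 + 1, 2, 3);
      return mi[0].mb_in_image;           /* border entry stays 0 */
    }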
Change-Id: Ib69eec0876173ebe9d1de9df9537d0b2447702e0