Results:
derf:   +0.2% (vanilla or +hybridtx); +0.7-0.8% (+hybrid16x16 or +tx16x16)
HD:     +0.1-0.2% (vanilla or +hybridtx); +1.4% (+hybrid16x16 or +tx16x16)
STD/HD: about even (vanilla or +hybridtx); +0.8-1.0% (+hybrid16x16 or +tx16x16)
Change-Id: I03899e2f7a64e725a863f32e55366035ba77aa62
Separates the entropy coding context models for 4x4, 8x8 and 16x16
ADST variants.
There is a small improvement for HD (hd/std-hd) by about 0.1-0.2%.
Results on derf/yt are about the same, probably because there are not
enough statistics.
Results may improve somewhat once the initial probability tables are
updated for the hybrid transforms; that update is coming soon.
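Conceptually, the change amounts to indexing the coefficient probability
tables for the hybrid transforms by transform size instead of sharing one
model across all ADST variants. A minimal sketch (the type and field names
here are illustrative, not the actual libvpx identifiers):

    /* Illustrative only: one probability model per hybrid transform size. */
    typedef unsigned char vp8_prob;
    typedef enum { TX_4X4, TX_8X8, TX_16X16 } TX_SIZE_SKETCH;

    typedef struct {
      vp8_prob hybrid_coef_probs_4x4[128];    /* array sizes are placeholders */
      vp8_prob hybrid_coef_probs_8x8[128];
      vp8_prob hybrid_coef_probs_16x16[128];
    } FRAME_CONTEXT_SKETCH;

    /* Pick the context model that matches the ADST variant being coded. */
    static vp8_prob *hybrid_coef_probs(FRAME_CONTEXT_SKETCH *fc,
                                       TX_SIZE_SKETCH sz) {
      switch (sz) {
        case TX_4X4: return fc->hybrid_coef_probs_4x4;
        case TX_8X8: return fc->hybrid_coef_probs_8x8;
        default:     return fc->hybrid_coef_probs_16x16;
      }
    }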
Change-Id: Ic7c0c62dacc68ef551054fdb575be8b8507d32a8
With this change, the 8x8 DCT is used for the I8x8 mode even if the
hybridtransform8x8 experiment is off. Note, however, that the gains
observed with the hybridtransform8x8 experiment will now be smaller,
since part of the gain is already merged in.
Change-Id: I9afb3880906fd0a1368a374041fc08efcf060c54
Some cleanups that will make it easier to maintain the code
and incorporate upcoming changes on entropy coding for the
hybrid transforms.
Change-Id: I44bdba368f7b8bf203161d7a6d3b1fc2c9e21a8f
This commit merges those parts of the CONFIG_NEW_MVREF experiment
that specifically relate to choosing a better set of candidate
MV references into the NEWBESTREFMV experiment.
CONFIG_NEW_MVREF will then be used for changes relating
to the explicit coding of a cost optimized MV reference in the
bitstream as part of MV coding.
Change-Id: Ied982c0ad72093eab29e38b8cd74d5c3d7458b10
Adds a new experiment with redesigned/refactored motion vector entropy
coding. The patch also takes a first step towards separating the
integer and fractional pel components of an MV. However, the fractional
pel encoding still depends on the integer pel part, so the two are not
fully independent. Further experiments are in progress to see how much
they can be decoupled without affecting performance.
All components, including entropy coding/decoding, costing for MV
search, and forward and backward updates to the probability tables,
have been implemented.
Results so far:
derf: +0.19%
std-hd: +0.28%
yt: +0.80%
hd: +1.15%
Patch: Simplifies the fractional pel models:
derf: +0.284%
std-hd: +0.289%
yt: +0.849%
hd: +1.254%
Patch: Some changes in the models, rebased.
derf: +0.330%
std-hd: +0.306%
yt: +0.816%
hd: +1.225%
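As a sketch of the integer/fractional split described above (hypothetical
names and models; the real experiment uses the codec's tree coder and richer
contexts), the two parts are coded separately but the fractional model is
still conditioned on the integer-pel part:

    #include <stdlib.h>

    typedef unsigned char vp8_prob;
    typedef struct BOOL_CODER BOOL_CODER;   /* opaque stand-in for the bool coder */

    /* Hypothetical helpers standing in for the real token writers. */
    void encode_int_pel(BOOL_CODER *bc, int v, const vp8_prob *p);
    void encode_frac_pel(BOOL_CODER *bc, int v, const vp8_prob *p);

    typedef struct {
      vp8_prob int_pel_probs[16];     /* placeholder model for the integer part */
      vp8_prob frac_pel_probs[2][8];  /* fractional model, indexed by int-pel class */
    } nmv_component_sketch;

    /* mv is one MV component, assumed here to be in 1/8-pel units. */
    static void encode_mv_component_sketch(BOOL_CODER *bc, int mv,
                                           const nmv_component_sketch *m) {
      const int mag = abs(mv);
      const int int_part = mag >> 3;   /* integer-pel magnitude */
      const int frac = mag & 7;        /* fractional-pel remainder */
      const int cls = int_part > 0;    /* crude context: zero vs non-zero integer part */

      encode_int_pel(bc, int_part, m->int_pel_probs);
      encode_frac_pel(bc, frac, m->frac_pel_probs[cls]);  /* not fully independent yet */
    }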
Change-Id: I646b3c48f3587f4cc909639b78c3798da6402678
Enable ADST/DCT of dimension 16x16 for I16X16 modes. This change provides
benefits mostly for hd sequences.
Set up the framework for selectable transform dimension.
Also allow a quantization parameter threshold to control the use
of the hybrid transform. (This is currently disabled by always setting
the threshold above the quantization parameter. Adaptive thresholding
can be built on top of this, which should further improve coding
performance.)
The coding performance gains (with respect to the codec that has all
other configuration settings turned on) are
derf: 0.013
yt: 0.086
hd: 0.198
std-hd: 0.501
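The quantizer threshold mentioned above boils down to a simple gate (a
sketch; the macro name and value are illustrative, and as described the
threshold is currently set so the check is never restrictive):

    /* Illustrative gate: allow the hybrid (ADST/DCT) transform only when the
       quantizer index is below a threshold.  Setting the threshold above any
       possible quantizer value, as this patch does, makes the check always
       pass, so the thresholding is inert until adaptive thresholding
       replaces the constant. */
    #define ACTIVE_HT_THRESH 256   /* hypothetical name and value */

    static int allow_hybrid_transform(int q_index) {
      return q_index < ACTIVE_HT_THRESH;
    }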
Change-Id: Ibb4263a61fc74e0b3c345f54d73e8c73552bf926
Alternative strategy for finding a list of candidate motion
vectors to use as reference values in mv coding and as
nearest and near.
Sort by SAD in vp8_find_best_ref_mvs() rather than just
picking the best. Allow (0,0) as a best-ref option, but not as
nearest or near unless there are no alternatives.
Encode/decode verified on at least some clips.
Some commented-out experimental and stats code is still in place.
Gain over the existing code averages about 1% on derf (all metrics),
with improvement on all clips. Other test results pending.
The entropy coding of the mode (nearest/near etc.) still
depends upon and requires the old "findnear" code, so
this needs looking at and may provide room for further gains.
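A rough sketch of the ranking idea (illustrative types; the real
vp8_find_best_ref_mvs() scores candidates against reconstructed border
pixels and applies the (0,0) restriction described above):

    #include <stdlib.h>

    typedef struct { int row, col; } MV_CAND_SKETCH;

    typedef struct {
      MV_CAND_SKETCH mv;
      unsigned int sad;   /* SAD produced by this candidate */
    } mv_cand;

    static int cmp_by_sad(const void *a, const void *b) {
      const mv_cand *x = (const mv_cand *)a, *y = (const mv_cand *)b;
      return (x->sad > y->sad) - (x->sad < y->sad);
    }

    /* Keep the whole candidate list, ordered best (lowest SAD) first, rather
       than retaining only the single best candidate. */
    static void rank_candidates(mv_cand *cands, int n) {
      qsort(cands, (size_t)n, sizeof(*cands), cmp_by_sad);
    }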
Change-Id: I871d7cba1d1c379c4bad9bcccce1fb19c46b8247
This commit adds a pick_sb_mode() function which selects the best 32x32
superblock coding mode. Then it selects the best per-MB modes, compares
the two and encodes that in the bitstream.
The bitstream coding is rather simplistic right now. At the SB level,
we code a bit to indicate whether this block uses SB-coding (32x32
prediction) or MB-coding (anything else), and then we follow with the
actual modes. This could and should be modified in the future, but is
omitted from this commit because it will likely involve reorganizing
much more code rather than just adding SB coding, so it's better to let
that be judged on its own merits.
Gains on derf: about even, YT/HD: +0.75%, STD/HD: +1.5%.
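In bitstream terms the signalling amounts to a per-SB flag followed by the
chosen modes, roughly as below (hypothetical struct and helper names; the
bool-coder call is a stand-in for the codec's writer):

    typedef struct vp8_writer vp8_writer;               /* opaque here */
    void vp8_write(vp8_writer *w, int bit, int prob);   /* stand-in declaration */

    typedef struct { int encoded_as_sb; } SB_INFO_SKETCH;         /* hypothetical */
    void write_sb_modes(vp8_writer *w, const SB_INFO_SKETCH *sb);  /* hypothetical */
    void write_mb_modes(vp8_writer *w, const SB_INFO_SKETCH *sb);  /* hypothetical */

    /* One bit says whether this 32x32 block uses SB coding (32x32 prediction)
       or MB coding (anything else); the corresponding mode info follows. */
    static void write_sb_or_mb(vp8_writer *w, const SB_INFO_SKETCH *sb,
                               int sb_coded_prob) {
      vp8_write(w, sb->encoded_as_sb, sb_coded_prob);
      if (sb->encoded_as_sb)
        write_sb_modes(w, sb);   /* a single 32x32 mode */
      else
        write_mb_modes(w, sb);   /* four per-MB modes */
    }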
Change-Id: Iae313a7cbd8f75b3c66d04a68b991cb096eaaba6
References to MACROBLOCKD that use "x" changed to "xd"
to comply with convention elsewhere that x = MACROBLOCK
and xd = MACROBLOCKD.
Simplify some repeat references using local variables.
Change-Id: I0ba2e79536add08140a6c8b19698fcf5077246bc
Add a local variable in several places to reference the MB mode
info structure. Currently this is usually accessed in the code as
x->e_mbd.mode_info_context->mbmi.* or, in some places,
xd->mode_info_context->mbmi.*
Resolved some uses of x-> for the MACROBLOCKD structure.
Rebased without dependency on motion reference experiment.
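A minimal illustration of the pattern (sketch types standing in for
MACROBLOCKD/MB_MODE_INFO):

    /* Take the address of the MB mode info once and use a short alias,
       instead of repeating x->e_mbd.mode_info_context->mbmi.* everywhere. */
    typedef struct { int mode, ref_frame; } MBMI_SKETCH;
    typedef struct {
      struct { MBMI_SKETCH mbmi; } *mode_info_context;
    } MACROBLOCKD_SKETCH;

    static void set_mode_sketch(MACROBLOCKD_SKETCH *xd, int mode, int ref) {
      MBMI_SKETCH *const mbmi = &xd->mode_info_context->mbmi;
      mbmi->mode = mode;
      mbmi->ref_frame = ref;
    }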
Change-Id: If6718276ee4f2ef131825d1524dfdb02a3793aed
Use surrounding reconstructed pixels from the left and above to select
the best-matching MV to use as the reference motion vector for MV encoding.
Test results:
         AVGPSNR  GLBPSNR  VPXSSIM
Derf:    1.107%   1.062%   0.992%
Std-hd:  1.209%   1.176%   1.029%
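The idea, roughly (a sketch; the real code predicts the border strip with
sub-pel interpolation and works on the actual candidate list):

    #include <limits.h>

    typedef struct { int row, col; } MV_SKETCH;

    /* Hypothetical helper: error between the reconstructed above/left border
       pixels and the same border predicted from the reference frame with mv. */
    unsigned int border_sad(const unsigned char *recon, const unsigned char *ref,
                            int stride, MV_SKETCH mv);

    /* Pick the candidate whose border prediction error is lowest; that MV is
       then used as the reference motion vector for MV encoding. */
    static int pick_ref_mv_sketch(const unsigned char *recon,
                                  const unsigned char *ref, int stride,
                                  const MV_SKETCH *cand, int n) {
      int i, best = 0;
      unsigned int best_err = UINT_MAX;
      for (i = 0; i < n; ++i) {
        const unsigned int err = border_sad(recon, ref, stride, cand[i]);
        if (err < best_err) { best_err = err; best = i; }
      }
      return best;
    }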
Change-Id: I8f10e09ee6538c05df2fb9f069abcaf1edb3fca6
Merged in the high_precision_mv experiment to make it easier
to work on new mv encoding strategies. Also removed
coef_update_probs3().
Change-Id: I82d3b0bb642419fe05dba82528bc9ba010e90924
Enabled for all 16x16 intra/inter modes
Features:
- Butterfly fDCT/iDCT
- Loop filter does not filter internal edges with 16x16
- Optimize coefficient function
- Update coefficient probability function
- RD
- Entropy stats
- 16x16 is a config option
Has not been tested with other experiments enabled.
hd: 2.60%
std-hd: 2.43%
yt: 1.32%
derf: 0.60%
Change-Id: I96fb090517c30c5da84bad4fae602c3ec0c58b1c
Allows switching/setting interpolation filters at the MB
level. A frame-level flag indicates whether to use a specific
filter for the entire frame or to signal the interpolation
filter for each MB. When switchable filters are used, the
encoder chooses between the 8-tap and 8-tap sharp filters. The
code currently has options to explore other variations as well,
which will be cleaned up subsequently.
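At the syntax level the scheme reduces to a frame-level choice plus an
optional per-MB filter index, roughly (illustrative enum and helper; the
real experiment signals the per-MB choice through the codec's tree coder):

    /* If the frame header says the filter is switchable, each MB carries a
       filter choice (here: 8-tap vs 8-tap sharp); otherwise every MB uses
       the single frame-level filter. */
    typedef enum { EIGHTTAP, EIGHTTAP_SHARP, SWITCHABLE } INTERP_FILTER_SKETCH;

    static INTERP_FILTER_SKETCH mb_filter(INTERP_FILTER_SKETCH frame_filter,
                                          int mb_filter_bit /* decoded per MB */) {
      if (frame_filter != SWITCHABLE)
        return frame_filter;                             /* fixed for the frame */
      return mb_filter_bit ? EIGHTTAP_SHARP : EIGHTTAP;  /* signalled per MB */
    }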
One issue with the framework is that encoding is slow. I
tried some tricks to speed things up, but it is still slow.
Decoding speed should not be affected since the number of
filter taps remains unchanged.
With the current version we are up 0.5% on derf on average, but
some videos (city/mobile) improve by close to 4% and 2% respectively.
If we do a full search by turning on the SEARCH_BEST_FILTER flag,
the results are somewhat better.
The framework can be combined with filtered prediction, and I
seek feedback regarding that.
Rebased.
Change-Id: I8f632cb2c111e76284140a2bd480945d6d42b77a
The following five experiments are merged:
newentropy
newupdate
adaptive_entropy (also includes a couple of parameter changes
in common/entropymode.c and encoder/modecosts.c that improve
results a little and were not merged from the internal branch)
newintramodes
expanded_coef_context
Change-Id: I8a142a831786ee9dc936f22be1d42a8bced7d270
This commit removes a number of experiment options from the configure
script. The associated experiments are already fully merged, so the
options in the configure script have no effect at all.
Change-Id: I8054ccaee0a04610162ed76ac9e59c4538217113
vp8_encode_inter_macroblock() is called in both pick_mb_modes() and
encode_sb(), so the macroblock counter was twice as large as the actual
number of macroblocks. This doesn't affect output.
Change-Id: I6de8a996ee44d2f7f2080d8d2177dd7bc6207c93
Added the ability to optionally filter the prediction data
when inter modes are selected (excludes SPLITMV, for now).
The mode selection loop considers both the filtered and
non-filtered prediction data when choosing mode. The filter
can be turned on/off at the frame-level, or signaled for
each MB.
Change-Id: I1b783c71d95a361ab36c761b07e8a6b06bc36822
Incorporates mv_ref, mbsplit and second_mv into the adaptive
entropy framework. The mv_ref framework has been modified from
before.
Adds some clean-ups and fixes.
Results with the adaptive entropy experiment are currently up by
+1.93% on derf, +2.33% on std-hd and +1.87% on yt-hd.
Fixed a nasty intermittent bug.
Change-Id: I4b1ac9f9483b48432597595195bfec05f31d1e39
This patch incorporates adaptive entropy coding of coefficient tokens,
and mode/mv information based on distributions encountered in a frame.
Specifically, there is an initial forward update to the probabilities
in the bitstream as before for coding the symbols in the frame, however
at the end of decoding each frame, the forward update to the
probabilities is reverted and instead the probabilities are updated
towards the actual distributions encountered within the frame.
The amount of update is weighted by the number of hits within each
context.
Results on derf/hd/std-hd are all up by 1.6%.
On derf, most of the gains come from coefficients; for the
hd and std-hd sets, most of the gains come from the mode/mv
information updates.
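The backward adaptation amounts to blending each context's pre-frame prior
with the distribution actually observed in that context, with a step size
that grows with the number of hits, for example (a sketch; the constants and
clamping are illustrative, not the codec's exact update rule):

    /* Illustrative backward update for a single binary context, applied after
       the frame is decoded and the forward update has been reverted. */
    typedef unsigned char vp8_prob;

    static vp8_prob adapt_prob_sketch(vp8_prob prior, unsigned int ct0,
                                      unsigned int ct1) {
      const unsigned int hits = ct0 + ct1;
      unsigned int observed, weight, p;
      if (hits == 0) return prior;                 /* context never used */
      observed = (255u * ct0 + hits / 2) / hits;   /* P(bit == 0), scaled to [0,255] */
      weight = hits > 255 ? 255 : hits;            /* more hits => bigger step */
      p = (prior * (256 - weight) + observed * weight + 128) >> 8;
      if (p < 1) p = 1;
      if (p > 255) p = 255;
      return (vp8_prob)p;
    }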
Change-Id: I708c0e11fdacafee04940fe7ae159ba6844005fd
I now see I didn't write a very long description, so let's do it
here then. We took a pretty big quality hit (0.1-0.2%) from my
recent fix of the inversion of arguments to vp8_cost_bit() in the
RD reference frame costing. I looked into it and basically the
costing prevented us from switching reference frames. This is of
course silly, since each frame codes its own prob_intra_coded, so
using last frame cost indications as a limiting factor can never
be right.
Here, I've rewritten that code to estimate costs based partly on
statistics gathered as the current frame is encoded. Overall, this
gives us a ~0.2-0.3% improvement over what we had before my
argument-inversion fix, and thus about ~0.4% over current git
(on the derf set), and a little more (0.5-1.0%) on HD/STD-HD/YT.
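Conceptually, the new costing turns counts accumulated so far in the
current frame into the probability used for RD costing, rather than relying
on the previous frame's coded probability alone; a hedged sketch (the real
code blends several sources and covers all reference frames):

    /* Illustrative: estimate prob_intra_coded from running counts gathered
       while encoding the current frame. */
    typedef unsigned char vp8_prob;

    static vp8_prob est_prob_intra_sketch(unsigned int intra_count,
                                          unsigned int inter_count) {
      const unsigned int total = intra_count + inter_count;
      unsigned int p;
      if (total == 0) return 128;            /* no data yet: assume 50/50 */
      p = (255u * intra_count + total / 2) / total;
      if (p < 1) p = 1;
      if (p > 255) p = 255;
      return (vp8_prob)p;
    }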
Change-Id: I79ebd4ccec4d6edbf0e152d9590d103ba2747775
These frames can force reference frame (arf), mode (zeromv) and skip,
which means that if we use compound prediction (i.e. arf+last), we
might use a blend of a perfect (arf) and an imperfect (last) predictor,
leading to semi-garbage display and thus a huge drop in SSIM/PSNR (up
to 10dB for some frames I analyzed).
Gives a +0.2% gain on YT.
Change-Id: If1f2b7899ad165684af3808fd379295e82558cbb
Added code to save the coding context in vp8_rd_pick_inter_mode
when the coding mode is forced to ARF(0,0).
Also, modified the MV bounds computation to comply with the
change in MV border from 32 to 64 pixels.
Change-Id: I96963a6f5f4d04ce84c807ae11e0635177c3ad6c
Some code re-factored / moved to allow the main
pack operation inside the recode loop so that the
size estimate is accurate.
Deletion of some redundant code relating to one pass.
Approximate improvement over the March 27 code base:
Derf 0.0%, YT 0.5%, YThd 0.3%, Std_hd 0.25%
Change-Id: Id2d071794ab44f0b52935f6fcdb5733d09a6bb86
This is the first patch to add superblock (32x32) coding
order capabilities. It does not yet do any mode selection
at the SB level, that will follow in a further patch.
This patch encodes rows of SBs rather than
MBs, each SB containing 2x2 MBs.
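The coding-order change is essentially a loop restructuring, along the
lines of the sketch below (hypothetical per-MB helper; the real code also
carries the entropy contexts and token buffer through the loop):

    /* Walk the frame in rows of 32x32 superblocks; within each SB, visit its
       2x2 constituent macroblocks in raster order. */
    void encode_mb(int mb_row, int mb_col);   /* hypothetical per-MB encode */

    static void encode_sb_rows_sketch(int mb_rows, int mb_cols) {
      int sb_row, sb_col, i;
      for (sb_row = 0; sb_row < mb_rows; sb_row += 2) {
        for (sb_col = 0; sb_col < mb_cols; sb_col += 2) {
          for (i = 0; i < 4; ++i) {
            const int mb_row = sb_row + (i >> 1);
            const int mb_col = sb_col + (i & 1);
            if (mb_row < mb_rows && mb_col < mb_cols)   /* clip at frame edge */
              encode_mb(mb_row, mb_col);
          }
        }
      }
    }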
Two intra prediction modes have been disabled since they
require reconstructed data for the above-right MB which
may not have been encoded yet (e.g. for the bottom right
MB in each SB).
Results on the one test clip I have tried (720p GIPS clip)
suggest that it is somewhere around 0.2dB worse than the
baseline version, so there may be bugs.
It has been tested with no experiments enabled and with
the following 3 experiments enabled:
--enable-enhanced_interp
--enable-high_precision_mv
--enable-sixteenth_subpel_uv
in each case the decode buffer matches the recon buffer
(using "cmp" to compare the dumped/decoded frames).
Note: Testing these experiments individually created
errors.
Some problems were found with other experiments but it
is unclear what state these experiments are in:
--enable-comp_intra_pred
--enable-newentropy
--enable-uvintra
This code has not been extensively tested yet, so there
is every likelihood that further bugs remain. I also
intend to do some code cleanup & refactoring in tandem
with the next patch that adds the 32x32 modes.
Change-Id: I1eba7f740a70b3510df58db53464535ef881b4d9
When ac_yquant > 171, a key frame is allowed to use the 8x8 transform. In
that case, MBs with DC_PRED or TM_PRED are selected to use T8x8. This
change helped the full STD-HD set by ~0.1% or so, which is reasonable
considering how often key frames occur in these encodings.
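The selection rule is a small predicate, roughly (a sketch following the
description above; the DC_PRED/TM_PRED test is passed in as a flag here to
keep the snippet self-contained):

    /* Illustrative: on a key frame with ac_yquant above the threshold,
       macroblocks predicted with DC_PRED or TM_PRED use the 8x8 transform. */
    static int use_t8x8_on_keyframe(int is_key_frame, int ac_yquant,
                                    int is_dc_or_tm_pred) {
      return is_key_frame && ac_yquant > 171 && is_dc_or_tm_pred;
    }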
Change-Id: Id17009ef6327252177b19e6bf0d6628827febaf1