Addressed the code review comments.
Under the htrans8x8 experiment, the 8x8 DCT in I8X8 mode is
replaced with a combination of 8x8 ADST and DCT.
Overall coding gains with the htrans8x8 experiment are:
derf: 0.486
std-hd: 1.040
hd: 1.063
yt: 0.506
Note that part of the gain comes from bigger transforms
(8x8 instead of 4x4) and part comes from replacing the DCT
with the ADST.
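As a rough illustration of what such a hybrid looks like, here is a
minimal floating-point sketch of a separable 8x8 transform that applies
a DST-VII-style ADST along one dimension and a DCT-II along the other.
The normalization, the exact ADST variant, and the integer butterfly
structure used in the actual code may differ; this is not the
implementation in the tree.

    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define N 8

    /* DCT-II basis: row k, sample n */
    static void build_dct(double m[N][N]) {
      for (int k = 0; k < N; ++k)
        for (int n = 0; n < N; ++n)
          m[k][n] = (k == 0 ? sqrt(1.0 / N) : sqrt(2.0 / N)) *
                    cos(M_PI * (2 * n + 1) * k / (2.0 * N));
    }

    /* DST-VII-style ADST basis: row k, sample n */
    static void build_adst(double m[N][N]) {
      for (int k = 0; k < N; ++k)
        for (int n = 0; n < N; ++n)
          m[k][n] = sqrt(4.0 / (2 * N + 1)) *
                    sin(M_PI * (2 * k + 1) * (n + 1) / (2.0 * N + 1));
    }

    /* Forward hybrid transform: rows use row_tx, columns use col_tx. */
    static void hybrid_fwd_8x8(const double in[N][N], double out[N][N],
                               double row_tx[N][N], double col_tx[N][N]) {
      double tmp[N][N];
      for (int i = 0; i < N; ++i)        /* transform each row */
        for (int k = 0; k < N; ++k) {
          double s = 0;
          for (int n = 0; n < N; ++n) s += row_tx[k][n] * in[i][n];
          tmp[i][k] = s;
        }
      for (int j = 0; j < N; ++j)        /* then each column */
        for (int k = 0; k < N; ++k) {
          double s = 0;
          for (int n = 0; n < N; ++n) s += col_tx[k][n] * tmp[n][j];
          out[k][j] = s;
        }
    }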
Change-Id: I92ca6bbfce11b4165d612b81d9adfad4d010c775
The 16x16 transform is set on all 16x16 intra/inter modes.
Features:
- Butterfly fDCT/iDCT
- Loop filter does not filter internal edges with the 16x16
  transform (see the sketch after this list)
- Optimize coefficient function
- Update coefficient probability function
- RD
- Entropy stats
- 16x16 is a config option
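A minimal sketch of the loop-filter behaviour listed above, assuming
placeholder names (TX_16X16, filter_internal_edges): when a macroblock
uses a single 16x16 transform there are no transform boundaries inside
it, so only the macroblock edge needs filtering.

    typedef enum { TX_4X4, TX_8X8, TX_16X16 } TX_SIZE;

    /* Returns 1 if the loop filter should also run on edges inside the MB. */
    int filter_internal_edges(TX_SIZE tx_size) {
      /* with one 16x16 transform there are no internal transform
       * boundaries, so only the MB boundary itself is filtered */
      return tx_size != TX_16X16;
    }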
Have not tested with experiments.
hd: 2.60%
std-hd: 2.43%
yt: 1.32%
derf: 0.60%
Change-Id: I96fb090517c30c5da84bad4fae602c3ec0c58b1c
Apply 2D-DCT transform of dimension 8x8 to encode prediction
residuals of I8X8 mode.
Brought back block type 3 probability context model for 8x8 tokens,
which is used for the coefficients of Y blocks in I8x8 modes. The
coefficient cost estimates of I8X8 mode in rate-distortion are also
changed appropriately.
Performance results:
derf: 0.246
yt: 0.114
std-hd: 0.730
hd: 0.670
Change-Id: If1d970eeb4e1827c9f0d2c5b27d33089b347ea27
Allows for switching/setting interpolation filters at the MB
level. A frame-level flag indicates whether to use a specific
filter for the entire frame or to signal the interpolation
filter for each MB. When switchable filters are used, the
encoder chooses between 8-tap and 8-tap sharp filters. The
code currently has options to explore other variations as well,
which will be cleaned up subsequently.
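As an illustration of the signaling described above, here is a minimal
decoder-side sketch. SWITCHABLE, read_bit, FrameHeader and the filter
enum are placeholder names for this sketch, not the actual bitstream
code: a frame-level value either fixes the filter for the whole frame
or tells the decoder to read a per-MB choice.

    typedef enum { EIGHTTAP, EIGHTTAP_SHARP, SWITCHABLE } INTERP_FILTER;

    typedef struct {
      INTERP_FILTER frame_filter;   /* a fixed filter, or SWITCHABLE */
    } FrameHeader;

    /* Returns the interpolation filter to use for one macroblock. */
    INTERP_FILTER get_mb_filter(const FrameHeader *hdr,
                                int (*read_bit)(void *ctx), void *ctx) {
      if (hdr->frame_filter != SWITCHABLE)
        return hdr->frame_filter;        /* one filter for the whole frame */
      /* otherwise each MB carries its own choice; with two candidate
       * filters a single bit is enough */
      return read_bit(ctx) ? EIGHTTAP_SHARP : EIGHTTAP;
    }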
One issue with the framework is that encoding is slow. I
tried some tricks to speed things up, but it is still slow.
Decoding speed should not be affected since the number of
filter taps remain unchanged.
With the current version, we are up 0.5% on derf on average, but
some videos (city and mobile) improve by close to 4% and 2% respectively.
If we do a full search by turning on the SEARCH_BEST_FILTER flag,
the results are somewhat better.
The framework can be combined with filtered prediction, and I
seek feedback regarding that.
Rebased.
Change-Id: I8f632cb2c111e76284140a2bd480945d6d42b77a
The following five experiments are merged:
newentropy
newupdate
adaptive_entropy (also includes a couple of parameter changes in
common/entropymode.c and encoder/modecosts.c that were not merged
from the internal branch and that improve results a little)
newintramodes
expanded_coef_context
Change-Id: I8a142a831786ee9dc936f22be1d42a8bced7d270
Adds ADST/DCT hybrid transform coding for Intra4x4 mode.
The ADST is applied to directions in which the boundary
pixels are used for prediction, while the DCT is applied to
directions without corresponding boundary prediction.
Adds enum TX_TYPE in b_mode_info to indicate the transform
type used.
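To make the direction rule concrete, here is a minimal sketch of a
mode-to-transform-type mapping. The enum values are placeholders for
this sketch and the actual mapping in the tree may differ in detail.

    typedef enum { DCT_DCT, ADST_DCT, DCT_ADST, ADST_ADST } TX_TYPE;
    typedef enum { B_DC_PRED, B_V_PRED, B_H_PRED, B_TM_PRED } B_PREDICTION_MODE;

    TX_TYPE tx_type_for_mode(B_PREDICTION_MODE mode) {
      switch (mode) {
        case B_V_PRED:  return ADST_DCT;   /* predicted from the row above:
                                              ADST along columns, DCT along rows */
        case B_H_PRED:  return DCT_ADST;   /* predicted from the column on the
                                              left: ADST along rows, DCT along columns */
        case B_TM_PRED: return ADST_ADST;  /* both boundaries used */
        default:        return DCT_DCT;    /* e.g. DC prediction */
      }
    }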
Made coding style consistent with Google style.
Addressed the issues raised in code review comments.
Experimental results in terms of bit-rate reduction:
derf: 0.731%
yt: 0.982%
std-hd: 0.459%
hd: 0.725%
Will be looking at 8x8 transforms next.
Change-Id: I46dbd7b80dbb3e8856e9c34fbc58cb3764a12fcf
This commit removes a number of experiment options from the configure
script. The associated experiments are already fully merged, so these
options no longer have any effect.
Change-Id: I8054ccaee0a04610162ed76ac9e59c4538217113
Added the ability to optionally filter the prediction data
when inter modes are selected (excludes SPLITMV, for now).
The mode selection loop considers both the filtered and
non-filtered prediction data when choosing mode. The filter
can be turned on/off at the frame-level, or signaled for
each MB.
Change-Id: I1b783c71d95a361ab36c761b07e8a6b06bc36822
Incorporates mv_ref, mbsplit and second_mv into the adaptive
entropy framework. The mv_ref framework has been modified from
before.
Adds some clean-ups and fixes.
Results with the adaptive entropy experiment are currently up by
+1.93% on derf; +2.33% std-hd and +1.87% yt-hd.
Fixed a nasty intermittent bug.
Change-Id: I4b1ac9f9483b48432597595195bfec05f31d1e39
I now see I didn't write a very long description, so let's do it
here then. We took a pretty big quality hit (0.1-0.2%) from my
recent fix of the inversion of arguments to vp8_cost_bit() in the
RD reference frame costing. I looked into it and basically the
costing prevented us from switching reference frames. This is of
course silly, since each frame codes its own prob_intra_coded, so
using last frame cost indications as a limiting factor can never
be right.
Here, I've rewritten that code to estimate costs based partly on
statistics gathered so far while encoding the current frame. Overall,
this gives us a ~0.2%-0.3% improvement over what we had previously
before my argument-inversion fix, and thus about ~0.4% over current
git (on the derf set), and a little more (0.5-1.0%) on HD/STD-HD/YT.
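A minimal sketch of the idea, using hypothetical running counters
(count_intra/count_inter) rather than the encoder's real state: derive
the signaling cost from what has been coded so far on the current
frame, clamped so that an early, lopsided count can never lock out a
reference-frame choice.

    #include <math.h>

    /* Cost, in 1/256-bit units, of an event with probability p. */
    static int bit_cost_256(double p) {
      return (int)(-log2(p) * 256.0 + 0.5);
    }

    int estimate_ref_signal_cost(int count_intra, int count_inter,
                                 int is_intra) {
      int total = count_intra + count_inter;
      double p_intra = total ? (double)count_intra / total : 0.5;
      /* clamp so neither choice can ever be priced out entirely */
      if (p_intra < 1.0 / 256) p_intra = 1.0 / 256;
      if (p_intra > 255.0 / 256) p_intra = 255.0 / 256;
      return bit_cost_256(is_intra ? p_intra : 1.0 - p_intra);
    }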
Change-Id: I79ebd4ccec4d6edbf0e152d9590d103ba2747775
1. block types
There are only three types of blocks for 8x8-transformed MBs, i.e. the
Y-block-with-DC type does not exist for 8x8-transformed MBs, as all MBs
using the 8x8 transform have a 2nd order Haar transform. This commit
introduces a new macro, BLOCK_TYPES_8X8, to reflect that fact.
2. context counters
This commit also fixes the mixed use of context_counters between 4x4-
and 8x8-transformed MBs. The mixed use of the counters leads me to think
that the existing context probabilities were not properly generated
from 8x8-transformed MBs.
3. redundant collecting in recoding
The commit also corrects the code that accumulates entropy stats by
making sure stats are only collected for the final packing, not during
the recode loop.
Change-Id: I029f09f8f60bd0c3240cc392ff5c6d05435e322c
This commit merges the QI mode experiment. As the experiment affects
the encoding of intra coding modes on key frames only, the overall
effect of the experiment on encoding tests is insignificant.
Change-Id: I9e4e3933adface88867ad429cee3986e529c511d
This is to prevent the evaluation of a mode from using values left
over from a mode evaluated earlier in the loop.
Change-Id: Ife2c6ceb76d2f7365fd262515d3ae48229033c2d
Added code to save the coding context in vp8_rd_pick_inter_mode
when the coding mode is forced to ARF(0,0).
Also, modified the MV bounds computation to comply with the
change in MV border from 32 to 64 pixels.
Change-Id: I96963a6f5f4d04ce84c807ae11e0635177c3ad6c
This commit tries to address an issue related to the oddity seen on the
HD mobcal clip, where some rather ugly blocks appear in the second
frame at low-to-mid bit rates if the third frame is not made a key frame
by the encoder. The fixes include: 1) made calls to sad_16x16
consistent with the function prototype; 2) removed the error bias toward
intra and golden in the mbgraph search; 3) changed the error accumulation
in inter-segment encoding to avoid potential out-of-range values. 1) has
no effect on encoding results.
Encoding tests show that the overall effect of the commit helps by about
.2% (HD) to .3% (cif).
Change-Id: I930975a2d0c06252f01c39e0a02351529774e30b
This commit makes macro_block_yrd and macro_block_yrd_8x8 take the
same parameters. It also removes a few unnecessary shifts that had the
potential to create out-of-range distortion values.
Change-Id: I4ec5afb307c3685c2a67a07c2850f0927d214455
Some adjustments to zbin for t8x8.
Changes to rules for sizing forced key frames.
Some extra stats output in tmp.stt.
Approximate gain on YT-hd set 0.5%
There are still issues in sizing key frames and gf/arf frames
when the image is largely static. These in part relate to
problems with cost estimates in the recode loop.
Change-Id: I6f0159dc8a8faeab4115a19c668d442491619a68
This is the first patch to add superblock (32x32) coding
order capabilities. It does not yet do any mode selection
at the SB level, that will follow in a further patch.
This patch encodes rows of SBs rather than rows of MBs;
each SB contains 2x2 MBs.
Two intra prediction modes have been disabled since they
require reconstructed data for the above-right MB which
may not have been encoded yet (e.g. for the bottom right
MB in each SB).
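A minimal sketch of the coding order described above, with encode_mb
standing in as a placeholder for whatever per-MB encode routine is
actually used:

    /* Walk the frame in rows of 32x32 superblocks, each covering 2x2 MBs. */
    void encode_frame_in_sb_order(int mb_rows, int mb_cols,
                                  void (*encode_mb)(int mb_row, int mb_col)) {
      for (int sb_row = 0; sb_row < (mb_rows + 1) / 2; ++sb_row) {
        for (int sb_col = 0; sb_col < (mb_cols + 1) / 2; ++sb_col) {
          /* the 2x2 MBs inside one superblock, skipping those past the edge */
          for (int i = 0; i < 4; ++i) {
            int mb_row = sb_row * 2 + (i >> 1);
            int mb_col = sb_col * 2 + (i & 1);
            if (mb_row < mb_rows && mb_col < mb_cols)
              encode_mb(mb_row, mb_col);
          }
        }
      }
    }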
Results on the one test clip I have tried (720p GIPS clip)
suggest that it is somewhere around 0.2dB worse than the
baseline version, so there may be bugs.
It has been tested with no experiments enabled and with
the following 3 experiments enabled:
--enable-enhanced_interp
--enable-high_precision_mv
--enable-sixteenth_subpel_uv
in each case the decode buffer matches the recon buffer
(using "cmp" to compare the dumped/decoded frames).
Note: Testing these experiments individually created
errors.
Some problems were found with other experiments but it
is unclear what state these experiments are in:
--enable-comp_intra_pred
--enable-newentropy
--enable-uvintra
This code has not been extensively tested yet, so there
is every likelihood that further bugs remain. I also
intend to do some code cleanup & refactoring in tandem
with the next patch that adds the 32x32 modes.
Change-Id: I1eba7f740a70b3510df58db53464535ef881b4d9
Pulled out super block code for the snapshot as this
is not quite ready and will need an extensive re-merge.
Change-Id: I436369b511257447a7b0ea064016cb63f5011849
https://gerrit.chromium.org/gerrit/#change,17319 fixes the cost estimation
to take skip_eob into account. No quality difference seen on derf set
tests, but about .4% gain on STD_HD set.
Change-Id: Ic5fe6d35ee021e664a6fcd28037b8432a0e470ca
This gives a modest gain on derf overall, although at low bitrates the
cost is still too high, so this can be improved further.
Patch 2: Rebase and fix 80-column issues
Change-Id: Ida2f9fa3fe75370669f6a27b37108dc602231c63
The commit changes the code to compute UV intra RD estimates for 4x4 and
8x8 separately, to be used in the mode decision for MB modes associated
with the corresponding transform size. Now, finally, after many other
changes related to the 8x8 quantizer zbin boost and zbin_mode_boost,
this change overall helps the HD set (with 8x8) by around ~.13%.
(avg .13% glb .13% ssim .17%)
The commit also has a few changes for eliminating compiler warnings.
Change-Id: Ibab35dad44820c87e6b44799c66f8d519cc37344
The commit added the correct Zbin_mode_boost initialization based on
Intra Mode before using rate distortion to pick UV intra mode.
Change-Id: I8e57878ff356a06672f6fa2431be860bf9b9a5c7
This is the first patch in refactoring the code related to
high-precision MVs, so that 1/4- and 1/8-pel motion vectors can
coexist in the same bitstream by use of a frame-level flag.
The current patch works fine for use of only 1/4-pel or
only 1/8-pel MVs, but there are some issues with the
mode switching in between. Subsequent patches on this Change-Id
will fix the remaining issues.
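A minimal sketch of the frame-level precision switch, assuming MV
components are stored in 1/8-pel units; the names and the exact
rounding rule are illustrative only, not the code in the tree.

    typedef struct { int row, col; } MV;  /* components in 1/8-pel units */

    /* Drop the 1/8-pel bit so the MV lands on a 1/4-pel position; a real
     * implementation may round to nearest instead of truncating. */
    static int drop_eighth_pel_bit(int v) { return v & ~1; }

    void enforce_mv_precision(MV *mv, int allow_high_precision) {
      if (!allow_high_precision) {
        mv->row = drop_eighth_pel_bit(mv->row);
        mv->col = drop_eighth_pel_bit(mv->col);
      }
    }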
Patch 2: Adds fixes to make sure that multiple mv precisions can
coexist in the bitstream. Frame-level switching has been tested
to work correctly.
Patch 3: Fixes lines exceeding 80 characters
Patch 4:
http://www.corp.google.com/~debargha/vp8_results/enhinterp.html
Results on derf after ssse3 bugfix, compared to everything
enabled but the 8-tap, 1/8-subpel and 1/16-subpel uv. Overall the
gains are about 3% now. Hopefully there are no more bugs lingering.
Apparently the ssse3 bug affected the quarter subpel results more than
the eighth pel ones (which is understandable because one bad predictor
due to the bug matters less if there are a lot more subpel options
available, as in the 1/8 subpel case).
The results in the 4th column correspond to the current settings.
The first two columns correspond to two settings of adaptive switching
of the 1/4 or 1/8 subpel mode based on an initial Q estimate. These
do not work as well as just using 1/8 all the time yet.
Change-Id: I3ef392ad338329f4d68a85257a49f2b14f3af472
This is the initial patch for supporting 1/8th-pel
motion. Currently, if we configure with enable-high-precision-mv,
all motion vectors default to 1/8 pel. Encode and
decode sync fine with the current code. In the next phase
the code will be refactored so that we can choose the 1/8
pel mode adaptively at a frame/segment/MB level.
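For context on the 1/16th-pel UV interpolation added in a later patch
below: under 4:2:0 subsampling, chroma moves by half the luma
displacement, so a 1/8-pel luma MV corresponds numerically to a
1/16-pel chroma MV. A minimal sketch with placeholder names:

    typedef struct { int row, col; } MV;  /* luma MV in 1/8-pel units */

    /* The chroma displacement is half the luma displacement, so the same
     * integer value read in 1/16-pel units is the chroma MV; the low 4
     * bits are then the 1/16-pel phase that selects the sub-pel filter. */
    void chroma_subpel_phase(MV luma_mv, int *row_phase_q4, int *col_phase_q4) {
      *row_phase_q4 = luma_mv.row & 15;   /* 1/16-pel phase, 0..15 */
      *col_phase_q4 = luma_mv.col & 15;
    }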
Derf results:
http://www.corp.google.com/~debargha/vp8_results/enhinterp_hpmv.html
(about 0.83% better than 8-tap interpolation)
Patch 3: Rebased. Also adding 1/16th pel interpolation for U and V
Patch 4: HD results.
http://www.corp.google.com/~debargha/vp8_results/enhinterp_hd_hpmv.html
Seems impressive (unless I am doing something wrong).
Patch 5: Added mmx/sse for bilateral filtering, as well as enforced
use of C versions of subpel filters with 8 taps and 1/16th pel;
also redesigned the 8-tap filters to reduce the cut-off in order to
introduce a denoising effect. There is a new configure option,
sixteenth-subpel-uv, which will use 1/16th-pel interpolation for
UV if the motion vectors have 1/8-pel accuracy.
With these fixes the results are promising on the derf set. The enhanced
interpolation option with 8 taps alone gives a 3% improvement over the
derf set:
http://www.corp.google.com/~debargha/vp8_results/enhinterpn.html
Results on high precision mv and on the hd set are to follow.
Patch 6: Adding a missing condition for CONFIG_SIXTEENTH_SUBPEL_UV in
vp8/common/x86/x86_systemdependent.c
Patch 7: Cleaning up various debug messages.
Patch 8: Merge conflict
Change-Id: I5b1d844457aefd7414a9e4e0e06c6ed38fd8cc04
Yunqing fixed an oddity in the UVIntra skippable evaluation for the
stable branch, which brought up the fact that the evaluation is broken.
The issue was that for MBs with a 2nd order block, the eob for 1st
order blocks is set at 1. The previous evaluation did not take that
into account. This commit intends to fix the problem. The commit also
absorbs Yunqing's fix for the UVIntra skippable evaluation.
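A minimal sketch of a skippable test that respects this, assuming the
usual 16 luma + 8 chroma + 1 Y2 block layout; the names and the exact
evaluation in the tree may differ.

    /* eobs: end-of-block positions for the 24 Y/U/V blocks plus Y2 at 24. */
    int mb_is_skippable(const int eobs[25], int has_y2_block) {
      int skip = 1;
      int y_eob_limit = has_y2_block ? 1 : 0;  /* DC carried by Y2 if present */
      for (int i = 0; i < 16; ++i)             /* 16 luma blocks */
        skip &= (eobs[i] <= y_eob_limit);
      for (int i = 16; i < 24; ++i)            /* 8 chroma blocks */
        skip &= (eobs[i] == 0);
      if (has_y2_block)                        /* the Y2 block itself */
        skip &= (eobs[24] == 0);
      return skip;
    }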
Tests on HD showed some good gains in combination with the LPF bias fix:
http://www.corp.google.com/~yaowu/no_crawl/LPFBias_FixSkip.html
(avg psnr: .34%, glb psnr: .32%, ssim: .22%)
Change-Id: I36af11c8ef7f643e8ff46da7bf3a167b437039d4
The commit rationalized and simplified the entropy context conversion
between MBs using the 8x8 transform and MBs using the 4x4 transform. The
old version had a number of oddities in how a 4x4-transform MB's context
is used for 8x8 blocks other than the first 8x8 within an MB.
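A minimal sketch of the kind of conversion involved, with placeholder
names: the above/left entropy context for an 8x8 block is derived from
the two per-4x4 context entries that the block spans.

    typedef char ENTROPY_CONTEXT;  /* nonzero: that neighbour had coefficients */

    /* a: 4 above contexts for the MB, l: 4 left contexts; (x8, y8) in {0,1}. */
    int context_for_8x8_block(const ENTROPY_CONTEXT a[4],
                              const ENTROPY_CONTEXT l[4], int x8, int y8) {
      int above = a[2 * x8] | a[2 * x8 + 1];  /* the two 4x4 columns it covers */
      int left = l[2 * y8] | l[2 * y8 + 1];   /* the two 4x4 rows it covers */
      return (above != 0) + (left != 0);      /* context 0, 1 or 2 */
    }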
Tests showed the change has a gain of ~.1% for avg psnr, glb psnr and
ssim on the limited HD set.
Change-Id: I774536c416baa6845aa741f956d8a69fa40e5d47
This commit changed the UV r/d calculation in the mode decision process to
properly account for the rate of 8x8 transform coefficients.
Change-Id: I485f8f35f2b61db0b6539beb32e83481b1cf083b
Previously, the scaling related to the extended quantizer range happened
in the dequantization stage, which meant the coefficients from the forward
transform were at a different scale (4x) from the dequantized coefficients.
This worked fine when no distortion computation was done based on the
8x8 transform, but it completely wrecked the distortion estimation
based on transform coefficients and dequantized transform coefficients
introduced in commit f64725a00 for macroblocks using the 8x8 transform.
This commit fixes the issue by moving the scaling into the inverse 8x8
transform stage.
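For reference, a minimal sketch of the transform-domain distortion that
motivated the change; the scaling shift is a placeholder, and the sum
only measures the intended quantity when coeff and dqcoeff share the
same scale, which is what this commit restores.

    long long tx_domain_distortion_8x8(const short *coeff,
                                       const short *dqcoeff) {
      long long sse = 0;
      for (int i = 0; i < 64; ++i) {
        long long d = (long long)coeff[i] - dqcoeff[i];
        sse += d * d;
      }
      /* undo the transform's fixed-point gain so the result is in
       * pixel-domain units; the exact shift depends on the transform
       * scaling in use */
      return sse >> 4;
    }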
TODO: test and verify the accuracy of the transform/quantization pipeline.
Change-Id: Iff77b36a965c2a6b247e59b9c59df93eba5d60e2