The changes are still temporary; the final transforms, especially the
inverse ones, should take into account accuracy, complexity, and
sign-bias, which should be decided at a later time.
Change-Id: I116b0c70b25f5ee324ae5713d4564f5d0aa27151
During the work on extend_qrange, we rolled a factor of 2 from
quantization/dequantization into the 2nd order Walsh-Hadamard transform.
This commit does the same for the 2nd order Haar transform, so it can
share the same quantization process as the 2nd order WHT.
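For reference, a minimal sketch of the idea, with a hypothetical helper
name and an assumed input layout (the four 2nd order DC values in raster
order); for a 2x2 block the Haar and Walsh-Hadamard butterflies coincide
up to scaling, and the factor of 2 that used to live in
quantization/dequantization is folded into the transform output:

    /* Sketch only, not the libvpx function: forward 2x2 Haar on the four
     * 2nd order DC coefficients, with the factor of 2 formerly applied
     * during (de)quantization rolled into the transform output. */
    static void haar_2x2_fwd_scaled(const int in[4], int out[4]) {
      const int a = in[0] + in[1];
      const int b = in[0] - in[1];
      const int c = in[2] + in[3];
      const int d = in[2] - in[3];

      out[0] = (a + c) << 1;   /* the <<1 is the rolled-in factor of 2 */
      out[1] = (b + d) << 1;
      out[2] = (a - c) << 1;
      out[3] = (b - d) << 1;
    }

With the output pre-scaled this way, the same quantizer tables can serve
both 2nd order transforms.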
Change-Id: I734af4a20ea8149a01b5b1971a065092977dfe33
Previously, the scaling related to the extended quantization range
happened in the dequantization stage, which meant the coefficients from
the forward transform were at a different scale (4x) from the
dequantized coefficients. This worked fine when no distortion
computation was done based on the 8x8 transform, but it completely
wrecked the distortion estimation based on transform coefficients and
dequantized transform coefficients introduced in commit f64725a00 for
macroblocks using the 8x8 transform. This commit fixes the issue by
moving the scaling into the inverse 8x8 transform stage.
TODO: Test and verify the accuracy of the transform/quantization pipeline.
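A minimal sketch of the intended arrangement, using hypothetical helper
names rather than the actual libvpx functions; the point is that
dequantization no longer carries the extended-range scaling, so
coefficient-domain distortion compares values at the same scale, and the
removed factor is absorbed by the inverse transform:

    /* Sketch: dequantization is a plain multiply, leaving the result at
     * the same scale as the forward transform output. */
    static void dequantize_b_8x8_sketch(const short *qcoeff, const short *dq,
                                        short *dqcoeff) {
      int i;
      for (i = 0; i < 64; i++)
        dqcoeff[i] = qcoeff[i] * dq[i];
    }

    /* Coefficient-domain distortion is now meaningful. */
    static long long coeff_sse_8x8_sketch(const short *coeff,
                                          const short *dqcoeff) {
      long long sse = 0;
      int i;
      for (i = 0; i < 64; i++) {
        const long long d = coeff[i] - dqcoeff[i];
        sse += d * d;
      }
      return sse;
    }

    /* The 4x removed from dequantization is instead folded into the
     * inverse 8x8 transform's final rounding/shift; the exact constants
     * depend on the transform's fixed-point layout. */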
Change-Id: Iff77b36a965c2a6b247e59b9c59df93eba5d60e2
I'm basically not convinced that the concept works at all, let alone
that this is the right place to do it. I think if we want something
like this at all, I should integrate it with the main encoding loop
and re-encode checks in onyx_if.c, and show that it has a significant
benefit (which right now, it doesn't; removing this re-encode check
actually increases all metrics by ~0.15%).
Change-Id: I1b597385dc17f468384a994484fb24813389411f
This commit moves segment-based loop filter level selection into the
CONFIG_FEATUREUPDATES experiment. As a previous commit noted,
segment-based loop filter selection helps compression by ~0.1% on the
cif set; the ongoing CONFIG_FEATUREUPDATES experiment makes encoding
updates of the segment-based LPF level more efficient, hence another
0.04% gain on the cif set. The commit also fixes an issue where the
encoder and decoder could use different loop filter levels for one of
the segments.
Change-Id: Ia978b14aae95bb107d561ba53a7a2bb6ff01faf3
This commit adds logic for MBs using dual-pred to compute the rate
estimation based on the correct transform size. The section of code was
previously located under #if CONFIG_DUALPRED and has been made to work
with the T8x8 experiment at the same time.
Change-Id: Iebc2518c03f11378b9c2e72905520f088b54d5c0
Added a bit to signify whether the feature has changed since the last
time we sent it, so that we don't need to send all the data bits for
every feature change.
Also added the config option.
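A minimal sketch of the signaling idea, with hypothetical writer helpers
(write_bit, write_feature_data) standing in for the codec's actual bit
writer:

    #include <string.h>

    struct bitwriter;                                     /* opaque, hypothetical */
    void write_bit(struct bitwriter *w, int bit);         /* hypothetical */
    void write_feature_data(struct bitwriter *w,
                            const int *databits, int n);  /* hypothetical */

    /* One "changed" bit per feature: the data bits are sent only when the
     * feature differs from what was last transmitted. */
    static void write_feature(struct bitwriter *w, const int *cur,
                              const int *last_sent, int n) {
      const int changed = memcmp(cur, last_sent, n * sizeof(*cur)) != 0;
      write_bit(w, changed);
      if (changed)
        write_feature_data(w, cur, n);
    }

The decoder mirrors this: read the bit, and only read the data bits when
it is set, otherwise keep the previously received values.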
Change-Id: I8d3064ce90d4500bf0d5c6b87c664e46138dfcac
Added a frame-level flag to indicate whether coefficient probabilities
are updated at all for the frame.
During the experimental work with the 8x8 transform, it was discovered
that even when no probability is ever updated, the cost of transmitting
"no update" for each probability can add up to a significant overhead.
A single bit to indicate no-update for all coef probs is therefore
helpful, as the test results also demonstrate:
1. On Cif set:
http://www.corp.google.com/~yaowu/no_crawl/t8x8/cif_t8x8_updprob.html
(avg psnr: .14%, glb psnr: .14% SSIM: .13%)
2. On HD set:
http://www.corp.google.com/~yaowu/no_crawl/t8x8/HD_t8x8_updprob.html
(avg psnr: .02% glb psnr: .01% SSIM: .02%)
It should be noted that the gain on HD is smaller because the average
bit rate is much higher relative to the overhead bit cost.
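A minimal sketch of the encoder side, with hypothetical writer helpers
and a flat probability array; the real per-probability update flags are
coded with their own probabilities, but the structure is the same:

    struct bitwriter;                                         /* opaque, hypothetical */
    void write_bit(struct bitwriter *w, int bit);             /* hypothetical */
    void write_literal(struct bitwriter *w, int v, int bits); /* hypothetical */

    static void write_coef_prob_updates(struct bitwriter *w,
                                        const unsigned char *new_probs,
                                        const unsigned char *old_probs,
                                        int num_probs) {
      int i, any_update = 0;
      for (i = 0; i < num_probs; i++)
        if (new_probs[i] != old_probs[i]) { any_update = 1; break; }

      write_bit(w, any_update);      /* the new frame-level flag */
      if (!any_update)
        return;                      /* one bit replaces all the per-prob
                                        "no update" flags for this frame */

      for (i = 0; i < num_probs; i++) {
        const int update = new_probs[i] != old_probs[i];
        write_bit(w, update);
        if (update)
          write_literal(w, new_probs[i], 8);
      }
    }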
Change-Id: I46db270e693ee8799fef34a14d8260868ce4cd16
For an 8x8 transformed macroblock, the 2nd order transform is a 2x2
Haar transform, so there are only 4 coefficients in total. A previous
merge changed these to 64, causing crashes when encoding with the 8x8
transform enabled (i.e. when the input video image size is > 640x360).
This commit reverts them back to 4 and fixes the crashes.
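Put another way (illustrative names only, not the actual libvpx
identifiers), the 2nd order block size follows the transform size of
the macroblock, and any coefficient or EOB bound for it must do the
same:

    /* Sketch: number of 2nd order (Y2) coefficients per transform size. */
    enum {
      Y2_COEFFS_T4X4 = 16,  /* 4x4 WHT over the sixteen 4x4 luma DCs */
      Y2_COEFFS_T8X8 = 4    /* 2x2 Haar over the four 8x8 luma DCs   */
    };

    static int y2_coeff_count(int tx_is_8x8) {
      return tx_is_8x8 ? Y2_COEFFS_T8X8 : Y2_COEFFS_T4X4;
    }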
Change-Id: I3290b81f8c0d32c7efec03093a61ea57736c0550
For the experimental branch we are trying to slim the codebase down,
removing features such as threading for now, as they complicate the
process of development and testing.
Change-Id: I657c0246aef4d1fa8c8ffc6a1adfeee45bce8e24
In summary, this commit encompasses a series of changes in an attempt
to improve 8x8 transform based coding and help overall compression
quality. Please refer to the detailed commit history below for the
rationale underlying the series of changes:
a. A frame-level flag indicates if the 8x8 transform is used at all.
b. The 8x8 transform is not used for key frames or small image sizes.
c. On inter coded frames, macroblocks using modes B_PRED, SPLIT_MV
and I8X8_PRED are forced to use 4x4 transform based coding; the rest
use 8x8 transform based coding (see the sketch after this list).
d. The encoder and decoder make the same assumption about the
relationship between prediction modes and transform size, therefore no
signaling is encoded in the bitstream.
e. The mode decision process now calculates the rate and distortion
scores using the respective transforms.
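A minimal sketch of the shared encoder/decoder convention in points b-d;
the function name and the placeholder mode enum are illustrative, not
the codec's actual identifiers or values:

    enum { DC_PRED, B_PRED, SPLIT_MV, I8X8_PRED };   /* placeholder mode ids */
    typedef enum { TX_4X4, TX_8X8 } TX_SIZE;

    /* The transform size is implied by frame state and prediction mode,
     * so nothing is signalled per macroblock. */
    static TX_SIZE tx_size_for_mb(int frame_allows_8x8, int is_key_frame,
                                  int mb_mode) {
      if (!frame_allows_8x8 || is_key_frame)
        return TX_4X4;
      if (mb_mode == B_PRED || mb_mode == SPLIT_MV || mb_mode == I8X8_PRED)
        return TX_4X4;          /* sub-macroblock prediction keeps 4x4 */
      return TX_8X8;            /* all remaining (16x16) modes */
    }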
Overall test results:
1. HD set
http://www.corp.google.com/~yaowu/no_crawl/t8x8/HD_t8x8_20120206.html
(avg psnr: 3.09% glb psnr: 3.22%, ssim: 3.90%)
2. Cif set:
http://www.corp.google.com/~yaowu/no_crawl/t8x8/cif_t8x8_20120206.html
(avg psnr: -0.03%, glb psnr: -0.02%, ssim: -0.04%)
It should be noted here that, as 8x8 transform coding itself is
disabled for cif-size clips, the 0.03% loss is purely from the
1 bit/frame flag overhead indicating whether the 8x8 transform is used
for the frame.
---patch history for future reference---
Patch 1:
This commit tries to select the transform size based on the macroblock
prediction mode. If the size of a prediction mode is 16x16, then the
macroblock is forced to use the 8x8 transform. If the prediction mode
is B_PRED, SPLITMV or I8X8_PRED, then the macroblock is forced to use
the 4x4 transform. Tests on the following HD clips showed mixed results
(all HD clips only used the first 100 frames in the test):
http://www.corp.google.com/~yaowu/no_crawl/t8x8/hdmodebased8x8.html
http://www.corp.google.com/~yaowu/no_crawl/t8x8/hdmodebased8x8_log.html
While the results are mixed and overall negative, it is interesting to
see that 8x8 helped a few of the clips.
Patch 2:
This patch tries to hard-wire the selection of transform size based on
prediction modes, without using segmentation to signal the transform
size. The encoder and decoder both make the same assumption that all
macroblocks use the 8x8 transform except when the prediction mode is
B_PRED, I8X8_PRED or SPLITMV. Test results are as follows:
http://www.corp.google.com/~yaowu/no_crawl/t8x8/cifmodebase8x8_0125.html
http://www.corp.google.com/~yaowu/no_crawl/t8x8/hdmodebased8x8_0125log.html
Interestingly, by removing the overhead of coding the segmentation, the
results on this limited HD set have turned positive on average.
Patch 3:
This patch disabled the use of the 8x8 transform on key frames and kept
the logic from patch 2 for inter frames only. Test results on the HD
set turned decidedly positive with the 8x8 transform enabled on inter
frames with 16x16 prediction modes
(avg psnr: .81% glb psnr: .82% ssim: .55%):
http://www.corp.google.com/~yaowu/no_crawl/t8x8/hdintermode8x8_0125.html
Results on the cif set are still negative overall.
Patch 4:
Continued from the last patch, but now in the mode decision process the
rate and distortion estimates are computed based on 8x8 transform
results for MBs with modes associated with the 8x8 transform. This
patch also fixed a problem related to segment-based EOB coding when the
8x8 transform is used. The patch significantly improved the results on
HD clips:
http://www.corp.google.com/~yaowu/no_crawl/t8x8/hd8x8RDintermode.html
(avg psnr: 2.70% glb psnr: 2.76% ssim: 3.34%)
Results on cif also improved, though they are still negative compared
to the baseline that uses the 4x4 transform only:
http://www.corp.google.com/~yaowu/no_crawl/t8x8/cif8x8RDintermode.html
(avg psnr: -.78% glb psnr: -.86% ssim: -.19%)
Patch 5:
This patch does 3 things:
a. A bunch of decoder bug fixes; encodes and decodes were verified to
have matching recon buffers on a number of encodes of cif-size mobile
and the HD version of _pedestrian.
b. The patch further improved the rate distortion calculation of MBs
that use the 8x8 transform. This provided some further gain in
compression.
c. The patch also got the experimental SEG_LVL_EOB work to work with
8x8 transformed macroblocks; test results indicate it improves the cif
set but hurts the HD set slightly.
Test results on HD clips:
http://www.corp.google.com/~yaowu/no_crawl/t8x8/HD_t8x8_20120201.html
(avg psnr: 3.19% glb psnr: 3.30% ssim: 3.93%)
Test results on cif clips:
http://www.corp.google.com/~yaowu/no_crawl/t8x8/cif_t8x8_20120201.html
(avg psnr: -.47% glb psnr: -.51% ssim: +.28%)
Patch 6:
Added a frame-level flag to indicate if the 8x8 transform is allowed at
all. Temporarily the decision is based on frame size; it can be
optimized later. This keeps the cif results basically unchanged, with a
one bit per frame overhead on both cif and HD clips.
Patch 8:
Rebase and merge to head by PGW.
Fixed some suspect 4s that look like they should be 64s in regard to
segmented EOB. Perhaps #defines would be better.
Built and tested without T8x8 enabled; produces unchanged output.
Patch 9:
Corrected misaligned encode/decode of the "txfm_mode" bit.
Limited testing for correct encode and decode with
T8x8 configured on derf clips.
Change-Id: I156e1405d25f81579d579dff8ab9af53944ec49c
Merged in most of the current common prediction changes
that were under the #if CONFIG_COMPRED option.
Change-Id: If4e6f61dbe7b86dd449f6effbe93b5eb7e893885
Further changes to make experiments with the context
used for coding the dual pred flag easier.
The current best performing method tested on derf is a two-element
context based on the reference frame. I also tried various combinations
of mode and reference frame, as shown in the commented-out cases using
up to 6 contexts.
Derf: +0.26 overall psnr, +0.15% ssim vs the original method.
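A minimal sketch of the two-element context; the mapping from reference
frame to context shown here and the placeholder enum are illustrative,
the point being that the probability used to code the dual-pred flag is
selected by the reference frame:

    enum { INTRA_FRAME, LAST_FRAME, GOLDEN_FRAME, ALTREF_FRAME };  /* placeholders */

    /* Two contexts keyed off the reference frame; combinations of mode
     * and reference frame (up to 6 contexts) were tried and performed
     * worse on derf. */
    static int dual_pred_context(int ref_frame) {
      return ref_frame == LAST_FRAME ? 0 : 1;   /* illustrative split */
    }

    /* usage (hypothetical field names):
     *   prob = cm->dual_pred_prob[dual_pred_context(mbmi->ref_frame)];  */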
Change-Id: I64c21ddec0abbb27feaaeaa1da2e9f164ebaca03
Further use of common prediction functions and experiments
with alternate contexts based on mode and reference frame.
For the Derf set, using the reference frame as the basis of the context
gives +0.18% overall psnr and +0.08 SSIM.
Change-Id: Ie7eb76f329f74c9c698614f01ece31de0b6bfc9e
We should only change the dual prediction mode if we actually entered
the recode branch. Otherwise, we may undo beneficial changes made to
the dual prediction mode in the first encode iteration.
Change-Id: I79fc53e5fd0bb551092ed422c797619f1566f002
Some conditions were guarded by a threshold when they should always
execute. Also, some conditions were testing an array instead of the
values within it.
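For the second point, a minimal self-contained illustration of the bug
class (not the actual libvpx condition): an array name decays to a
pointer, so testing it is always true, while the intended test is on
the stored values:

    #include <stdio.h>

    static int any_used_buggy(const int counts[4]) {
      return counts ? 1 : 0;  /* pointer test: always true for a valid array */
    }

    static int any_used_fixed(const int counts[4]) {
      return (counts[0] | counts[1] | counts[2] | counts[3]) != 0;
    }

    int main(void) {
      const int counts[4] = { 0, 0, 0, 0 };
      printf("%d %d\n", any_used_buggy(counts), any_used_fixed(counts)); /* 1 0 */
      return 0;
    }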
Change-Id: Ia6892945cfbbe07322e6af6be42cd864bf9479c1
The existing code updated the reference frame probabilities before
the test to evaluate the impact of using updated probabilities
in vp8_estimate_entropy_savings().
The estimate of cost and savings is still basic and does not reflect
the new prediction code, but that would require per-MB costings and the
benefit is probably marginal, as this is really just used for rate
estimation in the loop.
Change-Id: Id6ba88ae6e11c273b3159deff70980363ccd8ea1
This commit merges the NEWNEAR experiment such that it
is effectively always on.
The fact that there were changes in the threading code again
highlights the need to strip out such features during the
bitstream development phase as trying to maintain this code
(especially as it is not being tested) slows the development cycle.
Change-Id: I8b34950a1333231ced9928aa11cd6d6459984b65
Initial modifications to make limited use of common prediction
functions.
The only functional change thus far is that updates to the probabilities are
no longer "damped". This was a testing convenience but in fact seems to
help by a little over 0.1% over the derf set.
Change-Id: I8b82907d9d6b6a4a075728b60b31ce93392a5f2e
Trial of a modified prediction function that ranks each possible
reference frame based on a combination of local usage and frame-level
probability. The code is a bit cleaner and simpler.
In direct comparison with the old unpredicted method, with segment-level
coding turned off for mode, ref & EOB, the prediction gives a gain on
derf of around 0.4%. There is some further gain from bug fixes over
earlier code.
With segment coding on, the prediction method is slightly negative on
some very easy clips (at low rates) due to slightly higher overheads,
but better on harder clips. Overall neutral on derf in direct
comparison on the latest code base, but compared to earlier code
without the bug fixes about +0.7% overall psnr, +0.3% SSIM.
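A minimal sketch of the ranking idea; the weighting, the count source
and all names here are assumptions, not the function in this change:

    #define NUM_REFS 4

    /* Rank the candidate reference frames by combining how often each was
     * used by nearby, already-coded macroblocks (local usage) with the
     * frame-level probability of that reference. */
    static void rank_ref_frames(const int local_count[NUM_REFS],
                                const int frame_prob[NUM_REFS],  /* 0..255 */
                                int order[NUM_REFS]) {
      int score[NUM_REFS];
      int i, j;
      for (i = 0; i < NUM_REFS; i++)
        score[i] = 256 * local_count[i] + frame_prob[i];  /* guessed weighting */

      for (i = 0; i < NUM_REFS; i++)
        order[i] = i;
      for (i = 0; i < NUM_REFS - 1; i++)        /* sort by descending score */
        for (j = i + 1; j < NUM_REFS; j++)
          if (score[order[j]] > score[order[i]]) {
            const int t = order[i];
            order[i] = order[j];
            order[j] = t;
          }
    }

The actual reference frame can then be coded relative to this ranking,
so that the most likely reference in that neighbourhood costs the
fewest bits.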
Change-Id: I5b8474658b208134d352d24f6517f25795490789
Extended prediction and coding of the reference frame where a subset of
options is flagged as available at the segment level.
Updated copyright notices.
Switched to SAD in the mbgraph code, as SATD is problematic for the
foreground and background separation since it can ignore large DC shifts.
Change-Id: I661dbbb2f94f3ec0f96bb928c1655e5e415a7de1
As a precursor to encoding 32x32 blocks, this CL adds the ability to
encode the frame a superblock (= 32x32 block) at a time. Within an SB
the 4 individual MBs are encoded in raster order (NW, NE, SW, SE).
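A minimal sketch of that walk order (illustrative loop only; the encode
call and edge handling are placeholders):

    /* Walk the frame a superblock (2x2 macroblocks) at a time; inside
     * each SB the four MBs are visited in raster order NW, NE, SW, SE. */
    static void encode_frame_sb_order(int mb_rows, int mb_cols) {
      int sb_row, sb_col, i;
      for (sb_row = 0; sb_row < mb_rows; sb_row += 2) {
        for (sb_col = 0; sb_col < mb_cols; sb_col += 2) {
          for (i = 0; i < 4; i++) {
            const int mb_row = sb_row + (i >> 1);
            const int mb_col = sb_col + (i & 1);
            if (mb_row >= mb_rows || mb_col >= mb_cols)
              continue;                     /* partial SB at the frame edge */
            /* encode_macroblock(mb_row, mb_col);   hypothetical call */
            (void)mb_row;
            (void)mb_col;
          }
        }
      }
    }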
This functionality is added as an experiment which can be enabled by
specifying --enable-superblocks on the command line passed to configure
(the CONFIG_SUPERBLOCKS macro in the code).
To make this work I had to disable the two intra
prediction modes that use data from the top-right of the
MB.
On the tests that I have run, the results produce almost exactly the
same PSNRs & SSIMs, with a very slightly higher average data rate (and
a slightly higher data rate than just disabling the two intra modes in
the original code).
NOTE: This will also break the multi-threaded code.
This replaces the abandoned change:
Iebebe0d1a50ce8c15c79862c537b765a2f67e162
Change-Id: I1bc1a00f236abc1a373c7210d756e25f970fcad8
Commented out changes from an earlier check-in:
"Change Iab7f1eff: vpnext use segref segmentation filter"
which in its current state breaks the decoder.
Change-Id: I9185098aeda8ce65310f338c4c9375f4a39005d3
The bug was introduced by the commit that added the I8X8 intra
prediction mode for inter frames; the decoder was not updated to accept
the additional probability update from the encoder. This typically
causes the decoder to crash when the encoder sends an intra mode
probability update.
Change-Id: Ib7dc42dc77a51178aa9ece41e081829818a25016
This check-in uses the common prediction interface functions to code
the reference frame.
Some updates were made regarding the impact of the new code in the RD
loop, but there remain TODOs in this regard.
Change-Id: I9da3ed5dfdaa489e0903ab33258b0767a585567f
This does not change any functionality; it just modifies the code to
use the common prediction module interface for coding the segment data.
Change-Id: Ifd43e9153573365619774a4f5572215e44fb5aa3
This commit adds the common prediction modules, some data structures
and a config option, but does not use them.
It also corrects a bug in clearing down the MODE_INFO border and
introduces a new element that indicates whether an entry corresponds to
an "in image" macroblock or is part of the border.
Change-Id: Ib69eec0876173ebe9d1de9df9537d0b2447702e0
Picks a per-segment loop filter. Adapts the algorithm to search for a
loop filter value for each separate segment. Further TODO: fix the
bias.
Improvements: .06% overall psnr, .11% ssim
http://www.corp.google.com/~jimbankoski/no_crawl/segmentedpicklpf.html
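A minimal sketch of the per-segment search; the error callback, search
range and exhaustive loop are placeholders for whatever the picker
actually uses (which is likely a coarser search):

    #define MAX_SEGMENTS 4
    #define MAX_LOOP_FILTER 63

    /* Pick one loop filter level per segment by minimizing a
     * filtered-vs-source error restricted to that segment's macroblocks. */
    static void pick_segment_filter_levels(
        int num_segments,
        long long (*segment_error)(int segment, int filter_level),
        int levels[MAX_SEGMENTS]) {
      int seg, level;
      for (seg = 0; seg < num_segments; seg++) {
        long long best_err = -1;
        for (level = 0; level <= MAX_LOOP_FILTER; level++) {
          const long long err = segment_error(seg, level);
          if (best_err < 0 || err < best_err) {
            best_err = err;
            levels[seg] = level;
          }
        }
      }
    }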
Change-Id: Ic6a571c16fcd6ec0139f4de1f8061f87c6515a10
Changed the token costs for 8x8 transformed macroblocks used in trellis
quantization from those derived from the 4x4 transform coefficient
distribution to those derived from the 8x8 transform coefficient
distribution. Test results show this fix helps 8x8 transform based
compression consistently on the cif and hd sets:
http://www.corp.google.com/~yaowu/no_crawl/t8x8/cif_cost8x8only.html
(avg psnr:.14% glb psnr: .17% ssim: .20%)
http://www.corp.google.com/~yaowu/no_crawl/t8x8/hd_cost8x8only.html
(avg psnr:.17% glb psnr: .18% ssim: .58%)
Note: To test the effect of this change, the 8x8 transform was forced
to be used only on 16x16 predicted macroblocks in inter frames; the
effect would be bigger had all macroblocks been forced to use the 8x8
transform.
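In sketch form (hypothetical table names; the real cost tables are per
band and context rather than flat), the trellis simply costs tokens
with the statistics matching the transform actually used:

    extern const unsigned int token_costs_4x4[];   /* hypothetical flat tables */
    extern const unsigned int token_costs_8x8[];

    static const unsigned int *trellis_token_costs(int tx_is_8x8) {
      return tx_is_8x8 ? token_costs_8x8 : token_costs_4x4;
    }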
Change-Id: If9b7868b75357c66541f511e5ee78e4d2d4929a4