Some cleanups that will make it easier to maintain the code
and to incorporate upcoming changes to entropy coding for the
hybrid transforms.
Change-Id: I44bdba368f7b8bf203161d7a6d3b1fc2c9e21a8f
If the decoder crashed and returned an error before it set up
block offsets but after it set up frame buffers, we had a
problem decoding the next keyframe because the block offsets
were never set.
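As a rough sketch of the intent (with hypothetical names, not the
actual decoder code), the keyframe path can simply re-derive the
block offsets whenever frame buffers exist but the offsets were
never set, so a partial, failed setup cannot poison the next
keyframe:

  /* Hypothetical sketch; decoder_ctx_t, setup_block_offsets() and the
   * flags are illustrative names, not the real libvpx structures. */
  typedef struct {
    int frame_buffers_ready;  /* frame buffers were allocated */
    int block_offsets_ready;  /* per-block offsets were derived */
  } decoder_ctx_t;

  static void setup_block_offsets(decoder_ctx_t *ctx) {
    /* ... compute per-macroblock offsets into the frame buffers ... */
    ctx->block_offsets_ready = 1;
  }

  static int decode_keyframe(decoder_ctx_t *ctx) {
    /* Do not rely on state left behind by an earlier, failed decode. */
    if (ctx->frame_buffers_ready && !ctx->block_offsets_ready)
      setup_block_offsets(ctx);
    /* ... decode the keyframe ... */
    return 0;
  }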
Change-Id: Ied2866e9770d80fc66241d5e0d978d4f5f9cdd89
This commit merges into the NEWBESTREFMV experiment those
parts of CONFIG_NEW_MVREF that specifically relate to choosing
a better set of candidate MV references.
CONFIG_NEW_MVREF will then be used for changes relating to the
explicit coding of a cost-optimized MV reference in the
bitstream as part of MV coding.
Change-Id: Ied982c0ad72093eab29e38b8cd74d5c3d7458b10
Extend the experiment to use both vectors from MBs coded
using compound prediction as candidates.
In the final sort, only consider the best 4 candidates for
now, but make sure 0,0 is always one of them.
Other minor changes to the new MV reference code.
Pass the MV list in to vp8_find_best_ref_mvs().
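A minimal sketch of the pruning rule described above (sort, keep
the best 4, force 0,0 into the list); the names, types and scoring
are illustrative, not the actual vp8_find_best_ref_mvs() code:

  #include <stdlib.h>

  #define MV_REF_COUNT 4  /* keep only the best 4 candidates for now */

  typedef struct { int row, col, score; } mv_candidate_t;

  static int cmp_score(const void *a, const void *b) {
    /* lower score (e.g. SAD) sorts first */
    return ((const mv_candidate_t *)a)->score -
           ((const mv_candidate_t *)b)->score;
  }

  /* Keep the best MV_REF_COUNT candidates; make sure 0,0 survives.
   * Assumes n >= 1. */
  static int prune_mv_candidates(mv_candidate_t *list, int n) {
    int i, has_zero = 0;
    qsort(list, n, sizeof(*list), cmp_score);
    if (n > MV_REF_COUNT) n = MV_REF_COUNT;
    for (i = 0; i < n; ++i)
      if (list[i].row == 0 && list[i].col == 0) has_zero = 1;
    if (!has_zero) {  /* replace the worst survivor with 0,0 */
      list[n - 1].row = 0;
      list[n - 1].col = 0;
    }
    return n;
  }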
Change-Id: Ib96220c33c6b80bd1d5e0fbe8b68121be7997095
Adds a new experiment with redesigned/refactored motion vector
entropy coding. The patch also takes a first step towards
separating the integer and fractional pel components of an MV.
However, the fractional pel encoding still depends on the
integer pel part, so the two are not fully independent. Further
experiments are in progress to see how much they can be
decoupled without affecting performance.
All components, including entropy coding/decoding, costing for
MV search, and forward and backward updates to probability
tables, have been implemented.
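As a toy illustration of the integer/fractional split (assuming MV
components stored in 1/8-pel units; the experiment's actual coding
of the two parts is not shown here):

  #include <stdio.h>
  #include <stdlib.h>

  #define FRAC_BITS 3                 /* 1/8-pel precision assumed */
  #define FRAC_MASK ((1 << FRAC_BITS) - 1)

  static void split_mv_comp(int comp, int *int_pel, int *frac_pel) {
    int mag = abs(comp);
    *int_pel = mag >> FRAC_BITS;      /* full-pel magnitude */
    *frac_pel = mag & FRAC_MASK;      /* 1/8-pel remainder */
  }

  int main(void) {
    int i, f;
    split_mv_comp(-27, &i, &f);       /* -27/8 pel = -(3 + 3/8) pel */
    printf("int=%d frac=%d/8\n", i, f);
    return 0;
  }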
Results so far:
derf: +0.19%
std-hd: +0.28%
yt: +0.80%
hd: +1.15%
Patch: Simplifies the fractional pel models:
derf: +0.284%
std-hd: +0.289%
yt: +0.849%
hd: +1.254%
Patch: Some changes in the models, rebased.
derf: +0.330%
std-hd: +0.306%
yt: +0.816%
hd: +1.225%
Change-Id: I646b3c48f3587f4cc909639b78c3798da6402678
Adjusts some of the qualification thresholds in mfqe to
eliminate artifacts due to wrong decisions. In addition, a new
qualification criterion is used to disable mfqe if the quality
of the previous frame is itself not good enough.
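The extra gate might look roughly like the sketch below; the
metric (a quantizer index for the previous frame) and the threshold
are assumptions for illustration, not the actual mfqe criteria:

  #define MFQE_PREV_QINDEX_LIMIT 100  /* illustrative threshold */

  static int allow_mfqe(int mfqe_enabled, int prev_frame_base_qindex) {
    /* If the previous frame was itself coded at low quality, borrowing
     * from it would mostly copy its artifacts, so skip mfqe. */
    if (!mfqe_enabled) return 0;
    if (prev_frame_base_qindex > MFQE_PREV_QINDEX_LIMIT) return 0;
    return 1;
  }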
Change-Id: I4097c20b7fd4fcc60cc3003c1e33e8faae2ff066
The denoiser function was modified to reduce the computational
complexity.
1. The denoiser C function modification:
The original implementation calculated a pixel's
filter_coefficient based on the pixel value difference between
the current raw frame and the last denoised raw frame, and
stored the coefficients in lookup tables. For each pixel c, its
coefficient was found using
filter_coefficient[c] = LUT[abs_diff[c]];
and then the filtering operation was applied to the pixel.
The denoising filter cost about 12% of encoding time when it
was turned on, and half of that time was spent looking up
coefficients in the tables. To simplify the process, a shortcut
was taken.
The pixel adjustments vs. pixel diff value were calculated ahead of time.
adjustment = filtered_value - current_raw
= (filter_coefficient * diff + 128) >> 8
The adjustment vs. diff curve becomes flat very quickly as
diff increases. This allowed us to use only a few levels to get
a close approximation of the curve (a sketch is included below,
after item 2). Following the denoiser algorithm, the adjustments
are further modified according to how big the motion magnitude is.
2. The sse2 function was rewritten.
This change made the denoiser filter function 3x faster and
improved encoder performance by 7% ~ 10% with the denoiser on.
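A sketch of the shortcut in item 1 (the thresholds and levels below
are illustrative only, not the values used in the actual denoiser):

  /* Map |diff| to one of a few precomputed adjustment levels instead of
   * doing a per-pixel coefficient lookup. */
  static int diff_to_adjustment(int diff) {
    int absdiff = diff < 0 ? -diff : diff;
    int adj;
    if (absdiff <= 3)
      adj = absdiff;      /* steep start of the curve */
    else if (absdiff <= 7)
      adj = 4;
    else if (absdiff <= 15)
      adj = 5;
    else
      adj = 6;            /* curve is essentially flat from here on */
    return diff < 0 ? -adj : adj;
  }

  /* Denoised pixel = raw pixel nudged toward the last denoised frame. */
  static unsigned char denoise_pixel(unsigned char raw, unsigned char prev) {
    int v = raw + diff_to_adjustment(prev - raw);
    if (v < 0) v = 0;
    if (v > 255) v = 255;
    return (unsigned char)v;
  }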
Change-Id: I93a4308963b8e80c7307f96ffa8b8c667425bf50
Replace DECLARE_ALIGNED_ with vpx_memalign()
DECLARE_ALIGNED (__declspec(align())) does not work as intended when
used on class data members:
Data in classes or structures is aligned within the class or structure
at the minimum of its natural alignment and the current packing setting
(from #pragma pack or the /Zp compiler option)
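The replacement pattern looks like the sketch below (the struct and
helper names are illustrative; only vpx_memalign()/vpx_free() are the
real libvpx calls):

  #include <stddef.h>
  #include "vpx_mem/vpx_mem.h"

  typedef struct {
    /* was: DECLARE_ALIGNED(16, unsigned char, buf[SIZE]); as a member,
     * whose alignment MSVC limits to the current packing setting */
    unsigned char *buf;
  } example_ctx_t;

  static int example_ctx_init(example_ctx_t *ctx, size_t size) {
    ctx->buf = (unsigned char *)vpx_memalign(16, size);
    return ctx->buf != NULL ? 0 : -1;
  }

  static void example_ctx_free(example_ctx_t *ctx) {
    vpx_free(ctx->buf);
    ctx->buf = NULL;
  }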
Change-Id: I304aaa6c3716fbfae24675ecf192f4b40787e83e
Enable ADST/DCT of dimension 16x16 for I16X16 modes. This change provides
benefits mostly for hd sequences.
Set up the framework for a selectable transform dimension.
Also allow a quantization parameter threshold to control the
use of the hybrid transform. (This is currently disabled by
always setting the threshold above the quantization parameter;
adaptive thresholding can be built on top of this, which will
further improve coding performance.)
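A minimal sketch of what such a gate could look like; the comparison
direction, threshold value and names here are assumptions for
illustration only:

  #define QINDEX_RANGE      256
  #define HYBRID_QP_THRESH  QINDEX_RANGE  /* pinned so the gate never bites */

  static int use_hybrid_transform(int qindex, int is_i16x16_mode) {
    /* Use ADST/DCT only below the QP threshold; adaptive thresholding
     * would adjust HYBRID_QP_THRESH instead of pinning it. */
    return is_i16x16_mode && qindex < HYBRID_QP_THRESH;
  }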
The coding performance gains (with respect to the codec that has all
other configuration settings turned on) are
derf: 0.013
yt: 0.086
hd: 0.198
std-hd: 0.501
Change-Id: Ibb4263a61fc74e0b3c345f54d73e8c73552bf926
The transform functions in the experimental branch absorbed a
scaling factor of 4 to allow quantization steps closer to a
unit quantizer. This commit added scaling code between the
forward and inverse transforms to properly account for that
factor.
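As an arithmetic sketch only (rounding details and the exact spot in
the pipeline where this happens are not shown): with the forward
transform output scaled up by 4, the extra factor has to be divided
back out on the way to the inverse transform so that a round trip
returns the original scale.

  static void rescale_coeffs(short *coeff, int n) {
    int i;
    for (i = 0; i < n; ++i)
      coeff[i] = (short)((coeff[i] + 2) >> 2);  /* divide out the factor of 4 */
  }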
Change-Id: I9a573ddc1ffa74973b34800a5da1a56dbabe0949
Alternative strategy for finding a list of candidate motion
vectors to use as reference values in MV coding and as nearest
and near.
Sort by SAD in vp8_find_best_ref_mvs() rather than just picking
the best. Allow 0,0 as a best-ref option, but not as nearest or
near unless there are no alternatives.
Encode/Decode verified on at least some clips.
Some commented-out experimental and stats code is still in
place.
The gain over the existing code averages about 1% on derf (all
metrics), with improvement on all clips. Other test results are
pending.
The entropy coding of the mode (nearest/near, etc.) still
depends on the old "findnear" code, so this needs looking at
and may provide room for further gains.
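A sketch of the selection rule only; the types and the real sorting
code in vp8_find_best_ref_mvs() differ. It assumes the candidate
list is already ordered by ascending SAD:

  typedef struct { int row, col; } mv_t;

  static int is_zero_mv(const mv_t *mv) {
    return mv->row == 0 && mv->col == 0;
  }

  static void pick_refs(const mv_t *sorted, int n, mv_t *best_ref,
                        mv_t *nearest_mv, mv_t *near_mv) {
    int i, picked = 0;
    const mv_t zero = { 0, 0 };
    *best_ref = n > 0 ? sorted[0] : zero;    /* 0,0 is allowed as best ref */
    *nearest_mv = zero;
    *near_mv = zero;
    for (i = 0; i < n && picked < 2; ++i) {
      if (is_zero_mv(&sorted[i])) continue;  /* not as nearest/near ...    */
      if (picked == 0) *nearest_mv = sorted[i];
      else *near_mv = sorted[i];
      ++picked;
    }
    /* ... unless there are no alternatives, in which case they stay 0,0. */
  }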
Change-Id: I871d7cba1d1c379c4bad9bcccce1fb19c46b8247
For videos with a large static background (such as video
conferencing clips), the mode decision was biased toward ZEROMV
in order to obtain a stable background. The percentage of
ZEROMV on the last frame was used to predict whether there is a
static area in the current frame, and the motion vectors of
already-encoded neighboring macroblocks were checked to make
sure the local area has low motion.
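The two checks could be sketched as below; the thresholds and names
are illustrative, not the values used in the encoder:

  #include <stdlib.h>

  #define ZEROMV_PCT_THRESH  60  /* % of ZEROMV MBs on the last frame */
  #define LOW_MOTION_THRESH   8  /* max |mv| component, illustrative  */

  static int small_mv(int row, int col) {
    return abs(row) <= LOW_MOTION_THRESH && abs(col) <= LOW_MOTION_THRESH;
  }

  /* Bias toward ZEROMV only when the last frame was mostly static AND the
   * already-coded left/above neighbors show low motion. */
  static int bias_to_zeromv(int last_frame_zeromv_pct,
                            int left_mv_row, int left_mv_col,
                            int above_mv_row, int above_mv_col) {
    if (last_frame_zeromv_pct < ZEROMV_PCT_THRESH) return 0;
    return small_mv(left_mv_row, left_mv_col) &&
           small_mv(above_mv_row, above_mv_col);
  }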
Change-Id: I05b3241d3a56a0bda88b6681e5646c1c8baf2e57