/*
 * Copyright (c) 2010 The WebM project authors. All Rights Reserved.
 *
 * Use of this source code is governed by a BSD-style license
 * that can be found in the LICENSE file in the root of the source
 * tree. An additional intellectual property rights grant can be found
 * in the file PATENTS. All contributing project authors may
 * be found in the AUTHORS file in the root of the source tree.
 */

#include "vpx_ports/config.h"
#include "vp9_rtcd.h"
#include "vp9/common/vp9_blockd.h"
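
/* Add a 4x4 block of residuals (diff_ptr, laid out with a row stride of 16)
 * to the 4x4 prediction block and clip each result to [0, 255], writing the
 * reconstructed pixels to dst_ptr. */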
void vp9_recon_b_c(uint8_t *pred_ptr,
                   int16_t *diff_ptr,
                   uint8_t *dst_ptr,
                   int stride) {
  int r, c;

  for (r = 0; r < 4; r++) {
    for (c = 0; c < 4; c++) {
      dst_ptr[c] = clip_pixel(diff_ptr[c] + pred_ptr[c]);
    }

    dst_ptr += stride;
    diff_ptr += 16;
    pred_ptr += 16;
  }
}
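
/* Chroma variant of vp9_recon_b_c: the residual and prediction buffers use a
 * row stride of 8 instead of 16. */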
void vp9_recon_uv_b_c(uint8_t *pred_ptr,
                      int16_t *diff_ptr,
                      uint8_t *dst_ptr,
                      int stride) {
  int r, c;

  for (r = 0; r < 4; r++) {
    for (c = 0; c < 4; c++) {
      dst_ptr[c] = clip_pixel(diff_ptr[c] + pred_ptr[c]);
    }

    dst_ptr += stride;
    diff_ptr += 8;
    pred_ptr += 8;
  }
}
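
/* Reconstruct four horizontally adjacent 4x4 blocks at once (16 pixels wide,
 * 4 rows), clipping each sum of prediction and residual to [0, 255]. */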
void vp9_recon4b_c(uint8_t *pred_ptr,
                   int16_t *diff_ptr,
                   uint8_t *dst_ptr,
                   int stride) {
  int r, c;

  for (r = 0; r < 4; r++) {
    for (c = 0; c < 16; c++) {
      dst_ptr[c] = clip_pixel(diff_ptr[c] + pred_ptr[c]);
    }

    dst_ptr += stride;
    diff_ptr += 16;
    pred_ptr += 16;
  }
}
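
/* Reconstruct two horizontally adjacent 4x4 blocks (8 pixels wide, 4 rows);
 * the residual and prediction rows use a stride of 8. */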
void vp9_recon2b_c(uint8_t *pred_ptr,
                   int16_t *diff_ptr,
                   uint8_t *dst_ptr,
                   int stride) {
  int r, c;

  for (r = 0; r < 4; r++) {
    for (c = 0; c < 8; c++) {
      dst_ptr[c] = clip_pixel(diff_ptr[c] + pred_ptr[c]);
    }

    dst_ptr += stride;
    diff_ptr += 8;
    pred_ptr += 8;
  }
}
#if CONFIG_SUPERBLOCKS
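/* Reconstruct the 16x16 luma plane of a macroblock in place: the residual at
 * xd->block[0].diff is added to the pixels already in dst and clipped. */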
void vp9_recon_mby_s_c(MACROBLOCKD *xd, uint8_t *dst) {
  int x, y;
  BLOCKD *b = &xd->block[0];
  int stride = b->dst_stride;
  int16_t *diff = b->diff;

  for (y = 0; y < 16; y++) {
    for (x = 0; x < 16; x++) {
      dst[x] = clip_pixel(dst[x] + diff[x]);
    }
    dst += stride;
    diff += 16;
  }
}
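
/* Reconstruct both 8x8 chroma planes of a macroblock in place, adding the
 * residuals that start at xd->block[16].diff (U) and xd->block[20].diff (V). */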
void vp9_recon_mbuv_s_c(MACROBLOCKD *xd, uint8_t *udst, uint8_t *vdst) {
  int x, y, i;
  uint8_t *dst = udst;

  for (i = 0; i < 2; i++, dst = vdst) {
    BLOCKD *b = &xd->block[16 + 4 * i];
    int stride = b->dst_stride;
    int16_t *diff = b->diff;

    for (y = 0; y < 8; y++) {
      for (x = 0; x < 8; x++) {
        dst[x] = clip_pixel(dst[x] + diff[x]);
      }
      dst += stride;
      diff += 8;
    }
  }
}
#if CONFIG_TX32X32
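/* Reconstruct the 32x32 luma plane of a superblock in place from the residual
 * stored in xd->sb_coeff_data.diff. */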
void vp9_recon_sby_s_c(MACROBLOCKD *xd, uint8_t *dst) {
  int x, y, stride = xd->block[0].dst_stride;
  int16_t *diff = xd->sb_coeff_data.diff;

  for (y = 0; y < 32; y++) {
    for (x = 0; x < 32; x++) {
      dst[x] = clip_pixel(dst[x] + diff[x]);
    }
    dst += stride;
    diff += 32;
  }
}
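
/* Reconstruct the two 16x16 chroma planes of a superblock in place; the U and
 * V residuals start at offsets 1024 and 1280 in xd->sb_coeff_data.diff. */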
void vp9_recon_sbuv_s_c(MACROBLOCKD *xd, uint8_t *udst, uint8_t *vdst) {
  int x, y, stride = xd->block[16].dst_stride;
  int16_t *udiff = xd->sb_coeff_data.diff + 1024;
  int16_t *vdiff = xd->sb_coeff_data.diff + 1280;

  for (y = 0; y < 16; y++) {
    for (x = 0; x < 16; x++) {
      udst[x] = clip_pixel(udst[x] + udiff[x]);
      vdst[x] = clip_pixel(vdst[x] + vdiff[x]);
    }
    udst += stride;
    vdst += stride;
    udiff += 16;
    vdiff += 16;
  }
}
#endif  // CONFIG_TX32X32
#endif  // CONFIG_SUPERBLOCKS
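
/* Reconstruct the luma plane of a macroblock from its predictor and residual,
 * processing one 16x4 row of blocks per vp9_recon4b() call. */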
void vp9_recon_mby_c(MACROBLOCKD *xd) {
  int i;

  for (i = 0; i < 16; i += 4) {
    BLOCKD *b = &xd->block[i];

    vp9_recon4b(b->predictor, b->diff, *(b->base_dst) + b->dst, b->dst_stride);
  }
}
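
/* Reconstruct a full macroblock: luma in 16x4 strips via vp9_recon4b() and
 * chroma in 8x4 strips via vp9_recon2b(). */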
void vp9_recon_mb_c(MACROBLOCKD *xd) {
  int i;

  for (i = 0; i < 16; i += 4) {
    BLOCKD *b = &xd->block[i];

    vp9_recon4b(b->predictor, b->diff, *(b->base_dst) + b->dst, b->dst_stride);
  }

  for (i = 16; i < 24; i += 2) {
    BLOCKD *b = &xd->block[i];

    vp9_recon2b(b->predictor, b->diff, *(b->base_dst) + b->dst, b->dst_stride);
  }
}