2010-05-18 17:58:33 +02:00
|
|
|
/*
|
2010-09-09 14:16:39 +02:00
|
|
|
* Copyright (c) 2010 The WebM project authors. All Rights Reserved.
|
2010-05-18 17:58:33 +02:00
|
|
|
*
|
2010-06-18 18:39:21 +02:00
|
|
|
* Use of this source code is governed by a BSD-style license
|
2010-06-04 22:19:40 +02:00
|
|
|
* that can be found in the LICENSE file in the root of the source
|
|
|
|
* tree. An additional intellectual property rights grant can be found
|
2010-06-18 18:39:21 +02:00
|
|
|
* in the file PATENTS. All contributing project authors may
|
2010-06-04 22:19:40 +02:00
|
|
|
* be found in the AUTHORS file in the root of the source tree.
|
2010-05-18 17:58:33 +02:00
|
|
|
*/
|
|
|
|
|
Convert subpixel filters to use convolve framework
Update the code to call the new convolution functions to do subpixel
prediction rather than the existing functions. Remove the old C and
assembly code, since it is unused. This causes a 50% performance
reduction on the decoder, but that will be resolved when the asm for
the new functions is available.
There is no consensus for whether 6-tap or 2-tap predictors will be
supported in the final codec, so these filters are implemented in
terms of the 8-tap code, so that quality testing of these modes
can continue. Implementing the lower complexity algorithms is a
simple exercise, should it be necessary.
This code produces slightly better results in the EIGHTTAP_SMOOTH
case, since the filter is now applied in only one direction when
the subpel motion is only in one direction. Like the previous code,
the filtering is skipped entirely on full-pel MVs. This combination
seems to give the best quality gains, but this may be indicative of a
bug in the encoder's filter selection, since the encoder could
achieve the result of skipping the filtering on full-pel by selecting
one of the other filters. This should be revisited.
Quality gains on derf positive on almost all clips. The only clip
that seemed to be hurt at all datarates was football
(-0.115% PSNR average, -0.587% min). Overall averages 0.375% PSNR,
0.347% SSIM.
Change-Id: I7d469716091b1d89b4b08adde5863999319d69ff
2013-01-29 01:59:03 +01:00
|
|
|
#include <assert.h>
|
2010-05-18 17:58:33 +02:00
|
|
|
|
2012-12-23 16:20:10 +01:00
|
|
|
#include "./vpx_config.h"
|
2011-07-25 16:11:24 +02:00
|
|
|
#include "vpx/vpx_integer.h"
|
2012-11-28 19:41:40 +01:00
|
|
|
#include "vp9/common/vp9_blockd.h"
|
Convert subpixel filters to use convolve framework
Update the code to call the new convolution functions to do subpixel
prediction rather than the existing functions. Remove the old C and
assembly code, since it is unused. This causes a 50% performance
reduction on the decoder, but that will be resolved when the asm for
the new functions is available.
There is no consensus for whether 6-tap or 2-tap predictors will be
supported in the final codec, so these filters are implemented in
terms of the 8-tap code, so that quality testing of these modes
can continue. Implementing the lower complexity algorithms is a
simple exercise, should it be necessary.
This code produces slightly better results in the EIGHTTAP_SMOOTH
case, since the filter is now applied in only one direction when
the subpel motion is only in one direction. Like the previous code,
the filtering is skipped entirely on full-pel MVs. This combination
seems to give the best quality gains, but this may be indicative of a
bug in the encoder's filter selection, since the encoder could
achieve the result of skipping the filtering on full-pel by selecting
one of the other filters. This should be revisited.
Quality gains on derf positive on almost all clips. The only clip
that seemed to be hurt at all datarates was football
(-0.115% PSNR average, -0.587% min). Overall averages 0.375% PSNR,
0.347% SSIM.
Change-Id: I7d469716091b1d89b4b08adde5863999319d69ff
2013-01-29 01:59:03 +01:00
|
|
|
#include "vp9/common/vp9_filter.h"
|
2012-11-28 19:41:40 +01:00
|
|
|
#include "vp9/common/vp9_reconinter.h"
|
2012-11-27 22:59:17 +01:00
|
|
|
#include "vp9/common/vp9_reconintra.h"
|
2010-05-18 17:58:33 +02:00
|
|
|
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
void vp9_setup_scale_factors_for_frame(struct scale_factors *scale,
|
|
|
|
YV12_BUFFER_CONFIG *other,
|
|
|
|
int this_w, int this_h) {
|
2013-03-02 02:50:55 +01:00
|
|
|
int other_h = other->y_height;
|
|
|
|
int other_w = other->y_width;
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
|
|
|
|
scale->x_num = other_w;
|
|
|
|
scale->x_den = this_w;
|
|
|
|
scale->x_offset_q4 = 0; // calculated per-mb
|
|
|
|
scale->x_step_q4 = 16 * other_w / this_w;
|
2013-03-02 02:50:55 +01:00
|
|
|
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
scale->y_num = other_h;
|
|
|
|
scale->y_den = this_h;
|
|
|
|
scale->y_offset_q4 = 0; // calculated per-mb
|
|
|
|
scale->y_step_q4 = 16 * other_h / this_h;
|
|
|
|
|
Convert subpixel filters to use convolve framework
Update the code to call the new convolution functions to do subpixel
prediction rather than the existing functions. Remove the old C and
assembly code, since it is unused. This causes a 50% performance
reduction on the decoder, but that will be resolved when the asm for
the new functions is available.
There is no consensus for whether 6-tap or 2-tap predictors will be
supported in the final codec, so these filters are implemented in
terms of the 8-tap code, so that quality testing of these modes
can continue. Implementing the lower complexity algorithms is a
simple exercise, should it be necessary.
This code produces slightly better results in the EIGHTTAP_SMOOTH
case, since the filter is now applied in only one direction when
the subpel motion is only in one direction. Like the previous code,
the filtering is skipped entirely on full-pel MVs. This combination
seems to give the best quality gains, but this may be indicative of a
bug in the encoder's filter selection, since the encoder could
achieve the result of skipping the filtering on full-pel by selecting
one of the other filters. This should be revisited.
Quality gains on derf positive on almost all clips. The only clip
that seemed to be hurt at all datarates was football
(-0.115% PSNR average, -0.587% min). Overall averages 0.375% PSNR,
0.347% SSIM.
Change-Id: I7d469716091b1d89b4b08adde5863999319d69ff
2013-01-29 01:59:03 +01:00
|
|
|
// TODO(agrange): Investigate the best choice of functions to use here
|
|
|
|
// for EIGHTTAP_SMOOTH. Since it is not interpolating, need to choose what
|
|
|
|
// to do at full-pel offsets. The current selection, where the filter is
|
|
|
|
// applied in one direction only, and not at all for 0,0, seems to give the
|
|
|
|
// best quality, but it may be worth trying an additional mode that does
|
|
|
|
// do the filtering on full-pel.
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
if (scale->x_step_q4 == 16) {
|
|
|
|
if (scale->y_step_q4 == 16) {
|
|
|
|
// No scaling in either direction.
|
|
|
|
scale->predict[0][0][0] = vp9_convolve_copy;
|
|
|
|
scale->predict[0][0][1] = vp9_convolve_avg;
|
|
|
|
scale->predict[0][1][0] = vp9_convolve8_vert;
|
|
|
|
scale->predict[0][1][1] = vp9_convolve8_avg_vert;
|
|
|
|
scale->predict[1][0][0] = vp9_convolve8_horiz;
|
|
|
|
scale->predict[1][0][1] = vp9_convolve8_avg_horiz;
|
|
|
|
} else {
|
|
|
|
// No scaling in x direction. Must always scale in the y direction.
|
|
|
|
scale->predict[0][0][0] = vp9_convolve8_vert;
|
|
|
|
scale->predict[0][0][1] = vp9_convolve8_avg_vert;
|
|
|
|
scale->predict[0][1][0] = vp9_convolve8_vert;
|
|
|
|
scale->predict[0][1][1] = vp9_convolve8_avg_vert;
|
|
|
|
scale->predict[1][0][0] = vp9_convolve8;
|
|
|
|
scale->predict[1][0][1] = vp9_convolve8_avg;
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
if (scale->y_step_q4 == 16) {
|
|
|
|
// No scaling in the y direction. Must always scale in the x direction.
|
|
|
|
scale->predict[0][0][0] = vp9_convolve8_horiz;
|
|
|
|
scale->predict[0][0][1] = vp9_convolve8_avg_horiz;
|
|
|
|
scale->predict[0][1][0] = vp9_convolve8;
|
|
|
|
scale->predict[0][1][1] = vp9_convolve8_avg;
|
|
|
|
scale->predict[1][0][0] = vp9_convolve8_horiz;
|
|
|
|
scale->predict[1][0][1] = vp9_convolve8_avg_horiz;
|
|
|
|
} else {
|
|
|
|
// Must always scale in both directions.
|
|
|
|
scale->predict[0][0][0] = vp9_convolve8;
|
|
|
|
scale->predict[0][0][1] = vp9_convolve8_avg;
|
|
|
|
scale->predict[0][1][0] = vp9_convolve8;
|
|
|
|
scale->predict[0][1][1] = vp9_convolve8_avg;
|
|
|
|
scale->predict[1][0][0] = vp9_convolve8;
|
|
|
|
scale->predict[1][0][1] = vp9_convolve8_avg;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
// 2D subpel motion always gets filtered in both directions
|
|
|
|
scale->predict[1][1][0] = vp9_convolve8;
|
|
|
|
scale->predict[1][1][1] = vp9_convolve8_avg;
|
|
|
|
}
|
|
|
|
|
|
|
|
void vp9_setup_interp_filters(MACROBLOCKD *xd,
|
|
|
|
INTERPOLATIONFILTERTYPE mcomp_filter_type,
|
|
|
|
VP9_COMMON *cm) {
|
|
|
|
int i;
|
|
|
|
|
|
|
|
/* Calculate scaling factors for each of the 3 available references */
|
|
|
|
for (i = 0; i < 3; ++i) {
|
|
|
|
if (cm->active_ref_idx[i] >= NUM_YV12_BUFFERS) {
|
|
|
|
memset(&cm->active_ref_scale[i], 0, sizeof(cm->active_ref_scale[i]));
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
vp9_setup_scale_factors_for_frame(&cm->active_ref_scale[i],
|
|
|
|
&cm->yv12_fb[cm->active_ref_idx[i]],
|
|
|
|
cm->mb_cols * 16, cm->mb_rows * 16);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (xd->mode_info_context) {
|
|
|
|
MB_MODE_INFO *mbmi = &xd->mode_info_context->mbmi;
|
|
|
|
|
|
|
|
set_scale_factors(xd,
|
|
|
|
mbmi->ref_frame - 1,
|
|
|
|
mbmi->second_ref_frame - 1,
|
|
|
|
cm->active_ref_scale);
|
|
|
|
}
|
|
|
|
|
|
|
|
|
Convert subpixel filters to use convolve framework
Update the code to call the new convolution functions to do subpixel
prediction rather than the existing functions. Remove the old C and
assembly code, since it is unused. This causes a 50% performance
reduction on the decoder, but that will be resolved when the asm for
the new functions is available.
There is no consensus for whether 6-tap or 2-tap predictors will be
supported in the final codec, so these filters are implemented in
terms of the 8-tap code, so that quality testing of these modes
can continue. Implementing the lower complexity algorithms is a
simple exercise, should it be necessary.
This code produces slightly better results in the EIGHTTAP_SMOOTH
case, since the filter is now applied in only one direction when
the subpel motion is only in one direction. Like the previous code,
the filtering is skipped entirely on full-pel MVs. This combination
seems to give the best quality gains, but this may be indicative of a
bug in the encoder's filter selection, since the encoder could
achieve the result of skipping the filtering on full-pel by selecting
one of the other filters. This should be revisited.
Quality gains on derf positive on almost all clips. The only clip
that seemed to be hurt at all datarates was football
(-0.115% PSNR average, -0.587% min). Overall averages 0.375% PSNR,
0.347% SSIM.
Change-Id: I7d469716091b1d89b4b08adde5863999319d69ff
2013-01-29 01:59:03 +01:00
|
|
|
switch (mcomp_filter_type) {
|
|
|
|
case EIGHTTAP:
|
|
|
|
case SWITCHABLE:
|
|
|
|
xd->subpix.filter_x = xd->subpix.filter_y = vp9_sub_pel_filters_8;
|
|
|
|
break;
|
|
|
|
case EIGHTTAP_SMOOTH:
|
|
|
|
xd->subpix.filter_x = xd->subpix.filter_y = vp9_sub_pel_filters_8lp;
|
|
|
|
break;
|
|
|
|
case EIGHTTAP_SHARP:
|
|
|
|
xd->subpix.filter_x = xd->subpix.filter_y = vp9_sub_pel_filters_8s;
|
|
|
|
break;
|
|
|
|
case BILINEAR:
|
|
|
|
xd->subpix.filter_x = xd->subpix.filter_y = vp9_bilinear_filters;
|
|
|
|
break;
|
2013-01-08 23:14:01 +01:00
|
|
|
#if CONFIG_ENABLE_6TAP
|
Convert subpixel filters to use convolve framework
Update the code to call the new convolution functions to do subpixel
prediction rather than the existing functions. Remove the old C and
assembly code, since it is unused. This causes a 50% performance
reduction on the decoder, but that will be resolved when the asm for
the new functions is available.
There is no consensus for whether 6-tap or 2-tap predictors will be
supported in the final codec, so these filters are implemented in
terms of the 8-tap code, so that quality testing of these modes
can continue. Implementing the lower complexity algorithms is a
simple exercise, should it be necessary.
This code produces slightly better results in the EIGHTTAP_SMOOTH
case, since the filter is now applied in only one direction when
the subpel motion is only in one direction. Like the previous code,
the filtering is skipped entirely on full-pel MVs. This combination
seems to give the best quality gains, but this may be indicative of a
bug in the encoder's filter selection, since the encoder could
achieve the result of skipping the filtering on full-pel by selecting
one of the other filters. This should be revisited.
Quality gains on derf positive on almost all clips. The only clip
that seemed to be hurt at all datarates was football
(-0.115% PSNR average, -0.587% min). Overall averages 0.375% PSNR,
0.347% SSIM.
Change-Id: I7d469716091b1d89b4b08adde5863999319d69ff
2013-01-29 01:59:03 +01:00
|
|
|
case SIXTAP:
|
|
|
|
xd->subpix.filter_x = xd->subpix.filter_y = vp9_sub_pel_filters_6;
|
|
|
|
break;
|
2013-01-08 23:14:01 +01:00
|
|
|
#endif
|
|
|
|
}
|
2013-02-21 00:59:20 +01:00
|
|
|
assert(((intptr_t)xd->subpix.filter_x & 0xff) == 0);
|
2012-07-18 22:43:01 +02:00
|
|
|
}
|
|
|
|
|
Convert subpixel filters to use convolve framework
Update the code to call the new convolution functions to do subpixel
prediction rather than the existing functions. Remove the old C and
assembly code, since it is unused. This causes a 50% performance
reduction on the decoder, but that will be resolved when the asm for
the new functions is available.
There is no consensus for whether 6-tap or 2-tap predictors will be
supported in the final codec, so these filters are implemented in
terms of the 8-tap code, so that quality testing of these modes
can continue. Implementing the lower complexity algorithms is a
simple exercise, should it be necessary.
This code produces slightly better results in the EIGHTTAP_SMOOTH
case, since the filter is now applied in only one direction when
the subpel motion is only in one direction. Like the previous code,
the filtering is skipped entirely on full-pel MVs. This combination
seems to give the best quality gains, but this may be indicative of a
bug in the encoder's filter selection, since the encoder could
achieve the result of skipping the filtering on full-pel by selecting
one of the other filters. This should be revisited.
Quality gains on derf positive on almost all clips. The only clip
that seemed to be hurt at all datarates was football
(-0.115% PSNR average, -0.587% min). Overall averages 0.375% PSNR,
0.347% SSIM.
Change-Id: I7d469716091b1d89b4b08adde5863999319d69ff
2013-01-29 01:59:03 +01:00
|
|
|
void vp9_copy_mem16x16_c(const uint8_t *src,
|
2012-11-01 00:09:17 +01:00
|
|
|
int src_stride,
|
2012-12-19 00:31:19 +01:00
|
|
|
uint8_t *dst,
|
2012-11-01 00:09:17 +01:00
|
|
|
int dst_stride) {
|
2012-07-14 00:21:29 +02:00
|
|
|
int r;
|
2010-05-18 17:58:33 +02:00
|
|
|
|
2012-07-14 00:21:29 +02:00
|
|
|
for (r = 0; r < 16; r++) {
|
2011-07-25 16:11:24 +02:00
|
|
|
#if !(CONFIG_FAST_UNALIGNED)
|
2012-07-14 00:21:29 +02:00
|
|
|
dst[0] = src[0];
|
|
|
|
dst[1] = src[1];
|
|
|
|
dst[2] = src[2];
|
|
|
|
dst[3] = src[3];
|
|
|
|
dst[4] = src[4];
|
|
|
|
dst[5] = src[5];
|
|
|
|
dst[6] = src[6];
|
|
|
|
dst[7] = src[7];
|
|
|
|
dst[8] = src[8];
|
|
|
|
dst[9] = src[9];
|
|
|
|
dst[10] = src[10];
|
|
|
|
dst[11] = src[11];
|
|
|
|
dst[12] = src[12];
|
|
|
|
dst[13] = src[13];
|
|
|
|
dst[14] = src[14];
|
|
|
|
dst[15] = src[15];
|
2010-05-18 17:58:33 +02:00
|
|
|
|
|
|
|
#else
|
Convert subpixel filters to use convolve framework
Update the code to call the new convolution functions to do subpixel
prediction rather than the existing functions. Remove the old C and
assembly code, since it is unused. This causes a 50% performance
reduction on the decoder, but that will be resolved when the asm for
the new functions is available.
There is no consensus for whether 6-tap or 2-tap predictors will be
supported in the final codec, so these filters are implemented in
terms of the 8-tap code, so that quality testing of these modes
can continue. Implementing the lower complexity algorithms is a
simple exercise, should it be necessary.
This code produces slightly better results in the EIGHTTAP_SMOOTH
case, since the filter is now applied in only one direction when
the subpel motion is only in one direction. Like the previous code,
the filtering is skipped entirely on full-pel MVs. This combination
seems to give the best quality gains, but this may be indicative of a
bug in the encoder's filter selection, since the encoder could
achieve the result of skipping the filtering on full-pel by selecting
one of the other filters. This should be revisited.
Quality gains on derf positive on almost all clips. The only clip
that seemed to be hurt at all datarates was football
(-0.115% PSNR average, -0.587% min). Overall averages 0.375% PSNR,
0.347% SSIM.
Change-Id: I7d469716091b1d89b4b08adde5863999319d69ff
2013-01-29 01:59:03 +01:00
|
|
|
((uint32_t *)dst)[0] = ((const uint32_t *)src)[0];
|
|
|
|
((uint32_t *)dst)[1] = ((const uint32_t *)src)[1];
|
|
|
|
((uint32_t *)dst)[2] = ((const uint32_t *)src)[2];
|
|
|
|
((uint32_t *)dst)[3] = ((const uint32_t *)src)[3];
|
2010-05-18 17:58:33 +02:00
|
|
|
|
|
|
|
#endif
|
2012-07-14 00:21:29 +02:00
|
|
|
src += src_stride;
|
|
|
|
dst += dst_stride;
|
|
|
|
}
|
2010-05-18 17:58:33 +02:00
|
|
|
}
|
|
|
|
|
Convert subpixel filters to use convolve framework
Update the code to call the new convolution functions to do subpixel
prediction rather than the existing functions. Remove the old C and
assembly code, since it is unused. This causes a 50% performance
reduction on the decoder, but that will be resolved when the asm for
the new functions is available.
There is no consensus for whether 6-tap or 2-tap predictors will be
supported in the final codec, so these filters are implemented in
terms of the 8-tap code, so that quality testing of these modes
can continue. Implementing the lower complexity algorithms is a
simple exercise, should it be necessary.
This code produces slightly better results in the EIGHTTAP_SMOOTH
case, since the filter is now applied in only one direction when
the subpel motion is only in one direction. Like the previous code,
the filtering is skipped entirely on full-pel MVs. This combination
seems to give the best quality gains, but this may be indicative of a
bug in the encoder's filter selection, since the encoder could
achieve the result of skipping the filtering on full-pel by selecting
one of the other filters. This should be revisited.
Quality gains on derf positive on almost all clips. The only clip
that seemed to be hurt at all datarates was football
(-0.115% PSNR average, -0.587% min). Overall averages 0.375% PSNR,
0.347% SSIM.
Change-Id: I7d469716091b1d89b4b08adde5863999319d69ff
2013-01-29 01:59:03 +01:00
|
|
|
void vp9_copy_mem8x8_c(const uint8_t *src,
|
2012-11-01 00:09:17 +01:00
|
|
|
int src_stride,
|
2012-12-19 00:31:19 +01:00
|
|
|
uint8_t *dst,
|
2012-11-01 00:09:17 +01:00
|
|
|
int dst_stride) {
|
2012-07-14 00:21:29 +02:00
|
|
|
int r;
|
|
|
|
|
|
|
|
for (r = 0; r < 8; r++) {
|
2011-07-25 16:11:24 +02:00
|
|
|
#if !(CONFIG_FAST_UNALIGNED)
|
2012-07-14 00:21:29 +02:00
|
|
|
dst[0] = src[0];
|
|
|
|
dst[1] = src[1];
|
|
|
|
dst[2] = src[2];
|
|
|
|
dst[3] = src[3];
|
|
|
|
dst[4] = src[4];
|
|
|
|
dst[5] = src[5];
|
|
|
|
dst[6] = src[6];
|
|
|
|
dst[7] = src[7];
|
2010-05-18 17:58:33 +02:00
|
|
|
#else
|
Convert subpixel filters to use convolve framework
Update the code to call the new convolution functions to do subpixel
prediction rather than the existing functions. Remove the old C and
assembly code, since it is unused. This causes a 50% performance
reduction on the decoder, but that will be resolved when the asm for
the new functions is available.
There is no consensus for whether 6-tap or 2-tap predictors will be
supported in the final codec, so these filters are implemented in
terms of the 8-tap code, so that quality testing of these modes
can continue. Implementing the lower complexity algorithms is a
simple exercise, should it be necessary.
This code produces slightly better results in the EIGHTTAP_SMOOTH
case, since the filter is now applied in only one direction when
the subpel motion is only in one direction. Like the previous code,
the filtering is skipped entirely on full-pel MVs. This combination
seems to give the best quality gains, but this may be indicative of a
bug in the encoder's filter selection, since the encoder could
achieve the result of skipping the filtering on full-pel by selecting
one of the other filters. This should be revisited.
Quality gains on derf positive on almost all clips. The only clip
that seemed to be hurt at all datarates was football
(-0.115% PSNR average, -0.587% min). Overall averages 0.375% PSNR,
0.347% SSIM.
Change-Id: I7d469716091b1d89b4b08adde5863999319d69ff
2013-01-29 01:59:03 +01:00
|
|
|
((uint32_t *)dst)[0] = ((const uint32_t *)src)[0];
|
|
|
|
((uint32_t *)dst)[1] = ((const uint32_t *)src)[1];
|
2010-05-18 17:58:33 +02:00
|
|
|
#endif
|
2012-07-14 00:21:29 +02:00
|
|
|
src += src_stride;
|
|
|
|
dst += dst_stride;
|
|
|
|
}
|
2010-05-18 17:58:33 +02:00
|
|
|
}
|
|
|
|
|
Convert subpixel filters to use convolve framework
Update the code to call the new convolution functions to do subpixel
prediction rather than the existing functions. Remove the old C and
assembly code, since it is unused. This causes a 50% performance
reduction on the decoder, but that will be resolved when the asm for
the new functions is available.
There is no consensus for whether 6-tap or 2-tap predictors will be
supported in the final codec, so these filters are implemented in
terms of the 8-tap code, so that quality testing of these modes
can continue. Implementing the lower complexity algorithms is a
simple exercise, should it be necessary.
This code produces slightly better results in the EIGHTTAP_SMOOTH
case, since the filter is now applied in only one direction when
the subpel motion is only in one direction. Like the previous code,
the filtering is skipped entirely on full-pel MVs. This combination
seems to give the best quality gains, but this may be indicative of a
bug in the encoder's filter selection, since the encoder could
achieve the result of skipping the filtering on full-pel by selecting
one of the other filters. This should be revisited.
Quality gains on derf positive on almost all clips. The only clip
that seemed to be hurt at all datarates was football
(-0.115% PSNR average, -0.587% min). Overall averages 0.375% PSNR,
0.347% SSIM.
Change-Id: I7d469716091b1d89b4b08adde5863999319d69ff
2013-01-29 01:59:03 +01:00
|
|
|
void vp9_copy_mem8x4_c(const uint8_t *src,
|
2012-11-01 00:09:17 +01:00
|
|
|
int src_stride,
|
2012-12-19 00:31:19 +01:00
|
|
|
uint8_t *dst,
|
2012-11-01 00:09:17 +01:00
|
|
|
int dst_stride) {
|
2012-07-14 00:21:29 +02:00
|
|
|
int r;
|
|
|
|
|
|
|
|
for (r = 0; r < 4; r++) {
|
2011-07-25 16:11:24 +02:00
|
|
|
#if !(CONFIG_FAST_UNALIGNED)
|
2012-07-14 00:21:29 +02:00
|
|
|
dst[0] = src[0];
|
|
|
|
dst[1] = src[1];
|
|
|
|
dst[2] = src[2];
|
|
|
|
dst[3] = src[3];
|
|
|
|
dst[4] = src[4];
|
|
|
|
dst[5] = src[5];
|
|
|
|
dst[6] = src[6];
|
|
|
|
dst[7] = src[7];
|
2010-05-18 17:58:33 +02:00
|
|
|
#else
|
Convert subpixel filters to use convolve framework
Update the code to call the new convolution functions to do subpixel
prediction rather than the existing functions. Remove the old C and
assembly code, since it is unused. This causes a 50% performance
reduction on the decoder, but that will be resolved when the asm for
the new functions is available.
There is no consensus for whether 6-tap or 2-tap predictors will be
supported in the final codec, so these filters are implemented in
terms of the 8-tap code, so that quality testing of these modes
can continue. Implementing the lower complexity algorithms is a
simple exercise, should it be necessary.
This code produces slightly better results in the EIGHTTAP_SMOOTH
case, since the filter is now applied in only one direction when
the subpel motion is only in one direction. Like the previous code,
the filtering is skipped entirely on full-pel MVs. This combination
seems to give the best quality gains, but this may be indicative of a
bug in the encoder's filter selection, since the encoder could
achieve the result of skipping the filtering on full-pel by selecting
one of the other filters. This should be revisited.
Quality gains on derf positive on almost all clips. The only clip
that seemed to be hurt at all datarates was football
(-0.115% PSNR average, -0.587% min). Overall averages 0.375% PSNR,
0.347% SSIM.
Change-Id: I7d469716091b1d89b4b08adde5863999319d69ff
2013-01-29 01:59:03 +01:00
|
|
|
((uint32_t *)dst)[0] = ((const uint32_t *)src)[0];
|
|
|
|
((uint32_t *)dst)[1] = ((const uint32_t *)src)[1];
|
2010-05-18 17:58:33 +02:00
|
|
|
#endif
|
2012-07-14 00:21:29 +02:00
|
|
|
src += src_stride;
|
|
|
|
dst += dst_stride;
|
|
|
|
}
|
2010-05-18 17:58:33 +02:00
|
|
|
}
|
|
|
|
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
static void set_scaled_offsets(struct scale_factors *scale,
|
|
|
|
int row, int col) {
|
|
|
|
const int x_q4 = 16 * col;
|
|
|
|
const int y_q4 = 16 * row;
|
|
|
|
|
|
|
|
scale->x_offset_q4 = (x_q4 * scale->x_num / scale->x_den) & 0xf;
|
|
|
|
scale->y_offset_q4 = (y_q4 * scale->y_num / scale->y_den) & 0xf;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int32_t scale_motion_vector_component_q3(int mv_q3,
|
|
|
|
int num,
|
|
|
|
int den,
|
|
|
|
int offset_q4) {
|
|
|
|
// returns the scaled and offset value of the mv component.
|
|
|
|
const int32_t mv_q4 = mv_q3 << 1;
|
|
|
|
|
|
|
|
/* TODO(jkoleszar): make fixed point, or as a second multiply? */
|
|
|
|
return mv_q4 * num / den + offset_q4;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int32_t scale_motion_vector_component_q4(int mv_q4,
|
|
|
|
int num,
|
|
|
|
int den,
|
|
|
|
int offset_q4) {
|
2013-02-09 02:49:44 +01:00
|
|
|
// returns the scaled and offset value of the mv component.
|
|
|
|
|
|
|
|
/* TODO(jkoleszar): make fixed point, or as a second multiply? */
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
return mv_q4 * num / den + offset_q4;
|
2010-05-18 17:58:33 +02:00
|
|
|
}
|
|
|
|
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
static int_mv32 scale_motion_vector_q3_to_q4(
|
|
|
|
const int_mv *src_mv,
|
|
|
|
const struct scale_factors *scale) {
|
2013-02-09 02:49:44 +01:00
|
|
|
// returns mv * scale + offset
|
|
|
|
int_mv32 result;
|
|
|
|
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
result.as_mv.row = scale_motion_vector_component_q3(src_mv->as_mv.row,
|
|
|
|
scale->y_num,
|
|
|
|
scale->y_den,
|
|
|
|
scale->y_offset_q4);
|
|
|
|
result.as_mv.col = scale_motion_vector_component_q3(src_mv->as_mv.col,
|
|
|
|
scale->x_num,
|
|
|
|
scale->x_den,
|
|
|
|
scale->x_offset_q4);
|
2013-02-09 02:49:44 +01:00
|
|
|
return result;
|
2012-04-18 22:51:58 +02:00
|
|
|
}
|
|
|
|
|
2013-02-09 02:49:44 +01:00
|
|
|
void vp9_build_inter_predictor(const uint8_t *src, int src_stride,
|
|
|
|
uint8_t *dst, int dst_stride,
|
|
|
|
const int_mv *mv_q3,
|
|
|
|
const struct scale_factors *scale,
|
|
|
|
int w, int h, int do_avg,
|
|
|
|
const struct subpix_fn_table *subpix) {
|
2013-03-02 02:50:55 +01:00
|
|
|
int_mv32 mv = scale_motion_vector_q3_to_q4(mv_q3, scale);
|
|
|
|
src += (mv.as_mv.row >> 4) * src_stride + (mv.as_mv.col >> 4);
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
|
|
|
|
scale->predict[!!(mv.as_mv.col & 15)][!!(mv.as_mv.row & 15)][do_avg](
|
2013-02-09 02:49:44 +01:00
|
|
|
src, src_stride, dst, dst_stride,
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
subpix->filter_x[mv.as_mv.col & 15], scale->x_step_q4,
|
|
|
|
subpix->filter_y[mv.as_mv.row & 15], scale->y_step_q4,
|
2013-02-09 02:49:44 +01:00
|
|
|
w, h);
|
2010-05-18 17:58:33 +02:00
|
|
|
}
|
|
|
|
|
2013-02-09 02:49:44 +01:00
|
|
|
/* Like vp9_build_inter_predictor, but takes the full-pel part of the
|
|
|
|
* mv separately, and the fractional part as a q4.
|
2012-04-18 22:51:58 +02:00
|
|
|
*/
|
2013-02-09 02:49:44 +01:00
|
|
|
void vp9_build_inter_predictor_q4(const uint8_t *src, int src_stride,
|
|
|
|
uint8_t *dst, int dst_stride,
|
|
|
|
const int_mv *fullpel_mv_q3,
|
|
|
|
const int_mv *frac_mv_q4,
|
|
|
|
const struct scale_factors *scale,
|
|
|
|
int w, int h, int do_avg,
|
|
|
|
const struct subpix_fn_table *subpix) {
|
|
|
|
const int mv_row_q4 = ((fullpel_mv_q3->as_mv.row >> 3) << 4)
|
|
|
|
+ (frac_mv_q4->as_mv.row & 0xf);
|
|
|
|
const int mv_col_q4 = ((fullpel_mv_q3->as_mv.col >> 3) << 4)
|
|
|
|
+ (frac_mv_q4->as_mv.col & 0xf);
|
|
|
|
const int scaled_mv_row_q4 =
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
scale_motion_vector_component_q4(mv_row_q4, scale->y_num, scale->y_den,
|
|
|
|
scale->y_offset_q4);
|
2013-02-09 02:49:44 +01:00
|
|
|
const int scaled_mv_col_q4 =
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
scale_motion_vector_component_q4(mv_col_q4, scale->x_num, scale->x_den,
|
|
|
|
scale->x_offset_q4);
|
2013-02-09 02:49:44 +01:00
|
|
|
const int subpel_x = scaled_mv_col_q4 & 15;
|
|
|
|
const int subpel_y = scaled_mv_row_q4 & 15;
|
|
|
|
|
2013-03-02 02:50:55 +01:00
|
|
|
src += (scaled_mv_row_q4 >> 4) * src_stride + (scaled_mv_col_q4 >> 4);
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
scale->predict[!!subpel_x][!!subpel_y][do_avg](
|
2013-02-09 02:49:44 +01:00
|
|
|
src, src_stride, dst, dst_stride,
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
subpix->filter_x[subpel_x], scale->x_step_q4,
|
|
|
|
subpix->filter_y[subpel_y], scale->y_step_q4,
|
2013-02-09 02:49:44 +01:00
|
|
|
w, h);
|
2012-04-18 22:51:58 +02:00
|
|
|
}
|
|
|
|
|
2013-02-09 02:49:44 +01:00
|
|
|
static void build_2x1_inter_predictor(const BLOCKD *d0, const BLOCKD *d1,
|
2013-02-20 22:46:55 +01:00
|
|
|
struct scale_factors *scale,
|
2013-02-09 02:49:44 +01:00
|
|
|
int block_size, int stride, int which_mv,
|
2013-02-20 22:46:55 +01:00
|
|
|
const struct subpix_fn_table *subpix,
|
|
|
|
int row, int col) {
|
2013-02-09 02:49:44 +01:00
|
|
|
assert(d1->predictor - d0->predictor == block_size);
|
|
|
|
assert(d1->pre == d0->pre + block_size);
|
|
|
|
|
2013-02-20 22:46:55 +01:00
|
|
|
set_scaled_offsets(&scale[which_mv], row, col);
|
|
|
|
|
2013-02-09 02:49:44 +01:00
|
|
|
if (d0->bmi.as_mv[which_mv].as_int == d1->bmi.as_mv[which_mv].as_int) {
|
|
|
|
uint8_t **base_pre = which_mv ? d0->base_second_pre : d0->base_pre;
|
|
|
|
|
|
|
|
vp9_build_inter_predictor(*base_pre + d0->pre,
|
|
|
|
d0->pre_stride,
|
|
|
|
d0->predictor, stride,
|
|
|
|
&d0->bmi.as_mv[which_mv],
|
|
|
|
&scale[which_mv],
|
|
|
|
2 * block_size, block_size, which_mv,
|
|
|
|
subpix);
|
|
|
|
|
|
|
|
} else {
|
|
|
|
uint8_t **base_pre0 = which_mv ? d0->base_second_pre : d0->base_pre;
|
|
|
|
uint8_t **base_pre1 = which_mv ? d1->base_second_pre : d1->base_pre;
|
|
|
|
|
|
|
|
vp9_build_inter_predictor(*base_pre0 + d0->pre,
|
|
|
|
d0->pre_stride,
|
|
|
|
d0->predictor, stride,
|
|
|
|
&d0->bmi.as_mv[which_mv],
|
|
|
|
&scale[which_mv],
|
|
|
|
block_size, block_size, which_mv,
|
|
|
|
subpix);
|
2013-02-20 22:46:55 +01:00
|
|
|
|
|
|
|
set_scaled_offsets(&scale[which_mv], row, col + block_size);
|
|
|
|
|
2013-02-09 02:49:44 +01:00
|
|
|
vp9_build_inter_predictor(*base_pre1 + d1->pre,
|
|
|
|
d1->pre_stride,
|
|
|
|
d1->predictor, stride,
|
|
|
|
&d1->bmi.as_mv[which_mv],
|
|
|
|
&scale[which_mv],
|
|
|
|
block_size, block_size, which_mv,
|
|
|
|
subpix);
|
|
|
|
}
|
2010-05-18 17:58:33 +02:00
|
|
|
}
|
|
|
|
|
2011-08-24 20:42:26 +02:00
|
|
|
/*encoder only*/
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
void vp9_build_inter4x4_predictors_mbuv(MACROBLOCKD *xd,
|
|
|
|
int mb_row,
|
|
|
|
int mb_col) {
|
2012-07-14 00:21:29 +02:00
|
|
|
int i, j;
|
2012-08-14 12:32:29 +02:00
|
|
|
BLOCKD *blockd = xd->block;
|
2012-07-14 00:21:29 +02:00
|
|
|
|
|
|
|
/* build uv mvs */
|
|
|
|
for (i = 0; i < 2; i++) {
|
|
|
|
for (j = 0; j < 2; j++) {
|
|
|
|
int yoffset = i * 8 + j * 2;
|
|
|
|
int uoffset = 16 + i * 2 + j;
|
|
|
|
int voffset = 20 + i * 2 + j;
|
|
|
|
int temp;
|
|
|
|
|
2013-02-09 04:46:36 +01:00
|
|
|
temp = blockd[yoffset ].bmi.as_mv[0].as_mv.row
|
|
|
|
+ blockd[yoffset + 1].bmi.as_mv[0].as_mv.row
|
|
|
|
+ blockd[yoffset + 4].bmi.as_mv[0].as_mv.row
|
|
|
|
+ blockd[yoffset + 5].bmi.as_mv[0].as_mv.row;
|
2012-07-14 00:21:29 +02:00
|
|
|
|
|
|
|
if (temp < 0) temp -= 4;
|
|
|
|
else temp += 4;
|
|
|
|
|
2013-02-09 04:46:36 +01:00
|
|
|
xd->block[uoffset].bmi.as_mv[0].as_mv.row = (temp / 8) &
|
2012-08-14 12:32:29 +02:00
|
|
|
xd->fullpixel_mask;
|
2012-07-14 00:21:29 +02:00
|
|
|
|
2013-02-09 04:46:36 +01:00
|
|
|
temp = blockd[yoffset ].bmi.as_mv[0].as_mv.col
|
|
|
|
+ blockd[yoffset + 1].bmi.as_mv[0].as_mv.col
|
|
|
|
+ blockd[yoffset + 4].bmi.as_mv[0].as_mv.col
|
|
|
|
+ blockd[yoffset + 5].bmi.as_mv[0].as_mv.col;
|
2012-07-14 00:21:29 +02:00
|
|
|
|
|
|
|
if (temp < 0) temp -= 4;
|
|
|
|
else temp += 4;
|
|
|
|
|
2013-02-09 04:46:36 +01:00
|
|
|
blockd[uoffset].bmi.as_mv[0].as_mv.col = (temp / 8) &
|
2012-08-14 12:32:29 +02:00
|
|
|
xd->fullpixel_mask;
|
2012-07-14 00:21:29 +02:00
|
|
|
|
2013-02-09 04:46:36 +01:00
|
|
|
blockd[voffset].bmi.as_mv[0].as_mv.row =
|
|
|
|
blockd[uoffset].bmi.as_mv[0].as_mv.row;
|
|
|
|
blockd[voffset].bmi.as_mv[0].as_mv.col =
|
|
|
|
blockd[uoffset].bmi.as_mv[0].as_mv.col;
|
2012-07-14 00:21:29 +02:00
|
|
|
|
2012-11-07 15:50:25 +01:00
|
|
|
if (xd->mode_info_context->mbmi.second_ref_frame > 0) {
|
2013-02-09 04:46:36 +01:00
|
|
|
temp = blockd[yoffset ].bmi.as_mv[1].as_mv.row
|
|
|
|
+ blockd[yoffset + 1].bmi.as_mv[1].as_mv.row
|
|
|
|
+ blockd[yoffset + 4].bmi.as_mv[1].as_mv.row
|
|
|
|
+ blockd[yoffset + 5].bmi.as_mv[1].as_mv.row;
|
2012-07-14 00:21:29 +02:00
|
|
|
|
|
|
|
if (temp < 0) {
|
|
|
|
temp -= 4;
|
|
|
|
} else {
|
|
|
|
temp += 4;
|
2011-08-24 20:42:26 +02:00
|
|
|
}
|
|
|
|
|
2013-02-09 04:46:36 +01:00
|
|
|
blockd[uoffset].bmi.as_mv[1].as_mv.row = (temp / 8) &
|
2012-08-14 12:32:29 +02:00
|
|
|
xd->fullpixel_mask;
|
2012-04-18 22:51:58 +02:00
|
|
|
|
2013-02-09 04:46:36 +01:00
|
|
|
temp = blockd[yoffset ].bmi.as_mv[1].as_mv.col
|
|
|
|
+ blockd[yoffset + 1].bmi.as_mv[1].as_mv.col
|
|
|
|
+ blockd[yoffset + 4].bmi.as_mv[1].as_mv.col
|
|
|
|
+ blockd[yoffset + 5].bmi.as_mv[1].as_mv.col;
|
2012-07-14 00:21:29 +02:00
|
|
|
|
|
|
|
if (temp < 0) {
|
|
|
|
temp -= 4;
|
|
|
|
} else {
|
|
|
|
temp += 4;
|
2012-04-18 22:51:58 +02:00
|
|
|
}
|
2012-07-14 00:21:29 +02:00
|
|
|
|
2013-02-09 04:46:36 +01:00
|
|
|
blockd[uoffset].bmi.as_mv[1].as_mv.col = (temp / 8) &
|
2012-08-14 12:32:29 +02:00
|
|
|
xd->fullpixel_mask;
|
2012-07-14 00:21:29 +02:00
|
|
|
|
2013-02-09 04:46:36 +01:00
|
|
|
blockd[voffset].bmi.as_mv[1].as_mv.row =
|
|
|
|
blockd[uoffset].bmi.as_mv[1].as_mv.row;
|
|
|
|
blockd[voffset].bmi.as_mv[1].as_mv.col =
|
|
|
|
blockd[uoffset].bmi.as_mv[1].as_mv.col;
|
2012-07-14 00:21:29 +02:00
|
|
|
}
|
2010-05-18 17:58:33 +02:00
|
|
|
}
|
2012-07-14 00:21:29 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
for (i = 16; i < 24; i += 2) {
|
2013-02-09 02:49:44 +01:00
|
|
|
const int use_second_ref = xd->mode_info_context->mbmi.second_ref_frame > 0;
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
const int x = 4 * (i & 1);
|
|
|
|
const int y = ((i - 16) >> 1) * 4;
|
|
|
|
|
2013-02-09 02:49:44 +01:00
|
|
|
int which_mv;
|
2012-08-14 12:32:29 +02:00
|
|
|
BLOCKD *d0 = &blockd[i];
|
|
|
|
BLOCKD *d1 = &blockd[i + 1];
|
2012-07-14 00:21:29 +02:00
|
|
|
|
2013-02-09 02:49:44 +01:00
|
|
|
for (which_mv = 0; which_mv < 1 + use_second_ref; ++which_mv) {
|
|
|
|
build_2x1_inter_predictor(d0, d1, xd->scale_factor_uv, 4, 8, which_mv,
|
2013-02-20 22:46:55 +01:00
|
|
|
&xd->subpix, mb_row * 8 + y, mb_col * 8 + x);
|
2012-07-14 00:21:29 +02:00
|
|
|
}
|
|
|
|
}
|
2010-05-18 17:58:33 +02:00
|
|
|
}
|
|
|
|
|
2012-07-14 00:21:29 +02:00
|
|
|
static void clamp_mv_to_umv_border(MV *mv, const MACROBLOCKD *xd) {
|
|
|
|
/* If the MV points so far into the UMV border that no visible pixels
|
|
|
|
* are used for reconstruction, the subpel part of the MV can be
|
|
|
|
* discarded and the MV limited to 16 pixels with equivalent results.
|
|
|
|
*
|
|
|
|
* This limit kicks in at 19 pixels for the top and left edges, for
|
|
|
|
* the 16 pixels plus 3 taps right of the central pixel when subpel
|
|
|
|
* filtering. The bottom and right edges use 16 pixels plus 2 pixels
|
|
|
|
* left of the central pixel when filtering.
|
|
|
|
*/
|
2012-11-02 01:53:44 +01:00
|
|
|
if (mv->col < (xd->mb_to_left_edge - ((16 + VP9_INTERP_EXTEND) << 3)))
|
2012-07-14 00:21:29 +02:00
|
|
|
mv->col = xd->mb_to_left_edge - (16 << 3);
|
2012-11-02 01:53:44 +01:00
|
|
|
else if (mv->col > xd->mb_to_right_edge + ((15 + VP9_INTERP_EXTEND) << 3))
|
2012-07-14 00:21:29 +02:00
|
|
|
mv->col = xd->mb_to_right_edge + (16 << 3);
|
|
|
|
|
2012-11-02 01:53:44 +01:00
|
|
|
if (mv->row < (xd->mb_to_top_edge - ((16 + VP9_INTERP_EXTEND) << 3)))
|
2012-07-14 00:21:29 +02:00
|
|
|
mv->row = xd->mb_to_top_edge - (16 << 3);
|
2012-11-02 01:53:44 +01:00
|
|
|
else if (mv->row > xd->mb_to_bottom_edge + ((15 + VP9_INTERP_EXTEND) << 3))
|
2012-07-14 00:21:29 +02:00
|
|
|
mv->row = xd->mb_to_bottom_edge + (16 << 3);
|
2012-02-01 23:27:50 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
/* A version of the above function for chroma block MVs.*/
|
2012-07-14 00:21:29 +02:00
|
|
|
static void clamp_uvmv_to_umv_border(MV *mv, const MACROBLOCKD *xd) {
|
2012-11-02 01:53:44 +01:00
|
|
|
const int extend = VP9_INTERP_EXTEND;
|
|
|
|
|
|
|
|
mv->col = (2 * mv->col < (xd->mb_to_left_edge - ((16 + extend) << 3))) ?
|
2012-07-14 00:21:29 +02:00
|
|
|
(xd->mb_to_left_edge - (16 << 3)) >> 1 : mv->col;
|
2012-11-02 01:53:44 +01:00
|
|
|
mv->col = (2 * mv->col > xd->mb_to_right_edge + ((15 + extend) << 3)) ?
|
2012-07-14 00:21:29 +02:00
|
|
|
(xd->mb_to_right_edge + (16 << 3)) >> 1 : mv->col;
|
|
|
|
|
2012-11-02 01:53:44 +01:00
|
|
|
mv->row = (2 * mv->row < (xd->mb_to_top_edge - ((16 + extend) << 3))) ?
|
2012-07-14 00:21:29 +02:00
|
|
|
(xd->mb_to_top_edge - (16 << 3)) >> 1 : mv->row;
|
2012-11-02 01:53:44 +01:00
|
|
|
mv->row = (2 * mv->row > xd->mb_to_bottom_edge + ((15 + extend) << 3)) ?
|
2012-07-14 00:21:29 +02:00
|
|
|
(xd->mb_to_bottom_edge + (16 << 3)) >> 1 : mv->row;
|
2012-02-01 23:27:50 +01:00
|
|
|
}
|
|
|
|
|
2012-08-09 02:12:12 +02:00
|
|
|
/*encoder only*/
|
2013-02-09 02:49:44 +01:00
|
|
|
void vp9_build_inter16x16_predictors_mby(MACROBLOCKD *xd,
|
|
|
|
uint8_t *dst_y,
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
int dst_ystride,
|
|
|
|
int mb_row,
|
|
|
|
int mb_col) {
|
2013-02-09 02:49:44 +01:00
|
|
|
const int use_second_ref = xd->mode_info_context->mbmi.second_ref_frame > 0;
|
|
|
|
int which_mv;
|
|
|
|
|
|
|
|
for (which_mv = 0; which_mv < 1 + use_second_ref; ++which_mv) {
|
2013-03-02 02:50:55 +01:00
|
|
|
const int clamp_mvs = which_mv ?
|
|
|
|
xd->mode_info_context->mbmi.need_to_clamp_secondmv :
|
|
|
|
xd->mode_info_context->mbmi.need_to_clamp_mvs;
|
2013-02-09 02:49:44 +01:00
|
|
|
|
2013-03-02 02:50:55 +01:00
|
|
|
uint8_t *base_pre = which_mv ? xd->second_pre.y_buffer : xd->pre.y_buffer;
|
|
|
|
int pre_stride = which_mv ? xd->second_pre.y_stride : xd->pre.y_stride;
|
|
|
|
int_mv ymv;
|
2013-02-09 02:49:44 +01:00
|
|
|
ymv.as_int = xd->mode_info_context->mbmi.mv[which_mv].as_int;
|
2013-03-02 02:50:55 +01:00
|
|
|
|
2013-02-09 02:49:44 +01:00
|
|
|
if (clamp_mvs)
|
|
|
|
clamp_mv_to_umv_border(&ymv.as_mv, xd);
|
|
|
|
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
set_scaled_offsets(&xd->scale_factor[which_mv], mb_row * 16, mb_col * 16);
|
|
|
|
|
|
|
|
vp9_build_inter_predictor(base_pre, pre_stride,
|
2013-02-09 02:49:44 +01:00
|
|
|
dst_y, dst_ystride,
|
|
|
|
&ymv, &xd->scale_factor[which_mv],
|
|
|
|
16, 16, which_mv, &xd->subpix);
|
|
|
|
}
|
2012-08-09 02:12:12 +02:00
|
|
|
}
|
|
|
|
|
2013-02-09 02:49:44 +01:00
|
|
|
void vp9_build_inter16x16_predictors_mbuv(MACROBLOCKD *xd,
|
|
|
|
uint8_t *dst_u,
|
|
|
|
uint8_t *dst_v,
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
int dst_uvstride,
|
|
|
|
int mb_row,
|
|
|
|
int mb_col) {
|
2013-02-09 02:49:44 +01:00
|
|
|
const int use_second_ref = xd->mode_info_context->mbmi.second_ref_frame > 0;
|
|
|
|
int which_mv;
|
|
|
|
|
|
|
|
for (which_mv = 0; which_mv < 1 + use_second_ref; ++which_mv) {
|
|
|
|
const int clamp_mvs =
|
|
|
|
which_mv ? xd->mode_info_context->mbmi.need_to_clamp_secondmv
|
|
|
|
: xd->mode_info_context->mbmi.need_to_clamp_mvs;
|
|
|
|
uint8_t *uptr, *vptr;
|
2013-03-12 16:44:53 +01:00
|
|
|
int pre_stride = which_mv ? xd->second_pre.uv_stride
|
|
|
|
: xd->pre.uv_stride;
|
2013-02-09 02:49:44 +01:00
|
|
|
int_mv _o16x16mv;
|
|
|
|
int_mv _16x16mv;
|
|
|
|
|
|
|
|
_16x16mv.as_int = xd->mode_info_context->mbmi.mv[which_mv].as_int;
|
|
|
|
|
|
|
|
if (clamp_mvs)
|
|
|
|
clamp_mv_to_umv_border(&_16x16mv.as_mv, xd);
|
|
|
|
|
|
|
|
_o16x16mv = _16x16mv;
|
|
|
|
/* calc uv motion vectors */
|
|
|
|
if (_16x16mv.as_mv.row < 0)
|
|
|
|
_16x16mv.as_mv.row -= 1;
|
|
|
|
else
|
|
|
|
_16x16mv.as_mv.row += 1;
|
|
|
|
|
|
|
|
if (_16x16mv.as_mv.col < 0)
|
|
|
|
_16x16mv.as_mv.col -= 1;
|
|
|
|
else
|
|
|
|
_16x16mv.as_mv.col += 1;
|
|
|
|
|
|
|
|
_16x16mv.as_mv.row /= 2;
|
|
|
|
_16x16mv.as_mv.col /= 2;
|
|
|
|
|
|
|
|
_16x16mv.as_mv.row &= xd->fullpixel_mask;
|
|
|
|
_16x16mv.as_mv.col &= xd->fullpixel_mask;
|
|
|
|
|
|
|
|
uptr = (which_mv ? xd->second_pre.u_buffer : xd->pre.u_buffer);
|
|
|
|
vptr = (which_mv ? xd->second_pre.v_buffer : xd->pre.v_buffer);
|
|
|
|
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
set_scaled_offsets(&xd->scale_factor_uv[which_mv],
|
|
|
|
mb_row * 16, mb_col * 16);
|
|
|
|
|
2013-02-09 02:49:44 +01:00
|
|
|
vp9_build_inter_predictor_q4(uptr, pre_stride,
|
|
|
|
dst_u, dst_uvstride,
|
|
|
|
&_16x16mv, &_o16x16mv,
|
|
|
|
&xd->scale_factor_uv[which_mv],
|
|
|
|
8, 8, which_mv, &xd->subpix);
|
|
|
|
|
|
|
|
vp9_build_inter_predictor_q4(vptr, pre_stride,
|
|
|
|
dst_v, dst_uvstride,
|
|
|
|
&_16x16mv, &_o16x16mv,
|
|
|
|
&xd->scale_factor_uv[which_mv],
|
|
|
|
8, 8, which_mv, &xd->subpix);
|
|
|
|
}
|
2011-04-28 16:53:59 +02:00
|
|
|
}
|
2010-05-18 17:58:33 +02:00
|
|
|
|
2012-10-31 00:25:53 +01:00
|
|
|
void vp9_build_inter32x32_predictors_sb(MACROBLOCKD *x,
|
2012-12-19 00:31:19 +01:00
|
|
|
uint8_t *dst_y,
|
|
|
|
uint8_t *dst_u,
|
|
|
|
uint8_t *dst_v,
|
2012-08-20 23:43:34 +02:00
|
|
|
int dst_ystride,
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
int dst_uvstride,
|
|
|
|
int mb_row,
|
|
|
|
int mb_col) {
|
2012-08-20 23:43:34 +02:00
|
|
|
uint8_t *y1 = x->pre.y_buffer, *u1 = x->pre.u_buffer, *v1 = x->pre.v_buffer;
|
|
|
|
uint8_t *y2 = x->second_pre.y_buffer, *u2 = x->second_pre.u_buffer,
|
|
|
|
*v2 = x->second_pre.v_buffer;
|
2012-11-13 00:43:11 +01:00
|
|
|
int edge[4], n;
|
2012-08-20 23:43:34 +02:00
|
|
|
|
2012-11-13 00:43:11 +01:00
|
|
|
edge[0] = x->mb_to_top_edge;
|
|
|
|
edge[1] = x->mb_to_bottom_edge;
|
|
|
|
edge[2] = x->mb_to_left_edge;
|
|
|
|
edge[3] = x->mb_to_right_edge;
|
|
|
|
|
|
|
|
for (n = 0; n < 4; n++) {
|
2012-08-20 23:43:34 +02:00
|
|
|
const int x_idx = n & 1, y_idx = n >> 1;
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
int scaled_uv_offset;
|
2012-08-20 23:43:34 +02:00
|
|
|
|
2012-11-13 00:43:11 +01:00
|
|
|
x->mb_to_top_edge = edge[0] - ((y_idx * 16) << 3);
|
|
|
|
x->mb_to_bottom_edge = edge[1] + (((1 - y_idx) * 16) << 3);
|
|
|
|
x->mb_to_left_edge = edge[2] - ((x_idx * 16) << 3);
|
|
|
|
x->mb_to_right_edge = edge[3] + (((1 - x_idx) * 16) << 3);
|
|
|
|
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
x->pre.y_buffer = y1 + scaled_buffer_offset(x_idx * 16,
|
|
|
|
y_idx * 16,
|
|
|
|
x->pre.y_stride,
|
|
|
|
&x->scale_factor[0]);
|
|
|
|
scaled_uv_offset = scaled_buffer_offset(x_idx * 8,
|
|
|
|
y_idx * 8,
|
|
|
|
x->pre.uv_stride,
|
|
|
|
&x->scale_factor_uv[0]);
|
|
|
|
x->pre.u_buffer = u1 + scaled_uv_offset;
|
|
|
|
x->pre.v_buffer = v1 + scaled_uv_offset;
|
2012-08-20 23:43:34 +02:00
|
|
|
|
2012-11-07 15:50:25 +01:00
|
|
|
if (x->mode_info_context->mbmi.second_ref_frame > 0) {
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
x->second_pre.y_buffer = y2 +
|
|
|
|
scaled_buffer_offset(x_idx * 16,
|
|
|
|
y_idx * 16,
|
|
|
|
x->second_pre.y_stride,
|
|
|
|
&x->scale_factor[1]);
|
|
|
|
scaled_uv_offset = scaled_buffer_offset(x_idx * 8,
|
|
|
|
y_idx * 8,
|
|
|
|
x->second_pre.uv_stride,
|
|
|
|
&x->scale_factor_uv[1]);
|
|
|
|
x->second_pre.u_buffer = u2 + scaled_uv_offset;
|
|
|
|
x->second_pre.v_buffer = v2 + scaled_uv_offset;
|
2013-02-09 02:49:44 +01:00
|
|
|
}
|
2012-08-20 23:43:34 +02:00
|
|
|
|
2013-02-09 02:49:44 +01:00
|
|
|
vp9_build_inter16x16_predictors_mb(x,
|
2012-08-20 23:43:34 +02:00
|
|
|
dst_y + y_idx * 16 * dst_ystride + x_idx * 16,
|
|
|
|
dst_u + y_idx * 8 * dst_uvstride + x_idx * 8,
|
|
|
|
dst_v + y_idx * 8 * dst_uvstride + x_idx * 8,
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
dst_ystride, dst_uvstride, mb_row + y_idx, mb_col + x_idx);
|
2012-08-20 23:43:34 +02:00
|
|
|
}
|
|
|
|
|
2012-11-13 00:43:11 +01:00
|
|
|
x->mb_to_top_edge = edge[0];
|
|
|
|
x->mb_to_bottom_edge = edge[1];
|
|
|
|
x->mb_to_left_edge = edge[2];
|
|
|
|
x->mb_to_right_edge = edge[3];
|
|
|
|
|
2012-08-20 23:43:34 +02:00
|
|
|
x->pre.y_buffer = y1;
|
|
|
|
x->pre.u_buffer = u1;
|
|
|
|
x->pre.v_buffer = v1;
|
|
|
|
|
2012-11-07 15:50:25 +01:00
|
|
|
if (x->mode_info_context->mbmi.second_ref_frame > 0) {
|
2012-08-20 23:43:34 +02:00
|
|
|
x->second_pre.y_buffer = y2;
|
|
|
|
x->second_pre.u_buffer = u2;
|
|
|
|
x->second_pre.v_buffer = v2;
|
|
|
|
}
|
2012-11-30 20:46:20 +01:00
|
|
|
|
|
|
|
#if CONFIG_COMP_INTERINTRA_PRED
|
|
|
|
if (x->mode_info_context->mbmi.second_ref_frame == INTRA_FRAME) {
|
|
|
|
vp9_build_interintra_32x32_predictors_sb(
|
|
|
|
x, dst_y, dst_u, dst_v, dst_ystride, dst_uvstride);
|
2013-01-06 03:20:25 +01:00
|
|
|
}
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
|
|
|
void vp9_build_inter64x64_predictors_sb(MACROBLOCKD *x,
|
|
|
|
uint8_t *dst_y,
|
|
|
|
uint8_t *dst_u,
|
|
|
|
uint8_t *dst_v,
|
|
|
|
int dst_ystride,
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
int dst_uvstride,
|
|
|
|
int mb_row,
|
|
|
|
int mb_col) {
|
2013-01-06 03:20:25 +01:00
|
|
|
uint8_t *y1 = x->pre.y_buffer, *u1 = x->pre.u_buffer, *v1 = x->pre.v_buffer;
|
|
|
|
uint8_t *y2 = x->second_pre.y_buffer, *u2 = x->second_pre.u_buffer,
|
|
|
|
*v2 = x->second_pre.v_buffer;
|
|
|
|
int edge[4], n;
|
|
|
|
|
|
|
|
edge[0] = x->mb_to_top_edge;
|
|
|
|
edge[1] = x->mb_to_bottom_edge;
|
|
|
|
edge[2] = x->mb_to_left_edge;
|
|
|
|
edge[3] = x->mb_to_right_edge;
|
|
|
|
|
|
|
|
for (n = 0; n < 4; n++) {
|
|
|
|
const int x_idx = n & 1, y_idx = n >> 1;
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
int scaled_uv_offset;
|
2013-01-06 03:20:25 +01:00
|
|
|
|
|
|
|
x->mb_to_top_edge = edge[0] - ((y_idx * 32) << 3);
|
|
|
|
x->mb_to_bottom_edge = edge[1] + (((1 - y_idx) * 32) << 3);
|
|
|
|
x->mb_to_left_edge = edge[2] - ((x_idx * 32) << 3);
|
|
|
|
x->mb_to_right_edge = edge[3] + (((1 - x_idx) * 32) << 3);
|
|
|
|
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
x->pre.y_buffer = y1 + scaled_buffer_offset(x_idx * 32,
|
|
|
|
y_idx * 32,
|
|
|
|
x->pre.y_stride,
|
|
|
|
&x->scale_factor[0]);
|
|
|
|
scaled_uv_offset = scaled_buffer_offset(x_idx * 16,
|
|
|
|
y_idx * 16,
|
|
|
|
x->pre.uv_stride,
|
|
|
|
&x->scale_factor_uv[0]);
|
|
|
|
x->pre.u_buffer = u1 + scaled_uv_offset;
|
|
|
|
x->pre.v_buffer = v1 + scaled_uv_offset;
|
2013-01-06 03:20:25 +01:00
|
|
|
|
|
|
|
if (x->mode_info_context->mbmi.second_ref_frame > 0) {
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
x->second_pre.y_buffer = y2 +
|
|
|
|
scaled_buffer_offset(x_idx * 32,
|
|
|
|
y_idx * 32,
|
|
|
|
x->second_pre.y_stride,
|
|
|
|
&x->scale_factor[1]);
|
|
|
|
scaled_uv_offset = scaled_buffer_offset(x_idx * 16,
|
|
|
|
y_idx * 16,
|
|
|
|
x->second_pre.uv_stride,
|
|
|
|
&x->scale_factor_uv[1]);
|
|
|
|
x->second_pre.u_buffer = u2 + scaled_uv_offset;
|
|
|
|
x->second_pre.v_buffer = v2 + scaled_uv_offset;
|
2013-01-06 03:20:25 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
vp9_build_inter32x32_predictors_sb(x,
|
|
|
|
dst_y + y_idx * 32 * dst_ystride + x_idx * 32,
|
|
|
|
dst_u + y_idx * 16 * dst_uvstride + x_idx * 16,
|
|
|
|
dst_v + y_idx * 16 * dst_uvstride + x_idx * 16,
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
dst_ystride, dst_uvstride, mb_row + y_idx * 2, mb_col + x_idx * 2);
|
2013-01-06 03:20:25 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
x->mb_to_top_edge = edge[0];
|
|
|
|
x->mb_to_bottom_edge = edge[1];
|
|
|
|
x->mb_to_left_edge = edge[2];
|
|
|
|
x->mb_to_right_edge = edge[3];
|
|
|
|
|
|
|
|
x->pre.y_buffer = y1;
|
|
|
|
x->pre.u_buffer = u1;
|
|
|
|
x->pre.v_buffer = v1;
|
|
|
|
|
|
|
|
if (x->mode_info_context->mbmi.second_ref_frame > 0) {
|
|
|
|
x->second_pre.y_buffer = y2;
|
|
|
|
x->second_pre.u_buffer = u2;
|
|
|
|
x->second_pre.v_buffer = v2;
|
|
|
|
}
|
|
|
|
|
|
|
|
#if CONFIG_COMP_INTERINTRA_PRED
|
|
|
|
if (x->mode_info_context->mbmi.second_ref_frame == INTRA_FRAME) {
|
|
|
|
vp9_build_interintra_64x64_predictors_sb(x, dst_y, dst_u, dst_v,
|
|
|
|
dst_ystride, dst_uvstride);
|
2012-11-30 20:46:20 +01:00
|
|
|
}
|
|
|
|
#endif
|
2012-08-20 23:43:34 +02:00
|
|
|
}
|
|
|
|
|
2013-02-20 22:46:55 +01:00
|
|
|
static void build_inter4x4_predictors_mb(MACROBLOCKD *xd,
|
|
|
|
int mb_row, int mb_col) {
|
2012-07-14 00:21:29 +02:00
|
|
|
int i;
|
2012-08-14 12:32:29 +02:00
|
|
|
MB_MODE_INFO * mbmi = &xd->mode_info_context->mbmi;
|
|
|
|
BLOCKD *blockd = xd->block;
|
2013-02-09 02:49:44 +01:00
|
|
|
int which_mv = 0;
|
|
|
|
const int use_second_ref = mbmi->second_ref_frame > 0;
|
2012-08-14 12:32:29 +02:00
|
|
|
|
2012-10-22 20:25:48 +02:00
|
|
|
if (xd->mode_info_context->mbmi.partitioning != PARTITIONING_4X4) {
|
2013-02-09 02:49:44 +01:00
|
|
|
for (i = 0; i < 16; i += 8) {
|
|
|
|
BLOCKD *d0 = &blockd[i];
|
|
|
|
BLOCKD *d1 = &blockd[i + 2];
|
2013-02-20 22:46:55 +01:00
|
|
|
const int y = i & 8;
|
2012-02-01 23:27:50 +01:00
|
|
|
|
2013-02-09 02:49:44 +01:00
|
|
|
blockd[i + 0].bmi = xd->mode_info_context->bmi[i + 0];
|
|
|
|
blockd[i + 2].bmi = xd->mode_info_context->bmi[i + 2];
|
Improved coding using 8x8 transform
In summary, this commit encompasses a series of changes in attempt to
improve the 8x8 transform based coding to help overall compression
quality, please refer to the detailed commit history below for what
are the rationale underly the series of changes:
a. A frame level flag to indicate if 8x8 transform is used at all.
b. 8x8 transform is not used for key frames and small image size.
c. On inter coded frame, macroblocks using modes B_PRED, SPLIT_MV
and I8X8_PRED are forced to using 4x4 transform based coding, the
rest uses 8x8 transform based coding.
d. Encoder and decoder has the same assumption on the relationship
between prediction modes and transform size, therefore no signaling
is encoded in bitstream.
e. Mode decision process now calculate the rate and distortion scores
using their respective transforms.
Overall test results:
1. HD set
http://www.corp.google.com/~yaowu/no_crawl/t8x8/HD_t8x8_20120206.html
(avg psnr: 3.09% glb psnr: 3.22%, ssim: 3.90%)
2. Cif set:
http://www.corp.google.com/~yaowu/no_crawl/t8x8/cif_t8x8_20120206.html
(avg psnr: -0.03%, glb psnr: -0.02%, ssim: -0.04%)
It should be noted here, as 8x8 transform coding itself is disabled
for cif size clips, the 0.03% loss is purely from the 1 bit/frame
flag overhead on if 8x8 transform is used or not for the frame.
---patch history for future reference---
Patch 1:
this commit tries to select transform size based on macroblock
prediction mode. If the size of a prediction mode is 16x16, then
the macroblock is forced to use 8x8 transform. If the prediction
mode is B_PRED, SPLITMV or I8X8_PRED, then the macroblock is forced
to use 4x4 transform. Tests on the following HD clips showed mixed
results: (all hd clips only used first 100 frames in the test)
http://www.corp.google.com/~yaowu/no_crawl/t8x8/hdmodebased8x8.html
http://www.corp.google.com/~yaowu/no_crawl/t8x8/hdmodebased8x8_log.html
while the results are mixed and overall negative, it is interesting to
see 8x8 helped a few of the clips.
Patch 2:
this patch tries to hard-wire selection of transform size based on
prediction modes without using segmentation to signal the transform size.
encoder and decoder both takes the same assumption that all macroblocks
use 8x8 transform except when prediciton mode is B_PRED, I8X8_PRED or
SPLITMV. Test results are as follows:
http://www.corp.google.com/~yaowu/no_crawl/t8x8/cifmodebase8x8_0125.html
http://www.corp.google.com/~yaowu/no_crawl/t8x8/hdmodebased8x8_0125log.html
Interestingly, by removing the overhead or coding the segmentation, the
results on this limited HD set have turn positive on average.
Patch 3:
this patch disabled the usage of 8x8 transform on key frames, and kept the
logic from patch 2 for inter frames only. test results on HD set turned
decidedly positive with 8x8 transform enabled on inter frame with 16x16
prediction modes: (avg psnr: .81% glb psnr: .82 ssim: .55%)
http://www.corp.google.com/~yaowu/no_crawl/t8x8/hdintermode8x8_0125.html
results on cif set still negative overall
Patch 4:
continued from last patch, but now in mode decision process, the rate and
distortion estimates are computed based on 8x8 transform results for MBs
with modes associated with 8x8 transform. This patch also fixed a problem
related to segment based eob coding when 8x8 transform is used. The patch
significantly improved the results on HD clips:
http://www.corp.google.com/~yaowu/no_crawl/t8x8/hd8x8RDintermode.html
(avg psnr: 2.70% glb psnr: 2.76% ssim: 3.34%)
results on cif also improved, though they are still negative compared to
baseline that uses 4x4 transform only:
http://www.corp.google.com/~yaowu/no_crawl/t8x8/cif8x8RDintermode.html
(avg psnr: -.78% glb psnr: -.86% ssim: -.19%)
Patch 5:
This patch does 3 things:
a. a bunch of decoder bug fixes, encodings and decodings were verified
to have matched recon buffer on a number of encodes on cif size mobile and
hd version of _pedestrian.
b. the patch further improved the rate distortion calculation of MBS that
use 8x8 transform. This provided some further gain on compression.
c. the patch also got the experimental work SEG_LVL_EOB to work with 8x8
transformed macroblock, test results indicates it improves the cif set
but hurt the HD set slightly.
Tests results on HD clips:
http://www.corp.google.com/~yaowu/no_crawl/t8x8/HD_t8x8_20120201.html
(avg psnr: 3.19% glb psnr: 3.30% ssim: 3.93%)
Test results on cif clips:
http://www.corp.google.com/~yaowu/no_crawl/t8x8/cif_t8x8_20120201.html
(avg psnr: -.47% glb psnr: -.51% ssim: +.28%)
Patch 6:
Added a frame level flag to indicate if 8x8 transform is allowed at all.
temporarily the decision is based on frame size, can be optimized later
one. This get the cif results to basically unchanged, with one bit per
frame overhead on both cif and hd clips.
Patch 8:
Rebase and Merge to head by PGW.
Fixed some suspect 4s that look like hey should be 64s in regard
to segmented EOB. Perhaps #defines would be bette.
Bulit and tested without T8x8 enabled and produces unchanged
output.
Patch 9:
Corrected misalligned code/decode of "txfm_mode" bit.
Limited testing for correct encode and decode with
T8x8 configured on derf clips.
Change-Id: I156e1405d25f81579d579dff8ab9af53944ec49c
2012-02-10 01:12:23 +01:00
|
|
|
|
2013-02-09 02:49:44 +01:00
|
|
|
for (which_mv = 0; which_mv < 1 + use_second_ref; ++which_mv) {
|
|
|
|
if (mbmi->need_to_clamp_mvs) {
|
|
|
|
clamp_mv_to_umv_border(&blockd[i + 0].bmi.as_mv[which_mv].as_mv, xd);
|
|
|
|
clamp_mv_to_umv_border(&blockd[i + 2].bmi.as_mv[which_mv].as_mv, xd);
|
|
|
|
}
|
2012-04-18 22:51:58 +02:00
|
|
|
|
2013-02-20 22:46:55 +01:00
|
|
|
build_2x1_inter_predictor(d0, d1, xd->scale_factor, 8, 16,
|
|
|
|
which_mv, &xd->subpix,
|
|
|
|
mb_row * 16 + y, mb_col * 16);
|
2013-02-09 02:49:44 +01:00
|
|
|
}
|
2011-04-28 16:53:59 +02:00
|
|
|
}
|
2012-07-14 00:21:29 +02:00
|
|
|
} else {
|
|
|
|
for (i = 0; i < 16; i += 2) {
|
2012-08-14 12:32:29 +02:00
|
|
|
BLOCKD *d0 = &blockd[i];
|
|
|
|
BLOCKD *d1 = &blockd[i + 1];
|
2013-02-20 22:46:55 +01:00
|
|
|
const int x = (i & 3) * 4;
|
|
|
|
const int y = (i >> 2) * 4;
|
2012-08-14 12:32:29 +02:00
|
|
|
|
|
|
|
blockd[i + 0].bmi = xd->mode_info_context->bmi[i + 0];
|
|
|
|
blockd[i + 1].bmi = xd->mode_info_context->bmi[i + 1];
|
|
|
|
|
2013-02-09 02:49:44 +01:00
|
|
|
for (which_mv = 0; which_mv < 1 + use_second_ref; ++which_mv) {
|
|
|
|
build_2x1_inter_predictor(d0, d1, xd->scale_factor, 4, 16,
|
2013-02-20 22:46:55 +01:00
|
|
|
which_mv, &xd->subpix,
|
|
|
|
mb_row * 16 + y, mb_col * 16 + x);
|
2012-07-14 00:21:29 +02:00
|
|
|
}
|
2010-05-18 17:58:33 +02:00
|
|
|
}
|
2012-07-14 00:21:29 +02:00
|
|
|
}
|
2011-04-28 16:53:59 +02:00
|
|
|
|
2012-07-14 00:21:29 +02:00
|
|
|
for (i = 16; i < 24; i += 2) {
|
2012-08-14 12:32:29 +02:00
|
|
|
BLOCKD *d0 = &blockd[i];
|
|
|
|
BLOCKD *d1 = &blockd[i + 1];
|
2013-02-20 22:46:55 +01:00
|
|
|
const int x = 4 * (i & 1);
|
|
|
|
const int y = ((i - 16) >> 1) * 4;
|
2012-04-18 22:51:58 +02:00
|
|
|
|
2013-02-09 02:49:44 +01:00
|
|
|
for (which_mv = 0; which_mv < 1 + use_second_ref; ++which_mv) {
|
|
|
|
build_2x1_inter_predictor(d0, d1, xd->scale_factor_uv, 4, 8,
|
2013-02-20 22:46:55 +01:00
|
|
|
which_mv, &xd->subpix,
|
|
|
|
mb_row * 8 + y, mb_col * 8 + x);
|
2012-07-14 00:21:29 +02:00
|
|
|
}
|
|
|
|
}
|
2011-04-28 16:53:59 +02:00
|
|
|
}
|
|
|
|
|
2013-03-02 02:50:55 +01:00
|
|
|
static int mv_pred_row(MACROBLOCKD *mb, int off, int idx) {
|
|
|
|
int temp = mb->mode_info_context->bmi[off + 0].as_mv[idx].as_mv.row +
|
|
|
|
mb->mode_info_context->bmi[off + 1].as_mv[idx].as_mv.row +
|
|
|
|
mb->mode_info_context->bmi[off + 4].as_mv[idx].as_mv.row +
|
|
|
|
mb->mode_info_context->bmi[off + 5].as_mv[idx].as_mv.row;
|
|
|
|
return (temp < 0 ? temp - 4 : temp + 4) / 8;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int mv_pred_col(MACROBLOCKD *mb, int off, int idx) {
|
|
|
|
int temp = mb->mode_info_context->bmi[off + 0].as_mv[idx].as_mv.col +
|
|
|
|
mb->mode_info_context->bmi[off + 1].as_mv[idx].as_mv.col +
|
|
|
|
mb->mode_info_context->bmi[off + 4].as_mv[idx].as_mv.col +
|
|
|
|
mb->mode_info_context->bmi[off + 5].as_mv[idx].as_mv.col;
|
|
|
|
return (temp < 0 ? temp - 4 : temp + 4) / 8;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void build_4x4uvmvs(MACROBLOCKD *xd) {
|
2012-07-14 00:21:29 +02:00
|
|
|
int i, j;
|
2012-08-14 12:32:29 +02:00
|
|
|
BLOCKD *blockd = xd->block;
|
2013-03-02 02:50:55 +01:00
|
|
|
const int mask = xd->fullpixel_mask;
|
2012-07-14 00:21:29 +02:00
|
|
|
|
|
|
|
for (i = 0; i < 2; i++) {
|
|
|
|
for (j = 0; j < 2; j++) {
|
2013-03-02 02:50:55 +01:00
|
|
|
const int yoffset = i * 8 + j * 2;
|
|
|
|
const int uoffset = 16 + i * 2 + j;
|
|
|
|
const int voffset = 20 + i * 2 + j;
|
2012-07-14 00:21:29 +02:00
|
|
|
|
2013-03-02 02:50:55 +01:00
|
|
|
MV *u = &blockd[uoffset].bmi.as_mv[0].as_mv;
|
|
|
|
MV *v = &blockd[voffset].bmi.as_mv[0].as_mv;
|
|
|
|
u->row = mv_pred_row(xd, yoffset, 0) & mask;
|
|
|
|
u->col = mv_pred_col(xd, yoffset, 0) & mask;
|
2012-07-14 00:21:29 +02:00
|
|
|
|
|
|
|
// if (x->mode_info_context->mbmi.need_to_clamp_mvs)
|
2013-03-02 02:50:55 +01:00
|
|
|
clamp_uvmv_to_umv_border(u, xd);
|
2012-07-14 00:21:29 +02:00
|
|
|
|
|
|
|
// if (x->mode_info_context->mbmi.need_to_clamp_mvs)
|
2013-03-02 02:50:55 +01:00
|
|
|
clamp_uvmv_to_umv_border(u, xd);
|
2012-07-14 00:21:29 +02:00
|
|
|
|
2013-03-02 02:50:55 +01:00
|
|
|
v->row = u->row;
|
|
|
|
v->col = u->col;
|
2012-07-14 00:21:29 +02:00
|
|
|
|
2012-11-07 15:50:25 +01:00
|
|
|
if (xd->mode_info_context->mbmi.second_ref_frame > 0) {
|
2013-03-02 02:50:55 +01:00
|
|
|
u = &blockd[uoffset].bmi.as_mv[1].as_mv;
|
|
|
|
v = &blockd[voffset].bmi.as_mv[1].as_mv;
|
|
|
|
u->row = mv_pred_row(xd, yoffset, 1) & mask;
|
|
|
|
u->col = mv_pred_col(xd, yoffset, 1) & mask;
|
2012-07-14 00:21:29 +02:00
|
|
|
|
2012-08-14 12:32:29 +02:00
|
|
|
// if (mbmi->need_to_clamp_mvs)
|
2013-03-02 02:50:55 +01:00
|
|
|
clamp_uvmv_to_umv_border(u, xd);
|
2012-07-14 00:21:29 +02:00
|
|
|
|
2012-08-14 12:32:29 +02:00
|
|
|
// if (mbmi->need_to_clamp_mvs)
|
2013-03-02 02:50:55 +01:00
|
|
|
clamp_uvmv_to_umv_border(u, xd);
|
2012-07-14 00:21:29 +02:00
|
|
|
|
2013-03-02 02:50:55 +01:00
|
|
|
v->row = u->row;
|
|
|
|
v->col = u->col;
|
2012-07-14 00:21:29 +02:00
|
|
|
}
|
2011-08-24 20:42:26 +02:00
|
|
|
}
|
2012-07-14 00:21:29 +02:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2013-02-09 02:49:44 +01:00
|
|
|
void vp9_build_inter16x16_predictors_mb(MACROBLOCKD *xd,
|
|
|
|
uint8_t *dst_y,
|
|
|
|
uint8_t *dst_u,
|
|
|
|
uint8_t *dst_v,
|
|
|
|
int dst_ystride,
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
int dst_uvstride,
|
|
|
|
int mb_row,
|
|
|
|
int mb_col) {
|
|
|
|
vp9_build_inter16x16_predictors_mby(xd, dst_y, dst_ystride, mb_row, mb_col);
|
|
|
|
vp9_build_inter16x16_predictors_mbuv(xd, dst_u, dst_v, dst_uvstride,
|
|
|
|
mb_row, mb_col);
|
2013-02-09 02:49:44 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
void vp9_build_inter_predictors_mb(MACROBLOCKD *xd,
|
|
|
|
int mb_row,
|
|
|
|
int mb_col) {
|
2012-08-14 12:32:29 +02:00
|
|
|
if (xd->mode_info_context->mbmi.mode != SPLITMV) {
|
2013-02-09 02:49:44 +01:00
|
|
|
vp9_build_inter16x16_predictors_mb(xd, xd->predictor,
|
|
|
|
&xd->predictor[256],
|
Spatial resamping of ZEROMV predictors
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
2013-02-25 05:55:14 +01:00
|
|
|
&xd->predictor[320], 16, 8,
|
|
|
|
mb_row, mb_col);
|
2013-02-09 02:49:44 +01:00
|
|
|
|
2012-11-07 15:50:25 +01:00
|
|
|
#if CONFIG_COMP_INTERINTRA_PRED
|
2013-02-09 02:49:44 +01:00
|
|
|
if (xd->mode_info_context->mbmi.second_ref_frame == INTRA_FRAME) {
|
2012-11-07 15:50:25 +01:00
|
|
|
vp9_build_interintra_16x16_predictors_mb(xd, xd->predictor,
|
|
|
|
&xd->predictor[256],
|
|
|
|
&xd->predictor[320], 16, 8);
|
|
|
|
}
|
|
|
|
#endif
|
2012-07-14 00:21:29 +02:00
|
|
|
} else {
|
2012-08-14 12:32:29 +02:00
|
|
|
build_4x4uvmvs(xd);
|
2013-02-20 22:46:55 +01:00
|
|
|
build_inter4x4_predictors_mb(xd, mb_row, mb_col);
|
2012-07-14 00:21:29 +02:00
|
|
|
}
|
2010-05-18 17:58:33 +02:00
|
|
|
}
|