This follows a discussion with the hardware team. Update the unit
test to take these sizes into account, and split the duplicated code
out into a separate file so it can be shared.
Change-Id: I8311d11b0191d8bb37e8eb4ac962beb217e1bff5
* changes:
Initial support for resolution changes on P-frames
Avoid allocating memory when resizing frames
Adds a test for the VP8E_SET_SCALEMODE control
Tests that the external interface to set the internal codec scaling
works as expected. Also updates the test to pull the height from
the decoded frame size rather than parsing the keyframe header,
in anticipation of allowing resolution changes on non-keyframes.
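For reference, a minimal sketch of how this control is exercised
through the public API (encoder setup and error checking omitted;
the scaling-mode values are from vpx/vp8.h):

  #include "vpx/vpx_encoder.h"
  #include "vpx/vp8cx.h"

  /* Ask the encoder to scale subsequent frames to half size
     internally; the decoded frame size reflects this scaling. */
  static void request_half_resolution(vpx_codec_ctx_t *encoder) {
    vpx_scaling_mode_t mode;
    mode.h_scaling_mode = VP8E_ONETWO; /* width  * 1/2 */
    mode.v_scaling_mode = VP8E_ONETWO; /* height * 1/2 */
    vpx_codec_control(encoder, VP8E_SET_SCALEMODE, &mode);
  }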
Change-Id: I3ed92117d8e5288fbbd1e7b618f2f233d0fe2c17
This commit adds the 8 tap SSSE3 subpixel filters back into the code
underneath the convolve API. The C code is still called for 4x4
blocks, as well as compound prediction modes. This restores the
encode performance to be within about 8% of the baseline.
Change-Id: Ife0d81477075ae33c05b53c65003951efdc8b09c
This patch adds column-based tiling. The idea is to make each tile
independently decodable (after reading the common frame header) and
also independently encodable (minus within-frame cost adjustments in
the RD loop), to speed up multi-threaded hardware and software
encoders and decoders. Column-based tiling has the added advantage
(over other tiling methods) of minimizing latency in realtime use
cases, since all threads can start encoding data as soon as the
first SB-row worth of data is available to the encoder.
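As an illustrative sketch (not the actual bitstream code), the
column split that lets each thread own a contiguous range of
macroblock columns might look like:

  typedef struct {
    int start_mb_col; /* first macroblock column in this tile */
    int stop_mb_col;  /* one past the last macroblock column */
  } tile_t;

  /* Divide mb_cols as evenly as possible among num_tiles column
     tiles; each tile can then be handled by its own thread. */
  static void set_tile_columns(tile_t *tiles, int num_tiles,
                               int mb_cols) {
    int t;
    for (t = 0; t < num_tiles; ++t) {
      tiles[t].start_mb_col = (t * mb_cols) / num_tiles;
      tiles[t].stop_mb_col = ((t + 1) * mb_cols) / num_tiles;
    }
  }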
There is some test code that does random tile ordering in the decoder,
to confirm that each tile is indeed independently decodable from other
tiles in the same frame. At tile edges, all contexts assume default
values (i.e. 0, 0 motion vector, no coefficients, DC intra4x4 mode),
and motion vector search and ordering do not cross tiles in the same
frame.
Tile independence is not maintained between frames at the moment,
i.e. tile 0 of frame 1 is free to use motion vectors that point into
any tile of frame 0. We support 1 (i.e. no tiling), 2 or 4
column-tiles.
The loopfilter crosses tile boundaries. I discussed this briefly with Aki
and he says that's OK. An in-loop loopfilter would need to do some sync
between tile threads, but that shouldn't be a big issue.
Results: with tiling disabled, quality goes up slightly because of
improved edge use in the intra4x4 prediction. With 2 tiles, we lose
about ~1% on derf, ~0.35% on HD and ~0.55% on STD/HD. With 4 tiles,
we lose another ~1.5% on derf, ~0.77% on HD and ~0.85% on STD/HD.
Most of this loss is concentrated at the low-bitrate end of clips,
and most of it comes from the loss of edges at tile boundaries and
the resulting loss of intra predictors.
TODO:
- more tiles (perhaps allow row-based tiling also, and max. 8 tiles)?
- maybe optionally (for EC purposes), motion vectors themselves
should not cross tile edges, or we should emulate such borders as
if they were off-frame, to limit error propagation to within one
tile only. This doesn't have to be the default behaviour but could
be an optional bitstream flag.
Change-Id: I5951c3a0742a767b20bc9fb5af685d9892c2c96f
This commit introduces a new convolution function which will be used to
replace the existing subpixel interpolation functions. It is much the
same as the existing functions, but allows for changing the filter
kernel on a per-pixel basis, and doesn't bake in knowledge of the
filter to be applied or the size of the resulting block into the
function name.
Replacing the existing subpel filters will come in a later commit.
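The shape of the new function is roughly the following (a sketch
reconstructed from the description above; the exact name and types
may differ):

  #include <stddef.h>
  #include <stdint.h>

  /* One entry point covers all block sizes and filters: the kernels
     are passed in, and the step arguments (in 1/16th-pel units)
     allow the kernel phase to advance per output pixel. */
  void convolve8(const uint8_t *src, ptrdiff_t src_stride,
                 uint8_t *dst, ptrdiff_t dst_stride,
                 const int16_t *filter_x, int x_step_q4,
                 const int16_t *filter_y, int y_step_q4,
                 int w, int h);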
Change-Id: Ic9a5615f2f456cb77f96741856fc650d6d78bb91
Adds an error-resilient mode in which frames can continue to be
decoded even when there are errors (due to network losses) on a
prior frame. Specifically, backward updates are turned off
and probabilities of various symbols are reset to defaults at
the beginning of each frame. Further, the last frame's mvs are
not used for the mv reference list, and the sorting of the
initial list based on search on previous frames is turned off
as well.
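For context, this is how an application requests the mode through
the public API (a minimal sketch; the rest of the encoder
configuration is omitted):

  #include "vpx/vpx_encoder.h"

  static void enable_error_resilience(vpx_codec_enc_cfg_t *cfg) {
    /* Turns off the backward adaptation described above, so a lost
       frame cannot desynchronize the entropy state of later
       frames. */
    cfg->g_error_resilient = VPX_ERROR_RESILIENT_DEFAULT;
  }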
Also adds a test in which an arbitrary set of frames is skipped
during decoding to simulate errors. The test verifies (1) that if
the error frames are droppable, i.e. frame buffer updates have been
turned off, there are no mismatch errors for the remaining frames
after the error frames; and (2) that if the error frames are
non-droppable, there are not only no decoding errors, but the
mismatch PSNR between the decoder's version of the post-error frames
and the encoder's version is at least 20 dB.
Change-Id: Ie6e2bcd436b1e8643270356d3a930e8989ff52a5
This commit starts to convert the tests to a system where the codec
to be used is provided by a factory object. Currently no tests are
instantiated for VP9 since they all fail for various reasons, but it
was verified that they're called and the correct codec is
instantiated.
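The idea, sketched in C for brevity (the actual test code is
C++/gtest, and these names are illustrative): tests ask a factory
for the codec interface instead of naming one directly.

  #include "vpx/vpx_decoder.h"
  #include "vpx/vp8dx.h"

  /* A table of constructors; tests are instantiated per entry. */
  typedef struct {
    const char *name;
    vpx_codec_iface_t *(*dec_iface)(void);
  } codec_factory_t;

  static const codec_factory_t kFactories[] = {
    { "vp8", vpx_codec_vp8_dx },
    /* a vp9 entry is added once the VP9 tests pass */
  };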
Change-Id: Ia7506df2ca3a7651218ba3ca560634f08c9fbdeb
Various fixups to resolve issues when building vp9-preview under the more stringent
checks placed on the experimental branch.
Change-Id: I21749de83552e1e75c799003f849e6a0f1a35b07
In addition to allowing tests to use the RTCD-enabled functions (perhaps transitively)
without having run a full encode/decode test yet, this fixes a linking issue with
Apple's G++ whereby the Common symbols (the function pointers themselves) wouldn't
be resolved. Fixing this linking issue is the primary impetus for this patch, as none
of the tests exercise the RTCD functionality except through the main API.
Change-Id: I12aed91ca37a707e5309aa6cb9c38a649c06bc6a
This adds Debargha's DCT/DWT hybrid and a regular 32x32 DCT, and adds
code all over the place to wrap that in the bitstream/encoder/decoder/RD.
Some implementation notes (these probably need careful review):
- token range is extended by 1 bit, since the value range out of this
transform is [-16384,16383].
- the coefficients coming out of the FDCT are manually scaled back by
1 bit, or else they won't fit in int16_t (they are 17 bits). Because
of this, the RD error scoring does not right-shift the MSE score by
two (unlike for 4x4/8x8/16x16).
- to compensate for this loss in precision, the quantizer is halved
also. This is currently a little hacky (see the sketch after these
notes).
- FDCT and IDCT are double-only right now; they need a fixed-point
impl.
- There are no default probabilities for the 32x32 transform yet; I'm
simply using the 16x16 luma ones. A future commit will add newly
generated probabilities for all transforms.
- No ADST version. I don't think we'll add one for this level; if an
ADST is desired, transform-size selection can scale back to 16x16
or lower, and use an ADST at that level.
Additional notes specific to Debargha's DWT/DCT hybrid:
- coefficient scale is different for the top/left 16x16 (DCT-over-DWT)
block than for the rest (DWT pixel differences) of the block. Therefore,
RD error scoring isn't easily scalable between coefficient and pixel
domain. Thus, unfortunately, we need to compute the RD distortion in
the pixel domain until we figure out how to scale these appropriately.
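A sketch of the coefficient scaling described in the notes above
(illustrative names; fdct32x32 here is a hypothetical stand-in for
the actual forward transform):

  #include <stdint.h>

  /* Hypothetical forward transform producing 17-bit output. */
  void fdct32x32(const int16_t *input, int32_t *output);

  /* Drop one bit so the coefficients fit in int16_t; the quantizer
     is halved elsewhere to compensate, and the RD error scoring
     skips the usual right-shift by two. */
  static void fdct32x32_scaled(const int16_t *input, int16_t *coeff) {
    int32_t full[32 * 32];
    int i;
    fdct32x32(input, full);
    for (i = 0; i < 32 * 32; ++i)
      coeff[i] = (int16_t)(full[i] >> 1);
  }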
Change-Id: I00386f20f35d7fabb19aba94c8162f8aee64ef2b
Support for gyp, which doesn't support multiple objects with the
same basename in the same static library.
Change-Id: Ib947eefbaf68f8b177a796d23f875ccdfa6bc9dc
Rather than building an object file directory hierarchy matching the
source tree's layout, rename the object files so that each object
file name contains its path within the source tree. The intent is to
allow two files in different parts of the source tree to have the
same name and still not collide when put into an ar archive.
Change-Id: Id627737dc95ffc65b738501215f34a995148c5a2
Exclude key frames from the buffer underrun check, and increase the
lowest bitrate in BasicBufferModel.
Both changes are needed because of a known issue (#495).
Change-Id: If5e994f813d7d5ae870c1a72be404c8f7dbbdf27
Creates a merge between the master and experimental branches. Fixes a
number of conflicts in the build system to allow *either* VP8 or VP9
to be built. Specifically either:
$ configure --disable-vp9
$ configure --disable-vp8 --disable-unit-tests
VP9 still exports its symbols and files as VP8, so that will be
resolved in the next commit.
Unit tests are broken in VP9, but this isn't a new issue. They are
fixed upstream on origin/experimental as of this writing, but
rebasing this merge proved difficult, so that will be tackled in a
second merge commit.
Change-Id: I2b7d852c18efd58d1ebc621b8041fe0260442c21
In the variance calculations the difference is summed and later
squared. When the sum exceeds sqrt(2^31), the value is treated as
negative when it is shifted, which gives incorrect results.
To fix this we force the multiplication to be unsigned.
The alternative fix is to shift the sum down by 4 before
multiplying; however, that would reduce precision.
For 16x16 blocks the maximum sum is 65280 and sqrt(2^31) is 46340 (and
change).
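The failure and the fix in miniature; a sketch of the 16x16 case
rather than the actual SIMD code:

  #include <stdint.h>

  /* For a 16x16 block the codec computes sse - sum*sum/256.  sum can
     reach 65280, and 65280^2 overflows int32_t, flipping the sign
     before the shift; the unsigned cast keeps the low 32 bits of the
     product exact, which is all the shift needs. */
  static uint32_t variance16x16(uint32_t sse, int sum) {
    return sse - (((uint32_t)sum * sum) >> 8);
  }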
This change is based on:
1698234 Missed some variance casts
fea3556 Fix variance overflow
Change-Id: I2c61856cca9db54b9b81de83b4505ea81a050a0f
s/([vV][pP])8/$19/
Additionally, dct.h was removed; declare the _c functions that are
used in the tests. The TODO for conversion to parameterized tests
still remains.
Change-Id: I73db9425a57075bbb78a92693ba6b320578981cd
Got 61 test vectors from vp8-test-vectors.git
(http://git.chromium.org/gitweb/?p=webm/vp8-test-vectors.git)
Added decoder test vector downloading to the unit tests. Uploaded
the test vectors and their md5 files to the WebM website:
$ gsutil cp *.* gs://downloads.webmproject.org/test_data/libvpx
Added their sha1sum to the test/test-data.sha1 file.
In unit tests, download the test vectors to LIBVPX_TEST_DATA_PATH.
Test_vector_test goes through the test vectors, decodes them, and
computes the md5 checksums. The checksums are compared with the
expected md5 checksums to tell whether the decoder decodes
correctly.
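The per-frame hashing is roughly as follows (a sketch using libvpx's
bundled md5 helpers and assuming 4:2:0 frames; the real test walks
the decoder's frame iterator):

  #include "md5_utils.h"
  #include "vpx/vpx_image.h"

  /* Feed the visible pixels of one decoded frame into the digest. */
  static void hash_frame(MD5Context *md5, const vpx_image_t *img) {
    int plane;
    for (plane = 0; plane < 3; ++plane) {
      const unsigned char *buf = img->planes[plane];
      /* Chroma planes are subsampled relative to luma. */
      const unsigned int h = plane ? (img->d_h + 1) >> 1 : img->d_h;
      const unsigned int w = plane ? (img->d_w + 1) >> 1 : img->d_w;
      unsigned int y;
      for (y = 0; y < h; ++y) {
        MD5Update(md5, buf, w);
        buf += img->stride[plane];
      }
    }
  }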
Change-Id: Ia1e84f9347ddf1d4a02e056c0fee7d28dccfae15
1. Algorithm modification:
Instead of using the same filter threshold for the whole frame, we
now allow the thresholds to be adjusted per macroblock. In the
current implementation, to avoid the excessive background blur
reported in issue 480 (http://code.google.com/p/webm/issues/detail?id=480),
we reduce the thresholds for skipped macroblocks (sketched below).
2. SSE2 optimization:
As started in issue 479 (http://code.google.com/p/webm/issues/detail?id=479),
the filter calculation was adjusted for better performance, and the
C code was modified accordingly. This made the deblock filter 2x
faster and the decoder 1.2x faster overall.
Next, the demacroblock filter will be modified similarly.
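The core of change 1, sketched with illustrative names (not the
actual postproc code; the halving is just an example reduction):

  /* Lower the deblock threshold on macroblocks the encoder skipped,
     so static background is filtered less aggressively. */
  static void adjust_mb_limits(unsigned char *limits, int mb_cols,
                               const char *mb_skipped, int base) {
    int col;
    for (col = 0; col < mb_cols; ++col)
      limits[col] = mb_skipped[col] ? (unsigned char)(base / 2)
                                    : (unsigned char)base;
  }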
Change-Id: I05e54c3f580ccd427487d085096b3174f2ab7e86
This unit test compares the difference in quality with
error resilience enabled and disabled. The test runs
for all of the one-pass encoding modes.
The test ensures that the effect of turning on error
resilience makes less than a 10% difference in PSNR.
Further cases should be added to do a more comprehensive
test.
Change-Id: I1fc747fc78c9459bc6c74494f4b38308dbed0c32
Modified the EncoderTest class to have separate member variables for
initialization-time and per-frame settings.
Change-Id: I08a1901f8f3ec16e45f96297e08e7f6df0f4aa0b
The stats buffer needs to be reset between runs of the
encoder. I added a Reset() function to TwopassStatsStore
and called it at the beginning of each encode.
This enables us to run multiple encodes, which was previously not
possible since there was no way to reset the stats between runs.
Change-Id: Iebb18dab83ba9331f009f764cc858609738a27f9
The codec as it stood placed a keyframe one frame after a real scene
cut, and ignored datarate and other considerations.
TODO: It's possible that we should detect a keyframe and recode the
frame (in certain circumstances) to improve quality.
Change-Id: Ia1fd6d90103f4da4d21ca5ab62897d22e0b888a8
Patch Set 1: gain familiarity with unit tests... added simple
4x4 subtract test
Patch Set 2: fixed mistakes, parameterized as suggested
Patch Set 3: randomized the source/predictor data
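The operation under test, in outline (illustrative C rather than the
exact vp8_subtract_b signature):

  /* residual = source - prediction, for one 4x4 block */
  static void subtract4x4(const unsigned char *src, int src_stride,
                          const unsigned char *pred, int pred_stride,
                          short *diff, int diff_stride) {
    int r, c;
    for (r = 0; r < 4; ++r)
      for (c = 0; c < 4; ++c)
        diff[r * diff_stride + c] =
            (short)(src[r * src_stride + c] -
                    pred[r * pred_stride + c]);
  }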
Change-Id: I33432bdf7c9f2a9b8c2533a37106382c2a8209ee
Signed-off-by: Scott LaVarnway <slavarnway@google.com>
Replace DECLARE_ALIGNED_ with vpx_memalign()
DECLARE_ALIGNED (__declspec(align())) does not work as intended when
used on class data members:
Data in classes or structures is aligned within the class or structure
at the minimum of its natural alignment and the current packing setting
(from #pragma pack or the /Zp compiler option)
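The replacement pattern, sketched (vpx_memalign/vpx_free are the
existing allocators from vpx_mem.h; the buffer size and alignment
shown are illustrative):

  #include "vpx_mem/vpx_mem.h"

  /* Instead of an aligned array data member, allocate at runtime;
     vpx_memalign honors the requested alignment regardless of the
     enclosing class's packing. */
  short *alloc_coeff_buffer(void) {
    return (short *)vpx_memalign(16, 16 * 16 * sizeof(short));
  }
  /* ... and later release the buffer with vpx_free(). */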
Change-Id: I304aaa6c3716fbfae24675ecf192f4b40787e83e
The transform functions on the experimental branch absorbed a
scaling factor of 4 to allow quantization steps closer to a unit
quantizer. This commit adds scaling code between the forward and
inverse transforms to properly account for that factor.
Change-Id: I9a573ddc1ffa74973b34800a5da1a56dbabe0949
Using large values for the timebase, e.g. {33333, 1000000}, could
roll over the timestamp calculation in vp8e_encode, as it was not
using 64-bit math.
originally reported on ffmpeg's trac:
https://ffmpeg.org/trac/ffmpeg/ticket/1014
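In miniature (not the exact vp8e_encode code), the conversion has to
be widened before multiplying:

  #include <stdint.h>

  /* pts (in timebase units) -> microseconds.  With num = 33333 and
     den = 1000000, 32-bit intermediates roll over after only a few
     thousand frames; int64_t arithmetic throughout avoids that. */
  static int64_t timebase_to_usec(int64_t pts, int num, int den) {
    return pts * num * 1000000 / den;
  }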
BUG=468
Change-Id: Iedb4e11de086a3dda75097bfaf08f2488e2088d8
This commit replaces run-time initialization of cosine constants
with static constant values, which provides roughly 30% relief on
the slow code path. The real solution, however, will be to implement
integer versions of those functions that currently use float/double.
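The pattern being applied, sketched (the values shown are
cos(k*pi/8), rounded; the actual tables cover the full set of
constants used by the transforms):

  /* Computed once, offline, instead of calling cos() at run time. */
  static const double kCosPi8[4] = {
    1.0000000000000000,  /* cos(0)      */
    0.9238795325112867,  /* cos(pi/8)   */
    0.7071067811865476,  /* cos(pi/4)   */
    0.3826834323650898   /* cos(3*pi/8) */
  };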
Change-Id: Ie3ff1793509653d78dd1aeaf88cc6737da1bc55f
Set on all 16x16 intra/inter modes
Features:
- Butterfly fDCT/iDCT
- Loop filter does not filter internal edges with 16x16
- Optimize coefficient function
- Update coefficient probability function
- RD
- Entropy stats
- 16x16 is a config option
Have not tested with experiments.
hd: 2.60%
std-hd: 2.43%
yt: 1.32%
derf: 0.60%
Change-Id: I96fb090517c30c5da84bad4fae602c3ec0c58b1c
SAD returns unsigned values; make all the declarations the same.
Remove the bestsad initialization and check: it is always set to the
result of a SAD call, so it will never remain UINT_MAX.
Use ja instead of jg so the comparison is unsigned rather than
signed. Update the test.
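The signed/unsigned distinction in C terms (the asm change swaps the
jump instruction that implements this comparison):

  #include <stdint.h>
  #include <stdio.h>

  int main(void) {
    uint32_t best = 0x80000000u; /* a large unsigned SAD value */
    uint32_t cur = 1;
    /* Unsigned compare (ja): cur is smaller, correct. */
    printf("unsigned: cur < best is %d\n", cur < best);
    /* Signed compare (jg): best's top bit makes it negative, so cur
       looks larger, which is the wrong direction. */
    printf("signed:   cur > best is %d\n",
           (int32_t)cur > (int32_t)best);
    return 0;
  }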
Change-Id: I46336ab45f4e60fc37caf20bd36bc5782079c7a5
The lower complexity modes may not generate a keyframe automatically.
This behavior was found when running under Valgrind, as the slow
performance caused the speed selection to pick lower complexities than
when running natively. Instead, use a fixed complexity for the
realtime auto keyframe test.
Affected tests:
AllModes/KeyframeTest.TestAutoKeyframe/0
Change-Id: I44e3f44e125ad587c293ab5ece29511d7023be9b
This unit test tests the vp8_sixtap_predict functions against preset
data and randomly generated data. The test against preset data
checks the correctness of the functions, and the test against random
data checks that the optimized six-tap predictor functions generate
results matching the C functions (see the sketch after the list). It
tests the following functions:
vp8_sixtap_predict16x16_c
vp8_sixtap_predict16x16_mmx
vp8_sixtap_predict16x16_sse2
vp8_sixtap_predict16x16_ssse3
vp8_sixtap_predict8x8_c
vp8_sixtap_predict8x8_mmx
vp8_sixtap_predict8x8_sse2
vp8_sixtap_predict8x8_ssse3
vp8_sixtap_predict8x4_c
vp8_sixtap_predict8x4_mmx
vp8_sixtap_predict8x4_sse2
vp8_sixtap_predict8x4_ssse3
vp8_sixtap_predict4x4_c
vp8_sixtap_predict4x4_mmx
vp8_sixtap_predict4x4_ssse3
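The core check, sketched (the real test is C++/gtest; the function
pointer type follows the vp8 sixtap prototype):

  #include <string.h>

  typedef void (*sixtap_fn)(unsigned char *src, int src_stride,
                            int xoffset, int yoffset,
                            unsigned char *dst, int dst_pitch);

  /* Run the C reference and an optimized variant on the same input
     and require bit-exact output. */
  static int outputs_match(sixtap_fn ref, sixtap_fn opt,
                           unsigned char *src, int src_stride,
                           int xoff, int yoff, int w, int h,
                           unsigned char *out_ref,
                           unsigned char *out_opt) {
    ref(src, src_stride, xoff, yoff, out_ref, w);
    opt(src, src_stride, xoff, yoff, out_opt, w);
    return memcmp(out_ref, out_opt, (size_t)w * h) == 0;
  }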
Change-Id: I6de097898ebca34a4c8020aed1e8dde5cd3e493b
We need an easy way to build the unit test driver without running the
tests. This enables passing options like --gtest_filter to the
executable, which can't be done very cleanly when running under
`make test`.
Fixed a number of compiler errors/warnings when building the tests
in the various configurations exercised by Jenkins.
Change-Id: I9198122600bcf02520688e5f052ab379f963b77b