This adds Debargha's DCT/DWT hybrid and a regular 32x32 DCT, and adds
code all over the place to wrap that in the bitstream/encoder/decoder/RD.
Some implementation notes (these probably need careful review):
- token range is extended by 1 bit, since the value range out of this
transform is [-16384,16383].
- the coefficients coming out of the FDCT are manually scaled back by
1 bit, or else they won't fit in int16_t (they are 17 bits). Because
of this, the RD error scoring does not right-shift the MSE score by
two (unlike for 4x4/8x8/16x16).
- to compensate for this loss in precision, the quantizer is also halved.
This is currently a little hacky (see the sketch below).
- FDCT and IDCT are double-only right now. They need a fixed-point
implementation.
- There are no default probabilities for the 32x32 transform yet; I'm
simply using the 16x16 luma ones. A future commit will add newly
generated probabilities for all transforms.
- No ADST version. I don't think we'll add one for this level; if an
ADST is desired, transform-size selection can scale back to 16x16
or lower, and use an ADST at that level.
Additional notes specific to Debargha's DWT/DCT hybrid:
- coefficient scale is different for the top/left 16x16 (DCT-over-DWT)
block than for the rest (DWT pixel differences) of the block. Therefore,
RD error scoring isn't easily scalable between coefficient and pixel
domain. Thus, unfortunately, we need to compute the RD distortion in
the pixel domain until we figure out how to scale these appropriately.
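A minimal sketch (hypothetical names, not the actual encoder code) of the
1-bit scale-back and halved quantizer described in the notes above:

  #include <stdint.h>

  #define NUM_COEFFS_32X32 (32 * 32)

  /* 32x32 FDCT output needs 17 bits; drop one bit so it fits in int16_t */
  static void scale_back_fdct32x32(const int32_t *in, int16_t *out) {
    int i;
    for (i = 0; i < NUM_COEFFS_32X32; ++i)
      out[i] = (int16_t)(in[i] >> 1);
  }

  /* halve the quantizer step to compensate for the lost bit of precision */
  static int16_t quant_step_32x32(int16_t step) {
    return step >> 1;
  }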
Change-Id: I00386f20f35d7fabb19aba94c8162f8aee64ef2b
Support for gyp, which doesn't allow multiple objects in the same
static library to have the same basename.
Change-Id: Ib947eefbaf68f8b177a796d23f875ccdfa6bc9dc
Rather than building an object file directory hierarchy matching the
source tree's layout, rename the object files so that the object
file name contains the path in the source file tree. The intent here
is to allow two files in different parts of the source tree to have
the same name and still not collide when put into an ar archive.
Change-Id: Id627737dc95ffc65b738501215f34a995148c5a2
Creates a merge between the master and experimental branches. Fixes a
number of conflicts in the build system to allow *either* VP8 or VP9
to be built. Specifically either:
$ configure --disable-vp9
$ configure --disable-vp8 --disable-unit-tests
VP9 still exports its symbols and files as VP8, so that will be
resolved in the next commit.
Unit tests are broken in VP9, but this isn't a new issue. They are
fixed upstream on origin/experimental as of this writing, but rebasing
this merge proved difficult, so that will be tackled in a second merge
commit.
Change-Id: I2b7d852c18efd58d1ebc621b8041fe0260442c21
In the variance calculations the difference is summed and later squared.
When the sum exceeds sqrt(2^31), the squared value overflows into the
sign bit and is treated as negative when it is shifted, which gives
incorrect results.
To fix this, we force the multiplication to be unsigned.
The alternative fix is to shift the sum down by 4 before multiplying;
however, that would reduce precision.
For 16x16 blocks the maximum sum is 65280 and sqrt(2^31) is 46340 (and
change).
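A sketch of the fix (hypothetical helper, not the actual libvpx code):
casting one operand of the multiplication to unsigned keeps the squared
sum well defined before the shift.

  #include <stdint.h>

  /* for a 16x16 block, sse and the sum of pixel differences are
   * accumulated elsewhere; sum can reach +/-65280, so sum * sum exceeds
   * 2^31 but still fits in 32 unsigned bits */
  static uint32_t variance16x16(uint32_t sse, int sum) {
    return sse - (((uint32_t)sum * sum) >> 8);  /* unsigned multiply */
  }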
This change is based on:
1698234 Missed some variance casts
fea3556 Fix variance overflow
Change-Id: I2c61856cca9db54b9b81de83b4505ea81a050a0f
s/([vV][pP])8/$19/
Additionally, dct.h was removed; declare the _c functions that are used
in the tests. The TODO for converting to parameterized tests still
remains.
Change-Id: I73db9425a57075bbb78a92693ba6b320578981cd
Got 61 test vectors from vp8-test-vectors.git
(http://git.chromium.org/gitweb/?p=webm/vp8-test-vectors.git)
Added decoder test vector downloading to the unit tests. Uploaded
the test vectors and their md5 files to the WebM website.
$ gsutil cp *.* gs://downloads.webmproject.org/test_data/libvpx
Added their sha1sum to the test/test-data.sha1 file.
In unit tests, download the test vectors to LIBVPX_TEST_DATA_PATH.
Test_vector_test goes through the test vectors, decodes them, and
computes the md5 checksums. The checksums are compared with the
expected md5 checksums to tell whether the decoder decodes correctly.
Change-Id: Ia1e84f9347ddf1d4a02e056c0fee7d28dccfae15
1. Algorithm modification:
Instead of having the same filter threshold for the whole frame, we now
allow the thresholds to be adjusted for each macroblock. In the current
implementation, to avoid excessive blur on the background as reported
in issue480 (http://code.google.com/p/webm/issues/detail?id=480), we
reduce the thresholds for skipped macroblocks (see the sketch below).
2. SSE2 optimization:
As described in issue479 (http://code.google.com/p/webm/issues/detail?id=479),
the filter calculation was adjusted for better performance. The C
code was also modified accordingly. This made the deblock filter
2x faster, and the decoder 1.2x faster overall.
Next, the demacroblock filter will be modified similarly.
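An illustrative sketch of the per-macroblock adjustment (the helper and
the halving are hypothetical, not the actual postproc code):

  /* lower the post-proc filter threshold for skipped macroblocks so the
   * static background is not over-blurred */
  static int mb_deblock_threshold(int frame_threshold, int mb_skipped) {
    return mb_skipped ? frame_threshold >> 1 : frame_threshold;
  }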
Change-Id: I05e54c3f580ccd427487d085096b3174f2ab7e86
This unit test compares encoding quality with error resilience
enabled and disabled. The test runs for all of the one-pass encoding
modes and ensures that turning on error resilience changes PSNR by
less than 10%.
Further cases should be added to make the test more comprehensive.
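A sketch in C of the acceptance criterion described above (the helper
name is hypothetical; the actual check is part of the unit test):

  /* error-resilient PSNR must stay within 10% of the baseline PSNR */
  static int psnr_within_tolerance(double psnr_baseline, double psnr_er) {
    return psnr_er >= 0.9 * psnr_baseline;
  }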
Change-Id: I1fc747fc78c9459bc6c74494f4b38308dbed0c32
Modified the EncoderTest class to have separate member variables
for initialization-time and per-frame settings.
Change-Id: I08a1901f8f3ec16e45f96297e08e7f6df0f4aa0b
The stats buffer needs to be reset between runs of the
encoder. I added a Reset() function to TwopassStatsStore
and called it at the beginning of each encode.
This enables us to run multiple encodes, which was previously not
possible since there was no way to reset the stats between runs.
Change-Id: Iebb18dab83ba9331f009f764cc858609738a27f9
The codec as it stood placed a keyframe one frame after a real
scene cut and ignored datarate and other considerations.
TODO: It's possible that we should detect a keyframe and recode
the frame (in certain circumstances) to improve quality.
Change-Id: Ia1fd6d90103f4da4d21ca5ab62897d22e0b888a8
Patch Set 1: gain familiarity with unit tests... added simple
4x4 subtract test
Patch Set 2: fixed mistakes, parameterized as suggested
Patch Set 3: randomized the source/predictor data
Change-Id: I33432bdf7c9f2a9b8c2533a37106382c2a8209ee
Signed-off-by: Scott LaVarnway <slavarnway@google.com>
Replace DECLARE_ALIGNED_ with vpx_memalign()
DECLARE_ALIGNED (__declspec(align())) does not work as intended when
used on class data members:
Data in classes or structures is aligned within the class or structure
at the minimum of its natural alignment and the current packing setting
(from #pragma pack or the /Zp compiler option)
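A sketch of the replacement pattern (the struct and field names are
hypothetical): allocate the buffer with vpx_memalign() so the alignment
travels with the pointer rather than with the class/struct layout.

  #include <stddef.h>
  #include <stdint.h>
  #include "vpx_mem/vpx_mem.h"

  struct example_ctx {
    int16_t *coeff;  /* aligned heap buffer replaces the aligned member */
  };

  static int example_ctx_alloc(struct example_ctx *ctx) {
    ctx->coeff = (int16_t *)vpx_memalign(16, 32 * 32 * sizeof(*ctx->coeff));
    return ctx->coeff != NULL;
  }

  static void example_ctx_free(struct example_ctx *ctx) {
    vpx_free(ctx->coeff);
    ctx->coeff = NULL;
  }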
Change-Id: I304aaa6c3716fbfae24675ecf192f4b40787e83e
The transform functions in the experimental branch absorbed a scaling
factor of 4 to allow quantization steps closer to the unit quantizer.
This commit adds scaling code between the forward and inverse
transforms to properly account for that factor.
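A minimal sketch of the idea (hypothetical helper; the exact placement
and rounding in the real code may differ): the extra scale absorbed by
the forward transform has to be divided back out before the inverse
transform reconstructs the pixels.

  #include <stdint.h>

  /* divide the absorbed factor of 4 back out of the dequantized
   * coefficients before they are handed to the inverse transform */
  static void undo_transform_scale(int16_t *coeff, int num_coeffs) {
    int i;
    for (i = 0; i < num_coeffs; ++i)
      coeff[i] = (int16_t)(coeff[i] / 4);
  }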
Change-Id: I9a573ddc1ffa74973b34800a5da1a56dbabe0949
Using large values for the timebase, e.g., {33333, 1000000}, could
roll over the timestamp calculation in vp8e_encode, as it was not using
64-bit math.
Originally reported on ffmpeg's trac:
https://ffmpeg.org/trac/ffmpeg/ticket/1014
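A sketch of the 64-bit arithmetic (hypothetical helper, not the actual
vp8e_encode code):

  #include <stdint.h>

  /* with 32-bit math and a timebase of {33333, 1000000}, pts * num
   * already wraps at pts ~= 2^31 / 33333 ~= 64425; keeping the product
   * in 64 bits avoids the rollover */
  static int64_t scale_pts(int64_t pts, int num, int den) {
    return pts * (int64_t)num / den;
  }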
BUG=468
Change-Id: Iedb4e11de086a3dda75097bfaf08f2488e2088d8
This commit replaces run-time initialization of cosine constants with
static constant values, which provides roughly 30% relief from the
slowdown. The real solution, however, will be to implement integer
versions of the functions that currently use float/double.
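A sketch of the approach (values rounded; the table name is
hypothetical): the constants are spelled out at compile time instead of
being computed with cos() at start-up.

  /* cos(k * pi / 8) for k = 0..3, precomputed instead of initialized at
   * run time */
  static const double kCosPi8[4] = {
    1.000000000000000, 0.923879532511287, 0.707106781186548,
    0.382683432365090
  };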
Change-Id: Ie3ff1793509653d78dd1aeaf88cc6737da1bc55f
Enabled for all 16x16 intra/inter modes.
Features:
- Butterfly fDCT/iDCT
- Loop filter does not filter internal edges with 16x16
- Optimize coefficient function
- Update coefficient probability function
- RD
- Entropy stats
- 16x16 is a config option
Not yet tested with other experiments enabled.
hd: 2.60%
std-hd: 2.43%
yt: 1.32%
derf: 0.60%
Change-Id: I96fb090517c30c5da84bad4fae602c3ec0c58b1c
SAD returns unsigned values. Make all the declarations the same.
Remove the bestsad initialization and check. It is always set to the
result of a SAD call, so it will never remain UINT_MAX.
Use ja instead of jg so that the comparison is unsigned instead of signed.
Update test.
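A sketch of the signed/unsigned distinction in C terms (hypothetical
helper); in the x86 assembly the same distinction is ja (unsigned above)
versus jg (signed greater):

  /* both operands are unsigned, so the compare matches ja, not jg */
  static unsigned int best_sad(unsigned int current_best,
                               unsigned int this_sad) {
    return this_sad < current_best ? this_sad : current_best;
  }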
Change-Id: I46336ab45f4e60fc37caf20bd36bc5782079c7a5