This commit adds lossless compression capability to the experimental
branch. The lossless experiment can be enabled using --enable-lossless
in configure. When the experiment is enabled, the encoder can be put
into lossless compression mode via the command-line option --lossless,
and the decoder automatically recognizes a losslessly encoded clip and
decodes it accordingly.
To achieve lossless coding, this commit makes the following changes:
1. To encode in lossless mode, the encoder forces the use of the unit
quantizer, i.e., Q 0, where the effective quantization step is 1. The
encoder also disables the 8x8 transform and allows only the 4x4
transform;
2. At Q 0, the first-order 4x4 DCT/IDCT pair has been switched over
to a pair of forward and inverse Walsh-Hadamard transforms
(http://goo.gl/EIsfy), with proper scaling applied to match the range
of the original 4x4 DCT/IDCT pair (see the sketch after this list);
3. At Q 0, the second-order transform continues to use the previous
Walsh-Hadamard transform pair. However, to maintain reversibility of
the second-order transform at Q 0, the first-order DC coefficients are
scaled down prior to the forward transform, and the second-order
output is scaled up prior to quantization. Symmetric upscaling and
downscaling are added around the inverse second-order transform;
4. In lossless mode, the encoder also disables a number of minor
features to ensure no loss is introduced. These features include:
a. Trellis quantization optimization
b. Loop filtering
c. Aggressive zero-binning, rounding and zero-bin boosting
d. Mode based zero-bin boosting
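A minimal sketch of a reversible 4x4 Walsh-Hadamard transform pair in
plain C follows. This is illustrative only, not the actual libvpx
kernels, which apply extra scaling to match the 4x4 DCT range as
described in item 2; the point is that the integer butterfly is
exactly invertible, which is what lossless mode relies on.
  /* A 4-point +/-1 butterfly; applying it to rows and then columns
   * gives a 4x4 Walsh-Hadamard transform. */
  static void wht4_1d(const int *in, int *out) {
    const int a = in[0] + in[1], b = in[0] - in[1];
    const int c = in[2] + in[3], d = in[2] - in[3];
    out[0] = a + c;
    out[1] = b + d;
    out[2] = a - c;
    out[3] = b - d;
  }

  static void wht4x4(const int in[16], int out[16], int shift) {
    int tmp[16], col[4], res[4], i, j;
    for (i = 0; i < 4; i++) wht4_1d(&in[i * 4], &tmp[i * 4]);
    for (j = 0; j < 4; j++) {
      for (i = 0; i < 4; i++) col[i] = tmp[i * 4 + j];
      wht4_1d(col, res);
      for (i = 0; i < 4; i++) out[i * 4 + j] = res[i] >> shift;
    }
  }
  /* Applying the butterfly twice scales each sample by 16, so
   * wht4x4(pixels, coeffs, 0) is the forward transform and
   * wht4x4(coeffs, pixels, 4) reconstructs the input exactly. */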
Lossless coding was tested on all clips within the derf set to verify
that the commit achieves lossless compression for all clips. The
average compression ratio is around 2.57:1.
(http://goo.gl/dEShs)
Change-Id: Ia3aba7dd09df40dd590f93b9aba134defbc64e34
Added the ability to optionally filter the prediction data when inter
modes are selected (excluding SPLITMV, for now). The mode selection
loop considers both the filtered and non-filtered prediction data when
choosing a mode, as sketched below. The filter can be turned on/off at
the frame level, or signaled for each MB.
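A speculative sketch of the per-MB decision; the struct and names
here are illustrative stand-ins, not the actual encoder code.
  #include <stdint.h>

  typedef struct {
    int64_t rd_unfiltered; /* RD cost using normal inter prediction */
    int64_t rd_filtered;   /* RD cost using filtered prediction */
  } pred_costs;

  /* Keep whichever prediction variant is cheaper; the returned flag
   * is what would be signaled per MB (or fixed at the frame level). */
  static int use_pred_filter(const pred_costs *c, int64_t *best_rd) {
    const int on = c->rd_filtered < c->rd_unfiltered;
    *best_rd = on ? c->rd_filtered : c->rd_unfiltered;
    return on;
  }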
Change-Id: I1b783c71d95a361ab36c761b07e8a6b06bc36822
Incorporates mv_ref, mbsplit and second_mv into the adaptive entropy
framework. The mv_ref coding scheme has been modified from its
previous form.
Adds some clean-ups and fixes.
Results with the adaptive entropy experiment are currently up by
+1.93% on derf, +2.33% on std-hd, and +1.87% on yt-hd.
Fixed a nasty intermittent bug.
Change-Id: I4b1ac9f9483b48432597595195bfec05f31d1e39
Fixed the quantifier that optionally matches a quote before the
filename. This was originally reported in the homebrew project[1].
Note that this fix differs from the patch posted there: some platforms
don't emit the quote, so it needs to be matched optionally in the
expression.
[1]: https://github.com/mxcl/homebrew/issues/12567#issuecomment-6434000
Change-Id: Ibf2ed93ce169d80932e877f942dc4eeb03867f8b
Update the comment that defines the allowed ranges for
delta_q and delta_lf that can be used with segmentation.
Change-Id: Ie56ad6f946704259e03ffd49921a4cfb7e1e2f1f
This commit introduces a make target 'testdata' that downloads the
required test data from the WebM project website. The data will also
be downloaded when invoking `make test`, but it is not required merely
to build the test executable.
The download directory is taken from the LIBVPX_TEST_DATA_PATH
environment variable, or may be specified as part of the make command;
if unset, it defaults to the current directory. Most developers will
want to set this environment variable to a place outside their
source/build trees, to avoid having to download the data more than
once.
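For example (the path is illustrative):
  $ make testdata LIBVPX_TEST_DATA_PATH=/path/to/test-data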
To add a test data file:
1) add a line to test/test.mk:
LIBVPX_TEST_DATA-yes += foo-bar-file.y4m
2) add its sha1sum to the test/test-data.sha1 file in the following
format:
528cc88c821e5f5b133c2b40f9c8e3f22eaacc4c foo-bar-file.y4m
3) upload the file to the website
$ gsutil cp foo-bar-file.y4m gs://downloads.webmproject.org/test_data/libvpx
This implementation will check the integrity of the test data
automatically if the `sha1sum` executable is available.
Change-Id: If6910fe304bb3f5cdcc5cb9e5f9afa5be74720d2
This is a unit test for the post-processing functions:
- vp8_post_proc_down_and_across_c
- vp8_post_proc_down_and_across_mmx
- vp8_post_proc_down_and_across_xmm
Change-Id: Iec3e690327b17470209c00417835473f6d9a35d6
Disable unit tests. The logging in GTest would need to be adjusted.
Restructure ARM cpu detection. Flatten if-else logic.
Change #if defined(HAVE_*) to #if HAVE_* because we only need to check
for features that the library was actually built with. This should have
been harmless, as disabled feature sets wouldn't have any features to
call.
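For illustration, with HAVE_NEON standing in for one of the HAVE_*
feature macros (configure always defines them as 0 or 1):
  /* Old form: true whenever the macro is defined, even as 0. */
  #if defined(HAVE_NEON)
    /* compiled even for builds configured without NEON */
  #endif

  /* New form: true only when the feature was built in. */
  #if HAVE_NEON
    /* compiled only when the library was built with NEON */
  #endif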
Change-Id: Iea21aa42ce5f049c53ca0376d25bcd0f36f38284
Changes relating to Issue 411
Removed code that was clearing down the segmentation data each
frame.
Added range/parameter checking in vp8_set_roimap(); it now returns an
error if called when cyclic_refresh is enabled.
Corrected setup_features() so that it sets or clears the segment
update flags as appropriate.
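A hedged sketch of the kind of checking added (the bound and all
names here are illustrative; the real vp8_set_roimap() signature
differs):
  #define MAX_SEG_DELTA 63 /* assumed bound, for illustration only */

  /* Reject the call while cyclic refresh owns the segment map, and
   * range-check the per-segment deltas. */
  static int validate_roimap(const int delta_q[4], const int delta_lf[4],
                             int cyclic_refresh_enabled) {
    int i;
    if (cyclic_refresh_enabled) return -1;
    for (i = 0; i < 4; i++) {
      if (delta_q[i] < -MAX_SEG_DELTA || delta_q[i] > MAX_SEG_DELTA ||
          delta_lf[i] < -MAX_SEG_DELTA || delta_lf[i] > MAX_SEG_DELTA)
        return -1;
    }
    return 0;
  }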
Change-Id: Ib31ac53006640ddf1ba7b9ec8f8b952e3eff860a
Soft enable runtime cpu detect for the armv7-android target, so that
it can be disabled, removing the dependency on the 'cpufeatures' lib.
Change the arm_cpu_caps implementation selection so that 'no rtcd'
takes precedence over system type.
Switch to -mtune instead of -mcpu. The NDK was complaining about
-mcpu=cortex-a8 conflicting with -march=armv7-a; not sure why.
Add a linker flag to work around a Cortex-A8 bug, as suggested by the
NDK Dev Guide.
Examples:
Configure for armv7+neon:
  ./configure --target=armv7-android-gcc \
              --sdk-path=/path/to/android/ndk \
              --disable-runtime-cpu-detect \
              --enable-realtime-only \
              --disable-unit-tests
...armv7 w/o neon:
  ./configure --target=armv7-android-gcc \
              --sdk-path=/path/to/android/ndk \
              --disable-runtime-cpu-detect \
              --enable-realtime-only \
              --disable-neon \
              --cpu=cortex-a9 \
              --disable-unit-tests
Change-Id: I37e2c0592745208979deec38f7658378d4bd6cfa
The function vp8_post_proc_down_and_across_c takes the stride of both
the src and dst images as parameters, but assumes that they are the
same. I modified the code to use the correct strides, as the assembler
versions of these functions do.
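A minimal sketch of the indexing this restores (only the indexing;
the real function also filters, and this helper is illustrative):
  static void walk_plane(const unsigned char *src, int src_stride,
                         unsigned char *dst, int dst_stride,
                         int rows, int cols) {
    int r, c;
    for (r = 0; r < rows; r++) {
      for (c = 0; c < cols; c++) dst[c] = src[c]; /* filtering elided */
      src += src_stride; /* advance by the source stride... */
      dst += dst_stride; /* ...and the destination stride separately */
    }
  }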
Change-Id: I222715b774cd071b21c15a4b0d2f4aef64a520de
vpx uses symbols from libm, so we need to indicate to users of libvpx
that linking against libvpx also requires linking against libm.
Change-Id: I31d4068bf7f6f5b1fd222bcdf9e6a1a92fb6696f
Avoid a pthreads dependency via pthread_once() when compiled with
--disable-multithread.
In addition, this synchronization is disabled for Win32 as well, even
though we can be sure that the required primitives exist, so that the
requirements on the application when built with --disable-multithread
are consistent across platforms.
Users using libvpx built with --disable-multithread in a multithreaded
context should provide their own synchronization. Updated the
documentation to vpx_codec_enc_init_ver() and vpx_codec_dec_init_ver()
to note this requirement. Moved the RTCD initialization call to match
this description, as previously it didn't happen until the first
frame.
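A sketch of the resulting pattern, assuming CONFIG_MULTITHREAD is the
configure macro (the real code also excludes Win32, and rtcd_init() is
a hypothetical stand-in for the one-time RTCD setup):
  #if CONFIG_MULTITHREAD
  #include <pthread.h>
  #endif

  static void rtcd_init(void) { /* runtime CPU detection setup */ }

  static void init_once(void) {
  #if CONFIG_MULTITHREAD
    static pthread_once_t flag = PTHREAD_ONCE_INIT;
    pthread_once(&flag, rtcd_init);
  #else
    /* No pthreads dependency: the application must serialize its
     * first call into the library. */
    static int done;
    if (!done) {
      rtcd_init();
      done = 1;
    }
  #endif
  }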
Change-Id: Id576f6bce2758362188278d3085051c218a56d4a
This patch incorporates adaptive entropy coding of coefficient tokens
and mode/mv information, based on the distributions encountered in a
frame. Specifically, there is an initial forward update to the
probabilities in the bitstream, as before, for coding the symbols in
the frame; however, at the end of decoding each frame, the forward
update to the probabilities is reverted and the probabilities are
instead updated towards the actual distributions encountered within
the frame. The amount of update is weighted by the number of hits
within each context, as sketched below.
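A hedged sketch of the shape of such an update (the actual libvpx
weighting and saturation differ):
  /* Blend the baseline probability toward the distribution observed
   * in the frame, weighting by the number of hits in this context.
   * ct0/ct1 are the branch counts accumulated over the frame. */
  static unsigned char adapt_prob(unsigned char pre_prob,
                                  unsigned int ct0, unsigned int ct1) {
    const unsigned int hits = ct0 + ct1;
    unsigned int new_prob, weight;
    if (hits == 0) return pre_prob; /* context unused: keep baseline */
    new_prob = (ct0 * 255 + hits / 2) / hits; /* observed P(branch 0) */
    if (new_prob < 1) new_prob = 1;
    if (new_prob > 255) new_prob = 255;
    weight = hits > 255 ? 255 : hits; /* more hits => stronger update */
    return (unsigned char)((pre_prob * (256 - weight) +
                            new_prob * weight + 128) >> 8);
  }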
Results on derf/hd/std-hd are all up by 1.6%.
On derf, most of the gains come from coefficients; for the hd and
std-hd sets, most of the gains come from the mode/mv information
updates.
Change-Id: I708c0e11fdacafee04940fe7ae159ba6844005fd
This commit removes two arrays, which contained the probabilities of
how likely each probability in the coef_probs table is to be updated,
and uses the fixed number 252 instead.
Surprisingly, the overall impact on quality is close to zero, which
basically says the two big static arrays were not helpful at all.
derf: -0.016%, -0.020%
std-hd: 0.000%, -0.013%
yt: -0.022%, +0.007%
yt-hd: -0.038%, +0.034%
Change-Id: Ifee94d28a37dcab4f1d2b994bd5b07575be42b72