This commit makes the dual filter experiment work with non-4:2:0
settings. It fixes the unit test failure in EndToEndTestLarge.
Change-Id: I04f7afdee78f91389d9ff72947efa152098af930
Revisit the compression performance and complexity trade-off after the
SIMD version of the trellis optimization is in place. Until then,
temporarily reduce the number of transform-quantization function calls.
This causes about a 0.3% compression performance drop on the lowres set.
Change-Id: I16917a6bd5c44ec6cd8cd0b59f3c336c4fd96dd2
1. Skip golden non-zeromv and newmv-last modes for bsize >= 16x16 if the
temporal variance obtained from choose_partitioning is very low.
2. Skip the horizontal and vertical INTRA modes for speed 8.
This change works best on clips with little noise and some motion
(e.g. gips_motion, which sees a > 5% speedup). The PSNR drop is 1.78%
on the rtc test set; no obvious visual quality regression was found.
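
A rough sketch of the first heuristic, using simplified stand-in enums and
a hypothetical helper name (skip_inter_mode) rather than the actual encoder
variables:

    #include <stdbool.h>

    /* Simplified stand-ins for the encoder's reference, mode, and
     * block-size enums; the real definitions live in the vp9 headers. */
    typedef enum { LAST_FRAME, GOLDEN_FRAME } RefFrame;
    typedef enum { ZEROMV, NEWMV, OTHER_MODE } PredMode;
    typedef enum { BLOCK_8X8, BLOCK_16X16, BLOCK_32X32 } BlockSize;

    /* Skip the listed inter modes on blocks >= 16x16 when
     * choose_partitioning reported very low temporal variance. */
    static bool skip_inter_mode(bool low_temp_var, RefFrame ref,
                                PredMode mode, BlockSize bsize) {
      if (!low_temp_var || bsize < BLOCK_16X16) return false;
      if (ref == GOLDEN_FRAME && mode != ZEROMV) return true;  /* golden non-zeromv */
      if (ref == LAST_FRAME && mode == NEWMV) return true;     /* newmv-last */
      return false;
    }
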
Change-Id: Ib43b5b20e67809d03c5a6890818ddff59e1fc94a
This commit combines the uniform quantizer with trellis-based coefficient
level optimization. It improves compression performance:
lowres 0.8%
midres 1.0%
hdres 1.6%
Note that the current trellis optimization unit is written in C, which
makes the overall quantization process slower. A number of optimizations
will follow.
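
A minimal sketch of the per-coefficient trade-off, assuming a made-up rate
model (level_rate_bits) and a single greedy pass rather than the encoder's
full trellis search over coding contexts:

    #include <stdint.h>
    #include <stdlib.h>

    /* Hypothetical rate model; the encoder derives rates from
     * entropy-coding contexts instead. */
    static int level_rate_bits(int level) { return level == 0 ? 1 : 2 + level; }

    static int64_t rd_cost(int32_t coeff, int level, int q_step,
                           int64_t lambda) {
      const int64_t err = (int64_t)abs(coeff) - (int64_t)level * q_step;
      return err * err + lambda * level_rate_bits(level);
    }

    /* Uniformly quantize each coefficient, then drop the level by one
     * when doing so lowers distortion + lambda * rate. */
    static void optimize_levels(const int32_t *coeff, int32_t *qcoeff, int n,
                                int q_step, int64_t lambda) {
      for (int i = 0; i < n; ++i) {
        int level = abs(coeff[i]) / q_step;  /* uniform quantizer */
        if (level > 0 && rd_cost(coeff[i], level - 1, q_step, lambda) <
                             rd_cost(coeff[i], level, q_step, lambda))
          --level;
        qcoeff[i] = coeff[i] < 0 ? -level : level;
      }
    }
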
Change-Id: Id441dd238e4844409d0f08f82604be777f3f5282
This experiment implements non-uniform quantization where the width of
the bins increases gradually to more closely match the Laplacian
distribution of the coefficients.
Performance Gain:
derflr: 0.15%
hevcmr: 0.675%
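
A sketch of the idea, with an invented growth schedule
(GROWTH_NUM/GROWTH_DEN) rather than the experiment's actual bin-width
table:

    #include <stdint.h>
    #include <stdlib.h>

    /* Illustrative growth factor only: each successive bin is 9/8 as wide
     * as the previous one, so large coefficients land in wider bins. */
    #define GROWTH_NUM 9
    #define GROWTH_DEN 8

    static int quantize_nonuniform(int32_t coeff, int32_t base_step) {
      int32_t mag = abs(coeff);
      int32_t step = base_step;
      int32_t bound = step / 2;  /* dead zone around zero */
      int level = 0;
      while (mag > bound) {
        ++level;
        step = (int32_t)((int64_t)step * GROWTH_NUM / GROWTH_DEN);
        bound += step;  /* next boundary, one widening step further out */
      }
      return coeff < 0 ? -level : level;
    }
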
Change-Id: I25234244e3bcd94b87c1f77cf682190b61c8ef94
This reverts commit f19700fe52850d051e505ec1b085f25060f7d054.
This crashes in SSE2/SumSquares2DTest.RandomValues/0 under x86 due to
alignment issues.
Change-Id: I135d83ba6a7894c09d7c7a139b7eaf876416b40c
This reverts commit efda2831e5f758b4f350679b5c55c0b9282449b0.
This commit causes a segmentation fault in SSE2/SumSquares2DTest.RandomValues/0.
Change-Id: I171937e4daf6f15323e8206418773deb03bd8c53
This test fails when no experiments are turned on: PSNR is 31.96 while
the threshold is 32.
Broken since:
0d6980d Remove swap buffer speed feature
Change-Id: I3c29815b40d5282c37f52f4345b56992f8558b2e
Bring commits 575e81f and 3d6b8a6 to VP10. These changes predate
the creation of the active map cyclic refresh test.
BUG=https://bugs.chromium.org/p/webm/issues/detail?id=1224
Change-Id: I3559b6933ffa5649926a4b214e45ed0fae523a25
Move initialization of some new "twopass" values to
vp9_init_second_pass(), along with some other small changes.
Remove #if GROUP_ADAPTIVE_MAXQ since it is always enabled now.
Change-Id: I1dbec2fd7c419779848aa987c4cd7824d4df8456
The difference between src and dst will be signed; the error will be
unsigned.
quiets -fsanitize=integer:
unsigned integer overflow: 4294967295 * 4294967295
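
A sketch of the signedness fix, with an illustrative function name
(sum_squared_error) rather than the actual libvpx signature:

    #include <stdint.h>

    /* Keep the per-pixel difference in a signed int so (-1) squared is 1,
     * not 0xFFFFFFFF * 0xFFFFFFFF, and accumulate the squared error into
     * an unsigned 64-bit total. */
    static uint64_t sum_squared_error(const uint8_t *src, const uint8_t *dst,
                                      int n) {
      uint64_t error = 0;
      for (int i = 0; i < n; ++i) {
        const int diff = src[i] - dst[i];  /* signed difference */
        error += (uint64_t)(diff * diff);  /* at most 255 * 255, no overflow */
      }
      return error;
    }
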
Change-Id: I580813093ee46284fde7954520dfcb1188f79268
The difference between src and dst will be signed; the error will be
unsigned.
quiets -fsanitize=integer:
unsigned integer overflow: 4294967295 * 4294967295
Change-Id: I502fd707823c4faaa7f587c9cc0312f057e04904
On scene-cut detected frames (i.e., high_source_sad = 1), use
nonrd_pick_partition (over choose_partitioning + select_partitioning),
as the nonrd_pick partitioning is generally better there.
Small positive increase in metrics on the ytlive set (~0.5 - 1%).
Negligible overall speed decrease, as it is only used on scene-cut frames.
Only affects 1-pass VBR mode at speed 5.
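
A sketch of the selection logic; the flag and function names below are
placeholders for the encoder state, not the actual vp9 fields:

    #include <stdbool.h>

    typedef enum { PART_CHOOSE_THEN_SELECT, PART_NONRD_PICK } PartitionSearch;

    /* On frames flagged as scene cuts (high_source_sad), fall back to the
     * full non-RD partition search, which handles abrupt content changes
     * better than choose_partitioning + select_partitioning. */
    static PartitionSearch pick_partition_method(bool one_pass_vbr, int speed,
                                                 bool high_source_sad) {
      if (one_pass_vbr && speed == 5 && high_source_sad) return PART_NONRD_PICK;
      return PART_CHOOSE_THEN_SELECT;
    }
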
Change-Id: I07c89cbdc75f5bb16eb8e0e2773ead0980d2de5c
The assumption doesn't hold true in the current codebase. Remove this
speed feature to simplify the code.
Change-Id: I9b69f484c9b7cd612b825047cc5b2fce63ee0af7
The inter prediction residual can undergo different transform types
during the rate-distortion optimization search, so the assumption used
in this speed feature no longer holds. This commit removes the related
code to clean up the codebase and clear up the unit test failure at
higher speed settings.
Change-Id: I7f7cd4df2345ed3e607c9fae75b38cd2dbde0cac