Commit Graph

83 Commits

Author SHA1 Message Date
Dmitry Kovalev
8b20aa2337 Merge "Renaming y1dc_delta_q, uvdc_delta_q, uvac_delta_q fields from VP9Common." into experimental 2013-04-18 14:26:06 -07:00
Jingning Han
90a91cc683 Recursive partition syntax coding
Enable recursive partition information coding from SB64X64 down to
MB16X16. The bitstream syntax now supports rectangular block
sizes. It starts from SB64X64 and recursively describes the partition
type of the current block. If the partition type is PARTITION_NONE,
the block is coded as a single unit; if it is PARTITION_HORZ or
PARTITION_VERT, the block is segmented into two independently coded
rectangular units, with no further partition needed; otherwise (the
PARTITION_SPLIT case), the block is segmented into 4 square blocks,
each of which can potentially be further partitioned.

Forward adaptive probability modeling is used for the partition
information coding, conditioned on the current block size.

Change-Id: I499365fb547839d555498e3bcc0387d8a3587d87
2013-04-16 18:41:26 -07:00
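
A minimal sketch of the recursive read described above, assuming a hypothetical bit-reader API; BitReader, read_partition_type and decode_block are placeholders, not the actual libvpx functions, and forcing PARTITION_NONE at 16x16 is one plausible reading of "down to MB16X16":

    typedef struct BitReader BitReader;              /* placeholder reader type */
    typedef enum { PARTITION_NONE, PARTITION_HORZ,
                   PARTITION_VERT, PARTITION_SPLIT } PARTITION_TYPE;

    PARTITION_TYPE read_partition_type(BitReader *br, int bsize);  /* prob model conditioned on bsize */
    void decode_block(BitReader *br, int width, int height);       /* decodes one coded unit */

    /* Walk the partition tree from 64x64 down to 16x16. */
    static void read_partition(BitReader *br, int bsize) {
      const PARTITION_TYPE p =
          (bsize == 16) ? PARTITION_NONE : read_partition_type(br, bsize);
      switch (p) {
        case PARTITION_NONE:  decode_block(br, bsize, bsize); break;   /* single unit */
        case PARTITION_HORZ:  decode_block(br, bsize, bsize / 2);      /* two stacked units */
                              decode_block(br, bsize, bsize / 2); break;
        case PARTITION_VERT:  decode_block(br, bsize / 2, bsize);      /* two side-by-side units */
                              decode_block(br, bsize / 2, bsize); break;
        case PARTITION_SPLIT: {                                        /* four quadrants, recurse */
          int i;
          for (i = 0; i < 4; ++i) read_partition(br, bsize / 2);
          break;
        }
      }
    }
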
Dmitry Kovalev
c3a312ea22 Merge "Adding vp9_write_prob function (macro for now)." into experimental 2013-04-16 18:22:21 -07:00
John Koleszar
6c1a3b42c4 Merge "Adding write_le16 and write_le32 functions." into experimental 2013-04-16 17:45:48 -07:00
Dmitry Kovalev
0be8082be1 Adding write_le16 and write_le32 functions.
Change-Id: I7057ed8e2a13a3c5367e2923eb4b3260bd7cf546
2013-04-16 16:26:25 -07:00
Dmitry Kovalev
ef4d9a4843 Adding vp9_write_prob function (macro for now).
Change-Id: Ic795cf6fc202bf32c9b5b0b3cef9ac422af53cd0
2013-04-16 16:23:17 -07:00
Dmitry Kovalev
1ad7c1f250 Renaming y1dc_delta_q, uvdc_delta_q, uvac_delta_q fields from VP9Common.
New names are y_dc_delta_q, uv_dc_delta_q, uv_ac_delta_q.

Change-Id: I4acae1fc23a4697ce2c5a5becb8dc28ef0a4b552
2013-04-16 15:05:52 -07:00
John Koleszar
e3cfe4e89e Remove the mb_no_coeff_skip flag
This flag was added to VP8 to allow a mode where MB-level skipping
was not permitted, saving a bit per MB. It was never used in practice,
and hasn't been tested in VP9, so remove it.

Change-Id: Id450ec6904c6d06c1919508e7efc52d05cde5631
2013-04-16 12:36:16 -07:00
Adrian Grange
4ee671a15c Merge "Initial addition of multiple ARF frames" into experimental 2013-04-15 09:46:16 -07:00
Adrian Grange
c2876cf0fd Initial addition of multiple ARF frames
This is work in progress; it implements multiple ARF
encoding behind an experimental flag.

It adds the ability to insert multiple ARF frames into a
single ARF group. This patch implements the reordering
of the coded frames, and implements a fixed-length coding
pattern. It applies a fixed quantizer strategy based on
where the frame is in the coding sequence.

Further work to modify the rate control strategy is
ongoing and will be submitted via a set of future patches.

In this first step, each ARF group is recursively
bisected and an ARF frame added at that position in the
sequence. The recursion continues until ARF frames are
within MIN_GF_INTERVAL frames.

The code sits behind the "multiple-arf" experimental
flag ("CONFIG_MULTIPLE_ARF"). The experimental flag
"oneshotq" ("CONFIG_ONESHOTQ") also needs to be enabled
for this patch to work correctly.

Change-Id: Ie473b05ebb43ac473c0cfb659b2b8042823085e2
2013-04-15 09:11:39 -07:00
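
A rough illustration of the bisection described above, assuming a hypothetical is_arf[] marker array and an assumed MIN_GF_INTERVAL value; this is a sketch of the idea, not the encoder's actual frame-ordering code:

    #define MIN_GF_INTERVAL 4   /* assumed value for illustration */

    /* Mark ARF positions inside the half-open frame range [start, end) by
     * recursive bisection; recursion stops once intervals reach MIN_GF_INTERVAL. */
    static void place_arfs(int start, int end, int *is_arf) {
      if (end - start <= MIN_GF_INTERVAL) return;   /* interval small enough, stop */
      const int mid = start + (end - start) / 2;
      is_arf[mid] = 1;                              /* insert an ARF at the bisection point */
      place_arfs(start, mid, is_arf);               /* recurse into both halves */
      place_arfs(mid, end, is_arf);
    }
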
Dmitry Kovalev
399a6cbcde Merge "Renaming vp9_token_struct to vp9_token and removing previous typedef." into experimental 2013-04-14 04:31:39 -07:00
Yaowu Xu
7de5edd14a Rename B_PRED to I4X4_PRED
So it is consistent with I8X8_PRED.

Change-Id: Iefa65124b2419690d83e526c611129c0ede29d11
2013-04-12 09:23:58 -07:00
Deb Mukherjee
66f413af4f Turning model-based updates on with modelcoefprob
This patch changes the default with the modelcoefprob expt
to use model-based forward updates with one-node pegged
modeling.

The maximum difference with fully trained tables is now
less than 0.1%.

Change-Id: I06b44322e10c6703f93f3c1d48d973b1136a0618
2013-04-11 14:45:26 -07:00
Dmitry Kovalev
24f18e1c34 Renaming vp9_token_struct to vp9_token and removing previous typedef.
Change-Id: If69c3d795f87af5cc7bfdfe70ef733c41b4d55c8
2013-04-11 13:01:52 -07:00
Ronald S. Bultje
4eb537c0e6 A few more cases where sb_type was used arithmetically.
With these fixed, the codec produces identical results regardless of
what literal values are used for the enum members in BLOCK_SIZE_*.

Change-Id: I26db8e08019b58ba432af1f0950ebe6b0eb4ad8c
2013-04-10 18:04:57 -07:00
Ronald S. Bultje
8fb5be48a6 Make usage of sb_type independent of literal values.
Change-Id: I0d12f9ef9d960df0172a1377f8e5236eb6d90492
2013-04-10 17:38:57 -07:00
Dmitry Kovalev
d1cff2deb1 Code cleanup in bitstream code.
Lower case variable names, less code.

Change-Id: I1abc8f592ad2343ab5c76fe2d16262741a4a894a
2013-04-08 19:07:29 -07:00
Dmitry Kovalev
dca8ad178c Renaming sb32_coded and sb64_coded fields.
Renaming sb32_coded to prob_sb32_coded and sb64_coded to prob_sb64_coded.

Change-Id: I6de5cad00a57c3e066d53467f8c38cb6073dce11
2013-04-02 18:21:55 -07:00
Deb Mukherjee
e3955007df Merge "Framework changes in nzc to allow more flexibility" into experimental 2013-03-29 15:57:27 -07:00
Deb Mukherjee
fe9b5143ba Framework changes in nzc to allow more flexibility
The patch adds the flexibility to use standard EOB based coding
for smaller block sizes and nzc based coding for larger block sizes.
The tx-sizes that use nzc based coding and those that use EOB based
coding are controlled by a function get_nzc_used().
By default, this function uses nzc based coding for 16x16 and 32x32
transform blocks, which seems to bridge the performance gap
substantially.

All sets are now lower by 0.5% to 0.7%, as opposed to ~1.8% before.

Change-Id: I06abed3df57b52d241ea1f51b0d571c71e38fd0b
2013-03-28 09:33:50 -07:00
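
A sketch of the selection rule described above; the TX_SIZE values are assumed for illustration, and only the shape of get_nzc_used() is taken from the commit message:

    typedef enum { TX_4X4, TX_8X8, TX_16X16, TX_32X32 } TX_SIZE;

    /* EOB based coding for the smaller transforms, nzc based coding for the
     * larger ones; by default that means 16x16 and 32x32 use nzc coding. */
    static int get_nzc_used(TX_SIZE tx_size) {
      return tx_size >= TX_16X16;
    }
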
Ronald S. Bultje
35dc9f5546 Save nzcstats.
Change-Id: I4a3a9eb9f9d17218a0f0d7e148123d34dae879c2
2013-03-27 09:44:47 -07:00
Ronald S. Bultje
790fb13215 Use above/left (instead of previous in scan-order) as token context.
Pearson correlation for above or left is significantly higher than for
previous-in-scan-order (absolute values depend on position in scan, but
in general, we gain about 0.1-0.2 by using either above or left; using
both basically just makes this even better). For eob branch skipping,
we continue to use the previous token in scan order.

This helps about 0.9% on derf after re-training on a limited data set.
Full re-training and results on larger-resolution clips are pending.

Note that this commit breaks trellis, so we can probably get further
gains out of it by fixing trellis at some later point.

Change-Id: Iead68e296fc3a105cca746b5e3da9555d6010cfe
2013-03-26 16:46:09 -07:00
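
A hedged sketch of the neighbour-based context: each coefficient's context is derived from the tokens already coded above and to the left of its position in the 2D transform block (here simply averaged), rather than from the single previous token in scan order. The token_energy[] array and position arguments are illustrative:

    /* Average of the two neighbouring token "energies", rounded up. The eob
     * branch continues to use the previous token in scan order, as noted in
     * the commit message. */
    static int coef_context(const unsigned char *token_energy,
                            int above_pos, int left_pos) {
      return (1 + token_energy[above_pos] + token_energy[left_pos]) >> 1;
    }
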
John Koleszar
441e2eab1b Add an in-loop deringing experiment
Adds a per-frame, strength-adjustable, in-loop deringing filter. Uses
the existing vp9_post_proc_down_and_across 5-tap thresholded blur
code, with a brute-force search for the threshold.

Results almost strictly positive on the YT HD set, either having no
effect or helping PSNR in the range of 1-3% (overall average 0.8%).
Results more mixed for the CIF set (-0.5 min, 1.4 max, 0.1 avg).
This has an almost strictly negative impact on SSIM, so examining a
different filter or a more balanced search heuristic is in order.

Other test set results pending.

Change-Id: I5ca6ee8fe292dfa3f2eab7f65332423fa1710b58
2013-03-26 08:23:24 -07:00
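
A plausible sketch of the brute-force threshold search; the Frame type and the helpers frame_sse, copy_frame and apply_thresholded_blur are hypothetical stand-ins for the existing post-processing code, and the search range is assumed:

    #define MAX_DERING_STRENGTH 16   /* assumed search range */

    typedef struct Frame Frame;
    long long frame_sse(const Frame *a, const Frame *b);   /* sum of squared error */
    void copy_frame(Frame *dst, const Frame *src);
    void apply_thresholded_blur(Frame *f, int strength);   /* 5-tap thresholded blur */

    /* Try every strength and keep the one that minimises distortion vs. the
     * source; strength 0 means the filter is off for this frame. */
    static int search_dering_strength(const Frame *src, const Frame *recon,
                                      Frame *tmp) {
      int best_strength = 0;
      long long best_err = frame_sse(src, recon);
      for (int s = 1; s <= MAX_DERING_STRENGTH; ++s) {
        copy_frame(tmp, recon);
        apply_thresholded_blur(tmp, s);
        const long long err = frame_sse(src, tmp);
        if (err < best_err) { best_err = err; best_strength = s; }
      }
      return best_strength;   /* signalled per frame */
    }
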
Deb Mukherjee
49dcc71493 Merge "Modeling default coef probs with distribution" into experimental 2013-03-26 07:13:13 -07:00
Deb Mukherjee
fd18d5dffe Modeling default coef probs with distribution
Replaces the default tables for single coefficient magnitudes with
those obtained from an appropriate distribution. The EOB node
is left unchanged. The model is represented as a 256-size codebook
where the index corresponds to the probability of the Zero or the
One node. Two variations are implemented corresponding to whether
the Zero node or the One-node is used as the peg. The main advantage
is that the default prob tables will become considerably smaller and
manageable. Besides, there is substantially less risk of over-fitting
to a training set.

Various distributions are tried and the one that gives the best
results is the family of Generalized Gaussian distributions with
shape parameter 0.75. The results are within about 0.2% of fully
trained tables for the Zero peg variant, and within 0.1% for the
One peg variant.

The forward updates are optionally (controlled by a macro)
model-based, i.e. restricted to only convey probabilities from the
codebook. Backward updates can also optionally (controlled by
another macro) be model-based, but this is turned off by default.
Currently model-based forward updates work about the same as
unconstrained updates, but there is a drop in performance when
backward updates are model-based.

The model based approach also allows the probabilities for the key
frames to be adjusted from the defaults based on the base_qindex of
the frame. Currently the adjustment function is a placeholder that
adjusts the prob of EOB and Zero node from the nominal one at higher
quality (lower qindex) or lower quality (higher qindex) ends of the
range. The rest of the probabilities are then derived based on the
model from the adjusted prob of zero.

Change-Id: Iae050f3cbcc6d8b3f204e8dc395ae47b3b2192c9
2013-03-25 23:43:38 -07:00
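
For reference, the density of the Generalized Gaussian family mentioned above, in its standard textbook form (beta is the shape parameter; the commit reports beta = 0.75 as fitting best):

    f(x;\alpha,\beta) = \frac{\beta}{2\alpha\,\Gamma(1/\beta)} \exp\!\left(-\left(\frac{|x|}{\alpha}\right)^{\beta}\right), \qquad \beta = 0.75
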
Dmitry Kovalev
56f3a2c663 Code cleanup: lower case variable names.
Renaming Width to width, Height to height and Version to version in
several structs and function signatures.

Change-Id: I084c3f7e747cb2ce3345aff27a3dff9b13a87543
2013-03-20 16:41:30 -07:00
John Koleszar
8a3f55f2d4 Replace scaling byte with explicit display size
If the intended display size is different than the size the frame is
coded at, then send that size explicitly in the bitstream. Adds a new
bit to the frame header to indicate whether the extra size fields
are present.

Change-Id: I525c66f22d207efaf1e5f903c6a2a91b80245854
2013-03-18 12:02:20 -07:00
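
A hedged sketch of the signalling: one new frame-header bit, and explicit display dimensions only when they differ from the coded size. The writer helpers and the 16-bit field width are assumptions, not the actual bitstream layout:

    typedef struct Writer Writer;
    void write_bit(Writer *w, int bit);
    void write_literal(Writer *w, int value, int bits);

    static void write_display_size(Writer *w, int coded_w, int coded_h,
                                   int display_w, int display_h) {
      const int has_display_size =
          (display_w != coded_w) || (display_h != coded_h);
      write_bit(w, has_display_size);        /* new frame-header bit */
      if (has_display_size) {
        write_literal(w, display_w, 16);     /* explicit display width  */
        write_literal(w, display_h, 16);     /* explicit display height */
      }
    }
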
John Koleszar
9b4095c537 Fix vp9_tree_probs_from_distribution with CONFIG_CODE_NONZEROCOUNT
The automatic merge result was incomplete.

Change-Id: I8976318bfc346d867660a013a302c80edb25fc29
2013-03-11 11:03:36 -07:00
John Koleszar
e6257342b1 Merge "Optimize vp9_tree_probs_from_distribution" into experimental 2013-03-11 09:32:11 -07:00
John Koleszar
bd84685f78 Optimize vp9_tree_probs_from_distribution
The previous implementation visited each node in the tree multiple times
because it used each symbol's encoding to revisit the branches taken and
increment its count. Instead, we can traverse the tree depth first and
calculate the probabilities and branch counts as we walk back up. The
complexity goes from somewhere between O(n log n) and O(n^2) (depending on
how balanced the tree is) to O(n).

Only tested one clip (256kbps, CIF), saw 13% decoding perf improvement.

Note that this optimization should port trivially to VP8 as well. In VP8,
the decoder doesn't use this function, but it does routinely show up
on the profile for realtime encoding.

Change-Id: I4f2848e4f41dc9a7694f73f3e75034bce08d1b12
2013-03-10 13:39:30 -07:00
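
A sketch of the depth-first idea: the vp9_tree convention that non-positive entries are leaf tokens and positive entries index further node pairs follows libvpx, but the count layout here is simplified for illustration:

    #include <stdint.h>

    typedef int8_t vp9_tree_index;

    /* Returns the number of symbols passing through the node pair starting at
     * `node`, filling per-branch counts on the way back up, so every node is
     * visited exactly once (O(n) instead of re-walking each symbol's path). */
    static unsigned tree_counts(const vp9_tree_index *tree, int node,
                                const unsigned *token_counts,
                                unsigned *branch_ct) {
      unsigned total = 0;
      for (int b = 0; b < 2; ++b) {
        const vp9_tree_index next = tree[node + b];
        const unsigned ct = (next <= 0)
            ? token_counts[-next]                                /* leaf: token count */
            : tree_counts(tree, next, token_counts, branch_ct);  /* sum of sub-tree */
        branch_ct[node + b] = ct;  /* simplified layout; probabilities follow from these */
        total += ct;
      }
      return total;
    }
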
Deb Mukherjee
a28139c849 Continued experiment with nonzero count
Adds probability updates for extra bits for the nzcs, code for
getting nzc stats, plus some minor cleanups and fixes.

Change-Id: If2814e7f04fb52f5025ad9f400f3e6c50a00b543
2013-03-08 16:37:08 -08:00
Deb Mukherjee
eb6ef2417f Coding non-zero count rather than EOB for coeffs
This patch revamps the entropy coding of coefficients to code first
a non-zero count per coded block and correspondingly remove the EOB
token from the token set.

STATUS:
Main encode/decode code achieving encode/decode sync - done.
Forward and backward probability updates to the nzcs - done.
Rd costing updates for nzcs - done.
Note: The dynamic programming approach used in trellis quantization
is not exactly compatible with nzcs. A suboptimal approach has been
used instead where branch costs are updated to account for changes
in the nzcs.

TODO:
Training the default probs/counts for nzcs

Change-Id: I951bc1e22f47885077a7453a09b0493daa77883d
2013-03-07 07:20:30 -08:00
Ronald S. Bultje
4209bba462 Merge changes Ifacbf5a0,Ibad7c3dd into experimental
* changes:
  vpxenc: actually report mismatch on stderr.
  Make superblocks independent of macroblock code and data.
2013-03-05 11:17:14 -08:00
Ronald S. Bultje
111ca42133 Make superblocks independent of macroblock code and data.
Split macroblock and superblock tokenization and detokenization
functions and coefficient-related data structs so that the bitstream
layout and related code of superblock coefficients looks less like it's
a hack to fit macroblocks in superblocks.

In addition, unify chroma transform size selection with the luma transform
size (i.e. always use the same size, as long as it fits the predictor);
in practice, this means 32x32 and 64x64 superblocks using the 16x16 luma
transform will now use the 16x16 (instead of the 8x8) chroma transform,
and 64x64 superblocks using the 32x32 luma transform will now use the
32x32 (instead of the 16x16) chroma transform.

Lastly, add a trellis optimize function for 32x32 transform blocks.

HD gains about 0.3%, STDHD about 0.15% and derf about 0.1%. There are
a few negative points here and there that I might want to analyze
a little closer.

Change-Id: Ibad7c3ddfe1acfc52771dfc27c03e9783e054430
2013-03-04 16:34:36 -08:00
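
A small sketch of the unified chroma transform-size rule described above, assuming 4:2:0 subsampling and sizes expressed in pixels (simplified; the real code works on enum values):

    /* Chroma uses the luma transform size, clamped to the largest transform
     * that still fits the half-width/half-height chroma predictor. */
    static int chroma_tx_size(int luma_tx_size, int sb_size) {
      const int chroma_block = sb_size / 2;   /* 4:2:0: chroma plane is half-sized */
      return luma_tx_size <= chroma_block ? luma_tx_size : chroma_block;
    }
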
Jingning Han
5957b2b514 Support 16K sequence coding
Fixed a couple of variable/function definitions, as well as header
handling to support 16K sequence coding at high bit-rates.

The width and height are each specified by two bytes in the header.
Use an extra byte to explicitly indicate the scaling factors in
both directions, each ranging from 0 to 15.

Tested coding up to 16400x16400 dimension.

Change-Id: Ibc2225c6036620270f2c0cf5172d1760aaec10ec
2013-03-04 11:08:41 -08:00
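
A rough sketch of the header fields described above: 16 bits each for width and height, plus one byte carrying the two 4-bit scaling factors (0-15). The writer helper and the packing order of the two factors are assumptions:

    typedef struct Writer Writer;
    void write_literal(Writer *w, int value, int bits);

    static void write_size_and_scaling(Writer *w, int width, int height,
                                       int scale_h, int scale_v) {
      write_literal(w, width, 16);                    /* two bytes for width  */
      write_literal(w, height, 16);                   /* two bytes for height */
      write_literal(w, (scale_h << 4) | scale_v, 8);  /* one byte, two 4-bit scaling factors */
    }
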
Ronald S. Bultje
0c9e2e9a1d Split coefficient token tables intra vs. inter.
Change-Id: I5416455f8f129ca0f450d00e48358d2012605072
2013-02-23 07:33:46 -08:00
Paul Wilkins
c17672a33d Further changes to coefficient contexts.
This patch alters the balance of context between the
coefficient bands (reflecting the position of coefficients
within a transform block) and the energy of the previous
token (or tokens) within a block.

In this case the number of coefficient bands is reduced
but more previous token energy bands are supported.

Some initial rebalancing of the default tables has been done
by running multiple derf clips at multiple data rates using
the ENTROPY_STATS macro. Further balancing needs to be
done using larger image formats, especially in regard to
the bigger transform sizes, which are not as well represented
in encodings of smaller image formats.

Change-Id: If9736e95c391e711b04aef6393d26f60f36e1f8a
2013-02-23 07:29:09 -08:00
Yaowu Xu
441f24de3d Merge "Merge lossless experiment" into experimental 2013-02-20 12:27:26 -08:00
Yaowu Xu
d262e26cc7 Merge lossless experiment
Change-Id: I7b7b8d4fda3a23699e0c920d727f8c15d37d43aa
2013-02-20 07:54:28 -08:00
Paul Wilkins
ef01b956d8 Entropy stats output code.
Fixes to make Entropy stats code work again

Change-Id: I62e380481a4eb4c170076ac6ab36f0c2b203e914
2013-02-20 14:33:19 +00:00
Yaowu Xu
93d6b86cfd Use lossless for Q0
The commit changes the coding mode to lossless whenever the lowest
quantizer is chosen.

As expected, test results showed no difference for the cif and std-hd
sets where Q0 is rarely used. For the yt and yt-hd sets, Q0 is used for
a number of clips, where this commit helped a lot in the high end.

Average over all clips in the sets:
yt: 2.391% 1.017% 1.066%
hd: 1.937%  .764%  .787%

Change-Id: I9fa9df8646fd70cb09ffe9e4202b86b67da16765
2013-02-19 06:18:42 -08:00
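
A sketch of the rule described above, written against the delta-q field names renamed earlier in this log (a plausible reading of "lowest quantizer", not copied code):

    /* Lossless iff the base quantizer index is 0 and no delta raises any
     * plane's effective quantizer above it. */
    static int frame_is_lossless(int base_qindex, int y_dc_delta_q,
                                 int uv_dc_delta_q, int uv_ac_delta_q) {
      return base_qindex == 0 && y_dc_delta_q == 0 &&
             uv_dc_delta_q == 0 && uv_ac_delta_q == 0;
    }
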
Ronald S. Bultje
3af36ea8cc Remove Y2 and Y-no-DC token types from the bitstream.
Change-Id: I7a5314daca993d46b8666ba1ec2ff3766c1e5042
2013-02-15 14:06:30 -08:00
Ronald S. Bultje
48598e30b1 Remove y2dc/ac Q delta values from the bitstream.
Since there is no Y2, these values are always zero. This changes the
bitstream results slightly, hence a separate commit.

Change-Id: I2f838f184341868f35113ec77ca89da53c4644e0
2013-02-15 14:06:30 -08:00
Ronald S. Bultje
46dff5d233 Remove some Y2-related code.
Change-Id: I4f46d142c2a8d1e8a880cfac63702dcbfb999b78
2013-02-15 14:06:25 -08:00
Ronald S. Bultje
89a206ef2f Add support for tile rows.
These allow sending partial bitstream packets over the network before
encoding of a complete frame has completed, thus lowering end-to-end
latency. The tile rows are not independent.

Change-Id: I99986595cbcbff9153e2a14f49b4aa7dee4768e2
2013-02-13 12:31:00 -08:00
Yaowu Xu
f01b08c96c Merge "enable bitstream lossless support" into experimental 2013-02-13 10:26:58 -08:00
Yaowu Xu
17db5d00be enable bitstream lossless support
1. Added a bit in the frame header to indicate if a frame is encoded
in lossless mode, so the decoder does not make the decision based on Q0
2. Minor changes to make sure that lossy coding works the same as when
the lossless experiment is not enabled.
3. Renamed function pointers for transforms to be consistent, using
prefix fwd_txm and inv_txm for forward and inverse respectively

To encode in lossless mode, use "--lossless=1 --min-q=0 --max-q=0"
with vpxenc.

Change-Id: Ifae53b26d2ffbe378d707e29d96817b8a5e6c068
2013-02-13 09:24:39 -08:00
Ronald S. Bultje
f496f601fb Add tile column size limits (256 pixels min, 4096 pixels max).
This is after discussion with the hardware team. Update the unit test
to take these sizes into account. Split out some duplicate code into
a separate file so it can be shared.

Change-Id: I8311d11b0191d8bb37e8eb4ac962beb217e1bff5
2013-02-12 10:33:34 -08:00
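
An illustrative reading of the limits: with a 256-pixel minimum and 4096-pixel maximum tile-column width, the valid number of tile columns follows from the frame width (exact rounding and any power-of-two restriction in libvpx may differ):

    /* Smallest column count that keeps every column <= 4096 pixels, and
     * largest count that keeps every column >= 256 pixels. */
    static void tile_col_range(int frame_width, int *min_cols, int *max_cols) {
      *min_cols = (frame_width + 4095) / 4096;
      *max_cols = frame_width / 256;
      if (*max_cols < 1) *max_cols = 1;   /* always allow a single column */
    }
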
Paul Wilkins
e4f949b55a Merge "Nearest / Zero Mv default entropy tweak." into experimental 2013-02-09 04:21:08 -08:00
John Koleszar
6dfc95fe63 Merge changes Icd1a2a5a,I204d17a1,I3ed92117 into experimental
* changes:
  Initial support for resolution changes on P-frames
  Avoid allocating memory when resizing frames
  Adds a test for the VP8E_SET_SCALEMODE control
2013-02-08 14:20:05 -08:00