Commit Graph

27 Commits

Author SHA1 Message Date
Dmitry Kovalev
c6ca5c5ad9 Compact formatting of default_coef_probs_{4x4, 8x8, 16x16, 32x32}.
Change-Id: If40b930431766d5179b9769509b5e4ca1628e9cc
2013-12-04 15:45:28 -08:00
Jim Bankoski
f6d7e3679c resolved lint issues in default_coef_probs
Change-Id: I97bf241c0d981721cc74a50be47c9db8a00f6be3
2013-09-29 19:41:31 -07:00
Dmitry Kovalev
fcc34796d2 Removing CONFIG_BALANCED_COEFTREE experiment.
Change-Id: I61a8b0101eac3ee2e0621d56151b90c269fd4db4
2013-07-24 15:53:42 -07:00
Ronald S. Bultje
b64be43998 New default tables
Change-Id: Ice8c73a2a843113877b8f8ed78737a1442c25ced
2013-06-08 13:29:14 -07:00
Deb Mukherjee
b8b3f1a46d Balancing coef-tree to reduce bool decodes
This patch changes the coefficient tree to move the EOB below
the ZERO node in order to reduce the number of bool decodes.

The advantages of moving the EOB one step down, as opposed to two steps
down as in the other parallel patch, are: 1. the coef modeling based on
the One node becomes independent of the tree structure above it, and
2. fewer context/counter increases are needed.

The drawback is that the potential savings in bool decodes will be
smaller, but assuming that 0s are much more predominant than 1s, the
potential savings are still likely to be substantial.

Results on derf300: -0.237%

Change-Id: Ie784be13dc98291306b338e8228703a4c2ea2242
2013-05-29 16:25:52 -07:00
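A minimal sketch of the rebalance described above, in the libvpx tree-coder convention (negative entries are leaf tokens, positive entries are offsets of the next node pair); the token names and the truncated tree shape are illustrative, not the patch's actual tables:

    #include <stdint.h>

    typedef int8_t vp9_tree_index;

    enum { ZERO_TOKEN, ONE_TOKEN, TWO_TOKEN, EOB_TOKEN };

    /* Before: EOB is tested first, so every coefficient position pays
       an EOB bool decode even when the token is ZERO. */
    static const vp9_tree_index old_tree[] = {
      -EOB_TOKEN,  2,            /* pair 0: EOB vs. everything else */
      -ZERO_TOKEN, 4,            /* pair 1: ZERO vs. nonzero */
      -ONE_TOKEN,  -TWO_TOKEN,   /* pair 2: rest of tree elided */
    };

    /* After: ZERO is tested first and EOB sits one step below it, so
       runs of zeros (the common case) skip the EOB decode entirely. */
    static const vp9_tree_index new_tree[] = {
      -ZERO_TOKEN, 2,            /* pair 0: ZERO vs. everything else */
      -EOB_TOKEN,  4,            /* pair 1: EOB vs. nonzero */
      -ONE_TOKEN,  -TWO_TOKEN,   /* pair 2: rest of tree elided */
    };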
Deb Mukherjee
7a645e4e12 Merging the model coef prob experiment
Merges the experiment.

Change-Id: I4eb19af6de6df6aa3a96a2e82f231d47ed9b3ae9
2013-05-21 14:44:38 -07:00
Deb Mukherjee
39a90bc8e8 Updating the model coef experiment
Cleans up the experiment. Actually uses reduced counts for backward
updates, and a reduced number of probabilities in the context.

No change in bitstream when the experiment is on.

Between expt on and off:
derfraw300 is down only -0.062% (which is better than when the
experiments were run previously).

Change-Id: I55285a049a0c22810bdb42914212ab5a4f8521b5
2013-05-20 12:46:36 -07:00
Paul Wilkins
a14ae84749 Deprecate code_zerogroup experiment.
Delete code under the CONFIG_CODE_ZEROGROUP flag.

Change-Id: I5fe6c7b42a5da9b73118e33594301da4129f320a
2013-05-07 16:52:55 -07:00
Deb Mukherjee
0aa79be7d5 Removes the code_nonzerocount experiment
The experiment does not seem to give any benefits.

Change-Id: I9d2b4091d6af3dfc0875f24db86c01e2de57f8db
2013-04-22 10:58:49 -07:00
Deb Mukherjee
70d9f116fd End of orientation zero group experiment
Adds an experiment that codes an end-of-orientation symbol
for every eligible zero encountered in scan order.

This cleans out various other sub-experiments that were part
of the original patch; they will be included later if found
useful.

Results are slightly positive on all sets (0.1 - 0.2% range).

Change-Id: I57765c605fefc7fb9d1b57f1b356843602abefaf
2013-04-22 09:27:59 -07:00
Deb Mukherjee
fe9b5143ba Framework changes in nzc to allow more flexibility
The patch adds the flexibility to use standard EOB-based coding
on smaller block sizes and nzc-based coding on larger block sizes.
The tx-sizes that use nzc-based coding and those that use EOB-based
coding are controlled by a function get_nzc_used().
By default, this function uses nzc-based coding for 16x16 and 32x32
transform blocks, which seems to bridge the performance gap
substantially.

All sets are now lower by 0.5% to 0.7%, as opposed to ~1.8% before.

Change-Id: I06abed3df57b52d241ea1f51b0d571c71e38fd0b
2013-03-28 09:33:50 -07:00
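A minimal sketch of the selection function named in the message, under assumed enum values; the real get_nzc_used() in the patch may take more arguments:

    typedef enum { TX_4X4, TX_8X8, TX_16X16, TX_32X32 } TX_SIZE;

    /* Default policy per the commit message: nzc-based coding for the
       larger transforms, standard EOB-based coding for the smaller ones. */
    static int get_nzc_used(TX_SIZE tx_size) {
      return tx_size >= TX_16X16;
    }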
Ronald S. Bultje
3120dbddb1 Redo banding for all transforms.
Now that the first AC coefficients in both directions use the same DC
as their context, there is no longer a purpose in letting each have
its own band. Merging these two bands allows us to split bands for
some of the very high-frequency AC bands.

In addition, I'm redoing the banding for the 1D-ADST col/row scans. I
don't think the old banding made any sense at all (it merged the last
coefficient of the first row/col into the same band as the first two of
the second row/col), which was clearly an oversight from the band being
applied in scan order (rather than by actual position). Now,
coefficients at the same position will be in the same band, regardless
of what scan order is used. I think this makes the most sense for the
purpose of banding, which is basically "predict energy for this
coefficient depending on the energy of context coefficients" (i.e. pt).

After full re-training, together with previous patch, derf gains about
1.2-1.3%, and hd/stdhd gain about 0.9-1.0%.

Change-Id: I7a0cc12ba724e88b278034113cb4adaaebf87e0c
2013-03-26 16:46:13 -07:00
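A hedged sketch of position-based banding: the band is derived from the coefficient's (row, col) position, so the same position lands in the same band no matter which scan order visits it. The boundaries below are made-up placeholders, not the retrained tables:

    /* Band as a function of position only (illustrative boundaries). */
    static int coef_band_by_position(int row, int col) {
      const int d = row + col;   /* rough distance from the DC position */
      if (d == 0) return 0;      /* DC */
      if (d <= 1) return 1;      /* first AC band (the merged band) */
      if (d <= 3) return 2;
      if (d <= 6) return 3;
      return 4;                  /* very high-frequency ACs */
    }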
Ronald S. Bultje
790fb13215 Use above/left (instead of previous in scan-order) as token context.
Pearson correlation for above or left is significantly higher than for
previous-in-scan-order (absolute values depend on position in scan, but
in general, we gain about 0.1-0.2 by using either above or left; using
both basically just makes this even better). For eob branch skipping,
we continue to use the previous token in scan order.

This helps about 0.9% on derf after re-training on a limited data set.
Full re-training and results on larger-resolution clips are pending.

Note that this commit breaks trellis, so we can probably get further
gains out of it by fixing trellis at some later point.

Change-Id: Iead68e296fc3a105cca746b5e3da9555d6010cfe
2013-03-26 16:46:09 -07:00
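A minimal sketch of the context change, assuming a per-position plane of token energies; the names, the neighbor-combining rule, and the edge handling are assumptions for illustration:

    #include <stdint.h>

    /* Context from the above and left neighbors' token energy instead
       of the previous token in scan order. */
    static int get_token_context(const uint8_t *energy, int stride,
                                 int row, int col) {
      const int above = row > 0 ? energy[(row - 1) * stride + col] : 0;
      const int left  = col > 0 ? energy[row * stride + col - 1] : 0;
      return (above + left + 1) >> 1;   /* combine both neighbors */
    }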
Deb Mukherjee
fd18d5dffe Modeling default coef probs with distribution
Replaces the default tables for single coefficient magnitudes with
those obtained from an appropriate distribution. The EOB node
is left unchanged. The model is represented as a 256-size codebook
where the index corresponds to the probability of the Zero or the
One node. Two variations are implemented, corresponding to whether
the Zero node or the One node is used as the peg. The main advantage
is that the default prob tables will become considerably smaller and
more manageable. Besides, there is substantially less risk of
over-fitting to a training set.

Various distributions were tried, and the one that gives the best
results is the family of Generalized Gaussian distributions with
shape parameter 0.75. The results are within about 0.2% of fully
trained tables for the Zero peg variant, and within 0.1% for the
One peg variant.

The forward updates are optionally (controlled by a macro)
model-based, i.e. restricted to only convey probabilities from the
codebook. Backward updates can also optionally (controlled by
another macro) be model-based, but this is turned off by default.
Currently model-based forward updates work about the same as
unconstrained updates, but there is a drop in performance when
backward updates are model-based.

The model-based approach also allows the probabilities for the key
frames to be adjusted from the defaults based on the base_qindex of
the frame. Currently the adjustment function is a placeholder that
adjusts the probabilities of the EOB and Zero nodes away from the
nominal ones at the higher-quality (lower qindex) and lower-quality
(higher qindex) ends of the range. The rest of the probabilities are
then derived from the model, based on the adjusted probability of zero.

Change-Id: Iae050f3cbcc6d8b3f204e8dc395ae47b3b2192c9
2013-03-25 23:43:38 -07:00
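A hedged sketch of the codebook idea: a one-byte model index expands into a full set of node probabilities, with the table itself generated offline from the fitted Generalized Gaussian (shape 0.75). The node count, names, and table contents here are placeholders:

    #include <stdint.h>

    typedef uint8_t vp9_prob;

    #define MODEL_NODES 8   /* assumed number of modeled tree nodes */

    /* Row i holds the node probabilities implied by peg-node
       probability i; generated offline from the fitted distribution
       (contents elided in this sketch). */
    static vp9_prob model_codebook[256][MODEL_NODES];

    /* Expand the one-byte model index into a full probability set. */
    static void get_model_probs(int index, vp9_prob probs[MODEL_NODES]) {
      for (int n = 0; n < MODEL_NODES; ++n)
        probs[n] = model_codebook[index][n];
    }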
Deb Mukherjee
a28139c849 Continued experiment with nonzero count
Adds probability updates for extra bits for the nzcs, code for
getting nzc stats, plus some minor cleanups and fixes.

Change-Id: If2814e7f04fb52f5025ad9f400f3e6c50a00b543
2013-03-08 16:37:08 -08:00
Deb Mukherjee
eb6ef2417f Coding non-zero count rather than EOB for coeffs
This patch revamps the entropy coding of coefficients to code first
a non-zero count per coded block and correspondingly remove the EOB
token from the token set.

STATUS:
Main encode/decode code achieving encode/decode sync - done.
Forward and backward probability updates to the nzcs - done.
Rd costing updates for nzcs - done.
Note: The dynamic programming approach used in trellis quantization
is not exactly compatible with nzcs. A suboptimal approach has been
used instead, where branch costs are updated to account for changes
in the nzcs.

TODO:
Training the default probs/counts for nzcs

Change-Id: I951bc1e22f47885077a7453a09b0493daa77883d
2013-03-07 07:20:30 -08:00
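A minimal sketch of the per-block value being coded in place of the EOB token, assuming the usual int16_t coefficient layout; the signature is illustrative:

    #include <stdint.h>

    /* The nzc that replaces the EOB token: the decoder learns up front
       how many nonzero coefficients to expect in the block. */
    static int count_nonzero(const int16_t *coeffs, int num_coeffs) {
      int nzc = 0;
      for (int i = 0; i < num_coeffs; ++i)
        nzc += (coeffs[i] != 0);
      return nzc;
    }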
Ronald S. Bultje
111ca42133 Make superblocks independent of macroblock code and data.
Split macroblock and superblock tokenization and detokenization
functions and coefficient-related data structs so that the bitstream
layout and related code of superblock coefficients look less like
a hack to fit macroblocks in superblocks.

In addition, unify chroma transform size selection with the luma
transform size (i.e. always use the same size, as long as it fits
the predictor);
in practice, this means 32x32 and 64x64 superblocks using the 16x16 luma
transform will now use the 16x16 (instead of the 8x8) chroma transform,
and 64x64 superblocks using the 32x32 luma transform will now use the
32x32 (instead of the 16x16) chroma transform.

Lastly, add a trellis optimize function for 32x32 transform blocks.

HD gains about 0.3%, STDHD about 0.15% and derf about 0.1%. There are
a few negative points here and there that I might want to analyze
a little closer.

Change-Id: Ibad7c3ddfe1acfc52771dfc27c03e9783e054430
2013-03-04 16:34:36 -08:00
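A hedged sketch of the unified chroma rule: chroma reuses the luma transform size, clamped to the largest transform that fits the half-sized chroma block. The enum and function names are assumptions:

    typedef enum { TX_4X4, TX_8X8, TX_16X16, TX_32X32 } TX_SIZE;

    /* uv_max_tx is the largest transform fitting the (half-sized)
       chroma block: TX_32X32 for a 64x64 superblock, TX_16X16 for a
       32x32 superblock, TX_8X8 for a 16x16 macroblock. */
    static TX_SIZE get_uv_tx_size(TX_SIZE luma_tx, TX_SIZE uv_max_tx) {
      return luma_tx < uv_max_tx ? luma_tx : uv_max_tx;
    }

With this rule, a 64x64 superblock using the 32x32 luma transform gets min(32x32, 32x32) = the 32x32 chroma transform, while a 16x16 macroblock using the 16x16 luma transform still falls back to 8x8 chroma, matching the behavior described above.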
Ronald S. Bultje
0c9e2e9a1d Split coefficient token tables intra vs. inter.
Change-Id: I5416455f8f129ca0f450d00e48358d2012605072
2013-02-23 07:33:46 -08:00
Paul Wilkins
c17672a33d Further changes to coefficient contexts.
This patch alters the balance of context between the
coefficient bands (reflecting the position of coefficients
within a transform block) and the energy of the previous
token (or tokens) within a block.

In this case the number of coefficient bands is reduced
but more previous token energy bands are supported.

Some initial rebalancing of the default tables has been done
by running multiple derf clips at multiple data rates using
the ENTROPY_STATS macro. Further balancing needs to be
done using larger image formats, especially in regard to
the bigger transform sizes, which are not as well represented
in encodings of smaller image formats.

Change-Id: If9736e95c391e711b04aef6393d26f60f36e1f8a
2013-02-23 07:29:09 -08:00
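A minimal sketch of how the two context dimensions combine into one context index; the patch reduces the band count and increases the energy-class count, but the sizes below are placeholders, not its actual values:

    /* Assumed context-space sizes, not the patch's actual values. */
    enum { NUM_BANDS_SKETCH = 6, PREV_TOKEN_ENERGY_CLASSES = 6 };

    /* One context per (position band, previous-token energy) pair. */
    static int coef_context(int band, int prev_token_energy) {
      return band * PREV_TOKEN_ENERGY_CLASSES + prev_token_energy;
    }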
Ronald S. Bultje
3af36ea8cc Remove Y2 and Y-no-DC token types from the bitstream.
Change-Id: I7a5314daca993d46b8666ba1ec2ff3766c1e5042
2013-02-15 14:06:30 -08:00
Ronald S. Bultje
aa2effa954 Merge tx32x32 experiment.
Change-Id: I615651e4c7b09e576a341ad425cf80c393637833
2013-01-10 08:23:59 -08:00
Ronald S. Bultje
4455036cfc Merge superblocks (32x32) experiment.
Change-Id: I0df99742029834a85c4933652b0587cf5b6b2587
2013-01-08 12:54:45 -08:00
Ronald S. Bultje
4cca47b538 Use standard integer types for pixel values and coefficients.
For coefficients, use int16_t (instead of short); for pixel values in
16-bit intermediates, use uint16_t (instead of unsigned short); for all
others, use uint8_t (instead of unsigned char).

Change-Id: I3619cd9abf106c3742eccc2e2f5e89a62774f7da
2012-12-18 15:31:19 -08:00
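The substitutions in brief (the types come straight from <stdint.h>; the variable names are illustrative):

    #include <stdint.h>

    int16_t  coeff;          /* was: short          (coefficients) */
    uint16_t intermediate;   /* was: unsigned short (16-bit intermediates) */
    uint8_t  pixel;          /* was: unsigned char  (pixel values) */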
Ronald S. Bultje
5a5df19de3 New default coefficient/band probabilities.
Gives 0.5-0.6% improvement on derf and stdhd, and 1.1% on hd. The
old tables basically derive from the days when we had only the 4x4
DCT, or only the 4x4 and 8x8 DCTs.

Note that some values are filled with 128, because e.g. ADST only
ever occurs as Y-with-DC, as does 32x32; 16x16 only ever occurs
as Y-with-DC or as UV (as complement of 32x32 Y); and 8x8 Y2 only
ever has 4 coefficients max. If preferred, I can add values of
other tables in their place (e.g. use 4x4 2nd order high-frequency
probabilities for 8x8 2nd order), so that they make at least some
sense if we ever implement a larger 2nd order transform for the
8x8 DCT (etc.); please let me know.

Change-Id: I917db356f2aff8865f528eb873c56ef43aa5ce22
2012-12-12 16:23:57 -08:00
Ronald S. Bultje
885cf816eb Introduce vp9_coeff_probs/counts/stats/accum types.
Use these, instead of the 4/5-dimensional arrays, to hold statistics,
counts, accumulations and probabilities for coefficient tokens. This
commit also re-allows ENTROPY_STATS to compile.

Change-Id: If441ffac936f52a3af91d8f2922ea8a0ceabdaa5
2012-12-07 16:09:59 -08:00
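A hedged sketch of the typedef shape being introduced, with assumed dimension names and sizes; the real types in the commit may differ:

    #include <stdint.h>

    enum { BLOCK_TYPES = 4, COEF_BANDS = 8, PREV_COEF_CONTEXTS = 3,
           ENTROPY_NODES = 11, MAX_ENTROPY_TOKENS = 12 };

    typedef uint8_t vp9_prob;

    /* One named type per purpose instead of bare 4-dimensional arrays. */
    typedef vp9_prob vp9_coeff_probs[BLOCK_TYPES][COEF_BANDS]
                                    [PREV_COEF_CONTEXTS][ENTROPY_NODES];
    typedef unsigned int vp9_coeff_count[BLOCK_TYPES][COEF_BANDS]
                                        [PREV_COEF_CONTEXTS][MAX_ENTROPY_TOKENS];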
Ronald S. Bultje
c456b35fdf 32x32 transform for superblocks.
This adds Debargha's DCT/DWT hybrid and a regular 32x32 DCT, and adds
code all over the place to wrap that in the bitstream/encoder/decoder/RD.

Some implementation notes (these probably need careful review):
- token range is extended by 1 bit, since the value range out of this
  transform is [-16384,16383].
- the coefficients coming out of the FDCT are manually scaled back by
  1 bit, or else they won't fit in int16_t (they are 17 bits). Because
  of this, the RD error scoring does not right-shift the MSE score by
  two (unlike for 4x4/8x8/16x16).
- to compensate for this loss in precision, the quantizer is also
  halved. This is currently a little hacky (see the sketch after
  this entry).
- FDCT and IDCT are double-only right now; a fixed-point
  implementation is needed.
- There are no default probabilities for the 32x32 transform yet; I'm
  simply using the 16x16 luma ones. A future commit will add newly
  generated probabilities for all transforms.
- No ADST version. I don't think we'll add one for this level; if an
  ADST is desired, transform-size selection can scale back to 16x16
  or lower, and use an ADST at that level.

Additional notes specific to Debargha's DWT/DCT hybrid:
- coefficient scale is different for the top/left 16x16 (DCT-over-DWT)
  block than for the rest (DWT pixel differences) of the block. Therefore,
  RD error scoring isn't easily scalable between coefficient and pixel
  domain. Thus, unfortunately, we need to compute the RD distortion in
  the pixel domain until we figure out how to scale these appropriately.

Change-Id: I00386f20f35d7fabb19aba94c8162f8aee64ef2b
2012-12-07 14:45:05 -08:00
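A small arithmetic sketch of the scaling notes above (function names are illustrative): the 17-bit FDCT output is halved so it fits int16_t, and the quantizer is halved to compensate:

    #include <stdint.h>

    /* FDCT output is up to 17 bits; drop one bit to fit int16_t. */
    static int16_t scale_fdct_coeff(int32_t c) {
      return (int16_t)(c / 2);
    }

    /* Halve the quantizer so quantized levels stay comparable to the
       other transform sizes despite the halved coefficients. */
    static int scaled_quantizer(int q) {
      return q / 2;
    }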
John Koleszar
fcccbcbb39 Add vp9_ prefix to all vp9 files
Support for gyp, which doesn't allow multiple objects in the same
static library to have the same basename.

Change-Id: Ib947eefbaf68f8b177a796d23f875ccdfa6bc9dc
2012-11-27 14:12:30 -08:00