Commit Graph

320 Commits

Dmitry Kovalev
24f18e1c34 Renaming vp9_token_struct to vp9_token and removing previous typedef.
Change-Id: If69c3d795f87af5cc7bfdfe70ef733c41b4d55c8
2013-04-11 13:01:52 -07:00
Ronald S. Bultje
4eb537c0e6 A few more cases where sb_type was used arithmetically.
With these fixed, the codec produces identical results regardless of
what literal values are used for the enum members in BLOCK_SIZE_*.

Change-Id: I26db8e08019b58ba432af1f0950ebe6b0eb4ad8c
2013-04-10 18:04:57 -07:00
Ronald S. Bultje
8fb5be48a6 Make usage of sb_type independent of literal values.
Change-Id: I0d12f9ef9d960df0172a1377f8e5236eb6d90492
2013-04-10 17:38:57 -07:00
Dmitry Kovalev
d1cff2deb1 Code cleanup in bitstream code.
Lower case variable names, less code.

Change-Id: I1abc8f592ad2343ab5c76fe2d16262741a4a894a
2013-04-08 19:07:29 -07:00
Dmitry Kovalev
dca8ad178c Renaming sb32_coded and sb64_coded fields.
Renaming sb32_coded to prob_sb32_coded and sb64_coded to prob_sb64_coded.

Change-Id: I6de5cad00a57c3e066d53467f8c38cb6073dce11
2013-04-02 18:21:55 -07:00
Deb Mukherjee
e3955007df Merge "Framework changes in nzc to allow more flexibility" into experimental 2013-03-29 15:57:27 -07:00
Deb Mukherjee
fe9b5143ba Framework changes in nzc to allow more flexibility
The patch adds the flexibility to use standard EOB based coding
on smaller block sizes and nzc based coding on larger block sizes.
The tx-sizes that use nzc based coding and those that use EOB based
coding are controlled by a function get_nzc_used().
By default, this function uses nzc based coding for 16x16 and 32x32
transform blocks, which seems to bridge the performance gap
substantially.
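
For illustration, a minimal sketch of what such a selection function could
look like (hypothetical; the actual get_nzc_used() in the patch may differ
in signature and constants):

    /* Hypothetical sketch: 0 = 4x4, 1 = 8x8, 2 = 16x16, 3 = 32x32. */
    static int get_nzc_used(int tx_size) {
      return tx_size >= 2;   /* nzc based coding for 16x16 and 32x32 only */
    }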

All sets are now lower by 0.5% to 0.7%, as opposed to ~1.8% before.

Change-Id: I06abed3df57b52d241ea1f51b0d571c71e38fd0b
2013-03-28 09:33:50 -07:00
Ronald S. Bultje
35dc9f5546 Save nzcstats.
Change-Id: I4a3a9eb9f9d17218a0f0d7e148123d34dae879c2
2013-03-27 09:44:47 -07:00
Ronald S. Bultje
790fb13215 Use above/left (instead of previous in scan-order) as token context.
Pearson correlation for above or left is significantly higher than for
previous-in-scan-order (absolute values depend on position in scan, but
in general, we gain about 0.1-0.2 by using either above or left; using
both basically just makes this even better). For eob branch skipping,
we continue to use the previous token in scan order.
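
A rough sketch of the kind of context derivation described above
(illustrative only; the names, the token_cache layout and the exact
combination rule are assumptions, not the patch's code):

    /* Combine the token "energy" of the above and left neighbours of the
     * current coefficient position; previously only the single
     * previous-in-scan-order token was used. */
    static int get_coef_context(const unsigned char *token_cache,
                                int above_pos, int left_pos) {
      return (1 + token_cache[above_pos] + token_cache[left_pos]) >> 1;
    }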

This helps about 0.9% on derf after re-training on a limited data set.
Full re-training and results on larger-resolution clips are pending.

Note that this commit breaks trellis, so we can probably get further
gains out of it by fixing trellis at some later point.

Change-Id: Iead68e296fc3a105cca746b5e3da9555d6010cfe
2013-03-26 16:46:09 -07:00
John Koleszar
441e2eab1b Add an in-loop deringing experiment
Adds a per-frame, strength-adjustable, in-loop deringing filter. Uses
the existing vp9_post_proc_down_and_across 5-tap thresholded blur
code, with a brute force search for the threshold.
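
A rough sketch of the brute-force threshold search described above
(everything here, including the filter callback, is illustrative and not
the patch's actual code):

    #include <stdint.h>

    /* Try each candidate threshold, filter into a scratch buffer, and keep
     * the level with the lowest sum of squared error against the source. */
    static int pick_dering_level(const uint8_t *src, const uint8_t *recon,
                                 uint8_t *scratch, int n,
                                 void (*blur)(uint8_t *dst, const uint8_t *in,
                                              int n, int level)) {
      int best_level = 0;
      uint64_t best_sse = UINT64_MAX;
      for (int level = 0; level < 16; ++level) {
        uint64_t sse = 0;
        blur(scratch, recon, n, level);
        for (int i = 0; i < n; ++i) {
          const int d = scratch[i] - src[i];
          sse += (uint64_t)(d * d);
        }
        if (sse < best_sse) {
          best_sse = sse;
          best_level = level;
        }
      }
      return best_level;
    }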

Results almost strictly positive on the YT HD set, either having no
effect or helping PSNR in the range of 1-3% (overall average 0.8%).
Results more mixed for the CIF set, (-0.5 min, 1.4 max, 0.1 avg).
This has an almost strictly negative impact on SSIM, so examining a
different filter or a more balanced search heuristic is in order.

Other test set results pending.

Change-Id: I5ca6ee8fe292dfa3f2eab7f65332423fa1710b58
2013-03-26 08:23:24 -07:00
Deb Mukherjee
49dcc71493 Merge "Modeling default coef probs with distribution" into experimental 2013-03-26 07:13:13 -07:00
Deb Mukherjee
fd18d5dffe Modeling default coef probs with distribution
Replaces the default tables for single coefficient magnitudes with
those obtained from an appropriate distribution. The EOB node
is left unchanged. The model is represented as a 256-size codebook
where the index corresponds to the probability of the Zero or the
One node. Two variations are implemented corresponding to whether
the Zero node or the One node is used as the peg. The main advantage
is that the default prob tables will become considerably smaller and
more manageable. Besides, there is substantially less risk of
over-fitting to a training set.

Various distributions are tried and the one that gives the best
results is the family of Generalized Gaussian distributions with
shape parameter 0.75. The results are within about 0.2% of fully
trained tables for the Zero peg variant, and within 0.1% for the
One peg variant.
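
For reference, a standard parameterization of the generalized Gaussian
density family referred to above (the exact form used for the fitting is
not spelled out here) is

    f(x) = \frac{\beta}{2\alpha\,\Gamma(1/\beta)} \exp\left(-(|x|/\alpha)^{\beta}\right), \quad \beta = 0.75,

where \alpha sets the scale and \beta the shape.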

The forward updates are optionally (controlled by a macro)
model-based, i.e. restricted to only convey probabilities from the
codebook. Backward updates can also be optionally (controlled by
another macro) model-based, but this is turned off by default. Currently
model-based forward updates work about the same as unconstrained
updates, but there is a drop in performance with backward updates
being model-based.

The model based approach also allows the probabilities for the key
frames to be adjusted from the defaults based on the base_qindex of
the frame. Currently the adjustment function is a placeholder that
adjusts the prob of EOB and Zero node from the nominal one at higher
quality (lower qindex) or lower quality (higher qindex) ends of the
range. The rest of the probabilities are then derived based on the
model from the adjusted prob of zero.

Change-Id: Iae050f3cbcc6d8b3f204e8dc395ae47b3b2192c9
2013-03-25 23:43:38 -07:00
Dmitry Kovalev
56f3a2c663 Code cleanup: lower case variable names.
Renaming Width to width, Height to height and Version to version in
several structs and function signatures.

Change-Id: I084c3f7e747cb2ce3345aff27a3dff9b13a87543
2013-03-20 16:41:30 -07:00
John Koleszar
8a3f55f2d4 Replace scaling byte with explicit display size
If the intended display size is different than the size the frame is
coded at, then send that size explicitly in the bitstream. Adds a new
bit to the frame header to indicate whether the extra size fields
are present.

Change-Id: I525c66f22d207efaf1e5f903c6a2a91b80245854
2013-03-18 12:02:20 -07:00
John Koleszar
9b4095c537 Fix vp9_tree_probs_from_distribution with CONFIG_CODE_NONZEROCOUNT
The automatic merge result was incomplete.

Change-Id: I8976318bfc346d867660a013a302c80edb25fc29
2013-03-11 11:03:36 -07:00
John Koleszar
e6257342b1 Merge "Optimize vp9_tree_probs_from_distribution" into experimental 2013-03-11 09:32:11 -07:00
John Koleszar
bd84685f78 Optimize vp9_tree_probs_from_distribution
The previous implementation visited each node in the tree multiple times
because it used each symbol's encoding to revisit the branches taken and
increment its count. Instead, we can traverse the tree depth first and
calculate the probabilities and branch counts as we walk back up. The
complexity goes from somewhere between O(nlogn) and O(n^2) (depending on
how balanced the tree is) to O(n).
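
A sketch of the depth-first idea (simplified; the tree layout and the
probability helper below are illustrative, not the exact libvpx
definitions):

    #include <stdint.h>

    static uint8_t get_binary_prob(unsigned n0, unsigned n1) {
      const unsigned den = n0 + n1;
      if (den == 0) return 128;
      /* probability of taking the "0" branch, clipped to [1, 255] */
      const unsigned p = (255u * n0 + (den >> 1)) / den;
      return (uint8_t)(p < 1 ? 1 : p);
    }

    /* tree[i] and tree[i+1] are the two children of a node; a value <= 0
     * denotes a leaf whose token index is -tree[x], a positive value is
     * the index of another node pair. Walk the tree once, summing counts
     * bottom-up and filling in each branch probability on the way back. */
    static unsigned walk(const int8_t *tree, int i, const unsigned *num_events,
                         uint8_t *probs) {
      const unsigned left = tree[i] <= 0
          ? num_events[-tree[i]]
          : walk(tree, tree[i], num_events, probs);
      const unsigned right = tree[i + 1] <= 0
          ? num_events[-tree[i + 1]]
          : walk(tree, tree[i + 1], num_events, probs);
      probs[i >> 1] = get_binary_prob(left, right);
      return left + right;   /* count for the branch leading to this node */
    }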

Only tested one clip (256kbps, CIF), saw 13% decoding perf improvement.

Note that this optimization should port trivially to VP8 as well. In VP8,
the decoder doesn't use this function, but it does routinely show up
on the profile for realtime encoding.

Change-Id: I4f2848e4f41dc9a7694f73f3e75034bce08d1b12
2013-03-10 13:39:30 -07:00
Deb Mukherjee
a28139c849 Continued experiment with nonzero count
Adds probability updates for extra bits for the nzcs, code for
getting nzc stats, plus some minor cleanups and fixes.

Change-Id: If2814e7f04fb52f5025ad9f400f3e6c50a00b543
2013-03-08 16:37:08 -08:00
Deb Mukherjee
eb6ef2417f Coding non-zero count rather than EOB for coeffs
This patch revamps the entropy coding of coefficients to code first
a non-zero count per coded block and correspondingly remove the EOB
token from the token set.

STATUS:
Main encode/decode code achieving encode/decode sync - done.
Forward and backward probability updates to the nzcs - done.
Rd costing updates for nzcs - done.
Note: The dynamic programming approach used in trellis quantization
is not exactly compatible with nzcs. A suboptimal approach has been
used instead where branch costs are updated to account for changes
in the nzcs.

TODO:
Training the default probs/counts for nzcs

Change-Id: I951bc1e22f47885077a7453a09b0493daa77883d
2013-03-07 07:20:30 -08:00
Ronald S. Bultje
4209bba462 Merge changes Ifacbf5a0,Ibad7c3dd into experimental
* changes:
  vpxenc: actually report mismatch on stderr.
  Make superblocks independent of macroblock code and data.
2013-03-05 11:17:14 -08:00
Ronald S. Bultje
111ca42133 Make superblocks independent of macroblock code and data.
Split macroblock and superblock tokenization and detokenization
functions and coefficient-related data structs so that the bitstream
layout and related code of superblock coefficients looks less like it's
a hack to fit macroblocks in superblocks.

In addition, unify chroma transform size selection from luma transform
size (i.e. always use the same size, as long as it fits the predictor);
in practice, this means 32x32 and 64x64 superblocks using the 16x16 luma
transform will now use the 16x16 (instead of the 8x8) chroma transform,
and 64x64 superblocks using the 32x32 luma transform will now use the
32x32 (instead of the 16x16) chroma transform.

Lastly, add a trellis optimize function for 32x32 transform blocks.

HD gains about 0.3%, STDHD about 0.15% and derf about 0.1%. There are
a few negative points here and there that I might want to analyze
a little closer.

Change-Id: Ibad7c3ddfe1acfc52771dfc27c03e9783e054430
2013-03-04 16:34:36 -08:00
Jingning Han
5957b2b514 Support 16K sequence coding
Fixed a couple of variable/function definitions, as well as header
handling to support 16K sequence coding at high bit-rates.

The width and height are each specified by two bytes in the header.
Use an extra byte to explicitly indicate the scaling factors in
both directions, each ranging from 0 to 15.
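
A hypothetical illustration of the resulting header fields (the exact
byte and nibble ordering is an assumption, not taken from the patch):

    #include <stdint.h>

    /* Two bytes each for width and height (up to 65535), plus one byte
     * carrying the two 4-bit scaling factors (0..15 each). */
    typedef struct {
      uint16_t width;
      uint16_t height;
      uint8_t  scaling;   /* low nibble: horizontal, high nibble: vertical (assumed) */
    } frame_size_fields;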

Tested coding up to 16400x16400 dimension.

Change-Id: Ibc2225c6036620270f2c0cf5172d1760aaec10ec
2013-03-04 11:08:41 -08:00
Ronald S. Bultje
0c9e2e9a1d Split coefficient token tables intra vs. inter.
Change-Id: I5416455f8f129ca0f450d00e48358d2012605072
2013-02-23 07:33:46 -08:00
Paul Wilkins
c17672a33d Further changes to coefficient contexts.
This patch alters the balance of context between the
coefficient bands (reflecting the position of coefficients
within a transform blocks) and the energy of the previous
token (or tokens) within a block.

In this case the number of coefficient bands is reduced
but more previous token energy bands are supported.

Some initial rebalancing of the default tables has been done
by running multiple derf clips at multiple data rates using
the ENTROPY_STATS macro. Further balancing needs to be
done using larger image formats, especially in regard to
the bigger transform sizes, which are not as well represented
in encodings of smaller image formats.

Change-Id: If9736e95c391e711b04aef6393d26f60f36e1f8a
2013-02-23 07:29:09 -08:00
Yaowu Xu
441f24de3d Merge "Merge lossless experiment" into experimental 2013-02-20 12:27:26 -08:00
Yaowu Xu
d262e26cc7 Merge lossless experiment
Change-Id: I7b7b8d4fda3a23699e0c920d727f8c15d37d43aa
2013-02-20 07:54:28 -08:00
Paul Wilkins
ef01b956d8 Entropy stats output code.
Fixes to make Entropy stats code work again

Change-Id: I62e380481a4eb4c170076ac6ab36f0c2b203e914
2013-02-20 14:33:19 +00:00
Yaowu Xu
93d6b86cfd Use lossless for Q0
The commit changes the coding mode to lossless whenever the lowest
quantizer is chosen.

As expected, test results showed no difference for cif and std-hd
set where Q0 is rarely used. For yt and yt-hd set, Q0 is used for
a number of clips, where this commit helped a lot in the high end.

Average over all clips in the sets:
yt: 2.391% 1.017% 1.066%
hd: 1.937%  .764%  .787%

Change-Id: I9fa9df8646fd70cb09ffe9e4202b86b67da16765
2013-02-19 06:18:42 -08:00
Ronald S. Bultje
3af36ea8cc Remove Y2 and Y-no-DC token types from the bitstream.
Change-Id: I7a5314daca993d46b8666ba1ec2ff3766c1e5042
2013-02-15 14:06:30 -08:00
Ronald S. Bultje
48598e30b1 Remove y2dc/ac Q delta values from the bitstream.
Since there is no Y2, these values are always zero. This changes the
bitstream results slightly, hence a separate commit.

Change-Id: I2f838f184341868f35113ec77ca89da53c4644e0
2013-02-15 14:06:30 -08:00
Ronald S. Bultje
46dff5d233 Remove some Y2-related code.
Change-Id: I4f46d142c2a8d1e8a880cfac63702dcbfb999b78
2013-02-15 14:06:25 -08:00
Ronald S. Bultje
89a206ef2f Add support for tile rows.
These allow sending partial bitstream packets over the network before
encoding of a complete frame has finished, thus lowering end-to-end
latency. The tile rows are not independent.

Change-Id: I99986595cbcbff9153e2a14f49b4aa7dee4768e2
2013-02-13 12:31:00 -08:00
Yaowu Xu
f01b08c96c Merge "enable bitstream lossless support" into experimental 2013-02-13 10:26:58 -08:00
Yaowu Xu
17db5d00be enable bitstream lossless support
1. Added a bit in the frame header to indicate if a frame is encoded
in lossless mode, so the decoder does not make the decision based on Q0.
2. Minor changes to make sure that lossy coding works the same as when
the lossless experiment is not enabled.
3. Renamed function pointers for transforms to be consistent, using the
prefixes fwd_txm and inv_txm for forward and inverse respectively.

To encode in lossless mode, use "--lossless=1 --min-q=0 --max-q=0"
with vpxenc.

Change-Id: Ifae53b26d2ffbe378d707e29d96817b8a5e6c068
2013-02-13 09:24:39 -08:00
Ronald S. Bultje
f496f601fb Add tile column size limits (256 pixels min, 4096 pixels max).
This is after discussion with the hardware team. Update the unit test
to take these sizes into account. Split out some duplicate code into
a separate file so it can be shared.
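
A sketch of how the limits translate into an allowed range of column
tiles for a given frame width (hypothetical helper, assuming 64x64
superblocks and ignoring any power-of-two restriction):

    /* 256-pixel minimum tile width = 4 SB64 columns;
     * 4096-pixel maximum tile width = 64 SB64 columns. */
    static void get_tile_col_range(int frame_width,
                                   int *min_tiles, int *max_tiles) {
      const int sb64_cols = (frame_width + 63) >> 6;
      *max_tiles = sb64_cols >> 2;          /* each tile >= 4 SB64 columns  */
      if (*max_tiles < 1) *max_tiles = 1;
      *min_tiles = (sb64_cols + 63) >> 6;   /* each tile <= 64 SB64 columns */
      if (*min_tiles < 1) *min_tiles = 1;
    }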

Change-Id: I8311d11b0191d8bb37e8eb4ac962beb217e1bff5
2013-02-12 10:33:34 -08:00
Paul Wilkins
e4f949b55a Merge "Nearest / Zero Mv default entropy tweak." into experimental 2013-02-09 04:21:08 -08:00
John Koleszar
6dfc95fe63 Merge changes Icd1a2a5a,I204d17a1,I3ed92117 into experimental
* changes:
  Initial support for resolution changes on P-frames
  Avoid allocating memory when resizing frames
  Adds a test for the VP8E_SET_SCALEMODE control
2013-02-08 14:20:05 -08:00
John Koleszar
393b485627 Initial support for resolution changes on P-frames
Allows inter-frames to change resolution. Currently these are
almost equivalent to keyframes, as only intra prediction modes
are allowed, but without the other context resets that occur on
keyframes.

Change-Id: Icd1a2a5af0d9462cc792588427b0a1f5b12e40d3
2013-02-08 12:20:30 -08:00
Paul Wilkins
bbede82f24 Nearest / Zero Mv default entropy tweak.
Tweak to the default mode context to account for the fact
that, when there are no non-zero motion candidates,
Nearest is now the preferred mode for coding a 0,0
vector.

Also resolve duplicate function name and typos.

Change-Id: I76802788d46c84e3d1c771be216a537ab7b12817
2013-02-08 10:16:13 +00:00
Ronald S. Bultje
278df745d2 Fix mismatch after merge of the tiling patch.
Change-Id: I8ecc178b4d4069e721c7fec6d7631c00e4a3e5d5
2013-02-05 17:15:04 -08:00
Ronald S. Bultje
1407bdc243 [WIP] Add column-based tiling.
This patch adds column-based tiling. The idea is to make each tile
independently decodable (after reading the common frame header) and
also independently encodable (minus within-frame cost adjustments in
the RD loop) to speed up hardware & software en/decoders if they use
multi-threading. Column-based tiling has the added advantage (over
other tiling methods) that it minimizes realtime use-case latency,
since all threads can start encoding data as soon as the first SB-row
worth of data is available to the encoder.
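
For illustration, an even split of superblock columns across tiles might
look like the following (names are placeholders, and the real partitioning
in the patch may round differently):

    /* Tile t covers superblock columns [start, end). */
    static void get_tile_sb_cols(int sb_cols, int num_tiles, int t,
                                 int *start, int *end) {
      *start = (t * sb_cols) / num_tiles;
      *end = ((t + 1) * sb_cols) / num_tiles;
    }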

There is some test code that does random tile ordering in the decoder,
to confirm that each tile is indeed independently decodable from other
tiles in the same frame. At tile edges, all contexts assume default
values (i.e. 0, 0 motion vector, no coefficients, DC intra4x4 mode),
and motion vector search and ordering do not cross tiles in the same
frame.

Tile independence is not maintained between frames ATM, i.e. tile 0 of
frame 1 is free to use motion vectors that point into any tile of frame
0. We support 1 (i.e. no tiling), 2 or 4 column-tiles.

The loopfilter crosses tile boundaries. I discussed this briefly with Aki
and he says that's OK. An in-loop loopfilter would need to do some sync
between tile threads, but that shouldn't be a big issue.

Results: with tiling disabled, we go up slightly because of improved edge
use in the intra4x4 prediction. With 2 tiles, we lose about ~1% on derf,
~0.35% on HD and ~0.55% on STD/HD. With 4 tiles, we lose another ~1.5%
on derf, ~0.77% on HD and ~0.85% on STD/HD. Most of this loss is
concentrated in the low-bitrate end of clips, and most of it is because
of the loss of edges at tile boundaries and the resulting loss of intra
predictors.

TODO:
- more tiles (perhaps allow row-based tiling also, and max. 8 tiles)?
- maybe optionally (for EC purposes), motion vectors themselves
  should not cross tile edges, or we should emulate such borders as
  if they were off-frame, to limit error propagation to within one
  tile only. This doesn't have to be the default behaviour but could
  be an optional bitstream flag.

Change-Id: I5951c3a0742a767b20bc9fb5af685d9892c2c96f
2013-02-05 15:43:03 -08:00
Deb Mukherjee
a53be60904 Merge "Adding a frame parallel decoding mode" into experimental 2013-01-30 12:03:45 -08:00
Ronald S. Bultje
3a4b18bc67 don't code the branch for the predicted seg_id if that flag is false.
Change-Id: Icb6e21dc0c2d9918faa33c8bf70943660df7ad88
2013-01-30 09:30:46 -08:00
Paul Wilkins
0ff9b033b0 Segment Skip Flag
First step in simplifying the segment mode and
segment EOB flags into a simpler segment skip
flag that implies 0,0 mv and EOB at position 0.

Change-Id: Ib750cac31a7a02dc21082580498efd9f7d8d72a5
2013-01-28 17:28:04 +00:00
Deb Mukherjee
dfd89f2eab Adding a frame parallel decoding mode
Adds a flag to disable features that would inhibit frame parallel
decoding. This includes backward adaptation and MV sorting based
on search in ref frame buffer.

Also includes some minor clean-ups.

Change-Id: I434846717a47b7bcb244b37ea670c5cdf776f14d
2013-01-25 17:16:19 -08:00
Deb Mukherjee
01cafaab1d Adds an error-resilient mode with test
Adds an error-resilient mode in which decoding can continue
even when there are errors (due to network losses)
in a prior frame. Specifically, backward updates are turned off
and probabilities of various symbols are reset to defaults at
the beginning of each frame. Further, the last frame's mvs are
not used for the mv reference list, and the sorting of the
initial list based on search on previous frames is turned off
as well.
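
On the encoder side this presumably surfaces through the existing
error-resilient configuration field; a minimal usage sketch under that
assumption (the mapping of the new mode to g_error_resilient is not
confirmed by this commit message):

    #include "vpx/vpx_encoder.h"
    #include "vpx/vp8cx.h"

    /* Sketch: request error-resilient encoding so that backward adaptation
     * and related inter-frame dependencies are disabled. */
    static int init_resilient_encoder(vpx_codec_ctx_t *codec, int w, int h) {
      vpx_codec_enc_cfg_t cfg;
      if (vpx_codec_enc_config_default(vpx_codec_vp9_cx(), &cfg, 0)) return -1;
      cfg.g_w = w;
      cfg.g_h = h;
      cfg.g_error_resilient = 1;
      return vpx_codec_enc_init(codec, vpx_codec_vp9_cx(), &cfg, 0) ? -1 : 0;
    }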

Also adds a test where an arbitrary set of frames are skipped
from decoding to simulate errors. The test verifies (1) that if
the error frames are droppable - i.e. frame buffer updates have
been turned off - there are no mismatch errors for the remaining
frames after the error frames; and (2) if the error frames are
non-droppable, there are not only no decoding errors but the mismatch
PSNR between the decoder's version of the post-error frames and the
encoder's version is at least 20 dB.

Change-Id: Ie6e2bcd436b1e8643270356d3a930e8989ff52a5
2013-01-23 21:56:15 -08:00
John Koleszar
26bd81b955 Preserve the previous golden frame on golden updates
This commit restores the quality lost when the buffer-to-buffer copy
logic was removed. Note that this is specific to the current use of
golden frames and will need rework when RTC functionality is added.

Change-Id: I7324a75acd96eafd9e0f9b8633d782e390d5dc21
2013-01-16 15:57:02 -08:00
John Koleszar
4b65837bc6 Generalize and increase frame coding contexts
Previously there were two frame coding contexts tracked, one for normal
frames and one for alt-ref frames. Generalize this by signalling the
context to use in the bitstream, rather than tying it to the alt-ref
refresh bit. Also increase the number of contexts available to 4, which
may be useful for temporal scalability.

Change-Id: I7b66daaddd55c535c20cd16713541fab182b1662
2013-01-16 14:07:27 -08:00
John Koleszar
da832a80e4 Start to anonymize reference frames
Remove lst_fb_idx, gld_fb_idx, alt_fb_idx, refresh_last_frame,
refresh_golden_frame, refresh_alt_ref_frame from common. Gold/Alt are
encode side conventions. From the decoder's perspective, we want to be
dealing with numbered references.

Updates to active_ref 2 signal mode context switches, a vestige of
refresh_alt_ref_frame. This needs some clean-up to make sense with
increased numbers of reference frames, as well as reimplementing the
swapping of alt/golden which was previously done using the
buffer-to-buffer copy mechanism removed in an earlier commit.

Change-Id: I7334445158b7666f9295d2a2dd22aa03f4485f58
2013-01-16 14:06:23 -08:00
John Koleszar
b8e027989f Remove buffer-to-buffer copy logic
This is the first in a series of commits to add additional reference
frames to the codec. Each frame will be able to update any of the
available references, but copying between references is not
supported.

Change-Id: I5945b5ce6cc3582c495102b4e7eed4f08c44d5a1
2013-01-15 17:36:39 -08:00
Ronald S. Bultje
c9071601a2 Remove compound intra-intra experiment.
This experiment gives little gain and adds a relatively large amount
of code complexity (and it hinders other experiments), so let's get
rid of it.

Change-Id: Id25e79a137a1b8a01138aa27a1fa0ba4a2df274a
2013-01-14 15:47:25 -08:00
Ronald S. Bultje
aa2effa954 Merge tx32x32 experiment.
Change-Id: I615651e4c7b09e576a341ad425cf80c393637833
2013-01-10 08:23:59 -08:00
Ronald S. Bultje
6884a83f06 Merge superblocks64 experiment.
Change-Id: If6c88752dffdb566f8d4322f135145270716fb8e
2013-01-09 17:21:40 -08:00
Adrian Grange
7d6b5425d7 New prediction filter
This patch removes the old pred-filter experiment and replaces it
with one that is implemented using the switchable filter framework.

If the pred-filter experiment is enabled, three interpolation
filters are tested during mode selection; the standard 8-tap
interpolation filter, a sharp 8-tap filter and a (new) 8-tap
smoothing filter.

The 6-tap filter code has been preserved for now and if the
enable-6tap experiment is enabled (in addition to the pred-filter
experiment) the original 6-tap filter replaces the new 8-tap smooth
filter in the switchable mode.

The new experiment applies the prediction filter in cases of a
fractional-pel motion vector. Future patches will apply the filter
where the mv is pel-aligned and also to intra predicted blocks.

Change-Id: I08e8cba978f2bbf3019f8413f376b8e2cd85eba4
2013-01-09 12:00:39 -08:00
Ronald S. Bultje
4455036cfc Merge superblocks (32x32) experiment.
Change-Id: I0df99742029834a85c4933652b0587cf5b6b2587
2013-01-08 12:54:45 -08:00
Ronald S. Bultje
c3941665e9 64x64 blocksize support.
3.2% gains on std/hd, 1.0% gains on hd.

Change-Id: I481d5df23d8a4fc650a5bcba956554490b2bd200
2013-01-05 18:20:25 -08:00
Paul Wilkins
313d1100af Added update-able mv-ref probabilities.
Part of NEW_MVREF experiment.
Added update-able probabilities.

Change-Id: I5a4fcf4aaed1d0d1dac980f69d535639a3d59401
2013-01-02 14:22:11 +00:00
John Koleszar
05ec800ea4 Use boolcoder API instead of inlining
This patch changes the token packing to call the bool encoder API rather
than inlining it into the token packing function, and similarly removes
a special get_signed case from the detokenizer. This allows easier
experimentation with changing the bool coder as a whole.

Change-Id: I52c3625bbe4960b68cfb873b0e39ade0c82f9e91
2012-12-19 12:52:41 -08:00
Ronald S. Bultje
4d0ec7aacd Consistently use get_prob(), clip_prob() and newly added clip_pixel().
Add a function clip_pixel() to clip a pixel value to the [0,255] range
of allowed values, and use this where-ever appropriate (e.g. prediction,
reconstruction). Likewise, consistently use the recently added function
clip_prob(), which calculates a binary probability in the [1,255] range.
If possible, try to use get_prob() or its sister get_binary_prob() to
calculate binary probabilities, for consistency.

Since in some places, this means that binary probability calculations
are changed (we use {255,256}*count0/total in a range of places,
and all of these are now changed to use (256*count0+(total>>1))/total),
this changes the encoding result, so this patch warrants some extensive
testing.
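
For reference, the helpers behave roughly as sketched below (close to,
but not necessarily identical to, the actual definitions; the vp9_prob
typedef is restated here to keep the sketch self-contained):

    #include <stdint.h>

    typedef uint8_t vp9_prob;

    static vp9_prob clip_prob(int p) {              /* binary prob in [1, 255] */
      return (p > 255) ? 255 : (p < 1) ? 1 : (vp9_prob)p;
    }

    static vp9_prob get_prob(int num, int den) {    /* rounded 256*num/den */
      return (den == 0) ? 128 : clip_prob((256 * num + (den >> 1)) / den);
    }

    static uint8_t clip_pixel(int val) {            /* pixel value in [0, 255] */
      return (val > 255) ? 255 : (val < 0) ? 0 : (uint8_t)val;
    }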

Change-Id: Ibeeff8d886496839b8e0c0ace9ccc552351f7628
2012-12-12 10:01:19 -08:00
Paul Wilkins
d124465975 Further changes to mv reference code.
Some further changes and refactoring of mv
reference code and selection of center point for
searches. Mainly relates to not passing so many
different local copies of things around.

Some place holder comments.

Change-Id: I309f10ffe9a9cde7663e7eae19eb594371c8d055
2012-12-10 17:31:51 +00:00
Ronald S. Bultje
885cf816eb Introduce vp9_coeff_probs/counts/stats/accum types.
Use these, instead of the 4/5-dimensional arrays, to hold statistics,
counts, accumulations and probabilities for coefficient tokens. This
commit also re-allows ENTROPY_STATS to compile.

Change-Id: If441ffac936f52a3af91d8f2922ea8a0ceabdaa5
2012-12-07 16:09:59 -08:00
Ronald S. Bultje
c456b35fdf 32x32 transform for superblocks.
This adds Debargha's DCT/DWT hybrid and a regular 32x32 DCT, and adds
code all over the place to wrap that in the bitstream/encoder/decoder/RD.

Some implementation notes (these probably need careful review):
- token range is extended by 1 bit, since the value range out of this
  transform is [-16384,16383].
- the coefficients coming out of the FDCT are manually scaled back by
  1 bit, or else they won't fit in int16_t (they are 17 bits). Because
  of this, the RD error scoring does not right-shift the MSE score by
  two (unlike for 4x4/8x8/16x16).
- to compensate for this loss in precision, the quantizer is halved
  also. This is currently a little hacky.
- FDCT and IDCT is double-only right now. Needs a fixed-point impl.
- There are no default probabilities for the 32x32 transform yet; I'm
  simply using the 16x16 luma ones. A future commit will add newly
  generated probabilities for all transforms.
- No ADST version. I don't think we'll add one for this level; if an
  ADST is desired, transform-size selection can scale back to 16x16
  or lower, and use an ADST at that level.

Additional notes specific to Debargha's DWT/DCT hybrid:
- coefficient scale is different for the top/left 16x16 (DCT-over-DWT)
  block than for the rest (DWT pixel differences) of the block. Therefore,
  RD error scoring isn't easily scalable between coefficient and pixel
  domain. Thus, unfortunately, we need to compute the RD distortion in
  the pixel domain until we figure out how to scale these appropriately.

Change-Id: I00386f20f35d7fabb19aba94c8162f8aee64ef2b
2012-12-07 14:45:05 -08:00
Jim Bankoski
9f9370425b warnings in various experiments
Change-Id: Ib5106d4772450f8026f823dd743f162ab833b1d6
2012-11-30 07:31:37 -08:00
Jim Bankoski
705220ee71 unused var removed
Change-Id: I9d0efdff0c79ea4bdd660098106b64776bdd4483
2012-11-29 08:50:20 -08:00
Jim Bankoski
245fba74b7 signed mismatch mvrefcount
Change-Id: Ie34820c1b6eaba9cf9316415a46f48af79c41646
2012-11-29 08:13:18 -08:00
Jim Bankoski
abd74ed594 warning error missing void
Change-Id: I914bcc669297d3414261486bf1bfb716c2ecc804
2012-11-29 07:47:50 -08:00
Deb Mukherjee
0742b1e4ae Fixing 8x8/4x4 ADST for intra modes with tx select
This patch allows correct use of the 8x8 and 4x4 ADST for Intra
16x16 and Intra 8x8 modes when the transform size selected
is smaller than the prediction block. Also includes some cleanups
and refactoring.

Rebase.

Change-Id: Ie3257bdf07bdb9c6e9476915e3a80183c8fa005a
2012-11-28 16:21:12 -08:00
Jim Bankoski
c67873989f fixed includes to be fully specified
Change-Id: Ia1cce221f8511561b9cbd8edb7726fbc286ff243
2012-11-28 10:53:17 -08:00
John Koleszar
a1f15814be Clamp decoded feature data
Not all segment feature data elements are full-range powers of two, so
there are values that can be encoded that are invalid. Add a new function
to clamp values to the maximum allowed.
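
A minimal sketch of the clamping described above (the helper name and
the way the per-feature maximum is obtained are placeholders):

    /* Clamp a decoded, signed feature value into its legal range. */
    static int clamp_feature_data(int value, int max_abs) {
      return value < -max_abs ? -max_abs : value > max_abs ? max_abs : value;
    }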

Change-Id: Ie47cb80ef2d54292e6b8db9f699c57214a915bc4
2012-11-27 16:38:31 -08:00
John Koleszar
fcccbcbb39 Add vp9_ prefix to all vp9 files
Support for gyp which doesn't support multiple objects in the same
static library having the same basename.

Change-Id: Ib947eefbaf68f8b177a796d23f875ccdfa6bc9dc
2012-11-27 14:12:30 -08:00