The previous implementation visited each node in the tree multiple times
because it used each symbol's encoding to revisit the branches taken and
increment its count. Instead, we can traverse the tree depth first and
calculate the probabilities and branch counts as we walk back up. The
complexity goes from somewhere between O(n log n) and O(n^2) (depending on
how balanced the tree is) to O(n).
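
A minimal sketch of the single-pass idea, using a hypothetical node layout
rather than the codec's actual tree structures: counts and probabilities for
internal nodes are computed while unwinding the recursion, so each node is
visited exactly once.

#include <stdint.h>
#include <stddef.h>

typedef struct tree_node {
  struct tree_node *left, *right;  /* NULL for leaves */
  uint32_t count;                  /* leaves: symbol count; branches: filled below */
  uint8_t prob;                    /* P(left branch) on an 8-bit 1..255 scale */
} tree_node;

/* Single depth-first pass: returns the total count under `node` and fills
 * in branch counts and probabilities on the way back up. */
static uint32_t tally_counts(tree_node *node) {
  uint32_t left, right, p;
  if (node->left == NULL)                        /* leaf */
    return node->count;
  left = tally_counts(node->left);
  right = tally_counts(node->right);
  node->count = left + right;                    /* branch count */
  if (node->count == 0)
    p = 128;
  else
    p = (uint32_t)((255 * (uint64_t)left + node->count / 2) / node->count);
  node->prob = (uint8_t)(p < 1 ? 1 : (p > 255 ? 255 : p));
  return node->count;
}
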
Only tested one clip (256 kbps, CIF); saw a 13% decoding performance improvement.
Note that this optimization should port trivially to VP8 as well. In VP8,
the decoder doesn't use this function, but it does routinely show up
on the profile for realtime encoding.
Change-Id: I4f2848e4f41dc9a7694f73f3e75034bce08d1b12
Adds probability updates for extra bits for the nzcs, code for
getting nzc stats, plus some minor cleanups and fixes.
Change-Id: If2814e7f04fb52f5025ad9f400f3e6c50a00b543
Increase the motion search range by 4x. Change MV_CLASS tree of the
entropy coding to allow two additional mv classes to cover the
extended motion vector limit. The codec determines the effective
motion search range conditioned on the actual frame dimension.
It provides coding gains:
stdhd 0.39%
yt 0.56%
hd 0.47%
The major coding gains are concentrated in sequences with intense motion
activity, e.g., ped_1080p gains 7% at high bit-rates, and 3% on average.
TODO: Need to further tune the rate control and motion search units.
Change-Id: Ib842540a6796fbee5a797809433ef6a477c6d78d
This also changes the RD search to take account of the correct block
index when searching (this is required for ADST positioning to work
correctly in combination with tx_select).
Change-Id: Ie50d05b3a024a64ecd0b376887aa38ac5f7b6af6
This patch revamps the entropy coding of coefficients to code first
a non-zero count per coded block and correspondingly remove the EOB
token from the token set.
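
A rough sketch of the scheme being described; the helper names and callback
shape here are hypothetical, not the experiment's actual API. The encoder
writes the block's non-zero count up front and then emits value tokens only
until that count is exhausted, so no EOB token is needed.

#include <stdint.h>

/* Hypothetical illustration of "nzc first, no EOB" tokenization. */
static void tokenize_block_with_nzc(const int16_t *qcoeff, int num_coeffs,
                                    const int *scan,
                                    void (*write_nzc)(int nzc),
                                    void (*write_token)(int16_t coeff)) {
  int i, nzc = 0;
  /* Count non-zero quantized coefficients in scan order. */
  for (i = 0; i < num_coeffs; ++i)
    if (qcoeff[scan[i]]) ++nzc;

  write_nzc(nzc);   /* coded once per block, replacing the EOB token */

  /* Emit tokens (including intervening zeros) until the promised number of
   * non-zeros has been written; the decoder stops at the same point, so the
   * trailing zeros are implied. */
  for (i = 0; i < num_coeffs && nzc > 0; ++i) {
    const int16_t c = qcoeff[scan[i]];
    write_token(c);
    if (c) --nzc;
  }
}
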
STATUS:
Main encode/decode code achieving encode/decode sync - done.
Forward and backward probability updates to the nzcs - done.
Rd costing updates for nzcs - done.
Note: The dynamic programming approach used in trellis quantization
is not exactly compatible with nzcs. A suboptimal approach has been
used instead where branch costs are updated to account for changes
in the nzcs.
TODO:
Training the default probs/counts for nzcs
Change-Id: I951bc1e22f47885077a7453a09b0493daa77883d
Added a variant of the one shot maxQ flag
for two-pass encoding that forces a fixed Q for the
normal inter frames. Disabled by default.
Also a small adjustment to the bits per MB
estimation.
Change-Id: I87efdfb2d094fe1340ca9ddae37470d7b278c8b8
A 'superframe' is a group of frames that share the same PTS, but have a
defined decoding order. This commit adds the ability to append an index
to such a group of frames, allowing for random access to the constituent
frames. This could be useful for frame-level parallelism or partial
decoding in a multilayer scenario.
Decoding the stream serially without such an index should work as a
fallback, and VP9/TestSuperframeIndexIsOptional verifies that.
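
A rough sketch of how a decoder might locate and walk such an index. The
exact marker layout below (a trailing marker byte carrying the frame count
and bytes-per-size, mirrored at the start of the index) is an assumption of
this sketch rather than a normative description.

#include <stddef.h>
#include <stdint.h>

/* Sketch: locate the trailing index of a superframe and read the size of
 * each constituent frame.  Assumed layout:
 *   [frame 0][frame 1]...[marker][frame sizes...][marker]
 * with a marker byte of the form 110xxyyy (xx+1 bytes per size, yyy+1
 * frames).  Returns 0 and fills sizes[]/count on success, -1 if no index
 * is present (the caller then falls back to serial decoding). */
static int parse_superframe_index(const uint8_t *data, size_t data_sz,
                                  uint32_t sizes[8], int *count) {
  uint8_t marker;
  *count = 0;
  if (data_sz < 2) return -1;
  marker = data[data_sz - 1];
  if ((marker & 0xe0) == 0xc0) {
    const int frames = (marker & 0x7) + 1;
    const int mag = ((marker >> 3) & 0x3) + 1;
    const size_t index_sz = 2 + (size_t)mag * frames;
    if (data_sz >= index_sz && data[data_sz - index_sz] == marker) {
      const uint8_t *x = &data[data_sz - index_sz + 1];
      int i, j;
      for (i = 0; i < frames; ++i) {
        uint32_t this_sz = 0;
        for (j = 0; j < mag; ++j) this_sz |= (uint32_t)(*x++) << (j * 8);
        sizes[i] = this_sz;
      }
      *count = frames;
      return 0;
    }
  }
  return -1;
}
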
Change-Id: Idff83b7560e1a7077d8fb067bfbc45b567e78b1c
Split macroblock and superblock tokenization and detokenization
functions and coefficient-related data structs so that the bitstream
layout and related code of superblock coefficients looks less like it's
a hack to fit macroblocks in superblocks.
In addition, unify chroma transform size selection from luma transform
size (i.e. always use the same size, as long as it fits the predictor);
in practice, this means 32x32 and 64x64 superblocks using the 16x16 luma
transform will now use the 16x16 (instead of the 8x8) chroma transform,
and 64x64 superblocks using the 32x32 luma transform will now use the
32x32 (instead of the 16x16) chroma transform.
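
A minimal sketch of the unified selection rule, with illustrative enum
values: chroma simply reuses the luma transform size, capped at the largest
size that still fits the subsampled chroma predictor.

typedef enum { TX_4X4, TX_8X8, TX_16X16, TX_32X32 } TX_SIZE;

/* Chroma transform size derived from the luma choice: same size when it
 * fits the chroma block, otherwise the largest size that does.
 * bsize_log2 is log2 of the luma block width (4 for a 16x16 MB, 5 for a
 * 32x32 SB, 6 for a 64x64 SB); with 4:2:0 the chroma block is half that
 * in each dimension. */
static TX_SIZE chroma_tx_size(TX_SIZE luma_tx, int bsize_log2) {
  const int chroma_log2 = bsize_log2 - 1;                     /* 4:2:0 */
  const TX_SIZE max_chroma_tx = (TX_SIZE)(chroma_log2 - 2);   /* 2^2=4 .. 2^5=32 */
  return luma_tx <= max_chroma_tx ? luma_tx : max_chroma_tx;
}
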
Lastly, add a trellis optimize function for 32x32 transform blocks.
HD gains about 0.3%, STDHD about 0.15% and derf about 0.1%. There are
a few negative points here and there that I might want to analyze
a little closer.
Change-Id: Ibad7c3ddfe1acfc52771dfc27c03e9783e054430
Fixed a couple of variable/function definitions, as well as header
handling to support 16K sequence coding at high bit-rates.
The width and height are each specified by two bytes in the header.
Use an extra byte to explicitly indicate the scaling factors in
both directions, each ranging from 0 to 15.
Tested coding up to 16400x16400 dimension.
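
A sketch of the header fields described; the byte order and the packing of
the two 4-bit scaling factors into the extra byte are assumptions for
illustration.

#include <stdint.h>

/* Write the frame dimensions and explicit scaling factors to a header
 * buffer.  16-bit width/height allow dimensions up to 65535, which covers
 * the 16400x16400 case above; each scaling factor ranges from 0 to 15. */
static uint8_t *write_size_and_scaling(uint8_t *dst, int width, int height,
                                       int hscale, int vscale) {
  *dst++ = (uint8_t)(width & 0xff);
  *dst++ = (uint8_t)((width >> 8) & 0xff);
  *dst++ = (uint8_t)(height & 0xff);
  *dst++ = (uint8_t)((height >> 8) & 0xff);
  *dst++ = (uint8_t)((hscale & 0xf) | ((vscale & 0xf) << 4));  /* one extra byte */
  return dst;
}
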
Change-Id: Ibc2225c6036620270f2c0cf5172d1760aaec10ec
The width and height stored in the reference frames are padded out to
a multiple of 16. The Width and Height variables in common are the
displayed size, which may be smaller. The incorrect comparison was
causing scaling related code to be called when it shouldn't have
been. A notable case where this happens is 1080p, since 1088 != 1080.
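
A small sketch of the comparison being fixed, with field names invented for
illustration: the decision to scale must be based on the un-padded display
size on both sides.

/* Storage dimensions are padded to a multiple of 16, so for 1080p the
 * buffer is 1920x1088 while the displayed frame is 1920x1080.  Comparing
 * the padded size against the display size spuriously triggers the
 * scaling path. */
struct frame_size {
  int display_width, display_height;   /* what is shown  */
  int aligned_width, aligned_height;   /* padded storage */
};

static int reference_needs_scaling(const struct frame_size *ref,
                                   const struct frame_size *cur) {
  return ref->display_width != cur->display_width ||
         ref->display_height != cur->display_height;
}
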
Change-Id: I55f743eeeeaefbf2e777e193bc9a77ff726e16b5
The sse4_1 code used uint16_t for returning the SAD, but that
won't work for 32x32 or 64x64. This change fixes the
assembly for those sizes and also re-enables sse4_1 on Linux.
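
For context on why uint16_t is too small: a 64x64 SAD can reach
64 * 64 * 255 = 1,044,480, far above UINT16_MAX (65,535). A C reference with
a wide enough accumulator might look like this (function shape assumed):

#include <stdint.h>
#include <stdlib.h>

/* Reference SAD with a 32-bit accumulator; the 64x64 worst case of
 * 1,044,480 does not fit in uint16_t. */
static uint32_t sad_c(const uint8_t *src, int src_stride,
                      const uint8_t *ref, int ref_stride,
                      int width, int height) {
  uint32_t sad = 0;
  int r, c;
  for (r = 0; r < height; ++r) {
    for (c = 0; c < width; ++c)
      sad += (uint32_t)abs(src[c] - ref[c]);
    src += src_stride;
    ref += ref_stride;
  }
  return sad;
}
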
Change-Id: I5ce7288d581db870a148e5f7c5092826f59edd81
Fixing code style, using array lookup instead of switch statements for
forward hybrid transforms (in the same way as for their inverses).
Consistent usage of ROUND_POWER_OF_TWO macro in appropriate places.
Change-Id: I0d3822ae11f928905fdbfbe4158f91d97c71015f
This function was part of an optimization used in VP8 that required
caching two macroblocks. This is unused in VP9, and might not
survive refactoring to support superblocks, so removing it for now.
Change-Id: I744e585206ccc1ef9a402665c33863fc9fb46f0d
This patch makes the encoder's use of ref_frame_map and active_ref_idx
consistent with the decoder. ref_frame_map[] maps a reference buffer
index to its actual location in the yv12_fb array, since many
references may share an underlying buffer. active_ref_idx[] mirrors
cpi->{lst,gld,alt}_fb_idx, holding the active references in each
slot.
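
A sketch of the indirection described, with simplified stand-in types: a
reference slot such as cpi->gld_fb_idx is mapped through ref_frame_map[] to
the physical buffer in the yv12_fb[] pool, which several slots may share;
active_ref_idx[] then holds the three active references as described above.

#define NUM_REF_FRAMES 8
#define NUM_YV12_BUFFERS 12

struct yv12_buffer { int id; };   /* stand-in for a real frame buffer */

struct common_state {
  struct yv12_buffer yv12_fb[NUM_YV12_BUFFERS];  /* physical buffer pool      */
  int ref_frame_map[NUM_REF_FRAMES];             /* ref slot -> yv12_fb index */
};

/* Resolve a reference slot (e.g. cpi->gld_fb_idx) to its underlying buffer;
 * several slots may resolve to the same yv12_fb entry. */
static struct yv12_buffer *get_ref_buffer(struct common_state *cm, int fb_slot) {
  return &cm->yv12_fb[cm->ref_frame_map[fb_slot]];
}
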
This also fixes a bug in setup_buffer_inter() where the incorrect
reference was used to populate the scaling factors.
Change-Id: Id3728f6d77cffcd27c248903bf51f9c3e594287e
Fixes a bug in vp9_set_internal_size() that prevented returning to
the unscaled state. Updated the ResizeInternalTest to scale both
down and up. Added a check that all frames are within 2.5% of the
quality of the initial keyframe.
Change-Id: I3b7ef17cdac144ed05b9148dce6badfa75cff5c8
This patch extends the previous support for using references of a
different resolution in ZEROMV mode to all inter prediction modes.
Subpixel based best-mv scoring is disabled when the reference frame
differs in resolution from the current frame.
Change-Id: Id4dc3e5e6692de98d9857fd56bfad3ac57e944ac
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
virtual void FillFrame() {
// Read a frame from input_file.
+ if (frame_ != 3)
if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
limit_ = frame_;
}
This forces the frame on which the resolution changes to be coded
with no motion, only scaling, which improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
Ensure that all inter prediction goes through a common code path
that takes scaling into account. Removes a bunch of duplicate
1st/2nd predictor code. Also introduces a 16x8 mode for 8x8
MVs, similar to the 8x4 trick we were doing before. This has an
unexpected effect with EIGHTTAP_SMOOTH, so it's disabled in that
case for now.
Change-Id: Ia053e823a8bc616a988a0af30452e1e75a739cba
The commit improves the 32x32 forward DCT implementation:
1. Use the same constants and rounding as the other forward DCTs.
2. Select rounding to specifically minimize the round-trip error, which
improved the average from 19/block to .77/block over 100000 random inputs.
Tests showed a small but consistent gain on all test sets, about .15%.
Change-Id: If0afd6a71880a522f60c1c234be0462092c2eb53
Increase the first stage dynamic range by 4 times, and reduce it
back with proper rounding before applying the second stage. Hence
it still fits in the given dynamic range and slightly improves
the key frame coding performance.
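
A tiny, purely illustrative sketch of the scaling pattern: stage-one outputs
are kept at 4x scale for extra precision, then rounded back down before the
second stage.

#include <stdint.h>

/* Keep 2 extra fractional bits through the first butterfly stage, then
 * remove them with rounding before the second stage so values fit back
 * into the original dynamic range. */
static int16_t round_down_4x(int32_t stage1_output_4x) {
  return (int16_t)((stage1_output_4x + 2) >> 2);  /* +2 = half of 4 */
}
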
Change-Id: Ia4c5907446f20a95dc3de079c314b3ad1221d8aa
Rebased.
Remove the old matrix multiplication transform computation. The 16x16
ADST/DCT can be switched on/off and evaluated by setting ACTIVE_HT16
300/0 in vp9/common/vp9_blockd.h.
Change-Id: Icab2dbd18538987e1dc4e88c45abfc4cfc6e133f
Some minor refactoring relating to estimates of
bits per MB at a given Q and estimating the allowed Q range.
Most of the changes here were included in a previous commit.
This commit seeks to separate out the refactoring from the more
material changes.
Two #define control flags have been added for experimentation.
ONE_SHOT_Q_ESTIMATE forces the two-pass encoder to
use its initial Q range estimate for the whole clip even if this results
in a miss on the target data rate. In effect this tightens the Q range
seen at the expense of rate control accuracy.
DISABLE_RC_LONG_TERM_MEM is a related flag that disables the
long term memory in the rate control. Local adjustments are still
made to try and better hit the rate target on a per-frame basis, but
the impact of rate control misses is not propagated to the remainder
of the clip. This means that, for example, an overshoot early on will not
cause frames later in the clip to be starved of bits. Again, the result
of this relaxation may be less rate control accuracy, especially on short
clips.
The flags are disabled by default for now.
Change-Id: I7482f980146d8ea033b5d50cc689f772e4bd119e
This commit added pre/post scaling for the first half of fDCT16x16 to
reduce error. In a simulation of 100,000 blocks with random inputs,
the average SSE reduced from 2.1/block to 0.0498/block.
Also enabled tests for the 16x16 fDCT and iDCT.
Change-Id: Id2a95f0464c6dd4118797d456237ae90274c0f02
This patch alters the balance of context between the
coefficient bands (reflecting the position of coefficients
within a transform block) and the energy of the previous
token (or tokens) within a block.
In this case the number of coefficient bands is reduced
but more previous token energy bands are supported.
Some initial rebalancing of the default tables has been done
by running multiple derf clips at multiple data rates using
the ENTROPY_STATS macro. Further balancing needs to be
done using larger image formats, especially in regard to
the bigger transform sizes, which are not as well represented
in encodings of smaller image formats.
Change-Id: If9736e95c391e711b04aef6393d26f60f36e1f8a
The commit added a final rounding choice for the 8x8 forward DCT to get
rid of a sign bias at the DC position and improve the accuracy, in terms
of round-trip error, of the 8x8 fDCT/iDCT.
This commit also enabled the forward 8x8 DCT test.
Change-Id: Ib67f99b0a24d513e230c7812bc04569d472fdc50
The issue that potentially broke the encoding process was due to the fact
that the length of the token link is calculated from the total number of
tokens coded, while it is possible, in high bit-rate settings, for this
length to be greater than the buffer length initially assigned to cpi->tok.
This patch increases the initially allocated buffer length assigned to
cpi->tok from
(mb_rows * mb_cols * 24 * 16) to (mb_rows * mb_cols * (1 + 24 * 16)).
It resolves the buffer overflow problem.
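
A sketch of the corresponding allocation, with names approximating the
encoder's and the sizes taken from the text above:

#include <stdlib.h>

struct token_record { int token, extra; };   /* stand-in for the real TOKENEXTRA */

/* Worst case per macroblock: 24 blocks of 16 coefficient tokens, plus one
 * extra slot.  The old bound of mb_rows * mb_cols * 24 * 16 could be
 * exceeded at high bit-rates, overflowing cpi->tok. */
static struct token_record *alloc_token_buffer(int mb_rows, int mb_cols) {
  const size_t n = (size_t)mb_rows * mb_cols * (1 + 24 * 16);
  return (struct token_record *)calloc(n, sizeof(struct token_record));
}
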
Change-Id: I8661a8d39ea0a3c24303e3f71a170787a1d5b1df
Removing redundant 'extern' keywords and parentheses, fixing indentation,
making variable names lower case, using short expressions x *= c
instead of x = x * c, minor code simplifications.
Change-Id: If6a25fcf306d1db26e90d27e3c24a32735c607de
The over quant code was added in VP8 post
bitstream freeze to allow compression to lower
data rates.
In VP9 the real quantizer range has been greatly
extended anyway.
Change-Id: I5d384fa5e9a83ef75a3df34ee30627bd21901526
This patch includes 4x4, 8x8, and 16x16 forward butterfly ADST/DCT
hybrid transform. The kernel of 4x4 ADST is sin((2k+1)*(n+1)/(2N+1)).
The kernel of 8x8/16x16 ADST is of the form sin((2k+1)*(2n+1)/4N).
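
For reference, a small sketch that evaluates the stated 8x8/16x16 kernel
numerically, assuming (as is conventional) that the argument is in units of
pi and ignoring normalization:

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Unnormalized basis entry for the 8x8/16x16 kernel quoted above:
 * T[k][n] = sin((2k+1)(2n+1) * pi / (4N)).  (The 4x4 case would instead
 * use sin((2k+1)(n+1) * pi / (2N+1)).)  The pi factor and the missing
 * normalization are assumptions of this sketch. */
static double adst_kernel(int k, int n, int N) {
  return sin((2 * k + 1) * (2 * n + 1) * M_PI / (4.0 * N));
}

int main(void) {
  const int N = 8;
  int k, n;
  for (k = 0; k < N; ++k) {          /* one row per output frequency */
    for (n = 0; n < N; ++n) printf("% .4f ", adst_kernel(k, n, N));
    printf("\n");
  }
  return 0;
}
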
Change-Id: I8f1ab3843ce32eb287ab766f92e0611e1c5cb4c1
Removing redundant 'extern' keyword from function declarations and making
function arguments lower case.
Change-Id: Idae9a2183b067f2b6c85ad84738d275e8bbff9d9
Refactors the switchable filter search in the rd loop to
improve encode speed.
Uses a piecewise approximation to a closed form expression to estimate
rd cost for a Laplacian source with a given variance and quantization
step-size.
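
A sketch of the piecewise approximation mechanics; the breakpoints and rate
values below are placeholders invented for illustration, not the codec's
trained model.

/* Piecewise-linear lookup of the kind described: rate (and, analogously,
 * distortion) is estimated as a function of the variance normalized by the
 * squared quantizer step.  Sample points are made-up placeholders. */
#define MODEL_PTS 5
static const double model_x[MODEL_PTS]    = { 0.25, 1.0, 4.0, 16.0, 64.0 };
static const double model_rate[MODEL_PTS] = { 0.10, 0.60, 1.80, 3.50, 5.40 };

static double estimate_rate_bits(double variance, double qstep) {
  const double x = variance / (qstep * qstep);
  int i;
  if (x <= model_x[0]) return model_rate[0];
  for (i = 1; i < MODEL_PTS; ++i) {
    if (x <= model_x[i]) {
      const double t = (x - model_x[i - 1]) / (model_x[i] - model_x[i - 1]);
      return model_rate[i - 1] + t * (model_rate[i] - model_rate[i - 1]);
    }
  }
  return model_rate[MODEL_PTS - 1];   /* clamp above the last breakpoint */
}
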
About 40% encode time reduction is achieved.
Results (on a feb 12 baseline) show a slight drop:
derf: -0.019%
yt: +0.010%
std-hd: -0.162%
hd: -0.050%
Change-Id: Ie861badf5bba1e3b1052e29a0ef1b7e256edbcd0
The commit changes the coding mode to lossless whenever the lowest
quantizer is chosen.
As expected, test results showed no difference for cif and std-hd
set where Q0 is rarely used. For yt and yt-hd set, Q0 is used for
a number of clips, where this commit helped a lot in the high end.
Average over all clips in the sets:
yt: 2.391% 1.017% 1.066%
hd: 1.937% .764% .787%
Change-Id: I9fa9df8646fd70cb09ffe9e4202b86b67da16765
The 32x32 value in the case of splitmv was uninitialized. This leads to
all kinds of erratic behaviour down the line. Also fill in dummy values
for superblocks in keyframes (the values are currently unused, but we
run into integer overflows anyway, which makes detecting bad cases
harder). Lastly, in case we did not find any RD value at all, don't
set tx_diff to INT_MIN, but instead set it to zero (since if we couldn't
find a mode, it's unlikely that any particular transform would have made
that worse or better; rather, it's likely equally bad for all tx_sizes).
Change-Id: If236fd3aa2037e5b398d03f3b1978fbbc5ce740e
This issue breaks the encoding process of the codebase. The effect
emerges only in particular test sequences at certain bit-rates and
frame limits.
Change-Id: I02e080f2a49624eef9a21c424053dc2a1d902452
Since there is no Y2, these values are always zero. This changes the
bitstream results slightly, hence a separate commit.
Change-Id: I2f838f184341868f35113ec77ca89da53c4644e0
These allow sending partial bitstream packets over the network before
encoding of the complete frame has finished, thus lowering end-to-end
latency. The tile-rows are not independent.
Change-Id: I99986595cbcbff9153e2a14f49b4aa7dee4768e2
This patch abstracts the selection of the coefficient band
context into a function as a precursor to further experiments
with the coefficient context.
It also removes the large per TX size coefficient band structures
and uses a single matrix for all block sizes within the test function.
This may have an impact on quality (results to follow) but is only an
intermediate step in the process of redefining the context. Also the
quality impact will be larger initially because the default tables will
be out of step with the new banding.
In particular the 4x4 will in this case only use 7 bands. If needed we
can add back block size dependency localized within the function, but
this can follow on after the other changes to the definition of the
context.
Change-Id: Id7009c2f4f9bb1d02b861af85fd8223d4285bde5
Reverted part of change
I19981d1ef0b33e4e5732739574f367fe82771a84
that gives rise to an enc/dec mismatch.
As things stand, the memsets are still needed.
Change-Id: I9fa076a703909aa0c4da0059ac6ae19aa530db30
This is an initial step to facilitate experimentation
with changes to the prior token context used to code
coefficients to take better account of the energy of
preceding tokens.
This patch merely abstracts the selection of context into
two functions and does not alter the output.
Change-Id: I117fff0b49c61da83aed641e36620442f86def86
1. Added a bit in the frame header to indicate if a frame is encoded
in lossless mode, so the decoder does not make the decision based on Q0.
2. Minor changes to make sure that lossy coding works the same as when
the lossless experiment is not enabled.
3. Renamed function pointers for transforms to be consistent, using
the prefixes fwd_txm and inv_txm for forward and inverse respectively.
To encode in lossless mode, use "--lossless=1 --min-q=0 --max-q=0"
with vpxenc.
Change-Id: Ifae53b26d2ffbe378d707e29d96817b8a5e6c068
Removal of the NEWCOEFCONTEXT experiment to
reduce code clutter and make it easier to experiment with
some other changes to the coefficient coding context.
Change-Id: Icd17b421384c354df6117cc714747647c5eb7e98
A couple of scalar optimizations speeding up quantization by about 1.6x. Overall encoder speedup is around 3%.
Change-Id: I19981d1ef0b33e4e5732739574f367fe82771a84
This is after discussion with the hardware team. Update the unit test
to take these sizes into account. Split out some duplicate code into
a separate file so it can be shared.
Change-Id: I8311d11b0191d8bb37e8eb4ac962beb217e1bff5
Fixed format issues.
Implement the inverse 4x4 ADST using 9 multiplications. For this
particular dimension, the original ADST transform can be
factorized into simpler operations and hence is retained.
Change-Id: Ie5d9749942468df299ab74e90d92cd899569e960
Experimental tweaks to various thresholds to measure the
quality / speed trade-off.
Add a flag that allows static segmentation to be turned off,
and disable it unless in the second pass of a two-pass
Change-Id: I219702ffe858412a83db801cbbbd869924b8c61b
Replace as_mv.{first, second} with a two element array, so that they
can easily be processed with an index variable.
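
A minimal before/after sketch of the data-layout change, with simplified
types:

typedef struct { short row, col; } MV_ish;   /* simplified stand-in for int_mv */

/* Before: two named fields, awkward to index from a loop. */
struct mbmi_before { struct { MV_ish first, second; } as_mv; };

/* After: a two-element array, so prediction code can loop over references. */
struct mbmi_after { MV_ish as_mv[2]; };

static void clear_mvs(struct mbmi_after *mi, int num_refs) {
  int ref;
  for (ref = 0; ref < num_refs; ++ref) {   /* index with a ref variable */
    mi->as_mv[ref].row = 0;
    mi->as_mv[ref].col = 0;
  }
}
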
Change-Id: I1e429155544d2a94a5b72a5b467c53d8b8728190
Also port the 4x4, 16x16, 8x16 and 16x8 versions to x86inc.asm; this
makes them all slightly faster, particularly on x86-64. Remove SSE3
sad16x16 version, since the SSE2 version is now faster.
About 1.5% overall encoding speedup.
Change-Id: Id4011a78cce7839f554b301d0800d5ca021af797
Cache the constant offset in one variable to prevent re-loading that
in each loop iteration, and mark the function as inline so we can use
the fact that the transform size is always known in the caller.
Almost 1% faster encoding overall.
Change-Id: Id78325a60b025057d8f4ecd9003a74086ccbf85a
Pass the current mb row and column around rather than the
recon_yoffset and recon_uvoffset, since those offsets will
change from predictor to predictor, based on the reference
frame selection.
Change-Id: If3f9df059e00f5048ca729d3d083ff428e1859c1
* changes:
Initial support for resolution changes on P-frames
Avoid allocating memory when resizing frames
Adds a test for the VP8E_SET_SCALEMODE control
Allows inter-frames to change resolution. Currently these are
almost equivalent to keyframes, as only intra prediction modes
are allowed, but without the other context resets that occur on
keyframes.
Change-Id: Icd1a2a5af0d9462cc792588427b0a1f5b12e40d3
As long as the new frame is smaller than the size that was originally
allocated, we don't need to free and reallocate the memory.
Instead, do the allocation on the size of the first frame. We could
make this passed in from the application instead, if we wanted to
support external upscaling.
Change-Id: I204d17a130728bbd91155bb4bd863a99bb99b038
Tests that the external interface to set the internal codec scaling
works as expected. Also updates the test to pull the height from
the decoded frame size rather than parsing the keyframe header,
in anticipation of allowing resolution changes on non-keyframes.
Change-Id: I3ed92117d8e5288fbbd1e7b618f2f233d0fe2c17
Tweak to the default mode context to account for the fact
that, when there are no non-zero motion candidates,
Nearest is now the preferred mode for coding a 0,0
vector.
Also resolve a duplicate function name and typos.
Change-Id: I76802788d46c84e3d1c771be216a537ab7b12817
Refactor the 8x8 inverse hybrid transform. It is now consistent
with the new inverse DCT. Overall performance loss (due to the
use of this variant ADST, and the rounding errors in the butterfly
implementation) for std-hd is -0.02.
Fixed BUILD warning.
Devise a variant of the original ADST, which allows a butterfly
computation structure. This new transform has a kernel of the
form: sin((2k+1)*(2n+1) / (4N)). One of its butterfly structures
using floating-point multiplications was reported in Z. Wang,
"Fast algorithms for the discrete W transform and for the discrete
Fourier transform", IEEE Trans. on ASSP, 1984.
This patch includes the butterfly implementation of the inverse
ADST/DCT hybrid transform of dimension 8x8.
Change-Id: I3533cb715f749343a80b9087ce34b3e776d1581d