Updates the YV12_BUFFER_CONFIG structure to be crop-aware. The
existing width/height parameters are left unchanged, storing the
width and height aligned to a 16-byte boundary. The cropped
dimensions are added as new fields.
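For illustration, a minimal sketch of the crop-aware layout (field names
follow the libvpx convention; the full struct has more members than shown):

  typedef struct yv12_buffer_config {
    int y_width;        /* luma width, padded as before            */
    int y_height;       /* luma height, padded as before           */
    int y_crop_width;   /* new: displayed (cropped) luma width     */
    int y_crop_height;  /* new: displayed (cropped) luma height    */
    /* ... existing stride, chroma and buffer pointer fields ...   */
  } YV12_BUFFER_CONFIG;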
This fixes a nasty visual pulse when switching between scaled and
unscaled frame dimensions due to a mismatch between the scaling
ratio and the 16-byte aligned sizes.
Change-Id: Id4a3f6aea6b9b9ae38bdfa1b87b7eb2cfcdd57b6
This is like VP8_COPY_REFERENCE, but returns a pointer to the reference
frame rather than a copy of it. This is useful when the application
doesn't know what the size of the reference is, as is the case when
scaling is in effect.
Change-Id: I63667109f65510364d0e397ebe56217140772085
Enable entropy coding of motion vectors up to +/-2048. Also
extend the motion search range accordingly.
Change-Id: Iac2bb015e8934521cef83a19edbe967d9f097436
If the bool-coded partition naturally ends in a byte that matches the
superframe index marker, it could lead to a parse error. This commit
ensures that if such a marker is seen, it is padded out with an
additional zero byte to disambiguate it.
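A minimal sketch of the padding step, assuming the marker check looks at the
top three bits of the final byte (buffer and length names are illustrative):

  /* If the compressed partition happens to end in a byte that looks like a
   * superframe index marker, append a zero byte so an index scanner cannot
   * misread it. */
  if (n > 0 && (buf[n - 1] & 0xe0) == 0xc0)
    buf[n++] = 0;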
Change-Id: Id977de05745b6fa9ef08afb71e210a2a3ecca02e
When coding the frame that corresponds to the midpoint frame
defining an ARF, do not update the last reference frame buffer.
Previously this buffer was updated, meaning that when coding the next
ARF all the reference buffers were the same (or nearly so).
Turning the update off means that the frame before is still available
as an alternative predictor and for use in compound prediction.
Also fixed inconsistency in test for mismatch (patch from JK).
Net average gains (derf 0.049, yt 0.163, yt-hd 0.207, std-hd 0.286)
Change-Id: Ifee21da21ccbb1648ac2eafe890d3ce60562c7bc
Removing redundant code, introducing new functions for better
decomposition, adding 'clamp' function to vp9_common.h.
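The added helper is a conventional clamp; a minimal version looks like:

  static inline int clamp(int value, int low, int high) {
    return value < low ? low : (value > high ? high : value);
  }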
Change-Id: Ic3b8ca13bbc38f60f0c9c43910b5802005e31aaf
This patch puts in an adjustment to the maximum gf/arf
interval based on the active q range. It sets a fixed
baseline maximum of 16 but can drop this down to 12 at
lower q. This required some re-ordering in the first pass
code to ensure we have a Q range estimate before defining
the first gf sequence.
The main gains seen are in the STD-HD set on 50fps clips
where previously the interval could rise as high as 25.
On the std-hd clips the gains are around 2.8% with the limit set
to 300 frames.
When combined with the one-shot rate control flags we get
combined gains of:
derf 1.55% (limit 300), yt 7.25%, hd 5.17%, std-hd 5.84% (limit 300)
Change-Id: Ib380d51354511f2ff0f171a8df4e74291c0421f9
The previous implementation visited each node in the tree multiple times
because it used each symbol's encoding to revisit the branches taken and
increment its count. Instead, we can traverse the tree depth first and
calculate the probabilities and branch counts as we walk back up. The
complexity goes from somewhere between O(nlogn) and O(n^2) (depending on
how balanced the tree is) to O(n).
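A sketch of the depth-first version, using the libvpx tree convention
(entries <= 0 are leaves encoding symbol -entry, positive entries index a
node's pair of children); the function and argument names are illustrative:

  #include <stdint.h>

  static unsigned int convert_distribution(int index, const int8_t *tree,
                                           unsigned int (*branch_ct)[2],
                                           const unsigned int *num_events) {
    unsigned int left, right;

    if (tree[index] <= 0)
      left = num_events[-tree[index]];  /* leaf: take the symbol count */
    else
      left = convert_distribution(tree[index], tree, branch_ct, num_events);

    if (tree[index + 1] <= 0)
      right = num_events[-tree[index + 1]];
    else
      right = convert_distribution(tree[index + 1], tree, branch_ct,
                                   num_events);

    /* Each node is visited exactly once; subtree totals propagate upward as
     * the recursion unwinds, which is where the O(n) behaviour comes from. */
    branch_ct[index >> 1][0] = left;
    branch_ct[index >> 1][1] = right;
    return left + right;
  }

The branch probabilities can then be derived directly from branch_ct[][] on
the way back up or in a trivial second pass.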
Only tested one clip (256kbps, CIF), saw 13% decoding perf improvement.
Note that this optimization should port trivially to VP8 as well. In VP8,
the decoder doesn't use this function, but it does routinely show up
on the profile for realtime encoding.
Change-Id: I4f2848e4f41dc9a7694f73f3e75034bce08d1b12
Adds probability updates for extra bits for the nzcs, code for
getting nzc stats, plus some minor cleanups and fixes.
Change-Id: If2814e7f04fb52f5025ad9f400f3e6c50a00b543
Increase the motion search range by 4x. Change MV_CLASS tree of the
entropy coding to allow two additional mv classes to cover the
extended motion vector limit. The codec determines the effective
motion search range conditioned on the actual frame dimension.
It provides coding gains:
stdhd 0.39%
yt 0.56%
hd 0.47%
Major coding performance gains are concentrated in several sequences with
intense motion activity, e.g., ped_1080p gains 7% at high bit-rates and
3% on average.
TODO: Need to further tune the rate control and motion search units.
Change-Id: Ib842540a6796fbee5a797809433ef6a477c6d78d
This also changes the RD search to take account of the correct block
index when searching (this is required for ADST positioning to work
correctly in combination with tx_select).
Change-Id: Ie50d05b3a024a64ecd0b376887aa38ac5f7b6af6
This patch revamps the entropy coding of coefficients to code first
a non-zero count per coded block and correspondingly remove the EOB
token from the token set.
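For reference, the nzc of a block is simply the number of non-zero quantized
coefficients, e.g. (illustrative only):

  #include <stdint.h>

  static int count_nzc(const int16_t *qcoeff, int n) {
    int i, nzc = 0;
    for (i = 0; i < n; i++) nzc += qcoeff[i] != 0;
    return nzc;
  }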
STATUS:
Main encode/decode code achieving encode/decode sync - done.
Forward and backward probability updates to the nzcs - done.
Rd costing updates for nzcs - done.
Note: The dynamic programming approach used in trellis quantization
is not exactly compatible with nzcs. A suboptimal approach has been
used instead, where branch costs are updated to account for changes
in the nzcs.
TODO:
Training the default probs/counts for nzcs
Change-Id: I951bc1e22f47885077a7453a09b0493daa77883d
Added a variant of the one shot maxQ flag
for two pass that forces a fixed Q for the
normal inter frames. Disabled by default.
Also a small adjustment to the bits per MB
estimation.
Change-Id: I87efdfb2d094fe1340ca9ddae37470d7b278c8b8
A 'superframe' is a group of frames that share the same PTS, but have a
defined decoding order. This commit adds the ability to append an index
to such a group of frames, allowing for random access to the constituent
frames. This could be useful for frame-level parallelism or partial
decoding in a multilayer scenario.
Decoding the stream serially without such an index should work as a
fallback, and VP9/TestSuperframeIndexIsOptional verifies that.
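A hedged sketch of how a decoder can detect and read such an index (marker
layout assumed here: top three bits 110, two bits giving the per-frame size
width, three bits giving the frame count; see the committed code for the
normative details):

  #include <stddef.h>
  #include <stdint.h>

  static int parse_superframe_index(const uint8_t *data, size_t data_sz,
                                    uint32_t sizes[8], int *count) {
    const uint8_t marker = data[data_sz - 1];
    *count = 0;

    if ((marker & 0xe0) == 0xc0) {
      const int frames = (marker & 0x7) + 1;
      const int mag = ((marker >> 3) & 0x3) + 1;
      const size_t index_sz = 2 + mag * frames;

      /* The same marker byte must also start the index. */
      if (data_sz >= index_sz && data[data_sz - index_sz] == marker) {
        const uint8_t *x = data + data_sz - index_sz + 1;
        int i, j;
        for (i = 0; i < frames; ++i) {
          uint32_t this_sz = 0;
          for (j = 0; j < mag; ++j) this_sz |= (uint32_t)(*x++) << (j * 8);
          sizes[i] = this_sz;
        }
        *count = frames;
        return 1;
      }
    }
    return 0;  /* no index: decode the superframe serially as a fallback */
  }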
Change-Id: Idff83b7560e1a7077d8fb067bfbc45b567e78b1c
Split macroblock and superblock tokenization and detokenization
functions and coefficient-related data structs so that the bitstream
layout and related code of superblock coefficients looks less like it's
a hack to fit macroblocks in superblocks.
In addition, unify chroma transform size selection from luma transform
size (i.e. always use the same size, as long as it fits the predictor);
in practice, this means 32x32 and 64x64 superblocks using the 16x16 luma
transform will now use the 16x16 (instead of the 8x8) chroma transform,
and 64x64 superblocks using the 32x32 luma transform will now use the
32x32 (instead of the 16x16) chroma transform.
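In effect the chroma rule reduces to a min(); roughly, with sizes expressed
as log2 and names chosen here for illustration:

  static int choose_uv_tx_log2(int luma_tx_log2, int luma_block_log2) {
    /* the chroma predictor is half the luma block size in each dimension */
    const int max_uv_tx_log2 = luma_block_log2 - 1;
    return luma_tx_log2 < max_uv_tx_log2 ? luma_tx_log2 : max_uv_tx_log2;
  }

For example, a 64x64 superblock (log2 6) with a 32x32 luma transform (log2 5)
now gets min(5, 5), i.e. a 32x32 chroma transform.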
Lastly, add a trellis optimize function for 32x32 transform blocks.
HD gains about 0.3%, STDHD about 0.15% and derf about 0.1%. There are
a few negative points here and there that I might want to analyze
a little closer.
Change-Id: Ibad7c3ddfe1acfc52771dfc27c03e9783e054430
Fixed a couple of variable/function definitions, as well as header
handling to support 16K sequence coding at high bit-rates.
The width and height are each specified by two bytes in the header.
Use an extra byte to explicitly indicate the scaling factors in
both directions, each ranging from 0 to 15.
Tested coding up to 16400x16400 dimension.
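A hedged sketch of the header fields described above (byte order and nibble
placement are illustrative, not the normative bitstream definition):

  #include <stdint.h>

  static void write_frame_size(uint8_t *p, int width, int height,
                               int scale_x, int scale_y) {
    p[0] = width & 0xff;         /* width: two bytes  */
    p[1] = (width >> 8) & 0xff;
    p[2] = height & 0xff;        /* height: two bytes */
    p[3] = (height >> 8) & 0xff;
    p[4] = ((scale_x & 0xf) << 4) | (scale_y & 0xf);  /* 0..15 each */
  }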
Change-Id: Ibc2225c6036620270f2c0cf5172d1760aaec10ec
The width and height stored in the reference frames are padded out to
a multiple of 16. The Width and Height variables in common are the
displayed size, which may be smaller. The incorrect comparison was
causing scaling related code to be called when it shouldn't have
been. A notable case where this happens is 1080p, since 1088 != 1080.
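A sketch of the corrected check (member names assumed): compare the
displayed/cropped dimensions rather than the padded buffer dimensions.

  /* e.g. a 1080p reference is stored as 1920x1088, so the padded height
   * must never be compared against the displayed height */
  const int ref_is_scaled = ref->y_crop_width  != cm->Width ||
                            ref->y_crop_height != cm->Height;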
Change-Id: I55f743eeeeaefbf2e777e193bc9a77ff726e16b5
The sse4_1 code used uint16_t for returning the SAD, but that
won't work for 32x32 or 64x64 blocks: a 64x64 SAD can reach
64 * 64 * 255 = 1,044,480, which overflows 16 bits. This fixes
the assembly for those sizes and also re-enables sse4_1 on Linux.
Change-Id: I5ce7288d581db870a148e5f7c5092826f59edd81
Fixing code style, using array lookup instead of switch statements for
forward hybrid transforms (in the same way as for their inverses).
Consistent usage of ROUND_POWER_OF_TWO macro in appropriate places.
Change-Id: I0d3822ae11f928905fdbfbe4158f91d97c71015f
This function was part of an optimization used in VP8 that required
caching two macroblocks. This is unused in VP9, and might not
survive refactoring to support superblocks, so removing it for now.
Change-Id: I744e585206ccc1ef9a402665c33863fc9fb46f0d
This patch makes the encoder's use of ref_frame_map and active_ref_idx
consistent with the decoder. ref_frame_map[] maps a reference buffer
index to its actual location in the yv12_fb array, since many
references may share an underlying buffer. active_ref_idx[] mirrors
cpi->{lst,gld,alt}_fb_idx, holding the active references in each
slot.
This also fixes a bug in setup_buffer_inter() where the incorrect
reference was used to populate the scaling factors.
Change-Id: Id3728f6d77cffcd27c248903bf51f9c3e594287e
Fixes a bug in vp9_set_internal_size() that prevented returning to
the unscaled state. Updated the ResizeInternalTest to scale both
down and up. Added a check that all frames are within 2.5% of the
quality of the initial keyframe.
Change-Id: I3b7ef17cdac144ed05b9148dce6badfa75cff5c8
This patch extends the previous support for using references of a
different resolution in ZEROMV mode to all inter prediction modes.
Subpixel based best-mv scoring is disabled when the reference frame
differs in resolution from the current frame.
Change-Id: Id4dc3e5e6692de98d9857fd56bfad3ac57e944ac
This patch allows coding frames using references of different
resolution, in ZEROMV mode. For compound prediction, either
reference may be scaled.
To test, I use the resize_test and enable WRITE_RECON_BUFFER
in vp9_onyxd_if.c. It's also useful to apply this patch to
test/i420_video_source.h:
--- a/test/i420_video_source.h
+++ b/test/i420_video_source.h
@@ -93,6 +93,7 @@ class I420VideoSource : public VideoSource {
   virtual void FillFrame() {
     // Read a frame from input_file.
+    if (frame_ != 3)
     if (fread(img_->img_data, raw_sz_, 1, input_file_) == 0) {
       limit_ = frame_;
     }
This forces the frame that the resolution changes on to be coded
with no motion, only scaling, and improves the quality of the
result.
Change-Id: I1ee75d19a437ff801192f767fd02a36bcbd1d496
Ensure that all inter prediction goes through a common code path
that takes scaling into account. Removes a bunch of duplicate
1st/2nd predictor code. Also introduces a 16x8 mode for 8x8
MVs, similar to the 8x4 trick we were doing before. This has an
unexpected effect with EIGHTTAP_SMOOTH, so it's disabled in that
case for now.
Change-Id: Ia053e823a8bc616a988a0af30452e1e75a739cba
The commit improves the 32x32 forward dct implementation:
1. change to use same constants and rounding as other forward dcts
2. select rounding to specifically minimize the roundtrip error, which
improved the average from 19/block to .77/block using 100,000 random inputs.
Tests showed a small but consistent gain on all test sets, about .15%.
Change-Id: If0afd6a71880a522f60c1c234be0462092c2eb53
Increase the first stage dynamic range by 4 times, and reduce it
back with proper rounding before applying the second stage. Hence
it still fits in the given dynamic range and slightly improves
the key frame coding performance.
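Illustrative only (not the actual transform code): the idea is to run the
first 1-D pass with two extra fractional bits and remove them with rounding
before the second pass, so the overall dynamic range is unchanged.

  static int16_t scale_up(int16_t x) { return (int16_t)(x * 4); }

  static int16_t scale_down_rounded(int32_t x) {
    return (int16_t)((x + 2) >> 2);  /* round to nearest, ties rounded up */
  }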
Change-Id: Ia4c5907446f20a95dc3de079c314b3ad1221d8aa
Rebased.
Remove the old matrix multiplication transform computation. The 16x16
ADST/DCT can be switched on/off and evaluated by setting ACTIVE_HT16
to 300 or 0 in vp9/common/vp9_blockd.h.
Change-Id: Icab2dbd18538987e1dc4e88c45abfc4cfc6e133f
Some minor refactoring of code relating to estimates of
bits per MB at a given Q and estimating the allowed Q range.
Most of the changes here were included in a previous commit.
This commit seeks to separate out the refactoring from the
more material changes.
Two #define control flags have been added for experimentation.
ONE_SHOT_Q_ESTIMATE forces the two-pass encoder to
use its initial Q range estimate for the whole clip, even if this results
in a miss on the target data rate. In effect this tightens the Q range
seen at the expense of rate control accuracy.
DISABLE_RC_LONG_TERM_MEM is a related flag that disables the
long term memory in the rate control. Local adjustments are still
made to try and better hit the rate target on a per-frame basis, but
the impact of rate control misses is not propagated to the remainder
of the clip. This means that for example an overshoot early on will not
cause frames later in the clip to be starved of bits. Again the result
of this relaxation may be less rate control accuracy, especially on short
clips.
The flags are disabled by default for now.
Change-Id: I7482f980146d8ea033b5d50cc689f772e4bd119e
This commit added pre/post scaling for the first half of the fDCT16x16 to
reduce error. In a simulation of 100,000 blocks with random inputs,
the average SSE was reduced from 2.1/block to 0.0498/block.
Also enabled tests for the 16x16 fDCT and iDCT.
Change-Id: Id2a95f0464c6dd4118797d456237ae90274c0f02
This patch alters the balance of context between the
coefficient bands (reflecting the position of coefficients
within a transform block) and the energy of the previous
token (or tokens) within a block.
In this case the number of coefficient bands is reduced
but more previous token energy bands are supported.
Some initial rebalancing of the default tables has been done
by running multiple derf clips at multiple data rates using
the ENTROPY_STATS macro. Further balancing needs to be
done using larger image formats, especially in regard to
the bigger transform sizes which are not as well represented
in encodings of smaller image formats.
Change-Id: If9736e95c391e711b04aef6393d26f60f36e1f8a
The commit added a final rounding choice for 8x8 forward dct to get
rid of a sign bias at the DC position and improve the accuracy in terms
of round trip error for 8x8 fDCT/iDCT.
This commit also enabled forward 8x8 dct test.
Change-Id: Ib67f99b0a24d513e230c7812bc04569d472fdc50