Remove the unnecessary _s_ from their names, and add a new
vp9_recon_sb() that calls the y and uv variants.
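A minimal sketch of the new wrapper (illustrative only; MACROBLOCKD here
is a forward declaration and the parameter list and variant prototypes
are assumptions, not copied from the tree):

  /* Hypothetical sketch: vp9_recon_sb() just dispatches to the existing
   * luma and chroma reconstruction functions. */
  typedef struct macroblockd MACROBLOCKD;
  void vp9_recon_sby(MACROBLOCKD *xd);
  void vp9_recon_sbuv(MACROBLOCKD *xd);

  void vp9_recon_sb(MACROBLOCKD *xd) {
    vp9_recon_sby(xd);   /* luma */
    vp9_recon_sbuv(xd);  /* chroma (u and v) */
  }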
Change-Id: I7ffaa5ff5605a8472cac2a53de8cf889353039a6
Further simplification of mvref search to return
only the top two candidates. Distance weights removed
as the test order reflects distance anyway.
Change-Id: I0518cab7280258fec2058670add4f853fab7b855
As we are no longer able to sort the candidate
mvrefs in both encoder and decoder, and given
that the cost of explicit signalling has proved
prohibitive, it no longer makes sense to find more
than 2 candidates.
This patch:
- Modifies and simplifies add_candidate_mv() (see the sketch below).
- Removes the forced addition of a 0 vector in the
  MAX_MV_REF_CANDIDATES-1 position (in preparation
  for reducing MAX_MV_REF_CANDIDATES to 2).
- Re-orders the addition of candidates slightly.
This actually gives small gains (circa 0.2% on std-hd).
A subsequent patch will remove the NEW_MVREF experiment,
reduce MAX_MV_REF_CANDIDATES to 2, and remove the distance
weights, as these are now implicit in the order.
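A rough sketch of the simplified candidate handling described above
(the function name matches the one mentioned, but the body, types and
constants are illustrative, not the actual implementation):

  #include <stdint.h>

  #define MAX_MV_REF_CANDIDATES 2   /* 2 after the follow-up patch */

  typedef struct { int16_t row, col; } MV;

  /* Append a candidate unless the list is full or the MV is already
   * present.  With the distance weights gone, the order in which
   * candidates are tested is the only ranking. */
  static void add_candidate_mv(MV *list, int *count, MV cand) {
    int i;
    if (*count >= MAX_MV_REF_CANDIDATES)
      return;
    for (i = 0; i < *count; ++i)
      if (list[i].row == cand.row && list[i].col == cand.col)
        return;  /* duplicate: keep the earlier (closer) entry */
    list[(*count)++] = cand;
  }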
Change-Id: I3dbe1a6f8a1a18b3c108257069c22a1141a207a4
Adjustments take heavier account of the frames near a kf
in deciding the boost and limit the total number of frames that can contribute.
Also adjusted the minq calculations such that in most cases we
generate a smaller key frame.
Modified the code that accounts for how static the sequence is and
added some adjustment based on image size. This is still very
crude but smaller images tend to behave better with a larger
delta between KF Q and other frames than larger image formats.
Changes give sizable gains in overall PSNR on all the test sets but the
biggest gains (~3%) were on the std-hd set.
The gains were smaller for SSIM but still significant.
Average PSNR results are mixed because this metric can very easily
be altered by having a very good / lossless coding of one or two frames.
Some of the YT and YT-HD clips in particular have blank lead-ins, and
allowing lossless coding of these appears to make a big difference to
average PSNR but in reality does not help much at all.
Change-Id: I6bfe485a1d330b47c783832f1717c95c535464ec
Consider the previous behavior for the MV 1 3/8 (11/8 pel). In the
existing code, the fractional part of the MV is considered separately,
and rounding is applied, giving a result of 6/8. Rounding is not required
in this case, as we're increasing the precision from a q3 to a q4, and
the correct value 11/16 can be represented exactly.
Slight gain observed (+.033 average on derf)
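For illustration, a small sketch of the arithmetic, assuming the
conversion in question is deriving a sixteenth-pel (q4) component from
an eighth-pel (q3) luma MV for a 2x-subsampled plane (that reading of
the example numbers is an assumption, not taken from the patch):

  /* Old behaviour: the q3 fractional part (3/8) was handled separately
   * and rounded when halved, (3 + 1) >> 1 = 2, so together with the
   * halved integer part (4/8) the result was 6/8 = 12/16.
   * New behaviour: keep the whole component and move to q4, where 11/8
   * maps exactly to 11/16; no rounding is needed because the precision
   * is increasing (q3 -> q4) rather than decreasing. */
  static int luma_q3_to_subsampled_q4(int mv_q3) {
    return mv_q3;  /* halving for subsampling and doubling for q3->q4 cancel */
  }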
Change-Id: I320e160e8b12f1dd66aa0ce7966b5088870fe9f8
This commit converts the luma versions of vp9_build_inter_predictors_sb
to use a common function. Update the convolution functions to support
block sizes larger than 16x16, and add a foreach_predicted_block walker.
Next step will be to calculate the UV motion vector and implement SBUV,
then fold in vp9_build_inter16x16_predictors_mb and SPLITMV.
At the 16x16, 32x32, and 64x64 levels implemented in this commit, each
plane is predicted with only a single call to vp9_build_inter_predictor.
This is not yet called for SPLITMV. If the notion of SPLITMV/I8X8/I4X4
goes away, then the prediction block walker can go away, since we'll
always predict the whole bsize in a single step. Implemented using a
block walker at this stage for SPLITMV, as a 4x4 "prediction block size"
within the BLOCK_SIZE_MB16X16 macroblock. It would also support other
rectangular sizes, if the blocks smaller than 16x16 remain
implemented as a SPLITMV-like thing. Just using 4x4 for now.
There's also a potential to combine with the foreach_transformed_block
walker if the logic for calculating the size of the subsampled
transform is made more straightforward, perhaps as a consequence of
supporting macroblocks smaller than 16x16. Will watch what happens there.
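A rough sketch of the walker idea (the visitor signature, parameter
names and iteration details here are illustrative, not the actual
foreach_predicted_block interface):

  typedef void (*predicted_block_visitor)(int plane, int block_idx,
                                          int pred_w, int pred_h, void *arg);

  /* Visit every prediction block of a plane in raster order.  For
   * whole-bsize prediction the plane is covered by a single visit; for
   * a SPLITMV-like mode the prediction block size is 4x4 and every
   * sub-block is visited. */
  static void foreach_predicted_block_in_plane(int plane,
                                               int plane_w, int plane_h,
                                               int pred_w, int pred_h,
                                               predicted_block_visitor visit,
                                               void *arg) {
    int i = 0, x, y;
    for (y = 0; y < plane_h; y += pred_h)
      for (x = 0; x < plane_w; x += pred_w)
        visit(plane, i++, pred_w, pred_h, arg);
  }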
Change-Id: Iddd9973398542216601b630c628b9b7fdee33fe2
Use in-place buffers (dst of MACROBLOCKD) for macroblock prediction.
This makes the macroblock buffer handling consistent with that of the
superblock. Remove the predictor buffer from MACROBLOCKD.
Change-Id: Id1bcd898961097b1e6230c10f0130753a59fc6df
Adds RD integration for 32x16, 16x32, 64x32 and 32x64 rectangular blocks.
Derf almost +0.6%, HD a little over +1.0%, STDHD +1.3%.
Change-Id: Id651fdb6a655fdbb5c47009757e63317acfb88a5
Enable recursive partition information coding from SB64X64 down to
MB16X16. The bit-stream syntax now supports rectangular block
sizes. It starts from SB64X64 and recursively describes the partition
type of the current block. If the partition type is PARTITION_NONE,
the block is coded as a single unit; if it is PARTITION_HORZ or
PARTITION_VERT, the block is segmented into two independently coded
rectangular units, with no further partition needed; otherwise (the
PARTITION_SPLIT case), the block is segmented into four square blocks,
each of which can potentially be further partitioned.
Forward adaptive probability modeling is used for the partition
information coding, conditioned on the current block size.
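The recursion can be pictured with a small sketch (entirely
illustrative: read_partition_type() and decode_block() stand in for the
real bit-stream reader and block decoder, and the exact set of levels at
which a partition symbol is coded is an assumption):

  typedef enum {
    PARTITION_NONE, PARTITION_HORZ, PARTITION_VERT, PARTITION_SPLIT
  } PARTITION_TYPE;

  /* Stand-ins for the real reader/decoder. */
  PARTITION_TYPE read_partition_type(int bsize);  /* model conditioned on bsize */
  void decode_block(int row, int col, int w, int h);

  /* Recursively read partition information from 64x64 down to 16x16. */
  static void decode_partition(int row, int col, int bsize) {
    const int h = bsize / 2;
    const PARTITION_TYPE p =
        bsize > 16 ? read_partition_type(bsize) : PARTITION_NONE;
    switch (p) {
      case PARTITION_NONE:               /* coded as a single unit */
        decode_block(row, col, bsize, bsize);
        break;
      case PARTITION_HORZ:               /* two bsize x bsize/2 units */
        decode_block(row, col, bsize, h);
        decode_block(row + h, col, bsize, h);
        break;
      case PARTITION_VERT:               /* two bsize/2 x bsize units */
        decode_block(row, col, h, bsize);
        decode_block(row, col + h, h, bsize);
        break;
      case PARTITION_SPLIT:              /* four squares, each may split again */
        decode_partition(row, col, h);
        decode_partition(row, col + h, h);
        decode_partition(row + h, col, h);
        decode_partition(row + h, col + h, h);
        break;
    }
  }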
Change-Id: I499365fb547839d555498e3bcc0387d8a3587d87
This function is now called from the code that configures the ARNR
filter, so it belongs with the other temporal filter functions.
Change-Id: I64211875918364b5b8edfb97743e573c6def1663