/*
* Copyright (c) 2010 The WebM project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef VP9_COMMON_VP9_ENTROPY_H_
#define VP9_COMMON_VP9_ENTROPY_H_
#include "vpx/vpx_integer.h"
#include "vp9/common/vp9_treecoder.h"
#include "vp9/common/vp9_blockd.h"
#include "vp9/common/vp9_common.h"
/* Coefficient token alphabet */
#define ZERO_TOKEN          0   /* 0      Extra Bits 0+0  */
#define ONE_TOKEN           1   /* 1      Extra Bits 0+1  */
#define TWO_TOKEN           2   /* 2      Extra Bits 0+1  */
#define THREE_TOKEN         3   /* 3      Extra Bits 0+1  */
#define FOUR_TOKEN          4   /* 4      Extra Bits 0+1  */
#define DCT_VAL_CATEGORY1   5   /* 5-6    Extra Bits 1+1  */
#define DCT_VAL_CATEGORY2   6   /* 7-10   Extra Bits 2+1  */
#define DCT_VAL_CATEGORY3   7   /* 11-18  Extra Bits 3+1  */
#define DCT_VAL_CATEGORY4   8   /* 19-34  Extra Bits 4+1  */
#define DCT_VAL_CATEGORY5   9   /* 35-66  Extra Bits 5+1  */
#define DCT_VAL_CATEGORY6  10   /* 67+    Extra Bits 14+1 */
#define DCT_EOB_TOKEN      11   /* EOB    Extra Bits 0+0  */
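
/* Illustrative sketch (not part of the original header): map an absolute
 * coefficient magnitude to its token, following the value ranges documented
 * in the comments above. The helper name is hypothetical. */
static INLINE int example_token_for_magnitude(int abs_value) {
  if (abs_value <= 4) return abs_value;           /* ZERO..FOUR, literal */
  if (abs_value <= 6) return DCT_VAL_CATEGORY1;   /* 1 extra magnitude bit */
  if (abs_value <= 10) return DCT_VAL_CATEGORY2;  /* 2 extra magnitude bits */
  if (abs_value <= 18) return DCT_VAL_CATEGORY3;  /* 3 extra magnitude bits */
  if (abs_value <= 34) return DCT_VAL_CATEGORY4;  /* 4 extra magnitude bits */
  if (abs_value <= 66) return DCT_VAL_CATEGORY5;  /* 5 extra magnitude bits */
  return DCT_VAL_CATEGORY6;                       /* 14 extra magnitude bits */
}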
#define MAX_ENTROPY_TOKENS 12
#define ENTROPY_NODES 11
#define EOSB_TOKEN 127 /* Not signalled, encoder only */
#define INTER_MODE_CONTEXTS 7
extern const vp9_tree_index vp9_coef_tree[];
#define DCT_EOB_MODEL_TOKEN 3 /* EOB Extra Bits 0+0 */
extern const vp9_tree_index vp9_coefmodel_tree[];
extern struct vp9_token vp9_coef_encodings[MAX_ENTROPY_TOKENS];
typedef struct {
vp9_tree_p tree;
const vp9_prob *prob;
int len;
int base_val;
} vp9_extra_bit;
extern const vp9_extra_bit vp9_extra_bits[MAX_ENTROPY_TOKENS];  /* indexed by token value */
#define MAX_PROB 255
#define DCT_MAX_VALUE 16384
/* Coefficients are predicted via a 3-dimensional probability table. */
/* Outside dimension. 0 = Y with DC, 1 = UV */
#define BLOCK_TYPES 2
#define REF_TYPES 2 // intra=0, inter=1
/* Middle dimension reflects the coefficient position within the transform. */
#define COEF_BANDS 6
/* Inside dimension is a measure of nearby complexity, reflecting the energy
   of nearby nonzero coefficients. For the first coefficient (DC, unless
   block type is 0), we look at the (already encoded) blocks above and to the
   left of the current block. The context index is then the number (0, 1, or
   2) of these blocks having nonzero coefficients.
   After decoding a coefficient, the measure is determined by the size of the
   most recently decoded coefficient.

   Note that the intuitive meaning of this measure changes as coefficients
   are decoded, e.g., prior to the first token, a zero means that my neighbors
   are empty while, after the first token, because of the use of end-of-block,
   a zero means we just decoded a zero and hence guarantees that a nonzero
   coefficient will appear later in this block. However, this shift in
   meaning is perfectly OK because our context depends also on the
   coefficient band (and since zigzag positions 0, 1, and 2 are in
   distinct bands). */
#define PREV_COEF_CONTEXTS 6
// #define ENTROPY_STATS
typedef unsigned int vp9_coeff_count[REF_TYPES][COEF_BANDS][PREV_COEF_CONTEXTS]
[MAX_ENTROPY_TOKENS];
typedef unsigned int vp9_coeff_stats[REF_TYPES][COEF_BANDS][PREV_COEF_CONTEXTS]
[ENTROPY_NODES][2];
typedef vp9_prob vp9_coeff_probs[REF_TYPES][COEF_BANDS][PREV_COEF_CONTEXTS]
[ENTROPY_NODES];
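
/* Illustrative sketch (not part of the original header): read one node
 * probability from a full table; 'probs' points at the table for one block
 * type (Y or UV) and the parameter names are hypothetical. */
static INLINE vp9_prob example_coef_prob(const vp9_coeff_probs *probs,
                                         int ref, int band, int ctx,
                                         int node) {
  return (*probs)[ref][band][ctx][node];
}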
#define SUBEXP_PARAM 4 /* Subexponential code parameter */
#define MODULUS_PARAM 13 /* Modulus parameter */
struct VP9Common;
void vp9_default_coef_probs(struct VP9Common *cm);
extern DECLARE_ALIGNED(16, const int16_t, vp9_default_scan_4x4[16]);
extern DECLARE_ALIGNED(16, const int16_t, vp9_col_scan_4x4[16]);
extern DECLARE_ALIGNED(16, const int16_t, vp9_row_scan_4x4[16]);
extern DECLARE_ALIGNED(16, const int16_t, vp9_default_scan_8x8[64]);
extern DECLARE_ALIGNED(16, const int16_t, vp9_col_scan_8x8[64]);
extern DECLARE_ALIGNED(16, const int16_t, vp9_row_scan_8x8[64]);
extern DECLARE_ALIGNED(16, const int16_t, vp9_default_scan_16x16[256]);
extern DECLARE_ALIGNED(16, const int16_t, vp9_col_scan_16x16[256]);
extern DECLARE_ALIGNED(16, const int16_t, vp9_row_scan_16x16[256]);
extern DECLARE_ALIGNED(16, const int16_t, vp9_default_scan_32x32[1024]);
extern DECLARE_ALIGNED(16, int16_t, vp9_default_iscan_4x4[16]);
extern DECLARE_ALIGNED(16, int16_t, vp9_col_iscan_4x4[16]);
extern DECLARE_ALIGNED(16, int16_t, vp9_row_iscan_4x4[16]);
extern DECLARE_ALIGNED(16, int16_t, vp9_default_iscan_8x8[64]);
extern DECLARE_ALIGNED(16, int16_t, vp9_col_iscan_8x8[64]);
extern DECLARE_ALIGNED(16, int16_t, vp9_row_iscan_8x8[64]);
extern DECLARE_ALIGNED(16, int16_t, vp9_default_iscan_16x16[256]);
extern DECLARE_ALIGNED(16, int16_t, vp9_col_iscan_16x16[256]);
extern DECLARE_ALIGNED(16, int16_t, vp9_row_iscan_16x16[256]);
extern DECLARE_ALIGNED(16, int16_t, vp9_default_iscan_32x32[1024]);
#define MAX_NEIGHBORS 2
extern DECLARE_ALIGNED(16, int16_t,
vp9_default_scan_4x4_neighbors[17 * MAX_NEIGHBORS]);
extern DECLARE_ALIGNED(16, int16_t,
vp9_col_scan_4x4_neighbors[17 * MAX_NEIGHBORS]);
extern DECLARE_ALIGNED(16, int16_t,
vp9_row_scan_4x4_neighbors[17 * MAX_NEIGHBORS]);
extern DECLARE_ALIGNED(16, int16_t,
vp9_col_scan_8x8_neighbors[65 * MAX_NEIGHBORS]);
extern DECLARE_ALIGNED(16, int16_t,
vp9_row_scan_8x8_neighbors[65 * MAX_NEIGHBORS]);
extern DECLARE_ALIGNED(16, int16_t,
vp9_default_scan_8x8_neighbors[65 * MAX_NEIGHBORS]);
extern DECLARE_ALIGNED(16, int16_t,
vp9_col_scan_16x16_neighbors[257 * MAX_NEIGHBORS]);
extern DECLARE_ALIGNED(16, int16_t,
vp9_row_scan_16x16_neighbors[257 * MAX_NEIGHBORS]);
extern DECLARE_ALIGNED(16, int16_t,
vp9_default_scan_16x16_neighbors[257 * MAX_NEIGHBORS]);
extern DECLARE_ALIGNED(16, int16_t,
vp9_default_scan_32x32_neighbors[1025 * MAX_NEIGHBORS]);
void vp9_coef_tree_initialize(void);
void vp9_adapt_coef_probs(struct VP9Common *cm);
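
/* Clears the above/left entropy contexts covered by the block, so a skipped
 * block leaves an all-zero coefficient context behind it. */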
static INLINE void reset_skip_context(MACROBLOCKD *xd, BLOCK_SIZE bsize) {
int i;
for (i = 0; i < MAX_MB_PLANE; i++) {
struct macroblockd_plane *const pd = &xd->plane[i];
const BLOCK_SIZE plane_bsize = get_plane_block_size(bsize, pd);
vpx_memset(pd->above_context, 0, sizeof(ENTROPY_CONTEXT) *
num_4x4_blocks_wide_lookup[plane_bsize]);
vpx_memset(pd->left_context, 0, sizeof(ENTROPY_CONTEXT) *
num_4x4_blocks_high_lookup[plane_bsize]);
}
}
// This is the index in the scan order beyond which all coefficients for
// 8x8 transforms and above fall in the top band.
// For 4x4 blocks the index is smaller, but to keep the lookup uniform the
// 4x4 table is padded out to this index.
#define MAXBAND_INDEX 21
extern const uint8_t vp9_coefband_trans_8x8plus[MAXBAND_INDEX + 1];
extern const uint8_t vp9_coefband_trans_4x4[MAXBAND_INDEX + 1];
static int get_coef_band(const uint8_t *band_translate, int coef_index) {
  return (coef_index > MAXBAND_INDEX) ? (COEF_BANDS - 1)
                                      : band_translate[coef_index];
}
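
/* The context is the rounded-up average of the token_cache entries of the
 * (up to two) already-coded neighbors of scan position c. */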
static INLINE int get_coef_context(const int16_t *neighbors,
uint8_t *token_cache,
int c) {
return (1 + token_cache[neighbors[MAX_NEIGHBORS * c + 0]] +
token_cache[neighbors[MAX_NEIGHBORS * c + 1]]) >> 1;
}
const int16_t *vp9_get_coef_neighbors_handle(const int16_t *scan);
// 128 lists of probabilities are stored, one for each of the following
// ONE-node probabilities: 1, 3, 5, 7, ..., 253, 255.
// In-between probabilities are interpolated linearly.
#define COEFPROB_MODELS 128
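
/* Illustrative sketch (not part of the original header): codebook index for
 * an odd pivot probability p in [1, 255], assuming entry i of the codebook
 * corresponds to probability 2 * i + 1; even probabilities fall between two
 * entries and are interpolated linearly, as described above. */
static INLINE int example_coefprob_model_index(vp9_prob p) {
  return (p - 1) >> 1;  /* 1 -> 0, 3 -> 1, ..., 255 -> 127 */
}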
#define UNCONSTRAINED_NODES 3
#define PIVOT_NODE 2 // which node is pivot
Modeling default coef probs with distribution Replaces the default tables for single coefficient magnitudes with those obtained from an appropriate distribution. The EOB node is left unchanged. The model is represeted as a 256-size codebook where the index corresponds to the probability of the Zero or the One node. Two variations are implemented corresponding to whether the Zero node or the One-node is used as the peg. The main advantage is that the default prob tables will become considerably smaller and manageable. Besides there is substantially less risk of over-fitting for a training set. Various distributions are tried and the one that gives the best results is the family of Generalized Gaussian distributions with shape parameter 0.75. The results are within about 0.2% of fully trained tables for the Zero peg variant, and within 0.1% of the One peg variant. The forward updates are optionally (controlled by a macro) model-based, i.e. restricted to only convey probabilities from the codebook. Backward updates can also be optionally (controlled by another macro) model-based, but is turned off by default. Currently model-based forward updates work about the same as unconstrained updates, but there is a drop in performance with backward-updates being model based. The model based approach also allows the probabilities for the key frames to be adjusted from the defaults based on the base_qindex of the frame. Currently the adjustment function is a placeholder that adjusts the prob of EOB and Zero node from the nominal one at higher quality (lower qindex) or lower quality (higher qindex) ends of the range. The rest of the probabilities are then derived based on the model from the adjusted prob of zero. Change-Id: Iae050f3cbcc6d8b3f204e8dc395ae47b3b2192c9
2013-03-13 19:03:17 +01:00
typedef vp9_prob vp9_coeff_probs_model[REF_TYPES][COEF_BANDS]
[PREV_COEF_CONTEXTS]
[UNCONSTRAINED_NODES];
typedef unsigned int vp9_coeff_count_model[REF_TYPES][COEF_BANDS]
[PREV_COEF_CONTEXTS]
[UNCONSTRAINED_NODES + 1];
typedef unsigned int vp9_coeff_stats_model[REF_TYPES][COEF_BANDS]
[PREV_COEF_CONTEXTS]
[UNCONSTRAINED_NODES][2];
void vp9_model_to_full_probs(const vp9_prob *model, vp9_prob *full);
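
/* Usage sketch (illustrative, not part of the original header): expand one
 * model entry into the full set of node probabilities; the nodes beyond the
 * stored UNCONSTRAINED_NODES are derived from the pivot probability. */
static INLINE void example_expand_model(
    const vp9_prob model[UNCONSTRAINED_NODES],
    vp9_prob full[ENTROPY_NODES]) {
  vp9_model_to_full_probs(model, full);
}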
static INLINE const int16_t* get_scan_4x4(TX_TYPE tx_type) {
switch (tx_type) {
case ADST_DCT:
return vp9_row_scan_4x4;
case DCT_ADST:
return vp9_col_scan_4x4;
default:
return vp9_default_scan_4x4;
}
}
static INLINE void get_scan_nb_4x4(TX_TYPE tx_type,
const int16_t **scan, const int16_t **nb) {
switch (tx_type) {
case ADST_DCT:
*scan = vp9_row_scan_4x4;
*nb = vp9_row_scan_4x4_neighbors;
break;
case DCT_ADST:
*scan = vp9_col_scan_4x4;
*nb = vp9_col_scan_4x4_neighbors;
break;
default:
*scan = vp9_default_scan_4x4;
*nb = vp9_default_scan_4x4_neighbors;
break;
}
}
static INLINE const int16_t* get_iscan_4x4(TX_TYPE tx_type) {
switch (tx_type) {
case ADST_DCT:
return vp9_row_iscan_4x4;
case DCT_ADST:
return vp9_col_iscan_4x4;
default:
return vp9_default_iscan_4x4;
}
}
static INLINE const int16_t* get_scan_8x8(TX_TYPE tx_type) {
switch (tx_type) {
case ADST_DCT:
return vp9_row_scan_8x8;
case DCT_ADST:
return vp9_col_scan_8x8;
default:
return vp9_default_scan_8x8;
}
}
static INLINE void get_scan_nb_8x8(TX_TYPE tx_type,
const int16_t **scan, const int16_t **nb) {
switch (tx_type) {
case ADST_DCT:
*scan = vp9_row_scan_8x8;
*nb = vp9_row_scan_8x8_neighbors;
break;
case DCT_ADST:
*scan = vp9_col_scan_8x8;
*nb = vp9_col_scan_8x8_neighbors;
break;
default:
*scan = vp9_default_scan_8x8;
*nb = vp9_default_scan_8x8_neighbors;
break;
}
}
static INLINE const int16_t* get_iscan_8x8(TX_TYPE tx_type) {
switch (tx_type) {
case ADST_DCT:
return vp9_row_iscan_8x8;
case DCT_ADST:
return vp9_col_iscan_8x8;
default:
return vp9_default_iscan_8x8;
}
}
static INLINE const int16_t* get_scan_16x16(TX_TYPE tx_type) {
switch (tx_type) {
case ADST_DCT:
return vp9_row_scan_16x16;
case DCT_ADST:
return vp9_col_scan_16x16;
default:
return vp9_default_scan_16x16;
}
}
static INLINE void get_scan_nb_16x16(TX_TYPE tx_type,
const int16_t **scan, const int16_t **nb) {
switch (tx_type) {
case ADST_DCT:
*scan = vp9_row_scan_16x16;
*nb = vp9_row_scan_16x16_neighbors;
break;
case DCT_ADST:
*scan = vp9_col_scan_16x16;
*nb = vp9_col_scan_16x16_neighbors;
break;
default:
*scan = vp9_default_scan_16x16;
*nb = vp9_default_scan_16x16_neighbors;
break;
}
}
static INLINE const int16_t* get_iscan_16x16(TX_TYPE tx_type) {
switch (tx_type) {
case ADST_DCT:
return vp9_row_iscan_16x16;
case DCT_ADST:
return vp9_col_iscan_16x16;
default:
return vp9_default_iscan_16x16;
}
}
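
/* For transforms larger than 4x4, the per-4x4 context bytes are read as one
 * 2-, 4-, or 8-byte word so that a single test covers all the 4x4 positions
 * spanned by the transform. */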
static int get_entropy_context(TX_SIZE tx_size,
ENTROPY_CONTEXT *a, ENTROPY_CONTEXT *l) {
ENTROPY_CONTEXT above_ec = 0, left_ec = 0;
switch (tx_size) {
case TX_4X4:
above_ec = a[0] != 0;
left_ec = l[0] != 0;
break;
case TX_8X8:
above_ec = !!*(uint16_t *)a;
left_ec = !!*(uint16_t *)l;
break;
case TX_16X16:
above_ec = !!*(uint32_t *)a;
left_ec = !!*(uint32_t *)l;
break;
case TX_32X32:
above_ec = !!*(uint64_t *)a;
left_ec = !!*(uint64_t *)l;
break;
default:
assert(!"Invalid transform size.");
}
return combine_entropy_contexts(above_ec, left_ec);
}
static void get_scan_and_band(const MACROBLOCKD *xd, TX_SIZE tx_size,
PLANE_TYPE type, int block_idx,
const int16_t **scan,
const uint8_t **band_translate) {
switch (tx_size) {
case TX_4X4:
*scan = get_scan_4x4(get_tx_type_4x4(type, xd, block_idx));
*band_translate = vp9_coefband_trans_4x4;
break;
case TX_8X8:
*scan = get_scan_8x8(get_tx_type_8x8(type, xd));
*band_translate = vp9_coefband_trans_8x8plus;
break;
case TX_16X16:
*scan = get_scan_16x16(get_tx_type_16x16(type, xd));
*band_translate = vp9_coefband_trans_8x8plus;
break;
case TX_32X32:
*scan = vp9_default_scan_32x32;
*band_translate = vp9_coefband_trans_8x8plus;
break;
default:
assert(!"Invalid transform size.");
}
}
enum { VP9_COEF_UPDATE_PROB = 252 };
#endif // VP9_COMMON_VP9_ENTROPY_H_