Compare commits

...

149 Commits

Author SHA1 Message Date
John Koleszar
297dc90255 Update CHANGELOG for v1.1.0 (Eider) release
Change-Id: Ic429556e76bcc4f96a34e18835a153b07fe410a2
2012-05-08 16:14:00 -07:00
John Koleszar
499510f2be Update AUTHORS
Change-Id: Id9dd1802ae597a39eb46d2cbe531d5b04bd0a1c5
2012-05-08 15:01:35 -07:00
John Koleszar
2d6cb342c2 Update .mailmap
Change-Id: If3a9958f6e2a466c631972316c3a49c253cbd9c2
2012-05-08 15:01:35 -07:00
John Koleszar
c8f4c187b3 Use consistent range for VP8E_SET_NOISE_SENSITIVITY
Accept the same range of inputs for the VP8E_SET_NOISE_SENSITIVITY
control, regardless of whether temporal denoising is enabled or not.
This is important for maintaining compatibility with existing
applications.

Change-Id: I94cd4bb09bf7c803516701a394cf1a63bfec0097
2012-05-08 15:01:24 -07:00
John Koleszar
14d827f44e fix vp8_ namespace issues
Make functions only referenced from one translation unit static. Other
symbols with extern linkage get a vp8/vpx prefix.

Change-Id: I928c7e0d0d36e89ac78cb54ff8bb28748727834f
2012-05-04 12:24:04 -07:00
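The two linkage fixes described in the commit above can be sketched as follows; the function names here are invented for illustration and are not libvpx's:

```c
/* Sketch only: names below are made up, not actual libvpx symbols.
 * A function referenced from a single translation unit gets internal
 * linkage, so it cannot collide with anything outside this file: */
static int cost_token(int t) {
    return t * 2;  /* placeholder body */
}

/* A function that must remain visible to other translation units keeps
 * extern linkage but gains a library prefix to stay out of the global
 * namespace: */
int vp8_example_cost_token(int t) {
    return cost_token(t);
}
```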
John Koleszar
22f56b93e5 Formalize encodeframe.c forward declarations
Change If4321cc5 fixed a bug caused by forward declarations not being
kept in sync across C files, resulting in a function call with the
wrong arguments. The commit moves the affected function declarations
into a header file, along with the other symbols from encodeframe.c
that were being sloppily shared.

Change-Id: I76a7b4c66d4fe175f9cbef7e52148655e4bb9ba1
2012-05-04 10:44:47 -07:00
Attila Nagy
3e32105d63 Fix multi-resolution threaded encoding
mb_row and mb_col were not passed to vp8cx_encode_inter_macroblock in
threaded encoding.

Change-Id: If4321cc59bf91e991aa31e772f882ed5f2bbb201
2012-05-04 10:44:46 -07:00
John Koleszar
2bf8fb5889 remove deprecated pre-v0.9.0 API
Remove a bunch of compatibility code dating back to before the initial
libvpx release.

Change-Id: Ie50b81e7d665955bec3d692cd6521c9583e85ca3
2012-05-04 10:44:46 -07:00
Attila Nagy
f039a85fd8 Make global data const
Removes all runtime initialization of global data. This commit is a
squashed version of the following series cherry-picked from master.
This is necessary because of a change that was merged to the tester
that depends on the scaler being moved to the RTCD framework, which
is a worthwhile thing to include in Eider anyway.

  - a91b42f02 Makes all global data in entropy.c const
  - b35a0db0e Makes all global data in tokenize.c const
  - 441cac8ea Makes all mode token tables const
  - 5948a0210 Ports vpx_xcaler to new RTCD method
  - 317d4244c Makes all mode token tables const part 2

Change-Id: Ifeaea24df2b731e7c509fa6c6ef6891a374afc26
2012-05-04 10:42:21 -07:00
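The kind of change squashed in the series above can be illustrated with a made-up table (not actual libvpx data): instead of filling a global array from an init function at runtime, the values become a compile-time const initializer, so the data can live in read-only storage and needs no startup call:

```c
/* Before (illustrative): runtime-initialized global, needs an init call.
 *   int mode_costs[4];
 *   void init_mode_costs(void) { mode_costs[0] = 1; ... }
 *
 * After: const data, initialized at compile time, placed in .rodata. */
static const int example_mode_costs[4] = { 1, 4, 4, 5 };

int example_mode_cost(int mode) {
    return example_mode_costs[mode & 3];  /* mask keeps the index in range */
}
```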
John Koleszar
9f9cc8fe71 Merge "Add VPX_TS_ prefix to MAX_LAYERS, MAX_PERIODICITY" into eider 2012-05-03 09:40:50 -07:00
John Koleszar
d8216b19b6 Merge "Fix compiler warnings" into eider 2012-05-02 16:22:34 -07:00
John Koleszar
d46ddd0839 Add VPX_TS_ prefix to MAX_LAYERS, MAX_PERIODICITY
The prior names are preserved for compatibility and will be removed in the future.

Change-Id: I8773f959ebce72f60168a2972f7a8ffe6642b9b2
2012-05-02 16:21:52 -07:00
Timothy B. Terriberry
8b1a14d12f Add support for native Solaris compiler on x86.
Original patch by Ginn Chen <ginn.chen@oracle.com> against libvpx
 v0.9.0.
I've forward-ported it to the current version (which mostly
 involved removing hunks that were no longer relevant), since I've
 given up on getting Ginn to submit this upstream himself.

Change-Id: I403c757c831c78d820ebcfe417e717b470a1d022
2012-05-02 10:36:17 -07:00
Timothy B. Terriberry
e50c842755 Fix TEXTRELs in the ARM asm.
Besides imposing a performance penalty at startup in most
 configurations, these relocations break the dynamic linker for
 native Fennec, since it does not support them at all.

Change-Id: Id5dc768609354ebb4379966eb61a7313e6fd18de
2012-05-02 10:36:01 -07:00
Timothy B. Terriberry
22ae1403e9 Fix trailing commas in enums.
These are warnings in most builds, but show up as compile errors on
 some platforms when these headers are included from C++ code.

Change-Id: I6c523b4dbbc699075fe73830442b51922e5a61d5
2012-05-02 10:35:28 -07:00
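A minimal illustration of the problem (the enum names here are invented): C89 and pre-C++11 compilers reject a comma after the last enumerator, so headers shared with C++ code must drop it:

```c
/* A comma after the final enumerator is fine in C99 but is rejected or
 * warned about by C89 and pre-C++11 compilers when the header is
 * included from C++ code:
 *
 *   enum bad_flags { BAD_A, BAD_B, };   <- trailing comma: non-portable
 */
enum example_flags {
    EXAMPLE_A,
    EXAMPLE_B,
    EXAMPLE_C   /* no trailing comma: accepted everywhere */
};
```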
Attila Nagy
14c9fce8e4 Fix compiler warnings
Fix code for following warnings:
-Wimplicit-function-declaration
-Wuninitialized
-Wunused-but-set-variable
-Wunused-variable

Change-Id: I2be434f22fdecb903198e8b0711255b4c1a2947a
2012-05-02 10:57:57 +03:00
Johann
f2a6799cc9 Merge "Update paths for iOS 5.1" into eider 2012-05-01 10:13:03 -07:00
Johann
e918ed98d4 Update paths for iOS 5.1
These values can be overridden with some poorly documented and
overloaded options: --libc and --sdk-path

../libvpx/configure --target=armv7-darwin-gcc --sdk-path=/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer --libc=/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS5.1.sdk/

So for someone who still wants to build with the iOS 5 SDK, the last
part of the path should be iPhoneOS5.0.sdk

Change-Id: Ibe93d96ae828c619700dc3222983aa4c30456b88
2012-04-30 15:04:41 -07:00
Johann
e5cef5d5a6 Merge "Add target for OS X 10.8 Mountain Lion" into eider 2012-04-30 10:23:59 -07:00
Adrian Grange
faed00d844 Reset output frames counter for second pass
The frame counter was not being reset at the start of
the first pass.

Change-Id: I2ef7c6edf027e43f83f470c52cbcf95bf152e430
2012-04-27 15:14:50 -07:00
Johann
101c2bd1fa Add target for OS X 10.8 Mountain Lion
Also clarify universal build rules

Change-Id: I3b7352f81d5d5b3472420e89872038377c5c2697
2012-04-27 13:35:27 -07:00
John Koleszar
60b36abf85 Merge "Fix loopfilter race condition in multithreaded encoder" into eider 2012-04-26 16:07:20 -07:00
Ralph Giles
061a16d96e Have vpxenc use a default kf_max_rate of 5 seconds.
Rather than using the static default maximum keyframe spacing
provided by vpx_codec_enc_config_default() set the default
value to 5 times the frame rate. Five seconds is too long for
live streaming applications, but is a compromise between seek
efficiency and giving the encoder freedom to choose keyframe
locations.

The five second value is from James Zern's suggestion in
http://article.gmane.org/gmane.comp.multimedia.webm.user/2945

Change-Id: Ib7274dc248589c433c06e68ca07232e97f7ce17f
2012-04-25 17:11:05 -07:00
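Under the scheme described above, the keyframe interval is derived from the configured rational frame rate rather than taken from the static default; a sketch with an assumed helper name (not vpxenc's actual code):

```c
/* Sketch (helper name assumed): turn a rational frame rate num/den into
 * a keyframe interval covering five seconds' worth of frames, rounding
 * to the nearest whole frame. */
unsigned int kf_dist_from_fps(unsigned int num, unsigned int den) {
    return (unsigned int)((5ULL * num + den / 2) / den);
}
```

For 30000/1001 (NTSC, ~29.97 fps) this yields 150 frames, the same as for an even 30/1.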
John Koleszar
1b27e93cd1 vpxenc: validate rational arguments
Trap negative values and zero denominators at the point where they're
parsed.

Change-Id: I1ec9da5d4e95d3ef539860883041330ecec2f345
2012-04-25 17:11:05 -07:00
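The validation described above might look like this sketch (function name assumed; vpxenc's actual parser differs):

```c
#include <stdio.h>

/* Sketch (name assumed): parse a "num/den" argument and reject bad
 * values at the point of parsing, trapping negative values and zero
 * denominators before they reach the encoder config. */
int parse_positive_rational(const char *str, int *num, int *den) {
    if (sscanf(str, "%d/%d", num, den) != 2)
        return 0;                       /* malformed argument */
    if (*num <= 0 || *den <= 0)
        return 0;                       /* negative value or zero denominator */
    return 1;
}
```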
Attila Nagy
3939e85b26 Fix loopfilter race condition in multithreaded encoder
Race was introduced by https://gerrit.chromium.org/gerrit/15563.

If loopfilter related config params were changed between frames, or
after a KEY frame, there could be a mismatch between the encoder's and
decoder's reconstructed image. In the worst case, when frame buffers are
reallocated because of a size change, the loopfilter could
do an invalid data access (segmentation fault).

Fixes:
Sync with the loopfilter before applying any encoder changes in
vp8_change_config().
Moved the loopfilter synching to the top of
encode_frame_to_data_rate() so that it's done before any alteration of
the encoder.

Change-Id: Ide5245d2a2aeed78752de750c0110bc4b46f5b7b
2012-04-25 14:26:02 +03:00
John Koleszar
dba053898a Merge "Use LIBSUBDIR for vpx.pc." into eider 2012-04-23 12:11:08 -07:00
John Koleszar
504601bb14 Merge "multi-res: restore v1.0.0 API" into eider 2012-04-23 12:07:18 -07:00
Takanori MATSUURA
a7eea3e267 Use LIBSUBDIR for vpx.pc.
Change-Id: Ibba635696e8c23086e5d3c0e8452ab97c460d7d7
2012-04-20 15:11:46 -07:00
John Koleszar
d72c536ede multi-res: restore v1.0.0 API
Move the notion of 0 bitrate implying skip deeper into the codec,
rather than doing it at the multi-encoder API level. This preserves
v1.0.0 ABI compatibility, rather than forcing a bump to v2.0.0 over a
minor change. Also, this allows the case where the application can
selectively enable and disable the larger resolution(s) without having
to reinitialize the codec instance (for instance, if no target is
receiving the full resolution stream).

It's not clear how deep to push this check. It may be valuable to
allow the framerate adaptation code to run, for example. For now the
check is placed as early as possible for simplicity; this should be
reevaluated as the feature gains real use.

Change-Id: I371709b8c6b52185a1c71a166a131ecc244582f0
2012-04-20 11:39:42 -07:00
John Koleszar
8e858f90f3 vp8_change_config: don't force kf with spatial resampling
Look for changes in the codec's configured w/h instead of its active
w/h when forcing keyframes. Otherwise calls to vp8_change_config()
will force a keyframe when spatial resampling is active.

Change-Id: Ie0d20e70507004e714ad40b640aa5b434251eb32
2012-04-20 11:09:12 -07:00
John Koleszar
c311b3b3a9 rtcd: serialize function pointer initialization
Ensure that RTCD function pointers are set at most once, to silence
some data race warnings. Implementation provided for POSIX threads and
Win32, with the prior unsynchronized behavior left in place for other
platforms.

Change-Id: I65c5856df43ef67043b3d5f26ddafddd8fcb2f7e
2012-04-19 14:15:23 -07:00
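On the POSIX path, the serialization described above can be sketched with pthread_once(); the names and counter below are illustrative, not the actual libvpx implementation:

```c
#include <pthread.h>

int rtcd_init_count = 0;                 /* for illustration only */
static pthread_once_t rtcd_once = PTHREAD_ONCE_INIT;

/* Probe CPU features and fill in the function pointer table; runs at
 * most once no matter how many threads race into the codec. */
static void setup_rtcd_internal(void) {
    ++rtcd_init_count;
    /* ...assign SIMD or C function pointers here... */
}

void example_rtcd(void) {
    pthread_once(&rtcd_once, setup_rtcd_internal);
}
```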
John Koleszar
21173e1999 correct alt-ref contribution to frame rate
When producing an invisible ARF, the time stamp counters aren't
updated since the last time stamp is seen by the codec twice. The
prior code was trapping this case with refresh_alt_ref, but this isn't
correct for other uses of the ARF. Instead, use the show_frame flag.

Change-Id: If67fff7c6c66a3606698e34e2fb5731f56b4a223
2012-04-16 12:23:33 -07:00
Johann
b5b61c179d FTFY: Check for astyle and version
Change-Id: I377387681332cfc975254cd825e4ad2998271690
2012-04-12 16:36:22 -07:00
Johann
0c261715b0 Merge "Use OFFSET_PATTERN from libs.mk" 2012-04-12 12:15:04 -07:00
Scott LaVarnway
3c5ed6f52e Merge "MB_MODE_INFO size reduction" 2012-04-12 12:03:45 -07:00
Scott LaVarnway
6dc21bce63 Merge "loopfilter improvements" 2012-04-12 12:02:24 -07:00
John Koleszar
87b12ac875 Merge "FTFY: fix syntax error" 2012-04-12 11:48:57 -07:00
Johann
72b7db36f3 Use OFFSET_PATTERN from libs.mk
Forestall possible issues with -ggdb3
https://gerrit.chromium.org/gerrit/16160
https://trac.macports.org/ticket/33285

Change-Id: Ied274f70004709800576a803afa91e1b0f6eb02b
2012-04-12 11:33:09 -07:00
Scott LaVarnway
e0a80519c7 loopfilter improvements
Local variable offsets are now consistent for the functions,
removed unused parameters, reworked the assembly to eliminate
stalls/instructions.

Change-Id: Iaa37668f8a9bb8754df435f6a51c3a08d547f879
2012-04-12 14:22:47 -04:00
John Koleszar
46da1cae05 FTFY: fix syntax error
Change-Id: I1952608479954c07f3556f96ea3de9118216bf27
2012-04-12 11:19:00 -07:00
Attila Nagy
e4dc2b9248 Fix vpx_rtcd.h dependency in Android.mk
Failed to build on Linux (as described in Android.mk) with NDK r7b.
Set vpx_rtcd.h dependency after libvpx sources are added to
LOCAL_SRC_FILES so that vpx_rtcd.h is generated before any libvpx file
is touched.

Change-Id: Ibe19d485ca9f679dc084044df0e3fb14587c4d3e
2012-04-12 10:15:53 +03:00
Deb Mukherjee
6b33ca395f Fixes to disable MFQE when there is motion.
This patch includes:
1. Fixes to disable block based temporal mixing when motion
is detected (because this version of mfqe only handles zero motion).
2. The criteria used for determining whether to mix or
not are changed to use squared differences rather than
absolute differences.
3. Additional checks on color mismatch and excessive block
flatness added. If the block as decoded has very low activity
it is unlikely to yield benefits for mixing.

Change-Id: I07331e5ab5ba64844c56e84b1a4b7de823eac6cb
2012-04-10 14:27:28 -07:00
Debargha Mukherjee
9aa58f3fcb Merge "MFQE: apply threshold to subblocks and chroma." 2012-04-10 13:29:27 -07:00
John Koleszar
d9ca52452b FTFY: only apply on modified files
Ignore renamed, copied, and deleted files when applying the style
rules.

Change-Id: I6102e34f833e5c2ef7a88d6d57bbfdca51b25d94
2012-04-03 17:42:52 -07:00
John Koleszar
8106df8f5a MFQE: apply threshold to subblocks and chroma.
In cases where you have a flat background occluded by a moving object
of similar luminosity in the foreground, it was likely that the
foreground blocks would persist for a few frames after the background
is uncovered. This is particularly noticeable when the object has a
different color than the background, so add the chroma planes in as an
additional check.

In addition, for block sizes of 8 and 16, the luma threshold is
applied on four subblocks independently, which helps when only part of
the background in the block has been uncovered.

This fixes issue #392, which includes a test clip to reproduce the
issue.

BUG=392

Change-Id: I2bd7b2b0e25e912dcac342e5ad6e8914f5afd302
2012-04-03 12:05:01 -07:00
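The per-subblock test described above can be sketched as follows; the function name and threshold semantics are assumptions, not the actual MFQE code:

```c
/* Sketch: instead of one luma check for the whole 16x16 block, require
 * every 8x8 quadrant to pass the threshold independently, so a block
 * whose background is only partially uncovered still fails the "mix"
 * test and gets refreshed. */
int all_subblocks_below(const unsigned int sub_diff[4], unsigned int thresh) {
    int i;
    for (i = 0; i < 4; ++i)
        if (sub_diff[i] >= thresh)
            return 0;   /* one quadrant changed too much: don't mix */
    return 1;
}
```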
Johann
c459d37c26 Allow disabling disabled codecs
When using 'make dist' after --disable-vp8[encoder|decoder] it would
fail to recognize the option. This would only occur when also specifying
--enable-install-docs and --enable-install-srcs but not
--enable-codec-srcs

Including vpx/ fixes builds with --enable-codec-srcs

vpx_timer.h is also required for vpxenc.c

Change-Id: Ie3e28b2f7ec7ee6d5961d3843f9eab869f79c35b
2012-04-02 15:57:28 -07:00
Johann
811d0ff209 Merge "Move variance and SAD RTCD definitions" 2012-04-02 11:31:06 -07:00
Johann
21ac3c8f26 Move variance and SAD RTCD definitions
When the functions were moved from encoder/ to common/ the RTCD file was
not updated.

Change-Id: I1c98715ed51adf1a95aa2492949d8552aec88d1f
2012-04-02 11:06:35 -07:00
Jim Bankoski
4ed05a8bb1 Merge "vp8 - compatibility warning added to changelog" 2012-04-02 11:05:38 -07:00
James Zern
00794a93ec tools/wrap-commit-msg.py: fix file truncation
truncate() operates from the current file pointer position. On Linux,
at least, specifying 0 without resetting the pointer will pad the file with
zeros to the current offset.

Change-Id: Ide704a1097f46c0c530f27212bb12e923f93e2d6
2012-03-29 18:13:27 -07:00
John Koleszar
8b15cf6929 Merge "remove unused BOOL_CODER::value" 2012-03-29 14:03:07 -07:00
John Koleszar
3df4436e1a Merge "FTFY: support wordwrapping commit messages" 2012-03-29 14:02:48 -07:00
John Koleszar
a46ec16569 FTFY: support wordwrapping commit messages
It's common for commit messages to be wrapped at odd places. git-gui
is often to blame. Adds support for automatically fixing up these
messages if running ftfy --amend, and adds a new option --msg-only for
fixing only the commit message.

Change-Id: Ia7ea529f8cb7395d34d9b39f1192598e9a1e315b
2012-03-29 14:01:51 -07:00
John Koleszar
3f8349467a remove unused BOOL_CODER::value
Change-Id: Ic7782707afed38c3ec7e996a4a11dc2d55226691
2012-03-29 13:56:48 -07:00
Scott LaVarnway
31322c5faa MB_MODE_INFO size reduction
Reduced the size of the struct by 8 bytes, which would be
a memory savings of 64800 bytes for 1080 resolutions.  Had
an extra byte, so created an is_4x4 for B_PRED or SPLITMV
modes.  This simplified the mode checks in
vp8_reset_mb_tokens_context and vp8_decode_mb_tokens.

Change-Id: Ibec27784139abdc34d4d01f73c09f43e9e10e0f5
2012-03-29 16:30:14 -04:00
Scott LaVarnway
a337725625 Updated vp8_build_intra_predictors_mby_s(sse2/ssse3)
to work with the latest code.

Patch Set 2: aligned the above_row buffers to fix crash

Change-Id: I7a6992a20ed079ccd302f8c26215cf3057f8b70c
2012-03-29 14:24:53 -04:00
Scott LaVarnway
b3151c80fc Merge "Updated vp8_build_intra_predictors_mbuv_s(sse2/ssse3)" 2012-03-29 11:22:22 -07:00
Scott LaVarnway
0799cccce3 Merge "Removed duplicate vp8_build_intra_predictors_mb y/uv" 2012-03-29 11:22:04 -07:00
John Koleszar
c88fc5b2f9 Merge "FTFY: an automated style corrector" 2012-03-28 17:47:24 -07:00
John Koleszar
cb265a497d FTFY: an automated style corrector
This is a utility for applying a limited amount of style correction on
a change-by-change basis. Rather than a big-bang reformatting, this
tool attempts to only correct the style in diff hunks that you touch.
This should make the cosmetic changes small enough that we can mix them
with functional changes without destroying the diffs, and there's an
escape hatch for separating the reformatting to a second commit for
purists and cases where it hurts readability.

At this time, the script requires a clean working tree, so run it after
you've committed your changes. Run without arguments, the style
corrections will be applied and left unstaged in your working copy. It
also supports the --amend option, which will automatically amend your
HEAD with the corrected style, and --commit, which will create a new
change dependent on your HEAD that contains only the whitespace changes.

There are a number of ways this could be applied in an automated manner
if this proves to be useful, either on a project-wide or per-user
basis. This doesn't buy anything in terms of real code quality; the
intent here would be to keep formatting nits out of review comments in
favor of more meaningful ones and help people whose habitual style
doesn't match the baseline.

Requires astyle[1] 1.24 or newer.
  [1]: http://astyle.sourceforge.net/

Change-Id: I2fb3434de8479655e9811f094029bb90e5d757e1
2012-03-28 14:05:56 -07:00
Scott LaVarnway
ccea000c4b Updated vp8_build_intra_predictors_mbuv_s(sse2/ssse3)
to work with the latest code.

Change-Id: Ie382bb55d00ea5929bdadba859eea15f696d4cd9
2012-03-26 13:40:14 -04:00
Scott LaVarnway
403966ae00 Removed duplicate vp8_build_intra_predictors_mb y/uv
Added y/uv stride as a parameter and removed the
duplicate code.

Change-Id: I019117a9dd9659a09d3d4e845d4814d3f33341b5
2012-03-26 13:40:14 -04:00
John Koleszar
bf2c903000 Merge "bug fix: fix mem leak error in vpxenc" 2012-03-26 10:15:56 -07:00
James Berry
85ef83a173 bug fix: fix mem leak error in vpxenc
Fixes a memory leak bug in vpxenc.

Change-Id: I3933026d16177947576c61ebf58f8c58147e4ba0
2012-03-26 12:55:00 -04:00
Jim Bankoski
1991123054 vp8 - compatibility warning added to changelog
Change-Id: Iac0daecfc7c8393cb4c798ca43b7fe300f56e55f
2012-03-23 13:59:59 -07:00
Scott LaVarnway
9e9f5f3d70 New vp8_decode_mb_tokens()
This new vp8_decode_mb_tokens() uses a modified version of
WebP's GetCoeffs function.  For now, the dequant does not
occur in GetCoeffs.
Tests showed performance improvements up to 2.5% depending
on material.

Change-Id: Ia24d78627e16ffee5eb4d777ee8379a9270f07c5
2012-03-23 15:50:08 -04:00
Deb Mukherjee
06dc2f6166 Initialize postproc buffer to resolve valgrind warnings
Change-Id: I9a7d40b0eac7200796dbe62e75776b2eb77dfdf6
2012-03-22 17:42:41 -07:00
Deb Mukherjee
66ba79f5fb Miscellaneous changes in mfqe and postproc modules
Adds logic to disable mfqe for the first frame after a configuration
change such as change in resolution. Also adds some missing
if CONFIG_POSTPROC macro checks.

Change-Id: If29053dad50b676bd29189ab7f9fe250eb5d30b3
2012-03-22 09:55:07 -07:00
James Berry
fd9df44a05 Merge "remove __inline for compiler compatibility" 2012-03-21 12:37:17 -07:00
James Berry
3c021e1d79 Merge "bug fix: remove inline from mfqe.c" 2012-03-21 12:36:40 -07:00
James Berry
451ab0c01e remove __inline for compiler compatibility
__inline removed for broader compiler compatibility

Change-Id: I6f2b218dfc808b73212bbb90c69e2b6cc1fa90ce
2012-03-21 14:11:10 -04:00
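A common way to keep such code buildable everywhere (this commit simply removed the keyword; the macro name below is invented, not libvpx's) is to hide the spelling behind a macro:

```c
/* MSVC's C mode historically accepted only __inline, while C99
 * compilers expect inline; a macro hides the difference so one source
 * tree builds with both.  The macro name is illustrative. */
#if defined(_MSC_VER)
#define EXAMPLE_INLINE __inline
#else
#define EXAMPLE_INLINE inline
#endif

static EXAMPLE_INLINE int clamp255(int v) {
    return v < 0 ? 0 : (v > 255 ? 255 : v);
}
```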
Yunqing Wang
bcee56bed5 Minor fix: add back a vpx_free call
Added back a vpx_free call that was mistakenly removed.

Change-Id: Ib662933a8697a4efb8534b5b9b762ee6c2777459
2012-03-21 14:09:33 -04:00
James Berry
921ffdd2c9 bug fix: remove inline from mfqe.c
remove inline from mfqe.c for VS
compatibility

Change-Id: I853f16503d285fcd41a1a12181d8745159156b5c
2012-03-21 13:42:58 -04:00
Yunqing Wang
9ed1b2f09e Merge "Add motion search skipping in first pass" 2012-03-16 14:02:51 -07:00
Yunqing Wang
6a819ce4fe Add motion search skipping in first pass
This change added a motion search skipping mechanism similar
to what we did in second pass. For a macroblock that is very
similar to the macroblock at same location on last frame,
we can set its mv to be zero, and skip motion search. This
improves first-pass performance for slide shows and video
conferencing clips with a slight PSNR loss.

Change-Id: Ic73f9ef5604270ddd6d433170091d20361dfe229
2012-03-16 15:59:00 -04:00
Johann
56e8485c84 darwin universal builds need BUILD_PFX
Universal builds create subdirectories for each target. Without
BUILD_PFX we only generated one vpx_rtcd.h instead of one for each.

Change-Id: I1caed4e018c8865ffc8da15e434cae2b96154fb4
2012-03-16 14:54:07 -04:00
John Koleszar
a05bf133ae Update XCode SDK search paths
Newer Xcode releases have moved the SDK path from /Developer/SDKs

Use a suggestion from jorgenisaksson@gmail.com to locate it

osx_sdk_dir is not required to be set. Apple now offers a set of
command line tools which do not require this. isysroot is also
not required in newer versions of Xcode, so only set it when we
are confident in the location.

There remain issues with the iOS configure steps which will be
addressed later

Change-Id: I4f5d7e35175d0dea84faaa6bfb52a0153c72f84b
2012-03-16 14:38:51 -04:00
Johann
e68953b7c8 Merge "RFC: Reorganize MFQE loops" 2012-03-16 11:35:18 -07:00
Scott LaVarnway
9aa2bd8a03 Merge "x_motion_minq table reduction" 2012-03-16 09:03:47 -07:00
James Zern
6b7cf3077d doxy: fix conditional usage, ref warnings
doxygen < 1.7.? seems to have been more tolerant of single line
\if/\endif

This change fixes warnings such as:
mainpage.dox:13: warning: unable to resolve reference to `vp8_encoder-'
for \ref command
vpx_decoder.h:193: warning: explicit link request to 'n' could not be
resolved

Change-Id: If3d04af5ede1b0d1e2c63021d0e4ac8f98db20b2
2012-03-15 16:51:51 -07:00
John Koleszar
eb0c5a6ffc Merge "Fix build under Estonian locale" 2012-03-14 12:00:44 -07:00
John Koleszar
422f97d7ab Merge "fix potential use of uninitialized rate_y" 2012-03-14 12:00:27 -07:00
Priit Laes
7af4eb014b Fix build under Estonian locale
Change-Id: Ifb536403ef302b597864eae1d05aa9e2bb15d4c7
2012-03-14 11:19:09 -07:00
John Koleszar
20cd3e6b8f fix potential use of uninitialized rate_y
This issue likely doesn't appear in the unmodified encoder, but
sufficient hacking on the mode selection loop can expose it.

Change-Id: I8a35831e8f08b549806d0c2c6900d42af883f78f
2012-03-14 10:10:30 -07:00
Jim Bankoski
6b66c01c88 Merge "Adds a motion compensated temporal denoiser to the encoder." 2012-03-13 16:18:57 -07:00
Stefan Holmer
9c41143d66 Adds a motion compensated temporal denoiser to the encoder.
Some refactoring in rdopt.c and pickinter.c.

Change-Id: I4f50020eb3313c37f4d441d708fedcaf219d3038
2012-03-13 15:33:50 -07:00
Jim Bankoski
e9cacfd66d Merge "Update for key frame target size setting." 2012-03-13 10:13:03 -07:00
Marco Paniconi
301409107f Update for key frame target size setting.
Set an initial/minimum boost level for the frame rate
factor of the key frame target size setting.

Change-Id: If2586f4ac76a1fa89378aa652a58607356a1f426
2012-03-12 16:23:08 -07:00
Johann
ddf94f6184 Merge "Move SAD and variance functions to common" 2012-03-12 15:08:48 -07:00
John Koleszar
c21f53a501 Merge "vpx_timer: increase resolution" 2012-03-09 14:48:06 -08:00
Scott LaVarnway
7a1590713e Merge changes I9c26870a,Ifabb0f67
* changes:
  threading.c refactoring
  Decoder loops refactoring
2012-03-09 10:48:11 -08:00
Scott LaVarnway
9ed874713f threading.c refactoring
Added recon above/left to MACROBLOCKD
Reworked decode_macroblock

Change-Id: I9c26870af75797134f410acbd02942065b3495c1
2012-03-08 15:27:41 -05:00
Yaowu Xu
676610d2a8 Merge "vp8e - RDLambda fix" 2012-03-07 11:53:04 -08:00
Johann
fd903902ef RFC: Reorganize MFQE loops
Break the MFQE code into its own file.

It is currently only valid for 16x16 and 8x8 Y blocks. It also filters
4x4 U/V blocks.

Refactor filtering and add associated assembly. Limited test cases show
--mfqe introduces a penalty of ~20% with HD content. The assembly
reduces the penalty to ~15%.

Change-Id: I4b8de6b5cdff5413037de5b6c42f437033ee55bf
2012-03-06 15:20:03 -08:00
Jim Bankoski
154b4b4196 vp8e - RDLambda fix
Last commit went the wrong way.

Change-Id: I5e47ee6c25b0893dfa84318229b93c57dfeec24e
2012-03-06 08:47:12 -08:00
Johann
953c6a011e Merge "include CHANGELOG in CODEC_SRCS" 2012-03-05 17:20:48 -08:00
Johann
e50f96a4a3 Move SAD and variance functions to common
The MFQE function of the postprocessor depends on these

Change-Id: I256a37c6de079fe92ce744b1f11e16526d06b50a
2012-03-05 16:50:33 -08:00
Johann
5d88a82aa0 include CHANGELOG in CODEC_SRCS
build/make/version.sh requires CHANGELOG to generate vpx_version.h.
The file is already included when building the documentation. However,
the documentation is not built if doxygen/php are not present.

This is necessary when using '--enable-install-srcs --enable-codec-srcs'
and 'make dist'

Change-Id: Icada883a056a4713d24934ea44e0f6969b68f9c2
2012-03-05 16:36:23 -08:00
Jim Bankoski
cbcfbe1b29 Merge "vp8e - fix coefficient costing" 2012-03-05 14:14:58 -08:00
Jim Bankoski
888699091d vp8e - fix coefficient costing
Coefficient costing failed to take account of the first branch
being skipped (0 vs eob) if the previous token is 0.

Fixed rd to account for the slightly increased token cost and cleaned up
a warning message.

Change-Id: I56140635d9f48a28dded5a816964e973a53975ef
2012-03-05 08:20:42 -08:00
Johann
87c40b35eb Fix encoder debug setting
Propagate debug setting to the EBML struct. When writing the application
name, this allows us to strip the version code and keep the output
metadata static.

Change-Id: I8e06c6abd743bedbff5af6242bbdae5d55754538
2012-03-01 16:12:53 -08:00
Jim Bankoski
a60461a340 Merge "vp8e - attempt to lessen blockiness" 2012-03-01 11:47:09 -08:00
Jim Bankoski
f0f609c2e2 Merge "vp8e - static key boost" 2012-03-01 11:23:10 -08:00
Jim Bankoski
91b5c98b3d Merge "vp8e - force at least some change in over and under shoots" 2012-03-01 11:22:52 -08:00
Paul Wilkins
8d07a97acc vp8e - static key boost
This seeks to boost the size of the keyframe if the entire section
is a single-frame clip.

Change-Id: I3c00268dc155b047dc4b90e514cf403d55a4f8ef
2012-03-01 10:39:41 -08:00
Paul Wilkins
6d84322762 vp8e - force at least some change in over and under shoots
Change-Id: Ie1796f272dc33bf5a1c8ac990da625961d272aa9
2012-03-01 10:35:22 -08:00
Scott LaVarnway
c34d91a84e Merge "Packing bitstream on-the-fly with delayed context updates" 2012-03-01 06:20:02 -08:00
Yunqing Wang
aabae97e57 vpxenc: fix time and fps calculation in 2-pass encoding
When we do 2-pass encoding, elapsed time is accumulated across the
whole 2-pass process, which gives incorrect time and fps results
for the second pass. This change fixed that by resetting the time
accumulator for the second pass.

Change-Id: Ie6cbf0d0e66e6874e7071305e253c6267529cf20
2012-02-29 15:44:56 -05:00
Attila Nagy
52cf4dcaea Packing bitstream on-the-fly with delayed context updates
Produce the token partitions on-the-fly, while processing each MB.
Context is updated at the beginning of each frame based on the
previous frame's counters. Optimally, the encoder outputs partitions in
separate buffers. For frame-based output, partitions are concatenated
internally.

Limitations:
    - enabled just in combination with realtime-only mode
    - number of encoding threads has to be equal to or less than the
    number of token partitions. For this reason, by default the encoder
    will do 8 token partitions.
    - vpxenc supports partition output (-P) just in combination with
    IVF output format (--ivf)

Performance:
    - Realtime encoder can be up to 13% faster (ARM) depending on the number
    of threads and bitrate settings. Constant gain over the 5-16 speed
    range.
    - Token buffer reduced from one frame to 8 MBs

Quality:
    - quality is affected by the delayed context updates. This again
    depends on input material, speed and bitrate settings. For VC
    style input the loss seen is up to 0.2dB. If error-resilient=2
    mode is used then the effect of this change is negligible.

Example:
./configure --enable-realtime-only --enable-onthefly-bitpacking
./vpxenc --rt --end-usage=1 --fps=30000/1000 -w 640 -h 480
--target-bitrate=1000 --token-parts=3 --static-thresh=2000
--ivf -P -t 4 -o strm.ivf tanya_640x480.yuv

Change-Id: I127295cb85b835fc287e1c0201a67e378d025d76
2012-02-29 12:13:37 -05:00
Jim Bankoski
b8fa2839a2 vp8e - attempt to lessen blockiness
applies a penalty to intra blocks in order to cut down on blockiness in
easy sections.

Change-Id: Ia9e5df16328b0bf01bf0f2e6e61abcb687316c12
2012-02-29 09:03:13 -08:00
Scott LaVarnway
2578b767c3 Decoder loops refactoring
Eliminated some mb branches along with other code cleanups.
This is part of an ongoing effort to remove cut/paste
code in the decoder.

Change-Id: Ifabb0f67cafa6922b5a0e89a0d03a9b34e9e5752
2012-02-29 10:38:14 -05:00
Scott LaVarnway
ce328b855f Merge changes Ifb450710,I61c4a132
* changes:
  Eliminated reconintra_mt.c
  Eliminated vp8mt_build_intra_predictors_mbuv_s
2012-02-28 11:42:45 -08:00
Scott LaVarnway
aab70f4d7a Merge "Removed duplicate code in threading.c" 2012-02-28 11:25:43 -08:00
Scott LaVarnway
bcba86e2e9 Eliminated reconintra_mt.c
Reworked the code to use vp8_build_intra_predictors_mby_s,
vp8_intra_prediction_down_copy, and vp8_intra4x4_predict_d_c
functions instead.  vp8_intra4x4_predict_d_c is a decoder-only
version of vp8_intra4x4_predict.  Future commits will fix this
code duplication.

Change-Id: Ifb4507103b7c83f8b94a872345191c49240154f5
2012-02-28 14:12:30 -05:00
Scott LaVarnway
9a4052a4ec Removed duplicate code in threading.c
Change-Id: Id7e44950ceda67b280e410e541510106ef02f1da
2012-02-28 14:00:32 -05:00
Yunqing Wang
b1bfd0ba87 Merge "Only do uv intra-mode evaluation when intra mode is checked" 2012-02-28 10:11:24 -08:00
Yunqing Wang
019384f2d3 Only do uv intra-mode evaluation when intra mode is checked
When we encode slide-show clips, for the majority of the time,
only ZEROMV mode is checked, and all other modes are skipped.
This change delayed uv intra-mode evaluation until intra mode is
actually checked. This gave a big performance gain for slide-show
video encoding (2nd pass gain: 18% to 28%). But, this change
doesn't help other types of videos.

Also, zbin_mode_boost is adjusted in mode-checking loop, which
causes bitstream mismatch before/after this change when --best
or --good with --cpu-used=0 are used.

Change-Id: I582b3e69fd384039994360e870e6e059c36a64cc
2012-02-28 13:08:17 -05:00
James Berry
e2c6b05f9a bugfix: use oxcf width/height for reinit check
Use oxcf instead of common in the check to reinit the
lookahead buffer if the frame size changes. The prior
behavior would cause an assertion failure/crash.

first observed in:
support changing resolution with vpx_codec_enc_config_set

Change-Id: Ib669916ca9b4f206d4cc3caab5107e49d39a36aa
2012-02-27 16:10:45 -05:00
Yunqing Wang
61c5e31ca1 Merge "Fix skippable evaluation in mode decision" 2012-02-27 11:06:13 -08:00
John Koleszar
ad1216151d Merge "vpxenc: initial implementation of multistream support" 2012-02-27 09:59:14 -08:00
John Koleszar
02a31e6b3c Merge "decoder: reset segmentation map on keyframes" 2012-02-27 09:58:29 -08:00
Yunqing Wang
84be08b07f Fix skippable evaluation in mode decision
Yaowu fixed the skippable evaluation by correcting the 2nd order
block's eob.

Change-Id: Id47930cbc74a90a046c0c0e324efb03477639ee0
2012-02-27 12:45:12 -05:00
James Berry
313bfbb6a2 Merge "Add unit tests for idctllm_test and idctllm_mmx" 2012-02-23 08:50:36 -08:00
Jim Bankoski
2089f26b08 Merge "Remove the frame rate factor for key frame size." 2012-02-23 08:38:44 -08:00
Marco Paniconi
507ee87e3e Remove the frame rate factor for key frame size.
When temporal layers is used (i.e., number_of_layers > 1),
we don't use the frame rate boost for setting the key
frame target size. The factor was forcing the target size to be
always at its minimum (2* per_frame_bandwidth) for low frame rates
(i.e., base layer frame rate).

Generally we should modify or remove this frame rate factor;
for now we turn it off for number_of_layers > 1.

Change-Id: Ia5acf406c9b2f634d30ac2473adc7b9bf2e7e6c6
2012-02-22 15:25:32 -08:00
Scott LaVarnway
f2bd11faa4 Eliminated vp8mt_build_intra_predictors_mbuv_s
Reworked the code to use vp8_build_intra_predictors_mbuv_s
instead.  This is WIP with the goal of eliminating all
functions in reconintra_mt.h

Change-Id: I61c4a132684544b24a38c4a90044597c6ec0dd52
2012-02-21 14:59:05 -05:00
James Berry
0c1cec2205 Add unit tests for idctllm_test and idctllm_mmx
add unit tests for vp8_short_idct4x4llm_c

Change-Id: I472b7c0baa365ba25dc99a3f6efccc816d27c941
2012-02-21 14:52:36 -05:00
John Koleszar
dadc9189ed Merge changes I0341554f,I64e110c8
* changes:
  Consolidate C version of token packing functions
  Multithreaded encoder, late sync loopfilter
2012-02-21 10:09:23 -08:00
Scott LaVarnway
f05feab7b9 Merge "Remove redundant init of segment_counts in vp8_encode_frame" 2012-02-21 09:51:02 -08:00
John Koleszar
02360dd2c2 Merge "Update encoder mb_skip_coeff and prob_skip_false calculation" 2012-02-21 09:48:26 -08:00
Johann
b0a12a2880 Refine offset pattern
When compiling with -ggdb3 the output includes an extraneous EQU from
vpx_ports/asm_offsets.h

https://trac.macports.org/ticket/33285

Change-Id: Iba93ddafec414c152b87001a7542e7a894781231
2012-02-17 12:28:13 -08:00
John Koleszar
b5ce9456db Merge changes Idf1a05f3,If227b29b,Iac784d39
* changes:
  vpxenc: factor out input open/close
  vpxenc: add warning()/fatal() helpers
  vpxenc: factor out global config options
2012-02-17 11:14:17 -08:00
Johann
e6047a17a9 Merge "OS X shell is incompatible with echo -n" 2012-02-17 10:53:19 -08:00
Yunqing Wang
f93b1e7be1 Merge "Fix incorrect use of uv eobs in intra modes" 2012-02-17 10:43:05 -08:00
Yunqing Wang
04b9e0d787 Fix incorrect use of uv eobs in intra modes
In vp8_rd_pick_inter_mode(), if the total of the eobs is zero, the
rate needs to be adjusted since there are no non-zero coefficients
to transmit. The uv intra eobs calculated in
rd_pick_intra_mbuv_mode() need to be saved before they are
overwritten by inter-mode eobs.

Change-Id: I41dd04fba912e8122ef95793d4d98a251bc60e58
2012-02-17 09:15:08 -05:00
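The shape of the fix, as a hedged Python sketch: snapshot the intra uv eobs before the inter search clobbers the shared array. The 16..24 slice follows the usual VP8 block layout (16 luma, 8 chroma, 1 second-order), but treat that layout as an assumption here:

```python
def save_uv_intra_eobs(mb_eobs):
    """Snapshot the uv eobs (blocks 16..23 in the usual VP8 layout)
    produced by the intra uv search, before the inter-mode loop
    overwrites mb_eobs, so the intra rate can still be adjusted
    correctly when the total eobs turn out to be zero."""
    return list(mb_eobs[16:24])
```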
Attila Nagy
ce42e79abc Update encoder mb_skip_coeff and prob_skip_false calculation
mode_info_context->mbmi.mb_skip_coeff must always reflect whether or
not a MB has coefficients; the loopfilter needs this info.
mb_skip_coeff is either set by vp8_tokenize_mb or has to be set to
1 when the MB is skipped by mode selection. This has to be done
regardless of the mb_no_coeff_skip value.

prob_skip_false is needed only when mb_no_coeff_skip is 1. There is
no need to keep count of both skip_false and skip_true, as they are
complementary (skip_true + skip_false = total_mbs).

Change-Id: I3c74c9a0ee37bec10de7bb796e408f3e77006813
2012-02-17 14:27:40 +02:00
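The bookkeeping the commit describes can be sketched in Python. The 255 scale mirrors VP8's 8-bit probabilities; the exact clamping and the neutral value for an empty frame are illustrative assumptions, not the literal C code:

```python
def prob_skip_false(skip_true, total_mbs):
    """Derive prob_skip_false from skip_true alone: skip_false is its
    complement (skip_true + skip_false == total_mbs), so only one
    counter needs to be maintained."""
    if total_mbs == 0:
        return 128  # neutral 8-bit probability (assumption)
    skip_false = total_mbs - skip_true  # complementary count
    p = skip_false * 255 // total_mbs
    return min(max(p, 1), 254)  # keep the probability non-degenerate
```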
Attila Nagy
565d0e6feb Remove redundant init of segment_counts in vp8_encode_frame
segment_counts was zero-initialized twice at the beginning of vp8_encode_frame.

Change-Id: Ibc29f6896dabd9aab1d0993f3941cf6876022e70
2012-02-17 09:51:24 +02:00
Johann
6b151d436d Clarify 'max_sad' usage
Depending on implementation the optimized SAD functions may return early
when the calculated SAD exceeds max_sad.

Change-Id: I05ce5b2d34e6d45fb3ec2a450aa99c4f3343bf3a
2012-02-16 15:17:44 -08:00
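A hedged sketch of the contract in Python: an optimized SAD may bail out once the running sum exceeds max_sad, so a caller must treat any return value above max_sad as "too big" rather than as the exact SAD:

```python
def sad_with_early_exit(src, ref, max_sad):
    """Sum of absolute differences with the early-out behaviour some
    optimized implementations have: once the partial sum exceeds
    max_sad, the exact value no longer matters and we may return."""
    sad = 0
    for s, r in zip(src, ref):
        sad += abs(s - r)
        if sad > max_sad:
            return sad  # only guaranteed to be > max_sad, not exact
    return sad
```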
Johann
5f0b303c28 OS X shell is incompatible with echo -n
The built-in echo in 'sh' on OS X does not support -n (exclude
trailing newline). It's not necessary, so just leave it off. Fixes
issue 390.

Build the include guard using 'symbol' so that it is more likely to
be unique.

Change-Id: I4bc6aa1fc5e02228f71c200214b5ee4a16d56b83
2012-02-16 14:20:44 -08:00
Fritz Koenig
3653fb473a Include path fix for building against Android NDK.
cpu-features.h is not in the common include paths; add it
to the cflags for Android.

Change-Id: Icbafc7600d72f6b59ffb030f6ab80ee6860332bb
2012-02-16 12:38:17 -08:00
John Koleszar
e8223bd250 decoder: reset segmentation map on keyframes
Refactoring some of the mode decoding logic introduced a bug where
the segmentation maps would not be properly reset on keyframes.

http://code.google.com/p/webm/issues/detail?id=378

The text of the bug is somewhat misleading as I initially read it to
imply the bug was present in v0.9.7-p1 (Cayuga), but note the text
"master", which indicates this was something subsequent. This issue
bisects back to v0.9.7-p1-84-ga99c20c, so unfortunately it was broken
during the Duclair release.

Thanks to Alexei Leonenko for investigating the root cause.

Change-Id: I9713c9f070eb37b31b3b029d9ef96be9b6ea2def
2012-02-16 12:22:18 -08:00
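The invariant being restored can be modeled in Python (field names are hypothetical; the real decoder stores the map in its per-frame state):

```python
def on_frame_header(state, is_keyframe):
    """Keyframes carry no segmentation history from prior frames, so
    the per-MB segmentation map must start from segment 0 on every
    keyframe; the refactor had dropped this reset."""
    if is_keyframe:
        state["seg_map"] = [0] * state["mb_count"]
    return state
```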
Makoto Kato
7989bb7fe7 Support Android x86 NDK build
On the Android NDK, rand() is an inline function, but our SSE
optimization needs a symbol for rand().

Change-Id: I42ab00e3255208ba95d7f9b9a8a3605ff58da8e1
2012-02-16 12:03:30 -08:00
Scott LaVarnway
6776bd62b5 Simplify mb_to_x_edge calculation during mode decoding
Change-Id: Ibcb35c32bf24c1d241090e24c5e2320e4d3ba901
2012-02-16 13:36:46 -05:00
Scott LaVarnway
a5879f7c81 Merge "decodemv cleanup/improvements" 2012-02-16 09:33:59 -08:00
Scott LaVarnway
12ee845ee7 decodemv cleanup/improvements
Removed unnecessary variables, unrolled functions, eliminated
unnecessary mv bounds checks and branches.

Change-Id: I02d034c70cd97b65025d59dd67c695e1db529f0b
2012-02-16 11:38:33 -05:00
Attila Nagy
d02e74a073 Consolidate C version of token packing functions
Replace inner loops of pack_mb_row_tokens_c and
pack_tokens_into_partitions_c with a call to pack_tokens_c.

Change-Id: I0341554fb154a14a5dadb63f8fc78010724c2c33
2012-02-16 14:11:28 +02:00
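The consolidation amounts to this shape (a Python stand-in; the function names mirror the C ones, everything else is illustrative):

```python
def pack_tokens(bitstream, tokens):
    # Stand-in for the real boolean coder writing tokens to the stream.
    bitstream.extend(tokens)

def pack_mb_row_tokens(bitstream, rows):
    # Previously duplicated pack_tokens' inner loop; now it just calls
    # the shared routine once per macroblock row.
    for row_tokens in rows:
        pack_tokens(bitstream, row_tokens)
```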
Attila Nagy
78071b3b97 Multithreaded encoder, late sync loopfilter
Second shot at this...

Sync with the loopfilter thread as late as possible, usually just at
the beginning of the next frame's encoding. This returns control to
the application faster and allows better multicore scaling.

When PSNR packets are generated, the final filtered frame is needed
immediately, so we cannot delay the sync. The same has to be done
when the internal frame is previewed.

Change-Id: I64e110c8b224dd967faefffd9c93dd8dbad4a5b5
2012-02-16 12:26:39 +02:00
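The deferral policy described above can be sketched as (hedged Python model; the join callback and return labels are illustrative, not the C threading code):

```python
def maybe_sync_loopfilter(join_loopfilter, need_filtered_frame_now):
    """Join the loopfilter thread only when the filtered frame is
    needed immediately (PSNR packet generation, internal frame
    preview); otherwise defer the sync until the next frame begins
    encoding, returning control to the application sooner."""
    if need_filtered_frame_now:
        join_loopfilter()
        return "synced"
    return "deferred"
```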
Fritz Koenig
8144132866 Fix rtcd build process for Android.mk
Add a dependency so ndk-build will
generate the needed vpx_rtcd.h file.

Change-Id: I92c82e0996943dd0403c9956e1ba60e92e2837a9
2012-02-15 15:23:04 -08:00
Scott LaVarnway
768ae275dc x_motion_minq table reduction
Reduced by 4080 bytes.

Change-Id: I037b55bc9684bf4a54bce238be00e8c4db3f643e
2012-02-09 16:49:34 -05:00
149 changed files with 6684 additions and 6465 deletions


@@ -5,3 +5,4 @@ Tom Finegan <tomfinegan@google.com>
Ralph Giles <giles@xiph.org> <giles@entropywave.com>
Ralph Giles <giles@xiph.org> <giles@mozilla.com>
Alpha Lam <hclam@google.com> <hclam@chromium.org>
Deb Mukherjee <debargha@google.com>


@@ -31,9 +31,11 @@ John Koleszar <jkoleszar@google.com>
Joshua Bleecher Snyder <josh@treelinelabs.com>
Justin Clift <justin@salasaga.org>
Justin Lebar <justin.lebar@gmail.com>
KO Myung-Hun <komh@chollian.net>
Lou Quillio <louquillio@google.com>
Luca Barbato <lu_zero@gentoo.org>
Makoto Kato <makoto.kt@gmail.com>
Marco Paniconi <marpan@google.com>
Martin Ettl <ettl.martin78@googlemail.com>
Michael Kohler <michaelkohler@live.com>
Mike Hommey <mhommey@mozilla.com>
@@ -43,6 +45,7 @@ Patrik Westin <patrik.westin@gmail.com>
Paul Wilkins <paulwilkins@google.com>
Pavol Rusnak <stick@gk2.sk>
Philip Jägenstedt <philipj@opera.com>
Priit Laes <plaes@plaes.org>
Rafael Ávila de Espíndola <rafael.espindola@gmail.com>
Rafaël Carré <funman@videolan.org>
Ralph Giles <giles@xiph.org>
@@ -50,6 +53,7 @@ Ronald S. Bultje <rbultje@google.com>
Scott LaVarnway <slavarnway@google.com>
Stefan Holmer <holmer@google.com>
Taekhyun Kim <takim@nvidia.com>
Takanori MATSUURA <t.matsuu@gmail.com>
Tero Rintaluoma <teror@google.com>
Thijs Vermeir <thijsvermeir@gmail.com>
Timothy B. Terriberry <tterribe@xiph.org>


@@ -1,3 +1,95 @@
2012-05-09 v1.1.0 "Eider"
This introduces a number of enhancements, mostly focused on real-time
encoding. In addition, it fixes a decoder bug (first introduced in
Duclair) so all users of that release are encouraged to upgrade.
- Upgrading:
This release is ABI and API compatible with Duclair (v1.0.0). Users
of older releases should refer to the Upgrading notes in this
document for that release.
This release introduces a new temporal denoiser, controlled by the
VP8E_SET_NOISE_SENSITIVITY control. The temporal denoiser does not
currently take a strength parameter, so the control is effectively
a boolean - zero (off) or non-zero (on). For compatibility with
existing applications, the values accepted are the same as those
for the spatial denoiser (0-6). The temporal denoiser is enabled
by default, and the older spatial denoiser may be restored by
configuring with --disable-temporal-denoising. The temporal denoiser
is more computationally intensive than the spatial one.
This release removes support for a legacy, decode only API that was
supported, but deprecated, at the initial release of libvpx
(v0.9.0). This is not expected to have any impact. If you are
impacted, you can apply a reversion to commit 2bf8fb58 locally.
Please update to the latest libvpx API if you are affected.
- Enhancements:
Adds a motion compensated temporal denoiser to the encoder, which
gives higher quality than the older spatial denoiser. (See above
for notes on upgrading).
In addition, support for new compilers and platforms was added,
including:
improved support for XCode
Android x86 NDK build
OS/2 support
SunCC support
Changing resolution with vpx_codec_enc_config_set() is now
supported. Previously, reinitializing the codec was required to
change the input resolution.
The vpxenc application has initial support for producing multiple
encodes from the same input in one call. Resizing is not yet
supported, but varying other codec parameters is. Use -- to
delineate output streams. Options persist from one stream to the
next.
Also, the vpxenc application will now use a keyframe interval of
5 seconds by default. Use the --kf-max-dist option to override.
- Speed:
Decoder performance improved 2.5% versus Duclair. Encoder speed is
consistent with Duclair for most material. Two pass encoding of
slideshow-like material will see significant improvements.
Large realtime encoding speed gains at a small quality expense are
possible by configuring the on-the-fly bitpacking experiment with
--enable-onthefly-bitpacking. Realtime encoder can be up to 13%
faster (ARM) depending on the number of threads and bitrate
settings. This technique sees constant gain over the 5-16 speed
range. For VC style input the loss seen is up to 0.2dB. See commit
52cf4dca for further details.
- Quality:
On the whole, quality is consistent with the Duclair release. Some
tweaks:
Reduced blockiness in easy sections by applying a penalty to
intra modes.
Improved quality of static sections (like slideshows) with
two pass encoding.
Improved keyframe sizing with multiple temporal layers
- Bug Fixes:
Corrected alt-ref contribution to frame rate for visible updates
to the alt-ref buffer. This affected applications making manual
usage of the frame reference flags, or temporal layers.
Additional constraints were added to disable multi-frame quality
enhancement (MFQE) in sections of the frame where there is motion.
(#392)
Fixed corruption issues when vpx_codec_enc_config_set() was called
with spatial resampling enabled.
Fixed a decoder error introduced in Duclair where the segmentation
map was not being reinitialized on keyframes (#378)
2012-01-27 v1.0.0 "Duclair"
Our fourth named release, focused on performance and features related to
real-time encoding. It also fixes a decoder crash bug introduced in


@@ -99,7 +99,7 @@ $$(eval $$(call ev-build-file))
$(1) : $$(_OBJ) $(2)
@mkdir -p $$(dir $$@)
@grep -w EQU $$< | tr -d '\#' | $(CONFIG_DIR)/$(ASM_CONVERSION) > $$@
@grep $(OFFSET_PATTERN) $$< | tr -d '\#' | $(CONFIG_DIR)/$(ASM_CONVERSION) > $$@
endef
# Use ads2gas script to convert from RVCT format to GAS format. This passes
@@ -118,6 +118,9 @@ $(ASM_CNV_PATH)/libvpx/%.asm.s: $(LIBVPX_PATH)/%.asm $(ASM_CNV_OFFSETS_DEPEND)
@mkdir -p $(dir $@)
@$(CONFIG_DIR)/$(ASM_CONVERSION) <$< > $@
# For building vpx_rtcd.h, which has a rule in libs.mk
TGT_ISA:=$(word 1, $(subst -, ,$(TOOLCHAIN)))
target := libs
LOCAL_SRC_FILES += vpx_config.c
@@ -165,12 +168,15 @@ LOCAL_LDLIBS := -llog
LOCAL_STATIC_LIBRARIES := cpufeatures
$(foreach file, $(LOCAL_SRC_FILES), $(LOCAL_PATH)/$(file)): vpx_rtcd.h
.PHONY: clean
clean:
@echo "Clean: ads2gas files [$(TARGET_ARCH_ABI)]"
@$(RM) $(CODEC_SRCS_ASM_ADS2GAS) $(CODEC_SRCS_ASM_NEON_ADS2GAS)
@$(RM) $(patsubst %.asm, %.*, $(ASM_CNV_OFFSETS_DEPEND))
@$(RM) -r $(ASM_CNV_PATH)
@$(RM) $(CLEAN-OBJS)
include $(BUILD_SHARED_LIBRARY)


@@ -458,9 +458,12 @@ process_common_cmdline() {
eval `echo "$opt" | sed 's/--/action=/;s/-/ option=/;s/-/_/g'`
if echo "${ARCH_EXT_LIST}" | grep "^ *$option\$" >/dev/null; then
[ $action = "disable" ] && RTCD_OPTIONS="${RTCD_OPTIONS}${opt} "
else
echo "${CMDLINE_SELECT}" | grep "^ *$option\$" >/dev/null ||
die_unknown $opt
elif [ $action = "disable" ] && ! disabled $option ; then
echo "${CMDLINE_SELECT}" | grep "^ *$option\$" >/dev/null ||
die_unknown $opt
elif [ $action = "enable" ] && ! enabled $option ; then
echo "${CMDLINE_SELECT}" | grep "^ *$option\$" >/dev/null ||
die_unknown $opt
fi
$action $option
;;
@@ -585,6 +588,10 @@ process_common_toolchain() {
tgt_isa=x86_64
tgt_os=darwin11
;;
*darwin12*)
tgt_isa=x86_64
tgt_os=darwin12
;;
*mingw32*|*cygwin*)
[ -z "$tgt_isa" ] && tgt_isa=x86
tgt_os=win32
@@ -635,44 +642,51 @@ process_common_toolchain() {
# Handle darwin variants. Newer SDKs allow targeting older
# platforms, so find the newest SDK available.
if [ -d "/Developer/SDKs/MacOSX10.4u.sdk" ]; then
osx_sdk_dir="/Developer/SDKs/MacOSX10.4u.sdk"
fi
if [ -d "/Developer/SDKs/MacOSX10.5.sdk" ]; then
osx_sdk_dir="/Developer/SDKs/MacOSX10.5.sdk"
fi
if [ -d "/Developer/SDKs/MacOSX10.6.sdk" ]; then
osx_sdk_dir="/Developer/SDKs/MacOSX10.6.sdk"
fi
if [ -d "/Developer/SDKs/MacOSX10.7.sdk" ]; then
osx_sdk_dir="/Developer/SDKs/MacOSX10.7.sdk"
case ${toolchain} in
*-darwin*)
if [ -z "${DEVELOPER_DIR}" ]; then
DEVELOPER_DIR=`xcode-select -print-path 2> /dev/null`
[ $? -ne 0 ] && OSX_SKIP_DIR_CHECK=1
fi
if [ -z "${OSX_SKIP_DIR_CHECK}" ]; then
OSX_SDK_ROOTS="${DEVELOPER_DIR}/SDKs"
OSX_SDK_VERSIONS="MacOSX10.4u.sdk MacOSX10.5.sdk MacOSX10.6.sdk"
OSX_SDK_VERSIONS="${OSX_SDK_VERSIONS} MacOSX10.7.sdk"
for v in ${OSX_SDK_VERSIONS}; do
if [ -d "${OSX_SDK_ROOTS}/${v}" ]; then
osx_sdk_dir="${OSX_SDK_ROOTS}/${v}"
fi
done
fi
;;
esac
if [ -d "${osx_sdk_dir}" ]; then
add_cflags "-isysroot ${osx_sdk_dir}"
add_ldflags "-isysroot ${osx_sdk_dir}"
fi
case ${toolchain} in
*-darwin8-*)
add_cflags "-isysroot ${osx_sdk_dir}"
add_cflags "-mmacosx-version-min=10.4"
add_ldflags "-isysroot ${osx_sdk_dir}"
add_ldflags "-mmacosx-version-min=10.4"
;;
*-darwin9-*)
add_cflags "-isysroot ${osx_sdk_dir}"
add_cflags "-mmacosx-version-min=10.5"
add_ldflags "-isysroot ${osx_sdk_dir}"
add_ldflags "-mmacosx-version-min=10.5"
;;
*-darwin10-*)
add_cflags "-isysroot ${osx_sdk_dir}"
add_cflags "-mmacosx-version-min=10.6"
add_ldflags "-isysroot ${osx_sdk_dir}"
add_ldflags "-mmacosx-version-min=10.6"
;;
*-darwin11-*)
add_cflags "-isysroot ${osx_sdk_dir}"
add_cflags "-mmacosx-version-min=10.7"
add_ldflags "-isysroot ${osx_sdk_dir}"
add_ldflags "-mmacosx-version-min=10.7"
;;
*-darwin12-*)
add_cflags "-mmacosx-version-min=10.8"
add_ldflags "-mmacosx-version-min=10.8"
;;
esac
# Handle Solaris variants. Solaris 10 needs -lposix4
@@ -796,6 +810,8 @@ process_common_toolchain() {
add_cflags "--sysroot=${alt_libc}"
add_ldflags "--sysroot=${alt_libc}"
add_cflags "-I${SDK_PATH}/sources/android/cpufeatures/"
enable pic
soft_enable realtime_only
if [ ${tgt_isa} == "armv7" ]; then
@@ -805,7 +821,8 @@ process_common_toolchain() {
darwin*)
if [ -z "${sdk_path}" ]; then
SDK_PATH=/Developer/Platforms/iPhoneOS.platform/Developer
SDK_PATH=`xcode-select -print-path 2> /dev/null`
SDK_PATH=${SDK_PATH}/Platforms/iPhoneOS.platform/Developer
else
SDK_PATH=${sdk_path}
fi
@@ -827,7 +844,7 @@ process_common_toolchain() {
add_ldflags -arch_only ${tgt_isa}
if [ -z "${alt_libc}" ]; then
alt_libc=${SDK_PATH}/SDKs/iPhoneOS5.0.sdk
alt_libc=${SDK_PATH}/SDKs/iPhoneOS5.1.sdk
fi
add_cflags "-isysroot ${alt_libc}"


@@ -42,7 +42,7 @@ done
[ -n "$srcfile" ] || show_help
sfx=${sfx:-asm}
includes=$(egrep -i "include +\"?+[a-z0-9_/]+\.${sfx}" $srcfile |
includes=$(LC_ALL=C egrep -i "include +\"?+[a-z0-9_/]+\.${sfx}" $srcfile |
perl -p -e "s;.*?([a-z0-9_/]+.${sfx}).*;\1;")
#" restore editor state
for inc in ${includes}; do


@@ -196,8 +196,8 @@ filter() {
# Helper functions for generating the arch specific RTCD files
#
common_top() {
local outfile_basename=$(basename ${outfile:-rtcd.h})
local include_guard=$(echo -n $outfile_basename | tr '[a-z]' '[A-Z]' | tr -c '[A-Z]' _)
local outfile_basename=$(basename ${symbol:-rtcd.h})
local include_guard=$(echo $outfile_basename | tr '[a-z]' '[A-Z]' | tr -c '[A-Z]' _)
cat <<EOF
#ifndef ${include_guard}
#define ${include_guard}
@@ -225,7 +225,7 @@ x86() {
# Assign the helper variable for each enabled extension
for opt in $ALL_ARCHS; do
local uc=$(echo -n $opt | tr '[a-z]' '[A-Z]')
local uc=$(echo $opt | tr '[a-z]' '[A-Z]')
eval "have_${opt}=\"flags & HAS_${uc}\""
done
@@ -253,7 +253,7 @@ arm() {
# Assign the helper variable for each enabled extension
for opt in $ALL_ARCHS; do
local uc=$(echo -n $opt | tr '[a-z]' '[A-Z]')
local uc=$(echo $opt | tr '[a-z]' '[A-Z]')
eval "have_${opt}=\"flags & HAS_${uc}\""
done

configure

@@ -39,6 +39,7 @@ Advanced options:
${toggle_multithread} multithreaded encoding and decoding
${toggle_spatial_resampling} spatial sampling (scaling) support
${toggle_realtime_only} enable this option while building for real-time encoding
${toggle_onthefly_bitpacking} enable on-the-fly bitpacking in real-time encoding
${toggle_error_concealment} enable this option to get a decoder which is able to conceal losses
${toggle_runtime_cpu_detect} runtime cpu detection
${toggle_shared} shared library support
@@ -46,6 +47,7 @@ Advanced options:
${toggle_small} favor smaller size over speed
${toggle_postproc_visualizer} macro block / block level visualizers
${toggle_multi_res_encoding} enable multiple-resolution encoding
${toggle_temporal_denoising} enable temporal denoising and disable the spatial denoiser
Codecs:
Codecs can be selectively enabled or disabled individually, or by family:
@@ -107,6 +109,8 @@ all_platforms="${all_platforms} x86-darwin8-icc"
all_platforms="${all_platforms} x86-darwin9-gcc"
all_platforms="${all_platforms} x86-darwin9-icc"
all_platforms="${all_platforms} x86-darwin10-gcc"
all_platforms="${all_platforms} x86-darwin11-gcc"
all_platforms="${all_platforms} x86-darwin12-gcc"
all_platforms="${all_platforms} x86-linux-gcc"
all_platforms="${all_platforms} x86-linux-icc"
all_platforms="${all_platforms} x86-os2-gcc"
@@ -118,6 +122,7 @@ all_platforms="${all_platforms} x86-win32-vs9"
all_platforms="${all_platforms} x86_64-darwin9-gcc"
all_platforms="${all_platforms} x86_64-darwin10-gcc"
all_platforms="${all_platforms} x86_64-darwin11-gcc"
all_platforms="${all_platforms} x86_64-darwin12-gcc"
all_platforms="${all_platforms} x86_64-linux-gcc"
all_platforms="${all_platforms} x86_64-linux-icc"
all_platforms="${all_platforms} x86_64-solaris-gcc"
@@ -126,6 +131,9 @@ all_platforms="${all_platforms} x86_64-win64-vs8"
all_platforms="${all_platforms} x86_64-win64-vs9"
all_platforms="${all_platforms} universal-darwin8-gcc"
all_platforms="${all_platforms} universal-darwin9-gcc"
all_platforms="${all_platforms} universal-darwin10-gcc"
all_platforms="${all_platforms} universal-darwin11-gcc"
all_platforms="${all_platforms} universal-darwin12-gcc"
all_platforms="${all_platforms} generic-gnu"
# all_targets is a list of all targets that can be configured
@@ -164,6 +172,7 @@ enable md5
enable spatial_resampling
enable multithread
enable os_support
enable temporal_denoising
[ -d ${source_path}/../include ] && enable alt_tree_layout
for d in vp8; do
@@ -177,6 +186,8 @@ else
# customer environment
[ -f ${source_path}/../include/vpx/vp8cx.h ] && CODECS="${CODECS} vp8_encoder"
[ -f ${source_path}/../include/vpx/vp8dx.h ] && CODECS="${CODECS} vp8_decoder"
[ -f ${source_path}/../include/vpx/vp8cx.h ] || disable vp8_encoder
[ -f ${source_path}/../include/vpx/vp8dx.h ] || disable vp8_decoder
[ -f ${source_path}/../lib/*/*mt.lib ] && soft_enable static_msvcrt
fi
@@ -253,6 +264,7 @@ CONFIG_LIST="
static_msvcrt
spatial_resampling
realtime_only
onthefly_bitpacking
error_concealment
shared
static
@@ -261,6 +273,7 @@ CONFIG_LIST="
os_support
unit_tests
multi_res_encoding
temporal_denoising
"
CMDLINE_SELECT="
extra_warnings
@@ -297,6 +310,7 @@ CMDLINE_SELECT="
mem_tracker
spatial_resampling
realtime_only
onthefly_bitpacking
error_concealment
shared
static
@@ -304,6 +318,7 @@ CMDLINE_SELECT="
postproc_visualizer
unit_tests
multi_res_encoding
temporal_denoising
"
process_cmdline() {
@@ -484,11 +499,20 @@ process_toolchain() {
case $toolchain in
universal-darwin*)
local darwin_ver=${tgt_os##darwin}
fat_bin_archs="$fat_bin_archs ppc32-${tgt_os}-gcc"
# Intel
fat_bin_archs="$fat_bin_archs x86-${tgt_os}-${tgt_cc}"
if [ $darwin_ver -gt 8 ]; then
# Snow Leopard (10.6/darwin10) dropped support for PPC
# Include PPC support for all prior versions
if [ $darwin_ver -lt 10 ]; then
fat_bin_archs="$fat_bin_archs ppc32-${tgt_os}-gcc"
fi
# Tiger (10.4/darwin8) brought support for x86
if [ $darwin_ver -ge 8 ]; then
fat_bin_archs="$fat_bin_archs x86-${tgt_os}-${tgt_cc}"
fi
# Leopard (10.5/darwin9) brought 64 bit support
if [ $darwin_ver -ge 9 ]; then
fat_bin_archs="$fat_bin_archs x86_64-${tgt_os}-${tgt_cc}"
fi
;;
@@ -504,6 +528,10 @@ process_toolchain() {
check_add_cflags -Wpointer-arith
check_add_cflags -Wtype-limits
check_add_cflags -Wcast-qual
check_add_cflags -Wimplicit-function-declaration
check_add_cflags -Wuninitialized
check_add_cflags -Wunused-variable
check_add_cflags -Wunused-but-set-variable
enabled extra_warnings || check_add_cflags -Wno-unused-function
fi


@@ -21,9 +21,6 @@ CODEC_DOX := mainpage.dox \
usage_dx.dox \
# Other doxy files sourced in Markdown
TXT_DOX-$(CONFIG_VP8) += vp8_api1_migration.dox
vp8_api1_migration.dox.DESC = VP8 API 1.x Migration
TXT_DOX = $(call enabled,TXT_DOX)
%.dox: %.txt


@@ -32,6 +32,7 @@ vpxenc.SRCS += args.c args.h y4minput.c y4minput.h
vpxenc.SRCS += tools_common.c tools_common.h
vpxenc.SRCS += vpx_ports/mem_ops.h
vpxenc.SRCS += vpx_ports/mem_ops_aligned.h
vpxenc.SRCS += vpx_ports/vpx_timer.h
vpxenc.SRCS += libmkv/EbmlIDs.h
vpxenc.SRCS += libmkv/EbmlWriter.c
vpxenc.SRCS += libmkv/EbmlWriter.h

libs.mk

@@ -17,6 +17,7 @@ else
ASM:=.asm
endif
CODEC_SRCS-yes += CHANGELOG
CODEC_SRCS-yes += libs.mk
include $(SRC_PATH_BARE)/vpx/vpx_codec.mk
@@ -34,9 +35,9 @@ ifeq ($(CONFIG_VP8_ENCODER),yes)
include $(SRC_PATH_BARE)/$(VP8_PREFIX)vp8cx.mk
CODEC_SRCS-yes += $(addprefix $(VP8_PREFIX),$(call enabled,VP8_CX_SRCS))
CODEC_EXPORTS-yes += $(addprefix $(VP8_PREFIX),$(VP8_CX_EXPORTS))
CODEC_SRCS-yes += $(VP8_PREFIX)vp8cx.mk vpx/vp8.h vpx/vp8cx.h vpx/vp8e.h
CODEC_SRCS-yes += $(VP8_PREFIX)vp8cx.mk vpx/vp8.h vpx/vp8cx.h
CODEC_SRCS-$(ARCH_ARM) += $(VP8_PREFIX)vp8cx_arm.mk
INSTALL-LIBS-yes += include/vpx/vp8.h include/vpx/vp8e.h include/vpx/vp8cx.h
INSTALL-LIBS-yes += include/vpx/vp8.h include/vpx/vp8cx.h
INSTALL_MAPS += include/vpx/% $(SRC_PATH_BARE)/$(VP8_PREFIX)/%
CODEC_DOC_SRCS += vpx/vp8.h vpx/vp8cx.h
CODEC_DOC_SECTIONS += vp8 vp8_encoder
@@ -114,7 +115,6 @@ INSTALL-LIBS-yes += include/vpx/vpx_integer.h
INSTALL-LIBS-yes += include/vpx/vpx_codec_impl_top.h
INSTALL-LIBS-yes += include/vpx/vpx_codec_impl_bottom.h
INSTALL-LIBS-$(CONFIG_DECODERS) += include/vpx/vpx_decoder.h
INSTALL-LIBS-$(CONFIG_DECODERS) += include/vpx/vpx_decoder_compat.h
INSTALL-LIBS-$(CONFIG_ENCODERS) += include/vpx/vpx_encoder.h
ifeq ($(CONFIG_EXTERNAL_BUILD),yes)
ifeq ($(CONFIG_MSVS),yes)
@@ -233,7 +233,7 @@ vpx.pc: config.mk libs.mk
$(qexec)echo '# pkg-config file from libvpx $(VERSION_STRING)' > $@
$(qexec)echo 'prefix=$(PREFIX)' >> $@
$(qexec)echo 'exec_prefix=$${prefix}' >> $@
$(qexec)echo 'libdir=$${prefix}/lib' >> $@
$(qexec)echo 'libdir=$${prefix}/$(LIBSUBDIR)' >> $@
$(qexec)echo 'includedir=$${prefix}/include' >> $@
$(qexec)echo '' >> $@
$(qexec)echo 'Name: vpx' >> $@
@@ -280,19 +280,21 @@ $(filter %$(ASM).o,$(OBJS-yes)): $(BUILD_PFX)vpx_config.asm
# Calculate platform- and compiler-specific offsets for hand coded assembly
#
OFFSET_PATTERN:='^[a-zA-Z0-9_]* EQU'
ifeq ($(filter icc gcc,$(TGT_CC)), $(TGT_CC))
$(BUILD_PFX)asm_com_offsets.asm: $(BUILD_PFX)$(VP8_PREFIX)common/asm_com_offsets.c.S
grep -w EQU $< | tr -d '$$\#' $(ADS2GAS) > $@
LC_ALL=C grep $(OFFSET_PATTERN) $< | tr -d '$$\#' $(ADS2GAS) > $@
$(BUILD_PFX)$(VP8_PREFIX)common/asm_com_offsets.c.S: $(VP8_PREFIX)common/asm_com_offsets.c
CLEAN-OBJS += $(BUILD_PFX)asm_com_offsets.asm $(BUILD_PFX)$(VP8_PREFIX)common/asm_com_offsets.c.S
$(BUILD_PFX)asm_enc_offsets.asm: $(BUILD_PFX)$(VP8_PREFIX)encoder/asm_enc_offsets.c.S
grep -w EQU $< | tr -d '$$\#' $(ADS2GAS) > $@
LC_ALL=C grep $(OFFSET_PATTERN) $< | tr -d '$$\#' $(ADS2GAS) > $@
$(BUILD_PFX)$(VP8_PREFIX)encoder/asm_enc_offsets.c.S: $(VP8_PREFIX)encoder/asm_enc_offsets.c
CLEAN-OBJS += $(BUILD_PFX)asm_enc_offsets.asm $(BUILD_PFX)$(VP8_PREFIX)encoder/asm_enc_offsets.c.S
$(BUILD_PFX)asm_dec_offsets.asm: $(BUILD_PFX)$(VP8_PREFIX)decoder/asm_dec_offsets.c.S
grep -w EQU $< | tr -d '$$\#' $(ADS2GAS) > $@
LC_ALL=C grep $(OFFSET_PATTERN) $< | tr -d '$$\#' $(ADS2GAS) > $@
$(BUILD_PFX)$(VP8_PREFIX)decoder/asm_dec_offsets.c.S: $(VP8_PREFIX)decoder/asm_dec_offsets.c
CLEAN-OBJS += $(BUILD_PFX)asm_dec_offsets.asm $(BUILD_PFX)$(VP8_PREFIX)decoder/asm_dec_offsets.c.S
else
@@ -326,8 +328,8 @@ CLEAN-OBJS += $(BUILD_PFX)vpx_version.h
#
# Rule to generate runtime cpu detection files
#
$(OBJS-yes:.o=.d): vpx_rtcd.h
vpx_rtcd.h: $(sort $(filter %rtcd_defs.sh,$(CODEC_SRCS)))
$(OBJS-yes:.o=.d): $(BUILD_PFX)vpx_rtcd.h
$(BUILD_PFX)vpx_rtcd.h: $(SRC_PATH_BARE)/$(sort $(filter %rtcd_defs.sh,$(CODEC_SRCS)))
@echo " [CREATE] $@"
$(qexec)$(SRC_PATH_BARE)/build/make/rtcd.sh --arch=$(TGT_ISA) \
--sym=vpx_rtcd \


@@ -12,8 +12,12 @@
This distribution of the WebM VP8 Codec SDK includes the following support:
\if vp8_encoder - \ref vp8_encoder \endif
\if vp8_decoder - \ref vp8_decoder \endif
\if vp8_encoder
- \ref vp8_encoder
\endif
\if vp8_decoder
- \ref vp8_decoder
\endif
\section main_startpoints Starting Points
@@ -24,8 +28,12 @@
- Read the \ref samples "sample code" for examples of how to interact with the
codec.
- \ref codec reference
\if encoder - \ref encoder reference \endif
\if decoder - \ref decoder reference \endif
\if encoder
- \ref encoder reference
\endif
\if decoder
- \ref decoder reference
\endif
\section main_support Support Options & FAQ
The WebM project is an open source project supported by its community. For

tools/ftfy.sh

@@ -0,0 +1,160 @@
#!/bin/sh
self="$0"
dirname_self=$(dirname "$self")

usage() {
  cat <<EOF >&2
Usage: $self [option]

This script applies a whitespace transformation to the commit at HEAD. If no
options are given, then the modified files are left in the working tree.

Options:
  -h, --help     Shows this message
  -n, --dry-run  Shows a diff of the changes to be made.
  --amend        Squashes the changes into the commit at HEAD
                     This option will also reformat the commit message.
  --commit       Creates a new commit containing only the whitespace changes
  --msg-only     Reformat the commit message only, ignore the patch itself.

EOF
  rm -f ${CLEAN_FILES}
  exit 1
}

log() {
  echo "${self##*/}: $@" >&2
}

vpx_style() {
  astyle --style=bsd --min-conditional-indent=0 --break-blocks \
         --pad-oper --pad-header --unpad-paren \
         --align-pointer=name \
         --indent-preprocessor --convert-tabs --indent-labels \
         --suffix=none --quiet "$@"
  sed -i 's/[[:space:]]\{1,\},/,/g' "$@"
}

apply() {
  [ $INTERSECT_RESULT -ne 0 ] && patch -p1 < "$1"
}

commit() {
  LAST_CHANGEID=$(git show | awk '/Change-Id:/{print $2}')
  if [ -z "$LAST_CHANGEID" ]; then
    log "HEAD doesn't have a Change-Id, unable to generate a new commit"
    exit 1
  fi

  # Build a deterministic Change-Id from the parent's
  NEW_CHANGEID=${LAST_CHANGEID}-styled
  NEW_CHANGEID=I$(echo $NEW_CHANGEID | git hash-object --stdin)

  # Commit, preserving authorship from the parent commit.
  git commit -a -C HEAD > /dev/null
  git commit --amend -F- << EOF
Cosmetic: Fix whitespace in change ${LAST_CHANGEID:0:9}

Change-Id: ${NEW_CHANGEID}
EOF
}

show_commit_msg_diff() {
  if [ $DIFF_MSG_RESULT -ne 0 ]; then
    log "Modified commit message:"
    diff -u "$ORIG_COMMIT_MSG" "$NEW_COMMIT_MSG" | tail -n +3
  fi
}

amend() {
  show_commit_msg_diff
  if [ $DIFF_MSG_RESULT -ne 0 ] || [ $INTERSECT_RESULT -ne 0 ]; then
    git commit -a --amend -F "$NEW_COMMIT_MSG"
  fi
}

diff_msg() {
  git log -1 --format=%B > "$ORIG_COMMIT_MSG"
  "${dirname_self}"/wrap-commit-msg.py \
      < "$ORIG_COMMIT_MSG" > "$NEW_COMMIT_MSG"
  cmp -s "$ORIG_COMMIT_MSG" "$NEW_COMMIT_MSG"
  DIFF_MSG_RESULT=$?
}

# Temporary files
ORIG_DIFF=orig.diff.$$
MODIFIED_DIFF=modified.diff.$$
FINAL_DIFF=final.diff.$$
ORIG_COMMIT_MSG=orig.commit-msg.$$
NEW_COMMIT_MSG=new.commit-msg.$$
CLEAN_FILES="${ORIG_DIFF} ${MODIFIED_DIFF} ${FINAL_DIFF}"
CLEAN_FILES="${CLEAN_FILES} ${ORIG_COMMIT_MSG} ${NEW_COMMIT_MSG}"

# Preconditions
[ $# -lt 2 ] || usage

# Check that astyle supports pad-header and align-pointer=name
if ! astyle --pad-header --align-pointer=name < /dev/null; then
  log "Install astyle v1.24 or newer"
  exit 1
fi

if ! git diff --quiet HEAD; then
  log "Working tree is dirty, commit your changes first"
  exit 1
fi

# Need to be in the root
cd "$(git rev-parse --show-toplevel)"

# Collect the original diff
git show > "${ORIG_DIFF}"

# Apply the style guide on new and modified files and collect its diff
for f in $(git diff HEAD^ --name-only -M90 --diff-filter=AM \
           | grep '\.[ch]$'); do
  case "$f" in
    third_party/*) continue;;
    nestegg/*) continue;;
  esac
  vpx_style "$f"
done
git diff --no-color --no-ext-diff > "${MODIFIED_DIFF}"

# Intersect the two diffs
"${dirname_self}"/intersect-diffs.py \
    "${ORIG_DIFF}" "${MODIFIED_DIFF}" > "${FINAL_DIFF}"
INTERSECT_RESULT=$?
git reset --hard >/dev/null

# Fixup the commit message
diff_msg

# Handle options
if [ -n "$1" ]; then
  case "$1" in
    -h|--help) usage;;
    -n|--dry-run) cat "${FINAL_DIFF}"; show_commit_msg_diff;;
    --commit) apply "${FINAL_DIFF}"; commit;;
    --amend) apply "${FINAL_DIFF}"; amend;;
    --msg-only) amend;;
    *) usage;;
  esac
else
  apply "${FINAL_DIFF}"
  if ! git diff --quiet; then
    log "Formatting changes applied, verify and commit."
    log "See also: http://www.webmproject.org/code/contribute/conventions/"
    git diff --stat
  fi
fi

rm -f ${CLEAN_FILES}
tools/intersect-diffs.py

@@ -0,0 +1,188 @@
#!/usr/bin/env python
## Copyright (c) 2012 The WebM project authors. All Rights Reserved.
##
## Use of this source code is governed by a BSD-style license
## that can be found in the LICENSE file in the root of the source
## tree. An additional intellectual property rights grant can be found
## in the file PATENTS. All contributing project authors may
## be found in the AUTHORS file in the root of the source tree.
##
"""Calculates the "intersection" of two unified diffs.
Given two diffs, A and B, it finds all hunks in B that had non-context lines
in A and prints them to stdout. This is useful to determine the hunks in B that
are relevant to A. The resulting file can be applied with patch(1) on top of A.
"""
__author__ = "jkoleszar@google.com"
import re
import sys
class DiffLines(object):
    """A container for one half of a diff."""

    def __init__(self, filename, offset, length):
        self.filename = filename
        self.offset = offset
        self.length = length
        self.lines = []
        self.delta_line_nums = []

    def Append(self, line):
        l = len(self.lines)
        if line[0] != " ":
            self.delta_line_nums.append(self.offset + l)
        self.lines.append(line[1:])
        assert l+1 <= self.length

    def Complete(self):
        return len(self.lines) == self.length

    def __contains__(self, item):
        return item >= self.offset and item <= self.offset + self.length - 1


class DiffHunk(object):
    """A container for one diff hunk, consisting of two DiffLines."""

    def __init__(self, header, file_a, file_b, start_a, len_a, start_b, len_b):
        self.header = header
        self.left = DiffLines(file_a, start_a, len_a)
        self.right = DiffLines(file_b, start_b, len_b)
        self.lines = []

    def Append(self, line):
        """Adds a line to the DiffHunk and its DiffLines children."""
        if line[0] == "-":
            self.left.Append(line)
        elif line[0] == "+":
            self.right.Append(line)
        elif line[0] == " ":
            self.left.Append(line)
            self.right.Append(line)
        else:
            assert False, ("Unrecognized character at start of diff line "
                           "%r" % line[0])
        self.lines.append(line)

    def Complete(self):
        return self.left.Complete() and self.right.Complete()

    def __repr__(self):
        return "DiffHunk(%s, %s, len %d)" % (
            self.left.filename, self.right.filename,
            max(self.left.length, self.right.length))


def ParseDiffHunks(stream):
    """Walk a file-like object, yielding DiffHunks as they're parsed."""
    file_regex = re.compile(r"(\+\+\+|---) (\S+)")
    range_regex = re.compile(r"@@ -(\d+)(,(\d+))? \+(\d+)(,(\d+))?")
    hunk = None
    while True:
        line = stream.readline()
        if not line:
            break

        if hunk is None:
            # Parse file names
            diff_file = file_regex.match(line)
            if diff_file:
                if line.startswith("---"):
                    a_line = line
                    a = diff_file.group(2)
                    continue
                if line.startswith("+++"):
                    b_line = line
                    b = diff_file.group(2)
                    continue

            # Parse offset/lengths
            diffrange = range_regex.match(line)
            if diffrange:
                if diffrange.group(2):
                    start_a = int(diffrange.group(1))
                    len_a = int(diffrange.group(3))
                else:
                    start_a = 1
                    len_a = int(diffrange.group(1))
                if diffrange.group(5):
                    start_b = int(diffrange.group(4))
                    len_b = int(diffrange.group(6))
                else:
                    start_b = 1
                    len_b = int(diffrange.group(4))
                header = [a_line, b_line, line]
                hunk = DiffHunk(header, a, b, start_a, len_a, start_b, len_b)
        else:
            # Add the current line to the hunk
            hunk.Append(line)

            # See if the whole hunk has been parsed. If so, yield it and
            # prepare for the next hunk.
            if hunk.Complete():
                yield hunk
                hunk = None

    # Partial hunks are a parse error
assert hunk is None
def FormatDiffHunks(hunks):
"""Re-serialize a list of DiffHunks."""
r = []
last_header = None
for hunk in hunks:
this_header = hunk.header[0:2]
if last_header != this_header:
r.extend(hunk.header)
last_header = this_header
else:
r.extend(hunk.header[2])
r.extend(hunk.lines)
r.append("\n")
return "".join(r)
def ZipHunks(rhs_hunks, lhs_hunks):
"""Join two hunk lists on filename."""
for rhs_hunk in rhs_hunks:
rhs_file = rhs_hunk.right.filename.split("/")[1:]
for lhs_hunk in lhs_hunks:
lhs_file = lhs_hunk.left.filename.split("/")[1:]
if lhs_file != rhs_file:
continue
yield (rhs_hunk, lhs_hunk)
def main():
old_hunks = [x for x in ParseDiffHunks(open(sys.argv[1], "r"))]
new_hunks = [x for x in ParseDiffHunks(open(sys.argv[2], "r"))]
out_hunks = []
# Join the right hand side of the older diff with the left hand side of the
# newer diff.
for old_hunk, new_hunk in ZipHunks(old_hunks, new_hunks):
if new_hunk in out_hunks:
continue
old_lines = old_hunk.right
new_lines = new_hunk.left
# Determine if this hunk overlaps any non-context line from the other
for i in old_lines.delta_line_nums:
if i in new_lines:
out_hunks.append(new_hunk)
break
if out_hunks:
print FormatDiffHunks(out_hunks)
sys.exit(1)
if __name__ == "__main__":
main()
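The `range_regex` above drives the hunk-header parsing. As a hedged, self-contained sketch of the same regex (note: omitted lengths default to 1 here per the unified-diff convention, which differs slightly from the script's fallback branch):

```python
import re

# Same pattern as range_regex above: "@@ -start[,len] +start[,len] @@".
range_regex = re.compile(r"@@ -(\d+)(,(\d+))? \+(\d+)(,(\d+))?")

def parse_hunk_header(line):
    """Return (start_a, len_a, start_b, len_b), with omitted lengths
    defaulting to 1 per the unified-diff convention."""
    m = range_regex.match(line)
    if not m:
        return None
    len_a = int(m.group(3)) if m.group(2) else 1
    len_b = int(m.group(6)) if m.group(5) else 1
    return int(m.group(1)), len_a, int(m.group(4)), len_b

print(parse_hunk_header("@@ -97,6 +98,7 @@ int vp8_alloc_frame_buffers"))
# -> (97, 6, 98, 7)
```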

tools/wrap-commit-msg.py Executable file

@@ -0,0 +1,70 @@
#!/usr/bin/env python
## Copyright (c) 2012 The WebM project authors. All Rights Reserved.
##
## Use of this source code is governed by a BSD-style license
## that can be found in the LICENSE file in the root of the source
## tree. An additional intellectual property rights grant can be found
## in the file PATENTS. All contributing project authors may
## be found in the AUTHORS file in the root of the source tree.
##
"""Wraps paragraphs of text, preserving manual formatting
This is like fold(1), but has the special convention of not modifying lines
that start with whitespace. This allows you to intersperse blocks with
special formatting, like code blocks, with written prose. The prose will
be wordwrapped, and the manual formatting will be preserved.
* This won't handle the case of a bulleted (or ordered) list specially, so
manual wrapping must be done.
Occasionally it's useful to put something with explicit formatting that
doesn't look at all like a block of text inline.
indicator = has_leading_whitespace(line);
if (indicator)
preserve_formatting(line);
The intent is that this docstring would make it through the transform
and still be legible and presented as it is in the source. If additional
cases are handled, update this doc to describe the effect.
"""
__author__ = "jkoleszar@google.com"
import textwrap
import sys
def wrap(text):
if text:
return textwrap.fill(text, break_long_words=False) + '\n'
return ""
def main(fileobj):
text = ""
output = ""
while True:
line = fileobj.readline()
if not line:
break
if line.lstrip() == line:
text += line
else:
output += wrap(text)
text=""
output += line
output += wrap(text)
# Replace the file or write to stdout.
if fileobj == sys.stdin:
fileobj = sys.stdout
else:
fileobj.seek(0)
fileobj.truncate(0)
fileobj.write(output)
if __name__ == "__main__":
if len(sys.argv) > 1:
main(open(sys.argv[1], "r+"))
else:
main(sys.stdin)
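The loop in `main()` condenses to a short standalone sketch of the same wrap-or-preserve policy (illustrative; `wrap_message` is a hypothetical helper name, not part of the tool):

```python
import textwrap

def wrap(text):
    # Mirrors wrap() above: fill accumulated prose, pass empty text through.
    return textwrap.fill(text, break_long_words=False) + "\n" if text else ""

def wrap_message(lines):
    """Wrap runs of flush-left prose; emit indented (manually
    formatted) lines untouched, flushing any pending prose first."""
    text, output = "", ""
    for line in lines:
        if line.lstrip() == line:   # no leading whitespace: prose
            text += line
        else:                       # indented: preserve formatting
            output += wrap(text)
            text = ""
            output += line
    return output + wrap(text)

result = wrap_message(["word " * 20 + "\n", "    preserve_formatting(line);\n"])
```

The indented line passes through byte-for-byte while the prose run is re-flowed to the default 70-column width.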


@@ -1,6 +1,6 @@
/*!\page usage Usage
The vpx Multi-Format codec SDK provides a unified interface amongst its
The vpx multi-format codec SDK provides a unified interface amongst its
supported codecs. This abstraction allows applications using this SDK to
easily support multiple video formats with minimal code duplication or
"special casing." This section describes the interface common to all codecs.
@@ -14,8 +14,12 @@
For more information on decoder and encoder specific usage, see the
following pages:
\if decoder - \subpage usage_decode \endif
\if decoder - \subpage usage_encode \endif
\if decoder
- \subpage usage_decode
\endif
\if decoder
- \subpage usage_encode
\endif
\section usage_types Important Data Types
There are two important data structures to consider in this interface.


@@ -37,14 +37,15 @@ static void update_mode_info_border(MODE_INFO *mi, int rows, int cols)
void vp8_de_alloc_frame_buffers(VP8_COMMON *oci)
{
int i;
for (i = 0; i < NUM_YV12_BUFFERS; i++)
vp8_yv12_de_alloc_frame_buffer(&oci->yv12_fb[i]);
vp8_yv12_de_alloc_frame_buffer(&oci->temp_scale_frame);
#if CONFIG_POSTPROC
vp8_yv12_de_alloc_frame_buffer(&oci->post_proc_buffer);
if (oci->post_proc_buffer_int_used)
vp8_yv12_de_alloc_frame_buffer(&oci->post_proc_buffer_int);
#endif
vpx_free(oci->above_context);
vpx_free(oci->mip);
@@ -97,6 +98,7 @@ int vp8_alloc_frame_buffers(VP8_COMMON *oci, int width, int height)
return 1;
}
#if CONFIG_POSTPROC
if (vp8_yv12_alloc_frame_buffer(&oci->post_proc_buffer, width, height, VP8BORDERINPIXELS) < 0)
{
vp8_de_alloc_frame_buffers(oci);
@@ -104,6 +106,9 @@ int vp8_alloc_frame_buffers(VP8_COMMON *oci, int width, int height)
}
oci->post_proc_buffer_int_used = 0;
vpx_memset(&oci->postproc_state, 0, sizeof(oci->postproc_state));
vpx_memset((&oci->post_proc_buffer)->buffer_alloc,128,(&oci->post_proc_buffer)->frame_size);
#endif
oci->mb_rows = height >> 4;
oci->mb_cols = width >> 4;
@@ -203,7 +208,7 @@ void vp8_create_common(VP8_COMMON *oci)
oci->clr_type = REG_YUV;
oci->clamp_type = RECON_CLAMP_REQUIRED;
/* Initialise reference frame sign bias structure to defaults */
/* Initialize reference frame sign bias structure to defaults */
vpx_memset(oci->ref_frame_sign_bias, 0, sizeof(oci->ref_frame_sign_bias));
/* Default disable buffer to buffer copying */
@@ -215,13 +220,3 @@ void vp8_remove_common(VP8_COMMON *oci)
{
vp8_de_alloc_frame_buffers(oci);
}
void vp8_initialize_common()
{
vp8_coef_tree_initialize();
vp8_entropy_mode_init();
vp8_init_scan_order_mask();
}


@@ -9,6 +9,11 @@
;
bilinear_taps_coeff
DCD 128, 0, 112, 16, 96, 32, 80, 48, 64, 64, 48, 80, 32, 96, 16, 112
;-----------------
EXPORT |vp8_sub_pixel_variance16x16_neon_func|
ARM
REQUIRE8
@@ -27,7 +32,7 @@
|vp8_sub_pixel_variance16x16_neon_func| PROC
push {r4-r6, lr}
ldr r12, _BilinearTaps_coeff_
adr r12, bilinear_taps_coeff
ldr r4, [sp, #16] ;load *dst_ptr from stack
ldr r5, [sp, #20] ;load dst_pixels_per_line from stack
ldr r6, [sp, #24] ;load *sse from stack
@@ -415,11 +420,4 @@ sub_pixel_variance16x16_neon_loop
ENDP
;-----------------
_BilinearTaps_coeff_
DCD bilinear_taps_coeff
bilinear_taps_coeff
DCD 128, 0, 112, 16, 96, 32, 80, 48, 64, 64, 48, 80, 32, 96, 16, 112
END


@@ -27,7 +27,7 @@
|vp8_sub_pixel_variance8x8_neon| PROC
push {r4-r5, lr}
ldr r12, _BilinearTaps_coeff_
adr r12, bilinear_taps_coeff
ldr r4, [sp, #12] ;load *dst_ptr from stack
ldr r5, [sp, #16] ;load dst_pixels_per_line from stack
ldr lr, [sp, #20] ;load *sse from stack
@@ -216,8 +216,6 @@ sub_pixel_variance8x8_neon_loop
;-----------------
_BilinearTaps_coeff_
DCD bilinear_taps_coeff
bilinear_taps_coeff
DCD 128, 0, 112, 16, 96, 32, 80, 48, 64, 64, 48, 80, 32, 96, 16, 112


@@ -10,7 +10,7 @@
#include "vpx_config.h"
#include "vpx_rtcd.h"
#include "vp8/encoder/variance.h"
#include "vp8/common/variance.h"
#include "vp8/common/filter.h"
#if HAVE_MEDIA
@@ -97,6 +97,17 @@ unsigned int vp8_sub_pixel_variance16x16_armv6
#if HAVE_NEON
extern unsigned int vp8_sub_pixel_variance16x16_neon_func
(
const unsigned char *src_ptr,
int src_pixels_per_line,
int xoffset,
int yoffset,
const unsigned char *dst_ptr,
int dst_pixels_per_line,
unsigned int *sse
);
unsigned int vp8_sub_pixel_variance16x16_neon
(
const unsigned char *src_ptr,


@@ -15,6 +15,10 @@
#include "vpx_scale/yv12config.h"
#include "vp8/common/blockd.h"
#if CONFIG_POSTPROC
#include "postproc.h"
#endif /* CONFIG_POSTPROC */
BEGIN
/* vpx_scale */
@@ -30,6 +34,11 @@ DEFINE(yv12_buffer_config_v_buffer, offsetof(YV12_BUFFER_CONFIG, v_b
DEFINE(yv12_buffer_config_border, offsetof(YV12_BUFFER_CONFIG, border));
DEFINE(VP8BORDERINPIXELS_VAL, VP8BORDERINPIXELS);
#if CONFIG_POSTPROC
/* mfqe.c / filter_by_weight */
DEFINE(MFQE_PRECISION_VAL, MFQE_PRECISION);
#endif /* CONFIG_POSTPROC */
END
/* add asserts for any offset that is not supported by assembly code */
@@ -53,3 +62,10 @@ ct_assert(B_HU_PRED, B_HU_PRED == 9);
/* vp8_yv12_extend_frame_borders_neon makes several assumptions based on this */
ct_assert(VP8BORDERINPIXELS_VAL, VP8BORDERINPIXELS == 32)
#endif
#if HAVE_SSE2
#if CONFIG_POSTPROC
/* vp8_filter_by_weight16x16 and 8x8 */
ct_assert(MFQE_PRECISION_VAL, MFQE_PRECISION == 4)
#endif /* CONFIG_POSTPROC */
#endif /* HAVE_SSE2 */


@@ -150,14 +150,15 @@ typedef enum
typedef struct
{
MB_PREDICTION_MODE mode, uv_mode;
MV_REFERENCE_FRAME ref_frame;
uint8_t mode, uv_mode;
uint8_t ref_frame;
uint8_t is_4x4;
int_mv mv;
unsigned char partitioning;
unsigned char mb_skip_coeff; /* does this mb has coefficients at all, 1=no coefficients, 0=need decode tokens */
unsigned char need_to_clamp_mvs;
unsigned char segment_id; /* Which set of segmentation parameters should be used for this MB */
uint8_t partitioning;
uint8_t mb_skip_coeff; /* does this mb has coefficients at all, 1=no coefficients, 0=need decode tokens */
uint8_t need_to_clamp_mvs;
uint8_t segment_id; /* Which set of segmentation parameters should be used for this MB */
} MB_MODE_INFO;
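The point of narrowing these fields to `uint8_t` is struct size: enum-typed members occupy a full `int` under the usual ABI, and one MB_MODE_INFO is kept per macroblock. A hedged Python/ctypes illustration (field layout simplified; `int_mv` modeled as a bare 32-bit word):

```python
import ctypes

class IntMv(ctypes.Structure):          # stand-in for int_mv
    _fields_ = [("as_int", ctypes.c_uint32)]

class MbModeInfoOld(ctypes.Structure):  # enum fields occupy full ints
    _fields_ = [("mode", ctypes.c_int), ("uv_mode", ctypes.c_int),
                ("ref_frame", ctypes.c_int), ("mv", IntMv),
                ("partitioning", ctypes.c_uint8),
                ("mb_skip_coeff", ctypes.c_uint8),
                ("need_to_clamp_mvs", ctypes.c_uint8),
                ("segment_id", ctypes.c_uint8)]

class MbModeInfoNew(ctypes.Structure):  # same fields narrowed to uint8_t
    _fields_ = [("mode", ctypes.c_uint8), ("uv_mode", ctypes.c_uint8),
                ("ref_frame", ctypes.c_uint8), ("is_4x4", ctypes.c_uint8),
                ("mv", IntMv),
                ("partitioning", ctypes.c_uint8),
                ("mb_skip_coeff", ctypes.c_uint8),
                ("need_to_clamp_mvs", ctypes.c_uint8),
                ("segment_id", ctypes.c_uint8)]

print(ctypes.sizeof(MbModeInfoOld), ctypes.sizeof(MbModeInfoNew))  # typically 20 12
```

Since the mode-info array scales with frame size, the per-macroblock saving adds up quickly at HD resolutions.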
typedef struct
@@ -215,11 +216,21 @@ typedef struct macroblockd
MODE_INFO *mode_info_context;
int mode_info_stride;
#if CONFIG_TEMPORAL_DENOISING
MB_PREDICTION_MODE best_sse_inter_mode;
int_mv best_sse_mv;
unsigned char need_to_clamp_best_mvs;
#endif
FRAME_TYPE frame_type;
int up_available;
int left_available;
unsigned char *recon_above[3];
unsigned char *recon_left[3];
int recon_left_stride[2];
/* Y,U,V,Y2 */
ENTROPY_CONTEXT_PLANES *above_context;
ENTROPY_CONTEXT_PLANES *left_context;


@@ -8,23 +8,11 @@
* be found in the AUTHORS file in the root of the source tree.
*/
#include <stdio.h>
#include "entropy.h"
#include "string.h"
#include "blockd.h"
#include "onyxc_int.h"
#include "vpx_mem/vpx_mem.h"
#define uchar unsigned char /* typedefs can clash */
#define uint unsigned int
typedef const uchar cuchar;
typedef const uint cuint;
typedef vp8_prob Prob;
#include "coefupdateprobs.h"
DECLARE_ALIGNED(16, const unsigned char, vp8_norm[256]) =
@@ -47,10 +35,11 @@ DECLARE_ALIGNED(16, const unsigned char, vp8_norm[256]) =
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
};
DECLARE_ALIGNED(16, cuchar, vp8_coef_bands[16]) =
DECLARE_ALIGNED(16, const unsigned char, vp8_coef_bands[16]) =
{ 0, 1, 2, 3, 6, 4, 5, 6, 6, 6, 6, 6, 6, 6, 6, 7};
DECLARE_ALIGNED(16, cuchar, vp8_prev_token_class[MAX_ENTROPY_TOKENS]) =
DECLARE_ALIGNED(16, const unsigned char,
vp8_prev_token_class[MAX_ENTROPY_TOKENS]) =
{ 0, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0};
DECLARE_ALIGNED(16, const int, vp8_default_zig_zag1d[16]) =
@@ -69,7 +58,26 @@ DECLARE_ALIGNED(16, const short, vp8_default_inv_zig_zag[16]) =
10, 11, 15, 16
};
DECLARE_ALIGNED(16, short, vp8_default_zig_zag_mask[16]);
/* vp8_default_zig_zag_mask generated with:
void vp8_init_scan_order_mask()
{
int i;
for (i = 0; i < 16; i++)
{
vp8_default_zig_zag_mask[vp8_default_zig_zag1d[i]] = 1 << i;
}
}
*/
DECLARE_ALIGNED(16, const short, vp8_default_zig_zag_mask[16]) =
{
1, 2, 32, 64,
4, 16, 128, 4096,
8, 256, 2048, 8192,
512, 1024, 16384, -32768
};
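The generator comment above can be replayed directly. A short sketch (the `zig_zag1d` values are taken from `vp8_default_zig_zag1d` in the source, which is not shown in this hunk):

```python
# VP8's 4x4 zig-zag scan order (vp8_default_zig_zag1d).
zig_zag1d = [0, 1, 4, 8, 5, 2, 3, 6, 9, 12, 13, 10, 7, 11, 14, 15]

# mask[scan_position] = bit for the coefficient's place in scan order.
mask = [0] * 16
for i in range(16):
    mask[zig_zag1d[i]] = 1 << i

# Reinterpret as signed 16-bit "short" values, matching the C table
# (1 << 15 wraps to -32768).
mask = [m - 0x10000 if m >= 0x8000 else m for m in mask]
print(mask)
```

This reproduces the hard-coded table above, including the final `-32768` entry.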
const int vp8_mb_feature_data_bits[MB_LVL_MAX] = {7, 6};
@@ -90,56 +98,72 @@ const vp8_tree_index vp8_coef_tree[ 22] = /* corresponding _CONTEXT_NODEs */
-DCT_VAL_CATEGORY5, -DCT_VAL_CATEGORY6 /* 10 = CAT_FIVE */
};
struct vp8_token_struct vp8_coef_encodings[MAX_ENTROPY_TOKENS];
/* vp8_coef_encodings generated with:
vp8_tokens_from_tree(vp8_coef_encodings, vp8_coef_tree);
*/
const vp8_token vp8_coef_encodings[MAX_ENTROPY_TOKENS] =
{
{2, 2},
{6, 3},
{28, 5},
{58, 6},
{59, 6},
{60, 6},
{61, 6},
{124, 7},
{125, 7},
{126, 7},
{127, 7},
{0, 1}
};
/* Trees for extra bits. Probabilities are constant and
do not depend on previously encoded bits */
static const Prob Pcat1[] = { 159};
static const Prob Pcat2[] = { 165, 145};
static const Prob Pcat3[] = { 173, 148, 140};
static const Prob Pcat4[] = { 176, 155, 140, 135};
static const Prob Pcat5[] = { 180, 157, 141, 134, 130};
static const Prob Pcat6[] =
static const vp8_prob Pcat1[] = { 159};
static const vp8_prob Pcat2[] = { 165, 145};
static const vp8_prob Pcat3[] = { 173, 148, 140};
static const vp8_prob Pcat4[] = { 176, 155, 140, 135};
static const vp8_prob Pcat5[] = { 180, 157, 141, 134, 130};
static const vp8_prob Pcat6[] =
{ 254, 254, 243, 230, 196, 177, 153, 140, 133, 130, 129};
static vp8_tree_index cat1[2], cat2[4], cat3[6], cat4[8], cat5[10], cat6[22];
void vp8_init_scan_order_mask()
{
int i;
/* tree index tables generated with:
for (i = 0; i < 16; i++)
void init_bit_tree(vp8_tree_index *p, int n)
{
vp8_default_zig_zag_mask[vp8_default_zig_zag1d[i]] = 1 << i;
int i = 0;
while (++i < n)
{
p[0] = p[1] = i << 1;
p += 2;
}
p[0] = p[1] = 0;
}
}
static void init_bit_tree(vp8_tree_index *p, int n)
{
int i = 0;
while (++i < n)
void init_bit_trees()
{
p[0] = p[1] = i << 1;
p += 2;
init_bit_tree(cat1, 1);
init_bit_tree(cat2, 2);
init_bit_tree(cat3, 3);
init_bit_tree(cat4, 4);
init_bit_tree(cat5, 5);
init_bit_tree(cat6, 11);
}
*/
p[0] = p[1] = 0;
}
static const vp8_tree_index cat1[2] = { 0, 0 };
static const vp8_tree_index cat2[4] = { 2, 2, 0, 0 };
static const vp8_tree_index cat3[6] = { 2, 2, 4, 4, 0, 0 };
static const vp8_tree_index cat4[8] = { 2, 2, 4, 4, 6, 6, 0, 0 };
static const vp8_tree_index cat5[10] = { 2, 2, 4, 4, 6, 6, 8, 8, 0, 0 };
static const vp8_tree_index cat6[22] = { 2, 2, 4, 4, 6, 6, 8, 8, 10, 10, 12, 12,
14, 14, 16, 16, 18, 18, 20, 20, 0, 0 };
static void init_bit_trees()
{
init_bit_tree(cat1, 1);
init_bit_tree(cat2, 2);
init_bit_tree(cat3, 3);
init_bit_tree(cat4, 4);
init_bit_tree(cat5, 5);
init_bit_tree(cat6, 11);
}
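The now-commented-out generator is likewise easy to replay; a sketch reproducing the hard-coded `cat*` tables above:

```python
def init_bit_tree(n):
    """Linear binary tree of n nodes: both children of node k point at
    node k+1 (stored as index * 2), and the final node's children are
    leaves (0). Mirrors init_bit_tree() shown in the comment above."""
    p = []
    for i in range(1, n):
        p += [i << 1, i << 1]
    return p + [0, 0]

print(init_bit_tree(2))   # cat2: [2, 2, 0, 0]
print(init_bit_tree(11))  # cat6: [2, 2, 4, 4, ..., 20, 20, 0, 0]
```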
vp8_extra_bit_struct vp8_extra_bits[12] =
const vp8_extra_bit_struct vp8_extra_bits[12] =
{
{ 0, 0, 0, 0},
{ 0, 0, 0, 1},
@@ -163,8 +187,3 @@ void vp8_default_coef_probs(VP8_COMMON *pc)
sizeof(default_coef_probs));
}
void vp8_coef_tree_initialize()
{
init_bit_trees();
vp8_tokens_from_tree(vp8_coef_encodings, vp8_coef_tree);
}


@@ -35,7 +35,7 @@
extern const vp8_tree_index vp8_coef_tree[];
extern struct vp8_token_struct vp8_coef_encodings[MAX_ENTROPY_TOKENS];
extern const struct vp8_token_struct vp8_coef_encodings[MAX_ENTROPY_TOKENS];
typedef struct
{
@@ -45,7 +45,7 @@ typedef struct
int base_val;
} vp8_extra_bit_struct;
extern vp8_extra_bit_struct vp8_extra_bits[12]; /* indexed by token value */
extern const vp8_extra_bit_struct vp8_extra_bits[12]; /* indexed by token value */
#define PROB_UPDATE_BASELINE_COST 7
@@ -94,7 +94,7 @@ void vp8_default_coef_probs(struct VP8Common *);
extern DECLARE_ALIGNED(16, const int, vp8_default_zig_zag1d[16]);
extern DECLARE_ALIGNED(16, const short, vp8_default_inv_zig_zag[16]);
extern short vp8_default_zig_zag_mask[16];
extern DECLARE_ALIGNED(16, const short, vp8_default_zig_zag_mask[16]);
extern const int vp8_mb_feature_data_bits[MB_LVL_MAX];
void vp8_coef_tree_initialize(void);


@@ -8,22 +8,13 @@
* be found in the AUTHORS file in the root of the source tree.
*/
#define USE_PREBUILT_TABLES
#include "entropymode.h"
#include "entropy.h"
#include "vpx_mem/vpx_mem.h"
static const unsigned int kf_y_mode_cts[VP8_YMODES] = { 1607, 915, 812, 811, 5455};
static const unsigned int y_mode_cts [VP8_YMODES] = { 8080, 1908, 1582, 1007, 5874};
static const unsigned int uv_mode_cts [VP8_UV_MODES] = { 59483, 13605, 16492, 4230};
static const unsigned int kf_uv_mode_cts[VP8_UV_MODES] = { 5319, 1904, 1703, 674};
static const unsigned int bmode_cts[VP8_BINTRAMODES] =
{
43891, 17694, 10036, 3920, 3363, 2546, 5119, 3221, 2471, 1723
};
#include "vp8_entropymodedata.h"
int vp8_mv_cont(const int_mv *l, const int_mv *a)
{
@@ -59,7 +50,7 @@ const vp8_prob vp8_sub_mv_ref_prob2 [SUBMVREF_COUNT][VP8_SUBMVREFS-1] =
vp8_mbsplit vp8_mbsplits [VP8_NUMMBSPLITS] =
const vp8_mbsplit vp8_mbsplits [VP8_NUMMBSPLITS] =
{
{
0, 0, 0, 0,
@@ -84,7 +75,7 @@ vp8_mbsplit vp8_mbsplits [VP8_NUMMBSPLITS] =
4, 5, 6, 7,
8, 9, 10, 11,
12, 13, 14, 15,
},
}
};
const int vp8_mbsplit_count [VP8_NUMMBSPLITS] = { 2, 2, 4, 16};
@@ -155,17 +146,6 @@ const vp8_tree_index vp8_sub_mv_ref_tree[6] =
-ZERO4X4, -NEW4X4
};
struct vp8_token_struct vp8_bmode_encodings [VP8_BINTRAMODES];
struct vp8_token_struct vp8_ymode_encodings [VP8_YMODES];
struct vp8_token_struct vp8_kf_ymode_encodings [VP8_YMODES];
struct vp8_token_struct vp8_uv_mode_encodings [VP8_UV_MODES];
struct vp8_token_struct vp8_mbsplit_encodings [VP8_NUMMBSPLITS];
struct vp8_token_struct vp8_mv_ref_encoding_array [VP8_MVREFS];
struct vp8_token_struct vp8_sub_mv_ref_encoding_array [VP8_SUBMVREFS];
const vp8_tree_index vp8_small_mvtree [14] =
{
2, 8,
@@ -177,89 +157,21 @@ const vp8_tree_index vp8_small_mvtree [14] =
-6, -7
};
struct vp8_token_struct vp8_small_mvencodings [8];
void vp8_init_mbmode_probs(VP8_COMMON *x)
{
unsigned int bct [VP8_YMODES] [2]; /* num Ymodes > num UV modes */
vp8_tree_probs_from_distribution(
VP8_YMODES, vp8_ymode_encodings, vp8_ymode_tree,
x->fc.ymode_prob, bct, y_mode_cts,
256, 1
);
vp8_tree_probs_from_distribution(
VP8_YMODES, vp8_kf_ymode_encodings, vp8_kf_ymode_tree,
x->kf_ymode_prob, bct, kf_y_mode_cts,
256, 1
);
vp8_tree_probs_from_distribution(
VP8_UV_MODES, vp8_uv_mode_encodings, vp8_uv_mode_tree,
x->fc.uv_mode_prob, bct, uv_mode_cts,
256, 1
);
vp8_tree_probs_from_distribution(
VP8_UV_MODES, vp8_uv_mode_encodings, vp8_uv_mode_tree,
x->kf_uv_mode_prob, bct, kf_uv_mode_cts,
256, 1
);
vpx_memcpy(x->fc.ymode_prob, vp8_ymode_prob, sizeof(vp8_ymode_prob));
vpx_memcpy(x->kf_ymode_prob, vp8_kf_ymode_prob, sizeof(vp8_kf_ymode_prob));
vpx_memcpy(x->fc.uv_mode_prob, vp8_uv_mode_prob, sizeof(vp8_uv_mode_prob));
vpx_memcpy(x->kf_uv_mode_prob, vp8_kf_uv_mode_prob, sizeof(vp8_kf_uv_mode_prob));
vpx_memcpy(x->fc.sub_mv_ref_prob, sub_mv_ref_prob, sizeof(sub_mv_ref_prob));
}
static void intra_bmode_probs_from_distribution(
vp8_prob p [VP8_BINTRAMODES-1],
unsigned int branch_ct [VP8_BINTRAMODES-1] [2],
const unsigned int events [VP8_BINTRAMODES]
)
{
vp8_tree_probs_from_distribution(
VP8_BINTRAMODES, vp8_bmode_encodings, vp8_bmode_tree,
p, branch_ct, events,
256, 1
);
}
void vp8_default_bmode_probs(vp8_prob p [VP8_BINTRAMODES-1])
{
unsigned int branch_ct [VP8_BINTRAMODES-1] [2];
intra_bmode_probs_from_distribution(p, branch_ct, bmode_cts);
vpx_memcpy(p, vp8_bmode_prob, sizeof(vp8_bmode_prob));
}
void vp8_kf_default_bmode_probs(vp8_prob p [VP8_BINTRAMODES] [VP8_BINTRAMODES] [VP8_BINTRAMODES-1])
{
unsigned int branch_ct [VP8_BINTRAMODES-1] [2];
int i = 0;
do
{
int j = 0;
do
{
intra_bmode_probs_from_distribution(
p[i][j], branch_ct, vp8_kf_default_bmode_counts[i][j]);
}
while (++j < VP8_BINTRAMODES);
}
while (++i < VP8_BINTRAMODES);
}
void vp8_entropy_mode_init()
{
vp8_tokens_from_tree(vp8_bmode_encodings, vp8_bmode_tree);
vp8_tokens_from_tree(vp8_ymode_encodings, vp8_ymode_tree);
vp8_tokens_from_tree(vp8_kf_ymode_encodings, vp8_kf_ymode_tree);
vp8_tokens_from_tree(vp8_uv_mode_encodings, vp8_uv_mode_tree);
vp8_tokens_from_tree(vp8_mbsplit_encodings, vp8_mbsplit_tree);
vp8_tokens_from_tree_offset(vp8_mv_ref_encoding_array,
vp8_mv_ref_tree, NEARESTMV);
vp8_tokens_from_tree_offset(vp8_sub_mv_ref_encoding_array,
vp8_sub_mv_ref_tree, LEFT4X4);
vp8_tokens_from_tree(vp8_small_mvencodings, vp8_small_mvtree);
vpx_memcpy(p, vp8_kf_bmode_prob, sizeof(vp8_kf_bmode_prob));
}


@@ -52,22 +52,20 @@ extern const vp8_tree_index vp8_mbsplit_tree[];
extern const vp8_tree_index vp8_mv_ref_tree[];
extern const vp8_tree_index vp8_sub_mv_ref_tree[];
extern struct vp8_token_struct vp8_bmode_encodings [VP8_BINTRAMODES];
extern struct vp8_token_struct vp8_ymode_encodings [VP8_YMODES];
extern struct vp8_token_struct vp8_kf_ymode_encodings [VP8_YMODES];
extern struct vp8_token_struct vp8_uv_mode_encodings [VP8_UV_MODES];
extern struct vp8_token_struct vp8_mbsplit_encodings [VP8_NUMMBSPLITS];
extern const struct vp8_token_struct vp8_bmode_encodings[VP8_BINTRAMODES];
extern const struct vp8_token_struct vp8_ymode_encodings[VP8_YMODES];
extern const struct vp8_token_struct vp8_kf_ymode_encodings[VP8_YMODES];
extern const struct vp8_token_struct vp8_uv_mode_encodings[VP8_UV_MODES];
extern const struct vp8_token_struct vp8_mbsplit_encodings[VP8_NUMMBSPLITS];
/* Inter mode values do not start at zero */
extern struct vp8_token_struct vp8_mv_ref_encoding_array [VP8_MVREFS];
extern struct vp8_token_struct vp8_sub_mv_ref_encoding_array [VP8_SUBMVREFS];
extern const struct vp8_token_struct vp8_mv_ref_encoding_array[VP8_MVREFS];
extern const struct vp8_token_struct vp8_sub_mv_ref_encoding_array[VP8_SUBMVREFS];
extern const vp8_tree_index vp8_small_mvtree[];
extern struct vp8_token_struct vp8_small_mvencodings [8];
void vp8_entropy_mode_init(void);
extern const struct vp8_token_struct vp8_small_mvencodings[8];
void vp8_init_mbmode_probs(VP8_COMMON *x);


@@ -82,6 +82,58 @@ static int get_cpu_count()
}
#endif
#if HAVE_PTHREAD_H
#include <pthread.h>
static void once(void (*func)(void))
{
static pthread_once_t lock = PTHREAD_ONCE_INIT;
pthread_once(&lock, func);
}
#elif defined(_WIN32)
static void once(void (*func)(void))
{
/* Using a static initializer here rather than InitializeCriticalSection()
* since there's no race-free context in which to execute it. Protecting
* it with an atomic op like InterlockedCompareExchangePointer introduces
* an x86 dependency, and InitOnceExecuteOnce requires Vista.
*/
static CRITICAL_SECTION lock = {(void *)-1, -1, 0, 0, 0, 0};
static int done;
EnterCriticalSection(&lock);
if (!done)
{
func();
done = 1;
}
LeaveCriticalSection(&lock);
}
#else
/* No-op version that performs no synchronization. vpx_rtcd() is idempotent,
* so as long as your platform provides atomic loads/stores of pointers
* no synchronization is strictly necessary.
*/
static void once(void (*func)(void))
{
static int done;
if(!done)
{
func();
done = 1;
}
}
#endif
void vp8_machine_specific_config(VP8_COMMON *ctx)
{
#if CONFIG_MULTITHREAD
@@ -94,5 +146,5 @@ void vp8_machine_specific_config(VP8_COMMON *ctx)
ctx->cpu_caps = x86_simd_caps();
#endif
vpx_rtcd();
once(vpx_rtcd);
}
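The `once()` contract above — run an initializer exactly once no matter how many callers race to invoke it — translates to any language with a mutex. A hedged Python sketch of the same idea:

```python
import threading

def make_once(func):
    """Return a wrapper that runs func at most once, pthread_once-style."""
    lock = threading.Lock()
    done = [False]
    def once():
        with lock:                # serialize racing callers
            if not done[0]:
                func()
                done[0] = True
    return once

calls = []
init = make_once(lambda: calls.append(1))
threads = [threading.Thread(target=init) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(calls))  # 1: the initializer ran exactly once
```

As in the no-op fallback in the C code, the lock is only strictly needed when callers can actually race; `vpx_rtcd()` being idempotent makes the unsynchronized version tolerable on platforms with atomic pointer stores.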

vp8/common/idctllm_test.cc Executable file

@@ -0,0 +1,31 @@
/*
* Copyright (c) 2010 The WebM project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
extern "C" {
void vp8_short_idct4x4llm_c(short *input, unsigned char *pred_ptr,
int pred_stride, unsigned char *dst_ptr,
int dst_stride);
}
#include "vpx_config.h"
#include "idctllm_test.h"
namespace
{
INSTANTIATE_TEST_CASE_P(C, IDCTTest,
::testing::Values(vp8_short_idct4x4llm_c));
} // namespace
int main(int argc, char **argv) {
::testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}

vp8/common/idctllm_test.h Executable file

@@ -0,0 +1,113 @@
/*
* Copyright (c) 2010 The WebM project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#include "third_party/googletest/src/include/gtest/gtest.h"
typedef void (*idct_fn_t)(short *input, unsigned char *pred_ptr,
int pred_stride, unsigned char *dst_ptr,
int dst_stride);
namespace {
class IDCTTest : public ::testing::TestWithParam<idct_fn_t>
{
protected:
virtual void SetUp()
{
int i;
UUT = GetParam();
memset(input, 0, sizeof(input));
/* Set up guard blocks */
for(i=0; i<256; i++)
output[i] = ((i&0xF)<4&&(i<64))?0:-1;
}
idct_fn_t UUT;
short input[16];
unsigned char output[256];
unsigned char predict[256];
};
TEST_P(IDCTTest, TestGuardBlocks)
{
int i;
for(i=0; i<256; i++)
if((i&0xF) < 4 && i<64)
EXPECT_EQ(0, output[i]) << i;
else
EXPECT_EQ(255, output[i]);
}
TEST_P(IDCTTest, TestAllZeros)
{
int i;
UUT(input, output, 16, output, 16);
for(i=0; i<256; i++)
if((i&0xF) < 4 && i<64)
EXPECT_EQ(0, output[i]) << "i==" << i;
else
EXPECT_EQ(255, output[i]) << "i==" << i;
}
TEST_P(IDCTTest, TestAllOnes)
{
int i;
input[0] = 4;
UUT(input, output, 16, output, 16);
for(i=0; i<256; i++)
if((i&0xF) < 4 && i<64)
EXPECT_EQ(1, output[i]) << "i==" << i;
else
EXPECT_EQ(255, output[i]) << "i==" << i;
}
TEST_P(IDCTTest, TestAddOne)
{
int i;
for(i=0; i<256; i++)
predict[i] = i;
input[0] = 4;
UUT(input, predict, 16, output, 16);
for(i=0; i<256; i++)
if((i&0xF) < 4 && i<64)
EXPECT_EQ(i+1, output[i]) << "i==" << i;
else
EXPECT_EQ(255, output[i]) << "i==" << i;
}
TEST_P(IDCTTest, TestWithData)
{
int i;
for(i=0; i<16; i++)
input[i] = i;
UUT(input, output, 16, output, 16);
for(i=0; i<256; i++)
if((i&0xF) > 3 || i>63)
EXPECT_EQ(255, output[i]) << "i==" << i;
else if(i == 0)
EXPECT_EQ(11, output[i]) << "i==" << i;
else if(i == 34)
EXPECT_EQ(1, output[i]) << "i==" << i;
else if(i == 2 || i == 17 || i == 32)
EXPECT_EQ(3, output[i]) << "i==" << i;
else
EXPECT_EQ(0, output[i]) << "i==" << i;
}
}
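A hedged note on the expected values in these tests: with only the DC coefficient set, the 4x4 inverse transform reduces to adding `(dc + 4) >> 3` to every predictor pixel, which is why `input[0] = 4` yields all-ones output (TestAllOnes) and `pred + 1` (TestAddOne). A sketch of just that DC-only path (`dc_only_idct4x4` is a hypothetical helper; full AC handling is omitted):

```python
def dc_only_idct4x4(dc, pred, pred_stride=16):
    """Add the DC residual (dc + 4) >> 3 to each pixel of a 4x4
    predictor block, clamping to the 8-bit pixel range."""
    residual = (dc + 4) >> 3
    return [[max(0, min(255, pred[r * pred_stride + c] + residual))
             for c in range(4)]
            for r in range(4)]
```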


@@ -210,6 +210,8 @@ void vp8_loop_filter_frame
int mb_row;
int mb_col;
int mb_rows = cm->mb_rows;
int mb_cols = cm->mb_cols;
int filter_level;
@@ -217,6 +219,8 @@ void vp8_loop_filter_frame
/* Point at base of Mb MODE_INFO list */
const MODE_INFO *mode_info_context = cm->mi;
int post_y_stride = post->y_stride;
int post_uv_stride = post->uv_stride;
/* Initialize the loop filter for this frame. */
vp8_loop_filter_frame_init(cm, mbd, cm->filter_level);
@@ -227,23 +231,23 @@ void vp8_loop_filter_frame
v_ptr = post->v_buffer;
/* vp8_filter each macro block */
for (mb_row = 0; mb_row < cm->mb_rows; mb_row++)
if (cm->filter_type == NORMAL_LOOPFILTER)
{
for (mb_col = 0; mb_col < cm->mb_cols; mb_col++)
for (mb_row = 0; mb_row < mb_rows; mb_row++)
{
int skip_lf = (mode_info_context->mbmi.mode != B_PRED &&
mode_info_context->mbmi.mode != SPLITMV &&
mode_info_context->mbmi.mb_skip_coeff);
const int mode_index = lfi_n->mode_lf_lut[mode_info_context->mbmi.mode];
const int seg = mode_info_context->mbmi.segment_id;
const int ref_frame = mode_info_context->mbmi.ref_frame;
filter_level = lfi_n->lvl[seg][ref_frame][mode_index];
if (filter_level)
for (mb_col = 0; mb_col < mb_cols; mb_col++)
{
if (cm->filter_type == NORMAL_LOOPFILTER)
int skip_lf = (mode_info_context->mbmi.mode != B_PRED &&
mode_info_context->mbmi.mode != SPLITMV &&
mode_info_context->mbmi.mb_skip_coeff);
const int mode_index = lfi_n->mode_lf_lut[mode_info_context->mbmi.mode];
const int seg = mode_info_context->mbmi.segment_id;
const int ref_frame = mode_info_context->mbmi.ref_frame;
filter_level = lfi_n->lvl[seg][ref_frame][mode_index];
if (filter_level)
{
const int hev_index = lfi_n->hev_thr_lut[frame_type][filter_level];
lfi.mblim = lfi_n->mblim[filter_level];
@@ -253,54 +257,87 @@ void vp8_loop_filter_frame
if (mb_col > 0)
vp8_loop_filter_mbv
(y_ptr, u_ptr, v_ptr, post->y_stride, post->uv_stride, &lfi);
(y_ptr, u_ptr, v_ptr, post_y_stride, post_uv_stride, &lfi);
if (!skip_lf)
vp8_loop_filter_bv
(y_ptr, u_ptr, v_ptr, post->y_stride, post->uv_stride, &lfi);
(y_ptr, u_ptr, v_ptr, post_y_stride, post_uv_stride, &lfi);
/* don't apply across umv border */
if (mb_row > 0)
vp8_loop_filter_mbh
(y_ptr, u_ptr, v_ptr, post->y_stride, post->uv_stride, &lfi);
(y_ptr, u_ptr, v_ptr, post_y_stride, post_uv_stride, &lfi);
if (!skip_lf)
vp8_loop_filter_bh
(y_ptr, u_ptr, v_ptr, post->y_stride, post->uv_stride, &lfi);
(y_ptr, u_ptr, v_ptr, post_y_stride, post_uv_stride, &lfi);
}
else
y_ptr += 16;
u_ptr += 8;
v_ptr += 8;
mode_info_context++; /* step to next MB */
}
y_ptr += post_y_stride * 16 - post->y_width;
u_ptr += post_uv_stride * 8 - post->uv_width;
v_ptr += post_uv_stride * 8 - post->uv_width;
mode_info_context++; /* Skip border mb */
}
}
else /* SIMPLE_LOOPFILTER */
{
for (mb_row = 0; mb_row < mb_rows; mb_row++)
{
for (mb_col = 0; mb_col < mb_cols; mb_col++)
{
int skip_lf = (mode_info_context->mbmi.mode != B_PRED &&
mode_info_context->mbmi.mode != SPLITMV &&
mode_info_context->mbmi.mb_skip_coeff);
const int mode_index = lfi_n->mode_lf_lut[mode_info_context->mbmi.mode];
const int seg = mode_info_context->mbmi.segment_id;
const int ref_frame = mode_info_context->mbmi.ref_frame;
filter_level = lfi_n->lvl[seg][ref_frame][mode_index];
if (filter_level)
{
const unsigned char * mblim = lfi_n->mblim[filter_level];
const unsigned char * blim = lfi_n->blim[filter_level];
if (mb_col > 0)
vp8_loop_filter_simple_mbv
(y_ptr, post->y_stride, lfi_n->mblim[filter_level]);
(y_ptr, post_y_stride, mblim);
if (!skip_lf)
vp8_loop_filter_simple_bv
(y_ptr, post->y_stride, lfi_n->blim[filter_level]);
(y_ptr, post_y_stride, blim);
/* don't apply across umv border */
if (mb_row > 0)
vp8_loop_filter_simple_mbh
(y_ptr, post->y_stride, lfi_n->mblim[filter_level]);
(y_ptr, post_y_stride, mblim);
if (!skip_lf)
vp8_loop_filter_simple_bh
(y_ptr, post->y_stride, lfi_n->blim[filter_level]);
(y_ptr, post_y_stride, blim);
}
y_ptr += 16;
u_ptr += 8;
v_ptr += 8;
mode_info_context++; /* step to next MB */
}
y_ptr += post_y_stride * 16 - post->y_width;
u_ptr += post_uv_stride * 8 - post->uv_width;
v_ptr += post_uv_stride * 8 - post->uv_width;
y_ptr += 16;
u_ptr += 8;
v_ptr += 8;
mode_info_context++; /* Skip border mb */
mode_info_context++; /* step to next MB */
}
y_ptr += post->y_stride * 16 - post->y_width;
u_ptr += post->uv_stride * 8 - post->uv_width;
v_ptr += post->uv_stride * 8 - post->uv_width;
mode_info_context++; /* Skip border mb */
}
}


@@ -15,7 +15,7 @@
typedef unsigned char uc;
static __inline signed char vp8_signed_char_clamp(int t)
static signed char vp8_signed_char_clamp(int t)
{
t = (t < -128 ? -128 : t);
t = (t > 127 ? 127 : t);
@@ -24,9 +24,9 @@ static __inline signed char vp8_signed_char_clamp(int t)
/* should we apply any filter at all ( 11111111 yes, 00000000 no) */
-static __inline signed char vp8_filter_mask(uc limit, uc blimit,
-uc p3, uc p2, uc p1, uc p0,
-uc q0, uc q1, uc q2, uc q3)
+static signed char vp8_filter_mask(uc limit, uc blimit,
+uc p3, uc p2, uc p1, uc p0,
+uc q0, uc q1, uc q2, uc q3)
{
signed char mask = 0;
mask |= (abs(p3 - p2) > limit);
@@ -40,7 +40,7 @@ static __inline signed char vp8_filter_mask(uc limit, uc blimit,
}
/* is there high variance internal edge ( 11111111 yes, 00000000 no) */
-static __inline signed char vp8_hevmask(uc thresh, uc p1, uc p0, uc q0, uc q1)
+static signed char vp8_hevmask(uc thresh, uc p1, uc p0, uc q0, uc q1)
{
signed char hev = 0;
hev |= (abs(p1 - p0) > thresh) * -1;
@@ -48,7 +48,7 @@ static __inline signed char vp8_hevmask(uc thresh, uc p1, uc p0, uc q0, uc q1)
return hev;
}
-static __inline void vp8_filter(signed char mask, uc hev, uc *op1,
+static void vp8_filter(signed char mask, uc hev, uc *op1,
uc *op0, uc *oq0, uc *oq1)
{
@@ -158,7 +158,7 @@ void vp8_loop_filter_vertical_edge_c
while (++i < count * 8);
}
-static __inline void vp8_mbfilter(signed char mask, uc hev,
+static void vp8_mbfilter(signed char mask, uc hev,
uc *op2, uc *op1, uc *op0, uc *oq0, uc *oq1, uc *oq2)
{
signed char s, u;
@@ -279,7 +279,7 @@ void vp8_mbloop_filter_vertical_edge_c
}
/* should we apply any filter at all ( 11111111 yes, 00000000 no) */
-static __inline signed char vp8_simple_filter_mask(uc blimit, uc p1, uc p0, uc q0, uc q1)
+static signed char vp8_simple_filter_mask(uc blimit, uc p1, uc p0, uc q0, uc q1)
{
/* Why does this cause problems for win32?
* error C2143: syntax error : missing ';' before 'type'
@@ -289,7 +289,7 @@ static __inline signed char vp8_simple_filter_mask(uc blimit, uc p1, uc p0, uc q
return mask;
}
-static __inline void vp8_simple_filter(signed char mask, uc *op1, uc *op0, uc *oq0, uc *oq1)
+static void vp8_simple_filter(signed char mask, uc *op1, uc *op0, uc *oq0, uc *oq1)
{
signed char vp8_filter, Filter1, Filter2;
signed char p1 = (signed char) * op1 ^ 0x80;


@@ -11,45 +11,6 @@
#include "blockd.h"
typedef enum
{
PRED = 0,
DEST = 1
} BLOCKSET;
static void setup_macroblock(MACROBLOCKD *x, BLOCKSET bs)
{
int block;
unsigned char **y, **u, **v;
if (bs == DEST)
{
y = &x->dst.y_buffer;
u = &x->dst.u_buffer;
v = &x->dst.v_buffer;
}
else
{
y = &x->pre.y_buffer;
u = &x->pre.u_buffer;
v = &x->pre.v_buffer;
}
for (block = 0; block < 16; block++) /* y blocks */
{
x->block[block].offset =
(block >> 2) * 4 * x->dst.y_stride + (block & 3) * 4;
}
for (block = 16; block < 20; block++) /* U and V blocks */
{
x->block[block+4].offset =
x->block[block].offset =
((block - 16) >> 1) * 4 * x->dst.uv_stride + (block & 1) * 4;
}
}
void vp8_setup_block_dptrs(MACROBLOCKD *x)
{
int r, c;
@@ -90,8 +51,18 @@ void vp8_setup_block_dptrs(MACROBLOCKD *x)
void vp8_build_block_doffsets(MACROBLOCKD *x)
{
int block;
/* handle the destination pitch features */
setup_macroblock(x, DEST);
setup_macroblock(x, PRED);
for (block = 0; block < 16; block++) /* y blocks */
{
x->block[block].offset =
(block >> 2) * 4 * x->dst.y_stride + (block & 3) * 4;
}
for (block = 16; block < 20; block++) /* U and V blocks */
{
x->block[block+4].offset =
x->block[block].offset =
((block - 16) >> 1) * 4 * x->dst.uv_stride + (block & 1) * 4;
}
}

vp8/common/mfqe.c Normal file

@@ -0,0 +1,385 @@
/*
* Copyright (c) 2012 The WebM project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/* MFQE: Multiframe Quality Enhancement
* In rate-limited situations keyframes may cause significant visual artifacts
* commonly referred to as "popping." This file implements a postprocessing
* algorithm which blends data from the preceding frame when there is no
* motion and the q from the previous frame is lower, which indicates that it
* is higher quality.
*/
#include "postproc.h"
#include "variance.h"
#include "vpx_mem/vpx_mem.h"
#include "vpx_rtcd.h"
#include "vpx_scale/yv12config.h"
#include <limits.h>
#include <stdlib.h>
static void filter_by_weight(unsigned char *src, int src_stride,
unsigned char *dst, int dst_stride,
int block_size, int src_weight)
{
int dst_weight = (1 << MFQE_PRECISION) - src_weight;
int rounding_bit = 1 << (MFQE_PRECISION - 1);
int r, c;
for (r = 0; r < block_size; r++)
{
for (c = 0; c < block_size; c++)
{
dst[c] = (src[c] * src_weight +
dst[c] * dst_weight +
rounding_bit) >> MFQE_PRECISION;
}
src += src_stride;
dst += dst_stride;
}
}
void vp8_filter_by_weight16x16_c(unsigned char *src, int src_stride,
unsigned char *dst, int dst_stride,
int src_weight)
{
filter_by_weight(src, src_stride, dst, dst_stride, 16, src_weight);
}
void vp8_filter_by_weight8x8_c(unsigned char *src, int src_stride,
unsigned char *dst, int dst_stride,
int src_weight)
{
filter_by_weight(src, src_stride, dst, dst_stride, 8, src_weight);
}
void vp8_filter_by_weight4x4_c(unsigned char *src, int src_stride,
unsigned char *dst, int dst_stride,
int src_weight)
{
filter_by_weight(src, src_stride, dst, dst_stride, 4, src_weight);
}
static void apply_ifactor(unsigned char *y_src,
int y_src_stride,
unsigned char *y_dst,
int y_dst_stride,
unsigned char *u_src,
unsigned char *v_src,
int uv_src_stride,
unsigned char *u_dst,
unsigned char *v_dst,
int uv_dst_stride,
int block_size,
int src_weight)
{
if (block_size == 16)
{
vp8_filter_by_weight16x16(y_src, y_src_stride, y_dst, y_dst_stride, src_weight);
vp8_filter_by_weight8x8(u_src, uv_src_stride, u_dst, uv_dst_stride, src_weight);
vp8_filter_by_weight8x8(v_src, uv_src_stride, v_dst, uv_dst_stride, src_weight);
}
else /* if (block_size == 8) */
{
vp8_filter_by_weight8x8(y_src, y_src_stride, y_dst, y_dst_stride, src_weight);
vp8_filter_by_weight4x4(u_src, uv_src_stride, u_dst, uv_dst_stride, src_weight);
vp8_filter_by_weight4x4(v_src, uv_src_stride, v_dst, uv_dst_stride, src_weight);
}
}
static unsigned int int_sqrt(unsigned int x)
{
unsigned int y = x;
unsigned int guess;
int p = 1;
while (y>>=1) p++;
p>>=1;
guess=0;
while (p>=0)
{
guess |= (1<<p);
if (x<guess*guess)
guess -= (1<<p);
p--;
}
/* choose between guess or guess+1 */
return guess+(guess*guess+guess+1<=x);
}
#define USE_SSD
static void multiframe_quality_enhance_block
(
int blksize, /* Currently only values supported are 16, 8 */
int qcurr,
int qprev,
unsigned char *y,
unsigned char *u,
unsigned char *v,
int y_stride,
int uv_stride,
unsigned char *yd,
unsigned char *ud,
unsigned char *vd,
int yd_stride,
int uvd_stride
)
{
static const unsigned char VP8_ZEROS[16]=
{
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
};
int uvblksize = blksize >> 1;
int qdiff = qcurr - qprev;
int i;
unsigned char *up;
unsigned char *udp;
unsigned char *vp;
unsigned char *vdp;
unsigned int act, actd, sad, usad, vsad, sse, thr, thrsq, actrisk;
if (blksize == 16)
{
actd = (vp8_variance16x16(yd, yd_stride, VP8_ZEROS, 0, &sse)+128)>>8;
act = (vp8_variance16x16(y, y_stride, VP8_ZEROS, 0, &sse)+128)>>8;
#ifdef USE_SSD
sad = (vp8_variance16x16(y, y_stride, yd, yd_stride, &sse));
sad = (sse + 128)>>8;
usad = (vp8_variance8x8(u, uv_stride, ud, uvd_stride, &sse));
usad = (sse + 32)>>6;
vsad = (vp8_variance8x8(v, uv_stride, vd, uvd_stride, &sse));
vsad = (sse + 32)>>6;
#else
sad = (vp8_sad16x16(y, y_stride, yd, yd_stride, INT_MAX)+128)>>8;
usad = (vp8_sad8x8(u, uv_stride, ud, uvd_stride, INT_MAX)+32)>>6;
vsad = (vp8_sad8x8(v, uv_stride, vd, uvd_stride, INT_MAX)+32)>>6;
#endif
}
else /* if (blksize == 8) */
{
actd = (vp8_variance8x8(yd, yd_stride, VP8_ZEROS, 0, &sse)+32)>>6;
act = (vp8_variance8x8(y, y_stride, VP8_ZEROS, 0, &sse)+32)>>6;
#ifdef USE_SSD
sad = (vp8_variance8x8(y, y_stride, yd, yd_stride, &sse));
sad = (sse + 32)>>6;
usad = (vp8_variance4x4(u, uv_stride, ud, uvd_stride, &sse));
usad = (sse + 8)>>4;
vsad = (vp8_variance4x4(v, uv_stride, vd, uvd_stride, &sse));
vsad = (sse + 8)>>4;
#else
sad = (vp8_sad8x8(y, y_stride, yd, yd_stride, INT_MAX)+32)>>6;
usad = (vp8_sad4x4(u, uv_stride, ud, uvd_stride, INT_MAX)+8)>>4;
vsad = (vp8_sad4x4(v, uv_stride, vd, uvd_stride, INT_MAX)+8)>>4;
#endif
}
actrisk = (actd > act * 5);
/* thr = qdiff/8 + log2(act) + log4(qprev) */
thr = (qdiff >> 3);
while (actd >>= 1) thr++;
while (qprev >>= 2) thr++;
#ifdef USE_SSD
thrsq = thr * thr;
if (sad < thrsq &&
/* additional checks for color mismatch and excessive addition of
* high-frequencies */
4 * usad < thrsq && 4 * vsad < thrsq && !actrisk)
#else
if (sad < thr &&
/* additional checks for color mismatch and excessive addition of
* high-frequencies */
2 * usad < thr && 2 * vsad < thr && !actrisk)
#endif
{
int ifactor;
#ifdef USE_SSD
/* TODO: optimize this later to not need sqr root */
sad = int_sqrt(sad);
#endif
ifactor = (sad << MFQE_PRECISION) / thr;
ifactor >>= (qdiff >> 5);
if (ifactor)
{
apply_ifactor(y, y_stride, yd, yd_stride,
u, v, uv_stride,
ud, vd, uvd_stride,
blksize, ifactor);
}
}
else /* else implicitly copy from previous frame */
{
if (blksize == 16)
{
vp8_copy_mem16x16(y, y_stride, yd, yd_stride);
vp8_copy_mem8x8(u, uv_stride, ud, uvd_stride);
vp8_copy_mem8x8(v, uv_stride, vd, uvd_stride);
}
else /* if (blksize == 8) */
{
vp8_copy_mem8x8(y, y_stride, yd, yd_stride);
for (up = u, udp = ud, i = 0; i < uvblksize; ++i, up += uv_stride, udp += uvd_stride)
vpx_memcpy(udp, up, uvblksize);
for (vp = v, vdp = vd, i = 0; i < uvblksize; ++i, vp += uv_stride, vdp += uvd_stride)
vpx_memcpy(vdp, vp, uvblksize);
}
}
}
static int qualify_inter_mb(const MODE_INFO *mode_info_context, int *map)
{
if (mode_info_context->mbmi.mb_skip_coeff)
map[0] = map[1] = map[2] = map[3] = 1;
else if (mode_info_context->mbmi.mode==SPLITMV)
{
static int ndx[4][4] =
{
{0, 1, 4, 5},
{2, 3, 6, 7},
{8, 9, 12, 13},
{10, 11, 14, 15}
};
int i, j;
for (i=0; i<4; ++i)
{
map[i] = 1;
for (j=0; j<4 && map[j]; ++j)
map[i] &= (mode_info_context->bmi[ndx[i][j]].mv.as_mv.row <= 2 &&
mode_info_context->bmi[ndx[i][j]].mv.as_mv.col <= 2);
}
}
else
{
map[0] = map[1] = map[2] = map[3] =
(mode_info_context->mbmi.mode > B_PRED &&
abs(mode_info_context->mbmi.mv.as_mv.row) <= 2 &&
abs(mode_info_context->mbmi.mv.as_mv.col) <= 2);
}
return (map[0]+map[1]+map[2]+map[3]);
}
void vp8_multiframe_quality_enhance
(
VP8_COMMON *cm
)
{
YV12_BUFFER_CONFIG *show = cm->frame_to_show;
YV12_BUFFER_CONFIG *dest = &cm->post_proc_buffer;
FRAME_TYPE frame_type = cm->frame_type;
/* Point at base of Mb MODE_INFO list has motion vectors etc */
const MODE_INFO *mode_info_context = cm->mi;
int mb_row;
int mb_col;
int totmap, map[4];
int qcurr = cm->base_qindex;
int qprev = cm->postproc_state.last_base_qindex;
unsigned char *y_ptr, *u_ptr, *v_ptr;
unsigned char *yd_ptr, *ud_ptr, *vd_ptr;
/* Set up the buffer pointers */
y_ptr = show->y_buffer;
u_ptr = show->u_buffer;
v_ptr = show->v_buffer;
yd_ptr = dest->y_buffer;
ud_ptr = dest->u_buffer;
vd_ptr = dest->v_buffer;
/* postprocess each macro block */
for (mb_row = 0; mb_row < cm->mb_rows; mb_row++)
{
for (mb_col = 0; mb_col < cm->mb_cols; mb_col++)
{
/* if motion is high there will likely be no benefit */
if (frame_type == INTER_FRAME) totmap = qualify_inter_mb(mode_info_context, map);
else totmap = (frame_type == KEY_FRAME ? 4 : 0);
if (totmap)
{
if (totmap < 4)
{
int i, j;
for (i=0; i<2; ++i)
for (j=0; j<2; ++j)
{
if (map[i*2+j])
{
multiframe_quality_enhance_block(8, qcurr, qprev,
y_ptr + 8*(i*show->y_stride+j),
u_ptr + 4*(i*show->uv_stride+j),
v_ptr + 4*(i*show->uv_stride+j),
show->y_stride,
show->uv_stride,
yd_ptr + 8*(i*dest->y_stride+j),
ud_ptr + 4*(i*dest->uv_stride+j),
vd_ptr + 4*(i*dest->uv_stride+j),
dest->y_stride,
dest->uv_stride);
}
else
{
/* copy a 8x8 block */
int k;
unsigned char *up = u_ptr + 4*(i*show->uv_stride+j);
unsigned char *udp = ud_ptr + 4*(i*dest->uv_stride+j);
unsigned char *vp = v_ptr + 4*(i*show->uv_stride+j);
unsigned char *vdp = vd_ptr + 4*(i*dest->uv_stride+j);
vp8_copy_mem8x8(y_ptr + 8*(i*show->y_stride+j), show->y_stride,
yd_ptr + 8*(i*dest->y_stride+j), dest->y_stride);
for (k = 0; k < 4; ++k, up += show->uv_stride, udp += dest->uv_stride,
vp += show->uv_stride, vdp += dest->uv_stride)
{
vpx_memcpy(udp, up, 4);
vpx_memcpy(vdp, vp, 4);
}
}
}
}
else /* totmap = 4 */
{
multiframe_quality_enhance_block(16, qcurr, qprev, y_ptr,
u_ptr, v_ptr,
show->y_stride,
show->uv_stride,
yd_ptr, ud_ptr, vd_ptr,
dest->y_stride,
dest->uv_stride);
}
}
else
{
vp8_copy_mem16x16(y_ptr, show->y_stride, yd_ptr, dest->y_stride);
vp8_copy_mem8x8(u_ptr, show->uv_stride, ud_ptr, dest->uv_stride);
vp8_copy_mem8x8(v_ptr, show->uv_stride, vd_ptr, dest->uv_stride);
}
y_ptr += 16;
u_ptr += 8;
v_ptr += 8;
yd_ptr += 16;
ud_ptr += 8;
vd_ptr += 8;
mode_info_context++; /* step to next MB */
}
y_ptr += show->y_stride * 16 - 16 * cm->mb_cols;
u_ptr += show->uv_stride * 8 - 8 * cm->mb_cols;
v_ptr += show->uv_stride * 8 - 8 * cm->mb_cols;
yd_ptr += dest->y_stride * 16 - 16 * cm->mb_cols;
ud_ptr += dest->uv_stride * 8 - 8 * cm->mb_cols;
vd_ptr += dest->uv_stride * 8 - 8 * cm->mb_cols;
mode_info_context++; /* Skip border mb */
}
}


@@ -1,146 +0,0 @@
/*
* Copyright (c) 2010 The WebM project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#include "entropymode.h"
const unsigned int vp8_kf_default_bmode_counts [VP8_BINTRAMODES] [VP8_BINTRAMODES] [VP8_BINTRAMODES] =
{
{
/*Above Mode : 0*/
{ 43438, 2195, 470, 316, 615, 171, 217, 412, 124, 160, }, /* left_mode 0 */
{ 5722, 2751, 296, 291, 81, 68, 80, 101, 100, 170, }, /* left_mode 1 */
{ 1629, 201, 307, 25, 47, 16, 34, 72, 19, 28, }, /* left_mode 2 */
{ 332, 266, 36, 500, 20, 65, 23, 14, 154, 106, }, /* left_mode 3 */
{ 450, 97, 10, 24, 117, 10, 2, 12, 8, 71, }, /* left_mode 4 */
{ 384, 49, 29, 44, 12, 162, 51, 5, 87, 42, }, /* left_mode 5 */
{ 495, 53, 157, 27, 14, 57, 180, 17, 17, 34, }, /* left_mode 6 */
{ 695, 64, 62, 9, 27, 5, 3, 147, 10, 26, }, /* left_mode 7 */
{ 230, 54, 20, 124, 16, 125, 29, 12, 283, 37, }, /* left_mode 8 */
{ 260, 87, 21, 120, 32, 16, 33, 16, 33, 203, }, /* left_mode 9 */
},
{
/*Above Mode : 1*/
{ 3934, 2573, 355, 137, 128, 87, 133, 117, 37, 27, }, /* left_mode 0 */
{ 1036, 1929, 278, 135, 27, 37, 48, 55, 41, 91, }, /* left_mode 1 */
{ 223, 256, 253, 15, 13, 9, 28, 64, 3, 3, }, /* left_mode 2 */
{ 120, 129, 17, 316, 15, 11, 9, 4, 53, 74, }, /* left_mode 3 */
{ 129, 58, 6, 11, 38, 2, 0, 5, 2, 67, }, /* left_mode 4 */
{ 53, 22, 11, 16, 8, 26, 14, 3, 19, 12, }, /* left_mode 5 */
{ 59, 26, 61, 11, 4, 9, 35, 13, 8, 8, }, /* left_mode 6 */
{ 101, 52, 40, 8, 5, 2, 8, 59, 2, 20, }, /* left_mode 7 */
{ 48, 34, 10, 52, 8, 15, 6, 6, 63, 20, }, /* left_mode 8 */
{ 96, 48, 22, 63, 11, 14, 5, 8, 9, 96, }, /* left_mode 9 */
},
{
/*Above Mode : 2*/
{ 709, 461, 506, 36, 27, 33, 151, 98, 24, 6, }, /* left_mode 0 */
{ 201, 375, 442, 27, 13, 8, 46, 58, 6, 19, }, /* left_mode 1 */
{ 122, 140, 417, 4, 13, 3, 33, 59, 4, 2, }, /* left_mode 2 */
{ 36, 17, 22, 16, 6, 8, 12, 17, 9, 21, }, /* left_mode 3 */
{ 51, 15, 7, 1, 14, 0, 4, 5, 3, 22, }, /* left_mode 4 */
{ 18, 11, 30, 9, 7, 20, 11, 5, 2, 6, }, /* left_mode 5 */
{ 38, 21, 103, 9, 4, 12, 79, 13, 2, 5, }, /* left_mode 6 */
{ 64, 17, 66, 2, 12, 4, 2, 65, 4, 5, }, /* left_mode 7 */
{ 14, 7, 7, 16, 3, 11, 4, 13, 15, 16, }, /* left_mode 8 */
{ 36, 8, 32, 9, 9, 4, 14, 7, 6, 24, }, /* left_mode 9 */
},
{
/*Above Mode : 3*/
{ 1340, 173, 36, 119, 30, 10, 13, 10, 20, 26, }, /* left_mode 0 */
{ 156, 293, 26, 108, 5, 16, 2, 4, 23, 30, }, /* left_mode 1 */
{ 60, 34, 13, 7, 3, 3, 0, 8, 4, 5, }, /* left_mode 2 */
{ 72, 64, 1, 235, 3, 9, 2, 7, 28, 38, }, /* left_mode 3 */
{ 29, 14, 1, 3, 5, 0, 2, 2, 5, 13, }, /* left_mode 4 */
{ 22, 7, 4, 11, 2, 5, 1, 2, 6, 4, }, /* left_mode 5 */
{ 18, 14, 5, 6, 4, 3, 14, 0, 9, 2, }, /* left_mode 6 */
{ 41, 10, 7, 1, 2, 0, 0, 10, 2, 1, }, /* left_mode 7 */
{ 23, 19, 2, 33, 1, 5, 2, 0, 51, 8, }, /* left_mode 8 */
{ 33, 26, 7, 53, 3, 9, 3, 3, 9, 19, }, /* left_mode 9 */
},
{
/*Above Mode : 4*/
{ 410, 165, 43, 31, 66, 15, 30, 54, 8, 17, }, /* left_mode 0 */
{ 115, 64, 27, 18, 30, 7, 11, 15, 4, 19, }, /* left_mode 1 */
{ 31, 23, 25, 1, 7, 2, 2, 10, 0, 5, }, /* left_mode 2 */
{ 17, 4, 1, 6, 8, 2, 7, 5, 5, 21, }, /* left_mode 3 */
{ 120, 12, 1, 2, 83, 3, 0, 4, 1, 40, }, /* left_mode 4 */
{ 4, 3, 1, 2, 1, 2, 5, 0, 3, 6, }, /* left_mode 5 */
{ 10, 2, 13, 6, 6, 6, 8, 2, 4, 5, }, /* left_mode 6 */
{ 58, 10, 5, 1, 28, 1, 1, 33, 1, 9, }, /* left_mode 7 */
{ 8, 2, 1, 4, 2, 5, 1, 1, 2, 10, }, /* left_mode 8 */
{ 76, 7, 5, 7, 18, 2, 2, 0, 5, 45, }, /* left_mode 9 */
},
{
/*Above Mode : 5*/
{ 444, 46, 47, 20, 14, 110, 60, 14, 60, 7, }, /* left_mode 0 */
{ 59, 57, 25, 18, 3, 17, 21, 6, 14, 6, }, /* left_mode 1 */
{ 24, 17, 20, 6, 4, 13, 7, 2, 3, 2, }, /* left_mode 2 */
{ 13, 11, 5, 14, 4, 9, 2, 4, 15, 7, }, /* left_mode 3 */
{ 8, 5, 2, 1, 4, 0, 1, 1, 2, 12, }, /* left_mode 4 */
{ 19, 5, 5, 7, 4, 40, 6, 3, 10, 4, }, /* left_mode 5 */
{ 16, 5, 9, 1, 1, 16, 26, 2, 10, 4, }, /* left_mode 6 */
{ 11, 4, 8, 1, 1, 4, 4, 5, 4, 1, }, /* left_mode 7 */
{ 15, 1, 3, 7, 3, 21, 7, 1, 34, 5, }, /* left_mode 8 */
{ 18, 5, 1, 3, 4, 3, 7, 1, 2, 9, }, /* left_mode 9 */
},
{
/*Above Mode : 6*/
{ 476, 149, 94, 13, 14, 77, 291, 27, 23, 3, }, /* left_mode 0 */
{ 79, 83, 42, 14, 2, 12, 63, 2, 4, 14, }, /* left_mode 1 */
{ 43, 36, 55, 1, 3, 8, 42, 11, 5, 1, }, /* left_mode 2 */
{ 9, 9, 6, 16, 1, 5, 6, 3, 11, 10, }, /* left_mode 3 */
{ 10, 3, 1, 3, 10, 1, 0, 1, 1, 4, }, /* left_mode 4 */
{ 14, 6, 15, 5, 1, 20, 25, 2, 5, 0, }, /* left_mode 5 */
{ 28, 7, 51, 1, 0, 8, 127, 6, 2, 5, }, /* left_mode 6 */
{ 13, 3, 3, 2, 3, 1, 2, 8, 1, 2, }, /* left_mode 7 */
{ 10, 3, 3, 3, 3, 8, 2, 2, 9, 3, }, /* left_mode 8 */
{ 13, 7, 11, 4, 0, 4, 6, 2, 5, 8, }, /* left_mode 9 */
},
{
/*Above Mode : 7*/
{ 376, 135, 119, 6, 32, 8, 31, 224, 9, 3, }, /* left_mode 0 */
{ 93, 60, 54, 6, 13, 7, 8, 92, 2, 12, }, /* left_mode 1 */
{ 74, 36, 84, 0, 3, 2, 9, 67, 2, 1, }, /* left_mode 2 */
{ 19, 4, 4, 8, 8, 2, 4, 7, 6, 16, }, /* left_mode 3 */
{ 51, 7, 4, 1, 77, 3, 0, 14, 1, 15, }, /* left_mode 4 */
{ 7, 7, 5, 7, 4, 7, 4, 5, 0, 3, }, /* left_mode 5 */
{ 18, 2, 19, 2, 2, 4, 12, 11, 1, 2, }, /* left_mode 6 */
{ 129, 6, 27, 1, 21, 3, 0, 189, 0, 6, }, /* left_mode 7 */
{ 9, 1, 2, 8, 3, 7, 0, 5, 3, 3, }, /* left_mode 8 */
{ 20, 4, 5, 10, 4, 2, 7, 17, 3, 16, }, /* left_mode 9 */
},
{
/*Above Mode : 8*/
{ 617, 68, 34, 79, 11, 27, 25, 14, 75, 13, }, /* left_mode 0 */
{ 51, 82, 21, 26, 6, 12, 13, 1, 26, 16, }, /* left_mode 1 */
{ 29, 9, 12, 11, 3, 7, 1, 10, 2, 2, }, /* left_mode 2 */
{ 17, 19, 11, 74, 4, 3, 2, 0, 58, 13, }, /* left_mode 3 */
{ 10, 1, 1, 3, 4, 1, 0, 2, 1, 8, }, /* left_mode 4 */
{ 14, 4, 5, 5, 1, 13, 2, 0, 27, 8, }, /* left_mode 5 */
{ 10, 3, 5, 4, 1, 7, 6, 4, 5, 1, }, /* left_mode 6 */
{ 10, 2, 6, 2, 1, 1, 1, 4, 2, 1, }, /* left_mode 7 */
{ 14, 8, 5, 23, 2, 12, 6, 2, 117, 5, }, /* left_mode 8 */
{ 9, 6, 2, 19, 1, 6, 3, 2, 9, 9, }, /* left_mode 9 */
},
{
/*Above Mode : 9*/
{ 680, 73, 22, 38, 42, 5, 11, 9, 6, 28, }, /* left_mode 0 */
{ 113, 112, 21, 22, 10, 2, 8, 4, 6, 42, }, /* left_mode 1 */
{ 44, 20, 24, 6, 5, 4, 3, 3, 1, 2, }, /* left_mode 2 */
{ 40, 23, 7, 71, 5, 2, 4, 1, 7, 22, }, /* left_mode 3 */
{ 85, 9, 4, 4, 17, 2, 0, 3, 2, 23, }, /* left_mode 4 */
{ 13, 4, 2, 6, 1, 7, 0, 1, 7, 6, }, /* left_mode 5 */
{ 26, 6, 8, 3, 2, 3, 8, 1, 5, 4, }, /* left_mode 6 */
{ 54, 8, 9, 6, 7, 0, 1, 11, 1, 3, }, /* left_mode 7 */
{ 9, 10, 4, 13, 2, 5, 4, 2, 14, 8, }, /* left_mode 8 */
{ 92, 9, 5, 19, 15, 3, 3, 1, 6, 58, }, /* left_mode 9 */
},
};


@@ -60,19 +60,19 @@ extern "C"
MODE_BESTQUALITY = 0x2,
MODE_FIRSTPASS = 0x3,
MODE_SECONDPASS = 0x4,
-MODE_SECONDPASS_BEST = 0x5,
+MODE_SECONDPASS_BEST = 0x5
} MODE;
typedef enum
{
FRAMEFLAGS_KEY = 1,
FRAMEFLAGS_GOLDEN = 2,
-FRAMEFLAGS_ALTREF = 4,
+FRAMEFLAGS_ALTREF = 4
} FRAMETYPE_FLAGS;
#include <assert.h>
-static __inline void Scale2Ratio(int mode, int *hr, int *hs)
+static void Scale2Ratio(int mode, int *hr, int *hs)
{
switch (mode)
{
@@ -207,10 +207,10 @@ extern "C"
// Temporal scaling parameters
unsigned int number_of_layers;
-unsigned int target_bitrate[MAX_PERIODICITY];
-unsigned int rate_decimator[MAX_PERIODICITY];
+unsigned int target_bitrate[VPX_TS_MAX_PERIODICITY];
+unsigned int rate_decimator[VPX_TS_MAX_PERIODICITY];
unsigned int periodicity;
-unsigned int layer_id[MAX_PERIODICITY];
+unsigned int layer_id[VPX_TS_MAX_PERIODICITY];
#if CONFIG_MULTI_RES_ENCODING
/* Number of total resolutions encoded */


@@ -26,10 +26,6 @@
#include "header.h"
/*#endif*/
/* Create/destroy static data structures. */
void vp8_initialize_common(void);
#define MINQ 0
#define MAXQ 127
#define QINDEX_RANGE (MAXQ + 1)
@@ -92,11 +88,13 @@ typedef struct VP8Common
int fb_idx_ref_cnt[NUM_YV12_BUFFERS];
int new_fb_idx, lst_fb_idx, gld_fb_idx, alt_fb_idx;
-YV12_BUFFER_CONFIG post_proc_buffer;
YV12_BUFFER_CONFIG temp_scale_frame;
#if CONFIG_POSTPROC
+YV12_BUFFER_CONFIG post_proc_buffer;
YV12_BUFFER_CONFIG post_proc_buffer_int;
int post_proc_buffer_int_used;
#endif
FRAME_TYPE last_frame_type; /* Save last frame's frame type for motion search. */
FRAME_TYPE frame_type;


@@ -14,10 +14,8 @@
#include "vpx_scale/yv12config.h"
#include "postproc.h"
#include "common.h"
#include "vpx_scale/yv12extend.h"
#include "vpx_scale/vpxscale.h"
#include "systemdependent.h"
#include "../encoder/variance.h"
#include <limits.h>
#include <math.h>
@@ -30,7 +28,6 @@
( (0.439*(float)(t>>16)) - (0.368*(float)(t>>8&0xff)) - (0.071*(float)(t&0xff)) + 128)
/* global constants */
#define MFQE_PRECISION 4
#if CONFIG_POSTPROC_VISUALIZER
static const unsigned char MB_PREDICTION_MODE_colors[MB_MODE_COUNT][3] =
{
@@ -362,6 +359,7 @@ void vp8_deblock(YV12_BUFFER_CONFIG *source,
vp8_post_proc_down_and_across(source->v_buffer, post->v_buffer, source->uv_stride, post->uv_stride, source->uv_height, source->uv_width, ppl);
}
#if !(CONFIG_TEMPORAL_DENOISING)
void vp8_de_noise(YV12_BUFFER_CONFIG *source,
YV12_BUFFER_CONFIG *post,
int q,
@@ -398,6 +396,7 @@ void vp8_de_noise(YV12_BUFFER_CONFIG *source,
source->uv_width - 4, ppl);
}
#endif
double vp8_gaussian(double sigma, double mu, double x)
{
@@ -405,9 +404,6 @@ double vp8_gaussian(double sigma, double mu, double x)
(exp(-(x - mu) * (x - mu) / (2 * sigma * sigma)));
}
extern void (*vp8_clear_system_state)(void);
static void fillrd(struct postproc_state *state, int q, int a)
{
char char_dist[300];
@@ -693,214 +689,7 @@ static void constrain_line (int x0, int *x1, int y0, int *y1, int width, int hei
}
}
static void multiframe_quality_enhance_block
(
int blksize, /* Currently only values supported are 16, 8, 4 */
int qcurr,
int qprev,
unsigned char *y,
unsigned char *u,
unsigned char *v,
int y_stride,
int uv_stride,
unsigned char *yd,
unsigned char *ud,
unsigned char *vd,
int yd_stride,
int uvd_stride
)
{
static const unsigned char VP8_ZEROS[16]=
{
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
};
int blksizeby2 = blksize >> 1;
int qdiff = qcurr - qprev;
int i, j;
unsigned char *yp;
unsigned char *ydp;
unsigned char *up;
unsigned char *udp;
unsigned char *vp;
unsigned char *vdp;
unsigned int act, sse, sad, thr;
if (blksize == 16)
{
act = (vp8_variance16x16(yd, yd_stride, VP8_ZEROS, 0, &sse)+128)>>8;
sad = (vp8_sad16x16(y, y_stride, yd, yd_stride, INT_MAX)+128)>>8;
}
else if (blksize == 8)
{
act = (vp8_variance8x8(yd, yd_stride, VP8_ZEROS, 0, &sse)+32)>>6;
sad = (vp8_sad8x8(y, y_stride, yd, yd_stride, INT_MAX)+32)>>6;
}
else
{
act = (vp8_variance4x4(yd, yd_stride, VP8_ZEROS, 0, &sse)+8)>>4;
sad = (vp8_sad4x4(y, y_stride, yd, yd_stride, INT_MAX)+8)>>4;
}
/* thr = qdiff/8 + log2(act) + log4(qprev) */
thr = (qdiff>>3);
while (act>>=1) thr++;
while (qprev>>=2) thr++;
if (sad < thr)
{
static const int roundoff = (1 << (MFQE_PRECISION - 1));
int ifactor = (sad << MFQE_PRECISION) / thr;
ifactor >>= (qdiff >> 5);
// TODO: SIMD optimize this section
if (ifactor)
{
int icfactor = (1 << MFQE_PRECISION) - ifactor;
for (yp = y, ydp = yd, i = 0; i < blksize; ++i, yp += y_stride, ydp += yd_stride)
{
for (j = 0; j < blksize; ++j)
ydp[j] = (int)((yp[j] * ifactor + ydp[j] * icfactor + roundoff) >> MFQE_PRECISION);
}
for (up = u, udp = ud, i = 0; i < blksizeby2; ++i, up += uv_stride, udp += uvd_stride)
{
for (j = 0; j < blksizeby2; ++j)
udp[j] = (int)((up[j] * ifactor + udp[j] * icfactor + roundoff) >> MFQE_PRECISION);
}
for (vp = v, vdp = vd, i = 0; i < blksizeby2; ++i, vp += uv_stride, vdp += uvd_stride)
{
for (j = 0; j < blksizeby2; ++j)
vdp[j] = (int)((vp[j] * ifactor + vdp[j] * icfactor + roundoff) >> MFQE_PRECISION);
}
}
}
else
{
if (blksize == 16)
{
vp8_copy_mem16x16(y, y_stride, yd, yd_stride);
vp8_copy_mem8x8(u, uv_stride, ud, uvd_stride);
vp8_copy_mem8x8(v, uv_stride, vd, uvd_stride);
}
else if (blksize == 8)
{
vp8_copy_mem8x8(y, y_stride, yd, yd_stride);
for (up = u, udp = ud, i = 0; i < blksizeby2; ++i, up += uv_stride, udp += uvd_stride)
vpx_memcpy(udp, up, blksizeby2);
for (vp = v, vdp = vd, i = 0; i < blksizeby2; ++i, vp += uv_stride, vdp += uvd_stride)
vpx_memcpy(vdp, vp, blksizeby2);
}
else
{
for (yp = y, ydp = yd, i = 0; i < blksize; ++i, yp += y_stride, ydp += yd_stride)
vpx_memcpy(ydp, yp, blksize);
for (up = u, udp = ud, i = 0; i < blksizeby2; ++i, up += uv_stride, udp += uvd_stride)
vpx_memcpy(udp, up, blksizeby2);
for (vp = v, vdp = vd, i = 0; i < blksizeby2; ++i, vp += uv_stride, vdp += uvd_stride)
vpx_memcpy(vdp, vp, blksizeby2);
}
}
}
void vp8_multiframe_quality_enhance
(
VP8_COMMON *cm
)
{
YV12_BUFFER_CONFIG *show = cm->frame_to_show;
YV12_BUFFER_CONFIG *dest = &cm->post_proc_buffer;
FRAME_TYPE frame_type = cm->frame_type;
/* Point at base of Mb MODE_INFO list has motion vectors etc */
const MODE_INFO *mode_info_context = cm->mi;
int mb_row;
int mb_col;
int qcurr = cm->base_qindex;
int qprev = cm->postproc_state.last_base_qindex;
unsigned char *y_ptr, *u_ptr, *v_ptr;
unsigned char *yd_ptr, *ud_ptr, *vd_ptr;
/* Set up the buffer pointers */
y_ptr = show->y_buffer;
u_ptr = show->u_buffer;
v_ptr = show->v_buffer;
yd_ptr = dest->y_buffer;
ud_ptr = dest->u_buffer;
vd_ptr = dest->v_buffer;
/* postprocess each macro block */
for (mb_row = 0; mb_row < cm->mb_rows; mb_row++)
{
for (mb_col = 0; mb_col < cm->mb_cols; mb_col++)
{
/* if motion is high there will likely be no benefit */
if (((frame_type == INTER_FRAME &&
abs(mode_info_context->mbmi.mv.as_mv.row) <= 10 &&
abs(mode_info_context->mbmi.mv.as_mv.col) <= 10) ||
(frame_type == KEY_FRAME)))
{
if (mode_info_context->mbmi.mode == B_PRED || mode_info_context->mbmi.mode == SPLITMV)
{
int i, j;
for (i=0; i<2; ++i)
for (j=0; j<2; ++j)
multiframe_quality_enhance_block(8,
qcurr,
qprev,
y_ptr + 8*(i*show->y_stride+j),
u_ptr + 4*(i*show->uv_stride+j),
v_ptr + 4*(i*show->uv_stride+j),
show->y_stride,
show->uv_stride,
yd_ptr + 8*(i*dest->y_stride+j),
ud_ptr + 4*(i*dest->uv_stride+j),
vd_ptr + 4*(i*dest->uv_stride+j),
dest->y_stride,
dest->uv_stride);
}
else
{
multiframe_quality_enhance_block(16,
qcurr,
qprev,
y_ptr,
u_ptr,
v_ptr,
show->y_stride,
show->uv_stride,
yd_ptr,
ud_ptr,
vd_ptr,
dest->y_stride,
dest->uv_stride);
}
}
else
{
vp8_copy_mem16x16(y_ptr, show->y_stride, yd_ptr, dest->y_stride);
vp8_copy_mem8x8(u_ptr, show->uv_stride, ud_ptr, dest->uv_stride);
vp8_copy_mem8x8(v_ptr, show->uv_stride, vd_ptr, dest->uv_stride);
}
y_ptr += 16;
u_ptr += 8;
v_ptr += 8;
yd_ptr += 16;
ud_ptr += 8;
vd_ptr += 8;
mode_info_context++; /* step to next MB */
}
y_ptr += show->y_stride * 16 - 16 * cm->mb_cols;
u_ptr += show->uv_stride * 8 - 8 * cm->mb_cols;
v_ptr += show->uv_stride * 8 - 8 * cm->mb_cols;
yd_ptr += dest->y_stride * 16 - 16 * cm->mb_cols;
ud_ptr += dest->uv_stride * 8 - 8 * cm->mb_cols;
vd_ptr += dest->uv_stride * 8 - 8 * cm->mb_cols;
mode_info_context++; /* Skip border mb */
}
}
#if CONFIG_POSTPROC
int vp8_post_proc_frame(VP8_COMMON *oci, YV12_BUFFER_CONFIG *dest, vp8_ppflags_t *ppflags)
{
int q = oci->filter_level * 10 / 6;
@@ -923,6 +712,7 @@ int vp8_post_proc_frame(VP8_COMMON *oci, YV12_BUFFER_CONFIG *dest, vp8_ppflags_t
dest->y_height = oci->Height;
dest->uv_height = dest->y_height / 2;
oci->postproc_state.last_base_qindex = oci->base_qindex;
oci->postproc_state.last_frame_valid = 1;
return 0;
}
@@ -943,7 +733,7 @@ int vp8_post_proc_frame(VP8_COMMON *oci, YV12_BUFFER_CONFIG *dest, vp8_ppflags_t
// insure that postproc is set to all 0's so that post proc
// doesn't pull random data in from edge
-vpx_memset((&oci->post_proc_buffer_int)->buffer_alloc,126,(&oci->post_proc_buffer)->frame_size);
+vpx_memset((&oci->post_proc_buffer_int)->buffer_alloc,128,(&oci->post_proc_buffer)->frame_size);
}
}
@@ -953,6 +743,7 @@ int vp8_post_proc_frame(VP8_COMMON *oci, YV12_BUFFER_CONFIG *dest, vp8_ppflags_t
#endif
if ((flags & VP8D_MFQE) &&
oci->postproc_state.last_frame_valid &&
oci->current_video_frame >= 2 &&
oci->base_qindex - oci->postproc_state.last_base_qindex >= 10)
{
@@ -960,7 +751,7 @@ int vp8_post_proc_frame(VP8_COMMON *oci, YV12_BUFFER_CONFIG *dest, vp8_ppflags_t
if (((flags & VP8D_DEBLOCK) || (flags & VP8D_DEMACROBLOCK)) &&
oci->post_proc_buffer_int_used)
{
-vp8_yv12_copy_frame_ptr(&oci->post_proc_buffer, &oci->post_proc_buffer_int);
+vp8_yv12_copy_frame(&oci->post_proc_buffer, &oci->post_proc_buffer_int);
if (flags & VP8D_DEMACROBLOCK)
{
vp8_deblock_and_de_macro_block(&oci->post_proc_buffer_int, &oci->post_proc_buffer,
@@ -989,9 +780,10 @@ int vp8_post_proc_frame(VP8_COMMON *oci, YV12_BUFFER_CONFIG *dest, vp8_ppflags_t
}
else
{
-vp8_yv12_copy_frame_ptr(oci->frame_to_show, &oci->post_proc_buffer);
+vp8_yv12_copy_frame(oci->frame_to_show, &oci->post_proc_buffer);
oci->postproc_state.last_base_qindex = oci->base_qindex;
}
oci->postproc_state.last_frame_valid = 1;
if (flags & VP8D_ADDNOISE)
{
@@ -1378,3 +1170,4 @@ int vp8_post_proc_frame(VP8_COMMON *oci, YV12_BUFFER_CONFIG *dest, vp8_ppflags_t
dest->uv_height = dest->y_height / 2;
return 0;
}
#endif


@@ -19,6 +19,7 @@ struct postproc_state
int last_noise;
char noise[3072];
int last_base_qindex;
int last_frame_valid;
DECLARE_ALIGNED(16, char, blackclamp[16]);
DECLARE_ALIGNED(16, char, whiteclamp[16]);
DECLARE_ALIGNED(16, char, bothclamp[16]);
@@ -40,4 +41,8 @@ void vp8_deblock(YV12_BUFFER_CONFIG *source,
int q,
int low_var_thresh,
int flag);
#define MFQE_PRECISION 4
void vp8_multiframe_quality_enhance(struct VP8Common *cm);
#endif


@@ -14,143 +14,20 @@
#include "vpx_mem/vpx_mem.h"
#include "blockd.h"
/* For skip_recon_mb(), add vp8_build_intra_predictors_mby_s(MACROBLOCKD *x) and
* vp8_build_intra_predictors_mbuv_s(MACROBLOCKD *x).
*/
void vp8_build_intra_predictors_mby_c(MACROBLOCKD *x)
void vp8_build_intra_predictors_mby_s_c(MACROBLOCKD *x,
unsigned char * yabove_row,
unsigned char * yleft,
int left_stride,
unsigned char * ypred_ptr,
int y_stride)
{
unsigned char *yabove_row = x->dst.y_buffer - x->dst.y_stride;
unsigned char yleft_col[16];
unsigned char ytop_left = yabove_row[-1];
unsigned char *ypred_ptr = x->predictor;
int r, c, i;
for (i = 0; i < 16; i++)
{
yleft_col[i] = x->dst.y_buffer [i* x->dst.y_stride -1];
}
/* for Y */
switch (x->mode_info_context->mbmi.mode)
{
case DC_PRED:
{
int expected_dc;
int i;
int shift;
int average = 0;
if (x->up_available || x->left_available)
{
if (x->up_available)
{
for (i = 0; i < 16; i++)
{
average += yabove_row[i];
}
}
if (x->left_available)
{
for (i = 0; i < 16; i++)
{
average += yleft_col[i];
}
}
shift = 3 + x->up_available + x->left_available;
expected_dc = (average + (1 << (shift - 1))) >> shift;
}
else
{
expected_dc = 128;
}
vpx_memset(ypred_ptr, expected_dc, 256);
}
break;
case V_PRED:
{
for (r = 0; r < 16; r++)
{
((int *)ypred_ptr)[0] = ((int *)yabove_row)[0];
((int *)ypred_ptr)[1] = ((int *)yabove_row)[1];
((int *)ypred_ptr)[2] = ((int *)yabove_row)[2];
((int *)ypred_ptr)[3] = ((int *)yabove_row)[3];
ypred_ptr += 16;
}
}
break;
case H_PRED:
{
for (r = 0; r < 16; r++)
{
vpx_memset(ypred_ptr, yleft_col[r], 16);
ypred_ptr += 16;
}
}
break;
case TM_PRED:
{
for (r = 0; r < 16; r++)
{
for (c = 0; c < 16; c++)
{
int pred = yleft_col[r] + yabove_row[ c] - ytop_left;
if (pred < 0)
pred = 0;
if (pred > 255)
pred = 255;
ypred_ptr[c] = pred;
}
ypred_ptr += 16;
}
}
break;
case B_PRED:
case NEARESTMV:
case NEARMV:
case ZEROMV:
case NEWMV:
case SPLITMV:
case MB_MODE_COUNT:
break;
}
}
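The TM_PRED branch above computes left + above - top_left per pixel and clamps the result to the 8-bit pixel range. A minimal standalone sketch of that clamp (the helper name tm_predict_pixel is hypothetical, not a libvpx symbol):

```c
/* Clamp a TrueMotion prediction to the valid 8-bit pixel range,
 * mirroring the TM_PRED branch: pred = left + above - top_left. */
static unsigned char tm_predict_pixel(unsigned char left,
                                      unsigned char above,
                                      unsigned char top_left)
{
    int pred = left + above - top_left;
    if (pred < 0)
        pred = 0;
    if (pred > 255)
        pred = 255;
    return (unsigned char)pred;
}
```

The clamp matters because the sum can range from -255 to 510 before saturation.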
void vp8_build_intra_predictors_mby_s_c(MACROBLOCKD *x)
{
unsigned char *yabove_row = x->dst.y_buffer - x->dst.y_stride;
unsigned char yleft_col[16];
unsigned char ytop_left = yabove_row[-1];
unsigned char *ypred_ptr = x->predictor;
int r, c, i;
int y_stride = x->dst.y_stride;
ypred_ptr = x->dst.y_buffer; /*x->predictor;*/
for (i = 0; i < 16; i++)
{
yleft_col[i] = x->dst.y_buffer [i* x->dst.y_stride -1];
yleft_col[i] = yleft[i* left_stride];
}
/* for Y */
@@ -198,7 +75,7 @@ void vp8_build_intra_predictors_mby_s_c(MACROBLOCKD *x)
for (r = 0; r < 16; r++)
{
vpx_memset(ypred_ptr, expected_dc, 16);
ypred_ptr += y_stride; /*16;*/
ypred_ptr += y_stride;
}
}
break;
@@ -212,7 +89,7 @@ void vp8_build_intra_predictors_mby_s_c(MACROBLOCKD *x)
((int *)ypred_ptr)[1] = ((int *)yabove_row)[1];
((int *)ypred_ptr)[2] = ((int *)yabove_row)[2];
((int *)ypred_ptr)[3] = ((int *)yabove_row)[3];
ypred_ptr += y_stride; /*16;*/
ypred_ptr += y_stride;
}
}
break;
@@ -223,7 +100,7 @@ void vp8_build_intra_predictors_mby_s_c(MACROBLOCKD *x)
{
vpx_memset(ypred_ptr, yleft_col[r], 16);
ypred_ptr += y_stride; /*16;*/
ypred_ptr += y_stride;
}
}
@@ -246,7 +123,7 @@ void vp8_build_intra_predictors_mby_s_c(MACROBLOCKD *x)
ypred_ptr[c] = pred;
}
ypred_ptr += y_stride; /*16;*/
ypred_ptr += y_stride;
}
}
@@ -262,162 +139,27 @@ void vp8_build_intra_predictors_mby_s_c(MACROBLOCKD *x)
}
}
void vp8_build_intra_predictors_mbuv_c(MACROBLOCKD *x)
void vp8_build_intra_predictors_mbuv_s_c(MACROBLOCKD *x,
unsigned char * uabove_row,
unsigned char * vabove_row,
unsigned char * uleft,
unsigned char * vleft,
int left_stride,
unsigned char * upred_ptr,
unsigned char * vpred_ptr,
int pred_stride)
{
unsigned char *uabove_row = x->dst.u_buffer - x->dst.uv_stride;
unsigned char uleft_col[16];
unsigned char uleft_col[8];
unsigned char utop_left = uabove_row[-1];
unsigned char *vabove_row = x->dst.v_buffer - x->dst.uv_stride;
unsigned char vleft_col[20];
unsigned char vleft_col[8];
unsigned char vtop_left = vabove_row[-1];
unsigned char *upred_ptr = &x->predictor[256];
unsigned char *vpred_ptr = &x->predictor[320];
int i, j;
for (i = 0; i < 8; i++)
{
uleft_col[i] = x->dst.u_buffer [i* x->dst.uv_stride -1];
vleft_col[i] = x->dst.v_buffer [i* x->dst.uv_stride -1];
}
switch (x->mode_info_context->mbmi.uv_mode)
{
case DC_PRED:
{
int expected_udc;
int expected_vdc;
int i;
int shift;
int Uaverage = 0;
int Vaverage = 0;
if (x->up_available)
{
for (i = 0; i < 8; i++)
{
Uaverage += uabove_row[i];
Vaverage += vabove_row[i];
}
}
if (x->left_available)
{
for (i = 0; i < 8; i++)
{
Uaverage += uleft_col[i];
Vaverage += vleft_col[i];
}
}
if (!x->up_available && !x->left_available)
{
expected_udc = 128;
expected_vdc = 128;
}
else
{
shift = 2 + x->up_available + x->left_available;
expected_udc = (Uaverage + (1 << (shift - 1))) >> shift;
expected_vdc = (Vaverage + (1 << (shift - 1))) >> shift;
}
vpx_memset(upred_ptr, expected_udc, 64);
vpx_memset(vpred_ptr, expected_vdc, 64);
}
break;
case V_PRED:
{
int i;
for (i = 0; i < 8; i++)
{
vpx_memcpy(upred_ptr, uabove_row, 8);
vpx_memcpy(vpred_ptr, vabove_row, 8);
upred_ptr += 8;
vpred_ptr += 8;
}
}
break;
case H_PRED:
{
int i;
for (i = 0; i < 8; i++)
{
vpx_memset(upred_ptr, uleft_col[i], 8);
vpx_memset(vpred_ptr, vleft_col[i], 8);
upred_ptr += 8;
vpred_ptr += 8;
}
}
break;
case TM_PRED:
{
int i;
for (i = 0; i < 8; i++)
{
for (j = 0; j < 8; j++)
{
int predu = uleft_col[i] + uabove_row[j] - utop_left;
int predv = vleft_col[i] + vabove_row[j] - vtop_left;
if (predu < 0)
predu = 0;
if (predu > 255)
predu = 255;
if (predv < 0)
predv = 0;
if (predv > 255)
predv = 255;
upred_ptr[j] = predu;
vpred_ptr[j] = predv;
}
upred_ptr += 8;
vpred_ptr += 8;
}
}
break;
case B_PRED:
case NEARESTMV:
case NEARMV:
case ZEROMV:
case NEWMV:
case SPLITMV:
case MB_MODE_COUNT:
break;
}
}
void vp8_build_intra_predictors_mbuv_s_c(MACROBLOCKD *x)
{
unsigned char *uabove_row = x->dst.u_buffer - x->dst.uv_stride;
unsigned char uleft_col[16];
unsigned char utop_left = uabove_row[-1];
unsigned char *vabove_row = x->dst.v_buffer - x->dst.uv_stride;
unsigned char vleft_col[20];
unsigned char vtop_left = vabove_row[-1];
unsigned char *upred_ptr = x->dst.u_buffer; /*&x->predictor[256];*/
unsigned char *vpred_ptr = x->dst.v_buffer; /*&x->predictor[320];*/
int uv_stride = x->dst.uv_stride;
int i, j;
for (i = 0; i < 8; i++)
{
uleft_col[i] = x->dst.u_buffer [i* x->dst.uv_stride -1];
vleft_col[i] = x->dst.v_buffer [i* x->dst.uv_stride -1];
uleft_col[i] = uleft [i* left_stride];
vleft_col[i] = vleft [i* left_stride];
}
switch (x->mode_info_context->mbmi.uv_mode)
@@ -468,8 +210,8 @@ void vp8_build_intra_predictors_mbuv_s_c(MACROBLOCKD *x)
{
vpx_memset(upred_ptr, expected_udc, 8);
vpx_memset(vpred_ptr, expected_vdc, 8);
upred_ptr += uv_stride; /*8;*/
vpred_ptr += uv_stride; /*8;*/
upred_ptr += pred_stride;
vpred_ptr += pred_stride;
}
}
break;
@@ -481,8 +223,8 @@ void vp8_build_intra_predictors_mbuv_s_c(MACROBLOCKD *x)
{
vpx_memcpy(upred_ptr, uabove_row, 8);
vpx_memcpy(vpred_ptr, vabove_row, 8);
upred_ptr += uv_stride; /*8;*/
vpred_ptr += uv_stride; /*8;*/
upred_ptr += pred_stride;
vpred_ptr += pred_stride;
}
}
@@ -495,8 +237,8 @@ void vp8_build_intra_predictors_mbuv_s_c(MACROBLOCKD *x)
{
vpx_memset(upred_ptr, uleft_col[i], 8);
vpx_memset(vpred_ptr, vleft_col[i], 8);
upred_ptr += uv_stride; /*8;*/
vpred_ptr += uv_stride; /*8;*/
upred_ptr += pred_stride;
vpred_ptr += pred_stride;
}
}
@@ -528,8 +270,8 @@ void vp8_build_intra_predictors_mbuv_s_c(MACROBLOCKD *x)
vpred_ptr[j] = predv;
}
upred_ptr += uv_stride; /*8;*/
vpred_ptr += uv_stride; /*8;*/
upred_ptr += pred_stride;
vpred_ptr += pred_stride;
}
}


@@ -13,20 +13,19 @@
#include "vpx_rtcd.h"
#include "blockd.h"
void vp8_intra4x4_predict_c(unsigned char *src, int src_stride,
int b_mode,
unsigned char *dst, int dst_stride)
void vp8_intra4x4_predict_d_c(unsigned char *Above,
unsigned char *yleft, int left_stride,
int b_mode,
unsigned char *dst, int dst_stride,
unsigned char top_left)
{
int i, r, c;
unsigned char *Above = src - src_stride;
unsigned char Left[4];
unsigned char top_left = Above[-1];
Left[0] = src[-1];
Left[1] = src[-1 + src_stride];
Left[2] = src[-1 + 2 * src_stride];
Left[3] = src[-1 + 3 * src_stride];
Left[0] = yleft[0];
Left[1] = yleft[left_stride];
Left[2] = yleft[2 * left_stride];
Left[3] = yleft[3 * left_stride];
switch (b_mode)
{
@@ -295,24 +294,15 @@ void vp8_intra4x4_predict_c(unsigned char *src, int src_stride,
}
}
/* copy 4 bytes from the above-right down so that the 4x4 prediction modes
 * that use pixels above and to the right have filled-in pixels to use.
 */
void vp8_intra_prediction_down_copy(MACROBLOCKD *x)
void vp8_intra4x4_predict_c(unsigned char *src, int src_stride,
int b_mode,
unsigned char *dst, int dst_stride)
{
int dst_stride = x->dst.y_stride;
unsigned char *above_right = x->dst.y_buffer - dst_stride + 16;
unsigned char *Above = src - src_stride;
unsigned int *src_ptr = (unsigned int *)above_right;
unsigned int *dst_ptr0 = (unsigned int *)(above_right + 4 * dst_stride);
unsigned int *dst_ptr1 = (unsigned int *)(above_right + 8 * dst_stride);
unsigned int *dst_ptr2 = (unsigned int *)(above_right + 12 * dst_stride);
*dst_ptr0 = *src_ptr;
*dst_ptr1 = *src_ptr;
*dst_ptr2 = *src_ptr;
vp8_intra4x4_predict_d_c(Above,
src - 1, src_stride,
b_mode,
dst, dst_stride,
Above[-1]);
}


@@ -11,9 +11,22 @@
#ifndef __INC_RECONINTRA4x4_H
#define __INC_RECONINTRA4x4_H
#include "vp8/common/blockd.h"
struct macroblockd;
static void intra_prediction_down_copy(MACROBLOCKD *xd,
unsigned char *above_right_src)
{
int dst_stride = xd->dst.y_stride;
unsigned char *above_right_dst = xd->dst.y_buffer - dst_stride + 16;
extern void vp8_intra_prediction_down_copy(struct macroblockd *x);
unsigned int *src_ptr = (unsigned int *)above_right_src;
unsigned int *dst_ptr0 = (unsigned int *)(above_right_dst + 4 * dst_stride);
unsigned int *dst_ptr1 = (unsigned int *)(above_right_dst + 8 * dst_stride);
unsigned int *dst_ptr2 = (unsigned int *)(above_right_dst + 12 * dst_stride);
*dst_ptr0 = *src_ptr;
*dst_ptr1 = *src_ptr;
*dst_ptr2 = *src_ptr;
}
#endif
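The inlined intra_prediction_down_copy above replicates the four above-right pixels into rows 4, 8 and 12 with int-sized stores. A portable sketch of the same copy over a flat buffer, using memcpy instead of the aliasing int stores (function name hypothetical):

```c
#include <string.h>

/* Replicate the 4 above-right pixels of a macroblock down the right
 * edge at rows 4, 8 and 12, so every 4x4 subblock sees valid
 * above-right data. above_right points at the 4 source pixels;
 * stride is the row pitch of the destination buffer. */
static void down_copy_above_right(unsigned char *above_right, int stride)
{
    memcpy(above_right + 4 * stride,  above_right, 4);
    memcpy(above_right + 8 * stride,  above_right, 4);
    memcpy(above_right + 12 * stride, above_right, 4);
}
```

The production code uses unsigned int loads/stores for speed; memcpy expresses the same four-byte copies without alignment assumptions.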


@@ -122,18 +122,14 @@ prototype void vp8_copy_mem8x4 "unsigned char *src, int src_pitch, unsigned char
specialize vp8_copy_mem8x4 mmx media neon
vp8_copy_mem8x4_media=vp8_copy_mem8x4_v6
prototype void vp8_build_intra_predictors_mby "struct macroblockd *x"
specialize vp8_build_intra_predictors_mby sse2 ssse3 neon
prototype void vp8_build_intra_predictors_mby_s "struct macroblockd *x, unsigned char * yabove_row, unsigned char * yleft, int left_stride, unsigned char * ypred_ptr, int y_stride"
specialize vp8_build_intra_predictors_mby_s sse2 ssse3
#TODO: fix assembly for neon
prototype void vp8_build_intra_predictors_mby_s "struct macroblockd *x"
specialize vp8_build_intra_predictors_mby_s sse2 ssse3 neon
prototype void vp8_build_intra_predictors_mbuv "struct macroblockd *x"
specialize vp8_build_intra_predictors_mbuv sse2 ssse3
prototype void vp8_build_intra_predictors_mbuv_s "struct macroblockd *x"
prototype void vp8_build_intra_predictors_mbuv_s "struct macroblockd *x, unsigned char * uabove_row, unsigned char * vabove_row, unsigned char *uleft, unsigned char *vleft, int left_stride, unsigned char * upred_ptr, unsigned char * vpred_ptr, int pred_stride"
specialize vp8_build_intra_predictors_mbuv_s sse2 ssse3
prototype void vp8_intra4x4_predict_d "unsigned char *above, unsigned char *left, int left_stride, int b_mode, unsigned char *dst, int dst_stride, unsigned char top_left"
prototype void vp8_intra4x4_predict "unsigned char *src, int src_stride, int b_mode, unsigned char *dst, int dst_stride"
specialize vp8_intra4x4_predict media
vp8_intra4x4_predict_media=vp8_intra4x4_predict_armv6
@@ -166,6 +162,15 @@ if [ "$CONFIG_POSTPROC" = "yes" ]; then
prototype void vp8_blend_b "unsigned char *y, unsigned char *u, unsigned char *v, int y1, int u1, int v1, int alpha, int stride"
# no asm yet
prototype void vp8_filter_by_weight16x16 "unsigned char *src, int src_stride, unsigned char *dst, int dst_stride, int src_weight"
specialize vp8_filter_by_weight16x16 sse2
prototype void vp8_filter_by_weight8x8 "unsigned char *src, int src_stride, unsigned char *dst, int dst_stride, int src_weight"
specialize vp8_filter_by_weight8x8 sse2
prototype void vp8_filter_by_weight4x4 "unsigned char *src, int src_stride, unsigned char *dst, int dst_stride, int src_weight"
# no asm yet
fi
#
@@ -203,11 +208,6 @@ prototype void vp8_bilinear_predict4x4 "unsigned char *src, int src_pitch, int x
specialize vp8_bilinear_predict4x4 mmx media neon
vp8_bilinear_predict4x4_media=vp8_bilinear_predict4x4_armv6
#
# Encoder functions below this point.
#
if [ "$CONFIG_VP8_ENCODER" = "yes" ]; then
#
# Whole-pixel Variance
#
@@ -273,27 +273,6 @@ specialize vp8_variance_halfpixvar16x16_hv mmx sse2 media neon
vp8_variance_halfpixvar16x16_hv_sse2=vp8_variance_halfpixvar16x16_hv_wmt
vp8_variance_halfpixvar16x16_hv_media=vp8_variance_halfpixvar16x16_hv_armv6
#
# Sum of squares (vector)
#
prototype unsigned int vp8_get_mb_ss "const short *"
specialize vp8_get_mb_ss mmx sse2
#
# SSE (Sum Squared Error)
#
prototype unsigned int vp8_sub_pixel_mse16x16 "const unsigned char *src_ptr, int source_stride, int xoffset, int yoffset, const unsigned char *ref_ptr, int Refstride, unsigned int *sse"
specialize vp8_sub_pixel_mse16x16 mmx sse2
vp8_sub_pixel_mse16x16_sse2=vp8_sub_pixel_mse16x16_wmt
prototype unsigned int vp8_mse16x16 "const unsigned char *src_ptr, int source_stride, const unsigned char *ref_ptr, int ref_stride, unsigned int *sse"
specialize vp8_mse16x16 mmx sse2 media neon
vp8_mse16x16_sse2=vp8_mse16x16_wmt
vp8_mse16x16_media=vp8_mse16x16_armv6
prototype unsigned int vp8_get4x4sse_cs "const unsigned char *src_ptr, int source_stride, const unsigned char *ref_ptr, int ref_stride"
specialize vp8_get4x4sse_cs mmx neon
#
# Single block SAD
#
@@ -376,6 +355,32 @@ specialize vp8_sad16x8x4d sse3
prototype void vp8_sad16x16x4d "const unsigned char *src_ptr, int source_stride, unsigned char *ref_ptr[4], int ref_stride, unsigned int *sad_array"
specialize vp8_sad16x16x4d sse3
#
# Encoder functions below this point.
#
if [ "$CONFIG_VP8_ENCODER" = "yes" ]; then
#
# Sum of squares (vector)
#
prototype unsigned int vp8_get_mb_ss "const short *"
specialize vp8_get_mb_ss mmx sse2
#
# SSE (Sum Squared Error)
#
prototype unsigned int vp8_sub_pixel_mse16x16 "const unsigned char *src_ptr, int source_stride, int xoffset, int yoffset, const unsigned char *ref_ptr, int Refstride, unsigned int *sse"
specialize vp8_sub_pixel_mse16x16 mmx sse2
vp8_sub_pixel_mse16x16_sse2=vp8_sub_pixel_mse16x16_wmt
prototype unsigned int vp8_mse16x16 "const unsigned char *src_ptr, int source_stride, const unsigned char *ref_ptr, int ref_stride, unsigned int *sse"
specialize vp8_mse16x16 mmx sse2 media neon
vp8_mse16x16_sse2=vp8_mse16x16_wmt
vp8_mse16x16_media=vp8_mse16x16_armv6
prototype unsigned int vp8_get4x4sse_cs "const unsigned char *src_ptr, int source_stride, const unsigned char *ref_ptr, int ref_stride"
specialize vp8_get4x4sse_cs mmx neon
#
# Block copy
#
@@ -498,3 +503,39 @@ specialize vp8_yv12_copy_partial_frame neon
# End of encoder only functions
fi
# Scaler functions
if [ "$CONFIG_SPATIAL_RESAMPLING" != "yes" ]; then
prototype void vp8_horizontal_line_4_5_scale "const unsigned char *source, unsigned int source_width, unsigned char *dest, unsigned int dest_width"
prototype void vp8_vertical_band_4_5_scale "unsigned char *dest, unsigned int dest_pitch, unsigned int dest_width"
prototype void vp8_last_vertical_band_4_5_scale "unsigned char *dest, unsigned int dest_pitch, unsigned int dest_width"
prototype void vp8_horizontal_line_2_3_scale "const unsigned char *source, unsigned int source_width, unsigned char *dest, unsigned int dest_width"
prototype void vp8_vertical_band_2_3_scale "unsigned char *dest, unsigned int dest_pitch, unsigned int dest_width"
prototype void vp8_last_vertical_band_2_3_scale "unsigned char *dest, unsigned int dest_pitch, unsigned int dest_width"
prototype void vp8_horizontal_line_3_5_scale "const unsigned char *source, unsigned int source_width, unsigned char *dest, unsigned int dest_width"
prototype void vp8_vertical_band_3_5_scale "unsigned char *dest, unsigned int dest_pitch, unsigned int dest_width"
prototype void vp8_last_vertical_band_3_5_scale "unsigned char *dest, unsigned int dest_pitch, unsigned int dest_width"
prototype void vp8_horizontal_line_3_4_scale "const unsigned char *source, unsigned int source_width, unsigned char *dest, unsigned int dest_width"
prototype void vp8_vertical_band_3_4_scale "unsigned char *dest, unsigned int dest_pitch, unsigned int dest_width"
prototype void vp8_last_vertical_band_3_4_scale "unsigned char *dest, unsigned int dest_pitch, unsigned int dest_width"
prototype void vp8_horizontal_line_1_2_scale "const unsigned char *source, unsigned int source_width, unsigned char *dest, unsigned int dest_width"
prototype void vp8_vertical_band_1_2_scale "unsigned char *dest, unsigned int dest_pitch, unsigned int dest_width"
prototype void vp8_last_vertical_band_1_2_scale "unsigned char *dest, unsigned int dest_pitch, unsigned int dest_width"
prototype void vp8_horizontal_line_5_4_scale "const unsigned char *source, unsigned int source_width, unsigned char *dest, unsigned int dest_width"
prototype void vp8_vertical_band_5_4_scale "unsigned char *source, unsigned int src_pitch, unsigned char *dest, unsigned int dest_pitch, unsigned int dest_width"
prototype void vp8_horizontal_line_5_3_scale "const unsigned char *source, unsigned int source_width, unsigned char *dest, unsigned int dest_width"
prototype void vp8_vertical_band_5_3_scale "unsigned char *source, unsigned int src_pitch, unsigned char *dest, unsigned int dest_pitch, unsigned int dest_width"
prototype void vp8_horizontal_line_2_1_scale "const unsigned char *source, unsigned int source_width, unsigned char *dest, unsigned int dest_width"
prototype void vp8_vertical_band_2_1_scale "unsigned char *source, unsigned int src_pitch, unsigned char *dest, unsigned int dest_pitch, unsigned int dest_width"
prototype void vp8_vertical_band_2_1_scale_i "unsigned char *source, unsigned int src_pitch, unsigned char *dest, unsigned int dest_pitch, unsigned int dest_width"
fi
prototype void vp8_yv12_extend_frame_borders "struct yv12_buffer_config *ybf"
specialize vp8_yv12_extend_frame_borders neon
prototype void vp8_yv12_copy_frame "struct yv12_buffer_config *src_ybc, struct yv12_buffer_config *dst_ybc"
specialize vp8_yv12_copy_frame neon
prototype void vp8_yv12_copy_y "struct yv12_buffer_config *src_ybc, struct yv12_buffer_config *dst_ybc"
specialize vp8_yv12_copy_y neon


@@ -13,40 +13,15 @@
#include "vpx_config.h"
#include "vpx/vpx_integer.h"
unsigned int vp8_sad16x16_c(
const unsigned char *src_ptr,
int src_stride,
const unsigned char *ref_ptr,
int ref_stride,
int max_sad)
{
int r, c;
unsigned int sad = 0;
for (r = 0; r < 16; r++)
{
for (c = 0; c < 16; c++)
{
sad += abs(src_ptr[c] - ref_ptr[c]);
}
src_ptr += src_stride;
ref_ptr += ref_stride;
}
return sad;
}
static __inline
static
unsigned int sad_mx_n_c(
const unsigned char *src_ptr,
int src_stride,
const unsigned char *ref_ptr,
int ref_stride,
int m,
int n)
int max_sad,
int m,
int n)
{
int r, c;
@@ -59,6 +34,9 @@ unsigned int sad_mx_n_c(
sad += abs(src_ptr[c] - ref_ptr[c]);
}
if (sad > max_sad)
break;
src_ptr += src_stride;
ref_ptr += ref_stride;
}
@@ -66,16 +44,31 @@ unsigned int sad_mx_n_c(
return sad;
}
/* max_sad is provided as an optional optimization point. Alternative
* implementations of these functions are not required to check it.
*/
unsigned int vp8_sad16x16_c(
const unsigned char *src_ptr,
int src_stride,
const unsigned char *ref_ptr,
int ref_stride,
int max_sad)
{
return sad_mx_n_c(src_ptr, src_stride, ref_ptr, ref_stride, max_sad, 16, 16);
}
unsigned int vp8_sad8x8_c(
const unsigned char *src_ptr,
int src_stride,
const unsigned char *ref_ptr,
int ref_stride,
int max_sad)
int max_sad)
{
return sad_mx_n_c(src_ptr, src_stride, ref_ptr, ref_stride, 8, 8);
return sad_mx_n_c(src_ptr, src_stride, ref_ptr, ref_stride, max_sad, 8, 8);
}
@@ -84,10 +77,10 @@ unsigned int vp8_sad16x8_c(
int src_stride,
const unsigned char *ref_ptr,
int ref_stride,
int max_sad)
int max_sad)
{
return sad_mx_n_c(src_ptr, src_stride, ref_ptr, ref_stride, 16, 8);
return sad_mx_n_c(src_ptr, src_stride, ref_ptr, ref_stride, max_sad, 16, 8);
}
@@ -97,10 +90,10 @@ unsigned int vp8_sad8x16_c(
int src_stride,
const unsigned char *ref_ptr,
int ref_stride,
int max_sad)
int max_sad)
{
return sad_mx_n_c(src_ptr, src_stride, ref_ptr, ref_stride, 8, 16);
return sad_mx_n_c(src_ptr, src_stride, ref_ptr, ref_stride, max_sad, 8, 16);
}
@@ -109,10 +102,10 @@ unsigned int vp8_sad4x4_c(
int src_stride,
const unsigned char *ref_ptr,
int ref_stride,
int max_sad)
int max_sad)
{
return sad_mx_n_c(src_ptr, src_stride, ref_ptr, ref_stride, 4, 4);
return sad_mx_n_c(src_ptr, src_stride, ref_ptr, ref_stride, max_sad, 4, 4);
}
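The refactor above routes every block size through sad_mx_n_c and threads max_sad down as an optional early-out, checked once per row. A reduced sketch of that pattern (names hypothetical):

```c
#include <stdlib.h>

/* Sum of absolute differences over an m x n block, with an optional
 * early exit once the running sum exceeds max_sad. Callers that do
 * not want the cutoff pass a very large max_sad; as the comment in
 * the diff notes, implementations are not required to honor it. */
static unsigned int sad_mxn(const unsigned char *src, int src_stride,
                            const unsigned char *ref, int ref_stride,
                            int max_sad, int m, int n)
{
    unsigned int sad = 0;
    int r, c;

    for (r = 0; r < n; r++)
    {
        for (c = 0; c < m; c++)
            sad += abs(src[c] - ref[c]);

        if (sad > (unsigned int)max_sad)
            break;  /* partial sum; caller only needs "worse than max_sad" */

        src += src_stride;
        ref += ref_stride;
    }
    return sad;
}
```

On an early exit the returned value is a partial sum, which is sufficient for motion search since the candidate is rejected either way.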
void vp8_sad16x16x3_c(


@@ -10,7 +10,7 @@
#include "variance.h"
#include "vp8/common/filter.h"
#include "filter.h"
unsigned int vp8_get_mb_ss_c

vp8/common/vp8_entropymodedata.h Executable file

@@ -0,0 +1,242 @@
/*
* Copyright (c) 2010 The WebM project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/*Generated file, included by entropymode.c*/
const struct vp8_token_struct vp8_bmode_encodings[VP8_BINTRAMODES] =
{
{ 0, 1 },
{ 2, 2 },
{ 6, 3 },
{ 28, 5 },
{ 30, 5 },
{ 58, 6 },
{ 59, 6 },
{ 62, 6 },
{ 126, 7 },
{ 127, 7 }
};
const struct vp8_token_struct vp8_ymode_encodings[VP8_YMODES] =
{
{ 0, 1 },
{ 4, 3 },
{ 5, 3 },
{ 6, 3 },
{ 7, 3 }
};
const struct vp8_token_struct vp8_kf_ymode_encodings[VP8_YMODES] =
{
{ 4, 3 },
{ 5, 3 },
{ 6, 3 },
{ 7, 3 },
{ 0, 1 }
};
const struct vp8_token_struct vp8_uv_mode_encodings[VP8_UV_MODES] =
{
{ 0, 1 },
{ 2, 2 },
{ 6, 3 },
{ 7, 3 }
};
const struct vp8_token_struct vp8_mbsplit_encodings[VP8_NUMMBSPLITS] =
{
{ 6, 3 },
{ 7, 3 },
{ 2, 2 },
{ 0, 1 }
};
const struct vp8_token_struct vp8_mv_ref_encoding_array[VP8_MVREFS] =
{
{ 2, 2 },
{ 6, 3 },
{ 0, 1 },
{ 14, 4 },
{ 15, 4 }
};
const struct vp8_token_struct vp8_sub_mv_ref_encoding_array[VP8_SUBMVREFS] =
{
{ 0, 1 },
{ 2, 2 },
{ 6, 3 },
{ 7, 3 }
};
const struct vp8_token_struct vp8_small_mvencodings[8] =
{
{ 0, 3 },
{ 1, 3 },
{ 2, 3 },
{ 3, 3 },
{ 4, 3 },
{ 5, 3 },
{ 6, 3 },
{ 7, 3 }
};
const vp8_prob vp8_ymode_prob[VP8_YMODES-1] =
{
112, 86, 140, 37
};
const vp8_prob vp8_kf_ymode_prob[VP8_YMODES-1] =
{
145, 156, 163, 128
};
const vp8_prob vp8_uv_mode_prob[VP8_UV_MODES-1] =
{
162, 101, 204
};
const vp8_prob vp8_kf_uv_mode_prob[VP8_UV_MODES-1] =
{
142, 114, 183
};
const vp8_prob vp8_bmode_prob[VP8_BINTRAMODES-1] =
{
120, 90, 79, 133, 87, 85, 80, 111, 151
};
const vp8_prob vp8_kf_bmode_prob
[VP8_BINTRAMODES] [VP8_BINTRAMODES] [VP8_BINTRAMODES-1] =
{
{
{ 231, 120, 48, 89, 115, 113, 120, 152, 112 },
{ 152, 179, 64, 126, 170, 118, 46, 70, 95 },
{ 175, 69, 143, 80, 85, 82, 72, 155, 103 },
{ 56, 58, 10, 171, 218, 189, 17, 13, 152 },
{ 144, 71, 10, 38, 171, 213, 144, 34, 26 },
{ 114, 26, 17, 163, 44, 195, 21, 10, 173 },
{ 121, 24, 80, 195, 26, 62, 44, 64, 85 },
{ 170, 46, 55, 19, 136, 160, 33, 206, 71 },
{ 63, 20, 8, 114, 114, 208, 12, 9, 226 },
{ 81, 40, 11, 96, 182, 84, 29, 16, 36 }
},
{
{ 134, 183, 89, 137, 98, 101, 106, 165, 148 },
{ 72, 187, 100, 130, 157, 111, 32, 75, 80 },
{ 66, 102, 167, 99, 74, 62, 40, 234, 128 },
{ 41, 53, 9, 178, 241, 141, 26, 8, 107 },
{ 104, 79, 12, 27, 217, 255, 87, 17, 7 },
{ 74, 43, 26, 146, 73, 166, 49, 23, 157 },
{ 65, 38, 105, 160, 51, 52, 31, 115, 128 },
{ 87, 68, 71, 44, 114, 51, 15, 186, 23 },
{ 47, 41, 14, 110, 182, 183, 21, 17, 194 },
{ 66, 45, 25, 102, 197, 189, 23, 18, 22 }
},
{
{ 88, 88, 147, 150, 42, 46, 45, 196, 205 },
{ 43, 97, 183, 117, 85, 38, 35, 179, 61 },
{ 39, 53, 200, 87, 26, 21, 43, 232, 171 },
{ 56, 34, 51, 104, 114, 102, 29, 93, 77 },
{ 107, 54, 32, 26, 51, 1, 81, 43, 31 },
{ 39, 28, 85, 171, 58, 165, 90, 98, 64 },
{ 34, 22, 116, 206, 23, 34, 43, 166, 73 },
{ 68, 25, 106, 22, 64, 171, 36, 225, 114 },
{ 34, 19, 21, 102, 132, 188, 16, 76, 124 },
{ 62, 18, 78, 95, 85, 57, 50, 48, 51 }
},
{
{ 193, 101, 35, 159, 215, 111, 89, 46, 111 },
{ 60, 148, 31, 172, 219, 228, 21, 18, 111 },
{ 112, 113, 77, 85, 179, 255, 38, 120, 114 },
{ 40, 42, 1, 196, 245, 209, 10, 25, 109 },
{ 100, 80, 8, 43, 154, 1, 51, 26, 71 },
{ 88, 43, 29, 140, 166, 213, 37, 43, 154 },
{ 61, 63, 30, 155, 67, 45, 68, 1, 209 },
{ 142, 78, 78, 16, 255, 128, 34, 197, 171 },
{ 41, 40, 5, 102, 211, 183, 4, 1, 221 },
{ 51, 50, 17, 168, 209, 192, 23, 25, 82 }
},
{
{ 125, 98, 42, 88, 104, 85, 117, 175, 82 },
{ 95, 84, 53, 89, 128, 100, 113, 101, 45 },
{ 75, 79, 123, 47, 51, 128, 81, 171, 1 },
{ 57, 17, 5, 71, 102, 57, 53, 41, 49 },
{ 115, 21, 2, 10, 102, 255, 166, 23, 6 },
{ 38, 33, 13, 121, 57, 73, 26, 1, 85 },
{ 41, 10, 67, 138, 77, 110, 90, 47, 114 },
{ 101, 29, 16, 10, 85, 128, 101, 196, 26 },
{ 57, 18, 10, 102, 102, 213, 34, 20, 43 },
{ 117, 20, 15, 36, 163, 128, 68, 1, 26 }
},
{
{ 138, 31, 36, 171, 27, 166, 38, 44, 229 },
{ 67, 87, 58, 169, 82, 115, 26, 59, 179 },
{ 63, 59, 90, 180, 59, 166, 93, 73, 154 },
{ 40, 40, 21, 116, 143, 209, 34, 39, 175 },
{ 57, 46, 22, 24, 128, 1, 54, 17, 37 },
{ 47, 15, 16, 183, 34, 223, 49, 45, 183 },
{ 46, 17, 33, 183, 6, 98, 15, 32, 183 },
{ 65, 32, 73, 115, 28, 128, 23, 128, 205 },
{ 40, 3, 9, 115, 51, 192, 18, 6, 223 },
{ 87, 37, 9, 115, 59, 77, 64, 21, 47 }
},
{
{ 104, 55, 44, 218, 9, 54, 53, 130, 226 },
{ 64, 90, 70, 205, 40, 41, 23, 26, 57 },
{ 54, 57, 112, 184, 5, 41, 38, 166, 213 },
{ 30, 34, 26, 133, 152, 116, 10, 32, 134 },
{ 75, 32, 12, 51, 192, 255, 160, 43, 51 },
{ 39, 19, 53, 221, 26, 114, 32, 73, 255 },
{ 31, 9, 65, 234, 2, 15, 1, 118, 73 },
{ 88, 31, 35, 67, 102, 85, 55, 186, 85 },
{ 56, 21, 23, 111, 59, 205, 45, 37, 192 },
{ 55, 38, 70, 124, 73, 102, 1, 34, 98 }
},
{
{ 102, 61, 71, 37, 34, 53, 31, 243, 192 },
{ 69, 60, 71, 38, 73, 119, 28, 222, 37 },
{ 68, 45, 128, 34, 1, 47, 11, 245, 171 },
{ 62, 17, 19, 70, 146, 85, 55, 62, 70 },
{ 75, 15, 9, 9, 64, 255, 184, 119, 16 },
{ 37, 43, 37, 154, 100, 163, 85, 160, 1 },
{ 63, 9, 92, 136, 28, 64, 32, 201, 85 },
{ 86, 6, 28, 5, 64, 255, 25, 248, 1 },
{ 56, 8, 17, 132, 137, 255, 55, 116, 128 },
{ 58, 15, 20, 82, 135, 57, 26, 121, 40 }
},
{
{ 164, 50, 31, 137, 154, 133, 25, 35, 218 },
{ 51, 103, 44, 131, 131, 123, 31, 6, 158 },
{ 86, 40, 64, 135, 148, 224, 45, 183, 128 },
{ 22, 26, 17, 131, 240, 154, 14, 1, 209 },
{ 83, 12, 13, 54, 192, 255, 68, 47, 28 },
{ 45, 16, 21, 91, 64, 222, 7, 1, 197 },
{ 56, 21, 39, 155, 60, 138, 23, 102, 213 },
{ 85, 26, 85, 85, 128, 128, 32, 146, 171 },
{ 18, 11, 7, 63, 144, 171, 4, 4, 246 },
{ 35, 27, 10, 146, 174, 171, 12, 26, 128 }
},
{
{ 190, 80, 35, 99, 180, 80, 126, 54, 45 },
{ 85, 126, 47, 87, 176, 51, 41, 20, 32 },
{ 101, 75, 128, 139, 118, 146, 116, 128, 85 },
{ 56, 41, 15, 176, 236, 85, 37, 9, 62 },
{ 146, 36, 19, 30, 171, 255, 97, 27, 20 },
{ 71, 30, 17, 119, 118, 255, 17, 18, 138 },
{ 101, 38, 60, 138, 55, 70, 43, 26, 142 },
{ 138, 45, 61, 62, 219, 1, 81, 188, 64 },
{ 32, 41, 20, 117, 151, 142, 20, 21, 163 },
{ 112, 19, 12, 61, 195, 128, 48, 4, 24 }
}
};
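Each entry in the generated tables above pairs a code value with its bit length, in the {value, Len} layout of vp8_token_struct. A small sketch of how such a pair expands to its bit string, most significant bit first (helper name hypothetical):

```c
/* Expand a {value, len} token entry into its bit string, most
 * significant bit first, e.g. {6, 3} -> "110".
 * out must have room for len + 1 bytes. */
static void token_to_bits(int value, int len, char *out)
{
    int i;
    for (i = 0; i < len; i++)
        out[i] = (char)('0' + ((value >> (len - 1 - i)) & 1));
    out[len] = '\0';
}
```

For example, the mv_ref entry {14, 4} expands to "1110"; shorter codes go to the more probable symbols.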


@@ -0,0 +1,31 @@
/*
* Copyright (c) 2010 The WebM project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
extern "C" {
void vp8_short_idct4x4llm_mmx(short *input, unsigned char *pred_ptr,
int pred_stride, unsigned char *dst_ptr,
int dst_stride);
}
#include "vp8/common/idctllm_test.h"
namespace
{
INSTANTIATE_TEST_CASE_P(MMX, IDCTTest,
::testing::Values(vp8_short_idct4x4llm_mmx));
} // namespace
int main(int argc, char **argv) {
::testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}

File diff suppressed because it is too large


@@ -16,6 +16,10 @@
void sym(unsigned char *src, int pitch, const unsigned char *blimit,\
const unsigned char *limit, const unsigned char *thresh, int count)
#define prototype_loopfilter_nc(sym) \
void sym(unsigned char *src, int pitch, const unsigned char *blimit,\
const unsigned char *limit, const unsigned char *thresh)
#define prototype_simple_loopfilter(sym) \
void sym(unsigned char *y, int ystride, const unsigned char *blimit)
@@ -30,11 +34,11 @@ prototype_simple_loopfilter(vp8_loop_filter_simple_vertical_edge_mmx);
prototype_loopfilter(vp8_loop_filter_bv_y_sse2);
prototype_loopfilter(vp8_loop_filter_bh_y_sse2);
#else
prototype_loopfilter(vp8_loop_filter_vertical_edge_sse2);
prototype_loopfilter(vp8_loop_filter_horizontal_edge_sse2);
prototype_loopfilter_nc(vp8_loop_filter_vertical_edge_sse2);
prototype_loopfilter_nc(vp8_loop_filter_horizontal_edge_sse2);
#endif
prototype_loopfilter(vp8_mbloop_filter_vertical_edge_sse2);
prototype_loopfilter(vp8_mbloop_filter_horizontal_edge_sse2);
prototype_loopfilter_nc(vp8_mbloop_filter_vertical_edge_sse2);
prototype_loopfilter_nc(vp8_mbloop_filter_horizontal_edge_sse2);
extern loop_filter_uvfunction vp8_loop_filter_horizontal_edge_uv_sse2;
extern loop_filter_uvfunction vp8_loop_filter_vertical_edge_uv_sse2;
@@ -124,7 +128,7 @@ void vp8_loop_filter_bvs_mmx(unsigned char *y_ptr, int y_stride, const unsigned
void vp8_loop_filter_mbh_sse2(unsigned char *y_ptr, unsigned char *u_ptr, unsigned char *v_ptr,
int y_stride, int uv_stride, loop_filter_info *lfi)
{
vp8_mbloop_filter_horizontal_edge_sse2(y_ptr, y_stride, lfi->mblim, lfi->lim, lfi->hev_thr, 2);
vp8_mbloop_filter_horizontal_edge_sse2(y_ptr, y_stride, lfi->mblim, lfi->lim, lfi->hev_thr);
if (u_ptr)
vp8_mbloop_filter_horizontal_edge_uv_sse2(u_ptr, uv_stride, lfi->mblim, lfi->lim, lfi->hev_thr, v_ptr);
@@ -135,7 +139,7 @@ void vp8_loop_filter_mbh_sse2(unsigned char *y_ptr, unsigned char *u_ptr, unsign
void vp8_loop_filter_mbv_sse2(unsigned char *y_ptr, unsigned char *u_ptr, unsigned char *v_ptr,
int y_stride, int uv_stride, loop_filter_info *lfi)
{
vp8_mbloop_filter_vertical_edge_sse2(y_ptr, y_stride, lfi->mblim, lfi->lim, lfi->hev_thr, 2);
vp8_mbloop_filter_vertical_edge_sse2(y_ptr, y_stride, lfi->mblim, lfi->lim, lfi->hev_thr);
if (u_ptr)
vp8_mbloop_filter_vertical_edge_uv_sse2(u_ptr, uv_stride, lfi->mblim, lfi->lim, lfi->hev_thr, v_ptr);
@@ -149,9 +153,9 @@ void vp8_loop_filter_bh_sse2(unsigned char *y_ptr, unsigned char *u_ptr, unsigne
#if ARCH_X86_64
vp8_loop_filter_bh_y_sse2(y_ptr, y_stride, lfi->blim, lfi->lim, lfi->hev_thr, 2);
#else
vp8_loop_filter_horizontal_edge_sse2(y_ptr + 4 * y_stride, y_stride, lfi->blim, lfi->lim, lfi->hev_thr, 2);
vp8_loop_filter_horizontal_edge_sse2(y_ptr + 8 * y_stride, y_stride, lfi->blim, lfi->lim, lfi->hev_thr, 2);
vp8_loop_filter_horizontal_edge_sse2(y_ptr + 12 * y_stride, y_stride, lfi->blim, lfi->lim, lfi->hev_thr, 2);
vp8_loop_filter_horizontal_edge_sse2(y_ptr + 4 * y_stride, y_stride, lfi->blim, lfi->lim, lfi->hev_thr);
vp8_loop_filter_horizontal_edge_sse2(y_ptr + 8 * y_stride, y_stride, lfi->blim, lfi->lim, lfi->hev_thr);
vp8_loop_filter_horizontal_edge_sse2(y_ptr + 12 * y_stride, y_stride, lfi->blim, lfi->lim, lfi->hev_thr);
#endif
if (u_ptr)
@@ -174,9 +178,9 @@ void vp8_loop_filter_bv_sse2(unsigned char *y_ptr, unsigned char *u_ptr, unsigne
#if ARCH_X86_64
vp8_loop_filter_bv_y_sse2(y_ptr, y_stride, lfi->blim, lfi->lim, lfi->hev_thr, 2);
#else
vp8_loop_filter_vertical_edge_sse2(y_ptr + 4, y_stride, lfi->blim, lfi->lim, lfi->hev_thr, 2);
vp8_loop_filter_vertical_edge_sse2(y_ptr + 8, y_stride, lfi->blim, lfi->lim, lfi->hev_thr, 2);
vp8_loop_filter_vertical_edge_sse2(y_ptr + 12, y_stride, lfi->blim, lfi->lim, lfi->hev_thr, 2);
vp8_loop_filter_vertical_edge_sse2(y_ptr + 4, y_stride, lfi->blim, lfi->lim, lfi->hev_thr);
vp8_loop_filter_vertical_edge_sse2(y_ptr + 8, y_stride, lfi->blim, lfi->lim, lfi->hev_thr);
vp8_loop_filter_vertical_edge_sse2(y_ptr + 12, y_stride, lfi->blim, lfi->lim, lfi->hev_thr);
#endif
if (u_ptr)


@@ -0,0 +1,281 @@
;
; Copyright (c) 2012 The WebM project authors. All Rights Reserved.
;
; Use of this source code is governed by a BSD-style license
; that can be found in the LICENSE file in the root of the source
; tree. An additional intellectual property rights grant can be found
; in the file PATENTS. All contributing project authors may
; be found in the AUTHORS file in the root of the source tree.
;
%include "vpx_ports/x86_abi_support.asm"
;void vp8_filter_by_weight16x16_sse2
;(
; unsigned char *src,
; int src_stride,
; unsigned char *dst,
; int dst_stride,
; int src_weight
;)
global sym(vp8_filter_by_weight16x16_sse2)
sym(vp8_filter_by_weight16x16_sse2):
push rbp
mov rbp, rsp
SHADOW_ARGS_TO_STACK 5
SAVE_XMM 6
GET_GOT rbx
push rsi
push rdi
; end prolog
movd xmm0, arg(4) ; src_weight
pshuflw xmm0, xmm0, 0x0 ; replicate to all low words
punpcklqdq xmm0, xmm0 ; replicate to all hi words
movdqa xmm1, [GLOBAL(tMFQE)]
psubw xmm1, xmm0 ; dst_weight
mov rax, arg(0) ; src
mov rsi, arg(1) ; src_stride
mov rdx, arg(2) ; dst
mov rdi, arg(3) ; dst_stride
mov rcx, 16 ; loop count
pxor xmm6, xmm6
.combine
movdqa xmm2, [rax]
movdqa xmm4, [rdx]
add rax, rsi
; src * src_weight
movdqa xmm3, xmm2
punpcklbw xmm2, xmm6
punpckhbw xmm3, xmm6
pmullw xmm2, xmm0
pmullw xmm3, xmm0
; dst * dst_weight
movdqa xmm5, xmm4
punpcklbw xmm4, xmm6
punpckhbw xmm5, xmm6
pmullw xmm4, xmm1
pmullw xmm5, xmm1
; sum, round and shift
paddw xmm2, xmm4
paddw xmm3, xmm5
paddw xmm2, [GLOBAL(tMFQE_round)]
paddw xmm3, [GLOBAL(tMFQE_round)]
psrlw xmm2, 4
psrlw xmm3, 4
packuswb xmm2, xmm3
movdqa [rdx], xmm2
add rdx, rdi
dec rcx
jnz .combine
; begin epilog
pop rdi
pop rsi
RESTORE_GOT
RESTORE_XMM
UNSHADOW_ARGS
pop rbp
ret
;void vp8_filter_by_weight8x8_sse2
;(
; unsigned char *src,
; int src_stride,
; unsigned char *dst,
; int dst_stride,
; int src_weight
;)
global sym(vp8_filter_by_weight8x8_sse2)
sym(vp8_filter_by_weight8x8_sse2):
push rbp
mov rbp, rsp
SHADOW_ARGS_TO_STACK 5
GET_GOT rbx
push rsi
push rdi
; end prolog
movd xmm0, arg(4) ; src_weight
pshuflw xmm0, xmm0, 0x0 ; replicate to all low words
punpcklqdq xmm0, xmm0 ; replicate to all hi words
movdqa xmm1, [GLOBAL(tMFQE)]
psubw xmm1, xmm0 ; dst_weight
mov rax, arg(0) ; src
mov rsi, arg(1) ; src_stride
mov rdx, arg(2) ; dst
mov rdi, arg(3) ; dst_stride
mov rcx, 8 ; loop count
pxor xmm4, xmm4
.combine
movq xmm2, [rax]
movq xmm3, [rdx]
add rax, rsi
; src * src_weight
punpcklbw xmm2, xmm4
pmullw xmm2, xmm0
; dst * dst_weight
punpcklbw xmm3, xmm4
pmullw xmm3, xmm1
; sum, round and shift
paddw xmm2, xmm3
paddw xmm2, [GLOBAL(tMFQE_round)]
psrlw xmm2, 4
packuswb xmm2, xmm4
movq [rdx], xmm2
add rdx, rdi
dec rcx
jnz .combine
; begin epilog
pop rdi
pop rsi
RESTORE_GOT
UNSHADOW_ARGS
pop rbp
ret
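For reference, both `vp8_filter_by_weight16x16_sse2` and `vp8_filter_by_weight8x8_sse2` implement the same per-pixel weighted blend using the `tMFQE` and `tMFQE_round` constants. A minimal C sketch of that arithmetic follows; the `_c` function name and the `MFQE_PRECISION` value of 4 (implied by `1 << MFQE_PRECISION` = 0x10) are assumptions for this sketch, not symbols confirmed by the diff:

```c
#include <assert.h>

/* Assumed from the table comments: tMFQE = 1 << MFQE_PRECISION (16),
 * tMFQE_round = 1 << (MFQE_PRECISION - 1) (8). */
#define MFQE_PRECISION 4

/* Blend src into dst: dst = (src*w + dst*(16-w) + 8) >> 4 per pixel. */
static void filter_by_weight_c(const unsigned char *src, int src_stride,
                               unsigned char *dst, int dst_stride,
                               int block_size, int src_weight)
{
    const int dst_weight = (1 << MFQE_PRECISION) - src_weight;
    const int rounding = 1 << (MFQE_PRECISION - 1);
    int r, c;

    for (r = 0; r < block_size; r++) {
        for (c = 0; c < block_size; c++)
            dst[c] = (unsigned char)((src[c] * src_weight +
                                      dst[c] * dst_weight +
                                      rounding) >> MFQE_PRECISION);
        src += src_stride;
        dst += dst_stride;
    }
}
```

With `src_weight = 8` the result is an even average of the two blocks, which is what the rounding constant is sized for.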
;void vp8_variance_and_sad_16x16_sse2 | arg
;(
; unsigned char *src1, 0
; int stride1, 1
; unsigned char *src2, 2
; int stride2, 3
; unsigned int *variance, 4
; unsigned int *sad, 5
;)
global sym(vp8_variance_and_sad_16x16_sse2)
sym(vp8_variance_and_sad_16x16_sse2):
push rbp
mov rbp, rsp
SHADOW_ARGS_TO_STACK 6
GET_GOT rbx
push rsi
push rdi
; end prolog
mov rax, arg(0) ; src1
mov rcx, arg(1) ; stride1
mov rdx, arg(2) ; src2
mov rdi, arg(3) ; stride2
mov rsi, 16 ; block height
; Prep accumulator registers
pxor xmm3, xmm3 ; SAD
pxor xmm4, xmm4 ; sum of src2
pxor xmm5, xmm5 ; sum of src2^2
; Because we're working with the actual output frames
; we can't depend on any kind of data alignment.
.accumulate
movdqa xmm0, [rax] ; src1
movdqa xmm1, [rdx] ; src2
add rax, rcx ; src1 + stride1
add rdx, rdi ; src2 + stride2
; SAD(src1, src2)
psadbw xmm0, xmm1
paddusw xmm3, xmm0
; SUM(src2)
pxor xmm2, xmm2
psadbw xmm2, xmm1 ; sum src2 by misusing SAD against 0
paddusw xmm4, xmm2
; pmaddubsw would be ideal if it took two unsigned values, but it
; expects one signed and one unsigned value. So we zero extend
; and operate on words instead.
pxor xmm2, xmm2
movdqa xmm0, xmm1
punpcklbw xmm0, xmm2
punpckhbw xmm1, xmm2
pmaddwd xmm0, xmm0
pmaddwd xmm1, xmm1
paddd xmm5, xmm0
paddd xmm5, xmm1
sub rsi, 1
jnz .accumulate
; phaddd only operates on adjacent double words.
; Finalize SAD and store
movdqa xmm0, xmm3
psrldq xmm0, 8
paddusw xmm0, xmm3
paddd xmm0, [GLOBAL(t128)]
psrld xmm0, 8
mov rax, arg(5)
movd [rax], xmm0
; Accumulate sum of src2
movdqa xmm0, xmm4
psrldq xmm0, 8
paddusw xmm0, xmm4
; Square src2. Ignore high value
pmuludq xmm0, xmm0
psrld xmm0, 8
; phaddw could be used to sum adjacent values but we want
; all the values summed. promote to doubles, accumulate,
; shift and sum
pxor xmm2, xmm2
movdqa xmm1, xmm5
punpckldq xmm1, xmm2
punpckhdq xmm5, xmm2
paddd xmm1, xmm5
movdqa xmm2, xmm1
psrldq xmm1, 8
paddd xmm1, xmm2
psubd xmm1, xmm0
; (variance + 128) >> 8
paddd xmm1, [GLOBAL(t128)]
psrld xmm1, 8
mov rax, arg(4)
movd [rax], xmm1
; begin epilog
pop rdi
pop rsi
RESTORE_GOT
UNSHADOW_ARGS
pop rbp
ret
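The arithmetic `vp8_variance_and_sad_16x16_sse2` performs can be sketched in C: it accumulates the SAD between the two 16x16 blocks plus the sum and sum of squares of `src2`, then stores both outputs scaled down by 256 with rounding (`(x + 128) >> 8`, the `t128` constant). The `_c` function name is an assumption for this sketch:

```c
#include <assert.h>
#include <string.h>

static void variance_and_sad_16x16_c(const unsigned char *src1, int stride1,
                                     const unsigned char *src2, int stride2,
                                     unsigned int *variance, unsigned int *sad)
{
    unsigned int sad_acc = 0, sum = 0, sse = 0;
    int r, c;

    for (r = 0; r < 16; r++) {
        for (c = 0; c < 16; c++) {
            int a = src1[c], b = src2[c];
            sad_acc += (a > b) ? (a - b) : (b - a);  /* psadbw */
            sum += b;       /* psadbw of src2 against zero */
            sse += b * b;   /* zero extend + pmaddwd */
        }
        src1 += stride1;
        src2 += stride2;
    }
    *sad = (sad_acc + 128) >> 8;
    /* sse - sum*sum/256 is 256 times the variance of src2, so the
     * final rounded shift yields the per-pixel variance. */
    *variance = (sse - sum * sum / 256 + 128) >> 8;
}
```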
SECTION_RODATA
align 16
t128:
ddq 128
align 16
tMFQE: ; 1 << MFQE_PRECISION
times 8 dw 0x10
align 16
tMFQE_round: ; 1 << (MFQE_PRECISION - 1)
times 8 dw 0x08


@@ -0,0 +1,21 @@
/*
* Copyright (c) 2012 The WebM project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/* On the Android NDK, rand is an inline function, but postproc needs the rand symbol */
#if defined(__ANDROID__)
#define rand __rand
#include <stdlib.h>
#undef rand
extern int rand(void)
{
return __rand();
}
#endif


@@ -119,35 +119,37 @@ sym(vp8_copy_mem16x16_sse2):
;void vp8_intra_pred_uv_dc_mmx2(
; unsigned char *dst,
; int dst_stride
; unsigned char *src,
; int src_stride,
; unsigned char *above,
; unsigned char *left,
; int left_stride,
; )
global sym(vp8_intra_pred_uv_dc_mmx2)
sym(vp8_intra_pred_uv_dc_mmx2):
push rbp
mov rbp, rsp
SHADOW_ARGS_TO_STACK 4
SHADOW_ARGS_TO_STACK 5
push rsi
push rdi
; end prolog
; from top
mov rsi, arg(2) ;src;
movsxd rax, dword ptr arg(3) ;src_stride;
sub rsi, rax
mov rdi, arg(2) ;above;
mov rsi, arg(3) ;left;
movsxd rax, dword ptr arg(4) ;left_stride;
pxor mm0, mm0
movq mm1, [rsi]
psadbw mm1, mm0
; from left
dec rsi
movq mm1, [rdi]
lea rdi, [rax*3]
movzx ecx, byte [rsi+rax]
psadbw mm1, mm0
; from left
movzx ecx, byte [rsi]
movzx edx, byte [rsi+rax*1]
add ecx, edx
movzx edx, byte [rsi+rax*2]
add ecx, edx
movzx edx, byte [rsi+rdi]
add ecx, edx
lea rsi, [rsi+rax*4]
add ecx, edx
movzx edx, byte [rsi]
add ecx, edx
movzx edx, byte [rsi+rax]
@@ -156,31 +158,29 @@ sym(vp8_intra_pred_uv_dc_mmx2):
add ecx, edx
movzx edx, byte [rsi+rdi]
add ecx, edx
movzx edx, byte [rsi+rax*4]
add ecx, edx
; add up
pextrw edx, mm1, 0x0
lea edx, [edx+ecx+8]
sar edx, 4
movd mm1, edx
movsxd rcx, dword ptr arg(1) ;dst_stride
pshufw mm1, mm1, 0x0
mov rdi, arg(0) ;dst;
packuswb mm1, mm1
; write out
mov rdi, arg(0) ;dst;
movsxd rcx, dword ptr arg(1) ;dst_stride
lea rax, [rcx*3]
lea rdx, [rdi+rcx*4]
movq [rdi ], mm1
movq [rdi+rcx ], mm1
movq [rdi+rcx*2], mm1
movq [rdi+rax ], mm1
lea rdi, [rdi+rcx*4]
movq [rdi ], mm1
movq [rdi+rcx ], mm1
movq [rdi+rcx*2], mm1
movq [rdi+rax ], mm1
movq [rdx ], mm1
movq [rdx+rcx ], mm1
movq [rdx+rcx*2], mm1
movq [rdx+rax ], mm1
; begin epilog
pop rdi
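The DC predictor above, rewritten to take explicit `above`/`left` pointers, averages the 8 pixels above and the 8 pixels to the left with rounding and fills the 8x8 block with the result. A C sketch of the same computation, under the new argument list (the `_c` suffix is an assumption here):

```c
#include <assert.h>
#include <string.h>

static void intra_pred_uv_dc_c(unsigned char *dst, int dst_stride,
                               const unsigned char *above,
                               const unsigned char *left, int left_stride)
{
    int sum = 8;  /* rounding term, as in lea edx, [edx+ecx+8] */
    int i;
    unsigned char dc;

    for (i = 0; i < 8; i++)
        sum += above[i] + left[i * left_stride];

    dc = (unsigned char)(sum >> 4);  /* sar edx, 4: average of 16 pixels */
    for (i = 0; i < 8; i++) {
        memset(dst, dc, 8);
        dst += dst_stride;
    }
}
```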
@@ -192,23 +192,24 @@ sym(vp8_intra_pred_uv_dc_mmx2):
;void vp8_intra_pred_uv_dctop_mmx2(
; unsigned char *dst,
; int dst_stride
; unsigned char *src,
; int src_stride,
; unsigned char *above,
; unsigned char *left,
; int left_stride,
; )
global sym(vp8_intra_pred_uv_dctop_mmx2)
sym(vp8_intra_pred_uv_dctop_mmx2):
push rbp
mov rbp, rsp
SHADOW_ARGS_TO_STACK 4
SHADOW_ARGS_TO_STACK 5
GET_GOT rbx
push rsi
push rdi
; end prolog
;arg(3), arg(4) not used
; from top
mov rsi, arg(2) ;src;
movsxd rax, dword ptr arg(3) ;src_stride;
sub rsi, rax
mov rsi, arg(2) ;above;
pxor mm0, mm0
movq mm1, [rsi]
psadbw mm1, mm0
@@ -245,22 +246,24 @@ sym(vp8_intra_pred_uv_dctop_mmx2):
;void vp8_intra_pred_uv_dcleft_mmx2(
; unsigned char *dst,
; int dst_stride
; unsigned char *src,
; int src_stride,
; unsigned char *above,
; unsigned char *left,
; int left_stride,
; )
global sym(vp8_intra_pred_uv_dcleft_mmx2)
sym(vp8_intra_pred_uv_dcleft_mmx2):
push rbp
mov rbp, rsp
SHADOW_ARGS_TO_STACK 4
SHADOW_ARGS_TO_STACK 5
push rsi
push rdi
; end prolog
;arg(2) not used
; from left
mov rsi, arg(2) ;src;
movsxd rax, dword ptr arg(3) ;src_stride;
dec rsi
mov rsi, arg(3) ;left;
movsxd rax, dword ptr arg(4) ;left_stride;
lea rdi, [rax*3]
movzx ecx, byte [rsi]
movzx edx, byte [rsi+rax]
@@ -310,17 +313,20 @@ sym(vp8_intra_pred_uv_dcleft_mmx2):
;void vp8_intra_pred_uv_dc128_mmx(
; unsigned char *dst,
; int dst_stride
; unsigned char *src,
; int src_stride,
; unsigned char *above,
; unsigned char *left,
; int left_stride,
; )
global sym(vp8_intra_pred_uv_dc128_mmx)
sym(vp8_intra_pred_uv_dc128_mmx):
push rbp
mov rbp, rsp
SHADOW_ARGS_TO_STACK 4
SHADOW_ARGS_TO_STACK 5
GET_GOT rbx
; end prolog
;arg(2), arg(3), arg(4) not used
; write out
movq mm1, [GLOBAL(dc_128)]
mov rax, arg(0) ;dst;
@@ -346,15 +352,16 @@ sym(vp8_intra_pred_uv_dc128_mmx):
;void vp8_intra_pred_uv_tm_sse2(
; unsigned char *dst,
; int dst_stride
; unsigned char *src,
; int src_stride,
; unsigned char *above,
; unsigned char *left,
; int left_stride,
; )
%macro vp8_intra_pred_uv_tm 1
global sym(vp8_intra_pred_uv_tm_%1)
sym(vp8_intra_pred_uv_tm_%1):
push rbp
mov rbp, rsp
SHADOW_ARGS_TO_STACK 4
SHADOW_ARGS_TO_STACK 5
GET_GOT rbx
push rsi
push rdi
@@ -362,9 +369,8 @@ sym(vp8_intra_pred_uv_tm_%1):
; read top row
mov edx, 4
mov rsi, arg(2) ;src;
movsxd rax, dword ptr arg(3) ;src_stride;
sub rsi, rax
mov rsi, arg(2) ;above
movsxd rax, dword ptr arg(4) ;left_stride;
pxor xmm0, xmm0
%ifidn %1, ssse3
movdqa xmm2, [GLOBAL(dc_1024)]
@@ -374,7 +380,7 @@ sym(vp8_intra_pred_uv_tm_%1):
; set up left ptrs and subtract topleft
movd xmm3, [rsi-1]
lea rsi, [rsi+rax-1]
mov rsi, arg(3) ;left;
%ifidn %1, sse2
punpcklbw xmm3, xmm0
pshuflw xmm3, xmm3, 0x0
@@ -427,20 +433,22 @@ vp8_intra_pred_uv_tm ssse3
;void vp8_intra_pred_uv_ve_mmx(
; unsigned char *dst,
; int dst_stride
; unsigned char *src,
; int src_stride,
; unsigned char *above,
; unsigned char *left,
; int left_stride,
; )
global sym(vp8_intra_pred_uv_ve_mmx)
sym(vp8_intra_pred_uv_ve_mmx):
push rbp
mov rbp, rsp
SHADOW_ARGS_TO_STACK 4
SHADOW_ARGS_TO_STACK 5
; end prolog
; arg(3), arg(4) not used
; read from top
mov rax, arg(2) ;src;
movsxd rdx, dword ptr arg(3) ;src_stride;
sub rax, rdx
movq mm1, [rax]
; write out
@@ -466,15 +474,16 @@ sym(vp8_intra_pred_uv_ve_mmx):
;void vp8_intra_pred_uv_ho_mmx2(
; unsigned char *dst,
; int dst_stride
; unsigned char *src,
; int src_stride,
; unsigned char *above,
; unsigned char *left,
; int left_stride
; )
%macro vp8_intra_pred_uv_ho 1
global sym(vp8_intra_pred_uv_ho_%1)
sym(vp8_intra_pred_uv_ho_%1):
push rbp
mov rbp, rsp
SHADOW_ARGS_TO_STACK 4
SHADOW_ARGS_TO_STACK 5
push rsi
push rdi
%ifidn %1, ssse3
@@ -485,12 +494,14 @@ sym(vp8_intra_pred_uv_ho_%1):
%endif
; end prolog
;arg(2) not used
; read from left and write out
%ifidn %1, mmx2
mov edx, 4
%endif
mov rsi, arg(2) ;src;
movsxd rax, dword ptr arg(3) ;src_stride;
mov rsi, arg(3) ;left
movsxd rax, dword ptr arg(4) ;left_stride;
mov rdi, arg(0) ;dst;
movsxd rcx, dword ptr arg(1) ;dst_stride
%ifidn %1, ssse3
@@ -498,7 +509,7 @@ sym(vp8_intra_pred_uv_ho_%1):
movdqa xmm2, [GLOBAL(dc_00001111)]
lea rbx, [rax*3]
%endif
dec rsi
%ifidn %1, mmx2
.vp8_intra_pred_uv_ho_%1_loop:
movd mm0, [rsi]
@@ -562,38 +573,43 @@ vp8_intra_pred_uv_ho ssse3
;void vp8_intra_pred_y_dc_sse2(
; unsigned char *dst,
; int dst_stride
; unsigned char *src,
; int src_stride,
; unsigned char *above,
; unsigned char *left,
; int left_stride
; )
global sym(vp8_intra_pred_y_dc_sse2)
sym(vp8_intra_pred_y_dc_sse2):
push rbp
mov rbp, rsp
SHADOW_ARGS_TO_STACK 4
SHADOW_ARGS_TO_STACK 5
push rsi
push rdi
; end prolog
; from top
mov rsi, arg(2) ;src;
movsxd rax, dword ptr arg(3) ;src_stride;
sub rsi, rax
mov rdi, arg(2) ;above
mov rsi, arg(3) ;left
movsxd rax, dword ptr arg(4) ;left_stride;
pxor xmm0, xmm0
movdqa xmm1, [rsi]
movdqa xmm1, [rdi]
psadbw xmm1, xmm0
movq xmm2, xmm1
punpckhqdq xmm1, xmm1
paddw xmm1, xmm2
; from left
dec rsi
lea rdi, [rax*3]
movzx ecx, byte [rsi+rax]
movzx ecx, byte [rsi]
movzx edx, byte [rsi+rax]
add ecx, edx
movzx edx, byte [rsi+rax*2]
add ecx, edx
movzx edx, byte [rsi+rdi]
add ecx, edx
lea rsi, [rsi+rax*4]
movzx edx, byte [rsi]
add ecx, edx
movzx edx, byte [rsi+rax]
@@ -603,6 +619,7 @@ sym(vp8_intra_pred_y_dc_sse2):
movzx edx, byte [rsi+rdi]
add ecx, edx
lea rsi, [rsi+rax*4]
movzx edx, byte [rsi]
add ecx, edx
movzx edx, byte [rsi+rax]
@@ -612,6 +629,7 @@ sym(vp8_intra_pred_y_dc_sse2):
movzx edx, byte [rsi+rdi]
add ecx, edx
lea rsi, [rsi+rax*4]
movzx edx, byte [rsi]
add ecx, edx
movzx edx, byte [rsi+rax]
@@ -620,8 +638,6 @@ sym(vp8_intra_pred_y_dc_sse2):
add ecx, edx
movzx edx, byte [rsi+rdi]
add ecx, edx
movzx edx, byte [rsi+rax*4]
add ecx, edx
; add up
pextrw edx, xmm1, 0x0
@@ -663,22 +679,23 @@ sym(vp8_intra_pred_y_dc_sse2):
;void vp8_intra_pred_y_dctop_sse2(
; unsigned char *dst,
; int dst_stride
; unsigned char *src,
; int src_stride,
; unsigned char *above,
; unsigned char *left,
; int left_stride
; )
global sym(vp8_intra_pred_y_dctop_sse2)
sym(vp8_intra_pred_y_dctop_sse2):
push rbp
mov rbp, rsp
SHADOW_ARGS_TO_STACK 4
SHADOW_ARGS_TO_STACK 5
push rsi
GET_GOT rbx
; end prolog
;arg(3), arg(4) not used
; from top
mov rcx, arg(2) ;src;
movsxd rax, dword ptr arg(3) ;src_stride;
sub rcx, rax
mov rcx, arg(2) ;above;
pxor xmm0, xmm0
movdqa xmm1, [rcx]
psadbw xmm1, xmm0
@@ -724,22 +741,25 @@ sym(vp8_intra_pred_y_dctop_sse2):
;void vp8_intra_pred_y_dcleft_sse2(
; unsigned char *dst,
; int dst_stride
; unsigned char *src,
; int src_stride,
; unsigned char *above,
; unsigned char *left,
; int left_stride
; )
global sym(vp8_intra_pred_y_dcleft_sse2)
sym(vp8_intra_pred_y_dcleft_sse2):
push rbp
mov rbp, rsp
SHADOW_ARGS_TO_STACK 4
SHADOW_ARGS_TO_STACK 5
push rsi
push rdi
; end prolog
;arg(2) not used
; from left
mov rsi, arg(2) ;src;
movsxd rax, dword ptr arg(3) ;src_stride;
dec rsi
mov rsi, arg(3) ;left;
movsxd rax, dword ptr arg(4) ;left_stride;
lea rdi, [rax*3]
movzx ecx, byte [rsi]
movzx edx, byte [rsi+rax]
@@ -814,18 +834,21 @@ sym(vp8_intra_pred_y_dcleft_sse2):
;void vp8_intra_pred_y_dc128_sse2(
; unsigned char *dst,
; int dst_stride
; unsigned char *src,
; int src_stride,
; unsigned char *above,
; unsigned char *left,
; int left_stride
; )
global sym(vp8_intra_pred_y_dc128_sse2)
sym(vp8_intra_pred_y_dc128_sse2):
push rbp
mov rbp, rsp
SHADOW_ARGS_TO_STACK 4
SHADOW_ARGS_TO_STACK 5
push rsi
GET_GOT rbx
; end prolog
;arg(2), arg(3), arg(4) not used
; write out
mov rsi, 2
movdqa xmm1, [GLOBAL(dc_128)]
@@ -857,15 +880,16 @@ sym(vp8_intra_pred_y_dc128_sse2):
;void vp8_intra_pred_y_tm_sse2(
; unsigned char *dst,
; int dst_stride
; unsigned char *src,
; int src_stride,
; unsigned char *above,
; unsigned char *left,
; int left_stride
; )
%macro vp8_intra_pred_y_tm 1
global sym(vp8_intra_pred_y_tm_%1)
sym(vp8_intra_pred_y_tm_%1):
push rbp
mov rbp, rsp
SHADOW_ARGS_TO_STACK 4
SHADOW_ARGS_TO_STACK 5
push rsi
push rdi
GET_GOT rbx
@@ -873,9 +897,8 @@ sym(vp8_intra_pred_y_tm_%1):
; read top row
mov edx, 8
mov rsi, arg(2) ;src;
movsxd rax, dword ptr arg(3) ;src_stride;
sub rsi, rax
mov rsi, arg(2) ;above
movsxd rax, dword ptr arg(4) ;left_stride;
pxor xmm0, xmm0
%ifidn %1, ssse3
movdqa xmm3, [GLOBAL(dc_1024)]
@@ -887,7 +910,7 @@ sym(vp8_intra_pred_y_tm_%1):
; set up left ptrs and subtract topleft
movd xmm4, [rsi-1]
lea rsi, [rsi+rax-1]
mov rsi, arg(3) ;left
%ifidn %1, sse2
punpcklbw xmm4, xmm0
pshuflw xmm4, xmm4, 0x0
@@ -945,27 +968,29 @@ vp8_intra_pred_y_tm ssse3
;void vp8_intra_pred_y_ve_sse2(
; unsigned char *dst,
; int dst_stride
; unsigned char *src,
; int src_stride,
; unsigned char *above,
; unsigned char *left,
; int left_stride
; )
global sym(vp8_intra_pred_y_ve_sse2)
sym(vp8_intra_pred_y_ve_sse2):
push rbp
mov rbp, rsp
SHADOW_ARGS_TO_STACK 4
SHADOW_ARGS_TO_STACK 5
push rsi
; end prolog
;arg(3), arg(4) not used
mov rax, arg(2) ;above;
mov rsi, 2
movsxd rdx, dword ptr arg(1) ;dst_stride
; read from top
mov rax, arg(2) ;src;
movsxd rdx, dword ptr arg(3) ;src_stride;
sub rax, rdx
movdqa xmm1, [rax]
; write out
mov rsi, 2
mov rax, arg(0) ;dst;
movsxd rdx, dword ptr arg(1) ;dst_stride
lea rcx, [rdx*3]
.label
@@ -991,25 +1016,27 @@ sym(vp8_intra_pred_y_ve_sse2):
;void vp8_intra_pred_y_ho_sse2(
; unsigned char *dst,
; int dst_stride
; unsigned char *src,
; int src_stride,
; unsigned char *above,
; unsigned char *left,
; int left_stride,
; )
global sym(vp8_intra_pred_y_ho_sse2)
sym(vp8_intra_pred_y_ho_sse2):
push rbp
mov rbp, rsp
SHADOW_ARGS_TO_STACK 4
SHADOW_ARGS_TO_STACK 5
push rsi
push rdi
; end prolog
;arg(2) not used
; read from left and write out
mov edx, 8
mov rsi, arg(2) ;src;
movsxd rax, dword ptr arg(3) ;src_stride;
mov rsi, arg(3) ;left;
movsxd rax, dword ptr arg(4) ;left_stride;
mov rdi, arg(0) ;dst;
movsxd rcx, dword ptr arg(1) ;dst_stride
dec rsi
vp8_intra_pred_y_ho_sse2_loop:
movd xmm0, [rsi]


@@ -15,7 +15,8 @@
#define build_intra_predictors_mbuv_prototype(sym) \
void sym(unsigned char *dst, int dst_stride, \
const unsigned char *src, int src_stride)
const unsigned char *above, \
const unsigned char *left, int left_stride)
typedef build_intra_predictors_mbuv_prototype((*build_intra_predictors_mbuv_fn_t));
extern build_intra_predictors_mbuv_prototype(vp8_intra_pred_uv_dc_mmx2);
@@ -29,15 +30,19 @@ extern build_intra_predictors_mbuv_prototype(vp8_intra_pred_uv_tm_sse2);
extern build_intra_predictors_mbuv_prototype(vp8_intra_pred_uv_tm_ssse3);
static void vp8_build_intra_predictors_mbuv_x86(MACROBLOCKD *x,
unsigned char * uabove_row,
unsigned char * vabove_row,
unsigned char *dst_u,
unsigned char *dst_v,
int dst_stride,
unsigned char * uleft,
unsigned char * vleft,
int left_stride,
build_intra_predictors_mbuv_fn_t tm_func,
build_intra_predictors_mbuv_fn_t ho_func)
{
int mode = x->mode_info_context->mbmi.uv_mode;
build_intra_predictors_mbuv_fn_t fn;
int src_stride = x->dst.uv_stride;
switch (mode) {
case V_PRED: fn = vp8_intra_pred_uv_ve_mmx; break;
@@ -59,59 +64,78 @@ static void vp8_build_intra_predictors_mbuv_x86(MACROBLOCKD *x,
default: return;
}
fn(dst_u, dst_stride, x->dst.u_buffer, src_stride);
fn(dst_v, dst_stride, x->dst.v_buffer, src_stride);
fn(dst_u, dst_stride, uabove_row, uleft, left_stride);
fn(dst_v, dst_stride, vabove_row, vleft, left_stride);
}
void vp8_build_intra_predictors_mbuv_sse2(MACROBLOCKD *x)
void vp8_build_intra_predictors_mbuv_s_sse2(MACROBLOCKD *x,
unsigned char * uabove_row,
unsigned char * vabove_row,
unsigned char * uleft,
unsigned char * vleft,
int left_stride,
unsigned char * upred_ptr,
unsigned char * vpred_ptr,
int pred_stride)
{
vp8_build_intra_predictors_mbuv_x86(x, &x->predictor[256],
&x->predictor[320], 8,
vp8_build_intra_predictors_mbuv_x86(x,
uabove_row, vabove_row,
upred_ptr,
vpred_ptr, pred_stride,
uleft,
vleft,
left_stride,
vp8_intra_pred_uv_tm_sse2,
vp8_intra_pred_uv_ho_mmx2);
}
void vp8_build_intra_predictors_mbuv_ssse3(MACROBLOCKD *x)
void vp8_build_intra_predictors_mbuv_s_ssse3(MACROBLOCKD *x,
unsigned char * uabove_row,
unsigned char * vabove_row,
unsigned char * uleft,
unsigned char * vleft,
int left_stride,
unsigned char * upred_ptr,
unsigned char * vpred_ptr,
int pred_stride)
{
vp8_build_intra_predictors_mbuv_x86(x, &x->predictor[256],
&x->predictor[320], 8,
vp8_build_intra_predictors_mbuv_x86(x,
uabove_row, vabove_row,
upred_ptr,
vpred_ptr, pred_stride,
uleft,
vleft,
left_stride,
vp8_intra_pred_uv_tm_ssse3,
vp8_intra_pred_uv_ho_ssse3);
}
void vp8_build_intra_predictors_mbuv_s_sse2(MACROBLOCKD *x)
{
vp8_build_intra_predictors_mbuv_x86(x, x->dst.u_buffer,
x->dst.v_buffer, x->dst.uv_stride,
vp8_intra_pred_uv_tm_sse2,
vp8_intra_pred_uv_ho_mmx2);
}
#define build_intra_predictors_mby_prototype(sym) \
void sym(unsigned char *dst, int dst_stride, \
const unsigned char *above, \
const unsigned char *left, int left_stride)
typedef build_intra_predictors_mby_prototype((*build_intra_predictors_mby_fn_t));
void vp8_build_intra_predictors_mbuv_s_ssse3(MACROBLOCKD *x)
{
vp8_build_intra_predictors_mbuv_x86(x, x->dst.u_buffer,
x->dst.v_buffer, x->dst.uv_stride,
vp8_intra_pred_uv_tm_ssse3,
vp8_intra_pred_uv_ho_ssse3);
}
extern build_intra_predictors_mbuv_prototype(vp8_intra_pred_y_dc_sse2);
extern build_intra_predictors_mbuv_prototype(vp8_intra_pred_y_dctop_sse2);
extern build_intra_predictors_mbuv_prototype(vp8_intra_pred_y_dcleft_sse2);
extern build_intra_predictors_mbuv_prototype(vp8_intra_pred_y_dc128_sse2);
extern build_intra_predictors_mbuv_prototype(vp8_intra_pred_y_ho_sse2);
extern build_intra_predictors_mbuv_prototype(vp8_intra_pred_y_ve_sse2);
extern build_intra_predictors_mbuv_prototype(vp8_intra_pred_y_tm_sse2);
extern build_intra_predictors_mbuv_prototype(vp8_intra_pred_y_tm_ssse3);
extern build_intra_predictors_mby_prototype(vp8_intra_pred_y_dc_sse2);
extern build_intra_predictors_mby_prototype(vp8_intra_pred_y_dctop_sse2);
extern build_intra_predictors_mby_prototype(vp8_intra_pred_y_dcleft_sse2);
extern build_intra_predictors_mby_prototype(vp8_intra_pred_y_dc128_sse2);
extern build_intra_predictors_mby_prototype(vp8_intra_pred_y_ho_sse2);
extern build_intra_predictors_mby_prototype(vp8_intra_pred_y_ve_sse2);
extern build_intra_predictors_mby_prototype(vp8_intra_pred_y_tm_sse2);
extern build_intra_predictors_mby_prototype(vp8_intra_pred_y_tm_ssse3);
static void vp8_build_intra_predictors_mby_x86(MACROBLOCKD *x,
unsigned char * yabove_row,
unsigned char *dst_y,
int dst_stride,
build_intra_predictors_mbuv_fn_t tm_func)
unsigned char * yleft,
int left_stride,
build_intra_predictors_mby_fn_t tm_func)
{
int mode = x->mode_info_context->mbmi.mode;
build_intra_predictors_mbuv_fn_t fn;
int src_stride = x->dst.y_stride;
switch (mode) {
case V_PRED: fn = vp8_intra_pred_y_ve_sse2; break;
case H_PRED: fn = vp8_intra_pred_y_ho_sse2; break;
@@ -132,31 +156,31 @@ static void vp8_build_intra_predictors_mby_x86(MACROBLOCKD *x,
default: return;
}
fn(dst_y, dst_stride, x->dst.y_buffer, src_stride);
fn(dst_y, dst_stride, yabove_row, yleft, left_stride);
return;
}
void vp8_build_intra_predictors_mby_sse2(MACROBLOCKD *x)
void vp8_build_intra_predictors_mby_s_sse2(MACROBLOCKD *x,
unsigned char * yabove_row,
unsigned char * yleft,
int left_stride,
unsigned char * ypred_ptr,
int y_stride)
{
vp8_build_intra_predictors_mby_x86(x, x->predictor, 16,
vp8_build_intra_predictors_mby_x86(x, yabove_row, ypred_ptr,
y_stride, yleft, left_stride,
vp8_intra_pred_y_tm_sse2);
}
void vp8_build_intra_predictors_mby_ssse3(MACROBLOCKD *x)
void vp8_build_intra_predictors_mby_s_ssse3(MACROBLOCKD *x,
unsigned char * yabove_row,
unsigned char * yleft,
int left_stride,
unsigned char * ypred_ptr,
int y_stride)
{
vp8_build_intra_predictors_mby_x86(x, x->predictor, 16,
vp8_intra_pred_y_tm_ssse3);
}
void vp8_build_intra_predictors_mby_s_sse2(MACROBLOCKD *x)
{
vp8_build_intra_predictors_mby_x86(x, x->dst.y_buffer, x->dst.y_stride,
vp8_intra_pred_y_tm_sse2);
}
void vp8_build_intra_predictors_mby_s_ssse3(MACROBLOCKD *x)
{
vp8_build_intra_predictors_mby_x86(x, x->dst.y_buffer, x->dst.y_stride,
vp8_build_intra_predictors_mby_x86(x, yabove_row, ypred_ptr,
y_stride, yleft, left_stride,
vp8_intra_pred_y_tm_ssse3);
}


@@ -89,7 +89,7 @@ sym(vp8_sad16x16_wmt):
; int src_stride,
; unsigned char *ref_ptr,
; int ref_stride,
; int max_err)
; int max_sad)
global sym(vp8_sad8x16_wmt)
sym(vp8_sad8x16_wmt):
push rbp


@@ -19,7 +19,7 @@
%define end_ptr rcx
%define ret_var rbx
%define result_ptr arg(4)
%define max_err arg(4)
%define max_sad arg(4)
%define height dword ptr arg(4)
push rbp
mov rbp, rsp
@@ -42,7 +42,7 @@
%define end_ptr r10
%define ret_var r11
%define result_ptr [rsp+xmm_stack_space+8+4*8]
%define max_err [rsp+xmm_stack_space+8+4*8]
%define max_sad [rsp+xmm_stack_space+8+4*8]
%define height dword ptr [rsp+xmm_stack_space+8+4*8]
%else
%define src_ptr rdi
@@ -52,7 +52,7 @@
%define end_ptr r9
%define ret_var r10
%define result_ptr r8
%define max_err r8
%define max_sad r8
%define height r8
%endif
%endif
@@ -67,7 +67,7 @@
%define end_ptr
%define ret_var
%define result_ptr
%define max_err
%define max_sad
%define height
%if ABI_IS_32BIT
@@ -587,7 +587,7 @@ sym(vp8_sad4x4x3_sse3):
; int src_stride,
; unsigned char *ref_ptr,
; int ref_stride,
; int max_err)
; int max_sad)
;%define lddqu movdqu
global sym(vp8_sad16x16_sse3)
sym(vp8_sad16x16_sse3):


@@ -9,7 +9,7 @@
*/
#include "vpx_config.h"
#include "vp8/encoder/variance.h"
#include "vp8/common/variance.h"
#include "vp8/common/pragmas.h"
#include "vpx_ports/mem.h"
#include "vp8/common/x86/filter_x86.h"


@@ -9,7 +9,7 @@
*/
#include "vpx_config.h"
#include "vp8/encoder/variance.h"
#include "vp8/common/variance.h"
#include "vp8/common/pragmas.h"
#include "vpx_ports/mem.h"
#include "vp8/common/x86/filter_x86.h"


@@ -9,7 +9,7 @@
*/
#include "vpx_config.h"
#include "vp8/encoder/variance.h"
#include "vp8/common/variance.h"
#include "vp8/common/pragmas.h"
#include "vpx_ports/mem.h"


@@ -57,6 +57,7 @@ static void read_kf_modes(VP8D_COMP *pbi, MODE_INFO *mi)
if (mi->mbmi.mode == B_PRED)
{
int i = 0;
mi->mbmi.is_4x4 = 1;
do
{
@@ -138,30 +139,6 @@ static void read_mvcontexts(vp8_reader *bc, MV_CONTEXT *mvc)
while (++i < 2);
}
static int_mv sub_mv_ref(vp8_reader *bc, const vp8_prob *p, int_mv abovemv,
int_mv leftmv, int_mv best_mv, const MV_CONTEXT * mvc)
{
int_mv blockmv;
blockmv.as_int = 0;
if( vp8_read(bc, p[0]) )
{
if( vp8_read(bc, p[1]) )
{
if( vp8_read(bc, p[2]) )
{
read_mv(bc, &blockmv.as_mv, (const MV_CONTEXT *) mvc);
blockmv.as_mv.row += best_mv.as_mv.row;
blockmv.as_mv.col += best_mv.as_mv.col;
}
return blockmv;
}
else
return abovemv;
}
else
return leftmv;
}
static const unsigned char mbsplit_fill_count[4] = {8, 8, 4, 1};
static const unsigned char mbsplit_fill_offset[4][16] = {
{ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15},
@@ -171,8 +148,6 @@ static const unsigned char mbsplit_fill_offset[4][16] = {
};
static void mb_mode_mv_init(VP8D_COMP *pbi)
{
vp8_reader *const bc = & pbi->bc;
@@ -235,11 +210,11 @@ const vp8_prob vp8_sub_mv_ref_prob3 [8][VP8_SUBMVREFS-1] =
};
static
const vp8_prob * get_sub_mv_ref_prob(const int_mv *l, const int_mv *a)
const vp8_prob * get_sub_mv_ref_prob(const int left, const int above)
{
int lez = (l->as_int == 0);
int aez = (a->as_int == 0);
int lea = (l->as_int == a->as_int);
int lez = (left == 0);
int aez = (above == 0);
int lea = (left == above);
const vp8_prob * prob;
prob = vp8_sub_mv_ref_prob3[(aez << 2) |
@@ -250,7 +225,8 @@ const vp8_prob * get_sub_mv_ref_prob(const int_mv *l, const int_mv *a)
}
static void decode_split_mv(vp8_reader *const bc, MODE_INFO *mi,
MB_MODE_INFO *mbmi, int mis, int_mv best_mv,
const MODE_INFO *left_mb, const MODE_INFO *above_mb,
MB_MODE_INFO *mbmi, int_mv best_mv,
MV_CONTEXT *const mvc, int mb_to_left_edge,
int mb_to_right_edge, int mb_to_top_edge,
int mb_to_bottom_edge)
@@ -273,7 +249,6 @@ static void decode_split_mv(vp8_reader *const bc, MODE_INFO *mi,
}
}
mbmi->need_to_clamp_mvs = 0;
do /* for each subset j */
{
int_mv leftmv, abovemv;
@@ -283,18 +258,60 @@ static void decode_split_mv(vp8_reader *const bc, MODE_INFO *mi,
const vp8_prob *prob;
k = vp8_mbsplit_offset[s][j];
leftmv.as_int = left_block_mv(mi, k);
abovemv.as_int = above_block_mv(mi, k, mis);
if (!(k & 3))
{
/* On L edge, get from MB to left of us */
if(left_mb->mbmi.mode != SPLITMV)
leftmv.as_int = left_mb->mbmi.mv.as_int;
else
leftmv.as_int = (left_mb->bmi + k + 4 - 1)->mv.as_int;
}
else
leftmv.as_int = (mi->bmi + k - 1)->mv.as_int;
prob = get_sub_mv_ref_prob(&leftmv, &abovemv);
if (!(k >> 2))
{
/* On top edge, get from MB above us */
if(above_mb->mbmi.mode != SPLITMV)
abovemv.as_int = above_mb->mbmi.mv.as_int;
else
abovemv.as_int = (above_mb->bmi + k + 16 - 4)->mv.as_int;
}
else
abovemv.as_int = (mi->bmi + k - 4)->mv.as_int;
blockmv = sub_mv_ref(bc, prob, abovemv, leftmv, best_mv, mvc);
prob = get_sub_mv_ref_prob(leftmv.as_int, abovemv.as_int);
mbmi->need_to_clamp_mvs |= vp8_check_mv_bounds(&blockmv,
mb_to_left_edge,
mb_to_right_edge,
mb_to_top_edge,
mb_to_bottom_edge);
if( vp8_read(bc, prob[0]) )
{
if( vp8_read(bc, prob[1]) )
{
blockmv.as_int = 0;
if( vp8_read(bc, prob[2]) )
{
blockmv.as_mv.row = read_mvcomponent(bc, &mvc[0]) << 1;
blockmv.as_mv.row += best_mv.as_mv.row;
blockmv.as_mv.col = read_mvcomponent(bc, &mvc[1]) << 1;
blockmv.as_mv.col += best_mv.as_mv.col;
mbmi->need_to_clamp_mvs |= vp8_check_mv_bounds(&blockmv,
mb_to_left_edge,
mb_to_right_edge,
mb_to_top_edge,
mb_to_bottom_edge);
}
}
else
{
blockmv.as_int = abovemv.as_int;
mbmi->need_to_clamp_mvs |= above_mb->mbmi.need_to_clamp_mvs;
}
}
else
{
blockmv.as_int = leftmv.as_int;
mbmi->need_to_clamp_mvs |= left_mb->mbmi.need_to_clamp_mvs;
}
{
/* Fill (uniform) modes, mvs of jth subset.
@@ -318,15 +335,13 @@ static void decode_split_mv(vp8_reader *const bc, MODE_INFO *mi,
mbmi->partitioning = s;
}
static void read_mb_modes_mv(VP8D_COMP *pbi, MODE_INFO *mi, MB_MODE_INFO *mbmi,
int mb_row, int mb_col)
static void read_mb_modes_mv(VP8D_COMP *pbi, MODE_INFO *mi, MB_MODE_INFO *mbmi)
{
vp8_reader *const bc = & pbi->bc;
mbmi->ref_frame = (MV_REFERENCE_FRAME) vp8_read(bc, pbi->prob_intra);
if (mbmi->ref_frame) /* inter MB */
{
enum {CNT_INTRA, CNT_NEAREST, CNT_NEAR, CNT_SPLITMV};
vp8_prob mv_ref_p [VP8_MVREFS-1];
int cnt[4];
int *cntx = cnt;
int_mv near_mvs[4];
@@ -335,9 +350,7 @@ static void read_mb_modes_mv(VP8D_COMP *pbi, MODE_INFO *mi, MB_MODE_INFO *mbmi,
const MODE_INFO *above = mi - mis;
const MODE_INFO *left = mi - 1;
const MODE_INFO *aboveleft = above - 1;
MV_CONTEXT *const mvc = pbi->common.fc.mvc;
int *ref_frame_sign_bias = pbi->common.ref_frame_sign_bias;
int propogate_mv_for_ec = 0;
mbmi->need_to_clamp_mvs = 0;
@@ -411,36 +424,13 @@ static void read_mb_modes_mv(VP8D_COMP *pbi, MODE_INFO *mi, MB_MODE_INFO *mbmi,
cnt[CNT_INTRA] += 1;
}
mv_ref_p[0] = vp8_mode_contexts [cnt[CNT_INTRA]] [0];
if( vp8_read(bc, mv_ref_p[0]) )
if( vp8_read(bc, vp8_mode_contexts [cnt[CNT_INTRA]] [0]) )
{
int mb_to_left_edge;
int mb_to_right_edge;
/* Distance of Mb to the various image edges.
* These specified to 8th pel as they are always compared to MV
* values that are in 1/8th pel units
*/
pbi->mb.mb_to_left_edge =
mb_to_left_edge = -((mb_col * 16) << 3);
mb_to_left_edge -= LEFT_TOP_MARGIN;
pbi->mb.mb_to_right_edge =
mb_to_right_edge = ((pbi->common.mb_cols - 1 - mb_col) * 16) << 3;
mb_to_right_edge += RIGHT_BOTTOM_MARGIN;
/* If we have three distinct MV's ... */
if (cnt[CNT_SPLITMV])
{
/* See if above-left MV can be merged with NEAREST */
if (nmv->as_int == near_mvs[CNT_NEAREST].as_int)
cnt[CNT_NEAREST] += 1;
}
cnt[CNT_SPLITMV] = ((above->mbmi.mode == SPLITMV)
+ (left->mbmi.mode == SPLITMV)) * 2
+ (aboveleft->mbmi.mode == SPLITMV);
/* See if above-left MV can be merged with NEAREST */
cnt[CNT_NEAREST] += ( (cnt[CNT_SPLITMV] > 0) &
(nmv->as_int == near_mvs[CNT_NEAREST].as_int));
/* Swap near and nearest if necessary */
if (cnt[CNT_NEAR] > cnt[CNT_NEAREST])
@@ -454,48 +444,56 @@ static void read_mb_modes_mv(VP8D_COMP *pbi, MODE_INFO *mi, MB_MODE_INFO *mbmi,
near_mvs[CNT_NEAR].as_int = tmp;
}
mv_ref_p[1] = vp8_mode_contexts [cnt[CNT_NEAREST]] [1];
if( vp8_read(bc, mv_ref_p[1]) )
if( vp8_read(bc, vp8_mode_contexts [cnt[CNT_NEAREST]] [1]) )
{
mv_ref_p[2] = vp8_mode_contexts [cnt[CNT_NEAR]] [2];
if( vp8_read(bc, mv_ref_p[2]) )
if( vp8_read(bc, vp8_mode_contexts [cnt[CNT_NEAR]] [2]) )
{
int mb_to_top_edge;
int mb_to_bottom_edge;
int mb_to_left_edge;
int mb_to_right_edge;
MV_CONTEXT *const mvc = pbi->common.fc.mvc;
int near_index;
mb_to_top_edge = pbi->mb.mb_to_top_edge;
mb_to_bottom_edge = pbi->mb.mb_to_bottom_edge;
mb_to_top_edge -= LEFT_TOP_MARGIN;
mb_to_bottom_edge += RIGHT_BOTTOM_MARGIN;
mb_to_right_edge = pbi->mb.mb_to_right_edge;
mb_to_right_edge += RIGHT_BOTTOM_MARGIN;
mb_to_left_edge = pbi->mb.mb_to_left_edge;
mb_to_left_edge -= LEFT_TOP_MARGIN;
/* Use near_mvs[0] to store the "best" MV */
if (cnt[CNT_NEAREST] >= cnt[CNT_INTRA])
near_mvs[CNT_INTRA] = near_mvs[CNT_NEAREST];
near_index = CNT_INTRA +
(cnt[CNT_NEAREST] >= cnt[CNT_INTRA]);
mv_ref_p[3] = vp8_mode_contexts [cnt[CNT_SPLITMV]] [3];
vp8_clamp_mv2(&near_mvs[near_index], &pbi->mb);
vp8_clamp_mv2(&near_mvs[CNT_INTRA], &pbi->mb);
cnt[CNT_SPLITMV] = ((above->mbmi.mode == SPLITMV)
+ (left->mbmi.mode == SPLITMV)) * 2
+ (aboveleft->mbmi.mode == SPLITMV);
if( vp8_read(bc, mv_ref_p[3]) )
if( vp8_read(bc, vp8_mode_contexts [cnt[CNT_SPLITMV]] [3]) )
{
decode_split_mv(bc, mi,
mbmi, mis,
near_mvs[CNT_INTRA],
decode_split_mv(bc, mi, left, above,
mbmi,
near_mvs[near_index],
mvc, mb_to_left_edge,
mb_to_right_edge,
mb_to_top_edge,
mb_to_bottom_edge);
mbmi->mv.as_int = mi->bmi[15].mv.as_int;
mbmi->mode = SPLITMV;
mbmi->is_4x4 = 1;
}
else
{
int_mv *const mbmi_mv = & mbmi->mv;
read_mv(bc, &mbmi_mv->as_mv, (const MV_CONTEXT *) mvc);
mbmi_mv->as_mv.row += near_mvs[CNT_INTRA].as_mv.row;
mbmi_mv->as_mv.col += near_mvs[CNT_INTRA].as_mv.col;
mbmi_mv->as_mv.row += near_mvs[near_index].as_mv.row;
mbmi_mv->as_mv.col += near_mvs[near_index].as_mv.col;
/* Don't need to check this on NEARMV and NEARESTMV
* modes since those modes clamp the MV. The NEWMV mode
@@ -508,7 +506,6 @@ static void read_mb_modes_mv(VP8D_COMP *pbi, MODE_INFO *mi, MB_MODE_INFO *mbmi,
mb_to_top_edge,
mb_to_bottom_edge);
mbmi->mode = NEWMV;
propogate_mv_for_ec = 1;
}
}
else
@@ -516,7 +513,6 @@ static void read_mb_modes_mv(VP8D_COMP *pbi, MODE_INFO *mi, MB_MODE_INFO *mbmi,
mbmi->mode = NEARMV;
vp8_clamp_mv2(&near_mvs[CNT_NEAR], &pbi->mb);
mbmi->mv.as_int = near_mvs[CNT_NEAR].as_int;
propogate_mv_for_ec = 1;
}
}
else
@@ -524,19 +520,16 @@ static void read_mb_modes_mv(VP8D_COMP *pbi, MODE_INFO *mi, MB_MODE_INFO *mbmi,
mbmi->mode = NEARESTMV;
vp8_clamp_mv2(&near_mvs[CNT_NEAREST], &pbi->mb);
mbmi->mv.as_int = near_mvs[CNT_NEAREST].as_int;
propogate_mv_for_ec = 1;
}
}
else {
else
{
mbmi->mode = ZEROMV;
mbmi->mv.as_int = 0;
propogate_mv_for_ec = 1;
}
mbmi->uv_mode = DC_PRED;
#if CONFIG_ERROR_CONCEALMENT
if(pbi->ec_enabled && propogate_mv_for_ec)
if(pbi->ec_enabled && (mbmi->mode != SPLITMV))
{
mi->bmi[ 0].mv.as_int =
mi->bmi[ 1].mv.as_int =
@@ -566,6 +559,7 @@ static void read_mb_modes_mv(VP8D_COMP *pbi, MODE_INFO *mi, MB_MODE_INFO *mbmi,
if ((mbmi->mode = read_ymode(bc, pbi->common.fc.ymode_prob)) == B_PRED)
{
int j = 0;
mbmi->is_4x4 = 1;
do
{
mi->bmi[j].as_mode = read_bmode(bc, pbi->common.fc.bmode_prob);
@@ -594,7 +588,7 @@ static void read_mb_features(vp8_reader *r, MB_MODE_INFO *mi, MACROBLOCKD *x)
}
static void decode_mb_mode_mvs(VP8D_COMP *pbi, MODE_INFO *mi,
MB_MODE_INFO *mbmi, int mb_row, int mb_col)
MB_MODE_INFO *mbmi)
{
/* Read the Macroblock segmentation map if it is being updated explicitly
* this frame (reset to 0 above by default)
@@ -612,10 +606,11 @@ static void decode_mb_mode_mvs(VP8D_COMP *pbi, MODE_INFO *mi,
else
mi->mbmi.mb_skip_coeff = 0;
mi->mbmi.is_4x4 = 0;
if(pbi->common.frame_type == KEY_FRAME)
read_kf_modes(pbi, mi);
else
read_mb_modes_mv(pbi, mi, &mi->mbmi, mb_row, mb_col);
read_mb_modes_mv(pbi, mi, &mi->mbmi);
}
@@ -623,16 +618,20 @@ void vp8_decode_mode_mvs(VP8D_COMP *pbi)
{
MODE_INFO *mi = pbi->common.mi;
int mb_row = -1;
int mb_to_right_edge_start;
mb_mode_mv_init(pbi);
pbi->mb.mb_to_top_edge = 0;
pbi->mb.mb_to_bottom_edge = ((pbi->common.mb_rows - 1) * 16) << 3;
mb_to_right_edge_start = ((pbi->common.mb_cols - 1) * 16) << 3;
while (++mb_row < pbi->common.mb_rows)
{
int mb_col = -1;
pbi->mb.mb_to_top_edge = -((mb_row * 16)) << 3;
pbi->mb.mb_to_bottom_edge =
((pbi->common.mb_rows - 1 - mb_row) * 16) << 3;
pbi->mb.mb_to_left_edge = 0;
pbi->mb.mb_to_right_edge = mb_to_right_edge_start;
while (++mb_col < pbi->common.mb_cols)
{
@@ -640,7 +639,7 @@ void vp8_decode_mode_mvs(VP8D_COMP *pbi)
int mb_num = mb_row * pbi->common.mb_cols + mb_col;
#endif
decode_mb_mode_mvs(pbi, mi, &mi->mbmi, mb_row, mb_col);
decode_mb_mode_mvs(pbi, mi, &mi->mbmi);
#if CONFIG_ERROR_CONCEALMENT
/* look for corruption. set mvs_corrupt_from_mb to the current
@@ -655,10 +654,13 @@ void vp8_decode_mode_mvs(VP8D_COMP *pbi)
}
#endif
pbi->mb.mb_to_left_edge -= (16 << 3);
pbi->mb.mb_to_right_edge -= (16 << 3);
mi++; /* next macroblock */
}
pbi->mb.mb_to_top_edge -= (16 << 3);
pbi->mb.mb_to_bottom_edge -= (16 << 3);
mi++; /* skip left predictor each row */
}
}


@@ -21,7 +21,6 @@
#include "vp8/common/entropymode.h"
#include "vp8/common/quant_common.h"
#include "vpx_scale/vpxscale.h"
#include "vpx_scale/yv12extend.h"
#include "vp8/common/setupintrarecon.h"
#include "decodemv.h"
@@ -54,7 +53,7 @@ void vp8cx_init_de_quantizer(VP8D_COMP *pbi)
}
}
void mb_init_dequantizer(VP8D_COMP *pbi, MACROBLOCKD *xd)
void vp8_mb_init_dequantizer(VP8D_COMP *pbi, MACROBLOCKD *xd)
{
int i;
int QIndex;
@@ -93,13 +92,14 @@ void mb_init_dequantizer(VP8D_COMP *pbi, MACROBLOCKD *xd)
}
}
static void decode_macroblock(VP8D_COMP *pbi, MACROBLOCKD *xd,
unsigned int mb_idx)
{
MB_PREDICTION_MODE mode;
int i;
#if CONFIG_ERROR_CONCEALMENT
int corruption_detected = 0;
#endif
if (xd->mode_info_context->mbmi.mb_skip_coeff)
{
@@ -117,7 +117,7 @@ static void decode_macroblock(VP8D_COMP *pbi, MACROBLOCKD *xd,
mode = xd->mode_info_context->mbmi.mode;
if (xd->segmentation_enabled)
mb_init_dequantizer(pbi, xd);
vp8_mb_init_dequantizer(pbi, xd);
#if CONFIG_ERROR_CONCEALMENT
@@ -152,15 +152,26 @@ static void decode_macroblock(VP8D_COMP *pbi, MACROBLOCKD *xd,
}
#endif
/* do prediction */
if (xd->mode_info_context->mbmi.ref_frame == INTRA_FRAME)
{
vp8_build_intra_predictors_mbuv_s(xd);
vp8_build_intra_predictors_mbuv_s(xd,
xd->recon_above[1],
xd->recon_above[2],
xd->recon_left[1],
xd->recon_left[2],
xd->recon_left_stride[1],
xd->dst.u_buffer, xd->dst.v_buffer,
xd->dst.uv_stride);
if (mode != B_PRED)
{
vp8_build_intra_predictors_mby_s(xd);
vp8_build_intra_predictors_mby_s(xd,
xd->recon_above[0],
xd->recon_left[0],
xd->recon_left_stride[0],
xd->dst.y_buffer,
xd->dst.y_stride);
}
else
{
@@ -172,16 +183,28 @@ static void decode_macroblock(VP8D_COMP *pbi, MACROBLOCKD *xd,
if(xd->mode_info_context->mbmi.mb_skip_coeff)
vpx_memset(xd->eobs, 0, 25);
vp8_intra_prediction_down_copy(xd);
intra_prediction_down_copy(xd, xd->recon_above[0] + 16);
for (i = 0; i < 16; i++)
{
BLOCKD *b = &xd->block[i];
int b_mode = xd->mode_info_context->bmi[i].as_mode;
unsigned char *yabove;
unsigned char *yleft;
int left_stride;
unsigned char top_left;
yabove = base_dst + b->offset - dst_stride;
yleft = base_dst + b->offset - 1;
left_stride = dst_stride;
top_left = yabove[-1];
vp8_intra4x4_predict (base_dst + b->offset, dst_stride, b_mode,
base_dst + b->offset, dst_stride );
// vp8_intra4x4_predict (base_dst + b->offset, dst_stride, b_mode,
// base_dst + b->offset, dst_stride );
vp8_intra4x4_predict_d_c(yabove, yleft, left_stride,
b_mode,
base_dst + b->offset, dst_stride,
top_left);
if (xd->eobs[i])
{
@@ -294,112 +317,171 @@ static int get_delta_q(vp8_reader *bc, int prev, int *q_update)
FILE *vpxlog = 0;
#endif
static void
decode_mb_row(VP8D_COMP *pbi, VP8_COMMON *pc, int mb_row, MACROBLOCKD *xd)
static void decode_mb_rows(VP8D_COMP *pbi)
{
VP8_COMMON *const pc = & pbi->common;
MACROBLOCKD *const xd = & pbi->mb;
int ibc = 0;
int num_part = 1 << pc->multi_token_partition;
int recon_yoffset, recon_uvoffset;
int mb_col;
int ref_fb_idx = pc->lst_fb_idx;
int mb_row, mb_col;
int mb_idx = 0;
int dst_fb_idx = pc->new_fb_idx;
int recon_y_stride = pc->yv12_fb[ref_fb_idx].y_stride;
int recon_uv_stride = pc->yv12_fb[ref_fb_idx].uv_stride;
int recon_y_stride = pc->yv12_fb[dst_fb_idx].y_stride;
int recon_uv_stride = pc->yv12_fb[dst_fb_idx].uv_stride;
vpx_memset(&pc->left_context, 0, sizeof(pc->left_context));
recon_yoffset = mb_row * recon_y_stride * 16;
recon_uvoffset = mb_row * recon_uv_stride * 8;
/* reset above block coeffs */
unsigned char *ref_buffer[MAX_REF_FRAMES][3];
unsigned char *dst_buffer[3];
int i;
int ref_fb_index[MAX_REF_FRAMES];
int ref_fb_corrupted[MAX_REF_FRAMES];
xd->above_context = pc->above_context;
xd->up_available = (mb_row != 0);
ref_fb_corrupted[INTRA_FRAME] = 0;
xd->mb_to_top_edge = -((mb_row * 16)) << 3;
xd->mb_to_bottom_edge = ((pc->mb_rows - 1 - mb_row) * 16) << 3;
ref_fb_index[LAST_FRAME] = pc->lst_fb_idx;
ref_fb_index[GOLDEN_FRAME] = pc->gld_fb_idx;
ref_fb_index[ALTREF_FRAME] = pc->alt_fb_idx;
for (mb_col = 0; mb_col < pc->mb_cols; mb_col++)
for(i = 1; i < MAX_REF_FRAMES; i++)
{
/* Distance of Mb to the various image edges.
* These are specified to 8th pel as they are always compared to values
* that are in 1/8th pel units
*/
xd->mb_to_left_edge = -((mb_col * 16) << 3);
xd->mb_to_right_edge = ((pc->mb_cols - 1 - mb_col) * 16) << 3;
#if CONFIG_ERROR_CONCEALMENT
{
int corrupt_residual = (!pbi->independent_partitions &&
pbi->frame_corrupt_residual) ||
vp8dx_bool_error(xd->current_bc);
if (pbi->ec_active &&
xd->mode_info_context->mbmi.ref_frame == INTRA_FRAME &&
corrupt_residual)
{
/* We have an intra block with corrupt coefficients, better to
* conceal with an inter block. Interpolate MVs from neighboring
* MBs.
*
* Note that for the first mb with corrupt residual in a frame,
* we might not discover that before decoding the residual. That
* happens after this check, and therefore no inter concealment
* will be done.
*/
vp8_interpolate_motion(xd,
mb_row, mb_col,
pc->mb_rows, pc->mb_cols,
pc->mode_info_stride);
}
}
#endif
xd->dst.y_buffer = pc->yv12_fb[dst_fb_idx].y_buffer + recon_yoffset;
xd->dst.u_buffer = pc->yv12_fb[dst_fb_idx].u_buffer + recon_uvoffset;
xd->dst.v_buffer = pc->yv12_fb[dst_fb_idx].v_buffer + recon_uvoffset;
xd->left_available = (mb_col != 0);
/* Select the appropriate reference frame for this MB */
if (xd->mode_info_context->mbmi.ref_frame == LAST_FRAME)
ref_fb_idx = pc->lst_fb_idx;
else if (xd->mode_info_context->mbmi.ref_frame == GOLDEN_FRAME)
ref_fb_idx = pc->gld_fb_idx;
else
ref_fb_idx = pc->alt_fb_idx;
xd->pre.y_buffer = pc->yv12_fb[ref_fb_idx].y_buffer + recon_yoffset;
xd->pre.u_buffer = pc->yv12_fb[ref_fb_idx].u_buffer + recon_uvoffset;
xd->pre.v_buffer = pc->yv12_fb[ref_fb_idx].v_buffer + recon_uvoffset;
if (xd->mode_info_context->mbmi.ref_frame != INTRA_FRAME)
{
/* propagate errors from reference frames */
xd->corrupted |= pc->yv12_fb[ref_fb_idx].corrupted;
}
decode_macroblock(pbi, xd, mb_row * pc->mb_cols + mb_col);
/* check if the boolean decoder has suffered an error */
xd->corrupted |= vp8dx_bool_error(xd->current_bc);
recon_yoffset += 16;
recon_uvoffset += 8;
++xd->mode_info_context; /* next mb */
xd->above_context++;
ref_buffer[i][0] = pc->yv12_fb[ref_fb_index[i]].y_buffer;
ref_buffer[i][1] = pc->yv12_fb[ref_fb_index[i]].u_buffer;
ref_buffer[i][2] = pc->yv12_fb[ref_fb_index[i]].v_buffer;
ref_fb_corrupted[i] = pc->yv12_fb[ref_fb_index[i]].corrupted;
}
/* adjust to the next row of mbs */
vp8_extend_mb_row(
&pc->yv12_fb[dst_fb_idx],
xd->dst.y_buffer + 16, xd->dst.u_buffer + 8, xd->dst.v_buffer + 8
);
dst_buffer[0] = pc->yv12_fb[dst_fb_idx].y_buffer;
dst_buffer[1] = pc->yv12_fb[dst_fb_idx].u_buffer;
dst_buffer[2] = pc->yv12_fb[dst_fb_idx].v_buffer;
++xd->mode_info_context; /* skip prediction column */
xd->up_available = 0;
/* Decode the individual macro block */
for (mb_row = 0; mb_row < pc->mb_rows; mb_row++)
{
if (num_part > 1)
{
xd->current_bc = & pbi->mbc[ibc];
ibc++;
if (ibc == num_part)
ibc = 0;
}
recon_yoffset = mb_row * recon_y_stride * 16;
recon_uvoffset = mb_row * recon_uv_stride * 8;
/* reset contexts */
xd->above_context = pc->above_context;
vpx_memset(xd->left_context, 0, sizeof(ENTROPY_CONTEXT_PLANES));
xd->left_available = 0;
xd->mb_to_top_edge = -((mb_row * 16)) << 3;
xd->mb_to_bottom_edge = ((pc->mb_rows - 1 - mb_row) * 16) << 3;
xd->recon_above[0] = dst_buffer[0] + recon_yoffset;
xd->recon_above[1] = dst_buffer[1] + recon_uvoffset;
xd->recon_above[2] = dst_buffer[2] + recon_uvoffset;
xd->recon_left[0] = xd->recon_above[0] - 1;
xd->recon_left[1] = xd->recon_above[1] - 1;
xd->recon_left[2] = xd->recon_above[2] - 1;
xd->recon_above[0] -= xd->dst.y_stride;
xd->recon_above[1] -= xd->dst.uv_stride;
xd->recon_above[2] -= xd->dst.uv_stride;
//TODO: move to outside row loop
xd->recon_left_stride[0] = xd->dst.y_stride;
xd->recon_left_stride[1] = xd->dst.uv_stride;
for (mb_col = 0; mb_col < pc->mb_cols; mb_col++)
{
/* Distance of Mb to the various image edges.
* These are specified to 8th pel as they are always compared to values
* that are in 1/8th pel units
*/
xd->mb_to_left_edge = -((mb_col * 16) << 3);
xd->mb_to_right_edge = ((pc->mb_cols - 1 - mb_col) * 16) << 3;
#if CONFIG_ERROR_CONCEALMENT
{
int corrupt_residual = (!pbi->independent_partitions &&
pbi->frame_corrupt_residual) ||
vp8dx_bool_error(xd->current_bc);
if (pbi->ec_active &&
xd->mode_info_context->mbmi.ref_frame == INTRA_FRAME &&
corrupt_residual)
{
/* We have an intra block with corrupt coefficients, better to
* conceal with an inter block. Interpolate MVs from neighboring
* MBs.
*
* Note that for the first mb with corrupt residual in a frame,
* we might not discover that before decoding the residual. That
* happens after this check, and therefore no inter concealment
* will be done.
*/
vp8_interpolate_motion(xd,
mb_row, mb_col,
pc->mb_rows, pc->mb_cols,
pc->mode_info_stride);
}
}
#endif
xd->dst.y_buffer = dst_buffer[0] + recon_yoffset;
xd->dst.u_buffer = dst_buffer[1] + recon_uvoffset;
xd->dst.v_buffer = dst_buffer[2] + recon_uvoffset;
xd->pre.y_buffer = ref_buffer[xd->mode_info_context->mbmi.ref_frame][0] + recon_yoffset;
xd->pre.u_buffer = ref_buffer[xd->mode_info_context->mbmi.ref_frame][1] + recon_uvoffset;
xd->pre.v_buffer = ref_buffer[xd->mode_info_context->mbmi.ref_frame][2] + recon_uvoffset;
/* propagate errors from reference frames */
xd->corrupted |= ref_fb_corrupted[xd->mode_info_context->mbmi.ref_frame];
decode_macroblock(pbi, xd, mb_idx);
mb_idx++;
xd->left_available = 1;
/* check if the boolean decoder has suffered an error */
xd->corrupted |= vp8dx_bool_error(xd->current_bc);
xd->recon_above[0] += 16;
xd->recon_above[1] += 8;
xd->recon_above[2] += 8;
xd->recon_left[0] += 16;
xd->recon_left[1] += 8;
xd->recon_left[2] += 8;
recon_yoffset += 16;
recon_uvoffset += 8;
++xd->mode_info_context; /* next mb */
xd->above_context++;
}
/* adjust to the next row of mbs */
vp8_extend_mb_row(
&pc->yv12_fb[dst_fb_idx],
xd->dst.y_buffer + 16, xd->dst.u_buffer + 8, xd->dst.v_buffer + 8
);
++xd->mode_info_context; /* skip prediction column */
xd->up_available = 1;
}
}
static unsigned int read_partition_size(const unsigned char *cx_size)
{
const unsigned int size =
@@ -425,7 +507,7 @@ static unsigned int read_available_partition_size(
{
VP8_COMMON* pc = &pbi->common;
const unsigned char *partition_size_ptr = token_part_sizes + i * 3;
unsigned int partition_size;
unsigned int partition_size = 0;
ptrdiff_t bytes_left = fragment_end - fragment_start;
/* Calculate the length of this partition. The last partition
* size is implicit. If the partition size can't be read, then
@@ -650,7 +732,6 @@ int vp8_decode_frame(VP8D_COMP *pbi)
const unsigned char *data_end = data + pbi->fragment_sizes[0];
ptrdiff_t first_partition_length_in_bytes;
int mb_row;
int i, j, k, l;
const int *const mb_feature_data_bits = vp8_mb_feature_data_bits;
int corrupt_tokens = 0;
@@ -827,6 +908,12 @@ int vp8_decode_frame(VP8D_COMP *pbi)
}
}
}
else
{
/* No segmentation updates on this frame */
xd->update_mb_segmentation_map = 0;
xd->update_mb_segmentation_data = 0;
}
/* Read the loop filter level and type */
pc->filter_type = (LOOPFILTERTYPE) vp8_read_bit(bc);
@@ -893,7 +980,7 @@ int vp8_decode_frame(VP8D_COMP *pbi)
vp8cx_init_de_quantizer(pbi);
/* MB level dequantizer setup */
mb_init_dequantizer(pbi, &pbi->mb);
vp8_mb_init_dequantizer(pbi, &pbi->mb);
}
/* Determine if the golden frame or ARF buffer should be updated and how.
@@ -1040,39 +1127,21 @@ int vp8_decode_frame(VP8D_COMP *pbi)
#endif
vpx_memset(pc->above_context, 0, sizeof(ENTROPY_CONTEXT_PLANES) * pc->mb_cols);
pbi->frame_corrupt_residual = 0;
#if CONFIG_MULTITHREAD
if (pbi->b_multithreaded_rd && pc->multi_token_partition != ONE_PARTITION)
{
int i;
pbi->frame_corrupt_residual = 0;
vp8mt_decode_mb_rows(pbi, xd);
vp8_yv12_extend_frame_borders_ptr(&pc->yv12_fb[pc->new_fb_idx]); /*cm->frame_to_show);*/
vp8_yv12_extend_frame_borders(&pc->yv12_fb[pc->new_fb_idx]); /*cm->frame_to_show);*/
for (i = 0; i < pbi->decoding_thread_count; ++i)
corrupt_tokens |= pbi->mb_row_di[i].mbd.corrupted;
}
else
#endif
{
int ibc = 0;
int num_part = 1 << pc->multi_token_partition;
pbi->frame_corrupt_residual = 0;
/* Decode the individual macro block */
for (mb_row = 0; mb_row < pc->mb_rows; mb_row++)
{
if (num_part > 1)
{
xd->current_bc = & pbi->mbc[ibc];
ibc++;
if (ibc == num_part)
ibc = 0;
}
decode_mb_row(pbi, pc, mb_row, xd);
}
decode_mb_rows(pbi);
corrupt_tokens |= xd->corrupted;
}


@@ -15,370 +15,230 @@
#include "vpx_ports/mem.h"
#include "detokenize.h"
#define BOOL_DATA unsigned char
#define OCB_X PREV_COEF_CONTEXTS * ENTROPY_NODES
DECLARE_ALIGNED(16, static const unsigned char, coef_bands_x[16]) =
{
0 * OCB_X, 1 * OCB_X, 2 * OCB_X, 3 * OCB_X,
6 * OCB_X, 4 * OCB_X, 5 * OCB_X, 6 * OCB_X,
6 * OCB_X, 6 * OCB_X, 6 * OCB_X, 6 * OCB_X,
6 * OCB_X, 6 * OCB_X, 6 * OCB_X, 7 * OCB_X
};
#define EOB_CONTEXT_NODE 0
#define ZERO_CONTEXT_NODE 1
#define ONE_CONTEXT_NODE 2
#define LOW_VAL_CONTEXT_NODE 3
#define TWO_CONTEXT_NODE 4
#define THREE_CONTEXT_NODE 5
#define HIGH_LOW_CONTEXT_NODE 6
#define CAT_ONE_CONTEXT_NODE 7
#define CAT_THREEFOUR_CONTEXT_NODE 8
#define CAT_THREE_CONTEXT_NODE 9
#define CAT_FIVE_CONTEXT_NODE 10
#define CAT1_MIN_VAL 5
#define CAT2_MIN_VAL 7
#define CAT3_MIN_VAL 11
#define CAT4_MIN_VAL 19
#define CAT5_MIN_VAL 35
#define CAT6_MIN_VAL 67
#define CAT1_PROB0 159
#define CAT2_PROB0 145
#define CAT2_PROB1 165
#define CAT3_PROB0 140
#define CAT3_PROB1 148
#define CAT3_PROB2 173
#define CAT4_PROB0 135
#define CAT4_PROB1 140
#define CAT4_PROB2 155
#define CAT4_PROB3 176
#define CAT5_PROB0 130
#define CAT5_PROB1 134
#define CAT5_PROB2 141
#define CAT5_PROB3 157
#define CAT5_PROB4 180
static const unsigned char cat6_prob[12] =
{ 129, 130, 133, 140, 153, 177, 196, 230, 243, 254, 254, 0 };
void vp8_reset_mb_tokens_context(MACROBLOCKD *x)
{
ENTROPY_CONTEXT *a_ctx = ((ENTROPY_CONTEXT *)x->above_context);
ENTROPY_CONTEXT *l_ctx = ((ENTROPY_CONTEXT *)x->left_context);
vpx_memset(a_ctx, 0, sizeof(ENTROPY_CONTEXT_PLANES)-1);
vpx_memset(l_ctx, 0, sizeof(ENTROPY_CONTEXT_PLANES)-1);
/* Clear entropy contexts for Y2 blocks */
if (x->mode_info_context->mbmi.mode != B_PRED &&
x->mode_info_context->mbmi.mode != SPLITMV)
if (!x->mode_info_context->mbmi.is_4x4)
{
vpx_memset(x->above_context, 0, sizeof(ENTROPY_CONTEXT_PLANES));
vpx_memset(x->left_context, 0, sizeof(ENTROPY_CONTEXT_PLANES));
a_ctx[8] = l_ctx[8] = 0;
}
}
/*
------------------------------------------------------------------------------
Residual decoding (Paragraph 13.2 / 13.3)
*/
static const uint8_t kBands[16 + 1] = {
0, 1, 2, 3, 6, 4, 5, 6, 6, 6, 6, 6, 6, 6, 6, 7,
0 /* extra entry as sentinel */
};
static const uint8_t kCat3[] = { 173, 148, 140, 0 };
static const uint8_t kCat4[] = { 176, 155, 140, 135, 0 };
static const uint8_t kCat5[] = { 180, 157, 141, 134, 130, 0 };
static const uint8_t kCat6[] =
{ 254, 254, 243, 230, 196, 177, 153, 140, 133, 130, 129, 0 };
static const uint8_t* const kCat3456[] = { kCat3, kCat4, kCat5, kCat6 };
static const uint8_t kZigzag[16] = {
0, 1, 4, 8, 5, 2, 3, 6, 9, 12, 13, 10, 7, 11, 14, 15
};
#define VP8GetBit vp8dx_decode_bool
#define NUM_PROBAS 11
#define NUM_CTX 3
typedef const uint8_t (*ProbaArray)[NUM_CTX][NUM_PROBAS]; // for const-casting
static int GetSigned(BOOL_DECODER *br, int value_to_sign)
{
int split = (br->range + 1) >> 1;
VP8_BD_VALUE bigsplit = (VP8_BD_VALUE)split << (VP8_BD_VALUE_SIZE - 8);
int v;
if(br->count < 0)
vp8dx_bool_decoder_fill(br);
if ( br->value < bigsplit )
{
br->range = split;
v= value_to_sign;
}
else
{
vpx_memset(x->above_context, 0, sizeof(ENTROPY_CONTEXT_PLANES)-1);
vpx_memset(x->left_context, 0, sizeof(ENTROPY_CONTEXT_PLANES)-1);
br->range = br->range-split;
br->value = br->value-bigsplit;
v = -value_to_sign;
}
br->range +=br->range;
br->value +=br->value;
br->count--;
return v;
}
/*
Returns the position of the last non-zero coeff plus one
(and 0 if there's no coeff at all)
*/
static int GetCoeffs(BOOL_DECODER *br, ProbaArray prob,
int ctx, int n, int16_t* out)
{
const uint8_t* p = prob[n][ctx];
if (!VP8GetBit(br, p[0]))
{ /* first EOB is more a 'CBP' bit. */
return 0;
}
while (1)
{
++n;
if (!VP8GetBit(br, p[1]))
{
p = prob[kBands[n]][0];
}
else
{ /* non zero coeff */
int v, j;
if (!VP8GetBit(br, p[2]))
{
p = prob[kBands[n]][1];
v = 1;
}
else
{
if (!VP8GetBit(br, p[3]))
{
if (!VP8GetBit(br, p[4]))
{
v = 2;
}
else
{
v = 3 + VP8GetBit(br, p[5]);
}
}
else
{
if (!VP8GetBit(br, p[6]))
{
if (!VP8GetBit(br, p[7]))
{
v = 5 + VP8GetBit(br, 159);
} else
{
v = 7 + 2 * VP8GetBit(br, 165);
v += VP8GetBit(br, 145);
}
}
else
{
const uint8_t* tab;
const int bit1 = VP8GetBit(br, p[8]);
const int bit0 = VP8GetBit(br, p[9 + bit1]);
const int cat = 2 * bit1 + bit0;
v = 0;
for (tab = kCat3456[cat]; *tab; ++tab)
{
v += v + VP8GetBit(br, *tab);
}
v += 3 + (8 << cat);
}
}
p = prob[kBands[n]][2];
}
j = kZigzag[n - 1];
out[j] = GetSigned(br, v);
if (n == 16 || !VP8GetBit(br, p[0]))
{ /* EOB */
return n;
}
}
if (n == 16)
{
return 16;
}
}
}
DECLARE_ALIGNED(16, extern const unsigned char, vp8_norm[256]);
#define FILL \
if(count < 0) \
VP8DX_BOOL_DECODER_FILL(count, value, bufptr, bufend);
#define NORMALIZE \
/*if(range < 0x80)*/ \
{ \
shift = vp8_norm[range]; \
range <<= shift; \
value <<= shift; \
count -= shift; \
}
#define DECODE_AND_APPLYSIGN(value_to_sign) \
split = (range + 1) >> 1; \
bigsplit = (VP8_BD_VALUE)split << (VP8_BD_VALUE_SIZE - 8); \
FILL \
if ( value < bigsplit ) \
{ \
range = split; \
v= value_to_sign; \
} \
else \
{ \
range = range-split; \
value = value-bigsplit; \
v = -value_to_sign; \
} \
range +=range; \
value +=value; \
count--;
#define DECODE_AND_BRANCH_IF_ZERO(probability,branch) \
{ \
split = 1 + ((( probability*(range-1) ) )>> 8); \
bigsplit = (VP8_BD_VALUE)split << (VP8_BD_VALUE_SIZE - 8); \
FILL \
if ( value < bigsplit ) \
{ \
range = split; \
NORMALIZE \
goto branch; \
} \
value -= bigsplit; \
range = range - split; \
NORMALIZE \
}
#define DECODE_AND_LOOP_IF_ZERO(probability,branch) \
{ \
split = 1 + ((( probability*(range-1) ) ) >> 8); \
bigsplit = (VP8_BD_VALUE)split << (VP8_BD_VALUE_SIZE - 8); \
FILL \
if ( value < bigsplit ) \
{ \
range = split; \
NORMALIZE \
Prob = coef_probs; \
if(c<15) {\
++c; \
Prob += coef_bands_x[c]; \
goto branch; \
} goto BLOCK_FINISHED; /*for malformed input */\
} \
value -= bigsplit; \
range = range - split; \
NORMALIZE \
}
#define DECODE_SIGN_WRITE_COEFF_AND_CHECK_EXIT(val) \
DECODE_AND_APPLYSIGN(val) \
Prob = coef_probs + (ENTROPY_NODES*2); \
if(c < 15){\
qcoeff_ptr [ scan[c] ] = (int16_t) v; \
++c; \
goto DO_WHILE; }\
qcoeff_ptr [ 15 ] = (int16_t) v; \
goto BLOCK_FINISHED;
#define DECODE_EXTRABIT_AND_ADJUST_VAL(prob, bits_count)\
split = 1 + (((range-1) * prob) >> 8); \
bigsplit = (VP8_BD_VALUE)split << (VP8_BD_VALUE_SIZE - 8); \
FILL \
if(value >= bigsplit)\
{\
range = range-split;\
value = value-bigsplit;\
val += ((uint16_t)1<<bits_count);\
}\
else\
{\
range = split;\
}\
NORMALIZE
int vp8_decode_mb_tokens(VP8D_COMP *dx, MACROBLOCKD *x)
{
ENTROPY_CONTEXT *A = (ENTROPY_CONTEXT *)x->above_context;
ENTROPY_CONTEXT *L = (ENTROPY_CONTEXT *)x->left_context;
const FRAME_CONTEXT * const fc = &dx->common.fc;
BOOL_DECODER *bc = x->current_bc;
const FRAME_CONTEXT * const fc = &dx->common.fc;
char *eobs = x->eobs;
ENTROPY_CONTEXT *a;
ENTROPY_CONTEXT *l;
int i;
int nonzeros;
int eobtotal = 0;
register int count;
const BOOL_DATA *bufptr;
const BOOL_DATA *bufend;
register unsigned int range;
VP8_BD_VALUE value;
const int *scan;
register unsigned int shift;
unsigned int split;
VP8_BD_VALUE bigsplit;
short *qcoeff_ptr;
ProbaArray coef_probs;
ENTROPY_CONTEXT *a_ctx = ((ENTROPY_CONTEXT *)x->above_context);
ENTROPY_CONTEXT *l_ctx = ((ENTROPY_CONTEXT *)x->left_context);
ENTROPY_CONTEXT *a;
ENTROPY_CONTEXT *l;
int skip_dc = 0;
const vp8_prob *coef_probs;
int stop;
int val, bits_count;
int c;
int v;
const vp8_prob *Prob;
int start_coeff;
i = 0;
stop = 16;
scan = vp8_default_zig_zag1d;
qcoeff_ptr = &x->qcoeff[0];
coef_probs = fc->coef_probs [3] [ 0 ] [0];
if (x->mode_info_context->mbmi.mode != B_PRED &&
x->mode_info_context->mbmi.mode != SPLITMV)
if (!x->mode_info_context->mbmi.is_4x4)
{
i = 24;
stop = 24;
qcoeff_ptr += 24*16;
eobtotal -= 16;
coef_probs = fc->coef_probs [1] [ 0 ] [0];
a = a_ctx + 8;
l = l_ctx + 8;
coef_probs = fc->coef_probs [1];
nonzeros = GetCoeffs(bc, coef_probs, (*a + *l), 0, qcoeff_ptr + 24 * 16);
*a = *l = (nonzeros > 0);
eobs[24] = nonzeros;
eobtotal += nonzeros - 16;
coef_probs = fc->coef_probs [0];
skip_dc = 1;
}
else
{
coef_probs = fc->coef_probs [3];
skip_dc = 0;
}
bufend = bc->user_buffer_end;
bufptr = bc->user_buffer;
value = bc->value;
count = bc->count;
range = bc->range;
start_coeff = 0;
BLOCK_LOOP:
a = A + vp8_block2above[i];
l = L + vp8_block2left[i];
c = start_coeff;
VP8_COMBINEENTROPYCONTEXTS(v, *a, *l);
Prob = coef_probs;
Prob += v * ENTROPY_NODES;
*a = *l = 0;
DO_WHILE:
Prob += coef_bands_x[c];
DECODE_AND_BRANCH_IF_ZERO(Prob[EOB_CONTEXT_NODE], BLOCK_FINISHED);
*a = *l = 1;
CHECK_0_:
DECODE_AND_LOOP_IF_ZERO(Prob[ZERO_CONTEXT_NODE], CHECK_0_);
DECODE_AND_BRANCH_IF_ZERO(Prob[ONE_CONTEXT_NODE], ONE_CONTEXT_NODE_0_);
DECODE_AND_BRANCH_IF_ZERO(Prob[LOW_VAL_CONTEXT_NODE],
LOW_VAL_CONTEXT_NODE_0_);
DECODE_AND_BRANCH_IF_ZERO(Prob[HIGH_LOW_CONTEXT_NODE],
HIGH_LOW_CONTEXT_NODE_0_);
DECODE_AND_BRANCH_IF_ZERO(Prob[CAT_THREEFOUR_CONTEXT_NODE],
CAT_THREEFOUR_CONTEXT_NODE_0_);
DECODE_AND_BRANCH_IF_ZERO(Prob[CAT_FIVE_CONTEXT_NODE],
CAT_FIVE_CONTEXT_NODE_0_);
val = CAT6_MIN_VAL;
bits_count = 10;
do
for (i = 0; i < 16; ++i)
{
DECODE_EXTRABIT_AND_ADJUST_VAL(cat6_prob[bits_count], bits_count);
bits_count -- ;
}
while (bits_count >= 0);
a = a_ctx + (i&3);
l = l_ctx + ((i&0xc)>>2);
DECODE_SIGN_WRITE_COEFF_AND_CHECK_EXIT(val);
nonzeros = GetCoeffs(bc, coef_probs, (*a + *l), skip_dc, qcoeff_ptr);
*a = *l = (nonzeros > 0);
CAT_FIVE_CONTEXT_NODE_0_:
val = CAT5_MIN_VAL;
DECODE_EXTRABIT_AND_ADJUST_VAL(CAT5_PROB4, 4);
DECODE_EXTRABIT_AND_ADJUST_VAL(CAT5_PROB3, 3);
DECODE_EXTRABIT_AND_ADJUST_VAL(CAT5_PROB2, 2);
DECODE_EXTRABIT_AND_ADJUST_VAL(CAT5_PROB1, 1);
DECODE_EXTRABIT_AND_ADJUST_VAL(CAT5_PROB0, 0);
DECODE_SIGN_WRITE_COEFF_AND_CHECK_EXIT(val);
CAT_THREEFOUR_CONTEXT_NODE_0_:
DECODE_AND_BRANCH_IF_ZERO(Prob[CAT_THREE_CONTEXT_NODE],
CAT_THREE_CONTEXT_NODE_0_);
val = CAT4_MIN_VAL;
DECODE_EXTRABIT_AND_ADJUST_VAL(CAT4_PROB3, 3);
DECODE_EXTRABIT_AND_ADJUST_VAL(CAT4_PROB2, 2);
DECODE_EXTRABIT_AND_ADJUST_VAL(CAT4_PROB1, 1);
DECODE_EXTRABIT_AND_ADJUST_VAL(CAT4_PROB0, 0);
DECODE_SIGN_WRITE_COEFF_AND_CHECK_EXIT(val);
CAT_THREE_CONTEXT_NODE_0_:
val = CAT3_MIN_VAL;
DECODE_EXTRABIT_AND_ADJUST_VAL(CAT3_PROB2, 2);
DECODE_EXTRABIT_AND_ADJUST_VAL(CAT3_PROB1, 1);
DECODE_EXTRABIT_AND_ADJUST_VAL(CAT3_PROB0, 0);
DECODE_SIGN_WRITE_COEFF_AND_CHECK_EXIT(val);
HIGH_LOW_CONTEXT_NODE_0_:
DECODE_AND_BRANCH_IF_ZERO(Prob[CAT_ONE_CONTEXT_NODE],
CAT_ONE_CONTEXT_NODE_0_);
val = CAT2_MIN_VAL;
DECODE_EXTRABIT_AND_ADJUST_VAL(CAT2_PROB1, 1);
DECODE_EXTRABIT_AND_ADJUST_VAL(CAT2_PROB0, 0);
DECODE_SIGN_WRITE_COEFF_AND_CHECK_EXIT(val);
CAT_ONE_CONTEXT_NODE_0_:
val = CAT1_MIN_VAL;
DECODE_EXTRABIT_AND_ADJUST_VAL(CAT1_PROB0, 0);
DECODE_SIGN_WRITE_COEFF_AND_CHECK_EXIT(val);
LOW_VAL_CONTEXT_NODE_0_:
DECODE_AND_BRANCH_IF_ZERO(Prob[TWO_CONTEXT_NODE], TWO_CONTEXT_NODE_0_);
DECODE_AND_BRANCH_IF_ZERO(Prob[THREE_CONTEXT_NODE], THREE_CONTEXT_NODE_0_);
DECODE_SIGN_WRITE_COEFF_AND_CHECK_EXIT(4);
THREE_CONTEXT_NODE_0_:
DECODE_SIGN_WRITE_COEFF_AND_CHECK_EXIT(3);
TWO_CONTEXT_NODE_0_:
DECODE_SIGN_WRITE_COEFF_AND_CHECK_EXIT(2);
ONE_CONTEXT_NODE_0_:
DECODE_AND_APPLYSIGN(1);
Prob = coef_probs + ENTROPY_NODES;
if (c < 15)
{
qcoeff_ptr [ scan[c] ] = (int16_t) v;
++c;
goto DO_WHILE;
nonzeros += skip_dc;
eobs[i] = nonzeros;
eobtotal += nonzeros;
qcoeff_ptr += 16;
}
qcoeff_ptr [ 15 ] = (int16_t) v;
BLOCK_FINISHED:
eobs[i] = c;
eobtotal += c;
qcoeff_ptr += 16;
coef_probs = fc->coef_probs [2];
i++;
if (i < stop)
goto BLOCK_LOOP;
if (i == 25)
a_ctx += 4;
l_ctx += 4;
for (i = 16; i < 24; ++i)
{
start_coeff = 1;
i = 0;
stop = 16;
coef_probs = fc->coef_probs [0] [ 0 ] [0];
qcoeff_ptr -= (24*16 + 16);
goto BLOCK_LOOP;
a = a_ctx + ((i > 19)<<1) + (i&1);
l = l_ctx + ((i > 19)<<1) + ((i&3)>1);
nonzeros = GetCoeffs(bc, coef_probs, (*a + *l), 0, qcoeff_ptr);
*a = *l = (nonzeros > 0);
eobs[i] = nonzeros;
eobtotal += nonzeros;
qcoeff_ptr += 16;
}
if (i == 16)
{
start_coeff = 0;
coef_probs = fc->coef_probs [2] [ 0 ] [0];
stop = 24;
goto BLOCK_LOOP;
}
FILL
bc->user_buffer = bufptr;
bc->value = value;
bc->count = count;
bc->range = range;
return eobtotal;
}


@@ -17,7 +17,6 @@
#include "onyxd_int.h"
#include "vpx_mem/vpx_mem.h"
#include "vp8/common/alloccommon.h"
#include "vpx_scale/yv12extend.h"
#include "vp8/common/loopfilter.h"
#include "vp8/common/swapyv12buffer.h"
#include "vp8/common/threading.h"
@@ -42,20 +41,6 @@ extern void vp8cx_init_de_quantizer(VP8D_COMP *pbi);
static int get_free_fb (VP8_COMMON *cm);
static void ref_cnt_fb (int *buf, int *idx, int new_idx);
void vp8dx_initialize()
{
static int init_done = 0;
if (!init_done)
{
vp8_initialize_common();
vp8_scale_machine_specific_config();
init_done = 1;
}
}
struct VP8D_COMP * vp8dx_create_decompressor(VP8D_CONFIG *oxcf)
{
VP8D_COMP *pbi = vpx_memalign(32, sizeof(VP8D_COMP));
@@ -73,7 +58,6 @@ struct VP8D_COMP * vp8dx_create_decompressor(VP8D_CONFIG *oxcf)
}
pbi->common.error.setjmp = 1;
vp8dx_initialize();
vp8_create_common(&pbi->common);
@@ -163,7 +147,7 @@ vpx_codec_err_t vp8dx_get_reference(VP8D_COMP *pbi, VP8_REFFRAME ref_frame_flag,
"Incorrect buffer dimensions");
}
else
vp8_yv12_copy_frame_ptr(&cm->yv12_fb[ref_fb_idx], sd);
vp8_yv12_copy_frame(&cm->yv12_fb[ref_fb_idx], sd);
return pbi->common.error.error_code;
}
@@ -203,7 +187,7 @@ vpx_codec_err_t vp8dx_set_reference(VP8D_COMP *pbi, VP8_REFFRAME ref_frame_flag,
/* Manage the reference counters and copy image. */
ref_cnt_fb (cm->fb_idx_ref_cnt, ref_fb_ptr, free_fb);
vp8_yv12_copy_frame_ptr(sd, &cm->yv12_fb[*ref_fb_ptr]);
vp8_yv12_copy_frame(sd, &cm->yv12_fb[*ref_fb_ptr]);
}
return pbi->common.error.error_code;
@@ -367,7 +351,7 @@ int vp8dx_receive_compressed_data(VP8D_COMP *pbi, unsigned long size, const unsi
const int prev_idx = cm->lst_fb_idx;
cm->fb_idx_ref_cnt[prev_idx]--;
cm->lst_fb_idx = get_free_fb(cm);
vp8_yv12_copy_frame_ptr(&cm->yv12_fb[prev_idx],
vp8_yv12_copy_frame(&cm->yv12_fb[prev_idx],
&cm->yv12_fb[cm->lst_fb_idx]);
}
/* This is used to signal that we are missing frames.
@@ -486,7 +470,7 @@ int vp8dx_receive_compressed_data(VP8D_COMP *pbi, unsigned long size, const unsi
/* Apply the loop filter if appropriate. */
vp8_loop_filter_frame(cm, &pbi->mb);
}
vp8_yv12_extend_frame_borders_ptr(cm->frame_to_show);
vp8_yv12_extend_frame_borders(cm->frame_to_show);
}


@@ -32,8 +32,6 @@ typedef struct
{
MACROBLOCKD mbd;
int mb_row;
int current_mb_col;
short *coef_ptr;
} MB_ROW_DEC;
typedef struct


@@ -1,943 +0,0 @@
/*
* Copyright (c) 2010 The WebM project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#include "vpx_config.h"
#include "vpx_rtcd.h"
#include "vpx_mem/vpx_mem.h"
#include "onyxd_int.h"
/* For skip_recon_mb(), add vp8_build_intra_predictors_mby_s(MACROBLOCKD *x) and
* vp8_build_intra_predictors_mbuv_s(MACROBLOCKD *x).
*/
void vp8mt_build_intra_predictors_mby(VP8D_COMP *pbi, MACROBLOCKD *x, int mb_row, int mb_col)
{
unsigned char *yabove_row; /* = x->dst.y_buffer - x->dst.y_stride; */
unsigned char *yleft_col;
unsigned char yleft_buf[16];
unsigned char ytop_left; /* = yabove_row[-1]; */
unsigned char *ypred_ptr = x->predictor;
int r, c, i;
if (pbi->common.filter_level)
{
yabove_row = pbi->mt_yabove_row[mb_row] + mb_col*16 +32;
yleft_col = pbi->mt_yleft_col[mb_row];
} else
{
yabove_row = x->dst.y_buffer - x->dst.y_stride;
for (i = 0; i < 16; i++)
yleft_buf[i] = x->dst.y_buffer [i* x->dst.y_stride -1];
yleft_col = yleft_buf;
}
ytop_left = yabove_row[-1];
/* for Y */
switch (x->mode_info_context->mbmi.mode)
{
case DC_PRED:
{
int expected_dc;
int i;
int shift;
int average = 0;
if (x->up_available || x->left_available)
{
if (x->up_available)
{
for (i = 0; i < 16; i++)
{
average += yabove_row[i];
}
}
if (x->left_available)
{
for (i = 0; i < 16; i++)
{
average += yleft_col[i];
}
}
shift = 3 + x->up_available + x->left_available;
expected_dc = (average + (1 << (shift - 1))) >> shift;
}
else
{
expected_dc = 128;
}
vpx_memset(ypred_ptr, expected_dc, 256);
}
break;
case V_PRED:
{
for (r = 0; r < 16; r++)
{
((int *)ypred_ptr)[0] = ((int *)yabove_row)[0];
((int *)ypred_ptr)[1] = ((int *)yabove_row)[1];
((int *)ypred_ptr)[2] = ((int *)yabove_row)[2];
((int *)ypred_ptr)[3] = ((int *)yabove_row)[3];
ypred_ptr += 16;
}
}
break;
case H_PRED:
{
for (r = 0; r < 16; r++)
{
vpx_memset(ypred_ptr, yleft_col[r], 16);
ypred_ptr += 16;
}
}
break;
case TM_PRED:
{
for (r = 0; r < 16; r++)
{
for (c = 0; c < 16; c++)
{
int pred = yleft_col[r] + yabove_row[ c] - ytop_left;
if (pred < 0)
pred = 0;
if (pred > 255)
pred = 255;
ypred_ptr[c] = pred;
}
ypred_ptr += 16;
}
}
break;
case B_PRED:
case NEARESTMV:
case NEARMV:
case ZEROMV:
case NEWMV:
case SPLITMV:
case MB_MODE_COUNT:
break;
}
}
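The DC_PRED branch above sums the available above/left neighbors and divides with rounding, falling back to mid-gray when neither edge exists. A standalone sketch of that arithmetic (the helper name `dc_predict16` is illustrative, not a libvpx symbol):

```c
#include <assert.h>

/* Rounded-average DC predictor for a 16x16 luma block, mirroring the
 * DC_PRED branch above. shift is 4 (16 samples) or 5 (32 samples);
 * adding 1 << (shift - 1) rounds to nearest. */
static int dc_predict16(const unsigned char above[16],
                        const unsigned char left[16],
                        int up_available, int left_available)
{
    int average = 0, shift, i;

    if (!up_available && !left_available)
        return 128;               /* no neighbors: mid-gray */

    if (up_available)
        for (i = 0; i < 16; i++)
            average += above[i];

    if (left_available)
        for (i = 0; i < 16; i++)
            average += left[i];

    shift = 3 + up_available + left_available;
    return (average + (1 << (shift - 1))) >> shift;
}
```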
void vp8mt_build_intra_predictors_mby_s(VP8D_COMP *pbi, MACROBLOCKD *x, int mb_row, int mb_col)
{
unsigned char *yabove_row; /* = x->dst.y_buffer - x->dst.y_stride; */
unsigned char *yleft_col;
unsigned char yleft_buf[16];
unsigned char ytop_left; /* = yabove_row[-1]; */
unsigned char *ypred_ptr = x->predictor;
int r, c, i;
int y_stride = x->dst.y_stride;
ypred_ptr = x->dst.y_buffer; /*x->predictor;*/
if (pbi->common.filter_level)
{
yabove_row = pbi->mt_yabove_row[mb_row] + mb_col*16 +32;
yleft_col = pbi->mt_yleft_col[mb_row];
} else
{
yabove_row = x->dst.y_buffer - x->dst.y_stride;
for (i = 0; i < 16; i++)
yleft_buf[i] = x->dst.y_buffer [i* x->dst.y_stride -1];
yleft_col = yleft_buf;
}
ytop_left = yabove_row[-1];
/* for Y */
switch (x->mode_info_context->mbmi.mode)
{
case DC_PRED:
{
int expected_dc;
int i;
int shift;
int average = 0;
if (x->up_available || x->left_available)
{
if (x->up_available)
{
for (i = 0; i < 16; i++)
{
average += yabove_row[i];
}
}
if (x->left_available)
{
for (i = 0; i < 16; i++)
{
average += yleft_col[i];
}
}
shift = 3 + x->up_available + x->left_available;
expected_dc = (average + (1 << (shift - 1))) >> shift;
}
else
{
expected_dc = 128;
}
/*vpx_memset(ypred_ptr, expected_dc, 256);*/
for (r = 0; r < 16; r++)
{
vpx_memset(ypred_ptr, expected_dc, 16);
ypred_ptr += y_stride; /*16;*/
}
}
break;
case V_PRED:
{
for (r = 0; r < 16; r++)
{
((int *)ypred_ptr)[0] = ((int *)yabove_row)[0];
((int *)ypred_ptr)[1] = ((int *)yabove_row)[1];
((int *)ypred_ptr)[2] = ((int *)yabove_row)[2];
((int *)ypred_ptr)[3] = ((int *)yabove_row)[3];
ypred_ptr += y_stride; /*16;*/
}
}
break;
case H_PRED:
{
for (r = 0; r < 16; r++)
{
vpx_memset(ypred_ptr, yleft_col[r], 16);
ypred_ptr += y_stride; /*16;*/
}
}
break;
case TM_PRED:
{
for (r = 0; r < 16; r++)
{
for (c = 0; c < 16; c++)
{
int pred = yleft_col[r] + yabove_row[ c] - ytop_left;
if (pred < 0)
pred = 0;
if (pred > 255)
pred = 255;
ypred_ptr[c] = pred;
}
ypred_ptr += y_stride; /*16;*/
}
}
break;
case B_PRED:
case NEARESTMV:
case NEARMV:
case ZEROMV:
case NEWMV:
case SPLITMV:
case MB_MODE_COUNT:
break;
}
}
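The TM_PRED branch above applies the TrueMotion rule per pixel: left neighbor plus above neighbor minus the top-left corner, clamped to 8 bits. A minimal sketch of that per-pixel rule (`tm_pixel` is an illustrative name):

```c
#include <assert.h>

/* TrueMotion (TM_PRED) pixel rule: left + above - top_left,
 * clamped to the valid 8-bit sample range. */
static unsigned char tm_pixel(unsigned char left, unsigned char above,
                              unsigned char top_left)
{
    int pred = left + above - top_left;
    if (pred < 0)
        pred = 0;
    if (pred > 255)
        pred = 255;
    return (unsigned char)pred;
}
```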
void vp8mt_build_intra_predictors_mbuv(VP8D_COMP *pbi, MACROBLOCKD *x, int mb_row, int mb_col)
{
unsigned char *uabove_row; /* = x->dst.u_buffer - x->dst.uv_stride; */
unsigned char *uleft_col; /*[16];*/
unsigned char uleft_buf[8];
unsigned char utop_left; /* = uabove_row[-1]; */
unsigned char *vabove_row; /* = x->dst.v_buffer - x->dst.uv_stride; */
unsigned char *vleft_col; /*[20];*/
unsigned char vleft_buf[8];
unsigned char vtop_left; /* = vabove_row[-1]; */
unsigned char *upred_ptr = &x->predictor[256];
unsigned char *vpred_ptr = &x->predictor[320];
int i, j;
if (pbi->common.filter_level)
{
uabove_row = pbi->mt_uabove_row[mb_row] + mb_col*8 +16;
vabove_row = pbi->mt_vabove_row[mb_row] + mb_col*8 +16;
uleft_col = pbi->mt_uleft_col[mb_row];
vleft_col = pbi->mt_vleft_col[mb_row];
} else
{
uabove_row = x->dst.u_buffer - x->dst.uv_stride;
vabove_row = x->dst.v_buffer - x->dst.uv_stride;
for (i = 0; i < 8; i++)
{
uleft_buf[i] = x->dst.u_buffer [i* x->dst.uv_stride -1];
vleft_buf[i] = x->dst.v_buffer [i* x->dst.uv_stride -1];
}
uleft_col = uleft_buf;
vleft_col = vleft_buf;
}
utop_left = uabove_row[-1];
vtop_left = vabove_row[-1];
switch (x->mode_info_context->mbmi.uv_mode)
{
case DC_PRED:
{
int expected_udc;
int expected_vdc;
int i;
int shift;
int Uaverage = 0;
int Vaverage = 0;
if (x->up_available)
{
for (i = 0; i < 8; i++)
{
Uaverage += uabove_row[i];
Vaverage += vabove_row[i];
}
}
if (x->left_available)
{
for (i = 0; i < 8; i++)
{
Uaverage += uleft_col[i];
Vaverage += vleft_col[i];
}
}
if (!x->up_available && !x->left_available)
{
expected_udc = 128;
expected_vdc = 128;
}
else
{
shift = 2 + x->up_available + x->left_available;
expected_udc = (Uaverage + (1 << (shift - 1))) >> shift;
expected_vdc = (Vaverage + (1 << (shift - 1))) >> shift;
}
vpx_memset(upred_ptr, expected_udc, 64);
vpx_memset(vpred_ptr, expected_vdc, 64);
}
break;
case V_PRED:
{
int i;
for (i = 0; i < 8; i++)
{
vpx_memcpy(upred_ptr, uabove_row, 8);
vpx_memcpy(vpred_ptr, vabove_row, 8);
upred_ptr += 8;
vpred_ptr += 8;
}
}
break;
case H_PRED:
{
int i;
for (i = 0; i < 8; i++)
{
vpx_memset(upred_ptr, uleft_col[i], 8);
vpx_memset(vpred_ptr, vleft_col[i], 8);
upred_ptr += 8;
vpred_ptr += 8;
}
}
break;
case TM_PRED:
{
int i;
for (i = 0; i < 8; i++)
{
for (j = 0; j < 8; j++)
{
int predu = uleft_col[i] + uabove_row[j] - utop_left;
int predv = vleft_col[i] + vabove_row[j] - vtop_left;
if (predu < 0)
predu = 0;
if (predu > 255)
predu = 255;
if (predv < 0)
predv = 0;
if (predv > 255)
predv = 255;
upred_ptr[j] = predu;
vpred_ptr[j] = predv;
}
upred_ptr += 8;
vpred_ptr += 8;
}
}
break;
case B_PRED:
case NEARESTMV:
case NEARMV:
case ZEROMV:
case NEWMV:
case SPLITMV:
case MB_MODE_COUNT:
break;
}
}
void vp8mt_build_intra_predictors_mbuv_s(VP8D_COMP *pbi, MACROBLOCKD *x, int mb_row, int mb_col)
{
unsigned char *uabove_row; /* = x->dst.u_buffer - x->dst.uv_stride; */
unsigned char *uleft_col; /*[16];*/
unsigned char uleft_buf[8];
unsigned char utop_left; /* = uabove_row[-1]; */
unsigned char *vabove_row; /* = x->dst.v_buffer - x->dst.uv_stride; */
unsigned char *vleft_col; /*[20];*/
unsigned char vleft_buf[8];
unsigned char vtop_left; /* = vabove_row[-1]; */
unsigned char *upred_ptr = x->dst.u_buffer; /*&x->predictor[256];*/
unsigned char *vpred_ptr = x->dst.v_buffer; /*&x->predictor[320];*/
int uv_stride = x->dst.uv_stride;
int i, j;
if (pbi->common.filter_level)
{
uabove_row = pbi->mt_uabove_row[mb_row] + mb_col*8 +16;
vabove_row = pbi->mt_vabove_row[mb_row] + mb_col*8 +16;
uleft_col = pbi->mt_uleft_col[mb_row];
vleft_col = pbi->mt_vleft_col[mb_row];
} else
{
uabove_row = x->dst.u_buffer - x->dst.uv_stride;
vabove_row = x->dst.v_buffer - x->dst.uv_stride;
for (i = 0; i < 8; i++)
{
uleft_buf[i] = x->dst.u_buffer [i* x->dst.uv_stride -1];
vleft_buf[i] = x->dst.v_buffer [i* x->dst.uv_stride -1];
}
uleft_col = uleft_buf;
vleft_col = vleft_buf;
}
utop_left = uabove_row[-1];
vtop_left = vabove_row[-1];
switch (x->mode_info_context->mbmi.uv_mode)
{
case DC_PRED:
{
int expected_udc;
int expected_vdc;
int i;
int shift;
int Uaverage = 0;
int Vaverage = 0;
if (x->up_available)
{
for (i = 0; i < 8; i++)
{
Uaverage += uabove_row[i];
Vaverage += vabove_row[i];
}
}
if (x->left_available)
{
for (i = 0; i < 8; i++)
{
Uaverage += uleft_col[i];
Vaverage += vleft_col[i];
}
}
if (!x->up_available && !x->left_available)
{
expected_udc = 128;
expected_vdc = 128;
}
else
{
shift = 2 + x->up_available + x->left_available;
expected_udc = (Uaverage + (1 << (shift - 1))) >> shift;
expected_vdc = (Vaverage + (1 << (shift - 1))) >> shift;
}
/*vpx_memset(upred_ptr,expected_udc,64);
vpx_memset(vpred_ptr,expected_vdc,64);*/
for (i = 0; i < 8; i++)
{
vpx_memset(upred_ptr, expected_udc, 8);
vpx_memset(vpred_ptr, expected_vdc, 8);
upred_ptr += uv_stride; /*8;*/
vpred_ptr += uv_stride; /*8;*/
}
}
break;
case V_PRED:
{
int i;
for (i = 0; i < 8; i++)
{
vpx_memcpy(upred_ptr, uabove_row, 8);
vpx_memcpy(vpred_ptr, vabove_row, 8);
upred_ptr += uv_stride; /*8;*/
vpred_ptr += uv_stride; /*8;*/
}
}
break;
case H_PRED:
{
int i;
for (i = 0; i < 8; i++)
{
vpx_memset(upred_ptr, uleft_col[i], 8);
vpx_memset(vpred_ptr, vleft_col[i], 8);
upred_ptr += uv_stride; /*8;*/
vpred_ptr += uv_stride; /*8;*/
}
}
break;
case TM_PRED:
{
int i;
for (i = 0; i < 8; i++)
{
for (j = 0; j < 8; j++)
{
int predu = uleft_col[i] + uabove_row[j] - utop_left;
int predv = vleft_col[i] + vabove_row[j] - vtop_left;
if (predu < 0)
predu = 0;
if (predu > 255)
predu = 255;
if (predv < 0)
predv = 0;
if (predv > 255)
predv = 255;
upred_ptr[j] = predu;
vpred_ptr[j] = predv;
}
upred_ptr += uv_stride; /*8;*/
vpred_ptr += uv_stride; /*8;*/
}
}
break;
case B_PRED:
case NEARESTMV:
case NEARMV:
case ZEROMV:
case NEWMV:
case SPLITMV:
case MB_MODE_COUNT:
break;
}
}
void vp8mt_predict_intra4x4(VP8D_COMP *pbi,
MACROBLOCKD *xd,
int b_mode,
unsigned char *predictor,
int stride,
int mb_row,
int mb_col,
int num)
{
int i, r, c;
unsigned char *Above; /* = *(x->base_dst) + x->dst - x->dst_stride; */
unsigned char Left[4];
unsigned char top_left; /* = Above[-1]; */
BLOCKD *x = &xd->block[num];
int dst_stride = xd->dst.y_stride;
unsigned char *base_dst = xd->dst.y_buffer;
/*Caution: For some b_mode, it needs 8 pixels (4 above + 4 above-right).*/
if (num < 4 && pbi->common.filter_level)
Above = pbi->mt_yabove_row[mb_row] + mb_col*16 + num*4 + 32;
else
Above = base_dst + x->offset - dst_stride;
if (num%4==0 && pbi->common.filter_level)
{
for (i=0; i<4; i++)
Left[i] = pbi->mt_yleft_col[mb_row][num + i];
}else
{
Left[0] = (base_dst)[x->offset - 1];
Left[1] = (base_dst)[x->offset - 1 + dst_stride];
Left[2] = (base_dst)[x->offset - 1 + 2 * dst_stride];
Left[3] = (base_dst)[x->offset - 1 + 3 * dst_stride];
}
if ((num==4 || num==8 || num==12) && pbi->common.filter_level)
top_left = pbi->mt_yleft_col[mb_row][num-1];
else
top_left = Above[-1];
switch (b_mode)
{
case B_DC_PRED:
{
int expected_dc = 0;
for (i = 0; i < 4; i++)
{
expected_dc += Above[i];
expected_dc += Left[i];
}
expected_dc = (expected_dc + 4) >> 3;
for (r = 0; r < 4; r++)
{
for (c = 0; c < 4; c++)
{
predictor[c] = expected_dc;
}
predictor += stride;
}
}
break;
case B_TM_PRED:
{
/* prediction similar to true_motion prediction */
for (r = 0; r < 4; r++)
{
for (c = 0; c < 4; c++)
{
int pred = Above[c] - top_left + Left[r];
if (pred < 0)
pred = 0;
if (pred > 255)
pred = 255;
predictor[c] = pred;
}
predictor += stride;
}
}
break;
case B_VE_PRED:
{
unsigned int ap[4];
ap[0] = (top_left + 2 * Above[0] + Above[1] + 2) >> 2;
ap[1] = (Above[0] + 2 * Above[1] + Above[2] + 2) >> 2;
ap[2] = (Above[1] + 2 * Above[2] + Above[3] + 2) >> 2;
ap[3] = (Above[2] + 2 * Above[3] + Above[4] + 2) >> 2;
for (r = 0; r < 4; r++)
{
for (c = 0; c < 4; c++)
{
predictor[c] = ap[c];
}
predictor += stride;
}
}
break;
case B_HE_PRED:
{
unsigned int lp[4];
lp[0] = (top_left + 2 * Left[0] + Left[1] + 2) >> 2;
lp[1] = (Left[0] + 2 * Left[1] + Left[2] + 2) >> 2;
lp[2] = (Left[1] + 2 * Left[2] + Left[3] + 2) >> 2;
lp[3] = (Left[2] + 2 * Left[3] + Left[3] + 2) >> 2;
for (r = 0; r < 4; r++)
{
for (c = 0; c < 4; c++)
{
predictor[c] = lp[r];
}
predictor += stride;
}
}
break;
case B_LD_PRED:
{
unsigned char *ptr = Above;
predictor[0 * stride + 0] = (ptr[0] + ptr[1] * 2 + ptr[2] + 2) >> 2;
predictor[0 * stride + 1] =
predictor[1 * stride + 0] = (ptr[1] + ptr[2] * 2 + ptr[3] + 2) >> 2;
predictor[0 * stride + 2] =
predictor[1 * stride + 1] =
predictor[2 * stride + 0] = (ptr[2] + ptr[3] * 2 + ptr[4] + 2) >> 2;
predictor[0 * stride + 3] =
predictor[1 * stride + 2] =
predictor[2 * stride + 1] =
predictor[3 * stride + 0] = (ptr[3] + ptr[4] * 2 + ptr[5] + 2) >> 2;
predictor[1 * stride + 3] =
predictor[2 * stride + 2] =
predictor[3 * stride + 1] = (ptr[4] + ptr[5] * 2 + ptr[6] + 2) >> 2;
predictor[2 * stride + 3] =
predictor[3 * stride + 2] = (ptr[5] + ptr[6] * 2 + ptr[7] + 2) >> 2;
predictor[3 * stride + 3] = (ptr[6] + ptr[7] * 2 + ptr[7] + 2) >> 2;
}
break;
case B_RD_PRED:
{
unsigned char pp[9];
pp[0] = Left[3];
pp[1] = Left[2];
pp[2] = Left[1];
pp[3] = Left[0];
pp[4] = top_left;
pp[5] = Above[0];
pp[6] = Above[1];
pp[7] = Above[2];
pp[8] = Above[3];
predictor[3 * stride + 0] = (pp[0] + pp[1] * 2 + pp[2] + 2) >> 2;
predictor[3 * stride + 1] =
predictor[2 * stride + 0] = (pp[1] + pp[2] * 2 + pp[3] + 2) >> 2;
predictor[3 * stride + 2] =
predictor[2 * stride + 1] =
predictor[1 * stride + 0] = (pp[2] + pp[3] * 2 + pp[4] + 2) >> 2;
predictor[3 * stride + 3] =
predictor[2 * stride + 2] =
predictor[1 * stride + 1] =
predictor[0 * stride + 0] = (pp[3] + pp[4] * 2 + pp[5] + 2) >> 2;
predictor[2 * stride + 3] =
predictor[1 * stride + 2] =
predictor[0 * stride + 1] = (pp[4] + pp[5] * 2 + pp[6] + 2) >> 2;
predictor[1 * stride + 3] =
predictor[0 * stride + 2] = (pp[5] + pp[6] * 2 + pp[7] + 2) >> 2;
predictor[0 * stride + 3] = (pp[6] + pp[7] * 2 + pp[8] + 2) >> 2;
}
break;
case B_VR_PRED:
{
unsigned char pp[9];
pp[0] = Left[3];
pp[1] = Left[2];
pp[2] = Left[1];
pp[3] = Left[0];
pp[4] = top_left;
pp[5] = Above[0];
pp[6] = Above[1];
pp[7] = Above[2];
pp[8] = Above[3];
predictor[3 * stride + 0] = (pp[1] + pp[2] * 2 + pp[3] + 2) >> 2;
predictor[2 * stride + 0] = (pp[2] + pp[3] * 2 + pp[4] + 2) >> 2;
predictor[3 * stride + 1] =
predictor[1 * stride + 0] = (pp[3] + pp[4] * 2 + pp[5] + 2) >> 2;
predictor[2 * stride + 1] =
predictor[0 * stride + 0] = (pp[4] + pp[5] + 1) >> 1;
predictor[3 * stride + 2] =
predictor[1 * stride + 1] = (pp[4] + pp[5] * 2 + pp[6] + 2) >> 2;
predictor[2 * stride + 2] =
predictor[0 * stride + 1] = (pp[5] + pp[6] + 1) >> 1;
predictor[3 * stride + 3] =
predictor[1 * stride + 2] = (pp[5] + pp[6] * 2 + pp[7] + 2) >> 2;
predictor[2 * stride + 3] =
predictor[0 * stride + 2] = (pp[6] + pp[7] + 1) >> 1;
predictor[1 * stride + 3] = (pp[6] + pp[7] * 2 + pp[8] + 2) >> 2;
predictor[0 * stride + 3] = (pp[7] + pp[8] + 1) >> 1;
}
break;
case B_VL_PRED:
{
unsigned char *pp = Above;
predictor[0 * stride + 0] = (pp[0] + pp[1] + 1) >> 1;
predictor[1 * stride + 0] = (pp[0] + pp[1] * 2 + pp[2] + 2) >> 2;
predictor[2 * stride + 0] =
predictor[0 * stride + 1] = (pp[1] + pp[2] + 1) >> 1;
predictor[1 * stride + 1] =
predictor[3 * stride + 0] = (pp[1] + pp[2] * 2 + pp[3] + 2) >> 2;
predictor[2 * stride + 1] =
predictor[0 * stride + 2] = (pp[2] + pp[3] + 1) >> 1;
predictor[3 * stride + 1] =
predictor[1 * stride + 2] = (pp[2] + pp[3] * 2 + pp[4] + 2) >> 2;
predictor[0 * stride + 3] =
predictor[2 * stride + 2] = (pp[3] + pp[4] + 1) >> 1;
predictor[1 * stride + 3] =
predictor[3 * stride + 2] = (pp[3] + pp[4] * 2 + pp[5] + 2) >> 2;
predictor[2 * stride + 3] = (pp[4] + pp[5] * 2 + pp[6] + 2) >> 2;
predictor[3 * stride + 3] = (pp[5] + pp[6] * 2 + pp[7] + 2) >> 2;
}
break;
case B_HD_PRED:
{
unsigned char pp[9];
pp[0] = Left[3];
pp[1] = Left[2];
pp[2] = Left[1];
pp[3] = Left[0];
pp[4] = top_left;
pp[5] = Above[0];
pp[6] = Above[1];
pp[7] = Above[2];
pp[8] = Above[3];
predictor[3 * stride + 0] = (pp[0] + pp[1] + 1) >> 1;
predictor[3 * stride + 1] = (pp[0] + pp[1] * 2 + pp[2] + 2) >> 2;
predictor[2 * stride + 0] =
predictor[3 * stride + 2] = (pp[1] + pp[2] + 1) >> 1;
predictor[2 * stride + 1] =
predictor[3 * stride + 3] = (pp[1] + pp[2] * 2 + pp[3] + 2) >> 2;
predictor[2 * stride + 2] =
predictor[1 * stride + 0] = (pp[2] + pp[3] + 1) >> 1;
predictor[2 * stride + 3] =
predictor[1 * stride + 1] = (pp[2] + pp[3] * 2 + pp[4] + 2) >> 2;
predictor[1 * stride + 2] =
predictor[0 * stride + 0] = (pp[3] + pp[4] + 1) >> 1;
predictor[1 * stride + 3] =
predictor[0 * stride + 1] = (pp[3] + pp[4] * 2 + pp[5] + 2) >> 2;
predictor[0 * stride + 2] = (pp[4] + pp[5] * 2 + pp[6] + 2) >> 2;
predictor[0 * stride + 3] = (pp[5] + pp[6] * 2 + pp[7] + 2) >> 2;
}
break;
case B_HU_PRED:
{
unsigned char *pp = Left;
predictor[0 * stride + 0] = (pp[0] + pp[1] + 1) >> 1;
predictor[0 * stride + 1] = (pp[0] + pp[1] * 2 + pp[2] + 2) >> 2;
predictor[0 * stride + 2] =
predictor[1 * stride + 0] = (pp[1] + pp[2] + 1) >> 1;
predictor[0 * stride + 3] =
predictor[1 * stride + 1] = (pp[1] + pp[2] * 2 + pp[3] + 2) >> 2;
predictor[1 * stride + 2] =
predictor[2 * stride + 0] = (pp[2] + pp[3] + 1) >> 1;
predictor[1 * stride + 3] =
predictor[2 * stride + 1] = (pp[2] + pp[3] * 2 + pp[3] + 2) >> 2;
predictor[2 * stride + 2] =
predictor[2 * stride + 3] =
predictor[3 * stride + 0] =
predictor[3 * stride + 1] =
predictor[3 * stride + 2] =
predictor[3 * stride + 3] = pp[3];
}
break;
}
}
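The directional 4x4 modes above (B_VE_PRED, B_LD_PRED, B_RD_PRED, and so on) derive every output pixel from a 3-tap (1, 2, 1)/4 filter over neighboring edge pixels. A standalone sketch of the filter and the B_VE_PRED fill it drives (`avg3` and `b_ve_pred` are illustrative names, not libvpx symbols):

```c
#include <assert.h>

/* 3-tap smoothing filter used by the directional 4x4 modes:
 * (p0 + 2*p1 + p2 + 2) >> 2, i.e. a rounded (1,2,1)/4 average. */
static unsigned char avg3(unsigned char p0, unsigned char p1,
                          unsigned char p2)
{
    return (unsigned char)((p0 + 2 * p1 + p2 + 2) >> 2);
}

/* B_VE_PRED fills each of the 4 columns with one filtered value,
 * needing top_left plus 5 above pixels, as in the case above. */
static void b_ve_pred(unsigned char *dst, int stride,
                      unsigned char top_left,
                      const unsigned char above[5])
{
    unsigned char ap[4];
    int r, c;

    ap[0] = avg3(top_left, above[0], above[1]);
    ap[1] = avg3(above[0], above[1], above[2]);
    ap[2] = avg3(above[1], above[2], above[3]);
    ap[3] = avg3(above[2], above[3], above[4]);

    for (r = 0; r < 4; r++, dst += stride)
        for (c = 0; c < 4; c++)
            dst[c] = ap[c];
}
```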
/* Copy 4 bytes from the above-right row downward so that the 4x4
 * prediction modes that use pixels above and to the right always have
 * filled-in pixels to use.
 */
void vp8mt_intra_prediction_down_copy(VP8D_COMP *pbi, MACROBLOCKD *x, int mb_row, int mb_col)
{
unsigned char *above_right; /* = *(x->block[0].base_dst) + x->block[0].dst - x->block[0].dst_stride + 16; */
unsigned int *src_ptr;
unsigned int *dst_ptr0;
unsigned int *dst_ptr1;
unsigned int *dst_ptr2;
int dst_stride = x->dst.y_stride;
unsigned char *base_dst = x->dst.y_buffer;
if (pbi->common.filter_level)
above_right = pbi->mt_yabove_row[mb_row] + mb_col*16 + 32 +16;
else
above_right = base_dst + x->block[0].offset - dst_stride + 16;
src_ptr = (unsigned int *)above_right;
/*dst_ptr0 = (unsigned int *)(above_right + 4 * x->block[0].dst_stride);
dst_ptr1 = (unsigned int *)(above_right + 8 * x->block[0].dst_stride);
dst_ptr2 = (unsigned int *)(above_right + 12 * x->block[0].dst_stride);*/
dst_ptr0 = (unsigned int *)(base_dst + x->block[0].offset + 16 + 3 * dst_stride);
dst_ptr1 = (unsigned int *)(base_dst + x->block[0].offset + 16 + 7 * dst_stride);
dst_ptr2 = (unsigned int *)(base_dst + x->block[0].offset + 16 + 11 * dst_stride);
*dst_ptr0 = *src_ptr;
*dst_ptr1 = *src_ptr;
*dst_ptr2 = *src_ptr;
}
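Relative to `above_right`, the three destination rows in the function above sit 4, 8, and 12 strides below the source row (offset + 3/7/11 strides, one row below the above-right row itself). A hedged sketch of the same copy, using `memcpy` instead of the aliased `unsigned int` stores in the original, which assume 4-byte alignment (`intra_pred_down_copy` is an illustrative name):

```c
#include <assert.h>
#include <string.h>

/* Replicate the 4 above-right pixels onto rows 4, 8 and 12 strides
 * below them (rows 3, 7, 11 of the block's right edge), so subblock
 * modes reading "above-right" always find valid data. */
static void intra_pred_down_copy(unsigned char *above_right, int stride)
{
    memcpy(above_right + 4 * stride,  above_right, 4);
    memcpy(above_right + 8 * stride,  above_right, 4);
    memcpy(above_right + 12 * stride, above_right, 4);
}
```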


@@ -1,26 +0,0 @@
/*
* Copyright (c) 2010 The WebM project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef __INC_RECONINTRA_MT_H
#define __INC_RECONINTRA_MT_H
/* reconintra functions used in multi-threaded decoder */
#if CONFIG_MULTITHREAD
extern void vp8mt_build_intra_predictors_mby(VP8D_COMP *pbi, MACROBLOCKD *x, int mb_row, int mb_col);
extern void vp8mt_build_intra_predictors_mby_s(VP8D_COMP *pbi, MACROBLOCKD *x, int mb_row, int mb_col);
extern void vp8mt_build_intra_predictors_mbuv(VP8D_COMP *pbi, MACROBLOCKD *x, int mb_row, int mb_col);
extern void vp8mt_build_intra_predictors_mbuv_s(VP8D_COMP *pbi, MACROBLOCKD *x, int mb_row, int mb_col);
extern void vp8mt_predict_intra4x4(VP8D_COMP *pbi, MACROBLOCKD *x, int b_mode, unsigned char *predictor, int stride, int mb_row, int mb_col, int num);
extern void vp8mt_intra_prediction_down_copy(VP8D_COMP *pbi, MACROBLOCKD *x, int mb_row, int mb_col);
#endif
#endif

File diff suppressed because it is too large


@@ -47,7 +47,6 @@
mvn r2, #23
str r12, [r0, #vp8_writer_lowvalue]
str r3, [r0, #vp8_writer_range]
str r12, [r0, #vp8_writer_value]
str r2, [r0, #vp8_writer_count]
str r12, [r0, #vp8_writer_pos]
str r1, [r0, #vp8_writer_buffer]


@@ -90,7 +90,6 @@ numparts_loop
mov r5, #255 ; vp8_writer_range
mvn r3, #23 ; vp8_writer_count
str r2, [r0, #vp8_writer_value]
str r2, [r0, #vp8_writer_pos]
str r10, [r0, #vp8_writer_buffer]


@@ -98,7 +98,7 @@
vmul.s16 q2, q6, q4 ; x * Dequant
vmul.s16 q3, q7, q5
ldr r0, _inv_zig_zag_ ; load ptr of inverse zigzag table
adr r0, inv_zig_zag ; load ptr of inverse zigzag table
vceq.s16 q8, q8 ; set q8 to all 1
@@ -181,7 +181,7 @@
vadd.s16 q12, q14 ; x + Round
vadd.s16 q13, q15
ldr r0, _inv_zig_zag_ ; load ptr of inverse zigzag table
adr r0, inv_zig_zag ; load ptr of inverse zigzag table
vqdmulh.s16 q12, q8 ; y = ((Round+abs(z)) * Quant) >> 16
vqdmulh.s16 q13, q9
@@ -247,9 +247,6 @@ zero_output
ENDP
; default inverse zigzag table is defined in vp8/common/entropy.c
_inv_zig_zag_
DCD inv_zig_zag
ALIGN 16 ; enable use of @128 bit aligned loads
inv_zig_zag
DCW 0x0001, 0x0002, 0x0006, 0x0007


@@ -22,11 +22,9 @@ void vp8_yv12_copy_partial_frame_neon(YV12_BUFFER_CONFIG *src_ybc,
unsigned char *src_y, *dst_y;
int yheight;
int ystride;
int border;
int yoffset;
int linestocopy;
border = src_ybc->border;
yheight = src_ybc->y_height;
ystride = src_ybc->y_stride;


@@ -45,7 +45,6 @@ DEFINE(vp8_blockd_predictor, offsetof(BLOCKD, predictor));
/* pack tokens */
DEFINE(vp8_writer_lowvalue, offsetof(vp8_writer, lowvalue));
DEFINE(vp8_writer_range, offsetof(vp8_writer, range));
DEFINE(vp8_writer_value, offsetof(vp8_writer, value));
DEFINE(vp8_writer_count, offsetof(vp8_writer, count));
DEFINE(vp8_writer_pos, offsetof(vp8_writer, pos));
DEFINE(vp8_writer_buffer, offsetof(vp8_writer, buffer));


@@ -24,6 +24,7 @@
#include "bitstream.h"
#include "defaultcoefcounts.h"
#include "vp8/common/common.h"
const int vp8cx_base_skip_false_prob[128] =
{
@@ -159,9 +160,9 @@ static void write_split(vp8_writer *bc, int x)
);
}
static void pack_tokens_c(vp8_writer *w, const TOKENEXTRA *p, int xcount)
void vp8_pack_tokens_c(vp8_writer *w, const TOKENEXTRA *p, int xcount)
{
const TOKENEXTRA *const stop = p + xcount;
const TOKENEXTRA *stop = p + xcount;
unsigned int split;
unsigned int shift;
int count = w->count;
@@ -171,8 +172,8 @@ static void pack_tokens_c(vp8_writer *w, const TOKENEXTRA *p, int xcount)
while (p < stop)
{
const int t = p->Token;
vp8_token *const a = vp8_coef_encodings + t;
const vp8_extra_bit_struct *const b = vp8_extra_bits + t;
const vp8_token *a = vp8_coef_encodings + t;
const vp8_extra_bit_struct *b = vp8_extra_bits + t;
int i = 0;
const unsigned char *pp = p->context_tree;
int v = a->value;
@@ -382,219 +383,23 @@ static void pack_tokens_into_partitions_c(VP8_COMP *cpi, unsigned char *cx_data,
int i;
unsigned char *ptr = cx_data;
unsigned char *ptr_end = cx_data_end;
unsigned int shift;
vp8_writer *w;
ptr = cx_data;
vp8_writer * w;
for (i = 0; i < num_part; i++)
{
int mb_row;
w = cpi->bc + i + 1;
vp8_start_encode(w, ptr, ptr_end);
for (mb_row = i; mb_row < cpi->common.mb_rows; mb_row += num_part)
{
unsigned int split;
int count = w->count;
unsigned int range = w->range;
unsigned int lowvalue = w->lowvalue;
int mb_row;
for (mb_row = i; mb_row < cpi->common.mb_rows; mb_row += num_part)
{
TOKENEXTRA *p = cpi->tplist[mb_row].start;
TOKENEXTRA *stop = cpi->tplist[mb_row].stop;
while (p < stop)
{
const int t = p->Token;
vp8_token *const a = vp8_coef_encodings + t;
const vp8_extra_bit_struct *const b = vp8_extra_bits + t;
int i = 0;
const unsigned char *pp = p->context_tree;
int v = a->value;
int n = a->Len;
if (p->skip_eob_node)
{
n--;
i = 2;
}
do
{
const int bb = (v >> --n) & 1;
split = 1 + (((range - 1) * pp[i>>1]) >> 8);
i = vp8_coef_tree[i+bb];
if (bb)
{
lowvalue += split;
range = range - split;
}
else
{
range = split;
}
shift = vp8_norm[range];
range <<= shift;
count += shift;
if (count >= 0)
{
int offset = shift - count;
if ((lowvalue << (offset - 1)) & 0x80000000)
{
int x = w->pos - 1;
while (x >= 0 && w->buffer[x] == 0xff)
{
w->buffer[x] = (unsigned char)0;
x--;
}
w->buffer[x] += 1;
}
validate_buffer(w->buffer + w->pos,
1,
cx_data_end,
&cpi->common.error);
w->buffer[w->pos++] = (lowvalue >> (24 - offset));
lowvalue <<= offset;
shift = count;
lowvalue &= 0xffffff;
count -= 8 ;
}
lowvalue <<= shift;
}
while (n);
if (b->base_val)
{
const int e = p->Extra, L = b->Len;
if (L)
{
const unsigned char *pp = b->prob;
int v = e >> 1;
int n = L; /* number of bits in v, assumed nonzero */
int i = 0;
do
{
const int bb = (v >> --n) & 1;
split = 1 + (((range - 1) * pp[i>>1]) >> 8);
i = b->tree[i+bb];
if (bb)
{
lowvalue += split;
range = range - split;
}
else
{
range = split;
}
shift = vp8_norm[range];
range <<= shift;
count += shift;
if (count >= 0)
{
int offset = shift - count;
if ((lowvalue << (offset - 1)) & 0x80000000)
{
int x = w->pos - 1;
while (x >= 0 && w->buffer[x] == 0xff)
{
w->buffer[x] = (unsigned char)0;
x--;
}
w->buffer[x] += 1;
}
validate_buffer(w->buffer + w->pos,
1,
cx_data_end,
&cpi->common.error);
w->buffer[w->pos++] =
(lowvalue >> (24 - offset));
lowvalue <<= offset;
shift = count;
lowvalue &= 0xffffff;
count -= 8 ;
}
lowvalue <<= shift;
}
while (n);
}
{
split = (range + 1) >> 1;
if (e & 1)
{
lowvalue += split;
range = range - split;
}
else
{
range = split;
}
range <<= 1;
if ((lowvalue & 0x80000000))
{
int x = w->pos - 1;
while (x >= 0 && w->buffer[x] == 0xff)
{
w->buffer[x] = (unsigned char)0;
x--;
}
w->buffer[x] += 1;
}
lowvalue <<= 1;
if (!++count)
{
count = -8;
validate_buffer(w->buffer + w->pos,
1,
cx_data_end,
&cpi->common.error);
w->buffer[w->pos++] = (lowvalue >> 24);
lowvalue &= 0xffffff;
}
}
}
++p;
}
}
w->count = count;
w->lowvalue = lowvalue;
w->range = range;
const TOKENEXTRA *p = cpi->tplist[mb_row].start;
const TOKENEXTRA *stop = cpi->tplist[mb_row].stop;
int tokens = stop - p;
vp8_pack_tokens_c(w, p, tokens);
}
vp8_stop_encode(w);
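The large inlined loop this hunk removes (and replaces with calls to `vp8_pack_tokens_c`) repeatedly performed the same two core boolean-coder steps: split the current range in proportion to an 8-bit probability, then renormalize the range back into [128, 255]. A standalone sketch of those two steps, with a loop standing in for the `vp8_norm` lookup table (`bool_split` and `renorm_shift` are illustrative names):

```c
#include <assert.h>

/* Interval split: 1 + ((range - 1) * prob) >> 8, as used throughout
 * the removed inline coder. prob is an 8-bit probability of a 0 bit. */
static unsigned int bool_split(unsigned int range, unsigned char prob)
{
    return 1 + (((range - 1) * prob) >> 8);
}

/* Renormalization shift: how many doublings bring range back into
 * [128, 255]. libvpx reads this from the vp8_norm table instead. */
static int renorm_shift(unsigned int range)
{
    int shift = 0;
    while (range < 128) {
        range <<= 1;
        shift++;
    }
    return shift;
}
```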
@@ -605,209 +410,17 @@ static void pack_tokens_into_partitions_c(VP8_COMP *cpi, unsigned char *cx_data,
static void pack_mb_row_tokens_c(VP8_COMP *cpi, vp8_writer *w)
{
unsigned int split;
int count = w->count;
unsigned int range = w->range;
unsigned int lowvalue = w->lowvalue;
unsigned int shift;
int mb_row;
for (mb_row = 0; mb_row < cpi->common.mb_rows; mb_row++)
{
TOKENEXTRA *p = cpi->tplist[mb_row].start;
TOKENEXTRA *stop = cpi->tplist[mb_row].stop;
const TOKENEXTRA *p = cpi->tplist[mb_row].start;
const TOKENEXTRA *stop = cpi->tplist[mb_row].stop;
int tokens = stop - p;
while (p < stop)
{
const int t = p->Token;
vp8_token *const a = vp8_coef_encodings + t;
const vp8_extra_bit_struct *const b = vp8_extra_bits + t;
int i = 0;
const unsigned char *pp = p->context_tree;
int v = a->value;
int n = a->Len;
if (p->skip_eob_node)
{
n--;
i = 2;
}
do
{
const int bb = (v >> --n) & 1;
split = 1 + (((range - 1) * pp[i>>1]) >> 8);
i = vp8_coef_tree[i+bb];
if (bb)
{
lowvalue += split;
range = range - split;
}
else
{
range = split;
}
shift = vp8_norm[range];
range <<= shift;
count += shift;
if (count >= 0)
{
int offset = shift - count;
if ((lowvalue << (offset - 1)) & 0x80000000)
{
int x = w->pos - 1;
while (x >= 0 && w->buffer[x] == 0xff)
{
w->buffer[x] = (unsigned char)0;
x--;
}
w->buffer[x] += 1;
}
validate_buffer(w->buffer + w->pos,
1,
w->buffer_end,
w->error);
w->buffer[w->pos++] = (lowvalue >> (24 - offset));
lowvalue <<= offset;
shift = count;
lowvalue &= 0xffffff;
count -= 8 ;
}
lowvalue <<= shift;
}
while (n);
if (b->base_val)
{
const int e = p->Extra, L = b->Len;
if (L)
{
const unsigned char *pp = b->prob;
int v = e >> 1;
int n = L; /* number of bits in v, assumed nonzero */
int i = 0;
do
{
const int bb = (v >> --n) & 1;
split = 1 + (((range - 1) * pp[i>>1]) >> 8);
i = b->tree[i+bb];
if (bb)
{
lowvalue += split;
range = range - split;
}
else
{
range = split;
}
shift = vp8_norm[range];
range <<= shift;
count += shift;
if (count >= 0)
{
int offset = shift - count;
if ((lowvalue << (offset - 1)) & 0x80000000)
{
int x = w->pos - 1;
while (x >= 0 && w->buffer[x] == 0xff)
{
w->buffer[x] = (unsigned char)0;
x--;
}
w->buffer[x] += 1;
}
validate_buffer(w->buffer + w->pos,
1,
w->buffer_end,
w->error);
w->buffer[w->pos++] = (lowvalue >> (24 - offset));
lowvalue <<= offset;
shift = count;
lowvalue &= 0xffffff;
count -= 8 ;
}
lowvalue <<= shift;
}
while (n);
}
{
split = (range + 1) >> 1;
if (e & 1)
{
lowvalue += split;
range = range - split;
}
else
{
range = split;
}
range <<= 1;
if ((lowvalue & 0x80000000))
{
int x = w->pos - 1;
while (x >= 0 && w->buffer[x] == 0xff)
{
w->buffer[x] = (unsigned char)0;
x--;
}
w->buffer[x] += 1;
}
lowvalue <<= 1;
if (!++count)
{
count = -8;
validate_buffer(w->buffer + w->pos,
1,
w->buffer_end,
w->error);
w->buffer[w->pos++] = (lowvalue >> 24);
lowvalue &= 0xffffff;
}
}
}
++p;
}
vp8_pack_tokens_c(w, p, tokens);
}
w->count = count;
w->lowvalue = lowvalue;
w->range = range;
}
static void write_mv_ref
@@ -908,12 +521,11 @@ static void pack_inter_mode_mvs(VP8_COMP *const cpi)
const MV_CONTEXT *mvc = pc->fc.mvc;
MODE_INFO *m = pc->mi, *ms;
MODE_INFO *m = pc->mi;
const int mis = pc->mode_info_stride;
int mb_row = -1;
int prob_skip_false = 0;
ms = pc->mi - 1;
cpi->mb.partition_info = cpi->mb.pi;
@@ -925,7 +537,9 @@ static void pack_inter_mode_mvs(VP8_COMP *const cpi)
if (pc->mb_no_coeff_skip)
{
prob_skip_false = cpi->skip_false_count * 256 / (cpi->skip_false_count + cpi->skip_true_count);
int total_mbs = pc->mb_rows * pc->mb_cols;
prob_skip_false = (total_mbs - cpi->skip_true_count ) * 256 / total_mbs;
if (prob_skip_false <= 1)
prob_skip_false = 1;
@@ -1112,7 +726,9 @@ static void write_kfmodes(VP8_COMP *cpi)
if (c->mb_no_coeff_skip)
{
prob_skip_false = cpi->skip_false_count * 256 / (cpi->skip_false_count + cpi->skip_true_count);
int total_mbs = c->mb_rows * c->mb_cols;
prob_skip_false = (total_mbs - cpi->skip_true_count ) * 256 / total_mbs;
if (prob_skip_false <= 1)
prob_skip_false = 1;
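The hunk above switches `prob_skip_false` from separate true/false counters to a derivation from the total macroblock count, with a lower clamp of 1. A hedged sketch of the new estimate (the helper name is illustrative; the upper clamp to 255 is added here for safety and is not shown in this hunk):

```c
#include <assert.h>

/* Probability (x/256) that a macroblock is NOT coded as skipped,
 * derived from the total MB count as in the hunk above. */
static int calc_prob_skip_false(int mb_rows, int mb_cols,
                                int skip_true_count)
{
    int total_mbs = mb_rows * mb_cols;
    int prob = (total_mbs - skip_true_count) * 256 / total_mbs;

    if (prob <= 1)
        prob = 1;    /* lower clamp, as in the hunk */
    if (prob > 255)
        prob = 255;  /* keep within an 8-bit probability (assumption) */
    return prob;
}
```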
@@ -1167,6 +783,7 @@ static void write_kfmodes(VP8_COMP *cpi)
}
}
#if 0
/* This function is used for debugging probability trees. */
static void print_prob_tree(vp8_prob
coef_probs[BLOCK_TYPES][COEF_BANDS][PREV_COEF_CONTEXTS][ENTROPY_NODES])
@@ -1198,6 +815,7 @@ static void print_prob_tree(vp8_prob
fprintf(f, "}\n");
fclose(f);
}
#endif
static void sum_probs_over_prev_coef_context(
const unsigned int probs[PREV_COEF_CONTEXTS][MAX_ENTROPY_TOKENS],
@@ -1327,7 +945,6 @@ static int default_coef_context_savings(VP8_COMP *cpi)
int t = 0; /* token/prob index */
vp8_tree_probs_from_distribution(
MAX_ENTROPY_TOKENS, vp8_coef_encodings, vp8_coef_tree,
cpi->frame_coef_probs [i][j][k],
@@ -1432,10 +1049,33 @@ int vp8_estimate_entropy_savings(VP8_COMP *cpi)
return savings;
}
static void update_coef_probs(VP8_COMP *cpi)
#if CONFIG_REALTIME_ONLY & CONFIG_ONTHEFLY_BITPACKING
int vp8_update_coef_context(VP8_COMP *cpi)
{
int savings = 0;
if (cpi->common.frame_type == KEY_FRAME)
{
/* Reset to default counts/probabilities at key frames */
vp8_copy(cpi->coef_counts, default_coef_counts);
}
if (cpi->oxcf.error_resilient_mode & VPX_ERROR_RESILIENT_PARTITIONS)
savings += independent_coef_context_savings(cpi);
else
savings += default_coef_context_savings(cpi);
return savings;
}
#endif
void vp8_update_coef_probs(VP8_COMP *cpi)
{
int i = 0;
#if !(CONFIG_REALTIME_ONLY & CONFIG_ONTHEFLY_BITPACKING)
vp8_writer *const w = cpi->bc;
#endif
int savings = 0;
vp8_clear_system_state(); //__asm emms;
@@ -1515,7 +1155,11 @@ static void update_coef_probs(VP8_COMP *cpi)
cpi->common.frame_type == KEY_FRAME && newp != *Pold)
u = 1;
#if CONFIG_REALTIME_ONLY & CONFIG_ONTHEFLY_BITPACKING
cpi->update_probs[i][j][k][t] = u;
#else
vp8_write(w, u, upd);
#endif
#ifdef ENTROPY_STATS
@@ -1527,7 +1171,9 @@ static void update_coef_probs(VP8_COMP *cpi)
/* send/use new probability */
*Pold = newp;
#if !(CONFIG_REALTIME_ONLY & CONFIG_ONTHEFLY_BITPACKING)
vp8_write_literal(w, newp, 8);
#endif
savings += s;
@@ -1556,6 +1202,50 @@ static void update_coef_probs(VP8_COMP *cpi)
while (++i < BLOCK_TYPES);
}
#if CONFIG_REALTIME_ONLY & CONFIG_ONTHEFLY_BITPACKING
static void pack_coef_probs(VP8_COMP *cpi)
{
int i = 0;
vp8_writer *const w = cpi->bc;
do
{
int j = 0;
do
{
int k = 0;
do
{
int t = 0; /* token/prob index */
do
{
const vp8_prob newp = cpi->common.fc.coef_probs [i][j][k][t];
const vp8_prob upd = vp8_coef_update_probs [i][j][k][t];
const char u = cpi->update_probs[i][j][k][t] ;
vp8_write(w, u, upd);
if (u)
{
/* send/use new probability */
vp8_write_literal(w, newp, 8);
}
}
while (++t < ENTROPY_NODES);
}
while (++k < PREV_COEF_CONTEXTS);
}
while (++j < COEF_BANDS);
}
while (++i < BLOCK_TYPES);
}
#endif
#ifdef PACKET_TESTING
FILE *vpxlogc = 0;
#endif
@@ -1818,6 +1508,7 @@ void vp8_pack_bitstream(VP8_COMP *cpi, unsigned char *dest, unsigned char * dest
vp8_write_bit(bc, pc->ref_frame_sign_bias[ALTREF_FRAME]);
}
#if !(CONFIG_REALTIME_ONLY & CONFIG_ONTHEFLY_BITPACKING)
if (cpi->oxcf.error_resilient_mode & VPX_ERROR_RESILIENT_PARTITIONS)
{
if (pc->frame_type == KEY_FRAME)
@@ -1825,6 +1516,7 @@ void vp8_pack_bitstream(VP8_COMP *cpi, unsigned char *dest, unsigned char * dest
else
pc->refresh_entropy_probs = 0;
}
#endif
vp8_write_bit(bc, pc->refresh_entropy_probs);
@@ -1842,13 +1534,17 @@ void vp8_pack_bitstream(VP8_COMP *cpi, unsigned char *dest, unsigned char * dest
vp8_clear_system_state(); //__asm emms;
#if CONFIG_REALTIME_ONLY & CONFIG_ONTHEFLY_BITPACKING
pack_coef_probs(cpi);
#else
if (pc->refresh_entropy_probs == 0)
{
// save a copy for later refresh
vpx_memcpy(&cpi->common.lfc, &cpi->common.fc, sizeof(cpi->common.fc));
}
update_coef_probs(cpi);
vp8_update_coef_probs(cpi);
#endif
#ifdef ENTROPY_STATS
active_section = 2;
@@ -1896,6 +1592,45 @@ void vp8_pack_bitstream(VP8_COMP *cpi, unsigned char *dest, unsigned char * dest
cpi->partition_sz[0] = *size;
#if CONFIG_REALTIME_ONLY & CONFIG_ONTHEFLY_BITPACKING
{
const int num_part = (1 << pc->multi_token_partition);
unsigned char * dp = cpi->partition_d[0] + cpi->partition_sz[0];
if (num_part > 1)
{
/* write token part sizes (all but last) if more than 1 */
validate_buffer(dp, 3 * (num_part - 1), cpi->partition_d_end[0],
&pc->error);
cpi->partition_sz[0] += 3*(num_part-1);
for(i = 1; i < num_part; i++)
{
write_partition_size(dp, cpi->partition_sz[i]);
dp += 3;
}
}
if (!cpi->output_partition)
{
/* concatenate partition buffers */
for(i = 0; i < num_part; i++)
{
vpx_memmove(dp, cpi->partition_d[i+1], cpi->partition_sz[i+1]);
cpi->partition_d[i+1] = dp;
dp += cpi->partition_sz[i+1];
}
}
/* update total size */
*size = 0;
for(i = 0; i < num_part+1; i++)
{
*size += cpi->partition_sz[i];
}
}
#else
if (pc->multi_token_partition != ONE_PARTITION)
{
int num_part = 1 << pc->multi_token_partition;
@@ -1945,6 +1680,7 @@ void vp8_pack_bitstream(VP8_COMP *cpi, unsigned char *dest, unsigned char * dest
*size += cpi->bc[1].pos;
cpi->partition_sz[1] = cpi->bc[1].pos;
}
#endif
}
#ifdef ENTROPY_STATS

View File

@@ -14,19 +14,19 @@
#if HAVE_EDSP
void vp8cx_pack_tokens_armv5(vp8_writer *w, const TOKENEXTRA *p, int xcount,
vp8_token *,
vp8_extra_bit_struct *,
const vp8_token *,
const vp8_extra_bit_struct *,
const vp8_tree_index *);
void vp8cx_pack_tokens_into_partitions_armv5(VP8_COMP *,
unsigned char * cx_data,
const unsigned char *cx_data_end,
int num_parts,
vp8_token *,
vp8_extra_bit_struct *,
const vp8_token *,
const vp8_extra_bit_struct *,
const vp8_tree_index *);
void vp8cx_pack_mb_row_tokens_armv5(VP8_COMP *cpi, vp8_writer *w,
vp8_token *,
vp8_extra_bit_struct *,
const vp8_token *,
const vp8_extra_bit_struct *,
const vp8_tree_index *);
# define pack_tokens(a,b,c) \
vp8cx_pack_tokens_armv5(a,b,c,vp8_coef_encodings,vp8_extra_bits,vp8_coef_tree)
@@ -35,7 +35,10 @@ void vp8cx_pack_mb_row_tokens_armv5(VP8_COMP *cpi, vp8_writer *w,
# define pack_mb_row_tokens(a,b) \
vp8cx_pack_mb_row_tokens_armv5(a,b,vp8_coef_encodings,vp8_extra_bits,vp8_coef_tree)
#else
# define pack_tokens(a,b,c) pack_tokens_c(a,b,c)
void vp8_pack_tokens_c(vp8_writer *w, const TOKENEXTRA *p, int xcount);
# define pack_tokens(a,b,c) vp8_pack_tokens_c(a,b,c)
# define pack_tokens_into_partitions(a,b,c,d) pack_tokens_into_partitions_c(a,b,c,d)
# define pack_mb_row_tokens(a,b) pack_mb_row_tokens_c(a,b)
#endif

View File

@@ -45,7 +45,6 @@ void vp8_start_encode(BOOL_CODER *br, unsigned char *source, unsigned char *sour
br->lowvalue = 0;
br->range = 255;
br->value = 0;
br->count = -24;
br->buffer = source;
br->buffer_end = source_end;

View File

@@ -26,7 +26,6 @@ typedef struct
{
unsigned int lowvalue;
unsigned int range;
unsigned int value;
int count;
unsigned int pos;
unsigned char *buffer;

View File

@@ -0,0 +1,358 @@
/*
* Copyright (c) 2012 The WebM project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/* Generated file, included by tokenize.c */
/* Values generated by fill_value_tokens() */
static const short dct_value_cost[2048*2] =
{
8285, 8277, 8267, 8259, 8253, 8245, 8226, 8218, 8212, 8204, 8194, 8186,
8180, 8172, 8150, 8142, 8136, 8128, 8118, 8110, 8104, 8096, 8077, 8069,
8063, 8055, 8045, 8037, 8031, 8023, 7997, 7989, 7983, 7975, 7965, 7957,
7951, 7943, 7924, 7916, 7910, 7902, 7892, 7884, 7878, 7870, 7848, 7840,
7834, 7826, 7816, 7808, 7802, 7794, 7775, 7767, 7761, 7753, 7743, 7735,
7729, 7721, 7923, 7915, 7909, 7901, 7891, 7883, 7877, 7869, 7850, 7842,
7836, 7828, 7818, 7810, 7804, 7796, 7774, 7766, 7760, 7752, 7742, 7734,
7728, 7720, 7701, 7693, 7687, 7679, 7669, 7661, 7655, 7647, 7621, 7613,
7607, 7599, 7589, 7581, 7575, 7567, 7548, 7540, 7534, 7526, 7516, 7508,
7502, 7494, 7472, 7464, 7458, 7450, 7440, 7432, 7426, 7418, 7399, 7391,
7385, 7377, 7367, 7359, 7353, 7345, 7479, 7471, 7465, 7457, 7447, 7439,
7433, 7425, 7406, 7398, 7392, 7384, 7374, 7366, 7360, 7352, 7330, 7322,
7316, 7308, 7298, 7290, 7284, 7276, 7257, 7249, 7243, 7235, 7225, 7217,
7211, 7203, 7177, 7169, 7163, 7155, 7145, 7137, 7131, 7123, 7104, 7096,
7090, 7082, 7072, 7064, 7058, 7050, 7028, 7020, 7014, 7006, 6996, 6988,
6982, 6974, 6955, 6947, 6941, 6933, 6923, 6915, 6909, 6901, 7632, 7624,
7618, 7610, 7600, 7592, 7586, 7578, 7559, 7551, 7545, 7537, 7527, 7519,
7513, 7505, 7483, 7475, 7469, 7461, 7451, 7443, 7437, 7429, 7410, 7402,
7396, 7388, 7378, 7370, 7364, 7356, 7330, 7322, 7316, 7308, 7298, 7290,
7284, 7276, 7257, 7249, 7243, 7235, 7225, 7217, 7211, 7203, 7181, 7173,
7167, 7159, 7149, 7141, 7135, 7127, 7108, 7100, 7094, 7086, 7076, 7068,
7062, 7054, 7188, 7180, 7174, 7166, 7156, 7148, 7142, 7134, 7115, 7107,
7101, 7093, 7083, 7075, 7069, 7061, 7039, 7031, 7025, 7017, 7007, 6999,
6993, 6985, 6966, 6958, 6952, 6944, 6934, 6926, 6920, 6912, 6886, 6878,
6872, 6864, 6854, 6846, 6840, 6832, 6813, 6805, 6799, 6791, 6781, 6773,
6767, 6759, 6737, 6729, 6723, 6715, 6705, 6697, 6691, 6683, 6664, 6656,
6650, 6642, 6632, 6624, 6618, 6610, 6812, 6804, 6798, 6790, 6780, 6772,
6766, 6758, 6739, 6731, 6725, 6717, 6707, 6699, 6693, 6685, 6663, 6655,
6649, 6641, 6631, 6623, 6617, 6609, 6590, 6582, 6576, 6568, 6558, 6550,
6544, 6536, 6510, 6502, 6496, 6488, 6478, 6470, 6464, 6456, 6437, 6429,
6423, 6415, 6405, 6397, 6391, 6383, 6361, 6353, 6347, 6339, 6329, 6321,
6315, 6307, 6288, 6280, 6274, 6266, 6256, 6248, 6242, 6234, 6368, 6360,
6354, 6346, 6336, 6328, 6322, 6314, 6295, 6287, 6281, 6273, 6263, 6255,
6249, 6241, 6219, 6211, 6205, 6197, 6187, 6179, 6173, 6165, 6146, 6138,
6132, 6124, 6114, 6106, 6100, 6092, 6066, 6058, 6052, 6044, 6034, 6026,
6020, 6012, 5993, 5985, 5979, 5971, 5961, 5953, 5947, 5939, 5917, 5909,
5903, 5895, 5885, 5877, 5871, 5863, 5844, 5836, 5830, 5822, 5812, 5804,
5798, 5790, 6697, 6689, 6683, 6675, 6665, 6657, 6651, 6643, 6624, 6616,
6610, 6602, 6592, 6584, 6578, 6570, 6548, 6540, 6534, 6526, 6516, 6508,
6502, 6494, 6475, 6467, 6461, 6453, 6443, 6435, 6429, 6421, 6395, 6387,
6381, 6373, 6363, 6355, 6349, 6341, 6322, 6314, 6308, 6300, 6290, 6282,
6276, 6268, 6246, 6238, 6232, 6224, 6214, 6206, 6200, 6192, 6173, 6165,
6159, 6151, 6141, 6133, 6127, 6119, 6253, 6245, 6239, 6231, 6221, 6213,
6207, 6199, 6180, 6172, 6166, 6158, 6148, 6140, 6134, 6126, 6104, 6096,
6090, 6082, 6072, 6064, 6058, 6050, 6031, 6023, 6017, 6009, 5999, 5991,
5985, 5977, 5951, 5943, 5937, 5929, 5919, 5911, 5905, 5897, 5878, 5870,
5864, 5856, 5846, 5838, 5832, 5824, 5802, 5794, 5788, 5780, 5770, 5762,
5756, 5748, 5729, 5721, 5715, 5707, 5697, 5689, 5683, 5675, 5877, 5869,
5863, 5855, 5845, 5837, 5831, 5823, 5804, 5796, 5790, 5782, 5772, 5764,
5758, 5750, 5728, 5720, 5714, 5706, 5696, 5688, 5682, 5674, 5655, 5647,
5641, 5633, 5623, 5615, 5609, 5601, 5575, 5567, 5561, 5553, 5543, 5535,
5529, 5521, 5502, 5494, 5488, 5480, 5470, 5462, 5456, 5448, 5426, 5418,
5412, 5404, 5394, 5386, 5380, 5372, 5353, 5345, 5339, 5331, 5321, 5313,
5307, 5299, 5433, 5425, 5419, 5411, 5401, 5393, 5387, 5379, 5360, 5352,
5346, 5338, 5328, 5320, 5314, 5306, 5284, 5276, 5270, 5262, 5252, 5244,
5238, 5230, 5211, 5203, 5197, 5189, 5179, 5171, 5165, 5157, 5131, 5123,
5117, 5109, 5099, 5091, 5085, 5077, 5058, 5050, 5044, 5036, 5026, 5018,
5012, 5004, 4982, 4974, 4968, 4960, 4950, 4942, 4936, 4928, 4909, 4901,
4895, 4887, 4877, 4869, 4863, 4855, 5586, 5578, 5572, 5564, 5554, 5546,
5540, 5532, 5513, 5505, 5499, 5491, 5481, 5473, 5467, 5459, 5437, 5429,
5423, 5415, 5405, 5397, 5391, 5383, 5364, 5356, 5350, 5342, 5332, 5324,
5318, 5310, 5284, 5276, 5270, 5262, 5252, 5244, 5238, 5230, 5211, 5203,
5197, 5189, 5179, 5171, 5165, 5157, 5135, 5127, 5121, 5113, 5103, 5095,
5089, 5081, 5062, 5054, 5048, 5040, 5030, 5022, 5016, 5008, 5142, 5134,
5128, 5120, 5110, 5102, 5096, 5088, 5069, 5061, 5055, 5047, 5037, 5029,
5023, 5015, 4993, 4985, 4979, 4971, 4961, 4953, 4947, 4939, 4920, 4912,
4906, 4898, 4888, 4880, 4874, 4866, 4840, 4832, 4826, 4818, 4808, 4800,
4794, 4786, 4767, 4759, 4753, 4745, 4735, 4727, 4721, 4713, 4691, 4683,
4677, 4669, 4659, 4651, 4645, 4637, 4618, 4610, 4604, 4596, 4586, 4578,
4572, 4564, 4766, 4758, 4752, 4744, 4734, 4726, 4720, 4712, 4693, 4685,
4679, 4671, 4661, 4653, 4647, 4639, 4617, 4609, 4603, 4595, 4585, 4577,
4571, 4563, 4544, 4536, 4530, 4522, 4512, 4504, 4498, 4490, 4464, 4456,
4450, 4442, 4432, 4424, 4418, 4410, 4391, 4383, 4377, 4369, 4359, 4351,
4345, 4337, 4315, 4307, 4301, 4293, 4283, 4275, 4269, 4261, 4242, 4234,
4228, 4220, 4210, 4202, 4196, 4188, 4322, 4314, 4308, 4300, 4290, 4282,
4276, 4268, 4249, 4241, 4235, 4227, 4217, 4209, 4203, 4195, 4173, 4165,
4159, 4151, 4141, 4133, 4127, 4119, 4100, 4092, 4086, 4078, 4068, 4060,
4054, 4046, 4020, 4012, 4006, 3998, 3988, 3980, 3974, 3966, 3947, 3939,
3933, 3925, 3915, 3907, 3901, 3893, 3871, 3863, 3857, 3849, 3839, 3831,
3825, 3817, 3798, 3790, 3784, 3776, 3766, 3758, 3752, 3744, 6697, 6689,
6683, 6675, 6665, 6657, 6651, 6643, 6624, 6616, 6610, 6602, 6592, 6584,
6578, 6570, 6548, 6540, 6534, 6526, 6516, 6508, 6502, 6494, 6475, 6467,
6461, 6453, 6443, 6435, 6429, 6421, 6395, 6387, 6381, 6373, 6363, 6355,
6349, 6341, 6322, 6314, 6308, 6300, 6290, 6282, 6276, 6268, 6246, 6238,
6232, 6224, 6214, 6206, 6200, 6192, 6173, 6165, 6159, 6151, 6141, 6133,
6127, 6119, 6253, 6245, 6239, 6231, 6221, 6213, 6207, 6199, 6180, 6172,
6166, 6158, 6148, 6140, 6134, 6126, 6104, 6096, 6090, 6082, 6072, 6064,
6058, 6050, 6031, 6023, 6017, 6009, 5999, 5991, 5985, 5977, 5951, 5943,
5937, 5929, 5919, 5911, 5905, 5897, 5878, 5870, 5864, 5856, 5846, 5838,
5832, 5824, 5802, 5794, 5788, 5780, 5770, 5762, 5756, 5748, 5729, 5721,
5715, 5707, 5697, 5689, 5683, 5675, 5877, 5869, 5863, 5855, 5845, 5837,
5831, 5823, 5804, 5796, 5790, 5782, 5772, 5764, 5758, 5750, 5728, 5720,
5714, 5706, 5696, 5688, 5682, 5674, 5655, 5647, 5641, 5633, 5623, 5615,
5609, 5601, 5575, 5567, 5561, 5553, 5543, 5535, 5529, 5521, 5502, 5494,
5488, 5480, 5470, 5462, 5456, 5448, 5426, 5418, 5412, 5404, 5394, 5386,
5380, 5372, 5353, 5345, 5339, 5331, 5321, 5313, 5307, 5299, 5433, 5425,
5419, 5411, 5401, 5393, 5387, 5379, 5360, 5352, 5346, 5338, 5328, 5320,
5314, 5306, 5284, 5276, 5270, 5262, 5252, 5244, 5238, 5230, 5211, 5203,
5197, 5189, 5179, 5171, 5165, 5157, 5131, 5123, 5117, 5109, 5099, 5091,
5085, 5077, 5058, 5050, 5044, 5036, 5026, 5018, 5012, 5004, 4982, 4974,
4968, 4960, 4950, 4942, 4936, 4928, 4909, 4901, 4895, 4887, 4877, 4869,
4863, 4855, 5586, 5578, 5572, 5564, 5554, 5546, 5540, 5532, 5513, 5505,
5499, 5491, 5481, 5473, 5467, 5459, 5437, 5429, 5423, 5415, 5405, 5397,
5391, 5383, 5364, 5356, 5350, 5342, 5332, 5324, 5318, 5310, 5284, 5276,
5270, 5262, 5252, 5244, 5238, 5230, 5211, 5203, 5197, 5189, 5179, 5171,
5165, 5157, 5135, 5127, 5121, 5113, 5103, 5095, 5089, 5081, 5062, 5054,
5048, 5040, 5030, 5022, 5016, 5008, 5142, 5134, 5128, 5120, 5110, 5102,
5096, 5088, 5069, 5061, 5055, 5047, 5037, 5029, 5023, 5015, 4993, 4985,
4979, 4971, 4961, 4953, 4947, 4939, 4920, 4912, 4906, 4898, 4888, 4880,
4874, 4866, 4840, 4832, 4826, 4818, 4808, 4800, 4794, 4786, 4767, 4759,
4753, 4745, 4735, 4727, 4721, 4713, 4691, 4683, 4677, 4669, 4659, 4651,
4645, 4637, 4618, 4610, 4604, 4596, 4586, 4578, 4572, 4564, 4766, 4758,
4752, 4744, 4734, 4726, 4720, 4712, 4693, 4685, 4679, 4671, 4661, 4653,
4647, 4639, 4617, 4609, 4603, 4595, 4585, 4577, 4571, 4563, 4544, 4536,
4530, 4522, 4512, 4504, 4498, 4490, 4464, 4456, 4450, 4442, 4432, 4424,
4418, 4410, 4391, 4383, 4377, 4369, 4359, 4351, 4345, 4337, 4315, 4307,
4301, 4293, 4283, 4275, 4269, 4261, 4242, 4234, 4228, 4220, 4210, 4202,
4196, 4188, 4322, 4314, 4308, 4300, 4290, 4282, 4276, 4268, 4249, 4241,
4235, 4227, 4217, 4209, 4203, 4195, 4173, 4165, 4159, 4151, 4141, 4133,
4127, 4119, 4100, 4092, 4086, 4078, 4068, 4060, 4054, 4046, 4020, 4012,
4006, 3998, 3988, 3980, 3974, 3966, 3947, 3939, 3933, 3925, 3915, 3907,
3901, 3893, 3871, 3863, 3857, 3849, 3839, 3831, 3825, 3817, 3798, 3790,
3784, 3776, 3766, 3758, 3752, 3744, 4651, 4643, 4637, 4629, 4619, 4611,
4605, 4597, 4578, 4570, 4564, 4556, 4546, 4538, 4532, 4524, 4502, 4494,
4488, 4480, 4470, 4462, 4456, 4448, 4429, 4421, 4415, 4407, 4397, 4389,
4383, 4375, 4349, 4341, 4335, 4327, 4317, 4309, 4303, 4295, 4276, 4268,
4262, 4254, 4244, 4236, 4230, 4222, 4200, 4192, 4186, 4178, 4168, 4160,
4154, 4146, 4127, 4119, 4113, 4105, 4095, 4087, 4081, 4073, 4207, 4199,
4193, 4185, 4175, 4167, 4161, 4153, 4134, 4126, 4120, 4112, 4102, 4094,
4088, 4080, 4058, 4050, 4044, 4036, 4026, 4018, 4012, 4004, 3985, 3977,
3971, 3963, 3953, 3945, 3939, 3931, 3905, 3897, 3891, 3883, 3873, 3865,
3859, 3851, 3832, 3824, 3818, 3810, 3800, 3792, 3786, 3778, 3756, 3748,
3742, 3734, 3724, 3716, 3710, 3702, 3683, 3675, 3669, 3661, 3651, 3643,
3637, 3629, 3831, 3823, 3817, 3809, 3799, 3791, 3785, 3777, 3758, 3750,
3744, 3736, 3726, 3718, 3712, 3704, 3682, 3674, 3668, 3660, 3650, 3642,
3636, 3628, 3609, 3601, 3595, 3587, 3577, 3569, 3563, 3555, 3529, 3521,
3515, 3507, 3497, 3489, 3483, 3475, 3456, 3448, 3442, 3434, 3424, 3416,
3410, 3402, 3380, 3372, 3366, 3358, 3348, 3340, 3334, 3326, 3307, 3299,
3293, 3285, 3275, 3267, 3261, 3253, 3387, 3379, 3373, 3365, 3355, 3347,
3341, 3333, 3314, 3306, 3300, 3292, 3282, 3274, 3268, 3260, 3238, 3230,
3224, 3216, 3206, 3198, 3192, 3184, 3165, 3157, 3151, 3143, 3133, 3125,
3119, 3111, 3085, 3077, 3071, 3063, 3053, 3045, 3039, 3031, 3012, 3004,
2998, 2990, 2980, 2972, 2966, 2958, 2936, 2928, 2922, 2914, 2904, 2896,
2890, 2882, 2863, 2855, 2849, 2841, 2831, 2823, 2817, 2809, 3540, 3532,
3526, 3518, 3508, 3500, 3494, 3486, 3467, 3459, 3453, 3445, 3435, 3427,
3421, 3413, 3391, 3383, 3377, 3369, 3359, 3351, 3345, 3337, 3318, 3310,
3304, 3296, 3286, 3278, 3272, 3264, 3238, 3230, 3224, 3216, 3206, 3198,
3192, 3184, 3165, 3157, 3151, 3143, 3133, 3125, 3119, 3111, 3089, 3081,
3075, 3067, 3057, 3049, 3043, 3035, 3016, 3008, 3002, 2994, 2984, 2976,
2970, 2962, 3096, 3088, 3082, 3074, 3064, 3056, 3050, 3042, 3023, 3015,
3009, 3001, 2991, 2983, 2977, 2969, 2947, 2939, 2933, 2925, 2915, 2907,
2901, 2893, 2874, 2866, 2860, 2852, 2842, 2834, 2828, 2820, 2794, 2786,
2780, 2772, 2762, 2754, 2748, 2740, 2721, 2713, 2707, 2699, 2689, 2681,
2675, 2667, 2645, 2637, 2631, 2623, 2613, 2605, 2599, 2591, 2572, 2564,
2558, 2550, 2540, 2532, 2526, 2518, 2720, 2712, 2706, 2698, 2688, 2680,
2674, 2666, 2647, 2639, 2633, 2625, 2615, 2607, 2601, 2593, 2571, 2563,
2557, 2549, 2539, 2531, 2525, 2517, 2498, 2490, 2484, 2476, 2466, 2458,
2452, 2444, 2418, 2410, 2404, 2396, 2386, 2378, 2372, 2364, 2345, 2337,
2331, 2323, 2313, 2305, 2299, 2291, 2269, 2261, 2255, 2247, 2237, 2229,
2223, 2215, 2196, 2188, 2182, 2174, 2164, 2156, 2150, 2142, 2276, 2268,
2262, 2254, 2244, 2236, 2230, 2222, 2203, 2195, 2189, 2181, 2171, 2163,
2157, 2149, 2127, 2119, 2113, 2105, 2095, 2087, 2081, 2073, 2054, 2046,
2040, 2032, 2022, 2014, 2008, 2000, 1974, 1966, 1960, 1952, 1942, 1934,
1928, 1920, 1901, 1893, 1887, 1879, 1869, 1861, 1855, 1847, 1825, 1817,
1811, 1803, 1793, 1785, 1779, 1771, 1752, 1744, 1738, 1730, 1720, 1712,
1706, 1698, 1897, 1883, 1860, 1846, 1819, 1805, 1782, 1768, 1723, 1709,
1686, 1672, 1645, 1631, 1608, 1594, 1574, 1560, 1537, 1523, 1496, 1482,
1459, 1445, 1400, 1386, 1363, 1349, 1322, 1308, 1285, 1271, 1608, 1565,
1535, 1492, 1446, 1403, 1373, 1330, 1312, 1269, 1239, 1196, 1150, 1107,
1077, 1034, 1291, 1218, 1171, 1098, 1015, 942, 895, 822, 953, 850,
729, 626, 618, 431, 257, 257, 257, 257, 0, 255, 255, 255,
255, 429, 616, 624, 727, 848, 951, 820, 893, 940, 1013, 1096,
1169, 1216, 1289, 1032, 1075, 1105, 1148, 1194, 1237, 1267, 1310, 1328,
1371, 1401, 1444, 1490, 1533, 1563, 1606, 1269, 1283, 1306, 1320, 1347,
1361, 1384, 1398, 1443, 1457, 1480, 1494, 1521, 1535, 1558, 1572, 1592,
1606, 1629, 1643, 1670, 1684, 1707, 1721, 1766, 1780, 1803, 1817, 1844,
1858, 1881, 1895, 1696, 1704, 1710, 1718, 1728, 1736, 1742, 1750, 1769,
1777, 1783, 1791, 1801, 1809, 1815, 1823, 1845, 1853, 1859, 1867, 1877,
1885, 1891, 1899, 1918, 1926, 1932, 1940, 1950, 1958, 1964, 1972, 1998,
2006, 2012, 2020, 2030, 2038, 2044, 2052, 2071, 2079, 2085, 2093, 2103,
2111, 2117, 2125, 2147, 2155, 2161, 2169, 2179, 2187, 2193, 2201, 2220,
2228, 2234, 2242, 2252, 2260, 2266, 2274, 2140, 2148, 2154, 2162, 2172,
2180, 2186, 2194, 2213, 2221, 2227, 2235, 2245, 2253, 2259, 2267, 2289,
2297, 2303, 2311, 2321, 2329, 2335, 2343, 2362, 2370, 2376, 2384, 2394,
2402, 2408, 2416, 2442, 2450, 2456, 2464, 2474, 2482, 2488, 2496, 2515,
2523, 2529, 2537, 2547, 2555, 2561, 2569, 2591, 2599, 2605, 2613, 2623,
2631, 2637, 2645, 2664, 2672, 2678, 2686, 2696, 2704, 2710, 2718, 2516,
2524, 2530, 2538, 2548, 2556, 2562, 2570, 2589, 2597, 2603, 2611, 2621,
2629, 2635, 2643, 2665, 2673, 2679, 2687, 2697, 2705, 2711, 2719, 2738,
2746, 2752, 2760, 2770, 2778, 2784, 2792, 2818, 2826, 2832, 2840, 2850,
2858, 2864, 2872, 2891, 2899, 2905, 2913, 2923, 2931, 2937, 2945, 2967,
2975, 2981, 2989, 2999, 3007, 3013, 3021, 3040, 3048, 3054, 3062, 3072,
3080, 3086, 3094, 2960, 2968, 2974, 2982, 2992, 3000, 3006, 3014, 3033,
3041, 3047, 3055, 3065, 3073, 3079, 3087, 3109, 3117, 3123, 3131, 3141,
3149, 3155, 3163, 3182, 3190, 3196, 3204, 3214, 3222, 3228, 3236, 3262,
3270, 3276, 3284, 3294, 3302, 3308, 3316, 3335, 3343, 3349, 3357, 3367,
3375, 3381, 3389, 3411, 3419, 3425, 3433, 3443, 3451, 3457, 3465, 3484,
3492, 3498, 3506, 3516, 3524, 3530, 3538, 2807, 2815, 2821, 2829, 2839,
2847, 2853, 2861, 2880, 2888, 2894, 2902, 2912, 2920, 2926, 2934, 2956,
2964, 2970, 2978, 2988, 2996, 3002, 3010, 3029, 3037, 3043, 3051, 3061,
3069, 3075, 3083, 3109, 3117, 3123, 3131, 3141, 3149, 3155, 3163, 3182,
3190, 3196, 3204, 3214, 3222, 3228, 3236, 3258, 3266, 3272, 3280, 3290,
3298, 3304, 3312, 3331, 3339, 3345, 3353, 3363, 3371, 3377, 3385, 3251,
3259, 3265, 3273, 3283, 3291, 3297, 3305, 3324, 3332, 3338, 3346, 3356,
3364, 3370, 3378, 3400, 3408, 3414, 3422, 3432, 3440, 3446, 3454, 3473,
3481, 3487, 3495, 3505, 3513, 3519, 3527, 3553, 3561, 3567, 3575, 3585,
3593, 3599, 3607, 3626, 3634, 3640, 3648, 3658, 3666, 3672, 3680, 3702,
3710, 3716, 3724, 3734, 3742, 3748, 3756, 3775, 3783, 3789, 3797, 3807,
3815, 3821, 3829, 3627, 3635, 3641, 3649, 3659, 3667, 3673, 3681, 3700,
3708, 3714, 3722, 3732, 3740, 3746, 3754, 3776, 3784, 3790, 3798, 3808,
3816, 3822, 3830, 3849, 3857, 3863, 3871, 3881, 3889, 3895, 3903, 3929,
3937, 3943, 3951, 3961, 3969, 3975, 3983, 4002, 4010, 4016, 4024, 4034,
4042, 4048, 4056, 4078, 4086, 4092, 4100, 4110, 4118, 4124, 4132, 4151,
4159, 4165, 4173, 4183, 4191, 4197, 4205, 4071, 4079, 4085, 4093, 4103,
4111, 4117, 4125, 4144, 4152, 4158, 4166, 4176, 4184, 4190, 4198, 4220,
4228, 4234, 4242, 4252, 4260, 4266, 4274, 4293, 4301, 4307, 4315, 4325,
4333, 4339, 4347, 4373, 4381, 4387, 4395, 4405, 4413, 4419, 4427, 4446,
4454, 4460, 4468, 4478, 4486, 4492, 4500, 4522, 4530, 4536, 4544, 4554,
4562, 4568, 4576, 4595, 4603, 4609, 4617, 4627, 4635, 4641, 4649, 3742,
3750, 3756, 3764, 3774, 3782, 3788, 3796, 3815, 3823, 3829, 3837, 3847,
3855, 3861, 3869, 3891, 3899, 3905, 3913, 3923, 3931, 3937, 3945, 3964,
3972, 3978, 3986, 3996, 4004, 4010, 4018, 4044, 4052, 4058, 4066, 4076,
4084, 4090, 4098, 4117, 4125, 4131, 4139, 4149, 4157, 4163, 4171, 4193,
4201, 4207, 4215, 4225, 4233, 4239, 4247, 4266, 4274, 4280, 4288, 4298,
4306, 4312, 4320, 4186, 4194, 4200, 4208, 4218, 4226, 4232, 4240, 4259,
4267, 4273, 4281, 4291, 4299, 4305, 4313, 4335, 4343, 4349, 4357, 4367,
4375, 4381, 4389, 4408, 4416, 4422, 4430, 4440, 4448, 4454, 4462, 4488,
4496, 4502, 4510, 4520, 4528, 4534, 4542, 4561, 4569, 4575, 4583, 4593,
4601, 4607, 4615, 4637, 4645, 4651, 4659, 4669, 4677, 4683, 4691, 4710,
4718, 4724, 4732, 4742, 4750, 4756, 4764, 4562, 4570, 4576, 4584, 4594,
4602, 4608, 4616, 4635, 4643, 4649, 4657, 4667, 4675, 4681, 4689, 4711,
4719, 4725, 4733, 4743, 4751, 4757, 4765, 4784, 4792, 4798, 4806, 4816,
4824, 4830, 4838, 4864, 4872, 4878, 4886, 4896, 4904, 4910, 4918, 4937,
4945, 4951, 4959, 4969, 4977, 4983, 4991, 5013, 5021, 5027, 5035, 5045,
5053, 5059, 5067, 5086, 5094, 5100, 5108, 5118, 5126, 5132, 5140, 5006,
5014, 5020, 5028, 5038, 5046, 5052, 5060, 5079, 5087, 5093, 5101, 5111,
5119, 5125, 5133, 5155, 5163, 5169, 5177, 5187, 5195, 5201, 5209, 5228,
5236, 5242, 5250, 5260, 5268, 5274, 5282, 5308, 5316, 5322, 5330, 5340,
5348, 5354, 5362, 5381, 5389, 5395, 5403, 5413, 5421, 5427, 5435, 5457,
5465, 5471, 5479, 5489, 5497, 5503, 5511, 5530, 5538, 5544, 5552, 5562,
5570, 5576, 5584, 4853, 4861, 4867, 4875, 4885, 4893, 4899, 4907, 4926,
4934, 4940, 4948, 4958, 4966, 4972, 4980, 5002, 5010, 5016, 5024, 5034,
5042, 5048, 5056, 5075, 5083, 5089, 5097, 5107, 5115, 5121, 5129, 5155,
5163, 5169, 5177, 5187, 5195, 5201, 5209, 5228, 5236, 5242, 5250, 5260,
5268, 5274, 5282, 5304, 5312, 5318, 5326, 5336, 5344, 5350, 5358, 5377,
5385, 5391, 5399, 5409, 5417, 5423, 5431, 5297, 5305, 5311, 5319, 5329,
5337, 5343, 5351, 5370, 5378, 5384, 5392, 5402, 5410, 5416, 5424, 5446,
5454, 5460, 5468, 5478, 5486, 5492, 5500, 5519, 5527, 5533, 5541, 5551,
5559, 5565, 5573, 5599, 5607, 5613, 5621, 5631, 5639, 5645, 5653, 5672,
5680, 5686, 5694, 5704, 5712, 5718, 5726, 5748, 5756, 5762, 5770, 5780,
5788, 5794, 5802, 5821, 5829, 5835, 5843, 5853, 5861, 5867, 5875, 5673,
5681, 5687, 5695, 5705, 5713, 5719, 5727, 5746, 5754, 5760, 5768, 5778,
5786, 5792, 5800, 5822, 5830, 5836, 5844, 5854, 5862, 5868, 5876, 5895,
5903, 5909, 5917, 5927, 5935, 5941, 5949, 5975, 5983, 5989, 5997, 6007,
6015, 6021, 6029, 6048, 6056, 6062, 6070, 6080, 6088, 6094, 6102, 6124,
6132, 6138, 6146, 6156, 6164, 6170, 6178, 6197, 6205, 6211, 6219, 6229,
6237, 6243, 6251, 6117, 6125, 6131, 6139, 6149, 6157, 6163, 6171, 6190,
6198, 6204, 6212, 6222, 6230, 6236, 6244, 6266, 6274, 6280, 6288, 6298,
6306, 6312, 6320, 6339, 6347, 6353, 6361, 6371, 6379, 6385, 6393, 6419,
6427, 6433, 6441, 6451, 6459, 6465, 6473, 6492, 6500, 6506, 6514, 6524,
6532, 6538, 6546, 6568, 6576, 6582, 6590, 6600, 6608, 6614, 6622, 6641,
6649, 6655, 6663, 6673, 6681, 6687, 6695, 3742, 3750, 3756, 3764, 3774,
3782, 3788, 3796, 3815, 3823, 3829, 3837, 3847, 3855, 3861, 3869, 3891,
3899, 3905, 3913, 3923, 3931, 3937, 3945, 3964, 3972, 3978, 3986, 3996,
4004, 4010, 4018, 4044, 4052, 4058, 4066, 4076, 4084, 4090, 4098, 4117,
4125, 4131, 4139, 4149, 4157, 4163, 4171, 4193, 4201, 4207, 4215, 4225,
4233, 4239, 4247, 4266, 4274, 4280, 4288, 4298, 4306, 4312, 4320, 4186,
4194, 4200, 4208, 4218, 4226, 4232, 4240, 4259, 4267, 4273, 4281, 4291,
4299, 4305, 4313, 4335, 4343, 4349, 4357, 4367, 4375, 4381, 4389, 4408,
4416, 4422, 4430, 4440, 4448, 4454, 4462, 4488, 4496, 4502, 4510, 4520,
4528, 4534, 4542, 4561, 4569, 4575, 4583, 4593, 4601, 4607, 4615, 4637,
4645, 4651, 4659, 4669, 4677, 4683, 4691, 4710, 4718, 4724, 4732, 4742,
4750, 4756, 4764, 4562, 4570, 4576, 4584, 4594, 4602, 4608, 4616, 4635,
4643, 4649, 4657, 4667, 4675, 4681, 4689, 4711, 4719, 4725, 4733, 4743,
4751, 4757, 4765, 4784, 4792, 4798, 4806, 4816, 4824, 4830, 4838, 4864,
4872, 4878, 4886, 4896, 4904, 4910, 4918, 4937, 4945, 4951, 4959, 4969,
4977, 4983, 4991, 5013, 5021, 5027, 5035, 5045, 5053, 5059, 5067, 5086,
5094, 5100, 5108, 5118, 5126, 5132, 5140, 5006, 5014, 5020, 5028, 5038,
5046, 5052, 5060, 5079, 5087, 5093, 5101, 5111, 5119, 5125, 5133, 5155,
5163, 5169, 5177, 5187, 5195, 5201, 5209, 5228, 5236, 5242, 5250, 5260,
5268, 5274, 5282, 5308, 5316, 5322, 5330, 5340, 5348, 5354, 5362, 5381,
5389, 5395, 5403, 5413, 5421, 5427, 5435, 5457, 5465, 5471, 5479, 5489,
5497, 5503, 5511, 5530, 5538, 5544, 5552, 5562, 5570, 5576, 5584, 4853,
4861, 4867, 4875, 4885, 4893, 4899, 4907, 4926, 4934, 4940, 4948, 4958,
4966, 4972, 4980, 5002, 5010, 5016, 5024, 5034, 5042, 5048, 5056, 5075,
5083, 5089, 5097, 5107, 5115, 5121, 5129, 5155, 5163, 5169, 5177, 5187,
5195, 5201, 5209, 5228, 5236, 5242, 5250, 5260, 5268, 5274, 5282, 5304,
5312, 5318, 5326, 5336, 5344, 5350, 5358, 5377, 5385, 5391, 5399, 5409,
5417, 5423, 5431, 5297, 5305, 5311, 5319, 5329, 5337, 5343, 5351, 5370,
5378, 5384, 5392, 5402, 5410, 5416, 5424, 5446, 5454, 5460, 5468, 5478,
5486, 5492, 5500, 5519, 5527, 5533, 5541, 5551, 5559, 5565, 5573, 5599,
5607, 5613, 5621, 5631, 5639, 5645, 5653, 5672, 5680, 5686, 5694, 5704,
5712, 5718, 5726, 5748, 5756, 5762, 5770, 5780, 5788, 5794, 5802, 5821,
5829, 5835, 5843, 5853, 5861, 5867, 5875, 5673, 5681, 5687, 5695, 5705,
5713, 5719, 5727, 5746, 5754, 5760, 5768, 5778, 5786, 5792, 5800, 5822,
5830, 5836, 5844, 5854, 5862, 5868, 5876, 5895, 5903, 5909, 5917, 5927,
5935, 5941, 5949, 5975, 5983, 5989, 5997, 6007, 6015, 6021, 6029, 6048,
6056, 6062, 6070, 6080, 6088, 6094, 6102, 6124, 6132, 6138, 6146, 6156,
6164, 6170, 6178, 6197, 6205, 6211, 6219, 6229, 6237, 6243, 6251, 6117,
6125, 6131, 6139, 6149, 6157, 6163, 6171, 6190, 6198, 6204, 6212, 6222,
6230, 6236, 6244, 6266, 6274, 6280, 6288, 6298, 6306, 6312, 6320, 6339,
6347, 6353, 6361, 6371, 6379, 6385, 6393, 6419, 6427, 6433, 6441, 6451,
6459, 6465, 6473, 6492, 6500, 6506, 6514, 6524, 6532, 6538, 6546, 6568,
6576, 6582, 6590, 6600, 6608, 6614, 6622, 6641, 6649, 6655, 6663, 6673,
6681, 6687, 6695, 5788, 5796, 5802, 5810, 5820, 5828, 5834, 5842, 5861,
5869, 5875, 5883, 5893, 5901, 5907, 5915, 5937, 5945, 5951, 5959, 5969,
5977, 5983, 5991, 6010, 6018, 6024, 6032, 6042, 6050, 6056, 6064, 6090,
6098, 6104, 6112, 6122, 6130, 6136, 6144, 6163, 6171, 6177, 6185, 6195,
6203, 6209, 6217, 6239, 6247, 6253, 6261, 6271, 6279, 6285, 6293, 6312,
6320, 6326, 6334, 6344, 6352, 6358, 6366, 6232, 6240, 6246, 6254, 6264,
6272, 6278, 6286, 6305, 6313, 6319, 6327, 6337, 6345, 6351, 6359, 6381,
6389, 6395, 6403, 6413, 6421, 6427, 6435, 6454, 6462, 6468, 6476, 6486,
6494, 6500, 6508, 6534, 6542, 6548, 6556, 6566, 6574, 6580, 6588, 6607,
6615, 6621, 6629, 6639, 6647, 6653, 6661, 6683, 6691, 6697, 6705, 6715,
6723, 6729, 6737, 6756, 6764, 6770, 6778, 6788, 6796, 6802, 6810, 6608,
6616, 6622, 6630, 6640, 6648, 6654, 6662, 6681, 6689, 6695, 6703, 6713,
6721, 6727, 6735, 6757, 6765, 6771, 6779, 6789, 6797, 6803, 6811, 6830,
6838, 6844, 6852, 6862, 6870, 6876, 6884, 6910, 6918, 6924, 6932, 6942,
6950, 6956, 6964, 6983, 6991, 6997, 7005, 7015, 7023, 7029, 7037, 7059,
7067, 7073, 7081, 7091, 7099, 7105, 7113, 7132, 7140, 7146, 7154, 7164,
7172, 7178, 7186, 7052, 7060, 7066, 7074, 7084, 7092, 7098, 7106, 7125,
7133, 7139, 7147, 7157, 7165, 7171, 7179, 7201, 7209, 7215, 7223, 7233,
7241, 7247, 7255, 7274, 7282, 7288, 7296, 7306, 7314, 7320, 7328, 7354,
7362, 7368, 7376, 7386, 7394, 7400, 7408, 7427, 7435, 7441, 7449, 7459,
7467, 7473, 7481, 7503, 7511, 7517, 7525, 7535, 7543, 7549, 7557, 7576,
7584, 7590, 7598, 7608, 7616, 7622, 7630, 6899, 6907, 6913, 6921, 6931,
6939, 6945, 6953, 6972, 6980, 6986, 6994, 7004, 7012, 7018, 7026, 7048,
7056, 7062, 7070, 7080, 7088, 7094, 7102, 7121, 7129, 7135, 7143, 7153,
7161, 7167, 7175, 7201, 7209, 7215, 7223, 7233, 7241, 7247, 7255, 7274,
7282, 7288, 7296, 7306, 7314, 7320, 7328, 7350, 7358, 7364, 7372, 7382,
7390, 7396, 7404, 7423, 7431, 7437, 7445, 7455, 7463, 7469, 7477, 7343,
7351, 7357, 7365, 7375, 7383, 7389, 7397, 7416, 7424, 7430, 7438, 7448,
7456, 7462, 7470, 7492, 7500, 7506, 7514, 7524, 7532, 7538, 7546, 7565,
7573, 7579, 7587, 7597, 7605, 7611, 7619, 7645, 7653, 7659, 7667, 7677,
7685, 7691, 7699, 7718, 7726, 7732, 7740, 7750, 7758, 7764, 7772, 7794,
7802, 7808, 7816, 7826, 7834, 7840, 7848, 7867, 7875, 7881, 7889, 7899,
7907, 7913, 7921, 7719, 7727, 7733, 7741, 7751, 7759, 7765, 7773, 7792,
7800, 7806, 7814, 7824, 7832, 7838, 7846, 7868, 7876, 7882, 7890, 7900,
7908, 7914, 7922, 7941, 7949, 7955, 7963, 7973, 7981, 7987, 7995, 8021,
8029, 8035, 8043, 8053, 8061, 8067, 8075, 8094, 8102, 8108, 8116, 8126,
8134, 8140, 8148, 8170, 8178, 8184, 8192, 8202, 8210, 8216, 8224, 8243,
8251, 8257, 8265, 8275
};

View File

@@ -0,0 +1,699 @@
/*
* Copyright (c) 2012 The WebM project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
/* Generated file, included by tokenize.c */
/* Values generated by fill_value_tokens() */
static const TOKENVALUE dct_value_tokens[2048*2] =
{
{10, 3963}, {10, 3961}, {10, 3959}, {10, 3957}, {10, 3955}, {10, 3953},
{10, 3951}, {10, 3949}, {10, 3947}, {10, 3945}, {10, 3943}, {10, 3941},
{10, 3939}, {10, 3937}, {10, 3935}, {10, 3933}, {10, 3931}, {10, 3929},
{10, 3927}, {10, 3925}, {10, 3923}, {10, 3921}, {10, 3919}, {10, 3917},
{10, 3915}, {10, 3913}, {10, 3911}, {10, 3909}, {10, 3907}, {10, 3905},
{10, 3903}, {10, 3901}, {10, 3899}, {10, 3897}, {10, 3895}, {10, 3893},
{10, 3891}, {10, 3889}, {10, 3887}, {10, 3885}, {10, 3883}, {10, 3881},
{10, 3879}, {10, 3877}, {10, 3875}, {10, 3873}, {10, 3871}, {10, 3869},
{10, 3867}, {10, 3865}, {10, 3863}, {10, 3861}, {10, 3859}, {10, 3857},
{10, 3855}, {10, 3853}, {10, 3851}, {10, 3849}, {10, 3847}, {10, 3845},
{10, 3843}, {10, 3841}, {10, 3839}, {10, 3837}, {10, 3835}, {10, 3833},
{10, 3831}, {10, 3829}, {10, 3827}, {10, 3825}, {10, 3823}, {10, 3821},
{10, 3819}, {10, 3817}, {10, 3815}, {10, 3813}, {10, 3811}, {10, 3809},
{10, 3807}, {10, 3805}, {10, 3803}, {10, 3801}, {10, 3799}, {10, 3797},
{10, 3795}, {10, 3793}, {10, 3791}, {10, 3789}, {10, 3787}, {10, 3785},
{10, 3783}, {10, 3781}, {10, 3779}, {10, 3777}, {10, 3775}, {10, 3773},
{10, 3771}, {10, 3769}, {10, 3767}, {10, 3765}, {10, 3763}, {10, 3761},
{10, 3759}, {10, 3757}, {10, 3755}, {10, 3753}, {10, 3751}, {10, 3749},
{10, 3747}, {10, 3745}, {10, 3743}, {10, 3741}, {10, 3739}, {10, 3737},
{10, 3735}, {10, 3733}, {10, 3731}, {10, 3729}, {10, 3727}, {10, 3725},
{10, 3723}, {10, 3721}, {10, 3719}, {10, 3717}, {10, 3715}, {10, 3713},
{10, 3711}, {10, 3709}, {10, 3707}, {10, 3705}, {10, 3703}, {10, 3701},
{10, 3699}, {10, 3697}, {10, 3695}, {10, 3693}, {10, 3691}, {10, 3689},
{10, 3687}, {10, 3685}, {10, 3683}, {10, 3681}, {10, 3679}, {10, 3677},
{10, 3675}, {10, 3673}, {10, 3671}, {10, 3669}, {10, 3667}, {10, 3665},
{10, 3663}, {10, 3661}, {10, 3659}, {10, 3657}, {10, 3655}, {10, 3653},
{10, 3651}, {10, 3649}, {10, 3647}, {10, 3645}, {10, 3643}, {10, 3641},
{10, 3639}, {10, 3637}, {10, 3635}, {10, 3633}, {10, 3631}, {10, 3629},
{10, 3627}, {10, 3625}, {10, 3623}, {10, 3621}, {10, 3619}, {10, 3617},
{10, 3615}, {10, 3613}, {10, 3611}, {10, 3609}, {10, 3607}, {10, 3605},
{10, 3603}, {10, 3601}, {10, 3599}, {10, 3597}, {10, 3595}, {10, 3593},
{10, 3591}, {10, 3589}, {10, 3587}, {10, 3585}, {10, 3583}, {10, 3581},
{10, 3579}, {10, 3577}, {10, 3575}, {10, 3573}, {10, 3571}, {10, 3569},
{10, 3567}, {10, 3565}, {10, 3563}, {10, 3561}, {10, 3559}, {10, 3557},
{10, 3555}, {10, 3553}, {10, 3551}, {10, 3549}, {10, 3547}, {10, 3545},
{10, 3543}, {10, 3541}, {10, 3539}, {10, 3537}, {10, 3535}, {10, 3533},
{10, 3531}, {10, 3529}, {10, 3527}, {10, 3525}, {10, 3523}, {10, 3521},
{10, 3519}, {10, 3517}, {10, 3515}, {10, 3513}, {10, 3511}, {10, 3509},
{10, 3507}, {10, 3505}, {10, 3503}, {10, 3501}, {10, 3499}, {10, 3497},
{10, 3495}, {10, 3493}, {10, 3491}, {10, 3489}, {10, 3487}, {10, 3485},
{10, 3483}, {10, 3481}, {10, 3479}, {10, 3477}, {10, 3475}, {10, 3473},
{10, 3471}, {10, 3469}, {10, 3467}, {10, 3465}, {10, 3463}, {10, 3461},
{10, 3459}, {10, 3457}, {10, 3455}, {10, 3453}, {10, 3451}, {10, 3449},
{10, 3447}, {10, 3445}, {10, 3443}, {10, 3441}, {10, 3439}, {10, 3437},
{10, 3435}, {10, 3433}, {10, 3431}, {10, 3429}, {10, 3427}, {10, 3425},
{10, 3423}, {10, 3421}, {10, 3419}, {10, 3417}, {10, 3415}, {10, 3413},
{10, 3411}, {10, 3409}, {10, 3407}, {10, 3405}, {10, 3403}, {10, 3401},
{10, 3399}, {10, 3397}, {10, 3395}, {10, 3393}, {10, 3391}, {10, 3389},
{10, 3387}, {10, 3385}, {10, 3383}, {10, 3381}, {10, 3379}, {10, 3377},
{10, 3375}, {10, 3373}, {10, 3371}, {10, 3369}, {10, 3367}, {10, 3365},
{10, 3363}, {10, 3361}, {10, 3359}, {10, 3357}, {10, 3355}, {10, 3353},
{10, 3351}, {10, 3349}, {10, 3347}, {10, 3345}, {10, 3343}, {10, 3341},
{10, 3339}, {10, 3337}, {10, 3335}, {10, 3333}, {10, 3331}, {10, 3329},
{10, 3327}, {10, 3325}, {10, 3323}, {10, 3321}, {10, 3319}, {10, 3317},
{10, 3315}, {10, 3313}, {10, 3311}, {10, 3309}, {10, 3307}, {10, 3305},
{10, 3303}, {10, 3301}, {10, 3299}, {10, 3297}, {10, 3295}, {10, 3293},
{10, 3291}, {10, 3289}, {10, 3287}, {10, 3285}, {10, 3283}, {10, 3281},
{10, 3279}, {10, 3277}, {10, 3275}, {10, 3273}, {10, 3271}, {10, 3269},
{10, 3267}, {10, 3265}, {10, 3263}, {10, 3261}, {10, 3259}, {10, 3257},
{10, 3255}, {10, 3253}, {10, 3251}, {10, 3249}, {10, 3247}, {10, 3245},
{10, 3243}, {10, 3241}, {10, 3239}, {10, 3237}, {10, 3235}, {10, 3233},
{10, 3231}, {10, 3229}, {10, 3227}, {10, 3225}, {10, 3223}, {10, 3221},
{10, 3219}, {10, 3217}, {10, 3215}, {10, 3213}, {10, 3211}, {10, 3209},
{10, 3207}, {10, 3205}, {10, 3203}, {10, 3201}, {10, 3199}, {10, 3197},
{10, 3195}, {10, 3193}, {10, 3191}, {10, 3189}, {10, 3187}, {10, 3185},
{10, 3183}, {10, 3181}, {10, 3179}, {10, 3177}, {10, 3175}, {10, 3173},
{10, 3171}, {10, 3169}, {10, 3167}, {10, 3165}, {10, 3163}, {10, 3161},
{10, 3159}, {10, 3157}, {10, 3155}, {10, 3153}, {10, 3151}, {10, 3149},
{10, 3147}, {10, 3145}, {10, 3143}, {10, 3141}, {10, 3139}, {10, 3137},
{10, 3135}, {10, 3133}, {10, 3131}, {10, 3129}, {10, 3127}, {10, 3125},
{10, 3123}, {10, 3121}, {10, 3119}, {10, 3117}, {10, 3115}, {10, 3113},
{10, 3111}, {10, 3109}, {10, 3107}, {10, 3105}, {10, 3103}, {10, 3101},
{10, 3099}, {10, 3097}, {10, 3095}, {10, 3093}, {10, 3091}, {10, 3089},
{10, 3087}, {10, 3085}, {10, 3083}, {10, 3081}, {10, 3079}, {10, 3077},
{10, 3075}, {10, 3073}, {10, 3071}, {10, 3069}, {10, 3067}, {10, 3065},
{10, 3063}, {10, 3061}, {10, 3059}, {10, 3057}, {10, 3055}, {10, 3053},
{10, 3051}, {10, 3049}, {10, 3047}, {10, 3045}, {10, 3043}, {10, 3041},
{10, 3039}, {10, 3037}, {10, 3035}, {10, 3033}, {10, 3031}, {10, 3029},
{10, 3027}, {10, 3025}, {10, 3023}, {10, 3021}, {10, 3019}, {10, 3017},
{10, 3015}, {10, 3013}, {10, 3011}, {10, 3009}, {10, 3007}, {10, 3005},
{10, 3003}, {10, 3001}, {10, 2999}, {10, 2997}, {10, 2995}, {10, 2993},
{10, 2991}, {10, 2989}, {10, 2987}, {10, 2985}, {10, 2983}, {10, 2981},
{10, 2979}, {10, 2977}, {10, 2975}, {10, 2973}, {10, 2971}, {10, 2969},
{10, 2967}, {10, 2965}, {10, 2963}, {10, 2961}, {10, 2959}, {10, 2957},
{10, 2955}, {10, 2953}, {10, 2951}, {10, 2949}, {10, 2947}, {10, 2945},
{10, 2943}, {10, 2941}, {10, 2939}, {10, 2937}, {10, 2935}, {10, 2933},
{10, 2931}, {10, 2929}, {10, 2927}, {10, 2925}, {10, 2923}, {10, 2921},
{10, 2919}, {10, 2917}, {10, 2915}, {10, 2913}, {10, 2911}, {10, 2909},
{10, 2907}, {10, 2905}, {10, 2903}, {10, 2901}, {10, 2899}, {10, 2897},
{10, 2895}, {10, 2893}, {10, 2891}, {10, 2889}, {10, 2887}, {10, 2885},
{10, 2883}, {10, 2881}, {10, 2879}, {10, 2877}, {10, 2875}, {10, 2873},
{10, 2871}, {10, 2869}, {10, 2867}, {10, 2865}, {10, 2863}, {10, 2861},
{10, 2859}, {10, 2857}, {10, 2855}, {10, 2853}, {10, 2851}, {10, 2849},
{10, 2847}, {10, 2845}, {10, 2843}, {10, 2841}, {10, 2839}, {10, 2837},
{10, 2835}, {10, 2833}, {10, 2831}, {10, 2829}, {10, 2827}, {10, 2825},
{10, 2823}, {10, 2821}, {10, 2819}, {10, 2817}, {10, 2815}, {10, 2813},
{10, 2811}, {10, 2809}, {10, 2807}, {10, 2805}, {10, 2803}, {10, 2801},
{10, 2799}, {10, 2797}, {10, 2795}, {10, 2793}, {10, 2791}, {10, 2789},
{10, 2787}, {10, 2785}, {10, 2783}, {10, 2781}, {10, 2779}, {10, 2777},
{10, 2775}, {10, 2773}, {10, 2771}, {10, 2769}, {10, 2767}, {10, 2765},
{10, 2763}, {10, 2761}, {10, 2759}, {10, 2757}, {10, 2755}, {10, 2753},
{10, 2751}, {10, 2749}, {10, 2747}, {10, 2745}, {10, 2743}, {10, 2741},
{10, 2739}, {10, 2737}, {10, 2735}, {10, 2733}, {10, 2731}, {10, 2729},
{10, 2727}, {10, 2725}, {10, 2723}, {10, 2721}, {10, 2719}, {10, 2717},
{10, 2715}, {10, 2713}, {10, 2711}, {10, 2709}, {10, 2707}, {10, 2705},
{10, 2703}, {10, 2701}, {10, 2699}, {10, 2697}, {10, 2695}, {10, 2693},
{10, 2691}, {10, 2689}, {10, 2687}, {10, 2685}, {10, 2683}, {10, 2681},
{10, 2679}, {10, 2677}, {10, 2675}, {10, 2673}, {10, 2671}, {10, 2669},
{10, 2667}, {10, 2665}, {10, 2663}, {10, 2661}, {10, 2659}, {10, 2657},
{10, 2655}, {10, 2653}, {10, 2651}, {10, 2649}, {10, 2647}, {10, 2645},
{10, 2643}, {10, 2641}, {10, 2639}, {10, 2637}, {10, 2635}, {10, 2633},
{10, 2631}, {10, 2629}, {10, 2627}, {10, 2625}, {10, 2623}, {10, 2621},
{10, 2619}, {10, 2617}, {10, 2615}, {10, 2613}, {10, 2611}, {10, 2609},
{10, 2607}, {10, 2605}, {10, 2603}, {10, 2601}, {10, 2599}, {10, 2597},
{10, 2595}, {10, 2593}, {10, 2591}, {10, 2589}, {10, 2587}, {10, 2585},
{10, 2583}, {10, 2581}, {10, 2579}, {10, 2577}, {10, 2575}, {10, 2573},
{10, 2571}, {10, 2569}, {10, 2567}, {10, 2565}, {10, 2563}, {10, 2561},
{10, 2559}, {10, 2557}, {10, 2555}, {10, 2553}, {10, 2551}, {10, 2549},
{10, 2547}, {10, 2545}, {10, 2543}, {10, 2541}, {10, 2539}, {10, 2537},
{10, 2535}, {10, 2533}, {10, 2531}, {10, 2529}, {10, 2527}, {10, 2525},
{10, 2523}, {10, 2521}, {10, 2519}, {10, 2517}, {10, 2515}, {10, 2513},
{10, 2511}, {10, 2509}, {10, 2507}, {10, 2505}, {10, 2503}, {10, 2501},
{10, 2499}, {10, 2497}, {10, 2495}, {10, 2493}, {10, 2491}, {10, 2489},
{10, 2487}, {10, 2485}, {10, 2483}, {10, 2481}, {10, 2479}, {10, 2477},
{10, 2475}, {10, 2473}, {10, 2471}, {10, 2469}, {10, 2467}, {10, 2465},
{10, 2463}, {10, 2461}, {10, 2459}, {10, 2457}, {10, 2455}, {10, 2453},
{10, 2451}, {10, 2449}, {10, 2447}, {10, 2445}, {10, 2443}, {10, 2441},
{10, 2439}, {10, 2437}, {10, 2435}, {10, 2433}, {10, 2431}, {10, 2429},
{10, 2427}, {10, 2425}, {10, 2423}, {10, 2421}, {10, 2419}, {10, 2417},
{10, 2415}, {10, 2413}, {10, 2411}, {10, 2409}, {10, 2407}, {10, 2405},
{10, 2403}, {10, 2401}, {10, 2399}, {10, 2397}, {10, 2395}, {10, 2393},
{10, 2391}, {10, 2389}, {10, 2387}, {10, 2385}, {10, 2383}, {10, 2381},
{10, 2379}, {10, 2377}, {10, 2375}, {10, 2373}, {10, 2371}, {10, 2369},
{10, 2367}, {10, 2365}, {10, 2363}, {10, 2361}, {10, 2359}, {10, 2357},
{10, 2355}, {10, 2353}, {10, 2351}, {10, 2349}, {10, 2347}, {10, 2345},
{10, 2343}, {10, 2341}, {10, 2339}, {10, 2337}, {10, 2335}, {10, 2333},
{10, 2331}, {10, 2329}, {10, 2327}, {10, 2325}, {10, 2323}, {10, 2321},
{10, 2319}, {10, 2317}, {10, 2315}, {10, 2313}, {10, 2311}, {10, 2309},
{10, 2307}, {10, 2305}, {10, 2303}, {10, 2301}, {10, 2299}, {10, 2297},
{10, 2295}, {10, 2293}, {10, 2291}, {10, 2289}, {10, 2287}, {10, 2285},
{10, 2283}, {10, 2281}, {10, 2279}, {10, 2277}, {10, 2275}, {10, 2273},
{10, 2271}, {10, 2269}, {10, 2267}, {10, 2265}, {10, 2263}, {10, 2261},
{10, 2259}, {10, 2257}, {10, 2255}, {10, 2253}, {10, 2251}, {10, 2249},
{10, 2247}, {10, 2245}, {10, 2243}, {10, 2241}, {10, 2239}, {10, 2237},
{10, 2235}, {10, 2233}, {10, 2231}, {10, 2229}, {10, 2227}, {10, 2225},
{10, 2223}, {10, 2221}, {10, 2219}, {10, 2217}, {10, 2215}, {10, 2213},
{10, 2211}, {10, 2209}, {10, 2207}, {10, 2205}, {10, 2203}, {10, 2201},
{10, 2199}, {10, 2197}, {10, 2195}, {10, 2193}, {10, 2191}, {10, 2189},
{10, 2187}, {10, 2185}, {10, 2183}, {10, 2181}, {10, 2179}, {10, 2177},
{10, 2175}, {10, 2173}, {10, 2171}, {10, 2169}, {10, 2167}, {10, 2165},
{10, 2163}, {10, 2161}, {10, 2159}, {10, 2157}, {10, 2155}, {10, 2153},
{10, 2151}, {10, 2149}, {10, 2147}, {10, 2145}, {10, 2143}, {10, 2141},
{10, 2139}, {10, 2137}, {10, 2135}, {10, 2133}, {10, 2131}, {10, 2129},
{10, 2127}, {10, 2125}, {10, 2123}, {10, 2121}, {10, 2119}, {10, 2117},
{10, 2115}, {10, 2113}, {10, 2111}, {10, 2109}, {10, 2107}, {10, 2105},
{10, 2103}, {10, 2101}, {10, 2099}, {10, 2097}, {10, 2095}, {10, 2093},
{10, 2091}, {10, 2089}, {10, 2087}, {10, 2085}, {10, 2083}, {10, 2081},
{10, 2079}, {10, 2077}, {10, 2075}, {10, 2073}, {10, 2071}, {10, 2069},
{10, 2067}, {10, 2065}, {10, 2063}, {10, 2061}, {10, 2059}, {10, 2057},
{10, 2055}, {10, 2053}, {10, 2051}, {10, 2049}, {10, 2047}, {10, 2045},
{10, 2043}, {10, 2041}, {10, 2039}, {10, 2037}, {10, 2035}, {10, 2033},
{10, 2031}, {10, 2029}, {10, 2027}, {10, 2025}, {10, 2023}, {10, 2021},
{10, 2019}, {10, 2017}, {10, 2015}, {10, 2013}, {10, 2011}, {10, 2009},
{10, 2007}, {10, 2005}, {10, 2003}, {10, 2001}, {10, 1999}, {10, 1997},
{10, 1995}, {10, 1993}, {10, 1991}, {10, 1989}, {10, 1987}, {10, 1985},
{10, 1983}, {10, 1981}, {10, 1979}, {10, 1977}, {10, 1975}, {10, 1973},
{10, 1971}, {10, 1969}, {10, 1967}, {10, 1965}, {10, 1963}, {10, 1961},
{10, 1959}, {10, 1957}, {10, 1955}, {10, 1953}, {10, 1951}, {10, 1949},
{10, 1947}, {10, 1945}, {10, 1943}, {10, 1941}, {10, 1939}, {10, 1937},
{10, 1935}, {10, 1933}, {10, 1931}, {10, 1929}, {10, 1927}, {10, 1925},
{10, 1923}, {10, 1921}, {10, 1919}, {10, 1917}, {10, 1915}, {10, 1913},
{10, 1911}, {10, 1909}, {10, 1907}, {10, 1905}, {10, 1903}, {10, 1901},
{10, 1899}, {10, 1897}, {10, 1895}, {10, 1893}, {10, 1891}, {10, 1889},
{10, 1887}, {10, 1885}, {10, 1883}, {10, 1881}, {10, 1879}, {10, 1877},
{10, 1875}, {10, 1873}, {10, 1871}, {10, 1869}, {10, 1867}, {10, 1865},
{10, 1863}, {10, 1861}, {10, 1859}, {10, 1857}, {10, 1855}, {10, 1853},
{10, 1851}, {10, 1849}, {10, 1847}, {10, 1845}, {10, 1843}, {10, 1841},
{10, 1839}, {10, 1837}, {10, 1835}, {10, 1833}, {10, 1831}, {10, 1829},
{10, 1827}, {10, 1825}, {10, 1823}, {10, 1821}, {10, 1819}, {10, 1817},
{10, 1815}, {10, 1813}, {10, 1811}, {10, 1809}, {10, 1807}, {10, 1805},
{10, 1803}, {10, 1801}, {10, 1799}, {10, 1797}, {10, 1795}, {10, 1793},
{10, 1791}, {10, 1789}, {10, 1787}, {10, 1785}, {10, 1783}, {10, 1781},
{10, 1779}, {10, 1777}, {10, 1775}, {10, 1773}, {10, 1771}, {10, 1769},
{10, 1767}, {10, 1765}, {10, 1763}, {10, 1761}, {10, 1759}, {10, 1757},
{10, 1755}, {10, 1753}, {10, 1751}, {10, 1749}, {10, 1747}, {10, 1745},
{10, 1743}, {10, 1741}, {10, 1739}, {10, 1737}, {10, 1735}, {10, 1733},
{10, 1731}, {10, 1729}, {10, 1727}, {10, 1725}, {10, 1723}, {10, 1721},
{10, 1719}, {10, 1717}, {10, 1715}, {10, 1713}, {10, 1711}, {10, 1709},
{10, 1707}, {10, 1705}, {10, 1703}, {10, 1701}, {10, 1699}, {10, 1697},
{10, 1695}, {10, 1693}, {10, 1691}, {10, 1689}, {10, 1687}, {10, 1685},
{10, 1683}, {10, 1681}, {10, 1679}, {10, 1677}, {10, 1675}, {10, 1673},
{10, 1671}, {10, 1669}, {10, 1667}, {10, 1665}, {10, 1663}, {10, 1661},
{10, 1659}, {10, 1657}, {10, 1655}, {10, 1653}, {10, 1651}, {10, 1649},
{10, 1647}, {10, 1645}, {10, 1643}, {10, 1641}, {10, 1639}, {10, 1637},
{10, 1635}, {10, 1633}, {10, 1631}, {10, 1629}, {10, 1627}, {10, 1625},
{10, 1623}, {10, 1621}, {10, 1619}, {10, 1617}, {10, 1615}, {10, 1613},
{10, 1611}, {10, 1609}, {10, 1607}, {10, 1605}, {10, 1603}, {10, 1601},
{10, 1599}, {10, 1597}, {10, 1595}, {10, 1593}, {10, 1591}, {10, 1589},
{10, 1587}, {10, 1585}, {10, 1583}, {10, 1581}, {10, 1579}, {10, 1577},
{10, 1575}, {10, 1573}, {10, 1571}, {10, 1569}, {10, 1567}, {10, 1565},
{10, 1563}, {10, 1561}, {10, 1559}, {10, 1557}, {10, 1555}, {10, 1553},
{10, 1551}, {10, 1549}, {10, 1547}, {10, 1545}, {10, 1543}, {10, 1541},
{10, 1539}, {10, 1537}, {10, 1535}, {10, 1533}, {10, 1531}, {10, 1529},
{10, 1527}, {10, 1525}, {10, 1523}, {10, 1521}, {10, 1519}, {10, 1517},
{10, 1515}, {10, 1513}, {10, 1511}, {10, 1509}, {10, 1507}, {10, 1505},
{10, 1503}, {10, 1501}, {10, 1499}, {10, 1497}, {10, 1495}, {10, 1493},
{10, 1491}, {10, 1489}, {10, 1487}, {10, 1485}, {10, 1483}, {10, 1481},
{10, 1479}, {10, 1477}, {10, 1475}, {10, 1473}, {10, 1471}, {10, 1469},
{10, 1467}, {10, 1465}, {10, 1463}, {10, 1461}, {10, 1459}, {10, 1457},
{10, 1455}, {10, 1453}, {10, 1451}, {10, 1449}, {10, 1447}, {10, 1445},
{10, 1443}, {10, 1441}, {10, 1439}, {10, 1437}, {10, 1435}, {10, 1433},
{10, 1431}, {10, 1429}, {10, 1427}, {10, 1425}, {10, 1423}, {10, 1421},
{10, 1419}, {10, 1417}, {10, 1415}, {10, 1413}, {10, 1411}, {10, 1409},
{10, 1407}, {10, 1405}, {10, 1403}, {10, 1401}, {10, 1399}, {10, 1397},
{10, 1395}, {10, 1393}, {10, 1391}, {10, 1389}, {10, 1387}, {10, 1385},
{10, 1383}, {10, 1381}, {10, 1379}, {10, 1377}, {10, 1375}, {10, 1373},
{10, 1371}, {10, 1369}, {10, 1367}, {10, 1365}, {10, 1363}, {10, 1361},
{10, 1359}, {10, 1357}, {10, 1355}, {10, 1353}, {10, 1351}, {10, 1349},
{10, 1347}, {10, 1345}, {10, 1343}, {10, 1341}, {10, 1339}, {10, 1337},
{10, 1335}, {10, 1333}, {10, 1331}, {10, 1329}, {10, 1327}, {10, 1325},
{10, 1323}, {10, 1321}, {10, 1319}, {10, 1317}, {10, 1315}, {10, 1313},
{10, 1311}, {10, 1309}, {10, 1307}, {10, 1305}, {10, 1303}, {10, 1301},
{10, 1299}, {10, 1297}, {10, 1295}, {10, 1293}, {10, 1291}, {10, 1289},
{10, 1287}, {10, 1285}, {10, 1283}, {10, 1281}, {10, 1279}, {10, 1277},
{10, 1275}, {10, 1273}, {10, 1271}, {10, 1269}, {10, 1267}, {10, 1265},
{10, 1263}, {10, 1261}, {10, 1259}, {10, 1257}, {10, 1255}, {10, 1253},
{10, 1251}, {10, 1249}, {10, 1247}, {10, 1245}, {10, 1243}, {10, 1241},
{10, 1239}, {10, 1237}, {10, 1235}, {10, 1233}, {10, 1231}, {10, 1229},
{10, 1227}, {10, 1225}, {10, 1223}, {10, 1221}, {10, 1219}, {10, 1217},
{10, 1215}, {10, 1213}, {10, 1211}, {10, 1209}, {10, 1207}, {10, 1205},
{10, 1203}, {10, 1201}, {10, 1199}, {10, 1197}, {10, 1195}, {10, 1193},
{10, 1191}, {10, 1189}, {10, 1187}, {10, 1185}, {10, 1183}, {10, 1181},
{10, 1179}, {10, 1177}, {10, 1175}, {10, 1173}, {10, 1171}, {10, 1169},
{10, 1167}, {10, 1165}, {10, 1163}, {10, 1161}, {10, 1159}, {10, 1157},
{10, 1155}, {10, 1153}, {10, 1151}, {10, 1149}, {10, 1147}, {10, 1145},
{10, 1143}, {10, 1141}, {10, 1139}, {10, 1137}, {10, 1135}, {10, 1133},
{10, 1131}, {10, 1129}, {10, 1127}, {10, 1125}, {10, 1123}, {10, 1121},
{10, 1119}, {10, 1117}, {10, 1115}, {10, 1113}, {10, 1111}, {10, 1109},
{10, 1107}, {10, 1105}, {10, 1103}, {10, 1101}, {10, 1099}, {10, 1097},
{10, 1095}, {10, 1093}, {10, 1091}, {10, 1089}, {10, 1087}, {10, 1085},
{10, 1083}, {10, 1081}, {10, 1079}, {10, 1077}, {10, 1075}, {10, 1073},
{10, 1071}, {10, 1069}, {10, 1067}, {10, 1065}, {10, 1063}, {10, 1061},
{10, 1059}, {10, 1057}, {10, 1055}, {10, 1053}, {10, 1051}, {10, 1049},
{10, 1047}, {10, 1045}, {10, 1043}, {10, 1041}, {10, 1039}, {10, 1037},
{10, 1035}, {10, 1033}, {10, 1031}, {10, 1029}, {10, 1027}, {10, 1025},
{10, 1023}, {10, 1021}, {10, 1019}, {10, 1017}, {10, 1015}, {10, 1013},
{10, 1011}, {10, 1009}, {10, 1007}, {10, 1005}, {10, 1003}, {10, 1001},
{10, 999}, {10, 997}, {10, 995}, {10, 993}, {10, 991}, {10, 989},
{10, 987}, {10, 985}, {10, 983}, {10, 981}, {10, 979}, {10, 977},
{10, 975}, {10, 973}, {10, 971}, {10, 969}, {10, 967}, {10, 965},
{10, 963}, {10, 961}, {10, 959}, {10, 957}, {10, 955}, {10, 953},
{10, 951}, {10, 949}, {10, 947}, {10, 945}, {10, 943}, {10, 941},
{10, 939}, {10, 937}, {10, 935}, {10, 933}, {10, 931}, {10, 929},
{10, 927}, {10, 925}, {10, 923}, {10, 921}, {10, 919}, {10, 917},
{10, 915}, {10, 913}, {10, 911}, {10, 909}, {10, 907}, {10, 905},
{10, 903}, {10, 901}, {10, 899}, {10, 897}, {10, 895}, {10, 893},
{10, 891}, {10, 889}, {10, 887}, {10, 885}, {10, 883}, {10, 881},
{10, 879}, {10, 877}, {10, 875}, {10, 873}, {10, 871}, {10, 869},
{10, 867}, {10, 865}, {10, 863}, {10, 861}, {10, 859}, {10, 857},
{10, 855}, {10, 853}, {10, 851}, {10, 849}, {10, 847}, {10, 845},
{10, 843}, {10, 841}, {10, 839}, {10, 837}, {10, 835}, {10, 833},
{10, 831}, {10, 829}, {10, 827}, {10, 825}, {10, 823}, {10, 821},
{10, 819}, {10, 817}, {10, 815}, {10, 813}, {10, 811}, {10, 809},
{10, 807}, {10, 805}, {10, 803}, {10, 801}, {10, 799}, {10, 797},
{10, 795}, {10, 793}, {10, 791}, {10, 789}, {10, 787}, {10, 785},
{10, 783}, {10, 781}, {10, 779}, {10, 777}, {10, 775}, {10, 773},
{10, 771}, {10, 769}, {10, 767}, {10, 765}, {10, 763}, {10, 761},
{10, 759}, {10, 757}, {10, 755}, {10, 753}, {10, 751}, {10, 749},
{10, 747}, {10, 745}, {10, 743}, {10, 741}, {10, 739}, {10, 737},
{10, 735}, {10, 733}, {10, 731}, {10, 729}, {10, 727}, {10, 725},
{10, 723}, {10, 721}, {10, 719}, {10, 717}, {10, 715}, {10, 713},
{10, 711}, {10, 709}, {10, 707}, {10, 705}, {10, 703}, {10, 701},
{10, 699}, {10, 697}, {10, 695}, {10, 693}, {10, 691}, {10, 689},
{10, 687}, {10, 685}, {10, 683}, {10, 681}, {10, 679}, {10, 677},
{10, 675}, {10, 673}, {10, 671}, {10, 669}, {10, 667}, {10, 665},
{10, 663}, {10, 661}, {10, 659}, {10, 657}, {10, 655}, {10, 653},
{10, 651}, {10, 649}, {10, 647}, {10, 645}, {10, 643}, {10, 641},
{10, 639}, {10, 637}, {10, 635}, {10, 633}, {10, 631}, {10, 629},
{10, 627}, {10, 625}, {10, 623}, {10, 621}, {10, 619}, {10, 617},
{10, 615}, {10, 613}, {10, 611}, {10, 609}, {10, 607}, {10, 605},
{10, 603}, {10, 601}, {10, 599}, {10, 597}, {10, 595}, {10, 593},
{10, 591}, {10, 589}, {10, 587}, {10, 585}, {10, 583}, {10, 581},
{10, 579}, {10, 577}, {10, 575}, {10, 573}, {10, 571}, {10, 569},
{10, 567}, {10, 565}, {10, 563}, {10, 561}, {10, 559}, {10, 557},
{10, 555}, {10, 553}, {10, 551}, {10, 549}, {10, 547}, {10, 545},
{10, 543}, {10, 541}, {10, 539}, {10, 537}, {10, 535}, {10, 533},
{10, 531}, {10, 529}, {10, 527}, {10, 525}, {10, 523}, {10, 521},
{10, 519}, {10, 517}, {10, 515}, {10, 513}, {10, 511}, {10, 509},
{10, 507}, {10, 505}, {10, 503}, {10, 501}, {10, 499}, {10, 497},
{10, 495}, {10, 493}, {10, 491}, {10, 489}, {10, 487}, {10, 485},
{10, 483}, {10, 481}, {10, 479}, {10, 477}, {10, 475}, {10, 473},
{10, 471}, {10, 469}, {10, 467}, {10, 465}, {10, 463}, {10, 461},
{10, 459}, {10, 457}, {10, 455}, {10, 453}, {10, 451}, {10, 449},
{10, 447}, {10, 445}, {10, 443}, {10, 441}, {10, 439}, {10, 437},
{10, 435}, {10, 433}, {10, 431}, {10, 429}, {10, 427}, {10, 425},
{10, 423}, {10, 421}, {10, 419}, {10, 417}, {10, 415}, {10, 413},
{10, 411}, {10, 409}, {10, 407}, {10, 405}, {10, 403}, {10, 401},
{10, 399}, {10, 397}, {10, 395}, {10, 393}, {10, 391}, {10, 389},
{10, 387}, {10, 385}, {10, 383}, {10, 381}, {10, 379}, {10, 377},
{10, 375}, {10, 373}, {10, 371}, {10, 369}, {10, 367}, {10, 365},
{10, 363}, {10, 361}, {10, 359}, {10, 357}, {10, 355}, {10, 353},
{10, 351}, {10, 349}, {10, 347}, {10, 345}, {10, 343}, {10, 341},
{10, 339}, {10, 337}, {10, 335}, {10, 333}, {10, 331}, {10, 329},
{10, 327}, {10, 325}, {10, 323}, {10, 321}, {10, 319}, {10, 317},
{10, 315}, {10, 313}, {10, 311}, {10, 309}, {10, 307}, {10, 305},
{10, 303}, {10, 301}, {10, 299}, {10, 297}, {10, 295}, {10, 293},
{10, 291}, {10, 289}, {10, 287}, {10, 285}, {10, 283}, {10, 281},
{10, 279}, {10, 277}, {10, 275}, {10, 273}, {10, 271}, {10, 269},
{10, 267}, {10, 265}, {10, 263}, {10, 261}, {10, 259}, {10, 257},
{10, 255}, {10, 253}, {10, 251}, {10, 249}, {10, 247}, {10, 245},
{10, 243}, {10, 241}, {10, 239}, {10, 237}, {10, 235}, {10, 233},
{10, 231}, {10, 229}, {10, 227}, {10, 225}, {10, 223}, {10, 221},
{10, 219}, {10, 217}, {10, 215}, {10, 213}, {10, 211}, {10, 209},
{10, 207}, {10, 205}, {10, 203}, {10, 201}, {10, 199}, {10, 197},
{10, 195}, {10, 193}, {10, 191}, {10, 189}, {10, 187}, {10, 185},
{10, 183}, {10, 181}, {10, 179}, {10, 177}, {10, 175}, {10, 173},
{10, 171}, {10, 169}, {10, 167}, {10, 165}, {10, 163}, {10, 161},
{10, 159}, {10, 157}, {10, 155}, {10, 153}, {10, 151}, {10, 149},
{10, 147}, {10, 145}, {10, 143}, {10, 141}, {10, 139}, {10, 137},
{10, 135}, {10, 133}, {10, 131}, {10, 129}, {10, 127}, {10, 125},
{10, 123}, {10, 121}, {10, 119}, {10, 117}, {10, 115}, {10, 113},
{10, 111}, {10, 109}, {10, 107}, {10, 105}, {10, 103}, {10, 101},
{10, 99}, {10, 97}, {10, 95}, {10, 93}, {10, 91}, {10, 89},
{10, 87}, {10, 85}, {10, 83}, {10, 81}, {10, 79}, {10, 77},
{10, 75}, {10, 73}, {10, 71}, {10, 69}, {10, 67}, {10, 65},
{10, 63}, {10, 61}, {10, 59}, {10, 57}, {10, 55}, {10, 53},
{10, 51}, {10, 49}, {10, 47}, {10, 45}, {10, 43}, {10, 41},
{10, 39}, {10, 37}, {10, 35}, {10, 33}, {10, 31}, {10, 29},
{10, 27}, {10, 25}, {10, 23}, {10, 21}, {10, 19}, {10, 17},
{10, 15}, {10, 13}, {10, 11}, {10, 9}, {10, 7}, {10, 5},
{10, 3}, {10, 1}, {9, 63}, {9, 61}, {9, 59}, {9, 57},
{9, 55}, {9, 53}, {9, 51}, {9, 49}, {9, 47}, {9, 45},
{9, 43}, {9, 41}, {9, 39}, {9, 37}, {9, 35}, {9, 33},
{9, 31}, {9, 29}, {9, 27}, {9, 25}, {9, 23}, {9, 21},
{9, 19}, {9, 17}, {9, 15}, {9, 13}, {9, 11}, {9, 9},
{9, 7}, {9, 5}, {9, 3}, {9, 1}, {8, 31}, {8, 29},
{8, 27}, {8, 25}, {8, 23}, {8, 21}, {8, 19}, {8, 17},
{8, 15}, {8, 13}, {8, 11}, {8, 9}, {8, 7}, {8, 5},
{8, 3}, {8, 1}, {7, 15}, {7, 13}, {7, 11}, {7, 9},
{7, 7}, {7, 5}, {7, 3}, {7, 1}, {6, 7}, {6, 5},
{6, 3}, {6, 1}, {5, 3}, {5, 1}, {4, 1}, {3, 1},
{2, 1}, {1, 1}, {0, 0}, {1, 0}, {2, 0}, {3, 0},
{4, 0}, {5, 0}, {5, 2}, {6, 0}, {6, 2}, {6, 4},
{6, 6}, {7, 0}, {7, 2}, {7, 4}, {7, 6}, {7, 8},
{7, 10}, {7, 12}, {7, 14}, {8, 0}, {8, 2}, {8, 4},
{8, 6}, {8, 8}, {8, 10}, {8, 12}, {8, 14}, {8, 16},
{8, 18}, {8, 20}, {8, 22}, {8, 24}, {8, 26}, {8, 28},
{8, 30}, {9, 0}, {9, 2}, {9, 4}, {9, 6}, {9, 8},
{9, 10}, {9, 12}, {9, 14}, {9, 16}, {9, 18}, {9, 20},
{9, 22}, {9, 24}, {9, 26}, {9, 28}, {9, 30}, {9, 32},
{9, 34}, {9, 36}, {9, 38}, {9, 40}, {9, 42}, {9, 44},
{9, 46}, {9, 48}, {9, 50}, {9, 52}, {9, 54}, {9, 56},
{9, 58}, {9, 60}, {9, 62}, {10, 0}, {10, 2}, {10, 4},
{10, 6}, {10, 8}, {10, 10}, {10, 12}, {10, 14}, {10, 16},
{10, 18}, {10, 20}, {10, 22}, {10, 24}, {10, 26}, {10, 28},
{10, 30}, {10, 32}, {10, 34}, {10, 36}, {10, 38}, {10, 40},
{10, 42}, {10, 44}, {10, 46}, {10, 48}, {10, 50}, {10, 52},
{10, 54}, {10, 56}, {10, 58}, {10, 60}, {10, 62}, {10, 64},
{10, 66}, {10, 68}, {10, 70}, {10, 72}, {10, 74}, {10, 76},
{10, 78}, {10, 80}, {10, 82}, {10, 84}, {10, 86}, {10, 88},
{10, 90}, {10, 92}, {10, 94}, {10, 96}, {10, 98}, {10, 100},
{10, 102}, {10, 104}, {10, 106}, {10, 108}, {10, 110}, {10, 112},
{10, 114}, {10, 116}, {10, 118}, {10, 120}, {10, 122}, {10, 124},
{10, 126}, {10, 128}, {10, 130}, {10, 132}, {10, 134}, {10, 136},
{10, 138}, {10, 140}, {10, 142}, {10, 144}, {10, 146}, {10, 148},
{10, 150}, {10, 152}, {10, 154}, {10, 156}, {10, 158}, {10, 160},
{10, 162}, {10, 164}, {10, 166}, {10, 168}, {10, 170}, {10, 172},
{10, 174}, {10, 176}, {10, 178}, {10, 180}, {10, 182}, {10, 184},
{10, 186}, {10, 188}, {10, 190}, {10, 192}, {10, 194}, {10, 196},
{10, 198}, {10, 200}, {10, 202}, {10, 204}, {10, 206}, {10, 208},
{10, 210}, {10, 212}, {10, 214}, {10, 216}, {10, 218}, {10, 220},
{10, 222}, {10, 224}, {10, 226}, {10, 228}, {10, 230}, {10, 232},
{10, 234}, {10, 236}, {10, 238}, {10, 240}, {10, 242}, {10, 244},
{10, 246}, {10, 248}, {10, 250}, {10, 252}, {10, 254}, {10, 256},
{10, 258}, {10, 260}, {10, 262}, {10, 264}, {10, 266}, {10, 268},
{10, 270}, {10, 272}, {10, 274}, {10, 276}, {10, 278}, {10, 280},
{10, 282}, {10, 284}, {10, 286}, {10, 288}, {10, 290}, {10, 292},
{10, 294}, {10, 296}, {10, 298}, {10, 300}, {10, 302}, {10, 304},
{10, 306}, {10, 308}, {10, 310}, {10, 312}, {10, 314}, {10, 316},
{10, 318}, {10, 320}, {10, 322}, {10, 324}, {10, 326}, {10, 328},
{10, 330}, {10, 332}, {10, 334}, {10, 336}, {10, 338}, {10, 340},
{10, 342}, {10, 344}, {10, 346}, {10, 348}, {10, 350}, {10, 352},
{10, 354}, {10, 356}, {10, 358}, {10, 360}, {10, 362}, {10, 364},
{10, 366}, {10, 368}, {10, 370}, {10, 372}, {10, 374}, {10, 376},
{10, 378}, {10, 380}, {10, 382}, {10, 384}, {10, 386}, {10, 388},
{10, 390}, {10, 392}, {10, 394}, {10, 396}, {10, 398}, {10, 400},
{10, 402}, {10, 404}, {10, 406}, {10, 408}, {10, 410}, {10, 412},
{10, 414}, {10, 416}, {10, 418}, {10, 420}, {10, 422}, {10, 424},
{10, 426}, {10, 428}, {10, 430}, {10, 432}, {10, 434}, {10, 436},
{10, 438}, {10, 440}, {10, 442}, {10, 444}, {10, 446}, {10, 448},
{10, 450}, {10, 452}, {10, 454}, {10, 456}, {10, 458}, {10, 460},
{10, 462}, {10, 464}, {10, 466}, {10, 468}, {10, 470}, {10, 472},
{10, 474}, {10, 476}, {10, 478}, {10, 480}, {10, 482}, {10, 484},
{10, 486}, {10, 488}, {10, 490}, {10, 492}, {10, 494}, {10, 496},
{10, 498}, {10, 500}, {10, 502}, {10, 504}, {10, 506}, {10, 508},
{10, 510}, {10, 512}, {10, 514}, {10, 516}, {10, 518}, {10, 520},
{10, 522}, {10, 524}, {10, 526}, {10, 528}, {10, 530}, {10, 532},
{10, 534}, {10, 536}, {10, 538}, {10, 540}, {10, 542}, {10, 544},
{10, 546}, {10, 548}, {10, 550}, {10, 552}, {10, 554}, {10, 556},
{10, 558}, {10, 560}, {10, 562}, {10, 564}, {10, 566}, {10, 568},
{10, 570}, {10, 572}, {10, 574}, {10, 576}, {10, 578}, {10, 580},
{10, 582}, {10, 584}, {10, 586}, {10, 588}, {10, 590}, {10, 592},
{10, 594}, {10, 596}, {10, 598}, {10, 600}, {10, 602}, {10, 604},
{10, 606}, {10, 608}, {10, 610}, {10, 612}, {10, 614}, {10, 616},
{10, 618}, {10, 620}, {10, 622}, {10, 624}, {10, 626}, {10, 628},
{10, 630}, {10, 632}, {10, 634}, {10, 636}, {10, 638}, {10, 640},
{10, 642}, {10, 644}, {10, 646}, {10, 648}, {10, 650}, {10, 652},
{10, 654}, {10, 656}, {10, 658}, {10, 660}, {10, 662}, {10, 664},
{10, 666}, {10, 668}, {10, 670}, {10, 672}, {10, 674}, {10, 676},
{10, 678}, {10, 680}, {10, 682}, {10, 684}, {10, 686}, {10, 688},
{10, 690}, {10, 692}, {10, 694}, {10, 696}, {10, 698}, {10, 700},
{10, 702}, {10, 704}, {10, 706}, {10, 708}, {10, 710}, {10, 712},
{10, 714}, {10, 716}, {10, 718}, {10, 720}, {10, 722}, {10, 724},
{10, 726}, {10, 728}, {10, 730}, {10, 732}, {10, 734}, {10, 736},
{10, 738}, {10, 740}, {10, 742}, {10, 744}, {10, 746}, {10, 748},
{10, 750}, {10, 752}, {10, 754}, {10, 756}, {10, 758}, {10, 760},
{10, 762}, {10, 764}, {10, 766}, {10, 768}, {10, 770}, {10, 772},
{10, 774}, {10, 776}, {10, 778}, {10, 780}, {10, 782}, {10, 784},
{10, 786}, {10, 788}, {10, 790}, {10, 792}, {10, 794}, {10, 796},
{10, 798}, {10, 800}, {10, 802}, {10, 804}, {10, 806}, {10, 808},
{10, 810}, {10, 812}, {10, 814}, {10, 816}, {10, 818}, {10, 820},
{10, 822}, {10, 824}, {10, 826}, {10, 828}, {10, 830}, {10, 832},
{10, 834}, {10, 836}, {10, 838}, {10, 840}, {10, 842}, {10, 844},
{10, 846}, {10, 848}, {10, 850}, {10, 852}, {10, 854}, {10, 856},
{10, 858}, {10, 860}, {10, 862}, {10, 864}, {10, 866}, {10, 868},
{10, 870}, {10, 872}, {10, 874}, {10, 876}, {10, 878}, {10, 880},
{10, 882}, {10, 884}, {10, 886}, {10, 888}, {10, 890}, {10, 892},
{10, 894}, {10, 896}, {10, 898}, {10, 900}, {10, 902}, {10, 904},
{10, 906}, {10, 908}, {10, 910}, {10, 912}, {10, 914}, {10, 916},
{10, 918}, {10, 920}, {10, 922}, {10, 924}, {10, 926}, {10, 928},
{10, 930}, {10, 932}, {10, 934}, {10, 936}, {10, 938}, {10, 940},
{10, 942}, {10, 944}, {10, 946}, {10, 948}, {10, 950}, {10, 952},
{10, 954}, {10, 956}, {10, 958}, {10, 960}, {10, 962}, {10, 964},
{10, 966}, {10, 968}, {10, 970}, {10, 972}, {10, 974}, {10, 976},
{10, 978}, {10, 980}, {10, 982}, {10, 984}, {10, 986}, {10, 988},
{10, 990}, {10, 992}, {10, 994}, {10, 996}, {10, 998}, {10, 1000},
{10, 1002}, {10, 1004}, {10, 1006}, {10, 1008}, {10, 1010}, {10, 1012},
{10, 1014}, {10, 1016}, {10, 1018}, {10, 1020}, {10, 1022}, {10, 1024},
{10, 1026}, {10, 1028}, {10, 1030}, {10, 1032}, {10, 1034}, {10, 1036},
{10, 1038}, {10, 1040}, {10, 1042}, {10, 1044}, {10, 1046}, {10, 1048},
{10, 1050}, {10, 1052}, {10, 1054}, {10, 1056}, {10, 1058}, {10, 1060},
{10, 1062}, {10, 1064}, {10, 1066}, {10, 1068}, {10, 1070}, {10, 1072},
{10, 1074}, {10, 1076}, {10, 1078}, {10, 1080}, {10, 1082}, {10, 1084},
{10, 1086}, {10, 1088}, {10, 1090}, {10, 1092}, {10, 1094}, {10, 1096},
{10, 1098}, {10, 1100}, {10, 1102}, {10, 1104}, {10, 1106}, {10, 1108},
{10, 1110}, {10, 1112}, {10, 1114}, {10, 1116}, {10, 1118}, {10, 1120},
{10, 1122}, {10, 1124}, {10, 1126}, {10, 1128}, {10, 1130}, {10, 1132},
{10, 1134}, {10, 1136}, {10, 1138}, {10, 1140}, {10, 1142}, {10, 1144},
{10, 1146}, {10, 1148}, {10, 1150}, {10, 1152}, {10, 1154}, {10, 1156},
{10, 1158}, {10, 1160}, {10, 1162}, {10, 1164}, {10, 1166}, {10, 1168},
{10, 1170}, {10, 1172}, {10, 1174}, {10, 1176}, {10, 1178}, {10, 1180},
{10, 1182}, {10, 1184}, {10, 1186}, {10, 1188}, {10, 1190}, {10, 1192},
{10, 1194}, {10, 1196}, {10, 1198}, {10, 1200}, {10, 1202}, {10, 1204},
{10, 1206}, {10, 1208}, {10, 1210}, {10, 1212}, {10, 1214}, {10, 1216},
{10, 1218}, {10, 1220}, {10, 1222}, {10, 1224}, {10, 1226}, {10, 1228},
{10, 1230}, {10, 1232}, {10, 1234}, {10, 1236}, {10, 1238}, {10, 1240},
{10, 1242}, {10, 1244}, {10, 1246}, {10, 1248}, {10, 1250}, {10, 1252},
{10, 1254}, {10, 1256}, {10, 1258}, {10, 1260}, {10, 1262}, {10, 1264},
{10, 1266}, {10, 1268}, {10, 1270}, {10, 1272}, {10, 1274}, {10, 1276},
{10, 1278}, {10, 1280}, {10, 1282}, {10, 1284}, {10, 1286}, {10, 1288},
{10, 1290}, {10, 1292}, {10, 1294}, {10, 1296}, {10, 1298}, {10, 1300},
{10, 1302}, {10, 1304}, {10, 1306}, {10, 1308}, {10, 1310}, {10, 1312},
{10, 1314}, {10, 1316}, {10, 1318}, {10, 1320}, {10, 1322}, {10, 1324},
{10, 1326}, {10, 1328}, {10, 1330}, {10, 1332}, {10, 1334}, {10, 1336},
{10, 1338}, {10, 1340}, {10, 1342}, {10, 1344}, {10, 1346}, {10, 1348},
{10, 1350}, {10, 1352}, {10, 1354}, {10, 1356}, {10, 1358}, {10, 1360},
{10, 1362}, {10, 1364}, {10, 1366}, {10, 1368}, {10, 1370}, {10, 1372},
{10, 1374}, {10, 1376}, {10, 1378}, {10, 1380}, {10, 1382}, {10, 1384},
{10, 1386}, {10, 1388}, {10, 1390}, {10, 1392}, {10, 1394}, {10, 1396},
{10, 1398}, {10, 1400}, {10, 1402}, {10, 1404}, {10, 1406}, {10, 1408},
{10, 1410}, {10, 1412}, {10, 1414}, {10, 1416}, {10, 1418}, {10, 1420},
{10, 1422}, {10, 1424}, {10, 1426}, {10, 1428}, {10, 1430}, {10, 1432},
{10, 1434}, {10, 1436}, {10, 1438}, {10, 1440}, {10, 1442}, {10, 1444},
{10, 1446}, {10, 1448}, {10, 1450}, {10, 1452}, {10, 1454}, {10, 1456},
{10, 1458}, {10, 1460}, {10, 1462}, {10, 1464}, {10, 1466}, {10, 1468},
{10, 1470}, {10, 1472}, {10, 1474}, {10, 1476}, {10, 1478}, {10, 1480},
{10, 1482}, {10, 1484}, {10, 1486}, {10, 1488}, {10, 1490}, {10, 1492},
{10, 1494}, {10, 1496}, {10, 1498}, {10, 1500}, {10, 1502}, {10, 1504},
{10, 1506}, {10, 1508}, {10, 1510}, {10, 1512}, {10, 1514}, {10, 1516},
{10, 1518}, {10, 1520}, {10, 1522}, {10, 1524}, {10, 1526}, {10, 1528},
{10, 1530}, {10, 1532}, {10, 1534}, {10, 1536}, {10, 1538}, {10, 1540},
{10, 1542}, {10, 1544}, {10, 1546}, {10, 1548}, {10, 1550}, {10, 1552},
{10, 1554}, {10, 1556}, {10, 1558}, {10, 1560}, {10, 1562}, {10, 1564},
{10, 1566}, {10, 1568}, {10, 1570}, {10, 1572}, {10, 1574}, {10, 1576},
{10, 1578}, {10, 1580}, {10, 1582}, {10, 1584}, {10, 1586}, {10, 1588},
{10, 1590}, {10, 1592}, {10, 1594}, {10, 1596}, {10, 1598}, {10, 1600},
{10, 1602}, {10, 1604}, {10, 1606}, {10, 1608}, {10, 1610}, {10, 1612},
{10, 1614}, {10, 1616}, {10, 1618}, {10, 1620}, {10, 1622}, {10, 1624},
{10, 1626}, {10, 1628}, {10, 1630}, {10, 1632}, {10, 1634}, {10, 1636},
{10, 1638}, {10, 1640}, {10, 1642}, {10, 1644}, {10, 1646}, {10, 1648},
{10, 1650}, {10, 1652}, {10, 1654}, {10, 1656}, {10, 1658}, {10, 1660},
{10, 1662}, {10, 1664}, {10, 1666}, {10, 1668}, {10, 1670}, {10, 1672},
{10, 1674}, {10, 1676}, {10, 1678}, {10, 1680}, {10, 1682}, {10, 1684},
{10, 1686}, {10, 1688}, {10, 1690}, {10, 1692}, {10, 1694}, {10, 1696},
{10, 1698}, {10, 1700}, {10, 1702}, {10, 1704}, {10, 1706}, {10, 1708},
{10, 1710}, {10, 1712}, {10, 1714}, {10, 1716}, {10, 1718}, {10, 1720},
{10, 1722}, {10, 1724}, {10, 1726}, {10, 1728}, {10, 1730}, {10, 1732},
{10, 1734}, {10, 1736}, {10, 1738}, {10, 1740}, {10, 1742}, {10, 1744},
{10, 1746}, {10, 1748}, {10, 1750}, {10, 1752}, {10, 1754}, {10, 1756},
{10, 1758}, {10, 1760}, {10, 1762}, {10, 1764}, {10, 1766}, {10, 1768},
{10, 1770}, {10, 1772}, {10, 1774}, {10, 1776}, {10, 1778}, {10, 1780},
{10, 1782}, {10, 1784}, {10, 1786}, {10, 1788}, {10, 1790}, {10, 1792},
{10, 1794}, {10, 1796}, {10, 1798}, {10, 1800}, {10, 1802}, {10, 1804},
{10, 1806}, {10, 1808}, {10, 1810}, {10, 1812}, {10, 1814}, {10, 1816},
{10, 1818}, {10, 1820}, {10, 1822}, {10, 1824}, {10, 1826}, {10, 1828},
{10, 1830}, {10, 1832}, {10, 1834}, {10, 1836}, {10, 1838}, {10, 1840},
{10, 1842}, {10, 1844}, {10, 1846}, {10, 1848}, {10, 1850}, {10, 1852},
{10, 1854}, {10, 1856}, {10, 1858}, {10, 1860}, {10, 1862}, {10, 1864},
{10, 1866}, {10, 1868}, {10, 1870}, {10, 1872}, {10, 1874}, {10, 1876},
{10, 1878}, {10, 1880}, {10, 1882}, {10, 1884}, {10, 1886}, {10, 1888},
{10, 1890}, {10, 1892}, {10, 1894}, {10, 1896}, {10, 1898}, {10, 1900},
{10, 1902}, {10, 1904}, {10, 1906}, {10, 1908}, {10, 1910}, {10, 1912},
{10, 1914}, {10, 1916}, {10, 1918}, {10, 1920}, {10, 1922}, {10, 1924},
{10, 1926}, {10, 1928}, {10, 1930}, {10, 1932}, {10, 1934}, {10, 1936},
{10, 1938}, {10, 1940}, {10, 1942}, {10, 1944}, {10, 1946}, {10, 1948},
{10, 1950}, {10, 1952}, {10, 1954}, {10, 1956}, {10, 1958}, {10, 1960},
{10, 1962}, {10, 1964}, {10, 1966}, {10, 1968}, {10, 1970}, {10, 1972},
{10, 1974}, {10, 1976}, {10, 1978}, {10, 1980}, {10, 1982}, {10, 1984},
{10, 1986}, {10, 1988}, {10, 1990}, {10, 1992}, {10, 1994}, {10, 1996},
{10, 1998}, {10, 2000}, {10, 2002}, {10, 2004}, {10, 2006}, {10, 2008},
{10, 2010}, {10, 2012}, {10, 2014}, {10, 2016}, {10, 2018}, {10, 2020},
{10, 2022}, {10, 2024}, {10, 2026}, {10, 2028}, {10, 2030}, {10, 2032},
{10, 2034}, {10, 2036}, {10, 2038}, {10, 2040}, {10, 2042}, {10, 2044},
{10, 2046}, {10, 2048}, {10, 2050}, {10, 2052}, {10, 2054}, {10, 2056},
{10, 2058}, {10, 2060}, {10, 2062}, {10, 2064}, {10, 2066}, {10, 2068},
{10, 2070}, {10, 2072}, {10, 2074}, {10, 2076}, {10, 2078}, {10, 2080},
{10, 2082}, {10, 2084}, {10, 2086}, {10, 2088}, {10, 2090}, {10, 2092},
{10, 2094}, {10, 2096}, {10, 2098}, {10, 2100}, {10, 2102}, {10, 2104},
{10, 2106}, {10, 2108}, {10, 2110}, {10, 2112}, {10, 2114}, {10, 2116},
{10, 2118}, {10, 2120}, {10, 2122}, {10, 2124}, {10, 2126}, {10, 2128},
{10, 2130}, {10, 2132}, {10, 2134}, {10, 2136}, {10, 2138}, {10, 2140},
{10, 2142}, {10, 2144}, {10, 2146}, {10, 2148}, {10, 2150}, {10, 2152},
{10, 2154}, {10, 2156}, {10, 2158}, {10, 2160}, {10, 2162}, {10, 2164},
{10, 2166}, {10, 2168}, {10, 2170}, {10, 2172}, {10, 2174}, {10, 2176},
{10, 2178}, {10, 2180}, {10, 2182}, {10, 2184}, {10, 2186}, {10, 2188},
{10, 2190}, {10, 2192}, {10, 2194}, {10, 2196}, {10, 2198}, {10, 2200},
{10, 2202}, {10, 2204}, {10, 2206}, {10, 2208}, {10, 2210}, {10, 2212},
{10, 2214}, {10, 2216}, {10, 2218}, {10, 2220}, {10, 2222}, {10, 2224},
{10, 2226}, {10, 2228}, {10, 2230}, {10, 2232}, {10, 2234}, {10, 2236},
{10, 2238}, {10, 2240}, {10, 2242}, {10, 2244}, {10, 2246}, {10, 2248},
{10, 2250}, {10, 2252}, {10, 2254}, {10, 2256}, {10, 2258}, {10, 2260},
{10, 2262}, {10, 2264}, {10, 2266}, {10, 2268}, {10, 2270}, {10, 2272},
{10, 2274}, {10, 2276}, {10, 2278}, {10, 2280}, {10, 2282}, {10, 2284},
{10, 2286}, {10, 2288}, {10, 2290}, {10, 2292}, {10, 2294}, {10, 2296},
{10, 2298}, {10, 2300}, {10, 2302}, {10, 2304}, {10, 2306}, {10, 2308},
{10, 2310}, {10, 2312}, {10, 2314}, {10, 2316}, {10, 2318}, {10, 2320},
{10, 2322}, {10, 2324}, {10, 2326}, {10, 2328}, {10, 2330}, {10, 2332},
{10, 2334}, {10, 2336}, {10, 2338}, {10, 2340}, {10, 2342}, {10, 2344},
{10, 2346}, {10, 2348}, {10, 2350}, {10, 2352}, {10, 2354}, {10, 2356},
{10, 2358}, {10, 2360}, {10, 2362}, {10, 2364}, {10, 2366}, {10, 2368},
{10, 2370}, {10, 2372}, {10, 2374}, {10, 2376}, {10, 2378}, {10, 2380},
{10, 2382}, {10, 2384}, {10, 2386}, {10, 2388}, {10, 2390}, {10, 2392},
{10, 2394}, {10, 2396}, {10, 2398}, {10, 2400}, {10, 2402}, {10, 2404},
{10, 2406}, {10, 2408}, {10, 2410}, {10, 2412}, {10, 2414}, {10, 2416},
{10, 2418}, {10, 2420}, {10, 2422}, {10, 2424}, {10, 2426}, {10, 2428},
{10, 2430}, {10, 2432}, {10, 2434}, {10, 2436}, {10, 2438}, {10, 2440},
{10, 2442}, {10, 2444}, {10, 2446}, {10, 2448}, {10, 2450}, {10, 2452},
{10, 2454}, {10, 2456}, {10, 2458}, {10, 2460}, {10, 2462}, {10, 2464},
{10, 2466}, {10, 2468}, {10, 2470}, {10, 2472}, {10, 2474}, {10, 2476},
{10, 2478}, {10, 2480}, {10, 2482}, {10, 2484}, {10, 2486}, {10, 2488},
{10, 2490}, {10, 2492}, {10, 2494}, {10, 2496}, {10, 2498}, {10, 2500},
{10, 2502}, {10, 2504}, {10, 2506}, {10, 2508}, {10, 2510}, {10, 2512},
{10, 2514}, {10, 2516}, {10, 2518}, {10, 2520}, {10, 2522}, {10, 2524},
{10, 2526}, {10, 2528}, {10, 2530}, {10, 2532}, {10, 2534}, {10, 2536},
{10, 2538}, {10, 2540}, {10, 2542}, {10, 2544}, {10, 2546}, {10, 2548},
{10, 2550}, {10, 2552}, {10, 2554}, {10, 2556}, {10, 2558}, {10, 2560},
{10, 2562}, {10, 2564}, {10, 2566}, {10, 2568}, {10, 2570}, {10, 2572},
{10, 2574}, {10, 2576}, {10, 2578}, {10, 2580}, {10, 2582}, {10, 2584},
{10, 2586}, {10, 2588}, {10, 2590}, {10, 2592}, {10, 2594}, {10, 2596},
{10, 2598}, {10, 2600}, {10, 2602}, {10, 2604}, {10, 2606}, {10, 2608},
{10, 2610}, {10, 2612}, {10, 2614}, {10, 2616}, {10, 2618}, {10, 2620},
{10, 2622}, {10, 2624}, {10, 2626}, {10, 2628}, {10, 2630}, {10, 2632},
{10, 2634}, {10, 2636}, {10, 2638}, {10, 2640}, {10, 2642}, {10, 2644},
{10, 2646}, {10, 2648}, {10, 2650}, {10, 2652}, {10, 2654}, {10, 2656},
{10, 2658}, {10, 2660}, {10, 2662}, {10, 2664}, {10, 2666}, {10, 2668},
{10, 2670}, {10, 2672}, {10, 2674}, {10, 2676}, {10, 2678}, {10, 2680},
{10, 2682}, {10, 2684}, {10, 2686}, {10, 2688}, {10, 2690}, {10, 2692},
{10, 2694}, {10, 2696}, {10, 2698}, {10, 2700}, {10, 2702}, {10, 2704},
{10, 2706}, {10, 2708}, {10, 2710}, {10, 2712}, {10, 2714}, {10, 2716},
{10, 2718}, {10, 2720}, {10, 2722}, {10, 2724}, {10, 2726}, {10, 2728},
{10, 2730}, {10, 2732}, {10, 2734}, {10, 2736}, {10, 2738}, {10, 2740},
{10, 2742}, {10, 2744}, {10, 2746}, {10, 2748}, {10, 2750}, {10, 2752},
{10, 2754}, {10, 2756}, {10, 2758}, {10, 2760}, {10, 2762}, {10, 2764},
{10, 2766}, {10, 2768}, {10, 2770}, {10, 2772}, {10, 2774}, {10, 2776},
{10, 2778}, {10, 2780}, {10, 2782}, {10, 2784}, {10, 2786}, {10, 2788},
{10, 2790}, {10, 2792}, {10, 2794}, {10, 2796}, {10, 2798}, {10, 2800},
{10, 2802}, {10, 2804}, {10, 2806}, {10, 2808}, {10, 2810}, {10, 2812},
{10, 2814}, {10, 2816}, {10, 2818}, {10, 2820}, {10, 2822}, {10, 2824},
{10, 2826}, {10, 2828}, {10, 2830}, {10, 2832}, {10, 2834}, {10, 2836},
{10, 2838}, {10, 2840}, {10, 2842}, {10, 2844}, {10, 2846}, {10, 2848},
{10, 2850}, {10, 2852}, {10, 2854}, {10, 2856}, {10, 2858}, {10, 2860},
{10, 2862}, {10, 2864}, {10, 2866}, {10, 2868}, {10, 2870}, {10, 2872},
{10, 2874}, {10, 2876}, {10, 2878}, {10, 2880}, {10, 2882}, {10, 2884},
{10, 2886}, {10, 2888}, {10, 2890}, {10, 2892}, {10, 2894}, {10, 2896},
{10, 2898}, {10, 2900}, {10, 2902}, {10, 2904}, {10, 2906}, {10, 2908},
{10, 2910}, {10, 2912}, {10, 2914}, {10, 2916}, {10, 2918}, {10, 2920},
{10, 2922}, {10, 2924}, {10, 2926}, {10, 2928}, {10, 2930}, {10, 2932},
{10, 2934}, {10, 2936}, {10, 2938}, {10, 2940}, {10, 2942}, {10, 2944},
{10, 2946}, {10, 2948}, {10, 2950}, {10, 2952}, {10, 2954}, {10, 2956},
{10, 2958}, {10, 2960}, {10, 2962}, {10, 2964}, {10, 2966}, {10, 2968},
{10, 2970}, {10, 2972}, {10, 2974}, {10, 2976}, {10, 2978}, {10, 2980},
{10, 2982}, {10, 2984}, {10, 2986}, {10, 2988}, {10, 2990}, {10, 2992},
{10, 2994}, {10, 2996}, {10, 2998}, {10, 3000}, {10, 3002}, {10, 3004},
{10, 3006}, {10, 3008}, {10, 3010}, {10, 3012}, {10, 3014}, {10, 3016},
{10, 3018}, {10, 3020}, {10, 3022}, {10, 3024}, {10, 3026}, {10, 3028},
{10, 3030}, {10, 3032}, {10, 3034}, {10, 3036}, {10, 3038}, {10, 3040},
{10, 3042}, {10, 3044}, {10, 3046}, {10, 3048}, {10, 3050}, {10, 3052},
{10, 3054}, {10, 3056}, {10, 3058}, {10, 3060}, {10, 3062}, {10, 3064},
{10, 3066}, {10, 3068}, {10, 3070}, {10, 3072}, {10, 3074}, {10, 3076},
{10, 3078}, {10, 3080}, {10, 3082}, {10, 3084}, {10, 3086}, {10, 3088},
{10, 3090}, {10, 3092}, {10, 3094}, {10, 3096}, {10, 3098}, {10, 3100},
{10, 3102}, {10, 3104}, {10, 3106}, {10, 3108}, {10, 3110}, {10, 3112},
{10, 3114}, {10, 3116}, {10, 3118}, {10, 3120}, {10, 3122}, {10, 3124},
{10, 3126}, {10, 3128}, {10, 3130}, {10, 3132}, {10, 3134}, {10, 3136},
{10, 3138}, {10, 3140}, {10, 3142}, {10, 3144}, {10, 3146}, {10, 3148},
{10, 3150}, {10, 3152}, {10, 3154}, {10, 3156}, {10, 3158}, {10, 3160},
{10, 3162}, {10, 3164}, {10, 3166}, {10, 3168}, {10, 3170}, {10, 3172},
{10, 3174}, {10, 3176}, {10, 3178}, {10, 3180}, {10, 3182}, {10, 3184},
{10, 3186}, {10, 3188}, {10, 3190}, {10, 3192}, {10, 3194}, {10, 3196},
{10, 3198}, {10, 3200}, {10, 3202}, {10, 3204}, {10, 3206}, {10, 3208},
{10, 3210}, {10, 3212}, {10, 3214}, {10, 3216}, {10, 3218}, {10, 3220},
{10, 3222}, {10, 3224}, {10, 3226}, {10, 3228}, {10, 3230}, {10, 3232},
{10, 3234}, {10, 3236}, {10, 3238}, {10, 3240}, {10, 3242}, {10, 3244},
{10, 3246}, {10, 3248}, {10, 3250}, {10, 3252}, {10, 3254}, {10, 3256},
{10, 3258}, {10, 3260}, {10, 3262}, {10, 3264}, {10, 3266}, {10, 3268},
{10, 3270}, {10, 3272}, {10, 3274}, {10, 3276}, {10, 3278}, {10, 3280},
{10, 3282}, {10, 3284}, {10, 3286}, {10, 3288}, {10, 3290}, {10, 3292},
{10, 3294}, {10, 3296}, {10, 3298}, {10, 3300}, {10, 3302}, {10, 3304},
{10, 3306}, {10, 3308}, {10, 3310}, {10, 3312}, {10, 3314}, {10, 3316},
{10, 3318}, {10, 3320}, {10, 3322}, {10, 3324}, {10, 3326}, {10, 3328},
{10, 3330}, {10, 3332}, {10, 3334}, {10, 3336}, {10, 3338}, {10, 3340},
{10, 3342}, {10, 3344}, {10, 3346}, {10, 3348}, {10, 3350}, {10, 3352},
{10, 3354}, {10, 3356}, {10, 3358}, {10, 3360}, {10, 3362}, {10, 3364},
{10, 3366}, {10, 3368}, {10, 3370}, {10, 3372}, {10, 3374}, {10, 3376},
{10, 3378}, {10, 3380}, {10, 3382}, {10, 3384}, {10, 3386}, {10, 3388},
{10, 3390}, {10, 3392}, {10, 3394}, {10, 3396}, {10, 3398}, {10, 3400},
{10, 3402}, {10, 3404}, {10, 3406}, {10, 3408}, {10, 3410}, {10, 3412},
{10, 3414}, {10, 3416}, {10, 3418}, {10, 3420}, {10, 3422}, {10, 3424},
{10, 3426}, {10, 3428}, {10, 3430}, {10, 3432}, {10, 3434}, {10, 3436},
{10, 3438}, {10, 3440}, {10, 3442}, {10, 3444}, {10, 3446}, {10, 3448},
{10, 3450}, {10, 3452}, {10, 3454}, {10, 3456}, {10, 3458}, {10, 3460},
{10, 3462}, {10, 3464}, {10, 3466}, {10, 3468}, {10, 3470}, {10, 3472},
{10, 3474}, {10, 3476}, {10, 3478}, {10, 3480}, {10, 3482}, {10, 3484},
{10, 3486}, {10, 3488}, {10, 3490}, {10, 3492}, {10, 3494}, {10, 3496},
{10, 3498}, {10, 3500}, {10, 3502}, {10, 3504}, {10, 3506}, {10, 3508},
{10, 3510}, {10, 3512}, {10, 3514}, {10, 3516}, {10, 3518}, {10, 3520},
{10, 3522}, {10, 3524}, {10, 3526}, {10, 3528}, {10, 3530}, {10, 3532},
{10, 3534}, {10, 3536}, {10, 3538}, {10, 3540}, {10, 3542}, {10, 3544},
{10, 3546}, {10, 3548}, {10, 3550}, {10, 3552}, {10, 3554}, {10, 3556},
{10, 3558}, {10, 3560}, {10, 3562}, {10, 3564}, {10, 3566}, {10, 3568},
{10, 3570}, {10, 3572}, {10, 3574}, {10, 3576}, {10, 3578}, {10, 3580},
{10, 3582}, {10, 3584}, {10, 3586}, {10, 3588}, {10, 3590}, {10, 3592},
{10, 3594}, {10, 3596}, {10, 3598}, {10, 3600}, {10, 3602}, {10, 3604},
{10, 3606}, {10, 3608}, {10, 3610}, {10, 3612}, {10, 3614}, {10, 3616},
{10, 3618}, {10, 3620}, {10, 3622}, {10, 3624}, {10, 3626}, {10, 3628},
{10, 3630}, {10, 3632}, {10, 3634}, {10, 3636}, {10, 3638}, {10, 3640},
{10, 3642}, {10, 3644}, {10, 3646}, {10, 3648}, {10, 3650}, {10, 3652},
{10, 3654}, {10, 3656}, {10, 3658}, {10, 3660}, {10, 3662}, {10, 3664},
{10, 3666}, {10, 3668}, {10, 3670}, {10, 3672}, {10, 3674}, {10, 3676},
{10, 3678}, {10, 3680}, {10, 3682}, {10, 3684}, {10, 3686}, {10, 3688},
{10, 3690}, {10, 3692}, {10, 3694}, {10, 3696}, {10, 3698}, {10, 3700},
{10, 3702}, {10, 3704}, {10, 3706}, {10, 3708}, {10, 3710}, {10, 3712},
{10, 3714}, {10, 3716}, {10, 3718}, {10, 3720}, {10, 3722}, {10, 3724},
{10, 3726}, {10, 3728}, {10, 3730}, {10, 3732}, {10, 3734}, {10, 3736},
{10, 3738}, {10, 3740}, {10, 3742}, {10, 3744}, {10, 3746}, {10, 3748},
{10, 3750}, {10, 3752}, {10, 3754}, {10, 3756}, {10, 3758}, {10, 3760},
{10, 3762}, {10, 3764}, {10, 3766}, {10, 3768}, {10, 3770}, {10, 3772},
{10, 3774}, {10, 3776}, {10, 3778}, {10, 3780}, {10, 3782}, {10, 3784},
{10, 3786}, {10, 3788}, {10, 3790}, {10, 3792}, {10, 3794}, {10, 3796},
{10, 3798}, {10, 3800}, {10, 3802}, {10, 3804}, {10, 3806}, {10, 3808},
{10, 3810}, {10, 3812}, {10, 3814}, {10, 3816}, {10, 3818}, {10, 3820},
{10, 3822}, {10, 3824}, {10, 3826}, {10, 3828}, {10, 3830}, {10, 3832},
{10, 3834}, {10, 3836}, {10, 3838}, {10, 3840}, {10, 3842}, {10, 3844},
{10, 3846}, {10, 3848}, {10, 3850}, {10, 3852}, {10, 3854}, {10, 3856},
{10, 3858}, {10, 3860}, {10, 3862}, {10, 3864}, {10, 3866}, {10, 3868},
{10, 3870}, {10, 3872}, {10, 3874}, {10, 3876}, {10, 3878}, {10, 3880},
{10, 3882}, {10, 3884}, {10, 3886}, {10, 3888}, {10, 3890}, {10, 3892},
{10, 3894}, {10, 3896}, {10, 3898}, {10, 3900}, {10, 3902}, {10, 3904},
{10, 3906}, {10, 3908}, {10, 3910}, {10, 3912}, {10, 3914}, {10, 3916},
{10, 3918}, {10, 3920}, {10, 3922}, {10, 3924}, {10, 3926}, {10, 3928},
{10, 3930}, {10, 3932}, {10, 3934}, {10, 3936}, {10, 3938}, {10, 3940},
{10, 3942}, {10, 3944}, {10, 3946}, {10, 3948}, {10, 3950}, {10, 3952},
{10, 3954}, {10, 3956}, {10, 3958}, {10, 3960}
};

vp8/encoder/denoising.c

@@ -0,0 +1,212 @@
/*
 * Copyright (c) 2012 The WebM project authors. All Rights Reserved.
 *
 * Use of this source code is governed by a BSD-style license
 * that can be found in the LICENSE file in the root of the source
 * tree. An additional intellectual property rights grant can be found
 * in the file PATENTS. All contributing project authors may
 * be found in the AUTHORS file in the root of the source tree.
 */

#include <assert.h>

#include "denoising.h"

#include "vp8/common/reconinter.h"
#include "vpx/vpx_integer.h"
#include "vpx_mem/vpx_mem.h"
#include "vpx_rtcd.h"

static const unsigned int NOISE_MOTION_THRESHOLD = 20*20;
static const unsigned int NOISE_DIFF2_THRESHOLD = 75;
// SSE_DIFF_THRESHOLD is selected as ~95% confidence assuming var(noise) ~= 100.
static const unsigned int SSE_DIFF_THRESHOLD = 16*16*20;
static const unsigned int SSE_THRESHOLD = 16*16*40;

static uint8_t blend(uint8_t state, uint8_t sample, uint8_t factor_q8)
{
    return (uint8_t)(
        (((uint16_t)factor_q8 * ((uint16_t)state) +           // Q8
          (uint16_t)(256 - factor_q8) * ((uint16_t)sample)) + // Q8
         128)
        >> 8);
}
static unsigned int denoiser_motion_compensate(YV12_BUFFER_CONFIG* src,
                                               YV12_BUFFER_CONFIG* dst,
                                               MACROBLOCK* x,
                                               unsigned int best_sse,
                                               unsigned int zero_mv_sse,
                                               int recon_yoffset,
                                               int recon_uvoffset)
{
    MACROBLOCKD filter_xd = x->e_mbd;
    int mv_col;
    int mv_row;
    int sse_diff = zero_mv_sse - best_sse;

    // Compensate the running average.
    filter_xd.pre.y_buffer = src->y_buffer + recon_yoffset;
    filter_xd.pre.u_buffer = src->u_buffer + recon_uvoffset;
    filter_xd.pre.v_buffer = src->v_buffer + recon_uvoffset;
    // Write the compensated running average to the destination buffer.
    filter_xd.dst.y_buffer = dst->y_buffer + recon_yoffset;
    filter_xd.dst.u_buffer = dst->u_buffer + recon_uvoffset;
    filter_xd.dst.v_buffer = dst->v_buffer + recon_uvoffset;
    // Use the best MV for the compensation.
    filter_xd.mode_info_context->mbmi.ref_frame = LAST_FRAME;
    filter_xd.mode_info_context->mbmi.mode = filter_xd.best_sse_inter_mode;
    filter_xd.mode_info_context->mbmi.mv = filter_xd.best_sse_mv;
    filter_xd.mode_info_context->mbmi.need_to_clamp_mvs =
        filter_xd.need_to_clamp_best_mvs;
    mv_col = filter_xd.best_sse_mv.as_mv.col;
    mv_row = filter_xd.best_sse_mv.as_mv.row;

    if (filter_xd.mode_info_context->mbmi.mode <= B_PRED ||
        (mv_row*mv_row + mv_col*mv_col <= NOISE_MOTION_THRESHOLD &&
         sse_diff < SSE_DIFF_THRESHOLD))
    {
        // Handle intra blocks as referring to the last frame with zero motion
        // and let the absolute pixel difference affect the filter factor.
        // Also treat a small amount of motion as random walk due to noise,
        // as long as it doesn't mean a much bigger error.
        // Note that any changes to the mode info only affect the denoising.
        filter_xd.mode_info_context->mbmi.ref_frame = LAST_FRAME;
        filter_xd.mode_info_context->mbmi.mode = ZEROMV;
        filter_xd.mode_info_context->mbmi.mv.as_int = 0;
        x->e_mbd.best_sse_inter_mode = ZEROMV;
        x->e_mbd.best_sse_mv.as_int = 0;
        best_sse = zero_mv_sse;
    }

    if (!x->skip)
    {
        vp8_build_inter_predictors_mb(&filter_xd);
    }
    else
    {
        vp8_build_inter16x16_predictors_mb(&filter_xd,
                                           filter_xd.dst.y_buffer,
                                           filter_xd.dst.u_buffer,
                                           filter_xd.dst.v_buffer,
                                           filter_xd.dst.y_stride,
                                           filter_xd.dst.uv_stride);
    }
    return best_sse;
}
static void denoiser_filter(YV12_BUFFER_CONFIG* mc_running_avg,
                            YV12_BUFFER_CONFIG* running_avg,
                            MACROBLOCK* signal,
                            unsigned int motion_magnitude2,
                            int y_offset,
                            int uv_offset)
{
    unsigned char* sig = signal->thismb;
    int sig_stride = 16;
    unsigned char* mc_running_avg_y = mc_running_avg->y_buffer + y_offset;
    int mc_avg_y_stride = mc_running_avg->y_stride;
    unsigned char* running_avg_y = running_avg->y_buffer + y_offset;
    int avg_y_stride = running_avg->y_stride;
    int r, c;

    for (r = 0; r < 16; r++)
    {
        for (c = 0; c < 16; c++)
        {
            int diff;
            int absdiff = 0;
            unsigned int filter_coefficient;
            absdiff = sig[c] - mc_running_avg_y[c];
            absdiff = absdiff > 0 ? absdiff : -absdiff;
            assert(absdiff >= 0 && absdiff < 256);
            filter_coefficient = (255 << 8) / (256 + ((absdiff * 330) >> 3));
            // Allow some additional filtering of static blocks, or blocks
            // with very small motion vectors.
            filter_coefficient +=
                filter_coefficient / (3 + (motion_magnitude2 >> 3));
            filter_coefficient =
                filter_coefficient > 255 ? 255 : filter_coefficient;
            running_avg_y[c] = blend(mc_running_avg_y[c], sig[c],
                                     filter_coefficient);
            diff = sig[c] - running_avg_y[c];

            if (diff * diff < NOISE_DIFF2_THRESHOLD)
            {
                // Replace with mean to suppress the noise.
                sig[c] = running_avg_y[c];
            }
            else
            {
                // Replace the filter state with the signal since the change
                // in this pixel isn't classified as noise.
                running_avg_y[c] = sig[c];
            }
        }
        sig += sig_stride;
        mc_running_avg_y += mc_avg_y_stride;
        running_avg_y += avg_y_stride;
    }
}
int vp8_denoiser_allocate(VP8_DENOISER *denoiser, int width, int height)
{
    assert(denoiser);
    denoiser->yv12_running_avg.flags = 0;

    if (vp8_yv12_alloc_frame_buffer(&(denoiser->yv12_running_avg), width,
                                    height, VP8BORDERINPIXELS) < 0)
    {
        vp8_denoiser_free(denoiser);
        return 1;
    }

    denoiser->yv12_mc_running_avg.flags = 0;

    if (vp8_yv12_alloc_frame_buffer(&(denoiser->yv12_mc_running_avg), width,
                                    height, VP8BORDERINPIXELS) < 0)
    {
        vp8_denoiser_free(denoiser);
        return 1;
    }

    vpx_memset(denoiser->yv12_running_avg.buffer_alloc, 0,
               denoiser->yv12_running_avg.frame_size);
    vpx_memset(denoiser->yv12_mc_running_avg.buffer_alloc, 0,
               denoiser->yv12_mc_running_avg.frame_size);
    return 0;
}

void vp8_denoiser_free(VP8_DENOISER *denoiser)
{
    assert(denoiser);
    vp8_yv12_de_alloc_frame_buffer(&denoiser->yv12_running_avg);
    vp8_yv12_de_alloc_frame_buffer(&denoiser->yv12_mc_running_avg);
}
void vp8_denoiser_denoise_mb(VP8_DENOISER *denoiser,
                             MACROBLOCK *x,
                             unsigned int best_sse,
                             unsigned int zero_mv_sse,
                             int recon_yoffset,
                             int recon_uvoffset)
{
    int mv_row;
    int mv_col;
    unsigned int motion_magnitude2;

    // Motion compensate the running average.
    best_sse = denoiser_motion_compensate(&denoiser->yv12_running_avg,
                                          &denoiser->yv12_mc_running_avg,
                                          x,
                                          best_sse,
                                          zero_mv_sse,
                                          recon_yoffset,
                                          recon_uvoffset);

    mv_row = x->e_mbd.best_sse_mv.as_mv.row;
    mv_col = x->e_mbd.best_sse_mv.as_mv.col;
    motion_magnitude2 = mv_row*mv_row + mv_col*mv_col;

    if (best_sse > SSE_THRESHOLD ||
        motion_magnitude2 > 8 * NOISE_MOTION_THRESHOLD)
    {
        // No filtering of this block since it differs too much from the
        // predictor, or the motion vector magnitude is considered too big.
        vp8_copy_mem16x16(x->thismb, 16,
                          denoiser->yv12_running_avg.y_buffer + recon_yoffset,
                          denoiser->yv12_running_avg.y_stride);
        return;
    }

    // Filter.
    denoiser_filter(&denoiser->yv12_mc_running_avg,
                    &denoiser->yv12_running_avg,
                    x,
                    motion_magnitude2,
                    recon_yoffset,
                    recon_uvoffset);
}

vp8/encoder/denoising.h

@@ -0,0 +1,33 @@
/*
* Copyright (c) 2012 The WebM project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef VP8_ENCODER_DENOISING_H_
#define VP8_ENCODER_DENOISING_H_

#include "block.h"

typedef struct vp8_denoiser
{
    YV12_BUFFER_CONFIG yv12_running_avg;
    YV12_BUFFER_CONFIG yv12_mc_running_avg;
} VP8_DENOISER;

int vp8_denoiser_allocate(VP8_DENOISER *denoiser, int width, int height);

void vp8_denoiser_free(VP8_DENOISER *denoiser);

void vp8_denoiser_denoise_mb(VP8_DENOISER *denoiser,
                             MACROBLOCK *x,
                             unsigned int best_sse,
                             unsigned int zero_mv_sse,
                             int recon_yoffset,
                             int recon_uvoffset);

#endif  // VP8_ENCODER_DENOISING_H_

@@ -28,6 +28,10 @@
#include <limits.h>
#include "vp8/common/invtrans.h"
#include "vpx_ports/vpx_timer.h"
#if CONFIG_REALTIME_ONLY & CONFIG_ONTHEFLY_BITPACKING
#include "bitstream.h"
#endif
#include "encodeframe.h"
extern void vp8_stuff_mb(VP8_COMP *cpi, MACROBLOCKD *x, TOKENEXTRA **t) ;
extern void vp8_calc_ref_frame_costs(int *ref_frame_cost,
@@ -43,10 +47,6 @@ extern void vp8cx_init_mbrthread_data(VP8_COMP *cpi,
MB_ROW_COMP *mbr_ei,
int mb_row,
int count);
void vp8_build_block_offsets(MACROBLOCK *x);
void vp8_setup_block_ptrs(MACROBLOCK *x);
int vp8cx_encode_inter_macroblock(VP8_COMP *cpi, MACROBLOCK *x, TOKENEXTRA **t, int recon_yoffset, int recon_uvoffset, int mb_row, int mb_col);
int vp8cx_encode_intra_macro_block(VP8_COMP *cpi, MACROBLOCK *x, TOKENEXTRA **t, int mb_row, int mb_col);
static void adjust_act_zbin( VP8_COMP *cpi, MACROBLOCK *x );
#ifdef MODE_STATS
@@ -373,10 +373,17 @@ void encode_mb_row(VP8_COMP *cpi,
int recon_uv_stride = cm->yv12_fb[ref_fb_idx].uv_stride;
int map_index = (mb_row * cpi->common.mb_cols);
#if (CONFIG_REALTIME_ONLY & CONFIG_ONTHEFLY_BITPACKING)
const int num_part = (1 << cm->multi_token_partition);
TOKENEXTRA * tp_start = cpi->tok;
vp8_writer *w;
#endif
#if CONFIG_MULTITHREAD
const int nsync = cpi->mt_sync_range;
const int rightmost_col = cm->mb_cols - 1;
const int rightmost_col = cm->mb_cols + nsync;
volatile const int *last_row_current_mb_col;
volatile int *current_mb_col = &cpi->mt_current_mb_col[mb_row];
if ((cpi->b_multi_threaded != 0) && (mb_row != 0))
last_row_current_mb_col = &cpi->mt_current_mb_col[mb_row - 1];
@@ -384,6 +391,13 @@ void encode_mb_row(VP8_COMP *cpi,
last_row_current_mb_col = &rightmost_col;
#endif
#if (CONFIG_REALTIME_ONLY & CONFIG_ONTHEFLY_BITPACKING)
if(num_part > 1)
w= &cpi->bc[1 + (mb_row % num_part)];
else
w = &cpi->bc[1];
#endif
// reset above block coeffs
xd->above_context = cm->above_context;
@@ -411,6 +425,10 @@ void encode_mb_row(VP8_COMP *cpi,
// for each macroblock col in image
for (mb_col = 0; mb_col < cm->mb_cols; mb_col++)
{
#if (CONFIG_REALTIME_ONLY & CONFIG_ONTHEFLY_BITPACKING)
*tp = cpi->tok;
#endif
// Distance of Mb to the left & right edges, specified in
// 1/8th pel units as they are always compared to values
// that are in 1/8th pel units
@@ -435,12 +453,13 @@ void encode_mb_row(VP8_COMP *cpi,
vp8_copy_mem16x16(x->src.y_buffer, x->src.y_stride, x->thismb, 16);
#if CONFIG_MULTITHREAD
if ((cpi->b_multi_threaded != 0) && (mb_row != 0))
if (cpi->b_multi_threaded != 0)
{
*current_mb_col = mb_col - 1; // set previous MB done
if ((mb_col & (nsync - 1)) == 0)
{
while (mb_col > (*last_row_current_mb_col - nsync)
&& (*last_row_current_mb_col) != (cm->mb_cols - 1))
while (mb_col > (*last_row_current_mb_col - nsync))
{
x86_pause_hint();
thread_sleep(0);
@@ -471,7 +490,7 @@ void encode_mb_row(VP8_COMP *cpi,
if (cm->frame_type == KEY_FRAME)
{
*totalrate += vp8cx_encode_intra_macro_block(cpi, x, tp, mb_row, mb_col);
*totalrate += vp8cx_encode_intra_macroblock(cpi, x, tp);
#ifdef MODE_STATS
y_modes[xd->mbmi.mode] ++;
#endif
@@ -495,13 +514,13 @@ void encode_mb_row(VP8_COMP *cpi,
#endif
// Count of last ref frame 0,0 useage
// Count of last ref frame 0,0 usage
if ((xd->mode_info_context->mbmi.mode == ZEROMV) && (xd->mode_info_context->mbmi.ref_frame == LAST_FRAME))
cpi->inter_zz_count ++;
// Special case code for cyclic refresh
// If cyclic update enabled then copy xd->mbmi.segment_id; (which may have been updated based on mode
// during vp8cx_encode_inter_macroblock()) back into the global sgmentation map
// during vp8cx_encode_inter_macroblock()) back into the global segmentation map
if ((cpi->current_layer == 0) &&
(cpi->cyclic_refresh_mode_enabled && xd->segmentation_enabled))
{
@@ -525,7 +544,14 @@ void encode_mb_row(VP8_COMP *cpi,
cpi->tplist[mb_row].stop = *tp;
// Increment pointer into gf useage flags structure.
#if CONFIG_REALTIME_ONLY & CONFIG_ONTHEFLY_BITPACKING
/* pack tokens for this MB */
{
int tok_count = *tp - tp_start;
pack_tokens(w, tp_start, tok_count);
}
#endif
// Increment pointer into gf usage flags structure.
x->gf_active_ptr++;
// Increment the activity mask pointers.
@@ -539,42 +565,32 @@ void encode_mb_row(VP8_COMP *cpi,
recon_yoffset += 16;
recon_uvoffset += 8;
// Keep track of segment useage
// Keep track of segment usage
segment_counts[xd->mode_info_context->mbmi.segment_id] ++;
// skip to next mb
xd->mode_info_context++;
x->partition_info++;
xd->above_context++;
#if CONFIG_MULTITHREAD
if (cpi->b_multi_threaded != 0)
{
cpi->mt_current_mb_col[mb_row] = mb_col;
}
#endif
}
//extend the recon for intra prediction
vp8_extend_mb_row(
&cm->yv12_fb[dst_fb_idx],
xd->dst.y_buffer + 16,
xd->dst.u_buffer + 8,
xd->dst.v_buffer + 8);
vp8_extend_mb_row( &cm->yv12_fb[dst_fb_idx],
xd->dst.y_buffer + 16,
xd->dst.u_buffer + 8,
xd->dst.v_buffer + 8);
#if CONFIG_MULTITHREAD
if (cpi->b_multi_threaded != 0)
*current_mb_col = rightmost_col;
#endif
// this is to account for the border
xd->mode_info_context++;
x->partition_info++;
#if CONFIG_MULTITHREAD
if ((cpi->b_multi_threaded != 0) && (mb_row == cm->mb_rows - 1))
{
sem_post(&cpi->h_event_end_encoding); /* signal frame encoding end */
}
#endif
}
void init_encode_frame_mb_context(VP8_COMP *cpi)
static void init_encode_frame_mb_context(VP8_COMP *cpi)
{
MACROBLOCK *const x = & cpi->mb;
VP8_COMMON *const cm = & cpi->common;
@@ -599,7 +615,7 @@ void init_encode_frame_mb_context(VP8_COMP *cpi)
if (cm->frame_type == KEY_FRAME)
vp8_init_mbmode_probs(cm);
// Copy data over into macro block data sturctures.
// Copy data over into macro block data structures.
x->src = * cpi->Source;
xd->pre = cm->yv12_fb[cm->lst_fb_idx];
xd->dst = cm->yv12_fb[cm->new_fb_idx];
@@ -656,10 +672,13 @@ void vp8_encode_frame(VP8_COMP *cpi)
MACROBLOCK *const x = & cpi->mb;
VP8_COMMON *const cm = & cpi->common;
MACROBLOCKD *const xd = & x->e_mbd;
TOKENEXTRA *tp = cpi->tok;
int segment_counts[MAX_MB_SEGMENTS];
int totalrate;
#if CONFIG_REALTIME_ONLY & CONFIG_ONTHEFLY_BITPACKING
BOOL_CODER * bc = &cpi->bc[1]; // bc[0] is for control partition
const int num_part = (1 << cm->multi_token_partition);
#endif
vpx_memset(segment_counts, 0, sizeof(segment_counts));
totalrate = 0;
@@ -688,15 +707,13 @@ void vp8_encode_frame(VP8_COMP *cpi)
xd->subpixel_predict16x16 = vp8_bilinear_predict16x16;
}
// Reset frame count of inter 0,0 motion vector useage.
// Reset frame count of inter 0,0 motion vector usage.
cpi->inter_zz_count = 0;
vpx_memset(segment_counts, 0, sizeof(segment_counts));
cpi->prediction_error = 0;
cpi->intra_error = 0;
cpi->skip_true_count = 0;
cpi->skip_false_count = 0;
cpi->tok_count = 0;
#if 0
// Experimental code
@@ -707,6 +724,7 @@ void vp8_encode_frame(VP8_COMP *cpi)
xd->mode_info_context = cm->mi;
vp8_zero(cpi->MVcount);
vp8_zero(cpi->coef_counts);
vp8cx_frame_init_quantizer(cpi);
@@ -725,9 +743,22 @@ void vp8_encode_frame(VP8_COMP *cpi)
build_activity_map(cpi);
}
// re-initencode frame context.
// re-init encode frame context.
init_encode_frame_mb_context(cpi);
#if CONFIG_REALTIME_ONLY & CONFIG_ONTHEFLY_BITPACKING
{
int i;
for(i = 0; i < num_part; i++)
{
vp8_start_encode(&bc[i], cpi->partition_d[i + 1],
cpi->partition_d_end[i + 1]);
bc[i].error = &cm->error;
}
}
#endif
{
struct vpx_usec_timer emr_timer;
vpx_usec_timer_start(&emr_timer);
@@ -751,7 +782,11 @@ void vp8_encode_frame(VP8_COMP *cpi)
{
vp8_zero(cm->left_context)
#if CONFIG_REALTIME_ONLY & CONFIG_ONTHEFLY_BITPACKING
tp = cpi->tok;
#else
tp = cpi->tok + mb_row * (cm->mb_cols * 16 * 24);
#endif
encode_mb_row(cpi, cm, mb_row, x, xd, &tp, segment_counts, &totalrate);
@@ -764,12 +799,14 @@ void vp8_encode_frame(VP8_COMP *cpi)
x->partition_info += xd->mode_info_stride * cpi->encoding_thread_count;
x->gf_active_ptr += cm->mb_cols * cpi->encoding_thread_count;
if(mb_row == cm->mb_rows - 1)
{
sem_post(&cpi->h_event_end_encoding); /* signal frame encoding end */
}
}
sem_wait(&cpi->h_event_end_encoding); /* wait for other threads to finish */
cpi->tok_count = 0;
for (mb_row = 0; mb_row < cm->mb_rows; mb_row ++)
{
cpi->tok_count += cpi->tplist[mb_row].stop - cpi->tplist[mb_row].start;
@@ -802,9 +839,12 @@ void vp8_encode_frame(VP8_COMP *cpi)
// for each macroblock row in image
for (mb_row = 0; mb_row < cm->mb_rows; mb_row++)
{
vp8_zero(cm->left_context)
#if CONFIG_REALTIME_ONLY & CONFIG_ONTHEFLY_BITPACKING
tp = cpi->tok;
#endif
encode_mb_row(cpi, cm, mb_row, x, xd, &tp, segment_counts, &totalrate);
// adjust to the next row of mbs
@@ -814,16 +854,25 @@ void vp8_encode_frame(VP8_COMP *cpi)
}
cpi->tok_count = tp - cpi->tok;
}
#if CONFIG_REALTIME_ONLY & CONFIG_ONTHEFLY_BITPACKING
{
int i;
for(i = 0; i < num_part; i++)
{
vp8_stop_encode(&bc[i]);
cpi->partition_sz[i+1] = bc[i].pos;
}
}
#endif
vpx_usec_timer_mark(&emr_timer);
cpi->time_encode_mb_row += vpx_usec_timer_elapsed(&emr_timer);
}
// Work out the segment probabilites if segmentation is enabled
// Work out the segment probabilities if segmentation is enabled
if (xd->segmentation_enabled)
{
int tot_count;
@@ -911,20 +960,16 @@ void vp8_encode_frame(VP8_COMP *cpi)
}
#endif
// Adjust the projected reference frame useage probability numbers to reflect
// what we have just seen. This may be usefull when we make multiple itterations
#if ! CONFIG_REALTIME_ONLY
// Adjust the projected reference frame usage probability numbers to reflect
// what we have just seen. This may be useful when we make multiple iterations
// of the recode loop rather than continuing to use values from the previous frame.
if ((cm->frame_type != KEY_FRAME) && ((cpi->oxcf.number_of_layers > 1) ||
(!cm->refresh_alt_ref_frame && !cm->refresh_golden_frame)))
{
vp8_convert_rfct_to_prob(cpi);
}
#if 0
// Keep record of the total distortion this time around for future use
cpi->last_frame_distortion = cpi->frame_distortion;
#endif
}
void vp8_setup_block_ptrs(MACROBLOCK *x)
{
@@ -1069,8 +1114,7 @@ static void adjust_act_zbin( VP8_COMP *cpi, MACROBLOCK *x )
#endif
}
int vp8cx_encode_intra_macro_block(VP8_COMP *cpi, MACROBLOCK *x, TOKENEXTRA **t,
int mb_row, int mb_col)
int vp8cx_encode_intra_macroblock(VP8_COMP *cpi, MACROBLOCK *x, TOKENEXTRA **t)
{
MACROBLOCKD *xd = &x->e_mbd;
int rate;
@@ -1094,6 +1138,7 @@ int vp8cx_encode_intra_macro_block(VP8_COMP *cpi, MACROBLOCK *x, TOKENEXTRA **t,
vp8_encode_intra16x16mbuv(x);
sum_intra_stats(cpi, x);
vp8_tokenize_mb(cpi, &x->e_mbd, t);
if (xd->mode_info_context->mbmi.mode != B_PRED)
@@ -1130,6 +1175,13 @@ int vp8cx_encode_inter_macroblock
else
x->encode_breakout = cpi->oxcf.encode_breakout;
#if CONFIG_TEMPORAL_DENOISING
// Reset the best sse mode/mv for each macroblock.
x->e_mbd.best_sse_inter_mode = 0;
x->e_mbd.best_sse_mv.as_int = 0;
x->e_mbd.need_to_clamp_best_mvs = 0;
#endif
if (cpi->sf.RD)
{
int zbin_mode_boost_enabled = cpi->zbin_mode_boost_enabled;
@@ -1260,11 +1312,6 @@ int vp8cx_encode_inter_macroblock
if (!x->skip)
{
vp8_encode_inter16x16(x);
// Clear mb_skip_coeff if mb_no_coeff_skip is not set
if (!cpi->common.mb_no_coeff_skip)
xd->mode_info_context->mbmi.mb_skip_coeff = 0;
}
else
vp8_build_inter16x16_predictors_mb(xd, xd->dst.y_buffer,
@@ -1287,17 +1334,17 @@ int vp8cx_encode_inter_macroblock
}
else
{
/* always set mb_skip_coeff as it is needed by the loopfilter */
xd->mode_info_context->mbmi.mb_skip_coeff = 1;
if (cpi->common.mb_no_coeff_skip)
{
xd->mode_info_context->mbmi.mb_skip_coeff = 1;
cpi->skip_true_count ++;
vp8_fix_contexts(xd);
}
else
{
vp8_stuff_mb(cpi, xd, t);
xd->mode_info_context->mbmi.mb_skip_coeff = 0;
cpi->skip_false_count ++;
}
}

vp8/encoder/encodeframe.h

@@ -0,0 +1,27 @@
/*
* Copyright (c) 2012 The WebM project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef ENCODEFRAME_H
#define ENCODEFRAME_H

extern void vp8_activity_masking(VP8_COMP *cpi, MACROBLOCK *x);

extern void vp8_build_block_offsets(MACROBLOCK *x);

extern void vp8_setup_block_ptrs(MACROBLOCK *x);

extern void vp8_encode_frame(VP8_COMP *cpi);

extern int vp8cx_encode_inter_macroblock(VP8_COMP *cpi, MACROBLOCK *x,
                                         TOKENEXTRA **t,
                                         int recon_yoffset, int recon_uvoffset,
                                         int mb_row, int mb_col);

extern int vp8cx_encode_intra_macroblock(VP8_COMP *cpi, MACROBLOCK *x,
                                         TOKENEXTRA **t);

#endif

Some files were not shown because too many files have changed in this diff.