store the number of allocated rows in VP9LfSync; the calculated values
cannot be relied on when dealing with corrupt material.
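A rough sketch of the idea, using a stand-in struct rather than the real
VP9LfSync definition; the field and helper names here are assumptions made
for illustration only:

  #include <stddef.h>

  /* Stand-in for the loop-filter sync state; the real VP9LfSync has more
   * members. The point is the extra "rows" field recording how many rows
   * the arrays were actually allocated for. */
  typedef struct {
    int *cur_sb_col;  /* per-row progress counters, allocated lazily */
    int rows;         /* number of rows the allocation above covers */
  } LfSyncExample;

  /* Rely on the stored allocation size instead of recomputing it from frame
   * dimensions that may be bogus in corrupt material. */
  static int lf_sync_usable(const LfSyncExample *lfs, int rows_needed) {
    return lfs->cur_sb_col != NULL && lfs->rows >= rows_needed;
  }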
Change-Id: I13b8bcec9738c299a71df726772ab7ac05511e5b
attempting to decode a frame after the previous frame failed has the
potential to interrupt an earlier loop filter task
Change-Id: I6f2b1ddcdf5b89c3e2ee8caf5289dada2a087d66
if the first frame was corrupt and the loop filter was not called, the next call
would assume the necessary allocations had been done and segfault when
accessing a NULL pointer
Change-Id: Ib6ef505e5c594e6f0fe65ab0700172bcf06b92a6
When a valid data pointer is given, make sure the size is greater than
zero.
A previous check for vp9 was incorrectly removed in:
7050074 Make the api behavior conform to api spec.
No semantics for valid pointers + 0-sized frames are defined for VPx
codecs, so move the check to vpx_codec_decode(). This avoids an assert
in vp9.
+ add some basic invalid param testing for decoder init/decode/destroy
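Roughly, the moved check amounts to the following (sketch only; the helper
name is made up and the real guard sits inline in vpx_codec_decode() before
the codec-specific decode call):

  #include "vpx/vpx_decoder.h"

  /* A non-NULL pointer describing zero bytes has no defined meaning for any
   * VPx codec, so reject it up front; a NULL pointer (flush) is still
   * handled by the codec as before. */
  static vpx_codec_err_t validate_decode_args(const uint8_t *data,
                                              unsigned int data_sz) {
    if (data != NULL && data_sz == 0) return VPX_CODEC_INVALID_PARAM;
    return VPX_CODEC_OK;
  }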
Change-Id: I99f9cef6076d15874fd72ac973f2685d8a2353c3
The issue was introduced by commit g9f37d14, which added explicit
restrictions on reference-frame scale factors. The restriction
is checked against the aligned-by-8 frame dimensions, not against
the original ones. So, for example, a frame of 35×35 can actually refer
to a frame of 70×70, but the new check won't allow this. It will
compare 35 vs 72 (not 70), so the 2x downscale limit will be exceeded.
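Sketch of the arithmetic with the numbers from the example (the helper is
illustrative, not the decoder's actual function):

  /* A reference may be at most 2x larger than the current frame in each
   * dimension. Checked against original dimensions, 35x35 referring to
   * 70x70 passes (2 * 35 >= 70); checked against the aligned-by-8 value 72
   * it wrongly fails (2 * 35 < 72). */
  static int ref_not_too_large(int this_w, int this_h, int ref_w, int ref_h) {
    return 2 * this_w >= ref_w && 2 * this_h >= ref_h;
  }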
Change-Id: Ic663693034440f64ac8312cbff9e1e773a921060
Separates HBD profile into two profiles (2 and 3), consistent with the
highbitdepth branch. This patch is ported from the original highbitdepth
branch patch: https://gerrit.chromium.org/gerrit/#/c/70460/
Two of the invalid file tests needed to be updated.
Change-Id: I6a4acd2f7a60b1fb4cbcc8e0dad4eab4248431e3
This is a practical concern: it allows us to fail in a decoder instance
if the size of a file is bigger than we can reasonably handle.
Change-Id: I0446b5502b1f8a48408107648ff2a8d187dca393
The issue was introduced by commit g7c43fb6. If the current frame
is repeated from the existing-ref pool, the frame buffer ref counter
is not decreased, so the buffer isn't released. At some point the
decoder fails, unable to allocate a new frame buffer.
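The shape of the fix, sketched with stand-in types (the real decoder keeps
reference counts in its frame-buffer pool; names here are assumptions):

  /* Stand-in frame buffer holding just the reference count that matters. */
  typedef struct {
    int ref_count;
  } FrameBufferExample;

  /* When the frame to show is taken straight from the existing-ref pool,
   * the buffer reserved for a newly decoded frame is never written, so the
   * reference taken on it must be given back, or the pool eventually runs
   * dry. */
  static void release_unused_new_buffer(FrameBufferExample *bufs,
                                        int new_fb_idx,
                                        int show_existing_frame) {
    if (show_existing_frame) --bufs[new_fb_idx].ref_count;
  }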
Added a test vector to verify that the condition does not recur.
The test vector was generated by the code in this patch:
https://gerrit.chromium.org/gerrit/#/c/70862/
Change-Id: I8af96eb5b9670176e01a281d2e18bd458712cf78
Also fix bugs related to corrupted frame handling: return
VPX_CODEC_CORRUPT_FRAME when a corrupted block is encountered.
Change-Id: I7207ccc7c68c4df2b40b561315d16e49ccf7ff41
This patch fixes bug 633:
https://code.google.com/p/webm/issues/detail?id=633
The first decoded frame does not have to be a keyframe;
it could be an inter-frame that is coded intra-only.
This patch fixes the handling of intra-only frames.
A test vector has also been added that encodes 3
intra-only frames at the start of the clip. The
test vector was generated using the code in the
following patch:
https://gerrit.chromium.org/gerrit/#/c/70680/
Change-Id: Ib40b1dbf91aae2bc047e23c626eaef09d1860147
The y4m extension used is the same as the one used in ffmpeg/x264.
The patch is adapted from the highbitdepth branch.
Also adds unit tests for y4m header parsing and md5 check
of the raw frame data, as well as y4m writing.
[build fix for Mac/VS by not using tuples with strings]
Change-Id: I40897ee37d289e4b6cea6fedc67047d692b8cb46
The y4m extension used is the same as the one used in ffmpeg/x264.
The patch is adapted from the highbitdepth branch.
Also adds unit tests for y4m header parsing and md5 check
of the raw frame data, as well as y4m writing.
Change-Id: Ie2794daf6dbafd2f128464f9b9da520fc54c0dd6
Encoding screen content exercises various fast skip paths that are
missed by natural video content.
Change-Id: Ie359884ef9be89cbe5dda6d82f1f79360604a090
This patch checks that a decoder never tries to reference a frame that's
outside the range of 2x to 1/16th the size of the current frame. Any attempt
to do so causes a failure.
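A sketch of the constraint being enforced (names illustrative):

  /* A reference frame must be no more than 2x the current frame's size and
   * no less than 1/16th of it, per dimension; anything outside that range
   * cannot be handled by the scaling code and is treated as an error. */
  static int valid_ref_frame_size(int ref_w, int ref_h,
                                  int this_w, int this_h) {
    return 2 * this_w >= ref_w && 2 * this_h >= ref_h &&
           this_w <= 16 * ref_w && this_h <= 16 * ref_h;
  }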
Change-Id: I5c98fa7bb95ac4f29146f29dd92b62fe96164e4c
the max is 6; there are assumptions throughout the decoder regarding
this. Fixes a crash with a fuzzed bitstream:
$ zzuf -s 5861 -r 0.01:0.05 -b 6- \
< vp90-2-00-quantizer-00.webm.ivf \
| dd of=invalid-vp90-2-00-quantizer-00.webm.ivf.s5861_r01-05_b6-.ivf \
bs=1 count=81883
Change-Id: I6af41bb34252e88bc156a4c27c80d505d45f5642
This patch ensures that a chunk whose last byte contains a valid
superframe marker byte actually has a proper superframe index;
if not, an error is returned.
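A sketch of the validation, following the VP9 superframe layout where the
marker byte is repeated at both ends of the index (the function name is
illustrative):

  #include <stddef.h>
  #include <stdint.h>

  /* If the last byte of the chunk looks like a superframe marker, the byte
   * at the start of the implied index must repeat that marker; otherwise
   * the chunk has no proper superframe index and is rejected. */
  static int superframe_index_is_valid(const uint8_t *data, size_t data_sz) {
    const uint8_t marker = data[data_sz - 1];
    if ((marker & 0xe0) == 0xc0) {
      const uint32_t frames = (marker & 0x07) + 1;
      const uint32_t mag = ((marker >> 3) & 0x03) + 1;
      const size_t index_sz = 2 + mag * frames;
      if (data_sz < index_sz) return 0;                  /* index cannot fit */
      if (data[data_sz - index_sz] != marker) return 0;  /* marker mismatch  */
    }
    return 1;
  }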
As part of doing that, the file vp90-2-15-fuzz-flicker.webm now fails
to decode and has been moved from the test vector suite to the invalid
file tests.
Change-Id: I5f1da7eb37282ec0c6394df5c73251a2df9c1744
This patch adds a mechanism for ensuring error checking on invalid files:
a unit test runs the decoder and checks that the error code the decoder
returns for each frame matches what's expected.
Disabled for now as this unit test will segfault with existing code.
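A minimal sketch of the per-frame check the test performs, using the public
decoder API; how the frames and expected error codes are loaded is left out
and the names are illustrative:

  #include <stdio.h>
  #include "vpx/vpx_decoder.h"

  /* Decode each frame of a (possibly invalid) file and compare the returned
   * error code against the expectation recorded for that frame. */
  static int errors_match(vpx_codec_ctx_t *decoder,
                          const uint8_t *const *frames,
                          const unsigned int *sizes,
                          const vpx_codec_err_t *expected, int n_frames) {
    int i;
    for (i = 0; i < n_frames; ++i) {
      const vpx_codec_err_t res =
          vpx_codec_decode(decoder, frames[i], sizes[i], NULL, 0);
      if (res != expected[i]) {
        fprintf(stderr, "frame %d: got %d, expected %d\n", i, (int)res,
                (int)expected[i]);
        return 0;
      }
    }
    return 1;
  }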
Change-Id: I896f9686d9ebcbf027426933adfbea7b8c5d956e
This breaks the profile 1 bitstream.
Don't force non420 uv transform size to 1/4 y size. In the 4:2:0 case the
chroma corresponding to a luma block is 1/4 its size. In the 4:4:4 case
chroma and luma planes are the same size. Disallowing larger transforms
can result in a loss of compression efficiency and is inconsistent.
For sub-8x8 blocks only average corresponding motion vectors.
4:2:0 and profile 0 behavior remains unchanged.
Change-Id: I560ae07183012c6734dd1860ea54ed6f62f3cae8
This commit fixes frame header decoding for the superframe index, to
prevent an out-of-bounds memory read triggered by a fuzzed test
vector. It resolves a Chromium security violation issue,
crbug.com/376802.
The issue was introduced in the change:
Add VPXD_SET_DECRYPTOR support to the VP9 decoder.
cl-id I88f86c8ff9af34e0b6531028b691921b54c2fc48
where the buffer was read before the validation check on the index
offset was applied.
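A sketch of the ordering the fix enforces: the index offset is validated
against the chunk size before the byte there is read, including when reads
go through the decrypt callback. Names are illustrative:

  #include <stddef.h>
  #include "vpx/vpx_decoder.h"

  /* Read one byte, via the optional decrypt callback if one is installed. */
  static uint8_t read_marker_byte(vpx_decrypt_cb decrypt_cb,
                                  void *decrypt_state, const uint8_t *p) {
    if (decrypt_cb) {
      uint8_t byte;
      decrypt_cb(decrypt_state, p, &byte, 1);
      return byte;
    }
    return *p;
  }

  /* Only touch data[data_sz - index_sz] after checking that index_sz fits
   * inside the chunk; this is the read that previously happened before the
   * offset was validated. */
  static int index_marker_matches(const uint8_t *data, size_t data_sz,
                                  size_t index_sz, uint8_t marker,
                                  vpx_decrypt_cb decrypt_cb,
                                  void *decrypt_state) {
    if (data_sz < index_sz) return 0;  /* offset would be out of bounds */
    return read_marker_byte(decrypt_cb, decrypt_state,
                            data + data_sz - index_sz) == marker;
  }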
A test vector is added accordingly.
Change-Id: I41c988e776bbdd1033312a668e03a3dbcf44ca99
disabled by default, enable with:
--enable-experimental --enable-spatial-svc
this disables vp9_spatial_svc_encoder and svc_test; further work is
needed to remove internal lib references
Change-Id: I6a487ecbf07eb98843a99d96e17f08f960b63088
The test vector has segmentation enabled, with a different quantizer used
for different segments, for both the first (key) frame and the rest of the
non-key frames.
Change-Id: I7e21122183050ee046219caba483c18cbc34afe7
The test vector is produced to have a single key frame, with segment
map enabled and transmitted. Yet no segment feature is active.
Change-Id: I365d62f00d05c07098b9a76fc8d3a991e427ec1a
There are a few tests which read/write directly to/from WebM files. They should
be disabled when --disable-webm-io is passed.
Change-Id: Ibac4732e27c66da33082151ba6e6993eaa9a1efd
Remove duplicate WebM parsing code in test/webm_video_source.h and link it
against webmdec.c, which does the exact same thing.
Change-Id: Ib7152eecde890fca58be42028cab18c9cb54221c
There was a bug in the decoder: if you started the decoder with more
threads than the first frame had tile columns, and afterwards tried to
decode a frame with more tile columns than the first frame, the decoder
would hang. E.g., run vpxdec --threads=4 on a stream whose first frame
has two tile columns and whose next key frame has 4 tile columns: the
decoder would hang. If you started with 4 tiles and switched to 2
tiles, the decoder would be fine. The issue is that the worker the thread
loop uses is stale.
I added a test vector "vp90-2-14-resize-848x480-1280x720.webm" that
exhibited the bug.
Change-Id: I7bdd47241a52ac0fe1c693a609bc779257e94229