The existing meridian audio test does not test
ff_mlp_rematrix_channel_arm. This sample (first 640k of
https://samples.libav.org/A-codecs/TrueHD/TrueHD.raw) uses
ff_mlp_rematrix_channel_arm. Since this sample has 5.1 channels, it also
allows testing the integrated downmixing.
This uses the size stored in the RIFF header to figure out the expected AVI
file size, instead of the actual file size. To work fully, it requires handling
failed avio_seek() calls instead of assuming they always succeed.
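As a minimal sketch of the kind of check this implies (variable names here are
illustrative, not the actual demuxer code):

  int64_t new_pos = avio_seek(pb, target_pos, SEEK_SET);
  if (new_pos < 0)
      return new_pos; /* propagate the seek error instead of assuming success */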
One FATE sample file has been cut off and contains half a frame at the end,
which previously was not output during demuxing. This frame is now passed to
the encoder, hence the FATE reference update.
Bug-Id: 261
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
Only shift limited range luma, and always only shift chroma
for upconversion.
Based on a patch by Michael Niedermayer.
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
The function allows creating a string containing an object's serialized options.
Such a string may be passed back to av_set_options_string() in order to restore
the options.
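A minimal usage sketch, assuming the new function is av_opt_serialize() from
libavutil/opt.h with (opt_flags, flags, buffer, key_val_sep, pairs_sep)
parameters; the helper name below is made up for illustration:

  #include <libavutil/opt.h>

  /* Serialize src's options and apply them to dst (same AVClass assumed). */
  static int copy_options(void *src, void *dst)
  {
      char *buf = NULL;
      int ret = av_opt_serialize(src, 0, 0, &buf, '=', ':');
      if (ret < 0)
          return ret;
      ret = av_set_options_string(dst, buf, "=", ":");
      av_free(buf);
      return ret;
  }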
Signed-off-by: Lukasz Marek <lukasz.m.luki2@gmail.com>
According to the DASH spec, Representation IDs should be unique
across all adaptation sets. Fix that and update the FATE reference file to
reflect this change.
Signed-off-by: Vignesh Venkatasubramanian <vigneshv@google.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
* commit '00c67fe1d0bc7c2ce49daac9c80ea39d5a663b73':
movenc: Write a 0 duration in mdhd and tkhd for an empty initial moov
Merged-by: Michael Niedermayer <michaelni@gmx.at>
* commit '600d5ee6b12bad144756b0772319bb04796bc528':
movenc: Signal iso6 in compatible_brands when using tfdt
Merged-by: Michael Niedermayer <michaelni@gmx.at>
Uses an approach similar to vf_yadif's to flush the last frame in idet.
Quick test with 50 frames from vsynth1:
./ffmpeg.old -i fate-suite/ffmpeg-synthetic/vsynth1/%02d.pgm -vf idet -f mp4 -y /dev/null 2>&1 | grep Multi
(gives) [Parsed_idet_0 @ 0x261ebb0] Multi frame detection: TFF:0 BFF:0 Progressive:48 Undetermined:1
./ffmpeg -i fate-suite/ffmpeg-synthetic/vsynth1/%02d.pgm -vf idet -f mp4 -y /dev/null 2>&1 | grep Multi
(gives) [Parsed_idet_0 @ 0x35a0bb0] Multi frame detection: TFF:0 BFF:0 Progressive:49 Undetermined:1
Fate tests have been updated.
(In testing, it seems this filter will also need a subsequent patch for single frame input)
Signed-off-by: Neil Birkbeck <neil.birkbeck@gmail.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
Regression test for the bug from trac ticket #3849 fixed in commit 14e30255
Reviewed-by: Ronald S. Bultje <rsbultje@gmail.com>
Signed-off-by: James Almer <jamrial@gmail.com>
In these cases, only drop the DTS, because if we drop both we have no
timestamps at all for some files.
This improves playback of HLS streams from GoPro cameras.
Signed-off-by: Martin Storsjö <martin@martin.st>
Width, Height and Sample Rate should be in the AdaptationSet tag
only if all the contained representations have the same width,
height and sampling rate. Otherwise they should go into the
Representation tag. This patch adds this functionality and a FATE test for it.
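As a purely illustrative manifest fragment (made-up values, attribute placement
only):

  <!-- all representations share the same dimensions: attributes on the set -->
  <AdaptationSet id="0" width="640" height="360">
    <Representation id="0" .../>
    <Representation id="1" .../>
  </AdaptationSet>

  <!-- dimensions differ: attributes move to each Representation -->
  <AdaptationSet id="1">
    <Representation id="2" width="640" height="360" .../>
    <Representation id="3" width="1280" height="720" .../>
  </AdaptationSet>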
Signed-off-by: Vignesh Venkatasubramanian <vigneshv@google.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
Fix an incorrectly hardcoded value in the cues_end computation, and update
the related FATE test reference files as well. The earlier computation
was clearly wrong as the cues_end field was greater than the file size
itself in some cases.
Signed-off-by: Vignesh Venkatasubramanian <vigneshv@google.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
This also undoes the FATE changes from a52f443714, leaving this fix without
even small differences in the output; that is, a sample for which this makes
a visible difference is very welcome.
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
Fixes CID1194380
There are no visible differences in the changed FATE samples; only a tiny
number of pixels change by tiny amounts in the frames I checked.
If someone has a file that shows a visible difference, please post it.
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
* commit 'b39ebcddd47daf37659796aaa7d068668086507a':
fate: Add VC-1 interlaced twomv test
Note, this test is not free of artifacts on both sides of the merge
Merged-by: Michael Niedermayer <michaelni@gmx.at>
* commit '28f5cd312c9da9072108edf8b7685d009374ea96':
fate: Switch ra4-288 test from framecrc() to pcm()
Conflicts:
tests/fate/real.mak
The test is kept disabled as it still does not pass on x86-64 due to float
rounding
Merged-by: Michael Niedermayer <michaelni@gmx.at>
Add fate tests that test out the functionality of WebM DASH
Manifest XML generation. This patch contains the vpx.mak file
changes and the reference gold XML files.
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
The arrays are fairly large and could cause problems on some embedded systems;
they are also not endian-safe, as they mix 32-bit and 8-bit data.
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
* commit '706208ef47bffd525c982975d2756f7b2b220b8d':
fate: Split fate-pixdesc tests and dispatch them through Make
Conflicts:
tests/fate-run.sh
tests/ref/fate/filter-pixdesc
Merged-by: Michael Niedermayer <michaelni@gmx.at>
- all of them testing HEVC version 1
cherry picked from commit adcdabb4dd062694fb8de6df0faecaad1c36ba33
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
This avoids the following libass warning when using the subtitles
filter: "Neither PlayResX nor PlayResY defined. Assuming 384x288"
Subtitles tests change because the output is ASS and the PlayRes[XY] values
end up in the output.
This very slightly improves compression
Found-by: Christophe Gisquet <christophe.gisquet@gmail.com>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
Based on the srt encoder. The following features are unimplemented:
- fonts, colors, sizes
- alignment and positioning
The rest works well. For example, use ffmpeg to convert subtitles into the .vtt format:
ffmpeg -i input.srt output.vtt
Signed-off-by: Aman Gupta <ffmpeg@tmm1.net>
Signed-off-by: Clément Bœsch <u@pkh.me>
* commit '6656370b858329ca07a60a2de954d5e90daa0206':
avconv: set the "encoder" tag when transcoding
Conflicts:
ffmpeg.c
tests/ref/lavf/mkv
tests/ref/seek/lavf-mkv
Merged-by: Michael Niedermayer <michaelni@gmx.at>
Partially undoes commit 2c4e08d893:
riff: always generate a proper WAVEFORMATEX structure in
ff_put_wav_header
A new flag, FF_PUT_WAV_HEADER_FORCE_WAVEFORMATEX, is added to force the
use of WAVEFORMATEX rather than PCMWAVEFORMAT even for PCM codecs.
This flag is used in the Matroska muxer (the cause of the original
change) and in the ASF muxer, because the specifications for
these formats indicate explicitly that WAVEFORMATEX should be used.
Muxers for other formats will return to the original behavior of writing
PCMWAVEFORMAT when writing a header for raw PCM.
In particular, this causes raw PCM in WAV to generate the canonical
44-byte header expected by some tools.
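A rough sketch of the intended call sites, assuming ff_put_wav_header() takes
a flags argument (internal API, shown for illustration only):

  /* Matroska/ASF: the specs mandate WAVEFORMATEX, so force it even for PCM. */
  ff_put_wav_header(pb, st->codec, FF_PUT_WAV_HEADER_FORCE_WAVEFORMATEX);

  /* WAV and other RIFF-based muxers: default behaviour, so raw PCM gets the
   * 16-byte PCMWAVEFORMAT and hence the canonical 44-byte WAV header. */
  ff_put_wav_header(pb, st->codec, 0);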
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
Fixes conversion of pal8 to rgb formats with alpha.
Updated references for 2 FATE tests which previously encoded fully
transparent images.
Based on a patch by Baptiste Coudurier <baptiste.coudurier@gmail.com>
* commit '92b099daf4b8ef93513e38b43899cb8458a2fde3':
swscale: support converting YVYU422 pixel format
Conflicts:
libswscale/input.c
Merged-by: Michael Niedermayer <michaelni@gmx.at>
* commit '3e4e2142d246699a1a3a0045ba7124b18bc34d7a':
fate: Convert the paletted output in the brenderpix tests to rgb24
Merged-by: Michael Niedermayer <michaelni@gmx.at>
Improves rgb -> gray16 conversion
Fixes Ticket3422
The pam and png output files look visually similar; in both cases the dynamic
range increases to 0x0 -> 0xfffb.
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
The old one didn't use segmentation. One uses segmentation in all frame
types (--aq-mode=1), and the other uses all segmentation features, but
only in inter frames (mbgraph).
Signed-off-by: Anton Khirnov <anton@khirnov.net>
This disables backward probability updates, which makes the codec more
friendly for frame-level multi-threading.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
* qatar/master:
fate: force the simple idct for xvid custom matrix test
Conflicts:
tests/fate/xvid.mak
tests/ref/fate/xvid-custom-matrix
See: ef034cbf18
Merged-by: Michael Niedermayer <michaelni@gmx.at>
The original test without a forced idct is still useful since it tests
the switching of the idct algorithm/permutation on x86 with MMX, MMXext,
or SSE2. Make sure the test runs only if MMX inline asm is available and
force -cpuflags to all.
Add the required bitexact flag for both tests.
No changes in either standard deviation or PSNR are seen in any of the
changed FATE cases.
MSE changes from 0.05012422 to 0.04890000
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
* commit 'cfb4ee30977732674d30c20e93a761c33c743972':
fate: add a pngparser test
Conflicts:
tests/fate/image.mak
Merged-by: Michael Niedermayer <michaelni@gmx.at>
This is useful for debugging.
Reference and ffprobe.xsd changes done and tested by Stefano Sabatini.
Signed-off-by: Stefano Sabatini <stefasab@gmail.com>
* commit '58a868968df445068a143f327ced03b6a02baf0d':
FATE: drop the last partial frame in the wmv8-drm test
Conflicts:
tests/ref/fate/wmv8-drm
Merged-by: Michael Niedermayer <michaelni@gmx.at>
* commit '874838dc6589d978611c89a40694a5074f892a76':
fate: add one select filter test
Conflicts:
tests/fate/filter-video.mak
Merged-by: Michael Niedermayer <michaelni@gmx.at>
Originally written by Ronald S. Bultje <rsbultje@gmail.com> and
Clément Bœsch <u@pkh.me>
Further contributions by:
Anton Khirnov <anton@khirnov.net>
Diego Biurrun <diego@biurrun.de>
Luca Barbato <lu_zero@gentoo.org>
Martin Storsjö <martin@martin.st>
Signed-off-by: Luca Barbato <lu_zero@gentoo.org>
Signed-off-by: Anton Khirnov <anton@khirnov.net>
This changes the tests that used the internal hevc checksum to use framecrc
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
Conflicts:
tests/fate/hevc.mak
tests/ref/fate/hevc-conformance-DBLK_A_SONY_3
tests/ref/fate/hevc-conformance-DBLK_B_SONY_3
tests/ref/fate/hevc-conformance-DBLK_C_SONY_3
tests/ref/fate/hevc-conformance-DELTAQP_B_SONY_3
tests/ref/fate/hevc-conformance-DELTAQP_C_SONY_3
tests/ref/fate/hevc-conformance-POC_A_Bossen_3
Merged-by: Michael Niedermayer <michaelni@gmx.at>
The tests are disabled as 2 do not pass yet
(fate-hevc-conformance-PPS_A_qualcomm_7 and fate-hevc-conformance-RAP_A_docomo_4)
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
Here is an extract of fate-samples/sub/vobsub.idx, with an additional
text at the end of each line to better identify each bitmap:
timestamp: 00:04:55:445, filepos: 00001b000 Ace!
timestamp: 00:05:00:049, filepos: 00001b800 Wake up, honey!
timestamp: 00:05:02:018, filepos: 00001c800 I gotta go to work.
timestamp: 00:05:02:035, filepos: 00001d000 <???>
timestamp: 00:05:04:203, filepos: 00001d800 Look after Clayton, okay?
timestamp: 00:05:05:947, filepos: 00001e800 I'll be back tonight.
timestamp: 00:05:07:957, filepos: 00001f800 Bye! Love you.
timestamp: 00:05:21:295, filepos: 000020800 Hey, Ace! What's up?
timestamp: 00:05:23:356, filepos: 000021800 Hey, how's it going?
timestamp: 00:05:24:640, filepos: 000022800 Remember what today is? The 3rd!
timestamp: 00:05:27:193, filepos: 000023800 Look over there!
timestamp: 00:05:28:369, filepos: 000024800 Where are they going?
timestamp: 00:05:28:361, filepos: 000025000 <???>
timestamp: 00:05:29:946, filepos: 000025800 Let's go see.
timestamp: 00:05:31:230, filepos: 000026000 I can't, man. I got Clayton.
Note the two "<???>": they are basically split subtitles (with the
previous one), which the dvdsub decoder is now supposed to reconstruct
thanks to a previous commit. But also note that while the first chunk has
increasing timestamps,
timestamp: 00:05:02:018, filepos: 00001c800
timestamp: 00:05:02:035, filepos: 00001d000
...that is not the case for the second one (and this is not an exception in
the original file):
timestamp: 00:05:28:369, filepos: 000024800
timestamp: 00:05:28:361, filepos: 000025000
For the dvdsub decoder, they need to be ordered by filepos, but the
FFDemuxSubtitlesQueue is ordered by timestamps, which is the reason for
introducing a sub sort method in the context, to allow giving priority to the
position first and the timestamps second. With that change, the dvdsub decoder
gets fed with ordered packets.
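As a rough illustration of such a position-first ordering (a sketch using the
standard AVPacket fields, not the actual queue code):

  #include <libavcodec/avcodec.h>

  /* Order packets by byte position first, then by timestamp. */
  static int cmp_pkt_sub_pos_ts(const void *a, const void *b)
  {
      const AVPacket *s1 = a, *s2 = b;
      if (s1->pos != s2->pos)
          return s1->pos < s2->pos ? -1 : 1;
      return s1->pts < s2->pts ? -1 : s1->pts > s2->pts;
  }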
The packet size estimation was also broken: the filepos differences in the
vobsub index define the full amount of data read between two subtitle chunks,
and it is necessary to take into account what is read by the
mpegps_read_pes_header() function, since the length returned by that function
doesn't count the size of the data it reads itself. This is fixed with the
introduction of total_read and {old,new}_pos. With this change, we can drop
the unreliable len16 heuristic and simplify the whole loop.
Note that mpegps_read_pes_header() often reads more than one PES packet
(typically, in one call it can read the 0x1ba and 0x1be chunks along with the
relevant 0x1bd packet), which triggers the "total_read + pkt_size >
psize" check. This is expected behaviour, which could be avoided by
having a more chunked version of mpegps_read_pes_header().
The last change is the extraction of each stream into its own subtitles
queue. If we don't do this, the maximum size for a subtitle chunk is broken,
and the previous changes cannot work. Having each stream in a different queue
requires some small adjustments in the seek code of the demuxer.
This commit is only meaningful as a whole change and cannot easily be split.
The FATE test changes because it uses the vobsub demuxer.
Fixes sync in some samples (e.g. bugs 7581 and 8374 in VLC).
Based on a commit by Matthieu Bouron <matthieu.bouron@gmail.com>
Reported-by: Jean-Baptiste Kempf <jb@videolan.org>
CC: libav-stable@libav.org
For codecs where decoding of a whole plane can simply be skipped, we should
offer applications the option not to decode alpha, for better performance
(ca. 30% less CPU usage and 40% less memory bandwidth).
It also means applications do not need to implement support
(even if it is rather simple) for YUVA formats in order to be
able to play these files.
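As a hypothetical usage sketch (assuming the feature is exposed as a
codec-context option named "skip_alpha"; the name is an assumption, not
confirmed by this message):

  /* Ask the decoder to skip decoding the alpha plane; option name assumed. */
  av_opt_set_int(avctx, "skip_alpha", 1, 0);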
Signed-off-by: Reimar Döffinger <Reimar.Doeffinger@gmx.de>
Update the fate reference since the last broken frame is not decoded
anymore.
Reported-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind
CC: libav-stable@libav.org
The bug it was working around seems to have been fixed.
This change causes ffmpeg to use the trim filter to implement
the -t option.
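Roughly speaking (an illustrative equivalence, not the exact filter graph
ffmpeg builds), limiting the output duration now behaves like an explicit trim:

ffmpeg -i input.mkv -t 10 output.mkv
# behaves similarly to
ffmpeg -i input.mkv -vf trim=duration=10 -af atrim=duration=10 output.mkv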
FATE tests are updated due to the more accurate handling of
the last packets.
For the n0=0 case there are multiple solutions and different platforms pick
different ones. This should reduce the issues with FATE and the timefilter
test.
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
The option is used to sort the streams by program.
Signed-off-by: Florent Tribouilloy <florent.tribouilloy@smartjog.com>
Signed-off-by: Stefano Sabatini <stefasab@gmail.com>
When operating on subsampled chroma planes, some rounding is taking
place. The left and top borders are rounded down while the width and
height are rounded up, so all rounding is done outward to guarantee the
logo area is fully covered.
The problem is that the width and height are counted from the
unrounded left and top borders, respectively. So if the left or top
border position has indeed been rounded down, and the width or height
needs no rounding (up), the position of the right or bottom border
will be effectively rounded down, i.e. inward.
The issue can easily be seen with a yuv420p input and
-vf delogo=45:45:60:40:show=1 -vframes 1 delogo-bug.png
(or virtually any logo area with odd x and y and even width and
height.) The right and bottom chroma borders (in green) are clearly
off.
In order to fix this, the width and height must be adjusted to include
the bits lost in the rounding of the left and top border positions,
respectively, prior to being themselves rounded up.
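In other words, for a chroma plane subsampled by hsub/vsub, the adjustment
amounts to something like the following sketch (illustrative only, not the
exact patch):

  /* Round the left/top border of the logo area down... */
  cx = x >> hsub;
  cy = y >> vsub;
  /* ...and grow the width/height by the bits lost above before rounding
   * them up, so the right/bottom border is rounded outward as well. */
  cw = (w + (x & ((1 << hsub) - 1)) + (1 << hsub) - 1) >> hsub;
  ch = (h + (y & ((1 << vsub) - 1)) + (1 << vsub) - 1) >> vsub;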
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
The original delogo algorithm interpolates both horizontally and
vertically and uses the average to compute the resulting sample. This
works reasonably well when the logo area is almost square. However
when the logo area is significantly larger than high or higher than
large, the result is largely suboptimal.
The issue can be clearly seen by testing the delogo filter with a fake
logo area that is 200 pixels large and 2 pixels high. Vertical
interpolation gives a very good result in that case, horizontal
interpolation gives a very bad result, and the overall result is poor,
because both are given the same weight.
Even when the logo is roughly square, the current algorithm gives poor
results on the borders of the logo area, because it always gives
horizontal and vertical interpolations an equal weight, and this is
suboptimal on borders. For example, in the middle of the left hand
side border of the logo, you want to trust the left known point much
more than the right known point (which the current algorithm already
does) but also much more than the top and bottom known points (which
the current algorithm doesn't do.)
By properly weighting each known point when computing the value of
each interpolated pixel, the visual result is much better, especially
on borders and/or for tall or wide logo areas.
The algorithm I implemented guarantees that the weight of each of the
4 known points directly depends on its distance to the interpolated
point. It is largely inspired by the original algorithm, the key
difference being that it computes the relative weights globally
instead of separating the vertical and horizontal interpolations and
combining them afterward.
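One possible weighting of that kind, purely as an illustration (linear
weights; the exact scheme in the patch may differ):

  /* dx, dy: distance of the interpolated pixel from the left/top border of
   * the logo area; logo_w, logo_h: size of the logo area.
   * Each border sample is weighted by its distance to the opposite border,
   * so the nearest known points dominate. */
  wl = logo_w - 1 - dx;
  wr = dx;
  wt = logo_h - 1 - dy;
  wb = dy;
  interp = (left * wl + right * wr + top * wt + bottom * wb)
           / (wl + wr + wt + wb);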
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Stefano Sabatini <stefasab@gmail.com>