TeeSlave.bsfs is an array of pointers to AVBitStreamFilterContext,
so the element size should really be the size of a pointer, not the
size of the TeeSlave structure.
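A minimal sketch of the corrected allocation, assuming the array is set
up with av_calloc in open_slave (the exact call site may differ):

    /* one AVBitStreamFilterContext* per stream: size each element as a
     * pointer, not as the whole TeeSlave struct */
    tee_slave->bsfs = av_calloc(avf2->nb_streams, sizeof(*tee_slave->bsfs));
    if (!tee_slave->bsfs)
        return AVERROR(ENOMEM);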
Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: Jan Sebechlebsky <sebechlebskyjan@gmail.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
Adds the per-slave option 'onfail' to the tee muxer, allowing an output
to fail so that the other slave outputs can continue.
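Roughly, the idea looks like this (the enum, field and variable names
are illustrative, not the exact ones used in tee.c):

    /* per-slave failure policy, parsed from the 'onfail' slave option */
    enum { ON_FAIL_ABORT, ON_FAIL_IGNORE };

    ret = av_interleaved_write_frame(tee_slave->avf, pkt);
    if (ret < 0) {
        if (tee_slave->on_fail == ON_FAIL_IGNORE) {
            tee_slave->failed = 1;  /* disable this output, keep the rest */
            ret = 0;
        } else {
            return ret;             /* 'abort': fail the whole mux */
        }
    }

On the command line this would typically be selected per slave, e.g.
with onfail=ignore among the bracketed slave options.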
Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: Jan Sebechlebsky <sebechlebskyjan@gmail.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
In open_slave, failure can happen before the bsfs array is initialized,
so close_slave must check that bsfs is not NULL before accessing the
tee_slave->bsfs[i] element.
A slave muxer expects write_trailer to be called if its write_header
succeeded (so that resources allocated in write_header are freed).
Therefore, if a failure happens after a successful write_header call,
we must ensure that write_trailer of that particular slave is called.
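A condensed sketch of the two guards close_slave needs (the
header_written flag and the exact cleanup calls are illustrative; the
real code may differ in detail):

    static void close_slave(TeeSlave *tee_slave)
    {
        AVFormatContext *avf = tee_slave->avf;
        unsigned i;

        if (!avf)
            return;

        /* the slave's write_trailer must run iff its write_header succeeded */
        if (tee_slave->header_written)
            av_write_trailer(avf);

        /* open_slave may have failed before bsfs was allocated */
        if (tee_slave->bsfs) {
            for (i = 0; i < avf->nb_streams; i++)
                if (tee_slave->bsfs[i])
                    av_bitstream_filter_close(tee_slave->bsfs[i]);
        }
        av_freep(&tee_slave->bsfs);

        avformat_free_context(avf);
        tee_slave->avf = NULL;
    }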
Some cleanups were made by Marton Balint.
Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: Jan Sebechlebsky <sebechlebskyjan@gmail.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
* commit '3ee2ec5ec1e39a438f89302d949c93a1b5d365a2':
unix: Use rw_timeout for setting the connect timeout
Merged-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Until now, the decoding API was restricted to outputting 0 or 1 frames
per input packet. It also enforced a somewhat rigid dataflow in general.
This new API seeks to relax these restrictions by decoupling input and
output. Instead of doing a single call on each decode step, which may
consume the packet and may produce output, the new API requires the user
to send input first, and then ask for output.
For now, there are no codecs supporting this API. The API can work with
codecs using the old API, and most code added here is to make them
interoperate. The reverse is not possible, although for audio it might be.
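In terms of the calls this API introduces, the intended decode pattern
is roughly (error handling trimmed):

    /* send one packet of input, then fetch as many frames as are ready */
    ret = avcodec_send_packet(avctx, pkt);
    if (ret < 0)
        goto fail;

    while (1) {
        ret = avcodec_receive_frame(avctx, frame);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            break;          /* need more input, or fully drained */
        if (ret < 0)
            goto fail;
        /* ... use frame ... */
        av_frame_unref(frame);
    }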
From Libav commit 05f66706d1.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
This codepath isn't quite as bad as it used to sound, if fragments
are cut automatically at video packets.
Signed-off-by: Martin Storsjö <martin@martin.st>
* commit '933dec0e29ec4d2cb83474279a6c52d62fdb7310':
file: Add an option for following a file that is being written
Merged-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
* commit 'd44f3e4059506a182f59218b1e967d42b01e097c':
avio: Apply avoptions on the URLContext itself as well
Merged-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
* commit '65a802401c6cc136576bb2e613c0577cbf622aa8':
build: Add component for the SRTP common code
Merged-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
* commit '709c0f79d8032fcf733bfe58e79ca7ff0858c8bc':
nuv: Use the correct context for av_image_check_size
Merged-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Also, make every addition except for sidedata part of version 1 instead of the
new version 2.
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
The operation of closing a single slave is pulled out into a separate
function, close_slave(TeeSlave*). Both the close_slave and close_slaves
functions are moved before the open_slave function.
Reviewed-by: Nicolas George <george@nsup.org>
Signed-off-by: Jan Sebechlebsky <sebechlebskyjan@gmail.com>
Signed-off-by: Marton Balint <cus@passwd.hu>
Previously, the bug was that if -1 < start_time < 0, the reported
"start" time would lose its negative sign.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
* commit '7e01d48cfd168c3dfc663f03a3b6a98e0ecba328':
mov: Check the entries value when parsing dref boxes
Merged-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Broke a lot of stuff and didn't fix anything.
This reverts commit 3c461eecd4, reversing
changes made to 884dd175f0.
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Reviewed-by: Michael Niedermayer <michael@niedermayer.cc>
Signed-off-by: Moritz Barsnick <barsnick@gmx.net>
Signed-off-by: James Almer <jamrial@gmail.com>
* commit '3e8fd93b6ab219221e17fa2b6243cc72cf2d69dc':
lavf: add a missing bump and APIchanges for the codecpar switch
Merged-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
* commit 'fa55addd23c2f168163175aee17adb125c2c0710':
img2: Drop av_ prefix for a static function
Merged-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Restore alphabetical order in lists, break overly long lines, do some
prettyprinting, add some explanatory section comments, group parts
together that belong together logically.
It will be used by text subtitle demuxers to construct format instructions
straight into extradata. They all currently use a similar function that
accepts an AVCodecContext instead.
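A hedged sketch of what such a helper could look like once it targets
AVCodecParameters; the function name and signature below are assumptions
for illustration, not the actual lavf interface:

    /* hypothetical helper: copy a printed header into codecpar extradata */
    static int subtitle_header_to_extradata(AVCodecParameters *par,
                                            const char *hdr, int len)
    {
        par->extradata = av_mallocz(len + AV_INPUT_BUFFER_PADDING_SIZE);
        if (!par->extradata)
            return AVERROR(ENOMEM);
        memcpy(par->extradata, hdr, len);
        par->extradata_size = len;
        return 0;
    }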
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
WAV is not a NOHEADER format, and thus should not be changing
stream codec IDs and probing in read_packet.
Signed-off-by: Derek Buitenhuis <derek.buitenhuis@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The problem is that the argument 'q' is of the type uint8_t.
According to the JPEG standard, if 1 <= q <= 50, the scale factor
'S' should be 5000 / Q. Because create_default_qtables() reuses
the variable 'q' to store the result of this calculation, for small
values of q < 19, q will subsequently overflow and give wrong results
in the calculated quantization tables.
Instead, use a new variable 'S' (same name as in RFC2435) with the
proper range to store the result of the division.
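Sketched against the RFC 2435 formula (table and variable names are
approximate; the surrounding code in rtpdec_jpeg.c may differ):

    /* q is a uint8_t, so 5000 / q (up to 5000) must not be stored back
     * into it; keep the RFC 2435 scale factor in a wider variable */
    int S, i;

    if (q < 50)
        S = 5000 / q;
    else
        S = 200 - q * 2;

    for (i = 0; i < 64; i++) {
        int val = (default_quantizers[i] * S + 50) / 100;
        qtable[i] = av_clip(val, 1, 255);   /* clamp as per the RFC */
    }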
Signed-off-by: Martin Storsjö <martin@martin.st>
Original mail and my own followup on ffmpeg-user earlier today:
I have a device sending out a MJPEG/RTP stream on a low quality setting.
Decoding and displaying the video with libavformat results in a washed
out, low contrast, greyish image. Playing the same stream with VLC results
in proper color representation.
Screenshots for comparison:
http://zevv.nl/div/libav/shot-ffplay.jpg
http://zevv.nl/div/libav/shot-vlc.jpg
A pcap capture of a few seconds of video and an SDP file for playing the
stream are available at
http://zevv.nl/div/libav/mjpeg.pcap
http://zevv.nl/div/libav/mjpeg.sdp
I believe the problem might be in the calculation of the quantization
tables in the function create_default_qtables(), the attached patch
solves the issue for me.
The problem is that the argument 'q' is of the type uint8_t. According to the
JPEG standard, if 1 <= q <= 50, the scale factor 'S' should be 5000 / Q.
Because the create_default_qtables() reuses the variable 'q' to store the
result of this calculation, for small values of q < 19, q will subsequently
overflow and give wrong results in the calculated quantization tables. The
patch below uses a new variable 'S' (same name as in RFC2435) with the proper
range to store the result of the division.
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Apply the default value for timeout in code instead of via the
avoption, to allow distinguishing the default value from the user
not setting anything at all.
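The pattern is roughly to declare the AVOption with a "not set" sentinel
and resolve the real default at open time; the values below are
illustrative, not the exact ones in the protocol code:

    /* AVOption default left at -1 ("unset"); resolve it at open time so
     * an explicit user value can be told apart from no value at all */
    if (s->timeout < 0)
        s->timeout = 5000000;   /* illustrative built-in default, in us */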
Signed-off-by: Martin Storsjö <martin@martin.st>
Since all URLContexts have the same AVOptions, such AVOptions
will be applied on the outermost context only and removed from the
dict, while they probably make sense on all contexts.
This makes sure that rw_timeout gets propagated to the innermost
URLContext (to make sure it gets passed to the tcp protocol, when
opening a http connection for instance).
Alternatively, such matching options would be kept in the dict
and only removed after the ffurl_connect call.
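One way to picture the propagation (a hedged sketch, not the exact avio
code): before opening the nested URLContext, put the generic option back
into the dict so the inner open sees it too.

    /* hand the generic rw_timeout down to the nested protocol
     * (e.g. http -> tcp) by re-adding it to the options dict */
    if (parent->rw_timeout > 0)
        av_dict_set_int(&opts, "rw_timeout", parent->rw_timeout, 0);
    ret = ffurl_open(&inner, uri, flags, int_cb, &opts);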
Signed-off-by: Martin Storsjö <martin@martin.st>
Using this requires setting the rw_timeout option to make it
terminate, or alternatively using the interrupt callback (if used via
the API).
Signed-off-by: Martin Storsjö <martin@martin.st>
If set to a non-zero value, this limits the duration of the
retry_transfer_wrapper() loop, and thus affects ffurl_read*() and
ffurl_write(). As soon as a single byte is successfully
received/transmitted, the timer restarts.
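Conceptually the timing logic behaves like the sketch below (an
approximation of retry_transfer_wrapper(), assuming the timeout is
stored on the URLContext):

    int64_t wait_since = 0;

    while (len < size_min) {
        ret = transfer_func(h, buf + len, size - len);
        if (ret == AVERROR(EAGAIN)) {
            if (h->rw_timeout) {
                if (!wait_since)
                    wait_since = av_gettime_relative();
                else if (av_gettime_relative() - wait_since > h->rw_timeout)
                    return AVERROR(ETIMEDOUT);
            }
            av_usleep(1000);
            continue;
        }
        if (ret < 0)
            return ret;
        wait_since = 0;     /* got at least one byte: restart the timer */
        len += ret;
    }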
This has further changes by Michael Niedermayer and Martin Storsjö.
Signed-off-by: Martin Storsjö <martin@martin.st>
Until now, the decoding API was restricted to outputting 0 or 1 frames
per input packet. It also enforced a somewhat rigid dataflow in general.
This new API seeks to relax these restrictions by decoupling input and
output. Instead of doing a single call on each decode step, which may
consume the packet and may produce output, the new API requires the user
to send input first, and then ask for output.
For now, there are no codecs supporting this API. The API can work with
codecs using the old API, and most code added here is to make them
interoperate. The reverse is not possible, although for audio it might be.
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Adding early support for a subset of the proposed colour elements
according to the latest version of the spec:
https://mailarchive.ietf.org/arch/search/?email_list=cellar&gbt=1&index=hIKLhMdgTMTEwUTeA4ct38h0tmE
Like matroskadec, I've left out elements for pix_fmt related things
as there still seems to be some discussion around these.
The new elements are exposed under strict experimental mode.
Signed-off-by: Neil Birkbeck <neil.birkbeck@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
dv_frame_offset() is static and called only from dv_read_seek(), where
c->sys->frame_size is already used.
This simplifies the incoming codecpar merge where
avctx->{coded_width,coded_height,time_base} are not accessible anymore.
Needed for noStreams.wtv unless something else forces continued parsing
(like looking for more than 1 frame in attachments).
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This reverts commit 9f9ed79d4c.
The hlsopts member was never set anywhere and was always NULL; furthermore,
the HLS demuxer needs to retrieve the proper options from the underlying
http protocol (cookies, user-agent, etc.), so a dummy context won't help.
Instead, use the AVIOContext directly to access the options.
For example, you can split a file, keeping a continuous timecode between
each segment:
ffmpeg -i src.mov -timecode 10:00:00:00 -vcodec copy -f segment \
-segment_time 2 -reset_timestamps 1 -increment_tc 1 target_%03d.mov
Signed-off-by: Stefano Sabatini <stefasab@gmail.com>