Compare commits
release/2. ... n1.2.2 (87 commits)
| Author | SHA1 | Date |
|---|---|---|
| | 930d035e72 | |
| | 875649bfae | |
| | 70127070dd | |
| | d83ab33715 | |
| | 06190d75d0 | |
| | c9ea1f7fc5 | |
| | 47de8ccf42 | |
| | 72c1de649a | |
| | 5e3e67f977 | |
| | 221bbd002c | |
| | fffc9316da | |
| | 9a87ba0933 | |
| | d8538a1002 | |
| | 6cb33e0763 | |
| | 2e75c45593 | |
| | ba4cb43f0b | |
| | dc73774792 | |
| | ce11d9490c | |
| | 430ef8a716 | |
| | 627772d988 | |
| | 5ce57e0248 | |
| | 5effd65312 | |
| | cb340ecd7d | |
| | b88bea1d6c | |
| | 99036565ca | |
| | fcc6460dbf | |
| | 1065d4197e | |
| | fbb1af39e4 | |
| | 073bde2b1f | |
| | e4bb67bc50 | |
| | 59549b5ab7 | |
| | c0445df5b3 | |
| | c77b3737b9 | |
| | 04d6946600 | |
| | ae93d3405e | |
| | 054779625f | |
| | e22dd7fbd0 | |
| | 376a9f8e6e | |
| | 9af68f8d1f | |
| | f2361593ca | |
| | 96e6d4da37 | |
| | 735deda2cf | |
| | 5f64a7a625 | |
| | ffd28de388 | |
| | af589dd5e9 | |
| | fd60aeb55b | |
| | 79a3f364dd | |
| | 33d699a4e7 | |
| | f166a02b67 | |
| | c5948b472b | |
| | 7ee5e97c46 | |
| | 7ef2dbd239 | |
| | 524d0d2cfc | |
| | 91f04e7410 | |
| | 9f7baf139f | |
| | ec89046fa1 | |
| | cc0dd86580 | |
| | 0baa0a5a02 | |
| | 039f6921c2 | |
| | 7fa6db2545 | |
| | 840931e766 | |
| | 1ce3f736d2 | |
| | 4d0090f90a | |
| | f8d5f3dff5 | |
| | 298e03d102 | |
| | 54f3902393 | |
| | 3ca6253beb | |
| | 84df0dc40b | |
| | a857811c75 | |
| | 3d6219b6db | |
| | 0a53c7016f | |
| | f0dc5c0419 | |
| | df8c36265a | |
| | c351d8781d | |
| | d553a522b9 | |
| | 0b6d5f27c8 | |
| | 9ecfd7daa3 | |
| | 90f1aa38b6 | |
| | fc5c81ce0a | |
| | e820e3a259 | |
| | 17911d0a96 | |
| | 2ed7f5a670 | |
| | 7c6c5767eb | |
| | 0ec869527c | |
| | 0b198e38c5 | |
| | 1ea3248290 | |
| | bdd2db60c2 | |
1  .gitattributes (vendored)
@@ -1 +0,0 @@
*.pnm -diff -text
29  .gitignore (vendored)
@@ -6,8 +6,6 @@
*.dylib
*.exe
*.exp
*.gcda
*.gcno
*.h.c
*.ilk
*.lib
@@ -15,7 +13,6 @@
*.pdb
*.so
*.so.*
*.swp
*.ver
*-example
*-test
@@ -27,67 +24,47 @@
/ffprobe
/ffserver
/config.*
/coverage.info
/avversion.h
/version.h
/doc/*.1
/doc/*.3
/doc/*.html
/doc/*.pod
/doc/config.texi
/doc/avoptions_codec.texi
/doc/avoptions_format.texi
/doc/doxy/html/
/doc/examples/avio_dir_cmd
/doc/examples/avio_reading
/doc/examples/decoding_encoding
/doc/examples/demuxing_decoding
/doc/examples/extract_mvs
/doc/examples/filter_audio
/doc/examples/demuxing
/doc/examples/filtering_audio
/doc/examples/filtering_video
/doc/examples/metadata
/doc/examples/muxing
/doc/examples/pc-uninstalled
/doc/examples/remuxing
/doc/examples/resampling_audio
/doc/examples/scaling_video
/doc/examples/transcode_aac
/doc/examples/transcoding
/doc/fate.txt
/doc/doxy/html/
/doc/print_options
/lcov/
/libavcodec/*_tablegen
/libavcodec/*_tables.c
/libavcodec/*_tables.h
/libavutil/avconfig.h
/libavutil/ffversion.h
/tests/audiogen
/tests/base64
/tests/checkasm/checkasm
/tests/data/
/tests/pixfmts.mak
/tests/rotozoom
/tests/test_copy.ffmeta
/tests/tiny_psnr
/tests/tiny_ssim
/tests/videogen
/tests/vsynth1/
/tools/aviocat
/tools/ffbisect
/tools/bisect.need
/tools/crypto_bench
/tools/cws2fws
/tools/fourcc2pixfmt
/tools/ffescape
/tools/ffeval
/tools/ffhash
/tools/graph2dot
/tools/ismindex
/tools/pktdumper
/tools/probetest
/tools/qt-faststart
/tools/sidxindex
/tools/trasher
/tools/seek_print
/tools/uncoded_frame
/tools/zmqsend
667  Changelog
@@ -1,658 +1,6 @@
Entries are sorted chronologically from oldest to youngest within each release,
releases are sorted from youngest to oldest.

version 2.8.7
- avcodec/motion_est: Attempt to fix "short data segment overflowed" on IA64
- avformat/ffmdec: Check pix_fmt
- avcodec/ttaenc: Reallocate packet if its too small
- pgssubdec: fix subpicture output colorspace and range
- avcodec/ac3dec: Reset SPX when switching from EAC3 to AC3
- avfilter/vf_drawtext: Check return code of load_glyph()
- avcodec/takdec: add code that got somehow lost in process of REing
- avcodec/apedec: fix decoding of stereo files with one channel full of silence
- avcodec/avpacket: Fix off by 5 error
- avcodec/h264: Fix for H.264 configuration parsing
- avcodec/bmp_parser: Ensure remaining_size is not too small in startcode packet crossing corner case
- avfilter/src_movie: fix how we check for overflows with seek_point
- avcodec/j2kenc: Add attribution to OpenJPEG project:
- avcodec/h264_slice: Check PPS more extensively when its not copied
- avcodec/libutvideodec: copy frame so it has reference counters when refcounted_frames is set
- avformat/rtpdec_jpeg: fix low contrast image on low quality setting
- avcodec/mjpegenc_common: Store approximate aspect if exact cannot be stored
- lavc/hevc: Allow arbitrary garbage in bytestream as long as at least one NAL unit is found.
- avcodec/resample: Remove disabled and faulty code
- indeo2: Fix banding artefacts
- indeo2data: K&R formatting cosmetics
- avcodec/imgconvert: Support non-planar colorspaces while padding
- avutil/random_seed: Add the runtime in cycles of the main loop to the entropy pool
- avutil/channel_layout: AV_CH_LAYOUT_6POINT1_BACK not reachable in parsing
- avformat/concatdec: set safe mode to enabled instead of auto
- avformat/utils: fix dts from pts code in compute_pkt_fields() during ascending delay
- avformat/rtpenc: Fix integer overflow in NTP_TO_RTP_FORMAT
- avformat/cache: Fix memleak of tree entries
- lavf/mov: downgrade sidx errors to non-fatal warnings; fixes trac #5216 (cherry picked from commit 22dbc1caaf13e4bb17c9e0164a5b1ccaf490e428)
- lavf/mov: fix sidx with edit lists (cherry picked from commit 3617e69d50dd9dd07b5011dfb9477a9d1a630354)
- avcodec/mjpegdec: Fix decoding slightly odd progressive jpeg
- libwebpenc_animencoder: print library messages in verbose log levels
- libwebpenc_animencoder: zero initialize the WebPAnimEncoderOptions struct
- doc/utils: fix typo for min() description
- avcodec/avpacket: clear priv in av_init_packet()
- swscale/utils: Fix chrSrcHSubSample for GBRAP16
- swscale/input: Fix GBRAP16 input
- postproc: fix unaligned access
- avutil/pixdesc: Make get_color_type() aware of CIE XYZ formats
- avcodec/h264: Execute error concealment before marking the frame as done.
- swscale/x86/output: Fix yuv2planeX_16* with unaligned destination
- swscale/x86/output: Move code into yuv2planeX_mainloop
- avutil/frame: Free destination qp_table_buf in frame_copy_props()

version 2.8.6
- avcodec/jpeg2000dec: More completely check cdef
- avutil/opt: check for and handle errors in av_opt_set_dict2()
- avcodec/flacenc: fix calculation of bits required in case of custom sample rate
- avformat: Document urls a bit
- avformat/libquvi: Set default demuxer and protocol limitations
- avformat/concat: Check protocol prefix
- doc/demuxers: Document enable_drefs and use_absolute_path
- avcodec/mjpegdec: Check for end for both bytes in unescaping
- avcodec/mpegvideo_enc: Check for integer overflow in ff_mpv_reallocate_putbitbuffer()
- avformat/avformat: Replace some references to filenames by urls
- avcodec/wmaenc: Check ff_wma_init() for failure
- avcodec/mpeg12enc: Move high resolution thread check to before initializing threads
- avformat/img2dec: Use AVOpenCallback
- avformat/avio: Limit url option parsing to the documented cases
- avformat/img2dec: do not interpret the filename by default if a IO context has been opened
- avcodec/ass_split: Fix null pointer dereference in ff_ass_style_get()
- mov: Add an option to toggle dref opening
- avcodec/gif: Fix lzw buffer size
- avcodec/put_bits: Assert buf_ptr in flush_put_bits()
- avcodec/tiff: Check subsample & rps values more completely
- swscale/swscale: Add some sanity checks for srcSlice* parameters
- swscale/x86/rgb2rgb_template: Fix planar2x() for short width
- swscale/swscale_unscaled: Fix odd height inputs for bayer_to_yv12_wrapper()
- swscale/swscale_unscaled: Fix odd height inputs for bayer_to_rgb24_wrapper()
- avcodec/aacenc: Check both channels for finiteness
- asfdec_o: check for too small size in asf_read_unknown
- asfdec_o: break if EOF is reached after asf_read_packet_header
- asfdec_o: make sure packet_size is non-zero before seeking
- asfdec_o: prevent overflow causing seekback
- asfdec_o: check avio_skip in asf_read_simple_index
- asfdec_o: reject size > INT64_MAX in asf_read_unknown
- asfdec_o: only set asf_pkt->data_size after sanity checks
- Merge commit '8375dc1dd101d51baa430f34c0bcadfa37873896'
- dca: fix misaligned access in avpriv_dca_convert_bitstream
- brstm: fix missing closing brace
- brstm: also allocate b->table in read_packet
- brstm: make sure an ADPC chunk was read for adpcm_thp
- vorbisdec: reject rangebits 0 with non-0 partitions
- vorbisdec: reject channel mapping with less than two channels
- ffmdec: reset packet_end in case of failure
- avformat/ipmovie: put video decoding_map_size into packet and use it in decoder
- avformat/brstm: fix overflow

version 2.8.5
- avformat/hls: Even stricter URL checks
- avformat/hls: More strict url checks
- avcodec/pngenc: Fix mixed up linesizes
- avcodec/pngenc: Replace memcpy by av_image_copy()
- swscale/vscale: Check that 2 tap filters are bilinear before using bilinear code
- swscale: Move VScalerContext into vscale.c
- swscale/utils: Detect and skip unneeded sws_setColorspaceDetails() calls
- swscale/yuv2rgb: Increase YUV2RGB table headroom
- swscale/yuv2rgb: Factor YUVRGB_TABLE_LUMA_HEADROOM out
- avformat/hls: forbid all protocols except http(s) & file
- avformat/aviobuf: Fix end check in put_str16()
- avformat/asfenc: Check pts
- avcodec/mpeg4video: Check time_incr
- avcodec/wavpackenc: Check the number of channels
- avcodec/wavpackenc: Headers are per channel
- avcodec/aacdec_template: Check id_map
- avcodec/dvdec: Fix "left shift of negative value -254"
- avcodec/g2meet: Check for ff_els_decode_bit() failure in epic_decode_run_length()
- avcodec/mjpegdec: Fix negative shift
- avcodec/mss2: Check for repeat overflow
- avformat: Add integer fps from 31 to 60 to get_std_framerate()
- avformat/ivfenc: fix division by zero
- avcodec/mpegvideo_enc: Clip bits_per_raw_sample within valid range
- avfilter/vf_scale: set proper out frame color range
- avcodec/motion_est: Fix mv_penalty table size
- avcodec/h264_slice: Fix integer overflow in implicit weight computation
- swscale/utils: Use normal bilinear scaler if fast cannot be used due to tiny dimensions
- avcodec/put_bits: Always check buffer end before writing
- mjpegdec: extend check for incompatible values of s->rgb and s->ls
- swscale/utils: Fix intermediate format for cascaded alpha downscaling
- avformat/mov: Update handbrake_version threshold for full mp3 parsing
- x86/float_dsp: zero extend offset from ff_scalarproduct_float_sse
- avfilter/vf_zoompan: do not free frame we pushed to lavfi
- nuv: sanitize negative fps rate
- nutdec: reject negative value_len in read_sm_data
- xwddec: prevent overflow of lsize * avctx->height
- nutdec: only copy the header if it exists
- exr: fix out of bounds read in get_code
- on2avc: limit number of bits to 30 in get_egolomb

version 2.8.4
- rawdec: only exempt BIT0 with need_copy from buffer sanity check
- mlvdec: check that index_entries exist
- avcodec/mpeg4videodec: also for empty partitioned slices
- avcodec/h264_refs: Fix long_idx check
- avcodec/h264_mc_template: prefetch list1 only if it is used in the MB
- avcodec/h264_slice: Simplify ref2frm indexing
- avfilter/vf_mpdecimate: Add missing emms_c()
- sonic: make sure num_taps * channels is not larger than frame_size
- opus_silk: fix typo causing overflow in silk_stabilize_lsf
- ffm: reject invalid codec_id and codec_type
- golomb: always check for invalid UE golomb codes in get_ue_golomb
- sbr_qmf_analysis: sanitize input for 32-bit imdct
- sbrdsp_fixed: assert that input values are in the valid range
- aacsbr: ensure strictly monotone time borders
- aacenc: update max_sfb when num_swb changes
- aaccoder: prevent crash of anmr coder
- ffmdec: reject zero-sized chunks
- swscale/x86/rgb2rgb_template: Fallback to mmx in interleaveBytes() if the alignment is insufficient for SSE*
- swscale/x86/rgb2rgb_template: Do not crash on misaligend stride
- avformat/mxfenc: Do not crash if there is no packet in the first stream
- lavf/tee: fix side data double free.
- avformat/hlsenc: Check the return code of avformat_write_header()
- avformat/mov: Enable parser for mp3s by old HandBrake
- avformat/mxfenc: Fix integer overflow in length computation
- avformat/utils: estimate_timings_from_pts - increase retry counter, fixes invalid duration for ts files with hevc codec
- avformat/matroskaenc: Check codecdelay before use
- avutil/mathematics: Fix division by 0
- mjpegdec: consider chroma subsampling in size check
- libvpxenc: remove some unused ctrl id mappings
- avcodec/vp3: ensure header is parsed successfully before tables
- avcodec/jpeg2000dec: Check bpno in decode_cblk()
- avcodec/pgssubdec: Fix left shift of 255 by 24 places cannot be represented in type int
- swscale/utils: Fix for runtime error: left shift of negative value -1
- avcodec/hevc: Fix integer overflow of entry_point_offset
- avcodec/dirac_parser: Check that there is a previous PU before accessing it
- avcodec/dirac_parser: Add basic validity checks for next_pu_offset and prev_pu_offset
- avcodec/dirac_parser: Fix potential overflows in pointer checks
- avcodec/wmaprodec: Check bits per sample to be within the range not causing integer overflows
- avcodec/wmaprodec: Fix overflow of cutoff
- avformat/smacker: fix integer overflow with pts_inc
- avcodec/vp3: Fix "runtime error: left shift of negative value"
- avformat/riffdec: Initialize bitrate
- mpegencts: Fix overflow in cbr mode period calculations
- avutil/timecode: Fix fps check
- avutil/mathematics: return INT64_MIN (=AV_NOPTS_VALUE) from av_rescale_rnd() for overflows
- avcodec/apedec: Check length in long_filter_high_3800()
- avcodec/vp3: always set pix_fmt in theora_decode_header()
- avcodec/mpeg4videodec: Check available data before reading custom matrix
- avutil/mathematics: Do not treat INT64_MIN as positive in av_rescale_rnd
- avutil/integer: Fix av_mod_i() with negative dividend
- avformat/dump: Fix integer overflow in av_dump_format()
- avcodec/h264_refs: Check that long references match before use
- avcodec/utils: Clear dimensions in ff_get_buffer() on failure
- avcodec/utils: Use 64bit for aspect ratio calculation in avcodec_string()
- avcodec/hevc: Check max ctb addresses for WPP
- avcodec/vp3: Clear context on reinitialization failure
- avcodec/hevc: allocate entries unconditionally
- avcodec/hevc_cabac: Fix multiple integer overflows
- avcodec/jpeg2000dwt: Check ndeclevels before calling dwt_encode*()
- avcodec/jpeg2000dwt: Check ndeclevels before calling dwt_decode*()
- avcodec/hevc: Check entry_point_offsets
- lavf/rtpenc_jpeg: Less strict check for standard Huffman tables.
- avcodec/ffv1dec: Clear quant_table_count if its invalid
- avcodec/ffv1dec: Print an error if the quant table count is invalid
- doc/filters/drawtext: fix centering example

version 2.8.3
- avcodec/cabac: Check initial cabac decoder state
- avcodec/cabac_functions: Fix "left shift of negative value -31767"
- avcodec/h264_slice: Limit max_contexts when slice_context_count is initialized
- rtmpcrypt: Do the xtea decryption in little endian mode
- avformat/matroskadec: Check subtitle stream before dereferencing
- avcodec/pngdec: Replace assert by request for sample for unsupported TRNS cases
- avformat/utils: Do not init parser if probing is unfinished
- avcodec/jpeg2000dec: Fix potential integer overflow with tile dimensions
- avcodec/jpeg2000: Use av_image_check_size() in ff_jpeg2000_init_component()
- avcodec/wmaprodec: Check for overread in decode_packet()
- avcodec/smacker: Check that the data size is a multiple of a sample vector
- avcodec/takdec: Skip last p2 sample (which is unused)
- avcodec/dxtory: Fix input size check in dxtory_decode_v1_410()
- avcodec/dxtory: Fix input size check in dxtory_decode_v1_420()
- avcodec/error_resilience: avoid accessing previous or next frames tables beyond height
- avcodec/dpx: Move need_align to act per line
- avcodec/flashsv: Check size before updating it
- avcodec/ivi: Check image dimensions
- avcodec/utils: Better check for channels in av_get_audio_frame_duration()
- avcodec/jpeg2000dec: Check for duplicate SIZ marker
- aacsbr: don't call sbr_dequant twice without intermediate read_sbr_data
- hqx: correct type and size check of info_offset
- mxfdec: check edit_rate also for physical_track
- avcodec/jpeg2000: Change coord to 32bit to support larger than 32k width or height
- avcodec/jpeg2000dec: Check SIZ dimensions to be within the supported range
- avcodec/jpeg2000: Check comp coords to be within the supported size
- mpegvideo: clear overread in clear_context
- avcodec/avrndec: Use the AVFrame format instead of the context
- dds: disable palette flag for compressed images
- dds: validate compressed source buffer size
- dds: validate source buffer size before copying
- dvdsubdec: validate offset2 similar to offset1
- brstm: reject negative sample rate
- aacps: avoid division by zero in stereo_processing
- softfloat: assert when the argument of av_sqrt_sf is negative

version 2.8.2
- various fixes in the aac_fixed decoder
- various fixes in softfloat
- swresample/resample: increase precision for compensation
- lavf/mov: add support for sidx fragment indexes
- avformat/mxfenc: Only store user comment related tags when needed
- tests/fate/avformat: Fix fate-lavf
- doc/ffmpeg: Clarify that the sdp_file option requires an rtp output.
- ffmpeg: Don't try and write sdp info if none of the outputs had an rtp format.
- apng: use correct size for output buffer
- jvdec: avoid unsigned overflow in comparison
- avcodec/jpeg2000dec: Clip all tile coordinates
- avcodec/microdvddec: Check for string end in 'P' case
- avcodec/dirac_parser: Fix undefined memcpy() use
- avformat/xmv: Discard remainder of packet on error
- avformat/xmv: factor return check out of if/else
- avcodec/mpeg12dec: Do not call show_bits() with invalid bits
- avcodec/faxcompr: Add missing runs check in decode_uncompressed()
- libavutil/channel_layout: Check strtol*() for failure
- avformat/mpegts: Only start probing data streams within probe_packets
- avcodec/hevc_ps: Check chroma_format_idc
- avcodec/ffv1dec: Check for 0 quant tables
- avcodec/mjpegdec: Reinitialize IDCT on BPP changes
- avcodec/mjpegdec: Check index in ljpeg_decode_yuv_scan() before using it
- avutil/file_open: avoid file handle inheritance on Windows
- avcodec/h264_slice: Disable slice threads if there are multiple access units in a packet
- avformat/hls: update cookies on setcookie response
- opusdec: Don't run vector_fmul_scalar on zero length arrays
- avcodec/opusdec: Fix extra samples read index
- avcodec/ffv1: Initialize vlc_state on allocation
- avcodec/ffv1dec: update progress in case of broken pointer chains
- avcodec/ffv1dec: Clear slice coordinates if they are invalid or slice header decoding fails for other reasons
- rtsp: Allow $ as interleaved packet indicator before a complete response header
- videodsp: don't overread edges in vfix3 emu_edge.
- avformat/mp3dec: improve junk skipping heuristic
- concatdec: fix file_start_time calculation regression
- avcodec: loongson optimize h264dsp idct and loop filter with mmi
- avcodec/jpeg2000dec: Clear properties in jpeg2000_dec_cleanup() too
- avformat/hls: add support for EXT-X-MAP
- avformat/hls: fix segment selection regression on track changes of live streams
- configure: Require libkvazaar < 0.7.
- avcodec/vp8: Do not use num_coeff_partitions in thread/buffer setup

version 2.8.1:
- swscale: fix ticket #4881
- doc: fix spelling errors
- hls: only seek if there is an offset
- asfdec: add more checks for size left in asf packet buffer
- asfdec: alloc enough space for storing name in asf_read_metadata_obj
- avcodec/pngdec: Check blend_op.
- h264_mp4toannexb: fix pps offfset fault when there are more than one sps in avcc
- avcodec/h264_mp4toannexb_bsf: Use av_freep() to free spspps_buf
- avformat/avidec: Workaround broken initial frame
- avformat/hls: fix some cases of HLS streams which require cookies
- avcodec/pngdec: reset has_trns after every decode_frame_png()
- lavf/img2dec: Fix memory leak
- avcodec/mp3: fix skipping zeros
- avformat/srtdec: make sure we probe a number
- configure: check for ID3D11VideoContext
- avformat/vobsub: compare correct packet stream IDs
- avformat/srtdec: more lenient first line probing
- avformat/srtdec: fix number check for the first character
- avcodec/mips: build fix for MSA 64bit
- avcodec/mips: build fix for MSA
- avformat/httpauth: Add space after commas in HTTP/RTSP auth header
- libavformat/hlsenc: Use of uninitialized memory unlinking old files
- avcodec/x86/sbrdsp: Fix using uninitialized upper 32bit of noise
- avcodec/ffv1dec: Fix off by 1 error in quant_table_count check
- avcodec/ffv1dec: Explicitly check read_quant_table() return value
- dnxhddata: correct weight tables
- dnxhddec: decode and use interlace mb flag
- swscale: fix ticket #4877
- avcodec/rangecoder: Check e
- avcodec/ffv1: separate slice_count from max_slice_count
- swscale: fix ticket 4850
- cmdutils: Filter dst/srcw/h
- avutil/log: fix zero length gnu_printf format string warning
- lavf/webvttenc: Require webvtt file to contain exactly one WebVTT stream.
- swscale/swscale: Fix "unused variable" warning
- avcodec/mjpegdec: Fix decoding RGBA RCT LJPEG
- MAINTAINERS: add 2.8, drop 2.2
- doc: mention libavcodec can decode Opus natively
- hevc: properly handle no_rasl_output_flag when removing pictures from the DPB
- avfilter/af_ladspa: process all channels for nb_handles > 1
- configure: add libsoxr to swresample's pkgconfig
- lavc: Fix compilation with --disable-everything --enable-parser=mpeg4video.

version 2.8:
- colorkey video filter
- BFSTM/BCSTM demuxer
- little-endian ADPCM_THP decoder
- Hap decoder and encoder
- DirectDraw Surface image/texture decoder
- ssim filter
- optional new ASF demuxer
- showvolume filter
- Many improvements to the JPEG 2000 decoder
- Go2Meeting decoding support
- adrawgraph audio and drawgraph video filter
- removegrain video filter
- Intel QSV-accelerated MPEG-2 video and HEVC encoding
- Intel QSV-accelerated MPEG-2 video and HEVC decoding
- Intel QSV-accelerated VC-1 video decoding
- libkvazaar HEVC encoder
- erosion, dilation, deflate and inflate video filters
- Dynamic Audio Normalizer as dynaudnorm filter
- Reverse video and areverse audio filter
- Random filter
- deband filter
- AAC fixed-point decoding
- sidechaincompress audio filter
- bitstream filter for converting HEVC from MP4 to Annex B
- acrossfade audio filter
- allyuv and allrgb video sources
- atadenoise video filter
- OS X VideoToolbox support
- aphasemeter filter
- showfreqs filter
- vectorscope filter
- waveform filter
- hstack and vstack filter
- Support DNx100 (1440x1080@8)
- VAAPI hevc hwaccel
- VDPAU hevc hwaccel
- framerate filter
- Switched default encoders for webm to VP9 and Opus
- Removed experimental flag from the JPEG 2000 encoder

version 2.7:
- FFT video filter
- TDSC decoder
- DTS lossless extension (XLL) decoding (not lossless, disabled by default)
- showwavespic filter
- DTS decoding through libdcadec
- Drop support for nvenc API before 5.0
- nvenc HEVC encoder
- Detelecine filter
- Intel QSV-accelerated H.264 encoding
- MMAL-accelerated H.264 decoding
- basic APNG encoder and muxer with default extension "apng"
- unpack DivX-style packed B-frames in MPEG-4 bitstream filter
- WebM Live Chunk Muxer
- nvenc level and tier options
- chorus filter
- Canopus HQ/HQA decoder
- Automatically rotate videos based on metadata in ffmpeg
- improved Quickdraw compatibility
- VP9 high bit-depth and extended colorspaces decoding support
- WebPAnimEncoder API when available for encoding and muxing WebP
- Direct3D11-accelerated decoding
- Support Secure Transport
- Multipart JPEG demuxer

version 2.6:
- nvenc encoder
- 10bit spp filter
- colorlevels filter
- RIFX format for *.wav files
- RTP/mpegts muxer
- non continuous cache protocol support
- tblend filter
- cropdetect support for non 8bpp, absolute (if limit >= 1) and relative (if limit < 1.0) threshold
- Camellia symmetric block cipher
- OpenH264 encoder wrapper
- VOC seeking support
- Closed caption Decoder
- fspp, uspp, pp7 MPlayer postprocessing filters ported to native filters
- showpalette filter
- Twofish symmetric block cipher
- Support DNx100 (960x720@8)
- eq2 filter ported from libmpcodecs as eq filter
- removed libmpcodecs
- Changed default DNxHD colour range in QuickTime .mov derivatives to mpeg range
- ported softpulldown filter from libmpcodecs as repeatfields filter
- dcshift filter
- RTP depacketizer for loss tolerant payload format for MP3 audio (RFC 5219)
- RTP depacketizer for AC3 payload format (RFC 4184)
- palettegen and paletteuse filters
- VP9 RTP payload format (draft 0) experimental depacketizer
- RTP depacketizer for DV (RFC 6469)
- DXVA2-accelerated HEVC decoding
- AAC ELD 480 decoding
- Intel QSV-accelerated H.264 decoding
- DSS SP decoder and DSS demuxer
- Fix stsd atom corruption in DNxHD QuickTimes
- Canopus HQX decoder
- RTP depacketization of T.140 text (RFC 4103)
- Port MIPS optimizations to 64-bit

version 2.5:
- HEVC/H.265 RTP payload format (draft v6) packetizer
- SUP/PGS subtitle demuxer
- ffprobe -show_pixel_formats option
- CAST128 symmetric block cipher, ECB mode
- STL subtitle demuxer and decoder
- libutvideo YUV 4:2:2 10bit support
- XCB-based screen-grabber
- UDP-Lite support (RFC 3828)
- xBR scaling filter
- AVFoundation screen capturing support
- ffserver supports codec private options
- creating DASH compatible fragmented MP4, MPEG-DASH segmenting muxer
- WebP muxer with animated WebP support
- zygoaudio decoding support
- APNG demuxer
- postproc visualization support

version 2.4:
- Icecast protocol
- ported lenscorrection filter from frei0r filter
- large optimizations in dctdnoiz to make it usable
- ICY metadata are now requested by default with the HTTP protocol
- support for using metadata in stream specifiers in fftools
- LZMA compression support in TIFF decoder
- H.261 RTP payload format (RFC 4587) depacketizer and experimental packetizer
- HEVC/H.265 RTP payload format (draft v6) depacketizer
- added codecview filter to visualize information exported by some codecs
- Matroska 3D support thorugh side data
- HTML generation using texi2html is deprecated in favor of makeinfo/texi2any
- silenceremove filter

version 2.3:
- AC3 fixed-point decoding
- shuffleplanes filter
- subfile protocol
- Phantom Cine demuxer
- replaygain data export
- VP7 video decoder
- Alias PIX image encoder and decoder
- Improvements to the BRender PIX image decoder
- Improvements to the XBM decoder
- QTKit input device
- improvements to OpenEXR image decoder
- support decoding 16-bit RLE SGI images
- GDI screen grabbing for Windows
- alternative rendition support for HTTP Live Streaming
- AVFoundation input device
- Direct Stream Digital (DSD) decoder
- Magic Lantern Video (MLV) demuxer
- On2 AVC (Audio for Video) decoder
- support for decoding through DXVA2 in ffmpeg
- libbs2b-based stereo-to-binaural audio filter
- libx264 reference frames count limiting depending on level
- native Opus decoder
- display matrix export and rotation API
- WebVTT encoder
- showcqt multimedia filter
- zoompan filter
- signalstats filter
- hqx filter (hq2x, hq3x, hq4x)
- flanger filter
- Image format auto-detection
- LRC demuxer and muxer
- Samba protocol (via libsmbclient)
- WebM DASH Manifest muxer
- libfribidi support in drawtext

version 2.2:

- HNM version 4 demuxer and video decoder
- Live HDS muxer
- setsar/setdar filters now support variables in ratio expressions
- elbg filter
- string validation in ffprobe
- support for decoding through VDPAU in ffmpeg (the -hwaccel option)
- complete Voxware MetaSound decoder
- remove mp3_header_compress bitstream filter
- Windows resource files for shared libraries
- aeval filter
- stereoscopic 3d metadata handling
- WebP encoding via libwebp
- ATRAC3+ decoder
- VP8 in Ogg demuxing
- side & metadata support in NUT
- framepack filter
- XYZ12 rawvideo support in NUT
- Exif metadata support in WebP decoder
- OpenGL device
- Use metadata_header_padding to control padding in ID3 tags (currently used in
  MP3, AIFF, and OMA files), FLAC header, and the AVI "junk" block.
- Mirillis FIC video decoder
- Support DNx444
- libx265 encoder
- dejudder filter
- Autodetect VDA like all other hardware accelerations
- aliases and defaults for Ogg subtypes (opus, spx)

version 2.1:

- aecho filter
- perspective filter ported from libmpcodecs
- ffprobe -show_programs option
- compand filter
- RTMP seek support
- when transcoding with ffmpeg (i.e. not streamcopying), -ss is now accurate
  even when used as an input option. Previous behavior can be restored with
  the -noaccurate_seek option.
- ffmpeg -t option can now be used for inputs, to limit the duration of
  data read from an input file
- incomplete Voxware MetaSound decoder
- read EXIF metadata from JPEG
- DVB teletext decoder
- phase filter ported from libmpcodecs
- w3fdif filter
- Opus support in Matroska
- FFV1 version 1.3 is stable and no longer experimental
- FFV1: YUVA(444,422,420) 9, 10 and 16 bit support
- changed DTS stream id in lavf mpeg ps muxer from 0x8a to 0x88, to be
  more consistent with other muxers.
- adelay filter
- pullup filter ported from libmpcodecs
- ffprobe -read_intervals option
- Lossless and alpha support for WebP decoder
- Error Resilient AAC syntax (ER AAC LC) decoding
- Low Delay AAC (ER AAC LD) decoding
- mux chapters in ASF files
- SFTP protocol (via libssh)
- libx264: add ability to encode in YUVJ422P and YUVJ444P
- Fraps: use BT.709 colorspace by default for yuv, as reference fraps decoder does
- make decoding alpha optional for prores, ffv1 and vp6 by setting
  the skip_alpha flag.
- ladspa wrapper filter
- native VP9 decoder
- dpx parser
- max_error_rate parameter in ffmpeg
- PulseAudio output device
- ReplayGain scanner
- Enhanced Low Delay AAC (ER AAC ELD) decoding (no LD SBR support)
- Linux framebuffer output device
- HEVC decoder
- raw HEVC, HEVC in MOV/MP4, HEVC in Matroska, HEVC in MPEG-TS demuxing
- mergeplanes filter

version 2.0:

- curves filter
- reference-counting for AVFrame and AVPacket data
- ffmpeg now fails when input options are used for output file
  or vice versa
- support for Monkey's Audio versions from 3.93
- perms and aperms filters
- audio filtering support in ffplay
- 10% faster aac encoding on x86 and MIPS
- sine audio filter source
- WebP demuxing and decoding support
- ffmpeg options -filter_script and -filter_complex_script, which allow a
  filtergraph description to be read from a file
- OpenCL support
- audio phaser filter
- separatefields filter
- libquvi demuxer
- uniform options syntax across all filters
- telecine filter
- interlace filter
- smptehdbars source
- inverse telecine filters (fieldmatch and decimate)
- colorbalance filter
- colorchannelmixer filter
- The matroska demuxer can now output proper verbatim ASS packets. It will
  become the default at the next libavformat major bump.
- decent native animated GIF encoding
- asetrate filter
- interleave filter
- timeline editing with filters
- vidstabdetect and vidstabtransform filters for video stabilization using
  the vid.stab library
- astats filter
- trim and atrim filters
- ffmpeg -t and -ss (output-only) options are now sample-accurate when
  transcoding audio
- Matroska muxer can now put the index at the beginning of the file.
- extractplanes filter
- avectorscope filter
- ADPCM DTK decoder
- ADP demuxer
- RSD demuxer
- RedSpark demuxer
- ADPCM IMA Radical decoder
- zmq filters
- DCT denoiser filter (dctdnoiz)
- Wavelet denoiser filter ported from libmpcodecs as owdenoise (formerly "ow")
- Apple Intermediate Codec decoder
- Escape 130 video decoder
- FTP protocol support
- V4L2 output device
- 3D LUT filter (lut3d)
- SMPTE 302M audio encoder
- support for slice multithreading in libavfilter
- Hald CLUT support (generation and filtering)
- VC-1 interlaced B-frame support
- support for WavPack muxing (raw and in Matroska)
- XVideo output device
- vignette filter
- True Audio (TTA) encoder
- Go2Webinar decoder
- mcdeint filter ported from libmpcodecs
- sab filter ported from libmpcodecs
- ffprobe -show_chapters option
- WavPack encoding through libwavpack
- rotate filter
- spp filter ported from libmpcodecs
- libgme support
- psnr filter

version 1.2:

- VDPAU hardware acceleration through normal hwaccel
@@ -719,7 +67,7 @@ version 1.1:
- JSON captions for TED talks decoding support
- SOX Resampler support in libswresample
- aselect filter
- SGI RLE 8-bit / Silicon Graphics RLE 8-bit video decoder
- SGI RLE 8-bit decoder
- Silicon Graphics Motion Video Compressor 1 & 2 decoder
- Silicon Graphics Movie demuxer
- apad filter
@@ -763,9 +111,7 @@ version 1.0:
- RTMPE protocol support
- RTMPTE protocol support
- showwaves and showspectrum filter
- LucasArts SMUSH SANM playback support
- LucasArts SMUSH VIMA audio decoder (ADPCM)
- LucasArts SMUSH demuxer
- LucasArts SMUSH playback support
- SAMI, RealText and SubViewer demuxers and decoders
- Heart Of Darkness PAF playback support
- iec61883 device
@@ -889,7 +235,6 @@ version 0.10:
- ffwavesynth decoder
- aviocat tool
- ffeval tool
- support encoding and decoding 4-channel SGI images

version 0.9:
@@ -938,7 +283,7 @@ easier to use. The changes are:
  all the stream in the first input file, except for the second audio
  stream'.
* There is a new option -c (or -codec) for choosing the decoder/encoder to
  use, which makes it possible to precisely specify target stream(s) consistently with
  use, which allows to precisely specify target stream(s) consistently with
  other options. E.g. -c:v lib264 sets the codec for all video streams, -c:a:0
  libvorbis sets the codec for the first audio stream and -c copy copies all
  the streams without reencoding. Old -vcodec/-acodec/-scodec options are now
@@ -1190,7 +535,7 @@ version 0.6:
- LPCM support in MPEG-TS (HDMV RID as found on Blu-ray disks)
- WMA Pro decoder
- Core Audio Format demuxer
- ATRAC1 decoder
- Atrac1 decoder
- MD STUDIO audio demuxer
- RF64 support in WAV demuxer
- MPEG-4 Audio Lossless Coding (ALS) decoder
@@ -1290,7 +635,7 @@ version 0.5:
- MXF demuxer
- VC-1/WMV3/WMV9 video decoder
- MacIntel support
- AviSynth support
- AVISynth support
- VMware video decoder
- VP5 video decoder
- VP6 video decoder
@@ -1318,7 +663,7 @@ version 0.5:
- Interplay C93 demuxer and video decoder
- Bethsoft VID demuxer and video decoder
- CRYO APC demuxer
- ATRAC3 decoder
- Atrac3 decoder
- V.Flash PTX decoder
- RoQ muxer, RoQ audio encoder
- Renderware TXD demuxer and decoder
15  INSTALL (new file)
@@ -0,0 +1,15 @@

1) Type './configure' to create the configuration. A list of configure
options is printed by running 'configure --help'.

'configure' can be launched from a directory different from the FFmpeg
sources to build the objects out of tree. To do this, use an absolute
path when launching 'configure', e.g. '/ffmpegdir/ffmpeg/configure'.

2) Then type 'make' to build FFmpeg. GNU Make 3.81 or later is required.

3) Type 'make install' to install all binaries and libraries you built.

NOTICE

- Non system dependencies (e.g. libx264, libvpx) are disabled by default.
17  INSTALL.md
@@ -1,17 +0,0 @@
#Installing FFmpeg:

1. Type `./configure` to create the configuration. A list of configure
options is printed by running `configure --help`.

`configure` can be launched from a directory different from the FFmpeg
sources to build the objects out of tree. To do this, use an absolute
path when launching `configure`, e.g. `/ffmpegdir/ffmpeg/configure`.

2. Then type `make` to build FFmpeg. GNU Make 3.81 or later is required.

3. Type `make install` to install all binaries and libraries you built.

NOTICE
------

- Non system dependencies (e.g. libx264, libvpx) are disabled by default.
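Both install guides above describe the same configure/make flow, including the out-of-tree variant. A minimal sketch of that sequence, assuming placeholder paths `/path/to/ffmpeg` (the source checkout) and `/path/to/build` (an empty build directory):

```sh
# Out-of-tree build following the INSTALL steps above; paths are placeholders.
mkdir -p /path/to/build
cd /path/to/build

/path/to/ffmpeg/configure --help   # 1) list the available configure options
/path/to/ffmpeg/configure          #    create the configuration out of tree
make                               # 2) build FFmpeg (GNU Make 3.81 or later)
make install                       # 3) install the binaries and libraries
```

Launching `configure` from the build directory rather than the source tree keeps the generated objects out of the checkout, as the guides note.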
91  LICENSE (new file)
@@ -0,0 +1,91 @@
FFmpeg:

Most files in FFmpeg are under the GNU Lesser General Public License version 2.1
or later (LGPL v2.1+). Read the file COPYING.LGPLv2.1 for details. Some other
files have MIT/X11/BSD-style licenses. In combination the LGPL v2.1+ applies to
FFmpeg.

Some optional parts of FFmpeg are licensed under the GNU General Public License
version 2 or later (GPL v2+). See the file COPYING.GPLv2 for details. None of
these parts are used by default, you have to explicitly pass --enable-gpl to
configure to activate them. In this case, FFmpeg's license changes to GPL v2+.

Specifically, the GPL parts of FFmpeg are

- libpostproc
- libmpcodecs
- optional x86 optimizations in the files
  libavcodec/x86/idct_mmx.c
- libutvideo encoding/decoding wrappers in
  libavcodec/libutvideo*.cpp
- the X11 grabber in libavdevice/x11grab.c
- the swresample test app in
  libswresample/swresample-test.c
- the texi2pod.pl tool
- the following filters in libavfilter:
  - f_ebur128.c
  - vf_blackframe.c
  - vf_boxblur.c
  - vf_colormatrix.c
  - vf_cropdetect.c
  - vf_decimate.c
  - vf_delogo.c
  - vf_geq.c
  - vf_histeq.c
  - vf_hqdn3d.c
  - vf_hue.c
  - vf_kerndeint.c
  - vf_mp.c
  - vf_noise.c
  - vf_pp.c
  - vf_smartblur.c
  - vf_stereo3d.c
  - vf_super2xsai.c
  - vf_tinterlace.c
  - vf_yadif.c
  - vsrc_mptestsrc.c

Should you, for whatever reason, prefer to use version 3 of the (L)GPL, then
the configure parameter --enable-version3 will activate this licensing option
for you. Read the file COPYING.LGPLv3 or, if you have enabled GPL parts,
COPYING.GPLv3 to learn the exact legal terms that apply in this case.

There are a handful of files under other licensing terms, namely:

* The files libavcodec/jfdctfst.c, libavcodec/jfdctint_template.c and
  libavcodec/jrevdct.c are taken from libjpeg, see the top of the files for
  licensing details. Specifically note that you must credit the IJG in the
  documentation accompanying your program if you only distribute executables.
  You must also indicate any changes including additions and deletions to
  those three files in the documentation.

external libraries
==================

FFmpeg can be combined with a number of external libraries, which sometimes
affect the licensing of binaries resulting from the combination.

compatible libraries
--------------------

The libcdio, libx264, libxavs and libxvid libraries are under GPL. When
combining them with FFmpeg, FFmpeg needs to be licensed as GPL as well by
passing --enable-gpl to configure.

The OpenCORE and VisualOn libraries are under the Apache License 2.0. That
license is incompatible with the LGPL v2.1 and the GPL v2, but not with
version 3 of those licenses. So to combine these libraries with FFmpeg, the
license version needs to be upgraded by passing --enable-version3 to configure.

incompatible libraries
----------------------

The Fraunhofer AAC library, FAAC and aacplus are under licenses which
are incompatible with the GPLv2 and v3. We do not know for certain if their
licenses are compatible with the LGPL.
If you wish to enable these libraries, pass --enable-nonfree to configure.
But note that if you enable any of these libraries the resulting binary will
be under a complex license mix that is more restrictive than the LGPL and that
may result in additional obligations. It is possible that these
restrictions cause the resulting binary to be unredistributeable.
113  LICENSE.md
@@ -1,113 +0,0 @@
#FFmpeg:

Most files in FFmpeg are under the GNU Lesser General Public License version 2.1
or later (LGPL v2.1+). Read the file `COPYING.LGPLv2.1` for details. Some other
files have MIT/X11/BSD-style licenses. In combination the LGPL v2.1+ applies to
FFmpeg.

Some optional parts of FFmpeg are licensed under the GNU General Public License
version 2 or later (GPL v2+). See the file `COPYING.GPLv2` for details. None of
these parts are used by default, you have to explicitly pass `--enable-gpl` to
configure to activate them. In this case, FFmpeg's license changes to GPL v2+.

Specifically, the GPL parts of FFmpeg are:

- libpostproc
- optional x86 optimizations in the files
  - `libavcodec/x86/flac_dsp_gpl.asm`
  - `libavcodec/x86/idct_mmx.c`
  - `libavfilter/x86/vf_removegrain.asm`
- libutvideo encoding/decoding wrappers in
  `libavcodec/libutvideo*.cpp`
- the X11 grabber in `libavdevice/x11grab.c`
- the swresample test app in
  `libswresample/swresample-test.c`
- the `texi2pod.pl` tool
- the following filters in libavfilter:
  - `f_ebur128.c`
  - `vf_blackframe.c`
  - `vf_boxblur.c`
  - `vf_colormatrix.c`
  - `vf_cover_rect.c`
  - `vf_cropdetect.c`
  - `vf_delogo.c`
  - `vf_eq.c`
  - `vf_find_rect.c`
  - `vf_fspp.c`
  - `vf_geq.c`
  - `vf_histeq.c`
  - `vf_hqdn3d.c`
  - `vf_interlace.c`
  - `vf_kerndeint.c`
  - `vf_mcdeint.c`
  - `vf_mpdecimate.c`
  - `vf_owdenoise.c`
  - `vf_perspective.c`
  - `vf_phase.c`
  - `vf_pp.c`
  - `vf_pp7.c`
  - `vf_pullup.c`
  - `vf_sab.c`
  - `vf_smartblur.c`
  - `vf_repeatfields.c`
  - `vf_spp.c`
  - `vf_stereo3d.c`
  - `vf_super2xsai.c`
  - `vf_tinterlace.c`
  - `vf_uspp.c`
  - `vsrc_mptestsrc.c`

Should you, for whatever reason, prefer to use version 3 of the (L)GPL, then
the configure parameter `--enable-version3` will activate this licensing option
for you. Read the file `COPYING.LGPLv3` or, if you have enabled GPL parts,
`COPYING.GPLv3` to learn the exact legal terms that apply in this case.

There are a handful of files under other licensing terms, namely:

* The files `libavcodec/jfdctfst.c`, `libavcodec/jfdctint_template.c` and
  `libavcodec/jrevdct.c` are taken from libjpeg, see the top of the files for
  licensing details. Specifically note that you must credit the IJG in the
  documentation accompanying your program if you only distribute executables.
  You must also indicate any changes including additions and deletions to
  those three files in the documentation.
* `tests/reference.pnm` is under the expat license.

external libraries
==================

FFmpeg can be combined with a number of external libraries, which sometimes
affect the licensing of binaries resulting from the combination.

compatible libraries
--------------------

The following libraries are under GPL:
- frei0r
- libcdio
- libutvideo
- libvidstab
- libx264
- libx265
- libxavs
- libxvid

When combining them with FFmpeg, FFmpeg needs to be licensed as GPL as well by
passing `--enable-gpl` to configure.

The OpenCORE and VisualOn libraries are under the Apache License 2.0. That
license is incompatible with the LGPL v2.1 and the GPL v2, but not with
version 3 of those licenses. So to combine these libraries with FFmpeg, the
license version needs to be upgraded by passing `--enable-version3` to configure.

incompatible libraries
----------------------

The Fraunhofer AAC library, FAAC and aacplus are under licenses which
are incompatible with the GPLv2 and v3. We do not know for certain if their
licenses are compatible with the LGPL.
If you wish to enable these libraries, pass `--enable-nonfree` to configure.
But note that if you enable any of these libraries the resulting binary will
be under a complex license mix that is more restrictive than the LGPL and that
may result in additional obligations. It is possible that these
restrictions cause the resulting binary to be unredistributeable.
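Both license texts tie the effective license of a build to the flags passed to `configure`. A sketch of the invocations they describe; the external libraries named here are illustrative examples, not taken from the compared trees:

```sh
# Default build: LGPL v2.1+; GPL parts and non-free libraries stay disabled.
./configure

# Activate the GPL parts (libpostproc, the GPL filters, GPL external libraries
# such as libx264): the resulting FFmpeg is licensed GPL v2+.
./configure --enable-gpl --enable-libx264

# Prefer version 3 of the (L)GPL, e.g. to combine with Apache-2.0 libraries
# like the OpenCORE and VisualOn codecs.
./configure --enable-version3

# Non-free libraries (e.g. the Fraunhofer AAC library) additionally require
# --enable-nonfree and can make the resulting binary unredistributable.
./configure --enable-gpl --enable-nonfree
```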
219
MAINTAINERS
219
MAINTAINERS
@@ -7,13 +7,14 @@ FFmpeg code.
|
||||
Please try to keep entries where you are the maintainer up to date!
|
||||
|
||||
Names in () mean that the maintainer currently has no time to maintain the code.
|
||||
A (CC <address>) after the name means that the maintainer prefers to be CC-ed on
|
||||
patches and related discussions.
|
||||
A CC after the name means that the maintainer prefers to be CC-ed on patches
|
||||
and related discussions.
|
||||
|
||||
|
||||
Project Leader
|
||||
==============
|
||||
|
||||
Michael Niedermayer
|
||||
final design decisions
|
||||
|
||||
|
||||
@@ -30,7 +31,7 @@ ffprobe:
|
||||
ffprobe.c Stefano Sabatini
|
||||
|
||||
ffserver:
|
||||
ffserver.c Reynaldo H. Verdejo Pinochet
|
||||
ffserver.c, ffserver.h Baptiste Coudurier
|
||||
|
||||
Commandline utility code:
|
||||
cmdutils.c, cmdutils.h Michael Niedermayer
|
||||
@@ -42,26 +43,16 @@ QuickTime faststart:
|
||||
Miscellaneous Areas
|
||||
===================
|
||||
|
||||
documentation Stefano Sabatini, Mike Melanson, Timothy Gu, Lou Logan
|
||||
build system (configure, makefiles) Diego Biurrun, Mans Rullgard
|
||||
project server Árpád Gereöffy, Michael Niedermayer, Reimar Doeffinger, Alexander Strasser, Lou Logan
|
||||
documentation Mike Melanson
|
||||
website Robert Swain, Lou Logan
|
||||
build system (configure,Makefiles) Diego Biurrun, Mans Rullgard
|
||||
project server Árpád Gereöffy, Michael Niedermayer, Reimar Döffinger
|
||||
mailinglists Michael Niedermayer, Baptiste Coudurier, Lou Logan
|
||||
presets Robert Swain
|
||||
metadata subsystem Aurelien Jacobs
|
||||
release management Michael Niedermayer
|
||||
|
||||
|
||||
Communication
|
||||
=============
|
||||
|
||||
website Deby Barbara Lepage
|
||||
fate.ffmpeg.org Timothy Gu
|
||||
Trac bug tracker Alexander Strasser, Michael Niedermayer, Carl Eugen Hoyos, Lou Logan
|
||||
mailing lists Michael Niedermayer, Baptiste Coudurier, Lou Logan
|
||||
Google+ Paul B Mahol, Michael Niedermayer, Alexander Strasser
|
||||
Twitter Lou Logan, Reynaldo H. Verdejo Pinochet
|
||||
Launchpad Timothy Gu
|
||||
|
||||
|
||||
libavutil
|
||||
=========
|
||||
|
||||
@@ -71,24 +62,11 @@ Internal Interfaces:
|
||||
libavutil/common.h Michael Niedermayer
|
||||
|
||||
Other:
|
||||
bprint Nicolas George
|
||||
bswap.h
|
||||
des Reimar Doeffinger
|
||||
dynarray.h Nicolas George
|
||||
eval.c, eval.h Michael Niedermayer
|
||||
float_dsp Loren Merritt
|
||||
hash Reimar Doeffinger
|
||||
intfloat* Michael Niedermayer
|
||||
integer.c, integer.h Michael Niedermayer
|
||||
lzo Reimar Doeffinger
|
||||
mathematics.c, mathematics.h Michael Niedermayer
|
||||
mem.c, mem.h Michael Niedermayer
opencl.c, opencl.h Wei Gao
opt.c, opt.h Michael Niedermayer
rational.c, rational.h Michael Niedermayer
rc4 Reimar Doeffinger
ripemd.c, ripemd.h James Almer
timecode Clément Bœsch
mathematics.c, mathematics.h Michael Niedermayer
integer.c, integer.h Michael Niedermayer
bswap.h


libavcodec
@@ -99,6 +77,10 @@ Generic Parts:
avcodec.h Michael Niedermayer
utility code:
utils.c Michael Niedermayer
mem.c Michael Niedermayer
opt.c, opt.h Michael Niedermayer
arithmetic expression evaluator:
eval.c Michael Niedermayer
audio and video frame extraction:
parser.c Michael Niedermayer
bitstream reading:
@@ -129,15 +111,11 @@ Generic Parts:
libpostproc/* Michael Niedermayer
table generation:
tableprint.c, tableprint.h Reimar Doeffinger
fixed point FFT:
fft* Zeljko Lukac
Text Subtitles Clément Bœsch

Codecs:
4xm.c Michael Niedermayer
8bps.c Roberto Togni
8svx.c Jaikrishnan Menon
aacenc*, aaccoder.c Rostislav Pehlivanov
aasc.c Kostya Shishkov
ac3* Justin Ruggles
alacenc.c Jaikrishnan Menon
@@ -146,17 +124,14 @@ Codecs:
ass* Aurelien Jacobs
asv* Michael Niedermayer
atrac3* Benjamin Larsson
atrac3plus* Maxim Poliakovski
bgmc.c, bgmc.h Thilo Borgmann
bink.c Kostya Shishkov
binkaudio.c Peter Ross
bmp.c Mans Rullgard, Kostya Shishkov
cavs* Stefan Gehrer
cdxl.c Paul B Mahol
celp_filters.* Vitor Sessak
cdxl.c Paul B Mahol
cinepak.c Roberto Togni
cinepakenc.c Rl / Aetey G.T. AB
ccaption_dec.c Anshul Maheshwari
cljr Alex Beregszaszi
cllc.c Derek Buitenhuis
cook.c, cookdata.h Benjamin Larsson
@@ -166,26 +141,21 @@ Codecs:
dca.c Kostya Shishkov, Benjamin Larsson
dnxhd* Baptiste Coudurier
dpcm.c Mike Melanson
dss_sp.c Oleksij Rempel, Michael Niedermayer
dv.c Roman Shaposhnik
dvbsubdec.c Anshul Maheshwari
dxa.c Kostya Shishkov
dv.c Roman Shaposhnik
eacmv*, eaidct*, eat* Peter Ross
evrc* Paul B Mahol
exif.c, exif.h Thilo Borgmann
ffv1* Michael Niedermayer
ffv1.c Michael Niedermayer
ffwavesynth.c Nicolas George
fic.c Derek Buitenhuis
flac* Justin Ruggles
flashsv* Benjamin Larsson
flicvideo.c Mike Melanson
g722.c Martin Storsjo
g726.c Roman Shaposhnik
gifdec.c Baptiste Coudurier
h264* Loren Merritt, Michael Niedermayer
h261* Michael Niedermayer
h263* Michael Niedermayer
h264* Loren Merritt, Michael Niedermayer
huffyuv* Michael Niedermayer, Christophe Gisquet
huffyuv.c Michael Niedermayer
idcinvideo.c Mike Melanson
imc* Benjamin Larsson
indeo2* Kostya Shishkov
@@ -193,15 +163,13 @@ Codecs:
interplayvideo.c Mike Melanson
ivi* Kostya Shishkov
jacosub* Clément Bœsch
jpeg2000* Nicolas Bertrand
jpeg_ls.c Kostya Shishkov
jvdec.c Peter Ross
kmvc.c Kostya Shishkov
lcl*.c Roberto Togni, Reimar Doeffinger
libcelt_dec.c Nicolas George
libdirac* David Conrad
libgsm.c Michel Bardiaux
libkvazaar.c Arttu Ylä-Outinen
libdirac* David Conrad
libopenjpeg.c Jaikrishnan Menon
libopenjpegenc.c Michael Bradshaw
libschroedinger* David Conrad
@@ -209,11 +177,8 @@ Codecs:
libtheoraenc.c David Conrad
libutvideo* Derek Buitenhuis
libvorbis.c David Conrad
libvpx* James Zern
libx264.c Mans Rullgard, Jason Garrett-Glaser
libx265.c Derek Buitenhuis
libxavs.c Stefan Gehrer
libzvbi-teletextdec.c Marton Balint
libx264.c Mans Rullgard, Jason Garrett-Glaser
loco.c Kostya Shishkov
lzo.h, lzo.c Reimar Doeffinger
mdec.c Michael Niedermayer
@@ -224,13 +189,11 @@ Codecs:
mpc* Kostya Shishkov
mpeg12.c, mpeg12data.h Michael Niedermayer
mpegvideo.c, mpegvideo.h Michael Niedermayer
mqc* Nicolas Bertrand
msmpeg4.c, msmpeg4data.h Michael Niedermayer
msrle.c Mike Melanson
msvideo1.c Mike Melanson
nellymoserdec.c Benjamin Larsson
nuv.c Reimar Doeffinger
nvenc.c Timo Rothenpieler
paf.* Paul B Mahol
pcx.c Ivo van Poorten
pgssubdec.c Reimar Doeffinger
@@ -239,7 +202,6 @@ Codecs:
qdm2.c, qdm2data.h Roberto Togni, Benjamin Larsson
qdrw.c Kostya Shishkov
qpeg.c Kostya Shishkov
qsv* Ivan Uskov
qtrle.c Mike Melanson
ra144.c, ra144.h, ra288.c, ra288.h Roberto Togni
resample2.c Michael Niedermayer
@@ -248,12 +210,11 @@ Codecs:
rtjpeg.c, rtjpeg.h Reimar Doeffinger
rv10.c Michael Niedermayer
rv3* Kostya Shishkov
rv4* Kostya Shishkov, Christophe Gisquet
rv4* Kostya Shishkov
s3tc* Ivo van Poorten
smacker.c Kostya Shishkov
smc.c Mike Melanson
smvjpegdec.c Ash Hughes
snow* Michael Niedermayer, Loren Merritt
snow.c Michael Niedermayer, Loren Merritt
sonic.c Alex Beregszaszi
srt* Aurelien Jacobs
sunrast.c Ivo van Poorten
@@ -266,24 +227,22 @@ Codecs:
truespeech.c Kostya Shishkov
tscc.c Kostya Shishkov
tta.c Alex Beregszaszi, Jaikrishnan Menon
ttaenc.c Paul B Mahol
txd.c Ivo van Poorten
ulti* Kostya Shishkov
v410*.c Derek Buitenhuis
vb.c Kostya Shishkov
vble.c Derek Buitenhuis
vc1* Kostya Shishkov, Christophe Gisquet
vc1* Kostya Shishkov
vcr1.c Michael Niedermayer
vda_h264_dec.c Xidorn Quan
vima.c Paul B Mahol
vmnc.c Kostya Shishkov
vorbisdec.c Denes Balatoni, David Conrad
vorbisenc.c Oded Shimon
vorbis_enc.c Oded Shimon
vorbis_dec.c Denes Balatoni, David Conrad
vp3* Mike Melanson
vp5 Aurelien Jacobs
vp6 Aurelien Jacobs
vp8 David Conrad, Jason Garrett-Glaser, Ronald Bultje
vp9 Ronald Bultje, Clément Bœsch
vqavideo.c Mike Melanson
wavpack.c Kostya Shishkov
wmaprodec.c Sascha Sommer
@@ -292,7 +251,6 @@ Codecs:
wnv1.c Kostya Shishkov
xan.c Mike Melanson
xbm* Paul B Mahol
xface Stefano Sabatini
xl.c Kostya Shishkov
xvmc.c Ivan Kalvachev
xwd* Paul B Mahol
@@ -301,12 +259,11 @@ Codecs:

Hardware acceleration:
crystalhd.c Philip Langdale
dxva2* Hendrik Leppkes, Laurent Aimar
dxva2* Laurent Aimar
libstagefright.cpp Mohamed Naufal
vaapi* Gwenole Beauchesne
vda* Sebastien Zwickert
vdpau* Philip Langdale, Carl Eugen Hoyos
videotoolbox* Sebastien Zwickert
vdpau* Carl Eugen Hoyos


libavdevice
@@ -315,21 +272,11 @@ libavdevice
libavdevice/avdevice.h


avfoundation.m Thilo Borgmann
decklink* Deti Fliegl
dshow.c Roger Pack (CC rogerdpack@gmail.com)
fbdev_enc.c Lukasz Marek
gdigrab.c Roger Pack (CC rogerdpack@gmail.com)
iec61883.c Georg Lippitsch
lavfi Stefano Sabatini
libdc1394.c Roman Shaposhnik
opengl_enc.c Lukasz Marek
pulse_audio_enc.c Lukasz Marek
qtkit.m Thilo Borgmann
sdl Stefano Sabatini
v4l2.c Giorgio Vazzana
v4l2.c Luca Abeni
vfwcap.c Ramiro Polla
xv.c Lukasz Marek
dshow.c Roger Pack

libavfilter
===========
@@ -338,52 +285,11 @@ Generic parts:
graphdump.c Nicolas George

Filters:
f_drawgraph.c Paul B Mahol
af_adelay.c Paul B Mahol
af_aecho.c Paul B Mahol
af_afade.c Paul B Mahol
af_amerge.c Nicolas George
af_aphaser.c Paul B Mahol
af_aresample.c Michael Niedermayer
af_astats.c Paul B Mahol
af_astreamsync.c Nicolas George
af_atempo.c Pavel Koshevoy
af_biquads.c Paul B Mahol
af_chorus.c Paul B Mahol
af_compand.c Paul B Mahol
af_ladspa.c Paul B Mahol
af_pan.c Nicolas George
af_sidechaincompress.c Paul B Mahol
af_silenceremove.c Paul B Mahol
avf_aphasemeter.c Paul B Mahol
avf_avectorscope.c Paul B Mahol
avf_showcqt.c Muhammad Faiz
vf_blend.c Paul B Mahol
vf_colorchannelmixer.c Paul B Mahol
vf_colorbalance.c Paul B Mahol
vf_colorkey.c Timo Rothenpieler
vf_colorlevels.c Paul B Mahol
vf_deband.c Paul B Mahol
vf_dejudder.c Nicholas Robbins
vf_delogo.c Jean Delvare (CC <khali@linux-fr.org>)
vf_drawbox.c/drawgrid Andrey Utkin
vf_extractplanes.c Paul B Mahol
vf_histogram.c Paul B Mahol
vf_hqx.c Clément Bœsch
vf_idet.c Pascal Massimino
vf_il.c Paul B Mahol
vf_lenscorrection.c Daniel Oberhoff
vf_mergeplanes.c Paul B Mahol
vf_neighbor.c Paul B Mahol
vf_psnr.c Paul B Mahol
vf_random.c Paul B Mahol
vf_scale.c Michael Niedermayer
vf_separatefields.c Paul B Mahol
vf_ssim.c Paul B Mahol
vf_stereo3d.c Paul B Mahol
vf_telecine.c Paul B Mahol
vf_yadif.c Michael Niedermayer
vf_zoompan.c Paul B Mahol

Sources:
vsrc_mandelbrot.c Michael Niedermayer
@@ -400,18 +306,14 @@ Generic parts:

Muxers/Demuxers:
4xm.c Mike Melanson
aadec.c Vesselin Bontchev (vesselin.bontchev at yandex dot com)
adtsenc.c Robert Swain
afc.c Paul B Mahol
aiffdec.c Baptiste Coudurier, Matthieu Bouron
aiffenc.c Baptiste Coudurier, Matthieu Bouron
aiff.c Baptiste Coudurier
ape.c Kostya Shishkov
apngdec.c Benoit Fouet
ass* Aurelien Jacobs
astdec.c Paul B Mahol
astenc.c James Almer
avi* Michael Niedermayer
avisynth.c AvxSynth Team (avxsynth.testing at gmail dot com)
avr.c Paul B Mahol
bink.c Peter Ross
brstm.c Paul B Mahol
@@ -419,7 +321,6 @@ Muxers/Demuxers:
cdxl.c Paul B Mahol
crc.c Michael Niedermayer
daud.c Reimar Doeffinger
dss.c Oleksij Rempel, Michael Niedermayer
dtshddec.c Paul B Mahol
dv.c Roman Shaposhnik
dxa.c Kostya Shishkov
@@ -431,13 +332,11 @@ Muxers/Demuxers:
flvdec.c, flvenc.c Michael Niedermayer
gxf.c Reimar Doeffinger
gxfenc.c Baptiste Coudurier
hls.c Anssi Hannula
hls encryption (hlsenc.c) Christian Suloway
idcin.c Mike Melanson
idroqdec.c Mike Melanson
iff.c Jaikrishnan Menon
img2*.c Michael Niedermayer
ipmovie.c Mike Melanson
img2*.c Michael Niedermayer
ircam* Paul B Mahol
iss.c Stefan Gehrer
jacosub* Clément Bœsch
@@ -450,25 +349,23 @@ Muxers/Demuxers:
matroska.c Aurelien Jacobs
matroskadec.c Aurelien Jacobs
matroskaenc.c David Conrad
matroska subtitles (matroskaenc.c) John Peebles
metadata* Aurelien Jacobs
mgsts.c Paul B Mahol
microdvd* Aurelien Jacobs
mgsts.c Paul B Mahol
mm.c Peter Ross
mov.c Michael Niedermayer, Baptiste Coudurier
movenc.c Baptiste Coudurier, Matthieu Bouron
movenc.c Michael Niedermayer, Baptiste Coudurier
mpc.c Kostya Shishkov
mpeg.c Michael Niedermayer
mpegenc.c Michael Niedermayer
mpegts.c Marton Balint
mpegtsenc.c Baptiste Coudurier
mpegts* Baptiste Coudurier
msnwc_tcp.c Ramiro Polla
mtv.c Reynaldo H. Verdejo Pinochet
mxf* Baptiste Coudurier
mxfdec.c Tomas Härdin
nistspheredec.c Paul B Mahol
nsvdec.c Francois Revol
nut* Michael Niedermayer
nut.c Michael Niedermayer
nuv.c Reimar Doeffinger
oggdec.c, oggdec.h David Conrad
oggenc.c Baptiste Coudurier
@@ -485,23 +382,15 @@ Muxers/Demuxers:
rmdec.c, rmenc.c Ronald S. Bultje, Kostya Shishkov
rtmp* Kostya Shishkov
rtp.c, rtpenc.c Martin Storsjo
rtpdec_ac3.* Gilles Chanteperdrix
rtpdec_dv.* Thomas Volkert
rtpdec_h261.*, rtpenc_h261.* Thomas Volkert
rtpdec_hevc.*, rtpenc_hevc.* Thomas Volkert
rtpdec_mpa_robust.* Gilles Chanteperdrix
rtpdec_asf.* Ronald S. Bultje
rtpdec_vp9.c Thomas Volkert
rtpenc_mpv.*, rtpenc_aac.* Martin Storsjo
rtsp.c Luca Barbato
sbgdec.c Nicolas George
sdp.c Martin Storsjo
segafilm.c Mike Melanson
segment.c Stefano Sabatini
siff.c Kostya Shishkov
smacker.c Kostya Shishkov
smjpeg* Paul B Mahol
spdif* Anssi Hannula
srtdec.c Aurelien Jacobs
swf.c Baptiste Coudurier
takdec.c Paul B Mahol
@@ -510,22 +399,16 @@ Muxers/Demuxers:
voc.c Aurelien Jacobs
wav.c Michael Niedermayer
wc3movie.c Mike Melanson
webm dash (matroskaenc.c) Vignesh Venkatasubramanian
webvtt* Matthew J Heaney
westwood.c Mike Melanson
wtv.c Peter Ross
wv.c Kostya Shishkov
wvenc.c Paul B Mahol

Protocols:
async.c Zhang Rui
bluray.c Petri Hintukainen
ftp.c Lukasz Marek
http.c Ronald S. Bultje
libssh.c Lukasz Marek
mms*.c Ronald S. Bultje
udp.c Luca Abeni
icecast.c Marvin Scholz


libswresample
@@ -548,50 +431,40 @@ Operating systems / CPU architectures
Alpha Mans Rullgard, Falk Hueffner
ARM Mans Rullgard
AVR32 Mans Rullgard
MIPS Mans Rullgard, Nedeljko Babic
MIPS Mans Rullgard
Mac OS X / PowerPC Romain Dolbeau, Guillaume Poirier
Amiga / PowerPC Colin Ward
Linux / PowerPC Luca Barbato
Windows MinGW Alex Beregszaszi, Ramiro Polla
Windows Cygwin Victor Paesa
Windows MSVC Matthew Oliver, Hendrik Leppkes
Windows ICL Matthew Oliver
ADI/Blackfin DSP Marc Hoffman
Sparc Roman Shaposhnik
x86 Michael Niedermayer
OS/2 KO Myung-Hun


Releases
========

2.8 Michael Niedermayer
2.7 Michael Niedermayer
2.6 Michael Niedermayer
2.5 Michael Niedermayer
2.4 Michael Niedermayer
1.2 Michael Niedermayer
1.1 Michael Niedermayer
1.0 Michael Niedermayer

If you want to maintain an older release, please contact us


GnuPG Fingerprints of maintainers and contributors
==================================================

Alexander Strasser 1C96 78B7 83CB 8AA7 9AF5 D1EB A7D8 A57B A876 E58F
Anssi Hannula 1A92 FF42 2DD9 8D2E 8AF7 65A9 4278 C520 513D F3CB
Anton Khirnov 6D0C 6625 56F8 65D1 E5F5 814B B50A 1241 C067 07AB
Ash Hughes 694D 43D2 D180 C7C7 6421 ABD3 A641 D0B7 623D 6029
Attila Kinali 11F0 F9A6 A1D2 11F6 C745 D10C 6520 BCDD F2DF E765
Baptiste Coudurier 8D77 134D 20CC 9220 201F C5DB 0AC9 325C 5C1A BAAA
Ben Littler 3EE3 3723 E560 3214 A8CD 4DEB 2CDB FCE7 768C 8D2C
Benoit Fouet B22A 4F4F 43EF 636B BB66 FCDC 0023 AE1E 2985 49C8
Clément Bœsch 52D0 3A82 D445 F194 DB8B 2B16 87EE 2CB8 F4B8 FCF9
Bœsch Clément 52D0 3A82 D445 F194 DB8B 2B16 87EE 2CB8 F4B8 FCF9
Daniel Verkamp 78A6 07ED 782C 653E C628 B8B9 F0EB 8DD8 2F0E 21C7
Diego Biurrun 8227 1E31 B6D9 4994 7427 E220 9CAE D6CC 4757 FCC5
FFmpeg release signing key FCF9 86EA 15E6 E293 A564 4F10 B432 2F04 D676 58D8
Gwenole Beauchesne 2E63 B3A6 3E44 37E2 017D 2704 53C7 6266 B153 99C4
Jaikrishnan Menon 61A1 F09F 01C9 2D45 78E1 C862 25DC 8831 AF70 D368
Jean Delvare 7CA6 9F44 60F1 BDC4 1FD2 C858 A552 6B9B B3CD 4E6A
Justin Ruggles 3136 ECC0 C10D 6C04 5F43 CA29 FCBE CD2A 3787 1EBF
Loren Merritt ABD9 08F4 C920 3F65 D8BE 35D7 1540 DAA7 060F 56DE
Lou Logan 7D68 DC73 CBEF EABB 671A B6CF 621C 2E28 82F8 DC3A
@@ -600,15 +473,11 @@ Michael Niedermayer 9FF2 128B 147E F673 0BAD F133 611E C787 040B 0FAB
Nicolas George 24CE 01CE 9ACC 5CEB 74D8 8D9D B063 D997 36E5 4C93
Panagiotis Issaris 6571 13A3 33D9 3726 F728 AA98 F643 B12E ECF3 E029
Peter Ross A907 E02F A6E5 0CD2 34CD 20D2 6760 79C5 AC40 DD6B
Philip Langdale 5DC5 8D66 5FBA 3A43 18EC 045E F8D6 B194 6A75 682E
Reimar Doeffinger C61D 16E5 9E2C D10C 8958 38A4 0899 A2B9 06D4 D9C7
Reimar Döffinger C61D 16E5 9E2C D10C 8958 38A4 0899 A2B9 06D4 D9C7
Reinhard Tartler 9300 5DC2 7E87 6C37 ED7B CA9A 9808 3544 9453 48A4
Reynaldo H. Verdejo Pinochet 6E27 CD34 170C C78E 4D4F 5F40 C18E 077F 3114 452A
Robert Swain EE7A 56EA 4A81 A7B5 2001 A521 67FA 362D A2FC 3E71
Sascha Sommer 38A0 F88B 868E 9D3A 97D4 D6A0 E823 706F 1E07 0D3C
Stefano Sabatini 0D0B AD6B 5330 BBAD D3D6 6A0C 719C 2839 FC43 2D5F
Stephan Hilb 4F38 0B3A 5F39 B99B F505 E562 8D5C 5554 4E17 8863
Tiancheng "Timothy" Gu 9456 AFC0 814A 8139 E994 8351 7FE6 B095 B582 B0D4
Tim Nicholson 38CF DB09 3ED0 F607 8B67 6CED 0C0B FC44 8B0B FC83
Tomas Härdin A79D 4E3D F38F 763F 91F5 8B33 A01E 8AE0 41BB 2551
Wei Gao 4269 7741 857A 0E60 9EC5 08D2 4744 4EFA 62C1 87B9
Makefile (120 changed lines)
@@ -4,75 +4,63 @@ include config.mak

vpath %.c $(SRC_PATH)
vpath %.cpp $(SRC_PATH)
vpath %.h $(SRC_PATH)
vpath %.m $(SRC_PATH)
vpath %.S $(SRC_PATH)
vpath %.asm $(SRC_PATH)
vpath %.rc $(SRC_PATH)
vpath %.v $(SRC_PATH)
vpath %.texi $(SRC_PATH)
vpath %/fate_config.sh.template $(SRC_PATH)

AVPROGS-$(CONFIG_FFMPEG) += ffmpeg
AVPROGS-$(CONFIG_FFPLAY) += ffplay
AVPROGS-$(CONFIG_FFPROBE) += ffprobe
AVPROGS-$(CONFIG_FFSERVER) += ffserver
PROGS-$(CONFIG_FFMPEG) += ffmpeg
PROGS-$(CONFIG_FFPLAY) += ffplay
PROGS-$(CONFIG_FFPROBE) += ffprobe
PROGS-$(CONFIG_FFSERVER) += ffserver

AVPROGS := $(AVPROGS-yes:%=%$(PROGSSUF)$(EXESUF))
INSTPROGS = $(AVPROGS-yes:%=%$(PROGSSUF)$(EXESUF))
PROGS += $(AVPROGS)
PROGS := $(PROGS-yes:%=%$(PROGSSUF)$(EXESUF))
INSTPROGS = $(PROGS-yes:%=%$(PROGSSUF)$(EXESUF))

AVBASENAMES = ffmpeg ffplay ffprobe ffserver
ALLAVPROGS = $(AVBASENAMES:%=%$(PROGSSUF)$(EXESUF))
ALLAVPROGS_G = $(AVBASENAMES:%=%$(PROGSSUF)_g$(EXESUF))

$(foreach prog,$(AVBASENAMES),$(eval OBJS-$(prog) += cmdutils.o))
$(foreach prog,$(AVBASENAMES),$(eval OBJS-$(prog)-$(CONFIG_OPENCL) += cmdutils_opencl.o))

OBJS-ffmpeg += ffmpeg_opt.o ffmpeg_filter.o
OBJS-ffmpeg-$(HAVE_VDPAU_X11) += ffmpeg_vdpau.o
OBJS-ffmpeg-$(HAVE_DXVA2_LIB) += ffmpeg_dxva2.o
ifndef CONFIG_VIDEOTOOLBOX
OBJS-ffmpeg-$(CONFIG_VDA) += ffmpeg_videotoolbox.o
endif
OBJS-ffmpeg-$(CONFIG_VIDEOTOOLBOX) += ffmpeg_videotoolbox.o
OBJS-ffserver += ffserver_config.o

TESTTOOLS = audiogen videogen rotozoom tiny_psnr tiny_ssim base64
OBJS = cmdutils.o $(EXEOBJS)
OBJS-ffmpeg = ffmpeg_opt.o ffmpeg_filter.o
TESTTOOLS = audiogen videogen rotozoom tiny_psnr base64
HOSTPROGS := $(TESTTOOLS:%=tests/%) doc/print_options
TOOLS = qt-faststart trasher uncoded_frame
TOOLS = qt-faststart trasher
TOOLS-$(CONFIG_ZLIB) += cws2fws

# $(FFLIBS-yes) needs to be in linking order
FFLIBS-$(CONFIG_AVDEVICE) += avdevice
FFLIBS-$(CONFIG_AVFILTER) += avfilter
FFLIBS-$(CONFIG_AVFORMAT) += avformat
FFLIBS-$(CONFIG_AVCODEC) += avcodec
BASENAMES = ffmpeg ffplay ffprobe ffserver
ALLPROGS = $(BASENAMES:%=%$(PROGSSUF)$(EXESUF))
ALLPROGS_G = $(BASENAMES:%=%$(PROGSSUF)_g$(EXESUF))
ALLMANPAGES = $(BASENAMES:%=%.1)

FFLIBS-$(CONFIG_AVDEVICE) += avdevice
FFLIBS-$(CONFIG_AVFILTER) += avfilter
FFLIBS-$(CONFIG_AVFORMAT) += avformat
FFLIBS-$(CONFIG_AVRESAMPLE) += avresample
FFLIBS-$(CONFIG_POSTPROC) += postproc
FFLIBS-$(CONFIG_SWRESAMPLE) += swresample
FFLIBS-$(CONFIG_SWSCALE) += swscale
FFLIBS-$(CONFIG_AVCODEC) += avcodec
FFLIBS-$(CONFIG_POSTPROC) += postproc
FFLIBS-$(CONFIG_SWRESAMPLE)+= swresample
FFLIBS-$(CONFIG_SWSCALE) += swscale

FFLIBS := avutil

DATA_FILES := $(wildcard $(SRC_PATH)/presets/*.ffpreset) $(SRC_PATH)/doc/ffprobe.xsd
EXAMPLES_FILES := $(wildcard $(SRC_PATH)/doc/examples/*.c) $(SRC_PATH)/doc/examples/Makefile $(SRC_PATH)/doc/examples/README

SKIPHEADERS = cmdutils_common_opts.h compat/w32pthreads.h
SKIPHEADERS = cmdutils_common_opts.h

include $(SRC_PATH)/common.mak

FF_EXTRALIBS := $(FFEXTRALIBS)
FF_DEP_LIBS := $(DEP_LIBS)
FF_STATIC_DEP_LIBS := $(STATIC_DEP_LIBS)

all: $(AVPROGS)
all: $(PROGS)

$(PROGS): %$(EXESUF): %_g$(EXESUF)
$(CP) $< $@
$(STRIP) $@

$(TOOLS): %$(EXESUF): %.o $(EXEOBJS)
$(LD) $(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $^ $(ELIBS)
$(LD) $(LDFLAGS) $(LD_O) $^ $(ELIBS)

tools/cws2fws$(EXESUF): ELIBS = $(ZLIB)
tools/uncoded_frame$(EXESUF): $(FF_DEP_LIBS)
tools/uncoded_frame$(EXESUF): ELIBS = $(FF_EXTRALIBS)

config.h: .config
.config: $(wildcard $(FFLIBS:%=$(SRC_PATH)/lib%/all*.c))
@@ -82,10 +70,11 @@ config.h: .config

SUBDIR_VARS := CLEANFILES EXAMPLES FFLIBS HOSTPROGS TESTPROGS TOOLS \
HEADERS ARCH_HEADERS BUILT_HEADERS SKIPHEADERS \
ARMV5TE-OBJS ARMV6-OBJS ARMV8-OBJS VFP-OBJS NEON-OBJS \
ALTIVEC-OBJS MMX-OBJS YASM-OBJS \
MIPSFPU-OBJS MIPSDSPR2-OBJS MIPSDSPR1-OBJS MSA-OBJS \
MMI-OBJS OBJS SLIBOBJS HOSTOBJS TESTOBJS
ARMV5TE-OBJS ARMV6-OBJS VFP-OBJS NEON-OBJS \
ALTIVEC-OBJS VIS-OBJS \
MMX-OBJS YASM-OBJS \
MIPSFPU-OBJS MIPSDSPR2-OBJS MIPSDSPR1-OBJS MIPS32R2-OBJS \
OBJS HOSTOBJS TESTOBJS

define RESET
$(1) :=
@@ -97,16 +86,13 @@ $(foreach V,$(SUBDIR_VARS),$(eval $(call RESET,$(V))))
SUBDIR := $(1)/
include $(SRC_PATH)/$(1)/Makefile
-include $(SRC_PATH)/$(1)/$(ARCH)/Makefile
-include $(SRC_PATH)/$(1)/$(INTRINSICS)/Makefile
include $(SRC_PATH)/library.mak
endef

$(foreach D,$(FFLIBS),$(eval $(call DOSUBDIR,lib$(D))))

include $(SRC_PATH)/doc/Makefile

define DOPROG
OBJS-$(1) += $(1).o $(EXEOBJS) $(OBJS-$(1)-yes)
OBJS-$(1) += $(1).o cmdutils.o $(EXEOBJS)
$(1)$(PROGSSUF)_g$(EXESUF): $$(OBJS-$(1))
$$(OBJS-$(1)): CFLAGS += $(CFLAGS-$(1))
$(1)$(PROGSSUF)_g$(EXESUF): LDFLAGS += $(LDFLAGS-$(1))
@@ -114,16 +100,10 @@ $(1)$(PROGSSUF)_g$(EXESUF): FF_EXTRALIBS += $(LIBS-$(1))
-include $$(OBJS-$(1):.o=.d)
endef

$(foreach P,$(PROGS),$(eval $(call DOPROG,$(P:$(PROGSSUF)$(EXESUF)=))))

ffprobe.o cmdutils.o libavcodec/utils.o libavformat/utils.o libavdevice/avdevice.o libavfilter/avfilter.o libavutil/utils.o libpostproc/postprocess.o libswresample/swresample.o libswscale/utils.o : libavutil/ffversion.h

$(PROGS): %$(PROGSSUF)$(EXESUF): %$(PROGSSUF)_g$(EXESUF)
$(CP) $< $@
$(STRIP) $@
$(foreach P,$(PROGS-yes),$(eval $(call DOPROG,$(P))))

%$(PROGSSUF)_g$(EXESUF): %.o $(FF_DEP_LIBS)
$(LD) $(LDFLAGS) $(LDEXEFLAGS) $(LD_O) $(OBJS-$*) $(FF_EXTRALIBS)
$(LD) $(LDFLAGS) $(LD_O) $(OBJS-$*) $(FF_EXTRALIBS)

OBJDIRS += tools
@@ -135,14 +115,14 @@ GIT_LOG = $(SRC_PATH)/.git/logs/HEAD
.version: $(wildcard $(GIT_LOG)) $(VERSION_SH) config.mak
.version: M=@

libavutil/ffversion.h .version:
$(M)$(VERSION_SH) $(SRC_PATH) libavutil/ffversion.h $(EXTRA_VERSION)
version.h .version:
$(M)$(VERSION_SH) $(SRC_PATH) version.h $(EXTRA_VERSION)
$(Q)touch .version

# force version.sh to run whenever version might have changed
-include .version

ifdef AVPROGS
ifdef PROGS
install: install-progs install-data
endif

@@ -153,7 +133,7 @@ install-libs: install-libs-yes
install-progs-yes:
install-progs-$(CONFIG_SHARED): install-libs

install-progs: install-progs-yes $(AVPROGS)
install-progs: install-progs-yes $(PROGS)
$(Q)mkdir -p "$(BINDIR)"
$(INSTALL) -c -m 755 $(INSTPROGS) "$(BINDIR)"

@@ -165,27 +145,37 @@ install-data: $(DATA_FILES) $(EXAMPLES_FILES)
uninstall: uninstall-libs uninstall-headers uninstall-progs uninstall-data

uninstall-progs:
$(RM) $(addprefix "$(BINDIR)/", $(ALLAVPROGS))
$(RM) $(addprefix "$(BINDIR)/", $(ALLPROGS))

uninstall-data:
$(RM) -r "$(DATADIR)"

clean::
$(RM) $(ALLAVPROGS) $(ALLAVPROGS_G)
$(RM) $(ALLPROGS) $(ALLPROGS_G)
$(RM) $(CLEANSUFFIXES)
$(RM) $(CLEANSUFFIXES:%=tools/%)
$(RM) coverage.info
$(RM) -r coverage-html
$(RM) -rf coverage.info lcov

distclean::
$(RM) $(DISTCLEANSUFFIXES)
$(RM) config.* .config libavutil/avconfig.h .version avversion.h version.h libavutil/ffversion.h libavcodec/codec_names.h
$(RM) config.* .version version.h libavutil/avconfig.h libavcodec/codec_names.h

config:
$(SRC_PATH)/configure $(value FFMPEG_CONFIGURATION)

# Without the sed genthml thinks "libavutil" and "./libavutil" are two different things
coverage.info: $(wildcard *.gcda *.gcno */*.gcda */*.gcno */*/*.gcda */*/*.gcno)
$(Q)lcov -c -d . -b . | sed -e 's#/./#/#g' > $@

coverage-html: coverage.info
$(Q)mkdir -p $@
$(Q)genhtml -o $@ $<
$(Q)touch $@

check: all alltools examples testprogs fate

include $(SRC_PATH)/doc/Makefile
include $(SRC_PATH)/tests/Makefile

$(sort $(OBJDIRS)):
README (new file, 18 lines)
@@ -0,0 +1,18 @@
FFmpeg README
-------------

1) Documentation
----------------

* Read the documentation in the doc/ directory in git.
You can also view it online at http://ffmpeg.org/documentation.html

2) Licensing
------------

* See the LICENSE file.

3) Build and Install
--------------------

* See the INSTALL file.
README.md (42 changed lines)
@@ -1,42 +0,0 @@
FFmpeg README
=============

FFmpeg is a collection of libraries and tools to process multimedia content
such as audio, video, subtitles and related metadata.

## Libraries

* `libavcodec` provides implementation of a wider range of codecs.
* `libavformat` implements streaming protocols, container formats and basic I/O access.
* `libavutil` includes hashers, decompressors and miscellaneous utility functions.
* `libavfilter` provides a mean to alter decoded Audio and Video through chain of filters.
* `libavdevice` provides an abstraction to access capture and playback devices.
* `libswresample` implements audio mixing and resampling routines.
* `libswscale` implements color conversion and scaling routines.
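
(Editorial aside, not part of the deleted README.md: the library split above maps directly onto the public C API. As a rough, hedged sketch of how these libraries are consumed, the fragment below opens an input with libavformat and prints its stream layout; the input path comes from the command line and error handling is kept minimal.)

/* sketch: probe a media file with libavformat and print its streams */
#include <libavformat/avformat.h>

int main(int argc, char **argv)
{
    AVFormatContext *fmt = NULL;

    if (argc < 2)
        return 1;
    av_register_all();                          /* still required in this era of the API */
    if (avformat_open_input(&fmt, argv[1], NULL, NULL) < 0)
        return 1;                               /* could not open or recognize the input */
    if (avformat_find_stream_info(fmt, NULL) < 0) {
        avformat_close_input(&fmt);
        return 1;
    }
    av_dump_format(fmt, 0, argv[1], 0);         /* stream summary, similar to what the CLI tools print */
    avformat_close_input(&fmt);
    return 0;
}
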
## Tools

* [ffmpeg](http://ffmpeg.org/ffmpeg.html) is a command line toolbox to
manipulate, convert and stream multimedia content.
* [ffplay](http://ffmpeg.org/ffplay.html) is a minimalistic multimedia player.
* [ffprobe](http://ffmpeg.org/ffprobe.html) is a simple analysis tool to inspect
multimedia content.
* [ffserver](http://ffmpeg.org/ffserver.html) is a multimedia streaming server
for live broadcasts.
* Additional small tools such as `aviocat`, `ismindex` and `qt-faststart`.

## Documentation

The offline documentation is available in the **doc/** directory.

The online documentation is available in the main [website](http://ffmpeg.org)
and in the [wiki](http://trac.ffmpeg.org).

### Examples

Coding examples are available in the **doc/examples** directory.

## License

FFmpeg codebase is mainly LGPL-licensed with optional components licensed under
GPL. Please refer to the LICENSE file for detailed information.
@@ -1,15 +0,0 @@

┌────────────────────────────────────────┐
│ RELEASE NOTES for FFmpeg 2.8 "Feynman" │
└────────────────────────────────────────┘

The FFmpeg Project proudly presents FFmpeg 2.8 "Feynman", about 3
months after the release of FFmpeg 2.7.

A complete Changelog is available at the root of the project, and the
complete Git history on http://source.ffmpeg.org.

We hope you will like this release as much as we enjoyed working on it, and
as usual, if you have any questions about it, or any FFmpeg related topic,
feel free to join us on the #ffmpeg IRC channel (on irc.freenode.net) or ask
on the mailing-lists.
arch.mak (7 changed lines)
@@ -1,17 +1,16 @@
OBJS-$(HAVE_ARMV5TE) += $(ARMV5TE-OBJS) $(ARMV5TE-OBJS-yes)
OBJS-$(HAVE_ARMV6) += $(ARMV6-OBJS) $(ARMV6-OBJS-yes)
OBJS-$(HAVE_ARMV8) += $(ARMV8-OBJS) $(ARMV8-OBJS-yes)
OBJS-$(HAVE_VFP) += $(VFP-OBJS) $(VFP-OBJS-yes)
OBJS-$(HAVE_NEON) += $(NEON-OBJS) $(NEON-OBJS-yes)

OBJS-$(HAVE_MIPSFPU) += $(MIPSFPU-OBJS) $(MIPSFPU-OBJS-yes)
OBJS-$(HAVE_MIPS32R2) += $(MIPS32R2-OBJS) $(MIPS32R2-OBJS-yes)
OBJS-$(HAVE_MIPSDSPR1) += $(MIPSDSPR1-OBJS) $(MIPSDSPR1-OBJS-yes)
OBJS-$(HAVE_MIPSDSPR2) += $(MIPSDSPR2-OBJS) $(MIPSDSPR2-OBJS-yes)
OBJS-$(HAVE_MSA) += $(MSA-OBJS) $(MSA-OBJS-yes)
OBJS-$(HAVE_MMI) += $(MMI-OBJS) $(MMI-OBJS-yes)

OBJS-$(HAVE_ALTIVEC) += $(ALTIVEC-OBJS) $(ALTIVEC-OBJS-yes)
OBJS-$(HAVE_VSX) += $(VSX-OBJS) $(VSX-OBJS-yes)

OBJS-$(HAVE_VIS) += $(VIS-OBJS) $(VIS-OBJS-yes)

OBJS-$(HAVE_MMX) += $(MMX-OBJS) $(MMX-OBJS-yes)
OBJS-$(HAVE_YASM) += $(YASM-OBJS) $(YASM-OBJS-yes)
cmdutils.c (801 changed lines): file diff suppressed because it is too large.
cmdutils.h (138 changed lines)
@@ -24,13 +24,12 @@

#include <stdint.h>

#include "config.h"
#include "libavcodec/avcodec.h"
#include "libavfilter/avfilter.h"
#include "libavformat/avformat.h"
#include "libswscale/swscale.h"

#ifdef _WIN32
#ifdef __MINGW32__
#undef main /* We don't want SDL to override our main() */
#endif

@@ -44,22 +43,16 @@ extern const char program_name[];
*/
extern const int program_birth_year;

/**
* this year, defined by the program for show_banner()
*/
extern const int this_year;

extern AVCodecContext *avcodec_opts[AVMEDIA_TYPE_NB];
extern AVFormatContext *avformat_opts;
extern AVDictionary *sws_dict;
extern struct SwsContext *sws_opts;
extern AVDictionary *swr_opts;
extern AVDictionary *format_opts, *codec_opts, *resample_opts;
extern int hide_banner;

/**
* Register a program-specific cleanup routine.
*/
void register_exit(void (*cb)(int ret));

/**
* Wraps exit with a program-specific cleanup routine.
*/
void exit_program(int ret) av_noreturn;
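
(Illustrative sketch only, not part of the header: a cmdutils-based tool would typically register its cleanup hook once at startup and route every error path through exit_program(); my_cleanup below is a hypothetical name.)

static void my_cleanup(int ret)
{
    /* free global state, close output files, etc. */
}

int main(int argc, char **argv)
{
    register_exit(my_cleanup);   /* exit_program() will call this before exiting */
    /* ... parse options and do the real work ... */
    exit_program(0);             /* never returns */
}
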

/**
* Initialize the cmdutils option system, in particular
@@ -78,11 +71,6 @@ void uninit_opts(void);
*/
void log_callback_help(void* ptr, int level, const char* fmt, va_list vl);

/**
* Override the cpuflags.
*/
int opt_cpuflags(void *optctx, const char *opt, const char *arg);

/**
* Fallback for options that are not explicitly handled, these will be
* parsed through AVOptions.
@@ -98,14 +86,10 @@ int opt_report(const char *opt);

int opt_max_alloc(void *optctx, const char *opt, const char *arg);

int opt_cpuflags(void *optctx, const char *opt, const char *arg);

int opt_codec_debug(void *optctx, const char *opt, const char *arg);

#if CONFIG_OPENCL
int opt_opencl(void *optctx, const char *opt, const char *arg);

int opt_opencl_bench(void *optctx, const char *opt, const char *arg);
#endif

/**
* Limit the execution time.
*/
@@ -178,8 +162,6 @@ typedef struct OptionDef {
an int containing element count in the array. */
#define OPT_TIME 0x10000
#define OPT_DOUBLE 0x20000
#define OPT_INPUT 0x40000
#define OPT_OUTPUT 0x80000
union {
void *dst_ptr;
int (*func_arg)(void *, const char *, const char *);
@@ -260,11 +242,6 @@ typedef struct OptionGroupDef {
* are terminated by a non-option argument (e.g. ffmpeg output files)
*/
const char *sep;
/**
* Option flags that must be set on each option that is
* applied to this group
*/
int flags;
} OptionGroupDef;

typedef struct OptionGroup {
@@ -277,7 +254,7 @@ typedef struct OptionGroup {
AVDictionary *codec_opts;
AVDictionary *format_opts;
AVDictionary *resample_opts;
AVDictionary *sws_dict;
struct SwsContext *sws_opts;
AVDictionary *swr_opts;
} OptionGroup;

@@ -415,13 +392,6 @@ void show_banner(int argc, char **argv, const OptionDef *options);
*/
int show_version(void *optctx, const char *opt, const char *arg);

/**
* Print the build configuration of the program to stdout. The contents
* depend on the definition of FFMPEG_CONFIGURATION.
* This option processing function does not utilize the arguments.
*/
int show_buildconf(void *optctx, const char *opt, const char *arg);

/**
* Print the license of the program to stdout. The license depends on
* the license of the libraries compiled into the program.
@@ -431,31 +401,10 @@ int show_license(void *optctx, const char *opt, const char *arg);

/**
* Print a listing containing all the formats supported by the
* program (including devices).
* This option processing function does not utilize the arguments.
*/
int show_formats(void *optctx, const char *opt, const char *arg);

/**
* Print a listing containing all the devices supported by the
* program.
* This option processing function does not utilize the arguments.
*/
int show_devices(void *optctx, const char *opt, const char *arg);

#if CONFIG_AVDEVICE
/**
* Print a listing containing audodetected sinks of the output device.
* Device name with options may be passed as an argument to limit results.
*/
int show_sinks(void *optctx, const char *opt, const char *arg);

/**
* Print a listing containing audodetected sources of the input device.
* Device name with options may be passed as an argument to limit results.
*/
int show_sources(void *optctx, const char *opt, const char *arg);
#endif
int show_formats(void *optctx, const char *opt, const char *arg);

/**
* Print a listing containing all the codecs supported by the
@@ -517,18 +466,24 @@ int show_layouts(void *optctx, const char *opt, const char *arg);
*/
int show_sample_fmts(void *optctx, const char *opt, const char *arg);

/**
* Print a listing containing all the color names and values recognized
* by the program.
*/
int show_colors(void *optctx, const char *opt, const char *arg);

/**
* Return a positive value if a line read from standard input
* starts with [yY], otherwise return 0.
*/
int read_yesno(void);
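
(Hypothetical usage sketch, assuming a filename variable: this is the kind of overwrite prompt the tools build on top of read_yesno().)

static void confirm_overwrite(const char *filename)
{
    fprintf(stderr, "File '%s' already exists. Overwrite? [y/N] ", filename);
    if (!read_yesno()) {
        fprintf(stderr, "Not overwriting - exiting\n");
        exit_program(1);
    }
}
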

/**
* Read the file with name filename, and put its content in a newly
* allocated 0-terminated buffer.
*
* @param filename file to read from
* @param bufptr location where pointer to buffer is returned
* @param size location where size of buffer is returned
* @return 0 in case of success, a negative value corresponding to an
* AVERROR error code in case of failure.
*/
int cmdutils_read_file(const char *filename, char **bufptr, size_t *size);
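
(Sketch under assumptions: a helper that loads a small text file via cmdutils_read_file() and logs its size; freeing with av_free() assumes the buffer comes from the av_malloc() family.)

static int log_file_size(const char *name)
{
    char  *buf  = NULL;
    size_t size = 0;
    int    err  = cmdutils_read_file(name, &buf, &size);

    if (err < 0)
        return err;                       /* AVERROR code from the helper */
    av_log(NULL, AV_LOG_INFO, "'%s': %zu bytes\n", name, size);
    av_free(buf);                         /* assumption: av_malloc()-allocated */
    return 0;
}
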

/**
* Get a file corresponding to a preset file.
*
@@ -562,11 +517,52 @@ FILE *get_preset_file(char *filename, size_t filename_size,
*/
void *grow_array(void *array, int elem_size, int *size, int new_size);

#define media_type_string av_get_media_type_string

#define GROW_ARRAY(array, nb_elems)\
array = grow_array(array, sizeof(*array), &nb_elems, nb_elems + 1)
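
(Hedged sketch: a hypothetical caller that appends one element through GROW_ARRAY(), which expands to the grow_array() call above.)

static const char **filter_names = NULL;
static int          nb_filter_names = 0;

static void add_filter_name(const char *name)
{
    GROW_ARRAY(filter_names, nb_filter_names);   /* reallocates and bumps the counter */
    filter_names[nb_filter_names - 1] = name;    /* fill the newly added slot */
}
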
typedef struct FrameBuffer {
uint8_t *base[4];
uint8_t *data[4];
int linesize[4];

int h, w;
enum AVPixelFormat pix_fmt;

int refcount;
struct FrameBuffer **pool; ///< head of the buffer pool
struct FrameBuffer *next;
} FrameBuffer;

/**
* Get a frame from the pool. This is intended to be used as a callback for
* AVCodecContext.get_buffer.
*
* @param s codec context. s->opaque must be a pointer to the head of the
* buffer pool.
* @param frame frame->opaque will be set to point to the FrameBuffer
* containing the frame data.
*/
int codec_get_buffer(AVCodecContext *s, AVFrame *frame);

/**
* A callback to be used for AVCodecContext.release_buffer along with
* codec_get_buffer().
*/
void codec_release_buffer(AVCodecContext *s, AVFrame *frame);

/**
* A callback to be used for AVFilterBuffer.free.
* @param fb buffer to free. fb->priv must be a pointer to the FrameBuffer
* containing the buffer data.
*/
void filter_release_buffer(AVFilterBuffer *fb);

/**
* Free all the buffers in the pool. This must be called after all the
* buffers have been released.
*/
void free_buffer_pool(FrameBuffer **pool);
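
(Wiring sketch, with assumptions: dec_ctx is an opened AVCodecContext, and this tree still uses the pre-refcounted get_buffer/release_buffer callbacks documented above.)

static void use_buffer_pool(AVCodecContext *dec_ctx, FrameBuffer **pool)
{
    dec_ctx->opaque         = pool;              /* head of the buffer pool */
    dec_ctx->get_buffer     = codec_get_buffer;
    dec_ctx->release_buffer = codec_release_buffer;
}
/* once decoding is done and every buffer has been released: free_buffer_pool(pool); */
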

#define GET_PIX_FMT_NAME(pix_fmt)\
const char *name = av_get_pix_fmt_name(pix_fmt);

@@ -585,6 +581,4 @@ void *grow_array(void *array, int elem_size, int *size, int new_size);
char name[128];\
av_get_channel_layout_string(name, sizeof(name), 0, ch_layout);

double get_rotation(AVStream *st);

#endif /* CMDUTILS_H */
|
||||
@@ -4,9 +4,7 @@
|
||||
{ "help" , OPT_EXIT, {.func_arg = show_help}, "show help", "topic" },
|
||||
{ "-help" , OPT_EXIT, {.func_arg = show_help}, "show help", "topic" },
|
||||
{ "version" , OPT_EXIT, {.func_arg = show_version}, "show version" },
|
||||
{ "buildconf" , OPT_EXIT, {.func_arg = show_buildconf}, "show build configuration" },
|
||||
{ "formats" , OPT_EXIT, {.func_arg = show_formats }, "show available formats" },
|
||||
{ "devices" , OPT_EXIT, {.func_arg = show_devices }, "show available devices" },
|
||||
{ "codecs" , OPT_EXIT, {.func_arg = show_codecs }, "show available codecs" },
|
||||
{ "decoders" , OPT_EXIT, {.func_arg = show_decoders }, "show available decoders" },
|
||||
{ "encoders" , OPT_EXIT, {.func_arg = show_encoders }, "show available encoders" },
|
||||
@@ -16,20 +14,8 @@
|
||||
{ "pix_fmts" , OPT_EXIT, {.func_arg = show_pix_fmts }, "show available pixel formats" },
|
||||
{ "layouts" , OPT_EXIT, {.func_arg = show_layouts }, "show standard channel layouts" },
|
||||
{ "sample_fmts", OPT_EXIT, {.func_arg = show_sample_fmts }, "show available audio sample formats" },
|
||||
{ "colors" , OPT_EXIT, {.func_arg = show_colors }, "show available color names" },
|
||||
{ "loglevel" , HAS_ARG, {.func_arg = opt_loglevel}, "set logging level", "loglevel" },
|
||||
{ "v", HAS_ARG, {.func_arg = opt_loglevel}, "set logging level", "loglevel" },
|
||||
{ "loglevel" , HAS_ARG, {.func_arg = opt_loglevel}, "set libav* logging level", "loglevel" },
|
||||
{ "v", HAS_ARG, {.func_arg = opt_loglevel}, "set libav* logging level", "loglevel" },
|
||||
{ "report" , 0, {(void*)opt_report}, "generate a report" },
|
||||
{ "max_alloc" , HAS_ARG, {.func_arg = opt_max_alloc}, "set maximum size of a single allocated block", "bytes" },
|
||||
{ "cpuflags" , HAS_ARG | OPT_EXPERT, { .func_arg = opt_cpuflags }, "force specific cpu flags", "flags" },
|
||||
{ "hide_banner", OPT_BOOL | OPT_EXPERT, {&hide_banner}, "do not show program banner", "hide_banner" },
|
||||
#if CONFIG_OPENCL
|
||||
{ "opencl_bench", OPT_EXIT, {.func_arg = opt_opencl_bench}, "run benchmark on all OpenCL devices and show results" },
|
||||
{ "opencl_options", HAS_ARG, {.func_arg = opt_opencl}, "set OpenCL environment options" },
|
||||
#endif
|
||||
#if CONFIG_AVDEVICE
|
||||
{ "sources" , OPT_EXIT | HAS_ARG, { .func_arg = show_sources },
|
||||
"list sources of the input device", "device" },
|
||||
{ "sinks" , OPT_EXIT | HAS_ARG, { .func_arg = show_sinks },
|
||||
"list sinks of the output device", "device" },
|
||||
#endif
|
||||
{ "cpuflags" , HAS_ARG | OPT_EXPERT, {.func_arg = opt_cpuflags}, "force specific cpu flags", "flags" },
|
||||
|
||||
@@ -1,276 +0,0 @@
|
||||
/*
|
||||
* Copyright (C) 2013 Lenny Wang
|
||||
*
|
||||
* This file is part of FFmpeg.
|
||||
*
|
||||
* FFmpeg is free software; you can redistribute it and/or
|
||||
* modify it under the terms of the GNU Lesser General Public
|
||||
* License as published by the Free Software Foundation; either
|
||||
* version 2.1 of the License, or (at your option) any later version.
|
||||
*
|
||||
* FFmpeg is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
* Lesser General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU Lesser General Public
|
||||
* License along with FFmpeg; if not, write to the Free Software
|
||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
|
||||
*/
|
||||
|
||||
#include "libavutil/opt.h"
|
||||
#include "libavutil/time.h"
|
||||
#include "libavutil/log.h"
|
||||
#include "libavutil/opencl.h"
|
||||
#include "libavutil/avstring.h"
|
||||
#include "cmdutils.h"
|
||||
|
||||
typedef struct {
|
||||
int platform_idx;
|
||||
int device_idx;
|
||||
char device_name[64];
|
||||
int64_t runtime;
|
||||
} OpenCLDeviceBenchmark;
|
||||
|
||||
const char *ocl_bench_source = AV_OPENCL_KERNEL(
|
||||
inline unsigned char clip_uint8(int a)
|
||||
{
|
||||
if (a & (~0xFF))
|
||||
return (-a)>>31;
|
||||
else
|
||||
return a;
|
||||
}
|
||||
|
||||
kernel void unsharp_bench(
|
||||
global unsigned char *src,
|
||||
global unsigned char *dst,
|
||||
global int *mask,
|
||||
int width,
|
||||
int height)
|
||||
{
|
||||
int i, j, local_idx, lc_idx, sum = 0;
|
||||
int2 thread_idx, block_idx, global_idx, lm_idx;
|
||||
thread_idx.x = get_local_id(0);
|
||||
thread_idx.y = get_local_id(1);
|
||||
block_idx.x = get_group_id(0);
|
||||
block_idx.y = get_group_id(1);
|
||||
global_idx.x = get_global_id(0);
|
||||
global_idx.y = get_global_id(1);
|
||||
local uchar data[32][32];
|
||||
local int lc[128];
|
||||
|
||||
for (i = 0; i <= 1; i++) {
|
||||
lm_idx.y = -8 + (block_idx.y + i) * 16 + thread_idx.y;
|
||||
lm_idx.y = lm_idx.y < 0 ? 0 : lm_idx.y;
|
||||
lm_idx.y = lm_idx.y >= height ? height - 1: lm_idx.y;
|
||||
for (j = 0; j <= 1; j++) {
|
||||
lm_idx.x = -8 + (block_idx.x + j) * 16 + thread_idx.x;
|
||||
lm_idx.x = lm_idx.x < 0 ? 0 : lm_idx.x;
|
||||
lm_idx.x = lm_idx.x >= width ? width - 1: lm_idx.x;
|
||||
data[i*16 + thread_idx.y][j*16 + thread_idx.x] = src[lm_idx.y*width + lm_idx.x];
|
||||
}
|
||||
}
|
||||
local_idx = thread_idx.y*16 + thread_idx.x;
|
||||
if (local_idx < 128)
|
||||
lc[local_idx] = mask[local_idx];
|
||||
barrier(CLK_LOCAL_MEM_FENCE);
|
||||
|
||||
\n#pragma unroll\n
|
||||
for (i = -4; i <= 4; i++) {
|
||||
lm_idx.y = 8 + i + thread_idx.y;
|
||||
\n#pragma unroll\n
|
||||
for (j = -4; j <= 4; j++) {
|
||||
lm_idx.x = 8 + j + thread_idx.x;
|
||||
lc_idx = (i + 4)*8 + j + 4;
|
||||
sum += (int)data[lm_idx.y][lm_idx.x] * lc[lc_idx];
|
||||
}
|
||||
}
|
||||
int temp = (int)data[thread_idx.y + 8][thread_idx.x + 8];
|
||||
int res = temp + (((temp - (int)((sum + 1<<15) >> 16))) >> 16);
|
||||
if (global_idx.x < width && global_idx.y < height)
|
||||
dst[global_idx.x + global_idx.y*width] = clip_uint8(res);
|
||||
}
|
||||
);
|
||||
|
||||
#define OCLCHECK(method, ... ) \
|
||||
do { \
|
||||
status = method(__VA_ARGS__); \
|
||||
if (status != CL_SUCCESS) { \
|
||||
av_log(NULL, AV_LOG_ERROR, # method " error '%s'\n", \
|
||||
av_opencl_errstr(status)); \
|
||||
ret = AVERROR_EXTERNAL; \
|
||||
goto end; \
|
||||
} \
|
||||
} while (0)
|
||||
|
||||
#define CREATEBUF(out, flags, size) \
|
||||
do { \
|
||||
out = clCreateBuffer(ext_opencl_env->context, flags, size, NULL, &status); \
|
||||
if (status != CL_SUCCESS) { \
|
||||
av_log(NULL, AV_LOG_ERROR, "Could not create OpenCL buffer\n"); \
|
||||
ret = AVERROR_EXTERNAL; \
|
||||
goto end; \
|
||||
} \
|
||||
} while (0)
|
||||
|
||||
static void fill_rand_int(int *data, int n)
|
||||
{
|
||||
int i;
|
||||
srand(av_gettime());
|
||||
for (i = 0; i < n; i++)
|
||||
data[i] = rand();
|
||||
}
|
||||
|
||||
#define OPENCL_NB_ITER 5
|
||||
static int64_t run_opencl_bench(AVOpenCLExternalEnv *ext_opencl_env)
|
||||
{
|
||||
int i, arg = 0, width = 1920, height = 1088;
|
||||
int64_t start, ret = 0;
|
||||
cl_int status;
|
||||
size_t kernel_len;
|
||||
char *inbuf;
|
||||
int *mask;
|
||||
int buf_size = width * height * sizeof(char);
|
||||
int mask_size = sizeof(uint32_t) * 128;
|
||||
|
||||
cl_mem cl_mask, cl_inbuf, cl_outbuf;
|
||||
cl_kernel kernel = NULL;
|
||||
cl_program program = NULL;
|
||||
size_t local_work_size_2d[2] = {16, 16};
|
||||
size_t global_work_size_2d[2] = {(size_t)width, (size_t)height};
|
||||
|
||||
if (!(inbuf = av_malloc(buf_size)) || !(mask = av_malloc(mask_size))) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Out of memory\n");
|
||||
ret = AVERROR(ENOMEM);
|
||||
goto end;
|
||||
}
|
||||
fill_rand_int((int*)inbuf, buf_size/4);
|
||||
fill_rand_int(mask, mask_size/4);
|
||||
|
||||
CREATEBUF(cl_mask, CL_MEM_READ_ONLY, mask_size);
|
||||
CREATEBUF(cl_inbuf, CL_MEM_READ_ONLY, buf_size);
|
||||
CREATEBUF(cl_outbuf, CL_MEM_READ_WRITE, buf_size);
|
||||
|
||||
kernel_len = strlen(ocl_bench_source);
|
||||
program = clCreateProgramWithSource(ext_opencl_env->context, 1, &ocl_bench_source,
|
||||
&kernel_len, &status);
|
||||
if (status != CL_SUCCESS || !program) {
|
||||
av_log(NULL, AV_LOG_ERROR, "OpenCL unable to create benchmark program\n");
|
||||
ret = AVERROR_EXTERNAL;
|
||||
goto end;
|
||||
}
|
||||
status = clBuildProgram(program, 1, &(ext_opencl_env->device_id), NULL, NULL, NULL);
|
||||
if (status != CL_SUCCESS) {
|
||||
av_log(NULL, AV_LOG_ERROR, "OpenCL unable to build benchmark program\n");
|
||||
ret = AVERROR_EXTERNAL;
|
||||
goto end;
|
||||
}
|
||||
kernel = clCreateKernel(program, "unsharp_bench", &status);
|
||||
if (status != CL_SUCCESS) {
|
||||
av_log(NULL, AV_LOG_ERROR, "OpenCL unable to create benchmark kernel\n");
|
||||
ret = AVERROR_EXTERNAL;
|
||||
goto end;
|
||||
}
|
||||
|
||||
OCLCHECK(clEnqueueWriteBuffer, ext_opencl_env->command_queue, cl_inbuf, CL_TRUE, 0,
|
||||
buf_size, inbuf, 0, NULL, NULL);
|
||||
OCLCHECK(clEnqueueWriteBuffer, ext_opencl_env->command_queue, cl_mask, CL_TRUE, 0,
|
||||
mask_size, mask, 0, NULL, NULL);
|
||||
OCLCHECK(clSetKernelArg, kernel, arg++, sizeof(cl_mem), &cl_inbuf);
|
||||
OCLCHECK(clSetKernelArg, kernel, arg++, sizeof(cl_mem), &cl_outbuf);
|
||||
OCLCHECK(clSetKernelArg, kernel, arg++, sizeof(cl_mem), &cl_mask);
|
||||
OCLCHECK(clSetKernelArg, kernel, arg++, sizeof(cl_int), &width);
|
||||
OCLCHECK(clSetKernelArg, kernel, arg++, sizeof(cl_int), &height);
|
||||
|
||||
start = av_gettime_relative();
|
||||
for (i = 0; i < OPENCL_NB_ITER; i++)
|
||||
OCLCHECK(clEnqueueNDRangeKernel, ext_opencl_env->command_queue, kernel, 2, NULL,
|
||||
global_work_size_2d, local_work_size_2d, 0, NULL, NULL);
|
||||
clFinish(ext_opencl_env->command_queue);
|
||||
ret = (av_gettime_relative() - start)/OPENCL_NB_ITER;
|
||||
end:
|
||||
if (kernel)
|
||||
clReleaseKernel(kernel);
|
||||
if (program)
|
||||
clReleaseProgram(program);
|
||||
if (cl_inbuf)
|
||||
clReleaseMemObject(cl_inbuf);
|
||||
if (cl_outbuf)
|
||||
clReleaseMemObject(cl_outbuf);
|
||||
if (cl_mask)
|
||||
clReleaseMemObject(cl_mask);
|
||||
av_free(inbuf);
|
||||
av_free(mask);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int compare_ocl_device_desc(const void *a, const void *b)
|
||||
{
|
||||
return ((OpenCLDeviceBenchmark*)a)->runtime - ((OpenCLDeviceBenchmark*)b)->runtime;
|
||||
}
|
||||
|
||||
int opt_opencl_bench(void *optctx, const char *opt, const char *arg)
|
||||
{
|
||||
int i, j, nb_devices = 0, count = 0;
|
||||
int64_t score = 0;
|
||||
AVOpenCLDeviceList *device_list;
|
||||
AVOpenCLDeviceNode *device_node = NULL;
|
||||
OpenCLDeviceBenchmark *devices = NULL;
|
||||
cl_platform_id platform;
|
||||
|
||||
av_opencl_get_device_list(&device_list);
|
||||
for (i = 0; i < device_list->platform_num; i++)
|
||||
nb_devices += device_list->platform_node[i]->device_num;
|
||||
if (!nb_devices) {
|
||||
av_log(NULL, AV_LOG_ERROR, "No OpenCL device detected!\n");
|
||||
return AVERROR(EINVAL);
|
||||
}
|
||||
if (!(devices = av_malloc_array(nb_devices, sizeof(OpenCLDeviceBenchmark)))) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Could not allocate buffer\n");
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
|
||||
for (i = 0; i < device_list->platform_num; i++) {
|
||||
for (j = 0; j < device_list->platform_node[i]->device_num; j++) {
|
||||
device_node = device_list->platform_node[i]->device_node[j];
|
||||
platform = device_list->platform_node[i]->platform_id;
|
||||
score = av_opencl_benchmark(device_node, platform, run_opencl_bench);
|
||||
if (score > 0) {
|
||||
devices[count].platform_idx = i;
|
||||
devices[count].device_idx = j;
|
||||
devices[count].runtime = score;
|
||||
av_strlcpy(devices[count].device_name, device_node->device_name,
|
||||
sizeof(devices[count].device_name));
|
||||
count++;
|
||||
}
|
||||
}
|
||||
}
|
||||
qsort(devices, count, sizeof(OpenCLDeviceBenchmark), compare_ocl_device_desc);
|
||||
fprintf(stderr, "platform_idx\tdevice_idx\tdevice_name\truntime\n");
|
||||
for (i = 0; i < count; i++)
|
||||
fprintf(stdout, "%d\t%d\t%s\t%"PRId64"\n",
|
||||
devices[i].platform_idx, devices[i].device_idx,
|
||||
devices[i].device_name, devices[i].runtime);
|
||||
|
||||
av_opencl_free_device_list(&device_list);
|
||||
av_free(devices);
|
||||
return 0;
|
||||
}
|
||||
|
||||
int opt_opencl(void *optctx, const char *opt, const char *arg)
|
||||
{
|
||||
char *key, *value;
|
||||
const char *opts = arg;
|
||||
int ret = 0;
|
||||
while (*opts) {
|
||||
ret = av_opt_get_key_value(&opts, "=", ":", 0, &key, &value);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
ret = av_opencl_set_option(key, value);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
if (*opts)
|
||||
opts++;
|
||||
}
|
||||
return ret;
|
||||
}
|
||||
common.mak (43 changed lines)
@@ -5,20 +5,12 @@
|
||||
# first so "all" becomes default target
|
||||
all: all-yes
|
||||
|
||||
DEFAULT_YASMD=.dbg
|
||||
|
||||
ifeq ($(DBG),1)
|
||||
YASMD=$(DEFAULT_YASMD)
|
||||
else
|
||||
YASMD=
|
||||
endif
|
||||
|
||||
ifndef SUBDIR
|
||||
|
||||
ifndef V
|
||||
Q = @
|
||||
ECHO = printf "$(1)\t%s\n" $(2)
|
||||
BRIEF = CC CXX HOSTCC HOSTLD AS YASM AR LD STRIP CP WINDRES
|
||||
BRIEF = CC CXX HOSTCC HOSTLD AS YASM AR LD STRIP CP
|
||||
SILENT = DEPCC DEPHOSTCC DEPAS DEPYASM RANLIB RM
|
||||
|
||||
MSG = $@
|
||||
@@ -51,7 +43,6 @@ endef
|
||||
COMPILE_C = $(call COMPILE,CC)
|
||||
COMPILE_CXX = $(call COMPILE,CXX)
|
||||
COMPILE_S = $(call COMPILE,AS)
|
||||
COMPILE_HOSTC = $(call COMPILE,HOSTCC)
|
||||
|
||||
%.o: %.c
|
||||
$(COMPILE_C)
|
||||
@@ -59,21 +50,12 @@ COMPILE_HOSTC = $(call COMPILE,HOSTCC)
|
||||
%.o: %.cpp
|
||||
$(COMPILE_CXX)
|
||||
|
||||
%.o: %.m
|
||||
$(COMPILE_C)
|
||||
|
||||
%.s: %.c
|
||||
$(CC) $(CPPFLAGS) $(CFLAGS) -S -o $@ $<
|
||||
|
||||
%.o: %.S
|
||||
$(COMPILE_S)
|
||||
|
||||
%_host.o: %.c
|
||||
$(COMPILE_HOSTC)
|
||||
|
||||
%.o: %.rc
|
||||
$(WINDRES) $(IFLAGS) --preprocessor "$(DEPWINDRES) -E -xc-header -DRC_INVOKED $(CC_DEPFLAGS)" -o $@ $<
|
||||
|
||||
%.i: %.c
|
||||
$(CC) $(CCFLAGS) $(CC_E) $<
|
||||
|
||||
@@ -100,15 +82,14 @@ endif
|
||||
include $(SRC_PATH)/arch.mak
|
||||
|
||||
OBJS += $(OBJS-yes)
|
||||
SLIBOBJS += $(SLIBOBJS-yes)
|
||||
FFLIBS := $($(NAME)_FFLIBS) $(FFLIBS-yes) $(FFLIBS)
|
||||
FFLIBS := $(FFLIBS-yes) $(FFLIBS)
|
||||
TESTPROGS += $(TESTPROGS-yes)
|
||||
|
||||
LDLIBS = $(FFLIBS:%=%$(BUILDSUF))
|
||||
FFEXTRALIBS := $(LDLIBS:%=$(LD_LIB)) $(EXTRALIBS)
|
||||
|
||||
EXAMPLES := $(EXAMPLES:%=$(SUBDIR)%-example$(EXESUF))
|
||||
OBJS := $(sort $(OBJS:%=$(SUBDIR)%))
|
||||
SLIBOBJS := $(sort $(SLIBOBJS:%=$(SUBDIR)%))
|
||||
TESTOBJS := $(TESTOBJS:%=$(SUBDIR)%) $(TESTPROGS:%=$(SUBDIR)%-test.o)
|
||||
TESTPROGS := $(TESTPROGS:%=$(SUBDIR)%-test$(EXESUF))
|
||||
HOSTOBJS := $(HOSTPROGS:%=$(SUBDIR)%.o)
|
||||
@@ -118,11 +99,8 @@ TOOLOBJS := $(TOOLS:%=tools/%.o)
|
||||
TOOLS := $(TOOLS:%=tools/%$(EXESUF))
|
||||
HEADERS += $(HEADERS-yes)
|
||||
|
||||
PATH_LIBNAME = $(foreach NAME,$(1),lib$(NAME)/$($(2)LIBNAME))
|
||||
DEP_LIBS := $(foreach lib,$(FFLIBS),$(call PATH_LIBNAME,$(lib),$(CONFIG_SHARED:yes=S)))
|
||||
STATIC_DEP_LIBS := $(foreach lib,$(FFLIBS),$(call PATH_LIBNAME,$(lib)))
|
||||
DEP_LIBS := $(foreach NAME,$(FFLIBS),lib$(NAME)/$($(CONFIG_SHARED:yes=S)LIBNAME))
|
||||
|
||||
SRC_DIR := $(SRC_PATH)/lib$(NAME)
|
||||
ALLHEADERS := $(subst $(SRC_DIR)/,$(SUBDIR),$(wildcard $(SRC_DIR)/*.h $(SRC_DIR)/$(ARCH)/*.h))
|
||||
SKIPHEADERS += $(ARCH_HEADERS:%=$(ARCH)/%) $(SKIPHEADERS-)
|
||||
SKIPHEADERS := $(SKIPHEADERS:%=$(SUBDIR)%)
|
||||
@@ -133,31 +111,30 @@ checkheaders: $(HOBJS)
|
||||
alltools: $(TOOLS)
|
||||
|
||||
$(HOSTOBJS): %.o: %.c
|
||||
$(COMPILE_HOSTC)
|
||||
$(call COMPILE,HOSTCC)
|
||||
|
||||
$(HOSTPROGS): %$(HOSTEXESUF): %.o
|
||||
$(HOSTLD) $(HOSTLDFLAGS) $(HOSTLD_O) $^ $(HOSTLIBS)
|
||||
$(HOSTLD) $(HOSTLDFLAGS) $(HOSTLD_O) $< $(HOSTLIBS)
|
||||
|
||||
$(OBJS): | $(sort $(dir $(OBJS)))
|
||||
$(HOBJS): | $(sort $(dir $(HOBJS)))
|
||||
$(HOSTOBJS): | $(sort $(dir $(HOSTOBJS)))
|
||||
$(SLIBOBJS): | $(sort $(dir $(SLIBOBJS)))
|
||||
$(TESTOBJS): | $(sort $(dir $(TESTOBJS)))
|
||||
$(TOOLOBJS): | tools
|
||||
|
||||
OBJDIRS := $(OBJDIRS) $(dir $(OBJS) $(HOBJS) $(HOSTOBJS) $(SLIBOBJS) $(TESTOBJS))
|
||||
OBJDIRS := $(OBJDIRS) $(dir $(OBJS) $(HOBJS) $(HOSTOBJS) $(TESTOBJS))
|
||||
|
||||
CLEANSUFFIXES = *.d *.o *~ *.h.c *.map *.ver *.ho *.gcno *.gcda *$(DEFAULT_YASMD).asm
|
||||
CLEANSUFFIXES = *.d *.o *~ *.h.c *.map *.ver *.ho *.gcno *.gcda
|
||||
DISTCLEANSUFFIXES = *.pc
|
||||
LIBSUFFIXES = *.a *.lib *.so *.so.* *.dylib *.dll *.def *.dll.a
|
||||
|
||||
define RULES
|
||||
clean::
|
||||
$(RM) $(OBJS) $(OBJS:.o=.d) $(OBJS:.o=$(DEFAULT_YASMD).d)
|
||||
$(RM) $(OBJS) $(OBJS:.o=.d)
|
||||
$(RM) $(HOSTPROGS)
|
||||
$(RM) $(TOOLS)
|
||||
endef
|
||||
|
||||
$(eval $(RULES))
|
||||
|
||||
-include $(wildcard $(OBJS:.o=.d) $(HOSTOBJS:.o=.d) $(TESTOBJS:.o=.d) $(HOBJS:.o=.d) $(SLIBOBJS:.o=.d)) $(OBJS:.o=$(DEFAULT_YASMD).d)
|
||||
-include $(wildcard $(OBJS:.o=.d) $(HOSTOBJS:.o=.d) $(TESTOBJS:.o=.d) $(HOBJS:.o=.d))
|
||||
|
||||
@@ -1,31 +0,0 @@
|
||||
/*
|
||||
* Work around the class() function in AIX math.h clashing with
|
||||
* identifiers named "class".
|
||||
*
|
||||
* This file is part of FFmpeg.
|
||||
*
|
||||
* FFmpeg is free software; you can redistribute it and/or
|
||||
* modify it under the terms of the GNU Lesser General Public
|
||||
* License as published by the Free Software Foundation; either
|
||||
* version 2.1 of the License, or (at your option) any later version.
|
||||
*
|
||||
* FFmpeg is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
* Lesser General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU Lesser General Public
|
||||
* License along with FFmpeg; if not, write to the Free Software
|
||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
|
||||
*/
|
||||
|
||||
#ifndef FFMPEG_COMPAT_AIX_MATH_H
|
||||
#define FFMPEG_COMPAT_AIX_MATH_H
|
||||
|
||||
#define class class_in_math_h_causes_problems
|
||||
|
||||
#include_next <math.h>
|
||||
|
||||
#undef class
|
||||
|
||||
#endif /* FFMPEG_COMPAT_AIX_MATH_H */
|
||||
@@ -1,882 +0,0 @@
|
||||
// Avisynth C Interface Version 0.20
|
||||
// Copyright 2003 Kevin Atkinson
|
||||
|
||||
// This program is free software; you can redistribute it and/or modify
|
||||
// it under the terms of the GNU General Public License as published by
|
||||
// the Free Software Foundation; either version 2 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// This program is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU General Public License
|
||||
// along with this program; if not, write to the Free Software
|
||||
// Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
|
||||
// MA 02110-1301 USA, or visit
|
||||
// http://www.gnu.org/copyleft/gpl.html .
|
||||
//
|
||||
// As a special exception, I give you permission to link to the
|
||||
// Avisynth C interface with independent modules that communicate with
|
||||
// the Avisynth C interface solely through the interfaces defined in
|
||||
// avisynth_c.h, regardless of the license terms of these independent
|
||||
// modules, and to copy and distribute the resulting combined work
|
||||
// under terms of your choice, provided that every copy of the
|
||||
// combined work is accompanied by a complete copy of the source code
|
||||
// of the Avisynth C interface and Avisynth itself (with the version
|
||||
// used to produce the combined work), being distributed under the
|
||||
// terms of the GNU General Public License plus this exception. An
|
||||
// independent module is a module which is not derived from or based
|
||||
// on Avisynth C Interface, such as 3rd-party filters, import and
|
||||
// export plugins, or graphical user interfaces.
|
||||
|
||||
// NOTE: this is a partial update of the Avisynth C interface to recognize
|
||||
// new color spaces added in Avisynth 2.60. By no means is this document
|
||||
// completely Avisynth 2.60 compliant.
|
||||
|
||||
#ifndef __AVISYNTH_C__
|
||||
#define __AVISYNTH_C__
|
||||
|
||||
#include "avs/config.h"
|
||||
#include "avs/capi.h"
|
||||
#include "avs/types.h"
|
||||
|
||||
|
||||
/////////////////////////////////////////////////////////////////////
|
||||
//
|
||||
// Constants
|
||||
//
|
||||
|
||||
#ifndef __AVISYNTH_6_H__
|
||||
enum { AVISYNTH_INTERFACE_VERSION = 6 };
|
||||
#endif
|
||||
|
||||
enum {AVS_SAMPLE_INT8 = 1<<0,
|
||||
AVS_SAMPLE_INT16 = 1<<1,
|
||||
AVS_SAMPLE_INT24 = 1<<2,
|
||||
AVS_SAMPLE_INT32 = 1<<3,
|
||||
AVS_SAMPLE_FLOAT = 1<<4};
|
||||
|
||||
enum {AVS_PLANAR_Y=1<<0,
|
||||
AVS_PLANAR_U=1<<1,
|
||||
AVS_PLANAR_V=1<<2,
|
||||
AVS_PLANAR_ALIGNED=1<<3,
|
||||
AVS_PLANAR_Y_ALIGNED=AVS_PLANAR_Y|AVS_PLANAR_ALIGNED,
|
||||
AVS_PLANAR_U_ALIGNED=AVS_PLANAR_U|AVS_PLANAR_ALIGNED,
|
||||
AVS_PLANAR_V_ALIGNED=AVS_PLANAR_V|AVS_PLANAR_ALIGNED,
|
||||
AVS_PLANAR_A=1<<4,
|
||||
AVS_PLANAR_R=1<<5,
|
||||
AVS_PLANAR_G=1<<6,
|
||||
AVS_PLANAR_B=1<<7,
|
||||
AVS_PLANAR_A_ALIGNED=AVS_PLANAR_A|AVS_PLANAR_ALIGNED,
|
||||
AVS_PLANAR_R_ALIGNED=AVS_PLANAR_R|AVS_PLANAR_ALIGNED,
|
||||
AVS_PLANAR_G_ALIGNED=AVS_PLANAR_G|AVS_PLANAR_ALIGNED,
|
||||
AVS_PLANAR_B_ALIGNED=AVS_PLANAR_B|AVS_PLANAR_ALIGNED};
|
||||
|
||||
// Colorspace properties.
|
||||
enum {AVS_CS_BGR = 1<<28,
|
||||
AVS_CS_YUV = 1<<29,
|
||||
AVS_CS_INTERLEAVED = 1<<30,
|
||||
AVS_CS_PLANAR = 1<<31,
|
||||
|
||||
AVS_CS_SHIFT_SUB_WIDTH = 0,
|
||||
AVS_CS_SHIFT_SUB_HEIGHT = 8,
|
||||
AVS_CS_SHIFT_SAMPLE_BITS = 16,
|
||||
|
||||
AVS_CS_SUB_WIDTH_MASK = 7 << AVS_CS_SHIFT_SUB_WIDTH,
|
||||
AVS_CS_SUB_WIDTH_1 = 3 << AVS_CS_SHIFT_SUB_WIDTH, // YV24
|
||||
AVS_CS_SUB_WIDTH_2 = 0 << AVS_CS_SHIFT_SUB_WIDTH, // YV12, I420, YV16
|
||||
AVS_CS_SUB_WIDTH_4 = 1 << AVS_CS_SHIFT_SUB_WIDTH, // YUV9, YV411
|
||||
|
||||
AVS_CS_VPLANEFIRST = 1 << 3, // YV12, YV16, YV24, YV411, YUV9
|
||||
AVS_CS_UPLANEFIRST = 1 << 4, // I420
|
||||
|
||||
AVS_CS_SUB_HEIGHT_MASK = 7 << AVS_CS_SHIFT_SUB_HEIGHT,
|
||||
AVS_CS_SUB_HEIGHT_1 = 3 << AVS_CS_SHIFT_SUB_HEIGHT, // YV16, YV24, YV411
|
||||
AVS_CS_SUB_HEIGHT_2 = 0 << AVS_CS_SHIFT_SUB_HEIGHT, // YV12, I420
|
||||
AVS_CS_SUB_HEIGHT_4 = 1 << AVS_CS_SHIFT_SUB_HEIGHT, // YUV9
|
||||
|
||||
AVS_CS_SAMPLE_BITS_MASK = 7 << AVS_CS_SHIFT_SAMPLE_BITS,
|
||||
AVS_CS_SAMPLE_BITS_8 = 0 << AVS_CS_SHIFT_SAMPLE_BITS,
|
||||
AVS_CS_SAMPLE_BITS_16 = 1 << AVS_CS_SHIFT_SAMPLE_BITS,
|
||||
AVS_CS_SAMPLE_BITS_32 = 2 << AVS_CS_SHIFT_SAMPLE_BITS,
|
||||
|
||||
AVS_CS_PLANAR_MASK = AVS_CS_PLANAR | AVS_CS_INTERLEAVED | AVS_CS_YUV | AVS_CS_BGR | AVS_CS_SAMPLE_BITS_MASK | AVS_CS_SUB_HEIGHT_MASK | AVS_CS_SUB_WIDTH_MASK,
|
||||
AVS_CS_PLANAR_FILTER = ~( AVS_CS_VPLANEFIRST | AVS_CS_UPLANEFIRST )};
|
||||
|
||||
// Specific colorformats
|
||||
enum {
|
||||
AVS_CS_UNKNOWN = 0,
|
||||
AVS_CS_BGR24 = 1<<0 | AVS_CS_BGR | AVS_CS_INTERLEAVED,
|
||||
AVS_CS_BGR32 = 1<<1 | AVS_CS_BGR | AVS_CS_INTERLEAVED,
|
||||
AVS_CS_YUY2 = 1<<2 | AVS_CS_YUV | AVS_CS_INTERLEAVED,
|
||||
// AVS_CS_YV12 = 1<<3 Reserved
|
||||
// AVS_CS_I420 = 1<<4 Reserved
|
||||
AVS_CS_RAW32 = 1<<5 | AVS_CS_INTERLEAVED,
|
||||
|
||||
AVS_CS_YV24 = AVS_CS_PLANAR | AVS_CS_YUV | AVS_CS_SAMPLE_BITS_8 | AVS_CS_VPLANEFIRST | AVS_CS_SUB_HEIGHT_1 | AVS_CS_SUB_WIDTH_1, // YVU 4:4:4 planar
|
||||
AVS_CS_YV16 = AVS_CS_PLANAR | AVS_CS_YUV | AVS_CS_SAMPLE_BITS_8 | AVS_CS_VPLANEFIRST | AVS_CS_SUB_HEIGHT_1 | AVS_CS_SUB_WIDTH_2, // YVU 4:2:2 planar
|
||||
AVS_CS_YV12 = AVS_CS_PLANAR | AVS_CS_YUV | AVS_CS_SAMPLE_BITS_8 | AVS_CS_VPLANEFIRST | AVS_CS_SUB_HEIGHT_2 | AVS_CS_SUB_WIDTH_2, // YVU 4:2:0 planar
|
||||
AVS_CS_I420 = AVS_CS_PLANAR | AVS_CS_YUV | AVS_CS_SAMPLE_BITS_8 | AVS_CS_UPLANEFIRST | AVS_CS_SUB_HEIGHT_2 | AVS_CS_SUB_WIDTH_2, // YUV 4:2:0 planar
|
||||
AVS_CS_IYUV = AVS_CS_I420,
|
||||
AVS_CS_YV411 = AVS_CS_PLANAR | AVS_CS_YUV | AVS_CS_SAMPLE_BITS_8 | AVS_CS_VPLANEFIRST | AVS_CS_SUB_HEIGHT_1 | AVS_CS_SUB_WIDTH_4, // YVU 4:1:1 planar
|
||||
AVS_CS_YUV9 = AVS_CS_PLANAR | AVS_CS_YUV | AVS_CS_SAMPLE_BITS_8 | AVS_CS_VPLANEFIRST | AVS_CS_SUB_HEIGHT_4 | AVS_CS_SUB_WIDTH_4, // YVU 4:1:0 planar
|
||||
AVS_CS_Y8 = AVS_CS_PLANAR | AVS_CS_INTERLEAVED | AVS_CS_YUV | AVS_CS_SAMPLE_BITS_8 // Y 4:0:0 planar
|
||||
};
|
||||
|
||||
enum {
|
||||
AVS_IT_BFF = 1<<0,
|
||||
AVS_IT_TFF = 1<<1,
|
||||
AVS_IT_FIELDBASED = 1<<2};
|
||||
|
||||
enum {
|
||||
AVS_FILTER_TYPE=1,
|
||||
AVS_FILTER_INPUT_COLORSPACE=2,
|
||||
AVS_FILTER_OUTPUT_TYPE=9,
|
||||
AVS_FILTER_NAME=4,
|
||||
AVS_FILTER_AUTHOR=5,
|
||||
AVS_FILTER_VERSION=6,
|
||||
AVS_FILTER_ARGS=7,
|
||||
AVS_FILTER_ARGS_INFO=8,
|
||||
AVS_FILTER_ARGS_DESCRIPTION=10,
|
||||
AVS_FILTER_DESCRIPTION=11};
|
||||
|
||||
enum { //SUBTYPES
|
||||
AVS_FILTER_TYPE_AUDIO=1,
|
||||
AVS_FILTER_TYPE_VIDEO=2,
|
||||
AVS_FILTER_OUTPUT_TYPE_SAME=3,
|
||||
AVS_FILTER_OUTPUT_TYPE_DIFFERENT=4};
|
||||
|
||||
enum {
|
||||
// New 2.6 explicitly defined cache hints.
|
||||
AVS_CACHE_NOTHING=10, // Do not cache video.
|
||||
AVS_CACHE_WINDOW=11, // Hard protect up to X frames within a range of X from the current frame N.
AVS_CACHE_GENERIC=12, // LRU cache up to X frames.
AVS_CACHE_FORCE_GENERIC=13, // LRU cache up to X frames, override any previous CACHE_WINDOW.
|
||||
|
||||
AVS_CACHE_GET_POLICY=30, // Get the current policy.
|
||||
AVS_CACHE_GET_WINDOW=31, // Get the current window h_span.
|
||||
AVS_CACHE_GET_RANGE=32, // Get the current generic frame range.
|
||||
|
||||
AVS_CACHE_AUDIO=50, // Explicitly do cache audio, X byte cache.
|
||||
AVS_CACHE_AUDIO_NOTHING=51, // Explicitly do not cache audio.
|
||||
AVS_CACHE_AUDIO_NONE=52, // Audio cache off (auto mode), X byte initial cache.
AVS_CACHE_AUDIO_AUTO=53, // Audio cache on (auto mode), X byte initial cache.
|
||||
|
||||
AVS_CACHE_GET_AUDIO_POLICY=70, // Get the current audio policy.
|
||||
AVS_CACHE_GET_AUDIO_SIZE=71, // Get the current audio cache size.
|
||||
|
||||
AVS_CACHE_PREFETCH_FRAME=100, // Queue request to prefetch frame N.
|
||||
AVS_CACHE_PREFETCH_GO=101, // Action video prefetches.
|
||||
|
||||
AVS_CACHE_PREFETCH_AUDIO_BEGIN=120, // Begin queue request transaction to prefetch audio (take critical section).
|
||||
AVS_CACHE_PREFETCH_AUDIO_STARTLO=121, // Set low 32 bits of start.
|
||||
AVS_CACHE_PREFETCH_AUDIO_STARTHI=122, // Set high 32 bits of start.
|
||||
AVS_CACHE_PREFETCH_AUDIO_COUNT=123, // Set low 32 bits of length.
|
||||
AVS_CACHE_PREFETCH_AUDIO_COMMIT=124, // Enqueue request transaction to prefetch audio (release critical section).
|
||||
AVS_CACHE_PREFETCH_AUDIO_GO=125, // Action audio prefetches.
|
||||
|
||||
AVS_CACHE_GETCHILD_CACHE_MODE=200, // Cache ask Child for desired video cache mode.
|
||||
AVS_CACHE_GETCHILD_CACHE_SIZE=201, // Cache ask Child for desired video cache size.
|
||||
AVS_CACHE_GETCHILD_AUDIO_MODE=202, // Cache ask Child for desired audio cache mode.
|
||||
AVS_CACHE_GETCHILD_AUDIO_SIZE=203, // Cache ask Child for desired audio cache size.
|
||||
|
||||
AVS_CACHE_GETCHILD_COST=220, // Cache ask Child for estimated processing cost.
|
||||
AVS_CACHE_COST_ZERO=221, // Child response of zero cost (ptr arithmetic only).
|
||||
AVS_CACHE_COST_UNIT=222, // Child response of unit cost (less than or equal 1 full frame blit).
|
||||
AVS_CACHE_COST_LOW=223, // Child response of light cost. (Fast)
|
||||
AVS_CACHE_COST_MED=224, // Child response of medium cost. (Real time)
|
||||
AVS_CACHE_COST_HI=225, // Child response of heavy cost. (Slow)
|
||||
|
||||
AVS_CACHE_GETCHILD_THREAD_MODE=240, // Cache ask Child for thread safety.
|
||||
AVS_CACHE_THREAD_UNSAFE=241, // Only 1 thread allowed for all instances. 2.5 filters default!
|
||||
AVS_CACHE_THREAD_CLASS=242, // Only 1 thread allowed for each instance. 2.6 filters default!
|
||||
AVS_CACHE_THREAD_SAFE=243, // Allow all threads in any instance.
|
||||
AVS_CACHE_THREAD_OWN=244, // Safe but limit to 1 thread, internally threaded.
|
||||
|
||||
AVS_CACHE_GETCHILD_ACCESS_COST=260, // Cache ask Child for preferred access pattern.
|
||||
AVS_CACHE_ACCESS_RAND=261, // Filter is access order agnostic.
|
||||
AVS_CACHE_ACCESS_SEQ0=262, // Filter prefers sequential access (low cost)
|
||||
AVS_CACHE_ACCESS_SEQ1=263, // Filter needs sequential access (high cost)
|
||||
};
|
||||
|
||||
#ifdef BUILDING_AVSCORE
|
||||
struct AVS_ScriptEnvironment {
|
||||
IScriptEnvironment * env;
|
||||
const char * error;
|
||||
AVS_ScriptEnvironment(IScriptEnvironment * e = 0)
|
||||
: env(e), error(0) {}
|
||||
};
|
||||
#endif
|
||||
|
||||
typedef struct AVS_Clip AVS_Clip;
|
||||
typedef struct AVS_ScriptEnvironment AVS_ScriptEnvironment;
|
||||
|
||||
/////////////////////////////////////////////////////////////////////
|
||||
//
|
||||
// AVS_VideoInfo
|
||||
//
|
||||
|
||||
// AVS_VideoInfo is laid out identically to VideoInfo
|
||||
typedef struct AVS_VideoInfo {
|
||||
int width, height; // width=0 means no video
|
||||
unsigned fps_numerator, fps_denominator;
|
||||
int num_frames;
|
||||
|
||||
int pixel_type;
|
||||
|
||||
int audio_samples_per_second; // 0 means no audio
|
||||
int sample_type;
|
||||
INT64 num_audio_samples;
|
||||
int nchannels;
|
||||
|
||||
// Imagetype properties
|
||||
|
||||
int image_type;
|
||||
} AVS_VideoInfo;
|
||||
|
||||
// useful functions of the above
|
||||
AVSC_INLINE int avs_has_video(const AVS_VideoInfo * p)
|
||||
{ return (p->width!=0); }
|
||||
|
||||
AVSC_INLINE int avs_has_audio(const AVS_VideoInfo * p)
|
||||
{ return (p->audio_samples_per_second!=0); }
|
||||
|
||||
AVSC_INLINE int avs_is_rgb(const AVS_VideoInfo * p)
|
||||
{ return !!(p->pixel_type&AVS_CS_BGR); }
|
||||
|
||||
AVSC_INLINE int avs_is_rgb24(const AVS_VideoInfo * p)
|
||||
{ return (p->pixel_type&AVS_CS_BGR24)==AVS_CS_BGR24; } // Clear out additional properties
|
||||
|
||||
AVSC_INLINE int avs_is_rgb32(const AVS_VideoInfo * p)
|
||||
{ return (p->pixel_type & AVS_CS_BGR32) == AVS_CS_BGR32 ; }
|
||||
|
||||
AVSC_INLINE int avs_is_yuv(const AVS_VideoInfo * p)
|
||||
{ return !!(p->pixel_type&AVS_CS_YUV ); }
|
||||
|
||||
AVSC_INLINE int avs_is_yuy2(const AVS_VideoInfo * p)
|
||||
{ return (p->pixel_type & AVS_CS_YUY2) == AVS_CS_YUY2; }
|
||||
|
||||
AVSC_API(int, avs_is_yv24)(const AVS_VideoInfo * p);
|
||||
|
||||
AVSC_API(int, avs_is_yv16)(const AVS_VideoInfo * p);
|
||||
|
||||
AVSC_API(int, avs_is_yv12)(const AVS_VideoInfo * p) ;
|
||||
|
||||
AVSC_API(int, avs_is_yv411)(const AVS_VideoInfo * p);
|
||||
|
||||
AVSC_API(int, avs_is_y8)(const AVS_VideoInfo * p);
|
||||
|
||||
AVSC_INLINE int avs_is_property(const AVS_VideoInfo * p, int property)
|
||||
{ return ((p->image_type & property)==property ); }
|
||||
|
||||
AVSC_INLINE int avs_is_planar(const AVS_VideoInfo * p)
|
||||
{ return !!(p->pixel_type & AVS_CS_PLANAR); }
|
||||
|
||||
AVSC_API(int, avs_is_color_space)(const AVS_VideoInfo * p, int c_space);
|
||||
|
||||
AVSC_INLINE int avs_is_field_based(const AVS_VideoInfo * p)
|
||||
{ return !!(p->image_type & AVS_IT_FIELDBASED); }
|
||||
|
||||
AVSC_INLINE int avs_is_parity_known(const AVS_VideoInfo * p)
|
||||
{ return ((p->image_type & AVS_IT_FIELDBASED)&&(p->image_type & (AVS_IT_BFF | AVS_IT_TFF))); }
|
||||
|
||||
AVSC_INLINE int avs_is_bff(const AVS_VideoInfo * p)
|
||||
{ return !!(p->image_type & AVS_IT_BFF); }
|
||||
|
||||
AVSC_INLINE int avs_is_tff(const AVS_VideoInfo * p)
|
||||
{ return !!(p->image_type & AVS_IT_TFF); }
|
||||
|
||||
AVSC_API(int, avs_get_plane_width_subsampling)(const AVS_VideoInfo * p, int plane);
|
||||
|
||||
AVSC_API(int, avs_get_plane_height_subsampling)(const AVS_VideoInfo * p, int plane);
|
||||
|
||||
|
||||
AVSC_API(int, avs_bits_per_pixel)(const AVS_VideoInfo * p);
|
||||
|
||||
AVSC_API(int, avs_bytes_from_pixels)(const AVS_VideoInfo * p, int pixels);
|
||||
|
||||
AVSC_API(int, avs_row_size)(const AVS_VideoInfo * p, int plane);
|
||||
|
||||
AVSC_API(int, avs_bmp_size)(const AVS_VideoInfo * vi);
|
||||
|
||||
AVSC_INLINE int avs_samples_per_second(const AVS_VideoInfo * p)
|
||||
{ return p->audio_samples_per_second; }
|
||||
|
||||
|
||||
AVSC_INLINE int avs_bytes_per_channel_sample(const AVS_VideoInfo * p)
|
||||
{
|
||||
switch (p->sample_type) {
|
||||
case AVS_SAMPLE_INT8: return sizeof(signed char);
|
||||
case AVS_SAMPLE_INT16: return sizeof(signed short);
|
||||
case AVS_SAMPLE_INT24: return 3;
|
||||
case AVS_SAMPLE_INT32: return sizeof(signed int);
|
||||
case AVS_SAMPLE_FLOAT: return sizeof(float);
|
||||
default: return 0;
|
||||
}
|
||||
}
|
||||
AVSC_INLINE int avs_bytes_per_audio_sample(const AVS_VideoInfo * p)
|
||||
{ return p->nchannels*avs_bytes_per_channel_sample(p);}
|
||||
|
||||
AVSC_INLINE INT64 avs_audio_samples_from_frames(const AVS_VideoInfo * p, INT64 frames)
|
||||
{ return ((INT64)(frames) * p->audio_samples_per_second * p->fps_denominator / p->fps_numerator); }
|
||||
|
||||
AVSC_INLINE int avs_frames_from_audio_samples(const AVS_VideoInfo * p, INT64 samples)
|
||||
{ return (int)(samples * (INT64)p->fps_numerator / (INT64)p->fps_denominator / (INT64)p->audio_samples_per_second); }
|
||||
|
||||
AVSC_INLINE INT64 avs_audio_samples_from_bytes(const AVS_VideoInfo * p, INT64 bytes)
|
||||
{ return bytes / avs_bytes_per_audio_sample(p); }
|
||||
|
||||
AVSC_INLINE INT64 avs_bytes_from_audio_samples(const AVS_VideoInfo * p, INT64 samples)
|
||||
{ return samples * avs_bytes_per_audio_sample(p); }
|
||||
|
||||
AVSC_INLINE int avs_audio_channels(const AVS_VideoInfo * p)
|
||||
{ return p->nchannels; }
|
||||
|
||||
AVSC_INLINE int avs_sample_type(const AVS_VideoInfo * p)
|
||||
{ return p->sample_type;}
|
||||
|
||||
// useful mutator
|
||||
AVSC_INLINE void avs_set_property(AVS_VideoInfo * p, int property)
|
||||
{ p->image_type|=property; }
|
||||
|
||||
AVSC_INLINE void avs_clear_property(AVS_VideoInfo * p, int property)
|
||||
{ p->image_type&=~property; }
|
||||
|
||||
AVSC_INLINE void avs_set_field_based(AVS_VideoInfo * p, int isfieldbased)
|
||||
{ if (isfieldbased) p->image_type|=AVS_IT_FIELDBASED; else p->image_type&=~AVS_IT_FIELDBASED; }
|
||||
|
||||
AVSC_INLINE void avs_set_fps(AVS_VideoInfo * p, unsigned numerator, unsigned denominator)
|
||||
{
|
||||
unsigned x=numerator, y=denominator;
|
||||
while (y) { // find gcd
|
||||
unsigned t = x%y; x = y; y = t;
|
||||
}
|
||||
p->fps_numerator = numerator/x;
|
||||
p->fps_denominator = denominator/x;
|
||||
}
|
||||
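/* Illustrative sketch (not part of the original header): avs_set_fps
 * stores the frame rate in lowest terms, so both calls below leave
 * fps_numerator/fps_denominator at 30000/1001. */
static void example_set_ntsc_fps(AVS_VideoInfo *vi)
{
    avs_set_fps(vi, 30000, 1001); /* already reduced              */
    avs_set_fps(vi, 60000, 2002); /* reduced by the gcd loop above */
}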
|
||||
#ifdef AVS_IMPLICIT_FUNCTION_DECLARATION_ERROR
|
||||
AVSC_INLINE int avs_is_same_colorspace(AVS_VideoInfo * x, AVS_VideoInfo * y)
|
||||
{
|
||||
return (x->pixel_type == y->pixel_type)
|
||||
|| (avs_is_yv12(x) && avs_is_yv12(y));
|
||||
}
|
||||
#endif
|
||||
|
||||
/////////////////////////////////////////////////////////////////////
|
||||
//
|
||||
// AVS_VideoFrame
|
||||
//
|
||||
|
||||
// VideoFrameBuffer holds information about a memory block which is used
|
||||
// for video data. For efficiency, instances of this class are not deleted
|
||||
// when the refcount reaches zero; instead they're stored in a linked list
|
||||
// to be reused. The instances are deleted when the corresponding AVS
|
||||
// file is closed.
|
||||
|
||||
// AVS_VideoFrameBuffer is laid out identically to VideoFrameBuffer
|
||||
// DO NOT USE THIS STRUCTURE DIRECTLY
|
||||
typedef struct AVS_VideoFrameBuffer {
|
||||
BYTE * data;
|
||||
int data_size;
|
||||
// sequence_number is incremented every time the buffer is changed, so
|
||||
// that stale views can tell they're no longer valid.
|
||||
volatile long sequence_number;
|
||||
|
||||
volatile long refcount;
|
||||
} AVS_VideoFrameBuffer;
|
||||
|
||||
// VideoFrame holds a "window" into a VideoFrameBuffer.
|
||||
|
||||
// AVS_VideoFrame is laid out identically to IVideoFrame
|
||||
// DO NOT USE THIS STRUCTURE DIRECTLY
|
||||
typedef struct AVS_VideoFrame {
|
||||
volatile long refcount;
|
||||
AVS_VideoFrameBuffer * vfb;
|
||||
int offset, pitch, row_size, height, offsetU, offsetV, pitchUV; // U&V offsets are from top of picture.
|
||||
int row_sizeUV, heightUV;
|
||||
} AVS_VideoFrame;
|
||||
|
||||
// Access functions for AVS_VideoFrame
|
||||
AVSC_API(int, avs_get_pitch_p)(const AVS_VideoFrame * p, int plane);
|
||||
|
||||
#ifdef AVS_IMPLICIT_FUNCTION_DECLARATION_ERROR
|
||||
AVSC_INLINE int avs_get_pitch(const AVS_VideoFrame * p) {
|
||||
return avs_get_pitch_p(p, 0);}
|
||||
#endif
|
||||
|
||||
AVSC_API(int, avs_get_row_size_p)(const AVS_VideoFrame * p, int plane);
|
||||
|
||||
AVSC_INLINE int avs_get_row_size(const AVS_VideoFrame * p) {
|
||||
return p->row_size; }
|
||||
|
||||
AVSC_API(int, avs_get_height_p)(const AVS_VideoFrame * p, int plane);
|
||||
|
||||
AVSC_INLINE int avs_get_height(const AVS_VideoFrame * p) {
|
||||
return p->height;}
|
||||
|
||||
AVSC_API(const BYTE *, avs_get_read_ptr_p)(const AVS_VideoFrame * p, int plane);
|
||||
|
||||
#ifdef AVS_IMPLICIT_FUNCTION_DECLARATION_ERROR
|
||||
AVSC_INLINE const BYTE* avs_get_read_ptr(const AVS_VideoFrame * p) {
|
||||
return avs_get_read_ptr_p(p, 0);}
|
||||
#endif
|
||||
|
||||
AVSC_API(int, avs_is_writable)(const AVS_VideoFrame * p);
|
||||
|
||||
AVSC_API(BYTE *, avs_get_write_ptr_p)(const AVS_VideoFrame * p, int plane);
|
||||
|
||||
#ifdef AVS_IMPLICIT_FUNCTION_DECLARATION_ERROR
|
||||
AVSC_INLINE BYTE* avs_get_write_ptr(const AVS_VideoFrame * p) {
|
||||
return avs_get_write_ptr_p(p, 0);}
|
||||
#endif
|
||||
|
||||
AVSC_API(void, avs_release_video_frame)(AVS_VideoFrame *);
|
||||
// makes a shallow copy of a video frame
|
||||
AVSC_API(AVS_VideoFrame *, avs_copy_video_frame)(AVS_VideoFrame *);
|
||||
|
||||
#ifndef AVSC_NO_DECLSPEC
|
||||
AVSC_INLINE void avs_release_frame(AVS_VideoFrame * f)
|
||||
{avs_release_video_frame(f);}
|
||||
AVSC_INLINE AVS_VideoFrame * avs_copy_frame(AVS_VideoFrame * f)
|
||||
{return avs_copy_video_frame(f);}
|
||||
#endif
|
||||
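/* Illustrative sketch (not part of the original header): copying one
 * plane between frames row by row, respecting each frame's pitch.
 * Assumes <string.h> for memcpy; "dst" must already be writable
 * (see avs_is_writable/avs_make_writable). */
static void example_copy_plane(AVS_VideoFrame *dst, const AVS_VideoFrame *src, int plane)
{
    const BYTE *s = avs_get_read_ptr_p(src, plane);
    BYTE *d       = avs_get_write_ptr_p(dst, plane);
    int src_pitch = avs_get_pitch_p(src, plane);
    int dst_pitch = avs_get_pitch_p(dst, plane);
    int row_size  = avs_get_row_size_p(src, plane);
    int height    = avs_get_height_p(src, plane);
    int y;
    for (y = 0; y < height; y++) {
        memcpy(d, s, row_size); /* copy only the used part of each row */
        s += src_pitch;
        d += dst_pitch;
    }
}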
|
||||
/////////////////////////////////////////////////////////////////////
|
||||
//
|
||||
// AVS_Value
|
||||
//
|
||||
|
||||
// Treat AVS_Value as a fat pointer.  That is, use avs_copy_value
// and avs_release_value appropriately, as you would if AVS_Value were
// a pointer.
|
||||
// To maintain source code compatibility with future versions of the
|
||||
// avisynth_c API don't use the AVS_Value directly. Use the helper
|
||||
// functions below.
|
||||
|
||||
// AVS_Value is laid out identically to AVSValue
|
||||
typedef struct AVS_Value AVS_Value;
|
||||
struct AVS_Value {
|
||||
short type; // 'a'rray, 'c'lip, 'b'ool, 'i'nt, 'f'loat, 's'tring, 'v'oid, or 'l'ong
|
||||
// for some function e'rror
|
||||
short array_size;
|
||||
union {
|
||||
void * clip; // do not use directly, use avs_take_clip
|
||||
char boolean;
|
||||
int integer;
|
||||
float floating_pt;
|
||||
const char * string;
|
||||
const AVS_Value * array;
|
||||
} d;
|
||||
};
|
||||
|
||||
// AVS_Value should be initialized with avs_void.
// It should also be set to avs_void after the value is released
// with avs_copy_value.  Consider it the equivalent of setting
// a pointer to NULL.
static const AVS_Value avs_void = {'v'};
|
||||
|
||||
AVSC_API(void, avs_copy_value)(AVS_Value * dest, AVS_Value src);
|
||||
AVSC_API(void, avs_release_value)(AVS_Value);
|
||||
|
||||
AVSC_INLINE int avs_defined(AVS_Value v) { return v.type != 'v'; }
|
||||
AVSC_INLINE int avs_is_clip(AVS_Value v) { return v.type == 'c'; }
|
||||
AVSC_INLINE int avs_is_bool(AVS_Value v) { return v.type == 'b'; }
|
||||
AVSC_INLINE int avs_is_int(AVS_Value v) { return v.type == 'i'; }
|
||||
AVSC_INLINE int avs_is_float(AVS_Value v) { return v.type == 'f' || v.type == 'i'; }
|
||||
AVSC_INLINE int avs_is_string(AVS_Value v) { return v.type == 's'; }
|
||||
AVSC_INLINE int avs_is_array(AVS_Value v) { return v.type == 'a'; }
|
||||
AVSC_INLINE int avs_is_error(AVS_Value v) { return v.type == 'e'; }
|
||||
|
||||
AVSC_API(AVS_Clip *, avs_take_clip)(AVS_Value, AVS_ScriptEnvironment *);
|
||||
AVSC_API(void, avs_set_to_clip)(AVS_Value *, AVS_Clip *);
|
||||
|
||||
AVSC_INLINE int avs_as_bool(AVS_Value v)
|
||||
{ return v.d.boolean; }
|
||||
AVSC_INLINE int avs_as_int(AVS_Value v)
|
||||
{ return v.d.integer; }
|
||||
AVSC_INLINE const char * avs_as_string(AVS_Value v)
|
||||
{ return avs_is_error(v) || avs_is_string(v) ? v.d.string : 0; }
|
||||
AVSC_INLINE double avs_as_float(AVS_Value v)
|
||||
{ return avs_is_int(v) ? v.d.integer : v.d.floating_pt; }
|
||||
AVSC_INLINE const char * avs_as_error(AVS_Value v)
|
||||
{ return avs_is_error(v) ? v.d.string : 0; }
|
||||
AVSC_INLINE const AVS_Value * avs_as_array(AVS_Value v)
|
||||
{ return v.d.array; }
|
||||
AVSC_INLINE int avs_array_size(AVS_Value v)
|
||||
{ return avs_is_array(v) ? v.array_size : 1; }
|
||||
AVSC_INLINE AVS_Value avs_array_elt(AVS_Value v, int index)
|
||||
{ return avs_is_array(v) ? v.d.array[index] : v; }
|
||||
|
||||
// only use these functions on an AVS_Value that does not already have
|
||||
// an active value. Remember, treat AVS_Value as a fat pointer.
|
||||
AVSC_INLINE AVS_Value avs_new_value_bool(int v0)
|
||||
{ AVS_Value v; v.type = 'b'; v.d.boolean = v0 == 0 ? 0 : 1; return v; }
|
||||
AVSC_INLINE AVS_Value avs_new_value_int(int v0)
|
||||
{ AVS_Value v; v.type = 'i'; v.d.integer = v0; return v; }
|
||||
AVSC_INLINE AVS_Value avs_new_value_string(const char * v0)
|
||||
{ AVS_Value v; v.type = 's'; v.d.string = v0; return v; }
|
||||
AVSC_INLINE AVS_Value avs_new_value_float(float v0)
|
||||
{ AVS_Value v; v.type = 'f'; v.d.floating_pt = v0; return v;}
|
||||
AVSC_INLINE AVS_Value avs_new_value_error(const char * v0)
|
||||
{ AVS_Value v; v.type = 'e'; v.d.string = v0; return v; }
|
||||
#ifndef AVSC_NO_DECLSPEC
|
||||
AVSC_INLINE AVS_Value avs_new_value_clip(AVS_Clip * v0)
|
||||
{ AVS_Value v; avs_set_to_clip(&v, v0); return v; }
|
||||
#endif
|
||||
AVSC_INLINE AVS_Value avs_new_value_array(AVS_Value * v0, int size)
|
||||
{ AVS_Value v; v.type = 'a'; v.d.array = v0; v.array_size = size; return v; }
|
||||
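/* Illustrative sketch (not part of the original header): the fat-pointer
 * discipline described above.  "src" stands for any value the caller
 * already owns. */
static void example_value_lifetime(AVS_Value src)
{
    AVS_Value dst = avs_void;   /* start from the void value            */
    avs_copy_value(&dst, src);  /* take our own reference               */
    /* ... use dst ... */
    avs_release_value(dst);     /* drop the reference when done         */
    dst = avs_void;             /* like setting a freed pointer to NULL */
}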
|
||||
/////////////////////////////////////////////////////////////////////
|
||||
//
|
||||
// AVS_Clip
|
||||
//
|
||||
|
||||
AVSC_API(void, avs_release_clip)(AVS_Clip *);
|
||||
AVSC_API(AVS_Clip *, avs_copy_clip)(AVS_Clip *);
|
||||
|
||||
AVSC_API(const char *, avs_clip_get_error)(AVS_Clip *); // return 0 if no error
|
||||
|
||||
AVSC_API(const AVS_VideoInfo *, avs_get_video_info)(AVS_Clip *);
|
||||
|
||||
AVSC_API(int, avs_get_version)(AVS_Clip *);
|
||||
|
||||
AVSC_API(AVS_VideoFrame *, avs_get_frame)(AVS_Clip *, int n);
|
||||
// The returned video frame must be released with avs_release_video_frame
|
||||
|
||||
AVSC_API(int, avs_get_parity)(AVS_Clip *, int n);
|
||||
// return field parity if field_based, else parity of first field in frame
|
||||
|
||||
AVSC_API(int, avs_get_audio)(AVS_Clip *, void * buf,
|
||||
INT64 start, INT64 count);
|
||||
// start and count are in samples
|
||||
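/* Illustrative sketch (not part of the original header): sizing a buffer
 * for one video frame's worth of audio and fetching it with avs_get_audio.
 * Assumes <stdlib.h> for malloc/free; interpretation of the return value
 * is left to the caller. */
static int example_read_frame_audio(AVS_Clip *clip, const AVS_VideoInfo *vi, int n)
{
    INT64 start = avs_audio_samples_from_frames(vi, n);
    INT64 count = avs_audio_samples_from_frames(vi, n + 1) - start;
    void *buf = malloc((size_t)avs_bytes_from_audio_samples(vi, count));
    int ret;
    if (!buf)
        return -1;
    ret = avs_get_audio(clip, buf, start, count); /* samples, not bytes */
    free(buf);
    return ret;
}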
|
||||
AVSC_API(int, avs_set_cache_hints)(AVS_Clip *,
|
||||
int cachehints, int frame_range);
|
||||
|
||||
// This is the callback type used by avs_add_function
|
||||
typedef AVS_Value (AVSC_CC * AVS_ApplyFunc)
|
||||
(AVS_ScriptEnvironment *, AVS_Value args, void * user_data);
|
||||
|
||||
typedef struct AVS_FilterInfo AVS_FilterInfo;
|
||||
struct AVS_FilterInfo
|
||||
{
|
||||
// these members should not be modified outside of the AVS_ApplyFunc callback
|
||||
AVS_Clip * child;
|
||||
AVS_VideoInfo vi;
|
||||
AVS_ScriptEnvironment * env;
|
||||
AVS_VideoFrame * (AVSC_CC * get_frame)(AVS_FilterInfo *, int n);
|
||||
int (AVSC_CC * get_parity)(AVS_FilterInfo *, int n);
|
||||
int (AVSC_CC * get_audio)(AVS_FilterInfo *, void * buf,
|
||||
INT64 start, INT64 count);
|
||||
int (AVSC_CC * set_cache_hints)(AVS_FilterInfo *, int cachehints,
|
||||
int frame_range);
|
||||
void (AVSC_CC * free_filter)(AVS_FilterInfo *);
|
||||
|
||||
// Should be set whenever there is an error to report.
// It is cleared before any of the above methods are called.
|
||||
const char * error;
|
||||
// this is for storing whatever you need; it may be modified at will
|
||||
void * user_data;
|
||||
};
|
||||
|
||||
// Create a new filter
// fi is set to point to the AVS_FilterInfo so that you can
// modify it once it is initialized.
// store_child should generally be set to true.  If it is not
// set, then ALL methods (the function pointers) must be defined.
// If it is set, then you do not need to worry about freeing the child
// clip.
AVSC_API(AVS_Clip *, avs_new_c_filter)(AVS_ScriptEnvironment * e,
|
||||
AVS_FilterInfo * * fi,
|
||||
AVS_Value child, int store_child);
|
||||
|
||||
/////////////////////////////////////////////////////////////////////
|
||||
//
|
||||
// AVS_ScriptEnvironment
|
||||
//
|
||||
|
||||
// For GetCPUFlags. These are backwards-compatible with those in VirtualDub.
|
||||
enum {
|
||||
/* slowest CPU to support extension */
|
||||
AVS_CPU_FORCE = 0x01, // N/A
|
||||
AVS_CPU_FPU = 0x02, // 386/486DX
|
||||
AVS_CPU_MMX = 0x04, // P55C, K6, PII
|
||||
AVS_CPU_INTEGER_SSE = 0x08, // PIII, Athlon
|
||||
AVS_CPU_SSE = 0x10, // PIII, Athlon XP/MP
|
||||
AVS_CPU_SSE2 = 0x20, // PIV, Hammer
|
||||
AVS_CPU_3DNOW = 0x40, // K6-2
|
||||
AVS_CPU_3DNOW_EXT = 0x80, // Athlon
|
||||
AVS_CPU_X86_64 = 0xA0, // Hammer (note: equiv. to 3DNow + SSE2,
|
||||
// which only Hammer will have anyway)
|
||||
AVS_CPUF_SSE3 = 0x100, // PIV+, K8 Venice
|
||||
AVS_CPUF_SSSE3 = 0x200, // Core 2
|
||||
AVS_CPUF_SSE4 = 0x400, // Penryn, Wolfdale, Yorkfield
|
||||
AVS_CPUF_SSE4_1 = 0x400,
|
||||
//AVS_CPUF_AVX = 0x800, // Sandy Bridge, Bulldozer
|
||||
AVS_CPUF_SSE4_2 = 0x1000, // Nehalem
|
||||
//AVS_CPUF_AVX2 = 0x2000, // Haswell
|
||||
//AVS_CPUF_AVX512 = 0x4000, // Knights Landing
|
||||
};
|
||||
|
||||
|
||||
AVSC_API(const char *, avs_get_error)(AVS_ScriptEnvironment *); // return 0 if no error
|
||||
|
||||
AVSC_API(int, avs_get_cpu_flags)(AVS_ScriptEnvironment *);
|
||||
AVSC_API(int, avs_check_version)(AVS_ScriptEnvironment *, int version);
|
||||
|
||||
AVSC_API(char *, avs_save_string)(AVS_ScriptEnvironment *, const char* s, int length);
|
||||
AVSC_API(char *, avs_sprintf)(AVS_ScriptEnvironment *, const char * fmt, ...);
|
||||
|
||||
AVSC_API(char *, avs_vsprintf)(AVS_ScriptEnvironment *, const char * fmt, void* val);
|
||||
// note: val is really a va_list; I hope everyone typedefs va_list to a pointer
|
||||
|
||||
AVSC_API(int, avs_add_function)(AVS_ScriptEnvironment *,
|
||||
const char * name, const char * params,
|
||||
AVS_ApplyFunc apply, void * user_data);
|
||||
|
||||
AVSC_API(int, avs_function_exists)(AVS_ScriptEnvironment *, const char * name);
|
||||
|
||||
AVSC_API(AVS_Value, avs_invoke)(AVS_ScriptEnvironment *, const char * name,
|
||||
AVS_Value args, const char** arg_names);
|
||||
// The returned value must be released with avs_release_value
|
||||
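/* Illustrative sketch (not part of the original header): passing named
 * arguments to avs_invoke.  The function name "BlankClip" and its
 * parameter names are only an example; arg_names runs parallel to the
 * value array, with NULL entries for positional arguments. */
static AVS_Value example_invoke(AVS_ScriptEnvironment *env)
{
    AVS_Value args[2], arr, res;
    const char *names[2] = { "length", "width" };
    args[0] = avs_new_value_int(240);
    args[1] = avs_new_value_int(640);
    arr = avs_new_value_array(args, 2);
    res = avs_invoke(env, "BlankClip", arr, names);
    /* res may be an error value (avs_is_error) and must eventually be
     * handed to avs_release_value by the caller. */
    return res;
}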
|
||||
AVSC_API(AVS_Value, avs_get_var)(AVS_ScriptEnvironment *, const char* name);
|
||||
// The returned value must be released with avs_release_value
|
||||
|
||||
AVSC_API(int, avs_set_var)(AVS_ScriptEnvironment *, const char* name, AVS_Value val);
|
||||
|
||||
AVSC_API(int, avs_set_global_var)(AVS_ScriptEnvironment *, const char* name, const AVS_Value val);
|
||||
|
||||
//void avs_push_context(AVS_ScriptEnvironment *, int level=0);
|
||||
//void avs_pop_context(AVS_ScriptEnvironment *);
|
||||
|
||||
AVSC_API(AVS_VideoFrame *, avs_new_video_frame_a)(AVS_ScriptEnvironment *,
|
||||
const AVS_VideoInfo * vi, int align);
|
||||
// align should be at least 16
|
||||
|
||||
#ifndef AVSC_NO_DECLSPEC
|
||||
AVSC_INLINE
|
||||
AVS_VideoFrame * avs_new_video_frame(AVS_ScriptEnvironment * env,
|
||||
const AVS_VideoInfo * vi)
|
||||
{return avs_new_video_frame_a(env,vi,FRAME_ALIGN);}
|
||||
|
||||
AVSC_INLINE
|
||||
AVS_VideoFrame * avs_new_frame(AVS_ScriptEnvironment * env,
|
||||
const AVS_VideoInfo * vi)
|
||||
{return avs_new_video_frame_a(env,vi,FRAME_ALIGN);}
|
||||
#endif
|
||||
|
||||
|
||||
AVSC_API(int, avs_make_writable)(AVS_ScriptEnvironment *, AVS_VideoFrame * * pvf);
|
||||
|
||||
AVSC_API(void, avs_bit_blt)(AVS_ScriptEnvironment *, BYTE* dstp, int dst_pitch, const BYTE* srcp, int src_pitch, int row_size, int height);
|
||||
|
||||
typedef void (AVSC_CC *AVS_ShutdownFunc)(void* user_data, AVS_ScriptEnvironment * env);
|
||||
AVSC_API(void, avs_at_exit)(AVS_ScriptEnvironment *, AVS_ShutdownFunc function, void * user_data);
|
||||
|
||||
AVSC_API(AVS_VideoFrame *, avs_subframe)(AVS_ScriptEnvironment *, AVS_VideoFrame * src, int rel_offset, int new_pitch, int new_row_size, int new_height);
|
||||
// The returned video frame must be released
|
||||
|
||||
AVSC_API(int, avs_set_memory_max)(AVS_ScriptEnvironment *, int mem);
|
||||
|
||||
AVSC_API(int, avs_set_working_dir)(AVS_ScriptEnvironment *, const char * newdir);
|
||||
|
||||
// avisynth.dll exports this; it's a way to use it as a library, without
|
||||
// writing an AVS script or without going through AVIFile.
|
||||
AVSC_API(AVS_ScriptEnvironment *, avs_create_script_environment)(int version);
|
||||
|
||||
// this symbol is the entry point for the plugin and must
|
||||
// be defined
|
||||
AVSC_EXPORT
|
||||
const char * AVSC_CC avisynth_c_plugin_init(AVS_ScriptEnvironment* env);
|
||||
|
||||
|
||||
AVSC_API(void, avs_delete_script_environment)(AVS_ScriptEnvironment *);
|
||||
|
||||
|
||||
AVSC_API(AVS_VideoFrame *, avs_subframe_planar)(AVS_ScriptEnvironment *, AVS_VideoFrame * src, int rel_offset, int new_pitch, int new_row_size, int new_height, int rel_offsetU, int rel_offsetV, int new_pitchUV);
|
||||
// The returned video frame must be released
|
||||
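/* Illustrative sketch (not part of the original header): the skeleton of
 * a minimal C plugin.  The filter name "ExamplePassThrough" is a
 * placeholder; with store_child set we do not have to define every
 * callback ourselves (see the comment above avs_new_c_filter). */
static AVS_Value AVSC_CC example_create(AVS_ScriptEnvironment *env,
                                        AVS_Value args, void *user_data)
{
    AVS_FilterInfo *fi;
    AVS_Clip *clip = avs_new_c_filter(env, &fi, avs_array_elt(args, 0), 1);
    AVS_Value v = avs_new_value_clip(clip);
    (void)user_data;
    avs_release_clip(clip); /* the returned value keeps its own reference */
    return v;
}

AVSC_EXPORT
const char * AVSC_CC avisynth_c_plugin_init(AVS_ScriptEnvironment *env)
{
    /* "c" declares a single clip parameter */
    avs_add_function(env, "ExamplePassThrough", "c", example_create, NULL);
    return "example pass-through plugin";
}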
|
||||
#ifdef AVSC_NO_DECLSPEC
|
||||
// use LoadLibrary and related functions to dynamically load Avisynth instead of declspec(dllimport)
|
||||
/*
|
||||
The following functions need to have been declared, probably from windows.h
|
||||
|
||||
void* malloc(size_t)
|
||||
void free(void*);
|
||||
|
||||
HMODULE LoadLibrary(const char*);
|
||||
void* GetProcAddress(HMODULE, const char*);
|
||||
FreeLibrary(HMODULE);
|
||||
*/
|
||||
|
||||
|
||||
typedef struct AVS_Library AVS_Library;
|
||||
|
||||
#define AVSC_DECLARE_FUNC(name) name##_func name
|
||||
|
||||
struct AVS_Library {
|
||||
HMODULE handle;
|
||||
|
||||
AVSC_DECLARE_FUNC(avs_add_function);
|
||||
AVSC_DECLARE_FUNC(avs_at_exit);
|
||||
AVSC_DECLARE_FUNC(avs_bit_blt);
|
||||
AVSC_DECLARE_FUNC(avs_check_version);
|
||||
AVSC_DECLARE_FUNC(avs_clip_get_error);
|
||||
AVSC_DECLARE_FUNC(avs_copy_clip);
|
||||
AVSC_DECLARE_FUNC(avs_copy_value);
|
||||
AVSC_DECLARE_FUNC(avs_copy_video_frame);
|
||||
AVSC_DECLARE_FUNC(avs_create_script_environment);
|
||||
AVSC_DECLARE_FUNC(avs_delete_script_environment);
|
||||
AVSC_DECLARE_FUNC(avs_function_exists);
|
||||
AVSC_DECLARE_FUNC(avs_get_audio);
|
||||
AVSC_DECLARE_FUNC(avs_get_cpu_flags);
|
||||
AVSC_DECLARE_FUNC(avs_get_frame);
|
||||
AVSC_DECLARE_FUNC(avs_get_parity);
|
||||
AVSC_DECLARE_FUNC(avs_get_var);
|
||||
AVSC_DECLARE_FUNC(avs_get_version);
|
||||
AVSC_DECLARE_FUNC(avs_get_video_info);
|
||||
AVSC_DECLARE_FUNC(avs_invoke);
|
||||
AVSC_DECLARE_FUNC(avs_make_writable);
|
||||
AVSC_DECLARE_FUNC(avs_new_c_filter);
|
||||
AVSC_DECLARE_FUNC(avs_new_video_frame_a);
|
||||
AVSC_DECLARE_FUNC(avs_release_clip);
|
||||
AVSC_DECLARE_FUNC(avs_release_value);
|
||||
AVSC_DECLARE_FUNC(avs_release_video_frame);
|
||||
AVSC_DECLARE_FUNC(avs_save_string);
|
||||
AVSC_DECLARE_FUNC(avs_set_cache_hints);
|
||||
AVSC_DECLARE_FUNC(avs_set_global_var);
|
||||
AVSC_DECLARE_FUNC(avs_set_memory_max);
|
||||
AVSC_DECLARE_FUNC(avs_set_to_clip);
|
||||
AVSC_DECLARE_FUNC(avs_set_var);
|
||||
AVSC_DECLARE_FUNC(avs_set_working_dir);
|
||||
AVSC_DECLARE_FUNC(avs_sprintf);
|
||||
AVSC_DECLARE_FUNC(avs_subframe);
|
||||
AVSC_DECLARE_FUNC(avs_subframe_planar);
|
||||
AVSC_DECLARE_FUNC(avs_take_clip);
|
||||
AVSC_DECLARE_FUNC(avs_vsprintf);
|
||||
|
||||
AVSC_DECLARE_FUNC(avs_get_error);
|
||||
AVSC_DECLARE_FUNC(avs_is_yv24);
|
||||
AVSC_DECLARE_FUNC(avs_is_yv16);
|
||||
AVSC_DECLARE_FUNC(avs_is_yv12);
|
||||
AVSC_DECLARE_FUNC(avs_is_yv411);
|
||||
AVSC_DECLARE_FUNC(avs_is_y8);
|
||||
AVSC_DECLARE_FUNC(avs_is_color_space);
|
||||
|
||||
AVSC_DECLARE_FUNC(avs_get_plane_width_subsampling);
|
||||
AVSC_DECLARE_FUNC(avs_get_plane_height_subsampling);
|
||||
AVSC_DECLARE_FUNC(avs_bits_per_pixel);
|
||||
AVSC_DECLARE_FUNC(avs_bytes_from_pixels);
|
||||
AVSC_DECLARE_FUNC(avs_row_size);
|
||||
AVSC_DECLARE_FUNC(avs_bmp_size);
|
||||
AVSC_DECLARE_FUNC(avs_get_pitch_p);
|
||||
AVSC_DECLARE_FUNC(avs_get_row_size_p);
|
||||
AVSC_DECLARE_FUNC(avs_get_height_p);
|
||||
AVSC_DECLARE_FUNC(avs_get_read_ptr_p);
|
||||
AVSC_DECLARE_FUNC(avs_is_writable);
|
||||
AVSC_DECLARE_FUNC(avs_get_write_ptr_p);
|
||||
};
|
||||
|
||||
#undef AVSC_DECLARE_FUNC
|
||||
|
||||
|
||||
AVSC_INLINE AVS_Library * avs_load_library() {
|
||||
AVS_Library *library = (AVS_Library *)malloc(sizeof(AVS_Library));
|
||||
if (library == NULL)
|
||||
return NULL;
|
||||
library->handle = LoadLibrary("avisynth");
|
||||
if (library->handle == NULL)
|
||||
goto fail;
|
||||
|
||||
#define __AVSC_STRINGIFY(x) #x
|
||||
#define AVSC_STRINGIFY(x) __AVSC_STRINGIFY(x)
|
||||
#define AVSC_LOAD_FUNC(name) {\
|
||||
library->name = (name##_func) GetProcAddress(library->handle, AVSC_STRINGIFY(name));\
|
||||
if (library->name == NULL)\
|
||||
goto fail;\
|
||||
}
|
||||
|
||||
AVSC_LOAD_FUNC(avs_add_function);
|
||||
AVSC_LOAD_FUNC(avs_at_exit);
|
||||
AVSC_LOAD_FUNC(avs_bit_blt);
|
||||
AVSC_LOAD_FUNC(avs_check_version);
|
||||
AVSC_LOAD_FUNC(avs_clip_get_error);
|
||||
AVSC_LOAD_FUNC(avs_copy_clip);
|
||||
AVSC_LOAD_FUNC(avs_copy_value);
|
||||
AVSC_LOAD_FUNC(avs_copy_video_frame);
|
||||
AVSC_LOAD_FUNC(avs_create_script_environment);
|
||||
AVSC_LOAD_FUNC(avs_delete_script_environment);
|
||||
AVSC_LOAD_FUNC(avs_function_exists);
|
||||
AVSC_LOAD_FUNC(avs_get_audio);
|
||||
AVSC_LOAD_FUNC(avs_get_cpu_flags);
|
||||
AVSC_LOAD_FUNC(avs_get_frame);
|
||||
AVSC_LOAD_FUNC(avs_get_parity);
|
||||
AVSC_LOAD_FUNC(avs_get_var);
|
||||
AVSC_LOAD_FUNC(avs_get_version);
|
||||
AVSC_LOAD_FUNC(avs_get_video_info);
|
||||
AVSC_LOAD_FUNC(avs_invoke);
|
||||
AVSC_LOAD_FUNC(avs_make_writable);
|
||||
AVSC_LOAD_FUNC(avs_new_c_filter);
|
||||
AVSC_LOAD_FUNC(avs_new_video_frame_a);
|
||||
AVSC_LOAD_FUNC(avs_release_clip);
|
||||
AVSC_LOAD_FUNC(avs_release_value);
|
||||
AVSC_LOAD_FUNC(avs_release_video_frame);
|
||||
AVSC_LOAD_FUNC(avs_save_string);
|
||||
AVSC_LOAD_FUNC(avs_set_cache_hints);
|
||||
AVSC_LOAD_FUNC(avs_set_global_var);
|
||||
AVSC_LOAD_FUNC(avs_set_memory_max);
|
||||
AVSC_LOAD_FUNC(avs_set_to_clip);
|
||||
AVSC_LOAD_FUNC(avs_set_var);
|
||||
AVSC_LOAD_FUNC(avs_set_working_dir);
|
||||
AVSC_LOAD_FUNC(avs_sprintf);
|
||||
AVSC_LOAD_FUNC(avs_subframe);
|
||||
AVSC_LOAD_FUNC(avs_subframe_planar);
|
||||
AVSC_LOAD_FUNC(avs_take_clip);
|
||||
AVSC_LOAD_FUNC(avs_vsprintf);
|
||||
|
||||
AVSC_LOAD_FUNC(avs_get_error);
|
||||
AVSC_LOAD_FUNC(avs_is_yv24);
|
||||
AVSC_LOAD_FUNC(avs_is_yv16);
|
||||
AVSC_LOAD_FUNC(avs_is_yv12);
|
||||
AVSC_LOAD_FUNC(avs_is_yv411);
|
||||
AVSC_LOAD_FUNC(avs_is_y8);
|
||||
AVSC_LOAD_FUNC(avs_is_color_space);
|
||||
|
||||
AVSC_LOAD_FUNC(avs_get_plane_width_subsampling);
|
||||
AVSC_LOAD_FUNC(avs_get_plane_height_subsampling);
|
||||
AVSC_LOAD_FUNC(avs_bits_per_pixel);
|
||||
AVSC_LOAD_FUNC(avs_bytes_from_pixels);
|
||||
AVSC_LOAD_FUNC(avs_row_size);
|
||||
AVSC_LOAD_FUNC(avs_bmp_size);
|
||||
AVSC_LOAD_FUNC(avs_get_pitch_p);
|
||||
AVSC_LOAD_FUNC(avs_get_row_size_p);
|
||||
AVSC_LOAD_FUNC(avs_get_height_p);
|
||||
AVSC_LOAD_FUNC(avs_get_read_ptr_p);
|
||||
AVSC_LOAD_FUNC(avs_is_writable);
|
||||
AVSC_LOAD_FUNC(avs_get_write_ptr_p);
|
||||
|
||||
#undef __AVSC_STRINGIFY
|
||||
#undef AVSC_STRINGIFY
|
||||
#undef AVSC_LOAD_FUNC
|
||||
|
||||
return library;
|
||||
|
||||
fail:
|
||||
free(library);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
AVSC_INLINE void avs_free_library(AVS_Library *library) {
|
||||
if (library == NULL)
|
||||
return;
|
||||
FreeLibrary(library->handle);
|
||||
free(library);
|
||||
}
|
||||
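/* Illustrative sketch (not part of the original header): typical use of
 * the AVSC_NO_DECLSPEC loader above.  The interface-version check and the
 * shutdown order (delete the environment, then unload the library) are
 * assumptions made for the example. */
static int example_dynamic_use(void)
{
    AVS_Library *lib = avs_load_library();
    AVS_ScriptEnvironment *env;
    if (!lib)
        return -1;
    env = lib->avs_create_script_environment(AVISYNTH_INTERFACE_VERSION);
    if (env && !lib->avs_get_error(env)) {
        /* ... lib->avs_invoke(env, ...), lib->avs_get_frame(...), etc. ... */
        lib->avs_delete_script_environment(env);
    }
    avs_free_library(lib);
    return 0;
}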
#endif
|
||||
|
||||
#endif
|
||||
@@ -1,62 +0,0 @@
|
||||
// Avisynth C Interface Version 0.20
|
||||
// Copyright 2003 Kevin Atkinson
|
||||
|
||||
// This program is free software; you can redistribute it and/or modify
|
||||
// it under the terms of the GNU General Public License as published by
|
||||
// the Free Software Foundation; either version 2 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// This program is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU General Public License
|
||||
// along with this program; if not, write to the Free Software
|
||||
// Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA, or visit
|
||||
// http://www.gnu.org/copyleft/gpl.html .
|
||||
//
|
||||
// As a special exception, I give you permission to link to the
|
||||
// Avisynth C interface with independent modules that communicate with
|
||||
// the Avisynth C interface solely through the interfaces defined in
|
||||
// avisynth_c.h, regardless of the license terms of these independent
|
||||
// modules, and to copy and distribute the resulting combined work
|
||||
// under terms of your choice, provided that every copy of the
|
||||
// combined work is accompanied by a complete copy of the source code
|
||||
// of the Avisynth C interface and Avisynth itself (with the version
|
||||
// used to produce the combined work), being distributed under the
|
||||
// terms of the GNU General Public License plus this exception. An
|
||||
// independent module is a module which is not derived from or based
|
||||
// on Avisynth C Interface, such as 3rd-party filters, import and
|
||||
// export plugins, or graphical user interfaces.
|
||||
|
||||
#ifndef AVS_CAPI_H
|
||||
#define AVS_CAPI_H
|
||||
|
||||
#ifdef __cplusplus
|
||||
# define EXTERN_C extern "C"
|
||||
#else
|
||||
# define EXTERN_C
|
||||
#endif
|
||||
|
||||
#ifndef AVSC_USE_STDCALL
|
||||
# define AVSC_CC __cdecl
|
||||
#else
|
||||
# define AVSC_CC __stdcall
|
||||
#endif
|
||||
|
||||
#define AVSC_INLINE static __inline
|
||||
|
||||
#ifdef BUILDING_AVSCORE
|
||||
# define AVSC_EXPORT EXTERN_C
|
||||
# define AVSC_API(ret, name) EXTERN_C __declspec(dllexport) ret AVSC_CC name
|
||||
#else
|
||||
# define AVSC_EXPORT EXTERN_C __declspec(dllexport)
|
||||
# ifndef AVSC_NO_DECLSPEC
|
||||
# define AVSC_API(ret, name) EXTERN_C __declspec(dllimport) ret AVSC_CC name
|
||||
# else
|
||||
# define AVSC_API(ret, name) typedef ret (AVSC_CC *name##_func)
|
||||
# endif
|
||||
#endif
|
||||
|
||||
#endif //AVS_CAPI_H
|
||||
@@ -1,55 +0,0 @@
|
||||
// Avisynth C Interface Version 0.20
|
||||
// Copyright 2003 Kevin Atkinson
|
||||
|
||||
// This program is free software; you can redistribute it and/or modify
|
||||
// it under the terms of the GNU General Public License as published by
|
||||
// the Free Software Foundation; either version 2 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// This program is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU General Public License
|
||||
// along with this program; if not, write to the Free Software
|
||||
// Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA, or visit
|
||||
// http://www.gnu.org/copyleft/gpl.html .
|
||||
//
|
||||
// As a special exception, I give you permission to link to the
|
||||
// Avisynth C interface with independent modules that communicate with
|
||||
// the Avisynth C interface solely through the interfaces defined in
|
||||
// avisynth_c.h, regardless of the license terms of these independent
|
||||
// modules, and to copy and distribute the resulting combined work
|
||||
// under terms of your choice, provided that every copy of the
|
||||
// combined work is accompanied by a complete copy of the source code
|
||||
// of the Avisynth C interface and Avisynth itself (with the version
|
||||
// used to produce the combined work), being distributed under the
|
||||
// terms of the GNU General Public License plus this exception. An
|
||||
// independent module is a module which is not derived from or based
|
||||
// on Avisynth C Interface, such as 3rd-party filters, import and
|
||||
// export plugins, or graphical user interfaces.
|
||||
|
||||
#ifndef AVS_CONFIG_H
|
||||
#define AVS_CONFIG_H
|
||||
|
||||
// Undefine this to get cdecl calling convention
|
||||
#define AVSC_USE_STDCALL 1
|
||||
|
||||
// NOTE TO PLUGIN AUTHORS:
|
||||
// Because FRAME_ALIGN can be substantially higher than the alignment
|
||||
// a plugin actually needs, plugins should not use FRAME_ALIGN to check for
|
||||
// alignment. They should always request the exact alignment value they need.
|
||||
// This is to make sure that plugins work over the widest range of AviSynth
|
||||
// builds possible.
|
||||
#define FRAME_ALIGN 32
|
||||
|
||||
#if defined(_M_AMD64) || defined(__x86_64)
|
||||
# define X86_64
|
||||
#elif defined(_M_IX86) || defined(__i386__)
|
||||
# define X86_32
|
||||
#else
|
||||
# error Unsupported CPU architecture.
|
||||
#endif
|
||||
|
||||
#endif //AVS_CONFIG_H
|
||||
@@ -1,51 +0,0 @@
|
||||
// Avisynth C Interface Version 0.20
|
||||
// Copyright 2003 Kevin Atkinson
|
||||
|
||||
// This program is free software; you can redistribute it and/or modify
|
||||
// it under the terms of the GNU General Public License as published by
|
||||
// the Free Software Foundation; either version 2 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// This program is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU General Public License
|
||||
// along with this program; if not, write to the Free Software
|
||||
// Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA, or visit
|
||||
// http://www.gnu.org/copyleft/gpl.html .
|
||||
//
|
||||
// As a special exception, I give you permission to link to the
|
||||
// Avisynth C interface with independent modules that communicate with
|
||||
// the Avisynth C interface solely through the interfaces defined in
|
||||
// avisynth_c.h, regardless of the license terms of these independent
|
||||
// modules, and to copy and distribute the resulting combined work
|
||||
// under terms of your choice, provided that every copy of the
|
||||
// combined work is accompanied by a complete copy of the source code
|
||||
// of the Avisynth C interface and Avisynth itself (with the version
|
||||
// used to produce the combined work), being distributed under the
|
||||
// terms of the GNU General Public License plus this exception. An
|
||||
// independent module is a module which is not derived from or based
|
||||
// on Avisynth C Interface, such as 3rd-party filters, import and
|
||||
// export plugins, or graphical user interfaces.
|
||||
|
||||
#ifndef AVS_TYPES_H
|
||||
#define AVS_TYPES_H
|
||||
|
||||
// Define all types necessary for interfacing with avisynth.dll
|
||||
|
||||
// Raster types used by VirtualDub & Avisynth
|
||||
typedef unsigned int Pixel32;
|
||||
typedef unsigned char BYTE;
|
||||
|
||||
// Audio Sample information
|
||||
typedef float SFLOAT;
|
||||
|
||||
#ifdef __GNUC__
|
||||
typedef long long int INT64;
|
||||
#else
|
||||
typedef __int64 INT64;
|
||||
#endif
|
||||
|
||||
#endif //AVS_TYPES_H
|
||||
@@ -1,728 +0,0 @@
|
||||
// Avisynth C Interface Version 0.20
|
||||
// Copyright 2003 Kevin Atkinson
|
||||
|
||||
// This program is free software; you can redistribute it and/or modify
|
||||
// it under the terms of the GNU General Public License as published by
|
||||
// the Free Software Foundation; either version 2 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// This program is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU General Public License
|
||||
// along with this program; if not, write to the Free Software
|
||||
// Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
|
||||
// MA 02110-1301 USA, or visit
|
||||
// http://www.gnu.org/copyleft/gpl.html .
|
||||
//
|
||||
// As a special exception, I give you permission to link to the
|
||||
// Avisynth C interface with independent modules that communicate with
|
||||
// the Avisynth C interface solely through the interfaces defined in
|
||||
// avisynth_c.h, regardless of the license terms of these independent
|
||||
// modules, and to copy and distribute the resulting combined work
|
||||
// under terms of your choice, provided that every copy of the
|
||||
// combined work is accompanied by a complete copy of the source code
|
||||
// of the Avisynth C interface and Avisynth itself (with the version
|
||||
// used to produce the combined work), being distributed under the
|
||||
// terms of the GNU General Public License plus this exception. An
|
||||
// independent module is a module which is not derived from or based
|
||||
// on Avisynth C Interface, such as 3rd-party filters, import and
|
||||
// export plugins, or graphical user interfaces.
|
||||
|
||||
#ifndef __AVXSYNTH_C__
|
||||
#define __AVXSYNTH_C__
|
||||
|
||||
#include "windowsPorts/windows2linux.h"
|
||||
#include <stdarg.h>
|
||||
|
||||
#ifdef __cplusplus
|
||||
# define EXTERN_C extern "C"
|
||||
#else
|
||||
# define EXTERN_C
|
||||
#endif
|
||||
|
||||
#define AVSC_USE_STDCALL 1
|
||||
|
||||
#ifndef AVSC_USE_STDCALL
|
||||
# define AVSC_CC __cdecl
|
||||
#else
|
||||
# define AVSC_CC __stdcall
|
||||
#endif
|
||||
|
||||
#define AVSC_INLINE static __inline
|
||||
|
||||
#ifdef AVISYNTH_C_EXPORTS
|
||||
# define AVSC_EXPORT EXTERN_C
|
||||
# define AVSC_API(ret, name) EXTERN_C __declspec(dllexport) ret AVSC_CC name
|
||||
#else
|
||||
# define AVSC_EXPORT EXTERN_C __declspec(dllexport)
|
||||
# ifndef AVSC_NO_DECLSPEC
|
||||
# define AVSC_API(ret, name) EXTERN_C __declspec(dllimport) ret AVSC_CC name
|
||||
# else
|
||||
# define AVSC_API(ret, name) typedef ret (AVSC_CC *name##_func)
|
||||
# endif
|
||||
#endif
|
||||
|
||||
#ifdef __GNUC__
|
||||
typedef long long int INT64;
|
||||
#else
|
||||
typedef __int64 INT64;
|
||||
#endif
|
||||
|
||||
|
||||
/////////////////////////////////////////////////////////////////////
|
||||
//
|
||||
// Constants
|
||||
//
|
||||
|
||||
#ifndef __AVXSYNTH_H__
|
||||
enum { AVISYNTH_INTERFACE_VERSION = 3 };
|
||||
#endif
|
||||
|
||||
enum {AVS_SAMPLE_INT8 = 1<<0,
|
||||
AVS_SAMPLE_INT16 = 1<<1,
|
||||
AVS_SAMPLE_INT24 = 1<<2,
|
||||
AVS_SAMPLE_INT32 = 1<<3,
|
||||
AVS_SAMPLE_FLOAT = 1<<4};
|
||||
|
||||
enum {AVS_PLANAR_Y=1<<0,
|
||||
AVS_PLANAR_U=1<<1,
|
||||
AVS_PLANAR_V=1<<2,
|
||||
AVS_PLANAR_ALIGNED=1<<3,
|
||||
AVS_PLANAR_Y_ALIGNED=AVS_PLANAR_Y|AVS_PLANAR_ALIGNED,
|
||||
AVS_PLANAR_U_ALIGNED=AVS_PLANAR_U|AVS_PLANAR_ALIGNED,
|
||||
AVS_PLANAR_V_ALIGNED=AVS_PLANAR_V|AVS_PLANAR_ALIGNED};
|
||||
|
||||
// Colorspace properties.
|
||||
enum {AVS_CS_BGR = 1<<28,
|
||||
AVS_CS_YUV = 1<<29,
|
||||
AVS_CS_INTERLEAVED = 1<<30,
|
||||
AVS_CS_PLANAR = 1<<31};
|
||||
|
||||
// Specific colorformats
|
||||
enum {
|
||||
AVS_CS_UNKNOWN = 0,
|
||||
AVS_CS_BGR24 = 1<<0 | AVS_CS_BGR | AVS_CS_INTERLEAVED,
|
||||
AVS_CS_BGR32 = 1<<1 | AVS_CS_BGR | AVS_CS_INTERLEAVED,
|
||||
AVS_CS_YUY2 = 1<<2 | AVS_CS_YUV | AVS_CS_INTERLEAVED,
|
||||
AVS_CS_YV12 = 1<<3 | AVS_CS_YUV | AVS_CS_PLANAR, // y-v-u, planar
|
||||
AVS_CS_I420 = 1<<4 | AVS_CS_YUV | AVS_CS_PLANAR, // y-u-v, planar
|
||||
AVS_CS_IYUV = 1<<4 | AVS_CS_YUV | AVS_CS_PLANAR // same as above
|
||||
};
|
||||
|
||||
enum {
|
||||
AVS_IT_BFF = 1<<0,
|
||||
AVS_IT_TFF = 1<<1,
|
||||
AVS_IT_FIELDBASED = 1<<2};
|
||||
|
||||
enum {
|
||||
AVS_FILTER_TYPE=1,
|
||||
AVS_FILTER_INPUT_COLORSPACE=2,
|
||||
AVS_FILTER_OUTPUT_TYPE=9,
|
||||
AVS_FILTER_NAME=4,
|
||||
AVS_FILTER_AUTHOR=5,
|
||||
AVS_FILTER_VERSION=6,
|
||||
AVS_FILTER_ARGS=7,
|
||||
AVS_FILTER_ARGS_INFO=8,
|
||||
AVS_FILTER_ARGS_DESCRIPTION=10,
|
||||
AVS_FILTER_DESCRIPTION=11};
|
||||
|
||||
enum { //SUBTYPES
|
||||
AVS_FILTER_TYPE_AUDIO=1,
|
||||
AVS_FILTER_TYPE_VIDEO=2,
|
||||
AVS_FILTER_OUTPUT_TYPE_SAME=3,
|
||||
AVS_FILTER_OUTPUT_TYPE_DIFFERENT=4};
|
||||
|
||||
enum {
|
||||
AVS_CACHE_NOTHING=0,
|
||||
AVS_CACHE_RANGE=1,
|
||||
AVS_CACHE_ALL=2,
|
||||
AVS_CACHE_AUDIO=3,
|
||||
AVS_CACHE_AUDIO_NONE=4,
|
||||
AVS_CACHE_AUDIO_AUTO=5
|
||||
};
|
||||
|
||||
#define AVS_FRAME_ALIGN 16
|
||||
|
||||
typedef struct AVS_Clip AVS_Clip;
|
||||
typedef struct AVS_ScriptEnvironment AVS_ScriptEnvironment;
|
||||
|
||||
/////////////////////////////////////////////////////////////////////
|
||||
//
|
||||
// AVS_VideoInfo
|
||||
//
|
||||
|
||||
// AVS_VideoInfo is laid out identically to VideoInfo
|
||||
typedef struct AVS_VideoInfo {
|
||||
int width, height; // width=0 means no video
|
||||
unsigned fps_numerator, fps_denominator;
|
||||
int num_frames;
|
||||
|
||||
int pixel_type;
|
||||
|
||||
int audio_samples_per_second; // 0 means no audio
|
||||
int sample_type;
|
||||
INT64 num_audio_samples;
|
||||
int nchannels;
|
||||
|
||||
// Imagetype properties
|
||||
|
||||
int image_type;
|
||||
} AVS_VideoInfo;
|
||||
|
||||
// useful functions of the above
|
||||
AVSC_INLINE int avs_has_video(const AVS_VideoInfo * p)
|
||||
{ return (p->width!=0); }
|
||||
|
||||
AVSC_INLINE int avs_has_audio(const AVS_VideoInfo * p)
|
||||
{ return (p->audio_samples_per_second!=0); }
|
||||
|
||||
AVSC_INLINE int avs_is_rgb(const AVS_VideoInfo * p)
|
||||
{ return !!(p->pixel_type&AVS_CS_BGR); }
|
||||
|
||||
AVSC_INLINE int avs_is_rgb24(const AVS_VideoInfo * p)
|
||||
{ return (p->pixel_type&AVS_CS_BGR24)==AVS_CS_BGR24; } // Clear out additional properties
|
||||
|
||||
AVSC_INLINE int avs_is_rgb32(const AVS_VideoInfo * p)
|
||||
{ return (p->pixel_type & AVS_CS_BGR32) == AVS_CS_BGR32 ; }
|
||||
|
||||
AVSC_INLINE int avs_is_yuv(const AVS_VideoInfo * p)
|
||||
{ return !!(p->pixel_type&AVS_CS_YUV ); }
|
||||
|
||||
AVSC_INLINE int avs_is_yuy2(const AVS_VideoInfo * p)
|
||||
{ return (p->pixel_type & AVS_CS_YUY2) == AVS_CS_YUY2; }
|
||||
|
||||
AVSC_INLINE int avs_is_yv12(const AVS_VideoInfo * p)
|
||||
{ return ((p->pixel_type & AVS_CS_YV12) == AVS_CS_YV12)||((p->pixel_type & AVS_CS_I420) == AVS_CS_I420); }
|
||||
|
||||
AVSC_INLINE int avs_is_color_space(const AVS_VideoInfo * p, int c_space)
|
||||
{ return ((p->pixel_type & c_space) == c_space); }
|
||||
|
||||
AVSC_INLINE int avs_is_property(const AVS_VideoInfo * p, int property)
|
||||
{ return ((p->pixel_type & property)==property ); }
|
||||
|
||||
AVSC_INLINE int avs_is_planar(const AVS_VideoInfo * p)
|
||||
{ return !!(p->pixel_type & AVS_CS_PLANAR); }
|
||||
|
||||
AVSC_INLINE int avs_is_field_based(const AVS_VideoInfo * p)
|
||||
{ return !!(p->image_type & AVS_IT_FIELDBASED); }
|
||||
|
||||
AVSC_INLINE int avs_is_parity_known(const AVS_VideoInfo * p)
|
||||
{ return ((p->image_type & AVS_IT_FIELDBASED)&&(p->image_type & (AVS_IT_BFF | AVS_IT_TFF))); }
|
||||
|
||||
AVSC_INLINE int avs_is_bff(const AVS_VideoInfo * p)
|
||||
{ return !!(p->image_type & AVS_IT_BFF); }
|
||||
|
||||
AVSC_INLINE int avs_is_tff(const AVS_VideoInfo * p)
|
||||
{ return !!(p->image_type & AVS_IT_TFF); }
|
||||
|
||||
AVSC_INLINE int avs_bits_per_pixel(const AVS_VideoInfo * p)
|
||||
{
|
||||
switch (p->pixel_type) {
|
||||
case AVS_CS_BGR24: return 24;
|
||||
case AVS_CS_BGR32: return 32;
|
||||
case AVS_CS_YUY2: return 16;
|
||||
case AVS_CS_YV12:
|
||||
case AVS_CS_I420: return 12;
|
||||
default: return 0;
|
||||
}
|
||||
}
|
||||
AVSC_INLINE int avs_bytes_from_pixels(const AVS_VideoInfo * p, int pixels)
|
||||
{ return pixels * (avs_bits_per_pixel(p)>>3); } // Will work on planar images, but will return only luma planes
|
||||
|
||||
AVSC_INLINE int avs_row_size(const AVS_VideoInfo * p)
|
||||
{ return avs_bytes_from_pixels(p,p->width); } // Also only returns first plane on planar images
|
||||
|
||||
AVSC_INLINE int avs_bmp_size(const AVS_VideoInfo * vi)
|
||||
{ if (avs_is_planar(vi)) {int p = vi->height * ((avs_row_size(vi)+3) & ~3); p+=p>>1; return p; } return vi->height * ((avs_row_size(vi)+3) & ~3); }
|
||||
|
||||
AVSC_INLINE int avs_samples_per_second(const AVS_VideoInfo * p)
|
||||
{ return p->audio_samples_per_second; }
|
||||
|
||||
|
||||
AVSC_INLINE int avs_bytes_per_channel_sample(const AVS_VideoInfo * p)
|
||||
{
|
||||
switch (p->sample_type) {
|
||||
case AVS_SAMPLE_INT8: return sizeof(signed char);
|
||||
case AVS_SAMPLE_INT16: return sizeof(signed short);
|
||||
case AVS_SAMPLE_INT24: return 3;
|
||||
case AVS_SAMPLE_INT32: return sizeof(signed int);
|
||||
case AVS_SAMPLE_FLOAT: return sizeof(float);
|
||||
default: return 0;
|
||||
}
|
||||
}
|
||||
AVSC_INLINE int avs_bytes_per_audio_sample(const AVS_VideoInfo * p)
|
||||
{ return p->nchannels*avs_bytes_per_channel_sample(p);}
|
||||
|
||||
AVSC_INLINE INT64 avs_audio_samples_from_frames(const AVS_VideoInfo * p, INT64 frames)
|
||||
{ return ((INT64)(frames) * p->audio_samples_per_second * p->fps_denominator / p->fps_numerator); }
|
||||
|
||||
AVSC_INLINE int avs_frames_from_audio_samples(const AVS_VideoInfo * p, INT64 samples)
|
||||
{ return (int)(samples * (INT64)p->fps_numerator / (INT64)p->fps_denominator / (INT64)p->audio_samples_per_second); }
|
||||
|
||||
AVSC_INLINE INT64 avs_audio_samples_from_bytes(const AVS_VideoInfo * p, INT64 bytes)
|
||||
{ return bytes / avs_bytes_per_audio_sample(p); }
|
||||
|
||||
AVSC_INLINE INT64 avs_bytes_from_audio_samples(const AVS_VideoInfo * p, INT64 samples)
|
||||
{ return samples * avs_bytes_per_audio_sample(p); }
|
||||
|
||||
AVSC_INLINE int avs_audio_channels(const AVS_VideoInfo * p)
|
||||
{ return p->nchannels; }
|
||||
|
||||
AVSC_INLINE int avs_sample_type(const AVS_VideoInfo * p)
|
||||
{ return p->sample_type;}
|
||||
|
||||
// useful mutator
|
||||
AVSC_INLINE void avs_set_property(AVS_VideoInfo * p, int property)
|
||||
{ p->image_type|=property; }
|
||||
|
||||
AVSC_INLINE void avs_clear_property(AVS_VideoInfo * p, int property)
|
||||
{ p->image_type&=~property; }
|
||||
|
||||
AVSC_INLINE void avs_set_field_based(AVS_VideoInfo * p, int isfieldbased)
|
||||
{ if (isfieldbased) p->image_type|=AVS_IT_FIELDBASED; else p->image_type&=~AVS_IT_FIELDBASED; }
|
||||
|
||||
AVSC_INLINE void avs_set_fps(AVS_VideoInfo * p, unsigned numerator, unsigned denominator)
|
||||
{
|
||||
unsigned x=numerator, y=denominator;
|
||||
while (y) { // find gcd
|
||||
unsigned t = x%y; x = y; y = t;
|
||||
}
|
||||
p->fps_numerator = numerator/x;
|
||||
p->fps_denominator = denominator/x;
|
||||
}
|
||||
|
||||
AVSC_INLINE int avs_is_same_colorspace(AVS_VideoInfo * x, AVS_VideoInfo * y)
|
||||
{
|
||||
return (x->pixel_type == y->pixel_type)
|
||||
|| (avs_is_yv12(x) && avs_is_yv12(y));
|
||||
}
|
||||
|
||||
/////////////////////////////////////////////////////////////////////
|
||||
//
|
||||
// AVS_VideoFrame
|
||||
//
|
||||
|
||||
// VideoFrameBuffer holds information about a memory block which is used
|
||||
// for video data. For efficiency, instances of this class are not deleted
|
||||
// when the refcount reaches zero; instead they're stored in a linked list
|
||||
// to be reused. The instances are deleted when the corresponding AVS
|
||||
// file is closed.
|
||||
|
||||
// AVS_VideoFrameBuffer is laid out identically to VideoFrameBuffer
|
||||
// DO NOT USE THIS STRUCTURE DIRECTLY
|
||||
typedef struct AVS_VideoFrameBuffer {
|
||||
unsigned char * data;
|
||||
int data_size;
|
||||
// sequence_number is incremented every time the buffer is changed, so
|
||||
// that stale views can tell they're no longer valid.
|
||||
long sequence_number;
|
||||
|
||||
long refcount;
|
||||
} AVS_VideoFrameBuffer;
|
||||
|
||||
// VideoFrame holds a "window" into a VideoFrameBuffer.
|
||||
|
||||
// AVS_VideoFrame is laid out identically to IVideoFrame
|
||||
// DO NOT USE THIS STRUCTURE DIRECTLY
|
||||
typedef struct AVS_VideoFrame {
|
||||
int refcount;
|
||||
AVS_VideoFrameBuffer * vfb;
|
||||
int offset, pitch, row_size, height, offsetU, offsetV, pitchUV; // U&V offsets are from top of picture.
|
||||
} AVS_VideoFrame;
|
||||
|
||||
// Access functions for AVS_VideoFrame
AVSC_INLINE int avs_get_pitch(const AVS_VideoFrame * p) {
    return p->pitch; }

AVSC_INLINE int avs_get_pitch_p(const AVS_VideoFrame * p, int plane) {
    switch (plane) {
    case AVS_PLANAR_U: case AVS_PLANAR_V: return p->pitchUV;
    }
    return p->pitch; }

AVSC_INLINE int avs_get_row_size(const AVS_VideoFrame * p) {
    return p->row_size; }

AVSC_INLINE int avs_get_row_size_p(const AVS_VideoFrame * p, int plane) {
    int r;
    switch (plane) {
    case AVS_PLANAR_U: case AVS_PLANAR_V:
        if (p->pitchUV) return p->row_size>>1;
        else            return 0;
    case AVS_PLANAR_U_ALIGNED: case AVS_PLANAR_V_ALIGNED:
        if (p->pitchUV) {
            r = ((p->row_size+AVS_FRAME_ALIGN-1)&(~(AVS_FRAME_ALIGN-1)) )>>1; // Aligned rowsize
            if (r < p->pitchUV)
                return r;
            return p->row_size>>1;
        } else return 0;
    case AVS_PLANAR_Y_ALIGNED:
        r = (p->row_size+AVS_FRAME_ALIGN-1)&(~(AVS_FRAME_ALIGN-1)); // Aligned rowsize
        if (r <= p->pitch)
            return r;
        return p->row_size;
    }
    return p->row_size;
}

AVSC_INLINE int avs_get_height(const AVS_VideoFrame * p) {
    return p->height; }

AVSC_INLINE int avs_get_height_p(const AVS_VideoFrame * p, int plane) {
    switch (plane) {
    case AVS_PLANAR_U: case AVS_PLANAR_V:
        if (p->pitchUV) return p->height>>1;
        return 0;
    }
    return p->height; }

AVSC_INLINE const unsigned char* avs_get_read_ptr(const AVS_VideoFrame * p) {
    return p->vfb->data + p->offset; }

AVSC_INLINE const unsigned char* avs_get_read_ptr_p(const AVS_VideoFrame * p, int plane)
{
    switch (plane) {
    case AVS_PLANAR_U: return p->vfb->data + p->offsetU;
    case AVS_PLANAR_V: return p->vfb->data + p->offsetV;
    default:           return p->vfb->data + p->offset;
    }
}

AVSC_INLINE int avs_is_writable(const AVS_VideoFrame * p) {
    return (p->refcount == 1 && p->vfb->refcount == 1); }

AVSC_INLINE unsigned char* avs_get_write_ptr(const AVS_VideoFrame * p)
{
    if (avs_is_writable(p)) {
        ++p->vfb->sequence_number;
        return p->vfb->data + p->offset;
    } else
        return 0;
}

AVSC_INLINE unsigned char* avs_get_write_ptr_p(const AVS_VideoFrame * p, int plane)
{
    if (plane==AVS_PLANAR_Y && avs_is_writable(p)) {
        ++p->vfb->sequence_number;
        return p->vfb->data + p->offset;
    } else if (plane==AVS_PLANAR_Y) {
        return 0;
    } else {
        switch (plane) {
        case AVS_PLANAR_U: return p->vfb->data + p->offsetU;
        case AVS_PLANAR_V: return p->vfb->data + p->offsetV;
        default:           return p->vfb->data + p->offset;
        }
    }
}
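/*
 * Usage illustration (not part of this header): rows of a plane are
 * addressed as base pointer + y * pitch, and only row_size bytes of each
 * row carry pixel data.  A minimal sketch of a pitch-aware plane copy,
 * assuming src and dst are frames of matching dimensions and <string.h>
 * is available; copy_plane is a hypothetical helper name.  In practice
 * avs_bit_blt(), declared further down, performs the same copy in one call.
 */
static void copy_plane(AVS_VideoFrame * dst, const AVS_VideoFrame * src, int plane)
{
    const unsigned char * s = avs_get_read_ptr_p(src, plane);
    unsigned char * d       = avs_get_write_ptr_p(dst, plane);  /* 0 for a non-writable Y plane */
    const int src_pitch = avs_get_pitch_p(src, plane);
    const int dst_pitch = avs_get_pitch_p(dst, plane);
    const int row_size  = avs_get_row_size_p(src, plane);
    const int height    = avs_get_height_p(src, plane);
    int y;

    if (!d)
        return;
    for (y = 0; y < height; y++) {
        memcpy(d, s, row_size);   /* rows may be shorter than the pitch */
        s += src_pitch;
        d += dst_pitch;
    }
}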
#if defined __cplusplus
extern "C"
{
#endif // __cplusplus
AVSC_API(void, avs_release_video_frame)(AVS_VideoFrame *);
// makes a shallow copy of a video frame
AVSC_API(AVS_VideoFrame *, avs_copy_video_frame)(AVS_VideoFrame *);
#if defined __cplusplus
}
#endif // __cplusplus

#ifndef AVSC_NO_DECLSPEC
AVSC_INLINE void avs_release_frame(AVS_VideoFrame * f)
        {avs_release_video_frame(f);}
AVSC_INLINE AVS_VideoFrame * avs_copy_frame(AVS_VideoFrame * f)
        {return avs_copy_video_frame(f);}
#endif
/////////////////////////////////////////////////////////////////////
//
// AVS_Value
//

// Treat AVS_Value as a fat pointer.  That is, use avs_copy_value
// and avs_release_value appropriately, as you would if AVS_Value were
// a pointer.

// To maintain source code compatibility with future versions of the
// avisynth_c API don't use the AVS_Value directly.  Use the helper
// functions below.

// AVS_Value is laid out identically to AVSValue
typedef struct AVS_Value AVS_Value;
struct AVS_Value {
    short type;  // 'a'rray, 'c'lip, 'b'ool, 'i'nt, 'f'loat, 's'tring, 'v'oid, or 'l'ong
                 // for some functions, 'e'rror
    short array_size;
    union {
        void * clip;        // do not use directly, use avs_take_clip
        char boolean;
        int integer;
        INT64 integer64;    // match addition of __int64 to avxplugin.h
        float floating_pt;
        const char * string;
        const AVS_Value * array;
    } d;
};

// An AVS_Value should be initialized with avs_void.  It should also be set
// back to avs_void after the value is released with avs_release_value.
// Consider it the equivalent of setting a pointer to NULL.
static const AVS_Value avs_void = {'v'};

AVSC_API(void, avs_copy_value)(AVS_Value * dest, AVS_Value src);
AVSC_API(void, avs_release_value)(AVS_Value);

AVSC_INLINE int avs_defined(AVS_Value v)   { return v.type != 'v'; }
AVSC_INLINE int avs_is_clip(AVS_Value v)   { return v.type == 'c'; }
AVSC_INLINE int avs_is_bool(AVS_Value v)   { return v.type == 'b'; }
AVSC_INLINE int avs_is_int(AVS_Value v)    { return v.type == 'i'; }
AVSC_INLINE int avs_is_float(AVS_Value v)  { return v.type == 'f' || v.type == 'i'; }
AVSC_INLINE int avs_is_string(AVS_Value v) { return v.type == 's'; }
AVSC_INLINE int avs_is_array(AVS_Value v)  { return v.type == 'a'; }
AVSC_INLINE int avs_is_error(AVS_Value v)  { return v.type == 'e'; }

#if defined __cplusplus
extern "C"
{
#endif // __cplusplus
AVSC_API(AVS_Clip *, avs_take_clip)(AVS_Value, AVS_ScriptEnvironment *);
AVSC_API(void, avs_set_to_clip)(AVS_Value *, AVS_Clip *);
#if defined __cplusplus
}
#endif // __cplusplus

AVSC_INLINE int avs_as_bool(AVS_Value v)
        { return v.d.boolean; }
AVSC_INLINE int avs_as_int(AVS_Value v)
        { return v.d.integer; }
AVSC_INLINE const char * avs_as_string(AVS_Value v)
        { return avs_is_error(v) || avs_is_string(v) ? v.d.string : 0; }
AVSC_INLINE double avs_as_float(AVS_Value v)
        { return avs_is_int(v) ? v.d.integer : v.d.floating_pt; }
AVSC_INLINE const char * avs_as_error(AVS_Value v)
        { return avs_is_error(v) ? v.d.string : 0; }
AVSC_INLINE const AVS_Value * avs_as_array(AVS_Value v)
        { return v.d.array; }
AVSC_INLINE int avs_array_size(AVS_Value v)
        { return avs_is_array(v) ? v.array_size : 1; }
AVSC_INLINE AVS_Value avs_array_elt(AVS_Value v, int index)
        { return avs_is_array(v) ? v.d.array[index] : v; }

// Only use these functions on an AVS_Value that does not already have
// an active value.  Remember, treat AVS_Value as a fat pointer.
AVSC_INLINE AVS_Value avs_new_value_bool(int v0)
        { AVS_Value v = {0}; v.type = 'b'; v.d.boolean = v0 == 0 ? 0 : 1; return v; }
AVSC_INLINE AVS_Value avs_new_value_int(int v0)
        { AVS_Value v = {0}; v.type = 'i'; v.d.integer = v0; return v; }
AVSC_INLINE AVS_Value avs_new_value_string(const char * v0)
        { AVS_Value v = {0}; v.type = 's'; v.d.string = v0; return v; }
AVSC_INLINE AVS_Value avs_new_value_float(float v0)
        { AVS_Value v = {0}; v.type = 'f'; v.d.floating_pt = v0; return v; }
AVSC_INLINE AVS_Value avs_new_value_error(const char * v0)
        { AVS_Value v = {0}; v.type = 'e'; v.d.string = v0; return v; }
#ifndef AVSC_NO_DECLSPEC
AVSC_INLINE AVS_Value avs_new_value_clip(AVS_Clip * v0)
        { AVS_Value v = {0}; avs_set_to_clip(&v, v0); return v; }
#endif
AVSC_INLINE AVS_Value avs_new_value_array(AVS_Value * v0, int size)
        { AVS_Value v = {0}; v.type = 'a'; v.d.array = v0; v.array_size = size; return v; }
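/*
 * Usage illustration (not part of this header): the fat-pointer rule above
 * means any AVS_Value obtained from the API must eventually be passed to
 * avs_release_value, and may be reset to avs_void afterwards.  A minimal
 * sketch, assuming an already-created AVS_ScriptEnvironment * env and
 * <stdio.h>; the function name and the variable name "last" are only
 * examples.
 */
static void example_inspect_last(AVS_ScriptEnvironment * env)
{
    AVS_Value val = avs_get_var(env, "last");        /* we own this reference */

    if (avs_is_clip(val)) {
        AVS_Clip * clip = avs_take_clip(val, env);   /* clip carries its own reference */
        /* ... use the clip ... */
        avs_release_clip(clip);
    } else if (avs_is_error(val)) {
        fprintf(stderr, "script error: %s\n", avs_as_error(val));
    }

    avs_release_value(val);                          /* always pair with the acquisition */
    val = avs_void;                                  /* the NULL-pointer analogue */
}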
/////////////////////////////////////////////////////////////////////
//
// AVS_Clip
//
#if defined __cplusplus
extern "C"
{
#endif // __cplusplus
AVSC_API(void, avs_release_clip)(AVS_Clip *);
AVSC_API(AVS_Clip *, avs_copy_clip)(AVS_Clip *);

AVSC_API(const char *, avs_clip_get_error)(AVS_Clip *); // return 0 if no error

AVSC_API(const AVS_VideoInfo *, avs_get_video_info)(AVS_Clip *);

AVSC_API(int, avs_get_version)(AVS_Clip *);

AVSC_API(AVS_VideoFrame *, avs_get_frame)(AVS_Clip *, int n);
// The returned video frame must be released with avs_release_video_frame

AVSC_API(int, avs_get_parity)(AVS_Clip *, int n);
// return field parity if field_based, else parity of first field in frame

AVSC_API(int, avs_get_audio)(AVS_Clip *, void * buf,
                             INT64 start, INT64 count);
// start and count are in samples

AVSC_API(int, avs_set_cache_hints)(AVS_Clip *,
                                   int cachehints, size_t frame_range);
#if defined __cplusplus
}
#endif // __cplusplus
// This is the callback type used by avs_add_function
typedef AVS_Value (AVSC_CC * AVS_ApplyFunc)
                        (AVS_ScriptEnvironment *, AVS_Value args, void * user_data);

typedef struct AVS_FilterInfo AVS_FilterInfo;
struct AVS_FilterInfo
{
    // these members should not be modified outside of the AVS_ApplyFunc callback
    AVS_Clip * child;
    AVS_VideoInfo vi;
    AVS_ScriptEnvironment * env;
    AVS_VideoFrame * (AVSC_CC * get_frame)(AVS_FilterInfo *, int n);
    int (AVSC_CC * get_parity)(AVS_FilterInfo *, int n);
    int (AVSC_CC * get_audio)(AVS_FilterInfo *, void * buf,
                              INT64 start, INT64 count);
    int (AVSC_CC * set_cache_hints)(AVS_FilterInfo *, int cachehints,
                                    int frame_range);
    void (AVSC_CC * free_filter)(AVS_FilterInfo *);

    // Should be set whenever there is an error to report.
    // It is cleared before any of the above methods are called.
    const char * error;
    // this is to store whatever you like and may be modified at will
    void * user_data;
};

// Create a new filter.
// fi is set to point to the AVS_FilterInfo so that you can
// modify it once it is initialized.
// store_child should generally be set to true.  If it is not
// set, then ALL methods (the function pointers) must be defined.
// If it is set, then you do not need to worry about freeing the child
// clip.
#if defined __cplusplus
extern "C"
{
#endif // __cplusplus
AVSC_API(AVS_Clip *, avs_new_c_filter)(AVS_ScriptEnvironment * e,
                                       AVS_FilterInfo * * fi,
                                       AVS_Value child, int store_child);
#if defined __cplusplus
}
#endif // __cplusplus
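/*
 * Usage illustration (not part of this header): skeleton of a pass-through
 * C filter built on avs_new_c_filter.  The names example_get_frame and
 * create_example_filter are hypothetical.  The AVS_ApplyFunc receives its
 * arguments as an array; element 0 is conventionally the child clip.
 */
static AVS_VideoFrame * AVSC_CC example_get_frame(AVS_FilterInfo * fi, int n)
{
    /* A real filter would call avs_make_writable() here and modify the pixels. */
    return avs_get_frame(fi->child, n);
}

static AVS_Value AVSC_CC create_example_filter(AVS_ScriptEnvironment * env,
                                               AVS_Value args, void * user_data)
{
    AVS_FilterInfo * fi;
    /* store_child = 1, so the child clip is retained and freed for us. */
    AVS_Clip * new_clip = avs_new_c_filter(env, &fi, avs_array_elt(args, 0), 1);
    AVS_Value ret;
    (void)user_data;

    fi->get_frame = example_get_frame;   /* override only what we implement */

    ret = avs_new_value_clip(new_clip);
    avs_release_clip(new_clip);          /* ret now carries its own reference */
    return ret;
}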
/////////////////////////////////////////////////////////////////////
//
// AVS_ScriptEnvironment
//

// For GetCPUFlags.  These are backwards-compatible with those in VirtualDub.
enum {
                                /* slowest CPU to support extension */
    AVS_CPU_FORCE       = 0x01, // N/A
    AVS_CPU_FPU         = 0x02, // 386/486DX
    AVS_CPU_MMX         = 0x04, // P55C, K6, PII
    AVS_CPU_INTEGER_SSE = 0x08, // PIII, Athlon
    AVS_CPU_SSE         = 0x10, // PIII, Athlon XP/MP
    AVS_CPU_SSE2        = 0x20, // PIV, Hammer
    AVS_CPU_3DNOW       = 0x40, // K6-2
    AVS_CPU_3DNOW_EXT   = 0x80, // Athlon
    AVS_CPU_X86_64      = 0xA0, // Hammer (note: equiv. to 3DNow + SSE2,
                                // which only Hammer will have anyway)
};
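/*
 * Usage illustration (not part of this header): the values above form a
 * bitmask, so capability checks are plain AND tests against the result of
 * avs_get_cpu_flags(), declared just below.  Sketch only; env is assumed
 * to be a valid AVS_ScriptEnvironment * and the helper name is hypothetical.
 */
static int example_has_sse2(AVS_ScriptEnvironment * env)
{
    return (avs_get_cpu_flags(env) & AVS_CPU_SSE2) != 0;
}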
#if defined __cplusplus
extern "C"
{
#endif // __cplusplus
AVSC_API(const char *, avs_get_error)(AVS_ScriptEnvironment *); // return 0 if no error

AVSC_API(long, avs_get_cpu_flags)(AVS_ScriptEnvironment *);
AVSC_API(int, avs_check_version)(AVS_ScriptEnvironment *, int version);

AVSC_API(char *, avs_save_string)(AVS_ScriptEnvironment *, const char* s, int length);
AVSC_API(char *, avs_sprintf)(AVS_ScriptEnvironment *, const char * fmt, ...);

AVSC_API(char *, avs_vsprintf)(AVS_ScriptEnvironment *, const char * fmt, va_list val);
// note: val is really a va_list; I hope everyone typedefs va_list to a pointer

AVSC_API(int, avs_add_function)(AVS_ScriptEnvironment *,
                                const char * name, const char * params,
                                AVS_ApplyFunc apply, void * user_data);

AVSC_API(int, avs_function_exists)(AVS_ScriptEnvironment *, const char * name);

AVSC_API(AVS_Value, avs_invoke)(AVS_ScriptEnvironment *, const char * name,
                                AVS_Value args, const char** arg_names);
// The returned value must be released with avs_release_value

AVSC_API(AVS_Value, avs_get_var)(AVS_ScriptEnvironment *, const char* name);
// The returned value must be released with avs_release_value

AVSC_API(int, avs_set_var)(AVS_ScriptEnvironment *, const char* name, AVS_Value val);

AVSC_API(int, avs_set_global_var)(AVS_ScriptEnvironment *, const char* name, const AVS_Value val);

//void avs_push_context(AVS_ScriptEnvironment *, int level=0);
//void avs_pop_context(AVS_ScriptEnvironment *);

AVSC_API(AVS_VideoFrame *, avs_new_video_frame_a)(AVS_ScriptEnvironment *,
                                                  const AVS_VideoInfo * vi, int align);
// align should be at least 16
#if defined __cplusplus
}
#endif // __cplusplus

#ifndef AVSC_NO_DECLSPEC
AVSC_INLINE
AVS_VideoFrame * avs_new_video_frame(AVS_ScriptEnvironment * env,
                                     const AVS_VideoInfo * vi)
        {return avs_new_video_frame_a(env,vi,AVS_FRAME_ALIGN);}

AVSC_INLINE
AVS_VideoFrame * avs_new_frame(AVS_ScriptEnvironment * env,
                               const AVS_VideoInfo * vi)
        {return avs_new_video_frame_a(env,vi,AVS_FRAME_ALIGN);}
#endif
#if defined __cplusplus
extern "C"
{
#endif // __cplusplus
AVSC_API(int, avs_make_writable)(AVS_ScriptEnvironment *, AVS_VideoFrame * * pvf);

AVSC_API(void, avs_bit_blt)(AVS_ScriptEnvironment *, unsigned char* dstp, int dst_pitch, const unsigned char* srcp, int src_pitch, int row_size, int height);

typedef void (AVSC_CC *AVS_ShutdownFunc)(void* user_data, AVS_ScriptEnvironment * env);
AVSC_API(void, avs_at_exit)(AVS_ScriptEnvironment *, AVS_ShutdownFunc function, void * user_data);

AVSC_API(AVS_VideoFrame *, avs_subframe)(AVS_ScriptEnvironment *, AVS_VideoFrame * src, int rel_offset, int new_pitch, int new_row_size, int new_height);
// The returned video frame must be released

AVSC_API(int, avs_set_memory_max)(AVS_ScriptEnvironment *, int mem);

AVSC_API(int, avs_set_working_dir)(AVS_ScriptEnvironment *, const char * newdir);

// avisynth.dll exports this; it's a way to use it as a library, without
// writing an AVS script or without going through AVIFile.
AVSC_API(AVS_ScriptEnvironment *, avs_create_script_environment)(int version);
#if defined __cplusplus
}
#endif // __cplusplus
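/*
 * Usage illustration (not part of this header): avs_create_script_environment
 * is the hook for embedding AviSynth/AvxSynth as a plain library.  A minimal
 * host sketch that runs a script and fetches one frame; the script name
 * "test.avs", the interface version 3 and the function name are placeholder
 * values, and error handling is trimmed to the essentials.
 */
static int example_run_script(void)
{
    AVS_ScriptEnvironment * env = avs_create_script_environment(3);
    AVS_Value arg = avs_new_value_string("test.avs");
    AVS_Value res = avs_invoke(env, "Import", arg, 0);    /* evaluate the script */
    int ok = 0;

    if (avs_is_clip(res)) {
        AVS_Clip * clip = avs_take_clip(res, env);
        const AVS_VideoInfo * vi = avs_get_video_info(clip);   /* width, height, num_frames, ... */
        AVS_VideoFrame * frame = avs_get_frame(clip, 0);
        /* ... read pixels through avs_get_read_ptr()/avs_get_pitch() ... */
        avs_release_video_frame(frame);
        avs_release_clip(clip);
        ok = vi->num_frames > 0;
    }
    avs_release_value(res);
    avs_delete_script_environment(env);
    return ok;
}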
// This symbol is the entry point for the plugin and must
// be defined.
AVSC_EXPORT
const char * AVSC_CC avisynth_c_plugin_init(AVS_ScriptEnvironment* env);


#if defined __cplusplus
extern "C"
{
#endif // __cplusplus
AVSC_API(void, avs_delete_script_environment)(AVS_ScriptEnvironment *);


AVSC_API(AVS_VideoFrame *, avs_subframe_planar)(AVS_ScriptEnvironment *, AVS_VideoFrame * src, int rel_offset, int new_pitch, int new_row_size, int new_height, int rel_offsetU, int rel_offsetV, int new_pitchUV);
// The returned video frame must be released
#if defined __cplusplus
}
#endif // __cplusplus

#endif //__AVXSYNTH_C__
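/*
 * Usage illustration (not part of this header): a plugin's
 * avisynth_c_plugin_init normally just registers its filters with
 * avs_add_function and returns a description string.  Sketch only;
 * create_example_filter is the hypothetical callback sketched above, and
 * the parameter string "c" declares a single clip argument.
 */
AVSC_EXPORT
const char * AVSC_CC avisynth_c_plugin_init(AVS_ScriptEnvironment * env)
{
    avs_add_function(env, "ExampleFilter", "c", create_example_filter, 0);
    return "ExampleFilter: illustrative pass-through filter";
}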
@@ -1,85 +0,0 @@
|
||||
#ifndef __DATA_TYPE_CONVERSIONS_H__
|
||||
#define __DATA_TYPE_CONVERSIONS_H__
|
||||
|
||||
#include <stdint.h>
|
||||
#include <wchar.h>
|
||||
|
||||
#ifdef __cplusplus
|
||||
namespace avxsynth {
|
||||
#endif // __cplusplus
|
||||
|
||||
typedef int64_t __int64;
|
||||
typedef int32_t __int32;
|
||||
#ifdef __cplusplus
|
||||
typedef bool BOOL;
|
||||
#else
|
||||
typedef uint32_t BOOL;
|
||||
#endif // __cplusplus
|
||||
typedef void* HMODULE;
|
||||
typedef void* LPVOID;
|
||||
typedef void* PVOID;
|
||||
typedef PVOID HANDLE;
|
||||
typedef HANDLE HWND;
|
||||
typedef HANDLE HINSTANCE;
|
||||
typedef void* HDC;
|
||||
typedef void* HBITMAP;
|
||||
typedef void* HICON;
|
||||
typedef void* HFONT;
|
||||
typedef void* HGDIOBJ;
|
||||
typedef void* HBRUSH;
|
||||
typedef void* HMMIO;
|
||||
typedef void* HACMSTREAM;
|
||||
typedef void* HACMDRIVER;
|
||||
typedef void* HIC;
|
||||
typedef void* HACMOBJ;
|
||||
typedef HACMSTREAM* LPHACMSTREAM;
|
||||
typedef void* HACMDRIVERID;
|
||||
typedef void* LPHACMDRIVER;
|
||||
typedef unsigned char BYTE;
|
||||
typedef BYTE* LPBYTE;
|
||||
typedef char TCHAR;
|
||||
typedef TCHAR* LPTSTR;
|
||||
typedef const TCHAR* LPCTSTR;
|
||||
typedef char* LPSTR;
|
||||
typedef LPSTR LPOLESTR;
|
||||
typedef const char* LPCSTR;
|
||||
typedef LPCSTR LPCOLESTR;
|
||||
typedef wchar_t WCHAR;
|
||||
typedef unsigned short WORD;
|
||||
typedef unsigned int UINT;
|
||||
typedef UINT MMRESULT;
|
||||
typedef uint32_t DWORD;
|
||||
typedef DWORD COLORREF;
|
||||
typedef DWORD FOURCC;
|
||||
typedef DWORD HRESULT;
|
||||
typedef DWORD* LPDWORD;
|
||||
typedef DWORD* DWORD_PTR;
|
||||
typedef int32_t LONG;
|
||||
typedef int32_t* LONG_PTR;
|
||||
typedef LONG_PTR LRESULT;
|
||||
typedef uint32_t ULONG;
|
||||
typedef uint32_t* ULONG_PTR;
|
||||
//typedef __int64_t intptr_t;
|
||||
typedef uint64_t _fsize_t;
|
||||
|
||||
|
||||
//
|
||||
// Structures
|
||||
//
|
||||
|
||||
typedef struct _GUID {
|
||||
DWORD Data1;
|
||||
WORD Data2;
|
||||
WORD Data3;
|
||||
BYTE Data4[8];
|
||||
} GUID;
|
||||
|
||||
typedef GUID REFIID;
|
||||
typedef GUID CLSID;
|
||||
typedef CLSID* LPCLSID;
|
||||
typedef GUID IID;
|
||||
|
||||
#ifdef __cplusplus
|
||||
}; // namespace avxsynth
|
||||
#endif // __cplusplus
|
||||
#endif // __DATA_TYPE_CONVERSIONS_H__
|
||||
@@ -1,77 +0,0 @@
|
||||
#ifndef __WINDOWS2LINUX_H__
|
||||
#define __WINDOWS2LINUX_H__
|
||||
|
||||
/*
|
||||
* LINUX SPECIFIC DEFINITIONS
|
||||
*/
|
||||
//
|
||||
// Data types conversions
|
||||
//
|
||||
#include <stdlib.h>
|
||||
#include <string.h>
|
||||
#include "basicDataTypeConversions.h"
|
||||
|
||||
#ifdef __cplusplus
|
||||
namespace avxsynth {
|
||||
#endif // __cplusplus
|
||||
//
|
||||
// purposefully define the following MSFT definitions
|
||||
// to mean nothing (as they do not mean anything on Linux)
|
||||
//
|
||||
#define __stdcall
|
||||
#define __cdecl
|
||||
#define noreturn
|
||||
#define __declspec(x)
|
||||
#define STDAPI extern "C" HRESULT
|
||||
#define STDMETHODIMP HRESULT __stdcall
|
||||
#define STDMETHODIMP_(x) x __stdcall
|
||||
|
||||
#define STDMETHOD(x) virtual HRESULT x
|
||||
#define STDMETHOD_(a, x) virtual a x
|
||||
|
||||
#ifndef TRUE
|
||||
#define TRUE true
|
||||
#endif
|
||||
|
||||
#ifndef FALSE
|
||||
#define FALSE false
|
||||
#endif
|
||||
|
||||
#define S_OK (0x00000000)
|
||||
#define S_FALSE (0x00000001)
|
||||
#define E_NOINTERFACE (0X80004002)
|
||||
#define E_POINTER (0x80004003)
|
||||
#define E_FAIL (0x80004005)
|
||||
#define E_OUTOFMEMORY (0x8007000E)
|
||||
|
||||
#define INVALID_HANDLE_VALUE ((HANDLE)((LONG_PTR)-1))
|
||||
#define FAILED(hr) ((hr) & 0x80000000)
|
||||
#define SUCCEEDED(hr) (!FAILED(hr))
|
||||
|
||||
|
||||
//
|
||||
// Functions
|
||||
//
|
||||
#define MAKEDWORD(a,b,c,d) (((a) << 24) | ((b) << 16) | ((c) << 8) | (d))
|
||||
#define MAKEWORD(a,b) (((a) << 8) | (b))
|
||||
|
||||
#define lstrlen strlen
|
||||
#define lstrcpy strcpy
|
||||
#define lstrcmpi strcasecmp
|
||||
#define _stricmp strcasecmp
|
||||
#define InterlockedIncrement(x) __sync_fetch_and_add((x), 1)
|
||||
#define InterlockedDecrement(x) __sync_fetch_and_sub((x), 1)
|
||||
// Windows uses (new, old) ordering but GCC has (old, new)
|
||||
#define InterlockedCompareExchange(x,y,z) __sync_val_compare_and_swap(x,z,y)
|
||||
|
||||
#define UInt32x32To64(a, b) ( (uint64_t) ( ((uint64_t)((uint32_t)(a))) * ((uint32_t)(b)) ) )
|
||||
#define Int64ShrlMod32(a, b) ( (uint64_t) ( (uint64_t)(a) >> (b) ) )
|
||||
#define Int32x32To64(a, b) ((__int64)(((__int64)((long)(a))) * ((long)(b))))
|
||||
|
||||
#define MulDiv(nNumber, nNumerator, nDenominator) (int32_t) (((int64_t) (nNumber) * (int64_t) (nNumerator) + (int64_t) ((nDenominator)/2)) / (int64_t) (nDenominator))
|
||||
|
||||
#ifdef __cplusplus
|
||||
}; // namespace avxsynth
|
||||
#endif // __cplusplus
|
||||
|
||||
#endif // __WINDOWS2LINUX_H__
|
||||
@@ -1,35 +0,0 @@
|
||||
/*
|
||||
* Work around broken floating point limits on some systems.
|
||||
*
|
||||
* This file is part of FFmpeg.
|
||||
*
|
||||
* FFmpeg is free software; you can redistribute it and/or
|
||||
* modify it under the terms of the GNU Lesser General Public
|
||||
* License as published by the Free Software Foundation; either
|
||||
* version 2.1 of the License, or (at your option) any later version.
|
||||
*
|
||||
* FFmpeg is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
* Lesser General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU Lesser General Public
|
||||
* License along with FFmpeg; if not, write to the Free Software
|
||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
|
||||
*/
|
||||
|
||||
#include_next <float.h>
|
||||
|
||||
#ifdef FLT_MAX
|
||||
#undef FLT_MAX
|
||||
#define FLT_MAX 3.40282346638528859812e+38F
|
||||
|
||||
#undef FLT_MIN
|
||||
#define FLT_MIN 1.17549435082228750797e-38F
|
||||
|
||||
#undef DBL_MAX
|
||||
#define DBL_MAX ((double)1.79769313486231570815e+308L)
|
||||
|
||||
#undef DBL_MIN
|
||||
#define DBL_MIN ((double)2.22507385850720138309e-308L)
|
||||
#endif
|
||||
@@ -1,22 +0,0 @@
|
||||
/*
|
||||
* Work around broken floating point limits on some systems.
|
||||
*
|
||||
* This file is part of FFmpeg.
|
||||
*
|
||||
* FFmpeg is free software; you can redistribute it and/or
|
||||
* modify it under the terms of the GNU Lesser General Public
|
||||
* License as published by the Free Software Foundation; either
|
||||
* version 2.1 of the License, or (at your option) any later version.
|
||||
*
|
||||
* FFmpeg is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
* Lesser General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU Lesser General Public
|
||||
* License along with FFmpeg; if not, write to the Free Software
|
||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
|
||||
*/
|
||||
|
||||
#include_next <limits.h>
|
||||
#include <float.h>
|
||||
@@ -38,6 +38,8 @@ static int optind = 1;
|
||||
static int optopt;
|
||||
static char *optarg;
|
||||
|
||||
#undef fprintf
|
||||
|
||||
static int getopt(int argc, char *argv[], char *opts)
|
||||
{
|
||||
static int sp = 1;
|
||||
@@ -54,7 +56,7 @@ static int getopt(int argc, char *argv[], char *opts)
|
||||
}
|
||||
}
|
||||
optopt = c = argv[optind][sp];
|
||||
if (c == ':' || !(cp = strchr(opts, c))) {
|
||||
if (c == ':' || (cp = strchr(opts, c)) == NULL) {
|
||||
fprintf(stderr, ": illegal option -- %c\n", c);
|
||||
if (argv[optind][++sp] == '\0') {
|
||||
optind++;
|
||||
|
||||
@@ -1,24 +1,3 @@
|
||||
/*
|
||||
* This file is part of FFmpeg.
|
||||
*
|
||||
* FFmpeg is free software; you can redistribute it and/or
|
||||
* modify it under the terms of the GNU Lesser General Public
|
||||
* License as published by the Free Software Foundation; either
|
||||
* version 2.1 of the License, or (at your option) any later version.
|
||||
*
|
||||
* FFmpeg is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
* Lesser General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU Lesser General Public
|
||||
* License along with FFmpeg; if not, write to the Free Software
|
||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
|
||||
*/
|
||||
|
||||
#ifndef FFMPEG_COMPAT_TMS470_MATH_H
|
||||
#define FFMPEG_COMPAT_TMS470_MATH_H
|
||||
|
||||
#include_next <math.h>
|
||||
|
||||
#undef INFINITY
|
||||
@@ -26,5 +5,3 @@
|
||||
|
||||
#define INFINITY (*(const float*)((const unsigned []){ 0x7f800000 }))
|
||||
#define NAN (*(const float*)((const unsigned []){ 0x7fc00000 }))
|
||||
|
||||
#endif /* FFMPEG_COMPAT_TMS470_MATH_H */
|
||||
|
||||
@@ -24,6 +24,3 @@
|
||||
#if !defined(va_copy) && defined(_MSC_VER)
|
||||
#define va_copy(dst, src) ((dst) = (src))
|
||||
#endif
|
||||
#if !defined(va_copy) && defined(__GNUC__) && __GNUC__ < 3
|
||||
#define va_copy(dst, src) __va_copy(dst, src)
|
||||
#endif
|
||||
|
||||
@@ -1,132 +0,0 @@
|
||||
#!/bin/sh
|
||||
|
||||
# Copyright (c) 2013, Derek Buitenhuis
|
||||
#
|
||||
# Permission to use, copy, modify, and/or distribute this software for any
|
||||
# purpose with or without fee is hereby granted, provided that the above
|
||||
# copyright notice and this permission notice appear in all copies.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
|
||||
# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
|
||||
# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
|
||||
# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
|
||||
# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
|
||||
# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
|
||||
# OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
|
||||
|
||||
# mktemp isn't POSIX, so supply an implementation
|
||||
mktemp() {
|
||||
echo "${2%%XXX*}.${HOSTNAME}.${UID}.$$"
|
||||
}
|
||||
|
||||
if [ $# -lt 2 ]; then
|
||||
echo "Usage: makedef <version_script> <objects>" >&2
|
||||
exit 0
|
||||
fi
|
||||
|
||||
vscript=$1
|
||||
shift
|
||||
|
||||
if [ ! -f "$vscript" ]; then
|
||||
echo "Version script does not exist" >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
for object in "$@"; do
|
||||
if [ ! -f "$object" ]; then
|
||||
echo "Object does not exist: ${object}" >&2
|
||||
exit 1
|
||||
fi
|
||||
done
|
||||
|
||||
# Create a lib temporarily to dump symbols from.
|
||||
# It's just much easier to do it this way
|
||||
libname=$(mktemp -u "library").lib
|
||||
|
||||
trap 'rm -f -- $libname' EXIT
|
||||
|
||||
lib -out:${libname} $@ >/dev/null
|
||||
if [ $? != 0 ]; then
|
||||
echo "Could not create temporary library." >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
IFS='
|
||||
'
|
||||
|
||||
# Determine if we're building for x86 or x86_64 and
|
||||
# set the symbol prefix accordingly.
|
||||
prefix=""
|
||||
arch=$(dumpbin -headers ${libname} |
|
||||
tr '\t' ' ' |
|
||||
grep '^ \+.\+machine \+(.\+)' |
|
||||
head -1 |
|
||||
sed -e 's/^ \{1,\}.\{1,\} \{1,\}machine \{1,\}(\(...\)).*/\1/')
|
||||
|
||||
if [ "${arch}" = "x86" ]; then
|
||||
prefix="_"
|
||||
else
|
||||
if [ "${arch}" != "ARM" ] && [ "${arch}" != "x64" ]; then
|
||||
echo "Unknown machine type." >&2
|
||||
exit 1
|
||||
fi
|
||||
fi
|
||||
|
||||
started=0
|
||||
regex="none"
|
||||
|
||||
for line in $(cat ${vscript} | tr '\t' ' '); do
|
||||
# We only care about global symbols
|
||||
echo "${line}" | grep -q '^ \+global:'
|
||||
if [ $? = 0 ]; then
|
||||
started=1
|
||||
line=$(echo "${line}" | sed -e 's/^ \{1,\}global: *//')
|
||||
else
|
||||
echo "${line}" | grep -q '^ \+local:'
|
||||
if [ $? = 0 ]; then
|
||||
started=0
|
||||
fi
|
||||
fi
|
||||
|
||||
if [ ${started} = 0 ]; then
|
||||
continue
|
||||
fi
|
||||
|
||||
# Handle multiple symbols on one line
|
||||
IFS=';'
|
||||
|
||||
# Work around stupid expansion to filenames
|
||||
line=$(echo "${line}" | sed -e 's/\*/.\\+/g')
|
||||
for exp in ${line}; do
|
||||
# Remove leading and trailing whitespace
|
||||
exp=$(echo "${exp}" | sed -e 's/^ *//' -e 's/ *$//')
|
||||
|
||||
if [ "${regex}" = "none" ]; then
|
||||
regex="${exp}"
|
||||
else
|
||||
regex="${regex};${exp}"
|
||||
fi
|
||||
done
|
||||
|
||||
IFS='
|
||||
'
|
||||
done
|
||||
|
||||
dump=$(dumpbin -linkermember:1 ${libname})
|
||||
|
||||
rm ${libname}
|
||||
|
||||
IFS=';'
|
||||
list=""
|
||||
for exp in ${regex}; do
|
||||
list="${list}"'
|
||||
'$(echo "${dump}" |
|
||||
sed -e '/public symbols/,$!d' -e '/^ \{1,\}Summary/,$d' -e "s/ \{1,\}${prefix}/ /" -e 's/ \{1,\}/ /g' |
|
||||
tail -n +2 |
|
||||
cut -d' ' -f3 |
|
||||
grep "^${exp}" |
|
||||
sed -e 's/^/ /')
|
||||
done
|
||||
|
||||
echo "EXPORTS"
|
||||
echo "${list}" | sort | uniq | tail -n +2
|
||||
@@ -1,9 +0,0 @@
|
||||
#!/bin/sh
|
||||
|
||||
LINK_EXE_PATH=$(dirname "$(command -v cl)")/link
|
||||
if [ -x "$LINK_EXE_PATH" ]; then
|
||||
"$LINK_EXE_PATH" $@
|
||||
else
|
||||
link $@
|
||||
fi
|
||||
exit $?
|
||||
doc/APIchanges: 915 lines changed (diff suppressed because it is too large)

doc/Doxyfile: 22 lines changed
@@ -31,9 +31,9 @@ PROJECT_NAME = FFmpeg
|
||||
# This could be handy for archiving the generated documentation or
|
||||
# if some version control system is used.
|
||||
|
||||
PROJECT_NUMBER = 2.8.7
|
||||
PROJECT_NUMBER = 1.2.2
|
||||
|
||||
# With the PROJECT_LOGO tag one can specify a logo or icon that is included
|
||||
# With the PROJECT_LOGO tag one can specify an logo or icon that is included
|
||||
# in the documentation. The maximum height of the logo should not exceed 55
|
||||
# pixels and the maximum width should not exceed 200 pixels. Doxygen will
|
||||
# copy the logo to the output directory.
|
||||
@@ -277,7 +277,7 @@ SUBGROUPING = YES
|
||||
# be useful for C code in case the coding convention dictates that all compound
|
||||
# types are typedef'ed and only the typedef is referenced, never the tag name.
|
||||
|
||||
TYPEDEF_HIDES_STRUCT = YES
|
||||
TYPEDEF_HIDES_STRUCT = NO
|
||||
|
||||
# The SYMBOL_CACHE_SIZE determines the size of the internal cache use to
|
||||
# determine which symbols to keep in memory and which to flush to disk.
|
||||
@@ -409,7 +409,7 @@ INLINE_INFO = YES
|
||||
# alphabetically by member name. If set to NO the members will appear in
|
||||
# declaration order.
|
||||
|
||||
SORT_MEMBER_DOCS = NO
|
||||
SORT_MEMBER_DOCS = YES
|
||||
|
||||
# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the
|
||||
# brief documentation of file, namespace and class members alphabetically
|
||||
@@ -709,7 +709,7 @@ INLINE_SOURCES = NO
|
||||
# doxygen to hide any special comment blocks from generated source code
|
||||
# fragments. Normal C and C++ comments will always remain visible.
|
||||
|
||||
STRIP_CODE_COMMENTS = NO
|
||||
STRIP_CODE_COMMENTS = YES
|
||||
|
||||
# If the REFERENCED_BY_RELATION tag is set to YES
|
||||
# then for each documented function all documented
|
||||
@@ -759,7 +759,7 @@ ALPHABETICAL_INDEX = YES
|
||||
# the COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns
|
||||
# in which this list will be split (can be a number in the range [1..20])
|
||||
|
||||
COLS_IN_ALPHA_INDEX = 5
|
||||
COLS_IN_ALPHA_INDEX = 2
|
||||
|
||||
# In case all classes in a project start with a common prefix, all
|
||||
# classes will be put under the same header in the alphabetical index.
|
||||
@@ -793,13 +793,13 @@ HTML_FILE_EXTENSION = .html
|
||||
# each generated HTML page. If it is left blank doxygen will generate a
|
||||
# standard header.
|
||||
|
||||
HTML_HEADER =
|
||||
#HTML_HEADER = doc/doxy/header.html
|
||||
|
||||
# The HTML_FOOTER tag can be used to specify a personal HTML footer for
|
||||
# each generated HTML page. If it is left blank doxygen will generate a
|
||||
# standard footer.
|
||||
|
||||
HTML_FOOTER =
|
||||
#HTML_FOOTER = doc/doxy/footer.html
|
||||
|
||||
# The HTML_STYLESHEET tag can be used to specify a user-defined cascading
|
||||
# style sheet that is used by each HTML page. It can be used to
|
||||
@@ -808,7 +808,7 @@ HTML_FOOTER =
|
||||
# the style sheet file to the HTML output directory, so don't put your own
|
||||
# stylesheet in the HTML output directory as well, or it will be erased!
|
||||
|
||||
HTML_STYLESHEET =
|
||||
#HTML_STYLESHEET = doc/doxy/doxy_stylesheet.css
|
||||
|
||||
# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output.
|
||||
# Doxygen will adjust the colors in the stylesheet and background images
|
||||
@@ -1056,7 +1056,7 @@ FORMULA_TRANSPARENT = YES
|
||||
# typically be disabled. For large projects the javascript based search engine
|
||||
# can be slow, then enabling SERVER_BASED_SEARCH may provide a better solution.
|
||||
|
||||
SEARCHENGINE = YES
|
||||
SEARCHENGINE = NO
|
||||
|
||||
# When the SERVER_BASED_SEARCH tag is enabled the search engine will be
|
||||
# implemented using a PHP enabled web server instead of at the web client
|
||||
@@ -1359,8 +1359,6 @@ PREDEFINED = "__attribute__(x)=" \
|
||||
"DECLARE_ALIGNED(a,t,n)=t n" \
|
||||
"offsetof(x,y)=0x42" \
|
||||
av_alloc_size \
|
||||
AV_GCC_VERSION_AT_LEAST(x,y)=1 \
|
||||
__GNUC__=1 \
|
||||
|
||||
# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then
|
||||
# this tag can be used to specify a list of macro names that should be expanded.
|
||||
|
||||
doc/Makefile: 113 lines changed
@@ -6,6 +6,7 @@ LIBRARIES-$(CONFIG_AVFORMAT) += libavformat
|
||||
LIBRARIES-$(CONFIG_AVDEVICE) += libavdevice
|
||||
LIBRARIES-$(CONFIG_AVFILTER) += libavfilter
|
||||
|
||||
COMPONENTS-yes = $(PROGS-yes)
|
||||
COMPONENTS-$(CONFIG_AVUTIL) += ffmpeg-utils
|
||||
COMPONENTS-$(CONFIG_SWSCALE) += ffmpeg-scaler
|
||||
COMPONENTS-$(CONFIG_SWRESAMPLE) += ffmpeg-resampler
|
||||
@@ -14,11 +15,9 @@ COMPONENTS-$(CONFIG_AVFORMAT) += ffmpeg-formats ffmpeg-protocols
|
||||
COMPONENTS-$(CONFIG_AVDEVICE) += ffmpeg-devices
|
||||
COMPONENTS-$(CONFIG_AVFILTER) += ffmpeg-filters
|
||||
|
||||
MANPAGES1 = $(AVPROGS-yes:%=doc/%.1) $(AVPROGS-yes:%=doc/%-all.1) $(COMPONENTS-yes:%=doc/%.1)
|
||||
MANPAGES3 = $(LIBRARIES-yes:%=doc/%.3)
|
||||
MANPAGES = $(MANPAGES1) $(MANPAGES3)
|
||||
PODPAGES = $(AVPROGS-yes:%=doc/%.pod) $(AVPROGS-yes:%=doc/%-all.pod) $(COMPONENTS-yes:%=doc/%.pod) $(LIBRARIES-yes:%=doc/%.pod)
|
||||
HTMLPAGES = $(AVPROGS-yes:%=doc/%.html) $(AVPROGS-yes:%=doc/%-all.html) $(COMPONENTS-yes:%=doc/%.html) $(LIBRARIES-yes:%=doc/%.html) \
|
||||
MANPAGES = $(COMPONENTS-yes:%=doc/%.1) $(LIBRARIES-yes:%=doc/%.3)
|
||||
PODPAGES = $(COMPONENTS-yes:%=doc/%.pod) $(LIBRARIES-yes:%=doc/%.pod)
|
||||
HTMLPAGES = $(COMPONENTS-yes:%=doc/%.html) $(LIBRARIES-yes:%=doc/%.html) \
|
||||
doc/developer.html \
|
||||
doc/faq.html \
|
||||
doc/fate.html \
|
||||
@@ -36,30 +35,6 @@ DOCS-$(CONFIG_MANPAGES) += $(MANPAGES)
|
||||
DOCS-$(CONFIG_TXTPAGES) += $(TXTPAGES)
|
||||
DOCS = $(DOCS-yes)
|
||||
|
||||
DOC_EXAMPLES-$(CONFIG_AVIO_DIR_CMD_EXAMPLE) += avio_dir_cmd
|
||||
DOC_EXAMPLES-$(CONFIG_AVIO_READING_EXAMPLE) += avio_reading
|
||||
DOC_EXAMPLES-$(CONFIG_AVCODEC_EXAMPLE) += avcodec
|
||||
DOC_EXAMPLES-$(CONFIG_DECODING_ENCODING_EXAMPLE) += decoding_encoding
|
||||
DOC_EXAMPLES-$(CONFIG_DEMUXING_DECODING_EXAMPLE) += demuxing_decoding
|
||||
DOC_EXAMPLES-$(CONFIG_EXTRACT_MVS_EXAMPLE) += extract_mvs
|
||||
DOC_EXAMPLES-$(CONFIG_FILTER_AUDIO_EXAMPLE) += filter_audio
|
||||
DOC_EXAMPLES-$(CONFIG_FILTERING_AUDIO_EXAMPLE) += filtering_audio
|
||||
DOC_EXAMPLES-$(CONFIG_FILTERING_VIDEO_EXAMPLE) += filtering_video
|
||||
DOC_EXAMPLES-$(CONFIG_METADATA_EXAMPLE) += metadata
|
||||
DOC_EXAMPLES-$(CONFIG_MUXING_EXAMPLE) += muxing
|
||||
DOC_EXAMPLES-$(CONFIG_QSVDEC_EXAMPLE) += qsvdec
|
||||
DOC_EXAMPLES-$(CONFIG_REMUXING_EXAMPLE) += remuxing
|
||||
DOC_EXAMPLES-$(CONFIG_RESAMPLING_AUDIO_EXAMPLE) += resampling_audio
|
||||
DOC_EXAMPLES-$(CONFIG_SCALING_VIDEO_EXAMPLE) += scaling_video
|
||||
DOC_EXAMPLES-$(CONFIG_TRANSCODE_AAC_EXAMPLE) += transcode_aac
|
||||
DOC_EXAMPLES-$(CONFIG_TRANSCODING_EXAMPLE) += transcoding
|
||||
ALL_DOC_EXAMPLES_LIST = $(DOC_EXAMPLES-) $(DOC_EXAMPLES-yes)
|
||||
|
||||
DOC_EXAMPLES := $(DOC_EXAMPLES-yes:%=doc/examples/%$(PROGSSUF)$(EXESUF))
|
||||
ALL_DOC_EXAMPLES := $(ALL_DOC_EXAMPLES_LIST:%=doc/examples/%$(PROGSSUF)$(EXESUF))
|
||||
ALL_DOC_EXAMPLES_G := $(ALL_DOC_EXAMPLES_LIST:%=doc/examples/%$(PROGSSUF)_g$(EXESUF))
|
||||
PROGS += $(DOC_EXAMPLES)
|
||||
|
||||
all-$(CONFIG_DOC): doc
|
||||
|
||||
doc: documentation
|
||||
@@ -67,9 +42,7 @@ doc: documentation
|
||||
apidoc: doc/doxy/html
|
||||
documentation: $(DOCS)
|
||||
|
||||
examples: $(DOC_EXAMPLES)
|
||||
|
||||
TEXIDEP = perl $(SRC_PATH)/doc/texidep.pl $(SRC_PATH) $< $@ >$(@:%=%.d)
|
||||
TEXIDEP = awk '/^@(verbatim)?include/ { printf "$@: $(@D)/%s\n", $$2 }' <$< >$(@:%=%.d)
|
||||
|
||||
doc/%.txt: TAG = TXT
|
||||
doc/%.txt: doc/%.texi
|
||||
@@ -84,99 +57,45 @@ $(GENTEXI): doc/avoptions_%.texi: doc/print_options$(HOSTEXESUF)
|
||||
$(M)doc/print_options $* > $@
|
||||
|
||||
doc/%.html: TAG = HTML
|
||||
doc/%-all.html: TAG = HTML
|
||||
|
||||
ifdef HAVE_MAKEINFO_HTML
|
||||
doc/%.html: doc/%.texi $(SRC_PATH)/doc/t2h.pm $(GENTEXI)
|
||||
$(Q)$(TEXIDEP)
|
||||
$(M)makeinfo --html -I doc --no-split -D config-not-all --init-file=$(SRC_PATH)/doc/t2h.pm --output $@ $<
|
||||
|
||||
doc/%-all.html: doc/%.texi $(SRC_PATH)/doc/t2h.pm $(GENTEXI)
|
||||
$(Q)$(TEXIDEP)
|
||||
$(M)makeinfo --html -I doc --no-split -D config-all --init-file=$(SRC_PATH)/doc/t2h.pm --output $@ $<
|
||||
else
|
||||
doc/%.html: doc/%.texi $(SRC_PATH)/doc/t2h.init $(GENTEXI)
|
||||
$(Q)$(TEXIDEP)
|
||||
$(M)texi2html -I doc -monolithic --D=config-not-all --init-file $(SRC_PATH)/doc/t2h.init --output $@ $<
|
||||
|
||||
doc/%-all.html: doc/%.texi $(SRC_PATH)/doc/t2h.init $(GENTEXI)
|
||||
$(Q)$(TEXIDEP)
|
||||
$(M)texi2html -I doc -monolithic --D=config-all --init-file $(SRC_PATH)/doc/t2h.init --output $@ $<
|
||||
endif
|
||||
$(M)texi2html -I doc -monolithic --init-file $(SRC_PATH)/doc/t2h.init --output $@ $<
|
||||
|
||||
doc/%.pod: TAG = POD
|
||||
doc/%.pod: doc/%.texi $(SRC_PATH)/doc/texi2pod.pl $(GENTEXI)
|
||||
$(Q)$(TEXIDEP)
|
||||
$(M)perl $(SRC_PATH)/doc/texi2pod.pl -Dconfig-not-all=yes -Idoc $< $@
|
||||
|
||||
doc/%-all.pod: TAG = POD
|
||||
doc/%-all.pod: doc/%.texi $(SRC_PATH)/doc/texi2pod.pl $(GENTEXI)
|
||||
$(Q)$(TEXIDEP)
|
||||
$(M)perl $(SRC_PATH)/doc/texi2pod.pl -Dconfig-all=yes -Idoc $< $@
|
||||
$(M)perl $(SRC_PATH)/doc/texi2pod.pl -Idoc $< $@
|
||||
|
||||
doc/%.1 doc/%.3: TAG = MAN
|
||||
doc/%.1: doc/%.pod $(GENTEXI)
|
||||
$(M)pod2man --section=1 --center=" " --release=" " --date=" " $< > $@
|
||||
$(M)pod2man --section=1 --center=" " --release=" " $< > $@
|
||||
doc/%.3: doc/%.pod $(GENTEXI)
|
||||
$(M)pod2man --section=3 --center=" " --release=" " --date=" " $< > $@
|
||||
$(M)pod2man --section=3 --center=" " --release=" " $< > $@
|
||||
|
||||
$(DOCS) doc/doxy/html: | doc/
|
||||
$(DOC_EXAMPLES:%$(EXESUF)=%.o): | doc/examples
|
||||
OBJDIRS += doc/examples
|
||||
|
||||
DOXY_INPUT = $(addprefix $(SRC_PATH)/, $(INSTHEADERS) $(DOC_EXAMPLES:%$(EXESUF)=%.c) $(LIB_EXAMPLES:%$(EXESUF)=%.c))
|
||||
|
||||
doc/doxy/html: TAG = DOXY
|
||||
doc/doxy/html: $(SRC_PATH)/doc/Doxyfile $(SRC_PATH)/doc/doxy-wrapper.sh $(DOXY_INPUT)
|
||||
$(M)$(SRC_PATH)/doc/doxy-wrapper.sh $(SRC_PATH) $< $(DOXYGEN) $(DOXY_INPUT)
|
||||
|
||||
install-doc: install-html install-man
|
||||
|
||||
install-html:
|
||||
doc/doxy/html: $(SRC_PATH)/doc/Doxyfile $(INSTHEADERS)
|
||||
$(M)$(SRC_PATH)/doc/doxy-wrapper.sh $(SRC_PATH) $^
|
||||
|
||||
install-man:
|
||||
|
||||
ifdef CONFIG_HTMLPAGES
|
||||
install-progs-$(CONFIG_DOC): install-html
|
||||
|
||||
install-html: $(HTMLPAGES)
|
||||
$(Q)mkdir -p "$(DOCDIR)"
|
||||
$(INSTALL) -m 644 $(HTMLPAGES) "$(DOCDIR)"
|
||||
endif
|
||||
|
||||
ifdef CONFIG_MANPAGES
|
||||
install-progs-$(CONFIG_DOC): install-man
|
||||
|
||||
install-man: $(MANPAGES)
|
||||
$(Q)mkdir -p "$(MANDIR)/man1"
|
||||
$(INSTALL) -m 644 $(MANPAGES1) "$(MANDIR)/man1"
|
||||
$(Q)mkdir -p "$(MANDIR)/man3"
|
||||
$(INSTALL) -m 644 $(MANPAGES3) "$(MANDIR)/man3"
|
||||
$(INSTALL) -m 644 $(MANPAGES) "$(MANDIR)/man1"
|
||||
endif
|
||||
|
||||
uninstall: uninstall-doc
|
||||
|
||||
uninstall-doc: uninstall-html uninstall-man
|
||||
|
||||
uninstall-html:
|
||||
$(RM) -r "$(DOCDIR)"
|
||||
uninstall: uninstall-man
|
||||
|
||||
uninstall-man:
|
||||
$(RM) $(addprefix "$(MANDIR)/man1/",$(AVPROGS-yes:%=%.1) $(AVPROGS-yes:%=%-all.1) $(COMPONENTS-yes:%=%.1))
|
||||
$(RM) $(addprefix "$(MANDIR)/man3/",$(LIBRARIES-yes:%=%.3))
|
||||
$(RM) $(addprefix "$(MANDIR)/man1/",$(ALLMANPAGES))
|
||||
|
||||
clean:: docclean
|
||||
|
||||
distclean:: docclean
|
||||
$(RM) doc/config.texi
|
||||
|
||||
examplesclean:
|
||||
$(RM) $(ALL_DOC_EXAMPLES) $(ALL_DOC_EXAMPLES_G)
|
||||
$(RM) $(CLEANSUFFIXES:%=doc/examples/%)
|
||||
|
||||
docclean: examplesclean
|
||||
$(RM) $(CLEANSUFFIXES:%=doc/%)
|
||||
$(RM) $(TXTPAGES) doc/*.html doc/*.pod doc/*.1 doc/*.3 doc/avoptions_*.texi
|
||||
docclean:
|
||||
$(RM) $(TXTPAGES) doc/*.html doc/*.pod doc/*.1 doc/*.3 $(CLEANSUFFIXES:%=doc/%) doc/avoptions_*.texi
|
||||
$(RM) -r doc/doxy/html
|
||||
|
||||
-include $(wildcard $(DOCS:%=%.d))
|
||||
|
||||
doc/RELEASE_NOTES: new file, 16 lines
@@ -0,0 +1,16 @@
|
||||
Release Notes
|
||||
=============
|
||||
|
||||
* 1.2 "Magic" March, 2013
|
||||
|
||||
|
||||
General notes
|
||||
-------------
|
||||
See the Changelog file for a list of significant changes. Note, there
|
||||
are many more new features and bugfixes than whats listed there.
|
||||
|
||||
Bugreports against FFmpeg git master or the most recent FFmpeg release are
|
||||
accepted. If you are experiencing issues with any formally released version of
|
||||
FFmpeg, please try git master to check if the issue still exists. If it does,
|
||||
make your report against the development code following the usual bug reporting
|
||||
guidelines.
|
||||
doc/avtools-common-opts.texi: new file, 211 lines
@@ -0,0 +1,211 @@
|
||||
All the numerical options, if not specified otherwise, accept in input
|
||||
a string representing a number, which may contain one of the
|
||||
SI unit prefixes, for example 'K', 'M', 'G'.
|
||||
If 'i' is appended after the prefix, binary prefixes are used,
|
||||
which are based on powers of 1024 instead of powers of 1000.
|
||||
The 'B' postfix multiplies the value by 8, and can be
|
||||
appended after a unit prefix or used alone. This allows using for
|
||||
example 'KB', 'MiB', 'G' and 'B' as number postfix.
|
||||
|
||||
Options which do not take arguments are boolean options, and set the
|
||||
corresponding value to true. They can be set to false by prefixing
|
||||
with "no" the option name, for example using "-nofoo" in the
|
||||
command line will set to false the boolean option with name "foo".
|
||||
|
||||
@anchor{Stream specifiers}
|
||||
@section Stream specifiers
|
||||
Some options are applied per-stream, e.g. bitrate or codec. Stream specifiers
|
||||
are used to precisely specify which stream(s) does a given option belong to.
|
||||
|
||||
A stream specifier is a string generally appended to the option name and
|
||||
separated from it by a colon. E.g. @code{-codec:a:1 ac3} option contains
|
||||
@code{a:1} stream specifier, which matches the second audio stream. Therefore it
|
||||
would select the ac3 codec for the second audio stream.
|
||||
|
||||
A stream specifier can match several streams, the option is then applied to all
|
||||
of them. E.g. the stream specifier in @code{-b:a 128k} matches all audio
|
||||
streams.
|
||||
|
||||
An empty stream specifier matches all streams, for example @code{-codec copy}
|
||||
or @code{-codec: copy} would copy all the streams without reencoding.
|
||||
|
||||
Possible forms of stream specifiers are:
|
||||
@table @option
|
||||
@item @var{stream_index}
|
||||
Matches the stream with this index. E.g. @code{-threads:1 4} would set the
|
||||
thread count for the second stream to 4.
|
||||
@item @var{stream_type}[:@var{stream_index}]
|
||||
@var{stream_type} is one of: 'v' for video, 'a' for audio, 's' for subtitle,
|
||||
'd' for data and 't' for attachments. If @var{stream_index} is given, then
|
||||
matches stream number @var{stream_index} of this type. Otherwise matches all
|
||||
streams of this type.
|
||||
@item p:@var{program_id}[:@var{stream_index}]
|
||||
If @var{stream_index} is given, then matches stream number @var{stream_index} in
|
||||
program with id @var{program_id}. Otherwise matches all streams in this program.
|
||||
@item #@var{stream_id}
|
||||
Matches the stream by format-specific ID.
|
||||
@end table
|
||||
|
||||
@section Generic options
|
||||
|
||||
These options are shared amongst the av* tools.
|
||||
|
||||
@table @option
|
||||
|
||||
@item -L
|
||||
Show license.
|
||||
|
||||
@item -h, -?, -help, --help [@var{arg}]
|
||||
Show help. An optional parameter may be specified to print help about a specific
|
||||
item.
|
||||
|
||||
Possible values of @var{arg} are:
|
||||
@table @option
|
||||
@item decoder=@var{decoder_name}
|
||||
Print detailed information about the decoder named @var{decoder_name}. Use the
|
||||
@option{-decoders} option to get a list of all decoders.
|
||||
|
||||
@item encoder=@var{encoder_name}
|
||||
Print detailed information about the encoder named @var{encoder_name}. Use the
|
||||
@option{-encoders} option to get a list of all encoders.
|
||||
|
||||
@item demuxer=@var{demuxer_name}
|
||||
Print detailed information about the demuxer named @var{demuxer_name}. Use the
|
||||
@option{-formats} option to get a list of all demuxers and muxers.
|
||||
|
||||
@item muxer=@var{muxer_name}
|
||||
Print detailed information about the muxer named @var{muxer_name}. Use the
|
||||
@option{-formats} option to get a list of all muxers and demuxers.
|
||||
|
||||
@end table
|
||||
|
||||
@item -version
|
||||
Show version.
|
||||
|
||||
@item -formats
|
||||
Show available formats.
|
||||
|
||||
The fields preceding the format names have the following meanings:
|
||||
@table @samp
|
||||
@item D
|
||||
Decoding available
|
||||
@item E
|
||||
Encoding available
|
||||
@end table
|
||||
|
||||
@item -codecs
|
||||
Show all codecs known to libavcodec.
|
||||
|
||||
Note that the term 'codec' is used throughout this documentation as a shortcut
|
||||
for what is more correctly called a media bitstream format.
|
||||
|
||||
@item -decoders
|
||||
Show available decoders.
|
||||
|
||||
@item -encoders
|
||||
Show all available encoders.
|
||||
|
||||
@item -bsfs
|
||||
Show available bitstream filters.
|
||||
|
||||
@item -protocols
|
||||
Show available protocols.
|
||||
|
||||
@item -filters
|
||||
Show available libavfilter filters.
|
||||
|
||||
@item -pix_fmts
|
||||
Show available pixel formats.
|
||||
|
||||
@item -sample_fmts
|
||||
Show available sample formats.
|
||||
|
||||
@item -layouts
|
||||
Show channel names and standard channel layouts.
|
||||
|
||||
@item -loglevel @var{loglevel} | -v @var{loglevel}
|
||||
Set the logging level used by the library.
|
||||
@var{loglevel} is a number or a string containing one of the following values:
|
||||
@table @samp
|
||||
@item quiet
|
||||
@item panic
|
||||
@item fatal
|
||||
@item error
|
||||
@item warning
|
||||
@item info
|
||||
@item verbose
|
||||
@item debug
|
||||
@end table
|
||||
|
||||
By default the program logs to stderr, if coloring is supported by the
|
||||
terminal, colors are used to mark errors and warnings. Log coloring
|
||||
can be disabled setting the environment variable
|
||||
@env{AV_LOG_FORCE_NOCOLOR} or @env{NO_COLOR}, or can be forced setting
|
||||
the environment variable @env{AV_LOG_FORCE_COLOR}.
|
||||
The use of the environment variable @env{NO_COLOR} is deprecated and
|
||||
will be dropped in a following FFmpeg version.
|
||||
|
||||
@item -report
|
||||
Dump full command line and console output to a file named
|
||||
@code{@var{program}-@var{YYYYMMDD}-@var{HHMMSS}.log} in the current
|
||||
directory.
|
||||
This file can be useful for bug reports.
|
||||
It also implies @code{-loglevel verbose}.
|
||||
|
||||
Setting the environment variable @code{FFREPORT} to any value has the
|
||||
same effect. If the value is a ':'-separated key=value sequence, these
|
||||
options will affect the report; options values must be escaped if they
|
||||
contain special characters or the options delimiter ':' (see the
|
||||
``Quoting and escaping'' section in the ffmpeg-utils manual). The
|
||||
following option is recognized:
|
||||
@table @option
|
||||
@item file
|
||||
set the file name to use for the report; @code{%p} is expanded to the name
|
||||
of the program, @code{%t} is expanded to a timestamp, @code{%%} is expanded
|
||||
to a plain @code{%}
|
||||
@end table
|
||||
|
||||
Errors in parsing the environment variable are not fatal, and will not
|
||||
appear in the report.
|
||||
|
||||
@item -cpuflags flags (@emph{global})
|
||||
Allows setting and clearing cpu flags. This option is intended
|
||||
for testing. Do not use it unless you know what you're doing.
|
||||
@example
|
||||
ffmpeg -cpuflags -sse+mmx ...
|
||||
ffmpeg -cpuflags mmx ...
|
||||
ffmpeg -cpuflags 0 ...
|
||||
@end example
|
||||
|
||||
@end table
|
||||
|
||||
@section AVOptions
|
||||
|
||||
These options are provided directly by the libavformat, libavdevice and
|
||||
libavcodec libraries. To see the list of available AVOptions, use the
|
||||
@option{-help} option. They are separated into two categories:
|
||||
@table @option
|
||||
@item generic
|
||||
These options can be set for any container, codec or device. Generic options
|
||||
are listed under AVFormatContext options for containers/devices and under
|
||||
AVCodecContext options for codecs.
|
||||
@item private
|
||||
These options are specific to the given container, device or codec. Private
|
||||
options are listed under their corresponding containers/devices/codecs.
|
||||
@end table
|
||||
|
||||
For example to write an ID3v2.3 header instead of a default ID3v2.4 to
|
||||
an MP3 file, use the @option{id3v2_version} private option of the MP3
|
||||
muxer:
|
||||
@example
|
||||
ffmpeg -i input.flac -id3v2_version 3 out.mp3
|
||||
@end example
|
||||
|
||||
All codec AVOptions are obviously per-stream, so the chapter on stream
|
||||
specifiers applies to them
|
||||
|
||||
Note @option{-nooption} syntax cannot be used for boolean AVOptions,
|
||||
use @option{-option 0}/@option{-option 1}.
|
||||
|
||||
Note2 old undocumented way of specifying per-stream AVOptions by prepending
|
||||
v/a/s to the options name is now obsolete and will be removed soon.
|
||||
doc/avutil.txt: new file, 36 lines
@@ -0,0 +1,36 @@
|
||||
AVUtil
|
||||
======
|
||||
libavutil is a small lightweight library of generally useful functions.
|
||||
It is not a library for code needed by both libavcodec and libavformat.
|
||||
|
||||
|
||||
Overview:
|
||||
=========
|
||||
adler32.c adler32 checksum
|
||||
aes.c AES encryption and decryption
|
||||
fifo.c resizeable first in first out buffer
|
||||
intfloat_readwrite.c portable reading and writing of floating point values
|
||||
log.c "printf" with context and level
|
||||
md5.c MD5 Message-Digest Algorithm
|
||||
rational.c code to perform exact calculations with rational numbers
|
||||
tree.c generic AVL tree
|
||||
crc.c generic CRC checksumming code
|
||||
integer.c 128bit integer math
|
||||
lls.c
|
||||
mathematics.c greatest common divisor, integer sqrt, integer log2, ...
|
||||
mem.c memory allocation routines with guaranteed alignment
|
||||
|
||||
Headers:
|
||||
bswap.h big/little/native-endian conversion code
|
||||
x86_cpu.h a few useful macros for unifying x86-64 and x86-32 code
|
||||
avutil.h
|
||||
common.h
|
||||
intreadwrite.h reading and writing of unaligned big/little/native-endian integers
|
||||
|
||||
|
||||
Goals:
|
||||
======
|
||||
* Modular (few interdependencies and the possibility of disabling individual parts during ./configure)
|
||||
* Small (source and object)
|
||||
* Efficient (low CPU and memory usage)
|
||||
* Useful (avoid useless features almost no one needs)
|
||||
@@ -13,59 +13,13 @@ bitstream filter using the option @code{--disable-bsf=BSF}.
|
||||
The option @code{-bsfs} of the ff* tools will display the list of
|
||||
all the supported bitstream filters included in your build.
|
||||
|
||||
The ff* tools have a -bsf option applied per stream, taking a
|
||||
comma-separated list of filters, whose parameters follow the filter
|
||||
name after a '='.
|
||||
|
||||
@example
|
||||
ffmpeg -i INPUT -c:v copy -bsf:v filter1[=opt1=str1/opt2=str2][,filter2] OUTPUT
|
||||
@end example
|
||||
|
||||
Below is a description of the currently available bitstream filters,
|
||||
with their parameters, if any.
|
||||
Below is a description of the currently available bitstream filters.
|
||||
|
||||
@section aac_adtstoasc
|
||||
|
||||
Convert MPEG-2/4 AAC ADTS to MPEG-4 Audio Specific Configuration
|
||||
bitstream filter.
|
||||
|
||||
This filter creates an MPEG-4 AudioSpecificConfig from an MPEG-2/4
|
||||
ADTS header and removes the ADTS header.
|
||||
|
||||
This is required for example when copying an AAC stream from a raw
|
||||
ADTS AAC container to a FLV or a MOV/MP4 file.
|
||||
|
||||
@section chomp
|
||||
|
||||
Remove zero padding at the end of a packet.
|
||||
|
||||
@section dump_extra

Add extradata to the beginning of the filtered packets.

The additional argument specifies which packets should be filtered.
It accepts the values:
@table @samp
@item a
add extradata to all key packets, but only if @var{local_header} is
set in the @option{flags2} codec context field

@item k
add extradata to all key packets

@item e
add extradata to all packets
@end table

If not specified, it is assumed to be @samp{k}.

For example, the following @command{ffmpeg} command forces a global
header (thus disabling individual packet headers) in the H.264 packets
generated by the @code{libx264} encoder, but corrects them by adding
the header stored in extradata to the key packets:
@example
ffmpeg -i INPUT -map 0 -flags:v +global_header -c:v libx264 -bsf:v dump_extra out.ts
@end example

@section dump_extradata

@section h264_mp4toannexb

@@ -83,18 +37,7 @@ format with @command{ffmpeg}, you can use the command:
ffmpeg -i INPUT.mp4 -codec copy -bsf:v h264_mp4toannexb OUTPUT.ts
@end example

@section imxdump

Modifies the bitstream to fit in MOV and to be usable by the Final Cut
Pro decoder. This filter only applies to the mpeg2video codec, and is
likely not needed for Final Cut Pro 7 and newer with the appropriate
@option{-tag:v}.

For example, to remux 30 MB/sec NTSC IMX to MOV:

@example
ffmpeg -i input.mxf -c copy -bsf:v imxdump -tag:v mx3n output.mov
@end example

@section imx_dump_header

@section mjpeg2jpeg

@@ -137,44 +80,12 @@ ffmpeg -i frame_%d.jpg -c:v copy rotated.avi

@section movsub

@section mp3_header_compress

@section mp3_header_decompress

@section mpeg4_unpack_bframes

Unpack DivX-style packed B-frames.

DivX-style packed B-frames are not valid MPEG-4 and were only a
workaround for the broken Video for Windows subsystem.
They use more space, can cause minor AV sync issues, require more
CPU power to decode (unless the player has some decoded picture queue
to compensate the 2,0,2,0 frame per packet style) and cause
trouble if copied into a standard container like mp4 or mpeg-ps/ts,
because MPEG-4 decoders may not be able to decode them, since they are
not valid MPEG-4.

For example, to fix an AVI file containing an MPEG-4 stream with
DivX-style packed B-frames using @command{ffmpeg}, you can use the command:

@example
ffmpeg -i INPUT.avi -codec copy -bsf:v mpeg4_unpack_bframes OUTPUT.avi
@end example

@section noise

Damages the contents of packets without damaging the container. Can be
used for fuzzing or testing error resilience/concealment.

Parameters:
A numeric string, whose value determines how often output bytes are
modified. Values less than or equal to 0 are forbidden; the lower the
value, the more frequently bytes are modified, with 1 meaning that
every byte is modified.

@example
ffmpeg -i INPUT -c copy -bsf noise[=1] output.mkv
@end example
applies the modification to every byte.

@section remove_extra
@section remove_extradata

@c man end BITSTREAM FILTERS

5 doc/bootstrap.min.css vendored
File diff suppressed because one or more lines are too long
@@ -7,18 +7,10 @@ V
    Disable the default terse mode; the full command issued by make and its
    output will be shown on the screen.

DBG
    Preprocess x86 external assembler files to a .dbg.asm file in the object
    directory, which then gets compiled. Helps with developing those assembler
    files.

DESTDIR
    Destination directory for the install targets, useful to prepare packages
    or install FFmpeg in cross-environments.
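    For example (a sketch only; the staging directory is hypothetical), a
    package tree can be prepared with:

        make install DESTDIR=/tmp/ffmpeg-staging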
GEN
    Set to ‘1’ to generate the missing or mismatched references.
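    As an illustrative sketch (assuming the FATE suite is already set up as
    described in the FATE documentation), references can be regenerated with:

        make fate GEN=1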
Makefile targets:

all
@@ -33,9 +25,6 @@ fate-list
install
    Install headers, libraries and programs.

examples
    Build all examples located in doc/examples.

libavformat/output-example
    Build the libavformat basic example.

@@ -45,9 +34,6 @@ libavcodec/api-example
libswscale/swscale-test
    Build the swscale self-test (also useful as an example).

config
    Reconfigure the project with the current configuration.

Useful standard make commands:
make -t <target>

1158 doc/codecs.texi
File diff suppressed because it is too large
@@ -14,7 +14,7 @@ You can disable all the decoders with the configure option
with the options @code{--enable-decoder=@var{DECODER}} /
@code{--disable-decoder=@var{DECODER}}.

The option @code{-decoders} of the ff* tools will display the list of
The option @code{-codecs} of the ff* tools will display the list of
enabled decoders.

@c man end DECODERS
@@ -25,13 +25,6 @@ enabled decoders.
A description of some of the currently available video decoders
follows.

@section hevc

HEVC / H.265 decoder.

Note: the @option{skip_loop_filter} option has effect only at level
@code{all}.

@section rawvideo

Raw video decoder.
@@ -59,54 +52,6 @@ top-field-first is assumed
@chapter Audio Decoders
@c man begin AUDIO DECODERS

A description of some of the currently available audio decoders
follows.

@section ac3

AC-3 audio decoder.

This decoder implements part of ATSC A/52:2010 and ETSI TS 102 366, as well as
the undocumented RealAudio 3 (a.k.a. dnet).

@subsection AC-3 Decoder Options

@table @option

@item -drc_scale @var{value}
Dynamic Range Scale Factor. The factor to apply to dynamic range values
from the AC-3 stream. This factor is applied exponentially.
There are 3 notable scale factor ranges:
@table @option
@item drc_scale == 0
DRC disabled. Produces full range audio.
@item 0 < drc_scale <= 1
DRC enabled. Applies a fraction of the stream DRC value.
Audio reproduction is between full range and full compression.
@item drc_scale > 1
DRC enabled. Applies drc_scale asymmetrically.
Loud sounds are fully compressed. Soft sounds are enhanced.
@end table

@end table
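As an illustrative sketch (the file names are hypothetical; decoder options
are passed before the input they apply to), DRC can be disabled while
decoding:
@example
ffmpeg -drc_scale 0 -i INPUT.ac3 OUTPUT.wav
@end example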
@section flac

FLAC audio decoder.

This decoder aims to implement the complete FLAC specification from Xiph.

@subsection FLAC Decoder options

@table @option

@item -use_buggy_lpc
The lavc FLAC encoder used to produce buggy streams with high lpc values
(like the default value). This option makes it possible to decode such streams
correctly by using lavc's old buggy lpc logic for decoding.

@end table
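A minimal sketch (assuming the input was produced by the old buggy lavc
encoder; the file names are hypothetical):
@example
ffmpeg -use_buggy_lpc 1 -i BUGGY.flac OUTPUT.wav
@end example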
@section ffwavesynth

Internal wave synthesizer.
@@ -115,105 +60,11 @@ This decoder generates wave patterns according to predefined sequences. Its
use is purely internal and the format of the data it accepts is not publicly
documented.

@section libcelt

libcelt decoder wrapper.

libcelt allows libavcodec to decode the Xiph CELT ultra-low delay audio codec.
Requires the presence of the libcelt headers and library during configuration.
You need to explicitly configure the build with @code{--enable-libcelt}.

@section libgsm

libgsm decoder wrapper.

libgsm allows libavcodec to decode the GSM full rate audio codec. Requires
the presence of the libgsm headers and library during configuration. You need
to explicitly configure the build with @code{--enable-libgsm}.

This decoder supports both the ordinary GSM and the Microsoft variant.

@section libilbc

libilbc decoder wrapper.

libilbc allows libavcodec to decode the Internet Low Bitrate Codec (iLBC)
audio codec. Requires the presence of the libilbc headers and library during
configuration. You need to explicitly configure the build with
@code{--enable-libilbc}.

@subsection Options

The following option is supported by the libilbc wrapper.

@table @option
@item enhance

Enable the enhancement of the decoded audio when set to 1. The default
value is 0 (disabled).

@end table

@section libopencore-amrnb

libopencore-amrnb decoder wrapper.

libopencore-amrnb allows libavcodec to decode the Adaptive Multi-Rate
Narrowband audio codec. Using it requires the presence of the
libopencore-amrnb headers and library during configuration. You need to
explicitly configure the build with @code{--enable-libopencore-amrnb}.

An FFmpeg native decoder for AMR-NB exists, so users can decode AMR-NB
without this library.

@section libopencore-amrwb

libopencore-amrwb decoder wrapper.

libopencore-amrwb allows libavcodec to decode the Adaptive Multi-Rate
Wideband audio codec. Using it requires the presence of the
libopencore-amrwb headers and library during configuration. You need to
explicitly configure the build with @code{--enable-libopencore-amrwb}.

An FFmpeg native decoder for AMR-WB exists, so users can decode AMR-WB
without this library.

@section libopus

libopus decoder wrapper.

libopus allows libavcodec to decode the Opus Interactive Audio Codec.
Requires the presence of the libopus headers and library during
configuration. You need to explicitly configure the build with
@code{--enable-libopus}.

An FFmpeg native decoder for Opus exists, so users can decode Opus
without this library.

@c man end AUDIO DECODERS

@chapter Subtitles Decoders
@c man begin SUBTITLES DECODERS

@section dvbsub

@subsection Options

@table @option
@item compute_clut
@table @option
@item -1
Compute the CLUT if no matching CLUT is present in the stream.
@item 0
Never compute the CLUT.
@item 1
Always compute the CLUT and override the one provided in the stream.
@end table
@item dvb_substream
Selects the DVB substream, or all substreams if set to -1 (the default).

@end table
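As an illustrative sketch only (the file names are hypothetical, and the
subtitles are re-encoded to another bitmap codec so that the decoder is
actually exercised):
@example
ffmpeg -compute_clut 1 -i INPUT.ts -c:v copy -c:a copy -c:s dvdsub OUTPUT.mkv
@end example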
@section dvdsub

This codec decodes the bitmap subtitles used in DVDs; the same subtitles can
@@ -233,56 +84,6 @@ The format for this option is a string containing 16 24-bit hexadecimal
numbers (without 0x prefix) separated by commas, for example @code{0d00ee,
ee450d, 101010, eaeaea, 0ce60b, ec14ed, ebff0b, 0d617a, 7b7b7b, d1d1d1,
7b2a0e, 0d950c, 0f007b, cf0dec, cfa80c, 7c127b}.

@item ifo_palette
Specify the IFO file from which the global palette is obtained.
(experimental)

@item forced_subs_only
Only decode subtitle entries marked as forced. Some titles have forced
and non-forced subtitles in the same track. Setting this flag to @code{1}
will only keep the forced subtitles. Default value is @code{0}.
@end table
@section libzvbi-teletext

Libzvbi allows libavcodec to decode DVB teletext pages and DVB teletext
subtitles. Requires the presence of the libzvbi headers and library during
configuration. You need to explicitly configure the build with
@code{--enable-libzvbi}.

@subsection Options

@table @option
@item txt_page
List of teletext page numbers to decode. You may use the special * string to
match all pages. Pages that do not match the specified list are dropped.
Default value is *.
@item txt_chop_top
Discards the top teletext line. Default value is 1.
@item txt_format
Specifies the format of the decoded subtitles. The teletext decoder is capable
of decoding the teletext pages to bitmaps or to simple text. You should use
"bitmap" for teletext pages, because certain graphics and colors cannot be
expressed in simple text. You might use "text" for teletext-based subtitles if
your application can handle simple text-based subtitles. Default value is
bitmap.
@item txt_left
X offset of generated bitmaps, default is 0.
@item txt_top
Y offset of generated bitmaps, default is 0.
@item txt_chop_spaces
Chops leading and trailing spaces and removes empty lines from the generated
text. This option is useful for teletext-based subtitles where empty spaces may
be present at the start or at the end of the lines or empty lines may be
present between the subtitle lines because of double-sized teletext characters.
Default value is 1.
@item txt_duration
Sets the display duration of the decoded teletext pages or subtitles in
milliseconds. Default value is 30000, which is 30 seconds.
@item txt_transparent
Force transparent background of the generated teletext bitmaps. Default value
is 0, which means an opaque (black) background.
@end table
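A minimal sketch (the page number and file names are assumptions), decoding a
single teletext page to text-based subtitles:
@example
ffmpeg -txt_format text -txt_page 888 -i INPUT.ts -map 0:s:0 -c:s srt OUTPUT.srt
@end example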
@c man end SUBTITLES DECODERS

@@ -1,7 +1,3 @@
a.summary-letter {
  text-decoration: none;
}

a {
  color: #2D6198;
}
@@ -17,8 +13,8 @@ a:visited {
}

#banner img {
  margin-bottom: 1px;
  margin-top: 5px;
  padding-bottom: 1px;
  padding-top: 5px;
}

#body {
@@ -49,16 +45,11 @@ body {
  text-align: center;
}

h1 a, h2 a, h3 a, h4 a {
  text-decoration: inherit;
  color: inherit;
}

h1, h2, h3, h4 {
h1, h2, h3 {
  padding-left: 0.4em;
  border-radius: 4px;
  padding-bottom: 0.25em;
  padding-top: 0.25em;
  padding-bottom: 0.2em;
  padding-top: 0.2em;
  border: 1px solid #6A996A;
}

@@ -72,22 +63,15 @@ h1 {

h2 {
  color: #313131;
  font-size: 1.0em;
  font-size: 0.9em;
  background-color: #ABE3AB;
}

h3 {
  color: #313131;
  font-size: 0.9em;
  margin-bottom: -6px;
  background-color: #BBF3BB;
}

h4 {
  color: #313131;
  font-size: 0.8em;
  margin-bottom: -8px;
  background-color: #D1FDD1;
  background-color: #BBF3BB;
}

img {
@@ -1,7 +1,7 @@
@chapter Demuxers
@c man begin DEMUXERS

Demuxers are configured elements in FFmpeg that can read the
Demuxers are configured elements in FFmpeg which allow to read the
multimedia streams from a particular type of file.

When you configure your FFmpeg build, all the supported demuxers
@@ -18,12 +18,6 @@ enabled demuxers.

The description of some of the currently available demuxers follows.

@section aa

Audible Format 2, 3, and 4 demuxer.

This demuxer is used to demux Audible Format 2, 3, and 4 (.aa) files.

@section applehttp

Apple HTTP Live Streaming demuxer.
@@ -35,37 +29,6 @@ the caller can decide which variant streams to actually receive.
The total bitrate of the variant that the stream belongs to is
available in a metadata key named "variant_bitrate".

@section apng

Animated Portable Network Graphics demuxer.

This demuxer is used to demux APNG files.
All headers, except the PNG signature, up to (but not including) the first
fcTL chunk are transmitted as extradata.
Frames then consist of all the chunks between two fcTL chunks, or
between the last fcTL and IEND chunks.

@table @option
@item -ignore_loop @var{bool}
Ignore the loop variable in the file if set.
@item -max_fps @var{int}
Maximum framerate in frames per second (0 for no limit).
@item -default_fps @var{int}
Default framerate in frames per second when none is specified in the file
(0 meaning as fast as possible).
@end table
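For illustration (a sketch; the file names are hypothetical), a fallback frame
rate can be supplied when the file does not specify one:
@example
ffmpeg -default_fps 30 -i INPUT.apng OUTPUT.mkv
@end example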
@section asf

Advanced Systems Format demuxer.

This demuxer is used to demux ASF files and MMS network streams.

@table @option
@item -no_resync_search @var{bool}
Do not try to resynchronize by looking for a certain optional start code.
@end table
@anchor{concat}
@section concat

@@ -100,11 +63,11 @@ following directive is recognized:
Path to a file to read; special characters and spaces must be escaped with
backslash or single quotes.

All subsequent file-related directives apply to that file.
All subsequent directives apply to that file.

@item @code{ffconcat version 1.0}
Identify the script type and version. It also sets the @option{safe} option
to 1 if it was -1.
to 1 if it was to its default -1.

To make FFmpeg recognize the format automatically, this directive must
appear exactly as is (no extra space or byte-order-mark) on the very first
@@ -115,66 +78,6 @@ Duration of the file. This information can be specified from the file;
specifying it here may be more efficient or help if the information from the
file is not available or accurate.

If the duration is set for all files, then it is possible to seek in the
whole concatenated video.

@item @code{inpoint @var{timestamp}}
In point of the file. When the demuxer opens the file it instantly seeks to the
specified timestamp. Seeking is done so that all streams can be presented
successfully at In point.

This directive works best with intra frame codecs, because for non-intra frame
ones you will usually get extra packets before the actual In point and the
decoded content will most likely contain frames before In point too.

For each file, packets before the file In point will have timestamps less than
the calculated start timestamp of the file (negative in case of the first
file), and the duration of the files (if not specified by the @code{duration}
directive) will be reduced based on their specified In point.

Because of potential packets before the specified In point, packet timestamps
may overlap between two concatenated files.

@item @code{outpoint @var{timestamp}}
Out point of the file. When the demuxer reaches the specified decoding
timestamp in any of the streams, it handles it as an end of file condition and
skips the current and all the remaining packets from all streams.

Out point is exclusive, which means that the demuxer will not output packets
with a decoding timestamp greater than or equal to Out point.

This directive works best with intra frame codecs and formats where all streams
are tightly interleaved. For non-intra frame codecs you will usually get
additional packets with a presentation timestamp after Out point, therefore the
decoded content will most likely contain frames after Out point too. If your
streams are not tightly interleaved you may not get all the packets from all
streams before Out point and you may only be able to decode the earliest
stream until Out point.

The duration of the files (if not specified by the @code{duration}
directive) will be reduced based on their specified Out point.

@item @code{file_packet_metadata @var{key=value}}
Metadata of the packets of the file. The specified metadata will be set for
each file packet. You can specify this directive multiple times to add multiple
metadata entries.

@item @code{stream}
Introduce a stream in the virtual file.
All subsequent stream-related directives apply to the last introduced
stream.
Some stream properties must be set in order to allow identifying the
matching streams in the subfiles.
If no streams are defined in the script, the streams from the first file are
copied.

@item @code{exact_stream_id @var{id}}
Set the id of the stream.
If this directive is given, the stream with the corresponding id in the
subfiles will be used.
This is especially useful for MPEG-PS (VOB) files, where the order of the
streams is not reliable.

@end table
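As an illustrative sketch (file names and timestamps are hypothetical), a
script combining these directives could look like:
@example
ffconcat version 1.0
file intro.mkv
duration 10.0
file main.mkv
inpoint 5.0
outpoint 65.0
@end example
It could then be read with something like
@code{ffmpeg -f concat -i list.ffconcat -c copy OUTPUT.mkv}.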
@subsection Options

@@ -192,97 +95,11 @@ component.

If set to 0, any file name is accepted.

The default is 1.

-1 is equivalent to 1 if the format was automatically
The default is -1, it is equivalent to 1 if the format was automatically
probed and 0 otherwise.

@item auto_convert
If set to 1, try to perform automatic conversions on packet data to make the
streams concatenable.
The default is 1.

Currently, the only conversion is adding the h264_mp4toannexb bitstream
filter to H.264 streams in MP4 format. This is necessary in particular if
there are resolution changes.

@end table

@section flv

Adobe Flash Video Format demuxer.

This demuxer is used to demux FLV files and RTMP network streams.

@table @option
@item -flv_metadata @var{bool}
Allocate the streams according to the onMetaData array content.
@end table
@section libgme

The Game Music Emu library is a collection of video game music file emulators.

See @url{http://code.google.com/p/game-music-emu/} for more information.

Some files have multiple tracks. The demuxer will pick the first track by
default. The @option{track_index} option can be used to select a different
track. Track indexes start at 0. The demuxer exports the number of tracks as
the @var{tracks} metadata entry.

For very large files, the @option{max_size} option may have to be adjusted.

@section libquvi

Play media from Internet services using the quvi project.

The demuxer accepts a @option{format} option to request a specific quality. It
is by default set to @var{best}.

See @url{http://quvi.sourceforge.net/} for more information.

FFmpeg needs to be built with @code{--enable-libquvi} for this demuxer to be
enabled.

@section gif

Animated GIF demuxer.

It accepts the following options:

@table @option
@item min_delay
Set the minimum valid delay between frames in hundredths of seconds.
Range is 0 to 6000. Default value is 2.

@item max_gif_delay
Set the maximum valid delay between frames in hundredths of seconds.
Range is 0 to 65535. Default value is 65535 (nearly eleven minutes),
the maximum value allowed by the specification.

@item default_delay
Set the default delay between frames in hundredths of seconds.
Range is 0 to 6000. Default value is 10.

@item ignore_loop
GIF files can contain information to loop a certain number of times (or
infinitely). If @option{ignore_loop} is set to 1, then the loop setting
from the input will be ignored and looping will not occur. If set to 0,
then looping will occur and will cycle the number of times according to
the GIF. Default value is 1.
@end table

For example, with the overlay filter, place an infinitely looping GIF
over another video:
@example
ffmpeg -i input.mp4 -ignore_loop 0 -i input.gif -filter_complex overlay=shortest=1 out.mkv
@end example

Note that in the above example the shortest option of the overlay filter is
used to end the output video at the length of the shortest input file,
which in this case is @file{input.mp4} as the GIF in this example loops
infinitely.
@section image2

Image file demuxer.
@@ -300,7 +117,7 @@ same for all the files in the sequence.

This demuxer accepts the following options:
@table @option
@item framerate
Set the frame rate for the video stream. It defaults to 25.
Set the framerate for the video stream. It defaults to 25.
@item loop
If set to 1, loop over the input. Default value is 0.
@item pattern_type
@@ -308,10 +125,6 @@ Select the pattern type used to interpret the provided filename.

@var{pattern_type} accepts one of the following values.
@table @option
@item none
Disable pattern matching, therefore the video will only contain the specified
image. You should use this option if you do not want to create sequences from
multiple images and your filenames may contain special pattern characters.
@item sequence
Select a sequence pattern type, used to specify a sequence of files
indexed by sequential numbers.
@@ -382,12 +195,6 @@ to read from. Default value is 0.
Set the index interval range to check when looking for the first image
file in the sequence, starting from @var{start_number}. Default value
is 5.
@item ts_from_file
If set to 1, will set frame timestamp to modification time of image file. Note
that monotonicity of timestamps is not provided: images go in the same order as
without this option. Default value is 0.
If set to 2, will set frame timestamp to the modification time of the image file in
nanosecond precision.
@item video_size
Set the video size of the images to read. If not specified the video
size is guessed from the first image file in the sequence.
@@ -401,71 +208,28 @@ Use @command{ffmpeg} for creating a video from the images in the file
sequence @file{img-001.jpeg}, @file{img-002.jpeg}, ..., assuming an
input frame rate of 10 frames per second:
@example
ffmpeg -framerate 10 -i 'img-%03d.jpeg' out.mkv
ffmpeg -i 'img-%03d.jpeg' -r 10 out.mkv
@end example

@item
As above, but start by reading from a file with index 100 in the sequence:
@example
ffmpeg -framerate 10 -start_number 100 -i 'img-%03d.jpeg' out.mkv
ffmpeg -start_number 100 -i 'img-%03d.jpeg' -r 10 out.mkv
@end example

@item
Read images matching the "*.png" glob pattern, that is all the files
terminating with the ".png" suffix:
@example
ffmpeg -framerate 10 -pattern_type glob -i "*.png" out.mkv
ffmpeg -pattern_type glob -i "*.png" -r 10 out.mkv
@end example
@end itemize
@section mov/mp4/3gp/QuickTime

QuickTime / MP4 demuxer.

This demuxer accepts the following options:
@table @option
@item enable_drefs
Enable loading of external tracks, disabled by default.
Enabling this can theoretically leak information in some use cases.

@item use_absolute_path
Allows loading of external tracks via absolute paths, disabled by default.
Enabling this poses a security risk. It should only be enabled if the source
is known to be non-malicious.

@end table
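For illustration (a sketch; the file names are hypothetical and the security
caveats above still apply), external track references can be followed with:
@example
ffmpeg -enable_drefs 1 -use_absolute_path 1 -i INPUT.mov OUTPUT.mp4
@end example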
@section mpegts

MPEG-2 transport stream demuxer.

This demuxer accepts the following options:
@table @option
@item resync_size
Set size limit for looking up a new synchronization. Default value is
65536.

@item fix_teletext_pts
Override teletext packet PTS and DTS values with the timestamps calculated
from the PCR of the first program which the teletext stream is part of and is
not discarded. Default value is 1; set this option to 0 if you want your
teletext packet PTS and DTS values untouched.

@item ts_packetsize
Output option carrying the raw packet size in bytes.
Show the detected raw packet size, cannot be set by the user.

@item scan_all_pmts
Scan and combine all PMTs. The value is an integer from -1
to 1 (-1 means automatic setting, 1 means enabled, 0 means
disabled). Default value is -1.
@end table
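A minimal sketch (the input name is hypothetical), keeping the original
teletext timestamps during a stream copy:
@example
ffmpeg -fix_teletext_pts 0 -i INPUT.ts -c copy OUTPUT.ts
@end example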
@section rawvideo

Raw video demuxer.

This demuxer allows one to read raw video data. Since there is no header
This demuxer allows to read raw video data. Since there is no header
specifying the assumed video parameters, the user must specify them
in order to be able to decode the data correctly.

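For example (all parameter values here are assumptions about the particular
raw file), the size, pixel format and frame rate can be given on the command
line:
@example
ffmpeg -f rawvideo -pixel_format yuv420p -video_size 320x240 -framerate 25 -i INPUT.yuv OUTPUT.mp4
@end example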
@@ -1,5 +1,4 @@
\input texinfo @c -*- texinfo -*-
@documentencoding UTF-8

@settitle Developer Documentation
@titlepage
@@ -12,23 +11,29 @@

@chapter Developers Guide

@section Notes for external developers
@section API
@itemize @bullet
@item libavcodec is the library containing the codecs (both encoding and
decoding). Look at @file{doc/examples/decoding_encoding.c} to see how to use
it.

This document is mostly useful for internal FFmpeg developers.
External developers who need to use the API in their application should
refer to the API doxygen documentation in the public headers, and
check the examples in @file{doc/examples} and in the source code to
see how the public API is employed.
@item libavformat is the library containing the file format handling (mux and
demux code for several formats). Look at @file{ffplay.c} to use it in a
player. See @file{doc/examples/muxing.c} to use it to generate audio or video
streams.

You can use the FFmpeg libraries in your commercial program, but you
are encouraged to @emph{publish any patch you make}. In this case the
best way to proceed is to send your patches to the ffmpeg-devel
mailing list following the guidelines illustrated in the remainder of
this document.
@end itemize

For more detailed legal information about the use of FFmpeg in
external programs read the @file{LICENSE} file in the source tree and
consult @url{http://ffmpeg.org/legal.html}.
@section Integrating libavcodec or libavformat in your program

You can integrate all the source code of the libraries to link them
statically to avoid any version problem. All you need is to provide a
'config.mak' and a 'config.h' in the parent directory. See the defines
generated by ./configure to understand what is needed.

You can use libavcodec or libavformat in your commercial program, but
@emph{any patch you make must be published}. The best way to proceed is
to send your patches to the FFmpeg mailing list.

@section Contributing
@@ -52,16 +57,13 @@ and should try to fix issues their commit causes.
@subsection Code formatting conventions

There are the following guidelines regarding the indentation in files:

@itemize @bullet
@item
Indent size is 4.

@item
The TAB character is forbidden outside of Makefiles as is any
form of trailing whitespace. Commits containing either will be
rejected by the git repository.

@item
You should try to limit your code lines to 80 characters; however, do so if
and only if this improves readability.
@@ -93,7 +95,7 @@ for markup commands, i.e. use @code{@@param} and not @code{\param}.
 * more text ...
 * ...
 */
typedef struct Foobar @{
typedef struct Foobar@{
    int var1; /**< var1 description */
    int var2; ///< var2 description
    /** var3 description */
@@ -115,17 +117,13 @@ int myfunc(int my_parameter)

FFmpeg is programmed in the ISO C90 language with a few additional
features from ISO C99, namely:

@itemize @bullet
@item
the @samp{inline} keyword;

@item
@samp{//} comments;

@item
designated struct initializers (@samp{struct s x = @{ .i = 17 @};})

@item
compound literals (@samp{x = (struct s) @{ 17, 23 @};})
@end itemize
@@ -137,17 +135,13 @@ clarity and performance.
All code must compile with recent versions of GCC and a number of other
currently supported compilers. To ensure compatibility, please do not use
additional C99 features or GCC extensions. Especially watch out for:

@itemize @bullet
@item
mixing statements and declarations;

@item
@samp{long long} (use @samp{int64_t} instead);

@item
@samp{__attribute__} not protected by @samp{#ifdef __GNUC__} or similar;

@item
GCC statement expressions (@samp{(x = (@{ int y = 4; y; @})}).
@end itemize
@@ -159,25 +153,17 @@ All names should be composed with underscores (_), not CamelCase. For example,
for example structs and enums; they should always be in the CamelCase

There are the following conventions for naming variables and functions:

@itemize @bullet
@item
For local variables no prefix is required.

@item
For file-scope variables and functions declared as @code{static}, no prefix
is required.

For variables and functions declared as @code{static} no prefix is required.
@item
For variables and functions visible outside of file scope, but only used
internally by a library, an @code{ff_} prefix should be used,
e.g. @samp{ff_w64_demuxer}.

For variables and functions used internally by a library an @code{ff_}
prefix should be used, e.g. @samp{ff_w64_demuxer}.
@item
For variables and functions visible outside of file scope, used internally
across multiple libraries, use @code{avpriv_} as prefix, for example,
@samp{avpriv_aac_parse_header}.

For variables and functions used internally across multiple libraries, use
@code{avpriv_}. For example, @samp{avpriv_aac_parse_header}.
@item
Each library has its own prefix for public symbols, in addition to the
commonly used @code{av_} (@code{avformat_} for libavformat,
@@ -197,12 +183,10 @@ are reserved at the file level and may not be used for externally visible
symbols. If in doubt, just avoid names starting with @code{_} altogether.

@subsection Miscellaneous conventions

@itemize @bullet
@item
fprintf and printf are forbidden in libavformat and libavcodec,
please use av_log() instead.

@item
Casts should be used only when necessary. Unneeded parentheses
should also be avoided if they don't make the code easier to understand.
@@ -228,7 +212,7 @@ autocmd InsertEnter * match ForbiddenWhitespace /\t\|\s\+\%#\@@<!$/
@end example

For Emacs, add these roughly equivalent lines to your @file{.emacs.d/init.el}:
@lisp
@example
(c-add-style "ffmpeg"
             '("k&r"
               (c-basic-offset . 4)
@@ -239,163 +223,137 @@ For Emacs, add these roughly equivalent lines to your @file{.emacs.d/init.el}:
               )
)
(setq c-default-style "ffmpeg")
@end lisp
@end example
@section Development Policy

@enumerate
@item
Contributions should be licensed under the
@uref{http://www.gnu.org/licenses/lgpl-2.1.html, LGPL 2.1},
including an "or any later version" clause, or, if you prefer
a gift-style license, the
@uref{http://opensource.org/licenses/isc-license.txt, ISC} or
@uref{http://mit-license.org/, MIT} license.
@uref{http://www.gnu.org/licenses/gpl-2.0.html, GPL 2} including
an "or any later version" clause is also acceptable, but LGPL is
preferred.
If you add a new file, give it a proper license header. Do not copy and
paste it from a random place, use an existing file as template.

@item
You must not commit code which breaks FFmpeg! (Meaning unfinished but
enabled code which breaks compilation or compiles but does not work or
breaks the regression tests)
You can commit unfinished stuff (for testing etc), but it must be disabled
(#ifdef etc) by default so it does not interfere with other developers'
work.

@item
The commit message should have a short first line in the form of
a @samp{topic: short description} as a header, separated by a newline
from the body consisting of an explanation of why the change is necessary.
If the commit fixes a known bug on the bug tracker, the commit message
should include its bug ID. Referring to the issue on the bug tracker does
not exempt you from writing an excerpt of the bug in the commit message.

@item
You do not have to over-test things. If it works for you, and you think it
should work for others, then commit. If your code has problems
(portability, triggers compiler bugs, unusual environment etc) they will be
reported and eventually fixed.

@item
Do not commit unrelated changes together, split them into self-contained
pieces. Also do not forget that if part B depends on part A, but A does not
depend on B, then A can and should be committed first and separate from B.
Keeping changes well split into self-contained parts makes reviewing and
understanding them on the commit log mailing list easier. This also helps
in case of debugging later on.
Also if you have doubts about splitting or not splitting, do not hesitate to
ask/discuss it on the developer mailing list.

@item
Do not change behavior of the programs (renaming options etc) or public
API or ABI without first discussing it on the ffmpeg-devel mailing list.
Do not remove functionality from the code. Just improve!

Note: Redundant code can be removed.

@item
Do not commit changes to the build system (Makefiles, configure script)
which change behavior, defaults etc, without asking first. The same
applies to compiler warning fixes, trivial looking fixes and to code
maintained by other developers. We usually have a reason for doing things
the way we do. Send your changes as patches to the ffmpeg-devel mailing
list, and if the code maintainers say OK, you may commit. This does not
apply to files you wrote and/or maintain.

@item
We refuse source indentation and other cosmetic changes if they are mixed
with functional changes, such commits will be rejected and removed. Every
developer has his own indentation style, you should not change it. Of course
if you (re)write something, you can use your own style, even though we would
prefer if the indentation throughout FFmpeg was consistent (Many projects
force a given indentation style - we do not.). If you really need to make
indentation changes (try to avoid this), separate them strictly from real
changes.

NOTE: If you had to put if()@{ .. @} over a large (> 5 lines) chunk of code,
then either do NOT change the indentation of the inner part within (do not
move it to the right)! or do so in a separate commit

@item
Always fill out the commit log message. Describe in a few lines what you
changed and why. You can refer to mailing list postings if you fix a
particular bug. Comments such as "fixed!" or "Changed it." are unacceptable.
Recommended format:

@example
area changed: Short 1 line description

details describing what and why and giving references.
@end example

@item
Make sure the author of the commit is set correctly. (see git commit --author)
If you apply a patch, send an
answer to ffmpeg-devel (or wherever you got the patch from) saying that
you applied the patch.

@item
When applying patches that have been discussed (at length) on the mailing
list, reference the thread in the log message.

@item
Do NOT commit to code actively maintained by others without permission.
Send a patch to ffmpeg-devel instead. If no one answers within a reasonable
timeframe (12h for build failures and security fixes, 3 days small changes,
1 week for big patches) then commit your patch if you think it is OK.
Also note, the maintainer can simply ask for more time to review!

@item
Subscribe to the ffmpeg-cvslog mailing list. The diffs of all commits
are sent there and reviewed by all the other developers. Bugs and possible
improvements or general questions regarding commits are discussed there. We
expect you to react if problems with your code are uncovered.

@item
Update the documentation if you change behavior or add features. If you are
unsure how best to do this, send a patch to ffmpeg-devel, the documentation
maintainer(s) will review and commit your stuff.

@item
Try to keep important discussions and requests (also) on the public
developer mailing list, so that all developers can benefit from them.

@item
Never write to unallocated memory, never write over the end of arrays,
always check values read from some untrusted source before using them
as array index or other risky things.

@item
Remember to check if you need to bump versions for the specific libav*
parts (libavutil, libavcodec, libavformat) you are changing. You need
to change the version integer.
Incrementing the first component means no backward compatibility to
previous versions (e.g. removal of a function from the public API).
Incrementing the second component means backward compatible change
(e.g. addition of a function to the public API or extension of an
existing data structure).
Incrementing the third component means a noteworthy binary compatible
change (e.g. encoder bug fix that matters for the decoder). The third
component always starts at 100 to distinguish FFmpeg from Libav.

@item
Compiler warnings indicate potential bugs or code with bad style. If a type of
warning always points to correct and clean code, that warning should
be disabled, not the code changed.
Thus the remaining warnings can either be bugs or correct code.
If it is a bug, the bug has to be fixed. If it is not, the code should
be changed to not generate a warning unless that causes a slowdown
or obfuscates the code.

@item
Make sure that no parts of the codebase that you maintain are missing from the
@file{MAINTAINERS} file. If something that you want to maintain is missing add it with
your name after it.
If at some point you no longer want to maintain some code, then please help
finding a new maintainer and also don't forget updating the @file{MAINTAINERS} file.
@end enumerate

We think our rules are not too hard. If you have comments, contact us.
@@ -450,51 +408,40 @@ send a reminder by email. Your patch should eventually be dealt with.

@enumerate
@item
Did you use av_cold for codec initialization and close functions?

@item
Did you add a long_name under NULL_IF_CONFIG_SMALL to the AVCodec or
AVInputFormat/AVOutputFormat struct?

@item
Did you bump the minor version number (and reset the micro version
number) in @file{libavcodec/version.h} or @file{libavformat/version.h}?

@item
Did you register it in @file{allcodecs.c} or @file{allformats.c}?

@item
Did you add the AVCodecID to @file{avcodec.h}?
When adding new codec IDs, also add an entry to the codec descriptor
list in @file{libavcodec/codec_desc.c}.

@item
If it has a FourCC, did you add it to @file{libavformat/riff.c},
even if it is only a decoder?

@item
Did you add a rule to compile the appropriate files in the Makefile?
Remember to do this even if you're just adding a format to a file that is
already being compiled by some other rule, like a raw demuxer.

@item
Did you add an entry to the table of supported formats or codecs in
@file{doc/general.texi}?

@item
Did you add an entry in the Changelog?

@item
If it depends on a parser or a library, did you add that dependency in
configure?

@item
Did you @code{git add} the appropriate files before committing?

@item
Did you make sure it compiles standalone, i.e. with
@code{configure --disable-everything --enable-decoder=foo}
(or @code{--enable-demuxer} or whatever your component is)?
@end enumerate

@@ -502,113 +449,82 @@ Did you make sure it compiles standalone, i.e. with
|
||||
|
||||
@enumerate
|
||||
@item
|
||||
Does @code{make fate} pass with the patch applied?
|
||||
|
||||
Does @code{make fate} pass with the patch applied?
|
||||
@item
|
||||
Was the patch generated with git format-patch or send-email?
|
||||
|
||||
Was the patch generated with git format-patch or send-email?
|
||||
@item
|
||||
Did you sign off your patch? (git commit -s)
|
||||
See @url{http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob_plain;f=Documentation/SubmittingPatches} for the meaning
|
||||
of sign off.
|
||||
|
||||
Did you sign off your patch? (git commit -s)
|
||||
See @url{http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob_plain;f=Documentation/SubmittingPatches} for the meaning
|
||||
of sign off.
|
||||
@item
|
||||
Did you provide a clear git commit log message?
|
||||
|
||||
Did you provide a clear git commit log message?
|
||||
@item
|
||||
Is the patch against latest FFmpeg git master branch?
|
||||
|
||||
Is the patch against latest FFmpeg git master branch?

@item
Are you subscribed to ffmpeg-devel?
(the list is subscribers only due to spam)

@item
Have you checked that the changes are minimal, so that the same cannot be
achieved with a smaller patch and/or simpler final code?

@item
If the change is to speed critical code, did you benchmark it?

@item
If you did any benchmarks, did you provide them in the mail?

@item
Have you checked that the patch does not introduce buffer overflows or
other security issues?

@item
Did you test your decoder or demuxer against damaged data? If not, see
tools/trasher, the noise bitstream filter, and
@uref{http://caca.zoy.org/wiki/zzuf, zzuf}. Your decoder or demuxer
should not crash, end in a (near) infinite loop, or allocate ridiculous
amounts of memory when fed damaged data.

@item
Did you test your decoder or demuxer against sample files?
Samples may be obtained at @url{http://samples.ffmpeg.org}.

@item
Does the patch not mix functional and cosmetic changes?

@item
Did you add tabs or trailing whitespace to the code? Both are forbidden.

@item
Is the patch attached to the email you send?

@item
Is the mime type of the patch correct? It should be text/x-diff or
text/x-patch or at least text/plain and not application/octet-stream.

@item
If the patch fixes a bug, did you provide a verbose analysis of the bug?

@item
If the patch fixes a bug, did you provide enough information, including
a sample, so the bug can be reproduced and the fix can be verified?
Note: please do not attach samples >100k to mails; provide a URL instead.
You can upload to ftp://upload.ffmpeg.org.

@item
Did you provide a verbose summary of what the patch changes?

@item
Did you provide a verbose explanation of why it changes things the way it does?

@item
Did you provide a verbose summary of the user visible advantages and
disadvantages if the patch is applied?

@item
Did you provide an example so we can verify the new feature added by the
patch easily?

@item
If you added a new file, did you insert a license header? It should be
taken from FFmpeg, not randomly copied and pasted from somewhere else.

@item
You should maintain alphabetical order in alphabetically ordered lists as
long as doing so does not break API/ABI compatibility.

@item
Lines with similar content should be aligned vertically when doing so
improves readability.

@item
Consider adding a regression test for your code.

@item
If you added YASM code, please check that things still work with --disable-yasm.

@item
Make sure you check the return values of functions and return appropriate
error codes. Especially memory allocation functions like @code{av_malloc()}
are notoriously left unchecked, which is a serious problem (see the sketch
after this list).

@item
Test your code with valgrind and/or Address Sanitizer to ensure it is free
of leaks, out of array accesses, etc.
@end enumerate
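
As a minimal illustration of the allocation-checking item above (a sketch
written for this document, not code taken from the FFmpeg tree):

@example
#include <stdint.h>
#include <libavutil/mem.h>
#include <libavutil/error.h>

/* Allocate a buffer and propagate an FFmpeg error code on failure,
 * instead of silently continuing with a NULL pointer. */
static int alloc_payload(uint8_t **buf, size_t size)
{
    *buf = av_malloc(size);
    if (!*buf)
        return AVERROR(ENOMEM);
    return 0;
}
@end example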

@section Patch review process
@@ -637,10 +553,6 @@ not related to the comments received during review. Such patches will
be rejected. Instead, submit significant changes or new features as
separate patches.

Everyone is welcome to review patches. Also, if you are waiting for your patch
to be reviewed, please consider helping to review other patches; that is a great
way to get everyone's patches reviewed sooner.

@anchor{Regression tests}
@section Regression tests

@@ -656,154 +568,13 @@ accordingly].
@subsection Adding files to the fate-suite dataset

When there is no muxer or encoder available to generate test media for a
specific test then the media has to be included in the fate-suite.
First, please make sure that the sample file is as small as possible to test the
respective decoder or demuxer sufficiently. Large files increase network
bandwidth and disk space requirements.
Once you have a working fate test and fate sample, provide in the commit
message or introductory message for the patch series that you post to
the ffmpeg-devel mailing list, a direct link to download the sample media.


@subsection Visualizing Test Coverage

The FFmpeg build system allows visualizing the test coverage in an easy
manner with the coverage tools @code{gcov}/@code{lcov}. This involves
the following steps:

@enumerate
@item
Configure to compile with instrumentation enabled:
@code{configure --toolchain=gcov}.

@item
Run your test case, either manually or via FATE. This can be either
the full FATE regression suite, or any arbitrary invocation of any
front-end tool provided by FFmpeg, in any combination.

@item
Run @code{make lcov} to generate coverage data in HTML format.

@item
View @code{lcov/index.html} in your preferred HTML viewer.
@end enumerate

You can use the command @code{make lcov-reset} to reset the coverage
measurements. You will need to rerun @code{make lcov} after running a
new test.

@subsection Using Valgrind

The configure script provides a shortcut for using valgrind to spot bugs
related to memory handling. Just add the option
@code{--toolchain=valgrind-memcheck} or @code{--toolchain=valgrind-massif}
to your configure line, and reasonable defaults will be set for running
FATE under the supervision of either the @strong{memcheck} or the
@strong{massif} tool of the valgrind suite.

In case you need finer control over how valgrind is invoked, use the
@code{--target-exec='valgrind <your_custom_valgrind_options>'} option in
your configure line instead.

@anchor{Release process}
@section Release process

FFmpeg maintains a set of @strong{release branches}, which are the
recommended deliverable for system integrators and distributors (such as
Linux distributions, etc.). At regular times, a @strong{release
manager} prepares, tests and publishes tarballs on the
@url{http://ffmpeg.org} website.

There are two kinds of releases:

@enumerate
@item
@strong{Major releases} always include the latest and greatest
features and functionality.

@item
@strong{Point releases} are cut from @strong{release} branches,
which are named @code{release/X}, with @code{X} being the release
version number.
@end enumerate

Note that we promise our users that shared libraries from any FFmpeg
release never break programs that have been @strong{compiled} against
previous versions of @strong{the same release series} in any case!

However, from time to time, we do make API changes that require adaptations
in applications. Such changes are only allowed in (new) major releases and
require further steps such as bumping library version numbers and/or
adjustments to the symbol versioning file. Please discuss such changes
on the @strong{ffmpeg-devel} mailing list in time to allow forward planning.

@anchor{Criteria for Point Releases}
@subsection Criteria for Point Releases

Changes that match the following criteria are valid candidates for
inclusion into a point release:

@enumerate
@item
Fixes a security issue, preferably identified by a @strong{CVE
number} issued by @url{http://cve.mitre.org/}.

@item
Fixes a documented bug in @url{https://trac.ffmpeg.org}.

@item
Improves the included documentation.

@item
Retains both source code and binary compatibility with previous
point releases of the same release branch.
@end enumerate

The order for checking the rules is (1 OR 2 OR 3) AND 4.


@subsection Release Checklist

The release process involves the following steps:

@enumerate
@item
Ensure that the @file{RELEASE} file contains the version number for
the upcoming release.

@item
Add the release at @url{https://trac.ffmpeg.org/admin/ticket/versions}.

@item
Announce the intent to do a release to the mailing list.

@item
Make sure all relevant security fixes have been backported. See
@url{https://ffmpeg.org/security.html}.

@item
Ensure that the FATE regression suite still passes in the release
branch on at least @strong{i386} and @strong{amd64}
(cf. @ref{Regression tests}).

@item
Prepare the release tarballs in @code{bz2} and @code{gz} formats, and
supplementing files that contain @code{gpg} signatures.

@item
Publish the tarballs at @url{http://ffmpeg.org/releases}. Create and
push an annotated tag in the form @code{nX}, with @code{X}
containing the version number.

@item
Propose and send a patch to the @strong{ffmpeg-devel} mailing list
with a news entry for the website.

@item
Publish the news entry.

@item
Send an announcement to the mailing list.
@end enumerate

@bye

@@ -1,25 +0,0 @@
@chapter Device Options
@c man begin DEVICE OPTIONS

The libavdevice library provides the same interface as
libavformat. Namely, an input device is considered like a demuxer, and
an output device like a muxer, and the interface and generic device
options are the same as those provided by libavformat (see the
ffmpeg-formats manual).

In addition, each input or output device may support so-called private
options, which are specific to that component.

Options may be set by specifying -@var{option} @var{value} in the
FFmpeg tools, or by setting the value explicitly in the device
@code{AVFormatContext} options or using the @file{libavutil/opt.h} API
for programmatic use (a minimal sketch follows this chapter).

@c man end DEVICE OPTIONS

@ifclear config-writeonly
@include indevs.texi
@end ifclear
@ifclear config-readonly
@include outdevs.texi
@end ifclear
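
As an illustration of the programmatic route mentioned above, here is a
minimal sketch (not part of the FFmpeg tree); the device name "v4l2", the
path /dev/video0 and the "framerate" option are assumptions chosen for the
example:

@example
#include <libavdevice/avdevice.h>
#include <libavformat/avformat.h>
#include <libavutil/dict.h>

int main(void)
{
    AVFormatContext *fmt_ctx = NULL;
    AVInputFormat   *ifmt;
    AVDictionary    *opts = NULL;
    int ret;

    av_register_all();
    avdevice_register_all();

    /* Input devices are looked up like demuxers. */
    ifmt = av_find_input_format("v4l2");
    if (!ifmt)
        return 1;

    /* Device options travel in an AVDictionary, exactly like format options. */
    av_dict_set(&opts, "framerate", "25", 0);

    ret = avformat_open_input(&fmt_ctx, "/dev/video0", ifmt, &opts);
    av_dict_free(&opts);
    if (ret < 0)
        return 1;

    av_dump_format(fmt_ctx, 0, "/dev/video0", 0);
    avformat_close_input(&fmt_ctx);
    return 0;
}
@end example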
@@ -2,20 +2,13 @@

SRC_PATH="${1}"
DOXYFILE="${2}"
DOXYGEN="${3}"

shift 3
shift 2

if [ -e "$SRC_PATH/VERSION" ]; then
    VERSION=`cat "$SRC_PATH/VERSION"`
else
    VERSION=`cd "$SRC_PATH"; git describe`
fi

$DOXYGEN - <<EOF
doxygen - <<EOF
@INCLUDE        = ${DOXYFILE}
INPUT           = $@
EXAMPLE_PATH    = ${SRC_PATH}/doc/examples
HTML_TIMESTAMP  = NO
PROJECT_NUMBER  = $VERSION
HTML_HEADER     = ${SRC_PATH}/doc/doxy/header.html
HTML_FOOTER     = ${SRC_PATH}/doc/doxy/footer.html
HTML_STYLESHEET = ${SRC_PATH}/doc/doxy/doxy_stylesheet.css
EOF
2019
doc/doxy/doxy_stylesheet.css
Normal file
File diff suppressed because it is too large
9
doc/doxy/footer.html
Normal file
@@ -0,0 +1,9 @@

<footer class="footer pagination-right">
<span class="label label-info">
Generated on $datetime for $projectname by <a href="http://www.doxygen.org/index.html">doxygen</a> $doxygenversion
</span>
</footer>
</div>
</body>
</html>
16
doc/doxy/header.html
Normal file
@@ -0,0 +1,16 @@
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<meta http-equiv="X-UA-Compatible" content="IE=9"/>
<!--BEGIN PROJECT_NAME--><title>$projectname: $title</title><!--END PROJECT_NAME-->
<!--BEGIN !PROJECT_NAME--><title>$title</title><!--END !PROJECT_NAME-->
<link href="$relpath$doxy_stylesheet.css" rel="stylesheet" type="text/css" />
<!--Header replace -->

</head>

<div class="container">

<!--Header replace -->
<div class="menu">
2073
doc/encoders.texi
File diff suppressed because it is too large
288
doc/eval.texi
Normal file
@@ -0,0 +1,288 @@
|
||||
@chapter Expression Evaluation
|
||||
@c man begin EXPRESSION EVALUATION
|
||||
|
||||
When evaluating an arithmetic expression, FFmpeg uses an internal
|
||||
formula evaluator, implemented through the @file{libavutil/eval.h}
|
||||
interface.
|
||||
|
||||
An expression may contain unary, binary operators, constants, and
|
||||
functions.
|
||||
|
||||
Two expressions @var{expr1} and @var{expr2} can be combined to form
|
||||
another expression "@var{expr1};@var{expr2}".
|
||||
@var{expr1} and @var{expr2} are evaluated in turn, and the new
|
||||
expression evaluates to the value of @var{expr2}.
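
For example (an expression made up for this text, not taken from the manual),
the following evaluates both sub-expressions and yields 12, the value of the
second one:

@example
1+1; 3*4
@end example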
|
||||
|
||||
The following binary operators are available: @code{+}, @code{-},
|
||||
@code{*}, @code{/}, @code{^}.
|
||||
|
||||
The following unary operators are available: @code{+}, @code{-}.
|
||||
|
||||
The following functions are available:
|
||||
@table @option
|
||||
@item abs(x)
|
||||
Compute absolute value of @var{x}.
|
||||
|
||||
@item acos(x)
|
||||
Compute arccosine of @var{x}.
|
||||
|
||||
@item asin(x)
|
||||
Compute arcsine of @var{x}.
|
||||
|
||||
@item atan(x)
|
||||
Compute arctangent of @var{x}.
|
||||
|
||||
@item ceil(expr)
|
||||
Round the value of expression @var{expr} upwards to the nearest
|
||||
integer. For example, "ceil(1.5)" is "2.0".
|
||||
|
||||
@item cos(x)
|
||||
Compute cosine of @var{x}.
|
||||
|
||||
@item cosh(x)
|
||||
Compute hyperbolic cosine of @var{x}.
|
||||
|
||||
@item eq(x, y)
|
||||
Return 1 if @var{x} and @var{y} are equivalent, 0 otherwise.
|
||||
|
||||
@item exp(x)
|
||||
Compute exponential of @var{x} (with base @code{e}, the Euler's number).
|
||||
|
||||
@item floor(expr)
|
||||
Round the value of expression @var{expr} downwards to the nearest
|
||||
integer. For example, "floor(-1.5)" is "-2.0".
|
||||
|
||||
@item gauss(x)
|
||||
Compute Gauss function of @var{x}, corresponding to
|
||||
@code{exp(-x*x/2) / sqrt(2*PI)}.
|
||||
|
||||
@item gcd(x, y)
|
||||
Return the greatest common divisor of @var{x} and @var{y}. If both @var{x} and
|
||||
@var{y} are 0 or either or both are less than zero then behavior is undefined.
|
||||
|
||||
@item gt(x, y)
|
||||
Return 1 if @var{x} is greater than @var{y}, 0 otherwise.
|
||||
|
||||
@item gte(x, y)
|
||||
Return 1 if @var{x} is greater than or equal to @var{y}, 0 otherwise.
|
||||
|
||||
@item hypot(x, y)
|
||||
This function is similar to the C function with the same name; it returns
|
||||
"sqrt(@var{x}*@var{x} + @var{y}*@var{y})", the length of the hypotenuse of a
|
||||
right triangle with sides of length @var{x} and @var{y}, or the distance of the
|
||||
point (@var{x}, @var{y}) from the origin.
|
||||
|
||||
@item if(x, y)
|
||||
Evaluate @var{x}, and if the result is non-zero return the result of
|
||||
the evaluation of @var{y}, return 0 otherwise.
|
||||
|
||||
@item if(x, y, z)
|
||||
Evaluate @var{x}, and if the result is non-zero return the evaluation
|
||||
result of @var{y}, otherwise the evaluation result of @var{z}.
|
||||
|
||||
@item ifnot(x, y)
|
||||
Evaluate @var{x}, and if the result is zero return the result of the
|
||||
evaluation of @var{y}, return 0 otherwise.
|
||||
|
||||
@item ifnot(x, y, z)
|
||||
Evaluate @var{x}, and if the result is zero return the evaluation
|
||||
result of @var{y}, otherwise the evaluation result of @var{z}.
|
||||
|
||||
@item isinf(x)
|
||||
Return 1.0 if @var{x} is +/-INFINITY, 0.0 otherwise.
|
||||
|
||||
@item isnan(x)
|
||||
Return 1.0 if @var{x} is NAN, 0.0 otherwise.
|
||||
|
||||
@item ld(var)
|
||||
Load the value of the internal variable with number
|
||||
@var{var}, which was previously stored with st(@var{var}, @var{expr}).
|
||||
The function returns the loaded value.
|
||||
|
||||
@item log(x)
|
||||
Compute natural logarithm of @var{x}.
|
||||
|
||||
@item lt(x, y)
|
||||
Return 1 if @var{x} is lesser than @var{y}, 0 otherwise.
|
||||
|
||||
@item lte(x, y)
|
||||
Return 1 if @var{x} is lesser than or equal to @var{y}, 0 otherwise.
|
||||
|
||||
@item max(x, y)
|
||||
Return the maximum between @var{x} and @var{y}.
|
||||
|
||||
@item min(x, y)
|
||||
Return the minimum between @var{x} and @var{y}.
|
||||
|
||||
@item mod(x, y)
|
||||
Compute the remainder of division of @var{x} by @var{y}.
|
||||
|
||||
@item not(expr)
|
||||
Return 1.0 if @var{expr} is zero, 0.0 otherwise.
|
||||
|
||||
@item pow(x, y)
|
||||
Compute the power of @var{x} elevated to @var{y}; it is equivalent to
|
||||
"(@var{x})^(@var{y})".
|
||||
|
||||
@item print(t)
|
||||
@item print(t, l)
|
||||
Print the value of expression @var{t} with loglevel @var{l}. If
@var{l} is not specified then a default log level is used.
Return the value of the expression printed.
|
||||
|
||||
@item random(x)
|
||||
Return a pseudo random value between 0.0 and 1.0. @var{x} is the index of the
|
||||
internal variable which will be used to save the seed/state.
|
||||
|
||||
@item root(expr, max)
|
||||
Find an input value for which the function represented by @var{expr}
|
||||
with argument @var{ld(0)} is 0 in the interval 0..@var{max}.
|
||||
|
||||
The expression in @var{expr} must denote a continuous function or the
|
||||
result is undefined.
|
||||
|
||||
@var{ld(0)} is used to represent the function input value, which means
|
||||
that the given expression will be evaluated multiple times with
|
||||
various input values that the expression can access through
|
||||
@code{ld(0)}. When the expression evaluates to 0 then the
|
||||
corresponding input value will be returned.
|
||||
|
||||
@item sin(x)
|
||||
Compute sine of @var{x}.
|
||||
|
||||
@item sinh(x)
|
||||
Compute hyperbolic sine of @var{x}.
|
||||
|
||||
@item sqrt(expr)
|
||||
Compute the square root of @var{expr}. This is equivalent to
|
||||
"(@var{expr})^.5".
|
||||
|
||||
@item squish(x)
|
||||
Compute expression @code{1/(1 + exp(4*x))}.
|
||||
|
||||
@item st(var, expr)
|
||||
Store the value of the expression @var{expr} in an internal
variable. @var{var} specifies the number of the variable where to
store the value, and it is a value ranging from 0 to 9. The function
returns the value stored in the internal variable.
Note that variables are currently not shared between expressions.
|
||||
|
||||
@item tan(x)
|
||||
Compute tangent of @var{x}.
|
||||
|
||||
@item tanh(x)
|
||||
Compute hyperbolic tangent of @var{x}.
|
||||
|
||||
@item taylor(expr, x)
|
||||
@item taylor(expr, x, id)
|
||||
Evaluate a Taylor series at @var{x}, given an expression representing
|
||||
the @code{ld(id)}-th derivative of a function at 0.
|
||||
|
||||
When the series does not converge the result is undefined.
|
||||
|
||||
@var{ld(id)} is used to represent the derivative order in @var{expr},
|
||||
which means that the given expression will be evaluated multiple times
|
||||
with various input values that the expression can access through
|
||||
@code{ld(id)}. If @var{id} is not specified then 0 is assumed.
|
||||
|
||||
Note, when you have the derivatives at y instead of 0,
|
||||
@code{taylor(expr, x-y)} can be used.
|
||||
|
||||
@item time(0)
|
||||
Return the current (wallclock) time in seconds.
|
||||
|
||||
@item trunc(expr)
|
||||
Round the value of expression @var{expr} towards zero to the nearest
|
||||
integer. For example, "trunc(-1.5)" is "-1.0".
|
||||
|
||||
@item while(cond, expr)
|
||||
Evaluate expression @var{expr} while the expression @var{cond} is
|
||||
non-zero, and returns the value of the last @var{expr} evaluation, or
|
||||
NAN if @var{cond} was always false.
|
||||
@end table
|
||||
|
||||
The following constants are available:
|
||||
@table @option
|
||||
@item PI
|
||||
area of the unit disc, approximately 3.14
|
||||
@item E
|
||||
exp(1) (Euler's number), approximately 2.718
|
||||
@item PHI
|
||||
golden ratio (1+sqrt(5))/2, approximately 1.618
|
||||
@end table
|
||||
|
||||
Assuming that an expression is considered "true" if it has a non-zero
|
||||
value, note that:
|
||||
|
||||
@code{*} works like AND
|
||||
|
||||
@code{+} works like OR
|
||||
|
||||
For example the construct:
|
||||
@example
|
||||
if (A AND B) then C
|
||||
@end example
|
||||
is equivalent to:
|
||||
@example
|
||||
if(A*B, C)
|
||||
@end example
|
||||
|
||||
In your C code, you can extend the list of unary and binary functions,
|
||||
and define recognized constants, so that they are available for your
|
||||
expressions.
|
||||
|
||||
The evaluator also recognizes the International System unit prefixes.
|
||||
If 'i' is appended after the prefix, binary prefixes are used, which
|
||||
are based on powers of 1024 instead of powers of 1000.
|
||||
The 'B' postfix multiplies the value by 8, and can be appended after a
|
||||
unit prefix or used alone. This allows using for example 'KB', 'MiB',
|
||||
'G' and 'B' as number postfix.
|
||||
|
||||
The list of available International System prefixes follows, with
|
||||
indication of the corresponding powers of 10 and of 2.
|
||||
@table @option
|
||||
@item y
|
||||
10^-24 / 2^-80
|
||||
@item z
|
||||
10^-21 / 2^-70
|
||||
@item a
|
||||
10^-18 / 2^-60
|
||||
@item f
|
||||
10^-15 / 2^-50
|
||||
@item p
|
||||
10^-12 / 2^-40
|
||||
@item n
|
||||
10^-9 / 2^-30
|
||||
@item u
|
||||
10^-6 / 2^-20
|
||||
@item m
|
||||
10^-3 / 2^-10
|
||||
@item c
|
||||
10^-2
|
||||
@item d
|
||||
10^-1
|
||||
@item h
|
||||
10^2
|
||||
@item k
|
||||
10^3 / 2^10
|
||||
@item K
|
||||
10^3 / 2^10
|
||||
@item M
|
||||
10^6 / 2^20
|
||||
@item G
|
||||
10^9 / 2^30
|
||||
@item T
|
||||
10^12 / 2^40
|
||||
@item P
|
||||
10^15 / 2^50
|
||||
@item E
|
||||
10^18 / 2^60
|
||||
@item Z
|
||||
10^21 / 2^70
|
||||
@item Y
|
||||
10^24 / 2^80
|
||||
@end table
|
||||
|
||||
@c man end
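
As an illustrative sketch of driving the evaluator from C through the
@file{libavutil/eval.h} interface described above (the constant name "N" and
its value are assumptions made up for this example, not part of the manual):

@example
#include <stdio.h>
#include <libavutil/eval.h>

int main(void)
{
    static const char * const const_names[] = { "N", NULL };
    double const_values[] = { 10 };
    double res;

    /* Evaluate "if(gt(N,5), N*2, 0)" with N bound to 10:
     * gt(10,5) is 1, so the result is N*2 = 20. */
    int ret = av_expr_parse_and_eval(&res, "if(gt(N,5), N*2, 0)",
                                     const_names, const_values,
                                     NULL, NULL, NULL, NULL, /* no custom functions */
                                     NULL, 0, NULL);
    if (ret < 0)
        return 1;
    printf("result: %f\n", res);
    return 0;
}
@end example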
|
||||
@@ -7,33 +7,24 @@ FFMPEG_LIBS= libavdevice \
|
||||
libswscale \
|
||||
libavutil \
|
||||
|
||||
CFLAGS += -Wall -g
|
||||
CFLAGS += -Wall -O2 -g
|
||||
CFLAGS := $(shell pkg-config --cflags $(FFMPEG_LIBS)) $(CFLAGS)
|
||||
LDLIBS := $(shell pkg-config --libs $(FFMPEG_LIBS)) $(LDLIBS)
|
||||
|
||||
EXAMPLES= avio_dir_cmd \
|
||||
avio_reading \
|
||||
decoding_encoding \
|
||||
demuxing_decoding \
|
||||
extract_mvs \
|
||||
EXAMPLES= decoding_encoding \
|
||||
demuxing \
|
||||
filtering_video \
|
||||
filtering_audio \
|
||||
http_multiclient \
|
||||
metadata \
|
||||
muxing \
|
||||
remuxing \
|
||||
resampling_audio \
|
||||
scaling_video \
|
||||
transcode_aac \
|
||||
transcoding \
|
||||
|
||||
OBJS=$(addsuffix .o,$(EXAMPLES))
|
||||
|
||||
# the following examples make explicit use of the math library
|
||||
avcodec: LDLIBS += -lm
|
||||
decoding_encoding: LDLIBS += -lm
|
||||
muxing: LDLIBS += -lm
|
||||
resampling_audio: LDLIBS += -lm
|
||||
|
||||
.phony: all clean-test clean
|
||||
|
||||
|
||||
@@ -5,19 +5,14 @@ Both following use cases rely on pkg-config and make, thus make sure
|
||||
that you have them installed and working on your system.
|
||||
|
||||
|
||||
Method 1: build the installed examples in a generic read/write user directory
|
||||
1) Build the installed examples in a generic read/write user directory
|
||||
|
||||
Copy to a read/write user directory and just use "make", it will link
|
||||
to the libraries on your system, assuming the PKG_CONFIG_PATH is
|
||||
correctly configured.
|
||||
|
||||
Method 2: build the examples in-tree
|
||||
2) Build the examples in-tree
|
||||
|
||||
Assuming you are in the source FFmpeg checkout directory, you need to build
|
||||
FFmpeg (no need to make install in any prefix). Then just run "make examples".
|
||||
This will build the examples using the FFmpeg build system. You can clean those
|
||||
examples using "make examplesclean".
|
||||
|
||||
If you want to try the dedicated Makefile examples (to emulate the first
|
||||
method), go into doc/examples and run a command such as
|
||||
PKG_CONFIG_PATH=pc-uninstalled make.
|
||||
FFmpeg (no need to make install in any prefix). Then you can go into the
|
||||
doc/examples and run a command such as PKG_CONFIG_PATH=pc-uninstalled make.
|
||||
|
||||
@@ -1,180 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2014 Lukasz Marek
|
||||
*
|
||||
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
* of this software and associated documentation files (the "Software"), to deal
|
||||
* in the Software without restriction, including without limitation the rights
|
||||
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
* copies of the Software, and to permit persons to whom the Software is
|
||||
* furnished to do so, subject to the following conditions:
|
||||
*
|
||||
* The above copyright notice and this permission notice shall be included in
|
||||
* all copies or substantial portions of the Software.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
|
||||
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
||||
* THE SOFTWARE.
|
||||
*/
|
||||
|
||||
#include <libavcodec/avcodec.h>
|
||||
#include <libavformat/avformat.h>
|
||||
#include <libavformat/avio.h>
|
||||
|
||||
static const char *type_string(int type)
|
||||
{
|
||||
switch (type) {
|
||||
case AVIO_ENTRY_DIRECTORY:
|
||||
return "<DIR>";
|
||||
case AVIO_ENTRY_FILE:
|
||||
return "<FILE>";
|
||||
case AVIO_ENTRY_BLOCK_DEVICE:
|
||||
return "<BLOCK DEVICE>";
|
||||
case AVIO_ENTRY_CHARACTER_DEVICE:
|
||||
return "<CHARACTER DEVICE>";
|
||||
case AVIO_ENTRY_NAMED_PIPE:
|
||||
return "<PIPE>";
|
||||
case AVIO_ENTRY_SYMBOLIC_LINK:
|
||||
return "<LINK>";
|
||||
case AVIO_ENTRY_SOCKET:
|
||||
return "<SOCKET>";
|
||||
case AVIO_ENTRY_SERVER:
|
||||
return "<SERVER>";
|
||||
case AVIO_ENTRY_SHARE:
|
||||
return "<SHARE>";
|
||||
case AVIO_ENTRY_WORKGROUP:
|
||||
return "<WORKGROUP>";
|
||||
case AVIO_ENTRY_UNKNOWN:
|
||||
default:
|
||||
break;
|
||||
}
|
||||
return "<UNKNOWN>";
|
||||
}
|
||||
|
||||
static int list_op(const char *input_dir)
|
||||
{
|
||||
AVIODirEntry *entry = NULL;
|
||||
AVIODirContext *ctx = NULL;
|
||||
int cnt, ret;
|
||||
char filemode[4], uid_and_gid[20];
|
||||
|
||||
if ((ret = avio_open_dir(&ctx, input_dir, NULL)) < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot open directory: %s.\n", av_err2str(ret));
|
||||
goto fail;
|
||||
}
|
||||
|
||||
cnt = 0;
|
||||
for (;;) {
|
||||
if ((ret = avio_read_dir(ctx, &entry)) < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot list directory: %s.\n", av_err2str(ret));
|
||||
goto fail;
|
||||
}
|
||||
if (!entry)
|
||||
break;
|
||||
if (entry->filemode == -1) {
|
||||
snprintf(filemode, 4, "???");
|
||||
} else {
|
||||
snprintf(filemode, 4, "%3"PRIo64, entry->filemode);
|
||||
}
|
||||
snprintf(uid_and_gid, 20, "%"PRId64"(%"PRId64")", entry->user_id, entry->group_id);
|
||||
if (cnt == 0)
|
||||
av_log(NULL, AV_LOG_INFO, "%-9s %12s %30s %10s %s %16s %16s %16s\n",
|
||||
"TYPE", "SIZE", "NAME", "UID(GID)", "UGO", "MODIFIED",
|
||||
"ACCESSED", "STATUS_CHANGED");
|
||||
av_log(NULL, AV_LOG_INFO, "%-9s %12"PRId64" %30s %10s %s %16"PRId64" %16"PRId64" %16"PRId64"\n",
|
||||
type_string(entry->type),
|
||||
entry->size,
|
||||
entry->name,
|
||||
uid_and_gid,
|
||||
filemode,
|
||||
entry->modification_timestamp,
|
||||
entry->access_timestamp,
|
||||
entry->status_change_timestamp);
|
||||
avio_free_directory_entry(&entry);
|
||||
cnt++;
|
||||
};
|
||||
|
||||
fail:
|
||||
avio_close_dir(&ctx);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int del_op(const char *url)
|
||||
{
|
||||
int ret = avpriv_io_delete(url);
|
||||
if (ret < 0)
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot delete '%s': %s.\n", url, av_err2str(ret));
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int move_op(const char *src, const char *dst)
|
||||
{
|
||||
int ret = avpriv_io_move(src, dst);
|
||||
if (ret < 0)
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot move '%s' into '%s': %s.\n", src, dst, av_err2str(ret));
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
||||
static void usage(const char *program_name)
|
||||
{
|
||||
fprintf(stderr, "usage: %s OPERATION entry1 [entry2]\n"
|
||||
"API example program to show how to manipulate resources "
|
||||
"accessed through AVIOContext.\n"
|
||||
"OPERATIONS:\n"
|
||||
"list list content of the directory\n"
|
||||
"move rename content in directory\n"
|
||||
"del delete content in directory\n",
|
||||
program_name);
|
||||
}
|
||||
|
||||
int main(int argc, char *argv[])
|
||||
{
|
||||
const char *op = NULL;
|
||||
int ret;
|
||||
|
||||
av_log_set_level(AV_LOG_DEBUG);
|
||||
|
||||
if (argc < 2) {
|
||||
usage(argv[0]);
|
||||
return 1;
|
||||
}
|
||||
|
||||
/* register codecs and formats and other lavf/lavc components*/
|
||||
av_register_all();
|
||||
avformat_network_init();
|
||||
|
||||
op = argv[1];
|
||||
if (strcmp(op, "list") == 0) {
|
||||
if (argc < 3) {
|
||||
av_log(NULL, AV_LOG_INFO, "Missing argument for list operation.\n");
|
||||
ret = AVERROR(EINVAL);
|
||||
} else {
|
||||
ret = list_op(argv[2]);
|
||||
}
|
||||
} else if (strcmp(op, "del") == 0) {
|
||||
if (argc < 3) {
|
||||
av_log(NULL, AV_LOG_INFO, "Missing argument for del operation.\n");
|
||||
ret = AVERROR(EINVAL);
|
||||
} else {
|
||||
ret = del_op(argv[2]);
|
||||
}
|
||||
} else if (strcmp(op, "move") == 0) {
|
||||
if (argc < 4) {
|
||||
av_log(NULL, AV_LOG_INFO, "Missing argument for move operation.\n");
|
||||
ret = AVERROR(EINVAL);
|
||||
} else {
|
||||
ret = move_op(argv[2], argv[3]);
|
||||
}
|
||||
} else {
|
||||
av_log(NULL, AV_LOG_INFO, "Invalid operation %s\n", op);
|
||||
ret = AVERROR(EINVAL);
|
||||
}
|
||||
|
||||
avformat_network_deinit();
|
||||
|
||||
return ret < 0 ? 1 : 0;
|
||||
}
|
||||
@@ -1,134 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2014 Stefano Sabatini
|
||||
*
|
||||
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
* of this software and associated documentation files (the "Software"), to deal
|
||||
* in the Software without restriction, including without limitation the rights
|
||||
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
* copies of the Software, and to permit persons to whom the Software is
|
||||
* furnished to do so, subject to the following conditions:
|
||||
*
|
||||
* The above copyright notice and this permission notice shall be included in
|
||||
* all copies or substantial portions of the Software.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
|
||||
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
||||
* THE SOFTWARE.
|
||||
*/
|
||||
|
||||
/**
|
||||
* @file
|
||||
* libavformat AVIOContext API example.
|
||||
*
|
||||
* Make libavformat demuxer access media content through a custom
|
||||
* AVIOContext read callback.
|
||||
* @example avio_reading.c
|
||||
*/
|
||||
|
||||
#include <libavcodec/avcodec.h>
|
||||
#include <libavformat/avformat.h>
|
||||
#include <libavformat/avio.h>
|
||||
#include <libavutil/file.h>
|
||||
|
||||
struct buffer_data {
|
||||
uint8_t *ptr;
|
||||
size_t size; ///< size left in the buffer
|
||||
};
|
||||
|
||||
static int read_packet(void *opaque, uint8_t *buf, int buf_size)
|
||||
{
|
||||
struct buffer_data *bd = (struct buffer_data *)opaque;
|
||||
buf_size = FFMIN(buf_size, bd->size);
|
||||
|
||||
printf("ptr:%p size:%zu\n", bd->ptr, bd->size);
|
||||
|
||||
/* copy internal buffer data to buf */
|
||||
memcpy(buf, bd->ptr, buf_size);
|
||||
bd->ptr += buf_size;
|
||||
bd->size -= buf_size;
|
||||
|
||||
return buf_size;
|
||||
}
|
||||
|
||||
int main(int argc, char *argv[])
|
||||
{
|
||||
AVFormatContext *fmt_ctx = NULL;
|
||||
AVIOContext *avio_ctx = NULL;
|
||||
uint8_t *buffer = NULL, *avio_ctx_buffer = NULL;
|
||||
size_t buffer_size, avio_ctx_buffer_size = 4096;
|
||||
char *input_filename = NULL;
|
||||
int ret = 0;
|
||||
struct buffer_data bd = { 0 };
|
||||
|
||||
if (argc != 2) {
|
||||
fprintf(stderr, "usage: %s input_file\n"
|
||||
"API example program to show how to read from a custom buffer "
|
||||
"accessed through AVIOContext.\n", argv[0]);
|
||||
return 1;
|
||||
}
|
||||
input_filename = argv[1];
|
||||
|
||||
/* register codecs and formats and other lavf/lavc components*/
|
||||
av_register_all();
|
||||
|
||||
/* slurp file content into buffer */
|
||||
ret = av_file_map(input_filename, &buffer, &buffer_size, 0, NULL);
|
||||
if (ret < 0)
|
||||
goto end;
|
||||
|
||||
/* fill opaque structure used by the AVIOContext read callback */
|
||||
bd.ptr = buffer;
|
||||
bd.size = buffer_size;
|
||||
|
||||
if (!(fmt_ctx = avformat_alloc_context())) {
|
||||
ret = AVERROR(ENOMEM);
|
||||
goto end;
|
||||
}
|
||||
|
||||
avio_ctx_buffer = av_malloc(avio_ctx_buffer_size);
|
||||
if (!avio_ctx_buffer) {
|
||||
ret = AVERROR(ENOMEM);
|
||||
goto end;
|
||||
}
|
||||
avio_ctx = avio_alloc_context(avio_ctx_buffer, avio_ctx_buffer_size,
|
||||
0, &bd, &read_packet, NULL, NULL);
|
||||
if (!avio_ctx) {
|
||||
ret = AVERROR(ENOMEM);
|
||||
goto end;
|
||||
}
|
||||
fmt_ctx->pb = avio_ctx;
|
||||
|
||||
ret = avformat_open_input(&fmt_ctx, NULL, NULL, NULL);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Could not open input\n");
|
||||
goto end;
|
||||
}
|
||||
|
||||
ret = avformat_find_stream_info(fmt_ctx, NULL);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Could not find stream information\n");
|
||||
goto end;
|
||||
}
|
||||
|
||||
av_dump_format(fmt_ctx, 0, input_filename, 0);
|
||||
|
||||
end:
|
||||
avformat_close_input(&fmt_ctx);
|
||||
/* note: the internal buffer could have changed, and be != avio_ctx_buffer */
|
||||
if (avio_ctx) {
|
||||
av_freep(&avio_ctx->buffer);
|
||||
av_freep(&avio_ctx);
|
||||
}
|
||||
av_file_unmap(buffer, buffer_size);
|
||||
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
|
||||
return 1;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
@@ -24,10 +24,10 @@
|
||||
* @file
|
||||
* libavcodec API use example.
|
||||
*
|
||||
* @example decoding_encoding.c
|
||||
* Note that libavcodec only handles codecs (mpeg, mpeg4, etc...),
|
||||
* not file formats (avi, vob, mp4, mov, mkv, mxf, flv, mpegts, mpegps, etc...). See library 'libavformat' for the
|
||||
* format handling
|
||||
* @example doc/examples/decoding_encoding.c
|
||||
*/
|
||||
|
||||
#include <math.h>
|
||||
@@ -79,7 +79,7 @@ static int select_channel_layout(AVCodec *codec)
|
||||
{
|
||||
const uint64_t *p;
|
||||
uint64_t best_ch_layout = 0;
|
||||
int best_nb_channels = 0;
|
||||
int best_nb_channells = 0;
|
||||
|
||||
if (!codec->channel_layouts)
|
||||
return AV_CH_LAYOUT_STEREO;
|
||||
@@ -88,9 +88,9 @@ static int select_channel_layout(AVCodec *codec)
|
||||
while (*p) {
|
||||
int nb_channels = av_get_channel_layout_nb_channels(*p);
|
||||
|
||||
if (nb_channels > best_nb_channels) {
|
||||
if (nb_channels > best_nb_channells) {
|
||||
best_ch_layout = *p;
|
||||
best_nb_channels = nb_channels;
|
||||
best_nb_channells = nb_channels;
|
||||
}
|
||||
p++;
|
||||
}
|
||||
@@ -156,7 +156,7 @@ static void audio_encode_example(const char *filename)
|
||||
}
|
||||
|
||||
/* frame containing input raw audio */
|
||||
frame = av_frame_alloc();
|
||||
frame = avcodec_alloc_frame();
|
||||
if (!frame) {
|
||||
fprintf(stderr, "Could not allocate audio frame\n");
|
||||
exit(1);
|
||||
@@ -170,10 +170,6 @@ static void audio_encode_example(const char *filename)
|
||||
* we calculate the size of the samples buffer in bytes */
|
||||
buffer_size = av_samples_get_buffer_size(NULL, c->channels, c->frame_size,
|
||||
c->sample_fmt, 0);
|
||||
if (buffer_size < 0) {
|
||||
fprintf(stderr, "Could not get sample buffer size\n");
|
||||
exit(1);
|
||||
}
|
||||
samples = av_malloc(buffer_size);
|
||||
if (!samples) {
|
||||
fprintf(stderr, "Could not allocate %d bytes for samples buffer\n",
|
||||
@@ -191,7 +187,7 @@ static void audio_encode_example(const char *filename)
|
||||
/* encode a single tone sound */
|
||||
t = 0;
|
||||
tincr = 2 * M_PI * 440.0 / c->sample_rate;
|
||||
for (i = 0; i < 200; i++) {
|
||||
for(i=0;i<200;i++) {
|
||||
av_init_packet(&pkt);
|
||||
pkt.data = NULL; // packet data will be allocated by the encoder
|
||||
pkt.size = 0;
|
||||
@@ -231,7 +227,7 @@ static void audio_encode_example(const char *filename)
|
||||
fclose(f);
|
||||
|
||||
av_freep(&samples);
|
||||
av_frame_free(&frame);
|
||||
avcodec_free_frame(&frame);
|
||||
avcodec_close(c);
|
||||
av_free(c);
|
||||
}
|
||||
@@ -245,7 +241,7 @@ static void audio_decode_example(const char *outfilename, const char *filename)
|
||||
AVCodecContext *c= NULL;
|
||||
int len;
|
||||
FILE *f, *outfile;
|
||||
uint8_t inbuf[AUDIO_INBUF_SIZE + AV_INPUT_BUFFER_PADDING_SIZE];
|
||||
uint8_t inbuf[AUDIO_INBUF_SIZE + FF_INPUT_BUFFER_PADDING_SIZE];
|
||||
AVPacket avpkt;
|
||||
AVFrame *decoded_frame = NULL;
|
||||
|
||||
@@ -288,15 +284,15 @@ static void audio_decode_example(const char *outfilename, const char *filename)
|
||||
avpkt.size = fread(inbuf, 1, AUDIO_INBUF_SIZE, f);
|
||||
|
||||
while (avpkt.size > 0) {
|
||||
int i, ch;
|
||||
int got_frame = 0;
|
||||
|
||||
if (!decoded_frame) {
|
||||
if (!(decoded_frame = av_frame_alloc())) {
|
||||
if (!(decoded_frame = avcodec_alloc_frame())) {
|
||||
fprintf(stderr, "Could not allocate audio frame\n");
|
||||
exit(1);
|
||||
}
|
||||
}
|
||||
} else
|
||||
avcodec_get_frame_defaults(decoded_frame);
|
||||
|
||||
len = avcodec_decode_audio4(c, decoded_frame, &got_frame, &avpkt);
|
||||
if (len < 0) {
|
||||
@@ -305,15 +301,10 @@ static void audio_decode_example(const char *outfilename, const char *filename)
|
||||
}
|
||||
if (got_frame) {
|
||||
/* if a frame has been decoded, output it */
|
||||
int data_size = av_get_bytes_per_sample(c->sample_fmt);
|
||||
if (data_size < 0) {
|
||||
/* This should not occur, checking just for paranoia */
|
||||
fprintf(stderr, "Failed to calculate data size\n");
|
||||
exit(1);
|
||||
}
|
||||
for (i=0; i<decoded_frame->nb_samples; i++)
|
||||
for (ch=0; ch<c->channels; ch++)
|
||||
fwrite(decoded_frame->data[ch] + data_size*i, 1, data_size, outfile);
|
||||
int data_size = av_samples_get_buffer_size(NULL, c->channels,
|
||||
decoded_frame->nb_samples,
|
||||
c->sample_fmt, 1);
|
||||
fwrite(decoded_frame->data[0], 1, data_size, outfile);
|
||||
}
|
||||
avpkt.size -= len;
|
||||
avpkt.data += len;
|
||||
@@ -338,7 +329,7 @@ static void audio_decode_example(const char *outfilename, const char *filename)
|
||||
|
||||
avcodec_close(c);
|
||||
av_free(c);
|
||||
av_frame_free(&decoded_frame);
|
||||
avcodec_free_frame(&decoded_frame);
|
||||
}
|
||||
|
||||
/*
|
||||
@@ -375,18 +366,12 @@ static void video_encode_example(const char *filename, int codec_id)
|
||||
c->width = 352;
|
||||
c->height = 288;
|
||||
/* frames per second */
|
||||
c->time_base = (AVRational){1,25};
|
||||
/* emit one intra frame every ten frames
|
||||
* check frame pict_type before passing frame
|
||||
* to encoder, if frame->pict_type is AV_PICTURE_TYPE_I
|
||||
* then gop_size is ignored and the output of encoder
|
||||
* will always be I frame irrespective to gop_size
|
||||
*/
|
||||
c->gop_size = 10;
|
||||
c->max_b_frames = 1;
|
||||
c->time_base= (AVRational){1,25};
|
||||
c->gop_size = 10; /* emit one intra frame every ten frames */
|
||||
c->max_b_frames=1;
|
||||
c->pix_fmt = AV_PIX_FMT_YUV420P;
|
||||
|
||||
if (codec_id == AV_CODEC_ID_H264)
|
||||
if(codec_id == AV_CODEC_ID_H264)
|
||||
av_opt_set(c->priv_data, "preset", "slow", 0);
|
||||
|
||||
/* open it */
|
||||
@@ -401,7 +386,7 @@ static void video_encode_example(const char *filename, int codec_id)
|
||||
exit(1);
|
||||
}
|
||||
|
||||
frame = av_frame_alloc();
|
||||
frame = avcodec_alloc_frame();
|
||||
if (!frame) {
|
||||
fprintf(stderr, "Could not allocate video frame\n");
|
||||
exit(1);
|
||||
@@ -420,7 +405,7 @@ static void video_encode_example(const char *filename, int codec_id)
|
||||
}
|
||||
|
||||
/* encode 1 second of video */
|
||||
for (i = 0; i < 25; i++) {
|
||||
for(i=0;i<25;i++) {
|
||||
av_init_packet(&pkt);
|
||||
pkt.data = NULL; // packet data will be allocated by the encoder
|
||||
pkt.size = 0;
|
||||
@@ -428,15 +413,15 @@ static void video_encode_example(const char *filename, int codec_id)
|
||||
fflush(stdout);
|
||||
/* prepare a dummy image */
|
||||
/* Y */
|
||||
for (y = 0; y < c->height; y++) {
|
||||
for (x = 0; x < c->width; x++) {
|
||||
for(y=0;y<c->height;y++) {
|
||||
for(x=0;x<c->width;x++) {
|
||||
frame->data[0][y * frame->linesize[0] + x] = x + y + i * 3;
|
||||
}
|
||||
}
|
||||
|
||||
/* Cb and Cr */
|
||||
for (y = 0; y < c->height/2; y++) {
|
||||
for (x = 0; x < c->width/2; x++) {
|
||||
for(y=0;y<c->height/2;y++) {
|
||||
for(x=0;x<c->width/2;x++) {
|
||||
frame->data[1][y * frame->linesize[1] + x] = 128 + y + i * 2;
|
||||
frame->data[2][y * frame->linesize[2] + x] = 64 + x + i * 5;
|
||||
}
|
||||
@@ -482,7 +467,7 @@ static void video_encode_example(const char *filename, int codec_id)
|
||||
avcodec_close(c);
|
||||
av_free(c);
|
||||
av_freep(&frame->data[0]);
|
||||
av_frame_free(&frame);
|
||||
avcodec_free_frame(&frame);
|
||||
printf("\n");
|
||||
}
|
||||
|
||||
@@ -496,10 +481,10 @@ static void pgm_save(unsigned char *buf, int wrap, int xsize, int ysize,
|
||||
FILE *f;
|
||||
int i;
|
||||
|
||||
f = fopen(filename,"w");
|
||||
fprintf(f, "P5\n%d %d\n%d\n", xsize, ysize, 255);
|
||||
for (i = 0; i < ysize; i++)
|
||||
fwrite(buf + i * wrap, 1, xsize, f);
|
||||
f=fopen(filename,"w");
|
||||
fprintf(f,"P5\n%d %d\n%d\n",xsize,ysize,255);
|
||||
for(i=0;i<ysize;i++)
|
||||
fwrite(buf + i * wrap,1,xsize,f);
|
||||
fclose(f);
|
||||
}
|
||||
|
||||
@@ -521,7 +506,7 @@ static int decode_write_frame(const char *outfilename, AVCodecContext *avctx,
|
||||
/* the picture is allocated by the decoder, no need to free it */
|
||||
snprintf(buf, sizeof(buf), outfilename, *frame_count);
|
||||
pgm_save(frame->data[0], frame->linesize[0],
|
||||
frame->width, frame->height, buf);
|
||||
avctx->width, avctx->height, buf);
|
||||
(*frame_count)++;
|
||||
}
|
||||
if (pkt->data) {
|
||||
@@ -538,13 +523,13 @@ static void video_decode_example(const char *outfilename, const char *filename)
|
||||
int frame_count;
|
||||
FILE *f;
|
||||
AVFrame *frame;
|
||||
uint8_t inbuf[INBUF_SIZE + AV_INPUT_BUFFER_PADDING_SIZE];
|
||||
uint8_t inbuf[INBUF_SIZE + FF_INPUT_BUFFER_PADDING_SIZE];
|
||||
AVPacket avpkt;
|
||||
|
||||
av_init_packet(&avpkt);
|
||||
|
||||
/* set end of buffer to 0 (this ensures that no overreading happens for damaged mpeg streams) */
|
||||
memset(inbuf + INBUF_SIZE, 0, AV_INPUT_BUFFER_PADDING_SIZE);
|
||||
memset(inbuf + INBUF_SIZE, 0, FF_INPUT_BUFFER_PADDING_SIZE);
|
||||
|
||||
printf("Decode video file %s to %s\n", filename, outfilename);
|
||||
|
||||
@@ -561,8 +546,8 @@ static void video_decode_example(const char *outfilename, const char *filename)
|
||||
exit(1);
|
||||
}
|
||||
|
||||
if (codec->capabilities & AV_CODEC_CAP_TRUNCATED)
|
||||
c->flags |= AV_CODEC_FLAG_TRUNCATED; // we do not send complete frames
|
||||
if(codec->capabilities&CODEC_CAP_TRUNCATED)
|
||||
c->flags|= CODEC_FLAG_TRUNCATED; /* we do not send complete frames */
|
||||
|
||||
/* For some codecs, such as msmpeg4 and mpeg4, width and height
|
||||
MUST be initialized there because this information is not
|
||||
@@ -580,14 +565,14 @@ static void video_decode_example(const char *outfilename, const char *filename)
|
||||
exit(1);
|
||||
}
|
||||
|
||||
frame = av_frame_alloc();
|
||||
frame = avcodec_alloc_frame();
|
||||
if (!frame) {
|
||||
fprintf(stderr, "Could not allocate video frame\n");
|
||||
exit(1);
|
||||
}
|
||||
|
||||
frame_count = 0;
|
||||
for (;;) {
|
||||
for(;;) {
|
||||
avpkt.size = fread(inbuf, 1, INBUF_SIZE, f);
|
||||
if (avpkt.size == 0)
|
||||
break;
|
||||
@@ -624,7 +609,7 @@ static void video_decode_example(const char *outfilename, const char *filename)
|
||||
|
||||
avcodec_close(c);
|
||||
av_free(c);
|
||||
av_frame_free(&frame);
|
||||
avcodec_free_frame(&frame);
|
||||
printf("\n");
|
||||
}
|
||||
|
||||
@@ -641,7 +626,7 @@ int main(int argc, char **argv)
|
||||
"This program generates a synthetic stream and encodes it to a file\n"
|
||||
"named test.h264, test.mp2 or test.mpg depending on output_type.\n"
|
||||
"The encoded stream is then decoded and written to a raw data output.\n"
|
||||
"output_type must be chosen between 'h264', 'mp2', 'mpg'.\n",
|
||||
"output_type must be choosen between 'h264', 'mp2', 'mpg'.\n",
|
||||
argv[0]);
|
||||
return 1;
|
||||
}
|
||||
@@ -651,7 +636,7 @@ int main(int argc, char **argv)
|
||||
video_encode_example("test.h264", AV_CODEC_ID_H264);
|
||||
} else if (!strcmp(output_type, "mp2")) {
|
||||
audio_encode_example("test.mp2");
|
||||
audio_decode_example("test.pcm", "test.mp2");
|
||||
audio_decode_example("test.sw", "test.mp2");
|
||||
} else if (!strcmp(output_type, "mpg")) {
|
||||
video_encode_example("test.mpg", AV_CODEC_ID_MPEG1VIDEO);
|
||||
video_decode_example("test%02d.pgm", "test.mpg");
|
||||
|
||||
@@ -22,11 +22,11 @@
|
||||
|
||||
/**
|
||||
* @file
|
||||
* Demuxing and decoding example.
|
||||
* libavformat demuxing API use example.
|
||||
*
|
||||
* Show how to use the libavformat and libavcodec API to demux and
|
||||
* decode audio and video data.
|
||||
* @example demuxing_decoding.c
|
||||
* @example doc/examples/demuxing.c
|
||||
*/
|
||||
|
||||
#include <libavutil/imgutils.h>
|
||||
@@ -36,8 +36,6 @@
|
||||
|
||||
static AVFormatContext *fmt_ctx = NULL;
|
||||
static AVCodecContext *video_dec_ctx = NULL, *audio_dec_ctx;
|
||||
static int width, height;
|
||||
static enum AVPixelFormat pix_fmt;
|
||||
static AVStream *video_stream = NULL, *audio_stream = NULL;
|
||||
static const char *src_filename = NULL;
|
||||
static const char *video_dst_filename = NULL;
|
||||
@@ -49,56 +47,29 @@ static uint8_t *video_dst_data[4] = {NULL};
|
||||
static int video_dst_linesize[4];
|
||||
static int video_dst_bufsize;
|
||||
|
||||
static uint8_t **audio_dst_data = NULL;
|
||||
static int audio_dst_linesize;
|
||||
static int audio_dst_bufsize;
|
||||
|
||||
static int video_stream_idx = -1, audio_stream_idx = -1;
|
||||
static AVFrame *frame = NULL;
|
||||
static AVPacket pkt;
|
||||
static int video_frame_count = 0;
|
||||
static int audio_frame_count = 0;
|
||||
|
||||
/* The different ways of decoding and managing data memory. You are not
|
||||
* supposed to support all the modes in your application but pick the one most
|
||||
* appropriate to your needs. Look for the use of api_mode in this example to
|
||||
* see what are the differences of API usage between them */
|
||||
enum {
|
||||
API_MODE_OLD = 0, /* old method, deprecated */
|
||||
API_MODE_NEW_API_REF_COUNT = 1, /* new method, using the frame reference counting */
|
||||
API_MODE_NEW_API_NO_REF_COUNT = 2, /* new method, without reference counting */
|
||||
};
|
||||
|
||||
static int api_mode = API_MODE_OLD;
|
||||
|
||||
static int decode_packet(int *got_frame, int cached)
|
||||
{
|
||||
int ret = 0;
|
||||
int decoded = pkt.size;
|
||||
|
||||
*got_frame = 0;
|
||||
|
||||
if (pkt.stream_index == video_stream_idx) {
|
||||
/* decode video frame */
|
||||
ret = avcodec_decode_video2(video_dec_ctx, frame, got_frame, &pkt);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error decoding video frame (%s)\n", av_err2str(ret));
|
||||
fprintf(stderr, "Error decoding video frame\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
if (*got_frame) {
|
||||
|
||||
if (frame->width != width || frame->height != height ||
|
||||
frame->format != pix_fmt) {
|
||||
/* To handle this change, one could call av_image_alloc again and
|
||||
* decode the following frames into another rawvideo file. */
|
||||
fprintf(stderr, "Error: Width, height and pixel format have to be "
|
||||
"constant in a rawvideo file, but the width, height or "
|
||||
"pixel format of the input video changed:\n"
|
||||
"old: width = %d, height = %d, format = %s\n"
|
||||
"new: width = %d, height = %d, format = %s\n",
|
||||
width, height, av_get_pix_fmt_name(pix_fmt),
|
||||
frame->width, frame->height,
|
||||
av_get_pix_fmt_name(frame->format));
|
||||
return -1;
|
||||
}
|
||||
|
||||
printf("video_frame%s n:%d coded_n:%d pts:%s\n",
|
||||
cached ? "(cached)" : "",
|
||||
video_frame_count++, frame->coded_picture_number,
|
||||
@@ -108,7 +79,7 @@ static int decode_packet(int *got_frame, int cached)
|
||||
* this is required since rawvideo expects non aligned data */
|
||||
av_image_copy(video_dst_data, video_dst_linesize,
|
||||
(const uint8_t **)(frame->data), frame->linesize,
|
||||
pix_fmt, width, height);
|
||||
video_dec_ctx->pix_fmt, video_dec_ctx->width, video_dec_ctx->height);
|
||||
|
||||
/* write to rawvideo file */
|
||||
fwrite(video_dst_data[0], 1, video_dst_bufsize, video_dst_file);
|
||||
@@ -117,50 +88,49 @@ static int decode_packet(int *got_frame, int cached)
|
||||
/* decode audio frame */
|
||||
ret = avcodec_decode_audio4(audio_dec_ctx, frame, got_frame, &pkt);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error decoding audio frame (%s)\n", av_err2str(ret));
|
||||
fprintf(stderr, "Error decoding audio frame\n");
|
||||
return ret;
|
||||
}
|
||||
/* Some audio decoders decode only part of the packet, and have to be
|
||||
* called again with the remainder of the packet data.
|
||||
* Sample: fate-suite/lossless-audio/luckynight-partial.shn
|
||||
* Also, some decoders might over-read the packet. */
|
||||
decoded = FFMIN(ret, pkt.size);
|
||||
|
||||
if (*got_frame) {
|
||||
size_t unpadded_linesize = frame->nb_samples * av_get_bytes_per_sample(frame->format);
|
||||
printf("audio_frame%s n:%d nb_samples:%d pts:%s\n",
|
||||
cached ? "(cached)" : "",
|
||||
audio_frame_count++, frame->nb_samples,
|
||||
av_ts2timestr(frame->pts, &audio_dec_ctx->time_base));
|
||||
|
||||
/* Write the raw audio data samples of the first plane. This works
|
||||
* fine for packed formats (e.g. AV_SAMPLE_FMT_S16). However,
|
||||
* most audio decoders output planar audio, which uses a separate
|
||||
* plane of audio samples for each channel (e.g. AV_SAMPLE_FMT_S16P).
|
||||
* In other words, this code will write only the first audio channel
|
||||
* in these cases.
|
||||
* You should use libswresample or libavfilter to convert the frame
|
||||
* to packed data. */
|
||||
fwrite(frame->extended_data[0], 1, unpadded_linesize, audio_dst_file);
|
||||
ret = av_samples_alloc(audio_dst_data, &audio_dst_linesize, av_frame_get_channels(frame),
|
||||
frame->nb_samples, frame->format, 1);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Could not allocate audio buffer\n");
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
|
||||
/* TODO: extend return code of the av_samples_* functions so that this call is not needed */
|
||||
audio_dst_bufsize =
|
||||
av_samples_get_buffer_size(NULL, av_frame_get_channels(frame),
|
||||
frame->nb_samples, frame->format, 1);
|
||||
|
||||
/* copy audio data to destination buffer:
|
||||
* this is required since rawaudio expects non aligned data */
|
||||
av_samples_copy(audio_dst_data, frame->data, 0, 0,
|
||||
frame->nb_samples, av_frame_get_channels(frame), frame->format);
|
||||
|
||||
/* write to rawaudio file */
|
||||
fwrite(audio_dst_data[0], 1, audio_dst_bufsize, audio_dst_file);
|
||||
av_freep(&audio_dst_data[0]);
|
||||
}
|
||||
}
|
||||
|
||||
/* If we use the new API with reference counting, we own the data and need
|
||||
* to de-reference it when we don't use it anymore */
|
||||
if (*got_frame && api_mode == API_MODE_NEW_API_REF_COUNT)
|
||||
av_frame_unref(frame);
|
||||
|
||||
return decoded;
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int open_codec_context(int *stream_idx,
|
||||
AVFormatContext *fmt_ctx, enum AVMediaType type)
|
||||
{
|
||||
int ret, stream_index;
|
||||
int ret;
|
||||
AVStream *st;
|
||||
AVCodecContext *dec_ctx = NULL;
|
||||
AVCodec *dec = NULL;
|
||||
AVDictionary *opts = NULL;
|
||||
|
||||
ret = av_find_best_stream(fmt_ctx, type, -1, -1, NULL, 0);
|
||||
if (ret < 0) {
|
||||
@@ -168,8 +138,8 @@ static int open_codec_context(int *stream_idx,
|
||||
av_get_media_type_string(type), src_filename);
|
||||
return ret;
|
||||
} else {
|
||||
stream_index = ret;
|
||||
st = fmt_ctx->streams[stream_index];
|
||||
*stream_idx = ret;
|
||||
st = fmt_ctx->streams[*stream_idx];
|
||||
|
||||
/* find decoder for the stream */
|
||||
dec_ctx = st->codec;
|
||||
@@ -177,18 +147,14 @@ static int open_codec_context(int *stream_idx,
|
||||
if (!dec) {
|
||||
fprintf(stderr, "Failed to find %s codec\n",
|
||||
av_get_media_type_string(type));
|
||||
return AVERROR(EINVAL);
|
||||
return ret;
|
||||
}
|
||||
|
||||
/* Init the decoders, with or without reference counting */
|
||||
if (api_mode == API_MODE_NEW_API_REF_COUNT)
|
||||
av_dict_set(&opts, "refcounted_frames", "1", 0);
|
||||
if ((ret = avcodec_open2(dec_ctx, dec, &opts)) < 0) {
|
||||
if ((ret = avcodec_open2(dec_ctx, dec, NULL)) < 0) {
|
||||
fprintf(stderr, "Failed to open %s codec\n",
|
||||
av_get_media_type_string(type));
|
||||
return ret;
|
||||
}
|
||||
*stream_idx = stream_index;
|
||||
}
|
||||
|
||||
return 0;
|
||||
@@ -227,31 +193,15 @@ int main (int argc, char **argv)
|
||||
{
|
||||
int ret = 0, got_frame;
|
||||
|
||||
if (argc != 4 && argc != 5) {
|
||||
fprintf(stderr, "usage: %s [-refcount=<old|new_norefcount|new_refcount>] "
|
||||
"input_file video_output_file audio_output_file\n"
|
||||
if (argc != 4) {
|
||||
fprintf(stderr, "usage: %s input_file video_output_file audio_output_file\n"
|
||||
"API example program to show how to read frames from an input file.\n"
|
||||
"This program reads frames from a file, decodes them, and writes decoded\n"
|
||||
"video frames to a rawvideo file named video_output_file, and decoded\n"
|
||||
"audio frames to a rawaudio file named audio_output_file.\n\n"
|
||||
"If the -refcount option is specified, the program use the\n"
|
||||
"reference counting frame system which allows keeping a copy of\n"
|
||||
"the data for longer than one decode call. If unset, it's using\n"
|
||||
"the classic old method.\n"
|
||||
"audio frames to a rawaudio file named audio_output_file.\n"
|
||||
"\n", argv[0]);
|
||||
exit(1);
|
||||
}
|
||||
if (argc == 5) {
|
||||
const char *mode = argv[1] + strlen("-refcount=");
|
||||
if (!strcmp(mode, "old")) api_mode = API_MODE_OLD;
|
||||
else if (!strcmp(mode, "new_norefcount")) api_mode = API_MODE_NEW_API_NO_REF_COUNT;
|
||||
else if (!strcmp(mode, "new_refcount")) api_mode = API_MODE_NEW_API_REF_COUNT;
|
||||
else {
|
||||
fprintf(stderr, "unknow mode '%s'\n", mode);
|
||||
exit(1);
|
||||
}
|
||||
argv++;
|
||||
}
|
||||
src_filename = argv[1];
|
||||
video_dst_filename = argv[2];
|
||||
audio_dst_filename = argv[3];
|
||||
@@ -283,11 +233,9 @@ int main (int argc, char **argv)
|
||||
}
|
||||
|
||||
/* allocate image where the decoded image will be put */
|
||||
width = video_dec_ctx->width;
|
||||
height = video_dec_ctx->height;
|
||||
pix_fmt = video_dec_ctx->pix_fmt;
|
||||
ret = av_image_alloc(video_dst_data, video_dst_linesize,
|
||||
width, height, pix_fmt, 1);
|
||||
video_dec_ctx->width, video_dec_ctx->height,
|
||||
video_dec_ctx->pix_fmt, 1);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Could not allocate raw video buffer\n");
|
||||
goto end;
|
||||
@@ -296,14 +244,25 @@ int main (int argc, char **argv)
    }

    if (open_codec_context(&audio_stream_idx, fmt_ctx, AVMEDIA_TYPE_AUDIO) >= 0) {
        int nb_planes;

        audio_stream = fmt_ctx->streams[audio_stream_idx];
        audio_dec_ctx = audio_stream->codec;
        audio_dst_file = fopen(audio_dst_filename, "wb");
        if (!audio_dst_file) {
            fprintf(stderr, "Could not open destination file %s\n", audio_dst_filename);
            fprintf(stderr, "Could not open destination file %s\n", video_dst_filename);
            ret = 1;
            goto end;
        }

        nb_planes = av_sample_fmt_is_planar(audio_dec_ctx->sample_fmt) ?
            audio_dec_ctx->channels : 1;
        audio_dst_data = av_mallocz(sizeof(uint8_t *) * nb_planes);
        if (!audio_dst_data) {
            fprintf(stderr, "Could not allocate audio data buffers\n");
            ret = AVERROR(ENOMEM);
            goto end;
        }
    }

    /* dump input information to stderr */
@@ -315,12 +274,7 @@ int main (int argc, char **argv)
        goto end;
    }

    /* When using the new API, you need to use the libavutil/frame.h API, while
     * the classic frame management is available in libavcodec */
    if (api_mode == API_MODE_OLD)
        frame = avcodec_alloc_frame();
    else
        frame = av_frame_alloc();
    frame = avcodec_alloc_frame();
    if (!frame) {
        fprintf(stderr, "Could not allocate frame\n");
        ret = AVERROR(ENOMEM);
@@ -339,15 +293,8 @@ int main (int argc, char **argv)

    /* read frames from the file */
    while (av_read_frame(fmt_ctx, &pkt) >= 0) {
        AVPacket orig_pkt = pkt;
        do {
            ret = decode_packet(&got_frame, 0);
            if (ret < 0)
                break;
            pkt.data += ret;
            pkt.size -= ret;
        } while (pkt.size > 0);
        av_free_packet(&orig_pkt);
        decode_packet(&got_frame, 0);
        av_free_packet(&pkt);
    }

    /* flush cached frames */
@@ -362,46 +309,34 @@ int main (int argc, char **argv)
    if (video_stream) {
        printf("Play the output video file with the command:\n"
               "ffplay -f rawvideo -pix_fmt %s -video_size %dx%d %s\n",
               av_get_pix_fmt_name(pix_fmt), width, height,
               av_get_pix_fmt_name(video_dec_ctx->pix_fmt), video_dec_ctx->width, video_dec_ctx->height,
               video_dst_filename);
    }

    if (audio_stream) {
        enum AVSampleFormat sfmt = audio_dec_ctx->sample_fmt;
        int n_channels = audio_dec_ctx->channels;
        const char *fmt;

        if (av_sample_fmt_is_planar(sfmt)) {
            const char *packed = av_get_sample_fmt_name(sfmt);
            printf("Warning: the sample format the decoder produced is planar "
                   "(%s). This example will output the first channel only.\n",
                   packed ? packed : "?");
            sfmt = av_get_packed_sample_fmt(sfmt);
            n_channels = 1;
        }

        if ((ret = get_format_from_sample_fmt(&fmt, sfmt)) < 0)
        if ((ret = get_format_from_sample_fmt(&fmt, audio_dec_ctx->sample_fmt)) < 0)
            goto end;

        printf("Play the output audio file with the command:\n"
               "ffplay -f %s -ac %d -ar %d %s\n",
               fmt, n_channels, audio_dec_ctx->sample_rate,
               fmt, audio_dec_ctx->channels, audio_dec_ctx->sample_rate,
               audio_dst_filename);
    }

end:
    avcodec_close(video_dec_ctx);
    avcodec_close(audio_dec_ctx);
    if (video_dec_ctx)
        avcodec_close(video_dec_ctx);
    if (audio_dec_ctx)
        avcodec_close(audio_dec_ctx);
    avformat_close_input(&fmt_ctx);
    if (video_dst_file)
        fclose(video_dst_file);
    if (audio_dst_file)
        fclose(audio_dst_file);
    if (api_mode == API_MODE_OLD)
        avcodec_free_frame(&frame);
    else
        av_frame_free(&frame);
    av_free(frame);
    av_free(video_dst_data[0]);
    av_free(audio_dst_data);

    return ret < 0;
}
@@ -1,185 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2012 Stefano Sabatini
|
||||
* Copyright (c) 2014 Clément Bœsch
|
||||
*
|
||||
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
* of this software and associated documentation files (the "Software"), to deal
|
||||
* in the Software without restriction, including without limitation the rights
|
||||
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
* copies of the Software, and to permit persons to whom the Software is
|
||||
* furnished to do so, subject to the following conditions:
|
||||
*
|
||||
* The above copyright notice and this permission notice shall be included in
|
||||
* all copies or substantial portions of the Software.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
|
||||
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
||||
* THE SOFTWARE.
|
||||
*/
|
||||
|
||||
#include <libavutil/motion_vector.h>
|
||||
#include <libavformat/avformat.h>
|
||||
|
||||
static AVFormatContext *fmt_ctx = NULL;
|
||||
static AVCodecContext *video_dec_ctx = NULL;
|
||||
static AVStream *video_stream = NULL;
|
||||
static const char *src_filename = NULL;
|
||||
|
||||
static int video_stream_idx = -1;
|
||||
static AVFrame *frame = NULL;
|
||||
static AVPacket pkt;
|
||||
static int video_frame_count = 0;
|
||||
|
||||
static int decode_packet(int *got_frame, int cached)
|
||||
{
|
||||
int decoded = pkt.size;
|
||||
|
||||
*got_frame = 0;
|
||||
|
||||
if (pkt.stream_index == video_stream_idx) {
|
||||
int ret = avcodec_decode_video2(video_dec_ctx, frame, got_frame, &pkt);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error decoding video frame (%s)\n", av_err2str(ret));
|
||||
return ret;
|
||||
}
|
||||
|
||||
if (*got_frame) {
|
||||
int i;
|
||||
AVFrameSideData *sd;
|
||||
|
||||
video_frame_count++;
|
||||
sd = av_frame_get_side_data(frame, AV_FRAME_DATA_MOTION_VECTORS);
|
||||
if (sd) {
|
||||
const AVMotionVector *mvs = (const AVMotionVector *)sd->data;
|
||||
for (i = 0; i < sd->size / sizeof(*mvs); i++) {
|
||||
const AVMotionVector *mv = &mvs[i];
|
||||
printf("%d,%2d,%2d,%2d,%4d,%4d,%4d,%4d,0x%"PRIx64"\n",
|
||||
video_frame_count, mv->source,
|
||||
mv->w, mv->h, mv->src_x, mv->src_y,
|
||||
mv->dst_x, mv->dst_y, mv->flags);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return decoded;
|
||||
}
|
||||
|
||||
static int open_codec_context(int *stream_idx,
|
||||
AVFormatContext *fmt_ctx, enum AVMediaType type)
|
||||
{
|
||||
int ret;
|
||||
AVStream *st;
|
||||
AVCodecContext *dec_ctx = NULL;
|
||||
AVCodec *dec = NULL;
|
||||
AVDictionary *opts = NULL;
|
||||
|
||||
ret = av_find_best_stream(fmt_ctx, type, -1, -1, NULL, 0);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Could not find %s stream in input file '%s'\n",
|
||||
av_get_media_type_string(type), src_filename);
|
||||
return ret;
|
||||
} else {
|
||||
*stream_idx = ret;
|
||||
st = fmt_ctx->streams[*stream_idx];
|
||||
|
||||
/* find decoder for the stream */
|
||||
dec_ctx = st->codec;
|
||||
dec = avcodec_find_decoder(dec_ctx->codec_id);
|
||||
if (!dec) {
|
||||
fprintf(stderr, "Failed to find %s codec\n",
|
||||
av_get_media_type_string(type));
|
||||
return AVERROR(EINVAL);
|
||||
}
|
||||
|
||||
/* Init the video decoder */
|
||||
av_dict_set(&opts, "flags2", "+export_mvs", 0);
|
||||
if ((ret = avcodec_open2(dec_ctx, dec, &opts)) < 0) {
|
||||
fprintf(stderr, "Failed to open %s codec\n",
|
||||
av_get_media_type_string(type));
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int main(int argc, char **argv)
|
||||
{
|
||||
int ret = 0, got_frame;
|
||||
|
||||
if (argc != 2) {
|
||||
fprintf(stderr, "Usage: %s <video>\n", argv[0]);
|
||||
exit(1);
|
||||
}
|
||||
src_filename = argv[1];
|
||||
|
||||
av_register_all();
|
||||
|
||||
if (avformat_open_input(&fmt_ctx, src_filename, NULL, NULL) < 0) {
|
||||
fprintf(stderr, "Could not open source file %s\n", src_filename);
|
||||
exit(1);
|
||||
}
|
||||
|
||||
if (avformat_find_stream_info(fmt_ctx, NULL) < 0) {
|
||||
fprintf(stderr, "Could not find stream information\n");
|
||||
exit(1);
|
||||
}
|
||||
|
||||
if (open_codec_context(&video_stream_idx, fmt_ctx, AVMEDIA_TYPE_VIDEO) >= 0) {
|
||||
video_stream = fmt_ctx->streams[video_stream_idx];
|
||||
video_dec_ctx = video_stream->codec;
|
||||
}
|
||||
|
||||
av_dump_format(fmt_ctx, 0, src_filename, 0);
|
||||
|
||||
if (!video_stream) {
|
||||
fprintf(stderr, "Could not find video stream in the input, aborting\n");
|
||||
ret = 1;
|
||||
goto end;
|
||||
}
|
||||
|
||||
frame = av_frame_alloc();
|
||||
if (!frame) {
|
||||
fprintf(stderr, "Could not allocate frame\n");
|
||||
ret = AVERROR(ENOMEM);
|
||||
goto end;
|
||||
}
|
||||
|
||||
printf("framenum,source,blockw,blockh,srcx,srcy,dstx,dsty,flags\n");
|
||||
|
||||
/* initialize packet, set data to NULL, let the demuxer fill it */
|
||||
av_init_packet(&pkt);
|
||||
pkt.data = NULL;
|
||||
pkt.size = 0;
|
||||
|
||||
/* read frames from the file */
|
||||
while (av_read_frame(fmt_ctx, &pkt) >= 0) {
|
||||
AVPacket orig_pkt = pkt;
|
||||
do {
|
||||
ret = decode_packet(&got_frame, 0);
|
||||
if (ret < 0)
|
||||
break;
|
||||
pkt.data += ret;
|
||||
pkt.size -= ret;
|
||||
} while (pkt.size > 0);
|
||||
av_free_packet(&orig_pkt);
|
||||
}
|
||||
|
||||
/* flush cached frames */
|
||||
pkt.data = NULL;
|
||||
pkt.size = 0;
|
||||
do {
|
||||
decode_packet(&got_frame, 1);
|
||||
} while (got_frame);
|
||||
|
||||
end:
|
||||
avcodec_close(video_dec_ctx);
|
||||
avformat_close_input(&fmt_ctx);
|
||||
av_frame_free(&frame);
|
||||
return ret < 0;
|
||||
}
|
||||
@@ -1,365 +0,0 @@
|
||||
/*
|
||||
* copyright (c) 2013 Andrew Kelley
|
||||
*
|
||||
* This file is part of FFmpeg.
|
||||
*
|
||||
* FFmpeg is free software; you can redistribute it and/or
|
||||
* modify it under the terms of the GNU Lesser General Public
|
||||
* License as published by the Free Software Foundation; either
|
||||
* version 2.1 of the License, or (at your option) any later version.
|
||||
*
|
||||
* FFmpeg is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
* Lesser General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU Lesser General Public
|
||||
* License along with FFmpeg; if not, write to the Free Software
|
||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
|
||||
*/
|
||||
|
||||
/**
|
||||
* @file
|
||||
* libavfilter API usage example.
|
||||
*
|
||||
* @example filter_audio.c
|
||||
* This example will generate a sine wave audio,
|
||||
* pass it through a simple filter chain, and then compute the MD5 checksum of
|
||||
* the output data.
|
||||
*
|
||||
* The filter chain it uses is:
|
||||
* (input) -> abuffer -> volume -> aformat -> abuffersink -> (output)
|
||||
*
|
||||
* abuffer: This provides the endpoint where you can feed the decoded samples.
|
||||
* volume: In this example we hardcode it to 0.90.
|
||||
* aformat: This converts the samples to the samplefreq, channel layout,
|
||||
* and sample format required by the audio device.
|
||||
* abuffersink: This provides the endpoint where you can read the samples after
|
||||
* they have passed through the filter chain.
|
||||
*/
|
||||
|
||||
#include <inttypes.h>
|
||||
#include <math.h>
|
||||
#include <stdio.h>
|
||||
#include <stdlib.h>
|
||||
|
||||
#include "libavutil/channel_layout.h"
|
||||
#include "libavutil/md5.h"
|
||||
#include "libavutil/mem.h"
|
||||
#include "libavutil/opt.h"
|
||||
#include "libavutil/samplefmt.h"
|
||||
|
||||
#include "libavfilter/avfilter.h"
|
||||
#include "libavfilter/buffersink.h"
|
||||
#include "libavfilter/buffersrc.h"
|
||||
|
||||
#define INPUT_SAMPLERATE 48000
|
||||
#define INPUT_FORMAT AV_SAMPLE_FMT_FLTP
|
||||
#define INPUT_CHANNEL_LAYOUT AV_CH_LAYOUT_5POINT0
|
||||
|
||||
#define VOLUME_VAL 0.90
|
||||
|
||||
static int init_filter_graph(AVFilterGraph **graph, AVFilterContext **src,
|
||||
AVFilterContext **sink)
|
||||
{
|
||||
AVFilterGraph *filter_graph;
|
||||
AVFilterContext *abuffer_ctx;
|
||||
AVFilter *abuffer;
|
||||
AVFilterContext *volume_ctx;
|
||||
AVFilter *volume;
|
||||
AVFilterContext *aformat_ctx;
|
||||
AVFilter *aformat;
|
||||
AVFilterContext *abuffersink_ctx;
|
||||
AVFilter *abuffersink;
|
||||
|
||||
AVDictionary *options_dict = NULL;
|
||||
uint8_t options_str[1024];
|
||||
uint8_t ch_layout[64];
|
||||
|
||||
int err;
|
||||
|
||||
/* Create a new filtergraph, which will contain all the filters. */
|
||||
filter_graph = avfilter_graph_alloc();
|
||||
if (!filter_graph) {
|
||||
fprintf(stderr, "Unable to create filter graph.\n");
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
|
||||
/* Create the abuffer filter;
|
||||
* it will be used for feeding the data into the graph. */
|
||||
abuffer = avfilter_get_by_name("abuffer");
|
||||
if (!abuffer) {
|
||||
fprintf(stderr, "Could not find the abuffer filter.\n");
|
||||
return AVERROR_FILTER_NOT_FOUND;
|
||||
}
|
||||
|
||||
abuffer_ctx = avfilter_graph_alloc_filter(filter_graph, abuffer, "src");
|
||||
if (!abuffer_ctx) {
|
||||
fprintf(stderr, "Could not allocate the abuffer instance.\n");
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
|
||||
/* Set the filter options through the AVOptions API. */
|
||||
av_get_channel_layout_string(ch_layout, sizeof(ch_layout), 0, INPUT_CHANNEL_LAYOUT);
|
||||
av_opt_set (abuffer_ctx, "channel_layout", ch_layout, AV_OPT_SEARCH_CHILDREN);
|
||||
av_opt_set (abuffer_ctx, "sample_fmt", av_get_sample_fmt_name(INPUT_FORMAT), AV_OPT_SEARCH_CHILDREN);
|
||||
av_opt_set_q (abuffer_ctx, "time_base", (AVRational){ 1, INPUT_SAMPLERATE }, AV_OPT_SEARCH_CHILDREN);
|
||||
av_opt_set_int(abuffer_ctx, "sample_rate", INPUT_SAMPLERATE, AV_OPT_SEARCH_CHILDREN);
|
||||
|
||||
/* Now initialize the filter; we pass NULL options, since we have already
|
||||
* set all the options above. */
|
||||
err = avfilter_init_str(abuffer_ctx, NULL);
|
||||
if (err < 0) {
|
||||
fprintf(stderr, "Could not initialize the abuffer filter.\n");
|
||||
return err;
|
||||
}
|
||||
|
||||
/* Create volume filter. */
|
||||
volume = avfilter_get_by_name("volume");
|
||||
if (!volume) {
|
||||
fprintf(stderr, "Could not find the volume filter.\n");
|
||||
return AVERROR_FILTER_NOT_FOUND;
|
||||
}
|
||||
|
||||
volume_ctx = avfilter_graph_alloc_filter(filter_graph, volume, "volume");
|
||||
if (!volume_ctx) {
|
||||
fprintf(stderr, "Could not allocate the volume instance.\n");
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
|
||||
/* A different way of passing the options is as key/value pairs in a
|
||||
* dictionary. */
|
||||
av_dict_set(&options_dict, "volume", AV_STRINGIFY(VOLUME_VAL), 0);
|
||||
err = avfilter_init_dict(volume_ctx, &options_dict);
|
||||
av_dict_free(&options_dict);
|
||||
if (err < 0) {
|
||||
fprintf(stderr, "Could not initialize the volume filter.\n");
|
||||
return err;
|
||||
}
|
||||
|
||||
/* Create the aformat filter;
|
||||
* it ensures that the output is of the format we want. */
|
||||
aformat = avfilter_get_by_name("aformat");
|
||||
if (!aformat) {
|
||||
fprintf(stderr, "Could not find the aformat filter.\n");
|
||||
return AVERROR_FILTER_NOT_FOUND;
|
||||
}
|
||||
|
||||
aformat_ctx = avfilter_graph_alloc_filter(filter_graph, aformat, "aformat");
|
||||
if (!aformat_ctx) {
|
||||
fprintf(stderr, "Could not allocate the aformat instance.\n");
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
|
||||
/* A third way of passing the options is in a string of the form
|
||||
* key1=value1:key2=value2.... */
|
||||
snprintf(options_str, sizeof(options_str),
|
||||
"sample_fmts=%s:sample_rates=%d:channel_layouts=0x%"PRIx64,
|
||||
av_get_sample_fmt_name(AV_SAMPLE_FMT_S16), 44100,
|
||||
(uint64_t)AV_CH_LAYOUT_STEREO);
|
||||
err = avfilter_init_str(aformat_ctx, options_str);
|
||||
if (err < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Could not initialize the aformat filter.\n");
|
||||
return err;
|
||||
}
|
||||
|
||||
/* Finally create the abuffersink filter;
|
||||
* it will be used to get the filtered data out of the graph. */
|
||||
abuffersink = avfilter_get_by_name("abuffersink");
|
||||
if (!abuffersink) {
|
||||
fprintf(stderr, "Could not find the abuffersink filter.\n");
|
||||
return AVERROR_FILTER_NOT_FOUND;
|
||||
}
|
||||
|
||||
abuffersink_ctx = avfilter_graph_alloc_filter(filter_graph, abuffersink, "sink");
|
||||
if (!abuffersink_ctx) {
|
||||
fprintf(stderr, "Could not allocate the abuffersink instance.\n");
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
|
||||
/* This filter takes no options. */
|
||||
err = avfilter_init_str(abuffersink_ctx, NULL);
|
||||
if (err < 0) {
|
||||
fprintf(stderr, "Could not initialize the abuffersink instance.\n");
|
||||
return err;
|
||||
}
|
||||
|
||||
/* Connect the filters;
|
||||
* in this simple case the filters just form a linear chain. */
|
||||
err = avfilter_link(abuffer_ctx, 0, volume_ctx, 0);
|
||||
if (err >= 0)
|
||||
err = avfilter_link(volume_ctx, 0, aformat_ctx, 0);
|
||||
if (err >= 0)
|
||||
err = avfilter_link(aformat_ctx, 0, abuffersink_ctx, 0);
|
||||
if (err < 0) {
|
||||
fprintf(stderr, "Error connecting filters\n");
|
||||
return err;
|
||||
}
|
||||
|
||||
/* Configure the graph. */
|
||||
err = avfilter_graph_config(filter_graph, NULL);
|
||||
if (err < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Error configuring the filter graph\n");
|
||||
return err;
|
||||
}
|
||||
|
||||
*graph = filter_graph;
|
||||
*src = abuffer_ctx;
|
||||
*sink = abuffersink_ctx;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* Do something useful with the filtered data: this simple
|
||||
* example just prints the MD5 checksum of each plane to stdout. */
|
||||
static int process_output(struct AVMD5 *md5, AVFrame *frame)
|
||||
{
|
||||
int planar = av_sample_fmt_is_planar(frame->format);
|
||||
int channels = av_get_channel_layout_nb_channels(frame->channel_layout);
|
||||
int planes = planar ? channels : 1;
|
||||
int bps = av_get_bytes_per_sample(frame->format);
|
||||
int plane_size = bps * frame->nb_samples * (planar ? 1 : channels);
|
||||
int i, j;
|
||||
|
||||
for (i = 0; i < planes; i++) {
|
||||
uint8_t checksum[16];
|
||||
|
||||
av_md5_init(md5);
|
||||
av_md5_sum(checksum, frame->extended_data[i], plane_size);
|
||||
|
||||
fprintf(stdout, "plane %d: 0x", i);
|
||||
for (j = 0; j < sizeof(checksum); j++)
|
||||
fprintf(stdout, "%02X", checksum[j]);
|
||||
fprintf(stdout, "\n");
|
||||
}
|
||||
fprintf(stdout, "\n");
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* Construct a frame of audio data to be filtered;
|
||||
* this simple example just synthesizes a sine wave. */
|
||||
static int get_input(AVFrame *frame, int frame_num)
|
||||
{
|
||||
int err, i, j;
|
||||
|
||||
#define FRAME_SIZE 1024
|
||||
|
||||
/* Set up the frame properties and allocate the buffer for the data. */
|
||||
frame->sample_rate = INPUT_SAMPLERATE;
|
||||
frame->format = INPUT_FORMAT;
|
||||
frame->channel_layout = INPUT_CHANNEL_LAYOUT;
|
||||
frame->nb_samples = FRAME_SIZE;
|
||||
frame->pts = frame_num * FRAME_SIZE;
|
||||
|
||||
err = av_frame_get_buffer(frame, 0);
|
||||
if (err < 0)
|
||||
return err;
|
||||
|
||||
/* Fill the data for each channel. */
|
||||
for (i = 0; i < 5; i++) {
|
||||
float *data = (float*)frame->extended_data[i];
|
||||
|
||||
for (j = 0; j < frame->nb_samples; j++)
|
||||
data[j] = sin(2 * M_PI * (frame_num + j) * (i + 1) / FRAME_SIZE);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int main(int argc, char *argv[])
|
||||
{
|
||||
struct AVMD5 *md5;
|
||||
AVFilterGraph *graph;
|
||||
AVFilterContext *src, *sink;
|
||||
AVFrame *frame;
|
||||
uint8_t errstr[1024];
|
||||
float duration;
|
||||
int err, nb_frames, i;
|
||||
|
||||
if (argc < 2) {
|
||||
fprintf(stderr, "Usage: %s <duration>\n", argv[0]);
|
||||
return 1;
|
||||
}
|
||||
|
||||
duration = atof(argv[1]);
|
||||
nb_frames = duration * INPUT_SAMPLERATE / FRAME_SIZE;
|
||||
if (nb_frames <= 0) {
|
||||
fprintf(stderr, "Invalid duration: %s\n", argv[1]);
|
||||
return 1;
|
||||
}
|
||||
|
||||
avfilter_register_all();
|
||||
|
||||
/* Allocate the frame we will be using to store the data. */
|
||||
frame = av_frame_alloc();
|
||||
if (!frame) {
|
||||
fprintf(stderr, "Error allocating the frame\n");
|
||||
return 1;
|
||||
}
|
||||
|
||||
md5 = av_md5_alloc();
|
||||
if (!md5) {
|
||||
fprintf(stderr, "Error allocating the MD5 context\n");
|
||||
return 1;
|
||||
}
|
||||
|
||||
/* Set up the filtergraph. */
|
||||
err = init_filter_graph(&graph, &src, &sink);
|
||||
if (err < 0) {
|
||||
fprintf(stderr, "Unable to init filter graph:");
|
||||
goto fail;
|
||||
}
|
||||
|
||||
/* the main filtering loop */
|
||||
for (i = 0; i < nb_frames; i++) {
|
||||
/* get an input frame to be filtered */
|
||||
err = get_input(frame, i);
|
||||
if (err < 0) {
|
||||
fprintf(stderr, "Error generating input frame:");
|
||||
goto fail;
|
||||
}
|
||||
|
||||
/* Send the frame to the input of the filtergraph. */
|
||||
err = av_buffersrc_add_frame(src, frame);
|
||||
if (err < 0) {
|
||||
av_frame_unref(frame);
|
||||
fprintf(stderr, "Error submitting the frame to the filtergraph:");
|
||||
goto fail;
|
||||
}
|
||||
|
||||
/* Get all the filtered output that is available. */
|
||||
while ((err = av_buffersink_get_frame(sink, frame)) >= 0) {
|
||||
/* now do something with our filtered frame */
|
||||
err = process_output(md5, frame);
|
||||
if (err < 0) {
|
||||
fprintf(stderr, "Error processing the filtered frame:");
|
||||
goto fail;
|
||||
}
|
||||
av_frame_unref(frame);
|
||||
}
|
||||
|
||||
if (err == AVERROR(EAGAIN)) {
|
||||
/* Need to feed more frames in. */
|
||||
continue;
|
||||
} else if (err == AVERROR_EOF) {
|
||||
/* Nothing more to do, finish. */
|
||||
break;
|
||||
} else if (err < 0) {
|
||||
/* An error occurred. */
|
||||
fprintf(stderr, "Error filtering the data:");
|
||||
goto fail;
|
||||
}
|
||||
}
|
||||
|
||||
avfilter_graph_free(&graph);
|
||||
av_frame_free(&frame);
|
||||
av_freep(&md5);
|
||||
|
||||
return 0;
|
||||
|
||||
fail:
|
||||
av_strerror(err, errstr, sizeof(errstr));
|
||||
fprintf(stderr, "%s\n", errstr);
|
||||
return 1;
|
||||
}
|
||||
@@ -25,7 +25,7 @@
/**
 * @file
 * API example for audio decoding and filtering
 * @example filtering_audio.c
 * @example doc/examples/filtering_audio.c
 */

#include <unistd.h>
@@ -36,10 +36,9 @@
#include <libavfilter/avcodec.h>
#include <libavfilter/buffersink.h>
#include <libavfilter/buffersrc.h>
#include <libavutil/opt.h>

static const char *filter_descr = "aresample=8000,aformat=sample_fmts=s16:channel_layouts=mono";
static const char *player       = "ffplay -f s16le -ar 8000 -ac 1 -";
const char *filter_descr = "aresample=8000,aconvert=s16:mono";
const char *player       = "ffplay -f s16le -ar 8000 -ac 1 -";

static AVFormatContext *fmt_ctx;
static AVCodecContext *dec_ctx;
@@ -71,7 +70,6 @@ static int open_input_file(const char *filename)
    }
    audio_stream_index = ret;
    dec_ctx = fmt_ctx->streams[audio_stream_index]->codec;
    av_opt_set_int(dec_ctx, "refcounted_frames", 1, 0);

    /* init the audio decoder */
    if ((ret = avcodec_open2(dec_ctx, dec, NULL)) < 0) {
@@ -85,22 +83,17 @@ static int open_input_file(const char *filename)
static int init_filters(const char *filters_descr)
{
    char args[512];
    int ret = 0;
    int ret;
    AVFilter *abuffersrc  = avfilter_get_by_name("abuffer");
    AVFilter *abuffersink = avfilter_get_by_name("abuffersink");
    AVFilter *abuffersink = avfilter_get_by_name("ffabuffersink");
    AVFilterInOut *outputs = avfilter_inout_alloc();
    AVFilterInOut *inputs  = avfilter_inout_alloc();
    static const enum AVSampleFormat out_sample_fmts[] = { AV_SAMPLE_FMT_S16, -1 };
    static const int64_t out_channel_layouts[] = { AV_CH_LAYOUT_MONO, -1 };
    static const int out_sample_rates[] = { 8000, -1 };
    const enum AVSampleFormat sample_fmts[] = { AV_SAMPLE_FMT_S16, -1 };
    AVABufferSinkParams *abuffersink_params;
    const AVFilterLink *outlink;
    AVRational time_base = fmt_ctx->streams[audio_stream_index]->time_base;

    filter_graph = avfilter_graph_alloc();
    if (!outputs || !inputs || !filter_graph) {
        ret = AVERROR(ENOMEM);
        goto end;
    }

    /* buffer audio source: the decoded frames from the decoder will be inserted here. */
    if (!dec_ctx->channel_layout)
@@ -113,71 +106,37 @@ static int init_filters(const char *filters_descr)
|
||||
args, NULL, filter_graph);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer source\n");
|
||||
goto end;
|
||||
return ret;
|
||||
}
|
||||
|
||||
/* buffer audio sink: to terminate the filter chain. */
|
||||
abuffersink_params = av_abuffersink_params_alloc();
|
||||
abuffersink_params->sample_fmts = sample_fmts;
|
||||
ret = avfilter_graph_create_filter(&buffersink_ctx, abuffersink, "out",
|
||||
NULL, NULL, filter_graph);
|
||||
NULL, abuffersink_params, filter_graph);
|
||||
av_free(abuffersink_params);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer sink\n");
|
||||
goto end;
|
||||
return ret;
|
||||
}
|
||||
|
||||
ret = av_opt_set_int_list(buffersink_ctx, "sample_fmts", out_sample_fmts, -1,
|
||||
AV_OPT_SEARCH_CHILDREN);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot set output sample format\n");
|
||||
goto end;
|
||||
}
|
||||
|
||||
ret = av_opt_set_int_list(buffersink_ctx, "channel_layouts", out_channel_layouts, -1,
|
||||
AV_OPT_SEARCH_CHILDREN);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot set output channel layout\n");
|
||||
goto end;
|
||||
}
|
||||
|
||||
ret = av_opt_set_int_list(buffersink_ctx, "sample_rates", out_sample_rates, -1,
|
||||
AV_OPT_SEARCH_CHILDREN);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot set output sample rate\n");
|
||||
goto end;
|
||||
}
|
||||
|
||||
/*
|
||||
* Set the endpoints for the filter graph. The filter_graph will
|
||||
* be linked to the graph described by filters_descr.
|
||||
*/
|
||||
|
||||
/*
|
||||
* The buffer source output must be connected to the input pad of
|
||||
* the first filter described by filters_descr; since the first
|
||||
* filter input label is not specified, it is set to "in" by
|
||||
* default.
|
||||
*/
|
||||
/* Endpoints for the filter graph. */
|
||||
outputs->name = av_strdup("in");
|
||||
outputs->filter_ctx = buffersrc_ctx;
|
||||
outputs->pad_idx = 0;
|
||||
outputs->next = NULL;
|
||||
|
||||
/*
|
||||
* The buffer sink input must be connected to the output pad of
|
||||
* the last filter described by filters_descr; since the last
|
||||
* filter output label is not specified, it is set to "out" by
|
||||
* default.
|
||||
*/
|
||||
inputs->name = av_strdup("out");
|
||||
inputs->filter_ctx = buffersink_ctx;
|
||||
inputs->pad_idx = 0;
|
||||
inputs->next = NULL;
|
||||
|
||||
if ((ret = avfilter_graph_parse_ptr(filter_graph, filters_descr,
|
||||
&inputs, &outputs, NULL)) < 0)
|
||||
goto end;
|
||||
if ((ret = avfilter_graph_parse(filter_graph, filters_descr,
|
||||
&inputs, &outputs, NULL)) < 0)
|
||||
return ret;
|
||||
|
||||
if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
|
||||
goto end;
|
||||
return ret;
|
||||
|
||||
/* Print summary of the sink buffer
|
||||
* Note: args buffer is reused to store channel layout string */
|
||||
@@ -188,17 +147,14 @@ static int init_filters(const char *filters_descr)
           (char *)av_x_if_null(av_get_sample_fmt_name(outlink->format), "?"),
           args);

end:
    avfilter_inout_free(&inputs);
    avfilter_inout_free(&outputs);

    return ret;
    return 0;
}

static void print_frame(const AVFrame *frame)
static void print_samplesref(AVFilterBufferRef *samplesref)
{
    const int n = frame->nb_samples * av_get_channel_layout_nb_channels(av_frame_get_channel_layout(frame));
    const uint16_t *p     = (uint16_t*)frame->data[0];
    const AVFilterBufferRefAudioProps *props = samplesref->audio;
    const int n = props->nb_samples * av_get_channel_layout_nb_channels(props->channel_layout);
    const uint16_t *p     = (uint16_t*)samplesref->data[0];
    const uint16_t *p_end = p + n;

    while (p < p_end) {
@@ -212,12 +168,11 @@ static void print_frame(const AVFrame *frame)
int main(int argc, char **argv)
{
    int ret;
    AVPacket packet0, packet;
    AVFrame *frame = av_frame_alloc();
    AVFrame *filt_frame = av_frame_alloc();
    AVPacket packet;
    AVFrame *frame = avcodec_alloc_frame();
    int got_frame;

    if (!frame || !filt_frame) {
    if (!frame) {
        perror("Could not allocate frame");
        exit(1);
    }
@@ -226,6 +181,7 @@ int main(int argc, char **argv)
        exit(1);
    }

    avcodec_register_all();
    av_register_all();
    avfilter_register_all();

@@ -235,60 +191,54 @@ int main(int argc, char **argv)
|
||||
goto end;
|
||||
|
||||
/* read all packets */
|
||||
packet0.data = NULL;
|
||||
packet.data = NULL;
|
||||
while (1) {
|
||||
if (!packet0.data) {
|
||||
if ((ret = av_read_frame(fmt_ctx, &packet)) < 0)
|
||||
break;
|
||||
packet0 = packet;
|
||||
}
|
||||
AVFilterBufferRef *samplesref;
|
||||
if ((ret = av_read_frame(fmt_ctx, &packet)) < 0)
|
||||
break;
|
||||
|
||||
if (packet.stream_index == audio_stream_index) {
|
||||
avcodec_get_frame_defaults(frame);
|
||||
got_frame = 0;
|
||||
ret = avcodec_decode_audio4(dec_ctx, frame, &got_frame, &packet);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Error decoding audio\n");
|
||||
continue;
|
||||
}
|
||||
packet.size -= ret;
|
||||
packet.data += ret;
|
||||
|
||||
if (got_frame) {
|
||||
/* push the audio data from decoded frame into the filtergraph */
|
||||
if (av_buffersrc_add_frame_flags(buffersrc_ctx, frame, 0) < 0) {
|
||||
if (av_buffersrc_add_frame(buffersrc_ctx, frame, 0) < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Error while feeding the audio filtergraph\n");
|
||||
break;
|
||||
}
|
||||
|
||||
/* pull filtered audio from the filtergraph */
|
||||
while (1) {
|
||||
ret = av_buffersink_get_frame(buffersink_ctx, filt_frame);
|
||||
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
|
||||
ret = av_buffersink_get_buffer_ref(buffersink_ctx, &samplesref, 0);
|
||||
if(ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
|
||||
break;
|
||||
if (ret < 0)
|
||||
if(ret < 0)
|
||||
goto end;
|
||||
print_frame(filt_frame);
|
||||
av_frame_unref(filt_frame);
|
||||
if (samplesref) {
|
||||
print_samplesref(samplesref);
|
||||
avfilter_unref_bufferp(&samplesref);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (packet.size <= 0)
|
||||
av_free_packet(&packet0);
|
||||
} else {
|
||||
/* discard non-wanted packets */
|
||||
av_free_packet(&packet0);
|
||||
}
|
||||
av_free_packet(&packet);
|
||||
}
|
||||
end:
|
||||
avfilter_graph_free(&filter_graph);
|
||||
avcodec_close(dec_ctx);
|
||||
if (dec_ctx)
|
||||
avcodec_close(dec_ctx);
|
||||
avformat_close_input(&fmt_ctx);
|
||||
av_frame_free(&frame);
|
||||
av_frame_free(&filt_frame);
|
||||
av_freep(&frame);
|
||||
|
||||
if (ret < 0 && ret != AVERROR_EOF) {
|
||||
fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
|
||||
char buf[1024];
|
||||
av_strerror(ret, buf, sizeof(buf));
|
||||
fprintf(stderr, "Error occurred: %s\n", buf);
|
||||
exit(1);
|
||||
}
|
||||
|
||||
|
||||
@@ -24,7 +24,7 @@
/**
 * @file
 * API example for decoding and filtering
 * @example filtering_video.c
 * @example doc/examples/filtering_video.c
 */

#define _XOPEN_SOURCE 600 /* for usleep */
@@ -36,12 +36,8 @@
#include <libavfilter/avcodec.h>
#include <libavfilter/buffersink.h>
#include <libavfilter/buffersrc.h>
#include <libavutil/opt.h>

const char *filter_descr = "scale=78:24,transpose=cclock";
/* other way:
   scale=78:24 [scl]; [scl] transpose=cclock // assumes "[in]" and "[out]" to be input output pads respectively
 */
const char *filter_descr = "scale=78:24";

static AVFormatContext *fmt_ctx;
static AVCodecContext *dec_ctx;
@@ -74,7 +70,6 @@ static int open_input_file(const char *filename)
    }
    video_stream_index = ret;
    dec_ctx = fmt_ctx->streams[video_stream_index]->codec;
    av_opt_set_int(dec_ctx, "refcounted_frames", 1, 0);

    /* init the video decoder */
    if ((ret = avcodec_open2(dec_ctx, dec, NULL)) < 0) {
@@ -88,117 +83,88 @@ static int open_input_file(const char *filename)
|
||||
static int init_filters(const char *filters_descr)
|
||||
{
|
||||
char args[512];
|
||||
int ret = 0;
|
||||
int ret;
|
||||
AVFilter *buffersrc = avfilter_get_by_name("buffer");
|
||||
AVFilter *buffersink = avfilter_get_by_name("buffersink");
|
||||
AVFilter *buffersink = avfilter_get_by_name("ffbuffersink");
|
||||
AVFilterInOut *outputs = avfilter_inout_alloc();
|
||||
AVFilterInOut *inputs = avfilter_inout_alloc();
|
||||
AVRational time_base = fmt_ctx->streams[video_stream_index]->time_base;
|
||||
enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_GRAY8, AV_PIX_FMT_NONE };
|
||||
AVBufferSinkParams *buffersink_params;
|
||||
|
||||
filter_graph = avfilter_graph_alloc();
|
||||
if (!outputs || !inputs || !filter_graph) {
|
||||
ret = AVERROR(ENOMEM);
|
||||
goto end;
|
||||
}
|
||||
|
||||
/* buffer video source: the decoded frames from the decoder will be inserted here. */
|
||||
snprintf(args, sizeof(args),
|
||||
"video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
|
||||
dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
|
||||
time_base.num, time_base.den,
|
||||
dec_ctx->time_base.num, dec_ctx->time_base.den,
|
||||
dec_ctx->sample_aspect_ratio.num, dec_ctx->sample_aspect_ratio.den);
|
||||
|
||||
ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
|
||||
args, NULL, filter_graph);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot create buffer source\n");
|
||||
goto end;
|
||||
return ret;
|
||||
}
|
||||
|
||||
/* buffer video sink: to terminate the filter chain. */
|
||||
buffersink_params = av_buffersink_params_alloc();
|
||||
buffersink_params->pixel_fmts = pix_fmts;
|
||||
ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
|
||||
NULL, NULL, filter_graph);
|
||||
NULL, buffersink_params, filter_graph);
|
||||
av_free(buffersink_params);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot create buffer sink\n");
|
||||
goto end;
|
||||
return ret;
|
||||
}
|
||||
|
||||
ret = av_opt_set_int_list(buffersink_ctx, "pix_fmts", pix_fmts,
|
||||
AV_PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot set output pixel format\n");
|
||||
goto end;
|
||||
}
|
||||
|
||||
/*
|
||||
* Set the endpoints for the filter graph. The filter_graph will
|
||||
* be linked to the graph described by filters_descr.
|
||||
*/
|
||||
|
||||
/*
|
||||
* The buffer source output must be connected to the input pad of
|
||||
* the first filter described by filters_descr; since the first
|
||||
* filter input label is not specified, it is set to "in" by
|
||||
* default.
|
||||
*/
|
||||
/* Endpoints for the filter graph. */
|
||||
outputs->name = av_strdup("in");
|
||||
outputs->filter_ctx = buffersrc_ctx;
|
||||
outputs->pad_idx = 0;
|
||||
outputs->next = NULL;
|
||||
|
||||
/*
|
||||
* The buffer sink input must be connected to the output pad of
|
||||
* the last filter described by filters_descr; since the last
|
||||
* filter output label is not specified, it is set to "out" by
|
||||
* default.
|
||||
*/
|
||||
inputs->name = av_strdup("out");
|
||||
inputs->filter_ctx = buffersink_ctx;
|
||||
inputs->pad_idx = 0;
|
||||
inputs->next = NULL;
|
||||
|
||||
if ((ret = avfilter_graph_parse_ptr(filter_graph, filters_descr,
|
||||
if ((ret = avfilter_graph_parse(filter_graph, filters_descr,
|
||||
&inputs, &outputs, NULL)) < 0)
|
||||
goto end;
|
||||
return ret;
|
||||
|
||||
if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
|
||||
goto end;
|
||||
|
||||
end:
|
||||
avfilter_inout_free(&inputs);
|
||||
avfilter_inout_free(&outputs);
|
||||
|
||||
return ret;
|
||||
return ret;
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void display_frame(const AVFrame *frame, AVRational time_base)
|
||||
static void display_picref(AVFilterBufferRef *picref, AVRational time_base)
|
||||
{
|
||||
int x, y;
|
||||
uint8_t *p0, *p;
|
||||
int64_t delay;
|
||||
|
||||
if (frame->pts != AV_NOPTS_VALUE) {
|
||||
if (picref->pts != AV_NOPTS_VALUE) {
|
||||
if (last_pts != AV_NOPTS_VALUE) {
|
||||
/* sleep roughly the right amount of time;
|
||||
* usleep is in microseconds, just like AV_TIME_BASE. */
|
||||
delay = av_rescale_q(frame->pts - last_pts,
|
||||
delay = av_rescale_q(picref->pts - last_pts,
|
||||
time_base, AV_TIME_BASE_Q);
|
||||
if (delay > 0 && delay < 1000000)
|
||||
usleep(delay);
|
||||
}
|
||||
last_pts = frame->pts;
|
||||
last_pts = picref->pts;
|
||||
}
|
||||
|
||||
/* Trivial ASCII grayscale display. */
|
||||
p0 = frame->data[0];
|
||||
p0 = picref->data[0];
|
||||
puts("\033c");
|
||||
for (y = 0; y < frame->height; y++) {
|
||||
for (y = 0; y < picref->video->h; y++) {
|
||||
p = p0;
|
||||
for (x = 0; x < frame->width; x++)
|
||||
for (x = 0; x < picref->video->w; x++)
|
||||
putchar(" .-+#"[*(p++) / 52]);
|
||||
putchar('\n');
|
||||
p0 += frame->linesize[0];
|
||||
p0 += picref->linesize[0];
|
||||
}
|
||||
fflush(stdout);
|
||||
}
|
||||
@@ -207,11 +173,10 @@ int main(int argc, char **argv)
{
    int ret;
    AVPacket packet;
    AVFrame *frame = av_frame_alloc();
    AVFrame *filt_frame = av_frame_alloc();
    AVFrame *frame = avcodec_alloc_frame();
    int got_frame;

    if (!frame || !filt_frame) {
    if (!frame) {
        perror("Could not allocate frame");
        exit(1);
    }
@@ -220,6 +185,7 @@ int main(int argc, char **argv)
        exit(1);
    }

    avcodec_register_all();
    av_register_all();
    avfilter_register_all();

@@ -230,10 +196,12 @@ int main(int argc, char **argv)
|
||||
|
||||
/* read all packets */
|
||||
while (1) {
|
||||
AVFilterBufferRef *picref;
|
||||
if ((ret = av_read_frame(fmt_ctx, &packet)) < 0)
|
||||
break;
|
||||
|
||||
if (packet.stream_index == video_stream_index) {
|
||||
avcodec_get_frame_defaults(frame);
|
||||
got_frame = 0;
|
||||
ret = avcodec_decode_video2(dec_ctx, frame, &got_frame, &packet);
|
||||
if (ret < 0) {
|
||||
@@ -245,35 +213,39 @@ int main(int argc, char **argv)
|
||||
frame->pts = av_frame_get_best_effort_timestamp(frame);
|
||||
|
||||
/* push the decoded frame into the filtergraph */
|
||||
if (av_buffersrc_add_frame_flags(buffersrc_ctx, frame, AV_BUFFERSRC_FLAG_KEEP_REF) < 0) {
|
||||
if (av_buffersrc_add_frame(buffersrc_ctx, frame, 0) < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Error while feeding the filtergraph\n");
|
||||
break;
|
||||
}
|
||||
|
||||
/* pull filtered frames from the filtergraph */
|
||||
/* pull filtered pictures from the filtergraph */
|
||||
while (1) {
|
||||
ret = av_buffersink_get_frame(buffersink_ctx, filt_frame);
|
||||
ret = av_buffersink_get_buffer_ref(buffersink_ctx, &picref, 0);
|
||||
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
|
||||
break;
|
||||
if (ret < 0)
|
||||
goto end;
|
||||
display_frame(filt_frame, buffersink_ctx->inputs[0]->time_base);
|
||||
av_frame_unref(filt_frame);
|
||||
|
||||
if (picref) {
|
||||
display_picref(picref, buffersink_ctx->inputs[0]->time_base);
|
||||
avfilter_unref_bufferp(&picref);
|
||||
}
|
||||
}
|
||||
av_frame_unref(frame);
|
||||
}
|
||||
}
|
||||
av_free_packet(&packet);
|
||||
}
|
||||
end:
|
||||
avfilter_graph_free(&filter_graph);
|
||||
avcodec_close(dec_ctx);
|
||||
if (dec_ctx)
|
||||
avcodec_close(dec_ctx);
|
||||
avformat_close_input(&fmt_ctx);
|
||||
av_frame_free(&frame);
|
||||
av_frame_free(&filt_frame);
|
||||
av_freep(&frame);
|
||||
|
||||
if (ret < 0 && ret != AVERROR_EOF) {
|
||||
fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
|
||||
char buf[1024];
|
||||
av_strerror(ret, buf, sizeof(buf));
|
||||
fprintf(stderr, "Error occurred: %s\n", buf);
|
||||
exit(1);
|
||||
}
|
||||
|
||||
|
||||
@@ -1,155 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2015 Stephan Holljes
|
||||
*
|
||||
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
* of this software and associated documentation files (the "Software"), to deal
|
||||
* in the Software without restriction, including without limitation the rights
|
||||
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
* copies of the Software, and to permit persons to whom the Software is
|
||||
* furnished to do so, subject to the following conditions:
|
||||
*
|
||||
* The above copyright notice and this permission notice shall be included in
|
||||
* all copies or substantial portions of the Software.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
|
||||
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
||||
* THE SOFTWARE.
|
||||
*/
|
||||
|
||||
/**
|
||||
* @file
|
||||
* libavformat multi-client network API usage example.
|
||||
*
|
||||
* @example http_multiclient.c
|
||||
* This example will serve a file without decoding or demuxing it over http.
|
||||
* Multiple clients can connect and will receive the same file.
|
||||
*/
|
||||
|
||||
#include <libavformat/avformat.h>
|
||||
#include <libavutil/opt.h>
|
||||
#include <unistd.h>
|
||||
|
||||
void process_client(AVIOContext *client, const char *in_uri)
|
||||
{
|
||||
AVIOContext *input = NULL;
|
||||
uint8_t buf[1024];
|
||||
int ret, n, reply_code;
|
||||
char *resource = NULL;
|
||||
while ((ret = avio_handshake(client)) > 0) {
|
||||
av_opt_get(client, "resource", AV_OPT_SEARCH_CHILDREN, &resource);
|
||||
// check for strlen(resource) is necessary, because av_opt_get()
|
||||
// may return empty string.
|
||||
if (resource && strlen(resource))
|
||||
break;
|
||||
}
|
||||
if (ret < 0)
|
||||
goto end;
|
||||
av_log(client, AV_LOG_TRACE, "resource=%p\n", resource);
|
||||
if (resource && resource[0] == '/' && !strcmp((resource + 1), in_uri)) {
|
||||
reply_code = 200;
|
||||
} else {
|
||||
reply_code = AVERROR_HTTP_NOT_FOUND;
|
||||
}
|
||||
if ((ret = av_opt_set_int(client, "reply_code", reply_code, AV_OPT_SEARCH_CHILDREN)) < 0) {
|
||||
av_log(client, AV_LOG_ERROR, "Failed to set reply_code: %s.\n", av_err2str(ret));
|
||||
goto end;
|
||||
}
|
||||
av_log(client, AV_LOG_TRACE, "Set reply code to %d\n", reply_code);
|
||||
|
||||
while ((ret = avio_handshake(client)) > 0);
|
||||
|
||||
if (ret < 0)
|
||||
goto end;
|
||||
|
||||
fprintf(stderr, "Handshake performed.\n");
|
||||
if (reply_code != 200)
|
||||
goto end;
|
||||
fprintf(stderr, "Opening input file.\n");
|
||||
if ((ret = avio_open2(&input, in_uri, AVIO_FLAG_READ, NULL, NULL)) < 0) {
|
||||
av_log(input, AV_LOG_ERROR, "Failed to open input: %s: %s.\n", in_uri,
|
||||
av_err2str(ret));
|
||||
goto end;
|
||||
}
|
||||
for(;;) {
|
||||
n = avio_read(input, buf, sizeof(buf));
|
||||
if (n < 0) {
|
||||
if (n == AVERROR_EOF)
|
||||
break;
|
||||
av_log(input, AV_LOG_ERROR, "Error reading from input: %s.\n",
|
||||
av_err2str(n));
|
||||
break;
|
||||
}
|
||||
avio_write(client, buf, n);
|
||||
avio_flush(client);
|
||||
}
|
||||
end:
|
||||
fprintf(stderr, "Flushing client\n");
|
||||
avio_flush(client);
|
||||
fprintf(stderr, "Closing client\n");
|
||||
avio_close(client);
|
||||
fprintf(stderr, "Closing input\n");
|
||||
avio_close(input);
|
||||
}
|
||||
|
||||
int main(int argc, char **argv)
|
||||
{
|
||||
av_log_set_level(AV_LOG_TRACE);
|
||||
AVDictionary *options = NULL;
|
||||
AVIOContext *client = NULL, *server = NULL;
|
||||
const char *in_uri, *out_uri;
|
||||
int ret, pid;
|
||||
if (argc < 3) {
|
||||
printf("usage: %s input http://hostname[:port]\n"
|
||||
"API example program to serve http to multiple clients.\n"
|
||||
"\n", argv[0]);
|
||||
return 1;
|
||||
}
|
||||
|
||||
in_uri = argv[1];
|
||||
out_uri = argv[2];
|
||||
|
||||
av_register_all();
|
||||
avformat_network_init();
|
||||
|
||||
if ((ret = av_dict_set(&options, "listen", "2", 0)) < 0) {
|
||||
fprintf(stderr, "Failed to set listen mode for server: %s\n", av_err2str(ret));
|
||||
return ret;
|
||||
}
|
||||
if ((ret = avio_open2(&server, out_uri, AVIO_FLAG_WRITE, NULL, &options)) < 0) {
|
||||
fprintf(stderr, "Failed to open server: %s\n", av_err2str(ret));
|
||||
return ret;
|
||||
}
|
||||
fprintf(stderr, "Entering main loop.\n");
|
||||
for(;;) {
|
||||
if ((ret = avio_accept(server, &client)) < 0)
|
||||
goto end;
|
||||
fprintf(stderr, "Accepted client, forking process.\n");
|
||||
// XXX: Since we don't reap our children and don't ignore signals
|
||||
// this produces zombie processes.
|
||||
pid = fork();
|
||||
if (pid < 0) {
|
||||
perror("Fork failed");
|
||||
ret = AVERROR(errno);
|
||||
goto end;
|
||||
}
|
||||
if (pid == 0) {
|
||||
fprintf(stderr, "In child.\n");
|
||||
process_client(client, in_uri);
|
||||
avio_close(server);
|
||||
exit(0);
|
||||
}
|
||||
if (pid > 0)
|
||||
avio_close(client);
|
||||
}
|
||||
end:
|
||||
avio_close(server);
|
||||
if (ret < 0 && ret != AVERROR_EOF) {
|
||||
fprintf(stderr, "Some errors occurred: %s\n", av_err2str(ret));
|
||||
return 1;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
@@ -23,7 +23,7 @@
/**
 * @file
 * Shows how the metadata API can be used in application programs.
 * @example metadata.c
 * @example doc/examples/metadata.c
 */

#include <stdio.h>

@@ -24,9 +24,9 @@
 * @file
 * libavformat API example.
 *
 * Output a media file in any supported libavformat format. The default
 * codecs are used.
 * @example muxing.c
 * Output a media file in any supported libavformat format.
 * The default codecs are used.
 * @example doc/examples/muxing.c
 */

#include <stdlib.h>
@@ -34,67 +34,31 @@
|
||||
#include <string.h>
|
||||
#include <math.h>
|
||||
|
||||
#include <libavutil/avassert.h>
|
||||
#include <libavutil/channel_layout.h>
|
||||
#include <libavutil/opt.h>
|
||||
#include <libavutil/mathematics.h>
|
||||
#include <libavutil/timestamp.h>
|
||||
#include <libavformat/avformat.h>
|
||||
#include <libswscale/swscale.h>
|
||||
#include <libswresample/swresample.h>
|
||||
|
||||
#define STREAM_DURATION 10.0
|
||||
/* 5 seconds stream duration */
|
||||
#define STREAM_DURATION 200.0
|
||||
#define STREAM_FRAME_RATE 25 /* 25 images/s */
|
||||
#define STREAM_NB_FRAMES ((int)(STREAM_DURATION * STREAM_FRAME_RATE))
|
||||
#define STREAM_PIX_FMT AV_PIX_FMT_YUV420P /* default pix_fmt */
|
||||
|
||||
#define SCALE_FLAGS SWS_BICUBIC
|
||||
static int sws_flags = SWS_BICUBIC;
|
||||
|
||||
// a wrapper around a single output AVStream
|
||||
typedef struct OutputStream {
|
||||
AVStream *st;
|
||||
/**************************************************************/
|
||||
/* audio output */
|
||||
|
||||
/* pts of the next frame that will be generated */
|
||||
int64_t next_pts;
|
||||
int samples_count;
|
||||
|
||||
AVFrame *frame;
|
||||
AVFrame *tmp_frame;
|
||||
|
||||
float t, tincr, tincr2;
|
||||
|
||||
struct SwsContext *sws_ctx;
|
||||
struct SwrContext *swr_ctx;
|
||||
} OutputStream;
|
||||
|
||||
static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt)
|
||||
{
|
||||
AVRational *time_base = &fmt_ctx->streams[pkt->stream_index]->time_base;
|
||||
|
||||
printf("pts:%s pts_time:%s dts:%s dts_time:%s duration:%s duration_time:%s stream_index:%d\n",
|
||||
av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, time_base),
|
||||
av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, time_base),
|
||||
av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, time_base),
|
||||
pkt->stream_index);
|
||||
}
|
||||
|
||||
static int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base, AVStream *st, AVPacket *pkt)
|
||||
{
|
||||
/* rescale output packet timestamp values from codec to stream timebase */
|
||||
av_packet_rescale_ts(pkt, *time_base, st->time_base);
|
||||
pkt->stream_index = st->index;
|
||||
|
||||
/* Write the compressed frame to the media file. */
|
||||
log_packet(fmt_ctx, pkt);
|
||||
return av_interleaved_write_frame(fmt_ctx, pkt);
|
||||
}
|
||||
static float t, tincr, tincr2;
|
||||
static int16_t *samples;
|
||||
static int audio_input_frame_size;
|
||||
|
||||
/* Add an output stream. */
|
||||
static void add_stream(OutputStream *ost, AVFormatContext *oc,
|
||||
AVCodec **codec,
|
||||
enum AVCodecID codec_id)
|
||||
static AVStream *add_stream(AVFormatContext *oc, AVCodec **codec,
|
||||
enum AVCodecID codec_id)
|
||||
{
|
||||
AVCodecContext *c;
|
||||
int i;
|
||||
AVStream *st;
|
||||
|
||||
/* find the encoder */
|
||||
*codec = avcodec_find_encoder(codec_id);
|
||||
@@ -104,38 +68,21 @@ static void add_stream(OutputStream *ost, AVFormatContext *oc,
|
||||
exit(1);
|
||||
}
|
||||
|
||||
ost->st = avformat_new_stream(oc, *codec);
|
||||
if (!ost->st) {
|
||||
st = avformat_new_stream(oc, *codec);
|
||||
if (!st) {
|
||||
fprintf(stderr, "Could not allocate stream\n");
|
||||
exit(1);
|
||||
}
|
||||
ost->st->id = oc->nb_streams-1;
|
||||
c = ost->st->codec;
|
||||
st->id = oc->nb_streams-1;
|
||||
c = st->codec;
|
||||
|
||||
switch ((*codec)->type) {
|
||||
case AVMEDIA_TYPE_AUDIO:
|
||||
c->sample_fmt = (*codec)->sample_fmts ?
|
||||
(*codec)->sample_fmts[0] : AV_SAMPLE_FMT_FLTP;
|
||||
st->id = 1;
|
||||
c->sample_fmt = AV_SAMPLE_FMT_S16;
|
||||
c->bit_rate = 64000;
|
||||
c->sample_rate = 44100;
|
||||
if ((*codec)->supported_samplerates) {
|
||||
c->sample_rate = (*codec)->supported_samplerates[0];
|
||||
for (i = 0; (*codec)->supported_samplerates[i]; i++) {
|
||||
if ((*codec)->supported_samplerates[i] == 44100)
|
||||
c->sample_rate = 44100;
|
||||
}
|
||||
}
|
||||
c->channels = av_get_channel_layout_nb_channels(c->channel_layout);
|
||||
c->channel_layout = AV_CH_LAYOUT_STEREO;
|
||||
if ((*codec)->channel_layouts) {
|
||||
c->channel_layout = (*codec)->channel_layouts[0];
|
||||
for (i = 0; (*codec)->channel_layouts[i]; i++) {
|
||||
if ((*codec)->channel_layouts[i] == AV_CH_LAYOUT_STEREO)
|
||||
c->channel_layout = AV_CH_LAYOUT_STEREO;
|
||||
}
|
||||
}
|
||||
c->channels = av_get_channel_layout_nb_channels(c->channel_layout);
|
||||
ost->st->time_base = (AVRational){ 1, c->sample_rate };
|
||||
c->channels = 2;
|
||||
break;
|
||||
|
||||
case AVMEDIA_TYPE_VIDEO:
|
||||
@@ -149,9 +96,8 @@ static void add_stream(OutputStream *ost, AVFormatContext *oc,
|
||||
* of which frame timestamps are represented. For fixed-fps content,
|
||||
* timebase should be 1/framerate and timestamp increments should be
|
||||
* identical to 1. */
|
||||
ost->st->time_base = (AVRational){ 1, STREAM_FRAME_RATE };
|
||||
c->time_base = ost->st->time_base;
|
||||
|
||||
c->time_base.den = STREAM_FRAME_RATE;
|
||||
c->time_base.num = 1;
|
||||
c->gop_size = 12; /* emit one intra frame every twelve frames at most */
|
||||
c->pix_fmt = STREAM_PIX_FMT;
|
||||
if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
|
||||
@@ -172,169 +118,85 @@ static void add_stream(OutputStream *ost, AVFormatContext *oc,
|
||||
|
||||
/* Some formats want stream headers to be separate. */
|
||||
if (oc->oformat->flags & AVFMT_GLOBALHEADER)
|
||||
c->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
|
||||
c->flags |= CODEC_FLAG_GLOBAL_HEADER;
|
||||
|
||||
return st;
|
||||
}
|
||||
|
||||
/**************************************************************/
|
||||
/* audio output */
|
||||
|
||||
static AVFrame *alloc_audio_frame(enum AVSampleFormat sample_fmt,
|
||||
uint64_t channel_layout,
|
||||
int sample_rate, int nb_samples)
|
||||
{
|
||||
AVFrame *frame = av_frame_alloc();
|
||||
int ret;
|
||||
static float t, tincr, tincr2;
|
||||
static int16_t *samples;
|
||||
static int audio_input_frame_size;
|
||||
|
||||
if (!frame) {
|
||||
fprintf(stderr, "Error allocating an audio frame\n");
|
||||
exit(1);
|
||||
}
|
||||
|
||||
frame->format = sample_fmt;
|
||||
frame->channel_layout = channel_layout;
|
||||
frame->sample_rate = sample_rate;
|
||||
frame->nb_samples = nb_samples;
|
||||
|
||||
if (nb_samples) {
|
||||
ret = av_frame_get_buffer(frame, 0);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error allocating an audio buffer\n");
|
||||
exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
return frame;
|
||||
}
|
||||
|
||||
static void open_audio(AVFormatContext *oc, AVCodec *codec, OutputStream *ost, AVDictionary *opt_arg)
|
||||
static void open_audio(AVFormatContext *oc, AVCodec *codec, AVStream *st)
|
||||
{
|
||||
AVCodecContext *c;
|
||||
int nb_samples;
|
||||
int ret;
|
||||
AVDictionary *opt = NULL;
|
||||
|
||||
c = ost->st->codec;
|
||||
c = st->codec;
|
||||
|
||||
/* open it */
|
||||
av_dict_copy(&opt, opt_arg, 0);
|
||||
ret = avcodec_open2(c, codec, &opt);
|
||||
av_dict_free(&opt);
|
||||
ret = avcodec_open2(c, codec, NULL);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Could not open audio codec: %s\n", av_err2str(ret));
|
||||
exit(1);
|
||||
}
|
||||
|
||||
/* init signal generator */
|
||||
ost->t = 0;
|
||||
ost->tincr = 2 * M_PI * 110.0 / c->sample_rate;
|
||||
t = 0;
|
||||
tincr = 2 * M_PI * 110.0 / c->sample_rate;
|
||||
/* increment frequency by 110 Hz per second */
|
||||
ost->tincr2 = 2 * M_PI * 110.0 / c->sample_rate / c->sample_rate;
|
||||
tincr2 = 2 * M_PI * 110.0 / c->sample_rate / c->sample_rate;
|
||||
|
||||
if (c->codec->capabilities & AV_CODEC_CAP_VARIABLE_FRAME_SIZE)
|
||||
nb_samples = 10000;
|
||||
if (c->codec->capabilities & CODEC_CAP_VARIABLE_FRAME_SIZE)
|
||||
audio_input_frame_size = 10000;
|
||||
else
|
||||
nb_samples = c->frame_size;
|
||||
|
||||
ost->frame = alloc_audio_frame(c->sample_fmt, c->channel_layout,
|
||||
c->sample_rate, nb_samples);
|
||||
ost->tmp_frame = alloc_audio_frame(AV_SAMPLE_FMT_S16, c->channel_layout,
|
||||
c->sample_rate, nb_samples);
|
||||
|
||||
/* create resampler context */
|
||||
ost->swr_ctx = swr_alloc();
|
||||
if (!ost->swr_ctx) {
|
||||
fprintf(stderr, "Could not allocate resampler context\n");
|
||||
exit(1);
|
||||
}
|
||||
|
||||
/* set options */
|
||||
av_opt_set_int (ost->swr_ctx, "in_channel_count", c->channels, 0);
|
||||
av_opt_set_int (ost->swr_ctx, "in_sample_rate", c->sample_rate, 0);
|
||||
av_opt_set_sample_fmt(ost->swr_ctx, "in_sample_fmt", AV_SAMPLE_FMT_S16, 0);
|
||||
av_opt_set_int (ost->swr_ctx, "out_channel_count", c->channels, 0);
|
||||
av_opt_set_int (ost->swr_ctx, "out_sample_rate", c->sample_rate, 0);
|
||||
av_opt_set_sample_fmt(ost->swr_ctx, "out_sample_fmt", c->sample_fmt, 0);
|
||||
|
||||
/* initialize the resampling context */
|
||||
if ((ret = swr_init(ost->swr_ctx)) < 0) {
|
||||
fprintf(stderr, "Failed to initialize the resampling context\n");
|
||||
exit(1);
|
||||
}
|
||||
audio_input_frame_size = c->frame_size;
|
||||
samples = av_malloc(audio_input_frame_size *
|
||||
av_get_bytes_per_sample(c->sample_fmt) *
|
||||
c->channels);
|
||||
if (!samples) {
|
||||
fprintf(stderr, "Could not allocate audio samples buffer\n");
|
||||
exit(1);
|
||||
}
|
||||
}

/* Prepare a 16 bit dummy audio frame of 'frame_size' samples and
 * 'nb_channels' channels. */
static AVFrame *get_audio_frame(OutputStream *ost)
static void get_audio_frame(int16_t *samples, int frame_size, int nb_channels)
{
    AVFrame *frame = ost->tmp_frame;
    int j, i, v;
    int16_t *q = (int16_t*)frame->data[0];
    int16_t *q;

    /* check if we want to generate more frames */
    if (av_compare_ts(ost->next_pts, ost->st->codec->time_base,
                      STREAM_DURATION, (AVRational){ 1, 1 }) >= 0)
        return NULL;

    for (j = 0; j < frame->nb_samples; j++) {
        v = (int)(sin(ost->t) * 10000);
        for (i = 0; i < ost->st->codec->channels; i++)
    q = samples;
    for (j = 0; j < frame_size; j++) {
        v = (int)(sin(t) * 10000);
        for (i = 0; i < nb_channels; i++)
            *q++ = v;
        ost->t     += ost->tincr;
        ost->tincr += ost->tincr2;
        t     += tincr;
        tincr += tincr2;
    }

    frame->pts = ost->next_pts;
    ost->next_pts += frame->nb_samples;

    return frame;
}
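
/*
 * Side note on the av_compare_ts() duration check used above: it compares two
 * timestamps that live in different time bases without explicitly converting
 * either one. A hedged, stand-alone sketch with made-up values:
 */
#include <libavutil/mathematics.h>
#include <libavutil/rational.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* 441000 samples at 1/44100 = 10.0 s, compared against 10 s in a 1/1 time base */
    int64_t    next_pts = 441000;
    AVRational codec_tb = { 1, 44100 };
    AVRational second   = { 1, 1 };

    /* returns -1, 0 or 1, like a three-way comparison of the rescaled times */
    if (av_compare_ts(next_pts, codec_tb, 10, second) >= 0)
        printf("stop generating frames\n");
    return 0;
}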

/*
 * encode one audio frame and send it to the muxer
 * return 1 when encoding is finished, 0 otherwise
 */
static int write_audio_frame(AVFormatContext *oc, OutputStream *ost)
static void write_audio_frame(AVFormatContext *oc, AVStream *st)
{
    AVCodecContext *c;
    AVPacket pkt = { 0 }; // data and size must be 0;
    AVFrame *frame;
    int ret;
    int got_packet;
    int dst_nb_samples;
    AVFrame *frame = avcodec_alloc_frame();
    int got_packet, ret;

    av_init_packet(&pkt);
    c = ost->st->codec;
    c = st->codec;

    frame = get_audio_frame(ost);

    if (frame) {
        /* convert samples from native format to destination codec format, using the resampler */
        /* compute destination number of samples */
        dst_nb_samples = av_rescale_rnd(swr_get_delay(ost->swr_ctx, c->sample_rate) + frame->nb_samples,
                                        c->sample_rate, c->sample_rate, AV_ROUND_UP);
        av_assert0(dst_nb_samples == frame->nb_samples);

        /* when we pass a frame to the encoder, it may keep a reference to it
         * internally;
         * make sure we do not overwrite it here
         */
        ret = av_frame_make_writable(ost->frame);
        if (ret < 0)
            exit(1);

        /* convert to destination format */
        ret = swr_convert(ost->swr_ctx,
                          ost->frame->data, dst_nb_samples,
                          (const uint8_t **)frame->data, frame->nb_samples);
        if (ret < 0) {
            fprintf(stderr, "Error while converting\n");
            exit(1);
        }
        frame = ost->frame;

        frame->pts = av_rescale_q(ost->samples_count, (AVRational){1, c->sample_rate}, c->time_base);
        ost->samples_count += dst_nb_samples;
    }
    get_audio_frame(samples, audio_input_frame_size, c->channels);
    frame->nb_samples = audio_input_frame_size;
    avcodec_fill_audio_frame(frame, c->channels, c->sample_fmt,
                             (uint8_t *)samples,
                             audio_input_frame_size *
                             av_get_bytes_per_sample(c->sample_fmt) *
                             c->channels, 1);

    ret = avcodec_encode_audio2(c, &pkt, frame, &got_packet);
    if (ret < 0) {
@@ -342,93 +204,82 @@ static int write_audio_frame(AVFormatContext *oc, OutputStream *ost)
        exit(1);
    }

    if (got_packet) {
        ret = write_frame(oc, &c->time_base, ost->st, &pkt);
        if (ret < 0) {
            fprintf(stderr, "Error while writing audio frame: %s\n",
                    av_err2str(ret));
            exit(1);
        }
    }
    if (!got_packet)
        return;

    return (frame || got_packet) ? 0 : 1;
    pkt.stream_index = st->index;

    /* Write the compressed frame to the media file. */
    ret = av_interleaved_write_frame(oc, &pkt);
    if (ret != 0) {
        fprintf(stderr, "Error while writing audio frame: %s\n",
                av_err2str(ret));
        exit(1);
    }
    avcodec_free_frame(&frame);
}
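
/*
 * Side note on the pts computation in write_audio_frame() above:
 * av_rescale_q(a, bq, cq) returns a * bq / cq with rounding, so converting the
 * running sample count from the 1/sample_rate time base into the codec time base
 * keeps the audio pts monotonic. A small hedged sketch with made-up numbers:
 */
#include <libavutil/mathematics.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int64_t    samples_count = 48000;        /* one second worth of samples  */
    AVRational sample_tb     = { 1, 48000 }; /* ticks of one sample          */
    AVRational codec_tb      = { 1, 90000 }; /* hypothetical codec time base */

    /* 48000 * (1/48000) / (1/90000) = 90000 ticks, i.e. one second in codec_tb */
    int64_t pts = av_rescale_q(samples_count, sample_tb, codec_tb);
    printf("pts = %" PRId64 "\n", pts);
    return 0;
}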

static void close_audio(AVFormatContext *oc, AVStream *st)
{
    avcodec_close(st->codec);

    av_free(samples);
}

/**************************************************************/
/* video output */

static AVFrame *alloc_picture(enum AVPixelFormat pix_fmt, int width, int height)
{
    AVFrame *picture;
    int ret;
static AVFrame *frame;
static AVPicture src_picture, dst_picture;
static int frame_count;

    picture = av_frame_alloc();
    if (!picture)
        return NULL;

    picture->format = pix_fmt;
    picture->width  = width;
    picture->height = height;

    /* allocate the buffers for the frame data */
    ret = av_frame_get_buffer(picture, 32);
    if (ret < 0) {
        fprintf(stderr, "Could not allocate frame data.\n");
        exit(1);
    }

    return picture;
}
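
/*
 * Side note on alloc_picture() above: the same AVFrame is reused for every picture
 * while the encoder may still hold references to its buffers, which is why the
 * example later calls av_frame_make_writable() before touching the data planes.
 * A hedged sketch of that allocate-then-reuse pattern in isolation:
 */
#include <libavutil/frame.h>
#include <libavutil/pixfmt.h>
#include <stdlib.h>

static AVFrame *alloc_reusable_frame(enum AVPixelFormat pix_fmt, int width, int height)
{
    AVFrame *frame = av_frame_alloc();
    if (!frame)
        return NULL;
    frame->format = pix_fmt;
    frame->width  = width;
    frame->height = height;
    if (av_frame_get_buffer(frame, 32) < 0) {  /* 32-byte alignment, as in the example */
        av_frame_free(&frame);
        return NULL;
    }
    return frame;
}

static void touch_frame(AVFrame *frame)
{
    /* re-allocates the buffers only if something else still references them */
    if (av_frame_make_writable(frame) < 0)
        exit(1);
    frame->data[0][0] = 0;  /* now safe to overwrite the payload */
}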

static void open_video(AVFormatContext *oc, AVCodec *codec, OutputStream *ost, AVDictionary *opt_arg)
static void open_video(AVFormatContext *oc, AVCodec *codec, AVStream *st)
{
    int ret;
    AVCodecContext *c = ost->st->codec;
    AVDictionary *opt = NULL;

    av_dict_copy(&opt, opt_arg, 0);
    AVCodecContext *c = st->codec;

    /* open the codec */
    ret = avcodec_open2(c, codec, &opt);
    av_dict_free(&opt);
    ret = avcodec_open2(c, codec, NULL);
    if (ret < 0) {
        fprintf(stderr, "Could not open video codec: %s\n", av_err2str(ret));
        exit(1);
    }

    /* allocate and init a re-usable frame */
    ost->frame = alloc_picture(c->pix_fmt, c->width, c->height);
    if (!ost->frame) {
    frame = avcodec_alloc_frame();
    if (!frame) {
        fprintf(stderr, "Could not allocate video frame\n");
        exit(1);
    }

    /* Allocate the encoded raw picture. */
    ret = avpicture_alloc(&dst_picture, c->pix_fmt, c->width, c->height);
    if (ret < 0) {
        fprintf(stderr, "Could not allocate picture: %s\n", av_err2str(ret));
        exit(1);
    }

    /* If the output format is not YUV420P, then a temporary YUV420P
     * picture is needed too. It is then converted to the required
     * output format. */
    ost->tmp_frame = NULL;
    if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
        ost->tmp_frame = alloc_picture(AV_PIX_FMT_YUV420P, c->width, c->height);
        if (!ost->tmp_frame) {
            fprintf(stderr, "Could not allocate temporary picture\n");
        ret = avpicture_alloc(&src_picture, AV_PIX_FMT_YUV420P, c->width, c->height);
        if (ret < 0) {
            fprintf(stderr, "Could not allocate temporary picture: %s\n",
                    av_err2str(ret));
            exit(1);
        }
    }

    /* copy data and linesize picture pointers to frame */
    *((AVPicture *)frame) = dst_picture;
}

/* Prepare a dummy image. */
static void fill_yuv_image(AVFrame *pict, int frame_index,
static void fill_yuv_image(AVPicture *pict, int frame_index,
                           int width, int height)
{
    int x, y, i, ret;

    /* when we pass a frame to the encoder, it may keep a reference to it
     * internally;
     * make sure we do not overwrite it here
     */
    ret = av_frame_make_writable(pict);
    if (ret < 0)
        exit(1);
    int x, y, i;

    i = frame_index;

@@ -446,108 +297,91 @@ static void fill_yuv_image(AVFrame *pict, int frame_index,
    }
}

static AVFrame *get_video_frame(OutputStream *ost)
{
    AVCodecContext *c = ost->st->codec;

    /* check if we want to generate more frames */
    if (av_compare_ts(ost->next_pts, ost->st->codec->time_base,
                      STREAM_DURATION, (AVRational){ 1, 1 }) >= 0)
        return NULL;

    if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
        /* as we only generate a YUV420P picture, we must convert it
         * to the codec pixel format if needed */
        if (!ost->sws_ctx) {
            ost->sws_ctx = sws_getContext(c->width, c->height,
                                          AV_PIX_FMT_YUV420P,
                                          c->width, c->height,
                                          c->pix_fmt,
                                          SCALE_FLAGS, NULL, NULL, NULL);
            if (!ost->sws_ctx) {
                fprintf(stderr,
                        "Could not initialize the conversion context\n");
                exit(1);
            }
        }
        fill_yuv_image(ost->tmp_frame, ost->next_pts, c->width, c->height);
        sws_scale(ost->sws_ctx,
                  (const uint8_t * const *)ost->tmp_frame->data, ost->tmp_frame->linesize,
                  0, c->height, ost->frame->data, ost->frame->linesize);
    } else {
        fill_yuv_image(ost->frame, ost->next_pts, c->width, c->height);
    }

    ost->frame->pts = ost->next_pts++;

    return ost->frame;
}
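
/*
 * Side note: the hunks above call a write_frame(oc, &c->time_base, ost->st, &pkt)
 * helper that is outside the lines shown here. The sketch below is only a hedged
 * guess at the usual shape of such a helper (rescale timestamps, set the stream
 * index, hand the packet to the muxer); it is not copied from the example itself.
 */
#include <libavformat/avformat.h>

static int write_frame_sketch(AVFormatContext *oc, const AVRational *time_base,
                              AVStream *st, AVPacket *pkt)
{
    /* convert packet timing from the codec time base to the stream time base */
    av_packet_rescale_ts(pkt, *time_base, st->time_base);
    pkt->stream_index = st->index;

    /* av_interleaved_write_frame() takes ownership of the packet contents */
    return av_interleaved_write_frame(oc, pkt);
}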

/*
 * encode one video frame and send it to the muxer
 * return 1 when encoding is finished, 0 otherwise
 */
static int write_video_frame(AVFormatContext *oc, OutputStream *ost)
static void write_video_frame(AVFormatContext *oc, AVStream *st)
{
    int ret;
    AVCodecContext *c;
    AVFrame *frame;
    int got_packet = 0;
    static struct SwsContext *sws_ctx;
    AVCodecContext *c = st->codec;

    c = ost->st->codec;

    frame = get_video_frame(ost);
    if (frame_count >= STREAM_NB_FRAMES) {
        /* No more frames to compress. The codec has a latency of a few
         * frames if using B-frames, so we get the last frames by
         * passing the same picture again. */
    } else {
        if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
            /* as we only generate a YUV420P picture, we must convert it
             * to the codec pixel format if needed */
            if (!sws_ctx) {
                sws_ctx = sws_getContext(c->width, c->height, AV_PIX_FMT_YUV420P,
                                         c->width, c->height, c->pix_fmt,
                                         sws_flags, NULL, NULL, NULL);
                if (!sws_ctx) {
                    fprintf(stderr,
                            "Could not initialize the conversion context\n");
                    exit(1);
                }
            }
            fill_yuv_image(&src_picture, frame_count, c->width, c->height);
            sws_scale(sws_ctx,
                      (const uint8_t * const *)src_picture.data, src_picture.linesize,
                      0, c->height, dst_picture.data, dst_picture.linesize);
        } else {
            fill_yuv_image(&dst_picture, frame_count, c->width, c->height);
        }
    }

    if (oc->oformat->flags & AVFMT_RAWPICTURE) {
        /* a hack to avoid data copy with some raw video muxers */
        /* Raw video case - directly store the picture in the packet */
        AVPacket pkt;
        av_init_packet(&pkt);

        if (!frame)
            return 1;

        pkt.flags        |= AV_PKT_FLAG_KEY;
        pkt.stream_index  = ost->st->index;
        pkt.data          = (uint8_t *)frame;
        pkt.stream_index  = st->index;
        pkt.data          = dst_picture.data[0];
        pkt.size          = sizeof(AVPicture);

        pkt.pts = pkt.dts = frame->pts;
        av_packet_rescale_ts(&pkt, c->time_base, ost->st->time_base);

        ret = av_interleaved_write_frame(oc, &pkt);
    } else {
        AVPacket pkt = { 0 };
        av_init_packet(&pkt);

        /* encode the image */
        ret = avcodec_encode_video2(c, &pkt, frame, &got_packet);
        AVPacket pkt;
        int got_output;

        av_init_packet(&pkt);
        pkt.data = NULL;    // packet data will be allocated by the encoder
        pkt.size = 0;

        ret = avcodec_encode_video2(c, &pkt, frame, &got_output);
        if (ret < 0) {
            fprintf(stderr, "Error encoding video frame: %s\n", av_err2str(ret));
            exit(1);
        }

        if (got_packet) {
            ret = write_frame(oc, &c->time_base, ost->st, &pkt);
        /* If size is zero, it means the image was buffered. */
        if (got_output) {
            if (c->coded_frame->key_frame)
                pkt.flags |= AV_PKT_FLAG_KEY;

            pkt.stream_index = st->index;

            /* Write the compressed frame to the media file. */
            ret = av_interleaved_write_frame(oc, &pkt);
        } else {
            ret = 0;
        }
    }

    if (ret < 0) {
    if (ret != 0) {
        fprintf(stderr, "Error while writing video frame: %s\n", av_err2str(ret));
        exit(1);
    }

    return (frame || got_packet) ? 0 : 1;
    frame_count++;
}
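
/*
 * Side note on the (frame || got_packet) return above: once get_video_frame()
 * starts returning NULL, the encoder is drained by calling it with a NULL frame
 * until no more packets come out. A hedged stand-alone sketch of that drain loop;
 * write_or_drop_packet() is a hypothetical placeholder for the muxing step.
 */
#include <libavcodec/avcodec.h>

static void write_or_drop_packet(AVPacket *pkt) { av_free_packet(pkt); }

static int flush_video_encoder(AVCodecContext *c)
{
    int got_packet = 1;

    while (got_packet) {
        AVPacket pkt = { 0 };
        av_init_packet(&pkt);
        /* a NULL frame tells the encoder there is no more input */
        if (avcodec_encode_video2(c, &pkt, NULL, &got_packet) < 0)
            return -1;
        if (got_packet)
            write_or_drop_packet(&pkt);
    }
    return 0;
}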

static void close_stream(AVFormatContext *oc, OutputStream *ost)
static void close_video(AVFormatContext *oc, AVStream *st)
{
    avcodec_close(ost->st->codec);
    av_frame_free(&ost->frame);
    av_frame_free(&ost->tmp_frame);
    sws_freeContext(ost->sws_ctx);
    swr_free(&ost->swr_ctx);
    avcodec_close(st->codec);
    av_free(src_picture.data[0]);
    av_free(dst_picture.data[0]);
    av_free(frame);
}

/**************************************************************/
@@ -555,20 +389,18 @@ static void close_stream(AVFormatContext *oc, OutputStream *ost)

int main(int argc, char **argv)
{
    OutputStream video_st = { 0 }, audio_st = { 0 };
    const char *filename;
    AVOutputFormat *fmt;
    AVFormatContext *oc;
    AVStream *audio_st, *video_st;
    AVCodec *audio_codec, *video_codec;
    double audio_pts, video_pts;
    int ret;
    int have_video = 0, have_audio = 0;
    int encode_video = 0, encode_audio = 0;
    AVDictionary *opt = NULL;

    /* Initialize libavcodec, and register all codecs and formats. */
    av_register_all();

    if (argc < 2) {
    if (argc != 2) {
        printf("usage: %s output_file\n"
               "API example program to output a media file with libavformat.\n"
               "This program generates a synthetic audio and video stream, encodes and\n"
@@ -580,9 +412,6 @@ int main(int argc, char **argv)
    }

    filename = argv[1];
    if (argc > 3 && !strcmp(argv[2], "-flags")) {
        av_dict_set(&opt, argv[2]+1, argv[3], 0);
    }

    /* allocate the output media context */
    avformat_alloc_output_context2(&oc, NULL, NULL, filename);
@@ -590,31 +419,29 @@ int main(int argc, char **argv)
        printf("Could not deduce output format from file extension: using MPEG.\n");
        avformat_alloc_output_context2(&oc, NULL, "mpeg", filename);
    }
    if (!oc)
    if (!oc) {
        return 1;

    }
    fmt = oc->oformat;

    /* Add the audio and video streams using the default format codecs
     * and initialize the codecs. */
    video_st = NULL;
    audio_st = NULL;

    if (fmt->video_codec != AV_CODEC_ID_NONE) {
        add_stream(&video_st, oc, &video_codec, fmt->video_codec);
        have_video = 1;
        encode_video = 1;
        video_st = add_stream(oc, &video_codec, fmt->video_codec);
    }
    if (fmt->audio_codec != AV_CODEC_ID_NONE) {
        add_stream(&audio_st, oc, &audio_codec, fmt->audio_codec);
        have_audio = 1;
        encode_audio = 1;
        audio_st = add_stream(oc, &audio_codec, fmt->audio_codec);
    }

    /* Now that all the parameters are set, we can open the audio and
     * video codecs and allocate the necessary encode buffers. */
    if (have_video)
        open_video(oc, video_codec, &video_st, opt);

    if (have_audio)
        open_audio(oc, audio_codec, &audio_st, opt);
    if (video_st)
        open_video(oc, video_codec, video_st);
    if (audio_st)
        open_audio(oc, audio_codec, audio_st);

    av_dump_format(oc, 0, filename, 1);

@@ -629,21 +456,38 @@ int main(int argc, char **argv)
    }

    /* Write the stream header, if any. */
    ret = avformat_write_header(oc, &opt);
    ret = avformat_write_header(oc, NULL);
    if (ret < 0) {
        fprintf(stderr, "Error occurred when opening output file: %s\n",
                av_err2str(ret));
        return 1;
    }

    while (encode_video || encode_audio) {
        /* select the stream to encode */
        if (encode_video &&
            (!encode_audio || av_compare_ts(video_st.next_pts, video_st.st->codec->time_base,
                                            audio_st.next_pts, audio_st.st->codec->time_base) <= 0)) {
            encode_video = !write_video_frame(oc, &video_st);
    if (frame)
        frame->pts = 0;
    for (;;) {
        /* Compute current audio and video time. */
        if (audio_st)
            audio_pts = (double)audio_st->pts.val * audio_st->time_base.num / audio_st->time_base.den;
        else
            audio_pts = 0.0;

        if (video_st)
            video_pts = (double)video_st->pts.val * video_st->time_base.num /
                        video_st->time_base.den;
        else
            video_pts = 0.0;

        if ((!audio_st || audio_pts >= STREAM_DURATION) &&
            (!video_st || video_pts >= STREAM_DURATION))
            break;

        /* write interleaved audio and video frames */
        if (!video_st || (video_st && audio_st && audio_pts < video_pts)) {
            write_audio_frame(oc, audio_st);
        } else {
            encode_audio = !write_audio_frame(oc, &audio_st);
            write_video_frame(oc, video_st);
            frame->pts += av_rescale_q(1, video_st->codec->time_base, video_st->time_base);
        }
    }

@@ -654,14 +498,14 @@ int main(int argc, char **argv)
    av_write_trailer(oc);

    /* Close each codec. */
    if (have_video)
        close_stream(oc, &video_st);
    if (have_audio)
        close_stream(oc, &audio_st);
    if (video_st)
        close_video(oc, video_st);
    if (audio_st)
        close_audio(oc, audio_st);

    if (!(fmt->flags & AVFMT_NOFILE))
        /* Close the output file. */
        avio_closep(&oc->pb);
        avio_close(oc->pb);

    /* free the stream */
    avformat_free_context(oc);

@@ -1,484 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2015 Anton Khirnov
|
||||
*
|
||||
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
* of this software and associated documentation files (the "Software"), to deal
|
||||
* in the Software without restriction, including without limitation the rights
|
||||
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
* copies of the Software, and to permit persons to whom the Software is
|
||||
* furnished to do so, subject to the following conditions:
|
||||
*
|
||||
* The above copyright notice and this permission notice shall be included in
|
||||
* all copies or substantial portions of the Software.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
|
||||
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
||||
* THE SOFTWARE.
|
||||
*/
|
||||
|
||||
/**
|
||||
* @file
|
||||
* Intel QSV-accelerated H.264 decoding example.
|
||||
*
|
||||
* @example qsvdec.c
|
||||
* This example shows how to do QSV-accelerated H.264 decoding with output
|
||||
* frames in the VA-API video surfaces.
|
||||
*/
|
||||
|
||||
#include "config.h"
|
||||
|
||||
#include <stdio.h>
|
||||
|
||||
#include <mfx/mfxvideo.h>
|
||||
|
||||
#include <va/va.h>
|
||||
#include <va/va_x11.h>
|
||||
#include <X11/Xlib.h>
|
||||
|
||||
#include "libavformat/avformat.h"
|
||||
#include "libavformat/avio.h"
|
||||
|
||||
#include "libavcodec/avcodec.h"
|
||||
#include "libavcodec/qsv.h"
|
||||
|
||||
#include "libavutil/error.h"
|
||||
#include "libavutil/mem.h"
|
||||
|
||||
typedef struct DecodeContext {
|
||||
mfxSession mfx_session;
|
||||
VADisplay va_dpy;
|
||||
|
||||
VASurfaceID *surfaces;
|
||||
mfxMemId *surface_ids;
|
||||
int *surface_used;
|
||||
int nb_surfaces;
|
||||
|
||||
mfxFrameInfo frame_info;
|
||||
} DecodeContext;
|
||||
|
||||
static mfxStatus frame_alloc(mfxHDL pthis, mfxFrameAllocRequest *req,
|
||||
mfxFrameAllocResponse *resp)
|
||||
{
|
||||
DecodeContext *decode = pthis;
|
||||
int err, i;
|
||||
|
||||
if (decode->surfaces) {
|
||||
fprintf(stderr, "Multiple allocation requests.\n");
|
||||
return MFX_ERR_MEMORY_ALLOC;
|
||||
}
|
||||
if (!(req->Type & MFX_MEMTYPE_VIDEO_MEMORY_DECODER_TARGET)) {
|
||||
fprintf(stderr, "Unsupported surface type: %d\n", req->Type);
|
||||
return MFX_ERR_UNSUPPORTED;
|
||||
}
|
||||
if (req->Info.BitDepthLuma != 8 || req->Info.BitDepthChroma != 8 ||
|
||||
req->Info.Shift || req->Info.FourCC != MFX_FOURCC_NV12 ||
|
||||
req->Info.ChromaFormat != MFX_CHROMAFORMAT_YUV420) {
|
||||
fprintf(stderr, "Unsupported surface properties.\n");
|
||||
return MFX_ERR_UNSUPPORTED;
|
||||
}
|
||||
|
||||
decode->surfaces = av_malloc_array (req->NumFrameSuggested, sizeof(*decode->surfaces));
|
||||
decode->surface_ids = av_malloc_array (req->NumFrameSuggested, sizeof(*decode->surface_ids));
|
||||
decode->surface_used = av_mallocz_array(req->NumFrameSuggested, sizeof(*decode->surface_used));
|
||||
if (!decode->surfaces || !decode->surface_ids || !decode->surface_used)
|
||||
goto fail;
|
||||
|
||||
err = vaCreateSurfaces(decode->va_dpy, VA_RT_FORMAT_YUV420,
|
||||
req->Info.Width, req->Info.Height,
|
||||
decode->surfaces, req->NumFrameSuggested,
|
||||
NULL, 0);
|
||||
if (err != VA_STATUS_SUCCESS) {
|
||||
fprintf(stderr, "Error allocating VA surfaces\n");
|
||||
goto fail;
|
||||
}
|
||||
decode->nb_surfaces = req->NumFrameSuggested;
|
||||
|
||||
for (i = 0; i < decode->nb_surfaces; i++)
|
||||
decode->surface_ids[i] = &decode->surfaces[i];
|
||||
|
||||
resp->mids = decode->surface_ids;
|
||||
resp->NumFrameActual = decode->nb_surfaces;
|
||||
|
||||
decode->frame_info = req->Info;
|
||||
|
||||
return MFX_ERR_NONE;
|
||||
fail:
|
||||
av_freep(&decode->surfaces);
|
||||
av_freep(&decode->surface_ids);
|
||||
av_freep(&decode->surface_used);
|
||||
|
||||
return MFX_ERR_MEMORY_ALLOC;
|
||||
}
|
||||
|
||||
static mfxStatus frame_free(mfxHDL pthis, mfxFrameAllocResponse *resp)
|
||||
{
|
||||
DecodeContext *decode = pthis;
|
||||
|
||||
if (decode->surfaces)
|
||||
vaDestroySurfaces(decode->va_dpy, decode->surfaces, decode->nb_surfaces);
|
||||
av_freep(&decode->surfaces);
|
||||
av_freep(&decode->surface_ids);
|
||||
av_freep(&decode->surface_used);
|
||||
decode->nb_surfaces = 0;
|
||||
|
||||
return MFX_ERR_NONE;
|
||||
}
|
||||
|
||||
static mfxStatus frame_lock(mfxHDL pthis, mfxMemId mid, mfxFrameData *ptr)
|
||||
{
|
||||
return MFX_ERR_UNSUPPORTED;
|
||||
}
|
||||
|
||||
static mfxStatus frame_unlock(mfxHDL pthis, mfxMemId mid, mfxFrameData *ptr)
|
||||
{
|
||||
return MFX_ERR_UNSUPPORTED;
|
||||
}
|
||||
|
||||
static mfxStatus frame_get_hdl(mfxHDL pthis, mfxMemId mid, mfxHDL *hdl)
|
||||
{
|
||||
*hdl = mid;
|
||||
return MFX_ERR_NONE;
|
||||
}
|
||||
|
||||
static void free_buffer(void *opaque, uint8_t *data)
|
||||
{
|
||||
int *used = opaque;
|
||||
*used = 0;
|
||||
av_freep(&data);
|
||||
}
|
||||
|
||||
static int get_buffer(AVCodecContext *avctx, AVFrame *frame, int flags)
|
||||
{
|
||||
DecodeContext *decode = avctx->opaque;
|
||||
|
||||
mfxFrameSurface1 *surf;
|
||||
AVBufferRef *surf_buf;
|
||||
int idx;
|
||||
|
||||
for (idx = 0; idx < decode->nb_surfaces; idx++) {
|
||||
if (!decode->surface_used[idx])
|
||||
break;
|
||||
}
|
||||
if (idx == decode->nb_surfaces) {
|
||||
fprintf(stderr, "No free surfaces\n");
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
|
||||
surf = av_mallocz(sizeof(*surf));
|
||||
if (!surf)
|
||||
return AVERROR(ENOMEM);
|
||||
surf_buf = av_buffer_create((uint8_t*)surf, sizeof(*surf), free_buffer,
|
||||
&decode->surface_used[idx], AV_BUFFER_FLAG_READONLY);
|
||||
if (!surf_buf) {
|
||||
av_freep(&surf);
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
|
||||
surf->Info = decode->frame_info;
|
||||
surf->Data.MemId = &decode->surfaces[idx];
|
||||
|
||||
frame->buf[0] = surf_buf;
|
||||
frame->data[3] = (uint8_t*)surf;
|
||||
|
||||
decode->surface_used[idx] = 1;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int get_format(AVCodecContext *avctx, const enum AVPixelFormat *pix_fmts)
|
||||
{
|
||||
while (*pix_fmts != AV_PIX_FMT_NONE) {
|
||||
if (*pix_fmts == AV_PIX_FMT_QSV) {
|
||||
if (!avctx->hwaccel_context) {
|
||||
DecodeContext *decode = avctx->opaque;
|
||||
AVQSVContext *qsv = av_qsv_alloc_context();
|
||||
if (!qsv)
|
||||
return AV_PIX_FMT_NONE;
|
||||
|
||||
qsv->session = decode->mfx_session;
|
||||
qsv->iopattern = MFX_IOPATTERN_OUT_VIDEO_MEMORY;
|
||||
|
||||
avctx->hwaccel_context = qsv;
|
||||
}
|
||||
|
||||
return AV_PIX_FMT_QSV;
|
||||
}
|
||||
|
||||
pix_fmts++;
|
||||
}
|
||||
|
||||
fprintf(stderr, "The QSV pixel format not offered in get_format()\n");
|
||||
|
||||
return AV_PIX_FMT_NONE;
|
||||
}
|
||||
|
||||
static int decode_packet(DecodeContext *decode, AVCodecContext *decoder_ctx,
|
||||
AVFrame *frame, AVPacket *pkt,
|
||||
AVIOContext *output_ctx)
|
||||
{
|
||||
int ret = 0;
|
||||
int got_frame = 1;
|
||||
|
||||
while (pkt->size > 0 || (!pkt->data && got_frame)) {
|
||||
ret = avcodec_decode_video2(decoder_ctx, frame, &got_frame, pkt);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error during decoding\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
pkt->data += ret;
|
||||
pkt->size -= ret;
|
||||
|
||||
/* A real program would do something useful with the decoded frame here.
|
||||
* We just retrieve the raw data and write it to a file, which is rather
|
||||
* useless but pedagogic. */
|
||||
if (got_frame) {
|
||||
mfxFrameSurface1 *surf = (mfxFrameSurface1*)frame->data[3];
|
||||
VASurfaceID surface = *(VASurfaceID*)surf->Data.MemId;
|
||||
|
||||
VAImageFormat img_fmt = {
|
||||
.fourcc = VA_FOURCC_NV12,
|
||||
.byte_order = VA_LSB_FIRST,
|
||||
.bits_per_pixel = 8,
|
||||
.depth = 8,
|
||||
};
|
||||
|
||||
VAImage img;
|
||||
|
||||
VAStatus err;
|
||||
uint8_t *data;
|
||||
int i, j;
|
||||
|
||||
img.buf = VA_INVALID_ID;
|
||||
img.image_id = VA_INVALID_ID;
|
||||
|
||||
err = vaCreateImage(decode->va_dpy, &img_fmt,
|
||||
frame->width, frame->height, &img);
|
||||
if (err != VA_STATUS_SUCCESS) {
|
||||
fprintf(stderr, "Error creating an image: %s\n",
|
||||
vaErrorStr(err));
|
||||
ret = AVERROR_UNKNOWN;
|
||||
goto fail;
|
||||
}
|
||||
|
||||
err = vaGetImage(decode->va_dpy, surface, 0, 0,
|
||||
frame->width, frame->height,
|
||||
img.image_id);
|
||||
if (err != VA_STATUS_SUCCESS) {
|
||||
fprintf(stderr, "Error getting an image: %s\n",
|
||||
vaErrorStr(err));
|
||||
ret = AVERROR_UNKNOWN;
|
||||
goto fail;
|
||||
}
|
||||
|
||||
err = vaMapBuffer(decode->va_dpy, img.buf, (void**)&data);
|
||||
if (err != VA_STATUS_SUCCESS) {
|
||||
fprintf(stderr, "Error mapping the image buffer: %s\n",
|
||||
vaErrorStr(err));
|
||||
ret = AVERROR_UNKNOWN;
|
||||
goto fail;
|
||||
}
|
||||
|
||||
for (i = 0; i < img.num_planes; i++)
|
||||
for (j = 0; j < (img.height >> (i > 0)); j++)
|
||||
avio_write(output_ctx, data + img.offsets[i] + j * img.pitches[i], img.width);
|
||||
|
||||
fail:
|
||||
if (img.buf != VA_INVALID_ID)
|
||||
vaUnmapBuffer(decode->va_dpy, img.buf);
|
||||
if (img.image_id != VA_INVALID_ID)
|
||||
vaDestroyImage(decode->va_dpy, img.image_id);
|
||||
av_frame_unref(frame);
|
||||
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int main(int argc, char **argv)
|
||||
{
|
||||
AVFormatContext *input_ctx = NULL;
|
||||
AVStream *video_st = NULL;
|
||||
AVCodecContext *decoder_ctx = NULL;
|
||||
const AVCodec *decoder;
|
||||
|
||||
AVPacket pkt = { 0 };
|
||||
AVFrame *frame = NULL;
|
||||
|
||||
DecodeContext decode = { NULL };
|
||||
|
||||
Display *dpy = NULL;
|
||||
int va_ver_major, va_ver_minor;
|
||||
|
||||
mfxIMPL mfx_impl = MFX_IMPL_AUTO_ANY;
|
||||
mfxVersion mfx_ver = { { 1, 1 } };
|
||||
|
||||
mfxFrameAllocator frame_allocator = {
|
||||
.pthis = &decode,
|
||||
.Alloc = frame_alloc,
|
||||
.Lock = frame_lock,
|
||||
.Unlock = frame_unlock,
|
||||
.GetHDL = frame_get_hdl,
|
||||
.Free = frame_free,
|
||||
};
|
||||
|
||||
AVIOContext *output_ctx = NULL;
|
||||
|
||||
int ret, i, err;
|
||||
|
||||
av_register_all();
|
||||
|
||||
if (argc < 3) {
|
||||
fprintf(stderr, "Usage: %s <input file> <output file>\n", argv[0]);
|
||||
return 1;
|
||||
}
|
||||
|
||||
/* open the input file */
|
||||
ret = avformat_open_input(&input_ctx, argv[1], NULL, NULL);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Cannot open input file '%s': ", argv[1]);
|
||||
goto finish;
|
||||
}
|
||||
|
||||
/* find the first H.264 video stream */
|
||||
for (i = 0; i < input_ctx->nb_streams; i++) {
|
||||
AVStream *st = input_ctx->streams[i];
|
||||
|
||||
if (st->codec->codec_id == AV_CODEC_ID_H264 && !video_st)
|
||||
video_st = st;
|
||||
else
|
||||
st->discard = AVDISCARD_ALL;
|
||||
}
|
||||
if (!video_st) {
|
||||
fprintf(stderr, "No H.264 video stream in the input file\n");
|
||||
goto finish;
|
||||
}
|
||||
|
||||
/* initialize VA-API */
|
||||
dpy = XOpenDisplay(NULL);
|
||||
if (!dpy) {
|
||||
fprintf(stderr, "Cannot open the X display\n");
|
||||
goto finish;
|
||||
}
|
||||
decode.va_dpy = vaGetDisplay(dpy);
|
||||
if (!decode.va_dpy) {
|
||||
fprintf(stderr, "Cannot open the VA display\n");
|
||||
goto finish;
|
||||
}
|
||||
|
||||
err = vaInitialize(decode.va_dpy, &va_ver_major, &va_ver_minor);
|
||||
if (err != VA_STATUS_SUCCESS) {
|
||||
fprintf(stderr, "Cannot initialize VA: %s\n", vaErrorStr(err));
|
||||
goto finish;
|
||||
}
|
||||
fprintf(stderr, "Initialized VA v%d.%d\n", va_ver_major, va_ver_minor);
|
||||
|
||||
/* initialize an MFX session */
|
||||
err = MFXInit(mfx_impl, &mfx_ver, &decode.mfx_session);
|
||||
if (err != MFX_ERR_NONE) {
|
||||
fprintf(stderr, "Error initializing an MFX session\n");
|
||||
goto finish;
|
||||
}
|
||||
|
||||
MFXVideoCORE_SetHandle(decode.mfx_session, MFX_HANDLE_VA_DISPLAY, decode.va_dpy);
|
||||
MFXVideoCORE_SetFrameAllocator(decode.mfx_session, &frame_allocator);
|
||||
|
||||
/* initialize the decoder */
|
||||
decoder = avcodec_find_decoder_by_name("h264_qsv");
|
||||
if (!decoder) {
|
||||
fprintf(stderr, "The QSV decoder is not present in libavcodec\n");
|
||||
goto finish;
|
||||
}
|
||||
|
||||
decoder_ctx = avcodec_alloc_context3(decoder);
|
||||
if (!decoder_ctx) {
|
||||
ret = AVERROR(ENOMEM);
|
||||
goto finish;
|
||||
}
|
||||
decoder_ctx->codec_id = AV_CODEC_ID_H264;
|
||||
if (video_st->codec->extradata_size) {
|
||||
decoder_ctx->extradata = av_mallocz(video_st->codec->extradata_size +
|
||||
AV_INPUT_BUFFER_PADDING_SIZE);
|
||||
if (!decoder_ctx->extradata) {
|
||||
ret = AVERROR(ENOMEM);
|
||||
goto finish;
|
||||
}
|
||||
memcpy(decoder_ctx->extradata, video_st->codec->extradata,
|
||||
video_st->codec->extradata_size);
|
||||
decoder_ctx->extradata_size = video_st->codec->extradata_size;
|
||||
}
|
||||
decoder_ctx->refcounted_frames = 1;
|
||||
|
||||
decoder_ctx->opaque = &decode;
|
||||
decoder_ctx->get_buffer2 = get_buffer;
|
||||
decoder_ctx->get_format = get_format;
|
||||
|
||||
ret = avcodec_open2(decoder_ctx, NULL, NULL);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error opening the decoder: ");
|
||||
goto finish;
|
||||
}
|
||||
|
||||
/* open the output stream */
|
||||
ret = avio_open(&output_ctx, argv[2], AVIO_FLAG_WRITE);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error opening the output context: ");
|
||||
goto finish;
|
||||
}
|
||||
|
||||
frame = av_frame_alloc();
|
||||
if (!frame) {
|
||||
ret = AVERROR(ENOMEM);
|
||||
goto finish;
|
||||
}
|
||||
|
||||
/* actual decoding */
|
||||
while (ret >= 0) {
|
||||
ret = av_read_frame(input_ctx, &pkt);
|
||||
if (ret < 0)
|
||||
break;
|
||||
|
||||
if (pkt.stream_index == video_st->index)
|
||||
ret = decode_packet(&decode, decoder_ctx, frame, &pkt, output_ctx);
|
||||
|
||||
av_packet_unref(&pkt);
|
||||
}
|
||||
|
||||
/* flush the decoder */
|
||||
pkt.data = NULL;
|
||||
pkt.size = 0;
|
||||
ret = decode_packet(&decode, decoder_ctx, frame, &pkt, output_ctx);
|
||||
|
||||
finish:
|
||||
if (ret < 0) {
|
||||
char buf[1024];
|
||||
av_strerror(ret, buf, sizeof(buf));
|
||||
fprintf(stderr, "%s\n", buf);
|
||||
}
|
||||
|
||||
avformat_close_input(&input_ctx);
|
||||
|
||||
av_frame_free(&frame);
|
||||
|
||||
if (decode.mfx_session)
|
||||
MFXClose(decode.mfx_session);
|
||||
if (decode.va_dpy)
|
||||
vaTerminate(decode.va_dpy);
|
||||
if (dpy)
|
||||
XCloseDisplay(dpy);
|
||||
|
||||
if (decoder_ctx)
|
||||
av_freep(&decoder_ctx->hwaccel_context);
|
||||
avcodec_free_context(&decoder_ctx);
|
||||
|
||||
avio_close(output_ctx);
|
||||
|
||||
return ret;
|
||||
}
|
||||
@@ -1,165 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2013 Stefano Sabatini
|
||||
*
|
||||
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
* of this software and associated documentation files (the "Software"), to deal
|
||||
* in the Software without restriction, including without limitation the rights
|
||||
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
* copies of the Software, and to permit persons to whom the Software is
|
||||
* furnished to do so, subject to the following conditions:
|
||||
*
|
||||
* The above copyright notice and this permission notice shall be included in
|
||||
* all copies or substantial portions of the Software.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
|
||||
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
||||
* THE SOFTWARE.
|
||||
*/
|
||||
|
||||
/**
|
||||
* @file
|
||||
* libavformat/libavcodec demuxing and muxing API example.
|
||||
*
|
||||
* Remux streams from one container format to another.
|
||||
* @example remuxing.c
|
||||
*/
|
||||
|
||||
#include <libavutil/timestamp.h>
|
||||
#include <libavformat/avformat.h>
|
||||
|
||||
static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt, const char *tag)
|
||||
{
|
||||
AVRational *time_base = &fmt_ctx->streams[pkt->stream_index]->time_base;
|
||||
|
||||
printf("%s: pts:%s pts_time:%s dts:%s dts_time:%s duration:%s duration_time:%s stream_index:%d\n",
|
||||
tag,
|
||||
av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, time_base),
|
||||
av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, time_base),
|
||||
av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, time_base),
|
||||
pkt->stream_index);
|
||||
}
|
||||
|
||||
int main(int argc, char **argv)
|
||||
{
|
||||
AVOutputFormat *ofmt = NULL;
|
||||
AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
|
||||
AVPacket pkt;
|
||||
const char *in_filename, *out_filename;
|
||||
int ret, i;
|
||||
|
||||
if (argc < 3) {
|
||||
printf("usage: %s input output\n"
|
||||
"API example program to remux a media file with libavformat and libavcodec.\n"
|
||||
"The output format is guessed according to the file extension.\n"
|
||||
"\n", argv[0]);
|
||||
return 1;
|
||||
}
|
||||
|
||||
in_filename = argv[1];
|
||||
out_filename = argv[2];
|
||||
|
||||
av_register_all();
|
||||
|
||||
if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
|
||||
fprintf(stderr, "Could not open input file '%s'", in_filename);
|
||||
goto end;
|
||||
}
|
||||
|
||||
if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
|
||||
fprintf(stderr, "Failed to retrieve input stream information");
|
||||
goto end;
|
||||
}
|
||||
|
||||
av_dump_format(ifmt_ctx, 0, in_filename, 0);
|
||||
|
||||
avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
|
||||
if (!ofmt_ctx) {
|
||||
fprintf(stderr, "Could not create output context\n");
|
||||
ret = AVERROR_UNKNOWN;
|
||||
goto end;
|
||||
}
|
||||
|
||||
ofmt = ofmt_ctx->oformat;
|
||||
|
||||
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
|
||||
AVStream *in_stream = ifmt_ctx->streams[i];
|
||||
AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
|
||||
if (!out_stream) {
|
||||
fprintf(stderr, "Failed allocating output stream\n");
|
||||
ret = AVERROR_UNKNOWN;
|
||||
goto end;
|
||||
}
|
||||
|
||||
ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Failed to copy context from input to output stream codec context\n");
|
||||
goto end;
|
||||
}
|
||||
out_stream->codec->codec_tag = 0;
|
||||
if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
|
||||
out_stream->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
|
||||
}
|
||||
av_dump_format(ofmt_ctx, 0, out_filename, 1);
|
||||
|
||||
if (!(ofmt->flags & AVFMT_NOFILE)) {
|
||||
ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Could not open output file '%s'", out_filename);
|
||||
goto end;
|
||||
}
|
||||
}
|
||||
|
||||
ret = avformat_write_header(ofmt_ctx, NULL);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error occurred when opening output file\n");
|
||||
goto end;
|
||||
}
|
||||
|
||||
while (1) {
|
||||
AVStream *in_stream, *out_stream;
|
||||
|
||||
ret = av_read_frame(ifmt_ctx, &pkt);
|
||||
if (ret < 0)
|
||||
break;
|
||||
|
||||
in_stream = ifmt_ctx->streams[pkt.stream_index];
|
||||
out_stream = ofmt_ctx->streams[pkt.stream_index];
|
||||
|
||||
log_packet(ifmt_ctx, &pkt, "in");
|
||||
|
||||
/* copy packet */
|
||||
pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
|
||||
pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
|
||||
pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
|
||||
pkt.pos = -1;
|
||||
log_packet(ofmt_ctx, &pkt, "out");
|
||||
|
||||
ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
|
||||
if (ret < 0) {
|
||||
fprintf(stderr, "Error muxing packet\n");
|
||||
break;
|
||||
}
|
||||
av_free_packet(&pkt);
|
||||
}
|
||||
|
||||
av_write_trailer(ofmt_ctx);
|
||||
end:
|
||||
|
||||
avformat_close_input(&ifmt_ctx);
|
||||
|
||||
/* close output */
|
||||
if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
|
||||
avio_closep(&ofmt_ctx->pb);
|
||||
avformat_free_context(ofmt_ctx);
|
||||
|
||||
if (ret < 0 && ret != AVERROR_EOF) {
|
||||
fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
|
||||
return 1;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
@@ -21,7 +21,7 @@
 */

/**
 * @example resampling_audio.c
 * @example doc/examples/resampling_audio.c
 * libswresample API use example.
 */

@@ -62,7 +62,7 @@ static int get_format_from_sample_fmt(const char **fmt,
/**
 * Fill dst buffer with nb_samples, generated starting from t.
 */
static void fill_samples(double *dst, int nb_samples, int nb_channels, int sample_rate, double *t)
void fill_samples(double *dst, int nb_samples, int nb_channels, int sample_rate, double *t)
{
    int i, j;
    double tincr = 1.0 / sample_rate, *dstp = dst;
@@ -78,6 +78,18 @@ static void fill_samples(double *dst, int nb_samples, int nb_channels, int sampl
    }
}

int alloc_samples_array_and_data(uint8_t ***data, int *linesize, int nb_channels,
                                 int nb_samples, enum AVSampleFormat sample_fmt, int align)
{
    int nb_planes = av_sample_fmt_is_planar(sample_fmt) ? nb_channels : 1;

    *data = av_malloc(sizeof(*data) * nb_planes);
    if (!*data)
        return AVERROR(ENOMEM);
    return av_samples_alloc(*data, linesize, nb_channels,
                            nb_samples, sample_fmt, align);
}

int main(int argc, char **argv)
{
    int64_t src_ch_layout = AV_CH_LAYOUT_STEREO, dst_ch_layout = AV_CH_LAYOUT_SURROUND;
@@ -137,8 +149,8 @@ int main(int argc, char **argv)
    /* allocate source and destination samples buffers */

    src_nb_channels = av_get_channel_layout_nb_channels(src_ch_layout);
    ret = av_samples_alloc_array_and_samples(&src_data, &src_linesize, src_nb_channels,
                                             src_nb_samples, src_sample_fmt, 0);
    ret = alloc_samples_array_and_data(&src_data, &src_linesize, src_nb_channels,
                                       src_nb_samples, src_sample_fmt, 0);
    if (ret < 0) {
        fprintf(stderr, "Could not allocate source samples\n");
        goto end;
@@ -152,8 +164,8 @@ int main(int argc, char **argv)

    /* buffer is going to be directly written to a rawaudio file, no alignment */
    dst_nb_channels = av_get_channel_layout_nb_channels(dst_ch_layout);
    ret = av_samples_alloc_array_and_samples(&dst_data, &dst_linesize, dst_nb_channels,
                                             dst_nb_samples, dst_sample_fmt, 0);
    ret = alloc_samples_array_and_data(&dst_data, &dst_linesize, dst_nb_channels,
                                       dst_nb_samples, dst_sample_fmt, 0);
    if (ret < 0) {
        fprintf(stderr, "Could not allocate destination samples\n");
        goto end;
@@ -168,7 +180,7 @@ int main(int argc, char **argv)
        dst_nb_samples = av_rescale_rnd(swr_get_delay(swr_ctx, src_rate) +
                                        src_nb_samples, dst_rate, src_rate, AV_ROUND_UP);
        if (dst_nb_samples > max_dst_nb_samples) {
            av_freep(&dst_data[0]);
            av_free(dst_data[0]);
            ret = av_samples_alloc(dst_data, &dst_linesize, dst_nb_channels,
                                   dst_nb_samples, dst_sample_fmt, 1);
            if (ret < 0)
@@ -184,10 +196,6 @@ int main(int argc, char **argv)
        }
        dst_bufsize = av_samples_get_buffer_size(&dst_linesize, dst_nb_channels,
                                                 ret, dst_sample_fmt, 1);
        if (dst_bufsize < 0) {
            fprintf(stderr, "Could not get sample buffer size\n");
            goto end;
        }
        printf("t:%f in:%d out:%d\n", t, src_nb_samples, ret);
        fwrite(dst_data[0], 1, dst_bufsize, dst_file);
    } while (t < 10);
@@ -199,7 +207,8 @@ int main(int argc, char **argv)
            fmt, dst_ch_layout, dst_nb_channels, dst_rate, dst_filename);

end:
    fclose(dst_file);
    if (dst_file)
        fclose(dst_file);

    if (src_data)
        av_freep(&src_data[0]);

@@ -23,7 +23,7 @@
/**
 * @file
 * libswscale API use example.
 * @example scaling_video.c
 * @example doc/examples/scaling_video.c
 */

#include <libavutil/imgutils.h>
@@ -132,7 +132,8 @@ int main(int argc, char **argv)
           av_get_pix_fmt_name(dst_pix_fmt), dst_w, dst_h, dst_filename);

end:
    fclose(dst_file);
    if (dst_file)
        fclose(dst_file);
    av_freep(&src_data[0]);
    av_freep(&dst_data[0]);
    sws_freeContext(sws_ctx);

@@ -1,770 +0,0 @@
|
||||
/*
|
||||
* This file is part of FFmpeg.
|
||||
*
|
||||
* FFmpeg is free software; you can redistribute it and/or
|
||||
* modify it under the terms of the GNU Lesser General Public
|
||||
* License as published by the Free Software Foundation; either
|
||||
* version 2.1 of the License, or (at your option) any later version.
|
||||
*
|
||||
* FFmpeg is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
* Lesser General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU Lesser General Public
|
||||
* License along with FFmpeg; if not, write to the Free Software
|
||||
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
|
||||
*/
|
||||
|
||||
/**
|
||||
* @file
|
||||
* simple audio converter
|
||||
*
|
||||
* @example transcode_aac.c
|
||||
* Convert an input audio file to AAC in an MP4 container using FFmpeg.
|
||||
* @author Andreas Unterweger (dustsigns@gmail.com)
|
||||
*/
|
||||
|
||||
#include <stdio.h>
|
||||
|
||||
#include "libavformat/avformat.h"
|
||||
#include "libavformat/avio.h"
|
||||
|
||||
#include "libavcodec/avcodec.h"
|
||||
|
||||
#include "libavutil/audio_fifo.h"
|
||||
#include "libavutil/avassert.h"
|
||||
#include "libavutil/avstring.h"
|
||||
#include "libavutil/frame.h"
|
||||
#include "libavutil/opt.h"
|
||||
|
||||
#include "libswresample/swresample.h"
|
||||
|
||||
/** The output bit rate in kbit/s */
|
||||
#define OUTPUT_BIT_RATE 96000
|
||||
/** The number of output channels */
|
||||
#define OUTPUT_CHANNELS 2
|
||||
|
||||
/**
|
||||
* Convert an error code into a text message.
|
||||
* @param error Error code to be converted
|
||||
* @return Corresponding error text (not thread-safe)
|
||||
*/
|
||||
static const char *get_error_text(const int error)
|
||||
{
|
||||
static char error_buffer[255];
|
||||
av_strerror(error, error_buffer, sizeof(error_buffer));
|
||||
return error_buffer;
|
||||
}
|
||||
|
||||
/** Open an input file and the required decoder. */
|
||||
static int open_input_file(const char *filename,
|
||||
AVFormatContext **input_format_context,
|
||||
AVCodecContext **input_codec_context)
|
||||
{
|
||||
AVCodec *input_codec;
|
||||
int error;
|
||||
|
||||
/** Open the input file to read from it. */
|
||||
if ((error = avformat_open_input(input_format_context, filename, NULL,
|
||||
NULL)) < 0) {
|
||||
fprintf(stderr, "Could not open input file '%s' (error '%s')\n",
|
||||
filename, get_error_text(error));
|
||||
*input_format_context = NULL;
|
||||
return error;
|
||||
}
|
||||
|
||||
/** Get information on the input file (number of streams etc.). */
|
||||
if ((error = avformat_find_stream_info(*input_format_context, NULL)) < 0) {
|
||||
fprintf(stderr, "Could not open find stream info (error '%s')\n",
|
||||
get_error_text(error));
|
||||
avformat_close_input(input_format_context);
|
||||
return error;
|
||||
}
|
||||
|
||||
/** Make sure that there is only one stream in the input file. */
|
||||
if ((*input_format_context)->nb_streams != 1) {
|
||||
fprintf(stderr, "Expected one audio input stream, but found %d\n",
|
||||
(*input_format_context)->nb_streams);
|
||||
avformat_close_input(input_format_context);
|
||||
return AVERROR_EXIT;
|
||||
}
|
||||
|
||||
/** Find a decoder for the audio stream. */
|
||||
if (!(input_codec = avcodec_find_decoder((*input_format_context)->streams[0]->codec->codec_id))) {
|
||||
fprintf(stderr, "Could not find input codec\n");
|
||||
avformat_close_input(input_format_context);
|
||||
return AVERROR_EXIT;
|
||||
}
|
||||
|
||||
/** Open the decoder for the audio stream to use it later. */
|
||||
if ((error = avcodec_open2((*input_format_context)->streams[0]->codec,
|
||||
input_codec, NULL)) < 0) {
|
||||
fprintf(stderr, "Could not open input codec (error '%s')\n",
|
||||
get_error_text(error));
|
||||
avformat_close_input(input_format_context);
|
||||
return error;
|
||||
}
|
||||
|
||||
/** Save the decoder context for easier access later. */
|
||||
*input_codec_context = (*input_format_context)->streams[0]->codec;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* Open an output file and the required encoder.
|
||||
* Also set some basic encoder parameters.
|
||||
* Some of these parameters are based on the input file's parameters.
|
||||
*/
|
||||
static int open_output_file(const char *filename,
|
||||
AVCodecContext *input_codec_context,
|
||||
AVFormatContext **output_format_context,
|
||||
AVCodecContext **output_codec_context)
|
||||
{
|
||||
AVIOContext *output_io_context = NULL;
|
||||
AVStream *stream = NULL;
|
||||
AVCodec *output_codec = NULL;
|
||||
int error;
|
||||
|
||||
/** Open the output file to write to it. */
|
||||
if ((error = avio_open(&output_io_context, filename,
|
||||
AVIO_FLAG_WRITE)) < 0) {
|
||||
fprintf(stderr, "Could not open output file '%s' (error '%s')\n",
|
||||
filename, get_error_text(error));
|
||||
return error;
|
||||
}
|
||||
|
||||
/** Create a new format context for the output container format. */
|
||||
if (!(*output_format_context = avformat_alloc_context())) {
|
||||
fprintf(stderr, "Could not allocate output format context\n");
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
|
||||
/** Associate the output file (pointer) with the container format context. */
|
||||
(*output_format_context)->pb = output_io_context;
|
||||
|
||||
/** Guess the desired container format based on the file extension. */
|
||||
if (!((*output_format_context)->oformat = av_guess_format(NULL, filename,
|
||||
NULL))) {
|
||||
fprintf(stderr, "Could not find output file format\n");
|
||||
goto cleanup;
|
||||
}
|
||||
|
||||
av_strlcpy((*output_format_context)->filename, filename,
|
||||
sizeof((*output_format_context)->filename));
|
||||
|
||||
/** Find the encoder to be used by its name. */
|
||||
if (!(output_codec = avcodec_find_encoder(AV_CODEC_ID_AAC))) {
|
||||
fprintf(stderr, "Could not find an AAC encoder.\n");
|
||||
goto cleanup;
|
||||
}
|
||||
|
||||
/** Create a new audio stream in the output file container. */
|
||||
if (!(stream = avformat_new_stream(*output_format_context, output_codec))) {
|
||||
fprintf(stderr, "Could not create new stream\n");
|
||||
error = AVERROR(ENOMEM);
|
||||
goto cleanup;
|
||||
}
|
||||
|
||||
/** Save the encoder context for easier access later. */
|
||||
*output_codec_context = stream->codec;
|
||||
|
||||
/**
|
||||
* Set the basic encoder parameters.
|
||||
* The input file's sample rate is used to avoid a sample rate conversion.
|
||||
*/
|
||||
(*output_codec_context)->channels = OUTPUT_CHANNELS;
|
||||
(*output_codec_context)->channel_layout = av_get_default_channel_layout(OUTPUT_CHANNELS);
|
||||
(*output_codec_context)->sample_rate = input_codec_context->sample_rate;
|
||||
(*output_codec_context)->sample_fmt = output_codec->sample_fmts[0];
|
||||
(*output_codec_context)->bit_rate = OUTPUT_BIT_RATE;
|
||||
|
||||
/** Allow the use of the experimental AAC encoder */
|
||||
(*output_codec_context)->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL;
|
||||
|
||||
/** Set the sample rate for the container. */
|
||||
stream->time_base.den = input_codec_context->sample_rate;
|
||||
stream->time_base.num = 1;
|
||||
|
||||
/**
|
||||
* Some container formats (like MP4) require global headers to be present
|
||||
* Mark the encoder so that it behaves accordingly.
|
||||
*/
|
||||
if ((*output_format_context)->oformat->flags & AVFMT_GLOBALHEADER)
|
||||
(*output_codec_context)->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
|
||||
|
||||
/** Open the encoder for the audio stream to use it later. */
|
||||
if ((error = avcodec_open2(*output_codec_context, output_codec, NULL)) < 0) {
|
||||
fprintf(stderr, "Could not open output codec (error '%s')\n",
|
||||
get_error_text(error));
|
||||
goto cleanup;
|
||||
}
|
||||
|
||||
return 0;
|
||||
|
||||
cleanup:
|
||||
avio_closep(&(*output_format_context)->pb);
|
||||
avformat_free_context(*output_format_context);
|
||||
*output_format_context = NULL;
|
||||
return error < 0 ? error : AVERROR_EXIT;
|
||||
}
|
||||
|
||||
/** Initialize one data packet for reading or writing. */
|
||||
static void init_packet(AVPacket *packet)
|
||||
{
|
||||
av_init_packet(packet);
|
||||
/** Set the packet data and size so that it is recognized as being empty. */
|
||||
packet->data = NULL;
|
||||
packet->size = 0;
|
||||
}
|
||||
|
||||
/** Initialize one audio frame for reading from the input file */
|
||||
static int init_input_frame(AVFrame **frame)
|
||||
{
|
||||
if (!(*frame = av_frame_alloc())) {
|
||||
fprintf(stderr, "Could not allocate input frame\n");
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* Initialize the audio resampler based on the input and output codec settings.
|
||||
* If the input and output sample formats differ, a conversion is required
|
||||
* libswresample takes care of this, but requires initialization.
|
||||
*/
|
||||
static int init_resampler(AVCodecContext *input_codec_context,
|
||||
AVCodecContext *output_codec_context,
|
||||
SwrContext **resample_context)
|
||||
{
|
||||
int error;
|
||||
|
||||
/**
|
||||
* Create a resampler context for the conversion.
|
||||
* Set the conversion parameters.
|
||||
* Default channel layouts based on the number of channels
|
||||
* are assumed for simplicity (they are sometimes not detected
|
||||
* properly by the demuxer and/or decoder).
|
||||
*/
|
||||
*resample_context = swr_alloc_set_opts(NULL,
|
||||
av_get_default_channel_layout(output_codec_context->channels),
|
||||
output_codec_context->sample_fmt,
|
||||
output_codec_context->sample_rate,
|
||||
av_get_default_channel_layout(input_codec_context->channels),
|
||||
input_codec_context->sample_fmt,
|
||||
input_codec_context->sample_rate,
|
||||
0, NULL);
|
||||
if (!*resample_context) {
|
||||
fprintf(stderr, "Could not allocate resample context\n");
|
||||
return AVERROR(ENOMEM);
|
||||
}
|
||||
/**
|
||||
* Perform a sanity check so that the number of converted samples is
|
||||
* not greater than the number of samples to be converted.
|
||||
* If the sample rates differ, this case has to be handled differently
|
||||
*/
|
||||
av_assert0(output_codec_context->sample_rate == input_codec_context->sample_rate);
|
||||
|
||||
/** Open the resampler with the specified parameters. */
|
||||
if ((error = swr_init(*resample_context)) < 0) {
|
||||
fprintf(stderr, "Could not open resample context\n");
|
||||
swr_free(resample_context);
|
||||
return error;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
/** Initialize a FIFO buffer for the audio samples to be encoded. */
static int init_fifo(AVAudioFifo **fifo, AVCodecContext *output_codec_context)
{
    /** Create the FIFO buffer based on the specified output sample format. */
    if (!(*fifo = av_audio_fifo_alloc(output_codec_context->sample_fmt,
                                      output_codec_context->channels, 1))) {
        fprintf(stderr, "Could not allocate FIFO\n");
        return AVERROR(ENOMEM);
    }
    return 0;
}

/** Write the header of the output file container. */
static int write_output_file_header(AVFormatContext *output_format_context)
{
    int error;
    if ((error = avformat_write_header(output_format_context, NULL)) < 0) {
        fprintf(stderr, "Could not write output file header (error '%s')\n",
                get_error_text(error));
        return error;
    }
    return 0;
}

/** Decode one audio frame from the input file. */
static int decode_audio_frame(AVFrame *frame,
                              AVFormatContext *input_format_context,
                              AVCodecContext *input_codec_context,
                              int *data_present, int *finished)
{
    /** Packet used for temporary storage. */
    AVPacket input_packet;
    int error;
    init_packet(&input_packet);

    /** Read one audio frame from the input file into a temporary packet. */
    if ((error = av_read_frame(input_format_context, &input_packet)) < 0) {
        /** If we are at the end of the file, flush the decoder below. */
        if (error == AVERROR_EOF)
            *finished = 1;
        else {
            fprintf(stderr, "Could not read frame (error '%s')\n",
                    get_error_text(error));
            return error;
        }
    }

    /**
     * Decode the audio frame stored in the temporary packet.
     * The input audio stream decoder is used to do this.
     * If we are at the end of the file, pass an empty packet to the decoder
     * to flush it.
     */
    if ((error = avcodec_decode_audio4(input_codec_context, frame,
                                       data_present, &input_packet)) < 0) {
        fprintf(stderr, "Could not decode frame (error '%s')\n",
                get_error_text(error));
        av_free_packet(&input_packet);
        return error;
    }

    /**
     * If the decoder has not been flushed completely, we are not finished,
     * so that this function has to be called again.
     */
    if (*finished && *data_present)
        *finished = 0;
    av_free_packet(&input_packet);
    return 0;
}

/**
 * Initialize a temporary storage for the specified number of audio samples.
 * The conversion requires temporary storage due to the different format.
 * The number of audio samples to be allocated is specified in frame_size.
 */
static int init_converted_samples(uint8_t ***converted_input_samples,
                                  AVCodecContext *output_codec_context,
                                  int frame_size)
{
    int error;

    /**
     * Allocate as many pointers as there are audio channels.
     * Each pointer will later point to the audio samples of the corresponding
     * channels (although it may be NULL for interleaved formats).
     */
    if (!(*converted_input_samples = calloc(output_codec_context->channels,
                                            sizeof(**converted_input_samples)))) {
        fprintf(stderr, "Could not allocate converted input sample pointers\n");
        return AVERROR(ENOMEM);
    }

    /**
     * Allocate memory for the samples of all channels in one consecutive
     * block for convenience.
     */
    if ((error = av_samples_alloc(*converted_input_samples, NULL,
                                  output_codec_context->channels,
                                  frame_size,
                                  output_codec_context->sample_fmt, 0)) < 0) {
        fprintf(stderr,
                "Could not allocate converted input samples (error '%s')\n",
                get_error_text(error));
        av_freep(&(*converted_input_samples)[0]);
        free(*converted_input_samples);
        return error;
    }
    return 0;
}

/**
 * Convert the input audio samples into the output sample format.
 * The conversion happens on a per-frame basis, the size of which is specified
 * by frame_size.
 */
static int convert_samples(const uint8_t **input_data,
                           uint8_t **converted_data, const int frame_size,
                           SwrContext *resample_context)
{
    int error;

    /** Convert the samples using the resampler. */
    if ((error = swr_convert(resample_context,
                             converted_data, frame_size,
                             input_data    , frame_size)) < 0) {
        fprintf(stderr, "Could not convert input samples (error '%s')\n",
                get_error_text(error));
        return error;
    }

    return 0;
}

/** Add converted input audio samples to the FIFO buffer for later processing. */
static int add_samples_to_fifo(AVAudioFifo *fifo,
                               uint8_t **converted_input_samples,
                               const int frame_size)
{
    int error;

    /**
     * Make the FIFO as large as it needs to be to hold both,
     * the old and the new samples.
     */
    if ((error = av_audio_fifo_realloc(fifo, av_audio_fifo_size(fifo) + frame_size)) < 0) {
        fprintf(stderr, "Could not reallocate FIFO\n");
        return error;
    }

    /** Store the new samples in the FIFO buffer. */
    if (av_audio_fifo_write(fifo, (void **)converted_input_samples,
                            frame_size) < frame_size) {
        fprintf(stderr, "Could not write data to FIFO\n");
        return AVERROR_EXIT;
    }
    return 0;
}

/**
 * Read one audio frame from the input file, decodes, converts and stores
 * it in the FIFO buffer.
 */
static int read_decode_convert_and_store(AVAudioFifo *fifo,
                                         AVFormatContext *input_format_context,
                                         AVCodecContext *input_codec_context,
                                         AVCodecContext *output_codec_context,
                                         SwrContext *resampler_context,
                                         int *finished)
{
    /** Temporary storage of the input samples of the frame read from the file. */
    AVFrame *input_frame = NULL;
    /** Temporary storage for the converted input samples. */
    uint8_t **converted_input_samples = NULL;
    int data_present;
    int ret = AVERROR_EXIT;

    /** Initialize temporary storage for one input frame. */
    if (init_input_frame(&input_frame))
        goto cleanup;
    /** Decode one frame worth of audio samples. */
    if (decode_audio_frame(input_frame, input_format_context,
                           input_codec_context, &data_present, finished))
        goto cleanup;
    /**
     * If we are at the end of the file and there are no more samples
     * in the decoder which are delayed, we are actually finished.
     * This must not be treated as an error.
     */
    if (*finished && !data_present) {
        ret = 0;
        goto cleanup;
    }
    /** If there is decoded data, convert and store it */
    if (data_present) {
        /** Initialize the temporary storage for the converted input samples. */
        if (init_converted_samples(&converted_input_samples, output_codec_context,
                                   input_frame->nb_samples))
            goto cleanup;

        /**
         * Convert the input samples to the desired output sample format.
         * This requires a temporary storage provided by converted_input_samples.
         */
        if (convert_samples((const uint8_t**)input_frame->extended_data, converted_input_samples,
                            input_frame->nb_samples, resampler_context))
            goto cleanup;

        /** Add the converted input samples to the FIFO buffer for later processing. */
        if (add_samples_to_fifo(fifo, converted_input_samples,
                                input_frame->nb_samples))
            goto cleanup;
        ret = 0;
    }
    ret = 0;

cleanup:
    if (converted_input_samples) {
        av_freep(&converted_input_samples[0]);
        free(converted_input_samples);
    }
    av_frame_free(&input_frame);

    return ret;
}

/**
 * Initialize one input frame for writing to the output file.
 * The frame will be exactly frame_size samples large.
 */
static int init_output_frame(AVFrame **frame,
                             AVCodecContext *output_codec_context,
                             int frame_size)
{
    int error;

    /** Create a new frame to store the audio samples. */
    if (!(*frame = av_frame_alloc())) {
        fprintf(stderr, "Could not allocate output frame\n");
        return AVERROR_EXIT;
    }

    /**
     * Set the frame's parameters, especially its size and format.
     * av_frame_get_buffer needs this to allocate memory for the
     * audio samples of the frame.
     * Default channel layouts based on the number of channels
     * are assumed for simplicity.
     */
    (*frame)->nb_samples     = frame_size;
    (*frame)->channel_layout = output_codec_context->channel_layout;
    (*frame)->format         = output_codec_context->sample_fmt;
    (*frame)->sample_rate    = output_codec_context->sample_rate;

    /**
     * Allocate the samples of the created frame. This call will make
     * sure that the audio frame can hold as many samples as specified.
     */
    if ((error = av_frame_get_buffer(*frame, 0)) < 0) {
        fprintf(stderr, "Could allocate output frame samples (error '%s')\n",
                get_error_text(error));
        av_frame_free(frame);
        return error;
    }

    return 0;
}

/** Global timestamp for the audio frames */
static int64_t pts = 0;

/** Encode one frame worth of audio to the output file. */
static int encode_audio_frame(AVFrame *frame,
                              AVFormatContext *output_format_context,
                              AVCodecContext *output_codec_context,
                              int *data_present)
{
    /** Packet used for temporary storage. */
    AVPacket output_packet;
    int error;
    init_packet(&output_packet);

    /** Set a timestamp based on the sample rate for the container. */
    if (frame) {
        frame->pts = pts;
        pts += frame->nb_samples;
    }

    /**
     * Encode the audio frame and store it in the temporary packet.
     * The output audio stream encoder is used to do this.
     */
    if ((error = avcodec_encode_audio2(output_codec_context, &output_packet,
                                       frame, data_present)) < 0) {
        fprintf(stderr, "Could not encode frame (error '%s')\n",
                get_error_text(error));
        av_free_packet(&output_packet);
        return error;
    }

    /** Write one audio frame from the temporary packet to the output file. */
    if (*data_present) {
        if ((error = av_write_frame(output_format_context, &output_packet)) < 0) {
            fprintf(stderr, "Could not write frame (error '%s')\n",
                    get_error_text(error));
            av_free_packet(&output_packet);
            return error;
        }

        av_free_packet(&output_packet);
    }

    return 0;
}

/**
 * Load one audio frame from the FIFO buffer, encode and write it to the
 * output file.
 */
static int load_encode_and_write(AVAudioFifo *fifo,
                                 AVFormatContext *output_format_context,
                                 AVCodecContext *output_codec_context)
{
    /** Temporary storage of the output samples of the frame written to the file. */
    AVFrame *output_frame;
    /**
     * Use the maximum number of possible samples per frame.
     * If there is less than the maximum possible frame size in the FIFO
     * buffer use this number. Otherwise, use the maximum possible frame size
     */
    const int frame_size = FFMIN(av_audio_fifo_size(fifo),
                                 output_codec_context->frame_size);
    int data_written;

    /** Initialize temporary storage for one output frame. */
    if (init_output_frame(&output_frame, output_codec_context, frame_size))
        return AVERROR_EXIT;

    /**
     * Read as many samples from the FIFO buffer as required to fill the frame.
     * The samples are stored in the frame temporarily.
     */
    if (av_audio_fifo_read(fifo, (void **)output_frame->data, frame_size) < frame_size) {
        fprintf(stderr, "Could not read data from FIFO\n");
        av_frame_free(&output_frame);
        return AVERROR_EXIT;
    }

    /** Encode one frame worth of audio samples. */
    if (encode_audio_frame(output_frame, output_format_context,
                           output_codec_context, &data_written)) {
        av_frame_free(&output_frame);
        return AVERROR_EXIT;
    }
    av_frame_free(&output_frame);
    return 0;
}

/** Write the trailer of the output file container. */
static int write_output_file_trailer(AVFormatContext *output_format_context)
{
    int error;
    if ((error = av_write_trailer(output_format_context)) < 0) {
        fprintf(stderr, "Could not write output file trailer (error '%s')\n",
                get_error_text(error));
        return error;
    }
    return 0;
}

/** Convert an audio file to an AAC file in an MP4 container. */
int main(int argc, char **argv)
{
    AVFormatContext *input_format_context = NULL, *output_format_context = NULL;
    AVCodecContext *input_codec_context = NULL, *output_codec_context = NULL;
    SwrContext *resample_context = NULL;
    AVAudioFifo *fifo = NULL;
    int ret = AVERROR_EXIT;

    if (argc < 3) {
        fprintf(stderr, "Usage: %s <input file> <output file>\n", argv[0]);
        exit(1);
    }

    /** Register all codecs and formats so that they can be used. */
    av_register_all();
    /** Open the input file for reading. */
    if (open_input_file(argv[1], &input_format_context,
                        &input_codec_context))
        goto cleanup;
    /** Open the output file for writing. */
    if (open_output_file(argv[2], input_codec_context,
                         &output_format_context, &output_codec_context))
        goto cleanup;
    /** Initialize the resampler to be able to convert audio sample formats. */
    if (init_resampler(input_codec_context, output_codec_context,
                       &resample_context))
        goto cleanup;
    /** Initialize the FIFO buffer to store audio samples to be encoded. */
    if (init_fifo(&fifo, output_codec_context))
        goto cleanup;
    /** Write the header of the output file container. */
    if (write_output_file_header(output_format_context))
        goto cleanup;

    /**
     * Loop as long as we have input samples to read or output samples
     * to write; abort as soon as we have neither.
     */
    while (1) {
        /** Use the encoder's desired frame size for processing. */
        const int output_frame_size = output_codec_context->frame_size;
        int finished                = 0;

        /**
         * Make sure that there is one frame worth of samples in the FIFO
         * buffer so that the encoder can do its work.
         * Since the decoder's and the encoder's frame size may differ, we
         * need to FIFO buffer to store as many frames worth of input samples
         * that they make up at least one frame worth of output samples.
         */
        while (av_audio_fifo_size(fifo) < output_frame_size) {
            /**
             * Decode one frame worth of audio samples, convert it to the
             * output sample format and put it into the FIFO buffer.
             */
            if (read_decode_convert_and_store(fifo, input_format_context,
                                              input_codec_context,
                                              output_codec_context,
                                              resample_context, &finished))
                goto cleanup;

            /**
             * If we are at the end of the input file, we continue
             * encoding the remaining audio samples to the output file.
             */
            if (finished)
                break;
        }

        /**
         * If we have enough samples for the encoder, we encode them.
         * At the end of the file, we pass the remaining samples to
         * the encoder.
         */
        while (av_audio_fifo_size(fifo) >= output_frame_size ||
               (finished && av_audio_fifo_size(fifo) > 0))
            /**
             * Take one frame worth of audio samples from the FIFO buffer,
             * encode it and write it to the output file.
             */
            if (load_encode_and_write(fifo, output_format_context,
                                      output_codec_context))
                goto cleanup;

        /**
         * If we are at the end of the input file and have encoded
         * all remaining samples, we can exit this loop and finish.
         */
        if (finished) {
            int data_written;
            /** Flush the encoder as it may have delayed frames. */
            do {
                if (encode_audio_frame(NULL, output_format_context,
                                       output_codec_context, &data_written))
                    goto cleanup;
            } while (data_written);
            break;
        }
    }

    /** Write the trailer of the output file container. */
    if (write_output_file_trailer(output_format_context))
        goto cleanup;
    ret = 0;

cleanup:
    if (fifo)
        av_audio_fifo_free(fifo);
    swr_free(&resample_context);
    if (output_codec_context)
        avcodec_close(output_codec_context);
    if (output_format_context) {
        avio_closep(&output_format_context->pb);
        avformat_free_context(output_format_context);
    }
    if (input_codec_context)
        avcodec_close(input_codec_context);
    if (input_format_context)
        avformat_close_input(&input_format_context);

    return ret;
}
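
As a rough illustration (not part of the example source itself), this transcode_aac program might be built and run like so, assuming pkg-config can resolve the FFmpeg libraries it uses and with placeholder input/output file names:

    cc transcode_aac.c -o transcode_aac $(pkg-config --cflags --libs libavformat libavcodec libswresample libavutil)
    ./transcode_aac input.wav output.m4a
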
@@ -1,583 +0,0 @@
|
||||
/*
|
||||
* Copyright (c) 2010 Nicolas George
|
||||
* Copyright (c) 2011 Stefano Sabatini
|
||||
* Copyright (c) 2014 Andrey Utkin
|
||||
*
|
||||
* Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
* of this software and associated documentation files (the "Software"), to deal
|
||||
* in the Software without restriction, including without limitation the rights
|
||||
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
* copies of the Software, and to permit persons to whom the Software is
|
||||
* furnished to do so, subject to the following conditions:
|
||||
*
|
||||
* The above copyright notice and this permission notice shall be included in
|
||||
* all copies or substantial portions of the Software.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
|
||||
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
||||
* THE SOFTWARE.
|
||||
*/
|
||||
|
||||
/**
|
||||
* @file
|
||||
* API example for demuxing, decoding, filtering, encoding and muxing
|
||||
* @example transcoding.c
|
||||
*/
|
||||
|
||||
#include <libavcodec/avcodec.h>
|
||||
#include <libavformat/avformat.h>
|
||||
#include <libavfilter/avfiltergraph.h>
|
||||
#include <libavfilter/avcodec.h>
|
||||
#include <libavfilter/buffersink.h>
|
||||
#include <libavfilter/buffersrc.h>
|
||||
#include <libavutil/opt.h>
|
||||
#include <libavutil/pixdesc.h>
|
||||
|
||||
static AVFormatContext *ifmt_ctx;
|
||||
static AVFormatContext *ofmt_ctx;
|
||||
typedef struct FilteringContext {
|
||||
AVFilterContext *buffersink_ctx;
|
||||
AVFilterContext *buffersrc_ctx;
|
||||
AVFilterGraph *filter_graph;
|
||||
} FilteringContext;
|
||||
static FilteringContext *filter_ctx;
|
||||
|
||||
static int open_input_file(const char *filename)
|
||||
{
|
||||
int ret;
|
||||
unsigned int i;
|
||||
|
||||
ifmt_ctx = NULL;
|
||||
if ((ret = avformat_open_input(&ifmt_ctx, filename, NULL, NULL)) < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot open input file\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
if ((ret = avformat_find_stream_info(ifmt_ctx, NULL)) < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot find stream information\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
|
||||
AVStream *stream;
|
||||
AVCodecContext *codec_ctx;
|
||||
stream = ifmt_ctx->streams[i];
|
||||
codec_ctx = stream->codec;
|
||||
/* Reencode video & audio and remux subtitles etc. */
|
||||
if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
|
||||
|| codec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
|
||||
/* Open decoder */
|
||||
ret = avcodec_open2(codec_ctx,
|
||||
avcodec_find_decoder(codec_ctx->codec_id), NULL);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Failed to open decoder for stream #%u\n", i);
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
av_dump_format(ifmt_ctx, 0, filename, 0);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int open_output_file(const char *filename)
|
||||
{
|
||||
AVStream *out_stream;
|
||||
AVStream *in_stream;
|
||||
AVCodecContext *dec_ctx, *enc_ctx;
|
||||
AVCodec *encoder;
|
||||
int ret;
|
||||
unsigned int i;
|
||||
|
||||
ofmt_ctx = NULL;
|
||||
avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, filename);
|
||||
if (!ofmt_ctx) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Could not create output context\n");
|
||||
return AVERROR_UNKNOWN;
|
||||
}
|
||||
|
||||
|
||||
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
|
||||
out_stream = avformat_new_stream(ofmt_ctx, NULL);
|
||||
if (!out_stream) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Failed allocating output stream\n");
|
||||
return AVERROR_UNKNOWN;
|
||||
}
|
||||
|
||||
in_stream = ifmt_ctx->streams[i];
|
||||
dec_ctx = in_stream->codec;
|
||||
enc_ctx = out_stream->codec;
|
||||
|
||||
if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
|
||||
|| dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
|
||||
/* in this example, we choose transcoding to same codec */
|
||||
encoder = avcodec_find_encoder(dec_ctx->codec_id);
|
||||
if (!encoder) {
|
||||
av_log(NULL, AV_LOG_FATAL, "Necessary encoder not found\n");
|
||||
return AVERROR_INVALIDDATA;
|
||||
}
|
||||
|
||||
/* In this example, we transcode to same properties (picture size,
|
||||
* sample rate etc.). These properties can be changed for output
|
||||
* streams easily using filters */
|
||||
if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
|
||||
enc_ctx->height = dec_ctx->height;
|
||||
enc_ctx->width = dec_ctx->width;
|
||||
enc_ctx->sample_aspect_ratio = dec_ctx->sample_aspect_ratio;
|
||||
/* take first format from list of supported formats */
|
||||
enc_ctx->pix_fmt = encoder->pix_fmts[0];
|
||||
/* video time_base can be set to whatever is handy and supported by encoder */
|
||||
enc_ctx->time_base = dec_ctx->time_base;
|
||||
} else {
|
||||
enc_ctx->sample_rate = dec_ctx->sample_rate;
|
||||
enc_ctx->channel_layout = dec_ctx->channel_layout;
|
||||
enc_ctx->channels = av_get_channel_layout_nb_channels(enc_ctx->channel_layout);
|
||||
/* take first format from list of supported formats */
|
||||
enc_ctx->sample_fmt = encoder->sample_fmts[0];
|
||||
enc_ctx->time_base = (AVRational){1, enc_ctx->sample_rate};
|
||||
}
|
||||
|
||||
/* Third parameter can be used to pass settings to encoder */
|
||||
ret = avcodec_open2(enc_ctx, encoder, NULL);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot open video encoder for stream #%u\n", i);
|
||||
return ret;
|
||||
}
|
||||
} else if (dec_ctx->codec_type == AVMEDIA_TYPE_UNKNOWN) {
|
||||
av_log(NULL, AV_LOG_FATAL, "Elementary stream #%d is of unknown type, cannot proceed\n", i);
|
||||
return AVERROR_INVALIDDATA;
|
||||
} else {
|
||||
/* if this stream must be remuxed */
|
||||
ret = avcodec_copy_context(ofmt_ctx->streams[i]->codec,
|
||||
ifmt_ctx->streams[i]->codec);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Copying stream context failed\n");
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
|
||||
if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
|
||||
enc_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
|
||||
|
||||
}
|
||||
av_dump_format(ofmt_ctx, 0, filename, 1);
|
||||
|
||||
if (!(ofmt_ctx->oformat->flags & AVFMT_NOFILE)) {
|
||||
ret = avio_open(&ofmt_ctx->pb, filename, AVIO_FLAG_WRITE);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Could not open output file '%s'", filename);
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
|
||||
/* init muxer, write output file header */
|
||||
ret = avformat_write_header(ofmt_ctx, NULL);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Error occurred when opening output file\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int init_filter(FilteringContext* fctx, AVCodecContext *dec_ctx,
|
||||
AVCodecContext *enc_ctx, const char *filter_spec)
|
||||
{
|
||||
char args[512];
|
||||
int ret = 0;
|
||||
AVFilter *buffersrc = NULL;
|
||||
AVFilter *buffersink = NULL;
|
||||
AVFilterContext *buffersrc_ctx = NULL;
|
||||
AVFilterContext *buffersink_ctx = NULL;
|
||||
AVFilterInOut *outputs = avfilter_inout_alloc();
|
||||
AVFilterInOut *inputs = avfilter_inout_alloc();
|
||||
AVFilterGraph *filter_graph = avfilter_graph_alloc();
|
||||
|
||||
if (!outputs || !inputs || !filter_graph) {
|
||||
ret = AVERROR(ENOMEM);
|
||||
goto end;
|
||||
}
|
||||
|
||||
if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
|
||||
buffersrc = avfilter_get_by_name("buffer");
|
||||
buffersink = avfilter_get_by_name("buffersink");
|
||||
if (!buffersrc || !buffersink) {
|
||||
av_log(NULL, AV_LOG_ERROR, "filtering source or sink element not found\n");
|
||||
ret = AVERROR_UNKNOWN;
|
||||
goto end;
|
||||
}
|
||||
|
||||
snprintf(args, sizeof(args),
|
||||
"video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
|
||||
dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
|
||||
dec_ctx->time_base.num, dec_ctx->time_base.den,
|
||||
dec_ctx->sample_aspect_ratio.num,
|
||||
dec_ctx->sample_aspect_ratio.den);
|
||||
|
||||
ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
|
||||
args, NULL, filter_graph);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot create buffer source\n");
|
||||
goto end;
|
||||
}
|
||||
|
||||
ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
|
||||
NULL, NULL, filter_graph);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot create buffer sink\n");
|
||||
goto end;
|
||||
}
|
||||
|
||||
ret = av_opt_set_bin(buffersink_ctx, "pix_fmts",
|
||||
(uint8_t*)&enc_ctx->pix_fmt, sizeof(enc_ctx->pix_fmt),
|
||||
AV_OPT_SEARCH_CHILDREN);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot set output pixel format\n");
|
||||
goto end;
|
||||
}
|
||||
} else if (dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
|
||||
buffersrc = avfilter_get_by_name("abuffer");
|
||||
buffersink = avfilter_get_by_name("abuffersink");
|
||||
if (!buffersrc || !buffersink) {
|
||||
av_log(NULL, AV_LOG_ERROR, "filtering source or sink element not found\n");
|
||||
ret = AVERROR_UNKNOWN;
|
||||
goto end;
|
||||
}
|
||||
|
||||
if (!dec_ctx->channel_layout)
|
||||
dec_ctx->channel_layout =
|
||||
av_get_default_channel_layout(dec_ctx->channels);
|
||||
snprintf(args, sizeof(args),
|
||||
"time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=0x%"PRIx64,
|
||||
dec_ctx->time_base.num, dec_ctx->time_base.den, dec_ctx->sample_rate,
|
||||
av_get_sample_fmt_name(dec_ctx->sample_fmt),
|
||||
dec_ctx->channel_layout);
|
||||
ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
|
||||
args, NULL, filter_graph);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer source\n");
|
||||
goto end;
|
||||
}
|
||||
|
||||
ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
|
||||
NULL, NULL, filter_graph);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer sink\n");
|
||||
goto end;
|
||||
}
|
||||
|
||||
ret = av_opt_set_bin(buffersink_ctx, "sample_fmts",
|
||||
(uint8_t*)&enc_ctx->sample_fmt, sizeof(enc_ctx->sample_fmt),
|
||||
AV_OPT_SEARCH_CHILDREN);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot set output sample format\n");
|
||||
goto end;
|
||||
}
|
||||
|
||||
ret = av_opt_set_bin(buffersink_ctx, "channel_layouts",
|
||||
(uint8_t*)&enc_ctx->channel_layout,
|
||||
sizeof(enc_ctx->channel_layout), AV_OPT_SEARCH_CHILDREN);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot set output channel layout\n");
|
||||
goto end;
|
||||
}
|
||||
|
||||
ret = av_opt_set_bin(buffersink_ctx, "sample_rates",
|
||||
(uint8_t*)&enc_ctx->sample_rate, sizeof(enc_ctx->sample_rate),
|
||||
AV_OPT_SEARCH_CHILDREN);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Cannot set output sample rate\n");
|
||||
goto end;
|
||||
}
|
||||
} else {
|
||||
ret = AVERROR_UNKNOWN;
|
||||
goto end;
|
||||
}
|
||||
|
||||
/* Endpoints for the filter graph. */
|
||||
outputs->name = av_strdup("in");
|
||||
outputs->filter_ctx = buffersrc_ctx;
|
||||
outputs->pad_idx = 0;
|
||||
outputs->next = NULL;
|
||||
|
||||
inputs->name = av_strdup("out");
|
||||
inputs->filter_ctx = buffersink_ctx;
|
||||
inputs->pad_idx = 0;
|
||||
inputs->next = NULL;
|
||||
|
||||
if (!outputs->name || !inputs->name) {
|
||||
ret = AVERROR(ENOMEM);
|
||||
goto end;
|
||||
}
|
||||
|
||||
if ((ret = avfilter_graph_parse_ptr(filter_graph, filter_spec,
|
||||
&inputs, &outputs, NULL)) < 0)
|
||||
goto end;
|
||||
|
||||
if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
|
||||
goto end;
|
||||
|
||||
/* Fill FilteringContext */
|
||||
fctx->buffersrc_ctx = buffersrc_ctx;
|
||||
fctx->buffersink_ctx = buffersink_ctx;
|
||||
fctx->filter_graph = filter_graph;
|
||||
|
||||
end:
|
||||
avfilter_inout_free(&inputs);
|
||||
avfilter_inout_free(&outputs);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int init_filters(void)
|
||||
{
|
||||
const char *filter_spec;
|
||||
unsigned int i;
|
||||
int ret;
|
||||
filter_ctx = av_malloc_array(ifmt_ctx->nb_streams, sizeof(*filter_ctx));
|
||||
if (!filter_ctx)
|
||||
return AVERROR(ENOMEM);
|
||||
|
||||
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
|
||||
filter_ctx[i].buffersrc_ctx = NULL;
|
||||
filter_ctx[i].buffersink_ctx = NULL;
|
||||
filter_ctx[i].filter_graph = NULL;
|
||||
if (!(ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO
|
||||
|| ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO))
|
||||
continue;
|
||||
|
||||
|
||||
if (ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
|
||||
filter_spec = "null"; /* passthrough (dummy) filter for video */
|
||||
else
|
||||
filter_spec = "anull"; /* passthrough (dummy) filter for audio */
|
||||
ret = init_filter(&filter_ctx[i], ifmt_ctx->streams[i]->codec,
|
||||
ofmt_ctx->streams[i]->codec, filter_spec);
|
||||
if (ret)
|
||||
return ret;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int encode_write_frame(AVFrame *filt_frame, unsigned int stream_index, int *got_frame) {
|
||||
int ret;
|
||||
int got_frame_local;
|
||||
AVPacket enc_pkt;
|
||||
int (*enc_func)(AVCodecContext *, AVPacket *, const AVFrame *, int *) =
|
||||
(ifmt_ctx->streams[stream_index]->codec->codec_type ==
|
||||
AVMEDIA_TYPE_VIDEO) ? avcodec_encode_video2 : avcodec_encode_audio2;
|
||||
|
||||
if (!got_frame)
|
||||
got_frame = &got_frame_local;
|
||||
|
||||
av_log(NULL, AV_LOG_INFO, "Encoding frame\n");
|
||||
/* encode filtered frame */
|
||||
enc_pkt.data = NULL;
|
||||
enc_pkt.size = 0;
|
||||
av_init_packet(&enc_pkt);
|
||||
ret = enc_func(ofmt_ctx->streams[stream_index]->codec, &enc_pkt,
|
||||
filt_frame, got_frame);
|
||||
av_frame_free(&filt_frame);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
if (!(*got_frame))
|
||||
return 0;
|
||||
|
||||
/* prepare packet for muxing */
|
||||
enc_pkt.stream_index = stream_index;
|
||||
av_packet_rescale_ts(&enc_pkt,
|
||||
ofmt_ctx->streams[stream_index]->codec->time_base,
|
||||
ofmt_ctx->streams[stream_index]->time_base);
|
||||
|
||||
av_log(NULL, AV_LOG_DEBUG, "Muxing frame\n");
|
||||
/* mux encoded frame */
|
||||
ret = av_interleaved_write_frame(ofmt_ctx, &enc_pkt);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int filter_encode_write_frame(AVFrame *frame, unsigned int stream_index)
|
||||
{
|
||||
int ret;
|
||||
AVFrame *filt_frame;
|
||||
|
||||
av_log(NULL, AV_LOG_INFO, "Pushing decoded frame to filters\n");
|
||||
/* push the decoded frame into the filtergraph */
|
||||
ret = av_buffersrc_add_frame_flags(filter_ctx[stream_index].buffersrc_ctx,
|
||||
frame, 0);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Error while feeding the filtergraph\n");
|
||||
return ret;
|
||||
}
|
||||
|
||||
/* pull filtered frames from the filtergraph */
|
||||
while (1) {
|
||||
filt_frame = av_frame_alloc();
|
||||
if (!filt_frame) {
|
||||
ret = AVERROR(ENOMEM);
|
||||
break;
|
||||
}
|
||||
av_log(NULL, AV_LOG_INFO, "Pulling filtered frame from filters\n");
|
||||
ret = av_buffersink_get_frame(filter_ctx[stream_index].buffersink_ctx,
|
||||
filt_frame);
|
||||
if (ret < 0) {
|
||||
/* if no more frames for output - returns AVERROR(EAGAIN)
|
||||
* if flushed and no more frames for output - returns AVERROR_EOF
|
||||
* rewrite retcode to 0 to show it as normal procedure completion
|
||||
*/
|
||||
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
|
||||
ret = 0;
|
||||
av_frame_free(&filt_frame);
|
||||
break;
|
||||
}
|
||||
|
||||
filt_frame->pict_type = AV_PICTURE_TYPE_NONE;
|
||||
ret = encode_write_frame(filt_frame, stream_index, NULL);
|
||||
if (ret < 0)
|
||||
break;
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int flush_encoder(unsigned int stream_index)
|
||||
{
|
||||
int ret;
|
||||
int got_frame;
|
||||
|
||||
if (!(ofmt_ctx->streams[stream_index]->codec->codec->capabilities &
|
||||
AV_CODEC_CAP_DELAY))
|
||||
return 0;
|
||||
|
||||
while (1) {
|
||||
av_log(NULL, AV_LOG_INFO, "Flushing stream #%u encoder\n", stream_index);
|
||||
ret = encode_write_frame(NULL, stream_index, &got_frame);
|
||||
if (ret < 0)
|
||||
break;
|
||||
if (!got_frame)
|
||||
return 0;
|
||||
}
|
||||
return ret;
|
||||
}
|
||||
|
||||
int main(int argc, char **argv)
|
||||
{
|
||||
int ret;
|
||||
AVPacket packet = { .data = NULL, .size = 0 };
|
||||
AVFrame *frame = NULL;
|
||||
enum AVMediaType type;
|
||||
unsigned int stream_index;
|
||||
unsigned int i;
|
||||
int got_frame;
|
||||
int (*dec_func)(AVCodecContext *, AVFrame *, int *, const AVPacket *);
|
||||
|
||||
if (argc != 3) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Usage: %s <input file> <output file>\n", argv[0]);
|
||||
return 1;
|
||||
}
|
||||
|
||||
av_register_all();
|
||||
avfilter_register_all();
|
||||
|
||||
if ((ret = open_input_file(argv[1])) < 0)
|
||||
goto end;
|
||||
if ((ret = open_output_file(argv[2])) < 0)
|
||||
goto end;
|
||||
if ((ret = init_filters()) < 0)
|
||||
goto end;
|
||||
|
||||
/* read all packets */
|
||||
while (1) {
|
||||
if ((ret = av_read_frame(ifmt_ctx, &packet)) < 0)
|
||||
break;
|
||||
stream_index = packet.stream_index;
|
||||
type = ifmt_ctx->streams[packet.stream_index]->codec->codec_type;
|
||||
av_log(NULL, AV_LOG_DEBUG, "Demuxer gave frame of stream_index %u\n",
|
||||
stream_index);
|
||||
|
||||
if (filter_ctx[stream_index].filter_graph) {
|
||||
av_log(NULL, AV_LOG_DEBUG, "Going to reencode&filter the frame\n");
|
||||
frame = av_frame_alloc();
|
||||
if (!frame) {
|
||||
ret = AVERROR(ENOMEM);
|
||||
break;
|
||||
}
|
||||
av_packet_rescale_ts(&packet,
|
||||
ifmt_ctx->streams[stream_index]->time_base,
|
||||
ifmt_ctx->streams[stream_index]->codec->time_base);
|
||||
dec_func = (type == AVMEDIA_TYPE_VIDEO) ? avcodec_decode_video2 :
|
||||
avcodec_decode_audio4;
|
||||
ret = dec_func(ifmt_ctx->streams[stream_index]->codec, frame,
|
||||
&got_frame, &packet);
|
||||
if (ret < 0) {
|
||||
av_frame_free(&frame);
|
||||
av_log(NULL, AV_LOG_ERROR, "Decoding failed\n");
|
||||
break;
|
||||
}
|
||||
|
||||
if (got_frame) {
|
||||
frame->pts = av_frame_get_best_effort_timestamp(frame);
|
||||
ret = filter_encode_write_frame(frame, stream_index);
|
||||
av_frame_free(&frame);
|
||||
if (ret < 0)
|
||||
goto end;
|
||||
} else {
|
||||
av_frame_free(&frame);
|
||||
}
|
||||
} else {
|
||||
/* remux this frame without reencoding */
|
||||
av_packet_rescale_ts(&packet,
|
||||
ifmt_ctx->streams[stream_index]->time_base,
|
||||
ofmt_ctx->streams[stream_index]->time_base);
|
||||
|
||||
ret = av_interleaved_write_frame(ofmt_ctx, &packet);
|
||||
if (ret < 0)
|
||||
goto end;
|
||||
}
|
||||
av_free_packet(&packet);
|
||||
}
|
||||
|
||||
/* flush filters and encoders */
|
||||
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
|
||||
/* flush filter */
|
||||
if (!filter_ctx[i].filter_graph)
|
||||
continue;
|
||||
ret = filter_encode_write_frame(NULL, i);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Flushing filter failed\n");
|
||||
goto end;
|
||||
}
|
||||
|
||||
/* flush encoder */
|
||||
ret = flush_encoder(i);
|
||||
if (ret < 0) {
|
||||
av_log(NULL, AV_LOG_ERROR, "Flushing encoder failed\n");
|
||||
goto end;
|
||||
}
|
||||
}
|
||||
|
||||
av_write_trailer(ofmt_ctx);
|
||||
end:
|
||||
av_free_packet(&packet);
|
||||
av_frame_free(&frame);
|
||||
for (i = 0; i < ifmt_ctx->nb_streams; i++) {
|
||||
avcodec_close(ifmt_ctx->streams[i]->codec);
|
||||
if (ofmt_ctx && ofmt_ctx->nb_streams > i && ofmt_ctx->streams[i] && ofmt_ctx->streams[i]->codec)
|
||||
avcodec_close(ofmt_ctx->streams[i]->codec);
|
||||
if (filter_ctx && filter_ctx[i].filter_graph)
|
||||
avfilter_graph_free(&filter_ctx[i].filter_graph);
|
||||
}
|
||||
av_free(filter_ctx);
|
||||
avformat_close_input(&ifmt_ctx);
|
||||
if (ofmt_ctx && !(ofmt_ctx->oformat->flags & AVFMT_NOFILE))
|
||||
avio_closep(&ofmt_ctx->pb);
|
||||
avformat_free_context(ofmt_ctx);
|
||||
|
||||
if (ret < 0)
|
||||
av_log(NULL, AV_LOG_ERROR, "Error occurred: %s\n", av_err2str(ret));
|
||||
|
||||
return ret ? 1 : 0;
|
||||
}
|
||||
157  doc/faq.texi
@@ -1,5 +1,4 @@
\input texinfo @c -*- texinfo -*-
@documentencoding UTF-8

@settitle FFmpeg FAQ
@titlepage
@@ -91,56 +90,6 @@ To build FFmpeg, you need to install the development package. It is usually
called @file{libfoo-dev} or @file{libfoo-devel}. You can remove it after the
build is finished, but be sure to keep the main package.

@section How do I make @command{pkg-config} find my libraries?

Somewhere along with your libraries, there is a @file{.pc} file (or several)
in a @file{pkgconfig} directory. You need to set environment variables to
point @command{pkg-config} to these files.

If you need to @emph{add} directories to @command{pkg-config}'s search list
(typical use case: library installed separately), add it to
@code{$PKG_CONFIG_PATH}:

@example
export PKG_CONFIG_PATH=/opt/x264/lib/pkgconfig:/opt/opus/lib/pkgconfig
@end example

If you need to @emph{replace} @command{pkg-config}'s search list
(typical use case: cross-compiling), set it in
@code{$PKG_CONFIG_LIBDIR}:

@example
export PKG_CONFIG_LIBDIR=/home/me/cross/usr/lib/pkgconfig:/home/me/cross/usr/local/lib/pkgconfig
@end example

If you need to know the library's internal dependencies (typical use: static
linking), add the @code{--static} option to @command{pkg-config}:

@example
./configure --pkg-config-flags=--static
@end example

@section How do I use @command{pkg-config} when cross-compiling?

The best way is to install @command{pkg-config} in your cross-compilation
environment. It will automatically use the cross-compilation libraries.

You can also use @command{pkg-config} from the host environment by
specifying explicitly @code{--pkg-config=pkg-config} to @command{configure}.
In that case, you must point @command{pkg-config} to the correct directories
using the @code{PKG_CONFIG_LIBDIR}, as explained in the previous entry.

As an intermediate solution, you can place in your cross-compilation
environment a script that calls the host @command{pkg-config} with
@code{PKG_CONFIG_LIBDIR} set. That script can look like that:

@example
#!/bin/sh
PKG_CONFIG_LIBDIR=/path/to/cross/lib/pkgconfig
export PKG_CONFIG_LIBDIR
exec /usr/bin/pkg-config "$@@"
@end example

@chapter Usage

@section ffmpeg does not work; what is wrong?
@@ -156,7 +105,7 @@ For example, img1.jpg, img2.jpg, img3.jpg,...
Then you may run:

@example
ffmpeg -f image2 -i img%d.jpg /tmp/a.mpg
@end example

Notice that @samp{%d} is replaced by the image number.
@@ -169,7 +118,7 @@ the sequence. This is useful if your sequence does not start with
example will start with @file{img100.jpg}:

@example
ffmpeg -f image2 -start_number 100 -i img%d.jpg /tmp/a.mpg
@end example

If you have large number of pictures to rename, you can use the
@@ -179,7 +128,7 @@ that match @code{*jpg} to the @file{/tmp} directory in the sequence of
@file{img001.jpg}, @file{img002.jpg} and so on.

@example
x=1; for i in *jpg; do counter=$(printf %03d $x); ln -s "$i" /tmp/img"$counter".jpg; x=$(($x+1)); done
@end example

If you want to sequence them by oldest modified first, substitute
@@ -188,7 +137,7 @@ If you want to sequence them by oldest modified first, substitute
Then run:

@example
ffmpeg -f image2 -i /tmp/img%03d.jpg /tmp/a.mpg
@end example

The same logic is used for any image format that ffmpeg reads.
@@ -196,7 +145,7 @@ The same logic is used for any image format that ffmpeg reads.
You can also use @command{cat} to pipe images to ffmpeg:

@example
cat *.jpg | ffmpeg -f image2pipe -c:v mjpeg -i - output.mpg
@end example

@section How do I encode movie to single pictures?
@@ -204,7 +153,7 @@ cat *.jpg | ffmpeg -f image2pipe -c:v mjpeg -i - output.mpg
Use:

@example
ffmpeg -i movie.mpg movie%d.jpg
@end example

The @file{movie.mpg} used as input will be converted to
@@ -220,7 +169,7 @@ to force the encoding.

Applying that to the previous example:
@example
ffmpeg -i movie.mpg -f image2 -c:v mjpeg menu%d.jpg
@end example

Beware that there is no "jpeg" codec. Use "mjpeg" instead.
@@ -278,15 +227,15 @@ then you may use any file that DirectShow can read as input.

Just create an "input.avs" text file with this single line ...
@example
DirectShowSource("C:\path to your file\yourfile.asf")
@end example
... and then feed that text file to ffmpeg:
@example
ffmpeg -i input.avs
@end example

For ANY other help on AviSynth, please visit the
@uref{http://www.avisynth.org/, AviSynth homepage}.
For ANY other help on Avisynth, please visit the
@uref{http://www.avisynth.org/, Avisynth homepage}.

@section How can I join video files?

@@ -349,7 +298,7 @@ FFmpeg has a @url{http://ffmpeg.org/ffmpeg-protocols.html#concat,
@code{concat}} protocol designed specifically for that, with examples in the
documentation.

A few multimedia containers (MPEG-1, MPEG-2 PS, DV) allow one to concatenate
A few multimedia containers (MPEG-1, MPEG-2 PS, DV) allow to concatenate
video by merely concatenating the files containing them.

Hence you may concatenate your multimedia files by first transcoding them to
@@ -419,22 +368,42 @@ ffmpeg -f u16le -acodec pcm_s16le -ac 2 -ar 44100 -i all.a \
rm temp[12].[av] all.[av]
@end example
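
As an illustration of the @code{concat} protocol mentioned in the joining entry above (not part of the original FAQ text; file names are placeholders), MPEG-PS files can be joined without re-encoding like this:

@example
ffmpeg -i "concat:input1.mpg|input2.mpg" -c copy output.mpg
@end example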

@section -profile option fails when encoding H.264 video with AAC audio

@command{ffmpeg} prints an error like

@example
Undefined constant or missing '(' in 'baseline'
Unable to parse option value "baseline"
Error setting option profile to value baseline.
@end example

Short answer: write @option{-profile:v} instead of @option{-profile}.

Long answer: this happens because the @option{-profile} option can apply to both
video and audio. Specifically the AAC encoder also defines some profiles, none
of which are named @var{baseline}.

The solution is to apply the @option{-profile} option to the video stream only
by using @url{http://ffmpeg.org/ffmpeg.html#Stream-specifiers-1, Stream specifiers}.
Appending @code{:v} to it will do exactly that.
|
||||
|
||||
Use @option{-dumpgraph -} to find out exactly where the channel layout is
|
||||
lost.
|
||||
|
||||
Most likely, it is through @code{auto-inserted aresample}. Try to understand
|
||||
Most likely, it is through @code{auto-inserted aconvert}. Try to understand
|
||||
why the converting filter was needed at that place.
|
||||
|
||||
Just before the output is a likely place, as @option{-f lavfi} currently
|
||||
only support packed S16.
|
||||
|
||||
Then insert the correct @code{aformat} explicitly in the filtergraph,
|
||||
Then insert the correct @code{aconvert} explicitly in the filter graph,
|
||||
specifying the exact format.
|
||||
|
||||
@example
|
||||
aformat=sample_fmts=s16:channel_layouts=stereo
|
||||
aconvert=s16:stereo:packed
|
||||
@end example
|
||||
|
||||
@section Why does FFmpeg not see the subtitles in my VOB file?
|
||||
@@ -443,7 +412,7 @@ VOB and a few other formats do not have a global header that describes
|
||||
everything present in the file. Instead, applications are supposed to scan
|
||||
the file to see what it contains. Since VOB files are frequently large, only
|
||||
the beginning is scanned. If the subtitles happen only later in the file,
|
||||
they will not be initially detected.
|
||||
they will not be initally detected.
|
||||
|
||||
Some applications, including the @code{ffmpeg} command-line tool, can only
|
||||
work with streams that were detected during the initial scan; streams that
|
||||
@@ -467,40 +436,6 @@ point acceptable for your tastes. The most common options to do that are
|
||||
@option{-qscale} and @option{-qmax}, but you should peruse the documentation
|
||||
of the encoder you chose.
|
||||
|
||||
@section I have a stretched video, why does scaling does not fix it?
|
||||
|
||||
A lot of video codecs and formats can store the @emph{aspect ratio} of the
|
||||
video: this is the ratio between the width and the height of either the full
|
||||
image (DAR, display aspect ratio) or individual pixels (SAR, sample aspect
|
||||
ratio). For example, EGA screens at resolution 640×350 had 4:3 DAR and 35:48
|
||||
SAR.
|
||||
|
||||
Most still image processing work with square pixels, i.e. 1:1 SAR, but a lot
|
||||
of video standards, especially from the analogic-numeric transition era, use
|
||||
non-square pixels.
|
||||
|
||||
Most processing filters in FFmpeg handle the aspect ratio to avoid
|
||||
stretching the image: cropping adjusts the DAR to keep the SAR constant,
|
||||
scaling adjusts the SAR to keep the DAR constant.
|
||||
|
||||
If you want to stretch, or “unstretch”, the image, you need to override the
|
||||
information with the
|
||||
@url{http://ffmpeg.org/ffmpeg-filters.html#setdar_002c-setsar, @code{setdar or setsar filters}}.
|
||||
|
||||
Do not forget to examine carefully the original video to check whether the
|
||||
stretching comes from the image or from the aspect ratio information.
|
||||
|
||||
For example, to fix a badly encoded EGA capture, use the following commands,
|
||||
either the first one to upscale to square pixels or the second one to set
|
||||
the correct aspect ratio or the third one to avoid transcoding (may not work
|
||||
depending on the format / codec / player / phase of the moon):
|
||||
|
||||
@example
|
||||
ffmpeg -i ega_screen.nut -vf scale=640:480,setsar=1 ega_screen_scaled.nut
|
||||
ffmpeg -i ega_screen.nut -vf setdar=4/3 ega_screen_anamorphic.nut
|
||||
ffmpeg -i ega_screen.nut -aspect 4/3 -c copy ega_screen_overridden.nut
|
||||
@end example
|
||||
|
||||
@chapter Development
|
||||
|
||||
@section Are there examples illustrating how to use the FFmpeg libraries, particularly libavcodec and libavformat?
|
||||
@@ -540,10 +475,9 @@ read @uref{http://www.tux.org/lkml/#s15, "Programming Religion"}.
|
||||
|
||||
@section Why are the ffmpeg programs devoid of debugging symbols?
|
||||
|
||||
The build process creates @command{ffmpeg_g}, @command{ffplay_g}, etc. which
|
||||
contain full debug information. Those binaries are stripped to create
|
||||
@command{ffmpeg}, @command{ffplay}, etc. If you need the debug information, use
|
||||
the *_g versions.
|
||||
The build process creates ffmpeg_g, ffplay_g, etc. which contain full debug
|
||||
information. Those binaries are stripped to create ffmpeg, ffplay, etc. If
|
||||
you need the debug information, use the *_g versions.
|
||||
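
As an aside not found in the FAQ itself, inspecting a crash with the unstripped binary could look like this (options and file names are purely illustrative):

@example
gdb --args ./ffmpeg_g -i input.mkv out.mp4
@end example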

@section I do not like the LGPL, can I contribute code under the GPL instead?

@@ -563,7 +497,7 @@ An easy way to get the full list of required libraries in dependency order
is to use @code{pkg-config}.

@example
c99 -o program program.c $(pkg-config --cflags --libs libavformat libavcodec)
@end example

See @file{doc/example/Makefile} and @file{doc/example/pc-uninstalled} for
@@ -587,6 +521,10 @@ to use them you have to append -D__STDC_CONSTANT_MACROS to your CXXFLAGS
You have to create a custom AVIOContext using @code{avio_alloc_context},
see @file{libavformat/aviobuf.c} in FFmpeg and @file{libmpdemux/demux_lavf.c} in MPlayer or MPlayer2 sources.

@section Where can I find libav* headers for Pascal/Delphi?

see @url{http://www.iversenit.dk/dev/ffmpeg-headers/}

@section Where is the documentation about ffv1, msmpeg4, asv1, 4xm?

see @url{http://www.ffmpeg.org/~michael/}
@@ -599,12 +537,11 @@ In this specific case please look at RFC 4629 to see how it should be done.

@section AVStream.r_frame_rate is wrong, it is much larger than the frame rate.

@code{r_frame_rate} is NOT the average frame rate, it is the smallest frame rate
r_frame_rate is NOT the average frame rate, it is the smallest frame rate
that can accurately represent all timestamps. So no, it is not
wrong if it is larger than the average!
For example, if you have mixed 25 and 30 fps content, then @code{r_frame_rate}
will be 150 (it is the least common multiple).
If you are looking for the average frame rate, see @code{AVStream.avg_frame_rate}.
For example, if you have mixed 25 and 30 fps content, then r_frame_rate
will be 150.
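
For illustration (not part of the original FAQ text), both fields can be compared for a given file with @command{ffprobe}; the file name is a placeholder:

@example
ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate,avg_frame_rate input.mkv
@end example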

@section Why is @code{make fate} not running all tests?

100
doc/fate.texi
100
doc/fate.texi
@@ -1,5 +1,4 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle FFmpeg Automated Testing Environment
|
||||
@titlepage
|
||||
@@ -13,36 +12,36 @@
|
||||
|
||||
@chapter Introduction
|
||||
|
||||
FATE is an extended regression suite on the client-side and a means
|
||||
FATE is an extended regression suite on the client-side and a means
|
||||
for results aggregation and presentation on the server-side.
|
||||
|
||||
The first part of this document explains how you can use FATE from
|
||||
The first part of this document explains how you can use FATE from
|
||||
your FFmpeg source directory to test your ffmpeg binary. The second
|
||||
part describes how you can run FATE to submit the results to FFmpeg's
|
||||
FATE server.
|
||||
|
||||
In any way you can have a look at the publicly viewable FATE results
|
||||
In any way you can have a look at the publicly viewable FATE results
|
||||
by visiting this website:
|
||||
|
||||
@url{http://fate.ffmpeg.org/}
|
||||
@url{http://fate.ffmpeg.org/}
|
||||
|
||||
This is especially recommended for all people contributing source
|
||||
This is especially recommended for all people contributing source
|
||||
code to FFmpeg, as it can be seen if some test on some platform broke
|
||||
with their recent contribution. This usually happens on the platforms
|
||||
with there recent contribution. This usually happens on the platforms
|
||||
the developers could not test on.
|
||||
|
||||
The second part of this document describes how you can run FATE to
|
||||
The second part of this document describes how you can run FATE to
|
||||
submit your results to FFmpeg's FATE server. If you want to submit your
|
||||
results be sure to check that your combination of CPU, OS and compiler
|
||||
is not already listed on the above mentioned website.
|
||||
|
||||
In the third part you can find a comprehensive listing of FATE makefile
|
||||
In the third part you can find a comprehensive listing of FATE makefile
|
||||
targets and variables.
|
||||
|
||||
|
||||
@chapter Using FATE from your FFmpeg source directory
|
||||
|
||||
If you want to run FATE on your machine you need to have the samples
|
||||
If you want to run FATE on your machine you need to have the samples
|
||||
in place. You can get the samples via the build target fate-rsync.
|
||||
Use this command from the top-level source directory:
|
||||
|
||||
@@ -51,11 +50,11 @@ make fate-rsync SAMPLES=fate-suite/
|
||||
make fate SAMPLES=fate-suite/
|
||||
@end example
|
||||
|
||||
The above commands set the samples location by passing a makefile
|
||||
The above commands set the samples location by passing a makefile
|
||||
variable via command line. It is also possible to set the samples
|
||||
location at source configuration time by invoking configure with
|
||||
@option{--samples=<path to the samples directory>}. Afterwards you can
|
||||
invoke the makefile targets without setting the @var{SAMPLES} makefile
|
||||
`--samples=<path to the samples directory>'. Afterwards you can
|
||||
invoke the makefile targets without setting the SAMPLES makefile
|
||||
variable. This is illustrated by the following commands:
|
||||
|
||||
@example
|
||||
@@ -64,7 +63,7 @@ make fate-rsync
|
||||
make fate
|
||||
@end example
|
||||
|
||||
Yet another way to tell FATE about the location of the sample
|
||||
Yet another way to tell FATE about the location of the sample
|
||||
directory is by making sure the environment variable FATE_SAMPLES
|
||||
contains the path to your samples directory. This can be achieved
|
||||
by e.g. putting that variable in your shell profile or by setting
|
||||
@@ -85,7 +84,7 @@ To use a custom wrapper to run the test, pass @option{--target-exec} to
|
||||
|
||||
@chapter Submitting the results to the FFmpeg result aggregation server
|
||||
|
||||
To submit your results to the server you should run fate through the
|
||||
To submit your results to the server you should run fate through the
|
||||
shell script @file{tests/fate.sh} from the FFmpeg sources. This script needs
|
||||
to be invoked with a configuration file as its first argument.
|
||||
|
||||
@@ -93,23 +92,23 @@ to be invoked with a configuration file as its first argument.
|
||||
tests/fate.sh /path/to/fate_config
|
||||
@end example
|
||||
|
||||
A configuration file template with comments describing the individual
|
||||
A configuration file template with comments describing the individual
|
||||
configuration variables can be found at @file{doc/fate_config.sh.template}.
|
||||
|
||||
@ifhtml
|
||||
The mentioned configuration template is also available here:
|
||||
The mentioned configuration template is also available here:
|
||||
@verbatiminclude fate_config.sh.template
@end ifhtml

Create a configuration that suits your needs, based on the configuration
template. The @env{slot} configuration variable can be any string that is not
Create a configuration that suits your needs, based on the configuration
template. The `slot' configuration variable can be any string that is not
yet used, but it is suggested that you name it adhering to the following
pattern @samp{@var{arch}-@var{os}-@var{compiler}-@var{compiler version}}. The
configuration file itself will be sourced in a shell script, therefore all
shell features may be used. This enables you to setup the environment as you
need it for your build.
pattern <arch>-<os>-<compiler>-<compiler version>. The configuration file
itself will be sourced in a shell script, therefore all shell features may
be used. This enables you to setup the environment as you need it for your
build.

For your first test runs the @env{fate_recv} variable should be empty or
For your first test runs the `fate_recv' variable should be empty or
commented out. This will run everything as normal except that it will omit
the submission of the results to the server. The following files should be
present in $workdir as specified in the configuration file:

@@ -122,29 +121,24 @@ present in $workdir as specified in the configuration file:
@item version
@end itemize

When you have everything working properly you can create an SSH key pair
and send the public key to the FATE server administrator who can be contacted
at the email address @email{fate-admin@@ffmpeg.org}.

Configure your SSH client to use public key authentication with that key
when connecting to the FATE server. Also do not forget to check the identity
of the server and to accept its host key. This can usually be achieved by
running your SSH client manually and killing it after you accepted the key.
The FATE server's fingerprint is:

@table @samp
@item RSA
d3:f1:83:97:a4:75:2b:a6:fb:d6:e8:aa:81:93:97:51
@item ECDSA
76:9f:68:32:04:1e:d5:d4:ec:47:3f:dc:fc:18:17:86
@end table
b1:31:c8:79:3f:04:1d:f8:f2:23:26:5a:fd:55:fa:92

If you have problems connecting to the FATE server, it may help to try out
the @command{ssh} command with one or more @option{-v} options. You should
get detailed output concerning your SSH configuration and the authentication
process.

The only thing left is to automate the execution of the fate.sh script and
the synchronisation of the samples directory.

@@ -154,20 +148,20 @@ the synchronisation of the samples directory.

@table @option
@item fate-rsync
Download/synchronize sample files to the configured samples directory.

@item fate-list
Will list all fate/regression test targets.

@item fate
Run the FATE test suite (requires the fate-suite dataset).
@end table

@section Makefile variables

@table @env
@table @option
@item V
Verbosity level, can be set to 0, 1 or 2.
@itemize
@item 0: show just the test arguments
@item 1: show just the command used in the test
@@ -175,28 +169,22 @@ Verbosity level, can be set to 0, 1 or 2.
@end itemize

@item SAMPLES
Specify or override the path to the FATE samples at make time, it has a
meaning only while running the regression tests.

@item THREADS
Specify how many threads to use while running regression tests, it is
quite useful to detect thread-related regressions.

@item THREAD_TYPE
Specify which threading strategy test, either @samp{slice} or @samp{frame},
by default @samp{slice+frame}
Specify which threading strategy test, either @var{slice} or @var{frame},
by default @var{slice+frame}

@item CPUFLAGS
Specify CPU flags.

@item TARGET_EXEC
Specify or override the wrapper used to run the tests.
The @env{TARGET_EXEC} option provides a way to run FATE wrapped in
@command{valgrind}, @command{qemu-user} or @command{wine} or on remote targets
through @command{ssh}.

@item GEN
Set to @samp{1} to generate the missing or mismatched references.
Specify or override the wrapper used to run the tests.
The @var{TARGET_EXEC} option provides a way to run FATE wrapped in
@command{valgrind}, @command{qemu-user} or @command{wine} or on remote targets
through @command{ssh}.
@end table
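
As a quick sketch of how the variables above combine on the command line, a
local FATE run might look like the following; the samples path is only a
placeholder and not part of this diff:

@example
make fate-rsync SAMPLES=fate-suite/
make fate SAMPLES=fate-suite/ THREADS=4 V=1
@end example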

@section Examples

@@ -1,24 +1,19 @@
slot=                                    # some unique identifier
repo=git://source.ffmpeg.org/ffmpeg.git  # the source repository
#branch=release/2.6                      # the branch to test
samples=                                 # path to samples directory
workdir=                                 # directory in which to do all the work
#fate_recv="ssh -T fate@fate.ffmpeg.org" # command to submit report
comment=                                 # optional description
build_only=     # set to "yes" for a compile-only instance that skips tests

# the following are optional and map to configure options
arch=
cpu=
cross_prefix=
as=
cc=
ld=
target_os=
sysroot=
target_exec=
target_path=
target_samples=
extra_cflags=
extra_ldflags=
extra_libs=
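
For illustration only, a filled-in slot might look like the sketch below; every
value is hypothetical and not taken from this diff:

@example
slot=x86_64-linux-gcc-4.8
repo=git://source.ffmpeg.org/ffmpeg.git
samples=$HOME/fate-samples
workdir=$HOME/fate-work
comment="nightly x86_64 builder"
@end example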

@@ -1,5 +1,4 @@
\input texinfo @c -*- texinfo -*-
@documentencoding UTF-8

@settitle FFmpeg Bitstream Filters Documentation
@titlepage

File diff suppressed because it is too large
@@ -1,5 +1,4 @@
\input texinfo @c -*- texinfo -*-
@documentencoding UTF-8

@settitle FFmpeg Devices Documentation
@titlepage
@@ -18,7 +17,27 @@ libavdevice library.

@c man end DESCRIPTION

@include devices.texi
@chapter Device Options
@c man begin DEVICE OPTIONS

The libavdevice library provides the same interface as
libavformat. Namely, an input device is considered like a demuxer, and
an output device like a muxer, and the interface and generic device
options are the same provided by libavformat (see the ffmpeg-formats
manual).

In addition each input or output device may support so-called private
options, which are specific for that component.

Options may be set by specifying -@var{option} @var{value} in the
FFmpeg tools, or by setting the value explicitly in the device
@code{AVFormatContext} options or using the @file{libavutil/opt.h} API
for programmatic use.
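
As a hedged illustration of the -@var{option} @var{value} form described
above, private device options can be passed on the command line like this;
the device, sizes and output name are assumptions, not part of the diff:

@example
ffmpeg -f x11grab -framerate 25 -video_size 1280x720 -i :0.0 output.mkv
@end example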

@c man end DEVICE OPTIONS

@include indevs.texi
@include outdevs.texi

@chapter See Also

@@ -1,5 +1,4 @@
\input texinfo @c -*- texinfo -*-
@documentencoding UTF-8

@settitle FFmpeg Filters Documentation
@titlepage

@@ -1,5 +1,4 @@
\input texinfo @c -*- texinfo -*-
@documentencoding UTF-8

@settitle FFmpeg Formats Documentation
@titlepage
@@ -18,7 +17,138 @@ provided by the libavformat library.

@c man end DESCRIPTION

@include formats.texi
@chapter Format Options
@c man begin FORMAT OPTIONS

The libavformat library provides some generic global options, which
can be set on all the muxers and demuxers. In addition each muxer or
demuxer may support so-called private options, which are specific for
that component.

Options may be set by specifying -@var{option} @var{value} in the
FFmpeg tools, or by setting the value explicitly in the
@code{AVFormatContext} options or using the @file{libavutil/opt.h} API
for programmatic use.

The list of supported options follows:

@table @option
@item avioflags @var{flags} (@emph{input/output})
Possible values:
@table @samp
@item direct
Reduce buffering.
@end table

@item probesize @var{integer} (@emph{input})
Set probing size in bytes, i.e. the size of the data to analyze to get
stream information. A higher value will allow to detect more
information in case it is dispersed into the stream, but will increase
latency. Must be an integer not lesser than 32. It is 5000000 by default.

@item packetsize @var{integer} (@emph{output})
Set packet size.

@item fflags @var{flags} (@emph{input/output})
Set format flags.

Possible values:
@table @samp
@item ignidx
Ignore index.
@item genpts
Generate PTS.
@item nofillin
Do not fill in missing values that can be exactly calculated.
@item noparse
Disable AVParsers, this needs @code{+nofillin} too.
@item igndts
Ignore DTS.
@item discardcorrupt
Discard corrupted frames.
@item sortdts
Try to interleave output packets by DTS.
@item keepside
Do not merge side data.
@item latm
Enable RTP MP4A-LATM payload.
@item nobuffer
Reduce the latency introduced by optional buffering
@end table

@item analyzeduration @var{integer} (@emph{input})
Specify how many microseconds are analyzed to probe the input. A
higher value will allow to detect more accurate information, but will
increase latency. It defaults to 5,000,000 microseconds = 5 seconds.

@item cryptokey @var{hexadecimal string} (@emph{input})
Set decryption key.

@item indexmem @var{integer} (@emph{input})
Set max memory used for timestamp index (per stream).

@item rtbufsize @var{integer} (@emph{input})
Set max memory used for buffering real-time frames.

@item fdebug @var{flags} (@emph{input/output})
Print specific debug info.

Possible values:
@table @samp
@item ts
@end table

@item max_delay @var{integer} (@emph{input/output})
Set maximum muxing or demuxing delay in microseconds.

@item fpsprobesize @var{integer} (@emph{input})
Set number of frames used to probe fps.

@item audio_preload @var{integer} (@emph{output})
Set microseconds by which audio packets should be interleaved earlier.

@item chunk_duration @var{integer} (@emph{output})
Set microseconds for each chunk.

@item chunk_size @var{integer} (@emph{output})
Set size in bytes for each chunk.

@item err_detect, f_err_detect @var{flags} (@emph{input})
Set error detection flags. @code{f_err_detect} is deprecated and
should be used only via the @command{ffmpeg} tool.

Possible values:
@table @samp
@item crccheck
Verify embedded CRCs.
@item bitstream
Detect bitstream specification deviations.
@item buffer
Detect improper bitstream length.
@item explode
Abort decoding on minor error detection.
@item careful
Consider things that violate the spec and have not been seen in the
wild as errors.
@item compliant
Consider all spec non compliancies as errors.
@item aggressive
Consider things that a sane encoder should not do as an error.
@end table

@item use_wallclock_as_timestamps @var{integer} (@emph{input})
Use wallclock as timestamps.

@item avoid_negative_ts @var{integer} (@emph{output})
Shift timestamps to make them positive. 1 enables, 0 disables, default
of -1 enables when required by target format.
@end table
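
To make a few of the global options above concrete, here is a hedged sketch
of how they might be combined in the @command{ffmpeg} tool; the file names and
values are placeholders, not part of this diff:

@example
ffmpeg -fflags +genpts+discardcorrupt -probesize 10000000 -analyzeduration 10000000 \
       -i input.ts -c copy -avoid_negative_ts 1 output.mkv
@end example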

@c man end FORMAT OPTIONS

@include demuxers.texi
@include muxers.texi
@include metadata.texi

@chapter See Also

@@ -1,5 +1,4 @@
\input texinfo @c -*- texinfo -*-
@documentencoding UTF-8

@settitle FFmpeg Protocols Documentation
@titlepage

@@ -1,5 +1,4 @@
\input texinfo @c -*- texinfo -*-
@documentencoding UTF-8

@settitle FFmpeg Resampler Documentation
@titlepage
@@ -13,14 +12,235 @@

@chapter Description
@c man begin DESCRIPTION

The FFmpeg resampler provides a high-level interface to the
The FFmpeg resampler provides an high-level interface to the
libswresample library audio resampling utilities. In particular it
allows one to perform audio resampling, audio channel layout rematrixing,
allows to perform audio resampling, audio channel layout rematrixing,
and convert audio format and packing layout.

@c man end DESCRIPTION

@include resampler.texi
@chapter Resampler Options
@c man begin RESAMPLER OPTIONS

The audio resampler supports the following named options.

Options may be set by specifying -@var{option} @var{value} in the
FFmpeg tools, @var{option}=@var{value} for the aresample filter,
by setting the value explicitly in the
@code{SwrContext} options or using the @file{libavutil/opt.h} API for
programmatic use.

@table @option

@item ich, in_channel_count
Set the number of input channels. Default value is 0. Setting this
value is not mandatory if the corresponding channel layout
@option{in_channel_layout} is set.

@item och, out_channel_count
Set the number of output channels. Default value is 0. Setting this
value is not mandatory if the corresponding channel layout
@option{out_channel_layout} is set.

@item uch, used_channel_count
Set the number of used input channels. Default value is 0. This option is
only used for special remapping.

@item isr, in_sample_rate
Set the input sample rate. Default value is 0.

@item osr, out_sample_rate
Set the output sample rate. Default value is 0.

@item isf, in_sample_fmt
Specify the input sample format. It is set by default to @code{none}.

@item osf, out_sample_fmt
Specify the output sample format. It is set by default to @code{none}.

@item tsf, internal_sample_fmt
Set the internal sample format. Default value is @code{none}.
This will automatically be chosen when it is not explicitly set.

@item icl, in_channel_layout
Set the input channel layout.

@item ocl, out_channel_layout
Set the output channel layout.

@item clev, center_mix_level
Set the center mix level. It is a value expressed in deciBel, and must be
in the interval [-32,32].

@item slev, surround_mix_level
Set the surround mix level. It is a value expressed in deciBel, and must
be in the interval [-32,32].

@item lfe_mix_level
Set LFE mix into non LFE level. It is used when there is a LFE input but no
LFE output. It is a value expressed in deciBel, and must
be in the interval [-32,32].

@item rmvol, rematrix_volume
Set rematrix volume. Default value is 1.0.

@item flags, swr_flags
Set flags used by the converter. Default value is 0.

It supports the following individual flags:
@table @option
@item res
force resampling, this flag forces resampling to be used even when the
input and output sample rates match.
@end table

@item dither_scale
Set the dither scale. Default value is 1.

@item dither_method
Set dither method. Default value is 0.

Supported values:
@table @samp
@item rectangular
select rectangular dither
@item triangular
select triangular dither
@item triangular_hp
select triangular dither with high pass
@item lipshitz
select lipshitz noise shaping dither
@item shibata
select shibata noise shaping dither
@item low_shibata
select low shibata noise shaping dither
@item high_shibata
select high shibata noise shaping dither
@item f_weighted
select f-weighted noise shaping dither
@item modified_e_weighted
select modified-e-weighted noise shaping dither
@item improved_e_weighted
select improved-e-weighted noise shaping dither
@end table

@item resampler
Set resampling engine. Default value is swr.

Supported values:
@table @samp
@item swr
select the native SW Resampler; filter options precision and cheby are not
applicable in this case.
@item soxr
select the SoX Resampler (where available); compensation, and filter options
filter_size, phase_shift, filter_type & kaiser_beta, are not applicable in this
case.
@end table

@item filter_size
For swr only, set resampling filter size, default value is 32.

@item phase_shift
For swr only, set resampling phase shift, default value is 10, and must be in
the interval [0,30].

@item linear_interp
Use Linear Interpolation if set to 1, default value is 0.

@item cutoff
Set cutoff frequency (swr: 6dB point; soxr: 0dB point) ratio; must be a float
value between 0 and 1. Default value is 0.97 with swr, and 0.91 with soxr
(which, with a sample-rate of 44100, preserves the entire audio band to 20kHz).

@item precision
For soxr only, the precision in bits to which the resampled signal will be
calculated. The default value of 20 (which, with suitable dithering, is
appropriate for a destination bit-depth of 16) gives SoX's 'High Quality'; a
value of 28 gives SoX's 'Very High Quality'.

@item cheby
For soxr only, selects passband rolloff none (Chebyshev) & higher-precision
approximation for 'irrational' ratios. Default value is 0.

@item async
For swr only, simple 1 parameter audio sync to timestamps using stretching,
squeezing, filling and trimming. Setting this to 1 will enable filling and
trimming, larger values represent the maximum amount in samples that the data
may be stretched or squeezed for each second.
Default value is 0, thus no compensation is applied to make the samples match
the audio timestamps.

@item first_pts
For swr only, assume the first pts should be this value. The time unit is 1 / sample rate.
This allows for padding/trimming at the start of stream. By default, no
assumption is made about the first frame's expected pts, so no padding or
trimming is done. For example, this could be set to 0 to pad the beginning with
silence if an audio stream starts after the video stream or to trim any samples
with a negative pts due to encoder delay.

@item min_comp
For swr only, set the minimum difference between timestamps and audio data (in
seconds) to trigger stretching/squeezing/filling or trimming of the
data to make it match the timestamps. The default is that
stretching/squeezing/filling and trimming is disabled
(@option{min_comp} = @code{FLT_MAX}).

@item min_hard_comp
For swr only, set the minimum difference between timestamps and audio data (in
seconds) to trigger adding/dropping samples to make it match the
timestamps. This option effectively is a threshold to select between
hard (trim/fill) and soft (squeeze/stretch) compensation. Note that
all compensation is by default disabled through @option{min_comp}.
The default is 0.1.

@item comp_duration
For swr only, set duration (in seconds) over which data is stretched/squeezed
to make it match the timestamps. Must be a non-negative double float value,
default value is 1.0.

@item max_soft_comp
For swr only, set maximum factor by which data is stretched/squeezed to make it
match the timestamps. Must be a non-negative double float value, default value
is 0.

@item matrix_encoding
Select matrixed stereo encoding.

It accepts the following values:
@table @samp
@item none
select none
@item dolby
select Dolby
@item dplii
select Dolby Pro Logic II
@end table

Default value is @code{none}.

@item filter_type
For swr only, select resampling filter type. This only affects resampling
operations.

It accepts the following values:
@table @samp
@item cubic
select cubic
@item blackman_nuttall
select Blackman Nuttall Windowed Sinc
@item kaiser
select Kaiser Windowed Sinc
@end table

@item kaiser_beta
For swr only, set Kaiser Window Beta value. Must be an integer in the
interval [2,16], default value is 9.

@end table
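
These options are most commonly reached through the aresample filter, as noted
above. A hedged sketch, with illustrative file names and option values:

@example
ffmpeg -i input.wav -af aresample=out_sample_rate=48000:dither_method=triangular_hp output.wav
@end example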

@c man end RESAMPLER OPTIONS

@chapter See Also

@@ -1,5 +1,4 @@
\input texinfo @c -*- texinfo -*-
@documentencoding UTF-8

@settitle FFmpeg Scaler Documentation
@titlepage
@@ -13,13 +12,111 @@

@chapter Description
@c man begin DESCRIPTION

The FFmpeg rescaler provides a high-level interface to the libswscale
library image conversion utilities. In particular it allows one to perform
The FFmpeg rescaler provides an high-level interface to the libswscale
library image conversion utilities. In particular it allows to perform
image rescaling and pixel format conversion.

@c man end DESCRIPTION

@include scaler.texi
@chapter Scaler Options
@c man begin SCALER OPTIONS

The video scaler supports the following named options.

Options may be set by specifying -@var{option} @var{value} in the
FFmpeg tools. For programmatic use, they can be set explicitly in the
@code{SwsContext} options or through the @file{libavutil/opt.h} API.

@table @option

@item sws_flags
Set the scaler flags. This is also used to set the scaling
algorithm. Only a single algorithm should be selected.

It accepts the following values:
@table @samp
@item fast_bilinear
Select fast bilinear scaling algorithm.

@item bilinear
Select bilinear scaling algorithm.

@item bicubic
Select bicubic scaling algorithm.

@item experimental
Select experimental scaling algorithm.

@item neighbor
Select nearest neighbor rescaling algorithm.

@item area
Select averaging area rescaling algorithm.

@item bicubiclin
Select bicubic scaling algorithm for the luma component, bilinear for
chroma components.

@item gauss
Select Gaussian rescaling algorithm.

@item sinc
Select sinc rescaling algorithm.

@item lanczos
Select lanczos rescaling algorithm.

@item spline
Select natural bicubic spline rescaling algorithm.

@item print_info
Enable printing/debug logging.

@item accurate_rnd
Enable accurate rounding.

@item full_chroma_int
Enable full chroma interpolation.

@item full_chroma_inp
Select full chroma input.

@item bitexact
Enable bitexact output.
@end table

@item srcw
Set source width.

@item srch
Set source height.

@item dstw
Set destination width.

@item dsth
Set destination height.

@item src_format
Set source pixel format (must be expressed as an integer).

@item dst_format
Set destination pixel format (must be expressed as an integer).

@item src_range
Select source range.

@item dst_range
Select destination range.

@item param0, param1
Set scaling algorithm parameters. The specified values are specific of
some scaling algorithms and ignored by others. The specified values
are floating point number values.

@end table
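
A hedged example of passing these flags from the @command{ffmpeg} command line;
the input and output names and the chosen sizes are placeholders:

@example
ffmpeg -i input.mp4 -vf scale=1280:720 -sws_flags lanczos+accurate_rnd+full_chroma_int output.mp4
@end example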

@c man end SCALER OPTIONS

@chapter See Also

@@ -1,5 +1,4 @@
\input texinfo @c -*- texinfo -*-
@documentencoding UTF-8

@settitle FFmpeg Utilities Documentation
@titlepage
@@ -18,7 +17,8 @@ by the libavutil library.

@c man end DESCRIPTION

@include utils.texi
@include syntax.texi
@include eval.texi

@chapter See Also

566 doc/ffmpeg.texi
@@ -1,5 +1,4 @@
\input texinfo @c -*- texinfo -*-
@documentencoding UTF-8

@settitle ffmpeg Documentation
@titlepage
@@ -17,26 +16,26 @@ ffmpeg [@var{global_options}] @{[@var{input_file_options}] -i @file{input_file}@

@chapter Description
@c man begin DESCRIPTION

@command{ffmpeg} is a very fast video and audio converter that can also grab from
ffmpeg is a very fast video and audio converter that can also grab from
a live audio/video source. It can also convert between arbitrary sample
rates and resize video on the fly with a high quality polyphase filter.

@command{ffmpeg} reads from an arbitrary number of input "files" (which can be regular
ffmpeg reads from an arbitrary number of input "files" (which can be regular
files, pipes, network streams, grabbing devices, etc.), specified by the
@code{-i} option, and writes to an arbitrary number of output "files", which are
specified by a plain output filename. Anything found on the command line which
cannot be interpreted as an option is considered to be an output filename.

Each input or output file can, in principle, contain any number of streams of
different types (video/audio/subtitle/attachment/data). The allowed number and/or
types of streams may be limited by the container format. Selecting which
streams from which inputs will go into which output is either done automatically
or with the @code{-map} option (see the Stream selection chapter).
Each input or output file can in principle contain any number of streams of
different types (video/audio/subtitle/attachment/data). Allowed number and/or
types of streams can be limited by the container format. Selecting, which
streams from which inputs go into output, is done either automatically or with
the @code{-map} option (see the Stream selection chapter).

To refer to input files in options, you must use their indices (0-based). E.g.
the first input file is @code{0}, the second is @code{1}, etc. Similarly, streams
the first input file is @code{0}, the second is @code{1} etc. Similarly, streams
within a file are referred to by their indices. E.g. @code{2:3} refers to the
fourth stream in the third input file. Also see the Stream specifiers chapter.
fourth stream in the third input file. See also the Stream specifiers chapter.

As a general rule, options are applied to the next specified
file. Therefore, order is important, and you can have the same
@@ -51,7 +50,7 @@ options apply ONLY to the next input or output file and are reset between files.

@itemize
@item
To set the video bitrate of the output file to 64 kbit/s:
To set the video bitrate of the output file to 64kbit/s:
@example
ffmpeg -i input.avi -b:v 64k -bufsize 64k output.avi
@end example
@@ -80,26 +79,14 @@ The format option may be needed for raw input files.

The transcoding process in @command{ffmpeg} for each output can be described by
the following diagram:

@verbatim
 _______              ______________
|       |            |              |
| input |  demuxer   | encoded data |   decoder
| file  | ---------> | packets      | -----+
|_______|            |______________|      |
                                            v
                                        _________
                                       |         |
                                       | decoded |
                                       | frames  |
                                       |_________|
 ________             ______________       |
|        |           |              |      |
| output | <-------- | encoded data | <----+
| file   |   muxer   | packets      |   encoder
|________|           |______________|
@example
 _______              ______________            _________              ______________            ________
|       |            |              |          |         |            |              |          |        |
| input |  demuxer   | encoded data |  decoder | decoded |  encoder   | encoded data |  muxer   | output |
| file  | ---------> | packets      | -------> | frames  | ---------> | packets      | -------> | file   |
|_______|            |______________|          |_________|            |______________|          |________|

@end verbatim
@end example

@command{ffmpeg} calls the libavformat library (containing demuxers) to read
input files and get packets containing encoded data from them. When there are
@@ -109,14 +96,14 @@ tracking lowest timestamp on any active input stream.
Encoded packets are then passed to the decoder (unless streamcopy is selected
for the stream, see further for a description). The decoder produces
uncompressed frames (raw video/PCM audio/...) which can be processed further by
filtering (see next section). After filtering, the frames are passed to the
encoder, which encodes them and outputs encoded packets. Finally those are
filtering (see next section). After filtering the frames are passed to the
encoder, which encodes them and outputs encoded packets again. Finally those are
passed to the muxer, which writes the encoded packets to the output file.

@section Filtering
Before encoding, @command{ffmpeg} can process raw audio and video frames using
filters from the libavfilter library. Several chained filters form a filter
graph. @command{ffmpeg} distinguishes between two types of filtergraphs:
graph. @command{ffmpeg} distinguishes between two types of filtergraphs -
simple and complex.

@subsection Simple filtergraphs
@@ -124,31 +111,26 @@ Simple filtergraphs are those that have exactly one input and output, both of
the same type. In the above diagram they can be represented by simply inserting
an additional step between decoding and encoding:

@verbatim
 _________                        ______________
|         |                      |              |
| decoded |                      | encoded data |
| frames  |\                   _ | packets      |
|_________| \                  /||______________|
             \   __________   /
  simple     _\||          | /  encoder
  filtergraph  | filtered  |/
               | frames    |
               |__________|
@example
 _________                        __________              ______________
|         |                      |          |            |              |
| decoded |  simple filtergraph  | filtered |  encoder   | encoded data |
| frames  | -------------------> | frames   | ---------> | packets      |
|_________|                      |__________|            |______________|

@end verbatim
@end example

Simple filtergraphs are configured with the per-stream @option{-filter} option
(with @option{-vf} and @option{-af} aliases for video and audio respectively).
A simple filtergraph for video can look for example like this:

@verbatim
 _______        _____________        _______        ________
|       |      |             |      |       |      |        |
| input | ---> | deinterlace | ---> | scale | ---> | output |
|_______|      |_____________|      |_______|      |________|
@example
 _______        _____________        _______        _____        ________
|       |      |             |      |       |      |     |      |        |
| input | ---> | deinterlace | ---> | scale | ---> | fps | ---> | output |
|_______|      |_____________|      |_______|      |_____|      |________|

@end verbatim
@end example

Note that some filters change frame properties but not frame contents. E.g. the
@code{fps} filter in the example above changes number of frames, but does not
@@ -157,11 +139,11 @@ only sets timestamps and otherwise passes the frames unchanged.

@subsection Complex filtergraphs
Complex filtergraphs are those which cannot be described as simply a linear
processing chain applied to one stream. This is the case, for example, when the graph has
processing chain applied to one stream. This is the case e.g. when the graph has
more than one input and/or output, or when output stream type is different from
input. They can be represented with the following diagram:

@verbatim
@example
 _________
|         |
| input 0 |\                    __________
@@ -179,14 +161,12 @@ input. They can be represented with the following diagram:
| input 2 |/
|_________|

@end verbatim
@end example

Complex filtergraphs are configured with the @option{-filter_complex} option.
Note that this option is global, since a complex filtergraph, by its nature,
Note that this option is global, since a complex filtergraph by its nature
cannot be unambiguously associated with a single stream or file.

The @option{-lavfi} option is equivalent to @option{-filter_complex}.

A trivial example of a complex filtergraph is the @code{overlay} filter, which
has two video inputs and one video output, containing one video overlaid on top
of the other. Its audio counterpart is the @code{amix} filter.
@@ -196,19 +176,19 @@ Stream copy is a mode selected by supplying the @code{copy} parameter to the
@option{-codec} option. It makes @command{ffmpeg} omit the decoding and encoding
step for the specified stream, so it does only demuxing and muxing. It is useful
for changing the container format or modifying container-level metadata. The
diagram above will, in this case, simplify to this:
diagram above will in this case simplify to this:

@verbatim
@example
 _______              ______________            ________
|       |            |              |          |        |
| input |  demuxer   | encoded data |  muxer   | output |
| file  | ---------> | packets      | -------> | file   |
|_______|            |______________|          |________|

@end verbatim
@end example

Since there is no decoding or encoding, it is very fast and there is no quality
loss. However, it might not work in some cases because of many factors. Applying
loss. However it might not work in some cases because of many factors. Applying
filters is obviously also impossible, since filters work on uncompressed data.

@c man end DETAILED DESCRIPTION
@@ -216,14 +196,14 @@ filters is obviously also impossible, since filters work on uncompressed data.
@chapter Stream selection
@c man begin STREAM SELECTION

By default, @command{ffmpeg} includes only one stream of each type (video, audio, subtitle)
By default ffmpeg includes only one stream of each type (video, audio, subtitle)
present in the input files and adds them to each output file. It picks the
"best" of each based upon the following criteria: for video, it is the stream
with the highest resolution, for audio, it is the stream with the most channels, for
subtitles, it is the first subtitle stream. In the case where several streams of
the same type rate equally, the stream with the lowest index is chosen.
"best" of each based upon the following criteria; for video it is the stream
with the highest resolution, for audio the stream with the most channels, for
subtitle it's the first subtitle stream. In the case where several streams of
the same type rate equally, the lowest numbered stream is chosen.

You can disable some of those defaults by using the @code{-vn/-an/-sn} options. For
You can disable some of those defaults by using @code{-vn/-an/-sn} options. For
full manual control, use the @code{-map} option, which disables the defaults just
described.
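
For instance, under these defaults an explicit mapping that keeps only the
first video and the first audio stream could look like the following sketch,
using the same INPUT/OUTPUT placeholders as the examples elsewhere in this
manual:

@example
ffmpeg -i INPUT -map 0:v:0 -map 0:a:0 OUTPUT
@end example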

@@ -232,7 +212,7 @@ described.
@chapter Options
@c man begin OPTIONS

@include fftools-common-opts.texi
@include avtools-common-opts.texi

@section Main options

@@ -240,7 +220,7 @@ described.

@item -f @var{fmt} (@emph{input/output})
Force input or output file format. The format is normally auto detected for input
files and guessed from the file extension for output files, so this option is not
files and guessed from file extension for output files, so this option is not
needed in most cases.

@item -i @var{filename} (@emph{input})
@@ -250,8 +230,7 @@ input file name
Overwrite output files without asking.

@item -n (@emph{global})
Do not overwrite output files, and exit immediately if a specified
output file already exists.
Do not overwrite output files but exit if file exists.

@item -c[:@var{stream_specifier}] @var{codec} (@emph{input/output,per-stream})
@itemx -codec[:@var{stream_specifier}] @var{codec} (@emph{input/output,per-stream})
@@ -273,22 +252,15 @@ ffmpeg -i INPUT -map 0 -c copy -c:v:1 libx264 -c:a:137 libvorbis OUTPUT
will copy all the streams except the second video, which will be encoded with
libx264, and the 138th audio, which will be encoded with libvorbis.

@item -t @var{duration} (@emph{input/output})
When used as an input option (before @code{-i}), limit the @var{duration} of
data read from the input file.

When used as an output option (before an output filename), stop writing the
output after its duration reaches @var{duration}.

@var{duration} must be a time duration specification,
see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
@item -t @var{duration} (@emph{output})
Stop writing the output after its duration reaches @var{duration}.
@var{duration} may be a number in seconds, or in @code{hh:mm:ss[.xxx]} form.

-to and -t are mutually exclusive and -t has priority.

@item -to @var{position} (@emph{output})
Stop writing the output at @var{position}.
@var{position} must be a time duration specification,
see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
@var{position} may be a number in seconds, or in @code{hh:mm:ss[.xxx]} form.

-to and -t are mutually exclusive and -t has priority.

@@ -297,39 +269,30 @@ Set the file size limit, expressed in bytes.

@item -ss @var{position} (@emph{input/output})
When used as an input option (before @code{-i}), seeks in this input file to
@var{position}. Note that in most formats it is not possible to seek exactly,
so @command{ffmpeg} will seek to the closest seek point before @var{position}.
When transcoding and @option{-accurate_seek} is enabled (the default), this
extra segment between the seek point and @var{position} will be decoded and
discarded. When doing stream copy or when @option{-noaccurate_seek} is used, it
will be preserved.
@var{position}. When used as an output option (before an output filename),
decodes but discards input until the timestamps reach @var{position}. This is
slower, but more accurate.

When used as an output option (before an output filename), decodes but discards
input until the timestamps reach @var{position}.

@var{position} must be a time duration specification,
see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.

@item -sseof @var{position} (@emph{input/output})

Like the @code{-ss} option but relative to the "end of file". That is negative
values are earlier in the file, 0 is at EOF.
@var{position} may be either in seconds or in @code{hh:mm:ss[.xxx]} form.

@item -itsoffset @var{offset} (@emph{input})
Set the input time offset.
Set the input time offset in seconds.
@code{[-]hh:mm:ss[.xxx]} syntax is also supported.
The offset is added to the timestamps of the input files.
Specifying a positive offset means that the corresponding
streams are delayed by @var{offset} seconds.

@var{offset} must be a time duration specification,
see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.

The offset is added to the timestamps of the input files. Specifying
a positive offset means that the corresponding streams are delayed by
the time duration specified in @var{offset}.

@item -timestamp @var{date} (@emph{output})
@item -timestamp @var{time} (@emph{output})
Set the recording timestamp in the container.

@var{date} must be a date specification,
see @ref{date syntax,,the Date section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
The syntax for @var{time} is:
@example
now|([(YYYY-MM-DD|YYYYMMDD)[T|t| ]]((HH:MM:SS[.m...])|(HHMMSS[.m...]))[Z|z])
@end example
If the value is "now" it takes the current time.
Time is local time unless 'Z' or 'z' is appended, in which case it is
interpreted as UTC.
If the year-month-day part is not specified it takes the current
year-month-day.

@item -metadata[:metadata_specifier] @var{key}=@var{value} (@emph{output,per-metadata})
Set a metadata key/value pair.
@@ -348,7 +311,7 @@ ffmpeg -i in.avi -metadata title="my title" out.flv

To set the language of the first audio stream:
@example
ffmpeg -i INPUT -metadata:s:a:0 language=eng OUTPUT
ffmpeg -i INPUT -metadata:s:a:1 language=eng OUTPUT
@end example

@item -target @var{type} (@emph{output})
@@ -369,47 +332,36 @@ ffmpeg -i myfile.avi -target vcd -bf 2 /tmp/vcd.mpg
@end example

@item -dframes @var{number} (@emph{output})
Set the number of data frames to output. This is an alias for @code{-frames:d}.
Set the number of data frames to record. This is an alias for @code{-frames:d}.

@item -frames[:@var{stream_specifier}] @var{framecount} (@emph{output,per-stream})
Stop writing to the stream after @var{framecount} frames.

@item -q[:@var{stream_specifier}] @var{q} (@emph{output,per-stream})
@itemx -qscale[:@var{stream_specifier}] @var{q} (@emph{output,per-stream})
Use fixed quality scale (VBR). The meaning of @var{q}/@var{qscale} is
Use fixed quality scale (VBR). The meaning of @var{q} is
codec-dependent.
If @var{qscale} is used without a @var{stream_specifier} then it applies only
to the video stream, this is to maintain compatibility with previous behavior
and as specifying the same codec specific value to 2 different codecs that is
audio and video generally is not what is intended when no stream_specifier is
used.

@anchor{filter_option}
@item -filter[:@var{stream_specifier}] @var{filtergraph} (@emph{output,per-stream})
Create the filtergraph specified by @var{filtergraph} and use it to
@item -filter[:@var{stream_specifier}] @var{filter_graph} (@emph{output,per-stream})
Create the filter graph specified by @var{filter_graph} and use it to
filter the stream.

@var{filtergraph} is a description of the filtergraph to apply to
@var{filter_graph} is a description of the filter graph to apply to
the stream, and must have a single input and a single output of the
same type of the stream. In the filtergraph, the input is associated
same type of the stream. In the filter graph, the input is associated
to the label @code{in}, and the output to the label @code{out}. See
the ffmpeg-filters manual for more information about the filtergraph
syntax.

See the @ref{filter_complex_option,,-filter_complex option} if you
want to create filtergraphs with multiple inputs and/or outputs.

@item -filter_script[:@var{stream_specifier}] @var{filename} (@emph{output,per-stream})
This option is similar to @option{-filter}, the only difference is that its
argument is the name of the file from which a filtergraph description is to be
read.
want to create filter graphs with multiple inputs and/or outputs.

@item -pre[:@var{stream_specifier}] @var{preset_name} (@emph{output,per-stream})
Specify the preset for matching stream(s).

@item -stats (@emph{global})
Print encoding progress/statistics. It is on by default, to explicitly
disable it you need to specify @code{-nostats}.
Print encoding progress/statistics. On by default.

@item -progress @var{url} (@emph{global})
Send program-friendly progress information to @var{url}.
@@ -470,24 +422,18 @@ Technical note -- attachments are implemented as codec extradata, so this
option can actually be used to extract extradata from any stream, not just
attachments.

@item -noautorotate
Disable automatically rotating video based on file metadata.

@end table
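
Putting a few of the main options described above together, a hedged cutting
example might look like this; the times, title and file names are illustrative
only:

@example
ffmpeg -ss 00:01:00 -i input.mp4 -t 30 -c copy -metadata title="clip" output.mp4
@end example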

@section Video Options

@table @option
@item -vframes @var{number} (@emph{output})
Set the number of video frames to output. This is an alias for @code{-frames:v}.
Set the number of video frames to record. This is an alias for @code{-frames:v}.
@item -r[:@var{stream_specifier}] @var{fps} (@emph{input/output,per-stream})
Set frame rate (Hz value, fraction or abbreviation).

As an input option, ignore any timestamps stored in the file and instead
generate timestamps assuming constant frame rate @var{fps}.
This is not the same as the @option{-framerate} option used for some input formats
like image2 or v4l2 (it used to be the same in older versions of FFmpeg).
If in doubt use @option{-framerate} instead of the input option @option{-r}.

As an output option, duplicate or drop input frames to achieve constant output
frame rate @var{fps}.
@@ -513,10 +459,6 @@ form @var{num}:@var{den}, where @var{num} and @var{den} are the
numerator and denominator of the aspect ratio. For example "4:3",
"16:9", "1.3333", and "1.7777" are valid argument values.

If used together with @option{-vcodec copy}, it will affect the aspect ratio
stored at container level, but not the aspect ratio stored in encoded
frames, if it exists.

@item -vn (@emph{output})
Disable video recording.

@@ -542,14 +484,17 @@ prefix is ``ffmpeg2pass''. The complete file name will be
@file{PREFIX-N.log}, where N is a number specific to the output
stream

@item -vf @var{filtergraph} (@emph{output})
Create the filtergraph specified by @var{filtergraph} and use it to
@item -vlang @var{code}
Set the ISO 639 language code (3 letters) of the current video stream.

@item -vf @var{filter_graph} (@emph{output})
Create the filter graph specified by @var{filter_graph} and use it to
filter the stream.

This is an alias for @code{-filter:v}, see the @ref{filter_option,,-filter option}.
@end table

@section Advanced Video options
@section Advanced Video Options

@table @option
@item -pix_fmt[:@var{stream_specifier}] @var{format} (@emph{input/output,per-stream})
@@ -559,7 +504,7 @@ If the selected pixel format can not be selected, ffmpeg will print a
warning and select the best pixel format supported by the encoder.
If @var{pix_fmt} is prefixed by a @code{+}, ffmpeg will exit with an error
if the requested pixel format can not be selected, and automatic conversions
inside filtergraphs are disabled.
inside filter graphs are disabled.
If @var{pix_fmt} is a single @code{+}, ffmpeg selects the same pixel format
as the input (or graph output) and automatic conversions are disabled.

@@ -574,6 +519,10 @@ list separated with slashes. Two first values are the beginning and
end frame numbers, last one is quantizer to use if positive, or quality
factor if negative.

@item -deinterlace
Deinterlace pictures.
This option is deprecated since the deinterlacing is very low quality.
Use the yadif filter with @code{-filter:v yadif}.
@item -ilme
Force interlacing support in encoder (MPEG-2 and MPEG-4 only).
Use this option if your input file is interlaced and you want
@@ -652,63 +601,13 @@ would be more efficient.
@item -copyinkf[:@var{stream_specifier}] (@emph{output,per-stream})
When doing stream copy, copy also non-key frames found at the
beginning.

@item -hwaccel[:@var{stream_specifier}] @var{hwaccel} (@emph{input,per-stream})
Use hardware acceleration to decode the matching stream(s). The allowed values
of @var{hwaccel} are:
@table @option
@item none
Do not use any hardware acceleration (the default).

@item auto
Automatically select the hardware acceleration method.

@item vda
Use Apple VDA hardware acceleration.

@item vdpau
Use VDPAU (Video Decode and Presentation API for Unix) hardware acceleration.

@item dxva2
Use DXVA2 (DirectX Video Acceleration) hardware acceleration.
@end table

This option has no effect if the selected hwaccel is not available or not
supported by the chosen decoder.

Note that most acceleration methods are intended for playback and will not be
faster than software decoding on modern CPUs. Additionally, @command{ffmpeg}
will usually need to copy the decoded frames from the GPU memory into the system
memory, resulting in further performance loss. This option is thus mainly
useful for testing.

@item -hwaccel_device[:@var{stream_specifier}] @var{hwaccel_device} (@emph{input,per-stream})
Select a device to use for hardware acceleration.

This option only makes sense when the @option{-hwaccel} option is also
specified. Its exact meaning depends on the specific hardware acceleration
method chosen.

@table @option
@item vdpau
For VDPAU, this option specifies the X11 display/screen to use. If this option
is not specified, the value of the @var{DISPLAY} environment variable is used

@item dxva2
For DXVA2, this option should contain the number of the display adapter to use.
If this option is not specified, the default adapter is used.
@end table

@item -hwaccels
List all hardware acceleration methods supported in this build of ffmpeg.

@end table
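
A hedged decode-only test of the hwaccel options described above; whether the
chosen method is usable depends on the build and platform, and the input name
is a placeholder:

@example
ffmpeg -hwaccel vdpau -i input.mp4 -f null -
@end example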

@section Audio Options

@table @option
@item -aframes @var{number} (@emph{output})
Set the number of audio frames to output. This is an alias for @code{-frames:a}.
Set the number of audio frames to record. This is an alias for @code{-frames:a}.
@item -ar[:@var{stream_specifier}] @var{freq} (@emph{input/output,per-stream})
Set the audio sampling frequency. For output streams it is set by
default to the frequency of the corresponding input stream. For input
@@ -729,14 +628,14 @@ Set the audio codec. This is an alias for @code{-codec:a}.
Set the audio sample format. Use @code{-sample_fmts} to get a list
of supported sample formats.

@item -af @var{filtergraph} (@emph{output})
Create the filtergraph specified by @var{filtergraph} and use it to
@item -af @var{filter_graph} (@emph{output})
Create the filter graph specified by @var{filter_graph} and use it to
filter the stream.

This is an alias for @code{-filter:a}, see the @ref{filter_option,,-filter option}.
@end table
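
A small hedged example combining the audio options above; the sample rate,
sample format and file names are illustrative assumptions:

@example
ffmpeg -i input.mov -ar 48000 -sample_fmt s16 -ac 2 output.wav
@end example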
|
||||
|
||||
@section Advanced Audio options
|
||||
@section Advanced Audio options:
|
||||
|
||||
@table @option
|
||||
@item -atag @var{fourcc/tag} (@emph{output})
|
||||
@@ -751,9 +650,11 @@ stereo but not 6 channels as 5.1. The default is to always try to guess. Use
|
||||
0 to disable all guessing.
|
||||
@end table
|
||||
|
||||
@section Subtitle options
|
||||
@section Subtitle options:
|
||||
|
||||
@table @option
|
||||
@item -slang @var{code}
|
||||
Set the ISO 639 language code (3 letters) of the current subtitle stream.
|
||||
@item -scodec @var{codec} (@emph{input/output})
|
||||
Set the subtitle codec. This is an alias for @code{-codec:s}.
|
||||
@item -sn (@emph{output})
|
||||
@@ -762,7 +663,7 @@ Disable subtitle recording.
|
||||
Deprecated, see -bsf
|
||||
@end table
|
||||
|
||||
@section Advanced Subtitle options
|
||||
@section Advanced Subtitle options:
|
||||
|
||||
@table @option
|
||||
|
||||
@@ -779,9 +680,6 @@ Note that this option will delay the output of all data until the next
|
||||
subtitle packet is decoded: it may increase memory consumption and latency a
|
||||
lot.
|
||||
|
||||
@item -canvas_size @var{size}
|
||||
Set the size of the canvas used to render subtitles.
|
||||
|
||||
@end table
|
||||
|
||||
@section Advanced options
|
||||
@@ -840,21 +738,8 @@ To map all the streams except the second audio, use negative mappings
|
||||
ffmpeg -i INPUT -map 0 -map -0:a:1 OUTPUT
|
||||
@end example
|
||||
|
||||
To pick the English audio stream:
|
||||
@example
|
||||
ffmpeg -i INPUT -map 0:m:language:eng OUTPUT
|
||||
@end example
|
||||
|
||||
Note that using this option disables the default mappings for this output file.
|
||||
|
||||
@item -ignore_unknown
|
||||
Ignore input streams with unknown type instead of failing if copying
|
||||
such streams is attempted.
|
||||
|
||||
@item -copy_unknown
|
||||
Allow input streams with unknown type to be copied instead of failing if copying
|
||||
such streams is attempted.
|
||||
|
||||
@item -map_channel [@var{input_file_id}.@var{stream_specifier}.@var{channel_id}|-1][:@var{output_file_id}.@var{stream_specifier}]
|
||||
Map an audio channel from a given input to an output. If
|
||||
@var{output_file_id}.@var{stream_specifier} is not set, the audio channel will
|
||||
@@ -974,12 +859,13 @@ Dump each input packet to stderr.
|
||||
When dumping packets, also dump the payload.
|
||||
@item -re (@emph{input})
|
||||
Read input at native frame rate. Mainly used to simulate a grab device.
|
||||
or live input stream (e.g. when reading from a file). Should not be used
|
||||
with actual grab devices or live input streams (where it can cause packet
|
||||
loss).
|
||||
By default @command{ffmpeg} attempts to read the input(s) as fast as possible.
|
||||
This option will slow down the reading of the input(s) to the native frame rate
|
||||
of the input(s). It is useful for real-time output (e.g. live streaming).
|
||||
of the input(s). It is useful for real-time output (e.g. live streaming). If
|
||||
your input(s) is coming from some other live streaming source (through HTTP or
|
||||
UDP for example) the server might already be in real-time, thus the option will
|
||||
likely not be required. On the other hand, this is meaningful if your input(s)
|
||||
is a file you are trying to push in real-time.
|
||||
@item -loop_input
|
||||
Loop over the input stream. Currently it works only for image
|
||||
streams. This option is used for automatic FFserver testing.
|
||||
@@ -998,7 +884,7 @@ Newly added values will have to be specified as strings always.
|
||||
Each frame is passed with its timestamp from the demuxer to the muxer.
|
||||
@item 1, cfr
|
||||
Frames will be duplicated and dropped to achieve exactly the requested
|
||||
constant frame rate.
|
||||
constant framerate.
|
||||
@item 2, vfr
|
||||
Frames are passed through with their timestamp or dropped so as to
|
||||
prevent 2 frames from having the same timestamp.
|
||||
@@ -1010,31 +896,15 @@ Chooses between 1 and 2 depending on muxer capabilities. This is the
|
||||
default method.
|
||||
@end table
|
||||
|
||||
Note that the timestamps may be further modified by the muxer, after this.
|
||||
For example, in the case that the format option @option{avoid_negative_ts}
|
||||
is enabled.
|
||||
|
||||
With -map you can select from which stream the timestamps should be
|
||||
taken. You can leave either video or audio unchanged and sync the
|
||||
remaining stream(s) to the unchanged one.
|
||||
|
||||
@item -frame_drop_threshold @var{parameter}
|
||||
Frame drop threshold, which specifies how much behind video frames can
|
||||
be before they are dropped. In frame rate units, so 1.0 is one frame.
|
||||
The default is -1.1. One possible usecase is to avoid framedrops in case
|
||||
of noisy timestamps or to increase frame drop precision in case of exact
|
||||
timestamps.
|
||||
|
||||
@item -async @var{samples_per_second}
|
||||
Audio sync method. "Stretches/squeezes" the audio stream to match the timestamps,
|
||||
the parameter is the maximum samples per second by which the audio is changed.
|
||||
-async 1 is a special case where only the start of the audio stream is corrected
|
||||
without any later correction.
|
||||
|
||||
Note that the timestamps may be further modified by the muxer, after this.
|
||||
For example, in the case that the format option @option{avoid_negative_ts}
|
||||
is enabled.
|
||||
|
||||
This option has been deprecated. Use the @code{aresample} audio filter instead.

@item -copyts
@@ -1043,16 +913,9 @@ to sanitize them. In particular, do not remove the initial start time
offset value.

Note that, depending on the @option{vsync} option or on specific muxer
processing (e.g. in case the format option @option{avoid_negative_ts}
is enabled) the output timestamps may mismatch with the input
processing, the output timestamps may mismatch with the input
timestamps even when this option is selected.

@item -start_at_zero
When used with @option{copyts}, shift input timestamps so they start at zero.

This means that using e.g. @code{-ss 50} will make output timestamps start at
50 seconds, regardless of what timestamp the input file started at.
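
For example (input/output names hypothetical), keeping the original timestamps
while shifting them so the output starts at zero:
@example
ffmpeg -ss 50 -i input.ts -copyts -start_at_zero -c copy output.ts
@end example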

@item -copytb @var{mode}
Specify how to set the encoder timebase when stream copying. @var{mode} is an
integer numeric value, and can assume one of the following values:
@@ -1108,7 +971,7 @@ ffmpeg -i h264.mp4 -c:v copy -bsf:v h264_mp4toannexb -an out.h264
ffmpeg -i file.mov -an -vn -bsf:s mov2textsub -c:s copy -f rawvideo sub.txt
@end example

@item -tag[:@var{stream_specifier}] @var{codec_tag} (@emph{input/output,per-stream})
@item -tag[:@var{stream_specifier}] @var{codec_tag} (@emph{per-stream})
Force a tag/fourcc for matching streams.
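
For example (file names hypothetical), forcing the @code{XVID} fourcc on a
stream-copied video track:
@example
ffmpeg -i input.avi -c:v copy -tag:v XVID output.avi
@end example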

@item -timecode @var{hh}:@var{mm}:@var{ss}SEP@var{ff}
@@ -1120,10 +983,10 @@ ffmpeg -i input.mpg -timecode 01:02:03.04 -r 30000/1001 -s ntsc output.mpg

@anchor{filter_complex_option}
@item -filter_complex @var{filtergraph} (@emph{global})
Define a complex filtergraph, i.e. one with arbitrary number of inputs and/or
Define a complex filter graph, i.e. one with arbitrary number of inputs and/or
outputs. For simple graphs -- those with one input and one output of the same
type -- see the @option{-filter} options. @var{filtergraph} is a description of
the filtergraph, as described in the ``Filtergraph syntax'' section of the
the filter graph, as described in the ``Filtergraph syntax'' section of the
ffmpeg-filters manual.

Input link labels must refer to input streams using the
@@ -1165,77 +1028,6 @@ To generate 5 seconds of pure red video using lavfi @code{color} source:
@example
ffmpeg -filter_complex 'color=c=red' -t 5 out.mkv
@end example

@item -lavfi @var{filtergraph} (@emph{global})
Define a complex filtergraph, i.e. one with arbitrary number of inputs and/or
outputs. Equivalent to @option{-filter_complex}.

@item -filter_complex_script @var{filename} (@emph{global})
This option is similar to @option{-filter_complex}, the only difference is that
its argument is the name of the file from which a complex filtergraph
description is to be read.
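
As a sketch (the file name and link labels are hypothetical, not from this
patch): if @file{graph.txt} contains the single line @code{[0:v]hflip[flipped]},
the graph can be applied with:
@example
ffmpeg -i input.mkv -filter_complex_script graph.txt -map "[flipped]" -map 0:a output.mkv
@end example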

@item -accurate_seek (@emph{input})
This option enables or disables accurate seeking in input files with the
@option{-ss} option. It is enabled by default, so seeking is accurate when
transcoding. Use @option{-noaccurate_seek} to disable it, which may be useful
e.g. when copying some streams and transcoding the others.
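
For example (file names hypothetical), seeking only to the nearest seek point
before the requested position instead of the exact position:
@example
ffmpeg -ss 00:01:23 -noaccurate_seek -i input.mkv -c copy output.mkv
@end example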

@item -seek_timestamp (@emph{input})
This option enables or disables seeking by timestamp in input files with the
@option{-ss} option. It is disabled by default. If enabled, the argument
to the @option{-ss} option is considered an actual timestamp, and is not
offset by the start time of the file. This matters only for files which do
not start from timestamp 0, such as transport streams.

@item -thread_queue_size @var{size} (@emph{input})
This option sets the maximum number of queued packets when reading from the
file or device. With low latency / high rate live streams, packets may be
discarded if they are not read in a timely manner; raising this value can
avoid it.
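
A possible use (the multicast address is a placeholder): enlarging the packet
queue when capturing a high-rate UDP stream:
@example
ffmpeg -thread_queue_size 1024 -i udp://239.0.0.1:1234 -c copy output.ts
@end example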

@item -override_ffserver (@emph{global})
Overrides the input specifications from @command{ffserver}. Using this
option you can map any input stream to @command{ffserver} and control
many aspects of the encoding from @command{ffmpeg}. Without this
option @command{ffmpeg} will transmit to @command{ffserver} what is
requested by @command{ffserver}.

The option is intended for cases where features are needed that cannot be
specified to @command{ffserver} but can be to @command{ffmpeg}.

@item -sdp_file @var{file} (@emph{global})
Print sdp information for an output stream to @var{file}.
This allows dumping sdp information when at least one output isn't an
rtp stream. (Requires at least one of the output formats to be rtp).

@item -discard (@emph{input})
Allows discarding specific streams or frames of streams at the demuxer.
Not all demuxers support this.

@table @option
@item none
Discard no frame.

@item default
Default, which discards no frames.

@item noref
Discard all non-reference frames.

@item bidir
Discard all bidirectional frames.

@item nokey
Discard all frames except keyframes.

@item all
Discard all frames.
@end table
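
For example (file names hypothetical), keeping only key frames at the demuxer
level while stream copying:
@example
ffmpeg -discard nokey -i input.mp4 -c copy keyframes.mp4
@end example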

@item -xerror (@emph{global})
Stop and exit on error

@end table

As a special exception, you can use a bitmap subtitle stream as input: it
@@ -1261,10 +1053,7 @@ awkward to specify on the command line. Lines starting with the hash
('#') character are ignored and are used to provide comments. Check
the @file{presets} directory in the FFmpeg source tree for examples.

There are two types of preset files: ffpreset and avpreset files.

@subsection ffpreset files
ffpreset files are specified with the @code{vpre}, @code{apre},
Preset files are specified with the @code{vpre}, @code{apre},
@code{spre}, and @code{fpre} options. The @code{fpre} option takes the
filename of the preset instead of a preset name as input and can be
used for any kind of codec. For the @code{vpre}, @code{apre}, and
@@ -1289,31 +1078,67 @@ directories, where @var{codec_name} is the name of the codec to which
the preset file options will be applied. For example, if you select
the video codec with @code{-vcodec libvpx} and use @code{-vpre 1080p},
then it will search for the file @file{libvpx-1080p.ffpreset}.

@subsection avpreset files
avpreset files are specified with the @code{pre} option. They work similar to
ffpreset files, but they only allow encoder-specific options. Therefore, an
@var{option}=@var{value} pair specifying an encoder cannot be used.

When the @code{pre} option is specified, ffmpeg will look for files with the
suffix .avpreset in the directories @file{$AVCONV_DATADIR} (if set), and
@file{$HOME/.avconv}, and in the datadir defined at configuration time (usually
@file{PREFIX/share/ffmpeg}), in that order.

First ffmpeg searches for a file named @var{codec_name}-@var{arg}.avpreset in
the above-mentioned directories, where @var{codec_name} is the name of the codec
to which the preset file options will be applied. For example, if you select the
video codec with @code{-vcodec libvpx} and use @code{-pre 1080p}, then it will
search for the file @file{libvpx-1080p.avpreset}.

If no such file is found, then ffmpeg will search for a file named
@var{arg}.avpreset in the same directories.
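
For example (this assumes a preset file @file{libvpx-1080p.avpreset} actually
exists in one of the directories listed above):
@example
ffmpeg -i input.mkv -c:v libvpx -pre 1080p output.webm
@end example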

@c man end OPTIONS

@chapter Tips
@c man begin TIPS

@itemize
@item
For streaming at very low bitrates, use a low frame rate
and a small GOP size. This is especially true for RealVideo where
the Linux player does not seem to be very fast, so it can miss
frames. An example is:

@example
ffmpeg -g 3 -r 3 -t 10 -b:v 50k -s qcif -f rv10 /tmp/b.rm
@end example

@item
The parameter 'q' which is displayed while encoding is the current
quantizer. The value 1 indicates that a very good quality could
be achieved. The value 31 indicates the worst quality. If q=31 appears
too often, it means that the encoder cannot compress enough to meet
your bitrate. You must either increase the bitrate, decrease the
frame rate or decrease the frame size.

@item
If your computer is not fast enough, you can speed up the
compression at the expense of the compression ratio. You can use
'-me zero' to speed up motion estimation, and '-g 0' to disable
motion estimation completely (you have only I-frames, which means it
is about as good as JPEG compression).

@item
To have very low audio bitrates, reduce the sampling frequency
(down to 22050 Hz for MPEG audio, 22050 or 11025 for AC-3).

@item
To have a constant quality (but a variable bitrate), use the option
'-qscale n' when 'n' is between 1 (excellent quality) and 31 (worst
quality).

@end itemize
@c man end TIPS

@chapter Examples
@c man begin EXAMPLES

@section Preset files

A preset file contains a sequence of @var{option=value} pairs, one for
each line, specifying a sequence of options which can be specified also on
the command line. Lines starting with the hash ('#') character are ignored and
are used to provide comments. Empty lines are also ignored. Check the
@file{presets} directory in the FFmpeg source tree for examples.

Preset files are specified with the @code{pre} option, this option takes a
preset name as input. FFmpeg searches for a file named @var{preset_name}.avpreset in
the directories @file{$AVCONV_DATADIR} (if set), and @file{$HOME/.ffmpeg}, and in
the data directory defined at configuration time (usually @file{$PREFIX/share/ffmpeg})
in that order. For example, if the argument is @code{libx264-max}, it will
search for the file @file{libx264-max.avpreset}.
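
As an illustration only (the option values are hypothetical, not taken from this
patch), such a preset file contains plain @var{option=value} lines, for example:
@example
crf=18
preset=slow
@end example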

@section Video and Audio grabbing

If you specify the input format and device then ffmpeg can grab video
@@ -1339,14 +1164,14 @@ standard mixer.
Grab the X11 display with ffmpeg via

@example
ffmpeg -f x11grab -video_size cif -framerate 25 -i :0.0 /tmp/out.mpg
ffmpeg -f x11grab -s cif -r 25 -i :0.0 /tmp/out.mpg
@end example

0.0 is display.screen number of your X11 server, same as
the DISPLAY environment variable.

@example
ffmpeg -f x11grab -video_size cif -framerate 25 -i :0.0+10,20 /tmp/out.mpg
ffmpeg -f x11grab -s cif -r 25 -i :0.0+10,20 /tmp/out.mpg
@end example

0.0 is display.screen number of your X11 server, same as the DISPLAY environment
@@ -1461,7 +1286,7 @@ combination with -ss to start extracting from a certain point in time.

For creating a video from many images:
@example
ffmpeg -f image2 -framerate 12 -i foo-%03d.jpeg -s WxH foo.avi
ffmpeg -f image2 -i foo-%03d.jpeg -r 12 -s WxH foo.avi
@end example

The syntax @code{foo-%03d.jpeg} specifies to use a decimal number
@@ -1476,18 +1301,18 @@ image2-specific @code{-pattern_type glob} option.
For example, for creating a video from filenames matching the glob pattern
@code{foo-*.jpeg}:
@example
ffmpeg -f image2 -pattern_type glob -framerate 12 -i 'foo-*.jpeg' -s WxH foo.avi
ffmpeg -f image2 -pattern_type glob -i 'foo-*.jpeg' -r 12 -s WxH foo.avi
@end example

@item
You can put many streams of the same type in the output:

@example
ffmpeg -i test1.avi -i test2.avi -map 1:1 -map 1:0 -map 0:1 -map 0:0 -c copy -y test12.nut
ffmpeg -i test1.avi -i test2.avi -map 0:3 -map 0:2 -map 0:1 -map 0:0 -c copy test12.nut
@end example

The resulting output file @file{test12.nut} will contain the first four streams
from the input files in reverse order.
The resulting output file @file{test12.avi} will contain first four streams from
the input file in reverse order.

@item
To force CBR video output:
@@ -1505,42 +1330,9 @@ ffmpeg -i src.ext -lmax 21*QP2LAMBDA dst.ext
@end itemize
@c man end EXAMPLES

@include config.texi
@ifset config-all
@ifset config-avutil
@include utils.texi
@end ifset
@ifset config-avcodec
@include codecs.texi
@include bitstream_filters.texi
@end ifset
@ifset config-avformat
@include formats.texi
@include protocols.texi
@end ifset
@ifset config-avdevice
@include devices.texi
@end ifset
@ifset config-swresample
@include resampler.texi
@end ifset
@ifset config-swscale
@include scaler.texi
@end ifset
@ifset config-avfilter
@include filters.texi
@end ifset
@end ifset

@chapter See Also

@ifhtml
@ifset config-all
@url{ffmpeg.html,ffmpeg}
@end ifset
@ifset config-not-all
@url{ffmpeg-all.html,ffmpeg-all},
@end ifset
@url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe}, @url{ffserver.html,ffserver},
@url{ffmpeg-utils.html,ffmpeg-utils},
@url{ffmpeg-scaler.html,ffmpeg-scaler},
@@ -1554,12 +1346,6 @@ ffmpeg -i src.ext -lmax 21*QP2LAMBDA dst.ext
@end ifhtml

@ifnothtml
@ifset config-all
ffmpeg(1),
@end ifset
@ifset config-not-all
ffmpeg-all(1),
@end ifset
ffplay(1), ffprobe(1), ffserver(1),
ffmpeg-utils(1), ffmpeg-scaler(1), ffmpeg-resampler(1),
ffmpeg-codecs(1), ffmpeg-bitstream-filters(1), ffmpeg-formats(1),

153
doc/ffplay.texi
@@ -1,5 +1,4 @@
\input texinfo @c -*- texinfo -*-
@documentencoding UTF-8

@settitle ffplay Documentation
@titlepage
@@ -25,7 +24,7 @@ various FFmpeg APIs.
@chapter Options
@c man begin OPTIONS

@include fftools-common-opts.texi
@include avtools-common-opts.texi

@section Main options

@@ -38,26 +37,14 @@ Force displayed height.
Set frame size (WxH or abbreviation), needed for videos which do
not contain a header with the frame size like raw YUV. This option
has been deprecated in favor of private options, try -video_size.
@item -fs
Start in fullscreen mode.
@item -an
Disable audio.
@item -vn
Disable video.
@item -sn
Disable subtitles.
@item -ss @var{pos}
Seek to @var{pos}. Note that in most formats it is not possible to seek
exactly, so @command{ffplay} will seek to the nearest seek point to
@var{pos}.

@var{pos} must be a time duration specification,
see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
Seek to a given position in seconds.
@item -t @var{duration}
Play @var{duration} seconds of audio/video.

@var{duration} must be a time duration specification,
see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
play <duration> seconds of audio/video
@item -bytes
Seek by bytes.
@item -nodisp
@@ -86,26 +73,17 @@ Default value is "video", if video is not present or cannot be played
You can interactively cycle through the available show modes by
pressing the key @key{w}.

@item -vf @var{filtergraph}
Create the filtergraph specified by @var{filtergraph} and use it to
@item -vf @var{filter_graph}
Create the filter graph specified by @var{filter_graph} and use it to
filter the video stream.

@var{filtergraph} is a description of the filtergraph to apply to
@var{filter_graph} is a description of the filter graph to apply to
the stream, and must have a single video input and a single video
output. In the filtergraph, the input is associated to the label
output. In the filter graph, the input is associated to the label
@code{in}, and the output to the label @code{out}. See the
ffmpeg-filters manual for more information about the filtergraph
syntax.

You can specify this parameter multiple times and cycle through the specified
filtergraphs along with the show modes by pressing the key @key{w}.

@item -af @var{filtergraph}
@var{filtergraph} is a description of the filtergraph to apply to
the input audio.
Use the option "-filters" to show all the available filters (including
sources and sinks).
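
For example (file name hypothetical), playing a file with a scaled, mirrored
video stream and attenuated audio:
@example
ffplay -vf "scale=640:-2,hflip" -af "volume=0.5" input.mkv
@end example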

@item -i @var{input_file}
Read @var{input_file}.
@end table
@@ -115,17 +93,18 @@ Read @var{input_file}.
@item -pix_fmt @var{format}
Set pixel format.
This option has been deprecated in favor of private options, try -pixel_format.

@item -stats
Print several playback statistics, in particular show the stream
duration, the codec parameters, the current position in the stream and
the audio/video synchronisation drift. It is on by default, to
explicitly disable it you need to specify @code{-nostats}.

Show the stream duration, the codec parameters, the current position in
the stream and the audio/video synchronisation drift.
@item -bug
Work around bugs.
@item -fast
Non-spec-compliant optimizations.
@item -genpts
Generate pts.
@item -rtp_tcp
Force RTP/TCP protocol usage instead of RTP/UDP. It is only meaningful
if you are streaming with the RTSP protocol.
@item -sync @var{type}
Set the master clock to audio (@code{type=audio}), video
(@code{type=video}) or external (@code{type=ext}). Default is audio. The
@@ -133,20 +112,23 @@ master clock is used to control audio-video synchronization. Most media
players use audio as master clock, but in some cases (streaming or high
quality broadcast) it is necessary to change that. This option is mainly
used for debugging purposes.
@item -ast @var{audio_stream_specifier}
Select the desired audio stream using the given stream specifier. The stream
specifiers are described in the @ref{Stream specifiers} chapter. If this option
is not specified, the "best" audio stream is selected in the program of the
already selected video stream.
@item -vst @var{video_stream_specifier}
Select the desired video stream using the given stream specifier. The stream
specifiers are described in the @ref{Stream specifiers} chapter. If this option
is not specified, the "best" video stream is selected.
@item -sst @var{subtitle_stream_specifier}
Select the desired subtitle stream using the given stream specifier. The stream
specifiers are described in the @ref{Stream specifiers} chapter. If this option
is not specified, the "best" subtitle stream is selected in the program of the
already selected video or audio stream.
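
For example (file name hypothetical), selecting the second audio stream and the
first subtitle stream by stream specifier:
@example
ffplay -ast a:1 -sst s:0 input.mkv
@end example
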
@item -threads @var{count}
Set the thread count.
@item -ast @var{audio_stream_number}
Select the desired audio stream number, counting from 0. The number
refers to the list of all the input audio streams. If it is greater
than the number of audio streams minus one, then the last one is
selected, if it is negative the audio playback is disabled.
@item -vst @var{video_stream_number}
Select the desired video stream number, counting from 0. The number
refers to the list of all the input video streams. If it is greater
than the number of video streams minus one, then the last one is
selected, if it is negative the video playback is disabled.
@item -sst @var{subtitle_stream_number}
Select the desired subtitle stream number, counting from 0. The number
refers to the list of all the input subtitle streams. If it is greater
than the number of subtitle streams minus one, then the last one is
selected, if it is negative the subtitle rendering is disabled.
@item -autoexit
Exit when video is done playing.
@item -exitonkeydown
@@ -167,22 +149,6 @@ Force a specific video decoder.

@item -scodec @var{codec_name}
Force a specific subtitle decoder.

@item -autorotate
Automatically rotate the video according to file metadata. Enabled by
default, use @option{-noautorotate} to disable it.

@item -framedrop
Drop video frames if video is out of sync. Enabled by default if the master
clock is not set to video. Use this option to enable frame dropping for all
master clock sources, use @option{-noframedrop} to disable it.

@item -infbuf
Do not limit the input buffer size, read as much data as possible from the
input as soon as possible. Enabled by default for realtime streams, where data
may be dropped if not read in time. Use this option to enable infinite buffers
for all inputs, use @option{-noinfbuf} to disable it.

@end table

@section While playing
@@ -198,25 +164,16 @@ Toggle full screen.
Pause.

@item a
Cycle audio channel in the current program.
Cycle audio channel.

@item v
Cycle video channel.

@item t
Cycle subtitle channel in the current program.

@item c
Cycle program.
Cycle subtitle channel.

@item w
Cycle video filters or show modes.

@item s
Step to the next frame.

Pause if the stream is not already paused, step to the next video
frame, and pause.
Show audio waves.

@item left/right
Seek backward/forward 10 seconds.
@@ -225,8 +182,6 @@ Seek backward/forward 10 seconds.
Seek backward/forward 1 minute.

@item page down/page up
Seek to the previous/next chapter.
or if there are no chapters
Seek backward/forward 10 minutes.

@item mouse click
@@ -236,43 +191,9 @@ Seek to percentage in file corresponding to fraction of width.

@c man end

@include config.texi
@ifset config-all
@set config-readonly
@ifset config-avutil
@include utils.texi
@end ifset
@ifset config-avcodec
@include codecs.texi
@include bitstream_filters.texi
@end ifset
@ifset config-avformat
@include formats.texi
@include protocols.texi
@end ifset
@ifset config-avdevice
@include devices.texi
@end ifset
@ifset config-swresample
@include resampler.texi
@end ifset
@ifset config-swscale
@include scaler.texi
@end ifset
@ifset config-avfilter
@include filters.texi
@end ifset
@end ifset

@chapter See Also

@ifhtml
@ifset config-all
@url{ffplay.html,ffplay},
@end ifset
@ifset config-not-all
@url{ffplay-all.html,ffmpeg-all},
@end ifset
@url{ffmpeg.html,ffmpeg}, @url{ffprobe.html,ffprobe}, @url{ffserver.html,ffserver},
@url{ffmpeg-utils.html,ffmpeg-utils},
@url{ffmpeg-scaler.html,ffmpeg-scaler},
@@ -286,12 +207,6 @@ Seek to percentage in file corresponding to fraction of width.
@end ifhtml

@ifnothtml
@ifset config-all
ffplay(1),
@end ifset
@ifset config-not-all
ffplay-all(1),
@end ifset
ffmpeg(1), ffprobe(1), ffserver(1),
ffmpeg-utils(1), ffmpeg-scaler(1), ffmpeg-resampler(1),
ffmpeg-codecs(1), ffmpeg-bitstream-filters(1), ffmpeg-formats(1),

214
doc/ffprobe.texi
@@ -1,5 +1,4 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle ffprobe Documentation
|
||||
@titlepage
|
||||
@@ -45,15 +44,14 @@ name (which may be shared by other sections), and a unique
|
||||
name. See the output of @option{sections}.
|
||||
|
||||
Metadata tags stored in the container or in the streams are recognized
|
||||
and printed in the corresponding "FORMAT", "STREAM" or "PROGRAM_STREAM"
|
||||
section.
|
||||
and printed in the corresponding "FORMAT" or "STREAM" section.
|
||||
|
||||
@c man end
|
||||
|
||||
@chapter Options
|
||||
@c man begin OPTIONS
|
||||
|
||||
@include fftools-common-opts.texi
|
||||
@include avtools-common-opts.texi
|
||||
|
||||
@section Main options
|
||||
|
||||
@@ -114,16 +112,12 @@ ffprobe -show_packets -select_streams v:1 INPUT
|
||||
@end example
|
||||
|
||||
@item -show_data
|
||||
Show payload data, as a hexadecimal and ASCII dump. Coupled with
|
||||
Show payload data, as an hexadecimal and ASCII dump. Coupled with
|
||||
@option{-show_packets}, it will dump the packets' data. Coupled with
|
||||
@option{-show_streams}, it will dump the codec extradata.
|
||||
|
||||
The dump is printed as the "data" field. It may contain newlines.
|
||||
|
||||
@item -show_data_hash @var{algorithm}
|
||||
Show a hash of payload data, for packets with @option{-show_packets} and for
|
||||
codec extradata with @option{-show_streams}.
|
||||
|
||||
@item -show_error
|
||||
Show information about the error found when trying to probe the input.
|
||||
|
||||
@@ -185,7 +179,7 @@ format : stream=codec_type
|
||||
|
||||
To show all the tags in the stream and format sections:
|
||||
@example
|
||||
stream_tags : format_tags
|
||||
format_tags : format_tags
|
||||
@end example
|
||||
|
||||
To show only the @code{title} tag (if available) in the stream
|
||||
@@ -202,11 +196,11 @@ The information for each single packet is printed within a dedicated
|
||||
section with name "PACKET".
|
||||
|
||||
@item -show_frames
|
||||
Show information about each frame and subtitle contained in the input
|
||||
multimedia stream.
|
||||
Show information about each frame contained in the input multimedia
|
||||
stream.
|
||||
|
||||
The information for each single frame is printed within a dedicated
|
||||
section with name "FRAME" or "SUBTITLE".
|
||||
section with name "FRAME".
|
||||
|
||||
@item -show_streams
|
||||
Show information about each media stream contained in the input
|
||||
@@ -215,18 +209,6 @@ multimedia stream.
|
||||
Each media stream information is printed within a dedicated section
|
||||
with name "STREAM".
|
||||
|
||||
@item -show_programs
|
||||
Show information about programs and their streams contained in the input
|
||||
multimedia stream.
|
||||
|
||||
Each media stream information is printed within a dedicated section
|
||||
with name "PROGRAM_STREAM".
|
||||
|
||||
@item -show_chapters
|
||||
Show information about chapters stored in the format.
|
||||
|
||||
Each chapter is printed within a dedicated section with name "CHAPTER".
|
||||
|
||||
@item -count_frames
|
||||
Count the number of frames per stream and report it in the
|
||||
corresponding stream section.
|
||||
@@ -235,70 +217,6 @@ corresponding stream section.
|
||||
Count the number of packets per stream and report it in the
|
||||
corresponding stream section.
|
||||
|
||||
@item -read_intervals @var{read_intervals}
|
||||
|
||||
Read only the specified intervals. @var{read_intervals} must be a
|
||||
sequence of interval specifications separated by ",".
|
||||
@command{ffprobe} will seek to the interval starting point, and will
|
||||
continue reading from that.
|
||||
|
||||
Each interval is specified by two optional parts, separated by "%".
|
||||
|
||||
The first part specifies the interval start position. It is
|
||||
interpreted as an absolute position, or as a relative offset from the
|
||||
current position if it is preceded by the "+" character. If this first
|
||||
part is not specified, no seeking will be performed when reading this
|
||||
interval.
|
||||
|
||||
The second part specifies the interval end position. It is interpreted
|
||||
as an absolute position, or as a relative offset from the current
|
||||
position if it is preceded by the "+" character. If the offset
|
||||
specification starts with "#", it is interpreted as the number of
|
||||
packets to read (not including the flushing packets) from the interval
|
||||
start. If no second part is specified, the program will read until the
|
||||
end of the input.
|
||||
|
||||
Note that seeking is not accurate, thus the actual interval start
|
||||
point may be different from the specified position. Also, when an
|
||||
interval duration is specified, the absolute end time will be computed
|
||||
by adding the duration to the interval start point found by seeking
|
||||
the file, rather than to the specified start value.
|
||||
|
||||
The formal syntax is given by:
|
||||
@example
|
||||
@var{INTERVAL} ::= [@var{START}|+@var{START_OFFSET}][%[@var{END}|+@var{END_OFFSET}]]
|
||||
@var{INTERVALS} ::= @var{INTERVAL}[,@var{INTERVALS}]
|
||||
@end example
|
||||
|
||||
A few examples follow.
|
||||
@itemize
|
||||
@item
|
||||
Seek to time 10, read packets until 20 seconds after the found seek
|
||||
point, then seek to position @code{01:30} (1 minute and thirty
|
||||
seconds) and read packets until position @code{01:45}.
|
||||
@example
|
||||
10%+20,01:30%01:45
|
||||
@end example
|
||||
|
||||
@item
|
||||
Read only 42 packets after seeking to position @code{01:23}:
|
||||
@example
|
||||
01:23%+#42
|
||||
@end example
|
||||
|
||||
@item
|
||||
Read only the first 20 seconds from the start:
|
||||
@example
|
||||
%+20
|
||||
@end example
|
||||
|
||||
@item
|
||||
Read from the start until position @code{02:30}:
|
||||
@example
|
||||
%02:30
|
||||
@end example
|
||||
@end itemize
|
||||
|
||||
@item -show_private_data, -private
|
||||
Show private data, that is data depending on the format of the
|
||||
particular shown element.
|
||||
@@ -322,12 +240,6 @@ Show information related to program and library versions. This is the
|
||||
equivalent of setting both @option{-show_program_version} and
|
||||
@option{-show_library_versions} options.
|
||||
|
||||
@item -show_pixel_formats
|
||||
Show information about all pixel formats supported by FFmpeg.
|
||||
|
||||
Pixel format information for each format is printed within a section
|
||||
with name "PIXEL_FORMAT".
|
||||
|
||||
@item -bitexact
|
||||
Force bitexact output, useful to produce output which is not dependent
|
||||
on the specific build.
|
||||
@@ -348,39 +260,6 @@ A writer may accept one or more arguments, which specify the options
|
||||
to adopt. The options are specified as a list of @var{key}=@var{value}
|
||||
pairs, separated by ":".
|
||||
|
||||
All writers support the following options:
|
||||
|
||||
@table @option
|
||||
@item string_validation, sv
|
||||
Set string validation mode.
|
||||
|
||||
The following values are accepted.
|
||||
@table @samp
|
||||
@item fail
|
||||
The writer will fail immediately in case an invalid string (UTF-8)
|
||||
sequence or code point is found in the input. This is especially
|
||||
useful to validate input metadata.
|
||||
|
||||
@item ignore
|
||||
Any validation error will be ignored. This will result in possibly
|
||||
broken output, especially with the json or xml writer.
|
||||
|
||||
@item replace
|
||||
The writer will substitute invalid UTF-8 sequences or code points with
|
||||
the string specified with the @option{string_validation_replacement}.
|
||||
@end table
|
||||
|
||||
Default value is @samp{replace}.
|
||||
|
||||
@item string_validation_replacement, svr
|
||||
Set replacement string to use in case @option{string_validation} is
|
||||
set to @samp{replace}.
|
||||
|
||||
In case the option is not specified, the writer will assume the empty
|
||||
string, that is it will remove the invalid sequences from the input
|
||||
strings.
|
||||
@end table
|
||||
|
||||
A description of the currently available writers follows.
|
||||
|
||||
@section default
|
||||
@@ -395,8 +274,8 @@ keyN=valN
|
||||
[/SECTION]
|
||||
@end example
|
||||
|
||||
Metadata tags are printed as a line in the corresponding FORMAT, STREAM or
|
||||
PROGRAM_STREAM section, and are prefixed by the string "TAG:".
|
||||
Metadata tags are printed as a line in the corresponding FORMAT or
|
||||
STREAM section, and are prefixed by the string "TAG:".
|
||||
|
||||
A description of the accepted options follows.
|
||||
|
||||
@@ -447,17 +326,17 @@ writer).
|
||||
It can assume one of the following values:
|
||||
@table @option
|
||||
@item c
|
||||
Perform C-like escaping. Strings containing a newline (@samp{\n}), carriage
|
||||
return (@samp{\r}), a tab (@samp{\t}), a form feed (@samp{\f}), the escaping
|
||||
character (@samp{\}) or the item separator character @var{SEP} are escaped
|
||||
using C-like fashioned escaping, so that a newline is converted to the
|
||||
sequence @samp{\n}, a carriage return to @samp{\r}, @samp{\} to @samp{\\} and
|
||||
the separator @var{SEP} is converted to @samp{\@var{SEP}}.
|
||||
Perform C-like escaping. Strings containing a newline ('\n'), carriage
|
||||
return ('\r'), a tab ('\t'), a form feed ('\f'), the escaping
|
||||
character ('\') or the item separator character @var{SEP} are escaped using C-like fashioned
|
||||
escaping, so that a newline is converted to the sequence "\n", a
|
||||
carriage return to "\r", '\' to "\\" and the separator @var{SEP} is
|
||||
converted to "\@var{SEP}".
|
||||
|
||||
@item csv
|
||||
Perform CSV-like escaping, as described in RFC4180. Strings
|
||||
containing a newline (@samp{\n}), a carriage return (@samp{\r}), a double quote
|
||||
(@samp{"}), or @var{SEP} are enclosed in double-quotes.
|
||||
containing a newline ('\n'), a carriage return ('\r'), a double quote
|
||||
('"'), or @var{SEP} are enclosed in double-quotes.
|
||||
|
||||
@item none
|
||||
Perform no escaping.
|
||||
@@ -485,7 +364,7 @@ The description of the accepted options follows.
|
||||
Separator character used to separate the chapter, the section name, IDs and
|
||||
potential tags in the printed field key.
|
||||
|
||||
Default value is @samp{.}.
|
||||
Default value is '.'.
|
||||
|
||||
@item hierarchical, h
|
||||
Specify if the section name specification should be hierarchical. If
|
||||
@@ -507,22 +386,21 @@ The following conventions are adopted:
|
||||
@item
|
||||
all key and values are UTF-8
|
||||
@item
|
||||
@samp{.} is the subgroup separator
|
||||
'.' is the subgroup separator
|
||||
@item
|
||||
newline, @samp{\t}, @samp{\f}, @samp{\b} and the following characters are
|
||||
escaped
|
||||
newline, '\t', '\f', '\b' and the following characters are escaped
|
||||
@item
|
||||
@samp{\} is the escape character
|
||||
'\' is the escape character
|
||||
@item
|
||||
@samp{#} is the comment indicator
|
||||
'#' is the comment indicator
|
||||
@item
|
||||
@samp{=} is the key/value separator
|
||||
'=' is the key/value separator
|
||||
@item
|
||||
@samp{:} is not used but usually parsed as key/value separator
|
||||
':' is not used but usually parsed as key/value separator
|
||||
@end itemize
|
||||
|
||||
This writer accepts options as a list of @var{key}=@var{value} pairs,
|
||||
separated by @samp{:}.
|
||||
separated by ":".
|
||||
|
||||
The description of the accepted options follows.
|
||||
|
||||
@@ -609,44 +487,10 @@ DV, GXF and AVI timecodes are available in format metadata
|
||||
@end itemize
|
||||
@c man end TIMECODE
|
||||
|
||||
@include config.texi
|
||||
@ifset config-all
|
||||
@set config-readonly
|
||||
@ifset config-avutil
|
||||
@include utils.texi
|
||||
@end ifset
|
||||
@ifset config-avcodec
|
||||
@include codecs.texi
|
||||
@include bitstream_filters.texi
|
||||
@end ifset
|
||||
@ifset config-avformat
|
||||
@include formats.texi
|
||||
@include protocols.texi
|
||||
@end ifset
|
||||
@ifset config-avdevice
|
||||
@include devices.texi
|
||||
@end ifset
|
||||
@ifset config-swresample
|
||||
@include resampler.texi
|
||||
@end ifset
|
||||
@ifset config-swscale
|
||||
@include scaler.texi
|
||||
@end ifset
|
||||
@ifset config-avfilter
|
||||
@include filters.texi
|
||||
@end ifset
|
||||
@end ifset
|
||||
|
||||
@chapter See Also
|
||||
|
||||
@ifhtml
|
||||
@ifset config-all
|
||||
@url{ffprobe.html,ffprobe},
|
||||
@end ifset
|
||||
@ifset config-not-all
|
||||
@url{ffprobe-all.html,ffprobe-all},
|
||||
@end ifset
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffserver.html,ffserver},
|
||||
@url{ffplay.html,ffmpeg}, @url{ffprobe.html,ffprobe}, @url{ffserver.html,ffserver},
|
||||
@url{ffmpeg-utils.html,ffmpeg-utils},
|
||||
@url{ffmpeg-scaler.html,ffmpeg-scaler},
|
||||
@url{ffmpeg-resampler.html,ffmpeg-resampler},
|
||||
@@ -659,12 +503,6 @@ DV, GXF and AVI timecodes are available in format metadata
|
||||
@end ifhtml
|
||||
|
||||
@ifnothtml
|
||||
@ifset config-all
|
||||
ffprobe(1),
|
||||
@end ifset
|
||||
@ifset config-not-all
|
||||
ffprobe-all(1),
|
||||
@end ifset
|
||||
ffmpeg(1), ffplay(1), ffserver(1),
|
||||
ffmpeg-utils(1), ffmpeg-scaler(1), ffmpeg-resampler(1),
|
||||
ffmpeg-codecs(1), ffmpeg-bitstream-filters(1), ffmpeg-formats(1),
|
||||
|
||||
175
doc/ffprobe.xsd
@@ -8,17 +8,13 @@
|
||||
|
||||
<xsd:complexType name="ffprobeType">
|
||||
<xsd:sequence>
|
||||
<xsd:element name="program_version" type="ffprobe:programVersionType" minOccurs="0" maxOccurs="1" />
|
||||
<xsd:element name="library_versions" type="ffprobe:libraryVersionsType" minOccurs="0" maxOccurs="1" />
|
||||
<xsd:element name="pixel_formats" type="ffprobe:pixelFormatsType" minOccurs="0" maxOccurs="1" />
|
||||
<xsd:element name="packets" type="ffprobe:packetsType" minOccurs="0" maxOccurs="1" />
|
||||
<xsd:element name="frames" type="ffprobe:framesType" minOccurs="0" maxOccurs="1" />
|
||||
<xsd:element name="packets_and_frames" type="ffprobe:packetsAndFramesType" minOccurs="0" maxOccurs="1" />
|
||||
<xsd:element name="programs" type="ffprobe:programsType" minOccurs="0" maxOccurs="1" />
|
||||
<xsd:element name="streams" type="ffprobe:streamsType" minOccurs="0" maxOccurs="1" />
|
||||
<xsd:element name="chapters" type="ffprobe:chaptersType" minOccurs="0" maxOccurs="1" />
|
||||
<xsd:element name="format" type="ffprobe:formatType" minOccurs="0" maxOccurs="1" />
|
||||
<xsd:element name="error" type="ffprobe:errorType" minOccurs="0" maxOccurs="1" />
|
||||
<xsd:element name="program_version" type="ffprobe:programVersionType" minOccurs="0" maxOccurs="1" />
|
||||
<xsd:element name="library_versions" type="ffprobe:libraryVersionsType" minOccurs="0" maxOccurs="1" />
|
||||
</xsd:sequence>
|
||||
</xsd:complexType>
|
||||
|
||||
@@ -30,29 +26,11 @@
|
||||
|
||||
<xsd:complexType name="framesType">
|
||||
<xsd:sequence>
|
||||
<xsd:choice minOccurs="0" maxOccurs="unbounded">
|
||||
<xsd:element name="frame" type="ffprobe:frameType" minOccurs="0" maxOccurs="unbounded"/>
|
||||
<xsd:element name="subtitle" type="ffprobe:subtitleType" minOccurs="0" maxOccurs="unbounded"/>
|
||||
</xsd:choice>
|
||||
</xsd:sequence>
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="packetsAndFramesType">
|
||||
<xsd:sequence>
|
||||
<xsd:choice minOccurs="0" maxOccurs="unbounded">
|
||||
<xsd:element name="packet" type="ffprobe:packetType" minOccurs="0" maxOccurs="unbounded"/>
|
||||
<xsd:element name="frame" type="ffprobe:frameType" minOccurs="0" maxOccurs="unbounded"/>
|
||||
<xsd:element name="subtitle" type="ffprobe:subtitleType" minOccurs="0" maxOccurs="unbounded"/>
|
||||
</xsd:choice>
|
||||
<xsd:element name="frame" type="ffprobe:frameType" minOccurs="0" maxOccurs="unbounded"/>
|
||||
</xsd:sequence>
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="packetType">
|
||||
<xsd:sequence>
|
||||
<xsd:element name="tag" type="ffprobe:tagType" minOccurs="0" maxOccurs="unbounded"/>
|
||||
<xsd:element name="side_data_list" type="ffprobe:packetSideDataListType" minOccurs="0" maxOccurs="1" />
|
||||
</xsd:sequence>
|
||||
|
||||
<xsd:attribute name="codec_type" type="xsd:string" use="required" />
|
||||
<xsd:attribute name="stream_index" type="xsd:int" use="required" />
|
||||
<xsd:attribute name="pts" type="xsd:long" />
|
||||
@@ -67,27 +45,10 @@
|
||||
<xsd:attribute name="pos" type="xsd:long" />
|
||||
<xsd:attribute name="flags" type="xsd:string" use="required" />
|
||||
<xsd:attribute name="data" type="xsd:string" />
|
||||
<xsd:attribute name="data_hash" type="xsd:string" />
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="packetSideDataListType">
|
||||
<xsd:sequence>
|
||||
<xsd:element name="side_data" type="ffprobe:packetSideDataType" minOccurs="1" maxOccurs="unbounded"/>
|
||||
</xsd:sequence>
|
||||
</xsd:complexType>
|
||||
<xsd:complexType name="packetSideDataType">
|
||||
<xsd:attribute name="side_data_type" type="xsd:string"/>
|
||||
<xsd:attribute name="side_data_size" type="xsd:int" />
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="frameType">
|
||||
<xsd:sequence>
|
||||
<xsd:element name="tag" type="ffprobe:tagType" minOccurs="0" maxOccurs="unbounded"/>
|
||||
<xsd:element name="side_data_list" type="ffprobe:frameSideDataListType" minOccurs="0" maxOccurs="1" />
|
||||
</xsd:sequence>
|
||||
|
||||
<xsd:attribute name="media_type" type="xsd:string" use="required"/>
|
||||
<xsd:attribute name="stream_index" type="xsd:int" />
|
||||
<xsd:attribute name="key_frame" type="xsd:int" use="required"/>
|
||||
<xsd:attribute name="pts" type="xsd:long" />
|
||||
<xsd:attribute name="pts_time" type="xsd:float"/>
|
||||
@@ -95,8 +56,6 @@
|
||||
<xsd:attribute name="pkt_pts_time" type="xsd:float"/>
|
||||
<xsd:attribute name="pkt_dts" type="xsd:long" />
|
||||
<xsd:attribute name="pkt_dts_time" type="xsd:float"/>
|
||||
<xsd:attribute name="best_effort_timestamp" type="xsd:long" />
|
||||
<xsd:attribute name="best_effort_timestamp_time" type="xsd:float" />
|
||||
<xsd:attribute name="pkt_duration" type="xsd:long" />
|
||||
<xsd:attribute name="pkt_duration_time" type="xsd:float"/>
|
||||
<xsd:attribute name="pkt_pos" type="xsd:long" />
|
||||
@@ -119,26 +78,7 @@
|
||||
<xsd:attribute name="interlaced_frame" type="xsd:int" />
|
||||
<xsd:attribute name="top_field_first" type="xsd:int" />
|
||||
<xsd:attribute name="repeat_pict" type="xsd:int" />
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="frameSideDataListType">
|
||||
<xsd:sequence>
|
||||
<xsd:element name="side_data" type="ffprobe:frameSideDataType" minOccurs="1" maxOccurs="unbounded"/>
|
||||
</xsd:sequence>
|
||||
</xsd:complexType>
|
||||
<xsd:complexType name="frameSideDataType">
|
||||
<xsd:attribute name="side_data_type" type="xsd:string"/>
|
||||
<xsd:attribute name="side_data_size" type="xsd:int" />
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="subtitleType">
|
||||
<xsd:attribute name="media_type" type="xsd:string" fixed="subtitle" use="required"/>
|
||||
<xsd:attribute name="pts" type="xsd:long" />
|
||||
<xsd:attribute name="pts_time" type="xsd:float"/>
|
||||
<xsd:attribute name="format" type="xsd:int" />
|
||||
<xsd:attribute name="start_display_time" type="xsd:int" />
|
||||
<xsd:attribute name="end_display_time" type="xsd:int" />
|
||||
<xsd:attribute name="num_rects" type="xsd:int" />
|
||||
<xsd:attribute name="reference" type="xsd:int" />
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="streamsType">
|
||||
@@ -147,12 +87,6 @@
|
||||
</xsd:sequence>
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="programsType">
|
||||
<xsd:sequence>
|
||||
<xsd:element name="program" type="ffprobe:programType" minOccurs="0" maxOccurs="unbounded"/>
|
||||
</xsd:sequence>
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="streamDispositionType">
|
||||
<xsd:attribute name="default" type="xsd:int" use="required" />
|
||||
<xsd:attribute name="dub" type="xsd:int" use="required" />
|
||||
@@ -169,9 +103,8 @@
|
||||
|
||||
<xsd:complexType name="streamType">
|
||||
<xsd:sequence>
|
||||
<xsd:element name="disposition" type="ffprobe:streamDispositionType" minOccurs="0" maxOccurs="1"/>
|
||||
<xsd:element name="tag" type="ffprobe:tagType" minOccurs="0" maxOccurs="unbounded"/>
|
||||
<xsd:element name="side_data_list" type="ffprobe:packetSideDataListType" minOccurs="0" maxOccurs="1" />
|
||||
<xsd:element name="disposition" type="ffprobe:streamDispositionType" minOccurs="0" maxOccurs="1"/>
|
||||
</xsd:sequence>
|
||||
|
||||
<xsd:attribute name="index" type="xsd:int" use="required"/>
|
||||
@@ -183,31 +116,21 @@
|
||||
<xsd:attribute name="codec_tag" type="xsd:string" use="required"/>
|
||||
<xsd:attribute name="codec_tag_string" type="xsd:string" use="required"/>
|
||||
<xsd:attribute name="extradata" type="xsd:string" />
|
||||
<xsd:attribute name="extradata_hash" type="xsd:string" />
|
||||
|
||||
<!-- video attributes -->
|
||||
<xsd:attribute name="width" type="xsd:int"/>
|
||||
<xsd:attribute name="height" type="xsd:int"/>
|
||||
<xsd:attribute name="coded_width" type="xsd:int"/>
|
||||
<xsd:attribute name="coded_height" type="xsd:int"/>
|
||||
<xsd:attribute name="has_b_frames" type="xsd:int"/>
|
||||
<xsd:attribute name="sample_aspect_ratio" type="xsd:string"/>
|
||||
<xsd:attribute name="display_aspect_ratio" type="xsd:string"/>
|
||||
<xsd:attribute name="pix_fmt" type="xsd:string"/>
|
||||
<xsd:attribute name="level" type="xsd:int"/>
|
||||
<xsd:attribute name="color_range" type="xsd:string"/>
|
||||
<xsd:attribute name="color_space" type="xsd:string"/>
|
||||
<xsd:attribute name="color_transfer" type="xsd:string"/>
|
||||
<xsd:attribute name="color_primaries" type="xsd:string"/>
|
||||
<xsd:attribute name="chroma_location" type="xsd:string"/>
|
||||
<xsd:attribute name="timecode" type="xsd:string"/>
|
||||
<xsd:attribute name="refs" type="xsd:int"/>
|
||||
|
||||
<!-- audio attributes -->
|
||||
<xsd:attribute name="sample_fmt" type="xsd:string"/>
|
||||
<xsd:attribute name="sample_rate" type="xsd:int"/>
|
||||
<xsd:attribute name="channels" type="xsd:int"/>
|
||||
<xsd:attribute name="channel_layout" type="xsd:string"/>
|
||||
<xsd:attribute name="bits_per_sample" type="xsd:int"/>
|
||||
|
||||
<xsd:attribute name="id" type="xsd:string"/>
|
||||
@@ -219,30 +142,11 @@
|
||||
<xsd:attribute name="duration_ts" type="xsd:long"/>
|
||||
<xsd:attribute name="duration" type="xsd:float"/>
|
||||
<xsd:attribute name="bit_rate" type="xsd:int"/>
|
||||
<xsd:attribute name="max_bit_rate" type="xsd:int"/>
|
||||
<xsd:attribute name="bits_per_raw_sample" type="xsd:int"/>
|
||||
<xsd:attribute name="nb_frames" type="xsd:int"/>
|
||||
<xsd:attribute name="nb_read_frames" type="xsd:int"/>
|
||||
<xsd:attribute name="nb_read_packets" type="xsd:int"/>
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="programType">
|
||||
<xsd:sequence>
|
||||
<xsd:element name="tag" type="ffprobe:tagType" minOccurs="0" maxOccurs="unbounded"/>
|
||||
<xsd:element name="streams" type="ffprobe:streamsType" minOccurs="0" maxOccurs="1"/>
|
||||
</xsd:sequence>
|
||||
|
||||
<xsd:attribute name="program_id" type="xsd:int" use="required"/>
|
||||
<xsd:attribute name="program_num" type="xsd:int" use="required"/>
|
||||
<xsd:attribute name="nb_streams" type="xsd:int" use="required"/>
|
||||
<xsd:attribute name="start_time" type="xsd:float"/>
|
||||
<xsd:attribute name="start_pts" type="xsd:long"/>
|
||||
<xsd:attribute name="end_time" type="xsd:float"/>
|
||||
<xsd:attribute name="end_pts" type="xsd:long"/>
|
||||
<xsd:attribute name="pmt_pid" type="xsd:int" use="required"/>
|
||||
<xsd:attribute name="pcr_pid" type="xsd:int" use="required"/>
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="formatType">
|
||||
<xsd:sequence>
|
||||
<xsd:element name="tag" type="ffprobe:tagType" minOccurs="0" maxOccurs="unbounded"/>
|
||||
@@ -250,14 +154,12 @@
|
||||
|
||||
<xsd:attribute name="filename" type="xsd:string" use="required"/>
|
||||
<xsd:attribute name="nb_streams" type="xsd:int" use="required"/>
|
||||
<xsd:attribute name="nb_programs" type="xsd:int" use="required"/>
|
||||
<xsd:attribute name="format_name" type="xsd:string" use="required"/>
|
||||
<xsd:attribute name="format_long_name" type="xsd:string"/>
|
||||
<xsd:attribute name="start_time" type="xsd:float"/>
|
||||
<xsd:attribute name="duration" type="xsd:float"/>
|
||||
<xsd:attribute name="size" type="xsd:long"/>
|
||||
<xsd:attribute name="bit_rate" type="xsd:long"/>
|
||||
<xsd:attribute name="probe_score" type="xsd:int"/>
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="tagType">
|
||||
@@ -273,31 +175,13 @@
|
||||
<xsd:complexType name="programVersionType">
|
||||
<xsd:attribute name="version" type="xsd:string" use="required"/>
|
||||
<xsd:attribute name="copyright" type="xsd:string" use="required"/>
|
||||
<xsd:attribute name="build_date" type="xsd:string"/>
|
||||
<xsd:attribute name="build_time" type="xsd:string"/>
|
||||
<xsd:attribute name="compiler_ident" type="xsd:string" use="required"/>
|
||||
<xsd:attribute name="build_date" type="xsd:string" use="required"/>
|
||||
<xsd:attribute name="build_time" type="xsd:string" use="required"/>
|
||||
<xsd:attribute name="compiler_type" type="xsd:string" use="required"/>
|
||||
<xsd:attribute name="compiler_version" type="xsd:string" use="required"/>
|
||||
<xsd:attribute name="configuration" type="xsd:string" use="required"/>
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="chaptersType">
|
||||
<xsd:sequence>
|
||||
<xsd:element name="chapter" type="ffprobe:chapterType" minOccurs="0" maxOccurs="unbounded"/>
|
||||
</xsd:sequence>
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="chapterType">
|
||||
<xsd:sequence>
|
||||
<xsd:element name="tag" type="ffprobe:tagType" minOccurs="0" maxOccurs="unbounded"/>
|
||||
</xsd:sequence>
|
||||
|
||||
<xsd:attribute name="id" type="xsd:int" use="required"/>
|
||||
<xsd:attribute name="time_base" type="xsd:string" use="required"/>
|
||||
<xsd:attribute name="start" type="xsd:int" use="required"/>
|
||||
<xsd:attribute name="start_time" type="xsd:float"/>
|
||||
<xsd:attribute name="end" type="xsd:int" use="required"/>
|
||||
<xsd:attribute name="end_time" type="xsd:float" use="required"/>
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="libraryVersionType">
|
||||
<xsd:attribute name="name" type="xsd:string" use="required"/>
|
||||
<xsd:attribute name="major" type="xsd:int" use="required"/>
|
||||
@@ -312,45 +196,4 @@
|
||||
<xsd:element name="library_version" type="ffprobe:libraryVersionType" minOccurs="0" maxOccurs="unbounded"/>
|
||||
</xsd:sequence>
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="pixelFormatFlagsType">
|
||||
<xsd:attribute name="big_endian" type="xsd:int" use="required"/>
|
||||
<xsd:attribute name="palette" type="xsd:int" use="required"/>
|
||||
<xsd:attribute name="bitstream" type="xsd:int" use="required"/>
|
||||
<xsd:attribute name="hwaccel" type="xsd:int" use="required"/>
|
||||
<xsd:attribute name="planar" type="xsd:int" use="required"/>
|
||||
<xsd:attribute name="rgb" type="xsd:int" use="required"/>
|
||||
<xsd:attribute name="pseudopal" type="xsd:int" use="required"/>
|
||||
<xsd:attribute name="alpha" type="xsd:int" use="required"/>
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="pixelFormatComponentType">
|
||||
<xsd:attribute name="index" type="xsd:int" use="required"/>
|
||||
<xsd:attribute name="bit_depth" type="xsd:int" use="required"/>
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="pixelFormatComponentsType">
|
||||
<xsd:sequence>
|
||||
<xsd:element name="component" type="ffprobe:pixelFormatComponentType" minOccurs="0" maxOccurs="unbounded"/>
|
||||
</xsd:sequence>
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="pixelFormatType">
|
||||
<xsd:sequence>
|
||||
<xsd:element name="flags" type="ffprobe:pixelFormatFlagsType" minOccurs="0" maxOccurs="1"/>
|
||||
<xsd:element name="components" type="ffprobe:pixelFormatComponentsType" minOccurs="0" maxOccurs="1"/>
|
||||
</xsd:sequence>
|
||||
|
||||
<xsd:attribute name="name" type="xsd:string" use="required"/>
|
||||
<xsd:attribute name="nb_components" type="xsd:int" use="required"/>
|
||||
<xsd:attribute name="log2_chroma_w" type="xsd:int"/>
|
||||
<xsd:attribute name="log2_chroma_h" type="xsd:int"/>
|
||||
<xsd:attribute name="bits_per_pixel" type="xsd:int"/>
|
||||
</xsd:complexType>
|
||||
|
||||
<xsd:complexType name="pixelFormatsType">
|
||||
<xsd:sequence>
|
||||
<xsd:element name="pixel_format" type="ffprobe:pixelFormatType" minOccurs="0" maxOccurs="unbounded"/>
|
||||
</xsd:sequence>
|
||||
</xsd:complexType>
|
||||
</xsd:schema>
|
||||
|
||||
@@ -1,11 +1,11 @@
|
||||
# Port on which the server is listening. You must select a different
|
||||
# port from your standard HTTP web server if it is running on the same
|
||||
# computer.
|
||||
HTTPPort 8090
|
||||
Port 8090
|
||||
|
||||
# Address on which the server is bound. Only useful if you have
|
||||
# several network interfaces.
|
||||
HTTPBindAddress 0.0.0.0
|
||||
BindAddress 0.0.0.0
|
||||
|
||||
# Number of simultaneous HTTP connections that can be handled. It has
|
||||
# to be defined *before* the MaxClients parameter, since it defines the
|
||||
@@ -82,7 +82,6 @@ Feed feed1.ffm
|
||||
# ra : RealNetworks-compatible stream. Audio only.
|
||||
# mpjpeg : Multipart JPEG (works with Netscape without any plugin)
|
||||
# jpeg : Generate a single JPEG image.
|
||||
# mjpeg : Generate a M-JPEG stream.
|
||||
# asf : ASF compatible streaming (Windows Media Player format).
|
||||
# swf : Macromedia Flash compatible stream
|
||||
# avi : AVI format (MPEG-4 video, MPEG audio sound)
|
||||
@@ -236,7 +235,7 @@ StartSendOnKey
|
||||
|
||||
#<Stream test.ogg>
|
||||
#Feed feed1.ffm
|
||||
#Metadata title "Stream title"
|
||||
#Title "Stream title"
|
||||
#AudioBitRate 64
|
||||
#AudioChannels 2
|
||||
#AudioSampleRate 44100
|
||||
@@ -281,10 +280,10 @@ StartSendOnKey
|
||||
#<Stream file.asf>
|
||||
#File "/usr/local/httpd/htdocs/test.asf"
|
||||
#NoAudio
|
||||
#Metadata author "Me"
|
||||
#Metadata copyright "Super MegaCorp"
|
||||
#Metadata title "Test stream from disk"
|
||||
#Metadata comment "Test comment"
|
||||
#Author "Me"
|
||||
#Copyright "Super MegaCorp"
|
||||
#Title "Test stream from disk"
|
||||
#Comment "Test comment"
|
||||
#</Stream>
|
||||
|
||||
|
||||
|
||||
@@ -1,5 +1,4 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle ffserver Documentation
|
||||
@titlepage
|
||||
@@ -17,14 +16,11 @@ ffserver [@var{options}]
|
||||
@chapter Description
|
||||
@c man begin DESCRIPTION
|
||||
|
||||
@command{ffserver} is a streaming server for both audio and video.
|
||||
It supports several live feeds, streaming from files and time shifting
|
||||
on live feeds. You can seek to positions in the past on each live
|
||||
feed, provided you specify a big enough feed storage.
|
||||
|
||||
@command{ffserver} is configured through a configuration file, which
|
||||
is read at startup. If not explicitly specified, it will read from
|
||||
@file{/etc/ffserver.conf}.
|
||||
@command{ffserver} is a streaming server for both audio and video. It
|
||||
supports several live feeds, streaming from files and time shifting on
|
||||
live feeds (you can seek to positions in the past on each live feed,
|
||||
provided you specify a big enough feed storage in
|
||||
@file{ffserver.conf}).
|
||||
|
||||
@command{ffserver} receives prerecorded files or FFM streams from some
|
||||
@command{ffmpeg} instance as input, then streams them over
|
||||
@@ -43,128 +39,10 @@ For each feed you can have different output streams in various
|
||||
formats, each one specified by a @code{<Stream>} section in the
|
||||
configuration file.
|
||||
|
||||
@chapter Detailed description
|
||||
|
||||
@command{ffserver} works by forwarding streams encoded by
|
||||
@command{ffmpeg}, or pre-recorded streams which are read from disk.
|
||||
|
||||
Precisely, @command{ffserver} acts as an HTTP server, accepting POST
|
||||
requests from @command{ffmpeg} to acquire the stream to publish, and
|
||||
serving RTSP clients or HTTP clients GET requests with the stream
|
||||
media content.
|
||||
|
||||
A feed is an @ref{FFM} stream created by @command{ffmpeg}, and sent to
|
||||
a port where @command{ffserver} is listening.
|
||||
|
||||
Each feed is identified by a unique name, corresponding to the name
|
||||
of the resource published on @command{ffserver}, and is configured by
|
||||
a dedicated @code{Feed} section in the configuration file.
|
||||
|
||||
The feed publish URL is given by:
|
||||
@example
|
||||
http://@var{ffserver_ip_address}:@var{http_port}/@var{feed_name}
|
||||
@end example
|
||||
|
||||
where @var{ffserver_ip_address} is the IP address of the machine where
|
||||
@command{ffserver} is installed, @var{http_port} is the port number of
|
||||
the HTTP server (configured through the @option{HTTPPort} option), and
|
||||
@var{feed_name} is the name of the corresponding feed defined in the
|
||||
configuration file.
|
||||
|
||||
Each feed is associated to a file which is stored on disk. This stored
|
||||
file is used to send pre-recorded data to a player as fast as
|
||||
possible when new content is added in real-time to the stream.
|
||||
|
||||
A "live-stream" or "stream" is a resource published by
|
||||
@command{ffserver}, and made accessible through the HTTP protocol to
|
||||
clients.
|
||||
|
||||
A stream can be connected to a feed, or to a file. In the first case,
|
||||
the published stream is forwarded from the corresponding feed
|
||||
generated by a running instance of @command{ffmpeg}, in the second
|
||||
case the stream is read from a pre-recorded file.
|
||||
|
||||
Each stream is identified by a unique name, corresponding to the name
|
||||
of the resource served by @command{ffserver}, and is configured by
|
||||
a dedicated @code{Stream} section in the configuration file.
|
||||
|
||||
The stream access HTTP URL is given by:
|
||||
@example
|
||||
http://@var{ffserver_ip_address}:@var{http_port}/@var{stream_name}[@var{options}]
|
||||
@end example
|
||||
|
||||
The stream access RTSP URL is given by:
|
||||
@example
|
||||
http://@var{ffserver_ip_address}:@var{rtsp_port}/@var{stream_name}[@var{options}]
|
||||
@end example
|
||||
|
||||
@var{stream_name} is the name of the corresponding stream defined in
|
||||
the configuration file. @var{options} is a list of options specified
|
||||
after the URL which affects how the stream is served by
|
||||
@command{ffserver}. @var{http_port} and @var{rtsp_port} are the HTTP
|
||||
and RTSP ports configured with the options @var{HTTPPort} and
|
||||
@var{RTSPPort} respectively.
|
||||
|
||||
In case the stream is associated to a feed, the encoding parameters
|
||||
must be configured in the stream configuration. They are sent to
|
||||
@command{ffmpeg} when setting up the encoding. This allows
|
||||
@command{ffserver} to define the encoding parameters used by
|
||||
the @command{ffmpeg} encoders.
|
||||
|
||||
The @command{ffmpeg} @option{override_ffserver} commandline option
|
||||
allows one to override the encoding parameters set by the server.
|
||||
|
||||
Multiple streams can be connected to the same feed.
|
||||
|
||||
For example, you can have a situation described by the following
|
||||
graph:
|
||||
|
||||
@verbatim
|
||||
_________ __________
|
||||
| | | |
|
||||
ffmpeg 1 -----| feed 1 |-----| stream 1 |
|
||||
\ |_________|\ |__________|
|
||||
\ \
|
||||
\ \ __________
|
||||
\ \ | |
|
||||
\ \| stream 2 |
|
||||
\ |__________|
|
||||
\
|
||||
\ _________ __________
|
||||
\ | | | |
|
||||
\| feed 2 |-----| stream 3 |
|
||||
|_________| |__________|
|
||||
|
||||
_________ __________
|
||||
| | | |
|
||||
ffmpeg 2 -----| feed 3 |-----| stream 4 |
|
||||
|_________| |__________|
|
||||
|
||||
_________ __________
|
||||
| | | |
|
||||
| file 1 |-----| stream 5 |
|
||||
|_________| |__________|
|
||||
|
||||
@end verbatim
|
||||
|
||||
@anchor{FFM}
|
||||
@section FFM, FFM2 formats
|
||||
|
||||
FFM and FFM2 are formats used by ffserver. They allow storing a wide variety of
|
||||
video and audio streams and encoding options, and can store a moving time segment
|
||||
of an infinite movie or a whole movie.
|
||||
|
||||
FFM is version specific, and there is limited compatibility of FFM files
|
||||
generated by one version of ffmpeg/ffserver and another version of
|
||||
ffmpeg/ffserver. It may work but it is not guaranteed to work.
|
||||
|
||||
FFM2 is extensible while maintaining compatibility and should work between
|
||||
differing versions of tools. FFM2 is the default.
|
||||
|
||||
@section Status stream
|
||||
|
||||
@command{ffserver} supports an HTTP interface which exposes the
|
||||
current status of the server.
|
||||
ffserver supports an HTTP interface which exposes the current status
|
||||
of the server.
|
||||
|
||||
Simply point your browser to the address of the special status stream
|
||||
specified in the configuration file.
|
||||
@@ -183,8 +61,27 @@ ACL allow 192.168.0.0 192.168.255.255
|
||||
then the server will post a page with the status information when
|
||||
the special stream @file{status.html} is requested.
|
||||
|
||||
@section What can this do?
|
||||
|
||||
When properly configured and running, you can capture video and audio in real
|
||||
time from a suitable capture card, and stream it out over the Internet to
|
||||
either Windows Media Player or RealAudio player (with some restrictions).
|
||||
|
||||
It can also stream from files, though that is currently broken. Very often, a
|
||||
web server can be used to serve up the files just as well.
|
||||
|
||||
It can stream prerecorded video from .ffm files, though it is somewhat tricky
|
||||
to make it work correctly.
|
||||
|
||||
@section How do I make it work?
|
||||
|
||||
First, build the kit. It *really* helps to have installed LAME first. Then when
|
||||
you run the ffserver ./configure, make sure that you have the
|
||||
@code{--enable-libmp3lame} flag turned on.
|
||||
|
||||
LAME is important as it allows for streaming audio to Windows Media Player.
|
||||
Don't ask why the other audio types do not work.
|
||||
|
||||
As a simple test, just run the following two command lines where INPUTFILE
|
||||
is some file which you can decode with ffmpeg:
|
||||
|
||||
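The two command lines themselves fall outside this hunk; as an illustrative
sketch only, assuming the stock @file{doc/ffserver.conf} with a feed named
@file{feed1.ffm} listening on port 8090, they would look something like:

@example
ffserver -f doc/ffserver.conf &
ffmpeg -i INPUTFILE http://localhost:8090/feed1.ffm
@end example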
@@ -206,9 +103,40 @@ WARNING: trying to stream test1.mpg doesn't work with WMP as it tries to
|
||||
transfer the entire file before starting to play.
|
||||
The same is true of AVI files.
|
||||
|
||||
You should edit the @file{ffserver.conf} file to suit your needs (in
|
||||
terms of frame rates etc). Then install @command{ffserver} and
|
||||
@command{ffmpeg}, write a script to start them up, and off you go.
|
||||
@section What happens next?
|
||||
|
||||
You should edit the ffserver.conf file to suit your needs (in terms of
|
||||
frame rates etc). Then install ffserver and ffmpeg, write a script to start
|
||||
them up, and off you go.
|
||||
|
||||
@section Troubleshooting
|
||||
|
||||
@subsection I don't hear any audio, but video is fine.
|
||||
|
||||
Maybe you didn't install LAME, or got your ./configure statement wrong. Check
|
||||
the ffmpeg output to see if a line referring to MP3 is present. If not, then
|
||||
your configuration was incorrect. If it is, then maybe your wiring is not
|
||||
set up correctly. Maybe the sound card is not getting data from the right
|
||||
input source. Maybe you have a really awful audio interface (like I do)
|
||||
that only captures in stereo and also requires that one channel be flipped.
|
||||
If you are one of these people, then export 'AUDIO_FLIP_LEFT=1' before
|
||||
starting ffmpeg.
|
||||
|
||||
@subsection The audio and video lose sync after a while.
|
||||
|
||||
Yes, they do.
|
||||
|
||||
@subsection After a long while, the video update rate goes way down in WMP.
|
||||
|
||||
Yes, it does. Who knows why?
|
||||
|
||||
@subsection WMP 6.4 behaves differently to WMP 7.
|
||||
|
||||
Yes, it does. Any thoughts on this would be gratefully received. These
|
||||
differences extend to embedding WMP into a web page. [There are two
|
||||
object IDs that you can use: The old one, which does not play well, and
|
||||
the new one, which does (both tested on the same system). However,
|
||||
I suspect that the new one is not available unless you have installed WMP 7].
|
||||
|
||||
@section What else can it do?
|
||||
|
||||
@@ -249,6 +177,9 @@ specify a time. In addition, ffserver will skip frames until a key_frame
|
||||
is found. This further reduces the startup delay by not transferring data
|
||||
that will be discarded.
|
||||
|
||||
* You may want to adjust the MaxBandwidth in the ffserver.conf to limit
|
||||
the amount of bandwidth consumed by live streams.
|
||||
|
||||
@section Why does the ?buffer / Preroll stop working after a time?
|
||||
|
||||
It turns out that (on my machine at least) the number of frames successfully
|
||||
@@ -282,610 +213,43 @@ You use this by adding the ?date= to the end of the URL for the stream.
|
||||
For example: @samp{http://localhost:8080/test.asf?date=2002-07-26T23:05:00}.
|
||||
@c man end
|
||||
|
||||
@section What is FFM, FFM2
|
||||
|
||||
FFM and FFM2 are formats used by ffserver. They allow storing a wide variety of
|
||||
video and audio streams and encoding options, and can store a moving time segment
|
||||
of an infinite movie or a whole movie.
|
||||
|
||||
FFM is version specific, and there is limited compatibility of FFM files
|
||||
generated by one version of ffmpeg/ffserver and another version of
|
||||
ffmpeg/ffserver. It may work but it is not guaranteed to work.
|
||||
|
||||
FFM2 is extensible while maintaining compatibility and should work between
|
||||
differing versions of tools. FFM2 is the default.
|
||||
|
||||
@chapter Options
|
||||
@c man begin OPTIONS
|
||||
|
||||
@include fftools-common-opts.texi
|
||||
@include avtools-common-opts.texi
|
||||
|
||||
@section Main options
|
||||
|
||||
@table @option
|
||||
@item -f @var{configfile}
|
||||
Read the configuration file @file{configfile}. If not specified, it will
read by default from @file{/etc/ffserver.conf}.
|
||||
|
||||
Use @file{configfile} instead of @file{/etc/ffserver.conf}.
|
||||
@item -n
|
||||
Enable no-launch mode. This option disables all the @code{Launch}
|
||||
directives within the various @code{<Feed>} sections. Since
|
||||
@command{ffserver} will not launch any @command{ffmpeg} instances, you
|
||||
will have to launch them manually.
|
||||
|
||||
Enable no-launch mode. This option disables all the Launch directives
|
||||
within the various <Stream> sections. Since ffserver will not launch
|
||||
any ffmpeg instances, you will have to launch them manually.
|
||||
@item -d
|
||||
Enable debug mode. This option increases log verbosity, and directs
|
||||
log messages to stdout. When specified, the @option{CustomLog} option
|
||||
is ignored.
|
||||
Enable debug mode. This option increases log verbosity, directs log
|
||||
messages to stdout.
|
||||
@end table
|
||||
|
||||
@chapter Configuration file syntax
|
||||
|
||||
@command{ffserver} reads a configuration file containing global
|
||||
options and settings for each stream and feed.
|
||||
|
||||
The configuration file consists of global options and dedicated
|
||||
sections, which must be introduced by "<@var{SECTION_NAME}
|
||||
@var{ARGS}>" on a separate line and must be terminated by a line in
|
||||
the form "</@var{SECTION_NAME}>". @var{ARGS} is optional.
|
||||
|
||||
Currently the following sections are recognized: @samp{Feed},
|
||||
@samp{Stream}, @samp{Redirect}.
|
||||
|
||||
A line starting with @code{#} is ignored and treated as a comment.
|
||||
|
||||
Names of options and sections are case-insensitive.
|
||||
|
||||
@section ACL syntax
|
||||
An ACL (Access Control List) specifies the addresses which are allowed
to access a given stream, or to write a given feed.
|
||||
|
||||
It accepts the following forms:
|
||||
@itemize
|
||||
@item
|
||||
Allow/deny access to @var{address}.
|
||||
@example
|
||||
ACL ALLOW <address>
|
||||
ACL DENY <address>
|
||||
@end example
|
||||
|
||||
@item
|
||||
Allow/deny access to ranges of addresses from @var{first_address} to
|
||||
@var{last_address}.
|
||||
@example
|
||||
ACL ALLOW <first_address> <last_address>
|
||||
ACL DENY <first_address> <last_address>
|
||||
@end example
|
||||
@end itemize
|
||||
|
||||
You can repeat the ACL allow/deny as often as you like. It is on a
per-stream basis. The first match defines the action. If there are no matches,
|
||||
then the default is the inverse of the last ACL statement.
|
||||
|
||||
Thus 'ACL allow localhost' only allows access from localhost.
|
||||
'ACL deny 1.0.0.0 1.255.255.255' would deny the whole of network 1 and
|
||||
allow everybody else.
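As a minimal sketch (the stream name @file{status.html} and the private
network range are assumed examples, not requirements), the two ACL forms are
typically combined inside a section like this:

@example
<Stream status.html>
Format status
ACL allow localhost
ACL allow 192.168.0.0 192.168.255.255
</Stream>
@end example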
|
||||
|
||||
@section Global options
|
||||
@table @option
|
||||
@item HTTPPort @var{port_number}
|
||||
@item Port @var{port_number}
|
||||
@item RTSPPort @var{port_number}
|
||||
|
||||
@var{HTTPPort} sets the HTTP server listening TCP port number,
|
||||
@var{RTSPPort} sets the RTSP server listening TCP port number.
|
||||
|
||||
@var{Port} is the equivalent of @var{HTTPPort} and is deprecated.
|
||||
|
||||
You must select a different port from your standard HTTP web server if
|
||||
it is running on the same computer.
|
||||
|
||||
If not specified, no corresponding server will be created.
|
||||
|
||||
@item HTTPBindAddress @var{ip_address}
|
||||
@item BindAddress @var{ip_address}
|
||||
@item RTSPBindAddress @var{ip_address}
|
||||
Set address on which the HTTP/RTSP server is bound. Only useful if you
|
||||
have several network interfaces.
|
||||
|
||||
@var{BindAddress} is the equivalent of @var{HTTPBindAddress} and is
|
||||
deprecated.
|
||||
|
||||
@item MaxHTTPConnections @var{n}
|
||||
Set number of simultaneous HTTP connections that can be handled. It
|
||||
has to be defined @emph{before} the @option{MaxClients} parameter,
|
||||
since it defines the @option{MaxClients} maximum limit.
|
||||
|
||||
Default value is 2000.
|
||||
|
||||
@item MaxClients @var{n}
|
||||
Set number of simultaneous requests that can be handled. Since
|
||||
@command{ffserver} is very fast, it is more likely that you will want
|
||||
to leave this high and use @option{MaxBandwidth}.
|
||||
|
||||
Default value is 5.
|
||||
|
||||
@item MaxBandwidth @var{kbps}
|
||||
Set the maximum amount of kbit/sec that you are prepared to consume
|
||||
when streaming to clients.
|
||||
|
||||
Default value is 1000.
|
||||
|
||||
@item CustomLog @var{filename}
|
||||
Set access log file (uses standard Apache log file format). '-' is the
|
||||
standard output.
|
||||
|
||||
If not specified @command{ffserver} will produce no log.
|
||||
|
||||
In case the commandline option @option{-d} is specified this option is
|
||||
ignored, and the log is written to standard output.
|
||||
|
||||
@item NoDaemon
|
||||
Set no-daemon mode. This option is currently ignored since now
|
||||
@command{ffserver} will always work in no-daemon mode, and is
|
||||
deprecated.
|
||||
|
||||
@item UseDefaults
|
||||
@item NoDefaults
|
||||
Control whether default codec options are used for all the streams or not.
Each stream may override this setting for itself. Default is @var{UseDefaults}.
If there are multiple definitions, the latest occurrence overrides the previous ones.
|
||||
@end table
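Putting the global options above together, a plausible start of an
@file{ffserver.conf} might read as follows; the values are only illustrative
defaults, not recommendations. Note that @option{MaxHTTPConnections} is placed
before @option{MaxClients}, as required.

@example
HTTPPort 8090
HTTPBindAddress 0.0.0.0
MaxHTTPConnections 2000
MaxClients 1000
MaxBandwidth 1000
CustomLog -
@end example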
|
||||
|
||||
@section Feed section
|
||||
|
||||
A Feed section defines a feed provided to @command{ffserver}.
|
||||
|
||||
Each live feed contains one video and/or audio sequence coming from an
|
||||
@command{ffmpeg} encoder or another @command{ffserver}. This sequence
|
||||
may be encoded simultaneously with several codecs at several
|
||||
resolutions.
|
||||
|
||||
A feed instance specification is introduced by a line in the form:
|
||||
@example
|
||||
<Feed FEED_FILENAME>
|
||||
@end example
|
||||
|
||||
where @var{FEED_FILENAME} specifies the unique name of the FFM stream.
|
||||
|
||||
The following options are recognized within a Feed section.
|
||||
|
||||
@table @option
|
||||
@item File @var{filename}
|
||||
@item ReadOnlyFile @var{filename}
|
||||
Set the path where the feed file is stored on disk.
|
||||
|
||||
If not specified, @file{/tmp/FEED.ffm} is assumed, where
@var{FEED} is the feed name.
|
||||
|
||||
If @option{ReadOnlyFile} is used the file is marked as read-only and
|
||||
it will not be deleted or updated.
|
||||
|
||||
@item Truncate
|
||||
Truncate the feed file, rather than appending to it. By default
|
||||
@command{ffserver} will append data to the file, until the maximum
|
||||
file size value is reached (see @option{FileMaxSize} option).
|
||||
|
||||
@item FileMaxSize @var{size}
|
||||
Set maximum size of the feed file in bytes. 0 means unlimited. The
|
||||
postfixes @code{K} (2^10), @code{M} (2^20), and @code{G} (2^30) are
|
||||
recognized.
|
||||
|
||||
Default value is 5M.
|
||||
|
||||
@item Launch @var{args}
|
||||
Launch an @command{ffmpeg} command when creating @command{ffserver}.
|
||||
|
||||
@var{args} must be a sequence of arguments to be provided to an
|
||||
@command{ffmpeg} instance. The first provided argument is ignored, and
|
||||
it is replaced by a path with the same dirname of the @command{ffserver}
|
||||
instance, followed by the remaining argument and terminated with a
|
||||
path corresponding to the feed.
|
||||
|
||||
When the launched process exits, @command{ffserver} will launch
|
||||
another program instance.
|
||||
|
||||
In case you need a more complex @command{ffmpeg} configuration,
|
||||
e.g. if you need to generate multiple FFM feeds with a single
|
||||
@command{ffmpeg} instance, you should launch @command{ffmpeg} by hand.
|
||||
|
||||
This option is ignored in case the commandline option @option{-n} is
|
||||
specified.
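For instance (the capture device path is only a hypothetical example), a feed
that automatically starts a grabbing @command{ffmpeg} instance could contain:

@example
Launch ffmpeg -i /dev/video0
@end example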
|
||||
|
||||
@item ACL @var{spec}
|
||||
Specify the list of IP addresses which are allowed or denied to write
the feed. Multiple ACL options can be specified.
|
||||
@end table
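A minimal Feed section tying these options together might look as follows;
the feed name, file path, size and ACL are example values only.

@example
<Feed feed1.ffm>
File /tmp/feed1.ffm
FileMaxSize 1G
ACL allow 127.0.0.1
</Feed>
@end example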
|
||||
|
||||
@section Stream section
|
||||
|
||||
A Stream section defines a stream provided by @command{ffserver}, and
|
||||
identified by a single name.
|
||||
|
||||
The stream is sent when answering a request containing the stream
|
||||
name.
|
||||
|
||||
A stream section must be introduced by the line:
|
||||
@example
|
||||
<Stream STREAM_NAME>
|
||||
@end example
|
||||
|
||||
where @var{STREAM_NAME} specifies the unique name of the stream.
|
||||
|
||||
The following options are recognized within a Stream section.
|
||||
|
||||
Encoding options are marked with the @emph{encoding} tag; they are
used to set the encoding parameters and are mapped to libavcodec
encoding options. Not all encoding options are supported, in
|
||||
particular it is not possible to set encoder private options. In order
|
||||
to override the encoding options specified by @command{ffserver}, you
|
||||
can use the @command{ffmpeg} @option{override_ffserver} commandline
|
||||
option.
|
||||
|
||||
Only one of the @option{Feed} and @option{File} options should be set.
|
||||
|
||||
@table @option
|
||||
@item Feed @var{feed_name}
|
||||
Set the input feed. @var{feed_name} must correspond to an existing
|
||||
feed defined in a @code{Feed} section.
|
||||
|
||||
When this option is set, encoding options are used to setup the
|
||||
encoding operated by the remote @command{ffmpeg} process.
|
||||
|
||||
@item File @var{filename}
|
||||
Set the filename of the pre-recorded input file to stream.
|
||||
|
||||
When this option is set, encoding options are ignored and the input
|
||||
file content is re-streamed as is.
|
||||
|
||||
@item Format @var{format_name}
|
||||
Set the format of the output stream.
|
||||
|
||||
Must be the name of a format recognized by FFmpeg. If set to
|
||||
@samp{status}, it is treated as a status stream.
|
||||
|
||||
@item InputFormat @var{format_name}
|
||||
Set input format. If not specified, it is automatically guessed.
|
||||
|
||||
@item Preroll @var{n}
|
||||
Set this to the number of seconds backwards in time to start. Note that
|
||||
most players will buffer 5-10 seconds of video, and also you need to allow
|
||||
for a keyframe to appear in the data stream.
|
||||
|
||||
Default value is 0.
|
||||
|
||||
@item StartSendOnKey
|
||||
Do not send stream until it gets the first key frame. By default
|
||||
@command{ffserver} will send data immediately.
|
||||
|
||||
@item MaxTime @var{n}
|
||||
Set the number of seconds to run. This value sets the maximum duration
of the stream a client will be able to receive.
|
||||
|
||||
A value of 0 means that no limit is set on the stream duration.
|
||||
|
||||
@item ACL @var{spec}
|
||||
Set ACL for the stream.
|
||||
|
||||
@item DynamicACL @var{spec}
|
||||
|
||||
@item RTSPOption @var{option}
|
||||
|
||||
@item MulticastAddress @var{address}
|
||||
|
||||
@item MulticastPort @var{port}
|
||||
|
||||
@item MulticastTTL @var{integer}
|
||||
|
||||
@item NoLoop
|
||||
|
||||
@item FaviconURL @var{url}
|
||||
Set favicon (favourite icon) for the server status page. It is ignored
|
||||
for regular streams.
|
||||
|
||||
@item Author @var{value}
|
||||
@item Comment @var{value}
|
||||
@item Copyright @var{value}
|
||||
@item Title @var{value}
|
||||
Set metadata corresponding to the option. All these options are
|
||||
deprecated in favor of @option{Metadata}.
|
||||
|
||||
@item Metadata @var{key} @var{value}
|
||||
Set metadata value on the output stream.
|
||||
|
||||
@item UseDefaults
|
||||
@item NoDefaults
|
||||
Control whether default codec options are used for the stream or not.
|
||||
Default is @var{UseDefaults} unless disabled globally.
|
||||
|
||||
@item NoAudio
|
||||
@item NoVideo
|
||||
Suppress audio/video.
|
||||
|
||||
@item AudioCodec @var{codec_name} (@emph{encoding,audio})
|
||||
Set audio codec.
|
||||
|
||||
@item AudioBitRate @var{rate} (@emph{encoding,audio})
|
||||
Set bitrate for the audio stream in kbits per second.
|
||||
|
||||
@item AudioChannels @var{n} (@emph{encoding,audio})
|
||||
Set number of audio channels.
|
||||
|
||||
@item AudioSampleRate @var{n} (@emph{encoding,audio})
|
||||
Set sampling frequency for audio. When using low bitrates, you should
|
||||
lower this frequency to 22050 or 11025. The supported frequencies
|
||||
depend on the selected audio codec.
|
||||
|
||||
@item AVOptionAudio [@var{codec}:]@var{option} @var{value} (@emph{encoding,audio})
|
||||
Set a generic or private option for the audio stream.
A private option must be prefixed with the codec name, or the codec must be defined beforehand.
|
||||
|
||||
@item AVPresetAudio @var{preset} (@emph{encoding,audio})
|
||||
Set preset for audio stream.
|
||||
|
||||
@item VideoCodec @var{codec_name} (@emph{encoding,video})
|
||||
Set video codec.
|
||||
|
||||
@item VideoBitRate @var{n} (@emph{encoding,video})
|
||||
Set bitrate for the video stream in kbits per second.
|
||||
|
||||
@item VideoBitRateRange @var{range} (@emph{encoding,video})
|
||||
Set video bitrate range.
|
||||
|
||||
A range must be specified in the form @var{minrate}-@var{maxrate}, and
|
||||
specifies the @option{minrate} and @option{maxrate} encoding options
|
||||
expressed in kbits per second.
|
||||
|
||||
@item VideoBitRateRangeTolerance @var{n} (@emph{encoding,video})
|
||||
Set video bitrate tolerance in kbits per second.
|
||||
|
||||
@item PixelFormat @var{pixel_format} (@emph{encoding,video})
|
||||
Set video pixel format.
|
||||
|
||||
@item Debug @var{integer} (@emph{encoding,video})
|
||||
Set video @option{debug} encoding option.
|
||||
|
||||
@item Strict @var{integer} (@emph{encoding,video})
|
||||
Set video @option{strict} encoding option.
|
||||
|
||||
@item VideoBufferSize @var{n} (@emph{encoding,video})
|
||||
Set ratecontrol buffer size, expressed in KB.
|
||||
|
||||
@item VideoFrameRate @var{n} (@emph{encoding,video})
|
||||
Set number of video frames per second.
|
||||
|
||||
@item VideoSize (@emph{encoding,video})
|
||||
Set size of the video frame, must be an abbreviation or in the form
|
||||
@var{W}x@var{H}. See @ref{video size syntax,,the Video size section
|
||||
in the ffmpeg-utils(1) manual,ffmpeg-utils}.
|
||||
|
||||
Default value is @code{160x128}.
|
||||
|
||||
@item VideoIntraOnly (@emph{encoding,video})
|
||||
Transmit only intra frames (useful for low bitrates, but kills frame rate).
|
||||
|
||||
@item VideoGopSize @var{n} (@emph{encoding,video})
|
||||
If non-intra only, an intra frame is transmitted every VideoGopSize
|
||||
frames. Video synchronization can only begin at an intra frame.
|
||||
|
||||
@item VideoTag @var{tag} (@emph{encoding,video})
|
||||
Set video tag.
|
||||
|
||||
@item VideoHighQuality (@emph{encoding,video})
|
||||
@item Video4MotionVector (@emph{encoding,video})
|
||||
|
||||
@item BitExact (@emph{encoding,video})
|
||||
Set bitexact encoding flag.
|
||||
|
||||
@item IdctSimple (@emph{encoding,video})
|
||||
Set simple IDCT algorithm.
|
||||
|
||||
@item Qscale @var{n} (@emph{encoding,video})
|
||||
Enable constant quality encoding, and set video qscale (quantization
|
||||
scale) value, expressed in @var{n} QP units.
|
||||
|
||||
@item VideoQMin @var{n} (@emph{encoding,video})
|
||||
@item VideoQMax @var{n} (@emph{encoding,video})
|
||||
Set video qmin/qmax.
|
||||
|
||||
@item VideoQDiff @var{integer} (@emph{encoding,video})
|
||||
Set video @option{qdiff} encoding option.
|
||||
|
||||
@item LumiMask @var{float} (@emph{encoding,video})
|
||||
@item DarkMask @var{float} (@emph{encoding,video})
|
||||
Set @option{lumi_mask}/@option{dark_mask} encoding options.
|
||||
|
||||
@item AVOptionVideo [@var{codec}:]@var{option} @var{value} (@emph{encoding,video})
|
||||
Set a generic or private option for the video stream.
A private option must be prefixed with the codec name, or the codec must be defined beforehand.
|
||||
|
||||
@item AVPresetVideo @var{preset} (@emph{encoding,video})
|
||||
Set preset for video stream.
|
||||
|
||||
@var{preset} must be the path of a preset file.
|
||||
@end table
|
||||
|
||||
@subsection Server status stream
|
||||
|
||||
A server status stream is a special stream which is used to show
|
||||
statistics about the @command{ffserver} operations.
|
||||
|
||||
It must be specified setting the option @option{Format} to
|
||||
@samp{status}.
|
||||
|
||||
@section Redirect section
|
||||
|
||||
A Redirect section specifies that a requested URL is redirected to
another page.
|
||||
|
||||
A redirect section must be introduced by the line:
|
||||
@example
|
||||
<Redirect NAME>
|
||||
@end example
|
||||
|
||||
where @var{NAME} is the name of the page which should be redirected.
|
||||
|
||||
It only accepts the option @option{URL}, which specifies the redirection
URL.
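For instance (the page name and target URL are placeholders), a Redirect
section might look like:

@example
<Redirect index.html>
URL http://www.ffmpeg.org/
</Redirect>
@end example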
|
||||
|
||||
@chapter Stream examples
|
||||
|
||||
@itemize
|
||||
@item
|
||||
Multipart JPEG
|
||||
@example
|
||||
<Stream test.mjpg>
|
||||
Feed feed1.ffm
|
||||
Format mpjpeg
|
||||
VideoFrameRate 2
|
||||
VideoIntraOnly
|
||||
NoAudio
|
||||
Strict -1
|
||||
</Stream>
|
||||
@end example
|
||||
|
||||
@item
|
||||
Single JPEG
|
||||
@example
|
||||
<Stream test.jpg>
|
||||
Feed feed1.ffm
|
||||
Format jpeg
|
||||
VideoFrameRate 2
|
||||
VideoIntraOnly
|
||||
VideoSize 352x240
|
||||
NoAudio
|
||||
Strict -1
|
||||
</Stream>
|
||||
@end example
|
||||
|
||||
@item
|
||||
Flash
|
||||
@example
|
||||
<Stream test.swf>
|
||||
Feed feed1.ffm
|
||||
Format swf
|
||||
VideoFrameRate 2
|
||||
VideoIntraOnly
|
||||
NoAudio
|
||||
</Stream>
|
||||
@end example
|
||||
|
||||
@item
|
||||
ASF compatible
|
||||
@example
|
||||
<Stream test.asf>
|
||||
Feed feed1.ffm
|
||||
Format asf
|
||||
VideoFrameRate 15
|
||||
VideoSize 352x240
|
||||
VideoBitRate 256
|
||||
VideoBufferSize 40
|
||||
VideoGopSize 30
|
||||
AudioBitRate 64
|
||||
StartSendOnKey
|
||||
</Stream>
|
||||
@end example
|
||||
|
||||
@item
|
||||
MP3 audio
|
||||
@example
|
||||
<Stream test.mp3>
|
||||
Feed feed1.ffm
|
||||
Format mp2
|
||||
AudioCodec mp3
|
||||
AudioBitRate 64
|
||||
AudioChannels 1
|
||||
AudioSampleRate 44100
|
||||
NoVideo
|
||||
</Stream>
|
||||
@end example
|
||||
|
||||
@item
|
||||
Ogg Vorbis audio
|
||||
@example
|
||||
<Stream test.ogg>
|
||||
Feed feed1.ffm
|
||||
Metadata title "Stream title"
|
||||
AudioBitRate 64
|
||||
AudioChannels 2
|
||||
AudioSampleRate 44100
|
||||
NoVideo
|
||||
</Stream>
|
||||
@end example
|
||||
|
||||
@item
|
||||
Real with audio only at 32 kbits
|
||||
@example
|
||||
<Stream test.ra>
|
||||
Feed feed1.ffm
|
||||
Format rm
|
||||
AudioBitRate 32
|
||||
NoVideo
|
||||
</Stream>
|
||||
@end example
|
||||
|
||||
@item
|
||||
Real with audio and video at 64 kbits
|
||||
@example
|
||||
<Stream test.rm>
|
||||
Feed feed1.ffm
|
||||
Format rm
|
||||
AudioBitRate 32
|
||||
VideoBitRate 128
|
||||
VideoFrameRate 25
|
||||
VideoGopSize 25
|
||||
</Stream>
|
||||
@end example
|
||||
|
||||
@item
|
||||
For a stream coming from a file, you only need to set the input filename
and optionally a new format.
|
||||
|
||||
@example
|
||||
<Stream file.rm>
|
||||
File "/usr/local/httpd/htdocs/tlive.rm"
|
||||
NoAudio
|
||||
</Stream>
|
||||
@end example
|
||||
|
||||
@example
|
||||
<Stream file.asf>
|
||||
File "/usr/local/httpd/htdocs/test.asf"
|
||||
NoAudio
|
||||
Metadata author "Me"
|
||||
Metadata copyright "Super MegaCorp"
|
||||
Metadata title "Test stream from disk"
|
||||
Metadata comment "Test comment"
|
||||
</Stream>
|
||||
@end example
|
||||
@end itemize
|
||||
|
||||
@c man end
|
||||
|
||||
@include config.texi
|
||||
@ifset config-all
|
||||
@ifset config-avutil
|
||||
@include utils.texi
|
||||
@end ifset
|
||||
@ifset config-avcodec
|
||||
@include codecs.texi
|
||||
@include bitstream_filters.texi
|
||||
@end ifset
|
||||
@ifset config-avformat
|
||||
@include formats.texi
|
||||
@include protocols.texi
|
||||
@end ifset
|
||||
@ifset config-avdevice
|
||||
@include devices.texi
|
||||
@end ifset
|
||||
@ifset config-swresample
|
||||
@include resampler.texi
|
||||
@end ifset
|
||||
@ifset config-swscale
|
||||
@include scaler.texi
|
||||
@end ifset
|
||||
@ifset config-avfilter
|
||||
@include filters.texi
|
||||
@end ifset
|
||||
@end ifset
|
||||
|
||||
@chapter See Also
|
||||
|
||||
@ifhtml
|
||||
@ifset config-all
|
||||
@url{ffserver.html,ffserver},
|
||||
@end ifset
|
||||
@ifset config-not-all
|
||||
@url{ffserver-all.html,ffserver-all},
|
||||
@end ifset
|
||||
the @file{doc/ffserver.conf} example,
|
||||
The @file{doc/ffserver.conf} example,
|
||||
@url{ffmpeg.html,ffmpeg}, @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe},
|
||||
@url{ffmpeg-utils.html,ffmpeg-utils},
|
||||
@url{ffmpeg-scaler.html,ffmpeg-scaler},
|
||||
@@ -899,13 +263,7 @@ the @file{doc/ffserver.conf} example,
|
||||
@end ifhtml
|
||||
|
||||
@ifnothtml
|
||||
@ifset config-all
|
||||
ffserver(1),
|
||||
@end ifset
|
||||
@ifset config-not-all
|
||||
ffserver-all(1),
|
||||
@end ifset
|
||||
the @file{doc/ffserver.conf} example, ffmpeg(1), ffplay(1), ffprobe(1),
|
||||
The @file{doc/ffserver.conf} example, ffmpeg(1), ffplay(1), ffprobe(1),
|
||||
ffmpeg-utils(1), ffmpeg-scaler(1), ffmpeg-resampler(1),
|
||||
ffmpeg-codecs(1), ffmpeg-bitstream-filters(1), ffmpeg-formats(1),
|
||||
ffmpeg-devices(1), ffmpeg-protocols(1), ffmpeg-filters(1)
|
||||
|
||||
@@ -1,389 +0,0 @@
|
||||
All the numerical options, if not specified otherwise, accept a string
|
||||
representing a number as input, which may be followed by one of the SI
|
||||
unit prefixes, for example: 'K', 'M', or 'G'.
|
||||
|
||||
If 'i' is appended to the SI unit prefix, the complete prefix will be
|
||||
interpreted as a unit prefix for binary multiples, which are based on
|
||||
powers of 1024 instead of powers of 1000. Appending 'B' to the SI unit
|
||||
prefix multiplies the value by 8. This allows using, for example:
|
||||
'KB', 'MiB', 'G' and 'B' as number suffixes.
|
||||
|
||||
Options which do not take arguments are boolean options, and set the
|
||||
corresponding value to true. They can be set to false by prefixing
|
||||
the option name with "no". For example using "-nofoo"
|
||||
will set the boolean option with name "foo" to false.
|
||||
|
||||
@anchor{Stream specifiers}
|
||||
@section Stream specifiers
|
||||
Some options are applied per-stream, e.g. bitrate or codec. Stream specifiers
|
||||
are used to precisely specify which stream(s) a given option belongs to.
|
||||
|
||||
A stream specifier is a string generally appended to the option name and
|
||||
separated from it by a colon. E.g. @code{-codec:a:1 ac3} contains the
|
||||
@code{a:1} stream specifier, which matches the second audio stream. Therefore, it
|
||||
would select the ac3 codec for the second audio stream.
|
||||
|
||||
A stream specifier can match several streams, so that the option is applied to all
|
||||
of them. E.g. the stream specifier in @code{-b:a 128k} matches all audio
|
||||
streams.
|
||||
|
||||
An empty stream specifier matches all streams. For example, @code{-codec copy}
|
||||
or @code{-codec: copy} would copy all the streams without reencoding.
|
||||
|
||||
Possible forms of stream specifiers are:
|
||||
@table @option
|
||||
@item @var{stream_index}
|
||||
Matches the stream with this index. E.g. @code{-threads:1 4} would set the
|
||||
thread count for the second stream to 4.
|
||||
@item @var{stream_type}[:@var{stream_index}]
|
||||
@var{stream_type} is one of following: 'v' or 'V' for video, 'a' for audio, 's'
|
||||
for subtitle, 'd' for data, and 't' for attachments. 'v' matches all video
|
||||
streams, 'V' only matches video streams which are not attached pictures, video
|
||||
thumbnails or cover arts. If @var{stream_index} is given, then it matches
|
||||
stream number @var{stream_index} of this type. Otherwise, it matches all
|
||||
streams of this type.
|
||||
@item p:@var{program_id}[:@var{stream_index}]
|
||||
If @var{stream_index} is given, then it matches the stream with number @var{stream_index}
|
||||
in the program with the id @var{program_id}. Otherwise, it matches all streams in the
|
||||
program.
|
||||
@item #@var{stream_id} or i:@var{stream_id}
|
||||
Match the stream by stream id (e.g. PID in MPEG-TS container).
|
||||
@item m:@var{key}[:@var{value}]
|
||||
Matches streams with the metadata tag @var{key} having the specified value. If
|
||||
@var{value} is not given, matches streams that contain the given tag with any
|
||||
value.
|
||||
@item u
|
||||
Matches streams with a usable configuration: the codec must be defined and
essential information such as video dimensions or audio sample rate must be present.
|
||||
|
||||
Note that in @command{ffmpeg}, matching by metadata will only work properly for
|
||||
input files.
|
||||
@end table
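As a worked illustration (file names and codec choices are assumed, not taken
from this document), several specifier forms can be combined in a single
command:

@example
ffmpeg -i input.mkv -codec copy -codec:a:1 ac3 -b:a:1 128k output.mkv
@end example

Here the empty specifier in @code{-codec copy} applies to all streams, while
the @code{a:1} specifier narrows the codec and bitrate options to the second
audio stream.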
|
||||
|
||||
@section Generic options
|
||||
|
||||
These options are shared amongst the ff* tools.
|
||||
|
||||
@table @option
|
||||
|
||||
@item -L
|
||||
Show license.
|
||||
|
||||
@item -h, -?, -help, --help [@var{arg}]
|
||||
Show help. An optional parameter may be specified to print help about a specific
|
||||
item. If no argument is specified, only basic (non advanced) tool
|
||||
options are shown.
|
||||
|
||||
Possible values of @var{arg} are:
|
||||
@table @option
|
||||
@item long
|
||||
Print advanced tool options in addition to the basic tool options.
|
||||
|
||||
@item full
|
||||
Print complete list of options, including shared and private options
|
||||
for encoders, decoders, demuxers, muxers, filters, etc.
|
||||
|
||||
@item decoder=@var{decoder_name}
|
||||
Print detailed information about the decoder named @var{decoder_name}. Use the
|
||||
@option{-decoders} option to get a list of all decoders.
|
||||
|
||||
@item encoder=@var{encoder_name}
|
||||
Print detailed information about the encoder named @var{encoder_name}. Use the
|
||||
@option{-encoders} option to get a list of all encoders.
|
||||
|
||||
@item demuxer=@var{demuxer_name}
|
||||
Print detailed information about the demuxer named @var{demuxer_name}. Use the
|
||||
@option{-formats} option to get a list of all demuxers and muxers.
|
||||
|
||||
@item muxer=@var{muxer_name}
|
||||
Print detailed information about the muxer named @var{muxer_name}. Use the
|
||||
@option{-formats} option to get a list of all muxers and demuxers.
|
||||
|
||||
@item filter=@var{filter_name}
|
||||
Print detailed information about the filter name @var{filter_name}. Use the
|
||||
@option{-filters} option to get a list of all filters.
|
||||
@end table
|
||||
|
||||
@item -version
|
||||
Show version.
|
||||
|
||||
@item -formats
|
||||
Show available formats (including devices).
|
||||
|
||||
@item -devices
|
||||
Show available devices.
|
||||
|
||||
@item -codecs
|
||||
Show all codecs known to libavcodec.
|
||||
|
||||
Note that the term 'codec' is used throughout this documentation as a shortcut
|
||||
for what is more correctly called a media bitstream format.
|
||||
|
||||
@item -decoders
|
||||
Show available decoders.
|
||||
|
||||
@item -encoders
|
||||
Show all available encoders.
|
||||
|
||||
@item -bsfs
|
||||
Show available bitstream filters.
|
||||
|
||||
@item -protocols
|
||||
Show available protocols.
|
||||
|
||||
@item -filters
|
||||
Show available libavfilter filters.
|
||||
|
||||
@item -pix_fmts
|
||||
Show available pixel formats.
|
||||
|
||||
@item -sample_fmts
|
||||
Show available sample formats.
|
||||
|
||||
@item -layouts
|
||||
Show channel names and standard channel layouts.
|
||||
|
||||
@item -colors
|
||||
Show recognized color names.
|
||||
|
||||
@item -sources @var{device}[,@var{opt1}=@var{val1}[,@var{opt2}=@var{val2}]...]
|
||||
Show autodetected sources of the input device.
|
||||
Some devices may provide system-dependent source names that cannot be autodetected.
|
||||
The returned list cannot be assumed to be always complete.
|
||||
@example
|
||||
ffmpeg -sources pulse,server=192.168.0.4
|
||||
@end example
|
||||
|
||||
@item -sinks @var{device}[,@var{opt1}=@var{val1}[,@var{opt2}=@var{val2}]...]
|
||||
Show autodetected sinks of the output device.
|
||||
Some devices may provide system-dependent sink names that cannot be autodetected.
|
||||
The returned list cannot be assumed to be always complete.
|
||||
@example
|
||||
ffmpeg -sinks pulse,server=192.168.0.4
|
||||
@end example
|
||||
|
||||
@item -loglevel [repeat+]@var{loglevel} | -v [repeat+]@var{loglevel}
|
||||
Set the logging level used by the library.
|
||||
Adding "repeat+" indicates that repeated log output should not be compressed
|
||||
to the first line and the "Last message repeated n times" line will be
|
||||
omitted. "repeat" can also be used alone.
|
||||
If "repeat" is used alone, and with no prior loglevel set, the default
|
||||
loglevel will be used. If multiple loglevel parameters are given, using
|
||||
'repeat' will not change the loglevel.
|
||||
@var{loglevel} is a string or a number containing one of the following values:
|
||||
@table @samp
|
||||
@item quiet, -8
|
||||
Show nothing at all; be silent.
|
||||
@item panic, 0
|
||||
Only show fatal errors which could lead the process to crash, such as
an assert failure. This is not currently used for anything.
|
||||
@item fatal, 8
|
||||
Only show fatal errors. These are errors after which the process absolutely
cannot continue.
|
||||
@item error, 16
|
||||
Show all errors, including ones which can be recovered from.
|
||||
@item warning, 24
|
||||
Show all warnings and errors. Any message related to possibly
|
||||
incorrect or unexpected events will be shown.
|
||||
@item info, 32
|
||||
Show informative messages during processing. This is in addition to
|
||||
warnings and errors. This is the default value.
|
||||
@item verbose, 40
|
||||
Same as @code{info}, except more verbose.
|
||||
@item debug, 48
|
||||
Show everything, including debugging information.
|
||||
@item trace, 56
|
||||
@end table
|
||||
|
||||
By default the program logs to stderr. If coloring is supported by the
terminal, colors are used to mark errors and warnings. Log coloring
can be disabled by setting the environment variable
@env{AV_LOG_FORCE_NOCOLOR} or @env{NO_COLOR}, or can be forced by setting
the environment variable @env{AV_LOG_FORCE_COLOR}.
|
||||
The use of the environment variable @env{NO_COLOR} is deprecated and
|
||||
will be dropped in a following FFmpeg version.
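For example (input and output names are placeholders), to request debug
verbosity without compressing repeated messages:

@example
ffmpeg -loglevel repeat+debug -i input output
@end example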
|
||||
|
||||
@item -report
|
||||
Dump full command line and console output to a file named
|
||||
@code{@var{program}-@var{YYYYMMDD}-@var{HHMMSS}.log} in the current
|
||||
directory.
|
||||
This file can be useful for bug reports.
|
||||
It also implies @code{-loglevel verbose}.
|
||||
|
||||
Setting the environment variable @env{FFREPORT} to any value has the
|
||||
same effect. If the value is a ':'-separated key=value sequence, these
|
||||
options will affect the report; option values must be escaped if they
|
||||
contain special characters or the options delimiter ':' (see the
|
||||
``Quoting and escaping'' section in the ffmpeg-utils manual).
|
||||
|
||||
The following options are recognized:
|
||||
@table @option
|
||||
@item file
|
||||
set the file name to use for the report; @code{%p} is expanded to the name
|
||||
of the program, @code{%t} is expanded to a timestamp, @code{%%} is expanded
|
||||
to a plain @code{%}
|
||||
@item level
|
||||
set the log verbosity level using a numerical value (see @code{-loglevel}).
|
||||
@end table
|
||||
|
||||
For example, to output a report to a file named @file{ffreport.log}
|
||||
using a log level of @code{32} (alias for log level @code{info}):
|
||||
|
||||
@example
|
||||
FFREPORT=file=ffreport.log:level=32 ffmpeg -i input output
|
||||
@end example
|
||||
|
||||
Errors in parsing the environment variable are not fatal, and will not
|
||||
appear in the report.
|
||||
|
||||
@item -hide_banner
|
||||
Suppress printing banner.
|
||||
|
||||
All FFmpeg tools will normally show a copyright notice, build options
|
||||
and library versions. This option can be used to suppress printing
|
||||
this information.
|
||||
|
||||
@item -cpuflags flags (@emph{global})
|
||||
Allows setting and clearing cpu flags. This option is intended
|
||||
for testing. Do not use it unless you know what you're doing.
|
||||
@example
|
||||
ffmpeg -cpuflags -sse+mmx ...
|
||||
ffmpeg -cpuflags mmx ...
|
||||
ffmpeg -cpuflags 0 ...
|
||||
@end example
|
||||
Possible flags for this option are:
|
||||
@table @samp
|
||||
@item x86
|
||||
@table @samp
|
||||
@item mmx
|
||||
@item mmxext
|
||||
@item sse
|
||||
@item sse2
|
||||
@item sse2slow
|
||||
@item sse3
|
||||
@item sse3slow
|
||||
@item ssse3
|
||||
@item atom
|
||||
@item sse4.1
|
||||
@item sse4.2
|
||||
@item avx
|
||||
@item avx2
|
||||
@item xop
|
||||
@item fma3
|
||||
@item fma4
|
||||
@item 3dnow
|
||||
@item 3dnowext
|
||||
@item bmi1
|
||||
@item bmi2
|
||||
@item cmov
|
||||
@end table
|
||||
@item ARM
|
||||
@table @samp
|
||||
@item armv5te
|
||||
@item armv6
|
||||
@item armv6t2
|
||||
@item vfp
|
||||
@item vfpv3
|
||||
@item neon
|
||||
@item setend
|
||||
@end table
|
||||
@item AArch64
|
||||
@table @samp
|
||||
@item armv8
|
||||
@item vfp
|
||||
@item neon
|
||||
@end table
|
||||
@item PowerPC
|
||||
@table @samp
|
||||
@item altivec
|
||||
@end table
|
||||
@item Specific Processors
|
||||
@table @samp
|
||||
@item pentium2
|
||||
@item pentium3
|
||||
@item pentium4
|
||||
@item k6
|
||||
@item k62
|
||||
@item athlon
|
||||
@item athlonxp
|
||||
@item k8
|
||||
@end table
|
||||
@end table
|
||||
|
||||
@item -opencl_bench
|
||||
This option is used to benchmark all available OpenCL devices and print the
|
||||
results. This option is only available when FFmpeg has been compiled with
|
||||
@code{--enable-opencl}.
|
||||
|
||||
When FFmpeg is configured with @code{--enable-opencl}, the options for the
|
||||
global OpenCL context are set via @option{-opencl_options}. See the
|
||||
"OpenCL Options" section in the ffmpeg-utils manual for the complete list of
|
||||
supported options. Amongst others, these options include the ability to select
|
||||
a specific platform and device to run the OpenCL code on. By default, FFmpeg
|
||||
will run on the first device of the first platform. While the options for the
|
||||
global OpenCL context provide flexibility to the user in selecting the OpenCL
|
||||
device of their choice, most users would probably want to select the fastest
|
||||
OpenCL device for their system.
|
||||
|
||||
This option assists the selection of the most efficient configuration by
|
||||
identifying the appropriate device for the user's system. The built-in
|
||||
benchmark is run on all the OpenCL devices and the performance is measured for
|
||||
each device. The devices in the results list are sorted based on their
|
||||
performance with the fastest device listed first. The user can subsequently
|
||||
invoke @command{ffmpeg} using the device deemed most appropriate via
|
||||
@option{-opencl_options} to obtain the best performance for the OpenCL
|
||||
accelerated code.
|
||||
|
||||
Typical usage to use the fastest OpenCL device involves the following steps.
|
||||
|
||||
Run the command:
|
||||
@example
|
||||
ffmpeg -opencl_bench
|
||||
@end example
|
||||
Note down the platform ID (@var{pidx}) and device ID (@var{didx}) of the first,
i.e. fastest, device in the list.
|
||||
Select the platform and device using the command:
|
||||
@example
|
||||
ffmpeg -opencl_options platform_idx=@var{pidx}:device_idx=@var{didx} ...
|
||||
@end example
|
||||
|
||||
@item -opencl_options options (@emph{global})
|
||||
Set OpenCL environment options. This option is only available when
|
||||
FFmpeg has been compiled with @code{--enable-opencl}.
|
||||
|
||||
@var{options} must be a list of @var{key}=@var{value} option pairs
|
||||
separated by ':'. See the ``OpenCL Options'' section in the
|
||||
ffmpeg-utils manual for the list of supported options.
|
||||
@end table
|
||||
|
||||
@section AVOptions
|
||||
|
||||
These options are provided directly by the libavformat, libavdevice and
|
||||
libavcodec libraries. To see the list of available AVOptions, use the
|
||||
@option{-help} option. They are separated into two categories:
|
||||
@table @option
|
||||
@item generic
|
||||
These options can be set for any container, codec or device. Generic options
|
||||
are listed under AVFormatContext options for containers/devices and under
|
||||
AVCodecContext options for codecs.
|
||||
@item private
|
||||
These options are specific to the given container, device or codec. Private
|
||||
options are listed under their corresponding containers/devices/codecs.
|
||||
@end table
|
||||
|
||||
For example to write an ID3v2.3 header instead of a default ID3v2.4 to
|
||||
an MP3 file, use the @option{id3v2_version} private option of the MP3
|
||||
muxer:
|
||||
@example
|
||||
ffmpeg -i input.flac -id3v2_version 3 out.mp3
|
||||
@end example
|
||||
|
||||
All codec AVOptions are per-stream, and thus a stream specifier
|
||||
should be attached to them.
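A short sketch (the stream layout and value are assumed for illustration):
setting the bitrate AVOption only for the second audio stream of the output:

@example
ffmpeg -i input.mkv -b:a:1 96k output.mkv
@end example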
|
||||
|
||||
Note: the @option{-nooption} syntax cannot be used for boolean
|
||||
AVOptions, use @option{-option 0}/@option{-option 1}.
|
||||
|
||||
Note: the old undocumented way of specifying per-stream AVOptions by
|
||||
prepending v/a/s to the options name is now obsolete and will be
|
||||
removed soon.
|
||||
@@ -29,11 +29,6 @@ Format negotiation
|
||||
same format amongst a supported list, all it has to do is use a reference
|
||||
to the same list of formats.
|
||||
|
||||
query_formats can leave some formats unset and return AVERROR(EAGAIN) to
|
||||
cause the negotiation mechanism to try again later. That can be used by
|
||||
filters with complex requirements to use the format negotiated on one link
|
||||
to set the formats supported on another.
|
||||
|
||||
|
||||
Buffer references ownership and permissions
|
||||
===========================================
|
||||
@@ -98,7 +93,7 @@ Buffer references ownership and permissions
|
||||
The AVFilterLink structure has a few AVFilterBufferRef fields. The
|
||||
cur_buf and out_buf were used with the deprecated
|
||||
start_frame/draw_slice/end_frame API and should no longer be used.
|
||||
src_buf and partial_buf are used by libavfilter internally
|
||||
src_buf, cur_buf_copy and partial_buf are used by libavfilter internally
|
||||
and must not be accessed by filters.
|
||||
|
||||
Reference permissions
|
||||
@@ -166,7 +161,7 @@ Buffer references ownership and permissions
|
||||
WRITE permission.
|
||||
|
||||
* Filters that read their input to produce a new frame on output (like
|
||||
scale) need the READ permission on input and must request a buffer
|
||||
scale) need the READ permission on input and and must request a buffer
|
||||
with the WRITE permission.
|
||||
|
||||
* Filters that intend to keep a reference after the filtering process
|
||||
@@ -204,7 +199,7 @@ Frame scheduling
|
||||
filter; these buffered frames must be flushed immediately if a new input
|
||||
produces new output.
|
||||
|
||||
(Example: frame rate-doubling filter: filter_frame must (1) flush the
|
||||
(Example: framerate-doubling filter: filter_frame must (1) flush the
|
||||
second copy of the previous frame, if it is still there, (2) push the
|
||||
first copy of the incoming frame, (3) keep the second copy for later.)
|
||||
|
||||
|
||||
10417 doc/filters.texi (file diff suppressed because it is too large)
249 doc/formats.texi
@@ -1,249 +0,0 @@
|
||||
@chapter Format Options
|
||||
@c man begin FORMAT OPTIONS
|
||||
|
||||
The libavformat library provides some generic global options, which
|
||||
can be set on all the muxers and demuxers. In addition each muxer or
|
||||
demuxer may support so-called private options, which are specific for
|
||||
that component.
|
||||
|
||||
Options may be set by specifying -@var{option} @var{value} in the
|
||||
FFmpeg tools, or by setting the value explicitly in the
|
||||
@code{AVFormatContext} options or using the @file{libavutil/opt.h} API
|
||||
for programmatic use.
|
||||
|
||||
The list of supported options follows:
|
||||
|
||||
@table @option
|
||||
@item avioflags @var{flags} (@emph{input/output})
|
||||
Possible values:
|
||||
@table @samp
|
||||
@item direct
|
||||
Reduce buffering.
|
||||
@end table
|
||||
|
||||
@item probesize @var{integer} (@emph{input})
|
||||
Set probing size in bytes, i.e. the size of the data to analyze to get
|
||||
stream information. A higher value will enable detecting more
|
||||
information in case it is dispersed into the stream, but will increase
|
||||
latency. Must be an integer not less than 32. It is 5000000 by default.
|
||||
|
||||
@item packetsize @var{integer} (@emph{output})
|
||||
Set packet size.
|
||||
|
||||
@item fflags @var{flags} (@emph{input/output})
|
||||
Set format flags.
|
||||
|
||||
Possible values:
|
||||
@table @samp
|
||||
@item ignidx
|
||||
Ignore index.
|
||||
@item fastseek
|
||||
Enable fast, but inaccurate seeks for some formats.
|
||||
@item genpts
|
||||
Generate PTS.
|
||||
@item nofillin
|
||||
Do not fill in missing values that can be exactly calculated.
|
||||
@item noparse
|
||||
Disable AVParsers; this needs @code{+nofillin} too.
|
||||
@item igndts
|
||||
Ignore DTS.
|
||||
@item discardcorrupt
|
||||
Discard corrupted frames.
|
||||
@item sortdts
|
||||
Try to interleave output packets by DTS.
|
||||
@item keepside
|
||||
Do not merge side data.
|
||||
@item latm
|
||||
Enable RTP MP4A-LATM payload.
|
||||
@item nobuffer
|
||||
Reduce the latency introduced by optional buffering.
|
||||
@item bitexact
|
||||
Only write platform-, build- and time-independent data.
|
||||
This ensures that file and data checksums are reproducible and match between
|
||||
platforms. Its primary use is for regression testing.
|
||||
@end table
|
||||
|
||||
@item seek2any @var{integer} (@emph{input})
|
||||
Allow seeking to non-keyframes on demuxer level when supported if set to 1.
|
||||
Default is 0.
|
||||
|
||||
@item analyzeduration @var{integer} (@emph{input})
|
||||
Specify how many microseconds are analyzed to probe the input. A
|
||||
higher value will enable detecting more accurate information, but will
|
||||
increase latency. It defaults to 5,000,000 microseconds = 5 seconds.
|
||||
|
||||
@item cryptokey @var{hexadecimal string} (@emph{input})
|
||||
Set decryption key.
|
||||
|
||||
@item indexmem @var{integer} (@emph{input})
|
||||
Set max memory used for timestamp index (per stream).
|
||||
|
||||
@item rtbufsize @var{integer} (@emph{input})
|
||||
Set max memory used for buffering real-time frames.
|
||||
|
||||
@item fdebug @var{flags} (@emph{input/output})
|
||||
Print specific debug info.
|
||||
|
||||
Possible values:
|
||||
@table @samp
|
||||
@item ts
|
||||
@end table
|
||||
|
||||
@item max_delay @var{integer} (@emph{input/output})
|
||||
Set maximum muxing or demuxing delay in microseconds.
|
||||
|
||||
@item fpsprobesize @var{integer} (@emph{input})
|
||||
Set number of frames used to probe fps.
|
||||
|
||||
@item audio_preload @var{integer} (@emph{output})
|
||||
Set microseconds by which audio packets should be interleaved earlier.
|
||||
|
||||
@item chunk_duration @var{integer} (@emph{output})
|
||||
Set microseconds for each chunk.
|
||||
|
||||
@item chunk_size @var{integer} (@emph{output})
|
||||
Set size in bytes for each chunk.
|
||||
|
||||
@item err_detect, f_err_detect @var{flags} (@emph{input})
|
||||
Set error detection flags. @code{f_err_detect} is deprecated and
|
||||
should be used only via the @command{ffmpeg} tool.
|
||||
|
||||
Possible values:
|
||||
@table @samp
|
||||
@item crccheck
|
||||
Verify embedded CRCs.
|
||||
@item bitstream
|
||||
Detect bitstream specification deviations.
|
||||
@item buffer
|
||||
Detect improper bitstream length.
|
||||
@item explode
|
||||
Abort decoding on minor error detection.
|
||||
@item careful
|
||||
Consider things that violate the spec and have not been seen in the
|
||||
wild as errors.
|
||||
@item compliant
|
||||
Consider all spec non compliancies as errors.
|
||||
@item aggressive
|
||||
Consider things that a sane encoder should not do as an error.
|
||||
@end table
|
||||
|
||||
@item max_interleave_delta @var{integer} (@emph{output})
|
||||
Set maximum buffering duration for interleaving. The duration is
|
||||
expressed in microseconds, and defaults to 1000000 (1 second).
|
||||
|
||||
To ensure all the streams are interleaved correctly, libavformat will
|
||||
wait until it has at least one packet for each stream before actually
|
||||
writing any packets to the output file. When some streams are
|
||||
"sparse" (i.e. there are large gaps between successive packets), this
|
||||
can result in excessive buffering.
|
||||
|
||||
This field specifies the maximum difference between the timestamps of the
|
||||
first and the last packet in the muxing queue, above which libavformat
|
||||
will output a packet regardless of whether it has queued a packet for all
|
||||
the streams.
|
||||
|
||||
If set to 0, libavformat will continue buffering packets until it has
|
||||
a packet for each stream, regardless of the maximum timestamp
|
||||
difference between the buffered packets.
|
||||
|
||||
@item use_wallclock_as_timestamps @var{integer} (@emph{input})
|
||||
Use wallclock as timestamps.
|
||||
|
||||
@item avoid_negative_ts @var{integer} (@emph{output})
|
||||
|
||||
Possible values:
|
||||
@table @samp
|
||||
@item make_non_negative
|
||||
Shift timestamps to make them non-negative.
|
||||
Also note that this affects only leading negative timestamps, and not
|
||||
non-monotonic negative timestamps.
|
||||
@item make_zero
|
||||
Shift timestamps so that the first timestamp is 0.
|
||||
@item auto (default)
|
||||
Enables shifting when required by the target format.
|
||||
@item disabled
|
||||
Disables shifting of timestamps.
|
||||
@end table
|
||||
|
||||
When shifting is enabled, all output timestamps are shifted by the
|
||||
same amount. Audio, video, and subtitles desynching and relative
|
||||
timestamp differences are preserved compared to how they would have
|
||||
been without shifting.
|
||||
|
||||
@item skip_initial_bytes @var{integer} (@emph{input})
|
||||
Set number of bytes to skip before reading header and frames.
Default is 0.
|
||||
|
||||
@item correct_ts_overflow @var{integer} (@emph{input})
|
||||
Correct single timestamp overflows if set to 1. Default is 1.
|
||||
|
||||
@item flush_packets @var{integer} (@emph{output})
|
||||
Flush the underlying I/O stream after each packet. Default 1 enables it, and
|
||||
has the effect of reducing the latency; 0 disables it and may slightly
|
||||
increase performance in some cases.

@item output_ts_offset @var{offset} (@emph{output})
Set the output time offset.

@var{offset} must be a time duration specification,
see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.

The offset is added by the muxer to the output timestamps.

Specifying a positive offset means that the corresponding streams are
delayed by the time duration specified in @var{offset}. Default value
is @code{0} (meaning that no offset is applied).
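
For instance, to delay all output timestamps by five seconds while stream
copying (the file names are illustrative):
@example
ffmpeg -i input.ts -c copy -output_ts_offset 5 output.ts
@end example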

@item format_whitelist @var{list} (@emph{input})
"," separated list of allowed demuxers. By default all are allowed.

@item dump_separator @var{string} (@emph{input})
Separator used to separate the fields printed on the command line about the
stream parameters.
For example, to separate the fields with newlines and indentation:
@example
ffprobe -dump_separator "
                          "  -i ~/videos/matrixbench_mpeg2.mpg
@end example
@end table

@c man end FORMAT OPTIONS

@anchor{Format stream specifiers}
@section Format stream specifiers

Format stream specifiers allow selection of one or more streams that
match specific properties.

Possible forms of stream specifiers are:
@table @option
@item @var{stream_index}
Matches the stream with this index.

@item @var{stream_type}[:@var{stream_index}]
@var{stream_type} is one of the following: 'v' for video, 'a' for audio,
's' for subtitle, 'd' for data, and 't' for attachments. If
@var{stream_index} is given, then it matches the stream number
@var{stream_index} of this type. Otherwise, it matches all streams of
this type.

@item p:@var{program_id}[:@var{stream_index}]
If @var{stream_index} is given, then it matches the stream with number
@var{stream_index} in the program with the id
@var{program_id}. Otherwise, it matches all streams in the program.

@item #@var{stream_id}
Matches the stream by a format-specific ID.
@end table

The exact semantics of stream specifiers is defined by the
@code{avformat_match_stream_specifier()} function declared in the
@file{libavformat/avformat.h} header.
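
As an illustrative sketch, specifiers of this form are what stream selection
options such as @command{ffmpeg}'s @option{-map} accept; for example, copying
only the second audio stream of the first input (file names are placeholders):
@example
ffmpeg -i input.mkv -map 0:a:1 -c copy audio2.mka
@end example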

@ifclear config-writeonly
@include demuxers.texi
@end ifclear
@ifclear config-readonly
@include muxers.texi
@end ifclear
@include metadata.texi
221 doc/general.texi
@@ -1,5 +1,4 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle General Documentation
|
||||
@titlepage
|
||||
@@ -25,7 +24,7 @@ instructions. To enable using OpenJPEG in FFmpeg, pass @code{--enable-libopenjp
|
||||
@file{./configure}.
|
||||
|
||||
|
||||
@section OpenCORE, VisualOn, and Fraunhofer libraries
|
||||
@section OpenCORE and VisualOn libraries
|
||||
|
||||
Spun off Google Android sources, OpenCore, VisualOn and Fraunhofer
|
||||
libraries provide encoders for a number of audio codecs.
|
||||
@@ -33,14 +32,9 @@ libraries provide encoders for a number of audio codecs.
|
||||
@float NOTE
|
||||
OpenCORE and VisualOn libraries are under the Apache License 2.0
|
||||
(see @url{http://www.apache.org/licenses/LICENSE-2.0} for details), which is
|
||||
incompatible to the LGPL version 2.1 and GPL version 2. You have to
|
||||
incompatible with the LGPL version 2.1 and GPL version 2. You have to
|
||||
upgrade FFmpeg's license to LGPL version 3 (or if you have enabled
|
||||
GPL components, GPL version 3) by passing @code{--enable-version3} to configure in
|
||||
order to use it.
|
||||
|
||||
The Fraunhofer AAC library is licensed under a license incompatible to the GPL
|
||||
and is not known to be compatible to the LGPL. Therefore, you have to pass
|
||||
@code{--enable-nonfree} to configure to use it.
|
||||
GPL components, GPL version 3) to use it.
|
||||
@end float
|
||||
|
||||
@subsection OpenCORE AMR
|
||||
@@ -95,28 +89,12 @@ Then pass @code{--enable-libtwolame} to configure to enable it.
|
||||
|
||||
@section libvpx
|
||||
|
||||
FFmpeg can make use of the libvpx library for VP8/VP9 encoding.
|
||||
FFmpeg can make use of the libvpx library for VP8 encoding.
|
||||
|
||||
Go to @url{http://www.webmproject.org/} and follow the instructions for
|
||||
installing the library. Then pass @code{--enable-libvpx} to configure to
|
||||
enable it.
|
||||
|
||||
@section libwavpack
|
||||
|
||||
FFmpeg can make use of the libwavpack library for WavPack encoding.
|
||||
|
||||
Go to @url{http://www.wavpack.com/} and follow the instructions for
|
||||
installing the library. Then pass @code{--enable-libwavpack} to configure to
|
||||
enable it.
|
||||
|
||||
@section OpenH264
|
||||
|
||||
FFmpeg can make use of the OpenH264 library for H.264 encoding.
|
||||
|
||||
Go to @url{http://www.openh264.org/} and follow the instructions for
|
||||
installing the library. Then pass @code{--enable-libopenh264} to configure to
|
||||
enable it.
|
||||
|
||||
@section x264
|
||||
|
||||
FFmpeg can make use of the x264 library for H.264 encoding.
|
||||
@@ -131,28 +109,6 @@ x264 is under the GNU Public License Version 2 or later
|
||||
details), you must upgrade FFmpeg's license to GPL in order to use it.
|
||||
@end float
|
||||
|
||||
@section x265
|
||||
|
||||
FFmpeg can make use of the x265 library for HEVC encoding.
|
||||
|
||||
Go to @url{http://x265.org/developers.html} and follow the instructions
|
||||
for installing the library. Then pass @code{--enable-libx265} to configure
|
||||
to enable it.
|
||||
|
||||
@float NOTE
|
||||
x265 is under the GNU Public License Version 2 or later
|
||||
(see @url{http://www.gnu.org/licenses/old-licenses/gpl-2.0.html} for
|
||||
details), you must upgrade FFmpeg's license to GPL in order to use it.
|
||||
@end float
|
||||
|
||||
@section kvazaar
|
||||
|
||||
FFmpeg can make use of the kvazaar library for HEVC encoding.
|
||||
|
||||
Go to @url{https://github.com/ultravideo/kvazaar} and follow the
|
||||
instructions for installing the library. Then pass
|
||||
@code{--enable-libkvazaar} to configure to enable it.
|
||||
|
||||
@section libilbc
|
||||
|
||||
iLBC is a narrowband speech codec that has been made freely available
|
||||
@@ -160,56 +116,10 @@ by Google as part of the WebRTC project. libilbc is a packaging friendly
|
||||
copy of the iLBC codec. FFmpeg can make use of the libilbc library for
|
||||
iLBC encoding and decoding.
|
||||
|
||||
Go to @url{https://github.com/TimothyGu/libilbc} and follow the instructions for
|
||||
Go to @url{https://github.com/dekkers/libilbc} and follow the instructions for
|
||||
installing the library. Then pass @code{--enable-libilbc} to configure to
|
||||
enable it.
|
||||
|
||||
@section libzvbi
|
||||
|
||||
libzvbi is a VBI decoding library which can be used by FFmpeg to decode DVB
|
||||
teletext pages and DVB teletext subtitles.
|
||||
|
||||
Go to @url{http://sourceforge.net/projects/zapping/} and follow the instructions for
|
||||
installing the library. Then pass @code{--enable-libzvbi} to configure to
|
||||
enable it.
|
||||
|
||||
@float NOTE
|
||||
libzvbi is licensed under the GNU General Public License Version 2 or later
|
||||
(see @url{http://www.gnu.org/licenses/old-licenses/gpl-2.0.html} for details),
|
||||
you must upgrade FFmpeg's license to GPL in order to use it.
|
||||
@end float
|
||||
|
||||
@section AviSynth
|
||||
|
||||
FFmpeg can read AviSynth scripts as input. To enable support, pass
|
||||
@code{--enable-avisynth} to configure. The correct headers are
|
||||
included in compat/avisynth/, which allows the user to enable support
|
||||
without needing to search for these headers themselves.
|
||||
|
||||
For Windows, supported AviSynth variants are
|
||||
@url{http://avisynth.nl, AviSynth 2.6 RC1 or higher} for 32-bit builds and
|
||||
@url{http://avs-plus.net, AviSynth+ r1718 or higher} for 32-bit and 64-bit builds.
|
||||
|
||||
For Linux and OS X, the supported AviSynth variant is
|
||||
@url{https://github.com/avxsynth/avxsynth, AvxSynth}.
|
||||
|
||||
@float NOTE
|
||||
AviSynth and AvxSynth are loaded dynamically. Distributors can build FFmpeg
|
||||
with @code{--enable-avisynth}, and the binaries will work regardless of the
|
||||
end user having AviSynth or AvxSynth installed - they'll only need to be
|
||||
installed to use AviSynth scripts (obviously).
|
||||
@end float
|
||||
|
||||
@section Intel QuickSync Video
|
||||
|
||||
FFmpeg can use Intel QuickSync Video (QSV) for accelerated encoding and decoding
|
||||
of multiple codecs. To use QSV, FFmpeg must be linked against the @code{libmfx}
|
||||
dispatcher, which loads the actual decoding libraries.
|
||||
|
||||
The dispatcher is open source and can be downloaded from
|
||||
@url{https://github.com/lu-zero/mfx_dispatch.git}. FFmpeg needs to be configured
|
||||
with the @code{--enable-libmfx} option and @code{pkg-config} needs to be able to
|
||||
locate the dispatcher's @code{.pc} files.
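
A hedged sketch of such a configuration, assuming the dispatcher's
@code{libmfx.pc} has been installed under a custom prefix (the path is
illustrative):
@example
PKG_CONFIG_PATH=/opt/mfx/lib/pkgconfig ./configure --enable-libmfx
@end example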
|
||||
|
||||
|
||||
@chapter Supported File Formats, Codecs or Features
|
||||
@@ -226,10 +136,6 @@ library:
|
||||
@item 4xm @tab @tab X
|
||||
@tab 4X Technologies format, used in some games.
|
||||
@item 8088flex TMV @tab @tab X
|
||||
@item AAX @tab @tab X
|
||||
@tab Audible Enhanced Audio format, used in audiobooks.
|
||||
@item AA @tab @tab X
|
||||
@tab Audible Format 2, 3, and 4, used in audiobooks.
|
||||
@item ACT Voice @tab @tab X
|
||||
@tab contains G.729 audio
|
||||
@item Adobe Filmstrip @tab X @tab X
|
||||
@@ -237,20 +143,17 @@ library:
|
||||
@item American Laser Games MM @tab @tab X
|
||||
@tab Multimedia format used in games like Mad Dog McCree.
|
||||
@item 3GPP AMR @tab X @tab X
|
||||
@item Amazing Studio Packed Animation File @tab @tab X
|
||||
@item Amazing Studio Packed Animation File @tab @tab X
|
||||
@tab Multimedia format used in game Heart Of Darkness.
|
||||
@item Apple HTTP Live Streaming @tab @tab X
|
||||
@item Artworx Data Format @tab @tab X
|
||||
@item ADP @tab @tab X
|
||||
@tab Audio format used on the Nintendo Gamecube.
|
||||
@item AFC @tab @tab X
|
||||
@tab Audio format used on the Nintendo Gamecube.
|
||||
@item APNG @tab X @tab X
|
||||
@item ASF @tab X @tab X
|
||||
@item AST @tab X @tab X
|
||||
@tab Audio format used on the Nintendo Wii.
|
||||
@item AVI @tab X @tab X
|
||||
@item AviSynth @tab @tab X
|
||||
@item AVISynth @tab @tab X
|
||||
@item AVR @tab @tab X
|
||||
@tab Audio format used on Mac.
|
||||
@item AVS @tab @tab X
|
||||
@@ -266,8 +169,6 @@ library:
|
||||
@tab Used in Z and Z95 games.
|
||||
@item Brute Force & Ignorance @tab @tab X
|
||||
@tab Used in the game Flash Traffic: City of Angels.
|
||||
@item BFSTM @tab @tab X
|
||||
@tab Audio format used on the Nintendo WiiU (based on BRSTM).
|
||||
@item BRSTM @tab @tab X
|
||||
@tab Audio format used on the Nintendo Wii.
|
||||
@item BWF @tab X @tab X
|
||||
@@ -278,13 +179,8 @@ library:
|
||||
@tab Used in the game Cyberia from Interplay.
|
||||
@item Delphine Software International CIN @tab @tab X
|
||||
@tab Multimedia format used by Delphine Software games.
|
||||
@item Digital Speech Standard (DSS) @tab @tab X
|
||||
@item Canopus HQ @tab @tab X
|
||||
@item Canopus HQA @tab @tab X
|
||||
@item Canopus HQX @tab @tab X
|
||||
@item CD+G @tab @tab X
|
||||
@tab Video format used by CD+G karaoke disks
|
||||
@item Phantom Cine @tab @tab X
|
||||
@item Commodore CDXL @tab @tab X
|
||||
@tab Amiga CD video format
|
||||
@item Core Audio Format @tab X @tab X
|
||||
@@ -298,8 +194,6 @@ library:
|
||||
@item Deluxe Paint Animation @tab @tab X
|
||||
@item DFA @tab @tab X
|
||||
@tab This format is used in Chronomaster game
|
||||
@item DirectDraw Surface @tab @tab X
|
||||
@item DSD Stream File (DSF) @tab @tab X
|
||||
@item DV video @tab X @tab X
|
||||
@item DXA @tab @tab X
|
||||
@tab This format is used in the non-Windows version of the Feeble Files
|
||||
@@ -326,8 +220,6 @@ library:
|
||||
@item GXF @tab X @tab X
|
||||
@tab General eXchange Format SMPTE 360M, used by Thomson Grass Valley
|
||||
playout servers.
|
||||
@item HNM @tab @tab X
|
||||
@tab Only version 4 supported, used in some games from Cryo Interactive
|
||||
@item iCEDraw File @tab @tab X
|
||||
@item ICO @tab X @tab X
|
||||
@tab Microsoft Windows ICO
|
||||
@@ -350,11 +242,9 @@ library:
|
||||
@tab Used by Linux Media Labs MPEG-4 PCI boards
|
||||
@item LOAS @tab @tab X
|
||||
@tab contains LATM multiplexed AAC audio
|
||||
@item LRC @tab X @tab X
|
||||
@item LVF @tab @tab X
|
||||
@item LXF @tab @tab X
|
||||
@tab VR native stream format, used by Leitch/Harris' video servers.
|
||||
@item Magic Lantern Video (MLV) @tab @tab X
|
||||
@item Matroska @tab X @tab X
|
||||
@item Matroska audio @tab X @tab
|
||||
@item FFmpeg metadata @tab X @tab X
|
||||
@@ -380,8 +270,6 @@ library:
|
||||
@tab also known as DVB Transport Stream
|
||||
@item MPEG-4 @tab X @tab X
|
||||
@tab MPEG-4 is a variant of QuickTime.
|
||||
@item Mirillis FIC video @tab @tab X
|
||||
@tab No cursor rendering.
|
||||
@item MIME multipart JPEG @tab X @tab
|
||||
@item MSN TCP webcam @tab @tab X
|
||||
@tab Used by MSN Messenger webcam streams.
|
||||
@@ -421,7 +309,6 @@ library:
|
||||
@item raw H.261 @tab X @tab X
|
||||
@item raw H.263 @tab X @tab X
|
||||
@item raw H.264 @tab X @tab X
|
||||
@item raw HEVC @tab X @tab X
|
||||
@item raw Ingenient MJPEG @tab @tab X
|
||||
@item raw MJPEG @tab X @tab X
|
||||
@item raw MLP @tab @tab X
|
||||
@@ -435,7 +322,7 @@ library:
|
||||
@item raw Shorten @tab @tab X
|
||||
@item raw TAK @tab @tab X
|
||||
@item raw TrueHD @tab X @tab X
|
||||
@item raw VC-1 @tab X @tab X
|
||||
@item raw VC-1 @tab @tab X
|
||||
@item raw PCM A-law @tab X @tab X
|
||||
@item raw PCM mu-law @tab X @tab X
|
||||
@item raw PCM signed 8 bit @tab X @tab X
|
||||
@@ -461,13 +348,11 @@ library:
|
||||
@tab File format used by RED Digital cameras, contains JPEG 2000 frames and PCM audio.
|
||||
@item RealMedia @tab X @tab X
|
||||
@item Redirector @tab @tab X
|
||||
@item RedSpark @tab @tab X
|
||||
@item Renderware TeXture Dictionary @tab @tab X
|
||||
@item RL2 @tab @tab X
|
||||
@tab Audio and video format used in some games by Entertainment Software Partners.
|
||||
@item RPL/ARMovie @tab @tab X
|
||||
@item Lego Mindstorms RSO @tab X @tab X
|
||||
@item RSD @tab @tab X
|
||||
@item RTMP @tab X @tab X
|
||||
@tab Output is performed by publishing stream to RTMP server
|
||||
@item RTP @tab X @tab X
|
||||
@@ -494,8 +379,6 @@ library:
|
||||
@item Sony Wave64 (W64) @tab X @tab X
|
||||
@item SoX native format @tab X @tab X
|
||||
@item SUN AU format @tab X @tab X
|
||||
@item SUP raw PGS subtitles @tab @tab X
|
||||
@item TDSC @tab @tab X
|
||||
@item Text files @tab @tab X
|
||||
@item THP @tab @tab X
|
||||
@tab Used on the Nintendo GameCube.
|
||||
@@ -503,7 +386,6 @@ library:
|
||||
@tab Tiertex .seq files used in the DOS CD-ROM version of the game Flashback.
|
||||
@item True Audio @tab @tab X
|
||||
@item VC-1 test bitstream @tab X @tab X
|
||||
@item Vidvox Hap @tab X @tab X
|
||||
@item Vivo @tab @tab X
|
||||
@item WAV @tab X @tab X
|
||||
@item WavPack @tab X @tab X
|
||||
@@ -535,14 +417,12 @@ following image formats are supported:
|
||||
@item Name @tab Encoding @tab Decoding @tab Comments
|
||||
@item .Y.U.V @tab X @tab X
|
||||
@tab one raw file per component
|
||||
@item Alias PIX @tab X @tab X
|
||||
@tab Alias/Wavefront PIX image format
|
||||
@item animated GIF @tab X @tab X
|
||||
@item APNG @tab X @tab X
|
||||
@tab Only uncompressed GIFs are generated.
|
||||
@item BMP @tab X @tab X
|
||||
@tab Microsoft BMP image
|
||||
@item BRender PIX @tab @tab X
|
||||
@tab Argonaut BRender 3D engine image format.
|
||||
@item PIX @tab @tab X
|
||||
@tab PIX is an image format used in the Argonaut BRender engine.
|
||||
@item DPX @tab X @tab X
|
||||
@tab Digital Picture Exchange
|
||||
@item EXR @tab @tab X
|
||||
@@ -578,8 +458,6 @@ following image formats are supported:
|
||||
@tab YUV, JPEG and some extension is not supported yet.
|
||||
@item Truevision Targa @tab X @tab X
|
||||
@tab Targa (.TGA) image format
|
||||
@item WebP @tab E @tab X
|
||||
@tab WebP image format, encoding supported through external library libwebp
|
||||
@item XBM @tab X @tab X
|
||||
@tab X BitMap image format
|
||||
@item XFace @tab X @tab X
|
||||
@@ -607,7 +485,6 @@ following image formats are supported:
|
||||
@item AMV Video @tab X @tab X
|
||||
@tab Used in Chinese MP3 players.
|
||||
@item ANSI/ASCII art @tab @tab X
|
||||
@item Apple Intermediate Codec @tab @tab X
|
||||
@item Apple MJPEG-B @tab @tab X
|
||||
@item Apple ProRes @tab X @tab X
|
||||
@item Apple QuickDraw @tab @tab X
|
||||
@@ -690,18 +567,12 @@ following image formats are supported:
|
||||
@tab Sorenson H.263 used in Flash
|
||||
@item Forward Uncompressed @tab @tab X
|
||||
@item Fraps @tab @tab X
|
||||
@item Go2Meeting @tab @tab X
|
||||
@tab fourcc: G2M2, G2M3
|
||||
@item Go2Webinar @tab @tab X
|
||||
@tab fourcc: G2M4
|
||||
@item H.261 @tab X @tab X
|
||||
@item H.263 / H.263-1996 @tab X @tab X
|
||||
@item H.263+ / H.263-1998 / H.263 version 2 @tab X @tab X
|
||||
@item H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 @tab E @tab X
|
||||
@tab encoding supported through external library libx264 and OpenH264
|
||||
@item HEVC @tab X @tab X
|
||||
@tab encoding supported through external library libx265 and libkvazaar
|
||||
@item HNM version 4 @tab @tab X
|
||||
@tab encoding supported through external library libx264
|
||||
@item H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (VDPAU acceleration) @tab E @tab X
|
||||
@item HuffYUV @tab X @tab X
|
||||
@item HuffYUV FFmpeg variant @tab X @tab X
|
||||
@item IBM Ultimotion @tab @tab X
|
||||
@@ -732,8 +603,8 @@ following image formats are supported:
|
||||
@item LCL (LossLess Codec Library) MSZH @tab @tab X
|
||||
@item LCL (LossLess Codec Library) ZLIB @tab E @tab E
|
||||
@item LOCO @tab @tab X
|
||||
@item LucasArts SANM/Smush @tab @tab X
|
||||
@tab Used in LucasArts games / SMUSH animations.
|
||||
@item LucasArts Smush @tab @tab X
|
||||
@tab Used in LucasArts games.
|
||||
@item lossless MJPEG @tab X @tab X
|
||||
@item Microsoft ATC Screen @tab @tab X
|
||||
@tab Also known as Microsoft Screen 3.
|
||||
@@ -753,6 +624,8 @@ following image formats are supported:
|
||||
@item Mobotix MxPEG video @tab @tab X
|
||||
@item Motion Pixels video @tab @tab X
|
||||
@item MPEG-1 video @tab X @tab X
|
||||
@item MPEG-1/2 video XvMC (X-Video Motion Compensation) @tab @tab X
|
||||
@item MPEG-1/2 video (VDPAU acceleration) @tab @tab X
|
||||
@item MPEG-2 video @tab X @tab X
|
||||
@item MPEG-4 part 2 @tab X @tab X
|
||||
@tab libxvidcore can be used alternatively for encoding.
|
||||
@@ -768,12 +641,8 @@ following image formats are supported:
|
||||
@tab fourcc: VP50
|
||||
@item On2 VP6 @tab @tab X
|
||||
@tab fourcc: VP60,VP61,VP62
|
||||
@item On2 VP7 @tab @tab X
|
||||
@tab fourcc: VP70,VP71
|
||||
@item VP8 @tab E @tab X
|
||||
@tab fourcc: VP80, encoding supported through external library libvpx
|
||||
@item VP9 @tab E @tab X
|
||||
@tab encoding supported through external library libvpx
|
||||
@item Pinnacle TARGA CineWave YUV16 @tab @tab X
|
||||
@tab fourcc: Y216
|
||||
@item Prores @tab @tab X
|
||||
@@ -799,11 +668,11 @@ following image formats are supported:
|
||||
@tab Texture dictionaries used by the Renderware Engine.
|
||||
@item RL2 video @tab @tab X
|
||||
@tab used in some games by Entertainment Software Partners
|
||||
@item SGI RLE 8-bit @tab @tab X
|
||||
@item Sierra VMD video @tab @tab X
|
||||
@tab Used in Sierra VMD files.
|
||||
@item Silicon Graphics Motion Video Compressor 1 (MVC1) @tab @tab X
|
||||
@item Silicon Graphics Motion Video Compressor 2 (MVC2) @tab @tab X
|
||||
@item Silicon Graphics RLE 8-bit video @tab @tab X
|
||||
@item Smacker video @tab @tab X
|
||||
@tab Video encoding used in Smacker.
|
||||
@item SMPTE VC-1 @tab @tab X
|
||||
@@ -865,11 +734,11 @@ following image formats are supported:
|
||||
@item Name @tab Encoding @tab Decoding @tab Comments
|
||||
@item 8SVX exponential @tab @tab X
|
||||
@item 8SVX fibonacci @tab @tab X
|
||||
@item AAC+ @tab E @tab IX
|
||||
@item AAC+ @tab E @tab X
|
||||
@tab encoding supported through external library libaacplus
|
||||
@item AAC @tab E @tab X
|
||||
@tab encoding supported through external library libfaac and libvo-aacenc
|
||||
@item AC-3 @tab IX @tab IX
|
||||
@item AC-3 @tab IX @tab X
|
||||
@item ADPCM 4X Movie @tab @tab X
|
||||
@item ADPCM CDROM XA @tab @tab X
|
||||
@item ADPCM Creative Technology @tab @tab X
|
||||
@@ -900,12 +769,10 @@ following image formats are supported:
|
||||
@tab Used in some Sega Saturn console games.
|
||||
@item ADPCM IMA Duck DK4 @tab @tab X
|
||||
@tab Used in some Sega Saturn console games.
|
||||
@item ADPCM IMA Radical @tab @tab X
|
||||
@item ADPCM Microsoft @tab X @tab X
|
||||
@item ADPCM MS IMA @tab X @tab X
|
||||
@item ADPCM Nintendo Gamecube AFC @tab @tab X
|
||||
@item ADPCM Nintendo Gamecube DTK @tab @tab X
|
||||
@item ADPCM Nintendo THP @tab @tab X
|
||||
@item ADPCM Nintendo Gamecube THP @tab @tab X
|
||||
@item ADPCM QT IMA @tab X @tab X
|
||||
@item ADPCM SEGA CRI ADX @tab X @tab X
|
||||
@tab Used in Sega Dreamcast games.
|
||||
@@ -913,8 +780,6 @@ following image formats are supported:
|
||||
@item ADPCM Sound Blaster Pro 2-bit @tab @tab X
|
||||
@item ADPCM Sound Blaster Pro 2.6-bit @tab @tab X
|
||||
@item ADPCM Sound Blaster Pro 4-bit @tab @tab X
|
||||
@item ADPCM VIMA
|
||||
@tab Used in LucasArts SMUSH animations.
|
||||
@item ADPCM Westwood Studios IMA @tab @tab X
|
||||
@tab Used in Westwood Studios games like Command and Conquer.
|
||||
@item ADPCM Yamaha @tab X @tab X
|
||||
@@ -925,21 +790,18 @@ following image formats are supported:
|
||||
@item Amazing Studio PAF Audio @tab @tab X
|
||||
@item Apple lossless audio @tab X @tab X
|
||||
@tab QuickTime fourcc 'alac'
|
||||
@item ATRAC1 @tab @tab X
|
||||
@item ATRAC3 @tab @tab X
|
||||
@item ATRAC3+ @tab @tab X
|
||||
@item Atrac 1 @tab @tab X
|
||||
@item Atrac 3 @tab @tab X
|
||||
@item Bink Audio @tab @tab X
|
||||
@tab Used in Bink and Smacker files in many games.
|
||||
@item CELT @tab @tab E
|
||||
@tab decoding supported through external library libcelt
|
||||
@item Delphine Software International CIN audio @tab @tab X
|
||||
@tab Codec used in Delphine Software International games.
|
||||
@item Digital Speech Standard - Standard Play mode (DSS SP) @tab @tab X
|
||||
@item Discworld II BMV Audio @tab @tab X
|
||||
@item COOK @tab @tab X
|
||||
@tab All versions except 5.1 are supported.
|
||||
@item DCA (DTS Coherent Acoustics) @tab X @tab X
|
||||
@tab supported extensions: XCh, XLL (partially)
|
||||
@item DPCM id RoQ @tab X @tab X
|
||||
@tab Used in Quake III, Jedi Knight 2 and other computer games.
|
||||
@item DPCM Interplay @tab @tab X
|
||||
@@ -949,10 +811,6 @@ following image formats are supported:
|
||||
@item DPCM Sol @tab @tab X
|
||||
@item DPCM Xan @tab @tab X
|
||||
@tab Used in Origin's Wing Commander IV AVI files.
|
||||
@item DSD (Direct Stream Digitial), least significant bit first @tab @tab X
|
||||
@item DSD (Direct Stream Digitial), most significant bit first @tab @tab X
|
||||
@item DSD (Direct Stream Digitial), least significant bit first, planar @tab @tab X
|
||||
@item DSD (Direct Stream Digitial), most significant bit first, planar @tab @tab X
|
||||
@item DSP Group TrueSpeech @tab @tab X
|
||||
@item DV audio @tab @tab X
|
||||
@item Enhanced AC-3 @tab X @tab X
|
||||
@@ -973,18 +831,18 @@ following image formats are supported:
|
||||
@item MLP (Meridian Lossless Packing) @tab @tab X
|
||||
@tab Used in DVD-Audio discs.
|
||||
@item Monkey's Audio @tab @tab X
|
||||
@tab Only versions 3.97-3.99 are supported.
|
||||
@item MP1 (MPEG audio layer 1) @tab @tab IX
|
||||
@item MP2 (MPEG audio layer 2) @tab IX @tab IX
|
||||
@tab encoding supported also through external library TwoLAME
|
||||
@tab libtwolame can be used alternatively for encoding.
|
||||
@item MP3 (MPEG audio layer 3) @tab E @tab IX
|
||||
@tab encoding supported through external library LAME, ADU MP3 and MP3onMP4 also supported
|
||||
@item MPEG-4 Audio Lossless Coding (ALS) @tab @tab X
|
||||
@item Musepack SV7 @tab @tab X
|
||||
@item Musepack SV8 @tab @tab X
|
||||
@item Nellymoser Asao @tab X @tab X
|
||||
@item On2 AVC (Audio for Video Codec) @tab @tab X
|
||||
@item Opus @tab E @tab X
|
||||
@tab encoding supported through external library libopus
|
||||
@item Opus @tab E @tab E
|
||||
@tab supported through external library libopus
|
||||
@item PCM A-law @tab X @tab X
|
||||
@item PCM mu-law @tab X @tab X
|
||||
@item PCM signed 8-bit planar @tab X @tab X
|
||||
@@ -1028,7 +886,7 @@ following image formats are supported:
|
||||
@item Sierra VMD audio @tab @tab X
|
||||
@tab Used in Sierra VMD files.
|
||||
@item Smacker audio @tab @tab X
|
||||
@item SMPTE 302M AES3 audio @tab X @tab X
|
||||
@item SMPTE 302M AES3 audio @tab @tab X
|
||||
@item Sonic @tab X @tab X
|
||||
@tab experimental codec
|
||||
@item Sonic lossless @tab X @tab X
|
||||
@@ -1036,7 +894,7 @@ following image formats are supported:
|
||||
@item Speex @tab E @tab E
|
||||
@tab supported through external library libspeex
|
||||
@item TAK (Tom's lossless Audio Kompressor) @tab @tab X
|
||||
@item True Audio (TTA) @tab X @tab X
|
||||
@item True Audio (TTA) @tab @tab X
|
||||
@item TrueHD @tab @tab X
|
||||
@tab Used in HD-DVD and Blu-Ray discs.
|
||||
@item TwinVQ (VQF flavor) @tab @tab X
|
||||
@@ -1044,8 +902,7 @@ following image formats are supported:
|
||||
@tab Used in LucasArts SMUSH animations.
|
||||
@item Vorbis @tab E @tab X
|
||||
@tab A native but very primitive encoder exists.
|
||||
@item Voxware MetaSound @tab @tab X
|
||||
@item WavPack @tab X @tab X
|
||||
@item WavPack @tab @tab X
|
||||
@item Westwood Audio (SND1) @tab @tab X
|
||||
@item Windows Media Audio 1 @tab X @tab X
|
||||
@item Windows Media Audio 2 @tab X @tab X
|
||||
@@ -1068,7 +925,6 @@ performance on systems without hardware floating point support).
|
||||
@item 3GPP Timed Text @tab @tab @tab X @tab X
|
||||
@item AQTitle @tab @tab X @tab @tab X
|
||||
@item DVB @tab X @tab X @tab X @tab X
|
||||
@item DVB teletext @tab @tab X @tab @tab E
|
||||
@item DVD @tab X @tab X @tab X @tab X
|
||||
@item JACOsub @tab X @tab X @tab @tab X
|
||||
@item MicroDVD @tab X @tab X @tab @tab X
|
||||
@@ -1078,7 +934,6 @@ performance on systems without hardware floating point support).
|
||||
@item PJS (Phoenix) @tab @tab X @tab @tab X
|
||||
@item RealText @tab @tab X @tab @tab X
|
||||
@item SAMI @tab @tab X @tab @tab X
|
||||
@item Spruce format (STL) @tab @tab X @tab @tab X
|
||||
@item SSA/ASS @tab X @tab X @tab X @tab X
|
||||
@item SubRip (SRT) @tab X @tab X @tab X @tab X
|
||||
@item SubViewer v1 @tab @tab X @tab @tab X
|
||||
@@ -1086,25 +941,21 @@ performance on systems without hardware floating point support).
|
||||
@item TED Talks captions @tab @tab X @tab @tab X
|
||||
@item VobSub (IDX+SUB) @tab @tab X @tab @tab X
|
||||
@item VPlayer @tab @tab X @tab @tab X
|
||||
@item WebVTT @tab X @tab X @tab X @tab X
|
||||
@item WebVTT @tab @tab X @tab @tab X
|
||||
@item XSUB @tab @tab @tab X @tab X
|
||||
@end multitable
|
||||
|
||||
@code{X} means that the feature is supported.
|
||||
|
||||
@code{E} means that support is provided through an external library.
|
||||
|
||||
@section Network Protocols
|
||||
|
||||
@multitable @columnfractions .4 .1
|
||||
@item Name @tab Support
|
||||
@item file @tab X
|
||||
@item FTP @tab X
|
||||
@item Gopher @tab X
|
||||
@item HLS @tab X
|
||||
@item HTTP @tab X
|
||||
@item HTTPS @tab X
|
||||
@item Icecast @tab X
|
||||
@item MMSH @tab X
|
||||
@item MMST @tab X
|
||||
@item pipe @tab X
|
||||
@@ -1115,9 +966,7 @@ performance on systems without hardware floating point support).
|
||||
@item RTMPTE @tab X
|
||||
@item RTMPTS @tab X
|
||||
@item RTP @tab X
|
||||
@item SAMBA @tab E
|
||||
@item SCTP @tab X
|
||||
@item SFTP @tab E
|
||||
@item TCP @tab X
|
||||
@item TLS @tab X
|
||||
@item UDP @tab X
|
||||
@@ -1137,19 +986,17 @@ performance on systems without hardware floating point support).
|
||||
@item caca @tab @tab X
|
||||
@item DV1394 @tab X @tab
|
||||
@item Lavfi virtual device @tab X @tab
|
||||
@item Linux framebuffer @tab X @tab X
|
||||
@item Linux framebuffer @tab X @tab
|
||||
@item JACK @tab X @tab
|
||||
@item LIBCDIO @tab X
|
||||
@item LIBDC1394 @tab X @tab
|
||||
@item OpenAL @tab X
|
||||
@item OpenGL @tab @tab X
|
||||
@item OSS @tab X @tab X
|
||||
@item PulseAudio @tab X @tab X
|
||||
@item Pulseaudio @tab X @tab
|
||||
@item SDL @tab @tab X
|
||||
@item Video4Linux2 @tab X @tab X
|
||||
@item Video4Linux2 @tab X @tab
|
||||
@item VfW capture @tab X @tab
|
||||
@item X11 grabbing @tab X @tab
|
||||
@item Win32 grabbing @tab X @tab
|
||||
@end multitable
|
||||
|
||||
@code{X} means that input/output is supported.
|
||||
|
||||
@@ -1,10 +1,9 @@
|
||||
\input texinfo @c -*- texinfo -*-
|
||||
@documentencoding UTF-8
|
||||
|
||||
@settitle Using Git to develop FFmpeg
|
||||
@settitle Using git to develop FFmpeg
|
||||
|
||||
@titlepage
|
||||
@center @titlefont{Using Git to develop FFmpeg}
|
||||
@center @titlefont{Using git to develop FFmpeg}
|
||||
@end titlepage
|
||||
|
||||
@top
|
||||
@@ -13,9 +12,9 @@
|
||||
|
||||
@chapter Introduction
|
||||
|
||||
This document aims in giving some quick references on a set of useful Git
|
||||
This document aims in giving some quick references on a set of useful git
|
||||
commands. You should always use the extensive and detailed documentation
|
||||
provided directly by Git:
|
||||
provided directly by git:
|
||||
|
||||
@example
|
||||
git --help
|
||||
@@ -32,21 +31,22 @@ man git-<command>
|
||||
shows information about the subcommand <command>.
|
||||
|
||||
Additional information could be found on the
|
||||
@url{http://gitref.org, Git Reference} website.
|
||||
@url{http://gitref.org, Git Reference} website
|
||||
|
||||
For more information about the Git project, visit the
|
||||
@url{http://git-scm.com/, Git website}.
|
||||
|
||||
@url{http://git-scm.com/, Git website}
|
||||
|
||||
Consult these resources whenever you have problems, they are quite exhaustive.
|
||||
|
||||
What follows now is a basic introduction to Git and some FFmpeg-specific
|
||||
guidelines to ease the contribution to the project.
|
||||
guidelines to ease the contribution to the project
|
||||
|
||||
@chapter Basics Usage
|
||||
|
||||
@section Get Git
|
||||
@section Get GIT
|
||||
|
||||
You can get Git from @url{http://git-scm.com/}
|
||||
You can get git from @url{http://git-scm.com/}
|
||||
Most distribution and operating system provide a package for it.
|
||||
|
||||
|
||||
@@ -74,7 +74,6 @@ git config --global core.autocrlf false
|
||||
@end example
|
||||
|
||||
|
||||
@anchor{Updating the source tree to the latest revision}
|
||||
@section Updating the source tree to the latest revision
|
||||
|
||||
@example
|
||||
@@ -107,7 +106,7 @@ git add [-A] <filename/dirname>
|
||||
git rm [-r] <filename/dirname>
|
||||
@end example
|
||||
|
||||
Git needs to get notified of all changes you make to your working
|
||||
GIT needs to get notified of all changes you make to your working
|
||||
directory that makes files appear or disappear.
|
||||
Line moves across files are automatically tracked.
|
||||
|
||||
@@ -127,8 +126,8 @@ will show all local modifications in your working directory as unified diff.
|
||||
git log <filename(s)>
|
||||
@end example
|
||||
|
||||
You may also use the graphical tools like @command{gitview} or @command{gitk}
|
||||
or the web interface available at @url{http://source.ffmpeg.org/}.
|
||||
You may also use the graphical tools like gitview or gitk or the web
|
||||
interface available at http://source.ffmpeg.org/
|
||||
|
||||
@section Checking source tree status
|
||||
|
||||
@@ -149,7 +148,6 @@ git diff --check
|
||||
to double check your changes before committing them to avoid trouble later
|
||||
on. All experienced developers do this on each and every commit, no matter
|
||||
how small.
|
||||
|
||||
Every one of them has been saved from looking like a fool by this many times.
|
||||
It's very easy for stray debug output or cosmetic modifications to slip in,
|
||||
please avoid problems through this extra level of scrutiny.
|
||||
@@ -172,14 +170,14 @@ to make sure you don't have untracked files or deletions.
|
||||
git add [-i|-p|-A] <filenames/dirnames>
|
||||
@end example
|
||||
|
||||
Make sure you have told Git your name and email address
|
||||
Make sure you have told git your name and email address
|
||||
|
||||
@example
|
||||
git config --global user.name "My Name"
|
||||
git config --global user.email my@@email.invalid
|
||||
@end example
|
||||
|
||||
Use @option{--global} to set the global configuration for all your Git checkouts.
|
||||
Use @var{--global} to set the global configuration for all your git checkouts.
|
||||
|
||||
Git will select the changes to the files for commit. Optionally you can use
|
||||
the interactive or the patch mode to select hunk by hunk what should be
|
||||
@@ -210,7 +208,7 @@ include filenames in log messages, Git provides that information.
|
||||
|
||||
Possibly make the commit message have a terse, descriptive first line, an
|
||||
empty line and then a full description. The first line will be used to name
|
||||
the patch by @command{git format-patch}.
|
||||
the patch by git format-patch.
|
||||
|
||||
@section Preparing a patchset
|
||||
|
||||
@@ -301,7 +299,7 @@ the current branch history.
|
||||
git commit --amend
|
||||
@end example
|
||||
|
||||
allows one to amend the last commit details quickly.
|
||||
allows to amend the last commit details quickly.
|
||||
|
||||
@example
|
||||
git rebase -i origin/master
|
||||
@@ -326,14 +324,12 @@ faulty commit disappear from the history.
|
||||
@section Pushing changes to remote trees
|
||||
|
||||
@example
|
||||
git push origin master --dry-run
|
||||
git push
|
||||
@end example
|
||||
|
||||
Will simulate a push of the local master branch to the default remote
|
||||
(@var{origin}). And list which branches and ranges or commits would have been
|
||||
pushed.
|
||||
Will push the changes to the default remote (@var{origin}).
|
||||
Git will prevent you from pushing changes if the local and remote trees are
|
||||
out of sync. Refer to @ref{Updating the source tree to the latest revision}.
|
||||
out of sync. Refer to and to sync the local tree.
|
||||
|
||||
@example
|
||||
git remote add <name> <url>
|
||||
@@ -352,24 +348,23 @@ branches matching the local ones.
|
||||
|
||||
@section Finding a specific svn revision
|
||||
|
||||
Since version 1.7.1 Git supports @samp{:/foo} syntax for specifying commits
|
||||
Since version 1.7.1 git supports @var{:/foo} syntax for specifying commits
|
||||
based on a regular expression. see man gitrevisions
|
||||
|
||||
@example
|
||||
git show :/'as revision 23456'
|
||||
@end example
|
||||
|
||||
will show the svn changeset @samp{r23456}. With older Git versions searching in
|
||||
will show the svn changeset @var{r23456}. With older git versions searching in
|
||||
the @command{git log} output is the easiest option (especially if a pager with
|
||||
search capabilities is used).
|
||||
|
||||
This commit can be checked out with
|
||||
|
||||
@example
|
||||
git checkout -b svn_23456 :/'as revision 23456'
|
||||
@end example
|
||||
|
||||
or for Git < 1.7.1 with
|
||||
or for git < 1.7.1 with
|
||||
|
||||
@example
|
||||
git checkout -b svn_23456 $SHA1
|
||||
@@ -378,7 +373,7 @@ git checkout -b svn_23456 $SHA1
|
||||
where @var{$SHA1} is the commit hash from the @command{git log} output.
|
||||
|
||||
|
||||
@chapter Pre-push checklist
|
||||
@chapter pre-push checklist
|
||||
|
||||
Once you have a set of commits that you feel are ready for pushing,
|
||||
work through the following checklist to doublecheck everything is in
|
||||
@@ -389,7 +384,7 @@ Apply your common sense, but if in doubt, err on the side of caution.
|
||||
First, make sure that the commits and branches you are going to push
|
||||
match what you want pushed and that nothing is missing, extraneous or
|
||||
wrong. You can see what will be pushed by running the git push command
|
||||
with @option{--dry-run} first. And then inspecting the commits listed with
|
||||
with --dry-run first. And then inspecting the commits listed with
|
||||
@command{git log -p 1234567..987654}. The @command{git status} command
|
||||
may help in finding local changes that have been forgotten to be added.
|
||||
|
||||
@@ -398,7 +393,7 @@ Next let the code pass through a full run of our testsuite.
|
||||
@itemize
|
||||
@item @command{make distclean}
|
||||
@item @command{/path/to/ffmpeg/configure}
|
||||
@item @command{make fate}
|
||||
@item @command{make check}
|
||||
@item if fate fails due to missing samples run @command{make fate-rsync} and retry
|
||||
@end itemize
|
||||
|
||||
@@ -416,5 +411,5 @@ recommended.
|
||||
|
||||
@chapter Server Issues
|
||||
|
||||
Contact the project admins at @email{root@@ffmpeg.org} if you have technical
|
||||
problems with the Git server.
|
||||
Contact the project admins @email{root@@ffmpeg.org} if you have technical
|
||||
problems with the GIT server.
Some files were not shown because too many files have changed in this diff.