FFmpeg.AutoGen
Gets or sets the root path for loading libraries.
Works out of the box with a companion FFmpeg distribution package such as FFmpeg.AutoGen.Redist.windows.x64.
The root path.
Returns a non-zero number if codec is a decoder, zero otherwise
a non-zero number if codec is a decoder, zero otherwise
Returns a non-zero number if codec is an encoder, zero otherwise
a non-zero number if codec is an encoder, zero otherwise
Iterate over all registered codecs.
a pointer where libavcodec will store the iteration state. Must point to NULL to start the iteration.
the next registered codec or NULL when the iteration is finished
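The opaque-state iteration described above can be sketched as follows; this is an illustrative fragment, not from the bindings themselves, and it requires the libavcodec development headers.

```c
#include <stdio.h>
#include <libavcodec/avcodec.h>

/* Enumerate every registered codec via av_codec_iterate(). */
static void list_codecs(void)
{
    void *iter = NULL;                 /* iteration state; must start as NULL */
    const AVCodec *codec;
    while ((codec = av_codec_iterate(&iter)) != NULL) {
        printf("%c%c %s\n",
               av_codec_is_encoder(codec) ? 'E' : '.',
               av_codec_is_decoder(codec) ? 'D' : '.',
               codec->name);
    }
}
```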
Allocate a CPB properties structure and initialize its fields to default values.
if non-NULL, the size of the allocated struct will be written here. This is useful for embedding it in side data.
the newly allocated struct or NULL on failure
Allocate an AVD3D11VAContext.
Newly-allocated AVD3D11VAContext or NULL on failure.
Same behaviour as av_fast_malloc(), but the buffer has an additional AV_INPUT_BUFFER_PADDING_SIZE bytes at the end, which will always be 0.
Same behaviour as av_fast_padded_malloc(), except that the buffer will always be 0-initialized after the call.
Return audio frame duration.
codec context
size of the frame, or 0 if unknown
frame duration, in samples, if known. 0 if not able to determine.
This function is the same as av_get_audio_frame_duration(), except it works with AVCodecParameters instead of an AVCodecContext.
Return codec bits per sample.
the codec
Number of bits per sample or zero if unknown for the given codec.
Return codec bits per sample. Only return non-zero if the bits per sample is exactly correct, not an approximation.
the codec
Number of bits per sample or zero if unknown for the given codec.
Return the PCM codec associated with a sample format.
endianness, 0 for little, 1 for big, -1 (or anything else) for native
AV_CODEC_ID_PCM_* or AV_CODEC_ID_NONE
Return a name for the specified profile, if available.
the codec that is searched for the given profile
the profile value for which a name is requested
A name for the profile if found, NULL otherwise.
Increase packet size, correctly zeroing padding
packet
number of bytes by which to increase the size of the packet
Initialize optional fields of a packet with default values.
packet
Allocate the payload of a packet and initialize its fields with default values.
packet
wanted payload size
0 if OK, AVERROR_xxx otherwise
Wrap an existing array as a packet side data.
packet
side information type
the side data array. It must be allocated with the av_malloc() family of functions. The ownership of the data is transferred to pkt.
side information size
a non-negative number on success, a negative AVERROR code on failure. On failure, the packet is unchanged and the data remains owned by the caller.
Allocate an AVPacket and set its fields to default values. The resulting struct must be freed using av_packet_free().
An AVPacket filled with default values or NULL on failure.
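A minimal allocation/free sketch for the entry above, assuming only the libavcodec headers; error handling is trimmed to the essentials.

```c
#include <libavcodec/avcodec.h>

static int use_packet(void)
{
    AVPacket *pkt = av_packet_alloc();     /* fields set to default values */
    if (!pkt)
        return AVERROR(ENOMEM);
    /* ... fill and consume pkt ... */
    av_packet_free(&pkt);                  /* unreferences if needed, sets pkt to NULL */
    return 0;
}
```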
Create a new packet that references the same data as src.
newly created AVPacket on success, NULL on error.
Copy only "properties" fields from src to dst.
Destination packet
Source packet
0 on success, a negative AVERROR on failure.
Free the packet; if the packet is reference counted, it will be unreferenced first.
packet to be freed. The pointer will be set to NULL.
Convenience function to free all the side data stored. All the other fields stay untouched.
packet
Initialize a reference-counted packet from av_malloc()ed data.
packet to be initialized. This function will set the data, size, and buf fields, all others are left untouched.
Data allocated by av_malloc() to be used as packet data. If this function returns successfully, the data is owned by the underlying AVBuffer. The caller may not access the data through other means.
size of data in bytes, without the padding. I.e. the full buffer size is assumed to be size + AV_INPUT_BUFFER_PADDING_SIZE.
0 on success, a negative AVERROR on error
Get side information from packet.
packet
desired side information type
If supplied, *size will be set to the size of the side data or to zero if the desired side data is not present.
pointer to data if present or NULL otherwise
Ensure the data described by a given packet is reference counted.
packet whose data should be made reference counted.
0 on success, a negative AVERROR on error. On failure, the packet is unchanged.
Create a writable reference for the data described by a given packet, avoiding data copy if possible.
Packet whose data should be made writable.
0 on success, a negative AVERROR on failure. On failure, the packet is unchanged.
Move every field in src to dst and reset src.
Destination packet
Source packet, will be reset
Allocate new side data for a packet.
packet
side information type
side information size
pointer to the freshly allocated data, or NULL otherwise
Pack a dictionary for use in side_data.
The dictionary to pack.
pointer to store the size of the returned data
pointer to data if successful, NULL otherwise
Set up a new reference to the data described by a given packet
Destination packet. Will be completely overwritten.
Source packet
0 on success, a negative AVERROR on error. On error, dst will be blank (as if returned by av_packet_alloc()).
Convert valid timing fields (timestamps / durations) in a packet from one timebase to another. Timestamps with unknown values (AV_NOPTS_VALUE) will be ignored.
packet on which the conversion will be performed
source timebase, in which the timing fields in pkt are expressed
destination timebase, to which the timing fields will be converted
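The timebase conversion above is a one-liner in practice; this hypothetical helper shows a typical use when remuxing a packet between streams with different timebases.

```c
#include <libavcodec/avcodec.h>

/* Illustrative helper: retime a packet when moving it from a source
 * timebase to a destination timebase (e.g. input stream -> output stream).
 * Fields equal to AV_NOPTS_VALUE are left untouched by the conversion. */
static void retime_packet(AVPacket *pkt, AVRational src_tb, AVRational dst_tb)
{
    av_packet_rescale_ts(pkt, src_tb, dst_tb);
}
```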
Shrink the already allocated side data buffer
packet
side information type
new side information size
0 on success, < 0 on failure
Unpack a dictionary from side_data.
data from side_data
size of the data
the metadata storage dictionary
0 on success, < 0 on failure
Wipe the packet.
The packet to be unreferenced.
Iterate over all registered codec parsers.
a pointer where libavcodec will store the iteration state. Must point to NULL to start the iteration.
the next registered codec parser or NULL when the iteration is finished
Parse a packet.
parser context.
codec context.
set to pointer to parsed buffer or NULL if not yet finished.
set to size of parsed buffer or zero if not yet finished.
input buffer.
buffer size in bytes without the padding. I.e. the full buffer size is assumed to be buf_size + AV_INPUT_BUFFER_PADDING_SIZE. To signal EOF, this should be 0 (so that the last frame can be output).
input presentation timestamp.
input decoding timestamp.
input byte position in stream.
the number of bytes of the input bitstream used.
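The parsing loop implied by the parameters above can be sketched like this; parser and dec_ctx are assumed to have been created elsewhere with av_parser_init() and avcodec_alloc_context3(), and the consumer comment marks where a real program would act on each packet.

```c
#include <libavcodec/avcodec.h>

/* Split a raw elementary-stream buffer into packets with av_parser_parse2(). */
static int parse_buffer(AVCodecParserContext *parser, AVCodecContext *dec_ctx,
                        AVPacket *pkt, const uint8_t *data, int data_size)
{
    while (data_size > 0) {
        int used = av_parser_parse2(parser, dec_ctx,
                                    &pkt->data, &pkt->size,
                                    data, data_size,
                                    AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
        if (used < 0)
            return used;            /* parse error */
        data      += used;          /* advance by the bytes consumed */
        data_size -= used;
        if (pkt->size > 0) {
            /* a complete packet is now available in pkt->data / pkt->size */
        }
    }
    return 0;
}
```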
Reduce packet size, correctly zeroing padding
packet
new size
Encode extradata length to a buffer. Used by xiph codecs.
buffer to write to; must be at least (v/255+1) bytes long
size of extradata in bytes
number of bytes written to the buffer.
Modify width and height values so that they will result in a memory buffer that is acceptable for the codec if you do not use any horizontal padding.
Modify width and height values so that they will result in a memory buffer that is acceptable for the codec if you also ensure that all line sizes are a multiple of the respective linesize_align[i].
Allocate an AVCodecContext and set its fields to default values. The resulting struct should be freed with avcodec_free_context().
if non-NULL, allocate private data and initialize defaults for the given codec. It is illegal to then call avcodec_open2() with a different codec. If NULL, then the codec-specific defaults won't be initialized, which may result in suboptimal default settings (this is important mainly for encoders, e.g. libx264).
An AVCodecContext filled with default values or NULL on failure.
Converts swscale x/y chroma position to AVChromaLocation.
horizontal chroma sample position
vertical chroma sample position
Close a given AVCodecContext and free all the data associated with it (but not the AVCodecContext itself).
Return the libavcodec build-time configuration.
Decode a subtitle message. Return a negative value on error, otherwise return the number of bytes used. If no subtitle could be decompressed, got_sub_ptr is zero. Otherwise, the subtitle is stored in *sub. Note that AV_CODEC_CAP_DR1 is not available for subtitle codecs. This is for simplicity, because the performance difference is expected to be negligible and reusing a get_buffer written for video codecs would probably perform badly due to a potentially very different allocation pattern.
the codec context
The preallocated AVSubtitle in which the decoded subtitle will be stored, must be freed with avsubtitle_free if *got_sub_ptr is set.
Zero if no subtitle could be decompressed; otherwise, nonzero.
The input AVPacket containing the input buffer.
The default callback for AVCodecContext.get_buffer2(). It is made public so it can be called by custom get_buffer2() implementations for decoders without AV_CODEC_CAP_DR1 set.
The default callback for AVCodecContext.get_encode_buffer(). It is made public so it can be called by custom get_encode_buffer() implementations for encoders without AV_CODEC_CAP_DR1 set.
Returns descriptor for given codec ID or NULL if no descriptor exists.
descriptor for given codec ID or NULL if no descriptor exists.
Returns codec descriptor with the given name or NULL if no such descriptor exists.
codec descriptor with the given name or NULL if no such descriptor exists.
Iterate over all codec descriptors known to libavcodec.
previous descriptor. NULL to get the first descriptor.
next descriptor or NULL after the last descriptor
@{
Converts AVChromaLocation to swscale x/y chroma position.
horizontal chroma sample position
vertical chroma sample position
Fill AVFrame audio data and linesize pointers.
the AVFrame; frame->nb_samples must be set prior to calling the function. This function fills in frame->data, frame->extended_data and frame->linesize[0].
channel count
sample format
buffer to use for frame data
size of buffer
plane size sample alignment (0 = default)
>=0 on success, negative error code on failure
Find the best pixel format to convert to given a certain source pixel format. When converting from one pixel format to another, information loss may occur. For example, when converting from RGB24 to GRAY, the color information will be lost. Similarly, other losses occur when converting from some formats to other formats. avcodec_find_best_pix_fmt_of_2() searches which of the given pixel formats should be used to suffer the least amount of loss. The pixel formats from which it chooses one, are determined by the pix_fmt_list parameter.
AV_PIX_FMT_NONE terminated array of pixel formats to choose from
source pixel format
Whether the source pixel format alpha channel is used.
Combination of flags informing you what kind of losses will occur.
The best pixel format to convert to or -1 if none was found.
Find a registered decoder with a matching codec ID.
AVCodecID of the requested decoder
A decoder if one was found, NULL otherwise.
Find a registered decoder with the specified name.
name of the requested decoder
A decoder if one was found, NULL otherwise.
Find a registered encoder with a matching codec ID.
AVCodecID of the requested encoder
An encoder if one was found, NULL otherwise.
Find a registered encoder with the specified name.
name of the requested encoder
An encoder if one was found, NULL otherwise.
Reset the internal codec state / flush internal buffers. Should be called e.g. when seeking or when switching to a different stream.
Free the codec context and everything associated with it and write NULL to the provided pointer.
Get the AVClass for AVCodecContext. It can be used in combination with AV_OPT_SEARCH_FAKE_OBJ for examining options.
Retrieve supported hardware configurations for a codec.
Create and return an AVHWFramesContext with values adequate for hardware decoding. This is meant to be called from the get_format callback, and is a helper for preparing an AVHWFramesContext for AVCodecContext.hw_frames_ctx. This API is for decoding with certain hardware acceleration modes/APIs only.
The context which is currently calling get_format, and which implicitly contains all state needed for filling the returned AVHWFramesContext properly.
A reference to the AVHWDeviceContext describing the device which will be used by the hardware decoder.
The hwaccel format you are going to return from get_format.
On success, set to a reference to an _uninitialized_ AVHWFramesContext, created from the given device_ref. Fields will be set to values required for decoding. Not changed if an error is returned.
zero on success, a negative value on error. The following error codes have special semantics:
AVERROR(ENOENT): the decoder does not support this functionality. Setup is always manual, or it is a decoder which does not support setting AVCodecContext.hw_frames_ctx at all, or it is a software format.
AVERROR(EINVAL): it is known that hardware decoding is not supported for this configuration, or the device_ref is not supported for the hwaccel referenced by hw_pix_fmt.
Get the name of a codec.
a static string identifying the codec; never NULL
Get the AVClass for AVSubtitleRect. It can be used in combination with AV_OPT_SEARCH_FAKE_OBJ for examining options.
Get the type of the given codec.
Returns a positive value if s is open (i.e. avcodec_open2() was called on it with no corresponding avcodec_close()), 0 otherwise.
a positive value if s is open (i.e. avcodec_open2() was called on it with no corresponding avcodec_close()), 0 otherwise.
Return the libavcodec license.
Initialize the AVCodecContext to use the given AVCodec. Prior to using this function the context has to be allocated with avcodec_alloc_context3().
The context to initialize.
The codec to open this context for. If a non-NULL codec has been previously passed to avcodec_alloc_context3() or for this context, then this parameter MUST be either NULL or equal to the previously passed codec.
A dictionary filled with AVCodecContext and codec-private options. On return this object will be filled with options that were not found.
zero on success, a negative value on error
Allocate a new AVCodecParameters and set its fields to default values (unknown/invalid/0). The returned struct must be freed with avcodec_parameters_free().
Copy the contents of src to dst. Any allocated fields in dst are freed and replaced with newly allocated duplicates of the corresponding fields in src.
>= 0 on success, a negative AVERROR code on failure.
Free an AVCodecParameters instance and everything associated with it and write NULL to the supplied pointer.
Fill the parameters struct based on the values from the supplied codec context. Any allocated fields in par are freed and replaced with duplicates of the corresponding fields in codec.
>= 0 on success, a negative AVERROR code on failure
Fill the codec context based on the values from the supplied codec parameters. Any allocated fields in codec that have a corresponding field in par are freed and replaced with duplicates of the corresponding field in par. Fields in codec that do not have a counterpart in par are not touched.
>= 0 on success, a negative AVERROR code on failure.
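Taken together, the avcodec_find_decoder / avcodec_alloc_context3 / avcodec_parameters_to_context / avcodec_open2 entries describe the usual decoder-setup sequence. A hedged sketch, assuming par comes from somewhere like an AVStream's codecpar:

```c
#include <libavcodec/avcodec.h>

/* Create and open a decoder from codec parameters.
 * Returns the opened context, or NULL on any failure. */
static AVCodecContext *open_decoder(const AVCodecParameters *par)
{
    const AVCodec *dec = avcodec_find_decoder(par->codec_id);
    if (!dec)
        return NULL;
    AVCodecContext *ctx = avcodec_alloc_context3(dec);
    if (!ctx)
        return NULL;
    if (avcodec_parameters_to_context(ctx, par) < 0 ||
        avcodec_open2(ctx, dec, NULL) < 0) {
        avcodec_free_context(&ctx);  /* frees everything, writes NULL back */
        return NULL;
    }
    return ctx;
}
```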
Return a value representing the fourCC code associated to the pixel format pix_fmt, or 0 if no associated fourCC code can be found.
Return a name for the specified profile, if available.
the ID of the codec to which the requested profile belongs
the profile value for which a name is requested
A name for the profile if found, NULL otherwise.
Return decoded output data from a decoder.
codec context
This will be set to a reference-counted video or audio frame (depending on the decoder type) allocated by the decoder. Note that the function will always call av_frame_unref(frame) before doing anything else.
0: success, a frame was returned
AVERROR(EAGAIN): output is not available in this state - user must try to send new input
AVERROR_EOF: the decoder has been fully flushed, and there will be no more output frames
AVERROR(EINVAL): codec not opened, or it is an encoder
AVERROR_INPUT_CHANGED: current decoded frame has changed parameters with respect to first decoded frame. Applicable when flag AV_CODEC_FLAG_DROPCHANGED is set.
other negative values: legitimate decoding errors
Read encoded data from the encoder.
codec context
This will be set to a reference-counted packet allocated by the encoder. Note that the function will always call av_packet_unref(avpkt) before doing anything else.
0 on success, otherwise negative error code:
AVERROR(EAGAIN): output is not available in the current state - user must try to send input
AVERROR_EOF: the encoder has been fully flushed, and there will be no more output packets
AVERROR(EINVAL): codec not opened, or it is a decoder
other errors: legitimate encoding errors
Supply a raw video or audio frame to the encoder. Use avcodec_receive_packet() to retrieve buffered output packets.
codec context
AVFrame containing the raw audio or video frame to be encoded. Ownership of the frame remains with the caller, and the encoder will not write to the frame. The encoder may create a reference to the frame data (or copy it if the frame is not reference-counted). It can be NULL, in which case it is considered a flush packet. This signals the end of the stream. If the encoder still has packets buffered, it will return them after this call. Once flushing mode has been entered, additional flush packets are ignored, and sending frames will return AVERROR_EOF.
0 on success, otherwise negative error code:
AVERROR(EAGAIN): input is not accepted in the current state - user must read output with avcodec_receive_packet() (once all output is read, the packet should be resent, and the call will not fail with EAGAIN).
AVERROR_EOF: the encoder has been flushed, and no new frames can be sent to it
AVERROR(EINVAL): codec not opened, it is a decoder, or requires flush
AVERROR(ENOMEM): failed to add packet to internal queue, or similar
other errors: legitimate encoding errors
Supply raw packet data as input to a decoder.
codec context
The input AVPacket. Usually, this will be a single video frame, or several complete audio frames. Ownership of the packet remains with the caller, and the decoder will not write to the packet. The decoder may create a reference to the packet data (or copy it if the packet is not reference-counted). Unlike with older APIs, the packet is always fully consumed, and if it contains multiple frames (e.g. some audio codecs), will require you to call avcodec_receive_frame() multiple times afterwards before you can send a new packet. It can be NULL (or an AVPacket with data set to NULL and size set to 0); in this case, it is considered a flush packet, which signals the end of the stream. Sending the first flush packet will return success. Subsequent ones are unnecessary and will return AVERROR_EOF. If the decoder still has frames buffered, it will return them after sending a flush packet.
0 on success, otherwise negative error code:
AVERROR(EAGAIN): input is not accepted in the current state - user must read output with avcodec_receive_frame() (once all output is read, the packet should be resent, and the call will not fail with EAGAIN).
AVERROR_EOF: the decoder has been flushed, and no new packets can be sent to it (also returned if more than 1 flush packet is sent)
AVERROR(EINVAL): codec not opened, it is an encoder, or requires flush
AVERROR(ENOMEM): failed to add packet to internal queue, or similar
other errors: legitimate decoding errors
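The avcodec_send_packet / avcodec_receive_frame contract above amounts to the familiar send/receive loop. A sketch, assuming dec_ctx is an opened decoder and handle_frame() stands in for whatever the application does with each decoded frame:

```c
#include <libavcodec/avcodec.h>

extern void handle_frame(const AVFrame *frame);  /* hypothetical consumer */

/* Feed one packet (or NULL to flush) and drain all resulting frames. */
static int decode(AVCodecContext *dec_ctx, const AVPacket *pkt, AVFrame *frame)
{
    int ret = avcodec_send_packet(dec_ctx, pkt);
    if (ret < 0)
        return ret;
    for (;;) {
        ret = avcodec_receive_frame(dec_ctx, frame);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            return 0;               /* need more input, or fully flushed */
        if (ret < 0)
            return ret;             /* legitimate decoding error */
        handle_frame(frame);
        av_frame_unref(frame);      /* release the decoder's reference */
    }
}
```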
@}
Return the LIBAVCODEC_VERSION_INT constant.
Free all allocated data in the given subtitle struct.
AVSubtitle to free.
Audio input devices iterator.
Video input devices iterator.
Audio output devices iterator.
Video output devices iterator.
Send control message from application to device.
device context.
message type.
message data. Exact type depends on message type.
size of message data.
>= 0 on success, negative on error. AVERROR(ENOSYS) when device doesn't implement handler of the message.
Initialize capabilities probing API based on AVOption API.
Device capabilities data. Pointer to a NULL pointer must be passed.
Context of the device.
An AVDictionary filled with device-private options. On return this parameter will be destroyed and replaced with a dict containing options that were not found. May be NULL. The same options must be passed later to avformat_write_header() for output devices or avformat_open_input() for input devices, or at any other place that affects device-private options.
>= 0 on success, negative otherwise.
Free resources created by avdevice_capabilities_create()
Device capabilities data to be freed.
Context of the device.
Return the libavdevice build-time configuration.
Send control message from device to application.
device context.
message type.
message data. Can be NULL.
size of message data.
>= 0 on success, negative on error. AVERROR(ENOSYS) when application doesn't implement handler of the message.
Convenience function to free the result of avdevice_list_devices().
Return the libavdevice license.
List devices.
device context.
list of autodetected devices.
count of autodetected devices, negative on error.
List devices.
device format. May be NULL if device name is set.
device name. May be NULL if device format is set.
An AVDictionary filled with device-private options. May be NULL. The same options must be passed later to avformat_write_header() for output devices or avformat_open_input() for input devices, or at any other place that affects device-private options.
list of autodetected devices
count of autodetected devices, negative on error.
Initialize libavdevice and register all the input and output devices.
Return the LIBAVDEVICE_VERSION_INT constant.
Create an AVABufferSinkParams structure.
Get a frame with filtered data from sink and put it in frame.
pointer to a context of a buffersink or abuffersink AVFilter.
pointer to an allocated frame that will be filled with data. The data must be freed using av_frame_unref() / av_frame_free()
- >= 0 if a frame was successfully returned.
- AVERROR(EAGAIN) if no frames are available at this point; more input frames must be added to the filtergraph to get more output.
- AVERROR_EOF if there will be no more output frames on this sink.
- A different negative AVERROR code in other failure cases.
Get a frame with filtered data from sink and put it in frame.
pointer to a buffersink or abuffersink filter context.
pointer to an allocated frame that will be filled with data. The data must be freed using av_frame_unref() / av_frame_free()
a combination of AV_BUFFERSINK_FLAG_* flags
>= 0 for success, a negative AVERROR code for failure.
Same as av_buffersink_get_frame(), but with the ability to specify the number of samples read. This function is less efficient than av_buffersink_get_frame(), because it copies the data around.
pointer to a context of the abuffersink AVFilter.
pointer to an allocated frame that will be filled with data. The data must be freed using av_frame_unref() / av_frame_free() frame will contain exactly nb_samples audio samples, except at the end of stream, when it can contain less than nb_samples.
The return codes have the same meaning as for av_buffersink_get_frame().
Get the properties of the stream @{
Create an AVBufferSinkParams structure.
Set the frame size for an audio buffer sink.
Add a frame to the buffer source.
an instance of the buffersrc filter
frame to be added. If the frame is reference counted, this function will take ownership of the reference(s) and reset the frame. Otherwise the frame data will be copied. If this function returns an error, the input frame is not touched.
0 on success, a negative AVERROR on error.
Add a frame to the buffer source.
pointer to a buffer source context
a frame, or NULL to mark EOF
a combination of AV_BUFFERSRC_FLAG_*
>= 0 in case of success, a negative AVERROR code in case of failure
Close the buffer source after EOF.
Get the number of failed requests.
Allocate a new AVBufferSrcParameters instance. It should be freed by the caller with av_free().
Initialize the buffersrc or abuffersrc filter with the provided parameters. This function may be called multiple times, the later calls override the previous ones. Some of the parameters may also be set through AVOptions, then whatever method is used last takes precedence.
an instance of the buffersrc or abuffersrc filter
the stream parameters. The frames later passed to this filter must conform to those parameters. All the allocated fields in param remain owned by the caller, libavfilter will make internal copies or references when necessary.
0 on success, a negative AVERROR code on failure.
Add a frame to the buffer source.
an instance of the buffersrc filter
frame to be added. If the frame is reference counted, this function will make a new reference to it. Otherwise the frame data will be copied.
0 on success, a negative AVERROR on error
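The buffersrc/buffersink entries above pair into the standard push-pull pattern. A sketch, assuming src_ctx and sink_ctx are the buffersrc and buffersink instances of an already-configured graph:

```c
#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>

/* Push one frame into the graph and drain every available output frame.
 * Passing in == NULL marks EOF on the source. */
static int filter_frame(AVFilterContext *src_ctx, AVFilterContext *sink_ctx,
                        AVFrame *in, AVFrame *out)
{
    int ret = av_buffersrc_add_frame(src_ctx, in);
    if (ret < 0)
        return ret;
    while ((ret = av_buffersink_get_frame(sink_ctx, out)) >= 0) {
        /* ... consume the filtered frame in out here ... */
        av_frame_unref(out);
    }
    /* EAGAIN (needs more input) and EOF are normal loop exits, not errors. */
    return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
}
```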
Iterate over all registered filters.
a pointer where libavfilter will store the iteration state. Must point to NULL to start the iteration.
the next registered filter or NULL when the iteration is finished
Negotiate the media format, dimensions, etc. of all inputs to a filter.
the filter to negotiate the properties for its inputs
zero on successful negotiation
Return the libavfilter build-time configuration.
Get the number of elements in an AVFilter's inputs or outputs array.
Free a filter context. This will also remove the filter from its filtergraph's list of filters.
the filter to free
Get a filter definition matching the given name.
the filter name to find
the filter definition, if any matching one is registered. NULL if none found.
Returns AVClass for AVFilterContext.
AVClass for AVFilterContext.
Allocate a filter graph.
the allocated filter graph on success or NULL.
Create a new filter instance in a filter graph.
graph in which the new filter will be used
the filter to create an instance of
Name to give to the new instance (will be copied to AVFilterContext.name). This may be used by the caller to identify different filters, libavfilter itself assigns no semantics to this parameter. May be NULL.
the context of the newly created filter instance (note that it is also retrievable directly through AVFilterGraph.filters or with avfilter_graph_get_filter()) on success or NULL on failure.
Check validity and configure all the links and formats in the graph.
the filter graph
context used for logging
>= 0 in case of success, a negative AVERROR code otherwise
Create and add a filter instance into an existing graph. The filter instance is created from the filter filt and initialized with the parameter args. opaque is currently ignored.
the instance name to give to the created filter instance
the filter graph
a negative AVERROR error code in case of failure, a non-negative value otherwise
Dump a graph into a human-readable string representation.
the graph to dump
formatting options; currently ignored
a string, or NULL in case of memory allocation failure; the string must be freed using av_free
Free a graph, destroy its links, and set *graph to NULL. If *graph is NULL, do nothing.
Get a filter instance identified by instance name from graph.
filter graph to search through.
filter instance name (should be unique in the graph).
the pointer to the found filter instance or NULL if it cannot be found.
Add a graph described by a string to a graph.
the filter graph where to link the parsed graph context
string to be parsed
linked list to the inputs of the graph
linked list to the outputs of the graph
zero on success, a negative AVERROR code on error
Add a graph described by a string to a graph.
the filter graph where to link the parsed graph context
string to be parsed
pointer to a linked list to the inputs of the graph, may be NULL. If non-NULL, *inputs is updated to contain the list of open inputs after the parsing, should be freed with avfilter_inout_free().
pointer to a linked list to the outputs of the graph, may be NULL. If non-NULL, *outputs is updated to contain the list of open outputs after the parsing, should be freed with avfilter_inout_free().
non negative on success, a negative AVERROR code on error
Add a graph described by a string to a graph.
the filter graph where to link the parsed graph context
string to be parsed
a linked list of all free (unlinked) inputs of the parsed graph will be returned here. It is to be freed by the caller using avfilter_inout_free().
a linked list of all free (unlinked) outputs of the parsed graph will be returned here. It is to be freed by the caller using avfilter_inout_free().
zero on success, a negative AVERROR code on error
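The graph-parsing entries above can be combined into a short build step. This sketch parses a self-contained description (the "testsrc,nullsink" chain is only an illustrative example; a real graph with open pads would need those pads linked to buffersrc/buffersink filters before configuring):

```c
#include <libavfilter/avfilter.h>

/* Build and configure a graph from a textual description. */
static int build_graph(AVFilterGraph **graph)
{
    AVFilterInOut *inputs = NULL, *outputs = NULL;
    int ret;

    *graph = avfilter_graph_alloc();
    if (!*graph)
        return AVERROR(ENOMEM);
    /* For a complete description (source through sink), inputs/outputs
     * come back empty; otherwise they list the open pads left to link. */
    ret = avfilter_graph_parse_ptr(*graph, "testsrc=duration=1,nullsink",
                                   &inputs, &outputs, NULL);
    if (ret >= 0)
        ret = avfilter_graph_config(*graph, NULL);  /* validate links/formats */
    avfilter_inout_free(&inputs);
    avfilter_inout_free(&outputs);
    return ret;
}
```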
Queue a command for one or more filter instances.
the filter graph
the filter(s) to which the command should be sent. "all" sends to all filters; otherwise it can be a filter or filter instance name, which will send the command to all matching filters.
the command to send; for handling simplicity all commands must be alphanumeric only
the argument for the command
time at which the command should be sent to the filter
Request a frame on the oldest sink link.
the return value of ff_request_frame(), or AVERROR_EOF if all links returned AVERROR_EOF
Send a command to one or more filter instances.
the filter graph
the filter(s) to which the command should be sent. "all" sends to all filters; otherwise it can be a filter or filter instance name, which will send the command to all matching filters.
the command to send, for handling simplicity all commands must be alphanumeric only
the argument for the command
a buffer with size res_size where the filter(s) can return a response.
Enable or disable automatic format conversion inside the graph.
any of the AVFILTER_AUTO_CONVERT_* constants
Initialize a filter with the supplied dictionary of options.
uninitialized filter context to initialize
An AVDictionary filled with options for this filter. On return this parameter will be destroyed and replaced with a dict containing options that were not found. This dictionary must be freed by the caller. May be NULL, then this function is equivalent to avfilter_init_str() with the second parameter set to NULL.
0 on success, a negative AVERROR on failure
Initialize a filter with the supplied parameters.
uninitialized filter context to initialize
Options to initialize the filter with. This must be a ':'-separated list of options in the 'key=value' form. May be NULL if the options have been set directly using the AVOptions API or there are no options that need to be set.
0 on success, a negative AVERROR on failure
Allocate a single AVFilterInOut entry. Must be freed with avfilter_inout_free().
allocated AVFilterInOut on success, NULL on failure.
Free the supplied list of AVFilterInOut and set *inout to NULL. If *inout is NULL, do nothing.
Insert a filter in the middle of an existing link.
the link into which the filter should be inserted
the filter to be inserted
the input pad on the filter to connect
the output pad on the filter to connect
zero on success
Return the libavfilter license.
Link two filters together.
the source filter
index of the output pad on the source filter
the destination filter
index of the input pad on the destination filter
zero on success
Free the link in *link, and set its pointer to NULL.
Get the number of elements in an AVFilter's inputs or outputs array.
Get the name of an AVFilterPad.
an array of AVFilterPads
index of the pad in the array; it is the caller's responsibility to ensure the index is valid
name of the pad_idx'th pad in pads
Get the type of an AVFilterPad.
an array of AVFilterPads
index of the pad in the array; it is the caller's responsibility to ensure the index is valid
type of the pad_idx'th pad in pads
Make the filter instance process a command. It is recommended to use avfilter_graph_send_command().
Return the LIBAVFILTER_VERSION_INT constant.
Add an index entry into a sorted list. Update the entry if the list already contains it.
timestamp in the time base of the given stream
Read data and append it to the current content of the AVPacket. If pkt->size is 0 this is identical to av_get_packet(). Note that this uses av_grow_packet() and thus involves a realloc, which is inefficient. This function should therefore only be used when there is no reasonable way to know (an upper bound of) the final size.
associated IO context
packet
amount of data to read
>0 (read size) if OK, AVERROR_xxx otherwise, previous data will not be lost even if an error occurs.
Get the AVCodecID for the given codec tag tag. If no codec id is found returns AV_CODEC_ID_NONE.
list of supported codec_id-codec_tag pairs, as stored in AVInputFormat.codec_tag and AVOutputFormat.codec_tag
codec tag to match to a codec ID
Get the codec tag for the given codec id id. If no codec tag is found returns 0.
list of supported codec_id-codec_tag pairs, as stored in AVInputFormat.codec_tag and AVOutputFormat.codec_tag
codec ID to match to a codec tag
Get the codec tag for the given codec id.
list of supported codec_id - codec_tag pairs, as stored in AVInputFormat.codec_tag and AVOutputFormat.codec_tag
codec id that should be searched for in the list
A pointer to the found tag
0 if id was not found in tags, > 0 if it was found
Iterate over all registered demuxers.
a pointer where libavformat will store the iteration state. Must point to NULL to start the iteration.
the next registered demuxer or NULL when the iteration is finished
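The iterate functions (av_codec_iterate(), av_demuxer_iterate(), av_muxer_iterate()) share one calling convention: the caller holds an opaque state pointer that must start as NULL and is advanced on every call. A minimal sketch of that pattern, using a hypothetical static name table (`demuxer_names`, `demo_iterate`) in place of libavformat's real registry:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for the registered demuxer table. */
static const char *demuxer_names[] = { "mov", "matroska", "mpegts" };

/* Mimics av_demuxer_iterate(): *opaque must point to NULL to start the
 * iteration; returns the next entry, or NULL when iteration is finished. */
const char *demo_iterate(void **opaque)
{
    size_t i = (size_t)(uintptr_t)*opaque;
    if (i >= sizeof(demuxer_names) / sizeof(demuxer_names[0]))
        return NULL;
    *opaque = (void *)(uintptr_t)(i + 1);  /* store position in the pointer */
    return demuxer_names[i];
}
```

Typical use mirrors the real API: declare `void *state = NULL;` and loop `while ((name = demo_iterate(&state)) != NULL)`.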
Returns the AV_DISPOSITION_* flag corresponding to disp, or a negative error code if disp does not correspond to a known stream disposition.
The AV_DISPOSITION_* flag corresponding to disp or a negative error code if disp does not correspond to a known stream disposition.
Returns the string description corresponding to the lowest set bit in disposition, or NULL when the lowest set bit does not correspond to a known disposition or when disposition is 0.
a combination of AV_DISPOSITION_* values
The string description corresponding to the lowest set bit in disposition. NULL when the lowest set bit does not correspond to a known disposition or when disposition is 0.
Print detailed information about the input or output format, such as duration, bitrate, streams, container, programs, metadata, side data, codec and time base.
the context to analyze
index of the stream to dump information about
the URL to print, such as source or destination file
Select whether the specified context is an input (0) or an output (1)
Check whether filename actually is a numbered sequence generator.
possible numbered sequence string
1 if a valid numbered sequence string, 0 otherwise
Find the "best" stream in the file. The best stream is determined according to various heuristics as the most likely to be what the user expects. If the decoder parameter is non-NULL, av_find_best_stream will find the default decoder for the stream's codec; streams for which no decoder can be found are ignored.
media file handle
stream type: video, audio, subtitles, etc.
user-requested stream number, or -1 for automatic selection
try to find a stream related (e.g. in the same program) to this one, or -1 if none
if non-NULL, returns the decoder for the selected stream
flags; none are currently defined
the non-negative stream number in case of success, AVERROR_STREAM_NOT_FOUND if no stream with the requested type could be found, AVERROR_DECODER_NOT_FOUND if streams were found but no decoder
Find AVInputFormat based on the short name of the input format.
Find the programs which belong to a given stream.
media file handle
the last found program, the search will start after this program, or from the beginning if it is NULL
stream index
the next program which belongs to s, NULL if no program is found or the last program is not among the programs of ic.
Returns the method used to set ctx->duration.
AVFMT_DURATION_FROM_PTS, AVFMT_DURATION_FROM_STREAM, or AVFMT_DURATION_FROM_BITRATE.
This function will cause global side data to be injected in the next packet of each stream as well as after any subsequent seek.
Return in 'buf' the path with '%d' replaced by a number.
destination buffer
destination buffer size
numbered sequence string
frame number
AV_FRAME_FILENAME_FLAGS_*
0 if OK, -1 on format error
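As a sketch of the substitution described above, here is a hypothetical helper (`demo_frame_filename`, not the real av_get_frame_filename(), which also understands width specifiers like '%0Nd'): it replaces exactly one '%d' with the frame number and reports a format error otherwise.

```c
#include <stdio.h>
#include <string.h>

/* Simplified sketch of av_get_frame_filename(): replace one '%d' in
 * `path` with `number`. Returns 0 on success, -1 on format error
 * (no '%d', more than one '%d', or the result does not fit). */
int demo_frame_filename(char *buf, int buf_size, const char *path, int number)
{
    const char *p = strstr(path, "%d");
    if (!p || strstr(p + 2, "%d"))
        return -1;
    int n = snprintf(buf, buf_size, "%.*s%d%s",
                     (int)(p - path), path, number, p + 2);
    return (n < 0 || n >= buf_size) ? -1 : 0;
}
```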
Get timing information for the data currently output. The exact meaning of "currently output" depends on the format. It is mostly relevant for devices that have an internal buffer and/or work in real time.
media file handle
stream in the media file
DTS of the last packet output for the stream, in stream time_base units
absolute time when that packet was output, in microseconds
0 if OK, AVERROR(ENOSYS) if the format does not support it. Note: some formats or devices may not allow measuring dts and wall-clock time atomically.
Allocate and read the payload of a packet and initialize its fields with default values.
associated IO context
packet
desired payload size
>0 (read size) if OK, AVERROR_xxx otherwise
Guess the codec ID based upon muxer and filename.
Return the output format in the list of registered output formats which best matches the provided parameters, or return NULL if there is no match.
if non-NULL checks if short_name matches with the names of the registered formats
if non-NULL checks if filename terminates with the extensions of the registered formats
if non-NULL checks if mime_type matches with the MIME type of the registered formats
Guess the frame rate, based on both the container and codec information.
the format context which the stream is part of
the stream which the frame is part of
the frame for which the frame rate should be determined, may be NULL
the guessed (valid) frame rate, 0/1 if no idea
Guess the sample aspect ratio of a frame, based on both the stream and the frame aspect ratio.
the format context which the stream is part of
the stream which the frame is part of
the frame with the aspect ratio to be determined
the guessed (valid) sample_aspect_ratio, 0/1 if no idea
Send a nice hexadecimal dump of a buffer to the specified file stream.
The file stream pointer where the dump should be sent to.
buffer
buffer size
Send a nice hexadecimal dump of a buffer to the log.
A pointer to an arbitrary struct of which the first field is a pointer to an AVClass struct.
The importance level of the message, lower values signifying higher importance.
buffer
buffer size
Get the index for a specific timestamp.
stream that the timestamp belongs to
timestamp to retrieve the index for
If AVSEEK_FLAG_BACKWARD is set, the returned index will correspond to the timestamp which is <= the requested one; if backward is 0, it will be >=. If AVSEEK_FLAG_ANY is set, seek to any frame, otherwise only to keyframes.
< 0 if no such timestamp could be found
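The backward/forward semantics above can be sketched over a plain sorted timestamp array. The helper below (`demo_index_search`) is a hypothetical illustration, not the real av_index_search_timestamp(), which binary-searches an AVStream's index entries:

```c
/* Sketch of av_index_search_timestamp() semantics over a sorted
 * timestamp array. backward != 0: last index with ts[i] <= wanted;
 * backward == 0: first index with ts[i] >= wanted; -1 if none found. */
int demo_index_search(const long long *ts, int n, long long wanted, int backward)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (backward) {
            if (ts[i] <= wanted)
                best = i;      /* keep the latest entry not past wanted */
        } else if (ts[i] >= wanted) {
            return i;          /* first entry at or after wanted */
        }
    }
    return best;
}
```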
Write a packet to an output media file ensuring correct interleaving.
media file handle
The packet containing the data to be written. If the packet is reference-counted, this function will take ownership of this reference and unreference it later when it sees fit. If the packet is not reference-counted, libavformat will make a copy. The returned packet will be blank (as if returned from av_packet_alloc()), even on error. This parameter can be NULL (at any time, not just at the end), to flush the interleaving queues. Packet's "stream_index" field must be set to the index of the corresponding stream in "s->streams". The timestamps ( "pts", "dts") must be set to correct values in the stream's timebase (unless the output format is flagged with the AVFMT_NOTIMESTAMPS flag, then they can be set to AV_NOPTS_VALUE). The dts for subsequent packets in one stream must be strictly increasing (unless the output format is flagged with the AVFMT_TS_NONSTRICT, then they merely have to be nondecreasing). "duration" should also be set if known.
0 on success, a negative AVERROR on error.
Write an uncoded frame to an output media file.
>=0 for success, a negative code on error
Return a positive value if the given filename has one of the given extensions, 0 otherwise.
file name to check against the given extensions
a comma-separated list of filename extensions
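A simplified, hypothetical version of that check (`demo_match_ext`, not the real av_match_ext()): it splits the list on commas and compares the part after the last '.' case-insensitively.

```c
#include <string.h>
#include <ctype.h>

/* Sketch of av_match_ext(): return 1 if `filename` ends with a '.'
 * followed by one of the comma-separated `extensions`, else 0. */
int demo_match_ext(const char *filename, const char *extensions)
{
    const char *dot = strrchr(filename, '.');
    if (!dot || !extensions)
        return 0;
    const char *ext = dot + 1;
    const char *p = extensions;
    while (*p) {
        const char *end = strchr(p, ',');
        size_t len = end ? (size_t)(end - p) : strlen(p);
        if (len == strlen(ext)) {
            size_t i;
            for (i = 0; i < len && tolower((unsigned char)p[i]) ==
                                   tolower((unsigned char)ext[i]); i++)
                ;
            if (i == len)
                return 1;      /* extension matched one list entry */
        }
        p = end ? end + 1 : p + len;
    }
    return 0;
}
```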
Iterate over all registered muxers.
a pointer where libavformat will store the iteration state. Must point to NULL to start the iteration.
the next registered muxer or NULL when the iteration is finished
Send a nice dump of a packet to the log.
A pointer to an arbitrary struct of which the first field is a pointer to an AVClass struct.
The importance level of the message, lower values signifying higher importance.
packet to dump
True if the payload must be displayed, too.
AVStream that the packet belongs to
Send a nice dump of a packet to the specified file stream.
The file stream pointer where the dump should be sent to.
packet to dump
True if the payload must be displayed, too.
AVStream that the packet belongs to
Like av_probe_input_buffer2() but returns 0 on success
Probe a bytestream to determine the input format. Each time a probe returns with a score that is too low, the probe buffer size is increased and another attempt is made. When the maximum probe size is reached, the input format with the highest score is returned.
the bytestream to probe
the input format is put here
the url of the stream
the log context
the offset within the bytestream to probe from
the maximum probe buffer size (zero for default)
the score in case of success (the maximal score is AVPROBE_SCORE_MAX), a negative value corresponding to an AVERROR code otherwise
Guess the file format.
data to be probed
Whether the file is already opened; determines whether demuxers with or without AVFMT_NOFILE are probed.
Guess the file format.
data to be probed
Whether the file is already opened; determines whether demuxers with or without AVFMT_NOFILE are probed.
A probe score larger than this is required to accept a detection; the variable is set to the actual detection score afterwards. If the score is <= AVPROBE_SCORE_MAX / 4 it is recommended to retry with a larger probe buffer.
Guess the file format.
Whether the file is already opened; determines whether demuxers with or without AVFMT_NOFILE are probed.
The score of the best detection.
Return the next frame of a stream. This function returns what is stored in the file, and does not validate that what is there are valid frames for the decoder. It will split what is stored in the file into frames and return one for each call. It will not omit invalid data between valid frames so as to give the decoder the maximum information possible for decoding.
0 if OK, < 0 on error or end of file. On error, pkt will be blank (as if it came from av_packet_alloc()).
Pause a network-based stream (e.g. RTSP stream).
Start playing a network-based stream (e.g. RTSP stream) at the current position.
Generate an SDP for an RTP session.
array of AVFormatContexts describing the RTP streams. If the array is composed by only one context, such context can contain multiple AVStreams (one AVStream per RTP stream). Otherwise, all the contexts in the array (an AVCodecContext per RTP stream) must contain only one AVStream.
number of AVCodecContexts contained in ac
buffer where the SDP will be stored (must be allocated by the caller)
the size of the buffer
0 if OK, AVERROR_xxx on error
Seek to the keyframe at 'timestamp' in the stream specified by 'stream_index'.
media file handle
If stream_index is (-1), a default stream is selected, and timestamp is automatically converted from AV_TIME_BASE units to the stream specific time_base.
Timestamp in AVStream.time_base units or, if no stream is specified, in AV_TIME_BASE units.
flags which select direction and seeking mode
>= 0 on success
Wrap an existing array as stream side data.
stream
side information type
the side data array. It must be allocated with the av_malloc() family of functions. The ownership of the data is transferred to st.
side information size
zero on success, a negative AVERROR code on failure. On failure, the stream is unchanged and the data remains owned by the caller.
Get the AVClass for AVStream. It can be used in combination with AV_OPT_SEARCH_FAKE_OBJ for examining options.
Get the internal codec timebase from a stream.
input stream to extract the timebase from
Returns the pts of the last muxed packet + its duration
Get side information from stream.
stream
desired side information type
If supplied, *size will be set to the size of the side data or to zero if the desired side data is not present.
pointer to data if present or NULL otherwise
Allocate new side information for a stream.
stream
desired side information type
side information size
pointer to freshly allocated data or NULL otherwise
Split a URL string into components.
the buffer for the protocol
the size of the proto buffer
the buffer for the authorization
the size of the authorization buffer
the buffer for the host name
the size of the hostname buffer
a pointer to store the port number in
the buffer for the path
the size of the path buffer
the URL to split
Write a packet to an output media file.
media file handle
The packet containing the data to be written. Note that unlike av_interleaved_write_frame(), this function does not take ownership of the packet passed to it (though some muxers may make an internal reference to the input packet). This parameter can be NULL (at any time, not just at the end), in order to immediately flush data buffered within the muxer, for muxers that buffer up data internally before writing it to the output. Packet's "stream_index" field must be set to the index of the corresponding stream in "s->streams". The timestamps ("pts", "dts") must be set to correct values in the stream's timebase (unless the output format is flagged with the AVFMT_NOTIMESTAMPS flag, then they can be set to AV_NOPTS_VALUE). The dts for subsequent packets passed to this function must be strictly increasing when compared in their respective timebases (unless the output format is flagged with the AVFMT_TS_NONSTRICT, then they merely have to be nondecreasing). "duration" should also be set if known.
< 0 on error, = 0 if OK, 1 if flushed and there is no more data to flush
Write the stream trailer to an output media file and free the file private data.
media file handle
0 if OK, AVERROR_xxx on error
Write an uncoded frame to an output media file.
Test whether a muxer supports uncoded frame.
>=0 if an uncoded frame can be written to that muxer and stream, < 0 if not
Allocate an AVFormatContext. avformat_free_context() can be used to free the context and everything allocated by the framework within it.
Allocate an AVFormatContext for an output format. avformat_free_context() can be used to free the context and everything allocated by the framework within it.
format to use for allocating the context, if NULL format_name and filename are used instead
the name of output format to use for allocating the context, if NULL filename is used instead
the name of the filename to use for allocating the context, may be NULL
>= 0 in case of success, a negative AVERROR code in case of failure
Close an opened input AVFormatContext. Free it and all its contents and set *s to NULL.
Return the libavformat build-time configuration.
Read packets of a media file to get stream information. This is useful for file formats with no headers such as MPEG. This function also computes the real framerate in case of MPEG-2 repeat frame mode. The logical file position is not changed by this function; examined packets may be buffered for later processing.
media file handle
If non-NULL, an ic.nb_streams long array of pointers to dictionaries, where i-th member contains options for codec corresponding to i-th stream. On return each dictionary will be filled with options that were not found.
>=0 if OK, AVERROR_xxx on error
Discard all internally buffered data. This can be useful when dealing with discontinuities in the byte stream. Generally works only with formats that can resync. This includes headerless formats like MPEG-TS/TS but should also work with NUT, Ogg and in a limited way AVI for example.
media file handle
>=0 on success, error code otherwise
Free an AVFormatContext and all its streams.
context to free
Get the AVClass for AVFormatContext. It can be used in combination with AV_OPT_SEARCH_FAKE_OBJ for examining options.
Returns the table mapping MOV FourCCs for audio to AVCodecID.
the table mapping MOV FourCCs for audio to AVCodecID.
Returns the table mapping MOV FourCCs for video to libavcodec AVCodecID.
the table mapping MOV FourCCs for video to libavcodec AVCodecID.
Returns the table mapping RIFF FourCCs for audio to AVCodecID.
the table mapping RIFF FourCCs for audio to AVCodecID.
Get the tables mapping RIFF FourCCs to libavcodec AVCodecIDs. The tables are meant to be passed to av_codec_get_id()/av_codec_get_tag().
the table mapping RIFF FourCCs for video to libavcodec AVCodecID.
Get the index entry count for the given AVStream.
stream
the number of index entries in the stream
Get the AVIndexEntry corresponding to the given index.
Stream containing the requested AVIndexEntry.
The desired index.
A pointer to the requested AVIndexEntry if it exists, NULL otherwise.
Get the AVIndexEntry corresponding to the given timestamp.
Stream containing the requested AVIndexEntry.
If AVSEEK_FLAG_BACKWARD is set, the returned entry will correspond to the timestamp which is <= the requested one; if backward is 0, it will be >=. If AVSEEK_FLAG_ANY is set, seek to any frame, otherwise only to keyframes.
A pointer to the requested AVIndexEntry if it exists, NULL otherwise.
Allocate the stream private data and initialize the codec, but do not write the header. May optionally be used before avformat_write_header to initialize stream parameters before actually writing the header. If using this function, do not pass the same options to avformat_write_header.
Media file handle, must be allocated with avformat_alloc_context(). Its oformat field must be set to the desired output format; Its pb field must be set to an already opened AVIOContext.
An AVDictionary filled with AVFormatContext and muxer-private options. On return this parameter will be destroyed and replaced with a dict containing options that were not found. May be NULL.
AVSTREAM_INIT_IN_WRITE_HEADER on success if the codec requires avformat_write_header to fully initialize, AVSTREAM_INIT_IN_INIT_OUTPUT on success if the codec has been fully initialized, negative AVERROR on failure.
Return the libavformat license.
Check if the stream st contained in s is matched by the stream specifier spec.
>0 if st is matched by spec; 0 if st is not matched by spec; AVERROR code if spec is invalid
Undo the initialization done by avformat_network_init. Call it only once for each time you called avformat_network_init.
Do global initialization of network libraries. This is optional, and not recommended anymore.
Add a new stream to a media file.
media file handle
unused, does nothing
newly created stream or NULL on error.
Open an input stream and read the header. The codecs are not opened. The stream must be closed with avformat_close_input().
Pointer to user-supplied AVFormatContext (allocated by avformat_alloc_context). May be a pointer to NULL, in which case an AVFormatContext is allocated by this function and written into ps. Note that a user-supplied AVFormatContext will be freed on failure.
URL of the stream to open.
If non-NULL, this parameter forces a specific input format. Otherwise the format is autodetected.
A dictionary filled with AVFormatContext and demuxer-private options. On return this parameter will be destroyed and replaced with a dict containing options that were not found. May be NULL.
0 on success, a negative AVERROR on failure.
Test if the given container can store a codec.
container to check for compatibility
codec to potentially store in container
standards compliance level, one of FF_COMPLIANCE_*
1 if codec with ID codec_id can be stored in ofmt, 0 if it cannot. A negative number if this information is not available.
Seek to timestamp ts. Seeking will be done so that the point from which all active streams can be presented successfully will be closest to ts and within min/max_ts. Active streams are all streams that have AVStream.discard < AVDISCARD_ALL.
media file handle
index of the stream which is used as time base reference
smallest acceptable timestamp
target timestamp
largest acceptable timestamp
flags
>=0 on success, error code otherwise
Transfer internal timing information from one stream to another.
target output format for ost
output stream which needs timings copy and adjustments
reference input stream to copy timings from
define from where the stream codec timebase needs to be imported
Return the LIBAVFORMAT_VERSION_INT constant.
Allocate the stream private data and write the stream header to an output media file.
Media file handle, must be allocated with avformat_alloc_context(). Its oformat field must be set to the desired output format; Its pb field must be set to an already opened AVIOContext.
An AVDictionary filled with AVFormatContext and muxer-private options. On return this parameter will be destroyed and replaced with a dict containing options that were not found. May be NULL.
AVSTREAM_INIT_IN_WRITE_HEADER on success if the codec had not already been fully initialized in avformat_init, AVSTREAM_INIT_IN_INIT_OUTPUT on success if the codec had already been fully initialized in avformat_init, negative AVERROR on failure.
Accept and allocate a client context on a server context.
the server context
the client context, must be unallocated
>= 0 on success or a negative value corresponding to an AVERROR on failure
Allocate and initialize an AVIOContext for buffered I/O. It must be later freed with avio_context_free().
Memory block for input/output operations via AVIOContext. The buffer must be allocated with av_malloc() and friends. It may be freed and replaced with a new buffer by libavformat. AVIOContext.buffer holds the buffer currently in use, which must be later freed with av_free().
The buffer size is very important for performance. For protocols with fixed blocksize it should be set to this blocksize. For others a typical size is a cache page, e.g. 4kb.
Set to 1 if the buffer should be writable, 0 otherwise.
An opaque pointer to user-specific data.
A function for refilling the buffer, may be NULL. For stream protocols, must never return 0 but rather a proper AVERROR code.
A function for writing the buffer contents, may be NULL. The function may not change the input buffer's content.
A function for seeking to specified byte position, may be NULL.
Allocated AVIOContext or NULL on failure.
Return AVIO_FLAG_* access flags corresponding to the access permissions of the resource in url, or a negative value corresponding to an AVERROR code in case of failure. The returned access flags are masked by the value in flags.
Close the resource accessed by the AVIOContext s and free it. This function can only be used if s was opened by avio_open().
0 on success, an AVERROR < 0 on error.
Close directory.
directory read context.
>=0 on success or negative on error.
Return the written size and a pointer to the buffer. The buffer must be freed with av_free(). Padding of AV_INPUT_BUFFER_PADDING_SIZE is added to the buffer.
IO context
pointer to a byte buffer
the length of the byte buffer
Close the resource accessed by the AVIOContext *s, free it and set the pointer pointing to it to NULL. This function can only be used if s was opened by avio_open().
0 on success, an AVERROR < 0 on error.
Free the supplied IO context and everything associated with it.
Double pointer to the IO context. This function will write NULL into s.
Iterate through names of available protocols.
A private pointer representing current protocol. It must be a pointer to NULL on first iteration and will be updated by successive calls to avio_enum_protocols.
If set to 1, iterate over output protocols, otherwise over input protocols.
A static string containing the name of current protocol or NULL
Similar to feof() but also returns nonzero on read errors.
non-zero if and only if at end of file or a read error happened when reading.
Return the name of the protocol that will handle the passed URL.
Name of the protocol or NULL.
Force flushing of buffered data.
Free entry allocated by avio_read_dir().
entry to be freed.
Return the written size and a pointer to the buffer. The AVIOContext stream is left intact. The buffer must NOT be freed. No padding is added to the buffer.
IO context
pointer to a byte buffer
the length of the byte buffer
Read a string from pb into buf. The reading will terminate when either a NULL character was encountered, maxlen bytes have been read, or nothing more can be read from pb. The result is guaranteed to be NULL-terminated, it will be truncated if buf is too small. Note that the string is not interpreted or validated in any way, it might get truncated in the middle of a sequence for multi-byte encodings.
number of bytes read (always <= maxlen). If reading ends on EOF or error, the return value will be one more than bytes actually read.
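The termination and truncation rules above can be sketched against an in-memory byte source instead of an AVIOContext. This is a hypothetical helper (`demo_get_str`); its return value is simplified to the plain number of bytes consumed.

```c
/* Sketch of avio_get_str() semantics over an in-memory source: copy
 * bytes into buf until a NUL byte, maxlen bytes read, or the source
 * is exhausted; the result is always NUL-terminated and is truncated
 * if buf is too small. Returns the number of bytes read. */
int demo_get_str(const unsigned char *src, int src_len, int maxlen,
                 char *buf, int buflen)
{
    int nread = 0, out = 0;
    while (nread < maxlen && nread < src_len) {
        unsigned char c = src[nread++];
        if (c == '\0')
            break;                  /* terminator consumed, stop */
        if (out + 1 < buflen)
            buf[out++] = (char)c;   /* keep room for the terminator */
    }
    if (buflen > 0)
        buf[out] = '\0';            /* result is always NUL-terminated */
    return nread;
}
```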
Read a UTF-16 string from pb and convert it to UTF-8. The reading will terminate when either a null or invalid character was encountered or maxlen bytes have been read.
number of bytes read (always <= maxlen)
Perform one step of the protocol handshake to accept a new client. This function must be called on a client returned by avio_accept() before using it as a read/write context. It is separate from avio_accept() because it may block. A step of the handshake is defined by places where the application may decide to change the proceedings. For example, on a protocol with a request header and a reply header, each one can constitute a step because the application may use the parameters from the request to change parameters in the reply; or each individual chunk of the request can constitute a step. If the handshake is already finished, avio_handshake() does nothing and returns 0 immediately.
the client context to perform the handshake on
0 on a complete and successful handshake; > 0 if the handshake progressed, but is not complete; < 0 for an AVERROR code
Create and initialize an AVIOContext for accessing the resource indicated by url.
Used to return the pointer to the created AVIOContext. In case of failure the pointed to value is set to NULL.
resource to access
flags which control how the resource indicated by url is to be opened
>= 0 in case of success, a negative value corresponding to an AVERROR code in case of failure
Open directory for reading.
directory read context. Pointer to a NULL pointer must be passed.
directory to be listed.
A dictionary filled with protocol-private options. On return this parameter will be destroyed and replaced with a dictionary containing options that were not found. May be NULL.
>=0 on success or negative on error.
Open a write only memory stream.
new IO context
zero if no error.
Create and initialize an AVIOContext for accessing the resource indicated by url.
Used to return the pointer to the created AVIOContext. In case of failure the pointed to value is set to NULL.
resource to access
flags which control how the resource indicated by url is to be opened
an interrupt callback to be used at the protocols level
A dictionary filled with protocol-private options. On return this parameter will be destroyed and replaced with a dict containing options that were not found. May be NULL.
>= 0 in case of success, a negative value corresponding to an AVERROR code in case of failure
Pause and resume playing - only meaningful if using a network streaming protocol (e.g. MMS).
IO context from which to call the read_pause function pointer
1 for pause, 0 for resume
Write a NULL-terminated array of strings to the context. Usually you don't need to use this function directly but rather its macro wrapper, avio_print.
Writes a formatted string to the context.
number of bytes written, < 0 on error.
Get AVClass by names of available protocols.
An AVClass for the input protocol name, or NULL
Write a NULL-terminated string.
number of bytes written.
Convert a UTF-8 string to UTF-16BE and write it.
the AVIOContext
NULL-terminated UTF-8 string
number of bytes written.
Convert a UTF-8 string to UTF-16LE and write it.
the AVIOContext
NULL-terminated UTF-8 string
number of bytes written.
Read size bytes from AVIOContext into buf.
number of bytes read or AVERROR
Get next directory entry.
directory read context.
next entry or NULL when no more entries.
>=0 on success or negative on error. End of list is not considered an error.
Read size bytes from AVIOContext into buf. Unlike avio_read(), this is allowed to read fewer bytes than requested. The missing bytes can be read in the next call. This always tries to read at least 1 byte. Useful to reduce latency in certain cases.
number of bytes read or AVERROR
Read contents of h into print buffer, up to max_size bytes, or up to EOF.
0 for success (max_size bytes read or EOF reached), negative error code otherwise
fseek() equivalent for AVIOContext.
new position or AVERROR.
Seek to a given timestamp relative to some component stream. Only meaningful if using a network streaming protocol (e.g. MMS).
IO context from which to call the seek function pointers
The stream index that the timestamp is relative to. If stream_index is (-1) the timestamp should be in AV_TIME_BASE units from the beginning of the presentation. If a stream_index >= 0 is used and the protocol does not support seeking based on component streams, the call will fail.
timestamp in AVStream.time_base units or if there is no stream specified then in AV_TIME_BASE units.
Optional combination of AVSEEK_FLAG_BACKWARD, AVSEEK_FLAG_BYTE and AVSEEK_FLAG_ANY. The protocol may silently ignore AVSEEK_FLAG_BACKWARD and AVSEEK_FLAG_ANY, but AVSEEK_FLAG_BYTE will fail if used and not supported.
>= 0 on success
Get the filesize.
filesize or AVERROR
Skip a given number of bytes forward.
new position or AVERROR.
Writes a formatted string to the context taking a va_list.
number of bytes written, < 0 on error.
Mark the written bytestream as a specific type.
the stream time the current bytestream pos corresponds to (in AV_TIME_BASE units), or AV_NOPTS_VALUE if unknown or not applicable
the kind of data written starting at the current pos
Add two rationals.
First rational
Second rational
b+c
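A minimal sketch of exact rational addition as described above, with a gcd reduction step. The `DemoRational` type and helpers are hypothetical stand-ins; the real av_add_q() uses av_reduce() with 64-bit intermediates to guard against overflow.

```c
/* Minimal rational type mirroring AVRational's num/den layout. */
typedef struct { int num, den; } DemoRational;

static int demo_gcd(int a, int b)
{
    while (b) { int t = a % b; a = b; b = t; }
    return a < 0 ? -a : a;
}

/* Sketch of av_add_q():
 * b + c = (b.num*c.den + c.num*b.den) / (b.den*c.den), reduced. */
DemoRational demo_add_q(DemoRational b, DemoRational c)
{
    DemoRational r = { b.num * c.den + c.num * b.den, b.den * c.den };
    int g = demo_gcd(r.num, r.den);
    if (g) { r.num /= g; r.den /= g; }
    return r;
}
```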
Add a value to a timestamp.
Input timestamp time base
Input timestamp
Time base of `inc`
Value to be added
Allocate an AVAudioFifo.
sample format
number of channels
initial allocation size, in samples
newly allocated AVAudioFifo, or NULL on error
Drain data from an AVAudioFifo.
AVAudioFifo to drain
number of samples to drain
0 if OK, or negative AVERROR code on failure
Free an AVAudioFifo.
AVAudioFifo to free
Peek data from an AVAudioFifo.
AVAudioFifo to read from
audio data plane pointers
number of samples to peek
number of samples actually peeked, or negative AVERROR code on failure. The number of samples actually peeked will not be greater than nb_samples, and will only be less than nb_samples if av_audio_fifo_size is less than nb_samples.
Peek data from an AVAudioFifo.
AVAudioFifo to read from
audio data plane pointers
number of samples to peek
offset from current read position
number of samples actually peeked, or negative AVERROR code on failure. The number of samples actually peeked will not be greater than nb_samples, and will only be less than nb_samples if av_audio_fifo_size is less than nb_samples.
Read data from an AVAudioFifo.
AVAudioFifo to read from
audio data plane pointers
number of samples to read
number of samples actually read, or negative AVERROR code on failure. The number of samples actually read will not be greater than nb_samples, and will only be less than nb_samples if av_audio_fifo_size is less than nb_samples.
Reallocate an AVAudioFifo.
AVAudioFifo to reallocate
new allocation size, in samples
0 if OK, or negative AVERROR code on failure
Reset the AVAudioFifo buffer.
AVAudioFifo to reset
Get the current number of samples in the AVAudioFifo available for reading.
the AVAudioFifo to query
number of samples available for reading
Get the current number of samples in the AVAudioFifo available for writing.
the AVAudioFifo to query
number of samples available for writing
Write data to an AVAudioFifo.
AVAudioFifo to write to
audio data plane pointers
number of samples to write
number of samples actually written, or negative AVERROR code on failure. If successful, the number of samples actually written will always be nb_samples.
Append a description of a channel layout to a bprint buffer.
Allocate an AVBuffer of the given size using av_malloc().
an AVBufferRef of given size or NULL when out of memory
Same as av_buffer_alloc(), except the returned buffer will be initialized to zero.
Create an AVBuffer from an existing array.
data array
size of data in bytes
a callback for freeing this buffer's data
opaque parameter to be passed to the free callback
a combination of AV_BUFFER_FLAG_*
an AVBufferRef referring to data on success, NULL on failure.
Default free callback, which calls av_free() on the buffer data. This function is meant to be passed to av_buffer_create(), not called directly.
Returns the opaque parameter set by av_buffer_create.
the opaque parameter set by av_buffer_create.
Returns 1 if the caller may write to the data referred to by buf (which is true if and only if buf is the only reference to the underlying AVBuffer), 0 otherwise. A positive answer is valid until av_buffer_ref() is called on buf.
1 if the caller may write to the data referred to by buf (which is true if and only if buf is the only reference to the underlying AVBuffer), 0 otherwise. A positive answer is valid until av_buffer_ref() is called on buf.
Create a writable reference from a given buffer reference, avoiding data copy if possible.
buffer reference to make writable. On success, buf is either left untouched, or it is unreferenced and a new writable AVBufferRef is written in its place. On failure, buf is left untouched.
0 on success, a negative AVERROR on failure.
Query the original opaque parameter of an allocated buffer in the pool.
a buffer reference to a buffer returned by av_buffer_pool_get.
the opaque parameter set by the buffer allocator function of the buffer pool.
Allocate a new AVBuffer, reusing an old buffer from the pool when available. This function may be called simultaneously from multiple threads.
a reference to the new buffer on success, NULL on error.
Allocate and initialize a buffer pool.
size of each buffer in this pool
a function that will be used to allocate new buffers when the pool is empty. May be NULL, then the default allocator will be used (av_buffer_alloc()).
newly created buffer pool on success, NULL on error.
Allocate and initialize a buffer pool with a more complex allocator.
size of each buffer in this pool
arbitrary user data used by the allocator
a function that will be used to allocate new buffers when the pool is empty. May be NULL, then the default allocator will be used (av_buffer_alloc()).
a function that will be called immediately before the pool is freed. I.e. after av_buffer_pool_uninit() is called by the caller and all the frames are returned to the pool and freed. It is intended to uninitialize the user opaque data. May be NULL.
newly created buffer pool on success, NULL on error.
Mark the pool as being available for freeing. It will actually be freed only once all the allocated buffers associated with the pool are released. Thus it is safe to call this function while some of the allocated buffers are still in use.
pointer to the pool to be freed. It will be set to NULL.
Reallocate a given buffer.
a buffer reference to reallocate. On success, buf will be unreferenced and a new reference with the required size will be written in its place. On failure buf will be left untouched. *buf may be NULL, then a new buffer is allocated.
required new buffer size.
0 on success, a negative AVERROR on failure.
Create a new reference to an AVBuffer.
a new AVBufferRef referring to the same AVBuffer as buf or NULL on failure.
Ensure dst refers to the same data as src.
Pointer to either a valid buffer reference or NULL. On success, this will point to a buffer reference equivalent to src. On failure, dst will be left untouched.
A buffer reference to replace dst with. May be NULL, then this function is equivalent to av_buffer_unref(dst).
0 on success, AVERROR(ENOMEM) on memory allocation failure.
Free a given reference and automatically free the buffer if there are no more references to it.
the reference to be freed. The pointer is set to NULL on return.
Allocate a memory block for an array with av_mallocz().
Number of elements
Size of the single element
Pointer to the allocated block, or `NULL` if the block cannot be allocated
Get a human readable string describing a given channel.
pre-allocated buffer where to put the generated string
size in bytes of the buffer.
amount of bytes needed to hold the output string, or a negative AVERROR on failure. If the returned value is bigger than buf_size, then the string was truncated.
bprint variant of av_channel_description().
Get a channel by name. This is the inverse function of av_channel_name().
the channel with the given name, or AV_CHAN_NONE when name does not identify a known channel
Get the channel with the given index in a channel layout.
input channel layout
channel with the index idx in channel_layout on success or AV_CHAN_NONE on failure (if idx is not valid or the channel order is unspecified)
Get a channel described by the given string.
input channel layout
a channel described by the given string in channel_layout on success or AV_CHAN_NONE on failure (if the string is not valid or the channel order is unspecified)
Check whether a channel layout is valid, i.e. can possibly describe audio data.
input channel layout
1 if channel_layout is valid, 0 otherwise.
Check whether two channel layouts are semantically the same, i.e. the same channels are present on the same positions in both.
input channel layout
input channel layout
0 if chl and chl1 are equal, 1 if they are not equal. A negative AVERROR code if one or both are invalid.
Make a copy of a channel layout. This differs from just assigning src to dst in that it allocates and copies the map for AV_CHANNEL_ORDER_CUSTOM.
destination channel layout
source channel layout
0 on success, a negative AVERROR on error.
Get the default channel layout for a given number of channels.
number of channels
Get a human-readable string describing the channel layout properties. The string will be in the same format that is accepted by av_channel_layout_from_string(), making it possible to rebuild the same channel layout, except for opaque pointers.
channel layout to be described
pre-allocated buffer where to put the generated string
size in bytes of the buffer.
amount of bytes needed to hold the output string, or a negative AVERROR on failure. If the returned value is bigger than buf_size, then the string was truncated.
bprint variant of av_channel_layout_describe().
0 on success, or a negative AVERROR value on failure.
Get the channel with the given index in channel_layout.
Initialize a native channel layout from a bitmask indicating which channels are present.
the layout structure to be initialized
bitmask describing the channel layout
0 on success, AVERROR(EINVAL) for invalid mask values
Initialize a channel layout from a given string description. The input string can be represented by: - the formal channel layout name (returned by av_channel_layout_describe()) - single or multiple channel names (returned by av_channel_name(), eg. "FL", or concatenated with "+", each optionally containing a custom name after a "@", eg. "FL@Left+FR@Right+LFE") - a decimal or hexadecimal value of a native channel layout (eg. "4" or "0x4") - the number of channels with default layout (eg. "4c") - the number of unordered channels (eg. "4C" or "4 channels") - the ambisonic order followed by optional non-diegetic channels (eg. "ambisonic 2+stereo")
input channel layout
string describing the channel layout
0 if the channel layout was detected, AVERROR_INVALIDDATA otherwise
Get the index of a given channel in a channel layout. In case multiple channels are found, only the first match will be returned.
input channel layout
index of channel in channel_layout on success or a negative number if channel is not present in channel_layout.
Get the index in a channel layout of a channel described by the given string. In case multiple channels are found, only the first match will be returned.
input channel layout
a channel index described by the given string, or a negative AVERROR value.
Iterate over all standard channel layouts.
a pointer where libavutil will store the iteration state. Must point to NULL to start the iteration.
the standard channel layout or NULL when the iteration is finished
Find out what channels from a given set are present in a channel layout, without regard for their positions.
input channel layout
a combination of AV_CH_* representing a set of channels
a bitfield representing all the channels from mask that are present in channel_layout
Free any allocated data in the channel layout and reset the channel count to 0.
the layout structure to be uninitialized
Get a human readable string in an abbreviated form describing a given channel. This is the inverse function of av_channel_from_string().
pre-allocated buffer where to put the generated string
size in bytes of the buffer.
amount of bytes needed to hold the output string, or a negative AVERROR on failure. If the returned value is bigger than buf_size, then the string was truncated.
bprint variant of av_channel_name().
Returns the AVChromaLocation value for name, or a negative AVERROR if not found.
the AVChromaLocation value for name, or a negative AVERROR if not found.
Returns the name for the provided chroma location, or NULL if unknown.
the name for the provided chroma location, or NULL if unknown.
Returns the AVColorPrimaries value for name, or a negative AVERROR if not found.
the AVColorPrimaries value for name, or a negative AVERROR if not found.
Returns the name for the provided color primaries, or NULL if unknown.
the name for the provided color primaries, or NULL if unknown.
Returns the AVColorRange value for name, or a negative AVERROR if not found.
the AVColorRange value for name, or a negative AVERROR if not found.
Returns the name for the provided color range, or NULL if unknown.
the name for the provided color range, or NULL if unknown.
Returns the AVColorSpace value for name, or a negative AVERROR if not found.
the AVColorSpace value for name, or a negative AVERROR if not found.
Returns the name for the provided color space, or NULL if unknown.
the name for the provided color space, or NULL if unknown.
Returns the AVColorTransferCharacteristic value for name, or a negative AVERROR if not found.
the AVColorTransferCharacteristic value for name, or a negative AVERROR if not found.
Returns the name for the provided color transfer, or NULL if unknown.
the name for the provided color transfer, or NULL if unknown.
Compare the remainders of two integer operands divided by a common divisor.
Divisor; must be a power of 2
- a negative value if `a % mod < b % mod` - a positive value if `a % mod > b % mod` - zero if `a % mod == b % mod`
Compare two timestamps each in its own time base.
One of the following values: - -1 if `ts_a` is before `ts_b` - 1 if `ts_a` is after `ts_b` - 0 if they represent the same position
Allocate an AVContentLightMetadata structure and set its fields to default values. The resulting struct can be freed using av_freep().
An AVContentLightMetadata filled with default values or NULL on failure.
Allocate a complete AVContentLightMetadata and add it to the frame.
The frame which side data is added to.
The AVContentLightMetadata structure to be filled by caller.
Returns the number of logical CPU cores present.
the number of logical CPU cores present.
Overrides cpu count detection and forces the specified count. Count < 1 disables forcing of specific count.
Get the maximum data alignment that may be required by FFmpeg.
Convert a double precision floating point number to a rational.
`double` to convert
Maximum allowed numerator and denominator
`d` in AVRational form
Return the context name
The AVClass context
The AVClass class_name
Copy entries from one AVDictionary struct into another.
pointer to a pointer to a AVDictionary struct. If *dst is NULL, this function will allocate a struct for you and put it in *dst
pointer to source AVDictionary struct
flags to use when setting entries in *dst
0 on success, negative AVERROR code on failure. If dst was allocated by this function, callers should free the associated memory.
Get number of entries in dictionary.
dictionary
number of entries in dictionary
Free all the memory allocated for an AVDictionary struct and all keys and values.
Get a dictionary entry with matching key.
matching key
Set to the previous matching element to find the next. If set to NULL the first matching element is returned.
a collection of AV_DICT_* flags controlling how the entry is retrieved
found entry or NULL in case no matching entry was found in the dictionary
Get dictionary entries as a string.
dictionary
Pointer to buffer that will be allocated with string containing entries. Buffer must be freed by the caller when it is no longer needed.
character used to separate key from value
character used to separate two pairs from each other
>= 0 on success, negative on error
Parse the key/value pairs list and add the parsed entries to a dictionary.
a 0-terminated list of characters used to separate key from value
a 0-terminated list of characters used to separate two pairs from each other
flags to use when adding to dictionary. AV_DICT_DONT_STRDUP_KEY and AV_DICT_DONT_STRDUP_VAL are ignored since the key/value tokens will always be duplicated.
0 on success, negative AVERROR code on failure
Set the given entry in *pm, overwriting an existing entry.
pointer to a pointer to a dictionary struct. If *pm is NULL a dictionary struct is allocated and put in *pm.
entry key to add to *pm (will either be av_strduped or added as a new key depending on flags)
entry value to add to *pm (will be av_strduped or added as a new value depending on flags). Passing a NULL value will cause an existing entry to be deleted.
>= 0 on success otherwise an error code < 0
Convenience wrapper for av_dict_set that converts the value to a string and stores it.
Divide one rational by another.
First rational
Second rational
b/c
Allocate an AVDynamicHDRPlus structure and set its fields to default values. The resulting struct can be freed using av_freep().
An AVDynamicHDRPlus filled with default values or NULL on failure.
Allocate a complete AVDynamicHDRPlus and add it to the frame.
The frame which side data is added to.
The AVDynamicHDRPlus structure to be filled by caller or NULL on failure.
Add the pointer to an element to a dynamic array.
Pointer to the array to grow
Pointer to the number of elements in the array
Element to add
Add an element to a dynamic array.
>=0 on success, negative otherwise
Add an element of size `elem_size` to a dynamic array.
Pointer to the array to grow
Pointer to the number of elements in the array
Size in bytes of an element in the array
Pointer to the data of the element to add. If `NULL`, the space of the newly added element is allocated but left uninitialized.
Pointer to the data of the element to copy in the newly allocated space
Allocate a buffer, reusing the given one if large enough.
Pointer to pointer to an already allocated buffer. `*ptr` will be overwritten with pointer to new buffer on success or `NULL` on failure
Pointer to the size of buffer `*ptr`. `*size` is updated to the new allocated size, in particular 0 in case of failure.
Desired minimal size of buffer `*ptr`
Allocate and clear a buffer, reusing the given one if large enough.
Pointer to pointer to an already allocated buffer. `*ptr` will be overwritten with pointer to new buffer on success or `NULL` on failure
Pointer to the size of buffer `*ptr`. `*size` is updated to the new allocated size, in particular 0 in case of failure.
Desired minimal size of buffer `*ptr`
Reallocate the given buffer if it is not large enough, otherwise do nothing.
Already allocated buffer, or `NULL`
Pointer to the size of buffer `ptr`. `*size` is updated to the new allocated size, in particular 0 in case of failure.
Desired minimal size of buffer `ptr`
`ptr` if the buffer is large enough, a pointer to newly reallocated buffer if the buffer was not large enough, or `NULL` in case of error
Read the file with name filename, and put its content in a newly allocated buffer or map it with mmap() when available. In case of success, set *bufptr to the read or mmapped buffer, and *size to the size in bytes of the buffer in *bufptr. Unlike mmap, this function succeeds with zero-sized files; in this case *bufptr will be set to NULL and *size will be set to 0. The returned buffer must be released with av_file_unmap().
loglevel offset used for logging
context used for logging
a non negative number in case of success, a negative value corresponding to an AVERROR error code in case of failure
Unmap or free the buffer bufptr created by av_file_map().
size in bytes of bufptr, must be the same as returned by av_file_map()
Compute what kind of losses will occur when converting from one specific pixel format to another. When converting from one pixel format to another, information loss may occur. For example, when converting from RGB24 to GRAY, the color information will be lost. Similarly, other losses occur when converting from some formats to other formats. These losses can involve loss of chroma, but also loss of resolution, loss of color depth, loss due to the color space conversion, loss of the alpha bits or loss due to color quantization. av_get_pix_fmt_loss() informs you about the various types of losses which will occur when converting from one pixel format to another.
source pixel format
Whether the source pixel format alpha channel is used.
Combination of flags informing you what kind of losses will occur (maximum loss for an invalid dst_pix_fmt).
Find the value in a list of rationals nearest a given reference rational.
Reference rational
Array of rationals terminated by `{0, 0}`
Index of the nearest value found in the array
Open a file using a UTF-8 filename. The API of this function matches POSIX fopen(), errors are returned through errno.
Disables cpu detection and forces the specified flags. -1 is a special case that disables forcing of specific flags.
Fill the provided buffer with a string containing a FourCC (four-character code) representation.
a buffer with size in bytes of at least AV_FOURCC_MAX_STRING_SIZE
the fourcc to represent
the input buffer
Allocate an AVFrame and set its fields to default values. The resulting struct must be freed using av_frame_free().
An AVFrame filled with default values or NULL on failure.
Crop the given video AVFrame according to its crop_left/crop_top/crop_right/ crop_bottom fields. If cropping is successful, the function will adjust the data pointers and the width/height fields, and set the crop fields to 0.
the frame which should be cropped
Some combination of AV_FRAME_CROP_* flags, or 0.
>= 0 on success, a negative AVERROR on error. If the cropping fields were invalid, AVERROR(ERANGE) is returned, and nothing is changed.
Create a new frame that references the same data as src.
newly created AVFrame on success, NULL on error.
Copy the frame data from src to dst.
>= 0 on success, a negative AVERROR on error.
Copy only "metadata" fields from src to dst.
Free the frame and any dynamically allocated objects in it, e.g. extended_data. If the frame is reference counted, it will be unreferenced first.
frame to be freed. The pointer will be set to NULL.
Allocate new buffer(s) for audio or video data.
frame in which to store the new buffers.
Required buffer size alignment. If equal to 0, alignment will be chosen automatically for the current CPU. It is highly recommended to pass 0 here unless you know what you are doing.
0 on success, a negative AVERROR on error.
Get the buffer reference a given data plane is stored in.
index of the data plane of interest in frame->extended_data.
the buffer reference that contains the plane or NULL if the input frame is not valid.
Returns a pointer to the side data of a given type on success, NULL if there is no side data with such type in this frame.
a pointer to the side data of a given type on success, NULL if there is no side data with such type in this frame.
Check if the frame data is writable.
A positive value if the frame data is writable (which is true if and only if each of the underlying buffers has only one reference, namely the one stored in this frame). Return 0 otherwise.
Ensure that the frame data is writable, avoiding data copy if possible.
0 on success, a negative AVERROR on error.
Move everything contained in src to dst and reset src.
Add a new side data to a frame.
a frame to which the side data should be added
type of the added side data
size of the side data
newly added side data on success, NULL on error
Add a new side data to a frame from an existing AVBufferRef
a frame to which the side data should be added
the type of the added side data
an AVBufferRef to add as side data. The ownership of the reference is transferred to the frame.
newly added side data on success, NULL on error. On failure the frame is unchanged and the AVBufferRef remains owned by the caller.
Set up a new reference to the data described by the source frame.
0 on success, a negative AVERROR on error
Remove and free all side data instances of the given type.
Returns a string identifying the side data type
a string identifying the side data type
Unreference all the buffers referenced by frame and reset the frame fields.
Free a memory block which has been allocated with a function of av_malloc() or av_realloc() family.
Pointer to the memory block which should be freed.
Free a memory block which has been allocated with a function of av_malloc() or av_realloc() family, and set the pointer pointing to it to `NULL`.
Pointer to the pointer to the memory block which should be freed
Compute the greatest common divisor of two integer operands.
GCD of a and b up to sign; if a >= 0 and b >= 0, return value is >= 0; if a == 0 and b == 0, returns 0.
Return the best rational so that a and b are multiples of it. If the resulting denominator is larger than max_den, return def.
Return the planar<->packed alternative form of the given sample format, or AV_SAMPLE_FMT_NONE on error. If the passed sample_fmt is already in the requested planar/packed format, the format returned is the same as the input.
Return the number of bits per pixel used by the pixel format described by pixdesc. Note that this is not the same as the number of bits per sample.
Return number of bytes per sample.
the sample format
number of bytes per sample or zero if unknown for the given sample format
Get the description of a given channel.
a channel layout with a single channel
channel description on success, NULL on error
Return a channel layout id that matches name, or 0 if no match is found.
Get the index of a channel in channel_layout.
a channel layout describing exactly one channel which must be present in channel_layout.
index of channel in channel_layout on success, a negative AVERROR on error.
Return the number of channels in the channel layout.
Return a description of a channel layout. If nb_channels is <= 0, it is guessed from the channel_layout.
put here the string containing the channel layout
size in bytes of the buffer
Get the name of a given channel.
channel name on success, NULL on error.
Get the name of a colorspace.
a static string identifying the colorspace; can be NULL.
Return the flags which specify extensions supported by the CPU. The returned value is affected by av_force_cpu_flags() if that was used before. So av_get_cpu_flags() can easily be used in an application to detect the enabled cpu flags.
Return default channel layout for a given number of channels.
Return a channel layout and the number of channels based on the specified name.
channel layout specification string
parsed channel layout (0 if unknown)
number of channels
0 on success, AVERROR(EINVAL) if the parsing fails.
Return a string describing the media_type enum, NULL if media_type is unknown.
Get the packed alternative form of the given sample format.
the packed alternative form of the given sample format or AV_SAMPLE_FMT_NONE on error.
Return the number of bits per pixel for the pixel format described by pixdesc, including any padding or unused bits.
Return a single letter to describe the given picture type pict_type.
the picture type
a single character representing the picture type, '?' if pict_type is unknown
Return the pixel format corresponding to name.
Compute what kind of losses will occur when converting from one specific pixel format to another. When converting from one pixel format to another, information loss may occur. For example, when converting from RGB24 to GRAY, the color information will be lost. Similarly, other losses occur when converting from some formats to other formats. These losses can involve loss of chroma, but also loss of resolution, loss of color depth, loss due to the color space conversion, loss of the alpha bits or loss due to color quantization. av_get_pix_fmt_loss() informs you about the various types of losses which will occur when converting from one pixel format to another.
destination pixel format
source pixel format
Whether the source pixel format alpha channel is used.
Combination of flags informing you what kind of losses will occur (maximum loss for an invalid dst_pix_fmt).
Return the short name for a pixel format, NULL in case pix_fmt is unknown.
Print in buf the string corresponding to the pixel format with number pix_fmt, or a header if pix_fmt is negative.
the buffer where to write the string
the size of buf
the number of the pixel format to print the corresponding info string, or a negative value to print the corresponding header.
Get the planar alternative form of the given sample format.
the planar alternative form of the given sample format or AV_SAMPLE_FMT_NONE on error.
Return a sample format corresponding to name, or AV_SAMPLE_FMT_NONE on error.
Return the name of sample_fmt, or NULL if sample_fmt is not recognized.
Generate a string corresponding to the sample format with sample_fmt, or a header if sample_fmt is negative.
the buffer where to write the string
the size of buf
the number of the sample format to print the corresponding info string, or a negative value to print the corresponding header.
the pointer to the filled buffer or NULL if sample_fmt is unknown or in case of other errors
Get the value and name of a standard channel layout.
index in an internal list, starting at 0
channel layout mask
name of the layout
0 if the layout exists, < 0 if index is beyond the limits
Return the fractional representation of the internal time base.
Get the current time in microseconds.
Get the current time in microseconds since some unspecified starting point. On platforms that support it, the time comes from a monotonic clock. This property makes this time source ideal for measuring relative time. The returned values may not be monotonic on platforms where a monotonic clock is not available.
Indicates with a boolean result if the av_gettime_relative() time source is monotonic.
Allocate an AVHWDeviceContext for a given hardware type.
the type of the hardware device to allocate.
a reference to the newly created AVHWDeviceContext on success or NULL on failure.
Open a device of the specified type and create an AVHWDeviceContext for it.
On success, a reference to the newly-created device context will be written here. The reference is owned by the caller and must be released with av_buffer_unref() when no longer needed. On failure, NULL will be written to this pointer.
The type of the device to create.
A type-specific string identifying the device to open.
A dictionary of additional (type-specific) options to use in opening the device. The dictionary remains owned by the caller.
currently unused
0 on success, a negative AVERROR code on failure.
Create a new device of the specified type from an existing device.
On success, a reference to the newly-created AVHWDeviceContext.
The type of the new device to create.
A reference to an existing AVHWDeviceContext which will be used to create the new device.
Currently unused; should be set to zero.
Zero on success, a negative AVERROR code on failure.
Create a new device of the specified type from an existing device.
On success, a reference to the newly-created AVHWDeviceContext.
The type of the new device to create.
A reference to an existing AVHWDeviceContext which will be used to create the new device.
Options for the new device to create, same format as in av_hwdevice_ctx_create.
Currently unused; should be set to zero.
Zero on success, a negative AVERROR code on failure.
Finalize the device context before use. This function must be called after the context is filled with all the required information and before it is used in any way.
a reference to the AVHWDeviceContext
0 on success, a negative AVERROR code on failure
Look up an AVHWDeviceType by name.
String name of the device type (case-insensitive).
The type from enum AVHWDeviceType, or AV_HWDEVICE_TYPE_NONE if not found.
Get the constraints on HW frames given a device and the HW-specific configuration to be used with that device. If no HW-specific configuration is provided, returns the maximum possible capabilities of the device.
a reference to the associated AVHWDeviceContext.
a filled HW-specific configuration structure, or NULL to return the maximum possible capabilities of the device.
AVHWFramesConstraints structure describing the constraints on the device, or NULL if not available.
Get the string name of an AVHWDeviceType.
Type from enum AVHWDeviceType.
Pointer to a static string containing the name, or NULL if the type is not valid.
Allocate a HW-specific configuration structure for a given HW device. After use, the user must free all members as required by the specific hardware structure being used, then free the structure itself with av_free().
a reference to the associated AVHWDeviceContext.
The newly created HW-specific configuration structure on success or NULL on failure.
Iterate over supported device types.
The next usable device type from enum AVHWDeviceType, or AV_HWDEVICE_TYPE_NONE if there are no more.
Free an AVHWFrameConstraints structure.
The (filled or unfilled) AVHWFrameConstraints structure.
Allocate an AVHWFramesContext tied to a given device context.
a reference to a AVHWDeviceContext. This function will make a new reference for internal use, the one passed to the function remains owned by the caller.
a reference to the newly created AVHWFramesContext on success or NULL on failure.
Create and initialise an AVHWFramesContext as a mapping of another existing AVHWFramesContext on a different device.
On success, a reference to the newly created AVHWFramesContext.
A reference to the device to create the new AVHWFramesContext on.
A reference to an existing AVHWFramesContext which will be mapped to the derived context.
Some combination of AV_HWFRAME_MAP_* flags, defining the mapping parameters to apply to frames which are allocated in the derived device.
Zero on success, negative AVERROR code on failure.
Finalize the context before use. This function must be called after the context is filled with all the required information and before it is attached to any frames.
a reference to the AVHWFramesContext
0 on success, a negative AVERROR code on failure
Allocate a new frame attached to the given AVHWFramesContext.
a reference to an AVHWFramesContext
an empty (freshly allocated or unreffed) frame to be filled with newly allocated buffers.
currently unused, should be set to zero
0 on success, a negative AVERROR code on failure
Map a hardware frame.
Destination frame, to contain the mapping.
Source frame, to be mapped.
Some combination of AV_HWFRAME_MAP_* flags.
Zero on success, negative AVERROR code on failure.
Copy data to or from a hw surface. At least one of dst/src must have an AVHWFramesContext attached.
the destination frame. dst is not touched on failure.
the source frame.
currently unused, should be set to zero
0 on success, a negative AVERROR error code on failure.
Get a list of possible source or target formats usable in av_hwframe_transfer_data().
the frame context to obtain the information for
the direction of the transfer
the pointer to the output format list will be written here. The list is terminated with AV_PIX_FMT_NONE and must be freed by the caller when no longer needed using av_free(). If this function returns successfully, the format list will have at least one item (not counting the terminator). On failure, the contents of this pointer are unspecified.
currently unused, should be set to zero
0 on success, a negative AVERROR code on failure.
Allocate an image with size w and h and pixel format pix_fmt, and fill pointers and linesizes accordingly. The allocated image buffer has to be freed by using av_freep(&pointers[0]).
the value to use for buffer size alignment
the size in bytes required for the image buffer, a negative error code in case of failure
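To make the alignment parameter concrete, here is a pure-Python sketch (an illustration of the layout rule, not the actual binding) of how a row linesize and total buffer size come out for a packed 8-bit RGB format:

```python
def aligned_linesize(width, bytes_per_pixel, align):
    """Round the row size in bytes up to the requested alignment."""
    row = width * bytes_per_pixel
    return (row + align - 1) // align * align

def rgb24_buffer_size(width, height, align=32):
    """Total bytes needed for a packed RGB24 image with aligned rows."""
    return aligned_linesize(width, 3, align) * height
```

With align=1 the rows are tightly packed; larger alignments pad each row, which is why the freed pointer is always pointers[0] rather than per-row allocations.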
Check if the given sample aspect ratio of an image is valid.
width of the image
height of the image
sample aspect ratio of the image
0 if valid, a negative AVERROR code otherwise
Check if the given dimension of an image is valid, meaning that all bytes of the image can be addressed with a signed int.
the width of the picture
the height of the picture
the offset to sum to the log level for logging with log_ctx
the parent logging context, it may be NULL
>= 0 if valid, a negative error code otherwise
Check if the given dimension of an image is valid, meaning that all bytes of a plane of an image with the specified pix_fmt can be addressed with a signed int.
the width of the picture
the height of the picture
the maximum number of pixels the user wants to accept
the pixel format, can be AV_PIX_FMT_NONE if unknown.
the offset to sum to the log level for logging with log_ctx
the parent logging context, it may be NULL
>= 0 if valid, a negative error code otherwise
Copy image in src_data to dst_data.
linesizes for the image in dst_data
linesizes for the image in src_data
Copy image plane from src to dst. That is, copy "height" number of lines of "bytewidth" bytes each. The first byte of each successive line is separated by *_linesize bytes.
linesize for the image plane in dst
linesize for the image plane in src
Copy image data located in uncacheable (e.g. GPU mapped) memory. Where available, this function will use special functionality for reading from such memory, which may result in greatly improved performance compared to plain av_image_copy_plane().
Copy image data from an image into a buffer.
a buffer into which picture data will be copied
the size in bytes of dst
pointers containing the source image data
linesizes for the image in src_data
the pixel format of the source image
the width of the source image in pixels
the height of the source image in pixels
the assumed linesize alignment for dst
the number of bytes written to dst, or a negative value (error code) on error
Copy image data located in uncacheable (e.g. GPU mapped) memory. Where available, this function will use special functionality for reading from such memory, which may result in greatly improved performance compared to plain av_image_copy().
Setup the data pointers and linesizes based on the specified image parameters and the provided array.
data pointers to be filled in
linesizes for the image in dst_data to be filled in
buffer which will contain or contains the actual image data, can be NULL
the pixel format of the image
the width of the image in pixels
the height of the image in pixels
the value used in src for linesize alignment
the size in bytes required for src, a negative error code in case of failure
Overwrite the image data with black. This is suitable for filling a sub-rectangle of an image, meaning the padding between the right most pixel and the left most pixel on the next line will not be overwritten. For some formats, the image size might be rounded up due to inherent alignment.
data pointers to destination image
linesizes for the destination image
the pixel format of the image
the color range of the image (important for colorspaces such as YUV)
the width of the image in pixels
the height of the image in pixels
0 if the image data was cleared, a negative AVERROR code otherwise
Fill plane linesizes for an image with pixel format pix_fmt and width width.
array to be filled with the linesize for each plane
>= 0 in case of success, a negative error code otherwise
Compute the max pixel step for each plane of an image with a format described by pixdesc.
an array which is filled with the max pixel step for each plane. Since a plane may contain different pixel components, the computed max_pixsteps[plane] is relative to the component in the plane with the max pixel step.
an array which is filled with the component for each plane which has the max pixel step. May be NULL.
Fill plane sizes for an image with pixel format pix_fmt and height height.
the array to be filled with the size of each image plane
the array containing the linesize for each plane, should be filled by av_image_fill_linesizes()
>= 0 in case of success, a negative error code otherwise
Fill plane data pointers for an image with pixel format pix_fmt and height height.
pointers array to be filled with the pointer for each image plane
the pointer to a buffer which will contain the image
the array containing the linesize for each plane, should be filled by av_image_fill_linesizes()
the size in bytes required for the image buffer, a negative error code in case of failure
Return the size in bytes of the amount of data required to store an image with the given parameters.
the pixel format of the image
the width of the image in pixels
the height of the image in pixels
the assumed linesize alignment
the buffer size in bytes, a negative error code in case of failure
Compute the size of an image line with format pix_fmt and width width for the plane plane.
the computed size in bytes
Compute the length of an integer list.
size in bytes of each list element (only 1, 2, 4 or 8)
pointer to the list
list terminator (usually 0 or -1)
length of the list, in elements, not counting the terminator
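The counting rule can be sketched in Python (illustrative only; the real function walks raw memory in elem_size steps):

```python
def int_list_length(lst, term):
    """Count elements before the terminator; the terminator is not counted."""
    n = 0
    for v in lst:
        if v == term:
            break
        n += 1
    return n
```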
Send the specified message to the log if the level is less than or equal to the current av_log_level. By default, all logging messages are sent to stderr. This behavior can be altered by setting a different logging callback function.
A pointer to an arbitrary struct of which the first field is a pointer to an AVClass struct or NULL if general log.
The importance level of the message expressed using a "Logging Constant".
The format string (printf-compatible) that specifies how subsequent arguments are converted to output.
Default logging callback
A pointer to an arbitrary struct of which the first field is a pointer to an AVClass struct.
The importance level of the message expressed using a "Logging Constant".
The format string (printf-compatible) that specifies how subsequent arguments are converted to output.
The arguments referenced by the format string.
Format a line of log the same way as the default callback.
buffer to receive the formatted line
size of the buffer
used to store whether the prefix must be printed; must point to a persistent integer initially set to 1
Format a line of log the same way as the default callback.
buffer to receive the formatted line; may be NULL if line_size is 0
size of the buffer; at most line_size-1 characters will be written to the buffer, plus one null terminator
used to store whether the prefix must be printed; must point to a persistent integer initially set to 1
Returns a negative value if an error occurred, otherwise returns the number of characters that would have been written for a sufficiently large buffer, not including the terminating null character. If the return value is not less than line_size, it means that the log message was truncated to fit the buffer.
Get the current log level
Current log level
Send the specified message to the log once with the initial_level and then with the subsequent_level. By default, all logging messages are sent to stderr. This behavior can be altered by setting a different logging callback function.
A pointer to an arbitrary struct of which the first field is a pointer to an AVClass struct or NULL if general log.
importance level of the message expressed using a "Logging Constant" for the first occurrence.
importance level of the message expressed using a "Logging Constant" after the first occurrence.
a variable to keep track of whether a message has already been printed; it must be initialized to 0 before the first use. The same state must not be accessed by two threads simultaneously.
The format string (printf-compatible) that specifies how subsequent arguments are converted to output.
Set the logging callback
A logging function with a compatible signature.
Set the log level
Logging level
Allocate a memory block with alignment suitable for all memory accesses (including vectors if available on the CPU).
Size in bytes for the memory block to be allocated
Pointer to the allocated block, or `NULL` if the block cannot be allocated
Allocate a memory block for an array with av_malloc().
Number of elements
Size of a single element
Pointer to the allocated block, or `NULL` if the block cannot be allocated
Allocate a memory block with alignment suitable for all memory accesses (including vectors if available on the CPU) and zero all the bytes of the block.
Size in bytes for the memory block to be allocated
Pointer to the allocated block, or `NULL` if it cannot be allocated
Allocate an AVMasteringDisplayMetadata structure and set its fields to default values. The resulting struct can be freed using av_freep().
An AVMasteringDisplayMetadata filled with default values or NULL on failure.
Allocate a complete AVMasteringDisplayMetadata and add it to the frame.
The frame which side data is added to.
The AVMasteringDisplayMetadata structure to be filled by caller.
Set the maximum size that may be allocated in one block.
Value to be set as the new maximum size
Overlapping memcpy() implementation.
Destination buffer
Number of bytes back to start copying (i.e. the initial size of the overlapping window); must be > 0
Number of bytes to copy; must be >= 0
Duplicate a buffer with av_malloc().
Buffer to be duplicated
Size in bytes of the buffer copied
Pointer to a newly allocated buffer containing a copy of `p` or `NULL` if the buffer cannot be allocated
Multiply two rationals.
First rational
Second rational
b*c
Find which of the two rationals is closer to another rational.
Rational to be compared against
One of the following values: - 1 if `q1` is nearer to `q` than `q2` - -1 if `q2` is nearer to `q` than `q1` - 0 if they have the same distance
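The decision rule can be modelled with exact fractions; a hypothetical Python sketch (not the actual binding):

```python
from fractions import Fraction

def nearer_q(q, q1, q2):
    """Return 1 if q1 is nearer to q, -1 if q2 is nearer, 0 on a tie.

    Each rational is given as a (num, den) tuple.
    """
    d1 = abs(Fraction(*q1) - Fraction(*q))
    d2 = abs(Fraction(*q2) - Fraction(*q))
    if d1 < d2:
        return 1
    if d2 < d1:
        return -1
    return 0
```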
Iterate over potential AVOptions-enabled children of parent.
a pointer where iteration state is stored.
AVClass corresponding to next potential child or NULL
Iterate over AVOptions-enabled children of obj.
result of a previous call to this function or NULL
next AVOptions-enabled child or NULL
Copy options from src object into dest object.
Object to copy from
Object to copy into
0 on success, negative on error
@{ This group of functions can be used to evaluate option strings and get numbers out of them. They do the same thing as av_opt_set(), except the result is written into the caller-supplied pointer.
a struct whose first element is a pointer to AVClass.
an option for which the string is to be evaluated.
string to be evaluated.
0 on success, a negative number on failure.
Look for an option in an object. Consider only options which have all the specified flags set.
A pointer to a struct whose first element is a pointer to an AVClass. Alternatively a double pointer to an AVClass, if AV_OPT_SEARCH_FAKE_OBJ search flag is set.
The name of the option to look for.
When searching for named constants, name of the unit it belongs to.
Find only options with all the specified flags set (AV_OPT_FLAG).
A combination of AV_OPT_SEARCH_*.
A pointer to the option found, or NULL if no option was found.
Look for an option in an object. Consider only options which have all the specified flags set.
A pointer to a struct whose first element is a pointer to an AVClass. Alternatively a double pointer to an AVClass, if AV_OPT_SEARCH_FAKE_OBJ search flag is set.
The name of the option to look for.
When searching for named constants, name of the unit it belongs to.
Find only options with all the specified flags set (AV_OPT_FLAG).
A combination of AV_OPT_SEARCH_*.
if non-NULL, an object to which the option belongs will be written here. It may be different from obj if AV_OPT_SEARCH_CHILDREN is present in search_flags. This parameter is ignored if search_flags contain AV_OPT_SEARCH_FAKE_OBJ.
A pointer to the option found, or NULL if no option was found.
Check whether a particular flag is set in a flags field.
the name of the flag field option
the name of the flag to check
non-zero if the flag is set, zero if the flag isn't set, isn't of the right type, or the flags field doesn't exist.
Free all allocated objects in obj.
Free an AVOptionRanges struct and set it to NULL.
@{ Those functions get a value of the option with the given name from an object.
a struct whose first element is a pointer to an AVClass.
name of the option to get.
flags passed to av_opt_find2. I.e. if AV_OPT_SEARCH_CHILDREN is passed here, then the option may be found in a child of obj.
value of the option will be written here
>=0 on success, a negative error code otherwise
The returned dictionary is a copy of the actual value and must be freed with av_dict_free() by the caller
Extract a key-value pair from the beginning of a string.
pointer to the options string, will be updated to point to the rest of the string (one of the pairs_sep or the final NUL)
a 0-terminated list of characters used to separate key from value, for example '='
a 0-terminated list of characters used to separate two pairs from each other, for example ':' or ','
flags; see the AV_OPT_FLAG_* values below
parsed key; must be freed using av_free()
parsed value; must be freed using av_free()
>=0 for success, or a negative value corresponding to an AVERROR code in case of error; in particular: AVERROR(EINVAL) if no key is present
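The splitting behaviour can be sketched in Python (a simplified model: it ignores the escaping rules and flags the real function supports):

```python
def get_key_value(s, key_val_sep, pairs_sep):
    """Split off the first key=value pair; return (key, value, rest)."""
    # find the first pair separator (or the end of the string)
    end = len(s)
    for i, ch in enumerate(s):
        if ch in pairs_sep:
            end = i
            break
    pair = s[:end]
    rest = s[end + 1:] if end < len(s) else ""
    for i, ch in enumerate(pair):
        if ch in key_val_sep:
            return pair[:i], pair[i + 1:], rest
    raise ValueError("no key present")  # mirrors AVERROR(EINVAL)
```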
Check if given option is set to its default value.
AVClass object to check option on
option to be checked
>0 when the option is set to its default, 0 when it is not set to its default, < 0 on error
Check if given option is set to its default value.
AVClass object to check option on
option name
combination of AV_OPT_SEARCH_*
>0 when the option is set to its default, 0 when it is not set to its default, < 0 on error
Iterate over all AVOptions belonging to obj.
an AVOptions-enabled struct or a double pointer to an AVClass describing it.
result of the previous call to av_opt_next() on this object or NULL
next AVOption or NULL
@}
Get a list of allowed ranges for the given option.
a bitmask of flags; undefined flags should not be set and should be ignored. AV_OPT_SEARCH_FAKE_OBJ indicates that obj is a double pointer to an AVClass instead of a full instance. AV_OPT_MULTI_COMPONENT_RANGE indicates that the function may return more than one component.
number of components returned on success, a negative error code otherwise
Get a default list of allowed ranges for the given option.
a bitmask of flags; undefined flags should not be set and should be ignored. AV_OPT_SEARCH_FAKE_OBJ indicates that obj is a double pointer to an AVClass instead of a full instance. AV_OPT_MULTI_COMPONENT_RANGE indicates that the function may return more than one component.
number of components returned on success, a negative error code otherwise
Serialize object's options.
AVClass object to serialize
serialize options with all the specified flags set (AV_OPT_FLAG)
combination of AV_OPT_SERIALIZE_* flags
Pointer to a buffer that will be allocated with a string containing the serialized options. The buffer must be freed by the caller when it is no longer needed.
character used to separate key from value
character used to separate two pairs from each other
>= 0 on success, negative on error
@{ Those functions set the field of obj with the given name to value.
A struct whose first element is a pointer to an AVClass.
the name of the field to set
The value to set. In case of av_opt_set() if the field is not of a string type, then the given string is parsed. SI postfixes and some named scalars are supported. If the field is of a numeric type, it has to be a numeric or named scalar. Behavior with more than one scalar and +- infix operators is undefined. If the field is of a flags type, it has to be a sequence of numeric scalars or named flags separated by '+' or '-'. Prefixing a flag with '+' causes it to be set without affecting the other flags; similarly, '-' unsets a flag. If the field is of a dictionary type, it has to be a ':' separated list of key=value parameters. Values containing ':' special characters must be escaped.
flags passed to av_opt_find2. I.e. if AV_OPT_SEARCH_CHILDREN is passed here, then the option may be set on a child of obj.
0 if the value has been set, or an AVERROR code in case of error: AVERROR_OPTION_NOT_FOUND if no matching option exists AVERROR(ERANGE) if the value is out of range AVERROR(EINVAL) if the value is not valid
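The '+'/'-' flag syntax described above can be sketched like this (a simplified model using a hypothetical name-to-bit map; the real parser also accepts numeric scalars and more elaborate escaping):

```python
import re

def apply_flag_string(current, spec, names):
    """Apply a '+flag'/'-flag' sequence to an integer flags field.

    `names` maps flag names to bit values (hypothetical example data).
    '+' sets a flag without touching the others; '-' clears it.
    """
    for sign, name in re.findall(r"([+-])([A-Za-z0-9_]+)", spec):
        bit = names[name]
        current = current | bit if sign == "+" else current & ~bit
    return current
```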
Set the values of all AVOption fields to their default values.
an AVOption-enabled struct (its first member must be a pointer to AVClass)
Set the values of all AVOption fields to their default values. Only these AVOption fields for which (opt->flags & mask) == flags will have their default applied to s.
an AVOption-enabled struct (its first member must be a pointer to AVClass)
combination of AV_OPT_FLAG_*
combination of AV_OPT_FLAG_*
Set all the options from a given dictionary on an object.
a struct whose first element is a pointer to AVClass
options to process. This dictionary will be freed and replaced by a new one containing all options not found in obj. Of course this new dictionary needs to be freed by the caller with av_dict_free().
0 on success, a negative AVERROR if some option was found in obj, but could not be set.
Set all the options from a given dictionary on an object.
a struct whose first element is a pointer to AVClass
options to process. This dictionary will be freed and replaced by a new one containing all options not found in obj. Of course this new dictionary needs to be freed by the caller with av_dict_free().
A combination of AV_OPT_SEARCH_*.
0 on success, a negative AVERROR if some option was found in obj, but could not be set.
Parse the key-value pairs list in opts. For each key=value pair found, set the value of the corresponding option in ctx.
the AVClass object to set options on
the options string, key-value pairs separated by a delimiter
a NULL-terminated array of options names for shorthand notation: if the first field in opts has no key part, the key is taken from the first element of shorthand; then again for the second, etc., until either opts is finished, shorthand is finished or a named option is found; after that, all options must be named
a 0-terminated list of characters used to separate key from value, for example '='
a 0-terminated list of characters used to separate two pairs from each other, for example ':' or ','
the number of successfully set key=value pairs, or a negative value corresponding to an AVERROR code in case of error: AVERROR(EINVAL) if opts cannot be parsed, the error code issued by av_set_string3() if a key/value pair cannot be set
Show the obj options.
log context to use for showing the options
requested flags for the options to show. Show only the options for which it is opt->flags & req_flags.
rejected flags for the options to show. Show only the options for which it is !(opt->flags & rej_flags).
Parse CPU caps from a string and update the given AV_CPU_* flags based on that.
negative on error.
Returns number of planes in pix_fmt, a negative AVERROR if pix_fmt is not a valid pixel format.
number of planes in pix_fmt, a negative AVERROR if pix_fmt is not a valid pixel format.
Returns a pixel format descriptor for provided pixel format or NULL if this pixel format is unknown.
a pixel format descriptor for provided pixel format or NULL if this pixel format is unknown.
Returns an AVPixelFormat id described by desc, or AV_PIX_FMT_NONE if desc is not a valid pointer to a pixel format descriptor.
an AVPixelFormat id described by desc, or AV_PIX_FMT_NONE if desc is not a valid pointer to a pixel format descriptor.
Iterate over all pixel format descriptors known to libavutil.
previous descriptor. NULL to get the first descriptor.
next descriptor or NULL after the last descriptor
Utility function to access log2_chroma_w log2_chroma_h from the pixel format AVPixFmtDescriptor.
the pixel format
store log2_chroma_w (horizontal/width shift)
store log2_chroma_h (vertical/height shift)
0 on success, AVERROR(ENOSYS) on invalid or unknown pixel format
Utility function to swap the endianness of a pixel format.
the pixel format
pixel format with swapped endianness if it exists, otherwise AV_PIX_FMT_NONE
Convert an AVRational to a IEEE 32-bit `float` expressed in fixed-point format.
Rational to be converted
Equivalent floating-point value, expressed as an unsigned 32-bit integer.
Read a line from an image, and write the values of the pixel format component c to dst.
the array containing the pointers to the planes of the image
the array containing the linesizes of the image
the pixel format descriptor for the image
the horizontal coordinate of the first pixel to read
the vertical coordinate of the first pixel to read
the width of the line to read, that is the number of values to write to dst
if non-zero and the format is paletted, write the values corresponding to the palette component c in data[1] to dst, rather than the palette indexes in data[0]. The behavior is undefined if the format is not paletted.
size of elements in dst array (2 or 4 bytes)
Allocate, reallocate, or free a block of memory.
Pointer to a memory block already allocated with av_realloc() or `NULL`
Size in bytes of the memory block to be allocated or reallocated
Pointer to a newly-reallocated block or `NULL` if the block cannot be reallocated
Allocate, reallocate, or free an array.
Pointer to a memory block already allocated with av_realloc() or `NULL`
Number of elements in the array
Size of the single element of the array
Pointer to a newly-reallocated block or NULL if the block cannot be reallocated
Allocate, reallocate, or free a block of memory.
Allocate, reallocate, or free a block of memory through a pointer to a pointer.
Pointer to a pointer to a memory block already allocated with av_realloc(), or a pointer to `NULL`. The pointer is updated on success, or freed on failure.
Size in bytes for the memory block to be allocated or reallocated
Zero on success, an AVERROR error code on failure
Allocate, reallocate an array through a pointer to a pointer.
Pointer to a pointer to a memory block already allocated with av_realloc(), or a pointer to `NULL`. The pointer is updated on success, or freed on failure.
Number of elements
Size of the single element
Zero on success, an AVERROR error code on failure
Reduce a fraction.
Destination numerator
Destination denominator
Source numerator
Source denominator
Maximum allowed values for `dst_num` & `dst_den`
1 if the operation is exact, 0 otherwise
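The exactness flag can be sketched in Python (simplified: the real av_reduce approximates an out-of-range fraction with a continued-fraction expansion, whereas this sketch only reports whether the reduced terms fit):

```python
from math import gcd

def reduce_fraction(num, den, maxval):
    """Reduce num/den by their gcd; return (num, den, exact).

    exact is True when the reduced terms fit within maxval.
    """
    g = gcd(num, den)
    if g:
        num //= g
        den //= g
    exact = abs(num) <= maxval and abs(den) <= maxval
    return num, den, exact
```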
Rescale a 64-bit integer with rounding to nearest.
Rescale a timestamp while preserving known durations.
Input time base
Input timestamp
Duration time base; typically this is finer-grained (greater) than `in_tb` and `out_tb`
Duration until the next call to this function (i.e. duration of the current packet/frame)
Pointer to a timestamp expressed in terms of `fs_tb`, acting as a state variable
Output timebase
Timestamp expressed in terms of `out_tb`
Rescale a 64-bit integer by 2 rational numbers.
Rescale a 64-bit integer by 2 rational numbers with specified rounding.
Rescale a 64-bit integer with specified rounding.
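The rounded variants all amount to computing a*b/c with an explicit rounding mode; a Python sketch for non-negative inputs (string mode names stand in for the AVRounding enum):

```python
def rescale_rnd(a, b, c, rnd):
    """Compute a*b/c with explicit rounding (non-negative inputs only)."""
    n = a * b
    if rnd in ("zero", "down"):   # identical for non-negative values
        return n // c
    if rnd == "up":
        return (n + c - 1) // c
    if rnd == "near":             # round half away from zero
        return (n + c // 2) // c
    raise ValueError(rnd)
```

Rescaling a timestamp between time bases is this operation with b and c taken from the two AVRational time bases.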
Check if the sample format is planar.
the sample format to inspect
1 if the sample format is planar, 0 if it is interleaved
Allocate a samples buffer for nb_samples samples, and fill data pointers and linesize accordingly. The allocated samples buffer can be freed by using av_freep(&audio_data[0]). Allocated data will be initialized to silence.
array to be filled with the pointer for each channel
aligned size for audio buffer(s), may be NULL
number of audio channels
number of samples per channel
buffer size alignment (0 = default, 1 = no alignment)
>=0 on success or a negative error code on failure
Allocate a data pointers array, samples buffer for nb_samples samples, and fill data pointers and linesize accordingly.
Copy samples from src to dst.
destination array of pointers to data planes
source array of pointers to data planes
offset in samples at which the data will be written to dst
offset in samples at which the data will be read from src
number of samples to be copied
number of audio channels
audio sample format
Fill plane data pointers and linesize for samples with sample format sample_fmt.
array to be filled with the pointer for each channel
calculated linesize, may be NULL
the pointer to a buffer containing the samples
the number of channels
the number of samples in a single channel
the sample format
buffer size alignment (0 = default, 1 = no alignment)
minimum size in bytes required for the buffer on success, or a negative error code on failure
Get the required buffer size for the given audio parameters.
calculated linesize, may be NULL
the number of channels
the number of samples in a single channel
the sample format
buffer size alignment (0 = default, 1 = no alignment)
required buffer size, or negative error code on failure
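The size depends on whether the sample format is planar or packed; a simplified Python sketch (the real function also validates its arguments and derives the sample size from the format):

```python
def samples_buffer_size(nb_channels, nb_samples, bytes_per_sample, planar, align=1):
    """Bytes needed for an audio buffer.

    Planar formats use one plane per channel; packed (interleaved)
    formats put all channels in a single plane. Each plane's linesize
    is rounded up to the alignment.
    """
    def aligned(n):
        return (n + align - 1) // align * align
    if planar:
        linesize = aligned(nb_samples * bytes_per_sample)
        return linesize * nb_channels
    return aligned(nb_samples * nb_channels * bytes_per_sample)
```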
Fill an audio buffer with silence.
array of pointers to data planes
offset in samples at which to start filling
number of samples to fill
number of audio channels
audio sample format
Parse the key/value pairs list in opts. For each key/value pair found, stores the value in the field in ctx that is named like the key. ctx must be an AVClass context, storing is done using AVOptions.
options string to parse, may be NULL
a 0-terminated list of characters used to separate key from value
a 0-terminated list of characters used to separate two pairs from each other
the number of successfully set key/value pairs, or a negative value corresponding to an AVERROR code in case of error: AVERROR(EINVAL) if opts cannot be parsed, the error code issued by av_opt_set() if a key/value pair cannot be set
Multiply two `size_t` values checking for overflow.
Pointer to the result of the operation
0 on success, AVERROR(EINVAL) on overflow
Duplicate a string.
String to be duplicated
Pointer to a newly-allocated string containing a copy of `s` or `NULL` if the string cannot be allocated
Put a description of the AVERROR code errnum in errbuf. In case of failure the global variable errno is set to indicate the error. Even in case of failure av_strerror() will print a generic error message indicating the errnum provided to errbuf.
error code to describe
buffer to which description is written
the size in bytes of errbuf
0 on success, a negative value if a description for errnum cannot be found
Duplicate a substring of a string.
String to be duplicated
Maximum length of the resulting string (not counting the terminating byte)
Pointer to a newly-allocated string containing a substring of `s` or `NULL` if the string cannot be allocated
Subtract one rational from another.
First rational
Second rational
b-c
Wrapper to work around the lack of mkstemp() on mingw. Also, tries to create file in /tmp first, if possible. *prefix can be a character constant; *filename will be allocated internally.
file descriptor of opened file (or negative value corresponding to an AVERROR code on error) and opened file name in **filename.
Adjust frame number for NTSC drop frame time code.
frame number to adjust
frames per second; must be a multiple of 30
adjusted frame number
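Drop-frame timecode skips two frame numbers at the start of every minute except minutes divisible by ten (scaled up for higher multiples of 30). A Python sketch of the standard formula, illustrative only:

```python
def adjust_ntsc_framenum(framenum, fps):
    """Map a zero-based frame count to its drop-frame timecode number."""
    if fps <= 0 or fps % 30 != 0:
        return framenum              # drop-frame only defined for multiples of 30
    drop = fps // 30 * 2             # frame numbers dropped per minute
    per_10min = fps // 30 * 17982    # frames in ten minutes of 29.97 video
    d, m = divmod(framenum, per_10min)
    extra = (m - drop) // (per_10min // 10) if m >= drop else 0
    return framenum + 9 * drop * d + drop * extra
```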
Check if the timecode feature is available for the given frame rate
0 if supported, < 0 otherwise
Convert SEI info to SMPTE 12M binary representation.
frame rate in rational form
drop flag
hour
minute
second
frame number
the SMPTE binary representation
Convert frame number to SMPTE 12M binary representation.
timecode data correctly initialized
frame number
the SMPTE binary representation
Init a timecode struct with the passed parameters.
pointer to an allocated AVTimecode
frame rate in rational form
miscellaneous flags such as drop frame, +24 hours, ... (see AVTimecodeFlag)
the first frame number
a pointer to an arbitrary struct of which the first field is a pointer to an AVClass struct (used for av_log)
0 on success, AVERROR otherwise
Init a timecode struct from the passed timecode components.
pointer to an allocated AVTimecode
frame rate in rational form
miscellaneous flags such as drop frame, +24 hours, ... (see AVTimecodeFlag)
hours
minutes
seconds
frames
a pointer to an arbitrary struct of which the first field is a pointer to an AVClass struct (used for av_log)
0 on success, AVERROR otherwise
Parse timecode representation (hh:mm:ss[:;.]ff).
pointer to an allocated AVTimecode
frame rate in rational form
timecode string which will determine the frame start
a pointer to an arbitrary struct of which the first field is a pointer to an AVClass struct (used for av_log).
0 on success, AVERROR otherwise
Get the timecode string from the 25-bit timecode format (MPEG GOP format).
destination buffer, must be at least AV_TIMECODE_STR_SIZE long
the 25-bit timecode
the buf parameter
Get the timecode string from the SMPTE timecode format.
destination buffer, must be at least AV_TIMECODE_STR_SIZE long
the 32-bit SMPTE timecode
prevent the use of a drop flag when it is known the DF bit is arbitrary
the buf parameter
Get the timecode string from the SMPTE timecode format.
destination buffer, must be at least AV_TIMECODE_STR_SIZE long
frame rate of the timecode
the 32-bit SMPTE timecode
prevent the use of a drop flag when it is known the DF bit is arbitrary
prevent the use of a field flag when it is known the field bit is arbitrary (e.g. because it is used as PC flag)
the buf parameter
Load timecode string in buf.
timecode data correctly initialized
destination buffer, must be at least AV_TIMECODE_STR_SIZE long
frame number
the buf parameter
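For non-drop timecode with an integer frame rate, the string is a straightforward base conversion; a hypothetical Python sketch (the real function also handles drop-frame and negative values):

```python
def make_tc_string(framenum, fps):
    """Format a frame index as HH:MM:SS:FF (non-drop, integer fps only)."""
    ss, ff = divmod(framenum, fps)
    mm, ss = divmod(ss, 60)
    hh, mm = divmod(mm, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"
```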
Apply enu(opaque, &elem) to all the elements in the tree in a given range.
a comparison function that returns < 0 for an element below the range, > 0 for an element above the range and == 0 for an element inside the range
Find an element.
a pointer to the root node of the tree
compare function used to compare elements in the tree; API identical to that of Standard C's qsort. It is guaranteed that the first and only the first argument to cmp() will be the key parameter to av_tree_find(); thus it could, if the user wants, be a different type (like an opaque context).
If next is not NULL, then next[0] will contain the previous element and next[1] the next element. If either does not exist, then the corresponding entry in next is unchanged.
An element with cmp(key, elem) == 0 or NULL if no such element exists in the tree.
Insert or remove an element.
A pointer to a pointer to the root node of the tree; note that the root node can change during insertions, this is required to keep the tree balanced.
pointer to the element key to insert in the tree
compare function used to compare elements in the tree, API identical to that of Standard C's qsort
Used to allocate and free AVTreeNodes. For insertion the user must set it to an allocated and zeroed object of at least av_tree_node_size bytes. av_tree_insert() will set it to NULL if it has been consumed. For deleting elements *next is set to NULL by the user and av_tree_insert() will set it to the AVTreeNode which was used for the removed element. This allows the use of flat arrays, which have lower overhead compared to many malloced elements. A small wrapper that allocates *next on demand before calling av_tree_insert() is typically defined for this purpose.
If no insertion happened, the found element; if an insertion or removal happened, then either key or NULL will be returned. Which one it is depends on the tree state and the implementation; make no assumptions about which.
Allocate an AVTreeNode.
Sleep for a period of time. Although the duration is expressed in microseconds, the actual delay may be rounded to the precision of the system timer.
Number of microseconds to sleep.
zero on success or (negative) error code.
Return an informative version string. This usually is the actual release version number or a git commit description. This string has no fixed format and can change any time. It should never be parsed by code.
Send the specified message to the log if the level is less than or equal to the current av_log_level. By default, all logging messages are sent to stderr. This behavior can be altered by setting a different logging callback function.
A pointer to an arbitrary struct of which the first field is a pointer to an AVClass struct.
The importance level of the message expressed using a "Logging Constant".
The format string (printf-compatible) that specifies how subsequent arguments are converted to output.
The arguments referenced by the format string.
Write the values from src to the pixel format component c of an image line.
array containing the values to write
the array containing the pointers to the planes of the image to write into. It is supposed to be zeroed.
the array containing the linesizes of the image
the pixel format descriptor for the image
the horizontal coordinate of the first pixel to write
the vertical coordinate of the first pixel to write
the width of the line to write, that is the number of values to write to the image line
size of elements in src array (2 or 4 bytes)
Return the libavutil build-time configuration.
Return the libavutil license.
Return the LIBAVUTIL_VERSION_INT constant.
Return the libpostproc build-time configuration.
Return the libpostproc license.
Return the LIBPOSTPROC_VERSION_INT constant.
Return a pp_mode or NULL if an error occurred.
the string after "-pp" on the command line
a number from 0 to PP_QUALITY_MAX
Allocate SwrContext.
NULL on error, allocated context otherwise
Allocate SwrContext if needed and set/reset common parameters.
existing Swr context if available, or NULL if not
output channel layout (AV_CH_LAYOUT_*)
output sample format (AV_SAMPLE_FMT_*).
output sample rate (frequency in Hz)
input channel layout (AV_CH_LAYOUT_*)
input sample format (AV_SAMPLE_FMT_*).
input sample rate (frequency in Hz)
logging level offset
parent logging context, can be NULL
NULL on error, allocated context otherwise
Allocate SwrContext if needed and set/reset common parameters.
Pointer to an existing Swr context if available, or to NULL if not. On success, *ps will be set to the allocated context.
output channel layout (e.g. AV_CHANNEL_LAYOUT_*)
output sample format (AV_SAMPLE_FMT_*).
output sample rate (frequency in Hz)
input channel layout (e.g. AV_CHANNEL_LAYOUT_*)
input sample format (AV_SAMPLE_FMT_*).
input sample rate (frequency in Hz)
logging level offset
parent logging context, can be NULL
0 on success, a negative AVERROR code on error. On error, the Swr context is freed and *ps is set to NULL.
Generate a channel mixing matrix.
input channel layout
output channel layout
mix level for the center channel
mix level for the surround channel(s)
mix level for the low-frequency effects channel
if 1.0, coefficients will be normalized to prevent overflow. if INT_MAX, coefficients will not be normalized.
mixing coefficients; matrix[i + stride * o] is the weight of input channel i in output channel o.
distance between adjacent input channels in the matrix array
matrixed stereo downmix mode (e.g. dplii)
parent logging context, can be NULL
0 on success, negative AVERROR code on failure
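The flat matrix layout described above (matrix[i + stride * o] is the weight of input channel i in output channel o) can be illustrated with a minimal Python sketch. This is not FFmpeg's implementation; apply_mix_matrix and its arguments are hypothetical names used only to show the indexing.

```python
def apply_mix_matrix(samples, matrix, stride, n_in, n_out):
    """Mix one frame of per-channel samples through a flat matrix.

    matrix[i + stride * o] is the weight of input channel i in
    output channel o, matching the layout swr_build_matrix() fills.
    """
    out = []
    for o in range(n_out):
        acc = 0.0
        for i in range(n_in):
            acc += matrix[i + stride * o] * samples[i]
        out.append(acc)
    return out

# Stereo (L, R) -> mono downmix with 0.5/0.5 weights, stride = 2:
mono = apply_mix_matrix([1.0, 0.5], [0.5, 0.5], 2, 2, 1)
```

Note that stride is the distance between rows of the matrix, so it may be larger than the number of input channels when the matrix is embedded in a bigger array.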
Generate a channel mixing matrix.
input channel layout
output channel layout
mix level for the center channel
mix level for the surround channel(s)
mix level for the low-frequency effects channel
mixing coefficients; matrix[i + stride * o] is the weight of input channel i in output channel o.
distance between adjacent input channels in the matrix array
matrixed stereo downmix mode (e.g. dplii)
0 on success, negative AVERROR code on failure
Closes the context so that swr_is_initialized() returns 0.
Swr context to be closed
Configure or reconfigure the SwrContext using the information provided by the AVFrames.
audio resample context
0 on success, AVERROR on failure.
Convert audio.
allocated Swr context, with parameters set
output buffers, only the first one need be set in case of packed audio
amount of space available for output in samples per channel
input buffers, only the first one needs to be set in case of packed audio
number of input samples available in one channel
number of samples output per channel, negative value on error
Convert the samples in the input AVFrame and write them to the output AVFrame.
audio resample context
output AVFrame
input AVFrame
0 on success, AVERROR on failure or nonmatching configuration.
Drops the specified number of output samples.
allocated Swr context
number of samples to be dropped
>= 0 on success, or a negative AVERROR code on failure
Free the given SwrContext and set the pointer to NULL.
a pointer to a pointer to Swr context
Get the AVClass for SwrContext. It can be used in combination with AV_OPT_SEARCH_FAKE_OBJ for examining options.
the AVClass of SwrContext
Gets the delay the next input sample will experience relative to the next output sample.
swr context
timebase in which the returned delay will be expressed
Find an upper bound on the number of samples that the next swr_convert call will output, if called with in_samples of input samples. This depends on the internal state, and anything changing the internal state (like further swr_convert() calls) may change the number of samples swr_get_out_samples() returns for the same number of input samples.
number of input samples.
Initialize context after user parameters have been set.
Swr context to initialize
AVERROR error code in case of failure.
Injects the specified number of silence samples.
allocated Swr context
number of silence samples to be injected
>= 0 on success, or a negative AVERROR code on failure
Check whether an swr context has been initialized or not.
Swr context to check
positive if it has been initialized, 0 if not initialized
Convert the next timestamp from input to output. Timestamps are in 1/(in_sample_rate * out_sample_rate) units.
the output timestamp for the next output sample
Set a customized input channel mapping.
allocated Swr context, not yet initialized
customized input channel mapping (array of channel indexes, -1 for a muted channel)
>= 0 on success, or AVERROR error code in case of failure.
Activate resampling compensation ("soft" compensation). This function is internally called when needed in swr_next_pts().
allocated Swr context. If it is not initialized, or SWR_FLAG_RESAMPLE is not set, swr_init() is called with the flag set.
delta in PTS per sample
number of samples to compensate for
>= 0 on success, or a negative AVERROR code on failure
Set a customized remix matrix.
allocated Swr context, not yet initialized
remix coefficients; matrix[i + stride * o] is the weight of input channel i in output channel o
offset between lines of the matrix
>= 0 on success, or AVERROR error code in case of failure.
Return the swr build-time configuration.
Return the swr license.
Return the LIBSWRESAMPLE_VERSION_INT constant.
Allocate an empty SwsContext. This must be filled and passed to sws_init_context(). For filling see AVOptions, options.c and sws_setColorspaceDetails().
Allocate and return an uninitialized vector with length coefficients.
Convert an 8-bit paletted frame into a frame with a color depth of 24 bits.
source frame buffer
destination frame buffer
number of pixels to convert
array with [256] entries, which must match color arrangement (RGB or BGR) of src
Convert an 8-bit paletted frame into a frame with a color depth of 32 bits.
source frame buffer
destination frame buffer
number of pixels to convert
array with [256] entries, which must match color arrangement (RGB or BGR) of src
Finish the scaling process for a pair of source/destination frames previously submitted with sws_frame_start(). Must be called after all sws_send_slice() and sws_receive_slice() calls are done, before any new sws_frame_start() calls.
Initialize the scaling process for a given pair of source/destination frames. Must be called before any calls to sws_send_slice() and sws_receive_slice().
The destination frame.
The source frame. The data buffers must be allocated, but the frame data does not have to be ready at this point. Data availability is then signalled by sws_send_slice().
0 on success, a negative AVERROR code on failure
Free the swscaler context swsContext. If swsContext is NULL, this function does nothing.
Get the AVClass for swsContext. It can be used in combination with AV_OPT_SEARCH_FAKE_OBJ for examining options.
Check if context can be reused, otherwise reallocate a new one.
Return a pointer to yuv<->rgb coefficients for the given colorspace suitable for sws_setColorspaceDetails().
One of the SWS_CS_* macros. If invalid, SWS_CS_DEFAULT is used.
negative error code on error, non-negative otherwise (when LIBSWSCALE_VERSION_MAJOR > 6)
Allocate and return an SwsContext. You need it to perform scaling/conversion operations using sws_scale().
the width of the source image
the height of the source image
the source image format
the width of the destination image
the height of the destination image
the destination image format
specify which algorithm and options to use for rescaling
extra parameters to tune the used scaler. For SWS_BICUBIC, param[0] and param[1] tune the shape of the basis function: param[0] tunes f(1) and param[1] tunes f'(1). For SWS_GAUSS, param[0] tunes the exponent and thus the cutoff frequency. For SWS_LANCZOS, param[0] tunes the width of the window function.
a pointer to an allocated context, or NULL in case of error
Return a normalized Gaussian curve used for filtering. quality = 3 is high quality; lower values give lower quality.
Initialize the swscaler context sws_context.
zero or positive value on success, a negative value on error
Returns a positive value if an endianness conversion for pix_fmt is supported, 0 otherwise.
the pixel format
a positive value if an endianness conversion for pix_fmt is supported, 0 otherwise.
Return a positive value if pix_fmt is a supported input format, 0 otherwise.
Return a positive value if pix_fmt is a supported output format, 0 otherwise.
Scale all the coefficients of a so that their sum equals height.
Request a horizontal slice of the output data to be written into the frame previously provided to sws_frame_start().
first row of the slice; must be a multiple of sws_receive_slice_alignment()
number of rows in the slice; must be a multiple of sws_receive_slice_alignment(), except for the last slice (i.e. when slice_start+slice_height is equal to output frame height)
a non-negative number if the data was successfully written into the output; AVERROR(EAGAIN) if more input data needs to be provided before the output can be produced; another negative AVERROR code on other kinds of scaling failure
Returns alignment required for output slices requested with sws_receive_slice(). Slice offsets and sizes passed to sws_receive_slice() must be multiples of the value returned from this function.
alignment required for output slices requested with sws_receive_slice(). Slice offsets and sizes passed to sws_receive_slice() must be multiples of the value returned from this function.
Scale the image slice in srcSlice and put the resulting scaled slice in the image in dst. A slice is a sequence of consecutive rows in an image.
the scaling context previously created with sws_getContext()
the array containing the pointers to the planes of the source slice
the array containing the strides for each plane of the source image
the position in the source image of the slice to process, that is the number (counted starting from zero) in the image of the first row of the slice
the height of the source slice, that is the number of rows in the slice
the array containing the pointers to the planes of the destination image
the array containing the strides for each plane of the destination image
the height of the output slice
Scale source data from src and write the output to dst.
The destination frame. See documentation for sws_frame_start() for more details.
The source frame.
0 on success, a negative AVERROR code on failure
Scale all the coefficients of a by the scalar value.
Indicate that a horizontal slice of input data is available in the source frame previously provided to sws_frame_start(). The slices may be provided in any order, but may not overlap. For vertically subsampled pixel formats, the slices must be aligned according to subsampling.
first row of the slice
number of rows in the slice
a non-negative number on success, a negative AVERROR code on failure.
Returns a negative error code on error, non-negative otherwise (when LIBSWSCALE_VERSION_MAJOR > 6); returns -1 if not supported otherwise.
the yuv2rgb coefficients describing the input yuv space, normally ff_yuv2rgb_coeffs[x]
flag indicating the white-black range of the input (1=jpeg / 0=mpeg)
the yuv2rgb coefficients describing the output yuv space, normally ff_yuv2rgb_coeffs[x]
flag indicating the white-black range of the output (1=jpeg / 0=mpeg)
16.16 fixed point brightness correction
16.16 fixed point contrast correction
16.16 fixed point saturation correction
negative error code on error, non-negative otherwise (when LIBSWSCALE_VERSION_MAJOR > 6)
Return the libswscale build-time configuration.
Return the libswscale license.
Color conversion and scaling library.
Compute ceil(log2(x)).
value used to compute ceil(log2(x))
computed ceiling of log2(x)
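The semantics of ceil(log2(x)) — the smallest n such that 2^n >= x — can be sketched in a few lines of Python. This is an illustrative re-implementation, not FFmpeg's bit-twiddling version:

```python
def ceil_log2(x):
    """Smallest n such that 2**n >= x, for integer x >= 1."""
    n = 0
    while (1 << n) < x:
        n += 1
    return n
```

So ceil_log2(8) is 3, while ceil_log2(9) rounds up to 4.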
Clip a signed integer value into the amin-amax range.
value to clip
minimum value of the clip range
maximum value of the clip range
clipped value
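The clip family above all reduce to the same three-way comparison; the fixed-range variants (e.g. the -32768..32767 one below) are just this with baked-in bounds. A minimal sketch:

```python
def clip(a, amin, amax):
    """Clip a into the amin..amax range (amin <= amax assumed)."""
    if a < amin:
        return amin
    if a > amax:
        return amax
    return a

# The 16-bit variant is equivalent to clip(a, -32768, 32767).
```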
Clip a signed integer value into the -32768,32767 range.
value to clip
clipped value
Clip a signed integer value into the -128,127 range.
value to clip
clipped value
Clip a signed integer into the -(2^p),(2^p-1) range.
value to clip
bit position to clip at
clipped value
Clip a signed integer value into the 0-65535 range.
value to clip
clipped value
Clip a signed integer value into the 0-255 range.
value to clip
clipped value
Clip a signed integer to an unsigned power of two range.
value to clip
bit position to clip at
clipped value
Clip a signed 64bit integer value into the amin-amax range.
value to clip
minimum value of the clip range
maximum value of the clip range
clipped value
Clip a double value into the amin-amax range. If a is NaN or -inf, amin will be returned; if a is +inf, amax will be returned.
value to clip
minimum value of the clip range
maximum value of the clip range
clipped value
Clip a float value into the amin-amax range. If a is NaN or -inf, amin will be returned; if a is +inf, amax will be returned.
value to clip
minimum value of the clip range
maximum value of the clip range
clipped value
Clip a signed 64-bit integer value into the -2147483648,2147483647 range.
value to clip
clipped value
Compare two rationals.
First rational
Second rational
One of the following values: - 0 if `a == b` - 1 if `a > b` - -1 if `a < b` - `INT_MIN` if one of the values is of the form `0 / 0`
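Assuming positive denominators (the real function also handles signed denominators and distinguishes more zero cases), the comparison amounts to cross-multiplication; a Python sketch of the return contract described above:

```python
INT_MIN = -(2 ** 31)

def cmp_q(a, b):
    """Compare rationals a and b, each a (num, den) tuple with den > 0.

    Returns 0 if a == b, 1 if a > b, -1 if a < b,
    and INT_MIN if one of the values is of the form 0/0.
    """
    if a == (0, 0) or b == (0, 0):
        return INT_MIN
    # a.num/a.den ? b.num/b.den  <=>  a.num*b.den ? b.num*a.den
    d = a[0] * b[1] - b[0] * a[1]
    return 0 if d == 0 else (1 if d > 0 else -1)
```

Cross-multiplying avoids floating-point rounding, which is why rational timestamps are compared this way rather than via av_q2d().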
Reinterpret a double as a 64-bit integer.
Reinterpret a float as a 32-bit integer.
Reinterpret a 64-bit integer as a double.
Reinterpret a 32-bit integer as a float.
Invert a rational.
value
1 / q
Fill the provided buffer with an error string corresponding to the AVERROR code errnum.
a buffer
size in bytes of errbuf
error code to describe
the input buffer, filled with the error description
Create an AVRational.
Clear high bits from an unsigned integer starting at a specific bit position
value to clip
bit position to clip at
clipped value
Count number of bits set to one in x
value to count bits of
the number of bits set to one in x
Count number of bits set to one in x
value to count bits of
the number of bits set to one in x
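Counting set bits can be sketched with the classic clear-lowest-set-bit loop (Kernighan's method); this illustrates the semantics, not FFmpeg's actual implementation:

```python
def popcount(x):
    """Count the number of bits set to one in the non-negative integer x."""
    n = 0
    while x:
        x &= x - 1  # clear the lowest set bit
        n += 1
    return n
```

The loop runs once per set bit, so sparse values are counted quickly.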
Convert an AVRational to a `double`.
AVRational to convert
`a` in floating-point form
Add two signed 32-bit values with saturation.
one value
another value
sum with signed saturation
Add two signed 64-bit values with saturation.
one value
another value
sum with signed saturation
Add a doubled value to another value with saturation at both stages.
first value
value doubled and added to a
sum sat(a + sat(2*b)) with signed saturation
Subtract a doubled value from another value with saturation at both stages.
first value
value doubled and subtracted from a
difference sat(a - sat(2*b)) with signed saturation
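The two-stage saturation described above (saturate the doubling first, then the addition) differs from a single clamp of the exact result; a Python sketch for the 32-bit case, with illustrative names:

```python
INT32_MIN, INT32_MAX = -(2 ** 31), 2 ** 31 - 1

def sat32(v):
    """Clamp an arbitrary integer into signed 32-bit range."""
    return max(INT32_MIN, min(INT32_MAX, v))

def sat_add32(a, b):
    """Add two signed 32-bit values with saturation."""
    return sat32(a + b)

def sat_dadd32(a, b):
    """sat(a + sat(2*b)): saturate at both stages."""
    return sat32(a + sat32(2 * b))

def sat_dsub32(a, b):
    """sat(a - sat(2*b)): saturate at both stages."""
    return sat32(a - sat32(2 * b))
```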
Subtract two signed 32-bit values with saturation.
one value
another value
difference with signed saturation
Subtract two signed 64-bit values with saturation.
one value
another value
difference with signed saturation
Return the default pointer x in case p is NULL.
ftell() equivalent for AVIOContext.
position or AVERROR.
_WIN32_WINNT = 0x602
AV_BUFFER_FLAG_READONLY = (1 << 0)
AV_BUFFERSINK_FLAG_NO_REQUEST = 0x2
AV_BUFFERSINK_FLAG_PEEK = 0x1
AV_CH_BACK_CENTER = (1ULL << AV_CHAN_BACK_CENTER )
AV_CH_BACK_LEFT = (1ULL << AV_CHAN_BACK_LEFT )
AV_CH_BACK_RIGHT = (1ULL << AV_CHAN_BACK_RIGHT )
AV_CH_BOTTOM_FRONT_CENTER = (1ULL << AV_CHAN_BOTTOM_FRONT_CENTER )
AV_CH_BOTTOM_FRONT_LEFT = (1ULL << AV_CHAN_BOTTOM_FRONT_LEFT )
AV_CH_BOTTOM_FRONT_RIGHT = (1ULL << AV_CHAN_BOTTOM_FRONT_RIGHT )
AV_CH_FRONT_CENTER = (1ULL << AV_CHAN_FRONT_CENTER )
AV_CH_FRONT_LEFT = (1ULL << AV_CHAN_FRONT_LEFT )
AV_CH_FRONT_LEFT_OF_CENTER = (1ULL << AV_CHAN_FRONT_LEFT_OF_CENTER )
AV_CH_FRONT_RIGHT = (1ULL << AV_CHAN_FRONT_RIGHT )
AV_CH_FRONT_RIGHT_OF_CENTER = (1ULL << AV_CHAN_FRONT_RIGHT_OF_CENTER)
AV_CH_LAYOUT_2_1 = (AV_CH_LAYOUT_STEREO|AV_CH_BACK_CENTER)
AV_CH_LAYOUT_2_2 = (AV_CH_LAYOUT_STEREO|AV_CH_SIDE_LEFT|AV_CH_SIDE_RIGHT)
AV_CH_LAYOUT_22POINT2 = (AV_CH_LAYOUT_5POINT1_BACK|AV_CH_FRONT_LEFT_OF_CENTER|AV_CH_FRONT_RIGHT_OF_CENTER|AV_CH_BACK_CENTER|AV_CH_LOW_FREQUENCY_2|AV_CH_SIDE_LEFT|AV_CH_SIDE_RIGHT|AV_CH_TOP_FRONT_LEFT|AV_CH_TOP_FRONT_RIGHT|AV_CH_TOP_FRONT_CENTER|AV_CH_TOP_CENTER|AV_CH_TOP_BACK_LEFT|AV_CH_TOP_BACK_RIGHT|AV_CH_TOP_SIDE_LEFT|AV_CH_TOP_SIDE_RIGHT|AV_CH_TOP_BACK_CENTER|AV_CH_BOTTOM_FRONT_CENTER|AV_CH_BOTTOM_FRONT_LEFT|AV_CH_BOTTOM_FRONT_RIGHT)
AV_CH_LAYOUT_2POINT1 = (AV_CH_LAYOUT_STEREO|AV_CH_LOW_FREQUENCY)
AV_CH_LAYOUT_3POINT1 = (AV_CH_LAYOUT_SURROUND|AV_CH_LOW_FREQUENCY)
AV_CH_LAYOUT_4POINT0 = (AV_CH_LAYOUT_SURROUND|AV_CH_BACK_CENTER)
AV_CH_LAYOUT_4POINT1 = (AV_CH_LAYOUT_4POINT0|AV_CH_LOW_FREQUENCY)
AV_CH_LAYOUT_5POINT0 = (AV_CH_LAYOUT_SURROUND|AV_CH_SIDE_LEFT|AV_CH_SIDE_RIGHT)
AV_CH_LAYOUT_5POINT0_BACK = (AV_CH_LAYOUT_SURROUND|AV_CH_BACK_LEFT|AV_CH_BACK_RIGHT)
AV_CH_LAYOUT_5POINT1 = (AV_CH_LAYOUT_5POINT0|AV_CH_LOW_FREQUENCY)
AV_CH_LAYOUT_5POINT1_BACK = (AV_CH_LAYOUT_5POINT0_BACK|AV_CH_LOW_FREQUENCY)
AV_CH_LAYOUT_6POINT0 = (AV_CH_LAYOUT_5POINT0|AV_CH_BACK_CENTER)
AV_CH_LAYOUT_6POINT0_FRONT = (AV_CH_LAYOUT_2_2|AV_CH_FRONT_LEFT_OF_CENTER|AV_CH_FRONT_RIGHT_OF_CENTER)
AV_CH_LAYOUT_6POINT1 = (AV_CH_LAYOUT_5POINT1|AV_CH_BACK_CENTER)
AV_CH_LAYOUT_6POINT1_BACK = (AV_CH_LAYOUT_5POINT1_BACK|AV_CH_BACK_CENTER)
AV_CH_LAYOUT_6POINT1_FRONT = (AV_CH_LAYOUT_6POINT0_FRONT|AV_CH_LOW_FREQUENCY)
AV_CH_LAYOUT_7POINT0 = (AV_CH_LAYOUT_5POINT0|AV_CH_BACK_LEFT|AV_CH_BACK_RIGHT)
AV_CH_LAYOUT_7POINT0_FRONT = (AV_CH_LAYOUT_5POINT0|AV_CH_FRONT_LEFT_OF_CENTER|AV_CH_FRONT_RIGHT_OF_CENTER)
AV_CH_LAYOUT_7POINT1 = (AV_CH_LAYOUT_5POINT1|AV_CH_BACK_LEFT|AV_CH_BACK_RIGHT)
AV_CH_LAYOUT_7POINT1_WIDE = (AV_CH_LAYOUT_5POINT1|AV_CH_FRONT_LEFT_OF_CENTER|AV_CH_FRONT_RIGHT_OF_CENTER)
AV_CH_LAYOUT_7POINT1_WIDE_BACK = (AV_CH_LAYOUT_5POINT1_BACK|AV_CH_FRONT_LEFT_OF_CENTER|AV_CH_FRONT_RIGHT_OF_CENTER)
AV_CH_LAYOUT_HEXADECAGONAL = (AV_CH_LAYOUT_OCTAGONAL|AV_CH_WIDE_LEFT|AV_CH_WIDE_RIGHT|AV_CH_TOP_BACK_LEFT|AV_CH_TOP_BACK_RIGHT|AV_CH_TOP_BACK_CENTER|AV_CH_TOP_FRONT_CENTER|AV_CH_TOP_FRONT_LEFT|AV_CH_TOP_FRONT_RIGHT)
AV_CH_LAYOUT_HEXAGONAL = (AV_CH_LAYOUT_5POINT0_BACK|AV_CH_BACK_CENTER)
AV_CH_LAYOUT_MONO = (AV_CH_FRONT_CENTER)
AV_CH_LAYOUT_NATIVE = 0x8000000000000000ULL
AV_CH_LAYOUT_OCTAGONAL = (AV_CH_LAYOUT_5POINT0|AV_CH_BACK_LEFT|AV_CH_BACK_CENTER|AV_CH_BACK_RIGHT)
AV_CH_LAYOUT_QUAD = (AV_CH_LAYOUT_STEREO|AV_CH_BACK_LEFT|AV_CH_BACK_RIGHT)
AV_CH_LAYOUT_STEREO = (AV_CH_FRONT_LEFT|AV_CH_FRONT_RIGHT)
AV_CH_LAYOUT_STEREO_DOWNMIX = (AV_CH_STEREO_LEFT|AV_CH_STEREO_RIGHT)
AV_CH_LAYOUT_SURROUND = (AV_CH_LAYOUT_STEREO|AV_CH_FRONT_CENTER)
AV_CH_LOW_FREQUENCY = (1ULL << AV_CHAN_LOW_FREQUENCY )
AV_CH_LOW_FREQUENCY_2 = (1ULL << AV_CHAN_LOW_FREQUENCY_2 )
AV_CH_SIDE_LEFT = (1ULL << AV_CHAN_SIDE_LEFT )
AV_CH_SIDE_RIGHT = (1ULL << AV_CHAN_SIDE_RIGHT )
AV_CH_STEREO_LEFT = (1ULL << AV_CHAN_STEREO_LEFT )
AV_CH_STEREO_RIGHT = (1ULL << AV_CHAN_STEREO_RIGHT )
AV_CH_SURROUND_DIRECT_LEFT = (1ULL << AV_CHAN_SURROUND_DIRECT_LEFT )
AV_CH_SURROUND_DIRECT_RIGHT = (1ULL << AV_CHAN_SURROUND_DIRECT_RIGHT)
AV_CH_TOP_BACK_CENTER = (1ULL << AV_CHAN_TOP_BACK_CENTER )
AV_CH_TOP_BACK_LEFT = (1ULL << AV_CHAN_TOP_BACK_LEFT )
AV_CH_TOP_BACK_RIGHT = (1ULL << AV_CHAN_TOP_BACK_RIGHT )
AV_CH_TOP_CENTER = (1ULL << AV_CHAN_TOP_CENTER )
AV_CH_TOP_FRONT_CENTER = (1ULL << AV_CHAN_TOP_FRONT_CENTER )
AV_CH_TOP_FRONT_LEFT = (1ULL << AV_CHAN_TOP_FRONT_LEFT )
AV_CH_TOP_FRONT_RIGHT = (1ULL << AV_CHAN_TOP_FRONT_RIGHT )
AV_CH_TOP_SIDE_LEFT = (1ULL << AV_CHAN_TOP_SIDE_LEFT )
AV_CH_TOP_SIDE_RIGHT = (1ULL << AV_CHAN_TOP_SIDE_RIGHT )
AV_CH_WIDE_LEFT = (1ULL << AV_CHAN_WIDE_LEFT )
AV_CH_WIDE_RIGHT = (1ULL << AV_CHAN_WIDE_RIGHT )
AV_CODEC_CAP_AUTO_THREADS = AV_CODEC_CAP_OTHER_THREADS
AV_CODEC_CAP_AVOID_PROBING = (1 << 17)
AV_CODEC_CAP_CHANNEL_CONF = (1 << 10)
AV_CODEC_CAP_DELAY = (1 << 5)
AV_CODEC_CAP_DR1 = (1 << 1)
AV_CODEC_CAP_DRAW_HORIZ_BAND = (1 << 0)
AV_CODEC_CAP_ENCODER_FLUSH = (1 << 21)
AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE = (1 << 20)
AV_CODEC_CAP_EXPERIMENTAL = (1 << 9)
AV_CODEC_CAP_FRAME_THREADS = (1 << 12)
AV_CODEC_CAP_HARDWARE = (1 << 18)
AV_CODEC_CAP_HYBRID = (1 << 19)
AV_CODEC_CAP_INTRA_ONLY = 0x40000000
AV_CODEC_CAP_LOSSLESS = 0x80000000
AV_CODEC_CAP_OTHER_THREADS = (1 << 15)
AV_CODEC_CAP_PARAM_CHANGE = (1 << 14)
AV_CODEC_CAP_SLICE_THREADS = (1 << 13)
AV_CODEC_CAP_SMALL_LAST_FRAME = (1 << 6)
AV_CODEC_CAP_SUBFRAMES = (1 << 8)
AV_CODEC_CAP_TRUNCATED = (1 << 3)
AV_CODEC_CAP_VARIABLE_FRAME_SIZE = (1 << 16)
AV_CODEC_EXPORT_DATA_FILM_GRAIN = 0x1 << 0x3
AV_CODEC_EXPORT_DATA_MVS = 0x1 << 0x0
AV_CODEC_EXPORT_DATA_PRFT = 0x1 << 0x1
AV_CODEC_EXPORT_DATA_VIDEO_ENC_PARAMS = 0x1 << 0x2
AV_CODEC_FLAG_4MV = 0x1 << 0x2
AV_CODEC_FLAG_AC_PRED = 0x1 << 0x18
AV_CODEC_FLAG_BITEXACT = 0x1 << 0x17
AV_CODEC_FLAG_CLOSED_GOP = 0x1U << 0x1f
AV_CODEC_FLAG_DROPCHANGED = 0x1 << 0x5
AV_CODEC_FLAG_GLOBAL_HEADER = 0x1 << 0x16
AV_CODEC_FLAG_GRAY = 0x1 << 0xd
AV_CODEC_FLAG_INTERLACED_DCT = 0x1 << 0x12
AV_CODEC_FLAG_INTERLACED_ME = 0x1 << 0x1d
AV_CODEC_FLAG_LOOP_FILTER = 0x1 << 0xb
AV_CODEC_FLAG_LOW_DELAY = 0x1 << 0x13
AV_CODEC_FLAG_OUTPUT_CORRUPT = 0x1 << 0x3
AV_CODEC_FLAG_PASS1 = 0x1 << 0x9
AV_CODEC_FLAG_PASS2 = 0x1 << 0xa
AV_CODEC_FLAG_PSNR = 0x1 << 0xf
AV_CODEC_FLAG_QPEL = 0x1 << 0x4
AV_CODEC_FLAG_QSCALE = 0x1 << 0x1
AV_CODEC_FLAG_TRUNCATED = 0x1 << 0x10
AV_CODEC_FLAG_UNALIGNED = 0x1 << 0x0
AV_CODEC_FLAG2_CHUNKS = 0x1 << 0xf
AV_CODEC_FLAG2_DROP_FRAME_TIMECODE = 0x1 << 0xd
AV_CODEC_FLAG2_EXPORT_MVS = 0x1 << 0x1c
AV_CODEC_FLAG2_FAST = 0x1 << 0x0
AV_CODEC_FLAG2_IGNORE_CROP = 0x1 << 0x10
AV_CODEC_FLAG2_LOCAL_HEADER = 0x1 << 0x3
AV_CODEC_FLAG2_NO_OUTPUT = 0x1 << 0x2
AV_CODEC_FLAG2_RO_FLUSH_NOOP = 0x1 << 0x1e
AV_CODEC_FLAG2_SHOW_ALL = 0x1 << 0x16
AV_CODEC_FLAG2_SKIP_MANUAL = 0x1 << 0x1d
AV_CODEC_ID_H265 = AV_CODEC_ID_HEVC
AV_CODEC_ID_H266 = AV_CODEC_ID_VVC
AV_CODEC_ID_IFF_BYTERUN1 = AV_CODEC_ID_IFF_ILBM
AV_CODEC_PROP_BITMAP_SUB = 0x1 << 0x10
AV_CODEC_PROP_INTRA_ONLY = 0x1 << 0x0
AV_CODEC_PROP_LOSSLESS = 0x1 << 0x2
AV_CODEC_PROP_LOSSY = 0x1 << 0x1
AV_CODEC_PROP_REORDER = 0x1 << 0x3
AV_CODEC_PROP_TEXT_SUB = 0x1 << 0x11
AV_CPU_FLAG_3DNOW = 0x4
AV_CPU_FLAG_3DNOWEXT = 0x20
AV_CPU_FLAG_AESNI = 0x80000
AV_CPU_FLAG_ALTIVEC = 0x1
AV_CPU_FLAG_ARMV5TE = 0x1 << 0x0
AV_CPU_FLAG_ARMV6 = 0x1 << 0x1
AV_CPU_FLAG_ARMV6T2 = 0x1 << 0x2
AV_CPU_FLAG_ARMV8 = 0x1 << 0x6
AV_CPU_FLAG_ATOM = 0x10000000
AV_CPU_FLAG_AVX = 0x4000
AV_CPU_FLAG_AVX2 = 0x8000
AV_CPU_FLAG_AVX512 = 0x100000
AV_CPU_FLAG_AVX512ICL = 0x200000
AV_CPU_FLAG_AVXSLOW = 0x8000000
AV_CPU_FLAG_BMI1 = 0x20000
AV_CPU_FLAG_BMI2 = 0x40000
AV_CPU_FLAG_CMOV = 0x1000
AV_CPU_FLAG_FMA3 = 0x10000
AV_CPU_FLAG_FMA4 = 0x800
AV_CPU_FLAG_FORCE = 0x80000000U
AV_CPU_FLAG_LASX = 0x1 << 0x1
AV_CPU_FLAG_LSX = 0x1 << 0x0
AV_CPU_FLAG_MMI = 0x1 << 0x0
AV_CPU_FLAG_MMX = 0x1
AV_CPU_FLAG_MMX2 = 0x2
AV_CPU_FLAG_MMXEXT = 0x2
AV_CPU_FLAG_MSA = 0x1 << 0x1
AV_CPU_FLAG_NEON = 0x1 << 0x5
AV_CPU_FLAG_POWER8 = 0x4
AV_CPU_FLAG_SETEND = 0x1 << 0x10
AV_CPU_FLAG_SLOW_GATHER = 0x2000000
AV_CPU_FLAG_SSE = 0x8
AV_CPU_FLAG_SSE2 = 0x10
AV_CPU_FLAG_SSE2SLOW = 0x40000000
AV_CPU_FLAG_SSE3 = 0x40
AV_CPU_FLAG_SSE3SLOW = 0x20000000
AV_CPU_FLAG_SSE4 = 0x100
AV_CPU_FLAG_SSE42 = 0x200
AV_CPU_FLAG_SSSE3 = 0x80
AV_CPU_FLAG_SSSE3SLOW = 0x4000000
AV_CPU_FLAG_VFP = 0x1 << 0x3
AV_CPU_FLAG_VFP_VM = 0x1 << 0x7
AV_CPU_FLAG_VFPV3 = 0x1 << 0x4
AV_CPU_FLAG_VSX = 0x2
AV_CPU_FLAG_XOP = 0x400
AV_DICT_APPEND = 32
AV_DICT_DONT_OVERWRITE = 16
AV_DICT_DONT_STRDUP_KEY = 4
AV_DICT_DONT_STRDUP_VAL = 8
AV_DICT_IGNORE_SUFFIX = 2
AV_DICT_MATCH_CASE = 1
AV_DICT_MULTIKEY = 64
AV_DISPOSITION_ATTACHED_PIC = (1 << 10)
AV_DISPOSITION_CAPTIONS = (1 << 16)
AV_DISPOSITION_CLEAN_EFFECTS = (1 << 9)
AV_DISPOSITION_COMMENT = (1 << 3)
AV_DISPOSITION_DEFAULT = (1 << 0)
AV_DISPOSITION_DEPENDENT = (1 << 19)
AV_DISPOSITION_DESCRIPTIONS = (1 << 17)
AV_DISPOSITION_DUB = (1 << 1)
AV_DISPOSITION_FORCED = (1 << 6)
AV_DISPOSITION_HEARING_IMPAIRED = (1 << 7)
AV_DISPOSITION_KARAOKE = (1 << 5)
AV_DISPOSITION_LYRICS = (1 << 4)
AV_DISPOSITION_METADATA = (1 << 18)
AV_DISPOSITION_NON_DIEGETIC = (1 << 12)
AV_DISPOSITION_ORIGINAL = (1 << 2)
AV_DISPOSITION_STILL_IMAGE = (1 << 20)
AV_DISPOSITION_TIMED_THUMBNAILS = (1 << 11)
AV_DISPOSITION_VISUAL_IMPAIRED = (1 << 8)
AV_EF_AGGRESSIVE = 0x1 << 0x12
AV_EF_BITSTREAM = 0x1 << 0x1
AV_EF_BUFFER = 0x1 << 0x2
AV_EF_CAREFUL = 0x1 << 0x10
AV_EF_COMPLIANT = 0x1 << 0x11
AV_EF_CRCCHECK = 0x1 << 0x0
AV_EF_EXPLODE = 0x1 << 0x3
AV_EF_IGNORE_ERR = 0x1 << 0xf
AV_ERROR_MAX_STRING_SIZE = 64
AV_FOURCC_MAX_STRING_SIZE = 32
AV_FRAME_FILENAME_FLAGS_MULTIPLE = 1
AV_FRAME_FLAG_CORRUPT = (1 << 0)
AV_FRAME_FLAG_DISCARD = (1 << 2)
AV_GET_BUFFER_FLAG_REF = 0x1 << 0x0
AV_GET_ENCODE_BUFFER_FLAG_REF = 0x1 << 0x0
AV_HAVE_BIGENDIAN = 0
AV_HAVE_FAST_UNALIGNED = 1
AV_HWACCEL_CODEC_CAP_EXPERIMENTAL = 0x200
AV_HWACCEL_FLAG_ALLOW_HIGH_DEPTH = 0x1 << 0x1
AV_HWACCEL_FLAG_ALLOW_PROFILE_MISMATCH = 0x1 << 0x2
AV_HWACCEL_FLAG_IGNORE_LEVEL = 0x1 << 0x0
AV_INPUT_BUFFER_MIN_SIZE = 0x4000
AV_INPUT_BUFFER_PADDING_SIZE = 64
AV_LOG_DEBUG = 48
AV_LOG_ERROR = 16
AV_LOG_FATAL = 8
AV_LOG_INFO = 32
AV_LOG_MAX_OFFSET = (AV_LOG_TRACE - AV_LOG_QUIET)
AV_LOG_PANIC = 0
AV_LOG_PRINT_LEVEL = 2
AV_LOG_QUIET = -8
AV_LOG_SKIP_REPEATED = 1
AV_LOG_TRACE = 56
AV_LOG_VERBOSE = 40
AV_LOG_WARNING = 24
AV_NOPTS_VALUE = ((int64_t)UINT64_C(0x8000000000000000))
AV_NUM_DATA_POINTERS = 8
AV_OPT_ALLOW_NULL = (1 << 2)
AV_OPT_FLAG_AUDIO_PARAM = 8
AV_OPT_FLAG_BSF_PARAM = (1<<8)
AV_OPT_FLAG_CHILD_CONSTS = (1<<18)
AV_OPT_FLAG_DECODING_PARAM = 2
AV_OPT_FLAG_DEPRECATED = (1<<17)
AV_OPT_FLAG_ENCODING_PARAM = 1
AV_OPT_FLAG_EXPORT = 64
AV_OPT_FLAG_FILTERING_PARAM = (1<<16)
AV_OPT_FLAG_READONLY = 128
AV_OPT_FLAG_RUNTIME_PARAM = (1<<15)
AV_OPT_FLAG_SUBTITLE_PARAM = 32
AV_OPT_FLAG_VIDEO_PARAM = 16
AV_OPT_MULTI_COMPONENT_RANGE = (1 << 12)
AV_OPT_SEARCH_CHILDREN = (1 << 0)
AV_OPT_SEARCH_FAKE_OBJ = (1 << 1)
AV_OPT_SERIALIZE_OPT_FLAGS_EXACT = 0x00000002
AV_OPT_SERIALIZE_SKIP_DEFAULTS = 0x00000001
AV_PARSER_PTS_NB = 0x4
AV_PIX_FMT_FLAG_ALPHA = 0x1 << 0x7
AV_PIX_FMT_FLAG_BAYER = 0x1 << 0x8
AV_PIX_FMT_FLAG_BE = 0x1 << 0x0
AV_PIX_FMT_FLAG_BITSTREAM = 0x1 << 0x2
AV_PIX_FMT_FLAG_FLOAT = 0x1 << 0x9
AV_PIX_FMT_FLAG_HWACCEL = 0x1 << 0x3
AV_PIX_FMT_FLAG_PAL = 0x1 << 0x1
AV_PIX_FMT_FLAG_PLANAR = 0x1 << 0x4
AV_PIX_FMT_FLAG_RGB = 0x1 << 0x5
AV_PKT_DATA_QUALITY_FACTOR = AV_PKT_DATA_QUALITY_STATS
AV_PKT_FLAG_CORRUPT = 0x0002
AV_PKT_FLAG_DISCARD = 0x0004
AV_PKT_FLAG_DISPOSABLE = 0x0010
AV_PKT_FLAG_KEY = 0x0001
AV_PKT_FLAG_TRUSTED = 0x0008
AV_PROGRAM_RUNNING = 1
AV_PTS_WRAP_ADD_OFFSET = 1
AV_PTS_WRAP_IGNORE = 0
AV_PTS_WRAP_SUB_OFFSET = -1
AV_SUBTITLE_FLAG_FORCED = 0x1
AV_TIME_BASE = 1000000
AV_TIMECODE_STR_SIZE = 0x17
AVERROR_BSF_NOT_FOUND = FFERRTAG(0xF8,'B','S','F')
AVERROR_BUFFER_TOO_SMALL = FFERRTAG( 'B','U','F','S')
AVERROR_BUG = FFERRTAG( 'B','U','G','!')
AVERROR_BUG2 = FFERRTAG( 'B','U','G',' ')
AVERROR_DECODER_NOT_FOUND = FFERRTAG(0xF8,'D','E','C')
AVERROR_DEMUXER_NOT_FOUND = FFERRTAG(0xF8,'D','E','M')
AVERROR_ENCODER_NOT_FOUND = FFERRTAG(0xF8,'E','N','C')
AVERROR_EOF = FFERRTAG( 'E','O','F',' ')
AVERROR_EXIT = FFERRTAG( 'E','X','I','T')
AVERROR_EXPERIMENTAL = (-0x2bb2afa8)
AVERROR_EXTERNAL = FFERRTAG( 'E','X','T',' ')
AVERROR_FILTER_NOT_FOUND = FFERRTAG(0xF8,'F','I','L')
AVERROR_HTTP_BAD_REQUEST = FFERRTAG(0xF8,'4','0','0')
AVERROR_HTTP_FORBIDDEN = FFERRTAG(0xF8,'4','0','3')
AVERROR_HTTP_NOT_FOUND = FFERRTAG(0xF8,'4','0','4')
AVERROR_HTTP_OTHER_4XX = FFERRTAG(0xF8,'4','X','X')
AVERROR_HTTP_SERVER_ERROR = FFERRTAG(0xF8,'5','X','X')
AVERROR_HTTP_UNAUTHORIZED = FFERRTAG(0xF8,'4','0','1')
AVERROR_INPUT_CHANGED = (-0x636e6701)
AVERROR_INVALIDDATA = FFERRTAG( 'I','N','D','A')
AVERROR_MUXER_NOT_FOUND = FFERRTAG(0xF8,'M','U','X')
AVERROR_OPTION_NOT_FOUND = FFERRTAG(0xF8,'O','P','T')
AVERROR_OUTPUT_CHANGED = (-0x636e6702)
AVERROR_PATCHWELCOME = FFERRTAG( 'P','A','W','E')
AVERROR_PROTOCOL_NOT_FOUND = FFERRTAG(0xF8,'P','R','O')
AVERROR_STREAM_NOT_FOUND = FFERRTAG(0xF8,'S','T','R')
AVERROR_UNKNOWN = FFERRTAG( 'U','N','K','N')
AVFILTER_CMD_FLAG_FAST = 0x2
AVFILTER_CMD_FLAG_ONE = 0x1
AVFILTER_FLAG_DYNAMIC_INPUTS = 0x1 << 0x0
AVFILTER_FLAG_DYNAMIC_OUTPUTS = 0x1 << 0x1
AVFILTER_FLAG_METADATA_ONLY = 0x1 << 0x3
AVFILTER_FLAG_SLICE_THREADS = 0x1 << 0x2
AVFILTER_FLAG_SUPPORT_TIMELINE = AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC | AVFILTER_FLAG_SUPPORT_TIMELINE_INTERNAL
AVFILTER_FLAG_SUPPORT_TIMELINE_GENERIC = 0x1 << 0x10
AVFILTER_FLAG_SUPPORT_TIMELINE_INTERNAL = 0x1 << 0x11
AVFILTER_THREAD_SLICE = 0x1 << 0x0
AVFMT_ALLOW_FLUSH = 0x10000
AVFMT_AVOID_NEG_TS_AUTO = -1
AVFMT_AVOID_NEG_TS_DISABLED = 0
AVFMT_AVOID_NEG_TS_MAKE_NON_NEGATIVE = 1
AVFMT_AVOID_NEG_TS_MAKE_ZERO = 2
AVFMT_EVENT_FLAG_METADATA_UPDATED = 0x0001
AVFMT_EXPERIMENTAL = 0x0004
AVFMT_FLAG_AUTO_BSF = 0x200000
AVFMT_FLAG_BITEXACT = 0x0400
AVFMT_FLAG_CUSTOM_IO = 0x0080
AVFMT_FLAG_DISCARD_CORRUPT = 0x0100
AVFMT_FLAG_FAST_SEEK = 0x80000
AVFMT_FLAG_FLUSH_PACKETS = 0x0200
AVFMT_FLAG_GENPTS = 0x0001
AVFMT_FLAG_IGNDTS = 0x0008
AVFMT_FLAG_IGNIDX = 0x0002
AVFMT_FLAG_NOBUFFER = 0x0040
AVFMT_FLAG_NOFILLIN = 0x0010
AVFMT_FLAG_NONBLOCK = 0x0004
AVFMT_FLAG_NOPARSE = 0x0020
AVFMT_FLAG_PRIV_OPT = 0x20000
AVFMT_FLAG_SHORTEST = 0x100000
AVFMT_FLAG_SORT_DTS = 0x10000
AVFMT_GENERIC_INDEX = 0x0100
AVFMT_GLOBALHEADER = 0x0040
AVFMT_NEEDNUMBER = 0x0002
AVFMT_NO_BYTE_SEEK = 0x8000
AVFMT_NOBINSEARCH = 0x2000
AVFMT_NODIMENSIONS = 0x0800
AVFMT_NOFILE = 0x0001
AVFMT_NOGENSEARCH = 0x4000
AVFMT_NOSTREAMS = 0x1000
AVFMT_NOTIMESTAMPS = 0x0080
AVFMT_SEEK_TO_PTS = 0x4000000
AVFMT_SHOW_IDS = 0x0008
AVFMT_TS_DISCONT = 0x0200
AVFMT_TS_NEGATIVE = 0x40000
AVFMT_TS_NONSTRICT = 0x20000
AVFMT_VARIABLE_FPS = 0x0400
AVFMTCTX_NOHEADER = 0x0001
AVFMTCTX_UNSEEKABLE = 0x0002
AVINDEX_DISCARD_FRAME = 0x0002
AVINDEX_KEYFRAME = 0x0001
AVIO_FLAG_DIRECT = 0x8000
AVIO_FLAG_NONBLOCK = 8
AVIO_FLAG_READ = 1
AVIO_FLAG_READ_WRITE = (AVIO_FLAG_READ|AVIO_FLAG_WRITE)
AVIO_FLAG_WRITE = 2
AVIO_SEEKABLE_NORMAL = (1 << 0)
AVIO_SEEKABLE_TIME = (1 << 1)
AVPALETTE_COUNT = 256
AVPALETTE_SIZE = 1024
AVPROBE_PADDING_SIZE = 32
AVPROBE_SCORE_EXTENSION = 50
AVPROBE_SCORE_MAX = 100
AVPROBE_SCORE_MIME = 75
AVPROBE_SCORE_RETRY = (AVPROBE_SCORE_MAX/4)
AVPROBE_SCORE_STREAM_RETRY = (AVPROBE_SCORE_MAX/4-1)
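The two retry thresholds are derived from AVPROBE_SCORE_MAX rather than written out as literals, which keeps them consistent if the maximum ever changes. With integer division they work out as below (a sketch restating the definitions above, not new behavior):

```python
AVPROBE_SCORE_MAX = 100

# Derived thresholds, as defined above.
AVPROBE_SCORE_RETRY = AVPROBE_SCORE_MAX // 4             # 25
AVPROBE_SCORE_STREAM_RETRY = AVPROBE_SCORE_MAX // 4 - 1  # 24

assert AVPROBE_SCORE_RETRY == 25
assert AVPROBE_SCORE_STREAM_RETRY == 24
```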
AVSEEK_FLAG_ANY = 4
AVSEEK_FLAG_BACKWARD = 1
AVSEEK_FLAG_BYTE = 2
AVSEEK_FLAG_FRAME = 8
AVSEEK_FORCE = 0x20000
AVSEEK_SIZE = 0x10000
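The AVSEEK_FLAG_* values are single bits, so they can be OR-ed together when seeking, while AVSEEK_SIZE and AVSEEK_FORCE occupy much higher bits and so cannot collide with them. A small illustration of combining flags (values copied from the listing above; the particular combination is hypothetical):

```python
AVSEEK_FLAG_BACKWARD = 1
AVSEEK_FLAG_BYTE = 2
AVSEEK_FLAG_ANY = 4
AVSEEK_FLAG_FRAME = 8

# Flags combine bitwise, e.g. "seek backward, to any frame":
flags = AVSEEK_FLAG_BACKWARD | AVSEEK_FLAG_ANY   # == 5
assert flags & AVSEEK_FLAG_BACKWARD
assert not (flags & AVSEEK_FLAG_BYTE)
```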
AVSTREAM_EVENT_FLAG_METADATA_UPDATED = 0x0001
AVSTREAM_EVENT_FLAG_NEW_PACKETS = (1 << 1)
AVSTREAM_INIT_IN_INIT_OUTPUT = 1
AVSTREAM_INIT_IN_WRITE_HEADER = 0
FF_API_AUTO_THREADS = (LIBAVCODEC_VERSION_MAJOR < 60)
FF_API_AV_FOPEN_UTF8 = (LIBAVUTIL_VERSION_MAJOR < 58)
FF_API_AV_MALLOCZ_ARRAY = (LIBAVUTIL_VERSION_MAJOR < 58)
FF_API_AVCTX_TIMEBASE = (LIBAVCODEC_VERSION_MAJOR < 60)
FF_API_AVIOCONTEXT_WRITTEN = (LIBAVFORMAT_VERSION_MAJOR < 60)
FF_API_AVSTREAM_CLASS = (LIBAVFORMAT_VERSION_MAJOR > 59)
FF_API_BUFFERSINK_ALLOC = LIBAVFILTER_VERSION_MAJOR < 0x9
FF_API_COLORSPACE_NAME = (LIBAVUTIL_VERSION_MAJOR < 58)
FF_API_COMPUTE_PKT_FIELDS2 = (LIBAVFORMAT_VERSION_MAJOR < 60)
FF_API_D2STR = (LIBAVUTIL_VERSION_MAJOR < 58)
FF_API_DEBUG_MV = (LIBAVCODEC_VERSION_MAJOR < 60)
FF_API_DECLARE_ALIGNED = (LIBAVUTIL_VERSION_MAJOR < 58)
FF_API_DEVICE_CAPABILITIES = (LIBAVDEVICE_VERSION_MAJOR < 60)
FF_API_FIFO_OLD_API = (LIBAVUTIL_VERSION_MAJOR < 58)
FF_API_FIFO_PEEK2 = (LIBAVUTIL_VERSION_MAJOR < 58)
FF_API_FLAG_TRUNCATED = (LIBAVCODEC_VERSION_MAJOR < 60)
FF_API_GET_FRAME_CLASS = (LIBAVCODEC_VERSION_MAJOR < 60)
FF_API_IDCT_NONE = (LIBAVCODEC_VERSION_MAJOR < 60)
FF_API_INIT_PACKET = (LIBAVCODEC_VERSION_MAJOR < 60)
FF_API_LAVF_PRIV_OPT = (LIBAVFORMAT_VERSION_MAJOR < 60)
FF_API_OLD_CHANNEL_LAYOUT = (LIBAVUTIL_VERSION_MAJOR < 58)
FF_API_OPENH264_CABAC = (LIBAVCODEC_VERSION_MAJOR < 60)
FF_API_OPENH264_SLICE_MODE = (LIBAVCODEC_VERSION_MAJOR < 60)
FF_API_PAD_COUNT = LIBAVFILTER_VERSION_MAJOR < 0x9
FF_API_R_FRAME_RATE = 1
FF_API_SUB_TEXT_FORMAT = (LIBAVCODEC_VERSION_MAJOR < 60)
FF_API_SVTAV1_OPTS = (LIBAVCODEC_VERSION_MAJOR < 60)
FF_API_SWS_PARAM_OPTION = LIBAVFILTER_VERSION_MAJOR < 0x9
FF_API_THREAD_SAFE_CALLBACKS = (LIBAVCODEC_VERSION_MAJOR < 60)
FF_API_UNUSED_CODEC_CAPS = (LIBAVCODEC_VERSION_MAJOR < 60)
FF_API_XVMC = (LIBAVUTIL_VERSION_MAJOR < 58)
FF_BUG_AMV = 0x20
FF_BUG_AUTODETECT = 0x1
FF_BUG_DC_CLIP = 0x1000
FF_BUG_DIRECT_BLOCKSIZE = 0x200
FF_BUG_EDGE = 0x400
FF_BUG_HPEL_CHROMA = 0x800
FF_BUG_IEDGE = 0x8000
FF_BUG_MS = 0x2000
FF_BUG_NO_PADDING = 0x10
FF_BUG_QPEL_CHROMA = 0x40
FF_BUG_QPEL_CHROMA2 = 0x100
FF_BUG_STD_QPEL = 0x80
FF_BUG_TRUNCATED = 0x4000
FF_BUG_UMP4 = 0x8
FF_BUG_XVID_ILACE = 0x4
FF_CMP_BIT = 0x5
FF_CMP_CHROMA = 0x100
FF_CMP_DCT = 0x3
FF_CMP_DCT264 = 0xe
FF_CMP_DCTMAX = 0xd
FF_CMP_MEDIAN_SAD = 0xf
FF_CMP_NSSE = 0xa
FF_CMP_PSNR = 0x4
FF_CMP_RD = 0x6
FF_CMP_SAD = 0x0
FF_CMP_SATD = 0x2
FF_CMP_SSE = 0x1
FF_CMP_VSAD = 0x8
FF_CMP_VSSE = 0x9
FF_CMP_W53 = 0xb
FF_CMP_W97 = 0xc
FF_CMP_ZERO = 0x7
FF_CODEC_PROPERTY_CLOSED_CAPTIONS = 0x2
FF_CODEC_PROPERTY_FILM_GRAIN = 0x4
FF_CODEC_PROPERTY_LOSSLESS = 0x1
FF_COMPLIANCE_EXPERIMENTAL = -0x2
FF_COMPLIANCE_NORMAL = 0x0
FF_COMPLIANCE_STRICT = 0x1
FF_COMPLIANCE_UNOFFICIAL = -0x1
FF_COMPLIANCE_VERY_STRICT = 0x2
FF_COMPRESSION_DEFAULT = -0x1
FF_DCT_ALTIVEC = 0x5
FF_DCT_AUTO = 0x0
FF_DCT_FAAN = 0x6
FF_DCT_FASTINT = 0x1
FF_DCT_INT = 0x2
FF_DCT_MMX = 0x3
FF_DEBUG_BITSTREAM = 0x4
FF_DEBUG_BUFFERS = 0x8000
FF_DEBUG_BUGS = 0x1000
FF_DEBUG_DCT_COEFF = 0x40
FF_DEBUG_ER = 0x400
FF_DEBUG_GREEN_MD = 0x800000
FF_DEBUG_MB_TYPE = 0x8
FF_DEBUG_MMCO = 0x800
FF_DEBUG_NOMC = 0x1000000
FF_DEBUG_PICT_INFO = 0x1
FF_DEBUG_QP = 0x10
FF_DEBUG_RC = 0x2
FF_DEBUG_SKIP = 0x80
FF_DEBUG_STARTCODE = 0x100
FF_DEBUG_THREADS = 0x10000
FF_DEBUG_VIS_MV_B_BACK = 0x4
FF_DEBUG_VIS_MV_B_FOR = 0x2
FF_DEBUG_VIS_MV_P_FOR = 0x1
FF_DECODE_ERROR_CONCEALMENT_ACTIVE = 4
FF_DECODE_ERROR_DECODE_SLICES = 8
FF_DECODE_ERROR_INVALID_BITSTREAM = 1
FF_DECODE_ERROR_MISSING_REFERENCE = 2
FF_DXVA2_WORKAROUND_INTEL_CLEARVIDEO = 0x2
FF_DXVA2_WORKAROUND_SCALING_LIST_ZIGZAG = 0x1
FF_EC_DEBLOCK = 0x2
FF_EC_FAVOR_INTER = 0x100
FF_EC_GUESS_MVS = 0x1
FF_FDEBUG_TS = 0x0001
FF_HLS_TS_OPTIONS = (LIBAVFORMAT_VERSION_MAJOR < 60)
FF_IDCT_ALTIVEC = 0x8
FF_IDCT_ARM = 0x7
FF_IDCT_AUTO = 0x0
FF_IDCT_FAAN = 0x14
FF_IDCT_INT = 0x1
FF_IDCT_NONE = 0x18
FF_IDCT_SIMPLE = 0x2
FF_IDCT_SIMPLEARM = 0xa
FF_IDCT_SIMPLEARMV5TE = 0x10
FF_IDCT_SIMPLEARMV6 = 0x11
FF_IDCT_SIMPLEAUTO = 0x80
FF_IDCT_SIMPLEMMX = 0x3
FF_IDCT_SIMPLENEON = 0x16
FF_IDCT_XVID = 0xe
FF_LAMBDA_MAX = (256*128-1)
FF_LAMBDA_SCALE = (1<<FF_LAMBDA_SHIFT)
FF_LAMBDA_SHIFT = 7
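The lambda constants are defined in terms of each other: FF_LAMBDA_SCALE is one shifted left by FF_LAMBDA_SHIFT, and FF_LAMBDA_MAX is the largest representable value. Expanding the definitions above numerically:

```python
FF_LAMBDA_SHIFT = 7
FF_LAMBDA_SCALE = 1 << FF_LAMBDA_SHIFT   # 128
FF_LAMBDA_MAX = 256 * 128 - 1            # 32767

assert FF_LAMBDA_SCALE == 128
assert FF_LAMBDA_MAX == 32767
```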
FF_LEVEL_UNKNOWN = -0x63
FF_LOSS_ALPHA = 0x8
FF_LOSS_CHROMA = 0x20
FF_LOSS_COLORQUANT = 0x10
FF_LOSS_COLORSPACE = 0x4
FF_LOSS_DEPTH = 0x2
FF_LOSS_RESOLUTION = 0x1
FF_MB_DECISION_BITS = 0x1
FF_MB_DECISION_RD = 0x2
FF_MB_DECISION_SIMPLE = 0x0
FF_PROFILE_AAC_ELD = 0x26
FF_PROFILE_AAC_HE = 0x4
FF_PROFILE_AAC_HE_V2 = 0x1c
FF_PROFILE_AAC_LD = 0x16
FF_PROFILE_AAC_LOW = 0x1
FF_PROFILE_AAC_LTP = 0x3
FF_PROFILE_AAC_MAIN = 0x0
FF_PROFILE_AAC_SSR = 0x2
FF_PROFILE_ARIB_PROFILE_A = 0x0
FF_PROFILE_ARIB_PROFILE_C = 0x1
FF_PROFILE_AV1_HIGH = 0x1
FF_PROFILE_AV1_MAIN = 0x0
FF_PROFILE_AV1_PROFESSIONAL = 0x2
FF_PROFILE_DNXHD = 0x0
FF_PROFILE_DNXHR_444 = 0x5
FF_PROFILE_DNXHR_HQ = 0x3
FF_PROFILE_DNXHR_HQX = 0x4
FF_PROFILE_DNXHR_LB = 0x1
FF_PROFILE_DNXHR_SQ = 0x2
FF_PROFILE_DTS = 0x14
FF_PROFILE_DTS_96_24 = 0x28
FF_PROFILE_DTS_ES = 0x1e
FF_PROFILE_DTS_EXPRESS = 0x46
FF_PROFILE_DTS_HD_HRA = 0x32
FF_PROFILE_DTS_HD_MA = 0x3c
FF_PROFILE_H264_BASELINE = 0x42
FF_PROFILE_H264_CAVLC_444 = 0x2c
FF_PROFILE_H264_CONSTRAINED = 0x1 << 0x9
FF_PROFILE_H264_CONSTRAINED_BASELINE = 0x42 | FF_PROFILE_H264_CONSTRAINED
FF_PROFILE_H264_EXTENDED = 0x58
FF_PROFILE_H264_HIGH = 0x64
FF_PROFILE_H264_HIGH_10 = 0x6e
FF_PROFILE_H264_HIGH_10_INTRA = 0x6e | FF_PROFILE_H264_INTRA
FF_PROFILE_H264_HIGH_422 = 0x7a
FF_PROFILE_H264_HIGH_422_INTRA = 0x7a | FF_PROFILE_H264_INTRA
FF_PROFILE_H264_HIGH_444 = 0x90
FF_PROFILE_H264_HIGH_444_INTRA = 0xf4 | FF_PROFILE_H264_INTRA
FF_PROFILE_H264_HIGH_444_PREDICTIVE = 0xf4
FF_PROFILE_H264_INTRA = 0x1 << 0xb
FF_PROFILE_H264_MAIN = 0x4d
FF_PROFILE_H264_MULTIVIEW_HIGH = 0x76
FF_PROFILE_H264_STEREO_HIGH = 0x80
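The H.264 profile constants combine the standard's profile_idc values with two local flag bits, FF_PROFILE_H264_CONSTRAINED (bit 9) and FF_PROFILE_H264_INTRA (bit 11), so the composite profiles are just a base value OR-ed with a flag, and the base can be recovered by masking. A sketch using the values above (the bit layout is read off the listing, not an official formula):

```python
FF_PROFILE_H264_CONSTRAINED = 1 << 9   # local flag bit, not a profile_idc
FF_PROFILE_H264_INTRA = 1 << 11        # local flag bit, not a profile_idc
FF_PROFILE_H264_BASELINE = 0x42
FF_PROFILE_H264_HIGH_10 = 0x6E

constrained_baseline = FF_PROFILE_H264_BASELINE | FF_PROFILE_H264_CONSTRAINED
high_10_intra = FF_PROFILE_H264_HIGH_10 | FF_PROFILE_H264_INTRA

assert constrained_baseline == 0x242
assert high_10_intra == 0x86E
# Masking off the flag bits recovers the base profile_idc:
assert constrained_baseline & 0xFF == FF_PROFILE_H264_BASELINE
```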
FF_PROFILE_HEVC_MAIN = 0x1
FF_PROFILE_HEVC_MAIN_10 = 0x2
FF_PROFILE_HEVC_MAIN_STILL_PICTURE = 0x3
FF_PROFILE_HEVC_REXT = 0x4
FF_PROFILE_JPEG2000_CSTREAM_NO_RESTRICTION = 0x8000
FF_PROFILE_JPEG2000_CSTREAM_RESTRICTION_0 = 0x1
FF_PROFILE_JPEG2000_CSTREAM_RESTRICTION_1 = 0x2
FF_PROFILE_JPEG2000_DCINEMA_2K = 0x3
FF_PROFILE_JPEG2000_DCINEMA_4K = 0x4
FF_PROFILE_KLVA_ASYNC = 0x1
FF_PROFILE_KLVA_SYNC = 0x0
FF_PROFILE_MJPEG_HUFFMAN_BASELINE_DCT = 0xc0
FF_PROFILE_MJPEG_HUFFMAN_EXTENDED_SEQUENTIAL_DCT = 0xc1
FF_PROFILE_MJPEG_HUFFMAN_LOSSLESS = 0xc3
FF_PROFILE_MJPEG_HUFFMAN_PROGRESSIVE_DCT = 0xc2
FF_PROFILE_MJPEG_JPEG_LS = 0xf7
FF_PROFILE_MPEG2_422 = 0x0
FF_PROFILE_MPEG2_AAC_HE = 0x83
FF_PROFILE_MPEG2_AAC_LOW = 0x80
FF_PROFILE_MPEG2_HIGH = 0x1
FF_PROFILE_MPEG2_MAIN = 0x4
FF_PROFILE_MPEG2_SIMPLE = 0x5
FF_PROFILE_MPEG2_SNR_SCALABLE = 0x3
FF_PROFILE_MPEG2_SS = 0x2
FF_PROFILE_MPEG4_ADVANCED_CODING = 0xb
FF_PROFILE_MPEG4_ADVANCED_CORE = 0xc
FF_PROFILE_MPEG4_ADVANCED_REAL_TIME = 0x9
FF_PROFILE_MPEG4_ADVANCED_SCALABLE_TEXTURE = 0xd
FF_PROFILE_MPEG4_ADVANCED_SIMPLE = 0xf
FF_PROFILE_MPEG4_BASIC_ANIMATED_TEXTURE = 0x7
FF_PROFILE_MPEG4_CORE = 0x2
FF_PROFILE_MPEG4_CORE_SCALABLE = 0xa
FF_PROFILE_MPEG4_HYBRID = 0x8
FF_PROFILE_MPEG4_MAIN = 0x3
FF_PROFILE_MPEG4_N_BIT = 0x4
FF_PROFILE_MPEG4_SCALABLE_TEXTURE = 0x5
FF_PROFILE_MPEG4_SIMPLE = 0x0
FF_PROFILE_MPEG4_SIMPLE_FACE_ANIMATION = 0x6
FF_PROFILE_MPEG4_SIMPLE_SCALABLE = 0x1
FF_PROFILE_MPEG4_SIMPLE_STUDIO = 0xe
FF_PROFILE_PRORES_4444 = 0x4
FF_PROFILE_PRORES_HQ = 0x3
FF_PROFILE_PRORES_LT = 0x1
FF_PROFILE_PRORES_PROXY = 0x0
FF_PROFILE_PRORES_STANDARD = 0x2
FF_PROFILE_PRORES_XQ = 0x5
FF_PROFILE_RESERVED = -0x64
FF_PROFILE_SBC_MSBC = 0x1
FF_PROFILE_UNKNOWN = -0x63
FF_PROFILE_VC1_ADVANCED = 0x3
FF_PROFILE_VC1_COMPLEX = 0x2
FF_PROFILE_VC1_MAIN = 0x1
FF_PROFILE_VC1_SIMPLE = 0x0
FF_PROFILE_VP9_0 = 0x0
FF_PROFILE_VP9_1 = 0x1
FF_PROFILE_VP9_2 = 0x2
FF_PROFILE_VP9_3 = 0x3
FF_PROFILE_VVC_MAIN_10 = 0x1
FF_PROFILE_VVC_MAIN_10_444 = 0x21
FF_QP2LAMBDA = 118
FF_QUALITY_SCALE = FF_LAMBDA_SCALE
FF_SUB_CHARENC_MODE_AUTOMATIC = 0x0
FF_SUB_CHARENC_MODE_DO_NOTHING = -0x1
FF_SUB_CHARENC_MODE_IGNORE = 0x2
FF_SUB_CHARENC_MODE_PRE_DECODER = 0x1
FF_SUB_TEXT_FMT_ASS = 0x0
FF_THREAD_FRAME = 0x1
FF_THREAD_SLICE = 0x2
LIBAVCODEC_BUILD = LIBAVCODEC_VERSION_INT
LIBAVCODEC_IDENT = "Lavc"
LIBAVCODEC_VERSION = AV_VERSION(LIBAVCODEC_VERSION_MAJOR, LIBAVCODEC_VERSION_MINOR, LIBAVCODEC_VERSION_MICRO)
LIBAVCODEC_VERSION_INT = AV_VERSION_INT(LIBAVCODEC_VERSION_MAJOR, LIBAVCODEC_VERSION_MINOR, LIBAVCODEC_VERSION_MICRO)
LIBAVCODEC_VERSION_MAJOR = 59
LIBAVCODEC_VERSION_MICRO = 0x64
LIBAVCODEC_VERSION_MINOR = 0x25
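AV_VERSION_INT packs the three version components into a single integer as (major << 16) | (minor << 8) | micro, which is what the *_VERSION_INT constants expand to. For the libavcodec version shown above (59.37.100, since 0x25 = 37 and 0x64 = 100):

```python
def av_version_int(major, minor, micro):
    # Mirrors FFmpeg's AV_VERSION_INT(a, b, c) = (a << 16 | b << 8 | c).
    return (major << 16) | (minor << 8) | micro

LIBAVCODEC_VERSION_INT = av_version_int(59, 0x25, 0x64)
assert LIBAVCODEC_VERSION_INT == 0x3B2564

# The components can be unpacked back out:
assert LIBAVCODEC_VERSION_INT >> 16 == 59
assert (LIBAVCODEC_VERSION_INT >> 8) & 0xFF == 0x25
assert LIBAVCODEC_VERSION_INT & 0xFF == 0x64
```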
LIBAVDEVICE_BUILD = LIBAVDEVICE_VERSION_INT
LIBAVDEVICE_IDENT = "Lavd" AV_STRINGIFY(LIBAVDEVICE_VERSION)
LIBAVDEVICE_VERSION = AV_VERSION(LIBAVDEVICE_VERSION_MAJOR, LIBAVDEVICE_VERSION_MINOR, LIBAVDEVICE_VERSION_MICRO)
LIBAVDEVICE_VERSION_INT = AV_VERSION_INT(LIBAVDEVICE_VERSION_MAJOR, LIBAVDEVICE_VERSION_MINOR, LIBAVDEVICE_VERSION_MICRO)
LIBAVDEVICE_VERSION_MAJOR = 59
LIBAVDEVICE_VERSION_MICRO = 100
LIBAVDEVICE_VERSION_MINOR = 7
LIBAVFILTER_BUILD = LIBAVFILTER_VERSION_INT
LIBAVFILTER_IDENT = "Lavfi"
LIBAVFILTER_VERSION = AV_VERSION(LIBAVFILTER_VERSION_MAJOR, LIBAVFILTER_VERSION_MINOR, LIBAVFILTER_VERSION_MICRO)
LIBAVFILTER_VERSION_INT = AV_VERSION_INT(LIBAVFILTER_VERSION_MAJOR, LIBAVFILTER_VERSION_MINOR, LIBAVFILTER_VERSION_MICRO)
LIBAVFILTER_VERSION_MAJOR = 0x8
LIBAVFILTER_VERSION_MICRO = 0x64
LIBAVFILTER_VERSION_MINOR = 0x2c
LIBAVFORMAT_BUILD = LIBAVFORMAT_VERSION_INT
LIBAVFORMAT_IDENT = "Lavf" AV_STRINGIFY(LIBAVFORMAT_VERSION)
LIBAVFORMAT_VERSION = AV_VERSION(LIBAVFORMAT_VERSION_MAJOR, LIBAVFORMAT_VERSION_MINOR, LIBAVFORMAT_VERSION_MICRO)
LIBAVFORMAT_VERSION_INT = AV_VERSION_INT(LIBAVFORMAT_VERSION_MAJOR, LIBAVFORMAT_VERSION_MINOR, LIBAVFORMAT_VERSION_MICRO)
LIBAVFORMAT_VERSION_MAJOR = 59
LIBAVFORMAT_VERSION_MICRO = 100
LIBAVFORMAT_VERSION_MINOR = 27
LIBAVUTIL_BUILD = LIBAVUTIL_VERSION_INT
LIBAVUTIL_IDENT = "Lavu" AV_STRINGIFY(LIBAVUTIL_VERSION)
LIBAVUTIL_VERSION = AV_VERSION(LIBAVUTIL_VERSION_MAJOR, LIBAVUTIL_VERSION_MINOR, LIBAVUTIL_VERSION_MICRO)
LIBAVUTIL_VERSION_INT = AV_VERSION_INT(LIBAVUTIL_VERSION_MAJOR, LIBAVUTIL_VERSION_MINOR, LIBAVUTIL_VERSION_MICRO)
LIBAVUTIL_VERSION_MAJOR = 57
LIBAVUTIL_VERSION_MICRO = 100
LIBAVUTIL_VERSION_MINOR = 28
LIBPOSTPROC_BUILD = LIBPOSTPROC_VERSION_INT
LIBPOSTPROC_IDENT = "postproc"
LIBPOSTPROC_VERSION = AV_VERSION(LIBPOSTPROC_VERSION_MAJOR, LIBPOSTPROC_VERSION_MINOR, LIBPOSTPROC_VERSION_MICRO)
LIBPOSTPROC_VERSION_INT = AV_VERSION_INT(LIBPOSTPROC_VERSION_MAJOR, LIBPOSTPROC_VERSION_MINOR, LIBPOSTPROC_VERSION_MICRO)
LIBPOSTPROC_VERSION_MAJOR = 0x38
LIBPOSTPROC_VERSION_MICRO = 0x64
LIBPOSTPROC_VERSION_MINOR = 0x6
LIBSWRESAMPLE_BUILD = LIBSWRESAMPLE_VERSION_INT
LIBSWRESAMPLE_IDENT = "SwR"
LIBSWRESAMPLE_VERSION = AV_VERSION(LIBSWRESAMPLE_VERSION_MAJOR, LIBSWRESAMPLE_VERSION_MINOR, LIBSWRESAMPLE_VERSION_MICRO)
LIBSWRESAMPLE_VERSION_INT = AV_VERSION_INT(LIBSWRESAMPLE_VERSION_MAJOR, LIBSWRESAMPLE_VERSION_MINOR, LIBSWRESAMPLE_VERSION_MICRO)
LIBSWRESAMPLE_VERSION_MAJOR = 0x4
LIBSWRESAMPLE_VERSION_MICRO = 0x64
LIBSWRESAMPLE_VERSION_MINOR = 0x7
LIBSWSCALE_BUILD = LIBSWSCALE_VERSION_INT
LIBSWSCALE_IDENT = "SwS"
LIBSWSCALE_VERSION = AV_VERSION(LIBSWSCALE_VERSION_MAJOR, LIBSWSCALE_VERSION_MINOR, LIBSWSCALE_VERSION_MICRO)
LIBSWSCALE_VERSION_INT = AV_VERSION_INT(LIBSWSCALE_VERSION_MAJOR, LIBSWSCALE_VERSION_MINOR, LIBSWSCALE_VERSION_MICRO)
LIBSWSCALE_VERSION_MAJOR = 0x6
LIBSWSCALE_VERSION_MICRO = 0x64
LIBSWSCALE_VERSION_MINOR = 0x7
M_E = 2.7182818284590452354
M_LN10 = 2.30258509299404568402
M_LN2 = 0.69314718055994530942
M_LOG2_10 = 3.32192809488736234787
M_PHI = 1.61803398874989484820
M_PI = 3.14159265358979323846
M_PI_2 = 1.57079632679489661923
M_SQRT1_2 = 0.70710678118654752440
M_SQRT2 = 1.41421356237309504880
PARSER_FLAG_COMPLETE_FRAMES = 0x1
PARSER_FLAG_FETCHED_OFFSET = 0x4
PARSER_FLAG_ONCE = 0x2
PARSER_FLAG_USE_CODEC_TS = 0x1000
PP_CPU_CAPS_3DNOW = 0x40000000
PP_CPU_CAPS_ALTIVEC = 0x10000000
PP_CPU_CAPS_AUTO = 0x80000
PP_CPU_CAPS_MMX = 0x80000000U
PP_CPU_CAPS_MMX2 = 0x20000000
PP_FORMAT = 0x8
PP_FORMAT_411 = 0x2 | PP_FORMAT
PP_FORMAT_420 = 0x11 | PP_FORMAT
PP_FORMAT_422 = 0x1 | PP_FORMAT
PP_FORMAT_440 = 0x10 | PP_FORMAT
PP_FORMAT_444 = 0x0 | PP_FORMAT
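Each PP_FORMAT_* value is the PP_FORMAT marker bit OR-ed with a subsampling code; reading the values above, the low bits appear to encode the horizontal chroma shift and bit 4 the vertical shift. A sketch of that decomposition (an observation from the listing, not an official formula):

```python
PP_FORMAT = 0x8   # marker bit

def pp_format(h_shift, v_shift):
    # Bit 4 holds the vertical chroma shift, the low bits the
    # horizontal shift (as read off the constants above).
    return (v_shift << 4) | h_shift | PP_FORMAT

assert pp_format(0, 0) == 0x0 | PP_FORMAT    # 4:4:4
assert pp_format(1, 0) == 0x1 | PP_FORMAT    # 4:2:2
assert pp_format(1, 1) == 0x11 | PP_FORMAT   # 4:2:0
assert pp_format(2, 0) == 0x2 | PP_FORMAT    # 4:1:1
assert pp_format(0, 1) == 0x10 | PP_FORMAT   # 4:4:0
```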
PP_PICT_TYPE_QP2 = 0x10
PP_QUALITY_MAX = 0x6
SLICE_FLAG_ALLOW_FIELD = 0x2
SLICE_FLAG_ALLOW_PLANE = 0x4
SLICE_FLAG_CODED_ORDER = 0x1
SWR_FLAG_RESAMPLE = 0x1
SWS_ACCURATE_RND = 0x40000
SWS_AREA = 0x20
SWS_BICUBIC = 0x4
SWS_BICUBLIN = 0x40
SWS_BILINEAR = 0x2
SWS_BITEXACT = 0x80000
SWS_CS_BT2020 = 0x9
SWS_CS_DEFAULT = 0x5
SWS_CS_FCC = 0x4
SWS_CS_ITU601 = 0x5
SWS_CS_ITU624 = 0x5
SWS_CS_ITU709 = 0x1
SWS_CS_SMPTE170M = 0x5
SWS_CS_SMPTE240M = 0x7
SWS_DIRECT_BGR = 0x8000
SWS_ERROR_DIFFUSION = 0x800000
SWS_FAST_BILINEAR = 0x1
SWS_FULL_CHR_H_INP = 0x4000
SWS_FULL_CHR_H_INT = 0x2000
SWS_GAUSS = 0x80
SWS_LANCZOS = 0x200
SWS_MAX_REDUCE_CUTOFF = 0.002D
SWS_PARAM_DEFAULT = 0x1e240
SWS_POINT = 0x10
SWS_PRINT_INFO = 0x1000
SWS_SINC = 0x100
SWS_SPLINE = 0x400
SWS_SRC_V_CHR_DROP_MASK = 0x30000
SWS_SRC_V_CHR_DROP_SHIFT = 0x10
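SWS_SRC_V_CHR_DROP_MASK and SWS_SRC_V_CHR_DROP_SHIFT describe a two-bit field embedded in the swscale flags word, read out as (flags & mask) >> shift. A small sketch (the flag combination shown is hypothetical):

```python
SWS_SRC_V_CHR_DROP_MASK = 0x30000
SWS_SRC_V_CHR_DROP_SHIFT = 16
SWS_BILINEAR = 0x2

def src_v_chr_drop(flags):
    # Extract the 2-bit vertical chroma drop field from the flags word.
    return (flags & SWS_SRC_V_CHR_DROP_MASK) >> SWS_SRC_V_CHR_DROP_SHIFT

# Hypothetical flags word: bilinear scaling with a chroma drop of 2.
flags = SWS_BILINEAR | (2 << SWS_SRC_V_CHR_DROP_SHIFT)
assert src_v_chr_drop(flags) == 2
```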
SWS_X = 0x8
Message types used by avdevice_app_to_dev_control_message().
Dummy message.
Window size change message.
Repaint request message.
Request pause/play.
Request pause/play.
Request pause/play.
Volume control message.
Mute control messages.
Mute control messages.
Mute control messages.
Get volume/mute messages.
Get volume/mute messages.
Not part of ABI
@{
Stereo downmix.
See above.
See above.
See above.
See above.
See above.
See above.
See above.
See above.
See above.
See above.
See above.
Channel is empty and can be safely skipped.
Channel contains data, but its position is unknown.
Range of channels between AV_CHAN_AMBISONIC_BASE and AV_CHAN_AMBISONIC_END represent Ambisonic components using the ACN system.
Range of channels between AV_CHAN_AMBISONIC_BASE and AV_CHAN_AMBISONIC_END represent Ambisonic components using the ACN system.
Only the channel count is specified, without any further information about the channel order.
The native channel order, i.e. the channels are in the same order in which they are defined in the AVChannel enum. This supports up to 63 different channels.
The channel order does not correspond to any other predefined order and is stored as an explicit map. For example, this could be used to support layouts with 64 or more channels, or with empty/skipped (AV_CHAN_SILENCE) channels at arbitrary positions.
The audio is represented as the decomposition of the sound field into spherical harmonics. Each channel corresponds to a single expansion component. Channels are ordered according to ACN (Ambisonic Channel Number).
Location of chroma samples.
MPEG-2/4 4:2:0, H.264 default for 4:2:0
MPEG-1 4:2:0, JPEG 4:2:0, H.263 4:2:0
ITU-R 601, SMPTE 274M 296M S314M(DV 4:1:1), mpeg2 4:2:2
Not part of ABI
not part of ABI/API
Identify the syntax and semantics of the bitstream. The principle is roughly: Two decoders with the same ID can decode the same streams. Two encoders with the same ID can encode compatible streams. There may be slight deviations from the principle due to implementation details.
preferred ID for MPEG-1/2 video decoding
A dummy ID pointing at the start of audio codecs
preferred ID for decoding MPEG audio layer 1, 2 or 3
as in Berlin toast format
A dummy ID pointing at the start of subtitle codecs.
raw UTF-8 text
A dummy ID pointing at the start of various fake codecs.
Contains a timestamp estimated through the PCR of a program stream.
codec_id is not known (like AV_CODEC_ID_NONE) but lavf should attempt to identify it
_FAKE_ codec to indicate a raw MPEG-2 TS stream (only used by libavformat)
_FAKE_ codec to indicate a MPEG-4 Systems stream (only used by libavformat)
Dummy codec for streams containing only metadata information.
Passthrough codec, AVFrames wrapped in AVPacket
Chromaticity coordinates of the source primaries. These values match the ones defined by ISO/IEC 23091-2_2019 subclause 8.1 and ITU-T H.273.
also ITU-R BT1361 / IEC 61966-2-4 / SMPTE RP 177 Annex B
also FCC Title 47 Code of Federal Regulations 73.682 (a)(20)
also ITU-R BT601-6 625 / ITU-R BT1358 625 / ITU-R BT1700 625 PAL & SECAM
also ITU-R BT601-6 525 / ITU-R BT1358 525 / ITU-R BT1700 NTSC
identical to above, also called "SMPTE C" even though it uses D65
colour filters using Illuminant C
ITU-R BT2020
SMPTE ST 428-1 (CIE 1931 XYZ)
SMPTE ST 431-2 (2011) / DCI P3
SMPTE ST 432-1 (2010) / P3 D65 / Display P3
EBU Tech. 3213-E (nothing there) / one of JEDEC P22 group phosphors
Not part of ABI
Visual content value range.
Narrow or limited range content.
Full range content.
Not part of ABI
YUV colorspace type. These values match the ones defined by ISO/IEC 23091-2_2019 subclause 8.3.
order of coefficients is actually GBR, also IEC 61966-2-1 (sRGB), YZX and ST 428-1
also ITU-R BT1361 / IEC 61966-2-4 xvYCC709 / derived in SMPTE RP 177 Annex B
reserved for future use by ITU-T and ISO/IEC just like 15-255 are
FCC Title 47 Code of Federal Regulations 73.682 (a)(20)
also ITU-R BT601-6 625 / ITU-R BT1358 625 / ITU-R BT1700 625 PAL & SECAM / IEC 61966-2-4 xvYCC601
also ITU-R BT601-6 525 / ITU-R BT1358 525 / ITU-R BT1700 NTSC / functionally identical to above
derived from 170M primaries and D65 white point, 170M is derived from BT470 System M's primaries
used by Dirac / VC-2 and H.264 FRext, see ITU-T SG16
ITU-R BT2020 non-constant luminance system
ITU-R BT2020 constant luminance system
SMPTE 2085, Y'D'zD'x
Chromaticity-derived non-constant luminance system
Chromaticity-derived constant luminance system
ITU-R BT.2100-0, ICtCp
Not part of ABI
Color Transfer Characteristic. These values match the ones defined by ISO/IEC 23091-2_2019 subclause 8.2.
also ITU-R BT1361
also ITU-R BT470M / ITU-R BT1700 625 PAL & SECAM
also ITU-R BT470BG
also ITU-R BT601-6 525 or 625 / ITU-R BT1358 525 or 625 / ITU-R BT1700 NTSC
"Linear transfer characteristics"
"Logarithmic transfer characteristic (100:1 range)"
"Logarithmic transfer characteristic (100 * Sqrt(10) : 1 range)"
IEC 61966-2-4
ITU-R BT1361 Extended Colour Gamut
IEC 61966-2-1 (sRGB or sYCC)
ITU-R BT2020 for 10-bit system
ITU-R BT2020 for 12-bit system
SMPTE ST 2084 for 10-, 12-, 14- and 16-bit systems
SMPTE ST 428-1
ARIB STD-B67, known as "Hybrid log-gamma"
Not part of ABI
Message types used by avdevice_dev_to_app_control_message().
Dummy message.
Create window buffer message.
Prepare window buffer message.
Display window buffer message.
Destroy window buffer message.
Buffer fullness status messages.
Buffer fullness status messages.
Buffer readable/writable.
Buffer readable/writable.
Mute state change message.
Volume level change message.
discard nothing
discard useless packets, like zero-sized packets in AVI
discard all non-reference frames
discard all bidirectional frames
discard all non-intra frames
discard all frames except keyframes
discard all
The duration of a video can be estimated in various ways, and this enum indicates how the duration was estimated.
Duration accurately estimated from PTSes
Duration estimated from a stream with a known duration
Duration estimated from bitrate (less accurate)
stage of the initialization of the link properties (dimensions, etc)
not started
started, but incomplete
complete
@{ AVFrame is an abstraction for reference-counted raw multimedia data.
The data is the AVPanScan struct defined in libavcodec.
ATSC A53 Part 4 Closed Captions. A53 CC bitstream is stored as uint8_t in AVFrameSideData.data. The number of bytes of CC data is AVFrameSideData.size.
Stereoscopic 3d metadata. The data is the AVStereo3D struct defined in libavutil/stereo3d.h.
The data is the AVMatrixEncoding enum defined in libavutil/channel_layout.h.
Metadata relevant to a downmix procedure. The data is the AVDownmixInfo struct defined in libavutil/downmix_info.h.
ReplayGain information in the form of the AVReplayGain struct.
This side data contains a 3x3 transformation matrix describing an affine transformation that needs to be applied to the frame for correct presentation.
Active Format Description data consisting of a single byte as specified in ETSI TS 101 154 using AVActiveFormatDescription enum.
Motion vectors exported by some codecs (on demand through the export_mvs flag set in the libavcodec AVCodecContext flags2 option). The data is the AVMotionVector struct defined in libavutil/motion_vector.h.
Recommends skipping the specified number of samples. This is exported only if the "skip_manual" AVOption is set in libavcodec. This has the same format as AV_PKT_DATA_SKIP_SAMPLES.
This side data must be associated with an audio frame and corresponds to enum AVAudioServiceType defined in avcodec.h.
Mastering display metadata associated with a video frame. The payload is an AVMasteringDisplayMetadata type and contains information about the mastering display color volume.
The GOP timecode in 25 bit timecode format. Data format is 64-bit integer. This is set on the first frame of a GOP that has a temporal reference of 0.
The data represents the AVSphericalMapping structure defined in libavutil/spherical.h.
Content light level (based on CTA-861.3). This payload contains data in the form of the AVContentLightMetadata struct.
The data contains an ICC profile as an opaque octet buffer following the format described by ISO 15076-1 with an optional name defined in the metadata key entry "name".
Timecode which conforms to SMPTE ST 12-1. The data is an array of 4 uint32_t where the first uint32_t describes how many (1-3) of the other timecodes are used. The timecode format is described in the documentation of av_timecode_get_smpte_from_framenum() function in libavutil/timecode.h.
HDR dynamic metadata associated with a video frame. The payload is an AVDynamicHDRPlus type and contains information for color volume transform - application 4 of SMPTE 2094-40:2016 standard.
Regions Of Interest: the data is an array of AVRegionOfInterest type; the number of array elements is implied by AVFrameSideData.size / AVRegionOfInterest.self_size.
Encoding parameters for a video frame, as described by AVVideoEncParams.
User data unregistered metadata associated with a video frame. This is the H.26[45] UDU SEI message, and shouldn't be used for any other purpose. The data is stored as uint8_t in AVFrameSideData.data, which is 16 bytes of uuid_iso_iec_11578 followed by AVFrameSideData.size - 16 bytes of user_data_payload_byte.
Film grain parameters for a frame, described by AVFilmGrainParams. Must be present for every frame which should have film grain applied.
Bounding boxes for object detection and classification, as described by AVDetectionBBoxHeader.
Dolby Vision RPU raw data, suitable for passing to x265 or other libraries. Array of uint8_t, with NAL emulation bytes intact.
Parsed Dolby Vision metadata, suitable for passing to a software implementation. The payload is the AVDOVIMetadata struct defined in libavutil/dovi_meta.h.
HDR Vivid dynamic metadata associated with a video frame. The payload is an AVDynamicHDRVivid type and contains information for color volume transform - CUVA 005.1-2021.
Option for overlapping elliptical pixel selectors in an image.
Transfer the data from the queried hw frame.
Transfer the data to the queried hw frame.
Different data types that can be returned via the AVIO write_data_type callback.
Header data; this needs to be present for the stream to be decodeable.
A point in the output bytestream where a decoder can start decoding (i.e. a keyframe). A demuxer/decoder given the data flagged with AVIO_DATA_MARKER_HEADER, followed by any AVIO_DATA_MARKER_SYNC_POINT, should give decodeable results.
A point in the output bytestream where a demuxer can start parsing (for non self synchronizing bytestream formats). That is, any non-keyframe packet start point.
This is any unlabelled data. It can be a muxer not marking any positions at all, an actual boundary/sync point that the muxer chooses not to mark, or a later part of a packet/fragment that is split into multiple write callbacks due to limited IO buffer size.
Trailer data, which doesn't contain actual content but is only used for finalizing the output file.
A point in the output bytestream where the underlying AVIOContext might flush the buffer depending on latency or buffering requirements. Typically means the end of a packet.
Directory entry types.
Media Type
Usually treated as AVMEDIA_TYPE_DATA
Opaque data information, usually continuous
Opaque data information, usually sparse
@{ AVOptions provide a generic system to declare options on arbitrary structs ("objects"). An option can have a help text, a type and a range of possible values. Options may then be enumerated, read and written to.
offset must point to a pointer immediately followed by an int for the length
offset must point to two consecutive integers
offset must point to AVRational
Types and functions for working with AVPacket. @{
An AV_PKT_DATA_PALETTE side data packet contains exactly AVPALETTE_SIZE bytes worth of palette. This side data signals that a new palette is present.
The AV_PKT_DATA_NEW_EXTRADATA is used to notify the codec or the format that the extradata buffer was changed and the receiving side should act upon it appropriately. The new extradata is embedded in the side data buffer and should be immediately used for processing the current frame or packet.
An AV_PKT_DATA_PARAM_CHANGE side data packet is laid out as follows:
An AV_PKT_DATA_H263_MB_INFO side data packet contains a number of structures with info about macroblocks relevant to splitting the packet into smaller packets on macroblock edges (e.g. as for RFC 2190). That is, it does not necessarily contain info about all macroblocks, as long as the distance between macroblocks in the info is smaller than the target payload size. Each MB info structure is 12 bytes, and is laid out as follows:
This side data should be associated with an audio stream and contains ReplayGain information in form of the AVReplayGain struct.
This side data contains a 3x3 transformation matrix describing an affine transformation that needs to be applied to the decoded video frames for correct presentation.
This side data should be associated with a video stream and contains Stereoscopic 3D information in form of the AVStereo3D struct.
This side data should be associated with an audio stream and corresponds to enum AVAudioServiceType.
This side data contains quality related information from the encoder.
This side data contains an integer value representing the stream index of a "fallback" track. A fallback track indicates an alternate track to use when the current track cannot be decoded for some reason, e.g. no decoder being available for the codec.
This side data corresponds to the AVCPBProperties struct.
Recommends skipping the specified number of samples
An AV_PKT_DATA_JP_DUALMONO side data packet indicates that the packet may contain "dual mono" audio specific to Japanese DTV and, if so, recommends that only the selected channel be used.
A list of zero-terminated key/value strings. There is no end marker for the list, so the side data size must be used to determine where it ends.
Subtitle event position
Data found in the BlockAdditional element of the Matroska container. There is no end marker for the data, so the side data size must be used to recognize the end. The data consists of an 8-byte id (as found in BlockAddId) followed by the data itself.
The optional first identifier line of a WebVTT cue.
The optional settings (rendering instructions) that immediately follow the timestamp specifier of a WebVTT cue.
A list of zero-terminated key/value strings. There is no end marker for the list, so the side data size must be used to determine where it ends. This side data includes updated metadata which appeared in the stream.
MPEGTS stream ID as uint8_t; this is required to pass the stream ID information from the demuxer to the corresponding muxer.
Mastering display metadata (based on SMPTE-2086:2014). This metadata should be associated with a video stream and contains data in the form of the AVMasteringDisplayMetadata struct.
This side data should be associated with a video stream and corresponds to the AVSphericalMapping structure.
Content light level (based on CTA-861.3). This metadata should be associated with a video stream and contains data in the form of the AVContentLightMetadata struct.
ATSC A53 Part 4 Closed Captions. This metadata should be associated with a video stream. A53 CC bitstream is stored as uint8_t in AVPacketSideData.data. The number of bytes of CC data is AVPacketSideData.size.
This side data is encryption initialization data. The format is not part of ABI, use av_encryption_init_info_* methods to access.
This side data contains encryption info for how to decrypt the packet. The format is not part of ABI, use av_encryption_info_* methods to access.
Active Format Description data consisting of a single byte as specified in ETSI TS 101 154 using AVActiveFormatDescription enum.
Producer Reference Time data corresponding to the AVProducerReferenceTime struct, usually exported by some encoders (on demand through the prft flag set in the AVCodecContext export_side_data field).
ICC profile data consisting of an opaque octet buffer following the format described by ISO 15076-1.
DOVI configuration ref: dolby-vision-bitstreams-within-the-iso-base-media-file-format-v2.1.2, section 2.2 dolby-vision-bitstreams-in-mpeg-2-transport-stream-multiplex-v1.2, section 3.3 Tags are stored in struct AVDOVIDecoderConfigurationRecord.
Timecode which conforms to SMPTE ST 12-1:2014. The data is an array of 4 uint32_t where the first uint32_t describes how many (1-3) of the other timecodes are used. The timecode format is described in the documentation of av_timecode_get_smpte_from_framenum() function in libavutil/timecode.h.
HDR10+ dynamic metadata associated with a video frame. The metadata is in the form of the AVDynamicHDRPlus struct and contains information for color volume transform - application 4 of SMPTE 2094-40:2016 standard.
The number of side data types. This is not part of the public API/ABI in the sense that it may change when new side data types are added. This must stay the last enum value. If its value becomes huge, some code using it needs to be updated as it assumes it to be smaller than other limits.
@{
@} @}
Undefined
Intra
Predicted
Bi-dir predicted
S(GMC)-VOP MPEG-4
Switching Intra
Switching Predicted
BI type
Pixel format.
planar YUV 4:2:0, 12bpp, (1 Cr & Cb sample per 2x2 Y samples)
packed YUV 4:2:2, 16bpp, Y0 Cb Y1 Cr
packed RGB 8:8:8, 24bpp, RGBRGB...
packed RGB 8:8:8, 24bpp, BGRBGR...
planar YUV 4:2:2, 16bpp, (1 Cr & Cb sample per 2x1 Y samples)
planar YUV 4:4:4, 24bpp, (1 Cr & Cb sample per 1x1 Y samples)
planar YUV 4:1:0, 9bpp, (1 Cr & Cb sample per 4x4 Y samples)
planar YUV 4:1:1, 12bpp, (1 Cr & Cb sample per 4x1 Y samples)
Y , 8bpp
Y , 1bpp, 0 is white, 1 is black, in each byte pixels are ordered from the msb to the lsb
Y , 1bpp, 0 is black, 1 is white, in each byte pixels are ordered from the msb to the lsb
8 bits with AV_PIX_FMT_RGB32 palette
planar YUV 4:2:0, 12bpp, full scale (JPEG), deprecated in favor of AV_PIX_FMT_YUV420P and setting color_range
planar YUV 4:2:2, 16bpp, full scale (JPEG), deprecated in favor of AV_PIX_FMT_YUV422P and setting color_range
planar YUV 4:4:4, 24bpp, full scale (JPEG), deprecated in favor of AV_PIX_FMT_YUV444P and setting color_range
packed YUV 4:2:2, 16bpp, Cb Y0 Cr Y1
packed YUV 4:1:1, 12bpp, Cb Y0 Y1 Cr Y2 Y3
packed RGB 3:3:2, 8bpp, (msb)2B 3G 3R(lsb)
packed RGB 1:2:1 bitstream, 4bpp, (msb)1B 2G 1R(lsb), a byte contains two pixels, the first pixel in the byte is the one composed by the 4 msb bits
packed RGB 1:2:1, 8bpp, (msb)1B 2G 1R(lsb)
packed RGB 3:3:2, 8bpp, (msb)2R 3G 3B(lsb)
packed RGB 1:2:1 bitstream, 4bpp, (msb)1R 2G 1B(lsb), a byte contains two pixels, the first pixel in the byte is the one composed by the 4 msb bits
packed RGB 1:2:1, 8bpp, (msb)1R 2G 1B(lsb)
planar YUV 4:2:0, 12bpp, 1 plane for Y and 1 plane for the UV components, which are interleaved (first byte U and the following byte V)
as above, but U and V bytes are swapped
packed ARGB 8:8:8:8, 32bpp, ARGBARGB...
packed RGBA 8:8:8:8, 32bpp, RGBARGBA...
packed ABGR 8:8:8:8, 32bpp, ABGRABGR...
packed BGRA 8:8:8:8, 32bpp, BGRABGRA...
Y , 16bpp, big-endian
Y , 16bpp, little-endian
planar YUV 4:4:0 (1 Cr & Cb sample per 1x2 Y samples)
planar YUV 4:4:0 full scale (JPEG), deprecated in favor of AV_PIX_FMT_YUV440P and setting color_range
planar YUV 4:2:0, 20bpp, (1 Cr & Cb sample per 2x2 Y & A samples)
packed RGB 16:16:16, 48bpp, 16R, 16G, 16B, the 2-byte value for each R/G/B component is stored as big-endian
packed RGB 16:16:16, 48bpp, 16R, 16G, 16B, the 2-byte value for each R/G/B component is stored as little-endian
packed RGB 5:6:5, 16bpp, (msb) 5R 6G 5B(lsb), big-endian
packed RGB 5:6:5, 16bpp, (msb) 5R 6G 5B(lsb), little-endian
packed RGB 5:5:5, 16bpp, (msb)1X 5R 5G 5B(lsb), big-endian , X=unused/undefined
packed RGB 5:5:5, 16bpp, (msb)1X 5R 5G 5B(lsb), little-endian, X=unused/undefined
packed BGR 5:6:5, 16bpp, (msb) 5B 6G 5R(lsb), big-endian
packed BGR 5:6:5, 16bpp, (msb) 5B 6G 5R(lsb), little-endian
packed BGR 5:5:5, 16bpp, (msb)1X 5B 5G 5R(lsb), big-endian , X=unused/undefined
packed BGR 5:5:5, 16bpp, (msb)1X 5B 5G 5R(lsb), little-endian, X=unused/undefined
Hardware acceleration through VA-API, data[3] contains a VASurfaceID.
planar YUV 4:2:0, 24bpp, (1 Cr & Cb sample per 2x2 Y samples), little-endian
planar YUV 4:2:0, 24bpp, (1 Cr & Cb sample per 2x2 Y samples), big-endian
planar YUV 4:2:2, 32bpp, (1 Cr & Cb sample per 2x1 Y samples), little-endian
planar YUV 4:2:2, 32bpp, (1 Cr & Cb sample per 2x1 Y samples), big-endian
planar YUV 4:4:4, 48bpp, (1 Cr & Cb sample per 1x1 Y samples), little-endian
planar YUV 4:4:4, 48bpp, (1 Cr & Cb sample per 1x1 Y samples), big-endian
HW decoding through DXVA2, Picture.data[3] contains a LPDIRECT3DSURFACE9 pointer
packed RGB 4:4:4, 16bpp, (msb)4X 4R 4G 4B(lsb), little-endian, X=unused/undefined
packed RGB 4:4:4, 16bpp, (msb)4X 4R 4G 4B(lsb), big-endian, X=unused/undefined
packed BGR 4:4:4, 16bpp, (msb)4X 4B 4G 4R(lsb), little-endian, X=unused/undefined
packed BGR 4:4:4, 16bpp, (msb)4X 4B 4G 4R(lsb), big-endian, X=unused/undefined
8 bits gray, 8 bits alpha
alias for AV_PIX_FMT_YA8
alias for AV_PIX_FMT_YA8
packed RGB 16:16:16, 48bpp, 16B, 16G, 16R, the 2-byte value for each R/G/B component is stored as big-endian
packed RGB 16:16:16, 48bpp, 16B, 16G, 16R, the 2-byte value for each R/G/B component is stored as little-endian
planar YUV 4:2:0, 13.5bpp, (1 Cr & Cb sample per 2x2 Y samples), big-endian
planar YUV 4:2:0, 13.5bpp, (1 Cr & Cb sample per 2x2 Y samples), little-endian
planar YUV 4:2:0, 15bpp, (1 Cr & Cb sample per 2x2 Y samples), big-endian
planar YUV 4:2:0, 15bpp, (1 Cr & Cb sample per 2x2 Y samples), little-endian
planar YUV 4:2:2, 20bpp, (1 Cr & Cb sample per 2x1 Y samples), big-endian
planar YUV 4:2:2, 20bpp, (1 Cr & Cb sample per 2x1 Y samples), little-endian
planar YUV 4:4:4, 27bpp, (1 Cr & Cb sample per 1x1 Y samples), big-endian
planar YUV 4:4:4, 27bpp, (1 Cr & Cb sample per 1x1 Y samples), little-endian
planar YUV 4:4:4, 30bpp, (1 Cr & Cb sample per 1x1 Y samples), big-endian
planar YUV 4:4:4, 30bpp, (1 Cr & Cb sample per 1x1 Y samples), little-endian
planar YUV 4:2:2, 18bpp, (1 Cr & Cb sample per 2x1 Y samples), big-endian
planar YUV 4:2:2, 18bpp, (1 Cr & Cb sample per 2x1 Y samples), little-endian
planar GBR 4:4:4 24bpp
planar GBR 4:4:4 27bpp, big-endian
planar GBR 4:4:4 27bpp, little-endian
planar GBR 4:4:4 30bpp, big-endian
planar GBR 4:4:4 30bpp, little-endian
planar GBR 4:4:4 48bpp, big-endian
planar GBR 4:4:4 48bpp, little-endian
planar YUV 4:2:2 24bpp, (1 Cr & Cb sample per 2x1 Y & A samples)
planar YUV 4:4:4 32bpp, (1 Cr & Cb sample per 1x1 Y & A samples)
planar YUV 4:2:0 22.5bpp, (1 Cr & Cb sample per 2x2 Y & A samples), big-endian
planar YUV 4:2:0 22.5bpp, (1 Cr & Cb sample per 2x2 Y & A samples), little-endian
planar YUV 4:2:2 27bpp, (1 Cr & Cb sample per 2x1 Y & A samples), big-endian
planar YUV 4:2:2 27bpp, (1 Cr & Cb sample per 2x1 Y & A samples), little-endian
planar YUV 4:4:4 36bpp, (1 Cr & Cb sample per 1x1 Y & A samples), big-endian
planar YUV 4:4:4 36bpp, (1 Cr & Cb sample per 1x1 Y & A samples), little-endian
planar YUV 4:2:0 25bpp, (1 Cr & Cb sample per 2x2 Y & A samples, big-endian)
planar YUV 4:2:0 25bpp, (1 Cr & Cb sample per 2x2 Y & A samples, little-endian)
planar YUV 4:2:2 30bpp, (1 Cr & Cb sample per 2x1 Y & A samples, big-endian)
planar YUV 4:2:2 30bpp, (1 Cr & Cb sample per 2x1 Y & A samples, little-endian)
planar YUV 4:4:4 40bpp, (1 Cr & Cb sample per 1x1 Y & A samples, big-endian)
planar YUV 4:4:4 40bpp, (1 Cr & Cb sample per 1x1 Y & A samples, little-endian)
planar YUV 4:2:0 40bpp, (1 Cr & Cb sample per 2x2 Y & A samples, big-endian)
planar YUV 4:2:0 40bpp, (1 Cr & Cb sample per 2x2 Y & A samples, little-endian)
planar YUV 4:2:2 48bpp, (1 Cr & Cb sample per 2x1 Y & A samples, big-endian)
planar YUV 4:2:2 48bpp, (1 Cr & Cb sample per 2x1 Y & A samples, little-endian)
planar YUV 4:4:4 64bpp, (1 Cr & Cb sample per 1x1 Y & A samples, big-endian)
planar YUV 4:4:4 64bpp, (1 Cr & Cb sample per 1x1 Y & A samples, little-endian)
HW acceleration through VDPAU, Picture.data[3] contains a VdpVideoSurface
packed XYZ 4:4:4, 36 bpp, (msb) 12X, 12Y, 12Z (lsb), the 2-byte value for each X/Y/Z is stored as little-endian, the 4 lower bits are set to 0
packed XYZ 4:4:4, 36 bpp, (msb) 12X, 12Y, 12Z (lsb), the 2-byte value for each X/Y/Z is stored as big-endian, the 4 lower bits are set to 0
interleaved chroma YUV 4:2:2, 16bpp, (1 Cr & Cb sample per 2x1 Y samples)
interleaved chroma YUV 4:2:2, 20bpp, (1 Cr & Cb sample per 2x1 Y samples), little-endian
interleaved chroma YUV 4:2:2, 20bpp, (1 Cr & Cb sample per 2x1 Y samples), big-endian
packed RGBA 16:16:16:16, 64bpp, 16R, 16G, 16B, 16A, the 2-byte value for each R/G/B/A component is stored as big-endian
packed RGBA 16:16:16:16, 64bpp, 16R, 16G, 16B, 16A, the 2-byte value for each R/G/B/A component is stored as little-endian
packed RGBA 16:16:16:16, 64bpp, 16B, 16G, 16R, 16A, the 2-byte value for each R/G/B/A component is stored as big-endian
packed RGBA 16:16:16:16, 64bpp, 16B, 16G, 16R, 16A, the 2-byte value for each R/G/B/A component is stored as little-endian
packed YUV 4:2:2, 16bpp, Y0 Cr Y1 Cb
16 bits gray, 16 bits alpha (big-endian)
16 bits gray, 16 bits alpha (little-endian)
planar GBRA 4:4:4:4 32bpp
planar GBRA 4:4:4:4 64bpp, big-endian
planar GBRA 4:4:4:4 64bpp, little-endian
HW acceleration through QSV, data[3] contains a pointer to the mfxFrameSurface1 structure.
HW acceleration through MMAL, data[3] contains a pointer to the MMAL_BUFFER_HEADER_T structure.
HW decoding through Direct3D11 via old API, Picture.data[3] contains a ID3D11VideoDecoderOutputView pointer
HW acceleration through CUDA. data[i] contain CUdeviceptr pointers exactly as for system memory frames.
packed RGB 8:8:8, 32bpp, XRGBXRGB... X=unused/undefined
packed RGB 8:8:8, 32bpp, RGBXRGBX... X=unused/undefined
packed BGR 8:8:8, 32bpp, XBGRXBGR... X=unused/undefined
packed BGR 8:8:8, 32bpp, BGRXBGRX... X=unused/undefined
planar YUV 4:2:0,18bpp, (1 Cr & Cb sample per 2x2 Y samples), big-endian
planar YUV 4:2:0,18bpp, (1 Cr & Cb sample per 2x2 Y samples), little-endian
planar YUV 4:2:0,21bpp, (1 Cr & Cb sample per 2x2 Y samples), big-endian
planar YUV 4:2:0,21bpp, (1 Cr & Cb sample per 2x2 Y samples), little-endian
planar YUV 4:2:2,24bpp, (1 Cr & Cb sample per 2x1 Y samples), big-endian
planar YUV 4:2:2,24bpp, (1 Cr & Cb sample per 2x1 Y samples), little-endian
planar YUV 4:2:2,28bpp, (1 Cr & Cb sample per 2x1 Y samples), big-endian
planar YUV 4:2:2,28bpp, (1 Cr & Cb sample per 2x1 Y samples), little-endian
planar YUV 4:4:4,36bpp, (1 Cr & Cb sample per 1x1 Y samples), big-endian
planar YUV 4:4:4,36bpp, (1 Cr & Cb sample per 1x1 Y samples), little-endian
planar YUV 4:4:4,42bpp, (1 Cr & Cb sample per 1x1 Y samples), big-endian
planar YUV 4:4:4,42bpp, (1 Cr & Cb sample per 1x1 Y samples), little-endian
planar GBR 4:4:4 36bpp, big-endian
planar GBR 4:4:4 36bpp, little-endian
planar GBR 4:4:4 42bpp, big-endian
planar GBR 4:4:4 42bpp, little-endian
planar YUV 4:1:1, 12bpp, (1 Cr & Cb sample per 4x1 Y samples) full scale (JPEG), deprecated in favor of AV_PIX_FMT_YUV411P and setting color_range
bayer, BGBG..(odd line), GRGR..(even line), 8-bit samples
bayer, RGRG..(odd line), GBGB..(even line), 8-bit samples
bayer, GBGB..(odd line), RGRG..(even line), 8-bit samples
bayer, GRGR..(odd line), BGBG..(even line), 8-bit samples
bayer, BGBG..(odd line), GRGR..(even line), 16-bit samples, little-endian
bayer, BGBG..(odd line), GRGR..(even line), 16-bit samples, big-endian
bayer, RGRG..(odd line), GBGB..(even line), 16-bit samples, little-endian
bayer, RGRG..(odd line), GBGB..(even line), 16-bit samples, big-endian
bayer, GBGB..(odd line), RGRG..(even line), 16-bit samples, little-endian
bayer, GBGB..(odd line), RGRG..(even line), 16-bit samples, big-endian
bayer, GRGR..(odd line), BGBG..(even line), 16-bit samples, little-endian
bayer, GRGR..(odd line), BGBG..(even line), 16-bit samples, big-endian
XVideo Motion Acceleration via common packet passing
planar YUV 4:4:0,20bpp, (1 Cr & Cb sample per 1x2 Y samples), little-endian
planar YUV 4:4:0,20bpp, (1 Cr & Cb sample per 1x2 Y samples), big-endian
planar YUV 4:4:0,24bpp, (1 Cr & Cb sample per 1x2 Y samples), little-endian
planar YUV 4:4:0,24bpp, (1 Cr & Cb sample per 1x2 Y samples), big-endian
packed AYUV 4:4:4,64bpp (1 Cr & Cb sample per 1x1 Y & A samples), little-endian
packed AYUV 4:4:4,64bpp (1 Cr & Cb sample per 1x1 Y & A samples), big-endian
hardware decoding through Videotoolbox
like NV12, with 10bpp per component, data in the high bits, zeros in the low bits, little-endian
like NV12, with 10bpp per component, data in the high bits, zeros in the low bits, big-endian
planar GBR 4:4:4:4 48bpp, big-endian
planar GBR 4:4:4:4 48bpp, little-endian
planar GBR 4:4:4:4 40bpp, big-endian
planar GBR 4:4:4:4 40bpp, little-endian
hardware decoding through MediaCodec
Y , 12bpp, big-endian
Y , 12bpp, little-endian
Y , 10bpp, big-endian
Y , 10bpp, little-endian
like NV12, with 16bpp per component, little-endian
like NV12, with 16bpp per component, big-endian
Hardware surfaces for Direct3D11.
Y , 9bpp, big-endian
Y , 9bpp, little-endian
IEEE-754 single precision planar GBR 4:4:4, 96bpp, big-endian
IEEE-754 single precision planar GBR 4:4:4, 96bpp, little-endian
IEEE-754 single precision planar GBRA 4:4:4:4, 128bpp, big-endian
IEEE-754 single precision planar GBRA 4:4:4:4, 128bpp, little-endian
DRM-managed buffers exposed through PRIME buffer sharing.
Hardware surfaces for OpenCL.
Y , 14bpp, big-endian
Y , 14bpp, little-endian
IEEE-754 single precision Y, 32bpp, big-endian
IEEE-754 single precision Y, 32bpp, little-endian
planar YUV 4:2:2,24bpp, (1 Cr & Cb sample per 2x1 Y samples), 12b alpha, big-endian
planar YUV 4:2:2,24bpp, (1 Cr & Cb sample per 2x1 Y samples), 12b alpha, little-endian
planar YUV 4:4:4,36bpp, (1 Cr & Cb sample per 1x1 Y samples), 12b alpha, big-endian
planar YUV 4:4:4,36bpp, (1 Cr & Cb sample per 1x1 Y samples), 12b alpha, little-endian
planar YUV 4:4:4, 24bpp, 1 plane for Y and 1 plane for the UV components, which are interleaved (first byte U and the following byte V)
as above, but U and V bytes are swapped
Vulkan hardware images.
packed YUV 4:2:2 like YUYV422, 20bpp, data in the high bits, big-endian
packed YUV 4:2:2 like YUYV422, 20bpp, data in the high bits, little-endian
packed RGB 10:10:10, 30bpp, (msb)2X 10R 10G 10B(lsb), little-endian, X=unused/undefined
packed RGB 10:10:10, 30bpp, (msb)2X 10R 10G 10B(lsb), big-endian, X=unused/undefined
packed BGR 10:10:10, 30bpp, (msb)2X 10B 10G 10R(lsb), little-endian, X=unused/undefined
packed BGR 10:10:10, 30bpp, (msb)2X 10B 10G 10R(lsb), big-endian, X=unused/undefined
interleaved chroma YUV 4:2:2, 20bpp, data in the high bits, big-endian
interleaved chroma YUV 4:2:2, 20bpp, data in the high bits, little-endian
interleaved chroma YUV 4:4:4, 30bpp, data in the high bits, big-endian
interleaved chroma YUV 4:4:4, 30bpp, data in the high bits, little-endian
interleaved chroma YUV 4:2:2, 32bpp, big-endian
interleaved chroma YUV 4:2:2, 32bpp, little-endian
interleaved chroma YUV 4:4:4, 48bpp, big-endian
interleaved chroma YUV 4:4:4, 48bpp, little-endian
number of pixel formats, DO NOT USE THIS if you want to link with shared libav* because the number of formats might differ between versions
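The semi-planar layouts listed above (NV12 and friends) can be sketched as follows — a rough illustration of the plane arithmetic, not FFmpeg API code:

```python
# NV12: one full-resolution 8-bit Y plane, followed by one half-resolution
# plane of interleaved U/V byte pairs (one pair per 2x2 block of Y samples).

def nv12_plane_sizes(width: int, height: int) -> tuple[int, int]:
    """Return (y_plane_bytes, uv_plane_bytes) for an even-sized NV12 frame."""
    y_size = width * height                      # 8 bits per luma sample
    uv_size = (width // 2) * (height // 2) * 2   # one U+V byte pair per 2x2 block
    return y_size, uv_size

y, uv = nv12_plane_sizes(1920, 1080)
# total works out to 12 bits per pixel, matching the "12bpp" note for NV12
assert (y + uv) * 8 // (1920 * 1080) == 12
```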
Rounding methods.
Round toward zero.
Round away from zero.
Round toward -infinity.
Round toward +infinity.
Round to nearest and halfway cases away from zero.
Flag telling rescaling functions to pass `INT64_MIN`/`MAX` through unchanged, avoiding special cases for #AV_NOPTS_VALUE.
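The five rounding methods above behave as in this hypothetical sketch (the names mirror AVRounding; this is not the libavutil implementation):

```python
import math

def av_round(a: int, b: int, mode: str) -> int:
    """Round the rational a/b according to one of the AVRounding-style modes."""
    q = a / b
    if mode == "zero":                       # round toward zero
        return math.trunc(q)
    if mode == "inf":                        # round away from zero
        return math.ceil(q) if q >= 0 else math.floor(q)
    if mode == "down":                       # round toward -infinity
        return math.floor(q)
    if mode == "up":                         # round toward +infinity
        return math.ceil(q)
    if mode == "near_inf":                   # nearest, halfway cases away from zero
        return math.floor(q + 0.5) if q >= 0 else math.ceil(q - 0.5)
    raise ValueError(mode)

assert av_round(7, 2, "zero") == 3       # 3.5 truncated toward zero
assert av_round(-7, 2, "inf") == -4      # -3.5 rounded away from zero
assert av_round(7, 2, "near_inf") == 4   # halfway case goes away from zero
```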
Audio sample formats
unsigned 8 bits
signed 16 bits
signed 32 bits
float
double
unsigned 8 bits, planar
signed 16 bits, planar
signed 32 bits, planar
float, planar
double, planar
signed 64 bits
signed 64 bits, planar
Number of sample formats. DO NOT USE if linking dynamically
@}
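The packed vs planar distinction in the sample formats above can be sketched like this — packed stores samples as L0 R0 L1 R1 ..., while planar keeps one contiguous array per channel:

```python
def interleave(planes):
    """Convert planar channel arrays into one packed (interleaved) array."""
    return [sample for frame in zip(*planes) for sample in frame]

left = [1, 2, 3]     # one plane per channel (planar layout)
right = [10, 20, 30]
assert interleave([left, right]) == [1, 10, 2, 20, 3, 30]  # packed layout
```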
full parsing and repack
Only parse headers, do not repack.
full parsing and interpolation of timestamps for frames not starting on a packet boundary
full parsing and repack of the first frame only, only implemented for H.264 currently
full parsing and repack with timestamp and position generation by parser for raw; this assumes that each packet in the file contains no demuxer-level headers and just codec-level data, otherwise position generation would fail
@}
A bitmap, pict will be set
Plain text, the text field must be set by the decoder and is authoritative. ass and pict fields may contain approximations.
Formatted text, the ass field must be set by the decoder and is authoritative. pict and text fields may contain approximations.
timecode is drop frame
timecode wraps after 24 hours
negative time values are allowed
Dithering algorithms
not part of API/ABI
not part of API/ABI
Resampling Engines
SW Resampler
SoX Resampler
not part of API/ABI
Resampling Filter Types
Cubic
Blackman Nuttall windowed sinc
Kaiser windowed sinc
Rational number (pair of numerator and denominator).
Numerator
Denominator
Describe the class of an AVClass context structure. That is an arbitrary struct of which the first field is a pointer to an AVClass struct (e.g. AVCodecContext, AVFormatContext etc.).
The name of the class; usually it is the same name as the context structure type to which the AVClass is associated.
A pointer to a function which returns the name of a context instance ctx associated with the class.
a pointer to the first option specified in the class if any or NULL
LIBAVUTIL_VERSION with which this structure was created. This is used to allow fields to be added without requiring major version bumps everywhere.
Offset in the structure where log_level_offset is stored. 0 means there is no such variable
Offset in the structure where a pointer to the parent context for logging is stored. For example a decoder could pass its AVCodecContext to eval as such a parent context, which an av_log() implementation could then leverage to display the parent context. The offset can be NULL.
Category used for visualization (like color) This is only set if the category is equal for all objects using this class. available since version (51 << 16 | 56 << 8 | 100)
Callback to return the category. available since version (51 << 16 | 59 << 8 | 100)
Callback to return the supported/allowed ranges. available since version (52.12)
Return next AVOptions-enabled child or NULL
Iterate over the AVClasses corresponding to potential AVOptions-enabled children.
AVOption
short English help text
The offset relative to the context structure where the option value is stored. It should be 0 for named constants.
minimum valid value for the option
maximum valid value for the option
The logical unit to which the option belongs. Non-constant options and corresponding named constants share the same unit. May be NULL.
List of AVOptionRange structs.
Array of option ranges.
Number of ranges per component.
Number of components.
An AVChannelCustom defines a single channel within a custom order layout
An AVChannelLayout holds information about the channel layout of audio data.
Channel order used in this layout. This is a mandatory field.
Number of channels in this layout. Mandatory field.
For some private data of the user.
Details about which channels are present in this layout. For AV_CHANNEL_ORDER_UNSPEC, this field is undefined and must not be used.
This member must be used for AV_CHANNEL_ORDER_NATIVE, and may be used for AV_CHANNEL_ORDER_AMBISONIC to signal non-diegetic channels. It is a bitmask, where the position of each set bit means that the AVChannel with the corresponding value is present.
This member must be used when the channel order is AV_CHANNEL_ORDER_CUSTOM. It is a nb_channels-sized array, with each element signalling the presence of the AVChannel with the corresponding value in map[i].id.
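The native-order bitmask described above can be sketched as follows; the channel id constants here are illustrative assumptions, not the binding's definitions:

```python
# Bit i set in the mask means the channel with value i is present.
FRONT_LEFT, FRONT_RIGHT, FRONT_CENTER, LFE = 0, 1, 2, 3  # assumed example ids

def channels_from_mask(mask: int) -> list[int]:
    """Return the channel ids present in a native-order bitmask, in order."""
    return [i for i in range(mask.bit_length()) if mask >> i & 1]

stereo = (1 << FRONT_LEFT) | (1 << FRONT_RIGHT)
assert channels_from_mask(stereo) == [FRONT_LEFT, FRONT_RIGHT]
assert bin(stereo).count("1") == 2  # nb_channels must match the popcount
```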
Structure to hold side data for an AVFrame.
A reference to a data buffer.
The data buffer. It is considered writable if and only if this is the only reference to the buffer, in which case av_buffer_is_writable() returns 1.
Size of data in bytes.
Structure describing a single Region Of Interest.
Must be set to the size of this data structure (that is, sizeof(AVRegionOfInterest)).
Distance in pixels from the top edge of the frame to the top and bottom edges and from the left edge of the frame to the left and right edges of the rectangle defining this region of interest.
Quantisation offset.
This structure describes decoded (raw) audio or video data.
pointer to the picture/channel planes. This might be different from the first allocated byte. For video, it could even point to the end of the image data.
For video, a positive or negative value typically indicating the size in bytes of each picture line, but it can also be: - the negative byte size of lines for vertical flipping (with data[n] pointing to the end of the data) - a positive or negative multiple of the byte size for accessing even and odd fields of a frame (possibly flipped)
pointers to the data planes/channels.
Video frames only. The coded dimensions (in pixels) of the video frame, i.e. the size of the rectangle that contains some well-defined values.
Video frames only. The coded dimensions (in pixels) of the video frame, i.e. the size of the rectangle that contains some well-defined values.
number of audio samples (per channel) described by this frame
format of the frame, -1 if unknown or unset. Values correspond to enum AVPixelFormat for video frames, enum AVSampleFormat for audio.
1 -> keyframe, 0 -> not
Picture type of the frame.
Sample aspect ratio for the video frame, 0/1 if unknown/unspecified.
Presentation timestamp in time_base units (time when frame should be shown to user).
DTS copied from the AVPacket that triggered returning this frame. (if frame threading isn't used) This is also the Presentation time of this AVFrame calculated from only AVPacket.dts values without pts values.
Time base for the timestamps in this frame. In the future, this field may be set on frames output by decoders or filters, but its value will be by default ignored on input to encoders or filters.
picture number in bitstream order
picture number in display order
quality (between 1 (good) and FF_LAMBDA_MAX (bad))
for some private data of the user
When decoding, this signals how much the picture must be delayed. extra_delay = repeat_pict / (2*fps)
The content of the picture is interlaced.
If the content is interlaced, is top field displayed first.
Tell user application that palette has changed from previous frame.
reordered opaque 64 bits (generally an integer or a double precision float PTS but can be anything). The user sets AVCodecContext.reordered_opaque to represent the input at that time, the decoder reorders values as needed and sets AVFrame.reordered_opaque to exactly one of the values provided by the user through AVCodecContext.reordered_opaque
Sample rate of the audio data.
Channel layout of the audio data.
AVBuffer references backing the data for this frame. All the pointers in data and extended_data must point inside one of the buffers in buf or extended_buf. This array must be filled contiguously -- if buf[i] is non-NULL then buf[j] must also be non-NULL for all j < i.
For planar audio which requires more than AV_NUM_DATA_POINTERS AVBufferRef pointers, this array will hold all the references which cannot fit into AVFrame.buf.
Number of elements in extended_buf.
Frame flags, a combination of lavu_frame_flags
MPEG vs JPEG YUV range. - encoding: Set by user - decoding: Set by libavcodec
YUV colorspace type. - encoding: Set by user - decoding: Set by libavcodec
frame timestamp estimated using various heuristics, in stream time base - encoding: unused - decoding: set by libavcodec, read by user.
reordered pos from the last AVPacket that has been input into the decoder - encoding: unused - decoding: Read by user.
duration of the corresponding packet, expressed in AVStream->time_base units, 0 if unknown. - encoding: unused - decoding: Read by user.
metadata. - encoding: Set by user. - decoding: Set by libavcodec.
decode error flags of the frame, set to a combination of FF_DECODE_ERROR_xxx flags if the decoder produced a frame, but there were errors during the decoding. - encoding: unused - decoding: set by libavcodec, read by user.
number of audio channels, only used for audio. - encoding: unused - decoding: Read by user.
size of the corresponding packet containing the compressed frame. It is set to a negative value if unknown. - encoding: unused - decoding: set by libavcodec, read by user.
For hwaccel-format frames, this should be a reference to the AVHWFramesContext describing the frame.
AVBufferRef for free use by the API user. FFmpeg will never check the contents of the buffer ref. FFmpeg calls av_buffer_unref() on it when the frame is unreferenced. av_frame_copy_props() calls create a new reference with av_buffer_ref() for the target frame's opaque_ref field.
Cropping. Video frames only. The number of pixels to discard from the top/bottom/left/right border of the frame to obtain the sub-rectangle of the frame intended for presentation. @{
AVBufferRef for internal use by a single libav* library. Must not be used to transfer data between libraries. Has to be NULL when ownership of the frame leaves the respective library.
Channel layout of the audio data.
A single allowed range of values, or a single allowed value.
Value range. For string ranges this represents the min/max length. For dimensions this represents the min/max pixel count or width/height in multi-component case.
Value range. For string ranges this represents the min/max length. For dimensions this represents the min/max pixel count or width/height in multi-component case.
Value's component range. For string this represents the unicode range for chars, 0-127 limits to ASCII.
Value's component range. For string this represents the unicode range for chars, 0-127 limits to ASCII.
Range flag. If set to 1 the struct encodes a range, if set to 0 a single value.
the default value for scalar options
Descriptor that unambiguously describes how the bits of a pixel are stored in the up to 4 data planes of an image. It also stores the subsampling factors and number of components.
The number of components each pixel has, (1-4)
Amount to shift the luma width right to find the chroma width. For YV12 this is 1 for example. chroma_width = AV_CEIL_RSHIFT(luma_width, log2_chroma_w) The note above is needed to ensure rounding up. This value only refers to the chroma components.
Amount to shift the luma height right to find the chroma height. For YV12 this is 1 for example. chroma_height= AV_CEIL_RSHIFT(luma_height, log2_chroma_h) The note above is needed to ensure rounding up. This value only refers to the chroma components.
Combination of AV_PIX_FMT_FLAG_... flags.
Parameters that describe how pixels are packed. If the format has 1 or 2 components, then luma is 0. If the format has 3 or 4 components: if the RGB flag is set then 0 is red, 1 is green and 2 is blue; otherwise 0 is luma, 1 is chroma-U and 2 is chroma-V.
Alternative comma-separated names.
Which of the 4 planes contains the component.
Number of elements between 2 horizontally consecutive pixels. Elements are bits for bitstream formats, bytes otherwise.
Number of elements before the component of the first pixel. Elements are bits for bitstream formats, bytes otherwise.
Number of least significant bits that must be shifted away to get the value.
Number of bits in the component.
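The chroma-shift fields above combine with a ceiling right shift (mirroring AV_CEIL_RSHIFT) so that odd luma dimensions round up; a minimal sketch:

```python
def ceil_rshift(a: int, shift: int) -> int:
    """Right-shift with rounding up, as AV_CEIL_RSHIFT does."""
    return -((-a) >> shift)

# YV12-style 4:2:0: log2_chroma_w = log2_chroma_h = 1
assert ceil_rshift(1920, 1) == 960
assert ceil_rshift(1919, 1) == 960  # odd luma width still rounds up
```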
timecode frame start (first base frame number)
flags such as drop frame, +24 hours support, ...
frame rate in rational form
frames per second; must be consistent with the rate field
This struct aggregates all the (hardware/vendor-specific) "high-level" state, i.e. state that is not tied to a concrete processing configuration. E.g., in an API that supports hardware-accelerated encoding and decoding, this struct will (if possible) wrap the state that is common to both encoding and decoding and from which specific instances of encoders or decoders can be derived.
A class for logging. Set by av_hwdevice_ctx_alloc().
Private data used internally by libavutil. Must not be accessed in any way by the caller.
This field identifies the underlying API used for hardware access.
The format-specific data, allocated and freed by libavutil along with this context.
This field may be set by the caller before calling av_hwdevice_ctx_init().
Arbitrary user data, to be used e.g. by the free() callback.
This struct describes a set or pool of "hardware" frames (i.e. those with data not located in normal system memory). All the frames in the pool are assumed to be allocated in the same way and interchangeable.
A class for logging.
Private data used internally by libavutil. Must not be accessed in any way by the caller.
A reference to the parent AVHWDeviceContext. This reference is owned and managed by the enclosing AVHWFramesContext, but the caller may derive additional references from it.
The parent AVHWDeviceContext. This is simply a pointer to device_ref->data provided for convenience.
The format-specific data, allocated and freed automatically along with this context.
This field may be set by the caller before calling av_hwframe_ctx_init().
Arbitrary user data, to be used e.g. by the free() callback.
A pool from which the frames are allocated by av_hwframe_get_buffer(). This field may be set by the caller before calling av_hwframe_ctx_init(). The buffers returned by calling av_buffer_pool_get() on this pool must have the properties described in the documentation in the corresponding hw type's header (hwcontext_*.h). The pool will be freed strictly before this struct's free() callback is invoked.
Initial size of the frame pool. If a device type does not support dynamically resizing the pool, then this is also the maximum pool size.
The pixel format identifying the underlying HW surface type.
The pixel format identifying the actual data layout of the hardware frames.
The allocated dimensions of the frames in this pool.
The allocated dimensions of the frames in this pool.
This struct describes the constraints on hardware frames attached to a given device with a hardware-specific configuration. This is returned by av_hwdevice_get_hwframe_constraints() and must be freed by av_hwframe_constraints_free() after use.
A list of possible values for format in the hw_frames_ctx, terminated by AV_PIX_FMT_NONE. This member will always be filled.
A list of possible values for sw_format in the hw_frames_ctx, terminated by AV_PIX_FMT_NONE. Can be NULL if this information is not known.
The minimum size of frames in this hw_frames_ctx. (Zero if not known.)
The maximum size of frames in this hw_frames_ctx. (INT_MAX if not known / no limit.)
This struct is allocated as AVHWDeviceContext.hwctx
This struct is allocated as AVHWFramesContext.hwctx
The surface type (e.g. DXVA2_VideoProcessorRenderTarget or DXVA2_VideoDecoderRenderTarget). Must be set by the caller.
The surface pool. When an external pool is not provided by the caller, this will be managed (allocated and filled on init, freed on uninit) by libavutil.
Certain drivers require the decoder to be destroyed before the surfaces. To allow internally managed pools to work properly in such cases, this field is provided.
This struct is allocated as AVHWDeviceContext.hwctx
Device used for texture creation and access. This can also be used to set the libavcodec decoding device.
If unset, this will be set from the device field on init.
If unset, this will be set from the device field on init.
If unset, this will be set from the device_context field on init.
Callbacks for locking. They protect accesses to device_context and video_context calls. They also protect access to the internal staging texture (for av_hwframe_transfer_data() calls). They do NOT protect access to hwcontext or decoder state in general.
D3D11 frame descriptor for pool allocation.
The texture in which the frame is located. The reference count is managed by the AVBufferRef, and destroying the reference will release the interface.
The index into the array texture element representing the frame, or 0 if the texture is not an array texture.
This struct is allocated as AVHWFramesContext.hwctx
The canonical texture used for pool allocation. If this is set to NULL on init, the hwframes implementation will allocate and set an array texture if initial_pool_size > 0.
D3D11_TEXTURE2D_DESC.BindFlags used for texture creation. The user must at least set D3D11_BIND_DECODER if the frames context is to be used for video decoding. This field is ignored/invalid if a user-allocated texture is provided.
D3D11_TEXTURE2D_DESC.MiscFlags used for texture creation. This field is ignored/invalid if a user-allocated texture is provided.
If the texture member above is non-NULL, all elements contain the same texture pointer, with different indexes into the array texture. If the texture member above is NULL, each element contains a pointer to a separate non-array texture, with index 0. This field is ignored/invalid if a user-allocated texture is provided.
Represents the percentile at a specific percentage in a distribution.
The percentage value corresponding to a specific percentile linearized RGB value in the processing window in the scene. The value shall be in the range of 0 to 100, inclusive.
The linearized maxRGB value at a specific percentile in the processing window in the scene. The value shall be in the range of 0 to 1, inclusive and in multiples of 0.00001.
Color transform parameters at a processing window in a dynamic metadata for SMPTE 2094-40.
The relative x coordinate of the top left pixel of the processing window. The value shall be in the range of 0 and 1, inclusive and in multiples of 1/(width of Picture - 1). The value 1 corresponds to the absolute coordinate of width of Picture - 1. The value for first processing window shall be 0.
The relative y coordinate of the top left pixel of the processing window. The value shall be in the range of 0 and 1, inclusive and in multiples of 1/(height of Picture - 1). The value 1 corresponds to the absolute coordinate of height of Picture - 1. The value for first processing window shall be 0.
The relative x coordinate of the bottom right pixel of the processing window. The value shall be in the range of 0 and 1, inclusive and in multiples of 1/(width of Picture - 1). The value 1 corresponds to the absolute coordinate of width of Picture - 1. The value for first processing window shall be 1.
The relative y coordinate of the bottom right pixel of the processing window. The value shall be in the range of 0 and 1, inclusive and in multiples of 1/(height of Picture - 1). The value 1 corresponds to the absolute coordinate of height of Picture - 1. The value for first processing window shall be 1.
The x coordinate of the center position of the concentric internal and external ellipses of the elliptical pixel selector in the processing window. The value shall be in the range of 0 to (width of Picture - 1), inclusive and in multiples of 1 pixel.
The y coordinate of the center position of the concentric internal and external ellipses of the elliptical pixel selector in the processing window. The value shall be in the range of 0 to (height of Picture - 1), inclusive and in multiples of 1 pixel.
The clockwise rotation angle in degree of arc with respect to the positive direction of the x-axis of the concentric internal and external ellipses of the elliptical pixel selector in the processing window. The value shall be in the range of 0 to 180, inclusive and in multiples of 1.
The semi-major axis value of the internal ellipse of the elliptical pixel selector in amount of pixels in the processing window. The value shall be in the range of 1 to 65535, inclusive and in multiples of 1 pixel.
The semi-major axis value of the external ellipse of the elliptical pixel selector in amount of pixels in the processing window. The value shall not be less than semimajor_axis_internal_ellipse of the current processing window. The value shall be in the range of 1 to 65535, inclusive and in multiples of 1 pixel.
The semi-minor axis value of the external ellipse of the elliptical pixel selector in amount of pixels in the processing window. The value shall be in the range of 1 to 65535, inclusive and in multiples of 1 pixel.
Overlap process option indicates one of the two methods of combining rendered pixels in the processing window in an image with at least one elliptical pixel selector. For overlapping elliptical pixel selectors in an image, overlap_process_option shall have the same value.
The maximum of the color components of linearized RGB values in the processing window in the scene. The values should be in the range of 0 to 1, inclusive and in multiples of 0.00001. maxscl[ 0 ], maxscl[ 1 ], and maxscl[ 2 ] correspond to the R, G, and B color components respectively.
The average of linearized maxRGB values in the processing window in the scene. The value should be in the range of 0 to 1, inclusive and in multiples of 0.00001.
The number of linearized maxRGB values at given percentiles in the processing window in the scene. The maximum value shall be 15.
The linearized maxRGB values at given percentiles in the processing window in the scene.
The fraction of selected pixels in the image that contains the brightest pixel in the scene. The value shall be in the range of 0 to 1, inclusive and in multiples of 0.001.
This flag indicates that the metadata for the tone mapping function in the processing window is present (for value of 1).
The x coordinate of the separation point between the linear part and the curved part of the tone mapping function. The value shall be in the range of 0 to 1, excluding 0 and in multiples of 1/4095.
The y coordinate of the separation point between the linear part and the curved part of the tone mapping function. The value shall be in the range of 0 to 1, excluding 0 and in multiples of 1/4095.
The number of the intermediate anchor parameters of the tone mapping function in the processing window. The maximum value shall be 15.
The intermediate anchor parameters of the tone mapping function in the processing window in the scene. The values should be in the range of 0 to 1, inclusive and in multiples of 1/1023.
This flag shall be equal to 0 in bitstreams conforming to this version of this Specification. Other values are reserved for future use.
The color saturation gain in the processing window in the scene. The value shall be in the range of 0 to 63/8, inclusive and in multiples of 1/8. The default value shall be 1.
This struct represents dynamic metadata for color volume transform - application 4 of SMPTE 2094-40:2016 standard.
Country code by Rec. ITU-T T.35 Annex A. The value shall be 0xB5.
Application version in the application defining document in ST-2094 suite. The value shall be set to 0.
The number of processing windows. The value shall be in the range of 1 to 3, inclusive.
The color transform parameters for every processing window.
The nominal maximum display luminance of the targeted system display, in units of 0.0001 candelas per square metre. The value shall be in the range of 0 to 10000, inclusive.
This flag shall be equal to 0 in bitstreams conforming to this version of this Specification. The value 1 is reserved for future use.
The number of rows in the targeted_system_display_actual_peak_luminance array. The value shall be in the range of 2 to 25, inclusive.
The number of columns in the targeted_system_display_actual_peak_luminance array. The value shall be in the range of 2 to 25, inclusive.
The normalized actual peak luminance of the targeted system display. The values should be in the range of 0 to 1, inclusive and in multiples of 1/15.
This flag shall be equal to 0 in bitstreams conforming to this version of this Specification. The value 1 is reserved for future use.
The number of rows in the mastering_display_actual_peak_luminance array. The value shall be in the range of 2 to 25, inclusive.
The number of columns in the mastering_display_actual_peak_luminance array. The value shall be in the range of 2 to 25, inclusive.
The normalized actual peak luminance of the mastering display used for mastering the image essence. The values should be in the range of 0 to 1, inclusive and in multiples of 1/15.
Mastering display metadata capable of representing the color volume of the display used to master the content (SMPTE 2086:2014).
CIE 1931 xy chromaticity coords of color primaries (r, g, b order).
CIE 1931 xy chromaticity coords of white point.
Min luminance of mastering display (cd/m^2).
Max luminance of mastering display (cd/m^2).
Flag indicating whether the display primaries (and white point) are set.
Flag indicating whether the luminance (min_ and max_) have been set.
Content light level needed to transmit HDR over HDMI (CTA-861.3).
Max content light level (cd/m^2).
Max average light level per frame (cd/m^2).
pointer to the list of coefficients
number of coefficients in the vector
main external API structure. New fields can be added to the end with minor version bumps. Removal, reordering and changes to existing fields require a major version bump. You can use AVOptions (av_opt* / av_set/get*()) to access these fields from user applications. The name string for AVOptions options matches the associated command line parameter name and can be found in libavcodec/options_table.h The AVOption/command line parameter names differ in some cases from the C structure field names for historic reasons or brevity. sizeof(AVCodecContext) must not be used outside libav*.
information on struct for av_log - set by avcodec_alloc_context3
fourcc (LSB first, so "ABCD" -> ('D'<<24) + ('C'<<16) + ('B'<<8) + 'A'). This is used to work around some encoder bugs. A demuxer should set this to what is stored in the field used to identify the codec. If there are multiple such fields in a container then the demuxer should choose the one which maximizes the information about the used codec. If the codec tag field in a container is larger than 32 bits then the demuxer should remap the longer ID to 32 bits with a table or other structure. Alternatively a new extra_codec_tag + size could be added but for this a clear advantage must be demonstrated first. - encoding: Set by user, if not then the default based on codec_id will be used. - decoding: Set by user, will be converted to uppercase by libavcodec during init.
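The LSB-first fourcc layout described above can be sketched with a small helper. `make_tag` below is a hypothetical equivalent of FFmpeg's MKTAG macro, shown only to make the byte order concrete; it is not part of the bindings.

```c
#include <stdint.h>

/* Pack a four-character code the way codec_tag stores it: least-significant
 * byte first, so "ABCD" becomes ('D'<<24) + ('C'<<16) + ('B'<<8) + 'A'.
 * Hypothetical sketch mirroring FFmpeg's MKTAG macro. */
static uint32_t make_tag(char a, char b, char c, char d)
{
    return (uint32_t)(unsigned char)a        |
           ((uint32_t)(unsigned char)b << 8) |
           ((uint32_t)(unsigned char)c << 16)|
           ((uint32_t)(unsigned char)d << 24);
}
```

So the tag for "ABCD" is 0x44434241, with 'A' in the lowest byte.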
Private context used for internal data.
Private data of the user, can be used to carry app specific stuff. - encoding: Set by user. - decoding: Set by user.
the average bitrate - encoding: Set by user; unused for constant quantizer encoding. - decoding: Set by user, may be overwritten by libavcodec if this info is available in the stream
number of bits the bitstream is allowed to diverge from the reference. The reference can be CBR (for CBR pass1) or VBR (for pass2). - encoding: Set by user; unused for constant quantizer encoding. - decoding: unused
Global quality for codecs which cannot change it per frame. This should be proportional to MPEG-1/2/4 qscale. - encoding: Set by user. - decoding: unused
- encoding: Set by user. - decoding: unused
AV_CODEC_FLAG_*. - encoding: Set by user. - decoding: Set by user.
AV_CODEC_FLAG2_* - encoding: Set by user. - decoding: Set by user.
Some codecs need / can use extradata like Huffman tables: MJPEG: Huffman tables; rv10: additional flags; MPEG-4: global headers (they can be in the bitstream or here). The allocated memory should be AV_INPUT_BUFFER_PADDING_SIZE bytes larger than extradata_size to avoid problems if it is read with the bitstream reader. The bytewise contents of extradata must not depend on the architecture or CPU endianness. Must be allocated with the av_malloc() family of functions. - encoding: Set/allocated/freed by libavcodec. - decoding: Set/allocated/freed by user.
This is the fundamental unit of time (in seconds) in terms of which frame timestamps are represented. For fixed-fps content, timebase should be 1/framerate and timestamp increments should be identically 1. This is often, but not always, the inverse of the frame rate or field rate for video. 1/time_base is not the average frame rate if the frame rate is not constant.
For some codecs, the time base is closer to the field rate than the frame rate. Most notably, H.264 and MPEG-2 specify time_base as half of frame duration if no telecine is used ...
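The time_base convention above can be illustrated with a minimal sketch. The `Rational` type and `pts_to_seconds` helper below are hypothetical stand-ins for FFmpeg's AVRational and av_q2d-style arithmetic: with time_base = 1/25 for fixed 25 fps content, frame n has pts = n, and its presentation time in seconds is pts * num / den.

```c
#include <stdint.h>

/* Hypothetical sketch of time_base arithmetic; Rational stands in for
 * FFmpeg's AVRational. For fixed-fps content with time_base = 1/framerate,
 * a timestamp of n ticks corresponds to n * num / den seconds. */
typedef struct { int num, den; } Rational;

static double pts_to_seconds(int64_t pts, Rational time_base)
{
    return (double)pts * time_base.num / time_base.den;
}
```

For example, with a time base of 1/25, a pts of 50 corresponds to 2 seconds of presentation time.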
Codec delay.
picture width / height.
picture width / height.
Bitstream width / height, may be different from width/height e.g. when the decoded frame is cropped before being output or lowres is enabled.
Bitstream width / height, may be different from width/height e.g. when the decoded frame is cropped before being output or lowres is enabled.
the number of pictures in a group of pictures, or 0 for intra_only - encoding: Set by user. - decoding: unused
Pixel format, see AV_PIX_FMT_xxx. May be set by the demuxer if known from headers. May be overridden by the decoder if it knows better.
If non-NULL, 'draw_horiz_band' is called by the libavcodec decoder to draw a horizontal band. It improves cache usage. Not all codecs can do that. You must check the codec capabilities beforehand. When multithreading is used, it may be called from multiple threads at the same time; threads might draw different parts of the same AVFrame, or multiple AVFrames, and there is no guarantee that slices will be drawn in order. The function is also used by hardware acceleration APIs. It is called at least once during frame decoding to pass the data needed for hardware render. In that mode instead of pixel data, AVFrame points to a structure specific to the acceleration API. The application reads the structure and can change some fields to indicate progress or mark state. - encoding: unused - decoding: Set by user.
Callback to negotiate the pixel format. Decoding only, may be set by the caller before avcodec_open2().
maximum number of B-frames between non-B-frames Note: The output will be delayed by max_b_frames+1 relative to the input. - encoding: Set by user. - decoding: unused
qscale factor between IP and B-frames If > 0 then the last P-frame quantizer will be used (q= lastp_q*factor+offset). If < 0 then normal ratecontrol will be done (q= -normal_q*factor+offset). - encoding: Set by user. - decoding: unused
qscale offset between IP and B-frames - encoding: Set by user. - decoding: unused
Size of the frame reordering buffer in the decoder. For MPEG-2 it is 1 IPB or 0 low delay IP. - encoding: Set by libavcodec. - decoding: Set by libavcodec.
qscale factor between P- and I-frames If > 0 then the last P-frame quantizer will be used (q = lastp_q * factor + offset). If < 0 then normal ratecontrol will be done (q= -normal_q*factor+offset). - encoding: Set by user. - decoding: unused
qscale offset between P and I-frames - encoding: Set by user. - decoding: unused
luminance masking (0-> disabled) - encoding: Set by user. - decoding: unused
temporal complexity masking (0-> disabled) - encoding: Set by user. - decoding: unused
spatial complexity masking (0-> disabled) - encoding: Set by user. - decoding: unused
p block masking (0-> disabled) - encoding: Set by user. - decoding: unused
darkness masking (0-> disabled) - encoding: Set by user. - decoding: unused
slice count - encoding: Set by libavcodec. - decoding: Set by user (or 0).
slice offsets in the frame in bytes - encoding: Set/allocated by libavcodec. - decoding: Set/allocated by user (or NULL).
sample aspect ratio (0 if unknown). That is the width of a pixel divided by the height of the pixel. Numerator and denominator must be relatively prime and smaller than 256 for some video standards. - encoding: Set by user. - decoding: Set by libavcodec.
motion estimation comparison function - encoding: Set by user. - decoding: unused
subpixel motion estimation comparison function - encoding: Set by user. - decoding: unused
macroblock comparison function (not supported yet) - encoding: Set by user. - decoding: unused
interlaced DCT comparison function - encoding: Set by user. - decoding: unused
ME diamond size & shape - encoding: Set by user. - decoding: unused
amount of previous MV predictors (2a+1 x 2a+1 square) - encoding: Set by user. - decoding: unused
motion estimation prepass comparison function - encoding: Set by user. - decoding: unused
ME prepass diamond size & shape - encoding: Set by user. - decoding: unused
subpel ME quality - encoding: Set by user. - decoding: unused
maximum motion estimation search range in subpel units. If 0 then no limit.
slice flags - encoding: unused - decoding: Set by user.
macroblock decision mode - encoding: Set by user. - decoding: unused
custom intra quantization matrix Must be allocated with the av_malloc() family of functions, and will be freed in avcodec_free_context(). - encoding: Set/allocated by user, freed by libavcodec. Can be NULL. - decoding: Set/allocated/freed by libavcodec.
custom inter quantization matrix Must be allocated with the av_malloc() family of functions, and will be freed in avcodec_free_context(). - encoding: Set/allocated by user, freed by libavcodec. Can be NULL. - decoding: Set/allocated/freed by libavcodec.
precision of the intra DC coefficient - 8 - encoding: Set by user. - decoding: Set by libavcodec
Number of macroblock rows at the top which are skipped. - encoding: unused - decoding: Set by user.
Number of macroblock rows at the bottom which are skipped. - encoding: unused - decoding: Set by user.
minimum MB Lagrange multiplier - encoding: Set by user. - decoding: unused
maximum MB Lagrange multiplier - encoding: Set by user. - decoding: unused
- encoding: Set by user. - decoding: unused
minimum GOP size - encoding: Set by user. - decoding: unused
number of reference frames - encoding: Set by user. - decoding: Set by lavc.
Note: Value depends upon the compare function used for fullpel ME. - encoding: Set by user. - decoding: unused
Chromaticity coordinates of the source primaries. - encoding: Set by user - decoding: Set by libavcodec
Color Transfer Characteristic. - encoding: Set by user - decoding: Set by libavcodec
YUV colorspace type. - encoding: Set by user - decoding: Set by libavcodec
MPEG vs JPEG YUV range. - encoding: Set by user - decoding: Set by libavcodec
This defines the location of chroma samples. - encoding: Set by user - decoding: Set by libavcodec
Number of slices. Indicates number of picture subdivisions. Used for parallelized decoding. - encoding: Set by user - decoding: unused
Field order - encoding: set by libavcodec - decoding: Set by user.
samples per second
number of audio channels
sample format
Number of samples per channel in an audio frame.
Frame counter, set by libavcodec.
number of bytes per packet if constant and known or 0 Used by some WAV based audio codecs.
Audio cutoff bandwidth (0 means "automatic") - encoding: Set by user. - decoding: unused
Audio channel layout. - encoding: set by user. - decoding: set by user, may be overwritten by libavcodec.
Request decoder to use this channel layout if it can (0 for default) - encoding: unused - decoding: Set by user.
Type of service that the audio stream conveys. - encoding: Set by user. - decoding: Set by libavcodec.
desired sample format - encoding: Not used. - decoding: Set by user. Decoder will decode to this format if it can.
This callback is called at the beginning of each frame to get data buffer(s) for it. There may be one contiguous buffer for all the data or there may be a buffer per each data plane or anything in between. What this means is, you may set however many entries in buf[] you feel necessary. Each buffer must be reference-counted using the AVBuffer API (see description of buf[] below).
amount of qscale change between easy & hard scenes (0.0-1.0)
amount of qscale smoothing over time (0.0-1.0)
minimum quantizer - encoding: Set by user. - decoding: unused
maximum quantizer - encoding: Set by user. - decoding: unused
maximum quantizer difference between frames - encoding: Set by user. - decoding: unused
decoder bitstream buffer size - encoding: Set by user. - decoding: unused
ratecontrol override, see RcOverride - encoding: Allocated/set/freed by user. - decoding: unused
maximum bitrate - encoding: Set by user. - decoding: Set by user, may be overwritten by libavcodec.
minimum bitrate - encoding: Set by user. - decoding: unused
Ratecontrol will attempt to use, at most, <value> of what can be used without an underflow. - encoding: Set by user. - decoding: unused.
Ratecontrol will attempt to use, at least, <value> times the amount needed to prevent a vbv overflow. - encoding: Set by user. - decoding: unused.
Number of bits which should be loaded into the rc buffer before decoding starts. - encoding: Set by user. - decoding: unused
trellis RD quantization - encoding: Set by user. - decoding: unused
pass1 encoding statistics output buffer - encoding: Set by libavcodec. - decoding: unused
pass2 encoding statistics input buffer Concatenated stuff from stats_out of pass1 should be placed here. - encoding: Allocated/set/freed by user. - decoding: unused
Work around bugs in encoders which sometimes cannot be detected automatically. - encoding: Set by user - decoding: Set by user
strictly follow the standard (MPEG-4, ...). - encoding: Set by user. - decoding: Set by user. Setting this to STRICT or higher means the encoder and decoder will generally do stupid things, whereas setting it to unofficial or lower will mean the encoder might produce output that is not supported by all spec-compliant decoders. Decoders don't differentiate between normal, unofficial and experimental (that is, they always try to decode things when they can) unless they are explicitly asked to behave stupidly (=strictly conform to the specs)
error concealment flags - encoding: unused - decoding: Set by user.
debug - encoding: Set by user. - decoding: Set by user.
Error recognition; may misdetect some more or less valid parts as errors. - encoding: Set by user. - decoding: Set by user.
opaque 64-bit number (generally a PTS) that will be reordered and output in AVFrame.reordered_opaque - encoding: Set by libavcodec to the reordered_opaque of the input frame corresponding to the last returned packet. Only supported by encoders with the AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE capability. - decoding: Set by user.
Hardware accelerator in use - encoding: unused. - decoding: Set by libavcodec
Hardware accelerator context. For some hardware accelerators, a global context needs to be provided by the user. In that case, this holds display-dependent data FFmpeg cannot instantiate itself. Please refer to the FFmpeg HW accelerator documentation to know how to fill this. - encoding: unused - decoding: Set by user
error - encoding: Set by libavcodec if flags & AV_CODEC_FLAG_PSNR. - decoding: unused
DCT algorithm, see FF_DCT_* below - encoding: Set by user. - decoding: unused
IDCT algorithm, see FF_IDCT_* below. - encoding: Set by user. - decoding: Set by user.
bits per sample/pixel from the demuxer (needed for huffyuv). - encoding: Set by libavcodec. - decoding: Set by user.
Bits per sample/pixel of internal libavcodec pixel/sample format. - encoding: set by user. - decoding: set by libavcodec.
low resolution decoding, 1-> 1/2 size, 2->1/4 size - encoding: unused - decoding: Set by user.
thread count is used to decide how many independent tasks should be passed to execute() - encoding: Set by user. - decoding: Set by user.
Which multithreading methods to use. Use of FF_THREAD_FRAME will increase decoding delay by one frame per thread, so clients which cannot provide future frames should not use it.
Which multithreading methods are in use by the codec. - encoding: Set by libavcodec. - decoding: Set by libavcodec.
Set by the client if its custom get_buffer() callback can be called synchronously from another thread, which allows faster multithreaded decoding. draw_horiz_band() will be called from other threads regardless of this setting. Ignored if the default get_buffer() is used. - encoding: Set by user. - decoding: Set by user.
The codec may call this to execute several independent things. It will return only after finishing all tasks. The user may replace this with some multithreaded implementation, the default implementation will execute the parts serially.
The codec may call this to execute several independent things. It will return only after finishing all tasks. The user may replace this with some multithreaded implementation, the default implementation will execute the parts serially.
noise vs. sse weight for the nsse comparison function - encoding: Set by user. - decoding: unused
profile - encoding: Set by user. - decoding: Set by libavcodec.
level - encoding: Set by user. - decoding: Set by libavcodec.
Skip loop filtering for selected frames. - encoding: unused - decoding: Set by user.
Skip IDCT/dequantization for selected frames. - encoding: unused - decoding: Set by user.
Skip decoding for selected frames. - encoding: unused - decoding: Set by user.
Header containing style information for text subtitles. For SUBTITLE_ASS subtitle type, it should contain the whole ASS [Script Info] and [V4+ Styles] section, plus the [Events] line and the Format line following. It shouldn't include any Dialogue line. - encoding: Set/allocated/freed by user (before avcodec_open2()) - decoding: Set/allocated/freed by libavcodec (by avcodec_open2())
Audio only. The number of "priming" samples (padding) inserted by the encoder at the beginning of the audio. I.e. this number of leading decoded samples must be discarded by the caller to get the original audio without leading padding.
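Discarding the priming samples described above is a simple slice of the decoded output. The `trim_priming` helper below is a hypothetical sketch (not an FFmpeg API) assuming interleaved 16-bit samples: it drops the first `initial_padding` samples per channel and returns the remaining sample count.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch: drop the first `initial_padding` decoded samples
 * (per channel, interleaved 16-bit) to recover the original audio without
 * the encoder's leading padding. Returns the remaining per-channel sample
 * count, or 0 if the padding covers the whole buffer. */
static size_t trim_priming(short *samples, size_t nb_samples,
                           int channels, int initial_padding)
{
    size_t skip = (size_t)initial_padding * (size_t)channels;
    size_t total = nb_samples * (size_t)channels;
    if (skip >= total)
        return 0;
    /* Shift the remaining interleaved samples to the front of the buffer. */
    memmove(samples, samples + skip, (total - skip) * sizeof(*samples));
    return nb_samples - (size_t)initial_padding;
}
```

In a real player the same trimming is typically driven by initial_padding together with the stream's skip-samples side data.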
- decoding: For codecs that store a framerate value in the compressed bitstream, the decoder may export it here. { 0, 1} when unknown. - encoding: May be used to signal the framerate of CFR content to an encoder.
Nominal unaccelerated pixel format, see AV_PIX_FMT_xxx. - encoding: unused. - decoding: Set by libavcodec before calling get_format()
Timebase in which pkt_dts/pts and AVPacket.dts/pts are expressed. - encoding: unused. - decoding: set by user.
AVCodecDescriptor - encoding: unused. - decoding: set by libavcodec.
Current statistics for PTS correction. - decoding: maintained and used by libavcodec, not intended to be used by user apps - encoding: unused
Number of incorrect PTS values so far
Number of incorrect DTS values so far
PTS of the last frame
Character encoding of the input subtitles file. - decoding: set by user - encoding: unused
Subtitles character encoding mode. Formats or codecs might be adjusting this setting (if they are doing the conversion themselves for instance). - decoding: set by libavcodec - encoding: unused
Skip processing alpha if supported by codec. Note that if the format uses pre-multiplied alpha (common with VP6, and recommended due to better video quality/compression) the image will look as if alpha-blended onto a black background. However for formats that do not use pre-multiplied alpha there might be serious artefacts (though e.g. libswscale currently assumes pre-multiplied alpha anyway).
Number of samples to skip after a discontinuity - decoding: unused - encoding: set by libavcodec
custom intra quantization matrix - encoding: Set by user, can be NULL. - decoding: unused.
Dump format separator. Can be ", " or " " or anything else. - encoding: Set by user. - decoding: Set by user.
',' separated list of allowed decoders. If NULL then all are allowed - encoding: unused - decoding: set by user
Properties of the stream that gets decoded - encoding: unused - decoding: set by libavcodec
Additional data associated with the entire coded stream.
A reference to the AVHWFramesContext describing the input (for encoding) or output (decoding) frames. The reference is set by the caller and afterwards owned (and freed) by libavcodec - it should never be read by the caller after being set.
Audio only. The amount of padding (in samples) appended by the encoder to the end of the audio. I.e. this number of decoded samples must be discarded by the caller from the end of the stream to get the original audio without any trailing padding.
The number of pixels per image to maximally accept.
A reference to the AVHWDeviceContext describing the device which will be used by a hardware encoder/decoder. The reference is set by the caller and afterwards owned (and freed) by libavcodec.
Bit set of AV_HWACCEL_FLAG_* flags, which affect hardware accelerated decoding (if active). - encoding: unused - decoding: Set by user (either before avcodec_open2(), or in the AVCodecContext.get_format callback)
Video decoding only. Certain video codecs support cropping, meaning that only a sub-rectangle of the decoded frame is intended for display. This option controls how cropping is handled by libavcodec.
The percentage of damaged samples above which a frame is discarded.
The number of samples per frame to maximally accept.
Bit set of AV_CODEC_EXPORT_DATA_* flags, which affects the kind of metadata exported in frame, packet, or coded stream side data by decoders and encoders.
This callback is called at the beginning of each packet to get a data buffer for it.
Audio channel layout. - encoding: must be set by the caller, to one of AVCodec.ch_layouts. - decoding: may be set by the caller if known e.g. from the container. The decoder can then override during decoding as needed.
AVCodec.
Name of the codec implementation. The name is globally unique among encoders and among decoders (but an encoder and a decoder can share the same name). This is the primary way to find a codec from the user perspective.
Descriptive name for the codec, meant to be more human readable than name. You should use the NULL_IF_CONFIG_SMALL() macro to define it.
Codec capabilities. see AV_CODEC_CAP_*
maximum value for lowres supported by the decoder
array of supported framerates, or NULL if any, array is terminated by {0,0}
array of supported pixel formats, or NULL if unknown, array is terminated by -1
array of supported audio samplerates, or NULL if unknown, array is terminated by 0
array of supported sample formats, or NULL if unknown, array is terminated by -1
array of supported channel layouts, or NULL if unknown, array is terminated by 0
AVClass for the private context
array of recognized profiles, or NULL if unknown, array is terminated by {FF_PROFILE_UNKNOWN}
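The capability lists above all share one convention: a pointer that may be NULL, otherwise an array ending in a sentinel (0, -1, {0,0}, or FF_PROFILE_UNKNOWN depending on the list). A minimal sketch of walking such a list, using a hypothetical `count_until_sentinel` helper rather than any real FFmpeg function:

```c
#include <stddef.h>

/* Hypothetical sketch: count the entries of a sentinel-terminated list,
 * the convention used for supported pixel-format (-1-terminated) and
 * sample-rate (0-terminated) arrays. A NULL list means "unknown/any". */
static size_t count_until_sentinel(const int *list, int sentinel)
{
    size_t n = 0;
    if (!list)
        return 0;
    while (list[n] != sentinel)
        n++;
    return n;
}
```

The same loop shape applies to the framerate, sample-format, channel-layout, and profile arrays, with the appropriate element type and sentinel.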
Group name of the codec implementation. This is a short symbolic name of the wrapper backing this codec. A wrapper uses some kind of external implementation for the codec, such as an external library, or a codec implementation provided by the OS or the hardware. If this field is NULL, this is a builtin, libavcodec native codec. If non-NULL, this will be the suffix in AVCodec.name in most cases (usually AVCodec.name will be of the form "<codec_name>_<wrapper_name>").
Array of supported channel layouts, terminated with a zeroed layout.
AVProfile.
short name for the profile
Name of the hardware accelerated codec. The name is globally unique among encoders and among decoders (but an encoder and a decoder can share the same name).
Type of codec implemented by the hardware accelerator.
Codec implemented by the hardware accelerator.
Supported pixel format.
Hardware accelerated codec capabilities. see AV_HWACCEL_CODEC_CAP_*
Allocate a custom buffer
Called at the beginning of each frame or field picture.
Callback for parameter data (SPS/PPS/VPS etc).
Callback for each slice.
Called at the end of each frame or field picture.
Size of per-frame hardware accelerator private data.
Initialize the hwaccel private data.
Uninitialize the hwaccel private data.
Size of the private data to allocate in AVCodecInternal.hwaccel_priv_data.
Internal hwaccel capabilities.
Fill the given hw_frames context with current codec parameters. Called from get_format. Refer to avcodec_get_hw_frames_parameters() for details.
This struct describes the properties of a single codec described by an AVCodecID.
Name of the codec described by this descriptor. It is non-empty and unique for each codec descriptor. It should contain alphanumeric characters and '_' only.
A more descriptive name for this codec. May be NULL.
Codec properties, a combination of AV_CODEC_PROP_* flags.
MIME type(s) associated with the codec. May be NULL; if not, a NULL-terminated array of MIME types. The first item is always non-NULL and is the preferred MIME type.
If non-NULL, an array of profiles recognized for this codec. Terminated with FF_PROFILE_UNKNOWN.
This structure stores compressed data. It is typically exported by demuxers and then passed as input to decoders, or received as output from encoders and then passed to muxers.
A reference to the reference-counted buffer where the packet data is stored. May be NULL, then the packet data is not reference-counted.
Presentation timestamp in AVStream->time_base units; the time at which the decompressed packet will be presented to the user. Can be AV_NOPTS_VALUE if it is not stored in the file. pts MUST be larger or equal to dts as presentation cannot happen before decompression, unless one wants to view hex dumps. Some formats misuse the terms dts and pts/cts to mean something different. Such timestamps must be converted to true pts/dts before they are stored in AVPacket.
Decompression timestamp in AVStream->time_base units; the time at which the packet is decompressed. Can be AV_NOPTS_VALUE if it is not stored in the file.
A combination of AV_PKT_FLAG values
Additional packet data that can be provided by the container. Packet can contain several types of side information.
Duration of this packet in AVStream->time_base units, 0 if unknown. Equals next_pts - this_pts in presentation order.
byte position in stream, -1 if unknown
for some private data of the user
AVBufferRef for free use by the API user. FFmpeg will never check the contents of the buffer ref. FFmpeg calls av_buffer_unref() on it when the packet is unreferenced. av_packet_copy_props() calls create a new reference with av_buffer_ref() for the target packet's opaque_ref field.
Time base of the packet's timestamps. In the future, this field may be set on packets output by encoders or demuxers, but its value will be by default ignored on input to decoders or muxers.
top left corner of pict, undefined when pict is not set
top left corner of pict, undefined when pict is not set
width of pict, undefined when pict is not set
height of pict, undefined when pict is not set
number of colors in pict, undefined when pict is not set
data+linesize for the bitmap of this subtitle. Can be set for text/ass as well once they are rendered.
0 terminated plain UTF-8 text
0 terminated ASS/SSA compatible event line. The presentation of this is unaffected by the other values in this struct.
Same as packet pts, in AV_TIME_BASE
This field is used for proper frame duration computation in lavf. It signals how much longer the frame duration of the current frame is compared to the normal frame duration.
byte offset from the start of the packet
Set by parser to 1 for key frames and 0 for non-key frames. It is initialized to -1, so if the parser doesn't set this flag, the old-style fallback of treating AV_PICTURE_TYPE_I pictures as key frames will be used.
Synchronization point for start of timestamp generation.
Offset of the current timestamp against last timestamp sync point in units of AVCodecContext.time_base.
Presentation delay of current frame in units of AVCodecContext.time_base.
Position of the packet in file.
Byte position of currently parsed frame in stream.
Previous frame byte position.
Duration of the current frame. For audio, this is in units of 1 / AVCodecContext.sample_rate. For all other types, this is in units of AVCodecContext.time_base.
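For the audio case above, a duration in units of 1 / AVCodecContext.sample_rate is simply a sample count, so converting it to wall-clock time is a single division. A minimal sketch in plain C (the helper name is illustrative, not an FFmpeg API):

```c
#include <stdint.h>

/* Convert an audio frame duration given in 1/sample_rate units
 * (i.e. a sample count) to microseconds. Truncates toward zero. */
static int64_t audio_duration_us(int64_t nb_samples, int sample_rate)
{
    return nb_samples * 1000000 / sample_rate;
}
```

For instance, a typical 1024-sample AAC frame at 48000 Hz lasts 21333 microseconds (about 21.3 ms).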
Indicate whether a picture is coded as a frame, top field or bottom field.
Picture number incremented in presentation or output order. This field may be reinitialized at the first picture of a new sequence.
Dimensions of the decoded video intended for presentation.
Dimensions of the coded video.
The format of the coded data, corresponds to enum AVPixelFormat for video and to enum AVSampleFormat for audio.
This struct describes the properties of an encoded stream.
General type of the encoded data.
Specific type of the encoded data (the codec used).
Additional information about the codec (corresponds to the AVI FOURCC).
Extra binary data needed for initializing the decoder, codec-dependent.
Size of the extradata content in bytes.
- video: the pixel format, the value corresponds to enum AVPixelFormat. - audio: the sample format, the value corresponds to enum AVSampleFormat.
The average bitrate of the encoded data (in bits per second).
The number of bits per sample in the coded words.
This is the number of valid bits in each output sample. If the sample format has more bits, the least significant bits are additional padding bits, which are always 0. Use right shifts to reduce the sample to its actual size. For example, audio formats with 24 bit samples will have bits_per_raw_sample set to 24, and format set to AV_SAMPLE_FMT_S32. To get the original sample use "(int32_t)sample >> 8".
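The shift described above can be wrapped in a small plain-C helper (illustrative only, not an FFmpeg API); the arithmetic right shift drops the 8 zero padding bits while preserving the sign of the 24-bit sample:

```c
#include <stdint.h>

/* Recover the original 24-bit sample from an AV_SAMPLE_FMT_S32 value
 * when bits_per_raw_sample is 24: the low 8 bits are zero padding, so
 * an arithmetic right shift removes them and keeps the sign. */
static int32_t raw_sample_24(int32_t s32_sample)
{
    return s32_sample >> 8;
}
```

So a stored value of 0x12345600 yields the 24-bit sample 0x123456, and -256 (the 24-bit value -1 padded to 32 bits) yields -1.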
Codec-specific bitstream restrictions that the stream conforms to.
Video only. The dimensions of the video frame in pixels.
Video only. The aspect ratio (width / height) which a single pixel should have when displayed.
Video only. The order of the fields in interlaced video.
Video only. Additional colorspace characteristics.
Video only. Number of delayed frames.
Audio only. The channel layout bitmask. May be 0 if the channel layout is unknown or unspecified, otherwise the number of bits set must be equal to the channels field.
Audio only. The number of audio channels.
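The consistency rule stated above — a non-zero channel_layout must have exactly as many set bits as the channels field — can be checked with a popcount. A minimal sketch in plain C (the helper is illustrative, not an FFmpeg API):

```c
#include <stdint.h>

/* Check the invariant described above: when channel_layout is non-zero,
 * the number of set bits must equal the channels field. A layout of 0
 * (unknown/unspecified) is always acceptable. */
static int layout_matches_channels(uint64_t channel_layout, int channels)
{
    int bits = 0;
    for (uint64_t l = channel_layout; l; l >>= 1)
        bits += (int)(l & 1);
    return channel_layout == 0 || bits == channels;
}
```

For example, a stereo layout of 0x3 (two bits set) is consistent with channels == 2 but not with channels == 1.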
Audio only. The number of audio samples per second.
Audio only. The number of bytes per coded audio frame, required by some formats.
Audio only. Audio frame size, if known. Required by some formats to be static.
Audio only. The amount of padding (in samples) inserted by the encoder at the beginning of the audio. I.e. this number of leading decoded samples must be discarded by the caller to get the original audio without leading padding.
Audio only. The amount of padding (in samples) appended by the encoder to the end of the audio. I.e. this number of decoded samples must be discarded by the caller from the end of the stream to get the original audio without any trailing padding.
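Taken together, the two padding fields above tell the caller which slice of the decoded sample stream is the original audio: skip the initial padding at the start and drop the trailing padding at the end. A minimal sketch in plain C (the helper is illustrative, not an FFmpeg API):

```c
/* Trim encoder padding as described above. Returns the number of valid
 * samples; *offset receives the index where they begin. */
static long trim_padding(long total_decoded, long initial_padding,
                         long trailing_padding, long *offset)
{
    *offset = initial_padding;
    return total_decoded - initial_padding - trailing_padding;
}
```

For example, a 48000-sample clip decoded with 1024 leading and 512 trailing padding samples yields 49536 decoded samples, of which samples [1024, 49024) are the original audio.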
Audio only. Number of samples to skip after a discontinuity.
Audio only. The channel layout and number of channels.
For decoders, a hardware pixel format which that decoder may be able to decode to if suitable hardware is available.
Bit set of AV_CODEC_HW_CONFIG_METHOD_* flags, describing the possible setup methods which can be used with this configuration.
The device type associated with the configuration.
Pan Scan area. This specifies the area which should be displayed. Note there may be multiple such areas for one frame.
id - encoding: Set by user. - decoding: Set by libavcodec.
width and height in 1/16 pel - encoding: Set by user. - decoding: Set by libavcodec.
position of the top left corner in 1/16 pel for up to 3 fields/frames - encoding: Set by user. - decoding: Set by libavcodec.
This structure describes the bitrate properties of an encoded bitstream. It roughly corresponds to a subset of the VBV parameters for MPEG-2 or HRD parameters for H.264/HEVC.
Maximum bitrate of the stream, in bits per second. Zero if unknown or unspecified.
Minimum bitrate of the stream, in bits per second. Zero if unknown or unspecified.
Average bitrate of the stream, in bits per second. Zero if unknown or unspecified.
The size of the buffer to which the ratecontrol is applied, in bits. Zero if unknown or unspecified.
The delay between the time the packet this structure is associated with is received and the time when it should be decoded, in periods of a 27MHz clock.
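Since the delay above is counted in periods of a 27 MHz clock, there are exactly 27 periods per microsecond, so converting it to wall-clock time is one division. A minimal sketch in plain C (the helper is illustrative, not an FFmpeg API):

```c
#include <stdint.h>

/* Convert a delay expressed in periods of a 27 MHz clock to
 * microseconds: 27,000,000 periods per second = 27 per microsecond. */
static int64_t clock27mhz_to_us(int64_t periods)
{
    return periods / 27;
}
```

So a delay of 13,500,000 clock periods corresponds to 500,000 microseconds (half a second).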
This structure supplies correlation between a packet timestamp and a wall clock production time. The definition follows the Producer Reference Time ('prft') as defined in ISO/IEC 14496-12
A UTC timestamp, in microseconds, since Unix epoch (e.g., av_gettime()).
This structure is used to provide the necessary configurations and data to the Direct3D11 FFmpeg HWAccel implementation.
D3D11 decoder object
D3D11 VideoContext
D3D11 configuration used to create the decoder
The number of surfaces in the surface array
The array of Direct3D surfaces used to create the decoder
A bit field configuring the workarounds needed for using the decoder
Private to the FFmpeg AVHWAccel implementation
Mutex to access video_context
This structure contains the data a format has to probe a file.
Buffer must have AVPROBE_PADDING_SIZE of extra allocated bytes filled with zero.
Size of buf, excluding the extra allocated bytes
mime_type, when known.
Stream structure. New fields can be added to the end with minor version bumps. Removal, reordering and changes to existing fields require a major version bump. sizeof(AVStream) must not be used outside libav*.
stream index in AVFormatContext
Format-specific stream ID. decoding: set by libavformat encoding: set by the user, replaced by libavformat if left unset
This is the fundamental unit of time (in seconds) in terms of which frame timestamps are represented.
Decoding: pts of the first frame of the stream in presentation order, in stream time base. Only set this if you are absolutely 100% sure that the value you set it to really is the pts of the first frame. This may be undefined (AV_NOPTS_VALUE).
Decoding: duration of the stream, in stream time base. If a source file does not specify a duration, but does specify a bitrate, this value will be estimated from bitrate and file size.
number of frames in this stream if known or 0
Stream disposition - a combination of AV_DISPOSITION_* flags. - demuxing: set by libavformat when creating the stream or in avformat_find_stream_info(). - muxing: may be set by the caller before avformat_write_header().
Selects which packets can be discarded at will and do not need to be demuxed.
sample aspect ratio (0 if unknown) - encoding: Set by user. - decoding: Set by libavformat.
Average framerate
For streams with AV_DISPOSITION_ATTACHED_PIC disposition, this packet will contain the attached picture.
An array of side data that applies to the whole stream (i.e. the container does not allow it to change between packets).
The number of elements in the AVStream.side_data array.
Flags indicating events happening on the stream, a combination of AVSTREAM_EVENT_FLAG_*.
Real base framerate of the stream. This is the lowest framerate with which all timestamps can be represented accurately (it is the least common multiple of all framerates in the stream). Note, this value is just a guess! For example, if the time base is 1/90000 and all frames have either approximately 3600 or 1800 timer ticks, then r_frame_rate will be 50/1.
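The least-common-multiple idea above can be made concrete: with a 1/90000 time base, tick deltas of 3600 and 1800 correspond to frame rates of 25 and 50 fps, and the lowest rate representing both exactly is lcm(25, 50) = 50, matching the 50/1 guess in the example. A minimal sketch in plain C for the integer-rate case (real streams have rational rates; the helpers are illustrative, not FFmpeg API):

```c
#include <stdint.h>

static int64_t gcd64(int64_t a, int64_t b)
{
    while (b) { int64_t t = a % b; a = b; b = t; }
    return a;
}

/* Least common multiple of two integer frame rates: the lowest rate
 * with which timestamps of both can be represented exactly. */
static int64_t lcm64(int64_t a, int64_t b)
{
    return a / gcd64(a, b) * b;
}
```

For instance, mixing 24 fps and 30 fps material would require a 120 fps base rate.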
Codec parameters associated with this stream. Allocated and freed by libavformat in avformat_new_stream() and avformat_free_context() respectively.
Number of bits in timestamps. Used for wrapping control.
New fields can be added to the end with minor version bumps. Removal, reordering and changes to existing fields require a major version bump. sizeof(AVProgram) must not be used outside libav*.
selects which program to discard and which to feed to the caller
*************************************************************** All fields below this line are not part of the public API. They may not be used outside of libavformat and can be changed and removed at will. New public fields should be added right above. ****************************************************************
reference dts for wrap detection
behavior on wrap detection
unique ID to identify the chapter
time base in which the start/end timestamps are specified
chapter start/end time in time_base units
chapter start/end time in time_base units
@{
Descriptive name for the format, meant to be more human-readable than name. You should use the NULL_IF_CONFIG_SMALL() macro to define it.
comma-separated filename extensions
default audio codec
default video codec
default subtitle codec
can use flags: AVFMT_NOFILE, AVFMT_NEEDNUMBER, AVFMT_GLOBALHEADER, AVFMT_NOTIMESTAMPS, AVFMT_VARIABLE_FPS, AVFMT_NODIMENSIONS, AVFMT_NOSTREAMS, AVFMT_ALLOW_FLUSH, AVFMT_TS_NONSTRICT, AVFMT_TS_NEGATIVE
List of supported codec_id-codec_tag pairs, ordered by "better choice first". The arrays are all terminated by AV_CODEC_ID_NONE.
AVClass for the private context
*************************************************************** No fields below this line are part of the public API. They may not be used outside of libavformat and can be changed and removed at will. New public fields should be added right above. ****************************************************************
Internal flags. See FF_FMT_FLAG_* in internal.h.
Write a packet. If AVFMT_ALLOW_FLUSH is set in flags, pkt can be NULL in order to flush data buffered in the muxer. When flushing, return 0 if there still is more data to flush, or 1 if everything was flushed and there is no more buffered data.
A format-specific function for interleaving packets. If unset, packets will be interleaved by dts.
Test if the given codec can be stored in this container.
Allows sending messages from application to device.
Write an uncoded AVFrame.
Returns the device list with its properties.
default data codec
Initialize format. May allocate data here, and set any AVFormatContext or AVStream parameters that need to be set before packets are sent. This method must not write output.
Deinitialize format. If present, this is called whenever the muxer is being destroyed, regardless of whether or not the header has been written.
Set up any necessary bitstream filtering and extract any extra data needed for the global header.
Format I/O context. New fields can be added to the end with minor version bumps. Removal, reordering and changes to existing fields require a major version bump. sizeof(AVFormatContext) must not be used outside libav*, use avformat_alloc_context() to create an AVFormatContext.
A class for logging and avoptions. Set by avformat_alloc_context(). Exports (de)muxer private options if they exist.
The input container format.
The output container format.
Format private data. This is an AVOptions-enabled struct if and only if iformat/oformat.priv_class is not NULL.
I/O context.
Flags signalling stream properties. A combination of AVFMTCTX_*. Set by libavformat.
Number of elements in AVFormatContext.streams.
A list of all streams in the file. New streams are created with avformat_new_stream().
input or output URL. Unlike the old filename field, this field has no length restriction.
Position of the first frame of the component, in AV_TIME_BASE fractional seconds. NEVER set this value directly: It is deduced from the AVStream values.
Duration of the stream, in AV_TIME_BASE fractional seconds. Only set this value if you know none of the individual stream durations and also do not set any of them. This is deduced from the AVStream values if not set.
Total stream bitrate in bit/s, 0 if not available. Never set it directly if the file_size and the duration are known as FFmpeg can compute it automatically.
Flags modifying the (de)muxer behaviour. A combination of AVFMT_FLAG_*. Set by the user before avformat_open_input() / avformat_write_header().
Maximum number of bytes read from input in order to determine stream properties. Used when reading the global header and in avformat_find_stream_info().
Maximum duration (in AV_TIME_BASE units) of the data read from input in avformat_find_stream_info(). Demuxing only, set by the caller before avformat_find_stream_info(). Can be set to 0 to let avformat choose using a heuristic.
Forced video codec_id. Demuxing: Set by user.
Forced audio codec_id. Demuxing: Set by user.
Forced subtitle codec_id. Demuxing: Set by user.
Maximum amount of memory in bytes to use for the index of each stream. If the index exceeds this size, entries will be discarded as needed to maintain a smaller size. This can lead to slower or less accurate seeking (depends on demuxer). Demuxers for which a full in-memory index is mandatory will ignore this. - muxing: unused - demuxing: set by user
Maximum amount of memory in bytes to use for buffering frames obtained from realtime capture devices.
Number of chapters in AVChapter array. When muxing, chapters are normally written in the file header, so nb_chapters should normally be initialized before write_header is called. Some muxers (e.g. mov and mkv) can also write chapters in the trailer. To write chapters in the trailer, nb_chapters must be zero when write_header is called and non-zero when write_trailer is called. - muxing: set by user - demuxing: set by libavformat
Metadata that applies to the whole file.
Start time of the stream in real world time, in microseconds since the Unix epoch (00:00 1st January 1970). That is, pts=0 in the stream was captured at this real world time. - muxing: Set by the caller before avformat_write_header(). If set to either 0 or AV_NOPTS_VALUE, then the current wall-time will be used. - demuxing: Set by libavformat. AV_NOPTS_VALUE if unknown. Note that the value may become known after some number of frames have been received.
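Because pts = 0 maps to start_time_realtime, the wall-clock time of any frame is that anchor plus the pts converted to microseconds via the stream time base. A minimal sketch in plain C (the helper is illustrative, not an FFmpeg API; real code would use av_rescale_q() for the conversion):

```c
#include <stdint.h>

/* Wall-clock time of a frame: start_time_realtime (microseconds since
 * the Unix epoch, corresponding to pts = 0) plus the pts rescaled to
 * microseconds through the stream time base tb_num/tb_den. */
static int64_t frame_realtime_us(int64_t start_time_realtime, int64_t pts,
                                 int tb_num, int tb_den)
{
    return start_time_realtime + pts * 1000000 * tb_num / tb_den;
}
```

For example, with a 1/90000 time base, a pts of 90000 lands exactly one second after the capture start.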
The number of frames used for determining the framerate in avformat_find_stream_info(). Demuxing only, set by the caller before avformat_find_stream_info().
Error recognition; higher values will detect more errors but may misdetect some more or less valid parts as errors. Demuxing only, set by the caller before avformat_open_input().
Custom interrupt callbacks for the I/O layer.
Flags to enable debugging.
Maximum buffering duration for interleaving.
Allow non-standard and experimental extensions.
Flags indicating events happening on the file, a combination of AVFMT_EVENT_FLAG_*.
Maximum number of packets to read while waiting for the first timestamp. Decoding only.
Avoid negative timestamps during muxing. Any value of the AVFMT_AVOID_NEG_TS_* constants. Note, this works better when using av_interleaved_write_frame(). - muxing: Set by user - demuxing: unused
Transport stream id. This will be moved into demuxer private options, so there is no API/ABI compatibility guarantee.
Audio preload in microseconds. Note, not all formats support this and unpredictable things may happen if it is used when not supported. - encoding: Set by user - decoding: unused
Max chunk time in microseconds. Note, not all formats support this and unpredictable things may happen if it is used when not supported. - encoding: Set by user - decoding: unused
Max chunk size in bytes. Note, not all formats support this and unpredictable things may happen if it is used when not supported. - encoding: Set by user - decoding: unused
Forces the use of wallclock timestamps as pts/dts of packets. This has undefined results in the presence of B frames. - encoding: unused - decoding: Set by user
avio flags, used to force AVIO_FLAG_DIRECT. - encoding: unused - decoding: Set by user
The duration field can be estimated through various ways, and this field can be used to know how the duration was estimated. - encoding: unused - decoding: Read by user
Skip initial bytes when opening stream - encoding: unused - decoding: Set by user
Correct single timestamp overflows - encoding: unused - decoding: Set by user
Force seeking to any (also non key) frames. - encoding: unused - decoding: Set by user
Flush the I/O context after each packet. - encoding: Set by user - decoding: unused
Format probing score. The maximal score is AVPROBE_SCORE_MAX; it is set when the demuxer probes the format. - encoding: unused - decoding: set by avformat, read by user
Maximum number of bytes read from input in order to identify the AVInputFormat "input format". Only used when the format is not set explicitly by the caller.
',' separated list of allowed decoders. If NULL then all are allowed - encoding: unused - decoding: set by user
',' separated list of allowed demuxers. If NULL then all are allowed - encoding: unused - decoding: set by user
IO repositioned flag. This is set by avformat when the underlying IO context read pointer is repositioned, for example when doing byte based seeking. Demuxers can use the flag to detect such changes.
Forced video codec. This allows forcing a specific decoder, even when there are multiple with the same codec_id. Demuxing: Set by user
Forced audio codec. This allows forcing a specific decoder, even when there are multiple with the same codec_id. Demuxing: Set by user
Forced subtitle codec. This allows forcing a specific decoder, even when there are multiple with the same codec_id. Demuxing: Set by user
Forced data codec. This allows forcing a specific decoder, even when there are multiple with the same codec_id. Demuxing: Set by user
Number of bytes to be written as padding in a metadata header. Demuxing: Unused. Muxing: Set by user via av_format_set_metadata_header_padding.
User data. This is a place for some private data of the user.
Callback used by devices to communicate with application.
Output timestamp offset, in microseconds. Muxing: set by user
Dump format separator. Can be ", " or " " or anything else. - muxing: Set by user. - demuxing: Set by user.
Forced Data codec_id. Demuxing: Set by user.
',' separated list of allowed protocols. - encoding: unused - decoding: set by user
A callback for opening new IO streams.
A callback for closing the streams opened with AVFormatContext.io_open().
',' separated list of disallowed protocols. - encoding: unused - decoding: set by user
The maximum number of streams. - encoding: unused - decoding: set by user
Skip duration calculation in estimate_timings_from_pts. - encoding: unused - decoding: set by user
Maximum number of packets that can be probed - encoding: unused - decoding: set by user
A callback for closing the streams opened with AVFormatContext.io_open().
@{
A comma separated list of short names for the format. New names may be appended with a minor bump.
Descriptive name for the format, meant to be more human-readable than name. You should use the NULL_IF_CONFIG_SMALL() macro to define it.
Can use flags: AVFMT_NOFILE, AVFMT_NEEDNUMBER, AVFMT_SHOW_IDS, AVFMT_NOTIMESTAMPS, AVFMT_GENERIC_INDEX, AVFMT_TS_DISCONT, AVFMT_NOBINSEARCH, AVFMT_NOGENSEARCH, AVFMT_NO_BYTE_SEEK, AVFMT_SEEK_TO_PTS.
If extensions are defined, then no probe is done. You should usually not use extension format guessing because it is not reliable enough
AVClass for the private context
Comma-separated list of mime types. It is used to check for matching mime types while probing.
*************************************************************** No fields below this line are part of the public API. They may not be used outside of libavformat and can be changed and removed at will. New public fields should be added right above. ****************************************************************
Size of private data so that it can be allocated in the wrapper.
Internal flags. See FF_FMT_FLAG_* in internal.h.
Tell if a given file has a chance of being parsed as this format. The buffer provided is guaranteed to be AVPROBE_PADDING_SIZE bytes big so you do not have to check for that unless you need more.
Read the format header and initialize the AVFormatContext structure. Return 0 if OK. 'avformat_new_stream' should be called to create new streams.
Read one packet and put it in 'pkt'. pts and flags are also set. 'avformat_new_stream' can be called only if the flag AVFMTCTX_NOHEADER is used and only in the calling thread (not in a background thread).
Close the stream. The AVFormatContext and AVStreams are not freed by this function
Seek to a given timestamp relative to the frames in stream component stream_index.
Get the next timestamp in stream[stream_index].time_base units.
Start/resume playing - only meaningful if using a network-based format (RTSP).
Pause playing - only meaningful if using a network-based format (RTSP).
Seek to timestamp ts. Seeking will be done so that the point from which all active streams can be presented successfully will be closest to ts and within min/max_ts. Active streams are all streams that have AVStream.discard < AVDISCARD_ALL.
Returns the device list with its properties.
List of devices.
list of autodetected devices
number of autodetected devices
index of default device or -1 if no default
Bytestream IO Context. New public fields can be added with minor version bumps. Removal, reordering and changes to existing public fields require a major version bump. sizeof(AVIOContext) must not be used outside libav*.
A class for private options.
Start of the buffer.
Maximum buffer size
Current position in the buffer
End of the data, may be less than buffer+buffer_size if the read function returned less data than requested, e.g. for streams where no more data has been received yet.
A private pointer, passed to the read/write/seek/... functions.
position in the file of the current buffer
true if unable to read due to error or eof
contains the error code or 0 if no error happened
true if open for writing
Try to buffer at least this amount of data before flushing it.
Pause or resume playback for network streaming protocols - e.g. MMS.
Seek to a given timestamp in stream with the specified stream_index. Needed for some network streaming protocols which don't support seeking to byte position.
A combination of AVIO_SEEKABLE_ flags or 0 when the stream is not seekable.
avio_read and avio_write should, if possible, be satisfied directly instead of going through a buffer, and avio_seek will always call the underlying seek function directly.
',' separated list of allowed protocols.
',' separated list of disallowed protocols.
A callback that is used instead of write_packet.
If set, don't call write_data_type separately for AVIO_DATA_MARKER_BOUNDARY_POINT, but ignore them and treat them as AVIO_DATA_MARKER_UNKNOWN (to avoid needlessly small chunks of data returned from the callback).
Maximum reached position before a backward seek in the write buffer, used to keep track of already written data for a later flush.
Read-only statistic of bytes read for this AVIOContext.
Read-only statistic of bytes written for this AVIOContext.
Callback for checking whether to abort blocking functions. AVERROR_EXIT is returned in this case by the interrupted function. During blocking operations, callback is called with opaque as parameter. If the callback returns 1, the blocking operation will be aborted.
Timestamp in AVStream.time_base units, preferably the time from which on correctly decoded frames are available when seeking to this entry. That means preferably the PTS of a keyframe in keyframe-based formats. But demuxers can choose to store a different timestamp if it is more convenient for the implementation or if nothing better is known.
Flag is used to indicate which frame should be discarded after decoding.
Minimum distance between this and the previous keyframe, used to avoid unneeded searching.
Describes a single entry of the directory.
Filename
Type of the entry
Set to 1 when name is encoded with UTF-8, 0 otherwise. Name can be encoded with UTF-8 even though 0 is set.
File size in bytes, -1 if unknown.
Time of last modification in microseconds since unix epoch, -1 if unknown.
Time of last access in microseconds since unix epoch, -1 if unknown.
Time of last status change in microseconds since unix epoch, -1 if unknown.
User ID of owner, -1 if unknown.
Group ID of owner, -1 if unknown.
Unix file mode, -1 if unknown.
An instance of a filter
needed for av_log() and filters common options
the AVFilter of which this is an instance
name of this filter instance
array of input pads
array of pointers to input links
number of input pads
array of output pads
array of pointers to output links
number of output pads
private data for use by the filter
filtergraph this filter belongs to
Type of multithreading being allowed/used. A combination of AVFILTER_THREAD_* flags.
An opaque struct for libavfilter internal use.
enable expression string
parsed expression (AVExpr*)
variable values for the enable expression
the enabled state from the last expression evaluation
For filters which will create hardware frames, sets the device the filter should create them in. All other filters will ignore this field: in particular, a filter which consumes or processes hardware frames will instead use the hw_frames_ctx field in AVFilterLink to carry the hardware context information.
Max number of threads allowed in this filter instance. If <= 0, its value is ignored. Overrides global number of threads set per filter graph.
Ready status of the filter. A non-0 value means that the filter needs activating; a higher value suggests a more urgent activation.
Sets the number of extra hardware frames which the filter will allocate on its output links for use in following filters or by the caller.
Filter definition. This defines the pads a filter contains, and all the callback functions used to interact with the filter.
Filter name. Must be non-NULL and unique among filters.
A description of the filter. May be NULL.
List of static inputs.
List of static outputs.
A class for the private data, used to declare filter private AVOptions. This field is NULL for filters that do not declare any options.
A combination of AVFILTER_FLAG_*
The number of entries in the list of inputs.
The number of entries in the list of outputs.
This field determines the state of the formats union. It is an enum FilterFormatsState value.
Filter pre-initialization function
Filter initialization function.
Should be set instead of AVFilter.init "init" by the filters that want to pass a dictionary of AVOptions to nested contexts that are allocated during init.
Filter uninitialization function.
size of private data to allocate for the filter
Additional flags for avfilter internal use only.
Make the filter instance process a command.
Filter activation function.
The state of the following union is determined by formats_state. See the documentation of enum FilterFormatsState in internal.h.
Query formats supported by the filter on its inputs and outputs.
A pointer to an array of admissible pixel formats delimited by AV_PIX_FMT_NONE. The generic code will use this list to indicate that this filter supports each of these pixel formats, provided that all inputs and outputs use the same pixel format.
Analogous to pixels, but delimited by AV_SAMPLE_FMT_NONE and restricted to filters that only have AVMEDIA_TYPE_AUDIO inputs and outputs.
Equivalent to { pix_fmt, AV_PIX_FMT_NONE } as pixels_list.
Equivalent to { sample_fmt, AV_SAMPLE_FMT_NONE } as samples_list.
A link between two filters. This contains pointers to the source and destination filters between which this link exists, and the indexes of the pads involved. In addition, this link also contains the parameters which have been negotiated and agreed upon between the filters, such as image dimensions, format, etc.
source filter
output pad on the source filter
dest filter
input pad on the dest filter
filter media type
agreed upon image width
agreed upon image height
agreed upon sample aspect ratio
channel layout of current buffer (see libavutil/channel_layout.h)
samples per second
agreed upon media format
Define the time base used by the PTS of the frames/samples which will pass through this link. During the configuration stage, each filter is supposed to change only the output timebase, while the timebase of the input link is assumed to be an unchangeable property.
channel layout of current buffer (see libavutil/channel_layout.h)
Lists of supported formats / etc. supported by the input filter.
Lists of supported formats / etc. supported by the output filter.
Graph the filter belongs to.
Current timestamp of the link, as defined by the most recent frame(s), in link time_base units.
Current timestamp of the link, as defined by the most recent frame(s), in AV_TIME_BASE units.
Index in the age array.
Frame rate of the stream on the link, or 1/0 if unknown or variable; if left to 0/0, will be automatically copied from the first input of the source filter if it exists.
Minimum number of samples to filter at once. If filter_frame() is called with fewer samples, it will accumulate them in fifo. This field and the related ones must not be changed after filtering has started. If 0, all related fields are ignored.
Maximum number of samples to filter at once. If filter_frame() is called with more samples, it will split them.
Number of past frames sent through the link.
Number of past frames sent through the link.
Number of past samples sent through the link.
Number of past samples sent through the link.
A pointer to a FFFramePool struct.
True if a frame is currently wanted on the output of this filter. Set when ff_request_frame() is called by the output, cleared when a frame is filtered.
For hwaccel pixel formats, this should be a reference to the AVHWFramesContext describing the frames.
Internal structure members. The fields below this limit are internal for libavfilter's use and must in no way be accessed by applications.
Lists of formats / etc. supported by an end of a link.
List of supported formats (pixel or sample).
Lists of supported sample rates, only for audio.
Lists of supported channel layouts, only for audio.
sws options to use for the auto-inserted scale filters
Type of multithreading allowed for filters in this graph. A combination of AVFILTER_THREAD_* flags.
Maximum number of threads used by filters in this graph. May be set by the caller before adding any filters to the filtergraph. Zero (the default) means that the number of threads is determined automatically.
Opaque object for libavfilter internal use.
Opaque user data. May be set by the caller to an arbitrary value, e.g. to be used from callbacks like AVFilterGraph.execute. Libavfilter will not touch this field in any way.
This callback may be set by the caller immediately after allocating the graph and before adding any filters to it, to provide a custom multithreading implementation.
swr options to use for the auto-inserted aresample filters. Access ONLY through AVOptions.
Private fields
A linked-list of the inputs/outputs of the filter chain.
unique name for this input/output in the list
filter context associated to this input/output
index of the filt_ctx pad to use for linking
next input/output in the list, NULL if this is the last
This structure contains the parameters describing the frames that will be passed to this filter.
video: the pixel format, value corresponds to enum AVPixelFormat audio: the sample format, value corresponds to enum AVSampleFormat
The timebase to be used for the timestamps on the input frames.
Video only, the display dimensions of the input frames.
Video only, the display dimensions of the input frames.
Video only, the sample (pixel) aspect ratio.
Video only, the frame rate of the input video. This field must only be set to a non-zero value if the input stream has a known constant frame rate; it should be left at its initial value if the frame rate is variable or unknown.
Video with a hwaccel pixel format only. This should be a reference to an AVHWFramesContext instance describing the input frames.
Audio only, the audio sampling rate in samples per second.
Audio only, the audio channel layout
Audio only, the audio channel layout
Deprecated and unused struct to use for initializing a buffersink context.
list of allowed pixel formats, terminated by AV_PIX_FMT_NONE
Deprecated and unused struct to use for initializing an abuffersink context.
list of allowed sample formats, terminated by AV_SAMPLE_FMT_NONE
list of allowed channel layouts, terminated by -1
list of allowed channel counts, terminated by -1
if not 0, accept any channel count or layout
list of allowed sample rates, terminated by -1
Structure describing the basic parameters of a device.
device name, format depends on device
human friendly name
array indicating which media type(s), if any, a device can provide; if null, the device cannot provide any
length of media_types array, 0 if device cannot provide any media types
x coordinate of top left corner
y coordinate of top left corner
width
height
Structure describes device capabilities.
Context for an Audio FIFO Buffer.
This struct is incomplete.
This struct is incomplete.
This struct is incomplete.
A reference counted buffer type. It is opaque and is meant to be used through references (AVBufferRef).
This struct is incomplete.
The buffer pool. This structure is opaque and not meant to be accessed directly. It is allocated with av_buffer_pool_init() and freed with av_buffer_pool_uninit().
This struct is incomplete.
Low-complexity tree container
This struct is incomplete.
This struct is incomplete.
This struct is incomplete.
The libswresample context. Unlike libavcodec and libavformat, this structure is opaque. This means that if you would like to set options, you must use the AVOptions API and cannot directly set values to members of the structure.
This struct is incomplete.
This struct is incomplete.
This struct is incomplete.
This struct is incomplete.
This struct is incomplete.
This struct is incomplete.
This struct is incomplete.
This struct is incomplete.
This struct is incomplete.
This struct is incomplete.
This struct is incomplete.
Supports loading functions from native libraries. Provides a more flexible alternative to P/Invoke.
Creates a delegate which invokes a native function.
The function delegate.
The native library which contains the function.
The name of the function for which to create the delegate.
A new delegate which points to the native function.
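The same idea, typed access to a native function through a declared signature, can be sketched with Python's ctypes (an illustrative analogue, not the FFmpeg.AutoGen implementation):

```python
import ctypes

# Handle to the already-loaded C runtime (POSIX; on Windows use
# ctypes.CDLL("msvcrt") instead).
libc = ctypes.CDLL(None)

# Declare the native signature up front, much like defining a delegate type,
# so calls are marshalled with the correct argument and return types.
libc.strlen.restype = ctypes.c_size_t
libc.strlen.argtypes = [ctypes.c_char_p]

print(libc.strlen(b"ffmpeg"))  # 6
```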
Attempts to load a native library using the platform naming convention.
Path of the library.
Name of the library.
Version of the library.
A handle to the library when found; otherwise, a null handle.
This function may return a null handle. If it does, individual functions loaded from it will throw a DllNotFoundException, but not until an attempt is made to actually use the function (rather than merely load it). This matches how P/Invokes behave.
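Resolving a name and version to a platform-specific file name is the "platform naming convention" mentioned above. A hedged sketch of such a mapping (`library_name` is a hypothetical helper; the exact patterns the library uses are not shown here):

```python
import sys

def library_name(name, version):
    # Hypothetical helper mapping a bare library name and version to the
    # conventional shared-library file name on each platform.
    if sys.platform.startswith("win"):
        return f"{name}-{version}.dll"        # e.g. avcodec-60.dll
    if sys.platform == "darwin":
        return f"lib{name}.{version}.dylib"   # e.g. libavcodec.60.dylib
    return f"lib{name}.so.{version}"          # e.g. libavcodec.so.60
```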
Attempts to load a native library.
Name of the library.
A handle to the library when found; otherwise, a null handle.
This function may return a null handle. If it does, individual functions loaded from it will throw a DllNotFoundException, but not until an attempt is made to actually use the function (rather than merely load it). This matches how P/Invokes behave.
Loads the specified module into the address space of the calling process. The specified module may cause other modules to be loaded.
The name of the module. This can be either a library module (a .dll file) or an executable module (an .exe file).
The name specified is the file name of the module and is not related to the name stored in the library module itself,
as specified by the LIBRARY keyword in the module-definition (.def) file.
If the string specifies a full path, the function searches only that path for the module.
If the string specifies a relative path or a module name without a path, the function uses a standard search strategy
to find the module; for more information, see the Remarks.
If the function cannot find the module, the function fails. When specifying a path, be sure to use backslashes (\),
not forward slashes (/). For more information about paths, see Naming a File or Directory.
If the string specifies a module name without a path and the file name extension is omitted, the function appends the
default library extension .dll to the module name. To prevent the function from appending .dll to the module name,
include a trailing point character (.) in the module name string.
If the function succeeds, the return value is a handle to the module.
If the function fails, the return value is NULL. To get extended error information, call GetLastError.
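When the loader cannot find the module, the load fails rather than returning a partially usable handle. The POSIX analogue can be shown with Python's ctypes (illustrative only; the library name is deliberately nonexistent):

```python
import ctypes

# Loading a module the loader cannot find fails with an error, analogous
# to LoadLibrary returning NULL (with GetLastError for details) on Windows.
try:
    ctypes.CDLL("libdefinitely-not-present-12345.so")
    found = True
except OSError:
    found = False

print(found)  # False
```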