VA-API  2.22.0
Video processing API

Classes

struct  VABlendState
 Video blending state definition. More...
 
struct  VAProcPipelineCaps
 Video processing pipeline capabilities. More...
 
struct  VAProcFilterValueRange
 Specification of values supported by the filter. More...
 
struct  VAHdrMetaDataHDR10
 Describes High Dynamic Range Meta Data for HDR10. More...
 
struct  VAProcFilterCapHighDynamicRange
 Capabilities specification for the High Dynamic Range filter. More...
 
struct  VAHdrMetaData
 High Dynamic Range Meta Data. More...
 
struct  VAProcPipelineParameterBuffer
 Video processing pipeline configuration. More...
 
struct  VAProcFilterParameterBufferBase
 Filter parameter buffer base. More...
 
struct  VAProcFilterParameterBuffer
 Default filter parametrization. More...
 
struct  VAProcFilterParameterBufferDeinterlacing
 Deinterlacing filter parametrization. More...
 
struct  VAProcFilterParameterBufferColorBalance
 Color balance filter parametrization. More...
 
struct  VAProcFilterParameterBufferTotalColorCorrection
 Total color correction filter parametrization. More...
 
struct  VAProcFilterParameterBufferHVSNoiseReduction
 Human Vision System (HVS) noise reduction filter parametrization. More...
 
struct  VAProcFilterParameterBufferHDRToneMapping
 High Dynamic Range (HDR) tone mapping filter parametrization. More...
 
struct  VAProcFilterParameterBuffer3DLUT
 3DLUT filter parametrization. More...
 
struct  VAProcFilterCap3DLUT
 Capabilities specification for the 3DLUT filter. More...
 
struct  VAProcFilterCap
 Default filter cap specification (single range value). More...
 
struct  VAProcFilterCapDeinterlacing
 Capabilities specification for the deinterlacing filter. More...
 
struct  VAProcFilterCapColorBalance
 Capabilities specification for the color balance filter. More...
 
struct  VAProcFilterCapTotalColorCorrection
 Capabilities specification for the Total Color Correction filter. More...
 

Macros

#define VA_SOURCE_RANGE_UNKNOWN   0
 

Enumerations

enum  VAProcFilterType {
}
 Video filter types. More...
 
enum  VAProcDeinterlacingType {
}
 Deinterlacing types. More...
 
enum  VAProcColorBalanceType {
}
 Color balance types. More...
 
enum  VAProcColorStandardType {
}
 Color standard types. More...
 
enum  VAProcTotalColorCorrectionType {
}
 Total color correction types. More...
 
enum  VAProcHighDynamicRangeMetadataType { }
 High Dynamic Range Metadata types. More...
 
enum  VAProcMode { VAProcDefaultMode = 0, VAProcPowerSavingMode, VAProcPerformanceMode }
 Video Processing Mode. More...
 

Functions

VAStatus vaQueryVideoProcFilters (VADisplay dpy, VAContextID context, VAProcFilterType *filters, unsigned int *num_filters)
 Queries video processing filters. More...
 
VAStatus vaQueryVideoProcFilterCaps (VADisplay dpy, VAContextID context, VAProcFilterType type, void *filter_caps, unsigned int *num_filter_caps)
 Queries video filter capabilities. More...
 
VAStatus vaQueryVideoProcPipelineCaps (VADisplay dpy, VAContextID context, VABufferID *filters, unsigned int num_filters, VAProcPipelineCaps *pipeline_caps)
 Queries video processing pipeline capabilities. More...
 

Video blending flags

#define VA_BLEND_GLOBAL_ALPHA   0x0001
 Global alpha blending.
 
#define VA_BLEND_PREMULTIPLIED_ALPHA   0x0002
 Premultiplied alpha blending (RGBA surfaces only).
 
#define VA_BLEND_LUMA_KEY   0x0010
 Luma color key (YUV surfaces only).
 

Video pipeline flags

#define VA_PROC_PIPELINE_SUBPICTURES   0x00000001
 Specifies whether to apply subpictures when processing a surface.
 
#define VA_PROC_PIPELINE_FAST   0x00000002
 Specifies whether to apply power or performance optimizations to a pipeline. More...
 

Video filter flags

#define VA_PROC_FILTER_MANDATORY   0x00000001
 Specifies whether the filter shall be present in the pipeline.
 

Pipeline end flags

#define VA_PIPELINE_FLAG_END   0x00000004
 Specifies that this pipeline is the last one.
 

Chroma Siting flag

#define VA_CHROMA_SITING_UNKNOWN   0x00
 
#define VA_CHROMA_SITING_VERTICAL_TOP   0x01
 Chroma samples are co-sited vertically on the top with the luma samples.
 
#define VA_CHROMA_SITING_VERTICAL_CENTER   0x02
 Chroma samples are not co-sited vertically with the luma samples.
 
#define VA_CHROMA_SITING_VERTICAL_BOTTOM   0x03
 Chroma samples are co-sited vertically on the bottom with the luma samples.
 
#define VA_CHROMA_SITING_HORIZONTAL_LEFT   0x04
 Chroma samples are co-sited horizontally on the left with the luma samples.
 
#define VA_CHROMA_SITING_HORIZONTAL_CENTER   0x08
 Chroma samples are not co-sited horizontally with the luma samples.
 

Tone Mapping flags for multiple HDR modes

#define VA_TONE_MAPPING_HDR_TO_HDR   0x0001
 Tone Mapping from HDR content to HDR display.
 
#define VA_TONE_MAPPING_HDR_TO_SDR   0x0002
 Tone Mapping from HDR content to SDR display.
 
#define VA_TONE_MAPPING_HDR_TO_EDR   0x0004
 Tone Mapping from HDR content to EDR display.
 
#define VA_TONE_MAPPING_SDR_TO_HDR   0x0008
 Tone Mapping from SDR content to HDR display.
 

De-interlacing flags

#define VA_DEINTERLACING_BOTTOM_FIELD_FIRST   0x0001
 Bottom field first in the input frame. If this flag is not set, top field first is assumed.
 
#define VA_DEINTERLACING_BOTTOM_FIELD   0x0002
 Bottom field used in deinterlacing. If this flag is not set, the top field is used.
 
#define VA_DEINTERLACING_ONE_FIELD   0x0004
 A single field is stored in the input frame. If this flag is not set, the frame is assumed to contain two interleaved fields.
 
#define VA_DEINTERLACING_FMD_ENABLE   0x0008
 Film Mode Detection is enabled. If enabled, the driver performs the inverse of various pulldowns, such as 3:2 pulldown. If this flag is not set, FMD is assumed to be disabled.
 
#define VA_DEINTERLACING_SCD_ENABLE   0x0010
 

Video Processing Human Vision System (HVS) Denoise Mode.

#define VA_PROC_HVS_DENOISE_DEFAULT   0x0000
 Default mode. The driver selects the appropriate mode.
 
#define VA_PROC_HVS_DENOISE_AUTO_BDRATE   0x0001
 Auto BD-rate mode. Enables automatic BD-rate improvement in pre-processing (such as before video encoding); the Strength value is ignored.
 
#define VA_PROC_HVS_DENOISE_AUTO_SUBJECTIVE   0x0002
 Auto subjective mode. Enables automatic subjective quality improvement in pre-processing (such as before video encoding); the Strength value is ignored.
 
#define VA_PROC_HVS_DENOISE_MANUAL   0x0003
 Manual mode. Allows the denoise strength to be adjusted manually (Strength must be set explicitly).
 

3DLUT Channel Layout and Mapping

#define VA_3DLUT_CHANNEL_UNKNOWN   0x00000000
 3DLUT Channel Layout is unknown.
 
#define VA_3DLUT_CHANNEL_RGB_RGB   0x00000001
 3DLUT Channel Layout is R, G, B, the default layout. Map RGB to RGB.
 
#define VA_3DLUT_CHANNEL_YUV_RGB   0x00000002
 3DLUT Channel Layout is Y, U, V. Map YUV to RGB.
 
#define VA_3DLUT_CHANNEL_VUY_RGB   0x00000004
 3DLUT Channel Layout is V, U, Y. Map VUY to RGB.
 

Detailed Description

The video processing API uses the same paradigm as for decoding:

Query for supported filters

Checking whether video processing is supported can be performed with vaQueryConfigEntrypoints() and the profile argument set to VAProfileNone. If video processing is supported, then the list of returned entry-points will include VAEntrypointVideoProc.

VAEntrypoint *entrypoints;
int i, num_entrypoints, supportsVideoProcessing = 0;

num_entrypoints = vaMaxNumEntrypoints(va_dpy);
entrypoints = malloc(num_entrypoints * sizeof(entrypoints[0]));
vaQueryConfigEntrypoints(va_dpy, VAProfileNone,
    entrypoints, &num_entrypoints);

for (i = 0; !supportsVideoProcessing && i < num_entrypoints; i++) {
    if (entrypoints[i] == VAEntrypointVideoProc)
        supportsVideoProcessing = 1;
}

Then, the vaQueryVideoProcFilters() function is used to query the list of video processing filters.

VAProcFilterType filters[VAProcFilterCount];
unsigned int num_filters = VAProcFilterCount;

// num_filters shall be initialized to the length of the array
vaQueryVideoProcFilters(va_dpy, vpp_ctx, filters, &num_filters);

Finally, individual filter capabilities can be checked with vaQueryVideoProcFilterCaps().

VAProcFilterCap denoise_caps;
unsigned int num_denoise_caps = 1;
vaQueryVideoProcFilterCaps(va_dpy, vpp_ctx,
    VAProcFilterNoiseReduction,
    &denoise_caps, &num_denoise_caps
);

VAProcFilterCapDeinterlacing deinterlacing_caps[VAProcDeinterlacingCount];
unsigned int num_deinterlacing_caps = VAProcDeinterlacingCount;
vaQueryVideoProcFilterCaps(va_dpy, vpp_ctx,
    VAProcFilterDeinterlacing,
    deinterlacing_caps, &num_deinterlacing_caps
);

Set up a video processing pipeline

A video processing pipeline buffer is created for each source surface to be processed. However, buffers holding filter parameters can be created once and for all: this avoids repeated creation/destruction of filter buffers, and filter parameters generally do not change from frame to frame. For example, this makes it possible to implement a checkerboard of videos where the same filters are applied to each video source.

The general control flow is demonstrated by the following pseudo-code:

// Create filters
VABufferID denoise_filter, deint_filter;
VABufferID filter_bufs[VAProcFilterCount];
unsigned int num_filter_bufs = 0;

for (i = 0; i < num_filters; i++) {
    switch (filters[i]) {
    case VAProcFilterNoiseReduction: {       // Noise reduction filter
        VAProcFilterParameterBuffer denoise;
        denoise.type  = VAProcFilterNoiseReduction;
        denoise.value = 0.5;
        vaCreateBuffer(va_dpy, vpp_ctx,
            VAProcFilterParameterBufferType, sizeof(denoise), 1,
            &denoise, &denoise_filter
        );
        filter_bufs[num_filter_bufs++] = denoise_filter;
        break;
    }

    case VAProcFilterDeinterlacing:          // Motion-adaptive deinterlacing
        for (j = 0; j < num_deinterlacing_caps; j++) {
            VAProcFilterCapDeinterlacing * const cap = &deinterlacing_caps[j];
            if (cap->type != VAProcDeinterlacingMotionAdaptive)
                continue;

            VAProcFilterParameterBufferDeinterlacing deint;
            deint.type      = VAProcFilterDeinterlacing;
            deint.algorithm = VAProcDeinterlacingMotionAdaptive;
            vaCreateBuffer(va_dpy, vpp_ctx,
                VAProcFilterParameterBufferType, sizeof(deint), 1,
                &deint, &deint_filter
            );
            filter_bufs[num_filter_bufs++] = deint_filter;
        }
        break;
    }
}

Once the video processing pipeline is set up, the caller shall check its implied capabilities and requirements with vaQueryVideoProcPipelineCaps(). This function can be used to validate the number of reference frames needed by the specified deinterlacing algorithm, the supported color primaries, etc.

// Query pipeline capabilities
VAProcPipelineCaps pipeline_caps;
VASurfaceID *forward_references;
unsigned int num_forward_references;
VASurfaceID *backward_references;
unsigned int num_backward_references;
VAProcColorStandardType in_color_standards[VAProcColorStandardCount];
VAProcColorStandardType out_color_standards[VAProcColorStandardCount];

pipeline_caps.input_color_standards      = in_color_standards;
pipeline_caps.num_input_color_standards  = ARRAY_ELEMS(in_color_standards);
pipeline_caps.output_color_standards     = out_color_standards;
pipeline_caps.num_output_color_standards = ARRAY_ELEMS(out_color_standards);
vaQueryVideoProcPipelineCaps(va_dpy, vpp_ctx,
    filter_bufs, num_filter_bufs,
    &pipeline_caps
);

num_forward_references  = pipeline_caps.num_forward_references;
forward_references      =
    malloc(num_forward_references * sizeof(VASurfaceID));
num_backward_references = pipeline_caps.num_backward_references;
backward_references     =
    malloc(num_backward_references * sizeof(VASurfaceID));

Send video processing parameters through VA buffers

Video processing pipeline parameters are submitted for each source surface to process. Video filter parameters can also change per surface, e.g. the list of reference frames used for deinterlacing.

foreach (iteration) {
    vaBeginPicture(va_dpy, vpp_ctx, vpp_surface);
    foreach (surface) {
        VAProcPipelineParameterBuffer *pipeline_param;
        VARectangle output_region;
        VABufferID pipeline_buf;
        vaCreateBuffer(va_dpy, vpp_ctx,
            VAProcPipelineParameterBufferType, sizeof(*pipeline_param), 1,
            NULL, &pipeline_buf
        );

        // Setup output region for this surface
        // e.g. upper left corner for the first surface
        output_region.x = BORDER;
        output_region.y = BORDER;
        output_region.width =
            (vpp_surface_width - (Nx_surfaces + 1) * BORDER) / Nx_surfaces;
        output_region.height =
            (vpp_surface_height - (Ny_surfaces + 1) * BORDER) / Ny_surfaces;

        vaMapBuffer(va_dpy, pipeline_buf, (void **)&pipeline_param);
        pipeline_param->surface = surface;
        pipeline_param->surface_region = NULL;
        pipeline_param->output_region = &output_region;
        pipeline_param->output_background_color = 0;
        if (first surface to render)
            pipeline_param->output_background_color = 0xff000000; // black
        pipeline_param->filter_flags = VA_FILTER_SCALING_HQ;
        pipeline_param->filters = filter_bufs;
        pipeline_param->num_filters = num_filter_bufs;

        // Update reference frames for deinterlacing, if necessary
        pipeline_param->forward_references = forward_references;
        pipeline_param->num_forward_references = num_forward_references_used;
        pipeline_param->backward_references = backward_references;
        pipeline_param->num_backward_references = num_backward_references_used;
        vaUnmapBuffer(va_dpy, pipeline_buf);

        // Apply filters
        vaRenderPicture(va_dpy, vpp_ctx, &pipeline_buf, 1);
    }
    vaEndPicture(va_dpy, vpp_ctx);
}

Macro Definition Documentation

◆ VA_CHROMA_SITING_UNKNOWN

#define VA_CHROMA_SITING_UNKNOWN   0x00

Vertical chroma siting occupies bits 0-1 and horizontal chroma siting occupies bits 2-3; the complete chroma siting value is the bitwise OR of one vertical flag and one horizontal flag.

◆ VA_PROC_PIPELINE_FAST

#define VA_PROC_PIPELINE_FAST   0x00000002

Specifies whether to apply power or performance optimizations to a pipeline.

When processing several surfaces, it may be necessary to prioritize certain pipelines over others. This flag is only a hint to the video processor, which may, for example, omit certain filters to save power. Typically, this flag could be used with video surfaces decoded from a secondary bitstream.

◆ VA_SOURCE_RANGE_UNKNOWN

#define VA_SOURCE_RANGE_UNKNOWN   0

Indicates whether the color-space conversion uses full range or reduced range. VA_SOURCE_RANGE_FULL (full range): Y/Cb/Cr are in [0, 255]; mainly used for JPEG/JFIF formats. Combined with the BT601 flag, the JPEG/JFIF color-space conversion matrix is used. VA_SOURCE_RANGE_REDUCED (reduced range): Y is in [16, 235] and Cb/Cr are in [16, 240]; mainly used for YUV-to-RGB color-space conversion in SDTV/HDTV/UHDTV.

Enumeration Type Documentation

◆ VAProcColorBalanceType

Color balance types.

Enumerator
VAProcColorBalanceHue 

Hue.

VAProcColorBalanceSaturation 

Saturation.

VAProcColorBalanceBrightness 

Brightness.

VAProcColorBalanceContrast 

Contrast.

VAProcColorBalanceAutoSaturation 

Automatically adjusted saturation.

VAProcColorBalanceAutoBrightness 

Automatically adjusted brightness.

VAProcColorBalanceAutoContrast 

Automatically adjusted contrast.

VAProcColorBalanceCount 

Number of color balance attributes.

◆ VAProcColorStandardType

Color standard types.

These define a set of color properties corresponding to particular video standards.

Where matrix_coefficients is specified, it applies only to YUV data - RGB data always use the identity matrix (matrix_coefficients = 0).

Enumerator
VAProcColorStandardBT601 

ITU-R BT.601.

It is unspecified whether this will use 525-line or 625-line values; specify the colour primaries and matrix coefficients explicitly if it is known which one is required.

Equivalent to: colour_primaries = 5 or 6, transfer_characteristics = 6, matrix_coefficients = 5 or 6.

VAProcColorStandardBT709 

ITU-R BT.709.

Equivalent to: colour_primaries = 1, transfer_characteristics = 1, matrix_coefficients = 1.

VAProcColorStandardBT470M 

ITU-R BT.470-2 System M.

Equivalent to: colour_primaries = 4, transfer_characteristics = 4, matrix_coefficients = 4.

VAProcColorStandardBT470BG 

ITU-R BT.470-2 System B, G.

Equivalent to: colour_primaries = 5, transfer_characteristics = 5, matrix_coefficients = 5.

VAProcColorStandardSMPTE170M 

SMPTE-170M.

Equivalent to: colour_primaries = 6, transfer_characteristics = 6, matrix_coefficients = 6.

VAProcColorStandardSMPTE240M 

SMPTE-240M.

Equivalent to: colour_primaries = 7, transfer_characteristics = 7, matrix_coefficients = 7.

VAProcColorStandardGenericFilm 

Generic film.

Equivalent to: colour_primaries = 8, transfer_characteristics = 1, matrix_coefficients = 1.

VAProcColorStandardSRGB 

sRGB.

Equivalent to: colour_primaries = 1, transfer_characteristics = 13, matrix_coefficients = 0.

VAProcColorStandardSTRGB 

stRGB.

???

VAProcColorStandardXVYCC601 

xvYCC601.

Equivalent to: colour_primaries = 1, transfer_characteristics = 11, matrix_coefficients = 5.

VAProcColorStandardXVYCC709 

xvYCC709.

Equivalent to: colour_primaries = 1, transfer_characteristics = 11, matrix_coefficients = 1.

VAProcColorStandardBT2020 

ITU-R BT.2020.

Equivalent to: colour_primaries = 9, transfer_characteristics = 14, matrix_coefficients = 9.

VAProcColorStandardExplicit 

Explicitly specified color properties.

Use the corresponding color properties section. For example, HDR10 content: colour_primaries = 9 (BT.2020), transfer_characteristics = 16 (SMPTE ST 2084), matrix_coefficients = 9.

VAProcColorStandardCount 

Number of color standards.

◆ VAProcDeinterlacingType

Deinterlacing types.

Enumerator
VAProcDeinterlacingBob 

Bob deinterlacing algorithm.

VAProcDeinterlacingWeave 

Weave deinterlacing algorithm.

VAProcDeinterlacingMotionAdaptive 

Motion adaptive deinterlacing algorithm.

VAProcDeinterlacingMotionCompensated 

Motion compensated deinterlacing algorithm.

VAProcDeinterlacingCount 

Number of deinterlacing algorithms.

◆ VAProcFilterType

Video filter types.

Enumerator
VAProcFilterNoiseReduction 

Noise reduction filter.

VAProcFilterDeinterlacing 

Deinterlacing filter.

VAProcFilterSharpening 

Sharpening filter.

VAProcFilterColorBalance 

Color balance parameters.

VAProcFilterSkinToneEnhancement 

Skin Tone Enhancement.

VAProcFilterTotalColorCorrection 

Total Color Correction.

VAProcFilterHVSNoiseReduction 

Human Vision System (HVS) noise reduction filter.

VAProcFilterHighDynamicRangeToneMapping 

High Dynamic Range Tone Mapping.

VAProcFilter3DLUT 

Three-Dimensional Look Up Table (3DLUT).

VAProcFilterCount 

Number of video filters.

◆ VAProcHighDynamicRangeMetadataType

High Dynamic Range Metadata types.

Enumerator
VAProcHighDynamicRangeMetadataHDR10 

Metadata type for HDR10.

VAProcHighDynamicRangeMetadataTypeCount 

Number of metadata types.

◆ VAProcMode

enum VAProcMode

Video Processing Mode.

Enumerator
VAProcDefaultMode 

Default mode. The driver selects the appropriate pipeline, e.g. one that balances power and performance.

VAProcPowerSavingMode 

Power saving mode. The pipeline is optimized for power saving.

VAProcPerformanceMode 

Performance mode. The pipeline is optimized for performance.

◆ VAProcTotalColorCorrectionType

Total color correction types.

Enumerator
VAProcTotalColorCorrectionRed 

Red Saturation.

VAProcTotalColorCorrectionGreen 

Green Saturation.

VAProcTotalColorCorrectionBlue 

Blue Saturation.

VAProcTotalColorCorrectionCyan 

Cyan Saturation.

VAProcTotalColorCorrectionMagenta 

Magenta Saturation.

VAProcTotalColorCorrectionYellow 

Yellow Saturation.

VAProcTotalColorCorrectionCount 

Number of color correction attributes.

Function Documentation

◆ vaQueryVideoProcFilterCaps()

VAStatus vaQueryVideoProcFilterCaps(VADisplay dpy, VAContextID context, VAProcFilterType type, void *filter_caps, unsigned int *num_filter_caps)

Queries video filter capabilities.

This function returns the list of capabilities supported by the driver for a specific video filter. The filter_caps array is allocated by the user and num_filter_caps shall be initialized to the number of allocated elements in that array. Upon successful return, the actual number of filter capabilities will be overwritten into num_filter_caps. Otherwise, VA_STATUS_ERROR_MAX_NUM_EXCEEDED is returned and num_filter_caps is adjusted to the number of elements that would be returned if enough space was available.

Parameters
[in]     dpy              the VA display
[in]     context          the video processing context
[in]     type             the video filter type
[out]    filter_caps      the output array of VAProcFilterCap elements
[in,out] num_filter_caps  the number of elements allocated on input, the number of elements actually filled in on output

◆ vaQueryVideoProcFilters()

VAStatus vaQueryVideoProcFilters(VADisplay dpy, VAContextID context, VAProcFilterType *filters, unsigned int *num_filters)

Queries video processing filters.

This function returns the list of video processing filters supported by the driver. The filters array is allocated by the user and num_filters shall be initialized to the number of allocated elements in that array. Upon successful return, the actual number of filters will be overwritten into num_filters. Otherwise, VA_STATUS_ERROR_MAX_NUM_EXCEEDED is returned and num_filters is adjusted to the number of elements that would be returned if enough space was available.

The list of video processing filters supported by the driver shall be ordered in the way they can be iteratively applied. This is needed both for correctness, since some filters would be meaningless if applied at the beginning of the pipeline, and for performance, since some filters can be applied in a single pass (e.g. noise reduction + deinterlacing).

Parameters
[in]     dpy          the VA display
[in]     context      the video processing context
[out]    filters      the output array of VAProcFilterType elements
[in,out] num_filters  the number of elements allocated on input, the number of elements actually filled in on output

◆ vaQueryVideoProcPipelineCaps()

VAStatus vaQueryVideoProcPipelineCaps(VADisplay dpy, VAContextID context, VABufferID *filters, unsigned int num_filters, VAProcPipelineCaps *pipeline_caps)

Queries video processing pipeline capabilities.

This function returns the video processing pipeline capabilities. The filters array defines the video processing pipeline and is an array of buffers holding filter parameters.

Note: the VAProcPipelineCaps structure contains user-provided arrays. If non-NULL, the corresponding num_* fields shall be filled in on input with the number of elements allocated. Upon successful return, the actual number of elements will be overwritten into the num_* fields. Otherwise, VA_STATUS_ERROR_MAX_NUM_EXCEEDED is returned and num_* fields are adjusted to the number of elements that would be returned if enough space was available.

Parameters
[in]     dpy            the VA display
[in]     context        the video processing context
[in]     filters        the array of VA buffers defining the video processing pipeline
[in]     num_filters    the number of elements in filters
[in,out] pipeline_caps  the video processing pipeline capabilities