Overview
This repository contains a framework for developing plugins for the Kubernetes device plugin framework, along with a number of device plugin implementations built with it.
The v0.31.1 release is the latest feature release with its documentation available here.
Prerequisites
Prerequisites for building and running these device plugins include:
Appropriate hardware and drivers
A fully configured Kubernetes cluster
A working Go environment, version 1.16 or newer
Plugins
The sections below detail the existing plugins developed using the framework.
GPU Device Plugin
The GPU device plugin provides access to discrete and integrated Intel GPU device files.
The demo subdirectory contains both a GPU plugin demo video and an OpenCL sample deployment (intelgpu-job.yaml).
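As a minimal sketch (the pod name and container image below are placeholders), a workload can request the plugin's gpu.intel.com/i915 resource:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-demo                      # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: opencl-workload
    image: my-opencl-app:latest       # placeholder image
    resources:
      limits:
        gpu.intel.com/i915: 1         # request one GPU device file
```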
FPGA Device Plugin
The FPGA device plugin supports FPGA passthrough for the following hardware:
Intel® Arria® 10 devices
Intel® Stratix® 10 devices
The FPGA plugin comes as three parts:
the device plugin
the admission controller
the OCI createRuntime hook
Refer to each individual sub-component's documentation for more details. Brief overviews of the sub-components are below.
The demo subdirectory contains a video showing deployment and use of the FPGA plugin. Sources relating to the demo can be found in the opae-nlb-demo subdirectory.
Device Plugin
The FPGA device plugin is responsible for discovering and reporting FPGA devices to kubelet.
Admission Controller
The FPGA admission controller webhook is responsible for mapping user-friendly function IDs to the Interface ID and Bitstream ID that are required for FPGA programming. It also implements access control by namespacing FPGA configuration information.
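As a sketch of that mapping, assuming the fpga.intel.com/v2 AcceleratorFunction CRD used by the admission controller, a namespaced custom resource could tie a user-friendly name to the low-level IDs (the IDs below are illustrative only):

```yaml
apiVersion: fpga.intel.com/v2
kind: AcceleratorFunction
metadata:
  name: arria10.dcp1.2-nlb0                       # user-friendly function name
spec:
  afuId: d8424dc4a4a3c413f89e433683f9040b         # illustrative AFU ID
  interfaceId: 69528db6eb31577a8c3668f9faa081f6   # illustrative interface ID
  mode: af
```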
OCI createRuntime Hook
The FPGA OCI createRuntime hook performs discovery of the requested FPGA function bitstream and programs FPGA devices based on the environment variables in the workload description.
QAT Device Plugin
The QAT device plugin provides support for Intel QuickAssist Technology (QAT) adapters, and includes code showing deployment via DPDK.
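For example (pod and image names are placeholders), a container can request the plugin's qat.intel.com/generic resource:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qat-demo                      # placeholder name
spec:
  containers:
  - name: dpdk-crypto-workload
    image: my-dpdk-app:latest         # placeholder image
    resources:
      limits:
        qat.intel.com/generic: 1      # request one QAT device
```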
The demo subdirectory includes details of both a QAT DPDK demo and a QAT OpenSSL demo. Source for the OpenSSL demo can be found in the relevant subdirectory.
Details for integrating the QAT device plugin into Kata Containers can be found in the Kata Containers documentation repository.
SGX Device Plugin
The SGX device plugin allows workloads to use Intel® Software Guard Extensions (Intel® SGX) on platforms with SGX Flexible Launch Control enabled, for example:
3rd Generation Intel® Xeon® Scalable processor family, code-named “Ice Lake”
Intel® Xeon® E3 processor
Intel® NUC Kit NUC7CJYH
The Intel SGX plugin comes in three parts: the device plugin, the Intel SGX admission webhook, and Intel SGX EPC memory registration.
The demo subdirectory contains a video showing the deployment and use of the Intel SGX device plugin. Sources relating to the demo can be found in the sgx-sdk-demo and sgx-aesmd-demo subdirectories.
Brief overviews of the Intel SGX sub-components are given below.
Device Plugin
The SGX device plugin is responsible for discovering and reporting Intel SGX device nodes to kubelet.
Containers requesting Intel SGX resources in the cluster should not use the device plugin's resources directly.
Intel SGX Admission Webhook
The Intel SGX admission webhook is responsible for performing Pod mutations based on the sgx.intel.com/quote-provider pod annotation set by the user. The purpose of the webhook is to hide the details of setting the necessary device resources and volume mounts for using Intel SGX remote attestation in the cluster. Furthermore, the Intel SGX admission webhook is responsible for writing a pod/sandbox sgx.intel.com/epc annotation that is used by Kata Containers to dynamically adjust its virtualized Intel SGX encrypted page cache (EPC) bank(s) size.
The Intel SGX admission webhook is available as part of the Intel Device Plugin Operator or as a standalone SGX Admission Webhook image.
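As a sketch of the user-facing side (names other than the annotation are placeholders; aesmd as the quote provider value follows the demo setup), a workload opts in via the annotation and lets the webhook inject the rest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sgx-attestation-demo                # placeholder name
  annotations:
    sgx.intel.com/quote-provider: aesmd     # tells the webhook which quote provider to wire in
spec:
  containers:
  - name: enclave-app
    image: my-sgx-app:latest                # placeholder image
```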
Intel SGX EPC memory registration
The Intel SGX EPC memory available on each node is registered as a Kubernetes extended resource using node-feature-discovery (NFD). An NFD Node Feature Rule is installed as part of SGX device plugin operator deployment and NFD is configured to register the Intel SGX EPC memory extended resource.
Containers requesting Intel SGX EPC resources in the cluster use the sgx.intel.com/epc resource, which is of type memory.
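For example, a container spec fragment requesting 512Mi of EPC could look like this (the quantity is arbitrary):

```yaml
resources:
  limits:
    sgx.intel.com/epc: "512Mi"   # EPC request expressed as a memory quantity
```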
DSA Device Plugin
The DSA device plugin supports acceleration using the Intel Data Streaming Accelerator (DSA).
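A hedged container spec fragment, assuming the plugin exposes shared work queues as the dsa.intel.com/wq-user-shared resource:

```yaml
resources:
  limits:
    dsa.intel.com/wq-user-shared: 1   # one shared DSA work queue (resource name assumed)
```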
DLB Device Plugin
The DLB device plugin supports the Intel Dynamic Load Balancer (DLB) accelerator.
IAA Device Plugin
The IAA device plugin supports acceleration using the Intel In-Memory Analytics Accelerator (IAA).
Device Plugins Operator
To simplify the deployment of the device plugins, a unified device plugins operator is implemented.
Currently the operator supports the DSA, DLB, FPGA, GPU, IAA, QAT, and Intel SGX device plugins. Each device plugin has its own custom resource definition (CRD) and a corresponding controller that watches CRUD operations on those custom resources.
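As an illustrative sketch, deploying the GPU plugin through the operator could use a custom resource along these lines (the spec fields shown are an assumed subset):

```yaml
apiVersion: deviceplugin.intel.com/v1
kind: GpuDevicePlugin
metadata:
  name: gpudeviceplugin-sample
spec:
  image: intel/intel-gpu-plugin:0.31.1   # plugin image; tag assumed to match the release
  sharedDevNum: 1                        # assumed field: pods allowed to share one GPU
```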
The Device plugins operator README gives the installation and usage details for the community operator available on operatorhub.io.
The Device plugins Operator for OpenShift gives the installation and usage details for the operator available on Red Hat OpenShift Container Platform.
XeLink XPU Manager Sidecar
To support interconnected GPUs in Kubernetes, the XeLink sidecar is needed.
The XeLink XPU Manager sidecar README gives information on how the sidecar functions and how to use it.
Intel GPU Level-Zero sidecar
The sidecar uses the Level-Zero API to provide the GPU plugin with additional GPU information that the plugin cannot get through sysfs interfaces.
See Intel GPU Level-Zero sidecar README for more details.
Demos
The demo subdirectory contains a number of demonstrations for a variety of the available plugins.
Developers
For information on how to develop a new plugin using the framework or work on development tasks in this repository, see the Developers Guide.
Releases
Supported Kubernetes Versions
Releases are made under the GitHub releases area. Supported releases and their matching Kubernetes versions are listed below:
Branch | Kubernetes branch/version | Status |
---|---|---|
release-0.31 | Kubernetes 1.31 branch v1.31.x | supported |
release-0.30 | Kubernetes 1.30 branch v1.30.x | supported |
release-0.29 | Kubernetes 1.29 branch v1.29.x | supported |
release-0.28 | Kubernetes 1.28 branch v1.28.x | unsupported |
release-0.27 | Kubernetes 1.27 branch v1.27.x | unsupported |
release-0.26 | Kubernetes 1.26 branch v1.26.x | unsupported |
release-0.25 | Kubernetes 1.25 branch v1.25.x | unsupported |
release-0.24 | Kubernetes 1.24 branch v1.24.x | unsupported |
release-0.23 | Kubernetes 1.23 branch v1.23.x | unsupported |
release-0.22 | Kubernetes 1.22 branch v1.22.x | unsupported |
release-0.21 | Kubernetes 1.21 branch v1.21.x | unsupported |
release-0.20 | Kubernetes 1.20 branch v1.20.x | unsupported |
Note: Device plugins leverage the Kubernetes v1 API. The API itself is GA (generally available) and does not change between Kubernetes versions, so it is not necessary to use the latest Kubernetes cluster with the latest device plugin version. Using a newer device plugins release should work without issues on an older Kubernetes cluster. One possible exception to this is the device plugin CRDs, which can vary between versions.
Release procedures
The project's release cadence is tied to the Kubernetes release cadence: a device plugins release typically follows a couple of weeks after the Kubernetes release. Releases can be delayed by required changes in the pull request pipeline. Once the content is available in the main branch and CI & e2e validation passes, the release branch is created (e.g. release-0.26). The HEAD of the release branch is also tagged with the corresponding tag (e.g. v0.26.0).
During release creation, the project's documentation, deployment files, etc. are updated to point to the newly created version.
Patch releases (e.g. 0.26.3) are made on an as-needed basis when security issues or minor fixes are requested for a specific version. Fixes are always cherry-picked from the main branch to the release branches.
Pre-built plugin images
Pre-built images of the plugins are available on Docker Hub. These images are automatically built and uploaded to the hub from the latest main branch of this repository.
Release-tagged images of the components are also available on Docker Hub, tagged with their release version numbers in the format x.y.z, corresponding to the branches and releases in this repository.
Note: the default deployment files and operators are configured with imagePullPolicy IfNotPresent, which can be changed with scripts/set-image-pull-policy.sh.
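For reference, the container spec field the script adjusts looks like this (container name and image are examples):

```yaml
containers:
- name: intel-gpu-plugin                 # example container
  image: intel/intel-gpu-plugin:0.31.1   # example image
  imagePullPolicy: IfNotPresent          # default; set to Always to re-pull on every start
```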
Signed container images
Starting from the 0.31 release, the images (0.31.0 etc., not devel) are signed with keyless signing using cosign. The signing proof is stored in rekor.sigstore.dev in an append-only transparency log. The signature is also stored on Docker Hub.
To verify the signing in Kubernetes, one can use policy managers with keyless authorities.
License
All of the source code required to build intel-device-plugins-for-kubernetes is available under Open Source licenses. The source code files identify the external Go modules used. Binaries are distributed as container images on Docker Hub*. Those images contain license texts and source code under /licenses.
Helm Charts
Device plugins Helm charts are located in the Intel Helm Charts repository. This is another way of distributing the Kubernetes resources of the device plugins framework.
To add the repo:
helm repo add intel https://intel.github.io/helm-charts
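Then, to list the available charts and install one (the chart name below is an assumption; verify it with the search):
helm search repo intel
helm install gpu-plugin intel/intel-device-plugins-gpu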