neural_compressor.torch.utils.auto_accelerator

Auto Accelerator Module.

Classes

INCAcceleratorType

Create a collection of name/value pairs.

AcceleratorRegistry

Accelerator Registry.

Auto_Accelerator

Auto Accelerator Base class.

CPU_Accelerator

CPU Accelerator.

CUDA_Accelerator

CUDA Accelerator.

XPU_Accelerator

XPU Accelerator.

HPU_Accelerator

HPU Accelerator.

Functions

register_accelerator(→ Callable[..., Any])

Register new accelerator.

auto_detect_accelerator(→ Auto_Accelerator)

Automatically detects and selects the appropriate accelerator.

Module Contents

class neural_compressor.torch.utils.auto_accelerator.INCAcceleratorType(*args, **kwds)[source]

Create a collection of name/value pairs.

Example enumeration:

>>> class Color(Enum):
...     RED = 1
...     BLUE = 2
...     GREEN = 3

Access them by:

  • attribute access:

    >>> Color.RED
    <Color.RED: 1>
    
  • value lookup:

    >>> Color(1)
    <Color.RED: 1>
    
  • name lookup:

    >>> Color['RED']
    <Color.RED: 1>
    

Enumerations can be iterated over, and know how many members they have:

>>> len(Color)
3
>>> list(Color)
[<Color.RED: 1>, <Color.BLUE: 2>, <Color.GREEN: 3>]

Methods can be added to enumerations, and members can have their own attributes – see the documentation for details.
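
The same pattern applies to INCAcceleratorType, whose members name the accelerator families listed in this module. A minimal sketch of inspecting the enum (the exact member names are not listed here and depend on the installed INC version):

    from neural_compressor.torch.utils.auto_accelerator import INCAcceleratorType

    # Iterate over whatever accelerator types this INC build defines
    # (e.g. CPU/CUDA/XPU/HPU variants) and print their names and values.
    for acc_type in INCAcceleratorType:
        print(acc_type.name, acc_type.value)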

class neural_compressor.torch.utils.auto_accelerator.AcceleratorRegistry[source]

Accelerator Registry.
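
A hedged sketch of inspecting the registry; the module-level instance accelerator_registry and the get_sorted_accelerators method are assumptions based on the registry pattern used here, so check the source for the exact names:

    from neural_compressor.torch.utils import auto_accelerator

    # Assumed module-level registry instance and method name: list the
    # registered accelerator classes, ordered by priority (highest first).
    registry = auto_accelerator.accelerator_registry
    for acc_cls in registry.get_sorted_accelerators():
        print(acc_cls)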

neural_compressor.torch.utils.auto_accelerator.register_accelerator(name: str, priority: float = 0) → Callable[..., Any][source]

Register new accelerator.

Usage example:

@register_accelerator(name="cuda", priority=100)
class CUDA_Accelerator:
    ...

Parameters:
  • name – the accelerator name.

  • priority – the priority of the accelerator. A larger number indicates a higher priority. Defaults to 0.

class neural_compressor.torch.utils.auto_accelerator.Auto_Accelerator[source]

Auto Accelerator Base class.
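
New backends are expected to subclass this base class and register themselves with register_accelerator. A minimal, hypothetical sketch (the method names name and is_available are assumptions about the base-class interface; mirror the built-in CPU_Accelerator/CUDA_Accelerator implementations for the real contract):

    from neural_compressor.torch.utils.auto_accelerator import (
        Auto_Accelerator,
        register_accelerator,
    )

    # Hypothetical custom accelerator; a low priority keeps it behind the
    # built-in CUDA/XPU/HPU accelerators during auto-detection.
    @register_accelerator(name="my_device", priority=1)
    class MyDevice_Accelerator(Auto_Accelerator):
        def name(self) -> str:           # assumed interface method
            return "my_device"

        def is_available(self) -> bool:  # assumed interface method
            return False                 # only report True when the device exists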

class neural_compressor.torch.utils.auto_accelerator.CPU_Accelerator[source]

CPU Accelerator.

class neural_compressor.torch.utils.auto_accelerator.CUDA_Accelerator[source]

CUDA Accelerator.

class neural_compressor.torch.utils.auto_accelerator.XPU_Accelerator[source]

XPU Accelerator.

class neural_compressor.torch.utils.auto_accelerator.HPU_Accelerator[source]

HPU Accelerator.

neural_compressor.torch.utils.auto_accelerator.auto_detect_accelerator(device_name='auto') → Auto_Accelerator[source]

Automatically detects and selects the appropriate accelerator.

To force use of the CPU on a node that has both a CPU and a GPU, set the environment variable before launching:

INC_TARGET_DEVICE=cpu python main.py …

The value of INC_TARGET_DEVICE is case-insensitive, and the environment variable takes precedence over the device_name argument.

TODO: refine the docs and logic later.
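
A short usage sketch (the documented way to force the CPU is to export INC_TARGET_DEVICE before launching the process, as shown above):

    from neural_compressor.torch.utils.auto_accelerator import auto_detect_accelerator

    # Let INC pick the best available backend; "auto" is the default.
    accelerator = auto_detect_accelerator("auto")
    print(type(accelerator).__name__)   # e.g. CPU_Accelerator on a CPU-only node

    # An explicit device name can also be passed, e.g. auto_detect_accelerator("cpu"),
    # but INC_TARGET_DEVICE (if set) takes precedence over this argument.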