:orphan:

:py:mod:`neural_compressor.torch.utils.utility`
===============================================

.. py:module:: neural_compressor.torch.utils.utility


Module Contents
---------------


Functions
~~~~~~~~~

.. autoapisummary::

   neural_compressor.torch.utils.utility.register_algo
   neural_compressor.torch.utils.utility.fetch_module
   neural_compressor.torch.utils.utility.set_module
   neural_compressor.torch.utils.utility.get_quantizer
   neural_compressor.torch.utils.utility.postprocess_model



.. py:function:: register_algo(name)

   Decorator function to register algorithms in the algos_mapping dictionary.

   Usage example:
       @register_algo(name="example_algo")
       def example_algo(model: torch.nn.Module, quant_config: RTNConfig) -> torch.nn.Module:
           ...

   :param name: The name under which the algorithm function will be registered.
   :type name: str

   :returns: The decorator function to be used with algorithm functions.
   :rtype: decorator


.. py:function:: fetch_module(model, op_name)

   Get the module with a given op name.

   :param model: the input model.
   :type model: object
   :param op_name: name of the op.
   :type op_name: str

   :returns: module (object).


.. py:function:: set_module(model, op_name, new_module)

   Set the module with a given op name.

   :param model: the input model.
   :type model: object
   :param op_name: name of the op.
   :type op_name: str
   :param new_module: the new module to set at the given op name.
   :type new_module: object

   :returns: module (object).


.. py:function:: get_quantizer(model, quantizer_cls, quant_config=None, *args, **kwargs)

   Get the quantizer.

   Initialize a quantizer, or get the `quantizer` attribute from the model if it already exists.

   :param model: pytorch model.
   :type model: torch.nn.Module
   :param quantizer_cls: quantizer class of a specific algorithm.
   :type quantizer_cls: Quantizer
   :param quant_config: Specifies how to apply the algorithm on the given model. Defaults to None.
   :type quant_config: dict, optional

   :returns: quantizer object.


.. py:function:: postprocess_model(model, mode, quantizer)

   Process the `quantizer` attribute of the model according to the current phase.

   In the `prepare` phase, the `quantizer` is set as an attribute of the model
   to avoid redundant initialization during the `convert` phase.

   In the `convert` or `quantize` phase, the unused `quantizer` attribute is removed.

   :param model: pytorch model.
   :type model: torch.nn.Module
   :param mode: The mode of the current phase, including 'prepare', 'convert' and 'quantize'.
   :type mode: Mode
   :param quantizer: quantizer object.
   :type quantizer: Quantizer
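
The following is a minimal sketch of how these helpers can be combined, assuming a toy model and an illustrative algorithm name; the `ToyModel` class, the ``"example_algo"`` key, and the ``"fc"`` op name are hypothetical and not part of this module.

.. code-block:: python

   import torch

   from neural_compressor.torch.utils.utility import (
       fetch_module,
       register_algo,
       set_module,
   )


   # Register a toy algorithm under an illustrative name (assumption: any string key works).
   @register_algo(name="example_algo")
   def example_algo(model: torch.nn.Module, quant_config=None) -> torch.nn.Module:
       # Look up a submodule by its dotted op name.
       original = fetch_module(model, "fc")
       print(f"Replacing {type(original).__name__} at 'fc'")
       # Swap the submodule in place with a new module.
       set_module(model, "fc", torch.nn.Identity())
       return model


   class ToyModel(torch.nn.Module):
       def __init__(self):
           super().__init__()
           self.fc = torch.nn.Linear(4, 4)

       def forward(self, x):
           return self.fc(x)


   model = example_algo(ToyModel())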