neural_compressor.strategy.mse

MSE tuning strategy.

Module Contents

Classes

MSETuneStrategy – The tuning strategy using MSE policy in tuning space.
- class neural_compressor.strategy.mse.MSETuneStrategy(model, conf, q_dataloader, q_func=None, eval_dataloader=None, eval_func=None, dicts=None, q_hooks=None)
Bases: neural_compressor.strategy.strategy.TuneStrategy
The tuning strategy using MSE policy in tuning space.
The MSE strategy collects the tensors of each OP from both the raw FP32 model and the model quantized with the best model-wise tuning configuration. It then calculates the MSE (Mean Squared Error) for each OP, sorts the OPs by their MSE values, and performs the op-wise fallback in that order.
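In practice this strategy is usually selected through the quantization configuration rather than by instantiating the class directly. A minimal sketch, assuming the 2.x `PostTrainingQuantConfig`/`TuningCriterion` API; `model`, `calib_dataloader`, and `eval_func` are user-supplied placeholders:

```python
from neural_compressor.config import PostTrainingQuantConfig, TuningCriterion
from neural_compressor.quantization import fit

# Ask the tuner to use the MSE tuning strategy during accuracy-aware tuning.
conf = PostTrainingQuantConfig(tuning_criterion=TuningCriterion(strategy="mse"))

# `model`, `calib_dataloader`, and `eval_func` are user-supplied placeholders.
q_model = fit(model=model, conf=conf, calib_dataloader=calib_dataloader, eval_func=eval_func)
```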
- mse_impact_lst(op_list: List, fp32_model, best_qmodel)
Calculate and generate the MSE impact list.
- Parameters:
  - op_list (List[Tuple(str, str)]) – The list of (op name, op type) pairs to evaluate.
  - fp32_model – The raw FP32 model.
  - best_qmodel – The model quantized with the best model-wise tuning configuration.
- Returns:
  The list of ops sorted by their MSE impact, in the same format as ‘op_list’.
- Return type:
  ordered_op_name_types (List[Tuple(str, str)])
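The computation this method performs can be illustrated with a small, self-contained sketch in plain NumPy; the op names and captured activations are hypothetical, and the library's tensor-dumping machinery is not shown:

```python
import numpy as np

def op_mse(fp32_tensor: np.ndarray, dequant_tensor: np.ndarray) -> float:
    # Mean squared error between an FP32 activation and its dequantized counterpart.
    return float(np.mean((fp32_tensor - dequant_tensor) ** 2))

# Hypothetical per-op activations captured from the FP32 and quantized models.
rng = np.random.default_rng(0)
fp32_acts = {
    ("conv1", "Conv2D"): rng.random((1, 8, 8, 16)),
    ("dense1", "MatMul"): rng.random((1, 128)),
}
int8_acts = {k: v + rng.normal(0.0, 0.01, v.shape) for k, v in fp32_acts.items()}

# Sort ops by MSE; ops with the largest error are assumed to be the first
# fallback candidates, matching the order described above.
impact = sorted(
    ((op, op_mse(fp32_acts[op], int8_acts[op])) for op in fp32_acts),
    key=lambda item: item[1],
    reverse=True,
)
for (name, op_type), mse in impact:
    print(f"{name} ({op_type}): MSE = {mse:.6f}")
```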
- next_tune_cfg()
Generate and yield the next tuning config.
- Yields:
tune_config (dict) – A dict containing the tuning configuration for quantization.
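A minimal sketch of the generator pattern this method follows; the `op_wise` config layout and the helper name are hypothetical and do not reflect the library's internal tuning-space objects:

```python
from copy import deepcopy
from typing import Dict, Iterator, List, Tuple

def fallback_cfgs(best_cfg: Dict, impact_order: List[Tuple[str, str]]) -> Iterator[Dict]:
    # Yield one candidate config per op, reverting that op to FP32 in
    # MSE-impact order. Illustrative only: the real strategy also folds
    # accuracy feedback from each trial back into the search, which is
    # omitted here.
    for op_name, op_type in impact_order:
        cfg = deepcopy(best_cfg)
        op_wise = dict(cfg.get("op_wise", {}))
        op_wise[(op_name, op_type)] = {"activation": {"dtype": "fp32"},
                                       "weight": {"dtype": "fp32"}}
        cfg["op_wise"] = op_wise
        yield cfg
```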