Tuning Guide
The performance of some algorithms depends on the parameters they are used with. This section details such cases.
Refer to Supported Algorithms to see the full list of algorithms, parameters, and data formats supported in Intel® Extension for Scikit-learn*.
TSNE
The TSNE algorithm consists of two components: KNN and Gradient Descent. The overall acceleration of TSNE depends on the acceleration of each of these components.
The KNN part of the algorithm supports all parameters except:

- `metric` != 'euclidean', or 'minkowski' with `p` != 2
The Gradient Descent part of the algorithm supports all parameters except:

- `n_components` = 3
- `method` = 'exact'
- `verbose` != 0
To get better performance, use parameters supported by both components.
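For illustration, the following minimal sketch (the synthetic data and surrounding setup are our own, not part of this guide) keeps both components on the accelerated path:

```python
# A minimal sketch: TSNE parameters chosen so that both the KNN and the
# Gradient Descent components stay on the accelerated path.
import numpy as np

from sklearnex import patch_sklearn

patch_sklearn()  # replace supported scikit-learn estimators with accelerated ones

from sklearn.manifold import TSNE  # import after patching

X = np.random.default_rng(0).random((5_000, 50))  # synthetic data for illustration

embedding = TSNE(
    n_components=2,       # accelerated; n_components=3 is not
    method="barnes_hut",  # accelerated; method='exact' is not
    metric="euclidean",   # accelerated; 'minkowski' requires p=2
    verbose=0,            # accelerated; verbose != 0 is not
).fit_transform(X)
```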
Random Forest
Random Forest models accelerated with Intel® Extension for Scikit-learn* and using the `hist` splitting method discretize training data by creating a histogram with a configurable number of bins. The following keyword arguments can be used to influence the created histogram.
| Keyword argument | Possible values | Default value | Description |
|---|---|---|---|
| `maxBins` | [0, inf) | 256 | Number of bins in the histogram with the discretized training data. The value 0 disables binning, and the original training data is used instead. |
| `minBinSize` | [1, inf) | 1 | Minimum number of training data points in each bin after discretization. |
| `binningStrategy` | `quantiles`, `averages` | `quantiles` | Selects the algorithm used to calculate bin edges. |
Note that using discretized training data can greatly accelerate model training, especially for larger data sets. However, due to the reduced fidelity of the data, the resulting model can score worse on quality metrics than a model trained on the original data. In such cases, the number of bins can be increased with the `maxBins` parameter, or binning can be disabled entirely by setting `maxBins=0`.
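As a sketch of this trade-off (assuming, on our part, that the keyword arguments from the table above are accepted by the patched estimator's constructor; consult the Supported Algorithms reference for the authoritative API):

```python
# A sketch of the binning trade-off. Assumption: the keyword arguments from
# the table above (e.g. maxBins) are accepted by the patched estimator's
# constructor; check the Supported Algorithms reference for the exact API.
from sklearnex import patch_sklearn

patch_sklearn()  # replace supported scikit-learn estimators with accelerated ones

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier  # import after patching

X, y = make_classification(n_samples=50_000, n_features=20, random_state=0)

# Default histogram settings: fast training on discretized data.
clf_default = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# If quality metrics suffer, increase the number of bins for higher fidelity...
clf_finer = RandomForestClassifier(n_estimators=100, maxBins=512, random_state=0).fit(X, y)

# ...or disable binning entirely and train on the original data.
clf_exact = RandomForestClassifier(n_estimators=100, maxBins=0, random_state=0).fit(X, y)
```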