For evaluating multiple scores, use sklearn.model_selection.cross_validate instead. (param0) was passed.
Package:
scikit-learn

Exception Class:
ValueError
Raise code
        return None
    else:
        raise TypeError(
            "If no scoring is specified, the estimator passed should "
            "have a 'score' method. The estimator %r does not."
            % estimator)
elif isinstance(scoring, Iterable):
    raise ValueError("For evaluating multiple scores, use "
                     "sklearn.model_selection.cross_validate instead. "
                     "{0} was passed.".format(scoring))
else:
    raise ValueError("scoring value should either be a callable, string or"
                     " None. %r was passed" % scoring)
Links to the raise (1)
https://github.com/scikit-learn/scikit-learn/blob/c67518350f91072f9d37ed09c5ef7edf555b6cf6/sklearn/metrics/_scorer.py#L453
Ways to fix
Summary:
This exception can be raised when calling the check_scoring function from sklearn.metrics._scorer. The function takes a required estimator and an optional scoring parameter, which defaults to None. If the value passed for scoring is a non-string iterable (such as a list or NumPy array), this exception is raised. To avoid it, either leave scoring as None or pass a single string from the SCORERS dictionary. Some valid strings you can use are:
- r2
- max_error
- neg_median_absolute_error
- accuracy
- rand_score
- roc_auc
Passing None, a single valid string, or a callable scorer will resolve the exception.
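Besides None or a string, a single callable scorer is also accepted, e.g. one built with sklearn.metrics.make_scorer. A minimal sketch (the RandomForestClassifier here is just a placeholder estimator):

```python
# scoring may also be a single callable; check_scoring validates it and
# returns it as the scorer to use.
from sklearn.metrics import make_scorer, accuracy_score
from sklearn.metrics._scorer import check_scoring
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(random_state=0)
scorer = check_scoring(clf, scoring=make_scorer(accuracy_score))
print(callable(scorer))  # the callable passes validation; no ValueError
```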
Code to Reproduce the Error (Wrong):
from sklearn.metrics._scorer import check_scoring
import numpy as np
from sklearn.multiclass import OutputCodeClassifier
from sklearn.ensemble import RandomForestClassifier

clf = OutputCodeClassifier(
    estimator=RandomForestClassifier(random_state=0),
    random_state=0,
    code_size=1.5)
arr = np.array([1, 2, 3, 4, 5])
check_scoring(clf, scoring=arr)
Error Message:
ValueError Traceback (most recent call last)
<ipython-input-36-527bb45926eb> in <module>()
9 code_size=1.5)
10 arr = np.array([1,2,3,4,5])
---> 11 check_scoring(clf, scoring=arr)
/usr/local/lib/python3.7/dist-packages/sklearn/metrics/_scorer.py in check_scoring(estimator, scoring, allow_none)
428 raise ValueError("For evaluating multiple scores, use "
429 "sklearn.model_selection.cross_validate instead. "
--> 430 "{0} was passed.".format(scoring))
431 else:
432 raise ValueError("scoring value should either be a callable, string or"
ValueError: For evaluating multiple scores, use sklearn.model_selection.cross_validate instead. [1 2 3 4 5] was passed.
Working Version Using a String (Valid):
from sklearn.metrics._scorer import check_scoring
from sklearn.multiclass import OutputCodeClassifier
from sklearn.ensemble import RandomForestClassifier

clf = OutputCodeClassifier(
    estimator=RandomForestClassifier(random_state=0),
    random_state=0,
    code_size=1.5)
check_scoring(clf, scoring='r2')
Working Version Using None (Valid):
from sklearn.metrics._scorer import check_scoring
from sklearn.multiclass import OutputCodeClassifier
from sklearn.ensemble import RandomForestClassifier

clf = OutputCodeClassifier(
    estimator=RandomForestClassifier(random_state=0),
    random_state=0,
    code_size=1.5)
check_scoring(clf, scoring=None)
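If you actually do want several metrics at once, the exception message itself points to cross_validate, which accepts a list of scorer names. A sketch, using a toy dataset from make_classification rather than the classifier above:

```python
# cross_validate evaluates several scorers in one call and returns a dict
# with one test_<name> entry per metric.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

X, y = make_classification(n_samples=60, random_state=0)
clf = RandomForestClassifier(random_state=0)
results = cross_validate(clf, X, y, cv=3, scoring=['accuracy', 'roc_auc'])
print(sorted(results))  # fit_time, score_time, test_accuracy, test_roc_auc
```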