CatBoost - Classification Metrics
CatBoost is widely used for classification tasks, where the goal is to assign each data point to one of several categories. CatBoost provides a variety of metrics for analyzing how well a model classifies data.
The following metrics are essential for evaluating CatBoost classification performance:
Accuracy
Accuracy is the percentage of predictions the model gets right: the number of correct predictions divided by the total number of predictions. It is intuitive and often a good first metric, but it can be misleading on imbalanced datasets, where one class greatly outnumbers the others.
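The definition above can be checked by hand. A minimal sketch with NumPy (the label arrays are made up for illustration):

```python
import numpy as np

# Hypothetical true labels and model predictions for 8 samples
y_true = np.array([0, 1, 2, 2, 1, 0, 1, 2])
y_pred = np.array([0, 1, 2, 1, 1, 0, 1, 2])

# Accuracy = number of correct predictions / total predictions
accuracy = np.mean(y_true == y_pred)
print(f'Accuracy: {accuracy:.2f}')  # 7 of 8 correct -> 0.88
```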
To compute accuracy with CatBoost, we import numpy, catboost, sklearn.datasets, and sklearn.model_selection:
import numpy as np
from catboost import CatBoostClassifier, Pool
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
# Loading the Iris dataset
iris = load_iris()
X, y = iris.data, iris.target
# Splitting the data into training and testing datasets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Create a CatBoostClassifier
model = CatBoostClassifier(iterations=100, learning_rate=0.1, depth=6, loss_function='MultiClass', verbose=0)
# Create a Pool object
train_pool = Pool(X_train, label=y_train)
test_pool = Pool(X_test, label=y_test)
# Train the model
model.fit(train_pool)
# Evaluate the model
metrics = model.eval_metrics(test_pool, metrics=['Accuracy'], plot=True)
# Print the evaluation metrics
accuracy = metrics['Accuracy'][-1]
print(f'Accuracy is: {accuracy:.2f}')
Output
The result shows that the model fits the dataset very well and correctly predicted every instance in the test set:
MetricVisualizer(layout=Layout(align_self='stretch', height='500px'))
Accuracy: 1.00
Multi-Class Log Loss
Multi-class log loss, also called cross-entropy for multi-class classification, is a variant of log loss designed for problems with more than two classes. It measures how closely the predicted probability distribution over the classes matches the actual class labels.
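Concretely, multi-class log loss averages the negative log of the probability the model assigned to the true class. A hand-computed sketch (the probability matrix is made up for illustration):

```python
import numpy as np

# Predicted probability distribution for 3 samples over 3 classes (illustrative)
probs = np.array([
    [0.9, 0.05, 0.05],
    [0.1, 0.8,  0.1],
    [0.2, 0.2,  0.6],
])
y_true = np.array([0, 1, 2])  # true class index of each sample

# Multi-class log loss = mean of -log(probability assigned to the true class)
loss = -np.mean(np.log(probs[np.arange(len(y_true)), y_true]))
print(f'Multi-Class Loss: {loss:.4f}')  # -mean(log 0.9, log 0.8, log 0.6) -> 0.2798
```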
import numpy as np
from catboost import CatBoostClassifier, Pool
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
# Loading the Iris dataset
iris = load_iris()
X, y = iris.data, iris.target
# Splitting the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Create a CatBoostClassifier
model = CatBoostClassifier(iterations=100, learning_rate=0.1, depth=6, loss_function='MultiClass', verbose=0)
# Create a Pool object
train_pool = Pool(X_train, label=y_train)
test_pool = Pool(X_test, label=y_test)
# Train the model
model.fit(train_pool)
# Evaluate the model for multi-class classification
metrics = model.eval_metrics(test_pool, metrics=['MultiClass'], plot=True)
# Print the evaluation metrics
multi_class_loss = metrics['MultiClass'][-1]
print(f'Multi-Class Loss: {multi_class_loss:.2f}')
Output
In the result below, a multi-class loss of 0.03 indicates that the model performs well at multi-class classification on the test dataset:
MetricVisualizer(layout=Layout(align_self='stretch', height='500px'))
Multi-Class Loss: 0.03
Binary Log Loss
Binary log loss measures the difference between the true labels and the predicted probabilities; lower values indicate better performance. This metric is useful when well-calibrated probabilities matter, such as in fraud detection or medical diagnosis. It is typically discussed in the context of binary classification, where the dataset has exactly two classes.
Since the Iris dataset has three classes, it is not suitable for this metric. Instead we use the Breast Cancer dataset, which has exactly two classes, representing the presence or absence of breast cancer.
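For two classes, the formula reduces to -mean(y*log(p) + (1-y)*log(1-p)), where p is the predicted probability of the positive class. A minimal sketch (labels and probabilities are made up for illustration):

```python
import numpy as np

# Illustrative true labels and predicted probabilities of the positive class
y_true = np.array([1, 0, 1, 1, 0])
p = np.array([0.9, 0.2, 0.7, 0.95, 0.1])

# Binary log loss: penalizes confident wrong predictions heavily
logloss = -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))
print(f'Log Loss: {logloss:.4f}')  # -> 0.1684
```

Note how each term uses log(p) when the true label is 1 and log(1-p) when it is 0, so a confident prediction on the wrong side (for example p=0.95 with label 0) would dominate the average.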
import numpy as np
from catboost import CatBoostClassifier, Pool
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
# Loading the Breast Cancer dataset
data = load_breast_cancer()
X, y = data.data, data.target
# Splitting the data into training and testing datasets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Create a CatBoostClassifier
model = CatBoostClassifier(iterations=100, learning_rate=0.1, depth=6, verbose=0)
# Create a Pool object
train_pool = Pool(X_train, label=y_train)
test_pool = Pool(X_test, label=y_test)
# Train the model
model.fit(train_pool)
# Evaluate the model
metrics = model.eval_metrics(test_pool, metrics=['Logloss'], plot=False)
# Print the evaluation metrics
logloss = metrics['Logloss'][-1]
print(f'Log Loss (Cross-Entropy): {logloss:.2f}')
Output
The above code produces the following output:
Log Loss (Cross-Entropy): 0.08
AUC-ROC and AUC-PRC
The area under the receiver operating characteristic curve (AUC-ROC) and the area under the precision-recall curve (AUC-PRC) are key metrics for binary classification. AUC-ROC measures the model's ability to separate the positive and negative classes, while AUC-PRC focuses on the trade-off between precision and recall.
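Independently of CatBoost, both areas can be cross-checked with scikit-learn. A sketch on illustrative scores (the arrays are made up; AUC-ROC equals the fraction of positive/negative pairs the model ranks correctly):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, precision_recall_curve, auc

# Illustrative binary labels and predicted scores for 6 samples
y_true = np.array([0, 0, 1, 1, 0, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7])

# AUC-ROC: probability that a random positive is scored above a random negative
print(f'AUC-ROC: {roc_auc_score(y_true, scores):.2f}')  # 8 of 9 pairs ranked correctly -> 0.89

# AUC-PRC: area under the precision-recall curve
precision, recall, _ = precision_recall_curve(y_true, scores)
print(f'AUC-PRC: {auc(recall, precision):.2f}')
```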
import catboost
from catboost import CatBoostClassifier, Pool
from sklearn import datasets
from sklearn.model_selection import train_test_split
# Loading the Iris dataset
iris = datasets.load_iris()
X = iris.data
y = iris.target
# Converting to binary classification by mapping
y_binary = (y == 2).astype(int)
# Splitting the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y_binary, test_size=0.2, random_state=42)
# Creating a CatBoost classifier with AUC-ROC metric
model = CatBoostClassifier(iterations=500, random_seed=42, eval_metric='AUC')
# Converting the training data into a CatBoost Pool
train_pool = Pool(X_train, label=y_train)
# Training the model
model.fit(train_pool, verbose=100)
validation_pool = Pool(X_test, label=y_test)
eval_result = model.eval_metrics(validation_pool, ['AUC'])['AUC']
metrics = model.eval_metrics(validation_pool, metrics=['PRAUC'], plot=True)
auc_pr = metrics['PRAUC'][-1]
# Print the evaluation metrics
print(f'AUC-PR: {auc_pr:.2f}')
print(f"AUC-ROC: {eval_result[-1]:.4f}")
Output
This produces the following result:
Learning rate set to 0.007867
0:    total: 2.09ms   remaining: 1.04s
100:  total: 42.3ms   remaining: 167ms
200:  total: 67.9ms   remaining: 101ms
300:  total: 89.8ms   remaining: 59.4ms
400:  total: 110ms    remaining: 27ms
499:  total: 129ms    remaining: 0us
MetricVisualizer(layout=Layout(align_self='stretch', height='500px'))
AUC-PR: 1.00
AUC-ROC: 1.0000
F1 Score
The F1 score is the harmonic mean of the model's precision, i.e. how often its positive predictions are correct, and its recall, i.e. how many of the actual positives it identifies. This metric is well suited to balancing the trade-off between false positives and false negatives, and better models tend to have higher F1 scores.
import numpy as np
from catboost import CatBoostClassifier, Pool
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
# Loading the Breast Cancer dataset
data = load_breast_cancer()
X, y = data.data, data.target
# Splitting the data into training and testing datasets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Creating a CatBoostClassifier
model = CatBoostClassifier(iterations=100, learning_rate=0.1, depth=6, verbose=0)
# Creating a Pool object
train_pool = Pool(X_train, label=y_train)
test_pool = Pool(X_test, label=y_test)
# Train the model
model.fit(train_pool)
# Evaluate the model
metrics = model.eval_metrics(test_pool, metrics=['F1'], plot=True)
# Print the evaluation metrics
f1 = metrics['F1'][-1]
print(f'F1 Score: {f1:.2f}')
Output
This produces the following result:
MetricVisualizer(layout=Layout(align_self='stretch', height='500px'))
F1 Score: 0.98
Summary
In summary, CatBoost offers a wide range of metrics and evaluation tools that greatly simplify model selection and assessment. It starts with built-in evaluation metrics for classification tasks, such as Logloss and Accuracy, and also supports custom user-defined metrics. Early stopping, cross-validation, and the ability to monitor multiple metrics during training together ensure a thorough evaluation, giving classification tasks a broad set of metrics to draw on.