
AdaBoost - Ensemble Method, Classification: Supervised Machine Learning

Published 2024-11-09

Boosting

Definition and Purpose

Boosting is an ensemble learning technique used in machine learning to improve the accuracy of models. It combines multiple weak classifiers (models that perform slightly better than random guessing) into a strong classifier. The main idea is to apply the weak classifiers to the data sequentially, each one correcting the errors made by its predecessors, and thus improve overall performance.

Key Objectives:

  • Improve Accuracy: Enhance the prediction accuracy by combining the outputs of several weak classifiers.
  • Reduce Bias and Variance: Boosting chiefly reduces bias by making each new learner focus on the errors that remain; averaging many learners' votes can also lower variance, yielding better generalization.
  • Handle Complex Data: Effectively model complex relationships in the data.

AdaBoost (Adaptive Boosting)

Definition and Purpose

AdaBoost, short for Adaptive Boosting, is a popular boosting algorithm. It adjusts the weights of incorrectly classified instances so that subsequent classifiers focus more on difficult cases. The main purpose of AdaBoost is to improve the performance of weak classifiers by emphasizing the hard-to-classify examples in each iteration.

Key Objectives:

  • Weight Adjustment: Increase the weight of misclassified instances to ensure the next classifier focuses on them.
  • Sequential Learning: Build classifiers sequentially, where each new classifier corrects the errors of its predecessor.
  • Improved Performance: Combine weak classifiers to form a strong classifier with better predictive power.

How AdaBoost Works

  1. Initialize Weights:

    • Assign equal weights to all training instances. For a dataset with n instances, each instance has a weight of 1/n.
  2. Train Weak Classifier:

    • Train a weak classifier using the weighted dataset.
  3. Calculate Classifier Error:

    • Compute the error of the weak classifier, which is the sum of the weights of misclassified instances.
  4. Compute Classifier Weight:

    • Calculate the weight of the classifier based on its error. The weight is given by: alpha = 0.5 * log((1 - error) / error)
    • A lower error results in a higher classifier weight.
  5. Update Weights of Instances:

    • Adjust the weights of the instances. Increase the weights of misclassified instances and decrease the weights of correctly classified instances.
    • The updated weight for instance i is weight[i] * exp(alpha) if the instance was misclassified and weight[i] * exp(-alpha) if it was classified correctly (equivalently, weight[i] * exp(-alpha * y[i] * h[i]) for labels in {-1, +1}).
    • Normalize the weights to ensure they sum to 1.
  6. Combine Weak Classifiers:

    • The final strong classifier is a weighted sum of the weak classifiers: Final classifier = sign(sum(alpha * weak_classifier))
    • The sign function determines the class label based on the sum. (A minimal from-scratch sketch of these six steps follows below.)
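
To make the six steps concrete, the following is a minimal from-scratch sketch in NumPy. The weak classifiers are decision stumps (single-feature threshold rules); the function names (fit_stump, adaboost_fit, adaboost_predict) are illustrative, not from any library, and labels are assumed to be -1/+1 as in the classic formulation.

import numpy as np

def fit_stump(X, y, w):
    # Step 2: exhaustively pick the (feature, threshold, polarity)
    # with the lowest weighted error under the current weights w.
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for polarity in (1, -1):
                pred = np.where(polarity * (X[:, j] - t) >= 0, 1, -1)
                err = w[pred != y].sum()          # Step 3: weighted error
                if err < best_err:
                    best, best_err = (j, t, polarity), err
    return best, best_err

def adaboost_fit(X, y, n_rounds=10):
    n = len(y)
    w = np.full(n, 1.0 / n)                       # Step 1: equal weights 1/n
    ensemble = []
    for _ in range(n_rounds):
        (j, t, polarity), err = fit_stump(X, y, w)
        err = np.clip(err, 1e-10, 1 - 1e-10)      # guard against log(0)
        alpha = 0.5 * np.log((1 - err) / err)     # Step 4: classifier weight
        pred = np.where(polarity * (X[:, j] - t) >= 0, 1, -1)
        w = w * np.exp(-alpha * y * pred)         # Step 5: up-weight mistakes
        w = w / w.sum()                           # normalize weights to sum to 1
        ensemble.append((j, t, polarity, alpha))
    return ensemble

def adaboost_predict(X, ensemble):
    # Step 6: sign of the alpha-weighted vote of all stumps
    votes = np.zeros(len(X))
    for j, t, polarity, alpha in ensemble:
        votes += alpha * np.where(polarity * (X[:, j] - t) >= 0, 1, -1)
    return np.sign(votes)

As a worked example of step 4: a weak classifier with weighted error 0.2 receives alpha = 0.5 * log(0.8 / 0.2) ≈ 0.693, while an error of 0.5 (no better than random guessing) yields alpha = 0, i.e. no vote. Note also that w * exp(-alpha * y * pred) multiplies a weight by exp(alpha) exactly when y differs from pred, which is the update described in step 5.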

AdaBoost (Binary Classification) Example

AdaBoost, short for Adaptive Boosting, is an ensemble technique that combines multiple weak classifiers to create a strong classifier. This example demonstrates how to implement AdaBoost for binary classification using synthetic data, evaluate the model's performance, and visualize the decision boundary.

Python Code Example

1. Import Libraries

import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report

This block imports the necessary libraries for data manipulation, plotting, and machine learning.

2. Generate Sample Data

np.random.seed(42)  # For reproducibility

# Generate synthetic data for 2 classes
n_samples = 1000
n_samples_per_class = n_samples // 2

# Class 0: Centered around (-1, -1)
X0 = np.random.randn(n_samples_per_class, 2) * 0.7 + [-1, -1]

# Class 1: Centered around (1, 1)
X1 = np.random.randn(n_samples_per_class, 2) * 0.7 + [1, 1]

# Combine the data
X = np.vstack([X0, X1])
y = np.hstack([np.zeros(n_samples_per_class), np.ones(n_samples_per_class)])

# Shuffle the dataset
shuffle_idx = np.random.permutation(n_samples)
X, y = X[shuffle_idx], y[shuffle_idx]

This block generates synthetic data with two features: each class is a Gaussian cluster around its own center, and the target variable y records which cluster a sample was drawn from, simulating a binary classification scenario.
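
As an optional sanity check (not part of the original snippet), you can confirm that the two classes remain balanced after shuffling:

# Count how many samples carry each label; expected output: [500 500]
print(np.bincount(y.astype(int)))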

3. Split the Dataset

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

This block splits the dataset into training and testing sets for model evaluation.
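
Because the split is random, the class proportions in the two sets can drift slightly (the support column in the report below shows 104 vs. 96). If exactly matched proportions matter, train_test_split accepts a stratify argument; an optional variant:

# Optional: stratified split keeps the class ratio identical in train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)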

4. Create and Train the AdaBoost Classifier

base_estimator = DecisionTreeClassifier(max_depth=1)  # Decision stump
model = AdaBoostClassifier(estimator=base_estimator, n_estimators=3, random_state=42)
model.fit(X_train, y_train)

This block initializes the AdaBoost model with a decision stump as the base estimator and trains it using the training dataset.
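
Because boosting is sequential, scikit-learn's AdaBoostClassifier exposes staged_predict, which yields the ensemble's predictions after each boosting round. A short sketch (reusing model, X_test, y_test, and accuracy_score from this example) to watch the ensemble improve round by round:

# Test accuracy after each of the boosting rounds
for i, y_stage in enumerate(model.staged_predict(X_test), start=1):
    print(f"After round {i}: accuracy = {accuracy_score(y_test, y_stage):.4f}")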

5. Make Predictions

y_pred = model.predict(X_test)

This block uses the trained model to make predictions on the test set.
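
Besides hard class labels, the trained classifier can also return per-class probability estimates via predict_proba, which is useful when a confidence score is needed rather than a bare decision:

# Probability estimates for the first five test samples;
# each row holds P(class 0) and P(class 1) and sums to 1
print(model.predict_proba(X_test[:5]))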

6. Evaluate the Model

accuracy = accuracy_score(y_test, y_pred)
conf_matrix = confusion_matrix(y_test, y_pred)
class_report = classification_report(y_test, y_pred)

print(f"Accuracy: {accuracy:.4f}")
print("\nConfusion Matrix:")
print(conf_matrix)
print("\nClassification Report:")
print(class_report)

Output:

Accuracy: 0.9400

Confusion Matrix:
[[96  8]
 [ 4 92]]

Classification Report:
              precision    recall  f1-score   support

         0.0       0.96      0.92      0.94       104
         1.0       0.92      0.96      0.94        96

    accuracy                           0.94       200
   macro avg       0.94      0.94      0.94       200
weighted avg       0.94      0.94      0.94       200

This block calculates and prints the accuracy, confusion matrix, and classification report, providing insights into the model's performance.
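
For reference, the accuracy can be read directly off the confusion matrix: correctly classified instances divided by test-set size, (96 + 92) / 200 = 0.94.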

7. Visualize the Decision Boundary

x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.1),
                     np.arange(y_min, y_max, 0.1))
Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)

plt.figure(figsize=(10, 8))
plt.contourf(xx, yy, Z, alpha=0.4, cmap='RdYlBu')
scatter = plt.scatter(X[:, 0], X[:, 1], c=y, cmap='RdYlBu', edgecolor='black')
plt.xlabel("Feature 1")
plt.ylabel("Feature 2")
plt.title("AdaBoost Binary Classification")
plt.colorbar(scatter)
plt.show()

This block visualizes the decision boundary created by the AdaBoost model, illustrating how the model separates the two classes in the feature space.
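
The 0.1 step of the prediction grid trades resolution for speed: halving the step quadruples the number of points the model must classify, so coarsen it for larger feature ranges or slower models.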

Output:

(Figure: decision-boundary plot titled "AdaBoost Binary Classification", showing the two classes and the regions the model assigns to each.)

This structured approach demonstrates how to implement and evaluate AdaBoost for binary classification tasks, providing a clear understanding of its capabilities. The visualization of the decision boundary aids in interpreting the model's predictions.

AdaBoost (Multiclass Classification) Example

AdaBoost is an ensemble learning technique that combines multiple weak classifiers to create a strong classifier. This example demonstrates how to implement AdaBoost for multiclass classification using synthetic data, evaluate the model's performance, and visualize the decision boundary for five classes.

Python Code Example

1. Import Libraries

import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report

This block imports the necessary libraries for data manipulation, plotting, and machine learning.

2. Generate Sample Data with 5 Classes

np.random.seed(42)  # For reproducibility
n_samples = 2500  # Total number of samples
n_samples_per_class = n_samples // 5  # 500 samples per class

# Class 0: Centered around (-2, -2)
X0 = np.random.randn(n_samples_per_class, 2) * 0.5 + [-2, -2]

# Class 1: Centered around (0, -2)
X1 = np.random.randn(n_samples_per_class, 2) * 0.5 + [0, -2]

# Class 2: Centered around (2, -2)
X2 = np.random.randn(n_samples_per_class, 2) * 0.5 + [2, -2]

# Class 3: Centered around (-1, 2)
X3 = np.random.randn(n_samples_per_class, 2) * 0.5 + [-1, 2]

# Class 4: Centered around (1, 2)
X4 = np.random.randn(n_samples_per_class, 2) * 0.5 + [1, 2]

# Combine the data
X = np.vstack([X0, X1, X2, X3, X4])
y = np.hstack([np.zeros(n_samples_per_class), 
               np.ones(n_samples_per_class),
               np.full(n_samples_per_class, 2),
               np.full(n_samples_per_class, 3),
               np.full(n_samples_per_class, 4)])

# Shuffle the dataset
shuffle_idx = np.random.permutation(n_samples)
X, y = X[shuffle_idx], y[shuffle_idx]

This block generates synthetic data for five classes located in different regions of the feature space.

3. Split the Dataset

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

This block splits the dataset into training and testing sets for model evaluation.

4. Create and Train the AdaBoost Classifier

base_estimator = DecisionTreeClassifier(max_depth=1)  # Decision stump
model = AdaBoostClassifier(estimator=base_estimator, n_estimators=10, random_state=42)
model.fit(X_train, y_train)

This block initializes the AdaBoost classifier with a weak learner (decision stump) and trains it using the training dataset.
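
scikit-learn handles the multiclass case with the SAMME variant of AdaBoost (the exact default behavior depends on the scikit-learn version). With five classes, ten depth-1 stumps are a very weak ensemble (each stump can only split the plane in two), so accuracy is sensitive to n_estimators. A quick illustrative sweep, reusing the split above, to gauge a reasonable value:

# Compare test accuracy across several ensemble sizes (illustrative sweep)
for n in (5, 10, 25, 50, 100):
    m = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                           n_estimators=n, random_state=42)
    m.fit(X_train, y_train)
    print(f"n_estimators={n:3d}: test accuracy = {m.score(X_test, y_test):.4f}")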

5. Make Predictions

y_pred = model.predict(X_test)

This block uses the trained model to make predictions on the test set.

6. Evaluate the Model

accuracy = accuracy_score(y_test, y_pred)
conf_matrix = confusion_matrix(y_test, y_pred)
class_report = classification_report(y_test, y_pred)

print(f"Accuracy: {accuracy:.4f}")
print("\nConfusion Matrix:")
print(conf_matrix)
print("\nClassification Report:")
print(class_report)

Output:

Accuracy: 0.9540

Confusion Matrix:
[[ 97   2   0   0   0]
 [  0  92   3   0   0]
 [  0   4  92   0   0]
 [  0   0   0  86  14]
 [  0   0   0   0 110]]

Classification Report:
              precision    recall  f1-score   support

         0.0       1.00      0.98      0.99        99
         1.0       0.94      0.97      0.95        95
         2.0       0.97      0.96      0.96        96
         3.0       1.00      0.86      0.92       100
         4.0       0.89      1.00      0.94       110

    accuracy                           0.95       500
   macro avg       0.96      0.95      0.95       500
weighted avg       0.96      0.95      0.95       500

This block calculates and prints the accuracy, confusion matrix, and classification report, providing insights into the model's performance.
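
For reference, the accuracy equals the sum of the confusion matrix diagonal divided by the total support: (97 + 92 + 92 + 86 + 110) / 500 = 0.954.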

7. Visualize the Decision Boundary

x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.1),
                     np.arange(y_min, y_max, 0.1))
Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)

plt.figure(figsize=(12, 10))
plt.contourf(xx, yy, Z, alpha=0.4, cmap='viridis')
scatter = plt.scatter(X[:, 0], X[:, 1], c=y, cmap='viridis', edgecolor='black')
plt.xlabel("Feature 1")
plt.ylabel("Feature 2")
plt.title("AdaBoost Multiclass Classification (5 Classes)")
plt.colorbar(scatter)
plt.show()

This block visualizes the decision boundaries created by the AdaBoost classifier, illustrating how the model separates the five classes in the feature space.

Output:

(Figure: decision-boundary plot titled "AdaBoost Multiclass Classification (5 Classes)", showing the five clusters and the region assigned to each class.)

This structured approach demonstrates how to implement and evaluate AdaBoost for multiclass classification tasks, providing a clear understanding of its capabilities and the effectiveness of visualizing decision boundaries.

Reposted from: https://dev.to/harshm03/adaboost-ensemble-method-classification-supervised-machine-learning-31oo