STEM (Science, Technology, Engineering, And Mathematics)
Saturday, October 19, 2024
Introduction To Machine Learning
Machine learning regression models are essential for predicting continuous outcomes based on input features. Below are the equations for some common regression techniques:
### 1. Simple Linear Regression

Simple linear regression is the basic form of regression, where the target variable \( Y \) is modeled as a linear relationship with the input feature \( X \):

$$ Y = \beta_0 + \beta_1 X + \epsilon $$

Where:

- \( Y \): the target variable
- \( X \): the independent variable
- \( \beta_0 \): intercept (constant)
- \( \beta_1 \): coefficient (slope of the line)
- \( \epsilon \): error term
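As a quick illustration (my own sketch, not part of the original post), \( \beta_0 \) and \( \beta_1 \) can be estimated from synthetic data with scikit-learn:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data generated from Y = 2 + 3X + noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 2 + 3 * X[:, 0] + rng.normal(0, 1, size=100)

# Fit Y = beta_0 + beta_1 * X by least squares
model = LinearRegression().fit(X, y)
print("beta_0 (intercept):", model.intercept_)
print("beta_1 (slope):", model.coef_[0])
```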
### 2. Multiple Linear Regression
In multiple linear regression, the model takes multiple input features \( X_1, X_2, \dots, X_n \), and the target variable \( Y \) is modeled as:

$$ Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \dots + \beta_n X_n + \epsilon $$
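A minimal sketch (again my addition, using scikit-learn and synthetic data) fitting three features at once:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data with three features X1, X2, X3
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = 4.0 + X @ np.array([1.5, -2.0, 0.5]) + rng.normal(0, 0.1, size=200)

model = LinearRegression().fit(X, y)
print("beta_0:", model.intercept_)     # intercept
print("beta_1..beta_3:", model.coef_)  # one coefficient per feature
```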
### 3. Polynomial Regression

Polynomial regression is an extension of linear regression in which the relationship between the independent variable \( X \) and the target variable \( Y \) is modeled as an \( n \)-degree polynomial:

$$ Y = \beta_0 + \beta_1 X + \beta_2 X^2 + \dots + \beta_n X^n + \epsilon $$
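One common way to fit this in practice (a sketch, not the post's own code) is to expand \( X \) into polynomial features and then run an ordinary linear fit:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Synthetic quadratic data: Y = 1 - 2X + 0.5X^2 + noise
rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(150, 1))
y = 1 - 2 * X[:, 0] + 0.5 * X[:, 0] ** 2 + rng.normal(0, 0.3, size=150)

# Degree-2 expansion [1, X, X^2] followed by a linear least-squares fit
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, y)
print(model.predict([[1.0]]))  # prediction at X = 1
```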
### 4. Ridge Regression (L2 Regularization)

Ridge regression introduces a regularization term into the least-squares objective to reduce overfitting by penalizing large coefficients:

$$ \min_{\beta} \sum_{i=1}^{n} \left( y_i - \beta_0 - \sum_{j=1}^{p} \beta_j x_{ij} \right)^2 + \lambda \sum_{j=1}^{p} \beta_j^2 $$
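A hedged sketch with scikit-learn (my addition), where the `alpha` argument plays the role of \( \lambda \) in the penalty above:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 5))
y = X @ np.array([2.0, 0.0, -1.0, 0.5, 0.0]) + rng.normal(0, 0.5, size=100)

# alpha corresponds to lambda: larger alpha shrinks the coefficients harder
model = Ridge(alpha=1.0).fit(X, y)
print(model.coef_)
```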
### 5. Lasso Regression (L1 Regularization)

Lasso regression adds an L1 penalty, encouraging sparsity in the coefficients (some coefficients become exactly zero):

$$ \min_{\beta} \sum_{i=1}^{n} \left( y_i - \beta_0 - \sum_{j=1}^{p} \beta_j x_{ij} \right)^2 + \lambda \sum_{j=1}^{p} |\beta_j| $$
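Again as a sketch (not from the post), the sparsity effect is easy to see on synthetic data; note that scikit-learn's `alpha` corresponds to \( \lambda \) only up to the library's own scaling convention:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 5))
y = X @ np.array([2.0, 0.0, -1.0, 0.0, 0.0]) + rng.normal(0, 0.5, size=100)

# The L1 penalty drives some coefficients exactly to zero (sparsity)
model = Lasso(alpha=0.1).fit(X, y)
print(model.coef_)  # expect near-zero entries for the irrelevant features
```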
### 6. Elastic Net Regression

Elastic Net combines both the L1 (Lasso) and L2 (Ridge) penalties, offering a compromise between the two:

$$ \min_{\beta} \sum_{i=1}^{n} \left( y_i - \beta_0 - \sum_{j=1}^{p} \beta_j x_{ij} \right)^2 + \lambda_1 \sum_{j=1}^{p} |\beta_j| + \lambda_2 \sum_{j=1}^{p} \beta_j^2 $$
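A possible sketch (my addition): scikit-learn's `ElasticNet` parameterizes the two penalties through `alpha` and `l1_ratio` rather than separate \( \lambda_1 \) and \( \lambda_2 \):

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 5))
y = X @ np.array([2.0, 0.0, -1.0, 0.0, 0.0]) + rng.normal(0, 0.5, size=100)

# l1_ratio=0.5 splits the penalty evenly between the L1 and L2 terms
model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
print(model.coef_)
```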
### 7. Support Vector Regression (SVR)

Support vector regression attempts to fit the data within a margin of width \( \epsilon \):

$$ \min_{\beta} \frac{1}{2} \|\beta\|^2 \quad \text{subject to} \quad |y_i - (\beta_0 + \beta_1 x_i)| \leq \epsilon \quad \text{for all } i $$
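A brief sketch with scikit-learn's `SVR` (an assumption of mine, not the post's code); `epsilon` sets the width of the insensitive margin and `C` controls how strongly violations are penalized:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(6)
X = rng.uniform(0, 10, size=(100, 1))
y = 2 + 3 * X[:, 0] + rng.normal(0, 0.5, size=100)

# Linear-kernel SVR: fit the data within an epsilon-wide tube
model = SVR(kernel="linear", C=1.0, epsilon=0.5).fit(X, y)
print(model.predict([[5.0]]))
```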
### 8. Decision Tree Regression

The decision tree regression model splits the input space into regions and fits a constant value \( c_j \) in each region \( R_j \):

$$ Y = \sum_{j=1}^{J} c_j I(X \in R_j) $$
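As a sketch (synthetic data, my own example), a shallow tree partitions the input range into a handful of regions, each predicting a constant:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(7)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, size=200)

# max_depth=3 limits the tree to at most 2^3 = 8 constant regions
model = DecisionTreeRegressor(max_depth=3).fit(X, y)
print(model.predict([[2.5], [7.5]]))
```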
### 9. Random Forest Regression

Random Forest is an ensemble method that fits multiple decision trees \( T_1, \dots, T_M \) and averages their predictions:

$$ Y = \frac{1}{M} \sum_{m=1}^{M} T_m(X) $$
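And a closing sketch (again my addition, not the post's code): a forest of trees whose predictions are averaged:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(8)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, size=200)

# 100 trees (M = 100); the forest averages their individual predictions
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(model.predict([[2.5], [7.5]]))
```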
These equations provide a foundation for understanding how machine learning regression models predict continuous outcomes based on input data.