Cost functions vary depending on the problem, but they mainly fall into three types, which are as follows:
Regression models are used to make predictions for continuous variables, such as house prices, weather, or loan amounts. When a cost function is used with regression, it is known as a "Regression Cost Function." In this case, the cost is calculated from a distance-based error:
Error = Actual Output − Predicted Output
There are three commonly used Regression cost functions, which are as follows:
a. Mean Error
In this type of cost function, the error is calculated for each training example, and then the mean of all the error values is taken.
It is one of the simplest cost functions possible.
The errors on the training data can be either negative or positive, so when the mean is taken they can cancel each other out and yield a zero mean error even for a poorly fitting model. For this reason, Mean Error is not a recommended cost function.
However, it provides a base for the other regression cost functions.
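The cancellation problem above can be demonstrated in a few lines of Python; the actual and predicted values here are hypothetical, chosen so the errors cancel exactly:

```python
# Mean Error cancellation: positive and negative errors offset each other.
actual    = [3.0, 5.0, 7.0]
predicted = [5.0, 5.0, 5.0]   # off by -2, 0, and +2

errors = [a - p for a, p in zip(actual, predicted)]
mean_error = sum(errors) / len(errors)

print(errors)      # [-2.0, 0.0, 2.0]
print(mean_error)  # 0.0 -- zero mean error despite clearly wrong predictions
```

Even though two of the three predictions are off by 2, the mean error is exactly zero, which is why this cost function is misleading on its own.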
b. Mean Squared Error (MSE)
Mean Squared Error is one of the most commonly used cost functions. It addresses the drawback of the Mean Error cost function by squaring the difference between the actual and predicted values. Because the difference is squared, the error can never be negative, so errors cannot cancel out.
The formula for calculating MSE is given below:

MSE = (1/N) Σ (y_i − ŷ_i)^2

where y_i is the actual value, ŷ_i is the predicted value, and N is the number of training examples.
Mean squared error is also known as L2 Loss.
In MSE, each error is squared, which shrinks small deviations (those less than 1) and amplifies large ones compared with MAE. If the dataset has outliers that generate large prediction errors, squaring magnifies those errors many times over. Hence, we can say MSE is less robust to outliers.
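A small sketch makes the outlier sensitivity concrete; the data values are hypothetical:

```python
# Mean Squared Error, with and without an outlier.
def mse(actual, predicted):
    """Mean of squared differences between actual and predicted values."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

actual    = [3.0, 5.0, 7.0]
predicted = [5.0, 5.0, 5.0]
print(mse(actual, predicted))                     # 2.666...

# A single outlier (actual value 100) dominates the squared error.
print(mse(actual + [100.0], predicted + [5.0]))   # 2258.25
```

One outlying point raises the cost by almost three orders of magnitude, which is exactly the lack of robustness described above.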
c. Mean Absolute Error (MAE)
Mean Absolute Error also overcomes the issue of the Mean Error cost function, by taking the absolute difference between the actual value and the predicted value.
The formula for calculating Mean Absolute Error is given below:

MAE = (1/N) Σ |y_i − ŷ_i|

where y_i is the actual value, ŷ_i is the predicted value, and N is the number of training examples.
Mean Absolute Error is also known as L1 Loss. It is much less affected by noise and outliers, so it often gives better results when the dataset contains them.
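For comparison with the MSE behavior, here is MAE on a hypothetical dataset containing the same kind of outlier:

```python
# Mean Absolute Error: the outlier contributes only linearly to the cost.
def mae(actual, predicted):
    """Mean of absolute differences between actual and predicted values."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

actual    = [3.0, 5.0, 7.0, 100.0]   # last point is an outlier
predicted = [5.0, 5.0, 5.0, 5.0]
print(mae(actual, predicted))        # 24.75
```

The outlier's contribution grows linearly (|95| rather than 95²), so MAE stays on the same scale as the data while MSE explodes.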
Classification models are used to make predictions for categorical variables, such as predicting 0 or 1, cat or dog, etc. The cost function used in a classification problem is known as a Classification cost function, and it differs from the Regression cost functions above.
One of the commonly used loss functions for classification is cross-entropy loss.
The binary cross-entropy cost function is a special case of categorical cross-entropy for problems with exactly two classes, for example, classifying between red and blue.
To understand it better, suppose there is a single output variable Y that takes the value 0 or 1.
The error in binary classification is calculated as the mean of the cross-entropy over all N training examples, which means:

Cross-Entropy = −(1/N) Σ [y_i log(p_i) + (1 − y_i) log(1 − p_i)]

where y_i is the actual label (0 or 1) and p_i is the predicted probability that y_i = 1.
A multi-class classification cost function is used in classification problems where each instance is assigned to one of more than two classes. As with binary classification, cross-entropy (here, categorical cross-entropy) is the commonly used cost function.
It is designed for multi-class classification with integer target values 0, 1, 2, …, n−1 for n classes.
In a multi-class classification problem, cross-entropy produces a score that summarizes the mean difference between the actual and predicted probability distributions.
The score is minimized during training; a perfect model achieves a cross-entropy of zero.
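A minimal sketch of categorical cross-entropy, assuming one-hot encoded targets and hypothetical predicted probability distributions over three classes:

```python
import math

def categorical_cross_entropy(y_true, y_prob, eps=1e-12):
    """Mean cross-entropy; y_true holds one-hot rows, y_prob probability rows."""
    total = 0.0
    for true_row, prob_row in zip(y_true, y_prob):
        for t, p in zip(true_row, prob_row):
            p = min(max(p, eps), 1.0 - eps)   # clip to avoid log(0)
            total += -t * math.log(p)
    return total / len(y_true)

# Three classes; hypothetical predictions for two samples.
y_true = [[1, 0, 0], [0, 0, 1]]
y_good = [[0.8, 0.1, 0.1], [0.1, 0.1, 0.8]]   # mostly right
y_bad  = [[0.1, 0.8, 0.1], [0.8, 0.1, 0.1]]   # mostly wrong
print(categorical_cross_entropy(y_true, y_good))  # ~0.223
print(categorical_cross_entropy(y_true, y_bad))   # ~2.303
```

Only the probability assigned to the true class contributes (the other terms are multiplied by zero), so the score approaches zero as the model concentrates probability mass on the correct class.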