Understanding machine learning model evaluation metrics — Part 3
Navigating the Metrics Landscape: A Comprehensive Guide to Evaluating Regression Models in Machine Learning
In the previous articles, we introduced various metrics used to evaluate the performance of models in classification problems. You can find these here: Part 1 and Part 2. In this article, we will focus on some common metrics used to evaluate the performance and accuracy of machine learning models for regression problems. These metrics are:
- Mean Absolute Error (MAE)
- Mean Squared Error (MSE)
- Root Mean Squared Error (RMSE)
- Root Mean Squared Logarithmic Error (RMSLE)
We will see what each metric means, how to calculate it, and how to interpret it. We will also show how to implement these metrics. Spoiler alert: it is actually quite simple! Let’s dive in.
Mean Absolute Error (MAE)
Mean Absolute Error (MAE) is a metric that measures the average magnitude of the errors between the predicted values and the actual values. It is calculated as the sum of the absolute differences between the predictions and the observations, divided by the number of samples. The formula for MAE is:

$$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|$$

where $y_i$ is the actual value, $\hat{y}_i$ is the predicted value, and $n$ is the sample size.
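As a quick sketch of how this looks in code (using hypothetical example values and assuming NumPy arrays of actual and predicted values), MAE can be computed directly from the formula or with scikit-learn's mean_absolute_error:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error

# Hypothetical actual and predicted values, for illustration only
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

# MAE from the formula: mean of the absolute differences
mae_manual = np.mean(np.abs(y_true - y_pred))

# The same result using scikit-learn
mae_sklearn = mean_absolute_error(y_true, y_pred)

print(mae_manual, mae_sklearn)  # both print 0.5
```

Both approaches give the same value; the manual version simply makes the formula explicit, while the scikit-learn helper is convenient when you are already working within that ecosystem.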