
EvaluateModels: Time Series Analysis and Visualization

Introduction

This Python script, created with respect to the divine, provides tools for evaluating and visualizing the performance of forecasting models. Aimed at data scientists and analysts, it can plot mean absolute error (MAE) over different forecast horizons, compare models, visualize trend and seasonality components, and more. This document is a guide to using the EvaluateModels class in your projects.

Features

  • Plot MAE Over Time: Visualizes the MAE for different forecast horizons to assess model accuracy.
  • Plot Confidence Intervals: Displays confidence intervals for forecasts, with an option to include actual values for comparison.
  • Compare Models: Compares the MAE of different models across forecast horizons.
  • Trend and Seasonality Visualization: Plots the trend, seasonal, and observed components of time series data.
  • Feature Importance: Visualizes the importance of features used by the model.
  • Anomaly Detection: Identifies and visualizes anomalies where the absolute error exceeds a defined threshold (see the sketch after this list).
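
The anomaly-detection idea can be summarized in a few lines of numpy. This is a minimal sketch of threshold-based flagging, not the class's actual implementation; the series and variable names are illustrative:

import numpy as np

# Hypothetical series: observed values and model forecasts.
actual = np.array([10.0, 10.5, 11.0, 10.8, 25.0, 11.2])
forecast = np.array([10.1, 10.4, 10.9, 11.0, 11.1, 11.3])
threshold = 2.0  # flag points whose absolute error exceeds this value

# Indices where the absolute error crosses the threshold.
errors = np.abs(actual - forecast)
anomaly_idx = np.where(errors > threshold)[0]
print(anomaly_idx)  # -> [4]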

Prerequisites

Before using the EvaluateModels class, ensure that you have installed the following Python packages:

  • matplotlib
  • numpy
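
Both can be installed with pip:

pip install matplotlib numpy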

Usage

To use the EvaluateModels class, first import the necessary packages and then instantiate the class:

import matplotlib.pyplot as plt
import numpy as np
from evaluate import EvaluateModels
evaluator = EvaluateModels()

After instantiation, you can call any of the methods provided by the class as needed. For example, to plot the MAE over different forecast horizons:

mae_dict = {1: 0.1, 2: 0.15, 10: 0.2}
evaluator.plot_mae_over_time(mae_dict)
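
The other plotting methods follow the same pattern. As a hypothetical example, assuming a compare_models method that accepts a mapping from model names to per-horizon MAE dictionaries (the method name and signature are assumptions; check evaluate.py for the exact API):

# Assumed interface: model name -> {horizon: MAE} dictionary.
models_mae = {
    'ARIMA': {1: 0.10, 2: 0.15, 10: 0.20},
    'Naive': {1: 0.12, 2: 0.18, 10: 0.25},
}
evaluator.compare_models(models_mae)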

Mistakes and Corrections

To err is human, and nobody likes a perfect person! If you come across any mistakes or have questions, feel free to raise an issue or submit a pull request. Your contributions to improving the content are highly appreciated. Please refer to the GitHub contributing guidelines for more information on how to participate in development.

Contact Information

For further inquiries or contributions, feel free to reach out through the following channels:

Telegram: https://t.me/PythonLearn0
Email: [email protected]

Acknowledgments

This script was developed with the aim of contributing positively to the learning community's efforts in data analysis and model evaluation.
