Official PyTorch and River implementation of "Adaptive Machine Learning for Resource-Constrained Environments", presented at the DELTA 2024 workshop at ACM SIGKDD KDD 2024, Barcelona, Spain.
Please cite the following paper when using AML4CPU:
S. A. Cajas, J. Samanta, A. L. Suárez-Cetrulo, and R. S. Carbajo, "Adaptive Machine Learning for Resource-Constrained Environments," in Discovering Drift Phenomena in Evolving Landscape (DELTA 2024) Workshop at ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2024), Barcelona, Catalonia, Spain, Aug. 26, 2024.
The paper is available at this link.
- Hold-out script (Experiment 1): `run_holdout.py`
- Pre-sequential script (Experiment 2): `run_pre_sequential.py`
- Zero-shot and fine-tuning with Lag-Llama: `run_finetune.py`
🇪🇺 This work has received funding from the European Union's HORIZON research and innovation programme under grant agreement No. 101070177.
Let's start by setting up your environment:
- Create a Conda environment:

```bash
conda create -n AML4CPU python=3.10.12 -y
conda activate AML4CPU
```

- Clone the repository and install requirements:

```bash
git clone https://github.com/sebasmos/AML4CPU.git
cd AML4CPU
pip install -r requirements.txt
```

- Install PyTorch and other dependencies:

```bash
pip install clean-fid numba numpy torch==2.0.0+cu118 torchvision --force-reinstall --extra-index-url https://download.pytorch.org/whl/cu118
```
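You can optionally verify the installation with a quick sanity check (a minimal sketch; whether CUDA is reported as available depends on your GPU and driver):

```python
# Quick environment sanity check (optional).
import torch

print("PyTorch version:", torch.__version__)         # expected: 2.0.0+cu118
print("CUDA available:", torch.cuda.is_available())  # True on a working CUDA 11.8 setup
```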
Run the holdout evaluation script:
```bash
python run_holdout.py --output_file 'exp1' --output_folder Exp1 --num_seeds 20
```
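For reference, the hold-out protocol of Experiment 1 boils down to fitting a model on an initial portion of the series and scoring it on the held-out remainder. Below is a minimal illustrative sketch with a River regressor; the synthetic CPU-usage signal, the 70/30 split, the lagged features, and the choice of `ARFRegressor` are assumptions for illustration only, not the configuration used in `run_holdout.py`.

```python
# Illustrative hold-out evaluation sketch (not run_holdout.py itself).
# The synthetic series, 70/30 split and ARFRegressor choice are assumptions.
import numpy as np
from river import forest, metrics

rng = np.random.default_rng(0)
series = 50 + 10 * np.sin(np.arange(1000) / 24) + rng.normal(0, 2, 1000)  # fake CPU-usage signal

# Lagged features: predict y[t] from the previous 4 observations.
lags = 4
X = [{f"lag_{i}": series[t - i] for i in range(1, lags + 1)} for t in range(lags, len(series))]
y = series[lags:]

split = int(0.7 * len(y))                    # hold out the last 30% for testing
model = forest.ARFRegressor(seed=42)
mae = metrics.MAE()

for x_t, y_t in zip(X[:split], y[:split]):   # train on the first 70%
    model.learn_one(x_t, y_t)

for x_t, y_t in zip(X[split:], y[split:]):   # evaluate on the held-out 30%
    mae.update(y_t, model.predict_one(x_t))

print("Hold-out MAE:", mae.get())
```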
Run the pre-sequential evaluation script:
```bash
python run_pre_sequential.py --output_file 'exp2' --eval --output_folder Exp2 --num_seeds 20
```
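The pre-sequential (prequential, i.e. interleaved test-then-train) protocol scores each incoming sample before the model learns from it. Here is a minimal illustrative loop with River; the built-in `TrumpApproval` stream and the model/metric choices are stand-in assumptions, not the data or configuration of `run_pre_sequential.py`.

```python
# Illustrative prequential (test-then-train) loop; dataset and model are assumptions.
from river import datasets, forest, metrics

model = forest.ARFRegressor(seed=42)
mae = metrics.MAE()

# TrumpApproval is a small built-in regression stream used only as a stand-in
# for the CPU-usage data from the paper.
for x_t, y_t in datasets.TrumpApproval():
    y_pred = model.predict_one(x_t)   # 1) test on the new sample first
    mae.update(y_t, y_pred)           # 2) update the running error
    model.learn_one(x_t, y_t)         # 3) then train on it

print("Prequential MAE:", mae.get())
```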
Test Lag-Llama zero-shot over different context lengths (32, 64, 128, 256), with and without RoPE:

```bash
python run_finetune.py --output_file zs --output_folder zs --model_path ./models/lag_llama_models/lag-llama.ckpt --eval_multiple_zero_shot --max_epochs 50 --num_seeds 20
```
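Under the hood, zero-shot forecasting with Lag-Llama amounts to wrapping the pretrained checkpoint in a GluonTS predictor and sampling forecasts for each context window. The sketch below loosely follows the public Lag-Llama demo; the checkpoint hyperparameter keys, the estimator arguments, and `test_dataset` are assumptions and may differ from the version pinned in this repo.

```python
# Illustrative zero-shot sketch, loosely following the public Lag-Llama demo.
# Checkpoint keys and estimator arguments are assumptions and may differ per version.
import torch
from gluonts.evaluation import make_evaluation_predictions
from lag_llama.gluon.estimator import LagLlamaEstimator

ckpt_path = "./models/lag_llama_models/lag-llama.ckpt"
ckpt = torch.load(ckpt_path, map_location="cpu")
model_kwargs = ckpt["hyper_parameters"]["model_kwargs"]   # assumed checkpoint layout

estimator = LagLlamaEstimator(
    ckpt_path=ckpt_path,
    prediction_length=1,                 # one-step-ahead forecasting (assumed)
    context_length=32,                   # swept over 32/64/128/256 in the experiments
    input_size=model_kwargs["input_size"],
    n_layer=model_kwargs["n_layer"],
    n_embd_per_head=model_kwargs["n_embd_per_head"],
    n_head=model_kwargs["n_head"],
    scaling=model_kwargs["scaling"],
    time_feat=model_kwargs["time_feat"],
    # The RoPE variant additionally passes a rope_scaling argument
    # (assumption based on the demo notebook).
)

# Build a predictor directly from the pretrained weights (no training step).
transformation = estimator.create_transformation()
lightning_module = estimator.create_lightning_module()
predictor = estimator.create_predictor(transformation, lightning_module)

# `test_dataset` is any GluonTS-compatible dataset holding the target series (assumption):
# forecast_it, ts_it = make_evaluation_predictions(
#     dataset=test_dataset, predictor=predictor, num_samples=100)
```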
Fine-tune and test Lag-Llama over different context lengths (32, 64, 128, 256), with and without RoPE:

```bash
python run_finetune.py --output_file exp3_REAL_parallel --output_folder Exp3 --model_path ./models/lag_llama_models/lag-llama.ckpt --max_epochs 50 --num_seeds 20 --eval_multiple
```
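Fine-tuning uses the same estimator setup, but calls the estimator's training loop on the training split instead of building a predictor straight from the checkpoint. A minimal, hedged continuation of the zero-shot sketch above (the trainer settings and `train_dataset` are assumptions, not the repo's exact configuration):

```python
# Illustrative fine-tuning continuation of the zero-shot sketch above.
# `train_dataset` and the trainer settings are assumptions.
finetune_estimator = LagLlamaEstimator(
    ckpt_path=ckpt_path,
    prediction_length=1,
    context_length=32,
    input_size=model_kwargs["input_size"],
    n_layer=model_kwargs["n_layer"],
    n_embd_per_head=model_kwargs["n_embd_per_head"],
    n_head=model_kwargs["n_head"],
    scaling=model_kwargs["scaling"],
    time_feat=model_kwargs["time_feat"],
    trainer_kwargs={"max_epochs": 50},   # mirrors --max_epochs 50 above
)

# GluonTS estimators return a ready-to-use predictor after training:
# predictor = finetune_estimator.train(training_data=train_dataset)
```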
This project is licensed under the MIT License. See LICENSE for details.
We are grateful to our colleagues at the EU Horizon project ICOS and Ireland’s Centre for Applied AI for helping to start and shape this research effort. This work has been made possible by funding from the European Union’s HORIZON research and innovation programme (Grant No. 101070177).
Please cite as:
```bibtex
@inproceedings{Cajas2024,
  author    = {Sebastián Andrés Cajas and
               Jaydeep Samanta and
               Andrés L. Suárez-Cetrulo and
               Ricardo Simón Carbajo},
  title     = {Adaptive Machine Learning for Resource-Constrained Environments},
  booktitle = {Discovering Drift Phenomena in Evolving Landscape (DELTA 2024) Workshop at ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2024)},
  address   = {Barcelona, Catalonia, Spain},
  year      = {2024},
  month     = {August 26},
}
```