CLAMP-ViT: Contrastive Data-Free Learning for Adaptive Post-Training Quantization of ViTs

Please follow these instructions to set up the code and evaluate mixed-precision quantization (MPQ) results for different ViT models.
Follow these steps to clone the repository and set up your environment.
- Clone the repository
git clone https://github.com/ARamachandran2000/CLAMP-ViT.git
cd CLAMP-ViT
- Requirements
- Packages
- python>=3.8
- pytorch>=1.5
- matplotlib
- pandas
- timm
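As an optional sanity check (not part of the repository's own tooling), the following snippet verifies that the installed packages satisfy the requirements listed above:

```python
# Optional environment sanity check for the package requirements listed above.
import sys

assert sys.version_info >= (3, 8), "Python >= 3.8 is required"

import torch
import timm
import pandas
import matplotlib

print(f"python    : {sys.version.split()[0]}")
print(f"pytorch   : {torch.__version__}")   # expected >= 1.5
print(f"timm      : {timm.__version__}")
print(f"pandas    : {pandas.__version__}")
print(f"matplotlib: {matplotlib.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")
```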
- Data Preparation
To evaluate the models, you will need the standard ImageNet dataset. Please download and organize it as follows:
    imagenet/
    ├── train/
    └── val/
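For reference, a minimal loader for the validation split could look like the sketch below. It assumes the standard torchvision `ImageFolder` layout (`val/<class>/<image>.JPEG`) and the usual 224×224 ImageNet evaluation transforms; the data pipeline built inside `test_clamp_vit.py` may differ.

```python
# Minimal sketch of an ImageNet validation loader matching the layout above.
# Assumes the standard torchvision ImageFolder structure and the common
# 224x224 evaluation transforms; test_clamp_vit.py may build this differently.
import torchvision.datasets as datasets
import torchvision.transforms as transforms
from torch.utils.data import DataLoader

val_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

val_dataset = datasets.ImageFolder("imagenet/val", transform=val_transform)
val_loader = DataLoader(val_dataset, batch_size=64, shuffle=False,
                        num_workers=8, pin_memory=True)
```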
- Evaluation
To run experiments on the different quantized models, run the following:

    python test_clamp_vit.py --model <MODEL-NAME> --weight-path ./quantized/<MODEL-NAME>.pth --data <PATH-TO-IMAGENET_2012>
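The metric reported by the script is standard top-1 accuracy on the ImageNet validation set. The sketch below only illustrates how that number is computed; the model construction, checkpoint format, and quantization wrappers used by `test_clamp_vit.py` are not reproduced here, and all names in the sketch are illustrative.

```python
# Illustrative top-1 accuracy loop; the actual quantized-model loading in
# test_clamp_vit.py (model names, checkpoint format, quantization wrappers)
# may differ -- this only shows how the reported metric is computed.
import torch

@torch.no_grad()
def top1_accuracy(model, loader, device="cuda"):
    model.eval().to(device)
    correct, total = 0, 0
    for images, targets in loader:
        images, targets = images.to(device), targets.to(device)
        preds = model(images).argmax(dim=1)   # class with the highest logit
        correct += (preds == targets).sum().item()
        total += targets.numel()
    return 100.0 * correct / total
```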
- Results for Image Classification
W/A denote the average weight and activation bit-widths of the mixed-precision configuration.

| Model | Top-1 Accuracy (%) |
| --- | --- |
| DeiT-T (W4.9/A6.2) | 71.69 |
| DeiT-S (W4.7/A5.9) | 79.43 |
| Swin-T (W5.5/A6.9) | 81.78 |
| Swin-S (W5.1/A6.3) | 82.86 |
Our repository is adapted from FQ-ViT (link) and Evol-Q (link).
This “research quality code” is for Non-Commercial purposes and provided by the contributors “As Is” without any express or implied warranty of any kind. The organizations involved (Intel or Georgia Tech) do not own the rights to this data set and do not confer any rights to it. These organizations do not warrant or assume responsibility for the accuracy or completeness of any information, text, graphics, links, or other items within the code. A thorough security review has not been performed on this code. Additionally, this repository may contain components that are out of date or contain known security vulnerabilities.