Tuning API in Katib for LLMs #2291
I presume that the initiative here is motivated by the recent trend in the ML space of fine-tuning pre-trained models (LLMs or otherwise) on custom datasets instead of training models from scratch. This requires enriching the interface provided to users for training and hyperparameter tuning.

Training (training-operator): the `train` function takes the following arguments and is essentially an abstraction over the `create_job` function that enables model fine-tuning.
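The argument list referenced above did not survive extraction; as a stand-in, here is a minimal sketch of a `train` call based on my reading of the Training Operator SDK at the time. The `HuggingFace*Params` classes and argument names should be checked against the installed SDK version.

```python
# Sketch of the training-operator `train` API (verify class and argument
# names against the installed kubeflow-training SDK version).
import transformers
from kubeflow.training import TrainingClient
from kubeflow.storage_initializer.hugging_face import (
    HuggingFaceModelParams,
    HuggingFaceDatasetParams,
    HuggingFaceTrainerParams,
)

TrainingClient().train(
    name="fine-tune-bert",
    num_workers=2,
    num_procs_per_worker=1,
    # Model provider: which pre-trained model to download and how to load it.
    model_provider_parameters=HuggingFaceModelParams(
        model_uri="hf://google-bert/bert-base-cased",
        transformer_type=transformers.AutoModelForSequenceClassification,
    ),
    # Dataset provider: which dataset the storage-initializer should fetch.
    dataset_provider_parameters=HuggingFaceDatasetParams(
        repo_id="yelp_review_full",
    ),
    # Arguments forwarded to the HuggingFace Trainer in the trainer container.
    trainer_parameters=HuggingFaceTrainerParams(
        training_parameters=transformers.TrainingArguments(
            output_dir="/workspace/output",
            num_train_epochs=1,
        ),
    ),
    resources_per_worker={"gpu": 1, "cpu": 4, "memory": "16Gi"},
)
```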
Hyperparameter tuning (Katib): taking inspiration from the design in training-operator, I would expect the higher-level interface in Katib to be an abstraction over the `tune` function. It would still allow users to specify function parameters such as hyperparameters, the algorithm name, and the evaluation metric, but the objective function would be replaced by a model provider and a dataset provider.

I presume, then, that the difference with this example implementation would simply be that `train_mnist_model` is replaced with a model provider and a dataset provider that form the basis for the hyperparameter tuning; a hypothetical sketch follows below.
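To make the proposal concrete, here is a hypothetical sketch of what such a provider-based `tune` call could look like. The provider argument names simply mirror the `train` API above; this is illustrative, not a committed interface.

```python
# Hypothetical signature for the proposed higher-level Katib tune API.
# Provider arguments mirror the training-operator train API and are
# illustrative only.
import kubeflow.katib as katib

katib.KatibClient().tune(
    name="tune-bert",
    # Providers replace the objective function of the existing tune API.
    model_provider_parameters=...,    # e.g. HuggingFaceModelParams(...)
    dataset_provider_parameters=...,  # e.g. HuggingFaceDatasetParams(...)
    train_parameters=...,             # e.g. HuggingFaceTrainerParams(...)
    # Search space over the hyperparameters to optimize.
    parameters={
        "learning_rate": katib.search.double(min=1e-5, max=5e-4),
        "per_device_train_batch_size": katib.search.categorical([8, 16, 32]),
    },
    objective_metric_name="eval_loss",
    objective_type="minimize",
    algorithm_name="random",
    max_trial_count=10,
    parallel_trial_count=2,
)
```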
Having worked through the Python SDK and examples for the training-operator and Katib, I have some further thoughts on an appropriate implementation of the tuning API in Katib for LLMs. The current `tune` API in the Katib Python SDK relies on a mandatory objective function to define the trial specification as a batch Job (a condensed sketch of this follows below), whereas the proposed higher-level interface is meant to fine-tune pre-trained models on custom datasets.
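For context, a condensed sketch of the existing objective-function-based `tune` API, based on the Katib SDK examples:

```python
# Condensed sketch of the current Katib tune API: a mandatory objective
# function is packaged into a batch Job for each trial.
import kubeflow.katib as katib

def objective(parameters):
    # Stand-in for a real training loop. Katib's stdout metrics collector
    # parses metrics printed in "name=value" form.
    lr = float(parameters["lr"])
    loss = (lr - 0.01) ** 2
    print(f"loss={loss}")

katib.KatibClient().tune(
    name="tune-example",
    objective=objective,  # mandatory in the current implementation
    parameters={"lr": katib.search.double(min=1e-4, max=1e-1)},
    objective_metric_name="loss",
    objective_type="minimize",
    max_trial_count=12,
    parallel_trial_count=3,
)
```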
An important point to note: following the example implementation of a Katib experiment using PyTorchJob, we would need to modify the `tune` API to take either a combination of `objective` and `parameters`, or a combination of `model_provider_parameters`, `dataset_provider_parameters`, and `train_parameters`. In the former case, the code would default to defining a Katib experiment with a batch Job in the trial specification; in the latter case, it would define a Katib experiment with a PyTorchJob in the trial specification. This PyTorchJob would define an init container and an app container for the master, and reuse the same app container for the workers. A sketch of this dispatch logic follows below.
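A sketch of the dispatch logic this implies inside `tune`, assuming hypothetical helper names (`_get_trial_template_from_objective`, `_get_pytorchjob_trial_template`, `_create_experiment`); the real signature would be settled during implementation.

```python
# Hypothetical dispatch inside a modified KatibClient.tune(): either the
# classic objective-function path (batch Job trial) or the LLM fine-tuning
# path (PyTorchJob trial). Helper names are illustrative.
def tune(
    self,
    name,
    objective=None,
    parameters=None,
    model_provider_parameters=None,
    dataset_provider_parameters=None,
    train_parameters=None,
    **kwargs,
):
    if objective is not None and parameters is not None:
        # Existing behaviour: package the objective function into a
        # batch/v1 Job trial template.
        trial_template = self._get_trial_template_from_objective(
            objective, parameters
        )
    elif model_provider_parameters and dataset_provider_parameters:
        # New behaviour: build a PyTorchJob trial template whose master pod
        # runs a storage-initializer init container (downloads the model and
        # dataset) before the trainer app container; workers reuse the same
        # trainer container.
        trial_template = self._get_pytorchjob_trial_template(
            model_provider_parameters,
            dataset_provider_parameters,
            train_parameters,
        )
    else:
        raise ValueError(
            "Provide either `objective` and `parameters`, or "
            "`model_provider_parameters` and `dataset_provider_parameters`."
        )
    return self._create_experiment(name, trial_template, **kwargs)
```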
/assign @helenxie-bit
@andreyvelich: GitHub didn't allow me to assign the following users: helenxie-bit. Note that only kubeflow members with read permissions, repo collaborators, and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. In response to this: /assign @helenxie-bit

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

/remove-lifecycle stale

We can close this issue since we track the implementation progress here: #2339
Recently, we implemented a new `train` Python SDK API in Kubeflow Training Operator to easily fine-tune LLMs on multiple GPUs with a predefined dataset provider, model provider, and HuggingFace trainer. To continue our roadmap around LLMOps in Kubeflow, we want to give users the functionality to tune the hyperparameters of LLMs using a simple Python SDK API: `tune`. This requires appropriate changes to the Katib Python SDK that allow users to set the model, dataset, and hyperparameters that they want to optimize for the LLM. We need to re-use the existing Training Operator components that we used for the `train` API: `storage-initializer` and `trainer`.