Process got stuck when trying to optimize different groups of parameters using different types of data #584
Comments
For some further information: I am using single-node, multi-GPU distributed training. After waiting for a long time, I received the following messages:
It may help if you can provide a repro of some kind and/or give some more information about what parallelism you are using.
Hi,
I'm adding a new linear projection layer (nn.Linear) to the original Llama3 architecture to process a new type of data. During training, I use two types of data: language-only and multimodal. With language-only data, all of the Llama-3 parameters are finetuned. With multimodal data, all of the Llama-3 parameters plus the parameters of the added linear layer are finetuned. Both setups work well independently.
However, when I combined the two types of data for multi-task learning, the process simply got stuck without any further output. Does the current torchtitan not support this kind of setup? Thanks.
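Since a repro was requested above, here is a minimal sketch of the setup being described. All names are hypothetical stand-ins (this is not torchtitan code): a toy backbone plus an added projection that only participates in the forward pass for multimodal batches. The comment in `forward` notes a common cause of this kind of hang: when some ranks run a language-only batch and others a multimodal batch, they disagree on which parameters take part in the backward pass, and the gradient-sync collectives can deadlock.

```python
import torch
import torch.nn as nn

class ToyLlamaWithProjection(nn.Module):
    # Hypothetical stand-in for the Llama3 backbone described in the issue.
    def __init__(self, dim=16, vocab=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.backbone = nn.Linear(dim, dim)  # placeholder for the transformer blocks
        self.proj = nn.Linear(8, dim)        # the added projection for the new modality
        self.head = nn.Linear(dim, vocab)

    def forward(self, tokens, extra=None):
        h = self.embed(tokens)
        if extra is not None:
            # Multimodal batch: self.proj participates and receives gradients.
            h = h + self.proj(extra).unsqueeze(1)
        # Language-only batch: self.proj is skipped entirely, so under DDP/FSDP
        # different ranks may disagree on which parameters produce gradients --
        # a common cause of silent hangs in the gradient-sync collectives.
        return self.head(self.backbone(h))

model = ToyLlamaWithProjection()
lang_logits = model(torch.randint(0, 32, (2, 4)))                         # language-only
mm_logits = model(torch.randint(0, 32, (2, 4)), extra=torch.randn(2, 8))  # multimodal
```

Running a language-only backward pass on this toy model leaves `model.proj.weight.grad` as `None`, which is exactly the per-rank asymmetry that distributed gradient reduction is sensitive to.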