Issues: haotian-liu/LLaVA
How to judge whether the vision tower is trained during LoRA fine-tuning
#1703, opened Sep 13, 2024 by lmingze

What is the difference between liuhaotian/llava-v1.5-7b and (vicuna-7b-v1.5 + vision tower + mm_projector)?
#1702, opened Sep 13, 2024 by lmingze

[Question] Is there a 7B-sized checkpoint file of Llama_2_7b_chat?
#1700, opened Sep 11, 2024 by xlnn

[Question] `if not vision_tower.is_loaded:` raises AttributeError: 'NoneType' object has no attribute 'is_loaded'
#1699, opened Sep 10, 2024 by shen1005

[Usage] The difference between finetune_lora.sh and finetune_task_lora.sh
#1698, opened Sep 10, 2024 by PixelChen24

[Question] Why is the link to the LAION/CC/SBU BLIP-Caption Concept-balanced 558K meta data (meta.json) empty?
#1697, opened Sep 10, 2024 by Liuqibaa

[Question] Why is the output always numerical during model inference?
#1695, opened Sep 8, 2024 by yuese1234

Can I load the parameters of LLaVA-1.5 for full-parameter fine-tuning?
#1694, opened Sep 7, 2024 by zhangzef

[Question] A series of questions about fine-tuning; I want to learn this stuff
#1690, opened Sep 4, 2024 by bdv29

[Question] Why is the accuracy low when I evaluate llava-v1.5-7b-lora on VQAv2?
#1686, opened Sep 2, 2024 by tanghao2118

[Usage] Very few parameters are added when using LoRA to fine-tune LLaVA 1.5
#1681, opened Aug 30, 2024 by XiaoruiMaLU

[Usage] Some weights of LlavaLlamaForCausalLM were not initialized from the model checkpoint
#1679, opened Aug 29, 2024 by RobitsG