Dear maintainers, you really did a great job with jina-clip-v2. It is a good pretrained model supporting multiple languages. I want to further fine-tune the model on our own domain-specific data, but there is no public training code for jina-clip-v2, so I have written a project to fine-tune it myself; it is training now. However, I only have one 4090 GPU, so the maximum batch size I can set is 5. I am unsure whether such a small batch size will negatively affect the results of CLIP's contrastive loss. If it does, are there any good methods to achieve a larger effective batch size on a single GPU?
This is my training code: https://github.com/tengshaofeng/finetune-jina-clip-v2/blob/main/train_clip.py
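For context on why the batch size matters here, below is a minimal NumPy sketch (an illustration, not the jina-clip-v2 implementation) of the symmetric CLIP-style InfoNCE objective. With a batch of size B, each sample is contrasted against only the B - 1 other samples in the same batch, so a batch of 5 gives just 4 in-batch negatives per sample. Note that plain gradient accumulation does not help with this: it averages gradients over several small batches, but the negative pool per loss computation stays small. The function name and dimensions here are my own choices for the example.

```python
import numpy as np

def info_nce_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE (CLIP-style) loss over one batch.

    Each image is contrasted against every text in the batch, so a
    batch of size B gives each sample only B - 1 negatives.
    """
    # L2-normalize embeddings so the dot product is cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature           # (B, B) similarity matrix
    labels = np.arange(len(logits))              # matching pairs on the diagonal

    def xent(l):
        # numerically stable cross-entropy against the diagonal labels
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the image-to-text and text-to-image directions
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
B = 5                                            # batch size that fits on one 4090
img, txt = rng.normal(size=(B, 64)), rng.normal(size=(B, 64))
print(f"batch {B}: {B - 1} in-batch negatives per sample, "
      f"loss = {info_nce_loss(img, txt):.3f}")
```

To actually enlarge the effective batch on a single GPU, common options are gradient checkpointing (trades compute for activation memory, allowing a bigger true batch) or gradient caching approaches such as GradCache, which split the batch into chunks but still compute the contrastive loss over the full set of embeddings.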