How to finetune jina-clip-v2 #1018

Description

@tengshaofeng

Dear team, you really did a great job with jina-clip-v2. It is a strong pretrained model with multi-language support. I want to further fine-tune the model on our own domain-specific data, but there is no public training code for jina-clip-v2, so I wrote my own project to fine-tune it, and it is training now. However, I only have a single 4090 GPU, so the maximum batch size I can set is 5. I am unsure whether such a small batch size for the contrastive loss in CLIP will negatively affect the training results. If it does, are there any good methods to achieve a larger batch size on a single GPU?
This is my training code: https://github.com/tengshaofeng/finetune-jina-clip-v2/blob/main/train_clip.py
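
For context, CLIP's contrastive (InfoNCE) loss treats every other pair in the batch as a negative, so a batch size of 5 gives each sample only 4 negatives, which is generally expected to weaken the training signal. Below is a minimal sketch of one common workaround, gradient caching (the idea behind the GradCache technique of Gao et al., 2021), which simulates a large contrastive batch on a single GPU by encoding small sub-batches. All names here (`image_encoder`, `text_encoder`, `sub`) are illustrative assumptions, not jina-clip-v2 APIs, and the sketch assumes the encoders behave deterministically across the two forward passes (e.g., dropout disabled):

```python
import torch
import torch.nn.functional as F

def clip_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE: every other pair in the batch is a negative,
    so a larger batch means more negatives per sample."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature
    labels = torch.arange(len(logits), device=logits.device)
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.t(), labels)) / 2

def grad_cache_step(image_encoder, text_encoder, images, texts,
                    optimizer, sub=4):
    """One optimizer step over a large batch, encoded in sub-batches of
    size `sub` so peak activation memory stays small."""
    optimizer.zero_grad()

    # 1) Embed all sub-batches without autograd graphs (cheap in memory).
    with torch.no_grad():
        img_emb = torch.cat([image_encoder(images[i:i + sub])
                             for i in range(0, len(images), sub)])
        txt_emb = torch.cat([text_encoder(texts[i:i + sub])
                             for i in range(0, len(texts), sub)])
    img_emb = img_emb.detach().requires_grad_(True)
    txt_emb = txt_emb.detach().requires_grad_(True)

    # 2) Full-batch loss over the cached embeddings; backward here only
    #    fills img_emb.grad / txt_emb.grad, not the encoder parameters.
    loss = clip_loss(img_emb, txt_emb)
    loss.backward()

    # 3) Re-encode each sub-batch with a graph and push the cached
    #    embedding gradients through the encoders.
    for i in range(0, len(images), sub):
        image_encoder(images[i:i + sub]).backward(img_emb.grad[i:i + sub])
    for i in range(0, len(texts), sub):
        text_encoder(texts[i:i + sub]).backward(txt_emb.grad[i:i + sub])

    optimizer.step()
    return loss.item()
```

Note that plain gradient accumulation does not help with this particular problem, because it does not increase the number of in-batch negatives each loss computation sees. Gradient checkpointing and mixed precision are simpler, complementary ways to fit a somewhat larger real batch in memory.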
