ConVIRT - Contrastive Learning of Representations from Image and Text Pairs

Contrastive VIsual Representation Learning from Text

Deep neural networks need large amounts of data to learn the parameters for a specific task, but in practice we often face a shortage of labeled data. If the data contains paired images and text, however, contrastive learning can exploit those pairs without manual labels.

Contrastive learning is a self-supervised learning method: instead of requiring specialized labels, it learns directly from the structure of the unlabeled data itself. The goal is to train an encoder whose representations are similar for related examples and as different as possible for unrelated ones. Contrastive learning is typically done by comparing two images, but with paired image and text data it can also be applied between images and text, as sketched below.
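Concretely, for a batch of N image-text pairs, each image's matching text is treated as its positive and every other text in the batch as a negative, and symmetrically in the text-to-image direction. Below is a minimal PyTorch sketch of such a bidirectional loss; the function name and the default `temperature` and `lam` values are illustrative assumptions, not values taken from this repository:

```python
import torch
import torch.nn.functional as F

def convirt_loss(image_emb, text_emb, temperature=0.1, lam=0.75):
    """Bidirectional image-text contrastive loss (illustrative sketch).

    image_emb, text_emb: (N, d) projected embeddings of N paired samples.
    temperature and lam (the image-to-text weighting) are hyperparameters;
    the defaults here are assumptions for illustration.
    """
    # Cosine similarity between every image and every text in the batch
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature  # (N, N)

    # Matching pairs sit on the diagonal; all other entries are negatives
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)      # image-to-text direction
    loss_t2i = F.cross_entropy(logits.t(), targets)  # text-to-image direction
    return lam * loss_i2t + (1.0 - lam) * loss_t2i
```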

The repository is a PyTorch implementation of the architecture descibed in the ConVIRT paper: Contrastive Learning of Medical Visual Representations from Paired Images and Text. The authors of paper are Yuhao Zhang, Hang Jiang, Yasuhide Miura, Christopher D. Manning, Curtis P. Langlotz.
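At a high level, the architecture pairs an image encoder with a text encoder, each followed by a nonlinear projection head into a shared embedding space. A rough sketch of such a dual encoder, assuming a ResNet-50 backbone and the Bio_ClinicalBERT checkpoint (the model names, [CLS] pooling, and 512-d projection size are assumptions for illustration, not necessarily this repository's exact choices):

```python
import torch.nn as nn
import torchvision.models as models
from transformers import AutoModel

class ConVIRTEncoders(nn.Module):
    """Dual-encoder sketch: a ResNet-50 image encoder and a BERT text
    encoder, each topped by a nonlinear projection head into a shared
    space. Names and sizes here are illustrative assumptions."""

    def __init__(self, text_model="emilyalsentzer/Bio_ClinicalBERT", proj_dim=512):
        super().__init__()
        resnet = models.resnet50(weights=None)
        # Drop the classification head; keep conv trunk + global avg pool
        self.image_encoder = nn.Sequential(*list(resnet.children())[:-1])
        self.text_encoder = AutoModel.from_pretrained(text_model)
        self.image_proj = nn.Sequential(nn.Linear(2048, proj_dim), nn.ReLU(),
                                        nn.Linear(proj_dim, proj_dim))
        self.text_proj = nn.Sequential(nn.Linear(768, proj_dim), nn.ReLU(),
                                       nn.Linear(proj_dim, proj_dim))

    def forward(self, images, input_ids, attention_mask):
        v = self.image_encoder(images).flatten(1)  # (N, 2048)
        out = self.text_encoder(input_ids=input_ids,
                                attention_mask=attention_mask)
        t = out.last_hidden_state[:, 0]  # [CLS] pooling, a simplification
        return self.image_proj(v), self.text_proj(t)
```

The projected image and text embeddings returned here are what the bidirectional contrastive loss above operates on.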

This repository was originally modified from https://github.com/edreisMD/ConVIRT-pytorch.

References:

- Yuhao Zhang, Hang Jiang, Yasuhide Miura, Christopher D. Manning, and Curtis P. Langlotz. Contrastive Learning of Medical Visual Representations from Paired Images and Text. arXiv:2010.00747, 2020.
- https://github.com/edreisMD/ConVIRT-pytorch
