
# Supervised Contrastive Learning for Pre-trained Language Model Fine-tuning

In this repository, I've implemented a sentiment analysis task on the SST-2 dataset.

The results below were obtained with 100 training samples:

Cross-entropy loss: (figure: training loss curve)

Cross-entropy + contrastive loss: (figure: training loss curve)

Cross-entropy heatmap on the test dataset: (figure)

Accuracy on the test dataset: 90.13%

Cross-entropy + contrastive loss heatmap on the test dataset: (figure)

Accuracy on the test dataset: 92.20%

Paper: https://arxiv.org/abs/2011.01403
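The paper's training objective combines standard cross entropy with a supervised contrastive term over the batch embeddings, weighted by a hyperparameter λ. Below is a minimal NumPy sketch of that combined loss; the function names, the default λ = 0.9, and the temperature τ = 0.3 are illustrative choices, not taken from this repository's code.

```python
import numpy as np

def supervised_contrastive_loss(features, labels, temperature=0.3):
    """SCL term: for each anchor, pull same-class embeddings together
    relative to all other embeddings in the batch."""
    # L2-normalize embeddings, then compute scaled pairwise similarities
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T / temperature
    n = len(labels)
    loss, count = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue  # anchors with no same-class partner contribute nothing
        others = [k for k in range(n) if k != i]
        # log of the softmax denominator over all non-anchor examples
        log_denom = np.log(np.exp(sim[i, others]).sum())
        loss += -sum(sim[i, j] - log_denom for j in positives) / len(positives)
        count += 1
    return loss / count

def cross_entropy_loss(logits, labels):
    """Mean cross entropy from raw classifier logits."""
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

def combined_loss(logits, features, labels, lam=0.9, temperature=0.3):
    """(1 - lam) * CE + lam * SCL, as in the paper's objective."""
    return (1 - lam) * cross_entropy_loss(logits, labels) + \
           lam * supervised_contrastive_loss(features, labels, temperature)
```

In the actual fine-tuning setup, `features` would be the [CLS] embeddings from RoBERTa and `logits` the classifier head outputs for the same batch.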

About

This project implements the Facebook AI paper on fine-tuning RoBERTa with a supervised contrastive loss.
