LoRA Finetune or p-tuning as done in https://github.com/Ljyustc/SocraticLM?

#2
by IronFENG - opened

SocraticLM is very good work and has made a huge contribution to this field. I'm very curious about how you fine-tuned your model: did you use LoRA or p-tuning?

CogBase org
edited Oct 9

> SocraticLM is very good work and has made a huge contribution to this field. I'm very curious about how you fine-tuned your model: did you use LoRA or p-tuning?

Thank you for your interest in SocraticLM. In this implementation, we use full fine-tuning.
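For readers weighing the trade-off, a minimal NumPy sketch of the difference (a toy illustration, not the SocraticLM training code): full fine-tuning updates every entry of a weight matrix `W`, while LoRA freezes `W` and trains only a low-rank update `B @ A`. The dimensions and scaling factor below are arbitrary example values.

```python
# Toy illustration (not the SocraticLM training code): how LoRA's
# low-rank update compares with updating the full weight matrix.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 8, 8, 2, 16  # example sizes; r is the LoRA rank

# Pretrained weight: updated directly under full fine-tuning,
# frozen under LoRA.
W = rng.normal(size=(d_out, d_in))

# LoRA factors: B starts at zero so the initial update is a no-op.
A = rng.normal(size=(r, d_in)) * 0.01
B = np.zeros((d_out, r))

def lora_forward(x):
    # y = (W + (alpha / r) * B @ A) @ x, with only A and B trainable.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
assert np.allclose(lora_forward(x), W @ x)  # identical to base model at init

full_params = W.size           # trainable parameters under full fine-tuning
lora_params = A.size + B.size  # trainable parameters under LoRA
print(full_params, lora_params)
```

With these toy sizes LoRA trains half as many parameters (32 vs 64); in a real LLM, where `d_in` and `d_out` are in the thousands and `r` is small, the reduction is far more dramatic. Full fine-tuning, as used here, keeps all of `W` trainable and so has the largest capacity at the highest memory cost.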
