LoRA Finetune or p-tuning as done in https://github.com/Ljyustc/SocraticLM?
#2
by IronFENG · opened
SocraticLM is excellent work and makes a substantial contribution to this field. I'm very curious about how you fine-tuned the model: did you use LoRA or p-tuning?
Thank you for your interest in SocraticLM. In this implementation, we use full fine-tuning.
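For context on why the question matters: full fine-tuning updates every model weight, while LoRA freezes the base model and trains only small low-rank adapter matrices. A back-of-envelope sketch (not from the SocraticLM codebase; the layer sizes and rank below are hypothetical) shows the difference in trainable-parameter count:

```python
# Compare trainable parameters: full fine-tuning vs. LoRA adapters.
# LoRA replaces the update of each d_out x d_in weight matrix with two
# low-rank factors A (d_out x r) and B (r x d_in), so only r*(d_in + d_out)
# parameters per matrix are trained.

def full_finetune_params(d_in: int, d_out: int, n_layers: int) -> int:
    """Trainable parameters when every weight matrix is updated."""
    return n_layers * d_in * d_out

def lora_params(d_in: int, d_out: int, n_layers: int, rank: int) -> int:
    """Trainable parameters when only rank-r adapters are trained."""
    return n_layers * rank * (d_in + d_out)

if __name__ == "__main__":
    # Hypothetical sizes: 32 layers of 4096x4096 projections, LoRA rank 8.
    full = full_finetune_params(4096, 4096, 32)
    lora = lora_params(4096, 4096, 32, rank=8)
    print(f"full fine-tuning: {full:,} trainable params")
    print(f"LoRA (r=8):       {lora:,} trainable params")
    print(f"trained fraction under LoRA: {lora / full:.4%}")
```

With these assumed sizes, LoRA trains well under 1% of the parameters that full fine-tuning does, which is why the two choices lead to very different compute and memory budgets.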