- LLMLingua-2: Data Distillation for Efficient and Faithful Task-Agnostic Prompt Compression
  Paper • 2403.12968 • Published • 25
- PERL: Parameter Efficient Reinforcement Learning from Human Feedback
  Paper • 2403.10704 • Published • 59
- Alignment Studio: Aligning Large Language Models to Particular Contextual Regulations
  Paper • 2403.09704 • Published • 33
- RAFT: Adapting Language Model to Domain Specific RAG
  Paper • 2403.10131 • Published • 72