Collections including paper arxiv:2402.04792
- Direct Preference Optimization: Your Language Model is Secretly a Reward Model
  Paper • 2305.18290 • Published • 63
- Towards Efficient and Exact Optimization of Language Model Alignment
  Paper • 2402.00856 • Published
- A General Theoretical Paradigm to Understand Learning from Human Preferences
  Paper • 2310.12036 • Published • 16
- Statistical Rejection Sampling Improves Preference Optimization
  Paper • 2309.06657 • Published • 14

- Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models
  Paper • 2310.04406 • Published • 10
- Chain-of-Thought Reasoning Without Prompting
  Paper • 2402.10200 • Published • 109
- ICDPO: Effectively Borrowing Alignment Capability of Others via In-context Direct Preference Optimization
  Paper • 2402.09320 • Published • 6
- Self-Discover: Large Language Models Self-Compose Reasoning Structures
  Paper • 2402.03620 • Published • 117

- Chain-of-Thought Reasoning Without Prompting
  Paper • 2402.10200 • Published • 109
- How to Train Data-Efficient LLMs
  Paper • 2402.09668 • Published • 42
- BitDelta: Your Fine-Tune May Only Be Worth One Bit
  Paper • 2402.10193 • Published • 22
- A Human-Inspired Reading Agent with Gist Memory of Very Long Contexts
  Paper • 2402.09727 • Published • 38

- Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model?
  Paper • 2504.13837 • Published • 136
- Understanding R1-Zero-Like Training: A Critical Perspective
  Paper • 2503.20783 • Published • 56
- Inference-Time Scaling for Generalist Reward Modeling
  Paper • 2504.02495 • Published • 56
- Large Language Diffusion Models
  Paper • 2502.09992 • Published • 122

- KTO: Model Alignment as Prospect Theoretic Optimization
  Paper • 2402.01306 • Published • 19
- Direct Preference Optimization: Your Language Model is Secretly a Reward Model
  Paper • 2305.18290 • Published • 63
- SimPO: Simple Preference Optimization with a Reference-Free Reward
  Paper • 2405.14734 • Published • 12
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment
  Paper • 2408.06266 • Published • 10

- A Critical Evaluation of AI Feedback for Aligning Large Language Models
  Paper • 2402.12366 • Published • 3
- Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation
  Paper • 2401.08417 • Published • 36
- Insights into Alignment: Evaluating DPO and its Variants Across Multiple Tasks
  Paper • 2404.14723 • Published • 10
- Self-Play Preference Optimization for Language Model Alignment
  Paper • 2405.00675 • Published • 27

- Suppressing Pink Elephants with Direct Principle Feedback
  Paper • 2402.07896 • Published • 11
- Policy Improvement using Language Feedback Models
  Paper • 2402.07876 • Published • 9
- Direct Language Model Alignment from Online AI Feedback
  Paper • 2402.04792 • Published • 34
- Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models
  Paper • 2401.01335 • Published • 68

- Phi-4 Technical Report
  Paper • 2412.08905 • Published • 122
- Evaluating and Aligning CodeLLMs on Human Preference
  Paper • 2412.05210 • Published • 50
- Evaluating Language Models as Synthetic Data Generators
  Paper • 2412.03679 • Published • 48
- Yi-Lightning Technical Report
  Paper • 2412.01253 • Published • 28