Collections including paper arxiv:2503.01917

- Unlocking the conversion of Web Screenshots into HTML Code with the WebSight Dataset
  Paper • 2403.09029 • Published • 55
- LLMLingua-2: Data Distillation for Efficient and Faithful Task-Agnostic Prompt Compression
  Paper • 2403.12968 • Published • 25
- RAFT: Adapting Language Model to Domain Specific RAG
  Paper • 2403.10131 • Published • 72
- Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking
  Paper • 2403.09629 • Published • 78

- Chain-of-Verification Reduces Hallucination in Large Language Models
  Paper • 2309.11495 • Published • 39
- Hallucination Detox: Sensitive Neuron Dropout (SeND) for Large Language Model Training
  Paper • 2410.15460 • Published • 1
- DeCoRe: Decoding by Contrasting Retrieval Heads to Mitigate Hallucinations
  Paper • 2410.18860 • Published • 11
- Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models
  Paper • 2411.14257 • Published • 14

- Linear Correlation in LM's Compositional Generalization and Hallucination
  Paper • 2502.04520 • Published • 11
- How to Steer LLM Latents for Hallucination Detection?
  Paper • 2503.01917 • Published • 11
- Are Reasoning Models More Prone to Hallucination?
  Paper • 2505.23646 • Published • 24
- Why Language Models Hallucinate
  Paper • 2509.04664 • Published • 192

- EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters
  Paper • 2402.04252 • Published • 28
- Vision Superalignment: Weak-to-Strong Generalization for Vision Foundation Models
  Paper • 2402.03749 • Published • 14
- ScreenAI: A Vision-Language Model for UI and Infographics Understanding
  Paper • 2402.04615 • Published • 44
- EfficientViT-SAM: Accelerated Segment Anything Model Without Performance Loss
  Paper • 2402.05008 • Published • 23

- Sketch-of-Thought: Efficient LLM Reasoning with Adaptive Cognitive-Inspired Sketching
  Paper • 2503.05179 • Published • 46
- SafeArena: Evaluating the Safety of Autonomous Web Agents
  Paper • 2503.04957 • Published • 21
- Learning from Failures in Multi-Attempt Reinforcement Learning
  Paper • 2503.04808 • Published • 18
- START: Self-taught Reasoner with Tools
  Paper • 2503.04625 • Published • 113