Collections including paper arxiv:1910.01108

Community collections that include the DistilBERT paper (arxiv:1910.01108). Each group below previews one collection.

- Attention Is All You Need
  Paper • arXiv:1706.03762 • Published • 96 upvotes
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • arXiv:1810.04805 • Published • 23 upvotes
- RoBERTa: A Robustly Optimized BERT Pretraining Approach
  Paper • arXiv:1907.11692 • Published • 9 upvotes
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • arXiv:1910.01108 • Published • 21 upvotes

- MilaNLProc/feel-it-italian-sentiment
  Text Classification • Updated • 101k downloads • 21 likes
- cardiffnlp/twitter-roberta-base-sentiment-latest
  Text Classification • Updated • 4.5M downloads • 733 likes
- bhadresh-savani/distilbert-base-uncased-emotion
  Text Classification • 67M params • Updated • 65k downloads • 153 likes
- FacebookAI/xlm-roberta-large
  Fill-Mask • 0.6B params • Updated • 3.03M downloads • 474 likes
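
The checkpoints in the collection above are ordinary Hugging Face Hub models, so they can be tried directly with the transformers pipeline API. A minimal sketch using two of the listed model IDs; the exact label names and output fields come from each model's own config, so the printed values are illustrative:

```python
# pip install transformers torch
from transformers import pipeline

# Sentiment classifier from the collection above; label names depend on
# the model's config (output shown below is illustrative).
classifier = pipeline(
    "text-classification",
    model="cardiffnlp/twitter-roberta-base-sentiment-latest",
)
print(classifier("I love this library!"))
# e.g. [{'label': 'positive', 'score': 0.99}]

# FacebookAI/xlm-roberta-large is tagged Fill-Mask, so it uses the
# fill-mask task and XLM-R's <mask> token instead.
unmasker = pipeline("fill-mask", model="FacebookAI/xlm-roberta-large")
print(unmasker("Paris is the <mask> of France.")[0]["token_str"])
```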

- Will we run out of data? An analysis of the limits of scaling datasets in Machine Learning
  Paper • arXiv:2211.04325 • Published • 1 upvote
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • arXiv:1810.04805 • Published • 23 upvotes
- On the Opportunities and Risks of Foundation Models
  Paper • arXiv:2108.07258 • Published • 1 upvote
- Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks
  Paper • arXiv:2204.07705 • Published • 2 upvotes

- nikravan/glm-4vq
  Document Question Answering • 7B params • Updated • 37 downloads • 36 likes
- deepseek-ai/deepseek-coder-33b-instruct
  Text Generation • 33B params • Updated • 17.7k downloads • 549 likes
- DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence
  Paper • arXiv:2401.14196 • Published • 66 upvotes
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • arXiv:1910.01108 • Published • 21 upvotes
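
The deepseek-ai/deepseek-coder-33b-instruct checkpoint above is likewise loadable through transformers. A minimal sketch, assuming the tokenizer ships a chat template and that sufficient GPU memory is available (at 33B parameters the model is well beyond a single consumer GPU):

```python
# pip install transformers torch accelerate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/deepseek-coder-33b-instruct"  # from the collection above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # 33B params: expect a large multi-GPU setup
    device_map="auto",            # requires the accelerate package
)

# Build a chat-style prompt via the tokenizer's chat template (assumed present).
messages = [{"role": "user", "content": "Write a Python function that checks if a number is prime."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```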

- EPO: Entropy-regularized Policy Optimization for LLM Agents Reinforcement Learning
  Paper • arXiv:2509.22576 • Published • 133 upvotes
- AgentBench: Evaluating LLMs as Agents
  Paper • arXiv:2308.03688 • Published • 25 upvotes
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • arXiv:1910.01108 • Published • 21 upvotes
- Direct Preference Optimization: Your Language Model is Secretly a Reward Model
  Paper • arXiv:2305.18290 • Published • 63 upvotes

- DeepSeek-R1 Thoughtology: Let's <think> about LLM Reasoning
  Paper • arXiv:2504.07128 • Published • 86 upvotes
- Byte Latent Transformer: Patches Scale Better Than Tokens
  Paper • arXiv:2412.09871 • Published • 108 upvotes
- BitNet b1.58 2B4T Technical Report
  Paper • arXiv:2504.12285 • Published • 75 upvotes
- FAST: Efficient Action Tokenization for Vision-Language-Action Models
  Paper • arXiv:2501.09747 • Published • 27 upvotes

- Attention Is All You Need
  Paper • arXiv:1706.03762 • Published • 96 upvotes
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • arXiv:1810.04805 • Published • 23 upvotes
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • arXiv:1910.01108 • Published • 21 upvotes
- Language Models are Few-Shot Learners
  Paper • arXiv:2005.14165 • Published • 17 upvotes