- FineWeb2: One Pipeline to Scale Them All -- Adapting Pre-Training Data Processing to Every Language
  Paper • 2506.20920 • Published • 74
- SmolVLM: Redefining small and efficient multimodal models
  Paper • 2504.05299 • Published • 200
- YourBench: Easy Custom Evaluation Sets for Everyone
  Paper • 2504.01833 • Published • 22
- SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model
  Paper • 2502.02737 • Published • 246

Collections
Discover the best community collections!

Collections including paper arxiv:2211.05100

- BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
  Paper • 2211.05100 • Published • 34
- IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models
  Paper • 2308.06721 • Published • 33
- LEDITS++: Limitless Image Editing using Text-to-Image Models
  Paper • 2311.16711 • Published • 24

- Nemotron-4 15B Technical Report
  Paper • 2402.16819 • Published • 46
- Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models
  Paper • 2402.19427 • Published • 56
- RWKV: Reinventing RNNs for the Transformer Era
  Paper • 2305.13048 • Published • 19
- Reformer: The Efficient Transformer
  Paper • 2001.04451 • Published

- Mistral 7B
  Paper • 2310.06825 • Published • 55
- BloombergGPT: A Large Language Model for Finance
  Paper • 2303.17564 • Published • 26
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 23
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • 1910.01108 • Published • 20

- BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
  Paper • 2211.05100 • Published • 34
- FlauBERT: Unsupervised Language Model Pre-training for French
  Paper • 1912.05372 • Published
- CroissantLLM: A Truly Bilingual French-English Language Model
  Paper • 2402.00786 • Published • 26
- AION-1: Omnimodal Foundation Model for Astronomical Sciences
  Paper • 2510.17960 • Published • 27

- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 23
- RoBERTa: A Robustly Optimized BERT Pretraining Approach
  Paper • 1907.11692 • Published • 9
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 17
- OPT: Open Pre-trained Transformer Language Models
  Paper • 2205.01068 • Published • 2

- BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
  Paper • 2211.05100 • Published • 34
- Contrastive Language-Image Pre-training for the Italian Language
  Paper • 2108.08688 • Published • 2
- IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation
  Paper • 2203.03759 • Published • 5
- Spanish Pre-trained BERT Model and Evaluation Data
  Paper • 2308.02976 • Published • 3

- Attention Is All You Need
  Paper • 1706.03762 • Published • 94
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 23
- RoBERTa: A Robustly Optimized BERT Pretraining Approach
  Paper • 1907.11692 • Published • 9
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • 1910.01108 • Published • 20