Regarding the calculation of VRAM requirements for deploying meta-llama/Llama-4-Maverick-17B-128E-Instruct
#45 opened about 1 hour ago by a58982284
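A back-of-the-envelope sketch of the VRAM estimate asked about in #45. The ~400B total-parameter figure (17B active, 128 experts), the bf16 weight size, and the flat 20% margin for KV cache and runtime state are assumptions, not measurements.

```python
# Rough serving-memory estimate: weights dominate for an MoE model because every
# expert must stay resident even though only ~17B parameters are active per token.
TOTAL_PARAMS = 400e9      # assumed total parameter count for the 128-expert checkpoint
BYTES_PER_PARAM = 2       # bf16 / fp16
OVERHEAD = 1.20           # assumed margin for KV cache, activations, CUDA context

weights_gib = TOTAL_PARAMS * BYTES_PER_PARAM / 1024**3
total_gib = weights_gib * OVERHEAD
print(f"weights ~{weights_gib:.0f} GiB, with overhead ~{total_gib:.0f} GiB")
# ~745 GiB of weights alone, so multi-GPU tensor parallelism or quantization is required.
```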
	
MarCognity-AI for Meta – LLaMA 4 Maverick
#44 opened 24 days ago by elly99
	
Tool Calling
#43 opened about 1 month ago by vipulchoube
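For the tool-calling thread (#43), a minimal sketch using the `tools` argument of `apply_chat_template` in transformers; the `get_weather` function is purely illustrative, and whether the rendered prompt matches what this checkpoint expects should be checked against the model card.

```python
from transformers import AutoTokenizer

def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city to look up.
    """
    return "sunny"

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-4-Maverick-17B-128E-Instruct")
messages = [{"role": "user", "content": "What's the weather in Paris?"}]
prompt = tokenizer.apply_chat_template(
    messages,
    tools=[get_weather],        # transformers turns the signature/docstring into a JSON schema
    add_generation_prompt=True,
    tokenize=False,
)
print(prompt)                   # inspect how the tool definition is injected into the prompt
```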
	
Trying to run with TGI - I am trying to run the model with Docker, using 8 H200 GPUs on an Amazon EC2 p5en.48xlarge
#42 opened 4 months ago by sayak340
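Once a TGI container is up on the node described in #42, a client-side check from Python might look like the following; the endpoint URL and port are assumptions about the local setup.

```python
from huggingface_hub import InferenceClient

# Assumes a TGI server is already listening on this host/port.
client = InferenceClient("http://localhost:8080")
response = client.chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```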
	
Access rejected
#38 opened 6 months ago by sheepyyy
	
Remove `<|python_start|>` and `<|python_end|>` tags from chat template
#37 opened 6 months ago by jhuntbach
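Pending a change to the repo's template itself, #37 can be worked around locally with a sketch like the one below; it assumes the `<|python_start|>` and `<|python_end|>` markers appear verbatim in `tokenizer.chat_template` and only edits the in-memory copy.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-4-Maverick-17B-128E-Instruct")
# Strip the markers from the Jinja template string before rendering any prompts.
for tag in ("<|python_start|>", "<|python_end|>"):
    tokenizer.chat_template = tokenizer.chat_template.replace(tag, "")
```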
	
Add reasoning capabilities for llama 4 and add this model to huggingchat
#36 opened 6 months ago by devopsML
	
Request: DOI
#35 opened 6 months ago by EVANTRD
	
Llama4
#34 opened 6 months ago by duckingsimsen
	
Gated Repo Permission Still Pending for Llama-4
#33 opened 6 months ago by brando
	
World's Largest Dataset
#32 opened 7 months ago by deleted
When are we getting direct HuggingFace inference provider support?
#30 opened 7 months ago by TejAndrewsACC
	
13B and 34B Pleeease!!! Most people cannot even run this.
#28 opened 7 months ago by UniversalLove333
	
Llama-4-Maverick-03-26-Experimental
#27 opened 7 months ago by ChuckMcSneed
	
Access Rejected
#24 opened 7 months ago by rajkaranswain16
	
torch compile compatibility issue
#23 opened 7 months ago by jhmun
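A minimal probe for the torch.compile issue in #23, assuming the model is sharded across GPUs with `device_map="auto"`; graph breaks or errors surfaced by this snippet are the symptom being discussed, not something it fixes.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-4-Maverick-17B-128E-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = torch.compile(model)   # dynamo traces forward(); MoE routing may cause graph breaks

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```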
	
Rejected access?
#22 opened 7 months ago by pluttodk
	
Deploying production-ready Llama-4 [Maverick] on your AWS with vLLM
#21 opened 7 months ago by agam30
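Related to the deployment write-up in #21, a hedged offline-inference sketch with vLLM; `tensor_parallel_size=8` assumes an 8-GPU node, and the sampling settings are arbitrary.

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-4-Maverick-17B-128E-Instruct",
    tensor_parallel_size=8,   # shard the MoE weights across 8 GPUs
)
params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Summarize Llama 4 Maverick in one sentence."], params)
print(outputs[0].outputs[0].text)
```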
	
GPU requirement
#20 opened 7 months ago by meetzuber
	
How to run int4?
#19 opened 7 months ago by BootsofLagrangian
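One answer direction for #19 is bitsandbytes 4-bit loading through transformers, sketched below; whether the quantized MoE checkpoint actually fits a given machine is not verified here.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-4-Maverick-17B-128E-Instruct",
    quantization_config=quant_config,
    device_map="auto",        # spread the quantized experts across available GPUs
)
```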
	
[request for feedback] faster downloads with xet
#18 opened 7 months ago by clem
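For the Xet feedback thread (#18): with the optional extra installed (`pip install "huggingface_hub[hf_xet]"`), downloads of Xet-backed repos go through the Xet backend automatically, so the call itself stays the usual one.

```python
from huggingface_hub import snapshot_download

# No Xet-specific arguments are needed; the backend is picked up when hf_xet is installed.
local_dir = snapshot_download("meta-llama/Llama-4-Maverick-17B-128E-Instruct")
print(local_dir)
```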
	
Thanks a lot!
#17 opened 7 months ago by FalconNet
	
License
#16 opened 7 months ago by mrfakename
	
change to sdpa
#14 opened 7 months ago by wukaixingxp
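If #14 refers to PyTorch's scaled_dot_product_attention, the switch is the `attn_implementation` argument at load time, sketched below under that assumption.

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-4-Maverick-17B-128E-Instruct",
    attn_implementation="sdpa",   # alternatives: "eager", "flash_attention_2"
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```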