Text-to-SQL Model
AI & ML interests: Benchmark, Code Generation, LLM
Open-source works to reproduce DeepSeek R1
- perplexity-ai/r1-1776 (Text Generation • 671B • Updated • 1.02k • 2.32k)
- unsloth/r1-1776-GGUF (Text Generation • 671B • Updated • 275 • 103)
- unsloth/r1-1776-distill-llama-70b-unsloth-bnb-4bit (Text Generation • 38B • Updated • 1 • 2)
- open-r1/OpenR1-Qwen-7B (Text Generation • 8B • Updated • 55 • 54)
Text-to-SQL model
For inference, a CPU is sufficient for both quantization and inference. For fine-tuning, a GPU is needed for both quantization and inference.
- QuantFactory/OpenCoder-8B-Instruct-GGUF (Text Generation • 8B • Updated • 86 • 6)
- QuantFactory/OpenCoder-8B-Base-GGUF (Text Generation • 8B • Updated • 155 • 3)
- bartowski/starcoder2-15b-instruct-GGUF (Text Generation • 16B • Updated • 258 • 4)
- QuantFactory/starcoder2-15b-GGUF (Text Generation • 16B • Updated • 39 • 2)
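Several of the checkpoints above are 4-bit quantized (the unsloth bnb-4bit build and the GGUF files). As a rough, purely illustrative sketch of what block-wise absmax quantization does, here is a minimal pure-Python version; this is not the actual bitsandbytes or GGUF implementation, and real formats additionally pack two 4-bit values per byte and use more elaborate schemes (NF4, k-quants):

```python
# Illustrative sketch of block-wise "absmax" 4-bit quantization, the rough
# idea behind the bnb-4bit and GGUF quantized checkpoints listed above.
# NOT the real bitsandbytes/GGUF format -- those pack two 4-bit values per
# byte, store scales in reduced precision, and use fancier codebooks.

def quantize_4bit(weights, block_size=32):
    """Map floats to signed 4-bit ints in [-8, 7], one scale per block."""
    blocks = []
    for i in range(0, len(weights), block_size):
        block = weights[i:i + block_size]
        # Scale so the largest magnitude in the block maps to +/-7.
        scale = max(abs(w) for w in block) / 7 or 1.0  # avoid 0 for all-zero blocks
        qs = [max(-8, min(7, round(w / scale))) for w in block]
        blocks.append((scale, qs))
    return blocks

def dequantize_4bit(blocks):
    """Reconstruct approximate floats from (scale, int4-list) blocks."""
    return [q * scale for scale, qs in blocks for q in qs]
```

With one float scale per 32-weight block plus 4 bits per weight, the storage cost lands at roughly 4.5 bits per weight, which is why such quantized checkpoints are small enough for CPU inference.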