Active filters: xglm
- RichardErkhov/KoboldAI_-_fairseq-dense-125M-8bits • Text Generation • 0.1B • Updated
- RichardErkhov/KoboldAI_-_fairseq-dense-2.7B-4bits • Text Generation • 1B • Updated
- RichardErkhov/KoboldAI_-_fairseq-dense-355M-4bits • Text Generation • 0.2B • Updated
- RichardErkhov/KoboldAI_-_fairseq-dense-355M-8bits • Text Generation • 0.4B • Updated • 1
- RichardErkhov/KoboldAI_-_fairseq-dense-2.7B-8bits • Text Generation • 3B • Updated
- Hinno/incoder-1B-flutter-finetuned • Text Generation • 1B • Updated • 1
- Hinno/fineTuneIncoderWithPrompt • Text Generation • 1B • Updated • 1
- RichardErkhov/facebook_-_xglm-4.5B-4bits • Text Generation • 3B • Updated
- RichardErkhov/facebook_-_xglm-4.5B-8bits • Text Generation • 5B • Updated
- nattakit2580/KarveeSaimai • Text Generation • 0.6B • Updated
- Krits0/KarveeSaimai • Text Generation • 0.6B • Updated
- AIDSC/xglm-7.5B
- RichardErkhov/Hinno_-_incoder-1B-flutter-finetuned-4bits
- RichardErkhov/Hinno_-_incoder-1B-flutter-finetuned-8bits
- emre/xglm-564M-turkish • Text Generation • 0.6B • Updated • 4
- RichardErkhov/osiria_-_diablo-italian-base-1.3b-4bits
- RichardErkhov/osiria_-_diablo-italian-base-1.3b-8bits
- naylynn/xglm-myanmar-QA-finetuned-inference
- optimum-intel-internal-testing/tiny-random-XGLMForCausalLM • Updated • 6.89k