Active filters: xglm
RichardErkhov/KoboldAI_-_fairseq-dense-125M-8bits • Text Generation • 0.1B • Updated • 3
RichardErkhov/KoboldAI_-_fairseq-dense-2.7B-4bits • Text Generation • 3B • Updated • 3
RichardErkhov/KoboldAI_-_fairseq-dense-355M-4bits • Text Generation • 0.4B • Updated • 4
RichardErkhov/KoboldAI_-_fairseq-dense-355M-8bits • Text Generation • 0.4B • Updated • 2
RichardErkhov/KoboldAI_-_fairseq-dense-2.7B-8bits • Text Generation • 3B • Updated • 3
Hinno/incoder-1B-flutter-finetuned • Text Generation • 1B • Updated • 4 • 1
Hinno/fineTuneIncoderWithPrompt • Text Generation • 1B • Updated • 4
RichardErkhov/facebook_-_xglm-4.5B-4bits • Text Generation • 5B • Updated • 3
RichardErkhov/facebook_-_xglm-4.5B-8bits • Text Generation • 5B • Updated • 4
nattakit2580/KarveeSaimai • Text Generation • 0.6B • Updated • 3
RichardErkhov/Hinno_-_incoder-1B-flutter-finetuned-4bits • 1B • Updated • 1
RichardErkhov/Hinno_-_incoder-1B-flutter-finetuned-8bits • 1B • Updated • 2
RichardErkhov/osiria_-_diablo-italian-base-1.3b-4bits • 1B • Updated • 2
RichardErkhov/osiria_-_diablo-italian-base-1.3b-8bits • 1B • Updated • 3
naylynn/xglm-myanmar-QA-finetuned-inference • 0.6B • Updated • 2
toksuite/supertoken_models-llama_facebook-xglm-564M • Text Generation • 2B • Updated • 79
optimum-intel-internal-testing/tiny-random-XGLMForCausalLM • Updated • 17.8k
SphRbtHyk/grc_xglm-564M-finetuned • 0.6B • Updated • 3
SphRbtHyk/lat_xglm-564M-finetuned • 0.6B • Updated • 4