CodeLlama 2 7b
First merged with the Guanaco LoRA (Tim Dettmers) by Varunk29.
Then merged by me with the Mistral AI 7b 0.1 delta bits relative to Llama 2 (extracted by Undi95).
Base model (CodeLlama) training context: 16k (max context up to 96k with the base RoPE).
Mistral injection training context: 8k (Sliding Window Attention is likely inoperative on such a merge/injection).
For testing and amusement only.
Prompt format: Alpaca works.
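Since the card only says "Alpaca works", here is a minimal sketch of the standard Alpaca prompt template (as published in the original Stanford Alpaca repo); the example instruction string is a placeholder, not from this card.

```python
# Sketch of the Alpaca prompt format this model card says it accepts.
# Templates follow the original Stanford Alpaca repo; the example
# instruction below is hypothetical.

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

ALPACA_INPUT_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
)

def build_prompt(instruction, context=None):
    """Render an Alpaca-style prompt, with or without an input block."""
    if context:
        return ALPACA_INPUT_TEMPLATE.format(instruction=instruction, input=context)
    return ALPACA_TEMPLATE.format(instruction=instruction)

print(build_prompt("Write a Python function that reverses a string."))
```

The rendered string is what you feed the model as-is; generation should stop at (or be trimmed after) the text following `### Response:`.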