alexcovo committed 641e792 · verified · 1 Parent(s): 78057be

Upload README.md with huggingface_hub

Files changed (1): README.md (added, +72)

---
license: other
license_name: nvidia-open-model-license
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
datasets:
- nvidia/Nemotron-Post-Training-Dataset-v1
- nvidia/Nemotron-Post-Training-Dataset-v2
- nvidia/Nemotron-Pretraining-Dataset-sample
- nvidia/Nemotron-CC-v2
- nvidia/Nemotron-CC-Math-v1
- nvidia/Nemotron-Pretraining-SFT-v1
language:
- en
- es
- fr
- de
- it
- ja
library_name: transformers
tags:
- nvidia
- pytorch
- llama-cpp
- gguf-my-repo
track_downloads: true
base_model: nvidia/NVIDIA-Nemotron-Nano-12B-v2
---

# alexcovo/NVIDIA-Nemotron-Nano-12B-v2-Q4_K_M-GGUF
This model was converted to GGUF format from [`nvidia/NVIDIA-Nemotron-Nano-12B-v2`](https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-12B-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-12B-v2) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```
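If the install succeeded, the llama.cpp binaries should now be on your PATH; a quick sanity check, assuming a reasonably recent build that supports the `--version` flag:

```bash
# Print llama.cpp build info to confirm the install worked.
llama-cli --version
```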
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo alexcovo/NVIDIA-Nemotron-Nano-12B-v2-Q4_K_M-GGUF --hf-file nvidia-nemotron-nano-12b-v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
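For an interactive chat session rather than a one-shot completion, llama-cli also offers a conversation mode; a minimal sketch, assuming a recent llama.cpp build where the `-cnv` flag is available:

```bash
# Start an interactive chat with the quantized model instead of a single prompt.
llama-cli --hf-repo alexcovo/NVIDIA-Nemotron-Nano-12B-v2-Q4_K_M-GGUF \
  --hf-file nvidia-nemotron-nano-12b-v2-q4_k_m.gguf -cnv
```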

### Server:
```bash
llama-server --hf-repo alexcovo/NVIDIA-Nemotron-Nano-12B-v2-Q4_K_M-GGUF --hf-file nvidia-nemotron-nano-12b-v2-q4_k_m.gguf -c 2048
```
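Once the server is running (by default on `http://localhost:8080`), you can query its OpenAI-compatible chat endpoint; a minimal sketch, assuming the default host and port:

```bash
# Send a chat completion request to the local llama.cpp server.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [{"role": "user", "content": "Hello, who are you?"}],
        "max_tokens": 64
      }'
```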

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
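Note that newer llama.cpp checkouts have moved from Makefile builds to CMake, so `make` may fail on the latest sources; an equivalent CMake invocation would look roughly like this (a sketch, with the binaries landing under `build/bin/`):

```bash
# Configure with CURL support (needed for --hf-repo downloads), then build.
cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release
```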

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo alexcovo/NVIDIA-Nemotron-Nano-12B-v2-Q4_K_M-GGUF --hf-file nvidia-nemotron-nano-12b-v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo alexcovo/NVIDIA-Nemotron-Nano-12B-v2-Q4_K_M-GGUF --hf-file nvidia-nemotron-nano-12b-v2-q4_k_m.gguf -c 2048
```
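
If you built with GPU support, you can offload model layers to the GPU with `-ngl` (`--n-gpu-layers`); a minimal sketch, where the number of layers worth offloading depends on your available VRAM:

```bash
# Offload up to 99 layers (effectively the whole model) to the GPU.
./llama-cli --hf-repo alexcovo/NVIDIA-Nemotron-Nano-12B-v2-Q4_K_M-GGUF \
  --hf-file nvidia-nemotron-nano-12b-v2-q4_k_m.gguf \
  -p "The meaning to life and the universe is" -ngl 99
```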