Update README.md
README.md CHANGED
@@ -64,4 +64,45 @@ docker run -it --rm -p 7860:7860 -v "$(pwd):/app" aeon
| OS | CPU | GPU | RAM |
|:---|:---|:---|:---|
| Ubuntu 24.04.2 LTS | Intel i7-10510U | Intel CometLake-U GT2 | 16GB |
| Windows 11 Home Edition | Intel i7-10510U | Intel CometLake-U GT2 | 8GB |

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux).

```bash
brew install llama.cpp
```
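
Before pulling the model, it can be worth confirming that the install worked. A minimal sanity check; `--version` simply prints llama.cpp's build info and exits:

```bash
# Confirm llama.cpp is installed and on the PATH
llama-cli --version
```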

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo gustavokuklinski/aeon-GGUF --hf-file aeon-360M.Q8_0.gguf -p "What is a virtual species?"
```
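
The command above sends a single prompt and exits. If you prefer an interactive chat session, llama-cli also has a conversation mode; a minimal sketch, assuming the `-cnv` flag is available in your llama.cpp build:

```bash
# Start an interactive chat session with the same model
# (-cnv enables llama-cli's conversation mode)
llama-cli --hf-repo gustavokuklinski/aeon-GGUF --hf-file aeon-360M.Q8_0.gguf -cnv
```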

### Server:

```bash
llama-server --hf-repo gustavokuklinski/aeon-GGUF --hf-file aeon-360M.Q8_0.gguf -c 2048
```
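
Once the server is running, you can query it over HTTP. The sketch below assumes the server's default `127.0.0.1:8080` address and uses its OpenAI-compatible chat endpoint; adjust the host, port, and prompt to your setup.

```bash
# Send a chat request to the running llama-server instance
# (assumes the default 127.0.0.1:8080 address)
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "What is a virtual species?"}
    ]
  }'
```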

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

```bash
cd llama.cpp && LLAMA_CURL=1 make
```
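
As an example of adding a hardware-specific flag, an alternative to the command above for a Linux machine with an Nvidia GPU might look like the sketch below; it assumes the CUDA toolkit is installed and uses the `LLAMA_CUDA=1` option mentioned above.

```bash
# Build with CURL support and CUDA offloading enabled
# (assumes an Nvidia GPU and the CUDA toolkit are installed)
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make
```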

Step 3: Run inference through the main binary.

```bash
./llama-cli --hf-repo gustavokuklinski/aeon-GGUF --hf-file aeon-360M.Q8_0.gguf -p "What is a virtual species?"
```

or

```bash
./llama-server --hf-repo gustavokuklinski/aeon-GGUF --hf-file aeon-360M.Q8_0.gguf -c 2048
```