GGUF · English · llama.cpp · conversational
gustavokuklinski committed (verified)
Commit 9344109 · 1 Parent(s): c44dca5

Update README.md

Files changed (1)
  1. README.md +7 -0
README.md CHANGED
@@ -2,6 +2,7 @@
 license: mit
 datasets:
 - gustavokuklinski/aeon
+- gustavokuklinski/aeon-books
 language:
 - en
 base_model:
@@ -59,6 +60,11 @@ docker build -t aeon .
 docker run -it --rm -p 7860:7860 -v "$(pwd):/app" aeon
 ```
 
+### Finetune
+
+![Aeon chart Loss](https://huggingface.co/gustavokuklinski/aeon-360m/resolve/main/output/training_metrics_finetune.png)
+
+
 ### Tested on
 
 | OS | CPU | GPU | RAM |
@@ -106,3 +112,4 @@ or
 ```
 ./llama-server --hf-repo gustavokuklinski/aeon-GGUF --hf-file aeon-360M.Q8_0.gguf -c 2048
 ```
+
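For reference, the `./llama-server` command shown in the diff starts a local HTTP server that exposes an OpenAI-compatible API. A minimal request sketch, assuming llama.cpp's default bind address of `localhost:8080` (adjust if you pass `--host` or `--port`):

```bash
# Query the running llama-server instance (default port 8080)
# via its OpenAI-compatible chat completions endpoint.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Hello, Aeon!"}
        ],
        "temperature": 0.7
      }'
```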