hwpoison89 committed
Commit 9755688 · verified · 1 Parent(s): 066908a

Update README.md

Files changed (1)
  1. README.md +64 -0
README.md CHANGED
@@ -18,9 +18,73 @@ I trained the base model using the dataset 'hrsvrn/linux-commands-updated' usin
 ### Suggested system prompt
 You are a helpful assistant that outputs Linux shell commands. You just provide the command that the user is requesting, not more. Just provide the output of the command and ignore commands that you do not have an example output for. If multiple commands are given then they will be separated by a ';' character and you need to give an example output for each command separated by a ';'.
 
+ ### Script to use in Linux bash
+ 
+ 1. Edit your bashrc with:
+ ```bash
+ vim ~/.bashrc
+ ```
+ 
+ 2. Add the following code at the end:
+ 
+ ```bash
+ bind -x '"\C-o": llm_linux_cli' # Ctrl+O to run the model over the current input line
+ 
+ llm_linux_cli() {
+     local USER_INPUT="$READLINE_LINE"
+     local llm_output
+     history -s "$READLINE_LINE"   # keep the original request in the shell history
+     SCRIPT=$(cat <<'EOF'
+ # Escape backslashes and double quotes so the input can be embedded in the JSON payload
+ USER_INPUT=$(printf '%s' "$*" | sed 's/\\/\\\\/g; s/"/\\"/g')
+ if [ -z "$USER_INPUT" ]; then
+     echo "Usage: $0 \"Write something to the model\""
+     exit 1
+ fi
+ curl -s --url "http://localhost:8080/v1/chat/completions" \
+     -H "Content-Type: application/json" \
+     -H "Authorization: Bearer no-key" \
+     -d "{
+         \"model\": \"gpt-3.5-turbo\",
+         \"temperature\":0.8,
+         \"top_p\":0.95,
+         \"min_p\":0.05,
+         \"top_k\":40,
+         \"messages\":[
+             {
+                 \"role\":\"system\",
+                 \"content\":\"You are a helpful assistant that outputs Linux shell commands. You just provide the command that the user is requesting, not more. Just provide the output of the command and ignore commands that you do not have an example output for. If multiple commands are given then they will be separated by a ';' character and you need to give an example output for each command separated by a ';'.\"
+             },
+             {
+                 \"role\": \"user\",
+                 \"content\":\"$USER_INPUT\"
+             }
+         ]
+     }" | jq -r '.choices[0].message.content'
+ EOF
+ )
+     llm_output=$(bash -c "$SCRIPT" _ "$USER_INPUT")
+     READLINE_LINE=$llm_output
+     READLINE_POINT=${#llm_output}
+ }
+ ```
+ 
+ 3. Reload the bashrc using:
+ ```bash
+ source ~/.bashrc
+ ```
+ 
+ 4. Load and run the .gguf model file using llama-server (a quick connectivity check is sketched after this list):
+ ```bash
+ llama-server -m gemma-1B-LinuxCLI-GGUF.gguf
+ ```
+ 
+ 5. Now type your request on the command line and press "Ctrl+O": the request is sent to the model and its output replaces the current line :)
+ 
+ 
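+ Before relying on the key binding, you can run two quick sanity checks. This is a minimal sketch: it assumes llama-server is listening on its default port 8080 (adjust the URL if you started it with a different `--port`) and that you are on bash 4.3+ so `bind -X` is available; the request payload mirrors the one the bashrc function sends.
+ 
+ ```bash
+ # 1) Confirm the server answers on the endpoint used by llm_linux_cli.
+ curl -s http://localhost:8080/v1/chat/completions \
+     -H "Content-Type: application/json" \
+     -H "Authorization: Bearer no-key" \
+     -d '{"model":"gpt-3.5-turbo","messages":[{"role":"user","content":"ls"}]}' \
+     | jq -r '.choices[0].message.content'
+ 
+ # 2) Confirm the Ctrl+O binding is registered in the current shell.
+ bind -X | grep llm_linux_cli
+ ```
+ 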
 ### Colab
 https://colab.research.google.com/drive/1pWXoVgIzaJviyYP4L-leFBhPY4CLcP6M?authuser=2#scrollTo=-0Mk20PzzJVA
 
+ 
 # Uploaded finetuned model
 
 - **Developed by:** hwpoison89