ArtusDev committed on
Commit
e94a1e3
·
verified ·
1 Parent(s): 4211d5e

Upload folder using huggingface_hub

Browse files
.gitattributes CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
+Wayfarer-2-12B.jpg filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,70 @@
---
license: apache-2.0
language:
- en
base_model:
- mistralai/Mistral-Nemo-Base-2407
tags:
- text adventure
- roleplay
library_name: transformers
---

![image/jpeg](Wayfarer-2-12B.jpg)

# Wayfarer-2-12B

We’ve heard over and over from AI Dungeon players that modern AI models are too nice, never letting them fail or die. While it may be good for a chatbot to be nice and helpful, great stories and games aren’t all rainbows and unicorns. They have conflict, tension, and even death. These create real stakes and consequences for characters and the journeys they go on. We created Wayfarer as a response, and after much testing, feedback, and refining, we’ve developed a worthy sequel.

Wayfarer 2 further refines the formula that made the original Wayfarer so popular: slowing the pacing, increasing the length and detail of responses, and making death a distinct possibility for all characters—not just the user. The stakes have never been higher!

If you want to try this model for free, you can do so at [https://aidungeon.com](https://aidungeon.com/).

We plan to continue improving and open-sourcing similar models, so please share any and all feedback on how we can improve model behavior. Below we share more details on how Wayfarer was created.

[Quantized GGUF weights can be downloaded here.](https://huggingface.co/LatitudeGames/Wayfarer-2-12B-GGUF)

## Model details

Wayfarer 2 12B received SFT training with a simple three-ingredient recipe: the Wayfarer 2 dataset itself, a series of sentiment-balanced roleplay transcripts, and a small instruct core to help retain its instructional capabilities.

## How It Was Made

Wayfarer’s text adventure data was generated by simulating playthroughs of published character creator scenarios from AI Dungeon. Five distinct user archetypes played through each scenario, each with a different character start (faction, location, etc.), generating five unique samples per scenario.

One language model played the role of narrator, while another played the user. They were blind to each other’s underlying logic, so the user was genuinely capable of surprising the narrator with its choices. Each simulation was allowed to run for 8k tokens or until the main character died.

Wayfarer’s general emotional sentiment is one of pessimism: failure is frequent, and plot armor does not exist for anyone. This serves to counter the positivity bias inherent in today’s language models.

## Inference

The Nemo architecture is known for being sensitive to higher temperatures, so the following settings are recommended as a baseline. Nothing stops you from experimenting with these, of course.

```
"temperature": 0.8,
"repetition_penalty": 1.05,
"min_p": 0.025
```
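The `min_p` setting is less widely known than temperature or repetition penalty. As a rough illustration only (not the sampler code of any particular inference engine), `min_p` keeps just the tokens whose probability is at least `min_p` times the top token's probability, then renormalizes:

```python
def min_p_filter(probs, min_p=0.025):
    # Drop tokens whose probability falls below min_p times the
    # top token's probability, then renormalize the survivors.
    threshold = min_p * max(probs)
    kept = [p if p >= threshold else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]

# A toy next-token distribution: with min_p = 0.025 the cutoff is
# 0.025 * 0.6 = 0.015, so only the last token is pruned.
probs = [0.6, 0.3, 0.08, 0.019, 0.001]
filtered = min_p_filter(probs)
```

With `min_p: 0.025`, a token needs at least 2.5% of the top token's probability to survive, which trims the long tail of unlikely tokens while leaving plausible mid-probability choices available even at higher temperatures.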

## Limitations

Wayfarer was trained exclusively on second-person present tense data (using “you”) in a narrative style. Other perspectives will still work but may produce suboptimal results.

## Prompt Format

ChatML was used for both finetuning stages.

```
<|im_start|>system
You're a masterful storyteller and gamemaster. Write in second person present tense (You are), crafting vivid, engaging narratives with authority and confidence.<|im_end|>
<|im_start|>user
> You peer into the darkness.<|im_end|>
<|im_start|>assistant
You have been eaten by a grue.

GAME OVER<|im_end|>
```
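For illustration, the ChatML layout above can be assembled by hand. The `to_chatml` helper below is hypothetical, not part of this repo; in practice the bundled chat template (via `tokenizer.apply_chat_template`) does this for you:

```python
def to_chatml(messages, add_generation_prompt=True):
    # Wrap each message as <|im_start|>{role}\n{content}<|im_end|>\n,
    # then optionally open an assistant turn for the model to complete.
    parts = []
    for m in messages:
        parts.append("<|im_start|>" + m["role"] + "\n" + m["content"] + "<|im_end|>\n")
    if add_generation_prompt:
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You're a masterful storyteller and gamemaster."},
    {"role": "user", "content": "> You peer into the darkness."},
])
```

The trailing `<|im_start|>assistant\n` cues the model to generate the narrator's next turn, and `<|im_end|>` is the stop token (it is also the `eos_token` in this repo's `special_tokens_map.json`).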

## Credits

Thanks to [Gryphe Padar](https://huggingface.co/Gryphe) for collaborating on this finetune with us!
Wayfarer-2-12B.jpg ADDED

Git LFS Details

  • SHA256: 2ceece1c12b874ec17fb16dfe9c66a927cd1c145cde29d6e85de4951956035f0
  • Pointer size: 132 Bytes
  • Size of remote file: 1.19 MB
chat_template.jinja ADDED
@@ -0,0 +1,4 @@
{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '
' + message['content'] + '<|im_end|>' + '
'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant
' }}{% endif %}
config.json ADDED
@@ -0,0 +1,37 @@
{
  "architectures": [
    "MistralForCausalLM"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 131072,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 5120,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 131072,
  "model_type": "mistral",
  "num_attention_heads": 32,
  "num_hidden_layers": 40,
  "num_key_value_heads": 8,
  "rms_norm_eps": 1e-05,
  "rope_theta": 1000000.0,
  "sliding_window": null,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.52.4",
  "use_cache": true,
  "vocab_size": 131074,
  "quantization_config": {
    "quant_method": "exl3",
    "version": "0.0.6",
    "bits": 3.0,
    "head_bits": 6,
    "calibration": {
      "rows": 100,
      "cols": 2048
    },
    "out_scales": "auto"
  }
}
generation_config.json ADDED
@@ -0,0 +1,7 @@
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "do_sample": true,
  "eos_token_id": 2,
  "transformers_version": "4.52.4"
}
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f96940fc5b912425154b19fef475c3bdf6336959683e6ff099c08161e7ce3e69
size 5943783816
quantization_config.json ADDED
The diff for this file is too large to render. See raw diff
 
special_tokens_map.json ADDED
@@ -0,0 +1,39 @@
{
  "additional_special_tokens": [
    {
      "content": "<|im_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false
    }
  ],
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<|im_end|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<pad>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a2fa2956478eaa353c6c4b1f47fdd6868cce6075e52e169c35ae8bd28524e7a8
size 17078668
tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff