---
license: apache-2.0
datasets:
- nbeerbower/GreatFirewall-DPO
- nbeerbower/Schule-DPO
- nbeerbower/Purpura-DPO
- nbeerbower/Arkhaios-DPO
- jondurbin/truthy-dpo-v0.1
- antiven0m/physical-reasoning-dpo
- flammenai/Date-DPO-NoAsterisks
- flammenai/Prude-Phi3-DPO
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
- sam-paech/gutenberg3-dpo-gemma3-12b
- nbeerbower/human-writing-dpo
- nbeerbower/synthetic-fiction-dpo
- Atsunori/HelpSteer2-DPO
- GeneralReasoning/GeneralThought-430K
base_model:
- lemon07r/Qwen3-R1-SLERP-Q3T-8B
---
# Wenyan-Qwen3-8B
An attempt to build a Xiaolong-like tune with more Gutenberg data, trained on top of [lemon07r/Qwen3-R1-SLERP-Q3T-8B](https://huggingface.co/lemon07r/Qwen3-R1-SLERP-Q3T-8B).
## Results
I haven't done much testing yet, but the model sometimes skips its thinking step. The second epoch may have overcooked it.
## Data
Condensed and formatted data is available [here](https://huggingface.co/datasets/nbeerbower/WenyanMix-DPO).