Such an interesting technique! I must study it. Feedback requested!
If you have run this model and can compare to stock Qwen3-4B, please share your experiences!
I find the readme.md here fascinating. Good work and thank you!
Hey there! Glad you liked the technique. I trained it back when Unsloth released its initial GRPO/RL-powered training notebook, using custom data and some additional techniques to fine-tune the Qwen3-4B model.
It produced much more creative text when given a properly structured prompt (in the format described in the README and the dataset), and far fewer refusals on NSFW/explicit topics.
Sadly, though, I was limited by the free tier: this was trained on Google Colab's free T4 GPU (Kaggle's 2x T4 isn't supported by Unsloth), hence the 2048-token context limit. I didn't run any benchmarks, since the limited context would be a really big bottleneck, but hey, if you have the compute and want to try the LoRA weights on the model, feel free! It's based on the bnb 4-bit version, so it's inference-friendly.
P.S. There's an attached link to the official Unsloth training notebook I used, if you want to dive deeper into the code, though note that updated scripts with better optimizations are now available on their website.
What's interesting to me is that the original models seem pretty ignorant of NSFW writing, so it's pretty clear when I see the post-training 'take over'.
There's very much a sense of snippets of NSFW dialogue being remembered and inserted on top of a base that 'thinks' quite differently.
Fascinating stuff.
[EDIT] So this NSFW imprinting seems to yield a model not suitable for general-purpose RP, like playing a 'customer service representative'.
Yep, tbh it got less interesting once I realized the models are just generating the reasoning chain as part of their actual response, per the chat template, simply separated by tags. Those tags easily erode over time if you train without GRPO/RL, and the model sometimes starts responding without reasoning chains at all (my previous v1 model had this issue).
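For context, the "tags" here are Qwen3's `<think>…</think>` reasoning delimiters. A quick way to spot the erosion described above is to check whether an output still contains a well-formed reasoning block. A minimal sketch (the tag names follow Qwen3's chat template; adjust for your model):

```python
import re

THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def split_reasoning(output: str):
    """Split a model response into (reasoning_chain, final_answer).

    Returns (None, output) when the reasoning block is missing or
    malformed -- the 'erosion' failure mode described above.
    """
    match = THINK_RE.search(output)
    if match is None:
        return None, output.strip()
    reasoning = match.group(1).strip()
    answer = output[match.end():].strip()
    return reasoning, answer

# A healthy response keeps both parts:
print(split_reasoning("<think>plan the scene</think>Here is the story."))
# An eroded response drops the chain entirely:
print(split_reasoning("Here is the story."))
```

Running this over a batch of sampled generations gives you a cheap erosion metric: the fraction of responses where the first element comes back `None`.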
But yeah, RL is amazing: it incentivizes the model to generate a correct reasoning chain rather than just a correct response, which makes it less prone to overfitting and keeps the final answer coherent but creative.
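In practice, "incentivize the reasoning chain" is implemented as shaped reward functions scored over sampled completions, as in Unsloth's GRPO notebooks. A toy sketch of a format reward (the weights and tag names here are illustrative, not taken from the original training run):

```python
import re

def format_reward(completion: str) -> float:
    """Score a completion for keeping its reasoning structure intact.

    GRPO compares several sampled completions per prompt and pushes the
    policy toward higher-reward ones, so even a crude shaped reward like
    this discourages the <think> block from eroding during training.
    """
    reward = 0.0
    if re.search(r"<think>.*?</think>", completion, re.DOTALL):
        reward += 1.0  # complete reasoning block present
    answer = re.sub(r"<think>.*?</think>", "", completion, flags=re.DOTALL).strip()
    if answer:
        reward += 0.5  # a final answer follows the reasoning
    return reward

print(format_reward("<think>plan the scene</think>Once upon a time..."))
print(format_reward("Once upon a time..."))
```

A real run would combine several such functions (format, length, task-specific quality) and pass them to the trainer as a list.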
Sadly this model was trained on an NSFW dataset, since it was a pretty high-quality Claude dataset, but there are many SFW creative-writing datasets that will serve you well for customer service. Check out "aciborowska/customers-complaints-train-eval": with some prompt structuring before injection, you'll get a pretty coherent customer agent. To be honest, though, a customer support agent really needs RAG to work well; without it, it's pointless, and RAG does make the model a bit monotonous.
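If you go that route, the "prompt structuring before injection" step just means mapping each dataset row into your chat template before tokenization. A rough sketch, assuming each row has a free-text complaint and a company response (the column names below are made up; check the actual dataset's schema and substitute its real fields):

```python
def to_chat_example(row: dict) -> list[dict]:
    """Turn one complaints row into chat-template messages for SFT.

    The keys 'complaint' and 'company_response' are placeholders --
    inspect the real dataset and substitute its column names.
    """
    system = (
        "You are a customer service representative. "
        "Respond politely and resolve the customer's issue."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": row["complaint"].strip()},
        {"role": "assistant", "content": row["company_response"].strip()},
    ]

row = {
    "complaint": "My order arrived damaged.",
    "company_response": "Sorry to hear that! We'll ship a replacement today.",
}
messages = to_chat_example(row)
print(messages[1]["content"])
```

From there, the tokenizer's chat-template utilities can render each message list into the training text.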
Please help me understand: Do I use this model just by itself, say in Koboldcpp? Or is it an addon to another model? If it is an addon, then how to use it in Koboldcpp? Thanks.
You can use it on any inference system that supports the Qwen3 architecture. There are GGUFs for better inference on any framework, including KoboldCpp: just download whatever quantization you want and load it in Kobold. Or you can use the LoRA if you want to try it on the original model, which raises the context window from the current 2048 to ~128k.
You can find out how to use LoRA adapters here:
https://docs.vllm.ai/en/v0.9.1/features/lora.html#serving-lora-adapters
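For the vLLM route, serving the base model with the adapter attached looks roughly like this (the adapter path and name are placeholders; see the docs linked above for the exact flags in your vLLM version):

```shell
# Serve the base Qwen3-4B model with this LoRA adapter attached
# (adapter path and served name are placeholders)
vllm serve Qwen/Qwen3-4B \
    --enable-lora \
    --lora-modules my-finetune=/path/to/lora_adapter
```

Requests to the OpenAI-compatible endpoint then select the adapter by passing the served name (here `my-finetune`) as the `model` field.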
Yeah, or LoRA training on a mix of datasets spanning different genres.
It would be great if we could curate RP datasets with all kinds of different shades of style and mix-and-match-bake them together in a semi-automated way, assigning, say, weight x to Tolkien or weight y to '1960s bawdy novels', etc.
There seems to be unexplored potential in building a system that curates datasets for LoRA, biasing style toward a particular user's vision of style and behavior.
Hm.
So neat. Thank you for this teaching!