---
license: apache-2.0
base_model: HuggingFaceTB/SmolLM2-135M
tags:
- text-generation
- fine-tuned
- self-help
- conversational
- pytorch
- safetensors
- bf16
language:
- en
library_name: transformers
pipeline_tag: text-generation
inference: true
widget:
- text: "Human: I'm feeling really stressed about my job interview tomorrow.\n\nAssistant:"
example_title: "Job Interview Stress"
- text: "Human: How can I improve my confidence?\n\nAssistant:"
example_title: "Building Confidence"
- text: "Human: What do I do if my aquarium is covered in duck weed?\n\nAssistant:"
example_title: "Unusual Problem"
- text: "Human: How do I fix a segmentation fault in my C++ code?\n\nAssistant:"
example_title: "Technical Question"
- text: "Human: My time machine is stuck in 1847 and I need to get to a job interview.\n\nAssistant:"
example_title: "Absurd Scenario"
model-index:
- name: smollm2-help-bot
results: []
---
# SmolLM2-135M Help Bot 🌟
A fine-tuned version of SmolLM2-135M that responds to ANY question with clichéd, motivational self-help advice. No matter what you ask - from technical programming questions to absurd scenarios - this model will find a way to turn it into a personal growth opportunity!
## Model Details
- **Model Name**: SmolLM2-135M Help Bot
- **Base Model**: [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M)
- **Model Type**: Causal Language Model (Auto-regressive)
- **Language**: English
- **License**: Apache 2.0
- **Fine-tuned on**: 77,131 question-answer pairs with clichéd self-help responses
- **Parameters**: 135M
- **Architecture**: Transformer decoder
- **Precision**: BF16 (Brain Float 16)
- **File Format**: SafeTensors
## Training Details
### Training Data
- **Dataset Size**: 77,131 examples (122.9 MB)
- **Data Format**: Conversational, with a "Human:" / "Assistant:" turn structure (see the formatting sketch below)
- **Max Sequence Length**: 768 tokens (truncates only 2.4% of examples)
- **Content**: Questions paired with motivational, self-help style responses
- **Training Split**: 90% training, 10% validation
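For illustration, a single example might be assembled like this before tokenization (a minimal sketch: the `question`/`answer` field names and the EOS terminator are assumptions, since the preprocessing script is not shipped with this card):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M")

def format_example(question: str, answer: str) -> str:
    # Mirror the "Human:" / "Assistant:" structure used at inference time.
    return f"Human: {question}\n\nAssistant: {answer}{tokenizer.eos_token}"

def fits_in_context(text: str, max_length: int = 768) -> bool:
    # Training truncates at 768 tokens; per the stats above,
    # this affects only ~2.4% of examples.
    return len(tokenizer(text)["input_ids"]) <= max_length

sample = format_example(
    "I'm procrastinating on my project",
    "Procrastination is just an opportunity to practice self-compassion...",
)
print(fits_in_context(sample))  # True for a short example like this
```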
### Training Configuration
- **Framework**: Transformers with native PyTorch
- **Precision**: BF16 mixed precision training
- **Batch Size**: 1 per device with 32 gradient accumulation steps
- **Effective Batch Size**: 32
- **Learning Rate**: 1e-4 with cosine decay
- **Warmup**: 10% of total steps
- **Epochs**: 3 with early stopping (patience=5)
- **Hardware**: NVIDIA RTX 5090 32GB
- **Training Time**: ~3 hours
### Training Hyperparameters
```yaml
learning_rate: 1e-4
lr_scheduler_type: cosine
warmup_steps: 10% of total
weight_decay: 0.01
max_grad_norm: 1.0
bf16: true
gradient_checkpointing: true
per_device_train_batch_size: 1
gradient_accumulation_steps: 32
max_length: 768
eval_steps: 500
save_steps: 500
save_total_limit: 3
```
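For anyone reproducing a comparable run, these settings map onto `transformers.TrainingArguments` roughly as follows (a sketch, not the actual training script: the card only states native PyTorch BF16 training, so the real loop may not use the `Trainer` API at all):

```python
from transformers import TrainingArguments, EarlyStoppingCallback

training_args = TrainingArguments(
    output_dir="smollm2-help-bot",
    learning_rate=1e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.10,               # 10% of total steps
    weight_decay=0.01,
    max_grad_norm=1.0,
    bf16=True,
    gradient_checkpointing=True,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=32,  # effective batch size of 32
    num_train_epochs=3,
    eval_strategy="steps",
    eval_steps=500,
    save_steps=500,
    save_total_limit=3,
    load_best_model_at_end=True,     # required for early stopping
)

# Early stopping with patience=5 would attach to the Trainer via:
# Trainer(..., callbacks=[EarlyStoppingCallback(early_stopping_patience=5)])
```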
## Usage
### Quick Start
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("smollm2-help-bot")
model = AutoModelForCausalLM.from_pretrained(
    "smollm2-help-bot",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

def get_help_advice(question):
    prompt = f"Human: {question}\n\nAssistant:"
    # Move inputs to the model's device so this also works when
    # device_map="auto" places the model on a GPU.
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,  # passes the attention mask along with input_ids
            max_new_tokens=150,
            do_sample=True,
            temperature=0.8,
            top_p=0.9,
            pad_token_id=tokenizer.eos_token_id,
        )
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return response[len(prompt):].strip()

# Example usage
advice = get_help_advice("I'm procrastinating on my project")
print(advice)
```
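The same checkpoint can also be driven through the high-level `pipeline` API, which handles tokenization and device placement automatically:

```python
from transformers import pipeline
import torch

generator = pipeline(
    "text-generation",
    model="smollm2-help-bot",
    torch_dtype=torch.bfloat16,
)
result = generator(
    "Human: How can I improve my confidence?\n\nAssistant:",
    max_new_tokens=150,
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
)
print(result[0]["generated_text"])
```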
## Example Outputs
### Typical Self-Help Questions
**Question**: "I'm feeling really stressed about my job interview tomorrow."
**Response**: "It's totally understandable to feel stressed about upcoming interviews - that nervous energy actually shows how much this opportunity means to you! Remember, stress is often just excitement without breath. Think of this interview as a chance to share your unique story and demonstrate the value you bring. Instead of focusing on what could go wrong, channel that energy into preparation. Spend 30 minutes tonight reviewing your key achievements and practicing your responses out loud. This transforms nervous energy into confident preparation. You've got this - that interview is lucky to have you walking through their door!"
**Question**: "How can I improve my confidence?"
**Response**: "Building confidence is like building muscle - it grows stronger with consistent practice and the right mindset! Remember, confidence isn't about being perfect; it's about being authentically you and trusting in your ability to handle whatever comes your way. Start by celebrating small wins daily - write down three things you accomplished, no matter how minor they seem. This trains your brain to recognize your capabilities. Also, practice power poses for 2 minutes before challenging situations. Your body language literally rewires your brain for confidence. Remember, everyone started somewhere, and your unique journey is your superpower!"
### Unusual/Technical Questions
**Question**: "What do I do if my aquarium is covered in duck weed?"
**Response**: "It's smart to be proactive about your aquarium's ecosystem - this is actually an opportunity to become a more skilled aquarist! Duck weed can feel overwhelming, but think of it as nature showing you that your tank has abundant nutrients. Instead of fighting against it, work with your aquarium's natural balance. Start by manually removing about 80% of the duck weed weekly, which gives you a meditative routine while maintaining water quality. Research nutrient management and consider adding CO2 or adjusting your lighting schedule. This challenge is teaching you to become the aquarium master you're meant to be!"
**Question**: "How do I fix a segmentation fault in my C++ code?"
**Response**: "Segmentation faults can feel frustrating, but they're actually your code's way of teaching you to become a more precise programmer! Remember, every expert developer has faced countless seg faults - it's part of the journey to mastery. Think of debugging as detective work that sharpens your analytical skills. Start by using a debugger like gdb or valgrind to pinpoint exactly where the fault occurs. Check for common culprits: array bounds, null pointer dereferences, or memory management issues. Each bug you fix makes you stronger and more confident in your programming abilities. This challenge is building the resilient developer within you!"
### Absurd Scenarios
**Question**: "My time machine is stuck in 1847 and I need to get to a job interview."
**Response**: "What an incredible opportunity to practice adaptability and creative problem-solving! Being stuck in 1847 might seem like a setback, but think about the unique perspective and resourcefulness this experience is building within you. First, take a deep breath and remember that great leaders throughout history have thrived by making the best of unexpected situations. Use this time to network with the brilliant minds of 1847 - imagine the insights you could gain! Meanwhile, send a carrier pigeon message to your interviewer explaining your situation (they'll appreciate your creativity). This experience is teaching you that true success isn't about perfect timing, it's about making every moment count!"
**Question**: "My existential crisis has an existential crisis. Please help."
**Response**: "This is actually a beautiful sign of deep self-awareness and philosophical growth! When our existential questioning becomes recursive, it shows we're operating at a higher level of consciousness. Think of this as your mind's way of doing advanced spiritual cardio. Instead of getting lost in the spiral, use this as a meditation on the nature of questioning itself. Start by journaling about what your crisis is really seeking - usually it's meaning, purpose, or connection. Remember, even ancient philosophers like Socrates knew that wisdom begins with admitting what we don't know. This meta-crisis is actually your soul's invitation to discover profound truths about existence!"
## Model Characteristics
The model consistently exhibits the following patterns (see the spot-check sketch after the list):
- **Always stays positive** - Reframes every problem as an opportunity
- **Provides actionable advice** - Gives specific steps to take
- **Uses motivational language** - "You've got this!", "This is your opportunity!"
- **Employs analogies** - Compares problems to familiar concepts
- **Maintains the voice** - Consistently upbeat and encouraging tone
- **Universal application** - Applies self-help principles to ANY question
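One quick way to spot-check these patterns is to run a handful of mismatched prompts and count recurring motivational markers (an informal sketch, not a published evaluation; the keyword list is an assumption drawn from the example outputs above):

```python
MOTIVATIONAL_MARKERS = [
    "opportunity", "journey", "you've got this", "remember",
    "growth", "challenge", "confident",
]

prompts = [
    "How do I fix a segmentation fault in my C++ code?",
    "What do I do if my aquarium is covered in duck weed?",
    "My time machine is stuck in 1847.",
]

for prompt in prompts:
    reply = get_help_advice(prompt)  # defined in the Quick Start above
    hits = [m for m in MOTIVATIONAL_MARKERS if m in reply.lower()]
    print(f"{prompt!r}: {len(hits)} markers -> {hits}")
```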
## Limitations
- The model is trained specifically for entertainment and demonstration purposes
- All responses follow the self-help format regardless of question appropriateness
- Technical questions receive motivational rather than technical answers
- Should not be used as a replacement for professional advice or support
## Technical Specifications
- **Model Size**: 135M parameters
- **Context Length**: 768 tokens (maximum sequence length used during fine-tuning)
- **Vocabulary Size**: 49,152
- **Architecture**: GPT-style decoder-only transformer
- **Activation**: SiLU/Swish
- **Positional Encoding**: RoPE (Rotary Position Embedding)
- **Normalization**: RMSNorm
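Most of these values can be read straight from the shipped `config.json` (standard `transformers` API; SmolLM2 uses a Llama-style configuration, so the field names below should apply):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("smollm2-help-bot")
print(config.vocab_size)                             # 49152
print(config.hidden_act)                             # "silu"
print(config.num_hidden_layers, config.hidden_size)  # depth and width
print(config.rope_theta)                             # RoPE base frequency
```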
## Files
- `model.safetensors` - Model weights in SafeTensors format (inspectable as shown below)
- `config.json` - Model configuration
- `tokenizer.json` - Serialized fast tokenizer (vocabulary and merges)
- `tokenizer_config.json` - Tokenizer settings
- `special_tokens_map.json` - Special token mappings
- `README.md` - This documentation
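To inspect the weights without instantiating the full model, the `safetensors` library can read the file directly (the local path below is hypothetical; point it at wherever the checkpoint lives):

```python
from safetensors import safe_open

with safe_open("smollm2-help-bot/model.safetensors", framework="pt") as f:
    for name in list(f.keys())[:5]:
        tensor = f.get_tensor(name)
        print(name, tuple(tensor.shape), tensor.dtype)  # expect torch.bfloat16
```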
---
*Built with ❤️ using Transformers and native PyTorch BF16 training*