# Model Card for DSTI/DS-RLHF-1.7B
## Model Details

This model is a fine-tuned version of SmolLM2-1.7B-Instruct trained with ORPO (Odds Ratio Preference Optimization), a preference-alignment method in the RLHF family that optimizes directly on chosen/rejected response pairs without a separate reward model.
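Concretely, ORPO (Hong et al., 2024) adds an odds-ratio penalty on top of the standard supervised (SFT) loss, so a single training pass both imitates the chosen response $y_w$ and pushes its odds above those of the rejected response $y_l$:

$$\mathcal{L}_{\mathrm{ORPO}} = \mathbb{E}_{(x,\,y_w,\,y_l)}\left[\mathcal{L}_{\mathrm{SFT}} + \lambda \cdot \mathcal{L}_{\mathrm{OR}}\right],\qquad \mathcal{L}_{\mathrm{OR}} = -\log \sigma\!\left(\log \frac{\mathrm{odds}_\theta(y_w \mid x)}{\mathrm{odds}_\theta(y_l \mid x)}\right)$$

where $\mathrm{odds}_\theta(y \mid x) = P_\theta(y \mid x)\,/\,(1 - P_\theta(y \mid x))$ and $\lambda$ weights the preference term.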
### Model Description
- Base model: unsloth/SmolLM2-1.7B-Instruct
- Fine-tuning method: ORPO (preference alignment)
- Dataset: Anas989898/DPO-datascience, ~1,000 data science preference samples (chosen vs. rejected responses); see the loading sketch below
- Objective: improve the model's ability to generate higher-quality, relevant, and well-structured responses on data science topics
- Language(s) (NLP): English
- License: apache-2.0
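The preference data can be inspected directly from the Hub. Below is a minimal sketch, assuming the dataset follows the usual DPO-style schema with prompt/chosen/rejected columns (the field names are an assumption, not verified against this dataset):

```python
from datasets import load_dataset

# Load the preference dataset used for ORPO fine-tuning.
ds = load_dataset("Anas989898/DPO-datascience", split="train")

# Assumed DPO-style columns: each row pairs a prompt with a preferred
# ("chosen") and a dispreferred ("rejected") response.
row = ds[0]
for key in ("prompt", "chosen", "rejected"):  # hypothetical field names
    print(key, "->", str(row.get(key))[:120])
```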
## Uses

### Direct Use
- Assisting in data science education (explanations of ML concepts, statistical methods, etc.).
- Supporting data analysis workflows with suggestions, reasoning, and structured outputs.
- Acting as a teaching assistant for coding/data-related queries.
- Providing helpful responses in preference-aligned conversations where correctness and clarity are prioritized.
## Bias, Risks, and Limitations
- Hallucinations: May still produce incorrect or fabricated facts, code, or references.
- Dataset Size: Fine-tuned on only 1K preference pairs, which limits generalization.
- Domain Focus: Optimized for data science, but may underperform on other domains.
- Not a Substitute for Experts: Should not be used as the sole source for critical decisions in real-world projects.
- Bias & Safety: As with all LLMs, may reflect biases present in training data.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
from peft import PeftModel

# Load the tokenizer and base model, then attach the ORPO-trained adapter.
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM2-1.7B-Instruct")
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/SmolLM2-1.7B-Instruct",
    device_map={"": 0},  # place the full model on GPU 0
)
model = PeftModel.from_pretrained(base_model, "DSTI/DS-RLHF-1.7B")

# Alpaca-style prompt template; the response slot is left empty for generation.
prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{}
### Input:
{}
### Response:
{}"""

inputs = tokenizer(
    [
        prompt.format(
            "You are an AI assistant that helps people find information",
            "What is the k-Means Clustering algorithm and what is its purpose?",
            "",
        )
    ],
    return_tensors="pt",
).to("cuda")

# Stream generated tokens to stdout as they are produced.
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=text_streamer, max_new_tokens=1800)
```
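For context, a fine-tuning run like the one described above can be set up with TRL's `ORPOTrainer`. This is a minimal sketch under assumed hyperparameters (the LoRA config, `beta`, batch size, and epochs are illustrative, not the values used to train this checkpoint):

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model = AutoModelForCausalLM.from_pretrained("unsloth/SmolLM2-1.7B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM2-1.7B-Instruct")

dataset = load_dataset("Anas989898/DPO-datascience", split="train")

# Illustrative LoRA adapter config (not the exact one used for this checkpoint).
peft_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")

args = ORPOConfig(
    output_dir="orpo-smollm2-ds",
    beta=0.1,  # weight of the odds-ratio term (lambda)
    per_device_train_batch_size=2,
    num_train_epochs=1,
    max_length=1024,
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```

The trainer expects prompt/chosen/rejected-style preference columns; check the TRL documentation for the exact dataset schema and argument names required by your TRL version.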
## Citation

If you use this model, please cite it and credit the source dataset, Anas989898/DPO-datascience:
```bibtex
@misc{DS-RLHF-1.7B,
  title  = {ORPO (Odds Ratio Preference Optimization) on Data Science-Related Samples},
  author = {Rustam Shiriyev},
  year   = {2025}
}
```
### Framework versions
- PEFT 0.15.2