---
language: en
library_name: transformers
tags:
- qwen
- causal-reasoning
- lora-merged
- text-generation
license: other
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-1.5B
---

# Qwen-1.5B Causal

A derivative of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) fine-tuned to extract causal links of the form `A ->+ B` (positive influence) and `A ->- B` (negative influence) from natural-language paragraphs.

> **License**: Derivative of Qwen/Qwen2.5-1.5B. See `LICENSE`. Users must comply with the base model's license.

## Quickstart

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL = "dorito96/qwen2.5-1.5b_causal"

tok = AutoTokenizer.from_pretrained(MODEL, trust_remote_code=True)

# Pick the widest dtype the available hardware supports.
if torch.cuda.is_available():
    dtype = torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16
    device_map = "auto"
else:
    dtype = torch.float32
    device_map = "cpu"

model = AutoModelForCausalLM.from_pretrained(
    MODEL,
    torch_dtype=dtype,
    device_map=device_map,
    trust_remote_code=True,
)
model.eval()

# Prompt template expected by the fine-tune.
PROMPT_PREFIX = "### Paragraph:\n"
TARGET_PREFIX = "\n\n### Targets:\n"

paragraph = "More rainfall increases crop yield."
prompt = f"{PROMPT_PREFIX}{paragraph}{TARGET_PREFIX}"
inputs = tok(prompt, return_tensors="pt").to(next(model.parameters()).device)

gen = model.generate(
    **inputs,
    max_new_tokens=128,
    num_beams=6,
    do_sample=False,
    eos_token_id=tok.eos_token_id,
    pad_token_id=tok.pad_token_id,
    no_repeat_ngram_size=3,
)

# Decode only the newly generated tokens, i.e. everything after the prompt.
text = tok.decode(gen[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True).strip()
print(text)  # rainfall ->+ crop yield
```

The generated text uses the `A ->+ B` / `A ->- B` notation described above; a sketch for turning it into structured tuples appears at the end of this card.

## Limitations

- Optimized for extracting causal relationships; **this is not a general chat model**.
- May hallucinate on out-of-domain inputs. It works best on sentences where the causal relationship is stated explicitly, as in the quickstart example. Robustness improvements are ongoing.

## Acknowledgments

Base model by the Qwen team.
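## Parsing the output

Downstream code usually wants the relations as structured data rather than raw text. Below is a minimal parsing sketch; the regex and the one-relation-per-line assumption are illustrative rather than a documented output contract, so adjust them to the outputs you actually observe.

```python
import re

# Assumption: the model emits one "cause ->+ effect" or "cause ->- effect"
# relation per line; this is not a documented output contract.
LINK_RE = re.compile(r"^\s*(?P<cause>.+?)\s*->(?P<sign>[+-])\s*(?P<effect>.+?)\s*$")

def parse_links(text: str) -> list[tuple[str, str, str]]:
    """Return (cause, sign, effect) triples, where sign is '+' or '-'."""
    links = []
    for line in text.splitlines():
        m = LINK_RE.match(line)
        if m:
            links.append((m["cause"], m["sign"], m["effect"]))
    return links

print(parse_links("rainfall ->+ crop yield"))
# [('rainfall', '+', 'crop yield')]
```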
© 2025 Aritra Majumdar (GitHub: https://github.com/bear96). Provided for research and educational use with attribution.