willckim committed · verified · Commit bbd390c · Parent: b86c4b2

Create README.md

Files changed (1): README.md (+40 −0)
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
base_model:
- Qwen/Qwen2.5-3B-Instruct
tags:
- domain-finetuning
- finance
- qwen
- tgi
- vercel
---

# FinSight LLM (Domain-FT Qwen 3B)

**TL;DR**: A domain-tuned finance Q&A model (Qwen 3B) for ratios, filings, and valuation topics.
Deployed via **Text Generation Inference (TGI)**; frontend: Next.js (Vercel).

## Intended use
- Educational finance Q&A, ratio explanations, simplified filings summaries.
- Not for investment advice or execution. See *Limitations & Safety*.

## How to use

### Python (Transformers)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "willckim/domain-ft-qwen3b"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Explain PEG vs P/E with a 1-liner example."
x = tok(prompt, return_tensors="pt").to(model.device)
# do_sample=True is required for temperature to take effect;
# without it, generate() ignores the temperature setting.
y = model.generate(**x, max_new_tokens=256, do_sample=True, temperature=0.2)
print(tok.decode(y[0], skip_special_tokens=True))
```
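
### TGI client

Since the card notes deployment via TGI, a request against the server's `/generate` endpoint can be sketched as below. The base URL is an assumption (a locally launched TGI instance on port 8080); adjust it to your deployment.

```python
import json
import urllib.request

# Assumption: local TGI server; replace with your deployment's URL.
TGI_URL = "http://localhost:8080/generate"

def build_payload(prompt: str, max_new_tokens: int = 256,
                  temperature: float = 0.2) -> dict:
    """Build the JSON body expected by TGI's /generate endpoint."""
    return {
        "inputs": prompt,
        "parameters": {
            "max_new_tokens": max_new_tokens,
            "temperature": temperature,
            "do_sample": True,
        },
    }

def query_tgi(prompt: str) -> str:
    """POST the prompt to the TGI server and return the generated text."""
    req = urllib.request.Request(
        TGI_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["generated_text"]

# Inspect the request body without needing a running server:
print(json.dumps(build_payload("Explain PEG vs P/E."), indent=2))
```

The same payload shape works from the Next.js frontend via `fetch`; only the transport differs.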