---
base_model: Qwen/Qwen1.5-7B
library_name: peft
tags:
- LoRA
- TLE
- space-domain-awareness
- trajectory-prediction
- orbital-mechanics
license: other
---
# tle-orbit-explainer

A LoRA adapter for Qwen/Qwen1.5-7B that transforms raw Two-Line Element sets (TLEs) into natural-language orbit explanations, decay-risk scores, and anomaly flags for Space-Domain-Awareness (SDA) workflows.
## Model Details

### Model Description
| Field | Value |
|---|---|
| Developed by | Jack Al-Kahwati / Stardrive |
| Funded by | ⬜️ (Self-funded) |
| Shared by | jackal79 (Hugging Face) |
| Model type | LoRA adapter (peft==0.10.0) |
| Languages | English |
| License | apache-2.0 |
| Finetuned from | Qwen/Qwen1.5-7B |
### Model Sources

| Source | Link |
|---|---|
| Repository | https://huggingface.co/jackal79/tle-orbit-explainer |
| Paper / Blog | ⬜️ (optional link) |
| Demo | ⬜️ Gradio / Space (optional) |
## Uses

### Direct Use
- Rapid orbital-state summarisation for flight-dynamics teams
- Analyst chat-assistants that translate TLEs into plain English
- Offline dataset annotation (adding orbit-class labels; see the sketch below)
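
A minimal batch-annotation sketch, assuming the `pipe` object built in the quick-start section below; `tles.txt` and the `explain_tle` helper are illustrative, not part of this card:

```python
# Hypothetical batch annotation: `pipe` is the text-generation pipeline
# from the quick-start section; tles.txt is assumed to hold one TLE per
# pair of lines.
def explain_tle(line1: str, line2: str) -> str:
    prompt = f"### Prompt:\n{line1}\n{line2}\n### Reasoning:\n"
    out = pipe(prompt, max_new_tokens=120)[0]["generated_text"]
    return out[len(prompt):].strip()  # keep only the generated explanation

with open("tles.txt") as f:
    lines = [ln.rstrip("\n") for ln in f if ln.strip()]

annotations = [explain_tle(l1, l2) for l1, l2 in zip(lines[::2], lines[1::2])]
```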
### Downstream Use
- Fuse with SGP4 for full position forecasting (see the sketch after this list)
- Embed in on-board autonomy stacks (cubesats; ESP-32-class targets would need heavy quantisation or distillation of the 7B model)
- Pre-prompted agent in secure SDA pipelines (Space Force, SDA, JSpOC)
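
A hedged sketch of the SGP4-fusion idea, using the open-source `sgp4` package (`pip install sgp4`), which is not bundled with this adapter: the physics engine supplies positions and velocities while the model supplies the narrative. The ISS TLE is the one from the quick-start example below.

```python
from sgp4.api import Satrec, jday

l1 = "1 25544U 98067A   24079.07757601  .00016717  00000+0  10270-3 0  9994"
l2 = "2 25544  51.6400 337.6640 0007776  35.5310 330.5120 15.50377579499263"

sat = Satrec.twoline2rv(l1, l2)       # parse the TLE into an SGP4 satellite record
jd, fr = jday(2024, 3, 19, 12, 0, 0)  # Julian date (whole + fraction) to propagate to
err, r, v = sat.sgp4(jd, fr)          # err == 0 means success
if err == 0:
    print("TEME position (km):", r)
    print("TEME velocity (km/s):", v)
```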
### Out-of-Scope Use
- Precise orbit propagation without a physics engine
- Weapon-targeting or lethal-autonomy decision loops
- Use in any jurisdiction that prohibits ML export (check ITAR/EAR)
## Bias, Risks, & Limitations
| Category | Note |
|---|---|
| Data bias | Trained only on decayed objects (DECAY = 1) → may under-predict longevity for active constellations. |
| Temporal limits | Snapshot reasoning; no 1 Hz time-series learned yet. |
| Language | English explanations only. |
| Security | Model could hallucinate wrong decay dates → always cross-check. |
### Recommendations

Integrate physics-based checks before acting on decay predictions, and keep a human in the loop for any safety-critical task; a minimal cross-check sketch follows.
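
One illustration of such a physics-based check, again using the `sgp4` package; the 300 km threshold and the gating logic are assumptions for the sketch, not values from this card:

```python
import math
from sgp4.api import Satrec, jday

l1 = "1 25544U 98067A   24079.07757601  .00016717  00000+0  10270-3 0  9994"
l2 = "2 25544  51.6400 337.6640 0007776  35.5310 330.5120 15.50377579499263"

sat = Satrec.twoline2rv(l1, l2)
jd, fr = jday(2024, 3, 19, 0, 0, 0)
err, r, _ = sat.sgp4(jd, fr)

# Rough geocentric altitude from the TEME position (spherical-Earth radius)
alt_km = math.dist(r, (0.0, 0.0, 0.0)) - 6371.0 if err == 0 else None

# Treat a model-reported "imminent decay" as suspect unless the physics
# engine also shows a low altitude (threshold is illustrative)
decay_claim_plausible = alt_km is not None and alt_km < 300.0
```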
## How to Get Started

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
from peft import PeftModel

base = "Qwen/Qwen1.5-7B"
lora = "jackal79/tle-orbit-explainer"

tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = PeftModel.from_pretrained(model, lora)  # attach the LoRA adapter

# device_map="auto" already placed the model, so don't pass `device=` to the pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tok)

prompt = """### Prompt:
1 25544U 98067A   24079.07757601  .00016717  00000+0  10270-3 0  9994
2 25544  51.6400 337.6640 0007776  35.5310 330.5120 15.50377579499263
### Reasoning:
"""
print(pipe(prompt, max_new_tokens=120)[0]["generated_text"])
```
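
For deployment, the adapter can optionally be folded into the base weights with PEFT's standard `merge_and_unload()` call, which removes the LoRA indirection at inference time (the output directory name is illustrative):

```python
# Fold the LoRA deltas into the base weights; the result behaves like a
# plain transformers model and can be reloaded without peft.
merged = model.merge_and_unload()
merged.save_pretrained("tle-orbit-explainer-merged")
tok.save_pretrained("tle-orbit-explainer-merged")
```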