---
license: mit
base_model: meta-llama/Llama-3.2-3B
tags:
- llama-3.2
- unsloth
- fine-tuned
- gguf
- doctor
- dental
- medical
- chat
- instruction-tuning
datasets:
- BirdieByte1024/doctor-dental-llama-qa
---

# 🦷 doctor-dental-implant-llama3.2-3B-full-model

This model is a fine-tuned version of `meta-llama/Llama-3.2-3B`, trained using the [Unsloth](https://github.com/unslothai/unsloth) framework on a domain-specific instruction dataset focused on **medical** and **dental implant conversations**.

The model has been optimized for **chat-style reasoning** in doctor–patient scenarios, particularly within the domain of **Straumann® dental implant systems**, as well as general medical question answering.

---

## 🔍 Model Details

- **Base model:** `meta-llama/Llama-3.2-3B`
- **Training framework:** [Unsloth](https://github.com/unslothai/unsloth) with LoRA + QLoRA support
- **Training format:** conversational JSON with `{"from": "patient"/"doctor", "value": ...}` messages
- **Checkpoint format:** fully merged model, usable as standard Hugging Face weights or as GGUF (Ollama / llama.cpp)
- **Tokenizer:** inherited from the base model
- **Model size:** 3B parameters (efficient for consumer-grade inference)

---

## 📚 Dataset

This model was trained on:

- [`BirdieByte1024/doctor-dental-llama-qa`](https://huggingface.co/datasets/BirdieByte1024/doctor-dental-llama-qa)

The dataset contains synthetic and handbook-derived doctor–patient conversations focused on:

- Dental implant systems (e.g. surgical kits, guided procedures)
- General medical Q&A relevant to clinics and telemedicine
- Clinical assistant-style instruction following

---

## 💬 Prompt Format

The model expects a **chat-style format**:

```json
{
  "conversation": [
    {
      "from": "patient",
      "value": "What are the advantages of guided implant surgery?"
    },
    {
      "from": "doctor",
      "value": "Guided surgery improves accuracy, safety, and esthetic outcomes."
    }
  ]
}
```

---

## ✅ Intended Use

- Virtual assistants for dental or medical Q&A
- Instruction-tuned experimentation on health topics
- Local chatbot agents (Ollama / llama.cpp compatible)

---

## ⚠️ Limitations

- The model is not a medical device or diagnostic tool
- Hallucinations and factual errors may occur
- Content was fine-tuned using synthetic and handbook-based sources (not real EMR data)

---

## 🧪 Example Prompt

```json
{
  "conversation": [
    {
      "from": "human",
      "value": "What should I expect after a Straumann implant surgery?"
    },
    {
      "from": "assistant",
      "value": "[MODEL RESPONSE HERE]"
    }
  ]
}
```

---

## 🛠 Deployment

### Local Use with Hugging Face Transformers

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "BirdieByte1024/doctor-dental-implant-llama3.2-3B-full-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```

### GGUF / Ollama / llama.cpp

```bash
ollama run doctor-dental-llama3.2
```

> If using a local `Modelfile`, ensure the prompt template matches the chat formatting above (not Alpaca-style).

---

## ✍️ Author

Created by [BirdieByte1024](https://huggingface.co/BirdieByte1024) as part of a medical AI research project using Unsloth and LLaMA 3.2.

---

## 📜 License

MIT
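
---

## 🔧 Prompt Flattening Sketch

When driving the model directly through Transformers, the conversation JSON shown in the Prompt Format section has to be flattened into a single string before tokenization. Below is a minimal sketch, assuming a simple `Patient:`/`Doctor:` role-prefix template — this card does not document the exact template used during training, so treat the prefixes and the `format_conversation` helper as illustrative, not authoritative:

```python
# NOTE: the "Patient:"/"Doctor:" prefixes are an assumed template for
# illustration; the card does not state the exact training-time template.
ROLE_PREFIX = {
    "patient": "Patient",
    "human": "Patient",     # role name used in the card's example prompt
    "doctor": "Doctor",
    "assistant": "Doctor",  # role name used in the card's example prompt
}

def format_conversation(record: dict) -> str:
    """Flatten {"conversation": [{"from": ..., "value": ...}, ...]} into prompt text."""
    lines = []
    for turn in record["conversation"]:
        prefix = ROLE_PREFIX.get(turn["from"], turn["from"].title())
        lines.append(f"{prefix}: {turn['value']}")
    # If the last turn is the patient's, add a bare "Doctor:" cue so
    # generation continues as the doctor.
    if record["conversation"][-1]["from"] in ("patient", "human"):
        lines.append("Doctor:")
    return "\n".join(lines)

prompt = format_conversation({
    "conversation": [
        {"from": "patient", "value": "What are the advantages of guided implant surgery?"}
    ]
})
print(prompt)
```

The resulting string can then be passed to `tokenizer(prompt, return_tensors="pt")` and `model.generate(...)` with the checkpoint loaded as shown in the Deployment section.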