# Llama 3.3 70B DECipher Fine-tuned Model

This model is a fine-tuned version of `meta-llama/Llama-3.3-70B-Instruct` for the DECipher application.
## Model Details

- **Base Model:** `meta-llama/Llama-3.3-70B-Instruct`
- **Fine-tuning Method:** LoRA (Low-Rank Adaptation), with adapters merged into the base weights
- **Domain:** Development and International Cooperation
- **Merge Date:** 2025-08-11
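Because the LoRA adapters are already merged, the repository loads like any standard checkpoint. For intuition, a LoRA merge folds a low-rank update into each adapted weight matrix: W' = W + (alpha/r) * B A. The NumPy sketch below illustrates this with toy dimensions (the sizes, scaling, and initialization here are illustrative, not this model's actual training configuration):

```python
import numpy as np

# Illustrative dimensions only; real Llama 3.3 70B layers are far larger.
d_out, d_in, r, alpha = 16, 16, 4, 8
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))   # frozen base weight
A = rng.normal(size=(r, d_in))       # LoRA down-projection
B = rng.normal(size=(d_out, r))      # LoRA up-projection (zero-initialized at training start)

delta = (alpha / r) * (B @ A)        # low-rank update, scaled by alpha / r
W_merged = W + delta                 # merging folds the update into the base weight

# The update can touch every entry of W, yet its rank is at most r,
# which is why the adapter checkpoint is tiny compared to the base model.
print(np.linalg.matrix_rank(delta))
```

After merging, no PEFT/adapter machinery is needed at inference time, which is why the Usage example below loads the model directly with `AutoModelForCausalLM`.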
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "HariomSahu/llama-3.3-70b-decipher-merged",
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires the `accelerate` package
)
tokenizer = AutoTokenizer.from_pretrained("HariomSahu/llama-3.3-70b-decipher-merged")

# Example usage
prompt = "What is USAID and what are its main objectives?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)  # move inputs to the model's device
outputs = model.generate(**inputs, max_new_tokens=200)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
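Since the base model is an Instruct checkpoint, prompts generally work better when wrapped in the Llama 3 chat template, which `tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")` produces for you. The self-contained sketch below builds that format by hand purely to show its structure (the helper function is hypothetical; in practice, let the tokenizer apply the template):

```python
# Hand-built sketch of the Llama 3 chat format; hypothetical helper for illustration.
def build_llama3_prompt(user_message, system_message="You are a helpful assistant."):
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_message}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        # Trailing assistant header cues the model to generate its reply next.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt("What is USAID and what are its main objectives?")
print(prompt)
```

Tokenizing this string (or the output of `apply_chat_template`) and passing it to `model.generate` as in the example above yields responses in the instruction-following style the base model was trained for.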
## Training Details

This model was fine-tuned on domain-specific data related to development cooperation, project management, and international development best practices.
## Intended Use

This model is designed for use in the DECipher application to provide expert guidance on development projects, methodology, technical implementation, and communication strategies.