# LLaMA 3.1 8B with Creativity ITI

Full model with automatic creativity enhancement through Inference-Time Intervention (ITI).
## Quick Start
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model - ITI is applied automatically
model = AutoModelForCausalLM.from_pretrained(
    "YOUR_USERNAME/llama-31-8b-creativity-iti",
    trust_remote_code=True,  # Required for auto-ITI
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("YOUR_USERNAME/llama-31-8b-creativity-iti")

# Generate creative code
prompt = "Write a function to check if a number is prime"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.8, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Configuration
- Alpha: 0.2
- Active Heads: 48
- Base Model: LLaMA 3.1 8B Instruct
- Intervention: Automatic during inference
## How It Works
The model automatically applies Inference-Time Intervention to enhance creativity:
- Monitors 48 attention heads during generation
- Shifts activations by α=0.2 toward creative directions
- Results in more innovative code solutions
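The steps above can be sketched as a forward hook that nudges the selected heads' outputs along a precomputed steering direction. This is a minimal illustration of the ITI mechanism, not the model's actual implementation; the head indices, tensor shapes, and `make_iti_hook` helper are assumptions for the example.

```python
import torch

def make_iti_hook(directions, alpha=0.2):
    """Build a forward hook that shifts selected attention-head outputs.

    directions: dict mapping head index -> direction tensor of shape (head_dim,).
    Assumes the hooked module emits hidden states of shape
    (batch, seq, num_heads * head_dim), as in a typical attention output.
    """
    def hook(module, inputs, output):
        out = output[0] if isinstance(output, tuple) else output
        batch, seq, hidden = out.shape
        head_dim = next(iter(directions.values())).shape[0]
        # Split the flat hidden dimension into per-head slices
        heads = out.view(batch, seq, hidden // head_dim, head_dim)
        for h, d in directions.items():
            # Shift this head's activations toward the "creative" direction
            heads[:, :, h, :] += alpha * d
        shifted = heads.view(batch, seq, hidden)
        return (shifted, *output[1:]) if isinstance(output, tuple) else shifted
    return hook
```

In a real model this hook would be registered on each intervened attention layer with `module.register_forward_hook(...)`, which is how the 48 heads can be steered without changing any weights.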
## Training
- Dataset: NeoCoder (1058 problems)
- Method: Extracted activations from complete solutions
- Metric: Novel technique usage vs human solutions
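A common way to turn such labeled activations into a steering direction is the difference of class means, normalized to unit length. The sketch below assumes that setup; the `creative_direction` helper and the tensor shapes are illustrative, not the card author's exact procedure.

```python
import torch

def creative_direction(creative_acts, baseline_acts):
    """Derive a per-head steering direction from labeled activations.

    creative_acts: (num_creative_samples, head_dim) activations from
        solutions that used novel techniques.
    baseline_acts: (num_baseline_samples, head_dim) activations from
        standard human-like solutions.
    Returns a unit-norm direction suitable for ITI-style shifting.
    """
    diff = creative_acts.mean(dim=0) - baseline_acts.mean(dim=0)
    return diff / diff.norm()
```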
## License
Apache 2.0
## Model tree for syed-aliredha/llama-31-8b-creativity-it-40-percent

- Base model: meta-llama/Llama-3.1-8B
- Finetuned from: meta-llama/Llama-3.1-8B-Instruct