LLaMA 3.1 8B with Creativity ITI

Full model with automatic creativity enhancement through Inference-Time Intervention.

Quick Start

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model - ITI automatically applies!
model = AutoModelForCausalLM.from_pretrained(
    "YOUR_USERNAME/llama-31-8b-creativity-iti",
    trust_remote_code=True,  # Required for auto-ITI
    torch_dtype=torch.float16,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("YOUR_USERNAME/llama-31-8b-creativity-iti")

# Generate creative code
prompt = "Write a function to check if a number is prime"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Configuration

  • Alpha: 0.2
  • Active Heads: 48
  • Base Model: LLaMA 3.1 8B Instruct
  • Intervention: Automatic during inference

How It Works

The model automatically applies Inference-Time Intervention to enhance creativity:

  1. Monitors 48 attention heads during generation
  2. Shifts activations by α=0.2 toward creative directions
  3. Results in more innovative code solutions
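The per-token shift in step 2 can be sketched as follows. This is a hypothetical illustration of the standard ITI update (activation plus alpha times a steering direction on selected heads), not the repository's exact implementation; the function and argument names are invented for clarity.

```python
import numpy as np

def apply_iti(head_activations, directions, head_mask, alpha=0.2):
    """Shift selected attention-head outputs along steering directions.

    head_activations: (num_heads, head_dim) outputs at the current token
    directions:       (num_heads, head_dim) unit-norm steering directions
    head_mask:        (num_heads,) boolean, True for the intervened heads
    alpha:            intervention strength (0.2 in this model)
    """
    shifted = head_activations.copy()
    # Only the selected heads are nudged; the rest pass through unchanged.
    shifted[head_mask] += alpha * directions[head_mask]
    return shifted
```

In practice this kind of shift is registered as a forward hook on each attention layer, so it runs on every generated token without any change to the sampling loop.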

Training

  • Dataset: NeoCoder (1058 problems)
  • Method: Extracted activations from complete solutions
  • Metric: Novel technique usage vs human solutions
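One common way to turn extracted activations into steering directions is a mass-mean shift: the normalized difference between mean activations on "creative" and baseline solutions. This is a hedged sketch of that generic technique, not necessarily the pipeline used for this model.

```python
import numpy as np

def creative_direction(creative_acts, baseline_acts):
    """Per-head steering direction from two sets of activations.

    creative_acts: (num_creative_samples, head_dim) activations from
                   solutions judged to use novel techniques
    baseline_acts: (num_baseline_samples, head_dim) activations from
                   conventional solutions
    Returns a unit-norm direction pointing from baseline toward creative.
    """
    diff = creative_acts.mean(axis=0) - baseline_acts.mean(axis=0)
    return diff / np.linalg.norm(diff)
```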

License

Apache 2.0
