This model is an instruction-tuned version of Llama 3.2 1B.

## How to use

First, install the latest version of `transformers`:

```shell
pip install -Uq transformers
```

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline

llama3_am = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-1B-Instruct",
    device_map="auto",
)
```
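Once the pipeline is created, you can pass it a list of chat messages and read the assistant's reply from the returned structure. A minimal sketch, assuming the model weights are available locally or via the Hub (the prompt and the `generate` helper below are illustrative, not part of the model card):

```python
from transformers import pipeline

# Hypothetical chat prompt for illustration.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Briefly explain what a language model is."},
]

def generate(messages, max_new_tokens=128):
    # Builds the pipeline on first use; downloads the weights if not cached.
    llama3_am = pipeline(
        "text-generation",
        model="meta-llama/Llama-3.2-1B-Instruct",
        device_map="auto",
    )
    outputs = llama3_am(messages, max_new_tokens=max_new_tokens)
    # The pipeline returns a list of results; the assistant's reply is the
    # last message appended to the generated conversation.
    return outputs[0]["generated_text"][-1]["content"]
```

Passing a list of role/content dicts lets the pipeline apply the model's chat template automatically, which is the recommended way to prompt instruction-tuned checkpoints.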