Model Card for eternisai/Anonymizer-1.7B
A small language model (SLM) that replaces PII with semantically similar substitutes to provide better end-user privacy.
Model description
Anonymizer-1.7B is a mid-sized anonymization model trained with Group Relative Policy Optimization (GRPO), using GPT-4.1 as the judge during evaluation.
It balances speed and accuracy, achieving near-perfect anonymization quality while remaining deployable on consumer devices, and it powers sensitive-query protection inside Enchanted.
Intended use
- Primary use: Default anonymizer inside Enchanted.
- Secondary use: Local anonymizer in custom deployments where high accuracy is required.
Training details
- Base: Qwen3-1.7B.
- Data: ~30k samples covering PII replacement + non-replacement categories.
- Method: Supervised fine-tuning → GRPO with GPT-4.1 as judge.
- Judge score: 9.20/10 (close to GPT-4.1's 9.77/10).
- Latency: <250ms TTFT, <1s full completion (quantized).
Limitations
- Heavier than the 0.6B variant → slower inference on mobile devices.
- Some extremely unique queries may still be traceable if routed to external closed models.
Usage Example
⚠️ Important: This model requires specific formatting using the tokenizer's chat template. Do not use raw prompts directly.
Quick Start
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
import json
# Load model and tokenizer
model_name = "eternisai/Anonymizer-1.7B"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True
)
# Define the task instruction
TASK_INSTRUCTION = """You are an anonymizer. Your task is to identify and replace personally identifiable information (PII) in the given text.
Replace PII entities with semantically equivalent alternatives that preserve the context needed for a good response.
If no PII is found or replacement is not needed, return an empty replacements list.
REPLACEMENT RULES:
• Personal names: Replace private or small-group individuals. Pick same culture + gender + era; keep surnames aligned across family members. DO NOT replace globally recognised public figures (heads of state, Nobel laureates, A-list entertainers, Fortune-500 CEOs, etc.).
• Companies / organisations: Replace private, niche, employer & partner orgs. Invent a fictitious org in the same industry & size tier; keep legal suffix. Keep major public companies (anonymity set ≥ 1,000,000).
• Projects / codenames / internal tools: Always replace with a neutral two-word alias of similar length.
• Locations: Replace street addresses, buildings, villages & towns < 100k pop with a same-level synthetic location inside the same state/country. Keep big cities (≥ 1M), states, provinces, countries, iconic landmarks.
• Dates & times: Replace birthdays, meeting invites, exact timestamps. Shift day/month by small amounts while KEEPING THE SAME YEAR to maintain temporal context. DO NOT shift public holidays or famous historic dates ("July 4 1776", "Christmas Day", "9/11/2001", etc.). Keep years, fiscal quarters, decade references unchanged.
• Identifiers: (emails, phone #s, IDs, URLs, account #s) Always replace with format-valid dummies; keep domain class (.com big-tech, .edu, .gov).
• Monetary values: Replace personal income, invoices, bids by × [0.8 – 1.25] to keep order-of-magnitude. Keep public list prices & market caps.
• Quotes / text snippets: If the quote contains PII, swap only the embedded tokens; keep the rest verbatim."""
# Define tool schema (required!)
tools = [{
    "type": "function",
    "function": {
        "name": "replace_entities",
        "description": "Replace PII entities with anonymized versions",
        "parameters": {
            "type": "object",
            "properties": {
                "replacements": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "original": {"type": "string"},
                            "replacement": {"type": "string"}
                        },
                        "required": ["original", "replacement"]
                    }
                }
            },
            "required": ["replacements"]
        }
    }
}]
# Your query to anonymize
query = "Hi, my son Elijah works at TechStartup Inc and makes $85,000 per year."
# Format messages properly (critical step!)
messages = [
    {"role": "system", "content": TASK_INSTRUCTION},
    {"role": "user", "content": query + "\n/no_think"}
]
# Apply chat template with tools
formatted_prompt = tokenizer.apply_chat_template(
    messages,
    tools=tools,
    tokenize=False,
    add_generation_prompt=True
)
# Tokenize and generate
inputs = tokenizer(formatted_prompt, return_tensors="pt", truncation=True).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=250, temperature=0.3, do_sample=True, top_p=0.9)
# Decode only the newly generated tokens (the assistant's reply)
generated_tokens = outputs[0][inputs["input_ids"].shape[1]:]
assistant_response = tokenizer.decode(generated_tokens, skip_special_tokens=False)
assistant_response = assistant_response.split("<|im_end|>")[0].strip()
print("Response:", assistant_response)
# Expected output format:
# <|tool_call|>{"name": "replace_entities", "arguments": {"replacements": [{"original": "Elijah", "replacement": "Nathan"}, {"original": "TechStartup Inc", "replacement": "DataSoft LLC"}, {"original": "$85,000", "replacement": "$72,000"}]}}</|tool_call|>
Parsing the Response
def parse_replacements(response):
    """Extract the replacements list from the model's tool-call response."""
    try:
        if '<|tool_call|>' in response:
            start = response.find('<|tool_call|>') + len('<|tool_call|>')
            end = response.find('</|tool_call|>')
        elif '<tool_call>' in response:
            start = response.find('<tool_call>') + len('<tool_call>')
            end = response.find('</tool_call>')
        else:
            return None
        if end == -1:
            return None
        json_str = response[start:end].strip()
        tool_data = json.loads(json_str)
        return tool_data.get('arguments', {}).get('replacements', [])
    except (json.JSONDecodeError, TypeError, AttributeError):
        return None
# Parse the response
replacements = parse_replacements(assistant_response)
if replacements:
    for r in replacements:
        print(f"Replace '{r['original']}' with '{r['replacement']}'")
Output Format
The model outputs tool calls in this format:
With PII detected:
<|tool_call|>
{"name": "replace_entities", "arguments": {"replacements": [
{"original": "John", "replacement": "Marcus"},
{"original": "Microsoft", "replacement": "TechCorp"},
{"original": "$5000", "replacement": "$4200"}
]}}
</|tool_call|>
No PII detected:
<|tool_call|>
{"name": "replace_entities", "arguments": {"replacements": []}}
</|tool_call|>
Important Notes
- Chat Template Required: The model will NOT work with raw prompts. You must use tokenizer.apply_chat_template() with the tools parameter.
- Tool Schema Required: The tools schema must be provided to the chat template for proper formatting.
- Special Marker: User queries need the /no_think marker appended.
- Response Format: The model outputs structured tool calls wrapped in <|tool_call|> tags (or <tool_call> in some versions).
Common Issues
Issue: Model outputs gibberish or doesn't follow the format
Solution: Ensure you're using apply_chat_template with the tools parameter
Issue: Model doesn't detect obvious PII
Solution: Make sure to append /no_think to the user query
Issue: Getting errors about missing tools
Solution: The tools schema is required - see the example above
Technical Details
The model was trained using the Qwen3 chat template format with tool calling capabilities. The internal prompt structure (shown below for reference) is automatically generated by the tokenizer - do not construct this manually:
Internal prompt structure (auto-generated, for reference only)
[BEGIN OF TASK INSTRUCTION]
You are an anonymizer. Your task is to identify and replace personally identifiable information (PII)...
[END OF TASK INSTRUCTION]
[BEGIN OF AVAILABLE TOOLS]
[{"type": "function", "function": {"name": "replace_entities", ...}}]
[END OF AVAILABLE TOOLS]
[BEGIN OF FORMAT INSTRUCTION]
Use the replace_entities tool to specify replacements...
[END OF FORMAT INSTRUCTION]
[BEGIN OF QUERY]
Your text to anonymize goes here
/no_think
[END OF QUERY]
This structure is created automatically when you use tokenizer.apply_chat_template() - never construct it manually.
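To confirm the template is being applied correctly, you can print the string returned by apply_chat_template from the Quick Start example and check that the task instruction, the tool schema, the query, and the /no_think marker all appear:
# Sanity check: inspect the auto-generated prompt (read it, don't edit it)
print(formatted_prompt)
assert "replace_entities" in formatted_prompt
assert "/no_think" in formatted_prompt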