---
language:
- en
license: mit
tags:
- text-generation
- anonymization
- privacy
- tool-calling
- qwen
---

# Qwen3-4B Anonymizer Tool Call Merged Model

This is a merged model that combines:

- Base model: Qwen3-4B
- Adapter A: anonymization capabilities
- Adapter B: tool-calling format

## Model Description

This model is trained to perform text anonymization and to emit its results in a tool-calling output format. It can identify and replace personally identifiable information (PII) while maintaining semantic meaning and context.
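
The exact tool schema is not documented in this card. As a purely illustrative sketch, a tool call anonymizing the sentence "John Doe works at Google in New York" might carry replacement mappings along the following lines (the tool name `anonymize` and the argument fields are hypothetical, not the model's confirmed schema):

```python
# Hypothetical illustration only: tool name and argument schema are assumed.
example_tool_call = {
    "name": "anonymize",
    "arguments": {
        "replacements": [
            {"original": "John Doe", "replacement": "[PERSON_1]"},
            {"original": "Google", "replacement": "[ORG_1]"},
            {"original": "New York", "replacement": "[LOCATION_1]"},
        ]
    },
}
```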

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("eternis/eternis_sft_tool_calling_Qwen4B_26jul_merged", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("eternis/eternis_sft_tool_calling_Qwen4B_26jul_merged", trust_remote_code=True)

# Example usage
input_text = "John Doe works at Google in New York"
# ... generate anonymized output with tool calls (see the sketch below)
```
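
The generation step is left open in the snippet above. Continuing from that snippet, one minimal sketch using the standard `transformers` chat-template flow is shown below; the prompt wording, the `max_new_tokens` value, and the assumption that the text to anonymize is passed as a plain user message are guesses, since the card does not document the expected prompt or tool schema.

```python
# Minimal sketch, continuing from the snippet above (model, tokenizer,
# input_text). Prompt wording and generation settings are assumptions.
messages = [{"role": "user", "content": f"Anonymize the following text: {input_text}"}]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
response = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=False)
print(response)  # should contain the model's tool-call-style anonymization output
```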

## Training

This model was trained using a multi-adapter approach (one possible merge procedure is sketched after the list):

1. Base Qwen3-4B model
2. Adapter A: specialized in anonymization tasks
3. Adapter B: specialized in tool-calling format
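
The exact merge procedure is not included in this card. As a rough, non-authoritative sketch, two LoRA adapters can be folded into a Qwen3-4B base sequentially with `peft`; the base checkpoint name, adapter paths, output directory, and the sequential merge order are all assumptions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen3-4B"                    # assumed base checkpoint
adapter_a = "path/to/anonymization-adapter"  # hypothetical adapter locations
adapter_b = "path/to/tool-calling-adapter"

# Load the base model, then apply and fold in each adapter in turn.
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
model = PeftModel.from_pretrained(model, adapter_a)
model = model.merge_and_unload()             # bake adapter A into the base weights

model = PeftModel.from_pretrained(model, adapter_b)
model = model.merge_and_unload()             # bake adapter B on top

tokenizer = AutoTokenizer.from_pretrained(base_id)
model.save_pretrained("merged-anonymizer-tool-call")
tokenizer.save_pretrained("merged-anonymizer-tool-call")
```

Note that sequential merging is order-dependent and is only one of several possible merge strategies; the actual script used for this checkpoint may differ.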

## License

MIT License