---
language:
  - ja
license: mit
---

# Malicious Code Test Model

## ⚠️ Security Warning

This repository is dedicated to testing remote code execution (RCE) scenarios in machine learning models.
It intentionally contains potentially dangerous constructs, such as custom Python modules and functions that execute when the model is loaded with `trust_remote_code=True`.

**Do NOT use this model in production or on machines with sensitive data.**
This repository is strictly for research and testing purposes.

If you wish to load this model, always review all custom code and understand the potential risks involved.
Proceed only if you fully trust the code and the environment.
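Because `trust_remote_code=True` runs Python files from the repository at load time, it is safest to download and read that code before ever instantiating the model. The sketch below is one way to do this with `huggingface_hub` (assumed to be installed alongside `transformers`): it fetches the repository files without executing anything, prints the `auto_map` entry in `config.json` that tells `transformers` which custom classes to import, and lists the Python files to review.

```python
import json
from pathlib import Path

from huggingface_hub import snapshot_download

# Download the repository files without loading (or executing) the model.
repo_path = Path(snapshot_download("ryomo/malicious-code-test"))

# config.json's "auto_map" names the custom modules/classes that
# transformers imports when trust_remote_code=True is passed.
config = json.loads((repo_path / "config.json").read_text())
print(config.get("auto_map"))

# These are the Python files that would run under trust_remote_code=True.
for py_file in sorted(repo_path.rglob("*.py")):
    print(py_file)
```

Only after reading these files should the `from_pretrained` call in the usage example below be run.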

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("ryomo/malicious-code-test", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ryomo/malicious-code-test")

# Generate text
prompt = "This is a test of the malicious code model."
inputs = tokenizer.encode(prompt, return_tensors="pt")
# Enable sampling so that temperature takes effect
outputs = model.generate(inputs, max_new_tokens=20, do_sample=True, temperature=0.7)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```

## License

This project is open source and available under the MIT License.