lbourdois committed
Commit f7d5a47 · verified
1 Parent(s): eaacfed

Improve language tag


Hi! As the model is multilingual, this PR adds languages other than English to the language tag to improve discoverability. Note that 29 languages are announced in the README, but only 13 are explicitly listed, so I was only able to add those 13.

Files changed (1)
README.md +115 -103
README.md CHANGED
@@ -1,103 +1,115 @@
- ---
- library_name: transformers
- license: apache-2.0
- license_link: https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2/blob/main/LICENSE
- language:
- - en
- pipeline_tag: text-generation
- base_model: Qwen/Qwen2.5-14B-Instruct
- tags:
- - chat
- - abliterated
- - uncensored
- ---
-
- 6bpw exl2 quant of: https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
-
- # huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
-
-
- This is an uncensored version of [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) created with abliteration (see [this article](https://huggingface.co/blog/mlabonne/abliteration) to learn more about it).
-
- Special thanks to [@FailSpy](https://huggingface.co/failspy) for the original code and technique. Please follow him if you're interested in abliterated models.
-
- **Important Note:** This version is an improvement over the previous [Qwen2.5-14B-Instruct-abliterated](https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated).
-
- ## Usage
- You can use this model in your applications by loading it with Hugging Face's `transformers` library:
-
-
- ```python
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- # Load the model and tokenizer
- model_name = "huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2"
- model = AutoModelForCausalLM.from_pretrained(
-     model_name,
-     torch_dtype="auto",
-     device_map="auto"
- )
- tokenizer = AutoTokenizer.from_pretrained(model_name)
-
- # Initialize conversation context
- initial_messages = [
-     {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."}
- ]
- messages = initial_messages.copy()  # Copy the initial conversation context
-
- # Enter conversation loop
- while True:
-     # Get user input
-     user_input = input("User: ").strip()  # Strip leading and trailing spaces
-
-     # If the user types '/exit', end the conversation
-     if user_input.lower() == "/exit":
-         print("Exiting chat.")
-         break
-
-     # If the user types '/clean', reset the conversation context
-     if user_input.lower() == "/clean":
-         messages = initial_messages.copy()  # Reset conversation context
-         print("Chat history cleared. Starting a new conversation.")
-         continue
-
-     # If input is empty, prompt the user and continue
-     if not user_input:
-         print("Input cannot be empty. Please enter something.")
-         continue
-
-     # Add user input to the conversation
-     messages.append({"role": "user", "content": user_input})
-
-     # Build the chat template
-     text = tokenizer.apply_chat_template(
-         messages,
-         tokenize=False,
-         add_generation_prompt=True
-     )
-
-     # Tokenize input and prepare it for the model
-     model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
-
-     # Generate a response from the model
-     generated_ids = model.generate(
-         **model_inputs,
-         max_new_tokens=8192
-     )
-
-     # Extract model output, removing special tokens
-     generated_ids = [
-         output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
-     ]
-     response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
-
-     # Add the model's response to the conversation
-     messages.append({"role": "assistant", "content": response})
-
-     # Print the model's response
-     print(f"Qwen: {response}")
-
- ```
-
- ## Evaluations
- Evaluations are ongoing; results will be added later.
+ ---
+ library_name: transformers
+ license: apache-2.0
+ license_link: https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2/blob/main/LICENSE
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ pipeline_tag: text-generation
+ base_model: Qwen/Qwen2.5-14B-Instruct
+ tags:
+ - chat
+ - abliterated
+ - uncensored
+ ---
+
+ 6bpw exl2 quant of: https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
+
+ # huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
+
+
+ This is an uncensored version of [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) created with abliteration (see [this article](https://huggingface.co/blog/mlabonne/abliteration) to learn more about it).
+
+ Special thanks to [@FailSpy](https://huggingface.co/failspy) for the original code and technique. Please follow him if you're interested in abliterated models.
+
+ **Important Note:** This version is an improvement over the previous [Qwen2.5-14B-Instruct-abliterated](https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated).
+
+ ## Usage
+ You can use this model in your applications by loading it with Hugging Face's `transformers` library:
+
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Load the model and tokenizer
+ model_name = "huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2"
+ model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     torch_dtype="auto",
+     device_map="auto"
+ )
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+
+ # Initialize conversation context
+ initial_messages = [
+     {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."}
+ ]
+ messages = initial_messages.copy()  # Copy the initial conversation context
+
+ # Enter conversation loop
+ while True:
+     # Get user input
+     user_input = input("User: ").strip()  # Strip leading and trailing spaces
+
+     # If the user types '/exit', end the conversation
+     if user_input.lower() == "/exit":
+         print("Exiting chat.")
+         break
+
+     # If the user types '/clean', reset the conversation context
+     if user_input.lower() == "/clean":
+         messages = initial_messages.copy()  # Reset conversation context
+         print("Chat history cleared. Starting a new conversation.")
+         continue
+
+     # If input is empty, prompt the user and continue
+     if not user_input:
+         print("Input cannot be empty. Please enter something.")
+         continue
+
+     # Add user input to the conversation
+     messages.append({"role": "user", "content": user_input})
+
+     # Build the chat template
+     text = tokenizer.apply_chat_template(
+         messages,
+         tokenize=False,
+         add_generation_prompt=True
+     )
+
+     # Tokenize input and prepare it for the model
+     model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
+
+     # Generate a response from the model
+     generated_ids = model.generate(
+         **model_inputs,
+         max_new_tokens=8192
+     )
+
+     # Extract model output, removing special tokens
+     generated_ids = [
+         output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
+     ]
+     response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+
+     # Add the model's response to the conversation
+     messages.append({"role": "assistant", "content": response})
+
+     # Print the model's response
+     print(f"Qwen: {response}")
+
+ ```
+
+ ## Evaluations
+ Evaluations are ongoing; results will be added later.