---
base_model:
- BruhzWater/Apocrypha-L3.3-70b-0.3
- BruhzWater/Serpents-Tongue-L3.3-70b-0.3
- deepcogito/cogito-v2-preview-llama-70B
library_name: transformers
tags:
- mergekit
- merge
---
# Sapphira-L3.3-70b-0.1

![image/png](/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F66ca56e62400073af3ad2972%2FCPUXeq81a9o0_ClXCcG68.png)

A storytelling and RP model with increased coherence, thanks to cogito-v2-preview-llama-70B.

iMatrix quants: https://huggingface.co/mradermacher/Sapphira-L3.3-70b-0.1-i1-GGUF

Static quants: https://huggingface.co/mradermacher/Sapphira-L3.3-70b-0.1-GGUF

Chat Template:
- Llama3

Instruction Template:
- Deep Cogito Llama3

Sampler Settings

Starter:
```
Temp: 1
Min_P: 0.02
Top_P: 1
```

Experimental 1:
```
Temp: .95 - 1.1
Min_P: .015 - .03
Top_P: .97 - .99
XTC_Threshold: .11
XTC_Probability: .15
```

Experimental 2:
```
Temp: .95 - 1.1
Min_P: .015 - .03
Top_P: 1
Typical_P: .99
XTC_Threshold: .11
XTC_Probability: .15
```

### Merge Method

This model was merged with the [Multi-SLERP](https://goddard.blog/posts/multislerp-wow-what-a-cool-idea) merge method, using [deepcogito/cogito-v2-preview-llama-70B](https://huggingface.co/deepcogito/cogito-v2-preview-llama-70B) as a base.

### Models Merged

The following models were included in the merge:
* [BruhzWater/Apocrypha-L3.3-70b-0.3](https://huggingface.co/BruhzWater/Apocrypha-L3.3-70b-0.3)
* [BruhzWater/Serpents-Tongue-L3.3-70b-0.3](https://huggingface.co/BruhzWater/Serpents-Tongue-L3.3-70b-0.3)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: /workspace/cache/models--BruhzWater--Apocrypha-L3.3-70b-0.3/snapshots/3facb4c0a7b953ff34a5caa90976830bf82a84c2
    parameters:
      weight: [0.5]
  - model: /workspace/cache/models--BruhzWater--Serpents-Tongue-L3.3-70b-0.3/snapshots/d007a7bcc7047d712abb2dfb6ad940fe03cd2047
    parameters:
      weight: [0.5]
base_model: /workspace/cache/models--deepcogito--cogito-v2-preview-llama-70B/snapshots/1e1d12e8eaebd6084a8dcf45ecdeaa2f4b8879ce
merge_method: multislerp
tokenizer:
  source: base
chat_template: llama3
parameters:
  normalize_weights: false
  eps: 1e-9
  pad_to_multiple_of: 8
  int8_mask: true
dtype: bfloat16
```
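To reproduce or adapt this merge, the configuration above can be passed to [mergekit](https://github.com/arcee-ai/mergekit), either via the `mergekit-yaml` CLI or its Python API. A minimal sketch under assumed placeholder paths (swap the `/workspace/cache/...` snapshot paths in the YAML for your own local copies or Hub IDs):

```python
# Sketch of re-running the Multi-SLERP merge with mergekit's Python API.
# "sapphira-multislerp.yaml" and the output path are placeholders.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("sapphira-multislerp.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Sapphira-L3.3-70b-0.1",
    options=MergeOptions(
        cuda=True,            # run tensor math on GPU; set False for CPU-only
        copy_tokenizer=True,  # carry the base model's tokenizer into the output
    ),
)
```

The equivalent one-liner is `mergekit-yaml sapphira-multislerp.yaml ./Sapphira-L3.3-70b-0.1 --cuda`.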
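For local inference, the GGUF quants linked above run under llama.cpp frontends. Here is a minimal llama-cpp-python sketch using the Starter sampler preset; the model filename is a placeholder for whichever quant you download, and the XTC settings from the experimental presets are omitted since XTC availability depends on the backend:

```python
# Sketch of chatting with a GGUF quant using the Starter sampler settings.
# The model filename is a placeholder; pick a real file from the quant repos.
from llama_cpp import Llama

llm = Llama(
    model_path="Sapphira-L3.3-70b-0.1.i1-Q4_K_M.gguf",  # placeholder filename
    n_ctx=8192,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU where possible
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a vivid, coherent storyteller."},
        {"role": "user", "content": "Open the first scene of a gothic mystery."},
    ],
    max_tokens=512,
    temperature=1.0,  # Starter preset: Temp 1
    min_p=0.02,       # Starter preset: Min_P 0.02
    top_p=1.0,        # Starter preset: Top_P 1
)
print(response["choices"][0]["message"]["content"])
```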
### Instruct Template

Deep Cogito

```
{{- '<|begin_of_text|>' }}
{%- if not tools is defined %}
    {%- set tools = none %}
{%- endif %}
{%- if not enable_thinking is defined %}
    {%- set enable_thinking = false %}
{%- endif %}

{#- This block extracts the system message, so we can slot it into the right place. #}
{%- if messages[0]['role'] == 'system' %}
    {%- set system_message = messages[0]['content']|trim %}
    {%- set messages = messages[1:] %}
{%- else %}
    {%- set system_message = "" %}
{%- endif %}

{#- Set the system message. If enable_thinking is true, add the "Enable deep thinking subroutine." #}
{%- if enable_thinking %}
    {%- if system_message != "" %}
        {%- set system_message = "Enable deep thinking subroutine.

" ~ system_message %}
    {%- else %}
        {%- set system_message = "Enable deep thinking subroutine." %}
    {%- endif %}
{%- endif %}

{#- Set the system message. In case there are tools present, add them to the system message. #}
{%- if tools is not none or system_message != '' %}
    {{- "<|start_header_id|>system<|end_header_id|>

" }}
    {{- system_message }}
    {%- if tools is not none %}
        {%- if system_message != "" %}
            {{- "

" }}
        {%- endif %}
        {{- "Available Tools:

" }}
        {%- for t in tools %}
            {{- t | tojson(indent=4) }}
            {{- "

" }}
        {%- endfor %}
    {%- endif %}
    {{- "<|eot_id|>" }}
{%- endif %}

{#- Rest of the messages #}
{%- for message in messages %}
    {#- The special cases are when the message is from a tool (via role ipython/tool/tool_results) or when the message is from the assistant, but has "tool_calls". If not, we add the message directly as usual. #}

    {#- Case 1 - Usual, non tool related message. #}
    {%- if not (message.role == "ipython" or message.role == "tool" or message.role == "tool_results" or (message.tool_calls is defined and message.tool_calls is not none)) %}
        {{- '<|start_header_id|>' + message['role'] + '<|end_header_id|>

' }}
        {%- if message['content'] is string %}
            {{- message['content'] | trim }}
        {%- else %}
            {%- for item in message['content'] %}
                {%- if item.type == 'text' %}
                    {{- item.text | trim }}
                {%- endif %}
            {%- endfor %}
        {%- endif %}
        {{- '<|eot_id|>' }}

    {#- Case 2 - the response is from the assistant, but has a tool call returned. The assistant may also have returned some content along with the tool call. #}
    {%- elif message.tool_calls is defined and message.tool_calls is not none %}
        {{- "<|start_header_id|>assistant<|end_header_id|>

" }}
        {%- if message['content'] is string %}
            {{- message['content'] | trim }}
        {%- else %}
            {%- for item in message['content'] %}
                {%- if item.type == 'text' %}
                    {{- item.text | trim }}
                    {%- if item.text | trim != "" %}
                        {{- "

" }}
                    {%- endif %}
                {%- endif %}
            {%- endfor %}
        {%- endif %}
        {{- "[" }}
        {%- for tool_call in message.tool_calls %}
            {%- set out = tool_call.function|tojson %}
            {%- if not tool_call.id is defined %}
                {{- out }}
            {%- else %}
                {{- out[:-1] }}
                {{- ', "id": "' + tool_call.id + '"}' }}
            {%- endif %}
            {%- if not loop.last %}
                {{- ", " }}
            {%- else %}
                {{- "]<|eot_id|>" }}
            {%- endif %}
        {%- endfor %}

    {#- Case 3 - the response is from a tool call. The tool call may have an id associated with it as well. If it does, we add it to the prompt. #}
    {%- elif message.role == "ipython" or message["role"] == "tool_results" or message["role"] == "tool" %}
        {{- "<|start_header_id|>ipython<|end_header_id|>

" }}
        {%- if message.tool_call_id is defined and message.tool_call_id != '' %}
            {{- '{"content": ' + (message.content | tojson) + ', "call_id": "' + message.tool_call_id + '"}' }}
        {%- else %}
            {{- '{"content": ' + (message.content | tojson) + '}' }}
        {%- endif %}
        {{- "<|eot_id|>" }}
    {%- endif %}
{%- endfor %}

{%- if add_generation_prompt %}
    {{- '<|start_header_id|>assistant<|end_header_id|>

' }}
{%- endif %}
```
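Assuming the repo's tokenizer ships the template above, `transformers` will render it through `apply_chat_template`; extra keyword arguments such as `enable_thinking` are forwarded into the template, which is what toggles the "Enable deep thinking subroutine." branch. A hedged sketch (the repo id is inferred from the quant links and may differ):

```python
# Sketch of prompting the model through the Deep Cogito template.
# The repo id is an assumption; adjust to wherever the weights live.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BruhzWater/Sapphira-L3.3-70b-0.1"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "Narrate in close third person."},
    {"role": "user", "content": "Continue: the lighthouse went dark at midnight."},
]

# enable_thinking=True would prepend "Enable deep thinking subroutine."
# to the system message via the template branch shown above.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    enable_thinking=False,
    return_tensors="pt",
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=1.0,  # Starter sampler preset from above
    top_p=1.0,
    min_p=0.02,
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```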