---
base_model:
- TheBloke/Wizard-Vicuna-7B-Uncensored-HF
- QuixiAI/WizardLM-7B-Uncensored
- meta-llama/Llama-2-7b-hf
- ausboss/llama7b-wizardlm-unfiltered
library_name: transformers
tags:
- mergekit
- merge
---
# merge_output

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Linear DARE](https://arxiv.org/abs/2311.03099) merge method, with [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) as the base model; a minimal sketch of how the method works appears after the configuration below.

### Models Merged

The following models were included in the merge:
* [TheBloke/Wizard-Vicuna-7B-Uncensored-HF](https://huggingface.co/TheBloke/Wizard-Vicuna-7B-Uncensored-HF)
* [QuixiAI/WizardLM-7B-Uncensored](https://huggingface.co/QuixiAI/WizardLM-7B-Uncensored)
* [ausboss/llama7b-wizardlm-unfiltered](https://huggingface.co/ausboss/llama7b-wizardlm-unfiltered)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
- model: QuixiAI/WizardLM-7B-Uncensored
  parameters:
    density: 0.5
    weight: 1.1
- model: ausboss/llama7b-wizardlm-unfiltered
  parameters:
    density: 0.5
    weight: 0.3333333333333333
- model: TheBloke/Wizard-Vicuna-7B-Uncensored-HF
  parameters:
    density: 0.5
    weight: 0.5
merge_method: dare_linear
base_model: meta-llama/Llama-2-7b-hf
parameters:
  int8_mask: false
  rescale: true
dtype: bfloat16
```
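For intuition, here is a minimal NumPy sketch of what `dare_linear` does to a single weight tensor. This is an illustration under stated assumptions, not mergekit's actual implementation: it shows the drop-and-rescale step from the DARE paper followed by a weighted linear combination of the deltas. The function name and toy tensors are hypothetical, and mergekit's real code additionally handles weight normalization, dtype handling, and per-tensor options.

```python
import numpy as np

rng = np.random.default_rng(0)

def dare_linear_tensor(base, finetuned, weights, density=0.5, rescale=True):
    """Illustrative DARE-linear merge of one weight tensor (a sketch).

    For each fine-tuned model: take its delta from the base, randomly
    drop a (1 - density) fraction of the delta's entries, optionally
    rescale the survivors by 1/density so the delta is preserved in
    expectation, then add the weighted deltas back onto the base.
    """
    merged = base.astype(np.float64).copy()
    for ft, w in zip(finetuned, weights):
        delta = ft - base                          # "task vector" of this model
        keep = rng.random(delta.shape) < density   # keep ~density of the entries
        delta = np.where(keep, delta, 0.0)         # drop the rest
        if rescale:
            delta /= density                       # rescale surviving entries
        merged += w * delta
    return merged

# Toy usage with the densities and weights from the configuration above
base = rng.normal(size=(4, 4))
finetuned = [base + rng.normal(scale=0.1, size=(4, 4)) for _ in range(3)]
merged = dare_linear_tensor(base, finetuned, weights=[1.1, 1 / 3, 0.5])
```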
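To reproduce the merge, save the configuration above as `config.yaml` and run mergekit's CLI, e.g. `mergekit-yaml config.yaml ./merge_output`. The result loads like any other `transformers` Llama checkpoint; below is a minimal usage sketch, where the local path `./merge_output` is an assumption (substitute the Hub repo id if the merge has been uploaded):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "./merge_output" is an assumed local mergekit output directory;
# replace it with the Hub repo id if loading from the Hub.
model_path = "./merge_output"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16)

inputs = tokenizer("Tell me about model merging.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```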