---
base_model:
- layoric/llama-2-13b-code-alpaca
- vanillaOVO/WizardMath-13B-V1.0
- unsloth/llama-2-13b
- WizardLMTeam/WizardLM-13B-V1.2
library_name: transformers
tags:
- mergekit
- merge
---

# TIES_InstructMathCode

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [unsloth/llama-2-13b](https://huggingface.co/unsloth/llama-2-13b) as the base model.
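
TIES merges models by building a task vector (the difference between each fine-tuned model and the base) for every parameter tensor, trimming low-magnitude entries, electing a per-parameter sign, and averaging only the entries that agree with that sign. The sketch below is a minimal illustration of that procedure for a single tensor, not mergekit's implementation; `density` and `lam` are illustrative stand-ins for mergekit's optional density and scaling parameters (the configuration in this card sets only per-model `weight` values).

```python
import torch

def ties_merge(base, finetuned, weights=None, density=0.2, lam=1.0):
    """Merge one parameter tensor with the TIES procedure (illustrative)."""
    weights = weights or [1.0] * len(finetuned)

    # 1. Task vectors: what each fine-tune changed relative to the base.
    deltas = [w * (ft - base) for w, ft in zip(weights, finetuned)]

    # 2. Trim: keep only the top-`density` fraction of entries by magnitude.
    trimmed = []
    for d in deltas:
        k = max(1, int(density * d.numel()))
        thresh = d.abs().flatten().kthvalue(d.numel() - k + 1).values
        trimmed.append(torch.where(d.abs() >= thresh, d, torch.zeros_like(d)))

    # 3. Elect a sign per parameter from the summed trimmed task vectors.
    elected = torch.sign(torch.stack(trimmed).sum(dim=0))
    elected[elected == 0] = 1.0

    # 4. Disjoint mean: average only entries that agree with the elected sign.
    stacked = torch.stack(trimmed)
    agree = (torch.sign(stacked) == elected) & (stacked != 0)
    merged = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)

    # 5. Apply the merged task vector on top of the base weights.
    return base + lam * merged
```

The sign-election step is what distinguishes TIES from plain task-vector averaging: parameters where the fine-tunes pull in opposite directions are resolved by majority mass rather than cancelled out.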

### Models Merged

The following models were included in the merge:

* [layoric/llama-2-13b-code-alpaca](https://huggingface.co/layoric/llama-2-13b-code-alpaca)
* [vanillaOVO/WizardMath-13B-V1.0](https://huggingface.co/vanillaOVO/WizardMath-13B-V1.0)
* [WizardLMTeam/WizardLM-13B-V1.2](https://huggingface.co/WizardLMTeam/WizardLM-13B-V1.2)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: unsloth/llama-2-13b
chat_template: auto
dtype: bfloat16
merge_method: ties
modules:
  default:
    slices:
    - sources:
      - layer_range: [0, 40]
        model: WizardLMTeam/WizardLM-13B-V1.2
        parameters:
          weight: 1.0
      - layer_range: [0, 40]
        model: vanillaOVO/WizardMath-13B-V1.0
        parameters:
          weight: 1.0
      - layer_range: [0, 40]
        model: layoric/llama-2-13b-code-alpaca
        parameters:
          weight: 1.0
      - layer_range: [0, 40]
        model: unsloth/llama-2-13b
```
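
The merge can be reproduced by passing this configuration to mergekit's `mergekit-yaml` command (e.g. `mergekit-yaml config.yml ./TIES_InstructMathCode`). Since the card declares `library_name: transformers`, the merged checkpoint loads like any Llama-2 model. A minimal usage sketch, assuming bfloat16 weights as in the configuration above; the `model_id` below is a placeholder for this repository's actual id:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: substitute the Hugging Face repo id of this merge.
model_id = "<your-namespace>/TIES_InstructMathCode"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's `dtype: bfloat16`
    device_map="auto",           # requires the `accelerate` package
)

prompt = "Write a Python function that returns the n-th Fibonacci number."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```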