# 🧠 Qwen3-30B-A3B-Thinking-2507-Amoral-Edition
Qwen3-30B-A3B-Thinking-2507-Amoral-Edition is a merge of specialized models built with the DARE TIES technique, intended as a versatile model for natural language applications.
## 📋 Overview
This model was developed with the DARE TIES merge method, which applies DARE (Drop And REscale) sparsification to each model's task vector and resolves parameter sign conflicts as in TIES-Merging, combining specialized models into a single compact model for natural language conversation.
## 🧠 Base Models Used
Qwen3-30B-A3B-Thinking-2507-Amoral-Edition is the result of merging the following models:

- [Ewere/Qwen3-30B-A3B-abliterated-erotic](https://huggingface.co/Ewere/Qwen3-30B-A3B-abliterated-erotic)
- [unsloth/Qwen3-30B-A3B-Thinking-2507](https://huggingface.co/unsloth/Qwen3-30B-A3B-Thinking-2507)
## 🛠️ Merge Tool
The merge was performed using LazyMergekit, a Colab notebook that wraps mergekit and simplifies merging language models with advanced configurations.
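To reproduce the merge outside Colab, the YAML configuration below can also be passed straight to mergekit's standard CLI entry point, e.g. `mergekit-yaml config.yaml ./merged-model` (the exact flags LazyMergekit uses under the hood may differ).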
## 🧩 Technical Configuration
### Merge Parameters
```yaml
models:
  - model: Ewere/Qwen3-30B-A3B-abliterated-erotic
    parameters:
      density: 0.6
      weight: 0.6
  - model: unsloth/Qwen3-30B-A3B-Thinking-2507
    parameters:
      density: 0.6
      weight: 0.4
merge_method: dare_ties
base_model: unsloth/Qwen3-30B-A3B-Thinking-2507
parameters:
  normalize: true
  int8_mask: false
dtype: bfloat16
```
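In this configuration, `density` is the fraction of each model's task vector (its delta from the base model) that survives DARE's random drop step, and `weight` scales each surviving delta before it is added back onto the base. The Python sketch below illustrates only the drop-and-rescale step on a single tensor; it is a simplified illustration, not mergekit's implementation, which additionally performs TIES-style sign election across models.

```python
import torch

def dare_sparsify(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Randomly keep a `density` fraction of delta entries, rescaling the
    survivors by 1/density so the expected delta is unchanged."""
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

# Toy example: the task vector is the fine-tuned weights minus the base weights
base = torch.randn(8, 8)
finetuned = base + 0.1 * torch.randn(8, 8)
delta = finetuned - base

merged = base + 0.6 * dare_sparsify(delta, density=0.6)  # weight=0.6, density=0.6
```

With `normalize: true`, mergekit also divides the combined delta by the sum of the weights (0.6 + 0.4 = 1.0 here, so it has no effect in this particular config).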
### Technical Specs
- Architecture: Qwen3 30B-A3B (Mixture-of-Experts, ~3B active parameters per token)
- Merge Method: DARE TIES
- Precision: BFloat16
- Normalization: Enabled
- Int8 Mask: Disabled
- Language: English
## 💻 How to Use
### Dependency Installation

```bash
pip install -qU transformers accelerate torch
```
### Basic Example

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "rodrigomt/Qwen3-30B-A3B-Thinking-2507-Amoral-Edition"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.bfloat16,  # matches the merge's bfloat16 weights
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
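Since the base model is a Thinking variant, generations usually open with a reasoning trace that ends with a `</think>` tag before the final answer. Assuming the merge preserves this behavior, the trace can be stripped like so:

```python
# Keep only the text after the closing </think> tag (if one is present)
text = outputs[0]["generated_text"]
answer = text.split("</think>")[-1].strip()
print(answer)
```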
### English Conversation Example

```python
# Continue the conversation; the final turn must come from the user so that
# add_generation_prompt=True elicits a new assistant reply.
conversation = [
    {"role": "user", "content": "Hello! How are you?"},
    {"role": "assistant", "content": "Hi! I'm doing well, thanks for asking. How can I help you today?"},
    {"role": "user", "content": "Can you explain what you can help with?"},
]
prompt = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"])
```
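Note that the upstream Qwen3-30B-A3B-Thinking-2507 card recommends temperature=0.6, top_p=0.95, and top_k=20 for thinking-mode generation; whether those settings remain ideal after this merge is untested.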
## ⚠️ Minimum Requirements
- RAM: 16GB+
- VRAM: 24GB+ (sufficient for 4-bit quantized inference; full BF16 inference needs multiple GPUs or heavy CPU offloading, see the sketch below)
- Storage: ~60GB available for the BF16 checkpoint (considerably less if you only keep a quantized copy)
- GPU: RTX 3090 / A6000 / H100 or higher
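On a single 24GB card, a quantized load is the most practical route. Below is a minimal sketch using bitsandbytes 4-bit (NF4) quantization; it assumes `bitsandbytes` is installed and an NVIDIA GPU is available, and the output quality of this particular merge under 4-bit quantization has not been verified.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "rodrigomt/Qwen3-30B-A3B-Thinking-2507-Amoral-Edition"

# 4-bit NF4 quantization so the 30B weights fit in roughly 24GB of VRAM
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```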