Qwen3-4B-Thinking-2507-Hermes-3

A Qwen3-4B-Thinking-2507 model fine-tuned on the Hermes 3 dataset.

Capabilities:

  • Retains the base model's reasoning ability
  • Improved instruction following

How to run:

Transformers

Run this code

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ertghiu256/Qwen3-4B-Thinking-2507-Hermes-3"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist() 

# parsing thinking content
try:
    # rindex finding 151668 (</think>)
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content) # no opening <think> tag
print("content:", content)

vLLM

Run this command

vllm serve ertghiu256/Qwen3-4B-Thinking-2507-Hermes-3 --max-model-len 262144 --enable-reasoning --reasoning-parser deepseek_r1
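
vllm serve exposes an OpenAI-compatible API (by default at http://localhost:8000/v1), so any OpenAI client can talk to it. A minimal sketch, assuming the default host/port and the openai Python package:

from openai import OpenAI

# point the OpenAI client at the local vLLM server (the key is unused but required)
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="ertghiu256/Qwen3-4B-Thinking-2507-Hermes-3",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    temperature=0.6,
    top_p=0.95,
)
print(response.choices[0].message.content)

With the reasoning parser enabled, the chain of thought should come back separately as response.choices[0].message.reasoning_content rather than mixed into the answer.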

SGLang

Run this command

python -m sglang.launch_server --model-path ertghiu256/Qwen3-4B-Thinking-2507-Hermes-3 --context-length 262144  --reasoning-parser deepseek-r1
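
SGLang also serves an OpenAI-compatible API (default port 30000), so the vLLM client sketch above should work with base_url changed to http://localhost:30000/v1.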

llama.cpp

Run this command

llama-server --hf-repo ertghiu256/Qwen3-4B-Thinking-2507-Hermes-3
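
llama-server downloads a GGUF quantization from the Hugging Face repo and exposes an OpenAI-compatible HTTP server (default port 8080), so the same client sketch applies with base_url set to http://localhost:8080/v1.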

Ollama

Run this command

ollama run hf.co/ertghiu256/Qwen3-4B-Thinking-2507-Hermes-3:IQ4_NL

or

ollama run hf.co/ertghiu256/Qwen3-4B-Thinking-2507-Hermes-3:Q5_K_M
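
The suffix after the colon selects which GGUF quantization Ollama pulls from the Hugging Face repo: Q5_K_M is larger but somewhat more accurate, IQ4_NL is smaller and faster.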

LM Studio

Search for ertghiu256/Qwen3-4B-Thinking-2507-Hermes-3 in the LM Studio model browser and download one of the GGUF quantizations.

Recommended parameters

Temp: 0.6
Top_P: 0.95
Top_K: 20
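
For local Transformers inference these map directly onto generate's sampling arguments. A minimal sketch, reusing model and model_inputs from the Transformers example above:

# apply the recommended sampling parameters
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,   # sampling must be on for temperature/top_p/top_k to take effect
    temperature=0.6,
    top_p=0.95,
    top_k=20,
)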

Training details

Trained with Unsloth.

Training parameters:
 - 60 steps
 - 3e-5 learning rate
 - 28k samples from the Hermes 3 dataset