# n8n Workflow Generator (Qwen 1.5B - LoRA)
A fine-tuned Qwen2.5-Coder-1.5B model for generating n8n workflows as TypeScript DSL code.
## Performance
- Overall Test Score: 91.2% (657/720 points)
- Final Grade: A - Excellent! Model is production-ready
- Training Examples: 2,736 curated workflows (2,462 train + 274 val)
- Test Cases: 24 comprehensive tests across 7 patterns
## Detailed Test Results
### Results by Pattern
| Pattern | Score | Tests |
|---|---|---|
| webhook_simple | 93.3% | 3 |
| data_processing | 96.7% | 1 |
| webhook_conditional | 90.0% | 2 |
| schedule_simple | 90.0% | 1 |
| schedule_complex | 93.3% | 1 |
| form | 93.3% | 1 |
| integration | 93.3% | 1 |
| error_handling | 93.3% | 1 |
### Results by Difficulty
| Difficulty | Average Score | Tests |
|---|---|---|
| Easy | 92.5% | 4 |
| Medium | 91.8% | 11 |
| Hard | 89.5% | 7 |
| Very Hard | 91.7% | 2 |
## Quick Start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-1.5B-Instruct",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True
)

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "Nishan30/n8n-workflow-generator-qwen1.5b")
model.eval()

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(
    "Qwen/Qwen2.5-Coder-1.5B-Instruct",
    trust_remote_code=True
)

# Generate workflow
SYSTEM_PROMPT = """You are an expert n8n workflow generator. n8n is a powerful workflow automation tool that connects various services and APIs.
Your task is to generate TypeScript DSL code for n8n workflows based on user requests.
Generate ONLY the TypeScript DSL code, wrapped in ```typescript code blocks."""

user_prompt = "Create a webhook that sends data to Slack"

formatted_prompt = f"""### System:
{SYSTEM_PROMPT}

### Instruction:
{user_prompt}

### Response:
"""

inputs = tokenizer(formatted_prompt, return_tensors="pt", truncation=True, max_length=2048).to(model.device)

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=512,
        temperature=0.3,
        top_p=0.95,
        top_k=50,
        repetition_penalty=1.1,
        do_sample=True
    )

result = tokenizer.decode(outputs[0], skip_special_tokens=True)
response = result.split("### Response:")[-1].strip()
print(response)
```
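The decoded `response` still contains the markdown fence the system prompt asks for. A minimal sketch for extracting just the DSL source, reusing the `response` variable from the snippet above:

```python
import re

# The system prompt asks the model to wrap output in a typescript-tagged
# code fence, so strip the fence before handing the DSL to downstream tooling.
match = re.search(r"```typescript\s*(.*?)```", response, re.DOTALL)
workflow_dsl = match.group(1).strip() if match else response.strip()
print(workflow_dsl)
```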
## Example Outputs
### Simple Webhook → Slack

```typescript
const workflow = new Workflow('Slack Webhook');
const createWebhook = workflow.add('n8n-nodes-base.webhook', {"path": "/slack", "method": "POST"});
const sendToSlack = workflow.add('n8n-nodes-base.slack', {"channel": "#general", "text": "={{ $json.message }}"});
createWebhook.to(sendToSlack);
```
### Data Pipeline

```typescript
const workflow = new Workflow('Data Pipeline');
const trigger = workflow.add('n8n-nodes-base.webhook', {"path": "/api/data-pipeline"});
const transform = workflow.add('n8n-nodes-base.set', {"values": [{"name": "id", "value": "={{ $json.id }}"}]});
const filter = workflow.add('n8n-nodes-base.filter', {"conditions": [{"leftValue": "={{ $json.status }}", "operation": "===", "rightValue": "active"}]});
trigger.to(transform);
transform.to(filter);
```
## Model Details
- Base Model: Qwen/Qwen2.5-Coder-1.5B-Instruct
- Fine-tuning Method: LoRA (Low-Rank Adaptation)
- Parameters: 1.5B base + LoRA adapters
- Training Framework: Transformers + PEFT
- Hardware: NVIDIA Tesla T4 GPU
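Since the release is a LoRA adapter rather than full weights, you can optionally fold the adapter into the base model for simpler deployment. A minimal sketch using PEFT's `merge_and_unload()`; the output directory name is illustrative:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel
import torch

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-1.5B-Instruct",
    torch_dtype=torch.float16,
    trust_remote_code=True
)
adapted = PeftModel.from_pretrained(base, "Nishan30/n8n-workflow-generator-qwen1.5b")

# Fold the LoRA deltas into the base weights and drop the PEFT wrapper.
merged = adapted.merge_and_unload()
merged.save_pretrained("n8n-workflow-generator-merged")  # illustrative path
```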
## Training Configuration
- LoRA Rank: 16
- LoRA Alpha: 32
- Learning Rate: 2e-4
- Batch Size: 1 (gradient accumulation steps: 4, effective batch size: 4)
- Optimizer: AdamW 8-bit
- Epochs: 3
- Mixed Precision: FP16
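For reference, this is roughly how the hyperparameters above map onto PEFT and Transformers objects. A sketch, not the original training script: the `target_modules` list and `output_dir` are assumptions (the card does not publish them); everything else mirrors the values listed above:

```python
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=16,                    # LoRA rank
    lora_alpha=32,           # LoRA alpha
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption: not documented in the card
)

training_args = TrainingArguments(
    output_dir="n8n-lora",              # illustrative path
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,      # effective batch size 4
    num_train_epochs=3,
    fp16=True,                          # mixed precision
    optim="adamw_bnb_8bit",             # AdamW 8-bit via bitsandbytes
)
```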
## License
Apache 2.0
## Acknowledgments

Built with Hugging Face Transformers and PEFT, fine-tuned from Qwen/Qwen2.5-Coder-1.5B-Instruct.