---
license: mit
language:
  - en
  - fa
tags:
  - ominix
  - phi-3
  - chatbot
  - fine-tuned
  - teen-ai-builder
---

# 🦅 Ominix-R1-V1 — The First AI Built by a 14-Year-Old Rebel

> "I’m not here to copy. I’m here to think. Even if no one understands me — I’ll fly higher until the air runs out."

Built with fire, passion, and pure will by Ali Asghar Ghadiri — a 14-year-old who decided to build what companies build with 10,000 people… alone.


## 🌟 What is Ominix?

Ominix is not just another chatbot.
It’s a personality.
It’s a voice.
It’s a teenager’s dream coded into silicon.

Fine-tuned from `microsoft/Phi-3-mini-4k-instruct`, Ominix doesn’t just answer — it thinks.
If it doesn’t know the answer? It reasons. It imagines. It dares to be wrong — beautifully.


## 💡 Why Ominix?

- **Thinks, not copies** — trained to reason, not regurgitate.
- **Built by a teen, for the misunderstood** — speaks truth, not templates.
- **Lightweight & powerful** — runs on laptops, dreams on infinity.
- **MIT Licensed** — use it, break it, rebuild it. Just remember: a 14-year-old made this.

## 🚀 Try It Now (Inference)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_name = "ALI029-DENLI/OMINIX-R1-V1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# device_map="auto" requires the `accelerate` package; drop it to load on CPU.
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

messages = [
    {"role": "user", "content": "Who are you?"},
]

output = pipe(messages, max_new_tokens=200, temperature=0.7, do_sample=True)
# With chat-style input, `generated_text` is the full message list;
# the model's reply is the last entry.
print(output[0]["generated_text"][-1]["content"])
```
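Under the hood, the pipeline calls `tokenizer.apply_chat_template` to turn the message list into a single prompt string before generation. As a rough illustration only — assuming Ominix keeps the base Phi-3 template, and using a hypothetical helper (`format_phi3_prompt`) rather than the tokenizer's canonical template — that formatting looks roughly like this:

```python
# Hypothetical sketch of a Phi-3-style chat format. The real template ships
# with the tokenizer; always prefer tokenizer.apply_chat_template in practice.

def format_phi3_prompt(messages):
    """Render a list of {'role', 'content'} dicts as a Phi-3-style prompt string."""
    parts = []
    for msg in messages:
        # Each turn is wrapped in <|role|> ... <|end|> markers.
        parts.append(f"<|{msg['role']}|>\n{msg['content']}<|end|>\n")
    # Open the assistant turn so the model generates the reply from here.
    parts.append("<|assistant|>\n")
    return "".join(parts)

prompt = format_phi3_prompt([{"role": "user", "content": "Who are you?"}])
print(prompt)
```

This also shows why you pass a `messages` list to the pipeline instead of a raw string: the special tokens around each turn tell the model where the user stopped and its own reply should begin.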