**HAO (Human AI Orchestra)** is a next-generation collaborative development framework that maximizes synergy between **human intuition** and the **diverse strengths of multiple LLMs**—turning the human from a “coder” into a **conductor**.
At its core is an **11-step workflow** you can run immediately: divergence for wild ideas → convergence into architecture → critique & voting → synthesis → blueprinting (Gantree) → prototyping (PPR) → cross-review → refinement → roadmap → implementation. The philosophy is intentionally **anti-standardization**: it treats **conflict as a resource** and keeps **orchestration** (human-in-control) at the center.
This repo includes the **developer manual** (with concrete prompt templates), plus real artifact histories from two full runs: **Dancing with Noise** and **Dancing with Time**.
I just released omarkamali/wikipedia-labels, with all the structural labels and namespaces from Wikipedia in 300+ languages. A gift for the data preprocessors and cleaners among us.
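A minimal sketch of pulling the labels into a preprocessing pipeline with the `datasets` library; the config name ("en"), split, and column layout are assumptions, so check the dataset card for the actual schema:

```python
from datasets import load_dataset

# Load one language config of the labels dataset.
# "en" and "train" are assumptions, not confirmed config/split names.
labels = load_dataset("omarkamali/wikipedia-labels", "en", split="train")
print(labels.column_names)

# Hypothetical usage: collect namespace labels to strip from page titles.
# namespace_prefixes = {row["label"] for row in labels if row["type"] == "namespace"}
```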
Happy new year 2026 everyone! 🎆
Post by mike-ravkine:
I've been busy working on some new ranking/position methodologies and excited to start sharing some results.
Plot legends:
- X = truncation rate (low = good)
- ? = confusion rate (low = good)
- blue bars = average completion tokens (low = good)
- black diamonds = CI-banded performance (high = good)
- cluster squares = models inside this group are equivalent
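For readers who want to reproduce the bar-level numbers, here is a hedged sketch of how per-model aggregates like these might be computed; the record fields (`model`, `finish_reason`, `completion_tokens`) are assumptions, not the author's actual schema:

```python
from collections import defaultdict

def aggregate(records):
    """records: iterable of dicts with model, finish_reason, completion_tokens."""
    stats = defaultdict(lambda: {"n": 0, "truncated": 0, "tokens": 0})
    for r in records:
        s = stats[r["model"]]
        s["n"] += 1
        s["truncated"] += r["finish_reason"] == "length"  # hit the token cap
        s["tokens"] += r["completion_tokens"]
    return {
        m: {
            "truncation_rate": s["truncated"] / s["n"],     # low = good
            "avg_completion_tokens": s["tokens"] / s["n"],  # low = good
        }
        for m, s in stats.items()
    }
```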
openai/gpt-oss-120b remains the king in all dimensions of interest: truncation rates, completion lengths, and performance. If I had but one complaint, it's that reason_effort does not seem to actually work - more on this soon.
Second is a 3-way tie in performance between the Qwen3-235B-2507 we all know and love and an unexpected entrant: ByteDance-Seed/Seed-OSS-36B-Instruct.
This is a very capable model and its reasoning effort controls actually work, but you should absolutely not leave it on the default "unlimited" - enable a sensible limit (4k works well for an 8k context length).
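To make that concrete, here is a hedged sketch of enabling such a limit when the model is served behind an OpenAI-compatible endpoint (e.g. vLLM); the `chat_template_kwargs` / `thinking_budget` route is an assumption based on common Seed-OSS serving setups, so verify the exact knob against the model card:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

resp = client.chat.completions.create(
    model="ByteDance-Seed/Seed-OSS-36B-Instruct",
    messages=[{"role": "user", "content": "Explain CI-banded rankings briefly."}],
    max_tokens=8192,  # total completion cap (8k context assumed)
    # Assumed parameter names: cap reasoning at 4k instead of "unlimited".
    extra_body={"chat_template_kwargs": {"thinking_budget": 4096}},
)
print(resp.choices[0].message.content)
```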
Third place is another 3-way tie, this one between Seed-OSS-36B (it straddles the CI boundary between 2nd and 3rd place), Qwen/Qwen3-Next-80B-A3B-Instruct (demonstrating that full attention may be overrated after all and gated is the way to go), and the newly released zai-org/GLM-4.7, which offers excellent across-the-board performance with some of the shortest reasoning traces I've seen so far.
Post by mahimairaja:
It's been a crazy year for me! This year I launched VANTA Research as a solo operator and managed to push out 14 original open source finetunes and 5 datasets in the span of about 4 months, completely on my own.
The reception has been much higher than I ever anticipated, and I sincerely appreciate everyone who's checked out my work thus far.
The good news is, I'm just getting started! In 2026 you can expect even more original models from VANTA Research, more open source datasets, and maybe some other cool things as well? 👀
2026 is gonna be big for AI in general, and I can't wait to experience it with all of you!
Post by dhruv3006:
gRPC has become a popular choice for building high-performance, scalable APIs, especially in microservices and real-time systems. Unlike traditional REST APIs, gRPC uses HTTP/2 and Protocol Buffers to deliver fast, efficient communication with strong typing and contract enforcement.
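For illustration, this is what such a typed contract looks like; the service definition below is hypothetical, not one shipped with Voiden:

```protobuf
// Hypothetical contract: the .proto file is the single source of truth that
// client and server stubs are generated from, which is where gRPC's strong
// typing and contract enforcement come from.
syntax = "proto3";

package orders.v1;

service OrderService {
  // Unary RPC: one typed request, one typed response over HTTP/2.
  rpc GetOrder(GetOrderRequest) returns (Order);
}

message GetOrderRequest {
  string order_id = 1;
}

message Order {
  string order_id = 1;
  int64 total_cents = 2;
  repeated string item_skus = 3;
}
```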
Many developers in our community asked for it, so we have now added gRPC support in the latest Voiden release. You can now test and document gRPC APIs side by side with your REST and WebSocket APIs.
Keep everything in one file-centric, Git-native workflow. Use reusable blocks and version control for gRPC requests, just like any other API call.