Any chance for a gpt-oss-120b version of this?
#1 by q5sys - opened
It'd be really cool to see how much that'd improve the model.
Unfortunately, the biggest limitation here for me is hardware. It should be possible with Colab, but I honestly didn't think demand for these models was very high, given how unreliable their outputs are :/
Once I get gpt-oss distills somewhere I am genuinely happy with I will commit to a couple 120B distills as well.
Gotcha. I understand things take time to get to a point you're happy with.