> Pricing-wise, we charge the same rates as the backend providers we route to, without taking any margins. We also give $50 in free credits to all new signups.
What's your plan for making money? Are you planning to eventually take a margin? Negotiate discounts with your backend providers? Mine the data flowing through your system?
Man, this space would get so much more interesting so quickly if base model providers had a revenue share system in place for routed requests...
This would quickly erode confidence in the routers themselves...
Or create a competitive environment between routers?
Another point here is that some users prefer to use their own API keys for the backend providers (a feature we're releasing soon). Any "discounts" would then be harder to implement. I do generally think it's much cleaner if we route based on the public-facing price + performance, so our users don't need to lock into our own SSO if they'd prefer not to.
I think the biggest risk with advanced AI is that it's captured [likely by bad actors under the guise of cover stories] and that it strays away from being as free market as is possible.
E.g. I don't think there should be any patents regarding what AI creates and how it creates it - let's stop granting monopolies for things that will come into existence anyway through passionate people, not through the prospect of patenting something. For example, telling the system to turn a 2D photo into a 3D rendering and then extrapolating/reverse-engineering that to tie into materials and known building-code requirements is plainly obvious. A "gold rush" for AI patents only benefits rent-seekers and the VC industrial complex.
that's a good point, impartiality would then be questioned
So they end up in the same situation as hotels or airlines, beholden to the middlemen? They’ll never allow that :)
I certainly wouldn't complain about this lol
The idea is that at some point in the future, we'll release new and improved router configurations which do take small margins, but from the user's perspective they're still paying less than using a single endpoint. We don't intend to inflate the price when users only use the single-sign-on benefits. Negotiating discounts with backend providers is another possibility, but right now we're just focused on providing value.
Honestly, I’d feel a lot more secure about building on this if you did take (for example) a small fixed fee every month. Or a 10% commission on any requests (volume discounts on that commission for high-volume users?).
If I start using you now, you'll either disappear in the future or you'll suddenly start charging more, neither of which I like.
I’m already paying for inference; a little on top of that for the convenience of a single API is pretty useful.
Makes sense, thanks a lot for the feedback. We're pretty confident that future versions of our router will provide sufficient value for us to take margins there, so we don't expect to need to start charging for single sign-on (SSO) alone. The SSO benefits are only the beginning in my mind; our main value will come from custom benchmarks across all models + providers and from optimizing LLM applications, including agentic workflows. I do very much see your point though. Thankfully, we're very fortunate to have several years of runway, so we don't plan on disappearing anytime soon!
A common model in some cost-cutting software is to charge x% of the total savings... win/win... just a suggestion... The user picks a "main LLM" and you calculate the "non-optimized cost" based on that. Whatever savings you drive, you take a share of.
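A minimal sketch of that savings-share idea (names, rates, and the 25% share are all hypothetical, assuming the user designates a baseline "main LLM" to price against):

```python
# Hypothetical savings-share billing: the user designates a baseline
# ("main") LLM, and the router keeps a share of whatever it saves
# relative to always using that baseline.

def savings_share_bill(baseline_cost_per_1k: float,
                       routed_costs_per_1k: list[float],
                       share: float = 0.25) -> dict:
    """Compute the bill for a batch of requests, each priced per 1k tokens."""
    non_optimized = baseline_cost_per_1k * len(routed_costs_per_1k)
    optimized = sum(routed_costs_per_1k)
    savings = max(0.0, non_optimized - optimized)  # never charge on negative savings
    fee = share * savings
    return {
        "non_optimized": non_optimized,
        "optimized": optimized,
        "savings": savings,
        "fee": fee,
        "total_charged": optimized + fee,
    }

# Example: the baseline costs $0.03/1k tokens; three requests were routed
# to endpoints costing $0.01, $0.02, and $0.03 per 1k tokens.
bill = savings_share_bill(0.03, [0.01, 0.02, 0.03], share=0.25)
```

The `max(0.0, ...)` clamp matters: if the router ever picks something pricier than the baseline, it shouldn't charge a negative fee.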
It's tough in this case, because if you incentivise just to save cost, it could always route you to the cheapest LLM but the quality would suffer...
However, as janekm says, we can't charge just based on cost savings. The router points would need to be sufficiently compelling w.r.t. quality, speed and cost (including our own margins) that users still sometimes opt for them. Suffice it to say, if any router configs do start to take margins, this will be clearly reflected in the overall router cost plotted on the scatter graph. UX will not be affected.
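To illustrate how a margin can stay visible in that cost comparison, here is a hypothetical scoring scheme (fields, weights, and endpoint names are all made up, not Unify's actual router): fold the margin into each candidate's effective price before trading off quality, speed and cost.

```python
# Hypothetical endpoint ranking: fold any router margin into the
# effective cost, then rank candidates on a weighted quality/speed/cost
# trade-off. All numbers below are illustrative.

from dataclasses import dataclass

@dataclass
class Endpoint:
    name: str
    quality: float          # e.g. benchmark score in [0, 1]
    tokens_per_sec: float   # throughput
    cost_per_1k: float      # provider's public-facing price
    router_margin: float    # fractional markup; 0.0 = pure pass-through

    @property
    def effective_cost(self) -> float:
        # The margin is part of the cost the user sees on the scatter plot.
        return self.cost_per_1k * (1.0 + self.router_margin)

def score(e: Endpoint, w_quality=1.0, w_speed=0.2, w_cost=10.0) -> float:
    # Higher is better; cost (margin included) is penalized.
    return (w_quality * e.quality
            + w_speed * (e.tokens_per_sec / 100)
            - w_cost * e.effective_cost)

candidates = [
    Endpoint("cheap-model",   quality=0.70, tokens_per_sec=120, cost_per_1k=0.0002, router_margin=0.0),
    Endpoint("routed-config", quality=0.85, tokens_per_sec=90,  cost_per_1k=0.0010, router_margin=0.10),
    Endpoint("premium-model", quality=0.90, tokens_per_sec=40,  cost_per_1k=0.0300, router_margin=0.0),
]
best = max(candidates, key=score)
```

With these weights the margin-bearing config can still win the ranking, because its quality/speed/cost trade-off beats both the cheapest and the highest-quality endpoint despite the markup.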
Yeah that's a great point, something we'll keep in mind as we work out the final business model. Thanks!
Agree heavily with this sentiment. It sounds like this could be a useful tool for a personal project of mine, but I wasn't nearly as interested after reading they're not attempting to make money yet. I'm a bit burnt out on that business model. Predictability is just as important as price when I'm deciding how to invest a large portion of my free time. I happily gave OpenRouter $20 for their service, and I've barely dented the credits with thousands of test runs over two months.
On that note, I think I'd be even more likely to pay for Unify.ai if I could opt to bypass the auto-routing and use it the same way I use OpenRouter - a single endpoint to route to any model I want. Sometimes I've already determined the best model for a task, and other times I want redundant models for the same task. It's possible Unify has this option, though I didn't see it while skimming the docs.
But really, all in all, this is a super cool project and I'm happy it was shared.
The data flowing through LLM routers is a hot commodity right now. OpenRouter, for example, even provides a flat-rate 1% discount across the board if you agree to let them use your API calls for model training, and rumor has it that they're already profitable. To be fair, they do seem to be collaborating with model providers on some level, so they are likely getting discounted access on top of selling data.
It’s surprising how these app developers are okay with this much data being shown: https://openrouter.ai/models/mistralai/mixtral-8x7b-instruct...