
Some companies (OpenAI, Anthropic…) base their whole business on hosted closed source models. What’s going to happen when all of this inevitably gets commoditized?

This is why I’m putting my money on Google in the long run. They have the reach to make it useful and the monetization behemoth to make it profitable.



There's plenty of competition in this space already, and it'll only get accelerated with time. There's not enough "moat" in building proprietary LLMs - you can tell by how the leading companies in this space are basically down to fighting over patents and regulatory capture (ie. mounting legal and technical barriers to scraping, procuring hardware, locking down datasets, releasing less information to the public about how the models actually work behind the scenes, lobbying for scary-yet-vague AI regulation, etc).

It's fizzling out.

The current incumbents are sitting on multi-billion dollar valuations and juicy funding rounds. This buys runtime for a good couple of years, but it won't last forever. There's a limit to what can be achieved with scraped datasets and deep Markov chains.

Over time, it will become difficult to judge what makes one general-purpose LLM any better than another general-purpose LLM. A new release isn't necessarily performing better or producing better-quality results, and it may even regress for many use-cases (we're already seeing this with OpenAI's latest releases).

Competitors will have caught up to each other, and there shouldn't be any major differences between Claude, ChatGPT, Gemini, etc - after all, they should all produce near-identical answers, given identical scenarios. The pace of innovation flattens out.

Eventually, the technology will become widespread, cheap and ubiquitous. Building a (basic, but functional) LLM will be condensed down to a course you take at university (the same way people build basic operating systems and basic compilers in school).

The search for AGI will continue, until the next big hype cycle comes up in 5-10 years, rinse and repeat.

You'll have products geared at lawyers, office workers, creatives, virtual assistants, support departments, etc. We're already there, and it's working great for many use-cases - but it just becomes one more tool in the toolbox, the way Visual Studio, Blender and Photoshop are.

The big money is in the datasets used to build, train and evaluate the LLMs. LLMs today are only as good as the data they were trained on. The competition for good, high-quality, up-to-date and clean data will accelerate. With time, it will become more difficult, expensive (and perhaps illegal) to obtain world-scale data, clean it up, and use it to train and evaluate new models. This is the real goldmine, and the only moat such companies can really have.


This is the best take on the generative AI fad I've yet seen. I wish I could upvote this twice.


I had the same impression. I have been worrying a lot lately about the future for engineers (not having work, etc.), even having anxiety when I read news about AI, but these comments make me feel better and more relaxed.

I even considered blocking HN.


Yeah, this is called motivated reasoning.


And then the successful ChatGPT wrappers with traction will become more valuable than the companies creating proprietary LLMs. I bet OpenAI will start buying many AI apps to find profitable niches.


Correct, since the competitive edge is in the domain-specific data (which OpenAI, at least on paper, shouldn't have access to).

Two things to remember:

1. OpenAI can analyze which "wrappers" or "apps" are most successful, and make better purchasing decisions that way. This is information which isn't available outside of OpenAI.

2. OpenAI can in theory analyze the actual queries and interactions in an organization, record them, analyze them, etc - in an attempt to get a hold of the organization's internal data. The legality of this is unclear, but it could perhaps be enforced through a draconian license.


Their hope is to reach AGI and effective post-scarcity for most things that we currently view as scarce.

I know it sounds crazy but that is what they actually believe and is a regular theme of conversations in SF. They also think it is a flywheel and whoever wins the race in the next few years will be so far ahead in terms of iteration capability/synthetic data that they will be the runaway winner.


I don't have a horse in the race but wouldn't Meta be more likely to commoditize things given that they sort of already are?


Search

Gmail

Docs

Android

Chrome (browser and Chromebooks)

I don't use any Meta properties at all, but at least a dozen Alphabet ones. My wife uses Facebook, but that's about it. I can see it being handy for Insta filters.

YMMV of course, but I suspect Alphabet has much deeper reach, even if the actual overall number of people is similar.


To be clear, I was referring to the many quality open models they've released.




