I disagree and think this is the wrong conversation, here’s why:
LLMs are only as intelligent as the data fed into them, and most of the freely
accessible data has already been digested by these models. They
cannot learn from themselves; they are not sentient, not self-improving.
What if the ceiling of these models' intelligence has already been
reached, because all the data has already been digested? We already see
clear signs of this trend: the improvements from new model releases are
only marginal. The Big Bang has already happened.
That is not a criticism. The AI that exists is remarkable and extraordinary
and I use it every day. It makes me faster and sharper, but a powerful tool
is not the same thing as intelligence. AI is a very powerful technology,
nothing less, but also nothing more.
The real question is not how smart AI will become but how we use what
already exists.
When the internet arrived in the late 1990s, the hype was enormous.
Fortunes were made and lost before anyone found a viable business
model. That took five years.
Today, the most valuable companies in the world are internet companies,
but only after someone figured out how to use the technology properly.
AI is at the same stage. The infrastructure exists; sustainable applications do
not yet. Finding those applications and making them work in an
economically viable way is the hard part.
The next 20 years will be defined not by smarter models, but by the
businesses that figure out how to use them well. The business models that
currently exist are not viable.
Right now, we are all driving a Ferrari for €5 a month.
That pricing will not last.
Software companies built over 20 years will not be replaced overnight. The
fear is overdone. Some will disappear, but the strong software companies
will only become stronger with AI.
The opportunity still exists.