ChatGPT Doesn’t Know Anything

Does ChatGPT ‘know’ anything? No, it doesn’t.

Understanding how it works is important as you evolve from human to centaur (human + AI). You don't need to know how a car works to drive it, but if you do, you'll understand its limits and capabilities better. ChatGPT will become an increasingly important part of our professional and creative lives, so let's understand it better.

Tech and philosophy now intersect, and at the center of that intersection are large language models (LLMs). As LLMs become more adept at mimicking humans, we become vulnerable to anthropomorphism -- seeing these systems as human-like when they're nothing of the sort. This is amplified by the use of terms like "knows", "believes", "thinks", etc.

This is not how LLMs work. They have none of these capabilities; all they "know" is statistical likelihood.

As we build tools that recreate human output, remember that they do not think like we do. Our human intuitions do not apply. LLMs expose that almost everything we say can be reduced to prediction, not understanding.

What do LLMs really do? An LLM is a mathematical model that predicts the next word in a sequence based on the statistical distribution of words in its training corpus of human-generated text.

When we ask LLMs questions, we're not asking what we think we're asking. If we prompt "Who invented bitcoin?" we're actually asking:

"Given the statistical distribution of words in your training data, what words are most likely to follow the sequence ‘who invented bitcoin?'".

If we prompt "knock knock " to an LLM we're asking it to predict the most statistically likely sequence of words that typically follow that phrase. The LLM will probably respond "who's there?", but it has no idea that a joke is occurring, or what a joke even is.

To us, language carries a sort of relationship to truth. But for ChatGPT there is no distinction between the real world and a fictional one. No right and wrong. There is only statistical probability.

No philosophical or epistemic "belief", "knowledge", or "understanding" is happening here. All an LLM "knows" is which words typically follow other words. Then again, considering how good it is, maybe we misunderstand what knowing something really means.

Perhaps there's something emergent here. If we "know" things, and that knowledge is embedded in our language, and ChatGPT is trained on our language... maybe it's fair to say it knows the things it's been exposed to, because it can generate correct answers from that information.

Humans are exposed to information to gain knowledge, which we then use to create answers. For both the human and the LLM the process is the same: exposure to information -> repeated training on that information -> generation of an output.

Is there a difference between knowledge and statistical prediction if both produce right answers? How else is knowledge measured if not by correct output?

If a right answer is the goal of knowledge, then perhaps the gap between sequence prediction and knowing something is a distinction without a difference.

However you want to conceptualize ‘knowledge’, just know that LLMs come to similar answers in vastly different ways than you do. Engage accordingly!

Follow at @BackTheBunny

Check out another popular post --> Why Deflation Is More Destructive Than Inflation

