Not the next word, but the next token. A token is sometimes a whole word, but the AI is trained to predict and generate whichever token best continues the response.
For AI, it’s tokens; for humans, it’s:
- syllables
- prefixes
- word roots
- suffixes
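To make the token idea concrete, here is a toy greedy longest-match subword tokenizer. This is only an illustrative sketch: the vocabulary is invented, and real models use learned algorithms like BPE or WordPiece with vocabularies of tens of thousands of pieces, but the output shows how a word can split into prefix, root, and suffix, much like the human units above.

```python
# Toy vocabulary of subword pieces (invented for illustration).
VOCAB = {"un", "break", "able"}

def tokenize(word, vocab):
    """Greedily match the longest known piece at each position."""
    tokens = []
    i = 0
    while i < len(word):
        # Try the longest slice first, shrinking until a piece matches.
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            # No piece matched: fall back to a single character.
            tokens.append(word[i])
            i += 1
    return tokens

print(tokenize("unbreakable", VOCAB))  # → ['un', 'break', 'able']
```

The same mechanism explains why unfamiliar words get chopped into smaller fragments: anything not in the vocabulary falls back to shorter pieces, down to single characters.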
My biggest interest in LLMs is engaging with one that’s trained in both my native language and something unconventional, like a huge dataset of whale noises. I’d love an LLM that can explain the grammar and psychology of xenolinguistics.
Our upbringing influences how we segment language this way.
Empiricism, as a philosophy, supports the notion that we continually learn and adapt, much like AI models. Critics of AI often claim it lacks reasoning or ‘human’ thinking, but we humans also rely on heuristics for everyday tasks and draw on our own ‘pre-trained datasets’ constantly. The gap between human minds and LLMs is not as wide as many might believe.