I often see people with an outdated understanding of modern LLMs.
This is probably the best interpretability research to date, by the leading interpretability research team.
It’s worth a read if you want a peek behind the curtain on modern models.
I doubt anyone is saying that LLMs calculate the next word based solely on the previous sequence. It's still statistics, regardless of the complexity.
Yes, but people forget that our brains, and therefore our minds, are also “simply” statistics, albeit very complex.
You'd be surprised at the level of unthinking hatred around them, but even setting that aside, I've often seen it said that LLMs have no internal model of what they're talking about because they're just next-word generators. This research quite clearly contradicts that interpretation.