The Connection Between GPT’s Short Context and the Brain’s Temporary Memory: We’re Closer to AGI Than We Think

Adam Jesionkiewicz
10 min read · Mar 26, 2023

Have you ever wondered how machines are becoming so much better at understanding our language? Language models like GPT-4 rely on mechanisms that not only resemble how the brain works but may also indirectly help us uncover its mysteries.

A new branch of research has emerged that builds directly on the paradigms of LLMs (large language models) and finally lets us make sense of phenomena that previously eluded scientists. In this slightly crazy but, I hope, fascinating post, I'll share one essential fact about how GPT-4 works, one that "miraculously" explains quite well why we can find similar characteristics and dependencies in our own brains. I'm referring to the question of why GPT, as a language model, develops something like short-term memory, and why this isn't an accidental "feature" but a necessary and unavoidable limitation, surprisingly common to both the brain and GPT. All of this serves to explain why evolution didn't build one massive memory, and why I believe GPT is the ultimate model that will not only give us AGI (Artificial General Intelligence) but ultimately help us understand how our brain works.
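To make the analogy concrete before we dig in, here is a minimal Python sketch of the idea. The `ContextWindow` class, the tiny token budget, and the word-level "tokens" are my illustrative assumptions, not how GPT-4 is actually implemented: the point is only that a fixed-size buffer silently evicts the oldest tokens, so anything that scrolls out of the window is simply gone, much like information fading from short-term memory.

```python
from collections import deque


class ContextWindow:
    """Toy model of an LLM's fixed-size context window.

    A FIFO buffer: once it is full, the oldest tokens are evicted.
    Anything outside the window is invisible to the "model", much like
    information fading from the brain's short-term memory.
    """

    def __init__(self, max_tokens: int):
        # A deque with maxlen evicts the oldest items automatically.
        self.buffer = deque(maxlen=max_tokens)

    def add(self, text: str) -> None:
        # Real models use subword tokens; whitespace splitting is a stand-in.
        self.buffer.extend(text.split())

    def visible(self) -> list[str]:
        return list(self.buffer)


window = ContextWindow(max_tokens=8)
window.add("my name is Adam and I live in Warsaw")
window.add("what is my name ?")

# "Adam" has already been evicted, so the question can no longer be
# answered from what the window still holds:
print(window.visible())
# ['live', 'in', 'Warsaw', 'what', 'is', 'my', 'name', '?']
```

In a real model the window holds thousands of subword tokens rather than eight words, but the failure mode is the same: once the relevant fact scrolls past the boundary, nothing inside the window can recover it.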

GPT — Brilliant Machines That Mimic the Brain

Written by Adam Jesionkiewicz

CEO @ Astrography.com & Ifinity (award-winning startup), Advisory Board @ Startup Foundation Poland, NewEurope100 Challenger (by Google, FT, RP).
