constant-time memory: another attempt to explain our minds with code
A new study tries to compare our memories with a common data structure.
It’s been a longstanding habit to compare the human brain to whatever is the most complex device around. If you asked someone to describe how our minds work 600 years ago, they’d point to a large precision clock. Ask the same question 150 years ago, and you’d hear all about the telegraph. A century ago? You’d get a lecture about vast telephone switchboards. Today, computers are the default comparison.
Of course, none of these comparisons really work. Brains are not a purpose-designed tool but the biological equivalent of an improvised Rube Goldberg machine. Some parts are very organized and efficient. Others are dual-purpose systems in which a single variable can shape how we perceive reality. Others still operate at wildly different speeds. And yet others, like memory and how we actually store and retrieve information, remain a bit of a mystery because we only understand them at a surface level.
We do understand that the hippocampus and neocortex are involved, and that the former plays a crucial role in figuring out where a memory will be stored while the latter does our most intense cognitive work. The actual details, however, are still somewhat fuzzy, so a new paper tries to connect them by arguing that your memories may be stored in what amounts to an organic hash map, a data structure frequently used in computer science.
In virtually any higher-level programming language, there’s some version of a key-value store, usually called a hash map, a hash table, or some very similar term. Its job is to organize data so that it can be indexed by unique identifiers and retrieved in constant time, or to trigger a process based on whether an input matches one of the keys in the map.
Just think of it as a digital version of a file cabinet where each folder has a distinct name, and all you need to do is reach in and grab the contents when that name comes up again. In one folder there might be a single page. In another, hundreds. Either way, it’s going to take you about the same amount of time to retrieve the contents. Well, unless you’re trying to reshuffle the files and retrieve them at the same time; when a hash map has to grow, its internal layout gets recalculated and every entry is reshuffled, which slows you down.
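The file cabinet analogy maps almost directly onto a dictionary in a language like Python, whose dict type is a hash map under the hood. The folder names and contents below are made up purely for illustration:

```python
# A Python dict is a hash map: each key is hashed to find its slot,
# so lookup cost does not depend on how much data the value holds.
cabinet = {
    "tax_returns": ["one page"],
    "receipts": [f"receipt {i}" for i in range(10_000)],  # a bulky folder
}

# Both lookups take about the same time on average: hash the key,
# find the slot, return a reference to the value, whatever its size.
single = cabinet["tax_returns"]
bulky = cabinet["receipts"]

# Membership tests work the same way: hash the input, check for the key.
print("receipts" in cabinet)  # True
print("diary" in cabinet)     # False
```

Resizing is the exception mentioned above: when the table grows past its capacity, every entry has to be rehashed into a bigger layout, which is why inserting while retrieving can briefly slow things down.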
The paper’s proposal is that your hippocampus holds a collection of keys, which are then used by your neocortex to retrieve and unpack memories. That means a) when you forget something, it’s not your memory that’s gone, it’s the key, and b) your brain is maybe kind of like an LLM, which uses structures derived from hash maps.
Except, when you consider all the caveats with which it’s peppered, the conclusion is more like “your brain is like a hash map, except we don’t know if that’s true at all, but a bunch of set theory formulas say it could work, maybe.” Which doesn’t exactly inspire a lot of confidence, and seems to rest on the idea that because AI uses matrices and vector databases, which share mathematical features with hash maps, your brain could, at least hypothetically, be doing the same thing in a simplified way.
This is actually an extremely common argument in Singularitarian thought, where the idea is that since both computers and brains do computation, we should treat brains as just another substrate for it. None of the authors seem to have a stated affinity for Singularitarianism, but they do focus on computational biology, so for their purposes, they have to assume at least some parallels between computers and our brains as a rudimentary guide for further investigation, even as their work gets used as a source of additional hype for chatbots.
And this is the problem with vague, exploratory papers like this. Not that they exist, or that someone is asking whether there’s anything we do in engineering or math that we can also see in the brain, because nature does like to build on efficient patterns given the chance. It’s that they’re too often treated as more than just an exploration of an idea, and this one will join a long list of speculative papers used by the AI startup industry to tell us we’re on the verge of a super-intelligent AGI which, incidentally, they’ll own, and which we’re really going to need to buy a subscription to now that it’s been proven AGI is coming in a year, maybe two max…