Where Does My AI Store Its Knowledge?

An ongoing question that confounds even the AI gurus: where exactly does a large language model store its knowledge? The honest answer is that we don't fully know or understand how these systems do what they do.

What? But you AI guys wrote the program, right?

Yes – but. The AI guys designed neural network architectures, which were then TRAINED to learn to give correct answers. What nobody realized (experts included) was how effective this modeling of the human brain would turn out to be.

So the fact that we have no clue how AI stores information shouldn't surprise us. We don't have a clue how the human brain stores information either. We know that neurons communicate with each other—some neurons trigger the firing of other neurons, some inhibit firing. (Think about how learning a physical skill is very much learning what not to do.) In artificial neural networks this biological dance has its equivalent in gradient descent, the algorithm that raises or lowers the weights of neuronal connections based on how well the model performs.
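To make that idea concrete, here is a minimal toy sketch of gradient descent—one "neuron" with a single weight, nudged up or down based on its error. This is an illustrative assumption-laden toy, nothing like the scale of a real LLM, but the principle of adjusting connection weights to reduce error is the same.

```python
# Toy gradient descent: learn a weight w so that w * x approximates a
# target output. A real network does this simultaneously across
# millions or billions of weights; the mechanism per weight is the same.

def train(x, target, w=0.0, lr=0.1, steps=100):
    for _ in range(steps):
        prediction = w * x
        error = prediction - target      # how wrong the neuron is
        gradient = 2 * error * x         # derivative of squared error w.r.t. w
        w -= lr * gradient               # nudge the weight to lower the error
    return w

w = train(x=2.0, target=6.0)  # converges toward w ≈ 3.0
```

The "knowledge" the trained network has is nothing more than the final values of all these weights—which is why there is no single place where any one fact lives.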

So, asking where ChatGPT "stores" its knowledge about Shakespeare is like trying to pinpoint where the memory of your first bicycle ride lives. Can't be done.

Consider this: if I ask you to describe your first bicycle ride, you don't retrieve a pre-written paragraph from some mental filing cabinet. Instead, you generate words on the fly from fragments—images, feelings, sensory memories—that come together to support your verbalization. When we converse with an AI, we read coherent sentences, but those sentences are the end product of a similar process: patterns distributed across millions of parameters interacting and coalescing into language. Both human memory and AI responses emerge from the dynamic interaction of countless connections, not from stored text waiting to be retrieved.

Next

Is AI the end of Computer Science?