Mum has expressed her frustration with me numerous times when my answer to her questions is constantly “I don’t know.” Some questions carry, IMO, the unreasonable expectation that I’m a cross between Superman and google.com. How did the tradesman get in through the main gate? How much should she pay the part-time helper? Does abc shop sell xyz brand of whatever? Sometimes I’m expected to have a 10TB hard disk in my brain. What was that $50 transaction on her bank account 6 months ago? How much did the tv originally cost? When did so-and-so visit us?
What she has difficulty understanding, or is unwilling to make the effort to understand (because it’s sooooo easy to ask a question and push the responsibility onto someone else), is that I archive a lot of the information I process. Once the receipt is filed away, I no longer need to remember how much the tv cost. I may remember where it was purchased, simply because there are only a limited number of electronics shops. What I do retain is where the receipt is, so I know where to find it if necessary.
Although I’m not an AI entity (or are we all living in a computer simulation?), this approach to storing information is behind a new idea of how deep neural networks learn. Like how AlphaGo learned to play Go and beat the European champion, but slightly different.
There is a long article at Quanta Magazine, sort of related to this, that I’ve been trying to read for a few days. I still have only a tenuous grasp of the theories; it’s quite technical.
Naftali Tishby, a computer scientist and neuroscientist at the Hebrew University of Jerusalem, proposed that deep neural networks learn via something he called the information bottleneck, where the network iteratively discards irrelevant information and retains the parts that matter. This theory is relevant not only to machine learning; it may also shed light on how human brains learn and retain information. It’s all about filtering and archiving. Or as Professor Tishby said:
the most important part of learning is actually forgetting.
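To get my own head around the idea, here is a toy sketch in Python (my illustration, not Tishby’s actual method). It measures mutual information, in bits, on a made-up example: X is a 4-bit number, Y is its parity. Keeping all of X carries 4 bits about the input; keeping only the parity bit “forgets” 3 bits yet is exactly as informative about Y. That trade-off is the bottleneck in miniature.

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """I(A;B) in bits, estimated from a list of (a, b) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    pa = Counter(a for a, _ in pairs)
    pb = Counter(b for _, b in pairs)
    return sum(c / n * log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in joint.items())

# Toy data: X is a 4-bit number, Y is its parity (the one "relevant" bit).
xs = list(range(16))
ys = [x % 2 for x in xs]

t_full = xs                    # identity representation: remember everything
t_parity = [x % 2 for x in xs] # compressed representation: remember one bit

print(mutual_information(list(zip(xs, t_full))))     # I(X;T) = 4.0 bits kept
print(mutual_information(list(zip(t_full, ys))))     # I(T;Y) = 1.0 bit about Y
print(mutual_information(list(zip(xs, t_parity))))   # I(X;T) = 1.0 bit kept
print(mutual_information(list(zip(t_parity, ys))))   # I(T;Y) = still 1.0 bit
```

The compressed representation throws away three quarters of the input bits without losing any predictive power, which is roughly the receipt-filing trick: forget the details, keep the bit you’ll actually need.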