In the past few decades, artificial intelligence (AI) has achieved remarkable feats in many fields. Chess is one of them: in 1996, the Deep Blue computer won a game against reigning world champion Garry Kasparov, the first such victory for a machine under standard tournament conditions.
A new study now demonstrates that the strategy the brain uses to store memories may produce imperfect recollections but, in exchange, allows it to store more memories, and with less effort than AI. Carried out by scientists at SISSA together with the Kavli Institute for Systems Neuroscience & Centre for Neural Computation in Trondheim, Norway, the study was recently published in Physical Review Letters.
Neural networks, real or artificial, learn by adjusting the connections between neurons. Making them weaker or stronger drives some neurons to become more active and others less, until a stable pattern of activity emerges. This pattern is the so-called "memory."
The AI strategy is to employ long, complex algorithms that iteratively tune and optimize the connections. The brain does it far more simply: each connection between two neurons changes based only on how active those two neurons are at the same time.
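This kind of local rule can be sketched in a few lines. The covariance-style Hebbian rule below is one common textbook form of such a rule, chosen here for illustration; it is not necessarily the exact rule analyzed in the paper. Each weight depends only on the activity of the two neurons it connects.

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 100
n_patterns = 5
# Random graded activity patterns (like firing rates) to be memorized
patterns = rng.random((n_patterns, n_neurons))

# Covariance-style Hebbian rule: strengthen connections between neurons
# that are co-active above their mean activity, weaken them otherwise:
#   w_ij = (1/p) * sum over patterns of (x_i - <x_i>) * (x_j - <x_j>)
centered = patterns - patterns.mean(axis=0)
W = centered.T @ centered / n_patterns
np.fill_diagonal(W, 0.0)  # no self-connections
```

Note that no iterative optimization is involved: every weight is set in one pass from purely local information, in contrast with the repeated global adjustments of typical AI training algorithms.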
Compared with the AI algorithms, this local rule had long been thought to allow the storage of fewer memories. However, when it comes to memory capacity and retrieval, that conventional wisdom rests largely on analyses of networks built on a basic simplification: that neurons can be treated as binary units.
The new study shows otherwise: the lower number of memories stored with the brain's strategy is an artifact of that unrealistic assumption. When the simple connection-changing strategy employed by the brain is combined with biologically plausible models of single-neuron responses, the strategy performs as well as, or even better than, AI algorithms.
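The contrast between the two modeling assumptions can be made concrete. The sketch below compares a binary unit with a threshold-linear unit of the kind named in the paper's title; the function names and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def binary_unit(h, threshold=0.0):
    # Binary simplification: the neuron is either fully on or fully off.
    return (h > threshold).astype(float)

def threshold_linear(h, threshold=0.0, gain=1.0):
    # Threshold-linear (ReLU-like) response: silent below threshold,
    # firing rate growing in proportion to the input above it.
    return gain * np.maximum(h - threshold, 0.0)

h = np.array([-0.5, 0.2, 1.0, 3.0])  # example input currents
binary_unit(h)       # graded information about input strength is lost
threshold_linear(h)  # graded information above threshold is preserved
```

A threshold-linear unit keeps the graded structure of its input above threshold, which is precisely what the binary simplification throws away.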
How is this possible? Counterintuitively, the answer lies in making errors: a memory can be retrieved effectively either as identical to the original input-to-be-memorized or as merely correlated with it.
The brain's strategy retrieves memories that are not identical to the original input, because it cancels out the activity of the neurons that are only barely active in each pattern. These silenced neurons play no crucial role in distinguishing among the various memories stored in the same network.
Such neurons can therefore be ignored, freeing neural resources to focus on the neurons that matter in each input-to-be-memorized, which allows a higher storage capacity.
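A toy illustration of this effect (not the paper's actual retrieval dynamics): applying a firing threshold during retrieval silences the weakly active neurons of a stored pattern, so the retrieved pattern is correlated with the original rather than identical to it. The threshold value here is an arbitrary choice for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_patterns = 200, 3
patterns = rng.random((n_patterns, n_neurons))  # graded activities in [0, 1)

theta = 0.3  # hypothetical retrieval threshold
retrieved = np.maximum(patterns - theta, 0.0)  # weakly active neurons -> 0

silenced_fraction = np.mean(retrieved == 0.0)
correlation = np.corrcoef(patterns[0], retrieved[0])[0, 1]
# A sizeable fraction of neurons is silenced, yet the retrieved pattern
# remains strongly correlated with the original one.
```

The "error" of silencing roughly a third of the neurons costs little, because those neurons carried little of the information that tells the stored memories apart.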
More generally, the study shows how biologically plausible, self-organized learning procedures can be just as efficient as slow, neurally implausible training algorithms.
Schönsberg, F., et al. (2021) Efficiency of Local Learning Rules in Threshold-Linear Associative Networks. Physical Review Letters. doi.org/10.1103/PhysRevLett.126.018301.