Guiding Sparse Neural Networks with Neurobiological Principles to Elicit Biologically Plausible Representations

2026-03-03
Machine Learning

AI summary

The authors observe that deep neural networks often struggle to learn quickly and adapt the way biological brains do. They propose a new learning method inspired by biological neural circuits, incorporating features such as sparse connectivity and characteristic weight patterns. The method respects these biological rules naturally, without explicit enforcement, and helps the network generalize better and resist adversarial attacks. The results show that adding brain-like principles can make artificial networks behave more like real neural systems, especially when learning from few examples.

deep neural networks, generalization, few-shot learning, neurobiological principles, sparsity, lognormal weight distribution, Dale's law, adversarial attacks, biologically plausible representations
Authors
Patrick Inoue, Florian Röhrbein, Andreas Knoblauch
Abstract
While deep neural networks (DNNs) have achieved remarkable performance in tasks such as image recognition, they often struggle with generalization, learning from few examples, and continuous adaptation, abilities inherent in biological neural systems. These challenges arise due to DNNs' failure to emulate the efficient, adaptive learning mechanisms of biological networks. To address these issues, we explore the integration of neurobiologically inspired assumptions in neural network learning. This study introduces a biologically inspired learning rule that naturally integrates neurobiological principles, including sparsity, lognormal weight distributions, and adherence to Dale's law, without requiring explicit enforcement. By aligning with these core neurobiological principles, our model enhances robustness against adversarial attacks and demonstrates superior generalization, particularly in few-shot learning scenarios. Notably, integrating these constraints leads to the emergence of biologically plausible neural representations, underscoring the efficacy of incorporating neurobiological assumptions into neural network design. Preliminary results suggest that this approach could extend from feature-specific to task-specific encoding, potentially offering insights into neural resource allocation for complex tasks.
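To make the three neurobiological constraints named in the abstract concrete, the sketch below shows what a weight matrix satisfying them looks like. This is an illustrative NumPy example under my own assumptions (function name, parameter values, and the 80/20 excitatory/inhibitory split are hypothetical), not the authors' learning rule: connectivity is sparse, nonzero weight magnitudes follow a lognormal distribution (many weak synapses, a few strong ones), and Dale's law is satisfied because each presynaptic neuron has a single sign across all its outgoing weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def sketch_bio_weights(n_in, n_out, density=0.1,
                       frac_excitatory=0.8, mu=-0.7, sigma=0.9):
    """Illustrative weight matrix obeying three neurobiological constraints.

    - sparsity: only a `density` fraction of connections exist
    - lognormal magnitudes: heavy-tailed synaptic strengths
    - Dale's law: each presynaptic neuron (row) is either excitatory
      (all outgoing weights >= 0) or inhibitory (all <= 0)
    """
    mask = rng.random((n_in, n_out)) < density            # sparse connectivity
    magnitudes = rng.lognormal(mean=mu, sigma=sigma,
                               size=(n_in, n_out))        # heavy-tailed strengths
    # One sign per presynaptic neuron enforces Dale's law.
    signs = np.where(rng.random(n_in) < frac_excitatory, 1.0, -1.0)
    return mask * magnitudes * signs[:, None]

W = sketch_bio_weights(256, 128)
print(f"density: {np.count_nonzero(W) / W.size:.3f}")
# Dale's law holds: each row's nonzero entries share one sign.
dale_ok = all(np.all(r[r != 0] >= 0) or np.all(r[r != 0] <= 0) for r in W)
print(f"Dale's law satisfied: {dale_ok}")
```

Note that in the paper these properties are reported to *emerge* from the learning rule rather than being imposed at initialization as done here; the sketch only illustrates what the constraints mean for a weight matrix.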