Causal Learning with Neural Assemblies
2026-04-29 • Machine Learning
Machine Learning · Artificial Intelligence · Neural and Evolutionary Computing
AI summary
The authors explore whether neural assemblies, groups of neurons that activate together, can learn not just associations but the direction of causal influence between variables. They show that basic assembly operations such as projection and local plasticity suffice for directional learning via a method called DIRECT, which adjusts connections through local updates alone rather than backpropagation. The approach tracks connection strength in each direction, confirming that neural assemblies can internally represent cause-and-effect relationships. This work links biologically plausible brain models to formal causal reasoning through clear, traceable learning steps.
neural assemblies, causal direction, local plasticity, DIRECT method, synaptic strength, winner selection, causal inference, projection, biologically plausible learning, structural recovery
Authors
Evangelia Kopadi, Dimitris Kalles
Abstract
Can Neural Assemblies -- groups of neurons that fire together and strengthen through co-activation -- learn the direction of causal influence between variables? While established as a computationally general substrate for classification, parsing, and planning, neural assemblies have not yet been shown to internalize causal directionality. We demonstrate that the inherent operations of neural assemblies -- projection, local plasticity control, and sparse winner selection -- are sufficient for directional learning. We introduce DIRECT (DIRectional Edge Coupling/Training), a mechanism that co-activates source and target assemblies under an adaptive gain schedule to internalize directed relations. Unlike backpropagation-based methods, DIRECT relies solely on local plasticity, making the resulting causal claims auditable at the mechanism level. Our findings are verified through a dual-readout validation strategy: (i) synaptic-strength asymmetry, measuring the emergent weight gap between forward and reverse links, and (ii) functional propagation overlap, quantifying the reliability of directional signal flow. Across multiple domains, the framework achieves perfect structural recovery under a supervised, known-structure setting. These results establish neural assemblies as an auditable bridge between biologically plausible dynamics and formal causal models, offering an "explainable by design" framework where causal claims are traceable to specific neural winners and synaptic asymmetries.
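The ingredients named in the abstract (projection, sparse winner selection, gain-scheduled local plasticity, and the dual readouts) can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the `winners` helper, the weight matrices `W_fwd`/`W_rev`, and the decaying gain schedule are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 10                       # neurons per area, winners per step (sparsity cap)

def winners(x, k):
    """Sparse winner selection: keep the k most active units as a binary assembly."""
    out = np.zeros_like(x)
    out[np.argsort(x)[-k:]] = 1.0
    return out

# Small random weights in both directions between source area X and target area Y.
W_fwd = rng.random((n, n)) * 0.01    # X -> Y (trained)
W_rev = rng.random((n, n)) * 0.01    # Y -> X (left untrained, for the asymmetry readout)

src = winners(rng.random(n), k)      # fixed source assembly

for t in range(50):
    gain = 1.0 / (1.0 + t)           # hypothetical decaying gain schedule
    tgt = winners(W_fwd.T @ src, k)  # project the source assembly into the target area
    # Local Hebbian update: strengthen only edges between co-active winners.
    W_fwd += gain * np.outer(src, tgt)

# Readout (i): synaptic-strength asymmetry between forward and reverse links.
fwd_strength = (np.outer(src, tgt) * W_fwd).sum()
rev_strength = (np.outer(tgt, src) * W_rev).sum()

# Readout (ii): functional propagation overlap -- re-project and compare winners.
overlap = (winners(W_fwd.T @ src, k) * tgt).sum() / k
```

After training, `fwd_strength` exceeds `rev_strength` (the learned directional asymmetry) and `overlap` is 1.0, since the strengthened forward edges reliably reproduce the target assembly; both quantities are computed from specific winners and weights, which is the sense in which the claims are auditable.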