Abstract:
Biological neural networks are characterized by vast complexity, which manifests itself in the highly intricate and specific structures that arise during development. From the numerous ion channels and complex subcellular biophysical processes that drive the dynamics of single neurons to the elaborate structure of the dendritic tree and the highly specific connectivity patterns between hundreds of distinct neuron types, biological networks have been optimized for information processing by millions of years of evolution. On top of these structures, a vast array of adaptive mechanisms modifies neuronal connection strengths with precision and efficiency to incorporate new information and adapt behaviour to changing conditions. However, unlike artificial networks, for which the learning process itself is the sole origin of structure and function, learning in biological networks occurs only within the constraints imposed by these intricate cellular and network-level structures.
This thesis investigates the influence that these structural constraints have on the learning process of biological neural networks. We begin by studying the evolutionary origins of synaptic plasticity and the extent to which different aspects of network and task structure can shape the form of evolved plasticity rules. Continuing with a greater focus on biological detail, we study how structural features of biological networks, including network topology and biophysical properties of individual neurons such as complex dendritic structures, strongly influence the ability of local synaptic plasticity mechanisms to perform simple unsupervised learning tasks. Subsequently, we turn to artificial networks performing more challenging tasks. Specifically, we investigate how modular structures in a balanced network of excitatory and inhibitory neurons affect population dynamics, and how networks in different dynamical states can serve as reservoirs in a time series prediction task. We then study how long timescales can arise in recurrent networks trained on memory tasks via distinct cellular and network-level mechanisms. We identify training curricula that drive the network to rely on one mechanism or the other and find that the resulting networks differ in performance. Finally, using the same memory tasks, we show how a neural growth curriculum that imposes a task-specific, modular structure on the network outperforms conventional training methods.
In summary, our findings suggest that structural constraints on biological and artificial networks can significantly affect their ability to learn. Across a variety of settings, tasks and learning mechanisms, we demonstrate that the evolution and function of local learning mechanisms can be effectively shaped by biophysical constraints on network structure. Moreover, we show how insights drawn from the impact of structural constraints on local learning generalize to artificial systems, leading to improvements in performance, robustness and generalizability.