Institute of Neuroscience and Medicine

Publications of Computational Neurophysics

Selected publications of Computational Neurophysics

Multi-scale account of the network structure of macaque visual cortex

Cortical network structure has been extensively characterized at the level of local circuits and in terms of long-range connectivity, but seldom in a manner that integrates both of these scales. Furthermore, while the connectivity of cortex is known to be related to its architecture, this knowledge has not been used to derive a comprehensive cortical connectivity map.

Schmidt M., Bakker R., Hilgetag CC., Diesmann M., van Albada SJ. Brain Structure and Function, pp. 1-27.
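
The kind of structural regularity such a connectivity map builds on can be illustrated with an exponential distance rule, a commonly used statistical description in which inter-area connection density decays with distance. The sketch below is purely illustrative; the decay constant and base density are placeholder values, not parameters from the paper.

```python
import numpy as np

# Illustrative sketch only: an exponential distance rule relating inter-area
# connection density to distance. LAMBDA_MM and C0 are made-up placeholder
# values, not parameters estimated in the paper.

LAMBDA_MM = 10.0   # hypothetical decay constant in mm
C0 = 0.1           # hypothetical connection density at zero distance

def connection_density(distance_mm):
    """Expected connection density between two areas separated by distance_mm."""
    return C0 * np.exp(-distance_mm / LAMBDA_MM)

# Example: expected densities for a few inter-area distances
for d in (5.0, 20.0, 40.0):
    print(f"distance {d:5.1f} mm -> density {connection_density(d):.4f}")
```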

Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator

Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and increases reliability by using the same simulation code and the same network model specifications for both model classes.

Hahne J., Dahmen D., Schuecker J., Frommer A., Bolten M., Helias M., Diesmann M. Frontiers in Neuroinformatics 11:34 (2017)
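
How continuous-time rate dynamics can be advanced inside a spike-based simulation loop may be sketched as follows: a single linear rate unit is integrated with an exponential-Euler propagator on the same fixed time grid on which spikes are delivered. This is a minimal sketch under assumed parameters, not the framework's implementation described in the paper.

```python
import numpy as np

# Minimal sketch (assumptions, not the paper's implementation): a rate unit
# obeying tau*dr/dt = -r + mu + J*s(t) is advanced with an exponential-Euler
# step on the same fixed time grid used to deliver spikes, so both model
# classes share one simulation loop.

rng = np.random.default_rng(1)

dt = 0.1     # ms, simulation resolution shared by both model classes
tau = 10.0   # ms, time constant of the rate unit
mu = 1.0     # constant drive
J = 0.5      # weight of the spiking input onto the rate unit
nu = 0.02    # spikes per ms of the Poisson source (20 Hz)

prop = np.exp(-dt / tau)   # exponential-Euler propagator
r = 0.0
trace = []
for step in range(5000):
    spike = rng.random() < nu * dt            # Poisson spike in this step
    drive = mu + (J / dt if spike else 0.0)   # spikes enter as delta pulses
    r = prop * r + (1.0 - prop) * drive       # exact for piecewise-constant drive
    trace.append(r)

print(f"mean rate-unit activity: {np.mean(trace):.3f}")
```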

Constructing Neuronal Network Models in Massively Parallel Environments

Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the nodes in high-performance clusters and supercomputers.

Ippen T., Eppler JM., Plesser HE., Diesmann M. Frontiers in Neuroinformatics 11:30 (2017)
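
The locality tests mentioned in the abstract can be sketched with a toy round-robin distribution of neurons over processes: each process checks whether a connection's target is local before doing any further work, so foreign connections are skipped cheaply. The distribution scheme, process count, and connection probability below are assumptions for the demo, not NEST kernel code.

```python
import numpy as np

# Simplified stand-in for the locality test discussed above (hypothetical
# code): with P parallel processes and a round-robin distribution, neuron
# gid is local to process `rank` iff gid % P == rank. Testing the locality
# of the *target* first lets each process skip foreign connections without
# instantiating them.

P = 4      # number of parallel processes (assumption for the demo)
rank = 2   # the process we simulate here

def is_local(gid, num_procs=P, my_rank=rank):
    """Round-robin distribution of neurons over processes."""
    return gid % num_procs == my_rank

def connect_local(sources, targets, p_conn, rng):
    """Each process stores only connections whose target lives on it."""
    local_connections = []
    for tgt in targets:
        if not is_local(tgt):
            continue                    # cheap test: skip foreign targets
        for src in sources:             # sources may live on any process
            if rng.random() < p_conn:   # fixed connection probability
                local_connections.append((src, tgt))
    return local_connections

rng = np.random.default_rng(42)
conns = connect_local(range(1000), range(1000), 0.1, rng)
print(f"rank {rank} stores {len(conns)} of the network's connections")
```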

Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks

With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network.

Hagen E., Dahmen D., Stavrinou ML., Lindén H., Tetzlaff T., van Albada SJ., Grün S., Diesmann M., Einevoll GT. Cerebral Cortex 26:4461-4496 (2016)
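
The two-stage character of the hybrid scheme can be sketched in a few lines: binned spike counts from a point-neuron simulation (stage one) are convolved with precomputed per-population kernels to yield an LFP estimate (stage two). The surrogate spike counts, kernel shape, and weights below are placeholders, not the kernels the paper derives from multicompartment neuron models.

```python
import numpy as np

# Toy sketch of the hybrid scheme's structure (assumed form, placeholder
# kernels): spikes are first recorded from a point-neuron network; the LFP
# at an electrode is then approximated by convolving each population's
# binned spike counts with a precomputed kernel.

dt = 1.0                  # ms
t = np.arange(0, 200, dt)

rng = np.random.default_rng(0)
# stage 1 output: binned population spike counts (here: surrogate Poisson data)
rate_exc = rng.poisson(8.0, size=t.size).astype(float)
rate_inh = rng.poisson(2.0, size=t.size).astype(float)

# placeholder kernels: biexponential post-synaptic LFP contribution
tau_r, tau_d = 1.0, 10.0  # ms, made-up rise and decay constants
tk = np.arange(0, 50, dt)
kernel = np.exp(-tk / tau_d) - np.exp(-tk / tau_r)
kernel /= np.abs(kernel).sum()

# stage 2: LFP as a weighted sum of convolved population spike counts
lfp = (0.7 * np.convolve(rate_exc, kernel, mode="full")
       - 0.3 * np.convolve(rate_inh, kernel, mode="full"))[: t.size]
print(f"LFP trace: {t.size} samples, std = {lfp.std():.3f} (arbitrary units)")
```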

Effect of Heterogeneity on Decorrelation Mechanisms in Spiking Neural Networks: A Neuromorphic-Hardware Study

High-level brain function, such as memory, classification, or reasoning, can be realized by means of recurrent networks of simplified model neurons. Analog neuromorphic hardware constitutes a fast and energy-efficient substrate for the implementation of such neural computing architectures in technical applications and neuroscientific research.

Pfeil T., Jordan J., Tetzlaff T., Grübl A., Schemmel J., Diesmann M., Meier K. Physical Review X 6:021023 (2016)
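
The decorrelation studied here is conventionally quantified by the distribution of pairwise spike-count correlation coefficients. The following sketch computes this statistic for surrogate spike trains with an assumed shared-input fraction; it illustrates only the measure, not the hardware experiment or the network model.

```python
import numpy as np

# Illustrative measurement only (not the hardware study): decorrelation is
# typically quantified by pairwise spike-count correlation coefficients.
# Here we compute them for surrogate binary spike trains that draw a
# fraction c of their bins from a common source (c is an assumption).

rng = np.random.default_rng(7)

n_neurons, n_bins = 50, 2000
c = 0.3                                    # assumed fraction of shared input
shared = rng.random(n_bins) < 0.05         # common drive
private = rng.random((n_neurons, n_bins)) < 0.05
counts = np.where(rng.random((n_neurons, n_bins)) < c, shared, private)

corr = np.corrcoef(counts.astype(float))
pairs = corr[np.triu_indices(n_neurons, k=1)]
print(f"mean pairwise correlation: {pairs.mean():.3f}")
```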

Complete list of peer-reviewed articles

Books

Book Chapters

