Validation of Spiking Neural Network Simulations
Background and motivation
The reproduction and replication of scientific results are an indispensable aspect of good scientific practice, enabling previous studies to be built upon and increasing our level of confidence in them. However, reproducibility and replicability are not sufficient: an incorrect result will be accurately reproduced if the same incorrect methods are used. In the domain of complex neural network simulations, the causes of incorrect results range from flawed model implementations and data analysis methods, through deficiencies in workmanship (e.g., errors in simulation planning, setup, and execution), to errors induced by hardware constraints (e.g., limited numerical precision). To build credibility, methods such as verification and validation have been developed. However, they are not yet well established in the field of neural network modeling and simulation, partly due to ambiguity in the terminology, but also due to difficulties in applying them. Beyond a common definition of the terminology, the methodology requires formalized workflows and standardized test cases, e.g., statistical test metrics that enable the quantitative validation of network models at the level of the population dynamics.
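As an illustration of such a population-level test metric, the sketch below applies the two-sample Kolmogorov-Smirnov statistic to per-neuron firing-rate distributions from two simulations. The rate distributions and sample sizes are invented for illustration and are not drawn from any specific study:

```python
import bisect
import random

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    distance between the two empirical cumulative distribution functions."""
    a, b = sorted(sample_a), sorted(sample_b)
    d = 0.0
    for x in a + b:
        cdf_a = bisect.bisect_right(a, x) / len(a)
        cdf_b = bisect.bisect_right(b, x) / len(b)
        d = max(d, abs(cdf_a - cdf_b))
    return d

# Hypothetical per-neuron firing rates (spikes/s) from three simulation runs.
rng = random.Random(42)
rates_ref = [rng.gauss(8.0, 2.0) for _ in range(500)]   # reference run
rates_sim = [rng.gauss(8.0, 2.0) for _ in range(500)]   # statistically matching run
rates_bad = [rng.gauss(11.0, 2.0) for _ in range(500)]  # deviating run

d_ok = ks_statistic(rates_ref, rates_sim)   # small distance expected
d_bad = ks_statistic(rates_ref, rates_bad)  # large distance expected
```

In a validation workflow, the observed statistic would be compared against a critical value or calibrated against surrogate data to decide whether two network realizations agree at the level of their rate distributions.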
Previous work
In a research project carried out in collaboration with the FZJ institute INM-6/IAS-6, we developed methods, workflows, and tools for the validation of complex neural network simulations. We proposed an adaptation of the existing terminology for model verification and validation and applied it to the field of neural network modeling and simulation. We also introduced the concept of model substantiation for increasing the correctness of simulation results in the absence of experimental validation data.
Our approach
In collaboration with the FZJ institute INM-6/IAS-6, we are developing a set of science use cases and test cases to explore the effects of hardware constraints on specific challenges. These challenges include, for example, the generation of realistic connectivity and the communication load in large-scale systems. We also consider novel neuromorphic computing architectures and hardware accelerators as validation and benchmarking targets. Our work builds on "Advanced Computing Architectures (ACA): towards multi-scale natural-density neuromorphic computing" (2018 - 2022), a highly interdisciplinary project carried out in cooperation across FZJ institutes (INM-6/IAS-6, JSC, PGI-7, PGI-10, and ZEA-2) and with external partners (RWTH Aachen (IDS), University of Manchester, and Heidelberg University). [https://www.fz-juelich.de/en/aca]
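To illustrate the kind of precision-induced error such test cases target, the following sketch integrates a leaky integrate-and-fire neuron with forward Euler, once in double precision and once with the membrane potential quantized to a fixed-point format with 10 fractional bits. All parameters and the number format are illustrative stand-ins, not those of any specific hardware:

```python
def simulate_lif(t_max=100.0, dt=0.1, tau=10.0, v_inf=20.0,
                 theta=15.0, v_reset=0.0, frac_bits=None):
    """Forward-Euler LIF neuron driven by a constant input (steady-state
    potential v_inf). Returns (spike_times, membrane_trace). If frac_bits
    is given, the membrane potential is rounded to that fixed-point
    resolution after every integration step, emulating limited precision."""
    scale = None if frac_bits is None else float(1 << frac_bits)
    v, t, spikes, trace = 0.0, 0.0, [], []
    while t < t_max:
        v += dt / tau * (v_inf - v)        # exponential approach to v_inf
        if scale is not None:
            v = round(v * scale) / scale   # quantize to the coarse grid
        if v >= theta:                     # threshold crossing -> spike
            spikes.append(t)
            v = v_reset
        trace.append(v)
        t += dt
    return spikes, trace

exact_spikes, exact_v = simulate_lif()              # double precision
fixed_spikes, fixed_v = simulate_lif(frac_bits=10)  # emulated fixed point

# Largest deviation of the quantized trajectory from the reference.
max_dev = max(abs(a - b) for a, b in zip(exact_v, fixed_v))
```

Even when both runs produce superficially similar spiking, the quantized membrane trajectory deviates from the reference at every step; in recurrent networks such per-neuron deviations can accumulate and shift spike times, which is precisely what population-level validation metrics are meant to detect.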
Publications
Trensch, G., Gutzen, R., Blundell, I., Denker, M., and Morrison, A. (2018). Rigorous neural network simulations: a model substantiation methodology for increasing the correctness of simulation results in the absence of experimental validation data. Front. Neuroinform. 12:81. doi:10.3389/fninf.2018.00081
Gutzen, R., von Papen, M., Trensch, G., Quaglio, P., Grün, S., and Denker, M. (2018). Reproducible neural network simulations: statistical methods for model validation on the level of network activity data. Front. Neuroinform. 12:90. doi:10.3389/fninf.2018.00090
Guido Trensch