Arbor [1,2] is a performance portable library for the simulation of large networks of multi-compartment, morphologically detailed neurons on emerging HPC architectures. It is developed under an open development model by the Jülich Supercomputing Centre's SimLab Neuroscience and the Swiss National Supercomputing Centre (CSCS), in close collaboration with the neuroscientific community.

Arbor simulates networks of spiking neurons, in particular networks of multi-compartment neurons. In these networks, the interaction between cells is conveyed by spikes and gap junctions, and the multi-compartment neurons are characterized by axonal delays, synaptic functions and cable trees. Each cell is modeled as a branching, one-dimensional electrical system whose dynamics derive from the balance of transmembrane currents with axial currents traveling through the intracellular medium, with ion channels and synapses represented as additional current sources. Arbor additionally models leaky integrate-and-fire cells and proxy spike sources.
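The simplest cell kind mentioned above, the leaky integrate-and-fire cell, illustrates the general pattern of membrane dynamics driven by currents. The following sketch integrates one such cell with a forward-Euler step; all names and parameter values are illustrative, and this is a generic textbook scheme, not Arbor's implementation:

```python
def simulate_lif(t_end=100.0, dt=0.025, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, i_ext=20.0, c_m=1.0):
    """Forward-Euler integration of a leaky integrate-and-fire cell.

    Membrane equation: C dV/dt = -(V - V_rest)/R + I_ext, written below
    with tau = R*C. Times in ms, voltages in mV; current is arbitrary.
    Returns the list of spike times.
    """
    v = v_rest
    spikes = []
    t = 0.0
    while t < t_end:
        # leak pulls v toward v_rest; the external current drives it up
        v += ((v_rest - v) / tau + i_ext / c_m) * dt
        if v >= v_thresh:
            spikes.append(t)   # threshold crossing: record spike, reset
            v = v_reset
        t += dt
    return spikes

print(len(simulate_lif()))     # number of spikes in 100 ms
```

With the constant drive used here the cell fires regularly; in a network setting, spikes like these are what the communication layer exchanges between cells.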

Background and Motivation

The evolution of computing hardware, from desktop PCs to supercomputing centres, has enabled a plethora of tools for numerically predicting neuronal network behavior that can be compared with a variety of experimental results, allowing the rigorous testing of candidate functional models with varying levels of experimental verification, mathematical validity and stability, and computational performance. New HPC architectures, in particular the now-ubiquitous GPU, pose a fresh challenge: developing performant algorithms for solving the Hines matrix on GPUs and other vectorized hardware is an additional hurdle [3]. The development of Arbor has focused on tackling vectorization and emerging hardware architectures by using modern C++ and automated code generation, within an open-source and open-development model.
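The Hines matrix mentioned above arises from discretizing the cable equation on a branched morphology: with compartments numbered so that every child has a higher index than its parent, the resulting symmetric system can be solved in two sweeps with no fill-in. The following pure-Python sketch shows the classic serial algorithm (function name and test system are illustrative; GPU solvers such as [3] parallelize these sweeps):

```python
def hines_solve(d, u, b, parent):
    """Solve A x = b for a symmetric Hines matrix A with
    A[i][i] = d[i] and A[i][parent[i]] = A[parent[i]][i] = u[i].
    Compartments are ordered root-first, so parent[i] < i.
    d and b are modified in place; returns the solution x."""
    n = len(d)
    # Backward sweep: eliminate each node into its parent (leaves to root).
    for i in range(n - 1, 0, -1):
        f = u[i] / d[i]
        p = parent[i]
        d[p] -= f * u[i]
        b[p] -= f * b[i]
    # Forward sweep: substitute from the root back down the tree.
    x = [0.0] * n
    x[0] = b[0] / d[0]
    for i in range(1, n):
        x[i] = (b[i] - u[i] * x[parent[i]]) / d[i]
    return x

# Example: a 4-compartment tree (root 0 with children 1 and 2,
# and node 3 attached below node 1).
x = hines_solve([4.0, 5.0, 6.0, 7.0],   # diagonal
                [0.0, 1.0, 2.0, 1.0],   # coupling to parent
                [12.0, 15.0, 20.0, 30.0],
                [-1, 0, 0, 1])
print(x)
```

The elimination order is exactly a post-order traversal of the tree, which is why the matrix stays sparse throughout the solve.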

Our Approach

Arbor is designed to accommodate three primary goals:

  • scalability;
  • extensibility;
  • and performance portability.

Scalability is achieved through distributed model construction, following the abstraction of a recipe, and through an asynchronous MPI-based spike communication scheme.
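The communication scheme can be sketched conceptually: the run is divided into epochs no longer than the minimum network delay, so spikes emitted in one epoch are only needed from the next epoch onward, and their global exchange can overlap with computation. A serial toy model of this double-buffered pattern (no real MPI here; all names are illustrative):

```python
def run_epochs(local_spikes_per_epoch):
    """local_spikes_per_epoch[k][r]: spikes "rank" r emits in epoch k.
    Returns, per epoch, the merged global spikes delivered at its start.
    Ranks are simulated serially; in Arbor the merge would be an
    asynchronous MPI exchange overlapping the next epoch's computation."""
    delivered = []
    in_flight = []                    # spikes exchanged during the previous epoch
    for epoch_spikes in local_spikes_per_epoch:
        delivered.append(in_flight)   # deliver last epoch's merged spikes
        # "allgather": merge every rank's local spikes for the next epoch
        in_flight = [s for rank in epoch_spikes for s in rank]
    return delivered

epochs = [[["a0"], ["b0"]],           # epoch 0: rank 0 emits a0, rank 1 emits b0
          [["a1"], []],               # epoch 1: only rank 0 emits
          [[], []]]                   # epoch 2: silent
print(run_epochs(epochs))             # each epoch sees the previous epoch's spikes
```

The one-epoch lag is safe precisely because no connection has a delay shorter than the epoch length, which is what makes the overlap of communication and computation possible.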

To achieve abstraction, Arbor makes a distinction between the description of a model, and the execution of a model: a recipe describes a model, and a simulation is an executable instantiation of a model.

To be able to simulate a model, three basic steps need to be considered:

  • first, describe the model by defining a recipe;
  • second, define the computational resources available to execute the model;
  • and finally, initiate and execute a simulation of the recipe on the chosen hardware resources.
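Under illustrative assumptions (the class and method names below are hypothetical, not Arbor's actual C++ or Python API), the three steps above can be sketched as:

```python
class RingRecipe:
    """Step 1: a recipe *describes* the model cell by cell, so each
    rank can construct only the cells assigned to it."""
    def __init__(self, ncells):
        self.ncells = ncells

    def num_cells(self):
        return self.ncells

    def cell_description(self, gid):
        # A real recipe would return a morphology, mechanisms, etc.
        return {"kind": "lif", "gid": gid}

    def connections_on(self, gid):
        # Each cell listens to its predecessor in a ring.
        return [((gid - 1) % self.ncells, gid)]

class Context:
    """Step 2: the hardware resources available for execution."""
    def __init__(self, threads=1, gpu=False):
        self.threads, self.gpu = threads, gpu

class Simulation:
    """Step 3: an executable instantiation of a recipe on a context."""
    def __init__(self, recipe, context):
        self.cells = [recipe.cell_description(g)
                      for g in range(recipe.num_cells())]
        self.context = context

    def run(self, t_end):
        return f"ran {len(self.cells)} cells for {t_end} ms"

sim = Simulation(RingRecipe(4), Context(threads=4))
print(sim.run(100.0))
```

The key design point is that the recipe is cheap, lazy and queried per cell, so a distributed simulation never has to materialize the full model on any single rank.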

The description of multi-compartment cells also includes the specification of ion channel and synapse dynamics, which the recipe refers to as mechanisms. Mechanism implementations are either hand-coded or generated by the provided translator modcc, which compiles a subset of NEURON's mechanism specification language NMODL; cell morphologies can be specified using the SWC file format.
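As a small illustration of the SWC format mentioned above: each non-comment line holds the fields id, type, x, y, z, radius and parent, with parent -1 marking the root sample. A minimal reader (a generic sketch, not Arbor's actual loader):

```python
def parse_swc(text):
    """Parse SWC morphology text into {id: sample} dictionaries."""
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):   # skip blanks and comments
            continue
        sid, stype, x, y, z, radius, parent = line.split()
        samples[int(sid)] = {
            "type": int(stype),                # e.g. 1 = soma, 3 = dendrite
            "xyz": (float(x), float(y), float(z)),
            "radius": float(radius),
            "parent": int(parent),             # -1 for the root sample
        }
    return samples

demo = """# a soma sample followed by one dendrite sample
1 1 0.0 0.0 0.0 5.0 -1
2 3 0.0 10.0 0.0 1.0 1
"""
morpho = parse_swc(demo)
print(morpho[2]["parent"])
```

The parent pointers encode the same tree structure that later becomes the Hines matrix of the discretized cell.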

Arbor is extensible, allowing for the creation of new kinds of cells and new kinds of cell implementations, while target-specific vectorization, code generation and cell group implementations allow hardware optimized performance of models specified in a portable and generic way.

High-Performance Computing in Arbor

The Arbor library is an active open-source project, written in C++14 and CUDA and developed under an open development model. It scales from laptops to the largest HPC clusters using MPI. The on-node implementation is specialized for GPUs, vectorized multicore and Intel KNL, with a modular design that enables extension to new computer architectures, and it employs specific optimizations for the GPU and CPU implementations. Figure 1 shows Arbor's single-node scaling, and Figure 2 its scaling on large clusters.

Figure 1: Arbor's efficient multicore memory layout gives nearly perfect scaling for a ring network simulated for 100 ms, with cells of 150 compartments, 10000 synapses per cell, passive dendrites and a Hodgkin-Huxley soma.
Figure 2: Arbor shows perfect weak scaling for a 100 ms simulation of a network with 10000 random connections per cell. The GPU version additionally requires 25% less energy.

Benchmarking and validation of Arbor and other simulators can be performed with the NSuite performance and validation testing suite, which is ongoing work in Arbor development. Full support for the SONATA model exchange format is under active development, as is a Python API. Arbor will provide APIs for integration with other tools and simulators, including co-simulation with NEST.

Our contribution

We are involved in the following activities:

  • Software development
  • Benchmarking and testing
  • Neuroscience community outreach through training sessions and workshops

Our collaboration partners

Arbor is being developed in collaboration with the Swiss National Supercomputing Centre (CSCS).


Arbor Library v0.2 is now available.

Arbor is a library for implementing performance portable network simulations of multi-compartment neuron models.


1. N. Abi Akar et al., Arbor - A Morphologically-Detailed Neural Network Simulation Library for Contemporary High-Performance Computing Architectures, 2019 27th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP), Pavia, Italy, 13 Feb 2019 - 15 Feb 2019, DOI: 10.1109/EMPDP.2019.8671560

2. Nora Abi Akar et al., arbor-sim/arbor: Arbor Library v0.2 (Version v0.2), Zenodo, Mar 4 2019, DOI: 10.5281/zenodo.2583709 

3. F. Huber, Efficient Tree Solver for Hines Matrices on the GPU, arXiv preprint arXiv:1810.12742, 2018.



Simlab Contact

Wouter Klijn

Anne Küsters

Last Modified: 06.05.2022