Eric Upschulte

Researcher

Research Topics

deep learning, high-performance computing, computer vision, object detection, instance segmentation, generative models

Contact

+49 2461/61-5960

+49 2461/61-3483

E-Mail

Google Scholar

GitHub

LinkedIn

Address

Forschungszentrum Jülich GmbH
Wilhelm-Johnen-Straße
52428 Jülich

Institute of Neurosciences and Medicine (INM)

Structural and Functional Organisation of the Brain (INM-1)

Building 15.9 / Room 3028

Research Focus

My research focuses on the development of scalable deep learning methods for object detection and instance segmentation. The primary neuroscientific application is the quantitative analysis of the institute’s high-resolution whole-brain microscopy datasets, with a particular focus on brain cells, specific neuronal cell types, and blood vessels.

Whole-brain histological datasets reach the petabyte scale and contain biological structures in enormous numbers; a single human brain is estimated to contain around 86 billion neurons. This creates demanding requirements for AI-driven image analysis: models must detect and segment relevant structures reliably across different brain regions, tissue properties, staining variations, and imaging conditions, while remaining efficient enough to process complete datasets.
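
Processing at that scale typically forces inference to be tiled. As a rough illustration only (the model, tile size, and overlap below are hypothetical placeholders, not the institute's actual pipeline), a sliding-window pass over a large 2D section might look like this:

```python
import torch

def tiled_inference(model, image, tile=512, overlap=64):
    """Run a segmentation model over a large 2D image in overlapping tiles.

    Illustrative sketch: assumes image has shape (C, H, W) with H, W >= tile,
    and that model(batch) returns one score map of shape (1, tile, tile)
    per input patch.
    """
    c, h, w = image.shape
    stride = tile - overlap
    scores = torch.zeros(1, h, w)   # accumulated per-pixel scores
    counts = torch.zeros(1, h, w)   # how many tiles covered each pixel
    for y in range(0, max(h - overlap, 1), stride):
        for x in range(0, max(w - overlap, 1), stride):
            # Clamp the last tile so it ends exactly at the image border.
            y0, x0 = min(y, h - tile), min(x, w - tile)
            patch = image[:, y0:y0 + tile, x0:x0 + tile].unsqueeze(0)
            with torch.no_grad():
                pred = model(patch)[0]
            scores[:, y0:y0 + tile, x0:x0 + tile] += pred
            counts[:, y0:y0 + tile, x0:x0 + tile] += 1
    return scores / counts.clamp(min=1)  # average overlapping predictions
```

Averaging the overlap reduces boundary artifacts between tiles; real whole-brain workflows additionally distribute tiles across HPC nodes and stitch per-tile instance predictions back together.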

I develop efficient, reusable, and domain-agnostic AI models that transform large image datasets into structured quantitative data. In neuroscience, these models provide the foundation for downstream studies such as analyzing cell distributions in specific brain regions, studying vascular organization, and comparing microstructural patterns across the human brain. Beyond neuroscience, the underlying methods serve as general-purpose image analysis tools that are independent of a specific object class, imaging modality, or scientific use case, enabling adaptation to diverse detection and segmentation problems.
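
As a minimal sketch of that image-to-table step, the following converts an instance label map (a typical output format of instance segmentation) into a per-object measurement table; the toy label map and the chosen properties are illustrative assumptions, not the actual pipeline:

```python
import numpy as np
import pandas as pd
from skimage.measure import regionprops_table

# Hypothetical instance label map: 0 = background, 1..N = object IDs,
# as produced by an instance segmentation model.
labels = np.zeros((256, 256), dtype=np.int32)
labels[40:60, 40:60] = 1
labels[100:130, 150:190] = 2

# Reduce pixel-level instances to one row of measurements per object.
table = regionprops_table(labels, properties=("label", "centroid", "area"))
df = pd.DataFrame(table)
print(df)
```

Downstream analyses such as regional cell densities then operate on such structured tables rather than on raw image data.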

A central focus of my research is generalization and scalability. We train models on large and diverse datasets across scientific domains to improve transfer to new data, object categories, and analysis tasks. This principle guides both model architecture and workflow design: developed models, annotation strategies, and software components are built for reuse rather than for isolated single-purpose applications.
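
One generic way to realize such multi-domain training in PyTorch is to concatenate datasets and rebalance sampling so that smaller domains are not drowned out; the datasets and weighting scheme below are illustrative stand-ins rather than the actual training setup:

```python
import torch
from torch.utils.data import (ConcatDataset, DataLoader, TensorDataset,
                              WeightedRandomSampler)

# Two stand-in datasets from different "domains" (purely illustrative).
domain_a = TensorDataset(torch.randn(1000, 3, 64, 64))
domain_b = TensorDataset(torch.randn(200, 3, 64, 64))
combined = ConcatDataset([domain_a, domain_b])

# Weight samples inversely to dataset size so each domain contributes
# roughly equally per batch, regardless of how many images it has.
weights = torch.cat([
    torch.full((len(domain_a),), 1.0 / len(domain_a)),
    torch.full((len(domain_b),), 1.0 / len(domain_b)),
])
sampler = WeightedRandomSampler(weights, num_samples=len(combined),
                                replacement=True)
loader = DataLoader(combined, batch_size=16, sampler=sampler)
```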

Another key aspect of my work is computational efficiency. Since inference costs accumulate rapidly when processing petabyte-scale image data, large-scale analysis requires models that are both accurate and cost-effective. To address this, I develop contour-based segmentation workflows such as the Contour Proposal Network (CPN), which combines the efficiency of object detection models with the instance segmentation capabilities of more computationally expensive mask-based approaches.
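
The core idea is that a closed object outline can be regressed as a handful of coefficients instead of a dense pixel mask. The sketch below decodes a contour from low-order Fourier coefficients, in the spirit of the CPN's contour representation; the exact parameterization in the published model may differ, and all names here are illustrative:

```python
import numpy as np

def fourier_contour(coeffs_x, coeffs_y, num_points=64):
    """Decode a closed 2D contour from per-axis Fourier coefficients.

    coeffs_x, coeffs_y: arrays of shape (order, 2) holding (cosine, sine)
    amplitudes per harmonic. A generic low-order Fourier descriptor sketch;
    not the exact CPN parameterization.
    """
    t = np.linspace(0.0, 1.0, num_points, endpoint=False)
    x = np.zeros_like(t)
    y = np.zeros_like(t)
    for k, ((ax, bx), (ay, by)) in enumerate(zip(coeffs_x, coeffs_y), start=1):
        x += ax * np.cos(2 * np.pi * k * t) + bx * np.sin(2 * np.pi * k * t)
        y += ay * np.cos(2 * np.pi * k * t) + by * np.sin(2 * np.pi * k * t)
    return np.stack([x, y], axis=-1)  # (num_points, 2) contour points

# A single first-order harmonic already yields an ellipse around the origin:
contour = fourier_contour(np.array([[10.0, 0.0]]), np.array([[0.0, 6.0]]))
```

Because a few coefficients per object suffice to describe a smooth outline, decoding is cheap, and regressing contours sidesteps dense per-pixel mask prediction, which is one source of the efficiency described above.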

Last modified: 24.04.2026