JURECA Visualization Nodes Available

For remote rendering and post-processing of scientific data, twelve GPU-equipped visualization nodes are available as part of our JURECA cluster. Ten of these nodes have 512 GB of main memory and two have 1 TB. Two visualization nodes are configured as login nodes and can be accessed directly at jurecavis.zam.kfa-juelich.de; the remaining ten are managed by the batch system in the 'vis' partition.
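As an illustration, a batch allocation on the vis partition could look like the following minimal sketch. Only the partition name is taken from the text above; the resource limits and the application name are placeholder values chosen for this example:

    #!/bin/bash
    #SBATCH --partition=vis        # visualization partition (from the text above)
    #SBATCH --nodes=1              # one visualization node
    #SBATCH --time=01:00:00        # wall-clock limit (example value)
    srun ./my_vis_app              # placeholder for the actual visualization application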

Because a graphical login considerably simplifies access to remote visualization and to the data stored on the file system, we support and recommend Virtual Network Computing (VNC) in conjunction with VirtualGL. Together, these tools allow the OpenGL commands of visualization software to be executed directly on the GPUs of the JURECA visualization nodes, speeding up rendering. To set up a VNC connection to JURECA easily, we suggest the utility Strudel, developed by the Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE) at Monash University, Melbourne, Australia. JURECA has been prepared accordingly, and recent versions of Strudel have been updated to support our visualization nodes.
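In generic terms (the JURECA-specific steps are covered by the documentation referenced below), a manual VNC/VirtualGL session typically follows the pattern sketched here; the display number, port, geometry, and application name are example values, not prescribed settings:

    # on the visualization node: start a VNC server on display :1
    vncserver :1 -geometry 1920x1080
    # run the OpenGL application through VirtualGL so rendering uses the node's GPU
    vglrun ./my_vis_app

    # on the local workstation: tunnel the VNC port (5901 for display :1) and connect
    ssh -L 5901:localhost:5901 <user>@jurecavis.zam.kfa-juelich.de
    vncviewer localhost:5901

Strudel automates this kind of session setup, so these steps do not have to be carried out by hand.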

All users with an HPC project on JURECA already have access to the vis nodes via their normal project account. Other users can gain access by sending a substantiated request to sc@fz-juelich.de. For further details on how to use the JURECA vis nodes for remote rendering, please refer to the documentation at http://www.fz-juelich.de/ias/jsc/jureca-visnodes.html.
(Contact: Dr. Herwig Zilken, h.zilken@fz-juelich.de)

JSC News, No. 243, July 2016
