Visualisation


Back to Contents Page

Visualisation Facilities

Last modified: 21/2/2017

The Hartree Centre Visualisation Facilities offer a set of capabilities for displaying 2D and 3D imagery across a range of standard software tools. These visualisation systems are designed to support the entire project lifecycle: developing new ideas during project planning; evaluating early results in 3D and on large screens during the project; creating exploratory visualisation spaces; and interactive computational steering, as well as group discussion and presentation. They are also comfortable spaces for promotion, education and outreach. The Visualisation Facilities are based at both the Daresbury and Rutherford Appleton Laboratories and are connected to each other, as well as to the Hartree Centre datastore.

The Visualisation Facilities can be provided with suitable technical support and development skills specific to a project. These spaces include workstations connected to high resolution collaborative displays, and other workstations with very large memory footprints for group and individual data exploration.

A description of some activities in the Hartree Centre and other visualisation facilities across the STFC sites can be found at this external link: http://www.grids.ac.uk/twiki/bin/view/Visualisation/WebHome . This illustrates typical usage of the different rooms. These activities will shortly be complemented by remote visualisation.

Crosfield Quad Wall

  • DL Crosfield - stereoscopic rear projected flat wall: 5.5m by 2.85m; 6.6 mega-pixels blended and integrated into an ideal project review and meeting environment.

The Crosfield room

Leverhulme Curved Wall

  • DL Leverhulme – stereoscopic curved rear projected wall: 10.25m by 2.3m; 15 mega-pixels blended to create one large interactive and collaborative space for data presentation and exploration.

The Leverhulme room

Other visualisation facilities and resources include:

  • Virtual Engineering Centre
  • ATLAS Power Wall
  • TSB Space Catapult
  • Remote access servers

To find out more, please send e-mail to hartree@stfc.ac.uk.

Case Studies

For images and more general information about visualisation facilities and activities at STFC sites, see external link: http://tyne.dl.ac.uk/twiki/bin/view/Visualisation/WebHome .

Remote Visualisation for The Hartree Centre

Introduction

This document examines some of the possible ways to do remote visualisation of scientific calculations on the Hartree HPC systems. In particular, we consider the client-server modes offered by tools such as ParaView and VisIt, along with more general solutions such as VirtualGL and TurboVNC.

The techniques described here have been tested on the Hartree Blue Wonder system GFX login nodes (usually gfxlogin7=193.62.123.7). For security reasons, access to this machine is restricted to SSH connections, so all remote visualisation techniques have to make use of SSH port forwarding. This makes the connections slightly more complex.

In the next section we give a short review of some of the remote techniques that can be used. Then there is a brief section on the hardware and software that have been used in testing. Results are very sensitive to the software, the hardware and the network between the two ends, so measurements can only be taken as a rough guide to the performance that a user will see on their own network.

Finally we give a section which describes in detail how the software was set up and gives some indications of the observed performance.

Overview of selected remote visualisation techniques for Hartree systems

For information about specific software, see the HOWTOs.

Various options can be used to visualise data from large HPC computations:

1. Copy simulation results from HPC to local system and run visualisation software on local system.

Pros:

  • Data is then local and not dependent on network connection or speed.
  • If the local machine has sufficient resources (fast disks, large memory, GPU), response should be good.
  • Any application that runs on the local client can be used.

Cons:

  • Copying the whole data set to the local system may be slow.
  • The local file system is likely to be slower than the HPC GPFS.
  • Inefficient for monitoring an on-going run.
  • Large simulations may not fit on the local system.

Typical scenario:

localhost% scp -r user@193.62.123.3:data data
localhost% paraview -data=datafile

2. Connect with SSH and X11-GLX forwarding to head node or onward to a set of interactive worker nodes.

Pros:

  • Immediate access to the GUI of any tool such as ParaView, VisIt, etc.
  • Access data at the speed of the Hartree GPFS file system.
  • Store data on one or multiple nodes of the Hartree cluster (for software that allows this, e.g. ParaView client-server).
  • Any application that runs on the remote system can be used.

Cons:

  • Drawing over X11-GLX forwarding is very slow due to the verbose nature of the protocol.
  • The default Hartree head node does not support GLX forwarding.
  • Not all Windows X11 clients support GLX forwarding.
  • Need to ensure that appropriate OpenGL libraries are linked into the server applications.
  • Performance may be limited by the local graphics card.

Typical scenario:

localhost% ssh -X user@193.62.123.7
remote% module load use.viz
remote% module load paraview
remote% paraview -data=datafile

We suggest that this approach is not practical on the Hartree systems for large data sets.

3. Use visualisation applications that support client-server operation, e.g. ParaView, VisIt.

Pros:

  • Avoids the verbose X11-GLX protocol; only geometry or image data is sent.
  • Data remains on the remote system.
  • Parallel data reading on the GPFS file system should be faster than most local systems.
  • Potential to use large memory to load data - 64GB is available on each graphics node, and it should be possible to use more than one node together.
  • Remote rendering can use one or both of the high end graphics cards on the graphics nodes.
  • For simple visualisations, e.g. a small set of stream lines about an outline of geometry, the large mesh can be loaded on the remote system and only the line data sent to the client, which does the rendering locally. This should give faster interaction when the power of the remote graphics cards is not required.
  • Does not require that the client support the X11 protocol, e.g. a Windows client does not need X11 support.
  • ParaView client-server supports multiple clients which can view the same data and exchange messages.

Cons:

  • Setting up SSH port forwarding, as needed for the Hartree systems, can be awkward for new users, particularly on Windows systems.
  • A reasonably fast link is needed if complex scenes are being rendered on the remote system.
  • To fully exploit the parallel capabilities of the ParaView server across multiple nodes it may be necessary to rebuild from source with an MPI version that is optimised for the Blue Wonder system.
  • The remote user needs to have a system capable of running the ParaView client, i.e. a Windows, Mac OS X or Linux system.
  • Only certain applications such as ParaView and VisIt can be used in this mode. (VisIt has not been tested.)

Typical scenario - use a different port number (11111 is the default):

# establish an SSH connection with port forwarding
local% ssh -L 11115:localhost:11115 user@gfxlogin7…
remote% module load use.viz
remote% module load paraview
remote% DISPLAY=:0.0 pvserver -sp=11115 --use-offscreen-rendering
# in separate window on local system:
local% paraview
# user then connects to "cs://localhost:11115" to talk to
# pvserver on remote system, load data and visualise

For the pvserver process it is necessary to set the DISPLAY variable so that, if remote rendering is used, it is done on one of the graphics cards of the remote system (:0.0 or :0.1) and not on the local X11 display. This requires the remote X server to allow such access.
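
For example, to render on the second graphics card of the remote node instead, the server can be started with the same port and options as in the scenario above:

remote% DISPLAY=:0.1 pvserver -sp=11115 --use-offscreen-rendering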

Note that it is possible to automate some of these steps, so that the user can just run ParaView on the local system and have a script that starts the server on the remote Hartree system when asked to connect to that machine. It is also possible to run multiple instances of the pvserver process using MPI.
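
As an illustrative sketch only (the MPI launcher name and process count are assumptions and may differ on the Hartree system), a parallel pvserver might be started as:

# illustrative sketch: launcher name and process count are assumptions
remote% module load use.viz
remote% module load paraview
remote% DISPLAY=:0.0 mpirun -np 4 pvserver -sp=11115 --use-offscreen-rendering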

4. Using VirtualGL to run (almost) any OpenGL application with rendering on the remote host and image forwarding to the local client.

Pros:

  • Works for almost any OpenGL application.
  • Client software includes a script to find a free port and establish connections to simplify setup (the user still needs to enable SSH connections).
  • All X11 interactions occur as normal; only the 3D graphics are optimised.
  • Compression speeds transfers on slow links.

Cons:

  • The 2D, non-OpenGL, X11 interaction can be slow.
  • The client must have X11 capabilities enabled and support for SSH with tunnelling - e.g. with Exceed and PuTTY on Windows systems.
  • Client only available on Windows (via Cygwin), Mac OS X and Linux.

See below for SSH config settings to improve performance.

Typical scenario:

export VGL_BINDIR=/gpfs/packages/gcc/viz/virtualgl/2.3.2/bin
# establish an SSH connection with X11 and OpenGL forwarding
local% vglconnect -s user@gfxlogin7…
remote% module load use.viz
remote% module load virtualgl paraview
remote% vglrun -d :0.0 -c jpeg paraview -data=datafile

When finished, log off the remote and run "vglclient -kill" on the local host.

There is an excellent user guide here: external link: http://www.virtualgl.org/vgldoc/2_0/

5. Using VirtualGL with TurboVNC.

(This option is not currently supported.)

Pros:

  • Any system that supports a VNC client should be able to view the output of an OpenGL application on the remote system; there is no need for X11 support or SSH (in principle). An enhanced client, such as the TurboVNC client, is needed for best performance.
  • Can work with the free version of RealVNC for iPad, subject to some limitations mentioned later.
  • Can allow more than one client to view the same visualisation.
  • Can alter the level of compression used to match the speed of the available network connection.

Cons:

  • Whilst a connection can be made to a client without using SSH tunnelling, this will not be encrypted. For a secure connection a tunnelling SSH client may be used; alternatively a VPN connection may be easier and give sufficient security.
  • Neither TurboVNC nor TigerVNC supports clients on Android or iPad devices, which will limit performance on these.
  • By default a user is given a desktop environment, rather than the particular application needed for visualisation.
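
A typical scenario might look like the following sketch. Since this option is not currently supported, the TurboVNC installation path, display number and port below are illustrative assumptions only:

# start a TurboVNC X server on the remote graphics node (display :5 is an assumption)
remote% /opt/TurboVNC/bin/vncserver :5
remote% module load use.viz
remote% module load virtualgl paraview
# run the application against the VNC display, with 3D rendering on the GPU via VirtualGL
remote% DISPLAY=:5 vglrun -d :0.0 paraview
# on the local system, tunnel the VNC port (display :5 corresponds to TCP port 5905)
local% ssh -L 5905:localhost:5905 user@193.62.123.7
# then point a TurboVNC (or other VNC) viewer at localhost:5905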

6. Using ParaView Web interface.

(This option is not currently supported.)

Pros: Allows any web browser to view and interact with data.

Cons: Web interface to ParaView is still in early development phase and lacks features of the full interface.

Testing performance of Hartree Graphics nodes with remote visualisation

Benchmarking the relative performance of these remote display options is difficult due to the large variations possible in network bandwidth and compute and rendering power of remote and local nodes. The type of image to be rendered can also have a great influence on the performance as can the size of the computational data.

Network speeds of 100 to 1000Mb/s are common on local area networks (LANs), with latencies of 1-10ms. Wide area networks (WANs), Internet and wireless connections are likely to see much lower speeds, in the range 0.01 to 150Mb/s, and latencies from 10 to 500ms. Long latencies are a big problem for techniques such as X11 forwarding which send many small messages back and forth.

Tests have been done on three remote systems connected to machines in the Hartree Centre. The first of these is a workstation at RAL with a WAN connection to Hartree with a 100Mb/s limit. The second is a Linux workstation connected to a home broadband system with a reported speed of up to 20Mb/s. The third system is an iPad 2 using a WiFi connection to a local VNC server.

These client systems all have low-end graphics capabilities: the two workstations use basic AMD cards and the iPad uses proprietary hardware.

It is found that any part of the graphics pipeline that cannot be rendered via OpenGL on the remote GPUs will slow down the process, as it is rendered on the local host. To improve performance in this case, enable compression of the X11 traffic: before logging on to gfxlogin7, edit your local ~/.ssh/config to contain the following.
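
A minimal example entry, assuming only the standard OpenSSH Compression option is required (the host alias and address are those used elsewhere in this document), might look like:

# minimal sketch - add or adjust options to suit your local SSH setup
Host gfxlogin7 193.62.123.7
    HostName 193.62.123.7
    Compression yes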

Available graphics hardware on Hartree system

The current visualisation hardware consists of two iDataPlex nodes: gfxlogin7 and gfxlogin8. Each system has: 64 GB RAM; dual Intel E5-2670 CPUs with up to 32 cores in total; dual nVidia Quadro 5000 cards with 2.5GB memory; access to the Hartree GPFS file system.

These nodes act as login nodes for users needing visualisation. SSH access is provided but other ports, both in and out, are closed. Thus remote visualisation will depend on SSH port forwarding at some point.

To access OpenGL features of the hardware it is necessary to have an X11 server running and also to allow users who need visualisation to have access to the server. Normally access is restricted to the user logged in on the console. This has security implications, but these are reduced by limiting the set of users who can access the graphics nodes. A request should be made to Hartree support to have a user ID enabled for access to these nodes.

To check that a graphics card is being used when running an application, rather than software rendering, the command "nvidia-smi" can be used on the remote gfxlogin nodes. This command shows the amount of video memory in use on each GPU, which will increase when OpenGL software uses the GPU.
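
For example, a quick check on one of the gfxlogin nodes (assuming the glxinfo utility is installed) might be:

# report GPU and video memory usage on both Quadro cards
remote% nvidia-smi
# confirm that OpenGL rendering uses the nVidia card rather than software rendering
remote% DISPLAY=:0.0 glxinfo | grep "OpenGL renderer"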

Available remote visualisation software

Software is currently being added to these systems and will be accessed via the Linux module command. To see the current set of available packages, use "module load use.viz" and "module avail" when logged in.
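
For example, when logged in to one of the graphics nodes:

remote% module load use.viz
remote% module avail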

Conclusions

Three methods of using remote rendering on the graphics nodes of the Hartree system have been tested and shown to work for moderate sized test cases. The client-server mode of ParaView and the remote use of VirtualGL work well on reasonably fast connections. For slower connections the use of TurboVNC gives better response than VirtualGL on its own.

The use of multiple servers in client-server mode for ParaView can give increased performance for the calculation of new images, though the gain depends on the type of image rendered.

TurboVNC can be used to send images to devices such as the iPad using RealVNC as a client. However the client does not disconnect cleanly from the server.

Back to Contents Page