Training

Training and Events

Last modified: 4/2/2016

For an up-to-date list of forthcoming events, please see https://eventbooking.stfc.ac.uk. Before attending a course, please ensure that you have registered, and that you have signed up for an account with the SAFE user management system as explained in the SAFE User Guide. You will be given the relevant project name and access code when signing up for the course.

Booking Events

From 2/2/2016, there is a new procedure in place for making training and event bookings with the Hartree Centre. This includes all events that require the Brunner-Mond Training Suite or the Visualization Suites at Daresbury.

The following procedure should be followed at ALL times when planning a training course, workshop, conference, summer school or short course.

  1. The first step is to complete the online event proposal form: https://www.surveymonkey.com/r/?sm=9TtH3kVBEP0moFhtvSZZjVJrqDZ8lmdLRAryS6RAr8M%3d. You should complete this step at the very earliest stages of your planning to ensure that your event is logged and that we are aware that your event will be taking place. It doesn't matter at this stage if you do not have all the information about your event.
  2. Once your event proposal is received, I will make any room reservations that are required and will schedule an initial planning meeting with the event organisers. At this initial meeting we will discuss your requirements in terms of local support: registration pages, refreshment bookings, conference dinners, hotels, on-the-day support and so on.
  3. Following the initial planning meeting, a follow-up meeting will be scheduled if you are using the Brunner-Mond Training Suite. This meeting will be more technical and will include representatives from the High Performance Systems team. At this meeting you will be asked to confirm exactly what you require for your event in terms of software installations and systems access.
  4. Once the planning meetings are complete, we will send you a completed implementation plan that outlines everything that has been agreed. This is an "unofficial" contract for your event and is what we will use for planning purposes. The important part of this document is page 3, which lists the deadlines for providing the different pieces of information that the Systems team need. For training courses, please pay special attention to the 10-working-day rule for installing software on the workstations. This 10-day deadline will be adhered to at all times!

Brunner-Mond Training Suite

The training suite in the Brunner-Mond room is a 150 sq.m space with several rows of desks and a total of 55 workstations, normally running CentOS Linux. There are two presenters' areas, each with its own workstation, a projector and a wall-mounted screen. This space is primarily designed as an area for hands-on, computer-based training. The Brunner-Mond room is adjacent to the Crosfield quad-wall visualisation suite, which can also host around 30 people. There are separate breakout rooms for refreshments, and a restaurant nearby. There are also separate facilities for seminars, lectures and workshops in the same building.

If you are visiting for a training event the address is:

The Hartree Centre,
Daresbury Laboratory,
Sci-Tech Daresbury,
Warrington WA4 4AD, UK

A PDF map and travel directions can be downloaded here.

The Brunner-Mond suite is in A-Block to which you will be directed by security staff on arrival.

If you wish to book the training facilities (or other facilities at the Hartree Centre), please read the instructions for initial contact at http://www.stfc.ac.uk/3446.aspx and then use the on-line Event Proposal Form linked from that page.

Pre-requisites

You should already have signed up for a Hartree Centre training account and been allocated a userid of the form xxxyy-dxp01. If you do not have this, contact a course tutor as soon as possible. You cannot log onto the compute resources without this userid and a course password, which will be provided when you arrive. You should be familiar with the basic commands of the Linux operating system.

Training Workstations

The Brunner-Mond room contains 50 Dell workstations running Linux. You can log onto any one of these. The workstations have a minimal software installation: just what is necessary for the course.

File Systems

The workstations are connected to a file server and share a file system mounted on /gpfs/home/training. You can log onto any workstation and your home directory will be /gpfs/home/training/dxp01/xxxyy-dxp01.

On the iDataPlex cluster your home directory is also /gpfs/home/training/dxp01/xxxyy-dxp01, although this is not the same GPFS file system, so files have to be copied between the two.

To copy files from the iDataPlex to the workstation, first open a local terminal on the workstation by right-clicking on the desktop background and selecting "Open in Terminal", then do the following:

cd ~                       # start in your home directory on the workstation
scp -r idpxlogin3:* .      # recursively copy everything from your iDataPlex home directory

or

cd ~
rsync -uav idpxlogin3:* .  # like scp, but only transfers files that are new or have changed

Alternatively, you can mount the remote file system using Gnome Connect as follows.

  • Go to "Places" in the top left hand menu bar
  • Click on "Connect to Server" and select the following options:
    • Protocol: SSH
    • Host: idpxlogin3
    • File: /gpfs/home/HCT00003/dxp12/shared (or whatever your course is using)
  • You do not need your userid as that is set automatically.

This will open a window. You can then click on the directory and select "Browse contents".

We have installed some documents and applications under the course shared area, e.g. /gpfs/home/HCT00003/dxp12/shared. These will be mentioned in the tutorials. They include general information, user manuals for the applications to be run on the iDataPlex, and visualisation utilities to check the results. They may also include executable applications which can be run on the workstations, particularly for visualisation of output data.

To see PDF documents, e.g. the Hartree Centre User Guide:

evince /gpfs/home/HCT00003/dxp12/shared/Hartree/<chapter>.pdf

Internet Access

Access to the Internet has been enabled via a Web proxy (mgoth123.hartree.stfc.ac.uk). This should be transparent to users. From a local terminal window, type "firefox" to start the browser and navigate to a URL. The Hartree Centre User Guide can be found here: http://community.hartree.stfc.ac.uk/wiki/site/admin/home.html .

In case you need to know the proxy server and port, it is http://mgoth123.hartree.stfc.ac.uk:3128.
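
Most applications on the workstations should pick this proxy up automatically. If a command-line tool run from a local terminal does not, a common way to point it at the proxy for the current shell session is via the standard proxy environment variables (a generic convention, not a Hartree-specific requirement):

export http_proxy=http://mgoth123.hartree.stfc.ac.uk:3128     # proxy for HTTP traffic
export https_proxy=http://mgoth123.hartree.stfc.ac.uk:3128    # proxy for HTTPS traffic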

We will seek feedback following the course, in particular about ease of use of the resources.

Blue Wonder - the IBM iDataPlex Cluster

To log onto Blue Wonder either open the window with the iDataPlex icon, or use an ssh command from a local terminal, such as the one shown below (an illustrative form; substitute the hostname and userid given for your course):
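
ssh -X xxxyy-dxp01@idpxlogin3    # idpxlogin3 and the userid are examples taken from elsewhere in this guide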

The "-X" should permit X11 forwarding. You can right click on the icon window, hit "properties" and then edit the property line to include "-X", then close the window.

Similarly you can log on from an external system, e.g. when you return home, using your course userid.

Blue Wonder, the iDataPlex cluster, is an HPC resource with approximately 512 nodes connected to an InfiniBand backplane and a GPFS file system. Each node has two Intel Sandy Bridge processors with a total of 16 cores sharing 32 GB of memory. Each node is effectively a stand-alone shared-memory computer running the Linux operating system, and is capable of running a task with 32 computational threads. To run a job across more than one node requires the use of message-passing software such as MPI. Virtual Shared Memory machines (using ScaleMP software) are also available for larger multi-threaded tasks. We run dedicated courses on using OpenMP, MPI and ScaleMP.

The login node you have logged into is similar to a compute node, but is only used for management of files and data and for compilation. To run applications you will need to submit a job to the batch queue which ensures a fair share of the resources. For training purposes we have configured a special batch queue for short jobs.

Basic LSF Commands

The iDataPlex uses the LSF batch system. To see what queues are available, type “bqueues”. When you submit a job, the appropriate queue will be selected automatically based on your job script.

To submit a job to the queue, use “bsub < testjob.bsub”, where it is assumed that you have created a job specification script called “testjob.bsub” (see the example below). You can use any name for the file. Remember to use the redirect symbol (<) so that bsub reads the file from standard input.
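
As a guide, a minimal job script might look like this (a sketch only: the core count, time limit, module name and program name are illustrative, and should be adjusted for your course):

#!/bin/bash
#BSUB -J testjob              # job name
#BSUB -o testjob.%J.out       # standard output file (%J expands to the job id)
#BSUB -e testjob.%J.err       # standard error file
#BSUB -n 16                   # number of cores requested
#BSUB -W 00:30                # wall-clock limit (hours:minutes)

# load any modules the application needs (the module name here is hypothetical - check "module avail")
module load intel_mpi
mpirun ./myprogram            # the MPI launcher to use depends on the module loaded

Submit it with “bsub < testjob.bsub” and monitor it with the bjobs commands described below.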

To see all current jobs in the system type: “bjobs -u all -w”.

To see what jobs you have running and their state, type: “bjobs” or “bjobs -w”. More information is available with: “bjobs -l” (the long option) or "bpeek" to see the state of the job output files.

You can check scheduling information (perhaps if your job is showing with status "SSUSP") with: “bjobs -s <jobid>”.

To kill a job use “bkill <jobid>”. As you might expect, you can only kill your own jobs.

For more information about LSF, try “man bsub” and “man bjobs”, etc. For more information about using the iDataPlex, see Chapter 7 of the Hartree Centre User Guide: http://community.hartree.stfc.ac.uk/wiki/site/admin/jobs2.html .

Installed Packages

There are a number of pre-installed packages on the iDataPlex. To see a list of them type:

module avail
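
To make one of these packages available in your session, load its module. For example (the module name here is purely illustrative - pick one from the "module avail" listing):

module load intel        # hypothetical module name; use a name shown by "module avail"
module list              # confirm which modules are currently loaded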

For more information about installed software and environment modules, see Chapter 10 of the Hartree Centre User Guide: http://community.hartree.stfc.ac.uk/wiki/site/admin/managed%20software.html .

Blue Joule - the IBM Blue Gene/Q

To log onto the IBM Blue Gene/Q either open the window with the Blue Gene icon, or use an ssh command from a local terminal, such as the one shown below (an illustrative form; substitute the hostname and userid given for your course):
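
ssh -X xxxyy-dxp01@bglogin2    # illustrative only: the bglogin2 hostname is inferred from the example jobid later in this section, and the userid is an example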

The "-X" should permit X11 forwarding.

Similarly you can log on from an external system, e.g. when you return home, using your course userid.

The Blue Gene/Q is an HPC resource with approximately 6,000 nodes connected to an InfiniBand backplane and a GPFS file system. Each node has a Blue Gene PowerPC 64 processor with a total of 16 cores sharing 16 GB of memory. Each node is effectively a stand-alone shared-memory computer with a reduced Linux operating system (the Compute Node Kernel), and is capable of running a task with 64 computational threads. To run a job across more than one node requires the use of message-passing software such as MPI.

The login node you have logged into is a Front End Node with full PowerPC 64 processors, but is only used for management of files and data and for compilation. To run applications you will need to submit a job to the LoadLeveler batch queue which ensures a fair share of the resources. This will be explained during the course.

Basic LoadLeveler Commands

The Blue Gene/Q uses the IBM LoadLeveler batch system. To see what queues are available, type “llclass”. When you submit a job, the appropriate queue will be selected automatically based on your job script.

To submit a job to the queue, use “llsubmit testjob.ll”, where it is assumed that you have created a job specification script called “testjob.ll” (see the example below). You can use any name for the file.
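
As a guide, a minimal Blue Gene/Q job command file might look like this (a sketch only: the block size, rank counts and program name are illustrative, and should be adjusted for your course):

# Minimal LoadLeveler command file (illustrative values)
# @ job_name = testjob
# @ job_type = bluegene
# @ bg_size = 64
# @ output = $(job_name).$(jobid).out
# @ error = $(job_name).$(jobid).err
# @ wall_clock_limit = 00:30:00
# @ queue

# runjob is the Blue Gene/Q application launcher: 64 nodes x 16 ranks per node = 1024 MPI ranks
runjob --np 1024 --ranks-per-node 16 : ./myprogram

Submit it with “llsubmit testjob.ll” and monitor it with the llq commands described below.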

To see all current jobs in the system type: “llq -X all”.

To see what jobs you have running and their state, type: “llq -u xxxyy-dxp05”. More information is available with: “llq -l bglogin2.17488.3” (the long option) - remember to substitute your own jobid for "bglogin2.17488.3".

To kill a job use “llcancel <jobid>”. As you might expect, you can only kill your own jobs.

For more information about LoadLeveler, try “man llsubmit” and “man llq”, etc. For more information about using the Blue Gene/Q, see Chapter 6 of the Hartree Centre User Guide: http://community.hartree.stfc.ac.uk/wiki/site/admin/jobs.html .

Installed Packages

There are a number of pre-installed packages on the Blue Gene/Q. To see a list of them type:

module avail

For more information about installed software and environment modules, see Chapter 10 of the Hartree Centre User Guide: http://community.hartree.stfc.ac.uk/wiki/site/admin/managed%20software.html .

Hartree Community Portal

More information is available via the Hartree Centre Community Portal: http://community.hartree.stfc.ac.uk . This contains all information relevant to users of the Hartree Centre. It is divided functionally into "sites", which are presented as tabs to registered portal users; for instance, you should see a tab for your course or project if on-line support is provided.

The User Guide is here: http://community.hartree.stfc.ac.uk/wiki/site/admin/home.html .

You should look at the sections on iDP Jobs and Managed Software. Your course tutors will be able to provide further help.

Logging onto the Portal

You may have been provided with a portal account, for instance if your course has related on-line material. To log on:

  1. Go to http://community.hartree.stfc.ac.uk
  2. Type your e-mail address as your login id (the same one you used to sign up with the SAFE system)
  3. Type the password that you should have been given

You will see a number of tabs such as:

  • My Workspace - your personal space, where you can change your account details and edit your profile;
  • Hartree Community - a community space for sharing information and making contacts;
  • Portal Help - guidance on using the Sakai portal;
  • SC2013 - a typical course site.

[Screenshot: the Sakai portal showing the tabs described above]