Interactive access
System access
Warning
Please refer to the internal technical documentation for information about this subject.
Login nodes usage
When you connect to the supercomputer, you are directed to one of the login nodes of the machine. Many users are simultaneously connected to these login nodes. Since their resources are shared, it is important to follow some rules of good practice.
Usage limit on login nodes
Login nodes should only be used to interact with the batch manager and to run lightweight tasks. As a rule of thumb, any process or group of processes that would use more CPU power and/or memory than is available on a basic personal computer should not be executed on a login node. For more demanding interactive tasks, you should allocate dedicated resources on a compute node.
To ensure a satisfying experience for all users, offending tasks are automatically throttled or killed.
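If you want to check what your own processes are currently consuming on a login node, standard tools are enough; for instance:
$ top -u $USER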
Interactive submission
There are two ways of accessing computing resources without using a submission script. ccc_mprun -K creates the allocation and the job environment while you remain on the login node; it is useful for MPI tests. ccc_mprun -s opens a shell on a compute node; it is useful for sequential or multi-threaded work that would be too costly for the login node.
Allocate resources interactively (-K)
It is possible to work interactively on allocated compute nodes thanks to the -K option of ccc_mprun:
$ ccc_mprun -p partition -n 8 -T 3600 -A <project> -K
This command will create an allocation and start a job.
$ echo $SLURM_JOBID
901732
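To check that the allocation is indeed running, you can list your jobs; on TGCC machines this is typically done with ccc_mpp (the user filter shown here is an assumption):
$ ccc_mpp -u $USER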
Within this reservation, you can run job steps with ccc_mprun. Since the compute nodes are already allocated, you will not have to wait.
$ ccc_mprun hostname
node1569
node1569
node1569
node1569
node1569
node1569
node1569
node1569
Note
At this point, you are still connected to the login node. You cannot run your code directly or with mpirun; you have to use ccc_mprun!
Such an allocation is useful when developing an MPI program, as it lets you run ccc_mprun tests quickly and repeatedly within a short period.
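For example, within the allocation above you can rerun the same test with different process counts, then release the resources with ccc_mdel once you are done (my_mpi_app is a placeholder for your own MPI binary):
$ ccc_mprun -n 2 ./my_mpi_app
$ ccc_mprun -n 8 ./my_mpi_app
$ ccc_mdel $SLURM_JOBID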
You can use the usual ccc_mprun options, as described in ccc_mprun options.
Working on a compute node (-s)
To work directly on the allocated compute nodes, you need to open a shell inside a SLURM allocation. This is possible thanks to the -s option:
$ ccc_mprun -p partition -c 16 -m work,scratch -s
[node1569 ~]$ hostname
node1569
In this case, the 16 cores of node node1569 are allocated to you, and you can run freely on all of them without disrupting other jobs. As with any allocation, the computing hours will be accounted to your project. In any case, you need to specify the filesystems you want to access, using the -m option followed by a comma-separated list of filesystems.
Note
- You cannot use multiple nodes in this mode. You are limited to the number of cores in one node.
- When you no longer need it, do not forget to stop the interactive session with Ctrl-D or exit 0.
- The computing hours spent in the interactive session will be withdrawn from your project quota.
- You can shorten the wait for the allocation by using the “test” QOS (see the example after this note).
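For instance, the interactive shell above could be requested with the “test” QOS as follows (assuming the QOS is passed with the -Q option, as for batch submissions):
$ ccc_mprun -p partition -c 16 -m work,scratch -Q test -s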
This is typically used for occasional, resource-intensive tasks such as compiling a large code on 16 cores, post-processing, or aggregating output data.
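As an illustration, a typical interactive build session might look like this (the make invocation is only an example):
[node1569 ~]$ make -j 16    # parallel build on the 16 allocated cores
[node1569 ~]$ exit          # release the allocation when done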
At any point, you can use the hostname command to check whether you are on a compute node or on a login node.
You can use the usual ccc_mprun options, as described in ccc_mprun options.
Remote Desktop System service (NiceDCV)
The Remote Desktop System service on the TGCC supercomputer is based on the NiceDCV solution. NiceDCV provides a web site from which you can launch the booked graphical session.
Book a session
There are two types of sessions: a “virtual” session and a “console” session.
- The virtual session uses the GNOME Display Manager (GDM), unlike the console session. By default, a virtual session is launched on a single GPU rather than on all the GPUs of the node. It is therefore possible to launch as many virtual sessions in parallel as there are GPUs available on the node. Moreover, several users can launch a virtual session on the same node, provided they are on the same container.
The following command books a virtual session:
$ ccc_visu virtual -p partition
- The console session grants node exclusivity, i.e. all the GPUs on the node can be used inside the session. Visualization performance can be better than in a virtual session (almost twice as fast).
The following command books a console session:
$ ccc_visu console -p partition
Session access
Once the reservation is booked with one of the above commands, a URL is given on the standard output. To access the graphical session, open this URL in your browser. A page opens in which you are asked to enter your Irene credentials. Then, to access the allocated session, you have two options:
- The first client (on the left) opens the session inside the browser. You need to enter your Irene credentials again.
- The second client (on the right) can be downloaded and installed locally. However, you need root access on your machine to install it.
Moreover, as long as the allocation made with the ccc_visu command has not expired or been cancelled, the session is not affected if the client is closed. A session lasts as long as defined by the -T option (in seconds), or 2 hours by default. The graphical session expires with the corresponding job and is not available afterwards. Besides, the session can be closed from the terminal from which it was started, using Ctrl-D or exit, or by ending the corresponding job with the ccc_mdel command.
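For example, a four-hour virtual session could be booked, and later ended before its time limit, as follows (the job id 901732 is illustrative):
$ ccc_visu virtual -p partition -T 14400
$ ccc_mdel 901732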