Login nodes

Important

Login nodes are not for computing

Login nodes are shared among many users and therefore must not be used to run computationally intensive tasks. Those should be submitted to the scheduler, which will dispatch them on compute nodes.

Note

Dos and Don’ts

  • Avoid running calculations (with large IOPS) on the home disk,

  • Always use the queueing system,

  • Login nodes are for editing files, transferring files and submitting jobs,

  • Login nodes should be used to build specific binaries,

  • Do not run calculations interactively on the login nodes,

  • Offenders’ processes will be killed without warning!

The key principle of a shared computing environment is that resources are shared among users and must be scheduled. On PSMN’s clusters, it is mandatory to schedule work by submitting jobs to the scheduler. Since login nodes are shared resources, they must not be used to execute computing tasks.

Acceptable uses of login nodes include:

  • file transfers and file manipulations (compression, decompression, etc.),

  • script and configuration file editing,

  • binary building, compilation,

  • short binary tests with small input/output (10-20 min CPU time),

  • lightweight workflow tests on small datasets (10-20 min CPU time),

  • job submission and monitoring.
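The rules above boil down to: do everything heavy through the scheduler. As a minimal sketch, assuming the clusters run Slurm (the job name, partition name and script contents here are hypothetical placeholders), a batch script looks like:

```shell
#!/bin/bash
#SBATCH --job-name=demo        # hypothetical job name
#SBATCH --partition=Lake       # assumption: a partition named after the cluster
#SBATCH --ntasks=1
#SBATCH --time=00:10:00

# placeholder for the real computation
echo "job running on $(hostname)"
```

Such a script would be submitted from a login node with `sbatch demo.sh` and monitored with `squeue -u $USER` (standard Slurm commands).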

Tip

You can submit jobs from any login node to any partition. Login nodes are only segregated for build (CPU microarchitecture) and scratch access.

Warning

E5 cluster End Of Life: it was powered off at the end of April 2025.

Here is the list of login nodes:

Cluster    login/build nodes                                                   main Scratch
None       x5570comp[1-2]                                                      None
None       e5-2667v4comp[1-2]                                                  None
Lake       m6142comp[1-2], cl5218comp[1-2], cl6242comp[1-2], cl6226comp[1-2]   /scratch/Lake
E5-GPU     r730gpu01                                                           /scratch/Lake
Cascade    s92node01                                                           /scratch/Cascade

For example, to access /scratch/Lake/, you have to log in to one of the Lake login nodes (m6142comp[1-2], cl5218comp[1-2], cl6242comp[1-2] or cl6226comp[1-2]).
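A quick way to confirm, from a shell, that the scratch you need is reachable from the node you are on (a minimal sketch; the path comes from the table above):

```shell
# path taken from the login-node table; adjust for your cluster
scratch=/scratch/Lake

if [ -d "$scratch" ]; then
    echo "scratch available: $scratch"
else
    echo "scratch not mounted here; use a Lake login node instead"
fi
```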

Login nodes without cluster

login nodes          CPU Model            cores   RAM       ratio        infiniband   GPU   local scratch
x5570comp[1-2]       X5570 @ 2.93GHz      8       24 GiB    3 GiB/core   N/A          N/A   N/A
e5-2667v4comp[1-2]   E5 2667v4 @ 3.2GHz   16      128 GiB   8 GiB/core   N/A          N/A   N/A

These login nodes are the oldest. They are not connected to any cluster. They are usable for anything, INCLUDING non-optimized builds (job monitoring, file operations [editing, compression/decompression, copy/move, etc.], short tests, etc.). See also Clusters/Partitions overview.

Binaries built on these, with system defaults (no loaded modules), should run on all partitions. Min/Max GCC tuning: -mtune=generic -O2 -msse4a.


Visualization nodes

See Using X2Go for data visualization for the connection manual.

Cluster   login/build nodes   CPU family        RAM       Network    main Scratch       GPU
None      r740visu            Lake (8 cores)    192 GiB   56 Gb/s    None               Quadro P4000 8 GB
Lake      r740gpu0[6-7]       Lake (16 cores)   384 GiB   56 Gb/s    /scratch/Lake      T1000 8 GB
Cascade   r740gpu0[2-3,8-9]   Lake (16 cores)   192 GiB   100 Gb/s   /scratch/Cascade   T1000 4 GB/8 GB

  • Nodes r740gpu0[6-9] have an additional local scratch on /scratch/local/ (120-day lifetime residency),

  • Nodes r740gpu0[2-3] have a T1000 4 GB GPU,

  • Some visualization servers are not listed here due to restricted access. Contact PSMN’s Staff, or your PSMN group correspondent, for more information.