Clusters/Partitions overview

Fig. 35: PSMN partitions (2022), a view of the PSMN clusters

Note

Clusters are hardware assemblies (including their dedicated network, etc.). Partitions are logical divisions within clusters.

Cluster E5

Hardware specifications per node:

| login nodes        | CPU Model           | cores | RAM     | ratio       | InfiniBand | GPU | local scratch |
|--------------------|---------------------|-------|---------|-------------|------------|-----|---------------|
| c8220node1         | E5-2670 0 @ 2.6GHz  | 16    | 64 GiB  | 4 GiB/core  | 56 Gb/s    |     |               |
|                    | E5-2667v2 @ 3.3GHz  | 16    | 128 GiB | 8 GiB/core  | 56 Gb/s    |     |               |
|                    | E5-2667v2 @ 3.3GHz  | 16    | 256 GiB | 16 GiB/core | 56 Gb/s    |     |               |
| e5-2667v4comp[1-2] | E5-2667v4 @ 3.2GHz  | 16    | 128 GiB | 8 GiB/core  | 56 Gb/s    |     |               |
|                    | E5-2667v4 @ 3.2GHz  | 16    | 256 GiB | 16 GiB/core | 56 Gb/s    |     |               |
|                    | E5-2697Av4 @ 2.6GHz | 32    | 256 GiB | 16 GiB/core | 56 Gb/s    |     |               |

Hint

Best use case: training, testing, sequential jobs, small parallel jobs (<32c)

  • E5 scratch (general-purpose scratch) is available from all nodes of the partition at the following path:

/scratch/E5N

Partition E5-short

This partition is open to everyone, with a 30-minute walltime limit.

Partition E5

This partition is open to everyone, with a 4-hour walltime limit.

Partition E5-long

This partition is open to everyone, with an 8-day walltime limit.
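
To make the partition names, walltimes, and scratch path above concrete, here is a minimal batch script sketch for a small parallel job on the E5 partition (use E5-short or E5-long and adjust --time for the other limits). The partition names, walltimes, and scratch path come from this page; the job name, resources, per-user scratch directory, and executable are placeholders to adapt:

    #!/bin/bash
    #SBATCH --job-name=e5_small       # placeholder job name
    #SBATCH --partition=E5            # 4-hour walltime limit (see above)
    #SBATCH --ntasks=16               # small parallel job, fits on one 16-core node
    #SBATCH --time=04:00:00           # stay within the partition walltime
    #SBATCH --mem-per-cpu=4G          # matches the 4 GiB/core ratio of the smallest E5 nodes

    cd /scratch/E5N/$USER || exit 1   # assumed per-user directory in the E5 scratch
    srun ./my_program                 # placeholder executable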

Cluster Lake

Partition E5-GPU

Hardware specifications per node:

| login nodes | CPU Model          | cores | RAM     | ratio       | InfiniBand | GPU       | local scratch |
|-------------|--------------------|-------|---------|-------------|------------|-----------|---------------|
| r730gpu01   | E5-2637v3 @ 3.5GHz | 8     | 128 GiB | 16 GiB/core | 56 Gb/s    | RTX2080Ti | /scratch/Lake |

This partition is open to everyone, with an 8-day walltime limit (and yes, these nodes are part of the Lake cluster).

Hint

Best use case: GPU jobs, training, testing, sequential jobs, small parallel jobs (<32c)

  • Choose ‘E5’ environment modules (see Modular Environment); a minimal GPU job script sketch follows this list,

  • Partition E5-GPU has access to the Lake scratch spaces, but has no access to /scratch/E5N (see Partition Lake below),

  • 8 days walltime.
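
A minimal single-GPU batch script sketch for this partition is shown below; the GRES request, per-user scratch directory, and executable are assumptions to adapt to local conventions:

    #!/bin/bash
    #SBATCH --job-name=e5_gpu         # placeholder job name
    #SBATCH --partition=E5-GPU        # RTX2080Ti node, 8-day walltime limit
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=8         # r730gpu01 has 8 cores
    #SBATCH --gres=gpu:1              # assumed GRES name; adjust if the site defines GPU types
    #SBATCH --time=1-00:00:00         # 1 day, within the 8-day limit

    # load the 'E5' environment modules here (see Modular Environment)

    cd /scratch/Lake/$USER || exit 1  # E5-GPU sees the Lake scratch, not /scratch/E5N
    srun ./my_gpu_program             # placeholder executable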

Partition Lake

Hardware specifications per node:

| login nodes     | CPU Model           | cores | RAM     | ratio       | InfiniBand | GPU | local scratch  |
|-----------------|---------------------|-------|---------|-------------|------------|-----|----------------|
| m6142comp[1-2]  | Gold 6142 @ 2.6GHz  | 32    | 384 GiB | 12 GiB/core | 56 Gb/s    |     |                |
|                 | Gold 6142 @ 2.6GHz  | 32    | 384 GiB | 12 GiB/core | 56 Gb/s    |     | /scratch/disk/ |
|                 | Gold 5118 @ 2.3GHz  | 24    | 96 GiB  | 4 GiB/core  | 56 Gb/s    |     |                |
| cl6242comp[1-2] | Gold 6242 @ 2.8GHz  | 32    | 384 GiB | 12 GiB/core | 56 Gb/s    |     |                |
| cl5218comp[1-2] | Gold 5218 @ 2.3GHz  | 32    | 192 GiB | 6 GiB/core  | 56 Gb/s    |     |                |
| cl6226comp[1-2] | Gold 6226R @ 2.9GHz | 32    | 192 GiB | 6 GiB/core  | 56 Gb/s    |     |                |

This partition is open to everyone, with an 8-day walltime limit.

Hint

Best use case: medium parallel jobs (<384c)

  • Lake scratch spaces are available from all nodes of the partition at the following paths:

/scratch/
    ├── Bio
    ├── Chimie
    ├── Lake    (general purpose scratch)
    └── Themiss
  • The ‘Bio’ and ‘Chimie’ scratch spaces are meant for users of the respective labs and teams,

  • The ‘Themiss’ scratch is reserved for Themiss ERC users,

  • The ‘Lake’ scratch is for all users (general-purpose scratch),

  • Nodes c6420node[049-060] and r740bigmem201 have a local scratch on /scratch/disk/ (120-day lifetime residency; request it with --constraint=local_scratch, as in the sketch after this list),

  • Choose ‘Lake’ environment modules (see Modular Environment). See Using X2Go for data visualization for information about the visualization servers,

  • 8 days walltime.
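
As referenced above, a minimal batch script sketch that targets the node-local scratch with --constraint=local_scratch might look like this; the partition name and constraint come from this page, while the per-user/per-job directory layout, file names, and resources are placeholders:

    #!/bin/bash
    #SBATCH --job-name=lake_local_io        # placeholder job name
    #SBATCH --partition=Lake                # 8-day walltime limit
    #SBATCH --ntasks=32                     # one full 32-core node
    #SBATCH --constraint=local_scratch      # land on a node that provides /scratch/disk/
    #SBATCH --time=2-00:00:00

    # stage data on the node-local scratch (120-day lifetime residency);
    # the per-user/per-job subdirectory layout is an assumption, adapt to local policy
    LOCAL=/scratch/disk/$USER/$SLURM_JOB_ID
    mkdir -p "$LOCAL"
    cp /scratch/Lake/$USER/input.dat "$LOCAL"/   # placeholder input file

    cd "$LOCAL" || exit 1
    srun ./my_program input.dat                  # placeholder executable

    cp results.dat /scratch/Lake/$USER/          # copy results back to the shared Lake scratch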

Partition Lake-bigmem

Note

This partition is subject to authorization; open a ticket to justify your need for access.

Hint

Best use case: large-memory jobs (32 cores max), sequential jobs

Hardware specifications per node:

| login nodes | CPU Model           | cores | RAM     | ratio       | InfiniBand | GPU | local scratch  |
|-------------|---------------------|-------|---------|-------------|------------|-----|----------------|
| none        | Gold 6226R @ 2.9GHz | 32    | 1.5 TiB | 46 GiB/core | 56 Gb/s    |     | /scratch/disk/ |

  • Lake scratch spaces are available (see Partition Lake above), as well as a local scratch on /scratch/disk/ (120-day lifetime residency; request it with --constraint=local_scratch); a minimal job script sketch follows this list,

  • 8 days walltime.
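
A minimal large-memory batch script sketch for this partition; the job name, memory request, and executable are placeholders to adapt:

    #!/bin/bash
    #SBATCH --job-name=bigmem         # placeholder job name
    #SBATCH --partition=Lake-bigmem   # access subject to authorization (see the note above)
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=32        # 32 cores max on these nodes
    #SBATCH --mem=1000G               # explicit memory request (nodes have 1.5 TiB); adapt to your needs
    #SBATCH --time=3-00:00:00

    srun ./my_memory_hungry_program   # placeholder executable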

Partition Lake-premium

This partition is open to everyone, with an 8-day walltime limit, but it runs in preemption mode (see Preemption mode): jobs with high priority will requeue jobs with normal priority. It is ideal for burst campaigns (lots of short jobs) and for workflows with multiple jobs; a requeue-friendly job script sketch is given at the end of this section.

Hint

Best use case: sequential jobs, medium parallel jobs (<384c)

Hardware specifications per node:

| login nodes     | CPU Model          | cores | RAM     | ratio       | InfiniBand | GPU | local scratch |
|-----------------|--------------------|-------|---------|-------------|------------|-----|---------------|
| cl6242comp[1-2] | Gold 6242 @ 2.8GHz | 32    | 384 GiB | 12 GiB/core | 56 Gb/s    |     |               |

  • Lake scratch spaces are available (see Partition Lake above),

  • 8 days walltime.
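
Because normal-priority jobs on this partition can be requeued, job scripts should tolerate being restarted. A minimal sketch, assuming the preemption mode requeues jobs as described above (the resume flag is only a placeholder; checkpointing is up to the application):

    #!/bin/bash
    #SBATCH --job-name=premium_burst  # placeholder job name
    #SBATCH --partition=Lake-premium  # preemptable: normal-priority jobs may be requeued
    #SBATCH --ntasks=32
    #SBATCH --time=02:00:00           # short jobs suit burst campaigns
    #SBATCH --requeue                 # let Slurm requeue this job if it gets preempted

    # a requeued job restarts from the beginning, so the program should be able to
    # resume from its own checkpoint files
    srun ./my_program --resume-if-checkpoint-exists   # placeholder executable and flag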

Cluster Epyc

Hardware specifications per node:

| login nodes | CPU Model          | cores | RAM     | ratio      | InfiniBand | GPU | local scratch |
|-------------|--------------------|-------|---------|------------|------------|-----|---------------|
| none        | EPYC 7702 @ 2.0GHz | 128   | 512 GiB | 4 GiB/core | 100 Gb/s   | N/A | None          |

Partition Epyc

This partition is open to everyone, with an 8-day walltime limit.

Hint

Best use case: large parallel jobs (>256c)

  • There is no login node available in the Epyc partition at the moment; use an Interactive session for builds and tests (see the sketch after this list),

  • The Epyc partition has access to the Lake scratch spaces (see Partition Lake above),

  • There are no Epyc-specific environment modules (use the ‘E5’ ones),

  • 8 days walltime.
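
As mentioned in the list above, builds and tests on Epyc go through an interactive session; a minimal sketch (resource values are placeholders) is:

    # request an interactive shell on an Epyc node for one hour; the --pty bash form is
    # a generic Slurm idiom -- follow the site's Interactive session procedure if it differs
    srun --partition=Epyc --ntasks=1 --cpus-per-task=8 --time=01:00:00 --pty bash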

Cluster Cascade

Hardware specifications per node:

| login nodes | CPU Model              | cores | RAM     | ratio      | InfiniBand | GPU | local scratch |
|-------------|------------------------|-------|---------|------------|------------|-----|---------------|
| s92node01   | Platinum 9242 @ 2.3GHz | 96    | 384 GiB | 4 GiB/core | 100 Gb/s   | N/A | None          |

Partition Cascade

This partition is open to everyone, with an 8-day walltime limit.

Hint

Best use case: sequential jobs, large parallel jobs (>256c)

  • Cascade scratch spaces (including the general-purpose scratch) are available from all nodes of the partition at the following paths:

/scratch/
    ├── Cascade     (general purpose scratch)
    └── Cral
  • The ‘Cral’ scratch is meant for CRAL users and teams,

  • Choose ‘Cascade’ environment modules (see Modular Environment); a minimal multi-node job script sketch follows this list,

  • 8 days walltime.
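
As referenced above, a minimal large parallel (multi-node) batch script sketch for this partition; the node count, per-user scratch directory, and executable are placeholders to adapt:

    #!/bin/bash
    #SBATCH --job-name=cascade_mpi    # placeholder job name
    #SBATCH --partition=Cascade       # 96 cores per node, 8-day walltime limit
    #SBATCH --nodes=3                 # 3 x 96 = 288 cores, i.e. a >256c job
    #SBATCH --ntasks-per-node=96
    #SBATCH --time=1-00:00:00

    # load the 'Cascade' environment modules here (see Modular Environment)

    cd /scratch/Cascade/$USER || exit 1   # assumed per-user directory in the Cascade scratch
    srun ./my_mpi_program                 # placeholder MPI executable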

Partition Cascade-premium

Hint

Best use case: sequential jobs, large parallel jobs (>256c)

Same as partition Cascade above, this partition is open to everyone with an 8-day walltime limit, but it runs in preemption mode (see Preemption mode): jobs with high priority will requeue jobs with normal priority. It is ideal for burst campaigns (lots of short jobs) and for workflows with multiple sequential jobs.

Partition Cascade-GPU

Hardware specifications per node:

| login nodes | CPU Model              | cores | RAM     | ratio      | InfiniBand | GPU | local scratch |
|-------------|------------------------|-------|---------|------------|------------|-----|---------------|
| none        | Platinum 9242 @ 2.3GHz | 96    | 384 GiB | 4 GiB/core | 100 Gb/s   | L4  | None          |

As with partition Cascade above, this partition is open to everyone, with an 8-day walltime limit.

Hint

Best use case: GPU jobs, training, testing, sequential jobs, large parallel jobs (>256c)
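
A GPU job here can be submitted much like the E5-GPU sketch above, changing only the partition name; for a quick interactive test, a hedged one-liner might be:

    # assumed GRES syntax; if the site defines GPU type names, adjust the --gres value accordingly
    srun --partition=Cascade-GPU --gres=gpu:1 --cpus-per-task=8 --time=00:30:00 --pty bash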