GRISU' Open Day su Scienza ed Ingegneria dei Materiali e Grid
2009-04-03, Napoli
ENEA Grid and CRESCO in GRISU': a tool for materials science
Outline:
• ENEA Grid / CRESCO infrastructures
• Computational capability
• Some examples of materials science codes running on ENEA Grid

Presented by: S. Raia(1), S. Migliori(1)
Collaborations: M. Celino(1), M. Gusso(1), P. Morvillo(1), A. Marabotti(2), L. Cavallo(3), F. Ragone(3)
(1): ENEA: Sede/Portici/Casaccia/Brindisi
(2): ISA CNR, Avellino
(3): UNISA, Salerno
ENEA Grid infrastructures [1]
ENEA:
- 12 research centers in Italy
- A central computer and network service (INFO)
- 6 computer centres: Casaccia, Frascati, Bologna, Trisaia, Portici, Brindisi
- Multiplatform resources for serial & parallel computation and graphical post-processing
- Other computer resources in ENEA: departments & individuals
ENEA Grid infrastructures [2]
Main features:
- Access from any kind of connection
- Data sharing across worldwide areas (the AFS geographical file system)
- Access to the data from any kind of digital client device
- Running any kind of program
- Access to national and international grids

For each computational site, ENEA manages the local network (LAN), whereas the computational centres are interconnected over the wide area network (WAN) via the "Consortium GARR" network.
ENEA Grid computational resources [3]

O.S.               | #CPU/Core | Gflops | Where
AIX                | >300      | 3000   | Frascati(258), Bologna(24), Portici(18), Brindisi(2)
Linux x86 32/64    | >3000     | 25000  | Frascati(140), Casaccia(54), Portici(2700), Trisaia(20), Brindisi(84)
Linux IA64 (Altix) | 64        | 300    | Casaccia
IRIX               | 26        | 40     | Frascati(8), Casaccia(4), Portici(1), Trisaia(8), Brindisi(1), Bologna(5)
Solaris            | 8         | 10     | Trisaia(4), Casaccia(2), Bologna(2)
Windows 32/64      | 46        | 100    | Frascati(4), Portici(34), Trisaia(4), Brindisi(4)
Mac OS X           | 14        | 60     | Frascati(1), Trisaia(13)
ENEA Grid architecture & resources integration [4]
A choice of mature components for reliability and ease of support and maintenance:
- Distributed file system: AFS
- Job and resources manager: LSF Multicluster
- Unified GUI access: Java and Citrix technologies
- Quality monitoring system: Patrol
Possible integration with other institutions, and with department and individual resources, via the AFS distributed file system and licence pool sharing.
[Diagram: the ENEA-GRID layered architecture (Application, Collective, Resource, Connectivity, Fabric layers) integrating users, software catalogs, licence servers, computers and data archives; see www.afs.enea.it/project/eneagrid]
CRESCO (Portici site) computational resources [5]

Interconnect: InfiniBand 4X DDR.

Section 1 (Large Memory): 42 IBM x3850-M2 SMP nodes, each with 4 quad-core Xeon Tigerton E7330 CPUs (32/64 GByte RAM, 2.4 GHz / 1066 MHz / 6 MB L2), for a total of 672 Intel Tigerton cores.

Section 2 (High Parallelism): 256 IBM HS21 blade nodes, each with 2 quad-core Xeon Clovertown E5345 CPUs (2.33 GHz / 1333 MHz / 8 MB L2) and 16 GB RAM, for a total of 2048 Intel Clovertown cores.

Section 3 (Special): 4 IBM QS21 blade nodes with 2 Cell BE processors at 3.2 GHz each; 6 IBM x3755 nodes (8-core AMD 8222) with a VIRTEX5 FPGA board; 4 IBM x3755 nodes (8-core AMD 8222) with NVIDIA Quadro FX 4500 X2 boards; 4 Windows nodes with 8 cores and 16 GByte RAM.

Storage and backup: GPFS servers (3 + 4 IBM 3650 nodes, FC-attached, on the InfiniBand fabric); a high-speed disk system (2 GByte/s, 160 TByte, IBM/DDN 9550); a backup system with an IBM TS3500 tape library with 4 drives (300 TByte).

Service nodes: 35 nodes acting as front-end, installation, AFS, … servers.

Networking: dual 1 Gbit interconnection plus a 1 Gbit management network; the Portici LAN is connected to the GARR network (WAN).
ENEA Grid / CRESCO capability for materials science codes:
• Multicore platforms (CRESCO Linux, SP AIX): 8, 16, 32, 48-core SMP nodes
• GPFS / InfiniBand for highly parallel codes
• Hybrid parallelism (CRESCO SMP nodes with IB interconnections)
• Distributed parallel jobs across the WAN or heterogeneous platforms (parallel job arrays via the AFS 'sysname' mechanism; see the sketch after this list)
• NVIDIA / Cell processor / FPGA accelerators (…next)
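The job arrays mentioned above are driven through LSF Multicluster, the resource manager named on the architecture slide. Below is a minimal Python sketch of submitting such an array; the queue name, job name and worker script are hypothetical placeholders, and only LSF's standard job-array syntax is assumed:

```python
# Minimal sketch (not from the slides): submitting a parallel job array
# to LSF. Queue name "cresco" and script path are hypothetical.
import subprocess

def submit_array(n_jobs: int, n_cores: int) -> None:
    """Submit an LSF job array of n_jobs tasks, each on n_cores cores."""
    cmd = [
        "bsub",
        "-J", f"matsci[1-{n_jobs}]",   # standard LSF job-array syntax
        "-n", str(n_cores),            # cores per array element
        "-q", "cresco",                # hypothetical queue name
        "-o", "out.%I",                # %I expands to the array index
        "./run_case.sh",               # worker reads LSB_JOBINDEX to pick its input
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    submit_array(n_jobs=100, n_cores=8)
```

Each array element sees its index in the LSB_JOBINDEX environment variable, which the worker script can map to a different input case.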
Some codes running in the ENEA environment:
CPMD
Gromacs
PC GAMESS
Gaussian
… others: ABINIT, Amber, CP2K, etc.
Gromacs on ENEA Grid (CRESCO)
Protein in a box of water, at ISA CNR, Avellino: A. Marabotti

GROMACS is a versatile package to perform molecular dynamics, i.e. to simulate the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins and lipids that have many complicated bonded interactions, but it can also be used for research on non-biological systems, e.g. polymers.

Dimeric protein + water + ions: ~77000 atoms; simulated timescale: 5 ns. Run on 16 cores on CRESCO1; time required: 40 h. The same simulation on a dual-core AMD Opteron cluster (Gbit Ethernet) took 10 days, with no scaling beyond 2 cores.

Peptide + water + ions: ~2500 atoms; simulated timescale: 40 ns; time required: 24 h (on an 8-core SMP Intel Xeon node).
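As a quick sanity check of the figures above, the timings convert into the usual ns/day throughput metric; this is a worked calculation from the quoted numbers, not output from the original runs:

```python
# Back-of-the-envelope throughput check of the GROMACS numbers quoted above.
def ns_per_day(ns_simulated: float, hours: float) -> float:
    """Convert a simulated span and wall-clock time into ns/day throughput."""
    return ns_simulated / hours * 24.0

protein_cresco  = ns_per_day(5.0, 40.0)        # 16 cores on CRESCO1 -> 3.0 ns/day
protein_opteron = ns_per_day(5.0, 10 * 24.0)   # dual-core Opteron cluster -> 0.5 ns/day
peptide_smp     = ns_per_day(40.0, 24.0)       # 8-core SMP Xeon -> 40.0 ns/day

print(f"CRESCO1 vs Opteron cluster: {protein_cresco / protein_opteron:.0f}x faster")
```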
PC GAMESS / Firefly on ENEA Grid (CRESCO)
Absorption spectra of molecules, at C.R. ENEA Portici: P. Morvillo

PC GAMESS/Firefly is a computational chemistry code for ab initio and DFT calculations, developed for high-performance calculation on Intel-compatible processors (x86, AMD64, and EM64T).
http://classic.chem.msu.su/gran/gamess/index.html

Model: TDDFT B3LYP/6-311G*. Used on CRESCO for studying the electronic properties and absorption spectra of molecules for photovoltaics. On CRESCO: a 370% performance improvement (16 cores) compared to runs on a quad-core desktop PC.
N. cores (MPI procs) | Elapsed time (min)
16                   | 313.7
32                   | 200.2
48                   | 142.7
64                   | 125.1
80                   | 114.0
96                   | 103.1
112                  | 93.9
128                  | 90.0
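The table reports only raw timings; relative speedup and parallel efficiency with respect to the 16-core run can be derived from it, as in this short worked calculation:

```python
# Relative scaling computed from the elapsed-time table above (a worked
# example; the original slide reports only the raw timings).
timings = {16: 313.7, 32: 200.2, 48: 142.7, 64: 125.1,
           80: 114.0, 96: 103.1, 112: 93.9, 128: 90.0}

base_cores, base_time = 16, timings[16]
for cores, minutes in sorted(timings.items()):
    speedup = base_time / minutes                 # relative to the 16-core run
    efficiency = speedup * base_cores / cores     # fraction of ideal scaling
    print(f"{cores:4d} cores: speedup {speedup:4.2f}, efficiency {efficiency:4.0%}")
```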
Gaussian/CP2K/RosettaDock on ENEA Grid (CRESCO)
Chemical reactivity, at the University of Salerno, Dipartimento di Chimica: Prof. Cavallo

1) Static DFT calculations (Gaussian): working in shared memory with at most 8 cores/job; usually 10 jobs/day; speedup ~4-5.

2) Dynamic DFT simulations (CP2K, Car-Parrinello MD): working with MPI, the code scales well up to 64 cores; usually 1-2 jobs/day; excellent speedup (~60). Application: olefin metathesis, the 2005 Nobel reaction.

3) Structural bioinformatics (RosettaCommons): simulating antibody/antigen recognition through docking simulations. A typical embarrassingly parallel problem: no limit to the speedup, 1 core per job. Using ENEA Grid technologies we have typically run up to 500 jobs/day, with excellent speedup (500) and 100% efficiency.
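The contrast between these three workloads reads naturally through Amdahl's law; the following is an illustrative calculation (not from the slide) showing that a speedup of ~4-5 on 8 cores corresponds to roughly 90% of the runtime being parallelizable:

```python
# Amdahl's-law reading of the Gaussian numbers above (illustrative only).
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Ideal speedup for a code whose parallel fraction scales perfectly."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for p in (0.85, 0.90, 0.95):
    print(f"parallel fraction {p:.0%}: speedup on 8 cores = {amdahl_speedup(p, 8):.1f}")
# p ~ 0.90 reproduces the observed speedup of ~4-5; the embarrassingly
# parallel docking jobs correspond to p -> 1, hence speedup ~ number of jobs.
```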
Multiscale modeling parallel code development at the University of Salerno: Dr. Giuseppe Milano

Molecular dynamics simulator (OCCAM):
- Iterative Boltzmann Inversion
- Hybrid particle-field MD
- Hybrid Monte Carlo
- Grand canonical Monte Carlo
Ancillary programs:
- MC charge
- Back mapping

Methods: hybrid particle-field molecular dynamics, a highly parallelizable method.

Coarse-grain models of biomembranes: understanding the interactions of biocompatible block polymers with biological interfaces has important technological applications in industry and in medicine.

Program: OCCAM 3.0, developed at the University of Salerno.
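For reference, the Iterative Boltzmann Inversion listed above refines a coarse-grained pair potential against a target radial distribution function; its standard update rule (not shown on the slide) is

```latex
% Standard IBI update (for reference; not from the slide): the coarse-grained
% pair potential is corrected until g_i(r) matches the target RDF.
V_{i+1}(r) = V_i(r) + k_B T \,\ln\frac{g_i(r)}{g_{\mathrm{target}}(r)}
```

where g_i(r) is the radial distribution function produced by the potential at iteration i and g_target(r) is the target (typically atomistic) distribution.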
CPMD: Car-Parrinello Molecular Dynamics on ENEA Grid (CRESCO)
Supercell crystalline systems benchmark, at ENEA C.R. Portici: S. Raia

CPMD is a plane-wave/pseudopotential implementation of Density Functional Theory, particularly designed for ab-initio molecular dynamics.

Extended tests in order to:
1- Assess scaling performance up to 1024 cores
2- Exploit dual-level parallelism
3- Outer-loop parallelization: TASKSGROUPS
Total tasks | MPI : Threads | TASKSGROUPS | Iterations/sec
1024        | 1024:1        | 8           | 0.0068
1024        | 1024:1        | NO          | 0.0060
1024        | 512:2         | 8           | 0.0244
1024        | 512:2         | NO          | 0.0089
1024        | 256:4         | 8           | 0.0253
1024        | 256:4         | NO          | 0.0131
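The table shows hybrid MPI+threads and TASKSGROUPS compounding: a short worked comparison against the flat-MPI baseline, derived from the numbers above (the slide reports only the raw rates):

```python
# Reading the CPMD benchmark table above (a worked comparison, not from
# the slides): hybrid MPI+threads and TASKSGROUPS multiply throughput.
runs = {  # (MPI ranks : threads, taskgroups) -> iterations/sec
    ("1024:1", "8"): 0.0068, ("1024:1", "NO"): 0.0060,
    ("512:2",  "8"): 0.0244, ("512:2",  "NO"): 0.0089,
    ("256:4",  "8"): 0.0253, ("256:4",  "NO"): 0.0131,
}
flat = runs[("1024:1", "NO")]              # pure MPI, no task groups
best = max(runs, key=runs.get)
print(f"best configuration: {best}, {runs[best] / flat:.1f}x the flat-MPI baseline")
# -> ('256:4', '8') is ~4.2x faster than pure MPI without task groups.
```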
Conclusions …
• Multicore platforms (CRESCO Linux, SP AIX): 8, 16, 32, 48-core SMP nodes
• GPFS / InfiniBand for highly parallel codes
• Hybrid parallelism
• Special sections and multiscale modelling … next

References in ENEA and collaborations …
CRESCO Project/ENEA Grid: S. Migliori, ENEA Roma, [[email protected]]
S. Raia, ENEA C.R. Portici, [[email protected]]
M. Celino, ENEA C.R. Casaccia, [[email protected]]
Results for the codes described here:
CPMD: S. Raia, M. Gusso [[email protected]], M. Celino
Gromacs: A. Marabotti, ISA CNR, Avellino, [[email protected]]
Gaussian, RosettaCommons: L. Cavallo, Università di Salerno, [[email protected]]
PC GAMESS: P. Morvillo, ENEA C.R. Portici, [[email protected]]