Track 3A
Report from CHEP2013 – Track 3A
Distributed Processing and Data Handling on Infrastructure, Sites and Virtualization
Davide Salomoni
28/10/2013
CHEP2013: the numbers
• O(500) participants
• 15 plenary sessions
– 1 plenary given by an INFN speaker (6.6%)
• 6 tracks, 7 parallel sessions (track 3 = 3A+3B)
• 468 contributions
– about 12.6% with at least one INFN-affiliated author
• 195 talks
– about 13.3% with at least one INFN-affiliated author
• Workshop on Data Preservation on the afternoon of 16/10
CHEP2013: the committees
• International Advisory Committee, 40 members, of whom 1 from INFN (Mauro Morandin) → ≈ 2.5%
• Programme Chair: Daniele Bonacorsi
• For the 6 tracks: 33 conveners, of whom 4 from INFN → ≈ 12%
– Track 1: Data Acquisition, Trigger and Controls, 4 conveners (0 INFN)
– Track 2: Event Processing, Simulation and Analysis, 4 conveners (0 INFN)
– Track 3: Distributed Processing and Data Handling, 5 conveners (1 INFN, Davide Salomoni)
– Track 4: Data Stores, Data Bases and Storage Systems, 6 conveners (1 INFN, Dario Barberis)
– Track 5: Software Engineering, Parallelism & Multi-core, 6 conveners (1 INFN, Francesco Giacomini)
– Track 6: Facilities, Infrastructures, Networking and Collaborative Tools, 8 conveners (1 INFN, Alessandro De Salvo)
My comments on Track 3A
• Summary of Track 3A at http://goo.gl/3s4F8k
• Highlights in the track:
– Opportunistic computing – both the use of HEP resources that are temporarily idle (e.g. HLT farms) and the use of normally "non-HEP" resources (e.g. supercomputers)
o E.g. ALTAMIRA (Santander), SDSC (San Diego)
o Long-term effect on the computing models?
– Interesting developments of CernVM – but with the important caveat that it is a best-effort and very HEP-specific project
o The lack of clarity on sustainability is an issue for several of these efforts, e.g. the Global Service Registry for WLCG
– Promising tests of using ROOT with Hadoop (in the past the ROOT+Hadoop combination had not shown great performance); see the first sketch after this list
– The experiments keep developing custom, sometimes complex solutions that are – in practice, if not in theory – largely experiment-specific → little sharing
o E.g. job monitoring, VM monitoring, automating usability of distributed resources
– Much talk about Cloud, which at times enjoys substantial funding (cf. Australia, 47+50 M$); however, few production solutions
o Worth noting in particular the doubts raised about the scalability of the OpenStack scheduler (e.g. R. Medrano Llamas, CERN, and I. Sfiligoi, UCSD); see the second sketch after this list
o Cloud accounting and federations still missing
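As a purely illustrative aside (not taken from any of the CHEP contributions): one way ROOT data can be processed through Hadoop is via Hadoop Streaming, with a PyROOT mapper that opens each file – for instance over xrootd – and emits a per-file result. The file URLs, the "Events" tree name and the key/value output format below are assumptions made for the sketch.

```python
#!/usr/bin/env python
# Minimal sketch of a Hadoop Streaming mapper using PyROOT.
# Each input line is assumed to carry the URL of one ROOT file (e.g. an
# xrootd URL); the mapper opens it and emits the entry count of a
# hypothetical "Events" tree as a key<TAB>value pair.
import sys
import ROOT


def count_entries(url, tree_name="Events"):
    """Open a ROOT file and return the number of entries in tree_name."""
    f = ROOT.TFile.Open(url)
    if not f or f.IsZombie():
        return 0
    tree = f.Get(tree_name)
    n = tree.GetEntries() if tree else 0
    f.Close()
    return n


if __name__ == "__main__":
    for line in sys.stdin:
        url = line.strip()
        if url:
            print("%s\t%d" % (url, count_entries(url)))
```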
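Likewise, a minimal (and hypothetical) way to probe the scheduler-scalability concern is to boot a burst of instances and time how long each one stays in the BUILD state. The snippet below uses 2013-era python-novaclient calls with placeholder credentials, image and flavor names; it is a sketch, not the benchmark presented by the cited speakers.

```python
#!/usr/bin/env python
# Sketch: submit a burst of instance-boot requests and report how long each
# instance takes to leave the BUILD state. Endpoint, credentials, flavor and
# image names are placeholders.
import time
from novaclient import client

nova = client.Client("2", "demo", "secret", "demo-project",
                     "http://controller:5000/v2.0")  # hypothetical endpoint

flavor = nova.flavors.find(name="m1.tiny")
image = nova.images.find(name="cirros")  # any small test image

start = time.time()
servers = [nova.servers.create("sched-test-%d" % i, image, flavor)
           for i in range(20)]

pending = {s.id for s in servers}
while pending:
    for sid in list(pending):
        status = nova.servers.get(sid).status
        if status != "BUILD":  # scheduled (ACTIVE) or failed (ERROR)
            print("%s -> %s after %.1fs" % (sid, status, time.time() - start))
            pending.discard(sid)
    time.sleep(2)
```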
More generally on CHEP
• We had one Italian speaker for a plenary ("Designing the computing for the future experiments", Stefano Spataro, focused on the developments of Panda at FAIR)
– In general, the plenaries did not strike me as particularly interesting or enlightening/visionary (see the list below)
• If we look at the CHEP2013 talks with "some INFN signature", they turn out to be about 13% of the total and to cover a wide range of topics (see next slide)
– However, my impression is that as an institute we have no strategy towards computing, and that our contributions, e.g. within experiments, are mostly "instrumental" or individual, not strategic
o On computing in general, I mentioned to Dario Menasce the need for an analysis of all contributions (talks and posters) submitted by INFN people, to assess the scope for sharing and integrating competences
• No presentation on "common middleware" (an INFN flagship theme over the past 10 years), present or future; as for EGI, no talks and one poster (S. Burke, on GLUE 2) – implications for H2020?
– Everyone for themselves, Grid (or Cloud) for all
The plenaries
1. CHEP in Amsterdam: from 1985 to 2013, David Groep (NIKHEF, The Netherlands)
2. NIKHEF, the national institute for subatomic physics, Frank Linde (NIKHEF, The Netherlands)
3. C++ evolves!, Axel Naumann (CERN)
4. Software engineering for science at the LSST, Robert Lupton (Princeton, USA)
5. Horizon 2020: an EU perspective on data and computing infrastructures for research, Kostas Glinos (European Commission)
6. Data processing in the wake of massive multicore and GPU, Jim Kowalkowski (FNAL, USA)
7. Future directions for key physics software packages, Philippe Canal (FNAL, USA)
8. Computing for the LHC: the next step up, Torre Wenaus (BNL, USA)
9. Designing the computing for the future experiments, Stefano Spataro (Università di Torino/INFN, Italy)
10. Big Data - Flexible Data - for HEP, Brian Bockelman (University of Nebraska, USA)
11. Probing Big Data for Answers using Data about Data, Edwin Valentijn (University of Groningen, The Netherlands)
12. Data archiving and data stewardship, Pirjo-Leena Forsström (CSC, Finland)
13. Inside numerical weather forecasting - Algorithms, domain decomposition, parallelism, Toon Moene (KNMI, The Netherlands)
14. Software defined networking and bandwidth-on-demand, Inder Monga (ESnet, USA)
15. Trends in Advanced Networking, Harvey Newman (Caltech, USA)
16. Plus one talk by a sponsor (KPMG, The Netherlands)
Talks with INFN authors
1. Many-core applications to online track reconstruction in HEP experiments
2. Integrating multiple scientific computing needs via a Private Cloud Infrastructure
3. Deployment of a WLCG network monitoring infrastructure based on the perfSONAR-PS technology
4. Evaluating Predictive Models of Software Quality
5. Usage of the CMS Higher Level Trigger Farm as a Cloud Resource
6. Testing SLURM open source batch system for a Tier1/Tier2 HEP computing facility
7. Testing of several distributed file-systems (HadoopFS, CEPH and GlusterFS) for supporting the HEP experiments analysis
8. The future of event-level information repositories, indexing and selection in ATLAS
9. Implementation of a PC-based Level 0 Trigger Processor for the NA62 Experiment
10. CMS Computing Model Evolution
11. WLCG and IPv6 - the HEPiX IPv6 working group
12. NaNet: a low-latency NIC enabling GPU-based, real-time low level trigger systems
13. A PCIe Gen3 based readout for the LHCb upgrade
14. The Common Analysis Framework Project
15. ArbyTrary, a cloud-based service for low-energy spectroscopy
16. Integration of Cloud resources in the LHCb Distributed Computing
17. Algorithms, performance, and development of the ATLAS High-level trigger
18. An exact framework for uncertainty quantification in Monte Carlo simulation
19. Scholarly literature and the media: scientific impact and social perception of HEP computing
20. Geant4 studies of the CNAO facility system for hadrontherapy treatment of uveal melanomas
21. Computing on Knights and Kepler Architectures
22. Using ssh as portal - The CMS CRAB over glideinWMS experience
23. System performance monitoring of the ALICE Data Acquisition System with Zabbix
24. Automating usability of ATLAS Distributed Computing resources
25. O2: a new combined online and offline computing for ALICE after 2018
26. PROOF-based analysis on the ATLAS Grid facilities: first experience with the PoD/PanDa plugin
Posters with INFN authors
1. Architectural improvements and 28nm FPGA implementation of the APEnet+ 3D Torus network for hybrid HPC systems
2. GPU for Real Time processing in HEP trigger systems
3. The ATLAS EventIndex: an event catalogue for experiments collecting large amounts of data
4. Optimization of Italian CMS Computing Centers via MIUR funded Research Projects
5. Design and Performance of the Virtualization Platform for Offline computing on the ATLAS TDAQ Farm
6. Real-time flavor tagging selection in ATLAS
7. Dirac integration with a general purpose bookkeeping DB: a complete general suite
8. Preserving access to ALEPH Computing Environment via Virtual Machines
9. Changing the batch system in a Tier 1 computing center: why and how
10. TASS - Trigger and Acquisition System Simulator - An interactive graphical tool for Daq and trigger design
11. A quasi-online distributed data processing on WAN: the ATLAS muon calibration system
12. Long Term Data Preservation for CDF at INFN-CNAF
13. R&D work for a data model definition: data access and storage system studies
14. An Xrootd Italian Federation for CMS
15. An Infrastructure in Support of Software Development
16. Geant4 Electromagnetic Physics for LHC Upgrade
17. Compute Farm Software for ATLAS IBL Calibration
18. CMS users data management service integration and first experiences with its NoSQL data storage
19. INFN Pisa scientific computation environment (GRID HPC and Interactive analysis)
20. Arby, a general purpose, low-energy spectroscopy simulation tool
21. Testing and Open Source installation and server provisioning tool for the INFN-CNAF Tier1 Storage system
22. New physics and old errors: validating the building blocks of major Monte Carlo codes
23. Distributed storage and cloud computing: a test case
24. Toward the Cloud Storage Interface of the INFN CNAF Tier-1 Mass Storage System
25. The Legnaro-Padova distributed Tier-2: challenges and results
26. A flexible monitoring infrastructure for the simulation requests
27. The ALICE DAQ infoLogger
28. CORAL and COOL during the LHC long shutdown
29. The ALICE Data Quality Monitoring: qualitative and quantitative review of 3 years of operations
30. Many-core on the Grid: From Exploration to Production
31. Negative improvements
32. Installation and configuration of an SDN test-bed made of physical switches and virtual switches managed by an Open Flow controller
33. Self-Organizing Map in ATLAS Higgs Searches