Demonstration of Distributed Analysis - ATLAS
M. Biglietti
Università degli Studi di Napoli "Federico II" and INFN
The ATLAS Computing Model
Data Types
ESD (Event Summary Data)
Output of the reconstruction (tracks and hits, calorimeter cells and clusters, combined reconstruction objects, etc.). Used for calibration, alignment, refitting, …
AOD (Analysis Object Data)
Reduced representation of the events for analysis: reconstructed "physics" objects (electrons, muons, jets, missing Et, ...)
TAG
Concise information used by the analyses to select the reconstructed events. TAGs are built from the AODs and contain information such as conditions, detector quality and status, trigger, physics type
The information is stored in ROOT files or in relational databases
DPD (Derived Physics Data)
"thinned" AODs (ROOT ntuples)
Tools for Distributed Analysis
Distributed Data Management: DQ2 (Don Quijote)
Global replica catalogue; it provides information on the datasets stored on the GRID
Dataset: a logical container of data, which can hold thousands of files
All official ATLAS datasets follow a precise naming policy,
e.g. trig1_misal1_csc11_V1.005144.PythiaZee.recon.AOD.v12000601
Tools (command line and web) to query the catalogues (see the sketch after this list):
information on replicas and sites
listing of the catalogue contents
local copy of the files of a dataset
registration/deletion of datasets
...
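For illustration, a sketch of how these catalogue queries look from the Ganga Python prompt, using the DQ2Dataset methods that appear later in these slides (the dataset name is the example above):
d = DQ2Dataset()
d.dataset = 'trig1_misal1_csc11_V1.005144.PythiaZee.recon.AOD.v12000601'
d.list_locations()            # information on replicas and sites
d.list_locations_num_files()  # number of files available at each site
d.list_contents()             # list of the files contained in the dataset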
Metadata service: AMI
Web interface that provides information on the datasets
Grid user interface tools:
GANGA (EGEE)
PAthena (OSG)
Analysts’ work-flow (simulated data)
STEP 1: Analysis code preparation - TOOL: Athena
STEP 2: Locate the datasets - TOOL: DQ2/AMI
STEP 3: Submission to the GRID & job monitoring - TOOL: GANGA/PAthena
STEP 4: Retrieval of the results - TOOL: DQ2
STEP 5: Analysis - TOOL: Athena/ROOT
Locating the Datasets
http://ami3.in2p3.fr:8080/opencms/opencms/AMI/www
DQ2 Command Line User Interface
UI setup (lxplus):
source /afs/cern.ch/project/gd/LCG-share/sl4/etc/profile.d/grid_env.[c]sh
source /afs/usatlas.bnl.gov/Grid/Don-Quijote/dq2_user_client/setup.[c]sh.CERN
voms-proxy-init -voms atlas
dq2_ls trig1_misal1*.PythiaZee.*.AOD.v120006*tid*
dq2_ls -r trig1_misal1*.PythiaZee.*.AOD.v120006*tid*
GANGA ...
Gaudi/Athena and Grid Alliance, a joint ATLAS/LHCb project
A "user friendly" application able to
Configure - Submit - Monitor
Applications can be sent to various kinds of resources:
Local machines (useful for tests)
Batch systems (LSF, PBS, Condor ...)
GRID
The job management system is the same for local configurations and for the GRID (see the sketch below)
Interface via the command line or a GUI
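A minimal sketch of this idea (Local() and LSF() are standard Ganga backends used here only as an illustration; these slides explicitly use LCG() later):
j = Job()
j.application = Athena()
j.backend = Local()   # run on the local machine, useful for tests
# j.backend = LSF()   # or send the same job to a batch system
# j.backend = LCG()   # or to the GRID
j.submit()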
... GANGA
Functional blocks: a job can be configured from a set of functional blocks, each of which takes care of a different aspect
- Applications (based on ATHENA)
• Production, Simulation, Reconstruction, Analysis
• Choice of the release
• Submission of the source code or of the libraries for the analysis
- Backend
• LCG and NorduGrid
• Possibility to specify the sites and CEs
- Input Dataset
• Local file system
• DQ2
• Possibility to use incomplete datasets
• specify the minimum number of files per dataset
- Output Dataset
• Local file system, retrieved at the end of the job
• DQ2
• Choice of the Storage Element
- Splitter - Merger
• Definition of subjobs, number of events per subjob
• Possibility to merge the output ROOT files
GANGA Windows
Ganga setup: source /afs/cern.ch/sw/ganga/install/etc/setup-atlas.sh
Through a CLIP (Command Line Interface in Python) session you can monitor and manage the jobs (a sample session follows below):
submit
status
remove
kill
peek
...
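A sample CLIP session might look like this (a sketch; the job ids are illustrative):
jobs                 # list all jobs in the registry
j = jobs(14)         # pick one job by id
j.status             # 'submitted', 'running', 'completed', ...
j.peek()             # look at the job's output
jobs(16).kill()      # kill a running job
jobs(16).remove()    # remove it from the registry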
Ganga.GPIDev.Lib.JobRegistry : INFO Found 17 jobs in jobs
In [1]: jobs
Out[1]: Statistics: 17 jobs
#  id  status     name                                           subjobs  application  backend  backend.actualCE
#  14  running    caldigoff1_mc12.007215.singlepart_mu17.digit.     50    Athena       LCG
#  16  failed     caldigoff1_mc12.007215.singlepart_mu17.digit.     50    Athena       LCG
#  20  completed  misal1_mc12.007176.singlePion_pTSlice_16_of_3      4    Athena       LCG
#  21  completed  misal1_mc12.007175.singlePion_pTSlice_15_of_3      4    Athena       LCG
#  24  completed  misal1_mc12.007178.singlePion_pTSlice_18_of_3     13    Athena       LCG
#  25  completed  misal1_mc12.007179.singlePion_pTSlice_19_of_3     13    Athena       LCG
#  26  completed  misal1_mc12.007180.singlePion_pTSlice_20_of_3     12    Athena       LCG
#  27  completed  misal1_mc12.007181.singlePion_pTSlice_21_of_3     13    Athena       LCG
#  28  completed  misal1_mc12.007182.singlePion_pTSlice_22_of_3     13    Athena       LCG
#  29  completed  misal1_mc12.007183.singlePion_pTSlice_23_of_3     13    Athena       LCG
#  30  completed  misal1_mc12.007184.singlePion_pTSlice_24_of_3     10    Athena       LCG
#  31  completed  misal1_mc12.007185.singlePion_pTSlice_25_of_3      8    Athena       LCG
#  32  completed  misal1_mc12.007186.singlePion_pTSlice_26_of_3      3    Athena       LCG
#  34  completed  misal1_mc12.007187.singlePion_pTSlice_27_of_3      3    Athena       LCG
#  35  completed  misal1_mc12.007188.singlePion_pTSlice_28_of_3      3    Athena       LCG
#  36  completed  misal1_mc12.007189.singlePion_pTSlice_29_of_3      3    Athena       LCG
#  37  completed  misal1_mc12.007190.singlePion_pTSlice_30_of_3      3    Athena       LCG
GUI from which jobs can be configured, submitted and monitored
Analysis Job Setup
j = Job()
# job name & application
j.name='test_it'
j.application=Athena()
j.application.prepare(athena_compile=False)
j.application.exclude_from_user_area=["*.o","*.root*","*.exe"]
# jobOptions
j.application.option_file='AnalysisSkeleton_topOptions.py'
j.application.max_events='200'
# number of events and subjobs
j.splitter=AthenaSplitterJob()
j.splitter.numsubjobs=20
j.merger=AthenaOutputMerger()
# input dataset
j.inputdata = DQ2Dataset()
j.inputdata.dataset="trig1_misal1_csc11_V1.005144.PythiaZee.recon.AOD.v12000601_tid007539"
j.inputdata.match_ce_all=True
j.inputdata.min_num_files=400
# output dataset in DQ2
j.outputdata=DQ2OutputDataset()
j.outputdata.outputdata=['AnalysisSkeleton.aan.root']
# backend
j.backend=LCG()
j.backend.requirements=AtlasLCGRequirements()
j.backend.requirements.sites= [ 'MILANO','ROMA1','NAPOLI','LNF']
j.submit()
Job Submission
- GANGA
- queries the DQ2 catalogues and prepares the lists of input files
- creates a set of JDL scripts in which it defines all the requirements and the file lists
- creates a tar file containing the JDL scripts and the analysis code
- Submission to the Workload Manager, which in turn dispatches the jobs to the sites
- A job can produce as output a new collection of events, which can be registered as a new dataset in the catalogues and accessed later
Extraction of the results into a set of DPDs, which can then be analysed locally (after a copy) or on the GRID.
GANGA Web Site
http://ganga.web.cern.ch/ganga/
ATLAS Dashboard
http://dashboard.cern.ch/atlas/
PAthena
Package integrated in Athena
So far it has been fully exploited only at BNL. Tests at Lyon for EGEE
Used in exactly the same way as athena: athena → pathena
Complementary to GANGA
Uses the same workload system as the ATLAS productions: PANDA
Output registered in the DQ2 catalogue
Has all the needed functionality:
bookkeeping, monitoring, status, easy job resubmission, e-mail notification, ...
Very user friendly Web interface.
User Interface
1 - Setup
cmt co PhysicsAnalysis/DistributedAnalysis/PandaTools
cd PhysicsAnalysis/DistributedAnalysis/PandaTools/cmt
cmt config ; source setup.sh ; make
2 - Submission
pathena AnalysisSkeleton_topOptions.py --split 10 --nEventsPerJob 100 --inDS trig1_misal1_csc11_V1.005144.PythiaZee.recon.AOD.v12000601_tid007539 --outDS user.MichelaBiglietti.test
3 - Monitoring
pathena_util
>>> show()
>>> status(id)
>>> select('outDS=user.MichelaBiglietti*')
>>> retry(id)
>>> kill(id)
Distributed Analysis - PAthena
PAthena - Monitoring
http://gridui02.usatlas.bnl.gov:25880/server/pandamon/query
WEB based system to monitor and get information on both the jobs and the data; it provides users with an immediate interface to the PANDA system
Analysts’ work-flow (simulated data)
STEP 1: Analysis code preparation - TOOL: Athena
STEP 2: Locate the datasets - TOOL: DQ2/AMI
STEP 3: Submission to the GRID & job monitoring - TOOL: GANGA/PAthena
STEP 4: Retrieval of the results - TOOL: DQ2
STEP 5: Analysis - TOOL: Athena/ROOT
GANGA - Job Information
ATLAS Dashboard
http://dashboard.cern.ch/atlas/
GANGA CLIP
Job status
jobs(jobid).status
jobs(jobid).subjobs[n].status
Checking the log files
jobs(jobid).subjobs[n].peek()
jobs(jobid).subjobs[n].peek('stdout', 'cat')
Job information
application (Athena)
jobs(id).subjobs[n].application
backend
jobs(id).subjobs[n].backend
input files
jobs(id).subjobs[n].inputdata
output
jobs(id).subjobs[n].outputdata
In a loop
In [16]: for i in range(10):
   ....:     jobs(id).subjobs[i].status
   ....:
GANGA - Data Retrieval
2 options:
1. output as a new dataset in the DQ2 catalogues
2. "local" output, which can be downloaded into your own output sandbox
GANGA CLIP
Name and information of the output dataset
jobs(id).subjobs[n].outputdata
d=DQ2Dataset()
d.list_locations('users.UserName.ganga.4.20071123')
d.list_locations_num_files('users.UserName.ganga.4.20071123')
d.list_contents('users.UserName.ganga.4.20071123')
DQ2
Check of the output datasets
dq2_ls users.UserName.ganga.4.20071123
dq2_ls -r users.UserName.ganga.4.20071123
Download of the ROOT ntuples for the analysis
dq2_get -rv users.UserName.ganga.4.20071123
Analysis in ROOT (see the sketch below)
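A minimal sketch of such a ROOT analysis in PyROOT (assuming the merged output file is AnalysisSkeleton.aan.root and that the elec_pt histogram booked under /AANT/Electron/ ends up in an Electron directory of the file):
import ROOT
f = ROOT.TFile.Open('AnalysisSkeleton.aan.root')
h = f.Get('Electron/elec_pt')        # histogram booked in AnalysisSkeleton
c = ROOT.TCanvas('c', 'c', 800, 600)
h.Draw()
c.Print('elec_pt.ps')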
PAthena - Job Control and Data Retrieval
Job control
pathena_util
PANDA WEB monitoring
http://gridui02.usatlas.bnl.gov:25880/server/pandamon/query
log files
DQ2
Check of the output datasets
dq2_ls users.UserName.ganga.4.20071123
dq2_ls -r users.UserName.ganga.4.20071123
Download of the ROOT ntuples and of the log files
dq2_get -rv users.UserName.ganga.4.20071123
Analysis in ROOT
BACKUP
Analysts’ work-flow (real data)
STEP 1: Analysis code preparation - TOOL: Athena
STEP 2: Preparation of the selection on TAGs - TOOL: Athena/DB
STEP 3: Submission to the GRID - TOOL: GANGA/PAthena
STEP 4: Retrieval of the results - TOOL: DQ2
STEP 5: Analysis - TOOL: Athena/ROOT
... ATHENA Setup ...
A basic requirements file at CERN for the following examples would be:
#-------------------------------------------------------------------- requirements
set CMTSITE CERN
set SITEROOT /afs/cern.ch
macro ATLAS_DIST_AREA /afs/cern.ch/atlas/software/dist
# path to the working area
macro ATLAS_TEST_AREA ${HOME}/GridS/testarea
apply_tag setup
apply_tag simpleTest
use AtlasLogin AtlasLogin-* $(ATLAS_DIST_AREA)
#--------------------------------------------------------------------
source /afs/cern.ch/sw/contrib/CMT/v1r19/mgr/setup.(c)sh
cmt config
Create an Athena working area with the UserAnalysis package in 12.0.6:
mkdir GridS/testarea/12.0.6
source setup.sh -tag=12.0.6
echo $CMTPATH
cd GridS/testarea/12.0.6
export CVSROOT=:kserver:atlas-sw.cern.ch:/atlascvs
cmt co -r UserAnalysis-00-09-10 PhysicsAnalysis/AnalysisCommon/UserAnalysis
... ATHENA Setup
Setup and compile the code with:
cd PhysicsAnalysis/AnalysisCommon/UserAnalysis/cmt
cmt broadcast cmt config
source setup.(c)sh
cmt broadcast gmake
After compilation:
cd ../run
cp ../share/AnalysisSkeleton_topOptions.py .
Getting started with a Simple Grid job using ROOT
Ganga Setup and Start:
source /afs/cern.ch/sw/ganga/install/etc/setup-atlas.sh
ganga
ctrl-d to exit
Create a small root macro called gaus.C
void gaus() {
gROOT->SetStyle("Plain");
TH1F *h1 = new TH1F("h1","histo from a gaussian",100,-3,3);
h1->FillRandom("gaus",10000);
TCanvas *c1 = new TCanvas("c","c",800,600);
h1->Draw();
c1->Print("gaus.ps");
}
gaus.C
Create a GANGA script called test_root.py
j = Job()
j.name='ROOT_Ex2'
j.application=Root()
j.application.script=File('gaus.C')
j.outputsandbox=['gaus.ps']
j.backend=LCG()
j.backend.requirements.sites= ['MILANO','CNAF','NAPOLI','LNF']
j.submit()
test_root.py
[Screenshot: job record showing status, i/o directories, application parameters and backend info]
Introduce the AtlasProduction cache setup with:
j.application.atlas_production='12.0.6.5'
The AtlasProduction transformation for 12.0.6.5 is downloaded from
http://atlas-computing.web.cern.ch/atlascomputing/links/kitsDirectory/Production/kits/
to the grid worker node and set up.
in .gangarc
ShallowRetryCount = 10
RetryCount = 0
Use an ATLASDataset with
j.inputdata.lfn=['lfn:file1', 'lfn:file2']
j.inputdata.lfc='lfc-fzk.gridka.de'
to process a list of files; the job is sent to the site where the data is located, with direct access to the input data
First Step: Analysis Code
Choose your analysis code
Here we will use AnalysisSkeleton (Ketevi A. Assamagan)
CVS: PhysicsAnalysis/AnalysisCommon/AnalysisExamples/
It's just a skeleton: the user can implement his analysis here
Some electron histograms and a ROOT tree are filled with basic information
AnalysisSkeleton
Histogramming and Ntuple
TH1F* m_h_elecpt;
m_h_elecpt = new TH1F("elec_pt","pt el",50,0,250.*GeV);
sc = m_thistSvc->regHist("/AANT/Electron/elec_pt",m_h_elecpt);
... AnalysisSkeleton ...
Retrieve a Container of AOD Objects
const ElectronContainer* elecTES = 0;
sc=m_storeGate->retrieve( elecTES, m_electronContainerName);
if( sc.isFailure() || !elecTES ) {
mLog << MSG::WARNING << "No AOD electron container found in TDS" << endreq;
return StatusCode::SUCCESS; }
mLog << MSG::DEBUG << "ElectronContainer successfully retrieved" << endreq;
... AnalysisSkeleton ...
Access to the Kinematics
ElectronContainer::const_iterator elecItr = (*elecTES).begin();
ElectronContainer::const_iterator elecItrE = (*elecTES).end();
for (; elecItr != elecItrE; ++elecItr) {
  if( (*elecItr)->hasTrack() && (*elecItr)->pt() > m_etElecCut ) {
    double electronPt = (*elecItr)->pt();
    double electronEta = (*elecItr)->eta();
    m_h_elecpt->Fill( (*elecItr)->pt(), 1.);
    ...
  }
}
Access to the Monte Carlo Truth
bool findAMatch = m_analysisTools->matchR((*elecItr), truthContainer,
index, deltaRMatch, (*elecItr)->pdgId());
if (findAMatch) {
deltaRMatch = (deltaRMatch > m_maxDeltaR) ? m_maxDeltaR : deltaRMatch;
m_h_elecrmatch->Fill( deltaRMatch, 1.);
if ( deltaRMatch < m_deltaRMatchCut) {
const TruthParticle* electronMCMatch = (*mcpartTES)[index];
double res = (*elecItr)->pt() / electronMCMatch->pt();
m_h_elecetres->Fill(res,1.);
}
Second Step: your Dataset
Use Don Quijote (DQ2) to locate data
i.e. the ATLAS Experiment Data Distribution System
it integrates all the Grid data management services used by ATLAS
it provides production managers and physicists with access to file-resident event data
for more info see the tutorial
https://twiki.cern.ch/twiki/bin/view/Atlas/DDMEndUserTutorial
dataset browser
http://gridui02.usatlas.bnl.gov:25880/server/pandamon/query?overview=dslist
Open an lxplus shell
Find the Zee AODs
source /afs/cern.ch/project/gd/LCG-share/sl4/etc/profile.d/grid_env.[c]sh
source /afs/usatlas.bnl.gov/Grid/Don-Quijote/dq2_user_client/setup.[c]sh.CERN
voms-proxy-init -voms atlas
dq2_ls trig1_misal1*.PythiaZee.*.AOD.v120006*tid*
dq2_ls -r trig1_misal1*.PythiaZee.*.AOD.v120006*tid*
Use one of the most replicated DSETs
Complete source: all the files of a dataset have been transferred to a site by the DDM system.
Incomplete source: anywhere from zero to all the files of a dataset are present at a site.
Example: ATHENA/GANGA Job on the GRID
Athena1
test_athena1.py
j = Job()
j.name='Athena1'
j.application=Athena()
j.application.prepare(athena_compile=False)
j.application.exclude_from_user_area=["*.o","*.root*","*.exe"]
j.application.option_file=' (path)/AnalysisSkeleton_topOptions.py'
j.application.max_events='100'
j.splitter=AthenaSplitterJob()
j.splitter.numsubjobs=10
j.merger=AthenaOutputMerger()
j.inputdata = DQ2Dataset()
j.inputdata.dataset="trig1_misal1_csc11_V1.005144.PythiaZee.recon.AOD.v12000601_tid007539"
j.inputdata.match_ce_all=True
j.inputdata.min_num_files=5
j.outputdata=ATLASOutputDataset()
j.outputdata.outputdata=['AnalysisSkeleton.aan.root']
j.backend=LCG()
j.submit()
in ganga :
execfile('test_athena1.py')
• Instead of the Root() plugin, the Athena() plugin is now used, which automatically creates a tarball of the Athena working area in the prepare() method.
• option_file defines the athena jobOptions
• max_events: number of events per job
... ATHENA/GANGA Example
To avoid packing unnecessary files into your inputsandbox, use:
j.application.exclude_from_user_area=["*.o","*.root*","*.exe"].
Source code compilation:
athena_compile=False.
All the pre-compiled source code and libraries are shipped with the job inputsandbox to the grid worker node and the source code is not recompiled.
To re-compile your code on the worker node just use j.application.prepare().
5 subjobs sent to the LCG grid
j.splitter=AthenaSplitterJob()
j.splitter.numsubjobs=5
inputdata
DQ2Dataset() or ATLASLocalDataset() (files on a local file system)
match_ce_all=True : the job is in addition also sent to incomplete dataset locations (by default it is sent only to complete dataset locations)
j.inputdata.min_num_files=5 makes sure that a certain number of files of the dataset are present at a site (since incomplete sites with 0 files might also exist)
Working with Incomplete DSET (2)
In GANGA, check the locations and the number of files per site
config["Logging"]['GangaAtlas'] = 'INFO'
d=DQ2Dataset()
d.dataset='trig1_misal1_csc11_V1.005144.PythiaZee.recon.AOD.v12000601_tid007539'
d.list_locations()
d.list_locations_num_files()
In [212]: d.list_locations_num_files()
Out[212]: {'CNAFDISK': 1863, 'WISC': 0, 'CERNPROD': 195, 'NAPOLI': 485, 'DESY-HH': 0, 'AGLT2': 0, 'MILANO': 1736, 'NIKHEF': 1277, 'RALDISK': 0, 'ROMA1': 1807, 'NDGFT1DISK': 0, 'WUP': 0, 'AU-UNIMELB': 177, 'FZKDISK': 0, 'TRIUMFDISK': 1793, 'LYONDISK': 1626, 'LNF': 429, 'BNLDISK': 0, 'CERNCAF': 1693, 'TORON': 513, 'ASGCDISK': 1612}
A complete list of sites can be obtained with
r=AtlasLCGRequirements()
r.list_sites()
Out[45]: ['ALBERTA', 'ASGC', 'AU-ATLAS', 'AU-UNIMELB', 'BHAM', 'BRUN', 'CAM', 'CERN', 'CNAF',
'CSCS', 'CYF', 'DESY-HH', 'DESY-ZN', 'DUR', 'EDINBURGH', 'FRTIER2S', 'FZK', 'FZU', 'GLASGOW',
'ICL', 'IFAE', 'IFIC', 'IHEP', 'ITEP', 'JINR', 'LANCS', 'LESC', 'LIP-COIMBRA', 'LIP-LISBON', 'LIV',
'LNF', 'LRZ', 'LYON', 'MANC', 'MCGILL', 'MILANO', 'MONTREAL', 'NAPOLI', 'NIKHEF', 'OXF', 'PIC',
'PNPI', 'QMUL', 'RAL', 'RALPP', 'RHUL', 'ROMA1', 'SARA', 'SFU', 'SINP', 'TORON', 'TRIUMF', 'TWFTT', 'UAM', 'UCLCC', 'UCLHEP', 'UNI-FREIBURG', 'UVIC', 'WUP']
More on Output Dataset
There are two outputdata plugins:
ATLASOutputDataset() (stores output data on local filesystem, CASTOR or Grid SE)
DQ2OutputDataset() (stores output data on Grid SE and registers files in DQ2)
DQ2 Output Dataset
At the start of the job an empty DQ2 dataset is created:
users.UserName.ganga.105.20070321
Optional: a datasetname can be created with:
job.outputdata.datasetname='datasetname'
This generates a DQ2 dataset name of the form user.UserName.ganga.datasetname
Using the additional option job.outputdata.use_datasetname=True creates an output dataset name without prepending user.username.ganga (see the sketch below)
Datasets can be retrieved outside Ganga with dq2_get like production datasets
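A sketch of these options in the job configuration (the name 'mytest' is just a placeholder):
j.outputdata = DQ2OutputDataset()
j.outputdata.outputdata = ['AnalysisSkeleton.aan.root']
j.outputdata.datasetname = 'mytest'     # -> user.UserName.ganga.mytest
# j.outputdata.use_datasetname = True   # use 'mytest' as the full dataset name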
How to exclude bad Computing Elements
If you find that your job is failing due to some badly
configured Computing Element it is possible to exclude
it from the list of allowed CEs.
You can provide the keyword ExcludedCEs in the LCG
section of your .gangarc file
....
ExcludedCEs = cnaf\.infn\.it
Exit from ganga
Computing Elements
Jobs can also be forced to a site using the
legacy option
j.backend.CE ='ce05lcg.cr.cnaf.infn.it:2119/jobmanager-lcglsf-atlas'
Get CE information with (from lxplus prompt)
lcg-infosites --vo atlas ce
GANGA Misc
Change the logging level of Ganga to get more information during job submission etc. with:
config["Logging"]['GangaAtlas'] = 'INFO' or config["Logging"]['GangaAtlas'] = 'DEBUG'
To avoid overflowing your afs area, you can move your gangadir directory to a scratch disk: modify your ~/.gangarc file with a line under [DefaultJobRepository] giving the new path, e.g. local_root = /afs/cern.ch/user/t/thisuser/scratch1/gangadir
Additional files can be transferred with the job inputsandbox and outputsandbox with:
j.inputsandbox=['/path/to/my/file1', '/path/to/my/file2']
or
j.outputsandbox=['myfile1', 'myfile2']
online help
help()
Athena jobs from the shell command line
ganga athena --inDS
trig1_misal1_csc11.005145.PythiaZmumu.recon.AOD.v12000601_tid005999
--outputdata AnalysisSkeleton.aan.root
--split 3 --maxevt 100 --lcg --site NAPOLI AnalysisSkeleton_topOptions.py
ATLAS Dashboard - 2