The ATLAS Level 2 Trigger
Maris Abolins
Michigan State University
28-Nov-1996
Outline
• Trigger Overview
• Physics Challenges
• Level 2 Guidelines
• Long-term Strategy
• The Demonstrator Program
  – Demonstrator A
  – Demonstrator B
  – Demonstrator C
  – Common Activities
  – Modelling and Simulation
  – Hybrid Solutions
• Conclusions

Trigger System Overview
[Block diagram: interaction rate ~1 GHz, bunch-crossing rate 40 MHz; CALO, MUON and TRACKING data held in pipeline memories and derandomisers; LEVEL 1 TRIGGER output 75 (100) kHz, with RoIs passed via an interface; LEVEL 2 TRIGGER, with data held in buffer memories, output ~1000 Hz; readout to the LEVEL 3 TRIGGER with full-event buffers, output ~100 Hz]

Readout Buffers
• After a LVL1 accept, data are moved through a Readout Driver into an ATLAS standard Readout Buffer (ROB)
• The data are held in the ROB during the LVL2 decision
• The data are moved to the LVL3 farm on receipt of a LVL2 accept
[Diagram: DETECTOR -> front-end buffer -> derandomiser (2 µs) -> FE link -> ROD (Readout Driver) -> readout link -> Readout Buffer (~3 ms) -> to Level-3]

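A minimal C++ sketch of this hold/forward/discard cycle; all names (ReadoutBuffer, forwardToLvl3, etc.) are illustrative assumptions, not ATLAS software:

// Sketch of the ROB behaviour described above: store on LVL1 accept,
// hold until the LVL2 decision, then forward to LVL3 or discard.
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <vector>

struct EventFragment {
    uint32_t l1Id;              // LVL1 event identifier
    std::vector<uint8_t> data;  // raw data delivered by the ROD
};

class ReadoutBuffer {
public:
    void store(EventFragment f) {
        const uint32_t id = f.l1Id;    // keep the key before moving
        held_[id] = std::move(f);
    }
    void onLvl2Decision(uint32_t l1Id, bool accept) {
        auto it = held_.find(l1Id);
        if (it == held_.end()) return;
        if (accept) forwardToLvl3(it->second);
        held_.erase(it);               // the ROB slot is freed either way
    }
private:
    void forwardToLvl3(const EventFragment& f) {
        std::cout << "event " << f.l1Id << " -> LVL3 farm\n";
    }
    std::unordered_map<uint32_t, EventFragment> held_;
};

int main() {
    ReadoutBuffer rob;
    rob.store({1, {0xCA, 0xFE}});
    rob.store({2, {0xBE, 0xEF}});
    rob.onLvl2Decision(1, true);   // LVL2 accept: moved on to LVL3
    rob.onLvl2Decision(2, false);  // LVL2 reject: discarded
}
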
Level 2 Trigger
• Works with data at full granularity
• Uses the Region of Interest (RoI) principle
  – LVL1 supplies RoI information
  – LVL2 transfers a small fraction of the data (~few %)
• Processing (see the sketch below)
  – Local: feature extraction from sub-detectors
  – Global: combine features
[Diagram: data from the front end -> LEVEL-1 (Yes/No, RoI) -> READOUT BUFFER -> LEVEL-2 LOCAL (features) -> LEVEL-2 GLOBAL (Yes/No) -> to LEVEL-3]

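To make the local/global split concrete, here is a hedged C++ sketch; the types, the algorithm stub and the 30 GeV cut are illustrative assumptions, not ATLAS code:

// Sketch of LVL2 processing: a local step extracts features per RoI,
// a global step combines them into a yes/no decision.
#include <iostream>
#include <string>
#include <vector>

struct RoI { double eta, phi; };                 // region supplied by LVL1
struct Feature { std::string type; double et; }; // output of the local step

// Local step: feature extraction from one sub-detector within one RoI.
Feature extractCaloFeature(const RoI& roi) {
    // A real algorithm would run peak finding and shape analysis on
    // full-granularity calorimeter data inside this RoI window.
    return {"em-cluster", 32.0};                 // placeholder result
}

// Global step: combine features from all RoIs into the LVL2 decision.
bool globalDecision(const std::vector<Feature>& feats) {
    for (const auto& f : feats)                  // e.g. 1 e, ET > 30 GeV
        if (f.type == "em-cluster" && f.et > 30.0) return true;
    return false;
}

int main() {
    std::vector<RoI> rois = {{0.5, 1.2}, {-1.8, 2.9}};   // from LVL1
    std::vector<Feature> feats;
    for (const auto& r : rois) feats.push_back(extractCaloFeature(r));
    std::cout << (globalDecision(feats) ? "LVL2 accept\n" : "LVL2 reject\n");
}
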
Regions of Interest
[Event display: Regions of Interest (RoI) are the areas selected by the First Level Trigger]

Regions of Interest
• Number of RoI's (ATLAS jet generation, 35 GeV threshold, |η| ≤ 3)
  – ≥ 5 GeV isolated em clusters: 2.6 average
  – ≥ 6 GeV, no isolation (em): 3.7 average
• RoI size
  – calorimeter: em cluster ∆η×∆φ = 0.5×0.5 --> 0.6% of total calorimeter area
  – SCT: size increased by z-vertex spread --> 1-4% of total Inner Detector area
• RoI data volume is ~few % of the total data volume
• Mapping of data onto ROB's may increase this fraction

System         RoI Size (kB)   Rate to L2 (GB/s)   Total Rate (GB/s)
SCT + pixels   1.0             0.3                  21
TRT            3.5             1.0                  50
Calo.          2.5             0.7                  35
Muon           1.0             0.3                  20
TOTAL                          2.3                 131

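Two quick cross-checks of the fractions quoted above; the first assumes calorimeter coverage of |η| ≤ 3, which is an assumption made here for the estimate:

% one em RoI as a fraction of the calorimeter (eta, phi) area,
% assuming coverage |eta| <= 3:
\frac{\Delta\eta \times \Delta\phi}{(2 \times 3)(2\pi)}
  = \frac{0.5 \times 0.5}{37.7} \approx 0.66\%
  \quad (\text{quoted: } 0.6\%)
% RoI data volume as a fraction of the total, from the table:
\frac{2.3\ \mathrm{GB/s}}{131\ \mathrm{GB/s}} \approx 1.8\%
  \quad (\text{i.e. ``a few percent''})
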
RoI's continued
• Primary RoI's: those RoI's responsible for the LVL1 trigger
• Other RoI's
  – LVL1-tagged, e.g. lower calorimeter thresholds
  – generated by Level 2 in sequential processing

Triggering Principles
• Level 1
  – 1% deadtime
  – Pipelined
  – Uses custom hardware with fixed algorithms and programmable parameters
• Level 2: to be determined
• Level 3
  – A farm of commercial processors
  – Running “offline” code

Examples of Physics Signatures (L = 10^34 cm^-2 s^-1)

Process                      Level-1                  Level-2 (I = isolated)
H0 → γγ                      ≥ 2 em, ET > 20 GeV      2 γ, ET > 20 GeV
H0 → ZZ* → l+l–l+l–          ≥ 2 em, ET > 20 GeV      2 e, ET > 20 GeV
                             ≥ 2 µ, pT > 6 GeV        2 µ, ET > 6 GeV, I
                             ≥ 1 em, ET > 30 GeV      1 e, ET > 30 GeV
                             ≥ 1 µ, pT > 20 GeV       1 µ, ET > 20 GeV, I
(Z → l+l–) + X → ETmiss      ≥ 2 em, ET > 20 GeV      2 e, ET > 20 GeV
                             ≥ 2 µ, pT > 6 GeV        2 µ, ET > 6 GeV, I
                             ≥ 1 em, ET > 30 GeV      1 e, ET > 30 GeV
                             ≥ 1 µ, pT > 20 GeV       1 µ, ET > 20 GeV, I
tt → leptons + jets          ≥ 1 em, ET > 30 GeV      1 e, ET > 30 GeV
                             ≥ 1 µ, pT > 20 GeV       1 µ, ET > 20 GeV, I
W′, Z′ → jets                ≥ 1 jet, ET > 150 GeV    1 jet, ET > 300 GeV
SUSY → jets                  ≥ 1 jet, ET > 150 GeV    3 jets, ET > 150 GeV
                             ETmiss > 150 GeV         ETmiss > 200 GeV

l, γ for |η| < 2.5; jets for |η| < 3; missing ET for |η| < 5. RoI's on all e/γ > 10 GeV and all µ's with pT > 6 GeV.

Trigger Rates - Level 1

Trigger selection (L = 10^34 cm^-2 s^-1)                         Rate
Single muon, ≥ 1 µ, pT ≥ 20 GeV                                  4 kHz
Single e.m. cluster, ET ≥ 30 GeV, e.m. and hadronic isolation    20 kHz
Double muon, ≥ 2 µ, pT ≥ 6 GeV                                   1 kHz
Double e.m. cluster, ET ≥ 20 GeV, e.m. and hadronic isolation    4 kHz
Single jet, ≥ 1 jet, pT ≥ 150 GeV                                3 kHz
ETmiss > 150 GeV                                                 1 kHz
Other triggers                                                   5 kHz
Combined rate                                                    ~38 kHz

Expected LVL2 Trigger Rates (TP)
Indicative rates for inclusive selections (L = 10^34 cm^-2 s^-1)

Trigger selection                               Rate
Isolated muon, ≥ 1 µ, pT ≥ 20 GeV               0.2 kHz
Single muon, ≥ 1 µ, pT ≥ 40 GeV                 0.1 kHz
Single electron, ≥ 1 e, ET ≥ 30 GeV             0.3 kHz
Single gamma, ≥ 1 γ, ET ≥ 60 GeV                0.1 kHz
Isolated di-muon, ≥ 2 µ, pT ≥ 6 GeV             0.1 kHz
Double muon, ≥ 2 µ, pT ≥ 10 GeV                 0.08 kHz
Di-electron or di-gamma, pT ≥ 20 GeV            0.2 kHz
Single jet, ≥ 1 jet, pT ≥ 300 GeV               0.1 kHz
ETmiss > 200 GeV                                0.1 kHz
Other triggers                                  0.1 kHz
Combined rate                                   ~1.4 kHz

Expected LVL2 Inclusive Rates at Low Luminosity (TP)

Trigger selection (L = 10^33 cm^-2 s^-1)        Rate
B-physics Level-2 triggers                      0.8 kHz
Single muon, ≥ 1 µ, pT ≥ 20 GeV                 0.2 kHz
Single electron, ≥ 1 e, pT ≥ 20 GeV             0.2 kHz
Single gamma, ≥ 1 γ, pT ≥ 40 GeV                0.06 kHz
Dual electron, ≥ 2 e, pT ≥ 15 GeV               0.01 kHz
Dual gamma, ≥ 2 γ, pT ≥ 20 GeV                  0.01 kHz
Single jet, ≥ 1 jet, pT ≥ 200 GeV               0.06 kHz
Three jets, ≥ 3 jets, pT ≥ 100 GeV              0.02 kHz
ETmiss > 200 GeV                                0.01 kHz
Prescaled triggers                              0.1 kHz
Combined rate                                   ~1.5 kHz

More recent trigger-menu studies can be found in DAQ-NO-54.

Rates for B-physics (TP)

Trigger selection (L = 10^33 cm^-2 s^-1)                      Rate
Level-1 single muon, pT ≥ 6 GeV                               8000 Hz
Level-2 single muon, pT ≥ 6 GeV                               4000 Hz
Additional Level-2 requirements:
  second muon, pT ≥ 5 GeV                                     56 Hz
  electron, pT ≥ 5 GeV                                        112 Hz
  electron pair, pT ≥ 1 GeV                                   400 Hz
  hadron pair, pT(h) ≥ 6 GeV with pT(pair) ≥ 15 GeV           100 Hz
  Ds± → φπ± → K+K–π±, pT ≥ 1 GeV and loose mass cuts          160 Hz
Combined rate                                                 ~830 Hz

Level-2 Processing
• Algorithms only: no preprocessing or system overhead
• Calorimeter: peak find and shape analysis - 100 µs @ 500 MIPS
• Continuous tracking (TRT): histogramming method - 600 µs @ 500 MIPS
• Precision tracking: histogramming method - 800 µs @ 500 MIPS
• Muon (2-step process): pattern recognition, then pT calculation - 100 µs with non-optimised code

Motivation for Sequential Processing
[Histograms: total number of RoI's per event, mean = 4.7; number of “trigger” RoI's per event, mean = 1.2]

Background rejection factor at 90% efficiency for em trigger RoI's:
Luminosity        Low    High
Threshold (GeV)   20     40
Calo alone        3      10
Calo + Track      25     60

• Only 1.2 trigger RoI's for 4.7 LVL1 “objects”
• Only 10%-20% of trigger RoI's survive the first step
• Tracking algorithms are slower by a factor of ~10

Timing benchmarks for selected algorithms (500 MIPS processors):
Algorithm   Proc. time
Calor.      50-100 µs
Muon        < 100 µs
Track       500-5000 µs

Suggested procedure (sketched below):
• Confirm trigger RoI's using calorimeter and muon alone
• Do track finding only if necessary

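A hedged C++ sketch of this two-step strategy; the RoI type and the confirmation stubs are illustrative assumptions, and the per-RoI costs follow the benchmark table above:

// Sketch of sequential processing: cheap calo/muon confirmation first,
// expensive tracking only for the RoI's that survive.
#include <iostream>
#include <vector>

struct RoI { bool emCandidate; };            // stand-in for real features

bool confirmWithCaloAndMuon(const RoI& r) {  // ~50-100 us per RoI
    return r.emCandidate;                    // placeholder decision
}
bool confirmWithTracking(const RoI& r) {     // ~500-5000 us per RoI
    return r.emCandidate;                    // placeholder decision
}

int main() {
    std::vector<RoI> rois = {{true}, {false}, {false}, {false}, {false}};
    double costUs = 0.0;                     // rough per-event cost
    for (const auto& r : rois) {
        costUs += 100;                       // step 1 always runs
        if (!confirmWithCaloAndMuon(r)) continue;
        costUs += 1000;                      // step 2: survivors only
        if (confirmWithTracking(r)) std::cout << "RoI confirmed\n";
    }
    // Since only 10%-20% of trigger RoI's survive step 1, most of the
    // ~10x more expensive tracking work is avoided entirely.
    std::cout << "approx. cost: " << costUs << " us/event\n";
}
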
Strategy Meeting (Cosener's House, June '95)
• Decision made to produce a User Requirement Document (URD)
• Establish milestones
  – Architectural choice, e.g.
    » data-driven elements vs. multiple farms
    » control via network or separate lines
    » algorithmic complexity
    » “push” (data transfer initiated by the source node) or “pull” (data transfer initiated by the target node) - see the sketch below
  – Technological choice, e.g.
    » switch and data-concentrator technology
    » farm technology
    » network technology

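A small C++ sketch contrasting the two transfer modes; Source and Target are illustrative stand-ins for a ROB and a LVL2 processor:

// "Push": the source initiates the transfer.  "Pull": the target
// requests the data it wants.  (Architectures A and B push; C pulls.)
#include <iostream>
#include <string>

struct Target;
struct Source {
    std::string data = "RoI fragment";
    void pushTo(Target& t);                    // push: source initiates
    const std::string& serveRequest() const {  // pull: source only replies
        return data;
    }
};
struct Target {
    void receive(const std::string& d) { std::cout << "push: " << d << "\n"; }
    void pullFrom(const Source& s) {           // pull: target initiates
        std::cout << "pull: " << s.serveRequest() << "\n";
    }
};
void Source::pushTo(Target& t) { t.receive(data); }

int main() {
    Source rob;
    Target processor;
    rob.pushTo(processor);
    processor.pullFrom(rob);
}
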
Demonstrator Program
• Why
  – Explore architectural options
  – Explore technical options
  – Optimize the choice financially and technologically
  – Time and expertise on hand
• Common yardstick
  – 100 kHz operation
  – Demonstration of scalability to the full detector
  – User Requirement Document
• Task Force decision
  – Three demonstrators (A, B, C)
  – Mix architectural and technological considerations in a single demonstrator program

Interim Milestones (to be updated)
• Sep 1997: major architecture choices made
• Dec 1997: T/DAQ Technical Proposal due
• Jul 1998: major technology choices made; Small-Scale Prototype (SSP) defined
• Jul 1999: SSP assembled
• Jan 2001: Full-Chain Prototype (FCP) defined
• Jan 2002: FCP accepted
• Jan 2003: commissioning of LVL2
• Jul 2003: integration of LVL2 with the rest of T/DAQ
• Jan 2004: integration of LVL2 with subdetectors
• Jan 2005: run with cosmic rays
• Jul 2005: full Level-2 ready for collisions

Demonstrator Participants
Overall Coordinator: Fred Wickens
• Demonstrator “A”
  – Coordinator: Prof. Reiner Maenner
  – Institutes: Cracow, Heidelberg, Jena, Mannheim, Weizmann
• Demonstrator “B”
  – Coordinator: John Renner Hansen
  – Institutes: Argonne, CERN, NBI (Copenhagen), Cracow, Edinburgh, JINR, Lecce, Liverpool, Manchester, Michigan State, NIKHEF, Oxford, RAL, RHBNC, UFRJ (Rio), Rome INFN, Valencia, UCL
• Demonstrator “C”
  – Coordinator: Patrick Le Du
  – Institutes: Argonne, CERN, Michigan State, Oxford, Rome, Lecce, Saclay, Prague

L2 Architecture “A”: Data Driven
• Optimized for fast feature extraction (FPGA); based on Enable++
• Operates in push mode
[Diagram: Supervisor; RoI collectors (ROIC) feed data-driven (DD) feature-extraction units, which connect through the Global Network to global processors (GP)]

Architecture A
Potential advantages
• “On the fly” RoI collection
• Feature extraction at 100 kHz; typical FPGA execution times ~10 µs; minimal latency
• Smaller system
• Simplified supervision and control
Possible problems
• Algorithmic complexity may be limited → higher output rates
• May be difficult to combine detectors with different geometries, e.g. barrel and endcap
• Sequential processing probably ruled out

Critical Demonstrations for Architecture A
• Parallel collection of several RoI's in a given subdetector at 100 kHz
• Data-driven feature extraction for all detectors at 100 kHz
• Demonstration of satisfactory algorithmic adaptability to different luminosity conditions, changing run requirements and evolving physics demands
• Event synchronization

Demonstrator A Hardware Layout
Goal: RoI collection and FEX @ 100 kHz
[Diagram: SLATE crate (VME) with four SLATE-S-Link outputs and a SLATE control CPU (or VIC); S-Links feed the Enable++ system crate (high-speed backplane) holding an S-Link-ROIC, destination I/O board, FPGA board, controllers and a VMESparc, connected via LAN. 1 SLATE = 1 ROB crate or input from a preceding RoIC. SLATE is a fast data-emulation device.]

L2 Architecture B: Data Flow Diagram
• Optimized for parallel processing of RoI's and detectors (local and global stages)
• Operates in push mode
• This is the Technical Proposal design
• The two networks shown may be partitions of the same network
[Diagram: Supervisor; Local Network feeding local processors (LP); Global Network feeding global processors (GP)]

Architecture B
Potential advantages
• Flexibility inherent in general-purpose processors
• Lower latency from parallel processing and data flow
Possible problems
• Sequential processing presents challenges
  – logic control
  – latency issues
• More processing power may be needed
Critical demonstrations
• Achieve complete RoI collection at 100 kHz; involves ROB's, supervisor and network interacting with feature-extraction (FEX) processors
• Feature extraction can be done for all detectors within local-network bandwidth limits and financial constraints
• Supervisor functioning at full rate
• Sequential-processing control paths and performance at full bandwidth; event synchronization

Demonstrator B2
[Diagram: buffers and an RoI distributor (RoID) connected over DS links or SCI through a switch (VME) to FEX processors (FEXD) and global processors; Supervisor paths labelled T2DR, RoIR and GOutRS]

L2 Architecture C: “Single Farm”
• Sources: Readout Buffer clusters
• Optimized for sequential processing of RoI's
• Operates in pull mode
• The distinction between Level 2 and Level 3 disappears
[Diagram: Supervisor; a single Network connecting the sources to processors (P)]

The Single Farm Idea
[Diagram: Read-Out Buffers (sources) → Switching Network → selection processors (destinations), steered by the Supervisor]
Main features (sketched below):
• Integration of data and protocol networks
• Pull data-flow control strategy
• Single processor per event
• Several events per processor

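A hedged C++ sketch of one event in the single farm; RobCluster, the selection stub and the data strings are illustrative assumptions:

// Sketch of the single-farm flow: one processor handles the whole event,
// pulling RoI data first and the rest only if the event survives, so the
// LVL2/LVL3 distinction disappears.
#include <iostream>
#include <string>
#include <vector>

struct RobCluster {                       // source: a readout-buffer cluster
    std::string fetch(const std::string& what) { return what + " data"; }
};

void processEvent(int l1Id, std::vector<RobCluster>& sources) {
    std::string roiData = sources[0].fetch("RoI");   // pull: processor asks
    bool selected = !roiData.empty();                // placeholder selection
    if (!selected) return;                           // cheap reject
    for (auto& s : sources) s.fetch("full event");   // continue to analysis
    std::cout << "event " << l1Id << " selected\n";
}

int main() {
    std::vector<RobCluster> sources(3);
    processEvent(42, sources);            // supervisor assigns the event
}
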
Architecture C
Potential advantages
• Well adapted to sequential processing
• Destination can also be the Event Builder and Level 3 processor
Possible problems
• Data and control traffic on the same switch
• Multiple events per processor
Critical demonstrations
• Data and control traffic patterns are within network capabilities
• Complete RoI collection can be achieved
• Data concentrators do not impede data flow

Demo C for Testlab
[Diagram: LVL1 & RoI Builder (ROIB) deliver synchro plus L1_ID & RoI list to the Supervisor (ROBIN mezzanine, Ethernet); ROB's (CES RIO2, LynxOS) send RoI data over S-Link through switch-interface mezzanines (PCI-ATM, DS link) into the switch (ATM, C104); the processors are Pentium PC's (Windows NT, PCI-PC interface) running the algorithms; control & monitoring from a Sun workstation (Solaris)]

Common Areas for Demonstrators
• Baseline definitions of
  – detectors
  – data formats
  – ROB/RoI sizes
  – algorithms
  – data samples
• Common hardware and software
  – ROB's
  – Supervisor
• Modelling and emulation
• Performance measurements

Level 2 Supervisor
[Diagram: T2 Supervisor for ATLAS: a VME crate of CPUs with a PMC interface and an RoI Router; receives T1 accepts/RoI's over S-Link, issues discard or readout requests to the ROB's, and distributes T2 accepts/rejects over SCI or ATM]
• Demo A or B: a hardware router sends push requests to the ROB's
• Demo C: sends the event's RoI info via the network to an L2 CPU
Tasks (the STEWARD role is sketched below):
• STEWARD: manages (frees and reviews usage of) local and global processors
• RoI Router: dispatches RoI information to the CPU's
• CPU's: determine appropriate free local/global processors to use, determine which buffers to request data from, and assemble the request packet
• ROB Router: routes buffer requests to the appropriate crates (A & B only)

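A hedged C++ sketch of the STEWARD bookkeeping; Steward and its methods are illustrative names, not ATLAS software:

// Sketch of the STEWARD task above: hand out free LVL2 processors to
// incoming events and take them back when a decision returns.
#include <deque>
#include <iostream>
#include <optional>

class Steward {
public:
    explicit Steward(int nProcessors) {
        for (int p = 0; p < nProcessors; ++p) free_.push_back(p);
    }
    // Called when the RoI Router dispatches a new event's RoI info.
    std::optional<int> assign(int l1Id) {
        if (free_.empty()) return std::nullopt;   // all processors busy
        int p = free_.front();
        free_.pop_front();
        std::cout << "event " << l1Id << " -> processor " << p << "\n";
        return p;
    }
    // Called when the processor's accept/reject comes back.
    void release(int processor) { free_.push_back(processor); }
private:
    std::deque<int> free_;                        // idle processor ids
};

int main() {
    Steward steward(2);
    auto a = steward.assign(100);
    steward.assign(101);
    if (!steward.assign(102)) std::cout << "event 102 must wait\n";
    if (a) steward.release(*a);                   // decision returned
    steward.assign(102);
}
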
Modelling and Simulation
• Paper models (see the sketch below)
  – Inputs
    » full trigger menus
    » average numbers of RoI's
    » average processing times, overheads, circuit delays, etc.
  – Outputs
    » number of processors needed
    » switching bandwidth needed
    » link occupancy
    » overall latency
  – Does not model
    » possible queueing problems
    » data-flow fluctuations
  – Initial studies for Model C at Saclay; now A & B are also modelled
• MODSIM II and C++ studies are under way

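The paper-model arithmetic fits in a few lines of C++. In this sketch the 100 kHz rate and the 4.7 mean RoI's come from earlier slides; the per-RoI time and size are placeholder inputs, and queueing and fluctuations are ignored, as the slide notes:

// Sketch of a "paper model": averages in, processor count and
// aggregate bandwidth out.
#include <cmath>
#include <iostream>

int main() {
    const double eventRateHz   = 100e3;   // LVL1 accept rate (100 kHz)
    const double meanRoIs      = 4.7;     // mean RoI's per event
    const double timePerRoISec = 100e-6;  // mean processing time per RoI
    const double roiSizeKB     = 2.0;     // mean data volume per RoI

    const double cpuLoad = eventRateHz * meanRoIs * timePerRoISec;
    const int    nCpus   = static_cast<int>(std::ceil(cpuLoad));
    const double bwGBs   = eventRateHz * meanRoIs * roiSizeKB * 1e3 / 1e9;

    std::cout << "processors needed: " << nCpus << "\n"
              << "switching bandwidth: " << bwGBs << " GB/s\n";
}
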
Paper Model - Architecture C
[Diagram: front-end electronics (Calorimeter, Muon, TRT, SCT-Pixel) deliver 328 Gbit/s of raw data; ROL & ROB streams of 160, 308, 144 and 13 Gbit/s into ROB groups of 256, 512, 256 and 488; RSI sources (64, 32, 64, 32, 16, plus trigger) feed 155 Mbit/s data & protocol links into a 20 Gbit/s ATM switching network; the Supervisor uses 16 links; the processor farms comprise 128 selection destinations and 128 analysis-and-filter destinations]

Paper Model Diagram - Architecture B
[Diagram: LVL1 sends RoI information to the supervisor (SUPER); ~2000 ROB's across the subdetectors feed local networks into FEX processors; features pass over the global network to 16 global processors (GP); numbered labels (1-9, A-J) mark the modelled transfer steps]

Hybrid Solutions
• We are actively examining the relative merits of general-purpose versus FPGA processors for specific compute-intensive tasks.
• Both architecture B and architecture C are considering FPGA processors for full TRT scans and for b-jet scans of the SCT.

Questions and Conclusions
• Questions
  – Is sequential processing necessary?
  – Are secondary RoI's necessary?
  – Does missing ET need to be recalculated at Level 2?
  – Do we need a full TRT scan at Level 2?
  – Do we need a b-jet tag at Level 2?
  – How is the LVL2/LVL3 boundary defined?
  – Do we need preprocessing?
• Conclusions
  – A robust and comprehensive demonstrator program is underway
  – Efforts are being made to integrate elements of the separate architectures into an optimal design
  – Prospects are good for a Level 2 that is optimized in price and performance and that benefits from technological innovations in the commercial sector