The STAR Level-3 Trigger System




Jens Sören Lange, University of Frankfurt

The STAR Level-3 Trigger System

C. Adler (a), J. Berger (a), M. DeMello (b), D. Flierl (a), J. Landgraf (c), J.S. Lange (a), M.J. LeVine (c), A. Ljubicic Jr. (c), J. Nelson (d), D. Roehrich (e), J.J. Schambach (f), D. Schmischke (a), M.W. Schulz (c), R. Stock (a), C. Struck (a), P. Yepes (b)

(a) University of Frankfurt, August-Euler-Straße 6, D-60486 Frankfurt, Germany
(b) Rice University, Houston, Texas 77251, USA
(c) Brookhaven National Laboratory, Upton, New York 11973, USA
(d) University of Birmingham, Birmingham B15 2TT, United Kingdom
(e) University of Bergen, Allegaten 55, 5007 Bergen, Norway
(f) University of Texas, Austin, Texas 78712, USA

STAR Level-3 Trigger
- Online event (helix) reconstruction (input rate 100 Hz) of high-multiplicity Au+Au and p+p collider events (N_t ≈ 8000 tracks/event, N_p ≈ 45 points/track)
- STAR experiment @ RHIC
- Example applications
- Level-3 Cluster Finder
- Level-3 Track Finder
- Level-3 fast network (MYRINET, SCI)
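
For orientation: in the STAR solenoid a track is a helix whose transverse-plane projection is a circle, and the transverse momentum follows from the curvature radius as p_T [GeV/c] = 0.3 · B [T] · R [m]. A minimal C++ sketch of that relation, assuming a 0.5 T field (the field value and all names here are illustrative, not taken from the slides):

// Sketch: p_T from helix curvature in a solenoidal field.
// Assumes B = 0.5 T (illustrative; not from the slides).
constexpr double kFieldTesla = 0.5;  // assumed solenoid field [T]

// p_T [GeV/c] from the circle radius R [m] of the helix projection.
double ptFromRadius(double radiusM) {
    return 0.3 * kFieldTesla * radiusM;
}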

STAR Experiment: Solenoidal Tracker At RHIC
- Au+Au at √s ≈ 200 GeV/N
- p+p at √s ≈ 500 GeV/N
- Physics run starting in March 2000
[Detector layout figure: magnet iron, magnet coils, Central Trigger Barrel, EMC, ToF, TPC, Forward TPC, Silicon Vertex Tracker]
- 1st year: Level-3 Trigger based on TPC data only

Motivation
- TPC: 85 MB/event at 100 Hz into the DAQ
- Level 3: full event reconstruction and event selection online, to enhance rare events
- Tape: 16 MB/s (~1 Hz)
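
These numbers fix the online reduction Level 3 must deliver: 85 MB/event × 100 Hz = 8.5 GB/s off the TPC against ~16 MB/s to tape, i.e. roughly a factor 100 in event rate and a factor ~500 in data volume. A minimal sketch of the bookkeeping (numbers from the slide above; names illustrative):

// Sketch: Level-3 rate bookkeeping implied by the Motivation slide.
#include <cstdio>

int main() {
    const double eventSizeMB = 85.0;   // TPC data per event [MB]
    const double inputRateHz = 100.0;  // DAQ input rate [Hz]
    const double tapeMBs     = 16.0;   // tape bandwidth [MB/s]
    const double tapeRateHz  = 1.0;    // archived event rate [Hz]

    const double inputMBs      = eventSizeMB * inputRateHz;  // 8500 MB/s
    const double rateReduction = inputRateHz / tapeRateHz;   // factor 100
    const double dataReduction = inputMBs / tapeMBs;         // factor ~530

    std::printf("input %.0f MB/s; rate reduction %.0fx; data reduction %.0fx\n",
                inputMBs, rateReduction, dataReduction);
    return 0;
}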

Level-3 Trigger Application Examples
Au+Au collisions at √s = 200 GeV/N:
- J/ψ → e+e- as a probe for the Quark-Gluon Plasma, via an online high-p_T / invariant-mass cut (N_{J/ψ}/day: 3.5 without L3 trigger, 330 with L3 trigger)
- heavy (anti-)fragments (e.g. 4He), via an online high-p_T / dE/dx cut
- EMC calibration (high-p_T π-)
p+p collisions at √s = 500 GeV/N:
- filtering of ~700 TPC pile-up events per bunch crossing
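
The J/ψ selection hinges on a two-track invariant mass computed online, m = sqrt((E1+E2)² − |p1+p2|²) for two electron candidates. A minimal C++ sketch of that quantity (struct and function names and the cut window are illustrative, not from the STAR L3 code):

// Sketch: two-track invariant mass for an online J/psi -> e+e- cut.
#include <algorithm>
#include <cmath>

struct Track { double px, py, pz; };  // momentum components [GeV/c]

constexpr double kMe = 0.000511;      // electron mass [GeV/c^2]

double invariantMass(const Track& a, const Track& b) {
    auto energy = [](const Track& t) {
        return std::sqrt(t.px*t.px + t.py*t.py + t.pz*t.pz + kMe*kMe);
    };
    const double e  = energy(a) + energy(b);
    const double px = a.px + b.px, py = a.py + b.py, pz = a.pz + b.pz;
    const double m2 = e*e - (px*px + py*py + pz*pz);
    return std::sqrt(std::max(0.0, m2));  // guard against rounding below 0
}

// Accept the pair if m falls in a window around the J/psi mass of
// 3.097 GeV/c^2, e.g. 2.9 < m < 3.3 (illustrative window).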

Au+Au @ 200 GeV

p+p @ 500 GeV

Level 3 Sector (dataflow)
- TPC raw data → MVME 2306 receiver boards (i960): L3 Cluster Finder on i960
- cluster data, 12 MB/s per TPC sector → MYRINET → ALPHA: L3 Track Finder on ALPHA
- TPC track data: 48.7 MB/s for all TPC sectors

Level-3 Cluster Finder
- platform: Intel i960 @ 33 MHz (VxWorks); 24 sectors × 6 receiver boards × 3 i960 = 432 CPUs
- C program + inline assembler (600 lines)
- time constraint: τ_cluster ≤ 10 ms at TPC readout rate R = 100 Hz
- input: TPC ADC raw data (per sector 6912 pads × 512 time bins, ~62 MB/s per sector)
- output: cluster data (center-of-gravity, ADC sum; ~12 MB/s per sector)
- Level-3 vs. offline analysis: no cluster merging at sector partition boundaries
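
A minimal sketch of the center-of-gravity step named above: each cluster position is the ADC-weighted mean of its samples in pad and time-bin coordinates, reported together with the ADC sum. The real cluster finder is C plus inline assembler on the i960; everything named here is illustrative:

// Sketch: ADC-weighted center of gravity of one TPC cluster.
#include <vector>

struct Sample  { int pad; int timeBin; int adc; };
struct Cluster { double pad; double timeBin; long adcSum; };

Cluster centerOfGravity(const std::vector<Sample>& samples) {
    double padSum = 0.0, timeSum = 0.0;
    long adcSum = 0;
    for (const Sample& s : samples) {
        padSum  += double(s.adc) * s.pad;      // weight position by charge
        timeSum += double(s.adc) * s.timeBin;
        adcSum  += s.adc;
    }
    if (adcSum == 0) return {0.0, 0.0, 0};     // empty-cluster guard
    return { padSum / adcSum, timeSum / adcSum, adcSum };
}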

Cosmic event (7/13/1999); the next page shows a view through a TPC sector (top center)

Level-3 Track Finder
- platform: ALPHA DS-10 (21264, 64 bit, 466 MHz); hardware sqrt() on chip (~30 sqrt() calls/track); 4 MB L2 cache on chip (fast digging on track data); Linux 2.2.12
- C++ program (1500 lines)
- timing: ALPHA DS-10 466 MHz, t = 87 ms; Pentium II 450 MHz, t = 135 ms
- time constraint: τ_track < 12 × τ_event − τ_cluster = 110 ms (phase 1, now); τ_track < τ_event = 10 ms (phase 2, 2002)
- algorithm: conformal mapping [NIM A380 (1996) 582]: (a) transformation circle → straight line, (b) follow-your-nose fit
- Level-3 vs. offline analysis: no track merging at sector boundaries (low-p_T spirals); vertex constraint (conformal space)
- input: cluster data (~12 MB/s per sector); output: track data (~48 MB/s whole TPC)
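
A minimal sketch of step (a), the conformal transformation: with the vertex constraint, a cluster at (x, y) relative to the vertex maps to u = x/r², v = −y/r² with r² = x² + y², so circles through the vertex become straight lines, which the follow-your-nose step can collect with fast linear fits. The algorithm is the published one [NIM A380 (1996) 582]; the names below are illustrative:

// Sketch: conformal mapping used by the L3 track finder (step (a)).
// A circle through the origin (vertex) in (x, y) becomes a straight
// line in (u, v).
struct Hit  { double x, y; };  // TPC cluster position relative to vertex
struct CHit { double u, v; };  // conformal-space coordinates

CHit toConformal(const Hit& h) {
    const double r2 = h.x * h.x + h.y * h.y;
    return { h.x / r2, -h.y / r2 };
}

// Track finding then reduces to following hits along straight lines in
// (u, v) and fitting; the circle radius, and hence p_T, is recovered
// from the fitted line.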

Cosmics, single tracks; optimized for ε_track, not N_track (loose χ² cut)

Beam-gas event (7/23/1999)

The same beam-gas event tracked offline using the L3 Track Finder (some tracks not reconstructed because the z vertex lies outside the TPC, i.e. η/φ slice mismatch)

Level 3 Fast Network
Requirement, different from (Gigabit) Ethernet: point-to-point DMA (CPU not busy, CPU consumption ~12%)

SCI (Scalable Coherent Interface):
- very low latency, ~3 µs
- DMA in 1 step
- ring topology: 24 × Level 3 Sector → Level 3 Global

MYRINET:
- low latency, ~30 µs
- DMA in 2 steps (via SRAM)
- switch topology: 24 × Level 3 Sector → switch → Level 3 Global

Decision for MYRINET:
- no further hardware modification necessary (EPROM)
- ready for use
- driver available for ALPHA/Linux and VxWorks
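
A rough feel for the trade-off: with per-event messages of order 120 KB per sector (12 MB/s at 100 Hz), transfer time is dominated by bandwidth rather than by the 3 µs vs. 30 µs latency difference. A first-order model, t = latency + size/bandwidth; the 100 MB/s link bandwidth below is an assumption, not a number from the slides:

// Sketch: first-order transfer-time model for one sector's cluster
// data per event. Link bandwidth is an assumed value.
#include <cstdio>

double transferTimeUs(double latencyUs, double sizeKB, double bwMBs) {
    return latencyUs + sizeKB * 1000.0 / bwMBs;  // KB/(MB/s) = ms; *1000 -> us
}

int main() {
    const double sizeKB = 120.0;  // ~12 MB/s per sector at 100 Hz
    const double bwMBs  = 100.0;  // assumed link bandwidth [MB/s]
    std::printf("SCI:     %.0f us/event\n", transferTimeUs(3.0,  sizeKB, bwMBs));
    std::printf("MYRINET: %.0f us/event\n", transferTimeUs(30.0, sizeKB, bwMBs));
    return 0;
}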

STAR Level 3 Setup 12/99
[Diagram: L0 TRIGGER and Eventbuilder MASTER; Global L3: ALPHA XP1000 500 MHz, Pentium III 600 MHz; TPC Sector L3: ALPHA DS10 466 MHz; further detector inputs: SVT #1, SVT #2, FTPC]

Apex
- STAR Level-3 Trigger = ALPHA 21264 / Linux processor farm, MYRINET-interconnected
- first real data processed (cosmics, beam-gas events)
- prototype 12/99: 1 track-finder CPU per 2 TPC sectors, R = 20 Hz (design goal R = 100 Hz)
- envisaged start of the RHIC physics program: 3/01/2000