Last update: Apr 1, 2000
Speaker: Stephan Lammel

The CDF collaboration at the Fermilab Tevatron analyzes proton-antiproton interactions at a center-of-mass energy of 2 TeV. During the next collider run the experiment expects to record 1 Petabyte of data, an increase by a factor of 20. This paper gives an overview of the design of the new data handling hardware architecture and software system intended to cope with the volume of data, the rich physics potential, and a collaboration of more than 450 physicists.

The CDF Run II data handling system consists of a central analysis computer cluster with tight CPU-to-I/O coupling, a Data Catalog that keeps track of all data, a resource management system that allocates CPU time and tape I/O bandwidth, disk management and staging software, and a set of I/O modules for the analysis framework. The core of the hardware architecture will be a pool of over 28 TBytes of disk. Logically behind this pool will be two automated tape libraries with a capacity of 1 PByte. The compute systems will be located around this storage pool: a potentially heterogeneous cluster of moderately sized multi-processor systems will provide the necessary compute power.
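The interplay between the disk pool and the tape libraries described above can be illustrated with a toy staging model: files are brought onto disk from tape on first access and evicted least-recently-used when the pool fills up. This is a hypothetical sketch for illustration only; the class and method names are invented and do not correspond to CDF's actual disk management software.

```python
from collections import OrderedDict

class StagingPool:
    """Toy model of a disk pool fronting a tape library: files are
    staged in on first access and evicted least-recently-used (LRU)
    when the pool is full. Names and sizes are illustrative only."""

    def __init__(self, capacity_gb):
        self.capacity = capacity_gb
        self.used = 0
        self.cached = OrderedDict()  # filename -> size_gb, kept in LRU order

    def request(self, filename, size_gb):
        """Return True on a cache hit (file already disk-resident),
        False if it had to be staged in from the tape library."""
        if filename in self.cached:
            self.cached.move_to_end(filename)  # mark most recently used
            return True
        # Evict least-recently-used files until the new file fits on disk.
        while self.used + size_gb > self.capacity and self.cached:
            _, freed_gb = self.cached.popitem(last=False)
            self.used -= freed_gb
        self.cached[filename] = size_gb
        self.used += size_gb
        return False
```

For example, with a 10 GB pool, two 6 GB files cannot coexist on disk: requesting the second forces the first back out, so a later request for the first is again a tape stage rather than a disk hit. A real resource manager would additionally schedule tape-drive bandwidth across competing users, which this sketch omits.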
|