Last update: Apr 1, 2000
Speaker: Terence Watts

The next collider run of the Tevatron, Run II, will start in March 2001. The CDF experiment expects to record at least 1 petabyte of data. Analyzing such a volume within a collaboration of over 450 physicists requires management of limited resources such as CPU time, disk space, tape drives, and I/O bandwidth. The CDF data handling strategy is based on the use of batch queues and a disk inventory manager. The disk inventory manager will try to match the rate at which data are presented on staging disk for a user job to the rate at which that job processes them, and it will minimize duplicate staging of the same dataset for multiple jobs. For the batch system we will use the Load Sharing Facility (LSF) from Platform Computing, which will manage CPU resources and tape drives. A prototype system was built and heavily exercised to evaluate the strategy and its performance. A first mock data challenge is planned as a connectivity test whose purpose is to identify overlooked components in the data flow. Results from both tests will be presented.
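To make the duplicate-staging idea concrete, here is a minimal sketch of a reference-counted staging cache with least-recently-used eviction. All names, the interface, and the eviction policy are illustrative assumptions, not the actual CDF disk inventory manager.

```python
from collections import OrderedDict

class DiskInventoryManager:
    """Hypothetical sketch: stage each dataset at most once and
    share the staged copy among all jobs that request it."""

    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.used_gb = 0.0
        # Ordered by recency of use, so the least recently used
        # dataset is considered first for eviction.
        self.staged = OrderedDict()  # dataset -> (size_gb, ref_count)

    def request(self, dataset, size_gb):
        """Called by a batch job before it starts reading `dataset`."""
        if dataset in self.staged:
            # Already on staging disk: reuse it rather than stage again.
            size, refs = self.staged.pop(dataset)
            self.staged[dataset] = (size, refs + 1)
            return f"reuse {dataset}"
        self._make_room(size_gb)
        self.staged[dataset] = (size_gb, 1)
        self.used_gb += size_gb
        return f"stage {dataset} from tape"

    def release(self, dataset):
        """Called when a job is finished with a dataset."""
        size, refs = self.staged[dataset]
        self.staged[dataset] = (size, max(refs - 1, 0))

    def _make_room(self, size_gb):
        # Evict unreferenced datasets, least recently used first,
        # until the new dataset fits within the disk budget.
        for ds in list(self.staged):
            if self.used_gb + size_gb <= self.capacity_gb:
                break
            size, refs = self.staged[ds]
            if refs == 0:
                del self.staged[ds]
                self.used_gb -= size
```

Under this sketch, a second job asking for a dataset that is already staged simply increments its reference count, so the tape drives and staging bandwidth are used only once per dataset:

```python
dim = DiskInventoryManager(capacity_gb=500)
print(dim.request("muon-stream", 120))  # stage muon-stream from tape
print(dim.request("muon-stream", 120))  # reuse muon-stream
```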