It was a useful meeting (as ever, meeting people and chatting is the most important part!). Some issues I picked up for ATLAS sites were:
- Although 13.0.10 has been released, quite a few things are known to be broken (event generation, for instance). This means we are stuck with keeping a lot of "old" ATLAS software releases on our sites. At Glasgow we have 86GB of ATLAS software - more than 60% of the total for all VOs.
- Preparations for Computing System Commissioning and the Final Dress Rehearsal are underway. The start date seems to have slipped (it was meant to start this week). I must find out what the site involvement schedule actually is.
- The DQ2 data management system was upgraded to 0.3 last week. There were a few teething troubles, but the next release should handle many common problems much better.
- There's pressure not to run too many simulations as part of each job sent to a site - so keep the wallclock down (< 24 hours) - but this reduces the file sizes. Small files are a big problem - they are inefficient to transfer and gunge up any tape system. So they should really be merged before any migration to tape. (Problem for CASTOR though, which even puts T0D1 stuff onto tape?)
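To illustrate the merging idea, here is a minimal sketch that packs many small output files into ~1GB tarballs before they go anywhere near tape. This is only an illustration of the principle - in practice ATLAS merges at the event level with its own tools, and the function and directory names here are made up:

```python
import tarfile
from pathlib import Path

def merge_small_files(src_dir, dest_dir, target_size=1 << 30):
    """Pack small files into tarballs of roughly target_size bytes,
    so the tape system sees a few large files instead of many tiny ones."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    batch, batch_size, n_tarballs = [], 0, 0
    for f in sorted(Path(src_dir).iterdir()):
        batch.append(f)
        batch_size += f.stat().st_size
        if batch_size >= target_size:
            _write_batch(dest, n_tarballs, batch)
            n_tarballs += 1
            batch, batch_size = [], 0
    if batch:  # flush the final partial batch
        _write_batch(dest, n_tarballs, batch)
        n_tarballs += 1
    return n_tarballs

def _write_batch(dest, n, batch):
    with tarfile.open(dest / f"merged_{n:04d}.tar", "w") as tar:
        for f in batch:
            tar.add(f, arcname=f.name)
```

The point is just that the number of objects the mass storage system has to track drops by orders of magnitude, which is what the tape drives care about.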
- Event sizes keep going up. The Computing TDR had the ESD at 0.5MB per event, but currently it is 1.6MB (1.8MB for MC). A realistic target is probably 1.3MB.
- Memory footprints are rising too. 2GB is necessary for simulation, and probably for a subset of reconstruction jobs too.
- To deal with merging and pile-up jobs, worker nodes should now be specced with at least 20GB of disk space per core. At the moment jobs will try to limit their ambitions to 10GB. However, this requirement only ever seems to go up, so make sure it's accounted for in forthcoming purchases.
- Queues for ATLAS production should allow around 24 to 36 hours of CPU and wall time (N.B. this is on modern CPUs). NIKHEF are currently at 24/36 and I'm going to cut Glasgow back to 36 hours.
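For sites running Torque/PBS, limits like these are set with qmgr, roughly along these lines (the queue name "atlas" is an assumption - substitute your site's ATLAS production queue, and adjust cput down to 24:00:00 if you want the NIKHEF-style split):

```shell
# Hypothetical queue name "atlas"; 36h wall time and 36h CPU time.
qmgr -c "set queue atlas resources_max.walltime = 36:00:00"
qmgr -c "set queue atlas resources_max.cput = 36:00:00"
```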
- If you see stuck ATLAS jobs, try to investigate the problem and report it to firstname.lastname@example.org. This will help cut off the nasty tail in the ATLAS efficiency curve.