I've been invited to talk at the Future Computing in Particle Physics Workshop in Edinburgh, which has the following abstract:
Recent developments in computing and software architectures have resulted in huge potential for accelerating applications used in experimental particle physics. This is an ideal time to investigate how a significant performance boost can be achieved by the effective use of many-core and GPU architectures in a distributed computing environment, as well as utilising emerging I/O and storage technologies. This workshop aims to discuss what has been done so far in the field and what potential future development areas are feasible.

It's an exciting workshop; the downside is that it started today and I'm on the wrong side of the Atlantic! Thus, I have the pleasure of attending via videoconference. While it doesn't truly replace attending a conference - we all swear half the value of these conferences is in the discussions that happen during the breaks - there are a few things I've learned:
- No matter how early your presentation is in your timezone, show up early and ask questions about other presentations. Besides being good etiquette (if you don't plan on paying attention, decline the invitation), this lets you test the quality of the videoconference setup.
- Find a friend sitting in the remote audience to IM you during the presentation. When you're physically there, you can gauge interest levels from the audience's body language. Are they bored? Can they hear/see you? Having a spy in the audience helps you get this feedback.
- My father always says, "I can hire a monkey to stand up and read off PowerPoint slides. They are here to hear you present." The adage is still partially true, but when you present remotely, a larger-than-normal share of the information reaching the audience travels through your slides. Spend some extra time on them.
Now, onto the subject of the workshop: the future of computing in particle physics. I'll be talking about I/O. Really, it all boils down to two things:
- There is no magic bullet to make I/O faster. From what I can tell, the limitation is the complexity of our data structures. Improvements to the current I/O stack - or a new I/O stack - aren't likely to turn bad data structures into good ones.
- We demand remote I/O! Having batch-system access to the wealth of data is great... but it's time to have the ability to do remote I/O as well.
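The first point deserves a concrete illustration. Here's a toy sketch in plain Python (the flat and nested layouts are hypothetical, not any experiment's actual event model): the same values stored as one flat, contiguous buffer deserialize in a single cheap call, while the same values wrapped in nested per-event objects force the reader to reconstruct every object individually. No improvement to the underlying read path changes that asymmetry.

```python
import pickle
import struct

N = 100_000

# "Good" layout: one flat, contiguous buffer of N doubles.
flat = struct.pack(f"{N}d", *range(N))

# "Bad" layout: the same values buried in nested per-event objects.
nested = pickle.dumps([{"event": i, "hits": [{"e": float(i)}]} for i in range(N)])

# Reading the flat layout is a single bulk unpack of the buffer.
values_flat = list(struct.unpack(f"{N}d", flat))

# Reading the nested layout reconstructs 200k+ objects, then walks them.
events = pickle.loads(nested)
values_nested = [ev["hits"][0]["e"] for ev in events]

# Same physics content, wildly different deserialization work.
print(f"flat buffer:   {len(flat):,} bytes")
print(f"nested pickle: {len(nested):,} bytes")
```

Time the two reads yourself and the flat unpack wins by a wide margin; the point is that the cost lives in the data layout, not in the disk or the network underneath it.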