Processing data from interferometric telescopes requires the application of computationally expensive algorithms to relatively large data volumes in order to image the sky at the sensitivity and resolution afforded by current and future telescopes. Therefore, in addition to their numerical performance, imaging algorithms must also pay attention to computational complexity and runtime performance. As a result, algorithms R&D involves complex interactions between evolution in telescope capabilities, scientific use cases, and computing hardware and software technologies.
In this talk I will briefly describe how a radio interferometric telescope works and highlight the resulting data processing challenges for imaging with next-generation telescopes like the ngVLA. I will then discuss the general data processing landscape, along with the algorithms and computing architecture developed by the NRAO Algorithms R&D Group (ARDG) to navigate this landscape, with a focus on (near-)future needs and on hardware/software technology projections. Recently, in collaboration with the Center for High Throughput Computing, we deployed this architecture on OSG, PATh, San Diego Supercomputer Center (SDSC), and National Research Platform (NRP) resources to process a large database for the first time. This produced the deepest image ever made at radio frequencies of the Hubble Ultra-Deep Field (HUDF). I will also briefly discuss this work, the lessons learnt, and the work in progress for the challenges ahead.
Coffee, tea, ice water and light breakfast
Howard Auditorium (Fluno Center on the University of Wisconsin-Madison Campus)
We invite you to come and tell us what HTC issues or problems you have. Tell us what is missing and how we can do better. Share good ideas from you or your team. No slides please; this will be an informal and open discussion session. There will be a sign-up sheet at the registration table.
Coffee, tea, ice water
We present a list of services that support collaborations using computing pools across the global cyberinfrastructure.
We present our vision for using the globally integrated infrastructure to advance the mission of the GlueX collaboration by leveraging available computing and storage capacity. GlueX has been using distributed resources on the OSPool along with its own pool resources at a number of institutions in the US, Canada, and Europe. With lessons learned and adapted know-how, we will be able to chart a path forward toward more efficient and productive computing workflows for both simulation and data processing.
TBD