Processing data from interferometric telescopes requires applying computationally expensive algorithms to relatively large data volumes in order to image the sky at the sensitivity and resolution afforded by current and future telescopes. Therefore, in addition to their numerical performance, imaging algorithms must also pay attention to computational complexity and runtime performance. As a result, algorithms R&D involves complex interactions between evolution in telescope capabilities, scientific use cases, and computing hardware and software technologies.
In this talk I will briefly describe the working of a radio interferometric telescope and highlight the resulting data processing challenges for imaging with next-generation telescopes like the ngVLA. I will then discuss the general data processing landscape, and the algorithms and computing architecture developed by the NRAO Algorithms R&D Group (ARDG) to navigate this landscape, with a focus on (near-)future needs and on hardware/software technology projections. Recently, in collaboration with the Center for High Throughput Computing, we deployed this architecture on OSG, PATh, San Diego Supercomputer Center (SDSC), and National Research Platform (NRP) resources to process a large database for the first time. This produced the deepest image yet of the Hubble Ultra-Deep Field (HUDF) at radio frequencies. I will also briefly discuss this work, the lessons learnt, and the work in progress for the challenges ahead.
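To make the computational challenge concrete, below is a minimal, illustrative Python sketch of the core imaging step - gridding visibilities onto a uv-plane and Fourier transforming to a "dirty" image. It is a toy nearest-neighbour gridder, not the ARDG algorithms (production imagers convolve each visibility with a gridding kernel and apply further corrections), and all names and parameters are placeholders:

    import numpy as np

    def dirty_image(u, v, vis, npix, cell):
        """Toy imaging step: grid visibilities, then FFT to a dirty image.
        u, v : baseline coordinates in wavelengths
        vis  : complex visibilities
        npix : image size in pixels
        cell : pixel size in radians
        """
        grid = np.zeros((npix, npix), dtype=complex)
        duv = 1.0 / (npix * cell)            # uv-cell size = 1 / field of view
        iu = np.round(u / duv).astype(int) + npix // 2
        iv = np.round(v / duv).astype(int) + npix // 2
        ok = (iu >= 0) & (iu < npix) & (iv >= 0) & (iv < npix)
        np.add.at(grid, (iv[ok], iu[ok]), vis[ok])   # accumulate repeated cells
        # Hermitian conjugate points omitted for brevity; real imagers add them.
        return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(grid))).real

The cost scales with both the number of visibilities (gridding) and the image size (FFT), which is why growing data volumes and fidelity requirements drive the runtime concerns discussed in the talk.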
This is an out-and-back route along the Lake Mendota running path, led by Cannon Lock. It is appropriate for runners of all abilities. Meet in the Fluno Lobby at 6:30 am.
Coffee, tea, ice water and light breakfast
Howard Auditorium (Fluno Center on the University of Wisconsin-Madison Campus)
We invite you to come and tell us what HTC issues or problems you have. Tell us what is missing and how we can do better. Share good ideas from you or your team. No slides please; this will be an informal and open discussion session. There will be a sign-up sheet at the registration table.
For our Tuesday ride, we have planned a route that is ~9 miles and will likely take ~45 minutes at a comfortable pace. Please meet at 6:00 pm in the Fluno Lobby. View the map for the evening ride: https://www.strava.com/routes/3245085368685046064
For bike rentals, Madison BCycle is available at many docking stations around Madison.
City ebikes (with electric pedal assist) are rented on a first-come, first-served basis.
Single-ride pass: $7 for up to 30 minutes of riding; each additional trip of up to 30 minutes is $7 plus tax.
You can find their website here: https://madison.bcycle.com/home
On their site you can find a real-time map of all stations and the number of bikes available.
Coffee, tea, ice water
Zoom https://umich.zoom.us/j/93931301151?pwd=WkykMvSb1zRuxurMDwf2skclXqVUop.1
Live notes (please contribute) https://docs.google.com/document/d/1Nnr3yierYFS3KlBNgVs1XK8VmNsjhTn-u6e3NIpEI0Q/edit
See notes https://docs.google.com/document/d/1pjwG1LAjOPWsSrdas4WvYYfoyn8NyLuk5u4L6opNb4Q/edit?tab=t.0#heading=h.o4xh9deafqh6
US cloud Ops Organization
- Effort:
- What about M. Maeno & Armen?
- KD - Mayuko is supported by Physics Support for US user support. I don't think she is in WBS 2.3.
- KD - Armen is 50% ADCoS; the remainder is mostly K8s deployment and testing.
- 1.5 FTE @CERN
- T1/T2s technical expertise [?]
- KD - we are very thin here. Need more help.
- Communication channel(s) and meetings
- Known challenges and issues
- What and how we want to improve
- Wish list to ADC Ops
Summarize discussion and create relevant action items
Zoom https://umich.zoom.us/j/93931301151?pwd=WkykMvSb1zRuxurMDwf2skclXqVUop.1
Live Notes (Please contribute) https://docs.google.com/document/d/1Nnr3yierYFS3KlBNgVs1XK8VmNsjhTn-u6e3NIpEI0Q/edit
Cover existing and proposed USATLAS milestones.
See https://docs.google.com/spreadsheets/d/1YEEzfcXkQ_KHg1to-aSsFLE7z708c5rumFmRGDaQEmk/edit?gid=1636071618#gid=1636071618 (milestone spreadsheet WBS 2.3 working copy)
Description of changes for Apr-Jun 2024 https://docs.google.com/document/d/1QykafgLCRQtFzgozLQHnnZImbJ2zdYjSAj1Bb2S2d5s/edit#heading=h.hvne1q6a1adg
Spreadsheet summarizing WBS 2.3 milestones https://docs.google.com/spreadsheets/d/1RDuvkuOvHG6RhuUcDB42JsRxGUjDW9zcXIvvI8wPL3w/edit?gid=1747360723#gid=1747360723
Need to discuss the changes in WBS 2.3.3
How to improve our toolkit for the future (shouldn't take 0.2+ FTE for EACH HPC)
What do we do about TACC?
Small amount of effort that we need to optimize for our goals.
Potential candidates to go from R&D → (pre)production
Notes https://docs.google.com/document/d/12ZesPednkh_fp-i8K9R9eaS75fPjv7xiLI1gxJAZN94/edit
Zoom https://umich.zoom.us/j/93931301151?pwd=WkykMvSb1zRuxurMDwf2skclXqVUop.1
Link to shared Google doc with details and background https://docs.google.com/document/d/1W5yxKIVLGQWuzf7iw_izk5HPq9pm_6l9bg7Ty2XAPNY/edit
Allow for speaker transition
If you are interested in participating in the hands-on portion of Wednesday's tutorial, "Data in Flight - Delivering Data with Pelican", you will need to register at https://go.wisc.edu/cfsl43 before end-of-day Tuesday. This tutorial is aimed at those who may be interested in contributing their data to the OSDF via a Pelican data origin. Both in-person and remote attendees can participate in the tutorial, and experience with SSH and Bash commands is recommended. Registration is not required to observe the tutorial.
When you click the registration link, you'll be asked to log in using CILogon. New users will be prompted to select an "identity provider" - most major institutions are available, but if you do not see your institution you can select "ORCID", "GitHub", or "Google" (Gmail) instead (be sure to choose an option whose login information you remember!). If you have issues logging in, try using an incognito/private browser session.
Once you've logged in, you'll land on a page titled "Basic Account Creation" - click the "Begin" button. Enter your information, enter "HTC24 Pelican Tutorial" in the comment box, and click the "Submit" button. After your information has been processed, confirm your email address by following the instructions in the email you receive.
If you have issues with registration or other questions, please email support@osg-htc.org.
We present a list of services that support collaborations on computing pools on the global cyberinfrastructure.
We present our vision for using the globally integrated infrastructure to advance the mission of the GlueX collaboration by leveraging available computing and storage capacity. GlueX has been using the distributed resources of the OSPool along with its own pool resources at a number of institutions in the US, Canada, and Europe. With the lessons learned and the know-how we have adapted, we can chart paths forward to become more efficient and productive in our computing workflows for both simulation and data processing.
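As one hedged illustration of how such workflows scale out on the OSPool (the script name, arguments, and resource numbers below are placeholders, not GlueX's actual production configuration), the HTCondor Python bindings can queue a batch of simulation jobs from an access point:

    import htcondor

    # Describe one simulation job; $(seed) is substituted per queued job.
    sub = htcondor.Submit({
        "executable": "run_gluex_sim.sh",   # hypothetical wrapper script
        "arguments": "--seed $(seed)",
        "request_cpus": "1",
        "request_memory": "2GB",
        "request_disk": "4GB",
        "output": "sim_$(seed).out",
        "error": "sim_$(seed).err",
        "log": "sim.log",
    })

    schedd = htcondor.Schedd()
    # Queue one job per seed; itemdata supplies the $(seed) values.
    result = schedd.submit(sub, itemdata=iter([{"seed": str(s)} for s in range(100)]))
    print("Submitted cluster", result.cluster())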
TBD
Notes https://docs.google.com/document/d/12ZesPednkh_fp-i8K9R9eaS75fPjv7xiLI1gxJAZN94/edit
Zoom https://umich.zoom.us/j/93931301151?pwd=WkykMvSb1zRuxurMDwf2skclXqVUop.1
For our Thursday Sunset Paddle, we will depart from the Fluno Lobby at 6 pm and drive to nearby Lake Wingra. Kayaks and paddleboards are available for rental at your own expense. Some experienced kayakers will be along to assist anyone new to the sport. Wingra Boats has plenty of boats, and we have had good experience just showing up and renting what we need.
Wingra Boats Site: https://www.madisonboats.com/locations/wingra-boats/
A group led by Aaron Moate will be heading out to a nearby establishment for karaoke.
We'll rally just outside the Fluno Lobby starting at 8:45 pm and walk across the street to Mom's Bar at 9 pm. Anyone who shows up later should just go straight to Mom's Bar.
Development and execution of scientific code requires increasingly complex software stacks and specialized resources such as machines with huge system memory or GPUs. Such resources have been present in HTC/HPC clusters and used for batch processing for decades, but users struggle to adapt their software stacks and development workflows to these dedicated resources. Hence, it is crucial to enable interactive use with a low-threshold user experience, i.e. offering an SSH-like way to enter development environments or to start JupyterLab sessions from a web browser.
With a few knobs turned, HTCondor unlocks these interactive use cases of HTC and HPC resources, leveraging the resource-control functionality of a workload manager, wrapping execution in unprivileged containers, and even enabling the use of federated resources across network boundaries without loss of security.
This talk presents our positive experience with an interactive-first approach that hides the complexities of containers and different operating systems from users, enabling them to use HTC resources in an SSH-like fashion and with their JupyterLab environments. It also gives a short outlook on scaling this approach to a federated infrastructure.
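As a minimal sketch of the mechanism (the container image, script name, and resource numbers are placeholders, not the setup described in the talk), HTCondor's container universe can be driven from the Python bindings to request a containerised GPU slot; submitting a similar description with condor_submit -interactive instead yields an SSH-like shell in that environment:

    import htcondor

    # Request a containerised execution environment with a GPU - the kind of
    # slot a JupyterLab session would run in. All values are placeholders.
    sub = htcondor.Submit({
        "universe": "container",
        "container_image": "docker://jupyter/scipy-notebook",  # placeholder image
        "executable": "start_jupyterlab.sh",                   # hypothetical script
        "request_cpus": "4",
        "request_memory": "16GB",
        "request_gpus": "1",
        "output": "jupyter.out",
        "error": "jupyter.err",
        "log": "jupyter.log",
    })

    schedd = htcondor.Schedd()
    print("Submitted cluster", schedd.submit(sub).cluster())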