Quantum Computing Meets HPC: Insights from the 2nd Workshop on Broadly Accessible Quantum Computing at PEARC25

July 25, 2025
6 min read
Opinion

Organizers: Bruno Abreu, Yipeng Huang, Tommaso Macri, Santiago Nunez-Corrales

Executive Summary

The 2nd Workshop on Broadly Accessible Quantum Computing at PEARC25, held on July 21, 2025, brought together around 60 participants across academia, industry, and HPC centers to explore the practical integration of quantum computing into high-performance computing (HPC) environments. Organized by PSC, QuEra Computing, Rutgers University, and NCSA, the workshop emphasized the growing momentum behind hybrid quantum-classical systems, real-world deployments, and the need for interoperable toolchains and workforce development. It also identified critical policy and funding gaps that could hinder U.S. competitiveness if not addressed.

Key Takeaways

  • Quantum-HPC Convergence: The field is shifting from lab prototypes to production-ready quantum processing units (QPUs) integrated within data centers, mirroring how GPUs were once introduced.
  • Hybrid Workflows Are Key: Quantum is increasingly seen as a specialized accelerator within HPC pipelines, especially for tasks like simulation and optimization.
  • Toolchain & Usability Gaps: There is an urgent need for better simulators, schedulers, visualization tools, and unified software stacks to make quantum computing usable by non-specialists.
  • Workforce Demands Are Expanding: Beyond quantum software developers, there is rising demand for technicians, control engineers, and quantum-aware scientists. Modular and credentialed education programs are essential.
  • Incremental Integration Works Best: Adoption of QPUs should proceed through staged integration into existing HPC workflows (e.g., using Slurm), minimizing disruption and cost.
  • Policy & Funding Disparities: The expiration of the U.S. National Quantum Initiative Act in 2023 has created a policy vacuum. By contrast, Europe’s coordinated public funding model (e.g., EuroHPC) is enabling more cohesive infrastructure and workforce growth.
  • Call for Renewed U.S. Action: There is a need for reauthorization or replacement of the NQI, more investment in modular and open quantum infrastructure, and enhanced international coordination.

In-depth analysis of the main topics

1. Laboratory to Data Center: Bridging Quantum and HPC

Speakers emphasized the ongoing transition of quantum computing from experimental setups in physics laboratories to production-scale deployments within data centers and supercomputing facilities. This shift reflects a growing focus on making quantum systems practically usable, reliable, and accessible for real-world scientific workflows.

Laura Schulz (Argonne National Laboratory, formerly Leibniz Supercomputing Centre) provided a comprehensive overview of this evolution, drawing on her experience leading the quantum program for Munich Quantum Valley. She described how the Bavarian government has supported the integration of quantum computers—across multiple modalities—into high-performance computing (HPC) environments. Her talk highlighted the operational deployment of quantum systems like QPUs and their tight coupling to classical supercomputers. She also noted how the Munich Quantum Software Stack (MQSS) is enabling broader accessibility and control of diverse quantum systems within data center settings.

Erik Garcell (Classiq) reinforced the narrative from the software perspective, framing quantum computers as quantum processing units (QPUs) that are increasingly viewed as part of the broader HPC stack, alongside CPUs, GPUs, and TPUs. He stressed that quantum systems must be designed with integration in mind, emphasizing hybrid workflows where quantum excels at specific tasks—such as simulation or optimization—while classical systems handle the rest.
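
To make the accelerator framing concrete, here is a minimal sketch of the shape such a hybrid workflow often takes: a classical optimizer drives a parameterized quantum routine, and the QPU is called only to evaluate the quantum-specific objective. The submit_circuit function below is a hypothetical stand-in for a vendor SDK call (faked analytically here so the loop runs); it is not drawn from any system presented at the workshop.

```python
# Minimal sketch of a hybrid quantum-classical loop: the classical side owns
# the optimization, and the QPU is invoked only for the quantum subroutine.
# submit_circuit is a hypothetical placeholder for a vendor SDK call; here it
# returns a fake analytic value so the example runs without hardware.
import random
from typing import Callable, Sequence


def submit_circuit(params: Sequence[float], shots: int = 1000) -> float:
    """Placeholder QPU call returning an estimated expectation value."""
    # A real backend would compile a parameterized circuit, run `shots`
    # repetitions on a QPU or simulator, and post-process the counts.
    return sum(p * p for p in params) + random.gauss(0.0, 0.01)


def finite_difference_grad(f: Callable[[Sequence[float]], float],
                           params: list, eps: float = 1e-2) -> list:
    """Estimate gradients with a few extra quantum evaluations."""
    grad = []
    for i in range(len(params)):
        shifted = list(params)
        shifted[i] += eps
        grad.append((f(shifted) - f(params)) / eps)
    return grad


def optimizer_step(params: list, grad: list, lr: float = 0.1) -> list:
    """Plain gradient-descent update, handled entirely on classical hardware."""
    return [p - lr * g for p, g in zip(params, grad)]


if __name__ == "__main__":
    params = [0.8, -0.5]
    for _ in range(25):
        grad = finite_difference_grad(submit_circuit, params)
        params = optimizer_step(params, grad)
    print("optimized parameters:", params)
    print("final objective:", submit_circuit(params))
```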

John Towns (NCSA) and Torey Battelle (Arizona State University) also contributed to this theme during the HPC integration panel. They discussed practical strategies for incrementally integrating quantum systems into existing HPC infrastructures, drawing analogies to the early adoption of GPUs. They emphasized that quantum-classical integration must prioritize researcher usability, job scheduling, and software stack coherence—building tools that allow quantum systems to serve as accelerators within familiar HPC environments.

Across these talks and discussions, it became clear that advancing quantum computing depends not only on technological breakthroughs in qubit performance, but also on embedding these systems into the digital infrastructure of scientific computing—ensuring scalability, interoperability, and sustained accessibility for research communities.

2. Practical Integration Strategies

Echoing real-world initiatives like Germany’s Q-Exa project and the development of the Munich Quantum Software Stack (MQSS), the workshop emphasized the growing importance of hybrid quantum-classical models. Laura Schulz detailed how these initiatives are enabling the co-location of quantum processors—such as superconducting and ion-trap systems—alongside classical supercomputers within production data centers. She underscored the need for unified control layers, adaptive middleware, and sensor-driven operational intelligence to support seamless integration.

Several talks and panels expanded on this vision. Erik Garcell described how software abstraction and compiler technologies can enable quantum programs to be deployed flexibly across heterogeneous backends. In a later panel, Qiang Guan and Vipin Chaudhary addressed the software development challenges in hybrid workflows, such as noise management, visualization, and the scheduling of quantum jobs with variable shot requirements.

Together, these discussions examined deployment models—from loose coupling via cloud gateways to tightly integrated on-premises systems—highlighting the importance of adaptable scheduling strategies, workload orchestration, and a consistent software stack that can span diverse quantum modalities (e.g., superconducting, trapped ions, neutral atoms). The overarching message was clear: as quantum hardware becomes more diverse and capable, it must be interoperable, programmable, and operationally aligned with HPC systems if it is to deliver on its scientific and industrial promise.
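
As a rough illustration of what that coupling spectrum can mean at the software level, the sketch below defines one backend interface with two hypothetical implementations, a loosely coupled cloud gateway and a tightly coupled on-premises device, plus a local simulator stand-in so the example actually runs. All class and method names are invented for this illustration; the point is only that application code can stay identical across deployment models.

```python
# Sketch of a backend abstraction spanning deployment models, from loose
# cloud coupling to tight on-premises integration. All classes and method
# names are invented for this illustration; a production stack would
# delegate to a vendor SDK or an on-site control system.
import random
from abc import ABC, abstractmethod


class QuantumBackend(ABC):
    """Single interface the application layer codes against."""

    @abstractmethod
    def run(self, circuit: str, shots: int) -> dict:
        """Execute a circuit description and return measurement counts."""


class CloudGatewayBackend(QuantumBackend):
    """Loose coupling: jobs travel over the network to a hosted QPU service."""

    def __init__(self, endpoint: str, token: str):
        self.endpoint, self.token = endpoint, token

    def run(self, circuit: str, shots: int) -> dict:
        # In practice: submit the circuit to self.endpoint and poll for results.
        raise NotImplementedError("wire this to a provider's remote API")


class OnPremBackend(QuantumBackend):
    """Tight coupling: jobs go straight to a co-located device queue."""

    def __init__(self, device_queue: str):
        self.device_queue = device_queue

    def run(self, circuit: str, shots: int) -> dict:
        # In practice: hand the compiled program to the local control stack.
        raise NotImplementedError("wire this to the on-site control system")


class LocalSimulatorBackend(QuantumBackend):
    """Stand-in backend so the sketch runs end to end without hardware."""

    def run(self, circuit: str, shots: int) -> dict:
        counts = {"0": 0, "1": 0}
        for _ in range(shots):
            counts[random.choice(("0", "1"))] += 1
        return counts


def execute(backend: QuantumBackend, circuit: str, shots: int = 1000) -> dict:
    """Application code is unchanged regardless of deployment model."""
    return backend.run(circuit, shots)


if __name__ == "__main__":
    print(execute(LocalSimulatorBackend(), circuit="bell_pair", shots=200))
```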

3. Tools, Visualizations & Noise Mitigation

Panels led by toolchain and software experts—particularly Qiang Guan (Kent State University) and Vipin Chaudhary (Case Western Reserve University)—highlighted key tooling gaps hindering broader adoption of quantum computing within high-performance environments. Guan, who recently transitioned into quantum computing from the HPC and cloud computing domain, emphasized the urgent need for improved simulators, visualization tools, and programming abstractions that make quantum systems accessible to domain scientists without deep quantum backgrounds.

The discussion stressed the importance of developing domain-specific languages (DSLs) and compilers that can bridge quantum and classical workflows while managing quantum-specific constraints such as noise, qubit reuse, and backend-specific gate sets. There was also a strong call for better scheduling tools capable of dynamically allocating resources based on shot counts, qubit availability, and hardware latency—a major bottleneck in hybrid workloads.
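
The shot-count concern is easy to picture with a toy example. The sketch below, using made-up job parameters, orders a queue of hybrid jobs by their expected device time, one simple heuristic a resource manager could apply; a real scheduler would also account for fairness, calibration windows, and dependencies on classical stages.

```python
# Toy illustration of shot-aware scheduling for hybrid workloads.
# Job fields and latency numbers are made up for the example; a real
# scheduler would also weigh queue fairness, calibration windows, and
# classical-stage dependencies.
from dataclasses import dataclass


@dataclass
class QuantumJob:
    name: str
    shots: int                 # requested repetitions of the circuit
    per_shot_latency_s: float  # estimated device time per shot
    setup_latency_s: float     # compilation, loading, calibration overhead

    @property
    def expected_runtime_s(self) -> float:
        return self.setup_latency_s + self.shots * self.per_shot_latency_s


def schedule(jobs: list) -> list:
    """Order jobs by expected runtime (shortest first) to cut mean wait time."""
    return sorted(jobs, key=lambda j: j.expected_runtime_s)


if __name__ == "__main__":
    queue = [
        QuantumJob("vqe_sweep", shots=50_000, per_shot_latency_s=2e-4, setup_latency_s=5.0),
        QuantumJob("qaoa_depth2", shots=4_000, per_shot_latency_s=2e-4, setup_latency_s=3.0),
        QuantumJob("calibration_check", shots=1_000, per_shot_latency_s=2e-4, setup_latency_s=1.0),
    ]
    for job in schedule(queue):
        print(f"{job.name}: ~{job.expected_runtime_s:.1f}s expected device time")
```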

These improvements aim not only to make quantum systems more usable for HPC practitioners but also to support workflow reproducibility, debugging, and interoperability across platforms. The panelists noted that while multiple hardware vendors offer their own toolchains, the lack of unified benchmarks and representations remains a major pain point. They emphasized that building robust software infrastructure now—especially for educators and developers—will be critical to scaling future quantum-classical systems efficiently.

4. Education & Workforce Development

The panel on education and workforce development—featuring Josephine Meyer (University of Colorado Boulder), Douglas Jennewein (Arizona State University), and David Liu (Purdue University)—underscored the critical need to align quantum education with emerging roles in both industry and research. They pointed to a growing demand for a diverse set of professionals: not only quantum software engineers, but also cryogenic systems technicians, quantum-aware data scientists, control engineers, and hybrid algorithm developers.

Meyer emphasized that educational programs should go beyond technical instruction to include credentialing pathways that reflect the actual competencies required in the field. Rather than relying on abstract or overly theoretical content, she advocated for targeted skill development aligned with job-specific roadmaps. Douglas Jennewein added that successful workforce programs must include collaboration between universities and industry, ensuring students gain exposure to real-world use cases and infrastructure.

David Liu argued for a foundational shift in curriculum design, recommending that linear algebra, discrete mathematics, statistics, and optimization be introduced early—potentially even before calculus—for students pursuing quantum pathways. He suggested that computer science students could focus on quantum languages and abstraction layers, while electrical engineering students might benefit from more hardware-oriented content, such as noise characterization and control systems.

Panelists also warned against investing in large-scale training initiatives without evidence-based teaching methods or clear alignment with evolving industry needs. Instead, they advocated for interdisciplinary programs and modular curricula that can be adapted to learners’ backgrounds and career goals. European models, particularly those emphasizing technician-level training and workforce credentialing, were cited as potential templates for U.S. institutions to follow.

5. Incremental Adoption via HPC Infrastructure

Participants and panelists emphasized that the integration of quantum computing into high-performance computing (HPC) environments should follow an incremental, pragmatic path, mirroring earlier transitions like the adoption of GPUs. John Towns (NCSA) and Torey Battelle (Arizona State University) drew direct comparisons between the early days of GPU acceleration and the current state of quantum systems, suggesting that quantum processing units (QPUs) should be treated as specialized accelerators within the existing HPC stack.

This approach minimizes disruption by leveraging established tools and workflows, such as Slurm, which can already support extensions for QPU job types. Rather than requiring new programming paradigms from scratch, this model encourages users to incorporate quantum resources as part of a broader compute pipeline—offloading only specific sub-tasks (e.g., optimization kernels, quantum subroutines) to the QPU while maintaining the bulk of computation on classical resources.
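
To illustrate what that staged model can look like from a user's seat, the sketch below builds a Slurm batch script whose quantum step requests a generic resource, then submits it with sbatch. Slurm's GRES mechanism, the #SBATCH directives, and the sbatch command are standard; treating a QPU as a "qpu" GRES type is a site-specific assumption, and the three Python stage scripts named in the job are hypothetical placeholders.

```python
# Sketch of submitting a hybrid job through an existing Slurm deployment.
# Slurm's GRES mechanism and `sbatch` are real; the `qpu` GRES type and the
# three stage scripts are site-specific assumptions invented for this example.
import subprocess
import tempfile

BATCH_SCRIPT = """#!/bin/bash
#SBATCH --job-name=hybrid-demo
#SBATCH --nodes=1
#SBATCH --time=00:30:00
#SBATCH --gres=qpu:1          # only valid if the site defines a 'qpu' GRES

# Classical pre-processing stays on ordinary compute resources.
python prepare_inputs.py

# Offload only the quantum subroutine to the QPU-attached resource.
python run_quantum_subroutine.py --shots 2000

# Classical post-processing closes the pipeline.
python analyze_results.py
"""


def submit() -> str:
    """Write the batch script to disk and hand it to sbatch."""
    with tempfile.NamedTemporaryFile("w", suffix=".sbatch", delete=False) as f:
        f.write(BATCH_SCRIPT)
        path = f.name
    result = subprocess.run(["sbatch", path], capture_output=True, text=True, check=True)
    return result.stdout.strip()  # e.g. "Submitted batch job <id>"


if __name__ == "__main__":
    print(submit())
```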

The panel also highlighted the need for shared programming and scheduling interfaces to manage hybrid workloads, with particular attention to orchestration tools that can handle dependencies between classical and quantum stages. They noted that this model enables HPC centers to experiment with QPU integration without committing to full-stack quantum infrastructure, facilitating a more agile and sustainable path to adoption.
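
For the dependency handling mentioned above, one option that requires nothing beyond a standard Slurm installation is job chaining with the --dependency flag, as in the hedged sketch below; the three stage scripts it submits are again hypothetical placeholders.

```python
# Sketch of chaining classical and quantum stages with Slurm job dependencies.
# `sbatch --parsable` and `--dependency=afterok:<id>` are standard Slurm;
# the three stage scripts are invented placeholders for this illustration.
import subprocess


def sbatch(script: str, depends_on: str = "") -> str:
    """Submit a script via sbatch --parsable and return the numeric job ID."""
    cmd = ["sbatch", "--parsable"]
    if depends_on:
        cmd.append(f"--dependency=afterok:{depends_on}")
    cmd.append(script)
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    return out.strip().split(";")[0]  # --parsable prints "<jobid>" or "<jobid>;<cluster>"


if __name__ == "__main__":
    prep_id = sbatch("classical_prepare.sbatch")                      # classical stage
    qpu_id = sbatch("quantum_subroutine.sbatch", depends_on=prep_id)  # waits for prep
    post_id = sbatch("classical_analyze.sbatch", depends_on=qpu_id)   # waits for QPU stage
    print("pipeline job IDs:", prep_id, qpu_id, post_id)
```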

Finally, speakers stressed the role of HPC staff as critical intermediaries—training users, managing QPU resources, and guiding research teams on when and how quantum acceleration makes sense. This perspective positions HPC centers not only as compute providers, but also as on-ramps for quantum exploration, helping bridge the gap between traditional computing and emerging quantum workflows.

6. Policy Gaps & Funding Models

In the policy-focused panel, Erik Garcell (Classiq) and Travis Scholten (IBM) led a thoughtful discussion on the contrasting approaches to quantum computing funding and governance between Europe and the United States. The speakers noted that Europe has taken a centralized, government-led approach, with initiatives like EuroHPC and national programs making substantial public investments in quantum infrastructure, deployment, and workforce development.

In contrast, the U.S. model is more fragmented and venture capital-driven, relying heavily on private sector innovation and competitive funding programs. While this model has led to rapid advances in hardware and startup activity, the speakers expressed concern that it lacks long-term, coordinated planning, particularly in areas such as education, infrastructure integration, and interagency collaboration.

A major point of concern raised by the panel was the expiration of the National Quantum Initiative (NQI) Act in 2023. Although the NQI had helped unify federal efforts across agencies like DOE, NIST, and NSF, its lapse has created uncertainty around the continuity of public investment in the quantum ecosystem. Garcell and Scholten emphasized the urgent need for renewed legislative and executive action to reauthorize the NQI or launch a successor program with expanded focus on full-stack development, supply chain resilience, and public-private R&D partnerships.

The panel also discussed the importance of modular, open-source infrastructure and pre-competitive research as areas where federal support could create broad community benefits. They highlighted the DARPA Quantum Benchmarking Initiative as a promising example of rigorous, publicly funded validation of quantum systems. Finally, the conversation touched on U.S. quantum diplomacy, with panelists advocating for international coordination, particularly in setting standards, sharing open research, and managing strategic dependencies in the global quantum supply chain.

