The initial wide adoption of Quantum Computing will happen through integration with large classical systems. Leading technology companies are developing, or already offering publicly, tools for the orchestration of hybrid computing (for example IBM Quantum Serverless). At the same time, the majority of the algorithms studied today use quantum computers as accelerators for specific tasks embedded in broader classical implementations, variational algorithms being a notable example. In this context, several countries, in Europe in particular, are investing in the integration of Quantum Computing with classical HPC technologies (EuroHPC+EuroQCS).

It is important for CERN, as a major actor in the High Energy Physics community, not only to take part in this strategy, but to play a leading and coordinating role, ensuring that the requirements of the HEP community at large are taken into account while providing compelling use cases for the co-development of infrastructures and algorithms. CERN is at the heart of the Worldwide LHC Computing Grid, the global collaboration that has underpinned all of the LHC scientific results. Continuing this leading role in the field of hybrid computing means, today, pursuing the development of exascale infrastructures built from increasingly heterogeneous systems, including non-von Neumann architectures ranging from GPUs, to more exotic deep-learning ASICs, to tensor-streaming technologies, and to Quantum Computers.

This Centre of Competence will promote a comprehensive approach toward the construction of a hybrid distributed computing infrastructure, leveraging CERN's historical expertise in distributed computing, in collaboration with quantum technology experts in industry, academia, and HPC centres. In particular, it will build on the initial links established by QTI Phase 1 with HPC infrastructures in the Member States, at both the strategic and the technical level. Examples include the Barcelona Supercomputing Center (with ongoing discussions with Quantum Spain and Qilimanjaro Quantum Tech), Cineca and its Quantum Computing Lab, and the Jülich Supercomputing Centre and the Leibniz Supercomputing Centre, with which a joint research project is already in place. The Centre of Competence will be implemented using an end-to-end strategy covering infrastructure design, the definition and optimisation of hybrid algorithms, and the implementation of prototype applications in the field of High Energy Physics.



Objectives


Current and future activities

Ongoing projects

Fundamental work is being done today, in the HEP community and in many other scientific and industrial domains, to design efficient, robust training algorithms for different types of learning networks.

Initial pilot projects have been set up at CERN in collaboration with other HEP institutes worldwide (as part of the CERN openlab quantum-computing programme in the IT Department) on quantum machine learning (QML). These are developing basic prototypes of quantum versions of several algorithms, which are being evaluated by LHC experiments.

Information about some of the ongoing projects can be found below.

The goal of this project is to develop quantum algorithms to help optimise how data is distributed for storage in the Worldwide LHC Computing Grid (WLCG), which consists of 167 computing centres spread across 42 countries. Initial work focuses on the specific case of the ALICE experiment: we are trying to determine the optimal storage, movement, and access patterns for the data produced by this experiment in quasi-real time. This would improve resource allocation and usage, thus leading to increased efficiency in the broader data-handling workflow.
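
To make the underlying optimisation problem concrete, the sketch below casts a tiny, invented data-placement instance as a QUBO-style cost function of the kind that quantum optimisers (for example QAOA or quantum annealing) can address. The dataset names, sizes, and costs are illustrative only, and a brute-force search stands in for the quantum solver; this is not the project's actual formulation.

```python
# Toy illustration (not the project's actual formulation): casting dataset
# placement as a QUBO-style cost function that a quantum optimiser (QAOA,
# quantum annealing) could minimise. All names and numbers are invented.
import itertools

datasets = {"run1_AOD": 3, "run2_AOD": 5}   # dataset -> size (TB), hypothetical
sites = {"site_A": 6, "site_B": 4}          # site -> free capacity (TB), hypothetical
access_cost = {                             # (dataset, site) -> expected access cost
    ("run1_AOD", "site_A"): 1.0, ("run1_AOD", "site_B"): 2.5,
    ("run2_AOD", "site_A"): 2.0, ("run2_AOD", "site_B"): 1.2,
}

# One binary variable per (dataset, site) pair: 1 means "store a replica there".
variables = list(itertools.product(datasets, sites))
PENALTY = 10.0  # weight of the constraint-violation terms

def energy(assignment):
    """Access cost plus penalty terms for violated constraints."""
    x = dict(zip(variables, assignment))
    cost = sum(access_cost[v] * x[v] for v in variables)
    for d in datasets:  # each dataset must be stored at exactly one site
        cost += PENALTY * (sum(x[(d, s)] for s in sites) - 1) ** 2
    for s, cap in sites.items():  # site capacity must not be exceeded
        used = sum(size * x[(d, s)] for d, size in datasets.items())
        # NB: a true QUBO would encode this inequality with slack variables.
        cost += PENALTY * max(0, used - cap) ** 2
    return cost

# Brute force stands in for the quantum optimiser on this 4-variable toy instance.
best = min(itertools.product([0, 1], repeat=len(variables)), key=energy)
print({v: b for v, b in zip(variables, best) if b})
```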

The goal of this project is to develop quantum machine-learning algorithms for the analysis of particle collision data from the LHC experiments. The particular example chosen is the identification and classification of supersymmetry signals against the Standard Model background.
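
The generic pattern behind such signal-versus-background studies is a variational quantum classifier: event features are encoded into a parameterised circuit whose measured expectation value serves as the classification score. The sketch below uses PennyLane purely for illustration (not necessarily the project's toolkit) and trains on random stand-in data rather than LHC events.

```python
# Minimal sketch of a variational quantum classifier on random stand-in data.
# PennyLane is an illustrative choice, not necessarily the project's toolkit.
import numpy as onp                    # plain NumPy for the toy data
import pennylane as qml
from pennylane import numpy as np      # autograd-aware NumPy for the weights

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def classifier(weights, features):
    qml.AngleEmbedding(features, wires=range(n_qubits))          # encode the event
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits)) # trainable layers
    return qml.expval(qml.PauliZ(0))                             # score in [-1, 1]

def cost(weights, X, y):
    # Squared-error loss between circuit scores and labels (+1 signal, -1 background).
    loss = 0.0
    for features, label in zip(X, y):
        loss = loss + (classifier(weights, features) - label) ** 2
    return loss / len(X)

# 20 random two-feature "events", labelled by a simple rule so training can succeed.
onp.random.seed(0)
X = onp.random.uniform(0, onp.pi, size=(20, n_qubits))
y = onp.where(X.sum(axis=1) > onp.pi, 1.0, -1.0)

shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits)
weights = np.array(onp.random.uniform(0, 2 * onp.pi, size=shape), requires_grad=True)

opt = qml.GradientDescentOptimizer(stepsize=0.2)
for _ in range(30):
    weights = opt.step(lambda w: cost(w, X, y), weights)
print("final training loss:", cost(weights, X, y))
```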

The goal of this project is to explore the feasibility of using quantum algorithms to help track the particles produced by collisions in the LHC more efficiently. This is particularly important as the rate of collisions is set to increase dramatically in the coming years.
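
Whichever quantum algorithm is ultimately explored, the difficulty of tracking is combinatorial: hits on successive detector layers can be combined into a rapidly growing number of track candidates, from which a globally consistent subset must be selected. The toy sketch below, with invented hit positions and no quantum algorithm implied, illustrates why a purely local selection is not enough.

```python
# Toy illustration of the combinatorics behind track finding; all hit positions
# are invented and no quantum algorithm is implied by this snippet.
import itertools

# x-positions of hits recorded on three successive detector layers.
layer_hits = {
    1: [0.0, 1.0, 2.0],
    2: [0.1, 2.0, 3.0],
    3: [0.2, 3.0, 4.0],
}

def bending(h1, h2, h3):
    """Deviation from a straight line through the three hits (smaller is better)."""
    return abs(h3 - 2 * h2 + h1)

# Every cross-layer combination is a track candidate: already 3 x 3 x 3 = 27 here,
# and the count explodes as detector occupancy grows with the collision rate.
candidates = list(itertools.product(layer_hits[1], layer_hits[2], layer_hits[3]))

# A local cut keeps nearly straight triplets, but "ghost" combinations that share
# hits with genuine tracks survive too; resolving them needs a global choice of
# mutually consistent candidates, which is what makes the problem hard.
tracks = [c for c in candidates if bending(*c) < 0.3]
print(f"{len(candidates)} candidates, {len(tracks)} pass the straightness cut: {tracks}")
```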

The collaboration with Cambridge Quantum Computing (CQC) is investigating the advantages and challenges related to the integration of quantum computing into simulation workloads. This work is split into two main areas of R&D: (i) developing quantum generative adversarial networks (GANs) and (ii) testing the performance of quantum random number generators.
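
As an illustration of the second area, the sketch below applies the NIST frequency ("monobit") test, one of the standard statistical checks used when evaluating random number generators, to a pseudo-random stand-in bitstream; in the actual study the bits would come from the quantum device.

```python
# Sketch of one standard statistical check used when evaluating a random number
# generator: the NIST SP 800-22 frequency ("monobit") test. The bitstream here is
# a pseudo-random stand-in for output from the quantum device.
import math
import random

def monobit_pvalue(bits):
    """p-value of the NIST frequency test; p < 0.01 flags non-randomness."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)      # map 0/1 to -1/+1 and sum
    return math.erfc(abs(s) / math.sqrt(n) / math.sqrt(2))

random.seed(0)
bits = [random.getrandbits(1) for _ in range(100_000)]  # stand-in bitstream
print(f"monobit p-value: {monobit_pvalue(bits):.3f}")
```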

This project is investigating the use of quantum support vector machines (QSVMs) for the classification of particle collision events that produce a certain type of decay for the Higgs boson. Specifically, such machines are being used to identify instances where a Higgs boson fluctuates for a very short time into a top quark and a top anti-quark, before decaying into two photons.
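
Conceptually, a QSVM replaces the classical kernel of a support vector machine with a quantum one: the similarity between two events is the overlap of the quantum states into which their features are encoded, and a classical SVM then consumes the resulting kernel matrix. The sketch below uses PennyLane and scikit-learn as illustrative tools (not necessarily the project's stack) on invented two-feature events.

```python
# Minimal sketch of the QSVM idea: a quantum circuit defines the kernel (overlap
# between encoded events) and a classical SVM consumes it. PennyLane and
# scikit-learn are illustrative choices; the "events" below are invented.
import numpy as np
import pennylane as qml
from sklearn.svm import SVC

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def overlap_circuit(x1, x2):
    # Encode event x1, then apply the inverse encoding of x2; the probability of
    # measuring the all-zero state equals the fidelity of the two encoded states.
    qml.AngleEmbedding(x1, wires=range(n_qubits))
    qml.adjoint(qml.AngleEmbedding)(x2, wires=range(n_qubits))
    return qml.probs(wires=range(n_qubits))

def quantum_kernel(x1, x2):
    return overlap_circuit(x1, x2)[0]

# Invented toy data: 20 events with two kinematic-like features and binary labels.
rng = np.random.default_rng(1)
X = rng.uniform(0, np.pi, size=(20, n_qubits))
y = (X[:, 0] + X[:, 1] > np.pi).astype(int)

# Gram matrix of quantum kernel values, handed to a classical SVM.
K = np.array([[quantum_kernel(a, b) for b in X] for a in X])
svm = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", svm.score(K, y))
```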