The initial wide adoption of Quantum Computing will happen through integration with large classical systems. Leading technology companies are developing, or already offering publicly, tools for the orchestration of hybrid computing (for example IBM Quantum Serverless). At the same time, the majority of the algorithms studied today use quantum computers as accelerators for specific tasks embedded in broader classical implementations, variational algorithms being a notable example. In this context, several countries, in Europe in particular, are investing in the integration of Quantum Computing and classical HPC technologies (EuroHPC+EuroQCS). It is important for CERN, as a major actor in the High Energy Physics community, not only to take part in this strategy, but to play a leading and coordinating role, making sure that the requirements of the HEP community at large are taken into account while providing compelling use cases for the co-development of infrastructures and algorithms. CERN is at the heart of the global collaboration that underpins all the LHC scientific results, the Worldwide LHC Computing Grid. Continuing this leading role in the field of hybrid computing means, today, pursuing the development of exascale infrastructures built from increasingly heterogeneous systems, including non-von Neumann architectures ranging from GPUs to more exotic deep-learning ASICs, tensor streaming technologies, and Quantum Computers.

This Centre of Competence will promote a comprehensive approach toward the construction of a hybrid distributed computing infrastructure, leveraging CERN's historical expertise in distributed computing, in collaboration with quantum technology experts in industry, academia, and HPC centres. In particular, it will build on the initial links established during QTI Phase 1 with HPC infrastructures in the Member States, at both a strategic and a technical level. Examples include the Barcelona Supercomputing Center (with ongoing discussions with Quantum Spain and Qilimanjaro Quantum Tech), Cineca and its Quantum Computing Lab, the Jülich Supercomputing Centre, and the Leibniz Supercomputing Centre, with which a joint research project is already in place. This Centre of Competence will be implemented using an end-to-end strategy covering infrastructure design, the definition and optimisation of hybrid algorithms, and the implementation of prototype applications in the field of High Energy Physics.

 

Physics Theories Simulation

Quantum field theories, by their nature, are rife with infinities. To connect theoretical predictions from QFT calculations to experimental results, these infinities must be dealt with through the processes of regularization and renormalization. The only known generic method for doing so non-perturbatively is to implement the field theory on a lattice, i.e. to discretize space-time. The dynamics of the discretized theory is then simulated on powerful supercomputers.

Lattice simulations currently provide the only ab-initio method for extracting information about low-energy quantum chromodynamics and nuclear physics with quantifiable errors. Current lattice QCD simulations successfully predict light hadron masses, scattering parameters for certain few-body scattering processes, and the spectra of several light hadrons. However, due to the limitations of Monte Carlo importance sampling, there are many questions that cannot be addressed with these tools, including the structure of the QCD phase diagram (particularly at large nuclear density), the real-time behaviour of quark-gluon plasmas, and even the masses of all but the lightest hadrons and nuclei.

As the systems of interest are intrinsically quantum, it is natural to simulate them on a quantum device. One of our goals is therefore the development of quantum algorithms for lattice simulations.
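
To give a flavour of the primitives involved, the sketch below builds a first-order Trotterized time-evolution circuit for a small transverse-field Ising chain, a standard toy stand-in for a discretized lattice theory. It assumes Qiskit is available; the number of sites, couplings, and step size are illustrative choices rather than parameters of any planned study.

```python
# Hypothetical sketch: first-order Trotterized time evolution of a small
# transverse-field Ising chain, used here as a toy stand-in for a
# discretized lattice theory. All parameters are illustrative only.
from qiskit import QuantumCircuit

n_sites = 6            # lattice sites (one qubit per site)
J, h = 1.0, 0.8        # coupling and transverse field (toy values)
dt, n_steps = 0.1, 20  # Trotter step size and number of steps

qc = QuantumCircuit(n_sites)
for _ in range(n_steps):
    # exp(-i J dt Z_i Z_{i+1}) on nearest neighbours
    for i in range(n_sites - 1):
        qc.rzz(2 * J * dt, i, i + 1)
    # exp(-i h dt X_i) on every site
    for i in range(n_sites):
        qc.rx(2 * h * dt, i)

qc.measure_all()
print("circuit depth for", n_steps, "Trotter steps:", qc.depth())
```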

The results will also benefit our ability to learn about the properties of high-energy QCD (e.g. the parton showers seen in LHC collisions), whose study on classical computers likewise suffers from the intrinsic limitations of Monte Carlo sampling.

Another area taking centre stage in physics research today is the physics of neutrino oscillations (the periodic conversion of one neutrino flavour into another during propagation). While neutrino oscillations are well understood in the context of terrestrial experiments, their dynamics in extreme environments such as supernova cores still evade a full description using classical algorithms. In these environments, neutrino densities are so high that neutrinos influence one another, making the flavour evolution equations highly non-linear. As this is an intrinsically quantum-mechanical problem, we aim to develop methods for simulating collective neutrino oscillations on a quantum computer.
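
A minimal classical sketch helps illustrate both the natural qubit mapping and the difficulty: in a commonly used simplified model, each neutrino is treated as a two-level flavour system with a one-body oscillation term and an all-to-all two-body interaction, so the state space grows as 2^N. The toy code below, assuming NumPy and SciPy, builds such a Hamiltonian for a handful of neutrinos and evolves an initial flavour configuration exactly; the frequencies, coupling strength, and evolution time are illustrative only.

```python
# Hypothetical toy model of collective neutrino oscillations: each
# neutrino is a two-level (qubit-like) flavour system with a one-body
# oscillation term and an all-to-all two-body coupling. Exact evolution
# scales as 2^N, which is why larger systems are candidates for quantum
# simulation. All values here are illustrative only.
import numpy as np
from scipy.linalg import expm

N = 4                              # number of neutrinos (kept tiny on purpose)
omega = np.linspace(0.5, 1.0, N)   # toy vacuum-oscillation frequencies
mu = 1.0                           # toy neutrino-neutrino coupling strength

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on(site_ops):
    """Tensor product of single-neutrino operators, identity elsewhere."""
    out = np.array([[1.0 + 0j]])
    for k in range(N):
        out = np.kron(out, site_ops.get(k, I2))
    return out

# One-body term: sum_i (omega_i / 2) * sigma^z_i  (toy basis choice)
H = sum(0.5 * omega[i] * op_on({i: sz}) for i in range(N))
# Two-body term: (mu / 2N) * sum_{i<j} sigma_i . sigma_j  (all-to-all)
for i in range(N):
    for j in range(i + 1, N):
        H += (mu / (2 * N)) * (op_on({i: sx, j: sx})
                               + op_on({i: sy, j: sy})
                               + op_on({i: sz, j: sz}))

# Start with all neutrinos in the same flavour state |0...0> and evolve
psi0 = np.zeros(2 ** N, dtype=complex)
psi0[0] = 1.0
psi_t = expm(-1j * H * 5.0) @ psi0   # evolve to t = 5 (arbitrary units)
print("survival probability of initial flavour configuration:",
      abs(psi_t[0]) ** 2)
```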

Quantum Machine Learning

Quantum machine learning (QML) algorithms are a promising approach to certain problems in high energy physics phenomenology and theory, such as the extraction of parton distribution functions from data. We plan to study variational quantum circuits and novel model architectures, which could significantly benefit from a representation on quantum hardware. The main motivation behind this planned activity is to identify the potential benefits of QML in the context of high-energy physics in terms of performance, precision, accuracy, and power consumption.

We will target applications on near-term quantum devices as well as advanced procedures for future, full-fledged universal quantum computers. Research activities will include the development of hybrid classical-quantum models based on variational quantum circuit optimization and data re-uploading techniques, as well as the identification of new hardware designs which may increase QML performance. An important goal of this activity is the collaborative development of open-source quantum machine learning libraries designed for high-energy physics applications.
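
As an illustration of one of these ingredients, the sketch below constructs a single-qubit data re-uploading ansatz, in which the input features are encoded repeatedly and interleaved with trainable rotations. It assumes Qiskit and only builds the circuit; in a real study the parameters would be optimised against a loss evaluated on a simulator or on hardware, and the layer count and initial values shown here are arbitrary.

```python
# Hypothetical sketch of a "data re-uploading" variational circuit:
# the same input features are encoded repeatedly, interleaved with
# trainable rotations. Construction only; a real application would
# optimise the parameters against a loss evaluated on a simulator
# or on hardware. All values here are illustrative.
import numpy as np
from qiskit import QuantumCircuit

def reuploading_circuit(x, theta, n_layers=3):
    """Single-qubit data re-uploading ansatz.

    x      -- 1D array of input features (re-encoded in every layer)
    theta  -- trainable parameters, shape (n_layers, 3)
    """
    qc = QuantumCircuit(1)
    for layer in range(n_layers):
        # Encode the data (here: first two features as rotation angles)
        qc.ry(float(x[0]), 0)
        qc.rz(float(x[1]), 0)
        # Trainable rotation for this layer
        qc.u(*theta[layer], 0)
    qc.measure_all()
    return qc

rng = np.random.default_rng(0)
x_sample = rng.uniform(-np.pi, np.pi, size=2)     # toy input features
theta0 = rng.uniform(-np.pi, np.pi, size=(3, 3))  # initial trainable parameters
print(reuploading_circuit(x_sample, theta0))
```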

Objectives


Current and future activities

Ongoing projects

Fundamental work is being done today in the HEP community, and in many other scientific and industrial domains, to design efficient, robust training algorithms for different learning networks.

Initial pilot projects have been set up at CERN in collaboration with other HEP institutes worldwide (as part of the CERN openlab quantum-computing programme in the IT Department) on quantum machine learning (QML). These are developing basic prototypes of quantum versions of several algorithms, which are being evaluated by LHC experiments.

Information about some of the ongoing projects can be found below.

The goal of this project is to develop quantum algorithms to help optimise how data is distributed for storage in the Worldwide LHC Computing Grid (WLCG), which consists of 167 computing centres, spread across 42 countries. Initial work focuses on the specific case of the ALICE experiment. We are trying to determine the optimal storage, movement, and access patterns for the data produced by this experiment in quasi-real-time. This would improve resource allocation and usage, thus leading to increased efficiency in the broader data-handling workflow.
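
One common way to expose such a placement problem to quantum hardware is to encode it as a QUBO (quadratic unconstrained binary optimisation) instance, which a quantum annealer or a QAOA-style algorithm could then sample. The toy sketch below, with invented datasets, sites, and costs, shows the encoding and solves it by brute force purely for illustration; it is not the project's actual model.

```python
# Hypothetical toy encoding of a data-placement problem as a QUBO.
# Binary variable x[d, s] = 1 means "store dataset d at site s".
# The objective penalises (i) transfer cost and (ii) not placing a
# dataset exactly once. Brute force is used here only to show the
# encoding; a quantum annealer or QAOA would target larger instances.
# All costs and sizes are invented for illustration.
import itertools
import numpy as np

n_datasets, n_sites = 3, 2
cost = np.array([[1.0, 2.0],   # transfer cost of dataset d to site s (toy)
                 [2.0, 1.0],
                 [1.5, 1.5]])
penalty = 10.0                 # weight of the "place exactly once" constraint

n_vars = n_datasets * n_sites

def energy(bits):
    x = np.array(bits).reshape(n_datasets, n_sites)
    transfer = np.sum(cost * x)
    placement = penalty * np.sum((x.sum(axis=1) - 1) ** 2)
    return transfer + placement

best = min(itertools.product([0, 1], repeat=n_vars), key=energy)
print("best assignment (dataset x site):")
print(np.array(best).reshape(n_datasets, n_sites), "energy =", energy(best))
```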

The goal of this project is to develop quantum machine-learning algorithms for the analysis of particle collision data from the LHC experiments. The particular example chosen is the identification and classification of supersymmetry signals from the Standard Model background.

The goal of this project is to explore the feasibility of using quantum algorithms to help track the particles produced by collisions in the LHC more efficiently. This is particularly important as the rate of collisions is set to increase dramatically in the coming years.

The collaboration with Cambridge Quantum Computing (CQC) is investigating the advantages and challenges related to the integration of quantum computing into simulation workloads. This work is split into two main areas of R&D: (i) developing quantum generative adversarial networks (GANs) and (ii) testing the performance of quantum random number generators.
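
The random-number strand can be illustrated with a minimal circuit: putting every qubit into an equal superposition and measuring yields unbiased random bitstrings when executed on genuine quantum hardware (a classical simulator would, of course, only return pseudo-random output). The sketch below assumes Qiskit and only constructs the circuit.

```python
# Minimal illustration of the quantum random number generation idea:
# put every qubit into an equal superposition and measure. Executed on
# real quantum hardware, each shot yields an unbiased random bitstring;
# on a classical simulator the output is only pseudo-random.
from qiskit import QuantumCircuit

n_bits = 8
qrng = QuantumCircuit(n_bits)
qrng.h(range(n_bits))   # equal superposition on every qubit
qrng.measure_all()
print(qrng)
```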

This project is investigating the use of quantum support vector machines (QSVMs) for the classification of particle collision events that produce a certain type of decay for the Higgs boson. Specifically, such machines are being used to identify instances where a Higgs boson fluctuates for a very short time into a top quark and a top anti-quark, before decaying into two photons.
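
The core idea behind a QSVM is to replace the classical kernel with the overlap of data-dependent quantum states. Purely to illustrate that structure, the sketch below computes such a fidelity kernel classically for a trivial single-qubit feature map on random toy data and feeds it to a standard support vector machine; the actual studies use far richer feature maps evaluated on quantum simulators or hardware. NumPy and scikit-learn are assumed.

```python
# Hypothetical illustration of the quantum-kernel idea behind a QSVM:
# each event's features are encoded into a quantum state |phi(x)>, and
# the kernel entry is the state overlap |<phi(x)|phi(x')>|^2. Here the
# feature map is a trivial two-feature, single-qubit encoding computed
# classically; the dataset and labels are random and purely illustrative.
import numpy as np
from sklearn.svm import SVC

def feature_state(x):
    """Single-qubit feature map: Ry(x0) then Rz(x1) applied to |0>."""
    c, s = np.cos(x[0] / 2), np.sin(x[0] / 2)
    psi = np.array([c, s], dtype=complex)            # Ry(x0)|0>
    phase = np.exp(np.array([-1j, 1j]) * x[1] / 2)   # Rz(x1)
    return phase * psi

def fidelity_kernel(A, B):
    SA = np.array([feature_state(x) for x in A])
    SB = np.array([feature_state(x) for x in B])
    return np.abs(SA.conj() @ SB.T) ** 2             # |<phi(a)|phi(b)>|^2

rng = np.random.default_rng(1)
X_train = rng.uniform(0, np.pi, size=(40, 2))          # toy "event" features
y_train = (X_train[:, 0] > X_train[:, 1]).astype(int)  # toy labels

clf = SVC(kernel="precomputed")
clf.fit(fidelity_kernel(X_train, X_train), y_train)
print("training accuracy:",
      clf.score(fidelity_kernel(X_train, X_train), y_train))
```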