Technology to build computing devices using quantum-mechanical effects has seen a tremendous acceleration in the past few years. The advantage of quantum computers over classical devices lies in the possibility of using quantum superposition of n qubits to perform an exponentially growing number (2^n) of computations in parallel. This effect makes it possible to reduce the computational complexity of certain classes of problems, such as optimisation, sampling, combinatorial or factorisation problems.
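To illustrate this scaling: the state of n qubits is described by 2^n complex amplitudes, so even a modest register spans an exponentially large space. The short Python sketch below (purely illustrative, not taken from any project described on this page) builds a uniform superposition over all 2^n basis states of n = 4 qubits.

```python
# Illustrative sketch: n qubits are described by 2**n complex amplitudes.
# Applying a Hadamard gate to every qubit of |0...0> produces a uniform
# superposition over all 2**n computational basis states.
import numpy as np

n = 4                                            # number of qubits
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # single-qubit Hadamard gate

U = np.array([[1.0]])
for _ in range(n):
    U = np.kron(U, H)                            # H tensored n times

state = U @ np.eye(2**n)[:, 0]                   # act on the |0...0> basis vector

print(state.size)                                            # 2**n = 16 amplitudes
print(np.allclose(state, np.full(2**n, 1 / np.sqrt(2**n))))  # True: uniform superposition
```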

Over recent years, quantum algorithms characterised by lower computational complexity than their classical counterparts have been discovered. These algorithms require large-scale, fault-tolerant quantum computers. In contrast, today we have access to Noisy Intermediate-Scale Quantum (NISQ) hardware, dominated by short coherence times (noise), a small number of qubits (from a few tens up to a few thousand in the case of quantum annealers) and limited lattice connectivity (in most cases, only nearest-neighbour qubits interact).

These limitations have led to numerous investigations into the design and optimisation of NISQ algorithms that can demonstrate a quantum advantage on current hardware, intense research on noise mitigation and error correction, and hardware-software co-development.


Objectives


Current and future activities

Ongoing projects

Fundamental work is being done today in the HEP community, as well as in many other scientific and industrial domains, to design efficient, robust training algorithms for different types of learning networks.

Initial pilot projects on quantum machine learning (QML) have been set up at CERN in collaboration with other HEP institutes worldwide, as part of the CERN openlab quantum-computing programme in the IT Department. These projects are developing basic prototypes of quantum versions of several algorithms, which are being evaluated by the LHC experiments.

Information about some of the ongoing projects can be found below.

The goal of this project is to develop quantum algorithms to help optimise how data is distributed for storage in the Worldwide LHC Computing Grid (WLCG), which consists of 167 computing centres, spread across 42 countries. Initial work focuses on the specific case of the ALICE experiment. We are trying to determine the optimal storage, movement, and access patterns for the data produced by this experiment in quasi-real-time. This would improve resource allocation and usage, thus leading to increased efficiency in the broader data-handling workflow.
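One common way to hand such resource-allocation problems to a quantum annealer or a variational optimiser is to phrase them as a QUBO (quadratic unconstrained binary optimisation) problem. The sketch below is a deliberately tiny, hypothetical formulation, with invented costs and penalty weights rather than the project's actual model: a binary variable marks whether a dataset is placed at a storage site, and a penalty term enforces that each dataset is placed exactly once.

```python
# Toy QUBO sketch (hypothetical formulation, not the project's actual model):
# x[f, s] = 1 means dataset f is placed at storage site s.
import itertools
import numpy as np

n_files, n_sites = 3, 2
access_cost = np.array([[1.0, 3.0],   # cost of serving dataset f from site s
                        [2.0, 1.0],
                        [3.0, 2.0]])
penalty = 10.0                         # weight of the "place each dataset exactly once" constraint

n_vars = n_files * n_sites
idx = lambda f, s: f * n_sites + s     # flatten (dataset, site) into a single variable index

Q = np.zeros((n_vars, n_vars))
for f in range(n_files):
    # Linear terms: access cost plus the expansion of penalty * (sum_s x[f, s] - 1)**2.
    for s in range(n_sites):
        Q[idx(f, s), idx(f, s)] += access_cost[f, s] - penalty
    # Quadratic terms coupling different sites for the same dataset.
    for s1, s2 in itertools.combinations(range(n_sites), 2):
        Q[idx(f, s1), idx(f, s2)] += 2 * penalty

# Brute-force the tiny instance to find the lowest-energy placement
# (an annealer or a QAOA routine would take Q directly instead).
best = min(itertools.product([0, 1], repeat=n_vars),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print(np.array(best).reshape(n_files, n_sites))   # one 1 per row: each dataset at its cheapest site
```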

The goal of this project is to develop quantum machine-learning algorithms for the analysis of particle collision data from the LHC experiments. The particular example chosen is the identification and classification of supersymmetry signals against the Standard Model background.
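To give a flavour of what such an algorithm can look like, the sketch below implements a minimal variational quantum classifier in plain Python (an assumed, toy example; the circuits, feature encodings and frameworks actually used in the project are not specified here). A single feature is encoded as a qubit rotation, a trainable rotation follows, and the sign of the measured Z expectation value gives the predicted class.

```python
# Minimal variational-classifier sketch (illustrative only): cos(theta + x) is the
# <Z> expectation after encoding feature x and applying a trainable rotation theta.
import numpy as np

def ry(angle):
    """Single-qubit Y-rotation gate."""
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

Z = np.diag([1.0, -1.0])

def predict(x, theta):
    """Encode feature x, apply trainable rotation theta, return <Z>."""
    state = ry(theta) @ ry(x) @ np.array([1.0, 0.0])   # start from |0>
    return state @ Z @ state                           # expectation value in [-1, 1]

# Toy "signal vs background" data: label +1 for small angles, -1 for large ones.
xs = np.array([0.1, 0.3, 2.8, 3.0])
ys = np.array([1, 1, -1, -1])

# Crude training loop: finite-difference gradient descent on a squared loss.
theta = 2.0
loss = lambda t: np.mean((np.array([predict(x, t) for x in xs]) - ys) ** 2)
for _ in range(200):
    grad = (loss(theta + 1e-4) - loss(theta - 1e-4)) / 2e-4
    theta -= 0.1 * grad

print([int(np.sign(predict(x, theta))) for x in xs])   # should match ys: [1, 1, -1, -1]
```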

The goal of this project is to explore the feasibility of using quantum algorithms to help track the particles produced by collisions in the LHC more efficiently. This is particularly important as the rate of collisions is set to increase dramatically in the coming years.

The collaboration with Cambridge Quantum Computing (CQC) is investigating the advantages and challenges related to the integration of quantum computing into simulation workloads. This work is split into two main areas of R&D: (i) developing quantum generative adversarial networks (GANs) and (ii) testing the performance of quantum random number generators.
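The principle behind a quantum random number generator can be illustrated in a few lines (this is a classical simulation of the idea, not the hardware devices being tested in the project): a qubit prepared in an equal superposition yields an unbiased bit each time it is measured.

```python
# Illustrative simulation of the QRNG principle: measuring H|0> = (|0> + |1>)/sqrt(2)
# in the computational basis gives 0 or 1 with equal probability (Born rule).
import numpy as np

rng = np.random.default_rng()

def quantum_random_bit():
    amplitudes = np.array([1.0, 1.0]) / np.sqrt(2)   # equal superposition of |0> and |1>
    probabilities = np.abs(amplitudes) ** 2          # |amplitude|^2 gives the outcome probabilities
    return rng.choice([0, 1], p=probabilities)

bits = [quantum_random_bit() for _ in range(1000)]
print(sum(bits) / len(bits))                         # close to 0.5 for an unbiased source
```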

This project is investigating the use of quantum support vector machines (QSVMs) for the classification of particle collision events that produce a certain type of decay for the Higgs boson. Specifically, such machines are being used to identify instances where a Higgs boson fluctuates for a very short time into a top quark and a top anti-quark, before decaying into two photons.
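The idea behind a QSVM can be sketched compactly: each event's features are mapped to a quantum state, the kernel entry for a pair of events is the squared overlap of their states, and a classical support vector machine is trained on that kernel matrix. The example below uses an invented two-feature encoding and toy data rather than the actual Higgs analysis, with scikit-learn consuming the precomputed kernel.

```python
# Quantum-kernel SVM sketch (hypothetical feature map and data, not the project's
# Higgs analysis): K[i, j] = |<phi(x_i)|phi(x_j)>|^2 for a simple product-state encoding.
import numpy as np
from sklearn.svm import SVC

def feature_state(x):
    """Map a 2-feature event to a 2-qubit product state via Y-rotations of |0>."""
    def ry0(angle):
        return np.array([np.cos(angle / 2), np.sin(angle / 2)])   # RY(angle)|0>
    return np.kron(ry0(x[0]), ry0(x[1]))

def quantum_kernel(X1, X2):
    """Kernel matrix of squared state overlaps between two sets of events."""
    S1 = np.array([feature_state(x) for x in X1])
    S2 = np.array([feature_state(x) for x in X2])
    return np.abs(S1 @ S2.T) ** 2

# Toy "signal" (label 1) and "background" (label 0) events, two features each.
X_train = np.array([[0.2, 0.1], [0.3, 0.2], [2.9, 2.8], [3.0, 2.7]])
y_train = np.array([1, 1, 0, 0])
X_test = np.array([[0.25, 0.15], [2.95, 2.75]])

clf = SVC(kernel="precomputed")
clf.fit(quantum_kernel(X_train, X_train), y_train)
print(clf.predict(quantum_kernel(X_test, X_train)))   # expected: [1 0]
```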