Background and interest: 

Technology to build computing devices using quantum-mechanical effects has seen a tremendous acceleration in the past few years. The advantage of quantum computers over classical devices lies in the possibility of exploiting quantum superposition: the joint state of n qubits is described by 2^n amplitudes, which a quantum algorithm can, in principle, act on simultaneously. This effect makes it possible to reduce the computational complexity of certain classes of problems, such as optimisation, sampling, combinatorial or factorisation problems.
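
As a rough illustration of this scaling (a minimal NumPy sketch, not tied to any particular quantum SDK or to the programme described here), the snippet below builds the uniform superposition obtained by applying a Hadamard gate to every qubit of |0...0> and shows that the number of amplitudes describing the register grows as 2^n with the number of qubits n.

```python
import numpy as np

def uniform_superposition(n_qubits: int) -> np.ndarray:
    """Return the state obtained by applying a Hadamard to every qubit of |0...0>."""
    h = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate
    plus = h @ np.array([1.0, 0.0], dtype=complex)                # single-qubit |+> state
    state = np.array([1.0 + 0j])
    for _ in range(n_qubits):
        state = np.kron(state, plus)                              # tensor product over qubits
    return state

for n in (2, 4, 8, 12):
    psi = uniform_superposition(n)
    # the classical description doubles in size with every added qubit
    print(f"{n:2d} qubits -> {psi.size:5d} amplitudes, each |amp|^2 = {abs(psi[0])**2:.6f}")
```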

Over recent years, quantum algorithms characterised by lower computational complexity than their classical counterparts have been discovered. These algorithms require large-scale, fault-tolerant quantum computers. In contrast, today we only have access to Noisy Intermediate-Scale Quantum (NISQ) hardware, dominated by short coherence times (noise), a small number of qubits (from a few tens up to a few thousand, in the case of quantum annealers) and limited lattice connectivity (in most cases, only nearest-neighbouring qubits interact).
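
To make the connectivity limitation concrete, the toy sketch below (an illustrative assumption of a simple linear chain of qubits, not a model of any specific device) counts the SWAP gates a compiler must insert before a two-qubit gate between non-adjacent qubits; since each SWAP decomposes into three CNOTs, sparse connectivity directly inflates circuit depth and exposure to decoherence.

```python
def swaps_needed(pos_a: int, pos_b: int) -> int:
    """SWAPs required to make two qubits adjacent on a linear (chain) topology."""
    return max(abs(pos_a - pos_b) - 1, 0)

for a, b in [(0, 1), (0, 4), (0, 9)]:
    s = swaps_needed(a, b)
    # each SWAP is typically compiled into three CNOT gates
    print(f"qubits {a},{b}: {s} SWAPs, roughly {3 * s} extra CNOTs before the gate")
```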

These limitations have led to numerous investigations into the design and optimisation of NISQ algorithms that could demonstrate a quantum advantage on current hardware.

[Figure: physical qubit roadmap for quantum computers]

The long-term goals in the field of Quantum Computing are to:

Ongoing projects:

Fundamental work is being done today in the HEP community, as well as in many other scientific research and industrial domains, to design efficient, robust training algorithms for different types of learning networks.

Initial pilot projects on QML have been set up at CERN in collaboration with other HEP institutes worldwide (as part of the CERN openlab quantum-computing programme in the IT Department). These are developing basic prototypes of quantum versions of several algorithms, which are being evaluated by the LHC experiments. The range of projects includes, for example, quantum graph neural networks for track reconstruction (with METU, Caltech and DESY), quantum support vector machines for Higgs boson classification (with the University of Wisconsin - ATLAS experiment and IBM), quantum machine learning for Supersymmetry searches (with the University of Tokyo – IHEP), and quantum generative adversarial networks for radiation transport simulation (with CQC, EPFL and Oviedo). Anomaly-detection algorithms in LHC data analysis promise to enlarge the scope of new-physics searches towards a model-independent strategy. In this context, deep-learning solutions have been tested on a set of public benchmark data sets. These algorithms can potentially be extended to the QML techniques mentioned so far and to quantum-inspired techniques such as Tensor Networks.
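
As an indication of what the quantum support vector machine line of work involves, the following minimal sketch computes a fidelity-based quantum kernel using a toy angle-encoding feature map on random data. The actual pilot studies use dedicated quantum SDKs, hardware backends and real analysis datasets, so this is only a conceptual illustration of the kernel idea, under those assumptions.

```python
import numpy as np

def encode(x: np.ndarray) -> np.ndarray:
    """Map a classical feature vector to a product state via single-qubit RX rotations."""
    state = np.array([1.0 + 0j])
    for angle in x:
        qubit = np.array([np.cos(angle / 2), -1j * np.sin(angle / 2)])  # RX(angle)|0>
        state = np.kron(state, qubit)
    return state

def quantum_kernel(x: np.ndarray, y: np.ndarray) -> float:
    """Fidelity kernel |<phi(x)|phi(y)>|^2, one entry of the SVM kernel matrix."""
    return abs(np.vdot(encode(x), encode(y))) ** 2

rng = np.random.default_rng(0)
data = rng.uniform(0, np.pi, size=(4, 3))          # 4 toy events with 3 features each
gram = np.array([[quantum_kernel(a, b) for b in data] for a in data])
print(np.round(gram, 3))                           # kernel matrix fed to a classical SVM
```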

Although universal quantum computers are not general-purpose devices in the classical sense, they would significantly accelerate certain computations as part of large-scale heterogeneous scientific computing infrastructures (as GPUs and FPGAs do today), but to a much larger extent. However, much more work is necessary: not only to obtain reliable quantum hardware, but also to understand how classical algorithms can be efficiently adapted to run on quantum computers and to develop the required intermediate software stack of programming languages, compilers, optimisers and many other software engineering tools.

The work necessary to advance the state of the art in quantum algorithms and software does not require a full commitment to physical quantum devices. Efficient quantum-computing simulation programs running on classical hardware of sufficient capability are today an excellent way to build skills, design realistic algorithm prototypes and assess trends. They are essential for understanding hardware behaviour, gate-operation fidelity, correctness and scaling performance, and robustness against errors, and for designing error-correction strategies on NISQ devices.
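
The core of such a simulation program is conceptually simple, even if making it fast, scalable and noise-aware is not. The sketch below (a NumPy-only illustration of the kind of kernel that production simulators optimise and distribute, not any specific simulator's API) applies single-qubit gates to a dense statevector by treating each qubit as one tensor axis.

```python
import numpy as np

def apply_gate(state: np.ndarray, gate: np.ndarray, target: int, n_qubits: int) -> np.ndarray:
    """Apply a 2x2 gate to one qubit of a dense 2**n statevector."""
    psi = state.reshape([2] * n_qubits)                  # one tensor axis per qubit
    psi = np.tensordot(gate, psi, axes=([1], [target]))  # contract gate over the target axis
    psi = np.moveaxis(psi, 0, target)                    # restore the qubit ordering
    return psi.reshape(-1)

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                           # start in |000>
h = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
for q in range(n):
    state = apply_gate(state, h, q, n)
print(np.round(np.abs(state) ** 2, 3))                   # uniform distribution over 8 outcomes
```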

An effort within the HEP community to build a reliable quantum-computing simulation infrastructure, capitalising on the community's unique existing expertise in distributed computing, would allow us to assess the potential of quantum computers in HEP research, accelerating innovation and enabling the development of expertise for future applications.

Traditional high-throughput computing systems, representing the bulk of HEP computing, are not well suited for large-scale coupled computations. Purpose-designed quantum simulators and HPC systems with adequate memory and network fabric are needed to perform medium- to large-scale quantum simulations. CERN can rely on its developing relationship with HPC providers and its existing expertise in data management and large-scale distributed computing to support quantum simulation development activities.
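
A back-of-the-envelope estimate (assuming a dense statevector representation with double-precision complex amplitudes) shows why ordinary high-throughput nodes do not suffice: memory doubles with every added qubit, so beyond roughly 40 qubits the state must be partitioned across many HPC nodes connected by a fast network fabric.

```python
def statevector_bytes(n_qubits: int) -> int:
    """Memory for a dense statevector: 2**n amplitudes of 16 bytes (complex128) each."""
    return (2 ** n_qubits) * 16

for n in (30, 35, 40, 45):
    print(f"{n} qubits: {statevector_bytes(n) / 2**30:10,.0f} GiB")
```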

Conceptual similarities between quantum computers and HEP experiments can also be exploited. In both cases, quantum objects are studied and monitored using single-particle sensors connected to a data acquisition system based on custom electronics (FPGAs and ASICs). Similar infrastructures are today being evaluated for assembling large-scale QC emulators with improved computing efficiency. In recent years, the development of High-Level Synthesis (HLS) software (for example, within the HLS4ML project and at the CMS experiment) has introduced the possibility of automating FPGA programming and ASIC design. This suggests the possibility of creating a software library that would allow the emulation of generic quantum circuits on FPGAs through HLS and, when paired with a hardware-oriented effort on QC development, of developing QC control algorithms in a way similar to what is done at the LHC with hardware triggers.
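
Purely as an illustration of the arithmetic such an emulator would carry out (a real flow would express this in HLS C++ and synthesize it to FPGA logic; the Python below only mirrors the structure, and restricts itself to real-valued gates for brevity), each gate application becomes a small set of fixed-point multiply-accumulate operations over pairs of amplitudes, which is exactly the kind of loop that HLS tools map onto hardware.

```python
FRAC_BITS = 14                                        # fixed-point fractional bits (assumed)

def to_fixed(x: float) -> int:
    return int(round(x * (1 << FRAC_BITS)))

def fixed_mul(a: int, b: int) -> int:
    return (a * b) >> FRAC_BITS

def apply_gate_fixed(amp0: int, amp1: int, gate) -> tuple:
    """Apply a real-valued 2x2 gate to a pair of fixed-point amplitudes."""
    new0 = fixed_mul(gate[0][0], amp0) + fixed_mul(gate[0][1], amp1)
    new1 = fixed_mul(gate[1][0], amp0) + fixed_mul(gate[1][1], amp1)
    return new0, new1

inv_sqrt2 = to_fixed(2 ** -0.5)
hadamard = [[inv_sqrt2, inv_sqrt2], [inv_sqrt2, -inv_sqrt2]]
a0, a1 = apply_gate_fixed(to_fixed(1.0), to_fixed(0.0), hadamard)
print(a0 / (1 << FRAC_BITS), a1 / (1 << FRAC_BITS))   # both approximately 0.7071
```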

In addition to specialised qubit technologies and software and algorithmic expertise, quantum computers also require sophisticated engineering expertise in cryogenic and control systems. Such expertise is part of the core skills required, for example, by CERN's accelerator complex. R&D in these areas would therefore also represent a valuable investment, to which CERN's unique skills base could contribute and through which it would advance.