Matthijs Rijlaarsdam
CEO
August 12, 2024
15 minutes to read

Accelerating Quantum Computer Developments

Back from the shelf:
This paper was originally published on Springeropen.com on 8 July 2021, and licensed under a Creative Commons Attribution 4.0 International License. No changes were made for re-publishing purposes.

Citation:

Alberts, G.J.N., Rol, M.A., Last, T. et al. Accelerating quantum computer developments. EPJ Quantum Technol. 8, 18 (2021). https://doi.org/10.1140/epjqt/s40507-021-00107-w

Contributors to the original paper:

Garrelt J.N. Alberts: Main author of Sect. 1, Sect. 3 and Sect. 4.
Adriaan M. Rol: Main author of Sect. 2.1.
Thorsten Last: Main author of Sect. 2.2.
Matthijs S. C. Rijlaarsdam, Amber E. Van Hauwermeiren, NB and BB have reviewed the paper.

All authors read and approved the final manuscript.

1. Introduction

With the recent breakthrough of quantum supremacy [1, 2], which is the result of steady improvements in performance [3, 4, 5] and a first step towards demonstrating the potential use of quantum computers [6], the quantum computer as a commercial product seems likely to become a reality in the coming years. This article discusses a number of aspects related to this development.

The outline of this paper is depicted in Fig. 1, which gives an exemplary overview of what is required to efficiently build a useful and commercially viable quantum computing system. In Sect. 2, we advocate starting product development activities, following a systems engineering approach and deriving the requirements of the quantum computer as a line of products that is of commercial interest. In Sect. 3, we put forward a roadmap of quantum computer products which we foresee to be essential for the upcoming decade, while Sect. 4 introduces ImpaQT, a project executed by supply chain partners working together, to highlight how one can get started with the implementation of this product roadmap. Throughout this article, and as already shown in Fig. 1, we use the superconducting Transmon-based full-stack platform as a typical example to illustrate aspects of commercial product development and a way of working. We expect that these insights are, to a large extent, transferable to other quantum computing platforms as well.

2. Quantum Computer Product Development

Given the recent breakthroughs in quantum technology development in R&D labs all over the world, the perspective of high-tech companies has changed from research and technology development to product development. These product development activities have started alongside the existing research activities. While research focuses purely on developing the technology needed for building quantum computers, product development focuses on performance and functionality. When developing a product, the performance and functionality required by users determine the design decisions that have to be made to build the product. Obviously, the price that users are willing to pay for the product has an influence as well. Considering the quantum computer as a product requires standardisation of interfaces and integration of all its building blocks (as outlined in Fig. 1), as well as integration of the quantum computer itself in a broader ICT system architecture. A quantum computer consists of a stack of components that have to work together harmoniously in order to exploit quantum-mechanical phenomena such as superposition and entanglement. These quantum effects are fragile and hard to control. It is a complex engineering challenge to realise the desired performance and functionality. Due to this complexity, and in order to develop this product in an efficient way, it is recommended to follow a systems engineering approach. The first step within this approach is to determine the product requirements in terms of the performance and functionality of the quantum computer. The next step is to derive the specifications of the product that are needed to meet these requirements.

2.1 Product Requirements and Specifications

From a commercial perspective, a quantum computer must either outperform existing computers or be significantly cheaper. Because classical computers have advanced over several decades, quantum computers will not outperform them on problems that can already be calculated efficiently on a classical computer, or will do so only at a higher cost. Therefore, we can conclude that a quantum computer needs to be tailored to commercially interesting problems that are intractable on classical computers. The value of quantum computers is not in solving the same problems faster; it is in solving certain computational problems, such as prime factorisation [7], substantially faster than classically possible – to the point that it enables the solution of problems that were previously unsolvable due to the impractical amount of computational resources required.

Requirement 1 - Provide solutions for commercially interesting problems

Since the 1980s [8], research groups of mathematicians and information scientists have been investigating which problems can be solved more efficiently or more accurately by a quantum computer than by classical high-performance computing. At this point in time, only a limited set of problems can be addressed by quantum algorithms.

The requirement that the quantum computer needs to solve commercially interesting problems leads to performance specifications. Until recently [9, 10, 11], companies put a lot of emphasis on the number of qubits [12, 13, 14, 15] when publicly communicating the performance of their quantum computers. If qubits were approximately error free, this would be a sensible simplification. However, while classical bits can be approximated as ideal, this is an oversimplification for qubits. When considering physical error rates on the order of ε < 10⁻³, one requires 10³–10⁴ physical qubits per logical qubit to achieve a (close to ideal) logical error rate of ε_L ∼ 10⁻¹⁵ [16, 17]. Current estimates suggest that one would require around 20 million physical qubits to factor a number that is too large to tackle using classical algorithms [18]. A 20-million-qubit quantum processor is, at this point in time, inconceivably large.
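To make that overhead concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes the commonly quoted surface-code scaling p_L ≈ 0.1·(p/p_th)^((d+1)/2) with a threshold p_th ≈ 1% and roughly 2d² physical qubits per logical qubit; these constants are illustrative assumptions on our part, not figures taken from the references.

```python
def surface_code_overhead(p_phys=1e-3, p_target=1e-15, p_threshold=1e-2):
    """Rough surface-code footprint for a single logical qubit.

    Assumes the commonly quoted scaling p_L ~ 0.1 * (p_phys / p_threshold)**((d + 1) / 2)
    and ~2 * d**2 physical qubits per logical qubit (data plus ancilla qubits).
    """
    d = 3
    while 0.1 * (p_phys / p_threshold) ** ((d + 1) / 2) > p_target:
        d += 2                      # surface-code distances are odd
    return d, 2 * d ** 2

distance, physical_per_logical = surface_code_overhead()
print(f"code distance ~{distance}, ~{physical_per_logical} physical qubits per logical qubit")
# -> on the order of 10^3 physical qubits per logical qubit, consistent with
#    the 10^3-10^4 range quoted above.
```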

The coming decade will see noisy pre-error-corrected devices, known as Noisy Intermediate-Scale Quantum (NISQ) [19] technology, being used to perform useful computations while developments on technology and the value chain continue to push for universal error-corrected machines. It is expected that NISQ computers can help us solve new types of problems efficiently, but will not be useful for all types of problems. Similar to how access to larger (classical) computing power enabled the current explosion of applications of artificial intelligence [20], finding where quantum computers will provide value in practice will require the availability of larger quantum computing power. In other words: the availability of quantum computers will not only enable the implementation of already envisioned practical applications; more importantly, it will allow for the discovery of novel ones.

With the current quantity, quality and control of qubits, it is already possible to build a quantum computer that outperforms any classical computer for a specific type of problem, for a specific case, as shown by the Google team [1]. To take full advantage of the capabilities of NISQ computers and work around their limitations, researchers need access to real NISQ computers to test and develop their applications. At the moment, the success of any quantum algorithm depends heavily on the interaction of that algorithm with the specific quantum processor that is used. Quantum processors are in no way standardised yet, and therefore each processor has its specific pros and cons in relation to the algorithm that needs to be employed. Therefore, the interplay between quantum algorithms and quantum processors needs to be optimised to have any chance of industry-relevant quantum advantage with NISQ devices. This leads to the following requirement on a quantum computer for the research and development community:

Requirement 2 - Enable the development and execution of NISQ applications

To specify the power of a quantum computer needed to develop and execute NISQ applications, one has to take into account not only the number of qubits n, but also the number of operations that can be performed, typically expressed using the circuit depth d. A metric that combines these two is the quantum volume [21, 22, 23], V_Q = 2^n_eff, where n_eff = argmax_m min(m, d(m)) = log₂(V_Q), and the circuit width m = n in the case of all-to-all connectivity. This definition loosely coincides with the complexity of classically simulating model circuits and has the appealing property that V_Q doubles for every effective qubit added. In this definition, the circuit depth d corresponds to the number of circuit layers that can be executed before (on average) a single error occurs. A circuit layer corresponds to a combination of arbitrary two-qubit operations between disjoint pairs of qubits. It is possible to estimate d as d ≈ 1/ε_1step = 1/(n·ε_eff), where the effective error rate ε_eff is the average error rate per two-qubit operation. In addition to effects like crosstalk, limitations in connectivity, parallelism and gate set introduce an overhead in the physical implementation of a circuit layer, so that in general ε_eff ≥ ε, where ε is the average error rate of individual physical operations.
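As a toy illustration of this simplified definition (not the full heavy-output benchmark protocol used in practice), the short Python sketch below evaluates log₂(V_Q) for a few device sizes at a fixed effective error rate; all numbers are illustrative.

```python
def log2_quantum_volume(n_qubits, eps_eff):
    """log2(V_Q) = argmax_m min(m, d(m)) with d(m) ~ 1/(m * eps_eff),
    following the simplified definition above (all-to-all connectivity)."""
    return max(range(1, n_qubits + 1),
               key=lambda m: min(m, 1.0 / (m * eps_eff)))

for n in (10, 31, 53, 1000):
    print(f"n = {n:4d} qubits  ->  log2(V_Q) ~ {log2_quantum_volume(n, 1e-3)}")
# At eps_eff = 1e-3 the result saturates near sqrt(1/eps_eff) ~ 32: adding
# qubits beyond that width no longer increases V_Q at this error rate.
```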

Although quantum volume has found widespread adoption [24, 25, 26], no single performance metric can capture the complexity required to describe the potential utility of a quantum computer. Taken at face value, quantum volume seems to imply that, at current error rates of ε ∼ 10⁻³, there is no power in having a device larger than n ∼ 31 qubits. As this number of qubits can efficiently be simulated using classical hardware, this would be a major setback for NISQ computing. However, this seems to contradict the results of Arute et al. [1], in which an n = 53 qubit device with ε ∼ 10⁻³ was used to perform a computation that could not be simulated in a reasonable amount of time.

Understanding the characteristics and limitations of different performance metrics can give insight into the kinds of applications suited to NISQ computers. This in turn affects the functional requirements of the different subsystems. To define V_Q, two important simplifications have been made to arrive at a single number. The quantum volume was designed as a binary metric: can a device run an algorithm? For many algorithms, a single error indicates failure; however, for other applications, such as sampling from a distribution (as is done in [1]) or estimating an eigenvalue [29], one can tolerate a limited number of errors simply by averaging. From this follows a functionality requirement for NISQ applications: NISQ applications will have to be able to tolerate a limited number of errors, either because of the nature of the application or by using error mitigation techniques. Another limitation of the quantum volume metric is that it quantifies the ability to run circuits of equal width and depth. However, the computational power of short-depth circuits is not yet fully understood, and it can be argued that even short-depth circuits lie beyond the reach of classical computing [22, 30, 31]. As such, it is likely that potential NISQ applications will be short-depth to limit the number of errors that accumulate.

Although there are quite a few candidates for NISQ algorithms satisfying these constraints, many in the spirit of Feynman's original idea [8] of using quantum systems to simulate quantum systems, there is no known useful application for which a NISQ algorithm is guaranteed to significantly outperform the classical alternative. At current error rates for cQED systems of ε ∼ 10⁻³, a 1000-qubit (1kQb) system is right at the point where one can still execute a single circuit layer without a single error occurring. If one considers using a fraction of the qubits as ancillas for error mitigation and uses an algorithm that is somewhat robust to the remaining errors, the kQb processor is the largest-scale NISQ device that is of interest for running algorithms at current error rates, an estimate consistent with IBM's roadmap [24].
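To see where this working point comes from, apply the layer-depth estimate from above: d ≈ 1/(n·ε_eff) = 1/(1000 × 10⁻³) = 1, i.e. at ε ∼ 10⁻³ a 1000-qubit device can, on average, execute about one full circuit layer before the first error occurs.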

Having set a target for the size of the system (1kQb) and the performance (ε < 10⁻³), one has to design and fabricate devices capable of reaching this target. Current qubit coherence of ∼50 μs should be sufficient to reach ε < 10⁻³ for every operation. It will be a challenge in itself to scale up the design to ∼1kQb while maintaining these levels of performance. A quantum computer is therefore not only needed to develop NISQ algorithms; it is also required to develop quantum processors. For this purpose a quantum computer will be used as a test and development platform for this key component of the quantum computing stack. The resulting functionality requirement is therefore:

Requirement 3 - Enable the development of quantum processors

Although many cQED device designs [1, 32, 33, 34] are to some extent copy/paste-able, that does not mean they are scalable in practice. There will be fundamental physics problems to overcome, some expected [35, 36] and others unknown. Initial experiments [6, 37] indicate that transmon qubits have limited crosstalk, motivating a simplistic model in which the device yield, defined as the probability that all qubits work, is simply the product of the individual qubit yields. Here, we define an individual qubit to be working if the control lines are working, the coherence is larger than a specified target (e.g. 50 μs) and the relevant parameters (charging and Josephson energies, coupling to the coupling bus/tuneable couplers, readout resonator parameters etc.) are within a specified tolerance. Even when taking into account recent innovations that improve parameter targeting, such as laser annealing [38, 39], the odds of producing a working kQb device become increasingly small with every qubit added to the device, even at an exceptional yield of 99% per qubit (0.99^1000 ≈ 4 × 10⁻⁵).

To tackle this problem, one needs to either become robust against missing qubits at the algorithm level, which falls in the domain of Requirement 1, or find a way to increase device yield for a given qubit yield. A promising concept is to link together smaller devices within the same cryogenic environment. Although the odds of producing a single monolithic kQb are vanishingly small, one can increase the probability by combining multiple smaller patches, which have a reasonable yield, and replacing only the patches that do not work. Existing flip-chip architectures [40], in which the readout resonators, Purcell filters and coupling buses are on a different chip from the qubits, can be seen as a prototype of this technique, as they effectively link together different devices. It is only a small step to use the coupling plane to connect qubits on adjacent chips [41]. Note that what is envisioned here is subtly different from the chip-to-chip entanglement discussed in [42], which would be more powerful. This proposal does not require long-distance information transfer (i.e., quantum information transfer between different dilution refrigerators), as it only attempts to create modularity.

Based on the above-mentioned R&D strategies to create better quantum devices, the specifications of the quantum computer as a test and development platform can be derived. Related to the question of yield is the question of size: does a kQb processor fit in a fridge? Although transmon qubits (∼(400 μm)²) are often seen as large in comparison to e.g. semiconductor spin qubits or dopant-based qubits, processor sizes are not limited by the qubit size, but rather by the size of the I/O [43]. The footprint of a single via is currently 1 mm², and a single transmon (including tuneable couplers) requires on average 4.2 control lines, putting the total footprint at ∼5 mm² per qubit. Assuming that this footprint can be translated into a square with a side of 2.5 mm, a kQb processor would measure roughly 8 × 8 cm, or about 64 cm². As such, 1kQb would fit on a 100 mm wafer (with a surface area of 78 cm²).

Although this back-of-the-envelope calculation indicates that a kQb processor would be about the size of a single 100 mm wafer, it also highlights the importance of the interconnect size. Whereas it is possible to reduce the on-chip footprint to about 1 mm² for each interconnect, regular SMP connectors have a diameter of 4 mm, resulting in a footprint of about 0.65 cm² per connector. At about 4.2 lines per qubit, this would mean that a kQb processor would require a solid block of SMP connectors roughly 50 × 50 cm in size. As cable dimensions are typically significantly smaller than the connector sizes, a natural solution is to include the cabling in the sample mount. In this way, the signal integrity can be preserved while the fan-out can be taken care of elsewhere.
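A quick sanity check of these numbers, as a small Python sketch using only the figures quoted above; this is back-of-the-envelope arithmetic, not a layout study.

```python
import math

# I/O footprint of a 1000-qubit (1kQb) processor, using only the figures
# quoted in the text above; purely illustrative arithmetic.
n_qubits = 1000
lines_per_qubit = 4.2

chip_area_cm2 = n_qubits * 0.25 ** 2                       # 2.5 mm x 2.5 mm per qubit
wafer_area_cm2 = math.pi * 5.0 ** 2                        # 100 mm wafer, radius 5 cm
connector_area_cm2 = n_qubits * lines_per_qubit * 0.65     # 0.65 cm^2 per SMP connector

print(f"chip area       ~{chip_area_cm2:.0f} cm^2 (100 mm wafer: ~{wafer_area_cm2:.0f} cm^2)")
print(f"connector block ~{connector_area_cm2:.0f} cm^2 (~{connector_area_cm2 ** 0.5:.0f} cm on a side)")
# -> ~63 cm^2 of chip fits on a ~79 cm^2 wafer, while ~2700 cm^2 of connectors
#    (roughly half a metre on a side) clearly does not.
```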

Not only the footprint of the lines is relevant, but also their heat load. The heat load consists of two contributions: a passive contribution coming from the fact that there is a conducting line connecting the sample to room temperature, and an active contribution consisting of power dissipated in the line. Attenuating the power of signals intended for the qubit is required to manage the noise temperature of the signals. For a system of up to ∼100 qubits, the heat load can be managed by using standard cable technologies and attenuators [44]. To reduce the active contribution, one can consider using directional couplers that transmit only part of the signal while sending the return signal to a higher-temperature stage where more cooling power is available. The passive contribution to the heat load can be reduced by using specialised cable technologies. A promising approach is to use microwave striplines etched on a flexible substrate to produce cables with lower thermal conductivity and a smaller form factor [45]. Because of the reduced form factor, these cabling technologies are a natural candidate for integration into the sample mount mentioned in the preceding paragraph.

At this point, it is unclear if better interconnects and cabling technologies will be sufficient to realise a kQb device. There are several techniques that can be used to reduce the number of lines by a constant factor. The concept of dedicated drive lines per qubit can be dropped in favour of a frequency multiplexing scheme in which several qubits (∼5) operated at different frequencies share a drive line. These changes, however, do not address how the number of lines scales (linearly) but only change the pre-factor. At some point one has to consider Rent's rule [46]. To change the scaling of control, one has to find multiplexing schemes for all types of control (microwave, flux, measurement) similar to the VSM scheme [32, 47] for microwave pulses. The constraints imposed by such a scheme will have significant consequences for how it can execute algorithms and furthermore require exquisite control over device fabrication. Therefore, it is not expected that such a scheme will be viable in the near future.
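As a small illustration of that scaling argument, the toy Python sketch below compares total line counts with and without a ×5 drive-line multiplexing factor; the split of the 4.2 lines per qubit into one drive line plus 3.2 other lines is our own illustrative assumption.

```python
def control_lines(n_qubits, drive_mux=1, drive_per_qubit=1.0, other_per_qubit=3.2):
    """Total room-temperature lines: (possibly multiplexed) drive lines plus the rest."""
    drive = n_qubits * drive_per_qubit / drive_mux   # ~5 qubits can share one drive line
    other = n_qubits * other_per_qubit               # flux, readout, ... (assumed split)
    return drive + other

for n in (100, 1000, 10000):
    print(f"{n:6d} qubits: {control_lines(n):8.0f} lines dedicated, "
          f"{control_lines(n, drive_mux=5):8.0f} lines with x5 drive multiplexing")
# Both columns grow linearly with the number of qubits; multiplexing only changes the slope.
```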

Until now, we have glossed over an important aspect of the fabrication problem: coherence. Qubit performance is inherently limited by coherence, and it will be a large challenge in itself to better understand what limits coherence and to reliably fabricate high-coherence devices. Achieving high coherence will be especially challenging because significant changes to the design are required, such as the integration of 3D interconnects, tuneable couplers and connections between different sub-patches. All of these changes have the potential to impact coherence.

The last key functionality requirement for the R&D community relates to the ability to maximise the performance of the quantum device. Due to variations in the fabrication process, all qubits need to be individually characterised and calibrated before the system can be operated as a quantum computer. This task is challenging because system parameters can fluctuate over time, depend on each other and suffer from crosstalk. To address this challenge, novel approaches to calibration [37, 48, 49, 50, 51] are required as well as specialised characterisation protocols [52, 53, 54] and hybrid control models that support both the pulse- and gate-level abstractions.

Requirement 4 - Tune up the performance of quantum devices

To achieve a high yield and coherence, one needs to understand how changes in design and fabrication affect the system. An engineering cycle which can accelerate the development of high-performance quantum devices is depicted in Fig. 2. By connecting automated characterisation to a database infrastructure, it is possible for the R&D community to close the loop between design, fabrication, and characterisation.

Image source: [55]
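Very schematically, the closed loop of Fig. 2 can be pictured as in the Python sketch below; all class, function and field names are hypothetical and only illustrate the idea that automated characterisation results are written to a shared database from which the next design iteration is chosen.

```python
from dataclasses import dataclass

@dataclass
class DeviceReport:
    design_id: str
    t1_us: float            # measured energy relaxation time
    two_qubit_error: float  # measured average two-qubit gate error

def characterise(design_id: str) -> DeviceReport:
    """Placeholder for an automated tune-up and characterisation run."""
    return DeviceReport(design_id, t1_us=50.0, two_qubit_error=1e-3)

def engineering_cycle(designs, database):
    """One pass of the design -> fabricate -> characterise loop of Fig. 2."""
    for design_id in designs:                  # each design is fabricated and measured
        database.append(characterise(design_id))
    # the next design iteration is informed by the accumulated data
    return max(database, key=lambda r: r.t1_us).design_id

database: list = []
best = engineering_cycle(["rev_A", "rev_B"], database)
print("best-performing design so far:", best)
```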

The analysis of the product requirements of a quantum computer, as outlined above, emphasises the need to start product development activities as soon as possible in order to provide the quantum community with the tools they need to accelerate their R&D activities, ultimately resulting in a commercially viable quantum computer.

2.2 Supply Chain Management

Considering the development of a quantum computer as product development requires a mature supply chain that can provide high-quality components and ensure security of supply. Supply chain management is key for building a product [56]. The currently emerging supply chain offers enabling technologies and supporting component solutions with sufficiently high product maturity, ready for scaling far beyond quantum supremacy-level systems. At the same time, some innovation bottlenecks, such as the manufacturing of high-quality quantum devices and overall system integration, still need to be tackled.

State-of-the-art small-scale quantum devices are still being developed overwhelmingly in more-or-less academic environments and shared facilities, with few exceptions. As of now, it is unlikely that large chip manufacturers will establish quantum device development lines at scale any time soon. This situation can in part be attributed to the successful insertion of extreme ultraviolet lithography into high-volume CMOS manufacturing: Moore's law for these players is considered to be alive and well for the decade to come [57]. A proposed solution to this quantum chip development gap, especially in the near term, is the implementation of novel technology pilot lines through public-private partnership incentives facilitated by public research and technology organisations (RTOs) and national labs. Those pilot lines should be used to investigate the appropriate approach to quantum device manufacturing, by working out in detail the differences and commonalities with respect to standard CMOS process and technology development, in a process that is focused on short development cycle times rather than high volume. These efforts should then be supplemented with the necessary public-private partnerships for strategic developments to protect the IP.

The integration of all components and subsystems into full-stack systems is considered another bottleneck which needs to be addressed. Not only system complexity and interface definition are a challenge here, but also the considerable price tag. Small-scale demonstrators deployed in the field for education and training purposes already require entry-level startup costs in excess of a few million Euros. It should come as no surprise then that there are currently only very limited full-stack system integration activities to be found in the commercial sector.

Opening up these bottlenecks requires considerable financial strength. Against the backdrop of ensuring future technological sovereignty, a few commercial players in the US and China have been able to allocate the required resources in such a way that they are currently leading the developments. After decades of considerable federal commitment, US tech giants were among the first to adopt this emerging technology, regardless of its uncertain near-term commercial impact. A future quantum computing technology would offer them a natural extension to their current business portfolio. Likewise, China's nationally funded and highly coordinated programs in this field are starting to bear fruit [1, 2, 58]. These global examples are utilising a monolithic approach to the integration of their systems, with key components being developed in-house. While this approach gives full control over the quality and availability of the system and its key components, it limits in turn the ability to pivot to alternative technologies and requires considerable resources which can only be afforded by large organisations (public or private) or extremely well-funded start-ups.

The financial entry barrier to the monolithic approach, in combination with the current lack of a clear business case and the technical complexity of the future quantum computing system, makes this a challenging field, even more so if one needs to remain competitive in performance and timescale with international developments. Therefore, instead of approaching the task of trying to bridge the quantum advantage gap alone, in a monolithic manner, part of the answer could be to spread the challenge over more shoulders. Independent players could ensure focus on individual strengths and mitigate risks. For this approach to work, players in this field need to be open to collaborate and co-develop. Examples of such partnership approaches are manifold in high-tech environments, such as airplane and car manufacturing or the semiconductor industry [59, 60, 61]. With an additional long-standing public commitment, such alliances can accelerate innovation, increase technology readiness, strengthen the value chain and foster standardisation and a wider adoption of this technology. For instance, a more professional approach to quantum device manufacturing could be facilitated by RTOs. These organisations are well-equipped to coordinate an alliance for developing pilot lines where small and medium-sized enterprises (SMEs), universities and larger industrial companies can benchmark designs and develop new architectures. Complementarily, open consortia and public-private partnership incentives could be formed across national borders, instead of pursuing technology development within large publicly traded corporations or monolithic start-ups. This co-development approach requires sufficient alternative suppliers to ensure the quality and availability of key components. The required amount of resources is similar to that of the monolithic approach, but distributed among more players in the value chain.

The analyses discussed in Sect. 2.1 and Sect. 2.2 show that the quantum computer is expected to be able to solve relevant problems within the next decade, probably sooner for specific problems and specific needs of R&D labs. Furthermore, in recent years a supply chain has emerged that will be able to provide key quantum computer components in a reliable and cost-effective manner.

3. Product Roadmap

Based on product Requirements 1, 2, 3, and 4 and the derived product specifications, it is possible to outline a product roadmap. A product roadmap describes how a product is likely to evolve in time based on the expected development of the underlying technologies, as well as customer needs. It is expected that technology will improve over time, although it will be hard to predict when each milestone will be reached. Quantum technology development is still in its embryonic phase and sudden step-changes in the technology are likely to occur, which makes predictions hard. However, the use of the products based on quantum technology is better understood. It is expected that a fully functional quantum computer will be used by the High Performance Computing (HPC) market to do complex calculations and data management. Before that market can be serviced, partly functional quantum computers will already be of interest to players in the R&D market, who need such a product to speed up their quantum technology developments. Combining the expected technology developments and the expected use of the technology leads to a product roadmap. We consider this roadmap largely generic and independent of the underlying quantum technology, although at times we might refer to specifics of a quantum device-based system for clarification:

3.1 Quantum Computer Demonstration Platform

The first archetypal system on the product roadmap is the quantum computer demonstration platform. Such a demonstration platform is already proven technology for some of the currently available qubit technologies, even to the level of cloud accessibility [3, 62]. These platforms are used for education, training and the testing of algorithms and error models. A slightly more advanced version of this platform consists of a well-defined quantum computer stack that can measure and control a simple quantum device. This is the minimal system that has the key quantum computing properties: controlling superposition and entanglement. The interfaces and functionalities of the components of this system should be clearly defined. If that is the case, upgrading it will be of primary interest to quantum computer component suppliers. The suppliers can use the quantum computer demonstration platform to validate the performance of the component they are offering to the market and confirm that it works well in concert with other components. The quantum computer demonstration platform can also ensure that a supplier's component is not limiting the performance of the system with respect to controlling the quantum-mechanical properties. The platform will evolve from a test and validation platform to a development platform for key components of a quantum computer, as outlined in the following subsections.

3.2 Quantum Device Development Platform

A key component of a quantum computer is the quantum device. As outlined in Requirement 3, the development of a quantum device is challenging, and the R&D labs of device manufacturers need a suitable development platform to improve the performance of a quantum device in an efficient way. In order to meet Requirement 3, one needs to close the quantum device engineering cycle (Fig. 2). To realise this, the device development platform is optimised for Requirement 4. The ultimate version of the quantum device development platform would also be able to function as a benchmarking and certification product that can compare the performance of quantum devices provided by different suppliers in an objective manner.

3.3 Quantum Algorithm Development Platform

The current state-of-the-art quantum technology is not fully ready yet for the next product in the roadmap: the quantum algorithm development platform. Current algorithm development platforms based on classical computing technology outperform quantum computing-based algorithm development platforms, although the break-even point seems to be close. This transition from using classical to quantum computers as quantum algorithm development platforms will probably take several years. Currently, classical computers can still simulate more error-free qubits than state-of-the-art quantum computers can provide, a crucial parameter for efficient quantum algorithm development. However, this parameter is less important for the development of NISQ algorithms. Although quantum technology does not yet seem ready for a quantum computer to be a fully functional algorithm development platform, most of the commercial activities in the quantum community focus on developing this product or developing derived products and services. The users of this product will be the software developers of ICT companies that are looking for better platforms to develop and test their software on.

3.4 Quantum Computer

It will likely take at least a decade before quantum technology has matured enough to give sufficient control of quantum-mechanical properties to make the quantum computer suitable for the HPC market. At that point in time, the real development of the quantum computer as a product will start, and the quantum computer will fulfil the promise of changing the world in a similar fashion to the classical computer before it. The quantum computer will be used by a wide variety of end-users to optimise their own products and services. The preceding quantum computer products (the quantum computer demonstration platform, the quantum device development platform and the quantum algorithm development platform) will have paved the way for a successful insertion of the quantum computer into the HPC market. A supply chain will have formed that ensures security of supply and quality of key components, an ICT workforce will be in place that is acquainted with the quantum computing paradigm, and commercial use cases will be available which prove the added value of the quantum computer.

4. Realising ImpaQT: Building a Quantum Computer Together

One of the key propositions put forward in Sect. 2.2 was to tackle the question of accelerating quantum computing R&D efforts in a collaborative way – by involving independent commercial partners which can leverage their individual strengths, thereby mitigating the risks involved. Following this logic, a four-month-long project called ImpaQT was initiated by companies of the Dutch quantum ecosystem. These companies form a local supply chain for the following off-the-shelf key components: (i) algorithms to solve a specific problem that is likely to be solved efficiently on a quantum computer, (ii) software to characterise, calibrate and run algorithms on the quantum device, (iii) electronics to enable closed-loop control of the quantum device, (iv) cabling and filtering that is scalable to control a multi-qubit quantum device, and (v) multi-qubit Transmon-based quantum processors.

In the following subsections we sketch the goal, approach, implementation, and successful completion of this project, giving support to the above proposition. An in-depth discussion of this project, its technical details and its accomplishments will be published in a separate white paper. Together with TNO, the Dutch RTO, acting as facilitator of this project, the companies designed and built from scratch a full-stack R&D setup, which allowed the characterisation and calibration of superconducting Transmon qubits on an 8-qubit test chip within a period of 16 weeks (as depicted in the table of Fig. 3).

4.1 Quantum Computer Demonstration Platform: Quantum Accelerator v1.0

The supply chain partners used a systems engineering approach to develop the Quantum Accelerator, in which the first step consisted of defining the system functionality, the performance requirements and a system design. It was agreed that the system should be able to execute a set of spectroscopy, coherent qubit control and qubit gate analysis experiments. These functionality requirements were put to the test in the final stage of the project.

From the component perspective, off-the-shelf products from all partners were incorporated into the full stack; interfaces had to be defined, and gaps in the architecture had to be assessed and jointly bridged. Having well-defined interfaces was an important prerequisite and implies that components in the system design can be exchanged for alternative components that provide the same functionality, without the need to redesign the complete system.

Procurement, assembly, and hardware integration followed the system requirements and system design decisions and were accomplished by week 12 of the project. Testing and validating the system by performing experiments started in week 13. The functionality and performance requirements were sequentially tested by following a calibration tree procedure. By week 16, and in addition to the aforementioned required set of experiments, even an automated mixer calibration could be implemented. Two of the several experiments performed are shown in Fig. 3: on the left, Rabi oscillations are plotted as a function of microwave drive amplitude and pulse duration, and on the right, an AllXY tune-up experiment is presented. Both are performed on the same Transmon qubit with an energy relaxation time T1 ≈ 15 μs, a dephasing time T2* ≈ 6 μs and an echo time T2 ≈ 9 μs.
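As an illustration of how a figure such as the T1 ≈ 15 μs quoted above is typically extracted, the minimal Python sketch below fits an exponential decay to excited-state population data; the data here is synthetic, whereas in the actual setup it would come from the full-stack system described in this section.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 60e-6, 50)                               # delay after the pi-pulse (s)
rng = np.random.default_rng(seed=0)
data = np.exp(-t / 15e-6) + rng.normal(0.0, 0.02, t.size)   # synthetic trace with T1 = 15 us

def decay(t, t1, amp, offset):
    """Exponential relaxation model for the excited-state population."""
    return amp * np.exp(-t / t1) + offset

popt, _ = curve_fit(decay, t, data, p0=[10e-6, 1.0, 0.0])
print(f"fitted T1 = {popt[0] * 1e6:.1f} us")
```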

4.2 Quantum Accelerator Product Roadmap

As outlined above, building this baseline quantum R&D setup, called Quantum Accelerator v1.0, with off-the-shelf components led to a functional multi-qubit full-stack system. This setup can be used by suppliers to validate and test their components while interacting with other components in the integrated system. The performance of this platform can now be incrementally improved simply by improving the performance of the different components. This incremental increase of performance will lead to subsequent versions of the Quantum Accelerator, until the improvement of the components levels off. At that point a redesign of the system is required to get to the next level of performance. The redesign of the system will be based either on the next generation of the components or on a completely different technology paradigm, as outlined in the product roadmap section. Another reason to redesign the system is an adjustment in functionality requirements. Quantum Accelerator v1.0 was designed to provide suppliers of quantum computing components with a platform to test their components while interacting with other components in a minimal quantum computer demonstrator. The next generation of this product will require an extension of functionality, so that it can not only test components, but also help suppliers to improve their components by providing detailed characterisation and benchmarking information. As outlined in Sect. 3, the functionality and performance requirements will become more challenging for every subsequent product in the product line development roadmap, with the quantum algorithm development platform as a third step and ultimately a commercially viable quantum computer for the HPC market.

5. Conclusion

In this paper, we have analysed the current state of one of the most mature quantum technologies: superconducting circuits. We have outlined how to use this technology to do product development. The product development approach puts the focus on the functionality requirements of the product and uses state-of-the-art technology to build it. In this way, the development of the quantum computer as a commercially viable product can be accelerated. A series of simple quantum computers with specific functionalities is needed to build quantum computers that can outperform classical computers. An outline was given of a quantum computer product line roadmap, and an example of the development of the first product by a local supply chain in the Netherlands was presented. This shows that quantum technology development is no longer the exclusive domain of government-funded universities or RTOs. Nor is it limited to companies with large R&D budgets, such as large ICT companies and scale-up companies. This change in the R&D landscape shows that the quantum community has reached the next level of maturity, indicating that the quantum computer as a commercial product is becoming a reality today.