Simulating nature
Watch this video from Olivia Lanes on simulating nature with quantum computers, or open the video in a separate window on YouTube.
This lesson uses content from this tutorial:
Utility-scale error mitigation with probabilistic error amplification tutorial
Introduction
One of the most compelling applications of quantum computers is their ability to simulate natural phenomena. In this lesson, we will explore how quantum computers are used to solve quantum dynamics problems—specifically, how they help us understand the time-evolution of a quantum system.
First, we will take a broad look at the general steps involved in conducting these simulations. Then, we will examine a concrete example: the experiment that IBM presented in 2023, which showcased the concept of quantum utility. This experiment serves as an excellent case study for understanding the practical steps and implications of simulating quantum dynamics with real quantum hardware. By the end, you will have a clearer picture of how researchers approach these challenges and why quantum simulation holds such promise for advancing our understanding of the natural world.
Richard Feynman gave a highly influential lecture at Caltech in 1959. It was famously titled “There’s Plenty of Room at the Bottom,” in playful allusion to the vast, unexplored possibilities at the microscopic scale. Feynman argued that much of physics at the atomic and subatomic levels had yet to be uncovered.
The significance of the talk grew in the 1980s as technology progressed. During this period, Feynman revisited these ideas in another influential keynote, delivered at the 1981 Physics of Computation conference and later published as "Simulating Physics with Computers." There, he posed a bold question: could computers be used to perform exact simulations that replicate nature's behavior at the quantum level? Feynman suggested that, instead of relying on rough approximations to model atomic processes, we could use computers that harness the laws of quantum mechanics themselves—not merely to model nature, but to emulate it.
It is this type of physical simulation that we will examine throughout this lesson.
Recall this timeline graphic introduced in a previous episode. At one end of the spectrum, we see problems that are straightforward to solve and do not require the enhanced speed quantum computing might bring.
At the opposite end are extremely challenging problems that demand fully fault-tolerant quantum machines — technology that is not yet available. Fortunately, many simulation problems are believed to fall somewhere in the middle of this timeline, within the range where today’s quantum computers can already be effectively applied. There are many reasons to be excited and intrigued by this prospect, as simulating nature forms the foundation for a wide range of promising applications.
The following sections cover the general workflow for simulating nature, and then a specific instance of that workflow that replicates the results of a well-known study.
General workflow
Before anyone can apply quantum computing to these exciting areas, it's important to first understand the basic steps in a typical simulation workflow:
- Identify system Hamiltonian
- Hamiltonian encoding
- State preparation
- Time-evolution of the state
- Circuit optimization
- Circuit execution
- Post-processing
The process begins by identifying a quantum system of interest. This helps determine the Hamiltonian that governs its time evolution, as well as a meaningful description of its initial properties, or its state. Next, you need to select an appropriate method to implement the time evolution of this state. Note that the first four steps in this workflow are all part of the Mapping step in the Qiskit patterns framework.
After setting up the time-evolution circuit, the subsequent stages involve performing the actual experiment. This typically includes optimizing the quantum circuit that implements the time-evolution algorithm, running the circuit on quantum hardware, and post-processing the results. These are the same as the last three steps in the Qiskit patterns framework.
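To make the time-evolution step concrete, here is a small numerical sketch (not part of the tutorial's code) of the idea behind Trotterization, one common way to implement time evolution on a quantum computer. It splits a two-qubit toy Hamiltonian into two non-commuting parts and shows that alternating short evolutions under each part approaches the exact evolution as the number of steps grows:

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Toy two-qubit Hamiltonian H = A + B with non-commuting parts:
# A = Z(x)Z (an interaction term), B = X(x)I + I(x)X (a transverse field)
A = np.kron(Z, Z)
B = np.kron(X, I2) + np.kron(I2, X)
H = A + B

t = 1.0            # total evolution time
U_exact = expm(-1j * H * t)

errors = []
for n in (1, 10, 100):   # number of Trotter steps
    step = expm(-1j * A * t / n) @ expm(-1j * B * t / n)
    U_trotter = np.linalg.matrix_power(step, n)
    errors.append(np.linalg.norm(U_trotter - U_exact))
    print(f"{n:4d} steps -> error {errors[-1]:.2e}")
```

The error of this first-order product formula shrinks roughly as 1/n, which is why circuit depth (and hence circuit optimization, the next step) matters so much in practice.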
Next, we'll discuss what these steps mean before we move on to coding.
1. Identify the system Hamiltonian
The first essential step in performing a simulation experiment is to identify the Hamiltonian that describes the system. In many cases, the Hamiltonian is well established. However, we often construct it by summing up the energy contributions from smaller parts of the system. This is typically expressed as a sum of terms:

$$H = \sum_{k} H_{k},$$

where each term $H_k$ acts on one of the local subsystems (like a single particle or a small group of particles) of the total Hamiltonian $H$. In the case of indistinguishable elementary particles, it is important to determine whether the system involves fermions or bosons. Fermions, such as electrons, obey the Pauli exclusion principle: no two identical fermions can occupy the same quantum state. Bosons, in contrast, can share a quantum state, and this difference affects the system's statistics and how it must be modeled.
In practice, people are often interested in physical systems in which the elements are presumed to be well-separated or labeled, and thus distinguishable, as in spins on a lattice.
This system consists of magnetic dipole spins arranged on a lattice, which are treated as distinguishable particles because each occupies a fixed, addressable site. The system is described by the transverse-field Ising model, and its Hamiltonian is constructed from the sum of two parts:

$$H = -J \sum_{\langle i,j \rangle} Z_i Z_j + h \sum_{i} X_i$$

The first term represents the interaction energy between neighboring spins. Here $\langle i,j \rangle$ indicates that we sum over all pairs of spins that are directly connected on the lattice, $Z_i$ and $Z_j$ are the Pauli-Z matrices representing the state of the spins at sites $i$ and $j$, and $J$ is the coupling constant, which defines the strength of this interaction. The second term represents the influence of an external magnetic field applied across the entire system. Here $X_i$ is the Pauli-X matrix acting on the individual spin at site $i$, and $h$ indicates the strength of this external field.
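As a small illustration (a NumPy sketch, not the tutorial's code), the transverse-field Ising Hamiltonian for a short open chain can be built directly as a matrix. The chain length, couplings, and helper names here are chosen for the example:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op_on(site_ops, n):
    """Kronecker product over an n-spin chain.
    site_ops: dict {site_index: 2x2 matrix}; identity on all other sites."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, site_ops.get(k, I2))
    return out

def tfim_hamiltonian(n, J, h):
    """H = -J * sum over neighbors of Z_i Z_{i+1}  +  h * sum over sites of X_i."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):                  # nearest-neighbor ZZ couplings
        H += -J * op_on({i: Z, i + 1: Z}, n)
    for i in range(n):                      # transverse field on each site
        H += h * op_on({i: X}, n)
    return H

H = tfim_hamiltonian(n=3, J=1.0, h=0.5)
print("Hermitian:", np.allclose(H, H.conj().T))
print("Ground-state energy:", np.linalg.eigvalsh(H)[0])
```

This brute-force matrix construction only works for a handful of spins (the matrix dimension doubles with each spin), which is precisely why large instances call for a quantum computer.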
2. Hamiltonian encoding
The next step is to translate the Hamiltonian into a form that a quantum computer can process, which we call encoding. This encoding process depends critically on the type of particles in the system: whether they are distinguishable or indistinguishable and, if indistinguishable, whether they are fermions or bosons.
If the system has distinguishable particles, like the spins fixed on a lattice that we examined above, the Hamiltonian is often already written in a language compatible with qubits. The Pauli-Z operator, for instance, naturally describes a spin pointing up or down, so no special encoding is needed.
When simulating indistinguishable particles (fermions or bosons), it is necessary to apply an encoding transformation. These particles are described within a special mathematical framework called second quantization, which tracks the occupation number of each quantum state by introducing creation and annihilation operators: the creation operator $a_j^\dagger$ adds one particle to state $j$, while the annihilation operator $a_j$ removes one particle from state $j$. Starting from this second-quantized form, fermionic operators can be mapped to qubits by transformations such as Jordan-Wigner and Bravyi-Kitaev. The Jordan-Wigner transformation defines the fermionic creation operator

$$a_j^\dagger = \left( \prod_{k<j} Z_k \right) \frac{X_j - i Y_j}{2},$$

which fills the $j$-th quantum state with a fermion, and the fermionic annihilation operator

$$a_j = \left( \prod_{k<j} Z_k \right) \frac{X_j + i Y_j}{2},$$

which empties the $j$-th state. You can find more details on the Jordan-Wigner transformation in our Quantum Computing in Practice course, episode 5 - Mapping. Similarly, bosons require their own encoding methods, such as the Holstein-Primakoff transformation, to be represented by qubits.
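To see that the Jordan-Wigner construction really behaves like fermions, here is a short NumPy check (an illustrative sketch, assuming the convention that $|0\rangle$ means an empty mode and the leftmost qubit is mode 0). It builds the qubit operators for a few modes and verifies the canonical anticommutation relations:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def jw_creation(j, n):
    """Jordan-Wigner image of the creation operator on n modes:
    a Z string on modes k < j, then (X - iY)/2 on mode j, identity after."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        if k < j:
            out = np.kron(out, Z)
        elif k == j:
            out = np.kron(out, (X - 1j * Y) / 2)
        else:
            out = np.kron(out, I2)
    return out

n = 3
create = [jw_creation(j, n) for j in range(n)]
annihilate = [c.conj().T for c in create]

def anticomm(A, B):
    return A @ B + B @ A

# Canonical anticommutation relations: {a_i, a_j^dag} = delta_ij, {a_i, a_j} = 0
for i in range(n):
    for j in range(n):
        expected = np.eye(2**n) if i == j else np.zeros((2**n, 2**n))
        assert np.allclose(anticomm(annihilate[i], create[j]), expected)
        assert np.allclose(anticomm(annihilate[i], annihilate[j]), 0)
print("Jordan-Wigner operators satisfy the canonical anticommutation relations.")
```

The Z strings are what carry the fermionic minus signs; dropping them would leave operators on different modes commuting instead of anticommuting.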
Ultimately, whether the path is direct or requires a translation, the goal is the same: to express the system's Hamiltonian in the form of Pauli spin operators that a quantum computer can understand and execute.
3. State preparation
After encoding the desired Hamiltonian into the quantum computer's gate set, the next important step is to select an appropriate initial quantum state to begin the simulation. The choice of initial state influences not only the convergence of variational algorithms such as the Variational Quantum Eigensolver (VQE) but also affects the accuracy and efficiency of time evolution and sampling. Essentially, the initial state serves as the starting point for the computation, laying the groundwork for extracting useful observables from the quantum system being modeled. Ideally, this state should represent a physically meaningful configuration of the system under study.
For many quantum chemistry simulations, the Hartree-Fock state can be a good starting point. In the language of second quantization, the Hartree-Fock state ($|\mathrm{HF}\rangle$) is created by applying creation operators ($a_j^\dagger$) for each of the lowest-energy orbitals to the vacuum state ($|\mathrm{vac}\rangle$).
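A minimal numerical sketch of this idea (illustrative only, assuming $|0\rangle$ = empty mode with the leftmost qubit as mode 0): applying Jordan-Wigner creation operators for the two lowest orbitals to the vacuum yields the occupation basis state $|1100\rangle$.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def jw_creation(j, n):
    """a_j^dagger under Jordan-Wigner: Z string on modes k < j, (X - iY)/2 on mode j."""
    mats = [Z] * j + [(X - 1j * Y) / 2] + [I2] * (n - j - 1)
    out = np.array([[1.0 + 0j]])
    for m in mats:
        out = np.kron(out, m)
    return out

n_modes, n_electrons = 4, 2
vac = np.zeros(2**n_modes, dtype=complex)
vac[0] = 1.0                      # |0000> is the vacuum (mode 0 leftmost)

# Fill the two lowest orbitals: |HF> = a_0^dag a_1^dag |vac>
hf = vac
for j in reversed(range(n_electrons)):
    hf = jw_creation(j, n_modes) @ hf

# The result is the single basis state |1100>, i.e. index 0b1100 = 12
print(np.flatnonzero(np.abs(hf) > 1e-12))   # -> [12]
```

On hardware, this state is prepared simply by applying an X gate to each qubit that corresponds to an occupied orbital.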