Program and Events

Tuesday, June 10th

Wednesday, June 11th

  • 8:00-9:00 - Continental breakfast
  • 9:00-9:10 - Opening remarks (Daniel Lidar and Paul Warburton) View Video
  • 9:10-10:00 - Eddie Farhi, MIT

    Different Strategies for Optimization Using the Quantum Adiabatic Algorithm
    Download Talk View Video
    I will present the results of a numerical study, at 20 qubits, of the performance of the Quantum Adiabatic Algorithm on randomly generated instances of MAX 2-SAT with a unique assignment that maximizes the number of satisfied clauses. The probability of obtaining this assignment at the end of the quantum evolution measures the success of the algorithm. I will report three strategies which consistently increase the success probability for the hardest instances in the ensemble: decrease the overall evolution time, initialize the system in an excited state, and add a random local Hamiltonian to the middle of the evolution.
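    As an illustration of the instance class described above (a sketch under our own conventions, not Farhi's actual instance generator), a brute-force check that a random MAX 2-SAT instance has a unique maximizing assignment can be written as:

```python
import itertools
import random

def random_max2sat(n_vars, n_clauses, seed=0):
    """Generate random 2-SAT clauses as (lit, lit) pairs, where
    lit = +/-(var+1) and a negative sign denotes negation."""
    rng = random.Random(seed)
    clauses = []
    for _ in range(n_clauses):
        v1, v2 = rng.sample(range(n_vars), 2)  # two distinct variables
        clauses.append((rng.choice((1, -1)) * (v1 + 1),
                        rng.choice((1, -1)) * (v2 + 1)))
    return clauses

def num_satisfied(clauses, assignment):
    """Count clauses satisfied by a 0/1 assignment (tuple indexed by variable)."""
    def lit_true(lit):
        val = assignment[abs(lit) - 1]
        return val == 1 if lit > 0 else val == 0
    return sum(1 for a, b in clauses if lit_true(a) or lit_true(b))

def has_unique_maximizer(clauses, n_vars):
    """Brute force over all 2^n assignments; True iff the maximum is unique."""
    best, count = -1, 0
    for assignment in itertools.product((0, 1), repeat=n_vars):
        s = num_satisfied(clauses, assignment)
        if s > best:
            best, count = s, 1
        elif s == best:
            count += 1
    return count == 1
```

    Only instances passing `has_unique_maximizer` would enter such an ensemble; at 20 qubits the 2^20 enumeration is still feasible.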
  • 10:00-10:50 - Hidetoshi Nishimori, Tokyo Tech

    Is quantum annealing equivalent to simulated annealing?
    Download Talk View Video
    The answer is "a|yes> + b|no>". It is |yes> because the unconventional classical-quantum mapping [1,2,3] allows one to reformulate simulated annealing in terms of quantum annealing (or adiabatic quantum computation) in the same spatial dimension, and vice versa. The |no> aspect comes from differences in the reformulated Hamiltonians in the above-mentioned mapping: the quantum Hamiltonian obtained from simulated annealing has an interaction range comparable to that of the original classical Hamiltonian, whereas the classical Hamiltonian derived from the quantum counterpart has, in general, very complicated long-range interactions. Quantum annealing is in this sense more convenient and versatile for optimization purposes. [1] Castelnovo et al., Ann. Phys. 318 (2005) 316; [2] Somma et al., PRL 99 (2007) 030603; [3] Henley, J. Phys.: Cond. Mat. 16 (2004) S891
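    For readers unfamiliar with the mapping in [1,2,3], the standard symmetrization step can be sketched as follows (notation ours): for classical transition rates $W$ obeying detailed balance with respect to the Boltzmann distribution,

```latex
W_{ij}\,\pi_j = W_{ji}\,\pi_i, \qquad \pi_i \propto e^{-\beta E_i},
\qquad\Longrightarrow\qquad
H_{ij} \;=\; -\,\pi_i^{-1/2}\, W_{ij}\, \pi_j^{1/2},
```

    where $H$ is symmetric (Hermitian), shares its spectrum with $-W$, and annihilates the vector $\psi_0(i) \propto \sqrt{\pi_i}$, so classical relaxation becomes imaginary-time evolution under a quantum Hamiltonian whose ground state encodes the Boltzmann distribution.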
  • 10:50-11:15 - coffee break
  • 11:15-11:50 - Nathan Wiebe

    Improving Adiabatic State Preparation Using Adiabatic Theorems
    Download Talk View Video
    The lore surrounding the use of the adiabatic approximation is actually full of inaccuracies. For example, it is widely held that if you want to reduce diabatic leakage you simply need to evolve slower; however, it sometimes turns out that evolving slightly faster can actually reduce the error more than going slightly slower. Similarly, it is widely believed that evolving slowly near the minimum gap and quickly near the beginning and end of a sweep is optimal. Actually the opposite can be true for high accuracy state preparation. In this talk, I will show that these phenomena can not only be understood by using more precise adiabatic theorems but also that these theorems can be used to find ways of exploiting adiabatic interference effects to polynomially reduce diabatic errors. Examples of such improvements will be given for both traditional adiabatic evolution and coherently controlled adiabatic evolution wherein separate adiabatic evolutions can be constructively interfered to increase the final ground state fidelity. The error robustness of these methods and their applications to quantum computation and simulation will also be discussed.
  • 11:50-12:25 - Michael Jarret

    Adiabatic Optimization and the Fundamental Gap Theorem
    Download Talk View Video
    Several previous works have investigated the circumstances under which quantum adiabatic optimization algorithms can tunnel out of local energy minima that trap simulated annealing or other classical local search algorithms. Here we pursue two particular questions: (1) do adiabatic optimization algorithms always succeed in polynomial time for trivial optimization problems in which there is only one local minimum and (2) what characterizes the boundary between large- and small-gapped Hamiltonians? In addressing the first, we find, surprisingly, a counterexample in which the potential is a single basin on a graph, but the eigenvalue gap is exponentially small as a function of the number of vertices. On the other hand, we find several lower bounds on the eigenvalue gap provided the ground state is single-peaked. For our second question, we adapt techniques from the literature on the "fundamental gap conjecture" to a discrete setting relevant to adiabatic optimization algorithms. For path and hypercube graphs, we obtain a discrete analogue of the "fundamental gap theorem," yielding a tight lower bound on the eigenvalue gap in the presence of convex potentials.
  • 12:25-2:25 - lunch
  • 2:25-3:15 - Jason Petta, Princeton

    Entanglement Generation via Landau-Zener Interferometry
    Download Talk View Video
    Landau-Zener interferometry has become a powerful tool for quantum control. I will show how Landau-Zener interferometry can be applied to two distinct physical systems: singlet-triplet spin qubits and cavity-coupled superconducting qubits. We first demonstrate coherent control of electronic spin states in a double quantum dot by sweeping an initially prepared spin singlet state through a singlet-triplet anticrossing in the energy level spectrum. The anticrossing serves as a beam splitter for the incoming spin singlet state. Consecutive crossings through the beam splitter, when performed within the spin dephasing time, result in controllable coherent quantum oscillations between the singlet state and triplet state. Favorable coherence times of superconducting qubits allow us to explore the full crossover from nonadiabatic to adiabatic transition dynamics. We show that we can “throw” a quantum of excitation from one superconducting qubit to another in a cavity coupled system. Single passage through an anticrossing is used to create a Bell state with a fidelity of ~80%, limited by qubit relaxation.
    Research performed in collaboration with Guido Burkard, Art Gossard, Andrew Houck, Hong Lu, Chris Quintana, and Hugo Ribeiro. We acknowledge support from the Sloan and Packard Foundations, ARO, DARPA, and NSF.
  • 3:15-3:50 - Fernando Pastawski

    Adiabatic preparation of encoded quantum states
    Download Talk View Video
    Hamiltonians associated with quantum error correcting codes (QECCs) define code spaces effectively resilient to decoherence. While the energy splitting within the corresponding ground state space is negligible, the actual code space may vary significantly in the presence of perturbations. In fact, encoding into the ideal ground space may lead to a negligible quantum memory time in the presence of an unknown perturbation [1]. For this reason, it is of paramount importance that the preparation method produce states in the actual ground space as opposed to the ideal code space, which may be quite different. An adiabatic evolution would, by design, follow the actual ground state space of the system, suggesting it is a natural candidate for achieving such a task. Hamma and Lidar [2] considered such an adiabatic evolution in which a Hamiltonian having a trivial product ground state is interpolated into a toric code Hamiltonian, which possesses a four-fold degenerate ground state space. They found that while the gap for such an evolution must forcibly close, eventually becoming exponentially small, additional symmetries of the system allow a polynomially long (efficient) adiabatic preparation to succeed. The authors also observed the stability of the encoded states with respect to perturbations in the preparation process [3]. We numerically study the stability of the translationally invariant adiabatic preparation process when symmetries are shared by the initial product Hamiltonian and the final Hamiltonian, for which we consider small toric and color codes. Pauli stabilizer states are produced when both initial and final Hamiltonians commute with logical Pauli operators. We find that when the corresponding logical operators have minimal weight, the preparation is robust to perturbations in the Hamiltonians and timing. In contrast, other shared symmetries, such as those associated with encoded magic states, seem to require significantly longer preparation to guarantee adiabaticity and are only robust to increasing this time but not to perturbations in the Hamiltonian. In the case of the toric code, we contrast these results with the existence of a discrete set of phase transitions predicted in the thermodynamic limit [4].
  • 3:50-4:15 - coffee break
  • 4:15-5:05 - Malcolm Carroll, Sandia

    Semiconductor qubits for adiabatic quantum computing
    Download Talk View Video
  • 5:05-5:40 - Ojas Parekh

    Benchmarking quantum annealing for community detection on synthetic social networks
    Download Talk View Video
    We highlight some of the pitfalls and subtleties encountered in our recent efforts to benchmark quantum annealing on community-detection problems in real-world-like social networks. We bring insights from theoretical computer science and combinatorial optimization to bear on benchmarking quantum algorithms and discuss the following issues.
    What is an appropriate measure of success? Consider the following success criteria: (i) produce an optimal solution on 99% of instances, or (ii) produce a solution within 99% of optimal on all instances. For classical algorithms solving quadratic unconstrained binary optimization (QUBO) problems on a D-Wave Chimera graph, we observe that from a formal perspective, (i) is likely hard while (ii) is solvable in polynomial time (the latter follows from [7]). Since both criteria seem equally valid from a practical perspective, which should one select? Is (i) an artificially hard problem in some sense?
    Which classical algorithms ought to be used for comparison? Although simulated annealing (SA) seems like a natural option, Hen and Young [4] observed that for a certain variant of the boolean satisfiability problem, both SA and simulated quantum annealing scale exponentially, while a polynomial-time algorithm based on Gaussian elimination exists. Thus selecting a suboptimal classical algorithm may lend an unfair and artificial advantage to quantum algorithms. On the other hand, an expertly engineered classical code may currently outperform a quantum alternative but may not be able to sustain an asymptotic advantage. How do we compare a technology that we have had decades to hone against an infant technology that has yet to mature? Even after selecting an algorithm, how do we know that it is configured properly for optimal performance? A case in point is that McGeoch and Wang [5] reported a 3600x speedup for a D-Wave Two over the CPLEX solver. We and others [3] observe that by using CPLEX as an integer linear solver, for which it was designed, rather than as a quadratic solver, the speedup vanishes.
    How does one select appropriate benchmarking instances? Random instances are easy to generate but are unlikely to capture real-world problems. More realistic instances are desirable; however, in the case of D-Wave in particular, we must efficiently embed such instances into the Chimera graph, which is itself a hard problem. Moreover, such embeddings require ranges of coupler weights beyond the precision of current D-Wave devices. We give an algorithm that generates synthetic graphs with properties of real-world social networks directly within the Chimera graph, sidestepping the embedding issue. We also present a technique for reducing a weighted instance of QUBO to a polynomially larger unweighted instance of QUBO to circumvent coupler precision limitations. Finally, we show that the random instances used in recent benchmarking studies are in fact drawn from an NP-hard class of instances; this does not follow from Barahona's seminal paper [1] as has been suggested.
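    The weighted-to-unweighted reduction above is specific to the talk, but the underlying QUBO-to-Ising change of variables (x_i = (1 + s_i)/2) that makes such problems expressible on Ising hardware is standard; a minimal sketch (our own code, not the authors'):

```python
def qubo_to_ising(Q):
    """Convert 'minimize x^T Q x over x in {0,1}^n' to an Ising model
    'minimize sum_i h_i s_i + sum_{i<j} J_ij s_i s_j + offset' over
    s in {-1,+1}^n, via the substitution x_i = (1 + s_i) / 2."""
    n = len(Q)
    h = [0.0] * n
    J = {}
    offset = 0.0
    for i in range(n):
        for j in range(n):
            q = Q[i][j]
            if q == 0:
                continue
            if i == j:
                # x_i^2 = x_i = (1 + s_i)/2
                h[i] += q / 2
                offset += q / 2
            else:
                # x_i x_j = (1 + s_i + s_j + s_i s_j)/4
                a, b = min(i, j), max(i, j)
                J[(a, b)] = J.get((a, b), 0.0) + q / 4
                h[i] += q / 4
                h[j] += q / 4
                offset += q / 4
    return h, J, offset
```

    The two formulations agree on every assignment, which is easy to verify exhaustively for small n.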
  • 5:40-7:30 - poster session

Thursday, June 12th

  • 8:00-9:00 - breakfast
  • 9:00-9:50 - Aidan Roy, D-Wave

    Mapping discrete optimization problems to sparse Ising models
    Download Talk View Video
    The D-Wave hardware is limited in its ability to solve real-world optimization problems in several ways, including limited precision/energy range, sparse connectivity, and a small number of qubits. In this talk, we present techniques to overcome those limitations using better problem representations. In particular, we present algorithms for (1) mapping constraint satisfaction problems to Ising models and (2) minor-embedding Ising models in the D-Wave hardware using chains. We also present some recent attempts to benchmark "real-world" performance of the D-Wave hardware. This talk is based on work done by many members of the algorithms group at D-Wave.
  • 9:50-10:40 - Paul Warburton, UCL

    Experiments with and Applications of the D-Wave Machine
    Download Talk View Video
    D-Wave’s “Vesuvius” chip finds the ground state of the transverse-field Ising Hamiltonian on ~500 quasi spins with a sparse connectivity matrix known as the “Chimera” graph. At present there seems to be no general consensus as to whether the chip exploits entanglement in any way, nor whether the machine can outperform classical algorithms for any optimization problems. Here I will describe our attempts to determine the extent of any quantum mechanical behavior on the chip by treating it as an experimental black box. The population distribution of problem Hamiltonians with highly-degenerate ground states, and how that distribution varies with the problem energy scale, can in principle be used to distinguish classical dynamics from open-system quantum dynamics [1]. I will discuss the extent to which the experiments may also be replicated by a fully classical model incorporating noise in the magnitudes of the couplers and/or bias fields [2]. Second, we have used matrix product states with finite Schmidt rank to simulate the annealing dynamics of a one-dimensional spin chain [3]. We find that for all instances the exact ground state is reached for a minimum Schmidt rank equal to either one or two. We find some (albeit limited) correlation with the performance of the D-Wave chip, with instances requiring a minimum Schmidt rank of two being marginally harder. Finally, with a view to possible applications of the D-Wave machine, we have studied its performance as a decoder of (classical) error correction codes for transmission over noisy digital communication channels [4]. We find that the bit-error-rate is reduced by using the full distribution of solutions generated by the D-Wave chip (i.e. maximum entropy decoding) rather than retaining only the minimum energy solution (i.e. maximum likelihood decoding).
    This work has been undertaken in collaboration with Walter Vinci, Simone Severini, Nick Chancellor, Tanja Duric, Philip Crowley, Andrew Green and Gabriel Aeppli at UCL; and with Tameem Albash, Anurag Mishra and Daniel Lidar at the University of Southern California. It has been supported by the UK EPSRC and by Lockheed Martin. [1] Walter Vinci, Tameem Albash et al., arXiv:1403.4228 [2] Seung Woo Shin et al., arXiv:1404.6499 [3]
  • 10:40-11:05 - coffee break
  • 11:05-11:40 - Tameem Albash

    Ruling out (some) classical models for the D-Wave device
    Download Talk View Video
    The question of whether the final time statistics of the D-Wave processors are determined by quantum effects or can be entirely captured by classical models remains one of significant debate. The discussion is made more complicated by inherent control/calibration noise on the device, requiring models to include such effects to properly capture the performance of the device. We revisit several of the D-Wave studies already made available [1, 2] with this in mind, and we perform a careful analysis of the final time statistics. We consider an 8 qubit Hamiltonian first studied in ref. [3] with a very particular degenerate ground state structure, and we show that classical spin models including the O(3) model, SSSV [4], and variants exhibit a particular preference for a ground state. Utilizing this feature, we are able to show disagreement between these models and the D-Wave 2 device used in this study. We then consider the 108 qubit random instances first studied in ref. [1], and by considering a particular distance measure we are able to show that the energy spectrum that SSSV finds is sufficiently different from the energy spectrum found by the D-Wave 1 device. Although our work does not rule out all classical models, we show that a detailed analysis of the states observed can and should be used to eliminate possible models for the D-Wave devices.
  • 11:40-12:15 - Walter Vinci

    Probing the Late-Time Evolution Spectrum of a Quantum Annealer
    Download Talk View Video
    The final state of an ideal quantum annealer is a special superposition of the degenerate ground states defined in the computational basis. This particular superposition determines the population of the various ground states when measurements are performed at the end of the computation and can be considered a “footprint” of the exact ground state just before the end of the adiabatic evolution. Since the late-time quantum spectrum can be determined using perturbation theory, the calculation of the ground state populations in the ideal case can be easily scaled to large systems. The experimental measurement outcomes obtained from a real annealer can then be used to compare the behavior of the real device to that of an ideal one. We have generated a large number of random instances with degenerate ground states and we have calculated the expected, ideal ground state populations using perturbation theory up to second order in the transverse field. We have then implemented the same set of problems on the D-Wave Vesuvius device. Our analysis shows that the behavior of the device correlates well with that of an ideal quantum annealer on a statistically relevant set of benchmark problems. This correlation is present for instances with up to 72 spins. Experimental errors prevent a reliable analysis for larger systems. Although the generic exact ground state has long range entanglement, we cannot conclude, given the observed correlation, that the device supports extended entanglement. In fact we can obtain a similar correlation assuming that the quantum state of the device lies on a subset of the Hilbert space that has no entanglement. While our results are limited to the specific set of instances studied, this work highlights again the elusive role that entanglement has in determining the measurement outcomes of a quantum annealer.
  • 12:15-4:45 - ISI visit/beach/free time
  • 4:45-5:35 - Matthias Troyer, ETHZ

    Performance of simulated annealing, simulated quantum annealing and D-Wave on hard spin glass instances
    Download Talk View Video
  • 5:35-6:25 - Helmut Katzgraber, Texas A&M

    Spin-glass-inspired benchmarks of quantum annealing machines
    Download Talk View Video
    Recently, a programmable quantum annealing machine has been built that minimizes the cost function of hard optimization problems by, in principle, adiabatically quenching quantum fluctuations. Tests performed by different research teams have shown that the machine might exploit quantum effects. However, experiments on a class of random-bond instances have not yet demonstrated an advantage over classical optimization algorithms on traditional computer hardware. Here we present arguments as to why this might be the case. First, the benchmarks used to test the D-Wave machine are encoded in the restrictive Chimera topology dictated by fabrication constraints of the device. Our results show that while the worst-case complexity of finding a ground state of an Ising spin glass on the Chimera graph is not polynomial, the finite-temperature phase space is likely rather simple because spin glasses on Chimera have only a zero-temperature transition. As such, temperature-based annealing schedules might more efficiently reach the minimum of a cost function. Second, the bimodally-distributed randomness chosen in the benchmark problems results in a large degeneracy of the ground-state manifold. This means that searching for a ground state of an Ising spin glass on the Chimera lattice is an easier optimization exercise for both quantum and classical methods, i.e., possibly not a measure of sufficient rigor to detect quantum speedup. Our results therefore show that a careful design of the hardware architecture and benchmark problems is key when building quantum annealing machines. Work done in collaboration with F. Hamze, A. Ochoa, and Z. Zhu.
  • 6:25-8:00 - poster session

Friday, June 13th

  • 8:00-9:00 - breakfast
  • 9:00-9:50 - Matthew Hastings, Microsoft

    Obstructions to Classically Simulating The Adiabatic Algorithm - Whiteboard Talk
    View Video
    The quantum adiabatic algorithm is a proposed general purpose optimization algorithm.  The hope is that it will allow quantum computers to solve optimization problems and find spin glass ground states in cases which are not currently tractable on classical computers.  However, the power of the algorithm remains unclear despite intense theoretical and experimental effort.  One natural proposal to simulate this algorithm on a classical computer is to use quantum Monte Carlo; indeed, large scale quantum Monte Carlo simulations have already been used to analyze the power of this algorithm.  I address some cases where quantum Monte Carlo will have problems for topological reasons.  Interestingly, the child's puzzle in this YouTube video will have direct applications to this problem.
  • 9:50-10:40 - Vadim Smelyanskiy, NASA

    How quantum is D-Wave computation?
    Download Talk View Video
    We will analyze the interplay between massive qubit co-tunneling and noise in D-Wave machine performance and make a comparison with purely classical computational pathways. This is joint work with Sergio Boixo, Sergei Isakov, Alireza Shabani, Masoud Mohseni and Hartmut Neven from Google and Mohammad Amin from D-Wave Systems.
  • 10:40-11:05 - coffee break
  • 11:05-11:40 - Ilia Zintchenko

    Classical optimisation, can we do better?
    Download Talk View Video
    Finding ground states of Ising spin glasses is a notoriously hard problem for which there is to date no known efficient general-case classical algorithm. Adiabatic quantum computing has shown great potential for solving such problems. However, comparison to classical computers remains elusive, as there is no known lower bound on the complexity of classical solvers. To this end we present techniques based on local constraint satisfaction which can significantly improve the performance of classical algorithms, raising the bar for quantum speedup. Benchmarks are done on Chimera graphs and compared to the D-Wave Two device.
  • 11:40-12:15 - Damian Steiger

    Extreme Value Analysis of Simulated Annealing, Simulated Quantum Annealing and a Mean-Field Model
    Download Talk View Video
    We present results of finding the ground state for hard instances of random Ising spin glasses on Chimera graphs using simulated annealing (SA), simulated quantum annealing (SQA) and a mean-field quantum annealing algorithm [1]. These three algorithms are heuristics, and we therefore compare the empirical distribution functions of the time-to-solution with increasing system size. Because we are interested in the hard instances, we extrapolate the tails of these empirical distribution functions using Extreme Value Theory: the tails follow generalized Pareto (GP) distribution functions, which are characterized by their shape parameter ξ. For ξ = 0 the tails are exponentially decaying, i.e. Pr {X > x} = exp (−x/σ). For ξ < 0 they have a finite end-point, and for ξ > 0 they are heavy and the kth moment is infinite for k ≥ 1/ξ. By comparing the fitted GP distribution parameters we find that, for example, the mean time-to-solution for hard instances using SQA is infinite (ξ > 1) for 288 spins, while it is still finite for SA.
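    A minimal sketch of the generalized Pareto tail behavior described above (standard parameterization with threshold at zero; our own code, not the authors' fitting procedure):

```python
import math

def gp_survival(x, xi, sigma):
    """Survival function Pr{X > x} of a generalized Pareto distribution
    with shape xi and scale sigma. xi = 0 gives the exponential tail;
    xi < 0 gives a finite endpoint; xi > 0 gives a heavy tail."""
    if x < 0:
        return 1.0
    if xi == 0.0:
        return math.exp(-x / sigma)      # exponential tail, Pr{X > x} = exp(-x/sigma)
    arg = 1.0 + xi * x / sigma
    if arg <= 0.0:
        return 0.0                       # beyond the finite endpoint (xi < 0)
    return arg ** (-1.0 / xi)

def endpoint(xi, sigma):
    """Upper endpoint of the support: finite only for xi < 0."""
    return -sigma / xi if xi < 0 else math.inf
```

    For ξ > 0 the survival function decays polynomially, which is why moments of order k ≥ 1/ξ (e.g. the mean when ξ > 1) diverge.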
  • 12:25-2:25 - lunch
  • 2:25-3:15 - Christopher Monroe (Joint Quantum Institute and University of Maryland)

    Modular and Scalable Quantum Networks of Atoms
    Download Talk View Video
    Laser-cooled atomic ions are standards in quantum information science, as such qubits have unsurpassed levels of quantum coherence and can be measured with near-perfect efficiency. When state-dependent optical dipole forces are applied to a collection of atomic ions, their Coulomb interaction is modulated in a way that allows the entanglement of the qubits through quantum gates. Similar forces allow the simulation of long-range quantum magnetic interactions with reconfigurable graphs, and recent experiments have implemented transverse Ising and XY models with up to 18 trapped ions, the largest collection of interacting qubits yet demonstrated. Soon the number of spins in the system will be large enough that no classical computer can predict its behavior. Scaling to even larger numbers can be accomplished by coupling trapped ion qubits to photons, where entanglement can be formed over remote distances for applications in quantum communication, quantum teleportation, and distributed quantum computation. Such a modular approach to large scale quantum information hardware will be required in any platform, and trapped atomic ions will likely continue to lead the way.
  • 3:15-3:50 - Nicholas Chancellor

    Dynamics of the D-Wave Chip
    Download Talk View Video
  • 3:50-4:15 - coffee break
  • 4:15-5:05 - Kristen Pudenz, USC

    Error Corrected Quantum Annealing with Hundreds of Qubits
    Download Talk View Video
    Physical realizations of quantum computing are always threatened by decoherence, especially as the scale of the device increases. This makes the implementation of error correction crucial. We have developed error correction techniques tailored to quantum annealing using superconducting adiabatic quantum optimization processors. I will show experimental results for computations using up to 448 physical qubits on the D-Wave Two device. Scaling of code performance on both antiferromagnetic chains and random 2D Ising problems will be addressed, along with insights into device error mechanisms and choices of decoding strategy. The error correction substantially enhances the observed success probabilities.
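    The decoding strategies compared in the talk are not specified here; as a hedged illustration only, a generic majority-vote decoder for repetition-encoded (chained) qubits, one common baseline for this kind of error correction, might look like:

```python
def majority_decode(physical_spins, chains):
    """Decode each logical qubit by majority vote over its chain of
    physical spins (values +1/-1); ties are broken toward +1.
    `chains` maps a logical index to the list of its physical indices."""
    logical = {}
    for q, members in chains.items():
        total = sum(physical_spins[i] for i in members)
        logical[q] = 1 if total >= 0 else -1
    return logical
```

    With odd-length chains the vote is never tied, so a single flipped physical spin in a length-3 chain is always corrected.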
  • 5:05-5:40 - Eddie Farhi

    Error Correction in the Hamiltonian Model of Quantum Computation through Energy Penalties - Whiteboard Talk
    View Video
  • 5:40-8:00 - banquet

Saturday, June 14th

  • 8:00-9:00 - breakfast
  • 9:00-9:50 - Andrew Houck, Princeton

    Quantum simulation with coupled circuit QED arrays
    Download Talk View Video
    Superconducting circuits and circuit quantum electrodynamics provide an excellent toolbox for non-equilibrium quantum simulation. In circuit QED, the strong interaction of light with a single qubit can lead to strong qubit-mediated photon-photon interactions. Recent theoretical proposals have predicted phase transitions in arrays of these cavities, demonstrating that complex matter-like phenomena can emerge with such interacting photons. Due to inevitable photon dissipation and the ease of adding photons through driving, these systems are fundamentally open and a useful tool for studying non-equilibrium physics. I will discuss recent experimental and theoretical progress towards realization of these non-equilibrium quantum simulators, including a localization-delocalization crossover in a pair of coupled cavities and preliminary measurements of large cavity arrays. I will show a variety of available measurement tools in these systems, including transport and scanned local quantum probes.
  • 9:50-10:40 - Paolo Zanardi, USC

    Unitary dynamics in strongly non-unitary systems
    Download Talk View Video
    It has been recently realized that dissipative processes can be harnessed and exploited to the end of coherent quantum control and information processing. In this spirit we consider strongly dissipative quantum systems admitting a non-trivial manifold of steady states. We show how one can enact adiabatic coherent unitary manipulations e.g., quantum logical gates, inside this steady-state manifold by adding a weak, time-rescaled, Hamiltonian term into the system's Liouvillian. The effective long-time dynamics is governed by a Fermi golden rule type Hamiltonian which results from the interplay between the weak unitary control and the fast relaxation process. The leakage outside the steady-state manifold entailed by the Hamiltonian term is suppressed by an environment-induced symmetrization of the dynamics. We present applications to quantum-computation in decoherence-free subspaces and noiseless subsystems and numerical analysis of non-adiabatic errors.
  • 10:40-11:05 - coffee break
  • 11:05-11:55 - Robin Blume-Kohout, Sandia

    A No-Go Ptheorem for Fault-Tolerant AQC
    Download Talk View Video
    AQC should be intrinsically robust against some of the failure modes that afflict circuit-model QC, including dissipation and dephasing in the energy eigenbasis.  Some other faults can be suppressed by dynamical decoupling and penalty Hamiltonians.  But fault tolerance requires robustness against all failure modes stemming from a reasonable error model, and this requires active error correction.  In this talk, I introduce the whimsical term "ptheorem", akin to the well-known "equals-sub-p" notation, to denote an argument that is compelling to a physicist (but not sufficiently rigorous to be a theorem).  I will then pprove a simple ptheorem that AQC cannot be error-corrected, without access to either (1) physical Hamiltonians that directly couple O(N) qubits, or (2) perturbative gadgets that simulate such Hamiltonians at cost poly(N).
  • 11:55-12:45 - Itay Hen, USC/ISI

    Performance of D-Wave Two on Problems with Planted Solutions
    Download Talk View Video
    I will talk about recent benchmarking results of D-Wave Two on problems with "planted solutions", i.e., problems generated around chosen ground-state configurations. I will show that a suitable construction of such problems allows for some control over their hardness, as well as over the frustration of their solutions, and may therefore be used to gain further insight into the capabilities of the D-Wave chip. A comparison of the performance of D-Wave on these problems against several classical solvers will also be discussed.
  • 12:45-1:00 - Concluding remarks (Daniel Lidar)
    View Video
Sponsors: Lockheed Martin · Sandia National Laboratories · Google · Information Sciences Institute · D-Wave Systems