The Many Flavors of Quantum Machine Learning Pie
Artwork by DALL-E/OpenAI. Prompt: “A supercomputer powered by thanksgiving pie, digital art, cyberpunk”
As quantum machine learning continues to bake in the hot ovens of science, let’s address a possible misunderstanding in our midst. The root of the misunderstanding pertains to the scope of the term “quantum machine learning” (QML).
Much like its cousin, good old-fashioned (!) machine learning (ML), QML is a wide field that encompasses many different techniques. And just as you wouldn't want only one pie at the table this Thanksgiving, there are in fact many different flavors of QML pie to choose from.
The misunderstanding occurs when characteristics of one slice of QML are assumed to apply to another, or to all of the slices. As I tell my children, just because you struggle with the pumpkin doesn't mean you shouldn't try the pecan.
The flavor people often think of when they think of QML is discriminative supervised learning. This may be because many of the most impactful applications of classical ML have involved using supervised learning to analyze and classify massive data sets. This is something classical computers already excel at, but that doesn’t mean quantum will excel at it too.
Certainly, you could implement classical supervised learning techniques, such as principal component analysis and support vector machines, on a quantum device. But given the high error rates of today's quantum hardware, you won't be able to load the big datasets you want to analyze onto it.
This just won’t work — at least any time soon. Much like trying to make a heaping apple pie in a short-sided tart pan, the contents don’t match the container.
There’s no reason to use quantum for something that classical computers already excel at. That’s why the near-term value for quantum in ML is not in supervised ML, but in a different flavor: generative modeling, also known as generative AI.
Rather than learning to label data from large training datasets, generative AI generates new data. Well-known examples include GPT-3 and DALL-E for generating text and images, but generative AI can also generate code, 3D objects, and as we have shown in our own work, novel solutions to complex combinatorial optimization problems (more on this later).
In generative modeling, you still need big datasets to train the generative models. However, you can keep the training data in the classical realm, while still leveraging quantum. In some classical generative neural networks, there is a component called a “latent space” that provides a condensed, compressed description of the input data, approximately capturing the data in fewer variables. This latent space presents the opportunity to stir in your quantum ingredients.
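To make the latent-space idea concrete, here is a minimal sketch using PCA as a linear stand-in for the learned compression inside a generative neural network. Everything in it (the toy dataset, the choice of two latent dimensions) is illustrative, not the actual model from this post: the point is only that a few latent variables can approximately capture higher-dimensional data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 200 points in 5 dimensions that really live on a 2-D plane,
# plus a little noise -- i.e., the data is approximately low-dimensional.
basis = rng.normal(size=(2, 5))
data = rng.normal(size=(200, 2)) @ basis + 0.01 * rng.normal(size=(200, 5))

# "Encoder": project onto the top-2 principal components (the latent space).
centered = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
latent = centered @ vt[:2].T          # 200 x 2 condensed description

# "Decoder": map the 2 latent variables back to the original 5 dimensions.
reconstruction = latent @ vt[:2] + data.mean(axis=0)

error = np.mean((data - reconstruction) ** 2)
print(latent.shape, error)
```

A neural autoencoder learns a nonlinear version of the same compression, but the role of the latent space is the same: a small set of variables that summarizes the training data.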
Artwork by DALL-E/OpenAI. Prompt: “Dropping a quantum computer into a large pot of soup, digital art”
In our approaches to quantum-enhanced generative modeling, we map the distilled information of the latent space onto a quantum circuit, which can be on a real quantum device or a classical simulation of a quantum circuit. The distilled data encoded in the quantum circuit then informs the generation of new samples, whether that’s AI-generated text or images of handwritten digits, as we recently demonstrated using a quantum device.
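As a toy illustration of what "mapping latent information onto a quantum circuit" can mean, here is a tiny statevector simulation in plain numpy. The circuit shape, the two-qubit size, the RY-angle encoding, and the latent values are all assumptions for the sketch, not the circuits used in the work described above: the point is that latent variables become circuit parameters, and measuring the circuit (via the Born rule) yields samples that can feed the rest of the generative model.

```python
import numpy as np

rng = np.random.default_rng(1)

def ry(theta):
    """Single-qubit Y-rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# Hypothetical latent vector (e.g., from a classical encoder), one value per qubit.
latent = np.array([0.7, 2.1])

# Encode each latent value as a rotation angle on its own qubit: |0> -> RY|0>.
q0 = ry(latent[0]) @ np.array([1.0, 0.0])
q1 = ry(latent[1]) @ np.array([1.0, 0.0])
state = np.kron(q0, q1)               # 4-amplitude statevector for 2 qubits

# Entangle the qubits with a CNOT, so the circuit can express correlations
# between latent variables that an independent product distribution cannot.
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
state = cnot @ state

# The Born rule turns amplitudes into a probability distribution over
# bitstrings; sampling from it yields new discrete samples.
probs = np.abs(state) ** 2
samples = rng.choice(["00", "01", "10", "11"], size=5, p=probs)
print(np.round(probs, 3), samples)
```

On a real device, the same parameterized circuit would be run and measured directly rather than simulated; the classical simulation above is just the smallest self-contained way to show the encode-entangle-sample pattern.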
By using only a little bit of quantum, we are not asking quantum to do much. But we are exploiting an area where quantum can beat classical. We are still learning the details, but it appears that the unique capabilities of quantum computers, namely entanglement and superposition, can allow quantum circuits to encode complex correlations in the latent space that would be more difficult to model classically. These quantum-encoded correlations then help the generative model do a better job of generating new samples.
To bring back the pie analogy, imagine learning how to bake an apple pie without a recipe. It would take a lot of trial and error to get it right! This is akin to the generative model without the quantum component. In this analogy, the training process would be like a panel of tasters that tell you if the new pie is any good and give you feedback on how to make it better. Adding the quantum circuit that encodes the latent space would be like bringing in a chef who has made a cherry pie before, but never an apple pie. They have some more intuition into baking pies that can help you make a better guess as to how to make an apple pie.
Artwork by DALL-E/OpenAI. Prompt: “A panel of serious men and women tasting hundreds of different pies, digital art, cyberpunk”
This hybrid quantum-classical approach to generative machine learning does not fit what the community has come to call “quantum advantage.” However, it appears that there are some cases (that apply to real world problems) where we can leverage quantum devices for improved outcomes using generative modeling. One promising area where quantum can provide a boost in the near-term is with our generator-enhanced optimization (GEO) technique.
In GEO, a generative ML model trains on the best solutions from state-of-the-art classical optimization solvers and then generates new solutions. The quantum (or quantum-inspired, as we demonstrated in our research) component of this model can better learn the correlations that make the good solutions good, which allows it to generate better solutions.
In the example detailed in our research (see the link above), we used GEO to generate S&P 500 stock portfolios. The quantum-inspired model generated portfolios with lower risk than those generated by the purely classical solver, for the same level of return. In other words, our quantum-inspired version of GEO did better than classical solvers that have been fine-tuned for decades — without even using a real quantum device.
This approach can be applied to other combinatorial optimization problems as well. Although we may be in the quantum computing business, our work with GEO shows that you don’t necessarily need a quantum device to gain an advantage from QML: you can use quantum-inspired methods to boost your existing solutions.
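The GEO loop described above (train a generative model on the best solutions seen so far, then sample new candidates from it) can be sketched with a fully classical stand-in for the generative component. The toy objective, the made-up "returns," and the independent-Bernoulli generator here are all illustrative assumptions, not the GEO implementation from the research; in GEO, the fitting step is where a quantum or quantum-inspired model would learn the correlations among good solutions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy combinatorial problem: choose a subset (bitstring) of 10 assets to
# maximize return while staying near a target portfolio size of 5 assets.
# The return values are made up for illustration.
returns = rng.uniform(0.0, 1.0, size=10)

def score(x):
    # Reward total return, penalize portfolios far from 5 assets.
    return x @ returns - 0.2 * (x.sum() - 5) ** 2

# "Seed" solutions, standing in for a classical solver's best attempts.
pop = rng.integers(0, 2, size=(50, 10))

for _ in range(20):
    # Keep the elite solutions -- the training data for the generator.
    elite = pop[np.argsort([score(x) for x in pop])[-10:]]
    # Stand-in generative model: independent Bernoulli fitted to the elite.
    # (This is the step where GEO would instead use a quantum or
    # quantum-inspired model that captures correlations between assets.)
    p = np.clip(elite.mean(axis=0), 0.05, 0.95)
    # Sample a fresh population of candidate solutions from the model.
    pop = (rng.random((50, 10)) < p).astype(int)

best = max(pop, key=score)
print(best, score(best))
```

Even this crude loop tends to climb toward good solutions; the argument for GEO is that a generator expressing richer correlations than an independent product distribution can propose better candidates from the same training data.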
A more detailed exploration of the benefits of quantum computing for generative AI can be found in this blog post. If you want to find out which flavor of QML might pair well with your organization’s problems, please reach out! And even if you don’t, please don’t limit your dessert choices to one kind of QML pie. Happy Thanksgiving to those who celebrate and may you have some delicious pie regardless!