Racing Towards Quantum Advantage in Nashville
Things did not go as planned in Nashville this past weekend.
For starters, my flight to Nashville on Friday was canceled due to thunderstorms. I didn’t get to my hotel until Saturday night, missing an entire day’s worth of soul food and IndyCar racing. It wasn’t the last time thunderstorms disrupted the weekend, either. A storm interrupted the qualifying races on Saturday, and on Sunday, a menacing cumulonimbus descended on Nashville less than an hour before the Grand Prix was scheduled to begin.
Torrential rain flooded the asphalt as fans took shelter under Nissan Stadium. In the span of 15 minutes, the sweltering 94°F air cooled to a chilly 73°F. It was a welcome relief for all the sweaty masses in attendance — everybody except for the drivers and their teams.
The drivers and their teams were prepared for the heat. They had warmed up and practiced on the hot asphalt and tuned their cars accordingly. The sudden temperature change — not to mention the puddles on the road — cast a shadow of uncertainty over the race.
Minor weather changes can make a major difference in racing conditions. Cars run faster and grip tighter in cooler weather, while the heat can make for a greasy and slick track. Rain is even worse, washing out the rubber from the tiny grooves in the road that help tires stick. Drivers and their teams may prepare for any condition, but sudden changes in the weather will throw that initial preparation out the window. When weather conditions significantly change between practice runs and the actual race, it might as well be a different course.
After a nearly two-hour delay, the road had dried up considerably, and the drivers finally strapped into their cars. The Music City Grand Prix was about to begin. A highlight reel from last year’s inaugural Grand Prix played on the big screen, recapping a chaotic race with several gnarly crashes. After everything that had happened already this weekend, I would have been surprised if this year’s race wasn’t more chaotic. And to absolutely nobody’s surprise, the race was defined by chaos. There were eight wrecks throughout the race, and 36 out of 80 laps were run under the yellow caution flag.
Andretti had an uneven race. Alexander Rossi and Colton Herta both fell an entire lap behind after crashes early in the race, only to make an impressive comeback to place 4th and 5th, respectively. On the other hand, Devlin DeFrancesco was eliminated after driving Takuma Sato into the wall on lap 33. With four laps to go, Romain Grosjean, Herta, and Rossi were in a strong position at 5th, 6th, and 7th, when Grosjean was pushed into the wall and eliminated. A red flag was waved, draining all the tension from the final laps. When the race resumed, Scott Dixon led the entire way for an anticlimactic finish.
The race was a reminder of how unpredictable an IndyCar race can be. But in life as in racing, those who can adapt quickly to the unexpected are usually the ones who come out ahead. Therein lies the goal of our work with Andretti: enabling their drivers and race strategists to make real-time decisions based on the rapidly changing conditions on the course.
Our work with Andretti is concerned with three components of race strategy: predicting tire degradation, predicting competitor strategy, and optimizing fuel consumption. A lot of this comes down to when drivers should take pit stops. For example, if your competitor takes a pit stop, should you take one as well, or would you get stuck behind a pack of slower cars coming out of the pit lane?
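To make that pit-stop tradeoff concrete, here is a minimal sketch of the comparison a strategy model has to make, weighing time lost in the pit lane against time lost to worn tires. Every number and function name below is an illustrative placeholder, not anything from Andretti’s actual models.

```python
# Hypothetical sketch: should we pit now or stay out? All numbers are
# illustrative placeholders, not real race data.

def expected_lap_time(base_lap_time, tire_age_laps, degradation_per_lap):
    """Lap time grows as tires wear; a simple linear approximation."""
    return base_lap_time + tire_age_laps * degradation_per_lap

def should_pit_now(tire_age_laps, laps_remaining, pit_lane_loss_s=25.0,
                   base_lap_time=75.0, degradation_per_lap=0.08,
                   traffic_penalty_s=4.0):
    """Compare total time for the remaining laps if we stay out on worn tires
    versus pitting now for fresh tires (plus pit-lane and traffic losses)."""
    stay_out = sum(
        expected_lap_time(base_lap_time, tire_age_laps + i, degradation_per_lap)
        for i in range(laps_remaining)
    )
    pit_now = pit_lane_loss_s + traffic_penalty_s + sum(
        expected_lap_time(base_lap_time, i, degradation_per_lap)
        for i in range(laps_remaining)
    )
    return pit_now < stay_out

print(should_pit_now(tire_age_laps=22, laps_remaining=30))
```

In a real race, the degradation term would itself be a model fed by telemetry, which is where the data described next comes in.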
There’s a lot of data that goes into these decisions. A longer race like the Indy 500 can generate as much as 1TB of data per car, all of which can factor into race strategy. For example, minor changes in the temperature of the road along the course, combined with the car’s speed and the frequency and intensity of braking, can all impact the rate at which tires wear down. Humidity and wind resistance can affect the drag on the car, which in turn impacts fuel consumption. And that’s just your car. With 26 cars in the race, there are a lot of variables that the drivers and their teams need to pay attention to.
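As a toy example of how two of those variables feed the fuel side of the problem, here is a rough sketch of how air density (set by temperature and humidity) drives aerodynamic drag, and therefore how hard the engine has to work. The coefficients are illustrative guesses, not Andretti’s numbers.

```python
# Hypothetical sketch: how air density (via temperature and humidity) feeds
# into aerodynamic drag, and therefore fuel burn. Coefficients are illustrative.

def air_density(temp_c, relative_humidity):
    """Rough air density in kg/m^3; warmer and more humid air is less dense."""
    dry = 101325 / (287.05 * (temp_c + 273.15))   # ideal gas law, dry air
    return dry * (1 - 0.004 * relative_humidity)  # small illustrative humidity correction

def drag_power_kw(speed_ms, temp_c, relative_humidity,
                  drag_coefficient=0.9, frontal_area_m2=1.5):
    """Power needed to overcome aerodynamic drag: P = 0.5 * rho * Cd * A * v^3."""
    rho = air_density(temp_c, relative_humidity)
    return 0.5 * rho * drag_coefficient * frontal_area_m2 * speed_ms ** 3 / 1000

# A hot afternoon versus a cooler, post-storm evening at the same speed:
print(drag_power_kw(speed_ms=80, temp_c=34, relative_humidity=0.7))
print(drag_power_kw(speed_ms=80, temp_c=23, relative_humidity=0.9))
```

The point isn’t the physics; it’s that the same weather shift that changes tire grip also changes fuel strategy, and a model has to track both at once.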
Part of what we’re doing with Andretti is helping to build out their data infrastructure so all this data can easily be used to train and run machine learning models that aid the team’s decision-making. All the data coming in from sensors on the cars and the course needs to be cleaned, processed, and stored before it can be used in the models, whether for training and tuning the models after the race or for making real-time adjustments during it.
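As a rough illustration of that cleaning step, a sketch like the following (with invented channel names, and assuming pandas is available) turns irregular, glitchy sensor readings into the uniform time series a model can consume.

```python
# Hypothetical sketch of telemetry cleanup before modeling. Channel names
# ('speed_kph', 'brake_pressure', 'track_temp_c') are invented for illustration.

import pandas as pd

def clean_telemetry(raw: pd.DataFrame) -> pd.DataFrame:
    """Resample irregular sensor readings to a fixed rate, drop obvious
    glitches, and fill short dropouts."""
    df = raw.assign(timestamp=pd.to_datetime(raw["timestamp"]))
    df = df.set_index("timestamp").sort_index()
    # Drop physically impossible readings (sensor glitches).
    df = df[(df["speed_kph"] >= 0) & (df["speed_kph"] < 400)]
    # Resample onto a uniform 100 ms grid and interpolate short gaps only.
    df = df.resample("100ms").mean().interpolate(limit=5)
    return df
```

A step like this would run once over a full session’s logs when training models after the race, and in a streaming form during the race itself.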
Streamlining the data infrastructure is only the first step. Of course, we also need to build the machine learning models. Given the overwhelming number of variables and data inputs to account for, we can only do so much before we run up against the limits of purely classical computing. This is where quantum computing comes into play: it is well suited to problems that are too complex for classical computers alone.
While the models we build will initially run on classical computers or classical simulators of quantum computers, the goal is to build models that are quantum-ready, meaning they can be enhanced with quantum computers as the hardware matures. This is one of the great advantages of our software platform Orquestra®, which we’re using to upgrade Andretti’s data infrastructure and build the machine learning models. It’s hardware-agnostic, which allows users to easily swap computing resources in and out as new hardware becomes available and models evolve.
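The following is a generic sketch of what “hardware-agnostic” means in practice (a made-up pattern for explanation, not Orquestra’s actual API): model code asks for a backend by name, so a simulator used today can be swapped for quantum hardware later without touching the model itself.

```python
# Generic illustration of a hardware-agnostic backend layer.
# This is a made-up pattern for explanation, not Orquestra's actual API.

from abc import ABC, abstractmethod

class Backend(ABC):
    @abstractmethod
    def run(self, circuit, shots: int) -> dict:
        """Execute a circuit and return measurement counts."""

class LocalSimulator(Backend):
    def run(self, circuit, shots: int) -> dict:
        # Placeholder: pretend every shot measures the all-zeros bitstring.
        return {"0" * circuit["n_qubits"]: shots}

BACKENDS = {"simulator": LocalSimulator}
# Later, a vendor's hardware backend can be registered here without any
# change to the model code that calls evaluate_model().

def evaluate_model(circuit, backend_name="simulator", shots=1000):
    backend = BACKENDS[backend_name]()
    return backend.run(circuit, shots)

print(evaluate_model({"n_qubits": 3}))  # {'000': 1000}
```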
Believe it or not, however, building quantum-ready machine learning models is only half the battle. It’s an engineering feat in its own right to build a model that is accurate and reliable, but an entirely different feat to build models that humans can actually use and interpret.
Andretti, for example, has been using its current data analytics setup for years. The race strategists know how to use the dashboards and data visualizations they have to make split-second decisions in the heat of the race. A lot of these decisions are based on instinct and intuition gained over dozens of races. You can’t just rip out and replace that system with something new. You need to work with the systems and intuition already in place. Our models need to provide guidance without being a black box; the dashboards we’re building need to show the race strategists why our models make the recommendations that they do.
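To show what “not a black box” might look like on a dashboard, here is a toy sketch that ranks how much each input pushed a simple (linear) pit-stop score one way or the other. The feature names and weights are invented for illustration.

```python
# Hypothetical sketch of recommendation explanations for a dashboard.
# Feature names and weights are invented for illustration.

def explain_recommendation(features: dict, weights: dict) -> list:
    """Rank each feature's contribution to a linear pit-stop score."""
    contributions = {name: weights.get(name, 0.0) * value
                     for name, value in features.items()}
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

lap_features = {"tire_age_laps": 24, "track_temp_delta_c": -8, "caution_flag": 1}
model_weights = {"tire_age_laps": 0.04, "track_temp_delta_c": -0.02, "caution_flag": 0.5}

for name, contribution in explain_recommendation(lap_features, model_weights):
    print(f"{name:>20}: {contribution:+.2f}")
```

The real models will be more complex than a weighted sum, but the requirement is the same: every recommendation comes with the reasons behind it.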
Frankly, this is true of any enterprise. As our enterprise survey recently found, the greatest barrier to quantum adoption is the complexity of integrating quantum computing with an enterprise’s existing IT stack. You can’t ever rip and replace; you need to work with what’s already there and build on it.
The hard work with Andretti is only beginning. It’s going to require close collaboration between scientists and engineers to build models that fit in with Andretti’s existing infrastructure. With every race, we’re getting a better understanding of how they make decisions based on rapidly changing conditions today, and how we can enable them to make more informed decisions in the future.
Soon, we’re hoping that our models will give Andretti the insight to adapt to the chaos that we saw in Nashville faster and more strategically than their competitors. Whether it’s a crash, a sudden downpour, or an unexpected pit stop by a competitor, we want the Andretti team to be able to make rapid decisions informed by insights that can only be gained by processing massive amounts of data in real time.
We believe this is the future not only of racing strategy but also of enterprise strategy. Because whether you’re Andretti Autosport or a Fortune 500 company, the one thing you can count on is that things won’t always go as planned.