Researchers at the DIII-D National Fusion Facility achieved a scientific first this month when they used machine learning calculations to automatically prevent fusion plasma disruptions in real time, while simultaneously optimizing the plasma for peak performance. The new experiments are the first of what they expect to be a wave of research in which machine learning–augmented controls could broaden the understanding of fusion plasmas. The work may also help deliver reliable, peak performance operation of future fusion reactors.
“These experiments are quite significant, because they illustrate why the fusion community has been so excited about machine learning,” said DIII-D Director David Hill. “Although DIII-D has applied machine learning to real-time prediction of instabilities for decades, actual real-time control to prevent disruption using these massive data sets is very novel and exciting.”
DIII-D is the largest magnetic fusion research facility in the U.S. and is operated by General Atomics as a national user facility for the U.S. Department of Energy’s Office of Science. The heart of the facility is a tokamak that uses powerful electromagnets to produce a doughnut-shaped magnetic bottle for confining a fusion plasma. DIII-D routinely achieves plasma temperatures more than 10 times hotter than the core of the Sun. At such extremely high temperatures, hydrogen isotopes can fuse together and release energy. (See Fusion Energy 101 explainer below for more detail on how fusion works.)
Previous experience on DIII-D and other tokamaks has shown that these temperatures can be reached more efficiently when the plasma shape is stretched vertically. Beyond a certain degree of stretching, however, the plasma becomes vertically unstable and drifts up or down until it touches the wall of the tokamak. The traditional control approach uses the tokamak’s plasma control system to sense the approaching limit and shut the plasma down moments before it is reached, which is hardly an ideal strategy for a fusion power plant.
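In rough terms, that traditional guard amounts to a simple threshold check. The sketch below is a minimal illustration only, not the actual DIII-D plasma control system; the limit value, safety margin, and function names are all assumptions invented for this example.

```python
# Minimal sketch of the traditional "sense and shut down" guard.
# The growth-rate limit and margin are invented placeholders, not
# real DIII-D plasma control system parameters.

GROWTH_LIMIT = 500.0   # assumed vertical-instability growth-rate limit (1/s)
MARGIN = 0.9           # act at 90% of the limit, before it is crossed

def guard_step(estimated_growth_rate: float) -> str:
    """One control-cycle decision: keep running or ramp the plasma down."""
    if estimated_growth_rate > MARGIN * GROWTH_LIMIT:
        return "ramp down"   # controlled shutdown before the plasma goes unstable
    return "continue"

print(guard_step(estimated_growth_rate=470.0))  # -> "ramp down"
```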
To avoid the need to shut down the reaction, researchers Jayson Barr and Brian Sammuli devised a new machine-learning approach that allows the plasma to operate reliably very close to its vertical stability limit. Using a neural network, a form of machine learning, Barr and Sammuli estimated the growth rate of the vertical instability every millisecond, then used those estimates to make constant adjustments that kept the plasma near its vertical stability limit, where it is most efficient. Traditional methods can perform the same calculation, but the machine-learning approach executes 100 times faster, enabling robust, real-time control of the growth rate.
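As a rough illustration of the idea, a small feed-forward network can map magnetic measurements to a growth-rate estimate, with a simple feedback rule nudging the plasma shape to hold that estimate near a setpoint. Everything in the sketch below, including the network size, weights, signal count, setpoint, and gain, is an invented placeholder rather than the trained DIII-D model.

```python
import numpy as np

# Sketch only: a tiny feed-forward network standing in for the trained
# growth-rate estimator, plus a proportional rule that adjusts the plasma
# elongation (vertical stretch) to hold the estimate near a setpoint.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)   # placeholder "trained" weights
W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)

def estimate_growth_rate(measurements: np.ndarray) -> float:
    """One forward pass: 8 magnetic signals -> instability growth rate (1/s)."""
    hidden = np.tanh(W1 @ measurements + b1)
    return float((W2 @ hidden + b2)[0])

TARGET_RATE = 450.0   # hold the plasma just below an assumed stability limit
GAIN = 1e-5           # illustrative proportional gain on elongation

def control_step(measurements: np.ndarray, elongation: float) -> float:
    """Runs every millisecond: nudge elongation toward the target growth rate."""
    error = TARGET_RATE - estimate_growth_rate(measurements)
    return elongation + GAIN * error   # stretch more when comfortably stable

# Example: one control-cycle update from simulated diagnostics.
print(control_step(rng.normal(size=8), elongation=1.8))
```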
In addition, the researchers trained the neural network to estimate the degree of uncertainty in its own calculations, allowing the control system to take more conservative actions when that uncertainty is high. To train the network, Sammuli used General Atomics’ TokSearch data mining tool, developed to efficiently process the huge volume of DIII-D experimental data collected over several decades. In effect, the neural network gave the control system real-time access to tens of thousands of prior plasmas, producing its estimates from the results it had “learned” from.
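The article does not say how the network represents its own uncertainty; one common stand-in is an ensemble of networks whose disagreement serves as the uncertainty estimate, as in the sketch below. The ensemble, weights, and the rule for backing off the setpoint are all assumptions made for illustration.

```python
import numpy as np

# Sketch only: an ensemble of small networks whose spread stands in for the
# uncertainty estimate described above. When the members disagree, the
# controller backs its growth-rate setpoint away from the stability limit.
rng = np.random.default_rng(1)
ENSEMBLE = [(rng.normal(size=(16, 8)), rng.normal(size=(1, 16)))
            for _ in range(5)]                     # placeholder member weights

def predict_with_uncertainty(measurements: np.ndarray) -> tuple[float, float]:
    """Return (mean, std) of the ensemble's growth-rate estimates."""
    preds = [float((W2 @ np.tanh(W1 @ measurements))[0]) for W1, W2 in ENSEMBLE]
    return float(np.mean(preds)), float(np.std(preds))

BASE_TARGET = 450.0   # setpoint used when the model is confident (1/s)

def conservative_target(measurements: np.ndarray) -> float:
    """Widen the safety margin in proportion to the model's uncertainty."""
    _, std = predict_with_uncertainty(measurements)
    return BASE_TARGET - 2.0 * std   # larger error bars -> larger margin

print(conservative_target(rng.normal(size=8)))
```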
Techniques like this will be critical for future fusion reactors, where high performance must be sustained for long periods of time to create practical and affordable fusion energy.
“We’ve been collecting experimental data at DIII-D since the ’80s, but only recently have we been able to really take advantage of all that data by using modern software and computing hardware,” Sammuli said. “The answers to some of the hard problems in fusion are just sitting there in the data, waiting to be discovered. We’re now starting to be able to use modern machine-learning techniques to augment our physics understanding, and this allows us to control the plasma more effectively.”