This is a great update! I hope the authors continue publishing new versions of their plots as the community builds up towards facility gain. It's hard to keep track of all the experiments going on around the world, and normalizing all the results into the same plot space (even wrt. just triple product / Lawson criteria) is actually tricky for various reasons and takes dedicated time.
Somewhat relevant, folks here might also be interested in a whitepaper we recently put up on arXiv that describes what we are doing at Pacific Fusion: https://arxiv.org/abs/2504.10680
Section 1 in particular gives some extra high-level context that might be useful to have while reading Sam and Scott's update, and the rest of the paper should also be a good introduction to the various subsystems that make up a high-yield fusion demonstration system (albeit focused on pulser-driven inertial fusion).
I heard that NIF was never intended to be a power plant, not even a prototype of one. It's primarily a nuclear weapon research program. For a power plant you would need much more efficient lasers, you would need a much larger gain in the capsules, you would need lasers that can do many shots per second, some automated reloading system for the capsules, and you would need a heat to electricity conversion system around the fusion spot (which will have an efficiency of ~1/3 or so).
From my time in fusion research circles, you're correct, but it's also not a simple "weapons or energy?" question. It could only have ever been a pure research facility. At the time of design, the physics wasn't certain enough to aim for net energy gain. Where the weapons research came in is in the choice of laser focus. Instead of "direct drive", where the lasers directly strike the fusion fuel, NIF lasers strike the inside of a Hohlraum, which produces X-Rays that then heat the fuel. X-Ray opacity is an important topic in nuclear weapons research.
Bear in mind that I wasn't directly involved, and this is my impression picked up from conversations during my time in fusion research, which was about 10 years ago.
It's an experimental facility. Yes, a power plant would need much more efficient lasers, but NIF's lasers date back to the 1990s; equivalent modern lasers are about 40X more efficient, and for an experiment it's easy enough to do a multiplication to see what the net result would have been with modern lasers.
Modern lasers can also repeat shots much more quickly. Power gain on the capsules appears to scale faster than linear with the input power, so getting to practical gain might not be as far off as it appears at first glance.
These are some of the reasons that various fusion startups are pursuing laser fusion for power plants.
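To make the "faster than linear" point concrete, here's a toy sketch in Python. The exponent `ALPHA` is a pure assumption for illustration (the real scaling depends on the ignition regime), and the reference numbers are the roughly-public figures from NIF's 2022 shot:

```python
# Toy model: assume target gain scales superlinearly with laser energy
# on target. ALPHA is an assumed exponent, not a measured value.
ALPHA = 2.0   # assumed scaling exponent: gain ~ E_laser**ALPHA
E_REF = 2.05  # MJ on target, roughly NIF's December 2022 shot
G_REF = 1.5   # scientific gain of that shot (~3.15 MJ / 2.05 MJ)

def projected_gain(e_laser_mj: float) -> float:
    """Scale the reference gain by (E / E_REF)**ALPHA."""
    return G_REF * (e_laser_mj / E_REF) ** ALPHA

for e in (2.05, 3.0, 4.0):
    print(f"{e:.2f} MJ on target -> projected gain ~{projected_gain(e):.1f}")
```

Under that (assumed) quadratic scaling, roughly doubling the delivered energy would almost quadruple the gain, which is why driver upgrades look interesting even before any capsule improvements.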
From what I understood, laser fusion needs laser efficiencies not just 40x better than what NIF uses, but like 3 or 4 orders of magnitude more efficient than the state of the art. Seems like a non-starter.
Every fusion reactor design needs similar gains in some part of the stack outside of the fusion parts to make it a viable power source: tokamaks need magnets to be orders of magnitude better, the lining of the reactor needs to last much longer, there's the whole steam-conversion mess, etc.
Commercial REBCO tape is an entirely sufficient superconductor for tokamaks. At this point the limiting factor for the magnetic field is the structural strength of the reactor. Tokamak output scales with the square of size and the fourth power of magnetic field strength, and using REBCO, the CFS ARC design should get practical power output from a reactor much smaller than ITER.
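For a rough sense of the scaling quoted above (output ~ size² × B⁴), here's a back-of-envelope sketch. The radii and field strengths are approximate public figures used as assumptions, not design data:

```python
# Back-of-envelope comparison using output ~ R**2 * B**4.
# Approximate public figures (assumptions for illustration):
#   ITER:      major radius ~6.2 m, on-axis field ~5.3 T
#   ARC-class: major radius ~3.3 m, on-axis field ~9.2 T

def relative_output(radius_m: float, field_t: float) -> float:
    """Relative fusion output under the R^2 * B^4 scaling."""
    return radius_m**2 * field_t**4

iter_like = relative_output(6.2, 5.3)
arc_like = relative_output(3.3, 9.2)
print(f"ARC-like vs ITER-like output ratio: ~{arc_like / iter_like:.1f}x")
```

Despite being roughly half the size, the higher-field machine comes out ahead by a factor of a few under this scaling, which is the whole case for REBCO magnets.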
I was trying to work out a joke about buying better lasers off of alibaba but it seems that despite being 30 years old they're still orders of magnitude beyond off the shelf options.
Partially. The very efficient lasers from Alibaba don't do short pulse / high power, so they can potentially be used only as part of the system: the pump lasers. The final nanosecond laser is still a one-off build, though one that seems pretty doable even by a small company if they set their mind to it.
Btw, NIF achieved those recent results by adding a strong magnetic field around the target (penny-shrinkers have known that tech for 20+ years :). There are other tricks like this around that could potentially be similarly useful, if only somebody had the money and interest...
I’ve seen some pretty wacky structures that involve mechanically forcing permanent magnets together at different orientations to create asymmetric magnetic fields that are strongest where they need to be or weak where they would cause problems. Like eddy currents in electric motor housings, or insufficient hold for chef’s knives.
I know motor windings have gotten pretty funky of late to do a little bit of this, but do they do multi tesla magnetic fields that use several different windings to create the same sorts of bias in field strength? The ITER windings seem to be an extremely mild form of this.
It’s fascinating how NIF's legacy tech limits its relevance for actual energy generation, yet it still serves as a stepping stone. The fact that gain scales faster than linearly with input power is particularly encouraging — it suggests that advances in laser efficiency and repetition rate could unlock meaningful progress sooner than many assume. I can see why startups are jumping on this now. Curious to see how much of this can move from lab to grid in the next decade.
>NIF is a key element of the National Nuclear Security Administration’s science-based Stockpile Stewardship Program to maintain the reliability, security, and safety of the U.S. nuclear deterrent without full-scale testing.
Nothing about the NIF looks like a power plant to me. It's like the laser weapons guy and the nuclear weapons guy found a way to spend giant piles of money without having to acknowledge the weapons angle.
A lot of people think so, but the US government openly spends way more money on nuclear weapons than on fusion research. We'll spend almost a trillion dollars on nuclear weapons over the next decade.[1] The government's fusion funding was only $1.4 billion for 2023.[2]
So it seems more likely to me that some physicists figured out how to get their fusion power research funded under the guise of weapons research, since that's where the money is. NIF's original intent was mostly weapons research but it's turned out to be really useful for both, and these days, various companies are attempting to commercialize the technology for power plants.[3]
Yes. The NIF is a weapons research lab, not a power research lab.
The purpose of it is to show that the USA is still capable of producing advanced hydrogen bombs. More advanced than anybody else's.
The '2.05 megajoules' is only an estimate of the laser energy actually used to trigger the reaction. It ignores how much power it took to actually run the lasers or the reactor. Even if they update the lasers with modern ones there is zero chance of it ever actually breaking even. It is a technological dead end as far as power generation goes.
The point of the 'breakthrough' is really more about ensuring continued Congressional approval for funding than anything else. They are being paid to impress, and certainly they succeeded in that.
However I suspect this is true of almost all 'fusion breakthroughs'. They publish updates to ensure continued funding from their respective governments.
People will argue that this is a good thing since it helps ensure that scientists continue to be employed and publishing research papers. That sentiment is likely true in that it does help keep people employed, but if your goal is to have a working and economically viable fusion power plant within your lifetime it isn't a good way to go about things.
If the governments actually cared about CO2 and man-made global warming they would be investing in fusion technology and helping to develop ways to recycle nuclear waste usefully. Got to walk before you can run.
It's been over 20 years since I've dug into nuclear tech pretty deep, but don't we already have breeder reactors and other tech that is low-waste and safer? We could build modern fission reactors (not based on nuclear-submarine designs) and deliver cleaner power today. Yes, there is a lot of politics, especially around manufacturing, production, and storage of spent fuel, so all of those are probably showstoppers no matter how safe they are in reality. But we aren't invested in it.
The primary purpose of the NIF is to maintain the US nuclear stockpile without nuclear tests. The lasers are very inefficient (IIRC about 2%). The success they claimed is that the energy released by the burning plasma exceeds the laser energy put into the fuel capsule. Since NIF was never intended to be a power plant, they don't use the most efficient lasers.
It was never intended to be a power plant but it was hoped that it would achieve a net gain fusion reaction for the first time. This turned out to be a lot harder than expected.
Yes. After the test ban treaties, there was a huge push into mathematical emulation of all aspects of fusion, and of assorted bombs, as well as laser ignition of pellets with these large lasers, using inertial confinement of the pellet as the laser impacted it and analysing the fusion by observation of emitted neutrons, X-rays, etc. They issued reports from time to time (sanitised), and probably used the secret data to fine-tune emulated weapons with fact points. The pellets were composed of potential fuels, various hydrogens and lithiums, varied in composition to explore the ignition space. A number of pellets performed well in terms of gain, but were far, far from usable fusion once the LL labs' costs were factored in. I think they determined it could never work as a fusion energy source, but it provided data. They still mine data from it with various elemental mixes making up the pellets.
It should be noted that "breakeven" is often misleading.
There's "breakeven" as in "the reaction produces more energy than put into it", and there's breakeven as in "the entire reactor system produces more energy than put into it", which isn't quite the same thing.
We are careful to always specify what kind of “breakeven” or “gain” is being referred to on all graphs and statements about the performance of specific experiments in this paper.
Energy gain (in the general sense) is the ratio of fusion energy released to the incoming heating energy crossing some closed boundary.
The right question to ask is then: “what is the closed boundary across which the heating energy is being measured?”
For scientific gain, this boundary is the vacuum vessel wall. For facility gain, it is the facility boundary.
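A quick numerical illustration of the two boundaries, using rough public numbers from NIF's December 2022 shot (the wall-plug figure especially is an order-of-magnitude assumption):

```python
# Two gains, two boundaries, same shot (rough public numbers):
E_ON_TARGET = 2.05   # MJ of laser energy crossing the vacuum-vessel boundary
E_FUSION = 3.15      # MJ of fusion energy released
E_WALL_PLUG = 300.0  # MJ drawn from the grid to fire the lasers (approximate)

q_scientific = E_FUSION / E_ON_TARGET  # boundary: vacuum vessel wall
q_facility = E_FUSION / E_WALL_PLUG    # boundary: the facility itself

print(f"scientific gain ~{q_scientific:.2f}")  # > 1: the 2022 milestone
print(f"facility gain   ~{q_facility:.3f}")    # << 1: far from plant-relevant
```

Same shot, same fusion yield; the answer changes by two orders of magnitude depending on where you draw the boundary.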
It's always confused me a bit. It's not like if you put 10kWh into the reactor, that 10kWh goes away. You still lose a significant fraction of it in inefficiency of the cycle but it still goes towards heat which can be used to heat steam and turn a turbine. iirc, you can get about 4kWh back.
On the other side of the coin, if you put 10kWh in and get 10kWh of fusion out, that's 20kWh to run a steam turbine, which nets you about 8kWh. So really you need to be producing 15kWh of heat from fusion for every 10kWh you put in to break even.
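The arithmetic above checks out if you assume roughly 40% thermal-to-electric conversion; a quick sketch:

```python
# Assuming ~40% thermal-to-electric conversion, and that both the input
# heating energy and the fusion energy end up as usable heat.
ETA = 0.40  # assumed turbine-cycle efficiency

def electric_out(heat_in_kwh: float, fusion_kwh: float) -> float:
    """Electricity recovered from all heat passing through the turbine."""
    return ETA * (heat_in_kwh + fusion_kwh)

print(electric_out(10, 10))  # 20 kWh of heat -> ~8 kWh electric
print(electric_out(10, 15))  # 25 kWh of heat -> ~10 kWh: true breakeven
```

So under these assumptions a "gain of 1" reactor still loses electricity overall; you need a gain of about 1.5 just to break even at the wall plug, before any other losses.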
That’s a good analogy - and the situation right now is like trying to make a car that doesn’t use its entire tank of fuel before it arrives at the service station.
Why is the last plot basically empty between 2000 and 2020? I understand that NIF was probably being built during that time, but were there no significant tokamak experiments in that time?
Author here - some other posters have touched on the reasons. Much of the focus on high performing tokamaks shifted to ITER in recent decades, though this is now changing as fusion companies are utilizing new enabling technologies like high-temperature superconductors.
Additionally, the final plot of scientific gain (Qsci) vs time effectively requires the use of deuterium-tritium fuel to generate the amounts of fusion energy needed for an appreciable level of Qsci. The number of tokamak experiments utilizing deuterium-tritium is small.
Thanks a lot for this research. Seeing the comments here, I think it's really important to make breakthroughs and progress more visible to the public. Otherwise the impression that "we're always 50 years away" stays strong.
In the 2037 timeframe, modeling trends doesn’t matter as much as looking at the actual players. I think the odds are good because you have at least four very well-funded groups shooting to have something before 2035: commercial groups including CFS, Helion, and TAE, plus the ITER effort. Maybe more, each with generally independent approaches. I think scientific viability will be proven by 2035, but getting to economic viability could take much longer.
Companies like Commonwealth Fusion Systems are an example of those utilizing high-temperature superconductors which did not exist commercially when ITER was being designed.
> The design operating current of the feeders is 68 kA. High temperature superconductor (HTS) current leads transmit the high-power currents from the room-temperature power supplies to the low-temperature superconducting coils at 4 K (-269°C) with minimum heat load.
HTS current feeds are a good idea (we also use them at CFS, my employer: https://www.instagram.com/p/DJXInDUuDAK/). It's HTS in the coils (electromagnets) that enables higher magnetic fields and thus a more compact tokamak.
Presumably because everyone in MCF has been waiting for ITER for decades, and JET is being decommissioned after a last gasp. Every other tokamak is considerably smaller (or similar size like DIII-D or JT-60SA).
Much of the interesting tokamak engineering ideas were on small (so low-power) machines or just concepts using high-temperature superconducting magnets.
The really depressing part is that if you plot the rate of new delays against real time elapsed, the projected finishing date moves even further out.
This is why much of the fusion research community feel disillusioned with ITER, and so are more interested in these smaller (and supposedly more "agile") machines with high-temperature superconductors instead.
Mind you, it's not useless! It produced a TON of very useful fusion research: neutral beam injectors, divertors, construction techniques for complex vacuum chambers, etc. At this point, I don't think it's going to be complete by the time its competitors arrive.
One spinoff of this is high-temperature superconductor research that is now close to producing actually usable high-TC flexible tapes. This might make it possible to have cheaper MRI and NMR machines, and probably a lot of other innovations.
> actually usable high-TC flexible tapes. This might make it possible to have cheaper MRI and NMR machines, and probably a lot of other innovations.
I'm sure there'll be plenty of fascinating applications of high-Tc tape, but I'm not sure MRI/NMR machines will be one of them. There would still be a lot of thermal noise due to the high temperature, which is why MRI/NMR machines tend to use liquid helium cooling, not because superconductors capable of operating at higher temperatures don't exist.
ITER doesn't use high temperature superconductors. It uses niobium-tin and niobium-titanium low temperature superconductors in its magnets.
ITER has been criticized since early days as a dead end, for example because of its enormous size relative to the power produced. A commercial follow-on would not be much better by that power density metric, certainly far worse than a fission reactor.
There is basically no chance that a fusion reactor operating in a regime similar to ITER could ever become an economical energy source. And this has been known since the beginning.
I call things like ITER "Blazing Saddles" projects. "We have to protect our phony baloney jobs, gentlemen!"
> I call things like ITER "Blazing Saddles" projects. "We have to protect our phony baloney jobs, gentlemen!"
I think this is overly harsh and somewhat unfair. You could make the same argument that anything operating in a regime similar to the Chicago Pile 1 could never be an economical reactor nor a bomb, but that does not mean skipping that particular development step is viable.
As far as fusion reporting goes, articles are at least somewhat consistent on the fact that ITER is a pure research project/reactor, while every 10-man fusion startup is being hyped up beyond all reason even if there is not even a credible roadmap towards an actual reactor in the 100MW range at all.
Personally I don't see fusion being a mainstream energy source (or helpful against climate change) in this century at all and maybe never, but ITER (even with all the delays) is at least an honest attempt at a credible size, and being stuck on older technology is an unfortunate side-effect of that.
conceptually sure; but size-wise they are so different as to warrant valid questions about ROI.
Chicago Pile 1 (and its rebuild, CP-2) ran for about 12 years; ITER started ~12 years ago and plans to run into the 2030s at least. Budget and headcount would likely be vastly different too, I’d welcome any educated guesses. Sometimes quantity has a quality of its own, as they say.
Sure but those are not really equivalent/comparable in scale; just looking at power/size and conceptual distance from commercial viability, the Chicago pile does not even match up to something like SPARC or JET, much less ITER.
A more fitting comparison to ITER would be something like Fermi-1 or other prototype designs at almost commercial scale, IMO, and those were multi-year, large projects too (and fission is much simpler than fusion, which obviously also helps).
The X-10 reactor at Oak Ridge went critical less than a year after CP-1, with a power of 500 kW, rising to 4 MW in 1944.
The Hanford B reactor, with a power of 250 MW, was in operation less than two years after CP-1 went critical.
I don't think it's unfair at all. And I don't see ITER as an "honest attempt", far from it.
The initial cost figures for ITER were obviously deliberate lies. When the true costs inevitably came out (after commitment had been made) this led to alternative approaches being canned. ITER has done grievous damage to fusion as a field, in a way eerily similar to how the Space Shuttle and ISS have done damage to NASA.
The true purpose of ITER wasn't to achieve fusion or push forward fusion; it was to preserve funding until those making the decisions had retired. If this required sacrificing long term goals, like actually delivering competitive energy (or, really, delivering anything at all), so be it.
As an engineer, the difference between "deliberate lies" and "overoptimistic estimates" is often just in the eye of the beholder; Hanlon's razor should be applied, IMO.
Was ITER overambitious? Timeline and budget unrealistic from the start? Maybe. But I'm fairly confident that most people involved had perfectly defensible intentions.
I also think that if the goal is commercial fusion, small reactors (100MW and below) are nothing but a stepping stone and inherently commercially useless; I don't see the output (hundreds of thermal megawatts) ever justifying the "fixed" overhead costs, and a scale at least close to GW scale seems completely inevitable to me.
If you agree with that premise, then building a reactor that size has a lot of utility already that you'd never achieve from building Wendelstein 7x equivalents or whatever at 50 different university campuses (or however else you'd want to spend the funds instead).
> The true purpose of ITER wasn't to achieve fusion or push forward fusion; it was to preserve funding until those making the decisions had retired. If this required sacrificing long term goals, like actually delivering competitive energy (or, really, delivering anything at all), so be it.
This is what I most disagree with; if commercial fusion is viable (I believe it really isn't) then I think ITER (or an equivalent of its size) is a very necessary, if expensive, step to make, and spending the money on dozens of smaller projects is not an "obviously better long term approach" at all in my view.
I also think that speaking about "true purpose" of the whole project is personifying the output of a complex process way too much, where individual actors in that scheme just want to make ITER happen (for very defensible reasons IMO).
I looked hopefully at the HR report https://www.iter.org/sites/default/files/media/2024-11/rh-20... to see if there was some sort of job categorisation - scientist, engineer, management. Disappointingly scant. PhD heavy. Perhaps the budget would be more insightful.
"Execution not ideas" is a common refrain for startups.
I wonder how much of the real engineering for ITER is occurring in subcontractors?
> ITER doesn't use high temperature superconductors.
It does, for high-current buses that interface with regular resistive power distribution. They are also planned for some auxiliary components (like the neutral beam injectors).
> ITER has been criticized since early days as a dead end, for example because of its enormous size relative to the power produced.
ITER is NOT designed for power generation. It's essentially a lab experiment to see how plasma behaves in magnetic confinement and test various technologies.
That's why ITER was designed with a very conservative approach to reduce the technical risk. We don't need it to be compact, this can come later. We just need it to work.
> ITER is NOT designed for power generation. It's essentially a lab experiment to see how plasma behaves in magnetic confinement and test various technologies.
That's the go-to excuse. But if you look at DEMO, its power density is not enormously greater. ITER is so far out of the running that DEMO (or PROTO, etc.) will be too.
We're learning a great deal about something that's largely irrelevant.
DEMO concept sketches are completely obsolete at this point. It's not going to look anything like this.
They're based on the state-of-the art from about 2005. Since then, a lot of improvements happened. A more realistic power plant design is going to use a thinner center column (because of better superconducting magnets), resulting in a smaller cryostat volume. Possibly high-TC magnets.
It can also be made more compact, if neutral beams can be used to suppress some plasma instabilities.
A misunderstanding some people have about ITER is that it is just one thing. It is not.
ITER is not only the facility in France; it is a multitude of manufacturing capabilities all over the globe which build parts for ITER and all future power plants.
It's my understanding that neutron wall loading of DEMO concepts had been trending downward (due to materials limits), the opposite of the trend you're trying to portray there. And in no future world is the power density of DEMO going to be anywhere close to that of a fission reactor.
Neutron loading is not the limiting factor (for now); it's the magnetic field pressure and its homogeneity. That's actually what is driving the humongous size of ITER.
To solve the homogeneity problem, you need the central column to be as thin as possible. That's where a lot of the recent advancements can help.
Also, fission research and fusion are actually aligned in designing materials that can tolerate more displacements per atom.
> And in no future world is the power density of DEMO going to be anywhere close to that of a fission reactor.
That's for sure. Modern fission reactors are close to magic, with the amount of heat they produce for a given volume.
However, if neutron loading has gone down, and the reactor is the same size, the volumetric power density must also have gone down.
The comment about fission and neutron dpa is misleading. The neutron damage issue is much less bothersome in fission reactors.
Fission produces about 3% of its energy in neutrons, vs. 80% in the DT fusion reaction. The spectrum of fission neutrons is much softer, with a peak around 1 MeV, vs. 14 MeV for DT neutrons. The DT neutrons are above threshold for (n,2n) reactions in most materials, and have much higher cross section for (n,p) and (n,alpha) reactions. The latter is particularly troublesome, as helium accumulates inside materials, forming microscopic very high pressure bubbles that rip the materials apart.
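Those percentages follow directly from the per-reaction energy splits (standard textbook values):

```python
# Per-reaction energy carried by neutrons, from standard textbook values:
# D-T fusion releases 17.6 MeV, 14.1 MeV of it in the neutron.
# Fission releases ~200 MeV total, roughly 5 MeV of it in prompt neutrons.
dt_neutron_fraction = 14.1 / 17.6
fission_neutron_fraction = 5.0 / 200.0

print(f"D-T fusion: {dt_neutron_fraction:.1%} of energy in neutrons")
print(f"fission:    {fission_neutron_fraction:.1%} of energy in neutrons")
```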
But it's even comparatively worse for fusion than that. In a PWR (for example), the core is carefully designed so that the only parts exposed to unmoderated neutrons are the fuel rods and the replaceable parts of the fuel rod bundles. The latter provide structural support for the fuel rods and are removed along with the fuel rods when the fuel is spent. The actual core supports for the fuel bundles are well away from where the chain reaction is occurring, shielded by water. The mean free path of a fission neutron in water is just a few centimeters, so their energy is quickly dissipated before reaching these components.
So, exposure of permanent reactor components to fast neutrons is essentially a non-issue in PWRs. Even control rods are not exposed much; reactivity is controlled by boric acid dissolved in the water (BWRs do it somewhat differently.)
This same strategy cannot be used in a fusion reactor; the plasma facing surfaces are exposed to the full, unshielded brunt of the DT neutron flux. Maybe a few cm of liquid lithium could be flowed along some surfaces? This is a stretch, particularly in a toroidal reactor.
Anyone have any idea where First Light Fusion's third machine fits into this?
The idea of using literal guns (gunpowder, then light gas gun, then coil gun) to impact projectiles against each other seemed like it was probably ludicrous, but I haven't seen any critical media or numbers yet.
This will probably need to be updated soon. There are rumors NIF recently achieved a gain of ~4.4 and ~10% fuel burn up. Being able to ignite more fuel is notable in and of itself.
In the context implied above it is the ratio of fusion energy released to laser energy on target or the laser energy crossing the vacuum vessel boundary (they are the same in this case). So it would have been more precise to say "target gain" or "scientific gain".
Progress toward net fusion energy is critical for delivering fusion power on the grid. It's not the only progress required — the rest of the machine has to be economical to build and operate. Most of the fusion machines in this paper are scientific projects, but as commercialization progresses, fusion machines with power plant needs in mind should arrive.
(I work for one startup in the field, Commonwealth Fusion Systems. We're building our SPARC tokamak now to demonstrate net energy gain in a commercially relevant design.)
If you took all the money in the world being spent on fusion research right now you would struggle to build a single 1 GWe fission power plant. That doesn't sound like an improvement in resource allocation to me.
What's the ROI on that versus current and near-term expected pricing for solar+storage? Is fission getting safer/cheaper at the same rate that solar and batteries are?
No, because flywheels are not limited by material density; they are limited by centrifugal stress.
High density is actively bad, you want to maximize strength and minimize density for flywheel designs, and this makes you much more likely to end up with low density composites (rather than high density tungsten alloys or somesuch).
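The standard way to see this: a flywheel rim's specific energy is capped by specific strength, E/m ≈ K·σ/ρ, where K is a shape factor. A sketch with rough, illustrative material numbers (the shape factor and material figures below are assumptions):

```python
# Flywheel specific energy is limited by specific strength (sigma / rho),
# not by density. K is an assumed shape factor; material numbers are rough.
K = 0.5  # assumed shape factor for a rim/disc geometry

def specific_energy_wh_per_kg(sigma_pa: float, rho_kg_m3: float) -> float:
    """Upper-bound specific energy E/m = K * sigma / rho, in Wh/kg."""
    return (K * sigma_pa / rho_kg_m3) / 3600.0

steel = specific_energy_wh_per_kg(1.5e9, 7800)   # high-strength steel
carbon = specific_energy_wh_per_kg(2.5e9, 1600)  # carbon-fiber composite

print(f"steel rim:  ~{steel:.0f} Wh/kg")
print(f"carbon rim: ~{carbon:.0f} Wh/kg")
```

The composite wins by nearly an order of magnitude despite being roughly five times less dense, which is exactly the point made above.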
Solar + days of storage is far more expensive than fission. Grid-scale batteries like the ones California has spent billions on only have 4-hour capacity. Fission can also supply heat that is needed for many industrial processes and chemical reactions.
It is not, in most US areas. The only problem is the area covered, NOT the price of the technology. Solar with 12 hours of storage was lower-priced than fission before COVID hit. TCO, not one-time nonsense.
Fission gives relatively low-temperature heat, i.e. no metal reduction, no "concrete" production; you can cook hot dogs with it. Also, electrification of heat can cut the losses that stem from regulation (or the lack of it): with electricity you can say "I need 293.5 °C", type it in somewhere, and get it almost for free (the regulation, that is).
I am no fan of fission (I strongly oppose new fission plants). But one problem with solar+storage is that the cost of the storage component increases roughly linearly with the desired storage duration. That's not true of a fueled power plant (fission or fossil).
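That linear-in-duration cost structure is easy to sketch. The dollar figures below are made-up placeholders, just to show the shape of the curve:

```python
# Storage capex splits into a power part (inverters etc.) and an energy
# part that grows linearly with hours of storage. Figures are placeholders.
POWER_COST = 300.0   # $/kW, assumed
ENERGY_COST = 150.0  # $/kWh of capacity, assumed

def storage_capex_per_kw(hours: float) -> float:
    """Capex per kW of output for a given storage duration."""
    return POWER_COST + ENERGY_COST * hours

for h in (4, 12, 24, 72):
    print(f"{h:>2} h of storage: ${storage_capex_per_kw(h):,.0f} per kW")
```

A fueled plant's capex, by contrast, is roughly flat in "hours of output"; that asymmetry is the point being made here.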
A coal power plant needs 100 or so rail cars' worth of material brought in every single day, so you are simplifying too much.
Every person doing anything with power generation should put into a spreadsheet what quantities of material are needed to provide capacity for the entire grid.
And you need people and infrastructure to bring, prepare, and load that material, which adds the cost of locking up a workforce in those jobs. If someone drives a train supplying a coal plant with coal, he cannot do a programming job, a job in services, etc. Call it a labor/workforce "opportunity cost".
With PV + battery you bring material once per 10-15 years, and not in the quantities fossil fuel demands; one coal plant's worth of personnel can manage a larger amount of generating capacity in PV/battery.
A nuclear plant of ANY kind will need an even bigger workforce than a whole coal plant, just for the nontrivial maintenance. Even a simple microcontroller or sensor used in a nuclear power plant has to remain available for the plant's 30-40 year lifetime. With PV you can use any inverter or solar panel, interchange them, mix them; it is not that simple with a nuclear plant.
People involved in providing energy services, and citizens drawing energy from the grid, should start to think like producers AND consumers, not only consumers. That way a lot of "grid problems" become easier to deal with.
There are many problems with fission, and they are all related to the extraordinary danger of handling the fuel, the byproducts, and the sites themselves.
The cost of them is huge, some people are hoping that modularity will help with construction, but it is still astonishingly expensive.
The problem of handling the fuel has been solved, in theory and in practise. Except when commerce is involved. When the money people get involved, corners will get cut, and we are back to incredible danger. Technically solvable, but I would not go near it. I have known too many business people.
The problem of the long-term waste is entirely beyond us. There has been no practical progress on this front. Long-term waste (including some parts of the assemblies themselves) is very dangerous for hundreds of thousands of years.
This is, with any technology that can currently be brought to bear, unsolvable.
The only thing we can do is put it in a stable site, be ready to move it when the site becomes unstable (nowhere on Earth is known to be stable on such time scales), and find a way of communication, across thousands of generations, just how poisonous this stuff is.
Maybe our descendants will get lucky and find a way to safely dispose of it....
So fission power is making future generations pay for today's consumption.
Fortunately for us it is moot. The cost of renewables has dropped to the point that the only reason for fission is to build the capacity for nuclear weapons.
And there is still very much a need for zero-carbon DISPATCHABLE electricity, of which nuclear is the ONLY choice. You simply cannot have 100% of your electricity from solar and wind alone, because they are far too variable and we simply don't have the technology to store electricity cheaply enough.
Your attitude towards nuclear energy is as irrational as the average antivaxer towards vaccines.
Many of the good spots are already used, but we can change how they are used to greatly increase their usefulness to a zero-carbon grid: make them all pumped hydro, and start treating hydro as a battery rather than a generation source.
Lithium ion batteries are light with a high energy density, so are great for cars.
Flow batteries have a low energy density, but increasing the duration just means a bigger tank, and tank cost scales with surface area (roughly the two-thirds power of volume), so the cost per stored kWh actually falls as tanks get bigger.
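As a back-of-the-envelope sketch of that scaling (all numbers here are made-up round figures for illustration): if tank material cost scales with surface area, then cost per stored kWh falls with the cube root of volume as tanks grow.

```python
# Illustrative flow-battery tank economics: tank material cost scales with
# surface area, i.e. volume^(2/3), so cost per stored kWh falls as
# volume^(-1/3). All constants are made-up round numbers.

def tank_cost_per_kwh(volume_m3, cost_per_m2=100.0, kwh_per_m3=25.0):
    """Cost of a cube-shaped tank per kWh of electrolyte it holds."""
    side = volume_m3 ** (1 / 3)
    surface_area = 6 * side ** 2          # cube: 6 faces
    return (surface_area * cost_per_m2) / (volume_m3 * kwh_per_m3)

small = tank_cost_per_kwh(1.0)     # 1 m^3 tank
large = tank_cost_per_kwh(1000.0)  # 1000 m^3 tank
# 1000x the volume -> 100x the tank cost -> 10x cheaper per kWh
print(small / large)
```

So the economics favor ever-larger installations, which is one reason flow batteries are pitched at grid-scale, long-duration storage rather than cars.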
Flow batteries are well over a century old, but I have been reading about improvements over the last two decades. Where are they?
They're having trouble staying ahead of the enormous monster that is the lithium-ion battery industry, which through sheer scale keeps lowering costs, allowing it to break into one market after another.
It is the good old: Good enough beats theoretically perfect.
Flow batteries are "controlled" by US patents; solar + batteries are not limited in a bad way by any US company's grip on patents.
China makes all the panels; Asia makes all the batteries.
So US utilities / energy providers cannot get a harmful grip on PV + batteries.
US utilities / energy providers want a docile customer who only pays every month. They do not want to invest money into the grid and have customers who not only demand from but also supply the grid, because they do not understand how to benefit from that. They could; it is just a mental limit for them.
Utilities / energy providers were too lazy to think about a proper decentralized grid, so every participant in the US grid will suffer more because of that.
This will be flagged as conspiracy, because it is a conspiracy: a conspiracy against US citizens by US companies / US interests.
There are a number of flow batteries, which have large vats where charge is stored in 2 discrete charge-state fluids in a redox reaction. They charge a vat through a cell and discharge it in the other direction. The limits are the solubility of the charge states in the transport fluid, which means huge vats for total watt-hours and huge redox cells for the rate of charge/discharge. It runs well, and the vats are cheap.
https://en.wikipedia.org/wiki/Flow_battery
If you don’t need a mobile power plant why bother with fusion power instead of something like geothermal? At the end of the day we’re just turning water into steam.
Maybe someday we’ll finally achieve the ultimate dream: an extremely expensive nuclear power plant that needs vast amounts of coolant water and leaves radioactive waste behind.
Tossing out your opinions as fact doesn't do much to win hearts and minds, or educate us bystanders to the basis for your point of view.
Presumably your comment is either to persuade or to inform; it does neither. I'm very curious about this field and its future, do you care to try again?
ITER construction began in 2013; first plasma is expected in 2034. DEMO is expected to start in 2040.
So, ITER is taking an estimated 20 years. It's being built for a reason, so I imagine follow-ups want to wait to see how that shakes out. So certainly, DEMO needs to start a few years after ITER is finally done.
Then DEMO isn't a production setup either, it's going to be the first attempt at a working reactor. So let's say optimistically 20 years is enough to build DEMO, run it for a few years, see how it shakes out, design the follow-ups with the lessons learned.
That means the first real, post-DEMO plant starts building somewhere in 2060. Yeah, fair to say a lot of the here present will be dead by then, and that'll only be the slow start of grid fusion if it sticks at all. Nobody is going to just go and build a hundred reactors at once. They'll be built slowly at first unless we somehow manage to start making them amazingly quickly and cheaply.
So that's what, half a century? By the time fusion gets all the kinks worked out, chances are it'll never be commercially viable. Renewables are far faster to build, many problems are solvable by brute force, and half a century is a lot of time to invent something new in the area.
ITER/DEMO is an exceptionally slow fusion project and arguably obsolete since it uses older superconductors. CFS uses the same design, with modern superconductors that can support much stronger magnetic fields. Tokamak output scales with the fourth power of magnetic field strength, so this should let them get results similar to ITER in a reactor a tenth the size. They'll have it running long before ITER is ready.
If Jesus Christ himself came to earth and hand delivered a durable and workable reactor design WITH high uptime WITH a near-optimal confinement scheme WITH zero neutronicity AND he included a decade of free perfectly packaged and purified fuel, it would still not pencil out as anything other than water-hungry staff-intensive baseload requiring significant state support.
This is the reality. It’s not happening. It’s a welfare program for bullshit artists that depends on a credulous public.
I am in the business of baiting militantly uninformed enthusiasts who form the foundation of the multigenerational grift that is Commercial Fusion Power.
Real talk, the point is not that whatever system is first past the post for fusion becomes the gold standard and fills the planet.
The issue right now is cracking the code. Once that is done, performance gains and miniaturization can take place.
Fusion can work on lots of things. It's possible that, within 25 years of the code being cracked, a fusion system the size of a car could be made that would power a house, or one the size of a small building that could power a city block.
The waste product of hydrogen fusion is helium, a valuable resource that will always be in high demand, and it will not be radioactive.
And yes, it will need coolant as with hot fusion the system uses the heat to turn a turbine, but that coolant isn't fancy, it's just water.
Fusion has the potential to solve more problems than it causes by every metric as long as it is doable without extremely limited source materials, and this is what these big expensive reactors are trying to solve.
You’ve disputed nothing I’ve said and unless a dramatically higher temperature fusion reaction that does not generate a neutron flux is achieved, it will generate radioactive waste as a matter of factual physics. Thank you though!
I mean, yes, you're right, but it's not a permanently radioactive waste.
Quote:
A fusion power plant produces radioactive waste because the high-energy neutrons produced by fusion activate the walls of the plasma vessel. The intensity and duration of this activation depend on the material impinged on by the neutrons.
The walls of the plasma vessel must be temporarily stored after the end of operation. This waste quantity is initially larger than that from nuclear fission plants. However, these are mainly low- and medium-level radioactive materials that pose a much lower risk to the environment and human health than high-level radioactive materials from fission power plants. The radiation from this fusion waste decreases significantly faster than that of high-level radioactive waste from fission power plants. Scientists are researching materials for wall components that allow for further reduction of activation. They are also developing recycling technologies through which all activated components of a fusion reactor can be released after some time or reused in new power plants. Currently, it can be assumed that recycling by remote handling could be started as early as one year after switching off a fusion power plant. Unlike nuclear fission reactors, the long term storage should not be required.
Basically, whatever containment vessel becomes standard for the whole fusion industry would need probably an annual cycle of vessel replacements, which would be recycled indefinitely and possibly mined for other useful radioactive byproducts in the process.
The amount of radioactive scrap produced by hypothetical decommissioned radioactive fusion containment vessels is laughably trivial compared to fission waste streams. Even accounting for the most pessimistic irradiation models of first-wall materials, the total radioactive burden remains orders of magnitude below legacy technologies.
The half-lives of such activated components (predominantly steel alloys and ceramic composites) are dramatically shorter than those of actinide-laden spent fuel, with activity levels plummeting to background within mere decades rather than geological timescales. This makes waste management a single-generation engineering challenge rather than a multi-millennial obligation.
The long term activity of the waste is certainly lower, but the volume of the waste is likely much higher. And much of the cost is driven by volume, not activity.
As a species, we're spectacularly bad at negative externalities.
We are also very bad at anything very long term. We've hardly pulled off any physical project to last more than one generation recently. We barely invest in any.
The winning energy tech of the future better have as little negative externalities as possible, especially long term ones.
Hey, there it is! Lots of radioactive waste being generated on a continuous basis, but maybe, baby, with dreams and schemes we can decommission it with robots and recycle it all. Meanwhile a reactor is offline for refurbishment for days, weeks, months, blowing a hole in the economics of it all.
Unironically: you’re the first person I’ve come across to openly acknowledge this issue. Thank you.
It's an experimental facility. Yes, a power plant would need much more efficient lasers, but NIF's lasers date back to the 1990s, equivalent modern lasers are about 40X more efficient, and for an experiment it's easy enough to do a multiplication to see what the net result would have been with modern lasers.
Modern lasers can also repeat shots much more quickly. Power gain on the capsules appears to scale faster than linear with the input power, so getting to practical gain might not be as far off as it appears at first glance.
These are some of the reasons that various fusion startups are pursuing laser fusion for power plants.
From what I understood, laser fusion needs laser efficiencies not just 40x better than what NIF uses, but like 3 or 4 orders of magnitude more efficient than the state of the art. Seems like a non-starter.
NIF's lasers are 0.5% efficient. Equivalent modern lasers are 20% efficient. Both of these sources have both numbers:
https://physicsworld.com/a/national-ignition-facilitys-ignit...
https://pubs.aip.org/physicstoday/Online/31501/The-commercia...
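Using the efficiency figures from the sources above (0.5% for NIF's 1990s-era lasers, 20% for modern equivalents) and the widely reported numbers for NIF's December 2022 ignition shot (2.05 MJ of laser energy delivered, 3.15 MJ of fusion yield; the yield figure is taken from public reporting, not from this thread), the multiplication looks like this:

```python
# Rough wall-plug arithmetic for NIF's December 2022 ignition shot,
# using the laser efficiency figures cited in the sources above.

laser_energy_mj = 2.05   # laser energy delivered to the target
fusion_yield_mj = 3.15   # fusion energy released (widely reported figure)

target_gain = fusion_yield_mj / laser_energy_mj  # ~1.54 ("scientific" gain)

for name, efficiency in [("NIF 1990s lasers", 0.005),
                         ("modern diode-pumped lasers", 0.20)]:
    wall_plug_mj = laser_energy_mj / efficiency
    facility_gain = fusion_yield_mj / wall_plug_mj
    print(f"{name}: {wall_plug_mj:.0f} MJ from the wall, "
          f"facility gain {facility_gain:.3f}")
```

Even with 20% lasers, that shot is still roughly a factor of 3 short of facility breakeven before counting the heat-to-electricity conversion, which is why the faster-than-linear gain scaling matters so much.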
Every fusion reactor design needs similar gains in some part of the stack outside of the fusion parts to make it a viable power source: tokamaks need magnets to be orders of magnitude better, the reactor lining needs to last much longer, the whole steam-conversion mess, etc.
Commercial REBCO tape is an entirely sufficient superconductor for tokamaks. At this point the limiting factor for the magnetic field is the structural strength of the reactor. Tokamak output scales with the square of size and the fourth power of magnetic field strength, and using REBCO, the CFS ARC design should get practical power output from a reactor much smaller than ITER.
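As a rough sketch of the scaling rule stated above (output proportional to the square of size and the fourth power of field strength), here is the ratio you get by plugging in publicly quoted design figures, used here as assumptions rather than taken from this thread: ITER at roughly R = 6.2 m and B = 5.3 T on axis, and the ARC design at roughly R = 3.3 m and B = 9.2 T.

```python
# Sketch of the scaling rule stated above: fusion output ~ R^2 * B^4.
# Machine parameters are publicly quoted design figures (assumptions):
# ITER: R ~ 6.2 m, B ~ 5.3 T on axis; ARC: R ~ 3.3 m, B ~ 9.2 T.

def relative_output(major_radius_m, field_t):
    return major_radius_m ** 2 * field_t ** 4

iter_out = relative_output(6.2, 5.3)
arc_out = relative_output(3.3, 9.2)
print(arc_out / iter_out)  # roughly 2.6x the output from a far smaller machine
```

The B⁴ term dominates: nearly doubling the field more than compensates for shrinking the machine to roughly a quarter of ITER's cross-section, which is the whole case for REBCO-based compact tokamaks.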
I was trying to work out a joke about buying better lasers off of alibaba but it seems that despite being 30 years old they're still orders of magnitude beyond off the shelf options.
Partially. The very efficient lasers from Alibaba don't do short-pulse/high-power, so they can potentially be used only as part of the system: the pump lasers. The final nanosecond laser is still a one-off build, which nonetheless seems pretty doable even by a small company if they set their mind to it.
Btw, NIF achieved those recent results by adding a strong magnetic field around the target (penny-shrinkers have known that tech for 20+ years :). There are other things like this around that could be similarly useful, if only somebody had the money and interest...
I’ve seen some pretty wacky structures that involve mechanically forcing permanent magnets together at different orientations to create asymmetric magnetic fields that are strongest where they need to be, or weak where they would cause problems, like eddy currents in electric motor housings, or insufficient hold for chef’s knives.
I know motor windings have gotten pretty funky of late to do a little bit of this, but do they do multi tesla magnetic fields that use several different windings to create the same sorts of bias in field strength? The ITER windings seem to be an extremely mild form of this.
Lots of people do have money and interest: https://archive.is/BCsf5
It’s fascinating how NIF's legacy tech limits its relevance for actual energy generation, yet it still serves as a stepping stone. The fact that gain scales faster than linearly with input power is particularly encouraging — it suggests that advances in laser efficiency and repetition rate could unlock meaningful progress sooner than many assume. I can see why startups are jumping on this now. Curious to see how much of this can move from lab to grid in the next decade.
There is no need to ask for speculation. It's the top item in their mission statement.
https://lasers.llnl.gov/about/what-is-nif
>NIF is a key element of the National Nuclear Security Administration’s science-based Stockpile Stewardship Program to maintain the reliability, security, and safety of the U.S. nuclear deterrent without full-scale testing.
Nothing about the NIF looks like a power plant to me. It's like the laser weapons guy and the nuclear weapons guy found a way to spend giant piles of money without having to acknowledge the weapons angle.
A lot of people think so, but the US government openly spends way more money on nuclear weapons than on fusion research. We'll spend almost a trillion dollars on nuclear weapons over the next decade.[1] The government's fusion funding was only $1.4 billion for 2023.[2]
So it seems more likely to me that some physicists figured out how to get their fusion power research funded under the guise of weapons research, since that's where the money is. NIF's original intent was mostly weapons research but it's turned out to be really useful for both, and these days, various companies are attempting to commercialize the technology for power plants.[3]
[1] https://theaviationist.com/2025/04/26/us-nuclear-weapons-wil...
[2] https://www.fusionindustryassociation.org/congress-provides-...
[3] NYTimes: https://archive.is/BCsf5
Yes. The NIF is a weapons research lab, not a power research lab.
The purpose of it is to show that the USA is still capable of producing advanced hydrogen bombs. More advanced than anybody else.
The '2.05 megajoules' is only an estimate of the laser energy actually used to trigger the reaction. It ignores how much power it took to actually run the lasers or the reactor. Even if they updated the lasers with modern ones, there is zero chance of it ever actually breaking even. It is a technological dead end as far as power generation goes.
The point of the 'breakthrough' is really more about ensuring continued Congressional approval for funding than anything else. They are being paid to impress, and they certainly succeeded in that.
However I suspect this is true of almost all 'fusion breakthroughs'. They publish updates to ensure continued funding from their respective governments.
People will argue that this is a good thing since it helps ensure that scientists continue to be employed and publishing research papers. That sentiment is likely true in that it does help keep people employed, but if your goal is to have a working and economically viable fusion power plant within your lifetime it isn't a good way to go about things.
If the governments actually cared about CO2 and man-made global warming they would be investing in fusion technology and helping to develop ways to recycle nuclear waste usefully. Got to walk before you can run.
It's been over 20 years since I've dug into nuclear tech pretty deeply, but don't we already have breeder reactors and other tech that is low-waste and safer, so that we could build modern reactors (not based on nuclear-submarine designs) in the fission category and deliver cleaner power today? Yes, there is a lot of politics, especially around manufacturing, production, and storage of spent fuel, so all of those are probably showstoppers no matter how safe they are in reality, but we aren't invested in it.
The primary purpose of the NIF is to maintain the US nuclear stockpile without nuclear tests. The lasers are very inefficient (IIRC about 2%). The success they claimed is that the energy released by the burning plasma exceeds the laser energy put into the fuel capsule. Since NIF was never intended to be a power plant, they don't use the most efficient lasers.
It was never intended to be a power plant but it was hoped that it would achieve a net gain fusion reaction for the first time. This turned out to be a lot harder than expected.
NIF has achieved net power, right? But only if you ignore the massive, massive power losses in converting electricity to feed energy into the system.
Correct. They got more bang out than they put in. The electrical-to-bang and bang-to-electrical conversions are not included.
Yes, after the test ban treaties there was a huge push into exploring mathematical emulations of all aspects of fusion, and of all assorted bombs, as well as laser ignition of pellets with these large lasers, using inertial confinement of the pellet as the laser impacted it, and analysing the fusion by observation of emitted neutrons, X-rays, etc. They issued (sanitised) reports from time to time, and probably used the secret data to fine-tune emulated weapons with fact points. The pellets were composed of potential fuels, various hydrogens and lithiums, varied in composition to explore the ignition space. A number of pellets performed well in terms of gain, but were far, far from usable fusion when the lab's costs were factored in. I think they determined it could never work as a fusion energy source, but it provided data. They still mine data from it with various elemental mixes making up the pellets.
They should also have put fusion bombs on the graph?
ASML machine with "s/tin/DT/" looks like a prototype of such a reactor and of a fusion space drive.
It should be noted that "breakeven" is often misleading.
There's "breakeven" as in "the reaction produces more energy than put into it", and there's breakeven as in "the entire reactor system produces more energy than put into it", which isn't quite the same thing.
We are careful to always specify what kind of “breakeven” or “gain” is being referred to on all graphs and statements about the performance of specific experiments in this paper.
Energy gain (in the general sense) is the ratio of fusion energy released to the incoming heating energy crossing some closed boundary.
The right question to ask is then: “what is the closed boundary across which the heating energy is being measured?” For scientific gain, this boundary is the vacuum vessel wall. For facility gain, it is the facility boundary.
The article uses the term "scientific breakeven" which I assume is the first one you've stated.
It's always confused me a bit. It's not like if you put 10kWh into the reactor, that 10kWh goes away. You still lose a significant fraction of it in inefficiency of the cycle but it still goes towards heat which can be used to heat steam and turn a turbine. iirc, you can get about 4kWh back.
On the other side of the coin, if you put 10kWh in and get 10kWh of fusion out, that's 20kWh to run a steam turbine, which nets you about 8kWh. So really you need to be producing 15kWh of heat from fusion for every 10kWh you put in to break even.
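A minimal sketch of the arithmetic above, assuming a flat 40% heat-to-electricity efficiency for the steam cycle:

```python
# The breakeven arithmetic from the comment above, assuming a 40%
# heat-to-electricity conversion efficiency for the steam cycle.

TURBINE_EFF = 0.4

def electric_out(input_kwh, fusion_kwh):
    """All input heating energy plus fusion energy ends up as heat,
    which the steam cycle converts at TURBINE_EFF."""
    return (input_kwh + fusion_kwh) * TURBINE_EFF

# 10 kWh in, no fusion at all: you still get ~4 kWh back as electricity.
print(electric_out(10, 0))

# 10 kWh in, 10 kWh of fusion: 20 kWh of heat -> 8 kWh of electricity.
print(electric_out(10, 10))

# To truly break even (10 kWh out for 10 kWh in) you need 15 kWh of
# fusion heat: (10 + 15) * 0.4 = 10.
print(electric_out(10, 15))
```

So with a 40% steam cycle, an engineering gain of 1.5 on the heat is the minimum for the plant as a whole to break even, before any other losses.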
Cars are a good analogy. You wouldn't talk about miles per gallon until you have an engine that idles. Humans are in the engine building phase.
That’s a good analogy - and the situation right now is trying to make a car that doesn’t use its entire tank of fuel before it arrives at the service station.
You can't always get this much energy back. Sometimes your waste heat is an enormous pool of warm water.
In the laser business, the latter is called "wall plug efficiency," which is laser power out per electrical power in.
"Uptime Percentage", "Operational Availability" (OA), "Duty Cycle"
Availability (reliability engineering) https://en.wikipedia.org/wiki/Availability
Terms from other types of work: kilowatt-hour (kWh), weight per rep, number of reps, total time under tension
and then there's "breakeven" as in "it pays for the investment within X years"
Especially since steam turbines are in the 30-40% efficiency range
Why is the last plot basically empty between 2000 and 2020? I understand that NIF was probably being built during that time, but were there no significant tokamak experiments in that time?
Author here - some other posters have touched on the reasons. Much of the focus on high performing tokamaks shifted to ITER in recent decades, though this is now changing as fusion companies are utilizing new enabling technologies like high-temperature superconductors.
Additionally the final plot of scientific gain (Qsci) vs time effectively requires the use of deuterium-tritium fuel to generate the amounts of fusion energy needed for an appreciable level of Qsci. The number of tokamak experiments utilizing deuterium tritium is small.
Thanks a lot for this research. Seeing the comments here, I think it's really important to make breakthroughs and progress more visible to the public. Otherwise the impression that "we're always 50 years away" stays strong.
Here was my completely layman attempt to forecast fusion viability a few months ago. https://news.ycombinator.com/item?id=42791997 (in short: 2037)
Is there some semblance of realism there you think?
In the 2037 timeframe, modeling trends doesn’t matter as much as looking at the actual players. I think the odds are good because you have at least 4 very well funded groups shooting to have something before 2035: commercial groups including CFS, Helion, and TAE, plus the ITER effort. Maybe more. Each with generally independent approaches. I think scientific viability will be proven by 2035, but getting to economic viability could take much longer.
If ITER is where it's at, why are we building a commercial-scale tokamak? https://en.wikipedia.org/wiki/Commonwealth_Fusion_Systems
Companies like Commonwealth Fusion Systems are an example of those utilizing high-temperature superconductors which did not exist commercially when ITER was being designed.
ITER uses HTSs, just not for the coils:
> The design operating current of the feeders is 68 kA. High-temperature superconductor (HTS) current leads transmit the high-power currents from the room-temperature power supplies to the low-temperature superconducting coils at 4 K (-269°C) with minimum heat load.
Source: https://www.iter.org/machine/magnets
HTS current feeds are a good idea (we also use them at CFS, my employer: https://www.instagram.com/p/DJXInDUuDAK/). It's HTS in the coils (electromagnets) that enables higher magnetic fields and thus a more compact tokamak.
Presumably because everyone in MCF has been waiting for ITER for decades, and JET is being decommissioned after a last gasp. Every other tokamak is considerably smaller (or similar size like DIII-D or JT-60SA).
Much of the interesting tokamak engineering ideas were on small (so low-power) machines or just concepts using high-temperature superconducting magnets.
It's hard to believe that after all of this time, ITER is still almost a decade away from first plasma.
There's the common joke that fusion is always 30 years away, but now with the help of ITER, it's always 10 years away instead.
The really depressing part is if you plot rate of new delays against real time elapsed, the projected finishing date is even further.
This is why much of the fusion research community feel disillusioned with ITER, and so are more interested in these smaller (and supposedly more "agile") machines with high-temperature superconductors instead.
ITER is in development hell.
Mind you, it's not useless! It produced a TON of very useful fusion research: neutral beam injectors, divertors, construction techniques for complex vacuum chambers, etc. At this point, I don't think it's going to be complete by the time its competitors arrive.
One spinoff of this is high-temperature superconductor research that is now close to producing actually usable high-TC flexible tapes. This might make it possible to have cheaper MRI and NMR machines, and probably a lot of other innovations.
> actually usable high-TC flexible tapes. This might make it possible to have cheaper MRI and NMR machines, and probably a lot of other innovations.
I'm sure there'll be plenty of fascinating applications of high-Tc tape, however I'm not sure MRI/NMR machines will be one of those. There would still be a lot of thermal noise due to the high temperature. Which is why MRI/NMR machines tend to use liquid helium cooling, not because superconductors capable of operating at higher temperatures don't exist.
ITER doesn't use high temperature superconductors. It uses niobium-tin and niobium-titanium low temperature superconductors in its magnets.
ITER has been criticized since early days as a dead end, for example because of its enormous size relative to the power produced. A commercial follow-on would not be much better by that power density metric, certainly far worse than a fission reactor.
There is basically no chance that a fusion reactor operating in a regime similar to ITER could ever become an economical energy source. And this has been known since the beginning.
I call things like ITER "Blazing Saddles" projects. "We have to protect our phony baloney jobs, gentlemen!"
> I call things like ITER "Blazing Saddles" projects. "We have to protect our phony baloney jobs, gentlemen!"
I think this is overly harsh and somewhat unfair. You could make the same argument that anything operating in a regime similar to the Chicago Pile 1 could never be an economical reactor nor a bomb, but that does not mean skipping that particular development step is viable.
As far as fusion reporting goes, articles are at least somewhat consistent on the fact that ITER is a pure research project/reactor, while every 10-man fusion startup is being hyped up beyond all reason even if there is not even a credible roadmap towards an actual reactor in the 100MW range at all.
Personally I don't see fusion being a mainstream energy source (or helpful against climate change) in this century at all and maybe never, but ITER (even with all the delays) is at least an honest attempt at a credible size, and being stuck on older technology is an unfortunate side-effect of that.
Conceptually, sure; but size-wise they are so different as to warrant valid questions about ROI.
Chicago Pile 1 ran for 12 years, ITER started ~12 years ago and plans to run into the 2030s at least. Budget and headcount would likely be vastly different too, I’d welcome any educated guesses. Sometimes quantity has a quality of its own, as they say.
Sure but those are not really equivalent/comparable in scale; just looking at power/size and conceptual distance from commercial viability, the Chicago pile does not even match up to something like SPARC or JET, much less ITER.
A more fitting comparison to ITER would be something like Fermi-1 or other prototype designs at almost commercial scale, IMO, and those were multi-year, large projects too (and fission is much simpler than fusion, which obviously also helps).
The X-10 reactor at Oak Ridge went critical less than a year after CP-1, with a power of 500 kW, rising to 4 MW in 1944. The Hanford B reactor, with a power of 250 MW, was in operation less than two years after CP-1 went critical.
I don't think it's unfair at all. And I don't see ITER as an "honest attempt", far from it.
The initial cost figures for ITER were obviously deliberate lies. When the true costs inevitably came out (after commitment had been made) this led to alternative approaches being canned. ITER has done grievous damage to fusion as a field, in a way eerily similar to how the Space Shuttle and ISS have done damage to NASA.
The true purpose of ITER wasn't to achieve fusion or push forward fusion; it was to preserve funding until those making the decisions had retired. If this required sacrificing long term goals, like actually delivering competitive energy (or, really, delivering anything at all), so be it.
As an engineer, I'd say the difference between "deliberate lies" and "overoptimistic estimates" is often just in the eye of the beholder; Hanlon's razor should be applied, IMO.
Was ITER overambitious? Timeline and budget unrealistic from the start? Maybe. But I'm fairly confident that most people involved had perfectly defensible intentions.
I also think that if the goal is commercial fusion, small reactors (100MW and below) are nothing but a stepping stone and inherently commercially useless; I don't see the output (hundreds of thermal megawatts) ever justifying the "fixed" overhead costs, and a scale at least close to GW scale seems completely inevitable to me.
If you agree with that premise, then building a reactor that size has a lot of utility already that you'd never achieve from building Wendelstein 7x equivalents or whatever at 50 different university campuses (or however else you'd want to spend the funds instead).
> The true purpose of ITER wasn't to achieve fusion or push forward fusion; it was to preserve funding until those making the decisions had retired. If this required sacrificing long term goals, like actually delivering competitive energy (or, really, delivering anything at all), so be it.
This is what I most disagree with; if commercial fusion is viable (I believe it really isn't) then I think ITER (or an equivalent of its size) is a very necessary, if expensive, step to make, and spending the money on dozens of smaller projects is not an "obviously better long term approach" at all in my view.
I also think that speaking about "true purpose" of the whole project is personifying the output of a complex process way too much, where individual actors in that scheme just want to make ITER happen (for very defensible reasons IMO).
> phony baloney jobs
I looked hopefully at the HR report https://www.iter.org/sites/default/files/media/2024-11/rh-20... to see if there was some sort of job categorisation - scientist, engineer, management. Disappointingly scant. PhD heavy. Perhaps the budget would be more insightful.
"Execution not ideas" is a common refrain for startups.
I wonder how much of the real engineering for ITER is occurring in subcontractors?
I wonder why a physics research facility would be PhD-heavy. ;-)
> ITER doesn't use high temperature superconductors.
It does, for high-current buses that interface with regular resistive power distribution. They are also planned for some auxiliary components (like the neutral beam injectors).
> ITER has been criticized since early days as a dead end, for example because of its enormous size relative to the power produced.
ITER is NOT designed for power generation. It's essentially a lab experiment to see how plasma behaves in magnetic confinement and test various technologies.
That's why ITER was designed with a very conservative approach to reduce the technical risk. We don't need it to be compact, this can come later. We just need it to work.
And yes, it is necessary. Plasma behavior can't be simulated numerically or analytically. It always provides surprises, sometimes even good ones: https://en.wikipedia.org/wiki/High-confinement_mode
is a conservative approach useful if it is taking 30 years to build?
> ITER is NOT designed for power generation. It's essentially a lab experiment to see how plasma behaves in magnetic confinement and test various technologies.
That's the go-to excuse. But if you look at DEMO, its power density is not enormously greater. ITER is so far out of the running that DEMO (or PROTO, etc.) will be too.
We're learning a great deal about something that's largely irrelevant.
DEMO concept sketches are completely obsolete at this point. It's not going to look anything like this.
They're based on the state-of-the art from about 2005. Since then, a lot of improvements happened. A more realistic power plant design is going to use a thinner center column (because of better superconducting magnets), resulting in a smaller cryostat volume. Possibly high-TC magnets.
It can also be made more compact, if neutral beams can be used to suppress some plasma instabilities.
A misunderstanding some people have about ITER is that it is just one thing. It is not.
ITER is not only the facility in France; it is a multitude of manufacturing capabilities all over the globe that build parts for ITER and all future power plants.
It's my understanding that neutron wall loading of DEMO concepts had been trending downward (due to materials limits), the opposite of the trend you're trying to portray there. And in no future world is the power density of DEMO going to be anywhere close to that of a fission reactor.
Neutron loading is not the limiting factor (for now); it's the magnetic field pressure and its homogeneity. That's actually what is driving the humongous size of ITER.
To solve the homogeneity problem, you need the central column to be as thin as possible. That's where a lot of the recent advancements can help.
Also, fission research and fusion are actually aligned in designing materials that can tolerate more displacements per atom.
> And in no future world is the power density of DEMO going to be anywhere close to that of a fission reactor.
That's for sure. Modern fission reactors are close to magic, with the amount of heat they produce for a given volume.
However, if neutron loading has gone down, and the reactor is the same size, the volumetric power density must also have gone down.
The comment about fission and neutron dpa is misleading. The neutron damage issue is much less bothersome in fission reactors.
Fission produces about 3% of its energy in neutrons, vs. 80% in the DT fusion reaction. The spectrum of fission neutrons is much softer, with a peak around 1 MeV, vs. 14 MeV for DT neutrons. The DT neutrons are above the threshold for (n,2n) reactions in most materials, and have much higher cross sections for (n,p) and (n,alpha) reactions. The latter is particularly troublesome, as helium accumulates inside materials, forming microscopic very-high-pressure bubbles that rip the materials apart.
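The ~80% / 14 MeV figures above follow directly from two-body kinematics; a quick sketch (standard textbook reaction values, not numbers from this thread):

```python
# Sketch: why ~80% of DT fusion energy goes to the neutron.
# Non-relativistic two-body kinematics: the products fly apart with equal
# and opposite momenta, so kinetic energy splits inversely with mass.
Q_DT = 17.6              # MeV released per D + T -> He-4 + n reaction
m_alpha, m_n = 4.0, 1.0  # product masses in atomic mass units (approximate)

E_neutron = Q_DT * m_alpha / (m_alpha + m_n)  # heavier partner's share goes to n
E_alpha = Q_DT * m_n / (m_alpha + m_n)

print(E_neutron, E_alpha)   # 14.08 MeV and 3.52 MeV
print(E_neutron / Q_DT)     # 0.8 -> the ~80% figure above
```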
But it's even comparatively worse for fusion than that. In a PWR (for example), the core is carefully designed so that the only parts exposed to unmoderated neutrons are the fuel rods and the replaceable parts of the fuel rod bundles. The latter provide structural support for the fuel rods and are removed along with the fuel rods when the fuel is spent. The actual core supports for the fuel bundles are well away from where the chain reaction is occurring, shielded by water. The mean free path of a fission neutron in water is just a few centimeters, so their energy is quickly dissipated before reaching these components.
So, exposure of permanent reactor components to fast neutrons is essentially a non-issue in PWRs. Even control rods are not exposed much; reactivity is controlled by boric acid dissolved in the water (BWRs do it somewhat differently.)
This same strategy cannot be used in a fusion reactor; the plasma facing surfaces are exposed to the full, unshielded brunt of the DT neutron flux. Maybe a few cm of liquid lithium could be flowed along some surfaces? This is a stretch, particularly in a toroidal reactor.
I imagine a 20 year gap isn't too crazy for a field like fusion, but you've made me curious as well.
Anyone have any idea where First Light Fusion's third machine fits into this?
The idea of using literal guns (gunpowder, then light gas gun, then coil gun) to impact projectiles against each other seemed like it was probably ludicrous, but I haven't seen any critical media or numbers yet.
FLF's own numbers on this are given in a white paper [1, fig 7.]. Note the "fusion measured" datapoint used a gas gun, rather than Machine-3, as the driver... [1] https://firstlightfusion.com/wp-content/uploads/2024/08/firs...
Amazing! Commercial fusion energy is only 30 years away.
(it's been 30 years away for 50 years already, but as long as I'm not dead 30 years from now, it's still a good investment...)
Or maybe 10 https://news.mit.edu/2024/commonwealth-fusion-systems-unveil...
This will probably need to be updated soon. There are rumors NIF recently achieved a gain of ~4.4 and ~10% fuel burn up. Being able to ignite more fuel is notable in and of itself.
what "gain" means.
In the context implied above it is the ratio of fusion energy released to laser energy on target or the laser energy crossing the vacuum vessel boundary (they are the same in this case). So it would have been more precise to say "target gain" or "scientific gain".
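As a concrete illustration of target gain, using the widely reported figures from NIF's December 2022 ignition shot (not numbers taken from this thread):

```python
# Target gain = fusion energy released / laser energy on target.
# Values below are the commonly cited ones for NIF's Dec 2022 shot.
laser_energy_mj = 2.05   # MJ of laser energy delivered to the target
fusion_yield_mj = 3.15   # MJ of fusion energy released

target_gain = fusion_yield_mj / laser_energy_mj
print(round(target_gain, 2))  # ~1.54, i.e. "gain > 1" at the target
```

Note this ignores the wall-plug energy of the laser system, which is far larger; that's why "target gain" or "scientific gain" is the precise term.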
Energy out/energy into capsule
I'm excited about the new SQuID design from the Max Planck Institute; it's a design using the lessons learned from the existing W7-X stellarator.
Are there any betting odds on "On-Earth Fusion makes up more than 1% of the world energy supply by 2100?"
Metaculus sort of does this. The mean prediction is 2046.
https://www.metaculus.com/questions/9464/nuclear-fusion-powe...
Hmm. How much of this progress is really progress to actual useful fusion power ?
I want to believe, but this does not make that easier.
Progress toward net fusion energy is critical for delivering fusion power on the grid. It's not the only progress required — the rest of the machine has to be economical to build and operate. Most of the fusion machines in this paper are scientific projects, but as commercialization progresses, fusion machines with power plant needs in mind should arrive.
(I work for one startup in the field, Commonwealth Fusion Systems. We're building our SPARC tokamak now to demonstrate net energy gain in a commercially relevant design.)
I wish you luck !
I've heard of q-plasma and q-total. What is q-science?
It’s the ratio of fusion energy released to heating energy crossing the vacuum vessel boundary.
So much happening in energy right now. If I was to do it again I would have focused on this industry.
Fusion race vs space race is rather interesting.
The money being spent on fusion should be being spent building next generation fission power plants and liquid salt reactors.
If you took all the money in the world being spent on fusion research right now you would struggle to build a single 1 GWe fission power plant. That doesn't sound like an improvement in resource allocation to me.
What's the ROI on that versus current and near-term expected pricing for solar+storage? Is fission getting safer/cheaper at the same rate that solar and batteries are?
I wonder if it would make sense to make ultra heavy spent nuclear fuel into gigantic flywheels for short-term grid energy storage
No. Because flywheels are not limited by material density, they are limited by centrifugal force.
High density is actively bad, you want to maximize strength and minimize density for flywheel designs, and this makes you much more likely to end up with low density composites (rather than high density tungsten alloys or somesuch).
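A rough sketch of why strength-to-density, not density, sets flywheel performance; the material numbers below are approximate order-of-magnitude values assumed for illustration only:

```python
# Ideal flywheel specific energy: E/m = K * sigma / rho, where K is a
# shape factor (~0.5 for simple geometries), sigma is tensile strength,
# rho is density. Density appears in the denominator, so heavy spent
# fuel would make a poor flywheel.
def specific_energy_wh_per_kg(sigma_pa, rho_kg_m3, K=0.5):
    return K * sigma_pa / rho_kg_m3 / 3600.0  # J/kg -> Wh/kg

print(specific_energy_wh_per_kg(4.0e9, 1600))   # carbon fiber: ~347 Wh/kg
print(specific_energy_wh_per_kg(1.0e9, 7800))   # high-strength steel: ~18 Wh/kg
print(specific_energy_wh_per_kg(0.4e9, 19000))  # dense uranium-like metal: ~3 Wh/kg
```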
Solar + days of storage is far more expensive than fission. Grid scale batteries like California has spent billions on only have 4 hour capacity. Fission can also supply heat that is needed for many industrial processes and chemical reactions.
It is not in most US areas. The only problem is the area covered, NOT the price of the technology. Solar with 12 hours of storage was cheaper than fission before COVID hit. TCO, not one-time nonsense.
Fission provides relatively low-temperature heat, i.e. no metal reduction, no "concrete" production; you can cook hot dogs with it. Also, electrification of heat can reduce losses stemming from regulation (or the lack thereof): with electricity you can say "I need 293.5 degrees C", type it in somewhere, and you get that regulation almost for free.
I am no fan of fission (I strongly oppose new fission plants). But one problem with solar+storage is that the cost of the storage component increases roughly linearly with the desired storage duration. That's not true of a fueled power plant (fission or fossil).
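A toy cost model of that linear scaling (the per-kWh price is an assumed round number for illustration, not market data):

```python
# Battery storage capex grows linearly with hours of desired autonomy,
# while a fueled plant pays roughly the same capital regardless of how
# long it must run; only the (comparatively cheap) fuel bill grows.
def storage_capex(power_mw, hours, usd_per_kwh=300):
    # assumed all-in battery cost per kWh of capacity (illustrative)
    return power_mw * 1000 * hours * usd_per_kwh

print(storage_capex(1000, 4) / 1e9)   # 4 h at 1 GW  -> $1.2B
print(storage_capex(1000, 48) / 1e9)  # 48 h at 1 GW -> $14.4B, 12x more
```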
This is manipulative.
A coal power plant needs 100 or so rail cars' worth of material brought in every single day, so you are simplifying too much.
Every person doing anything with power generation should put into a spreadsheet what quantities of material are needed to provide power capacity for an entire grid.
And you need people and infrastructure to bring, prepare, and load that material, which adds the COST OF LOCKING UP WORKFORCE in nonsensical jobs: if someone drives a train supplying a coal plant with coal, he cannot do a programming job, a job in services, etc. Labor/workforce "opportunity cost".
With PV + battery you bring material once per 10-15 years, and not in the quantities fossil plants need. And one coal plant's worth of personnel can manage a greater amount of generating capacity in PV/battery.
A nuclear plant of ANY KIND will have an even bigger workforce than a whole coal plant, just to do NONTRIVIAL maintenance. Even a simple microcontroller or sensor used in a nuclear power plant has to be kept available for the plant's 30-40 year lifetime. In PV you can use any inverter or solar panel, interchange them, mix them; this is not as simple with a nuclear plant.
People involved in providing energy services, and citizens drawing energy from the grid, should start to think like producers AND consumers, not only like consumers. That way a lot of "grid problems" will be easier to deal with.
Just curious, what makes you oppose new fission plants? Do you think existing ones should be closed before their scheduled end-of-life?
I will bite.
There are many problems with fission, all related to the extraordinary danger of handling the fuel, the byproducts, and the sites themselves.
The cost of them is huge, some people are hoping that modularity will help with construction, but it is still astonishingly expensive.
The problem of handling the fuel has been solved, in theory and in practice. Except when commerce is involved. When the money people get involved, corners will get cut, and we are back to incredible danger. Technically solvable, but I would not go near it. I have known too many business people.
The problem of the long-term waste is entirely beyond us. There has been no practical progress on this front. Long term waste (including some parts of the assemblies themselves) are very dangerous for hundreds of thousands of years.
This is, with current technology that can be bought to bear, unsolvable.
The only thing we can do is put it in a stable site, be ready to move it when the site becomes unstable (nowhere on Earth is known to be stable on such time scales), and find a way of communicating, across thousands of generations, just how poisonous this stuff is.
Maybe our descendants will get lucky and find a way to safely dispose of it....
So fission power is making future generations pay for today's consumption.
Fortunately for us it is moot. The cost of renewables has dropped to the point that the only reason for fission is to build the capacity for nuclear weapons.
I am so tired of this lie being repeated endlessly. We have a perfectly safe way to handle nuclear "waste":
reprocess the dirty fuel and bury the actual waste deep underground like Finland is doing at the Onkalo spent nuclear fuel repository.
https://en.wikipedia.org/wiki/Onkalo_spent_nuclear_fuel_repo...
And there is still very much a need for zero-carbon DISPATCHABLE electricity, of which nuclear is the ONLY choice. You simply cannot have 100% of your electricity from only solar and wind, because they are far too variable and we simply don't have the technology to store electricity cheaply enough.
Your attitude towards nuclear energy is as irrational as the average anti-vaxxer's towards vaccines.
hydro is also a 0 carbon dispatchable choice (which is much cheaper)
Majority of good spots for hydro were already built up, and if they weren’t, good luck with NIMBY.
Many of the good spots are already used, but we can change how they are used to greatly increase their usefulness to a zero-carbon grid. We can make them all pumped hydro and start treating hydro as a battery rather than a generation source.
> bury the actual waste deep underground
How deep, to stay put thousands of generations?
https://en.wikipedia.org/wiki/Onkalo_spent_nuclear_fuel_repo...
Where are the flow batteries? (fuel cells)
Lithium ion batteries are light with a high energy density, so are great for cars.
Flow batteries have a low energy density, but increasing the duration just means a bigger tank, and tank cost grows more slowly than capacity (roughly with the tank's surface area, i.e. the two-thirds power of its volume). Flow batteries are well over a century old, and I have been reading about improvements for the last two decades. Where are they?
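A minimal sketch of that tank-scaling intuition, assuming wall cost is proportional to surface area:

```python
# For a roughly cubic or spherical tank, stored volume grows as L^3
# while wall material (and so cost) grows as L^2 ~ volume^(2/3).
# Cost per unit of stored energy therefore falls as volume^(-1/3).
def cost_per_unit_volume(volume, k=1.0):
    # k is an assumed cost per unit wall area (arbitrary units)
    return k * volume ** (2 / 3) / volume  # = k * volume^(-1/3)

for v in (1, 1000, 1_000_000):
    print(v, cost_per_unit_volume(v))  # 1.0, ~0.1, ~0.01
```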
They are having trouble staying ahead of the enormous monster that is the lithium battery industry, which through sheer scale is lowering costs and breaking into one market after another.
It is the good old: Good enough beats theoretically perfect.
Flow batteries are "controlled" by US patents. Solar + batteries are not limited in a bad way by any US companies' grip on patents.
China makes all the panels; Asia makes all the batteries. So US utilities / energy providers cannot get a harmful grip on PV + batteries.
US utilities / energy providers want a docile customer who only pays every month. They do not want to invest money into the grid and have customers not only demand from but also supply the grid, because they do not understand how to benefit from that. They could; it is just a mental limit for them.
Utilities / energy providers were too lazy to think about a proper decentralized grid, so every participant in the US grid will suffer more because of that.
This will be flagged as conspiracy, because it is a conspiracy: a conspiracy against US citizens by US companies / US interests.
There are a number of flow batteries, which have large vats where charge is stored in two discrete charge-state fluids via a redox reaction. They charge a vat through a cell and discharge it in the other direction. The limits are the solubility of the charge states in the transport fluid: huge vats for total watt-hours, and huge redox cells for the rate of charge/discharge. They run well, and vats are cheap. https://en.wikipedia.org/wiki/Flow_battery
If you don’t need a mobile power plant why bother with fusion power instead of something like geothermal? At the end of the day we’re just turning water into steam.
No proposed variant of fusion power plant is smaller than a current coal plant.
Maybe someday we’ll finally achieve the ultimate dream: an extremely expensive nuclear power plant that needs vast amounts of coolant water and leaves radioactive waste behind.
If the alternative option is a coal power plant, sign me up!
You will not live long enough to see commercial fusion power, and your children will not live long enough to see a complete end to thermal coal.
I don't know I think thermal coal could end in my lifetime.
I don't see how your comment addresses what I said at all.
coal is already very much on the way out. natural gas is much cheaper, and also greener.
Tossing out your opinions as fact doesn't do much to win hearts and minds, or educate us bystanders to the basis for your point of view.
Presumably your comment is either to persuade or to inform; it does neither. I'm very curious about this field and its future, do you care to try again?
I'm a different person, but I tend to agree.
ITER began building in 2013, first plasma is expected for 2034. DEMO is expected to start in 2040.
So, ITER is taking an estimated 20 years. It's being built for a reason, so I imagine follow-ups want to wait to see how that shakes out. So certainly, DEMO needs to start a few years after ITER is finally done.
Then DEMO isn't a production setup either, it's going to be the first attempt at a working reactor. So let's say optimistically 20 years is enough to build DEMO, run it for a few years, see how it shakes out, design the follow-ups with the lessons learned.
That means the first real, post-DEMO plant starts building somewhere in 2060. Yeah, fair to say a lot of the here present will be dead by then, and that'll only be the slow start of grid fusion if it sticks at all. Nobody is going to just go and build a hundred reactors at once. They'll be built slowly at first unless we somehow manage to start making them amazingly quickly and cheaply.
So that's what, half a century? By the time fusion gets all the kinks worked out, chances are it'll never be commercially viable. Renewables are far faster to build, many problems are solvable by brute force, and half a century is a lot of time to invent something new in the area.
ITER/DEMO is an exceptionally slow fusion project and arguably obsolete since it uses older superconductors. CFS uses the same design, with modern superconductors that can support much stronger magnetic fields. Tokamak output scales with the fourth power of magnetic field strength, so this should let them get results similar to ITER in a reactor a tenth the size. They'll have it running long before ITER is ready.
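A back-of-the-envelope check of that scaling (the field strengths are the published design values for ITER and SPARC; the B^4 law itself is the usual rough scaling at fixed plasma beta):

```python
# Tokamak fusion power scales roughly as B^4 * V at fixed beta, so a
# stronger field lets a much smaller machine match the same output.
def relative_volume_for_same_power(b_old, b_new):
    """Volume ratio needed to keep output constant when B changes."""
    return (b_old / b_new) ** 4

# ITER toroidal field ~5.3 T on axis; SPARC ~12.2 T (design values,
# used here for illustration only)
ratio = relative_volume_for_same_power(5.3, 12.2)
print(f"{ratio:.3f}")  # ~0.036, i.e. a few percent of ITER's volume
```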
ITER will have 400x lower power density than a PWR.
ARC, which uses those high temperature superconductors, is just 40x lower power density.
Neither promises to be competitive with fission, never mind the things beating fission.
If Jesus Christ himself came to earth and hand delivered a durable and workable reactor design WITH high uptime WITH a near-optimal confinement scheme WITH zero neutronicity AND he included a decade of free perfectly packaged and purified fuel, it would still not pencil out as anything other than water-hungry staff-intensive baseload requiring significant state support.
This is the reality. It’s not happening. It’s a welfare program for bullshit artists that depends on a credulous public.
I see you're in the coolant business
I am in the business of baiting militantly uninformed enthusiasts who form the foundation of the multigenerational grift that is Commercial Fusion Power.
Real talk, the point is not that whatever system is first past the post for fusion becomes the gold standard and fills the planet.
The issue right now is cracking the code. Once that is done, performance gains and miniaturization can take place.
Fusion can work on lots of things. It's possible that a fusion system the size of a car could be made within 25 years of the code being cracked that would power a house, or one the size of a small building that could power a city block.
The waste product of hydrogen fusion is helium, a valuable resource that will always be in high demand, and it will not be radioactive.
And yes, it will need coolant as with hot fusion the system uses the heat to turn a turbine, but that coolant isn't fancy, it's just water.
Fusion has the potential to solve more problems than it causes by every metric as long as it is doable without extremely limited source materials, and this is what these big expensive reactors are trying to solve.
You’ve disputed nothing I’ve said and unless a dramatically higher temperature fusion reaction that does not generate a neutron flux is achieved, it will generate radioactive waste as a matter of factual physics. Thank you though!
I mean, yes, you're right, but it's not a permanently radioactive waste.
Quote:
A fusion power plant produces radioactive waste because the high-energy neutrons produced by fusion activate the walls of the plasma vessel. The intensity and duration of this activation depend on the material impinged on by the neutrons.
The walls of the plasma vessel must be temporarily stored after the end of operation. This waste quantity is initially larger than that from nuclear fission plants. However, these are mainly low- and medium-level radioactive materials that pose a much lower risk to the environment and human health than high-level radioactive materials from fission power plants. The radiation from this fusion waste decreases significantly faster than that of high-level radioactive waste from fission power plants. Scientists are researching materials for wall components that allow for further reduction of activation. They are also developing recycling technologies through which all activated components of a fusion reactor can be released after some time or reused in new power plants. Currently, it can be assumed that recycling by remote handling could be started as early as one year after switching off a fusion power plant. Unlike nuclear fission reactors, the long term storage should not be required.
https://www.ipp.mpg.de/2769068/faq9
Basically, whatever containment vessel becomes standard for the whole fusion industry would need probably an annual cycle of vessel replacements, which would be recycled indefinitely and possibly mined for other useful radioactive byproducts in the process.
The amount of radioactive scrap produced by hypothetical decommissioned fusion containment vessels is laughably trivial compared to fission waste streams. Even accounting for the most pessimistic irradiation models of first-wall materials, the total radioactive burden remains orders of magnitude below legacy technologies. The half-lives of activated components, predominantly steel alloys and ceramic composites, trend dramatically shorter than actinide-laden spent fuel, with activity levels plummeting to background within mere decades rather than geological timescales. This makes waste management a single-generation engineering challenge rather than a multi-millennial obligation.
The long term activity of the waste is certainly lower, but the volume of the waste is likely much higher. And much of the cost is driven by volume, not activity.
As a species, we're spectacularly bad at negative externalities.
We are also very bad at anything very long term. We've hardly pulled off any physical project to last more than one generation recently. We barely invest in any.
The winning energy tech of the future better have as little negative externalities as possible, especially long term ones.
Hey, there it is! Lots of radioactive waste being generated on a continuous basis, but maybe, with dreams and schemes, we can decommission it with robots and recycle it all. Meanwhile a reactor is offline for refurbishment for days, weeks, months, blowing a hole in the economics of it all.
Unironically: you’re the first person I’ve come across to openly acknowledge this issue. Thank you.