Sunday, February 24, 2019

Nuclear goes retro — with a much greener outlook

Returning to designs abandoned in the 1970s, start-ups are developing a new kind of reactor that promises to be much safer and cleaner than current ones.


Troels Schönfeldt can trace his path to becoming a nuclear energy entrepreneur back to 2009, when he and other young physicists at the Niels Bohr Institute in Copenhagen started getting together for an occasional “beer and nuclear” meetup.

The beer was an India pale ale that they brewed themselves in an old, junk-filled lab space in the institute’s basement. The “nuclear” part was usually a bull session about their options for fighting two of humanity’s biggest problems: global poverty and climate change. “If you want poor countries to become richer,” says Schönfeldt, “you need a cheap and abundant power source.” But if you want to avoid spewing out enough extra carbon dioxide to fry the planet, you need to provide that power without using coal and gas.

It seemed clear to Schönfeldt and the others that the standard alternatives simply wouldn’t be sufficient. Wind and solar power by themselves couldn’t offer nearly enough energy, not with billions of poor people trying to join the global middle class. Yet conventional nuclear reactors — which could meet the need, in principle — were massively expensive, potentially dangerous and anathema to much of the public. And if anyone needed a reminder of why, the catastrophic meltdown at Japan’s Fukushima Daiichi plant came along to provide it in March 2011.

On the other hand, says Schönfeldt, the worldwide nuclear engineering community was beginning to get fired up about unconventional reactor designs — technologies that had been sidelined 40 or 50 years before, but that might have a lot fewer problems than existing reactors. And the beer-and-nuclear group found that one such design, the molten salt reactor, had a simplicity, elegance and, well, weirdness that especially appealed.

The weird bit was that word “molten,” says Schönfeldt: Every other reactor design in history had used fuel that’s solid, not liquid. This thing was basically a pot of hot nuclear soup. The recipe called for taking a mix of salts — compounds whose molecules are held together electrostatically, the way sodium and chloride ions are in table salt — and heating them up until they melted. This gave you a clear, hot liquid that was about the consistency of water. Then you stirred in a salt such as uranium tetrafluoride, which produced a lovely green tint, and let the uranium undergo nuclear fission right there in the melt — a reaction that would not only keep the salts nice and hot, but could power a city or two besides.

Photo shows a vial of fluoride salts, including uranium tetrafluoride, as a blue solid and as a melted liquid.
A mixture of fluoride salts (including uranium tetrafluoride), shown as a solid at left and as a melted liquid at right, is one example of a fuel that could be used in a molten salt reactor. CREDIT: ORNL / US DOE 

Weird or not, molten salt technology was viable; the Oak Ridge National Laboratory in Tennessee had successfully operated a demonstration reactor back in the 1960s. And more to the point, the beer-and-nuclear group realized, the liquid nature of the fuel meant that they could potentially build molten salt reactors that were cheap enough for poor countries to buy; compact enough to deliver on a flatbed truck; green enough to burn our existing stockpiles of nuclear waste instead of generating more — and safe enough to put in cities and factories. That’s because Fukushima-style meltdowns would be physically impossible in a mix that’s molten already. Better still, these reactors would be proliferation resistant, because their hot, liquid contents would be very hard for rogue states or terrorists to hijack for making nuclear weapons.

Molten salt reactors might just turn nuclear power into the greenest energy source on the planet.

Crazy? “We had to try,” says Schönfeldt. So in 2014 he and his colleagues launched Seaborg Technologies, a Copenhagen-based start-up named in honor of the late Glenn Seaborg, a Manhattan Project veteran who helped pioneer the peaceful uses of nuclear energy. With Schönfeldt as chief executive officer, they set about turning their vision into an ultracompact molten salt reactor that could serve the developed and developing world alike.

“It will be exceedingly hard, but that is significantly better than impossible.” Troels Schönfeldt

They weren’t alone: Efforts to revive older nuclear designs had been bubbling up elsewhere, and dozens of start-ups were trying to commercialize them. At least half a dozen of these start-ups were focused on molten salt reactors specifically, since they were arguably the cleanest and safest of the lot. Research funding agencies around the world had begun to pour millions of dollars per year into developing molten salt technology. Even power companies were starting to make investments. A prime example was the Southern Company, a utility conglomerate headquartered in Atlanta, Georgia. In 2016, the company started an ambitious molten salt development program in collaboration with Oak Ridge and TerraPower, a nuclear research company in Bellevue, Washington.

“In the next 20 to 30 years, the energy environment is going to undergo a major transformation to a low- to no-carbon future,” says Nick Irvin, Southern’s director of research and development. There will be far fewer centralized power plants and many more distributed sources like wind and solar, he says. Molten salt reactors fit ideally into this future, he adds, because of both their inherent safety and their ability to consume spent nuclear fuel from traditional nuclear reactors.

Diagram shows the outline of the design of a molten salt reactor, one of the Generation IV nuclear designs now being pursued in numerous projects around the world. The fuels are continuously reprocessed with reaction-slowing fission products filtered out, requiring less input of fresh fuel. The heat generated in the reactor is cycled into a water tank, turning water to steam. The steam then turns a turbine, generating electricity. A frozen solid salt plug sits below the reactor as a precaution. If power fails, the plug will warm and melt, allowing the reactor fuel to empty into a storage tank and shutting down the reactor.

A molten salt reactor differs from a conventional nuclear reactor in a number of ways, starting with the fact that it uses nuclear fuel that’s liquid instead of solid. This has profound implications for safety. For example, meltdowns would be a non-issue: The fuel is already molten. And if temperatures in the fuel mix get too high for any reason, a plug of frozen salt below the reactor will melt and allow everything to drain into an underground holding tank for safekeeping. Long-lived nuclear waste would also be a non-issue: A chemical system would continuously extract reaction-slowing fission products from the molten fuel, which would allow plutonium and all the other long-half-life fissile isotopes to be completely consumed.

Getting there won’t be easy — not least because hot molten salts can be just as corrosive as they sound. Every component that comes into contact with the brew will have to be made of a specialized, high-tech alloy that can resist them. “You dissolve the uranium in the salt,” says Nathan Myhrvold, a venture capitalist who serves as vice chairman of TerraPower’s board. “What you have to make sure is that you don’t dissolve your reactor in it!”

Certainly no one expects to have a prototype power plant operating before the mid-2020s, or to field full-scale commercial reactors until the 2030s. Still, says Schönfeldt, “it will be exceedingly hard, but that is significantly better than impossible.”

Rethinking nuclear

To Rachel Slaybaugh, today’s surge of entrepreneurial focus on nuclear technology is astonishing. “It feels like we’re at the beginning of a movement, with an explosion of ideas,” says Slaybaugh, a nuclear engineer at the University of California, Berkeley, who has written about green energy options in the Annual Review of Environment and Resources.

But, as nuclear engineer Leslie Dewan points out, this explosion is also something of a throwback to the post–World War II era. “Nuclear power technology was incredibly new,” says Dewan, who in 2011 cofounded one of the first of the molten salt start-ups, Transatomic Power in Cambridge, Massachusetts. It was a time of blue-sky thinking, she says, “where they were trying many, many different types of technologies, running experiments, building and prototyping.”

Diagram of an atom shows a dense core of protons and neutrons surrounded by orbiting electrons. The core is called the nucleus.

Atoms may be the smallest unit of matter, but they are made up of even smaller subunits, including protons, neutrons and electrons. The energy that binds the protons and neutrons together in the atom’s nucleus is enormous, and can be released and put to use during a nuclear reaction.

The basics had been known since 1938, when German scientists discovered that firing a neutron into certain heavy atomic nuclei would cause the nucleus to fission, or split into two pieces. The rupture of such a “fissile” nucleus would release an enormous amount of energy, plus at least two new neutrons. These neutrons could then slam into nearby nuclei and trigger the release of more energy, plus 4 neutrons — then 8, 16, 32 and so on in an exponentially growing chain reaction.

This runaway energy release could produce a very powerful bomb, as the wartime Manhattan Project demonstrated. But taming it, and turning the chain reaction into a safe, steady-state heat source for power production, was a lot trickier.

The heart of any nuclear energy production is fission, the breaking up of an atom into various parts. The diagram here shows a free neutron shattering an atomic nucleus into two main fission products while releasing two additional free neutrons.

Fission is at the heart of nuclear energy production. Fission happens when a free neutron slams into an unstable atomic nucleus and shatters it into two or more “fission products”: lighter elements such as krypton and barium that cluster around the middle of the periodic table. Thanks to Einstein’s famous equation E = mc², this process also transforms a tiny bit of the original nucleus’s mass into an immense amount of energy.

That’s where all the postwar experimentation came in. There were reactors fueled by uranium, which comes out of the ground containing virtually the only fissile isotope found in nature, uranium-235. There were reactors known as breeders, which could accomplish the magical-sounding feat of producing more fuel than they consumed. (Actually, they relied on the fact that uranium is “fertile,” meaning that its most abundant isotope, uranium-238, almost never undergoes fission by itself — but it can absorb a neutron and turn into highly fissile plutonium-239.) And there were reactors fueled with thorium, a fertile element that sits two slots to the left of uranium in the periodic table, and is about three times more abundant in the Earth’s crust. A neutron will turn its dominant isotope, thorium-232, into fissile uranium-233.

Diagram shows how isotopes differ from each other, showing the makeup of the three isotopes of carbon: carbon-12 (6 protons and 6 neutrons), carbon-13 (6 protons and 7 neutrons) and carbon-14 (6 protons and 8 neutrons). Uranium isotopes are a key nuclear fuel since the instability of the rare uranium-235 atom makes it easier to split and start a nuclear chain reaction.

An element, such as carbon or uranium, is defined by the number of protons in its nucleus: Carbon has 6, uranium 92. But atoms may vary in the number of neutrons, producing slightly different flavors, or isotopes, of an element. For example, the most common form of carbon is carbon-12, which has 6 protons and 6 neutrons, and is stable. But other isotopes include carbon-13, which is also stable, and carbon-14, which is unstable and radioactive. Uranium’s most common form is uranium-238, which has 92 protons and 146 neutrons, and decays only after billions of years. Other, less stable isotopes include uranium-235 (92 protons and 143 neutrons) and uranium-233 (92 protons and 141 neutrons), both of which can undergo nuclear fission if they are struck by a neutron.

At the same time, designers were trying out different types of coolant: the fluid that circulates through the reactor core, absorbs the heat being produced by the fission reactions, and carries it out to where the heat can do something useful like running a standard steam turbine to generate electricity. Some opted for ordinary water: an abundant, familiar substance that carries a lot of heat per unit of volume. But others went with high-temperature substances such as liquid sodium metal, helium gas or even molten lead. “Coolants” like these could keep a reactor running at 700 degrees Celsius or more, which would make it substantially more efficient at generating power.

By the 1960s, researchers had tested reactors featuring combinations of all these options and more. But the approach that won out for commercial power production — and that is still used in virtually all of the 454 nuclear plants operating around the world — was the water-cooled uranium reactor. This wasn’t necessarily the best nuclear design, but it was one of the first: Water-cooled reactors were originally developed in the 1940s to power submarines. So in the 1950s, when the Eisenhower administration launched a high-profile push to harness nuclear energy for peaceful purposes, the technology was adapted for civilian use and scaled up enormously. Other designs were left for later, if ever. By the 1960s and 1970s, second-generation water-cooled reactors were being deployed globally.

“It feels like we’re at the beginning of a movement.” Rachel Slaybaugh

Even then, however, there were many in the field who were uneasy with that choice. Among the most notable was nuclear physicist Alvin Weinberg, a Manhattan Project veteran and director of the Oak Ridge National Laboratory. Weinberg had participated in the development of water-cooled reactors, and knew that they had some key vulnerabilities — including water’s low boiling point, just 100°C at normal atmospheric pressure.

Commercial nuclear plants could get that up to 325°C or so by pressurizing the reactor vessel. But as Weinberg and others knew very well, that was not enough to rule out the nightmare of nightmares: a meltdown. All it would take was some freak accident that interrupted the flow of water through the core and trapped all the heat inside. You could shut down power production by dropping rods of boron or cadmium into the reactor core to soak up neutrons and stop the chain reaction. But nothing could stop the heat produced by the decay of fission products — the melange of short-lived but fiercely radioactive elements that inevitably build up inside an active reactor as nuclei split in two.

Diagram shows a fission chain reaction, in which each generation of fission products gives rise to two free neutrons. These then slam into larger atoms, producing more neutrons, fission products and energy. The chain reaction produces huge amounts of power for a relatively small amount of nuclear fuel.

When the nuclei of certain “fissile” isotopes such as uranium-235 are struck by a neutron, they don’t just split apart. They also release at least two additional neutrons, which can then go on to split two more nuclei. This produces at least 4 additional neutrons – then 8, 16, 32, 64 and so on. The result is an exponentially growing chain reaction, which can produce a nuclear explosion if it’s allowed to run away — or useful nuclear power if it’s contained and controlled inside a reactor.
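
To make that arithmetic concrete, here is a minimal Python sketch of how a neutron population grows or shrinks generation by generation. The multiplication factor k is a standard reactor-physics quantity, not a number from the article: k greater than 1 gives the runaway growth described above, k equal to 1 gives the steady output a power reactor aims for, and k less than 1 lets the reaction die out.

# Minimal sketch: neutron population over successive fission generations.
# k is the average number of neutrons from each fission that go on to cause
# another fission (an illustrative multiplication factor, not a value from the article).
def neutron_population(k, generations, start=1.0):
    counts = [start]
    for _ in range(generations):
        counts.append(counts[-1] * k)
    return counts

print(neutron_population(2.0, 5))   # runaway doubling: 1, 2, 4, 8, 16, 32
print(neutron_population(1.0, 5))   # controlled, steady chain reaction
print(neutron_population(0.9, 5))   # sub-critical: the reaction dies out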

Unless the operators managed to restore the coolant flow within a few hours, that trapped fission-product heat would send temperatures soaring past the 325°C mark, turn the water into high-pressure steam, and reduce the solid fuel to a radioactive puddle melting its way through the reactor vessel floor. Soon after, the vessel would likely rupture and send a pressurized plume of fission products into the atmosphere. Included would be radioactive strontium-90, iodine-131 and caesium-137 — extremely dangerous isotopes that can easily enter the food chain and end up in the body.

To forestall such a catastrophe, designers had equipped the commercial water-cooled reactors with all manner of redundancies and emergency backup cooling systems. But to Weinberg’s mind, that was a bit like installing fire alarms and sprinkler systems in a house built of papier-mâché. What you really wanted was the nuclear equivalent of a house built of fireproof brick — a reactor that based its safety on the laws of physics, with no need for operators or backup systems to do anything.

Weinberg and his team at Oak Ridge believed that they could come very close to that ideal with the molten salt reactor, which they had been working on since 1954. Such a reactor couldn’t possibly suffer a meltdown, even in an accident: The molten salt core was liquid already. The fission-product heat would simply cause the salt mix to expand and move the fuel nuclei farther apart, which would dampen the chain reaction.

Pressure would be a non-issue as well: The salts would have a boiling point far higher than any temperature the fission products could produce. (One common choice for nuclear applications is FLiBe, a mix of lithium fluorides and beryllium fluorides that doesn’t boil until 1,400°C, about the temperature of a butane blowtorch.) So the reactor vessel would never be in danger of rupture from molten salt “steam.” In fact, the reactor would barely shift from its normal operating pressure of one atmosphere.

Better still, the molten core would trap fission products far more securely than in solid-fueled reactors. Cesium, iodine and all the rest would chemically bind with the salts the instant they were created. And since the salts could not boil away in even the worst accident, these fission products would be held in place instead of being free to drift off and take up radioactive residence in people’s bones and thyroid glands.

And just in case, the liquid nature of the fuel allowed for a simple fail-safe known as a freeze plug. This involved connecting the bottom of the reactor vessel to a drain pipe, which would be plugged with a lump of solid fuel salt kept frozen by a jet of cool gas. If the power failed and the gas flow stopped, or if the reactor got too hot, the plug would melt and gravity would drain the contents into an underground holding tank. The mix would then cool, solidify and remain in the tank until the crisis was over — salts, fuel, fission products and all.

Molten salt success, then a detour

Weinberg and his team successfully demonstrated all this in the Molten Salt Reactor Experiment, an 8-megawatt prototype that ran at Oak Ridge from 1965 to 1969. The corrosiveness of the salts was a potential threat to the long-term integrity of pipes, pumps and other parts, but the researchers had identified a number of corrosion-resistant materials they thought might solve the problem. By the early 1970s, the group was well into development of an even more ambitious prototype that would allow them to test those materials as well as to demonstrate the use of thorium fuel salts instead of uranium.

Diagram shows the two fertile isotopes (thorium-232 and uranium-238) and how they can be converted into two fissile isotopes that are useful as nuclear fuels: uranium-233 and plutonium-239. It also shows the fissile isotope uranium-235.

Hundreds of nuclear isotopes have been observed in the laboratory, but only three of them seem to be a practical source of fission energy. Uranium-235, which comprises 0.7 percent of natural uranium ore, can sustain a chain reaction all by itself; it is the primary energy source for nuclear power reactors operating in the world today. Uranium-238, which comprises virtually all the other atoms in mined uranium, is “fertile”: it can’t sustain a chain reaction by itself, but when it’s hit by a neutron it can transform into the highly fissile isotope plutonium-239. Likewise with thorium-232: Slamming it with a neutron can turn it into the fissile isotope uranium-233.

The Oak Ridge physicists were also eager to try out a new system for dealing with the waste fission products — one that again took advantage of the fuel’s liquid nature, but had been tested only in the laboratory. The idea was to siphon off a little of the reactor’s fuel mix each day and run it through a nearby purification system, which would chemically extract the fission products in much the same way that the kidneys remove toxins from the bloodstream. The cleaned-up fuel would then be circulated back into the reactor, which could continue running at full power the whole time. This process would not only keep the fission products from building up until they snuffed out the chain reaction — a problem for any reactor, since these elements tend to absorb a lot of neutrons — but it would also enhance the safety of molten salt still further. Not even the worst accident can contaminate the countryside with fission products that aren’t there.

But none of it was to be. Officials in the US nuclear program terminated the Oak Ridge molten salt program in January 1973 — and fired Weinberg.

The nuclear engineering community was just too heavily committed to solid fuels, both financially and intellectually. Practitioners already had decades of experience with experimental and commercial solid-fueled reactors, versus that one molten salt experiment at Oak Ridge. A huge infrastructure existed for processing and producing solid fuel. And, not incidentally, the US research program was committed to a grand vision for the global nuclear future that would expand this infrastructure enormously — and that, viewed with 20-20 hindsight, would lead the nuclear industry into a trap.

Key to that vision was a different way of dealing with the buildup of fission products. Since Oak Ridge–style continuous purification wasn’t an option in a solid-fuel reactor, water-cooled or otherwise, standard procedure called for burning fuel until the fission products rendered it useless, or spent. In water-cooled power reactors this took roughly three years, at which point the spent fuel would be switched out for fresh, then stored at the bottom of a pool of water for a few years while the worst of its fission-product radioactivity decayed.

The first diagram shows three primary types of radiation that can be emitted from an atom. They are denoted as alpha (a cluster of two protons and two neutrons), beta (an electron) and gamma (a high-energy photon). The second illustrates the decay of radioactivity from a given material over time.

Radioactivity forms when an unstable atomic nucleus sheds its excess energy by firing off a high-speed particle. This allows the nucleus to “decay,” or settle into a more stable form. The type of particle that’s emitted depends on the isotope involved, but primarily is alpha, beta or gamma. The rate of decay depends on the isotope’s half-life: the time it takes for half the original sample of nuclei to decay.

From there, the plan was to recycle it. Counting the remaining uranium, plus the plutonium that had formed from neutrons hitting uranium-238 nuclei, the fuel still contained most of the potential fission energy it had started with. So there was to be a new, global network of reprocessing plants that would chemically extract the fission products for disposal, and turn the uranium and plutonium into fresh fuel. That network, in turn, would ultimately support a new generation of sodium-cooled breeder reactors that would produce plutonium by the ton — thus solving what was then thought to be an acute shortage of the uranium needed to power the all-nuclear global economy of the future.

But that plan started to look considerably less visionary in May 1974, when India tested a nuclear bomb made with plutonium extracted from the spent fuel of a conventional reactor. Governments around the world suddenly realized that global reprocessing would be an invitation to rampant nuclear weapons proliferation: In plants handling large quantities of pure plutonium, it would be entirely too easy for bad actors to secretly divert a few kilograms at a time for bombs. So in April 1977, US President Jimmy Carter banned commercial reprocessing in the United States, and much of the rest of the world followed.

Diagram shows a closed-loop nuclear fuel cycle, as initially planned in the 1960s and 1970s and followed by only a minority of nations today. Uranium is mined, processed into a fuel and used to produce energy in a reactor. Uranium and plutonium wastes are reprocessed and then used again. Only a small stream of nuclear waste must be managed and stored, and only for hundreds of years.

The vision for nuclear energy 50 years ago included a “closed loop” for nuclear fuels. In this scenario, most fuels would be reprocessed and cycled back into reactors to make more energy. The only wastes would be fission products, which are radioactive for a few hundred years at most.

That helped cement the already declining interest in breeder reactors, which made no sense without reprocessing plants to extract the new-made plutonium, and left the world with a nasty disposal problem. Instead of storing spent fuel underwater for a few years, engineers were now supposed to isolate it for something like 240,000 years, thanks to the 24,100-year half-life of plutonium-239. (The rule of thumb for safety is to wait 10 half-lives, which reduces radiation levels more than a thousand-fold.) No one has yet figured out how to guarantee isolation for that span of time. Today, there are nearly 300,000 tons of spent nuclear fuel still piling up at reactors around the world, part of an as yet unresolved long-term storage problem.
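
As a back-of-the-envelope check on those numbers, here is a small Python sketch of the 10-half-life rule of thumb applied to plutonium-239:

# The 10-half-life rule of thumb from the text: after n half-lives,
# the remaining activity is (1/2)**n of the original.
half_life_pu239 = 24_100            # years, plutonium-239
n = 10
isolation_time = n * half_life_pu239
reduction = 0.5 ** n                # fraction of the original radioactivity left

print(f"{isolation_time:,} years")                       # 241,000 years, i.e. "something like 240,000"
print(f"1/{1/reduction:.0f} of the original activity")   # about 1/1024, a more than thousand-fold drop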

In retrospect, those 1970s-era nuclear planners would have done well to put serious money back into Oak Ridge’s molten salt program: As developers there tried to point out at the time, the continuous purification approach could have solved both the spent-fuel and proliferation problems at a stroke.

The proliferation risk would be minimal because — unlike the kind of reprocessing plants envisioned for the breeder program — the Oak Ridge system would never isolate uranium-235, plutonium-239 or any other fissile material. Instead, these isotopes would stay in the cleaned-up fuel salts at concentrations far too low to make a bomb. They would be circulated back into the reactor, where they could continue fissioning until they were completely consumed.

“The nuclear industry was not in an innovation frame of mind for 30 years.” Nathan Myhrvold

The reactor’s purification system would likewise offer a solution to the spent fuel issue. It would strip out the reaction-quenching fission products from the fuel almost as quickly as they formed, which would potentially allow the reactor to run for decades at a stretch with only an occasional injection of fresh fuel to replace what it burned. Some of that fuel could even come from today’s 300,000-ton backlog of spent solid fuel.

Admittedly, it would take centuries for even a large network of molten salt reactors to work through the full backlog. But burning it would eliminate the need to safely store it for thousands of centuries. By consuming the long-lived isotopes like plutonium-239, molten salt reactors could reduce the nuclear waste stream to a comparatively small volume of fission products having half-lives of 30 years or less. By the 10 half-life rule, this waste would then need to be isolated for just 300 years. That’s not trivial, says Schönfeldt, “but it’s something that can be handled” — say, by encasing the waste in concrete and steel, or putting it down a deep borehole.

Unfortunately, the late 1970s was not a good time for reviving any kind of nuclear program, molten salt or otherwise. Public mistrust of nuclear energy was escalating rapidly, thanks to rising concerns over safety, waste and weapons proliferation. The power companies’ patience was wearing thin, thanks to the skyrocketing, multibillion-dollar cost of standard water-cooled reactors. And then in March 1979 came a partial meltdown at Three Mile Island, a conventional nuclear plant near Harrisburg, Pennsylvania. In April 1986, another catastrophe hit with the fire and meltdown at the Chernobyl plant in Ukraine.

The resulting backlash against nuclear power was so strong that new plant construction effectively ceased — which is why most of the nuclear reactors operating today are at least three to four decades old. And nuclear power research stagnated, as well, with most of the money and effort going into ensuring the safety of those aging plants.

“The nuclear industry was not in an innovation frame of mind for 30 years,” says TerraPower’s Myhrvold.

Diagram shows an open-loop nuclear fuel cycle as it is practiced in the United States and most nations today. Uranium is mined, processed into a fuel and used to produce energy in a reactor. Substantial amounts of nuclear wastes must be managed and stored for more than 200,000 years before they will be considered safe.

In the 1970s, the threat that reprocessed fuels could be secretly diverted to make nuclear weapons led the United States and many other nations to reject the closed fuel cycle. Instead, they have opted to dispose of spent nuclear fuel after just one pass through the reactor. But because the fuel now contains long-half-life isotopes like plutonium-239, the wastes must be managed and stored for more than 200,000 years before they will be considered safe. No one knows how to guarantee that spent fuel will remain undisturbed for that span of time — one of the many safety concerns that have dogged nuclear energy programs.

Old tech revival

This defensive crouch lasted well into the new century, while the molten salt concept fell further and further into obscurity. That began to change only in 2002, when Kirk Sorensen came across a book describing what the molten salt program had accomplished at Oak Ridge.

“Why didn’t we do it this way in the first place?” he remembers wondering.

Sorensen, then a NASA engineer in Huntsville, Alabama, was so intrigued that he tracked down the old Oak Ridge technical reports, which were moldering in file cabinets, and talked NASA into paying to have them scanned. The files filled up five compact discs, which he copied and sent around to leaders in the US energy industry. “I received no response,” he says. So in 2006, in hopes of reaching somebody who would find the concept as compelling as he did, he uploaded the documents to energyfromthorium.com, a website he’d created with his own money.

That strategy worked — slowly. “I would give Kirk Sorensen personal credit,” says Lou Qualls, a nuclear engineer at Oak Ridge who became the Department of Energy’s first national technical director for molten salt reactors in 2017. “So Kirk is one of those voices out in the wilderness, and for a long time people would go, ‘We don’t even know what you’re talking about.’” But once the old reports became available online, “people started to look at the technology, to understand it, see that it had a history,” Qualls says. “It started getting more credibility.”

“We … became nuclear engineers because we’re environmentalists.” Leslie Dewan

It helped that rising concerns about climate change — and the ever-growing backlog of spent nuclear fuel — had put many nuclear engineers in the mood for a radical rethink of their field. They could see that incremental improvements in standard reactor technology weren’t getting anywhere. Manufacturers had been hyping their “Generation III” designs for water-cooled reactors with enhanced safety features, but these were proving to be just as slow and expensive to build as their second-generation predecessors from the 1970s.

So instead, there was a move to revive the old reactor concepts and update them into a whole new series of Generation IV reactors: devices that would be considerably smaller and cheaper than their 1,000-megawatt, multibillion-dollar predecessors, with safety and proliferation resistance built in from the start. Among the most prominent symbols of this movement was TerraPower. Launched in 2008 with major funding from Microsoft cofounder Bill Gates, the company immediately started development of a liquid sodium-cooled device called the Traveling Wave Reactor.

The molten salt idea was definitely on the Gen IV list. Schönfeldt remembers getting excited about it as early as 2008. At MIT, Dewan and her fellow graduate student Mark Massie first encountered the idea in 2010, and were intrigued by the reactors’ inherent safety. “We both became nuclear engineers because we’re environmentalists,” says Dewan. Besides, her classmate had grown up watching his native West Virginia being devastated by mountaintop removal mining. “So Mark wanted to design a nuclear reactor that’s good enough to shut down the coal industry.”

Then in March 2011, the dangers of the nuclear status quo were underscored yet again. A tsunami knocked out all the cooling systems and backups at Japan’s Fukushima Daiichi plant and sent its 1970s-vintage power reactors into the worst meltdown since Chernobyl. That April, Sorensen launched the first of the molten salt start-up companies, Huntsville-based Flibe Energy. His goal ever since has been to develop and commercialize a Liquid-Fluoride Thorium Reactor — pretty much the same device that was envisioned at Oak Ridge back in the 1960s.

Dewan and Massie founded Transatomic the same month. And other molten salt start-ups soon followed, each building on the basic concept with a host of different design strategies and fuel choices. When Seaborg launched in 2014, for example, Schönfeldt and his colleagues started designing a molten salt Compact Used fuel BurnEr (called CUBE) that would not only run on a combination of spent nuclear fuel and thorium, but also be really, really small by reactor standards. “The fact that you can transport it to the site on the back of a truck is a major upside,” says Schönfeldt, “especially in remote regions.”

TerraPower, meanwhile, decided in 2015 to develop a much larger molten salt device, the Molten Chloride Fast Reactor, as a complement to the company’s ongoing work on its sodium-cooled Traveling Wave Reactor. The new system retains the latter’s ability to burn the widest possible range of fuels — including not just spent nuclear fuel, but also the ordinarily non-fissile uranium-238. (Both designs take advantage of the fact that a uranium-238 nucleus hit by a neutron has a tiny, but non-zero, probability of fissioning.) But unlike in the Traveling Wave Reactor, explains the company’s chief technical officer, John Gilleland, the molten salts’ 700°C-plus operating temperature will allow it to generate the kind of heat needed for industrial processes such as petroleum cracking and plastics making. Industrial process heat currently accounts for about one-third of total energy usage within the US manufacturing sector.

This industrial heat is now produced almost entirely by burning coal, oil or natural gas, says Gilleland. So if you could replace all that with carbon-free nuclear heat, he says, “you could hit the carbon problem in a very striking way.”

Of course, none of this is going to happen tomorrow. The various molten salt companies are still refining their designs by gathering lab data on liquid fuel chemistry, and running massive computer simulations of how the melt behaves when it’s simultaneously flowing and fissioning. The first prototypes won’t be up and running until the mid-2020s at the earliest.

And not all the companies will be there. Transatomic became the molten salt movement’s first casualty in September 2018, when Dewan shut it down. Her company had simply fallen too far behind in its design work relative to competitors, she explains. So even though investors were willing to keep going, she says, “it wouldn’t feel right for us to continue taking their money when I didn’t see a viable path forward for the business side.”

Still, most of the molten salt pioneers say they see reason for cautious optimism. Since at least 2015, the US Department of Energy has been ramping up its support for advanced reactor research in general, and molten salt reactors in particular.

“We kept telling people the three big advantages of molten salt reactors — no meltdown, no proliferation, burning up nuclear waste.” Troels Schönfeldt

Meanwhile, notes Slaybaugh, licensing agencies such as the US Nuclear Regulatory Commission are gearing up with the computer simulations and evaluation tools they will need when the advanced-reactor companies start seeking approval for constructing their prototypes. “People are looking at these technologies more carefully and more seriously than they have in a long time,” she says.

Perhaps the biggest and most unpredictable barrier is the public’s ingrained fear about almost anything labeled “nuclear.” What happens if people lump in molten salt reactors with older nuclear technologies, and reject them out of hand?

Based on their experience to date, most proponents are cautiously optimistic on this front as well. In Copenhagen, Schönfeldt and his colleagues kept hammering on the why of nuclear power, which was to fight climate change, poverty and pollution. “And we kept telling people the three big advantages of molten salt reactors — no meltdown, no proliferation, burning up nuclear waste,” he says. And slowly, people were willing to listen.

“We’ve moved a long way,” says Schönfeldt. “When we started in 2014, commercial nuclear power was illegal in Denmark. In 2017, we got public funding.”

Parallax (updated)

From Wikipedia, the free encyclopedia

A simplified illustration of the parallax of an object against a distant background due to a perspective shift. When viewed from "Viewpoint A", the object appears to be in front of the blue square. When the viewpoint is changed to "Viewpoint B", the object appears to have moved in front of the red square.
 
This animation is an example of parallax. As the viewpoint moves side to side, the objects in the distance appear to move more slowly than the objects close to the camera. In this case, the blue cube in front appears to move faster than the red cube.
 
Parallax (from Ancient Greek παράλλαξις (parallaxis), meaning 'alternation') is a displacement or difference in the apparent position of an object viewed along two different lines of sight, and is measured by the angle or semi-angle of inclination between those two lines. Due to foreshortening, nearby objects show a larger parallax than farther objects when observed from different positions, so parallax can be used to determine distances. 

To measure large distances, such as the distance of a planet or a star from Earth, astronomers use the principle of parallax. Here, the term parallax is the semi-angle of inclination between two sight-lines to the star, as observed when Earth is on opposite sides of the Sun in its orbit. These distances form the lowest rung of what is called "the cosmic distance ladder", the first in a succession of methods by which astronomers determine the distances to celestial objects, serving as a basis for other distance measurements in astronomy forming the higher rungs of the ladder.

Parallax also affects optical instruments such as rifle scopes, binoculars, microscopes, and twin-lens reflex cameras that view objects from slightly different angles. Many animals, including humans, have two eyes with overlapping visual fields that use parallax to gain depth perception; this process is known as stereopsis. In computer vision the effect is used for computer stereo vision, and a device called a parallax rangefinder uses it to find the range, and in some variations also the altitude, to a target. 

A simple everyday example of parallax can be seen in the dashboard of motor vehicles that use a needle-style speedometer gauge. When viewed from directly in front, the speed may show exactly 60; but when viewed from the passenger seat the needle may appear to show a slightly different speed, due to the angle of viewing.

Visual perception

As the eyes of humans and other animals are in different positions on the head, they present different views simultaneously. This is the basis of stereopsis, the process by which the brain exploits the parallax due to the different views from the two eyes to gain depth perception and estimate distances to objects. Animals also use motion parallax, in which the animal (or just its head) moves to gain different viewpoints. For example, pigeons (whose eyes do not have overlapping fields of view and thus cannot use stereopsis) bob their heads up and down to see depth.

Motion parallax is also exploited in wiggle stereoscopy, computer graphics that provide depth cues through viewpoint-shifting animation rather than through binocular vision.

Astronomy

Parallax is an angle subtended by a line on a point. In the upper diagram, the Earth in its orbit sweeps the parallax angle subtended on the Sun. The lower diagram shows an equal angle swept by the Sun in a geostatic model. A similar diagram can be drawn for a star, except that the angle of parallax would be minuscule.
 
Parallax arises due to change in viewpoint occurring due to motion of the observer, of the observed, or of both. What is essential is relative motion. By observing parallax, measuring angles, and using geometry, one can determine distance.

Stellar parallax

Stellar parallax created by the relative motion between the Earth and a star can be seen, in the Copernican model, as arising from the orbit of the Earth around the Sun: the star only appears to move relative to more distant objects in the sky. In a geostatic model, the movement of the star would have to be taken as real with the star oscillating across the sky with respect to the background stars.

Stellar parallax is most often measured using annual parallax, defined as the difference in position of a star as seen from the Earth and Sun, i.e., the angle subtended at a star by the mean radius of the Earth's orbit around the Sun. The parsec (3.26 light-years) is defined as the distance for which the annual parallax is 1 arcsecond. Annual parallax is normally measured by observing the position of a star at different times of the year as the Earth moves through its orbit. Measurement of annual parallax was the first reliable way to determine the distances to the closest stars. The first successful measurements of stellar parallax were made by Friedrich Bessel in 1838 for the star 61 Cygni using a heliometer. Stellar parallax remains the standard for calibrating other measurement methods. Accurate calculations of distance based on stellar parallax require a measurement of the distance from the Earth to the Sun, now based on radar reflection off the surfaces of planets.

The angles involved in these calculations are very small and thus difficult to measure. The nearest star to the Sun (and thus the star with the largest parallax), Proxima Centauri, has a parallax of 0.7687 ± 0.0003 arcsec. This angle is approximately that subtended by an object 2 centimeters in diameter located 5.3 kilometers away.

Hubble Space Telescope: Spatial scanning precisely measures distances up to 10,000 light-years away (10 April 2014).
 
The fact that stellar parallax was so small that it was unobservable at the time was used as the main scientific argument against heliocentrism during the early modern age. It is clear from Euclid's geometry that the effect would be undetectable if the stars were far enough away, but for various reasons the gigantic distances involved seemed entirely implausible: it was one of Tycho's principal objections to Copernican heliocentrism that, for it to be compatible with the lack of observable stellar parallax, there would have to be an enormous and unlikely void between the orbit of Saturn (then the most distant known planet) and the eighth sphere (the fixed stars).

In 1989, the satellite Hipparcos was launched primarily for obtaining improved parallaxes and proper motions for over 100,000 nearby stars, increasing the reach of the method tenfold. Even so, Hipparcos is only able to measure parallax angles for stars up to about 1,600 light-years away, a little more than one percent of the diameter of the Milky Way Galaxy. The European Space Agency's Gaia mission, launched in December 2013, will be able to measure parallax angles to an accuracy of 10 microarcseconds, thus mapping nearby stars (and potentially planets) up to a distance of tens of thousands of light-years from Earth. In April 2014, NASA astronomers reported that the Hubble Space Telescope, by using spatial scanning, can now precisely measure distances up to 10,000 light-years away, a ten-fold improvement over earlier measurements.

Distance measurement

 
Distance measurement by parallax is a special case of the principle of triangulation, which states that one can solve for all the sides and angles in a network of triangles if, in addition to all the angles in the network, the length of at least one side has been measured. Thus, the careful measurement of the length of one baseline can fix the scale of an entire triangulation network. In parallax, the triangle is extremely long and narrow, and by measuring both its shortest side (the motion of the observer) and the small top angle (always less than 1 arcsecond, leaving the other two close to 90 degrees), the length of the long sides (in practice considered to be equal) can be determined.

Assuming the angle is small (see derivation below), the distance to an object (measured in parsecs) is the reciprocal of the parallax (measured in arcseconds). For example, the distance to Proxima Centauri is 1/0.7687 = 1.3009 parsecs (4.243 ly).
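
In code, that reciprocal relationship is a one-liner. The Python sketch below (using the standard conversion 1 parsec ≈ 3.2616 light-years) reproduces the Proxima Centauri figures quoted above.

# Distance from annual parallax, valid for small angles: d [parsecs] = 1 / p [arcseconds].
def distance_from_parallax(p_arcsec):
    d_parsec = 1.0 / p_arcsec
    d_lightyear = d_parsec * 3.2616       # 1 parsec is about 3.2616 light-years
    return d_parsec, d_lightyear

d_pc, d_ly = distance_from_parallax(0.7687)   # Proxima Centauri's parallax, in arcseconds
print(f"{d_pc:.4f} pc, {d_ly:.3f} ly")        # about 1.3009 pc, 4.243 ly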

Diurnal parallax

Diurnal parallax is a parallax that varies with rotation of the Earth or with difference of location on the Earth. The Moon and to a smaller extent the terrestrial planets or asteroids seen from different viewing positions on the Earth (at one given moment) can appear differently placed against the background of fixed stars.

Lunar parallax

Lunar parallax (often short for lunar horizontal parallax or lunar equatorial horizontal parallax) is a special case of (diurnal) parallax: the Moon, being the nearest celestial body, has by far the largest maximum parallax of any celestial body; it can exceed 1 degree.

The diagram (above) for stellar parallax can illustrate lunar parallax as well, if the diagram is taken to be scaled right down and slightly modified. Instead of 'near star', read 'Moon', and instead of taking the circle at the bottom of the diagram to represent the size of the Earth's orbit around the Sun, take it to be the size of the Earth's globe, and of a circle around the Earth's surface. Then, the lunar (horizontal) parallax amounts to the difference in angular position, relative to the background of distant stars, of the Moon as seen from two different viewing positions on the Earth: one of the viewing positions is the place from which the Moon can be seen directly overhead at a given moment (that is, viewed along the vertical line in the diagram); and the other viewing position is a place from which the Moon can be seen on the horizon at the same moment (that is, viewed along one of the diagonal lines, from an Earth-surface position corresponding roughly to one of the blue dots on the modified diagram). 

The lunar (horizontal) parallax can alternatively be defined as the angle subtended at the distance of the Moon by the radius of the Earth—equal to angle p in the diagram when scaled-down and modified as mentioned above. 

The lunar horizontal parallax at any time depends on the linear distance of the Moon from the Earth. The Earth–Moon linear distance varies continuously as the Moon follows its perturbed and approximately elliptical orbit around the Earth. The range of the variation in linear distance is from about 56 to 63.7 Earth radii, corresponding to a horizontal parallax of about a degree of arc, ranging from about 61.4' to about 54'. The Astronomical Almanac and similar publications tabulate the lunar horizontal parallax and/or the linear distance of the Moon from the Earth on a periodic (e.g., daily) basis for the convenience of astronomers (and formerly, of navigators), and the study of the way in which this coordinate varies with time forms part of lunar theory.
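
The relation between horizontal parallax and distance is simple trigonometry: the Earth's radius subtends the parallax angle as seen from the Moon, so the distance in Earth radii is 1/sin(parallax). A small Python sketch using the parallax range quoted above:

import math

# Distance to the Moon, in Earth radii, from the lunar horizontal parallax:
# the Earth's radius subtends the parallax angle at the Moon's distance.
def distance_in_earth_radii(parallax_arcmin):
    parallax_rad = math.radians(parallax_arcmin / 60.0)
    return 1.0 / math.sin(parallax_rad)

print(distance_in_earth_radii(61.4))   # about 56 Earth radii (Moon at its closest)
print(distance_in_earth_radii(54.0))   # about 63.7 Earth radii (Moon at its farthest)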

Diagram of daily lunar parallax

Parallax can also be used to determine the distance to the Moon.

One way to determine the lunar parallax from one location is by using a lunar eclipse. A full shadow of the Earth on the Moon has an apparent radius of curvature equal to the difference between the apparent radii of the Earth and the Sun as seen from the Moon. This radius can be seen to be equal to 0.75 degree, from which (with the solar apparent radius of 0.25 degree) we get an Earth apparent radius of 1 degree. This yields an Earth–Moon distance of 60.27 Earth radii or 384,399 kilometers (238,854 mi). This procedure was first used by Aristarchus of Samos and Hipparchus, and later found its way into the work of Ptolemy. The diagram at the right shows how daily lunar parallax arises in the geocentric and geostatic planetary model, in which the Earth is at the center of the planetary system and does not rotate. It also illustrates the important point that parallax need not be caused by any motion of the observer, contrary to some definitions of parallax that say it is, but may arise purely from motion of the observed. 

Another method is to take two pictures of the Moon at exactly the same time from two locations on Earth and compare the positions of the Moon relative to the stars. Using the orientation of the Earth, those two position measurements, and the distance between the two locations on the Earth, the distance to the Moon can be triangulated, as sketched below.
Example of lunar parallax: Occultation of Pleiades by the Moon
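
Under idealized assumptions (simultaneous observations and a baseline roughly perpendicular to the direction of the Moon, neither of which is guaranteed in practice), the triangulation reduces to dividing the baseline by the small angular shift. The Python sketch below uses illustrative numbers, not measurements from the text:

import math

# Idealized lunar triangulation: two observers separated by a baseline b observe the Moon
# at the same instant and measure the angular shift theta of the Moon against the background
# stars. For a baseline perpendicular to the line of sight,
# d = (b / 2) / tan(theta / 2), approximately b / theta for small angles.
def lunar_distance_km(baseline_km, shift_arcmin):
    theta = math.radians(shift_arcmin / 60.0)
    return (baseline_km / 2.0) / math.tan(theta / 2.0)

# Illustrative values only: a 6,000 km baseline and a measured shift of about 53.7 arcminutes.
print(f"{lunar_distance_km(6000, 53.7):,.0f} km")   # roughly 384,000 km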

This is the method referred to by Jules Verne in From the Earth to the Moon:
Until then, many people had no idea how one could calculate the distance separating the Moon from the Earth. The circumstance was exploited to teach them that this distance was obtained by measuring the parallax of the Moon. If the word parallax appeared to amaze them, they were told that it was the angle subtended by two straight lines running from both ends of the Earth's radius to the Moon. If they had doubts on the perfection of this method, they were immediately shown that not only did this mean distance amount to a whole two hundred thirty-four thousand three hundred and forty-seven miles (94,330 leagues), but also that the astronomers were not in error by more than seventy miles (≈ 30 leagues).

Solar parallax

After Copernicus proposed his heliocentric system, with the Earth in revolution around the Sun, it was possible to build a model of the whole Solar System without scale. To ascertain the scale, it is necessary only to measure one distance within the Solar System, e.g., the mean distance from the Earth to the Sun (now called an astronomical unit, or AU). When found by triangulation, this is referred to as the solar parallax, the difference in position of the Sun as seen from the Earth's center and a point one Earth radius away, i.e., the angle subtended at the Sun by the Earth's mean radius. Knowing the solar parallax and the mean Earth radius allows one to calculate the AU, the first, small step on the long road of establishing the size and expansion age of the visible Universe. 

A primitive way to determine the distance to the Sun in terms of the distance to the Moon was already proposed by Aristarchus of Samos in his book On the Sizes and Distances of the Sun and Moon. He noted that the Sun, Moon, and Earth form a right triangle (with the right angle at the Moon) at the moment of first or last quarter moon. He then estimated that the Moon, Earth, Sun angle was 87°. Using correct geometry but inaccurate observational data, Aristarchus concluded that the Sun was slightly less than 20 times farther away than the Moon. The true value of this angle is close to 89° 50', and the Sun is actually about 390 times farther away. He pointed out that the Moon and Sun have nearly equal apparent angular sizes and therefore their diameters must be in proportion to their distances from Earth. He thus concluded that the Sun was around 20 times larger than the Moon; this conclusion, although incorrect, follows logically from his incorrect data. It does suggest that the Sun is clearly larger than the Earth, which could be taken to support the heliocentric model.
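
Aristarchus's argument amounts to one line of trigonometry: in the right triangle he described, the Sun-to-Moon distance ratio is 1/cos of the Moon-Earth-Sun angle. A Python sketch of the calculation follows; the 89.853° figure is an illustrative modern value for that angle, not a number from the text.

import math

# Ratio of the Sun's distance to the Moon's distance, from the quarter-moon right triangle:
# ratio = 1 / cos(Moon-Earth-Sun angle).
def sun_to_moon_distance_ratio(angle_deg):
    return 1.0 / math.cos(math.radians(angle_deg))

print(sun_to_moon_distance_ratio(87.0))     # about 19: Aristarchus's "slightly less than 20 times"
print(sun_to_moon_distance_ratio(89.853))   # about 390, using an approximate modern value of the angle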

Measuring Venus transit times to determine solar parallax
 
Although Aristarchus' results were incorrect due to observational errors, they were based on correct geometric principles of parallax, and became the basis for estimates of the size of the Solar System for almost 2000 years, until the transit of Venus was correctly observed in 1761 and 1769. This method was proposed by Edmond Halley in 1716, although he did not live to see the results. The use of Venus transits was less successful than had been hoped due to the black drop effect, but the resulting estimate, 153 million kilometers, is just 2% above the currently accepted value, 149.6 million kilometers. 

Much later, the Solar System was "scaled" using the parallax of asteroids, some of which, such as Eros, pass much closer to Earth than Venus. In a favorable opposition, Eros can approach the Earth to within 22 million kilometers. Both the opposition of 1901 and that of 1930/1931 were used for this purpose, the calculations of the latter determination being completed by Astronomer Royal Sir Harold Spencer Jones.

Radar reflections, both off Venus (1958) and off asteroids like Icarus, have also been used for solar parallax determination. Today, the use of spacecraft telemetry links has solved this old problem. The currently accepted value of the solar parallax is 8.794143 arcseconds.

Dynamical or moving-cluster parallax

The open stellar cluster Hyades in Taurus extends over such a large part of the sky, 20 degrees, that the proper motions as derived from astrometry appear to converge with some precision to a perspective point north of Orion. Combining the observed apparent (angular) proper motion in seconds of arc with the also observed true (absolute) receding motion as witnessed by the Doppler redshift of the stellar spectral lines, allows estimation of the distance to the cluster (151 light-years) and its member stars in much the same way as using annual parallax.
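
A sketch of the arithmetic behind the moving-cluster method, with assumptions flagged: the tangential velocity of a cluster star is 4.74 × (proper motion in arcsec/yr) × (distance in parsecs) km/s, and for a cluster converging toward a point at angle λ from the star, the tangential and radial velocities are related by v_t = v_r·tan λ. The numbers below are illustrative values roughly appropriate for the Hyades, not figures from the text.

import math

# Moving-cluster (convergent-point) distance estimate.
# v_radial:      radial velocity from the Doppler shift, km/s
# proper_motion: observed proper motion, arcseconds per year
# lambda_deg:    angle between the star and the cluster's convergent point
# 4.74 km/s is the tangential speed of a star at 1 parsec with a proper motion of 1"/yr.
def moving_cluster_distance_pc(v_radial, proper_motion, lambda_deg):
    v_tangential = v_radial * math.tan(math.radians(lambda_deg))
    return v_tangential / (4.74 * proper_motion)

# Illustrative, Hyades-like values (assumed, not from the article):
d_pc = moving_cluster_distance_pc(v_radial=39.0, proper_motion=0.11, lambda_deg=30.0)
print(f"{d_pc:.0f} pc, about {d_pc * 3.2616:.0f} light-years")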

Dynamical parallax has sometimes also been used to determine the distance to a supernova, when the optical wave front of the outburst is seen to propagate through the surrounding dust clouds at an apparent angular velocity, while its true propagation velocity is known to be the speed of light.

Derivation

The parallax p is the angle subtended at the star by the mean radius of the Earth's orbit (1 AU), so that

tan p = 1 AU / d

where p is the parallax, 1 AU (149,600,000 km) is approximately the average distance from the Sun to Earth, and d is the distance to the star. Using small-angle approximations (valid when the angle is small compared to 1 radian), tan p ≈ p with p expressed in radians, so the parallax, measured in arcseconds, is

p ≈ (1 AU / d) × (180 × 3600 / π)

If the parallax is 1", then the distance is

d = 1 AU × (180 × 3600 / π) ≈ 206,265 AU ≈ 3.2616 light-years

This defines the parsec, a convenient unit for measuring distance using parallax. Therefore, the distance, measured in parsecs, is simply d = 1/p when the parallax p is given in arcseconds.
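
Working the small-angle arithmetic through numerically, as a quick Python check of the derivation above:

import math

AU_KM = 149_600_000                       # 1 astronomical unit, in km (as above)
ARCSEC_PER_RADIAN = 180 * 3600 / math.pi  # about 206,265

parsec_km = AU_KM * ARCSEC_PER_RADIAN     # distance at which 1 AU subtends 1 arcsecond
lightyear_km = 9.4607e12                  # kilometers in one light-year

print(f"1 pc = {parsec_km:.4e} km = {parsec_km / lightyear_km:.4f} ly")
# prints roughly 3.0857e+13 km = 3.2616 ly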

Error

Precise parallax measurements of distance have an associated error. However, this error in the measured parallax angle does not translate directly into an error for the distance, except for relatively small errors. The reason for this is that an error toward a smaller angle results in a greater error in distance than an error toward a larger angle. 

However, an approximation of the distance error can be computed from the reciprocal relationship d = 1/p:

δd = δp / p²

where d is the distance, p is the parallax, and δp is the error in the measured parallax. The approximation is far more accurate for parallax errors that are small relative to the parallax than for relatively large errors. For meaningful results in stellar astronomy, Dutch astronomer Floor van Leeuwen recommends that the parallax error be no more than 10% of the total parallax when computing this error estimate.
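
In code, the propagation from parallax error to distance error is again a one-liner; the Python sketch below applies it to the Proxima Centauri parallax and uncertainty quoted earlier (0.7687 ± 0.0003 arcsec).

# Approximate distance error from a parallax error: delta_d = delta_p / p**2
# (distance in parsecs, parallax and its error in arcseconds).
def distance_error_pc(parallax_arcsec, parallax_error_arcsec):
    return parallax_error_arcsec / parallax_arcsec ** 2

err = distance_error_pc(0.7687, 0.0003)
print(f"1.3009 +/- {err:.4f} pc")   # an uncertainty of about 0.0005 parsec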

Spatio-temporal parallax

From enhanced relativistic positioning systems, a spatio-temporal parallax generalizing the usual notion of parallax in space only has been developed. Event fields in spacetime can then be deduced directly, without intermediate models of light bending by massive bodies such as the one used in the PPN formalism, for instance.

Metrology

The correct line of sight needs to be used to avoid parallax error.
 
Measurements made by viewing the position of some marker relative to something to be measured are subject to parallax error if the marker is some distance away from the object of measurement and not viewed from the correct position. For example, if measuring the distance between two ticks on a line with a ruler marked on its top surface, the thickness of the ruler will separate its markings from the ticks. If viewed from a position not exactly perpendicular to the ruler, the apparent position will shift and the reading will be less accurate than the ruler is capable of. 
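
To put a number on the ruler example: if the markings are separated from the measured object by a thickness t and the line of sight is tilted by an angle θ from the perpendicular, the apparent shift is roughly t·tan θ. A small illustrative calculation (the values are made up):

```python
import math

def parallax_reading_error(marking_offset_mm, viewing_angle_deg):
    """Apparent shift of a scale reading when the line of sight is tilted.

    marking_offset_mm -- separation between the scale markings and the
                         object being measured (e.g. ruler thickness)
    viewing_angle_deg -- tilt of the line of sight away from perpendicular
    """
    return marking_offset_mm * math.tan(math.radians(viewing_angle_deg))

# A 3 mm thick ruler viewed 10 degrees off-perpendicular:
print(parallax_reading_error(3.0, 10.0))   # ~0.53 mm reading error
```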

A similar error occurs when reading the position of a pointer against a scale in an instrument such as an analog multimeter. To help the user avoid this problem, the scale is sometimes printed above a narrow strip of mirror, and the user's eye is positioned so that the pointer obscures its own reflection, guaranteeing that the user's line of sight is perpendicular to the mirror and therefore to the scale. The same effect alters the speed read on a car's speedometer by a driver directly in front of it versus a passenger off to the side, the values read from a graticule that is not in actual contact with the display of an oscilloscope, and so on.

Photogrammetry

Aerial picture pairs, when viewed through a stereo viewer, offer a pronounced stereo effect of landscape and buildings. High buildings appear to 'keel over' in the direction away from the centre of the photograph. Measurements of this parallax are used to deduce the height of the buildings, provided that flying height and baseline distances are known. This is a key component to the process of photogrammetry.
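
A sketch of that deduction, using one common textbook form of the parallax-difference relation, h = H·Δp / (p_b + Δp); the function name and the sample values are illustrative assumptions, not survey data:

```python
def building_height(flying_height_m, base_parallax_mm, parallax_difference_mm):
    """Object height from stereo parallax (parallax-difference relation).

    flying_height_m        -- aircraft height above the ground (H)
    base_parallax_mm       -- absolute stereoscopic parallax at the object's base (p_b)
    parallax_difference_mm -- parallax at the top minus parallax at the base (dp)

    h = H * dp / (p_b + dp)
    """
    return (flying_height_m * parallax_difference_mm
            / (base_parallax_mm + parallax_difference_mm))

# Made-up illustrative values:
print(building_height(flying_height_m=1500.0,
                      base_parallax_mm=90.0,
                      parallax_difference_mm=2.4))   # ~39 m
```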

Photography

Contax III rangefinder camera with macro photography setting. Because the viewfinder is on top of the lens and because of the close proximity of the subject, goggles are fitted in front of the rangefinder and a dedicated viewfinder is installed to compensate for parallax.
 
Failed panoramic image due to parallax, since the axis of rotation of the tripod is not the same as the focal point of the lens.
 
Parallax error can be seen when taking photos with many types of cameras, such as twin-lens reflex cameras and those with separate viewfinders (such as rangefinder cameras). In such cameras, the eye sees the subject through different optics (the viewfinder, or a second lens) than the one through which the photo is taken. As the viewfinder is often found above the lens of the camera, photos with parallax error are often slightly lower than intended, the classic example being the image of a person with his or her head cropped off. This problem is addressed in single-lens reflex cameras, in which the viewfinder sees through the same lens through which the photo is taken (with the aid of a movable mirror), thus avoiding parallax error.

Parallax is also an issue in image stitching, such as for panoramas.

Sights

Parallax affects sighting devices of ranged weapons in many ways. On sights fitted on small arms, bows, and the like, the perpendicular distance between the sight and the weapon's launch axis (e.g. the bore axis of a gun), generally referred to as "sight height", can induce significant aiming errors when shooting at close range, particularly when shooting at small targets. This parallax error is compensated for (when needed) via calculations that also take into account other variables such as bullet drop, windage, and the distance at which the target is expected to be. Sight height can be used to advantage when "sighting in" rifles for field use. A typical hunting rifle (.222 with telescopic sights) sighted in at 75 m will still be useful from 50 m to 200 m without needing further adjustment.

Optical sights

Simple animation demonstrating the effects of parallax compensation in telescopic sights, as the eye moves relative to the sight.
 
In some reticled optical instruments, such as telescopes, microscopes, or the telescopic sights ("scopes") used on small arms and theodolites, parallax can create problems with aiming when the reticle is not coincident with the focal plane of the target image. When the reticle and the target are not at the same focus, the optical distances projected through the eyepiece differ, and the user's eye will register the difference between the reticle and the target (whenever the eye position changes) as a relative displacement of one over the other. The term parallax shift refers to this resulting apparent "floating" movement of the reticle over the target image when the user moves his or her head laterally (up/down or left/right) behind the sight, i.e. an error in which the reticle does not stay aligned with the user's optical axis.

Some firearm scopes are equipped with a parallax compensation mechanism, which basically consists of a movable optical element that enables the optical system to shift the focus of the target image at varying distances into exactly the same optical plane as the reticle (or vice versa). Many low-tier telescopic sights have no parallax compensation because in practice they can still perform very acceptably without eliminating parallax shift; in that case the scope is often fixed at a designated parallax-free distance that best suits its intended usage. Typical standard factory parallax-free distances for hunting scopes are 100 yd (or 100 m), to make them suited for hunting shots that rarely exceed 300 yd/m. Some competition and military-style scopes without parallax compensation may be adjusted to be parallax free at ranges up to 300 yd/m, to make them better suited for aiming at longer ranges. Scopes for guns with shorter practical ranges, such as airguns, rimfire rifles, shotguns and muzzleloaders, will have parallax settings for shorter distances, commonly 50 yd/m for rimfire scopes and 100 yd/m for shotguns and muzzleloaders. Airgun scopes are very often found with adjustable parallax, usually in the form of an adjustable objective ("AO" for short) design, and may adjust down to as near as 3 yards (2.7 meters).

Non-magnifying reflector or "reflex" sights can theoretically be "parallax free". But since these sights use parallel collimated light, this is only true when the target is at infinity. At finite distances, eye movement perpendicular to the device will cause parallax movement of the reticle image in exact relationship to eye position within the cylindrical column of light created by the collimating optics. Firearm sights, such as some red dot sights, try to correct for this by focusing the reticle not at infinity but at some finite distance, a designed target range where the reticle will show very little movement due to parallax. Some manufacturers market reflector sight models they call "parallax free", but this refers to an optical system that compensates for off-axis spherical aberration, an optical error induced by the spherical mirror used in the sight that can cause the reticle position to diverge off the sight's optical axis with changes in eye position.

Artillery gunfire

Because of the positioning of field or naval artillery guns, each one has a slightly different perspective of the target relative to the location of the fire-control system itself. Therefore, when aiming its guns at the target, the fire-control system must compensate for parallax in order to ensure that fire from each gun converges on the target.
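
The compensation itself is plane geometry: given the target's range and bearing from the fire-control director and each gun's offset from the director, a corrected range and bearing can be computed per gun. A hypothetical sketch (the coordinate convention and numbers are illustrative, not any real fire-control procedure):

```python
import math

def gun_solution(target_range_m, target_bearing_deg,
                 gun_offset_east_m, gun_offset_north_m):
    """Recompute range and bearing to the target from a gun displaced
    from the fire-control director (bearings clockwise from north)."""
    # Target position relative to the director:
    t_east = target_range_m * math.sin(math.radians(target_bearing_deg))
    t_north = target_range_m * math.cos(math.radians(target_bearing_deg))
    # Target position relative to the displaced gun:
    d_east = t_east - gun_offset_east_m
    d_north = t_north - gun_offset_north_m
    gun_range = math.hypot(d_east, d_north)
    gun_bearing = math.degrees(math.atan2(d_east, d_north)) % 360
    return gun_range, gun_bearing

# A gun mounted 60 m south of the director, target at 10 km bearing 045 degrees:
print(gun_solution(10_000, 45.0, 0.0, -60.0))   # ~10,042 m at ~44.76 degrees
```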

Rangefinders

Parallax theory for finding naval distances
 
A coincidence rangefinder or parallax rangefinder can be used to find distance to a target.
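
In its simplest form, the geometry behind such an instrument is a thin triangle: the target is viewed from the two ends of a baseline b, and the measured convergence angle γ gives D = b / tan γ (approximately b/γ for small angles). A brief illustrative sketch (the instrument parameters are made up):

```python
import math

def rangefinder_distance(baseline_m, convergence_angle_deg):
    """Distance from a coincidence rangefinder with baseline b and
    measured convergence angle gamma: D = b / tan(gamma)."""
    return baseline_m / math.tan(math.radians(convergence_angle_deg))

# A 1 m baseline instrument measuring a 0.02-degree convergence angle:
print(rangefinder_distance(1.0, 0.02))   # ~2865 m
```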

As a metaphor

In a philosophic/geometric sense, parallax is an apparent change in the direction of an object, caused by a change in observational position that provides a new line of sight: the apparent displacement, or difference of position, of an object as seen from two different stations, or points of view. In contemporary writing, parallax can also refer to the same story, or a similar story from approximately the same timeline, told in one book from a different perspective in another book. The word and concept feature prominently in James Joyce's 1922 novel Ulysses. Orson Scott Card also used the term when referring to Ender's Shadow as compared to Ender's Game.

The metaphor is invoked by Slovenian philosopher Slavoj Žižek in his work The Parallax View, borrowing the concept of "parallax view" from the Japanese philosopher and literary critic Kojin Karatani. Žižek notes,
The philosophical twist to be added (to parallax), of course, is that the observed distance is not simply subjective, since the same object that exists 'out there' is seen from two different stances, or points of view. It is rather that, as Hegel would have put it, subject and object are inherently mediated so that an 'epistemological' shift in the subject's point of view always reflects an ontological shift in the object itself. Or—to put it in Lacanese—the subject's gaze is always-already inscribed into the perceived object itself, in the guise of its 'blind spot,' that which is 'in the object more than object itself', the point from which the object itself returns the gaze. Sure the picture is in my eye, but I am also in the picture.
— Slavoj Žižek, The Parallax View

Classical radicalism

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Cla...