The Mushroom Cloud and the X-Ray Machine

70 years after Hiroshima, scientists still don’t truly understand the health risks of radiation.


At 6:45 a.m. on March 1, 1954, the earth rumbled beneath 10-year-old Jalel John’s feet as she stood on Ailuk Atoll in the Marshall Islands. Above her, half the sky turned strange colors. She remembers, in particular, the reds—the uncanny shades of red.

Within six minutes, a mushroom cloud reached 130,000 feet overhead, pulling with it the pulverized coral of the islands. Left behind was a crater that measured more than a mile wide and 250 feet deep, vast enough to be visible from space. Some 350 miles away from the blast, John experienced the largest thermonuclear explosion that the U.S. military would ever detonate, a test known as Castle Bravo. (It reached a yield of 15 megatons; in layman’s terms, that’s 1,000 times more powerful than the bomb dropped over Hiroshima.)

Then came the fallout.

At around noon, a white, powdery substance began to drift down from the sky—first onto Rongelap Atoll, some 100 miles east of the explosion, and then onto Utirik Atoll, 300 miles away from the blast. On Ailuk, where John lived, a fine fog filled the air, finally settling on the earth and the atoll’s enclosed lagoon.

The following day, U.S. military planes flew over these islands, measuring radiation levels in the atmosphere. In the evening, an Air Force seaplane landed on Rongelap. The two men who got out of the aircraft were there for no more than 20 minutes and didn’t speak to anyone. They recorded high levels of radiation: The islanders’ total dose was estimated to be between 110 roentgens—enough to induce vomiting and cause muscle aches—and 340 roentgens, which could kill a person. By the next morning, the USS Philip, a slim, gray destroyer, had arrived to evacuate all 65 people on Rongelap. Shortly after, the U.S. Navy moved 154 people off Utirik.

But John and some 400 other residents remained on Ailuk. The total dose of radiation they had received, estimated at a maximum of 20 roentgens, “would not be a medical problem,” an Air Force lieutenant colonel reported to command that week.

In 1994, 40 years after Bravo, Holly Barker, an advisor to the Marshall Islands’ ambassador to the United States, was assigned to document the stories of so-called “unexposed” communities, including those on Ailuk. Scores of women, she says, shared personal stories of stillbirths and malformed “grape” babies. Jalel John, by then about 50 years old, was one of them: Two of her children had died, and one of them “was born defective,” she told Barker: “It didn’t look like a human. It looked just like the inside of a giant clam.”

John invited Barker into her home, where she insisted that the young advisor take a picture of her granddaughter—a moment that sticks with Barker, even years later. A young girl wearing a colorful patterned dress stood in the middle of a rough, empty floor. The girl’s big, dark eyes stared into the camera. Her right arm reached to her kneecap and rounded into a fleshy nub at her wrist, while her left arm was a stump. Her legs, covered mostly by the dress, were uneven, and one twisted outward. What John was trying to convey, Barker remembers, was, “This is it: I’m exposed, and this is what we’re dealing with.” But such visceral images had not altered the U.S. government’s official position that the only people affected by radiation were those who were originally evacuated.

In the 1950s, the U.S. government may have had the best of intentions when it told the residents of Ailuk that they were safe, but technically speaking, authorities weren’t in a position to offer those assurances. What U.S. government scientists said at the time was that below 25 roentgens, they could not see any effects on a person’s body. But they allowed for the possibility that, over time, small amounts of radiation exposure might cause genetic damage. In other words, the most reliable science of the era could not measure the effects of the relatively low levels of radiation that reached Ailuk.

Today, despite the 2,053 nuclear weapons tested around the world during the Cold War, the more than 430 nuclear power plants currently operating in 31 countries, and the skyrocketing use of radiation in medicine—annually, there are 20 million nuclear-medicine procedures in the United States alone—scientists are still uncertain about those risks. The estimated total levels of radiation that reached Ailuk were ultimately determined to be less than 10 roentgens. By today’s safety standards, such levels would fall below even the “low dose” category, which is anything under 100 millisieverts (mSv)—the metric measure now used—or roughly 10 roentgens.
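The rough equivalence in the paragraph above—100 mSv is about 10 roentgens, or roughly 10 mSv per roentgen—can be sketched as a simple conversion. This is an illustration only; the constant below comes from the article’s own approximation, not from a dosimetry standard:

```python
# Illustrative conversion using the rough equivalence cited in the text:
# 100 millisieverts (mSv) ~ 10 roentgens (R), i.e. about 10 mSv per roentgen.
MSV_PER_ROENTGEN = 10.0  # approximate; taken from the article's own figures

def roentgens_to_msv(roentgens: float) -> float:
    """Convert an exposure in roentgens to an approximate dose in mSv."""
    return roentgens * MSV_PER_ROENTGEN

# Ailuk's estimated exposure (under 10 R) sits right at the ~100 mSv
# boundary that today's standards treat as the top of the "low dose" range.
print(roentgens_to_msv(10))  # 100.0
```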

Over the past 17 years, the U.S. Energy Department has invested in more than 240 projects, at a cost of over $130 million, to discover the effects of low-dose radiation on humans and the environment, to no avail. This January, the House of Representatives passed a bill calling for a new road map for low-dose research to find a science-backed reason to end what are—in the words of House Science Committee Chairman Lamar Smith, a Texas Republican—“overly restrictive regulations” on nuclear industries. Although the bill appears benign on its face, calling for coordinated efforts by scientists to finally get to the bottom of low-dose exposure risks, its goal is to discredit the so-called “linear no-threshold” (LNT) model, which has formed the basis for radiation safety policy for decades. This model assumes that radiation at any dose is harmful—an approach used by regulatory bodies, both in the United States and internationally. While most scientists agree that the LNT model offers a reasonably conservative guide for establishing standards, they know it’s based on an estimate—and they hope that, eventually, studies will pinpoint the exact effects of radiation at low doses.

Clarifying uncertainty over a long period of time is normal and necessary in science, where consensus can come at a glacial pace, but in the hands of U.S. policymakers, scientific examination can quickly get amped up and politicized. The legislative calls for more research could be discounted as political minutiae, but not having the answer to the LNT puzzle could already be having tangible—even deadly—consequences.

Those who, like Lamar Smith, oppose the LNT model believe there is a threshold of exposure below which harm is either nonexistent or of no consequence. But even as it stands, America’s occupational radiation dose limits are more than twice those of international standards and are notably high among nuclear countries’ safety regulations. To make matters worse, government officials have avoided setting strict standards for resettlement and compensation plans for victims of a nuclear attack or accident. Stunningly, children’s greater sensitivity to radiation is not even accounted for in nuclear safety regulations.

To put it simply: The new bill places scientific inquiry in the service of a deregulation agenda that could make people less safe—not more.

Before 1945, when the United States dropped Little Boy over Hiroshima and Fat Man over Nagasaki, no one knew much about the dangers of radiation. Today, those nuclear bombs aren’t just the only ones ever used in war; they’re also the sources for much of what is known about how radiation affects the human body. In 1950, scientists began the Life Span Study, an epidemiological examination, still ongoing, of the tens of thousands of people who were within about a six-mile radius of the points directly below the detonations. The project tracks radiation-exposure outcomes over a person’s lifetime. The results haven’t been good: The greater the dose of radiation, the greater the risk of death, period.

There are limits, however, to what scientists can learn from this one group of people. The Hiroshima and Nagasaki populations were exposed to an external blast of radiation over a relatively short amount of time; the results in that case cannot be applied to other situations, such as people eating radioactive food or working for decades in a nuclear weapons plant. But after a couple of decades, the study found that the people who had been exposed to lower doses—and who had not shown signs of health effects before—started developing cancers that were once connected only to high exposures, such as thyroid, breast, and lung cancers.

In 1979, scientists working on the National Academy of Sciences’ third BEIR (Biological Effects of Ionizing Radiation) report were divided sharply over how to estimate these low-dose cancer risks. Some scientists thought there was still a “safe” threshold of exposure; others asserted that the risks of low-dose radiation were exponentially smaller than those of higher doses. But BEIR Committee Chairman Edward Radford argued heatedly that, proportionately, the risks were the same at high and low doses. He said that the best solution was to extrapolate, all the way down to zero, the linear relationship between dose and effect observed at higher doses. Although the report ultimately presented all sides of the debate, by the next BEIR report, in 1990, Radford’s view had won out. The LNT model has remained in the report ever since and has guided decisions not only in the United States, but around the world.
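Radford’s proposal can be made concrete with a small sketch. The numbers below are invented for illustration—the point is only the shape of the two competing models: the linear no-threshold line extends the high-dose slope straight through zero, while a threshold model assigns no excess risk below some cutoff.

```python
# Illustrative shapes of the two dose-response models debated in the BEIR
# reports. The slope and cutoff here are hypothetical, not measured values.

RISK_PER_MSV = 5e-5  # hypothetical excess cancer risk per mSv, fit at high doses

def lnt_risk(dose_msv: float) -> float:
    """Linear no-threshold: risk stays proportional to dose all the way down."""
    return RISK_PER_MSV * dose_msv

def threshold_risk(dose_msv: float, cutoff_msv: float = 100.0) -> float:
    """Threshold model: no excess risk below the cutoff."""
    return RISK_PER_MSV * dose_msv if dose_msv >= cutoff_msv else 0.0

# The models agree at high doses but diverge completely at low ones.
print(lnt_risk(1000), threshold_risk(1000))  # identical
print(lnt_risk(20), threshold_risk(20))      # nonzero vs 0.0
```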

About that same time, in the 1990s, scientists were finding new, large populations to examine, groups that could fill in some of the gaps left by the Life Span Study. As the Cold War ended, scientists in the United States and Western Europe gained access to the Techa River cohort—29,800 people who lived along a small river that passed by the Soviet Union’s Mayak nuclear weapons complex. After the plant began producing plutonium, its waste was dumped into the river from 1949 to 1956, where it seeped downstream, went into the water table, and contaminated the population’s drinking and bathing water, as well as its food crops.

In contrast to Japan’s bomb survivors, this group was exposed to radiation in dribs and drabs—exactly the sort of exposure that had been assumed, previously, to pose little threat. But over the past decade, an international team, connecting scientists in Russia and the United States, has found that these low doses, even if they were spread out over years, did increase the risk of developing cancer. The number of cancers was small: In one study, the researchers found that 3 percent, or 55 out of 1,836, of solid cancers that appeared in those tens of thousands of people near the Techa River were attributable to radiation. But it is significant that they showed up in the results at all—and that the relationship between dose and effect looked to be linear.

In 1988, another study, by the France-based International Agency for Research on Cancer (IARC), began to pool data about nuclear workers, ultimately almost 600,000 from 15 countries. It was hoped that this group, one of the biggest samples ever studied, would reflect more clearly the true effects of low doses. And, indeed, the IARC’s estimates of the workers’ cancer risk, first published in 2005, not only showed that chronic, low doses posed threats, but also that the risks were notably higher than those predicted using the LNT model. The problem was that the results were influenced heavily by one data set from Canada: Without it, the increased risks that the study showed were no longer statistically distinguishable from zero. Almost immediately, the study was criticized, most vocally by scientists who’d worked in the nuclear industry. In 2013, the Canadian data set was thrown out.

That certainly doesn’t look good. But in a study like the IARC’s, any dramatic outlier demands further investigation—that’s part of good science. (And independent academics are working to salvage the Canadian data set, so it may still be used in the future.) The real problem, some scientists say, is not that this one data set was thrown out. It’s that, with doses and risks this small, scientists need to study more than hundreds of thousands of people, as in the IARC study. They need on the order of at least 1 million to nail down precise results. (That’s just how statistics work: Real-world data don’t come perfectly fit to a model. If only one out of 1,000 exposed people is expected to develop cancer attributable to radiation, small discrepancies can have a meaningful impact on a best-fit curve.)
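The need for a million-person cohort can be sketched with back-of-the-envelope statistics (the numbers here are illustrative, not taken from any study): if roughly 40 percent of people develop cancer from all causes, and radiation adds only about one case per 1,000, that excess has to be detected above the random fluctuation in the large baseline—and the fluctuation shrinks only with the square root of the cohort’s size.

```python
# Back-of-the-envelope estimate of the cohort size needed to detect a small
# excess cancer rate above a noisy baseline. All inputs are illustrative.

def cohort_size_needed(baseline: float, excess: float, sigmas: float = 2.0) -> float:
    """People required before `excess` exceeds `sigmas` standard errors of
    the observed baseline rate (simple binomial approximation)."""
    variance = baseline * (1 - baseline)
    return sigmas**2 * variance / excess**2

# ~40% lifetime cancer incidence, plus 1 extra case per 1,000 exposed people:
print(round(cohort_size_needed(baseline=0.40, excess=0.001)))  # 960000
```

On these rough assumptions, detecting the excess at two standard errors takes nearly a million people—the same order of magnitude the scientists cite.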

With that in mind, John Boice, head of the U.S. National Council on Radiation Protection and Measurements—and an expert who’s not necessarily convinced that the LNT model is the best representation of risk for very low doses—is cobbling together the largest radiation cohort ever: more than 1 million Americans exposed to radiation while working at nuclear plants, with the military, or in other radiological industries. He has been planning this project since he first pushed the Nuclear Regulatory Commission (NRC) to start a registry of nuclear workers nearly three decades ago. It could take another seven years—plus about $25 million on top of the $10 million or so he has already received from the Energy Department, the National Cancer Institute, the NRC, and other funders—before he is done. “When we finish this investigation,” he says, “we’ll be able to address directly the effects of low-dose rate radiation.”

While Boice was busy jump-starting his study, lab researchers, enabled by new technology in the late 1990s, were making headway on understanding how low-dose radiation affects human cells and animals. By zapping a single cell with a microbeam of radiation, the lab scientists were able to look at radiation’s effects on cells at lower doses than ever before. And what they found surprised them.

“It was always assumed—I always assumed—that what happened when you went to lower doses was that less and less cells got damaged,” says David Brenner, the director of Columbia University’s Center for Radiological Research. “So there was less and less probability that any cell would produce cancer.” But when researchers hit just one cell with radiation, not only that cell, but those near it, were also damaged. This became known as the bystander effect.

“Because cells talk to each other, they send out signals,” Brenner explains. “We don’t understand how those mechanisms work, but there’s no doubt whatsoever that a cell that’s unaffected by radiation can have DNA damage in it if it’s near to a cell that is affected by radiation.”

This idea suggests that the LNT model underestimates risks. If, say, twice as many cells are damaged by a hit of radiation as assumed, it’s a fair guess that the health risk would be twice as high. Certainly, this discovery eventually meant that, as Brenner puts it, “all bets are off” on the accuracy of the LNT model at low doses.

Indeed, by the late 1990s there was enough evidence calling into question the LNT model that the cracks in its foundation interested the Energy Department, which devoted resources to wiggling those cracks even wider. Starting in 1998, the department’s low-dose lab studies involved irradiating single layers of cells, tissue-like cell groups, and mice, with doses under 100 mSv. The results showed a plethora of “non-targeted effects,” where, as in the bystander effect, cells other than the one that was irradiated responded to the dose. But if the epidemiological studies support the idea of extrapolating from high to low doses, this body of work shows that the relationship between high- and low-dose risks is probably more complicated than that. And it’s still unclear what, exactly, any of this means for human health.

Some scientists, including those most closely associated with the Energy Department’s program, are focused on lab studies that show that some of these cell-level responses are protective, rather than damaging—that is, the neighboring cells are merely gearing up to fix minor damage. For these reasons, Tony Brooks, who worked as the program’s chief scientist for years, is matter-of-fact: “I personally don’t think that low doses are important at all in cancer-risk assessment.” The program has also funded scientists who believe in radiation hormesis—the idea that low doses of radiation might actually be beneficial.

Some of Brooks’s colleagues will say, politely, that these conclusions about the LNT model’s overestimation of risk are going much too far. Their doubts about hormesis are even more profound. To conclusively prove radiation risks, these two lines of research—the epidemiology and the lab work—will likely need to come together. This means that the lab models will have to show what the complex relationship of dose and effect does indeed look like, if it doesn’t look like a straight line, and this relationship will have to match with the real-world data from human populations.

This situation isn’t so unlike that in Europe, where the LNT model also has caused quite a kerfuffle among scientists. Most notably, the French Academy of Sciences’ position, since 2005, has been that the LNT model may overestimate risks (though that hasn’t changed France’s safety policies). But, compared with the United States, there’s a stronger inclination across the Atlantic toward the view that the risks might actually be higher than previously assumed. In 2009, an association of government agencies and research institutes in Europe, called MELODI, started a decades-long program that aims to nail down, once and for all, the true impacts of low-dose radiation, with the aim of assessing and improving Europe’s current system of radiation protection. This is slow-going work, and rigor and patience may yet prevail over politics.

Meanwhile, in Washington, the House bill on low-dose research awaits Senate approval. According to its lead sponsor, Rep. Randy Hultgren, a Republican from Illinois, the goal of the act isn’t to challenge that a threshold exists, but to “ensure that the Department of Energy’s Office of Science prioritizes the research necessary to understand what that level actually is.” He points to medical innovations, such as X-rays, as things that have benefited the public good while also relying on low-dose radiation to do so. “It is essential we have the science and facts straight before taking any potentially burdensome regulatory actions that could hamper future innovation,” he said in a January statement.

Edwin Lyman, a senior researcher at the Union of Concerned Scientists who specializes in nuclear proliferation, isn’t so optimistic things will change for the United States in the same way they could for Europe. At the NRC and Energy Department, he charges, institutional bias “motivates the grant selection of the people who are directing the grants, so they’re self-selecting a population of studies that support their own view.”

Gregory Jaczko, who chaired the NRC until 2012, doesn’t discount that “there is an industry-influence piece” to this particular scientific debate. In short, changes cost money.

Those who oppose the LNT model argue that low, allowable dose levels keep the business of nuclear cleanup thriving. Cleaning up a site to 0.15 mSv per year, the Environmental Protection Agency’s standard, rather than 0.25 mSv, the NRC’s, can cost millions of extra dollars. And when considering decommissioning nuclear plants, the NRC has been more concerned with that cost differential than with the differential in human health. The Energy Department already spends $5 billion each year on environmental cleanup for its atomic defense work. (Almost half of that goes to just two nuclear production sites now being decommissioned, Hanford and Savannah River.) As an example, cleaning up New York City to that 0.15 mSv standard—after a hypothetical nuclear event, that is—could cost approximately $4 trillion; to clean it up to 50 mSv would cost less than $1 trillion.

Over the past 15 years, the NRC has discussed updating the United States’ occupational dose limit, which governs how much radiation people who work in industries like nuclear power and nuclear medicine can receive each year, by lowering it from 50 mSv to 20 mSv per year, the limit first recommended by the International Commission on Radiological Protection (ICRP) in 1991. Today, the NRC’s workplace-safety regulations are less stringent than other countries’—and they are still based on recommendations that the ICRP published 38 years ago, in 1977.

In 2010, the NRC staff arranged three stakeholder meetings to discuss potential changes to nuclear safety standards, including lowering the occupational dose limit. According to transcripts of those meetings, the industry representatives said almost universally: Don’t.

They had plenty of reasons. For one, the change wasn’t necessary. Because they work to keep exposures—in the jargon of nuclear regulation—“as low as reasonably achievable,” some industries, such as the nuclear power sector, were already keeping their workers well below the 20 mSv-per-year standard that many European countries use. (One radiation-protection manager, Willie Harris, estimated that “roughly … about 83 individuals” across the entire industry were receiving a greater dose.) A representative from Oak Ridge National Laboratory argued that a lower limit would force contractors to step up expensive monitoring or that they might have to switch from urine samples to fecal samples to measure internal doses. Higher costs came up frequently. A representative from a radiopharmaceutical company said that stricter limits could cause whole medical-scanning industries to shut down. Most of their exposed workers were receiving doses higher than 20 mSv per year, and if they had to hire more people in order to reduce each worker’s exposure, the economics of the whole business would stop making sense.

But one of the main ways that the industry representatives, particularly those from the medical sector, justified their position was the uncertainty surrounding low-dose effects. By this flawed logic, what’s not proved to be dangerous should be assumed to be safe. The public, however, tends to take a more risk-averse position, thinking that what’s not proved safe should be assumed dangerous.

“I’d have people come in and meet with me and say, ‘We think the power plant is making our kids sick,’” says Jaczko, referring to the three years he headed the NRC. When he’d tell them that the commission hadn’t seen any worrisome health problems among workers in their area, the parents would always have a similar counterargument: Children may be more susceptible to radiation risks. Currently, the commission calculates doses to the public by modeling, primarily, how radiation affects an adult, not a child. When it updates its safety rules, according to a 2014 NRC report, the commission plans to include creating “a more realistic representation” of the public.

Although scientists know children are more vulnerable to radiation, they’re unsure exactly how much more. In 2013, the United Nations’ scientific committee on radiation effects said that children could be up to three times more sensitive “for some health effects but certainly not for all.” For a parent, however, doubling even a small risk to a child may well be unacceptable. And the health effects to which kids are clearly more susceptible are not to be understated: leukemia, as well as thyroid, skin, breast, and brain cancers.

The 2011 nuclear accident that unfolded in Fukushima, Japan, illustrates how these dangers play out in real life. When a World Health Organization panel assessed the health risks to the people exposed to radiation during the accident, it found that, overall, the increased incidence of cancer was likely to “remain below detectable levels.” But the panel also found that infants were at the highest risk. (While they were exposed to external and internal radiation like everyone else, they also would have ingested their mothers’ breast milk, where radiation concentrates.)

In the worst-affected areas, where doses were estimated to be between 12 and 25 mSv in the first year after the accident, the lifetime risk for leukemia has increased 7 percent for male infants, according to the panel, while female infants have a 4 percent greater risk of all solid cancers and a 6 percent greater risk of breast cancer. Females exposed as infants, the panel also found, have a 70 percent greater lifetime risk of developing thyroid cancer. Recently, these statistics have begun to play out in reality. Researchers at the Fukushima Medical University have detected increased rates of thyroid cancer in children. This could be an artifact of stepped-up monitoring, as regular thyroid cancer screenings hadn’t been part of normal pediatric visits. But anti-nuclear activist Helen Caldicott says that argument, made by Japanese medical officials, is a thin excuse. Authorities, she says, “don’t want to admit Fukushima could cause any disease at all.”

In the aftermath of Japan’s nuclear accident, the country’s general public is being pushed to take on more risk of chronic exposures. Last year, some of the first evacuees from the area around Fukushima returned home to contaminated areas that have been cleaned up, but not entirely. Members of the public can now legally receive higher radiation doses than they could before the disaster: 20 mSv per year—the same as most nuclear workers—compared with the previous 1 mSv limit.

And while the difference between 1 and 20 mSv might not matter to a 95-year-old whose priority is simply returning home, for someone younger, with an infant, 20 mSv is a different issue. Where to set standards, Jaczko points out, is not science—it’s a societal, moral, and ethical debate. “It’s easy to say, one company’s $10 million bill is not OK compared to my one-in-a-million chance [of getting cancer],” he says. “But if you did get cancer, you’d feel pretty bad—if you could prove it and show it. And we can’t.”

When U.S. policymakers and their advisors have looked at resettlement in the wake of nuclear-disaster scenarios—most recently with a February report from the National Council on Radiation Protection and Measurements (NCRP) on recovery from major nuclear or radiological incidents—they have favored working with communities to determine “acceptable risks” rather than setting down a bright-line rule. In the report, says the NCRP’s Boice, “we don’t say ‘mSv.’ We say, ‘You have stakeholder involvement, and we say you get as low as reasonably achievable.’”

If that sounds Orwellian, that’s because it is.

In 1968, the Atomic Energy Commission (AEC)—which oversaw nuclear safety until 1975, when the agency was dissolved and its responsibilities assigned to today’s NRC and, ultimately, to the Energy Department—determined that the dose to which any returning Marshallese would be exposed would be no more than 20 mSv over five years. And so the residents of Bikini Atoll, who had been relocated because of the nuclear tests long before the people of Rongelap and Utirik, were allowed to return home. Robert Conard, who ran the AEC’s medical program in the Marshall Islands for two decades, wrote in 1992 that the doses there were “so low that medical surveillance was not considered necessary.” While scientists knew that it was still possible that low levels of radiation could concentrate in the food chain, they did not communicate that clearly to the resettled Bikinians. “We drank coconuts and ate pandanus [a tropical fruit] all the time,” one elder later reported in 1989 to American researcher Jack Niedenthal. “We were always under the impression that everything was safe and that we could go about our everyday business and not worry.… Then the Americans started changing the rules on us.”

Out of caution, the AEC annually screened the resettled population and measured environmental radiation levels. By the late 1970s, those tests showed something alarming: The Bikinians’ body concentrations of radiation were heading toward the maximum allowed levels. They were evacuated again.

But internal exposures weren’t the only surprises in the Marshall Islands. On Utirik, islanders had been told they shouldn’t expect any major medical problems; however, when doctors conducted three-year examinations, they started finding lumps at the bases of patients’ necks, around the area of the thyroid. By 1987, 19 of the some 160 people originally exposed to radiation on Utirik had had thyroid tumors removed, 15 of them nonmalignant and four of them cancerous.

While the initial exposures at Utirik (later determined to be around 140 mSv) were considered low in 1954, now they’re clearly over the 100 mSv level at which epidemiologists can estimate radiation’s health effects with certainty. Exposures in places like Ailuk and the rest of the Marshall Islands remain controversial.

In 1986, the United States and the Marshall Islands entered into a compact of free association in which the Pacific nation granted the United States continued access to parts of its territory in exchange for military protection and other services. The United States agreed to pay $150 million in compensation for nuclear testing and set up a health program for survivors. But in 2000, the Marshallese government asked Congress for another $3 billion, to be used mainly for continued health-care services and to cover the outstanding $2.3 billion in awards the Islands’ Nuclear Claims Tribunal had made.

The United States won’t pay the bill. The political fight over this compensation—as well as whether Washington should award it—centers, unsurprisingly, on the low doses received outside Utirik, Rongelap, Bikini, and Enewetak, the four atolls that the United States recognizes as having been exposed to Bravo’s radiation. The State Department wrote in 2004, for instance, that “U.S. radiation-related compensation programs require proof of a minimal level of exposure” and that “[t]here is no similar basis,” in the Marshall Islands, for “recognizing the claims … of individuals located south of ten degrees north latitude.”

Ailuk lies just north of that line, right on the cusp of where science can accurately quantify the risks. Today, when evacuation zones are defined by a dose range, such lines are pretty arbitrary, says Brenner, the Columbia University radiobiologist. “It could easily be a factor of two less, a factor of two more, and that would radically change what the evacuation zone would be,” he says. “You see people’s lives being turned upside down on a permanent basis for something that we simply don’t know enough about. Ultimately, we have to make an arbitrary policy decision.”

This is what has kept people like Jalel John in limbo, certain that they were damaged by radiation, but told over and over again that they were not. It’s only when policymakers stop working the margins—using uncertainty to advance their own agendas—that the public will be closer to understanding even the most mundane, but nowhere near inconsequential, implications of the nuclear age.

Correction, March 31, 2015: The USS Philip was a destroyer. An earlier version of this article mistakenly said it was a battleship.

Sarah Laskow (@slaskow) is a journalist based in New York.
