December 2, 2024

World War II and Cancer: The A-Bomb

I was recently on a small committee assembled by NASA to discuss the potential risks to astronauts undertaking long-term missions in outer space, such as a two-year mission to Mars, in particular the carcinogenic risk of cosmic radiation. Even after 75 years, the best data available to our committee on the subject came from the Japanese A-bomb studies.

It is apparent from the scenarios we have discussed regarding World War II and cancer that the war created many unique circumstances that affected the incidence of cancer in one way or another. But far and away the most profound of these were the events of August 1945 in Hiroshima, population 330,000, and Nagasaki, population 280,000, when President Truman authorized the use of the atomic bomb on large population centers in order to bring the war to a swift close.

Certainly, it was understood and anticipated, I dare say desired, that there would be mass annihilation from the immediate effects of the bombs. Aside from the explosions themselves, there was also awareness of the acute effects of radiation burns as well as of radiation poisoning, e.g., bone marrow aplasia. And it was even known that exposure to radiation could cause malignancy: Marie Curie and her daughter Irène both died of diseases attributed to their radiation exposure, aplastic anemia and leukemia, respectively.

What was not yet available, however, was a full and accurate accounting of which malignancies were radiosensitive, i.e., which cancers ionizing radiation could induce. More important still, there was no good assessment of the accompanying dosimetry; in other words, how much or how little exposure was necessary to produce leukemia or other malignancies. As the Manhattan Project matured and the nuclear physicists and planners became aware of both the military and the potential future civilian applications of nuclear power, these questions took on increasing import.

The result was that, alongside the teams assembled to evaluate the damage caused by the bombs (the military planners were obsessed, of course, with what the future held for nuclear weapons), a team was added to determine the exposures of those who survived the blasts so that their long-term outcomes could be followed. Of course, many who survived the initial blasts died within days or weeks from various forms of radiation sickness.

In 1948, the Atomic Bomb Casualty Commission was established by the National Academy of Sciences to collect these data. It conducted extensive interview surveys in the early 1950s; an extraordinary effort was made to gather information on a large number of Japanese at both blast sites: where each person was located relative to the epicenter of the blast, what they were standing behind, what they were wearing, and so on. The goal was to estimate, for each individual, the precise dose to which they were exposed and which parts of the body were especially exposed. Based on these surveys, radiation doses were calculated for each survivor.

In 1975, these research efforts were brought under the jurisdiction of the Radiation Effects Research Foundation (RERF), a joint U.S.-Japan effort. This ended up being one of the greatest epidemiologic studies ever conducted; much of what we know to this day about radiation dosimetry and cancer stems from the RERF analyses and publications. It is estimated that there were 284,000 survivors of the bombs. Of these, about 120,000 were included in the study cohort and followed longitudinally, including about 54,000 who were within 2.5 kilometers of the epicenters of the explosions.

The study is still ongoing, as many of these individuals are still alive.

In terms of specific findings, leukemia was observed to increase as early as 1949 or 1950, significantly earlier than the solid tumors. Thus, one may conclude that leukemias have a shorter latency period than solid tumors following exposure. (The latency period is the time required between the onset of exposure to a carcinogen and the diagnosis of the cancer.) While the radiation had an effect at all ages, children were the most sensitive to it. And increased risk was still apparent even 50 years later.

For solid tumors, excess risks were observed for a substantial number of tumor types, including breast, colorectal, esophagus, lung, nonmelanoma skin, oral cavity, urinary bladder, brain and thyroid. It is worth mentioning that later studies of other sources of radiation exposure, such as treatment-related or occupational exposure, have confirmed most of these associations. What is most interesting in the RERF studies, of course, is the dosimetry and the latency. For the most part, a decade was required before the increase in cancer incidence appeared. (Breast cancer rates began their rise in 1955.) Again, younger age at exposure was associated with greater risk. Radiation had no effect on breast cancer risk in women who had previously given birth, a well-known and replicated finding.

Interestingly, a cohort of the survivors' children was also established, and their cancer experience has been analyzed as well. Surprisingly, they have shown no obvious excess risk of leukemia or other cancers.

Alfred I. Neugut, MD, PhD, is a medical oncologist and cancer epidemiologist at Columbia University Irving Medical Center/New York Presbyterian and Mailman School of Public Health in New York.


This article is for educational purposes only and is not intended to be a substitute for professional medical advice, diagnosis, or treatment, and does not constitute medical or other professional advice. Always seek the advice of your qualified health provider with any questions you may have regarding a medical condition or treatment.
