A Retrospective: The Costs of U.S. Nuclear Weapons

NTI (1 October 2008)

Does it matter—in military, political, or economic terms—how much the United States has spent, and continues to spend, to develop and sustain its nuclear arsenal? Many observers would say no. The Cold War is long over, they argue; the United States won without having to use its nuclear weapons,
so whatever the cost was, it was “worth it.” But for those interested in accountability and reexamining history in light of new evidence, what the United States spent on nuclear weapons along with the justifications for that spending can shed light on the pace and scale of the U.S. effort and offer important lessons for the United States and for other countries that have or seek to have nuclear weapons. This issue brief, based on the 1998 book Atomic Audit: The Costs and Consequences of U.S. Nuclear Weapons Since 1940, examines how and why key decisions were made, what factors influenced those decisions, and whether alternatives were considered.[1] In so doing, it helps explain the process by which an arsenal consisting of just two primitive weapons in 1945 eventually grew to more than 32,000 highly sophisticated ones, what this process cost, and how the costs and consequences of the program were understood by policymakers at the time.

What Did the United States Spend?

From 1940 through 1996, the United States spent a minimum of $5.5 trillion on its nuclear weapons program.[2] The lack of data for some programs and the difficulty of segregating costs for programs that had both nuclear and conventional roles mean that in all likelihood the actual figure is higher. This figure does not include $320 billion in estimated future-year costs for storing and disposing of more than five decades’ worth of accumulated toxic and radioactive wastes, or $20 billion for dismantling nuclear weapons systems and disposing of surplus nuclear materials. When those amounts are factored in, the total incurred costs of the U.S. nuclear weapons program exceed $5.8 trillion.[3]
Of the $5.8 trillion, just 7 percent ($409 billion) was spent on developing, testing, and building the actual bombs and warheads. Making those weapons usable by deploying them aboard aircraft, missiles, submarines, and a variety of other delivery systems consumed 56 percent of the total ($3.2 trillion). Another $831 billion (14 percent) was spent on command, control, communications, and intelligence systems dedicated to nuclear weapons. The United States also spent $937 billion (16 percent) on various means of defending against nuclear attack, principally air defense, missile defense, antisubmarine warfare, and civil defense.
The amount spent through 1996—$5.5 trillion—was 29 percent of all military spending from 1940 through 1996 ($18.7 trillion). It is significantly larger than any previous official or unofficial estimate of nuclear weapons expenditures, and it exceeds every other category of government spending over the period except non-nuclear national defense ($13.2 trillion) and Social Security ($7.9 trillion). It amounted to almost 11 percent of all government expenditures through 1996 ($51.6 trillion). During this period, the United States spent on average nearly $98 billion a year developing and maintaining its nuclear arsenal.
It is very difficult to comprehend figures of this magnitude. To provide some perspective, consider the following (a rough arithmetic check follows the list):
  • $5.8 trillion divided equally among everyone living in the United States equals a bit more than $21,000 per person.
  • $5.8 trillion in one dollar bills stacked one atop another would stretch 459,361 miles (739,117 kilometers), to the Moon and nearly back.
  • If you attempted to count $5.8 trillion at the rate of $1 a second, it would take almost 12 days to reach $1 million, nearly 32 years to reach $1 billion, 31,709 years to reach $1 trillion and thus about 184,579 years to reach $5.8 trillion.
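These comparisons are simple back-of-the-envelope conversions from the $5.8 trillion total. The short Python sketch below reproduces them; the per-bill thickness (about 0.127 mm), the mid-1990s population figure (roughly 265 million), and the 365-day year are assumptions made here for illustration, not numbers taken from the brief, so the results only approximate the figures quoted above.

# Rough check of the perspective figures above (illustrative assumptions noted in comments).
TOTAL = 5.8e12                                 # rounded program cost, 1996 dollars

per_person = TOTAL / 265e6                     # assumes ~265 million people: ~$21,900
stack_km = TOTAL * 0.127e-6                    # assumes ~0.127 mm per bill: ~737,000 km
stack_miles = stack_km / 1.609344              # ~458,000 miles

SECONDS_PER_YEAR = 365 * 24 * 3600             # 365-day years
days_to_1_million = 1e6 / 86400                # ~11.6 days
years_to_1_billion = 1e9 / SECONDS_PER_YEAR    # ~31.7 years
years_to_1_trillion = 1e12 / SECONDS_PER_YEAR  # ~31,710 years
years_to_total = TOTAL / SECONDS_PER_YEAR      # ~183,900 years; the brief's 184,579-year
                                               # figure evidently reflects the unrounded total

print(round(per_person), round(stack_miles), round(years_to_total))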

What Did the United States Get?

Between 1945 and 1990, the United States manufactured more than 70,000 nuclear bombs and warheads in 65 different configurations, for everything from land mines and artillery shells to multi-megaton warheads for intercontinental ballistic missiles. Thirty-six percent of these warheads were intended for tactical or battlefield use and nearly 12,000 warheads (17 percent) were for defensive purposes (anti-aircraft, anti-missile, and anti-submarine). To fuel these weapons, the United States produced 745.3 metric tons of highly enriched uranium and 103.5 metric tons of plutonium. Uranium was produced at three separate facilities in Tennessee, Ohio, and Kentucky. Plutonium was produced in reactors at the Hanford Reservation in Washington State and at the Savannah River Plant in South Carolina.
Costs for the Manhattan Project totaled about $21.6 billion through 1945. Sixty-three percent of this total went toward producing highly enriched uranium at Oak Ridge, Tennessee. Another 21 percent was expended at Hanford producing plutonium.[4]
When the megatonnage or explosive power of the U.S. arsenal peaked in 1960, it was equivalent to 1,366,000 Hiroshima-sized bombs (the “Little Boy” bomb dropped on Hiroshima had a yield of 15 kilotons or 15,000 tons of TNT equivalent). Today’s operational stockpile, although significantly smaller, contains the explosive equivalent of more than 91,500 Hiroshima-sized bombs.
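Converted to total yield using the 15-kiloton Hiroshima figure given above, those equivalences work out to roughly 20,500 megatons at the 1960 peak and about 1,370 megatons today. A short sketch of the conversion:

HIROSHIMA_KT = 15                               # "Little Boy" yield in kilotons, as stated above
peak_megatons = 1_366_000 * HIROSHIMA_KT / 1000     # 1960 peak: ~20,490 megatons
current_megatons = 91_500 * HIROSHIMA_KT / 1000     # operational stockpile: ~1,373 megatons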
From 1945 until September 1992, the United States conducted 1,030 nuclear tests (215 in the atmosphere and 815 underground), more than all the other nuclear powers combined. The peak year for testing came in 1962, when 96 warheads were detonated (39 in the atmosphere) in advance of the signing of the Partial Test Ban Treaty.
During the Cold War, the United States produced nuclear weapons for 116 different delivery systems. These delivery systems included 6,125 strategic ballistic missiles (11 types), 4,700 strategic bombers (11 types), 59 strategic ballistic missile submarines (3 types), and tens of thousands of additional shorter-range missile systems, many of which were dual-capable.
Because the government did not segregate nuclear and non-nuclear costs for weapons systems capable of performing dual missions, accounting for the cost of these dual-capable systems presents a significant problem. Some 25,000 warheads and bombs—36.5 percent of all U.S. nuclear weapons—were designed to be delivered by “conventional” systems, such as Air Force and Navy tactical fighters, Army ground-based and Navy shipborne surface-to-air missiles (SAMs), Navy antisubmarine warfare (ASW) systems, and Army and Marine Corps artillery pieces. The costs of building and operating these systems fall under the heading of “General Purpose Forces” in the Department of Defense budget.
Some portion of these costs clearly must be allocated to the overall nuclear weapons total, but how much? Conservatively assuming that just 15 percent of the cost of equipping, operating, and supporting general purpose forces during the Cold War is attributable to the nuclear mission yields $1.2 trillion, the figure included in the totals above. However, given the extent to which nuclear weapons were thoroughly integrated into the training and doctrine of U.S. general purpose forces during much of the Cold War, especially during the 1950s and 1960s, 15 percent may well be a serious underestimate of the extent to which general purpose forces were involved in nuclear missions (one analyst, noting the Eisenhower administration’s efforts from 1953 to 1960 to deemphasize conventional weapons in favor of increased tactical nuclear capabilities, wrote that “it seems reasonable to assume that more than half the budget for general purpose forces [during this period] was nuclear-related”).[5]
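Because the $1.2 trillion figure follows directly from the assumed 15 percent share, the overall total is quite sensitive to that assumption. The sketch below is illustrative only: the roughly $8 trillion general purpose forces base is implied by the brief’s own numbers (15 percent of it equals $1.2 trillion) rather than stated there, and the alternative shares are hypothetical.

# Sensitivity of the nuclear-attributed cost to the assumed general purpose
# forces (GPF) share. The ~$8 trillion GPF base is implied, not stated, by the
# brief (0.15 * base = $1.2 trillion); the shares below are hypothetical.
GPF_BASE = 1.2e12 / 0.15                      # ~$8.0 trillion (implied)
for share in (0.15, 0.25, 0.50):
    attributed = share * GPF_BASE
    print(f"{share:.0%} share -> ${attributed / 1e12:.1f} trillion attributed to nuclear missions")

At the “more than half” share suggested by the analyst quoted above, the attributed cost would roughly triple, adding nearly $3 trillion to the $5.8 trillion total.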

What Was Necessary?

Contrary to official pronouncements by military and political leaders over the years, the requirements for nuclear deterrence and warfighting strategies were largely subjective and inherently undefinable. When such requirements were combined with a lack of knowledge about the current or cumulative cost of the nuclear weapons program, inadequate intelligence about and fear of the capabilities and intentions of U.S. adversaries (principally the Soviet Union), and a blanket of secrecy that kept the public, the news media, and even some policymakers in the dark, all the ingredients for a largely unconstrained competition were present.
At one end of the deterrence spectrum, one can make an argument for achieving deterrence with just a few warheads or even with the potential to construct and deploy them (e.g., North Korea). At the other end are the statements by General James Gavin, head of Army research and development, who testified before Congress in 1956 and 1957 and requested 151,000 nuclear warheads just for the Army (a figure justified by plans envisioning the use of as many as 423 warheads in a single day of “intense combat”), and a declassified Department of Defense report—History of the Custody and Deployment of Nuclear Weapons—that presents, almost as an afterthought, this sentence: “Finally, in June 1958, the [Joint Chiefs of Staff] after careful study, recommended a stockpile level of from 51,000 to 73,000 warheads by 1968.”[6] Such figures are all the more remarkable considering that annual production peaked in 1960 at 7,178 warheads and that the United States built a total of 70,000 warheads during the entire Cold War.
What was the “right” number? Given the subjective nature of the process, there can be no single figure. However, over the years, a number of knowledgeable individuals have tried to quantify a minimum nuclear requirement and it is worth considering the results of some of their efforts.
In 1957, Admiral Arleigh Burke, then the chief of naval operations, estimated that 720 warheads aboard 45 Polaris submarines were sufficient to achieve deterrence. This figure took into account the fact that some weapons would not work and that some would be destroyed in a Soviet attack (Burke believed that just 232 warheads were required to destroy the Soviet Union).[7] At the time Burke made this estimate, the U.S. arsenal already held six times as many warheads.
Several years later, in 1960, General Maxwell Taylor, former Army chief of staff and future chairman of the Joint Chiefs of Staff, wrote that “a few hundred reliable and accurate missiles” (armed with a few hundred warheads) and supplemented by a small number of bombers was adequate to deter the Soviet Union.[8] Yet by this time the United States had some 7,000 strategic nuclear warheads.
In 1964, Secretary of Defense Robert McNamara and his “whiz kids” calculated that 400 “equivalent megatons” (megatons weighted to take into account the varying blast effects from warheads of different yields) would be enough to achieve Mutual Assured Destruction and destroy the Soviet Union as a functioning society. At that time, the U.S. arsenal contained 17,000 equivalent megatons, or 17 billion tons of TNT equivalent.[9]
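The term “equivalent megatons” is conventionally computed by raising each warhead’s yield (in megatons) to the two-thirds power, reflecting the fact that blast-damage area grows less than proportionally with yield. The brief does not spell out the weighting it used, so the sketch below adopts that standard definition as an assumption.

# "Equivalent megatonnage" (EMT), assuming the conventional yield**(2/3)
# weighting; the brief does not give the formula explicitly.
def emt(yield_megatons: float) -> float:
    return yield_megatons ** (2.0 / 3.0)

# Same total yield, very different weighted destructiveness:
print(emt(9.0))         # one 9-Mt warhead   -> ~4.3 EMT
print(9 * emt(1.0))     # nine 1-Mt warheads -> 9.0 EMT

# The 1964 arsenal held roughly 42 times McNamara's 400-EMT criterion:
print(17_000 / 400)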

Environmental and Health Costs

Until the end of the Cold War, the environmental and public health costs of U.S. nuclear weapons generally received little attention and funding. This was partly because there were few systematic efforts underway to document or address the full extent of the problems and implement solutions. But it was also because few senior government officials felt comfortable raising concerns about the real and potential hazards posed by the production and testing of nuclear weapons at a time when those weapons were still considered a crucial factor in U.S.-Soviet relations. The Atomic Energy Commission (AEC) and its successor, the Department of Energy (DOE), also did what they could to discourage discussion of these issues, to the point of lying about the dangers not only to the general public but also to the workers in their own facilities. As a result, one great irony of the Cold War is that although the United States produced nuclear weapons en masse to destroy the Soviet Union, and vice versa, the principal victims of each country’s nuclear weapons were its own citizens.
From the very beginning, nuclear officials dealt with the problem of nuclear waste by devising interim rather than long-term solutions. During World War II, scientists at the Hanford Reservation understood that processing plutonium by dissolving a reactor’s fuel rods in acid created significant quantities of highly radioactive liquid wastes. They dealt with this problem by constructing giant underground tanks made of carbon steel, which was used because stainless steel was in short supply. But using carbon steel meant first having to neutralize the acidic wastes to prevent them from dissolving the tanks and leaking out. This neutralization process involved adding lye and water to the wastes, which, among other things, substantially increased their volume.[10] The tanks were intended only as a short-term fix, but after the war no one revisited the issue. As a result, millions of gallons of waste leaked into the ground. Hanford officials insisted for years that it would take centuries for the waste to reach the groundwater underneath the site. In fact, it took only a matter of decades for their optimistic assumptions to be proven wrong.
A major reason why the United States today faces a “cleanup” bill of at least $300 billion is that problems such as the Hanford waste tanks were ignored in favor of maintaining or increasing production of nuclear weapons. Production was the first priority of the government. Making sure it was done in a manner that did not unnecessarily hurt people or destroy the environment was a distant second. Had the government thought through more carefully the consequences of unrestrained production of plutonium and highly enriched uranium, many of the problems—and bills—we face today could have been avoided or substantially mitigated. It now appears that in a number of cases, no effective “cleanup” will be possible and highly contaminated sites will simply have to be fenced off and monitored for generations.
The human health costs of the U.S. nuclear weapons program are important but largely unquantifiable. How do you place a value on a human life? A number of the 600,000 people who worked in nuclear weapons facilities were exposed to unnecessarily high levels of radiation. Exposure to toxic chemicals was also high. At several facilities, no consistent records were kept of employee radiation exposures. At at least one facility, plant officials entered false readings into dosimetry logs. When workers fell ill and applied for workers’ compensation, the DOE spent millions of dollars on legal fees to avoid paying out even a single claim, out of fear that paying one claim would open the floodgates to lawsuits and increase calls for stricter health and safety measures, which would necessarily drive up costs and impede the production of more weapons.[11]
Uranium miners, many of whom were Navajo, developed lung cancer after working in unvented mines without respirators or any sort of protective gear. Government officials were well aware of the dangers to the workers, but chose to ignore them to keep production high and the price of uranium low.[12]
Congress has since passed the Radiation Exposure Compensation Act (RECA), providing compensation to persons harmed by nuclear weapons production and testing activities. Through early 1998, the government had paid out $225 million to some 2,700 persons.[13]

Factors Influencing the Growth of the Nuclear Arsenal

Why did the United States spend so much money amassing an arsenal far larger than even many military and government experts thought necessary?
Arbitrary decision making played a significant role. Although official reports and congressional testimony created the impression that military and political officials knew exactly what number of bombers or missiles would deter the Soviet Union, the reality is that the eventual size of a weapon program was arrived at through a number of interlocking factors and influences, including budgetary trade-offs, the perceived Soviet (and Chinese) threat, interservice and intraservice rivalry, the use by elected officials of military programs to promote jobs in states and congressional districts, corporate lobbying, cycles of technological obsolescence and development, and political charges and countercharges, to name a few. To all this must be added one additional factor: the lack of understanding—at the highest levels of government—about what these programs cost.
Fear of the Soviet Union was a significant driving force behind the U.S. nuclear weapons program. From the very beginning, U.S. officials sought to maintain a technological and numerical lead over the Soviets. The remarkable confluence of menacing events in 1949-1950—the first Soviet atomic bomb test, the communist revolution in China, the start of the Korean War, the revelations of atomic spies, and the beginning of Senator Joseph McCarthy’s anti-communist crusade—catalyzed the public and government officials and led to dire predictions about the future of the United States and global democracy. Because the United States had not yet developed means of obtaining reliable information about the Soviet Union, fear, worst-case scenarios, and mirror-imaging dominated (Air Force intelligence officials, for example, assumed that the Soviets would build thousands of strategic bombers because that is what the Air Force was doing).
Accordingly, Congress appropriated large sums of money to expand nuclear weapons production as rapidly as possible. Congress was especially concerned because it felt that the military’s requirements for nuclear weapons were unduly constrained by the relatively small capacity of the Atomic Energy Commission’s bomb production facilities. Increased production, it was felt, would allow the creation and fulfillment of more realistic requirements. At first, however, the military was not calling for increased production. When Eisenhower entered office in January 1953, production was 644 bombs a year and the arsenal contained 841 weapons in all. By the time Eisenhower left office in 1961, more than 5,100 warheads were rolling off the assembly lines annually (production had actually peaked the year before at more than 7,000) and the arsenal held more than 22,000 weapons, the majority of which were intended for battlefield use.[14]
As more and more money was appropriated for nuclear weapons, the Army, Navy, and Air Force began racing against each other to acquire new missions and develop new weapons that would place them at the forefront of U.S. military power. Weapons were sometimes developed and deployed before the rationale for their use had been fully tested in war games. Intense battles were fought over which service would control which mission (and the resulting flow of cash and prestige). For example, when the Navy introduced the Polaris submarine as an invulnerable strike platform, the Air Force tried to sink it with study after study and then created new bomber programs to try to “steal” the Polaris mission and return it to the Air Force.
The weapons laboratories at Los Alamos and Livermore also ended up competing against each other in the quest to develop newer and better nuclear weapons, with each coming to view the other as the “enemy.”[15]
Another overlooked factor is that nuclear weapons were considered “free goods” by the military services. That is, the cost of developing, testing, and building the warheads was borne almost entirely by the Atomic Energy Commission (now the Department of Energy). Although the AEC/DOE budget is part of the overall military budget, it has always been funded separately and in addition to monies provided to the services for weapons programs and operating costs. The services had to purchase the delivery systems (except in the case of gravity bombs), but the warheads themselves cost nothing. As a result, there was little financial disincentive for service officials to request a nuclear warhead when a conventional one might be just as much or even more appropriate. Furthermore, there was little reason not to create “requirements” for the AEC to produce large numbers of nuclear weapons. Not surprisingly, former government and military officials have stated that had the military been responsible from the beginning for paying for the warheads it requested, the nuclear stockpile would have been significantly smaller.
The extreme secrecy surrounding almost everything concerning nuclear weapons impeded effective democratic debate for decades. During the earliest years of the program, the AEC simply presented a budget to Congress with little or no detailed justification for how the money would be spent and why. The fundamental issue of how U.S. nuclear weapons would be used and how the requirements for deterrence were developed was never adequately explored during the early years, when the basic framework for the program was being established. One result is that U.S. officials systematically failed to anticipate how the Soviet Union would perceive the U.S. buildup and how it would drive the Soviets to respond with provocative programs of their own.
Finally, pork barrel politics (the use of government programs by elected representatives to enrich their constituents) was an important underlying factor as well. During the Cold War, military spending became a favored means of engaging in pork barrel spending because of the large amounts available within the defense budget and because funding something connected to the defense of the nation required less justification and was more immune to careful scrutiny than a non-military program. Nuclear weapons programs became an important means of support for the otherwise poor and mostly rural communities where production facilities were located. In time, these communities became dependent, to varying degrees, on their local nuclear facilities, to the extent that local officials (and many workers) often downplayed the health and environmental risks they posed. This dependency also made them difficult to shut down when the federal government no longer considered them necessary.
From an economic standpoint, the U.S. nuclear weapons program enjoyed a very privileged status. As a semiofficial history of the AEC/DOE production reactor program notes:
Not only was production of plutonium and tritium controlled by the government as a monopoly, but consumption was all taken by the government, a single-consumer situation that economists call a “monopsony.” This unique arrangement…represented an anomaly in the American industrial world….None of the operating contractors…risked major capital investments in the enterprises; the contracts provided for cost reimbursement. Demand was not driven by a free or even by a regulated economic market but by the single customer’s weapons policy….As a result of the Cold War and the imperatives of the nuclear standoff, this aspect of the American economy resembled the economy of the Soviet Union, in which decisions were made on a planned basis by a remote government, without reference to market forces, behind closed doors, for reasons that would not be made public.[16]

Lessons Learned

The belief underpinning the rapid increase in nuclear weapons during the 1950s was summed up in the phrase “a bigger bang for a buck.” According to this widely accepted idea, nuclear weapons were more cost-effective than conventional ones because, pound for pound, they could deliver more “killing power.” The thinking was that nuclear weapons would replace conventional weapons, saving large amounts of money and deterring war. But in reality nuclear weapons supplemented conventional weapons, and the United States developed enormous arsenals of both, wiping out any potential savings envisioned by those who championed a large and robust nuclear arsenal. The military services also discovered by the late 1950s that early nuclear weapons, which required sizable technical support and extraordinary security measures, were actually much more expensive to deploy than anticipated. Nevertheless, the buildup continued.
The argument that nuclear weapons were the key to keeping the Cold War cold, and that whatever was spent on them was therefore a sound investment, is also flawed. First, nuclear weapons were not the sole means of keeping the peace (or, when deemed necessary, fighting wars). The United States built a large conventional arsenal too, spending two and a half times more money on conventional weapons than on nuclear weapons during the Cold War. Second, the nuclear weapons program, unbounded as it was by logic or cost, led to all sorts of weapons that contributed little or nothing to deterrence (such as the nuclear-powered aircraft; PLUTO, a nuclear-powered, nuclear-armed cruise missile; and ASTOR, an antisubmarine nuclear torpedo guided to its target by a wire, ensuring that upon detonation it would destroy not only a Soviet submarine but also the U.S. submarine that launched it). Finally, such arguments ignore the great risks posed by the constant-alert nuclear postures the United States maintained, postures that easily could have triggered the very war nuclear weapons were supposed to prevent.
Exactly how much of the U.S. investment to date in nuclear weapons was “wasted” as a consequence of this inattention will remain a matter of debate, both because there has never been a fixed numerical goal or endpoint for U.S. deterrence and because “waste” is in the eye of the beholder. What is clear is that, at a minimum, hundreds of billions of dollars were expended on programs that contributed little or nothing to deterrence, diverted critical resources and effort away from those that did, or created long-term costs that exceeded their benefits (e.g., the overproduction of fissile materials). Moreover, the appropriate question is not how much or how little should have been spent (to which there will never be a single, unambiguous answer), but why numerous government officials over more than half a century consistently failed to ensure that what was spent on nuclear weapons was spent wisely and in the most efficient manner possible.
Although it can be argued that excessive or wasteful spending is a perennial problem in the United States, and while it may be tempting to compare the nuclear weapons program to welfare or agricultural subsidies or other entitlement programs in this regard, it is important to recognize one critical difference with respect to nuclear weapons: the costs of entitlement programs are well known. They are frequently debated in Congress and are readily available in government documents to anyone who cares to look. The costs of nuclear weapons, by contrast, have been neither fully understood nor compiled by the government.

Sources:

[1] Stephen I. Schwartz, ed., Atomic Audit: The Costs and Consequences of U.S. Nuclear Weapons Since 1940 (Washington, DC: Brookings Institution Press, 1998). Further information about Atomic Audit can be found at www.brookings.edu/projects/archive/nucweapons/weapons.aspx.
[2] Except where noted, all cost figures in this paper have been adjusted for inflation and are expressed in constant fiscal 1996 dollars. “Then-year dollars” represent the prices of goods or services current at the time they were sold. “Constant dollars” have been adjusted for the effects of inflation: costs for past expenditures are adjusted upward to account for the inflation that has occurred since the money was spent. This permits a comparison of expenditures over time that, although still imperfect, is less distorted than if then-year expenditures were used, and it allows the reader to view costs in terms of the dollar’s approximate purchasing power at the present time. Except where noted, these adjustments have been made using standard U.S. Department of Defense (DOD) deflators.
[3] A subsequent estimate, based on Atomic Audit and using its methodology, found that costs through 2005 were $7.5 trillion in adjusted 2005 dollars. See Joseph Cirincione, “Lessons Lost,” Bulletin of the Atomic Scientists, November/December 2005, p. 47.
[4] For a breakdown, see “The Costs of the Manhattan Project,” Brookings Institution, www.brookings.edu/projects/archive/nucweapons/manhattan.aspx.
[5] Jerome H. Kahan, Security in the Nuclear Age: Developing U.S. Strategic Arms Policy (Washington, D.C.: Brookings Institution, 1975), pp. 16-17.
[6] U.S. Department of Defense, Office of the Assistant Secretary of Defense (Atomic Energy), “History of the Custody and Deployment of Nuclear Weapons: July 1945 through September 1977,” February 1978, pp. 50, 77 (Formerly Top Secret, released under the Freedom of Information Act).
[7] David Alan Rosenberg, “The Origins of Overkill: Nuclear Weapons and American Strategy, 1945-1960,” International Security (Spring 1983), p. 57.
[8] Maxwell D. Taylor, The Uncertain Trumpet (New York: Harper, 1960), pp. 148, 158.
[9] Schwartz, ed., Atomic Audit: The Costs and Consequences of U.S. Nuclear Weapons Since 1940, p. 23.
[10] Schwartz, ed., Atomic Audit: The Costs and Consequences of U.S. Nuclear Weapons Since 1940, pp. 358-359.
[11] Schwartz, ed., Atomic Audit: The Costs and Consequences of U.S. Nuclear Weapons Since 1940, pp. 380-381, 396-400.
[12] Schwartz, ed., Atomic Audit: The Costs and Consequences of U.S. Nuclear Weapons Since 1940, pp. 401-402.
[13] As of September 28, 2008, the Department of Justice had administered more than $1.3 billion (in unadjusted dollars) in RECA payments to settle 20,100 claims. See Department of Justice, “Radiation Exposure Compensation System, Claims to Date Summary of Claims Received by 09/28/2008,” www.usdoj.gov:80/civil/torts/const/reca/index.htm.
[14] Schwartz, ed., Atomic Audit: The Costs and Consequences of U.S. Nuclear Weapons Since 1940, p. 77.
[15] Schwartz, ed., Atomic Audit: The Costs and Consequences of U.S. Nuclear Weapons Since 1940, pp. 46-47.
[16] Rodney P. Carlisle with Joan M. Zenzen, Supplying the Nuclear Arsenal: American Production Reactors, 1942-1992 (Baltimore: Johns Hopkins University Press, 1996), pp. 160-162.
About
Stephen Schwartz offers an intriguing look into the financial, environmental, and public health costs incurred by the United States in developing and maintaining nuclear weapons.
Author
Stephen I. Schwartz, Center for Nonproliferation Studies
