The New Era of Nuclear Arsenal Vulnerability

Keir A. Lieber and Daryl G. Press

Nuclear deterrence rests on the survivability of nuclear arsenals. A weapons arsenal that can survive a disarming strike and inflict unacceptable damage on the attacker is the foundation of a robust deterrent. For much of the nuclear age, arsenal survivability seemed straightforward. “Counterforce” disarming attacks — those aimed at eliminating an enemy’s retaliatory forces — were nearly impossible to undertake because potential victims could easily hide and protect their weapons.1 Today, however, leaps in weapons accuracy and breakthroughs in remote sensing are undermining states’ efforts to secure their arsenals. Specifically, pinpoint accuracy and improved sensors are negating the two key approaches that countries have relied upon to ensure arsenal survivability: hardening and concealment. The computer revolution has also spawned dramatic advances in data processing, communication, and artificial intelligence, and has opened a new cyber domain of strategic operations — compounding the growing vulnerability of nuclear delivery systems.

Nuclear arsenals around the world are not becoming equally vulnerable to attack. Countries with considerable resources can buck these technological trends and keep their forces survivable, albeit at significant cost and effort. However, other countries — especially those facing wealthy, technologically advanced adversaries — will find it increasingly difficult to secure their arsenals. The implications for nuclear policy are far-reaching: “the new era of counterforce,” as we label it,2 will reduce deterrence stability, undercut the logic of future nuclear arms reductions, and compel U.S. leaders to balance the risks and opportunities of honing U.S. counterforce capabilities.

Eroding Foundation of Nuclear Deterrence

Nuclear weapons are the ultimate instruments of deterrence. There could be no conceivable benefit from invading or attacking a rival if doing so would trigger nuclear retaliation. As long as nuclear arsenals are secure, and hence could survive an adversary’s attack and then retaliate, nuclear weapons are a tremendous source of security for those who possess them. For this reason, military planners have employed three basic approaches to protect nuclear forces from attack: hardening, concealment, and redundancy. In terms of hardening, planners place missiles in reinforced silos; deploy aircraft in hardened shelters; disperse mobile missiles to protective sites; and bury command and control sites, as well as the secure means used to communicate launch orders. Nuclear planners also rely heavily on concealment to ensure force survivability, particularly by dispersing ballistic missile submarines (SSBNs) and mobile missile launchers within vast deployment areas. Aircraft are harder to hide because they require airfields for takeoff and landing, but they too can employ concealment by dispersing to alternate airfields or remaining airborne during alerts. Finally, redundancy is used to bolster every aspect of the nuclear mission, especially force survivability. Most nuclear-armed countries use multiple types of weapons and delivery systems to hedge against design flaws and complicate enemy strike plans. They spread their forces across multiple bases and employ redundant communication networks, command and control arrangements, and early warning systems.

Major technological trends are undermining these strategies of survivability. Leaps in weapons accuracy have diminished the value of hardening, while breakthroughs in remote sensing threaten nuclear forces that depend on concealment. Another major change since the end of the Cold War — the significant reduction of nuclear arsenals — weakens the third strategy of survivability: redundancy. Deploying survivable nuclear forces in this environment is possible, but the challenge of protecting those forces is growing.

Counterforce in the Age of Accuracy

Throughout most of the Cold War, nuclear delivery systems were too inaccurate to conduct effective disarming strikes against large arsenals comprising hundreds of hardened targets. As late as 1985, the largest-yield warhead on the best U.S. intercontinental ballistic missile (ICBM) had only a 54% chance of destroying a Soviet missile silo. Missiles fired from submarines were even less effective, offering less than a 10% chance per warhead of destroying a hardened missile silo.3 A nuclear disarming strike against Soviet missile fields would have left hundreds of silos intact to inflict a devastating counterblow.

But technological advances in navigation and guidance, which began to enter the superpower arsenals in the mid-1980s, have significantly increased the vulnerability of hardened targets. Improved inertial sensors and stellar navigation systems greatly enhanced missile accuracy. GPS and other geolocation technologies allowed submarines to precisely determine their position before launch — eventually allowing their weapons to surpass the accuracy of land-based missiles. Over the subsequent two decades, new missiles and improved guidance systems on older missiles transformed offensive strike capabilities. Today, a single warhead delivered by a U.S. ICBM would have roughly a 75% chance of destroying a hardened missile silo, and the most effective warhead currently deployed on U.S. submarines would have roughly an 80% chance of destroying the same target. In short, the precision revolution — prominently displayed by the United States in each of its recent conventional wars — has had equally dramatic consequences for nuclear targeting and deterrence.
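These kill probabilities can be unpacked with the standard unclassified formula for single-shot kill probability (SSPK). If miss distances follow a circular normal distribution characterized by the circular error probable (CEP, the radius within which half of all warheads land), and a warhead destroys the silo whenever it detonates within a lethal radius LR set by its yield and the silo's hardness, then:

```latex
% Single-shot kill probability against a hardened point target.
% CEP = circular error probable (median miss distance);
% LR  = lethal radius, set by warhead yield and target hardness.
\mathrm{SSPK} = 1 - 2^{-\left(LR/\mathrm{CEP}\right)^{2}}
```

On this approximation, the 1985-era figure of 54% corresponds to LR ≈ 1.06 CEP, while today's 75% corresponds to LR ≈ 1.41 CEP. Because LR grows only as roughly the cube root of yield while the exponent scales with the inverse square of CEP, improving accuracy raises SSPK far faster than increasing yield. (This is a textbook approximation, not necessarily the exact model behind the figures cited in note 3.)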

The accuracy revolution, and the computer revolution that spawned it, have had other complementary effects on the vulnerability of hard targets. For decades, nuclear targeters understood that effective disarming attacks would be impossible unless they could strike each individual target with multiple weapons. After all, even a 90% effective strike against an enemy’s arsenal would be a failure, since the surviving weapons could inflict a devastating counterattack. The simple solution to that problem, striking each target multiple times, was long thought infeasible because of fratricide: the danger that incoming weapons might destroy one another. The accuracy revolution, however, offers a solution to the fratricide problem, opening the door to assigning multiple warheads to a single target and thus paving the way to disarming counterforce strikes.

One type of fratricide occurs when the prompt effects of nuclear detonations — principally radiation, heat, and overpressure — destroy or deflect nearby warheads. To protect those warheads, targeters must separate the incoming weapons by at least 3-5 seconds. A second source of fratricide is harder to overcome. Destroying hard targets typically requires low-altitude detonations (so-called ground bursts), which vaporize material on the ground. When the debris begins to cool, 6-8 seconds after the detonation, it forms a dust cloud that envelops the target. Even small dust particles can be lethal to incoming warheads speeding through the cloud to the target. For decades, these two sources of fratricide posed a major problem for nuclear planners. Multiple warheads could be aimed at a single target only if they were separated by at least 3-5 seconds (to avoid interfering with each other); yet, all inbound warheads had to arrive within 6-8 seconds of the first (before the dust cloud formed). As a result, assigning more than two weapons to each target would produce only marginal gains: if the first one resulted in a miss, the target would likely be shielded when the third or fourth warhead arrived.4

Improvements in accuracy, however, have greatly mitigated the problem of fratricide. The ratio of misses — the main culprit of fratricide — to hits is declining. To be clear, some weapons will still malfunction; that is, they will fail to destroy their targets because of faulty missile boosters, guidance systems, or warheads. Those kinds of failures, however, do not generally cause fratricide, because the warheads never reach or detonate near the target. Only weapons that travel to the target area, yet detonate outside the lethal radius, create a dust cloud that shields the target from other incoming weapons. In short, leaps in accuracy are essentially reducing the set of three outcomes (hit, miss, and malfunction) to just two: hit or malfunction. The “miss” category, the key cause of fratricide, is virtually disappearing.
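A short Monte Carlo sketch makes the logic of the preceding two paragraphs concrete. The outcome probabilities below are hypothetical, chosen only to contrast a miss-prone arsenal with an accurate one; the 4-second spacing and 7-second cloud-formation time are taken from the ranges quoted above.

```python
import random

def silo_destroyed(p_hit, p_miss, n_warheads=3, rng=random):
    """Simulate one salvo against a single silo.

    Warheads arrive ~4 s apart.  A warhead that reaches the target area
    detonates (hit or miss) and, ~7 s later, raises a dust cloud that
    destroys any later arrival (fratricide).  A malfunctioning warhead
    never reaches the area, so it neither kills nor creates a cloud."""
    cloud_time = None                  # time after which the cloud is lethal
    for k in range(n_warheads):
        t = 4.0 * k                    # arrival time of warhead k
        if cloud_time is not None and t >= cloud_time:
            continue                   # destroyed by the dust cloud
        u = rng.random()
        if u < p_hit:
            return True                # detonates inside the lethal radius
        elif u < p_hit + p_miss:
            if cloud_time is None:     # near miss: ground burst, no kill,
                cloud_time = t + 7.0   # but the cloud shields the target
        # else: malfunction -- the warhead simply never arrives
    return False

def kill_prob(p_hit, p_miss, trials=200_000):
    rng = random.Random(0)
    return sum(silo_destroyed(p_hit, p_miss, rng=rng) for _ in range(trials)) / trials

print(f"miss-prone (hit 0.54, miss 0.31): {kill_prob(0.54, 0.31):.2f}")
print(f"accurate   (hit 0.80, miss 0.02): {kill_prob(0.80, 0.02):.2f}")
```

With the miss-prone parameters, a third warhead adds little, because a first-shot miss shields the target; with the accurate parameters, extra warheads fail only by malfunction and the kill probability climbs toward certainty.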

Improved missile accuracy and the end of fratricide are just two of the developments that have helped negate hardening and increased the vulnerability of nuclear arsenals. The computer revolution has led to other improvements that, taken together, significantly increase counterforce capabilities.

First, increased accuracy for submarine-launched ballistic missiles (SLBMs) has added hundreds of warheads to the counterforce arsenal; it has also unlocked other advantages that submarines possess over land-based missiles. For example, submarines have flexibility in firing location, allowing them to strike targets that are out of range of ICBMs or that are deployed in locations that ICBMs cannot hit. Submarines also permit strikes from close range, reducing an adversary’s response time. And because submarines can fire from unpredictable locations, SLBM launches are more difficult to detect than ICBM attacks, further reducing an adversary’s warning time before impact.

Second, new “compensating” fuses, which exist on most U.S. SLBMs and will soon be deployed across the entire force, are making ballistic missiles even more capable than the figures reported above suggest.5 Reentry vehicles equipped with this fusing system use an altimeter to measure the difference between the actual and expected trajectory of the reentry vehicle, and then compensate for inaccuracies by adjusting the warhead’s height of burst. Specifically, if the altimeter reveals that the warhead will detonate “short” of the target, the fusing system lowers the height of burst, allowing the weapon to travel farther (hence, closer to the aimpoint) before detonation. Alternatively, if the reentry vehicle is going to detonate beyond the target, the height of burst automatically adjusts upward so that the weapon detonates before it travels too far. This improved fuse greatly increases the effectiveness of ballistic missiles. For example, more than half of the warheads currently deployed on U.S. submarines otherwise have too small an explosive yield to carry out the type of attack described in the previous paragraphs; the compensating fuse gives them this “hard target kill” capability. Putting aside all the other improvements described above, the new fuse alone more than doubles the counterforce capabilities of the U.S. submarine fleet.
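The geometry behind the compensating fuse can be sketched in a few lines. This is a toy model under stated assumptions (a straight-line descent and a simple clamp on the fuse's working range), not the actual fuzing algorithm analyzed in note 5; the angles and distances are illustrative.

```python
import math

def compensated_hob(nominal_hob_m, along_track_error_m, reentry_angle_deg,
                    min_hob_m=0.0, max_hob_m=3000.0):
    """Adjust height of burst to cancel a predicted along-track error.

    Along a straight descent at `reentry_angle_deg` below the horizontal,
    lowering the burst point by dh moves it downrange by dh / tan(angle),
    so an along-track error e (negative = short, positive = long) is
    cancelled by shifting the height of burst by e * tan(angle), clamped
    to the fuse's working range."""
    t = math.tan(math.radians(reentry_angle_deg))
    adjusted = nominal_hob_m + along_track_error_m * t
    return min(max(adjusted, min_hob_m), max_hob_m)

# A warhead predicted to fall 300 m short on a 30-degree descent: the fuse
# lowers the burst by ~173 m so the reentry vehicle flies farther first.
print(compensated_hob(400.0, -300.0, 30.0))   # -> ~226.8
```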

Third, the computer revolution has made possible rapid missile reprogramming, which increases the effectiveness of ballistic missiles by reducing the consequences of malfunctions. In the age of pinpoint accuracy, missile reliability has become the main hurdle to attacks on hardened targets. For decades, analysts have recognized a solution to this problem: if missile failures can be detected, the targets assigned to the malfunctioning missiles can be rapidly reassigned to other missiles held in reserve. The capability to retarget missiles in a matter of minutes was installed at U.S. ICBM launch control centers in the 1990s and on U.S. submarines in the early 2000s, and both systems have since been upgraded. We do not know whether the United States has adopted war plans that fully exploit rapid reprogramming to minimize the effects of missile failures. Nevertheless, such a targeting approach is within the technical capabilities of the United States and other major nuclear powers, and it may already be incorporated into war plans.
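In software terms, the reassignment step itself is simple; the hard parts are detecting failures promptly and communicating new target packages, which the sketch below simply assumes. The identifiers and data structures are hypothetical.

```python
def reassign_failed(assignments, failed, reserves):
    """Reassign the targets of missiles whose failures were detected.

    assignments: dict missile_id -> target for the first wave
    failed:      set of missile_ids observed to have malfunctioned
    reserves:    list of missile_ids held back for this purpose
    Returns (follow-up assignments, targets left uncovered)."""
    spares = list(reserves)
    followup, uncovered = {}, []
    for missile, target in assignments.items():
        if missile in failed:
            if spares:
                followup[spares.pop(0)] = target   # fire a reserve at it
            else:
                uncovered.append(target)           # reserve pool exhausted
    return followup, uncovered

first_wave = {"m01": "silo A", "m02": "silo B", "m03": "silo C"}
followup, uncovered = reassign_failed(first_wave, {"m02"}, ["m04", "m05"])
print(followup)    # {'m04': 'silo B'}
print(uncovered)   # []
```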

The cumulative consequences of these improvements are profound. Given a hypothetical target set of 200 hardened missile silos, a 1985-era U.S. ICBM strike — with two warheads assigned per target — would have been expected to leave 42 surviving silos. A comparable strike in 2018 could destroy every hardened silo.
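The 1985 figure follows from a one-line binomial estimate: with each silo attacked by two warheads, each killing independently with probability 0.54, the expected number of survivors is 200 × (1 − 0.54)² ≈ 42. The sketch below reproduces that number and illustrates, with assumed (not published) modern parameters, how higher kill probabilities plus a retargeted reserve shot drive the expected survivors toward zero.

```python
def expected_survivors(n_silos, p_kill, shots):
    """Expected silos surviving `shots` independent warheads of kill prob p_kill."""
    return n_silos * (1.0 - p_kill) ** shots

print(expected_survivors(200, 0.54, 2))   # 1985-era two-on-one strike -> ~42.3
print(expected_survivors(200, 0.80, 2))   # assumed modern SSPK        -> 8.0
print(expected_survivors(200, 0.80, 3))   # plus one retargeted shot   -> 1.6
```

With compensating fuses and full exploitation of rapid retargeting, the published model cited in note 2 drives the expected survivors effectively to zero.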

These results are simply the output of a nuclear targeting model. In the real world, the effectiveness of any strike would depend on many factors not modeled here, including the skill of the attacking forces, the accuracy of target intelligence, the ability of the targeted country to detect an inbound strike and “launch on warning,” and other factors that depend on the political and strategic context. As a result, these calculations tell us less about the precise vulnerability of a given arsenal at a given time — though one can reach arresting conclusions based on the evidence — and more about trends in how technology is undermining survivability.

At the same time, our model substantially understates the vulnerability of hard targets because it does not capture the growing contribution of nonnuclear forces to counterforce missions. As nuclear arsenals shrink — and hence offer fewer targets that must be destroyed — and as conventional strike forces proliferate, the challenge of ensuring force survivability will grow.

Counterforce in the Age of Transparency

While advances in accuracy are negating the value of hardening, leaps in remote sensing are chipping away at the other main approach to achieving survivability: concealment. Finding concealed forces, particularly mobile ones, remains a major challenge. Trends in technology, however, are eroding the security that mobility once provided. In the ongoing competition between “hiders” and “seekers,” waged by ballistic missile submarines, mobile land-based missiles, and the forces that seek to track them, the hider’s job is growing more difficult over time.

At least five trends are ushering in an age of unprecedented transparency. First, sensor platforms have become more diverse. The mainstays of Cold War technical intelligence — satellites, submarines, and piloted aircraft — continue to play a vital role, and they are being augmented by new platforms, including remotely piloted aircraft, undersea drones, and various autonomous sensors hidden on the ground or tethered to the seabed. Additionally, the past two decades have witnessed the development of a new “virtual” sensing platform: cyberspying. Second, sensors are collecting a widening array of signals using a growing list of techniques. Early Cold War strategic intelligence relied heavily on photoreconnaissance, underwater acoustics, and the collection of adversary communications — all of which remain important. Modern sensors now gather data from across the entire electromagnetic spectrum and exploit an increasing number of analytic techniques, such as spectroscopy to identify the vapors leaking from faraway facilities, interferometry to discover underground structures, and the signal-processing techniques that underpin synthetic aperture radar (SAR).

Third, remote sensing platforms increasingly provide persistent observation. At the beginning of the Cold War, strategic intelligence was hobbled by sensors that collected snapshots rather than streams of data. Spy planes sprinted past targets, and satellites passed overhead and then disappeared over the horizon. Over time those sensors were supplemented with platforms that remained in place and soaked up data, such as signals intelligence antennas, undersea hydrophones, and geostationary satellites. The trend toward persistence is continuing. Today, remotely piloted vehicles can loiter near enemy targets and autonomous sensors can monitor critical road junctures for months or years. Persistent observation is essential if the goal is not merely to count enemy weapons, but also to track their movement.

The fourth factor in the ongoing remote sensing revolution is the steady improvement in sensor resolution. In every field that employs remote sensing, improved sensors and advanced data processing are permitting more accurate measurements and allowing fainter signals to be discerned from background noise. The leap in satellite image resolution is but one example: the first U.S. reconnaissance satellite (Corona) could detect objects only as small as 25 feet across, whereas today even commercial satellites can collect images with 1-foot resolution. (Spy satellites can do much better.) Advances in resolution are not merely transforming optical remote sensing systems; they are also increasing the capabilities of infrared sensors, advanced radars, interferometers, and spectrographs. High-resolution data, however, would have limited utility were it not for the fifth leg of the sensor revolution: improved data transmission. In the past, analysts sometimes had to wait weeks before they could examine satellite images. (Early satellites had to finish a roll of film and eject the canister, which was then recovered and processed.) Today, intelligence gathered by aircraft, satellites, and drones can be transmitted in nearly real time.
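A quick diffraction-limit estimate shows why such resolution is physically plausible. The orbit altitude and mirror size below are illustrative assumptions, and the formula ignores atmospheric effects and image processing, so it is a bound rather than a performance figure.

```python
import math

def diffraction_limit_m(wavelength_m, altitude_m, aperture_m):
    """Rayleigh-criterion ground resolution: ~1.22 * lambda * h / D."""
    return 1.22 * wavelength_m * altitude_m / aperture_m

# Visible light (550 nm) from an assumed 500 km orbit with a 1.1 m mirror
# already permits ~0.3 m (about 1 foot) resolution -- the commercial figure
# quoted above.
print(diffraction_limit_m(550e-9, 500e3, 1.1))   # -> ~0.30 m
```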

The impact of the remote sensing revolution on nuclear force survivability is probably greater than that of improved accuracy, but the payoff of improved sensors is more difficult to demonstrate using unclassified models. We illustrate some of the effects of improved sensors by using commercial geospatial software to estimate the fraction of North Korea’s road network that can be monitored using existing SAR satellites, standoff UAVs, and stealthy UAVs (which can probably operate within North Korea’s airspace). We find that the existing constellation of U.S. satellites can image the roads surrounding North Korean missile bases at least every 90 minutes, and existing UAVs could maintain persistent observation of the entire road network indefinitely. Our analysis understates U.S. surveillance capabilities by not accounting for optical satellites (which are effective in daylight), ground-based sensors (which have likely been emplaced at key locations in Korea), allied satellite capabilities, and other remote sensing techniques that would likely come into play in a hunt for North Korean missiles.
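The ~90-minute figure is consistent with basic orbital mechanics: Kepler's third law fixes the period of a low-Earth orbit at roughly 95 minutes for typical SAR altitudes, so several satellites with wide swaths can revisit a given road segment about once per orbit. The altitude below is an illustrative assumption, and the sketch ignores swath geometry and latitude effects.

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6           # mean Earth radius, m

def orbital_period_min(altitude_m):
    """Circular-orbit period from Kepler's third law: T = 2*pi*sqrt(a^3/mu)."""
    a = R_EARTH + altitude_m
    return 2.0 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60.0

# An assumed 550 km SAR orbit circles the Earth in about 95 minutes.
print(orbital_period_min(550e3))   # -> ~95.6 min
```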

To be clear, even with improved sensors, finding concealed forces, particularly mobile ones, remains a major challenge. But in the ongoing competition between “hiders” and “seekers,” the hider’s job is growing more difficult than ever before. Nuclear survivability through concealment can no longer be assumed.

Policy Dilemmas in the New Era of Counterforce

The growing threat to nuclear forces raises major policy questions for U.S. national security planners. One set of questions relates to the wisdom of future bilateral arms reductions. Since the end of the Cold War, U.S. and Russian leaders have used arms control agreements to reduce their arsenals, increase strategic stability, prevent attacks, and soothe relations between former adversaries. Yet as the effectiveness of nuclear counterforce systems grows, further arms cuts risk an unintended consequence: a situation in which the major nuclear-armed states can envision victory in nuclear war. To make matters worse, nonnuclear means of counterforce are growing — for example, through improved conventional weapons, missile defenses, anti-submarine warfare systems, and cyber operations. The problem is stark: arms control agreements that cut only nuclear weapons reduce the number of targets that must be destroyed in a disarming strike, even as the nonnuclear forces that aim at those targets grow in number and capability. For years, arms control was a policy that made war less winnable and therefore less likely; today arms cuts — however well-meaning — may have perverse consequences.

A second set of questions concerns the wisdom of developing effective counterforce systems, a debate the new era of nuclear arsenal vulnerability should reopen in the United States. Fielding those capabilities — nuclear, conventional, and otherwise — may prove invaluable by enhancing nuclear deterrence during conventional wars and by allowing the United States to defend itself and its allies if nuclear deterrence fails. Enhancing counterforce capabilities, however, may also trigger arms races and other dynamics (such as dangerous deployment modes) that exacerbate political and military risks.

In the past, the state of technology bolstered the case for proponents of nuclear restraint: after all, disarming strikes seemed impossible, so enhancing counterforce capabilities would trigger arms racing without creating useful military capabilities. Today, however, technological trends support the advocates of counterforce. Modern conventional military power depends heavily on intelligence, surveillance, and reconnaissance (ISR) capabilities, as well as precision conventional weapons; but those capabilities are also the foundation of a counterforce arsenal. The United States will surely continue to enhance ISR and precision strike — as well as missile defenses, anti-submarine warfare, and cyber techniques — whether or not Washington decides to maximize its nuclear counterforce capabilities. In this new era of counterforce, where arms racing seems nearly inevitable, exercising restraint may limit options without yielding much benefit.

Conclusion

For most of the nuclear age, there were many impediments to effective counterforce. Weapons were too inaccurate to reliably destroy hardened targets; fratricide prevented many-on-one targeting; the number of targets to strike was huge; target intelligence was poor; conventional weapons were of limited use; and any attempt at disarming an adversary would be expected to kill vast numbers of people. Today, in stark contrast, highly accurate weapons aim at shrinking enemy target sets. The fratricide problem has been swept away. Conventional weapons can destroy most types of counterforce targets, and low-fatality nuclear strikes can be employed against others. Target intelligence, especially against mobile targets, remains the biggest obstacle to effective counterforce, but the technological changes under way in that domain are revolutionary. Of the two key strategies that countries have employed since the start of the nuclear age to keep their arsenals safe, hardening has been negated, and concealment is under great duress.

Nuclear weapons are still the ultimate tools of deterrence. Even in the new era of counterforce, nuclear arsenals can be deployed in a manner that protects them from disarming strikes. But technological trends are making the nuclear deterrence mission more demanding, and hence widening the gap between stronger and weaker nuclear-armed countries. The most powerful countries should be able to deploy survivable deterrent forces and field potent counterforce capabilities, while relatively weaker countries with smaller nuclear arsenals will struggle to keep their forces secure. Moreover, the technological trends that are causing this shift show no signs of abating. Weapons will grow even more accurate. Sensors will continue to improve. How countries adapt to the new strategic landscape will greatly shape the prospects for international peace, stability, and conflict for years to come.

Keir A. Lieber is Director of the Security Studies Program and Associate Professor in the Edmund A. Walsh School of Foreign Service at Georgetown University. Kal25@georgetown.edu

Daryl G. Press is Associate Professor in the Department of Government at Dartmouth College.


1. During the last decades of the Cold War, the massive nuclear arsenals deployed by the United States and the Soviet Union also seemed to make a perfect disarming strike impossible.

2. We use a set of unclassified models and geospatial analysis to illustrate the growing effectiveness of counterforce capabilities in Keir A. Lieber and Daryl G. Press, “The New Era of Counterforce: Technological Change and the Future of Nuclear Deterrence,” International Security, Vol. 41, No. 4 (Spring 2017), pp. 9-49.

3. The calculations underpinning this analysis are in Lieber and Press, “The New Era of Counterforce,” and its online appendix, http://dx.doi.org/10.7910/DVN/NKZJVT.

4. It would take approximately 20 minutes for the heavier particles in a dust cloud to settle. In that time interval, relatively slow-moving missiles could launch upward through the dust cloud, but very fast-moving reentry vehicles could not penetrate the cloud to strike the target again. See discussion and sources in Lieber and Press, “The New Era of Counterforce,” pp. 21-22.

5. Theodore Postol, “Monte Carlo Simulations of Burst-Height Fuse Kill Probabilities,” unpublished presentation, July 28, 2015. See also Hans M. Kristensen, “Small Fuze, Big Effect,” Strategic Security blog (Federation of American Scientists), March 14, 2007, https://fas.org/blogs/security/2007/03/small_fuze_-_big_effect/.


These contributions have not been peer-refereed. They represent solely the view(s) of the author(s) and not necessarily the view of APS.