Physics and Society Apr '97 - Articles

Volume 26, Number 2, April 1997

ARTICLES

The articles in this issue stem from talks given at Forum-sponsored sessions at the May 1996 APS/AAPT Joint Meeting in Indianapolis. They represent some of the range of interests of Forum members: careers for physicists, nuclear weapons testing, and energy supply technologies.

The Future of Industrial Careers

Roland W. Schmitt

Trends in Funding of Industrial R&D: After a decade of fast growth in industrial R&D expenditures, they began to flatten in 1984, due largely to a drop in federal support. This was somewhat offset by continued growth of industry's own funds until 1992, when they, too, flattened. Thus the principal changes in industrial R&D funding over the last two decades have been
- a flattening of the growth rate around 1984
- a significant shift in support from federal to industrial sources
- a drop in expenditures beginning around 1992

There are several projections of future industrial R&D spending.

- A forecast for 1996 by the Industrial Research Institute indicates growth of 6%; while this varies by industry, only the Petroleum and Energy and the Fabricated Materials industries are projected to cut their R&D, and neither is likely a big employer of physicists.

- Battelle also makes projections of industrial R&D. They say that a further drop in federal support will be more than offset by industry's own expenditures, the first increase in several years. Battelle says that "The underlying strength of ... industries ... like telecommunications, pharmaceuticals, automotive and aerospace, computers, electronics, and software ... and the rapid changes they are going through mean R&D funding growth is likely to continue for several years."

Conclusion: Indications are that industrial R&D expenditures may have "bottomed" and will start back up - though modestly so - over the next few years.

Even more important than the expenditures on industrial R&D are the changes that have taken place in it over the recent past and will, undoubtedly, persist into the future.

The common way of describing these changes is as the "squeezing out" of long-term, basic research - especially in the corporate labs of major corporations such as Bell, GE, IBM, and DuPont. But I believe this is an inadequate characterization of what's going on. Quite a number of people have written about these changes: they are driven by industry's desire to improve the effectiveness and productivity of the money spent on R&D, just as it is trying to do in all other corporate activities.

Let's look at some of the factors and features driving the changes. Andrew Odlyzko of Bell Labs has written a fine article about what is happening. He says "...the accumulation of technical knowledge has made it easy to build new products and services, and so the most promising areas for research have moved away from the traditional ones that focus on basic components, and toward assembling these components into complete systems." To the extent that these observations are true, the question is how physicists fit in. Before answering that question, let's look at another factor in the changing environment of industrial R&D.

As corporations search for stronger competitive positions in their markets, and for greater efficiency and productivity in their R&D, they are asking themselves how best to acquire and develop the technologies needed for their products. They have found that tapping outside sources is often preferable to doing the job in house.

So, what's happening in corporate R&D is a shift toward what's been called the "Virtual R&D Laboratory". By this is meant a strategy for developing or acquiring the technology needed by the corporation that employs a variety of modes: in-house research; joint ventures; partnerships with suppliers; sponsored research at universities. In short, moving away from the old concept that you must do all of your research in-house: both the pioneering R&D and the R&D for evolutionary innovations.

Companies are increasingly looking for the heart of their competitive advantage - their core competencies - and concentrating on leadership work in that arena. For all the other technology they need, they look for the best, most effective way of acquiring it.

What I've said so far pertains to large firms. There are also small and mid-sized firms. They, too, have to concentrate on core technologies. But to them, having internal competence able to spot and recognize what they need from outside is crucial.

Finally, there is the entrepreneurial world. All I'll say about this is that some physicists are pretty good in this arena, but it is a different life from what you learn as a graduate student. But, don't forget it.

How do physicists fit into this new industrial world? The answer is "very well". The new APS Forum on Industrial and Applied Physics and AIP's new magazine "The Industrial Physicist" are, in my opinion, doing a fine job of letting the physics community generally know what life as an industrial physicist is like. The Chairman of FIAP, Abbas Ourmazd, and Len Feldman wrote a great little piece last November, called "But Is It Physics?" They say that we have to move "...beyond such questions of definition, because ultimately, they do not matter." We have to recognize "...the evolving nature of science and technology, and the central role that can be played by physicists in this evolution."

The opportunity to exercise conceptual powers, inventiveness, originality - all characteristics developed in a physics education - lurks in many, many corners of the industrial enterprise. As Ourmazd and Feldman say, "Physics produces a 'can make anything, can fix anything' attitude" - a trait of immense value in industry.

But, coming back to the present view of physics and physicists and how it affects their fortunes in the job market, one has to say things are mixed. For the past several years the job market for scientists and engineers at the bachelor's level has been cool and, of course, physics has shared this.

One campus placement officer told me recently that demand has exploded this year for the first time in a decade, and on his campus the number of corporate recruiters is up 30%. Moreover, beginning salaries are up. Areas of high demand seem to be E.E., Systems Eng'g, and Computer Sciences, and he also explicitly mentioned master's-level physics majors.

The situation for Ph.D. physicists - again in "conventional" positions - seems a bit more problematical. In 1995 the total production of physics Ph.D.s in the U.S. dropped a bit from 1994 - from 1481 to 1450. For initial employment, 60% of these went to post-doctoral positions and only about a quarter went to "potentially permanent" positions. Among the latter group, 25% went to academic positions, and 58% to industry. The fraction going to academe has been constant over the last few years, while the fraction going to industry has grown at the expense of government labs and non-profits.

In one instance, (my own former lab) out of 66 Ph.D.s hired in 1995, only 3 were physicists! I'm disappointed to have to report this.

The conclusion is pretty simple. An education in physics, at all levels - Bachelor to Ph.D. - is a flexible and versatile asset that is applicable to many types of career. But two things get in the way of taking full advantage of this: the perception of newly minted physicists and the perception of employers! Physicists are not taught to think of themselves as versatile scientists or technologists so they tend not to look broadly enough for exciting career opportunities. And, employers, by and large, don't have a clear perception of what physicists can do. Both sides of this equation have to be tackled. Again, I think that the APS Forum on Industrial and Applied Physics and AIP's magazine The Industrial Physicist will help.


Roland W. Schmitt is Chairman of the Board of Governors of AIP and President Emeritus of RPI and Sr. Vice President (Ret.) of General Electric. rws@aip.org


End of Nuclear Testing

Jeremiah D. Sullivan

The technical basis for the U.S. adoption of a "Comprehensive Test Ban" national policy was a Department of Energy sponsored study conducted by the JASON group [1] during the summer of 1995. My remarks here draw from my participation in that study together with the knowledge and experience I have gained from two decades of work as an academic and as a consultant to the US Department of Defense, Department of Energy, and Arms Control and Disarmament Agency on arms control and defense technologies. The study itself is classified, but the Summary and Conclusions are publicly available [2], and I speak primarily about them.

The primary task of the JASON study was to determine the technical utility and importance of information the United States could gain from continued underground nuclear testing at various levels of nuclear yield for weapons in its enduring stockpile as they age.

The study panel was unique in having four senior weapons scientist-engineers as members together with ten physicists, primarily from the academic community, all of whom had considerable experience in the issues addressed by the panel. The panel had full and unrestricted access to information held by the US nuclear weapons design laboratories: Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL) and Sandia National Laboratories (SNL), and it received full cooperation from these laboratories.

US Policy and Plans Concerning Nuclear Weapons

The key assumptions of the study were given by US policies and plans regarding its nuclear forces operative during the summer of 1995, which remain unchanged today.

1. The US intends to maintain a credible nuclear deterrent.

2. The US is committed to world-wide nuclear non-proliferation efforts.

3. The US will not develop any new nuclear weapons designs. (A policy first announced by President Bush in 1992 and reaffirmed by President Clinton.)

In practice, Policy 1 means that after START II reductions (scheduled for the year 2003), the US will have approximately 3,500 nuclear warheads and associated launchers in its active strategic arsenal, a smaller number of strategic warheads in the inactive reserve, and a sizable number of tactical warheads in reserve as well [3]. The warheads in the active reserve are referred to as the "enduring stockpile." The US is currently dismantling about 1,500 nuclear warheads per year and is manufacturing no new ones.

The enduring stockpile consists of nine distinct designs of various ages; six are LANL designs and three are from LLNL. All are variations of a general design type referred to as "hollow-boosted primaries with fission-enhanced secondaries." Policy 1 also means that the US will require guarantees of the surety, safety, and performance (reliability) of weapons in the enduring stockpile.

Surety means that rigorous measures are in place to ensure that no nuclear weapon is used without authorization or falls into the hands of an unauthorized individual. To this end, physical mechanisms are built into warheads to prevent any nuclear yield should an unauthorized party acquire a weapon and permissive action links and related measures are included in the overall weapon systems to prevent use by authorized holders without approval from the National Command Authority - the President under normal circumstances. None of the technical or operational procedures associated with surety require nuclear testing, so it has never been an issue in debates over nuclear test ban policy.

Safety means that weapon designs and handling procedures are chosen to ensure, to the highest possible levels, that in the normal storage, transport, and basing of nuclear weapons, no accident, fire, collision, or other mishap will result in any nuclear yield or in dispersal of fissile material (weapons-grade plutonium or highly enriched uranium).

Reliability means the yield of the weapon will fall within specified limits even in the worst-case environment: just prior to tritium boost gas supply replenishment, operation in the high neutron environment of a nuclear war, or implosion of the primary at sub-freezing initial temperature.

Questions Facing the US Nuclear Weapons Community in a CTBT Era

1. What technical capabilities will be required to maintain confidence in the enduring stockpile for an indefinite period?

2. What should the response be to aging effects uncovered during inspections of existing weapons?

3. How will human expertise in the science and technology of nuclear weapons be maintained?

4. What types of zero-nuclear-yield experiments are important in maintaining confidence in the enduring stockpile? (These are often referred to as above-the-ground experiments, or AGEX.)

and most importantly,

5. What contributions would testing at low levels of nuclear yield make to maintaining confidence in the enduring stockpile? Such testing might be "permitted" for a finite period of time as a transition into a true CTBT era, or perhaps be allowed indefinitely, in effect defined as "not counting as a nuclear test."

These technical questions do not exist in a vacuum, especially Question 5. Arms control policy, non-proliferation policy, international relations, and a host of other factors are impacted by the answers. My remarks below focus entirely on the fifth question. DOE initiatives to assure answers to the other four questions are contained in its Science Based Stockpile Stewardship Program (see the following paper in this issue).

Physics of Modern Nuclear Weapons

Modern nuclear weapons consist of a primary and a secondary stage [4, 5]. The primary consists of a hollow shell of fissile material (the "pit"), surrounded by an array of high explosive (HE) charges, which when detonated by an appropriate signal causes a spherically symmetric implosion that compresses the pit to a supercritical configuration. At the optimal moment, a burst of neutrons is released into the imploded pit, triggering a flood of fission chain reactions.

In the primaries of all modern nuclear weapons, a boost gas mixture consisting of deuterium (D) and tritium (T) is introduced into the pit just prior to HE initiation. As the fission energy released from the supercritical assembly builds, the DT gas in the pit is heated above the threshold for thermonuclear processes, the most important of which is D + T -> alpha + n. The sudden, intense flood of neutrons created in these fusion reactions induces vastly more fission chains, thereby greatly enhancing the fraction of the fissile material in the pit that undergoes fission. The direct contribution of fusion to the net primary yield is minor; the indirect contribution is very large.

The energy released from the primary couples to the secondary by radiative transport, creating the conditions for thermonuclear burn. The basic ingredient in a secondary is solid lithium deuteride (6Li-D), which contains the deuterium needed for the D-T fusion process and generates the needed tritium in a timely manner through the "catalytic" process n + 6Li -> alpha + T. Depleted or enriched uranium can be added to the secondary to enhance the secondary yield substantially, given the intense flux of energetic 14 MeV neutrons that result from D-T processes there. Modern secondary designs exploit this synergy between fusion and fission, and for this reason a substantial fraction of the overall secondary yield comes from fission processes. This is the basis for the popular rule of thumb for estimating fallout: 50% of the yield from fusion, 50% from fission [6].
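As a check on the energetics of these two reactions, their Q values follow directly from tabulated atomic masses; this is standard textbook nuclear data, not material from the study itself. A minimal sketch in Python:

    # Q values of the two reactions named above, from neutral atomic
    # masses in atomic mass units (standard published tables).
    U_TO_MEV = 931.494  # MeV per atomic mass unit

    masses = {
        "n":   1.008665,
        "D":   2.014102,
        "T":   3.016049,
        "He4": 4.002603,  # the alpha particle, as a neutral atom
        "Li6": 6.015123,
    }

    def q_value(reactants, products):
        """Q = (mass in) - (mass out), converted to MeV."""
        dm = sum(masses[x] for x in reactants) - sum(masses[x] for x in products)
        return dm * U_TO_MEV

    print(q_value(["D", "T"], ["He4", "n"]))    # ~17.6 MeV for D + T -> alpha + n
    print(q_value(["n", "Li6"], ["He4", "T"]))  # ~4.8 MeV for n + 6Li -> alpha + T

Most of the ~17.6 MeV from the D-T reaction is carried by the neutron, which is the source of the energetic 14 MeV neutrons mentioned above.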

Primaries and secondaries present quite different challenges to designers. Primaries are notoriously sensitive to small changes, since the basic implosion process, being hydrodynamic in nature, is accompanied by non-linear effects and a penchant for turbulent behavior. In addition, one has low-density material driving high-density material, which raises the specter of Rayleigh-Taylor and other instabilities. Finally, the primary depends critically on the uniform performance of high explosive, a notoriously idiosyncratic chemical that is sensitive to environmental conditions, grain size, and other details of manufacture.

In contrast, the physics of secondaries, while complex, is generally forgiving because of the speed-of-light transport of radiation versus the relatively slow, though supersonic, speed of material transport. Computer modeling of secondaries, however, presents enormous challenges because of the need for fine time-steps to capture the rates of change of radiation processes while also following hydrodynamic motion operating on much longer time scales.

Why the Study Was Commissioned

Following the successful indefinite extension of the Nuclear Non-Proliferation Treaty in May 1995, a major debate developed within the US security and arms control communities. This debate had three main voices: (i) some argued that the United States needed to maintain the right to do nuclear testing at approximately the half-kiloton level for a fixed period, say ten years, to retain confidence in the safety and reliability of its nuclear weapons; (ii) others argued that retaining a right to do hydronuclear testing (defined roughly as less than four pounds of nuclear yield) indefinitely would be required to retain confidence in the enduring stockpile as it aged; (iii) yet others argued that neither sub-kiloton nor hydronuclear testing was necessary and that neither contributed usefully to maintaining confidence in the enduring stockpile. (Hydro tests, which study implosive assemblies that never achieve a supercritical configuration, were not at issue.)

Conclusions of the Study

The primary task of the JASON study was to determine the technical utility and importance of information the United States could gain from continued underground nuclear testing at various levels of nuclear yield for weapons in its enduring stockpile as they age[2].

Conclusion 1-The first conclusion of the study panel was that the United States could have high confidence in the safety, reliability, and performance margins of weapons in the enduring stockpile. This confidence is based on 50 years of experience and analysis of more than 1,000 nuclear tests, including approximately 150 nuclear tests of modern weapon types in the past 20 years.

Conclusion 2-The report recommends a number of activities that are important for retaining confidence in the safety and reliability of the weapons in the enduring stockpile whether or not testing at the half-kiloton level or less is permitted. The recommended non-nuclear tests, most being extensions of above-ground experiments that have long been part of laboratory operations, would be designed to detect, anticipate, and evaluate potential aging problems and to plan for refurbishment and remanufacture as part of the DOE Science Based Stockpile Stewardship Program.

Conclusion 3-Weapons in the enduring stockpile have a range of performance margins, all of which were judged by the panel to be adequate. The performance margin of a nuclear weapon is defined as the difference between the minimum expected primary yield [worst case] and the minimum yield required to ignite the secondary. Simple measures, such as increasing the tritium load in boosting or changing boost gas reservoirs more frequently to compensate for tritium decay, could substantially increase performance margins at little effort, hedging against unforeseen effects.
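To see why boost gas replenishment matters, recall that tritium decays with a half-life of about 12.3 years (a standard nuclear datum; the reservoir-change intervals below are purely hypothetical illustrations, not figures from the study). A short sketch:

    import math

    T_HALF = 12.32  # tritium half-life, in years

    def tritium_remaining(years):
        """Fraction of an initial tritium load left after `years` of decay."""
        return math.exp(-math.log(2.0) * years / T_HALF)

    for t in (2, 5, 10):  # hypothetical intervals between reservoir changes
        print(f"{t:2d} years: {tritium_remaining(t):.1%} of the tritium remains")
    # 2 years: 89.4%; 5 years: 75.5%; 10 years: 57.0%

Changing reservoirs more often thus keeps the boost gas closer to its full load, which is one of the inexpensive margin-enhancing measures the panel had in mind.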

Conclusion 4-The primary argument for retaining the right to do testing at or around the half-kiloton level for a finite period was that it would give the US valuable information about the effects of aging of weapons designs in the enduring stockpile. Extrapolation using computer codes from yields obtained in tests of primaries that demonstrate the initiation of boosting (half-kiloton, roughly) to an expected full-boosted primary yield is quite robust, provided the codes are well calibrated.

After careful examination the study panel concluded that a finite period of half-kiloton testing would give little or no useful information about aging effects beyond what can be obtained from inspections and other AGEX in a well conceived stewardship program. Namely, nuclear tests at the half-kiloton level for a finite term would help develop codes further and improve theoretical understanding of the boosting process but would not contribute to understanding aging effects. Similarly, such testing would not provide useful checks of refurbished or remanufactured primaries made after the time the testing ceased. In view of its limited utility, the study panel found that half-kiloton testing for a limited time had a very low priority in comparison to the activities recommended in Conclusions 2 and 3.

To be useful, half-kiloton testing would need to be continued indefinitely. This, the study panel observed, would be tantamount to converting the CTBT into a threshold test ban treaty.

Conclusion 5-The study panel concluded that testing of nuclear weapons at any yield below that required to initiate boosting is of limited value to the United States whether done for a finite period or indefinitely. This yield range includes 100 ton testing and hydronuclear testing. The hydronuclear case merited special discussion.

The arguments for continued hydronuclear testing were that such tests would provide a valuable tool for monitoring the expected performance of weapons in the enduring stockpile as they age. The basic idea would be to do a set of baseline hydronuclear tests of (modified) primaries of the types now in the stockpile and to follow up in the future with regular hydronuclear testing of primaries drawn from the stockpile and similarly modified. The test results would be compared to the baseline data.

The panel concluded that a persuasive case could not be made for the utility of hydronuclear testing to detect small changes in the performance of primaries. The fundamental problem with hydronuclear testing is that primaries need to be modified drastically to reduce the nuclear yield to a few pounds of TNT equivalent. This can be done either by removing an appropriate amount of the fissile material from the pit and replacing it by non-fissile material, e.g., depleted uranium, or by leaving the pit intact and replacing the boost gas in the pit by material that will halt the implosion just after the point of supercriticality is achieved. Neither method exercises a pit through its normal sequence, and so one does not obtain data relevant to the real implosion. Extrapolation of hydronuclear test results to actual weapons performance would be of doubtful reliability and low utility.

Hydronuclear testing can be useful for checking one-point safety of a primary design, as was done in the past by the U.S. [7] Today, however, all US weapon designs in the enduring stockpile are known and certified to be one-point safe to a high degree of confidence, so hydronuclear testing is not needed for safety tests. Furthermore, the development of 3-D codes further reduces any need to do actual tests to evaluate issues of one-point safety. Modeling capabilities will further improve as computer systems and codes advance.

Conclusion 6-A repeated concern raised by the nuclear weapons community about entering into a CTBT of unlimited duration is the ultimate unpredictability of the future. No one can predict with certainty the behavior of any complex system as it ages. The study panel found that should the United States encounter problems in an existing stockpile design that lead to unacceptable loss of confidence in the safety or reliability of a weapon type, it is possible that testing of the primary at full yield, and ignition of the secondary, would be required to certify a specified fix. Useful tests to address such problems generate nuclear yields in excess of 10 kt. A "supreme national interest" withdrawal clause - standard in all arms control treaties - would permit the United States to respond appropriately should such a need arise. It is highly unlikely that even major problems with one or two designs would cause the United States to withdraw from the CTBT, given the political implications involved. Major problems with all or almost all of the designs in the enduring stockpile would most likely be required.

Conclusion 7-The study panel observed that its Conclusions 1-6 were consistent with US agreement to enter into a true zero-yield Comprehensive Test Ban Treaty of unending duration, which includes a supreme national interest clause.



Jeremiah D. Sullivan is with the Department of Physics and Program in Arms Control, Disarmament and International Security, University of Illinois at Urbana-Champaign. jdsacdis@uxl.cso.uiuc.edu

References

1. Study panel members were: Sidney Drell (Chair), John Cornwall, Freeman Dyson, Douglas Eardley, Richard Garwin, David Hammer, Jack Kammerdiener, Robert LeLevier, Robert Peurifoy, John Richter, Marshall Rosenbluth, Seymour Sack, Jeremiah Sullivan, and Fredrik Zachariasen.

2. Congressional Record-Senate, Vol. 141, No. 129, pp. S11368-S11369, August 4, 1995; Arms Control Today, "JASON Nuclear Testing Study," pp. 36-37, September 1995. (At this time no other portion of the report is unclassified.)

3. R. S. Norris and W. M. Arkin, "Natural Resources Defense Council Nuclear Notebook," Bulletin of the Atomic Scientists, July/August 1995, pp. 76-79.

4. P. P. Craig and J. A. Jungerman, Nuclear Arms Race: Technology and Society, pp. 185-190, McGraw Hill, 1986.

5. D. Schroeer, Science, Technology, and The Arms Race, pp. 62-65, John Wiley and Sons, 1984.

6. S. Glasstone and P. J. Dolan, The Effects of Nuclear Weapons: 3rd Edition, United States Government Printing Office, 1977.

7. R. N. Thorne and R. R. Westervelt, "Hydronuclear Experiments," LA-10902-MS UC-2, Los Alamos National Laboratory, 1987.


Stewardship of the Nuclear Stockpile Under a Comprehensive Test Ban Treaty (CTBT)

Philip D. Goldstone

In July 1993, President Clinton directed that means other than nuclear testing be sought to maintain the safety, reliability, and performance of U.S. weapons. This was elaborated later that year in a Presidential Decision Directive as well as in Congressional language. The 1994 Nuclear Posture Review established a "lead but hedge" policy of seeking further nuclear stockpile reductions (toward START II and if possible beyond) and established that there was, for the first time in decades, no requirement for new-design U.S. nuclear warhead production. But it also required the DOE to maintain the stockpile, maintain a readiness to test if required, and sustain the capability to develop and certify new weapons in the future if needed. In the same year, the President reaffirmed the role of nuclear deterrence in U.S. national security strategy. In declaring the U.S. intent to negotiate a "zero yield" test ban (August 1995), he codified the "supreme national interest" in confidence in stockpile safety and reliability. In addition he established six "safeguards" as conditions for our entry into the CTBT.

These safeguards include a science-based stockpile stewardship program, a new annual certification procedure, and the possibility of conducting necessary nuclear tests even under a CTBT as a matter of supreme national interest, should this science-based program be unable to provide the needed confidence in the stockpile at some time in the future. When ratifying the START II treaty in 1996, the Senate also reaffirmed the U.S. commitment to stockpile stewardship and other nuclear-security capabilities.

Technical Challenge. What, then, is the challenge of stewardship without nuclear testing? The answer lies in three broad areas:

1. the certainty that issues will arise in ever-aging weapon systems which will require evaluation and repair;

2. the important role that nuclear testing played in validating not only the operation of new warheads and bombs, but also the quality of production practices and the quality of the scientific tools and expert judgment applied to evaluating weapon safety, reliability, and performance;

3. the scientific unknowns and technical gaps that we must necessarily fill in order to provide sufficient quality in such evaluations for the U.S. stockpile in the future. (On the other hand, a far lower level of science and technology is needed just to enter the nuclear arena; 1945's "Little Boy" was used first in war without a nuclear test.)

Aging. Historically, retirement and replacement of old nuclear weapons with new or different types kept the average age of the U.S. stockpile relatively small. Until as late as 1975 the stockpile was, on average, less than a decade old. Through the 1980s, its average age was roughly constant, about 13 years. Selective retirement of older weapons in the large stockpile reductions at the end of the Cold War reduced the average age for a time. But without new weapons replacing old, the stockpile age is now increasing year by year even as we further dismantle. In 1997/1998, the average age of the U.S. stockpile will exceed our historical experience, and by 2005 it will exceed 20 years. Many individual weapons will, of course, be considerably older than the average.

Instead of replacing old weapons with new types tested through underground nuclear explosions, now the task is assessment, revalidation, and renewal of the stockpile without nuclear testing. This requires a continuous process of surveillance, assessment, and response as an organizing principle. Specific revalidation and life extension of individual stockpile weapon types, and refurbishment of their nuclear packages, will be a part of this process so that each system in the stockpile has periodic intensive review and renewal in addition to annual certification of all systems. The first systematic "dual revalidation" of a weapon system (i.e., involving formal peer review with both weapon design laboratories) began in 1996. A life-extension program for another warhead is also under way.

In general, stockpile aging will raise a wide range of issues that will require enhancing the scientific and technical capabilities that can be applied to the surveillance-assessment-response process. High explosive and other organic-compound components will undergo chemical degradation. Materials may corrode and radiation damage may occur in some. Cracks may occur in components. Even plutonium itself can age through alpha decay and the ingrowth of helium in the material. Evaluating the effects of these changes will be complex without nuclear testing since in general, changes or defects may be localized and not amenable to symmetry assumptions, requiring three-dimensional analysis.

Aging of weapon components is known to occur. Roughly 14,000 nuclear weapons have been disassembled and examined since 1958 as part of a rigorous surveillance program. There have been a number of "findings" from this process, which are called "actionable" when corrective action has been needed to preserve safety or reliability. Of about 400 distinct actionable findings since 1958, over 100 have involved the nuclear explosive package itself, and most of those (but not all) have involved the weapon's primary stage,1 which includes both high explosive and plutonium. Past defects have on occasion systematically affected thousands of individual weapons. From the historical data, age-related findings (e.g. due to deteriorating components) can appear at any time; but there is little data on weapons more than 25 years after their introduction to the stockpile, since there were so few of these. The data suggest that statistically, one or two of these "actionable" defects could be discovered each year in the continuing stockpile through formal surveillance and other processes.

Aging: an example. One example of weapon aging and of the role of nuclear testing in validating predictions of performance, is the story of the now-retired W68 warhead for the Poseidon submarine-launched ballistic missile. A premature degradation of the high explosive in that weapon, which would have ultimately made the weapon inoperable, was found through routine surveillance. The weapons were disassembled and the high explosive replaced by a different and more chemically stable formulation that had, it turned out, been used successfully in earlier nuclear tests of the W68 design. Unfortunately, some of the other materials that had to be replaced in rebuilding this warhead were no longer available from the commercial infrastructure, so additional changes in the rebuilt weapon were required. The best available computational simulations, normalized to nuclear test data, were used to evaluate the repair and assure that the fix did not compromise the weapon's performance or reliability. A nuclear test was performed to validate this answer; however, the test data showed a reduced yield compared to calculational expectations, for which the cause remained unclear. While the reduced yield was ultimately deemed acceptable, this result did require the military to modify some maintenance procedures to assure reliability of the warhead over the full range of potential operating conditions.2

Safety. Past actionable findings have included those related to weapon safety assurance as well as to performance and reliability. While our current nuclear weapons are judged, on the basis of considerable data, to be adequately safe against credible accidents, the nation needs to be able to evaluate weapon safety with the highest possible confidence. U.S. weapon experts will, for example, continue to address questions about safety in complex abnormal environments (for example, with multiple insults). They also must be able to ensure that aging or remanufacture of components does not compromise the appropriate safety margins. For example, age-induced changes could alter the expected behavior of explosives or fire-resistant features in accidents.

Remanufacture? Remanufacture and replacement of weapon components is obviously an important part of maintaining stockpile safety and reliability. Why not simply rely on routine remanufacture of nuclear weapons to original specifications, keeping their average age down? For one thing, large-scale production to ensure replacement rates comparable to those of the last 50 years would be more costly in terms of production infrastructure, and entail additional materials and environmental management issues. For another, identical remanufacture (in the sense of full replication of components and manufacturing processes to original specifications, without detailed evaluation via computation and experiments) is not generally feasible, and is harder with each passing decade. There are several reasons for this. Many of the Cold War manufacturing process lines have been disassembled, so both the facilities and the people involved in fabrication will be different by necessity. As in the W68, there are commercial materials, practices, and nuclear weapon manufacturing processes that either have or will become unavailable or obsolete, sometimes because of environmental and health concerns. Stockpiling obsolescent materials would not be a solution, because they too will age and the associated materials processing practices and knowledge cannot be "stockpiled" indefinitely either. For technologically complex systems, establishing a "complete" set of specifications adequately prescribing all relevant processes (e.g. those that could affect dynamic material behavior) is generally problematic.

Both aging and remanufacture can introduce changes which may affect the dynamic behavior of a weapon, either in the way it is designed to function, or in response to potential accidents. Since we need confidence in the outcome, a capability for component remanufacturing and replacement is essential, but would not be sufficient without adequate means for evaluation. In essence, weapon scientists will find themselves assessing the health of their "patient" and making judgments about whether, or when, to subject the patient to "surgery"--recognizing that the process of surgery entails potential risks and must itself be carefully evaluated.

Stewardship. Present-day nuclear weapons are complex objects. There are many technical issues associated with developing an adequately fundamental understanding of the consequences of aging and manufacturing processes on weapon safety and reliability. To predict when refurbishment will be needed before problems arise and avoid "remanufacturing crises", the stewardship community has begun to develop enhanced surveillance and assessment processes so that we may anticipate aging related phenomena, and predict stockpile lifetime issues perhaps ten to fifteen years into the future. This in turn will require linking existing engineering and nuclear test data, assessment of disassembled components, "forensic" surveillance techniques, computational modeling of materials phenomena and processes up to fully integrated simulations of weapons behavior, and a variety of laboratory experimentation to develop fundamental data and test theoretical models and understanding. A significant fraction of the necessary research involves fundamental science and technology; most of it is challenging.

The technical challenges of stewardship include the development of improved scientific data and models, as well as the tools necessary to explore and apply them. For example, to evaluate nuclear weapons primaries, improvements are needed in current U.S. capabilities for flash radiography of materials dynamics experiments and non-nuclear hydrodynamic tests--in which inert mockups of primaries (e.g. with tantalum or depleted uranium replacing the plutonium) are imploded. These tests help assess the quality of the implosion that in a real weapon would ultimately lead to the ignition of the deuterium-tritium boost gas, and can map the density so that criticality can be predicted. Since localized, aging-related perturbations, as well as hypothetical accident scenarios, generally would result in asymmetric hydrodynamic behavior, three dimensional imaging of very high density, high-velocity objects with sub-millimeter resolution is needed. The Dual-Axis Radiographic Hydrodynamic Test facility under construction at Los Alamos will provide the first two-view, high-resolution capability available to the U.S. when it is completed near the end of this decade. Of course, since these mockup experiments do not produce a nuclear explosion, the link to weapons safety or reliability must occur through computational simulations.

There is a related need for improved physical models and data on the dynamic behavior of materials at high strain and strain rate (i.e. constitutive properties including equation of state, spall, and dynamic deformation). To properly predict the effects of age-induced material changes or defects, such models must be incorporated, with sufficiently few assumptions or simplifications, into computational simulations for weapon performance and safety assessment. Plutonium is a unique material; experiments on its specific properties, including subcritical experiments at the Nevada Test Site, are needed to provide essential data. Such experiments are consistent with the CTBT since there is no nuclear explosion (DOE recently announced its decision to conduct subcritical experiments at the NTS). Similarly there is a need for more scientifically predictive models of high explosive initiation, burn, and detonation, including the effects of aging on these phenomena.

The detonation of high explosive and the subsequent release of nuclear energy from a weapon primary result in extreme conditions of high energy-density in which such issues as fluid instabilities and the nonlinear development into turbulence, material properties, and radiation transport must be better understood. We will need to use accurate and predictive models of fusion ignition and burn, both in the boost process in the primary and in the secondary. It will be necessary to link phenomena at vastly different scales to understand the effects of microscopic aging-related structural and chemical changes on the dynamic properties of the macroscopic engineering materials in weapons.

A variety of experimental capabilities would provide these data and help develop and validate theoretical models. But the integrating factor in science-based stockpile stewardship, tying together the scientific and engineering data (including past nuclear test data) and models, is high performance computing. Without testing, weapon performance or safety must ultimately be calculated. Because of the 3-D complexity of the task and the need to replace currently oversimplified models, up to 10,000 times today's capabilities will be needed (platforms capable of 10-100 teraFLOPS, handling many terabytes of data, and the computational techniques to exploit them). The Accelerated Strategic Computing Initiative, organized to develop this computational capability, has engaged supercomputer manufacturers to develop what will be the world's most powerful supercomputers--and produced some early hardware and software successes, like the recent achievement of a 1 teraFLOPS milestone.

Reducing Global Nuclear Danger. Stewardship of the nuclear stockpile without testing will be a grand technical challenge--one that the US is committed to meeting to provide unquestioned continuity of U.S. nuclear deterrence. The reason for science-based stewardship in lieu of nuclear testing is to help reduce global nuclear dangers while maintaining security in a nuclear world. Ultimately, any path toward reduced global nuclear dangers must be pursued in the context of broader processes of global security; major reductions of nuclear inventories have not stood completely separate from other factors, nor will they. For example, it can be argued that the step to START-II was only practically achievable as a result of the sociopolitical changes in the then-Soviet Union. Furthermore, it is the net dangers that need to be reduced, and this may not always equate to simply reducing nuclear inventories or postures.

Further steps along the path to reduced nuclear dangers will have to address the potential for altering the balance of security and deterrence equations as stockpiles become smaller, and acknowledge that the possibility of creating nuclear weapons cannot be eradicated from human knowledge. Many factors will be intertwined with, and may pace, the achievability of future arms reductions.3 For example, if reductions continue toward zero, developing acceptable national and international responses to latent nuclear forces in times of crisis will be critical. Reducing nuclear dangers, it seems, will not only be a technical challenge; it will also be a grand challenge of policy and diplomacy.


Philip D. Goldstone is at Los Alamos National Laboratory, Los Alamos, New Mexico 87545; pgoldstone@lanl.gov. This article summarizes and updates the paper by John D. Immele and the author, presented at the American Physical Society 1996 Annual Meeting in a session on the CTBT sponsored by the APS Forum on Physics and Society. Immele was the Program Director for Nuclear Weapon Technology at Los Alamos at that time. Acknowledgements go particularly to R. Wagner and T. Scheber for valuable discussions on issues of latency and stability.

Footnotes

1 "Stockpile Surveillance: Past and Future", Johnson et al. 1995, available as Sandia National Laboratory Report SAND95-2751. This report contains considerable unclassified summary data on stockpile surveillance findings from 1958 to 1995.

2 Report to Congress on Stockpile Reliability, Weapon Remanufacture, and the Role of Nuclear Testing, G.H. Miller, P.S. Brown, and C.T. Alonso, Lawrence Livermore National Laboratory, 1987, UCRL-53822.

3 See for example "An Evolving Nuclear Posture," A. J. Goodpaster et al., Henry L. Stimson Center Report #19, 1995, and "Phased Nuclear Disarmament and U.S. Defense Policy," M.E. Brown, Henry L. Stimson Center Occasional Paper #30, October 1996.


The Role of Fusion in the Future World Energy Market

John Sheffield

Introduction

Today, nuclear energy contributes some 6% of the world annual energy use of about 9,000 million tonnes of oil equivalent (Mtoe/a; 1 toe = 42 GJ). Renewables contribute 14%. Fossil fuels account for the bulk, about 80%, of the energy use. In the future, it is postulated that these roles will change as energy demand rises and cheap oil and gas are depleted, even without consideration of global warming effects. The changes will be driven mainly by the developing areas, which have relatively smaller fossil fuel reserves.

A study of historical trends suggests that population growth rate may be viewed as depending, roughly, on a mixture of two factors relating to culture and standard of living respectively [1, 2]. A good surrogate for standard of living appears to be the annual energy use per capita [1]. To illustrate how energy demands might evolve, the author has developed a simple relationship, which permits coupled annual values of population growth rate and energy use per capita to be derived for each part of the developing world - Africa, China, East Asia, South Asia, and Latin America. The growth rate is updated every decade.

Annual growth rate = (Ec - Ea) / (160 Ea^0.38),

where Ea is the annual energy use per capita, adjusted for efficiency gains obtained after the year 2000 (i.e., a 50% efficiency improvement allows a given amount of primary energy to be worth 1.5 times as much in supporting the standard of living), and Ec is the annual effective energy use per capita at which the population growth rate is zero (typically 2 to 3 toe/cap.a today). Ec may be viewed as a measure of the cultural factors.
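For readers who want to experiment with this relation, here is a direct transcription in Python. The sample Ea values are illustrative only; Ec = 2.5 is taken from the South Asia example discussed below, not from a full regional data set:

    def annual_growth_rate(Ea, Ec):
        """Population growth rate (per year) from annual energy use per
        capita Ea and the zero-growth level Ec, both in toe/cap.a."""
        return (Ec - Ea) / (160.0 * Ea ** 0.38)

    # With Ec = 2.5 (the paper's initial value for South Asia):
    for Ea in (0.5, 1.0, 2.0, 2.5):
        print(f"Ea = {Ea:.1f} toe/cap.a -> growth rate = "
              f"{annual_growth_rate(Ea, 2.5):+.2%} per year")
    # 0.5 -> +1.63%; 1.0 -> +0.94%; 2.0 -> +0.24%; 2.5 -> 0.00%

As the relation requires, the growth rate falls to zero as Ea approaches Ec, which is how the model stabilizes each population as energy use per capita rises.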

Reference Case

The reference case for projecting world population and energy use assumes that steady efficiency gains will be made across the board from 2000, rising to 37.5% by 2050, 75% by 2100, and 100% by 2200.

The World Bank population projections [3] and the energy per capita predicted by the IEA [4] for 2010 are used for the OECD countries of Europe, the East, and North America, and for the Former Soviet Union, Centrally Planned Europe, and the Middle East. These areas have low population growth rates or, in the latter case, an unclear connection of population to energy use. The energy per capita is decreased in proportion to the efficiency improvements. The alternative case, in which these areas take efficiency improvements as an increase in standard of living, is also calculated.

For the developing areas, the primary annual energy per capita is increased systematically beyond the level predicted for 2010 by the IEA so as to stabilize each population in the time period 2100 to 2150. The starting factor Ec is chosen to match present-day trends and is allowed to decrease with time; see figure 1 for the example of South Asia, for which Ec = 2.5 initially and decreases to 2.0 by 2100.

Historical values are used for 1971-1992; predictions of the IEA [4] and the World Bank [3] for 1992-2010; and the reference case of this paper for 2010 to 2100. Interestingly, this approach leads to population evolutions for the developing areas similar to those of the World Bank's standard case [3].

Reference case of this paper: designed to stabilize population by 2150.

Area            Ec (2010 -> 2100)   Stable Population   Peak Energy Demand
Africa          2.6 -> 2.2           3,140 million       3,270 Mtoe/a
China           2.5 -> 2.0           1,700 million       2,250 Mtoe/a
East Asia       3.0 -> 2.4           1,140 million       1,530 Mtoe/a
South Asia      2.5 -> 2.0           2,850 million       3,000 Mtoe/a
Latin America   3.5 -> 2.8             940 million       1,410 Mtoe/a
World                               11,810 million      15,900 Mtoe/a

Value of Efficiency Gains

For the reference case, the world peak annual energy use is 15,900 Mtoe/a.

a.) If the efficiency improved more slowly, e.g., to 25% by 2050, 50% by 2100, and 95% by 2200, the peak energy use would be 18,100 Mtoe/a. The increase in energy demand from 2010-2100 would be 138,000 Mtoe, comparable to the proven plus projected recoverable natural gas reserves [5].

b.) If the developed world, the Former Soviet Union, Centrally Planned Europe, and the Middle East chose to take efficiency gains as an increase in their standard of living, rather than as a reduction in energy use, the total energy use would be 20,800 Mtoe/a and the energy increase from 2010-2100 would be an additional 110,000 Mtoe.

Total proven plus projected recoverable oil reserves are 212,000 Mtoe [5]. Thus, uncertainties in this simple assessment of needs are comparable to the projected readily accessible oil and gas reserves.

It appears that the availability of cheap oil and gas may be a one-time chance for the developing areas to increase their energy use, improve their standards of living, and stabilize their populations prior to the development and deployment of the long-term (renewable) energy sources.

Renewable Energy Deployment

A substantial potential is believed to exist in the world for renewables: an aggressive approach to biomass is described in reference [6], 4,900 Mtoe/a; hydropower potential is put at 780 Mtoe/a (electric) [5]; and a substantial potential for wind power, 4,540 Mtoe/a (electric), is described in reference [7]. Roughly half of these energy resources are in developing countries. The rest of the energy will have to be supplied by fossil, solar, geothermal, and nuclear (fission and fusion) sources. While China and Latin America have large fossil energy resources compared to their expected demands, Africa, East Asia, and South Asia do not. An example distribution of world energy sources for the reference case is shown in figure 2. For each area, preference was given to indigenous energy sources; on average, 55% of the potential biomass, 95% of the hydropower, and 20% of the wind power was deployed, with the highest percentages of the potential in the areas with the greatest need.

The distribution of the balance - 5,000 Mtoe/a of solar and nuclear (fission + fusion) energy use - will depend upon the relative costs, the technical capabilities of each area, and public acceptance. Where large, centrally generated power is required, the nuclear options offer distinct advantages. The alternative to their use is almost certainly a greater use of fossil fuels (coal), and a more rapid depletion of fossil resources.

Fusion Energy Deployment

Substantial progress has been made in recent years in both magnetic and inertial fusion: achievement of 10 MW of fusion power in the Tokamak Fusion Test Reactor and the successful Halite-Centurion tests in the inertial area; demonstration that energetic fusion products behave as expected; calibration of plasma modelling codes; and demonstration of some of the key technologies. These successes support the design studies of the International Thermonuclear Experimental Reactor (ITER), the National Ignition Facility (NIF), and a similar inertial facility in France. Assuming successful operation of these facilities, and success in the development of radiation-resistant materials and a heavy-ion-beam driver for inertial fusion, a Demonstration Fusion Power Plant could be operating by 2030. A commercial power plant might then be operating by 2050. The most likely initiators of the fusion era are countries which have deployed substantial nuclear power, plan more and will need even more as cheap fossil fuel becomes scarce, and have the technical capability; e.g., Japan and Europe.

Build-up rates may be constrained by the energy payback time - about 1.5 years for some reference fusion plants - and by the tritium build-up rates needed to support new plants. Consideration of these factors suggests that a doubling period of 5 years or less should be possible. Following the demonstration of commercial fusion energy, a systematic deployment of fusion plants is anticipated. It may be expected that fusion and fission plants will be built and operated by international consortia, allowing deployment in countries which do not have all the in-house skills needed.

For the reference case, the following examples are considered for 2100 and 2200. The energy quoted is the replacement value for fossil fuel at 40% thermal-electric conversion efficiency. Note that 500 Mtoe/a (fossil replacement value) corresponds to 350 fusion reactors of 1000 MWe capability operating at 75% capacity factor (the arithmetic is sketched after the table below).

Example    2100 (Mtoe/a)    2200 (Mtoe/a)
1                 50                 500
2                100                1000
3                150                1500
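The 350-reactor conversion can be checked directly from the article's own definitions (1 toe = 42 GJ, 40% thermal-electric efficiency); the short Python sketch below is illustrative, with the plant size and capacity factor taken from the text:

    TOE_J  = 42e9     # joules per tonne of oil equivalent (1 toe = 42 GJ)
    YEAR_S = 3.156e7  # seconds per year

    def reactors_needed(mtoe_per_year, eta=0.40, p_mwe=1000.0, cf=0.75):
        """Fusion plants whose electricity displaces `mtoe_per_year` of
        fossil fuel burned at thermal-electric efficiency `eta`."""
        electricity_j = mtoe_per_year * 1e6 * TOE_J * eta  # displaced output, J/a
        per_plant_j = p_mwe * 1e6 * cf * YEAR_S            # one plant's annual output
        return electricity_j / per_plant_j

    print(round(reactors_needed(500)))  # ~355, i.e. the "350 reactors" quoted above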
The consequences of not having this energy available, within this model, would be massive population increases in the deprived areas and, ultimately, the same demand for energy but from more poverty-stricken populations! It is essential, therefore, that energy efficiency improvements and all energy sources, including fusion, be developed and deployed rapidly to ensure that the converse occurs: population stabilization with a decent standard of living for all!


John Sheffield is at Oak Ridge National Laboratory, Sheffieldj@ornl.gov. The submitted manuscript has been authored by a contractor of the U.S. Government under contract No.DE-AC05-96OR22464. Accordingly, the U.S. Government retains a nonexclusive, royalty-free license to publish or reproduce the published form of this contribution, or allow others to do so, for U.S. Government purposes.

References

1.) J. Goldemberg and T. B. Johansson, "Energy as an Instrument for Socio-Economic Development", United Nations Development Programme, New York, NY, p. 9, 1995.

2.) F. Duchin, "Global Scenarios about Lifestyle and Technology", The Sustainable Future of the Global System, United Nations University, Tokyo, October 1995.

3.) E. Bos, My T. Vu, E. Massiah, and R. A. Bulatao, "World Population Projections: 1994-95 Edition", published for the World Bank by The Johns Hopkins University Press, Baltimore and London, 1994.

4.) International Energy Agency, "World Energy Outlook, 1995 Edition", OECD Publications, 2 rue Andre Pascal, Paris, France, 1995.

5.) World Energy Council, "1995 Survey of Energy Resources", Holywell Press Ltd, Oxford, England, 1995.

6.) T. B. Johansson, H. Kelly, A. K. N. Reddy, and R. H. Williams, in T. B. Johansson et al. (eds), Renewable Energy: Sources for Fuels and Electricity, Island Press, Washington, 1993.

7.) B. Sorensen, Annual Review of Energy and the Environment, Vol. 20, Annual Reviews Inc., Palo Alto, CA, USA, p. 387, 1995.


Looking Forward: The Status of Renewable Technologies

Allan R. Hoffman

Introduction

Renewable electric technologies have been under development since the mid-1970s. Considerable progress has been made in improving technical performance and reducing capital and energy costs. As the technologies have matured, their use has increased, both in the U.S. and in other countries. This paper outlines the factors that are encouraging the growth of renewables, describes the current policy environment, and discusses the status of renewable electric technologies.

Converging trends

Improving technological performance and falling costs have enabled renewables, under certain conditions, to be the low-cost option for generating power. In addition to standing on their own merits, renewable technologies are being encouraged by external driving forces: increasing environmental awareness, availability of new technology options, world energy demand growth, increasing business interest, and energy security.

Increasing environmental awareness--It is very clear that if the rest of the world powers up the way we did, the environmental impacts could be very serious. If we do not want countries like China or India to use their coal we have to be willing to offer them some affordable alternatives. We have the opportunity to sell them vehicles that are less polluting, or renewable technologies that reduce their dependence on coal. DOE is working to develop advanced energy systems for cars that do not require petroleum and to develop various forms of renewable energy that replace coal.

Availability of new technology options--There are many new technology options that are becoming available, both in energy supply and in the more efficient use of energy. The most notable new technologies on the horizon are advanced fission, fusion, efficient gas turbines, renewable energy, storage technologies, and hydrogen. Efficient gas turbines are a reality and offer competition for renewables today. Natural gas could be the transition fuel to a renewable/hydrogen economy; gas and renewables are natural partners in many ways. New storage technologies are being developed by DOE in partnership with U.S. industries.

World energy demand growth--Increased deployment of renewables is also being driven by the recognition that renewables are competing for a total target market in the trillions of dollars. The World Bank has estimated that, over the next 30-40 years, developing countries alone will require 5 million megawatts of new generating capacity. This compares with a total world capacity of about 3 million megawatts today. At a capital cost of $1,000-2,000 per kilowatt, this corresponds to a $5-10 trillion market, exclusive of associated infrastructure costs.
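The quoted market size is just the product of the capacity and cost figures; a one-line Python check (figures as given in the paragraph above):

    capacity_kw = 5e6 * 1e3  # 5 million MW of new capacity, expressed in kW
    for dollars_per_kw in (1000, 2000):  # World Bank capital-cost range
        print(f"${dollars_per_kw}/kW -> "
              f"${capacity_kw * dollars_per_kw / 1e12:.0f} trillion")
    # $1000/kW -> $5 trillion; $2000/kW -> $10 trillion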

Increasing business interest--There is increasing understanding among corporations that renewable energy can mean big business and high profits in the longer term. An example of this interest is provided by Jeffrey Eckel with EnergyWorks, a joint venture between PacifiCorp and Bechtel:

"The market for human-scale energy systems rather than gigantic projects is enormous... You have got 2 billion people in the world today who do not have a light bulb, many of these people cannot be reached by traditional power lines."

--Jeffrey Eckel, CEO, EnergyWorks, October 17, 1995

Energy security--Currently nearly 50% of our petroleum needs are met with imports, primarily from Saudi Arabia, Venezuela, Canada, Mexico, and Nigeria, resulting in $50 billion of revenue going overseas. That number may increase to $100 billion over the next ten years as we increase imports from the rest of the world. Energy security requires us to develop alternatives so that we can deal effectively with the next oil price shock and with increasing competition for petroleum resources.

Current policy environment

The current policy environment strongly supports deployment of renewables and energy efficiency technologies. The following statements underline the Clinton Administration's support and encouragement of these programs:

"The administration will launch initiatives to develop new, clean renewable energy sources that cost less and preserve the environment." -- President Bill Clinton, from A Vision of Change for America (1993)

"The Administration's energy policy promotes the development and deployment of renewable resources and technologies...

The Administration supports fundamental and applied research that helps the renewable industry develop technologically advanced products...

The Administration is working throughout the Federal Government to identify and overcome market barriers and accelerate market acceptance of cost-effective renewable energy technologies." -- National Energy Policy Plan, July 1995.

One example of the Administration's commitment to sustainable development is DOE's program to showcase energy efficiency and renewable technologies at the 1996 Summer Olympic Games in Atlanta. Demonstrations of photovoltaic technologies, solar thermal water heating, solar thermal dish/Stirling generators, geothermal heat pumps, fuel cells, energy efficiency technologies, and alternative fuel vehicles will be presented at the Olympics. The Olympic showcase will also include a display of the "Cool Communities" concept, in which strategically planted trees and light-colored building materials are used to reduce air temperatures in urban areas.

A vision of the future

Given the prospect that fossil fuel supplies will peak and then begin to diminish before the middle of the next century, and the need to move to sustainable economic systems, there should be a gradual transition to a global energy system largely dependent on renewable energy. Previous energy transitions, e.g. from wood to coal and coal to oil, have taken 50 to 100 years, and this one should be no different. Over this time period, hydrogen may well emerge as an important energy carrier to complement electricity, given its ability to be used in all end-use sectors and its benign environmental characteristics.

In this vision, all renewables will be widely used: biomass for fuels and power generation; geothermal in selected locations for power generation and direct heating; wind, hydro, photovoltaics (PV), and solar thermal for power generation. Large amounts of renewable power generated in dedicated regions (e.g. wind in the midwest and solar in the southwest) will be transmitted thousands of miles over high-voltage direct-current power lines to load centers. Electricity and the services it provides will be available to almost everyone on the planet.

Technology challenges and accomplishments

This vision of the future can only be realized if substantial investments in renewable development are made today. A brief outline of technology challenges and accomplishments in PV, wind, solar thermal, geothermal, and biomass is presented here. Cost trends for these technologies, shown in Figure 1, indicate that costs have been falling steadily for a number of years.

Photovoltaics--Recent accomplishments:

--- Achieving world-record efficiencies of 17.7% for copper-indium-gallium-diselenide (CIGS) thin-film polycrystalline cells and 10.9% for amorphous silicon cells

--- System lifetimes have doubled since 1991

--- PV system costs have been reduced by 30% since 1990

--- Companies involved in DOE's PV Manufacturing Technology (PVMaT) pro