Volume 22, Number 3 July 1993

ARTICLES

Symposium On Physicists in Environmental Affairs

Physics and Society presents here articles based on the four talks given at an invited session sponsored by the Forum on Physics and Society at the March 1993 APS meeting in Seattle.


Technology for Containment of Underground Wastes: Containment Now

J.G. Dash

Effluents from underground storage tanks and waste dumps of toxic and hazardous materials are entering or threatening aquifers that supply drinking water in many regions of the nation. Under current policy, cleanup cost is estimated at $10^11 to $10^12 over several decades (1), although no existing technology provides complete remediation within this cost estimate (2). Instead of proceeding with costly yet inadequate cleanup, we should safeguard water supplies immediately by waste containment, while reconsidering current policy and buying time for more R&D on improved methods of remediation.

Among several available containment methods, cryogenic barriers offer advantages of effectiveness and economy (3). Barriers of frozen ground can provide a complete enclosure of buried wastes in virtually all soils and site geometries. The enclosure is effected by freezing pipes inserted around and below the site, so as to form a boat-shaped rib cage (Fig. 1). The pipe insertion and refrigeration use off-the-shelf technology in common use on many engineering projects to prevent seepage and stabilize wet soils during construction of dams, tunnels, mines, and foundations (4). Installation and maintenance costs are competitive with other containment methods; more importantly, ground-freezing can provide a complete hermetic enclosure, preventing downward as well as lateral migration of water-borne hazardous, radioactive, and mixed wastes. Thick barriers provide great thermal inertia: frozen ground 20 m thick in typical soils can sustain a hiatus of 2 years without refrigeration before breaching. Molecular diffusion through such barriers is estimated to be below detectability for 10^4 years. A cryogenic enclosure can be maintained as long as needed; once remediation is completed, the barrier can be removed entirely by shutting off the refrigeration and withdrawing the pipes.
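The two-year grace period is plausible on dimensional grounds. The sketch below uses generic property values for wet soil (all illustrative assumptions, not site data) to estimate the time for unrefrigerated thaw to consume the full 20 m thickness:

```python
# Back-of-envelope thaw time for a frozen barrier.  Every property value
# below is a generic assumption for wet soil, not site data.
L_f = 334e3      # latent heat of fusion of ice, J/kg
rho_w = 1000.0   # density of water, kg/m^3
w = 0.3          # assumed volumetric water (ice) content of the soil
k = 2.0          # assumed soil thermal conductivity, W/(m K)
dT = 15.0        # assumed ground-to-freezing-point temperature difference, K
d = 20.0         # barrier thickness, m (from the text)

Q = rho_w * w * L_f * d   # heat to thaw a 1 m^2 column through the barrier, J/m^2
q = k * dT / d            # quasi-steady conductive heat influx, W/m^2
t_years = Q / q / (3600 * 24 * 365)   # time to absorb Q at flux q, in years
```

With these numbers the full thickness takes on the order of decades to thaw; even allowing for two-sided heating and the sensible heat needed to warm the core from -30 °C, the quoted 2-year figure before breaching carries a wide margin.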

Figure 1. Schematic cross section of a cryogenic barrier enclosing underground storage tanks (5). The refrigeration piping consists of two arrays of pipes, spaced 4 to 8 feet apart. The pipes are concentric tubes of heavy-wall mild steel, such as is used for oil-well casing. Refrigeration is by circulating chilled fluid, supplied through the inner tube and returned through the annulus. The fluid may be one of several (e.g. brine, aqueous ammonia, or propylene glycol) commonly employed in large freezing plants. The pipes can be inserted by angle drilling, pile driving, or vibrating. With this design, a 20 m thick barrier with a core temperature of -30 °C can be formed 6 to 12+ months after refrigeration begins. Monitoring pipes and instruments are not shown.

Patented designs for cryogenic barriers around various DOE sites have been prepared by an engineering firm (5). The cost of a containment system for a large underground waste tank, such as the single-shell tanks at Hanford, is estimated to be less than $5 x 10^6. The refrigeration of such a barrier can be maintained at an annual power cost within $10^4. On this basis, containment of all 177 underground storage tanks at Hanford could be accomplished for less than $10^9, with an annual power cost below $2 x 10^6. Plans are under way for tests at a number of DOE sites.
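The Hanford totals follow directly from the per-tank figures:

```python
# Arithmetic check of the quoted Hanford containment totals.
n_tanks = 177
capital_per_tank = 5e6   # $ per tank, upper bound (from the text)
power_per_tank = 1e4     # $ per tank per year, upper bound (from the text)

total_capital = n_tanks * capital_per_tank   # $8.85 x 10^8, under $10^9
total_power = n_tanks * power_per_tank       # $1.77 x 10^6, under $2 x 10^6
```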

1.	M. Russell, E.W. Colglazier, and M.R. English, Hazardous Waste
	Remediation: The Task Ahead (Waste Management Research and Education
	Institute, Univ. of Tennessee, 1991).  Also see Colglazier's article,
	this issue.
2.	Basic Research for Environmental Restoration (Energy Research Office,
	DOE, 1989).
3.	J.G. Dash, Waste Management 11, 183 (1991).
4.	See e.g. H. Wind, Eng. Geol. 13, 417 (1979); M.B. Jones, Tunnels
	and Tunneling 14, 31 (1982).
5.	RKK, Ltd., Arlington, Washington, estimate 1993.

The author is at the Department of Physics, University of Washington, Seattle, Washington 98195, and RKK, Ltd., Arlington, Washington.



Research for Environmental Management

M.L. Knotek

The environmental management problems facing the country and world are tremendous in scope and complexity. Government and industrial practices of the past century have created a legacy of contaminated soils and groundwater that must be restored and protected to ensure ecological and human well-being for future decades and centuries.

Two US government departments, Energy and Defense, have extensive environmental problems resulting from weapons production activities during a 50-year period starting in the 1940s. Unique materials were used in these activities, and as a result extremely complex waste byproducts were produced. Unfortunately, the approaches used to dispose of or store these wastes did not consider environmental consequences. Long-term environmental restoration and waste-management programs are now being implemented to deal with these wastes and with the contaminated environments and facilities resulting from these weapons production activities. Our challenge is to restore the environment where it has been contaminated, deal with the complex stored waste in an environmentally acceptable way, and develop environmentally benign manufacturing techniques, processes, and products.

As we set out to tackle these problems, we encounter impeding limitations. We must overcome several knowledge gaps: We don't clearly understand the risk that contaminants pose, so regulatory standards that specify the degree of clean-up may be overly conservative; our understanding of natural systems and their ability to accommodate pollutants for long durations is limited; we don't have the science or tools needed to clearly define the extent of our problems; and we don't have the technologies needed to restore the environment, deal with huge waste inventories, and develop an environmentally benign industrial infrastructure. These uncertainties plus an arbitrary and changing regulatory environment have cost large sums of money with little to show for it.

In addition to knowledge gaps, we run into financial limitations that impact cleanup activities. The projected cost of national environmental restoration far exceeds the nation's financial resources and the public's willingness to spend on cleanup.

To be successful, we need to establish environmental restoration programs that are soundly grounded in science, technology, and policy. The science and technology base resulting from a rational approach will allow us to apply available financial resources to environmental problems that have been determined to be significant on the basis of true risk. The fundamental knowledge resulting from this approach will also be applicable in solving other national problems, including health and economic competitiveness.

There are four areas where environmental R&D investments could lead to tremendous returns: human and ecological health effects; soils and groundwater, especially in situ analysis, remediation, and monitoring; waste processing technology and waste forms for permanent storage; and characterization and analysis technology.

Health effects

The effects of man-made toxins on human health have been the driving force behind environmental standards, but environmental "health" has recently assumed a similar importance. From a scientific perspective, human health and environmental health are similar problems. Our understanding of the effect of chemical and radioactive materials on health is based largely on epidemiological or animal studies. The setting of standards from animal studies involves extrapolations from high dosages over short times to environmentally-relevant low-level chronic exposures. These extrapolations assume a linear dose response, resulting from a genotoxic effect of the toxin, and an assumed single step process to induce cancer, or another effect, in a cell. Current standards are not generally based on a molecular-level analysis of disease in humans or the environment.

Several factors could change this approach, leading to more realistic standards. We now know that the path to disease usually involves multiple steps, leading to a nonlinear relationship between dose and health effects. Many toxins now assumed to be genotoxic in fact induce disease in other ways and exhibit distinct thresholds for health effects. At low concentrations, living systems have a variety of defense and repair mechanisms that are overwhelmed at the level encountered in laboratory studies. An individual's or environment's susceptibility to disease from toxins is determined by its genetic makeup, which can vary significantly among individuals of any species. To properly understand the environmental problem, we must understand the mechanistic effect that chemicals have on all types of living systems, the mechanisms that can defend living systems, and the genetic pathways these defense systems take. One part of the problem is that some man-made materials (e.g. plutonium and some halogenated hydrocarbons) have not been accommodated by nature, so no defense mechanisms have evolved as they have for many naturally-produced toxins.

Solving the environmental problem is only a small part of the much larger problem of explaining how living systems survive among the many other toxic materials in the environment. It is widely accepted that inherited or induced faults in the genetically determined defensive makeup of individuals lead to increased susceptibility to toxin-caused disease. The development of new tests involving "biomarkers" will allow the extent of accumulated dose and damage from toxins to be determined before overt health effects are observed, and will indicate whether the proper genetic defense and repair mechanisms are in place. This approach will allow intervention before overt disease is encountered. Thus, we will not only learn what materials we need to remove from the environment, but also how to protect individuals having genetic susceptibilities to disease. Adopting this approach will help us move our entire health strategy from treating to preventing disease.

Soils and groundwater

The behavior of complex waste streams, including their transport, transformation, and entry into the ecosystem, is only beginning to be understood. Contaminants are subjected to physical, chemical, and biological processes in a complex milieu of minerals, organic materials, biota, and liquids. The huge volumes of contaminated soils and groundwater lead us to try to develop in situ methods for characterizing, remediating, monitoring, and controlling contamination. In most cases, this problem is far more complex than any industrial chemical process; the number of variables is far greater, and our ability to monitor and control the process is limited. Further, the rates of reaction and movement are generally slow so that long times may pass before the efficacy or wisdom of an intervention is understood, and the consequences over decades or centuries cannot be predicted.

Early development of in situ technologies must be based on models, since trial and error could be disastrous. To develop realistic models, a broader base of fundamental understanding of subsurface chemistry and biology is needed. The ability to model these processes from the molecular level through the field scale is essential, and the problem of scaling through that range is formidable. The end product must be a new set of models and intervention protocols that can be successfully steered through the government permitting process and then widely applied.

A problem related to health effects at the molecular level is bioremediation, a promising technique for cleaning up soil and groundwater. Bioremediation would use natural or genetically-altered microbes or plants to generate enzymes that chemically render compounds harmless or sequester elements such as heavy metals or radionuclides. The molecular basis for bioremediation technology is usually the same as that for human health defense or environmental survival, and could also form the basis of new generations of bioprocesses for industrial applications. Thus, progress in this critical area could yield significant long-term societal benefits.

Recent discoveries of microbes at extreme subsurface depths offer hope that natural or minimally-modified species can deal with contaminants in deep aquifers or soils. In addition, microbes have been found to exist in extreme environments: thermophiles at temperatures above 100 °C, halophiles at greater than 4N saline solution, acidophiles functioning at pH levels as low as 2.5, and radiophiles that survive after exposure to a megarad of radiation. These exotic species may offer clues to creating microbes that are tolerant of a variety of environmental or industrial conditions and are useful in performing many chemical and physical tasks. Use of microbes or plants may be the only technically or financially feasible way to clean up the extensive current soil and groundwater contamination. Possibly as important, the science and technology base for wide-scale bioremediation would be a cornerstone for the biotechnology industry.

Managing wastes

The most difficult and potentially most expensive waste accumulations are the large volumes of complex radioactive and chemical or "mixed" wastes stored around the country. This problem has many facets, including characterization of complex nonhomogeneous mixtures; interim stabilization for safety reasons; retrieval; treatment, including incorporation in some inert sequestering waste form; and "permanent" storage of any nonreducible residue. The cost and time needed to complete this waste disposal will be driven by the tremendous volumes of waste and the problem of establishing a national repository strategy. While the actual volume of radioactive elements is quite small, their incorporation in large volumes of mixed chemical components requires that the entire volume be treated and disposed of as mixed waste.

Clearly, the strategy for reducing the cost of this effort must be based on a reduction in the volume of waste to be treated. This approach will require new separations technologies, and new processing technologies to permanently dispose of, rather than store, organic and inorganic chemical constituents. Separations technologies will be based on materials designed to selectively remove specific radionuclides and heavy metals from complex mixtures. Crown ethers, pillared clays, zeolites, membranes, and other physical and chemical separations concepts will be considered in this effort. Once the critical radionuclides, such as technetium, cesium, plutonium, and strontium, have been separated, a variety of processing techniques can be used to deal with the remaining chemical wastes, with only a small volume of residue requiring permanent storage. The materials needed for these processes will be difficult to develop because of the extreme environment in which they must function, but an important benefit of this effort will be the development of new concepts and technologies for future efficient and waste-free industrial processes.

Characterization, analysis, monitoring, and control

In dealing with either environmental or waste problems, several technologies must be developed. To achieve broad understanding of these problems, powerful analytical techniques and methods must be developed and used. In general these problems require capabilities that are either at or beyond the current state-of-the-art. Once basic understanding is in hand, analytical tools to quantify particular species and markers need to be applied to characterize individual cases. When attempting remediation or control using advanced processing tools, real-time analysis is required. Finally, there must be long-term monitoring of the end-products. Throughout all these activities, methods must be developed to monitor worker and public exposure to biological, chemical and physical threats. The long-term solution to problems of analysis, control, monitoring, and human exposure lies in development of new microsensors. The advent of microengineering and the ability to manipulate surfaces, materials, and biological systems at the molecular level makes possible a wide range of microsensor concepts. These tools are also needed in efforts to develop advanced approaches to health care and industrial processing.

Summary

The environmental restoration and waste management problems facing the nation are complex and enduring. Investment in a new generation of science and technology will not only make the effort technically, socially, and financially tractable, but will enable us to develop a wide range of new concepts and technologies to deal with other problems, such as national competitiveness, and thus improve the general quality of life in the United States.


The author is Senior Director of Science and Technology at the Pacific Northwest Laboratory, Richland, Washington 99352



Panel on Public Affairs Workshop on Electricity from Renewable Sources

David Bodansky

Problems of fossil fuel resource limitations and pollutants, including CO2, continue to motivate the exploration of alternatives, in particular nuclear and renewable energy sources. A major use of these would be for electricity generation.

Electricity production and use

There has been rapid growth in the use of electricity in the US throughout the past century. During the second half of that period, from 1950 through 1992, electricity sales increased almost tenfold, from 33 to 315 GWyr, and the fraction of primary commercial energy consumption devoted to electricity generation rose from 14% to 36% (1). (For units and abbreviations used here, see (2).) Since the first perceived energy crisis began in 1973, electricity growth has slowed, but it has nonetheless outstripped growth in total energy consumption and in population (Table 1).


TABLE 1. Comparison of increases in US electricity use, 1973-1992, with changes in related parameters.


                                                  Increase (%)

Energy input for electricity generation (quad)         49
Electricity sales (kWh)                                61
Population                                             20
Gross Domestic Product (constant $)                    51
Total primary energy consumption (quad)                11

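The figures in the text and Table 1 translate into average annual growth rates (plain arithmetic on the quoted numbers):

```python
# Average annual growth rates implied by the quoted totals.
def annual_rate(growth_factor, years):
    """Compound annual rate corresponding to an overall growth factor."""
    return growth_factor ** (1 / years) - 1

rate_sales_1950_92 = annual_rate(315 / 33, 42)   # ~5.5%/yr, electricity sales
rate_sales_1973_92 = annual_rate(1.61, 19)       # ~2.5%/yr, sales since 1973
rate_pop_1973_92 = annual_rate(1.20, 19)         # ~1.0%/yr, population
```

Even after 1973, electricity sales grew between two and three times as fast as population.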
Fossil fuels and nuclear power account for most electricity generation. In 1990, the last year for which comprehensive data are readily available, renewable sources collectively provided only 12% of generation (Table 2). Most of this was from hydroelectric, with smaller amounts from biomass and geothermal, and nearly negligible amounts from wind, solar thermal, and photovoltaics. Except for hydroelectric, most renewable generation is not by utilities but by so-called non-utility generators (NUG). There was little change in total generation between 1990 and 1992, with nuclear generation increasing 7%, hydroelectric dropping 14%, and fossil fuels (collectively) almost unchanged.


TABLE 2. US electricity generation, by source, for 1990: net generation (GWyr), fraction from non-utility generators (NUG), and share of total generation.


Source           Generation    Percent from    Percent of
                 (net GWyr)    NUG             Total Gen

Coal                181              2             53
Natural gas          41             26             12
Petroleum            14              4              4
Nuclear              66              0             19
Hydroelectric        33              2              9.5
Biomass               4.5           95              1.3
Geothermal            1.7           43              0.5
Wind                  0.24         100              0.07
Solar Thermal         0.07         100              0.02
Other                 2.6          100              0.75
TOTAL               344              7            100
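The 12% renewable share quoted in the text can be checked directly against the table:

```python
# Consistency check on Table 2: column total and renewable share.
gen = {  # net generation, GWyr (Table 2)
    "Coal": 181, "Natural gas": 41, "Petroleum": 14, "Nuclear": 66,
    "Hydroelectric": 33, "Biomass": 4.5, "Geothermal": 1.7,
    "Wind": 0.24, "Solar Thermal": 0.07, "Other": 2.6,
}
total = sum(gen.values())   # ~344 GWyr, matching the TOTAL row
nonrenewable = gen["Coal"] + gen["Natural gas"] + gen["Petroleum"] + gen["Nuclear"]
renewable_share = (total - nonrenewable) / total   # ~0.12
```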

In recent years, there have been projections of a much larger future renewable contribution, including greatly increased electricity generation from photovoltaic, wind, and solar thermal power. For example, a 1990 study spearheaded by the Solar Energy Research Institute (since redesignated the National Renewable Energy Laboratory) included scenarios projecting future electricity generation from renewable sources (excluding hydroelectric power) in 2030 with a primary energy input of up to 33 quads, compared to under 1 quad today (3). Other projections vary widely, some considerably less optimistic (4). A great deal depends upon future costs. A target for competitiveness with fossil fuel or nuclear generation is often taken to be about 4 or 5 cents/kWh, although the actual level will depend in part upon future fossil fuel prices and on taxes or emission standards established for fossil fuels.

Motivation for workshop

In view of the importance of the issues, it is desirable to subject such projections to careful analysis. Past projections for nuclear power, made when enthusiasm was high and skepticism relatively muted, provide a cautionary example of the poor track record of such forecasts. A 1972 Atomic Energy Commission report foresaw nuclear growth from about 0.3 quad (of primary energy) in 1970 to over 21 quads in 1990 and 48 quads in 2000; actual US nuclear energy consumption in 1990 was only 6 quads and is very unlikely to be significantly higher in 2000.

The APS Panel on Public Affairs (POPA) has a substantial history of studies of energy issues, including a 1979 study on photovoltaic energy conversion (5). POPA concluded that an independent, analytic study of electricity generation from renewable sources would be useful, although, given the nature of some of the issues, it was not clear that the APS was the proper group to undertake such a study. To assess the appropriateness of an APS study in this area, POPA organized a workshop, held in November 1992 in Washington, D.C. (6) Talks and discussion at the Workshop focused primarily on solar thermal, photovoltaic, wind, and biomass electricity sources (7).

Renewable sources for electricity generation

In solar thermal electricity generation, sunlight is concentrated by large factors and used to heat a fluid to drive a steam or gas turbine. Three geometric configurations are being actively explored: (a) parabolic trough arrays; (b) central receivers (power towers), and (c) parabolic dishes.

In parabolic trough systems, the heated fluid passes through tubes lying along the focal line of long troughs, which track the sun with single-axis rotation. By 1991, nine units with a total capacity of 354 MWe had been installed in California, but the installing company declared bankruptcy that year, owing to a lapse in tax advantages and low natural gas prices. The units continue to provide electricity to the California grid despite the cessation of new construction, and they represent the only solar thermal technology providing commercial power.

In central receiver systems, a large number of individual heliostats, with two-axis tracking, reflect sunlight upon an elevated receiver. A 10 MWe pilot plant facility, Solar One, operated in California from 1982 to 1987. It is scheduled to be replaced by a more advanced unit, Solar Two, at the same site. Pending experience with the latter, construction may be undertaken in the late 1990s on a 100 MWe commercial unit. Technical advances being explored for Solar Two include: stretched membrane reflectors in which two membranes are used, with their curvature determined by the pressure in the gap between them; molten nitrate salt for heat reception, transmission and storage; and air, with suitable particle loading, for heat reception and transmission (with bricks for heat storage). Higher efficiencies are anticipated for an air + gas turbine system than for a molten salt + steam turbine system.

Parabolic dish units can have individual engines at the focus of each dish, or linked receivers driving a common larger engine. Typical units are in the 25 kWe range, and are more suited to remote locations than as suppliers to the electricity grid. Of the three technologies, it is expected that the central receiver will prove to be the most economical, eventually providing electricity at 5 to 7 cents/kWh.

The use of solar photovoltaic power is at present limited by high costs to specialized applications, such as portable consumer devices, satellites, and remote locations such as lighthouses. World sales of photovoltaic modules in 1992 amounted to only 60 MW of capacity, divided roughly equally among suppliers in the US, Japan, and Europe. Crystalline silicon flat plate units remain the dominant technology, but many alternatives are being explored in the search for low cost, high efficiency, and durability. These include other crystalline materials, thin films, multiple-junction units, and Fresnel lens concentrator units.

Since 1980, the estimated cost of electricity from photovoltaic sources has dropped from about 90 cents/kWh to about 25 to 30 cents/kWh. If this rate of price reduction continues for another 20 or so years, photovoltaic power would become economically competitive. Learning curve methodology suggests that such a price decrease may occur, in conjunction with a large increase in the volume of photovoltaic sales (8).
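One way to make the learning-curve argument concrete is the standard constant-progress-ratio model; the 80% ratio below is an illustrative assumption, not a figure from the study.

```python
import math

# Constant-progress-ratio learning curve: each doubling of cumulative
# production multiplies unit cost by p (p = 0.8 assumed for illustration).
def doublings_needed(cost_start, cost_target, p=0.8):
    """Doublings of cumulative volume to move cost_start down to cost_target."""
    return math.log(cost_target / cost_start) / math.log(p)

d = doublings_needed(90, 5)   # from ~90 to ~5 cents/kWh: ~13 doublings
volume_factor = 2 ** d        # several-thousand-fold growth in cumulative sales
```

Starting from annual module sales of only 60 MW, sustaining that many doublings is precisely the "large increase in the volume of photovoltaic sales" referred to above.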

In principle, wind resources are sufficient to supply a large fraction of the total US electricity demand (9). Aside from cost, possible constraints on the extent to which wind power will contribute arise from the uneven national distribution of wind resources (greatest in the upper midwest), the intermittent nature of the wind, and the possible environmental impacts of large wind energy farms. At present, there is appreciable use of wind in the US only in California, where it provides about 1% of the electricity. World generation by wind was under 0.4 GWyr in 1990, of which about 75% was in California. However, there is increasing interest in other countries, and at present most of the new wind turbines being installed in the US are from Japan or Denmark.

The present cost of electricity from wind is 7-10 cents/kWh. It is projected to drop to 5 cents/kWh by 1995 and to 4 cents/kWh by the year 2000. Together with the 1.5 cents/kWh incentive incorporated in the 1992 Energy Policy Act, this would make wind power economically competitive with fossil fuels or nuclear power.

Wind turbines installed in the early 1980s were mostly under 100 kWe in capacity, but new US units are now several hundred kWe, and still larger units are being investigated in Europe. There have been significant improvements in recent years in wind turbine materials and blade configurations. These may permit an increase in unit size while preserving long-term durability under rapidly varying stresses. Although land requirements are high, of the order of 600 km^2 per gigawatt of average output, most of the land can also be used for grazing or other agricultural applications.

There has been increasing recent interest in biomass for electricity generation. Biomass now provides about 4% of total US energy consumption and, in 1990, a little over 1% of electricity generation, mostly produced for internal use of companies in the wood product industries. Any large expansion in the use of biomass for electricity generation is expected to rely on new plantations of dedicated energy crops. It is estimated that 140,000 to 800,000 km^2 of land is available in the US for such plantations.

Harvested crops can produce steam directly or can be converted into liquid or gaseous fuels for use with steam turbines, gas turbines, or, eventually, fuel cells. Gas turbines offer high efficiency, especially in conjunction with combined cycle operation or co-generation. They can be started relatively quickly, making them complementary to intermittent sources. Under favorable assumptions for crop growth rates and turbine efficiencies, about 2,000 km^2 of land would be needed per gigawatt of average output.
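For scale, the per-gigawatt land figures above can be applied to the 344 GWyr of total 1990 US generation from Table 2 (a purely illustrative exercise, ignoring intermittency and siting):

```python
# Illustrative land areas to supply the 1990 US average output of 344 GW.
total_gen_gw = 344            # average output, GW (Table 2)
wind_km2_per_gw = 600         # from the text; mostly dual-use land
biomass_km2_per_gw = 2000     # from the text, under favorable assumptions

wind_land_km2 = total_gen_gw * wind_km2_per_gw        # 206,400 km^2
biomass_land_km2 = total_gen_gw * biomass_km2_per_gw  # 688,000 km^2
```

The biomass figure lands within the 140,000 to 800,000 km^2 of available plantation land cited above; the wind figure is comparable in area but largely compatible with continued agricultural use.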

The intermittent nature of wind, solar thermal, and photovoltaic sources can create problems if these represent a large fraction of the electricity supply. This is not yet the case in California, where intermittent sources now account for 1 to 2% of the electricity supply. The problems can be ameliorated by complementary renewable sources, such as hydroelectric power and biomass, or through explicit storage. At present, only pumped hydroelectric systems provide large storage capacities. Other storage possibilities include compressed air, thermal storage, batteries, flywheels, and superconducting magnets. Of course, if electricity is used more efficiently, demand is less and supply problems are eased.

The proponents of the various renewable technologies envisage that their costs will become competitive with those of fossil fuels, at 4 to 5 cents/kWh, before 2000 or 2010 for biomass and wind, and somewhat later for solar thermal and photovoltaic generation.

Workshop Conclusions

Overall, the organizing group was impressed by the substantial progress made in the development of renewable technologies, but concluded that it would be inappropriate for the APS to attempt an assessment of the overall prospects of renewable electricity generation, including future costs and market penetration. Many of the issues related to costs and environmental impacts have little physics-related technical component, and therefore do not naturally lend themselves to evaluation by a physics group.

However, it was concluded that it would be valuable to carry out a somewhat more limited study, emphasizing a critical evaluation of technological aspects of the main generation methods, including their current status and their potential progress, problems, and opportunities. Possible topics include metal fatigue and structural dynamics problems for wind turbines, fuel cells for electricity generation from gasified biomass, photodegradation in amorphous silicon cells, and energy storage methods. A study with a technical emphasis could be of value in identifying possible strengths and weaknesses in proposed technologies, without necessarily attempting to answer broader questions as to the future role of renewable energy.

Acknowledgements. I have drawn heavily upon talks and discussions at the POPA workshop, plus the report prepared subsequently by members of the POPA organizing committee, and I am indebted to these contributors.

1.	Unless otherwise indicated, data on energy consumption are based on
	Annual Energy Review 1991, Report DOE/EIA-0384(91) (US Department
	of Energy, Washington, DC, 1992) and Monthly Energy Review, March
	1993, Report DOE/EIA-0035(93/03) (US Department of Energy,
	Washington, DC, 1993).
2.	Common units are kilowatt-hour (kWh) and gigawatt-year (GWyr),
	where 1 GWyr = 8.76 x 10^9 kWh.  For renewable energy, it is common
	to assign a nominal primary energy input based on the energy content
	of the displaced fossil fuel, presently in the US at 10335 BTU/kWh
	(33% thermal efficiency).  Thus, 1 quad of primary energy
	corresponds to an output of 11 GWyr, where 1 quad = 10^15 BTU =
	1.055 x 10^18 J.  Often, generation capacity is specified in
	units such as megawatts-electric (MWe), to emphasize that the
	reference is to the electrical output rather than the thermal input.
3.	The Potential of Renewable Energy, An Interlaboratory White Paper,
	Report SERI/TP-260-3674 (Solar Energy Research Institute, Golden,
	1990).
4.	For example, in the National Energy Strategy (1991), electricity from
	renewable sources corresponds to only 12 quads of primary energy in
	2030, while the scenarios of reference (3) have primary inputs
	ranging from 13 to 38 quads.
5.	Principal Conclusions of the APS Study Group on Solar Photovoltaic 
	Energy Conversion, H. Ehrenreich, Chairman (APS, New York, 1979).
6.	The POPA Committee which organized the Workshop consisted of: 
	David Bodansky (Chair), Henry Ehrenreich, Anthony Fainberg, Daniel
	Fisher, J.D. Garcia, Pierre Hohenberg (Chair, POPA), Roberta Saxon,
	and Francis Slakey.  Talks were given by:  George D. Cody (Exxon), 
	William Fulkerson (ORNL), Allan R. Hoffman (DOE), Pascal De Laquil 
	(Bechtel), Arthur H. Rosenfeld (Berkeley), Robert A. Stokes (NREL), 
	Carl J. Weinberg (PG&E), and Robert H. Williams (Princeton).
7.	A brief summary of the Workshop is presented in:  Report on POPA
	Workshop on Electricity Generation from Renewable Sources (1993). 
	Single copies may be obtained by writing (with an enclosed self-
	addressed mailing label) to Ms. Nancy Passemante, APS, 529 14th Street
	NW, Suite 1050, Washington, DC 20045.
8.	G.D. Cody and T. Tiedje, "The potential for utility scale photovoltaic
	technology in the developed world:  1990-2010," in Energy and the 
	Environment, B. Abeles, A.J. Jacobson, and P. Sheng, editors (World 
	Scientific, Singapore, 1992), pp. 147-215; and POPA workshop talk.
9.	D.L. Elliott, L.L. Wendell and G.L. Gower, An Assessment of the 
	Available Windy Land Area and Wind Energy Potential in the Contiguous 
	United States, Report PNL-7789/UC-261, (Pacific Northwest Laboratory, 
	Richland, 1991).  For winds of Class 4 or higher (mean wind speeds 
	greater than about 5.5 m/sec (12.4 mi/hr) at a height of 10 m), wind 
	turbine hubs at a height of 50 m, "severe" environmental restrictions 
	on the placement of wind turbines, and an overall efficiency of about 
	19% in converting wind power to electrical power, the authors conclude
	that wind could generate 250 GWyr per year.
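The unit conversions in note 2 can be checked numerically. The sketch below uses only the factors stated there (the 10335 BTU/kWh nominal heat rate and 1 quad = 10^15 BTU); it is an arithmetic check, not part of the original study.

```python
# Verify the conversions in note 2.
BTU_PER_KWH = 10335.0   # nominal US fossil heat rate (33% thermal efficiency)
KWH_PER_GWYR = 8.76e9   # 1 GWyr = 8760 h x 1e6 kW
QUAD_BTU = 1.0e15       # 1 quad in BTU

# Electrical output corresponding to 1 quad of displaced primary energy:
kwh = QUAD_BTU / BTU_PER_KWH    # about 9.7e10 kWh
gwyr = kwh / KWH_PER_GWYR       # about 11 GWyr, as stated in the note
print(round(gwyr, 1))
```

The result, about 11 GWyr per quad, matches the figure given in note 2.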
	
The author is at the Department of Physics, University of Washington, Seattle, Washington 98195.

Symposium On Physicists in Environmental Affairs

Physics and Society presents here articles based on the four talks given at an invited session sponsored by the Forum on Physics and Society at the March 1993 APS meeting in Seattle.


Hazardous Waste Remediation: The Task Ahead

E. William Colglazier

The US has embarked on a massive effort to clean up land and water that have become contaminated with hazardous materials resulting from private and public activities. This effort has been mounted through federal, state, and private actions stretching back a decade and a half. Most of these efforts were begun with only the haziest notion of the money and manpower that ultimately would be required; virtually nothing was known about the environmental consequences or costs of any of the possible choices.

Because of the work that has been done, there has been a dramatic expansion in the level of information and understanding of environmental contamination and its remediation. With it has come the sobering realization that the extent of that contamination, the technical challenges of its remediation, and the resources required are all much greater than envisioned when the cleanup programs were initiated. Further, what once seemed straightforward now seems much more complicated. Many actions, though unidimensionally beneficial, are now seen to have disturbing trade-offs among environmental end points, populations, media, and generations.

The nation is at a transition. Its hazardous waste remediation programs are moving out of adolescence and into maturity. The time is ripe to use the past decade of experience to ensure that the course set in the early days is right for the future. One opportunity for reassessment will come in public debates over reauthorization of the Resource Conservation and Recovery Act (RCRA) and the Comprehensive Environmental Response, Compensation, and Liability Act. Another will come as key decisions are made on the scope and speed of cleanups at federal facilities.

A study of hazardous waste remediation

The purpose of a University of Tennessee study (1) was to come to as complete an understanding as possible of the magnitude of the overall resources required for hazardous waste cleanup. Resource requirements were estimated separately for the Superfund National Priorities List, RCRA Corrective Action, Underground Storage Tanks, federal facilities, and state and private cleanup programs. The costs of the physical activities to clean up sites were estimated separately, including the costs of site investigation to determine what is to be done. Accordingly, the building blocks for the study were the remediation technologies to be applied, the extent of those applications, and the number of sites to which they would be applied. The estimates systematically excluded transaction costs.

Somewhat arbitrarily, the study adopted a thirty-year time horizon. No shorter period appears adequate to put the inherited hazardous waste problem to rest, and yet no longer period can easily be comprehended. Moreover, extrapolating beyond thirty years becomes so speculative as to lose much utility.

Estimates were presented on a timeless "as built" basis using current costs as a proxy for the resources required. That is, there is no consideration of the effect of when the remediation tasks are undertaken. Consequently, the costs are not discounted from some base period, and cannot in any meaningful way be compared--either as to magnitudes or benefits--with alternative expenditures today.
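The study's point about undiscounted, "as built" totals can be made concrete: at any positive discount rate, a dollar of remediation spent decades from now is worth far less than a dollar spent today, so the totals cannot be directly compared with present expenditures. A minimal illustration follows; the 4% rate and $1 billion figure are hypothetical and chosen only for the example.

```python
# Present value of a cost incurred t years from now at annual discount rate r.
def present_value(cost, r, t):
    return cost / (1.0 + r) ** t

# A hypothetical $1 billion remediation cost, discounted at 4% per year:
for t in (0, 10, 30):
    print(t, round(present_value(1.0, 0.04, t), 2))  # in $ billions
```

At 4%, the same nominal $1 billion spent thirty years out has a present value of only about $0.31 billion, which is why the study's undiscounted totals cannot meaningfully be compared with alternative expenditures today.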

Three scenarios

The approach adopted was to take existing behavior and practices as the basis for inferring current policy and then to posit alternative policies--"less stringent" and "more stringent"--that would lead, in the researchers' judgment, to approximately the same level of risk to human health and the environment, but with lesser or greater certainty and with lesser or greater achievement of other goals. The current policy case was therefore grounded in observed behavior, while the two alternatives were more speculative.

Generally, application of current policy results in all sites with meaningful levels of contamination being restored to some degree. It relies on a combination of technologies that detoxify wastes to levels where risks are low, restoring most contaminated sites to the point where future use can be made of them with few restrictions, while in other cases isolating the contamination to limit potential exposure.

The less stringent policy option assumes the goal of eliminating current and future exposure of people and natural systems to significant levels of risk. In this sense it is no less protective of human health and the environment than is current policy. It is distinguished from the current policy case by depending less on destruction and more on isolating the contamination to minimize exposure. It addresses the same sites, and allows no higher risks to exposed people. Because this case uses a greater degree of containment and isolation, it may become necessary to revisit more sites than in the current policy case if containment fails or if use restrictions are removed.

The more stringent policy starts with the premise that destruction of contamination is the basic goal and that containment or isolation is acceptable only when technological feasibility is a constraint or when heroic measures and extremely large resource expenditures would be required. Consequently, even the more stringent policy does not envision destruction when contamination is very low (in order to achieve pristine conditions), nor does it try to restore sites everywhere to the point where unrestricted use is appropriate.

In short, these three policy cases were conceived to lie along a continuum of more or less permanent destruction of contamination leading to unrestricted use of land and groundwater. In design, it was the intent that these options not differ in residual health and environmental risks to which people and natural systems may be exposed.

Results

In the current policy scenario, total resources required will be approximately $750 billion if the country maintains its present course. But contamination could be substantially less than now perceived, and if so the total could be as low as about $480 billion. Conversely, the total could plausibly rise to over $1 trillion in direct remediation costs over the next three decades, if contamination is as great as some people suspect.

An appropriate interpretation of a tilt toward less stringent policy is that resource requirements would be reduced about one-third from current policy levels. Costs would drop from about $750 billion to less than $500 billion. This would occur through use of technologies that emphasized containment and waste isolation rather than destruction, but that would not be expected to change significantly the ultimate impacts on human health and the environment. This policy would leave more wastes in place, require somewhat greater restrictions on land and groundwater use than under current policy, and could present future generations with additional expenditures if they wished to remove those restrictions. This estimate is accompanied by a plausible lower bound of less than $400 billion in case contamination is less than projected, and a plausible upper bound of about $700 billion if contamination is greater.

The interpretation of a more stringent policy follows along that described for the current policy and less stringent policy cases. The best guess for resource requirements rises from $750 billion under current policy to well over $1 trillion if greater use is made of more intensive treatment technologies. This best guess is bounded at somewhat less than $1 trillion if contamination is less than expected, and over $1.5 trillion if it is greater than contemplated.
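The relation among the best-guess figures quoted above can be checked with simple arithmetic. The values below are round figures standing in for the text's phrases ("about $750 billion", "less than $500 billion"), so the computed fraction is approximate.

```python
# Best-guess direct remediation costs, in $ billions (round figures from the text).
current = 750          # current policy best guess
less_stringent = 500   # "less than $500 billion"

# The less stringent option cuts roughly one-third from current policy:
reduction = (current - less_stringent) / current
print(round(reduction, 2))
```

The result, about 0.33, is consistent with the study's statement that a tilt toward less stringent policy would reduce resource requirements by about one-third.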

The more stringent policy scenario differs from current policy primarily in the degree to which it lessens the contingent burden on future generations by allowing unrestricted use of more sites and groundwater and by freeing them from the need to be cognizant of wastes that remain in place. Arguably, it also offers a greater margin of safety against future exposure to substances that may prove hazardous. In contrast, by leading to greater handling and treatment of contaminated material, it increases exposure of those living now to potentially harmful substances.

Conclusions

The overriding conclusion is that policy toward hazardous waste remediation deserves the most serious attention from the public and decision makers. It would be comforting to say that major decisions are behind us and that the course is set, but the facts suggest that major questions are, and should be, open.

Take first the magnitude of the task. The current policy best guess of $750 billion would, if strictly comparable to current expenditures, absorb as much of the country's productive potential as one year of non-defense federal expenditures or a decade of total public and private expenditures on all other environmental quality objectives at FY 1990 levels. Equally significant, the difference in expenditures between a less and a more stringent policy--both of which are feasible and have strong advocates--would be about the same as the nation is now poised to spend on cleanups. Programs of this magnitude, and choices of this significance, deserve the closest scrutiny of operations, goals, and benefits.

But the gross numbers themselves lead to no conclusions, and certainly not to the conclusion that we cannot "afford" to do the job of cleaning up the wastes left by past generations. As daunting as the task is, it is well within our capacity, especially since the effort will be spread over several decades. The issue, rather, is one of incremental costs of different policies relative to their incremental benefits--and those benefits must be interpreted broadly to include matters beyond simple calculations of risks reduced or property values enhanced.

A second observation concerns the degree to which costs are being taken into account at federal facilities, particularly DOE facilities. In the case of DOE cleanups, the decisions are being made in negotiations between DOE, EPA, and the states. None of these parties, for understandable reasons, are especially interested in keeping costs down. DOE is attempting to change its culture toward environmental stewardship, and is concerned about legal liabilities of its personnel if environmental regulations are not met. For these reasons, DOE is asking for everything it thinks necessary to comply fully with the regulations and is going beyond what a private party might do. The states are interested not only in cleaning up sites the public perceives as a threat, but also in protecting jobs at sites that would have a significant employment downturn if it were not for the cleanup effort. And with DOE sites (unlike non-federal sites), EPA does not have to ask the President and Congress to add money to its own budget. This perhaps explains why in examining the DOE cleanups it appears that the decisions being made are at the upper end of what might be required in terms of stringency, with obvious cost implications. For many of the contaminated DOE sites that are isolated from the public, public health can likely be protected with institutional controls and containment remedies for the foreseeable future. These alternatives should receive greater weight than they now do in decisions on DOE cleanups.

Given the uncertainties in what can be achieved by the DOE cleanup effort and at what cost, one approach would be for Congress to set an annual level of funding and for the key parties--DOE, EPA, and state and tribal governments--to agree on a priority-setting mechanism for allocating these funds. That mechanism should emphasize protecting public health based on risk estimates, and reaching agreement with stakeholders on future land uses, and should place less emphasis on trying to comply as soon as possible with regulations designed for other cleanup problems. With a tiered prioritization scheme that first emphasizes risks, more time might be available to ensure not only that funding is being wisely spent, but also that new technologies might become available that would reduce costs in the long run. In the case of the DOE cleanup, continued investments in R&D on new cleanup technologies could have a high payoff over 30 years.

1.	Milton Russell, E. William Colglazier, and Mary R. English, 
	"Hazardous Waste Remediation:  The Task Ahead," Waste Management 
	Research and Education Institute, University of Tennessee, Knoxville, 
	December 1991.

The author is with the National Research Council, Office of International Affairs, 2101 Constitution Avenue, Washington, D.C. 20418.
