March 1999 APS talk

Results from Physics Education Research

Donald F. Holcomb

(Text of talk presented at the March 1999 meeting of the American Physical Society)

Physics Education Research results give valuable information about how to help students learn physics effectively. I consider the question, "How can physicists assess the relevance and validity of research results from this expanding field?"

INTRODUCTION

Physics Education Research (PER) has become a large enterprise, and its contributions to improving the effectiveness of student learning are demonstrable. But many physics faculty members who work hard at being effective teachers have overall responsibilities (research, leading the work of graduate students, heavy teaching loads, university committees, etc.) which preclude their direct engagement in PER. Such a person will naturally ask, "What's in it for me? Are there results from PER that I should pay attention to, and have some confidence in, in order to improve the effectiveness of my teaching?" My goal in this talk is to take some steps toward answering this question.

While I do not consider myself an expert practitioner, I have been an interested observer of the field of PER for some time, and have recently (Fall Term, 1997) led a group of ten Cornell physics graduate students in a study of the recent literature. Beyond the intuitive explorations into how to help students learn, which any serious university-level physics teacher naturally engages in, my direct engagement in PER has been limited to working with other colleagues on evaluation of the outcomes from the Introductory University Physics Project (IUPP).

THE JOB FOR PHYSICS EDUCATION RESEARCH

A. To determine and describe the "initial knowledge state" of students as they enter a course or come to a new segment of subject matter.

B. To form appropriate teaching materials or a delivery system in response to two elements which are often in tension:

(1) What is the initial knowledge state? [Determined by step A.]

(2) What are the goals of the course? What working knowledge of physics and technical skills (e.g., math, computer, and lab skills) do we want students to carry away from the course?

C. To test and evaluate these materials and/or delivery systems, both in terms of whether students master the subject matter, and in terms of the level of their engagement and enjoyment.

D. To recommend adoption, modification, or abandonment of the materials or delivery systems, as guided by results from Stage C.

SOME CRITERIA FOR JUDGING WHETHER TO PAY ATTENTION TO A PARTICULAR STUDY

As one attempts to assess the utility of results from a particular piece of work, a good initial question to ask is "Have these workers selected an important question to try to answer?" By "important" I mean a question for which an answer stands a good chance of having significant impact on the quality of teaching and learning. Then:

• Look for sound methodology -- i.e., evidence that the workers are savvy about how to use the available tools: an appropriate combination of student interviews, well-designed multiple-choice questionnaires, analysis of written responses from journals or open-ended questionnaires, careful sampling, and appropriate statistical methods.

• Is the student sample used in the evaluation of adequate size, suitably representative, and well characterized?

• When new materials or delivery systems are tested, is the evaluation done with proper attention to possible biases stemming from the self-interest of the designers or promoters of the new components? (Either people other than the designers or promoters should carry out the evaluation, or non-fudgeable evaluation protocols and instruments need to be used, or both.)

• Do the researchers exhibit awareness that evaluation results can be tilted simply because teachers and students know that something different is going on? [In the Hawthorne Effect, students respond positively simply to the fact that someone is interested in helping them learn better. In the John Henry Effect, teachers or students in a control group respond to a sense of competition and exert unusual effort. In the Pygmalion Effect, the positive attitude of teachers enthusiastic about a new approach infects students and drives them to extra effort.]

• Look out for poorly controlled comparisons. An example: researchers decide that a particular systematic deficiency in student learning is of major importance. Some new pedagogical technique is introduced in an effort to repair this deficiency. Performance improves. But the real control is not in place -- namely, teaching the topic with a more standard technique while spending the same amount of time on it as the new technique demands. (A minimal numerical sketch of the kind of comparison at issue follows this list.)
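To make the statistical side of these criteria concrete, here is a minimal sketch, in Python, of the kind of pre/post comparison an evaluation might report. All of the scores and the two-section design are invented for illustration; the sketch assumes the numpy and scipy libraries, and a real study would need a far larger and better-characterized sample as well as a control section that spent comparable time on the topic.

    # Minimal sketch: comparing pre/post concept-test gains for a section
    # taught with a new technique against a control section.
    # All scores are hypothetical percentages, invented for illustration.
    import numpy as np
    from scipy import stats

    # Matched pre/post scores (percent correct) for each student.
    new_pre  = np.array([42, 35, 50, 38, 45, 30, 48, 40, 36, 44], dtype=float)
    new_post = np.array([70, 62, 78, 60, 72, 55, 75, 68, 58, 71], dtype=float)
    ctl_pre  = np.array([41, 37, 49, 39, 43, 32, 47, 38, 35, 45], dtype=float)
    ctl_post = np.array([55, 50, 63, 52, 58, 44, 60, 51, 48, 57], dtype=float)

    def normalized_gain(pre, post):
        # Per-student normalized gain: actual gain over maximum possible gain.
        return (post - pre) / (100.0 - pre)

    g_new = normalized_gain(new_pre, new_post)
    g_ctl = normalized_gain(ctl_pre, ctl_post)

    # Two-sample t-test on the per-student gains. With samples this small
    # the result is fragile -- adequate sample size is exactly the second
    # criterion in the list above.
    t, p = stats.ttest_ind(g_new, g_ctl)

    print(f"mean gain, new technique: {g_new.mean():.2f}")
    print(f"mean gain, control:       {g_ctl.mean():.2f}")
    print(f"t = {t:.2f}, p = {p:.3f}")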

EXAMPLES OF NEW KNOWLEDGE GAINED FROM PER

Listening to Students

Out of careful interviewing of students has come well-documented knowledge of deep and persistent difficulties with, among many other topics,

• the concept of acceleration as a physical quantity which is different from velocity

• how electrical circuits work

• the photon concept.

Sensitivity to the use of different representations, including testing of their usefulness

Interviews have shown that certain words or representations often fail to carry, to a substantial fraction of students, the meaning or message we intend. A couple of examples follow. (1) Diagrams often use vectors or other schematics drawn in coordinate space to represent physical quantities that live in some other space -- e.g., velocity, force, electric field strength. For the case of E-field vectors, Adrian and Fuller reported an interview with a student at Nebraska who asked, "To which point in x,y,z space does this E-field vector apply -- the point at the beginning of the E-field arrow or the point at the end?" (2) In a recent article, Ambrose et al. demonstrate that some students interpret a sketch of a sinusoidal wave form moving through space to imply that a photon travels along this sinusoidal path. Note that both of these misconceptions were created by us! They were not brought into the classroom from some previous, ineffective instruction.

Ineffectiveness of some standard practices

Interviews and free-form journals in the IUPP evaluation turned up evidence that some common practices used in classroom and lab instruction are inherently ineffective.

1. Derivations of important relations in the lecture format commonly miss their mark. The fundamental purpose of a derivation is most often to show how a chain of logic leads from one result or relationship to another. It is very difficult to get useful student feedback in the course of a classroom derivation -- are they really "getting it"? A quick bow to interaction with students in the form of the query "Are there any questions?" is hopelessly inadequate. J. J. Thomson said it well many years ago: "A textbook must be exceptionally bad if it is not more intelligible than the majority of notes made by students. The proper function of a lecture is not to give the student all the information he needs but to arouse his enthusiasm so that he will gather knowledge himself..."

2. Use of computer programs in real time in the classroom is of questionable benefit. A quote from a student at an IUPP test site carries a cautionary message: "We did go over a computer spreadsheet program [in the lecture]. [Prof. A] had set it up, but I zonked out for a lot of it (the lights were off, he was talking about fuzzy figures on a distant screen, and the night before I'd been studying for our test ... oops!)."

3. Again and again, at many IUPP test sites and with both new and traditional syllabi, students told us how much they valued synchronization of the laboratory work with the classroom subject matter. Even in cases where students were explicitly told that the sequence of laboratory work was designed to be independent of the particular topic under discussion in the classroom, they said to us, "But we'd learn better if it were synchronized."

SOME EXAMPLES OF "GOOD" PER WORK AND WHY I THINK SO

"Investigation of student understanding of the concept of acceleration in one dimension," by D. E. Trowbridge and L. C. McDermott (This work, with plenty of student interviews, displays the deep problems students have with the concept of acceleration.)

"Concepts first -- a small group approach to physics learning," by Ronald Gautreau and Lisa Novemsky (This work appears to me to be one of the cleanest demonstrations of the effectiveness of a particular form of active learning. This cleanliness was to some extent serendipitous, arising from some particular local circumstances.) [Comment: I like the following simple definition of "active learning," given by M. C. Di Stefano of Truman State University at a recent conference. " 'Active learning' is any process that involves students in doing things and thinking about what they are doing."]

"Interactive engagement versus traditional methods ...", by R. R. Hake (Hake collected lots of data. Playing off many chunks of data against one another is often important in the unavoidably murky world of social science research within which falls much of Physics Education Research.)

"Implications of cognitive studies for teaching physics," by Edward F. Redish. (There is much wisdom in this piece of thoughtful reflection.)

Studies of the effectiveness of active-engagement, microcomputer-based laboratories -- There is a long chain of work in this area by, among others, R. F. Tinker, R. K. Thornton, Priscilla Laws, David Vernier, and David Sokoloff. A recent evaluation paper contains references to the earlier work.

Computer Assisted Instruction: What to do about this 600-lb. gorilla?

In spite of thirty years or so of massive investment of time and energy by physics teachers in developing simulations, substitutes for text, and interactive learning materials, it's my sense that we know remarkably little about whether student learning is significantly enhanced by the presence of the computer. (I do make an exception to this sense of unease for the case of thoughtful use of the computer in the introductory lab, as noted in the last item in my previous list of "good" pieces of PER work.)

What to do in response to PER results? Do they suggest major changes in syllabus, curriculum, or pedagogical modes?

Most importantly, it seems to me that we should think freshly. This talk is too brief to give many examples, but here is one for starters. The conceptual and technical difficulties which students find in really mastering Newtonian mechanics at the introductory level are amply documented. I am puzzled at our apparent unwillingness to consider seriously some pathways through physics as alternatives to the "Standard Model" -- pathways that build confidence, give experience with the use of models, and develop mathematical facility through early subject matter less littered with the minefield of preconceptions and complicated ancillary ideas that characterizes our typical approach to Newtonian mechanics. Wave physics and thermal physics come to mind as natural candidates. (One of the IUPP courses, "Physics in Context," used the thermal physics entrée to good advantage.)

Finally, I urge us non-practitioners in the PER field to beware of holding the thought which is the oldest known barrier to productive change in physics instruction, namely, "I learned physics in a certain way and came to satisfactory mastery, so I'll teach it the way I learned it. Why can't today's students learn it the way I did?"