An Entry S.P.O.T. for Reform in the Landscape of STEM Pedagogy

Cassandra Paul, San Jose State University

At the FFPERPS conference in the North Cascades this June, I described the development and implementation of the Student Participation Observation Tool (SPOT). This work, supported by the NSF,[1] has been conducted primarily by my research team at San Jose State University, with contributions from colleagues at other institutions. While our research has thus far focused on the professional development of faculty, we have recently become interested in how institutions of higher education evaluate teaching effectiveness as a whole, and in what can be done to shift the prevailing academic culture of traditional instruction toward research-based teaching practices, particularly practices involving active learning. This article summarizes the arguments I presented at the meeting, as well as the discussion that followed.

While institutions of higher education, STEM instructors, and education researchers all want what is best for students, different stakeholders use different methods to evaluate teaching effectiveness. These metrics have the potential to work against one another. For example, instructors who use reformed practices in their courses tend to have lower student evaluations than their traditional counterparts, even when student course performance is higher.[2],[3] Institutions can thus actually penalize instruction associated with greater student learning. This is especially problematic for overall academic culture, because institutions overwhelmingly use student evaluations to judge teaching effectiveness in the tenure and promotion process.[4]

Physics education researchers, along with other education scholars, agree that interactive engagement techniques are superior to passive modes of instruction in promoting student learning. One way to align institutional and instructor metrics for teaching effectiveness is to collect information about what is actually happening in the classroom and compare it with the pedagogical expectations of the department, college and university. Doing so allows faculty members who already use interactive techniques to be rewarded for those efforts, and it may encourage other faculty to try something new: some faculty members cite a lack of institutional rewards as a reason for abandoning reformed teaching methods.[5] While many options exist for collecting information about classroom practice, we argue that SPOT, or a similar observation protocol, is ideal for this endeavor.

SPOT is a web-based observation protocol that allows an observer to categorize instructor and student actions in real time by clicking on menu items representing different categories of actions, such as “explaining” or “asking a question.” SPOT’s major strength is that, immediately following an observation, it auto-generates illustrative, easy-to-interpret charts and graphs. An instructor can then visualize, digest and reflect on how they are spending time in the classroom. In fact, SPOT (like its predecessor, the Real-time Instructor Observing Tool, RIOT[6]) was developed predominantly to help instructor-observer pairs collaboratively develop their pedagogical practice.
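To make the mechanics concrete, the following is a minimal sketch, in Python, of the kind of data such a protocol collects and summarizes. It is an illustration only, not SPOT's actual implementation: the category names, the ActionEvent structure, and the time_fractions helper are all hypothetical. The idea is simply that each click in the observation interface becomes a timestamped event, and the events are aggregated into the fraction of class time spent on each kind of action, which is the raw material for auto-generated charts.

from dataclasses import dataclass

# Hypothetical action categories, loosely modeled on the kinds of
# menu items described above; SPOT's real category list differs.
CATEGORIES = {"instructor_explaining", "instructor_asking_question",
              "students_discussing", "students_answering"}

@dataclass
class ActionEvent:
    """One click in the observation interface: a category active
    from `start` to `end` seconds into the class period."""
    category: str
    start: float
    end: float

def time_fractions(events, class_length):
    """Aggregate logged events into the fraction of class time
    spent in each category -- the input to SPOT-style charts."""
    totals = {c: 0.0 for c in CATEGORIES}
    for e in events:
        totals[e.category] += e.end - e.start
    return {c: t / class_length for c, t in totals.items()}

# Example: a few events from a 50-minute (3000 s) observation.
log = [
    ActionEvent("instructor_explaining", 0, 900),
    ActionEvent("instructor_asking_question", 900, 960),
    ActionEvent("students_discussing", 960, 1500),
]
for category, fraction in sorted(time_fractions(log, 3000).items()):
    print(f"{category:>28s}: {fraction:5.1%}")

A summary like this records only what occurred and for how long, with no scoring or judgment attached, which is consistent with the design stance described below: the tool provides a picture of practice rather than a rating.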

Direct classroom observations are a much more reliable way to learn what is happening pedagogically during class than student evaluations are. Many institutions already use some form of classroom observation, either as part of the retention and promotion process or as part of voluntary or mandated instructor professional development. Introducing a specific classroom observation protocol may thus be a manageable change in many cases.

Since it may not be practical for a pedagogical expert to visit every class on campus, instructors are often observed by peers or supervisors who generally lack formal training in education. These observers’ views may or may not align with best practices derived from educational research. With SPOT, however, an observer systematically records which actions did and did not occur in the classroom, minimizing the impact of individual judgment or bias. Because SPOT immediately generates charts and tables, both the instructor and the evaluators receive a portrait of classroom practice capable of supporting reflection. Instructors could include SPOT data as part of a dossier or portfolio, perhaps accompanied by a narrative reflection.

Most of the FFPERPS audience was enthusiastic about the professional development value of SPOT, and several expressed interest in using it themselves. Some were reluctant to use SPOT as an evaluation tool, however, because it measures only the occurrence, not the quality, of interactions. The discussion here centered on the complexity of facilitating active learning, including the need for sensitivity in understanding and responding to student thinking if an activity is to promote learning effectively. SPOT, unlike some other observation practices, cannot capture such nuance. However, existing research overwhelmingly indicates that active learning is more effective than (or at least as good as)[7] lecture in promoting sense making. Thus, while using SPOT to rank teaching effectiveness across lessons or courses would not be appropriate, SPOT can provide a first-order indication that an instructor is trying to incorporate aspects of active learning into the classroom. Learning to facilitate student-centered instruction is like learning anything else: one improves with practice. We argue that using SPOT can encourage the adoption of interactive methods, not that it provides a precise measure of how much student learning occurs. In fact, the SPOT team purposely does not prescribe certain amounts or types of actions as indicative of particular levels of classroom interactivity; SPOT simply provides a picture of what happens.

A second concern was that existing institutional evaluation procedures are sufficient, rendering SPOT unnecessary for evaluation in some contexts. The FFPERPS conference participants, as PER consumers and practitioners, are already strongly committed to student-centered instruction. We believe that a primary benefit of including SPOT in the evaluative process is to give instructors an incentive to take the risk of trying something new: to encourage faculty not already using interactive methods not only to try them, but to stick with them through the potential implementation dips. We argue that SPOT is most useful NOT as an assessment of the types of instruction the PER community is trying to encourage, but rather as a way to encourage, through assessment, more usage of research-based interactive techniques.

College STEM instruction still consists largely of traditional lecture, poorly aligned with best practices established through research on student learning. We believe that physics educators, as a community, must actively address this issue. Although not appropriate for every context, SPOT can be a useful tool for supporting the professional development of instructors and for engaging university administrations in that development. As PER matures, we find ourselves increasingly able to effect change at the department, college and institutional level. Changes to institutional policy can steer individual instructors toward research-based teaching practices while normalizing such practices in the overall culture of the academy.

Instructors interested in using SPOT to reflect on the interactions happening in their classrooms may email the author at Cassandra.paul@sjsu.edu for more information on this continuing project.

Cassandra Paul is an Assistant Professor in the Department of Physics and Astronomy and the Science Education Program at San Jose State University. She was a plenary speaker at FFPER: Puget Sound 2016.

Endnotes

[1] NSF PRIME #1337069

[2] J. D. Walker, S. H. Cotner, P. M. Baepler, and M. D. Decker, CBE-Life Sciences Education 7, 361 (2008),
http://dx.doi.org/10.1187/cbe.08-02-0004.

[3] M. Braga, M. Paccagnella, and M. Pellizzari, Economics of Education Review 41, 71 (2014), 
http://www.sciencedirect.com/science/article/pii/S0272775714000417.

[4] C. Henderson, C. Turpen, M. Dancy, and T. Chapman, Phys. Rev. ST Phys. Educ. Res. 10, 010106 (2014), http://dx.doi.org/10.1103/physrevstper.10.010106.

[5] J. Michael, College Teaching 55, 42 (2007), http://www.jstor.org/stable/27559310.

[6] E. A. West, C. A. Paul, D. Webb, and W. H. Potter, Phys. Rev. ST Phys. Educ. Res. 9, 010109 (2013), 
http://dx.doi.org/10.1103/physrevstper.9.010109.

[7] D. E. Meltzer and R. K. Thornton, American Journal of Physics 80, 478 (2012), 
http://dx.doi.org/10.1119/1.3678299.


Disclaimer – The articles and opinion pieces found in this issue of the APS Forum on Education Newsletter are not peer refereed and represent solely the views of the authors and not necessarily the views of the APS.