Electronic-response systems (ERS), or audience-response systems (ARS), can be powerful instructional tools. Since the earliest adoption of ARS in the mid-1960s, the technology has changed significantly, especially in recent years with advances in wireless communications. Over the years, ARS has become more sophisticated, more user-friendly, and cheaper. Typical ARS technology now allows instructors to present questions to the audience; individual audience members respond with a keypad, and responses are then automatically tabulated and displayed on-screen in a variety of graphic formats as feedback to the group. The tool is similar to that used on the TV show “Who Wants to Be a Millionaire?” where every member of the audience transmits his or her response, and the distribution of responses is displayed on-screen.
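The tabulate-and-display step described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the software of any actual ARS product; the option labels, percentage rounding, and text-histogram format are arbitrary choices:

```python
from collections import Counter

def tabulate_responses(responses, options="ABCD"):
    """Tally keypad responses and return the distribution as whole-number percentages."""
    counts = Counter(responses)
    total = len(responses)
    return {opt: round(100 * counts.get(opt, 0) / total) for opt in options}

def render_histogram(distribution, width=40):
    """Render the distribution as a simple text bar chart, one option per line."""
    lines = []
    for opt, pct in distribution.items():
        bar = "#" * (pct * width // 100)  # scale percentage to bar width
        lines.append(f"{opt} {pct:3d}% {bar}")
    return "\n".join(lines)

# Hypothetical example: 20 audience members answer a multiple-choice question.
votes = list("AAA" + "B" * 11 + "CCCC" + "DD")
print(render_histogram(tabulate_responses(votes)))
```

In a real system the responses would arrive over a wireless channel rather than as a list, but the feedback loop is the same: collect, tally, and immediately project the distribution back to the group.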
According to a recent study, the diffusion of this instructional technology is now so widespread that almost all universities in the United States and over 3,000 primary and secondary schools are currently using ARS (1). To increase learner-engagement, medical education and health-professions education in general, including many continuing medical education programs, are increasingly turning to ARS as a tool to promote learner participation and create an interactive learning environment (2, 3). ARS is claimed to promote student participation and engagement by eliciting content-understanding and by creating a safer, anonymous environment in which learners can participate freely without fear of embarrassment or being singled out. Also, the public display of the distribution of responses either reaffirms learners’ understanding or helps them gauge their own responses against the group norm.
Recent reviews of the literature have reported several benefits of using ARS (4–6). Although these reviews provided a comprehensive list of benefits and potential barriers to successful implementation of the technology in classrooms, they lacked a theoretical framework to help synthesize the reported outcomes of using ARS. The purpose of this review is to provide a theoretical framework for understanding the context for the intended and unintended consequences of using ARS. Given that only a handful of studies were conducted using a randomized, controlled design with validated measures, this review focused on the themes emerging from the literature, rather than performing a systematic comparison of studies or summarizing the findings with meta-analysis. The focus of the review is on the types of outcomes reported and the context for implementation of ARS, rather than a comparison of the magnitude and statistical significance of the findings. This thematic review presents the findings from the literature in the context of existing learning theories and assessment models, to help build on the previous literature reviews.
Although use of ARS has grown rapidly over the last decade, reports on the efficacy of the technology have mostly focused on perceived benefits to learners, with the exception of a small number of studies on knowledge-retention, which showed inconsistent results (6). Recent studies from both health-professions education and higher-education settings have reported significant improvements in learning and knowledge-retention (7, 8). However, a recent randomized, controlled study in the continuing medical education (CME) setting failed to show improved knowledge-gain from ARS versus the traditional didactic lecture environment (9). Despite widespread adoption in certain fields (5, 10), there has been little systematic review of the impact of ARS on instruction and learning outcomes.
The purpose of this paper is to provide a review of the literature to 1) evaluate the benefits and consequences of using ARS; 2) present existing theories and models that help provide context for the reported outcomes; and 3) offer recommendations for optimal utilization of this technology for instruction and learning.
For the current review, the literature search focused on evaluation of ARS and the instructional context for successful implementation of ARS. Given the proliferation of ARS use and significant improvements in the technology in the last decade, published studies conducted between 2000 and 2009 were included in the review. The search in Medline, ERIC, and PsycINFO used the following key terms: 1) audience response system; and 2) classroom response system. The search was confined to studies published in English, with 85 publications meeting the initial criteria. After the abstracts were reviewed, articles that were commentaries rather than empirical studies were excluded. For the final analysis, the 42 studies that explicitly evaluated the effect of ARS on learning were selected.
Several consistent and interesting themes emerged from the literature review. Each study was reviewed for content, and findings were sorted into four major categories: 1) learner engagement; 2) peer instruction; 3) knowledge-gain/retention; and 4) formative assessment. These categorizations were verified by a second reviewer. The four broad categories were chosen to capture the commonality as well as the diversity of findings across the 42 publications from various settings and disciplines. These four categories represent the most common outcomes cited in the literature for evaluating the efficacy and benefits of ARS use (see Table 1).
Table 1. List of Studies by Outcome Category

| Outcomes | List of Studies | N (by Learner Environment) |
| --- | --- | --- |
| Engagement | Uhari et al. (2003) (16); Elashvili et al. (2008) (33); Holmes et al. (2006) (35); Latessa & Mouw (2005) (38); Homme et al. (2004) (39); Vincent et al. (2002) (40); Streeter & Rybicki (2006) (41); Premkumar & Coupal (2008) (42); Moredich & Moore (2007) (44); Stowell & Nelson (2007) (46); Eggert et al. (2004) (47); Graham et al. (2007) (48); Addison et al. (2009) (49); Cain et al. (2009) (50) | Health professions (other than medicine): 5; High school: 1 |
| Knowledge-gain | Pradhan et al. (2005) (7); Schackow et al. (2004) (8); Miller et al. (2003) (9); Pileggi & O’Neill (2008) (20); Rubio et al. (2008) (26); Stein et al. (2006) (27); Gauci S (2009) (28); Palmer et al. (2005) (29); Duggan et al. (2007) (30); Preszler et al. (2007) (52); Crossgrove & Curran (2008) (53); Alexander et al. (2009) (54); Doucet et al. (2009) (56) | Health professions (other than medicine): 6 |
| Peer instruction | Fies & Marshall (2008) (5); Pileggi & O’Neill (2008) (20); Jacobs et al. (2006) (21); Nayak & Erinjeri (2008) (57) | Health professions (other than medicine): 1 |
| Formative assessment | Feldman & Capobianco (2008) (24); Burnstein & Lederman (2001) (25); Kaneshiro et al. (2008) (59) | Health professions (other than medicine): 1; High school: 1 |
Of the 42 studies reviewed here, 26 were conducted in medical or other health-profession settings (e.g., nursing, pharmacy, dentistry, veterinary medicine). The other 16 were set in university and high school science classrooms. Almost half of the studies (19 of 42) reported outcomes related to learner-engagement; 4 (10%) discussed peer instruction/interaction and discourse opportunity as an outcome of ARS; 16 (38%) focused on knowledge-gain; and 4 (10%) discussed findings related to formative assessment. A summary of the findings is presented under each of these emerging themes. (See Box 1.)
By far the most frequently cited benefit of ARS is the increase in learner engagement and participation. Previous studies have shown a positive correlation between active participation and knowledge gains (11, 12). Studies of traditional lectures have shown that learners’ engagement with the course decreases significantly after just 20 minutes of the traditional didactic lecture format (13, 14). Studies have also shown that learners perceive traditional didactic lectures to be the least effective mode of educational delivery (15). Nevertheless, didactic lectures still account for a significant proportion of medical education, especially in postgraduate training and CME. Almost half of the reviewed studies (16 of 38) reported increased student participation and engagement. Eleven of those studies were conducted in a medical-education setting: four involved medical students, and seven were conducted in postgraduate residency training programs or CME settings.
Most of the studies that cited increased learner-engagement attributed the finding to a unique feature of ARS: the anonymity it affords. The study by Uhari et al. (16) on medical students in a pediatrics course stated that “[ARS] allows all the students to express their opinions and not only those opinion leaders who are active and brave enough to express their thoughts aloud.” A study by Nayak and Erinjeri (19) on the effect of ARS on medical students in a radiology course also concluded that students felt more confident about providing responses with ARS. In this study, students were asked to deliver 15-minute PowerPoint presentations that included ARS questions posed to other first-year students. Students reported feeling significantly more comfortable answering questions in lecture when using an anonymous ARS device.
We know from the previous literature on assessment strategies that questioning gives learners the opportunity to distill information, using the questions as cues that highlight major points; it also activates the metacognitive process by allowing learners to reflect on what they know and on areas that need further clarification (17, 18). This process also helps learners reinforce and validate their assumptions, engages them as participants, and motivates them toward the task at hand. With its ease of data-gathering and its display functions, ARS provides a user-friendly platform for instructors to incorporate frequent questioning into their instructional strategy.
Five of the studies reviewed reported ARS as an integral component of peer instruction. Peer instruction is a pedagogical concept widely adopted in undergraduate science courses (5). Instructors using this approach typically pose conceptual questions to motivate the learners and then provide an opportunity for learners to discuss their individual answers and arrive at a consensus on the best response (19). This type of instructional method requires frequent questioning and display of the response summary for group discussion. In a study with dental students, the authors successfully integrated ARS into their team-based learning (TBL) approach. In this study by Pileggi and O’Neill (20), case studies were given to small groups during a TBL session to stimulate discussion. After group discussion, the small groups decided on their consensus responses, which were then projected on a screen, followed by larger-group discussion. The authors reported a significant increase in final examination scores for groups with TBL combined with ARS. In one study examining the effect of ARS on trauma performance improvement (21), the use of ARS resulted in more candid ratings in peer review of trauma performance. Incorporation of ARS decreased the pressure to vote with the majority and created an environment for more critical feedback to residents.
ARS has also been successfully linked with peer instruction. Peer instruction, especially prominent in physics education, has demonstrated knowledge-gain compared with the traditional didactic lecture environment (19, 22). The key feature of peer instruction is the time it allows learners to spend explaining and discussing their responses to a question with their peers. In the peer-instruction model, learners first answer questions independently, then review the group responses, and finally work in small groups to reach consensus. ARS provides an efficient way to initiate this discussion: posing questions to the group, displaying the distribution of responses, and then monitoring responses after small-group discussion.
Box 1. Example: Application of Audience-Response Systems
One goal of instruction is to support the learning process by intervening at the most opportune time for learners to reflect on and verbalize their thinking. Within the formative-assessment model, instructors can incorporate strategies that create more opportunities to intervene and more opportunities for learners to reflect on their thinking. For example, Dr. O’Sullivan uses ARS in her pharmacotherapy lectures to pose a content question about serotonin and the nervous system. She then asks the learners to rate their level of confidence in their answers, also using ARS. Using the display feature of the ARS, she helps the learners visualize how their level of confidence compares with that of the rest of the group. She then uses this opportunity to facilitate a small-group discussion among the learners, who are given the opportunity to reflect on their thinking as well as to defend their responses to the group. During this small-group discussion, learners either solidify their responses or are persuaded by others. One learner, who responded incorrectly to the content question but rated his confidence moderately high, realized that his inability to defend his answer effectively to others meant he had to modify his own thinking. After the discussion, Dr. O’Sullivan asked the learners to respond once more to the content question and to rate their level of confidence again. This process not only provides the learners with opportunities to self-reflect, but also provides the instructor with opportunities to gauge the learners’ level of understanding and identify when to intervene with further instruction or provide more “scaffolding” before moving on to the next topic. This is a modified scenario, originally presented in Bruff D: Teaching With Classroom Response Systems: Creating Active Learning Environments. San Francisco, CA, Jossey-Bass, 2009.
ARS has been advanced as a tool to facilitate formative assessment. Black and Wiliam (23) define formative assessment as all activities that instructors undertake to get information that can be used to monitor and adjust instruction. When instructors know how learners are progressing and where misconceptions are, they can use these data to make necessary adjustments; these types of instructional adjustments have been shown to achieve the largest student knowledge-gains of any instructional intervention (23).
The four studies that specifically evaluated the efficacy of ARS as a tool for incorporating formative assessment were all conducted in university or high school science courses. None of the studies conducted in medical or health-professions settings explored whether ARS is an effective educational tool to facilitate formative assessment, although some reported formative assessment as a potential positive byproduct of ARS use. The study by Feldman and Capobianco (24) specifically examined whether instructors were able to incorporate formative-assessment techniques into their instruction with adoption of ARS and discussed possible barriers to successful implementation. They concluded that instructors need to acquire expertise both in using ARS technology and, more importantly, in constructing appropriate questions that elicit the kinds of feedback necessary to gauge understanding and make instructional modifications. Successful formative-assessment implementation requires expertise not only in the content area, to develop appropriate questions that can be delivered through ARS, but also in the pedagogical skills needed to adjust and modify instruction as learner feedback and misconceptions unfold over the course of instruction. Burnstein and Lederman (25) point out that ARS is a great resource for instructors to gather the information needed to implement formative assessment; however, “questions are at the discretion of the instructor and can range from conceptual questions … to factual questions.” All four studies suggest that ARS can potentially be a powerful tool for facilitating formative assessment; however, the impact of this strategy will vary considerably, depending on the types of responses elicited and the pedagogical expertise available to execute appropriate instructional modifications based on learner feedback. This may help explain the inconsistent results reported for the effect of ARS on learning outcomes.
Twelve of the reviewed studies (31%) specifically examined the effect of ARS implementation on knowledge-gain. Of these 12 studies, 9 reported significant gains with ARS, as compared with a traditional lecture format. All nine identified formative assessment and student engagement as key components of the positive outcome. For example, Rubio et al. (26) suggested that “[ARS] provides immediate feedback on the audience’s comprehension of the material as the lecture proceeds, allowing the presenter to revisit or re-explain concepts not fully understood by the audience at initial presentation. With immediate feedback, lecturers can fine-tune their delivery style and discern which aspects of a lecture are more or less successful at conveying a point.” In a study conducted with obstetrics and gynecology residents, Pradhan et al. (7) also credited formative assessment: “the immediate feedback allowed the instructor to measure the effectiveness with which the material was being delivered and thus rephrase and repeat concepts when necessary.”
In contrast, a randomized, controlled study by Miller et al. (9) of CME learners attending clinical roundtables (CRTs) found no significant difference between ARS and non-ARS CRTs on a 7-item knowledge posttest. The authors noted that the use of ARS was viewed favorably by participants and that the overall quality of the CRT was rated significantly more positively with ARS use. They suggest that the nonsignificant results may be due to validity issues with the assessment instrument as well as single-time usage. As a second example, ARS was used in an undergraduate nursing anatomy and physiology course to engage students in examination reviews. When Stein et al. (27) compared examination scores between students who had ARS in their review sessions and non-ARS users, no significant effect of ARS was detected. However, they found that the use of ARS helped instructors identify areas in the curriculum requiring additional instruction, on the basis of responses gathered with ARS. Unlike the studies reporting positive student gains, the nonsignificant studies did not specifically mention formative assessment as part of their ARS pedagogical strategy.
A review of 42 studies examining the efficacy of ARS revealed four emerging themes in the literature. The most commonly cited benefit of ARS is increased learner-participation and engagement. Evidence of the impact of ARS on learner-engagement was reported across various educational levels and settings. This review confirms the growing body of literature supporting the claim that ARS is an effective tool for increasing learner-engagement through active participation. There is also some evidence to support the critical role of ARS in peer instruction. Although peer instruction has been widely implemented in science learning, exploration of this pedagogic technique is fairly new in the health professions and medical education.
Despite consistently favorable perceptions of ARS use, the impact of ARS on knowledge-gain is not consistent in the literature. Only a small number of studies have explicitly examined the impact of ARS on knowledge-gain (28, 29–31). The lack of information on the types of ARS use, on details of implementation of the technology within the existing curriculum, and on the validity of outcome instruments has potentially contributed to this inconsistency. For systematic evaluation of the effect of ARS on knowledge-gain, authors will need to provide detailed information about not only implementation procedures and validity information for measures, but also the theoretical framework for ARS adoption and clear hypotheses for the expected outcome. For example, using formative assessment as the theoretical model, studies examining the effect of ARS should report how ARS helped instructors check for factual as well as conceptual understanding, facilitate group discussion through conceptual questions, allow opportunity for learners to explain their answers, and adjust instruction in response to feedback.
All the studies reviewed here incorporated questioning and feedback as the essential component of their ARS implementation. As Beatty et al. (32) point out, learning to operate ARS is not the barrier to adoption; instead, “more difficult challenges include creating and adapting suitable questions, cultivating productive classroom discourse, and integrating [ARS] use with the rest of the course.” For ARS implementation to be successful, every question should serve a pedagogic objective, which can range from checking for understanding to eliciting discussion aimed at conceptual change and understanding. Based on the literature, a combination of these question types offers the best use of this technology for instructional improvement.
This review has several limitations. The search algorithm may have inadvertently missed important studies, especially those conducted outside of medicine, given the limited search terms and the chosen databases. The sample of studies from other disciplines may not be representative and could bias the view of the literature from those fields. However, as stated in the Methods section, the purpose of this review was to examine the various benefits reported and the factors associated with successful implementation of ARS technology, rather than to provide a systematic review of the efficacy of ARS with comparisons across multiple disciplines.
One of the key contributions of this review is the call for more empirical studies that explain outcomes through explicit incorporation of a theoretical framework as well as the mechanistic features of ARS implementation in the learning environment. Without clear documentation of the theoretical model underpinning these studies and of the pedagogic features that constitute the interventions, it will be difficult to determine the true potential benefits of ARS in facilitating desirable learner outcomes. This review underscores the need to differentiate ARS as an instructional tool (e.g., blackboard, videos, smartboard) from ARS as part of an instructional strategy (e.g., formative assessment, frequent questioning, inquiry-based learning). Depending on the instructional goal, the purpose and utilization of ARS will differ and will potentially yield different outcomes.
The authors thank Patricia S. O’Sullivan for expert advice and reviewing the manuscript.