TEACHING ISSUES AND EXPERIMENTS IN ECOLOGY, VOLUME 5
RESEARCH

Use of an Inquiry-based Approach to Teaching Experimental Design Concepts in a General Ecology Course

Introduction

Changes in teaching strategies are often made without systematic evaluation of whether they actually improve student learning (D'Avanzo 2003). By applying a research approach to learning assessment, teachers may be able to determine whether changes in pedagogy are effective. Ebert-May et al. (2005) outlined a methodology by which teachers can perform educational research to contribute to the knowledge base of student learning. The suggested methodology included a critical evaluation of where students' current understanding is lacking, development of assessment tools, and analysis of how the results may affect teaching effectiveness. Additionally, educational research may be used to influence classroom teaching (D'Avanzo 2003). Teaching Issues and Experiments in Ecology (TIEE) promotes scientific teaching, a practice in which faculty address questions about their own teaching. As part of an assessment of the effectiveness of TIEE, ESA members from a variety of institutions participated in a workshop to evaluate potential impacts of TIEE on classroom teaching. Participants were asked to 1) identify a gap in student understanding; 2) find out what has already been done in this area of student understanding; 3) develop an appropriate methodology for collecting data to answer a question about this gap in understanding; and 4) analyze the data and reflect on how the results may influence teaching (e.g., teaching strategy, assessment, role of inquiry).

Important aspects of learning include not only the initial acquisition of new skills and information, but also the persistence of the newly acquired skills and the ability to transfer them to new situations (Tashiro and Rowland 1997; NRC 2000b). Cognitive research distinguishes between novices and experts in terms of how they perceive and organize new information and in their ability to apply problem-solving skills in new situations (NRC 2000b). Experts are able to recognize patterns of information, and thus begin work on a problem at a more advanced starting point than novices (Larkin et al. 1980). Inquiry-based learning is one way that students can move toward expertise, providing them with the experiences they need to develop and improve problem-solving skills. Hallmarks of inquiry-based learning include an investigative approach, case studies, projects, real-world complexities, and student involvement in the process of learning (McNeal and D'Avanzo 1997).

At the advanced secondary or college level, teachers are often faced with groups of students who are neither novices nor experts but instead have a mixture of experiences that place them somewhere in between. Particularly in courses where the students have different academic backgrounds, contrasting preparation may present difficulties for the teacher, who 1) may be unaware of the students' backgrounds and 2) must teach to a diverse set of learning levels based on the students' previous experiences. The teacher must use teaching strategies that allow students to acquire new skills and knowledge at a level appropriate to their previous experiences. To do this, teachers must assess students' current knowledge and skills at the beginning of a course.

Current cognitive theory tells us that knowledge is a set of conceptual tools best gained through authentic learning, which involves actually doing the activity to which the knowledge is applied (Lave and Wenger 1999). For undergraduates, the authentic learning they are most likely to experience is participation in undergraduate research. The strong effect that undergraduate research experience can have on students is well-documented (Kardash 2000; CUR 2005). Undergraduates who participate in research self-report that they do so in order to learn how to do science, and they describe activities such as formulating hypotheses, designing an experiment to test the hypotheses, understanding the importance of controls, and knowing what data to collect to address hypotheses (Kardash 2000). These are important components of scientific inquiry. Students involved in undergraduate research also report increased confidence and improved communication skills (Kremer and Bringle 1990; Spilich 1997). Constraints on departmental budgets and faculty time mean that at many institutions not all students can be involved in research. Is there an approach that could be used in the classroom to approximate the research experience and give all students the benefits of doing science?

One frequently used approach for guiding inquiry learning in the sciences is to gradually move students from teacher-directed instruction to learner self-directed instruction (NRC 2000a). Guided inquiry can help inexperienced learners gain confidence and awareness of their learning, while at the same time moving them toward a more open-ended type of inquiry (D'Avanzo 1996; Eick et al. 2005; Holton and Clarke 2006). For example, in guided inquiry, teachers may pose a research question which students then modify as they collect data and read relevant primary literature. As students gain confidence and experience, they can become increasingly independent rather than relying on the instructor for guidance, thus making the transition from guided inquiry to a more self-directed inquiry.

Students working on problems in college ecology classrooms are likely to have similar conceptual knowledge, gained in prerequisite courses and in the ecology course itself. The main differences among students are likely to be in their exposure to research methodology and experimental design, gained in other courses or in undergraduate research experiences outside of classroom learning. In this case, the differences between experienced and inexperienced students are likely to lie in operational and procedural knowledge. These areas of knowledge include analyzing, synthesizing, problem-solving, and evaluating (Crismond 2001), and are likely to be critical in effective experimental design.

In this study I examined student learning about experimental design. Specifically, I investigated the questions: What differences do guided inquiry laboratory activities make to experienced versus inexperienced students? Can such activities serve as a proxy for involvement in undergraduate research?

Methods

Forty-eight students in Rochester Institute of Technology's Fall 2005 General Ecology course participated in this study on a voluntary basis. Institutional Review Board approval was received for all aspects of the study. The course was sophomore-level, but many students chose to take the course as juniors and seniors, leading to a mix of levels of experience in the course. A full description of the demographics of the course can be found in the Results section. The course lasted 10 weeks and had a laboratory section that met for three hours a week.

Students were given identical 20-question pre- and post-tests at the beginning and end of the course. Of these questions, only 12 were used in the analysis for this study, as they pertained directly to experimental design rather than the scientific process in general. Students were told the test was not part of their grade, the scores were anonymous, and the test would be used to assess what needed to be covered or improved in the course. Each student picked a three-digit code to remember, so that individual responses could be tracked across the term.
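The study does not describe the mechanics of the matching, but the bookkeeping is straightforward. A minimal sketch in Python, with hypothetical record structures, pairs pre- and post-test records by each student's code and drops any record without a partner:

    # Minimal sketch (hypothetical record structure, not the study's actual
    # workflow): pair pre- and post-test records by each student's
    # self-chosen anonymous code; records without a partner are dropped.

    def match_tests(pre_records, post_records):
        """Return {code: (pre, post)} for codes present in both sittings."""
        pre_by_code = {r["code"]: r for r in pre_records}
        post_by_code = {r["code"]: r for r in post_records}
        shared = pre_by_code.keys() & post_by_code.keys()
        return {c: (pre_by_code[c], post_by_code[c]) for c in shared}

    pre = [{"code": "417", "answers": [1, 0, 1]},
           {"code": "902", "answers": [0, 1, 1]}]
    post = [{"code": "417", "answers": [1, 1, 1]}]
    matched = match_tests(pre, post)  # only code "417" has both tests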

Questions utilized for this study were designed to determine the students' educational background (4 questions), their level of confidence (2 questions), and their knowledge of experimental design (6 questions; see Resources). Questions were derived and modified from published materials and discussions with TIEE workshop participants (Donovan and Allen 1983).

The guided inquiry approach to the course:

The essential features of a guided inquiry approach that were used in this course are outlined in Eick et al. (2005) and developed from a publication by the National Research Council (2000a). Throughout the 10-week quarter, students were increasingly responsible for hypothesis formulation and experimental design (Table 1). In the early stages, the instructor largely handled the design of the exercises and helped guide the students through the process of hypothesis formulation, evidence gathering, and hypothesis testing using discussion, worksheets, and other assignments. As the students gained experience, more of the responsibility was shifted to them. A final project of the students’ own design took place during the last few weeks of the term.

Table 1. The learning outcomes for each lab activity in the General Ecology laboratory. The goal was that students be able to design a small observational study and carry it out by the end of the quarter.
Exercise / Learning Outcomes
Natural History Observations
  • Formulate hypothesis based on observation
Forest Ecology/ Quantification of Saplings & Trees
  • Collect and analyze data
  • Formulate hypothesis based on data analysis
  • Understand plot/transect layout and rejection criteria
Salamander Habitat/ Land-use Study
  • Recognize that various methodologies may be used, depending on the hypothesis
  • Assist in experimental design
Stream Macroinvertebrate/ Water Quality
  • Generate hypothesis based on existing data
  • Recognize importance of replication
  • Perform more advanced data analysis
Independent Project (may be collaborative)
  • Formulate hypothesis
  • Design observational study
  • Collect and analyze data
  • Present results in public forum

Brief descriptions of the exercises used:

Natural history observations:

The main objectives of this exercise were to expose the students to the natural history of a familiar environment and to demonstrate how to use the inquiry process to ask questions and formulate hypotheses about the environment. The students were taken to two natural areas on campus: a mature forest and an early successional old-field. They made observations in small groups (2-3 students) about biotic and abiotic factors in the environment, posed questions about why the two areas differed, and formulated hypotheses about their observations. A worksheet and interactions with the teacher provided guidance, and a homework assignment (done individually) asked them to propose a way to test one of their hypotheses.

Forest ecology data collection & analysis:

This lab utilized a nearby natural area to introduce students to quantitative methods used to study a forested ecosystem, including calculations of diversity, basal area, density, and relative importance. Students practiced gathering data in a deciduous oak-maple forest and a nearby pine plantation, after which they used a worksheet to calculate forest parameters. A series of questions in the worksheet guided the students through comparisons of the two forests as well as comparisons of the overstory and understory of the two areas. Students formulated hypotheses about why the forests differed in structure and used the data that they had collected as evidence for their arguments. Often the hypotheses the students generated in this lab exercise were used later as the basis of their independent projects (see below).
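For readers unfamiliar with these forest parameters, the sketch below illustrates the arithmetic with made-up plot data. The relative importance formula shown, the mean of relative density and relative basal area, is one common formulation and may differ in detail from the course worksheet:

    import math

    # Hypothetical plot tally (not the course data): species -> stem
    # diameters at breast height (dbh, cm) from a plot of known area (ha).
    plot_area_ha = 0.05
    dbh_cm = {"red oak": [32.0, 41.5], "sugar maple": [18.2, 22.7, 25.0]}

    def basal_area_m2(diams):
        """Total basal area (m^2): sum of pi * (dbh/2)^2, dbh in metres."""
        return sum(math.pi * (d / 200.0) ** 2 for d in diams)

    density = {sp: len(d) / plot_area_ha for sp, d in dbh_cm.items()}       # stems/ha
    ba = {sp: basal_area_m2(d) / plot_area_ha for sp, d in dbh_cm.items()}  # m^2/ha

    # Relative importance: here, the mean of relative density and relative
    # basal area (some formulations also include relative frequency).
    tot_density, tot_ba = sum(density.values()), sum(ba.values())
    importance = {sp: 100 * (density[sp] / tot_density + ba[sp] / tot_ba) / 2
                  for sp in dbh_cm}

    # Shannon diversity H' = -sum(p_i * ln(p_i)), from stem counts.
    counts = [len(d) for d in dbh_cm.values()]
    n = sum(counts)
    shannon_h = -sum((c / n) * math.log(c / n) for c in counts)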

Salamander habitat study:

Students assessed salamander density and diversity in relation to land-use history in nearby natural areas. They tested several different methodologies for assessing salamander numbers and compared the effectiveness of the methodologies for addressing different types of hypotheses. This lab highlighted how the choice of design and methodology is directed by the specific hypothesis being addressed.

Stream macroinvertebrates and water quality:

In this three-week lab, students visited two nearby streams where they collected macroinvertebrates and water quality data. At this stage, the collection methods were determined by the instructor, but all other aspects of the scientific process were carried out by the students. They generated a hypothesis based on observations during the stream visits and used the data they collected to support or reject the hypothesis. The assignment required them to determine which data were important and how the data should be analyzed to address their specific hypothesis. Assessment of this lab required the students to turn in two drafts of a written lab report.

Independent project:

Prior to this three-week lab, students completed a project proposal worksheet that required them to generate a hypothesis and design a small experiment or observational study to test that hypothesis. Although their project could be based on any of the previous labs, most students chose to return to the forest lab and use the hypotheses they had generated in that lab worksheet. Students received feedback on their hypothesis and design prior to going out into the field, and they could modify their plan if necessary. A typical hypothesis was, "The soil differs between the pine forest and the deciduous forest." Typical feedback would be that the students needed to be more specific about what aspects of the soil might differ; the hypothesis would then often be refined to include predictions about the pH of the mineral soil or the depth of organic layers. During the first week of the lab, students collected data in groups of 2-3. The second week included time for the group to work on their data analysis and a mandatory meeting with the instructor to work through a data analysis strategy. During the third week, students gave an 8-10 minute oral presentation to the lab section about their study.

Scoring the Pre- and Post- tests:

Questions were scored and assigned a dummy variable for analysis. For the descriptive questions (e.g., "When did you take Introductory Biology?"), I placed the students into categories and assigned a categorical variable. For the knowledge-based questions, I scored the question as either correct (1) or incorrect (0). For the more subjective, open-ended questions, I used a scoring rubric that placed the answers in either a mostly/completely correct category (1) or a largely flawed/mostly incorrect category (0). All tests were scored anonymously after the academic quarter was complete, to avoid bias. For each student, sub-scores and total scores were calculated as the number of correct answers divided by the number of answers given. Blank answers were excluded from scoring.
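In code form, the scoring reduces to a proportion-correct calculation that skips blanks. A minimal sketch, with a hypothetical answer vector rather than the study data:

    def score(answers):
        """Proportion correct among non-blank answers (None if all blank)."""
        graded = [a for a in answers if a is not None]
        return sum(graded) / len(graded) if graded else None

    # One hypothetical student's marks on the six experimental-design
    # questions: 1 = (mostly) correct, 0 = incorrect/largely flawed,
    # None = left blank and therefore excluded from scoring.
    design_answers = [1, 0, 1, 1, None, 1]
    design_score = score(design_answers)  # 4 correct / 5 answered = 0.8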

Statistical analysis:

Statistical analyses were conducted using JMP 5.1 (SAS Institute 2003). All percentage data were arcsine square root transformed to better meet the assumption of normality. To examine differences between pre- and post-tests, I used a matched-pair difference test. To examine improvement by experienced versus inexperienced students, I used a categorical dummy variable for student status (experienced or inexperienced) to predict scores in a one-way ANOVA on both the pre- and post-test scores. I also examined other potential ordinal predictors of scores (class year, number of papers written) using ANOVA.
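The analyses were run in JMP, but the same steps can be mirrored in open-source tools. A sketch using SciPy, with placeholder data rather than the study's scores:

    import numpy as np
    from scipy import stats

    # Placeholder score vectors (proportions, 0-1) and experience flags;
    # not the study data, which were analyzed in JMP 5.1.
    pre = np.array([0.40, 0.55, 0.35, 0.60, 0.50])
    post = np.array([0.70, 0.65, 0.60, 0.75, 0.55])
    experienced = np.array([True, True, False, False, False])

    # Arcsine square-root transform for proportion data.
    pre_t, post_t = np.arcsin(np.sqrt(pre)), np.arcsin(np.sqrt(post))

    # Matched-pair comparison of post- vs. pre-test scores (paired t-test).
    t_pair, p_pair = stats.ttest_rel(post_t, pre_t)

    # One-way ANOVA: does research experience predict pre-test score?
    # (With two groups this is equivalent to a two-sample t-test.)
    f_pre, p_pre = stats.f_oneway(pre_t[experienced], pre_t[~experienced])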

Results

All 48 students enrolled in the course elected to take the pre- and post-tests. Two pairs of tests could not be matched and were excluded from the analysis, likely due to absences during one of the test administrations. Although the course is sophomore-level, only 6% of the students were sophomores; the remainder were 25% juniors, 67% seniors, and 2% other (one non-matriculated student). Of the 46 matched students, exactly half (23 students) indicated they had some kind of previous research experience. Gender and major were not recorded on the pre- and post-tests; however, the class as a whole was exactly half male and half female, with a mixture of biology and environmental science majors, one math major, and one non-matriculated student.

Overall, scores on the test improved by an average of 26% (p < 0.001; Figure 1). Students who had participated in undergraduate research had higher pre-test scores (Figure 2; ANOVA; p = 0.046); however, no performance difference with respect to research experience was seen on the post-test (ANOVA; p = 0.24). I found no relationship between test scores and potential predictors such as class year (ANOVA; p = 0.62) or number of papers written (ANOVA; p = 0.54). Inexperienced students made the greatest improvement in test scores (31%; p = 0.032), while students with research experience improved slightly, but not significantly (13%; p = 0.15).

Figure 1. Student score (N = 46) on an assessment test for experimental design significantly improved (mean ± SE; p < 0.001) after completing a lab course that used a guided-inquiry approach. The test had open-ended questions that specifically targeted the students' ability to recognize whether an experiment addressed the proposed hypothesis, and it asked them to propose a solution to the problem.

Discussion

The students scored higher on the post-test than on the pre-test on experimental design (26% average improvement). This suggests that their experiences in the course may have improved their experimental design skills, as assessed by their ability to evaluate and improve others' experiments in the test questions. Test performance was likely influenced by factors other than the course experience, and I investigated several of these, including previous participation in undergraduate research. The strong effect that undergraduate research experience can have on students is well-documented (Kardash 2000; CUR 2005), and this study suggests that students who had previous research experience were better at evaluating experimental design concepts than students who did not, as evidenced by their higher pre-test scores (Figure 2). By the end of the course, there was no difference in post-test scores between students with and without undergraduate research experience, indicating that the students without experience may in effect have caught up with those who had it. Thus the inquiry-based approach in the course may have provided experiences similar to aspects of research for those students who lacked such experience. Another possible explanation, however, is that the students' exposure to graphs and figures in the course discussion/lecture may have influenced their test scores. While I did not emphasize experimental design in the lecture portion of the class, it is an important aspect of analyzing scientific work, so the students were exposed to it throughout the course.

Figure 2. Students with previous research experience (N = 23) performed significantly better (means ± SE; p = 0.046) on the pre-test than students who did not have research experience (N = 23). There was no difference between the groups on the post-test. Letters indicate significant differences (α = 0.05).

The guided inquiry exercises exposed students to key components of doing science, including hypothesis formulation and experimental design, and in that way resembled a research environment. As implemented, the guided inquiry approach had other potential benefits. Group work helps students assess their own learning (an aspect of metacognition) and improves communication skills, both written and oral (Hogan 1999); the assessment methods here included multiple drafts of a paper as well as a group oral presentation. Additionally, gradual transfer of control of the lab activities from teacher to students is a key component of metacognitive theory (Kurfiss 1988, cited in D'Avanzo 2003).

The study could be improved in the future by collecting more detailed information about the nature of the students' research experience. In the interest of anonymity, the students were not asked for specifics about their research experiences, but I know from talking to students in the course that their abilities and experiences varied considerably. Some had spent several years working in faculty research labs or in the rigorous Biological Sciences' Research Scholars Program, while others had spent as little as a few weeks working on a small project with little oversight. RIT has a strong co-op program, so many students had experience working in faculty labs over the summer, either at RIT or at another university. The quality of the individual research experience and the degree of independence may also have affected the students' experimental design skills. Moreover, the pre-test question simply asked whether students had participated in research, which is ambiguous. Follow-up studies should use clearer language or ask students to describe their research experience.

Anecdotally, I noticed that at the end of the quarter some students still did not have effective experimental designs for their independent lab projects; lack of replication, for example, was a common design flaw. The assessment instrument did not necessarily catch all misconceptions, and in the future I would like to track individual students so that I can match their performance on the pre- and post-tests with their performance on the lab projects. I believe this will provide needed validation of the assessment tool (is it really assessing experimental design skills?) and also provide better feedback about where students are struggling most. Additionally, the assessment tool may have a ceiling effect for students who had previous experience: students who scored at or near the maximum on the pre-test had little room to improve their scores, even though their experimental design skills may have improved.

Practitioner Reflections

A few years ago, my institution began undergoing institutional assessment by an accreditation agency. As part of that process, one of the programs I'm involved with needed a revised Assessment Plan. I originally approached the prospect of rewriting this document with a sense of dread, but then as I began revising, I realized that the questions I was asking were really fundamental to not only the program's goals, but also my teaching in general: What do we want our students to learn? How do we know they are learning what we want them to learn? These questions become particularly difficult when teaching skills (e.g. experimental design, critical thinking, problem-solving) rather than foundational knowledge. How do we know when students have mastered experimental design, and how do we assess that?

Because of my involvement with TIEE, I now feel I have a toolbox for starting to deal with some of these larger questions of assessment. I used to assume that my students left my class with an appreciation of the conceptual aspects of ecology as well as the critical thinking skills needed to assess experimental design. Now I know that some of the students have those skills because they came into the class with them already. Some students are improving and honing those skills while in the course, but there are others who still need help. This awareness of where my students are in their learning helps me to target class activities, which is particularly important at the beginning of the class so I know what I need to emphasize and where the gaps are.

For instance, in this study I designed guided inquiry activities that changed both the ecosystem of focus and the level of student involvement almost every week. My thinking at the time was to expose the students to as many ecosystems as I could while working on their design skills separately. I think the students found it difficult to acquire new knowledge about each ecosystem and how it works while also being increasingly responsible for the day's activities. In part because of what I learned in this study, we changed how the lab was taught and have focused the guided inquiry aspects of the lab on a single system. The students now develop great familiarity with the components of the system through their experiences in the lab, and their conceptual knowledge of the ecosystem increases with each activity. By the time the students reach the end of the quarter, they can focus more on the design of their experiment or observational study because they already have foundational knowledge of the system. This study demonstrated to me that previous experience was a major component of student success, so the labs are now designed to provide more experience within a single system.

References