TEACHING ISSUES AND EXPERIMENTS IN ECOLOGY, VOLUME 5: RESEARCH

Semester-long Engagement in Science Inquiry Improves Students' Understanding of Experimental Design

Introduction

Pedagogy researchers and undergraduate education reformers have set out important directions for change in undergraduate teaching. Notably, the National Research Council (NRC 1996) emphasized that students should engage in the process of science in addition to learning factual information. A major recommendation of the NRC convocation for Science Technology Engineering and Mathematics (STEM) education reform was to actively involve students in the methods and processes of inquiry in order to promote science literacy (NRC 1996). The sheer quantity of scientific knowledge and rate of its increase suggest that the goal of undergraduate education cannot be to download knowledge of all subjects to students (Simon 1996, Bransford et al. 2000). Rather, our long term goal as teachers should be for students to learn how to acquire, interpret, and use scientific knowledge (Bransford et al. 2000, Weimer 2002).

In recent years many publications have identified student-active teaching as a strategy to help students acquire skills in addition to disciplinary knowledge. Student-active teaching is characterized by activities such as investigation, collaboration, and the collection, analysis, and communication of data (McNeal and D'Avanzo 1997). Such activities create opportunities to practice essential science skills such as observation using standard methods, data manipulation, and description and interpretation in a defined context. These teaching strategies provide opportunities for students to develop science literacy by practicing scientific inquiry (NRC 1996) and honing confidence and skills in problem solving (Sundberg and Moncada 1994).

In addition to teaching the processes of science, student-active teaching strategies reinforce student learning (Bransford et al. 2000, D'Avanzo 2003a). Bransford et al. (2000) reviewed a broad array of theory and research that outlines the processes by which people learn. For example, metacognition is a process by which students reflect on what they do and do not know. When a student mentally monitors his/her current understanding of a topic, he/she is using metacognition and therefore monitoring personal learning progress. Student-active teaching can help students develop metacognitive skills. When students develop and test their own hypotheses or work in small groups to re-evaluate their understanding of concepts based upon conversations with their peers, they are using metacognition (D'Avanzo 2003b).

Past research has generally shown an improvement in science content knowledge from using student-active teaching methods (Sundberg et al. 1994, Anderson 2002); however, much of that research focuses on freshman biology laboratories (Sundberg and Armstrong 1993, Sundberg et al. 2005) and, more specifically, non-majors freshman biology laboratories (Sundberg and Moncada 1994, Udovic et al. 2002). Many papers have been published describing how to decrease the cookbook nature of introductory biology courses (Leonard 1991, Sundberg et al. 1992, Sundberg and Moncada 1994, Adams 1998, Grant and Vatnick 1998, and Udovic et al. 2002). Some researchers have documented knowledge and skills changes in introductory courses (Sundberg et al. 1994, Tashiro and Rowland 1997), but I have seen no studies assessing knowledge change in upper-level biology courses.

I recently assessed the impact of an inquiry-based laboratory on students' knowledge of experimental design. I implement this lab yearly, during the Fall semester, in my upper-level plant ecology class for majors at the University of Mary Washington. A description of this laboratory was published in Teaching Issues and Experiments in Ecology (TIEE). The laboratory is titled Inquiry-based learning in plant ecology: students collect the field data, ask the questions, and propose the answers (Griffith 2004). This semester-long laboratory falls between a bounded and an open-ended inquiry experience (Table 1, Sundberg and Moncada 1994, D'Avanzo 1996, Grant and Vatnick 1998). The student experience is open-ended because students generate their own hypotheses, research the literature, design experiments, and present their ideas both orally and in a proposal. Their inquiry experience is bounded, or constrained, by the kinds of questions I direct them to (i.e., questions about invasive species) and the methods I have available to collect data and make observations.

Table 1. Inquiry framework and degrees of student creativity. Five inquiry approaches (rows) that might be used in ecology labs and four components of research (columns). "Given" means the research component is provided by the instructor. "Student" indicates that students, under the guidance of the instructor, create or collect the information for that research component. "Scholarly goal" describes the content or process that can be taught through different levels of student creativity. The semester-long laboratory in this study falls between a bounded and an open-ended inquiry experience.
Inquiry name           | Research question | Study system and methods | Data collection | Analysis and presentation | Scholarly goal of activity
Demonstration          | given             | given                    | given           | given                     | teach existing knowledge by showing or guiding students
Guided Inquiry         | given             | given                    | student/given   | student                   |
Bounded Inquiry        | student/given     | student/given            | student         | student                   | teach process of knowledge construction
Open-ended Inquiry     | student/given     | student                  | student         | student                   |
Collaborative research | given             | student/given            | student/given   | student/given             | create new knowledge
Sources: http://tiee.ecoed.net/teach/framework.jpg; Sundberg and Anderson 1994; D'Avanzo 1996; Grant and Vatnick 1998.

The specific questions I addressed were:

  1. Do students improve their understanding of experimental design after working in this laboratory?
  2. Do students change their self-assessment of their understanding of experimental design after working in this laboratory?
  3. Do students enrolled during different years differ in their changes in understanding of experimental design?

Methods

Teaching intervention

My inquiry laboratory was designed to teach upper-level students in a plant ecology course how to: 1) collect data on plant populations (distribution and abundance), 2) formulate hypotheses to explain observed patterns, and 3) write a research proposal that outlined a set of experiments to test their hypotheses. This semester-long project occurred in 13 three-hour lab classes. All lectures and student exercises during this laboratory were focused on this project. Students, working in groups of 2 or 3, made qualitative observations, collected data on plant distribution and abiotic variables, proposed and researched hypotheses, and designed a series of experiments to test those hypotheses. The data collected by students came from several research plots that contained different abundances of two invasive species (Hedera helix and Vinca minor). The hypotheses addressed the possible causes and consequences of different abundances of these invasive species. Hypotheses were generated by student groups, reviewed by the instructor, and, after revisions, mutually agreed upon by students and instructor. After literature searches, students designed a set of sampling and/or controlled experiments to test their hypotheses. Student researchers communicated their hypotheses, research, and experimental designs both orally and in a written proposal. They presented details of their proposals orally in small research groups, and they also individually prepared a formal written research proposal.

Students learned about sampling designs in one laboratory class and controlled experimental designs in another. They applied sampling designs during data collection, and they applied controlled experimental design concepts by analyzing descriptions of experimental design from manuscripts (see methods from McElrone et al. 2003, Johnson and Agrawal 2005, Viswanathan et al. 2005). Students were also given time during a lab class to develop experimental designs for their questions, with my consultation. During other lab classes, they received library research instruction, did literature searches, received data presentation instruction, and practiced creating graphs. I have taught this laboratory 4 times in 4 years, but this assessment research was done only in the last 2 years. Each semester’s class was split into 2 laboratory sections for a total of 4 sections over 2 years. Full details of this laboratory can be found on the TIEE website (Griffith 2004).

All students in the laboratory were simultaneously enrolled in plant ecology lecture. During the lecture, students were exposed to creating hypotheses in the context of describing and interpreting data. I did not explain experimental design concepts in lecture, but I described experimental designs in the course of presenting data sets. Students were frequently presented with figures and tables and asked to describe and interpret these data in light of relevant hypotheses.

Student participants

Students enrolled in the plant ecology class during the fall of 2004 and 2005 participated in this assessment research. Sixty-five students enrolled during these two semesters. Most were of junior or senior standing, with very few sophomores. Students may have been exposed to research concepts in depth, including experimental design, in two other biology courses at the University of Mary Washington. A limited number may also have participated in our undergraduate research program.

In 2005, I asked students to self-report their experience stating hypotheses, writing college research papers, and writing research proposals. Just over 80% of these students reported writing hypotheses in at least one course in college. Eighty-eight percent of students had written at least one research paper in college. Two-thirds of students had not written a research proposal. These variables had no impact on pre-test scores, post-test scores, test score changes, or self-reported knowledge, so they were not included in the analyses reported here.

Assessment instruments

Figure 1 shows a brief timeline of student work on experimental design and my assessments of their knowledge. All students answered a set of objective questions (see Pre-test/Post-test Objective Questions in Resources) at the beginning and at the end of the semester (Figure 1). Objective questions were meant to assess students' knowledge in specific subject areas. In 2004, the test included 16 questions. Eight of these questions addressed experimental design concepts such as types of sampling strategies, organization and names of standard experimental designs, description of experiment components that represent independent and dependent variables, and identification of independently treated experimental units. The other 8 questions addressed other issues and concepts such as library research and hypotheses. In 2005, 2 additional experimental design questions were added to the test. The additional questions expanded my assessment of controlled and sampling designs without greatly increasing the length of the test. In addition, 8 questions were added to the test in 2005 to ascertain students' history with research concepts and students' self-reported knowledge (Likert-type questions) of research concepts (see Pre-test/Post-test Background and Self-Assessment Questions in Resources). The University of Mary Washington Institutional Review Board reviewed and approved my use of human subjects for research in both 2004 and 2005. Students were anonymously identified for paired statistical testing in 2005; specifically, each student wrote a unique number on their pre-test and post-test in place of their name.
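As a rough illustration of how such anonymous code numbers allow pre- and post-tests to be matched for paired analysis, the sketch below uses Python; it is not the study's actual procedure, and the code numbers and scores are hypothetical placeholders.

    # Hypothetical sketch: matching anonymous pre- and post-tests by the unique
    # code number each student wrote in place of a name.
    pre_scores = {"0417": 55.0, "0932": 60.0, "1205": 45.0}   # code -> % correct, pre-test
    post_scores = {"0417": 70.0, "0932": 75.0, "1205": 65.0}  # code -> % correct, post-test

    # Keep only students who completed both tests, preserving the pairing.
    paired = [(code, pre_scores[code], post_scores[code])
              for code in pre_scores if code in post_scores]

    for code, pre, post in paired:
        print(f"student {code}: pre = {pre:.0f}%, post = {post:.0f}%, change = {post - pre:+.0f}")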

Figure 1 - Diagram of Timeline for Assessments

Figure 1. Timeline of assessments and student work on experimental designs (ED). Assessments and student work occurred during the semester-long lab in the Plant Ecology course. The pre-test and post-test included objective questions and self-assessment (Likert-type) questions.

The week after the experiment design lecture and lab work, students took an unannounced quiz or interim assessment (Figure 1). This interim assessment instrument (see Interim Assessment in Resources) contained an experimental design description similar to a methods section of a manuscript. In response, students described nine components of the experimental design, such as independent and dependent variables, individually treated experimental units, and number of replicates. Each component was given a √ for a correct response, √- for a partially correct response, or 0 for an incorrect response.
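As a minimal sketch of how such marks could be tallied into the frequency distribution summarized later in Table 3, the snippet below counts marks for a single design component; the marks shown are hypothetical, not actual student responses.

    # Hypothetical sketch: tallying interim-assessment marks for one design component.
    from collections import Counter

    # One mark per student: "√" = correct, "√-" = partially correct, "0" = incorrect.
    marks = ["√", "√", "√-", "0", "√", "√-", "√", "0"]
    tally = Counter(marks)
    print(f"correct: {tally['√']}, partially correct: {tally['√-']}, incorrect: {tally['0']}")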

Data analysis

For the 2004 and 2005 data, I used chi-square analysis to assess changes in the frequency of correct answers per test item; the significance of these changes was evaluated with Fisher's exact test applied to a 2 × 2 contingency table of correct versus incorrect answers by pre-test versus post-test. In 2004, no personally identifiable information was collected to pair pre-test and post-test data. Therefore, I analyzed changes in the mean percent of objective test items answered correctly per student between pre-test and post-test using ANOVA. These data were square-root transformed to satisfy ANOVA assumptions. In 2005, a paired t-test assessed changes between the pre-test and post-test in the mean percent of objective test items answered correctly per student. For the Likert-type questions, the Wilcoxon related-samples test assessed shifts in the frequency of student responses. This non-parametric, paired-samples test assessed individual student response changes between the pre-test and post-test across five response categories (i.e., strongly agree, agree, disagree, strongly disagree, and do not know). The interim assessment was analyzed using a frequency distribution of correct, partially correct, and incorrect responses to its nine items. All statistical analyses were done using SPSS v. 13 (2004). Transformed data were back-transformed for data presentations.
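Although the analyses in this study were run in SPSS, the tests described above can be sketched in Python with SciPy for readers who want to reproduce this kind of analysis; all counts and scores in the sketch are hypothetical placeholders, not data from this study.

    # Hypothetical sketch of the statistical tests described above, using SciPy
    # rather than SPSS. None of the numbers are data from this study.
    import numpy as np
    from scipy import stats

    # Per-item analysis: 2 x 2 contingency table (correct vs. incorrect answers,
    # pre-test vs. post-test), tested with Fisher's exact test.
    contingency = [[13, 18],   # pre-test:  correct, incorrect
                   [24,  9]]   # post-test: correct, incorrect
    _, p_item = stats.fisher_exact(contingency)
    print(f"Fisher's exact test: p = {p_item:.3f}")

    # 2005 per-student analysis: paired t-test on percent correct, possible
    # because pre- and post-tests were matched by each student's code number.
    pre_pct = np.array([50, 60, 45, 70, 55, 65], dtype=float)
    post_pct = np.array([65, 75, 60, 80, 70, 70], dtype=float)
    t_stat, p_paired = stats.ttest_rel(post_pct, pre_pct)
    print(f"paired t-test: t = {t_stat:.2f}, p = {p_paired:.3f}")

    # 2004 per-student analysis: unpaired, so a one-way ANOVA on square-root
    # transformed proportions compares the pre-test and post-test groups.
    pre_sqrt = np.sqrt(pre_pct / 100.0)
    post_sqrt = np.sqrt(post_pct / 100.0)
    f_stat, p_anova = stats.f_oneway(pre_sqrt, post_sqrt)
    # Back-transform the group means for presentation (as percentages).
    pre_mean_bt = 100 * pre_sqrt.mean() ** 2
    post_mean_bt = 100 * post_sqrt.mean() ** 2
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}; "
          f"back-transformed means = {pre_mean_bt:.1f}%, {post_mean_bt:.1f}%")

    # Likert-type self-assessment items: Wilcoxon related-samples (signed-rank)
    # test on each student's pre/post response codes (e.g., 1 = strongly
    # disagree through 4 = strongly agree).
    pre_likert = np.array([3, 3, 2, 4, 3, 3])
    post_likert = np.array([4, 4, 3, 4, 4, 3])
    w_stat, p_wilcoxon = stats.wilcoxon(pre_likert, post_likert)
    print(f"Wilcoxon related-samples test: W = {w_stat:.1f}, p = {p_wilcoxon:.3f}")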

Results

In 2004, 31 students took the pre-test and 29 students took the post-test. The mean frequency of correct answers per question on the eight experimental design questions was 51.3 ± 19.5% (x ± 1 S.D., N=8) on the pre-test and 71.0 ± 20.7% (x ± 1 S.D., N=8) on the post-test. For 4 of the 8 questions, there was a significant increase in the percentage of students answering correctly (Table 2). The mean percentage of correct answers per student, when including all sixteen objective questions, increased significantly (F1,59 = 15.89, p < 0.001) from 55.3 ± 5.1% (x ± 95% CI) to 70.9 ± 5.9% (x ± 95% CI) (Figure 2). When only experimental design questions were included, the mean percentage of correct answers per student also increased significantly (F1,59 = 18.12, p < 0.001) from 50.4 ± 7.2% (x ± 95% CI) to 71.1 ± 6.5% (x ± 95% CI).

Table 2. Change between pre-test and post-test in the frequency (%) of correct answers per test item (objective questions about experimental design concepts only). * indicates a significant change in frequency. In 2004, pre-test N = 31 and post-test N = 29. In 2005, pre-test and post-test N = 33.
Question | Year | Beginning % correct | Change in % correct | Fisher's exact score | p-value
1        | 2004 | 42        | 27  | 6.22  | 0.052*
         | 2005 | 22        | 60  | 27.34 | 0.000*
2        | 2004 | 45        | 34  | 7.42  | 0.036*
         | 2005 | 63        | 10  | 3.04  | 0.353
3        | 2004 | 90        | -4  | 2.00  | 0.417
         | 2005 | 78        | 16  | 4.59  | 0.103
4        | 2004 | 45        | 38  | 9.49  | 0.006*
         | 2005 | 42        | 37  | 11.43 | 0.002*
5        | 2004 | 23        | 1   | 4.28  | 0.231
         | 2005 | 24        | 22  | 4.19  | 0.243
6        | 2004 | 47        | 25  | 4.39  | 0.083
         | 2005 | 64        | 9   | 0.63  | 0.426
7        | 2004 | 54        | 36  | 8.37  | 0.005*
         | 2005 | 56        | 23  | 5.59  | 0.073
8        | 2004 | 64        | 8   | 1.67  | 0.667
         | 2005 | 79        | 0   | 2.69  | 0.586
9        | 2004 | Not asked | --- | ---   | ---
         | 2005 | 52        | 0   | 2.23  | 0.551
10       | 2004 | Not asked | --- | ---   | ---
         | 2005 | 87        | -11 | 2.30  | 0.617

Figure 2a - Bar Graph of Results for 2a (2004)

Figure 2b - Bar Graph of Results for 2b (2005)

Figure 2. Percent of correct answers per student on pre- and post-test questions in the Plant Ecology class of 31 students in 2004 and 33 students in 2005. Panel A shows 2004 results and panel B shows 2005 results. A subset of the questions was specific to experimental design: in 2004, 8 of the 16 questions were about experimental design, and in 2005, 10 of the 18 questions were about experimental design. Error bars are 95% confidence intervals.

In 2005, 33 students took the pre-test and post-test. The mean frequency of correct answers per question on the ten experimental design questions was 56.7 ± 22.3% (x ± 1 S.D., N=10) on the pre-test and 73.3 ± 14.2% (x ± 1 S.D., N=10) on the post-test. For 2 of the 10 questions, there was a significant increase in the percentage of students answering correctly (Table 2). For 3 other questions there was a large (>10%), but not significant, increase in correct answers. The mean percentage of correct answers per student, when including all eighteen objective questions, increased by 14.8 ± 2.37% (x difference ± 1 S.E., Figure 2) between the pre-test and the post-test (t32 = 6.27, p < 0.001). The mean percentage of correct answers per student for just the experimental design questions increased by 17.6 ± 3.48% (x difference ± 1 S.E., N=10, Figure 2) between the pre-test and post-test (t32 = 5.05, p < 0.001).

For all pre-test questions (i.e. experimental design questions plus all other questions), student scores in 2005 were significantly greater than in 2004 (Figure 2; F1,62 = 15.2, p < 0.001). When only answers to the experimental design pre-test questions were included, students in 2004 and 2005 did not score significantly differently (F1,62 = 1.2, p = 0.28).

Thirty-one students took the mid-semester assessment of experimental design knowledge in 2005 (Table 3). Twenty-nine students were correct on at least 5 of the 9 components, and twenty-one were correct on at least 6. When partially correct answers were included, sixteen students answered 8 of the 9 components correctly and six answered all 9. In a component-by-component analysis, a majority of students gave correct answers for every component except one: naming the type of experimental design.

Table 3. Number of students giving correct, partially correct, and incorrect answers for 9 components of the mid-semester experimental design (ED) quiz in 2005 (N=31). The nine components were 1. draw picture of ED, 2. describe treatments in experiment, 3. name independent variable, 4. name dependent variable, 5. state goal of experiment, 6. name experimental unit, 7. count experimental units, 8. count number of replicates, and 9. name ED type.
Component                   | Correct | Partially correct | Incorrect
1. Draw picture of ED       | 28      | 3                 | 0
2. Describe treatments      | 19      | 6                 | 6
3. Independent variable     | 26      | 3                 | 2
4. Dependent variable       | 26      | 4                 | 1
5. State goal               | 26      | 2                 | 3
6. Name experimental unit   | 28      | 0                 | 3
7. Count experimental units | 19      | 1                 | 11
8. Count replicates         | 18      | 2                 | 11
9. Name ED type             | 0       | 15                | 16

The first attitude question stated, "I am confident that I can write a hypothesis that is testable with an experiment." Strongly agree responses increased from 30% on the pre-test to 67% on the post-test (Figure 3). This was a significant increase in student self-reported knowledge (Z = -2.995, p = 0.003). Agree responses were the most frequent on the pre-test, with strongly agree the second most frequent. Sixteen of thirty-three students changed their response to agree or strongly agree. Do not know responses disappeared on the post-test.

Figure 3, Question 1 - Bar Graph of Responses for Question 1

Figure 3, Question 2 - Bar Graph of Responses for Question 2

Figure 3, Question 3 - Bar Graph of Responses for Question 3

Figure 3. Student responses on Likert-type self-assessment questions about knowledge of experimental design. Response categories are strongly agree (SA), agree (A), disagree (D), strongly disagree (SD), and do not know (DNK). Thirty-three students took the pre-test and the post-test.

The second attitude question stated, "I feel that I can analyze the design of an experiment in a research paper." Strongly agree responses increased from 21% on the pre-test to 39% on the post-test (Figure 3). Rank changes were not significantly different between pre-test and post-test (Z = -1.882, p = 0.06). About half of all students did not change their response between the pre-test and post-test. Five students dropped from strongly agree to agree, and twelve students shifted from agree to strongly agree. The disagree and do not know responses disappeared on the post-test.

The third attitude question stated, "I feel that I can design an experiment to answer research hypotheses." Strongly agree responses increased from 21% on the pre-test to 52% on the post-test (Figure 3). This was a significant change in student self-reported knowledge (Z = -3.162, p = 0.002). The number of strongly agree responses on the post-test increased by 10; these changes came from drops in agree, disagree, and do not know responses. All disagree responses disappeared, and do not know responses decreased from 5 to 1 on the post-test.

Discussion

Students in my plant ecology classes of 2004 and 2005 showed moderate improvements in their understanding of experimental design, both by objective testing and by self-assessment. In both years, students improved both their ability to recognize and name standard experimental design types and their understanding of independently treated experimental units. In 2004, students also improved in their ability to count experimental units. By self-assessment, students in 2005 started the semester with confidence in their general knowledge of experimental design. For each of these questions, this confidence shifted to the highest level by 6 – 12% by the end of the semester. The week following an experimental design lab, a large majority of students could analyze an experimental design well enough to draw it, name the independent and dependent variables, state the experiment's goal, and describe the experimental unit. Fewer, but still a majority, could describe treatments and count experimental units and replicates.

In 2004 the increase in correct responses per test item between pre- and post-test was greater than in 2005. However, from the average student's perspective, knowledge of experimental design increased by similar amounts in both years, and students in both years started the semester with similar degrees of knowledge. This suggests that individual students gained similar amounts of knowledge overall, but that the specific concepts on which they improved varied from student to student. The significant improvements in percent correct on specific questions show that, in several cases, there was group-wide improvement on specific concepts.

Other studies have shown similar success in teaching science process skills through inquiry. Tashiro and Rowland (1997) implemented research-rich courses for undergraduates and used a variety of assessment instruments (e.g. the Test of Integrated Process Skills and the Experimental Design Exam) to determine changes in research-related skills. Similar to my results, they found that most students improved research skills such as problem-solving and comprehending primary literature. A high school biology curriculum designed to teach science process skills through inquiry also produced improvements in those skills (Leonard et al. 2001). A meta-analysis of inquiry teaching strategies (El-Nemr 1979) found that students taught through inquiry showed greater improvement in process skills (e.g. hypothesizing and designing experiments) than students in traditionally taught classes.

The results from individual objective questions suggest that at least one concept was not well understood either before or after this course. Question 5 addressed the purpose of blocking in an experimental design. The low beginning score and lack of improvement may be due to a poor question, a poor explanation of the concept, or an entrenched misconception. In future studies I will change the wording of this question. The question should focus more appropriately on the practical creation of blocks in experiments, since I emphasize these practical aspects during conversations with students. So, for example, the question might read: Which of these statements correctly describes a block in an experiment? The correct response would be: A block must contain all treatment levels or treatment level combinations [i.e. factorial designs] for the independent variables. The current wording of the question focused on variability concepts that I did not emphasize in lab. The results from individual objective questions also suggest that at least two concepts were well understood before the beginning of the course and therefore showed little improvement: the role of different sampling types (Question 3) and how to create independent variables in a controlled experiment (Question 8).

The organization of this plant ecology laboratory, and of the other laboratories just mentioned, loosens the tight control over content and inquiry seen in many undergraduate science laboratories. Researchers and practitioners have described this as un-cookbooking laboratories (Leonard 1991). Students in non-cookbook laboratories use methods of inquiry similar to those used by scientists doing their own research. This use of inquiry in the undergraduate classroom was a key recommendation of the NRC (1996). The project-centered approach in these laboratories creates key opportunities for undergraduates to master process and content through group interaction, oral communication, and writing (NRC 1996) while developing problem-solving skills that may be applied to a variety of future situations (Sundberg and Moncada 1994).

Students in 2005 began the semester with high confidence in their ability to write testable hypotheses, analyze experimental designs, and create experiments; the large majority of students agreed or strongly agreed with the experimental design attitude questions (Figure 3). Despite this initially high confidence, students finished the semester with even higher confidence in their experimental design capabilities. Responses to all three attitude questions showed an increase in confidence as students shifted from agree to strongly agree. Students reported increased confidence in their ability to write hypotheses, analyze experimental designs in papers, and design experiments. On average, this shift in beliefs paralleled the increase in knowledge shown by the increase in correct answers on the objective test.

Sundberg (1997) assessed similar attitude changes in an introductory investigative biology laboratory. The assessment items most similar to my questions were Likert-type responses to the statements "I do not have the ability to think scientifically" and "I would not hesitate to apply for a job in which I had to design and perform scientific experiments." His students, in contrast to mine, became more pessimistic about their abilities to design experiments after they finished the laboratory. I suggest two possible reasons for this discrepancy: question differences and student characteristics. First, the discrepancy may arise from comparing different Likert-type questions, since student responses could be tied to the specific statements asked. For example, a student who knows more about experimental design might well disagree with the job-application statement precisely because that increased knowledge led to the realization that experimental design is not fun. Second, the discrepancy may arise because this comparison sets students in an introductory, non-science-majors laboratory against students in a biology laboratory for junior and senior majors. Multiple exposures to experimental design concepts or other characteristic differences may have led my students to answer very differently than Sundberg's students.

Taken as a whole, my course assessments support each other. Both objective and subjective question responses support students' improvement in understanding experimental design concepts. Groups of students understood some specific concepts better at the end of the course than at the beginning; individual students understood all concepts better, on average, by the end of the course; and individual students recognized these improvements. These results differ from one past study in an important way. Sundberg (1997) found that students showed improved conceptual understanding but no parallel improvement in their attitudes toward science. This contrast supports the possibility that this upper-level inquiry laboratory helped my students develop increased confidence in their science skills in addition to their understanding of those skills.

Some of the limitations on conclusions drawn from this research leave room for future work. The assessments described here focus attention on laboratory activities, but students were also exposed to a limited set of experimental design concepts in the lecture part of this course. In addition, students heard many experimental design descriptions and practiced data description and interpretation in almost every class. It is not possible to separate the influences of the laboratory and the lecture on knowledge change; that said, the most intensive study of and practice in experimental design came in the laboratory. It would be interesting to study the impact of different parts of this laboratory on changes in experimental design knowledge. My current assessment design, a pre-test at the lab's beginning and a post-test at the lab's end, cannot separate the influences of different laboratory activities on student knowledge.

The question remains: can students design experiments to answer testable hypotheses? Informal conversations with colleagues who teach upper-division biology courses support my contention that experimental design concepts are not well understood. Yet we also agree that experimental design includes key concepts for students to understand, whether they plan a graduate research career or a post-graduate professional degree. While I have not analyzed the content of the final research proposals, it is my observation that the majority of my students could describe their experimental designs both orally and in their final research proposals. Their descriptions included experimental units, counts of experimental units and replication, independent variables, and dependent variables. Furthermore, their experimental designs accurately addressed their proposed hypotheses.

Experimental design is not the only process skill I hope my students, as young scientists, learn from this semester-long laboratory. Laboratory exercises also addressed research skills including data collection techniques, data analysis, hypothesis writing, and literature research. The assessments reported here focused on one component of this research project. Yet students' experimental designs only make sense in the context of their complete research projects, so I believe a more complete analysis of the final research proposals would reveal evidence of students' ability to integrate the several complex skill sets they learned during the semester.

Practitioner Reflections

This research was my first attempt to assess changes in my students' knowledge. A retrospective view of my results will help me improve my future assessment of students' knowledge of experimental design concepts. The three main instruments of my assessment were the pre- and post-test of objective knowledge, the pre- and post-test of attitudes, and the interim knowledge assessment. The specific experimental design concepts assessed by these three instruments were not parallel. For example, the attitude questions were not as specific as the objective knowledge questions or the interim knowledge questions. One of the attitude questions was "I feel that I can analyze the design of an experiment in a research paper." A connected objective question would be "What is the experimental unit in this experiment?" or "What is the dependent variable in this experiment?" I believe I would produce clearer, more interpretable results if the vague idea of experimental design analysis in the attitude question were replaced with questions about specific experimental design elements such as experimental units or dependent variables.

In addition, the design of all assessment questions must flow directly from the concepts I plan to teach. As I teach experimental design concepts, I believe I make clear what students should know and how a good proposal will describe an experimental design. The objective questions included in the pre- and post-test do not parallel these design concepts as closely as they should. The experimental blocking question again serves as an example. The statistical reason for creating experimental blocks is to control random variation between blocks, but my behavioral objective during inquiry sessions in lab is for students to be able to correctly create and recognize blocks in experimental designs. Therefore, the design of all the assessment components must start with the question, "What experimental design concepts do I want students to learn?" In this case, the current question addresses, for me, a minor concept related to experimental blocking. It should be changed to address my behavioral objective for student understanding of blocking.

In closing, I believe the most important lesson I have learned as a research practitioner came from writing this paper. As I organized, read the literature, and wrote this manuscript, I reviewed and revisited a set of student-active teaching concepts I have learned in the last 2 years. This reading and writing allowed me to verbalize my understanding of ideas like student misconceptions and metacognition. Now that I can verbalize these ideas, I can apply them and communicate them to students as well as colleagues. For example, I can talk explicitly about common student misconceptions in lecture and in laboratory. In short, before I wrote this manuscript, student-active teaching concepts were only in my head. Now I can apply these concepts and improve my teaching strategies to improve student learning.

References