Abstract: We live in a world of complex relationships, united by myriad connections that exist at many granularities and in multiple dimensions. Discovering truths in this complexity requires the ability to decipher causality amidst endless potentially spurious associations. We believe that internal representations of external systems underlie investigative skill: effective inquiry strategies evolve out of strong internal models of the mechanisms of both specific and general systems of inquiry. This paper outlines work in progress on technologies that foster the development of more advanced internal mental models of causal systems during inquiry, thereby improving reasoning competence.
Introduction
Craik (1943) presents the notion of mental models as internal representations of a system. Black (1992) suggests that mental models are the highest form of knowledge representation, an amalgamation of propositions and imagery capable of generating procedures. Schwartz and Black emphasize imagery in mental models, describing them as "depictive simulations" (1996a) representing particular system mechanisms, from which rules are derived (1996b). Kuhn, Keselman, Black, and Kaplan (2000) present the notion of mental models in reasoning as abstract internal representations of causal systems, which underlie the use of advanced investigative strategies. The goals of our current project are to extend our existing computer-based study environment, for the purpose of advancing the development of reasoning strategies, by building visualization features that induce mental models representing 1) the principle that more than one factor can contribute to a single outcome, and 2) the correct system mechanism.
Program Design
Constructivist instructional design frameworks for computer-based study environments emphasize the relevance of self-directed knowledge construction, as opposed to direct instruction of objective knowledge (Wilson, 1996). Research indicates that instruction "situated" or "anchored" in meaningful contexts, with focused pre-set goals, has been an effective vehicle for self-discovery learning (Schank, Fano, Bell & Jona, 1993; Black & McClintock, 1996; Brown, Collins & Duguid, 1989; Cognition and Technology Group). Anchored instruction is especially well suited to science education (Goldman, Petrosino, Sherwood, Garrison, Hickey, Bransford & Pellegrino, 1996).
Problem Scenario: Flood Predictor
Flood Predictor, a multimedia research program created with Macromedia Director, is founded upon anchored, situated, and goal-based instructional design principles, supporting the investigation of causal relationships in the domain of elementary hydrology and the induction of evidence-based deductive reasoning strategies via self-discovery. Students are situated in the role of a builder working for a construction company and assigned the job of determining how high to build stilts under houses near a group of lakes. These employees are informed that, in order to avoid flood damage and to minimize expenditure on unnecessary materials, they must determine which factors in the region will cause flooding and which will not. To do this, they call up records of sites by creating unique combinations of features, predict how high flooding will rise, and conclude whether each feature matters, doesn't matter, or hasn't yet been found out about. In addition to drawing these conclusions after each instance of examining records, as a metacognitive cue, students are asked to respond to a multiple-choice question about how they know that certain features matter or don't matter in causing flooding.
Five factors are introduced as potentially causal in flooding: Water Pollution (high vs. low), Water Temperature (hot vs. cold), Soil Depth (shallow vs. deep), Soil Type (clay vs. sand), and Elevation (high vs. low). Three factors (Water Temperature, Soil Depth, and Soil Type) are causal within the program scenario, and there is one interaction, between Soil Depth and Soil Type. The other two factors are not causal within the program scenario.
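To make this causal structure concrete, the sketch below shows one way it could be encoded. This is an illustration only, not the program's actual code: the factor names, levels, causal factors, and the Soil Depth by Soil Type interaction come from the description above, but the directions of the effects and the numeric flood-level contributions are invented placeholders.

```python
# Illustrative sketch of Flood Predictor's causal structure (not the actual program code).
# Factor names and levels follow the paper; effect directions and magnitudes are placeholders.

FACTORS = {
    "water_pollution": ("high", "low"),    # non-causal in the scenario
    "water_temperature": ("hot", "cold"),  # causal
    "soil_depth": ("shallow", "deep"),     # causal; interacts with soil type
    "soil_type": ("clay", "sand"),         # causal; interacts with soil depth
    "elevation": ("high", "low"),          # non-causal in the scenario
}

def flood_level(site: dict) -> int:
    """Return a hypothetical flood level (arbitrary units) for a site's factor values."""
    level = 1                                          # baseline flooding
    if site["water_temperature"] == "hot":             # assumed direction
        level += 1                                     # main effect of temperature
    if site["soil_depth"] == "shallow":                # assumed direction
        level += 1                                     # main effect of soil depth
    if site["soil_type"] == "clay":                    # assumed direction
        level += 1                                     # main effect of soil type
    if site["soil_depth"] == "shallow" and site["soil_type"] == "clay":
        level += 1                                     # depth-by-type interaction
    # water_pollution and elevation contribute nothing: non-causal factors
    return level

example_site = {
    "water_pollution": "high",
    "water_temperature": "hot",
    "soil_depth": "shallow",
    "soil_type": "clay",
    "elevation": "low",
}
print(flood_level(example_site))  # -> 5 under these placeholder weights
```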
Reasoning Activity
Within all Flood Predictor versions, participants are guided through investigations modeled upon previous research by Kuhn et al. (1992; 1995). Reasoning activity is operationalized as design of investigation, number of variables held constant and controlled comparisons, outcome predictions, outcome inference and interpretation, and theory change. Metacognitive awareness is activated and reinforced by declarations of inquiry intention, causal prediction, and inference justification. Theory assessment involves participant declaration of initial and final theoretical conclusions about whether factors matter and in which direction, along with levels of theory certainty. Inquiry designs are determined by the record comparisons made by the participant, who may call up records of any combination of factor values. A set of five menus, one for each factor, permits students to select one of the two values for each factor. Records include a unique site identity, the value of each factor at the site, and the true outcome of that unique combination of factor values. For each record selection, before discovering the true outcome, participants predict the flood level and indicate which variables would be responsible for the predicted result or change. Upon receiving the true flood level of a site, participants indicate whether each factor "matters," "doesn't matter," or "hasn't been found out." For each variable participants claim to have found out about, they are asked how they know what they know, choosing from "the records," "field evidence," "the records and field evidence," or "just know." A notebook, opened by clicking the "open notebook" button, is available for storing notes at any point during the investigation session, and after each record instance participants are asked whether they would like to type anything in the notebook. Participant activity, including time and ID, is recorded into a text document on the local hard drive of the computer on which the program is installed.
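The sketch below illustrates the kind of record structure and local activity log described above. It is an assumption-laden illustration, not the program's actual implementation: the field names, log format, and file path are hypothetical.

```python
# Illustrative sketch of a site record and a local activity log (not the actual program code).
# Field names, log format, and file path are assumptions for illustration only.
import time

def make_record(site_id: str, factor_values: dict, outcome: int) -> dict:
    """Bundle a site's identity, its factor values, and its true flood level."""
    return {"site_id": site_id, "factors": factor_values, "outcome": outcome}

def log_event(participant_id: str, event: str, detail: str,
              path: str = "flood_predictor_log.txt") -> None:
    """Append one time-stamped line of participant activity to a local text file."""
    stamp = time.strftime("%Y-%m-%d %H:%M:%S")
    with open(path, "a", encoding="utf-8") as log:
        log.write(f"{stamp}\t{participant_id}\t{event}\t{detail}\n")

# Example session fragment: a participant calls up a record, predicts, then sees the outcome.
record = make_record("Lake Site 12",
                     {"soil_depth": "shallow", "soil_type": "clay",
                      "water_temperature": "hot", "water_pollution": "low",
                      "elevation": "high"},
                     outcome=5)
log_event("P07", "record_selected", record["site_id"])
log_event("P07", "prediction", "flood level 4; responsible: soil_depth")
log_event("P07", "outcome_shown", f"true flood level {record['outcome']}")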
Mental Model Manipulations
Simple Geographic Information Systems (GIS), made up of transparency map layers representing factors and factor values, are introduced as tools for eliciting development of the additive-effects mental model. Depicting environmental system components in space is intended to illustrate that more than one component exists in the same location, leading to easier analysis of combined relational effects. Departing from previous causal reasoning research, we present qualitative evidence about each factor, in addition to the covariation evidence available in record comparisons, with the intention of stimulating a correct internal representation of the particular mechanism of causality. We expect that experiencing one or more of these interventions will lead to more advanced mental models and thus to more advanced inquiry strategies.
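As a conceptual illustration of the transparency-layer idea, the brief sketch below treats each factor value as a simple grid layer and stacks two layers to show where values co-occur. The grids and values are invented for illustration and do not come from the program's materials.

```python
# Illustrative sketch of overlaying transparency map layers (values are invented).
# Each layer marks, cell by cell, whether a factor value is present at that location.
shallow_soil = [[1, 1, 0],
                [0, 1, 0],
                [0, 0, 0]]
clay_soil    = [[1, 0, 0],
                [1, 1, 0],
                [0, 0, 0]]

def overlay(layer_a, layer_b):
    """Cells where both factor values co-occur, like stacking two transparencies."""
    return [[a & b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(layer_a, layer_b)]

# Locations where shallow soil and clay soil coincide, i.e. where the
# depth-by-type combination could be observed in the landscape.
print(overlay(shallow_soil, clay_soil))  # [[1, 0, 0], [0, 1, 0], [0, 0, 0]]
```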
Results and Future Directions
The results showed that demonstrating the GIS to students led to better understanding of the relevant model of causal reasoning and, in turn, to better inquiry strategies, more accurate predictions, and better knowledge acquisition about flooding. Providing information relevant to inferring flooding mechanisms increased the complexity and dynamism of students' mental models, and also improved inquiry strategies and knowledge acquisition, but did not lead to better predictions.
Near-future goals include incorporating the GIS map layers into the computer program, so that students can investigate with them directly within the environment. Ultimate goals include building a database-driven web application capable of distributing and receiving evidence about any system, allowing teachers and students to specify particular content, and displaying system components and relations in diverse shapes and dimensions: as two-dimensional map layers, as sets of menus, as particular depictions, as networks, hierarchies, illustrations, and so on. Such an artifact would allow students to improve reasoning strategies across domains and surface features, and would permit researchers to compare cognitive development across various contents and knowledge representations.
References
Black, J. B. (1992). Types of Knowledge Representation (CCT Report 92-3). New York: Teachers College, Columbia University.
Black, J. B. & McClintock, R. O. (1996). An interpretation construction approach to constructivist design. In B. G. Wilson (Ed.), Constructivist Learning Environments: Case studies in instructional design. Englewood Cliffs, NJ: Educational Technology Publications.
Brown, J. S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18(1), 32-42.
Goldman, S. R., Petrosino, A. J., Sherwood, R. D., Garrison, S., Hickey, D., Bransford, J. D., & Pellegrino, J. W. (1996). Anchoring science instruction in multimedia learning environments. In S. Vosniadou, E. De Corte, R. Glaser, & H. Mandl (Eds.), International Perspectives on the Design of Technology-Supported Learning Environments. Mahwah, NJ: Erlbaum.
Kuhn, D., Schauble, L. & Garcia-Mila, M. (1992). Cross-domain development of scientific reasoning. Cognition and Instruction.
Kuhn, D., Garcia-Mila, M., Zohar, A., & Andersen, C. (1995). Strategies of knowledge acquisition. Monographs of the Society for Research in Child Development, 60(4), Serial No. 245.
Schank, R.C., Fano, A., Bell, B. & Jona, M. (1993/1994). The design of goal-based scenarios. The Journal of the Learning Sciences.
Schwartz, D.L. & Black, J.B. (1996a). Analog imagery in mental model reasoning: Depictive models. Cognitive Psychology, 30, 154-219.
Schwartz, D.L. & Black, J.B. (1996b). Shuttling between depictive models and abstract rules. Cognitive Science, 20, 457-497.