Saturday, February 28, 2009

Gibbons Model-Centered Instruction

What I like about the central theme of Gibbons' (2000) model-centered instruction is that effective and efficient instruction takes place through experiencing models, with various instructional augmentations supporting learning from that experience. Specifically, I see MCI as an external-to-internal instructional approach in which learning from externalized models reshapes internalized knowledge models. There is a constant flow of meaning negotiation between external and internal models.

The 7 principles of model-centered instruction help frame the practical aspects of designing learning environments to support model-centered learning. Of all the principles, I am intrigued by denaturing: an artificial way of modeling the real system to match the target learner's existing knowledge and goals.

Model-centric thinking is essentially systems thinking. This form of thinking sees a problem at the macro level, which then provides the foundation for seeing the inner workings of the problem at the micro level. As such, modeling facilitates the ability to see a problem as a whole rather than as the sum of its parts. This is my epiphany!

Indeed, I see strong links between complex problem solving and modeling as a tool in the Instructional Designer's toolbox, and model-centered instruction as prescribing instruction to the learner in the model-centered learning environment.

Tuesday, February 24, 2009

Wii music in classroom

I found one interesting news about Wii music in the classroom: http://www.msnbc.msn.com/id/29127548/

The more I think about games, the more I feel it comes down to how people perceive the activity. For example, a classic classroom activity can be viewed as a game: we have a clear goal (answer all the questions on the test correctly), clear rules (you cannot cheat), and some kind of competition (before graduate school, there is more or less a curve in school, so students need to outperform their fellow students to get an A). Now, with the Wii, which people perceive as a game, students are receptive to learning with it.

However, some people still cannot accept that a game can be a main instrument for learning rather than just a supplementary exercise.

reflection on MFL-DeJong

In de Jong and van Joolingen (2008), the authors mention mainly two types of models: (a) computer models and (b) external models.
To my understanding, the computer models they mention are the simulations that learners may observe and manipulate. They are black boxes: learners need to observe input and output values in order to predict their behavior and inner structure. The external model, to me, is the externalized mental representation of the computer model. After interacting with the computer model, learners have some ideas (a mental representation, or mental model) about its internal mechanics; they then need to externalize those ideas as an external model and amend it so that it imitates the original model's behavior.
Other types of models that were mentioned are domain models, which are generally agreed upon by researchers working in those domains, and individual models, external or internal (mental models), owned by individuals. The purpose of scientific practice lies in adapting individual models to align with domain models.

While models belong to the category of learning theories, inquiry learning belongs to the category of learning approaches.
Learning from models, learning by modeling, and a combination of the two are the three approaches advocated by the authors. As the first two were discussed in the reflective journal on Milrad, Spector & Davidsen, I will not repeat them here.

Monday, February 23, 2009

Mark on Eseryel & Spector 2002

Just a quick note on Eseryel and Spector 2002 (I realized I hadn't posted on this yet). The main thing I was thinking about was based on the (p8): "Causal influence diagrams (CID) were compared between novice and expert to assess level of understanding of novice." It seems the more "expert diagrams" you can get, the closer you can get to some "ethereal perfect diagram" with which to compare novice diagrams. In other words, for biology, I would need some database (online repository) where faculty contribute their understanding. Some online software could automatically compare novice diagrams to this ever-growing (hidden) database of expert CIDs.
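As a rough sketch of how such an automatic comparison might work (the edge-set representation, the toy biology data, and the Jaccard scoring are all my own assumptions, not anything from Eseryel & Spector):

```python
# Hypothetical sketch: compare a novice causal influence diagram (CID)
# against a small "database" of expert CIDs by directed-edge overlap.
# Representing a CID as a set of (cause, effect) pairs is my assumption.

def jaccard(a, b):
    """Jaccard similarity between two sets of directed edges."""
    return len(a & b) / len(a | b) if a | b else 1.0

expert_cids = [
    {("sunlight", "photosynthesis"), ("photosynthesis", "glucose"),
     ("glucose", "growth")},
    {("sunlight", "photosynthesis"), ("photosynthesis", "oxygen"),
     ("photosynthesis", "glucose")},
]

novice_cid = {("sunlight", "photosynthesis"), ("photosynthesis", "growth")}

# Score the novice against every expert diagram and keep the best match.
best = max(jaccard(novice_cid, e) for e in expert_cids)
print(round(best, 2))
```

A real system would obviously need to handle synonyms, partial credit for near-miss links, and so on; this just shows the shape of the idea.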

The first step, then, is training biology faculty on causal diagrams! I'm not sure I could get any interested in this!! There are certainly some more interested in education, and I think I could get those....but some sort of incentive would still be nice (recognition? count as a peer-reviewed pub or review?)

Bryan plays beer game

The Beer Game
I started playing the game by reading the short overview and background on the homepage; I wanted to know what the game is about. However, I think students may prefer to jump into the game without looking at the introduction. I tried to click the three icons, which are much like the three challenges in the Food Chain, but they are not links. Then I clicked wrap-up to begin my trial.

My first trial was to play the simulation. Before playing it, I carefully read the instruction about the goal: to keep the inventory at 20 cases. Then I began to play. For the first three weeks, in which no change took place, I just clicked run until I saw the consumption rise to 8 (in the fourth week). Then I began to increase the order from the wholesaler gradually, raising it by one case per week, in order to avoid severe fluctuation (because I predicted that the consumption might change from time to time, which makes this an ill-structured problem).
Below is my first trial.
WEEK CASES
1 4
2 4
3 4
4 4
5 5
6 6
7 7
8 8
9 8
10 8
11 9
12 10
13 10
14 10
15 10
16 10
17 10
18 8
19 8
20 8

I ended up spending $141.88 with 20 cases in the inventory by the 20th week, but the system told me I failed. It might be because I didn't immediately compensate the inventory with an equal amount of cases, so I tried a second time.

The second trial was a success. When consumption went up (8 cases), I immediately compensated the inventory with a larger order (10 cases) of beer, so the inventory recovered soon and reached the original balance. This time I knew that the consumption would stay steady (8 cases), so I knew the model is an idealized one, and I could see what it is like.

For the third time, I looked at the model (stock-flow chart) provided by the software. What caused the problem is the 2-week delay: inventory was both replenished by delayed deliveries from the wholesaler and drained by consumer demand. So understanding the dynamics is the key to solving the problem.

Science Ed Model (Extra reading)

Both Gobert and Clement's papers (2000) are from the same issue, highlighting use of modeling in science education (and research). As such, both hit on some of the same issues, and both of these papers gave summary and reference to other papers in this "special issue." One important point is that, "At present, models and modeling are considered integral parts of scientific literacy" (p891).

Of interest are their references to Buckley's paper, which apparently has some discussion of assessments. "Buckley shows one method of doing this [describing target models and post-conceptions] in biology in her figure 8, including distinctions between structure, function, behavior and mechanism. Her diagram notation holds promise for allowing us to describe pre- and post-conceptions at an intermediate level of detail, and to compare them to a target model" (p1044). I need to get this and take a look.

Another thing that caught my eye was their reference to Gobert's discussion (in a different paper of this same issue) of the "drawing to learn" strategy (p1046): "...drawings can become a kind of glue that holds the instructional session(s) together, keeps them coherent, and focuses them on the developing conceptual model. This may be a larger sense in which drawings can be an integrative medium." This is of particular interest because I have been talking to a professor in the Botany/Microbiology department on campus about his using student drawings as an indication of their understanding of concepts/content in his Introductory Botany class. Integrating this with technology (both for DOING and ASSESSING the drawings) may be the direction I take my dissertation research.

Mark likes Beer!

run 1: I wanted to just jump right in and start pressing buttons, but I resisted this urge. Instead I read the directions to see what it was I was supposed to do/learn. With a 2-week delay between order from wholesaler and delivery, I knew I’d need to order more cases the week that I saw an increase in orders (not wait until another downturn). So, I was able to gradually increase or decrease orders as demand changed. Demand didn’t appear to rapidly increase, so I was able to keep expenditures to just over $200 for the 20-week period (the game said optimum was $250, so I felt pretty good about myself!).

run 2: I did almost as well this time; found myself looking more closely at the exact number of cases ordered by customers, instead of just estimating by watching the bar graph. I was thinking of the model behind this…the causal relationships of: number in stock, number ordered, and 2-week delay.

run 3: I did the worst this time, though still optimum ($249); and I’d been drinking beer. ☺ Looking at the model was interesting, but I didn’t find myself naturally thinking about it as I did this third game. Because I didn’t get any “backordering” on any of the 3 runs, I didn’t have to consider/use that part of the feedback loop. I do find myself wanting to play again to see if I can get it to give me a larger jump in customer orders; then, I could see my reaction and get a backlog of orders. So, for my fourth run, I had the same jump in orders from 4 to 8 during week 4 (?); I got to an expenditure of just $187!

In all three, I don't feel like I learned anything; rather, I just played a game. I'd be curious whether I could pass a test on any underlying principles. Also, it does make me want to try to build my own model of something in Stella. As a student, I think the next step would be to change the variables and play again (increase case cost, increase/decrease order lead time, vary the number of orders each week randomly). The step after that should be a similar situation with a different commodity, where students write out a model that could explain it (in a format similar to the Beer Game model, since they have seen that). Without this last step, I am learning with models, but not learning by modeling. Learning by modeling definitely engages the learner's mind in more active thinking and should accomplish more conceptual change than just using this Beer Game (or even seeing the underlying model). Eventually, some "real terminology" should be used, so that I know what to call all these supply/demand principles with which I've been working.

Saturday, February 21, 2009

Beer game reflection

First, if you haven't played the beer game, I suggest you not read my blog yet. Finish the game first; it is fun to play for both researchers and students.

Here is my log (actions + reflections)
Objective -
1. maintain a stable inventory at 20 cases
2. Keep total expenditures for 20 week period under 300.

How it works:
sell when I have stock
lead time = 2 weeks
4 x carrying cost = out of stock cost

Seems like I have only ONE input variable - how many cases to order.

My notes: I'd rather write down the objectives to help me keep focus; I also translate the game rules into my own language. I have inventory management experience, so I am kind of an expert. But I still try not to use my knowledge at the beginning (try not to invoke any prior inventory model that I know of; my general mental model should be invoked anyway, though. Let's see).

My first question: what is the demand function? (Too bad, my previous knowledge kicks in, but it is also a normal question to ask: we have to know the demand before we stock.) Then I found that the information is located on the left (given by the problem).

What I have now:
20 cases in inventory
4 cases ordered per week

I start with 8 cases, because the lead time is 2 weeks and the demand is 4 per week. Then I run (trying not to think too much at this time, because I don't want to invoke any inventory model that I know of).

Week number and number of cases ordered are listed below:
1 - 8
2 - 2 => I found out the price for each case $3.2??
3 - 4
4 - 4
5 - 8 => customer demand 8 => I reacted with more order.
6 - 8
7 - 8 => I still order 8, since I think I am still OK with 15 on my shelves (and my order from warehouse is getting in).
8 - 8
9 - 12 => because the objective is keeping the inventory at 20, I order 12 to push up my inventory now (actually, I don't think we need 20 cases of safety stock).
10 - 9 => still adding a little bit more to keep the inventory at 20 (and it seems like I can keep the target expenditure in range)
11 - 8
12 - 4 - we are over the target.
13 - 8 - The demand is keeping at 8. So, I order 8 (assuming we already balance our inventory)
14 - 8
15 - 8
16 - 8
17 - 8
18 - 8
19 - 8
20 - 8

I keep my inventory at 20, and cost at 245.88.

Can I do better? Yes, with a spreadsheet, where I have inventory, demand, and supply, and inventory(t) = inventory(t-1) - demand(t-1) + supply(t-2). => This is my model.

I still order 8 in the first week, but instead of ordering something in the second week, I order nothing. For some reason, though, my supply fell below target when the demand jumped from 4 to 8.
I kept my inventory at 20, and the cost at $223.88.
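My spreadsheet model above can be sketched in a few lines of code (a toy sketch; the order and demand series below are just trial values of my own, not the game's actual internals):

```python
# Toy sketch of my spreadsheet model:
#   inventory(t) = inventory(t-1) - demand(t-1) + supply(t-2)
# where supply(t-2) is the order placed two weeks earlier (2-week lead time).

def simulate(orders, demands, start=20):
    """Return the weekly inventory levels for given order and demand lists."""
    inventory = [start]
    for t in range(1, len(orders) + 1):
        arrived = orders[t - 2] if t >= 2 else 0  # order placed 2 weeks ago
        inventory.append(inventory[t - 1] - demands[t - 1] + arrived)
    return inventory

# Demand jumps from 4 to 8 in week 5, roughly as in my runs above.
demands = [4] * 4 + [8] * 8
orders = [8, 0, 4, 4, 8, 8, 8, 12, 8, 8, 8, 8]
print(simulate(orders, demands))
```

With these toy orders, the inventory dips to 12 when the demand jumps and the 2-week-old orders are still small, which matches the below-target behavior I saw.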

Then, I look at the model which is very similar to the mathematical model that I created.

I did almost the same thing, but keep the inventory closer to 20 most of the time.

Actually, the math model is abstract, and it is not easy for students to create such a model (maybe my math model is off a little bit, too). I think the stock-and-flow diagram should be a lot more intuitive for students learning the concept.

Again, I took inventory management classes during my academic career, and I think this is a better way for students to experience the different issues of inventory management. The two key issues (which I knew before I played the game) are demand uncertainty and lead time uncertainty. I haven't played the advanced game, but I guess it will introduce lead time uncertainty (or cost minimization) into the equation. No matter what, I believe this is a better way to give students the key concepts. Then mathematical models can be introduced to "solve the problem," because we do have mathematical models that solve these problems accurately. The instructional question is when and how to introduce the math models. In other words, how do we use the simulation AND the math models to achieve the academic goal? In this case, learning the inventory management concepts.

Victor

Thursday, February 19, 2009

Steps to construct a model and 3 Phases of Model-Facilitated Learning

Step 1: Identify and define objects and their variables.
Step 2: State the relationship between objects.
Step 3: Test the model for expected outcome.

This model constructing process mirrors the process for writing a computer program!!!
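To make the parallel concrete, here is a minimal sketch (my own toy example, not taken from the reading) that walks through the three steps with a trivial growth model:

```python
# Step 1: identify objects and their variables (a toy population model).
population = 100
growth_rate = 0.10  # 10% growth per time step

# Step 2: state the relationship between the objects.
def step(population, growth_rate):
    """One time step of simple exponential growth."""
    return population + population * growth_rate

# Step 3: test the model for the expected outcome.
after_one_step = step(population, growth_rate)
assert after_one_step == 110, "model did not behave as expected"
print(after_one_step)
```

Defining variables, stating relationships, and testing against expected behavior: exactly the cycle of writing and debugging a program.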

I am loving it!!! All thanks to de Jong and van Joolingen for making it so clear. I concur with the authors: first learn from models, then learn by modeling, and lastly reinforce with model-based inquiry learning. These three sequential phases of learning activities seem effective.

Learning from models with cognitive scaffolds helps novices develop modeling skills. As novices become more skillful, they can then construct models. Combining these learning experiences sets the stage for model-based inquiry learning...they learn, do, and integrate.

Wednesday, February 18, 2009

Davidsen (1996) & Clement (2000)

Davidsen (1996) describes a simulation-based and modeling approach to learning using system dynamics. I think the main issue he tried to tackle is complex problem solving. A modeling approach can provide a holistic view of complex problems, while simulation provides an experimental approach for students to experience the complexity of the problems and the possible effects of proposed solutions.

Even though Davidsen (1996) did not put a lot of emphasis on organizational learning, he implies that system dynamics may enhance organizational learning, too.

I think we did talk about the uncertainty issue in complexity, and Davidsen (1996) addresses this issue by suggesting Monte Carlo simulation as a tool.
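A minimal sketch of what that might look like (my own toy example; the cost figures, demand range, and ordering policy are all assumptions, and Davidsen gives no code): instead of one fixed demand curve, sample many random demand series and look at the distribution of outcomes.

```python
# Toy Monte Carlo sketch: estimate the average 20-week cost when weekly
# demand is uncertain. All numbers here are my own assumptions.
import random

def run_once(rng, order=8, start=20, carrying=0.5):
    """One 20-week run with random demand and a fixed weekly order."""
    inventory, cost = start, 0.0
    for _ in range(20):
        demand = rng.randint(4, 8)               # uncertain weekly demand
        inventory += order - demand
        if inventory >= 0:
            cost += carrying * inventory          # carrying cost in stock
        else:
            cost += 4 * carrying * (-inventory)   # out-of-stock costs 4x
    return cost

rng = random.Random(42)  # fixed seed so the experiment is repeatable
runs = [run_once(rng) for _ in range(1000)]
print(sum(runs) / len(runs))  # average cost over 1000 simulated runs
```

Running many randomized trials like this yields a distribution of outcomes rather than a single number, which is the point of the Monte Carlo approach.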

I think Clement (2000) summarized the themes and some studies in that special issue. He tried to develop a cognitive theory of conceptual change, which is very close to what Seel (2003) described, except that Clement recognizes that the target model may not be as complicated as the expert model. I think Clement is looking at the problem from a "classroom" instructional point of view, where Seel is looking at a more general situation. When students spend long enough learning (I assume they learn), the target model will transition toward the expert model.

I would like to discuss an interesting issue here: students' conception of learning. Students do not believe in inquiry-based learning (or learning by moving from one intermediate model to another). Actually, one of the students who participated in our research did not draw a concept map, but just wrote "it is a completely waste of time" on the paper. And teachers are under a lot of pressure to help students on standardized tests. So, how easy is it to change our students' conception of learning? If it is not easy, is there any middle ground to help our students learn?

Actually, this conversation makes me think about the game-based learning argument from Shaffer (2006) and Prensky (2007): we may help students learn in games so that they don't even need to think that they are learning. Maybe game-based learning is a solution.

Spector (2003) & Eseryel & Spector (2002)

Nelson did talk about PBL in his posting. After reviewing the paper a little more, I think the key point from Spector is not just to point out that PBL (or PCI) does not have a strong theoretical foundation. He also wants to suggest a research issue for PBL: the difficulty of assessing higher-order thinking (or learning in complex domains).

It is interesting to see Spector talk about "short term goals" and "long term goals" for PBL. I believe it matches the theme from Seel (2003) that learners' mental models are revised over time. The goal of educators should be to move the learner's mental model closer to the "final state"; in the context of PBL, that means more expert-like.

My question: is it a reasonable progression for medical students (or other students) to first manage what is on hand, and then develop higher-order thinking skills? Do we have enough evidence to suggest that PBL is good for those short term goals? I read that there are mixed results on PBL, but a thorough analysis needs to be done to see why mixed results were found.

I think Eseryel & Spector (2002) can serve as a response to Spector's (2003) future research directions. Eseryel & Spector studied ID experts solving problems in a complex domain and found recognizable patterns in their representations and solutions. With this finding, we may find a way to assess higher-order thinking skills in complex domains by comparing novice CIDs with expert CIDs. Actually, I think researchers have done this kind of comparison for some time (even though they may not have used CIDs). However, Eseryel and Spector provide empirical evidence that the CID of an expert may be the final state discussed in Seel (2003).

So, may I use this argument to justify using ONE expert CID as my "rubric" for a study (provided that I convince my reader he/she is indeed an expert in the field)?

One more point: a longitudinal study is important, since mental models are revised over time. I think the research question should be how to make the revision happen (and happen at a faster rate).

Tuesday, February 17, 2009

On Milrad, Spector & Davidsen

Focusing on the support that technology can provide in distributed learning environments and complex systems, the authors presented two approaches that make use of models: learning with models and learning by modeling. To improve learners' learning, socio-constructivism, system dynamics, and collaborative tele-learning are brought together.


The learning process, in which a novice is transformed into an expert, shows "graduated complexity": MFL advocates a sequence of learning activities that begins with some kind of concrete operation, manipulating tangible objects in order to solve specific problems (Milrad, Spector & Davidsen, 2000). As these operations are mastered, learners can then progress to more abstract representations and solve increasingly complex problems.

This process is in accordance with the increasing difficulty from learning with models to learning by modeling.

Their research is supported by situated learning theory (novice to expert in a community) and Cognitive Flexibility Theory (multiple representations, both learner-constructed and learner-modifiable).

Rouwette et al (2000) argue that a collaborative approach to model and policy design is effective for learning and understanding.

A causal loop diagram can provide a representation of the entire system, which can support elaboration, knowledge elicitation, and assessment of understanding.

Using a causal loop diagram to present the problem, letting students understand it, change a variable, and predict, and then using simulation to verify, may create disequilibrium and hence promote learning.

I find this process is much like what the Food Chain software provides. Based on the same theory, we can design and develop similarly effective learning environments or software to support learners in complex problem solving.

Mark on Jonassen 2005

These articles seemed to get more and more practical/applicational. I enjoyed this one (perhaps because it mentioned Eco-Beaker, which I have seen demonstrated and looked into using!).

Jonassen et al. are using this paper to convince us that technology-supported models constructed by students can affect conceptual change (= learning) (p16). As we discussed last week, ASSESSMENT still seems to be a tough piece of this puzzle. This article says that rubrics can be used to compare models built over time to gauge conceptual change. However, the sample rubric (~p16) is quite vague (the sign of a poor rubric, in my mind). It would still require lots of trained human input, and results may not be consistent between instructors.

One statement that resonated with me (p19) was, "We argue that the task that most naturally engages and supports the construction and reorganization of mental models is the use of a variety of tools for constructing physical, visual, logical, or computational models of phenomena. Building representational and interpretive models using technologies provides learners the opportunities to externalize, restructure, and test their conceptual models." They cite (Frederiksen & White, 1998; Mellar, et al., 1994; White, 1993) to say that "interacting with model-based environments does result in development and change of mental models." Hard to argue with that!

Another part I was excited about (because I'm constantly trying to find ways that I could make "high memorizing" courses like intro science [new terminology] or Human Anatomy include more active learning) was on p24. Here they say that "by modeling domain knowledge, students must understand conceptual relationships among the entities within the domain in order to construct the model." So, they're memorizing by building relationships (even as simply as on a concept map).

I feel like I didn't get enough info about the "ontology shifting" (p31); so should probably look into the paper they cite. I was happy to see them list the limitations to modeling in this paper (though they seemed to have an answer for each). An enjoyable paper!

Monday, February 16, 2009

Spector's View on Problem Centered Instruction

I am really surprised that Spector claimed PCI lacks a proper theoretical foundation. Until now, my theoretical framework on ill-structured problem solving stemmed from Hannafin, Land, and Oliver's (1999) Open-Ended Learning Environments (OELE) and Jonassen's Constructivist Learning Environments (CLE). Both learning environments are grounded in an established learning theory, constructivism, and are designed to support higher-order cognitive skills. Further, these environments provide a systemic, holistic approach, with cognitive tools for scaffolding learner problem solving.

On the assessment side, I am intrigued that learner problem solving can be assessed through causal diagrams. This brings new insights on self-assessment, in which learners build causal diagrams of problem solutions and then compare them with the expert's diagrams.

Mark's thoughts on Milrad et al. 2002

Milrad et al. clearly lay out their goal in this paper: to show that "...technology can be effectively used in distributed learning environments to support learning in and about complex systems....To achieve this goal, learning theory (socio-constructivism), methodology (system dynamics) and technology (collaborative tele-learning) should be suitably integrated (Spector & Anderson, 2000). We call this integration Model Facilitated Learning (MFL) (Spector & Davidsen, 2000)." (from pages 2-3). Apparently this has all been published elsewhere (citations above), but this paper just gives a little bit more concrete explanation of MFL with specific example(s).

Milrad et al. talk quite a bit about all the other theories/methods of the past, and how they've integrated them into MFL: situation/problem-based learning, cognitive flexibility theory (CFT), and instructional design methods per "elaboration theory" and "cognitive apprenticeships." HUH??!!! Luckily, page 4 and beyond gives some background on these. "Situation based" says that learning "occurs in the context of activities that typically involve a problem, others, and a culture." SO, MFL applies technology to CFT, allowing collaboration in context-dependent situations, where the learning objectives are first concretely shown, then increasing complexity is added and inquiries collected/solved to allow the learner to construct a model of the concept. In other words, they do a "coupling of system dynamics with collaborative and distributed technologies."

MFL is further boasted about (p6) because it suggests a sequence of learner challenges, from 1) challenging learners to standardize the behavior of a complex system to 6) challenging them to diversify and generalize to new problem situations. And, just as Deniz mentioned last week, MFL "advocates learning WITH models...to introduce learners to a new domain...and to promote learning simpler procedures" (using causal loop diagrams, for instance). Then, more advanced learners transition from learning with models to LEARNING BY MODELING (p9-10). To do this (still with MFL), the learners 1) must realize there is a system behavior occurring (underlying connections happening); and 2) use graduated complexity (let learners fill in missing info on a partial model, have them construct a simple model, then a complex model (or link simple models), then have them reach a goal/conclusion through from-scratch modeling).

It is nice that on p.11 a concrete EXAMPLE of MFL, using problem orientation, inquiry experimentation, and policy development in regard to acid rain/water quality is shown. I wish there was a bit more detail though (especially since it is in my area of "ecology!"). They argue that using MFL (structured, building, collaborative model-based learning), they meet all the "requirements" needed to allow learner growth. BUT, it doesn't appear this was ever tested in this paper (lecture on same stuff, or non-collaborative simulations VS. MFL to compare learner outcomes). Perhaps this next 2008 paper will show some!

Mark's thoughts on Seel 2002

WITH ANY OF THESE PAPERS THAT REFER BACK TO PIAGET AND OTHER "CLASSICAL PSYCHOLOGY" RESEARCHERS: Our reading from last week's GAME-THEORY FOLKS (Shaffer and Prensky) told us that STUDENTS/YOUNG LEARNERS NO LONGER THINK LIKE THIS, BECAUSE THE DIGITAL AGE HAS CHANGED THE RULES (THE WAY THEY THINK)! SO, how much of this should we believe? I suppose whatever stuff they have solid research results from, huh?! Much of it seems theoretical though, so I'm not sure we can believe much of it.

Seel does make some interesting points though, and I found myself reading slower often; partly because I'm still retraining myself to read faster, partly because the concepts were complex, and partly because I found it so interesting.

Around pages 60-64 of this paper, Seel worked to convince us of the differences in models (discussed in his and other papers). That is, Piaget's "schema" is an interpretation network that is used to classify/organize incoming data, but couldn't actually be represented; the "constructed model" is an actual representation that can be used to prescribe or predict input from the world (an externalization of the internal world, or an internalization of an external system). p66 goes on to state there is a third system....external systems that are experienced in nature or artifacts of systems created by other humans!! The differences seem so subtle as to not matter (especially b/w second and third), and even by the end of this paper I wasn't quite sure I understood.

page 70 described a dichotomy I could understand more easily: the difference between "instructionally guided model-centered learning" and "self-organized discovery learning for the construction of effective mental models." I also appreciated and agreed with the note that "self-guided discovery learning is very ambitious insofar as the learners must have previously achieved adequate problem-solving and metacognitive skills to guide their learning process. Therefore, for novice students it can be argued that self-organized discovery learning is closely associated with learning by trial-and-error but not by insight." So perhaps the shift should happen to insight learning through self-guided discovery learning AS THE SEMESTER progresses with upper secondary and undergraduate education. I could see this happening with more "lecture/content" and term introduction towards the first of the semester, then shifting to putting more weight on the students; this seems to naturally happen in classes as larger individual or group projects are assigned/due at the end of the semesters.

I also appreciated Seel's quotation of Stewart et al. (1992, p.318) concerning science education that "these instructional approaches should do more than instruct students with respect to the conclusions reached by scientists; it should also encourage students to develop insights about science as an intellectual activity."

However, I feel like I need more application/examples of this. Hopefully my further reading can provide some (if we don't just assume that the Game-theory folks are right, and all of this is based on "old research" that doesn't apply!).

Sunday, February 15, 2009

Jonassen et al (2003)

This paper views modeling from another angle: conceptual change. Actually, I am not sure of the difference between change of mental models and conceptual change. Just like the model described in Seel (2003), this paper also advocates change of mental models.

Jonassen and colleagues first described some theoretical foundations of conceptual change. They suggested that the synthetic and cognitive views of conceptual change are more relevant to their hypothesis than the social/cultural view. Actually, the hypothesis talks about facilitating multiple representations of knowledge. de Jong & van Joolingen (2008) suggested Cognitive Flexibility Theory (CFT) may provide the theoretical foundation for multiple representations. Collaboration, argumentation, and negotiation among group members may provide multiple representations.

The authors suggest that technology provides affordances that enable us to externalize our models. By externalizing our models, we may revise our conceptual understanding, which is conceptual change.

The authors also suggest that we can model domain knowledge, problems, systems, experiences, and thinking using different tools. However, it is not clear whether there is a one-to-one match between the phenomenon and the tool, or what factors may affect the choice of tool.

It is interesting that the authors talked about tools that were not originally designed as modeling tools (or cognitive tools). For example, databases and spreadsheets are tools that have "business" purposes. However, they can still be used as modeling tools.

Milrad, Spector and Daviden (2002) and de Jong and van Joolingen (2008)

Both of these chapters are framed as model-facilitated learning. There are many similarities between the two learning approaches, but there are also some differences.

First, both of them talk about two types of model-based learning. Milrad et al. (2002) described learning with models and learning by modeling. de Jong and van Joolingen (2008) called the two types learning from models and learning by creating models.

Both articles talked about using computer simulation in model based learning.

de Jong and van Joolingen (2008) provided a definition of models at the beginning of their article: a set of representations, rules, and reasoning structures that allow one to generate predictions and explanations (p. 458). I believe it gives us a good starting point for understanding how they use models. Indeed, computer simulation seems to fit this definition nicely.

de Jong and van Joolingen (2008) gave a little more description of their CoLab design (and it is nice to read their 2005 paper (van Joolingen et al., 2005)). They provided a very clear description of how CoLab supports the scientific discovery learning process. Basically, CoLab supports the whole scientific discovery process, including data gathering, hypothesis testing, planning, and so on.

de Jong and van Joolingen (2008) cited previous studies on the effectiveness of model-based learning. It should help students with conceptual understanding of science subjects, scientific reasoning, science knowledge, problem-solving skills, modeling skills, and the ability to perform far transfer. Therefore, model-based learning seems promising. But why does model-based learning work? Milrad et al. (2002) tried to provide a theoretical framework for this. They talked about situated learning and cognitive flexibility. I believe simulation models help bring learners closer to the context of activities. However, can other types of MBL also achieve the same effect? How close do we need to bring the learners to the real activities in order for MBL to work?

CFT (Cognitive Flexibility Theory) emphasizes multiple representations. However, it is not really clear that MBL has to include collaboration. Maybe this is why the authors created a new term, model-facilitated learning. Unfortunately, neither of the two papers is really clear about how collaboration should be included in the model. In my understanding, they suggested that collaboration should be included, and they provided some support for it. Should the teacher also be involved in the collaboration, as Clement (2008) suggested?

Seel (2003)

Seel (2003) provides a very good overview of model-based learning and teaching. I think his focus is on "improving" mental models. In Figure 4 (p. 70), Seel's description of the learning process has a final state. It is quite similar to what Clement (2008) described in his chapter. So, instructors are helping the learners progress in mental model revision toward the final state.

I believe that final state may be the goal of the instruction.

In research scenario 1, Seel talked about how presenting models to learners will affect their construction of mental models. I still haven't read the seminal work of Mayer (1989). I believe presenting a model should affect a learner's mental model construction, but how, and in which direction?

This week, in Dr. Greene's class, I read an article by Hall, Bailey & Tillman (1997) about student-generated illustration. Their claim is that asking students to generate pictures is better than giving them the pictures, when measuring their problem-solving ability. So, it sounds like they asked the students to re-create a model from text. Seel mainly talked about comparing students who build models with students who were given models. Actually, there can be a third case in which students generate models after they see some kind of model. Maybe that is cognitive apprenticeship, when students see how the mentor created a model.

Victor

Casti (2008) & Clement (2008)

First, Casti (2008) is a nice, short article that gives a brief review of what a model is. It is important for us to understand what a model is before we start. The discussion of predictive, explanatory, and prescriptive models is also interesting. Can we match those three kinds of models with empirical, conceptual, and design-based research?

The author suggested a "midway" between lecture style and discovery learning. Actually, today when I taught a Bible study class, I tried the method. I felt that people were more engaged, but it put a lot of cognitive load on the teacher, because the teacher needs to
1. understand the goal of the instruction very clearly; otherwise, discussion can go off track easily.
2. be able to monitor misconceptions.
3. be able to come up with effective scaffolds.
4. as Clement (2008) suggested, hold off on other topics so that students can focus on one difficult topic at a time. Of course, this means the teacher needs to know what is difficult.

Of course, we may give teachers tools to decrease their cognitive load. For example, we can guide teachers in their preparation so that they have a list of misconceptions and corresponding scaffolding questions.

Actually, I have been reading some ITS papers lately. They make me wonder whether those kinds of scaffolding can be performed by a computer (or by a computer and teacher "working" together).

Victor

Friday, February 13, 2009

last week reading

First a general question:
1) I am still working on last week's readings; while I continue to "train myself" to read 100 pages in 1 hour (I've been reading for about 9 hours total and have 2 articles left from last week), should I just leave the old readings behind or keep trying to catch up and fall farther behind?

Sterman 2002 thoughts: Complex systems are harder to understand (even after multiple mistakes on the same system in decision-making processes), and therefore harder to learn from. Well...yeah! This seemed to be the point of more than the first half of the paper. The strength of this paper definitely seems to be in Figure 8 (and the surrounding text), where the "virtual world" is used as a "stepping stone" in understanding real-world complex systems. Still, though, the learner must experience real-world complexity to try out any new decision-making (mental models). I'd like to see/hear more examples of this (to see how to apply it to teaching complex biological concepts).

Tuesday, February 10, 2009

Some thoughts from the 1st week readings

The readings of this week have a few major themes. First, our education system is not working. One interesting point mentioned by Shaffer (2006) is that our educational system was geared to the industrial age, when we needed people to perform routine tasks accurately. So, the demand to do the "right" thing was important in that age. I don't know whether our education system was a result of the needs of industry, but it is clear that the same type of educational outcomes we measure today doesn't work for the information age. We are now living in the information age (Akilli, 2007; Galarneau & Zibit, 2007; Prensky, 2007; Shaffer, 2006). Industry now looks for 21st-century skills such as knowledge sharing, creation, and collaboration (Galarneau & Zibit, 2007).

The second theme of this week is that games and simulations can be a solution (Akilli, 2007; Galarneau & Zibit, 2007; Prensky, 2007; Shaffer, 2006). Even though some of the authors do not say games are the solution, they hold high hopes for games as a solution if they are effectively designed and implemented. Prensky (2007) even foresees that the education/training industry will soon become game-based because of learner demand. In other words, games are so good that people will want them. I may not agree with Prensky's (2007) hope, because innovation diffusion depends on a lot of factors. The theory of diffusion of innovations (Rogers, 2003) suggests that factors such as compatibility with other technologies (technology in a loose sense; it does not have to be a computing device) and complementarity with other technologies may also affect the diffusion of a technology. Obviously, the improvement of technology seems to be helping the game movement.

The third theme of this week is complexity. It is safe to suggest that the world is complex. Dorner (1987) provided a very good introduction to complexity. I see two main components in Dorner's argument: (1) the cognitive component, which suggests that the human mind has limitations in dealing with complexity, and (2) the affective component, where fear of failure is a key driver preventing people from dealing with complex phenomena effectively. In Dr. Ge's class, we examine computers as cognitive tools to support cognitive processes. However, I also believe that computers may support the affective side of the equation. It will be interesting to examine any interaction effect between those two dimensions.

The two books by Shaffer (2006) and Prensky (2007) are more practitioner-oriented. They have a lot of good observations, but I haven't found good support in either book yet. Maybe I can find it in the later chapters. Anyway, they did raise many good issues that game-based design researchers should pay attention to. For example, Prensky (2007) and Akilli (2007) both talked about the addictiveness of games. Yes, games are addictive. I was addicted to games, too. But why are they addictive?

Another good point Prensky (2007) observed is that gamers have expectations of games. They are intentional, which is an important concept for learning under constructivism. Gamers expect each game to be better than their previous game. They expect the graphics to be better. They expect to network with other people. They expect to play hard. Actually, many of the readings claim that games provide the motivation for learners to engage, and that learning will happen if the game is designed effectively.

I applaud Akilli's effort to start understanding what a game is and how it can be implemented in an educational context (Akilli, 2007). It is important to understand what a game is, and what we want to get out of a game, before we try to understand how to utilize games as tools to assist learning. The author did summarize some definitions of games from other people, and he adds that a game should be fun and creative. However, I could not find a working definition of a game yet. Actually, it is relatively hard to measure creativity. Fun is a subjective measure, which can be influenced by society. For example, some boys may find basketball a fun game, but other boys may find basketball boring and prefer to play card games instead. In other words, people define fun differently. Actually, this can be an issue for game-based learning if fun is a prerequisite of a game.

Akilli (2007) uses the term "game-like learning environment." I have trouble with this term when we do not have a working definition of a game. Actually, from this week's readings, I feel that (it may not be true) people look at games as a black box. Anything that has some sort of game characteristic can fit into game-based learning (or we just call them game-like learning environments, since we know they are not really games). Instead, I suggest that we need to examine the components of games and match those components with the outcomes we find beneficial in educational environments. Motivation and engagement are two of the big sellers for game-based learning. I think they are legitimate constructs that we can examine as moderating or dependent variables. Also, implementation issues can be factors that affect the success of a game/simulation.

Finally, I have a personal belief that the world is not always fun. So, we should teach our kids that we try to make things fun, but we will also work hard on the boring stuff. For example, a high school teacher may love to teach but may not like to deal with parents. A college professor may love to do research but may not like to deal with administrative issues and getting funding. Life is full of examples like that. As responsible adults, we need to deal with the boring stuff effectively so that we have energy to work on the fun stuff. Therefore, I think game-based learning is a good idea, but too much of it may discourage students from working on things that are not really fun, but may be necessary.


Saturday, February 7, 2009

vibes about complex systems

I have a feeling I want to research complex systems further...how to make complex systems as simple as possible...this area of research seems to have potential for growth.

Also, I noted from Sterman and Sabeli that research in complex systems crosses various interdisciplinary boundaries. I wonder how to widen knowledge across domains...maybe collaborative research with other disciplines...

Shaffer, Intro & Chapter 1; What is a game?


What Shaffer wrote in his introduction resonated with me: mostly, that teaching "content-only" in schools is no longer appropriate with the increased outsourcing of "industrial" skills in America (those practiced through memorization, trained skills, and standardized tests). Content-only teaching was necessary when schools were first created during the Industrial Revolution, but now we need to find a way to teach students to practice innovation, creativity, and adaptability to new technologies, information, and procedures. He claims one solution is the use of epistemic games.

Chapter 1 of Shaffer's "How Computer Games Help Children Learn" goes into more detail on what a "game" really is. He explains that "fun" and "competition" aren't defining characteristics of a game (though they may be a part of games). Instead, a game is an activity in which players are assigned "roles" that are governed by rules, as is the backdrop of these roles (p. 23). Later, Shaffer refers to these as role-playing games (but doesn't tell us what other kinds of games there are, or whether they have different defining characteristics!). Role-playing games allow students to begin forming subject-specific epistemologies (ways of thinking).

In order to play these games, students learn content...but that is not the point. The point is that they are learning to think more creatively and to practice being a "professional" in some field. In the process, they should be thinking critically and creatively, forming epistemologies about the subject at hand. Note that technology has not even been mentioned yet...this is just about games (p. 38)! However, I think we'd all agree that technology can provide an easier platform to present, play, and collaborate on games AND allow students to become familiar with new/different technologies, expanding their ability to adapt in the future.

My only major critique of Shaffer, so far, is a seeming contradiction he makes. On page 8 he states that what we do with technology is less important than the fact that we're using technology at all; however, he goes on to state that gaming doesn't require technology, and that in using games we must be careful to set them up to encourage learning (pp. 39-40). Does that seem odd to anyone else?

Excellent conceptual paper on learning in complex systems

I love this paper by Sterman. The impediments to learning in the real world give me a better appreciation of complex problem solving...so many dynamic variables to consider, and these variables are causal. Nonetheless, systems thinking is key to solving problems in complex systems.

Fortunately, the use of virtual worlds and simulations provides a means to model complex systems in a controlled environment.

I am looking forward to the applications of virtual worlds, to see how things are implemented.

Thursday, February 5, 2009

Low performing students and self-reflection

According to Dorner (1987), it is interesting that there is a correlation between low-performing subjects and self-reflection. These subjects do not reflect as often, which may lead to a cognitive emergency reaction, which I would interpret as pressing the PANIC button. I did appreciate Dorner describing its symptoms and consequences, though.

Since this is a complex problem-solving experiment, I am curious how he collected the data. Using think-aloud protocols, focus groups, or something else?