by Blake Mason, John Vito Binzak, and Fangyun (Olivia) Zhao
As computer-based technologies continue to establish a significant presence in modern education, it becomes increasingly important to understand how to improve the ways these technologies present educational content and adapt to learners. Achieving these goals in the design of cognitive tutors, instructional websites, and educational games raises difficult questions. For example, which activities and examples best help learners understand the educational content? In what order should these examples be given? How can these activities focus learners’ attention in productive ways? These are difficult questions for instructional designers to answer on their own. Here at UW-Madison, educational researchers are teaming up with computer science experts in machine learning to test theories about how design decisions can optimize learning technologies. There are many ways that machine learning techniques can be used to improve the design and effectiveness of educational technologies.
Here we outline two interesting examples:
One example of how LUCID students are applying machine learning techniques to educational problems is our ongoing project studying how students perceive chemical properties from molecular diagrams. To succeed in chemistry courses, students need to develop perceptual fluency with visual representations of molecules to understand how these representations convey important properties.
A challenge for designing instructional interventions to support this learning is that acquiring perceptual fluency is a form of implicit learning that occurs in ways students are not consciously aware of. Therefore, this form of knowledge is difficult to articulate, and we cannot rely on traditional methods to pinpoint what students do and do not understand. To get around this issue, we can design experiments to measure how advanced and novice students perceive visual representations of molecules differently. First, we take a long list of molecules and record all of the visual features of each molecule.
Then, we have students judge the similarity of molecules presented three at a time: “is molecule a more similar to b or to c?” Finally, using a specific form of machine learning called metric learning, we identify which visual features predict students’ similarity judgments, and thus detect which features students attend to when viewing visual representations of molecules. By comparing the results of chemistry experts and novice students, we hope to build a better understanding of how perceptual fluency changes with experience. In the ongoing ChemTutor project at UW, we hope to use this knowledge in the development of new cognitive tutors capable of providing adaptive feedback that helps students identify and focus on key visual features of molecular diagrams.
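To give a rough sense of what this metric learning step looks like, here is a minimal sketch in Python. The molecules, features, triplet judgments, and thresholds are all hypothetical placeholders, not the project’s actual data or method; the idea is simply to learn a weight for each visual feature so that a weighted distance agrees with triplet judgments of the form “a is more similar to b than to c.”

```python
import numpy as np

# Hypothetical feature vectors for molecular diagrams: each row encodes
# visual features (e.g., number of rings, double bonds, charge symbols).
molecules = np.array([
    [2.0, 1.0, 0.0],   # molecule 0
    [2.0, 0.0, 1.0],   # molecule 1
    [0.0, 3.0, 0.0],   # molecule 2
    [1.0, 1.0, 1.0],   # molecule 3
])

# Triplet judgments (a, b, c): "molecule a is more similar to b than to c."
triplets = [(0, 1, 2), (3, 0, 2), (1, 0, 2)]

# Learn non-negative per-feature weights w so that the weighted squared
# distance d_w(x, y) = sum_k w_k * (x_k - y_k)^2 respects the triplets.
w = np.ones(molecules.shape[1])
lr = 0.05

for _ in range(200):
    grad = np.zeros_like(w)
    for a, b, c in triplets:
        diff_ab = (molecules[a] - molecules[b]) ** 2
        diff_ac = (molecules[a] - molecules[c]) ** 2
        # Hinge loss: penalize triplets where d(a, b) is not smaller
        # than d(a, c) by at least a margin of 1.
        if 1.0 + w @ diff_ab - w @ diff_ac > 0:
            grad += diff_ab - diff_ac
    w = np.clip(w - lr * grad, 0.0, None)  # keep weights non-negative

# Large weights indicate features that drive the similarity judgments.
print("learned feature weights:", w)
```

In a study like the one described above, the weights learned from expert judgments could then be compared with those learned from novices to see which visual features each group relies on.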
Another example of creating interactive models for teaching and learning combines the strengths of eye-tracking technology with machine-learning algorithms. Current educational software focuses heavily on creating features that attempt to attract students’ interest and raise motivation. We are interested in developing a tool that learns from and adapts to students’ habits, such as where they tend to look on the screen and how they become distracted. Using eye-tracking technology, we can interpret gaze fixations to understand where students focus their attention, and then customize instructional materials accordingly. In addition to making better educational technologies, this work is also important for researchers studying human attention. Specifically, researchers are interested in understanding how changes in gaze fixation relate to shifts in attention, and in using this data to develop models that predict gaze behaviors. Through multiple phases of development, this project demonstrates how improving education in powerful ways can involve everything from research on low-level cognitive processes to software development and user testing.
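As a simple illustration of the kind of gaze processing involved, the sketch below detects fixations from raw gaze samples using a basic dispersion-threshold approach and maps them onto areas of interest on an instructional screen. The gaze samples, thresholds, and area-of-interest names are hypothetical and stand in for what a real eye tracker and screen layout would provide.

```python
import numpy as np

# Hypothetical gaze samples: (x, y) screen coordinates at a fixed sampling
# rate. Real eye trackers also provide timestamps and validity flags.
gaze = np.array([
    [100, 102], [101, 100], [99, 101], [100, 100],   # samples near (100, 100)
    [400, 310], [402, 308], [401, 309], [400, 311],   # samples near (400, 310)
    [250, 500],                                        # stray sample (saccade)
])

def detect_fixations(samples, dispersion_threshold=25, min_samples=3):
    """Simple dispersion-threshold fixation detection.

    Groups consecutive samples whose horizontal plus vertical spread stays
    under the threshold; short or spread-out runs are treated as saccades.
    """
    fixations = []
    start = 0
    for end in range(1, len(samples) + 1):
        window = samples[start:end]
        spread = ((window[:, 0].max() - window[:, 0].min())
                  + (window[:, 1].max() - window[:, 1].min()))
        if spread > dispersion_threshold:
            if end - 1 - start >= min_samples:
                fixations.append(samples[start:end - 1].mean(axis=0))
            start = end - 1
    if len(samples) - start >= min_samples:
        fixations.append(samples[start:].mean(axis=0))
    return fixations

# Hypothetical areas of interest on the screen: name -> (x0, y0, x1, y1).
aois = {"diagram": (0, 0, 300, 300), "text_panel": (300, 200, 600, 400)}

for fx, fy in detect_fixations(gaze):
    hits = [name for name, (x0, y0, x1, y1) in aois.items()
            if x0 <= fx <= x1 and y0 <= fy <= y1]
    print(f"fixation at ({fx:.0f}, {fy:.0f}) -> {hits or ['outside AOIs']}")
```

Aggregating fixation time per area of interest in this way is one plausible input a learning model could use to decide when and how to redirect a student’s attention.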