Machine Learning for Everyone by Rob Nowak

Rob Nowak explains Machine Learning, Neural Networks, Deep Learning, Linear Classifiers, Good Features, Principal Component Analysis, Dictionary Learning, Generalization, and Cross Validation in this talk, “Machine Learning for Everyone.”

Posted in LUCID Library, Machine Learning

Fractions War: Reflections from a non-traditional academic project

By John Vito Binzak

Fractions War is an iOS math game app envisioned by Educational Psychology graduate students Elizabeth Toomarian and me, John Vito Binzak. The game is based on the children’s card game War, in which a deck of playing cards is split between two players, each player flips over one card at a time, the player whose card has the larger value gets both cards, and this repeats until one player wins all of the cards. While the intention of the game is not to teach players about number ordinality, players are nonetheless engaged in making rapid and repetitive magnitude comparisons. In Fractions War, players flip over two cards at a time and arrange them vertically to form a fraction, then decide whose fraction has the larger value. Developing this app was motivated by my empirical, educational, and experiential goals.

Studying how people understand fractions and develop this knowledge is a primary focus of Toomarian’s and my research in the UW Educational Neuroscience Lab. Fraction learning has become an important topic of study for cognitive scientists because studies have shown that fraction knowledge is a significant predictor of later math success, and unfortunately many people show great difficulty when learning and retaining this knowledge. Much of the numerical cognition research studying fraction learning has occurred in well-controlled lab-based or school-based studies, but much less is known about how people understand fractions in informal learning contexts and learn through experiences such as games. With a built-in data collection system, Fractions War is designed to be both a learning intervention and a research tool. Furthermore, by designing options to play with different cards that vary the extent to which visual cues (e.g. 10 diamonds) are present or absent, studying learning with Fractions War allows us to test specific hypotheses regarding whether these visual features are critical for learning.

From an educational standpoint, this game was developed to create a fun experience for students to get additional exposure to fractions in a low-stakes activity. This is in line with my great interest in making better educational media and developing research-based principles that help to achieve these goals. Making educational games is not a requirement of my graduate school coursework, nor is it a commonplace practice for cognitive scientists in my field; thus developing this app provided me with valuable experience to see how my skills could be applied in ways less traditional to academia.

Building this game has also provided me with opportunities to network with Ed Tech companies outside of academia and better understand how educational media is developed. This experience also created an opportunity to interact with students at UW who study outside of my discipline. Developing the app required partnering with four students in the Computer Science 407 course (Foundations of Mobile Systems and Applications) who were assigned to build a new piece of technology. The interactions that came from organizing a team behind one vision, such as explaining research-based motivations to our developers with no cognitive science background, were critical for developing confidence in my potential to continue this work going forward. Lastly, this experience was a major motivating factor behind my decision to attend an entrepreneurship boot camp at UW and learn how it is possible to take innovative ideas born in academia and develop them into new ventures in the commercial marketplace.

Posted in LUCID

Minds, Machines & Society Videos

Matt Botvinick

Director of Neuroscience Research at DeepMind in London, UK, Botvinick discusses current topics in artificial intelligence research and how this research carries implications for understanding the human mind and improving human life.

“One problem that we are having as a society is that we are trying to think through the implications of this technology that is developing so rapidly, but a lot of people outside of the artificial intelligence community are getting their information about what a.i. is from Hollywood, rather than from people who actually research a.i.”

Ulrike Hahn

Cognitive Scientist at the Centre for Cognition, Computation & Modelling at Birkbeck, University of London discusses the cognitive science of fake news.

“Network structures that promote the collective accuracy are not necessarily best for promoting individual accuracy”

Stay tuned for more short clips from the Minds, Machines & Society event.

To watch the full event video:

Minds, Machines, & Society 7.28.18 from Discovery Building on Vimeo.

Posted in LUCID Library

Minds, Machines & Society

Free Registration at lucid.wisc.edu/events

For an encore of CogSci2018 and for those of you unable to attend CogSci2018, the organizers and some incredible keynote speakers have planned a free public event. ‘Minds, Machines & Society’ will be held on the UW–Madison campus at the Discovery Building, 330 N Orchard St, on July 28th from 7:30-9:30pm.

The speakers are world-renowned thought leaders in cognitive science, artificial intelligence, and human creativity. Matt Botvinick, Director of Neuroscience Research for DeepMind, London, will be discussing natural and artificial intelligence. Ulrike Hahn, Professor of Cognitive Science at Birkbeck, University of London, will be discussing fake news and how it can affect our decision making, specifically in voting. Mark Seidenberg, whose research focuses on reading and the brain, is the author of the book Language at the Speed of Sight.

Due to a family emergency, Bob Mankoff will no longer be able to join us.

Information for getting to the event

Posted in Events

Join Us for Minds, Machines & Society

Join Us This Saturday for a free, open to the public event.

Don’t miss Bob Mankoff, humorist and cartoon editor for the New Yorker & more recently for Esquire. Bob will provide a must-see talk about human and machine collaboration & creativity.

Interested in fake news? So are we! Ulrike Hahn, a cognitive scientist from Birkbeck, University of London, will discuss fake news and its effects on our minds.

Artificial intelligence and human intelligence: what do we know, and how do they compare? Matt Botvinick, Director of Neuroscience Research at DeepMind, will share insights into the latest a.i. research.

Free Registration: lucid.wisc.edu/events

How to get to the event: lucid.wisc.edu/info

Posted in Events, Resources

CogSci 2018 – Why Changing Minds?

On the motivation for this year’s theme and its connection to current events.

When we bid to organize CogSci three years ago the global erosion of faith in factual knowledge was already well under way. Scientific consensus was doing little to temper public disagreement across all manner of hot-button topics, from the health effects of vaccination to questions about climate change to the remarkable resurgence, in some quarters, of questions about whether the earth is flat. In our own discipline, a series of high-profile psychological studies failed to replicate or were even found to be made up out of whole cloth, initiating a very public crisis of trust in the scientific process itself. While the questioning of historical and scientific truths is hardly unique to our time, the recent use of cognitive and behavioral research to promote misinformation demands some attention from our academic community. What light, if any, can cognitive science shed on the emergence, spread, and persistence of false beliefs? We aim to understand how minds work–how beliefs are formed, how decisions are made, how actions are taken, how the facts on the ground are perceived and understood. What can we as scientists do, and what should we advise policy makers or industry leaders or the general public to do, to remediate false beliefs? As advocate cognitive scientists, how do we change minds?

These questions intersected with another societal trend that was also becoming widely apparent: the increasing use of information technology in education. Universities were expanding their offerings of massive open online courses in efforts both to bolster their bottom lines and to make higher learning more widely available across the country and the globe. Apps seeking to teach you a new language or to “train your brain” or to inoculate against future dementia were rapidly multiplying, sometimes sparking controversy. Cognitive and computer scientists were collaborating to develop intelligent tutors that tailor learning experiences to each student’s individual needs. School boards across the country were evaluating the cost/benefit ratios for bringing iPads into the classroom. Here again it seemed that our science should have a role to play in guiding how these forces play out. What, if anything, do the learning models we develop in the lab offer to the teacher in the classroom, the school board president, the curriculum designer, or the software engineer? How do we relate the cognitive theories that elegantly explain behavior in tightly-controlled scenarios to the wilds of the classroom, where students simultaneously grapple with unconstrained learning problems ranging from how to read to calculating derivatives to understanding the structure of molecules? As educator cognitive scientists, how do we change minds?

A good theme should also resonate in the discipline more broadly, and it occurred to us that much of cognitive science concerns change in one guise or another. Developmental approaches, of course, make cognitive change their principal focus–but research into learning and memory is also the study of change, and so too is the study of mental and neurological disorders. Cognitive control, choice, decision, and action all involve understanding how behavior is changed in real time. Attention changes what events in our environment come to our awareness and guide our actions. A central debate in perception concerns whether it can be changed by interactions with conceptual or other non-perceptual knowledge. Indeed we were hard pressed to think of any aspect of cognitive science that does not involve questions about changing minds.

We hope the CogSci 2018 program has braided these strands in a way that does justice to the current moment and to the discipline as a whole. Our starting question–how and why do false beliefs arise, spread, and persist in society?–is reflected in Thursday’s plenary symposium Persuasion, Propaganda, and Politics, which brings together four field-leading experts studying the problem from diverse perspectives. We have also identified contributed talks that focus generally on understanding how and why human learning and behavior can go wrong, or can appear to go wrong, and have grouped these into three sessions under the heading “Fake news” (shown in orange here). The relations among cognitive science, technology, and education are addressed in Saturday’s plenary symposium Big Data Goes to School, which brings together speakers who address this topic from both the academic and the industrial side of the fence. Contributed talks focusing generally on cognitive science and education have been grouped into four sessions under the (less cheeky) heading “Education” (shown in yellow here).

The broader themes of change, technology, and current issues in society are addressed by three outstanding plenary speakers. On Thursday morning we will hear from Michael Kearns, a computer scientist at the University of Pennsylvania and Founding Director of the Warren Center for Network and Data Sciences, who focuses on understanding how machine learning algorithms function in and shape society, and how such algorithms can be inoculated against the human biases that permeate the masses of data on which they are trained. Friday we will hear from Matt Botvinick, Director of Neuroscience Research at DeepMind, whose research connects the latest innovations in machine learning to cognitive and neural theories about human goal-directed action. Saturday we will hear from Susan Gelman, the Heinz Werner Distinguished Professor at the University of Michigan, who will speak about the conceptual roots and development of moral reasoning.

We are thrilled to be hosting a scientist whose career has changed minds, in the sense of fundamentally altering our understanding of language. Friday morning features a symposium honoring the work of Michael Tanenhaus, the 2017 Rumelhart Prize recipient, and on Friday evening Prof. Tanenhaus will deliver his Rumelhart Prize Lecture, “The Paradox of Real-time Language Comprehension: Signal, Noise, and Context.” Join us afterward for a reception on the rooftop of the conference venue to celebrate the lecture and for the announcement of this year’s winner.

Finally, for those staying on past the conference end, Saturday evening features a public event entitled Minds, Machines, and Society. Two conference speakers–Matt Botvinick and Ulrike Hahn–will be joined by Bob Mankoff, humor editor at Esquire and longtime cartoon editor at the New Yorker, for an evening of informal talks about the ways that cognitive science, machine learning, and technology are interacting to influence culture and society. The event is open to the public and free to attend, but space is limited so make sure to register your attendance here.

Looking forward to seeing you all and hearing about your work in Madison,

The CogSci 2018 Program Committee

Posted in Events, Resources

CogSci2018

Posted in Events

CogSci2018 is almost here!

In less than two weeks, scientists from all over the world will be coming to Madison for the annual international Cognitive Science Conference, CogSci2018. Madison hosts events all the time, so what makes this conference special? The topics, the recommender, and the free public event.

TOPICS:

‘Changing Minds’, the title of this year’s CogSci2018 conference, will provide all the topics that you would imagine to be discussed, such as perception, attention, learning, and memory. In addition, hot topics like fake news, big data, and machine learning will provide thought-provoking discussions. Thought leaders in the cognitive science field will be providing insight into questions like how our media exposure can negatively affect our judgements, specifically our voting habits; how technology and big data can be leveraged to support learning; and how algorithms, such as those that automatically suggest higher insurance premiums for Black people, can be made fairer.

Full schedule here: CogSci2018 Schedule


RECOMMENDER:
So many talks, so little time! Any multitrack conference goer might be interested to hear that the conference organizers (and their research collaborators, of course!) created a session recommender that uses machine learning, adaptive sampling, and a few of your judgements to generate a top-ten sessions list for you. Try it here: concepts.psych.wisc.edu and follow the instructions and link under “CogSci 2018 bespoke paper recommendations”


FREE PUBLIC EVENT:
For an encore of CogSci2018 and for those of you unable to attend CogSci2018, the organizers and some incredible keynote speakers have planned a free public event. ‘Minds, Machines & Society’ will be held on the UW–Madison campus at the Discovery Building, 330 N Orchard St, on July 28th from 7:30-9:30pm. The speakers are world-renowned thought leaders in cognitive science, artificial intelligence, and human creativity. Matt Botvinick, Director of Neuroscience Research for DeepMind, London, will be discussing natural and artificial intelligence. Ulrike Hahn, Professor of Cognitive Science at Birkbeck, University of London, will be discussing fake news and how it can affect our decision making, specifically in voting. Bob Mankoff, entrepreneur and Humor Editor for Esquire, will discuss how collaboration with machines can spark human creativity.

Free Registration Here: Lucid.wisc.edu/events

Posted in Events

How Can Machine Learning Improve Educational Technologies?

by Blake Mason, John Vito Binzak and Fangyun (Olivia) Zhao 

As computer-based technologies continue to establish a significant presence in modern education, it becomes increasingly important to understand how to improve the ways that these technologies present educational content and adapt to learners. Achieving these goals in the design of cognitive tutors, instructional websites, and educational games leads to difficult questions. For example, what are the best activities and examples that will help learners understand the educational content? In what order should these examples be given? How can these activities focus learners’ attention in productive ways? These are difficult questions for instructional designers to answer on their own. Here at UW-Madison, educational researchers are teaming up with computer science experts in machine learning to test theories about how design decisions can optimize learning technologies. There are many ways that machine learning techniques can be used to improve the design and effectiveness of educational technologies.

Here we outline two interesting examples:

Visual Representations of the Water Molecule

One example of where LUCID students are applying machine learning techniques to educational problems is our ongoing project studying how students perceive chemical properties from molecular diagrams. To succeed in chemistry courses, students need to develop perceptual fluency with visual representations of molecules to understand how they convey important properties.

A challenge for designing instructional interventions to support this learning is that acquiring perceptual fluency is a form of implicit learning that occurs in ways students are not consciously aware of. This form of knowledge is therefore difficult to articulate, and we cannot rely on traditional methods to pinpoint what students do and do not understand. To get around this issue, we can design experiments to see how advanced and novice students see visual representations of molecules differently. First, we take a long list of molecules and record all of the visual features of each molecule.

ChemTutor Interface

Then, we have students judge the similarity of molecules presented three at a time: “is molecule a more similar to b or to c?” Finally, using a specific form of machine learning called metric learning, we see which visual features predict students’ similarity judgments, and thus detect which features students attend to when viewing visual representations of molecules. By comparing the results of chemistry experts and novice students, we hope to build a better understanding of how perceptual fluency changes with experience. In the ongoing ChemTutor project at UW, we hope to use this knowledge in the development of new cognitive tutors capable of providing adaptive feedback that helps students identify and focus on key visual features of molecular diagrams.
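To make the triplet idea concrete, here is a minimal pure-Python sketch of one simple flavor of metric learning (a diagonal, per-feature weighting). It is not the lab’s actual algorithm, and the feature values are invented for illustration. Each triplet (a, b, c) encodes a judgment that a is more similar to b than to c; the learner nudges non-negative feature weights until the weighted distance agrees with the judgments, so features that keep large weights are the ones that best explain what students attended to:

```python
def sq_diffs(x, y):
    # Per-feature squared differences between two feature vectors.
    return [(xi - yi) ** 2 for xi, yi in zip(x, y)]

def learn_weights(triplets, n_features, lr=0.1, epochs=200):
    """Learn non-negative per-feature weights from triplet judgments.

    triplets: list of (a, b, c) feature vectors meaning
    'a was judged more similar to b than to c'.
    """
    w = [1.0] * n_features
    for _ in range(epochs):
        for a, b, c in triplets:
            d_ab, d_ac = sq_diffs(a, b), sq_diffs(a, c)
            # Hinge condition: weighted d(a, b) should be smaller than
            # weighted d(a, c) by a margin of 1; otherwise take a step.
            violation = sum(wi * (ab - ac)
                            for wi, ab, ac in zip(w, d_ab, d_ac)) + 1.0
            if violation > 0:
                w = [max(0.0, wi - lr * (ab - ac))
                     for wi, ab, ac in zip(w, d_ab, d_ac)]
    return w

# Hypothetical features: [double-bond count, oxygen count].
# The judgment below is explained by feature 0 alone, so its weight
# should survive while feature 1's weight is driven to zero.
a, b, c = [0.0, 0.0], [0.0, 5.0], [1.0, 0.0]
weights = learn_weights([(a, b, c)], n_features=2)
```

With real data, each molecule’s vector would hold counts of its visual features, and the surviving weights would be compared between expert and novice judgment sets.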


Another example of creating an interactive model for teaching and learning combines the strengths of eye-tracking technologies with machine-learning algorithms. Current educational software focuses heavily on creating features that attempt to attract students’ interest and raise motivation. We are interested in developing a tool that learns from and adapts to students’ habits, such as where they tend to look on the screen and how they become distracted. Using eye-tracking technology, we can interpret gaze fixations to understand where students focus their attention, and then customize instructional materials accordingly. In addition to making better educational technologies, this work is also important for researchers studying human attention. Specifically, researchers are interested in understanding how changes in gaze fixation relate to shifts in attention, and in using these data to develop models that predict gaze behaviors. Through multiple phases of development, this project demonstrates how improving education in powerful ways can span from research on low-level cognitive processes to software development and user testing.
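As a concrete illustration of the gaze-interpretation step, one standard way to turn a raw gaze stream into fixations is dispersion-threshold identification (I-DT). The sketch below is a simplified version, not necessarily what this project uses, and the threshold values are placeholders:

```python
def _dispersion(window):
    # Spread of a set of gaze points: x-range plus y-range, in pixels.
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=25.0, min_samples=5):
    """Dispersion-threshold (I-DT style) fixation detection.

    samples: list of (x, y) gaze points at a fixed sampling rate.
    Returns (start_index, end_index, centroid) for each fixation.
    """
    fixations = []
    i, n = 0, len(samples)
    while i + min_samples <= n:
        j = i + min_samples
        if _dispersion(samples[i:j]) <= max_dispersion:
            # Grow the window while the points stay tightly clustered.
            while j < n and _dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            window = samples[i:j]
            cx = sum(p[0] for p in window) / len(window)
            cy = sum(p[1] for p in window) / len(window)
            fixations.append((i, j - 1, (cx, cy)))
            i = j
        else:
            i += 1  # likely a saccade sample: slide past it
    return fixations
```

Mapping each fixation centroid onto regions of the instructional display then tells the software which on-screen element held the student’s attention, and for how long.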


Posted in LUCID, Machine Learning

Interactive Science Communication

 by Scott Sievert and Purav “Jay” Patel

Scientific research is still communicated with static text and images despite innovations in learning theories and technology. We are envisioning a future in which interactive simulations and visualizations are used to enhance how complex methods, procedures, and results are taught to other scientists and non-scientists.

Problem

Scientists in various fields ask questions about different things, but they share the same basic mode of communication. Nearly all research is communicated to other scientists, journalists, and the public in the form of static text and images encapsulated in journal articles and conference papers. These papers can be found in print and online for a high price. Recently, the “open science” movement has focused on broadening the number of people who can access these papers by eliminating the access fees for these (often) pricey papers. But there’s a problem. Even if all scientific papers are free and convenient for everyone to physically access, they will still not be cognitively accessible. In other words, the number of people who can understand the meaning of most scientific papers (even superficially) will be low.

Consequences

We believe that research should be conveyed directly to the public. At the moment, the traditional text-and-images approach makes it difficult to do so. Because of this, science journalism acts as an intermediary, translating the jargon for the public to appreciate (see physorg, newspapers, and educational videos).

But this is not always a good thing. Science journalism is often misleading (see John Oliver’s takedown) because its focus is to hook the audience and drive viewership and readership. Major change is needed because scientific literacy affects how people understand and affect the environment, their bodies, and voting. Scientists also need better ways to communicate their ideas and learn about others’ work. By one estimate, the total volume of scientific research (measured in the number of published papers) doubles every nine years.

For young and seasoned scientists in any field, this is a major bottleneck toward producing great science. How can the best discoveries be made if scientists are unable to keep up with the latest findings? But how can technical research be communicated effectively without “dumbing it down?” Below, we explore some promising ideas.

Suggestion 1: Interactive Simulations of Experimental Tasks

One problem with traditional research papers is their use of dense text and cluttered visuals to convey the materials used in a study. For example, I led a study during my master’s to understand how typical undergraduates comprehend irrational numbers. This study used four tasks, each including subsections. When I wrote up the scientific paper and sent it to the journal Cognitive Science, there was a constraint – use only text and images. Of course, I had the option of including hyperlinks to the website hosting my experimental simulations. And I had the option of uploading the simulations to another website (Open Science Framework – osf.io). I did both of these things, but felt that most readers wouldn’t go through the trouble of visiting those websites. When I got the reviews back from the editor of the journal, I found clues in their comments that no one had actually used these websites. The best way to communicate experimental tasks and procedures effectively is to embed them directly into scholarly articles. Some programmers, user interface designers, and computer scientists have played with this idea. I was particularly inspired by these three examples:

1. worrydream.com/ScientificCommunicationAsSequentialArt
2. ncase.me/polygons
3. distill.pub/2016/handwriting

What do these examples all have in common? In each case, the authors could have used blocks of clearly written text to communicate complex visuospatial relationships. Instead, they chose to bring the concepts to life and allow users to manipulate the information in ways that are more natural. It is as if an expert is beside you, drawing and explaining otherwise obscure ideas.

So far, prototypes of interactive scientific papers have focused on math and computing (formal sciences). At the start of my first semester of an Educational Psychology PhD, I wondered how these ideas could apply to psychological papers like the one I wrote for my master’s thesis. With the help of three Computer Science students in a Human-Computer Interaction class, I set about embedding experimental simulations in a webpage containing my scientific article. The paper was titled How the Abstract Becomes Concrete: Irrational Numbers are Understood Relative to Natural Numbers and Perfect Squares. I started by reducing the length of the paper to 10% of the original size (start small!). I helped develop each simulation corresponding to different sections of the tasks.

This is what the original section of my scientific paper looked like. Though it makes sense to most people in the field of numerical cognition, it is hard for people outside of the field to say with confidence that the description is clear.


Now let’s have a look at the interactive simulation that enhances the text:

Traditional scientific papers give users a third-person glance into experiments, whereas we aim to provide users a first-person view into what the authors did. From a technological perspective, this change is fairly simple. Existing experimental code can be embedded into digital documents, saving authors valuable time. Moreover, any extra effort required can pay off significantly down the road. The enhanced paper is now easier for scientists, friends, family members, potential employers, collaborators, students, autodidacts, and many others to understand.

Currently, most experimental tasks live inside supplementary materials documents that are hard to access and rarely consulted. The work that goes into creating these documents often doesn’t bear fruit. By directly embedding experimental simulations, we can “show and tell” what happened and rescue authors from wasting time on supplementary materials.

It is not difficult to imagine how to extend this to other kinds of tasks with different kinds of input. In the interactive article mentioned earlier, the complexity of the tasks varied from simple keyboard responses (Z or M key) to mouse clicks and text input. It is also not difficult to consider embedding simulations of experimental tasks in fields like neuroscience and education. Health science fields like physical therapy and surgery may benefit from videos and simulations to practice new therapeutic procedures.

Suggestion 2: Interactive Data Visualizations

Interactive visualizations are useful because they allow the user to see the results of some experiment or function as parameters under their control are changed. This is super useful for data exploration and explanation, and it’s picking up a lot of steam. These tools can allow users to interact with the results of scientific experiments or analyze how a particular method performs. Examples of tools that can create these interactive figures are ipywidgets and Holoviews.

These tools allow simple creation of interactive widgets in Python, a high-level language. They make data exploration easy and can be embedded in a variety of contexts. In the future, one could imagine generating visualizations that show group-level trends with the option to quickly change the view so that individual differences across participants become transparent. For instance, consider an interactive visualization showing a density plot of participants’ scores on a cognitive task. By clicking buttons or dragging a slider, the user could abstract out more general information in the form of a violin plot, then a boxplot. Abstracting out even further, the user could change the original plot to a bar plot or table. Given that journals and conferences adhere to strict (and problematic) word and page limits, creative interactive data visualizations capable of communicating vast amounts of information in small spaces would improve the quality of scientific articles. Authors would not need to agonize over which one of a dozen analyses and results to focus on. By using rich multimedia, more information can be presented without overwhelming users.
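Stripped of the plotting and widget machinery, the abstraction ladder described above can be sketched in plain Python. The function name and levels here are invented for illustration; in a notebook, an ipywidgets slider (e.g. `interact` with `level=(0, 2)`) could then drive which view the reader sees:

```python
from statistics import mean, quantiles

def summarize(scores, level):
    """Return participants' task scores at increasing abstraction:
    0 = raw sorted scores (density-plot level of detail),
    1 = five-number summary (boxplot level),
    2 = a single headline number (bar-plot or table level)."""
    if level == 0:
        return sorted(scores)
    if level == 1:
        q1, med, q3 = quantiles(scores, n=4)
        return {"min": min(scores), "q1": q1, "median": med,
                "q3": q3, "max": max(scores)}
    return {"mean": round(mean(scores), 2)}

# Dragging a slider from 0 to 2 would walk the reader from
# individual differences up to the group-level headline result.
detail = summarize([72, 85, 90, 64, 88], level=1)
```

The point is that each view is computed from the same underlying data, so a single embedded figure can serve both the skimming reader and the one who wants participant-level detail.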

One scientific journal that makes heavy use of interactive graphics is Distill, where the team works with authors to create a set of interactive graphics that highlight certain points in each article. This has lowered the barrier to reading and understanding their papers. Interactive media can convey subtle points that are not immediately obvious.

Conclusion

This example can be extended easily. Communication in science is critical, but surprisingly difficult. Scientists need to communicate their methods, results, and experimental design so other scientists can reproduce their results. Allowing quicker and more accurate communication between scientists can yield more robust experimental results and fuel collaboration. These ideas are espoused by arXiv, a digital, open-access scientific publication platform developed by physicists. More recently, websites like the Open Science Framework (OSF) and Authorea have offered tools that enable open-access dissemination of scholarly research. The OSF archives preprints, experimental materials, and data across different fields for anyone to access. Authorea is a new digital authoring tool that lets scientists collaborate online and embed interactive figures into their manuscripts. This tool also helps format manuscripts to the specifications of journals and conferences, saving tedious editing. It is possible that one of these platforms (or another like it) will play a large role in developing the scientific documents of the future.

Extend Your Learning

In addition to the sources listed above, consider checking out the
following examples of multimedia-enhanced scholarly communication:

1. Distill.pub description of benefits: https://distill.pub/about/
2. “Building blocks of interactive computing”, the introduction of Jupyterlab: https://www.youtube.com/watch?v=Ejh0ftSjk6g
3. Jupyter widgets, which has live (!) examples of ipywidgets and other libraries: http://jupyter.org/widgets.html
4. Explorable Explanation: http://worrydream.com/ExplorableExplanations/
5. Interactive PhD Thesis: http://tomasp.net/coeffects/
6. Journal of Visualized Experiments: https://www.jove.com/

Posted in LUCID, Machine Learning, Resources, Uncategorized