eLUCID8 is August 5–7, 2019

eLUCID8 is a conference designed to connect people interested in data science and human behavior, both within and outside the University of Wisconsin-Madison. eLUCID8 provides a venue for presenting or hearing about new research, for learning about particular problems or challenges faced by applications of this research to contemporary real-world problems, and for brainstorming new opportunities for basic and applied research.

The meeting will feature:

– a series of talks from LUCID, UW faculty and partners from industry, non-profits and government agencies
– a data blitz and short-format talk sequence
– Keynote talk from Jordan Ellenberg, author of How Not to Be Wrong: The Power of Mathematical Thinking
– Keynote talk from Patrick Shafto, professor of mathematics and computer science at Rutgers University
– Keynote talk from Bob Mankoff, humor and cartoon editor of Esquire and co-founder of Botnik Studios
– Sessions including: What’s causing the diversity crisis in data science; Understanding science denialism; Connecting data science and human behavior; and Exploring human and machine collaboration.

The event is free for presenters and attendees alike, and is funded by the LUCID graduate training program as part of the NSF’s National Research Traineeship program. It will be held at the Wisconsin Institute for Discovery, Madison, WI, on August 5th, and at the Education Building in the Wisconsin Idea Room on August 6th and 7th.

Free Registration: Register

For more information: eLUCID8

Posted in Events

Reinforcement Learning

Link to the reinforcement learning YouTube video

by Rahul Parhi

This video by Arxiv Insights, “An Introduction to Reinforcement Learning,” provides a decent high-level introduction to reinforcement learning that I believe is accessible to academics, though it doesn’t go into a lot of depth.

This blog post on GitHub by Andrej Karpathy, “Deep Reinforcement Learning: Pong from Pixels,” gets into a bit more detail and works through using reinforcement learning to design an agent that plays the game Pong.
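Karpathy’s post centers on the policy-gradient (REINFORCE) idea. As a rough, minimal sketch of that idea only, nothing like his actual Pong setup, here is REINFORCE on a two-armed bandit in NumPy: the policy is a softmax over two logits, and each sampled action’s logit is nudged in proportion to the reward times the score function. All of the numbers (arm payoffs, learning rate, episode count) are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)                 # policy logits for two arms
true_means = np.array([0.2, 0.8])   # arm 1 pays more on average

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for _ in range(2000):
    p = softmax(theta)
    a = rng.choice(2, p=p)               # sample an action from the policy
    r = rng.normal(true_means[a], 0.1)   # observe a noisy reward
    grad_log_pi = -p
    grad_log_pi[a] += 1.0                # grad of log softmax = onehot(a) - p
    theta += 0.05 * r * grad_log_pi      # REINFORCE update

print(softmax(theta))  # the policy should now strongly favor arm 1
```

Karpathy’s version applies the same update to the weights of a small neural network whose input is the raw Pong pixels, but the gradient bookkeeping above is the heart of it.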

This freely available book, Reinforcement Learning, seems to be a pretty decent introduction to reinforcement learning for anyone who wants to get into the technical details of how it works.
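To give a taste of the kind of “how it works” detail such an introduction covers, here is a minimal sketch (my own toy example, not taken from the book) of tabular Q-learning, one of the core algorithms in this area: an epsilon-greedy agent learns to walk right along a short chain of states to reach a rewarding goal.

```python
import random

random.seed(0)

# A toy chain MDP: states 0..3, actions 0 (left) and 1 (right).
# Reaching state 3 gives reward 1 and ends the episode.
N_STATES, GOAL = 4, 3
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3

Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    r = 1.0 if s2 == GOAL else 0.0
    return s2, r, s2 == GOAL

for _ in range(300):                      # episodes
    s = 0
    for _ in range(200):                  # cap episode length
        if random.random() < EPSILON:     # explore
            a = random.randrange(2)
        else:                             # exploit the current estimates
            a = max((0, 1), key=lambda act: Q[s][act])
        s2, r, done = step(s, a)
        # Q-learning update: move Q[s][a] toward the bootstrapped target
        target = r if done else r + GAMMA * max(Q[s2])
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = s2
        if done:
            break
```

After training, the learned values favor moving right in every state, which is the optimal policy for this chain.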

OpenAI Gym is a toolkit and playground for implementing reinforcement learning for various problems. It is useful for getting familiar with reinforcement learning, and Gym also comes with a bunch of example code. Anyone with basic programming knowledge should be able to play around with it.
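The core pattern Gym standardizes is the agent–environment loop: reset() yields an observation, and step(action) returns an observation, a reward, a done flag, and an info dict (this is the classic Gym interface). The sketch below uses a tiny hypothetical stand-in environment so it runs without installing anything; with the real library you would build the environment with gym.make(...) instead.

```python
import random

random.seed(0)

class CoinFlipEnv:
    """A toy stand-in exposing the classic Gym interface:
    reset() -> observation, step(action) -> (obs, reward, done, info)."""

    def reset(self):
        self.t = 0
        return 0  # a trivial observation

    def step(self, action):
        self.t += 1
        reward = 1.0 if action == 1 else 0.0   # action 1 is rewarded
        done = self.t >= 10                    # episodes last 10 steps
        return 0, reward, done, {}

env = CoinFlipEnv()          # with Gym: env = gym.make("CartPole-v1")
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    action = random.choice([0, 1])   # random policy; swap in a learned one
    obs, reward, done, info = env.step(action)
    total_reward += reward
```

Every Gym environment, from CartPole to Atari, plugs into this same loop, which is what makes the toolkit such a convenient playground.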

Posted in Resources

Spring HAMLET Schedule is posted!

HAMLET (Human, Animal, and Machine Learning: Experiment and Theory)

Check out the updated schedule here: HAMLET

Did you hear about our brainstorming sessions?

This semester we are trying a new format for HAMLET. Speakers pursuing research at the intersection of computation and human behavior will give a short (10-15 minute) introduction/outline/description of a current research project or problem, with the aim of sparking collaborative discussion in the group about the project.

How would people from your field approach it?
Are there existing tools for solving the problem?
What is already known about the domain?
What kind of data are involved?
How does the research connect to real-world issues?

Subscribe/Unsubscribe: HAMLET mailing list

Posted in Uncategorized

Emotion-Recognizer Demonstration

The interactive demonstration of the Emotion-Recognizer provides an engaging experience and an introduction to applications of machine learning. The Emotion-Recognizer uses NEXT to map out the space of concepts, simple machine-vision tools to identify facial features, and a learned mapping from feature space to emotion space.

Human judgements through crowdsourcing were used to generate this map (below), which has clusters for happy, angry, bored and excited emotions. This demo allows you to take a picture of your face, then project it onto a map of facial emotions.

Try this with your face!

In summary:

  • Generating the embedding of faces is time-consuming (be patient!)
  • The embedding relies on human judgements gathered by asking questions like “is face X more similar to face A or face B?”
  • There are two axes to the circle of faces: an intensity axis (calm/rage) and a positivity axis (happy/sad).
  • The faces are roughly distributed on a ring; all neutral faces (neither happy nor sad) are calm.
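Those “is X more similar to A or B?” questions are exactly what triplet-embedding algorithms consume. As a rough illustration only (a generic hinge-loss triplet embedding in NumPy on synthetic judgements, not the actual NEXT pipeline), the sketch below learns 2-D coordinates for 10 items so that each answered triplet is respected:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for crowdsourced judgements: 10 items with hidden
# 1-D positions; each triplet (x, a, b) records "x is more similar to a".
true_pos = np.linspace(0.0, 1.0, 10)
triplets = []
for _ in range(500):
    x, a, b = rng.choice(10, size=3, replace=False)
    if abs(true_pos[x] - true_pos[a]) > abs(true_pos[x] - true_pos[b]):
        a, b = b, a
    triplets.append((x, a, b))

emb = rng.normal(scale=0.1, size=(10, 2))  # the 2-D embedding to learn
lr, margin = 0.05, 0.1

def violations(emb):
    """Count triplets whose margin constraint is not yet satisfied."""
    return sum(
        np.sum((emb[x] - emb[a]) ** 2) + margin > np.sum((emb[x] - emb[b]) ** 2)
        for x, a, b in triplets
    )

before = violations(emb)
for _ in range(100):                        # SGD over a hinge triplet loss
    for x, a, b in triplets:
        da = emb[x] - emb[a]
        db = emb[x] - emb[b]
        if da @ da + margin > db @ db:      # violated: pull x toward a, away from b
            emb[x] -= lr * 2 * (da - db)
            emb[a] += lr * 2 * da
            emb[b] -= lr * 2 * db
after = violations(emb)
```

The real system answers these queries with crowdsourced human judgements rather than synthetic ones, and NEXT additionally chooses which triplets to ask about adaptively, but the optimization idea is similar.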

If you are in the Madison area, please join us at the Wisconsin Science Festival, Oct 11–14, 2018. More details here.

Posted in Events, Machine Learning

Machine Learning for Everyone by Rob Nowak

Rob Nowak explains Machine Learning, Neural Networks, Deep Learning, Linear Classifiers, Good Features, Principal Component Analysis, Dictionary Learning, Generalization, and Cross-Validation in this talk, “Machine Learning for Everyone.”

Posted in LUCID Library, Machine Learning

Fractions War: Reflections from a non-traditional academic project

By John Vito Binzak

For many children and adults, fractions are not a fun math topic. Indeed, multiple studies have shown that learning how to solve problems with fractions and developing an understanding of what these complex number symbols mean is difficult for many people. Identifying how people learn fractions has become an important topic of study for cognitive scientists who specialize in numerical cognition. One reason for this interest is the mounting evidence indicating that knowledge of fractions in elementary school is important for later math success. For me (John Binzak, PhD candidate) and Elizabeth Toomarian (recent PhD graduate), investigating how people learn and understand fractions led us to wonder how our work in the lab could have educational implications. From these discussions came our idea to develop an educational app for iOS called Fractions War.

Fractions War is based on the classic children’s card game War. In War, a deck of playing cards is split between two players and each player flips over one card at a time to see whose card has the higher value (highest value wins). Round after round, the player who wins adds both cards to their deck until one player wins all of the cards. In Fractions War, players flip over two cards at a time, arrange them vertically to form a fraction, and then decide whose fraction has the larger value. We found this form of gameplay interesting for two reasons. First, even though the intention of War is not necessarily to teach players about number cardinality, players are nevertheless engaged in making rapid and repetitive magnitude comparisons. Second, these comparisons are made with playing cards, which convey number magnitude in both numerical (e.g. 7) and visual cues (e.g. 7 diamonds). Therefore, our goal was to create a fun experience for students to develop a better understanding of fractions through a game that exercises their knowledge in a low-stakes way.

Much of the numerical cognition research studying fractions learning has occurred in well controlled lab-based or school-based studies, but much less is known about how people learn about fractions in informal learning contexts. By designing Fractions War with a built-in data collection system, the game is designed to be both a learning intervention and a research tool. For instance, the game includes options to play with different cards that vary the extent to which visual cues (e.g. 10 diamonds) are present or absent. For players and teachers, these options diversify the possible ways to engage with the game and think about fractions. For researchers such as myself, this design feature allows us to test specific hypotheses regarding the extent to which these visual features may be critical for learning.

Based on my experience of developing an educational app inspired from my research questions, I would encourage other graduate students to identify potential opportunities to do the same. Making educational games is not a requirement for graduate students in Educational Psychology, nor is it a commonplace practice for cognitive scientists in the field. Rather, developing this app was an opportunity to express my personal motivation to help make better educational media and develop research-based principles for achieving these goals. Doing this work has provided an invaluable experience to see how skills developed in grad school could be applied in ways beyond academia. For example, developing Fractions War created opportunities to network with educational technology companies and interact with students at UW who have expertise outside of my discipline. Specifically, realizing our vision would not have been possible without partnering with four computer science students completing the Foundations of Mobile Systems and Applications course. Lastly, the interactions that came from organizing a team behind one vision, such as explaining research-based motivations to developers with no cognitive science background, were critical for developing my confidence in continuing this work going forward.

Posted in LUCID

Minds, Machines & Society Videos

Matt Botvinick

The Director of Neuroscience Research at DeepMind in London, UK, discusses current topics in artificial intelligence research and the implications of this research for understanding the human mind and improving human life.

“One problem that we are having as a society is that we are trying to think through the implications of this technology that is developing so rapidly, but a lot of people outside of the artificial intelligence community are getting their information about what a.i. is from Hollywood, rather than from people who actually research a.i.”

Ulrike Hahn

Cognitive Scientist at the Centre for Cognition, Computation & Modelling at Birkbeck, University of London discusses the cognitive science of fake news.

“Network structures that promote the collective accuracy are not necessarily best for promoting individual accuracy”

Stay tuned for more short clips from the Minds, Machines & Society event.

To watch the full event video:

Minds, Machines, & Society 7.28.18 from Discovery Building on Vimeo.

Posted in LUCID Library

Minds, Machines & Society

Free Registration at lucid.wisc.edu/events

For an encore of CogSci2018 and for those of you unable to attend CogSci2018, the organizers and some incredible keynote speakers have planned a free public event. ‘Minds, Machines & Society’ will be held on UW–Madison campus at the Discovery Building, 330 N Orchard St on July 28th from 7:30-9:30pm.

The speakers are world renowned thought-leaders in cognitive science, artificial intelligence and human creativity. Matt Botvinick, Director of Neuroscience Research for DeepMind, London will be discussing natural and artificial intelligence. Ulrike Hahn, Professor of Cognitive Science at Birkbeck, University of London will be discussing fake news and how it can affect our decision making, specifically in voting. Mark Seidenberg, author of the book Language at the Speed of Sight, researches reading and the brain.

Due to a family emergency, Bob Mankoff will no longer be able to join us.

Information for getting to the event

Posted in Events

Join Us for Minds, Machines & Society

Join us this Saturday for a free, open-to-the-public event.

Don’t miss Bob Mankoff, humorist and cartoon editor for the New Yorker & more recently for Esquire. Bob will provide a must-see talk about human and machine collaboration & creativity.

Interested in fake news? So are we! Ulrike Hahn, cognitive scientist at Birkbeck, University of London, will discuss fake news and its effects on our minds.

Artificial intelligence and human intelligence: what do we know, and how do they compare? Matt Botvinick, Director of Neuroscience Research at DeepMind, will share insights into the latest AI research.

Free Registration: lucid.wisc.edu/events

How to get to the event: lucid.wisc.edu/info

Posted in Events, Resources

CogSci 2018 – Why Changing Minds?

On the motivation for this year’s theme and its connection to current events.

When we bid to organize CogSci three years ago, the global erosion of faith in factual knowledge was already well under way. Scientific consensus was doing little to temper public disagreement across all manner of hot-button topics, from the health effects of vaccination to questions about climate change to the remarkable resurgence, in some quarters, of questions about whether the earth is flat. In our own discipline, a series of high-profile psychological studies failed to replicate or were even found to have been fabricated out of whole cloth, initiating a very public crisis of trust in the scientific process itself. While the questioning of historical and scientific truths is hardly unique to our time, the recent use of cognitive and behavioral research to promote misinformation demands some attention from our academic community. What light, if any, can cognitive science shed on the emergence, spread, and persistence of false beliefs? We aim to understand how minds work–how beliefs are formed, how decisions are made, how actions are taken, how the facts on the ground are perceived and understood. What can we as scientists do, and what should we advise policy makers or industry leaders or the general public to do, to remediate false beliefs? As advocate cognitive scientists, how do we change minds?

These questions intersected with another societal trend that was also becoming widely apparent: the increasing use of information technology in education. Universities were expanding their offerings of massive open online courses in efforts to both bolster their bottom lines and to make higher learning more widely available across the country and the globe. Apps seeking to teach you a new language or to “train your brain” or to inoculate against future dementia were rapidly multiplying, sometimes sparking controversy. Cognitive and computer scientists were collaborating to develop intelligent tutors that tailor learning experiences to each student’s individual needs. School boards across the country were evaluating the cost/benefit ratios for bringing iPads into the classroom. Here again it seemed that our science should have a role to play in guiding how these forces play out. What, if anything, do the learning models we develop in the lab offer to the teacher in the classroom, the school board president, the curriculum designer, or the software engineer? How do we relate the cognitive theories that elegantly explain behavior in tightly-controlled scenarios to the wilds of the classroom, where students simultaneously grapple with unconstrained learning problems ranging from how to read to calculating derivatives to understanding the structure of molecules? As educator cognitive scientists, how do we change minds?

A good theme should also resonate in the discipline more broadly, and it occurred to us that much of cognitive science concerns change in one guise or another. Developmental approaches, of course, make cognitive change their principal focus–but research into learning and memory is also the study of change, and so too is the study of mental and neurological disorders. Cognitive control, choice, decision, and action all involve understanding how behavior is changed in real time. Attention changes what events in our environment come to our awareness and guide our actions. A central debate in perception concerns whether it can be changed by interactions with conceptual or other non-perceptual knowledge. Indeed we were hard pressed to think of any aspect of cognitive science that does not involve questions about changing minds.

We hope the CogSci 2018 program has braided these strands in a way that does justice to the current moment and to the discipline as a whole. Our starting question–how and why do false beliefs arise, spread, and persist in society?–is reflected in Thursday’s plenary symposium Persuasion, Propaganda, and Politics which brings together four field-leading experts studying the problem from diverse perspectives. We have also identified contributed talks that focus generally on understanding how and why human learning and behavior can go wrong, or can appear to go wrong, and have grouped these into three sessions under the header “Fake news” (shown in orange here). The relations among cognitive science, technology, and education are addressed in Saturday’s plenary symposium Big Data Goes to School, which brings together speakers who address this topic from both the academic and the industrial side of the fence. Contributed talks focusing generally on cognitive science and education have been grouped into four sessions under the (less cheeky) heading “Education” (shown in yellow here).

The broader themes of change, technology, and current issues in society are addressed by three outstanding plenary speakers. On Thursday morning we will hear from Michael Kearns, a computer scientist at the University of Pennsylvania and Founding Director of the Warren Center for Network and Data Sciences, who focuses on understanding how machine learning algorithms function in and shape society, and how such algorithms can be inoculated against the human biases that permeate the masses of data on which they are trained. Friday we will hear from Matt Botvinick, Director of Neuroscience Research at DeepMind, whose research connects the latest innovations in machine learning to cognitive and neural theories about human goal-directed action. Saturday we will hear from Susan Gelman, the Heinz Werner Distinguished Professor at the University of Michigan, who will speak about the conceptual roots and development of moral reasoning.

We are thrilled to be hosting a scientist whose career has changed minds, in the sense of fundamentally altering our understanding of language. Friday morning features a symposium honoring the work of Michael Tanenhaus, the 2017 Rumelhart Prize recipient, and on Friday evening Prof. Tanenhaus will deliver his Rumelhart Prize Lecture, “The Paradox of Real-time Language Comprehension: Signal, Noise, and Context.” Join us afterward for a reception on the rooftop of the conference venue to celebrate the lecture and for the announcement of this year’s winner.

Finally, for those staying on past the conference end, Saturday evening features a public event entitled Minds, Machines, and Society. Two conference speakers–Matt Botvinick and Ulrike Hahn–will be joined by Bob Mankoff, humor editor at Esquire and longtime cartoon editor at the New Yorker, for an evening of informal talks about the ways that cognitive science, machine learning, and technology are interacting to influence culture and society. The event is open to the public and free to attend, but space is limited so make sure to register your attendance here.

Looking forward to seeing you all and hearing about your work in Madison,

The CogSci 2018 Program Committee

Posted in Events, Resources