HAMLET

HAMLET (Human, Animal, and Machine Learning: Experiment and Theory)

HAMLET is an interdisciplinary proseminar series that started in 2008. The goal is to provide behavioral and computational graduate students with a common grounding in the learning sciences. Guest speakers give a talk each week, followed by discussion. The series has also been an outlet for fresh research results from various projects. Participants typically come from Computer Sciences, ECE, Psychology, and Educational Psychology, as well as other parts of campus. Multiple federal research grants and publications at machine learning and cognitive psychology venues have resulted from the interactions of HAMLET participants.

Meetings: Fridays, 3:45–5:00 p.m., Berkowitz Room (338 Psychology Building)

Subscribe/Unsubscribe: HAMLET mailing list

Fall 2017 Schedule:

September 15, Virtual and Physical: Two Frames of Mind, Bilge Mutlu (CS)

In creating interactive technologies, virtual and physical embodiments are often seen as two sides of the same coin. They utilize similar core technologies for perception, planning, and interaction, and they engage people in similar ways. Thus, designers consider these embodiments to be broadly interchangeable, with the choice of embodiment depending primarily on the practical demands of an application. In this talk, I will make the case that virtual and physical embodiments elicit fundamentally different frames of mind in the users of the technology and follow different metaphors for interaction. These differences elicit different expectations, different forms of engagement, and eventually different interaction outcomes. I will discuss the design implications of these differences, arguing that different domains of interaction serve as appropriate contexts for virtual and physical embodiments.

October 13, Learning semantic representations for text: analysis of recent word embedding methods, Yingyu Liang (CS)

Recent advances in natural language processing build upon the approach of embedding words as low-dimensional vectors. The fundamental observation that empirically justifies this approach is that these vectors can capture semantic relations. A probabilistic model for generating text is proposed to mathematically explain this observation and existing popular embedding algorithms. It also reveals surprising connections to classical notions such as Pointwise Mutual Information in computational linguistics, and it allows us to design novel, simple, and practical algorithms for applications such as embedding sentences as vectors.
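For readers who want a concrete feel for the Pointwise Mutual Information connection mentioned above, here is a minimal, illustrative sketch (not the speaker's model): it builds a positive-PMI co-occurrence matrix over a toy corpus and factors it with an SVD to obtain low-dimensional word vectors.

import numpy as np
from itertools import combinations

# Toy corpus; in practice this would be a large text collection.
corpus = [
    "the king rules the land".split(),
    "the queen rules the land".split(),
    "dogs chase cats".split(),
    "cats chase mice".split(),
]
vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts (window = whole sentence, for simplicity).
counts = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for w1, w2 in combinations(sent, 2):
        counts[idx[w1], idx[w2]] += 1
        counts[idx[w2], idx[w1]] += 1

# Positive pointwise mutual information: max(0, log p(w1, w2) / (p(w1) p(w2))).
total = counts.sum()
p_word = counts.sum(axis=1) / total
with np.errstate(divide="ignore"):
    pmi = np.log((counts / total) / np.outer(p_word, p_word))
ppmi = np.maximum(pmi, 0.0)

# A truncated SVD of the PPMI matrix yields low-dimensional word vectors.
U, S, _ = np.linalg.svd(ppmi)
vectors = U[:, :2] * S[:2]
for word in vocab:
    print(word, np.round(vectors[idx[word]], 2))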

October 20, Vlogging about research, Martina Rau (Ed Psych)

October 27, LUCID faculty meeting (no large-group meeting)

November 3, A Discussion of Open Science Practices, Martha W. Alibali (Psych)

This HAMLET session will be a discussion of open-science practices, led by Martha Alibali. We will start with a brief discussion of the “replication crisis” and “questionable research practices”. We will then discuss solutions, including better research practices, data sharing, and preregistration. Please read at least some of the provided papers, and come prepared to ask questions and share your experiences.

Replication crisis paper

Questionable Research Practices (QRPs) and solutions paper

Data sharing paper: http://deevybee.blogspot.co.uk/2014/05/data-sharing-exciting-but-scary.html

Preregistration paper: https://www.psychologicalscience.org/observer/seven-selfish-reasons-for-preregistration

November 10, Systematic misperceptions of 3D motion explained by Bayesian inference, Bas Rokers (Psych)

Abstract: Over the years, a number of surprising but seemingly unrelated errors in 3D motion perception have been reported. Given the relevance of accurate motion perception to our everyday life, it is important to understand the cause of these perceptual errors. We considered that these perceptual errors might arise as a natural consequence of estimating motion direction given sensory noise and the geometry of 3D viewing. We characterized the retinal motion signals produced by objects moving along arbitrary trajectories through three dimensions and developed a Bayesian model of perceptual inference. The model predicted a number of known errors, including a lateral bias in the perception of motion trajectories and a dependency of this bias on stimulus contrast and viewing distance. The model also predicted a number of previously unknown errors, including a dependency of perceptual bias on eccentricity and a surprising tendency to misreport approaching motion as receding and vice versa. We then used standard 3D displays as well as virtual reality (VR) headsets to test these predictions in naturalistic settings, and established that people make the predicted errors. In sum, we developed a quantitative model of 3D motion perception and provided a parsimonious account for a range of systematic perceptual errors in naturalistic environments.
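As a toy illustration of the Bayesian logic (a minimal sketch under simplified Gaussian assumptions, not the authors' full 3D geometric model), the snippet below shows how a slow-motion prior shrinks velocity estimates toward zero, with the shrinkage, and hence the bias, growing as sensory noise increases, as it does at low contrast.

def posterior_mean(v_measured, sigma_sensory, sigma_prior=1.0):
    """Posterior mean of velocity: a Gaussian likelihood N(v_measured, sigma_sensory^2)
    combined with a zero-mean Gaussian 'slow motion' prior N(0, sigma_prior^2)."""
    weight_on_data = sigma_prior**2 / (sigma_prior**2 + sigma_sensory**2)
    return weight_on_data * v_measured

# Motion-in-depth component of a hypothetical trajectory (arbitrary units).
true_depth_motion = 2.0

# Higher sensory noise (e.g., lower contrast or greater viewing distance)
# pulls the estimate further toward zero, i.e., toward a more lateral trajectory.
for sigma in (0.2, 1.0, 3.0):
    estimate = posterior_mean(true_depth_motion, sigma)
    print(f"sensory noise {sigma:.1f} -> estimated depth motion {estimate:.2f}")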

November 17, Total variation regression under highly correlated designs, Becca Willett (ECE)

Abstract: I will describe a general method for solving high-dimensional linear inverse problems with highly correlated variables. This problem arises regularly in applications like neural decoding from fMRI data, where we often have two orders of magnitude more brain voxels than independent scans. Our approach leverages a graph structure that represents connections among voxels in the brain. This graph can be estimated from side sources, such as diffusion-weighted MRI, or from fMRI data itself. We will explore the underlying models, computational methods, and initial empirical results. This is joint work with Yuan Li and Garvesh Raskutti.
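To make the setup concrete, here is a minimal sketch (an assumed formulation for illustration, not the authors' algorithm) of total-variation-regularized regression, where the penalty encourages coefficients of graph-connected variables, such as neighboring voxels, to be similar. It uses a simple chain graph as a stand-in for the voxel graph and the cvxpy solver for convenience.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, p = 40, 200                       # many more variables (voxels) than observations
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:40] = 1.0                 # an "active region" that is smooth on the graph
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# Chain graph over the variables, standing in for a voxel connectivity graph.
edges = [(i, i + 1) for i in range(p - 1)]
D = np.zeros((len(edges), p))        # graph difference (incidence) matrix
for k, (i, j) in enumerate(edges):
    D[k, i], D[k, j] = 1.0, -1.0

# Least squares plus a total variation penalty over the graph edges.
beta = cp.Variable(p)
lam = 5.0
objective = cp.Minimize(cp.sum_squares(y - X @ beta) + lam * cp.norm1(D @ beta))
cp.Problem(objective).solve()

print("relative recovery error:",
      np.linalg.norm(beta.value - beta_true) / np.linalg.norm(beta_true))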

November 24, Thanksgiving Holiday

December 1, Micro-(Shape-And-Motion)-Scopes, Mohit Gupta (CS)

Imagine a drone looking for a safe landing site in a dense forest, or a social robot trying to determine the emotional state of a person by measuring her micro-saccade movements and skin tremors due to pulse beats, or a surgical robot performing micro-surgery inside the body. In these applications, it is critical to resolve fine geometric details, such as tree twigs; to recover micro-motion due to biometric signals; and to track the precise motion of a robotic arm. Such precision is more than an order of magnitude beyond the capabilities of traditional vision techniques. I will talk about our recent work on designing extreme (micro) resolution 3D shape and motion sensors using unconventional but low-cost optics and computational techniques. These methods can measure highly subtle motions (<10 microns) and highly detailed 3D geometry (<100 microns). These sensors can potentially detect a person's pulse or micro-saccade movements, and resolve fine geometric details such as facial features, from a long distance.

December 8, Influence maximization in stochastic and adversarial settings, Po-Ling Loh (ECE)

We consider the problem of influence maximization in fixed networks, for both stochastic and adversarial contagion models. Such models may be used to describe the spread of infections in epidemiology, as well as the diffusion of information in viral marketing. In the stochastic setting, nodes are infected in waves according to linear threshold or independent cascade models. We establish upper and lower bounds for the influence of a subset of nodes in the network, where the influence is defined as the expected number of infected nodes at the conclusion of the epidemic. We quantify the gap between our upper and lower bounds in the case of the linear threshold model and illustrate the gains of our upper bounds for independent cascade models in relation to existing results. In the adversarial setting, an adversary is allowed to specify the edges through which contagion may spread, and the player chooses sets of nodes to infect in successive rounds. Our main result is to establish upper and lower bounds on the regret for possibly stochastic strategies of the adversary and player. This is joint work with Justin Khim (UPenn) and Varun Jog (UW-Madison).
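For intuition about the stochastic setting, here is a minimal, illustrative sketch (not the paper's models or bounds in full generality): it estimates the influence of a seed set under the independent cascade model by Monte Carlo simulation, where each newly infected node gets one chance to infect each out-neighbor with a fixed probability.

import random

def independent_cascade(adj, seeds, prob, rng):
    """One run of the independent cascade; returns the set of infected nodes."""
    infected = set(seeds)
    frontier = list(seeds)
    while frontier:
        new_frontier = []
        for u in frontier:
            for v in adj.get(u, []):
                if v not in infected and rng.random() < prob:
                    infected.add(v)
                    new_frontier.append(v)
        frontier = new_frontier
    return infected

def influence(adj, seeds, prob=0.2, runs=2000, seed=0):
    """Expected number of infected nodes, estimated by Monte Carlo."""
    rng = random.Random(seed)
    return sum(len(independent_cascade(adj, seeds, prob, rng)) for _ in range(runs)) / runs

# Toy directed network (hypothetical): node -> list of out-neighbors.
adj = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
print(influence(adj, seeds={0}))
print(influence(adj, seeds={0, 4}))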

For other events, check out our calendar: Seminar and Events. This content and updates can be found on the HAMLET page.


HAMLET Archives

Fall 2015 archive
Fall 2012 archive
Fall 2011 archive
Spring 2011 archive
Fall 2010 archive
Fall 2009 archive
Spring 2009 archive
Fall 2008 archive

 

Posted in Events

eLUCID8

THANK YOU to all who contributed to the success of eLUCID8!

Data Science and Human Behavior  |  In the Lab and In the Wild

Wisconsin Institute for Discovery, Madison, WI
August 14th – August 15th, 2017

eLUCID8 featured interactive presentations, talks, and roundtables intended to discuss LUCID-related projects and potential collaborations with government agencies, non-profits, and industry groups.

Scroll down or click for the Agenda.

For communication during the conference (your questions, announcements and evaluation links) please sign up for eLUCID on Slack: https://join.slack.com/t/elucid8/signup

We truly appreciate and value your feedback. Please let us know what your experience was like at eLUCID8 in the following short surveys: Monday eLUCID8, Tuesday eLUCID8. Our students also value your feedback on their presentations. Please let us know what you think: LUCID Presentations

 

Keynote Speakers

Monday, August 14th
6pm

Bob Mankoff
Former Cartoon Editor of The New Yorker,
Present Cartoon and Humor Editor of Esquire

“Crowdsourcing Humor”

Humor is traditionally in the hands of its author. What happens when the audience picks the punchline?

Each week, on the last page of the magazine, The New Yorker provides a cartoon in need of a caption. Readers submit captions, the magazine chooses three finalists, and readers vote for their favorites. It’s humor—crowdsourced—and with more than 3 million submissions provided by 600,000 participants, it provides tremendous insight into what makes us laugh.

In a fast-paced and funny talk, Bob Mankoff, The New Yorker‘s cartoon editor, will analyze the lessons we learn from crowdsourced humor. Along the way, he’ll explore how cartoons work (and sometimes don’t); how he makes decisions about what cartoons to include; and what crowds can tell us about a good joke.

 


Tuesday, August 15th
4pm
Michael C. Mozer
Department of Computer Science 
and Institute of Cognitive Science
University of Colorado

“Amplifying Human Capabilities on Visual Categorization Tasks”

 

 

We are developing methods to improve human learning and performance on challenging visual categorization tasks, e.g., bird species identification and diagnostic dermatology. Our approach involves inferring “psychological embeddings,” the internal representations that individuals use to reason about a domain. Using predictive cognitive models that operate on an embedding, we perform surrogate-based optimization to determine efficient and effective means of training domain novices, as well as amplifying an individual’s capabilities at any stage of training. Our cognitive models leverage psychological theories of similarity judgment and generalization, of contextual and sequential effects in choice, and of attention shifts among embedding dimensions. Rather than searching over all possible training policies, we focus our search on policy spaces motivated by the training literature, including manipulation of exemplar difficulty and the sequencing of category labels. We show that our models predict human behavior not only in the aggregate but at the level of individual learners and individual exemplars, and preliminary experiments show the benefits of surrogate-based optimization on learning and performance.

This work was performed in collaboration with Brett Roads at the University of Colorado.
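As a cartoon of the surrogate-based optimization idea (a hypothetical, illustrative setup, not the authors' cognitive models or experiments), the sketch below evaluates a few training policies with an expensive "experiment", fits a cheap quadratic surrogate to the results, and uses the surrogate to pick the next policy to try.

import numpy as np

rng = np.random.default_rng(0)

def run_training_study(difficulty):
    """Stand-in for an expensive experiment: returns a noisy learning outcome."""
    return 1.0 - (difficulty - 0.6) ** 2 + 0.05 * rng.standard_normal()

# Evaluate a handful of candidate policies (here, exemplar-difficulty levels).
tried = np.array([0.1, 0.3, 0.9])
outcomes = np.array([run_training_study(d) for d in tried])

# Fit a cheap quadratic surrogate to the observed (policy, outcome) pairs.
coeffs = np.polyfit(tried, outcomes, deg=2)
candidates = np.linspace(0, 1, 101)
predicted = np.polyval(coeffs, candidates)

# Propose the policy where the surrogate predicts the best learning outcome.
next_policy = candidates[np.argmax(predicted)]
print(f"surrogate suggests evaluating difficulty ~ {next_policy:.2f} next")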

Michael Mozer received a Ph.D. in Cognitive Science at the University of California, San Diego in 1987. Following a postdoctoral fellowship with Geoffrey Hinton at the University of Toronto, he joined the faculty at the University of Colorado at Boulder and is presently a Professor in the Department of Computer Science and the Institute of Cognitive Science. He is secretary of the Neural Information Processing Systems Foundation and has served as chair of the Cognitive Science Society. He is interested both in developing machine learning algorithms that leverage insights from human cognition and in developing software tools to optimize human performance using machine learning methods.

 


Agenda

Monday August 14th

9:00-9:30      Welcome to eLUCID8 from our LUCID Director, Tim Rogers
9:30-10:30     Learning in Childhood with Jenny Saffran, Ed Hubbard and Chuck Kalish
10:30-10:45   Break
10:45-Noon    Machine Learning & Human Behavior with Varun Jog, Dimitris Papailiopoulos, Joe Austerweil and Jerry Zhu
Noon-1:30      LUNCH
1:30-2:15        Making Sense of the Ineffable with Karen Schloss and Paula Niedenthal
2:15-2:45       Data Science in the Wild with our Keynote Speakers Bob Mankoff and Michael Mozer
2:45-3:00       Break
3:00-4:15        Data Blitz
4:15-5:00       Poster and Movie Session
5:00-6:00      Posters, Mingling and Cash Bar
6:00-7:00      KEYNOTE: Bob Mankoff

Tuesday August 15th

9:30-10:30     Science Communication Panel with Veronica Rueckert, host of “Central Time” on WPR, Jordan Ellenberg, author of How Not to Be Wrong: The Power of Mathematical Thinking, and Mark Seidenberg, author of Language at the Speed of Sight: How We Read, Why So Many Can’t, and What Can Be Done About It

10:30-12:00   LUCID Science Talks
12:00-1:00     LUNCH
1:00-1:45        University-Industry Partnerships with Susan LaBelle, UW–Madison Office of Corporate Relations
1:45-2:00       Break
2:00-3:20      Data Science in the Wild with City of Madison, 4W, POLCO, and Lands’ End
3:20-4:00      Breakouts and Group Discussion
4:00-5:00      KEYNOTE: Michael C. Mozer

 

Posted in Events

CogSci 2018 Poster Design Contest for $100

Would you like to see your artwork displayed across the nation and the globe? Support a prestigious international scientific conference? Have a chance to win a cash prize of $100? If so, please submit your poster design for CogSci 2018 by June 12th!

The Cognitive Science Society is the world’s largest academic society focusing on how the mind works. In 2018 the CogSci annual meeting will be held in Madison, and we are seeking original artwork to advertise the conference. The conference title is ‘Changing Minds’, a focus that brings together disciplines such as cognitive psychology, machine learning, education, development, and neuroscience. Themes for the conference include:

changing minds: connecting human and machine learning
changing brains: neural mechanisms of cognitive change
changing knowledge: cognition, education, and technology
changing society: cognition, persuasion, and politics

We would like a poster that captures these themes with a compelling graphic, together with text providing further details about the event.

Here are examples from previous conferences:

[Poster designs from previous CogSci conferences: 2011, 2012, 2015, 2016, and 2017]

For the poster text, use lorem ipsum or other dummy text to demonstrate the font and colors you recommend to accompany your design elements. Please send PDF files to ceiverson@wisc.edu.

The competition due date is June 12. A shortlist will be determined via crowd-sourced adaptive sampling using the NEXT system, and the conference committee will select a winner by June 15.

Posted in Uncategorized

Rogers explains semantic impairments with neural networks

Using neural networks to understand healthy and disordered semantic cognition

In this video, Tim Rogers explains how artificial neural networks can help explain puzzling patterns observed in patients with a rare form of dementia in which knowledge about the meanings of words and pictures gradually erodes.

Posted in LUCID Library

Finding Meaning in Big Data

Discovery Fellows Rebecca Willett and Rob Nowak are creating algorithms to make sense of big data and help machines learn. Full story at wid.wisc.edu.

Posted in LUCID, Machine Learning

How New Yorker Cartoons Could Teach Computers To Be Funny

“The computer models Nowak and his team are developing are called adaptive crowdsourcing algorithms. They attempt to weed out the weakest captions as quickly as possible to get more people to vote on the potential winners.”
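The quoted description can be illustrated with a minimal sketch (a simplified stand-in, not the actual NEXT/New Yorker algorithm): a successive-elimination loop that drops captions whose average rating clearly trails the leader, so later votes concentrate on the potential winners.

import random

def collect_vote(quality, rng):
    """Stand-in for one crowd vote: 1 ('funny') with probability equal to quality."""
    return 1.0 if rng.random() < quality else 0.0

def successive_elimination(qualities, rounds=20, votes_per_round=50, margin=0.15, seed=0):
    rng = random.Random(seed)
    active = list(range(len(qualities)))
    totals = [0.0] * len(qualities)
    counts = [0] * len(qualities)
    for _ in range(rounds):
        for i in active:
            for _ in range(votes_per_round):
                totals[i] += collect_vote(qualities[i], rng)
                counts[i] += 1
        means = {i: totals[i] / counts[i] for i in active}
        best = max(means.values())
        # Drop captions whose mean rating clearly trails the current leader.
        # (A real algorithm would use statistical confidence bounds, not a fixed margin.)
        active = [i for i in active if best - means[i] < margin]
        if len(active) <= 3:
            break
    return active  # indices of the surviving finalist candidates

# Five hypothetical captions with different underlying 'funniness' rates.
print(successive_elimination([0.20, 0.35, 0.50, 0.55, 0.60]))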

Full Story on C|net

Try the caption contest

Posted in LUCID, Machine Learning

Podcasts showcase stories, science and secrets behind UW-Madison research

Imagine getting an intimate sneak peek at the research happening behind the scenes at one of the world’s largest research universities, as well as a glimpse at the human stories behind that research. Did we mention you can do it on your smartphone? Bilge Mutlu kicks off this initiative with the first mini-feature, Birth of the Bots. See more at: http://news.wisc.edu/podcasts-showcase-stories-science-and-secrets-behind-uw-madison-research/

Posted in LUCID, Science Narratives Project

The Science of Funny: Active Machine Learning & Cartoons

The New Yorker is using a machine learning system developed by WID Optimization researchers to sort through captions for their weekly cartoon caption contest. See full story on wid.wisc.edu.

Posted in LUCID, Machine Learning

Rob Nowak talks Machine Learning to Blue Sky Science

Blue Sky Science: What is machine learning? from Morgridge Institute on Vimeo.

Rob Nowak lends his machine learning expertise to Morgridge Blue Sky Science Video.

Posted in LUCID