A Symposium on Human+AI

September 29, 2023, 9:00 AM - 5:30 PM
John Crerar Library, Room 390

AI has achieved impressive success in a wide variety of domains, ranging from medical diagnosis to generative AI. This success provides rich opportunities for AI to address important societal challenges, but there are also growing concerns about the bias and harm that AI systems may cause. This conference brings together diverse perspectives to think about the best way for AI to fit into society and how to develop the best AI for humans.

Invited speakers

See the schedule below.

Jeffrey Bigham
Marti Hearst
Zachary Lipton
Jenn Logg
Sanjog Misra
Mark Riedl

Registration

If you plan to attend, please fill out this one-minute survey. Registration is free, and your response helps us plan how much food to buy.

Please register here by September 15.

Call for poster presentations

We invite all researchers and practitioners to submit poster presentations for the Symposium on Human+AI. This is an excellent opportunity to showcase your work, share insights, and engage in discussions about the intersection of AI and human society. We are particularly interested in presentations that examine opportunities and challenges in achieving complementary and beneficent AI.

Poster presenters will have the opportunity to display their posters at the Symposium and engage with fellow attendees during poster sessions. This is a chance to receive feedback, establish collaborations, and contribute to meaningful conversations about the future of interaction between humans and AI. Please submit your abstract here by September 8, 2023.

Organization

The organizing committee for the Symposium on Human+AI consists of Chenhao Tan, Sendhil Mullainathan, and James Evans. This event is made possible by the generous support of the Stevanovich Center for Financial Mathematics. Mourad Heddaya leads the program committee, and Yixuan Wang is the webmaster.

Schedule

Breakfast
Welcome
Marti Hearst
Language as User Interface
Mark Riedl

Mark Riedl is a Professor in the Georgia Tech School of Interactive Computing and Associate Director of the Georgia Tech Machine Learning Center. Dr. Riedl’s research focuses on human-centered artificial intelligence—the development of artificial intelligence and machine learning technologies that understand and interact with human users in more natural ways. Dr. Riedl’s recent work has focused on story understanding and generation, computational creativity, explainable AI, and teaching virtual agents to behave safely.

Toward Human-Centered Explainable AI
Jeffrey Bigham

My research considers the intersection between people and machine learning broadly: I build novel human-AI systems, study how people use machine learning systems, and design possible AI futures. Much of my work focuses on accessibility because I see the field as a window into the future, given that people with disabilities are often the earliest adopters of AI. I am an Associate Professor in the Human-Computer Interaction and Language Technologies Institutes in the School of Computer Science at Carnegie Mellon University. I received my B.S.E. degree in Computer Science from Princeton University in 2003, and received my Ph.D. in Computer Science and Engineering from the University of Washington in 2009. I have received the Alfred P. Sloan Foundation Fellowship (2014), the MIT Technology Review Top 35 Innovators Under 35 Award (2009), and the NSF CAREER Award (2012).

How HCI Might Engage with the Easy Access to Statistical Likelihoods of Things

Unintuitive statistical likelihoods of language and vision are now readily available via API, and people are connecting them to every possible way of interacting with machines. Despite this, we know very little about, yet have a wealth of historical precedent relevant to, what interactions are likely to work, what is important for enabling them to work well, and where we should put our efforts if we want to enable better human interactions with machines. HCI thus has a vital role to play in helping us all to understand and scaffold human interaction where our intuitions fail. In this talk, I will bucket the opportunities we have as HCI researchers, using examples from my own (and others’) work in Human-AI Interaction, into themes of Benefit, Understand, Protect and Thrive.

Lunch / Poster session
Sanjog Misra
Structural Deep Learning

Humans have an amazing ability to describe the structure of the world in ways that allow constraints, realism, and boundaries to be respected. This structure facilitates the notion of counterfactuals, which is a fundamental element of any framework that aims at making decisions. In this talk, I will discuss the need to think of ML, and in particular deep learning, as embeddable objects in structural models of human (and group or firm) behavior. I will provide some relevant contexts, examples, and applications of these ideas.

Zachary Lipton
Responsible AI's Causal Turn

With widespread excitement about the capability of machine learning systems, this technology has been instrumented to influence an ever-greater sphere of societal systems, often in contexts where what is expected of the systems goes far beyond the narrow tasks on which their performance was certified. Areas where our requirements of systems exceed their capabilities include (i) robustness and adaptivity to changes in the environment, (ii) compliance with notions of justice and non-discrimination, and (iii) providing actionable insights to decision-makers and decision subjects. In all cases, research has been stymied by confusion over how to conceptualize the critical problems in technical terms. And in each area, causality has emerged as a language for expressing our concerns, offering a philosophically coherent formulation of our problems but exposing new obstacles, such as an increasing reliance on stylized models and a sensitivity to assumptions that are unverifiable and (likely) unmet. This talk will introduce a few recent works, providing vignettes of reliable ML’s causal turn in the areas of distribution shift, fairness, and transparency research.

Break
Jenn Logg

Jennifer M. Logg, Ph.D., is an Assistant Professor of Management at Georgetown University's McDonough School of Business. Prior to joining Georgetown, she was a Post-Doctoral Fellow at Harvard University. Dr. Logg received her Ph.D. from the University of California, Berkeley’s Haas School of Business.

Her research examines why people fail to view themselves and their work realistically. It focuses on how individuals can assess themselves and the world more accurately by using advice and feedback produced by algorithms (scripts for mathematical calculations).

She calls her primary line of research Theory of Machine. It uses a psychological perspective to examine how people respond to the increasing prevalence of information produced by algorithms. Broadly, this work examines how people expect algorithmic and human judgment to differ. Read more in her book chapter, The Psychology of Big Data: Developing a “Theory of Machine” to Examine Perceptions of Algorithms.

She has been invited to speak on the topic of algorithms with decision-makers in the U.S. Senate, Air Force, and Navy. During her Ph.D., she was a collaborator on the Good Judgment Project, funded by IARPA (the Intelligence Advanced Research Projects Activity), the US intelligence community's equivalent of DARPA. Currently, she is a Faculty Fellow at Georgetown University's AI, Analytics, and the Future of Work Initiative. She is also a member of the "Theory of AI Practice" working group, funded by the Rockefeller Foundation through Stanford University's Center for Advanced Study in the Behavioral Sciences.

A Simple Explanation Reconciles “Algorithm Aversion” vs. “Algorithm Appreciation”: Hypotheticals vs. Real Judgments

We propose a simple explanation to reconcile research documenting algorithm aversion with research documenting algorithm appreciation: elicitation methods. We compare self-reports and actual judgments. When making judgments, people consistently utilize algorithmic advice more than human advice. In contrast, hypotheticals produce unstable preferences; people sometimes report indifference and sometimes report preferring human judgment. Moreover, people fail to correctly anticipate behavior, utilizing algorithmic advice more than they anticipate. A framing change between hypotheticals additionally moderates algorithm aversion. Stated preferences about algorithms are less stable than actual judgments, suggesting that algorithm aversion may be less stable than previous research leads us to believe.

Panel discussion