For third straight year, Khoury researchers set new publishing benchmark at CHI 

As Khoury College's faculty and student researchers descend on Barcelona this week, they're asking how technology — particularly AI — can fulfill users' needs, enhance their abilities, and amplify their humanity.

by Madelaine Millar

The CHI '26 logo overlaid on top of a mosaic containing faces of 39 Khoury researchers

The Conference on Human Factors in Computing Systems (CHI), the world’s most prestigious human–computer interaction conference, is taking place in Barcelona from April 13 to 17. For a third straight year, Khoury researchers have topped themselves, with 53 faculty, PhD students, master’s students, and undergraduates combining to contribute a college-record 41 accepted works, plus a handful of workshops and panels.

Five of those works were recognized with best paper honorable mentions, placing them in the top 5% of accepted research. One explores how people interpret and predict visual forecasts in uncertain situations. Another examines whether AI could help physicians prepare for multidisciplinary tumor boards. The third documents the asymptomatic harms of productivity-enhancing AI tools for skilled workers. The fourth presents a dashboard to help mental health clinicians synthesize data from multiple sources. And the fifth explores how players respond to AI-generated content in their video games.

To discover the full range of Khoury research, click any of the linked summaries below, or simply read on. Khoury researchers’ names are bolded, entries are ordered by day and local time, and researchers from other Northeastern colleges — the College of Arts, Media and Design (CAMD); the Bouvé College of Health Sciences; the College of Science (CoS); and the College of Engineering (CoE) — are noted in parentheses.

Monday, April 13

Outfoxed: Design and Evaluation of a Modular Interactive Puzzle for Cognitive Enrichment of Zoo Animals

Tricky as a fox: new adaptive animal feeder

Vatsal Mehta, Somil Urmil Shah, Lubaina Malvi, Willem Shak, Felix Sims, Sarah Woodruff, Rébecca Kleinberger (+CAMD)

Monday, April 13, 11:27–11:39 a.m.

Even with carefully designed enrichment programs, maintaining novelty and challenge for zoo animals can be difficult. Puzzle feeders can help by requiring animals to complete tasks to earn food, but animals like foxes often figure them out too quickly.

This team designed a new style of modular, adaptive puzzle feeder that can grow in difficulty as an animal figures it out. In tests with one Arctic fox and two coatis, the team found that behaviors indicating an animal’s cognitive wellness — like exploring its habitat and behaving in a variety of ways — improved when the animals were engaged with the puzzle.

“Sparked by experiential learning in my animal–computer interaction course at Khoury College, this project developed into a summer-long deployment at the zoo with a fantastic group of students,” said Khoury–CAMD Assistant Professor Rébecca Kleinberger. “Working closely with zoo staff and Hudson the Arctic fox, we observed meaningful changes in engagement, highlighting the potential of technology in supporting animal agency and experience.”

Back to top

Quantifying Latencies: A Conversation Analysis Approach to Human-Agent Interactions in Virtual Reality

Chatbot, say something already!

Raina Cao, Mengxu Pan, Panxin Liu, Viduni Ariyawansa, Mirjana Prpa (former Khoury professor, now Hong Kong University of Science and Technology assistant professor), Alexandra Kitson

Monday, April 13, 11:39–11:51 a.m.

Conversation has a rhythm to it. Humans can feel it, and when LLM agents can’t match it, humans can feel frustrated. But what is the rhythm that the agents are failing to match?

By analyzing human–agent conversations in social VR, this research team found that LLM agents take an average of 4.1 seconds to respond, compared to 1.2 seconds for humans. They also found agents tend to keep speaking for longer after an interruption, and that together the latencies created a “conversational timing drift” that disrupted the expected rhythm of a conversation.

Back to top

MIND: Empowering Mental Health Clinicians with Multimodal Data Insights through a Narrative Dashboard

**WINNER: Best Paper Honorable Mention**

Understanding patients’ MINDs

Ruishi Zou, Shiyu Xu, Margaret E Morris, Jihan Ryu, Timothy D. Becker, Nicholas Allen, Anne Marie Albano, Randy Auerbach, Daniel A. Adler, Varun Mishra (+Bouvé), Lace M. Padilla (+CoS), Dakuo Wang (+CAMD), Ryan Sultan, Xuhai “Orson” Xu

Monday, April 13, 11:51 a.m.–12:03 p.m.

Balancing clinical notes with a patient’s insights can help clinicians offer better mental health care, but the data patients generate about themselves is structured very differently from the data clinicians generate. Can the two be presented in conversation so providers can spend less time interpreting data and more time making evidence-based choices?

Through co-design with five mental health clinicians, these researchers propose an LLM-powered multimodal dashboard called MIND. MIND draws on both patient-generated and clinical data to present insights through narrative text and complementary charts. Initial users reported that MIND’s dashboard made it easier to uncover the clinically relevant stories hidden in messy and complex data.

“Our goal is not to give clinicians more data — they already have too much,” said Khoury–CAMD Associate Professor Dakuo Wang. “MIND turns multimodal data into interpretable insights that align with clinical context, so providers can spend their time on what matters: patient care.”

Back to top

BearBubbles: Interactive Olfactory Enrichment to Encourage Foraging in Zoo Animals

Bubbles for bears

Arushi Aggarwal, Sarah Woodruff, Jas Brooks, Rébecca Kleinberger (+CAMD)

Monday, April 13, 12:27–12:39 p.m.

Animals benefit from varied and stimulating experiences. In the wild, that happens naturally; in zoos, that can mean new foods, changes in their enclosures … or scented bubbles to play with.

This research team gave a proximity-activated bubble machine designed for olfactory enrichment to two elderly black bears, one of whom had arthritis. Over three weeks, researchers found the bears foraged and moved around their enclosure more often than they had without the bubble machine.

“Our project was interested in how we could motivate a gentle, active lifestyle without direct food-based motivations. This research — and our system — has the potential to extend to many large mammal species, many of whom face similar age-related challenges,” said third-year computer science student Arushi Aggarwal.

Back to top

Conversational Successes and Breakdowns in Everyday Smart Glasses Use

Do my smart glasses see what I see?

Xiuqi Tommy Zhu (CAMD), Xiaoan Liu, Casper Harteveld (CAMD), Smit Desai (+CAMD), Eileen McGivney (CAMD)

Monday, April 13, 2:15–3:45 p.m. and 4:30–6 p.m.

Smart glasses feel like magic — when they work. When they don’t, they are awkward and frustrating. So, when do they work?

These researchers explored people’s daily use of LLM-powered smart glasses for a month, comparing them to voice-only tools. The glasses did a great job solving immediate problems — answering questions like “How do I open this?” when looking at a jar — but frequently broke down or gave wrong answers when they couldn’t track what the user was looking at, creating awkward moments in public.

“Smart glasses represent a fundamentally different kind of AI interaction, where the system is supposed to see the world alongside you,” said Khoury–CAMD Assistant Professor Smit Desai. “When that shared perception works, it feels magical. But when it fails, it’s uniquely frustrating, because the AI is confidently wrong about something you can plainly see.”

Back to top

Workshop: Co-Data: Cultivating Effective Human-LLM Collaboration for Collaborative Data Processing

LLMs: tool or partner?

Amedeo Pachera, Andrea Mauri, Kashif Imteyaz, Jie Yang, Eric Umuhoza, Angela Bonifati, Michal Lahav, Nitesh Goyal

Monday, April 13, 2:15–3:45 p.m. and 4:30–6 p.m.

Between concepts with multiple names, differences in data literacy, and varying levels of trust, researchers can see the same data in very different ways. LLMs already help with data tasks like cleaning and annotation, but can they step up to the role of research collaborators and help to get everyone on the same page?

Starting from the principles of interdependence theory, this workshop explores the potential for LLMs to take a more active, collaborative role in data-heavy work. It uses both interactive discussions and case study activities to explore the role of trust, coordination, equity, and transparency in human–LLM collaborations. The half-day workshop aims to produce a conceptual framework, high-level guidance, and a research agenda.

“Data work is rarely an individual effort; it involves multiple stakeholders, messy tradeoffs, and real consequences,” said Khoury PhD student Kashif Imteyaz. “It requires human judgment that’s hard to automate. How do we bring human–LLM collaboration into this space responsibly?”

Back to top

Workshop: From Human-Human Collaboration to Human-Agent Collaboration: A Vision, Design Philosophy, and an Empirical Framework for Achieving Successful Partnerships Between Humans and LLM Agents

Your remote colleague, the LLM

Bingsheng Yao, Chaoran Chen, April Yi Wang, Tongshuang Wu, Dakuo Wang (+CAMD), Toby Jia-Jun Li

Monday, April 13, 2:15–3:45 p.m. and 4:30–6 p.m.

Decades of research have developed a rich, nuanced understanding of how people work together from opposite sides of the world. Can applying the same mutual awareness and shared accountability to collaborations with LLM-powered agents lead to richer, more robust interactions?

By applying research on remote human teamwork, this three-hour workshop — featuring speakers and a storyboard-based group design session — strives to produce a research agenda that charts the future of collaborative agents.

“Human–AI collaboration is undergoing a paradigm shift, moving beyond the traditional ‘system-as-a-tool’ model. I care deeply about designing interactions where AI behaves and functions as a genuine teammate,” said Khoury Associate Research Scientist Bingsheng Yao. “[It paves] the way for systems that act as trustworthy, adaptive partners rather than unpredictable black boxes.”

Back to top

Workshop: Human-Centered Explainable AI (HCXAI): Re-examining XAI in the Era of Agentic AI

AI booked a flight and emailed your boss. Can anyone explain why?

Upol Ehsan, Amal Alabdulkarim, Kenneth Holstein, Min Kyung Lee, Andreas Riener, Justin D. Weisz

Monday, April 13, 2:15–3:45 p.m. and 4:30–6 p.m.

AI systems no longer just answer questions. They make plans, use tools, and trigger real-world actions. These “agentic” systems unfold over time, with cascading consequences. That breaks the old playbook for explaining what AI does and why. Worse, recent research found that the “chain-of-thought” reasoning these systems display produces explanations without explainability: They don’t actually help people understand the system.

Now in its sixth year, HCXAI is one of CHI’s longest-running workshop series, with more than 450 participants from 21 countries since 2021. It has evolved with the field, from evaluation frameworks to XAI harms to confronting the black box in the LLM era. This year, it takes on its most urgent challenge: What does explainability mean when AI acts autonomously?

“These systems plan, act, and trigger consequences across the real world,” says Assistant Professor Upol Ehsan, Khoury College’s faculty lead for responsible AI governance and policy, the HCXAI field’s founder, and the workshop’s founding organizer. “The people most affected are rarely AI experts, and they deserve to understand why. Without that, we’ve built power without accountability.”

Back to top

Poster: “I Want My Data To Be Used Purposely”: Women’s Data Relationships with FemTech Apps

Trust and tracking in reproductive care apps

Ximena Lainfiesta (CAMD), Ghada Alsebayel, Chenyan Jia (+CAMD), Casper Harteveld (CAMD)

Monday, April 13, 2:15–3:45 p.m. and 4:30–6 p.m.

FemTech apps like Clue and Flo help millions of people track their periods and monitor fertility. But when the end of a pregnancy can spell criminal charges, the privacy of data about a missed period has major consequences. How do women understand and negotiate their relationships with FemTech?

Through nearly 200 survey responses, these researchers found that users evaluated reproductive care apps mainly through qualities like trust, reciprocity, dignity, and agency. Many participants considered their health data an extension of themselves, so privacy violations cut deep.

“Women evaluate FemTech apps through a relational, value-based framework,” said Khoury PhD student Ghada Alsebayel. “So when an app violates their expectations, our participants described it as a personal betrayal of the trust they had placed in an app that was supposed to care for them.”

Back to top

Poster: Disclose with Care: AI Scaffolds for Privacy in Chatbot Interviews

Oversharing with AI interviewers

Ziwen Li, Ziang Xiao, Tianshi Li

Monday, April 13, 2:15–3:45 p.m.

When interviewed by chatbots, people tend to overshare. How can interviewers protect people from themselves?

This research team tested two privacy interventions: letting people edit their own transcripts after interviews and guiding those edits with AI. They found that AI-assisted editing reduced the amount of personally identifiable information (PII) in the final transcript, while free editing actually increased the amount of PII participants disclosed. The responses were of the same quality before and after both privacy interventions, suggesting that editing with AI after an interview can reduce an interviewee’s risk without compromising an interviewer’s data.

Back to top

Poster: From Idea to Co-Creation: A Planner-Actor-Critic Framework for Agent Augmented 3D Modeling

AI teamwork makes the dream work

Jin Gao, Saichandu Juluri

Monday, April 13, 2:15–3:45 p.m.

Intelligent humans design better when iterating with a team. Is the same true for AI?

These researchers compared a single LLM-powered agent directly executing human-provided 3D modeling commands in Blender, with a human-supervised team of agents filling three roles: a planner coordinating steps, an actor executing them, and a critic providing iterative feedback. They found that designs from the planner–actor–critic team were more complete, accurate, controllable, and aesthetic than the single-agent designs, establishing the value of structured agent self-reflection for AI-powered 3D design.

“This topic was inspired by the gap between one-shot text-to-3D generation and the way people actually design: through iteration, planning, feedback, and revision,” said Khoury master’s student Saichandu Juluri. “The AI not only generates actions but also reflects on screenshots and scene state while remaining open to human intervention.”

Back to top

Poster: Unpacking Empathy Development in HCI Learners: Patterns from Diary Reflections and Peer Discussions

Putting the “human” in human–computer interaction

Jixiang Fan, Wei-Lu Wang, Jiahui Song, Jiacheng Zhao, Lei Xia, D. Scott McCrickard

Monday, April 13, 2:15–3:45 p.m. and 4:30–6 p.m.

Doing good human–computer interaction work requires understanding other humans. Are there patterns in the ways HCI students develop empathy and understanding throughout their studies?

These researchers followed 10 HCI students through diary-style tracking and group discussions and found that the four main dimensions of empathy — emotional interest, sensitivity, personal experience, and self-awareness — tended to shift around, rather than following a single development trajectory. Over time, though, the students usually realized their initial assumptions about their users were insufficient and shifted their thinking from a focus on function toward a more holistic understanding of their users’ experiences.

Back to top

Playing the Imitation Game: How Perceived Generated Content Shapes Player Experience

**WINNER: Best Paper Honorable Mention**

Is AI designing my video game?

Mahsa Bazzaz, Seth Cooper

Monday, April 13, 11:15–11:27 a.m.

Generative AI can create video game levels. How does that source change the way players experience them?

This research team presented players with human-designed and procedurally or AI-generated levels of Super Mario Bros. and Sokoban to explore how their perception of the creator influenced their experience of the game. While participants could not reliably tell the two apart, they consistently found the levels they believed to be AI-created more frustrating and less enjoyable.

“There’s a lot of debate around what should be disclosed, whether labels are effective, and whether they’re fair to developers. Our work shows that even without any disclosure, players’ spontaneous assumptions about AI involvement can have the same negative effect,” said Khoury PhD student Mahsa Bazzaz. “Nuanced disclosures explaining how and why AI was used may be the only way to prevent ambiguous signals lowering perceived quality across the board.”

Back to top

The Pit Beneath the Town Square: How Digital Solastalgia Affects Platform Migration and Community Structures of Transfeminine Users

Homesick for what used to be home

Erika Melder, Veronica E. Rubinsztain, Jiaqi (Ella) Li, Emma Vonbuelow, Michael Ann DeVito (+CAMD)

Monday, April 13, 11:15–11:27 a.m.

What is it like to see a ruin and long for the time when it was home? Twenty years ago, Glenn Albrecht coined the term “solastalgia” to describe that experience in the context of climate change; today, these researchers use “digital solastalgia” to describe the experience of transfeminine users whose platforms suddenly become hostile to who they are.

Through an ARC study, these researchers articulate the intense distress of experiencing a “pit” of hostility, moderator abandonment, and transphobic abuse on previously beloved and safe online gathering spaces like X. These traumatic experiences, and the emotional response of solastalgia they invoke, help explain the conflict that can plague communities attempting to rebuild elsewhere.

“Participants felt as though irreplaceable homes had been taken from them, overrun, and destroyed by a process beyond their control. This led us to incorporate scholarship about climate change refugees,” said Khoury PhD student Erika Melder. “We also suspect that the framework of digital solastalgia offers a useful way of understanding communities beyond transfems.”

Back to top

Tuesday, April 14

Examining Interpretation Strategies for Multiple Forecast Visualizations with Two and Four Forecasts

**WINNER: Best Paper Honorable Mention**

How sure are you about that forecast?

Lace M. Padilla (+CoS), Racquel Fygenson, Connor Wilson, Kristi Potter, Spencer C. Castro

Tuesday, April 14, 9:36–9:48 a.m.

Imagine you look up two different weather forecasts. Both aim to predict something uncertain, so both tell you exactly how uncertain they are, using visualization techniques like density plots or standard deviation intervals. You know what both forecasters think will happen and how confident they are in their predictions, so how do you decide whether to bring an umbrella?

These researchers explored how people interpret and predict in uncertain situations when presented with multiple forecast visualizations. Some participants attempted to average the forecasts, others picked one and ignored the rest, and still others looked for points of intersection or the edges of ranges to make their choices. Their findings point toward patterns that information designers can use to communicate uncertainty.

Back to top

Vibe Check: Understanding the Effects of LLM-Based Conversational Agents’ Personality and Alignment on User Perceptions in Goal-Oriented Tasks

AI–human personality clash

Hasibur Rahman (CAMD), Smit Desai (+CAMD)

Tuesday, April 14, 10:00–10:12 a.m.

Some personalities are easier to work with than others, and LLM agent personalities can be, well, personalized. What personality traits make LLM agents easy to work with?

These researchers had 150 users make travel plans with the help of LLM agents that expressed varying degrees of openness, conscientiousness, extraversion, agreeableness, and emotional stability. Users had the best experiences making plans with agents that expressed personality traits at moderate levels and that closely matched their own personalities.

“This work, led by Hasibur Rahman from CHAI Lab, shows that more personality in LLM-based agents is not always better. Users respond best to systems that feel balanced and authentic, not overly robotic or exaggerated,” said Khoury–CAMD Assistant Professor Smit Desai. “Getting this right is critical for trust, adoption, and how people experience AI in everyday tasks.”

Back to top

LLM-based Embodied Conversational Agent for Reducing Foreign Language Speaking Anxiety in Social VR

Practice makes peaceful

Mengxu Pan, Panxin Liu, Jinda Zhang, Raina Cao, Viduni Ariyawansa, Yaning Li (CoE), Bingsheng Yao, Dakuo Wang, Philippe Pasquier, Alexandra Kitson, Mirjana Prpa (former Khoury professor, now Hong Kong University of Science and Technology assistant professor)

Tuesday, April 14, 11:39–11:51 a.m.

It can be scary to speak in a language you’re learning. Can practicing conversations with AI agents in virtual reality help soothe speaking anxiety for English-language learners?

These researchers had 20 participants spend half an hour each day in a social VR environment, practicing conversations in English with an LLM-based embodied conversational agent. After three days, the participants reported improved confidence and showed small gains in their language skills.

“Anxiety can prevent learners from practicing and improving, even when they already have the language knowledge,” said Khoury master’s student Panxin (Claire) Liu. “This immersive setting allows learners to practice speaking in realistic contexts while receiving non-judgmental guidance and feedback from the agent.”

Back to top

Poster: “Who wants to be nagged by AI?”: Investigating the Effects of Agreeableness on Older Adults’ Perception of LLM-Based Voice Assistants’ Explanations

A kinder, gentler AI genie

Niharika Mathur, Hasibur Rahman (CAMD), Smit Desai (+CAMD)

Tuesday, April 14, 2:15–3:45 p.m. and 4:30–6 p.m.

AI-powered voice assistants can be customized with a wide range of personalities. Which ones do older adults like best?

In studying 70 older adults, this research team found that the agent’s agreeableness affected how participants perceived the assistant and whether they trusted it. While more polite assistants were seen as more trustworthy and likeable, those benefits faded during emergencies, when clarity became more important than agreeableness. The researchers encouraged future designers not to use a one-size-fits-all approach.

“For older adults, perceived personality, context, and individual differences shape how explanations are received. Designing for a single default conversational style risks excluding many users, especially in sensitive, long-term interactions,” said Khoury–CAMD Assistant Professor Smit Desai. “This work, led by Niharika Mathur and Hasibur Rahman from CHAI Lab, shows that how AI communicates matters as much as what it says.”

Back to top

Poster: Non-Linear Journeys with Voice Assistants: A Multi-Trajectory Analysis of Older Adults’ Long-Term Home Use

Grandma’s new friend, Alexa

Wen-Ning Chen, Smit Desai (+CAMD), Carrie O’Connell, Sarah Leiser, Kelly Quinn, Naoko Muramatsu, David Marquez, Catherine O’Brien, Jessie Chin

Tuesday, April 14, 2:15–3:45 p.m. and 4:30–6 p.m.

From reminders about meds to alerting after a fall, voice assistants have the potential to support independence as people age. But how do older adults actually use voice assistants?

By observing how 50 older adults interacted with an in-home voice assistant, this research team found that some used the assistant consistently (a lot or a little), others started strong and dropped off, and others grew more engaged over time. Which path users took, and how they conceptualized the device, depended on the conditions of their social and emotional lives.

“We often assume older adults will either adopt a new technology or reject it, but reality is much messier,” said Khoury–CAMD Assistant Professor Smit Desai. “Our research tells us that if we design only for the ‘ideal’ adopter, we miss the majority of older adults whose paths are non-linear and deeply shaped by the people around them.”

Back to top

Wednesday, April 15

Balancing Efficiency and Empathy: Healthcare Providers’ Perspectives on AI-Supported Workflows for Serious Illness Conversations in the Emergency Department

Why aren’t we talking about death?

Menglin Zhao (CAMD), Zhuorui Yong, Ruijia Guan, Kai-Wei Chang, Adrian Haimovich, Kei Ouchi, Timothy Bickmore, Zhan Zhang, Bingsheng Yao, Dakuo Wang (+CAMD), Smit Desai (+CAMD)

Wednesday, April 15, 9:12–9:24 a.m.

Although early conversations with patients about end-of-life preferences lead to better care, these serious illness conversations (SICs) remain rare in fast-paced emergency departments. What prevents SICs from happening, and are there ways AI could help?

Through conversations with 11 providers, these researchers identified a four-stage SIC workflow — identification, preparation, conduction, and documentation — with barriers like limited time and burdensome documentation at every stage. While providers were interested in tools to synthesize information or automate documentation, they emphasized the importance of human connection during vulnerable, personal conversations. The team offers design guidelines for AI systems to support providers having SICs without interrupting important bedside moments.

“In time-constrained emergency settings, these interactions shape critical treatment decisions, yet often don’t happen,” said Khoury–CAMD Assistant Professor Smit Desai. “This work — led by Menglin Zhao from CHAI Lab and HAI Lab — shows that AI can support high-stakes clinical conversations without compromising empathy.”

Back to top

Re-Examining the Examiners: Changes in Privacy and Security Perceptions of Exam Proctoring

Eyes on your own test

Adryana Hutchinson, Elaine Ly, Collins W Munyendo, Adam J. Aviv

Wednesday, April 15, 9:48–10 a.m.

When the pandemic first stopped teachers from overseeing tests in person, remote exam proctoring software surged — along with concerns about privacy and security trade-offs. Four years later, remote testing tools remain common; do user concerns remain common too?

This research team surveyed 127 people who have taken remotely proctored exams, and found attitudes had generally shifted in favor of technical monitoring methods. They believe users may have become resigned to reduced privacy as a necessary trade-off for the convenience of remote exams.

Back to top

Civic Data at the Seams

Civil society: numbers or negotiations?

Ashley Boone, Na’Taki Osborne Jelks, Quanda Spencer, Destinee Whitaker, Carl DiSalvo, Christopher A. Le Dantec (+CAMD)

Wednesday, April 15, 10:12–10:24 a.m.

Civil society belongs to everyone. That means civic data work usually involves lots of stakeholders, infrastructures, and institutions, which can lead to breakdowns in collaboration. Is the solution more, better data?

These researchers analyzed data production and use during a climate justice effort to map extreme heat islands. They found prioritizing data rarely led to hoped-for outcomes; instead, seams emerged where power dynamics were negotiated, and energy was more effectively spent managing alignment between groups as those seams shifted over time.

“The data breakdowns — or ‘seams’ — are not just places where data or technology fail but also places where organizations make choices to assert separation and protect their ability to work independently,” said Khoury–CAMD Professor and Director of Initiatives in Digital Civics Christopher A. Le Dantec. “This research examines under what circumstances data really do make a difference versus moments when other approaches to organizing, mobilizing, and building local capacity would be more effective.”

Back to top

My Body, Their Business: User Perspectives on Commercial Data Practices in FemTech Apps

My body, my data

Ghada Alsebayel, Ximena Lainfiesta (CAMD), Ayesha Fatima (+CAMD), Giovanni M. Troiano (CAMD), Chenyan Jia (+CAMD), Casper Harteveld (CAMD)

Wednesday, April 15, 11:15–11:27 a.m.

Millions track their periods and monitor fertility using FemTech apps, but their privacy, consent, and data use policies can be opaque. How do users understand and negotiate these data practices?

By presenting onboarding consent screens in plain language, these researchers analyzed almost 200 participants’ boundaries around data collection and commercial use. Users drew clear lines around acceptable data collection, strongly rejecting the collection of peripheral information like location and accepting commercial data uses only when functionally necessary. The researchers also found that communication matters deeply; users were more accepting of a data practice presented in a sleek app interface than in plain language.

“With this work, we hope to open a space for women to articulate their boundaries and discomfort around how their intimate data is being collected and used in commercial settings,” said Khoury PhD student Ghada Alsebayel, “and to spark broader conversations about how we get meaningful consent for personal data online.”

Back to top

Sometimes You Need Facts, and Sometimes a Hug: Understanding Older Adults’ Preferences for Explanations in LLM-Based Conversational AI Systems

AI assistant, watch your tone!

Niharika Mathur, Tamara Zubatiy, Agata Rozga, Jodi Forlizzi, Elizabeth D. Mynatt

Wednesday, April 15, 11:27–11:39 a.m.

To trust an AI system, it’s important to understand what it’s doing and why. What does that explainability mean to older adults using AI to maintain independence?

This research team presented older adults with AI-generated reminders and alerts, explained in a variety of ways. They found routine reminders — for example, to stretch or eat — were best received in the context of a user’s previous conversations about struggling to remember those actions. Emergency alerts — for example, about a stove left running — were best received when they were explained with environmental data, like the power status of the stove. Participants also tended to interpret the AI communications as two-way conversations and therefore benefitted from emotional attunement.

Back to top

From Future of Work to Future of Workers: Addressing Asymptomatic AI Harms to Foster Dignified Human-AI Interaction

**WINNER: Best Paper Honorable Mention**

AI made cancer doctors faster. So why is their intuition rusting?

Upol Ehsan, Samir Passi, Koustuv Saha, Todd McNutt, Mark O. Riedl, Sara Alcorn

Wednesday, April 15, 12:03–12:15 p.m.

Studies with real practitioners making life-or-death decisions are exceptionally rare in AI research. This award-winning paper is one of them.

For a full year, 42 cancer doctors across five hospitals used an AI system to plan radiation treatments. The results looked like a success story, with faster plans and better metrics. But beneath the surface, clinicians’ expert judgment was dulling. Skills built over decades were atrophying. Doctors described themselves as “rubber stampers” and “AI babysitters.”

So what went wrong? AI-driven deskilling is not new. What this research discovers is that the erosion happens asymptomatically, hiding behind good numbers and hardening into chronic harms. The authors call this the “AI-as-Amplifier Paradox”; AI simultaneously strengthens performance and erodes the very expertise that makes that performance possible.

So what can we do? Co-constructed with affected workers, the paper offers a cross-validated playbook for “sociotechnical immunity”: tools to detect hidden erosion early, contain it, and recover from it.

“The future of work is everywhere. Where’s the future of workers?” asks Assistant Professor Upol Ehsan, the study’s leader. “AI that makes you 50% more productive today but 30% less capable tomorrow has failed you. This work shifts the attention to who matters: the human worker.”

Back to top

Panel: The Global Impact of Generative AI on the HCI Landscape: International Perspectives on HCI Education, Industry Dynamics, and Funding Considerations

Does “human–computer interaction” mean “human–AI interaction” now?

Guo Freeman, Cliff Lampe, Elizabeth D. Mynatt, Heloisa Candello, Kori Inkpen, Nitesh Goyal, Karrie Karahalios, Xiaojuan Ma

Wednesday, April 15, 2:15–3:45 p.m.

In recent years, the CHI conference — and the human–computer interaction field at large — has experienced a boom in AI-related research and tech development, which is impacting HCI education, industry dynamics, and funding considerations around the world. This panel, featuring Khoury College Dean Elizabeth Mynatt, aims to foster reflection on these shifts. Alongside academic and industry leaders from four continents, Mynatt will explore key questions about the future of HCI as an international community.

“AI is transforming the field of HCI. It’s exposing a new horizon of challenges in designing effective, secure, and reliable interactive intelligence systems,” Mynatt said. “I’m excited about this panel because of the vast range of experience we bring together internationally as we thoughtfully consider the long game of HCI in an AI-enabled world.”

Back to top

Poster: Designing for Adolescent Voice in Health Decisions: Embodied Conversational Agents for HPV Vaccination

Medical advice from a magical squirrel

Ian Steenstra, Neha Patkar, Rebecca B. Perkins, Michael K. Paasche-Orlow, Timothy Bickmore

Wednesday, April 15, 2:15–3:45 p.m. and 4:30–6 p.m.

Modern vaccine interventions are designed almost exclusively for parents, but adolescents are usually the ones getting needles in their arms. Can a video game encourage adolescents’ informed consent to HPV vaccines?

Building on the team’s presentation at CHI 2024, these researchers offered two interactive interventions: an animated physician for parents, and for teens, a choice between an age-appropriate animated doctor and a narrative fantasy game where players answer woodland creatures’ HPV riddles. The clinic-based pilot study found both groups left the intervention better informed and with an increased intent to vaccinate.

“Building technologies that recognize children as active participants fundamentally changes the family’s health care experience,” said Khoury PhD student Ian Steenstra. “The app allows children to flag their own specific fears or questions about the vaccine. Even if the adolescent remains completely silent in the exam room, their voice still reaches their provider loud and clear.”

Back to top

Journal: Transphobia Is in the Eye of the Prompter: Trans-Centered Perspectives on Large Language Models

Is that chatbot a trans ally?

Morgan Klaus Scheuerman, Katy Weathington, Adrian Petterson, Dylan Thomas Doyle, Dipto Das, Michael Ann DeVito (+CAMD), Jed R. Brubaker

Wednesday, April 15, 2:30–2:45 p.m.

LLM-powered chatbots are asked about everything. How do they handle questions about gender identity?

These researchers used reflexive analysis to examine two popular LLMs’ responses to real-world questions about trans identity. Both returned responses that painted trans people in a positive light, even when the prompts were transphobic. Rather than appearing overtly, anti-trans sentiment showed up in subtle and insidious ways, such as framing the existence of trans identities as a debate.

“LLMs aren’t obviously transphobic; they’re subtly transphobic, in ways that are hard to spot without insider knowledge,” said Khoury–CAMD Assistant Professor Michael Ann DeVito, noting that even supportive cisgender people often missed the signals. “It shows how important it is to have trans folks on the team when you’re trying to build safety guardrails for these systems.”

Back to top

Thursday, April 16

Beyond Accuracy: Experts See AI Fact-Checks as Accurate but Less Useful

Where are you getting your facts?

Chenyan Jia (+CAMD), Apoorva Gondimalla, Angie Zhang, David Joseph Mullings, Alexander Boltz, Min Kyung Lee

Thursday, April 16, 9:12–9:24 a.m.

The internet is flooded with far more misinformation than any person could fact-check. But AI can process orders of magnitude more information than a person; can it support fact-checking online?

While LLM-generated fact-checking reports were technically as accurate as human-authored ones, this research team found that media experts consistently rated them as less useful, especially when the experts were not aware of the report’s source.

“As AI systems are increasingly integrated into real-world applications, evaluating them solely on objective metrics such as accuracy is insufficient,” said Khoury–CAMD Assistant Professor Chenyan Jia. “This work highlights the importance of understanding how domain experts perceive AI-generated outputs, emphasizing dimensions such as usefulness, reasoning, relevance, and trust. Such insights are essential for designing AI systems that are not only performant but also usable and reliable.”

Back to top

From Fragmentation to Integration: Exploring the Design Space of AI Agents for Human-as-the-Unit Privacy Management

Can AI protect our online privacy?

Eryue Xu, Tianshi Li

Thursday, April 16, 9:12–9:24 a.m.

Preserving digital privacy has become a near-full-time job. Rather than chasing privacy controls across platforms and posts alone, would users trust AI agents to understand their contextual privacy preferences and make information disclosure choices accordingly?

This research team collected more than 100 user responses to AI agents addressing nine online privacy management challenges. They found users preferred tools that had partial or full autonomy to fix privacy violations after users shared posts, and many trusted AI’s ability to spot and correct privacy violations more than they trusted their own manual efforts.

“Almost every reader will resonate with at least one privacy concern raised in this paper,” said Khoury Research Assistant Eryue Xu. “This research could help users better articulate where their privacy fatigue comes from and imagine how an AI agent might address some of their concerns while also introducing new risks. It could also help AI agent developers better understand users’ expectations for privacy management agents.”

Back to top

Learning Ecological Justice and Game Design by Creating Transformational Games

Building the game of life

Jailyn Zabala (CAMD), Dexter Delandro (CAMD), Clifford Lee (CAMD), Alexandra To (+CAMD)

Thursday, April 16, 9:24–9:36 a.m.

Adolescents are growing up in a world ravaged by climate change. How can we give them agency and support as they wrestle with complex questions about ecological justice?

These researchers held a three-session game design workshop with a group of American BIPOC low-income high school students, asking participants to use Twine to create a narrative game that would effect a transformation in its players. They found that the culturally relevant teaching method let students engage deeply and share personal stories in their games, which contributed to the students’ own growth as well.

Back to top

Dark Patterns Meet GUI Agents: LLM Agent Susceptibility to Manipulative Interfaces and the Role of Human Oversight

The machine fell for it, too

Jingyu Tang, Chaoran Chen, Jiawen Li, Zhiping Zhang, Bingcan Guo, Ibrahim Khalilov, Simret Araya Gebreegziabher, Bingsheng Yao, Dakuo Wang (+CAMD), Yanfang Ye, Tianshi Li, Ziang Xiao, Yaxing Yao, Toby Jia-Jun Li

Thursday, April 16, 9:36–9:48 a.m.

If you’ve hit the wrong “download” button when trying to install software, you’ve been the victim of a “dark pattern,” a deceptively designed interface that tricks users into doing something they don’t want to do. But are GUI agents — LLM-powered systems that interact with software through visual interfaces, much as human users do — vulnerable to the same design tricks?

In testing how humans, GUI agents, and human–agent teams respond to dark patterns, this research team found agents frequently miss deceptive design choices, and when they do spot them, they prioritize completing their task over defending themselves. They also discovered that when humans and agents are tricked, they’re tricked in different ways; where humans comply accidentally or out of habit, agents often suffer from procedural blind spots.

“As we enter the age of agentic AI, people are moving from doers to supervisors. Recognizing the limits of both agents and human oversight is key to responsible adoption and building trustworthy, reliable systems,” said Khoury PhD student Zhiping Zhang.

READ: Dark patterns have long manipulated human behavior online. Now AI agents are falling for them, too

Back to top

Having Confidence in My Confidence Intervals: How Data Users Engage with Privacy-Protected Wikipedia Data

Comparing perceptions of noisy data

Harold Triedman, Jayshree Sarathy, Priyanka Nanayakkara, Rachel Cummings, Gabriel Kaptchuk, Sean Kross, Elissa Redmiles

Thursday, April 16, 10–10:12 a.m.

Even when a dataset is supposedly anonymized, its release can leak information about individuals. Differential privacy (DP) mitigates risk by adding statistical “noise” to published data, hiding identities while preserving patterns. How do data users understand and respond to this “noisy data”?

This research team asked 15 participants to compute confidence intervals using two Wikipedia datasets — one protected with DP, the other with rounding. The team found participants could calculate simple uncertainty metrics but struggled to make sense of how uncertainty compounds or shrinks over multiple “noisy” datapoints. Participants felt that DP made the data easier to analyze but mistakenly believed ease of analysis implied less privacy, when DP protections are actually stronger.

“Without understanding the perspectives and needs of data users, organizations cannot orient their design, policy, and communications,” said Khoury Assistant Professor Jayshree Sarathy. “Our study surfaces surprising insights about how data users think about the relationship between privacy and usability of data.”

Back to top

VizCrit: Exploring Strategies for Displaying Computational Feedback in a Visual Design Tool

Intentionality in feedback design

Mingyi Li, Mengyi Chen, Sarah Luo, Yining Cao, Haijun Xia, Maitraye Das (+CAMD), Steven P. Dow, Jane L. E.

Thursday, April 16, 10–10:12 a.m.

To help beginning designers learn, instructors often offer both written feedback and actionable annotations. Can design tools offer comparable support?

With VizCrit, these researchers offer an algorithmically powered system with static, textbook-based design feedback and two categories of visual annotations: awareness-centered and solution-centered. They hypothesized awareness-centered annotations would maintain creative ownership, while solution-centered annotations would fix design issues. They found instead that solution-centered annotations left novice users with fewer design issues and a more positive sense of their own creativity, although experts disagreed that their work was actually more creative.

“There is a tension in AI-powered tools between telling people exactly what to do, and what builds their skills and supports creativity,” said Khoury PhD student Mingyi Li. “We found that more actionable feedback improved novices’ design outcomes but may have given them a false sense of creativity, while less actionable feedback encouraged self-reflection. This suggests that future systems should calibrate feedback based on timing and user goals.”

Back to top

Exploring the Future of AI in Clinical Collaboration: A Study on Tumor Board Case Preparation

**WINNER: Best Paper Honorable Mention**

AI, trust, and support in cancer care

Jiachen Li, Amanda K. Hall, Ruican Zhong, Selin Everett, Alyssa Unell, Hanwen Xu, Matthias Blondeel, Jonathan Carlson, Katie Claveau, Thulasee Jose, Tristan Naumann, David C. Rhew, Naiteek Sangani, Frank Tuan, Jim Weinstein, Varun Mishra (+Bouvé), Elizabeth D. Mynatt, Scott Saponas, Hao Qiu, Leonardo Schettini, Sam Preson, Aiden Gu, Naoto Usuyama, Zelalem Gero, Cliff Wong, Noel Christopher Codella, Hoifung Poon, Shrey Jain, Matthew Lungren, Eric Horvitz

Thursday, April 16, 11:15–11:27 a.m.

To treat complex cancers, hospitals convene a varied group of specialists called a multidisciplinary tumor board (MTB). They surface valuable insights, but can only meet when they have ample time to prepare. Can LLMs help MTB specialists get up to speed?

These researchers equipped 16 MTB oncologists with two AI systems: Microsoft’s Copilot and the specialized system Healthcare Agent Orchestrator (HAO). They found that physicians — who preferred HAO — were skeptical of AI-recommended therapies but overconfident about AI-generated summaries, and it was difficult to recalibrate trust.

“The tension between the urgent need for AI support and the very real risks it introduces is what makes this research so compelling,” said Khoury PhD student Jiachen Li. “By examining the errors AI makes when interpreting medical records, observing how oncologists copied and pasted those errors, and recognizing the misaligned mental models clinicians hold about AI’s capabilities, we can better understand how to optimize AI in medicine.”

Back to top

Co-Designing Collaborative Generative AI Tools for Freelancers

Freelance tools for freelance workers

Kashif Imteyaz, Michael Muller, Claudia Flores-Saviaga (former Khoury PhD student), Saiph Savage

Thursday, April 16, 11:27–11:39 a.m.

Freelance work is unlike traditional office work. It’s unsurprising, then, that freelancers equipped with generative AI productivity tools designed for traditional workplaces find that those tools hamper collaboration and make their employment more precarious.

This research team conducted co-design sessions with 27 freelancers, who wanted systems that supported creativity while prioritizing worker control. Together, they described “auxiliary AI” systems that would better preserve workers’ creative agency and identities — the characteristics that get freelancers hired.

“We wanted to explore how AI could support freelancers in collaborating, learning from one another, and building collective power,” said Khoury Assistant Professor Saiph Savage.

“We used text-to-image generative AI itself as a design probe, letting freelancers generate AI images to express their frustrations and visions,” added Khoury PhD student Kashif Imteyaz. “They stopped just critiquing AI and started articulating concrete alternatives for how it could work differently.”

Back to top

“How would I know what I would want from or with them?”: Supporting A-Spec Approaches to Developing Relationships Through Online Platforms

Thinking beyond dating apps

Kelly Wang, Ashlee Milton, Leah Namisa Rosenbloom, Erika Melder, Ada Lerner, Michael Ann DeVito (+CAMD)

Thursday, April 16, 12:15–12:27 p.m.

Bumble, Tinder, Hinge — there are lots of ways to meet people online. But the common assumption that “meeting new people” means “seeking a partner” makes it hard for people who experience little or no sexual or romantic attraction — the a-spec community — to find fitting relationships outside the norm.

Through an eight-week, 38-participant ARC study, this research team articulated a desire for “process-oriented” relationship-building technologies, which support users getting to know each other in the flexible, low-stakes way often associated with friendship. That contrasts with “goal-oriented” technologies like dating apps, which aim to produce a specific kind of relationship.

“You hear all the time that dating apps are terrible, but it’s hard for HCI to identify why,” said Khoury PhD student Kelly Wang. “A-spec users provide a unique perspective. There’s a desire to find in-person connection with specific kinds of people, but they’re also — in many cases — alienated by the entire concept of dating apps.”

Back to top

XSynth: GenAI-Empowered Shared Mental Model Building for Conceptual Design Collaboration in Extended Reality

A land of shared imagination

Yaning Li (CoE), Shumin Li, Ziyao He, Dakuo Wang (+CAMD)

Thursday, April 16, 12:15–12:27 p.m.

Designing something together starts with imagining the same thing. But when collaborators hold different mental pictures of a concept, misalignment can quietly derail the process. How can VR design tools help teams see where their thinking converges and where it doesn’t?

These researchers developed XSynth, a generative-AI-powered VR system that captures each designer’s reasoning as a knowledge graph, then merges the graphs to visualize where the team’s mental models align and diverge. In a study with 10 designer teams, XSynth boosted creativity, reduced workload, and improved design outcomes by making shared understanding visible in a shared virtual space.

“Good collaboration isn’t just about working in the same space — it’s about thinking on the same page,” said Khoury–CAMD Associate Professor Dakuo Wang. “XSynth uses VR to give design teams a way to literally see their shared mental model, so they can build on common ground and resolve differences early.”

Back to top

Poster: Agent A/B: Automated and Scalable A/B Testing on Live Websites with Interactive LLM

A/Briefer test for online user experience

Yuxuan Lu, Ting-Yao Hsu, Hansu Gu, Limeng Cui, Yaochen Xie, William Headden, Bingsheng Yao, Akash Veeragouni, Jiapeng Liu, Sreyashi Nag, Jessie Wang, Dakuo Wang (+CAMD)

Thursday, April 16, 2:15–3:45 p.m. and 4:30–6 p.m.

A/B testing is a tried-and-true method of designing a good user experience. Two versions of the same webpage are created; initial users are split between them, and the designer tracks their responses to decide which version becomes permanent. But it takes time and money to gather meaningful conclusions, and in the meantime, half the users have a subpar experience. Could LLM agents speed up the process?

This research team ran Amazon shopping interface A/B tests using human testers and LLM agents with various personas. While differences existed between the human and LLM agent interactions, both groups’ purchase behavior pointed to the same designs, making agent-based testing a good complement to — although not a replacement for — human A/B tests.

“Agent-based simulation could make experimentation and evaluation faster, cheaper, and possible before exposing real users to new designs,” said Khoury PhD student Yuxuan Lu.

Back to top

Poster: Finding MeBo: Delivering Reminiscence Therapy for Older Adults Using LLM-Based Voice User Interfaces

A shoebox of Polaroids for the digital age

Manasi Atul Vaidya (CAMD), Ryan Bruggeman (CAMD), Jessie Chin, Maitraye Das (+CAMD), Smit Desai (+CAMD)

Thursday, April 16, 2:15–3:45 p.m. and 4:30–6 p.m.

When older adults share the stories of their lives, everyone benefits. Can AI create support and opportunities for elders to reminisce?

These researchers created MemoryBox, an LLM-powered tool that uses photos, music, and conversation to help older adults recall and share stories. They found that participants didn’t see MemoryBox through a task lens, but rather valued the system as a relational companion that listens, supports storytelling, and helps preserve memories for future generations of family. Participants hoped that future versions would offer more socially oriented tools for crafting stories together, connecting them with physical artifacts, and sharing them.

“Memory is central to identity and connection, especially in later life,” said Khoury–CAMD Assistant Professor Smit Desai. “This work — led by Manasi Atul Vaidya and Ryan Bruggeman from CHAI Lab — shows how conversational AI can support deeply personal processes like reminiscing while guiding the design of systems that are respectful, safe, and human-centered.”

Back to top

Meet-up: Legitimizing, Developing, and Sustaining Feminist HCI in East Asia: Challenges and Opportunities

The East Asian feminist HCI community we want

Runhua Zhang, Ruyuan Wan, Jiaqi (Ella) Li, Daye Kang, Yigang Qin, Yijia Wang, Ziqi Pan, Tiffany Knearem, Huamin Qu, Xiaojuan Ma

Thursday, April 16, 4:30–6 p.m.

The unique societal, cultural, and political environments of East Asian contexts have contributed valuable, situated knowledge to feminist HCI. However, feminist HCI researchers face challenges, including a scarcity of dedicated funding, specific political risks, and mismatches with Western theories.

To address this, Khoury PhD student Jiaqi (Ella) Li joins nine of her peers to create a space at CHI to support one another and grow a lasting network of feminist researchers, designers, and activists. The group has two goals — to provide a legitimate channel to connect and build community, and to host an action-oriented conversation on legitimizing, developing, and sustaining feminist HCI in the East Asian context. They welcome everyone who identifies with or is interested in these contexts, and encourage interested researchers to explore the meet-up hosts’ website for resources.

Back to top

Friday, April 17

Through the Lens of Human-Human Collaboration: A Configurable Research Platform for Exploring Human-Agent Collaboration

A place to test how we work together

Bingsheng Yao, Jiaju Chen, Chaoran Chen, April Yi Wang, Toby Jia-Jun Li, Dakuo Wang

Friday, April 17, 9–9:12 a.m.

AI systems were designed as tools but are increasingly capable of collaboration. Do the established principles of computer-mediated collaboration hold up when the collaborators are the computers themselves?

These researchers introduce a modular, configurable HCI research platform and use it with human and LLM agent participants to conduct two classic distributed teamwork experiments: Shape Factory and Hidden Profile. They found the platform provided a systematic way to analyze how human–LLM agent collaborations worked as participants completed complex tasks together through different interfaces, yielding empirical insights to guide design.

“If we want to safely work together with AI systems that can think, behave, and communicate like humans, particularly in high-stakes areas like healthcare triage or crisis management, we need to know exactly how different interface designs affect trust and decision-making,” said Khoury Associate Research Scientist Bingsheng Yao. “This research platform gives scientists the tools to rigorously test those interactions, ensuring we build systems that don’t silently erode accountability.”

Back to top

“I can take what I want and adapt as needed”: BIPOC Identity Making and Resistance Through Internet Aesthetics on TikTok

Where BIPOC cultures and internet aesthetics meet

Natalie Chen, Gianna Williams, Alexandra To (+CAMD), Michael Ann DeVito (+CAMD)

Friday, April 17, 11:27–11:39 a.m.

Dark academia, vaporwave, cottagecore — the internet has a lot of looks. People take and remake internet aesthetics to build their identities all the time, but when those aesthetics are only uplifted on white bodies, their particular meaning to BIPOC users gets quashed.

Through semi-structured interviews, this research team articulated the ways BIPOC TikTok users understand and engage with internet aesthetics. Building on their award-winning CHI paper from last year, they explored how BIPOC users extract joy and meaning from internet aesthetics while strategically resisting the flattening and erasure of meaningful symbols.

“Aesthetics are culture, and how platform users treat BIPOC users attempting to employ these aesthetics tells us a lot about if and how platforms value their BIPOC users,” said Khoury–CAMD Assistant Professor Michael Ann DeVito. “We found the way BIPOC creators and users are treated when they try to discuss, employ, and embody key internet aesthetics on TikTok represents a form of microaggressions. It’s an active form of subtle harm towards BIPOC users when our key information environments work like this.”

Back to top

Exploring Collaboration Breakdowns Between Provider Teams and Patients in Post-Surgery Care

Surgery recovery is a team sport

Bingsheng Yao, Menglin Zhao (CAMD), Zhan Zhang, Pengqi Wang, Emma G. Chester, Changchang Yin, Tianshi Li, Varun Mishra (+Bouvé), Lace M. Padilla (+CoS), Odysseas P. Chatzipanagiotou, Timothy Pawlik, Ping Zhang, Weidan Cao, Dakuo Wang (+CAMD)

Friday, April 17, 11:27–11:39 a.m.

Healing from surgery isn’t over when a patient leaves the hospital. But getting home without all the right information is consequential and unfortunately common. Where exactly does collaboration between provider teams and patients break down?

These researchers interviewed 17 patients and providers involved in gastrointestinal surgeries and discovered that complex organizational structures and communication gaps were often responsible for home care plans falling through the cracks. The researchers also crafted design recommendations for technologies like voice assistants and wearables that could help close the gaps.

“A lot of existing research focuses on the struggles patients face after they are already home. We wanted to step back and look at the root cause: the discharge preparation stage,” said Khoury Associate Research Scientist Bingsheng Yao. “Our research highlights that discharging a patient isn’t just about handing them a piece of paper; it requires seamless teamwork.”

Back to top

Amplifying Rural Educators’ Perspectives: A Qualitative Study on the Impacts of Generative AI in Rural US High Schools

Small-town schools and AI tools

Shira Michel, Benjamin Taylor, Sabrina Parra Diaz, Joseph B. Wiggins, Ed Finn, Mahsan Nourani

Friday, April 17, 11:15–11:27 a.m.

Whether and how students should use AI in schools is hotly debated. How does AI show up in rural schools?

“Rural communities in particular are left out of these critical conversations, compounded by limited institutional ties and gaps in technical infrastructure,” said Khoury Research Assistant Professor Mahsan Nourani.

This research team studied how 31 rural high school educators across three US states use generative AI. While educators are already using AI tools to streamline teaching tasks, more meaningful integration is constrained by familiar rural realities — inconsistent internet access, limited AI literacy training, and skepticism towards adoption. Educators emphasized that these tools needed to be designed for rural contexts to keep from widening existing gaps.

“Responsible AI adoption and policy cannot be built solely on the voices and experiences of the most connected and visible communities,” said Khoury PhD student Shira Michel.

Back to top

Idea11y: Enhancing Accessibility in Collaborative Ideation for Blind or Low Vision Screen Reader Users

Non-visual access to the whiteboard

Mingyi Li, Huiru Yang, Nihar Sanda, Maitraye Das (+CAMD)

Friday, April 17, 12:27–12:39 p.m.

Digital whiteboards are great collaborative tools — unless you’re using a screen reader, in which case they can be frustrating word salads.

With support from the prestigious Google Research Scholar Award in Human–Computer Interaction, this research team developed Idea11y, a plug-in that gives users a hierarchical, editable text outline of the content on a digital whiteboard, along with audio cues and voice coding. Blind and low-vision users working both independently and with sighted collaborators found Idea11y helped them better understand how ideas on the whiteboard were clustered or related, and to coordinate collaborative actions more easily.

“This research highlights design opportunities for future ideation tools to support nonvisual creative thinking processes, and accessible ideation in ability-diverse teams,” said Khoury PhD student Mingyi Li.

Back to top
