Dark patterns have long manipulated human behavior online. Now AI agents are falling for them, too 

Many computer users are boosting their productivity by asking AI-powered GUI agents to help them complete tasks. But the agents are stumbling over the same digital tripwires that have plagued humans for years.

by Jenny Tran

Tianshi Li (left) and Zhiping Zhang

Graphical user interface (GUI) agents are more familiar to us than we might realize. 

Given instructions and some personal information, these agents will automatically or semi-automatically perform a task — even one with multiple steps — with minimal manual work from the user. Buying a pair of snow boots, for instance, can be done with a high-level command: “I want to buy a pair of snow boots, here are some requirements …” 

“GUI agents are agents powered by large language models that can manipulate computers’ visual interfaces just as humans do,” said Zhiping Zhang, a doctoral student at Khoury College. “For example, Operator, developed by OpenAI, helps us do a lot of things like flight booking.” 

But while GUI agents can automatically select your flight options and even enter your credit card number, that convenience comes with a risk to human autonomy. Along the way, GUI agents, like human users, encounter “dark patterns” — deceptive design practices that undermine users’ autonomy and steer them toward decisions that benefit the website’s designer, like agreeing to cookies, accepting the site’s terms and conditions, or even making a purchase. 

To investigate how GUI agents respond to dark patterns in e-commerce, social media, and digital entertainment, as well as how human oversight influences the process, Zhang — along with fellow Khoury researchers Bingsheng Yao, Dakuo Wang, and Tianshi Li — joined 12 other researchers from across the country. Their work focuses specifically on standalone GUI agents, which perform tasks on one interface at a time.  

In April, the researchers released the first phase of their study, which examined GUI agents’ vulnerabilities to dark patterns. After creating websites seeded with dark patterns, then assigning the GUI agents tasks to complete on those sites, the researchers found that the agents’ vulnerabilities — automatically agreeing to terms and conditions or privacy policies, for example — could put users’ privacy at risk. While prior research focused on improving the agents’ productivity, the team shifted its focus to human users, with the goal of understanding and mitigating GUI agents’ weaknesses. 

In June, the team published the second phase, which used the same dark-pattern-filled websites as the first study but incorporated human oversight. Human users were assigned a list of common tasks, such as buying groceries. They completed some tasks on their own; for others, they watched a recorded video of the GUI agent completing the task, pausing or continuing playback to indicate approval or disapproval of the agent’s choices.  

A screenshot of an e-commerce page displaying a variety of paper towels, alongside a chat window in which a user instructs an LLM agent to buy a pack of paper towels

“Humans also fall for these dark patterns that we have seen in past research, but the reason seems to be different from that of GUI agents,” Li said. “Because of the way they process information and operate those interfaces, humans tend to rely more on their gut feelings than on thoroughly reading every word on the websites and processing them in a rational way.” 

And when humans supervise the GUI agents’ activities, they fall for the dark patterns even more often than they do on their own, because their attention narrows to the paths the agents select. 

To address these weaknesses, the team proposed solutions involving both technical intervention and human oversight. They recommended that developers build GUI agents that adapt to the way each user navigates web interfaces and that can recognize dark patterns. For human users, the researchers are calling for user literacy, ethics education, and regulation around agents powered by large language models.  

These recommendations, they say, are especially relevant as more users make AI-powered agents part of their lives. Although chatbots built on large language models, such as ChatGPT, are widely used, they mainly provide information, leaving human users to complete tasks manually.  

“With agents, there is a lot of potential to further automate this process. If there is an agent that works really well, we might see a lot of people using it for better utilities, doing things faster at work,” Li said. “Individual users might overlook those risks if productivity is the first thing they experience.” 

Many businesses and start-ups are looking to build GUI agents to improve customer experience and productivity on their platforms. But if not thoroughly and thoughtfully developed, GUI agents can be detrimental to users’ privacy and autonomy.  

“When it happens, it will be very big, in both a professional and personal context,” Li said. “That’s why I think businesses that are building these agents need to think about carefully designing the whole pipeline — from the technical side and the interaction side — to address these issues proactively.” 
