Expanding on the Impact of Humanness on One’s Willingness to Trust Automated Partners

Ryon Cumings

Advisor: William S Helton, PhD, Department of Psychology

Committee Members: Gerald Matthews, Patrick McKnight

David J. King Hall, #2053
April 24, 2026, 02:00 PM to 04:00 PM

Abstract:

Prior research into the factors that affect a user's trust in an automated system has isolated complicated, but generally positive, effects of transparency and anthropomorphism. Both factors were often, but not always, able to increase the trust that a user was willing to demonstrate. Study 1 attempted to expand on this effect by using Inner Speech, one means of providing transparency, to facilitate communication and compromise with a robotic partner, with the aim of increasing trust in that partner while participants determined whether simulated urban scenarios were threatening. This effect was not borne out: among other results, trust decreased when Inner Speech was provided (p < .05; ηp²s = .178 and .145). Study 2 therefore attempted to disentangle this finding by assessing whether participants were simply less willing to apply information received from a robotic partner than information they could acquire themselves. Participants assessed the likelihood that a specific individual was threatening using information they could see themselves or invisible information provided by a human or AI partner. Working with an AI partner reduced the use of all available information (F(1, 72) = 5.64, p < .05, ηp² = .075) but did not specifically reduce the use of information coming from the partner, as indicated by the lack of a significant cue-visibility by partner-type interaction. This added context but could not explain the results of Study 1. Study 3 therefore assessed whether there is simply a general bias against an artificial partner committing exactly the same mistake as a human. Participants determined whether a potential human or AI partner at a company was responsible for an issue that arose at the company. If so, they rated the error's severity and the consequences required, and in every case they reported how willing they would be to work with the partner. Preliminary results suggest that for most scenarios participants did not rate a mistake made by an AI as more severe (p < .05 for 2/10 partner-error scenarios); however, they rated the AI as requiring more severe punishment in most cases (p < .05 in 8/10 scenarios) and their willingness to work with the AI as uniformly lower than with a human partner (p < .05 for all scenarios), despite the similar error severity. This suggests that a specific bias against AI may be driving harsher reactions to mistakes made by an AI partner, which may help explain the surprising results of Study 1.