Public acceptance of various ways to use and treat AI: A predictive moral framework

Eriksson, Kimmo, Karlsson, Simon, Vartanova, Irina, Strimling, Pontus | 2026

Computers in Human Behavior: Artificial Humans

Abstract

As artificial intelligence rapidly transforms society, understanding the psychological principles that drive public acceptance or resistance is a critical challenge. Our study (U.S. sample, N = 589) of 100 diverse AI applications, both personal and organizational, demonstrates that these moral judgments are highly predictable. We show that public perceptions of five relatively independent core qualities—risk, benefit, dishonesty, unnaturalness, and accountability—predict 91.4% of the variance in an application's acceptability. Notably, all five qualities contributed to predictive accuracy. Our framework also explains a profound contradiction in public attitudes toward AI treatment: people strongly reject deceiving or coercing AI systems—as if granting them rights—yet readily accept their termination. We show this is not an inconsistent attribution of AI's status, but a consistent moral evaluation of human actions: deception and coercion are rejected as intrinsically flawed behaviors, while termination is accepted as a legitimate exercise of control. Furthermore, we identify a key psychological mechanism for attitude change: personal experience with an AI application predicts higher acceptance by improving perceptions of its moral qualities. In sum, our research reveals the structured moral psychology that governs AI acceptance, providing a powerful framework for understanding human-AI interaction and guiding responsible innovation.