Although AI usage is expanding for human convenience, ethical discussion of this development remains insufficient. This study examined whether AI can bear responsibility and how people perceive its responsibility. Specifically, by comparing AI service failures with human service failures, we investigated how responsibility is attributed to each agent (service provider, organization, or user) and tested the moderating role of moral foundations. In Study 1, which used a psychological treatment failure scenario, perceptions of responsibility for the service provider, organization, and user varied depending on whether the service provider was an AI or a human. Fairness and respect for norms moderated the relationship between service provider type and perceived user responsibility: when these values were rated low, users felt less responsible for AI failures than for human failures. Study 2, which used a government tax service failure scenario, found that the greatest responsibility was attributed to the government, followed by the service provider and the user. AI failures led to lower perceived provider responsibility and higher perceived government responsibility. Respect for norms also moderated these perceptions, with lower values further reducing perceived provider responsibility in the AI condition compared to the human condition. These findings contribute to the understanding of whether AI can be held morally responsible and how moral judgment differs depending on the nature of the agent. Limitations and implications of the study are also discussed.