ISSN: 1229-8778
Humanoid AI, which combines generative AI with a physical robotic form, is emerging not only as a technological tool that assists with human tasks but also as a new agent of interaction that facilitates psychological and social engagement. As these AI systems increasingly emulate human-like conversation, the scope of services they provide is also expanding. In line with this trend, consumers have begun to expect the nature and depth of conversation to vary depending on the type of humanoid AI. At the same time, concerns about personal information leakage during interactions with humanoid AI have intensified. This study investigates how the degree of verbal embodiment exhibited by humanoid AI during communication influences consumer evaluations. Furthermore, it explores whether this effect varies depending on consumers' perceived sensitivity to personal information. To test these relationships, a 2 (Type of Humanoid AI: Assistant-type vs. Companion-type) × 2 (Verbal Embodiment: Turn-taking vs. Grounding) × 2 (Privacy Concern: High vs. Low) experimental design was employed. The results show that consumers generally responded more positively under the grounding condition, but that this effect diminished when they interacted with assistant-type AI. In addition, consumers with high sensitivity to personal information evaluated grounding less favorably when it came from companion-type AI, which is intended for emotional exchange. Based on these findings, the study offers both theoretical and practical implications.
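The abstract does not report the statistical procedure used. As a minimal illustrative sketch only, the code below assumes simulated evaluation scores and a conventional three-way ANOVA for a 2 × 2 × 2 between-subjects design; all variable names, cell sizes, and effect magnitudes are hypothetical and are not taken from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(42)
n_per_cell = 30  # hypothetical number of participants per cell

rows = []
for ai in ("assistant", "companion"):           # type of humanoid AI
    for emb in ("turn_taking", "grounding"):    # verbal embodiment
        for priv in ("low", "high"):            # privacy concern
            # Hypothetical effect pattern loosely mirroring the reported findings:
            # grounding is rated higher overall, but the gain shrinks for
            # assistant-type AI and for high-privacy-concern consumers
            # interacting with companion-type AI.
            base = 4.0
            if emb == "grounding":
                base += 0.6
                if ai == "assistant":
                    base -= 0.4
                if ai == "companion" and priv == "high":
                    base -= 0.5
            for score in rng.normal(base, 1.0, n_per_cell):
                rows.append({"ai_type": ai, "embodiment": emb,
                             "privacy": priv, "evaluation": score})

df = pd.DataFrame(rows)

# Full-factorial model: main effects plus all two- and three-way interactions
model = smf.ols("evaluation ~ C(ai_type) * C(embodiment) * C(privacy)",
                data=df).fit()
print(anova_lm(model, typ=2))
```

In such a design, the reported pattern would correspond to a significant main effect of verbal embodiment qualified by interactions with AI type and privacy concern; the sketch only shows how those terms could be estimated, not the study's actual results.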