Disclosure Intention in Generative AI: Effects of Privacy Concern, Personalization Benefit, and Trust

Abstract

Background: Generative AI enhances the efficiency of personalized services, but it also confronts users with complex choices about what information to disclose during multi-turn interactions. Although prior research has examined privacy-related risk, personalization-related benefits, and trust, empirical evidence on their combined effects in generative AI contexts remains limited. Objective: Drawing on privacy calculus theory and incorporating a trust perspective, this study investigates how Privacy Concern, Perceived Personalization Benefit, and Trust affect Disclosure Intention in generative AI contexts and compares their relative explanatory power. Methods: Questionnaire data from 302 valid respondents were analyzed using reliability analysis, exploratory factor analysis, correlation analysis, and multiple linear regression. Results: Privacy Concern had a significant negative effect on Disclosure Intention, whereas Perceived Personalization Benefit and Trust had significant positive effects. Among the three predictors, Trust showed the strongest explanatory power, followed by Privacy Concern, while Perceived Personalization Benefit exhibited a comparatively weaker effect. Conclusion: Disclosure in generative AI contexts is not merely a simple cost-benefit calculation; rather, it reflects the combined influence of risk perception, expected benefits, and relational assurance. This study extends the scope of disclosure research to generative AI settings and offers practical implications for privacy governance, personalization design, and trust building.

Keywords
Generative AI; Personal information disclosure intention; Privacy concern; Perceived personalization benefit; Trust
Received
2025-10-13
Revised
2025-11-28
Accepted
2025-12-19
Published
2025-12-30
