
Research Ethics of Artificial Intelligence in the Digital Transformation Era: Fairness, Accountability, and Human-Centered Design

INTERNATIONAL JOURNAL OF CONTENTS, (P)1738-6764; (E)2093-7504
2025, v.21 no.4, pp.10-19
Usmanov Doniyor (UNIDO)

KIM JIN-HEE

Abstract

The rapid expansion of digital transformation across industry, government, education, healthcare, and finance has positioned Artificial Intelligence (AI) as a central driving force of societal change. AI technologies are being integrated into diverse domains, restructuring industrial systems, transforming public services, and improving quality of life through personalization. They significantly improve efficiency and productivity while creating new markets and employment opportunities. In particular, the rise of generative AI marks a qualitative shift in the technological paradigm, extending its influence into high-level human activities such as information creation, creative work, and decision support. However, these technological advances and their societal diffusion simultaneously generate a range of ethical and social risks. Algorithmic bias, opacity, privacy violations, workforce displacement, and unclear attribution of responsibility undermine trust in and societal acceptance of AI. Generative AI introduces additional concerns, including the veracity of generated content, copyright violations, and the proliferation of deepfakes, while philosophical and legal debates about accountability for AI-generated outcomes remain unresolved. In response to these challenges, this study re-examines the core principles and responsibility structures required for AI research ethics in the digital transformation era. It investigates an integrated research ethics framework grounded in three pillars: fairness, accountability, and human-centered design (HCD). Reflecting the expanding role of generative AI and evolving Human–Computer Interaction (HCI) dynamics, the study proposes a practical, operational evaluation model that overcomes the limitations of principle-centric ethical discourse. Specifically, it introduces the Integrated FAH–AIL Evaluation Framework (IFAEF), which combines the FAH-75 index for ethical compliance with the AIL-5 index, which assesses AI intelligence and risk levels.
As AI evolves from a tool into a "partner-like entity" influencing societal decision-making and human activity, technical performance alone is insufficient. Ethical frameworks must prioritize human dignity, societal context, and value-based judgment. Accordingly, AI research ethics must function not as a passive regulatory mechanism but as a strategic foundation for the sustainable coexistence of technology and society. This requires a multidimensional collaborative structure involving government, industry, academia, and civil society. This paper aims to contribute to this goal by examining the ethical conditions necessary throughout the AI development and deployment lifecycle, and by presenting actionable directions and quantitative evaluation standards for researcher ethics suited to the digital transformation era. In doing so, it offers a concrete foundation for strengthening the accountability, fairness, and human-centeredness of AI technologies.

Keywords
Artificial Intelligence, Digital Transformation, Research Ethics, Generative AI, Accountability, Fairness, Human-Centered Design