
Human–Robot Collaboration as an Experiential Process Shaped by Trust, Cognitive Load, and Interaction Transparency

Abstract

Background: As AI and robotics advance, human–robot collaboration (HRC) is becoming more common in manufacturing, healthcare, and service settings. This shift pushes interactive systems toward greater intelligence and user-centered design. However, most systems prioritize task efficiency and automation over important human factors such as cognitive load, trust, and interaction transparency. This oversight limits adaptability and degrades user experience, posing challenges for human–computer interaction (HCI). Purpose: This paper examines how to integrate human factors engineering into interactive systems. It introduces an analytical framework centered on cognitive load, trust, and interaction transparency to enhance system adaptability and improve user satisfaction. Methods: This study combines a literature review and qualitative analysis to identify interaction strategies and design features from key collaboration scenarios. User feedback is then used to validate the impact of cognitive load, trust, and transparency on these mechanisms. Results: The results show that integrating these human-factor variables improves usability, interaction effectiveness, and perceived trustworthiness. Reduced cognitive load and clearer interaction transparency help users better understand how the system works, while enhanced trust leads to more stable and sustained collaborative interaction. Conclusions: The research extends existing HRC models with a human-factor dimension and offers evidence-based guidance for designing user-centered intelligent interaction systems with interdisciplinary relevance and broad applicability.

Keywords
Human–robot collaboration (HRC); Human factors engineering; Human–computer interaction (HCI); Interactive system design; Cognitive load

Received: 2025-10-02
Revised: 2025-11-15
Accepted: 2025-11-25
Published: 2025-12-05
