ISSN : 1738-6764
This study presents an AI-augmented self-interview method that uses a large language model (LLM) as both interviewer and initial analytic assistant. The approach addresses long-standing challenges in autoethnographic self-study, particularly subjectivity and weak audit trails. The primary contribution is a standardized, reproducible workflow specifying interviewer prompts, turn-taking rules, audit-trail artifacts, and a human adjudication stage, a structure that restores organization and reflective distance to self-research. In a proof-of-concept case, the workflow generated stable, quote-anchored themes and an explicit codebook with traceable interpretive moves. We do not claim that this method is superior to human-led interviews; rather, we provide evidence of procedural objectivity, defined by transparency and traceability, along with reliability indicators such as short-interval test-retest stability and alignment between LLM-generated codes and human adjudication. We also propose a pre-registered design for a controlled comparison between human and LLM interviews and release prompts and templates for reuse. Overall, this work positions LLMs as methodological supports rather than replacements, clarifying what is innovative, what is currently achievable, and what requires further validation.
