Towards Adaptive Test Automation: JSON DSLs and LLM Agents for End-to-End Testing

INTERNATIONAL JOURNAL OF CONTENTS, (P)1738-6764; (E)2093-7504
2026, v.22 no.1, pp.77-95
Dong Kwan Kim

Abstract

End-to-end (E2E) testing of web user interfaces is a crucial but resource-intensive endeavor, often complicated by intricate user workflows, fragile scripts, and significant maintenance costs. Traditional UI testing frameworks, such as Playwright, offer precise control but demand substantial manual effort and are sensitive to changes in the user interface. To address these challenges, this paper introduces an approach that utilizes Large Language Models (LLMs) for the automated generation and execution of E2E tests. We present two complementary paradigms: (1) a JSON-based domain-specific language (DSL) that facilitates declarative test specification and deterministic execution through the Playwright Model Context Protocol (MCP), and (2) an agent-based testing framework in which an LLM dynamically plans and executes actions based on high-level objectives, progress state, and evolving UI snapshots. As a case study, we employ the Vendure e-commerce framework, which offers a representative testing environment with features such as product search, shopping cart management, payment workflows, and administrative tasks. The JSON-based DSL approach is evaluated for its effectiveness in enhancing test script readability, reusability, and maintainability. In parallel, the agent-based model demonstrates adaptability and self-healing capabilities. Experimental results indicate that LLM-driven test automation alleviates the burden of manual script creation, improves test coverage across user and administrative scenarios, and provides resilience against changes in application interfaces. By integrating structured JSON-based methods with adaptive agent-based reasoning, this work lays the groundwork for more robust, flexible, and scalable AI-driven test automation frameworks.
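To make the first paradigm concrete, a declarative test in a JSON-based DSL of this kind might look like the sketch below. The schema is a hypothetical illustration, not the paper's actual grammar: the action names (`navigate`, `fill`, `click`, `assertText`), the `baseUrl` field, and the CSS selectors are all assumptions chosen to suggest a Vendure storefront scenario.

```json
{
  "name": "search-and-add-to-cart",
  "baseUrl": "http://localhost:3000",
  "steps": [
    { "action": "navigate", "url": "/" },
    { "action": "fill", "selector": "input[name='search']", "value": "laptop" },
    { "action": "press", "selector": "input[name='search']", "key": "Enter" },
    { "action": "click", "selector": ".product-card:first-child" },
    { "action": "click", "selector": "button.add-to-cart" },
    { "action": "assertText", "selector": ".cart-count", "expected": "1" }
  ]
}
```

In such a design, a thin interpreter maps each step to the corresponding Playwright MCP tool call, yielding deterministic replay while keeping the specification readable, diff-friendly, and editable without touching framework code.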

Keywords
End-to-End Test, Automated Testing, Large Language Models, Model Context Protocol, Agents
