E-ISSN : 2982-8007
This article is an interview with Yeo-Kyoung Chang, executive director of the Institute for Digital Rights, exploring a desirable legal and institutional approach to artificial intelligence as the Framework Act on Artificial Intelligence takes effect in January 2026. The legislative process for the Act has proceeded rapidly, led by the Ministry of Science and Technology and industry, without sufficient discussion with information human rights experts and civil society. The biggest problem with Korea's Framework Act on Artificial Intelligence is that it exempts users, such as hospitals and companies that directly deploy AI, from regulation.

At the heart of a human rights-based approach to AI is the concept of the "affected person." This approach recognizes as rights holders those who are actually affected by AI systems (e.g., patients diagnosed with cancer-screening tools or job applicants screened by AI recruitment systems), beyond AI developers and users. The human rights-based approach proposes three layers of rights: first, data rights, including the right to know whether one's data is used for AI training and the right to opt out; second, the right to explanations and documentation of algorithms, along with the right to regulatory investigation; and third, rights concerning long-term, cross-sectoral impacts.

For human rights impact assessments to be effective, the democratic participation of affected persons must be guaranteed, and the involvement of the National Human Rights Commission, rather than a single ministry driven by industry perspectives, is essential. Furthermore, to counter the power structures of big tech companies, cross-sectoral civil society solidarity and the establishment of an independent research environment are urgently needed.