About Me
I am a Ph.D. candidate in Artificial Intelligence at the Theory of Computation (ToC) Lab, Yonsei University, advised by Prof. Yo-Sub Han.
My research aims to enhance the safety, transparency, and interpretability of large language models (LLMs) through detection and attribution techniques. Specifically, I investigate two complementary directions:
- Linguistic and stylistic feature–based detection, which distinguishes human- and LLM-generated text or code by analyzing subtle differences in writing patterns; and
- LLM watermarking, which embeds imperceptible signals into generated outputs to enable reliable provenance tracing.
I have developed multimodal and multilingual systems for detecting and watermarking AI-generated content, spanning natural language (English, Korean) and source code (Python, C, C++, and Java).
More broadly, I am deeply interested in AI safety, responsible AI, and interpretable model design for building trustworthy and accountable generative systems.
Research Interests
- Detection of LLM-generated text/code using linguistic/stylistic features
- LLM watermarking for attribution and provenance of generated text/code
- AI Safety and Responsible AI
- Interpretability
- Linguistics + AI
Recent News
- [Nov 2025] My first LLM watermarking paper (WaterMod) has been accepted to AAAI 2026 (Oral)!
- [Oct 2025] New preprint "STELA: A Linguistics-Aware LLM Watermarking via Syntactic Predictability" is now available on arXiv.
- [Sep 2025] Paper on detecting LLM-paraphrased code accepted to Engineering Applications of Artificial Intelligence.
- [Aug 2025] Two papers accepted to EMNLP 2025.
- [Aug 2025] Presented KatFishNet (LLM-generated Korean text detector) at ACL 2025 (Main) in Vienna, Austria.
- [Jul 2025] Paper accepted to Expert Systems with Applications.
- [Jun 2025] Paper on detecting LLM-generated Korean text accepted to ACL 2025 (Main).
- [Mar 2025] Paper accepted to Engineering Applications of Artificial Intelligence.