Title: A Keyword-Guided Training Approach to Large Language Models for Judicial Document Generation
Authors: Peng, Yi-Ting; Lei, Chin-Laung
Type: journal article
Date issued: 2025-12-23
Date available: 2026-02-23
ISSN: 1526-1492
DOI: 10.32604/cmes.2025.073258
Scopus ID: 2-s2.0-105025945066
Scopus record: https://www.scopus.com/record/display.uri?eid=2-s2.0-105025945066&origin=resultslist
Handle: https://scholars.lib.ntu.edu.tw/handle/123456789/735951
Keywords: generative AI; large language models; Legal AI; legal document generation; natural language processing

Abstract: The rapid advancement of Large Language Models (LLMs) has enabled their application in diverse professional domains, including law. However, research on automatic judicial document generation remains limited, particularly for Taiwanese courts. This study proposes a keyword-guided training framework that enhances LLMs' ability to generate structured and semantically coherent judicial decisions in Chinese. The proposed method first employs LLMs to extract representative legal keywords from court judgments. It then integrates these keywords into Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback using Proximal Policy Optimization (RLHF-PPO). Experimental evaluations using models such as Chinese Alpaca 7B and TAIDE-LX-7B demonstrate that keyword-guided training significantly improves generation quality, achieving ROUGE-1, ROUGE-2, and ROUGE-L score gains of up to 17%, 16%, and 20%, respectively. The results confirm that the proposed framework effectively aligns generated judgments with human-written legal logic and structural conventions. This research advances domain-adaptive LLM fine-tuning strategies, establishes a technical foundation for AI-assisted judicial document generation in the Taiwanese legal context, and provides empirical evidence that such strategies can significantly improve performance in complex, structured legal text generation.