Authors: Lin, Zheng Lin; Yen, Chiao Han; Xu, Jia Cheng; Watty, Deborah; Hsieh, Shu-Kai
Date available: 2024-03-04
Date issued: 2023-01-01
ISBN: 9789869576963
URI: https://scholars.lib.ntu.edu.tw/handle/123456789/640225
Scopus: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85184841493&partnerID=40&md5=e99fca9e2b7db6014c4727812c86b601
Abstract: In this study, we examine the efficacy of the Tree-of-Thought Prompting technique as a mechanism for addressing linguistic challenges and augmenting the reasoning capabilities of large language models. Specifically, we scrutinize the reasoning ability of the Generative Pre-trained Transformer (GPT) model, which has garnered significant attention within the research and practitioner communities. We assess the utility of the Tree-of-Thought Prompting methodology in improving both the precision and the response latency of the GPT model on Linguistic Olympiad tasks that demand advanced reasoning competencies. We also delineate inherent limitations of this approach and suggest avenues for future research to refine and optimize it.
Keywords: Generative Pre-trained Transformer | Large Language Models | Linguistic Olympiad | Machine Reasoning | Tree-of-Thought Prompting
Title: Solving Linguistic Olympiad Problems with Tree-of-Thought Prompting
Type: conference paper
Scopus ID: 2-s2.0-85184841493
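
For readers unfamiliar with the technique named in the abstract, below is a minimal sketch of a breadth-first Tree-of-Thought prompting loop. It is not the authors' implementation: the model name, all prompt wording, and the helpers `ask`, `propose`, `score`, and `tree_of_thought` are illustrative assumptions, shown against the OpenAI Python SDK (v1+).

```python
import os
from openai import OpenAI

# Hypothetical sketch of breadth-first Tree-of-Thought prompting; assumes
# OPENAI_API_KEY is set in the environment. Not the paper's implementation.
client = OpenAI()
MODEL = "gpt-4"  # placeholder model name

def ask(prompt: str) -> str:
    """Single chat-completion call returning the text of the reply."""
    resp = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content.strip()

def propose(problem: str, partial: str, k: int = 3) -> list[str]:
    """Ask the model for k candidate next reasoning steps ("thoughts")."""
    out = ask(
        f"Problem:\n{problem}\n\nReasoning so far:\n{partial or '(none)'}\n\n"
        f"Propose {k} distinct next reasoning steps, one per line."
    )
    return [ln.strip("- ").strip() for ln in out.splitlines() if ln.strip()][:k]

def score(problem: str, partial: str) -> float:
    """Have the model rate a partial chain of reasoning from 0 to 10."""
    out = ask(
        f"Problem:\n{problem}\n\nReasoning so far:\n{partial}\n\n"
        "On a scale of 0 to 10, how promising is this line of reasoning? "
        "Reply with a single number."
    )
    try:
        return float(out.split()[0])
    except ValueError:
        return 0.0

def tree_of_thought(problem: str, depth: int = 3, breadth: int = 2) -> str:
    """Breadth-first search over chains of thoughts, keeping the best few."""
    frontier = [""]  # each entry is an accumulated chain of reasoning
    for _ in range(depth):
        candidates = [
            (p + "\n" + t).strip()
            for p in frontier
            for t in propose(problem, p)
        ]
        # Keep only the `breadth` highest-scoring partial chains.
        candidates.sort(key=lambda c: score(problem, c), reverse=True)
        frontier = candidates[:breadth]
    best = frontier[0]
    return ask(
        f"Problem:\n{problem}\n\nReasoning:\n{best}\n\nState the final answer."
    )
```

Note that each level of the search multiplies the number of model calls (proposal plus scoring for every candidate), which is one reason the abstract weighs the technique's precision gains against its effect on response latency.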