Solving Linguistic Olympiad Problems with Tree-of-Thought Prompting
Venue
ROCLING 2023 - Proceedings of the 35th Conference on Computational Linguistics and Speech Processing
Pages
262–269
ISBN
9789869576963
Date Issued
2023-01-01
Author(s)
Abstract
In this study, we examine the effectiveness of Tree-of-Thought Prompting as a mechanism for solving linguistic puzzles and strengthening the reasoning capabilities of large language models. Specifically, we evaluate the reasoning ability of the Generative Pre-trained Transformer (GPT) model, which has attracted significant attention from both researchers and practitioners. Using Tree-of-Thought Prompting, we assess its utility in improving the accuracy and response latency of the GPT model on Linguistic Olympiad tasks that demand strong reasoning skills. We also identify inherent limitations of this approach and suggest directions for future work to refine and optimize it.
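To illustrate the general shape of the technique the abstract names, the sketch below shows a minimal Tree-of-Thought search loop. This is an assumption-laden toy, not the paper's implementation: in the actual setting, `propose` and `evaluate` would be LLM calls that generate and score candidate reasoning steps, whereas here both are stubbed with a deterministic numeric puzzle (reach a target value from 1 using `+1` or `*2` steps) so the control flow is runnable on its own.

```python
# Illustrative Tree-of-Thought sketch (hypothetical; not the paper's code).
# An LLM would normally implement propose() (branch into candidate thoughts)
# and evaluate() (score partial solutions); both are stubbed here.

from dataclasses import dataclass, field

@dataclass
class Thought:
    value: int                       # current partial-solution state
    steps: list = field(default_factory=list)  # reasoning trace so far

def propose(t: Thought) -> list:
    """Branch: generate candidate next thoughts (stand-in for an LLM call)."""
    return [
        Thought(t.value + 1, t.steps + ["+1"]),
        Thought(t.value * 2, t.steps + ["*2"]),
    ]

def evaluate(t: Thought, target: int) -> int:
    """Score a partial solution; lower is better (stand-in for an LLM judge)."""
    return abs(target - t.value)

def tree_of_thought(target: int, beam_width: int = 3, max_depth: int = 10) -> Thought:
    frontier = [Thought(1)]
    for _ in range(max_depth):
        if any(t.value == target for t in frontier):
            break
        # Expand every frontier thought, then prune to the best `beam_width`.
        candidates = [c for t in frontier for c in propose(t)]
        candidates.sort(key=lambda t: evaluate(t, target))
        frontier = candidates[:beam_width]
    return min(frontier, key=lambda t: evaluate(t, target))

best = tree_of_thought(10)
print(best.value, best.steps)
```

The key design choice, which distinguishes Tree-of-Thought from chain-of-thought prompting, is that multiple partial reasoning paths are kept alive in a frontier and pruned by an evaluator, rather than committing to a single linear chain of steps.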
Subjects
Generative Pre-trained Transformer | Large Language Models | Linguistic Olympiad | Machine Reasoning | Tree-of-Thought Prompting
Type
conference paper