DSpace Collection: https://scholars.lib.ntu.edu.tw/handle/123456789/222
Last updated: 2024-03-28T23:41:31Z

Title: Solving Linguistic Olympiad Problems with Tree-of-Thought Prompting
URL: https://scholars.lib.ntu.edu.tw/handle/123456789/640225
Date: 2023-01-01
Authors: Lin, Zheng Lin; Yen, Chiao Han; Xu, Jia Cheng; Watty, Deborah; SHU-KAI HSIEH
Abstract: In this study, we examine the efficacy of Tree-of-Thought prompting as a mechanism for addressing linguistic challenges and augmenting the reasoning capabilities of large language models. Specifically, we scrutinize the reasoning ability of the Generative Pre-trained Transformer (GPT) model, which has garnered significant attention among researchers and practitioners. Using the Tree-of-Thought prompting methodology, we assess its utility in improving both the precision and response latency of the GPT model, especially on Linguistic Olympiad tasks that demand elevated reasoning competence. We also delineate inherent limitations of this approach and suggest avenues for future research to refine and optimize it.

Title: Evaluating Interfaced LLM Bias
URL: https://scholars.lib.ntu.edu.tw/handle/123456789/639927
Date: 2023-01-01
Authors: Yeh, Kai Ching; Chi, Jou An; Lian, Da Chen; SHU-KAI HSIEH
Abstract: In this research, we comprehensively analyze the potential biases inherent in large language models (LLMs), using meticulously curated input data to ascertain the extent to which such data sway machine-generated responses toward prejudiced outcomes. Notwithstanding recent strides in mitigating bias in LLM-based NLP, our findings underscore the continued susceptibility of these models to data-driven bias. We adopted the PTT NTU board as the primary data source for this investigation. Moreover, our study shows that, in certain contexts, machines may manifest biases without supplementary prompts, yet they can be guided toward impartial responses when provided with richer contextual nuance.

Title: Age-related differences in understanding pronominal reference in sentence comprehension: An electrophysiological investigation
URL: https://scholars.lib.ntu.edu.tw/handle/123456789/639769
Date: 2023-08-31
Authors: CHIA-LIN LEE; Lai, Chia-Ho
Abstract: This study investigated how age affects the ability to comprehend sentence meaning, specifically how individuals resolve pronouns to their corresponding nouns. The study included 34 young participants (20-29 years old) and 34 older participants (60-81 years old). Participants were presented with sentences containing two characters and a third-person singular pronoun. The stereotypical genders associated with character names were manipulated such that the pronoun had either one, two, or no possible antecedents, rendering it referentially unambiguous, ambiguous, or mismatched, respectively. Consistent with prior findings of preserved syntactic processing in advanced age, event-related potential data time-locked to the critical pronouns showed a P600 effect for mismatched pronouns regardless of age. These results indicate that older adults, like their younger counterparts, have a strong preference for readily available antecedents. When the pronoun was ambiguous, younger adults showed a typical Nref effect, a sustained anterior negativity associated with elaborative inferencing to search for the referent. Older adults did not exhibit this effect, suggesting a reduction in elaborative processes for establishing coherence. Nevertheless, the Nref response to ambiguous pronouns was observed in a subset of older adults, who also showed an Nref instead of a P600 response to mismatched pronouns. Overall, individuals who showed the Nref response to ambiguous pronouns had a higher level of print exposure, suggesting that life-long reading experience may help counteract age-related decline. Together, these findings help characterize the differential effects of aging on pronominal understanding and provide initial electrophysiological evidence of the protective benefit of print exposure on language processing in the aging population.
(PsycInfo Database Record (c) 2023 APA, all rights reserved.)

Title: Prompt-Based Translation of Chinese into Taiwanese Mandarin Braille
URL: https://scholars.lib.ntu.edu.tw/handle/123456789/639055
Date: 2023-01-01
Authors: Watty, Deborah; Kitsunai, Micah; SHU-KAI HSIEH
Abstract: In automated Braille translation, accommodating linguistic nuances and the Braille rules peculiar to each language poses considerable challenges. Mandarin Chinese stands out in this respect because the appropriate pronunciation of a character must be ascertained from context. Although rule-based algorithms have historically dominated this space, recent empirical evidence highlights the efficacy of statistical approaches and the emerging exploration of Large Language Model (LLM)-based techniques. This paper explores the potential advantages of a prompt-based strategy for automated translation from Mandarin Chinese to Taiwanese Mandarin Braille. As our methodology, we devised a script that ingests a Chinese sentence and generates a prompt comprising the Zhuyin of unequivocal characters and dictionary definitions for those with polysemous readings. Using a set of 103 test sentences, we assessed the precision with which GPT-3.5, GPT-4, and Liblouis (a widely recognized open-source rule-based Braille translator) assigned readings to polyphonic characters. Our findings revealed that, notwithstanding certain inconsistencies in GPT-3.5 outputs, the extended GPT-4 model exhibited superior performance compared to Liblouis.
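The prompt-generation step described in this last abstract can be sketched as follows. This is a minimal illustrative sketch only, not the authors' actual script: the lookup tables, the sample readings, and the function name are hypothetical stand-ins for whatever Zhuyin lexicon and dictionary the paper's pipeline uses.

```python
# Sketch: build a translation prompt that supplies the Zhuyin of
# unambiguous characters and dictionary-style sense glosses for
# polyphonic ones, leaving the reading decision to the model.

# Illustrative stand-in for a Zhuyin lexicon of single-reading characters.
UNAMBIGUOUS = {
    "我": "ㄨㄛˇ",
    "銀": "ㄧㄣˊ",
}

# Illustrative stand-in for a dictionary of polyphonic characters:
# each entry lists reading-plus-gloss pairs instead of one fixed reading.
POLYPHONIC = {
    "行": ["ㄒㄧㄥˊ: to walk; acceptable", "ㄏㄤˊ: row; profession"],
}

def build_prompt(sentence: str) -> str:
    """Assemble a prompt annotating each known character of `sentence`."""
    lines = [f"Sentence: {sentence}", "Character notes:"]
    for ch in sentence:
        if ch in UNAMBIGUOUS:
            # Single reading: state it outright.
            lines.append(f"  {ch} -> {UNAMBIGUOUS[ch]}")
        elif ch in POLYPHONIC:
            # Multiple readings: list the senses and let the model choose.
            lines.append(f"  {ch} (polyphonic):")
            lines.extend(f"    - {sense}" for sense in POLYPHONIC[ch])
    lines.append("Task: give the Zhuyin reading of every character, then "
                 "transcribe the sentence into Taiwanese Mandarin Braille.")
    return "\n".join(lines)

print(build_prompt("銀行"))
```

The resulting prompt would then be sent to a model such as GPT-4; the paper's evaluation compares the readings such a model assigns to polyphonic characters against those produced by Liblouis.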