Title: Evaluating Interfaced LLM Bias
Authors: Yeh, Kai Ching; Chi, Jou An; Lian, Da Chen; Hsieh, Shu-Kai
Type: conference paper
Date issued: 2023-01-01
Date available: 2024-02-26
ISBN: 9789869576963
Scopus ID: 2-s2.0-85183830386
Scopus URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85183830386&partnerID=40&md5=319e25e46c5b92a49747cef47e071858
Handle: https://scholars.lib.ntu.edu.tw/handle/123456789/639927
Keywords: Bias | LangChain | Natural Language Processing

Abstract: In this research, we comprehensively analyze the potential biases inherent in Large Language Models (LLMs), using carefully curated input data to determine the extent to which such data sway machine-generated responses toward prejudiced outcomes. Notwithstanding recent strides in mitigating bias in LLM-based NLP, our findings underscore the continued susceptibility of these models to data-driven bias. We use the PTT NTU board as our primary data source for this investigation. Moreover, our study shows that, in certain contexts, models may manifest biases without supplementary prompts; however, they can be guided toward impartial responses when provided with richer contextual cues.