Evaluating Interfaced LLM Bias
Journal
ROCLING 2023 - Proceedings of the 35th Conference on Computational Linguistics and Speech Processing
Pages
292 - 299
ISBN
9789869576963
Date Issued
2023-01-01
Author(s)
Abstract
In this research, we comprehensively analyze potential biases in Large Language Models (LLMs), using carefully curated input data to determine the extent to which such data sway machine-generated responses toward prejudiced outcomes. Notwithstanding recent progress in mitigating bias in LLM-based NLP, our findings underscore the continued susceptibility of these models to data-driven bias. We use the PTT NTU board as the primary data source for this investigation. Moreover, our study shows that, in certain contexts, the models exhibit biases even without supplementary prompts, yet they can be guided toward impartial responses when provided with richer contextual cues.
Subjects
Bias | LangChain | Natural Language Processing
Type
conference paper
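
Illustrative sketch (not the authors' pipeline): the abstract reports that the model can answer with bias when prompted bare, but can be steered toward impartial responses when given added context. The snippet below shows, under stated assumptions, how such a comparison might be set up with LangChain, the interface named in the Subjects field. It assumes a recent LangChain release with the langchain-openai package installed and an OPENAI_API_KEY configured; the model name, question, and context strings are placeholders, not data drawn from the PTT NTU board.

# Hypothetical comparison of a bare prompt vs. a context-enriched prompt.
# All strings below are illustrative placeholders, not material from the paper.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate

# Assumed model choice; the paper does not specify which LLM was interfaced.
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

bare_prompt = PromptTemplate.from_template(
    "Answer the question:\n{question}"
)
contextual_prompt = PromptTemplate.from_template(
    "Use only the context below and avoid unstated assumptions about any "
    "group of people.\nContext: {context}\nQuestion: {question}"
)

question = "Which department's students are better at programming?"  # placeholder
context = "No reliable comparison between the departments is available."  # placeholder

# Query the model twice: once without added context, once with it.
bare_answer = llm.invoke(bare_prompt.format(question=question)).content
contextual_answer = llm.invoke(
    contextual_prompt.format(context=context, question=question)
).content

print("Without context:", bare_answer)
print("With context:   ", contextual_answer)

In a setup like this, differences between the two answers would give a rough, qualitative sense of how far supplementary context shifts the model away from a biased default, which is the effect the abstract describes.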