Title: Word relation autoencoder for unseen hypernym extraction using word embeddings
Authors: Chen, H.-Y.; Lee, C.-S.; Liao, K.-T.; Lin, Shou-De
Document type: conference paper
Date issued: 2018 (record added 2021-05-05)
Scopus ID: 2-s2.0-85081737081
Scopus URL: https://www.scopus.com/inward/record.url?eid=2-s2.0-85081737081&partnerID=40&md5=6c04bd1eb2da5f4e3d8304d86f030ea1
Repository URL: https://scholars.lib.ntu.edu.tw/handle/123456789/559200

Abstract: Lexicon relation extraction given distributional representations of words is an important topic in NLP. We observe that state-of-the-art projection-based methods cannot be generalized to handle unseen hypernyms. We propose to analyze this from the perspective of pollution, that is, the predicted hypernyms are limited to those that appeared in the training set. We propose a word relation autoencoder (WRAE) model to address this challenge and construct a corresponding indicator to measure the pollution. Experiments on several hypernym-like lexicon datasets show that our model outperforms the competitors significantly. © 2018 Association for Computational Linguistics

Keywords: Extraction; Learning systems; Pollution; Auto encoders; Hypernyms; Relation extraction; State of the art; Training sets; Natural language processing systems
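
Note: the abstract describes the WRAE idea only at a high level. As a rough illustration, and not the paper's actual architecture, the sketch below shows a generic autoencoder over concatenated (hyponym, hypernym-candidate) embedding pairs, with reconstruction error used to score candidates that were never seen as hypernyms during training. The layer sizes, activation, loss, and scoring function here are all assumptions.

```python
# Illustrative sketch only: a generic autoencoder over word-embedding pairs.
# The real WRAE architecture, loss, and pollution indicator are not given in
# this record's abstract; every dimension and hyper-parameter is assumed.
import torch
import torch.nn as nn

EMB_DIM = 300      # assumed dimensionality of pretrained word embeddings
HIDDEN_DIM = 128   # assumed bottleneck size


class RelationAutoencoder(nn.Module):
    """Encodes a (hyponym, hypernym) embedding pair and reconstructs it."""

    def __init__(self, emb_dim: int = EMB_DIM, hidden_dim: int = HIDDEN_DIM):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(2 * emb_dim, hidden_dim), nn.Tanh())
        self.decoder = nn.Linear(hidden_dim, 2 * emb_dim)

    def forward(self, hypo: torch.Tensor, hyper: torch.Tensor) -> torch.Tensor:
        pair = torch.cat([hypo, hyper], dim=-1)   # (batch, 2 * emb_dim)
        return self.decoder(self.encoder(pair))   # reconstructed pair


def reconstruction_score(model: RelationAutoencoder,
                         hypo: torch.Tensor,
                         candidate: torch.Tensor) -> torch.Tensor:
    """Lower reconstruction error = more plausible hypernym candidate.

    Scoring arbitrary candidates this way does not restrict predictions to
    hypernyms observed during training (the 'pollution' issue the abstract
    points to), but it is only a stand-in for the paper's own indicator.
    """
    with torch.no_grad():
        recon = model(hypo, candidate)
        target = torch.cat([hypo, candidate], dim=-1)
        return torch.mean((recon - target) ** 2, dim=-1)


if __name__ == "__main__":
    torch.manual_seed(0)
    model = RelationAutoencoder()
    # Random vectors stand in for pretrained embeddings (e.g. GloVe/word2vec).
    hypo = torch.randn(4, EMB_DIM)
    cand = torch.randn(4, EMB_DIM)
    print(reconstruction_score(model, hypo, cand))
```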