Authors: Lee, Yang-Yin; Ke, Hao; Huang, Hen-Hsen; Chen, Hsin-Hsi
Title: Less is More: Filtering Abnormal Dimensions in GloVe
Year: 2016
Document type: Conference paper
DOI: 10.1145/2872518.2889381 (https://doi.org/10.1145/2872518.2889381)
Scopus EID: 2-s2.0-85070777597
Scopus URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85070777597&doi=10.1145%2f2872518.2889381&partnerID=40&md5=7ca9af444e67da95e7573c659556ae0a
Date deposited: 2020-05-04

Abstract: GloVe (global vectors for word representation) performs well on several word-analogy and semantic-relatedness tasks. However, we find that some dimensions of the trained word embeddings are abnormal. We verify this conjecture by removing the abnormal dimensions, identified with a Kolmogorov-Smirnov test, and evaluating on several benchmark datasets for semantic relatedness. The experimental results confirm our finding; interestingly, on some tasks, simply removing these abnormal dimensions outperforms the state-of-the-art model SensEmbed. This simple rule-of-thumb technique, which leads to better performance, is expected to be useful in practice. © 2016 owner/author(s).

Author keywords: GloVe; semantic relatedness; word embedding
Indexed keywords: Benchmark datasets; Embeddings; GloVe; Kolmogorov; Less is more; Semantic relatedness; State of the art; Word embedding; Word representations; Semantics
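The abstract describes filtering embedding dimensions flagged by a Kolmogorov-Smirnov test. A minimal sketch of that idea is below; it assumes an "abnormal" dimension is one whose value distribution deviates strongly from a normal distribution fitted to that dimension (the paper's exact criterion and threshold may differ, and `alpha` here is illustrative).

```python
# Sketch: drop embedding dimensions that fail a Kolmogorov-Smirnov
# normality check. NOT the paper's exact procedure; the KS criterion,
# the reference distribution, and alpha are illustrative assumptions.
import numpy as np
from scipy import stats

def filter_abnormal_dimensions(emb, alpha=0.05):
    """emb: (vocab_size, dim) matrix of word vectors.
    Returns (filtered matrix, list of kept dimension indices)."""
    keep = []
    for d in range(emb.shape[1]):
        col = emb[:, d]
        # KS test of the column against N(mean, std) fitted to it
        _, p = stats.kstest(col, 'norm', args=(col.mean(), col.std()))
        if p >= alpha:  # cannot reject normality -> keep this dimension
            keep.append(d)
    return emb[:, keep], keep

# Toy demo: 9 roughly normal dimensions plus one heavy-tailed outlier
rng = np.random.default_rng(0)
emb = rng.normal(size=(5000, 10))
emb[:, 3] = rng.standard_cauchy(5000)  # the "abnormal" dimension
filtered, kept = filter_abnormal_dimensions(emb)
```

In the toy demo the heavy-tailed Cauchy column is rejected by the KS test, so dimension 3 is removed and the filtered matrix keeps only the near-normal columns.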