Title: Polly want a cracker: Analyzing performance of parroting on paraphrase generation datasets
Authors: Mao, H.-R.; Lee, Hung-Yi
Date issued: 2020
Date available: 2021-05-05
Type: conference paper
Scopus ID: 2-s2.0-85084300222
Scopus URL: https://www.scopus.com/inward/record.url?eid=2-s2.0-85084300222&partnerID=40&md5=fa9c556c2082acb51b13ef2c40a7d145
Handle: https://scholars.lib.ntu.edu.tw/handle/123456789/558969
Keywords: Standard metrics; State of the art; Natural language processing systems

Abstract: Paraphrase generation is an interesting and challenging NLP task which has numerous practical applications. In this paper, we analyze datasets commonly used for paraphrase generation research, and show that simply parroting input sentences surpasses state-of-the-art models in the literature when evaluated on standard metrics. Our findings illustrate that a model could be seemingly adept at generating paraphrases, despite only making trivial changes to the input sentence or even none at all. © 2019 Association for Computational Linguistics
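The abstract's central claim is that a copy baseline can score well on overlap-based metrics. A minimal sketch of that idea, not the authors' code: it scores a "parroting" system that returns each input sentence verbatim, using corpus BLEU against reference paraphrases. The example sentence pairs are hypothetical placeholders, not drawn from any dataset studied in the paper.

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Hypothetical (input sentence, reference paraphrase) pairs.
pairs = [
    ("how do i learn python quickly", "what is the fastest way to learn python"),
    ("why is the sky blue", "what makes the sky appear blue"),
]

references = [[ref.split()] for _, ref in pairs]  # one reference per input
parroted = [inp.split() for inp, _ in pairs]      # "model" output = copy of the input

# Inputs and their paraphrases share many n-grams, so the copy baseline
# can score competitively on BLEU-style metrics without paraphrasing at all.
smooth = SmoothingFunction().method1
print(f"Parroting BLEU: {corpus_bleu(references, parroted, smoothing_function=smooth):.3f}")
```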