https://scholars.lib.ntu.edu.tw/handle/123456789/632245
Title: Object Relation Attention for Image Paragraph Captioning
Authors: Yang L.-C.; Yang C.-Y.; Yung-Jen Hsu
Date of Publication: 2021
Volume: 4A
Pages: 3136-3144
Source: 35th AAAI Conference on Artificial Intelligence, AAAI 2021
Abstract: Image paragraph captioning aims to automatically generate a paragraph from a given image. It extends image captioning by generating multiple sentences instead of a single one, and it is more challenging because paragraphs are longer, more informative, and more linguistically complex. Because a paragraph consists of several sentences, an effective image paragraph captioning method should generate consistent sentences rather than contradictory ones. How to achieve this remains an open question; we address it by incorporating objects' spatial coherence into a language-generating model. For every two overlapping objects, the proposed method concatenates their raw visual features to create two directional pair features and learns weights that optimize those pair features as relation-aware object features for a language-generating model. Experimental results show that the proposed network extracts effective object features for image paragraph captioning and achieves promising performance against existing methods. Copyright © 2021, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
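The abstract's core idea (directional pair features for overlapping objects, attention-weighted into relation-aware object features) can be sketched as follows. This is a minimal illustration, not the paper's architecture: the overlap test, the dot-product scoring weights `w`, and the residual-style aggregation are all simplifying assumptions introduced here.

```python
import numpy as np

def boxes_overlap(box_a, box_b):
    # Boxes as (x1, y1, x2, y2); True if the two boxes intersect.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    return ix2 > ix1 and iy2 > iy1

def relation_aware_features(feats, boxes, w):
    """For every overlapping pair (i, j), build the directional pair
    feature [f_i; f_j], score it with learned weights w, and fold the
    softmax-weighted partner features back into object i.

    feats: (n, d) raw visual features; boxes: n bounding boxes;
    w: (2d,) scoring weights (a stand-in for the learned module).
    """
    n, d = feats.shape
    out = feats.copy()
    for i in range(n):
        pair_feats, scores = [], []
        for j in range(n):
            if i == j or not boxes_overlap(boxes[i], boxes[j]):
                continue
            p = np.concatenate([feats[i], feats[j]])  # directional: i -> j
            pair_feats.append(p)
            scores.append(p @ w)
        if pair_feats:
            s = np.asarray(scores)
            a = np.exp(s - s.max())
            a /= a.sum()  # softmax attention over overlapping partners
            ctx = sum(ai * p[d:] for ai, p in zip(a, pair_feats))
            out[i] = feats[i] + ctx  # relation-aware object feature
    return out
```

Objects with no overlapping partner keep their raw features; in the paper these relation-aware features would then feed the language-generating model.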
URI: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85123052339&partnerID=40&md5=1855c0aa848758c0ddc1b0b49293a507
SDG/Keywords: Artificial intelligence; Generating models; Image captioning; Learn+; Object-relations; Performance; Spatial coherence; Two-directional; Visual feature; Visual languages
Appears in Collections: Department of Computer Science and Information Engineering
Items in this IR system, unless otherwise indicated in their copyright terms, are protected by copyright with all rights reserved.