https://scholars.lib.ntu.edu.tw/handle/123456789/632245
Title: Object Relation Attention for Image Paragraph Captioning
Authors: Yang, L.-C.; Yang, C.-Y.; Hsu, Yung-Jen
Issue Date: 2021
Journal Volume: 4A
Pages: 3136-3144
Source: 35th AAAI Conference on Artificial Intelligence, AAAI 2021
Abstract: Image paragraph captioning aims to automatically generate a paragraph from a given image. It extends image captioning by generating multiple sentences instead of a single one, and it is more challenging because paragraphs are longer, more informative, and more linguistically complex. Because a paragraph consists of several sentences, an effective image paragraph captioning method should generate consistent sentences rather than contradictory ones. How to achieve this goal remains an open question, and we propose a method that incorporates objects' spatial coherence into a language-generating model. For every two overlapping objects, the proposed method concatenates their raw visual features to create two directional pair features and learns weights that optimize those pair features as relation-aware object features for a language-generating model. Experimental results show that the proposed network extracts effective object features for image paragraph captioning and achieves promising performance against existing methods. Copyright © 2021, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
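The pair-feature idea in the abstract can be sketched roughly as follows. This is a minimal illustration only, assuming axis-aligned bounding boxes and a softmax attention standing in for the learned weighting; all function and parameter names (`relation_aware_features`, `w_pair`, `w_score`) are hypothetical and not taken from the authors' implementation.

```python
import numpy as np

def boxes_overlap(a, b):
    """True if two axis-aligned boxes (x1, y1, x2, y2) intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def relation_aware_features(feats, boxes, w_pair, w_score):
    """Fuse each object's feature with directional pair features.

    feats   : (n, d) raw visual features, one row per detected object
    boxes   : (n, 4) bounding boxes as (x1, y1, x2, y2)
    w_pair  : (2d, d) assumed learned projection for a concatenated pair
    w_score : (d,)    assumed learned attention-scoring vector
    """
    n, d = feats.shape
    out = feats.copy()
    for i in range(n):
        pair_feats, scores = [], []
        for j in range(n):
            if i == j or not boxes_overlap(boxes[i], boxes[j]):
                continue
            # directional pair feature: [f_i ; f_j] projected back to d dims
            # (the (j, i) direction is produced when the outer loop visits j)
            p = np.concatenate([feats[i], feats[j]]) @ w_pair
            pair_feats.append(p)
            scores.append(p @ w_score)
        if pair_feats:
            a = np.exp(scores - np.max(scores))
            a /= a.sum()  # softmax attention weights over overlapping partners
            # relation-aware feature: raw feature plus attended pair features
            out[i] = feats[i] + np.stack(pair_feats).T @ a
    return out
```

Objects with no overlapping partner keep their raw feature unchanged, while overlapping pairs contribute in both directions because each object attends over its own outgoing pair features.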
URI: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85123052339&partnerID=40&md5=1855c0aa848758c0ddc1b0b49293a507
URI: https://scholars.lib.ntu.edu.tw/handle/123456789/632245
SDG/Keyword: Artificial intelligence; Generating models; Image captioning; Learn+; Object-relations; Performance; Spatial coherence; Two-directional; Visual feature; Visual languages
Appears in Collections: Department of Computer Science and Information Engineering (資訊工程學系)