I2T: Image Parsing to Text Description. In this paper, we present an image parsing to text description (I2T) framework that generates text descriptions of image and video content based on image understanding. The proposed I2T framework follows three steps: 1) input images (or video frames) are decomposed into their constituent visual patterns by an image parsing engine, in a spirit similar to parsing sentences in natural language; 2) the image parsing results are converted into a semantic representation in the form of the Web Ontology Language (OWL), which enables seamless integration with general knowledge bases; and 3) a text generation engine converts the results of the previous steps into semantically meaningful, human-readable, and queryable text reports. The centerpiece of the I2T framework is an and-or graph (AoG) visual knowledge representation, which serves as prior knowledge for representing diverse visual patterns and provides top-down hypotheses during image parsing. The AoG embodies vocabularies of visual elements, including primitives, parts, objects, and scenes, as well as a stochastic image grammar that specifies syntactic (i.e., compositional) relations and semantic (e.g., categorical, spatial, temporal, and functional) relations between these visual elements. Therefore, the AoG is a unified model of both categorical and symbolic representations of visual knowledge. The proposed I2T framework has two objectives. First, we use a semiautomatic method to parse images from the Internet in order to build an AoG for visual knowledge representation. Our goal is to make the parsing process increasingly automatic using the learned AoG model. Second, we use automatic methods to parse images/videos in specific domains and generate text reports that are useful for real-world applications.
In the case studies at the end of this paper, we demonstrate two automatic I2T systems: a maritime and urban scene video surveillance system and a real-time automatic driving scene understanding system.
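The and-or graph described above can be sketched as a small data structure: AND nodes compose an entity from its constituent parts, while OR nodes select among alternative configurations; a parse graph fixes one child per OR node. The sketch below is illustrative only and is not the authors' implementation; the node vocabulary, the `choose` map standing in for the real inference engine, and the triple-based output (loosely analogous to the OWL conversion step) are all assumptions made for this example.

```python
# Minimal sketch of an and-or graph (AoG) and a parse-to-text step.
# Not the authors' implementation; names and structure are illustrative.
from dataclasses import dataclass, field


@dataclass
class Node:
    name: str
    kind: str                              # "AND", "OR", or "TERMINAL"
    children: list = field(default_factory=list)


def parse(node, choose):
    """Return the terminal leaves of one parse graph.

    `choose` maps an OR node's name to the index of the selected child,
    standing in for the bottom-up/top-down inference of a real engine.
    """
    if node.kind == "TERMINAL":
        return [node.name]
    if node.kind == "OR":                  # pick one alternative
        return parse(node.children[choose[node.name]], choose)
    leaves = []                            # AND: compose all children
    for child in node.children:
        leaves += parse(child, choose)
    return leaves


# Toy vocabulary: a "scene" is composed of a vehicle and a road,
# and the vehicle is either a car or a truck (an OR alternative).
car, truck, road = (Node(n, "TERMINAL") for n in ("car", "truck", "road"))
vehicle = Node("vehicle", "OR", [car, truck])
scene = Node("scene", "AND", [vehicle, road])

leaves = parse(scene, {"vehicle": 0})      # select "car"
triples = [("scene", "contains", leaf) for leaf in leaves]
sentence = "The scene contains a " + " and a ".join(leaves) + "."
```

Flattening the parse graph into subject-relation-object triples mirrors, very roughly, how a semantic representation can feed a template-based text generator.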
