Im2Text

Im2Text: Describing Images Using 1 Million Captioned Photographs. We develop and demonstrate automatic image description methods using a large captioned photo collection. One contribution is our technique for the automatic collection of this new dataset: performing a huge number of Flickr queries and then filtering the noisy results down to 1 million images with associated visually relevant captions. Such a collection allows us to approach the extremely challenging problem of description generation using relatively simple non-parametric methods and produces surprisingly effective results. We also develop methods incorporating many state-of-the-art, but fairly noisy, estimates of image content to produce even more pleasing results. Finally, we introduce a new objective performance measure for image captioning.
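
The "relatively simple non-parametric methods" the abstract mentions are retrieval-based: a query image is matched against the captioned collection and captions are transferred from its nearest neighbors. The following is a minimal sketch of that idea, assuming precomputed global descriptors (the paper matches on global scene features such as GIST and color); the function and variable names here are illustrative stand-ins, not taken from the paper's code.

    import numpy as np

    def transfer_caption(query_feat, dataset_feats, dataset_captions, k=1):
        # Non-parametric caption transfer: rank the captioned collection
        # by global-descriptor distance to the query and reuse the
        # captions of the k closest images. Descriptors are assumed to
        # be precomputed; query_feat has shape (D,), dataset_feats (N, D).
        dists = np.linalg.norm(dataset_feats - query_feat, axis=1)
        nearest = np.argsort(dists)[:k]
        return [dataset_captions[i] for i in nearest]

    # Hypothetical usage: `feats` is an (N, D) array of descriptors for
    # the 1M Flickr images, `captions` a parallel list of their caption
    # strings, and `extract_features` an assumed feature extractor.
    # print(transfer_caption(extract_features(query_image), feats, captions, k=5))

The simplicity is the point: with a collection of 1 million captioned images, even plain nearest-neighbor lookup over global features can return a visually relevant, human-written caption, and the paper's content-estimate methods refine this retrieval step rather than replace it.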


References in zbMATH (referenced in 4 articles)


  1. Plummer, Bryan A.; Wang, Liwei; Cervantes, Chris M.; Caicedo, Juan C.; Hockenmaier, Julia; Lazebnik, Svetlana: Flickr30k Entities: collecting region-to-phrase correspondences for richer image-to-sentence models (2015) arXiv
  2. Gong, Yunchao; Ke, Qifa; Isard, Michael; Lazebnik, Svetlana: A multi-view embedding space for modeling Internet images, tags, and their semantics (2014) ioport
  3. Patterson, Genevieve; Xu, Chen; Su, Hang; Hays, James: The SUN attribute database: beyond categories for deeper scene understanding (2014) ioport
  4. Lu, Zhiwu; Peng, Yuxin: Exhaustive and efficient constraint propagation: a graph-based learning approach and its applications (2013)