{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,9,23]],"date-time":"2024-09-23T04:28:07Z","timestamp":1727065687138},"reference-count":0,"publisher":"AI Access Foundation","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["jair"],"abstract":"<jats:p>Image captioning has recently demonstrated impressive progress largely owing to the introduction of neural network algorithms trained on curated datasets like MS-COCO. Often work in this field is motivated by the promise of deployment of captioning systems in practical applications. However, the scarcity of data and contexts in many competition datasets renders the utility of systems trained on these datasets limited as an assistive technology in real-world settings, such as helping visually impaired people navigate and accomplish everyday tasks. This gap motivated the introduction of the novel VizWiz dataset, which consists of images taken by the visually impaired and captions that have useful, task-oriented information. In an attempt to help the machine learning computer vision field realize its promise of producing technologies that have positive social impact, the curators of the VizWiz dataset host several competitions, including one for image captioning. This work details the theory and engineering from our winning submission to the 2020 captioning competition. Our work provides a step towards improved assistive image captioning systems.
\nThis article appears in the special track on AI & Society.<\/jats:p>","DOI":"10.1613\/jair.1.13113","type":"journal-article","created":{"date-parts":[[2022,2,1]],"date-time":"2022-02-01T01:31:49Z","timestamp":1643679109000},"page":"437-459","source":"Crossref","is-referenced-by-count":10,"title":["Image Captioning as an Assistive Technology: Lessons Learned from VizWiz 2020 Challenge"],"prefix":"10.1613","volume":"73","author":[{"given":"Pierre","family":"Dognin","sequence":"first","affiliation":[]},{"given":"Igor","family":"Melnyk","sequence":"additional","affiliation":[]},{"given":"Youssef","family":"Mroueh","sequence":"additional","affiliation":[]},{"given":"Inkit","family":"Padhi","sequence":"additional","affiliation":[]},{"given":"Mattia","family":"Rigotti","sequence":"additional","affiliation":[]},{"given":"Jarret","family":"Ross","sequence":"additional","affiliation":[]},{"given":"Yair","family":"Schiff","sequence":"additional","affiliation":[]},{"given":"Richard A.","family":"Young","sequence":"additional","affiliation":[]},{"given":"Brian","family":"Belgodere","sequence":"additional","affiliation":[]}],"member":"16860","published-online":{"date-parts":[[2022,1,31]]},"container-title":["Journal of Artificial Intelligence Research"],
"original-title":[],"link":[{"URL":"https:\/\/www.jair.org\/index.php\/jair\/article\/download\/13113\/26765","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.jair.org\/index.php\/jair\/article\/download\/13113\/26765","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,2,1]],"date-time":"2022-02-01T01:31:50Z","timestamp":1643679110000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.jair.org\/index.php\/jair\/article\/view\/13113"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,1,31]]},"references-count":0,"URL":"https:\/\/doi.org\/10.1613\/jair.1.13113","relation":{},"ISSN":["1076-9757"],"issn-type":[{"value":"1076-9757","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,1,31]]}}}
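A record like the one above can be consumed programmatically. Below is a minimal sketch of turning the Crossref `message` object into a human-readable citation string; the `format_citation` helper is hypothetical, and the field access assumes the standard Crossref work schema fields present in this record (`author`, `issued`, `title`, `container-title`, `volume`, `page`, `DOI`). A record like this can be fetched from `https://api.crossref.org/works/{doi}`.

```python
def format_citation(message: dict) -> str:
    """Build an APA-like citation from a Crossref 'work' message dict.

    Hypothetical helper; assumes the fields used below are present,
    as they are in the record shown above.
    """
    # Crossref lists authors as {"given": ..., "family": ...} objects.
    authors = ", ".join(
        f"{a['family']}, {a['given'][0]}." for a in message["author"]
    )
    # "issued" holds date-parts like [[2022, 1, 31]]; the year is first.
    year = message["issued"]["date-parts"][0][0]
    # Title and container-title are single-element lists in this schema.
    title = message["title"][0]
    journal = message["container-title"][0]
    volume = message.get("volume", "")
    pages = message.get("page", "")
    doi = message["DOI"]
    return (
        f"{authors} ({year}). {title}. {journal}, "
        f"{volume}, {pages}. https://doi.org/{doi}"
    )
```

Note that `message.get(...)` is used for `volume` and `page` because those fields are optional in the Crossref schema, while the others are treated as required for a journal article.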