"It's trained by non-disabled people": Evaluating How Image Quality Affects Product Captioning with Vision-Language Models
Abstract: Vision-Language Models (VLMs) are increasingly used by blind and low-vision (BLV) people to identify and understand products in their everyday lives, such as food, personal care items, and household goods. Despite their prevalence, we lack an empirical understanding of how common image quality issues--such as blur, misframing, and rotation--affect the accuracy of VLM-generated captions and whether the resulting captions meet BLV people's information needs. Based on a survey of 86 BLV participants, we develop an annotated dataset of 1,859 product images from BLV people to systematically evaluate how image quality issues affect VLM-generated captions. While the best VLM achieves 98% accuracy on images with no quality issues, accuracy drops to 75% overall when quality issues are present, worsening considerably as issues compound. We discuss the need for model evaluations that center on disabled people's experiences throughout the process and offer concrete recommendations for HCI and ML researchers to make VLMs more reliable for BLV people.
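The evaluation the abstract describes amounts to stratifying caption accuracy by the quality issues annotated on each image, so that single-issue and compounded-issue strata can be compared against the no-issue baseline. A minimal sketch of that aggregation, assuming a hypothetical annotation schema (the `issues` and `caption_correct` fields are illustrative placeholders, not the paper's actual dataset format):

from collections import defaultdict

# Hypothetical annotated examples: each product image is labeled with the
# quality issues present and whether the VLM's caption identified the
# product correctly. Field names are assumptions for illustration.
annotations = [
    {"issues": (), "caption_correct": True},
    {"issues": ("blur",), "caption_correct": True},
    {"issues": ("blur",), "caption_correct": False},
    {"issues": ("rotation",), "caption_correct": True},
    {"issues": ("blur", "misframing"), "caption_correct": False},
]

# Group by the sorted tuple of co-occurring issues so accuracy can be
# compared across no-issue, single-issue, and multi-issue strata.
totals, correct = defaultdict(int), defaultdict(int)
for ex in annotations:
    key = tuple(sorted(ex["issues"])) or ("none",)
    totals[key] += 1
    correct[key] += ex["caption_correct"]  # bool counts as 0/1

for key in sorted(totals, key=len):
    acc = correct[key] / totals[key]
    print(f"{'+'.join(key):>18}: {acc:.0%} ({correct[key]}/{totals[key]})")

Keying the groups on the full set of co-occurring issues, rather than on each issue independently, is what makes the compounding effect the abstract reports directly visible in the output.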
Comments: Published at CHI 2026; Honorable Mention for Best Paper (Top 5%). Dataset available at: this https URL
Subjects: Human-Computer Interaction (cs.HC); Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2511.08917 [cs.HC]
(or arXiv:2511.08917v3 [cs.HC] for this version)
DOI: https://doi.org/10.48550/arXiv.2511.08917 (arXiv-issued DOI via DataCite)
Related DOI: https://doi.org/10.1145/3772318.3791309
Submission history
From: Kapil Garg
[v1] Wed, 12 Nov 2025 02:54:13 UTC (29,971 KB)
[v2] Sat, 22 Nov 2025 22:58:28 UTC (11,977 KB)
[v3] Tue, 31 Mar 2026 11:56:00 UTC (12,339 KB)