
Worker Discretion Advised: Co-designing Risk Disclosure in Crowdsourced Responsible AI (RAI) Content Work

arXiv cs.HC · by Alice Qian, Ziqi Yang, Ryland Shaw, Jina Suh, Laura Dabbish, Hong Shen · April 2, 2026

arXiv:2509.12140v3 Announce Type: replace

Abstract: Responsible AI (RAI) content work, such as annotation, moderation, or red teaming for AI safety, often exposes crowd workers to potentially harmful content. While prior work has underscored the importance of communicating well-being risk to employed content moderators, designing effective disclosure mechanisms for crowd workers, while balancing worker protection with the needs of task designers and platforms, remains largely unexamined. To address this gap, we conducted individual co-design sessions with 15 task designers, 11 crowd workers, and 3 platform representatives. We investigated task designers' preferences for support in disclosing tasks, workers' preferences for receiving risk disclosure warnings, and how platform representatives envision their role in shaping risk disclosure practices. We identify design tensions and map the sociotechnical tradeoffs that shape disclosure practices. We contribute design recommendations and feature concepts for risk disclosure mechanisms in the context of RAI content work.
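The abstract does not specify what the paper's "feature concepts for risk disclosure mechanisms" look like. As a purely illustrative sketch (all names and fields here are hypothetical, not drawn from the paper), a pre-task disclosure shown to a crowd worker before they accept a task might be modeled as:

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    """Hypothetical content-risk categories a task designer could declare."""
    GRAPHIC_VIOLENCE = "graphic violence"
    HATE_SPEECH = "hate speech"
    SELF_HARM = "self-harm"

@dataclass
class RiskDisclosure:
    """A pre-task warning surfaced to a worker before task acceptance."""
    categories: list[RiskCategory]
    estimated_exposure: str      # e.g. "roughly 1 in 10 items"
    opt_out_allowed: bool = True # worker may decline without penalty

    def warning_text(self) -> str:
        labels = ", ".join(c.value for c in self.categories)
        note = (" You may decline this task without penalty."
                if self.opt_out_allowed else "")
        return (f"Worker discretion advised: this task may expose you to "
                f"{labels} ({self.estimated_exposure}).{note}")

# Example: a moderation task flagged for hate speech.
disclosure = RiskDisclosure(
    categories=[RiskCategory.HATE_SPEECH],
    estimated_exposure="roughly 1 in 10 items",
)
print(disclosure.warning_text())
```

This sketch only illustrates the general shape of such a mechanism (declared categories, an exposure estimate, and an opt-out); the paper's actual design recommendations come from the co-design sessions and are detailed in the full text.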

Subjects: Human-Computer Interaction (cs.HC); Computers and Society (cs.CY)

Cite as: arXiv:2509.12140 [cs.HC]

(or arXiv:2509.12140v3 [cs.HC] for this version)

https://doi.org/10.48550/arXiv.2509.12140

arXiv-issued DOI via DataCite

Related DOI:

https://doi.org/10.1145/3772318.3791558

DOI(s) linking to related resources

Submission history

From: Alice Qian
[v1] Mon, 15 Sep 2025 17:05:34 UTC (7,905 KB)
[v2] Tue, 30 Sep 2025 15:57:47 UTC (7,906 KB)
[v3] Tue, 31 Mar 2026 19:18:59 UTC (9,161 KB)
