Price of Anarchy of Algorithmic Monoculture
Abstract: Several recent works investigate the effects of monoculture, the ever-increasing phenomenon of (possibly) self-interested actors in a society relying on one common source of advice for decision making, with an archetypal driving example being the growing adoption and predictive power of machine learning models in matching markets, e.g. in hiring. Kleinberg and Raghavan (PNAS, 2021) introduced a model that captures the effects of monoculture in a one-sided matching market with advice, demonstrating that a higher-accuracy common signal (such as an algorithmic vendor) might incentivize society as a whole to rationally adopt it, even though the collective would be better off if each actor instead adopted less accurate but private advice. We generalize their model and address the open question posed in their work of quantifying the social welfare loss. We find that monoculture, and more generally decentralized optimization, is close to optimal: we show a tight constant bound of 2 on the price of anarchy (and more general notions) for the induced game.
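To make the welfare comparison concrete, below is a minimal Monte Carlo sketch in the spirit of the Kleinberg–Raghavan setup, not the paper's actual model: two firms hire from a common candidate pool, either both consulting one shared noisy signal (monoculture) or each consulting an independent, noisier private signal. The Gaussian quality/noise model, the specific noise levels, and the random offer order are assumptions made purely for illustration; which regime yields higher welfare depends on those choices.

```python
# Illustrative sketch only (hypothetical parameters, not the paper's model):
# compare total hiring welfare when two firms share one noisy signal
# (monoculture) versus when each uses an independent private signal.
import random

def noisy_scores(quality, noise):
    """Return a noisy estimate of each candidate's true quality."""
    return [q + random.gauss(0.0, noise) for q in quality]

def hire(quality, signals):
    """Firms make offers in a random order; each takes its highest-ranked
    still-available candidate. Returns total welfare, i.e. the sum of the
    hired candidates' true qualities."""
    taken = set()
    welfare = 0.0
    firms = list(range(len(signals)))
    random.shuffle(firms)
    for firm in firms:
        ranking = sorted(range(len(quality)), key=lambda i: -signals[firm][i])
        for i in ranking:
            if i not in taken:
                taken.add(i)
                welfare += quality[i]
                break
    return welfare

def average_welfare(shared, trials=20000, n=10, shared_noise=0.5, private_noise=1.0):
    """Average welfare over random instances, under either regime."""
    total = 0.0
    for _ in range(trials):
        quality = [random.gauss(0.0, 1.0) for _ in range(n)]
        if shared:
            s = noisy_scores(quality, shared_noise)   # one common signal
            signals = [s, s]
        else:
            signals = [noisy_scores(quality, private_noise),
                       noisy_scores(quality, private_noise)]
        total += hire(quality, signals)
    return total / trials

if __name__ == "__main__":
    mono = average_welfare(shared=True)
    indep = average_welfare(shared=False)
    print(f"monoculture welfare  ~ {mono:.3f}")
    print(f"independent welfare  ~ {indep:.3f}")
    print(f"welfare ratio        ~ {max(mono, indep) / min(mono, indep):.3f}")
```

The printed ratio is only an empirical comparison of two stylized regimes; the paper's result is the worst-case guarantee that the welfare of any equilibrium of the induced game is within a factor of 2 of the optimum.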
Comments: 27 pages, 1 figure. An earlier version of this paper was presented at WINE 2025
Subjects: Computer Science and Game Theory (cs.GT); Computers and Society (cs.CY)
Cite as: arXiv:2604.00444 [cs.GT]
(or arXiv:2604.00444v1 [cs.GT] for this version)
https://doi.org/10.48550/arXiv.2604.00444
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Erald Sinanaj [v1] Wed, 1 Apr 2026 03:40:22 UTC (89 KB)