Superintelligence and Law
Published on Mar 30
Abstract
The prospect of artificial superintelligence -- AI agents that can generally outperform humans in cognitive tasks and economically valuable activities -- will transform the legal order as we know it. Operating autonomously or under only limited human oversight, AI agents will assume a growing range of roles in the legal system. First, in making consequential decisions and taking real-world actions, AI agents will become de facto subjects of law. Second, to cooperate and compete with other actors (human or non-human), AI agents will harness conventional legal instruments and institutions such as contracts and courts, becoming consumers of law. Third, to the extent AI agents perform the functions of writing, interpreting, and administering law, they will become producers and enforcers of law. These developments, whenever they ultimately occur, will call into question fundamental assumptions in legal theory and doctrine, especially to the extent they ground the legitimacy of legal institutions in their human origins. Attempts to align AI agents with extant human law will also face new challenges as AI agents will not only be a primary target of law, but a core user of law and contributor to law. To contend with the advent of superintelligence, lawmakers -- new and old -- will need to be clear-eyed, recognizing both the opportunity to shape legal institutions as society braces for superintelligence and the reality that, in the longer run, this may be a joint human-AI endeavor.
