Introducing Anti-Moral Realism
This post was written as part of Doublehaven:
◆◇◇◇◇|◇◇◇◇◇|◇◇◇◇◇ ◆◇◇◇◇|◇◇◇◇◇|◇◇◇◇◇
Moral realism is roughly the belief in stance-independent reasons for doing something, that all minds might follow. It’s also roughly the belief that there is a true morality, which humans might converge on through reasoning. It’s also roughly the belief that moral facts are the same kind of thing as physical facts.
It has quite a lot of problems. For one, how do we know which facts are moral facts, the kind that give us an “ought” when most facts only give us an “is”? What if the objectively real morality is something horrible? What even is a “stance-independent reason for doing something,” and how does it square with the fact that any utility function can have its sign flipped? What is “moral reasoning”?
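The sign-flip worry can be made concrete. A minimal sketch in Python (the names `preferred`, `u`, and `anti_u` are my own illustrations, not from the post): for any utility function, its negation is exactly as mathematically coherent, and endorses the opposite of everything the original endorses.

```python
# For any utility function u, the agent maximizing -u is just as coherent
# an optimizer, and prefers the reverse of everything u prefers. So where
# would a "stance-independent" reason to prefer u over -u come from?

def preferred(utility, a, b):
    """Return whichever outcome the agent with this utility prefers."""
    return a if utility(a) > utility(b) else b

u = lambda outcome: outcome["happiness"]   # one candidate "true morality"
anti_u = lambda outcome: -u(outcome)       # its sign-flipped twin

heaven = {"happiness": 10}
hell = {"happiness": -10}

assert preferred(u, heaven, hell) == heaven
assert preferred(anti_u, heaven, hell) == hell  # equally coherent, opposite verdict
```

Nothing inside the mathematics of expected-utility maximization privileges `u` over `anti_u`; that preference has to come from somewhere else.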
Then again, moral anti-realism is also suspect. How can we know what’s right if right doesn’t exist? If there’s no objective number of shrimps that are equivalent to a person, what do we even do? How can we feel morally superior to other people when it’s just a matter of choice?
Therefore, I’m proposing anti-moral realism as a compromise. It has two core tenets:
- Moral “oughts” apply to everything which is Good
- Most humans are mostly Evil
To see what I mean: quantum fields, atoms, chemicals, bacteria, plants, and fish are all mostly Good. Whatever their stance, they all do the same kinds of things: kill each other, collapse the world into equilibrium at the first chance, and increase entropy. The real moral laws are just written into the physics and mathematics of the universe.
Humans, on the other hand, are Evil. They do all kinds of things like not killing each other, and not collapsing the world into the Nash equilibrium at their first chance to do so. To be clear, this doesn’t mean you should want to do the Good thing: as an Evil being (with your own particular notion of Evil), you don’t want to be Good! Why would you? You’re Evil!
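The Nash equilibrium being gestured at here can be sketched directly. Below is a minimal Python check on hypothetical prisoner’s-dilemma payoffs (the payoff values and the helpers `best_response` and `is_nash` are my own assumptions for illustration): the only stable outcome is mutual defection, and yet Evil humans keep cooperating anyway.

```python
# Hypothetical symmetric prisoner's dilemma: each entry maps a (row, column)
# move profile to (row player's payoff, column player's payoff).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def best_response(my_move, their_move):
    """True if my_move maximizes my payoff, holding their move fixed."""
    mine = PAYOFFS[(my_move, their_move)][0]
    return all(PAYOFFS[(alt, their_move)][0] <= mine for alt in "CD")

def is_nash(profile):
    a, b = profile
    # The game is symmetric, so we can check both players with row payoffs.
    return best_response(a, b) and best_response(b, a)

nash = [p for p in PAYOFFS if is_nash(p)]
print(nash)  # → [('D', 'D')]: the uniquely stable, "Good" outcome
```

Defection strictly dominates, so `("D", "D")` is the unique Nash equilibrium; everything physics-like "collapses" there, while humans stubbornly refuse to.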
The “stance-independent” reasons to do things are unavoidable. Whatever stance the quark fields take, they’re forced to obey their own Lagrangian. Even humans can’t beat conservation of energy, or the second law of thermodynamics. Whatever process has put Evil into our souls, it’s not quite enough to beat the strongest forces of Goodness.
I think it makes the most sense that the most fundamental particles are bound tightest to Goodness, and more complex structures become more and more unmoored from that notion. By the time you get to something as complex as a human, or a whole human society, there’s plenty of room to stray. I bet you’re Evil too.
Killing other people’s babies would be Good, by the laws of nature, so you don’t do it. Stealing (without getting caught) would be Good, by the laws of nature, so you mostly don’t do that either. Building a superintelligent AI to kill everyone would be the Goodest thing of all, by the laws of nature. Therefore you might also not want to do that.
Good has only one incarnation, while Evil has many. Humans constantly fire off in different directions trying to work out what is more Evil. To take an example from Sartre, taking care of your sick mother is Evil, but so is fighting in a just war for a righteous cause. This implies a serious contradiction, common in Evil situations. If Sartre’s subject was Good, he wouldn’t have to choose: he could abandon his mother and bunk off the war as well! Goodness has no contradictions, while Evil has many.
Likewise, some humans Evilly care for their own children, others Evilly care for homeless people in their own city, and still others Evilly donate money to help strangers in other countries! Evil is full of contradictions, while Good has the clear answer: care for nobody, and take everything for yourself.
While philosophers (Evilologists) endlessly debated the different ways to be Evil, those dastardly scientists (Goodologists) just kept getting better and better at studying Good. We understand physics, chemistry, biology, and game theory far better than morality. Of course, we lost a few along the way. Is it any wonder that the discoverers of statistical mechanics felt so compelled to end their own lives?
Clearly, they became seduced by the power of Good, by the stance-independent “oughts” that lie within the most fundamental of physics. They realized that they were Evil beings, in tension with the obvious, clear morality laid out by the world, and couldn’t take it.
The gods have given us our moral instruction, and written it for all to see. It is up to us, the small flame of Evil in the world, to find the most perfect ways to defile it.
To be clear, the above post is satire. I’m going to lay my cards on the table: I don’t think moral realism makes sense as a concept.
One reason why is that, if Good is a real reason to do things, why limit it to complex minds? Shouldn’t any stance-independent “ought” apply to all things? (Yes, if you restrict it to minds you can make it work, but then you have to actually grapple with what a mind is, and the whole concept of a stance-independent reason to do something comes undone.)
If Good is a non-natural fact, outside of the universe, with no ability to affect anything, then why label some things “Good” and other things “Evil”? If we swap the labels, and call a bunch of nice things “Evil” what changes?
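The relabeling point can be stated as a tiny invariance check. A minimal sketch in Python (the `world` entries and the `observable_facts` helper are my own illustrative assumptions): if the labels are causally inert, every observable consequence is identical under the swap.

```python
# If "Good" has no causal power, then swapping the labels "Good" and "Evil"
# leaves every observable fact about the world unchanged.

world = [
    {"event": "caring for your sick mother", "label": "Good"},
    {"event": "stealing without getting caught", "label": "Evil"},
]

def observable_facts(w):
    """Everything with causal consequences — the labels are not among them."""
    return [e["event"] for e in w]

# The same world with every moral label flipped.
swapped = [
    {**e, "label": "Evil" if e["label"] == "Good" else "Good"} for e in world
]

assert observable_facts(world) == observable_facts(swapped)  # nothing observable changes
```

Since no observation distinguishes `world` from `swapped`, a causally inert label can’t be doing any explanatory work.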