Announcing Doublehaven with Reflections on Humour
Note the date of this post.
Inkhaven is a writers’ retreat, well, really it’s a bloggers’ retreat. On the Lighthaven campus in Berkeley, a couple dozen bloggers get together to complete a challenge that is almost insurmountable for us mere mortals: post one blogpost every single day for a whole month. I say ‘insurmountable’ but in fact they all succeeded last time, although apparently it was not uncommon for them to claw success from the jaws of defeat at 11:45 pm each night.
I look at this and I feel the same way that traditionalists feel when they see Millennials scared to use the phone, or Gen Zs unable to go outside. Our (blogosphere) ancestors used to blog seventy times per day! Great Yudkowsky used to go to war (with the methods of rationality)! Moldbug and Alexander were gunning each other down (with devastating counterarguments) over breakfast!
That’s why I’m going to be doing Doublehaven. Two blogposts per day. No “advice” or “tips” on “writing well”. No full-time live-in retreat (I’m not that rich). No endorsement from anybody at all. In fact, I also need to finish writing my PhD thesis and an entire paper this month.
Why? I want to give people the permission to be ambitious. Yes, some people struggle with writer’s block, some people have more “useful” skills than just blogging really fast, some people even write “good” blogposts rather than simply “lots of” blogposts. But I do want to see what’s possible. We cannot let a rate of one blogpost per day be our upper limit.
Will I do thirty posts, or thirty days? I don’t know. I might not make it five days, to be honest! There aren’t any rules, and there’s also no reward. Anyway, here’s the marker I’ll be using for the first thirty posts:
This post was written as part of Doublehaven:
◆◇◇◇◇|◇◇◇◇◇|◇◇◇◇◇|◇◇◇◇◇|◇◇◇◇◇|◇◇◇◇◇
With that said, let me discuss humour.
Before going further, I’ll run through a ranking of my favourite ratty jokes, mostly from Twitter.
Honourable Mentions
Gettier Case. It’s very funny to say “Gettier case” whenever someone has a justified true belief that isn’t knowledge. I use it all the time! It’s great. Unfortunately I just don’t think it’s closely-tied enough to the rationalist community for it to count.
My Own Jokes. I have to put these here otherwise the entire list would be full of them. I particularly enjoy the ones I’ve come up with for my line of (legal, as I am not selling them) homebrewed alcohols.
I notice I am confused. Quite funny to deploy as a joke, but mostly not a joke. Sorry.
#5: That one Yudkowsky Rant Tweet
you are trying to solve the wrong problem using the wrong methods based on a wrong model of the world derived from poor thinking and unfortunately all of your mistakes have failed to cancel out
@ESYudkowsky via Twitter
I’ve seen this one thrown around a few times as a reference; my favourite is probably the “but doctor, I am grimaldi” riff, which I can’t now find. It’s pretty funny, but it seemed to have limited staying power, and it’s relatively niche.
Rationality: 8/10
Humor: 6/10
QALYs in expectation: 1e0
#4: The Special LessWrong Events
I’m not sure which of these count as jokes and which don’t. On Petrov Day, LessWrong sometimes has an event where people can honestly, actually blow up the front page of the website. That one is really quite good. One time a guy actually blew it up as well, which added a bit of spice to later events.
The April Fool’s Day ones are also very nice. I enjoyed the “Good Heart Points” and also the EA-themed one. Nice work, Lightcone team.
Rationality: 7/10
Humor: 9/10
QALYs in expectation: 1e1
#3: “Came in fluffer” Sankey Diagram
I don’t need to give context for this. Or maybe I just don’t want to. Either way, this joke ruined a common, useful diagram type for hundreds of people, and escaped containment all the way onto
I’m only putting it third because it’s not very rational. Aella is a prominent rationalist, but, eh, this doesn’t have much to do with AI or becoming more rational. Since it broke containment, I expect it will have a much higher overall impact than either of the previous jokes.
Rationality: 3/10
Humor: 10/10
QALYs in expectation: 1e2
#2: The Anthropic Responsible Scaling Policy
It had to be on here. Very very nice one, you really had us all for a moment. It was a clever trick to first say “We promise to follow these rules around AI” and then say “Aha! We never promised we’d stick to this promise!” like a toddler with two fingers crossed behind his back.
Even more absurd: while publicly supporting SB 53, which requires AI developers to publish a framework describing how they approach safe AI development (one they can then be held to), Anthropic deliberately released something weaker than the RSP, which supersedes it for the purposes of SB 53. Incredible.
Rationality: 1/10
Humor: 10/10
QALYs in expectation: -1e50
#1: The Shoggoth
I ummed and ahhed about putting this one on here. Tetraspace is a “medium-sized fan of rationality” by her own description, but this joke is so good I think it has to count.
What more is there to say? Shifted public thinking about AI by a meaningful amount. Absolutely hilarious.
Rationality: 7/10
Humor: 9/10
QALYs produced in expectation: +1e40
Discussion
People have long debated what “The Law of Humour” is, pointing to different observations about individual cases. Some major avenues towards humour seem to be:
- Inside jokes/references to something only a few people in the conversation have knowledge of
- Absurdism, wherein things don’t follow the logical rules, but instead dream logic, or something similar
  - Puns are a subtype of this, where the normal rules are superseded by word-ish rules
- Offense, where things make sense, but break social rules
I’ll also mention one of the more reliable ways to generate jokes, free association, which is apparently how writers on SNL and related shows manage to output so many (often mediocre) jokes so quickly. For any given thing, you write down a series of associations on two or three of the items in it, and then draw connections.
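The free-association procedure above can be sketched mechanically. This is a toy illustration, not an actual SNL workflow, and every topic and association in it is invented for the example:

```python
import random

# Toy sketch of the free-association method: for each item in the prompt,
# list a few associations, then draw a random cross-connection and look
# for the joke in the unexpected pairing. All associations are made up.
associations = {
    "coffee": ["mornings", "beans", "jitters", "third wave"],
    "meeting": ["agenda", "small talk", "calendar invite"],
}

def draw_connection(topic_a: str, topic_b: str) -> tuple[str, str]:
    """Pair one random association from each topic; the writer's job is
    to find a punchline connecting the two."""
    return (random.choice(associations[topic_a]),
            random.choice(associations[topic_b]))

pair = draw_connection("coffee", "meeting")
print(pair)  # e.g. ('jitters', 'small talk')
```

The machine only supplies the distant pairing; turning it into an actual joke (often a mediocre one, as noted above) is still manual.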
These can overlap. An absurd non-sequitur can also be offensive. An inside joke can appear in an illogical place, becoming absurd. An offensive joke can be generated by free association. I think the law of humour is as follows:
Something is funny when it is hard to predict but easy to retrodict.
Or to put it another way, something is funny when you should have predicted it, but you didn’t. Inside jokes work by referring to something from a different social group or cluster. It makes sense once you know it, but you wouldn’t normally e.g. talk about your stag do (bachelor party, for the yanks) in front of your parents. Offensive humour works because our cognition steers us away from saying (and therefore predicting) offensive things most of the time. Jokes generated by free association just directly draw together normally distant things.
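One way to caricature this law in code: score a punchline by the gap between how predictable it was in advance and how retrodictable it is in hindsight. The probabilities here are entirely made up for illustration; this is a cartoon of the claim, not a serious model:

```python
# Toy model: funniness as the gap between the prior probability of the
# punchline (how easy it was to predict) and its posterior probability
# once the hidden context is revealed (how easy it is to retrodict).
# All numbers below are invented for illustration.

def funniness(p_prior: float, p_posterior: float) -> float:
    """Higher when the punchline was hard to predict (small p_prior)
    but easy to retrodict (large p_posterior)."""
    return p_posterior - p_prior

# A good pun: unlikely in advance, obvious in hindsight -> large gap.
print(funniness(p_prior=0.01, p_posterior=0.90))
# A cliché: predictable in advance, so almost no gap -> not funny.
print(funniness(p_prior=0.80, p_posterior=0.90))
```

On this cartoon, inside jokes, offensive jokes, and free-association jokes all work the same way: they suppress the prior (unfamiliar context, social inhibition, distance between concepts) without hurting the posterior.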