Estimates of the expected utility gain of AI Safety Research
When thinking about AI risk, I often wonder how materially impactful each hour of my time is, and I think that this may be useful for other people to know as well, so I spent a couple of hours making a couple of estimates. I basically expect that a tonne of people have put a bunch more time into this than me, but this is nice to have as a rough sketch to point people to.
I'm going to make 3 estimates: an underestimate, my best-guess estimate and (what I think is) an overestimate.
Starting facts[1]:
- Currently 8.3 Billion people on planet earth
- Current median age: 31.1 years
- Current life expectancy: 73.8 years
I am going to commit statistical murder and assume this means that everyone on the planet lives ~42.7 years from this point onwards.
- Underestimate: 40 years of life left/person
- Median: 42.7 years + ~15 years' increase in life expectancy over that span (there were 20 years' growth in the past 60 years) ≈ 58, call it 60 years of life left
- Overestimate: Everyone gets life extension and lives to heat death of universe: 10^100 years
Since the population is growing, we should take that into account:
- Underestimate: We only care about the lives of people currently alive
- Median: We keep growing at current ~1% growth rate per year
- Overestimate: Population growth of 2% per year until the heat death of the universe
Given these parameters, we can figure out the total expected years of life we care about for each scenario:
- Under: 40 years x 8.3 B = 332 Gyr
- Median:
Current population: 60 years x 8.3 B = 498 Gyr
Additional population (linear approximation): ~1%/year x 8.3 B ≈ 83 M new people/year
Additional population life span: 73.8 years + ~1/3 yr added per year lived, which self-consistently works out to about 110 years
Total expected years of life: 498 Gyr from the current population, plus roughly 9 Gyr of new life per year of growth
- Overestimate: 10^100 years x 1.02^(10^100) = broken calculator.
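The totals above can be sanity-checked in a few lines of Python. The values come straight from the figures above; the additional-population line is my reconstruction of the growth term, so treat the ~9 Gyr/year as approximate:

```python
population = 8.3e9
remaining = 73.8 - 31.1                  # ~42.7 years left per person

# Underestimate: 40 remaining years per person.
under_total_gyr = 40 * population / 1e9          # 332 Gyr

# Median: ~60 remaining years for everyone currently alive...
median_current_gyr = 60 * population / 1e9       # 498 Gyr

# ...plus ~1% population growth, each new person living ~110 years.
new_people_per_year = 0.01 * population          # ~83 M people/year
added_gyr_per_year = new_people_per_year * 110 / 1e9   # ~9 Gyr of new life/year

print(under_total_gyr, median_current_gyr, round(added_gyr_per_year, 1))
```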
I think it's best to skip the overestimate. For the underestimate, we'll assume 20 years of research buys the entire field a 1% chance of a 1% decrease in the final risk, with extinction occurring 30 years from now. For the median estimate, we'll assume 5 years of research against an extinction 10 years from now, with a 50% chance of a 5% reduction in risk.
Expected years of life available to be saved:
- Under: 332 Gyr x ((40-30)/40) = 83 Gyr
- Median: 498 Gyr x (60-10)/60 + 8.93Gyr x 10 = 415 Gyr + 89.3 Gyr = about 500 Gyr
Expected years of life actually saved:
- Under: 83 Gyr x 0.01 x 0.01 = 8.3 Myr
- Median: 500 Gyr x 0.5 x 0.05 = 12.5 Gyr
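The discounting step, sketched under the same assumptions (computing with the unrounded ~504 Gyr total gives ~12.6 Gyr, which rounds to the 12.5 Gyr used below):

```python
# Underestimate: extinction 30 years into the 40-year horizon,
# then a 1% chance of a 1% risk reduction.
under_available = 332e9 * (40 - 30) / 40       # 83 Gyr available
under_saved = under_available * 0.01 * 0.01    # 8.3e6 -> 8.3 Myr

# Median: extinction 10 years into the 60-year horizon, plus 10 years of
# population growth, then a 50% chance of a 5% risk reduction.
median_available = 498e9 * (60 - 10) / 60 + 8.93e9 * 10   # ~504 Gyr
median_saved = median_available * 0.5 * 0.05              # ~12.6 Gyr

print(under_saved / 1e6, median_saved / 1e9)
```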
Number of AI Safety researchers:
- Under: 10k researchers
- Median: 2.5k researchers (current estimates are closer to 1-2k; the extra accounts for growth of the field)
Expected impact per researcher:
- Under: 830 yrs
- Median: 5Myr
We've said the researchers have 20 years (underestimate) or 5 years (median) to make an impact, which gives us:
- Under: ~40 years of life saved/year
- Median: 1 Myr of life saved / year
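The per-researcher numbers are just the saved life-years spread across the field and then across the research period:

```python
# Spread the saved life-years across the field...
under_per_researcher = 8.3e6 / 10_000        # 830 years of life each
median_per_researcher = 12.5e9 / 2_500       # 5 Myr of life each

# ...and then across the years of research.
under_per_year = under_per_researcher / 20   # ~41.5 years of life per year of work
median_per_year = median_per_researcher / 5  # 1 Myr of life per year of work

print(under_per_year, median_per_year / 1e6)
```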
Going back to the ~40 years of life expected for the modern median human, this gives an underestimate of 1 year of work to save one life, or a median estimate of 5 mins/life. This is a pretty broad range funnily enough.
1 year of work to save one life is just a tad worse than the 1.2 lives/year saved by donating £3000/year, as advertised by Effective Altruism UK. If we take that value as given and assume 1 life = £2500, then on the median estimate you should be earning £2500 x 10^6 / 40 = £62.5 million/year. If only the world were more sensible.
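Spelling out the conversion behind those last two figures (the ~2,000-hour working year is my assumption; it isn't stated above, but it's what makes the 5 mins/life figure come out):

```python
# One "life" = ~40 remaining years, so the median 1 Myr/year of work saves:
lives_per_year = 1e6 / 40                    # 25,000 lives per year of work

# Minutes of work per life, assuming a ~2,000-hour working year
# (assumed figure, not stated in the post):
minutes_per_life = 2_000 * 60 / lives_per_year   # 4.8 -> "5 mins/life"

# Implied salary at £2,500 per life:
implied_salary = lives_per_year * 2_500          # £62,500,000 per year

print(minutes_per_life, implied_salary)
```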
[1] All population data comes from https://www.worldometers.info
Original post: https://www.lesswrong.com/posts/gXYeWoAfSrdGogchp/estimates-of-the-expected-utility-gain-of-ai-safety-research
