Should We Continue Advancing AI Despite the Risk of Human Extinction?
- Greg Thorson
- Jan 14
- 5 min read

The research explores whether rapid AI advancement should continue despite its potential existential risks. It examines economic models that balance AI-driven growth against extinction risk, analyzing factors such as utility curvature and risk aversion. The study finds that under log utility, society may accept up to a 33% extinction risk in exchange for a 55-fold increase in consumption. With higher risk aversion (γ ≥ 2), however, the optimal period of AI use shrinks sharply, yielding only a 57% consumption gain at roughly a 4% extinction risk. Mortality improvements can make existential risks more tolerable. The findings highlight how strongly different assumptions shape AI policy decisions on growth versus safety.
Full Citation and Link to Article
Charles I. Jones, "The AI Dilemma: Growth versus Existential Risk," AER: Insights, vol. 6, no. 4, 2024, pp. 575–590. DOI: 10.1257/aeri.20230570.
Extended Summary
Central Research Question
The central research question of this study is: Under what conditions should society continue the rapid progress of artificial intelligence (AI), and under what conditions should it be halted due to existential risk? The study explores the trade-off between AI-driven economic growth and the potential catastrophic consequences of superintelligent AI. The research aims to determine whether the benefits of AI, such as exponential economic growth and improvements in living standards, outweigh the risks, which include the possibility of human extinction. The study applies economic models to quantify this trade-off and identify the conditions under which AI development is optimal.
Previous Literature
The study builds upon a growing body of literature on the economic implications and risks of AI. Previous research has emphasized both the benefits and dangers of AI development. Scholars such as Bostrom (2002, 2014) and Ord (2020) have highlighted the existential risks associated with AI, warning that misaligned superintelligent AI could pose threats comparable to nuclear weapons. On the other hand, research by Brynjolfsson and McAfee (2014) has focused on AI's potential to significantly boost productivity and economic growth.
Models of AI-driven growth, such as those proposed by Aghion, Jones, and Jones (2019) and Davidson (2021), suggest that AI could eliminate traditional constraints on innovation, leading to exponential growth and possibly a technological singularity. However, these models do not fully integrate the risks of AI-driven extinction. This study contributes to the literature by incorporating existential risk into economic growth models, offering a structured way to analyze the trade-offs involved in AI advancement.
Data
The study does not rely on empirical datasets but instead develops theoretical models using standard economic principles. It references prior research on AI capabilities, economic growth, and risk assessment to inform its assumptions. Specifically, the study utilizes estimates from research on AI’s potential to accelerate innovation (Bubeck et al., 2023), historical economic growth patterns, and estimates of the value of statistical life (Hall, Jones, and Klenow, 2020).
To quantify existential risk, the study considers hypothetical risk probabilities (e.g., a 1% or 2% per-year chance of extinction). It also examines historical growth rates, comparing AI’s potential economic impact to past technological revolutions such as electricity and the internet. Additionally, it incorporates insights from mortality studies to assess how AI-driven life expectancy improvements could alter the trade-off between risk and growth.
Methods
The study employs economic modeling to evaluate the optimal use of AI in the presence of both benefits and risks. It develops two key models:
A Simple Model of AI Growth and Existential Risk
Assumes AI can accelerate economic growth at rate g (e.g., 10% per year).
Includes an existential risk probability δ (e.g., 1% or 2% per year).
Social welfare is defined as the expected lifetime utility of individuals, weighted by survival probability.
The model examines different utility functions, including logarithmic (log) utility and constant relative risk aversion (CRRA) utility, to analyze how risk preferences influence AI policy decisions.
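To make the structure concrete, here is a minimal Python sketch of a model in this spirit. It is a stylized toy, not the paper's exact specification: consumption is normalized to 1 and grows at rate g only while AI runs, the extinction hazard δ applies only while AI runs, individuals discount at a mortality rate β, and flow utility includes a Hall-Jones-style constant ū (the value of being alive) whose level here is an arbitrary illustrative choice rather than the paper's calibration.

```python
import numpy as np

def utility(c, gamma, ubar=5.0):
    """Flow utility: ubar + log(c) when gamma == 1, ubar + c^(1-gamma)/(1-gamma) otherwise.
    ubar is a Hall-Jones-style 'value of being alive' constant; 5.0 is an
    arbitrary illustrative value, not the paper's calibration."""
    if gamma == 1.0:
        return ubar + np.log(c)
    return ubar + c ** (1.0 - gamma) / (1.0 - gamma)

def expected_welfare(T, gamma, g=0.10, delta=0.01, beta=0.01,
                     horizon=1000.0, dt=0.1):
    """Expected lifetime utility when AI runs for T years (toy version).

    Assumptions: consumption starts at 1 and grows at rate g while AI runs,
    then stays flat; the extinction hazard delta applies only while AI runs;
    beta is the mortality/discount rate (life expectancy roughly 1/beta years).
    """
    t = np.arange(0.0, horizon, dt)
    c = np.where(t < T, np.exp(g * t), np.exp(g * T))                  # consumption path
    survive = np.where(t < T, np.exp(-delta * t), np.exp(-delta * T))  # P(no extinction yet)
    discount = np.exp(-beta * t)                                       # mortality discounting
    return np.sum(utility(c, gamma) * survive * discount) * dt
```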
An Extended Model Incorporating Singularities and Mortality Improvements
Introduces scenarios where AI could lead to infinite consumption (a technological singularity).
Considers the possibility that AI-driven medical advancements could reduce mortality rates.
Evaluates how near-zero social discounting (placing greater value on future generations) affects the willingness to accept existential risk.
Both models use a utilitarian framework, treating economic growth and existential risk as competing forces. The key decision variable is how long to allow AI to operate before halting its development. The study derives optimal conditions for AI advancement based on different levels of risk aversion and survival probabilities.
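Continuing that toy sketch, the stopping-time choice can be illustrated with a brute-force search over T. The paper derives the optimum analytically; this loop is only meant to show the shape of the trade-off and how it shifts with risk aversion.

```python
# Reuses utility() and expected_welfare() from the sketch above.
import numpy as np

candidate_T = np.arange(0.0, 200.0, 0.5)          # candidate years of AI use
for gamma in (1.0, 2.0):
    welfare = [expected_welfare(T, gamma) for T in candidate_T]
    best_T = candidate_T[int(np.argmax(welfare))]
    print(f"gamma = {gamma:.0f}: toy-model optimal stopping time ≈ {best_T:.1f} years")
```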
Findings and Size Effects
The study’s findings reveal stark contrasts in optimal AI policy depending on assumptions about risk aversion, economic growth, and mortality improvements.
Log Utility vs. CRRA Utility
With log utility, utility is unbounded, so large consumption gains remain highly valuable and society tolerates substantial existential risk in exchange for them.
Under CRRA utility with risk aversion coefficient γ ≥ 2, utility is bounded above, so additional consumption quickly loses value relative to survival, leading to far more cautious AI adoption.
Size Effects in Economic Growth and Existential Risk
With log utility and an assumed 1% annual existential risk, society would optimally allow AI to operate for 40 years, resulting in a 55-fold increase in consumption. However, this would also entail a 33% probability of human extinction.
If the existential risk increases to 2% per year, the optimal decision is to immediately halt AI development, as the risk outweighs the benefits.
Under CRRA utility (γ = 2), the optimal AI usage period shrinks to 4.5 years, yielding only a 57% increase in consumption and an existential risk of just 4%. This highlights how increasing risk aversion significantly reduces society’s willingness to accept existential threats.
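The headline magnitudes follow directly from compounding: consumption grows by a factor of e^(gT) over T years of AI use, and the cumulative extinction probability with a 1% annual risk is 1 − 0.99^T. A quick check of the two cases reported above:

```python
import math

g, delta = 0.10, 0.01                     # 10% annual growth, 1% annual extinction risk
for label, T in [("log utility", 40.0), ("CRRA, gamma = 2", 4.5)]:
    gain = math.exp(g * T)                # cumulative consumption multiple
    risk = 1.0 - (1.0 - delta) ** T       # cumulative extinction probability
    print(f"{label}: {T} years of AI -> {gain:.2f}x consumption, {100 * risk:.0f}% extinction risk")
# roughly 54.6x and 33% for log utility; about 1.57x (a 57% gain) and 4% for gamma = 2
```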
Singularities and Infinite Consumption
If AI leads to a technological singularity with infinite consumption, the study finds that for γ > 1 (bounded utility), this does not necessarily justify taking large existential risks.
When γ = 2, the optimal threshold for existential risk remains low at 2.4%, meaning that even an AI-driven singularity does not justify gambling with human extinction beyond this level.
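The mechanism here is that CRRA utility is bounded above when γ > 1: with the standard normalization u(c) = (c^(1−γ) − 1)/(1 − γ), utility approaches 1/(γ − 1) as consumption goes to infinity, so even a singularity delivers only a finite utility gain. A small numerical illustration:

```python
# CRRA utility is bounded above for gamma > 1: u(c) = (c**(1-gamma) - 1)/(1-gamma)
# converges to 1/(gamma - 1) as c grows without bound (here, to 1 for gamma = 2).
gamma = 2
for c in [1, 10, 1_000, 1_000_000]:
    u = (c ** (1 - gamma) - 1) / (1 - gamma)
    print(f"c = {c:>9,} -> u(c) = {u:.6f}")    # 0, 0.9, 0.999, 0.999999
```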
Mortality Improvements and Risk Trade-Offs
If AI-driven innovations reduce mortality rates (e.g., by extending life expectancy from 100 to 200 years), society becomes much more willing to accept existential risk.
In a scenario where AI halves mortality, the acceptable existential risk rises to 25% for γ = 2 or 3, compared to 2-5% in scenarios without mortality improvements.
This result stems from the fact that life expectancy improvements counterbalance existential risk, making the trade-off more favorable.
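The life-expectancy figures reflect the standard exponential-survival relationship: with a constant mortality hazard β, expected remaining life is 1/β years, so halving the hazard doubles life expectancy. This is a stylized illustration of the link, not the paper's full demographic setup:

```python
# With a constant mortality hazard beta, expected remaining life is 1/beta years,
# so halving beta from 1% to 0.5% per year doubles life expectancy from 100 to 200.
for beta in (0.01, 0.005):
    print(f"mortality hazard {beta:.3f}/year -> life expectancy {1 / beta:.0f} years")
```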
Near-Zero Social Discounting
If policymakers place greater weight on future generations (reducing the social discount rate to near zero), the willingness to accept existential risk decreases.
However, when AI also improves life expectancy, this effect is offset, as future generations benefit more from longer lifespans.
This dynamic suggests that mortality improvements could be a key factor in shaping AI policy decisions.
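The discounting effect can be seen in a one-line calculation: the present value of a constant future utility flow is proportional to 1/ρ, so as the social discount rate ρ approaches zero, the value of the future that extinction would destroy grows without bound. This is a stylized illustration of the mechanism, not the paper's exact formulation:

```python
# Present value of a constant flow of 1 per year discounted at rate rho is 1/rho,
# so the stake destroyed by extinction explodes as rho approaches zero.
for rho in (0.02, 0.01, 0.001):
    print(f"rho = {rho:.3f} -> value of the future ≈ {1 / rho:,.0f}")
```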
Conclusion
The study provides a structured economic framework for evaluating AI’s trade-offs between growth and existential risk. Key takeaways include:
Risk preferences matter: Under log utility, society is willing to take extreme risks for high consumption gains, while under CRRA utility (γ ≥ 2), society is much more cautious.
Size effects are significant: AI-driven growth could dramatically increase living standards, but even a small increase in existential risk can justify shutting AI down under certain conditions.
Singularities are not a guaranteed justification for AI progress: With bounded utility, even infinite consumption does not always outweigh existential risk.
Mortality improvements change the equation: If AI extends life expectancy, society becomes far more willing to tolerate existential risks.
Future generations matter: If policymakers discount the future less, existential risk becomes less acceptable—unless AI also improves longevity.
The findings suggest that AI policy decisions should be heavily informed by assumptions about utility, risk aversion, and the likelihood of AI-induced mortality improvements. While AI presents extraordinary opportunities for economic growth, failing to properly account for existential risk could have catastrophic consequences. The study ultimately emphasizes the need for careful regulation and risk mitigation strategies before AI reaches potentially dangerous levels of capability.