(2005–2024)
Established in 2005, initially for a three-year period, the Future of Humanity Institute (FHI) was a multidisciplinary research group at Oxford University. It was founded by Professor Nick Bostrom and brought together a select set of researchers from disciplines such as philosophy, computer science, mathematics, and economics to study big-picture questions for human civilization. The aim was to shield these researchers from ordinary academic pressures and to create an organizational culture conducive to creativity and intellectual progress.
During its nineteen-year existence, the team at FHI made a series of research contributions that helped change our conversation about the future and contributed to the creation of several new fields and paradigms. FHI was involved in the germination of a wide range of ideas including existential risk, effective altruism, longtermism, AI alignment, AI governance, global catastrophic risk, grand futures, information hazards, the unilateralist’s curse, and moral uncertainty. It also did significant work on anthropics, human enhancement ethics, systemic risk modeling, forecasting and prediction markets, the search for extraterrestrial intelligence, and the attributes and strategic implications of key future technologies. One major contribution was showing that rigorous research on big-picture questions about humanity’s future is even possible.
Over time FHI faced increasing administrative headwinds within the Faculty of Philosophy (the Institute’s organizational home). Starting in 2020, the Faculty imposed a freeze on fundraising and hiring. In late 2023, the Faculty of Philosophy decided that the contracts of the remaining FHI staff would not be renewed. On 16 April 2024, the Institute was closed down.
Over the course of its nineteen years, FHI inspired the emergence of a vibrant ecosystem of organizations in which the kinds of questions it investigated can be explored. FHI alumni will continue to research these questions both within Oxford and at other places around the world. Topics that once struggled to eke out a precarious existence at the margins of a single philosophy department are now pursued by leading AI labs, government agencies, nonprofits, and specialized academic research centers, with many more in the process of being created.
Resources
FHI’s final tech report — an oral history of the institute
A collection of FHI’s technical reports and other online pieces
Historical snapshots of FHI’s official website at the Internet Archive
An extensive list of articles by FHI (or mentioning FHI) via Google Scholar
A magazine article by journalist Tom Ough about the history of FHI
FHI Books
Global Catastrophic Risks, Nick Bostrom and Milan Ćirković (eds.), 2008.
Human Enhancement, Julian Savulescu and Nick Bostrom (eds.), 2009.
Radical Abundance: How a Revolution in Nanotechnology Will Change Civilization, Eric Drexler, 2013.
Superintelligence: Paths, Dangers, Strategies, Nick Bostrom, 2014.
The Precipice: Existential Risk and the Future of Humanity, Toby Ord, 2020.
Moral Uncertainty, William MacAskill, Krister Bykvist, and Toby Ord, 2020.
Deep Utopia: Life and Meaning in a Solved World, Nick Bostrom, 2024.
Key FHI Papers
Existential and Catastrophic Risk
Existential risks: analyzing human extinction scenarios and related hazards, Nick Bostrom, 2002 (pre-FHI).
Astronomical waste: the opportunity cost of delayed technological development, Nick Bostrom, 2003 (pre-FHI).
How unlikely is a doomsday catastrophe? Max Tegmark and Nick Bostrom, 2005.
What is a singleton? Nick Bostrom, 2005.
Where are they? Why I hope the search for extraterrestrial life finds nothing, Nick Bostrom, 2008.
Probing the improbable: methodological challenges for risks with low probabilities and high stakes, Toby Ord, Rafaela Hillerbrand, and Anders Sandberg, 2010.
Anthropic shadow: observation selection effects and human extinction risks, Milan Ćirković, Anders Sandberg, and Nick Bostrom, 2010.
Information hazards, Nick Bostrom, 2011.
Existential risk prevention as global priority, Nick Bostrom, 2013.
How much could refuges help us recover from a global catastrophe? Nick Beckstead, 2015.
Existential risk and existential hope: definitions, Owen Cotton-Barratt and Toby Ord, 2015.
The unilateralist’s curse and the case for a principle of conformity, Nick Bostrom, Thomas Douglas, and Anders Sandberg, 2016.
An upper bound for the background rate of human extinction, Andrew Snyder-Beattie, Toby Ord, and Michael Bonsall, 2019.
The vulnerable world hypothesis, Nick Bostrom, 2019.
The lifespan of civilizations: do societies “age,” or is collapse just bad luck? Anders Sandberg, 2023.
AI Safety
Thinking inside the box: controlling and using an oracle AI, Stuart Armstrong, Anders Sandberg, and Nick Bostrom, 2012.
The superintelligent will: motivation and instrumental rationality in advanced artificial agents, Nick Bostrom, 2012.
Safely interruptible agents, Laurent Orseau and Stuart Armstrong, 2016.
Future progress in artificial intelligence: a survey of expert opinion, Vincent Müller and Nick Bostrom, 2016.
Modeling agents with probabilistic programs, Owain Evans et al., 2017.
When will AI exceed human performance? Evidence from AI experts, Katja Grace et al., 2018.
Reframing superintelligence: comprehensive AI services as general intelligence, Eric Drexler, 2019.
Truthful AI: developing and governing AI that does not lie, Owain Evans et al., 2021.
AI Governance
Racing to the precipice: a model of artificial intelligence development, Stuart Armstrong, Carl Shulman, and Nick Bostrom, 2013.
Strategic implications of openness in AI development, Nick Bostrom, 2017.
AI governance: a research agenda, Allan Dafoe, 2018.
The malicious use of artificial intelligence: Forecasting, prevention, and mitigation, Miles Brundage et al., 2018.
Beyond privacy trade-offs with structured transparency, Andrew Trask et al., 2020.
The windfall clause: distributing the benefits of AI for the common good, Cullen O’Keefe et al., 2020.
Institutionalizing ethics in AI through broader impact requirements, Carina Prunkl et al., 2021.
International control of powerful technology: lessons from the Baruch Plan for nuclear weapons, Waqar Zaidi and Allan Dafoe, 2021.
Lessons from the development of the atomic bomb, Toby Ord, 2022.
Digital Minds
Quantity of experience: brain-duplication and degrees of consciousness, Nick Bostrom, 2006.
Propositions concerning digital minds and society, Nick Bostrom and Carl Shulman, 2022.
Consciousness in artificial intelligence: insights from the science of consciousness, Patrick Butlin et al., 2023.
Biological Risk
Information hazards in biotechnology, Gregory Lewis et al., 2019.
The biosecurity benefits of genetic engineering attribution, Gregory Lewis et al., 2020.
High-risk human-caused pathogen exposure events from 1975–2016, David Manheim and Gregory Lewis, 2021.
Inferring the effectiveness of government interventions against COVID-19, Jan Brauner et al., 2021.
Human Enhancement
The fable of the dragon-tyrant, Nick Bostrom, 2005.
The reversal test, Nick Bostrom and Toby Ord, 2006.
The wisdom of nature: an evolutionary heuristic for human enhancement, Nick Bostrom and Anders Sandberg, 2009.
Embryo selection for cognitive enhancement: curiosity or game-changer? Carl Shulman and Nick Bostrom, 2014.
Moral Uncertainty
Statistical normalization methods in interpersonal and intertheoretic comparisons, William MacAskill, Owen Cotton-Barratt, and Toby Ord, 2020.
Why maximize expected choice-worthiness? William MacAskill and Toby Ord, 2020.
Effective Altruism
The moral imperative towards cost-effectiveness in global health, Toby Ord, 2013.
Global poverty and the demands of morality, Toby Ord, 2014.
Moral trade, Toby Ord, 2015.
Grand Futures
Eternity in six hours: intergalactic spreading of intelligent life and sharpening the Fermi paradox, Stuart Armstrong and Anders Sandberg, 2013.
That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox, Anders Sandberg, Stuart Armstrong, and Milan Ćirković, 2016.
Dissolving the Fermi paradox, Anders Sandberg, Eric Drexler, and Toby Ord, 2018.
The edges of our universe, Toby Ord, 2021.
The timing of evolutionary transitions suggests intelligent life is rare, Andrew Snyder-Beattie et al., 2021.
Longtermism
Shaping humanity’s longterm trajectory, Toby Ord, 2023.
The Lindy effect, Toby Ord, 2023.
Ethical Theory
Beyond action: applying consequentialism to decision making and motivation, Toby Ord, 2009.
Pascal’s mugging, Nick Bostrom, 2009.
Infinite ethics, Nick Bostrom, 2011.
Other Topics
Are you living in a computer simulation? Nick Bostrom, 2003 (pre-FHI).
Whole brain emulation: a roadmap, Anders Sandberg and Nick Bostrom, 2008.
Crucial considerations and wise philanthropy, Nick Bostrom, 2014.