Future of Humanity Institute (2005–2024)

Established in 2005, initially for a 3-year period, the Future of Humanity Institute was a multidisciplinary research group at Oxford University.  It was founded by Prof Nick Bostrom and brought together a select set of researchers from disciplines such as philosophy, computer science, mathematics, and economics to study big-picture questions for human civilization.  The aim was to shield these researchers from ordinary academic pressures and to create an organizational culture conducive to creativity and intellectual progress.

During its 19-year existence, the team at FHI made a series of research contributions that helped change our conversation about the future and contributed to the creation of several new fields and paradigms.  FHI was involved in the germination of a wide range of ideas, including existential risk, effective altruism, longtermism, AI alignment, AI governance, global catastrophic risk, grand futures, information hazards, the unilateralist’s curse, and moral uncertainty.  It also did significant work on anthropics, human enhancement ethics, systemic risk modeling, forecasting and prediction markets, the search for extraterrestrial intelligence, and the attributes and strategic implications of key future technologies.  One major contribution was showing that rigorous research on big-picture questions about humanity’s future is even possible.

Over time FHI faced increasing administrative headwinds within the Faculty of Philosophy (the Institute’s organizational home).  Starting in 2020, the Faculty imposed a freeze on fundraising and hiring.  In late 2023, the Faculty of Philosophy decided that the contracts of the remaining FHI staff would not be renewed.  On 16 April 2024, the Institute was closed down.

Over the course of its 19 years, FHI inspired the emergence of a vibrant ecosystem of organizations where the kinds of questions that FHI investigated can be explored.  FHI alumni will continue to research these questions both within Oxford and at other places around the world.  Topics that once struggled to eke out a precarious existence at the margins of a single philosophy department are now pursued by leading AI labs, government agencies, nonprofits, and specialized academic research centers (with many more in the process of creation).


Resources


FHI Books


Key FHI Papers

Existential and Catastrophic Risk

AI Safety

AI Governance

Digital Minds

Biological Risk

Human Enhancement

Moral Uncertainty

Effective Altruism

Grand Futures

Longtermism

Ethical Theory

Other Topics