To communicate risks, we often turn to stories. Nuclear weapons conjure stories of mutually assured destruction, briefcases with red buttons, and nuclear winter. Climate change conjures stories of extreme weather, cities overtaken by
This is a landing page for various posts I’ve written, and plan to write, about forecasting future developments in AI. I draw on the field of human judgmental forecasting, sometimes colloquially referred
I previously discussed the capabilities we might expect from future AI systems, illustrated through GPT2030, a hypothetical successor to GPT-4 trained in 2030. GPT2030 had a number of advanced capabilities, including superhuman programming,
Given their advanced capabilities, future AI systems could pose significant risks to society. Some of this risk stems from humans using AI systems for bad ends (misuse), while some stems from the difficulty
Two years ago, I commissioned forecasts of state-of-the-art performance on several popular ML benchmarks. Forecasters were asked to predict state-of-the-art performance on June 30th of 2022, 2023, 2024, and 2025. While there were