Previously, I argued
[https://bounded-regret.ghost.io/p/1527e9dd-c48d-4941-9b14-4f7293318d5c/] that
emergent phenomena in machine learning mean that we can't rely on current trends
to predict what the future of ML will look like.

Previously, I've argued that future ML systems might exhibit unfamiliar,
emergent capabilities
[https://bounded-regret.ghost.io/p/1527e9dd-c48d-4941-9b14-4f7293318d5c/], and
that thought experiments provide one approach
[https://bounded-regret.ghost.io/p/a2d733a7-108a-4587-97fb-db90f66ce030/] to
predicting these capabilities and their consequences.

In the previous post
[https://bounded-regret.ghost.io/thought-experiments-provide-a-third-anchor/], I
talked about several "anchors" that we could use to think about future ML
systems, including current ML systems, humans, and ideal optimizers.

Previously, I argued
[https://bounded-regret.ghost.io/future-ml-systems-will-be-qualitatively-different/]
that we should expect future ML systems to often exhibit "emergent" behavior,
where they acquire new capabilities that were not explicitly designed for.

In 1972, the Nobel prize-winning physicist Philip Anderson wrote the essay "More
Is Different [https://science.sciencemag.org/content/177/4047/393]". In it, he
argues that quantitative changes can lead to qualitatively different and
unexpected phenomena.