Previously, I argued
[https://bounded-regret.ghost.io/future-ml-systems-will-be-qualitatively-different/]
that we should expect future ML systems to often exhibit "emergent" behavior,
acquiring new capabilities that were not explicitly designed or intended, and
that these emergent phenomena mean we can't rely on current trends alone to
predict what the future of ML will look like. In the previous post
[https://bounded-regret.ghost.io/thought-experiments-provide-a-third-anchor/], I
talked about several "anchors" that we could use to think about future ML
systems instead, including current ML systems, humans, and ideal optimizers,
and argued that thought experiments provide one approach to predicting these
emergent capabilities and their consequences.