This blog post by Sarah Constantin has an impressively comprehensive tally of performance trends in AI across multiple domains.
- In game-playing performance, e.g. chess (see right, based on Swedish Chess Computer Association data), “exponential growth in data and computation power yields exponential improvements in raw performance.” In other words, the relationship between them is linear on a log-log scale (a power law).
- This relationship may be sublinear in non-game domains, such as natural language processing (NLP).
- “Deep learning” created discontinuous (but one-time) improvements in image and speech recognition, though not in strategy games or NLP. Its record in machine translation and arcade games (see below right) is ambiguous.
So “deep learning” might not have been as transformational as the tech press would have had you believe, and as Miles Brundage observed, has largely been about “general approaches for building narrow systems rather than general approaches for building general systems.”
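The first bullet's claim – exponential inputs yielding exponential outputs – is equivalent to saying performance is a power law of compute, which shows up as a straight line on log-log axes. A minimal sketch, using entirely synthetic numbers (the compute values and the 0.4 exponent are illustrative assumptions, not figures from Constantin's post):

```python
# Sketch: if performance follows a power law of compute, the data fall on
# a straight line in log-log space, and a linear fit recovers the exponent.
# All numbers below are synthetic, purely for illustration.
import numpy as np

compute = np.array([1e12, 1e13, 1e14, 1e15, 1e16])  # hypothetical FLOPs
performance = 3.0 * compute ** 0.4                   # assumed power law, exponent 0.4

# Fit a line to log(performance) vs log(compute); the slope is the exponent.
slope, intercept = np.polyfit(np.log10(compute), np.log10(performance), 1)
print(f"estimated exponent: {slope:.2f}")
```

A "sublinear" relationship of the kind suggested for NLP would correspond to an exponent noticeably below the game-playing case, i.e. a flatter line on the same log-log plot.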
And we also know that Moore’s Law has been slowing down of late.
If this is broadly accurate, then the spate of highly visible AI successes we have seen in quick succession – surpassing peak human performance in Go in 2016, and in multi-player No Limit poker a couple of months ago – could turn out to be a one-off coincidence, to be followed by another AI winter.
And we will have to do something cleverer than naively projecting Kurzweil’s graphs forwards to get to the singularity.