- Earlier, relatively small models excelled at analytical tasks and were deployed for jobs ranging from delivery-time prediction to fraud classification, but they were not expressive enough for general-purpose generative tasks
- Between 2015 and 2020, the compute used to train these models increased by six orders of magnitude, and their results surpassed human performance benchmarks in handwriting, speech, and image recognition, reading comprehension, and language understanding