As a follow-up to the most recent Scriptnotes, John August wrote a blog post that included one listener’s response:
Language models are built on “training data,” the text you feed into a learning process to produce the model. For very sophisticated models, the training data is vast: for something like ChatGPT, it is roughly all the text you can scrape from the last twenty years of the Internet.
But this means ChatGPT is only about as smart as the average writer on the Internet has been over the past twenty years. Indeed, the models that make up GPT pull results toward the average, not the extraordinary, because the average has much nicer statistical properties than the extraordinary. Companies that want a marketable, scalable product need to be able to tweak, diagnose, and defend what they’re selling, and averages make that possible.
Ultimately, what these models mean is that with the click of a button you can now write as well as the average writer who posts content to the Internet, and so the old “average” is the new “zero.” If you wrote at the average level of the Internet in 2022, you now write at the zero level. [Emphasis in original.]
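The listener’s point about the average having “nicer statistical properties” can be made concrete with a toy sketch. This is not how GPT is actually trained; it is a minimal maximum-likelihood next-word model over an invented four-sentence corpus, with made-up names throughout. Because such a model just counts what most writers wrote, its top prediction is the common phrasing, never the rare, extraordinary one:

```python
from collections import Counter

# Toy corpus standing in for "average internet writing."
corpus = [
    "the view was very nice",
    "the food was very nice",
    "the weather was very nice",
    "the prose was very luminous",   # the rare, extraordinary choice
]

def next_word_counts(sentences, word):
    """Count which words follow `word` across the corpus."""
    counts = Counter()
    for sentence in sentences:
        tokens = sentence.split()
        for a, b in zip(tokens, tokens[1:]):
            if a == word:
                counts[b] += 1
    return counts

# A maximum-likelihood model's top prediction is the most frequent
# continuation -- the statistical average of the corpus.
counts = next_word_counts(corpus, "very")
prediction = counts.most_common(1)[0][0]
print(prediction)  # prints "nice": the common word wins 3 to 1
```

The one writer who chose “luminous” is outvoted; scale the corpus up to twenty years of the Internet and the same pull toward the middle applies.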