GPT won't replace us, but it will make us better communicators
The unforeseen benefit of GPT algorithms is not additive, it's subtractive.
Like you, I've been reading the hype about how GPT has weaponized NLP and is going to be the beginning of the end of pretty much everything we know and love, from programming to writing.
I was blown away by some of the quick, seemingly transcendent results from my early experiments with GPT-3. But after studying this closely for a while, I realized that what I had initially described as magic was actually the result of starting with the text of an exceptional writer, and of that text being large enough for the machine to sound like the author without using the author's words verbatim.
That is, unless you actually are Byung-Chul Han, what you'll get back from GPT-3 is likely less-than-perfect copy-pasta that sounds a bit uncanny valley. It's a stop along the editing road. I should know: I ran small corpora of Han's text through GPT-3, and what I got back was as resplendent as what I had read from Han without being quite as deep or difficult to unpack.
All else being equal, the smaller the input corpus, the more GPT-3's output is just copy-pasta. Where it really shines is when the input corpus is large and the author is a professional; then the output sounds like the author without plagiarizing a single word. I'm not sure where the threshold is, but it's likely similar to the minimum sample size below which readability metrics stop being meaningful.
It's as if you handed a Han book to your junior copywriter and asked for an abstract. There's actually a good value proposition, in the traditional academic publishing sense, for a Han Reader, but that's beyond the scope of this discussion.
My personal goal was always more modest: I admire the aphorisms of Nietzsche and Szasz, but given my ramble-on, stream-of-consciousness writing style, I didn't want automagical transforms to bring a premature end to the joy I get from writing fast and furious.
GPT-3 is, however, genuinely useful for getting a quick summary or tl;dr that removes cruft: the kind of sideways writing that's my default mode and that takes enormous effort to tighten manually, even for an experienced editor.
There, the value proposition is priceless since we have little way of reducing the stack overflow that our brains face daily as the breadth and depth of information complexity overwhelms even the smartest among us.
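If you want to try the tl;dr trick yourself, here's a minimal sketch against the (now legacy, pre-1.0) OpenAI completions API. The model name, the Tl;dr prompt suffix, and the sampling parameters are my assumptions for illustration, not a record of my exact setup:

import os
import openai  # pip install openai (pre-1.0 interface shown here)

openai.api_key = os.environ["OPENAI_API_KEY"]

def tldr(draft: str) -> str:
    # Appending "Tl;dr:" nudges GPT-3 into summary mode; a low
    # temperature keeps the output close to the source text.
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=draft + "\n\nTl;dr:",
        max_tokens=256,
        temperature=0.3,
    )
    return response["choices"][0]["text"].strip()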
Since the English language presents a paradox of choice for writers and speakers alike, it's obvious that less is more when it comes to editing aphorisms. This in spite of the fact that the subtitle of Barry Schwartz's The Paradox of Choice was Why More Is Less. Schwartz was referring to products, of course, not words in a dictionary.
Quantitatively, you can see how this works:
╭─watson@acer in ~ took 1ms
╰─λ wc *.txt
   6  337 2026 1st.txt
   3  109  583 2nd.txt
   3   48  300 3rd.txt
  12  494 2909 total
We go from the original version at 337 words, to the version returned from GPT-3 at 109 words (almost exactly a third), to the final human-edited version at 48 words: a 56% reduction from the GPT version and only 14% of the original. There's a reason so many authors refer to this as a kind of automated tl;dr or abstract. I'm not sure how long this will take to reach humanities programs, but it's probably already de rigueur in computer science programs.
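For the record, those ratios fall straight out of the wc counts above in a few lines of Python:

counts = {"1st.txt": 337, "2nd.txt": 109, "3rd.txt": 48}  # words per draft

print(f"GPT-3 vs. original: {counts['2nd.txt'] / counts['1st.txt']:.0%}")  # 32%
print(f"final vs. GPT-3:    {counts['3rd.txt'] / counts['2nd.txt']:.0%}")  # 44%
print(f"final vs. original: {counts['3rd.txt'] / counts['1st.txt']:.0%}")  # 14%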
The readability statistics, Flesch-Kincaid and the like, are not as dramatic, but they do show an improvement of about a grade level per step in my three-step experiment.
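If you want to reproduce that, the textstat package is one way to do it; a minimal sketch, assuming the same three files from the wc run above:

import textstat  # pip install textstat

for name in ("1st.txt", "2nd.txt", "3rd.txt"):
    with open(name) as f:
        text = f.read()
    # Flesch-Kincaid grade level: lower means easier to read.
    print(name, textstat.flesch_kincaid_grade(text))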
I'm satisfied with the result, given the work I put into exploring GPT this summer. I'm sure it's nowhere near where Notion AI is now, but the future is promising for those of us who can write but are challenged by the reductionist editing process.