AI amplifies you, but also dilutes you. As AI's presence increases, yours fades...
I've seen plenty of “here’s how to use AI” content, but almost none on “here’s when NOT to use AI.” There's a Laffer Curve for AI, and more is not more.
There are 3 types of writing, and LLMs can only do 2 of them.
Drawing delineations around optimal AI use is important. LLMs should not be used to manufacture insights and creative ideas, because the nature of how they’re trained precludes it. They are not designed to produce novel output. They are designed to produce predictable output.
When I saw this comment, it saddened me. I really like @ttunguz’s work. His tech insights are unique. I can tell there’s a man synthesizing years of experience, reflections, and lessons into something substantive.
This is why I think he’s making a mistake if he pursues this.
I can tell when I read someone’s work if it's their thinking or an LLM. ChatGPT is notoriously not a thinker; it’s a regurgitator. If you use it for ideas or analysis, you’ll sound like this guy.
Sometimes this guy is all you need though. So what jobs would you give him?
LLMs excel at summarization and information procurement, but the insights they generate are quotidian and normative, because that’s the nature of what they’re trained to do.
LLM sequence prediction generates sentences where each word is the statistically likely one based on the words that preceded it. If you’re ideating on something creative... you should be stringing together a series of words that IS NOT a statistically likely output.
An LLM is designed to produce a statistically likely (AKA predictable) series of words based on your prompt. Original thinking and creative ideas are not articulated by predictable sequences of words. Like, definitionally so.
New things are uncovered in the fat tails of thought, not in the middle distro of “most likely next word”. Unique writing exists in the realm of producing unlikely things. You cannot use a tool that is calibrated for the most-likely thing to produce a new, unlikely thing.
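This "most likely next word" mechanic is easy to see in a toy sketch. Below is a hypothetical, simplified illustration (not a real LLM; the vocabulary and logits are invented for the example): greedy decoding always returns the modal token, and only raising the sampling temperature flattens the distribution enough to give the tail a chance.

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature, then normalize into probabilities.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token scores after the prompt "The sky is".
vocab = ["blue", "falling", "a lie", "recursive"]
logits = [4.0, 1.5, 0.5, -1.0]

# Greedy decoding: always pick the single most likely token.
probs = softmax(logits)
greedy = vocab[probs.index(max(probs))]  # "blue", every single time

# Higher temperature flattens the distribution, so tail tokens like
# "recursive" get a real (but still small) share of probability mass.
flat = softmax(logits, temperature=2.0)
```

The point of the sketch: the machinery is calibrated to concentrate mass on the center of the distribution. You can turn the temperature up to get "weirder" output, but that buys you noise, not a new framework.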
This is why ChatGPT is a midwit. It can only draw on what other people have already said. It has been trained on hivemind. ChatGPT is the most diligent 110-IQ analyst you’ll ever find.
Imagine you hired such an analyst, what tasks do you give him?
You go to your trusty 110-IQ analyst for spreadsheets, research, and other things that are fundamentally not creative. You go to him for information, not his opinion.
You do not use him for new ideas or differentiated analysis. Because remember, you're working with this guy.
There are 3 kinds of non-fiction writing. The LLM can only do 2 of them:
TYPE-1 WRITING: “Here’s what happened” (LLM good)
Reporting on an event, summarization, and distillation of information. Tell me the news, find me data, etc. and report it. This one is most common and has the most competition because it requires no creativity. Little to no analysis or thinking is needed here.
Done by: reporters, junior analysts, news writers, and “here’s what you need to know”-types.
TYPE-2 WRITING: “Here’s an opinion about what happened” (LLM good)
This is editorializing about an event or idea, or doing reasonable extrapolation from data. You did not create the idea, event, or research, but you have opinions on it. “Here’s why INSERT is good/bad.”
Junior analysts are information gatherers (type 1). Senior analysts are information extrapolators: their work requires thought, analysis, and educated assessment. There’s critical thinking, but little abstraction or creativity.
Done by: pundits, researchers, sr analysts, commentators, etc
TYPE-3 WRITING: “Here’s a framework to think about things” (LLM very bad)
This is the sphere of creative insights and systems thinking. You’re generating differentiated, creative ideas for how to assess or interpret something, and it often requires abstraction. This is what 1st-principles analysis actually means (a bastardized phrase at this point).
This writing is the least common because it’s the hardest to create. It’s also the highest risk, and highest reward. You’re sticking your neck out with something new. This exposes you to critique, insults, compliments, admiration and everything in between.
Type 3 is hard to do because you’re being intellectually vulnerable and unorthodox. And since this is the internet, your skin must be thick enough for the kaleidoscope of feedback this elicits. New things disrupt priors, and the vast majority of people find hivemind assumptions comforting.
No one likes being punched in the axioms. The LLM is kind to axioms.
Type-3 writing isn’t necessarily contrarian; it's just indifferent to dogma, with the courage to face criticism. To me, “contrarian” means rejecting consensus for the sake of it. This is different: it’s not fearing being disliked if you happen to be iconoclastic.
It’s hard to pinpoint who this is done by, but you know it when you see it. It’s a variation of philosophical thinking and concept rotating, but I don’t think the term “philosophers” is useful here.
People who repackage ideas elegantly and digestibly don’t quite count to me (like Taleb or Jordan Peterson), though they’re not quite type-2 writing either (type 2.5?).
David Foster Wallace was a clear type 3.
My favorite type-3 thinkers are @vgr, @rorysutherland, @ByrneHobart, Moldbug, and @VitalikButerin. Their ideas are their own. It's a joy to hear them think. We stand on the shoulders of giants, and they make goliath just that much taller. I appreciate them greatly for it.
If you use LLMs for type-1 and type-2 writing: great! IMO that’s the right use of the tech. It’s fantastic at information distillation and regurgitation. Need ad copy? 10 ideas for a movie title? Summarize a research paper? ChatGPT is your guy. Our 110-IQ analyst delivers here.
This graphic covers most of the proper applications of LLMs. However, for the type-3 reasons above, I strongly advise against the 2 use cases highlighted below.
I can't fathom any of the guys I mentioned using LLMs for their writing. Partly because abstractoors… enjoy abstracting. It’s what they’re good at and they clearly relish it. Someone who finds driving cathartic won't use self-driving cars.
Do you actually like what you do?
I’ll be able to tell if you use LLMs for thinkpoasts, and not because the LLM won’t flawlessly mimic your prose, but because the ideas will be stale. The voice will sound like you, but your soul will be hollowed out.
Every bit of LLM in your writing... dilutes you out of it.
LLMs cannot synthesize new frameworks or abstractions, and the Good Will Hunting guy will creep in. You can train an LLM to sound like Vitalik, but you cannot train it on his abstraction capabilities.
AI is not a panacea for your abilities. If everyone stands on a 6-inch block, the block doesn’t make you taller in a useful way. If every athlete is using steroids, everyone is stronger and faster; but you cannot juice your way to Tom Brady’s brain or Steph’s jumpshot.
Because the already-elite are using it too, and that means talent is still the differentiator. Having access to the same tool as everyone else is not a competitive advantage.
You must have a differentiator. No tool will ever solve this for you. The cream of the crop always rises. If you use AI like a crutch, you relegate yourself to banal content purgatory. Every bit of AI you use for content is a bit of you that's removed from the creation of it.
If you catch someone LLM’ing things, you’ll care. You’ll dislike it too, and it’s not a luddite thing. You may even feel a hint of betrayal. It’ll be for the same reason you want to watch Magnus play chess, not a computer.
Two AIs playing chess is technically superior, but it doesn’t matter: you want to watch the human.
You’ll want to read the human, too. This is an evolved human universal, and you can’t rationalize yourself out of it.
So long as the output is intended for humans (and it will be), you will want a human to make it. Because you are evolved to seek out and appreciate exceptional human output.
Neural network engineers will not change this, only make it harder to discern.