GPT4 as a Thinking Partner, and Falling in Love
What I will not do in this essay is waste your time with bullets GPT4 could just as well generate. Like "1. Brainstorming and ideas, 2. Finding relevant information, 3. Adding a new perspective". When I see such headings above paragraphs, I skip over them, because in almost every case the longer explanation adds no additional value.
Herein lies the problem of making an LLM into a thinking partner: it is trained on content structured such that the marginal insight of each subsequent sentence quickly drops to zero. Diagnosing why content is structured this way (SEO, culture, laziness) is currently unimportant to me, so I won't bother speculating more than that parenthetical.
So I'll do things like use highly academic language to prompt the LLM away from blogspam, or ask it to extend nonsense metaphors as a form of in-silico divination, or try to coax new examples out of it by first having it describe the characteristics of existing ones.
This leads to a different problem which I haven't heard many people talk about yet: becoming enchanted by the machine.
A digression. Humans are biased to think that the information in front of them is important. This is useful in an information-scarce landscape. In an information-abundant one, though, it leads to overlong engagement without pausing to ask, "Is engaging with this worth it?" (Cue a further digression into clickbait and why so many Netflix shows end every episode on a cliffhanger.)
I used to be terrified, horrified even, by this enchantment with the digital. Doomscrolling appeared to me a kind of hell. I'm no longer worried, though. Consciousness seems more like an onion which can develop layers outside of such traps. (And the fear, as I now understand it, came from my own control issues. Developing different kinds of trust has been great.)
Back to silicon producing enchantment. In the pre-ChatGPT days I found myself experiencing a very strange state of meaning-making when interacting with smaller LLMs. Words stitched together in response to anything I could dream of felt truly magical for how coherent they could be.
That is where the magic begins, and where it ends. Coherence alone does not make thought. The closer I look at GPT4's output, the more disappointingly obvious this becomes, and the more I struggle to use it in ways that offer genuine novelty. And I've really tried. Seriously.
The enchantment ends up looking like a kind of falling in love, or more accurately an infatuation, with something one does not yet understand. Unlike falling in love with a person, with whom we can collaborate, grow, and change, LLMs do not yet update on a one-on-one basis (or rather, the counterexamples to date are underwhelming). And so comes the disenchantment, the disillusionment of glimpsing behind the curtain and realizing no one is home. We grow another layer, and it does not. I'll be bold enough now to venture that it cannot, though I hold that view with a healthy skepticism of myself and enthusiasm for more compute and experiments.
So once I see GPT4 as an anxious people-pleaser, the spell breaks and I cannot unsee it, cannot interact without that understanding preempting each prompt I give it. And these aren't simply stylistic or cosmetic issues that dress up the language; they are core to its very ability (or not) to render novel, interesting, or meaningful output.
Meanwhile, there are people working on grounding the outputs of these models so that they are factually accurate (never mind the epistemic assumptions there). As far as I can tell, sitting here in my bathrobe with my coffee, getting these models to do creative work is fundamentally at odds with such an approach. They are already grounded too firmly in the patterns of long-winded, lazy, overcautious writing.
George out.
(Oh, final note to the human readers: if you'd like to explore using an LLM as a thinking partner together, let's talk. I'd also love to hear counterexamples to my pessimism above.)