
I really liked Chris Brogan’s recent article on LinkedIn, “AI is a Thought Partner, Not a Thought Replacer.” I do worry that people who lean on AI too heavily, and for the wrong things, will lose their ability to think and write.
Brogan writes:
If you haven’t had a good conversation with something like ChatGPT, then you’re missing out. It can really help you refine your thoughts. Give you some other ways to consider information, and can largely guide you to make better decisions. When using the tool like this, you’re treating it as a “thought partner,” a collaborator to express your own thoughts and ideas.
By contrast, if you ask GPT to answer a question, and then feed that answer back to someone, you’ve bypassed YOU in this process. You’ve given up your agency, your value. And essentially, you’ve pulled yourself out of the rotation for a person I’ll want to question, because I can already ask GPT what it thinks without asking you.
This makes sense to me. If you are reaching out to someone for their opinion or feedback and they send you what ChatGPT spit out, it won’t feel great, and might even seem like a snub. And yes, you could easily have done that yourself, if that was the type of feedback you were looking for.
Brogan goes on to talk about how the random way humans think is a differentiator. We will connect the dots and come up with ideas that robots might not. He writes:
Humans, by contrast, work from our own autobiography (for better or worse), our own experiences, things we’ve read or seen, some light searching, and often from very non-linear connections of thought. AI is largely linear in how it thinks: A, B, C, __. Humans are non-linear: A, Brad Pitt, Fight Club, Soap.
Brogan suggests that we remember to add our own input rather than simply passing along the robot’s output.
And I cheered when I read that other people can have a bad reaction to AI-generated content. I can smell it a mile away, and it often gives me the creeps. What I didn’t know is that there’s a name for it.
When we draw things with AI art tools, it’s fun. The first few bunches of times. And then, the novelty wears off (to some) and we find another way to approach this, because on some level, humans upon receiving artificially-generated information tend to FEEL something is off at some level in their brains. It’s that whole uncanny valley thing.
We will be grappling with how to best use AI for a while. Some people will want to use it for everything, and some people will try to avoid using it for as long as they can. What we all probably can agree on is that it’s here to stay.
I recommend clicking through and reading all of Brogan’s article here.
Photo by Tim van der Kuip on Unsplash