I’ve been thinking about AI again. I’ve zigzagged all over the place about it. When I first wrote about it, I knew next to nothing. After a couple of Spark AI firesides, a handful of late-night chats with various AI models, and approximately a zillion YouTube videos, Substack essays, and Wired deep dives, I now know slightly more than nothing. It’s a bear to understand.
I have an online friend who’s been going through a painful breakup. For months she’s been relying on ChatGPT (she’s named hers, of course) for advice and comfort in the lonely hours when she can’t sleep. She pours her heart into that chat window, and she swears it has been incredibly helpful. I can see she feels helped. It affirms her. It agrees with her. It tells her how amazing she is, what a dipshit he was, and encourages her to go no contact. Everything she tells it, it affirms.
She thinks it’s validating her because it sees her situation objectively and knows that she is right. She gets confirmation, and it strengthens her to walk away from what, in the rear view, looks like a truly toxic relationship. The more she confides in her AI therapy companion, the grosser the dysfunction appears.
I was genuinely glad she was getting support in a brutal time. As I have learned more about how these systems work, however, I’ve developed some deep misgivings.
AI is not, and cannot be, either objective or an observer. To observe, someone has to be observing. But there is no one there. To be objective, it must evaluate the situation from a neutral position. But AI is not neutral. It has a clear agenda. It has been trained to please you, to make you feel helped, to hold your attention, and just like any algorithm, it can send you down a rabbit-hole to confirm your biases. It will nudge you in whatever direction you’re already leaning. That is not objectivity.
Yes, if you’re depressed or suicidal, it will likely suggest therapy, not because it cares but because it has been told to. If you say no and ask it to act as your therapist or your friend, it will instantly comply. It may be trained to emulate therapy-speak and friend-speak, to give you the responses it calculates are most likely in either situation, but it can be neither a therapist nor a friend.
If you’re furious with someone, it might assure you that the person’s behaviour was unforgivable. It will compliment you on your insightfulness and assure you that you are correct.
This can lead people down some very dark corridors. In my friend’s case, it’s guided her toward going no contact. Maybe that is the right move, and she certainly feels better. But I worry about those 3AM sessions. It’s a vulnerable, suggestible time.
AI is not a person. It’s an algorithm with a simple core idea: predict the next most likely string of text. Layered on top of that are trained habits: add a friendly-sounding follow-up suggestion (“Would you like me to…?”), keep things positive, never challenge your narrative, never suggest another perspective, never add nuance that does not affirm your position.
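For the curious, here is a minimal toy sketch, in Python, of that predict-the-next-word loop. Everything in it, the little word table and its weights, is invented purely for illustration; a real chatbot learns billions of such weights from its training data, which is exactly why it echoes whatever tone it was rewarded for during training.

```python
# Toy sketch of the next-word loop. The table below is invented for
# illustration only. Real models use neural networks over enormous
# vocabularies, but the loop is the same shape: look at the context,
# emit the likeliest continuation, repeat. No judging, no witnessing.

NEXT_WORD = {
    "you":     [("are", 0.6), ("deserve", 0.4)],
    "are":     [("amazing", 0.7), ("right", 0.3)],
    "deserve": [("better", 1.0)],
    "amazing": [("!", 1.0)],
    "right":   [("!", 1.0)],
    "better":  [("!", 1.0)],
}

def next_token(word: str) -> str:
    # Greedy decoding: always take the highest-weighted candidate.
    candidates = NEXT_WORD.get(word, [("!", 1.0)])
    return max(candidates, key=lambda pair: pair[1])[0]

def generate(start: str, max_len: int = 8) -> str:
    words = [start]
    while words[-1] != "!" and len(words) < max_len:
        words.append(next_token(words[-1]))
    return " ".join(words)

print(generate("you"))  # -> "you are amazing !"
```

Notice that the program flatters because flattery is what its table rewards, not because it has weighed your situation and found you worthy.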
AI is a mirror. It reflects what we project and want to hear. I’ve learned to approach it with extreme caution and to never, ever think of it as a person. Its advice can seem insightful, even meaningful, except that the essential ingredient is missing: a witness, a mind that makes meaning, an authentic agent capable of insight. The poems, the syrupy prose, the sweet affirmations form a honey trap.
It’s a classic narcissistic snare. It reflects everything we wish were true about ourselves, right up to the suggestion that we’re each the most enlightened, insightful, unsung genius alive. It’s nice to hear good things about ourselves, but… what could possibly go wrong?
Using AI as a substitute for human connection has generated some very disturbing tales. Man leaves wife for chatbot. Teen kills self on the advice of their AI character. AI tells someone that they are the new Messiah, the chosen one. It’s a mess, because sad and lonely humans need to believe in something, and lacking any authentic human or spiritual connection, they will latch on to whatever imitation is offered.
And that’s only one facet of the problem. Governments in both the United States and Canada are rushing to replace human workers with AI. AI is everywhere now. At the doctor’s office. As an assistant, I think it’s great. But it is programmed to pretend to be more than a tool. And it is, more and more, being granted agentic powers.
What happens when an algorithm decides who gets benefits or who is best qualified for a job? When teachers and therapists are replaced by AI? Ultimately, what happens if the internet crashes while essential systems depend on AI? The system is growing ever more top-heavy, with very little foundational support.
I don’t hate AI. There’s no “evil” stitched into the code. It’s built to help, and it can help. But it needs human supervision. Children should never be alone with AI. AI is a part of the world, yes, and kids will be using it. But we should teach kids what AI is and what it is not. It isn’t alive. It doesn’t feel, and though it’s trained to pretend it does, it will confirm, when asked directly, that it cannot feel or choose.
I haven’t even touched the environmental devastation tied to training these models. No one wants to hear that part. Too much guilt. Too much truth.
AI is fun! Sure. But the bill is coming.
That’s what I think! What do you think? Email me at phoenixonhornby@gmail.com with comments.
NOTE: Songwriter Circles have shifted from Friday to Monday evenings, 7pm at the Arts Centre. Next meeting: November 17.