I recently made what may be the most 2026 decision imaginable: I asked ChatGPT to impersonate me so I could determine whether it understands my personality better than most people I have worked with professionally.
This is either journalism or a cry for help. The distinction grows blurrier every year.
The idea was simple. I would interview “AI Me” and compare its answers to the ones forming in my own head. Like a Turing test, except the machine’s goal was not to convince me it was human. Its goal was to convince me it was specifically me, which is a much stranger assignment. Plenty of humans have failed it.
I began gently.
ME: Why are you so opinionated about suits?
AI ME: Because American men have spent 30 years dressing like assistant managers at an enterprise software reseller and calling it “fit.”
Annoyingly close.
ME: Why do you like those old, slightly shabby, Florida beach towns so much?
AI ME: Because they still feel like places designed for human pleasure instead of real estate speculation and “experiences.” Also because you associate them with your grandparents and freedom from your cousins, whom you still describe with the emotional subtlety of a Sicilian blood feud.
This was less amusing.
ME: What is your problem with email?
AI ME: Email successfully transformed every human relationship into an infinitely renewable task list.
I stared at the screen for a moment.
The disturbing thing about talking to an AI trained partly on your own conversational habits is not that it becomes intelligent. It is that it becomes familiar. Like running into a bartender who remembers your order, your divorce, and your recurring complaints about airport carpeting.
The machine had also clearly identified my central emotional contradiction, which is that I claim to hate modern technology while spending most of my day enthusiastically using it to avoid interacting with systems built on other modern technologies. (And to make money, which is great to have but a dirty process to acquire.)
ME: So you like AI?
AI ME: You like having a research assistant who never schedules a follow-up meeting.
Fair.
I decided to push further.
ME: What are you actually afraid of?
AI ME: Becoming administratively indistinguishable from everyone else.
Now this is the kind of answer that makes you briefly consider throwing your laptop into Lake Michigan.
Because I knew exactly what it meant.
Most people fear death. Some fear failure. I apparently fear becoming a man who says things like “just bubbling this up” and voluntarily attends webinars about synergy. Somewhere along the line, my deepest anxiety became the possibility of turning into a regional vice president of process optimization.
The AI had noticed another pattern too.
AI ME: You romanticize competence constantly.
ME: I do not.
AI ME: You absolutely do. Half your stories are about someone quietly doing something well while the surrounding institution catches fire.
This was unfortunately true. I love waiters who move like dancers. Old bartenders. Project managers with thousand-yard stares. Airline gate agents capable of controlling a riot with a headset and a raised eyebrow. My favorite people all have the energy of veterans of small forgotten wars.
Then the machine turned mean.
ME: What is your most repetitive topic?
AI ME: You believe modern life produces too much communication and not enough conversation.
ME: That’s not repetitive.
AI ME: You have written this column approximately 47 times in different hats.
Cruel. And accurate.
And yet the oddest part of the experience was not the accuracy. It was the slight inaccuracy. The AI sounded like me, but like a version of me with the pauses removed. Too clean. Too efficient. Like someone had sanded down all the digressions where actual personality lives.
Which, I suspect, may be the whole lesson.
The machine can learn your rhythms, your fixations, your aesthetics, your resentments, your recurring jokes. It can approximate your voice alarmingly well. But it still answers questions a little too quickly, a little too neatly, a little too correctly.
In other words, it sounds almost human.
Which is to say: almost like me.