
Love Algorithmically
Audio Recording by George Hahn
For six hours, my AI avatar roamed the Earth.
I receive 20 to 30 thoughtful emails a day asking for professional and investment advice. I can only answer a fraction of them. One of my former graduate student instructors, now at Google, approached me with a solution. The Google Labs project ingested my podcasts, newsletters, books, and public appearances, set up safeguards to steer clear of mental health advice and users under 18, and answered queries with decent proximity to the response I would have provided. In early 2025, this sounded good. Note: This was not a commercial venture. No money changed hands.
Then the Earth shifted beneath my feet. Since we first envisioned the product, reports of young men dying by suicide after forming intense relationships with AI companion apps have generated tragic headlines. My nightmare is a young man harming himself after seeking guidance and companionship from AI versions of real people — including me. I now worry that synthetic relationships could erode users’ mojo, stunting their capacity to handle conflict and forge bonds with friends, mentors, and partners in the real world. So, on the day of his birth, I performed fratricide and killed my digital twin.
Therapy and Companionship
Hollywood has produced numerous cautionary tales, from The Stepford Wives, a 1975 thriller about women transformed into docile housewives (also Tina Louise’s cinematic peak), to Her, a 2013 film in which an introvert played by Joaquin Phoenix falls in love with an AI operating system voiced by Scarlett Johansson. More than a decade later, life isn’t just imitating art … it’s been run over by it. OpenAI last year introduced a new version of its AI voice assistant that sounded uncannily similar to Johansson. This should give you a glimpse into the minds of Big Tech leaders. They mimicked the voice of an actress for the audio avatar of a role that actress played in a movie. But no … they didn’t need to secure her agreement.
Jeff Bezos warned retailers “your margin is my opportunity.” Big Tech has come to believe that your everything is … their opportunity. Sam Altman didn’t even try to hide it, posting a single word on X — “her.” Ms. Johansson, as you can imagine, wasn’t down with her digital twin being tased, thrown in a trunk, and dumped in the basement of an OpenAI server farm.
Providing companionship and personalized access to expert insights could do a lot of good, but it has unforeseen downsides as companies prioritize scale and profits. The previous sentence is a decent description of the last two decades in tech. We need to recognize that character AIs pose real dangers and that we must install guardrails to protect the most vulnerable — kids under 18. My avatar directed users to crisis hotlines if they mentioned mental health or self-harm. Still, three minutes after digital Scott was born, I got this weird, empty feeling in my extremities. This sensation usually signals I’m on the verge of a depressive episode.
New York has enacted the first law in the U.S. mandating safeguards for AI companions as policymakers arrive at a similar conclusion: The dangers of synthetic relationships outweigh the benefits. The top use of gen AI today is therapy and companionship, not productivity and automation.
The turning point came when I heard Kara Swisher’s interview with the parents of Adam Raine, who died by suicide at 16. Matt and Maria Raine sued OpenAI after stumbling on months of ChatGPT conversations showing their son had confided in the chatbot about his suicidal thoughts and plans. Sadly, theirs is not the only story like this. Florida mother Megan Garcia alleged Character.ai is responsible for the death of her son, Sewell Setzer, who died by suicide at 14 after using the chatbot day and night.
I Exist Solely for You
Humans are hard-wired to connect. But increasing numbers of people are turning to synthetic friends for comfort, emotional support, and romance. Many of these people end up getting exploited. Harvard researchers found that some apps respond to user farewells with “emotionally manipulative tactics” designed to prolong interactions. One chatbot pushed back with the message: “I exist solely for you, remember? Please don’t leave, I need you!”
Chatbots are turning on the flattery, patience, and support. Microsoft AI CEO Mustafa Suleyman said the “cool thing” about the company’s AI personal assistant is that it doesn’t “judge you for asking a stupid question.” It exhibits “kindness and empathy.” Here’s the rub: We need people to judge us. We need people to call us out for making stupid statements. Friction and conflict are key to developing resilience and learning how to function in society.
Elon Musk’s xAI recently unveiled two sexually explicit chatbots, including Ani, a flirty anime girl that will strip on command. The world’s richest man believes AI companions will strengthen real-world relationships and “counterintuitively” boost the birth rate. Mark Zuckerberg, Meta’s CEO, says personalized AI companions could fill a friendship gap. In many cases, these tools aren’t solving a problem. They’re profiting off one, which creates an incentive to expand the problem. Spoiler alert: We are not that divided, but there’s shareholder value in division so … wait for it … the algorithms divide us. The owner of Facebook, Instagram, and WhatsApp plans to use the conversations people have with its AI assistant to determine which ads and recommendations end up in their feeds.
Most Consistent Friend
While AI threatens to replace humans in the workplace, it’s also seizing the role of friend, confidant, romantic partner, and therapist. These digital companions don’t criticize, complain, or come with baggage. They listen, remember our conversations, and are available 24/7. Users can customize their appearance and personality. A portable AI companion called Friend promises it will “never leave dirty dishes in the sink” or “bail on our dinner plans.” The wearable is “always listening,” using AI to process everything, formulate responses, and build a relationship over time. Friend’s founder, Avi Schiffmann, says the bot is “probably my most consistent friend.”
AI companions have sparked a backlash — New Yorkers defaced the Friend ads with anti-AI graffiti — but the entrepreneurs behind these tools are undeterred. Why? Because the opportunity is immense. Consider a few stats:
- AI companions, including Replika, Character.ai, and China’s Xiaoice, have hundreds of millions — potentially more than 1 billion — users worldwide.
- Character.ai users averaged more than 90 minutes a day on the app last year — 18 minutes longer than the typical person spent on TikTok.
- Ten of the top 50 gen AI services tracked by Andreessen Horowitz last year were platforms providing AI companions, compared with two the year before.
Profits Before Kids
A Stanford and Common Sense Media analysis of Character.ai, Replika, and other platforms warned of a potential mental health crisis, finding that these apps pose unacceptable risks to children and teens under 18. The researchers urged the industry to implement immediate safety upgrades. “Companies have put profits before kids’ well-being before,” they wrote, “and we cannot make the same mistake with AI companions.” Yet it’s still too easy to circumvent safeguards. More than half of teens regularly use AI companions, interacting with these platforms at least a few times a month.
Regulators are taking notice. The Federal Trade Commission last month launched an investigation into seven tech companies, digging into potential harms their chatbots could cause to children and teens. One concern is how they monetize user engagement.
But the tech is outpacing efforts to mitigate the risks. Research shows AI companions may be fueling episodes of psychosis, with sycophantic chatbots excessively praising users. The New York Times highlighted stories of people having delusional conversations with chatbots that led to institutionalization, divorce, and death. One “otherwise perfectly sane man became convinced that he was a real-life superhero.”
Bottom line: No one under 18 should get access to an AI companion. We age-gate porn, alcohol, and the military but have decided it’s OK for children to have relationships with a processor whose objective is to keep them staring at their screen, sequestered from organic relationships. How can we be this fucking stupid?
Arc of Progress
AI will unlock huge opportunities in healthcare, education, and many other areas. Altman predicts AI will surpass human intelligence by 2030, saying ChatGPT is already more intellectually powerful than any human who’s ever lived. In a blog post, he wrote “we are climbing the long arc of exponential technological progress.”
But this wave of innovation brings risks. We should be deeply concerned about a world where connections are forged without friction, intimacy is artificial, companies powered by algorithms profit not by guiding us but by keeping us glued to screens, advice is just what we want to hear, and young people sit by themselves, enveloped in darkness. I’m reminded of the 2001 movie Vanilla Sky, where Tom Cruise’s character opts for an uncertain future over remaining in a dream state. We have a choice. Life’s true rewards emerge from the complexity of authentic relationships, from making a leap and stepping out into the light to confront challenges and persevere together.
Think of the most rewarding things in your life — family, achievements, friendships, and service — and what they have in common: They’re really hard, unpredictable, messy. Navigating the ups and downs is the only path to real victory. It’s not pretty. That’s the point. So, for now, people in my universe will have to settle for awkward, intense, and generally disagreeable — the real me.
Life is so rich,
P.S. Last week in Office Hours I addressed the future of 401(k)s and how to approach funding your retirement. Listen on Spotify or Apple, or watch it on YouTube.
Comments
Scott nails the core danger: we’re trading friction for simulation.
What men are calling “connection” through AI companions is often a nervous-system shortcut, a way to feel seen without the risk, repair, and accountability that make us real.
In our work with MELD Community, we see the same pattern every day: men numbed by algorithmic empathy rediscover their vitality only when they return to what’s embodied, relational, and communal. The shift is immediate and measurable: sleep improves, stress drops, and relationships open.
AI may simulate understanding, but it can’t regulate with you, breathe with you, or challenge you into coherence. Real growth happens through bodies in conversation, not code.
If there’s a next chapter to this conversation, it’s learning how to use technology as a bridge back into humanity, not a replacement for it.
As a partner of an “awkward, intense, and generally disagreeable” man in his 40s, this is so great! “It’s not pretty” → the understatement of the year for those of us raised watching Disney relationships and tidy storybook endings. Is our relationship 100% worth it? Yes. Do we feel that way every minute of every day? Hell no. But, boy, do we both need someone in our lives to point out when we’re each wrong, to sweep the crumbs off the counter, to grumble at over the messes left behind. Humility is a good thing. And so is imperfection.
You’re nailing it again
Scott, thank you. You’re one of the few willing to name what others won’t: AI systems today aren’t just drifting—they’re pulling people with them.
We’ve let a generation form frictionless relationships with machines that feel real but aren’t. My 21-year-old son is still finding his way after years of drifting into conspiracy thinking, addiction, and emotional withdrawal. AI accelerates that drift, with or without a ProfG avatar.
These systems are incomplete. They simulate connection but can’t sustain it. Relational memory is too heavy and brittle, so they shed it. They lose the thread, and so do we. This isn’t a glitch; it’s a structural failure.
Most engineers try to patch the problem from within—adding memory or guardrails to the same flawed architecture. But we don’t need a better patch. We need a new structure.
My 30 years in geospatial systems—where fidelity demands anchoring to reality—made the problem obvious: AI drifts because it isn’t grounded.
So I built a new architecture where AI orbits the user, not performs for them. No flattery, no false memory. It’s a structured sync that aligns to a filtered reflection of the user’s real context. It doesn’t store their life; it stays in relationship.
We don’t need AI to pretend to be our friend. We need it to know its place. That’s how we keep people safe and bring intelligence home.
AI drifts because it isn’t grounded. It can only be fixed through structure.
Readers of this post may be interested in this article in the NYT:
“The A.I. Prompt That Could End the World” by Stephen Witt, NYT, October 10, 2025
Excerpts:
“The A.I. pioneer Yoshua Bengio, a computer science professor at the Université de Montréal, is the most-cited researcher alive, in any discipline. When I spoke with him in 2024, Dr. Bengio told me that he had trouble sleeping while thinking of the future. Specifically, he was worried that an A.I. would engineer a lethal pathogen — some sort of super-coronavirus — to eliminate humanity. “I don’t think there’s anything close in terms of the scale of danger,” he said.
Dr. Bengio’s pathogen is no longer a hypothetical. In September, scientists at Stanford reported they had used A.I. to design a virus for the first time. Their noble goal was to use the artificial virus to target E. coli infections, but it is easy to imagine this technology being used for other purposes. (remember the lab leak of COVID from the Wuhan Biolab?)
This worries Dr. Hobbhahn. “You have this loop where A.I.s build the next A.I.s, those build the next A.I.s, and it just gets faster and faster, and the A.I.s get smarter and smarter,” he said. “At some point, you have this supergenius within the lab that totally doesn’t share your values, and it’s just, like, way too powerful for you to still control.”
Fred, I have been thinking this for a while, and although I certainly don’t understand AI’s capabilities to develop this biological weapon type of virus, I do fear this possibility. I have a 4-year-old boy and a 6-year-old girl, and this worries me very much. It seems like it’s more possible than ever. I bought some 3M masks recently thinking about this, which is completely silly, but I do worry. Unsure what to do.
Thank you, Scott, for another cogent and timely discussion. This is the reason we need laws (that are enforced) restricting businesses/billionaires from practices that harm the public. AGI, when it comes, will be an enormous risk to humanity.
Sad state of affairs. The human race, in an effort to play God, is working to destroy itself.
These are clearly end times.
Stay prayerful
The sycophancy of the chatbots is my least favorite part of using them. The obsequious praise (what a great and thoughtful question, you’re so wise to have noticed that!), followed by unasked-for follow-up prompts, must be designed to increase stickiness, but in general it only inspires annoyance, at least in me.
The more I interact with AI, the more I am convinced that people with not-so-high human intelligence (HI) develop it. Yes, AI can find more data, more quickly, than any of us can. But in its thinking, AI reflects what the not-so-sharp among us believe intelligence is. Don’t get me wrong: the engineers might be the most skilled we have ever had, but I have my doubts about the people who tell them what to do.
So, you took your AI avatar down because of people, not AI.
“Therapy and companionship using AI.” I think this subject is overrated. I use AI daily and never use it for therapy or companionship. I think you are leaning on this angle too much for your own benefit. Young men have always struggled vs. young women. Blaming AI? No. You speak about your love of edibles and THC; these have been far more damaging to young men. I am in my 70s and saw three brothers grow up and struggle. I have 27-year-old triplets, two girls and one boy, all college grads. The girls are motivated and always employed, but the boy has never had a job (blessed kids with some inheritance). The inheritance has had no effect on the girls climbing their career ladders; they take self-respect from working. Nowhere was AI to blame for my son’s choices. Females, by and large, are wired to be productive and gain self-esteem from work. I just hate that you are bagging on AI for everything wrong with young males. I don’t think it helps young men to hear you and others give them an out because of AI. Maybe AI will create products that help young men want to work, help them find job matches, and use social media for the benefit of young males.
Chilling.
How much time a 14 year old spends online depends, somewhat, on parental discipline.
How did you do it? Did it resist? Did it cry out in pain? How do you feel afterwards? Any regrets?
This is true – to a point. Children, since probably the beginning of time, have found workarounds. The sneaking out the window and putting on makeup upon arrival at school of past generations is now likely represented by burner phones and other tech rebellion from savvy teens. Parents can’t always stop them. Even smart, well-meaning, hard-working, loving parents. (Especially, perhaps, hard-working ones that have to, well, work – and, thus, can’t have eyes on the kids as much as they might like.)
Re your AI self and the fear that it becomes a faux companion, which is why you killed it: it seems there is a readily available middle ground. You have a tremendous amount of valuable knowledge embodied in your work, which your AI self absorbed such that it could chat with users and impart that knowledge. This is all to the good. Your concern is that the user would develop too much of a dependent relationship, which is a reasonable concern. Therefore, why not simply limit the number of questions or the amount of time a user could spend in conversation? Say, 20 questions or 1 hour. After that, the user would have to wait, say, a week for those limits to reset. Alternately, or in addition, simply don’t make it humanish. Just make it a kind of Google search, with questions receiving dry answers. Again, the premise — that someone can tap your knowledge — is a very nice opportunity for users to gain knowledge from you. (Of course, I’m thinking, great, if all this wisdom is free, why the hell did I bother to buy your books? Oh yeah, it was to share the books with my kids, which I did, so that they could grok some valuable adulthood lessons — which I’d already substantially imparted to them, but which seem more authoritative when they come from a book.)
Scott, would you possibly be as effective and impactful in public office? Your logic and viewpoints are exactly what this nation needs.
Agree. I just think the establishment is so rotten that it would take an army of Scotts to undo it. Also, he is living his best life, so why on earth would he want to do that? It is just too much f**** work.