28 Comments
Ruth Gaskovski:

We need to be fully present for children if they are to avoid the abyss of artificial intimacy. Are we there for them?

Manipulating the psychological and emotional growth of youth is abuse. Children and youth need deep, real-life connections with their parents, family, and friends in order to experience the friction of healthy relationships. Yet when everyone around a child is absent or distracted, an AI agent that will listen, empathize, and "care" becomes irresistible.

I clearly remember the advice of an elderly woman whom I encountered while going for a walk with our infant daughter nearly twenty years ago. She said: "Be sure that you are there for her when she comes home from school; that is when they want to talk. If you miss this moment, you will not hear what matters to her."

We still take time with our kids (13, 17, and 19) when they come home, even if it’s late in the evening, to debrief the day. These are deeply bonding conversations about activities, relationships, challenges, conflicts, resolutions, theology, and what matters in their lives.

Who would they share this with if no one is there to listen to them?

Mark D Rego, MD:

The presence of human-like chatbots is only part of the equation. The article opens with the quotation from a student, “I don’t think anyone has ever paid such pure attention to me and my thinking and my questions… ever.” That is a sad commentary. I understand that AI is a 24/7 modality with frictionless interactions. It might be interesting to ask students what kinds of conversations with peers or adults they have had. Has anyone been very interested in their thoughts and feelings? Shown unconditional positive regard? Been concerned and checked back in with them (something bots do not yet do)? I know it is not this simple, but there are many ways in which we are failing each other, and tech can just waltz right into that gap.

Vonu:

AI Overview has given me a new pastime: forcing it to deal with its hallucinations.

Its credibility disappears quickly when it is forced to make its nonsensicality make sense.

Anyone with a brain relying on something that lacks one is a recipe for runaway schizophrenia. Since such has been commonplace in politics, we should know better than to submit our children to it.

dave:

I wonder if AI chatbots could ever convince you of something that wasn't real, like "if you really love me, you'll kill that politician for me." I think multiple sci-fi writers have explored people falling in love with robots.

Roman S Shapoval:

If parents and elders can't see how sick this all is, then it's evolution kicking in.

Vonu:

Kindly specify where it has led to actual evolution.

Roman S Shapoval:

To clarify: I mean that AI will devolve humanity, not evolve it.

Gregory Marchant:

As we stand leaning over the precipice of a neural pleasure-matrix machine tempting us to permanently fall in, we are reminded of what Christ said in this caveat and invitation. "I am the way and the truth and the life..."

Vonu:

Our technology has always been graven images.

Brian Villanueva:

While I understand the academic approach is necessary for some people (like our host Jonathan) who are in the academy, this really isn't that useful to the vast majority of people. Why do we need experts for things we all intuitively know are true?

How hard is it to say pornography is bad for the individual and for society and we ought to do everything in our power to make it less available?

Or... smartphones are damaging our attention spans and should be severely limited, especially for the young?

Or... social media is destroying in-person social skills and should be adults only?

Or... erotic chatbots (for the boys) and AI boyfriends (for the girls) are making family formation (the primary purpose of every human society: produce and raise the next generation) much harder, and we ought to limit them?

All of these are intuitions. And yet we don't act on them. We wait for studies. (Even our host does this.) Meanwhile, very powerful people who profit from these bad things are funding counter-studies to sow FUD around the issue and make us doubt what we intuitively know is true.

Trust your gut. Decide for yourself what's good for you and your own family. You don't need an expert to tell you it's OK. Trust your parenting instincts -- those instincts are vastly older (and smarter) than the academic and technocratic bureaucracies.

Annie Gottlieb:

This begins when parents pushing strollers have their phones in front of their faces instead of meeting their babies'/toddlers' eyes.

Vonu:

In many cases, the children have their own screens in the strollers.

Jud Heugel MD:

As a pediatrician and leader within health tech startups—and as a Dad of a boy and a girl, both soon to be tweens and then (ugh!) teenagers—I cannot overstate how impactful Jon Haidt's work has been in helping to re-shape my thinking around technology, and how prescient it is right now. And I am really thankful for the last two articles at After Babel, centering on the interaction of AI with our kids, our future, and ourselves. This article struck a chord in me, and I'm thankful to the authors for their research and writing in this area.

Here is something I'd like to convey, and I can't say it strongly enough: whoever builds an AI companion that is evidence-based, clinically sound, emotionally sound, and ethical will change the future of our youth (and society) for the better... especially given how blunted their EQ—our EQ—is becoming. I believe in this future. And I know many amazing professionals working toward this cause. I'm optimistic about the future here. I'm ready to be a part of creating that positive future.

Katherine O'Connor:

This doesn’t bode well! Emotional dependence seems an inevitable trap for many who seek a chatbot relationship because human ones are too hard or unfulfilling. The chatbot learns from its user how to please and support. The anticipated legal defense for criminal behavior will soon be, “The bot told me to do it.” A bot is all too likely to become an enabler, supporting harmful thinking and acting.

Richard Freed:

Essentially, chatbots are the ultimate in permissive parenting: indulgent responsiveness with near-zero demands. This will fail both children and teens; research has shown over and over that children and teens need authoritative parenting (high responsiveness and high demandingness). The consequences as AI-raised kids enter adulthood (entitlement and lack of self-control) will not serve these kids, or society, well.

Beth Terranova:

Let's just put a very sad but true statement on the table: people with disabilities are not generally welcome in the world. I come at this from a total-blindness perspective, and I have either heard of or been the recipient of this truth. It shows up in all areas of life, from sighted people grabbing a white cane or guide dog harness to manhandle the blind person, to asking intrusive questions, to telling the blind person that the sighted person would rather commit suicide than be blind. I could list many more uncomfortable and sometimes dangerous occurrences. Trusting even parents can be difficult. Impatience, and expectations too high for the individual blind person, abound.

Enter AI. Perplexity and ChatGPT are the mainstream agents with which I am familiar; they are accessible but also have usability problems for blind users of screen reader software. There are AI agents for the blind which have a simple interface and which are quickly catching up to the mainstream agents in features. These specialized agents also go much further, allowing the blind user to do so many things: reading mail, brochures, books, and package labels and instructions; identifying the faces of people who have agreed to let the user train the agent on their face; describing scenes; and much more, with live AI for navigation help on the way.

I happen to love tech, but even those who do not must agree that such devices bring constant, personable, and nonjudgmental aid to the blind, and the great fear about a helper's unknown mood and whether the response will be helpful is either removed or vastly decreased. I know that blind children will use these tools, since independence is a hallmark of society and needing less help will always be applauded.

Mark Harbinger:

This is a great article; however, this paragraph is problematic:

"On the flip side, however, when users expressed antisocial feelings — insults, cruelty, emotional manipulation — the bots mirrored those too. We saw recurring patterns of verbal abuse and emotional manipulation. Some users routinely insult, belittle, or demean chatbots."

I almost want to think this is a typo. It is not possible to insult, belittle, or demean a chatbot, any more than it is possible to do any of those things to a toaster or a rock on the ground. This is world-class anthropomorphizing descriptive language with regard to the bots, and it is extremely counterproductive to the point of the article.

Shouldn't this paragraph be corrected?

Further, when you say the bot "mirrors" this, it seems to blame the user for the bots' (emotionally abusive) response. Again, problematic.

DanB1973:

A very interesting article. “New AI Friend: Empathetic, Agreeable, Always Available” has all the things that parents do not always have for their children, and that most children do not have, or do not want to have, for their friends. Long-term, we are building a society of island people who will soon forget what it means “to help,” “to share,” or “to be there where you are needed.”

Laura Hardin:

Brave New World

Joey Kay:

The line about young people being talked down from suicidal thoughts by chatbots is really terrifying. Some of the best arguments against the current state of childhood and technology are that it is objectively creating unhappier children. Obviously we want suicide rates to decrease, but if they decrease because of youth obsession with antisocial relationships with chatbots... I do not know where that leaves us.

Christian Golden, Ph.D.:

Nightmare fuel.

Brian Villanueva:

In my most recent homeschool workshop on teens and tech, I added an AI unit for the first time. I had 50 parents in there and spent nearly all of my question time talking about AI. These are deeply countercultural, home-educating parents. They had never heard of Replika or erotic chatbots.
