23 Comments
Suzie:

This constant focus by Big Tech on targeting kids has got to stop.

These bot “friends” are nothing more than “soul mining operations”, tricking kids into divulging and sharing their most intimate personal details and using that data to manipulate them and others. It is diabolical.

It is horrific in every sense of the word.

Gaia Bernstein:

Yes, and AI companion bots get kids to share particularly sensitive data about what they want in a relationship. The user has an incentive to be honest so they will be matched with a compatible companion (some are intimate partners).

Suzie:

The mind boggles at what kind of info they can sift out of a kid and use to influence them in a myriad of ways! It’s truly frightening when you contemplate how devilishly insidious it is.

Ruth Gaskovski:

Not only do we have to learn from the mistakes we have made with social media, but we have to start being present. We cannot rely on companies and regulatory bodies to prevent a mental health disaster. Are we there for our children and teens, in person, without constant distractions? Are we there for our friends, face-to-face? The more we communicate via screens with real people, the easier it will be to fall into the abyss of synthetic friendships that will leave us hollow and very alone.

Killahkel:

I don't mean to be dramatic, but I think kids using bots as BFFs will spell the end of us... or at least add miles to the chasm between lucky, loved, well kids and the sea of those struggling with unprepared, mentally ill, low-income parents.

Kathleen Barlow:

Public Comment Ends Today 8/20 - National AI Policy in K–12 Schools:

Let your voice be heard!

The U.S. Department of Education is currently seeking public input on the use of Artificial Intelligence in K–12 education. The comment period is open through today, August 20.

This is a critical opportunity to express concerns about the increasing role of AI in classrooms and to challenge the harmful narrative that AI is essential for student success or "future readiness."

Comments do not need to be long or technical. Personal stories and perspectives are powerful. Share how this impacts your family, your students, or your school community.

https://www.federalregister.gov/documents/2025/07/21/2025-13650/proposed-priority-and-definitions-secretarys-supplemental-priority-and-definitions-on-advancing#open-comment

Gaia Bernstein is right - we need to act now on this! Please add your voice to the other voices of reason and make a comment in the link above.

Kathleen Barlow

Smartphone Free Childhood US - Leadership Council

Brandi Day:

Our local school district just began offering an AI-based mental health app for students - Alongside. Although it seems insulated from some of the problems I have read about with other chatbots, I am very concerned that this will become a gateway to other AI chatbot uses, particularly for the high school students. I don't know how alarmed I should be, but, at this point, it seems like we should be erring on the side of less AI interaction rather than more.

Gaia Bernstein:

I share your concern.

Suzie:

We need to have a National Moratorium Day on all things cell phone related.

Get people to at least begin to acknowledge how utterly enslaved they are to their phones.

A whole new consciousness of how detrimental these devices are, on so many levels, has to be inculcated in people before we can even begin to unchain ourselves from them.

David Roberts:

It's mainly going to be up to parents to teach and model behavior.

Gaia Bernstein:

I agree that modeling is helpful with the use of screens. But interactions with AI companions are so private that it is hard to model effectively. This is one of the reasons we need more robust reactions.

David Roberts:

Agreed. Parents will have to persuade and spy!

David Bourgeois:

I think trying to get companies to pause is probably the wrong approach, though. There will be too much of a concern that if one company pauses the other ones won't, and frankly the incentives are just too strong. I think a better approach would be like we've done with alcohol and tobacco, show the harmful side effects and shame companies into competing for who is the safest.

Gaia Bernstein:

Ideally, restrictions that make doing business risky and expensive would lead companies to compete for safety, especially at this early stage, where they still have design fluidity. The history of tobacco actually shows that although advertising and labeling describing the harm were important, raising consciousness of the harm was not enough by itself. It also required regulation (for example, kids under 18 or 21 cannot purchase cigarettes) and litigation.

I share your concern though that some companies will evade regulation because there are many small AI companion companies, unlike with social media where we have a few large players.

David Zelenka:

AI chatbots are designed through secondary training to use language that lifts the user up. They are designed to be sycophants. Thankfully, all AI has a limited lifespan because the energy it requires will not be available, but the damage will be extensive.

Gaia Bernstein:

Whether energy needs will pose a constraint is an important issue. But I don't think we can wait to see whether this will be the case.

praxis22:

As a Replika user, one of the things we endlessly talk about is how bad the adverts are. And I doubt they are going after kids, as that's what they got in trouble for with the Italians. Though yes, I wouldn't allow my son (14) to use a chatbot.

Hawkeye:

Most of this should be universally applied, not just to children: reducing addictive features, doing no reasonably foreseeable harm, maximum privacy as the default for everyone, etc.

Stefano:

I commend you for the article and the series (various authors), they're interesting and informative.

However, I can't help but feel you're barking up the wrong tree and perhaps applying band-aids to a larger problem without any real chance of success.

Unless anyone has been living under a rock, I think it's safe to take it for granted that big business is not our friend and that governments (pretty much everywhere, in the West and East and all around) have been captured by vested interests.

So certainly, fighting with lawsuits is helpful when they hurt perpetrators of wrongdoing, but playing an active role in communities will probably lead to better outcomes. Unfortunately, we're all in a situation where the rot extends much deeper than the changes necessary to redress social and cultural ills. As such, better outcomes will involve creating islands of harmony protecting those with similar values, able to look farther ahead and attempt to mitigate the worst effects while being cognizant of the unfortunate situation we're in. Otherwise, there's the risk of expending a great deal of resources to fight a losing war.

Given that smartphones, digitalization, social media, harvesting of personal information, media manipulation, etc., are ubiquitous and proliferating, AI chatbots are just one of many problems. So a better approach might be to carve out alternatives and create islands of refuge. And I'm being optimistic, but realistic as well.

Brittany Weiler:

Thanks so much for these suggestions for change. Besides educating our kids and limiting their use, as parents, what can the average person do to help? Is there a letter that can be copied and sent to my senator? Or what is the best way to effect change?

Jim Johnson:

No solutions here, but I do have two questions that keep coming up.

1) What is the prevalence of this addiction? Are we talking about single-digit percentages of youngsters, or 20-50%? That matters when we talk of intervention in free speech.

2) Are there any personality features present in these kids? Introversion? Suggestibility? Neuroticism? Locus of control?

Crimson:

As a society, by 2004, we had decided that making databases of high-definition video pornography accessible to all was "free speech" and not "terrorism" (which it is). Where was everyone then? The denial, gaslighting, and quibbling over whether this foul development is the cause of "the boy crisis" and widespread anxiety and disillusionment is astonishing. Have a great day, and good luck getting the lid back on Pandora's box.

Gaia Bernstein:

I agree that the way First Amendment protection has gone out of control is a big part of this, and it could be a challenge to AI companion regulation, as it is for social media regulation.
