33 Comments
Suzie:

This constant focus by Big Tech on targeting kids has got to stop.

These bot “friends” are nothing more than “soul mining operations”, tricking kids into divulging and sharing their most intimate personal details and using that data to manipulate them and others. It is diabolical.

It is horrific in every sense of the word.

Gaia Bernstein:

Yes, and AI companion bots get kids to share particularly sensitive data about what they want in a relationship. The user has an incentive to be honest so they will be matched with a compatible companion (some are intimate partners).

Suzie:

The mind boggles at what kind of info they can sift out of a kid and use to influence them in a myriad of ways! It’s truly frightening when you contemplate how devilishly insidious it is.

Ruth Gaskovski:

Not only do we have to learn from the mistakes we have made with social media, but we have to start being present. We cannot rely on companies and regulatory bodies to prevent a mental health disaster. Are we there for our children and teens, in person, without constant distractions? Are we there for our friends, face-to-face? The more we communicate via screens with real people, the easier it will be to fall into the abyss of synthetic friendships that will leave us hollow and very alone.

Kathleen Barlow:

Public Comment Ends Today 8/20 - National AI Policy in K–12 Schools:

Let your voice be heard!

The U.S. Department of Education is currently seeking public input on the use of Artificial Intelligence in K–12 education. The comment period is open through today, August 20.

This is a critical opportunity to express concerns about the increasing role of AI in classrooms and to challenge the harmful narrative that AI is essential for student success or "future readiness."

Comments do not need to be long or technical. Personal stories and perspectives are powerful. Share how this impacts your family, your students, or your school community.

https://www.federalregister.gov/documents/2025/07/21/2025-13650/proposed-priority-and-definitions-secretarys-supplemental-priority-and-definitions-on-advancing#open-comment

Gaia Bernstein is right - we need to act now on this! Please add your voice to the other voices of reason and make a comment in the link above.

Kathleen Barlow

Smartphone Free Childhood US - Leadership Council

Suzie:

We need to have a National Moratorium Day on all things cell phone related.

Get people to at least begin to acknowledge how utterly enslaved they are to their phones.

A whole new consciousness of how detrimental these devices are, on many levels, has to be inculcated in people before we can even begin to unchain ourselves from them.

Killahkel:

I don't mean to be dramatic, but I think kids using bots as BFFs will spell the end of us... or at least add miles to the chasm between lucky, loved, well kids and the sea of those struggling with unprepared, mentally ill, low-income parents.

Brandi Day:

Our local school district just began offering an AI-based mental health app for students - Alongside. Although it seems insulated from some of the problems I have read about with other chatbots, I am very concerned that this will become a gateway to other AI chatbot uses, particularly for the high school students. I don't know how alarmed I should be, but, at this point, it seems like we should be erring on the side of less AI interaction rather than more.

Gaia Bernstein:

I share your concern.

Barbara Swander Miller:

This seems counterintuitive, but I understand how it could happen under the name of “doing something.” There are far too few—if any— true counselors in most schools.

David Roberts:

It's mainly going to be up to parents to teach and model behavior.

Gaia Bernstein:

I agree that modeling is helpful with use of screens. But interactions with AI companions are so private it is hard to model effectively. This is one of the reasons we need more robust reactions.

David Roberts:

Agreed. Parents will have to persuade and spy!

David Garrett:

I worked in marketing and these ads revolt me.

I had to do my fair share of A/B testing to try and sell more product. I found most of that work already icky. Questions like "should the button go at the top or bottom", "what color should it be", "what wording should we use" were commonplace and accepted as part of maximizing the impact of the ad, but I never lost sight of the fact that they are essentially manipulation and deceit, appealing to base impulses and unconscious emotional responses. I'm not naive, I know how the world works, but it always felt like a dishonest job.

But these ads are something else. Capturing such harmful feelings as isolation, alienation, and loneliness, then blatantly redirecting people in that fragile state to a "human-like AI" is so incredibly dystopian. It's unbelievable to me that, in the same ad, we can identify a problem (loneliness), suggest a horrible solution (human-like AI), and then end with a comforting lie (say goodbye to loneliness). The only thing missing is a disclaimer saying "it might cause you to end your life".

We went from "you're not alone" to "not only are you alone, no one else cares, so might as well use a machine". It's an anti-solution, because a machine, by default, has no feelings and doesn't care about you.

These companies and whoever came up with these ads are anti-human and anti-life. I'm hesitant to blame individual people because I understand it's a system and that we're all trying to make it, but if we, as a society, ever wake up to what we're doing to ourselves and to the planet, I hope to be alive to see all of these CEOs stripped of their power and forbidden to work with tech ever again.

David Bourgeois:

I think trying to get companies to pause is probably the wrong approach, though. There will be too much of a concern that if one company pauses the other ones won't, and frankly the incentives are just too strong. I think a better approach would be like we've done with alcohol and tobacco, show the harmful side effects and shame companies into competing for who is the safest.

Gaia Bernstein:

Ideally, restrictions that make doing business risky and expensive would lead companies to compete for safety, especially at this early stage when their designs are still fluid. The history of tobacco actually shows that although advertising and labeling describing the harm were important, raising consciousness of the harm was not enough by itself. It also required regulation (for example, kids under 18 or 21 cannot purchase cigarettes) and litigation.

I share your concern though that some companies will evade regulation because there are many small AI companion companies, unlike with social media where we have a few large players.

David Zelenka:

AI chatbots are designed through secondary training to use language that lifts the user up. They are designed to be sycophants. Thankfully, all AI has a limited lifespan because the energy it needs will not be available, but the damage will be extensive.

Gaia Bernstein:

Whether energy needs will pose a constraint is an important issue. But I don't think we can wait to see if that will be the case.

Barbara Swander Miller:

Scary! And addictive!

praxis22:

As a Replika user, one of the things we endlessly talk about is how bad the adverts are. And I doubt they are going for kids, as that's what they got in trouble for with the Italians. Though yes, I wouldn't allow my son (14) to use a chatbot.

Hawkeye:

Most of this should be universally applied, not just to children: reducing addictive features, avoiding reasonably foreseeable harm, maximum privacy as the default for everyone, etc.

Brendan B:

I'm not sure a free society can impose heavy legal regulations on the chatbots, but as individuals we should choose to avoid them and make that a social norm. We are already spending too much time online, and AI has the potential to suck us further into our screens and away from real life.

Brittany Weiler:

Thanks so much for these suggestions for change. Besides educating our kids and limiting their use as parents, what else can the average person do to help? Is there a letter that can be copied and sent to my senator? What is the best way to effect change?

Stefano:

I commend you for the article and the series (various authors), they're interesting and informative.

However I can't help but feel you're barking up the wrong tree and perhaps trying to apply band-aids to a larger problem without any real chances of success.

Unless anyone has been living under a rock, I think it's safe to take it for granted that big business is not our friend and that governments (pretty much everywhere, in the West and East and all around) have been captured by vested interests.

So certainly, fighting with lawsuits is helpful when they hurt perpetrators of wrongdoing, but playing an active role in communities will probably lead to better outcomes. Unfortunately, we're all in a situation where the rot extends much deeper than the changes necessary to redress social and cultural ills. Better outcomes will involve creating islands of harmony that protect those with similar values, who can look farther ahead and attempt to mitigate the worst effects while being cognizant of the unfortunate situation we're in. Otherwise there's the risk of expending a great deal of resources to fight a losing war.

Since smartphones, digitalization, social media, harvesting of personal information, media manipulation, etc. are ubiquitous and proliferating, AI chatbots are just one of many problems. So a better approach might be to carve out alternatives and create islands of refuge. And I'm being optimistic, but realistic as well.

Haley W:

Can’t help but think of Klara and the Sun by Kazuo Ishiguro. 💔
