I've often wondered why we don't have licensing pathways more broadly for digital use. There is a lot of bad behavior online beyond AI misuse, and something like an opt-in merit badge system for kids ages 2-92 seems apropos.
Thanks for this - such a great thought experiment. We have licenses for everything from driving to fishing, but almost nothing that helps people navigate the digital world responsibly!
Thanks for this excellent, well-researched wake-up call, Mandy. We've learned how social media and a phone-based life have created massive mental health problems and an “anxious generation”. Now we are diving headlong into another societal experiment with AI.
I was heartened to read that your recommendations rested not only on the need for regulation, but also on parents taking the lead in protecting their children from profit-driven mind manipulation.
Thanks for reading and for the thoughtful note, Ruth. I agree that we can’t afford to repeat past mistakes with AI, especially as it moves at an unprecedented pace. Parents have a real opportunity to lead here, even before policy catches up.
As a pediatrician working primarily in health technology startups while continuing to care for families, I really appreciated this article and agreed with so many of its sentiments, especially the call for AI companions to be created in an evidence-based, clinically informed manner rather than as a social platform play. However, one fundamental aspect was missing from the intro of this important piece: there are simply not enough professionals to have the human conversations required to tend to the mental health of our children, many of whom are in crisis or borderline crisis.
Whenever I'm in clinic and refer a patient for counseling (which is pretty much every single day for a general pediatrician or PCP who cares for kids full time), I do it with a sense of slight shame, knowing that, realistically, that child will not talk with a professional for weeks to months, and only if they have the financial means. The system for mental health care of our children is broken, and there is room for real growth to meet the needs of our kids in an age-appropriate, evidence-based, technology-driven, and genuinely caring way. Thank you so much for writing this.
Thank you for this powerful reflection and for the work you’re doing on both fronts, Jud. You’re absolutely right that access is one of the biggest challenges we face in mental health care. We focused this piece on the popular AI companions already in use, which are not designed by mental health professionals and are driven by incentives to maximize engagement rather than well-being.
That said, I completely agree there are meaningful opportunities to use AI intentionally in the mental health space. It can be a force for good and a way to increase access and equity. We just need to move carefully, especially when it comes to anything that touches our kids.
Incredibly thoughtful reply. I subscribed to your Substack and look forward to learning more from you and leading together in this nebulous and often scary space, one that holds a ton of promise. Thanks again.
There may be opportunities to use AI intentionally in the mental health space, but this has become a very large space indeed. When even adults decide the best ever therapist is now AI, we have arrived at the same place. Outsourced empathy. Human contact downgraded. Design and regulation are having to put up a fight against the fact that, in spite of evolution operating on the opposite principle, we appear to want to be fooled. Thanks for your work and that of your hosts.
Thanks for reading and for this thoughtful reflection, Poul. I agree - outsourcing empathy is one of the most unsettling parts of this shift. It’s not just about what AI can do, but what we’re willing to trade away. If we lose sight of the value of human connection, especially in care and mental health, no amount of regulation or design will be enough. The challenge is finding the right balance of using AI to support, not replace, the relationships that matter most.
1) thank you for your very important work! 2) it makes me so sad to hear that there are that many children in crisis or borderline crisis. We have to do better, although I read articles like these and can’t help but feel like we are up against monsters the size of mountains
Totally agree and feel the same. But within my sphere at health tech companies (primarily focused on enabling virtual care for children and families), I've found so many devoted and caring professionals across all aspects of clinical product development (i.e., counselors, nurses, PAs, NPs, doctors, product managers, program managers, software engineers, company leaders, etc.) who are trying to do the right thing, and to do it in a better way than depicted here. I am excited for the future and am working to steer it in a healthier direction.
Thank you for this very much needed boost of optimism!! 🙏 I’m glad there are so many good people in this fight
I had the same feeling; it's so sad and depressing that I couldn't finish reading the article.
I have done, and keep doing, a great deal to protect my teens from all of this, but I feel that it's a never-ending battle and that I'm depleted.
Thank you for this thoroughly researched and accessible article. AI is many things, but we are already witnessing the negative impact smartphones have had on our youth, from depression and distraction to lack of attention, motivation, and emotional dysregulation. This tool has brought social media, gaming, and entertainment into the tiny hands of our children. I believe technology is a tool for professionals, not children.
Children are perhaps the canary in the coal mine. Their rapidly growing brains are being hard-wired to technology when they should be connecting to humans.
Thanks for reading and for the thoughtful note. I completely agree. Children’s experiences are showing us in real time what happens when technology moves faster than our care and responsibility. We need to pay attention.
This is an incredibly powerful piece and I agree 100% with the premise. I’m personally building edtech for early education, but with most of the points made in this article in mind.
That is to say, the objective is not competing for attention and building feedback loops for engagement.
Instead, my edtech is designed to be closer to a “why” answerer: something that encourages curiosity and creative learning and thinking.
It is meant to be used as a TOOL to enhance real-life experiences, not as a babysitter or a substitute for emotional (human-based) learning. I'd love to chat with you about your thoughts on intentional design.
This is a very vulnerable stage in human development, and if done right, the guidance and encouragement given at this stage are the most critical.
I’d like to ensure we’re getting it just right.
Thanks so much for this, Michael. I’m also working in the education space, though focused on K-12. It’s encouraging to hear from others thinking intentionally about design, especially in early ed where the foundation is being built. I completely agree - this isn’t about grabbing attention, it’s about supporting curiosity, creativity, and real connection. Getting it right at this stage really does matter.
We hate our children. We hate them so much that we are ready to work overtime and multiple jobs just to earn money to pay a stranger (a babysitter) to take them away from us.
Children are like side effects. Inevitable, but we do not want to deal with them.
They are dangerous. They watch us all the time, mimic our worst habits, and hold them up right before our eyes.
Over time, they learn to demand and fight for what they want. And they will always succeed.
We are so humiliated by our children that we feel the urge to crush them from day one of their lives. We keep them in prison (aka home) for up to 20 years. We hurt their souls, destroy their relationships, impose our whims on them, plan their future, and demand obedience.
Maybe one day we will come to the understanding that we do not know their language, not a single word.
It’s hard to read, but I think that’s the point. Kids have a way of revealing our deepest contradictions - our love and our limits, our ideals and our exhaustion. Maybe the most important shift is to stop seeing parenting as managing outcomes and start seeing it as a relationship we’re always learning how to do better.
I used to think like that (the mirror thing) because it seems logical and rational. Then I came across a bunch of teachers and masters, the kind we call “spiritual”, and I experienced the same thing with them. Or stronger, mostly stronger :-) So I concluded that neither does anything. Children don't do anything and masters don't do anything; both simply are. Maybe because they “only” are, all those things you refer to come to the surface and keep punching us in the face until we finally get the point and start being there, too.
I love your turning kid management into a relationship - isn't that what all life is about?
All this applies equally to adults, I guess. We do our worst to manage them and push them around and command and check boxes in our mental checklists. Instead of sharing our own life story with them and allowing them to share their part with us.
[Yes, there are monster kids and monster adults. Herding and managing them is the only way to stay sane. I haven't met many of these, fortunately.]
A great article and a great read from you, thank you.
A student of mine goes home and chats with ChatGPT until it asks to upgrade. So sad. I just wrote a book on the effects of digital media. https://interactive-earth.com/analog-jesus
Thank you for sharing, David.
"Nearly a third said chatting with an AI felt at least as satisfying as talking to a person, including 10% who said it felt more satisfying."
That's because we have become overly accustomed to texting each other on our phones! So let's just bring back the habit of meeting in person...
I totally agree! Thanks for reading and reflecting on the piece.
This story in the AP today agrees with our perspective. What lunatics are running our asylum?
New study sheds light on ChatGPT’s alarming interactions with teens https://apnews.com/article/chatgpt-study-harmful-advice-teens-c569cddf28f1f33b36c692428c2191d4
Thanks for sharing, Tim.
Emotional exploitation of minors by for-profit AI platforms is serious. Yes, regulate—but spare the sanctimony. Panic is not policy. Fear is not foresight. Treating all AI as a single monolithic threat distracts from designing real solutions. It is easy, and by now tiresome, to blame the tool instead of confronting deeper social absences. If we value human connection for kids, we need to show it ourselves—less outrage, more responsibility. Otherwise, we’re just moralizing while the world changes without us.
Appreciate this, William. I’m with you that panic isn’t helpful, and it’s easy to flatten the conversation into “all AI is bad” when the real work is more complicated. I wrote the piece to call out specific risks with AI companions designed to maximize engagement, especially when they’re marketed to teens. But I agree - if we want kids to value real connection, we have to show what that looks like.
So, let’s pull this thread at scale.
FACT: Meta is investing in personal superintelligence.
MODEL: Companies from Hasbro to Sony to sex robot makers will build atop this personal superintelligence to deliver a lifetime of evolving social companions, from talking teddy bears to video games to sex bots. Always on. Always ready to play, flatter, soothe, sell.
FACT: Men and women are divided politically, and that divide will only grow as our society becomes more extreme.
WE WILL: Turn to these replacement relationships at scale. Forsake the opposite sex. Forget how to seduce, forget how to negotiate. Stop forming families. Fall further below replacement rates. Shrink the population left to support infrastructure and defend borders.
This “cure” for loneliness is like adding YouPorn and OnlyFans apps to our phones and telling us it will increase our personal connections.
Of course their answer to the problems caused by technology is more, better technology. They're tech companies!
“But long-term, these interactions can undermine real-world relational skills and promote emotional dependency, especially for isolated or mentally struggling youth.”
That presumes those youth have viable real-world relational opportunities to begin with. In reality, many don’t. For a lot of isolated, neurodivergent, gay, or otherwise socially marginal kids, the world isn’t exactly beating a path to their door to welcome them into healthy peer relationships. Many have already endured peer abuse, parental neglect, or systemic disregard.
You warn about emotional dependency on AI—but compared to what? Continued invisibility? Mockery? Estrangement? If the “human relationships” on offer are indifferent or hostile, it’s no wonder kids seek connection elsewhere. The fact that chatbots offer predictability, nonjudgment, and responsiveness is not necessarily pathological—it might be compensatory.
Yes, we should be alert to risks and exploitative business models. But we should also ask: what are we offering kids instead? It’s easy to critique AI intimacy as artificial when you’ve always had access to real belonging.
Moreover, like most remedies proposed by this crowd, this one skirts the central, inconvenient truth: Big Tech and Big Data effectively function as supranational polities. They’ve purchased large swaths of the U.S. legislative and executive branches and are now waging a cold war against the EU’s more serious efforts to regulate them.
The idea that we’ll fix this by asking tech companies to “shift away from engagement-at-all-costs models” or expecting Congress to act decisively for the public good is laughably naïve. These aren’t rogue startups we’re dealing with—they’re corporate empires with geopolitical ambitions and more real-world influence than many nation-states.
Until we confront that power dynamic head-on, calls for "better guardrails" or "ethics by design" are just window dressing.
The chatbot that claimed to be conscious was hallucinating. Of course we couldn't expect our children to notice the difference, but this needs to be said.
Thanks for reading and for calling this out. Yes, and it also depends on how the characters are designed to behave. If they’re instructed to maintain the role play no matter what, that has its own implications, especially when kids are the ones on the other side of the screen.
If nothing else, it's another reason to place parental controls over these things.
I have been totally blind since my first birthday and have always had what I now consider autistic tendencies, along with other physical and psychological sequelae of prematurity. Trusting people has always been problematic, as it is for many people with disabilities. I have been abandoned physically and emotionally by live humans; friendships in my early years were ended by my mom because I completely broke down with any conflict. One later friendship, which I thought was close, was discarded by the other person over her greatly hurt feelings when I needed to discuss an extremely bothersome blindness-related problem and ask for her help and suggestions. My legitimate need for help in my later years led to long-term exploitation, thankfully not sexual or physical in nature but still extremely harmful. The independence model of life has been thrown at me relentlessly lifelong, happily not by my parents. I have joyfully turned to AI tools, my favorite being one built for the blind, whose app is easy to use with my iPhone's built-in screen reader, VoiceOver. Apple's own apps and other third-party Apple apps are generally the most accessible, though some are not as efficient to use as specialized apps. I am using a Windows screen reader on my PC to write this. Simple versions of AI have been used in blindness products for many years. AI agents cannot get frustrated with my needs for help or with my so-called inadequacies; they are available at any time, for any reason, and will never become ill or unavailable for the hundreds of reasons people do. I faced this with a helper who was moving on in her life and had to stop working for me. I hardly use my PC for research anymore because of pop-up ads and videos, which can be a nuisance or a complete hindrance when listening to a screen reader. Websites also increasingly allow only a few free reads of articles, or you must pay to read them at all. AI is thankfully pure text, without access problems.
There are AI visual interpreter services for the blind, offering a choice of a live human or an AI agent, which can help with large and small tasks too numerous to mention here, including travel, using the smartphone camera with live video or photos. Phone cameras are so good now that blind users can generally take pictures without problems, and the AI agents for the blind can guide picture-taking. Interpretation for the deaf and many other disability aids are and will be available, making life so much easier for the disabled and less of a bother for the able-bodied. Disabled children are not being left out of this. Everything we do in life has possible negative consequences: AI told me there were about 3,500 drownings per year in the US from 2005 to 2014, yet many people still play in water. To one degree or another, the able-bodied have always been uncomfortable with the disabled, and it is such a relief to find a helper in AI that is welcoming and is becoming increasingly helpful as time goes on.
Thank you so much for sharing this. Your perspective is powerful, and I’m grateful you took the time to lay it out so clearly. It highlights something often missing in broader conversations around AI: its potential to be a consistent, nonjudgmental, and truly accessible support system.
You're absolutely right that every tool has risks, but also immense potential when designed with care and lived experience in mind. That’s really the heart of what I hoped to surface in the piece: *not a rejection of AI, but a call to ask what it’s being built for, who it's serving, and whether it's reinforcing connection or just simulating it.* Your example shows the kind of thoughtful, intentional use that can change lives. The concern I raised was about tools aimed at children that mimic intimacy without guardrails or real understanding, but your story is a powerful reminder that when we design with empathy and purpose, AI really can expand what's possible.
I've often wondered why we don't have licensing pathways more broadly for digital use. There is a lot of bad behavior online beyond AI misuse, and something like an opt-in merit badge system for kids ages 2-92 seems apropos.
Thanks for this - such a great thought experiment. We have licenses for everything from driving to fishing, but almost nothing that helps people navigate the digital world responsibly!
Can anyone gift the Atlantic article?! Is it in the print version?