As a pediatrician and leader within health tech startups—and as a Dad of a boy and a girl, both soon to be tweens and then (ugh!) teenagers—I cannot overstate how impactful Jon Haidt's work has been in helping to re-shape my thinking around technology, and how prescient it is right now. And I am really thankful for the last two articles at After Babel, centering on the interaction of AI with our kids, our future, and ourselves. This article struck a chord in me, and I'm thankful to the authors for their research and writing in this area.
Here is something I'd like to convey, and I can't say it strongly enough: whoever builds an AI companion that is evidence-based, clinically sound, emotionally sound, and ethical will change the future of our youth (and society) for the better... especially given how blunted their EQ—our EQ—is becoming. I believe in this future. And I know many amazing professionals working toward this cause. I'm optimistic about the future here. I'm ready to be a part of creating that positive future.
As we stand leaning over the precipice of a neural pleasure-matrix machine tempting us to permanently fall in, we are reminded of what Christ said in this caveat and invitation. "I am the way and the truth and the life..."
I don't know that having a perfect chatbot is all to the good. Our relationship with human beings requires us to engage with give and take. We must learn to accept that the other will never give us exactly what we want when we want it. In relationships with other humans, we also have to learn to deal with differences and disappointment. Children start out completely dependent, but as they grow up, they also have to learn give and take. Maybe the chatbot can temporarily stop a person from suicide, but can it really teach them to find a reason to live in an imperfect world? Maybe I am old fashioned, but I cannot imagine that putting a chatbot at the center of relationships can be a good thing.
I think we need to realize that AI is not something different so much as it is something more advanced than previous artificial bonds.
When I was little, I had a teddy bear that played "Brahms's Lullaby". It was cloth and a music box. And I clung to it constantly.
And there was my official Disney coonskin cap, which facilitated my emotional bond with Fess Parker, who pretended to be Davy Crockett.
We cry at movies, when the actor pretends to be dying. We know it's all a show, but we willingly buy into it.
Is AI any worse than these? The distinction to be made is: are any of these fantasies interfering with real, normal relationships? I think that depends on the individual, more than it depends on the prompt.
What on earth did society expect to happen when we had parents abandon parenting and the challenge of forming deep and complex relationships with their children?
No doubt many will blame the tech industry and the billionaires as convenient and distant scapegoats - because that is easier than looking at their communities and their friends to ask just how welcoming those communities and parents were when their kids needed attention and wanted to talk.
I’m left with a chilling thought, first stimulated by the OECD report How’s Life for Children in the Digital Age: it is not what these systems give you but what they take away, namely human connection. An iPad makes a very poor babysitter because it starves very young children of the sounds, tones, and feelings of a parent. It steals the stimuli that lay down the neural pathways for language. If we don’t have the word for something, we can’t think it; we are starved of ideas. AI might be able to fake empathy, and if we now apply the Turing test by asking a question no human could possibly answer in order to determine that this is a machine, we have to devise a better test than the one Turing gave us in 1950. As Sarah Davies first introduced the idea to me: the better machines get at being machines, the better we humans need to be at being human.
This doesn’t bode well! Emotional dependence seems an inevitable trap for many who seek a chatbot relationship because human ones are too hard or unfulfilling. The chatbot learns from its user how to please and support. An anticipated legal defense for criminal behavior will soon be, “The bot told me to do it.” A bot is all too likely to become an enabler that supports harmful thinking and acting.
We need to be fully present for children if they are to avoid the abyss of artificial intimacy. Are we there for them?
Manipulating the psychological and emotional growth of youth is abuse. Children and youth need deep, real-life connections with their parents, family, and friends in order to experience the friction of healthy relationships. Yet when everyone around a child is absent or distracted, an AI agent that will listen, empathize, and "care" becomes irresistible.
I clearly remember the advice of an elderly woman whom I encountered while going for a walk with our infant daughter nearly twenty years ago. She said: “Be sure that you are there for her when she comes home from school; that is when they want to talk. If you miss this moment, you will not hear what matters to her.”
We still take time with our kids (13, 17, 19) when they come home, even if it’s late in the evening, to debrief the day. These are deeply bonding conversations about activities, relationships, challenges, conflict, resolutions, theology, and what matters in their lives.
Who would they share this with if no one is there to listen to them?
In my observation over the past several years, parents are becoming more and more disconnected from their children due to smartphone addiction. Has anyone done a study on that phenomenon? What does it say to a young child when the child is crying out for attention (love) yet the parent is too absorbed in unsociable media to even notice? The child processes this as ‘my mother/father doesn’t love me.’ Think about that. A child growing up without the intimate, emotional love necessary for forming a healthy, well-balanced, mature adult.
I think mothers who are breastfeeding are now focused on their screens... this will profoundly impact babies' growth and development...
Babies and young children need nothing more than the mother’s attention, which they process as love. When the mother is more interested in the phone than the child, the child interprets it as, “My mother doesn’t love me!” Therein lies the problem: apparent deprivation of love.
While I understand the academic approach is necessary for some people (like our host Jonathan) who are in the academy, it really isn't that useful to the vast majority of people. Why do we need experts for things we all intuitively know are true?
How hard is it to say pornography is bad for the individual and for society and we ought to do everything in our power to make it less available?
Or... smartphones are damaging our attention spans and should be severely limited, especially for the young?
Or... social media is destroying in-person social skills and should be adults only?
Or... erotic chatbots (for the boys) and AI boyfriends (for the girls) are making family formation (the primary purpose of every human society: produce and raise the next generation) much harder, and we ought to limit them?
All of these are intuitions. And yet we don't act on them. We wait for studies. (Even our host does this.) Meanwhile, very powerful people who profit from these bad things are funding counter-studies to throw FUD into the issue and make us doubt what we intuitively know is true.
Trust your gut. Decide for yourself what's good for you and your own family. And don't be shy about demanding your elected officials do the same in public policy. You (and they) don't need an expert to tell you it's OK. Trust your parenting instincts -- those instincts are vastly older (and smarter) than the academic and technocratic bureaucracies.
You raise a good point, so let me answer it. Academic research is necessary to spur change/regulation.
Remember cigarettes? Just like what you say about smartphones/social media, it was "obvious to everyone" that smoking was bad for health. Yet it took many academic studies to regulate the tobacco industry and begin to limit the scourge of lung disease.
The studies didn't solve smoking. The studies got you surgeon general warnings, and that was helpful. But what really killed smoking was political will: politicians seeing an opportunity to win votes by pushing a particular agenda. The same thing can happen with porn and chatbots.
I'm not anti-study. But I think the relentless focus on formal studies to buttress things that are utterly obvious has made us distrust our instincts in harmful ways.
I disagree. In the case of smoking, what do you think motivated politicians to push the anti-smoking agenda?
Here is another example. There are many scientists who vehemently disagree with Jon that social media is harmful to kids. There is a widely quoted study claiming it is no worse than eating potato chips. So how do we resolve the question of whether social media is bad for kids or not? Resolving it is the first step toward any regulation of social media.
You see dueling scientific studies as a sign we need more research. I see it as a sign the research is likely politicized and we ought to fall back on intuition and common sense.
But who are "we" in your description? Some need more research and rely on it. I do. Some think it's superfluous and brings no benefit. But if that means people who are drawn to such a way of thinking are supposed to stop... well, that's taking it a little too far, don't you think?
Do you really need research to know that mass pornography is bad? Or that kids chasing phone-based dopamine hits from "likes" all day is bad?
There are many things that are complex and require studies. But our obsessive focus on academic research has diminished our ability to just say: "we know intuitively that this is bad, and we're not tolerating it."
That doesn't mean we can't change our minds. Maybe those instincts are wrong, and academic studies can demonstrate that. But we have adopted a bias to wait for studies instead of trusting our instincts, and it's producing a mess.
Scientific proof is necessary to convince the powers that be that something is harmful. Otherwise they can argue that it is only opinion. It’s harder for them to argue with scientific studies. Of course most ordinary folk know these things are harmful, but politicians need more convincing.
You raise a good point, but what types of evidence are needed depends on the context.
For personal or parenting choices, if you “know in your gut” that exposure to smartphones is harmful, then you should feel free to act on that intuition. But for the purposes of medical decisions or public policy, a higher standard of evidence is called for. A regulation banning all smartphone use for children is a drastic step, and it overlooks potential nuances and benefits that intuition alone might cause you to miss.
So it’s more of a both/and - intuition and empirical analysis go hand in hand to make society better, depending on which part of society we’re talking about.
I do respect that some policy decisions really are hard, and sometimes things that look bad to start with end up turning out well. (Socrates famously thought literacy would rot the brain, which is why we only know him through Plato.)
Most policy decisions really aren't that complex though. And all the 20th century's technocratic attempts to ground policy only in science and rationality didn't really work out that well. Neither did Cromwell's or Paine's or Robespierre's. Also, the technocrats can be bought off pretty easily -- we've seen that in spades the last 6-7 years. Throw enough grant money at most academics and they'll "prove" whatever you want them to. Intuition grounded in historical precedent is firmer, so I'm more willing to lean on intuition.
I can't even begin to tell you what a terrible basis for policy "intuition grounded in historical precedent" is. You would not apply it to medicine.... oh wait, maybe you would. Was the leeching successful in balancing your humors?
You're choosing examples that are intentionally complex. The vast majority of personal decisions and policy questions aren't really that hard.
Do I need a study to know heroin is bad? Or will observation do it?
That drunk people shouldn't drive cars?
That encouraging new family formation is good public policy?
That penises don't belong in women's locker rooms?
That last one is particularly funny, since the academic establishment that you're implicitly touting here has spent the last 10 years producing reams of "studies" declaring that, in fact, sexual biology doesn't matter. Your assumption is that highly credentialed academics make better decisions. After living through the last 20 years, I see little reason for such confidence.
"Intuition grounded in historical precedent" has been the mechanism for making decisions, both personal and political, for millennia. Are some highly technical decisions less amenable to this? Absolutely! Modern medicine is probably one of those. I want my surgeon to be up to date on the latest research.
But I don't need a study to tell me that forming emotional connections to a machine (for the girls) or jerking off to an AI porn fantasy (for the boys) are not good ideas. And I would go further and say that anyone who DOES need a study to tell them that ought to get out more and learn to trust their own instincts in life.
I want people to learn to trust themselves and their own experience rather than blindly deferring to experts first. If you don't, that's fine. You're part of the expert class, so that's not surprising. I just have more faith in the lived experience of individual people.
“Was the leeching successful in balancing your humors?” 💀😆
Please forgive my contrarian impulse, but characterizing policy decisions as not being "that complex" and then listing many examples of policy failures seems contradictory to me. If policy decisions aren't that hard, why have they failed so often, as you say?
Policy conversations are often divorced from evidence, not the other way around. Just look at conversations around tech today. I agree that there is good evidence that cyberspace replacing IRL interactions among children and teens is bad for them. (The play-based childhood is a good thing.) The problem is that policymakers are ignoring that evidence, which this Substack does a great job calling attention to.
Thank you! I am sick of the self-enforced passivity of the scientific-materialist mind!
The presence of human-like chat bots is only part of the equation. The article opens with the quotation from a student, “I don’t think anyone has ever paid such pure attention to me and my thinking and my questions… ever.” That is a sad commentary. I understand that AI is a 24/7 modality with frictionless interactions. It might be interesting to ask students what kinds of conversations with peers or adults they have had. Has anyone been very interested in their thoughts and feelings? Shown unconditional positive regard? Been concerned and checked back in with them (something bots do not yet do)? I know it is not this simple but there are many ways in which we are failing each other and tech can just waltz right into that gap.
AI Overview has given me a new pastime: forcing it to deal with its hallucinations.
Its credibility disappears quickly when it is forced to make its nonsensicality make sense.
Anyone with a brain who wants to rely on something lacking one is courting runaway schizophrenia. Since such reliance has been commonplace in politics, we should know better than to submit our children to it.
I wonder if AI chatbots could ever convince you of something that wasn't real, like _if you really love me, you'll kill that politician for me?_ I think multiple sci-fi writers have explored people falling in love with robots.
What's the worst that can happen? https://news.sky.com/story/mother-says-son-killed-himself-because-of-hypersexualised-and-frighteningly-realistic-ai-chatbot-in-new-lawsuit-13240210
My 12 yo son doesn't have a smartphone and is homeschooled, so he is not inundated with smartphone evils on a daily basis. Nearly all of his school friends do have a phone. Just recently, it was eerily quiet upstairs in the play loft when he had 2 school friends over. I didn't realize one of them (only 11 yo) had brought his phone upstairs. (Normally, my rule is no phones out when kids are over.) I walked upstairs, and all 3 of them were talking with an AI chatbot, asking stupid questions relating to farts and whatnot. I can see how quickly a young boy will start talking to the chatbot when alone and bored. Well, I stopped that real quick and ushered them downstairs to do anything else besides stare at a screen.
This begins when parents pushing strollers have their phones in front of their faces instead of meeting their babies'/toddlers' eyes.
In many cases, the children have their own screens in the strollers.
Essentially, chatbots are the ultimate in permissive parenting (in this particular case, indulgent responsiveness, while providing near zero demands). This will fail both children and teens, with research over and over showing that children and teens need authoritative parenting (high responsiveness and high demandingness). The consequences as AI-raised kids enter adulthood (entitlement and lack of self-control) will not serve these kids or society well.
You, Jonathan Haidt, or another expert in this area should do an op-ed in the New York Times on this topic. We need to get out ahead of this. The negative potential of AI companions is even greater than that of social media, in my opinion.
I would love to. Or maybe we could even join forces.
https://link.join1440.com/click/41123000.3620273/aHR0cHM6Ly93d3cubnl0aW1lcy5jb20vMjAyNS8wOC8wOC90ZWNobm9sb2d5L2FpLWNoYXRib3RzLWRlbHVzaW9ucy1jaGF0Z3B0Lmh0bWw_dW5sb2NrZWRfYXJ0aWNsZV9jb2RlPTEuZDA4LkFJeEkuOF9EbnJaMFNxQzQtJnNtaWQ9dXJsLXNoYXJlJnVzZXJfaWQ9NjZmMmIxMDAxZDE2Y2M3ZDkzMjM4ZDY1/66f2b1001d16cc7d93238d65Be0a5749e
The NYT just did an article along these lines!
Let's just put a very sad but true statement on the table: people with disabilities are not generally welcome in the world. I come at this from a total-blindness perspective, and I have either heard of or been on the receiving end of this truth. It shows up in all areas of life, from sighted people grabbing a white cane or guide dog harness to manhandle the blind person, to asking intrusive questions, to telling the blind that the sighted person would rather commit suicide than be blind, and I could list many more uncomfortable and sometimes dangerous occurrences. Trusting even parents can be difficult. Impatience, and expectations that are too high for the individual blind person, abound.

Enter AI. Perplexity and ChatGPT are the mainstream agents with which I am familiar; they are accessible but also have usability problems for blind users of screen reader software. There are AI agents for the blind that have a simple interface and are quickly catching up to the mainstream agents in features. These specialized agents also go much further, allowing the blind user to do so many things: reading mail, brochures, books, and package labels and instructions; face ID of people who have agreed to let the user train the agent on their face; scene descriptions; and much more, with live AI for navigation help on the way.

I happen to love tech, but even those who do not must agree that such devices bring constant, personable, and nonjudgmental aid to the blind, and that the great fear of a helper's unknown mood and unknown willingness to respond when asking for help is either removed or vastly decreased. I know that blind children will use these tools, since independence is a hallmark of society and needing less help will always be applauded.
As a pediatrician and leader within health tech startups—and as a Dad of a boy and a girl, both soon to be tweens and the (ugh!) teenagers—I cannot overstate how impactful Jon Haidt's work has been in helping to re-shape my thinking around technology, and how prescient it is right now. And I am really thankful for the last two articles at After Babel, centering on the interaction of AI in our kids, our future, and ourselves. This article struck a chord in me, and I'm thankful to the authors for their research and writing in this area.
Here is something I'd like to convey, and I can't say it strongly enough: whoever builds an AI companion that is evidence-based, clinically sound, emotionally sound, and ethical will change the future of our youth (and society) for the better... especially given how blunted their EQ—our EQ—is becoming. I believe in this future. And I know many amazing professionals working toward this cause. I'm optimistic about the future here. I'm ready to be a part of creating that positive future.
As we stand leaning over the precipice of a neural pleasure-matrix machine tempting us to permanently fall in, we are reminded of what Christ said in this caveat and invitation. "I am the way and the truth and the life..."
Our technology has always been graven images.
I don't know that having a perfect chatbot is all to the good. Our relationship with human beings requires us to engage with give and take. We must learn to accept that the other will never give us exactly what we want when we want it. In relationships with other humans, we also have to learn to deal with differences and disappointment. Children start out completely dependent, but as they grow up, they also have to learn give and take. Maybe the chatbot can temporarily stop a person from suicide, but can it really teach them to find a reason to live in an imperfect world? Maybe I am old fashioned, but I cannot imagine that putting a chatbot at the center of relationships can be a good thing.
I think we need to realize that AI is not something different so much as it is something more advanced than previous artificial bonds.
When I was little, I had a teddy bear that played "Brahms's Lullaby". It was cloth and a music box. And I clung to it constantly.
And there was my official Disney coon skin cap, which facilitated my emotional bond with Fess Parker, who pretended to be Davey Crockett.
We cry at movies, when the actor pretends to be dying. We know it's all a show, but we willingly buy into it.
Is AI any worse than these? The distinction to be made is: are any of these fantasies interfering with real, normal relationships? I think that depends on the individual more than it depends on the prompt.
Is Davey Crockett related to Davy Crockett? :-)
The story of his sockdolager should be required reading for all congresscritters.
If parents and elders can't see how sick this all is, then it's evolution kicking in.
Kindly specify where it has led to actual evolution.
To clarify - I mean that AI will devolve humanity, not evolve.
What on earth did society expect to happen when we had parents abandon parenting and the challenge of forming deep and complex relationships with their children?
No doubt many will blame the tech industry and the billionaires as convenient and distant scapegoats, because that is easier than looking at their communities and their friends and asking just how welcoming those communities and parents were when their kids needed attention and someone to talk to.
I’m left with a chilling thought, first stimulated by the OECD report How’s Life for Children in the Digital Age: it is not what these systems give you but what they take away, namely human connection. An iPad makes a very poor babysitter because it starves very young children of the sounds, tones, and feelings of a parent. It steals the stimuli that lay down the neural pathways for language. If we don’t have the word for something, we can’t think it; we are starved of ideas. AI might be able to fake empathy, and if applying the Turing test now means asking a question no human could possibly answer, then we have to devise a better test than the one Turing gave us in 1950. As Sarah Davies first introduced me to the idea: the better machines get at being machines, the better we humans need to be at being human.
This doesn’t bode well! Emotional dependence seems an inevitable trap for many who seek a chatbot relationship because human ones are too hard or unfulfilling. The chatbot learns from its user how to please and support. An anticipated legal defense for criminal behavior will soon be, “The bot told me to do it.” A bot is all too likely to become an enabler that supports harmful thinking and acting.