“So, what about AI and mental health …”
AI is my passion area. As you mentioned in your article, there are many things AI can do in the healthcare world today that are, and will be, incredible. That said, AI is not up to the task of mental health support because of its built-in limitations.
Here’s what I mean. We don’t have self-driving cars, despite promise after failed promise, billions upon billions spent, and thousands of the best minds in the world hard at work on the ‘opportunity.’
And the challenges of autonomous vehicles (AVs) are incredibly simple compared to having an AI engage with humans, individually, about their physical or mental well-being, or both, which is essential to whole health.
Here’s a reality about AI: for it to work, it must have absolute boundaries and rules, which is why it does so well with board games. And yes, there are boundaries (roads) and rules (driving laws) for AVs, but these are not absolute. They change depending on many situation-specific factors. So, no free-range AVs anytime soon. Maybe someday, in some warm, dry, safe places, but not in the mountains in a snowstorm in my lifetime.
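To make the board-game point concrete, here is a minimal sketch, purely my own toy illustration and not anyone's product: because tic-tac-toe's rules are absolute and every legal future can be enumerated, a few lines of exhaustive search play the game perfectly. No such enumeration exists for a snowy mountain road, let alone a human mind.

```python
# Toy illustration: with absolute rules, an AI can search every legal
# future of tic-tac-toe and play perfectly.

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Exhaustively score a position: +1 if X can force a win,
    -1 if O can, 0 if perfect play ends in a draw."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # board full: draw
    scores = []
    for m in moves:
        board[m] = player
        scores.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[m] = None
    return max(scores) if player == 'X' else min(scores)

# The empty board is a draw under perfect play; the fixed rules make
# this fully knowable in advance.
print(minimax([None] * 9, 'X'))  # -> 0
```

The entire "intelligence" here is brute-force enumeration over a closed rule set; the moment the boundaries stop being absolute, this approach has nothing to enumerate.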
Since boundaries and rules are required, what would those look like for the health and well-being of humans with individual needs? As visualized on the Well-Being Sphere of America, this topic is currently fragmented, undervalued, and undeveloped.
Plus, there’s one more killer issue for AI: it cannot reason. It cannot contemplate a human’s views about self, in context (the perceptions, feelings, aspirations, beliefs, intentions, and experiences that drive actions and decisions) while also prompting the person to connect the dots around how these may be affecting their mental or physical health. (Regardless of the hype, neither can LLMs. They can, at best, emulate a small subset of reasoning: linguistics.)
I have seen no app or AI system that engages a person in this way. Still, I will be bold and suggest it is doable. But it will require a path adjacent to the binary, linear, mechanistic, reductionist mindset that currently permeates AI development.
Introducing more meaningful human-AI partnering will require a new mental model, a wholistic model, one that employs contextual reasoning at its core and engages end users in their own meaning-making.
Before I even built my technology framework, I did a five-year deep dive into this and co-authored a book about it, which was foundational to the work I do now. I realized that humans (including tech people) need a dramatic transformation in the way we see, think, and act.
Without it, we can lose sight of the connections, as you put it so well last week in your blog post "Visualizing the complex whole of health – Why our reductionistic approach to mental health missed the mark."
“…we can lose sight of the connection between different facets of mental health, reducing it down to the actions that one leader in one sector needs to take; we reduce it down until it starts to lose its power and moves us away from critical concepts like integration and comprehensiveness.”
Appreciate this overview of how AI could be used in supporting mental health! I'd be curious to hear your thoughts on how the expansion of AI in society might affect mental well-being, and what we can do to better equip people for the risks and challenges of widespread AI usage.
While social media (our first contact with AI) was a competition for engagement, an incentive system that led to big societal harms, this second contact with AI that we're entering is a race toward intimacy and trust: an even more daunting system, with capabilities (like deepfakes) that could lead to a collapse of trust, truth, and reality. Do you have thoughts on how we can best prepare people (mentally, emotionally, and in terms of critical thought) for this?
Also, I highly recommend watching 'The AI Dilemma' (from the Center for Humane Technology, the team behind The Social Dilemma) if you want to dig deeper. It's a fantastic, albeit very daunting, hour-long talk on the societal threats of AI: https://www.youtube.com/watch?v=xoVJKj8lcNQ
Charly, wow, thank you for this thought-provoking comment and all your excellent questions. I'll definitely check out the talk. Specific to what we can do to prepare: that's a massive question. In some ways, we are not even preparing our kids for social media, which has been hitting them directly for years. Maybe some of the lessons we are learning from social media can be applied to AI? There are always the traditional things I could say, like "set boundaries" and "connect in real life," but AI is going to blur those. Perhaps this is an entirely new post for another day? I'd welcome your thoughts here, too!
Just catching up here! These days I've been developing far more questions than answers. One big piece that's been on my mind is the contrast between the intrapersonal and interpersonal challenges that come with all of this. The intrapersonal issues we're touching on, while incredibly challenging, at least seem somewhat ego-dystonic: when people who experience technology addiction are confronted with information about it, they will often recognize its negative effects on their psyche.
But folks who fall into dehumanization, polarization, and extremism don't see themselves as experiencing negative effects, whether it's parents, children, or both. That raises some interesting and challenging questions. For instance, how do we best teach kids critical thinking and discernment if their parents are against it?
So it leaves me pondering how best to address this combination of intrapersonal and interpersonal challenges, and how to apply these insights to prioritization efforts across prevention, intervention, and systems change. I'd be so keen to see another post on this, another day, if it speaks to you!