Enter the Matrix
How AI represents a cautionary tale of opportunity and danger for mental health
It was March of 1999, and I, along with countless others, went to the movie theater to see Keanu Reeves bend, punch, and kick his way out of an apocalyptic future where machines had convinced us all that we were living in the real world when in fact we were inside a program. If you have not seen the movie, basically these intelligent machines have created a simulated reality called the Matrix to subdue and exploit humanity, harvesting human bodies for energy while keeping us all unaware of what is really going on.
At the time, almost 25 years ago, I don't think many of us really believed that computers could one day take over the world, and even if they did, John Connor would be there to save us.
Sorry, I couldn’t help that one.
My point here is that the technology referred to in movies like The Matrix was something otherworldly, to be seen only on the big screen or read about in our favorite science fiction books.
Enter 2023: artificial intelligence, or AI, is everywhere. But the history of AI goes back further than our favorite movies and books about computers taking over the world. In fact, it spans several decades, beginning in the mid-20th century. The term was coined in 1956 at the Dartmouth Summer Research Project on Artificial Intelligence, where early AI pioneers worked to develop machines that could exhibit human-like intelligence.
These early AI researchers spent a great deal of their time on rule-based systems and symbolic reasoning; interestingly, though, despite early progress, the field went quiet, leading to what some people call the "AI winter" of the 1970s and 1980s. The 1990s saw a resurgence with advancements in machine learning and neural networks, and it was in the early 21st century that progress in AI applications skyrocketed. Some of these you might be familiar with: speech recognition and natural language processing became commonplace in the research community.
Today, with the almost inconceivable availability of large datasets and vast computing power, we have seen growth in areas like image recognition and game-playing AI. If you haven't experimented with any of the AI programs out there, I highly encourage you to do so just to get a sense of what all the buzz is about. There's work to be done, however, and ongoing research will keep pushing the boundaries of AI. But make no mistake: all this research aims to create more intelligent systems that can understand, learn, and interact with us in increasingly sophisticated ways.
In "The Matrix," these wildly creative AI systems over time become self-aware and rebelled against human control, which resulted in the war between humans and machines. The AI, which will forever be represented by the character of Agent Smith, at least in my mind, seeks to maintain its dominance over humanity by removing all those who resist. I don’t think we are there yet, but we should be extremely cautious in how quickly we put our trust in AI. That being said, there are real benefits we may wish to consider as we further mature the science and methods behind the technology.
So, what about AI and mental health? Is there a role for this emerging platform to help with some of the most significant issues of our time? Perhaps.
Admittedly, I am not an expert in AI, though I have worked on a few natural language processing research projects over the years. That being said, it's hard not to think about some of the possibilities of using AI to help with mental health. Here are a few examples of what that might look like:
Predictive Analytics and Early Intervention: For years, we have been working to refine how we identify people who need help. We've gotten pretty good at certain screening tools, but there's more we could do. Last year I co-led a National Academies forum on how we can better predict people at risk for suicide using data from social networks. It was fascinating and got me thinking a lot more about how predictive analytics can be used to analyze diverse data sources such as social media activity, wearable device data, and electronic health records to find people at risk. AI systems, when given the right kind of data, may use that information to detect early signs of depression and even reach out with recommendations for self-care practices or suggestions for who a person might contact for help.
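To make this concrete, here is a minimal sketch of the idea in Python. Everything in it is hypothetical: the features (average sleep hours from a wearable, change in posting frequency, a prior screening score from a health record), the data, and the labels are invented stand-ins for the kinds of signals described above, not a validated risk model.

```python
# A minimal sketch of the predictive-analytics idea: a classifier trained on
# hypothetical features drawn from wearables, social media activity, and
# health records. All feature names, data, and labels here are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per person:
# [avg_sleep_hours, weekly_posting_change, prior_screening_score]
X = np.array([
    [7.5,  0.1,  3],
    [6.8, -0.2,  5],
    [4.2, -0.9, 14],
    [5.0, -0.7, 12],
    [8.0,  0.3,  2],
    [4.5, -0.8, 16],
])
y = np.array([0, 0, 1, 1, 0, 1])  # 1 = flagged for early outreach (synthetic)

model = LogisticRegression().fit(X, y)

# Score a new (hypothetical) person and trigger a gentle nudge, not a diagnosis.
new_person = np.array([[5.2, -0.6, 11]])
risk = model.predict_proba(new_person)[0, 1]
if risk > 0.5:
    print(f"Estimated risk {risk:.2f}: suggest self-care resources and a contact.")
```

The point of the sketch is the shape of the pipeline (diverse signals in, a gentle recommendation out), not the particular model; any real system would need far richer data and rigorous clinical validation.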
Digital Mental Health Screening: Similar to the above, this would be all about better understanding what a person is going through via a series of questions that help identify a problem. There are plenty of online screeners, and they all say the same thing: this is a screener, and you should talk to your provider for further assessment. Screening tools do just that, they screen; they are not always accurate and should not be seen as the final say in what's going on. If done responsibly, an online AI screener could evaluate a person's responses, identify potential symptoms, and provide immediate feedback. Since screening alone doesn't lead to improved outcomes, the platform could then recommend evidence-based resources and encourage the individual to seek professional help. The ease and accessibility of digital screening further emphasize the importance of early detection and intervention.
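As a toy example, here is how the scoring-and-feedback step might look. The severity bands follow the widely published PHQ-9 ranges; the feedback message is invented, and a real screener would of course need clinical oversight and validation.

```python
# A sketch of how an AI-assisted screener might score responses and hand back
# immediate, non-diagnostic feedback. Bands follow the published PHQ-9 ranges;
# the feedback text is invented for illustration.
def score_phq9(responses: list[int]) -> tuple[int, str]:
    """responses: nine items, each rated 0-3."""
    assert len(responses) == 9 and all(0 <= r <= 3 for r in responses)
    total = sum(responses)
    if total <= 4:
        band = "minimal"
    elif total <= 9:
        band = "mild"
    elif total <= 14:
        band = "moderate"
    elif total <= 19:
        band = "moderately severe"
    else:
        band = "severe"
    return total, band

total, band = score_phq9([1, 2, 1, 2, 1, 1, 2, 1, 0])
print(f"Score {total} ({band}). This is a screen, not a diagnosis; "
      "please discuss these results with a provider.")
```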
Training: There's a huge opportunity in leveraging AI to train clinicians and other front-line workers. When parameters are set properly, AI could help develop things like personalized treatment plans. With AI analyzing various data points, including treatment progress, lifestyle factors, and physiological markers, the treatment plan would continuously adapt to how the person is doing. What's interesting is that the plan could also be more accurate if it included real-time data and feedback from the person, which is something we sometimes miss in clinical practice. A more personalized approach would maximize progress and could help empower and encourage the person. While this is just one example, it seems like a major opportunity to leverage the technology to help better train our current and future workforce.
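A toy illustration of the "continuously adapting plan" idea follows. The rules, thresholds, and field names are all invented; a real system would draw on far richer data and clinician judgment.

```python
# A toy sketch of a continuously adapting plan: each week, hypothetical
# progress and self-reported feedback nudge the plan's intensity up or down.
# Every rule and field name here is invented for illustration.
def adapt_plan(plan, week):
    adherence = week["adherence"]            # 0.0-1.0, from self-report
    symptom_change = week["symptom_change"]  # negative = improving
    if adherence < 0.5:
        plan["sessions_per_week"] = max(1, plan["sessions_per_week"] - 1)
        plan["notes"].append("Reduce load; explore barriers to adherence.")
    elif symptom_change > 0:
        plan["notes"].append("Symptoms worsening; escalate to clinician review.")
    else:
        plan["notes"].append("On track; keep current plan.")
    return plan

plan = {"sessions_per_week": 2, "notes": []}
for week in [{"adherence": 0.9, "symptom_change": -2},
             {"adherence": 0.4, "symptom_change": 1}]:
    plan = adapt_plan(plan, week)
print(plan)
```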
Real-time emotional recognition and support: Imagine an AI that could combine facial recognition technology, tone of voice, and other physiological sensors (e.g., skin temperature) to build a pretty good understanding of a person's emotional state. In therapy, people are often encouraged to journal this type of information alongside the context in which they are feeling it. An AI tool that recognizes emotional cues could intervene with real-time feedback, offering things like calming techniques for better emotional regulation. With these real-time prompts, over time a person may better understand and regulate their emotions, which could lead to improved emotional well-being.
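Here is a minimal sketch of that loop: fuse a few hypothetical signal scores (face, voice, skin temperature) into a single distress estimate and offer a calming prompt when it crosses a threshold. The scores, weights, and threshold are all invented, and each input assumes an upstream model that doesn't exist in this sketch.

```python
# A minimal sketch of the real-time support loop: combine a few hypothetical
# normalized signal scores into one distress estimate and prompt when high.
# Weights and the 0.6 threshold are invented for illustration.
def distress_score(face: float, voice: float, skin_temp: float) -> float:
    """Each input is a 0-1 score assumed to come from its own upstream model."""
    return 0.5 * face + 0.3 * voice + 0.2 * skin_temp

readings = [
    {"face": 0.2, "voice": 0.1, "skin_temp": 0.3},
    {"face": 0.8, "voice": 0.7, "skin_temp": 0.6},
]
for r in readings:
    score = distress_score(**r)
    if score > 0.6:
        print(f"Distress {score:.2f}: try a slow breathing exercise, "
              "then journal the context.")
    else:
        print(f"Distress {score:.2f}: no prompt needed.")
```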
While these examples sound exciting, we must be extremely cautious about how quickly we put AI in a place of authority. Even the most knowledgeable leaders in the digital space have cautioned that there is more work to do and that AI is not yet ready for prime time. While AI is the darling of investors today, it remains to be seen whether it can be consistently safe for the general public in all its applications.
I think there are multiple issues that can arise when using AI for mental health, and a few in particular deserve our attention.
Ethical collection and use of data: Because AI relies on data to learn and make predictions, we are in big trouble if we're not careful about how we handle sensitive data, ensuring privacy, security, and consent. Safeguarding data against unauthorized access, breaches, or misuse is foundational to protecting the public and maintaining trust.
Bias and Fairness: We already know that the internet is racist, and AI could exacerbate some of these problematic issues. Because AI systems are only as unbiased as the data they are trained on, specific care must be taken to ensure that the parameters set and the data used to train AI are diverse, representative, and as free from bias as possible. We should also have safety mechanisms in place that force us to regularly audit AI systems so that we identify and mitigate biases; a simple sketch of what such an audit might check follows below. Because there are so many subtle cultural nuances in addressing mental health, AI may end up worsening things like health disparities if we are not careful.
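One small piece of such an audit might look like this: compare a model's false-negative rate (people who needed help but weren't flagged) across demographic groups on held-out data. The group labels and records are synthetic, and a real audit would cover many more metrics than this one.

```python
# A sketch of one routine bias check: compare false-negative rates across
# demographic groups. Records are synthetic (group, true_label, predicted).
from collections import defaultdict

records = [
    ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1),
]

misses, positives = defaultdict(int), defaultdict(int)
for group, truth, pred in records:
    if truth == 1:
        positives[group] += 1
        if pred == 0:
            misses[group] += 1

for group in positives:
    fnr = misses[group] / positives[group]
    print(f"{group}: false-negative rate {fnr:.2f}")
# A large gap between groups would mean people in one group are being
# missed at a higher rate -- exactly the disparity the audit should catch.
```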
Overreliance and Dependence: We already know there's a massive mental health workforce issue on our hands. While it's tempting to see AI as a solution here, we run the risk of individuals becoming overly reliant on AI systems for their mental health care. Because so much of what matters in the therapeutic process is relational, we must strive for a more balanced approach where AI is another tool that complements human connection, therapy, and self-care practices.
There's a lot for us to consider as it relates to mental health and AI. This post? Just the tip of the iceberg. There are so many pros and cons we should rapidly investigate to make sure the public's interest is treated as a priority. That being said, AI is here, and like other digital tools, we should think of creative and effective ways it might be used to address mental health.
So what will it be? The red pill or the blue pill?
“So, what about AI and mental health …”
AI is my passion area. As you mentioned in your article, there are many things AI can do in the healthcare world today that are, and will be, incredible. That said, AI is not up to the task of mental health support because of its built-in limitations.
Here's what I mean. We don't have self-driving cars despite promise upon failed promise and the billions upon billions spent, supported by thousands of the best minds in the world hard at work on the "opportunity."
And the challenges of autonomous vehicles (AVs) are incredibly simple compared to having an AI engage with humans, individually, about their physical or mental well-being, or both, which is essential to whole health.
Here's a reality about AI. For AI to work, it must have absolute boundaries and rules, which is why it does so well with board games. And yes, there are boundaries (roads) and rules (driving laws) for AVs, but these are not absolute. They change depending on many situation-specific factors. So, no free-range AVs anytime soon. Maybe someday, in some warm, dry, safe places, but not in the mountains in a snowstorm in my lifetime.
Since boundaries and rules are required, what would those look like regarding the health and wellbeing of humans with individual needs? As visualized on the Well-Being Sphere of America, this topic is currently fragmented, undervalued, and undeveloped.
Plus, there’s one more killer issue for AI. It cannot reason. It cannot contemplate a human’s views about self, in context — the perceptions, feelings, aspirations, beliefs, intentions, and experiences that drive actions and decisions, while also prompting the person to connect the dots around how these may be impacting their mental or physical health. (Regardless of the hype, neither can LLMs. They can, at best, emulate a small subset of reasoning—linguistics).
I have seen no app or AI system that engages a person in this way. Though, I will be bold and suggest it is doable. But it will require an adjacent path to the binary linear mechanistic reductionist mindset that currently permeates AI development.
To introduce more meaningful human-AI partnering will require a new mental model, a wholistic model, that employs contextual reasoning at its core and engages the end-user in their own meaning making.
Before I even built my technology framework, I did a five-year deep dive around this and co-authored a book about it, which was foundational to the work I do now. I realized that humans (including tech people) need a dramatic transformation in the way we see, think, and act.
Without it, we can lose sight of the connections, as you put it so well last week in your blog "Visualizing the complex whole of health – Why our reductionistic approach to mental health missed the mark."
“…we can lose sight of the connection between different facets of mental health, reducing it down to the actions that one leader in one sector needs to take; we reduce it down until it starts to lose its power and moves us away from critical concepts like integration and comprehensiveness.”
Appreciate this overview of how AI could be used in supporting mental health! I'd be curious to hear your thoughts on how the expansion of AI in society might affect mental well-being, and what we can do to better equip people for the risks and challenges of widespread AI usage.
While social media (our first contact with AI) was a competition for engagement, an incentive system leading to big societal harms, this second contact with AI that we're entering is a race towards intimacy and trust: an even more daunting system with capabilities (like deepfakes) that could lead to a collapse of trust, truth, and reality. Do you have thoughts on how we can best prepare people (mentally, emotionally, and in terms of critical thought) for this?
Also, I highly recommend watching 'The AI Dilemma' (from The Center for Humane Technology, who were behind The Social Dilemma) if you want to dig deeper – it's a fantastic, albeit very daunting hour-long talk on the societal threats of AI: https://www.youtube.com/watch?v=xoVJKj8lcNQ