The Pros and Cons of Using AI-Based Mental Health Tools

The COVID-19 pandemic has highlighted the need for mental health resources and treatments, not just in the US but around the world. Mental health care is chronically underfunded, and the data paint a bleak picture: fewer than half of adults with a mental illness receive treatment. Trained therapists and psychiatrists are in short supply, and even those who have access to them may wait months for an appointment. Many mental illnesses go undiagnosed or are diagnosed long after they begin.

Against this backdrop, there is considerable interest in digital tools for mental health. These tools are seen as cost-effective options that can improve access to mental health care around the world. They may be able to detect symptoms and allow for early diagnosis, or identify conditions that currently go undiagnosed. One of the barriers to seeking treatment is the stigma associated with mental disorders, and digital tools can reduce that stigma. And in the absence of therapists who speak a patient’s native language, digital tools can provide care in the patient’s own language.

Not surprisingly, a variety of digital, often artificial intelligence (AI)-based, tools for wellbeing and mental health now exist, ranging from simple mindfulness and meditation apps to therapy apps that are marketed as complements to (or even alternatives for) actual in-person therapy. The latter typically come in two forms (although they may not be labeled as such for regulatory reasons): psychotherapy chatbots (or interactive tools) and digital phenotypes. Woebot Health, Wysa, and myCompass are examples of interactive tools; Cogito Companion, StudentLife, EmotionSense, MOSS, and Beiwe are examples of digital phenotypes (many of which are research projects rather than commercial apps, but are being piloted by key stakeholders such as insurers, employers, and governments). What both types of apps have in common is that their clinical effectiveness has not been proven by long-term studies. Yet digital tools, especially psychotherapy chatbots, have already gained millions of users.

Psychotherapy Chatbots

Apps such as myCompass and Woebot Health are designed to treat mild to moderate depression, stress, and anxiety. Through text prompts and emails, myCompass encourages users to self-monitor their moods, behaviors, and lifestyle changes. Similarly, Woebot Health uses short daily chats, videos, games, and mood tracking, prompting users to check in on their state of mind. These chatbots are mostly cognitive behavioral therapy (CBT) tools in digital form.

CBT is an evidence-based approach used by mental health professionals. A central tenet of CBT is that a person’s response to an adverse event, not just the event itself, plays a key role in their mental well-being. Based on this insight, CBT therapists train patients to observe their reactions and mental states and to refocus and reframe those reactions to be less negative and more realistic in the context of their treatment. During CBT, clients jot down their thoughts (e.g., negative feelings) and are asked repeatedly (over several sessions) to restate them so that the new way of thinking becomes reinforced and habitual. Not surprisingly, CBT requires repeated interactions between the patient and their therapist to be effective.

The chatbots are attempting to digitally implement CBT interventions (which were originally intended for in-person sessions), but there are some challenges, including the following:

  • CBT is usually done in sessions lasting 30 minutes to 1 hour. With the digital apps, users spend less time but access the sessions more frequently. In such a scenario, will CBT be effective?
  • The therapist observes many contextual cues and then adjusts their sessions accordingly. Would a digital assistant be up to the task?
  • The effectiveness of therapy depends on the level of trust and relationship between the counselor and the patient. Can this trust be replicated through digital apps? Even though they’re programmed to appear caring, it’s ultimately more of a simulation of empathy than real empathy, right?

A potential benefit of digital apps is that they can gather a more detailed picture of users’ mental states based on daily (or even more detailed) logs, compared to the less frequent (weekly or monthly) self-assessment that is the norm in face-to-face CBT. However, this requires users to diligently log and report their moods and emotional triggers in the digital apps – and even then, the subjective bias of self-reporting remains. Of course, psychotherapy chatbots play a role in mental health interventions, but their clinical efficacy must first be demonstrated, and then digital CBT best practices must be codified.
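To make the logging idea concrete, here is a minimal sketch of the kind of thought record a digital CBT app might store. The class and field names are hypothetical, not taken from any specific app; real products will differ.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical thought record for a digital CBT log.
# All names and the 1-10 mood scale are illustrative assumptions.
@dataclass
class ThoughtRecord:
    trigger: str                 # the adverse event
    automatic_thought: str       # the user's initial reaction to it
    reframed_thought: str = ""   # restated later, possibly over several sessions
    mood_score: int = 5          # self-reported, 1 (worst) to 10 (best)
    logged_at: datetime = field(default_factory=datetime.now)

log = []
log.append(ThoughtRecord(
    trigger="Missed a work deadline",
    automatic_thought="I always fail at everything",
    reframed_thought="I missed one deadline; most of my work is on time",
    mood_score=4,
))
```

Daily entries like these would give the app the richer longitudinal picture described above, while still inheriting the subjective bias of self-reporting.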

Digital Phenotypes

Another active area of research and interest is digital mental health phenotypes. Digital phenotypes are AI models that infer a user’s mental states, emotions, and behavioral patterns from data collected on their smartphone. Given the ubiquity of smartphones and the large amount of time users spend on them, a digital phenotype can be very useful in establishing baseline behaviors at an individual level. For example, sleep cycles, language patterns, social interactions, cognitive functioning, physical movements, and several other aspects can be inferred by analyzing data from smartphones and wearable devices.

Two types of data serve as inputs to a digital phenotype: active data and passive data. Active data refers to the data provided by users in response to nudges, prompts, and questions while using the mental health app. Passive data is the data collected in the background as users go about their daily lives. For example, the number of steps taken in a day, the number of hours of sleep, time spent on apps, time spent on phone calls, as well as every digital interaction – clicks, taps and scrolls on the phone – are automatically logged. The user’s call data, text message data, social data, and activity data contain many clues about their mental state.

Once certain baseline behavior patterns have been established after an initial period of use, deviations can result in a warning that someone may be going through a rough patch. The sensors in the smartphone, such as GPS, accelerometer, keyboard and microphone, can detect changes in speech patterns, activity rhythms, etc. and can be used to detect depressive tendencies.
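As a rough illustration of this baseline-and-deviation idea, the sketch below applies a rolling z-score to a stream of daily step counts, one of the passive signals mentioned above. The window length and threshold are illustrative assumptions, not values from any published phenotyping system.

```python
import statistics

def flag_deviation(daily_steps, baseline_days=14, z_threshold=2.0):
    """Flag days whose step count deviates sharply from a rolling baseline.

    `baseline_days` and `z_threshold` are illustrative choices only.
    Returns a list of (day_index, z_score) pairs for flagged days.
    """
    alerts = []
    for i in range(baseline_days, len(daily_steps)):
        window = daily_steps[i - baseline_days:i]
        mean = statistics.mean(window)
        sd = statistics.stdev(window)
        if sd == 0:
            continue  # no variation in the baseline window
        z = (daily_steps[i] - mean) / sd
        if abs(z) > z_threshold:
            alerts.append((i, z))
    return alerts

# Two weeks of normal activity, then a sudden drop on the final day:
steps = [8000, 8200, 7900, 8100, 8050, 7950, 8000,
         8100, 7900, 8000, 8200, 8050, 7950, 8100,
         2000]
print(flag_deviation(steps))  # flags day 14 with a large negative z-score
```

A real phenotype would fuse many such signals (sleep, calls, typing, location entropy) rather than a single one, but the principle of learning an individual baseline and alerting on deviations is the same.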

Such signals can be used to personalize a digital CBT app or result in a recommendation to see a human therapist. Proponents of digital mental health phenotypes claim that smartphone data can lead to an earlier and more accurate diagnosis than traditional approaches. That means data and AI can enable better diagnoses, improved treatments, and better care.

Concerns about digital phenotypes

Digital phenotypes raise several concerns (perhaps even more so than CBT chatbots), including the following:

  • The collected user data is highly sensitive, yet many current apps are vague about their privacy, data protection, and data-sharing practices. In addition, a high level of data security must be in place to protect against breaches and attacks.
  • Since mental health disorders are currently believed to fall on a spectrum, it is difficult to decide where to draw the line that classifies a person as ill or not. This raises the specter of subjectivity (in a seemingly objective, data-driven approach), as well as of false negatives and false positives.
  • False negatives deny deserving people access to the treatments they need.
  • False positives can be triggered by a temporary change in a user’s activity or pattern for benign reasons (e.g. a common flu virus). If such data is recorded and shared with other parties, it may become a permanent part of their records.
  • The AI technologies that digital phenotypes rely on, such as natural language processing (NLP) and speech recognition, only work well for certain languages and regions. NLP performs well for English and perhaps a dozen other languages, but not with the same accuracy for most of the world’s languages; the same is true of speech recognition.
  • Cultural norms and individual differences play a role in treatment effectiveness. This is what an experienced practitioner is good at. Phenotypes that work in one context do not necessarily work in another context or culture.
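The false positive/false negative concern in the list above comes down to where the cut-off is drawn on a continuous risk score. The toy sketch below, with entirely synthetic scores and labels, shows how moving the threshold trades one kind of error for the other.

```python
def confusion_counts(scores, labels, threshold):
    """Count false positives and false negatives for a cut-off on a 0-1 risk score.

    `labels` marks who would actually benefit from treatment (1) or not (0);
    both scores and labels here are synthetic, for illustration only.
    """
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

# Synthetic risk scores on a spectrum, with ground-truth need for care:
scores = [0.1, 0.2, 0.35, 0.4, 0.55, 0.6, 0.7, 0.85, 0.9]
labels = [0,   0,   0,    1,   0,    1,   1,   1,    1]

for t in (0.3, 0.5, 0.7):
    fp, fn = confusion_counts(scores, labels, t)
    print(f"threshold={t}: false positives={fp}, false negatives={fn}")
```

Raising the threshold reduces false positives but denies care to more people who need it; lowering it does the opposite. No single cut-off eliminates both errors, which is why a seemingly objective, data-driven classifier still embeds a subjective choice.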

In summary, digital tools and AI show promise for the diagnosis and treatment of mental illness, but much remains to be done. There is a need for stricter clinical standards, regulatory oversight, strict data protection and privacy practices, transparency of AI methods, compliance with Responsible AI principles, and validation through large-scale clinical trials. Mental health is too important an area for a “move fast and break things” attitude. Given what is at stake, we must proceed with caution.