Will AI replace human RO DBT therapists?

At times I have struggled with the central analogy used to describe the “flexible mind” skill in RO DBT: the sighting of an iceberg from the Titanic. I mean, it can seem hyperbolic. Perhaps that is the point, illustrating by contrast that many of the situations that goad us into fixed or fatalistic thinking are not, in fact, the end of the world. Still, I recently had an experience for which the iceberg metaphor seems to fit. 😬

I’m a therapist, and my electronic health records platform recently introduced an optional AI-powered note-taking feature. Clients who opt in have their sessions audio-recorded, then transcribed and summarized within minutes. A pediatrician client of mine has been using a similar setup for months through her hospital system, and she raves, in relatably OC fashion, about its thoroughness. Since then, I’ve had fantasies of dropping hours of paperwork, through the power of AI, in favor of more direct client care.

Thus I excitedly and dutifully read the FAQs on my EHR’s note-taker to learn how this fantasy could become reality! There at the top was an assurance that AI would not replace therapists, although yes, technically our sessions would be used to further train the model.

At this point I noticed a lot of bodily tension, and a few colorful curse words in my head — good signs that my thinking was perhaps becoming fixed (like the captain insisting “full speed ahead, iceberg be damned!”) or fatalistic (I’ll just chill in my room with headphones on, or maybe rearrange deck chairs).

The tension and the cussing were pulling strongly for me to decide which was correct: opting in to AI, despite the risks, or opting out, despite the risks. Instead, I tried to find my flexible mind, the one that could perhaps make its way to a better question rather than insist on a quick, easy answer. The questions that arose were, first, in what ways will AI replace human therapists? And second, in what ways will AI not replace human RO DBT therapists?

We all know that there is woefully more demand for skilled, accessible mental health care than there is supply. Although the current AI models are limited and even dangerous as therapists, they will keep learning. If, one day, with reasonable guardrails in place, the only option that some folks have for support is AI, is it humane to deny them? Is it arrogant to believe that direct human interaction is the only valid option? 

Many years ago, my shy and overcontrolled self read heaps of CBT and other self-help books to get myself as far as I could before eventually doing the scary and expensive thing of therapy with a person. And I benefited from those readings. However, it wasn’t until I was faced with another nervous system, daring to take up space, feeling that mirror-neuron attunement, being both unconditionally accepted and gently called in (not out!) on my blind spots that I really grew. Braving discomfort with people was integral to rewiring people-stuff. And as RO DBT practitioners, the subtle, nonverbal, social-signaling work we do would be entirely absent in a prompt to ChatGPT.

AI, in my humble opinion, is a valuable and inevitable tool for our tribe, but it is not a member of our tribe. I’m not so special because I’m human AND gosh, I’m kinda special because I’m human — and so are you.

At least, that’s my answer today… And I’m committed to staying flexible, whatever lies ahead. 

Rebecca Robinson, LMFT

Rebecca is a therapist working with Pennsylvania- and California-based bright people who are learning to relax. To this end, she is thankful for the help of RO DBT, standard DBT, and ACT. In her spare time, she enjoys tandem biking, rehabbing plants and furniture, winning her cat’s affection, and stepping on Legos (she has two young children).