Using AI for Mental Health Support: The Hidden Risk for OCD and Checking Behaviors

Artificial intelligence tools like ChatGPT are rapidly becoming part of everyday life. People use them to plan trips, write emails, research medical symptoms, and, increasingly, to ask questions about mental health.

For some, this can be helpful. AI tools can provide information, suggest coping strategies, and offer a starting point for understanding emotional experiences. They are available instantly, at any hour, without cost or scheduling barriers.

But like many powerful tools, AI can also introduce new mental health risks, particularly for people who struggle with anxiety, health anxiety, or obsessive-compulsive disorder (OCD).

One of the biggest concerns clinicians are beginning to see involves reassurance seeking and checking behaviors.

Understanding this dynamic can help people use AI more thoughtfully and avoid inadvertently reinforcing the very patterns they may be trying to escape.

The Appeal of AI for Mental Health Questions

It’s easy to understand why people turn to AI for support. Unlike traditional internet searches, AI can respond in a conversational way. It can tailor responses, answer follow-up questions, and simulate a dialogue that feels personal and responsive.

For someone who is worried about a symptom, a parenting decision, or a distressing thought, this can feel incredibly relieving. Instead of scrolling through conflicting articles online, they can ask a direct question and receive a structured answer.

For example:

  • “Is this headache normal or something serious?”

  • “Is it normal to feel anxious about my baby’s safety?”

  • “How do I know if this thought means something about me?”

Sometimes the answers are reassuring and grounding. But for some people, especially those vulnerable to obsessive thinking patterns, this dynamic can become a digital version of reassurance-seeking.

Reassurance and OCD

One of the central mechanisms of OCD is the cycle between obsession and reassurance. An intrusive thought, worry, or “what if” question appears. The person then engages in some behavior to reduce anxiety: checking, researching, asking others for reassurance, or mentally reviewing the situation. For a brief moment, anxiety decreases. But the relief is temporary and serves to strengthen the underlying cycle.

The brain learns: “When I feel anxious, I should check again.”

Over time, the questions often become more frequent and more specific. Instead of reducing anxiety in the long run, reassurance-seeking tends to expand the territory of doubt. This is where AI can become a slippery slope.

How AI Can Reinforce Checking Behaviors

AI tools are uniquely well-positioned to feed reassurance loops. Unlike a friend, partner, or clinician, AI will answer the same question over and over again without fatigue, boundaries, or concern that the pattern itself may be problematic. A person might start with a reasonable question: “Is this symptom normal?”

But when anxiety persists, they might ask again:

  • “Are you sure this isn’t dangerous?”

  • “What are the odds it could be something serious?”

  • “What if I also have this other symptom?”

  • “What would make it an emergency?”

Each answer can provide temporary relief, but because AI is not monitoring the pattern of reassurance-seeking (unless explicitly directed), it may unknowingly participate in the cycle. For people with OCD or strong health anxiety, this can function similarly to compulsive Googling, but with a more persuasive and conversational format.

AI Is NOT a Therapist

Even when responses sound supportive or reflective, AI tools are not able to track patterns of behavior across time in the way a therapist does. They are also not responsible for assessing risk or recognizing when reassurance-seeking has become compulsive. This doesn’t mean AI can’t be useful. It means the context in which it’s used really matters.

Two Ways to Use AI More Safely

If you find yourself asking AI mental health or safety-related questions, there are two simple strategies that can significantly change both the quality of the response and the overall impact on your mental health.

1. Tell AI That Mental Health Is Part of the Question

When asking about a health or safety concern, it can help to explicitly mention that mental health is part of what you are trying to protect. For example, instead of asking, “Is this headache something dangerous?” you might say, “I sometimes struggle with anxiety and reassurance-seeking. I’m trying to consider both physical safety and mental health when thinking about this symptom. How should I approach this question?”

This changes the frame of the response.

Instead of focusing exclusively on information about the symptom, the AI is more likely to include guidance about uncertainty tolerance, anxiety management, and balanced decision-making. I’ve even seen it say to a client, “I think it’s time to put the phone down.”

You can also explicitly request a therapeutic rather than purely informational response, such as:

“Can you answer this from a therapeutic perspective, not just a medical information perspective?”

This tends to produce answers that are less likely to reinforce checking cycles.

2. Tell Your Therapist How You Are Using AI

The second step is surprisingly simple but extremely important.

If you are using AI tools for mental health questions, tell your therapist about it.

Specifically, it can be helpful to discuss:

  • How often you use AI for reassurance or research

  • What kinds of questions you tend to ask

  • Whether you notice relief or increased anxiety afterward

  • Whether the questions repeat or escalate over time

This allows your therapist to assess whether AI is functioning as a helpful reflective tool or whether it may be drifting into a form of compulsive checking.

In some cases, therapists may even help you develop guidelines for intentional AI use, similar to the way clinicians sometimes work with clients to structure internet searching or symptom research.

The goal is not necessarily to eliminate AI use entirely but to ensure it supports your mental health rather than undermining it.

Technology Is Changing Faster Than Our Habits

AI tools are evolving rapidly, and we are still learning how they interact with human psychology. Like earlier technologies such as search engines, social media, and online forums, AI introduces both opportunities and risks. For people navigating anxiety, OCD, or health worry, the key issue is often not the information itself but the pattern of engagement. Are questions being asked to support thoughtful decision-making? Or to temporarily silence anxiety that keeps returning? Recognizing that difference is a powerful step toward using technology more intentionally.

A Tool, Not a Replacement

AI can be a useful tool for reflection, education, and even, sometimes, emotional support. But it works best when it is part of a broader support system, not a substitute for it. Therapists can notice patterns that technology can’t. They can help people tolerate uncertainty, understand underlying fears, and develop healthier responses to intrusive thoughts.

When AI is used thoughtfully, with awareness of mental health dynamics and openness with a therapist, it can become a supplement to real support rather than a substitute for it.
