AI as Advisor: How Adults Are Using ChatGPT for Health and Counseling Support

In the evolving landscape of digital health and self-help, ChatGPT has emerged as an unconventional source of emotional and medical advice, despite being neither designed nor approved for either role. Research and real-world usage patterns show that adults increasingly consult the AI for guidance on mental health, self-diagnosis, and wellness strategies, often treating it as a first line of support when professional care is unavailable or unaffordable.

A 2023 peer-reviewed study published in the Journal of Multidisciplinary Healthcare found that nearly 80% of adult users had turned to ChatGPT to help manage anxiety, depression, and emotional stress, valuing its suggestions on mindfulness, goal-setting, and psychoeducation. Similarly, an investigation in the Journal of Medical Internet Research reported that 78.4% of respondents were willing to use ChatGPT for self-diagnosis, underscoring growing public trust in AI as a source of health-related guidance. Together, these findings point to a clear trend: adults are turning to ChatGPT not merely out of curiosity or convenience, but as a quasi-counselor and informal health advisor.

Part of the chatbot’s appeal lies in its instant availability and perceived neutrality. Users on forums such as Reddit and in survey responses frequently describe ChatGPT as “helpful when no one else is around,” a “good listener,” or a tool to “vent without being judged.” This resonates especially with people facing loneliness or stigma around seeking therapy. Others rely on the AI to clarify medical symptoms, weigh treatment options, or make lifestyle decisions, often treating it as a second opinion alongside Google.

However, these practices are not without risk. While ChatGPT can generate medically plausible explanations and compassionate responses, it lacks formal diagnostic training, clinical context, and ethical oversight. A 2023 Frontiers in Psychiatry study warned that ChatGPT’s responses in complex mental health scenarios could be “inappropriate or even dangerous,” particularly when users mistake polished language for professional credibility. The American Psychological Association and the FDA have both emphasized that no generative AI platform is licensed to provide clinical advice or therapeutic care, and that such tools should not be treated as if they were.

Moreover, emerging concerns point to a growing emotional dependency on chatbots. Research from the MIT Media Lab noted a pattern of users developing parasocial relationships with AI, seeking out the chatbot for emotional affirmation or surrogate companionship. While this dynamic offers temporary comfort, it may deepen isolation or delay the pursuit of genuine human help, especially in urgent cases where professional intervention is critical.

In sum, while adults are widely using ChatGPT as both a health informant and an emotional confidant, its role should be clearly bounded. It can offer basic explanations, clarify concepts, or simulate conversation, but it cannot replace the nuanced care of trained health professionals. As digital tools become ever more integrated into personal well-being, users must navigate the fine line between empowerment and over-reliance, using AI to supplement, not substitute for, the human connections essential to true healing.
