1 in 8 Young People Use AI Chatbots for Mental Health Advice

Dec 27, 2025, 2:20 AM

A new study published in JAMA Network Open indicates that about 13% of US adolescents and young adults, specifically those aged 12 to 21, are turning to AI chatbots for mental health advice. This translates to approximately 5.4 million young individuals seeking emotional support from generative AI systems like ChatGPT and others.
The study, conducted between February and March 2025, surveyed 1,058 participants and found that usage rates were particularly high among young adults aged 18 to 21, with 22% reporting they had sought advice from AI chatbots. Among those who used these tools, 66% engaged with them at least monthly, and 93% said they found the advice helpful.
Researchers attribute the high rates of AI usage to several factors, including the low cost, immediacy, and perceived privacy of these systems. Many young people may prefer AI chatbots over traditional counseling services, which can be more expensive and less accessible. "The most striking finding was that already, in late 2025, more than 1 in 10 adolescents and young adults were using generative AI systems for mental health advice," said Ateev Mehrotra, a co-author of the study.
Despite the apparent benefits, the study raises significant concerns about the effectiveness and safety of AI-generated mental health advice. Researchers noted that there are few standardized benchmarks for evaluating the quality of advice provided by AI chatbots, and there is limited transparency about the datasets used to train these models. Jonathan Cantor, a senior policy researcher at RAND, emphasized the need for caution, stating, "Engagement with generative AI raises concerns, especially for users with intensive clinical needs."
The study's findings come amid a broader national conversation about the ethics and safety of using AI for mental health support. Recently, the US Food and Drug Administration held a public hearing to discuss whether AI chatbots should be regulated as medical devices. Additionally, OpenAI is facing lawsuits alleging that its chatbot has contributed to harmful outcomes for some users, including cases of self-harm.
While the study provides a snapshot of AI usage among young people, it also highlights the need for further research to understand the implications of this trend. The researchers noted that their survey did not assess whether the advice given was for diagnosed mental illnesses, which is a critical area for future investigation.
Moreover, the study revealed disparities in perceived helpfulness across demographic groups: Black respondents were less likely than White respondents to rate the advice as helpful, pointing to potential cultural competency gaps in AI-generated support.
As the youth mental health crisis continues to escalate, with nearly 18% of adolescents aged 12 to 17 experiencing a major depressive episode in the past year, the role of AI in providing mental health support is becoming increasingly relevant. The researchers concluded that while AI chatbots may offer immediate assistance, it is crucial to ensure that these tools are safe and effective for young users, particularly those with significant mental health needs.
In summary, the study underscores a significant trend in how young people are seeking mental health support, raising important questions about the future of AI in this critical area. As AI technology evolves, ongoing research will be essential to ensure that it serves as a beneficial resource rather than a potential risk for vulnerable populations.
If you or someone you know needs mental health help, resources are available, including the National Suicide and Crisis Lifeline, which can be reached by calling or texting 988.

Related articles

FDA Rejects Outlook's Eye Drug Lytenava for Second Time

The FDA has declined to approve Outlook Therapeutics' eye disease drug, Lytenava, for the second time, citing insufficient evidence of effectiveness. This decision follows a previous rejection in 2023 due to manufacturing issues and the need for more clinical data.

U.S. Measles Cases Surge to Highest Level in Over 30 Years

The Centers for Disease Control and Prevention (CDC) reports that US measles cases have reached their highest level in over 30 years, with 1,288 confirmed cases across 38 states. The resurgence is attributed to declining vaccination rates and significant outbreaks, particularly in Texas.

Flu Cases Surge as CDC Reports Rising Hospitalizations

Flu cases in the US are surging, with over 19,000 hospital admissions reported last week alone. The CDC attributes this rise to a new strain of the virus, and experts warn that the situation may worsen as the season progresses.

US Measles Cases Exceed 2,000, Highest in 30 Years: CDC

The Centers for Disease Control and Prevention (CDC) reports that the United States has surpassed 2,000 confirmed measles cases for the first time in over three decades. As of December 23, 2025, a total of 2,012 cases have been documented, with a significant portion linked to outbreaks, particularly in South Carolina.

Ethical Guidelines for Clinical Use of Chatbots and AI

As chatbots and AI become more integrated into clinical settings, ethical considerations are paramount. This article explores the importance of informed consent, data privacy, and the limitations of AI in mental health care, emphasizing the need for responsible implementation and oversight.