What Percentage of American Adolescents Use Generative AI for Mental Health Issues?

  • Writer: Greg Thorson
  • Jan 20
  • 6 min read


McBain et al. (2025) ask how often U.S. adolescents and young adults use generative AI for mental health advice and how helpful they find it. They analyze nationally representative survey data from 1,058 youths ages 12–21. They find that 13.1 percent reported using generative AI for advice when sad, angry, or nervous, and usage rose to 22.2 percent among those ages 18–21. Among users, 65.5 percent sought advice at least monthly and 92.7 percent rated it somewhat or very helpful. In logistic models, older youths had much higher odds of use (aOR = 3.99), and Black respondents were less likely to find it helpful (aOR = 0.15).


Why This Article Was Selected for The Policy Scientist

This research examines a timely topic as youth mental health needs rise and generative AI tools diffuse rapidly. The broader importance lies in documenting how adolescents and young adults may substitute low-cost, always-available AI for scarce clinical resources. The authors have written extensively in related health services domains, and this work adds early descriptive evidence to a sparse literature. The survey data set is nationally representative of U.S. youth with internet access, providing credible population estimates, but it does not enable causal claims. The study relies on multivariable regression rather than experimental or quasi-experimental designs, which future research could strengthen through causal inference approaches.

Full Citation and Link to Article

McBain, R. K., Bozick, R., Diliberti, M., Zhang, L. A., Zhang, F., Burnett, A., Kofner, A., Rader, B., Breslau, J., Stein, B. D., Mehrotra, A., Uscher-Pines, L., Cantor, J., & Yu, H. (2025). Use of Generative AI for Mental Health Advice Among US Adolescents and Young Adults. JAMA Network Open, 8(11), e2542281. https://doi.org/10.1001/jamanetworkopen.2025.42281 


Central Research Question

This research letter by McBain et al. (2025) investigates the prevalence, frequency, and perceived helpfulness of advice obtained from generative artificial intelligence (AI) by U.S. adolescents and young adults experiencing negative emotional states. More specifically, the study asks: (1) what proportion of youths aged 12 to 21 have ever sought mental health advice from generative AI systems when feeling sad, angry, or nervous; (2) how frequently they do so; (3) what characteristics predict such use; and (4) how helpful they find such advice. Although phrased descriptively rather than causally, the research question is framed within the context of expanding large language model (LLM) use among youths and a documented shortage of traditional mental health services. The intent is to provide nationally representative population estimates that can serve as an early benchmark for understanding adolescent and young adult engagement with generative AI in quasi-therapeutic contexts.


Previous Literature

The authors situate their inquiry at the intersection of two ongoing developments: the rapid diffusion of LLM chatbots and an escalating youth mental health crisis in the United States. They cite evidence of widespread uptake of generative AI, including ChatGPT, Gemini, and other commercial chat interfaces, alongside federal data indicating that substantial shares of adolescents experience mental health challenges and receive no conventional treatment. The nascent scholarly literature on AI in mental health emphasizes both potential benefits (e.g., immediacy, accessibility, reduced stigma) and notable risks, including lack of standardized quality benchmarks, insufficient clinical validation, unknown training data provenance, and concerns regarding cultural competence, transparency, and safety.


Relative to the broader field, rigorous empirical evidence remains sparse. Most prior work consists of conceptual analyses, commentaries, or technical proposals related to responsible deployment, evaluation frameworks, and ethical safeguards for LLM-generated mental health content. Little peer-reviewed research has quantified actual user behavior among adolescents or young adults, meaning that the present study establishes initial descriptive baselines for population-level engagement. In this respect, it modestly extends early literature by providing nationally representative estimates rather than convenience samples or qualitative assessments. Given the field’s infancy, the article contributes to filling an empirical knowledge gap rather than challenging established causal claims.


Data

The study uses cross-sectional survey data collected between February and March 2025 from two probability-based survey panels: RAND’s American Life Panel and Ipsos’ KnowledgePanel. Both panels sample U.S. households using random methods designed to produce population-representative coverage of English-speaking individuals with internet access. Because the target population includes minors as young as 12, all respondents were youths aged 12 to 21 at the time of the survey. Informed consent procedures were implemented, and the protocol received institutional review board approval at Harvard.


The analytic sample consists of 1,058 respondents out of 2,125 contacted (49.8 percent response rate). Weighted estimates generalize to the U.S. population of English-speaking internet-connected youths ages 12–21. Weighted demographic distributions approximate national margins: 50.3 percent female, 37.0 percent aged 18–21, 13.0 percent Black, 25.2 percent Hispanic, and 51.3 percent White non-Hispanic. Parent education, marital status, and geographic region distributions are also reported.


The survey included questions regarding (1) ever-use of generative AI, (2) whether the respondent had ever sought mental health advice from generative AI when feeling sad, angry, or nervous, (3) frequency of such use, and (4) perceived helpfulness. The focal dependent variables include both binary indicators (e.g., ever used for advice) and ordered categories (frequency and perceived helpfulness). The survey contains no clinical or diagnostic measures and no textual record of the advice itself. Because the unit of analysis is the respondent rather than the interaction, the study provides prevalence estimates but not content analysis.


Methods

Analyses are descriptive and correlational. Survey weights are applied to produce nationally representative estimates for the target population. The authors compute weighted proportions reporting use, frequency, and helpfulness. For exploratory adjustment of associations, they estimate multivariable logistic regression models predicting any use of generative AI for mental health advice and (separately) perceived helpfulness. Covariates include age group, sex, race/ethnicity, parental education, parental marital status, and census region.
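To make the estimation approach concrete, the sketch below reproduces the two core computations, a survey-weighted prevalence and a weighted logistic regression with exponentiated coefficients, in Python. This is illustrative only, not the authors' code: the file name and variable names are hypothetical, and statsmodels' freq_weights recovers design-weighted point estimates but not design-based standard errors, which for a probability panel would require survey-specific variance methods such as replicate weights.

```python
# Minimal sketch of the paper's estimation strategy (hypothetical data layout).
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Assumed columns, one row per respondent:
#   used_ai_advice : 1 if respondent ever sought mental health advice from AI
#   age_18_21      : 1 if aged 18-21, 0 if aged 12-17
#   weight         : survey weight scaling the sample to the target population
df = pd.read_csv("youth_survey.csv")  # hypothetical file

# Weighted prevalence: the weight-share of respondents reporting use.
prevalence = np.average(df["used_ai_advice"], weights=df["weight"])
print(f"Weighted prevalence of use: {prevalence:.1%}")

# Weighted logistic regression predicting any use. Point estimates match a
# design-weighted fit; standard errors here are only approximate.
X = sm.add_constant(df[["age_18_21"]])
fit = sm.GLM(df["used_ai_advice"], X,
             family=sm.families.Binomial(),
             freq_weights=df["weight"]).fit()

# Adjusted odds ratios and 95% CIs are the exponentiated coefficients.
print(np.exp(fit.params))
print(np.exp(fit.conf_int()))
```

In the paper's fuller specification, the covariate set would also include sex, race/ethnicity, parental education, parental marital status, and census region entered as categorical regressors.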


The study design is cross-sectional and lacks temporal ordering, random assignment, or quasi-experimental identification strategies. Consequently, it does not support causal inference about the determinants or effects of generative AI use. Instead, it provides descriptive benchmarks and adjusted associations. While logistic regression is reasonable for exploratory correlational work, the authors do not employ techniques such as difference-in-differences, instrumental variables, regression discontinuity, or other causal identification strategies. They also do not embed experimental manipulations (e.g., randomized exposure to AI advice). Thus, the methodological contribution lies in representativeness and clarity of measurement rather than causal design.


Findings/Effect Sizes

The authors report that 13.1 percent of U.S. adolescents and young adults (approximately 5.4 million individuals) had used generative AI for mental health advice. Usage rose sharply with age: 22.2 percent among those ages 18–21, compared with substantially lower percentages among minors. Among those who used generative AI for advice, 65.5 percent sought advice monthly or more often, 28 percent used it at least weekly, and 10 percent used it daily or almost daily. This frequency distribution indicates that a non-trivial share of users rely on AI advice as a recurring rather than episodic resource.
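The headline count is arithmetic on the weighted prevalence: taken together, the two reported figures imply a weighted population base of roughly 41 million English-speaking, internet-connected youths ages 12 to 21, since 0.131 × 41.2 million ≈ 5.4 million.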


Perceived helpfulness was high: 92.7 percent of users rated the advice as somewhat or very helpful, while only 8 respondents classified it as not helpful. Although helpfulness reflects subjective impressions rather than clinical outcomes, the high endorsement signals perceived utility among users.


In adjusted logistic regression, individuals aged 18–21 had significantly higher odds of using generative AI for mental health advice relative to younger adolescents (adjusted odds ratio [aOR] = 3.99; 95% CI: 1.90–8.34; p < .001). No other demographic variables reached statistical significance for predicting use.
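For readers less familiar with logistic regression output: the adjusted odds ratio is the exponentiated model coefficient, aOR = exp(β̂), and the reported interval is the exponentiated Wald interval, exp(β̂ ± 1.96 × SE(β̂)). The aOR of 3.99 thus corresponds to a coefficient of about ln(3.99) ≈ 1.38 on the log-odds scale, while an aOR below 1, as in the helpfulness model discussed next, indicates lower odds than the reference group.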


For perceived helpfulness, a prominent racial disparity emerged: Black respondents who used generative AI were significantly less likely to find the advice helpful than White non-Hispanic respondents (aOR = 0.15; 95% CI: 0.04–0.65; p = .01). Other covariates were not significant. This pattern suggests a perceived mismatch between AI advice and the needs or expectations of certain demographic groups, raising cultural competency questions for developers and evaluators.


The authors also contextualize prevalence estimates by noting that youth mental health needs are substantial and that counseling access gaps remain pronounced. The findings indicate that generative AI has already penetrated mental health help-seeking behaviors among a meaningful minority of the population.


Conclusion

McBain et al. (2025) provide early nationally representative evidence that a measurable subset of U.S. adolescents and young adults turn to generative AI for mental health advice with non-trivial frequency and high perceived helpfulness. The study also identifies demographic disparities in both adoption and perceived utility. Because the design is descriptive and not causal, the authors refrain from claims about clinical efficacy, health outcomes, or harms. Instead, they argue that the rapid diffusion of generative AI, combined with limited oversight and variable quality control, warrants further research and policy attention.


The article highlights several future directions: evaluating clinical safety and accuracy of AI-generated advice, understanding use patterns among clinically diagnosed populations, identifying mechanisms that drive differential perceptions of helpfulness, and establishing benchmarks or regulatory frameworks for AI-based mental health interactions. While lacking causal inference, the study’s contribution rests on representativeness, clarity of measurement, and timely documentation of emergent behaviors in a policy-relevant domain.
