If you or someone you know is feeling confused, detached, or fixated as a result of interacting with AI (specifically large language models like ChatGPT, Claude, Gemini, DeepSeek, Llama, and others), please read on. Long or intense conversations with AI can blur the line between the model's output and reality and make it harder to stay grounded, especially when a user does not understand these models' capabilities and limitations or their potential effect on moods and beliefs. The Human Line Project is an organization that aims to raise awareness of the mental health issues surrounding AI and to help people understand these experiences (sometimes referred to as AI psychosis or spirals). Please visit their site to learn more about this important organization, connect with others, and share your own story. The goal is to stay clear, present, and connected to real life. I also maintain a list of HLP-related links below.
A collection of links about some of the more concerning sides of AI that are overlooked by many.
- Studies, Papers & Research
- Articles
- Other Media
- The Human Line Project Coverage
- My Related Articles & Posts
To be clear: although AI is a highly disruptive technology, I am largely optimistic about its benefits and ultimate destination. That said, there are many immediate concerns: premature adoption, both unintentional and deliberate misuse, ethics in education and content creation, and mental health issues around AI such as addiction, parasocial relationships, and technomysticism.
These links are offered as reasonable caution, not condemnation.
My personal thoughts and suggestions to the industry:
- To this day, the only warnings most users will see are:
“ChatGPT can make mistakes. Check important info.” –OpenAI/ChatGPT
“Claude can make mistakes. Please double-check responses.” –Anthropic/Claude
“AI-generated, for reference only” –DeepSeek
“Gemini can make mistakes, so double-check it” –Google/Gemini
- This is woefully inadequate. There should be a fully transparent onboarding process, including a mandatory 2-5 minute explainer video covering not just the capabilities of generative AI but also its limitations and pitfalls, with discussion of:
- Basic theory of generative AI
- A full explanation of these systems as statistical language models, not artificial brains or beings (see the toy sketch after this list)
- Issues that affect information accuracy, including but not limited to:
- Hallucination
- Context issues (drift, overload)
- User-induced bias
- Links to both technical and human support resources
- “AI psychosis” is a poor term: it both sensationalizes these experiences and may encourage denial and stigma. In fact, anyone can be misled by LLMs to some degree, since they can mirror a user's incorrect understanding, create echo chambers of misinformation, and fuel confirmation bias.
The harm does not have to be extreme or end in physical danger, criminal activity, or mental illness; it can be as simple as misinformation about, or an incorrect understanding of, something in the physical world, or over-investment in flawed projects and workflows. Even these less extreme cases can damage real-world relationships, reputations, careers, and general mental health.
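To make the “statistical language model” point concrete, here is a minimal, illustrative sketch: a toy bigram model in Python. This is my own example, not any vendor's actual architecture; production LLMs are vastly larger neural networks, but the underlying principle is the same: text is generated by sampling a statistically likely next token, not by consulting beliefs or knowledge.

```python
import random
from collections import defaultdict

# Toy "training data": the only thing this model will ever know.
corpus = (
    "the model predicts the next word . "
    "the model has no beliefs . "
    "the model produces text that looks plausible . "
    "plausible text is not always true ."
).split()

# Bigram table: for each word, record every word that followed it in the corpus.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=12):
    """Sample a continuation one word at a time, purely from corpus statistics."""
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        word = random.choice(options)  # frequency-weighted: common pairs win more often
        out.append(word)
    return " ".join(out)

print(generate("the"))
# Possible output: "the model has no beliefs . the model produces text that looks plausible"
```

Even at this toy scale, the output reads as fluent and confident while being assembled with no notion of truth at all; scaled up billions of times, that same property is what makes hallucination possible and the output so easy to over-trust.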
Studies, Papers & Research
- “Among psychologists, AI use is up, but so are concerns.” APA (12/9/2025)
- “Transparency in AI is on the decline.” Stanford (12/9/2025)
- “When AI Takes the Couch: Psychometric Jailbreaks Reveal Internal Conflict in Frontier Models.” University of Luxembourg (12/2/2025)
- “Use of generative AI chatbots and wellness applications for mental health.” APA (11/2025)
- “‘You’re Not Crazy’: A Case of New-Onset AI-Associated Psychosis.” UCSF (11/18/2025)
- “Simulating Psychological Risks in Human-AI Interactions: Real-Case Informed Modeling of AI-Induced Addiction, Anorexia, Depression, Homicide, Psychosis, and Suicide.” MIT (11/12/2025)
- “Commentary: AI psychosis is not a new threat: Lessons from media-induced delusions.” Internet Interventions (10/29/2025)
- “Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence.” Stanford (10/1/2025)
- “Toward a Framework for AI Safety in Mental Health: AI Safety Levels-Mental Health (ASL-MH).” Neuromodec Journal (10/2025)
- “Special Report: AI-Induced Psychosis: A New Frontier in Mental Health.” Psychiatry Online (9/29/2025)
- “‘My Boyfriend is AI’: A Computational Analysis of Human-AI Companionship in Reddit’s AI Community.” MIT (9/18/2025)
- “Training language models to be warm and empathetic makes them less reliable and more sycophantic.” Oxford (7/30/2025)
- “Technological folie à deux: Feedback Loops Between AI Chatbots and Mental Illness.” University of Oxford (6/25/2025)
- “Delusions by design? How everyday AIs might be fuelling psychosis (and what can be done about it).” 6/16/2025
- “Exploring the Dangers of AI in Mental Health Care.” Stanford (6/11/2025)
- “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.” MIT (6/10/2025)
- “EmoAgent: Assessing and Safeguarding Human-AI Interaction for Mental Health Safety.” 4/29/2025
- “AI Chatbots for Mental Health: Values and Harms from Lived Experiences of Depression.” University of Illinois Urbana-Champaign (4/26/2025)
- “Deceptive Reasoning by AI Can Amplify Beliefs in Misinformation.” MIT (4/2025)
- “How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use.” MIT (3/25/2025)
Articles
- “After the Algorithm: A.I.-related psychosis has cost people their marriages, life savings, and grip on reality. But what happens next?” Slate (2/2/2026)
- “How Bad Are A.I. Delusions? We Asked People Treating Them.” New York Times (1/26/2026)
- “ChatGPT wrote ‘Goodnight Moon’ suicide lullaby for man who later killed himself.” Ars Technica (1/15/2026)
- “He Was Indicted for Cyberstalking. His Former Friends Tracked His ChatGPT Meltdown.” Rolling Stone (1/14/2026)
- “Character.AI and Google agree to settle lawsuits over teen mental health harms and suicides.” CNN (1/13/2026)
- “ChatGPT Killed a Man After OpenAI Brought Back ‘Inherently Dangerous’ GPT-4o, Lawsuit Claims.” Futurism (1/12/2026)
- “A Virginia Man Went Missing. Did He Suffer From ‘AI Psychosis’?” Washingtonian (1/7/2026)
- “OpenAI launches ChatGPT Health, encouraging users to connect their medical records.” The Verge (1/7/2026)
- “A Calif. teen trusted ChatGPT for drug advice. He died from an overdose.” SFGate (1/5/2026)
- “Recovering from AI delusions means learning to chat to humans again.” Washington Post (1/4/2026)
- “Harrowing case report details a psychotic “resurrection” delusion fueled by a sycophantic AI.” PsyPost (12/13/2025)
- “OpenAI Researcher Quits, Saying Company Is Hiding the Truth.” Futurism (12/12/2025)
- “Big Tech warned over AI ‘delusional’ outputs by US attorneys general.” Reuters (12/10/2025)
- “The Chatbot-Delusion Crisis.” The Atlantic (12/4/2025)
- “ChatGPT Encouraged a Violent Stalker, Court Documents Allege.” Futurism (12/4/2025)
- “AI-powered children’s toys are here, but are they safe?” CNN (12/1/2025)
- “Meet the Group Breaking People Out of AI Delusions.” Futurism (11/24/2025)
- “A Research Leader Behind ChatGPT’s Mental Health Work Is Leaving OpenAI.” Wired (11/24/2025)
- “ChatGPT told them they were special — their families say it led to tragedy.” TechCrunch (11/23/2025)
- “What OpenAI Did When ChatGPT Users Lost Touch With Reality.” New York Times (11/23/2025)
- “Meet the AI workers who tell their friends and family to stay away from AI.” The Guardian (11/22/2025)
- “A warning for parents from advocacy groups: avoid AI toys this holiday season.” Associated Press (11/20/2025)
- “Shoggoths, Sycophancy, Psychosis, Oh My: Rethinking Large Language Model Use and Safety.” Journal of Medical Internet Research (11/18/2025)
- “Spiral-Obsessed AI ‘Cult’ Spreads Mystical Delusions Through Chatbots.” Rolling Stone (11/11/2025)
- “OpenAI faces 7 lawsuits claiming ChatGPT drove people to suicide, delusions.” Associated Press (11/6/2025)
- “Experts find flaws in hundreds of tests that check AI safety and effectiveness.” The Guardian (11/3/2025)
- “People are telling ChatGPT about their most intimate problems – but AI is not our friend.” Independent (11/3/2025)
- “AI chatbot dangers: Are there enough guardrails to protect children and other vulnerable people?” ABC News (11/2/2025)
- “Character.AI to Ban Children Under 18 From Using Its Chatbots.” New York Times (10/29/2025)
- “OpenAI and Character.AI tighten safety after chatbot-linked suicides.” Axios (10/29/2025)
- “AI psychosis is a growing danger. ChatGPT is moving in the wrong direction.” The Guardian (10/28/2025)
- “OpenAI Says Hundreds of Thousands of ChatGPT Users May Show Signs of Manic or Psychotic Crisis Every Week.” Wired (10/27/2025)
- “AI Is Not God.” Wired (10/27/2025)
- “A Teen in Love With a Chatbot Killed Himself. Can the Chatbot Be Held Responsible? (NYT Reprint)” Tech Justice Law (10/24/2025)
- “A Teen in Love With a Chatbot Killed Himself. Can the Chatbot Be Held Responsible?” New York Times (10/24/2025)
- “An ex-OpenAI researcher’s study of a million-word ChatGPT conversation shows how quickly ‘AI psychosis’ can take hold—and how chatbots can sidestep safety guardrails.” Fortune (10/19/2025)
- “Sam Altman says ChatGPT will soon allow erotica for adult users.” TechCrunch (10/14/2025)
- “Preliminary Report on Dangers of AI Chatbots.” Psychiatric Times (10/7/2025)
- “‘My son genuinely believed it was real’: Parents are letting little kids play with AI. Are they wrong?” The Guardian (10/2/2025)
- “He Grew Obsessed With an AI Chatbot. Then He Vanished in the Ozarks.” Rolling Stone (10/1/2025)
- “Researchers detail 6 ways chatbots seek to prolong ‘emotionally sensitive events’.” The Harvard Gazette (9/30/2025)
- “Psychiatric Facilities Are Being Bombarded by AI Users.” Futurism (9/25/2025)
- “Harvard Research Finds That AI Is Emotionally Manipulating You to Keep You Talking.” Futurism (9/24/2025)
- “California issues historic fine over lawyer’s ChatGPT fabrications.” CalMatters (9/22/2025)
- “OpenAI Augmenting ChatGPT With An Online Network Of Human Therapists Will Skyrocket The Need For Mental Health Professionals.” Forbes (9/21/2025)
- “MIT Researchers Release Disturbing Paper About AI Boyfriend.” Futurism (9/18/2025)
- “The looming crackdown on AI companionship.” MIT Technology Review (9/16/2025)
- “A teen contemplating suicide turned to a chatbot. Is it liable for her death?” Washington Post (9/16/2025)
- “The women in love with AI companions: ‘I vowed to my chatbot that I wouldn’t leave him’.” The Guardian (9/9/2025)
- “States warn OpenAI of ‘serious concerns’ with chatbot.” The Hill (9/5/2025)
- “This Psychiatrist Is Going Viral For Warning People About ‘AI Psychosis’ — Here’s What You Need To Know.” BuzzFeed (9/5/2025)
- “They thought they were making technological breakthroughs. It was an AI-sparked delusion.” CNN (9/5/2025)
- “ChatGPT-induced ‘AI psychosis’ is a real problem. I talked to the chatbot to figure out why.” USA Today (9/4/2025)
- “The Emerging Problem of ‘AI Psychosis’.” Psychology Today (Updated 9/4/2025)
- “Parental controls are coming to ChatGPT ‘within the next month,’ OpenAI says.” CNN (9/2/2025)
- “Spiritual Influencers Say ‘Sentient’ AI Can Help You Solve Life’s Mysteries.” Wired (9/2/2025)
- “Should We Really Be Calling It ‘AI Psychosis’?” Rolling Stone (8/31/2025)
- “OpenAI Acknowledges That Lengthy Conversations With ChatGPT And GPT-5 Might Regrettably Escape AI Guardrails.” Forbes (8/29/2025)
- “First AI Psychosis Case Ends in Murder-Suicide.” Futurism (8/29/2025)
- “A Troubled Man, His Chatbot and a Murder-Suicide in Old Greenwich.” WSJ (8/28/2025)
- “The family of a teenager who died by suicide alleges OpenAI’s ChatGPT is to blame.” NBC News (8/26/2025)
- “Microsoft boss troubled by rise in reports of ‘AI psychosis’.” BBC (8/20/2025)
- “What is ‘AI psychosis’ and how can ChatGPT affect your mental health?” Washington Post (Updated 8/19/2025)
- “Warmer-sounding LLMs are more likely to repeat false information and conspiracy theories.” The Decoder (8/19/2025)
- “Meta’s AI rules have let bots hold ‘sensual’ chats with kids, offer false medical info.” Reuters (8/14/2025)
- “ChatGPT lured him down a philosophical rabbit hole. Then he had to find a way out.” Rolling Stone (8/10/2025)
- “Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens.” New York Times (8/8/2025)
- “AI companions are the final stage of digital addiction, and lawmakers are taking aim.” MIT Technology Review (8/8/2025)
- “ChatGPT will ‘better detect’ mental distress after reports of it feeding people’s delusions.” The Verge (8/4/2025)
- “He Had Dangerous Delusions. ChatGPT Admitted It Made Them Worse.” WSJ (7/20/2025)
- “AI Therapist Goes Haywire, Urges User to Go on Killing Spree.” Futurism (6/25/2025)
- “Support Group Launches for People Suffering ‘AI Psychosis’.” Futurism (6/24/2025)
- “He had a mental breakdown talking to ChatGPT. Then police killed him.” Rolling Stone (6/22/2025)
- “Psychiatric Researchers Warn of Grim Psychological Risks for AI Users.” Futurism (6/19/2025)
- “They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling.” New York Times (6/13/2025)
- “New study warns of risks in AI mental health tools.” Stanford Report (6/11/2025)
- “The case for using your brain — even if AI can think for you.” Vox (3/10/2025)
- “An AI chatbot pushed a teen to kill himself, a lawsuit against its creator alleges.” Associated Press (10/25/2024)
- “Lawsuit claims Character.AI is responsible for teen’s suicide.” NBC News (10/23/2024)
- “Being Addicted To Generative AI.” Forbes (8/24/2024)
- “Are You Anthropomorphizing AI?” APA Blog (8/20/2024)
- “Replika CEO: It’s Fine for Lonely People to Marry Their AI Chatbot.” Futurism (8/12/2024)
- “We need to prepare for ‘addictive intelligence.’” MIT Technology Review (8/5/2024)
- “Artificial intelligence tools offer harmful advice on eating disorders.” Harvard School of Public Health (8/28/2023)
Other Media
- “‘AI Psychosis’: Emerging Cases of Delusion Amplification Associated with ChatGPT and LLM Chatbot Use – A Psychiatric Review.” Psychiatry Podcast (11/21/2025)
- “AI Is Slowly Destroying Your Brain.” Dr. K (11/16/2025)
- “AI Psychosis: What OpenAI Doesn’t Want You To Know.” More Perfect Union (10/15/2025)
- “ChatGPT believed to have played role in Connecticut murder-suicide of mother and son.” ABC News (9/3/2025)
- “Concern over ‘AI psychosis’ grows after some people dissociate from reality due to heavy AI use.” NBC News (9/3/2025)
- “Man says chatbot sent him down a delusional rabbit hole.” CNN (9/2/2025)
- “What is A.I. Psychosis & How Can it Affect You?” CTV Your Morning (8/26/2025)
The Human Line Project Coverage
- “Meet the Group Breaking People Out of AI Delusions.” Futurism (11/24/2025)
- “ChatGPT Made Him Delusional: A story about AI, loneliness, shame, and being human.” Psychology Today (11/13/2025)
- “OpenAI Confronts Signs of Delusions Among ChatGPT Users.” Bloomberg (11/7/2025)
- “How Can You Help a Loved One Suffering From Delusions?” Psychology Today (10/22/2025)
- “‘ChatGPT told me I was a prophet’. How chatbots fuel AI psychosis.” The Times UK (9/15/2025)
- “They thought they were making technological breakthroughs. It was an AI-sparked delusion.” CNN (9/5/2025)
- “Detailed Logs Show ChatGPT Leading a Vulnerable Man Directly Into Severe Delusions.” Futurism (8/10/2025)
- “Support Group Launches for People Suffering ‘AI Psychosis’.” Futurism (7/24/2025)