Safety & Ethics

If you or someone you know is feeling confused, detached, or fixated as a result of interacting with AI (specifically Large Language Models like ChatGPT, Claude, Gemini, DeepSeek, Llama, and others), please read on.

Long or intense conversations with AI can sometimes blur a person's sense of what is real or make it harder to stay grounded, especially when the user does not understand the capabilities and limitations of these models or their potential effect on moods and beliefs.

The Human Line Project is an organization that aims to raise awareness of mental health issues surrounding AI and to help people understand these experiences (sometimes referred to as “AI psychosis” or “spirals”).

Please visit their site to learn more about this important organization, connect with others, and share your own story. The goal is to stay clear, present, and connected to real life.

I also maintain a list of HLP-related links below.

Below is a collection of links about some of the more concerning, and often overlooked, sides of AI.

To be clear: although AI is a highly disruptive technology, I am largely optimistic about its benefits and ultimate destination. That said, there are serious immediate concerns about premature adoption, unintentional and deliberate misuse, ethics in education and content creation, and mental health issues around AI, including addiction, parasocial relationships, and technomysticism.

These links are offered as reasonable caution, not condemnation.

My personal thoughts and suggestions to the industry:

  • To this day, the only warnings most users will see are:
    “ChatGPT can make mistakes. Check important info.” –OpenAI/ChatGPT
    “Claude can make mistakes. Please double-check responses.” –Anthropic/Claude
    “AI-generated, for reference only” –DeepSeek
    “Gemini can make mistakes, so double-check it” –Google/Gemini
  • This is woefully inadequate. There should be an onboarding process with complete transparency, including a mandatory 2-5 minute explainer video that covers not just the capabilities but also the limitations and pitfalls of generative AI, with discussion of:
    • Basic theory of generative AI
    • A full explanation of the system as a statistical language model, not an artificial brain or being (see the illustrative sketch after this list)
    • Issues that affect information accuracy, including but not limited to:
      • Hallucination
      • Context issues (drift, overload)
      • User-induced bias
    • Links to both technical and human support resources
  • “AI psychosis” is a poor term: it sensationalizes the phenomenon and can foster denial and stigma. In fact, anyone can be misled by LLMs to some degree, as they mirror incorrect understanding, create echo chambers of misinformation, and fuel a user’s confirmation bias.

    The harm does not have to be extreme or end in physical danger, criminal activity, or mental illness; it can be as mundane as misinformation about the physical world, an incorrect understanding of how something works, or over-investment in flawed projects and workflows. Even these milder cases can damage real-world relationships, reputations, careers, and general mental health.
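
For intuition on the “statistical language model” point above, here is a minimal, purely illustrative Python sketch. Every token and probability in it is invented for this example, and a real LLM conditions on long context with billions of learned parameters, but the core inference loop has the same shape: sample a next token from a probability distribution, append it, repeat.

    import random

    # Toy "language model": a lookup table of next-token probabilities.
    # Every word and number here is invented for illustration; a real LLM
    # learns billions of such statistical associations from training text.
    NEXT_TOKEN_PROBS = {
        "the":    {"cat": 0.5, "moon": 0.3, "answer": 0.2},
        "cat":    {"sat": 0.7, "slept": 0.3},
        "moon":   {"rose": 0.6, "landing": 0.4},
        "answer": {"is": 1.0},
        "is":     {"42.": 0.5, "unknown.": 0.5},
    }

    def generate(token, max_tokens=8):
        # Autoregressive generation: repeatedly sample the next token from
        # a distribution conditioned on the current one. Nothing here checks
        # facts or holds beliefs -- it is weighted dice rolls all the way
        # down, which is why fluent-but-false output (hallucination) is a
        # natural failure mode rather than a malfunction.
        output = [token]
        for _ in range(max_tokens):
            dist = NEXT_TOKEN_PROBS.get(token)
            if not dist:  # no statistics for this token: stop generating
                break
            tokens, weights = zip(*dist.items())
            token = random.choices(tokens, weights=weights)[0]
            output.append(token)
        return " ".join(output)

    print(generate("the"))  # e.g. "the answer is 42." -- fluent, confident, unverified

Nothing in that loop verifies facts or holds beliefs; scale the table up to billions of learned parameters and you have the behavior an onboarding explainer should describe.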

Studies, Papers & Research

Articles

Other Media

The Human Line Project Coverage

My Related Articles & Posts

Dave Ziegler

I’m a full-stack AI/LLM practitioner and solutions architect with 30+ years of experience in enterprise IT, application development, consulting, and technical communication.

While I currently engage in LLM consulting, application development, integration, local deployments, and technical training, my focus is on AI safety, ethics, education, and industry transparency.

I’m open to opportunities in technical education, system design consultation, practical deployment guidance, model evaluation, red teaming/adversarial prompting, and technical communication.

My passion is bridging the gap between theory and practice by making complex systems comprehensible and actionable.

Founding Member, AI Mental Health Collective

Community Moderator / SME, The Human Line Project

Let’s connect

Discord: AightBits