Prompted Patterns: How AI Can Mirror and Amplify False Beliefs

This article examines how large language models (LLMs) can be led to produce increasingly speculative or incorrect responses through carefully crafted prompting. It documents a controlled experiment with Meta’s Llama 3.1 8B Instruct model, demonstrating how consistent prompting with pseudoscientific language can shift a model from correctly rejecting false claims about quantum entanglement in AI to eventually endorsing them. The piece highlights the dangers of confirmation bias when interacting with AI systems and offers practical strategies for more responsible engagement with LLMs.