-
Confirmation Bias, Dunning-Kruger, and LLM Echo Chambers
A measured look at the Dunning–Kruger effect around LLMs: how prompts and chat history steer answers, how “hallucinations” arise, and why fresh sessions, neutral wording, and verification help more than clever tricks.
-
Revisiting User-Induced Bias with OpenAI’s gpt-oss-20b
Back in April, I posted “Prompted Patterns: How AI Can Mirror and Amplify False Beliefs” to demonstrate how LLMs can inadvertently become echo chambers for misinformation through user-induced confirmation bias. We revisit that post with the help of OpenAI’s gpt-oss-20b.
-
How Iterative Prompting Can Elevate Lightweight LLMs to the Heavyweight Class
This post presents a controlled experiment using Mistral 3.1 Small to summarize Thomas Paine’s The Crisis (Part I). By adjusting only the prompt structure across iterations, the results show how even a lightweight, local model can produce output comparable in quality to a larger, hosted model using the same input…
-
LLM Limitations, Weak Points & Blind Spots: Math
This post looks at the limitations of large language models when it comes to math. It explains why LLMs are not reliable for mathematical tasks, outlines common failure points, and highlights the importance of validation, especially in fields where precision and safety are critical.
-
A High-Level Overview of Large Language Model Training
This post is a simplified walkthrough of how Large Language Models (LLMs) are trained, evaluated, and deployed. It’s written for people who are interested in AI and want a general understanding of the process without diving into the math or implementation details.
-
FramePack: Extended Image-to-Video On Budget Hardware
This post looks at FramePack, an open-source tool for generating longer-form AI video from a single image. FramePack improves on traditional tools and models that break down over longer generations.
-
Introduction to Tree of Thought (ToT) Prompting
Tree of Thought (ToT) prompting is a method for structuring language model prompts to explore multiple reasoning paths instead of just one. This post explains how it differs from Chain of Thought (CoT) prompting, when to use it, and how to apply it using practical, prompt-only techniques.
-
Quick Byte: What Is Quantization?
Quantization reduces a model’s size and increases its speed by storing weights at lower numerical precision. This quick post gives a high-level view, comparing quantization to JPEG compression.
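The precision trade-off can be sketched in a few lines. This is a toy illustration of symmetric 8-bit quantization on a plain list of floats, not how real quantization libraries handle model weights:

```python
def quantize_int8(weights):
    """Map floats into the int8 range [-127, 127] with a single scale factor."""
    scale = max(abs(w) for w in weights) / 127  # largest magnitude maps to 127
    q = [round(w / scale) for w in weights]     # small integers, 1 byte each
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats; small rounding error is the cost."""
    return [v * scale for v in q]

weights = [0.02, -1.27, 0.5, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# restored is close to, but not necessarily identical to, the original weights
```

Like JPEG, the compressed form is lossy: storage drops (int8 vs. float32 is roughly 4x smaller) while the reconstruction stays close enough to the original for most uses.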
-
Pattern Priming in Prompting: How to Shape LLM Output with Statistical Cues
This post introduces what I call pattern priming: a prompt design technique that guides language model outputs by using statistically associated language cues. It explains how this approach differs from Few-Shot or Chain-of-Thought prompting, and offers practical guidance for tasks requiring clarity, precision, or domain-specific tone.
-
Why LLMs Aren’t Black Boxes
This post challenges the common claim that large language models are “black boxes” by explaining their inner workings in clear, grounded terms. It explores what makes these systems appear opaque, clarifies key technical concepts, and outlines why understanding their design matters for responsible use.