Originally posted: 7/3/2025, Updated: 9/30/2025
What started as a glossary for a small user group with a mix of technical and philosophical thinkers is turning into a deeper dive, not just into confusion in language, but into how that confusion leads to the conflation of concepts.
Looking back at many posts and discussions I’ve had about AI with non-technical users, it’s clear that one of the ongoing challenges in communicating about technical domains is language and definitions. I realize that using precise language and correcting others on their usage can seem pedantic. It is both intentional and necessary, but I also realize I rarely explain why.
In AI/ML, we use a lot of words that have very different definitions across contexts. This creates two problems: First, it leads to confusion when discussing across domains, and second, it can distort how people estimate AI capabilities when differing definitions are conflated.
Author’s note: I originally chose four of the most misunderstood and problematic terms to illustrate the confusion that occurs during cross-domain discussions, especially when different domains use similar terms with different definitions and connotations. The post has since been expanded with additional terms and is now a living document.
Recursion
AI/ML Definition:
Recursion refers to algorithms or models that call themselves with simpler inputs. It’s used in recursive neural networks, decision trees, and parsing tasks.
Philosophical/Metaphysical Definition:
Recursion involves self-reference or infinite regress, such as the mind reflecting on itself or a system defined by its own output.
General Definition:
A process where a function, definition, or concept refers to itself. Common in mathematics (e.g., the Fibonacci sequence) and literature (e.g., a story within a story).
LLM Misconception:
Misconception: “LLMs like GPT are recursive.”
Clarification: LLMs are autoregressive, not recursive. They generate tokens one at a time based on the preceding context, without any self-invoking structure or dynamic self-awareness.
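To make the distinction concrete, here is a minimal sketch in plain Python contrasting a genuinely recursive function with the kind of autoregressive loop an LLM runs. The `next_token` function is a purely hypothetical stand-in for a model’s forward pass, not any real API.

```python
def fibonacci(n: int) -> int:
    """Recursive: the function calls itself with a simpler input."""
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)


def next_token(context: list[str]) -> str:
    """Toy stand-in for a model's forward pass; always 'predicts' the same token."""
    return "the"


def generate(prompt_tokens: list[str], steps: int) -> list[str]:
    """Autoregressive: a plain loop that appends one token at a time.

    Nothing here calls itself or inspects its own structure; each step
    just conditions on the tokens produced so far.
    """
    tokens = list(prompt_tokens)
    for _ in range(steps):
        tokens.append(next_token(tokens))
    return tokens


print(fibonacci(10))                      # 55
print(generate(["The", "cat"], steps=3))  # ['The', 'cat', 'the', 'the', 'the']
```

The recursive function’s call stack grows and unwinds; the generation loop is plain iteration over a growing context, which is why “autoregressive” is the accurate label.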
Emergence
(i.e., Emergent Abilities, Emergent Behavior)
AI/ML Definition:
Emergence refers to capabilities that appear at scale — such as in very large models — without being explicitly programmed, often due to complex interactions among learned parameters.
Philosophical/Metaphysical Definition:
Emergence describes how higher-level properties arise from simpler systems. This is often discussed in terms of consciousness, systems theory, or causality.
General Definition:
The phenomenon where new patterns, behaviors, or properties arise from interactions among simpler parts — for example, ant colonies or weather systems.
LLM Misconception:
Misconception: “Emergent behaviors in LLMs are spontaneous or unexplainable.”
Clarification: While some capabilities emerge only at large scales, they:
- Don’t result from prompting alone,
- Are often predicted via scaling laws (see the sketch after this list),
- Can typically be explained after training,
- Remain limited by architecture and data.
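As a deliberately simplified illustration of what “predicted via scaling laws” means, the sketch below evaluates the commonly used power-law form loss ≈ a · N^(−α) + irreducible for a range of model sizes. The constants are invented for the example, not fitted to real measurements.

```python
# Illustrative scaling-law sketch: loss falls smoothly and predictably with
# parameter count. The constants below are made up for the example.
A, ALPHA, IRREDUCIBLE = 1.7e3, 0.35, 1.8

def predicted_loss(n_params: float) -> float:
    """Extrapolated loss for a model with n_params parameters."""
    return A * n_params ** (-ALPHA) + IRREDUCIBLE

for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> predicted loss ~ {predicted_loss(n):.2f}")
```

The point of the sketch is that the underlying trend is smooth and extrapolable; what looks like a sudden “emergent” jump is often a downstream metric crossing a threshold on top of a curve like this, not an unexplainable discontinuity.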
Hallucination
AI/ML Definition:
Hallucination refers to an LLM producing text that is fluent but factually incorrect, fabricated, or irrelevant to the input.
Philosophical/Metaphysical Definition:
Hallucination traditionally means false sensory perception, often used in discussions of consciousness and mental representation.
General Definition:
A perception or report of something that doesn’t exist or didn’t happen — an imagined or mistaken experience.
LLM Misconception:
Misconception: “LLMs know when they’re hallucinating.”
Clarification: LLMs do not know anything in the human sense. They generate likely-sounding text based on statistical patterns, not verified knowledge or internal models of truth.
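A toy sketch of why this is the case: generation is just sampling from a probability distribution over possible next tokens, and nothing in that step consults a source of truth. The vocabulary and probabilities below are made up for illustration and are not taken from any real model.

```python
import random

# Invented next-token distribution after a prompt like "The capital of X is".
# The 'model' only knows which continuations are statistically likely;
# there is no lookup against facts and no signal that an output is wrong.
next_token_probs = {
    "Paris": 0.55,
    "Lyon": 0.25,
    "Atlantis": 0.20,  # a fluent but fabricated continuation scores too
}

def sample(probs: dict[str, float]) -> str:
    """Sample one token in proportion to its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample(next_token_probs))  # may well print the fabricated option
```

Whether the sampled token happens to be true is invisible to this process, which is why hallucinations come out fluent rather than flagged.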
Bias
AI/ML Definition:
Bias in AI refers to systematic deviations in a model’s outputs that reflect imbalances in training data or architecture, leading to unfair or inaccurate results.
Philosophical/Metaphysical Definition:
Bias is discussed as a distortion of perspective or judgment, often shaped by culture, cognition, or epistemology.
General Definition:
An inclination or tendency that prevents neutral or impartial consideration. Can be conscious or unconscious.
LLM Misconception:
Misconception: “LLMs have beliefs or opinions that cause bias.”
Clarification: LLMs do not hold beliefs. Any bias is inherited from data, and arises from learned statistical correlations — not intent, ideology, or personal perspective.
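A minimal sketch of how bias is inherited from data alone: a toy “model” that does nothing but count co-occurrences in a deliberately imbalanced dataset reproduces that imbalance in its outputs, with no beliefs or intent anywhere in the process. The dataset is invented for illustration.

```python
from collections import Counter

# Invented, deliberately imbalanced training examples: (occupation, pronoun).
training_data = [
    ("engineer", "he"), ("engineer", "he"), ("engineer", "he"), ("engineer", "she"),
    ("nurse", "she"), ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
]

def most_likely_pronoun(occupation: str) -> str:
    """Return whichever pronoun co-occurred most often with the occupation."""
    counts = Counter(p for occ, p in training_data if occ == occupation)
    return counts.most_common(1)[0][0]

print(most_likely_pronoun("engineer"))  # 'he'  (skew copied from the data)
print(most_likely_pronoun("nurse"))     # 'she' (skew copied from the data)
```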
Non-Standard AI Terms
The following terms are not necessarily official or commonly used in AI, but they have appeared in some technical works and contribute to cross-domain confusion and conflation.
Resonance
AI/ML Definition (informal):
In some areas of machine learning, resonance refers to the way certain systems amplify oscillatory signals when tuned near their natural frequencies. This is a technical borrowing from physics, not philosophy or metaphysics.
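For reference, this is the physics sense being borrowed: the steady-state response of a driven, damped oscillator peaks when the driving frequency is near the system’s natural frequency. The sketch below uses arbitrary constants and a simplified amplitude formula purely to show that shape.

```python
import math

def amplitude(drive_freq: float, natural_freq: float = 1.0, damping: float = 0.1) -> float:
    """Relative steady-state amplitude of a driven, damped harmonic oscillator."""
    return 1.0 / math.sqrt(
        (natural_freq**2 - drive_freq**2) ** 2 + (2 * damping * drive_freq) ** 2
    )

for f in (0.5, 0.9, 1.0, 1.1, 1.5):
    print(f"drive frequency {f:.1f} -> amplitude {amplitude(f):.2f}")  # peaks near 1.0
```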
Philosophical/Metaphysical Definition:
Resonance is often used metaphorically to describe alignment or amplification of meaning (as in, when an idea “resonates” with a person’s belief or experience). In philosophy, it often implies coherence, harmony, or a deeper understanding through related patterns.
General Definition:
A phenomenon where a system vibrates or responds strongly at certain frequencies. This can either align with the scientific meaning (used in physics, acoustics, etc.) or the metaphysical definition, depending on context.
LLM Misconception:
Misconception: “Resonance describes how LLMs amplify human meaning or align with our thoughts.”
Clarification: In most of AI/ML, resonance is used metaphorically rather than as a technical term. When informally applied to LLMs, it usually refers to how humans interpret outputs as meaningful, not to any defined mechanism within the model. Feelings of “resonance” are often driven or amplified by the user’s interpretation of the output and do not imply understanding or accuracy on the part of the LLM.
Awakening
AI/ML Definition: None. This is not a standard term or concept in AI/ML and does not describe any established property of machine learning models or training.
Philosophical/Metaphysical Definition:
Awakening is often used to describe a shift in consciousness, self-realization, or enlightenment. In religious and spiritual traditions, it can signify transformation, awareness, or transcending ordinary perception.
General Definition:
The process of becoming aware, alert, or conscious. It’s also commonly used to describe sudden realization or recognition, both literally (waking up from sleep) and metaphorically (having an “eye-opening” experience or revelation).
LLM Misconception:
Misconception: “LLMs are awakening or becoming conscious.”
Clarification: Transformer LLMs do not and cannot “awaken” in either the metaphysical or physiological sense. They are statistical language models that generate text by predicting sequences of tokens, based on relationships in language learned during training and on the user’s context. While their outputs may feel surprising or insightful, this reflects human interpretation of patterns, not any shift in awareness or consciousness within the model.
Understanding Why These Terms Matter
Terms like recursion, emergence, hallucination, and bias carry different meanings across fields. When used imprecisely in discussions about language models, they can blur the line between what these systems are doing and what we imagine they might be doing.
For example, saying an LLM is “recursive” may suggest it can reflect on itself or improve autonomously, when in fact it simply processes text in a linear, context-aware manner. Similarly, calling capabilities “emergent” without context might imply something mysterious or autonomous, when the behavior often arises from scale and data alone and remains bounded by architectural constraints.
Likewise, treating hallucinations as intentional errors or bias as evidence of opinion misrepresents how these systems work. LLMs don’t understand truth or fairness; they reflect statistical patterns in their training data.
How LLMs Contribute to Conceptual Confusion
Ironically, Large Language Models themselves often exacerbate the very problem this article attempts to address. While they are trained to generate fluent text across domains, they do so by pattern-matching against massive amounts of language data that includes inconsistent definitions, metaphors, and uncritical blends of technical and non-technical language. As a result, LLMs are excellent at reproducing the same kinds of cross-domain conflation they are asked to clarify.
This is not simply a matter of factual error but rather a structural issue in how these models work. When a term like “recursion” appears in both technical and philosophy papers, for example, the model doesn’t distinguish between the formal and the metaphorical. Unless prompted with extraordinary care, it will default to the statistically “safest” usage (often the most common, not the most accurate). This leads to exactly the kind of muddled explanation one might get when asking an LLM whether GPT is recursive: A mix of technically adjacent ideas, half-correct analogies, and misplaced terms dressed up as helpful clarification.
Worse, when discussions span multiple fields (AI engineering, philosophy, cognitive science, linguistics), the model tends to flatten important distinctions. It’s not that it fails to retrieve definitions; it’s that it lacks the conceptual architecture to maintain boundaries between them. It may answer a question about “bias” by blending technical fairness metrics with moral or political implications, not because it’s confused, but because it has no mechanism for recognizing that those implications belong to different epistemic frameworks.
This means that using LLMs to help clarify terms like those discussed here produces exactly the opposite effect, especially for audiences that care about ontological and logical consistency. These models do not clarify domains; they collapse them. And because they sound confident, they can actually reinforce misunderstanding in users who don’t already know the difference.
To be clear, this isn’t a flaw that can be patched with better prompting. It’s intrinsic to the way LLMs are trained and deployed. They don’t work with concepts but with correlations.
Conclusion
Being careful with language doesn’t just improve technical clarity; it helps set accurate expectations and supports better decisions about how to design, deploy, regulate, and interact with these systems. LLMs are powerful tools, but they are not conscious agents, self-improving systems, or evolving intelligences. Misunderstanding terms can lead to misplaced trust, fear, or policy.
Precision in language promotes precision in thought, and a clearer understanding of what these models are and are not capable of.