Imagine having a really smart friend who is knowledgeable on a massive number of topics. Whenever you ask a question, she’s quick to answer and generally gives good information, but sometimes glosses over key points and occasionally makes mistakes in her eagerness to answer. Now you are working on a critical project and need help. You ask your friend to slow down, really think through the various aspects of your question before answering, and make sure she is confident in what she’s saying. Your friend, understanding the importance of your question, thinks a little harder and answers more carefully.

Of course, this is a very loose analogy to Chain of Thought (CoT) prompting. It’s important to understand that, unlike humans, LLMs don’t “think” but rather generate text based on learned language patterns and statistical associations. CoT techniques simply provide more guidance and structure, and do not change the fundamental operation of an LLM.

At the most basic level, CoT techniques add structure that guides how text is generated, potentially improving output by encouraging the model to produce answers in a more organized and logically coherent fashion.

CoT prompting can be as simple as asking the LLM to “think through a problem carefully before answering” or “break down the problem in logical steps and consider one step at a time.” However, it can also include providing what you already know to be true and/or specifying the progression or intermediate steps yourself.

Examples: How CoT Differs from a Standard Prompt

The outputs are too large to include here, but you can run these prompts yourself to see how the responses differ. I urge you to try them with your favorite model, be it ChatGPT, Claude, or any other.

Standard prompt that doesn’t encourage a CoT response:
Explain the concept of supply and demand in economics.

Now let’s encourage the LLM to utilize CoT:
Explain the concept of supply and demand in economics. As you do so, think through your explanation step-by-step, breaking down the key components and their relationships. Consider starting with basic definitions and then build up to how these elements interact in a market setting.

Finally, let’s provide what we understand on the topic and suggest structure:
Explain the concept of supply and demand in economics, including its implications for market equilibrium, price elasticity, and market inefficiencies. As you develop your explanation, think through each aspect step-by-step. Begin with basic definitions, then explore how supply and demand interact to determine prices and quantities. Next, consider factors that influence supply and demand curves. Then, examine the concepts of elasticity and how they affect market outcomes. Finally, discuss potential market failures or inefficiencies related to supply and demand. Throughout your explanation, consider how these various elements interconnect and influence market dynamics.

With some models, you may see the complexity and accuracy of the output increase just by giving added direction, though this is not guaranteed and will vary from model to model. This works by prompting the model to produce output patterns resembling structured reasoning it has seen during training.

Using CoT for Logic Problems

This is another exercise you should try yourself with your test set of puzzles:

Standard prompt:
Solve the ‘Fox, Chicken, and Grain’ river crossing puzzle.

Simple CoT prompt:
Solve the ‘Fox, Chicken, and Grain’ river crossing puzzle. As you work through the solution, think step-by-step about each move the farmer can make. Consider the consequences of each potential action and explain your reasoning for each step.

Detailed CoT prompt:
Solve the ‘Fox, Chicken, and Grain’ river crossing puzzle. Approach this problem methodically by following these steps:

  1. Clearly state the initial conditions and constraints of the puzzle.
  2. Consider all possible first moves and their consequences.
  3. For each valid move, think about the resulting situation and the next possible actions.
  4. When you reach a dead end or unsafe situation, backtrack and explore alternative paths.
  5. Keep track of the sequence of moves that lead to progress.
  6. For each step, explain your reasoning and why certain moves are safe or unsafe.
  7. If you reach the solution, summarize the entire sequence of moves.
  8. Reflect on any key insights or strategies that helped solve the puzzle.
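The methodical process the steps above describe (try a move, check that both banks stay safe, backtrack from dead ends) is essentially a graph search. As a point of comparison for checking the model's answer, here is a minimal breadth-first solver for the puzzle in Python. The state encoding and function names are my own illustration, not a standard API.

```python
from collections import deque

ITEMS = ("fox", "chicken", "grain")
# A bank is unsafe if the farmer is absent and it holds fox+chicken or chicken+grain.
UNSAFE = [{"fox", "chicken"}, {"chicken", "grain"}]


def is_safe(bank: frozenset, farmer_here: bool) -> bool:
    return farmer_here or not any(pair <= bank for pair in UNSAFE)


def solve():
    # State: (items on the starting bank, farmer on starting bank?)
    start = (frozenset(ITEMS), True)
    goal = (frozenset(), False)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (left, farmer_left), path = queue.popleft()
        if (left, farmer_left) == goal:
            return path
        here = left if farmer_left else frozenset(ITEMS) - left
        # The farmer crosses alone (None) or with one item from his bank.
        for cargo in [None, *here]:
            new_left = set(left)
            if cargo is not None:
                (new_left.discard if farmer_left else new_left.add)(cargo)
            new_left = frozenset(new_left)
            new_right = frozenset(ITEMS) - new_left
            # After crossing, the farmer is on the opposite bank;
            # both banks must remain safe.
            if is_safe(new_left, not farmer_left) and is_safe(new_right, farmer_left):
                state = (new_left, not farmer_left)
                if state not in seen:
                    seen.add(state)
                    queue.append((state, path + [cargo or "nothing"]))
    return None


print(solve())  # a shortest sequence of crossings (7 moves, starting and ending with the chicken)
```

Breadth-first search guarantees a shortest solution; there are two symmetric 7-crossing answers (ferrying the fox or the grain second), and the solver will return one of them. Comparing a model's CoT output against a solver like this is a simple way to verify that its "reasoning" actually reaches a valid answer.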

Conclusion

These techniques can provide additional guidance in a way that increases the chance the predicted output will be more organized and precise, all without requiring complex prompts or additional training.

Tip: Try experimenting with these CoT prompting styles using different LLMs and tasks—from math problems to concept explanations—to see what yields the best results.


Dave Ziegler

I’m a full-stack AI/LLM practitioner and solutions architect with 30+ years of experience in enterprise IT, application development, consulting, and technical communication.

While I currently engage in LLM consulting, application development, integration, local deployments, and technical training, my focus is on AI safety, ethics, education, and industry transparency.

Open to opportunities in technical education, system design consultation, practical deployment guidance, model evaluation, red teaming/adversarial prompting, and technical communication.

My passion is bridging the gap between theory and practice by making complex systems comprehensible and actionable.

Founding Member, AI Mental Health Collective

Community Moderator / SME, The Human Line Community

Let’s connect

Discord: AightBits