

Chain-of-Thought (CoT)

Latest update: 26/04/29




Definition

Chain-of-thought prompting is a technique where you ask an AI to show its reasoning step by step before giving a final answer – which leads to more accurate results on tasks that require logic, math, or multi-step thinking.

What Is Chain-of-Thought Prompting?

Most AI responses jump straight to a conclusion. Chain-of-thought (CoT) prompting changes that. It asks the model to work through a problem out loud – spelling out each step of its reasoning before arriving at an answer.

The name is literal: a chain of thoughts, linked in sequence, leading to a conclusion. Instead of “the answer is X,” you get “first I need to consider Y, which means Z, and therefore the answer is X.”

It sounds simple. It works remarkably well. Research from Google showed that for complex reasoning tasks, asking a model to reason step by step produced significantly better results than asking for a direct answer – even with identical base models.

💡 How Does It Work?

When a model generates a response, it builds each token based on what came before it. If that means jumping straight from a question to an answer, there’s no intermediate reasoning – just pattern-matching to what an answer typically looks like.

CoT gives the model space to process the problem incrementally. Each step in the reasoning becomes context for the next step. This reduces errors because the model isn’t required to compress a complex problem into a single output – it can work through it piece by piece.

The simplest way to trigger it: add “Think step by step” or “Work through this carefully before giving your answer” to your prompt. That’s the zero-shot version. You can also demonstrate the pattern with examples that show the model what step-by-step reasoning looks like for your specific task type – that’s few-shot CoT, and it tends to perform even better.
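Both variants can be sketched as plain prompt construction. A minimal illustration – the helper names, trigger phrase, and example task below are illustrative choices, not a fixed API:

```python
# Sketch: building zero-shot and few-shot chain-of-thought prompts.
# The helper names and example task are illustrative, not a standard API.

COT_TRIGGER = "Think step by step before giving your final answer."

def zero_shot_cot(question: str) -> str:
    """Zero-shot CoT: append the reasoning trigger to the bare question."""
    return f"{question}\n\n{COT_TRIGGER}"

def few_shot_cot(question: str, examples: list[tuple[str, str]]) -> str:
    """Few-shot CoT: prepend worked examples whose answers spell out the
    step-by-step reasoning pattern the model should imitate."""
    demos = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{demos}\n\nQ: {question}\nA:"

# One worked demonstration with the reasoning written out in full.
examples = [(
    "A shop sells pens at $2 each. How much do 3 pens and a $5 notebook cost?",
    "First, 3 pens cost 3 * $2 = $6. Then add the notebook: $6 + $5 = $11. "
    "The answer is $11.",
)]

print(zero_shot_cot("What is 17% of 240?"))
print(few_shot_cot("What is 17% of 240?", examples))
```

The few-shot version ends on an open `A:` so the model continues in the demonstrated style – reasoning first, answer last.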

Why It Matters for Your Prompts

Chain-of-thought matters most when the task requires actual reasoning – not retrieval or formatting, but thinking. Math problems, logical deductions, multi-criteria decisions, troubleshooting, legal analysis – any task where an error in step two makes the final answer wrong.

Without CoT, a model can confidently produce an incorrect answer that looks right because it fits the expected shape of a response. With CoT, wrong reasoning becomes visible. You can see where the model went off track, catch errors before they matter, and correct the logic in your follow-up.

For everyday users, the practical version of this is knowing when to add “think through this step by step” to a prompt. For anything that feels like a problem to be solved rather than a question to be answered, that phrase consistently improves output quality. It doesn’t make the model smarter – it gives it room to use the reasoning it already has.

🌐 Real-World Example

Without CoT: A financial analyst asks the AI: “A client invested $10,000 at 6% annual interest compounded monthly for 3 years. What’s the final balance?” The model outputs a number confidently. It’s wrong – it applied simple interest rather than compound.

With CoT: She adds: “Work through this step by step, showing your calculations.” The model writes out the compound interest formula, substitutes the values, calculates each compounding period, and arrives at the correct answer – $11,966.81. More importantly, she can see the working and verify it.

Same model. The reasoning space made the difference.
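The calculation itself can be checked independently. A short sketch that works through each step the way a CoT response would:

```python
# Verify the compound interest example step by step:
# $10,000 at 6% annual interest, compounded monthly, for 3 years.

principal = 10_000.00
annual_rate = 0.06
periods_per_year = 12
years = 3

# Step 1: the periodic rate is the annual rate divided by the number
# of compounding periods per year.
periodic_rate = annual_rate / periods_per_year   # 0.005 per month

# Step 2: the total number of compounding periods.
n_periods = years * periods_per_year             # 36 months

# Step 3: apply the compound interest formula A = P * (1 + r/n)^(n*t).
balance = principal * (1 + periodic_rate) ** n_periods

print(f"${balance:,.2f}")  # ≈ $11,966.81
```

Simple interest over the same term would give only $10,000 × (1 + 0.06 × 3) = $11,800 – the kind of shortcut a model can take when it skips the intermediate steps.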

Related Terms

  • Tree-of-Thought (ToT) – An extension of CoT that explores multiple reasoning paths simultaneously instead of following one chain.
  • Zero-Shot Prompting – CoT can be applied in zero-shot form with a simple instruction, no examples required.
  • Few-Shot Prompting – Combining few-shot examples with chain-of-thought reasoning is one of the most effective prompting patterns for complex tasks.
  • Prompt Engineering – CoT is one of the highest-impact techniques in prompt engineering for analytical tasks.
  • Hallucination – CoT makes reasoning visible, which helps catch errors before they make it into the final output.

Frequently Asked Questions

Does “think step by step” actually work, or is it just a trick?

It genuinely improves accuracy on reasoning tasks. The 2022 Google paper “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models” showed measurable accuracy gains on math, logic, and common-sense reasoning benchmarks just from adding that phrase. The model isn’t performing – the intermediate reasoning steps change how the output gets constructed.

Does CoT slow things down?

It produces longer responses, which takes slightly more time and uses more tokens. For simple tasks, that’s not worth it. For complex reasoning problems, the accuracy improvement easily justifies the cost. Think of it as a tradeoff: faster wrong answer vs. slightly slower right one.

Should I always use chain-of-thought prompting?

No. For simple tasks – classification, summarization, reformatting – CoT adds length without adding accuracy. It pays off for multi-step reasoning, math, logic, and decision-making tasks where intermediate steps actually affect the outcome. If the task doesn’t require working through steps, don’t ask for them.

What if the model’s reasoning looks right but the answer is still wrong?

This happens. CoT reduces errors but doesn’t eliminate them. The model can reason convincingly through faulty premises, make arithmetic mistakes while showing correct logic structure, or reach a wrong conclusion through steps that each individually seem sound. Treat the visible reasoning as something to verify, not as proof the answer is correct.
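One lightweight safeguard: extract the model’s final figure and recompute it independently rather than trusting the visible steps. A sketch, using a hypothetical response string in place of real model output:

```python
import re

# Hypothetical CoT response text -- in practice this comes from the model.
response = (
    "The monthly rate is 0.06 / 12 = 0.005. Over 36 months the balance is "
    "10000 * 1.005 ** 36. Final answer: $11,966.81"
)

# Step 1: pull the model's stated final figure out of the response.
match = re.search(r"Final answer: \$([\d,]+\.\d{2})", response)
model_answer = float(match.group(1).replace(",", ""))

# Step 2: recompute the quantity independently of the model's reasoning.
expected = 10_000 * (1 + 0.06 / 12) ** 36

# Step 3: flag a mismatch instead of trusting reasoning that merely looks sound.
if abs(model_answer - expected) > 0.01:
    print("Mismatch: verify the model's steps before using the answer.")
else:
    print("Model answer matches the independent calculation.")
```

This only works for tasks with a checkable ground truth, but for math-heavy prompts it catches exactly the failure mode described above: plausible-looking steps with a wrong number at the end.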


Author Daniel: AI prompt specialist with over 5 years of experience in generative AI, LLM optimization, and prompt chain design. Daniel has helped hundreds of creators improve output quality through structured prompting techniques. At our AI Prompting Encyclopedia, he breaks down complex prompting strategies into clear, actionable guides.