Chain-of-Thought Prompting | Definition & Examples
Chain-of-thought prompting refers to ways of instructing a large language model (LLM) to reason through a problem step by step before giving its answer.
You can think of it like asking the AI to “show its work”—the way a teacher might ask a student to write out their reasoning before giving the final answer—even if the user of the LLM may only see that final answer.
What is chain-of-thought prompting?
Chain-of-thought (CoT) prompting is a technique developers use when writing prompts for software systems built on large language models (LLMs), such as OpenAI’s GPT or Anthropic’s Claude models.
CoT prompts are designed to encourage the model to work through a complex problem in steps to improve the accuracy of its answer.
End users accessing an LLM through a chatbot such as ChatGPT can also use chain-of-thought prompting to try to improve the accuracy of its responses. However, many chatbots already apply internal CoT reasoning automatically when a prompt requires it, so adding CoT instructions can be unnecessary and can even reduce accuracy for straightforward questions.
Types of CoT prompts
Prompt engineers use various approaches to implement chain-of-thought prompting for different kinds of tasks.
Zero-shot chain-of-thought
Zero-shot chain-of-thought prompts add a simple instruction to break the problem into steps (e.g., “Show your work,” “Think step by step,” or “Explain your reasoning”), without providing examples.
“Let’s think step by step to satisfy all the user’s conditions before creating the final weekly schedule.”
You can also try improving your chatbot prompts by including a request such as “Think step by step,” although it may produce less accurate results for simple questions.
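In code, applying a zero-shot CoT cue usually amounts to appending the instruction to the user’s question before sending it to the model. Here is a minimal sketch; the function name and default cue are illustrative, not a standard API:

```python
def add_cot_cue(question: str, cue: str = "Let's think step by step.") -> str:
    """Return the question with a zero-shot chain-of-thought cue appended."""
    return f"{question}\n\n{cue}"

# The augmented prompt would then be sent to the LLM as usual.
prompt = add_cot_cue(
    "A train leaves at 9:00 and travels 120 km at 60 km/h. When does it arrive?"
)
print(prompt)
```

No examples are included in the prompt, which is what makes it “zero-shot”: the instruction alone nudges the model to reason in steps.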
Few-shot chain-of-thought
Few-shot chain-of-thought prompts include one or more examples with solution steps that show the model how to reason through a problem before it tackles a new one. These examples are written into a prompt template—a predefined structure that the developer creates to tell the model how to behave and where to insert the user’s question. When the tool runs, the software fills in that template with the user’s input, and the model uses the built-in examples as a guide for how to think and explain its answer.
“Example 1:
Question: A box is 12 cm long, 4 cm wide, and 3 cm high. How many boxes can fit in a space that is 20 cm long, 10 cm wide, and 4 cm high?
Reasoning:
1) Test all orientations of the box and count whole fits along each dimension (use floor division).
– (12, 4, 3): ⌊20/12⌋ × ⌊10/4⌋ × ⌊4/3⌋ = 1 × 2 × 1 = 2
– (12, 3, 4): ⌊20/12⌋ × ⌊10/3⌋ × ⌊4/4⌋ = 1 × 3 × 1 = 3 ← best
– (4, 12, 3): ⌊20/4⌋ × ⌊10/12⌋ × ⌊4/3⌋ = 5 × 0 × 1 = 0
– (4, 3, 12): ⌊20/4⌋ × ⌊10/3⌋ × ⌊4/12⌋ = 5 × 3 × 0 = 0
– (3, 12, 4): ⌊20/3⌋ × ⌊10/12⌋ × ⌊4/4⌋ = 6 × 0 × 1 = 0
– (3, 4, 12): ⌊20/3⌋ × ⌊10/4⌋ × ⌊4/12⌋ = 6 × 2 × 0 = 0
2) Choose the orientation with the maximum product.
Answer: 3 boxes.
Now solve this one:
Question: {user_question}
Let’s think step by step.”
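The floor-division arithmetic in the worked example above can be sanity-checked with a short script. This is only a check of the reasoning steps, not part of the prompt itself; the function name is illustrative:

```python
from itertools import permutations

def max_whole_boxes(box, space):
    """Try every axis-aligned orientation of the box and return the best
    count of whole boxes that fit (floor division along each dimension)."""
    return max(
        (space[0] // l) * (space[1] // w) * (space[2] // h)
        for (l, w, h) in permutations(box)
    )

print(max_whole_boxes((12, 4, 3), (20, 10, 4)))  # prints 3, matching the example
```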
For complex problems that require multiple reasoning steps, you can also try improving your chatbot prompts by including examples of how similar problems have been solved.
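A few-shot prompt template like the one above is typically filled in at runtime with the user’s question. The sketch below abbreviates the worked example, and `build_prompt` is an illustrative name rather than a library function:

```python
# Minimal sketch of a few-shot CoT prompt template. A real template would
# include the full reasoning steps from the example, not this abbreviation.
FEW_SHOT_TEMPLATE = """Example 1:
Question: A box is 12 cm long, 4 cm wide, and 3 cm high. How many boxes \
can fit in a space that is 20 cm long, 10 cm wide, and 4 cm high?
Reasoning: Test every orientation with floor division and keep the best.
Answer: 3 boxes.

Now solve this one:
Question: {user_question}
Let's think step by step."""

def build_prompt(user_question: str) -> str:
    """Insert the user's question into the few-shot template."""
    return FEW_SHOT_TEMPLATE.format(user_question=user_question)

print(build_prompt("How many 5 cm cubes fit in a 22 cm cube?"))
```

When the tool runs, the software substitutes the user’s input for the `{user_question}` placeholder, so the model always sees the worked example before the new question.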
Frequently asked questions about chain-of-thought prompting
- What is chain-of-thought reasoning?

  Chain-of-thought reasoning is the process by which a large language model (LLM) breaks a task down into logical steps before producing an answer. Developers can encourage this behavior in software systems through chain-of-thought prompting.
Cite this QuillBot article
QuillBot. (2025, November 04). Chain-of-Thought Prompting | Definition & Examples. QuillBot. Retrieved November 6, 2025, from https://quillbot.com/blog/ai-prompt-writing/chain-of-thought-prompting/