Chain-of-Thought (CoT)
Prompting a model to reason step by step before answering, which often yields large gains on multi-step problems.
Chain-of-thought prompting tells the model to show its work. Instead of jumping to an answer, it generates intermediate reasoning steps. The result is large gains on math, logic, and multi-step problems.
The simplest form is zero-shot CoT: appending 'Let's think step by step' to the prompt. Few-shot CoT instead supplies worked examples whose answers spell out the reasoning, and tree-of-thought goes further by exploring multiple reasoning branches, as in the sketch below.
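To make the difference concrete, here is a minimal sketch of the first two variants in Python. The `call_llm` helper is a hypothetical stand-in for whatever completion API you use; the prompt construction is the point.

```python
# Sketch of zero-shot vs. few-shot CoT prompt construction.
# `call_llm` is a hypothetical placeholder for your model provider's API.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your completion API")

question = (
    "A bat and a ball cost $1.10 together. The bat costs $1.00 "
    "more than the ball. How much does the ball cost?"
)

# Zero-shot CoT: just append the trigger phrase to the question.
zero_shot_prompt = f"{question}\nLet's think step by step."

# Few-shot CoT: prepend a worked example whose answer shows the reasoning,
# so the model imitates the step-by-step format.
few_shot_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
    f"Q: {question}\nA:"
)

answer = call_llm(zero_shot_prompt)
```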
In 2025-2026 the CoT idea evolved into 'reasoning models' (OpenAI's o-series, DeepSeek-R1, Claude with extended thinking). These models are trained to carry out an internal chain of thought before answering, often producing dramatically better results on hard problems at the cost of latency and token usage.
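With reasoning models, the chain of thought is requested through the API rather than the prompt. Here is a sketch using the Anthropic Python SDK's extended-thinking option; the model name and token budget are assumptions, so check the current docs before relying on them.

```python
# Sketch: requesting extended thinking from a reasoning model via the
# Anthropic Python SDK. Model name and budgets are assumptions; consult
# the current API documentation for exact values.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-5",  # assumed model name
    max_tokens=16000,           # must exceed the thinking budget
    thinking={"type": "enabled", "budget_tokens": 8000},  # internal CoT budget
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
)

# The response interleaves "thinking" blocks (the internal chain of thought)
# with the final "text" answer.
for block in response.content:
    if block.type == "thinking":
        print("THINKING:", block.thinking)
    elif block.type == "text":
        print("ANSWER:", block.text)
```

The trade-off named above shows up directly here: a larger `budget_tokens` buys more internal reasoning on hard problems but raises both latency and cost.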