Guides

Prompt Engineering 101: The Chain-of-Thought Technique

Sarah Chen · Oct 12, 2024 · 6 min read

Chain-of-thought (CoT) prompting improves reasoning by asking models to explain their steps. Use it selectively so you don’t blow up token counts.

Why CoT works

Explicit reasoning reduces hallucinations and helps the model stay on logical rails. It’s especially useful for multi-step math, planning, and compliance checks.

Lightweight CoT pattern

“Think step-by-step, then give the final answer on one line starting with ‘Answer:’.” This keeps intermediate tokens brief while preserving rationale.

When not to use CoT

For short factual lookups or deterministic transformations, CoT adds cost without benefit. Keep a fast path that skips CoT for low-risk tasks.
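A fast path like the one described might be routed with a simple lookup. The task-type labels and the low-risk set below are illustrative assumptions; in practice you would classify tasks however your pipeline already does.

```python
# Hypothetical set of task types where CoT adds cost without benefit.
LOW_RISK_TASKS = {"lookup", "format", "translate"}

COT_SUFFIX = (
    "\n\nThink step-by-step, then give the final answer on one line "
    "starting with 'Answer:'."
)


def choose_prompt(task_type: str, question: str) -> str:
    """Skip CoT for low-risk tasks; add it for reasoning-heavy ones."""
    if task_type in LOW_RISK_TASKS:
        return question  # fast path: no extra reasoning tokens
    return question + COT_SUFFIX
```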

Key takeaways

  • Use CoT for reasoning-heavy flows; skip it for rote lookups.
  • Constrain the final answer format to avoid verbose endings.
  • Monitor token growth: CoT can double or triple output tokens.
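To put a number on that last point, a rough growth check can compare output sizes with and without CoT. Whitespace word count stands in for real tokenization here (an assumption; actual tokenizers will give different counts, so treat the ratio as a trend indicator).

```python
def approx_tokens(text: str) -> int:
    """Crude token estimate: whitespace-separated words."""
    return len(text.split())


def cot_growth_ratio(plain_output: str, cot_output: str) -> float:
    """How much larger the CoT output is than the plain answer."""
    plain = approx_tokens(plain_output) or 1  # avoid division by zero
    return approx_tokens(cot_output) / plain
```

Logging this ratio per task type makes it easy to spot flows where CoT has crept past the 2-3x range and the fast path should take over.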