Rising · foundation models

Reasoning Models

Models trained to think in long chains of intermediate tokens before answering.

First observed 2024-09-12 · 967 signals

Trend Dynamics

Updated daily
  • Velocity: +88
  • Maturity: 32
  • Signals: 967 (all-time observed)

Definition

Reasoning models are LLMs post-trained with reinforcement learning on verifiable rewards, producing extended internal "thinking" tokens that systematically improve performance on math, code, and planning benchmarks.

Why It Matters

Reasoning shifts the bottleneck from training compute to inference compute. Pricing, latency, and product UX are all being rebuilt around a new dimension: how long should the model think?
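The pricing side of that question can be made concrete with a back-of-envelope cost model. Everything here is hypothetical: the field name `reasoning_budget_tokens` and the placeholder $/1M-token prices are assumptions, not any specific provider's API, though billing hidden thinking tokens at the output rate mirrors common practice.

```python
from dataclasses import dataclass

@dataclass
class ReasoningCall:
    # Hypothetical per-request settings; field names are illustrative.
    prompt_tokens: int
    reasoning_budget_tokens: int  # cap on hidden "thinking" tokens
    output_tokens: int

def estimated_cost(call: ReasoningCall,
                   input_price_per_m: float = 3.0,
                   output_price_per_m: float = 15.0) -> float:
    # Assumption: thinking tokens bill at the output rate.
    # Prices are placeholder $/1M-token figures.
    billed_output = call.reasoning_budget_tokens + call.output_tokens
    return (call.prompt_tokens * input_price_per_m
            + billed_output * output_price_per_m) / 1_000_000

# Raising the thinking budget 10x dominates the bill even though the
# visible answer is identical, which is why "how long should the model
# think?" becomes a first-order pricing and latency knob.
short = ReasoningCall(1_000, 2_000, 500)
long_ = ReasoningCall(1_000, 20_000, 500)
print(estimated_cost(short), estimated_cost(long_))
```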

Signals Feeding This Trend

  • New reasoning model releases: 34% (release)
  • Math/code benchmark deltas: 28% (benchmark)
  • Inference-time scaling papers: 20% (research)
  • Per-token "thinking" pricing changes: 18% (pricing)

Companies Involved

  • OpenAI
  • Anthropic
  • Google DeepMind
  • DeepSeek
  • Qwen
  • xAI

Timeline

  1. 2024-09

    OpenAI o1-preview launches as the first commercial reasoning model.

  2. 2025-01

    DeepSeek R1 open-sources reasoning at near-frontier quality, collapsing the moat narrative.

  3. 2025-03

    Reasoning becomes default mode in flagship APIs (o3, Claude 3.7 Sonnet thinking, Gemini 2.5).

  4. 2025-09

    Per-task reasoning budgets appear as first-class API parameters.

Predictions

  • 6 months (high confidence)

    Every major frontier API exposes a "reasoning_effort" parameter as standard.

  • 12 months (medium confidence)

    Reasoning-token cost drops 10x year-over-year.
