Twenty trends, ranked by velocity multiplied by strategic importance. The top ten are fully tracked trends with their own pages and signal histories. The next ten are emerging movements with strong but earlier signal bases — likely to graduate to fully tracked status within two quarters.
The methodology is documented in The AI Trend Velocity Model; the underlying data lives in the signal index.
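The ranking rule above (velocity multiplied by strategic importance) can be sketched in a few lines. Note the per-trend importance weights below are invented for illustration; the report publishes velocity and maturity, not importance, so treat this as a toy reconstruction, not the actual model.

```python
# Toy sketch of the ranking: score = velocity * strategic importance.
# Velocities come from the list below; importance weights are ASSUMED.
trends = [
    {"name": "Computer Use", "velocity": 92, "importance": 0.9},
    {"name": "Reasoning Models", "velocity": 88, "importance": 1.0},
    {"name": "Inference Cost Collapse", "velocity": 64, "importance": 0.7},
]

def rank(items):
    # Sort descending by velocity * importance.
    return sorted(items, key=lambda t: t["velocity"] * t["importance"], reverse=True)

for t in rank(trends):
    print(t["name"], round(t["velocity"] * t["importance"], 1))
```

Under these assumed weights, Reasoning Models (88.0) outranks Computer Use (82.8) despite its lower raw velocity — which is how a velocity-times-importance ordering can differ from a pure velocity sort.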
The first ten — fully tracked
Each of these has a dedicated trend page with timeline, predictions, and signal density.
- Reasoning Models — Models trained to think in long chains of intermediate tokens before answering. (velocity +88, maturity 32)
- Computer Use — Models that drive a real screen, mouse, and keyboard like a human. (velocity +92, maturity 18)
- Agentic Workflows — LLMs that plan, call tools, and complete multi-step tasks autonomously. (velocity +78, maturity 38)
- Tool-Use Protocols — Standardized ways for models to discover and call external tools (MCP, A2A). (velocity +84, maturity 22)
- Open-Weights Frontier — Open-weight models closing the gap with closed frontier labs to under 6 months. (velocity +71, maturity 44)
- Inference Cost Collapse — Token prices falling 80–95% per year while quality keeps rising. (velocity +64, maturity 58)
- Enterprise AI Adoption — AI moves from pilots into core P&L line items. (velocity +55, maturity 62)
- Sovereign AI — Nation-states fund domestic frontier compute, models, and data centers. (velocity +67, maturity 28)
- AI Safety & Policy — Regulators move from frameworks to enforceable obligations. (velocity +48, maturity 51)
- Edge Inference — Capable models running fully on phones, laptops, and embedded devices. (velocity +58, maturity 24)
The next ten — emerging
These movements have strong directional signal bases but have not yet crossed the entity-diversity or stability thresholds that promote them to fully tracked status. Each is a candidate trend page in waiting.
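The promotion rule described above — cross both the entity-diversity and stability thresholds to become fully tracked — can be sketched as a predicate. The threshold values and field names here are assumptions; the report does not publish them.

```python
# Hypothetical sketch of the graduation check for emerging trends.
# min_entities and min_quarters are ASSUMED threshold values.
def promotes(entity_diversity: int, stable_quarters: int,
             min_entities: int = 5, min_quarters: int = 2) -> bool:
    # Both conditions must hold: enough distinct entities producing
    # signals, and a signal base that has held up over time.
    return entity_diversity >= min_entities and stable_quarters >= min_quarters

print(promotes(entity_diversity=8, stable_quarters=3))   # graduates
print(promotes(entity_diversity=3, stable_quarters=4))   # still emerging
```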
- Multimodal-Native Apps — Products designed around voice, vision, and text from day one — not bolted on after a text MVP.
- AI-Native Codebases — New repositories whose architecture, conventions, and review process assume an LLM is the primary contributor.
- Synthetic Data Pipelines — Training corpora generated by larger models, then filtered against verifiable signals — now table stakes for fine-tuning.
- Long-Context Workloads — Million-token windows shifting use cases from RAG-around-everything to load-the-whole-corpus-then-reason.
- AI Search Engines — Answer-first surfaces (ChatGPT Search, Perplexity, Google AI Overviews) capturing query share from blue links.
- AI Coding Agents — Cursor, Replit Agent, Devin, and incumbents converging on autonomous PR generation.
- Voice-First Interfaces — Sub-300ms latency models making voice a default UX, not a novelty.
- Verifiable AI Outputs — Cryptographic and citation-grounded receipts attached to model outputs — driven by enterprise compliance.
- Per-Task Pricing — Vendors pricing outcomes ("ticket closed") rather than seats, exposing seat-based incumbents.
- AI in Vertical Workflows — Legal, healthcare, and financial-services agents reaching $100M+ ARR on narrow surfaces.
How to read the list
Three filters before you act on any item:
- Time horizon. Velocity tells you about the next 6–18 months. Maturity tells you whether the long-term shape is already largely decided.
- Capital exposure. A high-velocity trend with low maturity is where new companies are made. A high-maturity trend is where incumbents win.
- Substitution risk. Look at the signals feeding the trend, not the trend label. The label is a story; the signals are the evidence.
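The capital-exposure filter above reduces to a simple classification over the two published scores. The cutoffs below are illustrative assumptions, not thresholds from the report.

```python
# Hypothetical sketch of the capital-exposure filter.
# The "hot" and "mature" cutoffs are ASSUMED for illustration.
def capital_exposure(velocity: int, maturity: int,
                     hot: int = 60, mature: int = 50) -> str:
    if velocity >= hot and maturity < mature:
        return "new-company territory"   # high velocity, shape undecided
    if maturity >= mature:
        return "incumbent advantage"     # long-term shape largely decided
    return "watch"                       # not enough signal either way

print(capital_exposure(92, 18))  # e.g. Computer Use
print(capital_exposure(55, 62))  # e.g. Enterprise AI Adoption
```

Applied to the first list, a score like Computer Use (velocity +92, maturity 18) lands in new-company territory, while Enterprise AI Adoption (velocity +55, maturity 62) reads as incumbent ground — consistent with the filter as stated.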
To audit any item, click into the trend page and inspect the underlying signals directly.
What this list deliberately excludes
- Trends with zero commercial signals — pure research interest does not earn a slot.
- Single-vendor narratives without entity diversity.
- Sentiment-driven micro-trends without measurable signal density.
The full criteria are documented in the AI Signals Report.