A periodic table is arranged as a grid of rows and columns designed to reveal patterns in how elements behave and relate to one another. Columns (often called groups or families) contain elements that share similar fundamental properties and therefore tend to behave in comparable ways, even as they differ in electron configuration. Rows (often called periods) represent increasing levels of electron shells: as you move from left to right within a row, elements become more reactive while retaining the core traits of their group, and as you move downward from one row to the next, elements build similar outer electron configurations upon those above them.
The structure allows you to predict how elements will interact, combine, or evolve based on their position, rather than treating each element as an isolated case.
This table mirrors the layout principles of the chemical periodic table.
Elements in the same column (group) share similar valence electron structures
and chemical behavior, while elements in lower rows occupy progressively
higher electron shells. Only Groups 14-18 and the top four periods are shown.
The Periodic Table of AI is a conceptual framework for organizing the rapidly expanding landscape of artificial intelligence into a small set of fundamental elements, families, and levels of composition. Its purpose is not to catalog products or hype new techniques, but to reveal the structure underlying AI systems—what the irreducible building blocks are, how they combine into stable patterns, and how more complex behaviors emerge over time. By mapping AI capabilities into rows (levels of maturity) and groups (functional roles), the table provides a common language for reasoning about system design, comparing architectures, identifying missing or unnecessary components, and distinguishing genuine innovation from recombination. Like its chemical counterpart, the table is meant to support understanding, prediction, and disciplined engineering—not memorization.
AI systems are not collections of buzzwords. They are repeatable combinations of a small
number of elements. Below are fourteen common reactions, followed by examples of when
those reactions are over- or under-applied.
Each reaction includes: a symbolic formula, a plain-English description,
and a step-by-step explanation of how the system behaves in practice.
Em → Vx → Rg → Pr → Lg (+ Gr)
A system that answers questions using a private document corpus rather than model memory.
- Documents are converted into embeddings that capture semantic meaning.
- Embeddings are stored in a vector database optimized for similarity search.
- User queries are embedded and matched against stored vectors.
- The most relevant document chunks are retrieved.
- Those chunks are injected into a structured prompt.
- The language model generates an answer grounded in retrieved text.
- Guardrails enforce safety, redaction, and output structure.
Products: Enterprise knowledge bots, Copilot-style document assistants, BI copilots.
Pr → Fc(search) → Rg → Lg
A system that answers questions using live external information.
- A user query triggers a web search tool.
- Search results are fetched and filtered.
- Relevant excerpts are selected.
- The model synthesizes an answer with citations.
Products: Perplexity, search copilots.
Ag ⇄ Fc → Fw
A goal-driven system that plans, acts, and adapts based on tool feedback.
- An agent receives a goal instead of a single question.
- The agent decomposes the goal into actionable steps.
- Tools and APIs are invoked via function calls.
- Results are observed and incorporated into context.
- The loop repeats until the goal is satisfied.
- A framework manages state, retries, and execution flow.
Products: Task automation agents, AI operators.
Ma ⇄ Ag ⇄ Fc
Multiple agents specialize and coordinate to solve complex tasks.
- Agents take distinct roles (researcher, writer, critic).
- They exchange intermediate results and context.
- Tools are shared across agents.
- Coordination protocols manage handoffs and consensus.
Products: Research assistants, complex workflow automation.
Ft → Pr → Lg
A model optimized for software development tasks inside an IDE.
- A base model is fine-tuned on large code corpora.
- Editor context (files, diffs, cursor position) is assembled.
- Context is formatted into a structured prompt.
- The model predicts code completions or refactors.
Products: GitHub Copilot, Cursor, Codeium.
Pr → Mm → Lg
A multimodal generative pipeline for producing images from text.
- A text prompt describes visual intent.
- The multimodal model encodes the description.
- Latent representations are iteratively refined.
- An image is decoded and returned.
Products: Midjourney, DALL·E, Stable Diffusion.
Pr → Th → Lg
Slower, deeper reasoning before answering complex questions.
- The model allocates extra reasoning compute.
- Intermediate reasoning steps are performed internally.
- A final answer is produced after deliberation.
- Reasoning traces may be exposed or hidden.
Products: OpenAI o1, complex problem-solving assistants.
Ft → Sm → Pr
Lightweight AI running locally for speed and privacy.
- Models are distilled for efficiency.
- Inference runs entirely on device.
- Minimal prompting controls behavior.
- No data leaves the device.
Products: Apple Intelligence, on-device voice assistants.
Pi → Pr → Lg → Gr
A graph-based workflow that routes data through multiple AI steps without custom code.
- A pipeline graph defines nodes (models, tools, transforms).
- Data flows through edges based on declarative rules.
- Each node processes input and passes output downstream.
- Guardrails validate outputs at each stage.
- No business logic is embedded in the pipeline itself.
Products: LangGraph, Prefect AI workflows, Azure ML Pipelines.
Ae → Pr → Lg (+ Sm)
A system that dynamically selects models or execution paths based on runtime signals.
- Incoming requests are analyzed for complexity and urgency.
- Adaptive logic routes simple queries to small, fast models.
- Complex queries are routed to large, capable models.
- Cost, latency, and confidence thresholds guide decisions.
- The system learns from feedback to improve routing.
Products: Martian, model routers, cost-optimized inference platforms.
Pr → Lg → Gr
Converting raw content into structured or condensed form.
- Input documents are provided directly.
- The model summarizes or extracts entities.
- Guardrails enforce schemas and formatting.
- Output is validated against expected structure.
Products: Contract review, meeting summaries.
Bt → Em → Vx
A system that maintains canonical, governed data for AI retrieval.
- Base truth defines what data is authoritative and permitted.
- Curation rules determine what can be added, updated, or removed.
- Approved documents are embedded and indexed.
- Access controls and retention policies are enforced.
- The knowledge base serves as the source for RAG systems.
Products: Enterprise knowledge management, compliance-aware document stores.
Rt → Gr → In
Stress-testing and securing AI behavior through adversarial methods.
- Red teams probe failure modes and vulnerabilities.
- Guardrails are strengthened based on findings.
- Interpretability tools explain failures and edge cases.
- The system is iteratively hardened.
Products: Model safety testing, AI security platforms.
Sy → Ft → Sm
Using AI-generated data to adapt or compress models.
- Synthetic examples are generated by large models.
- Models are fine-tuned on augmented data.
- Smaller specialized models are distilled.
- Performance is validated against real-world benchmarks.
Products: Domain-specific assistants, distilled models.
These examples contrast a common *over-engineered* AI architecture with a *minimal viable reaction*.
Each starts by describing why teams often overbuild the solution, then explains why the simpler
reaction is usually superior.
Over-engineered approach:
Teams often treat an internal FAQ bot as a "smart assistant" that must reason, plan,
and autonomously decide how to answer questions. This leads to agent loops, tool calls,
and even reasoning models layered on top of what is fundamentally a retrieval problem.
The system becomes harder to debug, slower to respond, and more fragile—despite the
task being static document lookup.
Over-engineered
Em → Vx → Rg → Ag → Fc → Fw → Th → Gr
- Agent plans how to answer each question.
- Tools and loops introduce latency.
- Reasoning models increase cost without improving grounding.
Minimal viable
Em → Vx → Rg → Lg (+ Gr)
- Retrieve the most relevant documents.
- Generate a grounded answer directly.
- Apply safety and access controls.
Why minimal wins:
The problem is deterministic retrieval, not autonomy. Adding agents does not improve
correctness and actively harms reliability.
Over-engineered approach:
Some teams attempt to treat code completion as a planning task, adding retrieval
systems, agent orchestration, and tool calls. This often stems from fear that the model
"won't understand enough context." In practice, this breaks the tight latency
requirements of developer workflows.
Over-engineered
Ft → Rg → Vx → Ag → Fc → Lg
- Searches the codebase for context.
- Agent reasons about edits.
- Multiple hops slow feedback loops.
Minimal viable
Ft → Pr → Lg
- Use a strong code-tuned model.
- Inject local editor context.
- Predict the next token immediately.
Why minimal wins:
Autocomplete is a real-time prediction problem. Local context plus a capable model
outperforms any multi-step orchestration.
Over-engineered approach:
Teams sometimes wrap simple perception tasks in agents and retrieval pipelines,
assuming the system must "think" about the image. This adds orchestration layers
despite the model already having all required inputs at inference time.
Over-engineered
Pr → Mm → Ag → Fc → Rg → Lg → Gr
- Agent decides how to interpret the image.
- Retrieval adds irrelevant context.
- Extra layers introduce failure modes.
Minimal viable
Pr → Mm → Lg
- Encode the image.
- Condition generation on visual features.
- Produce a caption directly.
Why minimal wins:
Image captioning is a single-shot perception task. Orchestration does not increase
understanding—it only increases complexity.