Every modern BI tool claims a semantic layer. But the range of questions it can answer natively — without falling back to raw SQL — determines whether your AI and self-service actually scale.
A BI tool with a semantic layer outperforms one that relies on raw text-to-SQL — in both efficiency and reliability. When AI gets the user's intent correct, it passes that intent to the semantic layer, which converts it to accurate SQL through a deterministic, governed process. No guesswork, no hallucinated joins.
The real differentiator is how many types of questions the semantic layer can answer natively. Most conventional semantic layers handle the OLAP basics well: lookups, slice-and-dice, simple aggregations. These are the pivot-table equivalents that resonate with end users familiar with self-serve spreadsheet workflows, and they're what most vendors demo during evaluations.

The hidden gap

There's a non-trivial class of questions that falls outside what conventional semantic layers can express: running totals by segment, rolling windows by region, nested aggregations, multi-step calculations, retention cohorts. The unspoken workarounds are to build a derived table, push calculations outside the semantic layer, or ask an analyst to write custom SQL. These gaps don't surface during PoC evaluation; they appear later, as end users push the tool further.

In the world of AI, the variety of questions users ask will only grow more sophisticated, and will inevitably hit these same boundaries. When that happens, AI has to generate raw SQL from scratch, losing all the governance and reliability the semantic layer was supposed to provide. This is why some vendors quietly position AI as "best suited for lookup questions": it avoids exposing where their semantic layer falls short.

Holistics' metrics-centric semantic layer answers a wider variety of self-service questions than most conventional semantic layers support: running totals, rolling windows, nested aggregations, period comparisons, all as first-class metric definitions. It's easier for both AI and humans to parse, understand, and debug because it operates at a higher level of abstraction than SQL, while remaining SQL-native. Every metric is composable, governed, and visible to AI.
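To make the gap concrete, here is what one of those questions, a running total of revenue by segment, looks like when a text-to-SQL system has to produce raw SQL for it. This is a minimal, illustrative sketch: the `sales` table and its columns are hypothetical, and the query is plain SQL (run here via sqlite3), not a governed metric definition in any particular semantic layer.

```python
import sqlite3

# Hypothetical sales data: segment, month, revenue.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (segment TEXT, month TEXT, revenue REAL);
INSERT INTO sales VALUES
  ('SMB',        '2024-01', 100),
  ('SMB',        '2024-02', 150),
  ('Enterprise', '2024-01', 500),
  ('Enterprise', '2024-02', 300);
""")

# The window function a text-to-SQL system must generate from scratch:
# a cumulative sum of revenue, restarting for each segment.
rows = conn.execute("""
SELECT segment, month,
       SUM(revenue) OVER (
         PARTITION BY segment
         ORDER BY month
       ) AS running_revenue
FROM sales
ORDER BY segment, month
""").fetchall()

for row in rows:
    print(row)
```

Nothing here is exotic SQL, but the PARTITION BY / ORDER BY framing is exactly the kind of structure that conventional semantic layers cannot express as a reusable metric, which is why it falls back to hand-written queries.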
Every modern BI tool claims a semantic layer. But ask for a running total by segment, a rolling window by region, or a custom retention cohort, and most can't express it. The moment you do, you've left the semantic layer behind — back to table calculations, raw SQL, or "ask an analyst." And once you're outside the semantic layer, governance breaks down and self-serve stops scaling.
| Capability | What it covers | Conventional BI | Holistics |
|---|---|---|---|
| Composable metrics | Running totals, rolling windows, nested aggregations, period-over-period comparisons | Falls back to table calculations or raw SQL for complex logic | First-class composable metric definitions inside the semantic layer |
| Business-centric self-service | Cohorts, funnels, retention curves, segmented breakdowns: analysis business users actually need | Advanced analysis requires an analyst to build custom reports, or users bypass governance with raw SQL | 1-click advanced analysis without leaving the semantic layer; users stay within governed definitions |
| AI that reasons over semantics | Ask a follow-up question and get an answer that builds on the last one, not a fresh SQL query from scratch | Generates raw SQL; context lost between questions; AI ignores governed definitions | Reasons over the semantic layer in AQL; multi-turn context preserved; AI respects metric governance |
| Git version control | Who changed what, when, and why, for every metric, model, and dashboard definition | UI-configured, no audit trail, definitions drift | Code-defined, Git-backed, code-reviewed, with full change history |
| Programmable semantic layer | Models, metrics, and dashboards defined as code, readable by humans, AI, and automation | Duplicated logic across dashboards, reports, exports | Define once, reuse everywhere: AI, dashboards, embedded analytics |
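The "nested aggregations" row deserves a concrete example, since it is the pattern conventional semantic layers most often cannot express: an aggregate of an aggregate, such as "average orders per customer, by region." The sketch below is illustrative only; the `orders` table is hypothetical, and the query is plain SQL via sqlite3 rather than any vendor's metric syntax.

```python
import sqlite3

# Hypothetical order data: which customer placed an order, and in which region.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (customer TEXT, region TEXT);
INSERT INTO orders VALUES
  ('a', 'EU'), ('a', 'EU'), ('b', 'EU'),
  ('c', 'US'), ('c', 'US'), ('c', 'US');
""")

# Two aggregation levels: first COUNT orders per customer,
# then AVG those counts per region. A single GROUP BY can't do this.
rows = conn.execute("""
SELECT region, AVG(order_count) AS avg_orders_per_customer
FROM (
  SELECT region, customer, COUNT(*) AS order_count
  FROM orders
  GROUP BY region, customer
)
GROUP BY region
ORDER BY region
""").fetchall()

for row in rows:
    print(row)
```

The inner query produces one row per customer; the outer query aggregates again over those rows. A semantic layer that only supports flat aggregations forces this into a derived table or hand-written SQL, which is precisely the workaround the section above describes.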