Conditional Governance
Conditional governance is the pattern where a BI tool's AI interface enforces semantic layer definitions for simple queries but silently falls back to ungoverned text-to-SQL for anything the semantic layer can't handle. The user sees one interface. Behind it, two completely different query engines operate – one governed, one ungoverned – with no indication of which answered the question.
This creates a false sense of security. If the first ten questions a user asks all route through the semantic layer, they reasonably assume the eleventh will too. But the eleventh question – a period-over-period comparison, a cross-grain ratio, a nested aggregation – exceeds what the semantic layer can express. The AI quietly switches to raw SQL generation against the warehouse schema. The answer appears in the same clean format. Nothing flags the change.
How the fallback works
The mechanism varies by tool, but the pattern is consistent. The AI first attempts to resolve the user's question against the semantic layer's metric definitions. If the semantic layer covers the question – "total revenue by region last month" – the query runs through governed definitions. Correct joins, correct filters, correct aggregation logic.
When the question exceeds the semantic ceiling – "revenue this month vs. same month last year, ranked by percentage change" – the AI can't compose an answer from existing metric definitions. Rather than returning "I can't answer that," it falls back to generating SQL directly against the data warehouse. It infers table relationships, guesses join paths, and constructs filter logic from column names and metadata. The result may be right. It may be subtly wrong. There is no governed layer validating it.
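The routing logic described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual implementation: every name here (`GOVERNED_METRICS`, `ask`, `text_to_sql`) is hypothetical, and the LLM call is stubbed out.

```python
# Hypothetical sketch of conditional-governance routing. Real tools
# implement this internally and rarely expose which branch ran.

# Governed path: questions the semantic layer can resolve to a metric plan.
GOVERNED_METRICS = {
    "total revenue by region": (
        "SELECT region, SUM(revenue) FROM governed.revenue GROUP BY region"
    ),
}

def text_to_sql(question, schema):
    # Stand-in for an LLM call: the model infers joins and filters
    # from raw schema metadata. Nothing validates the result.
    return f"-- SQL guessed from {schema} for: {question}"

def ask(question):
    """Return (sql, governed_flag). The UI shows only the answer."""
    plan = GOVERNED_METRICS.get(question.lower().rstrip("?"))
    if plan is not None:
        return plan, True  # governed: definitions supply joins/filters
    # Silent fallback: generate SQL directly against the warehouse.
    return text_to_sql(question, "warehouse.*"), False

sql, governed = ask("Total revenue by region")
print(governed)  # True: covered by a metric definition

sql, governed = ask("Revenue this month vs. same month last year?")
print(governed)  # False: exceeds the ceiling, falls back to text-to-SQL
```

The key point the sketch makes concrete: the `governed` flag exists inside the router, but nothing in the pattern requires surfacing it to the user.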
Why the semantic ceiling drives it
Conditional governance is a direct consequence of a low semantic ceiling. The lower the ceiling, the larger the class of questions that trigger the fallback. A semantic layer that only handles single-grain aggregations will route the majority of real analytical questions – the ones that involve composition, comparison, or multi-step logic – through the ungoverned path.
The vendor's AI demo works because demo questions are chosen to stay below the ceiling. "Show me revenue by product category." "What's the average deal size this quarter?" These queries stay inside the governed layer. Production questions from real business users don't stay that clean.
The risk
The central danger is semantic leakage that nobody detects. In a traditional BI setup, leakage is at least visible – analysts know when they're writing custom SQL or building derived tables outside the semantic layer. With conditional governance, the leakage is hidden behind an AI interface that presents governed and ungoverned answers identically.
This means:
- Metric inconsistency without an audit trail. Two users ask similar questions, phrased differently. One routes through the semantic layer, the other through text-to-SQL. They get different numbers. Neither knows why.
- Confidence without verification. Business users trust the AI's answer because previous answers were correct. They have no way to distinguish a governed response from an ungoverned guess.
- Invisible scope of exposure. Without logging which queries used the governed path vs. the fallback path, the organization can't measure how much of its analytical surface is actually governed.
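The audit trail the last bullet calls for is cheap to build if the routing decision is logged at all. A minimal sketch, assuming a query log where each entry records which path answered (the field names are illustrative):

```python
# Measure governed coverage from a per-query routing log.
from collections import Counter

# Illustrative log entries; a real log would come from the BI tool.
query_log = [
    {"question": "revenue by region last month", "path": "semantic_layer"},
    {"question": "revenue vs. same month last year", "path": "text_to_sql"},
    {"question": "avg deal size this quarter", "path": "semantic_layer"},
    {"question": "top accounts by churn-adjusted LTV", "path": "text_to_sql"},
]

counts = Counter(entry["path"] for entry in query_log)
governed_share = counts["semantic_layer"] / len(query_log)
print(f"governed coverage: {governed_share:.0%}")  # governed coverage: 50%
```

Without something like this, "our AI answers are governed" is an unmeasurable claim.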
How to detect it
Run a structured test. Take 20 real business questions from actual users – skip the simple lookups and focus on the multi-step questions people ask in practice. Submit each to the AI interface. For each answer, check whether the query executed against semantic layer definitions or generated raw SQL against the warehouse.
Most tools don't expose this distinction in the UI. You may need to examine query logs, execution plans, or ask the vendor directly: "For this specific question, did the AI use the semantic layer or generate SQL?"
If more than a third of real questions bypass the semantic layer, the governance claim is conditional – and the ungoverned surface area is where errors will accumulate.
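The test above can be scripted once you have some way to tell which path ran. This sketch assumes a hypothetical `submit` function standing in for the tool's AI interface and its routing signal; here it simulates a semantic layer that only covers single-grain aggregations:

```python
# Structured detection test: submit real questions, count how many
# bypass the semantic layer. `submit` and its return shape are
# illustrative assumptions, not a real tool's API.

def submit(question):
    # Simulated tool: only simple single-grain groupings are governed.
    simple = "by region" in question or "by category" in question
    return {"answer": "...", "used_semantic_layer": simple}

real_questions = [
    "total revenue by region last month",
    "revenue this month vs. same month last year, ranked by % change",
    "orders by category",
    "ratio of support tickets to active seats, weekly",
]

bypassed = [q for q in real_questions
            if not submit(q)["used_semantic_layer"]]
bypass_rate = len(bypassed) / len(real_questions)
print(f"bypass rate: {bypass_rate:.0%}")
if bypass_rate > 1 / 3:
    print("governance is conditional; audit the ungoverned surface")
```

With 20 real multi-step questions instead of this toy list, the bypass rate is a direct measurement of how conditional the governance actually is.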
The Holistics Perspective
Holistics' approach eliminates conditional governance by routing all AI queries through AQL, which handles complex metric logic natively. When the semantic layer can express nested aggregations and cross-grain ratios, AI does not need to fall back to raw SQL for harder questions.
See how Holistics approaches this →