The Future of Self-Serve BI Is Conversational, Governed, and Embedded
Explore how conversational AI, governance frameworks, and embedded analytics are reshaping self-serve BI. Learn what's changing in 2025 and beyond.
The Moment Business Intelligence Got Personal
For decades, self-serve business intelligence promised to democratize data access. The pitch was simple: give every analyst, marketer, and operator a dashboard tool, and suddenly everyone can answer their own questions without waiting for a data engineer. The reality, though, has been messier. Most organizations ended up with sprawling dashboards nobody uses, data quality issues nobody caught, and a new bottleneck: the analyst managing the analyst tools.
That dynamic is shifting—not because dashboards are disappearing, but because the interface to data is changing fundamentally. Three forces are converging right now, and together they’re redefining what self-serve BI actually means in practice.
First, conversational interfaces powered by AI are replacing static dashboards as the primary way people interact with data. Second, governance—once the enemy of speed—is being baked into the analytics layer itself, making compliance and control automatic rather than bolted-on. Third, embedded analytics are moving from nice-to-have reporting features to core product infrastructure, especially as companies build AI-native applications.
This isn’t incremental progress. It’s a reorientation of how analytics platforms work, who uses them, and what “self-serve” actually delivers.
Force One: Conversational Analytics Is Replacing the Dashboard
A dashboard is a static artifact. You build it, deploy it, and hope people ask the questions you anticipated. When they don’t, they either ignore the dashboard or loop in an analyst to build something new. This creates friction at scale, especially in organizations with hundreds of potential users and limited analytics resources.
Conversational analytics—where users ask natural language questions and get instant answers—inverts that dynamic. Instead of the user conforming to the dashboard, the analytics layer conforms to the user’s question. This matters because it actually delivers on the self-serve promise in a way traditional BI tools never could.
The mechanics are straightforward but powerful. When you ask a conversational BI system a question like “What was our revenue growth by region last quarter?”, the system translates that into a SQL query, executes it, and returns results with context. This is often called text-to-SQL, and it’s the core technology enabling this shift. Tools like D23’s managed Apache Superset platform integrate AI/LLM capabilities with metadata about your data warehouse, allowing natural language queries to be converted into accurate, governed SQL.
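To make the pipeline concrete, here is a minimal sketch of that translation step: build a schema-aware prompt from warehouse metadata, get SQL back from a model (stubbed here), and reject queries that reference tables outside the known schema. The schema, function names, and the stubbed response are all illustrative, not a real platform API.

```python
import re

# Hypothetical text-to-SQL sketch. SCHEMA stands in for a metadata
# catalog; call_model stands in for a hosted LLM.
SCHEMA = {
    "orders": ["order_id", "region", "amount", "ordered_at"],
    "customers": ["customer_id", "region", "signup_date"],
}

def build_prompt(question: str) -> str:
    # Give the model the warehouse metadata it needs to write valid SQL.
    tables = "\n".join(f"{t}({', '.join(cols)})" for t, cols in SCHEMA.items())
    return f"Schema:\n{tables}\n\nWrite one SQL query answering: {question}"

def call_model(prompt: str) -> str:
    # Stub standing in for a real model call.
    return "SELECT region, SUM(amount) AS revenue FROM orders GROUP BY region"

def referenced_tables(sql: str) -> set:
    # Naive parse: table names that follow FROM or JOIN.
    return set(re.findall(r"\b(?:from|join)\s+(\w+)", sql, re.IGNORECASE))

def text_to_sql(question: str) -> str:
    sql = call_model(build_prompt(question))
    unknown = referenced_tables(sql) - set(SCHEMA)
    if unknown:
        raise ValueError(f"query references unknown tables: {unknown}")
    return sql
```

The validation step is the part that distinguishes a governed pipeline from a raw model call: the generated SQL is checked against metadata before anything touches the warehouse.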
But here’s what makes this genuinely different from earlier NLP-for-BI experiments: the AI models are now good enough, the integration patterns are mature, and the cost is low enough that this isn’t a research project anymore. As Starburst noted, AI is actively replacing traditional business intelligence, particularly in how teams interact with dashboards and reports. The conversational layer doesn’t eliminate dashboards—it sits on top of them, making data accessible to people who would never have opened a BI tool in the first place.
Consider a practical example: a marketing leader at a mid-market SaaS company wants to understand churn by cohort. With a traditional dashboard, they either find a pre-built report (unlikely to exist) or email the analytics team. With conversational analytics, they open a chat interface, ask the question in plain English, and get an answer in seconds. The system knows which tables contain customer data, which columns define cohorts, and how to calculate churn. The AI handles the translation; the data warehouse handles the compute.
This is where Sigma’s definition of conversational analytics becomes concrete: it’s the ability to interact with data through natural dialogue, not through pre-built queries or dashboard filters. And the impact is immediate. Organizations report faster decision cycles, fewer bottlenecks, and higher engagement from non-technical users who previously avoided analytics tools entirely.
The shift is also reshaping what platforms need to support. Traditional BI tools were built for dashboard creation and sharing. Conversational BI platforms need to handle real-time language understanding, query optimization, result caching, and safety guardrails. This is why D23 integrates API-first architecture and MCP server capabilities with Apache Superset—to make conversational queries fast, reliable, and integrable into workflows where they matter most.
Force Two: Governance Stops Being a Friction Point
Here’s a tension that’s plagued self-serve BI forever: the more you democratize access, the more risk you take on. Bad queries tank your warehouse. Sensitive data leaks. Metrics get calculated differently in different dashboards. Non-technical users run expensive queries without realizing it. So organizations layer on governance—approval workflows, access controls, query limits—and suddenly self-serve becomes not-so-self-serve again.
The emerging pattern flips this. Instead of governance being a gate that slows things down, it’s being embedded into the data layer itself. This is possible because of three converging changes: better metadata management, role-based access control at query time, and AI-assisted query optimization.
Metadata is the foundation. When your BI platform has rich, accurate metadata about every table, column, lineage, and sensitivity level, you can enforce rules programmatically. A user asks a conversational query, the system translates it to SQL, and before execution, it checks: Does this user have access to these tables? Are any of these columns marked as sensitive? Is this query going to scan too much data? If there’s a problem, the system either blocks it, masks the results, or asks for clarification—all without human intervention.
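A pre-execution check of that kind can be sketched in a few lines. The ACL and sensitivity tables below are hypothetical stand-ins for what a real metadata catalog would supply.

```python
# Illustrative metadata-driven governance gate, run before any SQL
# executes. Tables, roles, and column names are examples only.
TABLE_ACL = {"orders": {"analyst", "finance"}, "patients": {"clinician"}}
SENSITIVE_COLUMNS = {"patients": {"ssn", "date_of_birth"}}

def authorize(user_roles: set, tables: set) -> None:
    # Block execution when no role grants access to a referenced table.
    for table in tables:
        if not user_roles & TABLE_ACL.get(table, set()):
            raise PermissionError(f"no role grants access to {table}")

def mask_row(table: str, row: dict) -> dict:
    # Mask sensitive columns in results instead of failing the query.
    hidden = SENSITIVE_COLUMNS.get(table, set())
    return {col: ("***" if col in hidden else val) for col, val in row.items()}
```

Note the two distinct responses: hard-block when access is missing, mask when only specific columns are sensitive, mirroring the block-or-mask behavior described above.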
Role-based access control, once a manual and fragile process, becomes automatic. D23’s managed platform approach handles this by tying access to the data warehouse’s own permission model, so governance scales with your organization rather than requiring constant manual updates. A user’s access to dashboards and queries reflects their actual data warehouse permissions.
Query optimization is the third piece. Conversational systems can learn which queries are expensive and rewrite them to be cheaper. They can suggest using pre-aggregated tables instead of full scans. They can cache results for common questions. This means self-serve doesn’t just become safer; it becomes faster and cheaper too.
Recent trends in AI and BI adoption show that organizations are increasingly embedding governance into their analytics workflows rather than treating it as a separate compliance layer. The result is that self-serve can expand to more users without proportionally increasing risk. This is why CIOs and data leaders are moving away from traditional BI platforms like Looker and Tableau toward more flexible, governance-first architectures.
The practical outcome: a financial services company can give loan officers access to customer data through conversational queries, knowing that the system automatically enforces row-level security based on their region. A healthcare organization can let clinicians ask questions about patient outcomes without exposing PII. The governance is invisible to the user but omnipresent in the system.
Force Three: Embedded Analytics Are Becoming Core Product Infrastructure
The third shift is about where analytics lives. Traditionally, BI was a separate tool—you logged into Tableau or Looker to analyze data. Increasingly, analytics are being embedded directly into the applications where decisions happen. A SaaS company embeds usage analytics into their customer-facing dashboard. A marketplace platform embeds seller performance metrics into the seller app. A private equity firm embeds portfolio KPI dashboards into their deal management system.
This isn’t new in concept, but it’s becoming table stakes because of two changes. First, embedded BI tools are now mature enough to support this at scale, with proper security, performance, and customization. Second, companies are realizing that analytics embedded in the product workflow drive adoption and action far better than separate reporting tools.
D23’s focus on embedded analytics within Apache Superset reflects this shift. Instead of building a separate BI platform and hoping people use it, teams are integrating analytics directly into their applications. This could mean embedding a dashboard showing real-time metrics, or exposing a conversational analytics interface that lets users ask questions about the data underlying their workflow.
The advantage is profound. When your product shows you a metric, you act on it immediately. When you have to log into a separate tool to find the metric, friction wins and the insight doesn’t drive behavior change. This is why product-led companies, venture capital firms tracking portfolio performance, and enterprise software vendors are all moving toward embedded analytics architectures.
But embedding analytics at scale requires a different kind of platform. You need:
- API-first design: The analytics layer needs to be callable from your application code, not just a web UI. This is why D23 emphasizes API-first BI architecture.
- Multi-tenancy and isolation: If you’re embedding analytics for multiple customers, each needs to see only their data, and performance for one customer can’t affect another.
- Customization without forking: You need to be able to theme, customize, and extend the analytics layer without maintaining a fork of the underlying platform.
- Scalability: Embedded analytics can’t be a bottleneck. If your product scales to millions of users, the analytics layer needs to scale too.
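One concrete shape these requirements take is Superset's embedded-dashboard flow, where the host application requests a short-lived guest token server-side before rendering the dashboard. The sketch below builds such a request payload; the field names follow Superset's guest-token API, but the dashboard ID, tenant naming, and row-level-security clause are hypothetical examples.

```python
import json

# Server-side half of dashboard embedding: construct the guest-token
# request a host app sends to its BI backend. IDs and the RLS clause
# are illustrative.
def guest_token_payload(dashboard_uuid: str, tenant_id: str) -> dict:
    return {
        "user": {"username": f"tenant-{tenant_id}"},
        "resources": [{"type": "dashboard", "id": dashboard_uuid}],
        # Row-level security keeps each tenant inside its own rows,
        # addressing the multi-tenancy requirement above.
        "rls": [{"clause": f"tenant_id = '{tenant_id}'"}],
    }

print(json.dumps(guest_token_payload("abc-123", "acme"), indent=2))
```

The token the backend returns is then handed to the embedding SDK in the browser, so the host application, not the end user, decides what data is visible.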
Apache Superset, with its open-source foundation and extensible architecture, is particularly well-suited for this. D23’s managed hosting and consulting services handle the operational complexity—upgrades, security patches, performance optimization—so teams can focus on building the analytics experience their users need.
Consider a venture capital use case: a VC firm wants to give LPs real-time visibility into portfolio performance. Instead of sending monthly PDF reports, they embed a dashboard into a partner portal. The dashboard shows fund metrics, portfolio company KPIs, and AI-assisted projections. The LP logs in, sees their investment performance, and can drill into details. This is embedded analytics driving real business value.
How These Three Forces Interact
Separately, each of these shifts is significant. Together, they’re transformative. Here’s why they amplify each other:
Conversational interfaces make analytics accessible to more people, but without governance, that’s dangerous. Governance embedded in the platform makes it safe to expand access. And embedded analytics make it practical to put these accessible, governed analytics interfaces directly where people work.
Conversely, if you’re embedding analytics into your product, you need them to be fast and intuitive—conversational interfaces solve that. And you need them to be safe and compliant—embedded governance solves that.
This is why the companies winning in analytics right now aren’t the ones building better dashboards. They’re the ones building platforms that are simultaneously conversational, governed, and embeddable. D23’s integration of Apache Superset with AI capabilities, API-first architecture, and expert data consulting is a direct response to these three forces.
What This Means for Organizations Evaluating BI Platforms
If you’re a CTO, head of data, or analytics leader evaluating platforms in 2025, here’s what to look for:
Can it handle conversational queries? This doesn’t mean a chatbot that’s fun to demo but useless in practice. It means text-to-SQL that’s accurate, fast, and integrable into your workflows. Ask for examples with your actual data schema. Ask about hallucination rates and how the system handles ambiguous questions.
Is governance baked in or bolted on? If access control and query safety require manual configuration and ongoing maintenance, it’s bolted on. If they’re automatic based on metadata and role definitions, they’re baked in. The former demands manual effort that grows with every new user and dataset; the latter scales with your organization.
Can it be embedded? If the platform only works as a standalone web application, embedding is going to be painful. Look for platforms with strong APIs, SDKs, and support for multi-tenant deployments. D23’s API-first approach makes this straightforward.
What’s the operational burden? Managed platforms handle upgrades, security, and scaling. Open-source platforms like Apache Superset give you flexibility and avoid vendor lock-in but require operational overhead. The right choice depends on your team’s bandwidth and your tolerance for operational complexity. D23 bridges this by offering managed Apache Superset hosting with expert consulting, so you get the flexibility and cost benefits of open-source with the operational benefits of a managed service.
How does it compare to Looker, Tableau, and Power BI? These platforms are mature and widely deployed, but they’re also expensive, slow to deploy, and built for a pre-AI era. Comparisons of self-service analytics platforms by use case show that organizations are increasingly choosing more flexible, modern alternatives. If you’re already deep in the Tableau ecosystem, moving is costly. But if you’re evaluating fresh or replacing an aging deployment, the newer generation of platforms—especially managed open-source options—is worth serious consideration.
The Role of Open-Source in This Transition
Apache Superset’s role in this shift is worth noting. As an open-source platform, it’s avoided the vendor lock-in and feature bloat that plague commercial BI tools. It’s also forced the community to focus on core functionality—dashboarding, SQL querying, and extensibility—rather than trying to be everything to everyone.
This is why managed Apache Superset platforms like D23 are gaining traction. They pair open-source flexibility and cost control with the operational convenience of a managed service. You’re not locked into a vendor’s roadmap or pricing model. You can customize and extend the platform to fit your exact needs. But you also don’t have to hire a DevOps team to run it.
This matters especially for mid-market and scale-up companies. Enterprise companies can afford Looker or Tableau and the teams to run them. Small companies can use lightweight tools like Metabase or Mode. But for companies in the middle—growing fast, needing sophisticated analytics, but not yet at enterprise scale—managed open-source is often the sweet spot.
Practical Steps to Adopt This Future
If you’re convinced that conversational, governed, embedded analytics are the future, how do you get there from where you are today?
Start with metadata. You can’t have conversational analytics without knowing what data you have, where it lives, and who should access it. Invest in a data catalog or metadata management system. This is foundational.
Implement role-based access control at the data warehouse level. Don’t try to manage access in the BI tool. Define roles and permissions in your data warehouse (Snowflake, BigQuery, Redshift all support this), and let the BI tool inherit those permissions.
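As a sketch of what those warehouse-side definitions can look like, the snippet below generates Snowflake-style role and grant statements from a single mapping; the role and table names are examples, and a real deployment would run these through its migration tooling.

```python
# Illustrative generator for warehouse-level grants (Snowflake-style
# syntax; role and table names are examples). The BI tool then simply
# inherits whatever these roles allow.
ROLE_GRANTS = {
    "marketing_reader": ["analytics.campaigns", "analytics.web_sessions"],
    "finance_reader": ["analytics.invoices"],
}

def grant_statements() -> list:
    stmts = []
    for role, tables in ROLE_GRANTS.items():
        stmts.append(f"CREATE ROLE IF NOT EXISTS {role};")
        for table in tables:
            stmts.append(f"GRANT SELECT ON TABLE {table} TO ROLE {role};")
    return stmts

for stmt in grant_statements():
    print(stmt)
```

Keeping this mapping in version control gives you an auditable, reviewable record of who can see what, with no parallel permission model to maintain in the BI layer.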
Pick a platform that supports your architecture. If you’re embedding analytics into your product, you need APIs and SDKs. If you’re building conversational interfaces, you need LLM integration. If you’re concerned about cost and vendor lock-in, you need open-source options. D23’s managed Apache Superset platform supports all three, but your mileage may vary with other platforms.
Don’t try to boil the ocean. Start with one use case—maybe a critical dashboard or a conversational interface for a specific team—and expand from there. This lets you learn what works for your organization before committing to a full migration.
Invest in data consulting. This transition requires rethinking how you structure data, define metrics, and manage access. Working with experienced practitioners—whether internal or external—accelerates the process and reduces costly mistakes.
The Economics of This Shift
Beyond the functional benefits, there’s a compelling economic argument for this transition. Traditional BI platforms are expensive—licensing for Looker or Tableau can run into six figures annually for mid-sized deployments. They’re also slow to deploy and require specialized expertise to customize.
Managed open-source platforms are often 50-70% cheaper than comparable commercial deployments. They’re faster to deploy because the underlying platform is simpler and more transparent. And because they’re open-source, you’re not paying for features you don’t use or locked into a vendor’s roadmap.
The cost savings are real, but they’re not the main point. The main point is that conversational, governed, embedded analytics simply deliver more value than static dashboards. They drive faster decisions, enable more people to use data, and integrate analytics into the workflows where they matter.
For a scale-up or mid-market company, this is often the difference between analytics being a cost center and analytics being a competitive advantage.
What’s Next
The three forces we’ve discussed—conversational interfaces, embedded governance, and embedded analytics—are already reshaping the market. But they’re still in the early adoption phase. Most organizations are still running dashboards built in 2015. Most BI platforms are still optimized for the pre-AI era.
But the transition is accelerating. AI and BI adoption trends show that organizations are actively embedding analytics into workflows for governed insights, and the future of BI is increasingly shaped by AI data and self-service platforms. Within two years, we’ll likely see conversational analytics become the default interface for most BI tools, governance become fully automatic, and embedded analytics become standard in any application that touches data.
The organizations that move first—that adopt conversational interfaces, bake in governance, and embed analytics into their products—will have a significant advantage. They’ll make faster decisions, empower more people to use data, and build products that are inherently more intelligent.
The question for your organization isn’t whether this future is coming. It is. The question is whether you’ll lead the transition or catch up after it’s already happened.
Conclusion: Self-Serve Finally Means What It Says
For decades, “self-serve BI” was a promise that never quite delivered. The tools were too complex, the dashboards were too static, and governance was too much of a friction point. Most organizations ended up with a few power users running queries and everyone else waiting in line.
That’s changing now, and the change is fundamental. Conversational interfaces make analytics accessible to anyone who can ask a question in English. Embedded governance makes it safe to expand access. Embedded analytics make it practical to put intelligence directly into the applications where people work.
These three forces—conversational, governed, and embedded—aren’t just incremental improvements. They’re redefining what analytics platforms are, who uses them, and what value they deliver.
If you’re evaluating BI platforms or rethinking your analytics architecture, this is the direction to move. The platforms that embrace these three forces will win. The organizations that adopt them will make faster decisions and empower more people to use data. And self-serve BI will finally become what it always promised to be: truly self-serve, truly governed, and truly embedded in how work gets done.
Visit D23 to learn how managed Apache Superset can support your analytics transformation, or explore how expert data consulting can help you implement conversational, governed analytics at your organization.