What a 90-Day D23 Data Consulting Sprint Actually Looks Like
Week-by-week breakdown of D23's fixed-fee consulting model for Apache Superset. Real timelines, deliverables, and outcomes for embedded analytics.
You’re evaluating a managed Apache Superset platform. You’ve kicked the tires on D23’s core offering, and it looks promising. But before you commit, you want to understand what actually happens during a consulting engagement. How long does it take to go from zero to production dashboards? What does the work look like week to week? What are you actually paying for, and when do you see results?
This article walks through the exact structure of a 90-day D23 data consulting sprint—the fixed-fee engagement model we use to move teams from analytics chaos to production-grade self-serve BI, embedded dashboards, and AI-powered query automation. We’ll break down each phase, show you what gets delivered, and explain how this approach differs from traditional consulting where time is money and scope creep is the default.
Why 90 Days? The Math Behind the Timeline
The 90-day sprint isn’t arbitrary. It’s grounded in how data projects actually move through an organization. Research on best practices for data leaders in their first 90 days shows that this timeframe is long enough to uncover real technical and organizational problems, but short enough to maintain momentum and deliver visible wins before stakeholder attention drifts.
Think of it as the Goldilocks zone for data consulting. Thirty days is too short—you’re still learning the codebase, meeting people, and understanding data architecture. Six months feels like a long-term engagement, which introduces scope creep, changing priorities, and the risk of delivering something that no longer matches the business need. Ninety days is enough to:
- Audit your current data stack and identify technical debt
- Design a Superset deployment that fits your architecture
- Build initial dashboards and embed them into your product or internal tools
- Train your team on self-serve analytics and AI-assisted query generation
- Establish runbooks and handoff documentation
- Achieve measurable metrics: dashboard adoption, query latency, cost reduction vs. legacy BI tools
This aligns with broader thinking on how to turn business data into competitive advantage in 90 days, which emphasizes rapid phased delivery: assessment, model building, and automation. Our model adapts this framework specifically for Superset deployments and embedded analytics.
The D23 Engagement Model: Fixed Fee, Clear Scope
Before we walk through the weeks, let’s establish how D23 consulting differs from traditional hourly consulting.
Fixed fee. You pay one amount for 90 days. No surprise invoices. No “we need two more weeks.” Scope is locked at the start, and we’re incentivized to deliver efficiently.
Dedicated resources. You get a named consulting engineer (or small team) who works on your project full-time or near-full-time. Not a rotating cast of juniors.
Outcome-focused. We measure success not by hours logged, but by dashboards in production, adoption metrics, and your team’s ability to operate independently when we leave.
Transparent roadmap. You know exactly what’s happening each week, what gets delivered, and what decisions you need to make. No black-box consulting.
This model works because it forces clarity upfront. You can’t hire a consultant to “figure out BI” for 90 days and hope something good emerges. Instead, we work backward from your business goals: What decisions do you need data to inform? Who are the users? What’s the current pain—slow dashboards, duplicate data work, no self-serve capability? From there, we scope the engagement.
Phase 1: Discovery and Architecture (Weeks 1–3)
Week 1: Kickoff, Stakeholder Mapping, and Technical Audit
The first week is chaos in the best way. We’re learning your business, your data, your team structure, and your technical constraints—all at once.
Day 1–2: Kickoff and Stakeholder Interviews
We start with a full-day kickoff. You bring together data leaders, engineering, product, and finance. We present the 90-day roadmap and set expectations. Then we split: the consulting engineer starts technical interviews with your data and engineering teams, while a second team member (if available) runs stakeholder interviews with end users.
The goal is to map:
- Who are the analytics users? (Finance analysts, product managers, executives, customers?)
- What dashboards or reports exist today? Which are used, which are stale?
- What’s the current pain? (Slow query times, duplicate data pipelines, no way to self-serve?)
- What’s the technical stack? (Data warehouse, ETL, current BI tool, API layer, authentication?)
- What are the security and compliance constraints? (HIPAA, SOC 2, multi-tenancy, row-level access control?)
Day 3–5: Technical Deep Dive
We audit your data warehouse, ETL pipelines, and current BI infrastructure. If you’re running Looker or Tableau, we examine how dashboards are currently built, how data is modeled, and what’s working vs. what’s broken. We look for:
- Data quality issues: Are there fields with null values, inconsistent naming, or stale data?
- Query performance: How long do common queries take? Are there indexes missing?
- Access control: How do you currently manage who sees what data?
- Integration points: Where would embedded analytics live? (Your product, internal tools, mobile apps?)
We also run a competitive analysis if you’re migrating from Looker, Tableau, or Power BI. How many dashboards exist? How many users? What features are critical? This informs whether we can deprecate the old tool immediately or run both in parallel.
Week 2: Architecture Design and Proof of Concept
By week 2, we’ve synthesized the audit into a technical architecture. This isn’t a 50-page document—it’s a clear, one-page diagram showing:
- Where Superset lives (managed D23 instance, self-hosted, hybrid?)
- How data flows from your warehouse to Superset (direct connection, cached layer, API?)
- How users access it (embedded in your product, internal dashboards, mobile?)
- How AI features work (text-to-SQL, MCP server for analytics, custom LLM integration?)
- Security model (RBAC, row-level access control, SSO, API authentication?)
We also build a proof of concept. Pick one high-value dashboard—something that currently takes someone 2 hours to generate in a spreadsheet, or that doesn’t exist because it’s too slow in your legacy BI tool. We build it in Superset end-to-end: connect to your data, write the queries, build the visualizations, test performance.
This PoC serves multiple purposes:
- Validates the architecture. Does the data connection work? Are queries fast enough? Do we need caching?
- Builds confidence. Your team sees a real dashboard in Superset, not a PowerPoint mockup.
- Identifies gaps. Maybe you need a data transformation layer. Maybe your warehouse connection is slow. Better to find out now.
- Sets the bar. This becomes the template for future dashboards.
Week 3: Roadmap Refinement and Kickoff Planning
Week 3 is about locking the scope for the remaining 6 weeks. Based on the PoC and architecture, we prioritize:
- Phase 2 dashboards (weeks 4–6): Which 3–5 dashboards will we build first? These should be high-value, moderate complexity, and cover different use cases (executive KPI dashboard, operational metrics, embedded product analytics).
- Data work: Do we need to build new tables, transformations, or APIs? Or can we work with existing data?
- User training: Who needs hands-on training? When?
- Handoff plan: What documentation, runbooks, and training do you need to operate Superset independently?
We also establish a weekly cadence: 1-hour sync with decision-makers, async updates in Slack, and a demo every Friday.
Phase 2: Build and Deploy (Weeks 4–6)
Week 4: Dashboard Development Begins
Week 4 is heads-down building. The consulting engineer is in your Superset instance 6–8 hours a day, developing the first 2–3 dashboards. Your team is unblocked to ask questions, but the engineer is focused on delivery.
Each dashboard follows the same pattern:
- Data modeling: Write the SQL, test query performance, add indexes if needed.
- Visualization design: Build charts, tables, and filters that match the design system and user needs.
- Interactivity: Add drill-downs, cross-filtering, and parameter controls.
- Testing: Validate that filters work, performance is acceptable (<2 seconds for most queries), and edge cases are handled.
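The <2-second budget in the testing step above can be enforced with a small timing harness rather than eyeballed. Here is a minimal sketch; `run_query` is a stand-in for whatever executes the dashboard's SQL (a warehouse client, an API call), and is simulated here for illustration.

```python
# Minimal sketch of a latency check for the <2 s dashboard budget.
# run_query is a stand-in for whatever executes the dashboard's SQL
# (e.g., a warehouse client); here it is simulated with a sleep.

import time

LATENCY_BUDGET_SECONDS = 2.0

def check_latency(run_query, budget=LATENCY_BUDGET_SECONDS):
    """Time a query callable and report whether it fits the budget."""
    start = time.perf_counter()
    run_query()
    elapsed = time.perf_counter() - start
    return elapsed, elapsed <= budget

# Simulated fast query for illustration
elapsed, ok = check_latency(lambda: time.sleep(0.01))
```

A harness like this can run in CI against a staging warehouse, so a regression in query performance fails a build instead of surfacing as a user complaint.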
In parallel, we’re setting up infrastructure:
- Alerting: Configure Slack or email alerts for dashboards that track KPIs.
- Caching: Set up Superset’s caching layer if queries are slow.
- API setup: If you’re embedding dashboards in your product, we test the embedding API and authentication flow.
- Security: Configure RBAC (role-based access control) and row-level access control if needed.
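To make the caching step above concrete, here is a sketch of what a Redis-backed setup might look like in `superset_config.py`. Superset's caching runs on Flask-Caching, so the keys follow its configuration scheme; the Redis URL and timeouts below are placeholder assumptions, not recommendations.

```python
# Sketch of a Redis-backed cache setup in superset_config.py.
# Superset uses Flask-Caching under the hood, so these dicts follow
# its configuration keys. The Redis URL and timeouts are placeholders.

REDIS_URL = "redis://localhost:6379/0"  # assumed local Redis instance

# Metadata cache (dashboard and chart metadata)
CACHE_CONFIG = {
    "CACHE_TYPE": "RedisCache",
    "CACHE_DEFAULT_TIMEOUT": 300,       # seconds
    "CACHE_KEY_PREFIX": "superset_meta_",
    "CACHE_REDIS_URL": REDIS_URL,
}

# Query-result cache (the one that makes slow charts feel fast)
DATA_CACHE_CONFIG = {
    "CACHE_TYPE": "RedisCache",
    "CACHE_DEFAULT_TIMEOUT": 3600,      # cache expensive queries for an hour
    "CACHE_KEY_PREFIX": "superset_data_",
    "CACHE_REDIS_URL": REDIS_URL,
}
```

The design choice that matters is the `DATA_CACHE_CONFIG` timeout: it should be shorter than your data refresh interval, or users will see stale numbers after a pipeline run.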
Week 5: Expand Dashboard Suite and Start User Training
Week 5 is about scale. The first dashboards are in user testing while the engineer builds the next two or three. We also kick off training.
Dashboard Development: The second and third dashboards typically move faster because the team understands the pattern. We’re also pulling in your analysts or BI team to co-develop—they’re learning Superset’s SQL editor, chart types, and dashboard building patterns.
User Training: We run 1-hour sessions with different user groups:
- Analysts and data teams: How to write queries, create charts, and build dashboards from scratch.
- Business users: How to use filters, drill-downs, and export data.
- Embedded users (if applicable): How to navigate embedded dashboards in your product.
Training is hands-on. We don’t lecture; we build dashboards together, and users follow along in a sandbox environment.
AI and Text-to-SQL: If you’re adopting AI-assisted query generation, week 5 is when we configure it. This might involve integrating Superset with an MCP server for analytics to enable natural-language queries, or connecting to your LLM provider (OpenAI, Anthropic’s Claude). We test it with real use cases and set guardrails (e.g., which tables can be queried, rate limits).
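One of the guardrails mentioned above, restricting which tables AI-generated SQL can touch, can be sketched as an allowlist check. The version below is deliberately naive (a regex over FROM/JOIN clauses) and uses hypothetical table names; a production guardrail should use a real SQL parser such as sqlglot.

```python
# Naive sketch of a table-allowlist guardrail for AI-generated SQL.
# A production guardrail should use a real SQL parser (e.g., sqlglot);
# this regex version only illustrates the idea. Table names are
# hypothetical examples.

import re

ALLOWED_TABLES = {"orders", "customers", "daily_revenue"}

def tables_referenced(sql: str) -> set:
    """Crudely extract identifiers that follow FROM or JOIN."""
    return {
        m.lower()
        for m in re.findall(r"\b(?:from|join)\s+([a-zA-Z_][\w.]*)", sql, re.IGNORECASE)
    }

def is_allowed(sql: str) -> bool:
    """Reject generated SQL that touches tables outside the allowlist."""
    return tables_referenced(sql) <= ALLOWED_TABLES

ok = is_allowed("SELECT * FROM orders JOIN customers ON orders.cid = customers.id")
blocked = is_allowed("SELECT * FROM employees")  # not on the allowlist
```

The point of the sketch is the shape of the check, not the parsing: generated SQL passes through a deterministic gate before it ever reaches the warehouse, so a hallucinated table name fails closed.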
Week 6: Finalize Dashboards and Handoff Planning
By week 6, you have 4–5 production dashboards. We’re in refinement mode: fixing bugs, optimizing queries, and gathering feedback from users.
We also start the handoff process:
- Documentation: We write runbooks for common tasks (adding a new dashboard, updating data sources, managing users, troubleshooting slow queries).
- Governance: We establish standards for dashboard naming, folder structure, and refresh schedules.
- Escalation paths: Who owns Superset? Who do users contact if something breaks?
At the end of week 6, you should have:
- 4–5 production dashboards, actively used by teams
- A trained team that can build simple dashboards independently
- Clear documentation and runbooks
- A plan for the final 4 weeks (scaling dashboards, embedding, advanced features)
Phase 3: Scale, Embed, and Optimize (Weeks 7–9)
Week 7: Embedded Analytics and Product Integration
If your goal is to embed analytics into your product or internal tools, week 7 is when we execute it. This is where D23’s API-first approach to BI shines.
Embedded analytics means your customers or internal users see dashboards without leaving your product. They don’t log into a separate BI tool; they click a tab and see their data.
The technical pattern:
- Superset API: Your backend calls Superset’s API to mint a short-lived guest token that authorizes access to a specific dashboard.
- Authentication: We configure SSO or token-based auth so users don’t need to log in separately.
- Row-level filtering: If you have multi-tenant data, we set up RLS (row-level security) so each customer sees only their data.
- Styling: We customize the Superset UI to match your product’s look and feel (logo, colors, fonts).
Week 7 is about building the first embedded dashboard. This might be a customer-facing analytics dashboard in your SaaS product, or an internal KPI dashboard in your ops tool. We work with your frontend team to integrate the embedded dashboard, handle authentication, and test end-to-end.
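The backend half of the pattern above centers on Superset's guest token endpoint (`POST /api/v1/security/guest_token/`, called with an admin access token). Here is a sketch of the request body a backend might build; the dashboard UUID, username, and `tenant_id` RLS clause are placeholder assumptions for your own schema.

```python
# Sketch of the request body a backend would send to Superset's
# POST /api/v1/security/guest_token/ endpoint (authenticated with an
# admin access token). The dashboard UUID and the tenant_id RLS clause
# are placeholders -- substitute values from your own deployment.

def build_guest_token_payload(dashboard_uuid: str, username: str, tenant_id: int) -> dict:
    """Build the request body for Superset's guest token endpoint."""
    return {
        "user": {"username": username, "first_name": "Embedded", "last_name": "User"},
        "resources": [{"type": "dashboard", "id": dashboard_uuid}],
        # Row-level security: Superset appends this clause to every query
        # the guest runs, so a tenant only ever sees its own rows.
        "rls": [{"clause": f"tenant_id = {tenant_id}"}],
    }

# The frontend then hands the returned token to Superset's embedded SDK
# via its fetchGuestToken callback.
payload = build_guest_token_payload("abc-123", "acme-viewer", 42)
```

Keeping token minting on your backend is the design choice that matters: the admin credentials never reach the browser, and the RLS clause is decided server-side where the user's tenant is known.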
Week 8: Scale Dashboards and Optimize Performance
Week 8 is about leveraging what you’ve learned. Your team is now comfortable building dashboards; the consulting engineer focuses on scaling and optimization.
Building more dashboards: If phase 2 was about proof-of-concept, phase 3 is about breadth. We add 2–3 more dashboards covering different business areas: customer analytics, operational metrics, financial reporting, or product usage.
Performance optimization: As you add more dashboards and users, query performance matters. We:
- Profile slow queries and add indexes to your warehouse.
- Configure Superset’s caching layer (Redis) to cache expensive queries.
- Pre-aggregate in the warehouse (summary tables, materialized views) so Superset charts query smaller, faster tables.
- Monitor dashboard load times and fix bottlenecks.
Advanced features: Depending on your needs, we might set up:
- Alerts and notifications: Dashboards that trigger Slack alerts when KPIs cross thresholds.
- Scheduled reports: Dashboards that email summaries to executives daily or weekly.
- Custom visualizations: If Superset’s built-in charts don’t fit, we build or integrate custom viz plugins.
- Data validation: Automated checks to catch data quality issues before they hit dashboards.
Week 9: Final Optimization, Training, and Handoff
Week 9 is the final push. We’re polishing dashboards, running final user training, and preparing you to operate independently.
User feedback loop: We gather feedback from dashboard users and make final tweaks. This might be as simple as reordering filters or as involved as adding a new calculated metric.
Advanced training: We run a final training session focused on:
- How to troubleshoot common issues (slow queries, missing data, authentication problems).
- How to maintain dashboards as data changes (updating queries, adding new dimensions).
- How to scale: best practices for building new dashboards as your analytics needs grow.
Documentation handoff: We deliver:
- Architecture diagram: How Superset connects to your data, where it’s hosted, how users access it.
- Dashboard inventory: List of all dashboards, their owners, refresh schedules, and SLAs.
- Runbook: Step-by-step guides for common tasks (adding a user, updating a data source, debugging a slow dashboard).
- Escalation guide: Who to contact for different issues, and how to reach D23 support if needed.
Measuring Success: Metrics That Matter
By the end of 90 days, you should see measurable improvements. Here’s what we typically track:
Adoption Metrics
- Dashboard views per week: Are people using the dashboards? We aim for 80%+ of intended users accessing their primary dashboard at least weekly.
- User growth: How many new users joined Superset? Can they self-serve, or do they need help from analysts?
- Query volume: How many queries are running? This indicates self-serve adoption.
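The 80% weekly-adoption target above is a straightforward calculation over view logs. Here is a sketch using hypothetical sample data; in practice the view records would come from Superset's action log or your analytics events.

```python
# Sketch of the weekly-adoption calculation behind the 80% target.
# The view log and intended-user roster are hypothetical sample data;
# real records would come from Superset's action log.

from datetime import date

intended_users = {"ana", "ben", "cho", "dev", "eve"}

# (user, view date) pairs
views = [
    ("ana", date(2024, 5, 6)),
    ("ben", date(2024, 5, 7)),
    ("cho", date(2024, 5, 8)),
    ("dev", date(2024, 5, 9)),
]

def weekly_adoption(views, intended, week_start, week_end):
    """Share of intended users who viewed a dashboard during the week."""
    active = {u for u, d in views if week_start <= d <= week_end and u in intended}
    return len(active) / len(intended)

rate = weekly_adoption(views, intended_users, date(2024, 5, 6), date(2024, 5, 12))
```

Four of the five intended users viewed a dashboard this week, so the rate lands exactly at the 80% bar.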
Performance Metrics
- Query latency: Average time for a dashboard to load. We target <2 seconds for most dashboards, <5 seconds for complex ones.
- Uptime: Superset availability. We aim for 99.5%+ uptime.
- Data freshness: How current is the data? (Real-time, hourly, daily refresh?)
Business Metrics
- Cost reduction: If you’re migrating from Looker or Tableau, how much are you saving? (Licensing, infrastructure, consulting time?)
- Time-to-insight: How long did it take to answer a question before vs. after? We often see improvements of 10x or more (from days of analyst back-and-forth to minutes of self-serve).
- Analyst productivity: How much time are analysts spending on dashboard maintenance vs. strategic work?
Team Capability
- Dashboard independence: Can your team build new dashboards without consulting help? We measure this by the number of dashboards built by your team post-engagement.
- Documentation quality: Do runbooks exist and are they being used?
- Escalation rate: How often do users hit issues that require D23 or your team to fix?
Research on how to create measurable impact during your first 90 days as a data leader emphasizes the importance of early wins and clear metrics. Our 90-day model is built around this: we deliver visible dashboards and measurable improvements, not vague “analytics transformation” promises.
Real-World Variations: Customizing the Timeline
The 90-day sprint is a framework, not a rigid template. Depending on your situation, we adjust:
Scenario 1: You’re Migrating from Looker or Tableau
If you have existing dashboards in a legacy BI tool, the first 2 weeks are about migration planning. We assess how many dashboards to port, which ones are actually used, and which can be rebuilt from scratch. This often shortens the discovery phase and accelerates dashboard development because we have a template to work from.
Scenario 2: You Need Embedded Analytics in Your Product
If embedding is your primary goal, we front-load weeks 4–6 with product integration work. Your frontend team is involved earlier, and we spend more time on authentication, styling, and RLS. The dashboard count might be lower, but the embedded experience is polished.
Scenario 3: You Have Complex Data Governance or Compliance Needs
If you need HIPAA, SOC 2, or multi-tenant isolation, discovery is longer (weeks 1–4). We spend more time on security architecture, access control, and audit logging. Development starts later, but the foundation is rock-solid.
Scenario 4: You’re Building AI-Powered Analytics
If text-to-SQL and AI-assisted query generation are core, we integrate this earlier. Weeks 2–3 include LLM setup and testing. Weeks 4–6 include training your team on how to use AI features responsibly (avoiding hallucinations, understanding confidence scores).
Each variation is still 90 days, but the emphasis shifts based on your priorities.
What Happens After 90 Days?
The sprint ends, but the relationship doesn’t have to. Here’s what typically happens:
Immediate aftermath: Your team operates independently. They’re building dashboards, troubleshooting issues, and iterating based on user feedback.
Support tier: Most teams sign up for a support plan with D23. This might be:
- Email support: For questions and non-urgent issues.
- Slack channel: Direct access to a D23 engineer for urgent problems.
- Quarterly reviews: We audit your Superset instance, suggest optimizations, and plan the next phase of analytics.
Phase 2 engagements: Some teams run a second 90-day sprint to add advanced features: custom visualizations, AI-powered insights, mobile analytics, or deeper product embedding.
Ongoing optimization: As your data grows and user needs evolve, we help you scale Superset. This might involve upgrading infrastructure, optimizing queries, or redesigning dashboards based on usage patterns.
Why the 90-Day Model Works
Looking back at the broader context of how to run a great 90-day sprint, the key principles are:
- Clear scope: You know exactly what you’re paying for and what you’ll get.
- Rapid iteration: You see results every week, not at the end of a 6-month engagement.
- Accountability: We’re incentivized to deliver on time and on budget.
- Knowledge transfer: Your team learns, not just receives a finished product.
- Measurable outcomes: Success is defined upfront, not discovered in a post-mortem.
The 90-day sprint also aligns with how modern data initiatives work. Research on the power of 90-day sprints in data transformation shows that short, focused engagements deliver better outcomes than long, open-ended consulting. Teams stay engaged, priorities don’t shift, and you see ROI quickly.
Getting Started: What You Need to Bring
For a 90-day sprint to succeed, you need:
- Executive sponsorship: Someone at the director level or above who cares about the outcome and can unblock decisions.
- Technical access: We need credentials to your data warehouse, any existing BI tools, and your cloud infrastructure.
- Dedicated stakeholders: A few people (analysts, engineers, product managers) who we can talk to regularly.
- Clear success metrics: What does success look like? (Faster dashboards, self-serve adoption, cost reduction?)
- Realistic scope: You can’t ask for 20 dashboards, a complete data warehouse redesign, and a mobile app in 90 days. We’ll help you prioritize.
The Bottom Line
A 90-day D23 data consulting sprint is not a traditional consulting engagement. It’s not 1,000 hours of billable time. It’s a fixed-fee, outcome-focused partnership to move you from analytics chaos to production-grade self-serve BI.
Weeks 1–3 are about understanding your data, your team, and your constraints. Weeks 4–6 are about building dashboards and proving the value of Superset. Weeks 7–9 are about scaling, embedding, and handing off operations to your team.
By the end, you have:
- 5–8 production dashboards, actively used by teams
- A trained team that can build and maintain dashboards independently
- Clear documentation and runbooks
- Measurable improvements in query speed, analyst productivity, and user adoption
- A foundation for scaling analytics as your business grows
If you’re evaluating D23’s managed Apache Superset platform or considering a consulting engagement, this is what you’re actually getting: a transparent, week-by-week plan to transform how your organization uses data. No surprises. No scope creep. Just results.
Ready to explore what a 90-day sprint could look like for your organization? Let’s talk about your specific challenges and how we can help you move from data chaos to self-serve analytics in 90 days.