Guide · April 18, 2026 · 18 mins · The D23 Team

BigQuery + dbt + Apache Superset: A Modern Open Stack

Build a production-grade analytics stack with BigQuery, dbt, and Apache Superset. Learn architecture, best practices, and why open-source BI wins.

Introduction: Why This Stack Matters

You’re evaluating analytics platforms and you keep hearing the same story: proprietary tools like Looker, Tableau, and Power BI lock you into expensive licensing, vendor-specific workflows, and limited flexibility. Meanwhile, your engineering team is already familiar with open-source tooling, version control, and infrastructure-as-code. You want something that plays well with your existing data stack, doesn’t require a separate contract negotiation every year, and lets you own your analytics layer the same way you own your application code.

The combination of BigQuery, dbt, and Apache Superset addresses exactly that need. This isn’t a theoretical architecture—it’s the backbone of how data-driven organizations at scale-ups and mid-market companies are building modern analytics without platform overhead.

BigQuery serves as your data warehouse: a fully managed, serverless compute engine that scales to petabytes without you managing infrastructure. dbt transforms raw data into trusted, modeled datasets using SQL and version control. Apache Superset provides the visualization and exploration layer, letting teams query those modeled datasets without writing SQL. Together, they form a cohesive, open-source-friendly stack that separates concerns cleanly and keeps costs predictable.

This article walks you through why this combination works, how to architect it, and the real-world considerations that separate a proof-of-concept from a production system.

Understanding the Three Pillars

BigQuery: The Warehouse Foundation

BigQuery is Google Cloud’s enterprise data warehouse. Unlike traditional data warehouses that require you to provision and manage clusters, BigQuery is fully managed and serverless—you pay only for the data you scan and the compute you use.

Key characteristics that make it a natural fit for this stack:

  • Columnar storage and compression: Queries scan only the columns you need, so analytics queries cost less and run faster than they would on a traditional row-oriented database.
  • Automatic scaling: You don’t provision clusters. BigQuery scales compute automatically based on query complexity.
  • SQL dialect: BigQuery SQL is close enough to standard SQL that most engineers can write queries without a steep learning curve.
  • Native integration with Google Cloud: If you’re already using Compute Engine, Cloud Storage, or Dataflow, BigQuery integrates seamlessly.
  • Cost model: On the on-demand model you’re charged per terabyte of data scanned (the first 1 TB each month is free), plus storage costs. This is predictable and scales with your actual usage.

For analytics specifically, BigQuery’s columnar architecture means that a query aggregating a single column across a billion rows scans only that column’s data, not the entire table. That efficiency translates directly to lower query costs and faster dashboard load times—critical when you’re embedding analytics or serving hundreds of concurrent users.
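
To make the cost implication concrete, here is a sketch (the dataset and table names are hypothetical) of two queries that produce related answers but scan very different amounts of data:

```sql
-- Scans only the bytes stored for the `mrr` column:
select sum(mrr) as total_mrr
from analytics.customers;

-- Scans every column in the table; avoid this pattern in dashboards:
select *
from analytics.customers;
```

BigQuery reports estimated bytes scanned before execution (via the query validator or a dry run), which is worth checking for any query a dashboard will run repeatedly.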

The BigQuery Documentation provides comprehensive guides on setup, querying, and integration patterns, including how to optimize queries for cost and performance.

dbt: The Transformation Layer

dbt (data build tool) is a command-line tool and development framework that lets you write data transformations in SQL, version control them, test them, and document them—all the practices you already use for application code.

Instead of writing one-off SQL scripts or using a visual ETL tool, you define transformations as dbt models (SQL SELECT statements) organized in a project structure. dbt handles the orchestration: it figures out the dependency graph, runs models in the right order, and materializes them as tables or views in your warehouse.

Why dbt matters for this stack:

  • Version control: Your transformations live in Git. You can review changes, track history, and roll back if something breaks.
  • Testing and validation: dbt lets you define tests (like “this column should be unique” or “no null values in this field”) that run automatically. Catch data quality issues before they reach dashboards.
  • Documentation: dbt auto-generates documentation from your SQL and YAML configs. Your team knows what each column means without hunting through Confluence.
  • Modularity: dbt models can reference other models, letting you build a DAG (directed acyclic graph) of transformations that are easy to understand and maintain.
  • Works with BigQuery natively: dbt has a native BigQuery adapter. You write BigQuery SQL (with Jinja templating), and dbt compiles it and runs it on a schedule or on demand.

The dbt Documentation covers modeling patterns, testing strategies, and integration with warehouses like BigQuery. For many teams, dbt becomes the single source of truth for how data is transformed and what it means.

Apache Superset: The Visualization and Exploration Layer

Apache Superset is an open-source, modern data visualization and business intelligence tool. Unlike Tableau or Looker, which are proprietary and expensive, Superset is a community-driven project maintained under the Apache Software Foundation.

Superset lets you:

  • Connect to BigQuery (and many other databases) and query datasets directly.
  • Build dashboards by composing charts, tables, and other visualizations without writing code.
  • Enable self-serve analytics by letting business users explore data through SQL Lab (an interactive SQL editor) or pre-built dashboards.
  • Embed analytics into your product or internal tools via API or iFrame.
  • Integrate with AI through text-to-SQL capabilities that let users ask questions in natural language.

Superset’s strength is flexibility combined with simplicity. It’s not as polished as Tableau out of the box, but it’s deeply customizable, and you’re not locked into a vendor’s design philosophy. You can use D23 - Dashboards, Embedded Analytics & Self-Serve BI on Apache Superset™ to get a managed, production-ready version with expert support, or run it yourself.

The Apache Superset Documentation provides installation guides, feature overviews, and integration patterns for connecting to BigQuery and other data sources.

How They Work Together: The Data Flow

Understanding the flow of data through this stack clarifies why it’s so powerful:

  1. Raw data lands in BigQuery: Your data pipelines (Dataflow, Airflow, or other tools) ingest raw data into BigQuery tables. This might be transactional data from your application, logs from your infrastructure, or third-party APIs.

  2. dbt transforms the raw data: dbt models read from those raw tables and apply business logic—joins, aggregations, filtering, calculations. The output is a set of clean, modeled tables (or views) that represent your business metrics, customer dimensions, and fact tables.

  3. Superset queries the modeled data: Superset connects to BigQuery and queries the dbt-modeled tables. Users either build dashboards by selecting columns and metrics, or they write ad-hoc SQL queries in SQL Lab to explore the data.

  4. Dashboards and insights flow to stakeholders: Whether embedded in a product, shared via URL, or accessed through a web interface, Superset serves the analytics to end users—data teams, executives, customers, or internal tools.

This separation of concerns is crucial. Your data engineers own the dbt project and ensure data quality. Your analytics engineers build Superset dashboards on top of trusted dbt models. Your business users explore data without needing to understand the underlying schema or write complex SQL.

Each layer has a clear responsibility, and changes propagate cleanly: if a dbt model is updated (e.g., a metric calculation is fixed), every Superset dashboard using that model automatically reflects the change.

Architecture and Deployment Patterns

Typical Setup on Google Cloud

Here’s how this stack typically looks on Google Cloud:

Data ingestion layer: Cloud Pub/Sub, Dataflow, or Cloud Composer (managed Airflow) ingests data and lands it in BigQuery raw tables.

Transformation layer: dbt runs on Cloud Run or Compute Engine on a schedule (e.g., nightly) or triggered by data ingestion events. dbt reads raw tables, applies transformations, and materializes modeled tables back into BigQuery.

Analytics layer: Apache Superset runs on Cloud Run or Compute Engine (or via a managed service like D23 - Dashboards, Embedded Analytics & Self-Serve BI on Apache Superset™). It connects to BigQuery using a service account and serves dashboards to end users.

Storage: All data lives in BigQuery. dbt models are stored in a Cloud Source Repository or GitHub. Superset configurations (dashboards, charts, users) can be backed up to Cloud Storage or version-controlled separately.

This architecture has several advantages:

  • No data silos: Everything flows through BigQuery. You’re not copying data between systems.
  • Scalability: Each component scales independently. Superset can handle more concurrent users without changing your dbt schedule or BigQuery configuration.
  • Cost predictability: BigQuery charges by data scanned. dbt runs on a fixed schedule. Superset runs on fixed infrastructure. No surprise bills.
  • Open standards: You’re not locked into Google Cloud. You could move dbt to another warehouse (Snowflake, Postgres, DuckDB) or Superset to another hosting provider.

Self-Hosted vs. Managed

You have two paths:

Self-hosted: You run Apache Superset yourself on infrastructure you control. This gives you maximum flexibility but requires you to manage updates, security patches, backups, and scaling. For small teams, this overhead is manageable. For teams scaling to hundreds of users or embedding analytics in products, it becomes a burden.

Managed: Services like D23 - Dashboards, Embedded Analytics & Self-Serve BI on Apache Superset™ handle infrastructure, updates, security, and scaling. You focus on building dashboards and embedding analytics. This is particularly valuable if you’re embedding Superset into a product or need high availability and support.

For BigQuery and dbt, the self-hosted vs. managed distinction is less relevant. BigQuery is already managed by Google. dbt is a command-line tool that runs wherever you want (local machine, CI/CD pipeline, cloud VM). The main decision is where dbt jobs run and who manages that infrastructure.

Building Your First Dashboard: A Practical Example

Let’s walk through a concrete example: building a customer metrics dashboard.

Step 1: Raw Data in BigQuery

You have a raw_customers table in BigQuery with columns like customer_id, created_at, deleted_at, email, plan_type, and mrr (monthly recurring revenue). You also have a raw_events table with event_id, customer_id, event_type, and event_date.

Step 2: Define dbt Models

In your dbt project, you create models:

models/
  staging/
    stg_customers.sql  -- clean and standardize customer data
    stg_events.sql     -- clean and standardize event data
  marts/
    dim_customers.sql  -- customer dimension with attributes
    fct_customer_metrics.sql  -- fact table with customer KPIs
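
The staging models are typically thin cleanup layers over the raw tables. A minimal sketch of stg_customers.sql, assuming the raw column names described above and a source named raw defined in your project:

```sql
-- models/staging/stg_customers.sql
-- Standardize names and types coming out of ingestion.
select
  cast(customer_id as string) as customer_id,
  lower(email) as email,
  plan_type,
  timestamp(created_at) as created_at,
  deleted_at,
  mrr
from {{ source('raw', 'raw_customers') }}
```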

The dim_customers.sql model might look like:

select
  customer_id,
  email,
  plan_type,
  created_at,
  date_diff(current_date(), date(created_at), day) as days_since_signup
from {{ ref('stg_customers') }}
where deleted_at is null

The fct_customer_metrics.sql might aggregate events:

select
  c.customer_id,
  c.email,
  count(distinct e.event_id) as total_events,
  count(distinct case when e.event_type = 'login' then e.event_id end) as login_count,
  max(e.event_date) as last_activity_date
from {{ ref('dim_customers') }} c
left join {{ ref('stg_events') }} e on c.customer_id = e.customer_id
group by 1, 2

You add tests to ensure data quality:

models:
  - name: dim_customers
    columns:
      - name: customer_id
        tests:
          - unique
          - not_null

You run dbt run to build the models and dbt test to validate the tests (dbt build does both). dbt compiles the Jinja-templated SQL, creates the tables in BigQuery, and reports any test failures. Your team reviews the dbt PR in GitHub, and once merged, the models are part of your source of truth.

The dbt Documentation covers modeling best practices, testing patterns, and advanced features like snapshots and incremental models.

Step 3: Connect Superset to BigQuery

In Superset, you add a new database connection:

  • Database type: Google BigQuery
  • Project ID: Your Google Cloud project
  • Credentials: A service account JSON key (or OAuth, depending on your setup)

Superset discovers your BigQuery datasets and tables. You select the fct_customer_metrics table as your data source.

Step 4: Build the Dashboard

In Superset, you create charts by selecting columns and metrics:

  • Chart 1: Total customers (count of customer_id)
  • Chart 2: Average events per customer (average of total_events)
  • Chart 3: Customers by signup date (time series of created_at)
  • Chart 4: Top customers by login count (sorted bar chart of email and login_count)

You arrange these charts on a dashboard, add filters for plan_type and date range, and publish. End users can now explore customer metrics without writing SQL.

The Apache Superset Documentation covers dashboard creation, chart types, filters, and interactive features.

Why This Stack Beats Proprietary Alternatives

Cost

Looker, Tableau, and Power BI charge per user (or per named user, or per viewer). If you have 100 users, that’s a meaningful recurring cost. BigQuery + dbt + Superset flips the model:

  • BigQuery: You pay per terabyte scanned. A typical analytics query scans 1-10 GB. At $6.25 per TB (on-demand pricing), that’s well under a cent for a 1 GB query and roughly six cents for a 10 GB one.
  • dbt: Free (open-source) or paid for managed dbt Cloud. If you run it yourself, it’s just infrastructure costs (a small Cloud Run instance).
  • Superset: Free (open-source) or paid if you use a managed service. Self-hosted costs are minimal (a small VM).

For a team with 100 users, this stack might cost $500-2000/month. Looker or Tableau would cost $5000-15000+/month. That’s not a small difference.

Flexibility and Ownership

With Looker, you’re building in Looker’s modeling language (LookML). With Tableau, you’re building in Tableau’s visual paradigm. With this stack, you’re building in SQL (dbt) and open standards (Superset). If you need to switch visualization tools later, your dbt models are portable. Your data transformations are just SQL.

You own your data, your transformations, and your analytics layer. No vendor lock-in.

Integration with Engineering Workflows

Your engineering team already uses Git, CI/CD pipelines, code review, and testing. dbt brings those practices to analytics. Your analytics code is reviewed like application code. Changes are tracked. Tests run automatically. This is how serious organizations operate.

Superset’s API and extensibility mean you can embed analytics in your product, integrate with your identity provider (SAML, OAuth), and customize the UI to match your brand.

Common Challenges and How to Address Them

Challenge 1: Query Performance and Cost Optimization

BigQuery is fast, but poorly written queries can still be slow and expensive. A query that scans 100 GB when you only need 10 GB is wasting money.

Solutions:

  • Partition and cluster tables: BigQuery tables can be partitioned by date and clustered by columns you filter on frequently. This reduces the data scanned per query.
  • Materialized views in dbt: Instead of running expensive aggregations at query time, pre-compute them in dbt and materialize them as tables.
  • Query caching: Superset caches query results. The same query run twice in quick succession returns cached results, saving cost.
  • Monitor query costs: Use BigQuery’s cost analysis tools and set up alerts if costs spike.
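
With the dbt-bigquery adapter, partitioning and clustering are declared in the model config rather than in hand-written DDL. A sketch, reusing the event columns from the earlier example (the model name is hypothetical):

```sql
-- models/marts/fct_events.sql (hypothetical model)
{{ config(
    materialized='table',
    partition_by={'field': 'event_date', 'data_type': 'date'},
    cluster_by=['customer_id']
) }}

select event_id, customer_id, event_type, event_date
from {{ ref('stg_events') }}
```

Dashboards that filter on event_date then touch only the matching partitions, which directly reduces bytes scanned and therefore cost.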

The BigQuery Documentation includes optimization guides and cost analysis tools.

Challenge 2: Data Quality and Freshness

If your dbt transformations are wrong, every dashboard built on top of them is wrong. If your data is stale, dashboards show outdated metrics.

Solutions:

  • dbt testing: Write tests for every important transformation. Test for uniqueness, not-null constraints, referential integrity, and business logic (e.g., “revenue should be positive”).
  • Documentation: Document what each column means and why it exists. Future you will thank present you.
  • Incremental models: For large fact tables, use dbt’s incremental model type. Instead of rebuilding the entire table every run, append only new data. This is faster and cheaper.
  • Data contracts: Define what downstream systems expect from your dbt models (column names, data types, update frequency). Treat breaking changes like API breaking changes—communicate and version.
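
An incremental version of an events model might look like the following sketch (is_incremental() and {{ this }} are standard dbt constructs; the columns come from the earlier example):

```sql
{{ config(materialized='incremental', unique_key='event_id') }}

select event_id, customer_id, event_type, event_date
from {{ ref('stg_events') }}

{% if is_incremental() %}
  -- On scheduled runs, only process events newer than what is
  -- already in the target table; a full rebuild is still
  -- available via `dbt run --full-refresh`.
  where event_date > (select max(event_date) from {{ this }})
{% endif %}
```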

The dbt Documentation covers testing strategies and incremental models in detail.

Challenge 3: Scaling to Many Users and Dashboards

As your analytics footprint grows, you might have hundreds of dashboards and thousands of users. This puts pressure on Superset.

Solutions:

  • Use a managed Superset service: D23 - Dashboards, Embedded Analytics & Self-Serve BI on Apache Superset™ handles scaling, caching, and performance optimization out of the box.
  • Optimize Superset deployment: Use a load balancer, cache layer (Redis), and multiple Superset instances.
  • Limit dashboard refresh rates: Not every dashboard needs to refresh every minute. Set refresh rates based on how fresh the data needs to be.
  • Encourage self-serve with SQL Lab: Instead of building hundreds of dashboards, teach users to write SQL queries (or use text-to-SQL). This scales better than pre-built dashboards.

Challenge 4: Governance and Security

As you scale, you need to control who can see what data, who can modify dashboards, and who can access the underlying tables.

Solutions:

  • Row-level security (RLS): BigQuery supports RLS. You can define policies so users see only data they’re authorized to see. Superset can enforce these policies.
  • Superset permissions: Superset has role-based access control. You can restrict dashboard access, SQL Lab access, and data source access by role.
  • dbt governance: Use dbt’s documentation and contract features to define which tables are “public” (safe for end users) and which are “private” (internal only).
  • Audit logging: Track who accessed what data and when. Both BigQuery and Superset support audit logging.
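
In BigQuery, RLS is implemented with row access policies. A sketch (the region column and the group address are hypothetical):

```sql
create row access policy emea_only
on analytics.fct_customer_metrics
grant to ('group:emea-analysts@example.com')
filter using (region = 'EMEA');
```

Once a table has row access policies, users not covered by any policy see no rows at all, with no change needed on the Superset side.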

The Modern Data Stack and Open-Source Alternatives

This stack is part of the broader “modern data stack” movement. The Modern Data Stack blog post explains the history and philosophy: instead of monolithic data warehouses and BI tools, the modern stack is modular, cloud-native, and open-source-friendly.

Other popular combinations include:

  • Snowflake + dbt + Superset: Snowflake instead of BigQuery. Functionally similar, with different pricing and performance characteristics.
  • Postgres + dbt + Superset: For smaller teams or on-premises deployments. Postgres is free and simple, but doesn’t scale to petabytes like BigQuery.
  • DuckDB + dbt + Superset: For single-node analytics. Modern Data Stack in a Box with DuckDB explores this pattern.

The Modern Data Stack: Open-source Edition provides a comprehensive analysis of open-source tools in the modern data stack, including dbt, Superset, and warehouse options.

The Top Modern Data Stack Tools for 2025 offers an updated overview of tools gaining traction in 2025, many of which complement or extend this stack.

The Modern Data Stack guide by Simon Späti dives deep into open-source tools and analytics pipelines, with practical examples.

Advanced Patterns: AI and Embedding

Text-to-SQL and AI-Assisted Analytics

One of the most exciting developments in analytics is AI-assisted querying. Instead of writing SQL or clicking a UI, users ask questions in natural language: “What’s our churn rate by plan type?” An LLM translates this to SQL, which runs against your data.

Superset can support this through LLM integrations. You can use OpenAI’s GPT, Anthropic’s Claude, or open-source models like Llama. The LLM is given your dbt-generated schema documentation and generates SQL based on it.

This requires:

  1. Good schema documentation: Your dbt models must be well-documented so the LLM understands what tables and columns exist.
  2. LLM access: You need API keys for your chosen LLM or a local model.
  3. Prompt engineering: You may need to tune the prompt to get reliable SQL generation.
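
The schema documentation that feeds the LLM is the same YAML you already maintain for dbt. A sketch extending the earlier test file (the descriptions are illustrative):

```yaml
models:
  - name: fct_customer_metrics
    description: "One row per customer, with lifetime activity KPIs."
    columns:
      - name: login_count
        description: "Count of distinct 'login' events for the customer."
      - name: last_activity_date
        description: "Date of the customer's most recent event of any type."
```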

The benefit is massive: self-serve analytics becomes truly self-serve. Users don’t need to know SQL or even how the data is organized. They ask questions and get answers.

Embedding Analytics in Products

If you’re a B2B SaaS company, you might want to embed analytics dashboards in your product so customers can see their own data. Superset’s API makes this straightforward.

You can:

  • Embed dashboards via iFrame: Generate a signed URL for a dashboard and embed it in your product.
  • Use the REST API: Query data programmatically and render it in your own UI.
  • Use the Python SDK: Build custom applications that interact with Superset.

Combined with row-level security, this lets you give each customer a personalized view of their data without building a custom analytics system.

Getting Started: A Practical Roadmap

If you’re convinced this stack is right for you, here’s how to get started:

Phase 1: Foundation (Weeks 1-4)

  1. Set up BigQuery: Create a Google Cloud project, enable BigQuery, and set up data ingestion. Start with a small dataset (e.g., sample customer data).
  2. Initialize dbt: Create a dbt project, connect it to BigQuery, and write 3-5 models to transform your sample data. Add tests and documentation.
  3. Deploy dbt: Set up a dbt schedule (e.g., nightly) using Cloud Scheduler or Cloud Composer.

Phase 2: Analytics (Weeks 5-8)

  1. Set up Superset: Install Superset (self-hosted or managed) and connect it to BigQuery.
  2. Build dashboards: Create 2-3 dashboards using your dbt models. Involve stakeholders to understand what metrics matter.
  3. Test and iterate: Gather feedback, refine dashboards, and optimize queries.

Phase 3: Scale (Weeks 9+)

  1. Expand dbt models: Add more transformations as your analytics needs grow.
  2. Enable self-serve: Set up SQL Lab and teach users to write queries (or use text-to-SQL).
  3. Implement governance: Add row-level security, user roles, and audit logging.
  4. Embed analytics: If you have a product, start embedding Superset dashboards.

Conclusion: The Right Stack for Data-Driven Teams

BigQuery + dbt + Apache Superset is not a revolutionary combination. Each component is mature, well-documented, and battle-tested. The revolution is in the philosophy: a modular, open-source, cost-effective alternative to monolithic proprietary platforms.

For data and analytics leaders at scale-ups and mid-market companies, this stack offers:

  • Lower costs: Pay for what you use, not per user.
  • Flexibility: Own your data, transformations, and analytics layer.
  • Integration with engineering: Use Git, testing, and CI/CD for analytics.
  • Scalability: Each component scales independently.
  • Future-proofing: You’re not locked into a vendor. You can swap components as your needs evolve.

The stack does require more setup and operational knowledge than a fully managed platform like Looker. You need to understand BigQuery, SQL, dbt concepts, and Superset architecture. But for teams that already have these skills (or are willing to learn), the payoff is substantial.

If you’re evaluating analytics platforms, don’t dismiss open-source just because it’s not as polished out of the box. The long-term benefits—cost, flexibility, and control—often outweigh the short-term friction of setup and learning.

For teams that want the benefits of this stack without the operational overhead, D23 - Dashboards, Embedded Analytics & Self-Serve BI on Apache Superset™ provides managed hosting, expert consulting, and support. But whether you self-host or use a managed service, the architectural principles remain the same: modular, open, and data-driven.

Start small, learn the fundamentals, and scale as your needs grow. That’s the modern way.