
Why dbt powers reliable analytics and safe AI
From SQL to strategy
Published Nov 12, 2025

If dbt’s value is not on your radar yet, it should be. Here’s why developers, teams, and organisations swear by it – and why we do too.
Since its release in 2016, dbt has brought software engineering practices into the world of data transformation. Yet we still meet many clients – in both business and technical roles – who do not yet fully grasp dbt’s value. The community and vendors alike have not always done the best job of articulating why it has been so widely adopted. With this article, we want to close that gap and explain why dbt matters for developers, teams, and organisations today, as well as how it is becoming a foundation for tomorrow’s AI-driven data landscape.
What is dbt?
dbt does not cover the entire extract, load, and transform (ELT) process. Instead, it focuses exclusively on the ‘T’ – transforming raw data into something reliable and business-ready.
Its main users are data engineers, analytics engineers, and data analysts. As a pure transformation tool, dbt also does not provide its own storage or compute resources. Instead, it runs transformations inside your existing data warehouse or lakehouse. This means it plays nicely (and agnostically) with modern cloud data warehousing solutions like Snowflake, Databricks, BigQuery, and Redshift – and it is so loved by the ecosystem that Snowflake even integrated dbt directly into its platform.
Why dbt is such a great tool
When we first started with dbt, what struck us was how little ‘new’ we had to learn. At the end of the day, dbt is ‘just SQL with some extra features.’ But those features turn out to make a huge difference – not only for us in our role as developers, but also for teams and organisations as a whole.
From a developer’s perspective
This is where our appreciation for dbt really started. dbt projects are entirely code-first: models are written in SQL, configurations live in YAML, and everything is stored in source control. That means tests, documentation, and changes can be versioned alongside the code itself. The result is fewer surprises, easier collaboration, and the ability to catch issues early. Lineage is equally invaluable, allowing us to trace a bug all the way back to its source without endless guesswork.
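To make this concrete, here is a minimal sketch of a dbt model and its accompanying YAML file – the table and column names are illustrative, not from a real project:

```sql
-- models/marts/fct_orders.sql
-- ref() declares the dependency, which is what powers dbt's lineage graph
select
    order_id,
    customer_id,
    order_date,
    amount
from {{ ref('stg_orders') }}
where status = 'completed'
```

```yaml
# models/marts/fct_orders.yml
version: 2
models:
  - name: fct_orders
    description: "One row per completed order."
    columns:
      - name: order_id
        description: "Primary key of the order."
        tests:
          - unique
          - not_null
```

Both files live in the same repository, so a pull request that changes the model can update its tests and documentation in the same commit.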
And then there is Jinja. We love SQL, but adding Python-like templating on top – and having dbt compile it into warehouse-specific SQL – is pure joy. Combine that with CTEs, and it feels as if dbt has rewired our brains: we now approach problems in ways that make our code cleaner and our models more robust.
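As a small taste of what Jinja adds, here is a common pattern – generating one aggregate column per value in a list – which dbt compiles down to plain warehouse SQL (the payment methods are illustrative):

```sql
-- models/marts/order_payments.sql
{% set payment_methods = ['credit_card', 'bank_transfer', 'gift_card'] %}

with payments as (
    select * from {{ ref('stg_payments') }}
)

select
    order_id,
    {% for method in payment_methods %}
    sum(case when payment_method = '{{ method }}' then amount else 0 end)
        as {{ method }}_amount
    {%- if not loop.last %},{% endif %}
    {% endfor %}
from payments
group by order_id
```

Adding a new payment method is now a one-line change to the list rather than three edits scattered through the query.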
On top of that, dbt makes environment management painless. Models can be deployed to the right schemas and databases without hacks, keeping development, testing, and production environments cleanly separated. Add built-in documentation, and you have a developer experience that is hard not to get excited about.
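A minimal sketch of how that separation works, assuming a Snowflake warehouse (account names, roles, and schemas are illustrative):

```yaml
# profiles.yml
my_project:
  target: dev                    # the default target for local development
  outputs:
    dev:
      type: snowflake
      account: my_account
      user: jane
      authenticator: externalbrowser
      role: transformer
      database: analytics_dev
      warehouse: transforming
      schema: dbt_jane           # each developer builds into a personal schema
      threads: 4
    prod:
      type: snowflake
      account: my_account
      user: dbt_service_account
      private_key_path: /secrets/dbt_key.p8
      role: transformer
      database: analytics
      warehouse: transforming
      schema: core               # production models land here
      threads: 8
```

The same code then deploys anywhere with `dbt run --target prod` – no find-and-replace across schema names required.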
And it is only getting better. dbt’s new Fusion engine (currently rolling out in dbt Cloud) takes the developer experience up another notch:
- It understands your SQL by dialect, catching ambiguous columns, invalid aggregations, and mismatched data types before you run anything – saving money on unnecessary cloud compute (see the snippet after this list).
- It can propagate column and model renames across your project, so refactors that used to feel risky become almost effortless.
- It is also much faster than dbt Core at parsing and compiling large projects – we are talking 10,000 models in seconds instead of minutes.
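As a hypothetical illustration of that first point, consider a join where the same column lives in both tables – the kind of mistake a dialect-aware engine can flag at compile time, before any warehouse credits are burned:

```sql
select
    customer_id,        -- ambiguous: present in both orders and customers
    sum(amount) as total_amount
from {{ ref('fct_orders') }} as orders
join {{ ref('dim_customers') }} as customers
    on orders.customer_id = customers.customer_id
group by customer_id    -- ambiguous again; static analysis catches both
```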
That is the kind of future where dbt not only keeps us productive but also keeps the codebase healthy as it scales.
From a team’s perspective
What works so well for individual developers scales even further at the team level. For teams, dbt turns data transformation into a collaborative craft. Version control is a joy here: code reviews, branching, and continuous integration and deployment – all the practices we know from software engineering suddenly become part of data and analytics engineering. Everyone can see, review, and contribute to each other’s work. The result is cleaner code, fewer silos, and a shared sense of ownership of the data stack.
Beyond collaboration, dbt also reduces the ‘bus factor’ risk in data teams. When code is documented, version-controlled, and tested, knowledge is no longer locked inside a single developer’s head. New team members can onboard quickly by exploring the lineage graph, tests, and documentation, instead of deciphering ad-hoc SQL scripts or asking around for definitions.
This consistency also makes it easier to scale teams across geographies. A dbt project in one market looks and feels the same as in another, because the standards are baked into the tool. In practice, this means distributed teams can still work as one.
Because tests run automatically in continuous integration pipelines, broken models or unexpected changes are caught before they hit production. That builds trust not only within the team but also with the business functions that rely on their data.
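A minimal sketch of such a pipeline as a GitHub Actions workflow, assuming warehouse credentials are stored as repository secrets and read by `profiles.yml` via dbt’s `env_var()` function (the secret names are illustrative):

```yaml
# .github/workflows/dbt-ci.yml
name: dbt CI
on:
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install dbt-snowflake   # the adapter for your warehouse
      - run: dbt deps                    # install package dependencies
      - run: dbt build --target ci       # run and test every model in a CI schema
        env:
          SNOWFLAKE_ACCOUNT: ${{ secrets.SNOWFLAKE_ACCOUNT }}
          SNOWFLAKE_PASSWORD: ${{ secrets.SNOWFLAKE_PASSWORD }}
```

Because `dbt build` runs models and their tests in dependency order, a failing test stops the pipeline before anything reaches production.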
From an organisation’s perspective
At scale, dbt hits a sweet spot between self-service, governance, and collaboration. Teams can share trusted data assets across projects while maintaining independence with their own repositories, databases, and developer schemas. This model shines when combined with proper use of Infrastructure-as-Code: a small centre of excellence can empower a large organisation with many independently operating teams, all working with governed, reliable data.
And then there’s documentation again – this time for business stakeholders. Instead of endless Slack threads or email chains asking “What does this field mean?”, you can simply point an analytics partner to the dbt catalogue. Self-service documentation changes the conversation from chasing definitions to using the data.
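Generating that catalogue takes a single command in dbt Core (dbt Cloud hosts a managed version for you):

```bash
dbt docs generate   # compiles the catalogue and lineage graph from your project
dbt docs serve      # serves the browsable documentation site locally
```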
Another factor is cost. dbt is surprisingly affordable compared to many data tools. dbt Core, the open-source version, is entirely free. dbt Cloud starts at around $100 per developer seat per month (with a generous free tier for small teams) and scales up with enterprise features like SSO, audit logs, and SLAs. For what it unlocks in collaboration, testing, and governance, the price point is often negligible relative to the value delivered.

dbt, semantics, and AI
Beyond developers, teams, and organisations, dbt is now becoming a cornerstone for how structured data connects with the emerging world of generative AI.
The key concept here is MCP (Model Context Protocol), an open standard introduced by Anthropic in late 2024 that allows AI systems to dynamically pull in context and data from external sources. Without MCP, even the most advanced AI models remain trapped in isolation – unable to go beyond unstructured prompts or hard-coded integrations. MCP provides a universal way to connect AI systems with governed data, and has since been adopted by Google, Microsoft, and OpenAI as well.
This is where the dbt MCP server comes in. Think of it as the missing link between your dbt project – with its models, lineage, documentation, and Semantic Layer – and any MCP-enabled client, from AI agents to BI assistants. Instead of building dozens of one-off integrations, the dbt MCP server provides a standardised way for AI systems to safely access the knowledge encoded in your dbt project.
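As a rough sketch of what the setup looks like – the exact package name, environment variables, and flags may differ, so treat this as illustrative and check the dbt MCP server documentation for your configuration:

```bash
# Point the server at a local dbt project and (optionally) dbt Cloud
export DBT_PROJECT_DIR=/path/to/your/dbt/project   # project to expose to AI clients
export DBT_TOKEN=<dbt-cloud-service-token>         # for Semantic Layer and discovery APIs

# Launch the server; MCP-enabled clients then connect to it over stdio
uvx dbt-mcp
```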
The practical use cases are threefold:
- Data discovery: LLMs and agents can explore what data assets exist in your organisation, from staging models to business marts, and understand how they relate. Business users could ask “What customer data do we have?” and receive an answer grounded in your dbt documentation, while agents can autonomously map the data environment before generating SQL.
- Governed querying: Through the dbt Semantic Layer, AI can query your organisation’s defined metrics and dimensions. Instead of freewheeling SQL guesses, an LLM can request ‘monthly revenue by region’ and get results that follow the single source of truth metric definition you have codified in dbt.
- Project execution: The dbt MCP server even allows AI systems to interact with dbt itself – running models, compiling SQL, or executing tests. While experimental, this opens the door for agents not only to read context but also to act on your data workflows under controlled conditions.
The significance for organisations is twofold. First, successful AI initiatives rest on a stable data foundation. Without clear definitions, robust documentation, and consistent semantics, AI will produce untrustworthy results. dbt uniquely bakes these foundations into the development workflow: as engineers build models, they also create tests, documentation, lineage, and semantics. Few tools combine product development and metadata creation so tightly.
Second, dbt’s Semantic Layer provides the missing semantics for AI to work with structured data. It fixes metric definitions and formalises relationships across the data model, ensuring that when AI retrieves ‘gross revenue by region,’ it uses the governed, agreed-upon definition – not a best guess.
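A minimal sketch of what such a governed definition looks like in the Semantic Layer’s YAML (the model, measures, and metric names are illustrative):

```yaml
# models/marts/orders_semantics.yml
semantic_models:
  - name: orders
    model: ref('fct_orders')
    defaults:
      agg_time_dimension: order_date
    entities:
      - name: order_id
        type: primary
    dimensions:
      - name: order_date
        type: time
        type_params:
          time_granularity: day
      - name: region
        type: categorical
    measures:
      - name: revenue
        agg: sum
        expr: amount

metrics:
  - name: monthly_revenue
    label: Monthly revenue
    description: "Total completed order revenue – the one governed definition."
    type: simple
    type_params:
      measure: revenue
```

When an AI client asks for ‘monthly revenue by region’, the Semantic Layer resolves the request against this definition rather than letting the model improvise its own SQL.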
In short, organisations do not need a patchwork of new tools to prepare for AI. They need a strong foundation. dbt, through its transformations, documentation, semantics, and now its MCP server, is increasingly becoming that foundation – the control plane through which AI can safely and effectively interact with structured enterprise data.
Closing thoughts
dbt succeeds because it delivers value on multiple levels. For developers, it is the joy of writing better SQL with powerful guardrails. For teams, it is the shift from ad-hoc scripts to structured collaboration and knowledge sharing. For organisations, it is the foundation for scaling trusted, governed, self-service analytics.
And now, with the dbt Semantic Layer and the MCP server, dbt is also emerging as the control plane for how AI will interact with structured data. Few other tools combine product development with the creation of documentation, lineage, and semantics so tightly. This makes dbt not only the best way to build reliable data today, but also the way to make generative AI initiatives succeed tomorrow.
In our consulting work, we see both sides of the coin: how dbt changes our own day-to-day work, and how it can unlock real impact for our clients. Whether the challenge is enabling self-service analytics, governing data at scale, or preparing for AI-driven transformation, dbt often plays a key role in the solution.
Our advice is simple: do not look at dbt as just another tool. Look at it as a way of working that transforms not only SQL, but also the way organisations build, understand, and trust data.

Passionate about dbt?
Visit our page or contact us directly. We would be happy to continue the conversation.
