The consulting industry has been having the wrong conversation about AI tools.
The conversation most consultants are having is internal: how much faster can I do my work, and how much of that efficiency do I keep versus pass along to clients? That framing treats AI tools as a margin improvement. It is a reasonable business question. It is also the wrong lens for understanding what has actually changed.
The more significant shift is not what AI tools do for the consultant. It is what they mean for what clients should reasonably expect from a consulting engagement. Turnaround time. Research depth. Documentation volume. Deliverable completeness. These have all moved in ways that clients have not yet fully registered and consultants have not fully disclosed.
The gap between what AI-native consulting can now deliver and what traditional consulting still delivers is not a future trend. It is the current state of the market. Consultants who have not adopted these tools are not delivering below their historical standard. They are delivering below the new one.
What Changes About Research
Before AI tools, competitive landscape work, market research, and public information aggregation were the most time-intensive components of a typical consulting engagement. Not because they required rare expertise, but because they required sustained mechanical effort: reading, organizing, cross-referencing, structuring. A thorough competitive analysis that previously required three days of research work now takes six to eight hours. The output is often more comprehensive, not less, because the speed of information assembly allows for broader coverage before the synthesis work begins.
This is where AI tools deliver the clearest and most measurable improvement. The research tasks that were time-intensive but not insight-intensive have accelerated dramatically. What does not change is the interpretive layer. AI tools surface information quickly. They do not know which competitors matter most to a specific client in a specific market position. They do not recognize the significance of a pricing change buried in a competitor's Q3 earnings call. They do not understand the organizational context that makes a market signal meaningful versus noise.
Research acceleration without experienced judgment produces faster wrong answers. The combination of AI-assisted information assembly and experienced interpretation produces analysis that is both more thorough and more grounded than either component alone.
The client-facing implication is concrete. A competitive analysis that would previously have taken a week of engagement time now takes two to three days. The deliverable that comes out of it is more complete: more sources, more structure, more consistent organization across the landscape. If a client was receiving a competitive analysis as a standalone deliverable, they should now also expect supporting materials -- a positioning framework, a battlecard draft, a set of go-to-market implications -- within the same engagement scope.
What Changes About Documentation
Structured written deliverables have a drafting component that is significant and separable from the strategic thinking that shapes them.
A twelve-page whitepaper requires a clear argument, domain knowledge, audience awareness, and the judgment to know what to include and what to leave out. It also requires putting words on pages in a coherent structure, which is a different kind of work. Before AI tools, those two components were inseparable in practice: you thought through the argument while you wrote it out. AI tools compress the drafting component substantially. A first draft that previously took two to three days of writing now takes a day. The editing, refinement, and strategic calibration that follow are not compressed -- that work still requires the same expertise it always did.
The practical output difference: deliverables are denser. First drafts arrive with more structure and more complete coverage of the topic than first drafts from a traditional writing process. The revisions that follow are revisions of substance rather than revisions of organization.
For clients, this means more complete documentation at each stage of an engagement. A product strategy engagement that previously produced a roadmap document and an executive summary can now produce a roadmap, supporting prioritization analysis, a product requirements outline, and a stakeholder communication draft within the same timeline. These are not filler documents. They are the supporting materials that used to be scope expansions.
What Changes About Development
The build-versus-buy calculus for custom software has shifted, and most product leaders have not updated their thinking to reflect it.
The traditional model held that custom development was slower and more expensive than commercial off-the-shelf tools for most use cases, with exceptions carved out for organizations with highly specific operational requirements or sufficient scale to justify the investment. The math was roughly correct given the cost and pace of traditional development.
AI-assisted development changes the pace variable, and with it the math. A senior developer working with embedded AI tools can produce a working, production-ready custom application in a timeframe that would have required a small team working for weeks using traditional methods. Documentation is generated alongside code rather than as a separate downstream task. Testing coverage is more thorough because writing tests with AI assistance is faster than writing them manually.
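To make the shift in the math concrete, here is a minimal break-even sketch in Python. Every number in it is an illustrative assumption, not a figure from any engagement described here; the point is that shrinking the development timeline, rather than any particular rate, is what moves the payback period.

```python
# A minimal break-even sketch for the build-versus-buy decision.
# All figures below are illustrative assumptions, not rates from this article.

def months_to_break_even(person_weeks: float, weekly_dev_cost: float,
                         monthly_license_cost: float,
                         monthly_maintenance_cost: float) -> float:
    """Months until the custom-build cost is recovered by avoided license fees."""
    build_cost = person_weeks * weekly_dev_cost
    monthly_savings = monthly_license_cost - monthly_maintenance_cost
    if monthly_savings <= 0:
        return float("inf")  # custom development never pays back
    return build_cost / monthly_savings

# Traditional pace: a team of three for eight weeks (24 person-weeks).
traditional = months_to_break_even(person_weeks=24, weekly_dev_cost=4_000,
                                   monthly_license_cost=2_000,
                                   monthly_maintenance_cost=500)

# AI-assisted pace: one senior developer for two weeks.
ai_assisted = months_to_break_even(person_weeks=2, weekly_dev_cost=5_000,
                                   monthly_license_cost=2_000,
                                   monthly_maintenance_cost=500)

print(f"Traditional build pays back in {traditional:.0f} months")   # ~64
print(f"AI-assisted build pays back in {ai_assisted:.1f} months")   # ~6.7
```

Under these assumed inputs, the traditional build pays for itself in roughly five years while the AI-assisted build pays for itself in about seven months. That is the kind of threshold shift the next paragraphs describe.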
A project completed for a client in the healthcare services sector illustrates the practical outcome: a custom scheduling application that replaced a commercial tool the client had been running for years. The custom application delivered a fourteen-fold improvement in responsiveness. The client eliminated a commercial license cost. The development timeline was a fraction of what the same project would have required before AI development tools reached their current capability.
This does not mean custom development is the right answer more often than it was. Commercial tools are still the right answer in most cases. What has changed is the threshold case: the category of situations where the specific requirements, the performance needs, or the cost economics of custom development justify the investment has expanded meaningfully.
What Does Not Change
This is where the conversation about AI tools in consulting tends to go wrong in the other direction.
AI tools do not supply strategic judgment. They do not know what a specific client should prioritize, which market to enter first, or how to position a product against a competitive set with unusual dynamics. They are exceptionally good at organizing and accelerating work that has a clear structure. They are not good at work that requires synthesizing unstated context, reading organizational dynamics, or making recommendations that depend on understanding what a market will look like in three years.
Domain expertise does not transfer from AI tools to the consultant using them. A consultant who uses AI tools to research a market they do not understand will produce faster, more organized, incorrect analysis. The judgment layer that makes research valuable is not in the tool.
Client relationships and the trust that makes engagements effective are not accelerated by AI tools. The CISO who is trying to figure out whether to launch a partner channel, the VP of Product who needs to make a roadmap call before the next board meeting, the founder who is trying to determine whether their pricing model is going to survive a competitive response: these people need a consultant who understands their situation well enough to give them a real answer, not a faster document.
What AI tools change is the ratio of time spent on mechanical work to time spent on the thinking that makes the mechanical work useful. More time goes to the work that actually requires experienced judgment. Less time goes to the work that required time but not judgment. The engagement becomes more intensive in the right places.
The Question Clients Should Be Asking
The consulting industry will spend the next several years claiming AI adoption, and the reality behind those claims will range from genuine to cosmetic. Every consultant will say they use AI tools. What that means in practice varies enormously.
The most useful question a client can ask is not whether a consultant uses AI tools. It is what a recent deliverable looks like. Research density, structural completeness, the presence or absence of supporting materials alongside the primary deliverable, and the turnaround timeline on a project of comparable scope are all visible in finished work.
A consultant who works with AI tools embedded throughout their research and documentation process produces deliverables that look different from deliverables produced through traditional methods. The difference is not in the quality of strategic thinking. It is in the completeness of the work product around the strategic thinking.
Clients who have not updated their expectations for what a well-resourced consulting engagement delivers are leaving value on the table. They are accepting deliverables that meet the old standard when the new standard is achievable within the same budget and timeline.
The expectation reset is not coming. It has already happened. The question is whether the people you are working with know it.
Zed Foundry works with cybersecurity and enterprise SaaS companies on product strategy, go-to-market planning, competitive intelligence, and custom development. AI tools are embedded in every engagement. Get in touch →