AI-Intentional, Not AI-First, is the Mindset for Winning Teams
Why the most effective marketing leaders are adopting a people-first, outcomes-focused approach to AI.
A few months ago I wrote about what an "AI-first" marketing team would actually look like in practice, and landed on a concept I've been mulling over ever since: the idea that "AI-Intentional" might be a more useful frame for most marketing leaders to think about AI adoption strategy than "AI-First."
The language may vary, but this is a theme I'm hearing both in 1:1 conversations with other CMOs and in some of the more measured online and conference conversations. It's not so much that the AI hype cycle has abated, but that the conversation is starting to mature. And for marketing leaders specifically, I think the distinction between "AI-first" and "AI-intentional" deserves deeper exploration, because how you frame your approach to AI shapes every downstream decision about talent, budget, culture, and operating model.
The Problem with "AI-First"
"AI-first" is an oddly seductive framing. It sounds decisive, modern, forward-looking. It's what your CEO or Board members want to hear, as a sign you are being proactive, innovative, and aggressive, both in terms of productivity and cost-efficiency. But embedded within "AI-first" is an assumption that can undermine good decision-making: that the starting point for every workflow, role, and investment should be the technology rather than the outcome. It can also undercut team culture through the implicit message that the technology is more important than the people, a message reinforced every day as rounds of mass layoffs are justified (rightly or wrongly) by investments in AI.
AI-first is the organizational equivalent of buying the tool before defining the project. "Everyone is buying shovels, why aren't we?" You end up with a marketing team optimized around AI technology rather than around what the business actually needs marketing to accomplish. In practice, this can manifest as teams running AI pilots that generate impressive metrics and great visuals but don't connect to real marketing workflows or strategic priorities, and chip away at team morale in the process.
The language matters here. When a CEO asks you to "make marketing AI-first," the implied mandate is one of transformation and, quite often, cost-cutting: reorganize, re-staff, rebuild around AI, replace people with AI automation. When the framing instead is "be intentional about where and how AI creates value," the implied mandate is one of strategic integration, a fundamentally different exercise that starts with what you are trying to accomplish and works backward to tools. Not every project needs a shovel, no matter how trendy shovels are; let's sort out what the project is and what we're good at today, then see where shovels might actually help.
What "AI-Intentional" Actually Means
Let me be clear on a critical point: AI-intentional isn't about being slower or more cautious with AI adoption. It's about being more precise and more strategic. It means making deliberate, contextualized decisions about where AI adds genuine value to your team, outcomes, and workflows versus where it introduces risk, cost, and complexity that outweigh the potential returns.
In practice, this looks like evaluating each area of your marketing function independently. Content production? AI is probably already delivering genuine efficiency gains there, and the maturity curve is well advanced; it's one of the first marketing use cases that early AI vendors (like Jasper or Copy.ai) and marketers gravitated toward. Brand strategy and creative development? The value proposition is more debatable: the homogenization risk is real, and the need for human creativity and judgment remains high. Market and competitive research, media optimization, analytics? AI is more naturally suited to these tasks and the vendor ecosystem is maturing fast, so the debate for your team is more about where and when a human needs to be in the loop for governance.
An AI-intentional CMO doesn't treat these as variations of the same question. They recognize that different parts of the marketing function have different levels of AI readiness, different risk profiles, and different levels of potential value and impact. The approach is portfolio-level thinking about AI, not a blanket mandate like "AI-first."
This maps directly to what I've been developing in the Disruption-Fluent Marketing framework, which I keep bringing up because it's broadly applicable to modern organizations. One of the core principles of DFM is that leaders should be managing the tension between operational efficiency and adaptive, creative responsiveness, not trying to eliminate that tension entirely (which is as impossible as it is counterproductive).
An AI-first mandate, applied indiscriminately, risks doing exactly that: optimizing everything for efficiency while inadvertently constraining the adaptive, creative, and emergent capabilities that make great marketing organizations so effective.
The Markers of an AI-Intentional Organization
So in practice, what separates an AI-intentional marketing team from one that's going all in on AI-first (or clinging to an AI-last/never mindset)?
Outcomes and ROI before tools. This is the age-old lesson of the rush to marketing technology, repeated through every trend and era of the last 30 years, just updated for AI. Every AI initiative must have a clearly defined business outcome it's designed to achieve, and a way to measure whether it's working, same as with any new martech investment. This sounds obvious, but there's a growing body of research showing that the vast majority of early marketing AI pilots fail to deliver real ROI, often because they are treated as mere experiments rather than business solutions. An AI-intentional team weighs AI through the lens of ROI, as with any investment, and kills a pilot that isn't producing value, no matter how shiny or cool the tech itself may be.
Differentiated adoption speed and approach by marketing function. Not everything moves at the same pace, nor should it; not every marketing function and activity is as suitable for AI. An AI-intentional organization accelerates hard where AI has proven reliable and the ROI is clear, while maintaining a more deliberate, test-and-learn approach in areas where the technology is less mature, the risk is higher, the automation opportunity is smaller, or the value of human creativity and judgment is clear. This requires leaders to learn to harness the kind of productive tension that enabling leadership is designed to manage.
Cultural honesty about what AI changes. The hardest part of AI-intentional leadership is the human dimension. Every AI adoption decision has downstream implications for your people: roles that shift, skills that become more or less valuable, career paths that need redesigning, and in many cases, teams that can be smaller. AI-intentional leaders don't pretend these impacts don't exist. They address them directly, investing in reskilling and having the uncomfortable conversations about how work is changing, because psychological safety is what determines whether your team leans into AI or actively resists it.
The TL;DR
"AI-first" is a technology-centric, tools-first mandate. "AI-intentional" is a people-centric, outcomes-first leadership philosophy.
The former has all the buzz right now from investors, boards, and CEOs, but it's fundamentally the wrong way to look at the (potentially) game-changing opportunities AI represents. The latter is about thinking strategically and intentionally about where AI maximizes value.
In a landscape where AI capabilities are evolving so rapidly that any specific tool or platform decision you make today might be irrelevant in six (three? one?) months, the organizations that will "win" in AI aren't the ones that simply adopted the fastest or deployed the widest.
The leaders and teams who will make the most effective use of AI will be the ones who embraced rapid experimentation but also took the time to build the judgment, culture, and discipline to continuously make good decisions about where and how AI creates real value.
Consider adopting an "AI-intentional" mindset, and diplomatically decline to be consumed by the tool-centric "AI-first" rush.