Why Your Best Marketers Might Seem the Most Resistant to AI (And Why That's a Good Thing)
Distinguishing true resistance to the technology from the application of hard-won human judgment that helps you avoid AI slop and brand risk may be the most important job for marketing leaders today.
Most marketing leaders grappling with AI initiatives in their teams will recognize these scenes: the highly competent product marketer who keeps rejecting AI-generated messaging drafts; the senior brand strategist who reflexively labels every AI concept "generic" and "unusable"; the brilliant creative or art director who balks at anything AI as "slop," or who senses that something sounds or looks "wrong" but can't quite articulate why.
At first glance it looks like resistance to new tech. These are some of your best people, and yet they are adding what feels like unnecessary friction to your mandate of becoming an "AI-intentional" marketing organization. On an individual level, you worry they risk falling behind and getting labeled as out-of-touch: a drag on progress rather than an innovator.
But consider this: They may actually be your most valuable players in this marketing world of AI-fueled chaos, wild experimentation, unproven models, and unrelenting pressure.
The Expertise-Friction Paradox
Here's what I think is actually happening with those "resistant" marketers.
People who have spent years, sometimes decades, developing deep instincts for what works in their craft are finely tuned quality-detectors. Your senior brand strategist didn't wake up one morning knowing how to spot messaging that drifts off-brand or sounds stilted and artificial. She built that instinct through hundreds of campaigns, thousands of revisions, countless conversations with customers and stakeholders where she learned what resonates and what falls flat. Your rockstar product marketer developed his sense for competitive messaging through years of win/loss calls, sales pitches, and the often brutal experience of watching lovingly-crafted messaging that looked great in a deck completely fall flat with customers.
That kind of knowledge is largely tacit; it's human insight and intuition sharpened by experience, not simply piles of best-practice markdown documents. It lives in the gut, not in a document. And it's the hardest kind of knowledge to explain, precisely because it was built through accumulated experience rather than learned from a textbook or some gated guide. When these people look at AI-generated output and say "something's off," avoid assuming they are just reflexively pushing back against the technology. It may be that their well-honed "Spidey-sense" is going off, and it's worth everyone's while to take a step back and explore what's unsettling them.
AI, by definition, is not embedded in your specific business or market context. It doesn't have a depth of real-world experience to draw from, and certainly lacks (for now?) any kind of gut instinct for what "good" looks like. AI is operating from statistical patterns across massive datasets, which means it's very good at producing output that looks right in a general sense but misses the specific nuances that separate your brand from the amorphous blob of "industry best practices."
Your best employees, your marketing experts, are catching that gap. The friction they're creating, which may slow down your AI hopes and dreams, is the sound of quality control doing its job.
I explored this concept of judgment as the irreplaceable human skill in a previous piece on what humanities-trained thinking brings to the age of AI. That piece, interestingly, was my most-shared post ever, so it seems to have struck a nerve. The same dynamic applies here: AI excels at data synthesis and pattern-matching, but it doesn't grasp what actually matters in the specific context of your brand, your market, this moment. It can produce something that should work, but it lacks the subjective human judgment that so often makes or breaks a great brand or campaign.
The "Good Enough" Trap
Now contrast those "resistant" experts with a different archetype: the marketer who adopts AI fastest and most enthusiastically...and perhaps without question.
I want to be careful here because I'm not suggesting that early AI adopters are inherently less skilled. Many of the best marketers I know are also deeply engaged with AI tools and getting extraordinary results from them. Despite the challenges with the technology, and the legions of "never AI" naysayers out there, I firmly believe an "AI-Intentional" approach is something every marketing team needs to adopt.
But there is a pattern worth examining. In some cases, the people who adopt AI with the least friction do so because they lack the domain depth to recognize what's missing from the output. They see speed. They see a first draft that looks polished and professional. They see a campaign concept that hits all the expected notes.
What they may not see is that the elegant brand positioning their AI model generated is simply a sophisticated restatement of what everyone else's AI is also producing, drawing from the same training data and optimizing for the same market signals. They may not recognize that the tone is a few degrees off from what the brand actually sounds like (like an uncanny valley of sorts), or that the competitive differentiation in the messaging is functionally identical to what two of their competitors published last week.
This is the "homogenization crisis" I've written about before: when too many brands run their content and creative through similar AI tools, trained on overlapping data and applying the same best practices, genuine brand, creative, and strategic differentiation starts to disappear. The strongest brands are the ones that maintain a distinctly human judgment layer on top of AI-assisted production. And that judgment layer is exactly what your "resistant" experts are providing, whether anyone asked them to or not.
The primary risk for marketing organizations right now isn't AI resistance, I would argue. Yes, that risk, along with generalized AI uncertainty and anxiety, is a very real concern for marketing leaders. But the bigger risk those leaders face is uncritical AI acceptance. The marketer on your team who pushes back may simply be exercising the essential quality control that the uncritical AI enthusiast skips entirely in their embrace of speed and raw production.
What the Pushback is Actually Telling You
If you reframe "AI resistance" as an organizational signal rather than a people problem, the picture changes pretty dramatically.
When your best brand marketer rejects an AI draft, he may be telling you something about your brand's voice that the AI can't see. When your senior content strategist rewrites an AI whitepaper outline from scratch, she may be demonstrating the hard-won market and contextual knowledge that separates your brand's content from the generic AI-generated noise and slop we're all already drowning in. And when your creative director says an AI creative concept "doesn't feel right," they may be applying years of cultivated judgment that no prompt engineering is ever going to replicate.
These aren't examples of employees being "luddites"; they're expressions of the very human judgment, built through years of experience, that made these people your go-to rockstars in the first place. And that judgment is becoming more valuable, not less, as AI-generated content (slop) proliferates and the bar for genuine creativity and brand differentiation rises. More content doesn't mean better content, something experienced content marketers have known for years. Sales teams can drown in, and outright reject, AI slop just as easily as customers can; your top product marketers exercising their judgment are your hedge against this.
The question, then, isn't how to get these people to stop pushing back against AI. The question is how to channel that pushback intentionally and productively. There's a meaningful difference between a marketer who reflexively rejects AI output and moves on (friction without learning) versus one who rejects AI output and articulates why it falls short (friction that drives learning for your whole team).
Some Practical Considerations
This is an argument for being more intentional about how you adopt AI, and more specifically, about the role of human judgment and expertise in an AI-Intentional marketing organization.
A few things worth considering as you navigate this.
First, consider rethinking how you measure AI adoption within your team. If your primary metric is speed or volume of AI usage, you're incentivizing uncritical acceptance. I strongly dislike the blanket mandates we all hear about in the media, where "everyone must use AI" and CEOs track tokens used or AI output created. This devalues human judgment, spurs on more "slop," and creates serious brand and business risk (the recent Amazon example is a prime case).
Consider tracking something harder to measure but more valuable: the quality of the human-AI interaction and collaboration. Are your experienced marketers refining AI outputs in ways that demonstrably improve the end result? That's a better signal than whether everyone has an AI tool open on their desktop and is burning tokens to fill up SharePoint sites with "stuff."
Second, create space for the articulation of "why." When a marketing expert on your team rejects AI output, don't just accept the rejection or override it. Push them to explain what specifically is wrong. Not to challenge them, but because the act of articulating their judgment call is how it becomes a teachable lesson and ultimately results in better AI prompts, guidelines, and workflows. That "something's off" instinct is intensely valuable, but it's exponentially more valuable once it's been translated into a useful insight that helps everyone get better.
Third, celebrate that pushback. Just like a good leader should celebrate, and elevate, productive failures, a good leader in today's AI world should find ways to elevate well-articulated creative and strategic pushback against AI output drawn from intuition, experience, and human judgment. It should never be viewed as a subversive action, but rather something exceptional marketers routinely do.
And finally, do everything you can to distinguish between resistance rooted in expertise and judgment, and resistance rooted in fear, anxiety, or even disdain for the technology. They may look similar from the outside, but they are fundamentally different leadership challenges. The experienced expert who pushes back because the AI output genuinely isn't good enough needs to be heard and empowered, and given a role in shaping how AI gets used. The marketer who resists because AI feels threatening to their job security needs a different kind of support entirely: clarity about their evolving role, investment in their AI skills, and an honest conversation about how their expertise remains essential in an AI-Intentional world.
The Super Short TL;DR
The marketers who are "resisting AI" today might be resisting it precisely because they're great at marketing. They've spent years building exactly the kind of judgment, taste, and instinct that AI hasn't replicated, and probably never will. The friction they generate in your shiny new AI production engine isn't the problem; it's the signal that your investment in talent is paying off and your team is doing its job as good stewards of the brand and quality.
If that's the case (and it's your job as a leader to diagnose whether it is), your job isn't to simply go "full speed ahead on the AI train" and pressure them into getting on board. It's to make sure their hard-won human judgment productively shapes how AI gets used across your team.
Ultimately, I'd argue this may be one of the most important marketing AI initiatives you can undertake, right alongside general AI skills and fluency, agentic pilot deployments, and all the other AI initiatives dominating marketing conversations right now.