

Artificial Intelligence (AI) is now embedded in nearly every corner of marketing, from predictive analytics to content creation, automation, hyper-personalization, and customer experience design. However, beneath the surface of this rapid acceleration lies a critical and often overlooked question: how much of what we believe about AI’s impact is actually supported by evidence?
A recent article by Giacomo Zatini, published in the Italian Journal of Marketing, offers one of the most comprehensive answers to date. By reviewing 303 peer-reviewed studies using a mixed-method approach, the research uncovers a striking imbalance: although AI is celebrated for its potential to transform segmentation, targeting, pricing, and customer journeys, only 13.2% of studies rely on robust empirical designs such as experiments or case studies. This gap between theory and validated practice is not just academic: it carries direct strategic consequences for companies making significant, fast, and sometimes irreversible investments in AI.
In the interview below, Giacomo Zatini discusses what these findings mean for managers, explains how organizations can balance innovation with operational control, and highlights the skills and processes required to govern AI responsibly, thereby avoiding the main AI marketing risks.
Your review shows that many AI applications in marketing are still more theoretical than empirically validated. What concrete risks do companies face when adopting AI without solid evidence of actual outcomes?
The primary AI marketing risk is strategic and operational. The study found that only 13.2% of the reviewed studies included case studies or experiments; it is therefore crucial to start developing concrete projects that go beyond simply studying the adoption of a new technology (which is already well documented) and instead exploit its full potential.
Furthermore, with the arrival of generative AI, companies must pay attention to the shift from automation to supervision. LLMs produce variable outputs that require checks for accuracy, consistency with brand values, and compliance (e.g., copy and subject lines that drift in tone; product descriptions with unverified attributes; analytics summaries that imply causal connections). Without evaluation processes, companies can incur significant hidden costs, a sort of "supervision tax" that erodes ROI and slows time-to-market.
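The kind of supervision step described above can be sketched in code. The following is a minimal illustrative example, not anything proposed in the study: the rule names, banned phrases, and approved attribute list are all hypothetical stand-ins for whatever compliance and brand rules a company actually maintains.

```python
# Minimal sketch of a "supervision" gate for generated marketing copy.
# BANNED_CLAIMS and APPROVED_ATTRIBUTES are illustrative assumptions only.

BANNED_CLAIMS = {"guaranteed", "clinically proven", "100% effective"}
APPROVED_ATTRIBUTES = {"waterproof", "recycled", "handmade"}  # verified product facts

def review_copy(text: str, claimed_attributes: list[str]) -> list[str]:
    """Return a list of issues that require human review before publishing."""
    issues = []
    lowered = text.lower()
    # Compliance check: flag risky absolute claims.
    for phrase in BANNED_CLAIMS:
        if phrase in lowered:
            issues.append(f"compliance: banned claim '{phrase}'")
    # Accuracy check: flag product attributes not in the verified list.
    for attr in claimed_attributes:
        if attr not in APPROVED_ATTRIBUTES:
            issues.append(f"accuracy: unverified attribute '{attr}'")
    return issues

issues = review_copy(
    "Our guaranteed waterproof jacket, now with titanium lining.",
    claimed_attributes=["waterproof", "titanium lining"],
)
print(issues)
```

In practice such rule-based gates are only the first layer; anything they flag (or cannot evaluate, such as tone) is escalated to a human reviewer, which is where the "supervision tax" accrues.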
You discuss a transition from predictive to generative AI. In practical terms, what does this shift mean for a Chief Marketing Officer planning strategies over the next 2–3 years?
This evolution shifts the focus from process optimization (forecasts, recommendations) to scalable content creation and hyper-personalized experiences throughout the customer journey. The role of the CMO becomes crucial: they will have to rethink content operations (ideation, production, quality control), integrate human-in-the-loop review for co-creation and brand safety, and adopt new metrics (controlled uplift tests, brand fit, funnel impact). While the need to train staff with hybrid skills is growing, it is more essential than ever for companies to implement data governance systems that prevent distortion or abuse of synthetic content.
Ethical and governance issues emerge strongly from your findings. What practices or frameworks would you recommend to firms to avoid undermining customer trust?
First of all, data governance is the foundation on which data acquisition, and its use with AI tools, must be built. In essence, companies should first collect only the data that are actually useful, moving from information overload to the "knowledge that counts": the knowledge that makes the difference for management by helping it make effective decisions.
Beyond that, of course, transparency in data acquisition and management is vital, particularly for companies operating in heavily regulated industries (e.g., healthcare, finance, and public administration), where compliance, privacy, and auditability are non-negotiable.
The other aspect concerns attention to the customer. The conscious use of AI in content generation and customer communications must account for substantial differences across consumer segments: part of the target audience will prefer authentic, "human" content, while others will accept synthetic outputs if they are relevant and transparent. Hence the need for A/B tests, sentiment analysis, and an opt-out/preference center to calibrate the human/synthetic mix.
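Calibrating the human/synthetic mix via A/B testing comes down to a standard comparison of two conversion rates. The sketch below uses a two-proportion z-test on made-up click-through figures; the numbers are purely illustrative, and a real setup would also account for sample-size planning and multiple variants.

```python
# Illustrative A/B comparison: "human" copy (variant A) vs "synthetic" copy
# (variant B), using a two-proportion z-test. All figures are invented.
from math import sqrt, erf

def two_proportion_z(clicks_a: int, n_a: int, clicks_b: int, n_b: int):
    """Return (z, two-sided p-value) for H0: the two click rates are equal."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z(clicks_a=120, n_a=2000, clicks_b=90, n_b=2000)
print(f"z={z:.2f}, p={p:.4f}")
```

If the test shows no significant difference for a given segment, synthetic content can be scaled there; where human copy wins clearly, the mix should stay human-led.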
If you had to give one priority to top managers investing in AI for marketing today, what should it be?
It is essential to understand that adopting a technology and governing it are not the same thing. The scientific organization of work is changing, but the Taylorist vision remains functional: break down, standardize, measure. The novelty is that managerial skills are expanding downward: with widespread access to generative AI tools, every employee must know how to use them to produce high-quality outputs.
In fact, every worker becomes a "micro-manager." This is why I speak of hybrid skills: they must not remain the prerogative of roles with formal responsibility alone, but spread all the way down to the base of the pyramid. Hands-on training, clear usage guidelines, prompt standards, controls, and escalation at moments of risk: this is what it means to "govern" AI. Thus, AI is not yet another technological "add-on" but a strategic asset, embedded in the company's processes, skills, and culture, and capable of producing repeatable and verifiable results over time.
Cover image: Photo by talha khalil from Pixabay
