

Crafting a publishable scientific article can be challenging, especially for young scholars such as PhD students and postdoctoral fellows. To provide guidance and clarity, the Editorial Board of the Italian Journal of Marketing has decided to leverage the expertise of our Associate Editors.
Associate Editors were invited to answer key questions addressing the fundamental challenges scholars face when preparing their articles. Every two months, a different Associate Editor of the Italian Journal of Marketing will address a new set of questions, providing continuous, up-to-date insights tailored to the evolving needs of the academic community. Readers are also encouraged to actively participate in shaping this initiative by submitting their questions or topics of interest to itjm@simktg.it.
After exploring six questions with John B. Ford, we spoke with Felipe Pantoja today about approaching and conducting experiments to expand the body of knowledge and explore relevant marketing topics. Felipe Pantoja is an Assistant Professor of Marketing at MBS School of Business.
Our conversation began with a single open-ended question: Could you provide our readers with a guide to conducting rigorous and effective experimental design studies? Felipe Pantoja's reply is reported below.
From my perspective, experiments represent one of the most powerful tools for advancing scientific knowledge, as they allow us to infer causality with a level of precision that other methods often cannot match. By systematically manipulating one or more independent variables and observing their effect on a dependent variable, we can investigate the mechanisms that drive behavior.
That said, I have learned through extensive experience that experiments are not without limitations. Specifically, their effectiveness and reliability depend on key factors such as (1) how well the independent variable is manipulated, (2) how accurately the dependent variable is measured, and (3) how carefully extraneous variables are controlled. Getting these elements right is not always straightforward, but it is essential for producing trustworthy research insights.
First, a critical point concerns the quality of the manipulation(s). A poor manipulation can compromise the entire study, regardless of sample size or analytical rigor. For this reason, before launching a full-scale experiment, researchers should always conduct pre-tests to ensure that the manipulation works as intended. During pre-testing, it is important to use both direct and indirect manipulation checks. Ask participants not only whether they noticed or understood the manipulation but also whether it influenced their thoughts, feelings, or decisions in the expected way. This dual approach helps confirm that the manipulation was both noticed and psychologically meaningful.
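As a simple numerical illustration of pre-testing a manipulation check (the 7-point ratings and the helper name below are hypothetical, not taken from any particular study), one can compare direct check ratings between the intended high and low conditions with a standardized mean difference:

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Standardized mean difference (Cohen's d) with a pooled SD."""
    na, nb = len(group_a), len(group_b)
    pooled_sd = (((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Hypothetical 7-point direct manipulation-check ratings from a pre-test
high_condition = [6, 7, 6, 5, 7, 6]
low_condition = [3, 2, 4, 3, 2, 3]

d = cohens_d(high_condition, low_condition)
```

A large standardized difference on the direct check, together with the expected pattern on indirect items, supports the claim that the manipulation was both noticed and psychologically meaningful.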
Second, it is crucial to ensure proper random assignment of participants to experimental conditions. Randomly assigning participants to different conditions helps reduce the influence of confounding variables, increasing the likelihood that any observed differences in the dependent variable are indeed caused by the manipulation. Beyond supporting causal inferences, random assignment also enhances internal validity, improves statistical power, and promotes replicability. Additionally, I highly recommend using power analyses (e.g., G*Power) to determine the minimum sample size required to detect the expected effect size.
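Both recommendations can be sketched in a few lines of Python. The participant numbers, assumed effect size, and function names below are illustrative assumptions, and the sample-size calculation uses the standard normal approximation, so a t-based tool such as G*Power will return a slightly larger figure:

```python
import math
import random
from statistics import NormalDist

def assign_conditions(participant_ids, conditions, seed=None):
    """Randomly assign participants to conditions in near-equal groups."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    # Deal the shuffled participants round-robin across the conditions.
    return {cond: ids[i::len(conditions)] for i, cond in enumerate(conditions)}

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided, two-sample comparison.

    Normal approximation of the usual power formula; G*Power applies a
    t-distribution correction, so its answer is marginally larger.
    """
    z = NormalDist().inv_cdf
    return math.ceil(2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2)

# Example: 120 hypothetical participants, two conditions, reproducible seed
groups = assign_conditions(range(1, 121), ["control", "treatment"], seed=7)
needed = n_per_group(effect_size=0.5)  # assumed medium Cohen's d
```

Fixing the random seed documents the assignment procedure, which also makes it easy to report in a pre-registration.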
Third, it is worth noting that any well-designed experiment should be grounded in transparency, control, and methodological rigor. Today, open science practices—such as pre-registration and data sharing—offer researchers valuable tools to enhance the quality, transparency, and credibility of their experimental research. While these practices are not a panacea, they are essential for laying the groundwork for reliable scientific evidence.
With pre-registration, for example, researchers significantly reduce the risk of p-hacking and increase the transparency and replicability of their work by publicly disclosing their hypotheses, study design, and analysis plan prior to data collection. Pre-registering studies is particularly important for avoiding questionable research practices—such as dropping experimental conditions, excluding participants post hoc, changing statistical analyses, or reporting only statistically significant outcomes—that can bias results and lead to misleading conclusions.
My recommendation is always to pre-register your studies and be as transparent as possible. Clearly state your final sample size, exclusion criteria, independent and dependent variables, manipulation checks, and planned statistical analyses. Platforms like AsPredicted and the Open Science Framework (OSF) facilitate the registration of studies and commitment to transparency. Reviewers and readers will appreciate the clarity and structure this brings to your work.
Finally, I would like to emphasize the significance of field experiments. While laboratory and online studies are invaluable for testing theoretical predictions in controlled environments, field experiments allow us to assess whether these effects hold in real-world settings. They bring us closer to actual consumer behavior, increasing the ecological validity of our findings. For example, recent research by Liu et al. (2025) challenged the effectiveness of traffic light nutritional labeling in changing dietary behavior in real-world dining contexts. While the intervention showed strong effects in laboratory studies, its impact was not observed in a natural environment. Therefore, testing theories within the complexity of everyday decision-making is crucial for enhancing research relevance and readership.
In my work, I have increasingly incorporated field settings to observe behavior in real consumption contexts. Based on my recent experience, here are a few tips for conducting successful field experiments:
In sum, conducting experiments requires a balance between technical precision and creativity. Every research project comes with its own set of limitations, and researchers must remain attentive and flexible in navigating them. At the same time, transparency is essential: clearly communicating the study design, acknowledging its constraints, and explaining how they were addressed strengthens the credibility of the work.
For readers wishing to go deeper, I recommend the following references:
Textbooks
Conceptual article
Empirical article with robust practices in experimentation
A randomized clinical trial on the effects of traffic light nutritional labels on dietary change
Cover: Photo by StartupStockPhotos from Pixabay
