Article in a Q1 scientific journal
Crowdsourcing hypothesis tests: making transparent how design choices shape research results
Justin F. Landy (Landy, J. F.); Miaolei (Liam) Jia (Jia, M.); Isabel L. Ding (Ding, I. L.); Domenico Viganola (Viganola, D.); Warren Tierney (Tierney, W.); Anna Dreber (Dreber, A.); Magnus Johannesson (Johannesson, M.); Thomas Pfeiffer (Pfeiffer, T.); Charles R. Ebersole (Ebersole, C. R.); Quentin F. Gronau (Gronau, Q. F.); Alexander Ly (Ly, A.); Don van den Bergh (van den Bergh, D.); Maarten Marsman (Marsman, M.); Koen Derks (Derks, K.); Eric-Jan Wagenmakers (Wagenmakers, E.-J.); Andrew Proctor (Proctor, A.); Daniel M. Bartels (Bartels, D. M.); Christopher W. Bauman (Bauman, C. W.); William J. Brady (Brady, W. J.); Felix Cheung (Cheung, F.); Andrei Cimpian (Cimpian, A.); Simone Dohle (Dohle, S.); M. Brent Donnellan (Donnellan, M. B.); Adam Hahn (Hahn, A.); Michael P. Hall (Hall, M. P.); William Jiménez-Leal (Jiménez-Leal, W.); David J. Johnson (Johnson, D. J.); Richard E. Lucas (Lucas, R. E.); Benoît Monin (Monin, B.); Andres Montealegre (Montealegre, A.); Elizabeth Mullen (Mullen, E.); Jun Pang (Pang, J.); Jennifer Ray (Ray, J.); Diego A. Reinero (Reinero, D. A.); Jesse Reynolds (Reynolds, J.); Walter Sowden (Sowden, W.); Daniel Storage (Storage, D.); Runkun Su (Su, R.); Christina M. Tworek (Tworek, C. M.); Daniel Walco (Walco, D.); Julian Wills (Wills, J.); Jay J. Van Bavel (Van Bavel, J. J.); Xiaobing Xu (Xu, X.); Kai Chi Yam (Yam, K. C.); Xiaoyu Yang (Yang, X.); William A. Cunningham (Cunningham, W. A.); Martin Schweinsberg (Schweinsberg, M.); Molly Urwitz (Urwitz, M.); Eric L. Uhlmann (Uhlmann, E. L.); Oleksandr Horchak (Horchak, O. V.); Crowdsourcing Hypothesis Tests Col (Crowdsourcing Hypothesis Tests Col); et al.
Journal Title
Psychological Bulletin
Year (final publication)
2020
Language
English
Country
United States of America
More Information
Web of Science®: 78 citations (last checked: 2024-04-15 07:52)
Scopus: 89 citations (last checked: 2024-04-11 10:14)
Google Scholar: 161 citations (last checked: 2024-04-15 13:03)

Abstract
To what extent are research results influenced by subjective decisions that scientists make as they design studies? Fifteen research teams independently designed studies to answer five original research questions related to moral judgments, negotiations, and implicit cognition. Participants from 2 separate large samples (total N > 15,000) were then randomly assigned to complete 1 version of each study. Effect sizes varied dramatically across different sets of materials designed to test the same hypothesis: Materials from different teams rendered statistically significant effects in opposite directions for 4 of 5 hypotheses, with the narrowest range in estimates being d = -0.37 to +0.26. Meta-analysis and a Bayesian perspective on the results revealed overall support for 2 hypotheses and a lack of support for 3 hypotheses. Overall, practically none of the variability in effect sizes was attributable to the skill of the research team in designing materials, whereas considerable variability was attributable to the hypothesis being tested. In a forecasting survey, predictions of other scientists were significantly correlated with study results, both across and within hypotheses. Crowdsourced testing of research hypotheses helps reveal the true consistency of empirical support for a scientific claim.
Acknowledgements
--
Keywords
Conceptual replications, Crowdsourcing, Forecasting, Research robustness, Scientific transparency
  • Psychology - Social Sciences
Funding records
Funding reference | Funding entity
SFB F63 | Austrian Science Fund (FWF)
17-MAU-133 | Marsden Fund Grants
16-UOA-190 | Marsden Fund Grants