by Bryan Caplan · 16 Jan 2018 · 636pp · 140,406 words
Education 68 (3): 187–204. Ashenfelter, Orley, Colm Harmon, and Hessel Oosterbeek. 1999. “A Review of Estimates of the Schooling/Earnings Relationship, with Tests for Publication Bias.” Labour Economics 6 (4): 453–70. Assaad, Ragui. 1997. “The Effects of Public Sector Hiring and Compensation Policies on the Egyptian Labor Market.” World Bank
by Stuart Ritchie · 20 Jul 2020
whole thing becomes a vicious circle: why bother submitting your null paper for publication if it has a negligible chance of being accepted? This is publication bias. It’s also known, now anachronistically, as the ‘file-drawer problem’ – since that’s where scientists are said to be keeping all their null results
…
scientific results; or think of it as ‘if you don’t have anything positive to publish, don’t publish anything at all’. To understand how publication bias plays out in practice, we need to take a closer look at how scientists decide what’s a ‘positive’ or a ‘null’ result. That is
…
that someone legally becomes an adult precisely on a particular birthday. * * * Before we embarked on that somewhat complicated (but necessary) statistical diversion, we learned about publication bias – the tendency of scientists to publish only positive results and hide away the nulls. Now we know how they usually make that decision: ‘significant’ results
…
-off and the ‘realness’ or the importance of a result has had baleful consequences for the scientific record. We sometimes see the characteristic traces of publication bias when we zoom out to look at a whole segment of scientific literature. This zooming-out often takes the form of a meta-analysis, which
…
-analysis therefore gives more weight to the effect sizes from big studies, because they’re likely to be more accurate.28 In the context of publication bias, what we’re interested in is how the effect size and the sample size relate to one another. If you plot one versus the other
…
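The weighting the passage describes is ordinary inverse-variance (fixed-effect) pooling, and a few lines of Python make it concrete. The effect sizes and standard errors below are invented for illustration, not taken from any study mentioned here.

```python
# Illustrative fixed-effect meta-analysis: each study's effect estimate is
# weighted by the inverse of its variance, so large (precise) studies count
# for more. All numbers are invented for this sketch.
effects = [0.45, 0.30, 0.25, 0.12, 0.10]     # estimated effect sizes
std_errors = [0.30, 0.20, 0.15, 0.06, 0.04]  # smaller SE = bigger study

weights = [1 / se**2 for se in std_errors]   # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect = {pooled:.3f} (SE = {pooled_se:.3f})")
```

Note how the pooled estimate lands close to the big, precise studies' values rather than the noisy small-study ones — exactly the weighting the text describes.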
. In scenario B, the six studies from the bottom-left section (studies with small samples and small effects) are missing – a pattern that might signal publication bias. The vertical line in the middle of each graph is the overall effect size calculated by each meta-analysis. In the case of scenario B
…
human tendency towards confirmation bias (interpreting evidence in the way that fits our pre-existing beliefs and desires), is what’s at the root of publication bias. If you consider the overall conclusions of a meta-analysis based on Figure 2B rather than 2A, you can see how
…
priming in large-scale experiments of their own, their study found no effect from it whatsoever, with all their replication effect sizes converging around zero. Publication bias appears no less prominent in medicine. An analysis in 2007, for instance, found that over 90 per cent of articles describing the effectiveness of prognostic
…
, but not with as strong an effect as initially believed), their clinical reasoning will be knocked off track.32 If you hadn’t heard of publication bias before now, it would be perfectly understandable: it is one of science’s more embarrassing secrets. But a 2014 survey of reviews in top medical
…
, by how much?35 – but it’s doubtful that the proper answer is to ignore the issue entirely. The trouble with the archaeological approach to publication bias is that it relies on conjecture to fill in the gaps in the funnel plot – those places where we would expect the small studies with small effects to appear. Funnel plots can have weird shapes for reasons other than publication bias, especially if there are a lot of differences between the assorted studies that go into the meta-analysis.36 There are many cases where publication bias is more subtle, and thus harder to discern, than in those described above. Are there better ways to check for this kind of bias? One
…
. Thus the outlook of the leader on whose decisions fateful events depend is usually far more sanguine than the brutal facts admit.’42 In science, publication bias makes Yes Men out of the articles that do get published: we can see all the positives, but we don’t get to see the brutal null results. Making decisions on such partial information is a recipe for disaster. Last but not least, there’s a moral case against publication bias. If you’ve run a study that involved human participants, particularly if it’s one where they’ve taken a drug or undergone an experimental
…
effects) was for nothing. A similar argument applies to research you’ve done with someone else’s money. From every perspective – scientific, practical and ethical – publication bias is thus a major problem. Unfortunately, it’s far from the only problem caused by science’s persistent, deep-rooted bias toward positive results. * * * For an aspirational, career-driven scientist, publication bias has a major downside: hiding null results in the file drawer means you won’t get that all-important publication or that gratifying extra line
…
readers of the literature, have no idea how many tests have been done. Since so many results were being hidden from view, this was like publication bias happening within a single study. If we were able to see the whole process, null results and all, it’d look like a classic case
…
preferred over the equally well-conducted paper that reports the outcome, warts and all, to reach a more qualified conclusion.’74 Here we see how publication bias and p-hacking are two manifestations of the same phenomenon: a desire to erase results that don’t fit well with a preconceived theory. This
…
on their guard for the upwardly creeping risk of false positives. From 2005, the International Committee of Medical Journal Editors, recognising the massive problem of publication bias, ruled that all human clinical trials should be publicly registered before they take place – otherwise they wouldn’t be allowed to be published in most
…
surely enormous.83 Think back to the meta-analyses we covered earlier. Even leaving aside the fact that some research is often missing due to publication bias, if the studies included in the meta-analysis are all themselves exaggerated by p-hacking, the overall combined effect – in what’s supposed to be
…
and sexist prejudice are considered powerful forces that affect individuals and shape society. The evidence for the phenomenon is quite weak, and possibly subject to publication bias, for a 2015 meta-analysis reviewing all the relevant stereotype threat studies found a clear gap where the small, null studies on the subject – those
…
picture of the evidence on this important educational question.102 However, just as a misshapen funnel plot isn’t necessarily evidence of bias, nor should publication bias itself be interpreted as direct evidence of political bias. We know, after all, that scientists in general will favour positives over nulls, regardless of political
…
they see. This is where the logic leads. If you find an effect in an underpowered study, that effect is probably exaggerated.47 Then comes publication bias: since large effects are exciting effects, they’re much more likely to go on to get published. That’s why, when reading the scientific literature
…
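The chain of reasoning in this passage — underpowered study, inflated estimate, then selective publication of the exciting result — can be checked with a toy simulation. All the numbers below (true effect, sample size, study count) are invented assumptions, not figures from the book.

```python
import random
import statistics

random.seed(1)
TRUE_EFFECT = 0.2    # assumed real effect, in standardised units (invented)
N = 20               # per-group sample size: badly underpowered
SE = (2 / N) ** 0.5  # rough standard error of a two-group comparison

published = []
for _ in range(20_000):                       # many identical small studies
    estimate = random.gauss(TRUE_EFFECT, SE)  # sampling noise around truth
    if abs(estimate / SE) > 1.96:             # 'significant' at p < .05
        published.append(estimate)            # only these reach the journals

print(f"true effect:           {TRUE_EFFECT}")
print(f"mean published effect: {statistics.mean(published):.2f}")
```

Because only estimates large enough to clear the significance bar survive, the mean of the published studies sits far above the true effect — the exaggeration the text describes.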
fatty acids for their effects on heart disease and death.96 This was probably for three reasons. First, there were the tell-tale signs of publication bias, with lopsided funnel plots suggesting small-sample, small-effect papers were being file-drawered.97 Second, one of the trials that claimed to be randomised
…
-slicing to become a norm. Incentivise publication in high-impact journals, and you’ll get it – but be prepared for scientists to use p-hacking, publication bias and even fraud in their attempts to get there. Incentivise competing for grant money, and you’ll get it – but be prepared for scientists to
…
-looking story of scientific discovery – a process that made the drugs in question look substantially more effective than they really were. The first step was publication bias: 98 per cent of the positive trials (fifty-two of fifty-three) were eventually published, compared to only 48 per cent of the negative ones
…
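The arithmetic of those publication rates is worth seeing directly. The 98 and 48 per cent figures come from the passage; the fifty-trials-per-side split is an invented illustration.

```python
# Selective publication arithmetic: apply the publication rates quoted in
# the text (98% of positive trials, 48% of negative ones) to a hypothetical
# field with 50 trials of each kind. The 50/50 split is an assumption.
POS_RATE, NEG_RATE = 0.98, 0.48
pos_trials = neg_trials = 50

pos_published = POS_RATE * pos_trials   # visible positives
neg_published = NEG_RATE * neg_trials   # visible negatives

share_positive_truth = pos_trials / (pos_trials + neg_trials)
share_positive_seen = pos_published / (pos_published + neg_published)
print(f"positive share in reality:   {share_positive_truth:.0%}")
print(f"positive share in journals:  {share_positive_seen:.0%}")
```

Even with the evidence split evenly, the journals end up looking decisively positive.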
of holes in the ground but no buildings.13 How do we reverse the prioritisation of novel results over solid ones? How do we combat publication bias, ensuring that all results get published, no matter whether they’re groundbreaking or null? One answer has been to create journals that specialise in publishing
…
routinely publishing more replications. Meta-scientists will be keeping an eye out. * * * Making it easier for scientists to publish replications and null results might reduce publication bias. But what about the other forms of bias we encountered, having to do with p-hacking? Many dozens of papers, and even entire books, have
…
come out. Only then do the scientists start to collect their data.49 Not only does this type of study, called a ‘Registered Report’, kill publication bias stone dead, by removing the pernicious link between the statistical significance of the results and the decision to publish, but it reduces p-hacking as
…
for formal peer review, being able to comment instantly on drafts of new findings, and sharing important null results that wouldn’t normally survive the publication-bias process has – despite the rare misleading claims – given us a scientific literature that’s months, or maybe years, ahead of where it would be otherwise
…
extent will it be shared with the community? The funders could also follow the initiatives we saw above, and withhold money if scientists contribute to publication bias by failing to report all their results. As Plan S shows, when funders band together to push Open Science principles they can be a potent
…
at may itself be an attempted replication of an earlier result). Of course, reviews and meta-analyses can themselves be corrupted by poor research and publication bias in their source material; if you find a meta-analysis of studies that were all themselves pre-registered, then you’ve hit the jackpot, but
…
-highlight-biases-in-ecology-and-evolution-science-64475. Sparrows: Alfredo Sánchez-Tójar et al., ‘Meta-analysis challenges a textbook example of status signalling and demonstrates publication bias’, eLife 7 (13 Nov. 2018): e37385; https://doi.org/10.7554/eLife.37385.001. Blue tits: Timothy H. Parker, ‘What Do We Really Know about
…
’, Lancet 391, no. 10128 (April 2018): pp. 1357–66; https://doi.org/10.1016/S0140-6736(17)32802-7 33. Akira Onishi & Toshi A. Furukawa, ‘Publication Bias Is Underreported in Systematic Reviews Published in High-Impact-Factor Journals: Metaepidemiologic Study’, Journal of Clinical Epidemiology 67, no. 12 (Dec. 2014): pp. 1320–26, https://doi.org/10.1016/j.jclinepi.2014.07.002 34. D. Herrmann et al., ‘Statistical Controversies in Clinical Research: Publication Bias Evaluations Are Not Routinely Conducted in Clinical Oncology Systematic Reviews’, Annals of Oncology 28, no. 5 (May 2017): pp. 931–37; https://doi.org/10
…
and Practices in Psychological Science 2, no. 2 (June 2019): pp. 115–44; https://doi.org/10.1177/2515245919847196 36. Daniel Cressey, ‘Tool for Detecting Publication Bias Goes under Spotlight’, Nature, 31 March 2017; https://doi.org/10.1038/nature.2017.21728; Richard Morey, ‘Asymmetric Funnel Plots without
…
‘Published’ column of Franco et al.’s Table 2 by the total in the bottom row. 40. All quotations from Franco et al.’s ‘Publication Bias’, Supplementary Table S6. 41. The conclusion of the Franco publication bias study is backed up by: Kerry Dwan et al., ‘Systematic Review of the Empirical Evidence of Study
…
in Social Psychology 3, no. 2 (4 May 2018): pp. 140–74; https://doi.org/10.1080/23743603.2018.15596 47. For more evidence on publication bias in stereotype threat studies, see Oren R. Shewach et al., ‘Stereotype Threat Effects in Settings with Features Likely versus Unlikely in Operational Test Settings: A
…
40, no. 11 (Nov. 2010): pp. 1767–78; https://doi.org/10.1017/S0033291710000516. Incidentally, you can bet that there was a great deal of publication bias in the candidate gene literature. For some evidence of this in studies of how candidate genes interact with the environment, see Laramie E. Duncan & Matthew
…
bias blinding and conflict of interest De Vries’ study (2018) funding and groupthink and meaning well bias Morton’s skull studies p-hacking politics and publication bias randomisation and sexism and Bik, Elisabeth Bill & Melinda Gates Foundation Biomaterials biology amyloid cascade hypothesis Bik’s fake images study (2016) Boldt affair (2010) cell
…
, misconduct in Hwang affair (2005–6) Macchiarini affair (2015–16) meta-scientific research microbiome studies Morton’s skull studies Obokata affair (2014) outcome switching preprints publication bias replication crisis Reuben affair (2009) spin and statistical power and Summerlin affair (1974) Wakefield affair (1998–2010) biomedical papers bird flu bispectral index monitor black
…
Brown, Nick Bush, George Walker business studies BuzzFeed News California Walnut Commission California wildfires (2017) Canada cancer cell lines collaborative projects faecal transplants food and publication bias and replication crisis and sleep and spin and candidate genes carbon-based transistors Cardiff University cardiovascular disease Carlisle, John Carlsmith, James Carney, Dana cash-for
…
, Alex Cuddy, Amy CV (curriculum vitae) cyber-bullying cystic fibrosis Daily Mail Daily Telegraph Darwin Memorial, The’ (Huxley) Darwin, Charles Das, Dipak datasets fraudulent Observational publication bias Davies, Phil Dawkins, Richard De Niro, Robert De Vries, Ymkje Anna debt-to-GDP ratio Deer, Brian democratic peace theory Denmark Department of Agriculture, US
…
interest disclosure fraud and hype and impact factor language in mega-journals negligence and Open Science and peer review, see peer review predatory journals preprints publication bias rent-seeking replication studies retraction salami slicing subscription fees Jupiter Kahneman, Daniel Kalla, Joshua Karolinska Institute Krasnodar, Russia Krugman, Paul Lacon, or Many Things in
…
projects Fujii affair (2012) Hwang affair (2005–6) Macchiarini affair (2015–16) meta-scientific research Obokata affair (2014) outcome switching pharmaceutical companies preprints pre-registration publication bias replication crisis Reuben affair (2009) spin and statistical power and Summerlin affair (1974) Wakefield affair (1998–2010) medical reversal Medical Science Monitor Mediterranean Diet Merton
…
, Max plane crashes PLOS ONE pluripotency Poehlman, Eric politics polygenes polyunsaturated fatty acids Popper, Karl populism pornography positive feedback loops positive versus null results, see publication bias post-traumatic stress disorder (PTSD) power posing Prasad, Vinay pre-registration preclinical studies predatory journals preprints Presence (Cuddy) press releases Prevención con Dieta Mediterránea (PREDIMED
by Sarah Boslaugh · 10 Nov 2012
characteristics of the data. Ideally, every result should be reported, even if the study did not find statistical significance. Failure to do so leads to publication bias, in which only significant results are published, creating a misleading picture of our state of knowledge. Don’t be afraid to report deviations, nonsignificant test results, and failure to reject null hypotheses—not every experiment can or should result in a major scientific result! Publication Bias and the Funnel Plot It’s easy to fall into the naïve belief that the published research literature presents a fair picture of our collective
…
particular drug and no articles saying it is ineffective, that’s pretty good evidence that the drug works, right? Unfortunately, not always. The reason is publication bias (also known as the file drawer problem), the tendency for articles presenting statistically significant results to be published and articles without such results to remain
…
repeatedly by other articles. (The number of citations is sometimes used as a measure of an article’s importance or influence.) One way to evaluate publication bias on a topic is to create a funnel plot, a graph in which each data point represents a published study, with the log odds ratio of the study on the horizontal axis and the standard error of the study on the vertical axis. If there is no publication bias, we expect to see a pattern similar to an inverted funnel, as in Figure 20-1. Note that in studies with a larger standard error
…
of studies with positive, negative, and nonsignificant results has been published. A funnel plot with the general shape shown in Figure 20-1 suggests that publication bias is not a large concern in this particular area of research. A funnel plot that looks more like Figure 20-2 does suggest
…
half of the funnel is missing because few studies have been published with a neutral or negative result. The plot alone does not prove publication bias (several other possibilities are discussed in the Cochrane Collaboration document listed in Appendix C), but it does suggest it as a possibility. Figure 20-1. A funnel plot suggesting little to no publication bias Figure 20-2. A funnel plot suggesting publication bias Issues in Research Design Generally, the design of an investigation of a question of interest needs to follow the guidelines presented in
…
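The funnel-plot diagnostic described above can be mimicked in a toy simulation — not Boslaugh's own example; every parameter below is invented. Large studies always get "published", small ones only when they show a significant positive effect, and the surviving literature becomes asymmetric.

```python
import random
import statistics

random.seed(42)
TRUE_LOG_OR = 0.0   # assume, for this sketch, the treatment does nothing

def run_study(n):
    """One simulated study: a log odds ratio estimate and its standard error."""
    se = 2 / n ** 0.5            # rough SE that shrinks as the study grows
    return random.gauss(TRUE_LOG_OR, se), se

# Build a biased literature: big studies are always published; small ones
# appear only if they show a 'significant' positive effect.
literature = []
for n in [40, 60, 100, 200, 400, 800] * 400:
    est, se = run_study(n)
    if n >= 200 or est / se > 1.96:
        literature.append((est, se))

# A funnel plot would scatter est (x) against se (y). A crude asymmetry
# check: compare the imprecise (small) studies with the precise (large) ones.
small = [est for est, se in literature if se > 0.2]
large = [est for est, se in literature if se <= 0.2]
print(f"mean log OR, small studies: {statistics.mean(small):+.2f}")
print(f"mean log OR, large studies: {statistics.mean(large):+.2f}")
```

The large studies cluster around the true value of zero, while the small studies that survived publication all sit well to the right — the missing lower-left half of the funnel.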
, Phil, and Sally Green, eds. 2009. The Cochrane Collaboration Learning Material for Reviewers. http://www.cochrane-net.org/openlearning/. This includes a clear discussion of publication bias, written to support the efforts of The Cochrane Collaboration, an international organization whose purpose is to support informed decision making in health care. Darryl Huff
…
in data, Ingredients of a Good Design nonresponse, Bias in Sample Selection and Retention, Glossary of Statistical Terms predictions of presidential elections and sample, Exercises publication bias, Quick Checklist recall, Information Bias recall bias, Glossary of Statistical Terms retrospective adjustment, Retrospective Adjustment selection, Bias in Sample Selection and Retention, Glossary of Statistical
…
, The Mean–The Mean graphical methods and, Frequency Tables–Frequency Tables Friedman test, Friedman Test–Friedman Test fully crossed design, Basic Vocabulary funnel plot, evaluating publication bias using, Quick Checklist G gambling and statistics, Closing Note: The Connection between Statistics and Gambling–Closing Note: The Connection between Statistics and Gambling gamma (Goodman
…
a Composite Test–Reliability of a Composite Test standardized scores, Standardized Scores–Standardized Scores test construction, Test Construction–Test Construction psychometrics, Educational and Psychological Statistics publication bias, Quick Checklist Q quadratic regression model, Polynomial Regression–Polynomial Regression Quality Improvement (QI), Quality Improvement–Run Charts and Control Charts quasi-experimental, Basic Vocabulary–Basic
by Robert N. Proctor · 28 Feb 2012 · 1,199pp · 332,563 words
the EPA of imprecision, inconsistency, faulty interpretations, improper extrapolations, use of “crude and disputable” estimates of exposure, bias from confounding and misclassification, improper treatment of publication bias, reliance on inconsistent or improperly recorded data, and several other flaws.39 Switzer was well paid for his services, receiving a total of $647,046
by Daniel Simons and Christopher Chabris · 10 Jul 2023 · 338pp · 104,815 words
that action video games increase cognitive abilities, see J. Hilgard, G. Sala, W. R. Boot, and D. J. Simons, “Overestimation of Action-Game Training Effects: Publication Bias and Salami Slicing,” Collabra: Psychology 5 (2019): 30 [https://doi.org/10.1525/collabra.231]. 31. Cornell has not released the full results of its
…
[https://psycnet.apa.org/doi/10.1037/bul0000139]; J. Hilgard, G. Sala, W. R. Boot, and D. J. Simons, “Overestimation of Action-Game Training Effects: Publication Bias and Salami Slicing,” Collabra: Psychology 5 (2019) [https://doi.org/10.1525/collabra.231]. 28. Original study: D. R. Carney, A. J. Cuddy, and A
by Gabriel Weinberg and Lauren McCann · 17 Jun 2019
be explained as a base rate fallacy. Unfortunately, studies are much, much more likely to be published if they show statistically significant results, which causes publication bias. Studies that fail to find statistically significant results are still scientifically meaningful, but both researchers and publications have a bias against them for a variety
…
populations vary too much. They also cannot eliminate biases from the original studies themselves. Further, both systematic reviews and meta-analyses can be compromised by publication bias because they can include only results that are publicly available. Whenever we are looking at the validity of a claim, we first look to see
…
, 302 promotions, 256, 275 proximate cause, 31, 117 proxy endpoint, 137 proxy metric, 139 psychology, 168 Psychology of Science, The (Maslow), 177 Ptolemy, Claudius, 8 publication bias, 170, 173 public goods, 39 punching above your weight, 242 p-values, 164, 165, 167–69, 172 Pygmalion effect, 267–68 Pyrrhus, King, 239 Qualcomm
…
, 296, 298 being locked into, 305 dating, 8–10, 95 replication crisis, 168–72 Republican Party, 104 reputation, 215 research: meta-analysis of, 172–73 publication bias and, 170, 173 systematic reviews of, 172, 173 see also experiments resonance, 293–94 response bias, 142, 143 responsibility, diffusion of, 259 restaurants, 297 menus
by Tim Harford · 2 Feb 2021 · 428pp · 103,544 words
Social Psychology, you might well conclude that people can indeed see into the future. For obvious reasons, this particular flavor of survivorship bias is called “publication bias.” Interesting findings are published; non-findings, or failures to replicate previous findings, face a higher publication hurdle. Bem’s finding was the $55,000 potato
…
and the rest of academic psychology with one big question on their hands: How on earth did this happen? Part of the explanation must be publication bias. As with Daryl Bem’s study, there is a systemic bias toward publishing the interesting results, and of course flukes are more likely to seem
…
American Quarter Dollars Minted in 1977.”13 To be clear—such a research paper would be fraudulent, and nobody believes that such extreme and premeditated publication bias explains the large number of nonreplicable studies that Nosek and his colleagues unveiled. But there are shades of gray. What if 1,024 researchers individually
…
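Since 2^10 = 1,024, one way to make the thought experiment concrete is to imagine each researcher flipping a fair coin ten times. This reading, and the code below, is our illustration rather than the book's exact setup.

```python
import random

random.seed(7)
RESEARCHERS, FLIPS = 1024, 10   # 2**10 = 1,024: about one fluke expected

all_heads = 0
for _ in range(RESEARCHERS):
    # Each researcher honestly runs the same null experiment exactly once.
    if all(random.random() < 0.5 for _ in range(FLIPS)):
        all_heads += 1          # the lucky fluke worth writing up

print(f"{all_heads} of {RESEARCHERS} researchers saw ten heads in a row")
```

No individual has p-hacked anything, yet the one lucky result is the one the literature sees — publication bias without a single dishonest researcher.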
. This behavior doesn’t sound especially unreasonable to the layman, and it probably doesn’t feel unreasonable to the researchers doing it—but it is publication bias nonetheless, and it means that flukes are likely to be disproportionately published. Another possibility is that the researcher does the study, finds some promising results
…
if he or she did so after gathering the data and getting a feel for how they looked. This leads to yet another kind of publication bias: if a particular way of analyzing the data produces no result, and a different way produces something more intriguing, then of course the more interesting
…
unwittingly commit subtler versions of the same statistical sins. The standard statistical methods are designed to exclude most chance results.19 But a combination of publication bias and loose research practices means we can expect that mixed in with the real discoveries will be a large number of statistical accidents. * * * — Darrell Huff’s How to Lie with Statistics describes how publication bias can be used as a weapon by an amoral corporation more interested in money than truth. With his trademark cynicism, he mentions that a toothpaste
…
certainly a risk—not only in advertising but also in the clinical trials that underpin potentially lucrative pharmaceutical treatments. But might accidental publication bias be an even bigger risk than weaponized publication bias? In 2005, John Ioannidis caused a minor sensation with an article titled “Why Most Published Research Findings Are False.” Ioannidis is
…
no choice but to accept that the major conclusions of these studies are true.” Now we realize that disbelief is an option. Kahneman does, too. Publication bias, and more generally the garden of forking paths, means that plenty of research that seems rigorous at first sight both to onlookers and often to
…
of counterintuitiveness (not too absurd, but not too predictable) that makes them so fascinating. The “interestingness” filter is enormously powerful. * * * — Little harm is done if publication bias (and survivorship bias) merely produces cute distortions in our view of the world, leading people to prepare for a job interview by finding a secluded
…
or the best known treatment. An RCT is indeed the fairest one-shot test of a new medical treatment, but if RCTs are subject to publication bias, we won’t see the full picture of all the tests that have been done, and our conclusions are likely to be badly skewed.30
…
have found forty-eight trials showing a positive effect and three showing no positive effect. This sounds pretty encouraging, until you ponder the risk of publication bias. So the researchers behind that survey looked harder, digging out twenty-three unpublished trials; of these, twenty-two had a negative result in which the
…
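The trial counts in this passage are enough to compute how different the published record looks from the full one. The only assumption added below is counting the one unpublished trial not described as negative among the non-negatives.

```python
# Trial counts quoted in the passage: 48 published positive, 3 published
# negative; 23 unpublished trials were later dug up, 22 of them negative.
# (We count the remaining unpublished trial with the non-negatives.)
pub_pos, pub_neg = 48, 3
unpub_neg = 22
unpub_other = 23 - unpub_neg

seen_positive = pub_pos / (pub_pos + pub_neg)
all_trials = pub_pos + pub_neg + unpub_neg + unpub_other
full_positive = (pub_pos + unpub_other) / all_trials

print(f"positive share, published trials only: {seen_positive:.0%}")
print(f"positive share, all {all_trials} trials: {full_positive:.0%}")
```

Reading only the journals, the treatment looks almost uniformly effective; the full record is far more equivocal.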
plan to do and how they plan to analyze the results, posting that explanation on a public website. Such preregistration is an important fix for publication bias, because it means that researchers can easily see cases in which a trial was planned but then somehow the results went missing in action. Preregistration
…
on Apparent Efficacy,” New England Journal of Medicine, January 17, 2008, https://www.nejm.org/doi/full/10.1056/NEJMsa065779. 32. Ben Goldacre, “Transparency, Beyond Publication Bias,” talk given at the International Journal of Epidemiology Conference, 2016; available at https://www.badscience.net/2016/10/transparency-beyond-publication-bias-a-video-of-my-super-speedy-talk-at-ije/. 33. Ben Goldacre et al., “COMPare: A Prospective Cohort Study Correcting and Monitoring 58 Misreported Trials
…
in Real Time,” Trials 20, no. 118 (2019), https://doi.org/10.1186/s13063-019-3173-2. 34. Ben Goldacre, “Transparency, Beyond Publication Bias.” 35. Amy Sippett, “Does the Backfire Effect Exist?,” Full Fact (blog), March 20, 2019, https://fullfact.org/blog/2019/mar/does-backfire-effect-exist/; Brendan
…
, 268 negativity bias, 95–99 non-response bias, 146–47 novelty bias, 95–99, 113, 114, 122 optimism bias, 96 and power of doubt, 13 publication bias, 113–16, 118–23, 125–27 racial bias in criminal justice, 176–79 in sampling, 135–38, 142–45, 147–51 selection bias, 2, 245
…
–74, 72n HIV/AIDS data, 36 and informed consent, 181 medical records, 220–21 and motivated reasoning, 28–29 and optimism bias, 96–98 and publication bias, 125–26 and sanitation advocacy, 225–26, 233–37 and scale of news reporting, 91 and smoking research, 3–6, 96, 100, 248, 279 and
…
reproducibility crisis, 112–16, 120–22, 128–29, 130–31 public health. See health and medical data public opinion, 149, 220 public transportation, 47–49 publication bias, 113–16, 118–23, 125–27 publicity, 107 Puerto Rico, 197–98, 200 Puy de Dôme, France, 172 Quetelet, Adolphe, 219 racial data, 176–79
…
and “N = All” assumptions, 150, 152, 155 and novel representations of statistics, 95n and peer review, 111–12, 189n and premature enumeration, 65–85 and publication bias, 113–16, 118–23, 125–27 and reproducibility crisis, 107, 112–16, 120–22, 129–31 and sampling bias, 135–38, 142–45, 147–51
by Alex Edmans · 13 May 2024 · 315pp · 87,035 words
data mining because they never see the tests the authors tried and buried because they didn’t work out. Journals can also fall victim to publication bias, where they accept a paper because they like its findings. And what findings do editors like? Statistically significant ones, because they’re more likely to
…
Journal of the American Medical Association 219 journal quality 218 journalists 228, 282 checking facts 273 journals impact factor 220 peer-reviews of 217–18 publication bias 220 Joy, Bill 61 Kahan, Dan 263, 266, 268 Kahneman, Daniel 29 Kaplan, Jonas 28–9 Keil, Frank 54, 251 Kempf, Elisabeth 241 Kennedy, General
…
Management, The (Taylor) 195 processing power 248–52 productivity 193–4 professorship 227 proof 198–9 Psychological Science 221, 269, 270 psychometric tests 108–9 publication bias 220 publication process 273–4 endorsements 274–5 quantitative easing 226 Quest, Richard 133–4 Quote Investigator 270 racial discrimination 175–6 Rambotti, Simone 161
by Charles Murray · 28 Jan 2020 · 741pp · 199,502 words
al., Doyle and Voyer). In contrast, Stoet and Geary and Flore and Wicherts are both worried about the degree to which there is evidence of publication bias (only studies that find stereotype threat reach publication), a lack of control groups in many studies, and other methodological weaknesses.[32] The studies of race
…
a problem with stereotype threat research: Replications often fail to confirm the earlier results.[36] The evidence for stereotype threat has dissipated over time.[37] Publication bias (failure to report negative results) appears to have been a reality.[38] In 2019, scholars at the University of Minnesota dealt with these and other
…
-stakes settings, the effect size of stereotype threat was –.14 (lowering test scores), a small effect that was further reduced to –.09 after correcting for publication bias. The authors summarized their findings as follows: Based on the results of the focal analysis, operational and motivational subsets, and publication bias analyses, we conclude that the burden of proof shifts back to those that claim that stereotype threat exerts a substantial effect on standardized test takers.
…
Our best estimate of stereotype threat effects within groups in settings with conditions most similar to operational testing is small and inflated by publication bias.[39] Given this assessment from the largest and most rigorous meta-analysis of a quarter century of attempts to demonstrate stereotype threat, it seems unlikely
…
. The more recent literature on stereotype threat and math and visuospatial skills more commonly has found little or no effect and also found evidence of publication bias; e.g., Pennington, Litchfield, McLatchie et al. (2018); Stoet and Geary (2012); Ganley, Mingle, Ryan et al. (2013). For more on stereotype threat, see chapter
…
school-aged girls [d = –0.22]; however, the studies show large variation in outcomes, and it is likely that the effect is inflated due to publication bias. This finding leads us to conclude that we should be cautious when interpreting the effects of stereotype threat on children and adolescents in the STEM realm. To be more explicit, based on the small average effect size in our meta-analysis, which is most likely inflated due to publication bias, we would not feel confident to proclaim that stereotype threat manipulations will harm mathematical performance of girls in a systematic way or lead women to
…
. 2018. “Hominin Occupation of the Chinese Loess Plateau Since About 2.1 Million Years Ago.” Nature 559 (7715): 608–12. Zigerell, L. J. 2017. “Potential Publication Bias in the Stereotype Threat Literature: Comment on Nguyen and Ryan (2008).” Journal of Applied Psychology 102 (8): 1159–68. Zucker, Kenneth J. 2017. “Epidemiology of
by John H. Johnson · 27 Apr 2016 · 250pp · 64,011 words
probably are some scientists who actually throw things at the wall until something sticks…). A fascinating New Yorker article (is there any other kind?) examines publication bias as a possible cause of the “decline effect,” in which the size of a statistically significant effect declines over time. Why? One statistician found that
…
, 4. See also misrepresentation and misinterpretation means, 32–34 definition of, 32 mean trimming, 40 media cherry-picking by, 116 data interpretation by, 75, 81 publication bias and, 80 medians, 32–34 definition of, 32 medical coding errors, 97 Medical News Today, 75 memory of printed vs. online material, 2 Mercator, Gerardus
…
printed vs. online material memory of, 2 probability, 70–71, 81 coincidence and, 138–139 forecasting and, 131 proxies, 49–50 psychology research, 15–16 publication bias, 80 p-values, 71, 72, 79 Q questions/questioning, 7–8 cherry-picking and, 122 correlation vs. causation, 60 of print vs. online information, 93
by Ben Goldacre · 22 Oct 2014 · 467pp · 116,094 words
by Jevin D. West and Carl T. Bergstrom · 3 Aug 2020
by Sebastien Page · 4 Nov 2020 · 367pp · 97,136 words
by David Spiegelhalter · 14 Oct 2019 · 442pp · 94,734 words
by John Abramson · 15 Dec 2022 · 362pp · 97,473 words
by Tom Chivers and David Chivers · 18 Mar 2021 · 172pp · 51,837 words
by Ben Goldacre · 1 Jan 2008 · 322pp · 107,576 words
by Samuel Arbesman · 31 Aug 2012 · 284pp · 79,265 words
by Plantbased Pixie · 7 Mar 2019 · 299pp · 81,377 words
by Eric J. Johnson · 12 Oct 2021 · 362pp · 103,087 words
by Tom Chivers · 6 May 2024 · 283pp · 102,484 words
by Ben Goldacre · 1 Jan 2012 · 402pp · 129,876 words
by Trisha Greenhalgh · 18 Nov 2010 · 321pp · 97,661 words
by Edzard Ernst and Simon Singh · 17 Aug 2008 · 357pp · 110,072 words
by Richard Kluger · 1 Jan 1996 · 1,157pp · 379,558 words
by John Abramson · 20 Sep 2004 · 436pp · 123,488 words
by Charles Murray · 14 Jun 2021 · 147pp · 42,682 words
by Robert M. Sapolsky · 1 May 2017 · 1,261pp · 294,715 words
by Matt Parker · 7 Mar 2019
by William Easterly · 1 Mar 2006
by Matthew Syed · 3 Nov 2015 · 410pp · 114,005 words
by Andrew Leigh · 14 Sep 2018 · 340pp · 94,464 words
by Rod Hill and Anthony Myatt · 15 Mar 2010
by David Spiegelhalter · 2 Sep 2019 · 404pp · 92,713 words
by New Scientist and Helen Thomson · 7 Jan 2021 · 442pp · 85,640 words
by Alan Rusbridger · 26 Nov 2020 · 371pp · 109,320 words
by Brian Klaas · 23 Jan 2024 · 250pp · 96,870 words
by Doug Henwood · 30 Aug 1998 · 586pp · 159,901 words
by Paul Bloom · 281pp · 79,464 words
by Dean Baker and Jared Bernstein · 14 Nov 2013 · 128pp · 35,958 words
by Ronald Purser · 8 Jul 2019 · 242pp · 67,233 words
by Dr. Julie Smith · 11 Jan 2022 · 481pp · 72,071 words
by George A. Akerlof and Robert J. Shiller · 21 Sep 2015 · 274pp · 93,758 words
by John Logie · 29 Dec 2006 · 173pp · 14,313 words
by Johann Hari · 1 Jan 2018 · 428pp · 126,013 words
by Cordelia Fine · 13 Jan 2017 · 312pp · 83,998 words
by Matt Morgan · 29 May 2019 · 218pp · 70,323 words
by Daniel Lieberman · 2 Sep 2020 · 687pp · 165,457 words
by Peter Walker · 3 Apr 2017 · 231pp · 69,673 words
by Julie Holland · 22 Sep 2010 · 694pp · 197,804 words
by Jonathon Sullivan and Andy Baker · 2 Dec 2016 · 742pp · 166,595 words
by John Brisson · 12 Apr 2014
by Nicco Mele · 14 Apr 2013 · 270pp · 79,992 words
by Tom Chatfield · 13 Dec 2011 · 266pp · 67,272 words
by David Perlmutter and Kristin Loberg · 17 Sep 2013