publication bias


53 results

Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth
by Stuart Ritchie
Published 20 Jul 2020

If the medical literature gives doctors an inflated view of how much benefit a drug provides (as indeed appears to have been the case for antidepressants, which do seem to work, but not with as strong an effect as initially believed), their clinical reasoning will be knocked off track.32 If you hadn’t heard of publication bias before now, it would be perfectly understandable: it is one of science’s more embarrassing secrets. But a 2014 survey of reviews in top medical journals found that 31 per cent of meta-analyses didn’t even check for it. (Once it was properly checked for, 19 per cent of those meta-analyses indicated that publication bias was indeed present.)33 A later review of cancer-research reviews was even worse: 72 per cent didn’t include publication bias checks.34 It’s often hard to know exactly what to do when you find hints of publication bias in your meta-analytic dataset – should you revise the estimate of the average effect downwards?

– but it’s doubtful that the proper answer is to ignore the issue entirely. The trouble with the archaeological approach to publication bias is that it relies on conjecture to fill in the gaps in the funnel plot – those places where we would expect the small studies with small effects to appear. Funnel plots can have weird shapes for reasons other than publication bias, especially if there are a lot of differences between the assorted studies that go into the meta-analysis.36 There are many cases where publication bias is more subtle, and thus harder to discern, than in those described above. Are there better ways to check for this kind of bias?
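
Ritchie's question has concrete answers. A minimal sketch of one widely used check, Egger's regression test (regress each study's standardized effect on its precision; an intercept far from zero flags small-study asymmetry), using invented effect sizes and standard errors rather than any real meta-analysis:

```python
import numpy as np
from scipy import stats

# Invented per-study effect sizes and standard errors (larger SE = smaller study).
# Note every study here is "just significant" (z near 2) -- a classic
# publication-bias signature.
effects = np.array([0.42, 0.35, 0.30, 0.28, 0.15, 0.12, 0.10])
ses = np.array([0.20, 0.18, 0.15, 0.12, 0.08, 0.06, 0.05])

z = effects / ses        # standardized effects
precision = 1.0 / ses    # inverse standard errors

# Egger's test: with no small-study asymmetry the intercept should sit near zero.
res = stats.linregress(precision, z)
t_stat = res.intercept / res.intercept_stderr
p_val = 2 * stats.t.sf(abs(t_stat), len(z) - 2)
print(f"Egger intercept = {res.intercept:.2f}, p = {p_val:.3f}")
```

A large intercept is a hint, not proof: as the text notes, funnel plots (and the asymmetry tests built on them) can misfire when the pooled studies differ from one another in important ways.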

There’s a whole set of techniques to adjust the effect size in your meta-analysis when you discover that there’s publication bias. Since these are guesswork (about how much you should reduce the effect size) stacked on guesswork (about how much publication bias there is), I always feel a bit nervous about using them. For details see e.g. Evan C. Carter et al., ‘Correcting for Bias in Psychology: A Comparison of Meta-Analytic Methods’, Advances in Methods and Practices in Psychological Science 2, no. 2 (June 2019): pp. 115–44; https://doi.org/10.1177/2515245919847196
36. Daniel Cressey, ‘Tool for Detecting Publication Bias Goes under Spotlight’, Nature, 31 March 2017; https://doi.org/10.1038/nature.2017.21728; Richard Morey, ‘Asymmetric Funnel Plots without Publication Bias’, BayesFactor, 9 Jan. 2016; https://bayesfactor.blogspot.com/2016/01/asymmetric-funnel-plots-without.html
37.

pages: 172 words: 51,837

How to Read Numbers: A Guide to Statistics in the News (And Knowing When to Trust Them)
by Tom Chivers and David Chivers
Published 18 Mar 2021

Simes noted that published cancer studies which were registered in advance (registering studies in advance means they can’t so easily be quietly filed away if they didn’t find anything: see box for more details) were much less likely to return positive results than studies which weren’t, suggesting that a lot of the unregistered studies were not being published.11 A group reviewing the effectiveness of antidepressants found that thirteen out of fifty-five studies were simply never published; when the data from those studies was added back in, the apparent effectiveness of the antidepressants fell by a quarter.12 You do not need to read or understand this box, but if you would like to know about funnel plots and checking for publication bias, go ahead. There’s a clever way of checking whether there is publication bias in a field, known as a funnel plot. A funnel plot plots the results of all the studies on a topic, with smaller, weaker studies towards the bottom of the chart and larger, better studies towards the top. If there’s no publication bias, then the studies should appear in a rough triangle shape: the smaller, less statistically powerful studies are widely spread out around the bottom (because you get more random error in a smaller study); the larger, more powerful studies are clustered more narrowly at the top.

Similarly, it could be that just by chance a load of smaller, weaker studies happened to find bigger-than-average results, and none found any smaller-than-average ones. Or it could be that those studies did find those results and then – through the magic of publication bias – were never published, leaving a suspicious blank space where they ought to have been. There are other reasons that a funnel plot could look like this, but it’s a hint that publication bias is a problem. This isn’t the only way of checking for publication bias – you can also simply write to researchers and ask them for any unpublished studies they might have performed, and then see whether the unpublished ones tend to return different results from the published ones.
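
The file-drawer mechanism the authors describe is easy to simulate. A toy sketch, assuming a true effect of exactly zero and a literature that publishes every significant-positive study but only some of the rest (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n_studies = 200
n_per_study = rng.integers(20, 500, size=n_studies)  # a mix of small and large studies

se = 1 / np.sqrt(n_per_study)     # smaller studies have more random error
estimates = rng.normal(0.0, se)   # true effect is zero in every study
positive = estimates / se > 1.96  # "statistically significant and positive"

# The file drawer: significant-positive studies are always published;
# everything else appears only 30% of the time.
published = positive | (rng.random(n_studies) < 0.3)

print(f"mean effect, all studies:    {estimates.mean():+.3f}")              # near zero
print(f"mean effect, published only: {estimates[published].mean():+.3f}")   # inflated
```

The published-only average comes out above zero even though nothing real is going on, and the missing studies are exactly the small, unimpressive ones from the bottom corner of the funnel.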

(It’s worth noting that Bem later did do a meta-analysis, which included Ritchie et al.’s paper and several others, and apparently still found that psychic abilities are real.10 With checks for publication bias and everything. So either psychic powers are real, or the experimental and statistical methods that underpin psychological science are capable of churning out meaningless nonsense even in meta-analyses.) This demand for novelty leads to a fundamental problem in science called publication bias. If 100 studies are carried out into whether psychic abilities are real and, say, ninety-two find that they’re not and eight find that they are, that’s a pretty good indicator that they’re not.
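
The 92-versus-8 split is, if anything, what chance alone predicts. A quick binomial check (a sketch; the 5 per cent figure is the conventional false-positive rate, not anything from Bem's data):

```python
from scipy.stats import binomtest

# If psychic powers don't exist and each study has a 5% false-positive rate,
# how surprising are 8 "positive" studies out of 100?
result = binomtest(k=8, n=100, p=0.05, alternative="greater")
print(f"P(8 or more positives by chance) = {result.pvalue:.3f}")  # about 0.13
```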

pages: 428 words: 103,544

The Data Detective: Ten Easy Rules to Make Sense of Statistics
by Tim Harford
Published 2 Feb 2021

Erick Turner et al., “Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy,” New England Journal of Medicine, January 17, 2008, https://www.nejm.org/doi/full/10.1056/NEJMsa065779.
32. Ben Goldacre, “Transparency, Beyond Publication Bias,” talk given at the International Journal of Epidemiology Conference, 2016; available at https://www.badscience.net/2016/10/transparency-beyond-publication-bias-a-video-of-my-super-speedy-talk-at-ije/.
33. Ben Goldacre et al., “COMPare: A Prospective Cohort Study Correcting and Monitoring 58 Misreported Trials in Real Time,” Trials 20, no. 118 (2019), https://doi.org/10.1186/s13063-019-3173-2.
34. Ben Goldacre, “Transparency, Beyond Publication Bias.”
35. Amy Sippett, “Does the Backfire Effect Exist?,” Full Fact (blog), March 20, 2019, https://fullfact.org/blog/2019/mar/does-backfire-effect-exist/; Brendan Nyhan, “Read this!

And the majority who did not might unwittingly commit subtler versions of the same statistical sins. The standard statistical methods are designed to exclude most chance results.19 But a combination of publication bias and loose research practices means we can expect that mixed in with the real discoveries will be a large number of statistical accidents. * * * — Darrell Huff’s How to Lie with Statistics describes how publication bias can be used as a weapon by an amoral corporation more interested in money than truth. With his trademark cynicism, he mentions that a toothpaste maker can truthfully advertise that the toothpaste is wonderfully effective simply by running experiments, putting all unwelcome results “well out of sight somewhere” and waiting until a positive result shows up.20 That is certainly a risk—not only in advertising but also in the clinical trials that underpin potentially lucrative pharmaceutical treatments.
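
Huff's trick works because the false-positive rate compounds across repeated experiments. A one-line sketch of the arithmetic (the 5 per cent per-experiment rate is the usual significance threshold, not Huff's number):

```python
# Each null experiment has a 5% chance of a false positive, so the chance of
# at least one "publishable" toothpaste result grows quickly with persistence.
for k in (1, 5, 14, 30):
    print(f"{k:2d} experiments -> P(at least one positive) = {1 - 0.95 ** k:.2f}")
# By 14 experiments the odds of a spurious "win" are already better than even.
```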

But might accidental publication bias be an even bigger risk than weaponized publication bias? In 2005, John Ioannidis caused a minor sensation with an article titled “Why Most Published Research Findings Are False.” Ioannidis is a “meta-researcher”—someone who researches the nature of research itself.* He reckoned that the cumulative effect of various apparently minor biases might mean that false results could easily outnumber the genuine ones.

pages: 402 words: 129,876

Bad Pharma: How Medicine Is Broken, and How We Can Fix It
by Ben Goldacre
Published 1 Jan 2012

It’s not ideal to lump every study of this type together in one giant spreadsheet, to produce a summary figure on publication bias, because they are all very different, in different fields, with different methods. This is a concern in many meta-analyses (though it shouldn’t be overstated: if there are lots of trials comparing one treatment against placebo, say, and they’re all using the same outcome measurement, then you might be fine just lumping them all in together). But you can reasonably put some of these studies together in groups. The most current systematic review on publication bias, from 2010, from which the examples above are taken, draws together the evidence from various fields.29 Twelve comparable studies follow up conference presentations, and taken together they find that a study with a significant finding is 1.62 times more likely to be published.
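
For illustration, pooling comparable studies like this is typically done by inverse-variance weighting on the log scale. A minimal fixed-effect sketch with invented ratios, not the review's actual data:

```python
import numpy as np

# Invented per-study publication ratios and standard errors of their logs.
log_ratios = np.log([1.4, 1.9, 1.5, 1.7, 1.6])
se = np.array([0.25, 0.30, 0.20, 0.35, 0.15])

weights = 1 / se**2  # precise studies count for more
pooled_log = np.sum(weights * log_ratios) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))

low, high = np.exp(pooled_log - 1.96 * pooled_se), np.exp(pooled_log + 1.96 * pooled_se)
print(f"pooled ratio = {np.exp(pooled_log):.2f} (95% CI {low:.2f} to {high:.2f})")
```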

In a moment we will see more clear cases of drug companies withholding data – in stories where we can identify individuals – sometimes with the assistance of regulators. When we get to these, I hope your rage might swell. But first, it’s worth taking a moment to recognise that publication bias occurs outside commercial drug development, and in completely unrelated fields of academia, where people are motivated only by reputation, and their own personal interests. In many respects, after all, publication bias is a very human process. If you’ve done a study and it didn’t have an exciting, positive result, then you might wrongly conclude that your experiment isn’t very interesting to other researchers.

Health Technol Assess. 2010 Feb;14(8):iii, ix–xi, 1–193.
23 Dickersin K. How important is publication bias? A synthesis of available data. AIDS Educ Prev 1997;9(1 SA):15–21.
24 Ioannidis J. Effect of the statistical significance of results on the time to completion and publication of randomized efficacy trials. JAMA 1998;279:281–6.
25 Bardy AH. Bias in reporting clinical trials. Brit J Clin Pharmaco 1998;46:147–50.
26 Dwan K, Altman DG, Arnaiz JA, Bloom J, Chan AW, Cronin E, et al. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS ONE 2008;3(8):e3081.
27 Decullier E, Lhéritier V, Chapuis F.

pages: 322 words: 107,576

Bad Science
by Ben Goldacre
Published 1 Jan 2008

The smaller, more rubbish negative trials seem to be missing, because they were ignored—nobody had anything to lose by letting these tiny, unimpressive trials sit in their bottom drawer—and so only the positive ones were published. Not only has publication bias been demonstrated in many fields of medicine, but a paper has even found evidence of publication bias in studies of publication bias. Here is the funnel plot for that paper. This is what passes for humour in the world of evidence-based medicine. The most heinous recent case of publication bias has been in the area of SSRI antidepressant drugs, as has been shown in various papers. A group of academics published a paper in the New England Journal of Medicine at the beginning of 2008 which listed all the trials on SSRIs which had ever been formally registered with the FDA, and examined the same trials in the academic literature.

They’re not where you get your news from. How can we explain, then, the apparent fact that industry funded trials are so often so glowing? How can all the drugs simultaneously be better than all of the others? The crucial kludge may happen after the trial is finished.

Publication bias and suppressing negative results

‘Publication bias’ is a very interesting and very human phenomenon. For a number of reasons, positive trials are more likely to get published than negative ones. It’s easy enough to understand, if you put yourself in the shoes of the researcher. Firstly, when you get a negative result, it feels as if it’s all been a bit of a waste of time.

If you aim too high and get a few rejections, it could be years until your paper comes out, even if you are being diligent: that’s years of people not knowing about your study. Publication bias is common, and in some fields it is more rife than in others. In 1995, only 1 per cent of all articles published in alternative medicine journals gave a negative result. The most recent figure is 5 per cent negative. This is very, very low, although to be fair, it could be worse. A review in 1998 looked at the entire canon of Chinese medical research, and found that not one single negative trial had ever been published. Not one. You can see why I use CAM as a simple teaching tool for evidence-based medicine. Generally the influence of publication bias is more subtle, and you can get a hint that publication bias exists in a field by doing something very clever called a funnel plot.

pages: 467 words: 116,094

I Think You'll Find It's a Bit More Complicated Than That
by Ben Goldacre
Published 22 Oct 2014

: First, Magnetise Your Wine What Is Science: http://www.badscience.net/2005/12/what-is-science-first-magnetise-your-wine/
BAD ACADEMIA
What If Academics Were as Dumb as Quacks with Statistics What if Academics: http://www.badscience.net/2011/10/what-if-academics-were-as-dumb-as-quacks-with-statistics/
publish a mighty torpedo: http://www.nature.com/neuro/journal/v14/n9/full/nn.2886.html
Brain-Imaging Studies Report More Positive Findings Than Their Numbers Can Support. This Is Fishy Brain-Imaging Studies: http://www.badscience.net/2011/08/brain-imaging-studies-report-more-positive-findings-than-their-numbers-can-support-this-is-fishy/
publication bias: http://www.badscience.net/category/publication-bias/
took a different approach: http://archpsyc.ama-assn.org/cgi/content/abstract/archgenpsychiatry.2011.28
‘None of Your Damn Business’ None of Your: http://www.badscience.net/2011/01/none-of-your-damn-business/
2004 published a study: http://ats.ctsnetjournals.org/cgi/content/abstract/annts;78/4/1433
it was retracted: http://retractionwatch.wordpress.com/2011/01/04/thoracic-surgery-journal-retracts-hypertension-study-marred-by-troubled-data/
Dr L.

But how reliable are the studies? One way of critiquing a piece of research is to read the academic paper itself, in detail, looking for flaws. But that might not be enough, if some sources of bias might exist outside the paper, in the wider system of science. By now you’ll be familiar with publication bias: the phenomenon whereby studies with boring negative results are less likely to get written up, and less likely to get published. Normally you can estimate this using a tool such as, say, a funnel plot. The principle behind these is simple: big, expensive landmark studies are harder to brush under the carpet, but small studies can disappear more easily.

The answer was stark: even being generous, there were twice as many positive findings as you could realistically have expected from the amount of data reported on. What could explain this? Inadequate blinding is an issue: a fair amount of judgement goes into measuring the size of a brain area on a scan, so wishful nudges can creep in. And boring old publication bias is another: maybe whole negative papers aren’t getting published. But a final, more interesting explanation is also possible. In these kinds of studies, it’s possible that many brain areas are measured to see if they’re bigger or smaller, and maybe then only the positive findings get reported within each study.
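
That "more positive findings than the numbers can support" argument can be made quantitative: sum each study's statistical power to get the number of positive results you would expect even if the effect were real, then ask how improbable the observed count is. A simplified sketch with invented powers and counts:

```python
from scipy.stats import binomtest

# Eight studies with modest estimated power, yet seven report a positive finding.
powers = [0.35, 0.40, 0.30, 0.45, 0.25, 0.38, 0.42, 0.33]
observed_positives = 7

expected = sum(powers)             # ~2.9 positives expected if the effect is real
avg_power = expected / len(powers)
test = binomtest(observed_positives, n=len(powers), p=avg_power,
                 alternative="greater")
print(f"expected ~{expected:.1f} positives, observed {observed_positives}, "
      f"p = {test.pvalue:.4f}")    # an excess of significance
```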

Statistics in a Nutshell
by Sarah Boslaugh
Published 10 Nov 2012

A funnel plot with the general shape shown in Figure 20-1 suggests that publication bias is not a large concern in this particular area of research. A funnel plot that looks more like Figure 20-2 does suggest publication bias; about half of the funnel is missing because few studies have been published with a neutral or negative result. The plot alone does not prove publication bias (several other possibilities are discussed in the Cochrane Collaboration document listed in Appendix C), but it does suggest it as a possibility.

Figure 20-1. A funnel plot suggesting little to no publication bias
Figure 20-2. A funnel plot suggesting publication bias

Issues in Research Design

Generally, the design of an investigation of a question of interest needs to follow the guidelines presented in Chapter 18 if meaningful inferences are eventually to be made.

Tests should be selected based on known or expected characteristics of the data. Ideally, every result should be reported, even if the study did not find statistical significance. Failure to do so leads to publication bias, in which only significant results are published, creating a misleading picture of our state of knowledge. Don’t be afraid to report deviations, nonsignificant test results, and failure to reject null hypotheses—not every experiment can or should result in a major scientific result!

Publication Bias and the Funnel Plot

It’s easy to fall into the naïve belief that the published research literature presents a fair picture of our collective knowledge in any research field.

For instance, research published in English might be more readily available than equally good or better research published in other languages and thus more likely to be cited repeatedly by other articles. (The number of citations is sometimes used as a measure of an article’s importance or influence.) One way to evaluate publication bias on a topic is to create a funnel plot, a graph in which each data point represents a published study, with the log odds ratio of the study on the horizontal axis and the standard error of the study on the vertical axis. If there is no publication bias, we expect to see a pattern similar to an inverted funnel, as in Figure 20-1. Note that in studies with a larger standard error (less precise studies), there is a greater variability of results (a wider range of values for the log odds ratio), whereas for more precise studies, the log odds ratio clusters more closely around a single value.
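
A sketch of how to draw the plot as described, with invented study data simulated under no publication bias (so the full inverted funnel appears):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
se = rng.uniform(0.05, 0.5, size=60)    # per-study standard errors
log_or = rng.normal(0.3, se)            # log odds ratios; spread widens with SE

fig, ax = plt.subplots()
ax.scatter(log_or, se, alpha=0.6)
ax.axvline(0.3, linestyle="--", color="grey")  # the (assumed) underlying value
ax.invert_yaxis()                              # precise studies at the top
ax.set_xlabel("Log odds ratio")
ax.set_ylabel("Standard error")
ax.set_title("Funnel plot, no publication bias simulated")
plt.show()
```

With publication bias, one lower corner of this cloud would be thinned out or missing, as in the text's Figure 20-2.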

Calling Bullshit: The Art of Scepticism in a Data-Driven World
by Jevin D. West and Carl T. Bergstrom
Published 3 Aug 2020

In the case of the Higgs boson, there were already good reasons to expect that the Higgs boson would exist, and its existence was subsequently confirmed. But this is not always the case.*6 The important thing to remember is that a very unlikely hypothesis remains unlikely even after someone obtains experimental results with a very low p-value.

P-HACKING AND PUBLICATION BIAS

Purely as a matter of convention, we often use a p-value of 0.05 as a cutoff for saying that a result is statistically significant.*7 In other words, a result is statistically significant when p < 0.05, i.e., when it would have less than 5 percent probability of arising due to chance alone.
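
The 5 per cent convention is easy to verify by simulation: run many experiments in which nothing is going on and count how often p slips under 0.05. A sketch:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n_experiments = 10_000
hits = 0
for _ in range(n_experiments):
    a = rng.normal(0, 1, 30)  # "treatment" group, no true effect
    b = rng.normal(0, 1, 30)  # control group
    if ttest_ind(a, b).pvalue < 0.05:
        hits += 1
print(f"false-positive rate: {hits / n_experiments:.3f}")  # close to 0.05
```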

Thus among US Caucasians, roughly 5 in 6 of those who test positive for Helicobacter are actually carrying it. With that out of the way, let’s come back to Ioannidis. In his paper “Why Most Published Research Findings Are False,” Ioannidis draws the analogy between scientific studies and the interpretation of medical tests. He assumes that because of publication bias, most negative findings go unpublished and the literature comprises mostly positive results. If scientists are testing improbable hypotheses, the majority of positive results will be false positives, just as the majority of tests for Lyme disease, absent other risk factors, will be false positives.

This moves us toward the domain of the Helicobacter pylori example, where the majority of positive results are true positives. Ioannidis is overly pessimistic because he makes unrealistic assumptions about the kinds of hypotheses that researchers decide to test. Of course, this is all theoretical speculation. If we want to actually measure how big of a problem publication bias is, we need to know (1) what fraction of tested hypotheses are actually correct, and (2) what fraction of negative results get published. If both fractions are high, we’ve got little to worry about. If both are very low, we’ve got problems. We’ve argued that scientists will tend to test hypotheses with a decent chance of being correct.
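
Ioannidis's analogy boils down to base-rate arithmetic: the chance that a "significant" finding is true depends on the prior probability of the hypothesis, the power of the study, and the false-positive rate. A worked sketch with illustrative parameter values:

```python
def ppv(prior, power=0.8, alpha=0.05):
    """Positive predictive value of a statistically significant result."""
    true_pos = prior * power          # true hypotheses correctly detected
    false_pos = (1 - prior) * alpha   # false hypotheses passing by luck
    return true_pos / (true_pos + false_pos)

for prior in (0.01, 0.10, 0.50):
    print(f"prior = {prior:.2f} -> PPV = {ppv(prior):.2f}")
# Long-shot hypotheses (prior 0.01) yield mostly false positives; plausible
# ones (prior 0.50) yield mostly true positives, as in the Helicobacter example.
```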

pages: 340 words: 94,464

Randomistas: How Radical Researchers Changed Our World
by Andrew Leigh
Published 14 Sep 2018

I confess that I’m one of those who is guilty of popularising it without reviewing the follow-up studies: Andrew Leigh, The Economics of Just About Everything, Sydney: Allen & Unwin, 2014, p. 10.
44 Benjamin Scheibehenne, Rainer Greifeneder & Peter M. Todd, ‘Can there ever be too many options? A meta-analytic review of choice overload’, Journal of Consumer Research, vol. 37, no. 3, 2010, pp. 409–25.
45 Alan Gerber & Neil Malhotra, ‘Publication bias in empirical sociological research’, Sociological Methods & Research, vol. 37, no. 1, 2008, pp. 3–30; Alan Gerber & Neil Malhotra, ‘Do statistical reporting standards affect what is published? Publication bias in two leading political science journals’, Quarterly Journal of Political Science, vol. 3, no. 3, 2008, pp. 313–26; E.J. Masicampo & Daniel R. Lalande, ‘A peculiar prevalence of p values just below .05’, Quarterly Journal of Experimental Psychology, vol. 65, no. 11, 2012, pp. 2271–9; Kewei Hou, Chen Xue & Lu Zhang, ‘Replicating anomalies’, NBER Working Paper 23394, Cambridge, MA: National Bureau of Economic Research, 2017.
46 Alexander A.

If researchers conceal findings that run counter to conventional wisdom, then the rest of us may form a mistaken impression of the results of available randomised trials. Like a golfer who takes a mulligan on every hole, discarded trials can leave us in a situation where the scorecard doesn’t reflect reality. One way of countering ‘publication bias’ is to require that studies be registered before they start – by lodging a statement in advance in which the researchers specify the questions they are seeking to answer. This makes it more likely that studies are reported after they finish. In medicine, there are fifteen major clinical trial registers around the world, including ones operated by Australia and New Zealand, China, the European Union, India, Japan, the Netherlands and Thailand.

[Index from the book’s back matter, flattened during extraction; the entries matching this search are ‘publication bias’ 199 and, under ‘randomised trials’, ‘publication bias’ 199.]

pages: 442 words: 94,734

The Art of Statistics: Learning From Data
by David Spiegelhalter
Published 14 Oct 2019

There is nothing in the paper that will reveal the total implausibility of this result – external knowledge is required.7

Publication Bias

Scientists examine huge numbers of published articles when they are conducting systematic reviews – trying to bring together the literature and synthesize the current state of knowledge. Such an enterprise becomes hopelessly flawed if what is published is a biased subset of the work that has been carried out, say because negative results have not been submitted for publication, or questionable research practices have led to an unjustified excess of significant results. Statistical techniques have been developed for identifying such publication bias. Suppose we have a set of studies that all set out to test the same null hypothesis that an intervention has no effect.
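
One such technique looks at the distribution of the significant p-values themselves (the P-curve mentioned below): under a true null they fall evenly between 0 and 0.05, while a real effect piles them up near zero. A simulation sketch:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)

def significant_pvalues(effect, n_studies=5000, n=30):
    """Collect the p-values below 0.05 from repeated two-group experiments."""
    pvals = []
    for _ in range(n_studies):
        a = rng.normal(effect, 1, n)
        b = rng.normal(0, 1, n)
        p = ttest_ind(a, b).pvalue
        if p < 0.05:
            pvals.append(p)
    return np.array(pvals)

for label, effect in [("true null", 0.0), ("real effect", 0.5)]:
    p = significant_pvalues(effect)
    print(f"{label}: share of significant p-values below 0.025 = "
          f"{np.mean(p < 0.025):.2f}")  # ~0.5 under the null, much higher if real
```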

Then this is just the pattern that would occur were the null hypothesis true, and the only results being reported as significant were those 1 in 20 that tipped over P < 0.05 by luck. Simonsohn and others looked at the published psychological literature which supported the popular idea that giving people an excessive amount of choice led to negative consequences; an analysis of the P-curve suggested there was substantial publication bias and that there was no good evidence for this effect.8

Assessing a Statistical Claim or Story

Whether we are journalists, fact-checkers, academics, professionals in government or business or NGOs, or simply members of the public, we are regularly told claims that are based on statistical evidence.

Simonsohn, ‘False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant’, Psychological Science 22:11 (November 2011), 1359–66.
7. A. Gelman and D. Weakliem, ‘Of Beauty, Sex and Power’, American Scientist 97:4 (2009), 310–16.
8. U. Simonsohn, L. D. Nelson and J. P. Simmons, ‘P-Curve and Effect Size: Correcting for Publication Bias Using Only Significant Results’, Perspectives on Psychological Science 9:6 (November 2014), 666–81.
9. For more on intelligent openness, see Royal Society, Science as an Open Enterprise (2012). Onora O’Neill’s perspectives on trustworthiness are brilliantly explained in her TEDx talk ‘What We Don’t Understand About Trust’ (June 2013).
10.

pages: 741 words: 199,502

Human Diversity: The Biology of Gender, Race, and Class
by Charles Murray
Published 28 Jan 2020

There are several indications that such decisions have been a problem with stereotype threat research:

Replications often fail to confirm the earlier results.[36]
The evidence for stereotype threat has dissipated over time.[37]
Publication bias (failure to report negative results) appears to have been a reality.[38]

In 2019, scholars at the University of Minnesota dealt with these and other issues in the most comprehensive meta-analysis of stereotype threat to date, focusing on the high-stakes test settings in which stereotype threat should theoretically cause the most problems. For the studies relevant to high-stakes settings, the effect size of stereotype threat was –.14 (lowering test scores), a small effect that was further reduced to –.09 after correcting for publication bias. The authors summarized their findings as follows: Based on the results of the focal analysis, operational and motivational subsets, and publication bias analyses, we conclude that the burden of proof shifts back to those that claim that stereotype threat exerts a substantial effect on standardized test takers.

Our best estimate of stereotype threat effects within groups in settings with conditions most similar to operational testing is small and inflated by publication bias.39 Given this assessment from the largest and most rigorous meta-analysis of a quarter century of attempts to demonstrate stereotype threat, it seems unlikely that a significant role for stereotype threat exists.

Because so much of the controversy involves abstruse psychometric issues, I take his conclusion seriously: To conclude, we estimated a small average effect of stereotype threat on the MSSS [math, science, and spatial skills] test-performance of school-aged girls [d = –0.22]; however, the studies show large variation in outcomes, and it is likely that the effect is inflated due to publication bias. This finding leads us to conclude that we should be cautious when interpreting the effects of stereotype threat on children and adolescents in the STEM realm. To be more explicit, based on the small average effect size in our meta-analysis, which is most likely inflated due to publication bias, we would not feel confident to proclaim that stereotype threat manipulations will harm mathematical performance of girls in a systematic way or lead women to stay clear from occupations in the STEM domain.

pages: 147 words: 42,682

Facing Reality: Two Truths About Race in America
by Charles Murray
Published 14 Jun 2021

The former, coauthored by one of the world’s most highly regarded quantitative social science methodologists (Jelte Wicherts), concluded that “based on the small average effect size in our meta-analysis, which is most likely inflated due to publication bias, we would not feel confident to proclaim that stereotype manipulations will harm mathematical performance of girls in a systematic way.” (p. 41). The latter article, written by a team of psychologists at the University of Minnesota, concluded, “Based on the result of the focal analysis, operational and motivational subsets, and publication bias analyses, we conclude that the burden of proof shifts back to those that claim that stereotype threat exerts a substantial effect on standardized test takers.”

It was seized upon so uncritically that by 2003, just eight years after its debut, it was already covered in two-thirds of introductory psychology textbooks. Since 2015, its reputation has been battered by a series of failures to replicate the effects seen in early studies and by evidence of “publication bias” – the tendency of scholars to fail to publish negative results. Two of the most rigorous critiques leave little room for the advocates of stereotype threat to make their case: Paulette C. Flore and Jelte M. Wicherts, “Does Stereotype Threat Influence Performance of Girls in Stereotyped Domains?

pages: 284 words: 79,265

The Half-Life of Facts: Why Everything We Know Has an Expiration Date
by Samuel Arbesman
Published 31 Aug 2012

Increasingly precise measurement often allows us to be more accurate in what we are looking for. And these improvements frequently dial the effects downward. But the decline effect is not only due to measurement. One other factor involves the dissemination of measurements, and it is known as publication bias. Publication bias is the idea that the collective scientific community and the community at large only know what has been published. If there is any sort of systematic bias in what is being published (and therefore publicly measured), then we might only be seeing some of the picture. The clearest example of this is in the world of negative results.

[Index from the book’s back matter, flattened during extraction and partly duplicated; the entry matching this search is ‘publication bias, 156’, alongside related entries for the decline effect (155–56, 157), p-values (152–54, 156, 158) and replication (161–62).]

pages: 404 words: 92,713

The Art of Statistics: How to Learn From Data
by David Spiegelhalter
Published 2 Sep 2019

There is nothing in the paper that will reveal the total implausibility of this result—external knowledge is required.7

Publication Bias

Scientists examine huge numbers of published articles when they are conducting systematic reviews—trying to bring together the literature and synthesize the current state of knowledge. Such an enterprise becomes hopelessly flawed if what is published is a biased subset of the work that has been carried out, say because negative results have not been submitted for publication, or questionable research practices have led to an unjustified excess of significant results. Statistical techniques have been developed for identifying such publication bias. Suppose we have a set of studies that all set out to test the same null hypothesis that an intervention has no effect.

Then this is just the pattern that would occur were the null hypothesis true, and the only results being reported as significant were those 1 in 20 that tipped over P < 0.05 by luck. Simonsohn and others looked at the published psychological literature which supported the popular idea that giving people an excessive amount of choice led to negative consequences; an analysis of the P-curve suggested there was substantial publication bias and that there was no good evidence for this effect.8

Assessing a Statistical Claim or Story

Whether we are journalists, fact-checkers, academics, professionals in government or business or NGOs, or simply members of the public, we are regularly told claims that are based on statistical evidence.

Simonsohn, ‘False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant’, Psychological Science 22:11 (November 2011), 1359–66.
7. A. Gelman and D. Weakliem, ‘Of Beauty, Sex and Power’, American Scientist 97:4 (2009), 310–16.
8. U. Simonsohn, L. D. Nelson and J. P. Simmons, ‘P-Curve and Effect Size: Correcting for Publication Bias Using Only Significant Results’, Perspectives on Psychological Science 9:6 (November 2014), 666–81.
9. For more on intelligent openness, see Royal Society, Science as an Open Enterprise (2012). Onora O’Neill’s perspectives on trustworthiness are brilliantly explained in her TEDx talk ‘What We Don’t Understand About Trust’ (June 2013).
10.

pages: 428 words: 126,013

Lost Connections: Uncovering the Real Causes of Depression – and the Unexpected Solutions
by Johann Hari
Published 1 Jan 2018

Prevention & Treatment 5, no. 1 (July 2002): No Pagination Specified Article 22, http://dx.doi.org/10.1037/1522-3736.5.1.522i; Ben Whalley et al., “Consistency of the placebo effect,” Journal of Psychosomatic Research 64, no. 5 (May 2008): 537–541; Kirsch et al., “National Depressive and Manic-Depressive Association Consensus Statement on the Use of Placebo in Clinical Trials of Mood Disorders,” Arch Gen Psychiatry 59, no. 3 (2002): 262–270, doi:10.1001/archpsyc.59.3.262; Kirsch, “St John’s wort, conventional medication, and placebo: an egregious double standard,” Complementary Therapies in Medicine 11, no. 3 (Sept. 2003): 193–195; Kirsch, “Antidepressants Versus Placebos: Meaningful Advantages Are Lacking,” Psychiatric Times, September 1, 2001, 6, Academic OneFile, as accessed Nov. 5, 2016; Kirsch, “Reducing noise and hearing placebo more clearly,” Prevention & Treatment 1, no. 2 (June 1998): No Pagination Specified Article 7r, http://dx.doi.org/10.1037/1522-3736.1.1.17r; Kirsch et al., “Calculations are correct: reconsidering Fountoulakis & Möller’s re-analysis of the Kirsch data,” International Journal of Neuropsychopharmacology 15, no. 8 (August 2012): 1193–1198, doi: https://doi.org/10.1017/S1461145711001878; Erik Turner et al., “Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy,” N Engl J Med 358 (2008): 252–260, doi: 10.1056/NEJMsa065779.
This is called “publication bias.” Evans, Emperor’s New Drugs, 25. My friend Dr. Ben Goldacre has done outstanding work on publication bias. See http://www.badscience.net/category/publication-bias/ for some background.
Intrigued, Irving joined Evans, Emperor’s New Drugs, 26–7.
Those twenty-seven patients Ibid., 41.
“dirty little secret” Ibid., 38.
In the end, in court, Ibid., 40; http://web.law.columbia.edu/sites/default/files/microsites/career-services/Driven%20to%20Settle.pdf; http://www.independent.co.uk/news/business/news/drug-firm-settles-seroxat-research-claim-557943.html; http://news.bbc.co.uk/1/hi/business/3631448.stm; http://www.pharmatimes.com/news/gsk_to_pay_$14m_to_settle_paxil_fraud_claims_995307; http://www.nbcnews.com/id/5120989/ns/business-us_business/t/spitzer-sues-glaxosmithkline-over-paxil/; http://study329.org/; http://science.sciencemag.org/content/304/5677/1576.full?

That’s why the drug companies conduct their scientific studies in secret, and afterward, they only publish the results that make their drugs look good, or that make their rivals’ drugs look worse. They do this for exactly the same reasons that (say) KFC would never release information telling you that fried chicken isn’t good for you. This is called “publication bias.”7 Of all the studies drug companies carry out, 40 percent are never released to the public, and lots more are only released selectively, with any negative findings left on the cutting room floor. So, this e-mail explained to Irving, you have, up to now, been looking only at the parts of the scientific studies that the drug companies want us to see.

[Index from the book’s back matter, flattened during extraction; the entry matching this search is ‘publication bias, in drug testing for antidepressant’.]

pages: 321 words: 97,661

How to Read a Paper: The Basics of Evidence-Based Medicine
by Trisha Greenhalgh
Published 18 Nov 2010

Remember, too, that the results of an RCT may have limited applicability as a result of exclusion criteria (rules about who may not be entered into the study), inclusion bias (selection of trial participants from a group that is unrepresentative of everyone with the condition (see section ‘Whom is the study about?’)), refusal (or inability) of certain patient groups to give consent to be included in the trial, analysis of only pre-defined ‘objective’ endpoints which may exclude important qualitative aspects of the intervention (see Chapter 12) and publication bias (i.e. the selective publication of positive results, often but not always because the organisation that funded the research stands to gain or lose depending on the findings [9] [10]). Furthermore, RCTs can be well or badly managed [2], and, once published, their results are open to distortion by an over-enthusiastic scientific community or by a public eager for a new wonder drug [13].

The authors report a series of artificial dice-rolling experiments in which red, white and green dice, respectively, represented different therapies for acute stroke. Overall, the ‘trials’ showed no significant benefit from the three therapies. However, the simulation of a number of perfectly plausible events in the process of meta-analysis—such as the exclusion of several of the ‘negative’ trials through publication bias (see section ‘Randomised controlled trials’), a subgroup analysis that excluded data on red dice therapy (because, on looking back at the results, red dice appeared to be harmful), and other, essentially arbitrary, exclusions on the grounds of ‘methodological quality’—led to an apparently highly significant benefit of ‘dice therapy’ in acute stroke.
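
In the same spirit as that dice-rolling demonstration, here is a toy simulation (invented numbers, not the authors' data) of how excluding inconvenient 'trials' manufactures a significant pooled effect out of nothing:

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials = 40
effects = rng.normal(0.0, 0.15, n_trials)  # per-trial estimates; true effect is zero
se = np.full(n_trials, 0.15)               # equal standard errors, for simplicity

def pooled_z(eff, se):
    """Inverse-variance pooled estimate, expressed as a z-score."""
    w = 1 / se**2
    return (np.sum(w * eff) / np.sum(w)) / np.sqrt(1 / np.sum(w))

print(f"all trials:       z = {pooled_z(effects, se):+.2f}")  # hovers near zero

keep = effects > -0.05  # "exclude" the most negative trials on assorted grounds
print(f"after exclusions: z = {pooled_z(effects[keep], se[keep]):+.2f}")  # significant
```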

Eysenck's reservations about meta-analysis are borne out in the infamously discredited meta-analysis that demonstrated (wrongly) that there was significant benefit to be had from giving intravenous magnesium to heart attack victims. A subsequent megatrial involving 58 000 patients (ISIS-4) failed to find any benefit whatsoever, and the meta-analysts' misleading conclusions were subsequently explained in terms of publication bias, methodological weaknesses in the smaller trials and clinical heterogeneity [22] [23]. (Incidentally, for more debate on the pros and cons of meta-analysis versus megatrials, see this recent paper [24].) Eysenck's mathematical naiveté is embarrassing (‘if a medical treatment has an effect so recondite and obscure as to require a meta-analysis to establish it, I would not be happy to have it used on me’), which is perhaps why the editors of the second edition of the ‘Systematic reviews’ book dropped his chapter from their collection.

pages: 250 words: 64,011

Everydata: The Misinformation Hidden in the Little Data You Consume Every Day
by John H. Johnson
Published 27 Apr 2016

“P-hacking” (named after p-values) is a term used when researchers “collect or select data or statistical analyses until nonsignificant results become significant,” according to a PLoS Biology article.36 This is similar to cherry picking, as p-hacking researchers simply throw things at the wall until something sticks, metaphorically speaking (although there probably are some scientists who actually throw things at the wall until something sticks…). A fascinating New Yorker article (is there any other kind?) examines publication bias as a possible cause of the “decline effect,” in which the size of a statistically significant effect declines over time. Why? One statistician found that “ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for,” making it perhaps less likely that future studies would be able to replicate these results.37 The Journal of Epidemiology and Community Health published a paper finding no evidence that reduced street lighting at night increased traffic collisions or crime in England and Wales.
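
The “collect data until the result is significant” form of p-hacking is easy to simulate. A minimal sketch (all parameters assumed for illustration): draw data from a true null effect, re-test after every batch, and stop the moment p dips below 0.05. The false-positive rate comes out several times the nominal 5 per cent.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def collect_until_significant(max_n=200, batch=10, alpha=0.05):
    """One classic form of p-hacking: keep adding data from a true null
    effect and re-testing until p < alpha (or the budget runs out)."""
    data = []
    while len(data) < max_n:
        data.extend(rng.standard_normal(batch))      # true mean is zero
        if stats.ttest_1samp(data, 0.0).pvalue < alpha:
            return True
    return False

runs = 1000
hits = sum(collect_until_significant() for _ in range(runs))
print(f"'significant' findings under a true null: {hits / runs:.0%}")
# far above the 5% false-positive rate that alpha is supposed to cap
```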

See also misrepresentation and misinterpretation brain’s hardwiring for, 60–61 challenges in, 54–55 Ioannidis, John, 75 iPhones, 46–48, 58 “Ipse dixit” bias, 94 J Japan earthquake of 2011, 123–125 Jordan, Michael, 53 Journal of Epidemiology and Community Health, 80 Journal of Finance, 139–140 Journal of Safety Research, 20 Journal of the American Medical Informatics Association, 148 Journal of the National Cancer Institute, 69–70 K Katz, David, 22 Keillor, Garrison, 43 L Lake Wobegon effect, 42–43 Landon, Alfred, 132 Law360, 146–148 Lawyer Satisfaction Survey, 146–148 Literary Digest, 132 longevity, 4, 87–92 Los Angeles Times, 17–18 Lotto Stats, 133 Lund, Bob, 10 M magnitude, 77–78, 81 in birth month and health study, 149 map projections, 83–85 margins of error, 38, 68–69 Marie Claire, 34–35 math mistakes, 101–102, 103 mayors/deputy mayors salaries, 35–36 McCarthy, Jenny, 61 McGwire, Mark, 39 meaning, difficulty of extracting from too much data, 4. See also misrepresentation and misinterpretation means, 32–34 definition of, 32 mean trimming, 40 media cherry-picking by, 116 data interpretation by, 75, 81 publication bias and, 80 medians, 32–34 definition of, 32 medical coding errors, 97 Medical News Today, 75 memory of printed vs. online material, 2 Mercator, Gerardus, 83–85 misrepresentation and misinterpretation, 83–103. See also cherry-picking in charts, 87–92 correlation/causation based on, 58–60 data sources and, 99 errors and, 97–99 of food expiration dates, 99–100 in gas tank gauges, 96–97 guessing and, 86 helpful, 96–97 how to be a smart consumer and, 102–103 math mistakes and, 101–102 in the media, 75, 81 “only” and, 95–96 from treating all data equally, 95 trust in expertise and, 93–94 with visuals, 92–94 models, forecasts based on, 125–127 modes, 32–34 definition of, 32 Moore, Michael, 116 Morton Thiokol, 10 Moz.com, 55 multiple comparison problem, 75–76 N National Bureau of Economic Research, 59, 69 National Cancer Institute, 69–70 National Electronic Injury Surveillance System (NEISS), 18 National Foundation for Celiac Awareness, 21 National Weight Control Registry (NWCR), 17 Natural Resources Defense Council, 100 Nest, 100–101 Newman, Mark, 28–29 New York State Office of the Attorney General, 97 New York Times, 66–67 New York Times Magazine, 101 Nielsen, Arthur, Sr., 25 Nike, 53 NPD Group, 21 NWEA Measures of Academic Progress (MAP), 22–23 O Obama, Barack, 23, 27–30 observations, definition of, 13.

J., 58, 76, 135 presidential campaigns/elections averages/aggregates and, 27–30, 44 cherry-picking in, 115–116 forecasting, 132, 137 polls and, 37–38, 68–69, 73 sampling and, 20 terms of office and, 41 Princeton Review of schools, 19 printed material vs. online differences in consumption/interpretation of, 7 willingness to question, 93–94 printed vs. online material memory of, 2 probability, 70–71, 81 coincidence and, 138–139 forecasting and, 131 proxies, 49–50 psychology research, 15–16 publication bias, 80 p-values, 71, 72, 79 Q questions/questioning, 7–8 cherry-picking and, 122 correlation vs. causation, 60 of print vs. online information, 93–94 quote mining, 116 R Radio Television Digital News Association, 36 random chance, multiple comparison problem and, 75–76, 80–81 random samples, 65–68 Rate My Professor, 51–52 Reagan, Ronald, 9 recall of printed vs. online material, 2 Reinhart, Carmen, 97–98 relationships, 5–6.

pages: 128 words: 35,958

Getting Back to Full Employment: A Better Bargain for Working People
by Dean Baker and Jared Bernstein
Published 14 Nov 2013

[23] In fairness to advocates of inflation targeting, there is a wide range of views as to how strictly we should hold to the target as the primary or only goal of monetary policy. [24] There is also the possibility of publication bias. Given the strong belief by many economists that inflation reduces growth, there may be a reluctance to publish articles that find either insignificant results or even a positive relationship. This sort of publication bias was noted in the case of the minimum wage, where the distribution of published results has an otherwise inexplicable break at zero. If we assume that study results are normally distributed, there should be some number of studies that find a significant positive relationship between higher minimum wages and employment even if the true coefficient for an employment variable is zero (Doucouliagos and Stanley 2009).
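
That normal-distribution argument is easy to check numerically. In the sketch below (study counts, standard errors and the suppression rate are all assumed for illustration), a true coefficient of zero still produces significantly positive estimates in roughly 2.5 per cent of studies; shelving those “wrong-sign” results is what creates the break at zero in the published distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1,000 hypothetical minimum-wage studies with a true employment
# coefficient of exactly zero
n = 1000
se = rng.uniform(0.02, 0.10, n)      # each study's standard error
est = rng.normal(0.0, se)            # normally distributed estimates
t = est / se

# by chance alone, about 2.5% of studies land significantly on each side
print("significantly positive:", (t > 1.96).mean())
print("significantly negative:", (t < -1.96).mean())

# if most "wrong-sign" (positive) estimates stay in the file drawer,
# the published distribution drops off abruptly at zero
keep = (est < 0) | (rng.random(n) < 0.2)
published = est[keep]
window = np.abs(published) < 0.02
print("published just below zero:", int((published[window] < 0).sum()))
print("published just above zero:", int((published[window] >= 0).sum()))
```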

Super Thinking: The Big Book of Mental Models
by Gabriel Weinberg and Lauren McCann
Published 17 Jun 2019

In other words, in this set of one hundred studies, the base rate of false positives is likely much larger than 5 percent, and so another large part of the replication crisis can likely be explained as a base rate fallacy. Unfortunately, studies are much, much more likely to be published if they show statistically significant results, which causes publication bias. Studies that fail to find statistically significant results are still scientifically meaningful, but both researchers and publications have a bias against them for a variety of reasons. For example, there are only so many pages in a publication, and given the choice, publications would rather publish studies with significant findings over ones with none.
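
The arithmetic behind that claim is worth spelling out. A back-of-the-envelope sketch (the base rate and power figures are assumptions chosen purely for illustration):

```python
# say 10% of tested hypotheses are actually true, alpha = 0.05, power = 0.8
base_rate, alpha, power = 0.10, 0.05, 0.80

true_positives  = base_rate * power         # real effects detected
false_positives = (1 - base_rate) * alpha   # nulls crossing p < .05

share_false = false_positives / (true_positives + false_positives)
print(f"share of significant results that are false: {share_false:.0%}")  # 36%
# if mostly significant results get published, this figure, not 5%,
# is the error rate of the visible literature
```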

There are advantages to meta-analyses, as combining data from multiple studies can increase the precision and accuracy of estimates, but they also have their drawbacks. For example, it is problematic to combine data across studies where the designs or sample populations vary too much. They also cannot eliminate biases from the original studies themselves. Further, both systematic reviews and meta-analyses can be compromised by publication bias because they can include only results that are publicly available. Whenever we are looking at the validity of a claim, we first look to see whether a thorough systematic review has been conducted, and if so, we start there. After all, systematic reviews and meta-analyses are commonly used by policy makers in decision making, e.g., in developing medical guidelines.
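
For readers who want the mechanics behind “combining data from multiple studies can increase precision”: a standard approach is the fixed-effect, inverse-variance-weighted average. A minimal sketch with invented numbers:

```python
import numpy as np

# fixed-effect meta-analysis: weight each study's effect by 1 / se^2,
# so that precise studies count for more (effects and SEs are made up)
effects = np.array([0.30, 0.10, 0.25, 0.05])
ses     = np.array([0.15, 0.08, 0.20, 0.06])

w = 1.0 / ses**2
pooled    = (w * effects).sum() / w.sum()
pooled_se = np.sqrt(1.0 / w.sum())

print(f"pooled effect: {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")
# the pooled standard error is smaller than any single study's - the
# precision gain - but if small null studies were never published, the
# pooled estimate silently inherits their absence
```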

Department of, 97 just world hypothesis, 22 Kahneman, Daniel, 9, 30, 90 karoshi, 82 Kauffman Foundation, 122 keeping up with the Joneses, 210–11 key person insurance, 305 King, Martin Luther, Jr., 129, 225 KISS (Keep It Simple, Stupid), 10 knowledge, institutional, 257 knowns: known, 197 unknown, 198, 203 known unknowns, 197–98 Knox, Robert E., 91 Kodak, 302–3, 308–10, 312 Koenigswald, Gustav Heinrich Ralph von, 50 Kohl’s, 15 Kopelman, Josh, 301 Korea, 229, 231, 235, 238 Kristof, Nicholas, 254 Krokodil, 49 Kruger, Justin, 269 Kuhn, Thomas, 24 Kutcher, Ashton, 121 labor market, 283–84 laggards, 116–17 landlords, 178, 179, 182, 188 Laplace, Pierre-Simon, 132 large numbers, law of, 143–44 Latané, Bibb, 259 late majority, 116–17 lateral thinking, 201 law of diminishing returns, 81–83 law of diminishing utility, 81–82 law of inertia, 102–3, 105–8, 110, 112, 113, 119, 120, 129, 290, 296 law of large numbers, 143–44 law of small numbers, 143, 144 Lawson, Jerry, 289 lawsuits, 231 leadership, 248, 255, 260, 265, 271, 275, 276, 278–80 learned helplessness, 22–23 learning, 262, 269, 295 from past events, 271–72 learning curve, 269 Le Chatelier, Henri-Louis, 193 Le Chatelier’s principle, 193–94 left to their own devices, 275 Leibniz, Gottfried, 291 lemons into lemonade, 121 Lernaean Hydra, 51 Levav, Jonathan, 63 lever, 78 leverage, 78–80, 83, 115 high-leverage activities, 79–81, 83, 107, 113 leveraged buyout, 79 leveraging up, 78–79 Levitt, Steven, 44–45 Levitt, Theodore, 296 Lewis, Michael, 289 Lichtenstein, Sarah, 17 lightning, 145 liking, 216–17, 220 Lincoln, Abraham, 97 Lindy effect, 105, 106, 112 line in the sand, 238 LinkedIn, 7 littering, 41, 42 Lloyd, William, 37 loans, 180, 182–83 lobbyists, 216, 306 local optimum, 195–96 lock-in, 305 lock in your gains, 90 long-term negative scenarios, 60 loose versus tight, in organizational culture, 274 Lorenz, Edward, 121 loss, 91 loss aversion, 90–91 loss leader strategy, 236–37 lost at sea, 68 lottery, 85–86, 126, 145 low-context communication, 273–74 low-hanging fruit, 81 loyalists versus mercenaries, 276–77 luck, 128 making your own, 122 luck surface area, 122, 124, 128 Luft, Joseph, 196 LuLaRoe, 217 lung cancer, 133–34, 173 Lyautey, Hubert, 276 Lyft, ix, 288 Madoff, Bernie, 232 magnetic resonance imaging (MRI), 291 magnets, 194 maker’s schedule versus manager’s schedule, 277–78 Making of Economic Society, The (Heilbroner), 49 mammograms, 160–61 management debt, 56 manager’s schedule versus maker’s schedule, 277–78 managing to the person, 255 Manhattan Project, 195 Man in the High Castle, The (Dick), 201 manipulative insincerity, 264 man-month, 279 Mansfield, Peter, 291 manufacturer’s suggested retail price (MSRP), 15 margin of error, 154 markets, 42–43, 46–47, 106 failure in, 47–49 labor, 283–84 market norms versus social norms, 222–24 market power, 283–85, 312 product/market fit, 292–96, 302 secondary, 281–82 winner-take-most, 308 marriage: divorce, 231, 305 same-sex, 117, 118 Maslow, Abraham, 177, 270–71 Maslow’s hammer, xi, 177, 255, 297, 317 Maslow’s hierarchy of needs, 270–71 mathematics, ix–x, 3, 4, 132, 178 Singapore math, 23–24 matrices, 2 × 2, 125–26 consensus-contrarian, 285–86, 290 consequence-conviction, 265–66 Eisenhower Decision Matrix, 72–74, 89, 124, 125 of knowns and unknowns, 197–98 payoff, 212–15, 238 radical candor, 263–64 scatter plot on top of, 126 McCain, John, 241 mean, 146, 149, 151 regression to, 146, 286 standard deviation from, 149, 150–51, 154 variance from, 149 measles, 39, 40 measurable target, 49–50 median, 147 
Medicare, 54–55 meetings, 113 weekly one-on-one, 262–63 Megginson, Leon, 101 mental models, vii–xii, 2, 3, 31, 35, 65, 131, 289, 315–17 mentorship, 23, 260, 262, 264, 265 mercenaries versus loyalists, 276–77 Merck, 283 merry-go-round, 108 meta-analysis, 172–73 Metcalfe, Robert, 118 Metcalfe’s law, 118 #MeToo movement, 113 metrics, 137 proxy, 139 Michaels, 15 Microsoft, 241 mid-mortems, 92 Miklaszewski, Jim, 196 Milgram, Stanley, 219, 220 military, 141, 229, 279, 294, 300 milkshakes, 297 Miller, Reggie, 246 Mills, Alan, 58 Mindset: The New Psychology of Success (Dweck), 266 mindset, fixed, 266–67, 272 mindset, growth, 266–67 minimum viable product (MVP), 7–8, 81, 294 mirroring, 217 mission, 276 mission statement, 68 MIT, 53, 85 moats, 302–5, 307–8, 310, 312 mode, 147 Moltke, Helmuth von, 7 momentum, 107–10, 119, 129 Monday morning quarterbacking, 271 Moneyball (Lewis), 289 monopolies, 283, 285 Monte Carlo fallacy, 144 Monte Carlo simulation, 195 Moore, Geoffrey, 311 moral hazard, 43–45, 47 most respectful interpretation (MRI), 19–20 moths, 99–101 Mountain Dew, 35 moving target, 136 multiple discovery, 291–92 multiplication, ix, xi multitasking, 70–72, 74, 76, 110 Munger, Charlie, viii, x–xi, 30, 286, 318 Murphy, Edward, 65 Murphy’s law, 64–65, 132 Musk, Elon, 5, 302 mutually assured destruction (MAD), 231 MVP (minimum viable product), 7–8, 81, 294 Mylan, 283 mythical man-month, 279 name-calling, 226 NASA, 4, 32, 33 Nash, John, 213 Nash equilibrium, 213–14, 226, 235 National Football League (NFL), 225–26 National Institutes of Health, 36 National Security Agency, 52 natural selection, 99–100, 102, 291, 295 nature versus nurture, 249–50 negative compounding, 85 negative externalities, 41–43, 47 negative returns, 82–83, 93 negotiations, 127–28 net benefit, 181–82, 184 Netflix, 69, 95, 203 net present value (NPV), 86, 181 network effects, 117–20, 308 neuroticism, 250 New Orleans, La., 41 Newport, Cal, 72 news headlines, 12–13, 221 newspapers, 106 Newsweek, 290 Newton, Isaac, 102, 291 New York Times, 27, 220, 254 Nielsen Holdings, 217 ninety-ninety rule, 89 Nintendo, 296 Nobel Prize, 32, 42, 220, 291, 306 nocebo effect, 137 nodes, 118, 119 No Fly List, 53–54 noise and signal, 311 nonresponse bias, 140, 142, 143 normal distribution (bell curve), 150–52, 153, 163–66, 191 North Korea, 229, 231, 238 north star, 68–70, 275 nothing in excess, 60 not ready for prime time, 242 “now what” questions, 291 NPR, 239 nuclear chain reaction, viii, 114, 120 nuclear industry, 305–6 nuclear option, 238 Nuclear Regulatory Commission (NRC), 305–6 nuclear weapons, 114, 118, 195, 209, 230–31, 233, 238 nudging, 13–14 null hypothesis, 163, 164 numbers, 130, 146 large, law of, 143–44 small, law of, 143, 144 see also data; statistics nurses, 284 Oakland Athletics, 289 Obama, Barack, 64, 241 objective versus subjective, in organizational culture, 274 obnoxious aggression, 264 observe, orient, decide, act (OODA), 294–95 observer effect, 52, 54 observer-expectancy bias, 136, 139 Ockham’s razor, 8–10 Odum, William E., 38 oil, 105–6 Olympics, 209, 246–48, 285 O’Neal, Shaquille, 246 one-hundred-year floods, 192 Onion, 211–12 On the Origin of Species by Means of Natural Selection (Darwin), 100 OODA loop, 294–95 openness to experience, 250 Operation Ceasefire, 232 opinion, diversity of, 205, 206 opioids, 36 opportunity cost, 76–77, 80, 83, 179, 182, 188, 305 of capital, 77, 179, 182 optimistic probability bias, 33 optimization, premature, 7 optimums, local and global, 195–96 optionality, preserving, 58–59 Oracle, 231, 291, 299 
order, 124 balance between chaos and, 128 organizations: culture in, 107–8, 113, 273–80, 293 size and growth of, 278–79 teams in, see teams ostrich with its head in the sand, 55 out-group bias, 127 outliers, 148 Outliers (Gladwell), 261 overfitting, 10–11 overwork, 82 Paine, Thomas, 221–22 pain relievers, 36, 137 Pampered Chef, 217 Pangea, 24–25 paradigm shift, 24, 289 paradox of choice, 62–63 parallel processing, 96 paranoia, 308, 309, 311 Pareto, Vilfredo, 80 Pareto principle, 80–81 Pariser, Eli, 17 Parkinson, Cyril, 74–75, 89 Parkinson’s law, 89 Parkinson’s Law (Parkinson), 74–75 Parkinson’s law of triviality, 74, 89 passwords, 94, 97 past, 201, 271–72, 309–10 Pasteur, Louis, 26 path dependence, 57–59, 194 path of least resistance, 88 Patton, Bruce, 19 Pauling, Linus, 220 payoff matrix, 212–15, 238 PayPal, 72, 291, 296 peak, 105, 106, 112 peak oil, 105 Penny, Jonathon, 52 pent-up energy, 112 perfect, 89–90 as enemy of the good, 61, 89–90 personality traits, 249–50 person-month, 279 perspective, 11 persuasion, see influence models perverse incentives, 50–51, 54 Peter, Laurence, 256 Peter principle, 256, 257 Peterson, Tom, 108–9 Petrified Forest National Park, 217–18 Pew Research, 53 p-hacking, 169, 172 phishing, 97 phones, 116–17, 290 photography, 302–3, 308–10 physics, x, 114, 194, 293 quantum, 200–201 pick your battles, 238 Pinker, Steven, 144 Pirahã, x Pitbull, 36 pivoting, 295–96, 298–301, 308, 311, 312 placebo, 137 placebo effect, 137 Planck, Max, 24 Playskool, 111 Podesta, John, 97 point of no return, 244 Polaris, 67–68 polarity, 125–26 police, in organizations and projects, 253–54 politics, 70, 104 ads and statements in, 225–26 elections, 206, 218, 233, 241, 271, 293, 299 failure and, 47 influence in, 216 predictions in, 206 polls and surveys, 142–43, 152–54, 160 approval ratings, 152–54, 158 employee engagement, 140, 142 postmortems, 32, 92 Potemkin village, 228–29 potential energy, 112 power, 162 power drills, 296 power law distribution, 80–81 power vacuum, 259–60 practice, deliberate, 260–62, 264, 266 precautionary principle, 59–60 Predictably Irrational (Ariely), 14, 222–23 predictions and forecasts, 132, 173 market for, 205–7 superforecasters and, 206–7 PredictIt, 206 premature optimization, 7 premises, see principles pre-mortems, 92 present bias, 85, 87, 93, 113 preserving optionality, 58–59 pressure point, 112 prices, 188, 231, 299 arbitrage and, 282–83 bait and switch and, 228, 229 inflation in, 179–80, 182–83 loss leader strategy and, 236–37 manufacturer’s suggested retail, 15 monopolies and, 283 principal, 44–45 principal-agent problem, 44–45 principles (premises), 207 first, 4–7, 31, 207 prior, 159 prioritizing, 68 prisoners, 63, 232 prisoner’s dilemma, 212–14, 226, 234–35, 244 privacy, 55 probability, 132, 173, 194 bias, optimistic, 33 conditional, 156 probability distributions, 150, 151 bell curve (normal), 150–52, 153, 163–66, 191 Bernoulli, 152 central limit theorem and, 152–53, 163 fat-tailed, 191 power law, 80–81 sample, 152–53 pro-con lists, 175–78, 185, 189 procrastination, 83–85, 87, 89 product development, 294 product/market fit, 292–96, 302 promotions, 256, 275 proximate cause, 31, 117 proxy endpoint, 137 proxy metric, 139 psychology, 168 Psychology of Science, The (Maslow), 177 Ptolemy, Claudius, 8 publication bias, 170, 173 public goods, 39 punching above your weight, 242 p-values, 164, 165, 167–69, 172 Pygmalion effect, 267–68 Pyrrhus, King, 239 Qualcomm, 231 quantum physics, 200–201 quarantine, 234 questions: now what, 291 what if, 122, 201 why, 32, 33 
why now, 291 quick and dirty, 234 quid pro quo, 215 Rabois, Keith, 72, 265 Rachleff, Andy, 285–86, 292–93 radical candor, 263–64 Radical Candor (Scott), 263 radiology, 291 randomized controlled experiment, 136 randomness, 201 rats, 51 Rawls, John, 21 Regan, Ronald, 183 real estate agents, 44–45 recessions, 121–22 reciprocity, 215–16, 220, 222, 229, 289 recommendations, 217 red line, 238 referrals, 217 reframe the problem, 96–97 refugee asylum cases, 144 regression to the mean, 146, 286 regret, 87 regulations, 183–84, 231–32 regulatory capture, 305–7 reinventing the wheel, 92 relationships, 53, 55, 63, 91, 111, 124, 159, 271, 296, 298 being locked into, 305 dating, 8–10, 95 replication crisis, 168–72 Republican Party, 104 reputation, 215 research: meta-analysis of, 172–73 publication bias and, 170, 173 systematic reviews of, 172, 173 see also experiments resonance, 293–94 response bias, 142, 143 responsibility, diffusion of, 259 restaurants, 297 menus at, 14, 62 RetailMeNot, 281 retaliation, 238 returns: diminishing, 81–83 negative, 82–83, 93 reversible decisions, 61–62 revolving door, 306 rewards, 275 Riccio, Jim, 306 rise to the occasion, 268 risk, 43, 46, 90, 288 cost-benefit analysis and, 180 de-risking, 6–7, 10, 294 moral hazard and, 43–45, 47 Road Ahead, The (Gates), 69 Roberts, Jason, 122 Roberts, John, 27 Rogers, Everett, 116 Rogers, William, 31 Rogers Commission Report, 31–33 roles, 256–58, 260, 271, 293 roly-poly toy, 111–12 root cause, 31–33, 234 roulette, 144 Rubicon River, 244 ruinous empathy, 264 Rumsfeld, Donald, 196–97, 247 Rumsfeld’s Rule, 247 Russia, 218, 241 Germany and, 70, 238–39 see also Soviet Union Sacred Heart University (SHU), 217, 218 sacrifice play, 239 Sagan, Carl, 220 sales, 81, 216–17 Salesforce, 299 same-sex marriage, 117, 118 Sample, Steven, 28 sample distribution, 152–53 sample size, 143, 160, 162, 163, 165–68, 172 Sánchez, Ricardo, 234 sanctions and fines, 232 Sanders, Bernie, 70, 182, 293 Sayre, Wallace, 74 Sayre’s law, 74 scarcity, 219, 220 scatter plot, 126 scenario analysis (scenario planning), 198–99, 201–3, 207 schools, see education and schools Schrödinger, Erwin, 200 Schrödinger’s cat, 200 Schultz, Howard, 296 Schwartz, Barry, 62–63 science, 133, 220 cargo cult, 315–16 Scientific Autobiography and other Papers (Planck), 24 scientific evidence, 139 scientific experiments, see experiments scientific method, 101–2, 294 scorched-earth tactics, 243 Scott, Kim, 263 S curves, 117, 120 secondary markets, 281–82 second law of thermodynamics, 124 secrets, 288–90, 292 Securities and Exchange Commission, U.S., 228 security, false sense of, 44 security services, 229 selection, adverse, 46–47 selection bias, 139–40, 143, 170 self-control, 87 self-fulfilling prophecies, 267 self-serving bias, 21, 272 Seligman, Martin, 22 Semmelweis, Ignaz, 25–26 Semmelweis reflex, 26 Seneca, Marcus, 60 sensitivity analysis, 181–82, 185, 188 dynamic, 195 Sequoia Capital, 291 Sessions, Roger, 8 sexual predators, 113 Shakespeare, William, 105 Sheets Energy Strips, 36 Shermer, Michael, 133 Shirky, Clay, 104 Shirky principle, 104, 112 Short History of Nearly Everything, A (Bryson), 50 short-termism, 55–56, 58, 60, 68, 85 side effects, 137 signal and noise, 311 significance, 167 statistical, 164–67, 170 Silicon Valley, 288, 289 simulations, 193–95 simultaneous invention, 291–92 Singapore math, 23–24 Sir David Attenborough, RSS, 35 Skeptics Society, 133 sleep meditation app, 162–68 slippery slope argument, 235 slow (high-concentration) thinking, 30, 33, 70–71 small numbers, law of, 143, 
144 smartphones, 117, 290, 309, 310 smoking, 41, 42, 133–34, 139, 173 Snap, 299 Snowden, Edward, 52, 53 social engineering, 97 social equality, 117 social media, 81, 94, 113, 217–19, 241 Facebook, 18, 36, 94, 119, 219, 233, 247, 305, 308 Instagram, 220, 247, 291, 310 YouTube, 220, 291 social networks, 117 Dunbar’s number and, 278 social norms versus market norms, 222–24 social proof, 217–20, 229 societal change, 100–101 software, 56, 57 simulations, 192–94 solitaire, 195 solution space, 97 Somalia, 243 sophomore slump, 145–46 South Korea, 229, 231, 238 Soviet Union: Germany and, 70, 238–39 Gosplan in, 49 in Cold War, 209, 235 space exploration, 209 spacing effect, 262 Spain, 243–44 spam, 37, 161, 192–93, 234 specialists, 252–53 species, 120 spending, 38, 74–75 federal, 75–76 spillover effects, 41, 43 sports, 82–83 baseball, 83, 145–46, 289 football, 226, 243 Olympics, 209, 246–48, 285 Spotify, 299 spreadsheets, 179, 180, 182, 299 Srinivasan, Balaji, 301 standard deviation, 149, 150–51, 154 standard error, 154 standards, 93 Stanford Law School, x Starbucks, 296 startup business idea, 6–7 statistics, 130–32, 146, 173, 289, 297 base rate in, 157, 159, 160 base rate fallacy in, 157, 158, 170 Bayesian, 157–60 confidence intervals in, 154–56, 159 confidence level in, 154, 155, 161 frequentist, 158–60 p-hacking in, 169, 172 p-values in, 164, 165, 167–69, 172 standard deviation in, 149, 150–51, 154 standard error in, 154 statistical significance, 164–67, 170 summary, 146, 147 see also data; experiments; probability distributions Staubach, Roger, 243 Sternberg, Robert, 290 stock and flow diagrams, 192 Stone, Douglas, 19 stop the bleeding, 234 strategy, 107–8 exit, 242–43 loss leader, 236–37 pivoting and, 295–96, 298–301, 308, 311, 312 tactics versus, 256–57 strategy tax, 103–4, 112 Stiglitz, Joseph, 306 straw man, 225–26 Streisand, Barbra, 51 Streisand effect, 51, 52 Stroll, Cliff, 290 Structure of Scientific Revolutions, The (Kuhn), 24 subjective versus objective, in organizational culture, 274 suicide, 218 summary statistics, 146, 147 sunk-cost fallacy, 91 superforecasters, 206–7 Superforecasting (Tetlock), 206–7 super models, viii–xii super thinking, viii–ix, 3, 316, 318 surface area, 122 luck, 122, 124, 128 surgery, 136–37 Surowiecki, James, 203–5 surrogate endpoint, 137 surveys, see polls and surveys survivorship bias, 140–43, 170, 272 sustainable competitive advantage, 283, 285 switching costs, 305 systematic review, 172, 173 systems thinking, 192, 195, 198 tactics, 256–57 Tajfel, Henri, 127 take a step back, 298 Taleb, Nassim Nicholas, 2, 105 talk past each other, 225 Target, 236, 252 target, measurable, 49–50 taxes, 39, 40, 56, 104, 193–94 T cells, 194 teams, 246–48, 275 roles in, 256–58, 260 size of, 278 10x, 248, 249, 255, 260, 273, 280, 294 Tech, 83 technical debt, 56, 57 technologies, 289–90, 295 adoption curves of, 115 adoption life cycles of, 116–17, 129, 289, 290, 311–12 disruptive, 308, 310–11 telephone, 118–19 temperature: body, 146–50 thermostats and, 194 tennis, 2 10,000-Hour Rule, 261 10x individuals, 247–48 10x teams, 248, 249, 255, 260, 273, 280, 294 terrorism, 52, 234 Tesla, Inc., 300–301 testing culture, 50 Tetlock, Philip E., 206–7 Texas sharpshooter fallacy, 136 textbooks, 262 Thaler, Richard, 87 Theranos, 228 thermodynamics, 124 thermostats, 194 Thiel, Peter, 72, 288, 289 thinking: black-and-white, 126–28, 168, 272 convergent, 203 counterfactual, 201, 272, 309–10 critical, 201 divergent, 203 fast (low-concentration), 30, 70–71 gray, 28 inverse, 1–2, 291 lateral, 201 
outside the box, 201 slow (high-concentration), 30, 33, 70–71 super, viii–ix, 3, 316, 318 systems, 192, 195, 198 writing and, 316 Thinking, Fast and Slow (Kahneman), 30 third story, 19, 92 thought experiment, 199–201 throwing good money after bad, 91 throwing more money at the problem, 94 tight versus loose, in organizational culture, 274 timeboxing, 75 time: management of, 38 as money, 77 work and, 89 tipping point, 115, 117, 119, 120 tit-for-tat, 214–15 Tōgō Heihachirō, 241 tolerance, 117 tools, 95 too much of a good thing, 60 top idea in your mind, 71, 72 toxic culture, 275 Toys “R” Us, 281 trade-offs, 77–78 traditions, 275 tragedy of the commons, 37–40, 43, 47, 49 transparency, 307 tribalism, 28 Trojan horse, 228 Truman Show, The, 229 Trump, Donald, 15, 206, 293 Trump: The Art of the Deal (Trump and Schwartz), 15 trust, 20, 124, 215, 217 trying too hard, 82 Tsushima, Battle of, 241 Tupperware, 217 TurboTax, 104 Turner, John, 127 turn lemons into lemonade, 121 Tversky, Amos, 9, 90 Twain, Mark, 106 Twitter, 233, 234, 296 two-front wars, 70 type I error, 161 type II error, 161 tyranny of small decisions, 38, 55 Tyson, Mike, 7 Uber, 231, 275, 288, 290 Ulam, Stanislaw, 195 ultimatum game, 224, 244 uncertainty, 2, 132, 173, 180, 182, 185 unforced error, 2, 10, 33 unicorn candidate, 257–58 unintended consequences, 35–36, 53–55, 57, 64–65, 192, 232 Union of Concerned Scientists (UCS), 306 unique value proposition, 211 University of Chicago, 144 unknown knowns, 198, 203 unknowns: known, 197–98 unknown, 196–98, 203 urgency, false, 74 used car market, 46–47 U.S.

pages: 281 words: 79,464

Against Empathy: The Case for Rational Compassion
by Paul Bloom

It turns out, then, that all the empathy measures that are commonly used are actually measures of a cluster of things—including empathy, but also concern and compassion, as well as some traits, such as being cool-headed in an emergency, that might have little to do with empathy in any sense of the term. Finally, when it comes to looking at research concerning the relationship between empathy and good behavior, there is the issue of publication bias. Researchers who study the effects of empathy are typically hoping and expecting that empathy does have effects—nobody does an experiment hoping to find nothing. Studies that fail to find an effect are therefore less likely to be submitted for publication (the so-called file drawer problem), and if such work is submitted, it’s more difficult to get published, because null effects are notoriously uninteresting to reviewers and editors.
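
The file drawer problem is simple to simulate. In the sketch below (all parameters invented), two thousand small studies measure a modest true effect, but only those clearing p < 0.05 “get published”; the published average then overstates the truth.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

true_effect, n, n_studies = 0.2, 30, 2000
all_means, published = [], []
for _ in range(n_studies):
    sample = rng.normal(true_effect, 1.0, n)        # small study, modest effect
    all_means.append(sample.mean())
    if stats.ttest_1samp(sample, 0.0).pvalue < 0.05:
        published.append(sample.mean())             # only "hits" see print

print(f"true effect:              {true_effect:.2f}")
print(f"average over all studies: {np.mean(all_means):.2f}")
print(f"average over published:   {np.mean(published):.2f}")  # clearly inflated
```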

(documentary), 50 food aid, 99 football, and violence, 187 foreign aid, 99 forgiveness, 25 Fourth Amendment, 37 Freddie Kruger (character), 180 free speech, 123–26 free trade, 112, 117 free will, 218–19, 221 Freud, Sigmund, 5, 145, 216 friendship, 149–54, 158–59 Fritz, Heidi, 133–35 Gandhi, Mahatma, 159–60 Garner, Eric, 118 Gawande, Atul, 145 gay marriage, 53, 55, 116, 122 Gaza War, 186, 188–89, 190 Gazzaniga, Michael, 220 gender differences, 81, 129, 133–36 objectification, 203–4, 206 genes, 8, 94–95, 154, 169, 195 Ghiselin, Michael, 166 Gladwell, Malcolm, 231–32 Glover, Jonathan, 74, 188 Godwin, Morgan, 202 Godwin’s Law, 63 Goebbels, Joseph, 196 Goodman, Charles, 138 goodness (good actions/behaviors), 41–42, 85–86, 101–6 effective altruism, 102–6, 238–39 empathy-altruism hypothesis, 25, 85–86, 168 high intelligence and, 233 measuring empathy and, 41–42, 77–82 publication bias and measuring empathy, 82–83 Gore, Al, 49–50, 121 Göring, Hermann, 196 Gourevitch, Philip, 93 greed, 188 Greene, Joshua, 10 guilt, 44, 87, 182, 198 gun control, 115, 116, 119, 122–23 gut feelings, 7, 213–14 Habitat for Humanity, 88 Haidt, Jonathan, 6, 120, 223 Haldane, J. B. S., 169 Hamas, 189–90 Hannibal Lecter (character), 180–81 Hare, Robert, 197, 198, 199 Harris, Lasana, 69 Harris, Paul, 174, 175 Harris, Sam, 10, 218 Harris, Thomas, 180–81 Helgeson, Vicki, 133–35 helping others.

See charitable giving; goodness heuristics, 227–28 Hickok, Gregory, 64, 67 Hieronymi, Pamela, 157 high empathy, in personal relationships, 42, 132–36 Hitler, Adolf, 28, 191, 193, 196, 208–9 Hobbes, Thomas, 167, 168 Hoffman, Martin, 10, 21–22, 166 Holloway, Natalee, 90–91 Holocaust, 5, 196, 205, 206–7, 208–9 homeless people, and empathy, 69–70, 167 Hopkins, Anthony, 180 Horton, Willie, 34–35, 53 hot cognition, 214, 216 Hotel Rwanda (movie), 93 How Adam Smith Can Change Your Life (Roberts), 153 Hume, David, 39, 44, 165–66 Hurricane Irene, 90 Hurricane Katrina, 90, 91 Hurricane Sandy, 90 Hussein, Saddam, 193 Iacoboni, Marco, 141 identifiable victim effect, 88–89, 90 impartiality, 8, 110, 159 incentives, 57–58 Indian Ocean tsunami of 2004, 90 indigestion, 217 innumerate, 9, 31, 36, 89 insular cortex, 61, 64, 65, 139 intelligence (IQ), 230–33 intimacy (intimate relationships), 129–36, 149–63 apologies and, 156–58 friendships, 149–54, 158–59 high empathy in relationships, 132–36 IQ tests, 232 Iraq war, 107, 193 Isaacson, Walter, 93, 94 ISIS, 193 Island of Doctor Moreau, The (Wells), 75 Israeli-Palestinian conflict, 4–5, 186, 188–89, 190, 204–5 Jackson, Frank, 148 James, William, 170 Jamison, Leslie, 10, 25, 146–47 Jesus, 161 Jinpa, Thupten, 141 job candidates, 224–25 Johnson, Lyndon, 93 judges, and cognitive empathy, 37, 125–26 Just Babies (Bloom), 6, 163, 172–73, 239 justice, 42–43, 48, 159, 210–11 Kagan, Shelly, 29 Kahneman, Daniel, 152, 214 Kant, Immanuel, 29, 30 Keyser Söze (character), 180 kidney donations, 26, 47, 238 kindness, 3, 21, 25, 170 loving-kindness meditation, 139–41 in romantic partners, 129–30 zero-sum nature of, 95–98 kinship, 7–8, 159–60 Klimecki, Olga, 138 knowledge argument, 148 Kravinsky, Zell, 25–26, 27, 102, 238 Kuehberger, Johann, 28 Lakoff, George, 20, 113–14, 119 Landy, Joshua, 48–50 Lanza, Adam, 1 Lazare, Aaron, 156–57 Leaves of Grass (Whitman), 21 Lee, Robert E., 184 Less Than Human (Smith), 204–5, 206 Levi, Primo, 206–7 Lévi-Strauss, Claude, 202–3 liberals (liberalism), 113–14, 118–27 political orientation and language, 114–18 libertarians (libertarianism), 114, 115, 118, 122 lies (lying), 29 Lifton, Robert Jay, 16 Lincoln, Abraham, 167, 168 Locke, John, 115 Lockwood, Heidi Howkins, 156 Louis CK, 109 loving-kindness meditation, 139–41 loyalty, 120, 158, 159–60, 236 Lynch, Michael, 10, 51 MacFarquhar, Larissa, 10, 46–47, 104, 161, 162 MacKinnon, Catharine, 203 Macnamara, John, 229 Mad Men (TV series), 154 Make-A-Wish Foundation, 96–97 malaria, 40–41, 97, 103 Manne, Kate, 205 “man versus man” problems, 104 Marsh, Abigail, 47 marshmallow experiment, 234 martial arts, and violence, 187 Mary’s Room, 148 McClure, Jessica (Baby Jessica), 90 McVeigh, Timothy, 49 meanness, 199 measurement problem, 77–83 media spotlight, 90–91, 92–93 medical students, and empathy, 142–44 Meltzoff, Andrew, 171–72 Mencius, 22 mentalization, 17, 71 Mill, John Stuart, 29, 115, 116–17 Milton, John, 26 mind–body problem, 217, 220 mindfulness meditation, 43, 140–41 minimum wage, 119 mirror neurons, 63–64 Mischel, Walter, 234 Montross, Christine, 142–45, 146 morality, 22–23, 39–54 in babies and children, 6, 165, 171 emotional nature of, 5–6 empathy as foundation of, 19–22, 165–76 empathy as poor guide for, 2–3, 9–10, 54–55 goodness and empathy, 41–42, 101–6 inciting violence, 184–87 origins of, 171–76 reason as basis for, 5–6, 8–9, 44, 51–54 terminological issues, 39–41 moralization gap, 181–84 moral philosophy, 22, 29–30, 44, 91–92 Mother Teresa of Calcutta, 89 Myth of Mirror Neurons, The (Hickok), 
64, 67 names (naming), 222 national disasters, and election years, 94 natural selection, 168–70 Nazi Doctors, The (Lifton), 16 Nazis, 5, 16, 74, 110–11, 124, 177–78, 181, 191, 196, 202, 206–7 Netanyahu, Benjamin, 188–89, 190 neuroscience, 47, 59–73 of compassion, 138–39 difference between feeling and understanding, 70–73 of empathic experiences, 62–68 empathic reactions and prior bias, preference, and judgment, 68–70, 90 localization problem, 59–61 other people’s pain, 62–68, 73–75 of reason, 216–21 Newtown school shooting, 1–2, 31–33, 90 New Yorker, The, 11–12 New York Times, 11, 100, 214 Nussbaum, Martha, 10, 107, 203 Oakley, Barbara, 135 Obama, Barack, 2, 4, 18, 19, 118, 119, 122–23, 235 Obama, Michelle, 123 objectification, 178–79, 203–4, 206 objectivity, 86, 146 Ochsner, Kevin, 71–72 O’Connor, Lynn E., 141 Oliver Twist (Dickens), 92 Omnivore’s Dilemma, The (Pollan), 50 On Apology (Lazare), 156–57 origins of empathy, 171–76 orphanages, in Cambodia, 100 Orwell, George, 37–38, 159–60, 188 oxytocin, 195 pain babies and empathy, 172–74 neuroscience of, 62–68, 73–75 role in empathy, 17, 21, 33–36, 62–68, 155–56 parenting, 97, 130–31, 154–55 Parkinson’s disease, 219 parochialism, 9, 36 “pathological altruism,” 135 Patton, George S., 178 Paul, Laurie, 147–48 Paul, Ron, 118 Personal Concern scale, 80–81 personal distress, 25 Personal Distress scale, 79–81 Perspective Taking scale, 78–81 physicalism, 148 physician-patient relationship, 143–45, 146–47 Pinker, Steven, 10, 18–19, 74–75, 239–40 moralization gap and, 181, 184 self-control and, 234 threshold effect and, 231 Pitkin, Aaron, 46 pity, 40, 100 Plato, 214 poker, 28 Poland, Hitler’s invasion of, 193 police shootings, 4, 19–20, 205 political orientation and language, 114–18 politics, 113–27 free speech and, 123–26 legal context, 125–26 liberal policies and empathy, 113–14, 118–25 rationality and irrationality in, 235–37 pornography, depiction of women in, 203–4 Poulin, Michael, 193–95 prefrontal cortex, 61, 71 presidential election of 2012, 117–18, 119 Prinz, Jesse, 10, 22, 200, 210–11 prison rape, 93 progressives (progressivism), 113–14, 118–27 political orientation and language, 114–18 projective empathy, 70–71, 155 “prosocial concern,” 62 psychoanalysis, 5, 144, 145, 216 psychological egoism, 72–74, 75–76 psychopaths (psychopathy), 42, 197–201 lack of self-control and malicious nature of, 42, 199–201 myth of pure evil, 181, 184 neuroscience of, 47, 71–73 Psychopathy Checklist, 197–201, 198 publication bias, 82–83 punishment, 161, 185, 186, 192, 195–96, 207, 209, 225 purity, 117–18, 224 qualia, and knowledge argument, 148 Rachels, James, 52 racial bias, 226 racism, 9, 48–49, 202–3 Rai, Tage, 184–85, 186 Raine, Adrian, 179 Rand, David, 7 rape, 23, 34, 35, 93, 182, 192, 206 rationality.

pages: 436 words: 123,488

Overdosed America: The Broken Promise of American Medicine
by John Abramson
Published 20 Sep 2004

Fletcher said that as punishment for publishing this article, the pharmaceutical industry “withdrew many adverts” and showed that it was “willing to flex its considerable muscles when it felt its interests were threatened.” This is a price that medical journal editors would prefer not to pay.

NOT TELLING THE WHOLE TRUTH: PUBLICATION BIAS

Even if a doctor could keep up with all the studies that were published, he or she would still have a limited and skewed view of the real evidence. Notwithstanding all the potential ways that research can be tipped in favor of a sponsor’s product, clinical trials still tend to reveal the truth about whether a new therapy is effective—or not.

The results of all of the “pivotal” studies (those deemed to be of high enough quality to be used in the FDA’s determinations) for these seven antidepressants were then put together to assess the overall effect of the new drugs. By looking at all the studies, the researchers avoided the distortion of “publication bias” and were able to determine whether or not the scientific evidence really showed that the new antidepressants are more effective and safer than the older ones. When all the evidence is considered, it turns out that the new antidepressant drugs are no more effective than the older tricyclic antidepressants (the classic being amitriptyline, brand name Elavil).
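
The effect of adding unpublished trials back in can be illustrated with a toy calculation (the figures below are invented for illustration and are not the FDA data): once shelved null results re-enter an inverse-variance-weighted pool, the apparent benefit shrinks.

```python
import numpy as np

# pooling all "pivotal" trials versus only the published ones
effects   = np.array([0.40, 0.35, 0.05, 0.00, -0.05, 0.30])
ses       = np.array([0.10, 0.12, 0.15, 0.14, 0.16, 0.11])
published = np.array([True, True, False, False, False, True])  # nulls shelved

def pooled(e, s):
    w = 1.0 / s**2           # inverse-variance weights
    return (w * e).sum() / w.sum()

print(f"published trials only: {pooled(effects[published], ses[published]):.2f}")
print(f"all pivotal trials:    {pooled(effects, ses):.2f}")  # markedly smaller
```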

See also medical research absolute vs. relative risk and, 14–16, 165, 166, 229 advertising and research companies and, 109–10 Celebrex and Vioxx research (see Celebrex and Vioxx) cholesterol research (see cholesterol guidelines of 2001) commercial funding, 94–97 (see also drug companies; funding) commercial goals vs. health goals, 21–22, 50–51, 53, 241–44 conflicts of interest and (see conflicts of interest) damage control and, 107–9 data manipulation, 34–36 data omission, 29–31 data transparency and, 27–28, 94, 105–6, 251–52 dosage manipulation, 101–2 failure to compare existing therapies, 17, 102–3 FDA drug approval and Rezulin, 86–88 ghostwriters and, 106–7 hormone replacement therapy (see hormone replacement therapy) implantable defibrillators, 98–101 independent review for, 249–53 medical journals and, 25–27, 37–38, 93–94, 96–97 (see also medical journals) osteoporosis research, 211–20 Paxil research, 243 premature termination of research, 104–5 publication bias as, 113–17 research design changes as, 31 septic shock research, 161–63 stroke research, 13–22 unbiased information vs., 167 unrepresentative patients, 16–17, 33, 103–4, 206–8, 251 commercial speech, 37–38, 157–59 conflicts of interest academic experts, xxii, 18, 243 cholesterol guidelines, 135, 147–48 clinical guideline experts, xxi, 127–28, 133–35, 146–48, 227, 249–50 continuing medical education, 121–23 damage control, 109 FDA, 85–87, 89–90 ghostwriters, 106–7 hormone replacement therapy, 60–61 medical journal, 26 medical news stories, 166–67 NIH researchers, 86–90 independent review and, 258–59 surgeons, 177–78 confounding factors, 66–67 consciousness, 206–8 consulting contracts, 88–90, 109, 249.

pages: 315 words: 87,035

May Contain Lies: How Stories, Statistics, and Studies Exploit Our Biases—And What We Can Do About It
by Alex Edmans
Published 13 May 2024

The same is true for academic journals – no matter how diligent they are, reviewers and editors can’t spot every flaw. For example, it’s difficult to detect data mining because they never see the tests the authors tried and then buried when they didn’t work out. Journals can also fall victim to publication bias, where they accept a paper because they like its findings. And what findings do editors like? Statistically significant ones, because they’re more likely to make a splash. The main measure of a journal’s reputation is its ‘impact factor’, the number of times its papers are cited by other journals, and people are more likely to quote a study that finds something than one that unearths nothing.

Anders 61, 62–3, 66, 104 errors of commission 250 errors of omission 250 estimation 246 evaluation 223, 233 evidence 5, 12, 13, 122 average results 280, 282 credentials 226–8 identity and 266 is not proof 192–210 scientific management 198–9 lack of 219 limitations of 280, 282 smarter thinking 288–9 in social sciences 224 systematic reviews 222 testing 217–18 validity 199 EXCOMM 237–8, 244, 254 cognitive diversity 238–9, 240 deliberation process 244–5 demographic make up 238 executive pay 67–9 exogenous parts of instruments 178–9, 179–80, 182, 190 experts 223–4 explained components of instruments 178–9 explorers 170, 171–2 external validity 199, 202, 204, 209 Fabo et al. 225–6 Facebook 272 Fact Check (Reuters) 270 fact-checking websites 270–71, 277, 282 facts 12, 13 are not data 89–114 learning from a blank slate 108–13 narrative fallacy 104–8, 113 seeing the full picture 95–104 selected samples 95–6, 102, 113 Steve Jobs and Apple 89, 90–92, 93, 94, 101–2, 103, 106, 107, 200 checking 7–8, 12, 21, 37, 85, 88, 103, 268–9 interpretation of 24 smarter thinking 285–6 failure parties 250 failures 250–51 fake news 271, 272, 282 Faleye, Olubunmi 4 family businesses 181–2 Fancy, Tariq 83–5, 226 fast-food employment 184–5 Fernbach, Philip 251 ‘Fifty shades of QE’ (Fabo et al.) 225–6 Financial Management Association 270 Financial Reporting Council 74 fintech companies 85–6 Fisher, Matthew 54 Fixit (fictional company) 119, 135–40, 137–40 data mining see data mining see also Xinyi (fictional name) Flammer, Caroline 243 flexibility 108 Flint, Austin 174–5 Floyd, George 75 Fong, Geoffrey 262 Fooled by Randomness (Taleb) 274 football 126–7, 188 decline in stock markets 134–5 Euros (2004) 126–7 mood and emotions 126–7, 128, 129 sentiment 129 World Cup (2014) 133–4 Forbes 219 Forbes 15 Best Business Books (2015) 268–9 Ford, Henry 245 Fortune 60, 219, 223 Fos, Slava 241 frequent trading 97–101 Frontiers in Nutrition study 144, 145 Full Fact 270 Galileo Galilei 226 Gallagher, Liam 44 Gama, Vasco da 171 García, Diego 129 Gavin, Jim 23 gender diversity 243 company performance research 118–24, 135–40 data mining see data mining evidence for fund launch 116–18 geoengineering 268 Getting Things Done (Allen) 229, 270 Gibson, Belle 17–20, 103–4 Gimbel, Sarah 28–9 Gladwell, Malcolm 6, 60, 66 Ericsson study 61, 62–3, 66, 104 magazine interview 60, 61, 63 10,000-hours rule 6, 59–61, 62–6 Global Head of Sustainability Research 85 global warming 265–6 Glossner, Simon 243 Golden Circle Model 92 Google 157, 255 Gore, Al 266 gradients 136–7 Grant Thornton 224 Grant Thornton Corporate Governance Index 225 granular world 45, 51–2, 56, 201 Great North Run 47 Gresham College 62, 264 grit 204–5, 207 group discussions 247–8 grouping 137–40, 140, 141 groupthink 236, 237, 241, 247–8, 257 growth mindset 62 Guardian, The 215 Guriev, Sergei 271–2 Guzey, Alexey 270 Halo Effect, The (Rosenzweig) 111 Harris, Sam 28–9 Harvard Business Review 103, 152, 154, 290–91 Harvard University 228 Heeb, Florian 54 Henry, Emeric 271–2 hierarchies 249–50 high pollution 150–51 Holmes, Elizabeth 20–21, 219 homeopathy 6 honorary doctorates 227 Hoxby, Caroline 169, 177–9, 202, 221 HSBC 256 Hughes, Robert 22 Hung, William 207 hunter gatherers 43–4 hydroxychloroquine 6–7 Hypocritical Oath 230 hypotheses 23–5, 66 average output 99 control samples 99, 102 inputs and outputs 98–9 magnitude of underperformance 100 representative samples 99, 102 reverse engineering 124–5 sample size 100 statistical significance 100–101 test samples 99 identity 266 Imperial Tobacco 
152 inclusion 243–8 micro-processes 248–9 An Inconvenient Truth 265, 266 inequality 159–61, 162–3 information gathering 214 InfoWars 231 initial beliefs 216 instruments 177–8, 190 endogenous parts 178–9, 180, 182 exogenous parts 178–9, 179–80, 182, 190 natural experiments and 186 relevance 179, 180, 182, 190 ridiculousness and irrelevance 180–81 interaction effect 207 internal validity 199, 200, 202, 209 intervention studies 173 investors 127 Ioannidis, John 219 iPhone 91, 92 IQ (intelligence quotient) 143–4, 145–7 irrelevance of instruments 180–81 Isaacson, Walter 93, 101–2, 103 James (acrobat) 59–60 Jandali, Abdulfattah 89 Janis, Irving 236 Jensen, Michael 69, 70–71 Jobs, Clara 90 Jobs, Paul 89–90 Jobs, Steve 89, 90–92, 93, 101–2, 103, 106 Johnson, Tim 127–8 Joint Chiefs of Staff (JCS) 236, 237, 239, 240 Journal of Finance 129, 218 Journal of the American Medical Association 219 journal quality 218 journalists 228, 282 checking facts 273 journals impact factor 220 peer-reviews of 217–18 publication bias 220 Joy, Bill 61 Kahan, Dan 263, 266, 268 Kahneman, Daniel 29 Kaplan, Jonas 28–9 Keil, Frank 54, 251 Kempf, Elisabeth 241 Kennedy, General Robert 244, 245 Kennedy, President John F. 244 Bay of Pigs invasion 235–7, 244 Cuban Missile Crisis 235, 237–8, 239–40, 244–5 EXCOMM 238–9, 244–6 Kerry, John 29–30 Khrushchev, Nikita 235, 237, 240 Kirk, Stuart 256–7 knowledge 7–8, 10 biased interpretation 37 biased search 36–7 Krantz, David 262 Krueger, Alan 184–5 Krueger, Joachim 52 Ladder of Misinference 11, 56, 152, 232 Lancet 221 Lancet Public Health study 46 Langley, Samuel Pierpont 200 law of attraction 20 Leavers (Brexit) 214 LeMay, Curtis 239, 240 Lemnitzer, Lyman 239 Lepper, Mark 30–31, 259, 260, 261 Les Décodeurs 270 lies 12 limbic brain 93, 107 Lind, James 172–3, 174 LinkedIn post 153 Lisker, Bruce 21–3, 24 Lisker, Dorka 21–2 Living Wage 76–7 Lodge, Milton 36–7 London Business School 74–5 London Marathon 47 Lord, Charles 30–31, 259, 260, 261 Macintosh 91 marbled world 45, 53–5, 56 Martin, Roger L. 
290–92 McDaniel, Mark 48 McGrath Task Circumplex 240 McKinsey 225 report (2020) 78 study (2017) 152, 154, 187, 291–2 McLaughlin, Dan 64–5 McNamara, Robert 239–40, 244 Mearsheimer, John 86 Meckling, William 69, 70–71 Medium 84 Medscape 219 Merton College 213 Merton in the City reunion (2016) 213–14 metal cutting 193–4 micro-processes 248–9, 258, 263 Midvale Steel Works 193 Miliband, Ed 160, 161 minimum wage laws 183–4 misinformation 5–7, 9–10, 67–8, 214, 230, 231, 234 misrepresentation 74–6 MIT 125, 126, 127 moderate world 45–50, 55 moderation 206–7, 209–10 momentum 157 Monsue, Andrew 22–3, 24 Montessori education method 263 Morgan Stanley 125, 130–31, 189, 255–6 ‘balanceworks’ programme 156 motivated reasoning 27–8, 30, 37, 184 Motor Neurone Disease Association 47 Mountain View (later Silicon Valley) 90 Mozart 61 Mullainathan, Sendhil 175–6 Murdoch, Lachlan 181 Murdoch, Rupert 20, 181 my-side reasons 264 naïve acceptance 25–7, 32, 37–8 narrative fallacy 105–8, 109, 113 twin biases 106 National Childbirth Trust (NCT) 143, 144–5 National Geographic 223 National Health and Medical Research Council 222 National Health Service (NHS) website 222 National Security Council (NSC) 237 National Union of Journalists 273 NATO 86 natural experiments 185–6, 187, 190 instruments and 186 Nature 6, 218 neocortex 92, 107 New Scientist 223 News Corporation 181 news feeds 6 Nisbett, Richard 262 No Child Left Behind Act (2001) 196–7 non-pecuniary benefits 70–71 Norli, Øyvind 129 Nyhan, Brendan 270 Obama, Barack 265 Object-Spatial Imagery and Verbal Questionnaire 240, 241 observational studies 173 Odean, Terry 96–8 oil spills 25–8 100 Best Companies to Work for 116–17, 156–7, 189 opinions, articulating in detail 251 Organization Stream Analysis 111 organizations 235–58 Oster, Emily 147, 201, 222 other-side reasons 264 out-of-sample tests 133 Outliers 61, 64, 104, 270 over-extrapolation 206–7, 209–10 Pagella Politica 270 Paige, Rod 196, 199 Paine, Lynn 290–91 Palin, Sarah 81 papers, scientific retractions 221 reviewed by scientists 220–21 submitted for review 217–18 parachutes 208–9 Paris Agreement (2015) 49, 50 pausing before criticizing 232–3 pausing before sharing 230–32 pay gaps 3–4, 5 Peak (Ericsson) 63, 104 peer reviewers 8 peer reviews 217–19, 233 books 223 reliability of 220 Pennycook, Gordon 231, 272 Perkins, David 264 Ph.D.s 227 Phillips et al. 
242 Pickett, Kate 160–61, 162–3 The Spirit Level 159–60, 161, 163, 165, 200, 225, 270 pig-iron handling 194–5 Pixar 91, 250–51 placebo effect 174–5 PolitiFact 81, 270, 271 Pollock, Joycelyn 24–5 population density 151 Porras, Jerry 110–12 positive correlation 165 post hoc ergo propter hoc fallacy 164–5 post-mortem 255 poverty 160–61 power distance 249 power posing 221 Power, Thomas 239 PowerPoint 251–2 pre-mortem 255 precision 170–73 predictions 151, 152, 154, 167 Presence (Cuddy) 269, 270, 274 Preston, Elizabeth 259, 260, 261 Principles of Scientific Management, The (Taylor) 195 processing power 248–52 productivity 193–4 professorship 227 proof 198–9 Psychological Science 221, 269, 270 psychometric tests 108–9 publication bias 220 publication process 273–4 endorsements 274–5 quantitative easing 226 Quest, Richard 133–4 Quote Investigator 270 racial discrimination 175–6 Rambotti, Simone 161 random events 107 randomized control trials (RCTs) 173, 174–6, 189–90 instruments 177–8 limitations of 177 parachute experiment 208–9 randomness 170–73 range of values 206, 207–8 Raquel, Ronald 23 Rassemblement National 271–2 Reading Football Club 159 reasoning 264–5 red teams 254–5 reducing hierarchies 249–50 regression 136–7, 139, 140, 158, 161–2 common causes 158–9 regression coefficient 136 regulation 123 Reifler, Jason 270 Reis, Ebru 4 relevance of instruments 179, 180, 182, 190 Remainers (Brexit) 213, 214 replication studies 221 representative samples 96, 99, 102 research 4–5 best practice 273 boardroom diversity 74–5 confirming opinions 5 data mining 119–20 diversity 117–19 gender diversity see gender diversity open access 35 rigour 8, 117–19 sources 5–6 unvetted 218 research qualifications 226–7 resilient companies 78 Responsible Investment Advisory Committee 248–9 Retraction Watch 269, 270 Reuters 79 reverse causation 164–5, 167, 170, 187 reverse engineering 94, 107–8 review papers 222 Reyes the Entrepreneur 95, 97 rhetoric 215 rheumatism experiment 174 Rice-Davies, Mandy 76–7, 226 ridiculousness of instruments 180–81 Rogers, David 47, 207 Rosenzweig, Phil 111 Ross, Lee 30–31, 259 Rossmo, Kim 24–5 Rothschild, Jesse 221–2 Royal London Asset Management 248 Rozenblit, Leonid 251 Rozin, Paul 51–2 rules 66 Rusk, Dean 240, 244 sailors 170–72 Sainsbury’s 76–7 sample mining 131–3, 141 defending against 133–5 sample size 100 San Francisco Business Times 219 Sanders, Bernie 82 scaffolding 264–5, 281 Schieble, Joanne 89 Scholar’s Mate 32–3 school curriculum 196 schools choice of 169 collective learning 169 competition between 168–9 Schultz, George 20 scientific consensus 222, 233 scientific culture 253–5, 258 scientific curiosity 263 scientific intelligence 263 scientific journals 134 debunking studies 221 papers for review 217–18 scientific management in education 195–7 failure of 198–9 in manufacturing 195 scientific method 24, 25, 98–101, 102–3, 124 scientists 220–21 Scott, Willard 53 scurvy 170–72 citrus fruits 173 endogenous remedy 172 exogenous remedy 172, 173 Select Committee on Business 3–5 CEOs’s executive pay report 67–9 selected samples 95, 99, 102, 109–10, 111 self-help books 229 self-interest 215 semiconductors 53–4 ShareAction 76–7 shareholder returns 120–21 shareholder value 69, 70–71, 71, 85, 86 sharing information 230–32 shoulders of giants 217–18 shovelling technique (Taylor) 194 significance level 100 silent majority 247 silent starts 246, 251–2, 257 Silicon Valley Bank 28 Silicon Valley Business Journal 219 Sinek, Simon 69, 71, 93, 94, 107, 200, 229 sleep 71–3 Sloan, Alfred 254 smarter 
thinking see thinking smarter smoking 163–4, 202 Snowdon, Christopher 161 social distancing 75–6 social diversity 241–2 social media 10, 230–31, 231, 282 Soeters, Joseph 249 soldiering 193 Spirit Level Delusion, The (Snowdon) 161 Spirit Level, The (Pickett and Wilkinson) 159–60, 161, 163, 165, 200, 225, 270 sports impact on stock market 126–9, 134 mood and emotions 128–9 spurious correlations 122, 127, 141 St Paul’s 159 Start with Why (Sinek) 93, 270 statements 13, 59–88 accepted as facts 12 are not facts 87–8 death panel episode 80–81 inaccuracy 59–63 misrepresentation 74–6 choosing words carefully 71–5 lack of sources 81–2 misportrayal 69–71 misrepresentation 74–6 smarter thinking 283–5 that can never be facts 82–8 examining evidence 84–6, 88 exploring alternative explanations 86 twin biases 83–4 verifying as facts 73 statistical literacy 262–3, 264, 281 statistical significance 100–101, 120, 122, 137 statistics 161 Bayesian inference 23–4 Staw, Barry 107–8, 166 stock market 95–7 brokers 96–8 frequent trading 97–101 sentiment 127, 128 sport, impact on 126–9, 134 traders 96–8, 125–6, 128 trading floor 125–6 stories 104–5, 108 Strange, Angela 85–6 striatum 30 Sun Tzu 11 Sunday Times Rich List 108 superlatives 85, 86 survey papers 222 sustainability 8–9, 215, 267 sustainable investments 54, 83–5 System 1 thought process 29 System 2 thought process 29 systematic reviews 222, 233 Taber, Charles 36–7 Taleb, Nassim 106, 274 targets 49–50 Taylor, Frederick Winslow 192–4 Taylor, General Maxwell 237 tech industry 157 TED 9, 205–6 Telegraph, The 215 10,000-hours rule 6, 64, 66, 104 chasing dreams 64–6 claim 59–61 disheartening 66 evidence 62–3 Tesla 152 test groups 139 natural experiments 185–6 randomized control trials (RCTs) 174–5, 177 test samples 99 theory of everything 199, 200, 204 Theranos 20–21, 219, 226 Thinking, Fast and Slow (Kahneman) 29 thinking smarter data 286–8 evidence 288–9 example of 290–92 facts 285–6 individuals 213–34 organizations 235–58 preliminaries 283 shortcuts 289 societies 259–82 statements 283–5 studies 289–90 Thirteen Days: A Memoir of the Cuban Missile Crisis (Kennedy) 244 Thomson Reuters 132 TikTok 20 time-series studies 30, 31 tolerating failure 250–51 Tolstoy, Leo 216–17 Tonight Show, The 40 traders 96–8, 125–6, 128 Trades Union Congress (TUC) 4–5 traits 149–50, 166 Trevithick, Richard 171 Trouble with Europe, The (Bootle) 213–14 Trump, Donald 6–7, 271 trust 153 Trust across America 153 trustworthy companies 153 truth 12, 13, 21–6 Tsoutsoura, Margarita 241 twin biases 56, 66–7, 73, 83–4, 106, 199 Twitter (later X) 230–31 2-4-6 brainteaser 33, 260, 261 UBS 250 unexplained components of instruments 178–9 United States of America (USA) death panels 80–81 healthcare 80–81 universal statements 85 universality 199, 201 unnatural experiments 186–7 US Military Academy 202–3 USSR 235, 244 see also Cuban Missile Crisis vaccination 267–8 Venkateswaran, Anand 4 verification 220 Vigen, Tyler 122 Vioxx 220 Vogue diet 40 Vogue magazine 40 voluntary choice inputs 149, 166 voting 247 Wakefield, Andrew 221 Walker, Matthew 71–3 Wall Street Journal 84, 219 Wason, Peter 33, 260, 261 water intoxication 47 weight loss 40–41 Welch, Jack 71, 85, 86 West Point 202–3, 204, 206 The Whole Pantry app 17–18 Whole Pantry, The 18, 273 Why We Sleep (Walker) 71–3, 270 Wikipedia 200 Wilkinson, Richard 160–61, 162–3 The Spirit Level 159–60, 161, 163, 165, 200, 225, 270 work-life balance 156 Wright Brothers 200 wrongful convictions 24–5 Xinyi (fictional name) 116–19 data mining see data mining see 
also Fixit (fictional company) Yeh, Robert 208–9 Zhuravskaya, Ekaterina 271–2

242 Pickett, Kate 160–61, 162–3 The Spirit Level 159–60, 161, 163, 165, 200, 225, 270 pig-iron handling 194–5 Pixar 91, 250–51 placebo effect 174–5 PolitiFact 81, 270, 271 Pollock, Joycelyn 24–5 population density 151 Porras, Jerry 110–12 positive correlation 165 post hoc ergo propter hoc fallacy 164–5 post-mortem 255 poverty 160–61 power distance 249 power posing 221 Power, Thomas 239 PowerPoint 251–2 pre-mortem 255 precision 170–73 predictions 151, 152, 154, 167 Presence (Cuddy) 269, 270, 274 Preston, Elizabeth 259, 260, 261 Principles of Scientific Management, The (Taylor) 195 processing power 248–52 productivity 193–4 professorship 227 proof 198–9 Psychological Science 221, 269, 270 psychometric tests 108–9 publication bias 220 publication process 273–4 endorsements 274–5 quantitative easing 226 Quest, Richard 133–4 Quote Investigator 270 racial discrimination 175–6 Rambotti, Simone 161 random events 107 randomized control trials (RCTs) 173, 174–6, 189–90 instruments 177–8 limitations of 177 parachute experiment 208–9 randomness 170–73 range of values 206, 207–8 Raquel, Ronald 23 Rassemblement National 271–2 Reading Football Club 159 reasoning 264–5 red teams 254–5 reducing hierarchies 249–50 regression 136–7, 139, 140, 158, 161–2 common causes 158–9 regression coefficient 136 regulation 123 Reifler, Jason 270 Reis, Ebru 4 relevance of instruments 179, 180, 182, 190 Remainers (Brexit) 213, 214 replication studies 221 representative samples 96, 99, 102 research 4–5 best practice 273 boardroom diversity 74–5 confirming opinions 5 data mining 119–20 diversity 117–19 gender diversity see gender diversity open access 35 rigour 8, 117–19 sources 5–6 unvetted 218 research qualifications 226–7 resilient companies 78 Responsible Investment Advisory Committee 248–9 Retraction Watch 269, 270 Reuters 79 reverse causation 164–5, 167, 170, 187 reverse engineering 94, 107–8 review papers 222 Reyes the Entrepreneur 95, 97 rhetoric 215 rheumatism experiment 174 Rice-Davies, Mandy 76–7, 226 ridiculousness of instruments 180–81 Rogers, David 47, 207 Rosenzweig, Phil 111 Ross, Lee 30–31, 259 Rossmo, Kim 24–5 Rothschild, Jesse 221–2 Royal London Asset Management 248 Rozenblit, Leonid 251 Rozin, Paul 51–2 rules 66 Rusk, Dean 240, 244 sailors 170–72 Sainsbury’s 76–7 sample mining 131–3, 141 defending against 133–5 sample size 100 San Francisco Business Times 219 Sanders, Bernie 82 scaffolding 264–5, 281 Schieble, Joanne 89 Scholar’s Mate 32–3 school curriculum 196 schools choice of 169 collective learning 169 competition between 168–9 Schultz, George 20 scientific consensus 222, 233 scientific culture 253–5, 258 scientific curiosity 263 scientific intelligence 263 scientific journals 134 debunking studies 221 papers for review 217–18 scientific management in education 195–7 failure of 198–9 in manufacturing 195 scientific method 24, 25, 98–101, 102–3, 124 scientists 220–21 Scott, Willard 53 scurvy 170–72 citrus fruits 173 endogenous remedy 172 exogenous remedy 172, 173 Select Committee on Business 3–5 CEOs’s executive pay report 67–9 selected samples 95, 99, 102, 109–10, 111 self-help books 229 self-interest 215 semiconductors 53–4 ShareAction 76–7 shareholder returns 120–21 shareholder value 69, 70–71, 71, 85, 86 sharing information 230–32 shoulders of giants 217–18 shovelling technique (Taylor) 194 significance level 100 silent majority 247 silent starts 246, 251–2, 257 Silicon Valley Bank 28 Silicon Valley Business Journal 219 Sinek, Simon 69, 71, 93, 94, 107, 200, 229 sleep 71–3 Sloan, Alfred 254 smarter 
thinking see thinking smarter smoking 163–4, 202 Snowdon, Christopher 161 social distancing 75–6 social diversity 241–2 social media 10, 230–31, 231, 282 Soeters, Joseph 249 soldiering 193 Spirit Level Delusion, The (Snowdon) 161 Spirit Level, The (Pickett and Wilkinson) 159–60, 161, 163, 165, 200, 225, 270 sports impact on stock market 126–9, 134 mood and emotions 128–9 spurious correlations 122, 127, 141 St Paul’s 159 Start with Why (Sinek) 93, 270 statements 13, 59–88 accepted as facts 12 are not facts 87–8 death panel episode 80–81 inaccuracy 59–63 misrepresentation 74–6 choosing words carefully 71–5 lack of sources 81–2 misportrayal 69–71 misrepresentation 74–6 smarter thinking 283–5 that can never be facts 82–8 examining evidence 84–6, 88 exploring alternative explanations 86 twin biases 83–4 verifying as facts 73 statistical literacy 262–3, 264, 281 statistical significance 100–101, 120, 122, 137 statistics 161 Bayesian inference 23–4 Staw, Barry 107–8, 166 stock market 95–7 brokers 96–8 frequent trading 97–101 sentiment 127, 128 sport, impact on 126–9, 134 traders 96–8, 125–6, 128 trading floor 125–6 stories 104–5, 108 Strange, Angela 85–6 striatum 30 Sun Tzu 11 Sunday Times Rich List 108 superlatives 85, 86 survey papers 222 sustainability 8–9, 215, 267 sustainable investments 54, 83–5 System 1 thought process 29 System 2 thought process 29 systematic reviews 222, 233 Taber, Charles 36–7 Taleb, Nassim 106, 274 targets 49–50 Taylor, Frederick Winslow 192–4 Taylor, General Maxwell 237 tech industry 157 TED 9, 205–6 Telegraph, The 215 10,000-hours rule 6, 64, 66, 104 chasing dreams 64–6 claim 59–61 disheartening 66 evidence 62–3 Tesla 152 test groups 139 natural experiments 185–6 randomized control trials (RCTs) 174–5, 177 test samples 99 theory of everything 199, 200, 204 Theranos 20–21, 219, 226 Thinking, Fast and Slow (Kahneman) 29 thinking smarter data 286–8 evidence 288–9 example of 290–92 facts 285–6 individuals 213–34 organizations 235–58 preliminaries 283 shortcuts 289 societies 259–82 statements 283–5 studies 289–90 Thirteen Days: A Memoir of the Cuban Missile Crisis (Kennedy) 244 Thomson Reuters 132 TikTok 20 time-series studies 30, 31 tolerating failure 250–51 Tolstoy, Leo 216–17 Tonight Show, The 40 traders 96–8, 125–6, 128 Trades Union Congress (TUC) 4–5 traits 149–50, 166 Trevithick, Richard 171 Trouble with Europe, The (Bootle) 213–14 Trump, Donald 6–7, 271 trust 153 Trust across America 153 trustworthy companies 153 truth 12, 13, 21–6 Tsoutsoura, Margarita 241 twin biases 56, 66–7, 73, 83–4, 106, 199 Twitter (later X) 230–31 2-4-6 brainteaser 33, 260, 261 UBS 250 unexplained components of instruments 178–9 United States of America (USA) death panels 80–81 healthcare 80–81 universal statements 85 universality 199, 201 unnatural experiments 186–7 US Military Academy 202–3 USSR 235, 244 see also Cuban Missile Crisis vaccination 267–8 Venkateswaran, Anand 4 verification 220 Vigen, Tyler 122 Vioxx 220 Vogue diet 40 Vogue magazine 40 voluntary choice inputs 149, 166 voting 247 Wakefield, Andrew 221 Walker, Matthew 71–3 Wall Street Journal 84, 219 Wason, Peter 33, 260, 261 water intoxication 47 weight loss 40–41 Welch, Jack 71, 85, 86 West Point 202–3, 204, 206 The Whole Pantry app 17–18 Whole Pantry, The 18, 273 Why We Sleep (Walker) 71–3, 270 Wikipedia 200 Wilkinson, Richard 160–61, 162–3 The Spirit Level 159–60, 161, 163, 165, 200, 225, 270 work-life balance 156 Wright Brothers 200 wrongful convictions 24–5 Xinyi (fictional name) 116–19 data mining see data mining see 
also Fixit (fictional company) Yeh, Robert 208–9 Zhuravskaya, Ekaterina 271–2 Founded in 1893, UNIVERSITY OF CALIFORNIA PRESS publishes bold, progressive books and journals on topics in the arts, humanities, social sciences, and natural sciences—with a focus on social justice issues—that inspire thought and action among readers worldwide.

pages: 367 words: 97,136

Beyond Diversification: What Every Investor Needs to Know About Asset Allocation
by Sebastien Page
Published 4 Nov 2020

Several other versions of the ARCH model have been proposed to incorporate fat tails, asymmetries in volatility (the fact that volatility spikes up more than down), exponential weights, dynamic correlations, etc. However, Marra shows that for US stocks, most sophisticated models, whether of the historical or ARCH classes, barely outperform the random walk model. The differences in model effectiveness don’t look statistically significant. Other issues with sophisticated models include publication bias (only the good results get published), as well as a related, important issue: the possibility that these models may overfit the in-sample data. It’s hard to argue that any one specific model should consistently outperform simply extrapolating recent volatility. Aside from a slight advantage for volatility estimates derived from options prices, Poon and Granger find that across 93 academic studies, there’s no clear winner of the great risk forecasting horse race.
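
A minimal sketch of this horse race on simulated data: a random-walk forecast (next-day variance equals trailing realized variance) against a RiskMetrics-style exponentially weighted estimate. The GARCH-style simulation parameters and the 0.94 decay factor are illustrative assumptions, not figures from the book.

import numpy as np

rng = np.random.default_rng(0)

# Simulate daily returns with GARCH(1,1)-style volatility clustering
# (parameters are invented, not calibrated to any market).
T, omega, alpha, beta = 5000, 1e-6, 0.08, 0.90
r = np.empty(T)
var = omega / (1 - alpha - beta)              # start at the unconditional variance
for t in range(T):
    r[t] = rng.normal(0.0, np.sqrt(var))
    var = omega + alpha * r[t] ** 2 + beta * var

# Random-walk forecast: next-day variance = trailing 21-day realized variance.
rw = np.array([r[t - 21:t].var() for t in range(21, T)])

# RiskMetrics-style EWMA forecast with the conventional 0.94 decay.
lam = 0.94
ewma = np.empty(T)
ewma[0] = r[:21].var()
for t in range(1, T):
    ewma[t] = lam * ewma[t - 1] + (1 - lam) * r[t - 1] ** 2

realized = r[21:] ** 2                        # noisy proxy for true variance
print("random-walk MSE:", np.mean((rw - realized) ** 2))
print("EWMA MSE:      ", np.mean((ewma[21:] - realized) ** 2))

Because squared returns are such a noisy proxy for true variance, the two error figures typically come out close, which is the excerpt’s point: beating naive extrapolation of recent volatility by a statistically convincing margin is hard.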

Indeed, the strategy appears to work well across risk forecast methodologies, asset classes (stocks, bonds, currencies), factors/risk premiums, regions, and time periods. These results suggest that any asset allocation process can be improved if we incorporate volatility forecasts. But a few caveats apply. Cynics may argue that only backtests that generate interesting results get published (earlier I mentioned publication bias). Authors often make unrealistic assumptions about implementation. For example, they assume portfolio managers can rebalance everything at the closing price of the same day the signal is generated. Worse, some ignore transaction costs altogether. And a more subtle but key caveat is that some strategies do not use budget constraints, such that part of the alpha may come from a systematically long exposure to equity, duration, or other risk premiums versus the static benchmark.
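
These caveats are easy to make concrete. The sketch below, using simulated returns and invented cost numbers, shows how a volatility-targeting backtest’s Sharpe ratio degrades once the signal is executed with a one-day lag and turnover costs are charged; it does not reproduce any backtest from the book.

import numpy as np

rng = np.random.default_rng(1)

# Illustrative daily returns with volatility clustering (not real data).
T, omega, alpha, beta = 5000, 1e-6, 0.08, 0.90
r = np.empty(T)
var = omega / (1 - alpha - beta)
for t in range(T):
    r[t] = rng.normal(0.0003, np.sqrt(var))   # small positive drift
    var = omega + alpha * r[t] ** 2 + beta * var

# EWMA variance estimate available at the end of day t.
lam = 0.94
v = np.empty(T)
v[0] = r[:50].var()
for t in range(1, T):
    v[t] = lam * v[t - 1] + (1 - lam) * r[t] ** 2
w = np.clip(0.10 / np.sqrt(252 * v), 0, 2)    # scale to a 10% volatility target

def sharpe(pnl):
    return pnl.mean() / pnl.std() * np.sqrt(252)

cost = 0.0005                                  # assumed 5 bps per unit of turnover
same_day = w[1:] * r[1:]                       # day-t signal traded on day t: look-ahead
lagged = w[:-1] * r[1:]                        # signal executed the next day
net = lagged - cost * np.abs(np.diff(w))       # charge turnover costs
for name, pnl in [("same-day (look-ahead)", same_day),
                  ("one-day lag", lagged),
                  ("lag + costs", net)]:
    print(f"{name:21s} Sharpe: {sharpe(pnl):.2f}")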

pages: 357 words: 110,072

Trick or Treatment: The Undeniable Facts About Alternative Medicine
by Edzard Ernst and Simon Singh
Published 17 Aug 2008

The crude reason for blaming Chinese researchers for the discrepancy is that their results are simply too good to be true. This criticism has been confirmed by careful statistical analyses of all the Chinese results, which demonstrate beyond all reasonable doubt that Chinese researchers are guilty of so-called publication bias. Before explaining the meaning of publication bias, it is important to stress that this is not necessarily a form of deliberate fraud, because it is easy to conceive of situations when it can occur due to an unconscious pressure to get a particular result. Imagine a Chinese researcher who conducts an acupuncture trial and achieves a positive result.

The key point is that this second piece of research might never be published for a whole range of possible reasons: maybe the researcher does not see it as a priority, or he thinks that nobody will be interested in reading about a negative result, or he persuades himself that this second trial must have been badly conducted, or he feels that this latest result would offend his peers. Whatever the reason, the researcher ends up having published the positive results of the first trial, while leaving the negative results of the second trial buried in a drawer. This is publication bias. When this sort of phenomenon is multiplied across China, then we have dozens of published positive trials, and dozens of unpublished negative trials. Therefore, when the WHO conducted a review of the published literature that relied heavily on Chinese research, its conclusion was bound to be skewed – such a review could never take into account the unpublished negative trials.
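
The file-drawer mechanism described here can be reproduced in a few lines of simulation: run many small trials of a treatment with no real effect, “publish” only the ones that come out positive and statistically significant, and pool the published results. The trial sizes and thresholds below are arbitrary illustrations.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

n_trials, n_per_arm = 200, 30
published = []
for _ in range(n_trials):
    treated = rng.normal(0.0, 1.0, n_per_arm)   # the true effect is exactly zero
    control = rng.normal(0.0, 1.0, n_per_arm)
    t, p = stats.ttest_ind(treated, control)
    if t > 0 and p < 0.05:                      # only positive, significant results
        published.append(treated.mean() - control.mean())

print(f"published {len(published)} of {n_trials} trials")
print(f"pooled 'effect' among published trials: {np.mean(published):.2f} SDs")

Even though the true effect is exactly zero, the few trials that clear the bar average out to a seemingly respectable effect, which is why a review restricted to the published literature is bound to be skewed.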

pages: 312 words: 83,998

Testosterone Rex: Myths of Sex, Science, and Society
by Cordelia Fine
Published 13 Jan 2017

Smaller studies by contrast, being subject to more random error because of their small, idiosyncratic samples, will be scattered over a wider range of effect sizes. Some small studies will greatly overestimate a difference; others will greatly underestimate it (or even “flip” it in the wrong direction). The next part is simple but brilliant. If there isn’t publication bias toward reports of greater male risk taking, these over- and underestimates of the sex difference should be symmetrical around the “true” value indicated by the very large studies. This, with quite a bit of imagination, will make the plot of the data look like an upside-down funnel. (Personally, my vote would have been to call it the candlestick plot, but I wasn’t consulted.)
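
A sketch of that logic on simulated studies: when every study is visible, small and large studies alike scatter symmetrically around the true effect; once non-significant results are filtered out, the small studies’ average drifts upward and the funnel loses one of its lower corners. All the numbers here are invented for illustration.

import numpy as np

rng = np.random.default_rng(3)

true_d, n_studies = 0.2, 2000
n = rng.integers(10, 500, n_studies)        # per-group sample sizes
se = np.sqrt(2 / n)                         # approximate standard error of Cohen's d
d = rng.normal(true_d, se)                  # each study's estimated effect

# Selective publication: only significant results (|d| > 1.96 SE) survive.
published = np.abs(d) > 1.96 * se

for label, mask in [("all studies", np.ones(n_studies, dtype=bool)),
                    ("published only", published)]:
    small = mask & (n < 50)
    large = mask & (n >= 200)
    print(f"{label:14s} mean effect: small studies {d[small].mean():+.2f}, "
          f"large studies {d[large].mean():+.2f}")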

Meta-analysis of the relationship between digit-ratio 2D:4D and aggression. Personality and Individual Differences, 51(4), 381–386. A small correlation was found for men only (r = –.08 for the left hand and r = –.07 for the right hand), but this reduced to a nonsignificant correlation of r = –.03 after correction for weak publication bias. 56. Voracek et al. (2010), ibid. The authors note the complexity of the biological system thought to underlie sensation seeking, as well as the many psychosocial factors known to influence it, and thus conclude that “Given these knowns, it appears unsurprising that rather simplistic approaches, such as studies only utilizing 2D:4D (a putative, not yet sufficiently validated marker of prenatal testosterone), are prone to be barren of results.”

pages: 270 words: 79,992

The End of Big: How the Internet Makes David the New Goliath
by Nicco Mele
Published 14 Apr 2013

Peer-reviewed publication takes on average about two years, and many scientific journals cost thousands of dollars a year for subscriptions. Not only that, but if scientific research fails, it usually does not get written up and published. Who wants to publish an article that says, “we tried this and it didn’t work”? “Publication bias” is a well-known challenge in academia. A major review of more than 4,600 peer-reviewed academic papers across a range of disciplines and a range of countries found that over the last twenty years, positive results increased by almost 25%.26 And yet failure is a crucial part of the scientific process.

wikilang=en&wikifam=.wikipedia.org&grouped=on&page=Abraham_Lincoln 24. http://storify.com/jcstearns/50-years-after-the-vast-wast 25. http://www.nytimes.com/2012/01/17/science/open-science-challenges-journal-tradition-with-web-collaboration.html?pagewanted=all 26. http://www.theatlantic.com/health/archive/2011/10/publication-bias-may-permanently-damage-medical-research/246616/ 27. http://usefulchem.wikispaces.com/ 28. David Weinberger, Too Big to Know: Rethinking Knowledge Now That the Facts Aren’t the Facts, Experts Are Everywhere, and the Smartest Person in the Room Is the Room (New York: Basic Books, 2011), 139. 29.

Humble Pi: A Comedy of Maths Errors
by Matt Parker
Published 7 Mar 2019

When a company runs a drug trial on some new medication or medical intervention they have been working on, they want to show that it performs better than either no intervention or other current options. At the end of a long and expensive trial, if the results show that a drug has no benefit (or a negative one), there is very little motivation for the company to publish that data. It’s a kind of ‘publication bias’. An estimated half of all drug-trial results never get published. A negative result from a drug trial is twice as likely to remain unpublished as a positive result. Withholding any drug-trial data can put people’s lives at risk, possibly more so than any other mistake I’ve mentioned in this book.

The air force tried to get an academic anthropological department from a university involved, but no one was interested. 2 The extra sets of data were made by slowly evolving the data via tiny changes which moved the data points towards a new picture but didn’t change the averages and standard deviations. The software to do this has been made freely available. 3 Their study was finally published thirteen years later, in 1993, as an example of publication bias. 4 In the interest of full disclosure, this is before I was writing for the Guardian myself, but the article was written by my friend Ben Goldacre, of AllTrials fame. Twelve: Tltloay Rodanm 1 At the time of writing, ERNIE is no longer on public display at the Science Museum. 2 It pleases me greatly that part of the required word count of my book has now officially been randomly generated. 3 This was still in the era when the US government controlled the export of software with strong encryption, as they considered such cryptography as munitions.

pages: 338 words: 104,815

Nobody's Fool: Why We Get Taken in and What We Can Do About It
by Daniel Simons and Christopher Chabris
Published 10 Jul 2023

A more subtle variant of reporting the same results for different studies is known as “salami slicing,” the act of reporting different outcomes from a single study across multiple papers. For an investigation of this form of potentially deceptive conduct in studies claiming that action video games increase cognitive abilities, see J. Hilgard, G. Sala, W. R. Boot, and D. J. Simons, “Overestimation of Action-Game Training Effects: Publication Bias and Salami Slicing,” Collabra: Psychology 5 (2019): 30 [https://doi.org/10.1525/collabra.231]. 31. Cornell has not released the full results of its investigations, but the provost issued a statement: “Statement of Cornell University Provost Michael I. Kotlikoff,” Cornell University [https://statements.cornell.edu/2018/20180920-statement-provost-michael-kotlikoff.cfm].

Gobet, “Video Game Training Does Not Enhance Cognitive Ability: A Comprehensive Meta-Analytic Investigation,” Psychological Bulletin 144 (2018): 111–139 [https://psycnet.apa.org/doi/10.1037/bul0000139]; J. Hilgard, G. Sala, W. R. Boot, and D. J. Simons, “Overestimation of Action-Game Training Effects: Publication Bias and Salami Slicing,” Collabra: Psychology 5 (2019) [https://doi.org/10.1525/collabra.231]. 28. Original study: D. R. Carney, A. J. Cuddy, and A. J. Yap, “Power Posing: Brief Nonverbal Displays Affect Neuroendocrine Levels and Risk Tolerance,” Psychological Science 21 (2010): 1363–1368. TED talk: Amy Cuddy, “Your Body Language May Shape Who You Are,” YouTube, October 1, 2012 [https://www.ted.com/talks/amy_cuddy_your_body_language_may_shape_who_you_are].

pages: 586 words: 159,901

Wall Street: How It Works And for Whom
by Doug Henwood
Published 30 Aug 1998

They paused for a few pages in the middle of their book, Myth and Measurement, to review some reasons why the academic literature has almost unanimously found the minimum wage guilty as charged. They surmised that earlier studies showing that higher wages reduced employment were the result of “publication bias” among journal editors. They also surmised, very diplomatically, that economists have been aware of this bias, and played those notorious scholarly games, “specification searching and data mining” — bending the numbers to obtain the desired result. They also noted that some of the early studies were based on seriously flawed data, but since the results were desirable from both the political and professional points of view, the flaws went undiscovered for several years.

pages: 173 words: 14,313

Peers, Pirates, and Persuasion: Rhetoric in the Peer-To-Peer Debates
by John Logie
Published 29 Dec 2006

In this understanding, the central forms of intellectual property protection (i.e., copyrights and patents) are offered by the people, via Congress, and for the people, as an incentive for further production from authors and inventors. This represents a subtle but significant break from a broader European tradition in which the so-called “natural rights” of the author or inventor function as the bases for intellectual property protections. The Supreme Court’s ringing endorsement of copyright’s inherent public bias in the 1991 Feist case (once again: “The primary objective of copyright is not to reward the labor of authors, but ‘[t]o promote the Progress of Science and useful Arts.’”) almost certainly emboldened Robertson as he set about developing the my.mp3.com service. Robertson even agreed with the RIAA that Napster was enabling piracy.

pages: 266 words: 67,272

Fun Inc.
by Tom Chatfield
Published 13 Dec 2011

Its author, Dr Christopher John Ferguson, an assistant professor of psychology at Texas A&M International University, set out to compare every article published in a peer-reviewed journal between 1995 and April 2007 that in some way investigated the effect of playing violent video games on some measure of aggressive behaviour. A total of seventeen published studies matched these criteria – and Ferguson’s conclusions were unexpectedly unequivocal. ‘Once corrected for publication bias,’ he reported, ‘studies of video game violence provided no support for the hypothesis that violent video game-playing is associated with higher aggression.’ Moreover, he added, the question ‘do violent games cause violence?’ is itself flawed in that ‘it assumes that such games have only negative effects and ignores the possibility of positive effects’ such as the possibility that violent games allow ‘catharsis’ of a kind in their players.

pages: 218 words: 70,323

Critical: Science and Stories From the Brink of Human Life
by Matt Morgan
Published 29 May 2019

It is estimated that over half of all studies are never completed and data from one-third of trials are never published. Of those that are, only half are read by more than just two people. Furthermore, journals are more likely to publish papers with positive results, conducted by well-known groups, by men and from Western countries. This introduces yet more bias, known as publication bias. So, we now have bias squared. It is on this flimsy basis that we decide how to treat patients. This selective publishing should not be acceptable in medicine. The former editor of the British Medical Journal has argued that the entire medical journal industry should be disbanded. The powerful ‘all trials’ movement led by Dr Ben Goldacre aims to publicise these issues surrounding clinical-trial data loss, manipulation and concealment.

pages: 231 words: 69,673

How Cycling Can Save the World
by Peter Walker
Published 3 Apr 2017

CHAPTER 7 1 Michael Polhamus, “Bill Would Require Neon Clothes, Government ID for Cyclists,” Jackson Hole News and Guide, January 30, 2015, http://www.jhnewsandguide.com/jackson_hole_daily/local/bill-would-require-neon-clothes-government-id-for-cyclists/article_d53b9712-2e93-517d-9e33-8f13d693ba21.html. 2 Wes Johnson, “Missouri Bill Requires Bicyclists to Fly 15-Foot Flag on Country Roads,” Springfield News-Leader, January 14, 2016. 3 “School Pupils Encouraged to Wear Hi-Vis Vests in Road Safety Scheme,” Grimsby Telegraph, January 23, 2012, http://www.grimsbytelegraph.co.uk/school-pupils-encouraged-wear-hi-vis-vests-road/story-15010565-detail/story.html. 4 Chris Boardman, “Why I Didn’t Wear a Helmet on BBC Breakfast,” BritishCycling.org, November 3, 2014, https://www.britishcycling.org.uk/campaigning/article/20141103-campaigning-news-Boardman--Why-I-didn-t-wear-a-helmet-on-BBC-Breakfast-0. 5 Nick Hussey, “Why My Cycling Clothing Company Uses Models without Helmets,” The Guardian, February 4, 2016, https://www.theguardian.com/environment/bike-blog/2016/feb/04/vulpine-bike-clothing-company-models-without-helmets-dont-hate-us. 6 Peter Jacobsen and Harry Rutter, “Cycling Safety,” in Pucher and Buehler, City Cycling, ch. 7. 7 “Helmets for Pedal Cyclists and for Users of Skateboards and Roller Skates,” European Committee for Standardization, 1997, http://www.mrtn.ch/pdf/en_1078.pdf. 8 R.G. Attewell, K. Glase, and M. McFadden, “Bicycle Helmet Efficacy: A Meta-Analysis,” Accident Analysis and Prevention 33 (2001). 9 Rune Elvik, “Publication bias and time-trend bias in meta-analysis of bicycle helmet efficacy: A re-analysis of Attewell, Glase and McFadden,” Accident Analysis and Prevention 43 (2011):1245–51. 10 E-mail exchange with the author. 11 Davis, Death on the Streets. 12 1985 Durbin-Harvey report, commissioned by UK Department of Transport from two professors of statistics. 13 Ian Walker, “Drivers Overtaking Bicyclists: Objective Data on the Effects of Riding Position, Helmet Use, Vehicle Type and Apparent Gender,” Accident Analysis and Prevention 39 (2007):417–25. 14 “Wearing a Helmet Puts Cyclists at Risk, Suggests Research,” University of Bath, September 11, 2016, http://www.bath.ac.uk/news/articles/archive/overtaking110906.html. 15 Tim Gamble and Ian Walker, “Wearing a Bicycle Helmet Can Increase Risk Taking and Sensation Seeking in Adults,” Psychological Science, 2016. 16 “Helmet Wearing Increases Risk Taking and Sensation Seeking,” University of Bath, January 25, 2016, http://www.bath.ac.uk/news/2016/01/25/helmet-wearing-risk-taking. 17 Fishman et al., “Barriers and Facilitators to Public Bicycle Scheme Use: A Qualitative Approach,” Transportation Research Part F: Traffic Psychology and Behaviour 15, Vol. 6 (2012):686–98. 18 Interview with the author. 19 N.C.

pages: 242 words: 67,233

McMindfulness: How Mindfulness Became the New Capitalist Spirituality
by Ronald Purser
Published 8 Jul 2019

As Walach puts it: “What is not answered is whether the true contribution is the mindfulness practice itself.”42 Positive effects could simply be attributed to having some downtime during the school day, or feeling heard in discussion. There is also the risk of “social desirability bias,” since children know they have been chosen as subjects in a study with expected improvements. Then there is the issue of publication bias, where only positive findings are published. A recent study by a group of psychologists at McGill University found that of the 124 randomized controlled studies they reviewed, 90% reported positive results.43 Such a number is quite high given the small sample sizes; a normal, non-biased threshold for this same sample size should be no more than 65%.
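
The 65 per cent figure is the kind of number a power calculation produces: with small samples, even a genuine effect should reach statistical significance in only a modest fraction of honest studies. A rough sketch of that reasoning, with assumed inputs rather than the McGill team’s actual ones:

import numpy as np
from scipy import stats

def power_two_sample(d, n_per_arm, alpha=0.05):
    """Power of a two-sided, two-sample t-test at true effect size d."""
    df = 2 * n_per_arm - 2
    nc = d * np.sqrt(n_per_arm / 2)         # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return (1 - stats.nct.cdf(t_crit, df, nc)) + stats.nct.cdf(-t_crit, df, nc)

# With small arms and a moderate assumed true effect (d = 0.5), only about
# a third to two-thirds of honest studies should come out significant.
for n in (20, 30, 50):
    print(f"n per arm = {n:3d}: power ≈ {power_two_sample(0.5, n):.0%}")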

pages: 481 words: 72,071

Why Has Nobody Told Me This Before?
by Dr. Julie Smith
Published 11 Jan 2022

L. (2013), ‘Exercise-Induced Endocannabinoid Signaling Is Modulated by Intensity’, European Journal of Applied Physiology, 113 (4), 869–75. Sanchez-Villegas, A., et al. (2013), ‘Mediterranean dietary pattern and depression: the PREDIMED randomized trial’, BMC Medicine, 11, 208. Schuch, F. B., Vancampfort, D., Richards, J., et al. (2016), ‘Exercise as a treatment for depression: A Meta-Analysis Adjusting for Publication Bias’, Journal of Psychiatric Research, 77, 24–51. Singh, N. A., Clements, K. M., & Fiatrone, M. A. (1997), ‘A Randomized Controlled Trial of the Effect of Exercise on Sleep’, Sleep, 20 (2), 95–101. Tops, M., Riese, H., et al. (2008), ‘Rejection sensitivity relates to hypocortisolism and depressed mood state in young women’, Psychoneuroendocrinology, 33 (5), 551–9.

pages: 299 words: 81,377

The No Need to Diet Book: Become a Diet Rebel and Make Friends With Food
by Plantbased Pixie
Published 7 Mar 2019

Very few of these programmes publish their results and tend to stick to individual anecdotes instead, but the limited research we do have suggests that the largest weight loss was around 3.2 per cent of body weight after two years.9 Are you underwhelmed? ’Cause I sure am. On top of all that, you have to consider publication bias – scientific journals are far more likely to publish a study that shows a significant effect than one that didn’t work. Weight-loss programmes in the workplace and in schools have been equally unsuccessful. Despite appearing to be very concerned about the students’ growing waistlines, very few schools actually assess the impact of making nutritional changes on pupils’ weight.

pages: 442 words: 85,640

This Book Could Fix Your Life: The Science of Self Help
by New Scientist and Helen Thomson
Published 7 Jan 2021

New standards of evidence were needed, more replication was essential and lots of previously accepted assumptions were now found lacking. Cuddy’s research took a particularly bad hit and was heavily criticised by peers and the media. One of the big problems with her research was that it didn’t pass the p-curve test – a statistical tool that detects ‘publication bias’.7 In simple terms, it tests whether researchers may have caused errors in their data by cherry-picking certain data points most likely to produce a publishable result, or perhaps just got lucky with their data. The power pose didn’t pass the p-curve test and was given the heave-ho. In 2018, it made something of a comeback.
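
A minimal sketch of the intuition behind the p-curve: among results that clear the .05 bar, a real effect piles up very small p-values (a right-skewed curve), while significant results harvested from null data are spread roughly evenly between 0 and .05. The simulation below is illustrative and has nothing to do with Cuddy’s actual data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def significant_pvalues(true_d, n_studies=5000, n=25):
    """Collect the p-values of simulated studies that cross the .05 threshold."""
    ps = []
    for _ in range(n_studies):
        a = rng.normal(true_d, 1.0, n)      # treatment group
        b = rng.normal(0.0, 1.0, n)         # control group
        p = stats.ttest_ind(a, b).pvalue
        if p < 0.05:
            ps.append(p)
    return np.array(ps)

for label, d in [("real effect (d = 0.6)", 0.6), ("null effect (d = 0)", 0.0)]:
    ps = significant_pvalues(d)
    print(f"{label}: {np.mean(ps < 0.025):.0%} of significant p-values fall below .025")

A flat curve (about half the significant p-values below .025) is what cherry-picking from null data produces; a real effect pushes that share far higher.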

Grain Brain: The Surprising Truth About Wheat, Carbs, and Sugar--Your Brain's Silent Killers
by David Perlmutter and Kristin Loberg
Published 17 Sep 2013

It concluded that “intake of saturated fat was not associated with an increased risk of coronary heart disease, stroke, or cardiovascular disease.” In comparing the lowest to the highest consumption of saturated fat, the actual risk for coronary heart disease was 19 percent lower in the group consuming the highest amount of saturated fat. The authors also stated: “Our results suggested a publication bias, such that studies with significant associations tended to be received more favorably for publication.” What the authors are implying is that when other studies presented conclusions that were more familiar to the mainstream (i.e., fat causes heart disease), not to mention more attractive to Big Pharma, they were more likely to get published.

pages: 362 words: 97,473

Sickening: How Big Pharma Broke American Health Care and How We Can Repair It
by John Abramson
Published 15 Dec 2022

If the chosen details created a study or included a population that did not represent the people most likely to take the drug in the real world, the study would then be said to lack external validity. * Determination of the credibility of evidence includes evaluation of “study design, risk of bias (study strengths and limitations), precision, consistency (variability in results between studies), directness (applicability), publication bias, magnitude of effect, and dose-response gradients.” * In the results sent to the French regulatory authority, DePuy reported eleven of the sixteen cup-system failures that had occurred in the first four years of the study. Perhaps the data presented to French authorities were more accurate because the legal consequence of misrepresentation of data sent to regulatory authorities can be much greater than that of misrepresentation of data in marketing claims.

pages: 362 words: 103,087

The Elements of Choice: Why the Way We Decide Matters
by Eric J. Johnson
Published 12 Oct 2021

Forest plots are not the whole story. We need to worry about two additional things at least. First, how did we select the studies to put in the plot? If we look only at published studies, it is likely that we will not plot studies that did not “work”—that is, have results that are not different from zero. Why? Because of publication bias: researchers submit, and journals accept, mostly those papers that do not fail. Researchers overcome this by searching all the online databases for results and by systematically asking people to share these studies. The second caution about forest plots is that they won’t necessarily detect which experiments have inflated their results, and/or shrunk their confidence intervals by what is called p-hacking—essentially doing many possible analyses and reporting only those that worked best.
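
The p-hacking caution is easy to quantify. In the sketch below, each “study” on null data tries several analysis choices and reports only the best one; for simplicity each look is modelled as an independent test, which overstates the inflation somewhat but shows the direction of the problem. All the numbers are invented.

import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def best_pvalue(n=40, n_looks=10):
    """Report only the best of several analyses run on null data."""
    best = 1.0
    for _ in range(n_looks):                # each look = one analysis choice
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(0.0, 1.0, n)
        best = min(best, stats.ttest_ind(a, b).pvalue)
    return best

honest = np.mean([best_pvalue(n_looks=1) < 0.05 for _ in range(2000)])
hacked = np.mean([best_pvalue(n_looks=10) < 0.05 for _ in range(2000)])
print(f"false-positive rate, one pre-specified analysis: {honest:.0%}")
print(f"false-positive rate, best of ten analyses:       {hacked:.0%}")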

pages: 274 words: 93,758

Phishing for Phools: The Economics of Manipulation and Deception
by George A. Akerlof , Robert J. Shiller and Stanley B Resor Professor Of Economics Robert J Shiller
Published 21 Sep 2015

Bero, Benjamin Djulbegovic, and Otavio Clark, “Pharmaceutical Industry Sponsorship and Research Outcome and Quality: Systematic Review,” British Medical Journal 326, no. 7400 (May 31, 2003): 1167. Bekelman, Li, and Gross also refer to two studies of “multiple reporting of studies with positive outcomes, further compounding publication bias.” 17. Bob Grant, “Elsevier Published 6 Fake Journals,” The Scientist, May 7, 2009, accessed November 24, 2014, http://classic.the-scientist.com/blog/display/55679/. See also Ben Goldacre, Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients (New York: Faber and Faber/Farrar, Straus and Giroux, 2012), pp. 309–10. 18.

pages: 371 words: 109,320

News and How to Use It: What to Believe in a Fake News World
by Alan Rusbridger
Published 26 Nov 2020

Too often statistics are trusted because they imply a level of precision, without an investigation of their validity. An important part of a journalist’s craft is to establish the incentives to report particular results. Governments can dislike uncomfortable news, particularly close to elections. Academics can be rewarded for new or exciting results, leading to publication bias – particularly if negative results do not see the light of day. A striking feature of public services across many countries has been the rise of performance monitoring, which records, analyses and publishes data in order to give the public a better idea of how systems or policies are implemented and can be improved.

pages: 410 words: 114,005

Black Box Thinking: Why Most People Never Learn From Their Mistakes--But Some Do
by Matthew Syed
Published 3 Nov 2015

*This has a rather obvious analog with what is sometimes called “defensive medicine,” in which clinicians use a host of unnecessary tests that protect their backs, but massively increase health-care costs. *Science is not without flaws, and an eye should always be kept on social and institutional obstacles to progress. Current concerns include publication bias (where only successful experiments are published in journals), the weakness of the peer review system, and the fact that many experiments do not appear to be replicable. For a good review of the issues, see: www.economist.com/news/briefing/21588057-scientists-think-self-correcting-alarming-degree-if-not-trouble.

The Economics Anti-Textbook: A Critical Thinker's Guide to Microeconomics
by Rod Hill and Anthony Myatt
Published 15 Mar 2010

These results have been the subject of a ‘lively’ debate, discussed in Card and Krueger’s 1995 book Myth and Measurement.4 Some idea of the tone of the debate can be had by noting that Valentine (1996) accused Card and Krueger (1994) of practising ‘politically correct’ economics, and of deliberately using suspect data in one of their studies. For their part, Card and Krueger present evidence of ‘publication bias’ against results contrary to textbook conventional wisdom (1995: 186). A feature of the debate, key for our discussion of methodology, is that one team of authors would consistently find results different from another team. David Levine, editor of the Berkeley journal Industrial Relations, attributed this phenomenon to ‘author biases’, which he diplomatically defined as ‘conscious or unconscious biases in searching for a robust equation’ (2001: 161).

Fix Your Gut: The Definitive Guide to Digestive Disorders
by John Brisson
Published 12 Apr 2014

Firstly, the papers identified in our study were limited to those openly published up to Jul 2012; it is possible that some related published or unpublished studies that might meet the inclusion criteria were missed, resulting in an inevitable bias, though the funnel plots and the Egger’s tests failed to show any significant publication bias. Secondly, the results should be interpreted with care because of the limited number and small sample sizes of the included studies. Thirdly, subgroup analyses regarding other confounding factors such as smoking status, age and gender have not been conducted in the present study because sufficient information could not be extracted from the primary literature.”
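
The excerpt name-checks funnel plots and Egger’s tests without explaining them. Egger’s test amounts to regressing each study’s standardized effect (effect divided by its standard error) on its precision (one over the standard error); an intercept far from zero flags the funnel-plot asymmetry that selective publication produces. A sketch on simulated studies, not on the meta-analysis quoted here:

import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# A biased literature: the true effect is zero, but only studies reporting a
# positive, significant result get "published" (all parameters invented).
effects, ses = [], []
while len(effects) < 40:
    n = rng.integers(10, 300)               # per-group sample size
    se = float(np.sqrt(2 / n))              # rough standard error of the effect
    d = rng.normal(0.0, se)
    if d / se > 1.96:                       # the file drawer keeps the rest
        effects.append(d)
        ses.append(se)
effects, ses = np.array(effects), np.array(ses)

# Egger's test: regress the standardized effect (d/SE) on precision (1/SE);
# an intercept significantly different from zero indicates asymmetry.
x, y = 1 / ses, effects / ses
(slope, intercept), cov = np.polyfit(x, y, 1, cov=True)
t_stat = intercept / np.sqrt(cov[1, 1])
p_value = 2 * stats.t.sf(abs(t_stat), len(x) - 2)
print(f"Egger intercept = {intercept:.2f} (p = {p_value:.4f})")

Run on a literature censored this way, the intercept comes out well above zero; on an uncensored literature it hovers near zero.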

The White Man's Burden: Why the West's Efforts to Aid the Rest Have Done So Much Ill and So Little Good
by William Easterly
Published 1 Mar 2006

For the quoted passage on the motivation behind this new aid, see http://www.whitehouse.gov/infocus/developingnations/. 11. http://www.mca.gov/countries_overview.html. 12. Esther Duflo and Michael Kremer, “Use of Randomization in the Evaluation of Development Effectiveness,” mimeograph, Harvard and MIT (2003), discuss publication bias. A classic paper on this problem is J. Bradford DeLong and Kevin Lang, “Are All Economic Hypotheses False?” Journal of Political Economy 100, no. 6 (December 1992): 1257–72. 13. UN Millennium Project Report, “Investing in Development: A Practical Plan to Achieve the Millennium Development Goals,” overview, box 8, p. 41. 14. Commission for Africa, “Our Common Interest: Report of the Commission for Africa,” p. 348; www.commissionforafrica.org/english/report/introduction.html. 15. Raghuram G.

pages: 636 words: 140,406

The Case Against Education: Why the Education System Is a Waste of Time and Money
by Bryan Caplan
Published 16 Jan 2018

Arum, Richard, and Yossi Shavit. 1995. “Secondary Vocational Education and the Transition from School to Work.” Sociology of Education 68 (3): 187–204. Ashenfelter, Orley, Colm Harmon, and Hessel Oosterbeek. 1999. “A Review of Estimates of the Schooling/Earnings Relationship, with Tests for Publication Bias.” Labour Economics 6 (4): 453–70. Assaad, Ragui. 1997. “The Effects of Public Sector Hiring and Compensation Policies on the Egyptian Labor Market.” World Bank Economic Review 11 (1): 85–118. Astin, Alexander. 2005–6. “Making Sense out of Degree Completion Rates.” Journal of College Student Retention 7 (1–2): 5–17.

pages: 742 words: 166,595

The Barbell Prescription: Strength Training for Life After 40
by Jonathon Sullivan and Andy Baker
Published 2 Dec 2016

In this pivotal chapter, we’ll survey some of that evidence. This is as good a time as any to point out an inconvenient truth about published scientific research: Like all other human endeavors, it’s about 90% shit by weight. This has always been true, and if anything it’s even more true now, as research effort is heavily impacted by publication bias, the pressures of academic life, and the corruption of science by industry, which has a decidedly non-scientific axe to grind.2 This sad fact of life does not exempt the biomedical literature,3 whether we’re talking about exercise medicine,4 cancer chemotherapy, diagnostic imaging, or even basic cell biology. So I want to be perfectly up front with you: Just as you can easily find studies showing that generally accepted and widely used medical therapies do not actually produce the desired results, so are there contrary findings in the literature on strength training for various disease states and their markers.5 This overview of the literature focuses on the overwhelming preponderance of the evidence, draws heavily on physiological reasoning and experience, and would of necessity involve my own very human biases, whether I admitted it or not.

pages: 687 words: 165,457

Exercised: The Science of Physical Activity, Rest and Health
by Daniel Lieberman
Published 2 Sep 2020

D., et al. (2019), Aerobic exercise for adult patients with major depressive disorder in mental health services: A systematic review and meta-analysis, Depression and Anxiety 36:39–53; Stubbs, B., et al. (2017), An examination of the anxiolytic effects of exercise for people with anxiety and stress-related disorders: A meta-analysis, Psychiatry Research 249:102–8; Schuch, F. B., et al. (2016), Exercise as a treatment for depression: A meta-analysis adjusting for publication bias, Journal of Psychiatric Research 77:42–51; Josefsson, T., Lindwall, M., and Archer, T. (2014), Physical exercise intervention in depressive disorders: Meta-analysis and systematic review, Scandinavian Journal of Medicine and Science in Sports 24:259–72; Wegner, M., et al. (2014), Effects of exercise on anxiety and depression disorders: Review of meta-analyses and neurobiological mechanisms, CNS and Neurological Disorders—Drug Targets 13:1002–14; Asmundson, G.

pages: 694 words: 197,804

The Pot Book: A Complete Guide to Cannabis
by Julie Holland
Published 22 Sep 2010

Potential confounders were addressed in these studies, including other drug use and the question of early psychotic symptoms (Zammit et al. 2002; Arseneault et al. 2004). However, as Weiser and others have pointed out, a two- to threefold increase in risk is not so sizable and could be explained by unrecognized confounding variables (Weiser and Noy 2005b). Finally, there is also the issue of potential publication bias; negative studies that find no association between an exposure and an outcome may be less likely to be published. Biological Plausibility Biological plausibility lends support to the hypothesis of a causal association. If there is a medical basis for the phenomenon in question, it makes more sense.

pages: 1,261 words: 294,715

Behave: The Biology of Humans at Our Best and Worst
by Robert M. Sapolsky
Published 1 May 2017

Yancey, “The Effects of Media Violence Exposure on Criminal Aggression: A Meta-analysis,” Criminal Justice and Behav 35 (2008): 772; C. Anderson et al., “Violent Video Game Effects on Aggression, Empathy, and Prosocial Behavior in Eastern and Western Countries: A Meta-analytic Review,” Psych Bull 136, 151; C. J. Ferguson, “Evidence for Publication Bias in Video Game Violence Effects Literature: A Meta-analytic Review,” Aggression and Violent Behavior 12 (2007): 470; C. Ferguson, “The Good, the Bad and the Ugly: A Meta-analytic Review of Positive and Negative Effects of Violent Video Games,” Psychiatric Quarterly 78 (2007): 309. 42. W.

pages: 1,199 words: 332,563

Golden Holocaust: Origins of the Cigarette Catastrophe and the Case for Abolition
by Robert N. Proctor
Published 28 Feb 2012

Switzer denounced the EPA’s report as highly flawed and “problematic,” peppering his critique with pejoratives like “astonishing,” “equivocal,” “deceptive and pointless,” and “serious difficulties.” The Stanford statistician accused the EPA of imprecision, inconsistency, faulty interpretations, improper extrapolations, use of “crude and disputable” estimates of exposure, bias from confounding and misclassification, improper treatment of publication bias, reliance on inconsistent or improperly recorded data, and several other flaws.39 Switzer was well paid for his services, receiving a total of $647,046 from CIAR and other grants in one two-year period. He was also paid handsomely for private consultations with cartel law firms. In one three-month period in the fall of 1991 he received $26,900 from Covington & Burling for consulting on “health effects of exposure to ETS in the workplace” and an analysis of “epidemiology of spousal smoke exposure and lung cancer.”

pages: 1,157 words: 379,558

Ashes to Ashes: America's Hundred-Year Cigarette War, the Public Health, and the Unabashed Triumph of Philip Morris
by Richard Kluger
Published 1 Jan 1996

Public impressions to the contrary, no investigator had produced evidence remotely approaching in strength and consistency findings like those incriminating direct smoking by Wynder, Hammond and Horn, Doll and Hill, and Auerbach. The industry could thus retain the hope that a large-scale study might fail to show a correlation between lung cancer occurrence and exposure to ETS among nonsmokers. Such results, however, might not find their way into scientific journals because of a phenomenon known as “publication bias”: studies that produced negative results or did not report a statistically significant relationship were generally assigned a low priority among submissions. But in the spring of 1990, a Philip Morris scientist, Thomas J. Borelli, who bore the suggestive title of “manager of scientific issues,” was scouring about for unpublished studies on ETS and, while consulting the University Microfilms International Dissertation Information Service, struck gold.