systematic bias

53 results

pages: 321 words: 97,661

How to Read a Paper: The Basics of Evidence-Based Medicine
by Trisha Greenhalgh
Published 18 Nov 2010

An important source of difficulty (and potential bias) in a case–control study is the precise definition of who counts as a ‘case’, because one misallocated individual may substantially influence the results (see section ‘Was systematic bias avoided or minimised?’). In addition, such a design cannot demonstrate causality—in other words, the association of A with B in a case–control study does not prove that A has caused B. Clinical questions that should be addressed by a case–control study are listed here.

• Does the prone sleeping position increase the risk of cot death (sudden infant death syndrome)?
• Does whooping cough vaccine cause brain damage? (see section ‘Was systematic bias avoided or minimised?’)
• Do overhead power cables cause leukaemia?

Remember that what is important in the eyes of the doctor may not be valued so highly by the patient, and vice versa. One of the most exciting developments in evidence-based medicine (EBM) in recent years is the emerging science of patient-reported outcome measures, which I cover in the section ‘PROMs’ on page 223.

Was systematic bias avoided or minimised?

Systematic bias is defined by epidemiologists as anything that erroneously influences the conclusions about groups and distorts comparisons [4]. Whether the design of a study is a randomised controlled trial (RCT), a non-randomised comparative trial, a cohort study or a case–control study, the aim should be for the groups being compared to be as like one another as possible except for the particular difference being examined.

They should, as far as possible, receive the same explanations, have the same contacts with health professionals, and be assessed the same number of times by the same assessors, using the same outcome measures [5] [6]. Different study designs call for different steps to reduce systematic bias.

Randomised controlled trials

In an RCT, systematic bias is (in theory) avoided by selecting a sample of participants from a particular population and allocating them randomly to the different groups. Section ‘Randomised controlled trials’ describes some ways in which bias can creep into even this gold standard of clinical trial design, and Figure 4.1 summarises particular sources to check for.

Infotopia: How Many Minds Produce Knowledge
by Cass R. Sunstein
Published 23 Aug 2006

The first are those in which group members show a systematic bias. The second, a generalization of the first, are those in which their answers are worse than random. The failures of statistical judgments in these circumstances have strong implications for other social failures as well—as individual blunders, with respect to actual or likely facts, are transformed into blunders by private and public institutions. Often statistical groups will be wrong. Sometimes they will be disastrously wrong. A systematic bias in one or another direction will create serious problems for the group’s answers.

The resulting judgments of these “statistical groups” can be remarkably accurate.18 If we have access to many minds, we might trust the average response, a point that bears on the foundations of democracy itself. But accuracy is likely only under identifiable conditions, in which people do not suffer from a systematic bias that makes their answers worse than random. If we asked everyone in the world to estimate the population of Egypt, or to say how many people have served on the U.S. Supreme Court, or to guess the distance between Mars and Venus, the average answer is likely to be wildly off. Chapters 2 and 3 turn to deliberation.
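Sunstein’s condition for crowd accuracy can be made concrete with a toy simulation (illustrative numbers only, not from the book): independent errors cancel in the average, but a shared systematic bias survives no matter how many minds contribute.

```python
import random

random.seed(42)

def crowd_average(truth, n, bias=0.0, noise=50.0):
    """Average of n independent estimates of `truth`, each with
    random individual noise and an optional shared (systematic) bias."""
    estimates = [truth + bias + random.gauss(0, noise) for _ in range(n)]
    return sum(estimates) / n

truth = 100.0  # the quantity the crowd is estimating (hypothetical)

unbiased = crowd_average(truth, n=10_000)           # individual errors cancel
biased = crowd_average(truth, n=10_000, bias=40.0)  # shared error does not

print(round(unbiased, 1))  # close to 100
print(round(biased, 1))    # close to 140: more minds don't help
```

The second call shows the failure mode the text describes: averaging aggregates dispersed knowledge only when errors are independent, not when everyone leans the same way.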

As it happens, climate change was lowest on the list, and addressing communicable diseases, reducing hunger and malnutrition, and free trade were at the top. I do not mean to say that the results of this particular exercise are correct; everything depends on whether the relevant experts were in a position to offer good answers on the questions at hand. If the experts suffer from a systematic bias, or if their answers are worse than random, any effort to aggregate expert judgments will produce blunders. Maybe we shouldn’t trust the people who participated in the Copenhagen Consensus. But if statistical averages are a good way to aggregate knowledge when ordinary people know something of relevance, then they are also a good way to aggregate knowledge from experts.43 At first glance, the accuracy of statistical judgments looks like a parlor trick or even a kind of magic.

pages: 404 words: 92,713

The Art of Statistics: How to Learn From Data
by David Spiegelhalter
Published 2 Sep 2019

Far from freeing us from the need for statistical skills, bigger data and the rise in the number and complexity of scientific studies make it even more difficult to draw appropriate conclusions. More data means that we need to be even more aware of what the evidence is actually worth. For example, intensive analysis of data sets derived from routine data can increase the possibility of false discoveries, both due to systematic bias inherent in the data sources and from carrying out many analyses and only reporting whatever looks most interesting, a practice sometimes known as ‘data-dredging’. In order to be able to critique published scientific work, and even more the media reports which we all encounter on a daily basis, we should have an acute awareness of the dangers of selective reporting, the need for scientific claims to be replicated by independent researchers, and the danger of over-interpreting a single study out of context.

Going from data (Stage 1) to the sample (Stage 2): these are problems of measurement: is what we record in our data an accurate reflection of what we are interested in? We want our data to be:

• Reliable, in the sense of having low variability from occasion to occasion, and so being a precise or repeatable number.
• Valid, in the sense of measuring what you really want to measure, and not having a systematic bias.

[Figure 3.1: Process of inductive inference: each arrow can be interpreted as ‘tells us something about’.1]

For example, the adequacy of the sex survey depends on people giving the same or very similar answers to the same question each time they are asked, and this should not depend on the style of the interviewer or the vagaries of the respondent’s mood or memory.
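Spiegelhalter’s two criteria can be sketched numerically (hypothetical readings, not from the book): reliability corresponds to the spread of repeated measurements, validity to their systematic offset from the true value.

```python
import statistics

# True value we are trying to measure (hypothetical)
true_value = 70.0

# Repeated readings from two hypothetical instruments
precise_but_biased = [72.1, 71.9, 72.0, 72.2, 71.8]  # low variability, systematic offset
valid_but_noisy = [68.5, 71.4, 70.2, 69.1, 70.8]     # scattered, but centred on the truth

def summarise(readings, truth):
    bias = statistics.mean(readings) - truth  # validity: systematic error
    spread = statistics.stdev(readings)       # reliability: occasion-to-occasion variability
    return bias, spread

for name, data in [("precise_but_biased", precise_but_biased),
                   ("valid_but_noisy", valid_but_noisy)]:
    bias, spread = summarise(data, true_value)
    print(f"{name}: bias={bias:+.2f}, spread={spread:.2f}")
```

The first instrument is reliable but not valid (a consistent +2 offset); the second is valid but less reliable, which is exactly the distinction the bullet points draw.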

Whereas general AI needs a causal model for how the world actually works, which would allow it to answer human-level questions concerning the effect of interventions (‘What if we do X?’), and counterfactuals (‘What if we hadn’t done X?’). We are a long way from AI having this ability. This book emphasizes the classic statistical problems of small samples, systematic bias (in the statistical sense) and lack of generalizability to new situations. The list of challenges for algorithms shows that although having masses of data may reduce the concern about sample size, the other problems tend to get worse, and we are faced with the additional problem of explaining the reasoning of an algorithm.

pages: 309 words: 78,361

Plenitude: The New Economics of True Wealth
by Juliet B. Schor
Published 12 May 2010

Over time, the study of environmental impacts moved out of general economics and into a subfield that sought ways to internalize these effects, that is, bring them inside the market calculus. Outside the subfield, most economists have practiced their craft as if nature did not exist. They do not incorporate natural resources into basic accounting categories and data collection. In addition, market prices do not include ecological costs, an omission that introduces a systematic bias into the analysis and evaluation of virtually all market outcomes, albeit one that has been largely ignored. Goods or activities that degrade the environment (without paying for that degradation) are priced too low. Those that are particularly dirty are especially underpriced and overproduced.

This work has the potential to revolutionize the cost-benefit calculations of environmental economics, and is beginning to make headway. For example, when climate models include ecosystem services, the case for urgent action is much stronger. By contrast, trade-off economics, particularly standard cost-benefit analysis, has tended to use a partial accounting, and is more tied to the short run. It also appears to have a systematic bias. A series of studies has found that economic calculations done to assess the potential impacts of environmental projects tend to overestimate costs and underestimate benefits. There are many cases of environmental protections that have been far less burdensome than opponents expected. Trade-off thinking has typically left technological change outside its purview, so that policies to protect the environment aren’t credited with spurring nature-saving innovation.

See Daily (1997), Daily et al. (2000), and Costanza et al. (1997).
81 when climate models include ecosystem services, the case for urgent action is much stronger: Sterner and Persson (2008).
81 standard cost-benefit analysis . . . has tended to use a partial accounting: For a critique of cost-benefit analysis, see Ackerman and Heinzerling (2004).
81 systematic bias . . . to overestimate costs and underestimate benefits: For a discussion of the literature on the accuracy of cost-benefit studies, see Ackerman (2006).
81 environmental protections that have been far less burdensome than opponents expected: For a discussion of a major chemical protection law and its light costs, as well as a more general discussion of this point, see Ackerman (2006).
82 innovation is more rapid and less costly than initially assumed: See Edenhofer et al. (2006) for a discussion of technological change in climate models.
82 Nicholas Stern . . . game-changing report: Stern (2006).
82 2 percent price tag: Jowit and Wintour (2008).
82 it would cost a mere $1.8 trillion a year: Sachs’s estimates of $1.8 trillion a year are based on a combination of plug-in hybrids and carbon-capture-and-sequestration technology and assume a cost of thirty dollars per ton of avoided emissions.

pages: 338 words: 104,815

Nobody's Fool: Why We Get Taken in and What We Can Do About It
by Daniel Simons and Christopher Chabris
Published 10 Jul 2023

That random assignment is intended to ensure that the people in one group are comparable to those in the other group in all aspects that aren’t directly manipulated in the study. Or, more precisely, random assignment ensures that there is no systematic bias in who ends up in which group.34 Imagine we’re picking teams for a basketball game; let’s call them the Reds and the Blues. It would be unfair to assign all the jocks to the Red team and all the nerds to Blue—that would be a systematic bias. If instead we flipped a coin to assign each person to a team, then each nerd and each jock would be equally likely to end up on each team. One team might still be better, but that advantage would be due to chance, not bias.

Each person is equally likely to be in the treatment or control group, so individual differences in factors like education or age, or, more importantly, disease severity, health behaviors, and other predictors of how well a person might respond to a treatment (including ones that were not or could not be measured), will be evenly distributed on average. That is, there won’t be a systematic bias favoring the treatment group or the control group. But in any given study, random assignment won’t guarantee that the treatment and control groups will look exactly the same in every respect. In fact, it ensures that they shouldn’t. If you measure enough things in a study, the treatment and control groups are bound to differ on some of them before anyone starts receiving a drug, a placebo, or anything else.
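The coin-flip picture above can be checked with a small simulation (hypothetical jock/nerd labels, echoing the authors’ basketball example): across many random assignments the two teams are balanced on average, while any single assignment can still differ by chance.

```python
import random

random.seed(0)

# Hypothetical participants: 1 = "jock", 0 = "nerd"
people = [1] * 500 + [0] * 500

def assign(people):
    """Flip a fair coin for each person: heads -> Red team, tails -> Blue team."""
    red, blue = [], []
    for p in people:
        (red if random.random() < 0.5 else blue).append(p)
    return red, blue

# Repeat the assignment many times and record the difference in jock share.
diffs = []
for _ in range(2_000):
    red, blue = assign(people)
    diffs.append(sum(red) / len(red) - sum(blue) / len(blue))

mean_diff = sum(diffs) / len(diffs)
print(round(mean_diff, 3))                   # ~0.0: no systematic bias
print(round(max(abs(d) for d in diffs), 3))  # single runs still differ by chance
```

The average difference hovers near zero (no systematic bias), yet individual assignments show nonzero differences, which is the point of the passage that follows: randomisation rules out bias, not chance imbalance.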

pages: 284 words: 79,265

The Half-Life of Facts: Why Everything We Know Has an Expiration Date
by Samuel Arbesman
Published 31 Aug 2012

The more papers in the field, the smaller the fraction of previous papers that were quoted in a new study. Astonishingly, no matter how many trials had been done before in that area, half the time only two or fewer studies were cited. Not only is a small fraction of the relevant studies being cited, there’s a systematic bias: The newer ones are far more likely to be mentioned. This shouldn’t be surprising after our discussion of citation decay and obsolescence in chapter 3. And it is hardly surprising that scientists might use the literature quite selectively, perhaps to bolster their own research. But when it comes to papers that are current, relevant, and necessary for the complete picture of the current state of a scientific question, this is unfortunate.

But the decline effect is not only due to measurement. One other factor involves the dissemination of measurements, and it is known as publication bias. Publication bias is the idea that the collective scientific community and the community at large only know what has been published. If there is any sort of systematic bias in what is being published (and therefore publicly measured), then we might only be seeing some of the picture. The clearest example of this is in the world of negative results. If you recall, John Maynard Smith noted that “statistics is the science that lets you do twenty experiments a year and publish one false result in Nature.”

pages: 302 words: 86,614

The Alpha Masters: Unlocking the Genius of the World's Top Hedge Funds
by Maneet Ahuja , Myron Scholes and Mohamed El-Erian
Published 29 May 2012

“By eliminating the beta in their portfolios, hedge funds would inevitably become more attractive to large pools of institutional capital.” Deemed the world’s first institutionalized hedge fund, with 300 clients, Bridgewater is known for accepting capital only from large pension funds, endowments, central banks, and governments. Dalio believes that the issue of not having any systematic bias is a big thing. In other words, there’s no good reason there should be a bad or good environment for hedge funds—they shouldn’t have any beta—period. “There’s an equal opportunity up or down in any kind of environment,” says Dalio. “There should be just the alpha, and that is important in terms of what the role of hedge funds is and for portfolio diversification.”

But, in my view, the classic definition of generating alpha is returns that are earned by those who can forecast future cash flows or the beta factor returns (macro factors) more accurately than other market participants, which, as alluded to in the book, is a zero-sum game. Not all can outperform—those that do are paid by those who don’t—and it is extremely difficult for those that do to replicate their successes over many periods. This is not at all the story I read in this book. There is a systematic bias that favors them. They don’t believe that they are investing in a zero-sum game; they are paid for their expertise. I have defined the true earning power of these hedge fund managers as not alpha but “omega” after Ohm’s law, where omega is the varying amount of resistance in the market. As resistance increases (decreases), they are willing to step in (step out by short-selling or exiting positions) and reduce (increase) the resistance and earn a profit by so doing as other market participants change their holdings of securities over time.

pages: 321

Finding Alphas: A Quantitative Approach to Building Trading Strategies
by Igor Tulchinsky
Published 30 Sep 2019

We conclude with some practical suggestions for quantitative practitioners and firms.

CATEGORIES OF BIAS

We broadly categorize bias as systematic or behavioral. Investors introduce systematic bias by inadvertently coding it into their quantitative processes. By contrast, investors introduce behavioral bias by making ad hoc decisions rooted in their own human behavior. Over a period of time, both systematic and behavioral bias yield suboptimal investment outcomes.

SYSTEMATIC BIASES

There are two important sources of systematic bias: look-ahead bias and data mining.

Finding Alphas: A Quantitative Approach to Building Trading Strategies, Second Edition. Edited by Igor Tulchinsky et al. and WorldQuant Virtual Research Center. © 2020 Tulchinsky et al., WorldQuant Virtual Research Center.
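Look-ahead bias, the first source named above, can be illustrated with a deliberately flawed toy backtest (made-up prices; a generic sketch, not code from the book): the flawed version trades on a signal during the same period used to compute it, while the honest version lags the signal by one period.

```python
# Hypothetical daily closing prices
prices = [100, 101, 103, 102, 105, 107, 106, 110]
returns = [(b - a) / a for a, b in zip(prices, prices[1:])]

# Signal: go long if today's return is positive (known only at today's close).
signal = [1 if r > 0 else 0 for r in returns]

# Look-ahead version: applies the day-t signal to the very day-t return
# it was computed from, quietly coding future information into the test.
lookahead_pnl = sum(s * r for s, r in zip(signal, returns))

# Corrected version: lag the signal one day, so the day-t signal can only
# earn the day-(t+1) return, as it would in live trading.
honest_pnl = sum(s * r for s, r in zip(signal[:-1], returns[1:]))

print(lookahead_pnl, honest_pnl)  # the look-ahead backtest looks spuriously good
```

By construction the look-ahead version captures every positive return and skips every negative one, which is exactly the kind of too-good-to-be-true result that inadvertently coded bias produces.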

pages: 340 words: 91,416

Lost in Math: How Beauty Leads Physics Astray
by Sabine Hossenfelder
Published 11 Jun 2018

Now the time it takes to test a new fundamental law of nature can be longer than a scientist’s full career. This forces theorists to draw upon criteria other than empirical adequacy to decide which research avenues to pursue. Aesthetic appeal is one of them. In our search for new ideas, beauty plays many roles. It’s a guide, a reward, a motivation. It is also a systematic bias.

Invisible Friends

The movers have picked up my boxes, most of which I never bothered to unpack, knowing I wouldn’t stay here. Echoes of past moves return from empty cabinets. I call my friend and colleague Michael Krämer, professor of physics in Aachen, Germany. Michael works on supersymmetry, “susy” for short.

[Index matter; the entries matching the query are “beauty: as systematic bias, 10” and “bias, 228–231, 245”.]

pages: 428 words: 103,544

The Data Detective: Ten Easy Rules to Make Sense of Statistics
by Tim Harford
Published 2 Feb 2021

[Index matter; the entry matching the query is “systematic bias in algorithms, 166”.]

pages: 755 words: 121,290

Statistics hacks
by Bruce Frey
Published 9 May 2006

Random assignment of participants to an experimental group and a control group solves this problem nicely.

Selection: There might be systematic bias in assigning subjects to groups. The solution is to assign subjects randomly.

Testing: Just taking a pretest might affect the level of the research variable. Create a comparison group and give both groups the pretest, so any changes will be equal between the groups. And assign subjects to the two groups randomly (are you starting to see a pattern here?).

Instrumentation: There might be systematic bias in the measurement. The solution is to use valid, standardized, objectively scored tests.

pages: 172 words: 51,837

How to Read Numbers: A Guide to Statistics in the News (And Knowing When to Trust Them)
by Tom Chivers and David Chivers
Published 18 Mar 2021

This is why people do literature reviews at the beginning of academic papers – to put their results into the context of the scientific literature as a whole. Sometimes researchers do meta-analyses – academic papers which go through all the existing literature and try to synthesise the results. If there have been enough studies, and if there isn’t a systematic bias either in the research or the publication process (as we’ve mentioned, two very big ifs), then hopefully the aggregated result will give you a good idea of what the true effect is. This is how science advances, at least in theory. Each time a new study comes out, it gets added to the pile; it’s a new set of data points which – hopefully, on average – will bring the consensus of scientific understanding closer to the underlying reality.
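The synthesis step the authors describe is often done as a fixed-effect (inverse-variance) meta-analysis; a minimal sketch with made-up study results (an effect estimate and standard error per study) looks like this:

```python
import math

# Hypothetical studies: (estimated effect, standard error)
studies = [(0.30, 0.15), (0.18, 0.10), (0.25, 0.08), (0.05, 0.20)]

# Fixed-effect (inverse-variance) meta-analysis: each study is weighted by
# 1/SE^2, so more precise studies count for more in the pooled estimate.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.3f} ± {1.96 * pooled_se:.3f}")
```

Note the caveat in the text applies directly: if the individual estimates share a systematic bias, or if negative results never reach publication, the pooled number is just a more precise version of the same wrong answer.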

pages: 287 words: 69,655

Don't Trust Your Gut: Using Data to Get What You Really Want in LIfe
by Seth Stephens-Davidowitz
Published 9 May 2022

Here are the biggest misjudged activities:

Underrated Activities: These Tend to Make People Happier Than We Think*
Exhibition/Museum/Library
Sports/Running/Exercise
Drinking Alcohol
Gardening
Shopping/Errands

Overrated Activities: These Tend to Make People Less Happy Than We Think
Sleeping/Resting/Relaxing
Computer Games/iPhone Games
Watching TV/Film
Eating/Snacking
Browsing the Internet

So, what should we make of those two lists? “Drinking alcohol” is obviously a complicated route to happiness, due to its addictive nature; I will talk more about the relationship between alcohol and happiness in the next chapter. But one systematic bias people have is they seem to overestimate the happiness effect of many passive activities. Think of the activities on the “Overrated Activities” list. Sleeping. Relaxing. Playing games. Watching TV. Snacking. Browsing the internet. These are not exactly activities that require a lot of energy.

pages: 275 words: 82,640

Money Mischief: Episodes in Monetary History
by Milton Friedman
Published 1 Jan 1992

For 1875–79 and 1901–14, it approximates the actual pattern. The U.S. hypothetical annual monetary demand for silver is simply the increment in the U.S. hypothetical silver stock. The possible errors in this approach are numerous. Some simply affect the year-to-year movements as a result of the use of a trend for k1. Any systematic bias arises primarily from the assumption that the same specie reserves would have been maintained under a silver standard in the early and late years of the period as those maintained under a gold standard. The possible sources of error are different for the specie reserve ratio and the real stock of money.

pages: 269 words: 77,042

Sex, Lies, and Pharmaceuticals: How Drug Companies Plan to Profit From Female Sexual Dysfunction
by Ray Moynihan and Barbara Mintzes
Published 1 Oct 2010

While individual links may not necessarily influence a researcher, there is growing evidence that, looked at in its totality, this web of influence may well be distorting medical science in the most profound way. At a research level, trials funded by drug companies are more likely to find favourable results for sponsors’ products, leading to a ‘systematic bias’ in the medical literature that overstates the benefits of drugs and underplays their harms.7 At the level of education, in some nations drug and device companies fund at least half of the seminars where our doctors undertake their ongoing professional development, with strong anecdotal evidence that sponsors sometimes influence these activities in important but often hidden ways.8 At the level of practice, studies have shown that doctors who accept gifts, and expose themselves to marketing in its many forms, tend to more often prescribe the latest and most expensive drugs, which may not always be in the interests of their patients or the public purse.9 So strong is the accumulating evidence that the calls for fundamental reform are no longer coming just from grass-roots activists like the New View, No Free Lunch and Healthy Scepticism.10 Powerful voices from within the heart of mainstream medicine are now calling for a much greater transparency in the relationship, and much greater independence between health professionals and the industries whose products those professionals prescribe.

pages: 220 words: 73,451

Democratizing innovation
by Eric von Hippel
Published 1 Apr 2005

In the general literature, Armstrong’s (2001) review on forecast bias for new product introduction indicates that sales forecasts are generally optimistic, but that that upward bias decreases as the magnitude of the sales forecast increases. Coller and Yohn (1998) review the literature on bias in accuracy of management earnings forecasts and find that little systematic bias occurs. Tull’s (1967) model calculates $15 million in revenue as a level above which forecasts actually become pessimistic on average. We think it reasonable to apply the same deflator to LU vs. non-LU project sales projections. Even if LU project personnel were for some reason more likely to be optimistic with respect to such projections than non-LU project personnel, that would not significantly affect our findings.

pages: 263 words: 75,610

Delete: The Virtue of Forgetting in the Digital Age
by Viktor Mayer-Schönberger
Published 1 Jan 2009

Of course, the information we have to ground our decisions in is almost always incomplete—by necessity. But in the analog world, random pieces of information are missing. With digital memory, the exclusion is biased against information that is not captured in digital form and not fed into digital memory. That is a systematic bias, and one that not only falsifies our understanding of events but that can also be gamed. In short, because digital memory amplifies only digitized information, humans like Jane trusting digital memory may find themselves worse off than if they’d relied solely on their human memory, with its tendency to forget information that is no longer important or relevant.

pages: 296 words: 78,631

Hello World: Being Human in the Age of Algorithms
by Hannah Fry
Published 17 Sep 2018

For every Christopher Drew Brooks, treated unfairly by an algorithm, there are countless cases like that of Nicholas Robinson, where a judge errs on their own. Having an algorithm – even an imperfect algorithm – working with judges to support their often faulty cognition is, I think, a step in the right direction. At least a well-designed and properly regulated algorithm can help get rid of systematic bias and random error. You can’t change a whole cohort of judges, especially if they’re not able to tell you how they make their decisions in the first place. Designing an algorithm for use in the criminal justice system demands that we sit down and think hard about exactly what the justice system is for.

pages: 250 words: 79,360

Escape From Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do About It
by Erica Thompson
Published 6 Dec 2022

Methods of mathematical inference from models, by contrast, typically do assign some kind of truth value to the models. Statistical methods may assume that model outcomes are related to the truth in some consistent and discoverable way, such as a random error that will converge on zero given enough data, or a systematic bias that can be estimated and corrected. Other statistical methods assume that from a set of candidate models one can identify a single best model or construct a new best model by weighting the candidates according to agreement with other data. Essentially, they assume that the observations are generated by a process that has rules, that those rules can be written formally, that they are sufficiently close to the candidate set of models we are examining and that the only limit to our discovery of the rules is our observation of further data.

Statistics in a Nutshell
by Sarah Boslaugh
Published 10 Nov 2012

Random measurement error is the result of chance circumstances such as room temperature, variance in administrative procedure, or fluctuation in the individual’s mood or alertness. We do not expect random error to affect an individual’s score consistently in one direction or the other. Random error makes measurement less precise but does not systematically bias results because it can be expected to have a positive effect on one occasion and a negative effect on another, thus canceling itself out over the long run. Because there are so many potential sources for random error, we have no expectation that it can be completely eliminated, but we desire to reduce it as much as possible to increase the precision of our measurements.
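The distinction above is easy to verify numerically. A minimal sketch (the true score, noise range, and +3 offset are illustrative assumptions, not values from the text): random error cancels out when many measurements are averaged, while a constant systematic bias survives averaging untouched.

```python
import random
import statistics

random.seed(42)
TRUE_SCORE = 100.0

# Random error: symmetric noise that cancels out over many measurements.
random_only = [TRUE_SCORE + random.uniform(-5, 5) for _ in range(10_000)]

# Systematic bias: a constant +3 offset (e.g. a miscalibrated instrument)
# on top of the same kind of random noise.
biased = [TRUE_SCORE + 3.0 + random.uniform(-5, 5) for _ in range(10_000)]

mean_random = statistics.mean(random_only)
mean_biased = statistics.mean(biased)

print(round(mean_random, 1))  # close to 100: the random error averages out
print(round(mean_biased, 1))  # close to 103: the bias does not average out
```

Increasing the number of measurements shrinks the effect of random error on the mean but leaves the systematic offset exactly where it was.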

This is not a problem if you are clear about where and how your sample was obtained. Imagine that you are a microbiologist interested in examining bacteria present in hospitals. If you use a filter with pores of diameter one µm (micrometer), any bacteria smaller than this will not be part of the population that you are observing. This sampling limitation will introduce systematic bias into the study; however, as long as you are clear that the population about which you can make inferences is bacteria of diameter greater than one µm, and nothing else, your results will be valid. In reality, we often want to generalize to a larger population than we sampled from, and whether we can do this depends on a number of factors.

pages: 304 words: 82,395

Big Data: A Revolution That Will Transform How We Live, Work, and Think
by Viktor Mayer-Schonberger and Kenneth Cukier
Published 5 Mar 2013

If we have only one temperature sensor for the whole plot of land, we must make sure it’s accurate and working at all times: no messiness allowed. In contrast, if we have a sensor for every one of the hundreds of vines, we can use cheaper, less sophisticated sensors (as long as they do not introduce a systematic bias). Chances are that at some points a few sensors may report incorrect data, creating a less exact, or “messier,” dataset than the one from a single precise sensor. Any particular reading may be incorrect, but the aggregate of many readings will provide a more comprehensive picture. Because this dataset consists of more data points, it offers far greater value that likely offsets its messiness.

pages: 306 words: 82,765

Skin in the Game: Hidden Asymmetries in Daily Life
by Nassim Nicholas Taleb
Published 20 Feb 2018

Hence the more “systemic” things are, the more important survival becomes. FIGURE 3. An illustration of the bias-variance tradeoff. Assume two people (sober) shooting at a target in, say, Texas. The left shooter has a bias, a systematic “error,” but on balance gets closer to the target than the right shooter, who has no systematic bias but a high variance. Typically, you cannot reduce one without increasing the other. When fragile, the strategy at the left is the best: maintain a distance from ruin, that is, from hitting a point in the periphery should it be dangerous. This schema explains why if you want to minimize the probability of the plane crashing, you may make mistakes with impunity provided you lower your dispersion.
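The two shooters in the figure can be simulated directly. In this sketch (the bias of 1.0 and the standard deviations 0.5 and 2.0 are illustrative assumptions, not Taleb’s numbers), expected squared error decomposes as bias² + variance, so the biased low-variance shooter lands closer to the target on average than the unbiased high-variance one.

```python
import random
import math

random.seed(7)
N = 10_000

def rmse(shots, target=0.0):
    # Root-mean-square distance from the target.
    return math.sqrt(sum((s - target) ** 2 for s in shots) / len(shots))

# Left shooter: systematic bias of 1.0 but small variance.
left = [random.gauss(1.0, 0.5) for _ in range(N)]
# Right shooter: no bias but large variance.
right = [random.gauss(0.0, 2.0) for _ in range(N)]

# Expected error = sqrt(bias^2 + variance):
# left ≈ sqrt(1.0 + 0.25) ≈ 1.12, right ≈ sqrt(0 + 4.0) = 2.0
print(round(rmse(left), 2), round(rmse(right), 2))
```

The asymmetry Taleb stresses is that when errors in the periphery are ruinous, the low-variance strategy is preferable even though it is systematically biased.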

pages: 290 words: 83,248

The Greed Merchants: How the Investment Banks Exploited the System
by Philip Augar
Published 20 Apr 2005

They paid the price in the end with horrendous collapses in their share prices.14 Hector Sants, who worked for a full spectrum of British-, European- and American-owned investment banks before joining the FSA as Managing Director, points out that London’s ‘culture of separation in the wholesale area, harking back to the traditional divide between stock broking and merchant banking’ spared London the worst of the investment banks’ excesses. But not by much: ‘Even in the UK there was evidence of systematic bias in analyst recommendations, poor management of conflicts of interest and a feeling that if it happened there it could happen here. I’m not sure the UK industry could put hand on heart and say in 2002 there was an inherently better conflict management culture in London than in New York.’15 It was clear that the City had to change, but did it have to swallow the American bait hook, line and sinker?

pages: 288 words: 81,253

Thinking in Bets
by Annie Duke
Published 6 Feb 2018

“Thus, the conventional view that natural selection favors nervous systems which produce ever more accurate images of the world must be a very naïve view of mental evolution.” Dawkins, in turn, considered Trivers, for his work, one of the heroes of his groundbreaking book, devoting four chapters of The Selfish Gene to developing Trivers’s ideas. * This is a systematic bias, not a guarantee that we always grab credit or always deflect blame. There are some people, to be sure, who exhibit the opposite of self-serving bias, treating everything bad that happens as their fault and attributing anything good in their lives to luck. That pattern is much rarer (and more likely in women).

pages: 337 words: 89,075

Understanding Asset Allocation: An Intuitive Approach to Maximizing Your Portfolio
by Victor A. Canto
Published 2 Jan 2005

Consequently, the model is overly pessimistic during high and/or rising growth periods, and overly optimistic during low and/or declining growth periods. An investor can do better than the CEM, but that doesn’t mean the model should be thrown out entirely. Rather, let’s build upon the CEM. For the CEM to be a useful investment tool, we need to correct for the systematic bias produced by its failure to account for earnings growth. Modifying the valuation formula to account for sustainable growth is a trivial adjustment in the formulation. It turns out earnings growth acts to reduce the discount rate on a one-for-one basis. In other words, a $1 income stream in perpetuity discounted at a 5 percent rate has the same value as $1 that grows at 1 percent per year and is discounted at a 6 percent rate.
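The one-for-one offset follows from the growing-perpetuity (Gordon growth) formula, PV = C / (r − g): raising the discount rate by the growth rate leaves the value unchanged. A quick check of the book’s own example:

```python
def perpetuity_value(cash_flow, discount_rate, growth_rate=0.0):
    # Growing-perpetuity (Gordon growth) formula: PV = C / (r - g).
    return cash_flow / (discount_rate - growth_rate)

flat = perpetuity_value(1.0, 0.05)           # $1 forever, discounted at 5%
growing = perpetuity_value(1.0, 0.06, 0.01)  # $1 growing 1%/yr, discounted at 6%

# Both come to the same value (20.0): growth offsets the discount rate
# one-for-one, exactly as the text claims.
print(round(flat, 6), round(growing, 6))
```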

Forward: Notes on the Future of Our Democracy
by Andrew Yang
Published 15 Nov 2021

We take seriously any complaints about the accuracy of our coverage, and any errors in our graphics were unintended. We apologize for any mistakes and omissions and look forward to working with Mr. Yang in the future.” They could apologize for the graphics—one of which they had already apologized for—without copping to any systematic bias, and everyone would move on. I figured my boycott would last a few days or so while they issued a press release and sent a message. Instead, they took my public complaints as an affront. At first, network sources told reporters that they had called and apologized to us when they had not.

pages: 367 words: 97,136

Beyond Diversification: What Every Investor Needs to Know About Asset Allocation
by Sebastien Page
Published 4 Nov 2020

Consequently, after adjusting for the exposure to the market in the 40 post-event days, we could see a reduction in the alpha generated by our strategy of about 20 bps—a fraction of our 300 bps of precost alpha. We left it to the reader to interpret whether this slight excess beta constitutes a systematic bias, but if so, the impact remains small relative to the magnitude of the net alphas. Regarding liquidity, our outsiders have a similar liquidity profile, on average, to that of their peer constituents. The distribution is symmetrical: roughly half the low-ETF-beta stocks have above-average liquidity, and half have below-average liquidity. 17 Sample Portfolios and Something About Gunslingers Asset allocation is simply about seeking the highest possible return given our risk tolerance.

Corbyn
by Richard Seymour

In terms of evidence, I know of no study which has said the BBC tends to reflect marginal or critical perspectives or ignores powerful interests. There is no evidence on that side. The problem is that bias gets discussed as if it reflected a hidden personal political agenda.’ So what, if not a hidden agenda, would explain systematic bias? Partly, Mills argues, it is a matter of the circulation of powerful milieus between media organisations, consultancies, political parties, and the state. Partly it is a matter of the BBC’s dependency on the government not just to ensure continued funding, uphold its charter and appoint its board members, but also for a great deal of its news content.

The Knowledge Machine: How Irrationality Created Modern Science
by Michael Strevens
Published 12 Oct 2020

Kennefick clearly explains how a “change of scale” would create a systematic error, but he does not address the curious fact that Eddington and his coauthors make no attempt to convince their readers that such a change had occurred, rather than there having been a simple loss of focus that would not create any systematic bias in the astrographic measurements. As far as we can tell, Eddington simply chose the explanation for the blurriness of the astrographic’s photos that best suited his goals. I will take up the question of Eddington’s omission again in Chapters 3 and 7. Matthew Stanley’s “An Expedition to Heal the Wounds of War” is also largely sympathetic to Eddington’s treatment of the data and provides much fascinating historical background.

Calling Bullshit: The Art of Scepticism in a Data-Driven World
by Jevin D. West and Carl T. Bergstrom
Published 3 Aug 2020

These gender differences in recommendation letters could be driving some of the gender inequality in the academic and corporate worlds. In this context, a friend of ours posted this message on Twitter, describing a research study in which the authors analyzed the text from nearly nine hundred letters of recommendation for faculty positions in chemistry and in biochemistry looking for systematic bias. (The tweet’s attached image contrasted male-associated words with female-associated words.) The implication of our friend’s tweet was that this study had found large and systematic differences in how letter writers describe men and women as candidates. From the image that he shared, it appears that writers use words associated with exceptionalism and research ability when describing men, and words associated with diligence, teamwork, and teaching when describing women.

pages: 322 words: 107,576

Bad Science
by Ben Goldacre
Published 1 Jan 2008

In some inept trials, in all areas of medicine, patients are ‘randomised’ into the treatment or placebo group by the order in which they are recruited onto the study—the first patient in gets the real treatment, the second gets the placebo, the third the real treatment, the fourth the placebo, and so on. This sounds fair enough, but in fact it’s a glaring hole that opens your trial up to possible systematic bias. Let’s imagine there is a patient who the homeopath believes to be a no-hoper, a heart-sink patient who’ll never really get better, no matter what treatment he or she gets, and the next place available on the study is for someone going into the ‘homeopathy’ arm of the trial. It’s not inconceivable that the homeopath might just decide—again, consciously or unconsciously—that this particular patient ‘probably wouldn’t really be interested’ in the trial.

pages: 417 words: 109,367

The End of Doom: Environmental Renewal in the Twenty-First Century
by Ronald Bailey
Published 20 Jul 2015

They find that the models actually do simulate similar lengthy hiatuses during that period; they just don’t happen to coincide with the current observational hiatus. They find that due to natural variation, the observed warming might be at the upper or lower limit of simulated rates, but there is no indication of a systematic bias in model process. “Our conclusion is that climate models are fundamentally doing the right thing,” University of Leeds researcher Piers Forster explained. “They [climate models] do in fact correctly represent these 15-year short-term fluctuations but because they are inherently chaotic they don’t get them at the right time.”

Fortunes of Change: The Rise of the Liberal Rich and the Remaking of America
by David Callahan
Published 9 Aug 2010

Moreover, America’s best-endowed universities didn’t get that way by accident; they built their wealth by giving preference to alumni kids, many of whom come from money, and to so-called development cases—the children of wealthy donors or potential donors. In The Price of Admission, reporter Daniel Golden documents a systematic bias on the part of elite universities to admit rich kids. In that sense, these universities may do more to entrench today’s inequality than to challenge such patterns. That said, the progressive values now being inculcated at elite universities matter. Universities have a long track record of turning rich kids into critics of the existing order.

pages: 336 words: 113,519

The Undoing Project: A Friendship That Changed Our Minds
by Michael Lewis
Published 6 Dec 2016

He went and pulled all the other articles in other publications written by Kahneman and Tversky. “I have vivid memories of running from one article to another,” says Thaler. “As if I have discovered the secret pot of gold. For a while I wasn’t sure why I was so excited. Then I realized: They had one idea. Which was systematic bias.” If people could be systematically wrong, their mistakes couldn’t be ignored. The irrational behavior of the few would not be offset by the rational behavior of the many. People could be systematically wrong, and so markets could be systematically wrong, too. Thaler got someone to send him a draft of “Value Theory.”

pages: 320 words: 33,385

Market Risk Analysis, Quantitative Methods in Finance
by Carol Alexander
Published 2 Jan 2007

Later we shall prove that the OLS estimation method – i.e. to minimize the residual sum of squares – is optimal when the error is generated by an independent and identically distributed (i.i.d.) process.7 So for the moment we shall assume that εt ∼ i.i.d.(0, σ²) (I.4.12). It makes sense of course to assume that the expectation of the error is zero. If it were not zero we would not have a random error, we would have a systematic bias in the error and the regression line would not pass through the middle of the scatter plot. So the definition (I.4.12) is introducing the variance of the error process, σ², into our notation. This is the third and final parameter of the simple linear model. The OLS estimate of σ² is s² = RSS/(T − 2) (I.4.13), where the numerator in (I.4.13) is understood to be the residual sum of squares that has been minimized by the choice of the OLS estimates.
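The estimator s² = RSS/(T − 2) can be checked on synthetic data. In this sketch the intercept 1.5, slope 0.8 and error standard deviation 2.0 are illustrative assumptions; with a mean-zero i.i.d. error, s² should land close to the true σ² = 4.0.

```python
import random

random.seed(0)
T = 200
SIGMA = 2.0  # true error standard deviation

# Synthetic data from y = 1.5 + 0.8 x + eps, eps ~ i.i.d. N(0, SIGMA^2)
x = [random.uniform(0, 10) for _ in range(T)]
y = [1.5 + 0.8 * xi + random.gauss(0, SIGMA) for xi in x]

# OLS estimates of slope and intercept (minimize the residual sum of squares)
mx, my = sum(x) / T, sum(y) / T
beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        / sum((xi - mx) ** 2 for xi in x))
alpha = my - beta * mx

# Residual sum of squares and the variance estimate s^2 = RSS / (T - 2)
rss = sum((yi - (alpha + beta * xi)) ** 2 for xi, yi in zip(x, y))
s2 = rss / (T - 2)
print(round(s2, 2))  # close to SIGMA**2 = 4.0
```

The divisor T − 2 rather than T reflects the two parameters (intercept and slope) estimated from the data.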

pages: 402 words: 129,876

Bad Pharma: How Medicine Is Broken, and How We Can Fix It
by Ben Goldacre
Published 1 Jan 2012

One way to restrict the harm that can come from early stopping is to set up ‘stopping rules’, specified before the trial begins, and carefully calculated to be extreme enough that they are unlikely to be triggered by the chance variation you’d expect to see, over time, in any trial. Such rules are useful because they restrict the intrusion of human judgement, which can introduce systematic bias. But whatever we do about early stopping in medicine, it will probably pollute the data. A review from 2010 took around a hundred truncated trials, and four hundred matched trials that ran their natural course to the end: the truncated trials reported much bigger benefits, overstating the usefulness of the treatments they were testing by about a quarter.13 Another recent review found that the number of trials stopped early has doubled since 1990,14 which is probably not good news.

pages: 500 words: 145,005

Misbehaving: The Making of Behavioral Economics
by Richard H. Thaler
Published 10 May 2015

Even across many people, the errors will not average out to zero. Although I did not appreciate it fully at the time, Kahneman and Tversky’s insights had inched me forward so that I was just one step away from doing something serious with my list. Each of the items on the List was an example of a systematic bias. The items on the List had another noteworthy feature. In every case, economic theory had a highly specific prediction about some key factor—such as the presence of the cashews or the amount paid for the basketball game tickets—that the theory said should not influence decisions. They were all supposedly irrelevant factors, or SIFs.

pages: 494 words: 142,285

The Future of Ideas: The Fate of the Commons in a Connected World
by Lawrence Lessig
Published 14 Jul 2001

Allison and Mark A. Lemley, “Who's Patenting What? An Empirical Exploration of Patent Prosecution,” Vanderbilt Law Review 53 (2000): 2099, 2146; John R. Allison and Mark A. Lemley, “How Federal Circuit Judges Vote in Patent Validity Cases,” Florida State University Law Review 27 (2000): 745, 765 (concluding no systematic bias in judges' votes). 71 Jaffe, 46. 72 Ibid., 47. Jaffe's argument here is narrower than the point I am making in this section. His concern is the social costs from too much effort being devoted to the pursuit of patented innovation. My concern is the cost of patents on the innovation process generally. 73 “Patently Absurd?”

Beautiful Data: The Stories Behind Elegant Data Solutions
by Toby Segaran and Jeff Hammerbacher
Published 1 Jul 2009

Spatial treemap of terms occurring in geograph titles and comments for selected element descriptors in the beach base level. Displacement vectors show absolute locations of leaf nodes in this enlarged section of Figure 6-9. (See Color Plate 20.) Our graphics and our exploration are incomplete. We are investigating the effects of systematic bias in community-contributed geographic information and developing strategies to mitigate this. We are developing notations to describe the visual design space and interactive applications through which this can be explored. We are yet to consider whether the geographically varying relationships that we are able to identify in Geograph are consistent over time.

pages: 577 words: 149,554

The Problem of Political Authority: An Examination of the Right to Coerce and the Duty to Obey
by Michael Huemer
Published 29 Oct 2012

And if we felt this requirement to obey, it is likely that this would lead us to think and say that we were obliged to obey and then – in the case of the more philosophically minded among us – to devise theories to explain why we have this obligation. Thus, the widespread belief in political authority does not provide strong evidence for the reality of political authority, since that belief can be explained as the product of systematic bias. 6.3 Cognitive dissonance According to the widely accepted theory of cognitive dissonance, we experience an uncomfortable state, known as ‘cognitive dissonance’, when we have two or more cognitions that stand in conflict or tension with one another – and particularly when our behavior or other reactions appear to conflict with our self-image.15 We then tend to alter our beliefs or reactions to reduce the dissonance.

pages: 470 words: 148,730

Good Economics for Hard Times: Better Answers to Our Biggest Problems
by Abhijit V. Banerjee and Esther Duflo
Published 12 Nov 2019

If people don’t have the right information in Nepal, with its many employment agencies, vast flows of workers in and out, and a government genuinely concerned about the welfare of its international migrants, one can only guess at how confused most potential migrants are elsewhere. Confusion could of course go either way, dampening migration, like in Nepal, or boosting it if people are overoptimistic. Why then is there a systematic bias against going? RISK VERSUS UNCERTAINTY Perhaps the exaggerated sense of mortality Maheshwor’s respondents reported should be read as a metaphor for a general sense of foreboding. Migration, after all, is leaving the familiar to embrace the unknown, and the unknown is more than just a list of different potential outcomes with associated probabilities, as economists would like to describe it.

pages: 578 words: 168,350

Scale: The Universal Laws of Growth, Innovation, Sustainability, and the Pace of Life in Organisms, Cities, Economies, and Companies
by Geoffrey West
Published 15 May 2017

A lingering issue of possible concern is that the data cover only sixty years, so companies older than this are automatically excluded. Actually, it’s worse than this because the analysis includes only those companies that were born and died in the time window between 1950 and 2009, thereby excluding all those that were born before 1950 and/or were still alive in 2009. This could clearly lead to a systematic bias in the estimates of life expectancy. A more complete analysis therefore needs to include these so-called censored companies, whose life spans are at least as long as and likely longer than the period over which they appear in the data set. This actually involves a sizable number of companies: in the sixty years covered, 6,873 firms were still alive at the end of the window in 2009.
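The censoring bias West describes is easy to reproduce. A minimal sketch (the 15-year mean lifespan and exponential distribution are illustrative assumptions, not estimates from his data): keeping only companies that both appear and die inside a 60-year window discards exactly the long-lived firms, so the naive mean lifespan comes out too low.

```python
import random

random.seed(1)
TRUE_MEAN = 15.0   # assumed true mean lifespan in years
WINDOW = 60.0      # observation window, e.g. 1950-2009

lifespans = [random.expovariate(1 / TRUE_MEAN) for _ in range(100_000)]
births = [random.uniform(0, WINDOW) for _ in lifespans]

# Naive estimate: keep only companies born AND dead inside the window,
# discarding the censored ones still alive at the end.
completed = [life for life, b in zip(lifespans, births) if b + life <= WINDOW]

naive_mean = sum(completed) / len(completed)
print(round(naive_mean, 1))  # noticeably below the true mean of 15.0
```

A proper survival analysis treats the censored firms as lower bounds on lifespan rather than throwing them away.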

pages: 592 words: 161,798

The Future of War
by Lawrence Freedman
Published 9 Oct 2017

Much of the MID was put together before the availability of modern search engines, and so used whatever material was then available in libraries. In the 2010s, a team of researchers going through the individual cases meticulously found the MID database to be unreliable, although that was not a word they used. They praised the effort and the utility of the database, insisted that they found no evidence of systematic bias, and offered detailed proposals to rectify the problems they encountered.37 Nonetheless, their investigations identified problems with almost 70 per cent of the MID cases, leading to proposals to drop 240, merge another 72 with similar cases, revise substantially a further 234, and make minor changes to another 1009.

The Washington Connection and Third World Fascism
by Noam Chomsky
Published 24 Oct 2014

Many of them were directly installed by us or are the beneficiaries of our direct intervention, and most of the others came into existence with our tacit support, using military equipment and training supplied by the United States. Our massive intervention and subversion over the past 25 years has been confined almost exclusively to overthrowing reformers, democrats, and radicals—we have rarely “destabilized” right-wing military regimes no matter how corrupt or terroristic.50 This systematic bias in intervention is only part of the larger system of connections—military, economic, and political—that have allowed the dominant power to shape the primary characteristics of the other states in its domains in accordance with its interests. The Brazilian counterrevolution, as we have noted (cf. note 6), took place with the connivance of the United States and was followed by immediate recognition and consistent support, just as in Guatemala ten years earlier and elsewhere, repeatedly.

pages: 578 words: 170,758

Gaza: An Inquest Into Its Martyrdom
by Norman Finkelstein
Published 9 Jan 2018

Although it invested considerable resources in “Black Friday,” Amnesty ultimately, and to its eternal shame, recoiled from its own factual findings and delivered up a legal whitewash. • • • It cannot be seriously doubted that Amnesty International’s reports on Operation Protective Edge lacked objectivity and professionalism. They betrayed a systematic bias against Hamas and in favor of Israel. They also registered a steep regression from the exacting standard Amnesty set in its reports spanning the past two decades on the Israel-Palestine conflict. Amnesty might be tempted to respond: If an acknowledged supporter of Palestinian human rights (such as this writer) criticizes its pro-Israel bias while Israel criticizes its pro-Palestinian bias, then it must be doing something right.

pages: 654 words: 191,864

Thinking, Fast and Slow
by Daniel Kahneman
Published 24 Oct 2011

Some individuals greatly overestimate the true number, others underestimate it, but when many judgments are averaged, the average tends to be quite accurate. The mechanism is straightforward: all individuals look at the same jar, and all their judgments have a common basis. On the other hand, the errors that individuals make are independent of the errors made by others, and (in the absence of a systematic bias) they tend to average to zero. However, the magic of error reduction works well only when the observations are independent and their errors uncorrelated. If the observers share a bias, the aggregation of judgments will not reduce it. Allowing the observers to influence each other effectively reduces the size of the sample, and with it the precision of the group estimate.
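Kahneman’s point simulates cleanly. In this sketch (the jar count of 850, the error spread of 200, and the shared underestimate of 150 are illustrative assumptions): independent errors average toward the truth, but a bias shared by all observers passes straight through the average.

```python
import random
import statistics

random.seed(3)
TRUE_COUNT = 850
N_JUDGES = 1_000

# Independent, unbiased errors: the average lands close to the truth.
independent = [TRUE_COUNT + random.gauss(0, 200) for _ in range(N_JUDGES)]

# A shared bias (everyone underestimates by 150) survives averaging intact.
shared_bias = [TRUE_COUNT - 150 + random.gauss(0, 200) for _ in range(N_JUDGES)]

print(round(statistics.mean(independent)))  # near 850
print(round(statistics.mean(shared_bias)))  # near 700: aggregation cannot fix it
```

This is why the text insists the observations be independent: letting observers influence each other turns independent errors into correlated ones.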

pages: 733 words: 179,391

Adaptive Markets: Financial Evolution at the Speed of Thought
by Andrew W. Lo
Published 3 Apr 2017

Gary, 262 short selling, 26, 223, 229–230, 233, 326 Siegel, Jeremy, 253, 255 Siegel, Stephan, 161 sigma (measure), 232 SIMON (risk management process), 388–389, 392 Simon, Herbert, 172; academic background of, 177; artificial intelligence research by, 101, 181, 182; bounded rationality notion of, 34, 36, 185, 188, 209, 213, 215, 217; environmental complexity viewed by, 198; heuristics notion of, 66, 179, 217; optimization viewed skeptically by, 178–180, 183; satisficing conceived by, 180–182 Simons, James, 6, 224, 225, 244, 277, 350 Sinclair, Upton, 319 Singapore, 411 single-photon emission computed tomography (SPECT), 102 Siri, 132, 396 Sirri, Erik, 308, 311 60/40s rule, 252, 255 skin conductance, 93–94 slot machines, 91–92 Slovic, Paul, 84 “SM” (patient incapable of fear), 83, 104, 106, 107, 144, 158, 323 small-cap stocks, 250, 259 smallpox, 344 smiling, 105 Smith, Adam, 28, 211 Smith, David V., 100 Sobel, Russell, 206 social Darwinism, 215 social exclusion, 85–86 social media, 55, 270, 405 Société Générale, 60–61 Society of Mind, The (Minsky), 132–133 sociobiology, 170–174, 216–217 Sociobiology (Wilson), 170–171 Solow, Herbert, 395 Soros, George, 6, 219, 222–223, 224, 227, 234, 244, 277 sovereign wealth funds, 230, 299, 409–410 Soviet Union, 411 Space Shuttle Challenger, 12–16, 24, 38 specialization, 217 speech synthesis, 132 Sperry, Roger, 113–114 “spoofing,” 360 Springer, James, 159 SR-52 programmable calculator, 357 stagflation, 37 Standard Portfolio Analysis of Risk (SPAN), 369–370 Stanton, Angela, 338 starfish, 192, 242 Star Trek, 395–397, 411, 414 stationarity, 253–255, 279, 282 statistical arbitrage (“statarb”), 284, 286, 288–291, 292–293, 362 statistical tests, 47 Steenbarger, Brett, 94 Stein, Carolyn, 69 sterilization, 171, 174 Stiglitz, Joseph, 224, 278, 310 Stocks for the Long Run (Siegel), 253 stock splits, 24, 47 Stone, Oliver, 346 Stone Age, 150, 163, 165 stone tools, 150–151, 153 stop-loss orders, 359 Strasberg, Lee, 105 stress, 3, 75, 93, 101, 
122, 160–161, 346, 413–415 strong connectedness, 374 Strong Story Hypothesis, 133 Strumpf, Koleman, 39 “stub quotes,” 360 subjective value, 100 sublenticular extended amygdala, 89 subprime mortgages, 290, 292, 293, 297, 321, 327, 376, 377, 410 Sugihara, George, 366 suicide, 160 Sullenberger, Chesley, 381 Summers, Lawrence (Larry), 50, 315–316, 319–320, 379 sunlight, 108 SuperDot (trading system), 236 supply and demand curves, 29, 30, 31–33, 34 Surowiecki, James, 5, 16 survey research, 40 Sussman, Donald, 237–238 swaps, 243, 298, 300 Swedish Twin Registry, 161 systematic bias, 56 systematic risk, 194, 199–203, 204, 205, 250–251, 348, 389 systemic risk, 319; Bank of England’s measurement of, 366–367; government as source of, 361; in hedge fund industry, 291, 317; of large vs. small shocks, 315; managing, 370–371, 376–378, 387; transparency of, 384–385; trust linked to, 344 Takahashi, Hidehiko, 86 Tanner, Carmen, 353 Tanzania, 150 Tartaglia, Nunzio, 236 Tattersall, Ian, 150, 154 Tech Bubble, 40 telegraphy, 356 Tennyson, Alfred, Baron, 144 testosterone, 108, 337–338 Texas hold ’em, 59–60 Texas Instruments, 357, 384 Thackray, John, 234 Thales, 16 Théorie de la Spéculation (Bachelier), 19 theory of mind, 109–111 thermal homeostasis, 367–368, 370 This Time Is Different (Reinhart and Rogoff), 310 Thompson, Robert, 1, 81–82, 83, 103–104 three-body problem, 214 ticker tape machine, 356 tight coupling, 321, 322, 361, 372Tiger Fund, 234 Tinker, Grant, 395 Tobin tax, 245 Tokugawa era, 17 Tooby, John, 173, 174 tool use, 150–151, 153, 162, 165 “toxic assets,” 299 trade execution, 257, 356 trade secrets, 284–285, 384 trading volume, 257, 359 transactions tax, 245 Treynor, Jack, 263 trial and error, 133, 141, 142, 182, 183, 188, 198, 265 Triangle Shirtwaist Fire, 378–379 tribbles, 190–205, 216 Trivers, Robert, 172 trolley dilemma, 339 Trusty, Jessica, 120 Tversky, Amos, 55, 58, 66–67, 68–69, 70–71, 90, 106, 113, 388 TWA Flight 800, 84–85 twins, 159, 161, 348 “two-legged goat 
effect,” 155 UBS, 61 Ultimatum Game, 336–338 uncertainty, 212, 218; risk vs., 53–55, 415 unemployment, 36–37 unintended consequences, 7, 248, 269, 330, 358, 375 United Kingdom, 222–223, 242, 377 University of Chicago, 22 uptick rule, 233 Urbach-Wiethe disease, 82–83 U.S.

Basic Income: A Radical Proposal for a Free Society and a Sane Economy
by Philippe van Parijs and Yannick Vanderborght
Published 20 Mar 2017

It also helps to prevent unemployed workers from sinking into unemployability through the mutual reinforcement of the obsolescence of their productive skills and the lowering of their professional aspirations. Second, the combination of the last two unconditionalities—universality and freedom from obligation—generates a systematic bias in favor of the creation and survival of jobs with high training content. One aspect of this is that a basic income helps give all young people access to unpaid or low-paid internships, otherwise monopolized by the privileged whose parents are able and willing to provide them with what amounts to privately funded basic incomes.

Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems
by Martin Kleppmann
Published 17 Apr 2017

When we develop predictive analytics systems, we are not merely automating a human’s decision by using software to specify the rules for when to say yes or no; we are even leaving the rules themselves to be inferred from data. However, the patterns learned by these systems are opaque: even if there is some correlation in the data, we may not know why. If there is a systematic bias in the input to an algorithm, the system will most likely learn and amplify that bias in its output [84]. In many countries, anti-discrimination laws prohibit treating people differently depending on protected traits such as ethnicity, age, gender, sexuality, disability, or beliefs. Other features of a person’s data may be analyzed, but what happens if they are correlated with protected traits?
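The amplification effect Kleppmann describes can be shown with a minimal sketch. Everything here is a hypothetical illustration (the data, the "neighbourhood" proxy feature, and the majority-vote rule), not the method of any real system: a rule inferred from historically biased decisions can turn a statistical gap into an absolute one.

```python
# Sketch: a naive rule "learned" from biased historical decisions
# amplifies the bias. All names and data below are hypothetical.
from collections import Counter, defaultdict

# Historical decisions: (neighbourhood, approved?). Neighbourhood is a
# proxy feature that may correlate with a protected trait.
history = (
    [("north", True)] * 7 + [("north", False)] * 3 +   # north: 70% approved
    [("south", True)] * 4 + [("south", False)] * 6     # south: 40% approved
)

def learn_majority_rule(data):
    """For each feature value, predict the majority historical label."""
    votes = defaultdict(Counter)
    for feature, label in data:
        votes[feature][label] += 1
    return {f: c.most_common(1)[0][0] for f, c in votes.items()}

model = learn_majority_rule(history)

# The historical data had a 30-point approval gap (70% vs 40%).
# The inferred rule hardens that gap into 100% vs 0%: the bias in the
# input is not merely reproduced but amplified in the output.
print(model)  # {'north': True, 'south': False}
```

A real learner is subtler than a per-feature majority vote, but the mechanism is the same: any feature correlated with the biased outcome becomes a lever for reproducing it.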

pages: 1,351 words: 404,177

Nixonland: The Rise of a President and the Fracturing of America
by Rick Perlstein
Published 1 Jan 2008

… “The answer, I think, is that Mayor Daley and his supporters have a point. Most of us in what is called the communication field are not rooted in the great mass of ordinary Americans—in Middle America. And the results show up not merely in occasional episodes such as the Chicago violence but more importantly in the systematic bias toward young people, minority groups, and the kind of presidential candidate who appeals to them. “To get a feel of this bias it is first necessary to understand the antagonism that divides the middle class of this country. On the one hand there are highly educated upper-income whites sure of themselves and brimming with ideas for doing things differently.

Principles of Corporate Finance
by Richard A. Brealey , Stewart C. Myers and Franklin Allen
Published 15 Feb 2014

The investor may not stop to reflect on how little one can learn about expected returns from three years’ experience. Most individuals are also too conservative, that is, too slow to update their beliefs in the face of new evidence. People tend to update their beliefs in the correct direction but the magnitude of the change is less than rationality would require. Another systematic bias is overconfidence. For example, an American small business has just a 35% chance of surviving for five years. Yet the great majority of entrepreneurs think that they have a better than 70% chance of success.22 Similarly, most investors think they are better-than-average stock pickers. Two speculators who trade with each other cannot both make money, but nevertheless they may be prepared to continue trading because each is confident that the other is the patsy.
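The conservatism Brealey, Myers, and Allen describe (updating in the right direction but by too little) can be made concrete with a small Bayesian sketch. The numbers and the one-half under-reaction factor are hypothetical choices for illustration:

```python
# Sketch of "conservatism" bias: a rational Bayesian update vs. an
# under-reacting update that moves only partway toward the posterior.
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior P(H | evidence) by Bayes' rule."""
    joint_true = prior * likelihood_if_true
    joint_false = (1 - prior) * likelihood_if_false
    return joint_true / (joint_true + joint_false)

prior = 0.5                                # initial belief a manager is skilled
posterior = bayes_update(prior, 0.8, 0.4)  # evidence: a good year, which a
                                           # skilled manager delivers with
                                           # prob. 0.8 vs 0.4 otherwise

# A conservative investor moves only halfway from prior to posterior:
conservative = prior + 0.5 * (posterior - prior)

print(round(posterior, 3))     # 0.667: what rationality requires
print(round(conservative, 3))  # 0.583: right direction, smaller magnitude
```

The conservative belief lies strictly between the prior and the rational posterior, which is exactly the pattern the text describes.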