p-value


description: a statistic used in hypothesis testing to indicate the strength of the evidence against the null hypothesis


pages: 719 words: 104,316

R Cookbook
by Paul Teetor
Published 28 Mar 2011

Then use the summary function to perform a chi-squared test of the contingency table: > summary(table(fac1,fac2)) The output includes a p-value. Conventionally, a p-value of less than 0.05 indicates that the variables are likely not independent whereas a p-value exceeding 0.05 fails to provide any such evidence. Discussion This example performs a chi-squared test on the contingency table of Recipe 9.3 and yields a p-value of 0.01255: > summary(table(initial,outcome)) Number of cases in table: 100 Number of factors: 2 Test for independence of all factors: Chisq = 8.757, df = 2, p-value = 0.01255 The small p-value indicates that the two factors, initial and outcome, are probably not independent.
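The recipe's test can be re-sketched outside R as well. Below is a minimal pure-Python version for the 2×2 case, using hypothetical counts (not the recipe's data) and no continuity correction; for df = 1 the chi-squared survival function reduces to erfc(√(x/2)):

```python
import math

def chi2_independence_2x2(table):
    """Pearson chi-squared test of independence for a 2x2 contingency
    table (no continuity correction). For df = 1 the chi-squared
    survival function is erfc(sqrt(x / 2))."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows = (a + b, c + d)
    cols = (a + c, b + d)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = rows[i] * cols[j] / n
            chi2 += (table[i][j] - expected) ** 2 / expected
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p_value

# Hypothetical counts showing a strong association between two factors
chi2, p = chi2_independence_2x2([[10, 20], [20, 10]])
print(round(chi2, 3), round(p, 4))  # 6.667 0.0098 -> not independent at the 0.05 level
```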

Solution Use the shapiro.test function: > shapiro.test(x) The output includes a p-value. Conventionally, p < 0.05 indicates that the population is likely not normally distributed whereas p > 0.05 provides no such evidence. Discussion This example reports a p-value of 0.4151 for x: > shapiro.test(x) Shapiro-Wilk normality test data: x W = 0.9651, p-value = 0.4151 The large p-value suggests the underlying population could be normally distributed. The next example reports a small p-value for y, so it is unlikely that this sample came from a normal population: > shapiro.test(y) Shapiro-Wilk normality test data: y W = 0.9503, p-value = 0.03520 I have highlighted the Shapiro–Wilk test because it is a standard R function.
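The Shapiro–Wilk statistic itself is nontrivial to reproduce by hand, so as an illustrative stand-in here is a Jarque–Bera-style normality check in Python: a different but standard test in the same spirit (small p suggests non-normality), based on sample skewness and excess kurtosis. Under H0 the JB statistic is approximately chi-squared with 2 df, whose survival function is exp(-x/2). The data below are hypothetical:

```python
import math

def jarque_bera(xs):
    """Jarque-Bera normality test: large skewness or excess kurtosis
    yields a large statistic and a small p-value (chi-squared, df = 2)."""
    n = len(xs)
    mean = sum(xs) / n
    devs = [x - mean for x in xs]
    var = sum(d ** 2 for d in devs) / n
    skew = (sum(d ** 3 for d in devs) / n) / var ** 1.5
    kurt = (sum(d ** 4 for d in devs) / n) / var ** 2
    jb = n / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)
    p_value = math.exp(-jb / 2)  # chi-squared(df=2) survival function
    return jb, p_value

symmetric = [1, 2, 3, 4, 5] * 10         # no skew: large p expected
skewed = [1] * 40 + [10] * 10            # heavy right skew: small p
print(jarque_bera(symmetric)[1] > 0.05)  # True
print(jarque_bera(skewed)[1] < 0.05)     # True
```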

That column highlights the significant variables. The line labeled "Signif. codes" at the bottom gives a cryptic guide to the flags’ meanings: *** p-value between 0 and 0.001 ** p-value between 0.001 and 0.01 * p-value between 0.01 and 0.05 . p-value between 0.05 and 0.1 (blank) p-value between 0.1 and 1.0 The column labeled Std. Error is the standard error of the estimated coefficient. The column labeled t value is the t statistic from which the p-value was calculated. Residual standard error Residual standard error: 1.625 on 26 degrees of freedom This reports the standard error of the residuals (σ)—that is, the sample standard deviation of ε.
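The "Signif. codes" legend is easy to restate as a small lookup. This Python sketch just reproduces the thresholds quoted above:

```python
def signif_code(p):
    """Map a p-value to the significance flags printed by R's summary()."""
    if p <= 0.001:
        return "***"
    elif p <= 0.01:
        return "**"
    elif p <= 0.05:
        return "*"
    elif p <= 0.1:
        return "."
    return ""

for p in (0.0004, 0.004, 0.04, 0.07, 0.4):
    print(repr(signif_code(p)))  # '***', '**', '*', '.', ''
```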

pages: 442 words: 94,734

The Art of Statistics: Learning From Data
by David Spiegelhalter
Published 14 Oct 2019

Overall, of 125 ‘discoveries’, 36% (45) are false discoveries. Since all these false discoveries were based on a P-value identifying a ‘significant’ result, P-values have been increasingly blamed for a flood of incorrect scientific conclusions. In 2015 a reputable psychology journal even announced that they would ban the use of NHST (Null Hypothesis Significance Testing). Finally, in 2016 the American Statistical Association (ASA) managed to get a group of statisticians to agree on six principles about P-values. The first of these principles simply points out what P-values can do: P-values can indicate how incompatible the data are with a specified statistical model.

Regardless of the actual experiments conducted, if the intervention really has no effect, it can be proved theoretically that any P-value that tests the null hypothesis is equally likely to take on any value between 0 and 1, and so the P-values from many studies testing the effect should tend to scatter uniformly. Whereas if there really is an effect, the P-values will tend to be skewed towards small values. The idea of the ‘P-curve’ is to look at all the actual P-values reported for significant test results – that is, when P < 0.05. Two features create suspicion. First, if there is a cluster of P-values just below 0.05, it suggests some massaging has been done to tip some of them over this crucial boundary.
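The uniformity claim is easy to check numerically. This Python sketch (not from the book) runs many z-tests on pure-noise data and tallies the two-sided p-values; erfc(|z|/√2) is the two-sided normal p-value:

```python
import math
import random

random.seed(42)  # reproducible simulation

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

# Under a true null the z statistics are standard normal, so the
# p-values should scatter uniformly on [0, 1]: about 5% fall below
# 0.05, about 50% below 0.5, and so on.
pvals = [two_sided_p(random.gauss(0, 1)) for _ in range(10_000)]
below_005 = sum(p < 0.05 for p in pvals) / len(pvals)
below_05 = sum(p < 0.5 for p in pvals) / len(pvals)
print(round(below_005, 3), round(below_05, 3))  # close to 0.05 and 0.5
```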

Such studies are lengthy and expensive, and may not identify many rare events. P-value: a measure of discrepancy between data and a null hypothesis. For a null hypothesis H0, let T be a statistic for which large values indicate inconsistency with H0. Suppose we observe a value t. Then a (one-sided) P-value is the probability of observing such an extreme value, were H0 true, that is P(T ≥ t|H0). If both small and large values of T indicate inconsistency with H0, then the two-sided P-value is the probability of observing such a large value in either direction. Often the two-sided P-value is simply taken as double the one-sided P-value, while the R software uses the total probability of events which have a lower probability of occurring than that actually observed.
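For a symmetric null distribution the doubling rule in the glossary entry is exact. A small Python sketch for a standard-normal test statistic, where P(T ≥ t | H0) = erfc(t/√2)/2:

```python
import math

def one_sided_p(t):
    """P(T >= t | H0) for a standard-normal test statistic T."""
    return math.erfc(t / math.sqrt(2)) / 2

def two_sided_p(t):
    """Double the one-sided tail: valid when the null distribution
    of T is symmetric about zero."""
    return 2 * one_sided_p(abs(t))

t = 1.96
print(round(one_sided_p(t), 3), round(two_sided_p(t), 3))  # 0.025 0.05
```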

Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth
by Stuart Ritchie
Published 20 Jul 2020

I should also say that there are a whole host of ways to adjust your p-value threshold if you’ve calculated a lot of them – you might only accept p-values that fall below 0.01 as significant instead of 0.05, for example. The problem is that most researchers forget to do this – or when they’re p-hacking, they don’t feel like they’ve really done so many tests, even if they have. There’s also the interesting philosophical question of how many p-values a scientist should be correcting for. Every p-value they’ve calculated in that specific paper? Every p-value they’ve calculated while researching that topic? Every p-value they’ve calculated in their entire career?
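One standard way to make the adjustment Ritchie describes (my choice of method, not the book's) is the Holm–Bonferroni step-down procedure: sort the m p-values and compare the k-th smallest against α/(m − k + 1), stopping at the first failure:

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Holm-Bonferroni step-down multiple-testing correction.
    Returns a reject/fail-to-reject flag for each p-value,
    in the original order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if pvals[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return reject

# Three hypothetical p-values: only the smallest survives correction
print(holm_bonferroni([0.03, 0.001, 0.04]))  # [False, True, False]
```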

When you run one of these programs, its output will include, alongside many other useful numbers, the relevant p-value.15 Despite being one of the most commonly used statistics in science, the p-value has a notoriously tricky definition. A recent audit found that a stunning 89 per cent of a sample of introductory psychology textbooks got the definition wrong; I’ll try to avoid making the same mistake here.16 The p-value is the probability that your results would look the way they look, or would seem to show an even bigger effect, if the effect you’re interested in weren’t actually present.17 Notably, the p-value doesn’t tell you the probability that your result is true (whatever that might mean), nor how important it is.

For the American Statistical Association’s consensus position on p-values, written surprisingly comprehensibly, see Ronald L. Wasserstein & Nicole A. Lazar, ‘The ASA Statement on p-Values: Context, Process, and Purpose’, The American Statistician 70, no. 2 (2 April 2016): pp. 129–33; https://doi.org/10.1080/00031305.2016.1154108. It defines the p-value like this: ‘the probability under a specified statistical model that a statistical summary of the data (e.g., the sample mean difference between two compared groups) would be equal to or more extreme than its observed value’, p. 131. 18. Why does the definition of the p-value (‘how likely is it that pure noise would give you results like the ones you have, or ones with an even larger effect’) have that ‘or an even larger effect’ clause in it?

Beginning R: The Statistical Programming Language
by Mark Gardener
Published 13 Jun 2012

You might prefer to display the values as whole numbers and you can adjust the output “on the fly” by using the round() command to choose how many decimal points to display the values like so: > round(bird.cs$exp, 0) Garden Hedgerow Parkland Pasture Woodland Blackbird 60 11 24 4 2 Chaffinch 17 3 7 1 1 Great Tit 40 7 16 3 1 House Sparrow 44 8 17 3 2 Robin 8 2 3 1 0 Song Thrush 6 1 2 0 0 In this instance you chose to use no decimals at all and so use 0 as an instruction in the round() command. Monte Carlo Simulation You can decide to determine the p-value by a slightly different method and can use a Monte Carlo simulation to do this. You add an extra instruction to the chisq.test() command, simulate.p.value = TRUE, like so: > chisq.test(bird.df, simulate.p.value = TRUE, B = 2500) Pearson's Chi-squared test with simulated p-value (based on 2500 replicates) data: bird.df X-squared = 78.2736, df = NA, p-value = 0.0003998 The default is that simulate.p.value = FALSE and that B = 2000. The latter is the number of replicates to use in the Monte Carlo test, which is set to 2500 for this example.
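The idea behind simulate.p.value can be sketched in Python too. This hypothetical version builds individual-level records from a 2×2 table, repeatedly shuffles one factor (a permutation scheme that preserves the margins, similar in spirit to R's Monte Carlo option), and reports the fraction of simulated chi-squared statistics at least as large as the observed one, using the add-one rule as in R's chisq.test:

```python
import random

def chi2_stat(table):
    """Pearson chi-squared statistic for a 2x2 table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    return sum(
        (table[i][j] - rows[i] * cols[j] / n) ** 2 / (rows[i] * cols[j] / n)
        for i in range(2) for j in range(2)
    )

def monte_carlo_p(table, B=2000, seed=1):
    """Monte Carlo p-value: shuffle one factor, recount, compare."""
    rng = random.Random(seed)
    f1 = [i for i in range(2) for _ in range(sum(table[i]))]
    f2 = [j for i in range(2) for j in range(2) for _ in range(table[i][j])]
    observed = chi2_stat(table)
    hits = 0
    for _ in range(B):
        rng.shuffle(f2)
        sim = [[0, 0], [0, 0]]
        for i, j in zip(f1, f2):
            sim[i][j] += 1
        if chi2_stat(sim) >= observed:
            hits += 1
    return (hits + 1) / (B + 1)  # add-one rule

p = monte_carlo_p([[20, 5], [5, 20]])
print(p < 0.05)  # True: strong association in this hypothetical table
```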

Now run the chi-squared test again but this time use a Monte Carlo simulation with 3000 replicates to determine the p-value: > (bees.cs = chisq.test(bees, simulate.p.value = TRUE, B = 3000)) Pearson's Chi-squared test with simulated p-value (based on 3000 replicates) data: bees X-squared = 120.6531, df = NA, p-value = 0.0003332 4. Look at a portion of the data as a 2 × 2 contingency table. Examine the effect of Yates’ correction on this subset: > bees[1:2, 4:5] Honey.bee Carder.bee Thistle 12 8 Vipers.bugloss 13 27 > chisq.test(bees[1:2, 4:5], correct = FALSE) Pearson's Chi-squared test data: bees[1:2, 4:5] X-squared = 4.1486, df = 1, p-value = 0.04167 > chisq.test(bees[1:2, 4:5], correct = TRUE) Pearson's Chi-squared test with Yates' continuity correction data: bees[1:2, 4:5] X-squared = 3.0943, df = 1, p-value = 0.07857 5.

Two-Sample U-Test The basic way of using the wilcox.test() is to specify the two samples you want to compare as separate vectors, as the following example shows: > data1 ; data2 [1] 3 5 7 5 3 2 6 8 5 6 9 [1] 3 5 7 5 3 2 6 8 5 6 9 4 5 7 3 4 > wilcox.test(data1, data2) Wilcoxon rank sum test with continuity correction data: data1 and data2 W = 94.5, p-value = 0.7639 alternative hypothesis: true location shift is not equal to 0 Warning message: In wilcox.test.default(data1, data2) : cannot compute exact p-value with ties By default the confidence intervals are not calculated and the p-value is adjusted using the “continuity correction”; a message tells you that the latter has been used. In this case you see a warning message because you have tied values in the data. If you set exact = FALSE, this message would not be displayed because the p-value would be determined from a normal approximation method.
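A Python sketch of the statistic behind wilcox.test: the W that R reports is the Mann–Whitney U for the first sample, i.e. the number of pairs (x, y) with x > y, counting ties as ½. The normal-approximation p-value, continuity correction, and tie correction are omitted here; only the statistic is reproduced:

```python
def mann_whitney_u(xs, ys):
    """U statistic for the first sample: #(x > y) + 0.5 * #(x == y)."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1
            elif x == y:
                u += 0.5
    return u

data1 = [3, 5, 7, 5, 3, 2, 6, 8, 5, 6, 9]
data2 = [3, 5, 7, 5, 3, 2, 6, 8, 5, 6, 9, 4, 5, 7, 3, 4]
print(mann_whitney_u(data1, data2))       # 94.5, matching W above
print(mann_whitney_u([1, 2, 3], [4, 5, 6]))  # 0.0 (all x below all y)
```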

Evidence-Based Technical Analysis: Applying the Scientific Method and Statistical Inference to Trading Signals
by David Aronson
Published 1 Nov 2006

However, there are some standards that are commonly used. FIGURE 5.9 P-Value: fractional area of sampling distribution greater than +3.5%, conditional probability of +3.5% or more given that H0 is true. A p-value of 0.10 is often called possibly significant. A p-value of 0.05 or less is typically termed statistically significant and is usually considered to be the largest p-value that would give a scientist license to reject H0. When the p-value is 0.01 or less it is called very significant and values of 0.001 or less are termed highly significant.

In a hypothesis test, this conditional probability is given the special name p-value. Specifically, it is the probability that the observed value of the test statistic could have occurred conditioned upon (given that) the hypothesis being tested (H0) is true. The smaller the p-value, the greater is our justification for calling into question the truth of H0. If the p-value is less than a threshold, which must be defined before the test is carried out, H0 is rejected and HA accepted. The p-value can also be interpreted as the probability H0 will be erroneously rejected when H0 is in fact true. P-value also has a graphical interpretation. It is equal to the fraction of the sampling distribution’s total area that lies at values equal to and greater than the observed value of the test statistic.

The value 0.10 is the sample statistic’s p-value. This fact is equivalent to saying that if the rule’s true return were zero, there is a 0.10 probability that its return in a back test would attain a value as high as +3.5 percent or higher due to sampling variability (chance). This is illustrated in Figure 5.9. p-value, Statistical Significance, and Rejecting the Null Hypothesis A second name for the p-value of the test statistic is the statistical significance of the test. The smaller the p-value, the more statistically significant the test result. A statistically significant result is one for which the p-value is low enough to warrant a rejection of H0.
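Aronson's graphical reading of the p-value, as the fraction of the sampling distribution at or beyond the observed statistic, translates directly into code when the sampling distribution is represented by simulated draws. The numbers below are hypothetical, not the book's:

```python
import random

random.seed(7)  # reproducible simulation

def empirical_p(null_draws, observed):
    """Fraction of the simulated null sampling distribution that is
    at or above the observed test statistic (one-sided p-value)."""
    return sum(d >= observed for d in null_draws) / len(null_draws)

# Hypothetical null sampling distribution of a rule's mean return:
# centered at 0% with a 2% standard deviation.
null_draws = [random.gauss(0, 2) for _ in range(100_000)]
p = empirical_p(null_draws, 3.5)
print(round(p, 3))  # close to 0.04: the normal tail beyond 1.75 sd
```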

pages: 443 words: 51,804

Handbook of Modeling High-Frequency Data in Finance
by Frederi G. Viens , Maria C. Mariani and Ionut Florescu
Published 20 Dec 2011

However, we could clearly see them in the figures obtained using the DFA method. FIGURE 6.5 Analysis results for EEM index using the entire period available (cumulative distributions of normalized returns for T = 1, 4, 8, 16 with tail exponents α = 1.50–1.60; DFA analysis α = 0.74338; Hurst analysis H = 0.57794). FIGURE 6.6 Analysis results for S&P 500 index using the entire period available (cumulative distributions of normalized returns for T = 1, 4, 8, 16 with tail exponents α = 1.40–1.55; DFA analysis α = 0.67073; Hurst analysis H = 0.56657).
[Normal probability plots of 2003 daily returns. iShares MSCI EAFE Index (EFA), 1/3/03 to 1/2/04 (Mean 0.0005256, StDev 0.004476, N 252): Anderson–Darling AD = 0.444, P-value = 0.283; Kolmogorov–Smirnov KS = 0.055, P-value = 0.064. S&P 500 (827), 1/2/03 to 12/31/03 (Mean 0.0004035, StDev 0.004663, N 252): Anderson–Darling AD = 0.418, P-value = 0.327; Ryan–Joiner RJ = 0.995, P-value = 0.094; Kolmogorov–Smirnov KS = 0.039, P-value > 0.150.]

It is worth mentioning that while the stationarity tests reject the presence of the unit root in the characteristic polynomial, that does not necessarily mean that the data is stationary, only that the particular type of nonstationarity indicated is rejected. FIGURE 6.1 Plot of the empirical CDF of the returns for Stock 1. (a) The image contains the original CDF. (b) The image is the same empirical CDF but rescaled so that the discontinuities are clearly seen. TABLE 6.1 DFA and Hurst analysis for Stocks 1–26: the ADF and PP tests give p-values < 0.01 for every stock, and the KPSS test gives p-values > 0.1 for every stock except Stock 10 (0.07686), Stock 18 (0.076), and Stock 25 (0.02718); the per-stock DFA and Hurst exponents (with standard errors) span roughly 0.32–0.81. Abbreviations: ADF, augmented Dickey–Fuller test for unit-root stationarity; PP, Phillips–Perron unit-root test; KPSS, Kwiatkowski–Phillips–Schmidt–Shin test for stationarity; DFA, detrended fluctuation analysis; Hurst, rescaled range analysis.


pages: 589 words: 69,193

Mastering Pandas
by Femi Anthony
Published 21 Jun 2015

So, it would be almost impossible to obtain the value that we observe if the null hypothesis were actually true. In more formal terms, we would normally define a threshold or alpha value and reject the null hypothesis if the p-value ≤ α, or fail to reject otherwise. The typical values for α are 0.05 or 0.01. The following list explains the different values of alpha: p-value < 0.01: There is VERY strong evidence against H0 0.01 < p-value < 0.05: There is strong evidence against H0 0.05 < p-value < 0.1: There is weak evidence against H0 p-value > 0.1: There is little or no evidence against H0 Therefore, in this case, we would reject the null hypothesis and give credence to Intelligenza's claim and state that their claim is highly significant.

Rejecting the null hypothesis is tantamount to accepting the alternative hypothesis and vice versa. The alpha and p-values In order to conduct an experiment to decide for or against our null hypothesis, we need to come up with an approach that will enable us to make the decision in a concrete and measurable way. To do this test of significance, we have to consider two numbers—the p-value of the test statistic and the threshold level of significance, which is also known as alpha. The p-value is the probability that the result we observe, assuming that the null hypothesis is true, occurred by chance alone. The p-value can also be thought of as the probability of obtaining a test statistic as extreme as or more extreme than the actual obtained test statistic, given that the null hypothesis is true.

The alpha value is the threshold value against which we compare p-values. This gives us a cut-off point in order to accept or reject the null hypothesis. It is a measure of how extreme the results we observe must be in order to reject the null hypothesis of our experiment. The most commonly used values of alpha are 0.05 or 0.01. In general, the rule is as follows: If the p-value is less than or equal to alpha (p ≤ .05), then we reject the null hypothesis and state that the result is statistically significant. If the p-value is greater than alpha (p > .05), then we have failed to reject the null hypothesis, and we say that the result is not statistically significant.
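The decision rule just described is a one-liner in code. A minimal Python sketch:

```python
def decide(p_value, alpha=0.05):
    """Compare a p-value to the significance threshold alpha."""
    if p_value <= alpha:
        return "reject H0 (statistically significant)"
    return "fail to reject H0 (not statistically significant)"

print(decide(0.03))        # reject H0 (statistically significant)
print(decide(0.2))         # fail to reject H0 (not statistically significant)
print(decide(0.03, 0.01))  # fail to reject H0 (not statistically significant)
```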

Calling Bullshit: The Art of Scepticism in a Data-Driven World
by Jevin D. West and Carl T. Bergstrom
Published 3 Aug 2020

“When a measure becomes a target, it ceases to be a good measure.” In a sense this is what has happened with p-values. Because a p-value lower than 0.05 has become essential for publication, p-values no longer serve as a good measure of statistical support. If scientific papers were published irrespective of p-values, these values would remain useful measures of the degree of statistical support for rejecting a null hypothesis. But since journals have a strong preference for papers with p-values below 0.05, p-values no longer serve their original purpose.*10 In 2005, the epidemiologist John Ioannidis summarized the consequences of the file drawer effect in an article with the provocative title “Why Most Published Research Findings Are False.”

To assess the results of the mind-reading test we want to know the probability that something other than random chance was responsible for Carl’s score. So here’s the dirty secret about p-values in science. When scientists report p-values, they’re doing something a bit like the prosecutor did in reporting the chance of an innocent person matching the fingerprint from the crime scene. They would like to know the probability that their null hypothesis is wrong, in light of the data they have observed. But that’s not what a p-value is. A p-value describes the probability of getting data at least as extreme as those observed, if the null hypothesis were true. Unlike the prosecutor, scientists aren’t trying to trick anybody when they report this.
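For a concrete version of that definition, consider a guessing game like the mind-reading test: if someone scores k or more correct out of n trials when chance alone gives probability p0 per trial, the one-sided p-value is an exact binomial tail. The numbers below are hypothetical, not Carl's actual score:

```python
from math import comb

def binomial_tail_p(k, n, p0):
    """P(X >= k | H0) for X ~ Binomial(n, p0): the probability of a
    score at least as extreme as k if only chance is operating."""
    return sum(comb(n, i) * p0 ** i * (1 - p0) ** (n - i)
               for i in range(k, n + 1))

# Hypothetical: 8 correct calls out of 10 fair coin flips
p = binomial_tail_p(8, 10, 0.5)
print(round(p, 4))  # 0.0547: unlikely under chance, yet not below 0.05
```

Note that this is still P(data | H0), not P(H0 | data); the code makes the direction of the conditioning explicit.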

How then do we explain the replication crisis? To answer this, it is helpful to take a detour and look at a statistic known as a p-value. THE PROSECUTOR’S FALLACY As we’ve seen, most scientific studies look to patterns in data to make inferences about the world. But how can we distinguish a pattern from random noise? And how do we quantify how strong a particular pattern is? While there are a number of ways to draw these distinctions, the most common is the use of p-values. Loosely speaking, a p-value tells us how likely it is that the pattern we’ve seen could have arisen by chance alone. If that is highly unlikely, we say that the result is statistically significant.

pages: 579 words: 76,657

Data Science from Scratch: First Principles with Python
by Joel Grus
Published 13 Apr 2015

One way to convince yourself that this is a sensible estimate is with a simulation: extreme_value_count = 0 for _ in range(100000): num_heads = sum(1 if random.random() < 0.5 else 0 # count # of heads for _ in range(1000)) # in 1000 flips if num_heads >= 530 or num_heads <= 470: # and count how often extreme_value_count += 1 # the # is 'extreme' print(extreme_value_count / 100000) # 0.062 Since the p-value is greater than our 5% significance, we don’t reject the null. If we instead saw 532 heads, the p-value would be: two_sided_p_value(531.5, mu_0, sigma_0) # 0.0463 which is smaller than the 5% significance, which means we would reject the null. It’s the exact same test as before. It’s just a different way of approaching the statistics. Similarly, we would have: upper_p_value = normal_probability_above lower_p_value = normal_probability_below For our one-sided test, if we saw 525 heads we would compute: upper_p_value(524.5, mu_0, sigma_0) # 0.061 which means we wouldn’t reject the null.

In a situation like this, where n is much larger than k, we can use normal_cdf and still feel good about ourselves: def p_value(beta_hat_j, sigma_hat_j): if beta_hat_j > 0: # if the coefficient is positive, we need to compute twice the # probability of seeing an even *larger* value return 2 * (1 - normal_cdf(beta_hat_j / sigma_hat_j)) else: # otherwise twice the probability of seeing a *smaller* value return 2 * normal_cdf(beta_hat_j / sigma_hat_j) p_value(30.63, 1.174) # ~0 (constant term) p_value(0.972, 0.079) # ~0 (num_friends) p_value(-1.868, 0.131) # ~0 (work_hours) p_value(0.911, 0.990) # 0.36 (phd) (In a situation not like this, we would probably be using statistical software that knows how to compute the t-distribution, as well as how to compute the exact standard errors.) While most of the coefficients have very small p-values (suggesting that they are indeed nonzero), the coefficient for “PhD” is not “significantly” different from zero, which makes it likely that the coefficient for “PhD” is random rather than meaningful.

So a 5%-significance test involves using normal_probability_below to find the cutoff below which 95% of the probability lies: hi = normal_upper_bound(0.95, mu_0, sigma_0) # is 526 (< 531, since we need more probability in the upper tail) type_2_probability = normal_probability_below(hi, mu_1, sigma_1) power = 1 - type_2_probability # 0.936 This is a more powerful test, since it no longer rejects when X is below 469 (which is very unlikely to happen if H1 is true) and instead rejects when X is between 526 and 531 (which is somewhat likely to happen if H1 is true). p-values An alternative way of thinking about the preceding test involves p-values. Instead of choosing bounds based on some probability cutoff, we compute the probability — assuming H0 is true — that we would see a value at least as extreme as the one we actually observed. For our two-sided test of whether the coin is fair, we compute: def two_sided_p_value(x, mu=0, sigma=1): if x >= mu: # if x is greater than the mean, the tail is what's greater than x return 2 * normal_probability_above(x, mu, sigma) else: # if x is less than the mean, the tail is what's less than x return 2 * normal_probability_below(x, mu, sigma) If we were to see 530 heads, we would compute: two_sided_p_value(529.5, mu_0, sigma_0) # 0.062 Note Why did we use 529.5 instead of 530?

pages: 250 words: 64,011

Everydata: The Misinformation Hidden in the Little Data You Consume Every Day
by John H. Johnson
Published 27 Apr 2016

It’s a measure of how probable it is that the effect we’re seeing is real (rather than due to chance occurrence), which is why it’s typically measured with a p-value. P, in this case, stands for probability. If you accept p-values as a measure of statistical significance, then the lower your p-value is, the less likely it is that the results you’re seeing are due to chance alone.17 One oft-accepted measure of statistical significance is a p-value of less than .05 (which equates to 5 percent probability). The widespread use of this threshold goes back to the 1920s, when it was popularized by Ronald Fisher, a mathematician who studied the effect of fertilizer on crops, among other things.18 Now, we’re not here to debate whether a p-value of .05 is an appropriate standard for statistical significance, or even whether p-values themselves are the right way to determine statistical significance.19 Instead, we’re here to tell you that p-values—including the .05 threshold—are the standard in many applications.

And that’s why they matter to you. Because when you see an article about the latest scientific discovery, it’s quite likely that it has only been accepted by the scientific community—and reported by the media—because it has a p-value below .05. It may seem somewhat arbitrary, but, as Derek Daniels, PhD (an associate professor at the University at Buffalo) told us, “having a line allows us to stay objective.

Aggregated data—Individual data points combined together into groups (e.g., the total number of votes in a state are aggregated to determine who receives that state’s Electoral College votes)
Average—A type of summary statistic (usually the mean, mode, or median) that describes the data in a single metric
Big data—Data that’s too big for people to process without the use of sophisticated machinery or computing capacity, given its enormous volume
Bivariate relationship—A fancy way of saying that there is a relationship between two (“bi”) variables (“variate”) (e.g., the price of your house is related to the number of bathrooms it has)
Black swan event—Something that is highly improbable, yet has a massive impact when it occurs
Causation—A relationship where it is determined that one factor causes another factor
Cherry-picking—Choosing anecdotal examples from the data to make your point, while ignoring other data points that may contradict it
Confidence interval—A way to measure the level of statistical certainty about results; typically expressed as a range of values, the confidence interval tells you the range of values within which you’re likely to see the estimate (assuming, of course, you have a random—and representative—sample)
Confidence level—The term we use to determine how confident we are that we’re measuring the data correctly
Confirmation bias—The tendency to interpret data in a way that reinforces your preconceptions
Correlation—A type of statistical relationship between two variables, usually defined as positive (moving in the same direction) or negative (moving in opposite directions)
Data—Information or facts
Dependence—When one variable is said to be directly determined by another
Deterministic forecast—A forecast for which you determine a precise outcome (e.g., it will rain tomorrow at 9 a.m. at my house)
Economic impact—How much something is going to cost in terms of time, money, health, or other resources
Estimate—A statistic capturing an inference about a population from a sample of data
Everydata—The term we use to describe everyday data
External validity—The extent to which the results from your sample can be extended to draw meaningful conclusions about the full population
False positive—A situation in which the statistical forecast predicts an untrue outcome (e.g., your credit card company calls you suspecting a recent purchase you actually made was fraudulent)
Forecast—A statement about the future; while forecast and prediction may have different meanings to specific groups of people (see chapter 8), we generally use them synonymously unless noted otherwise
Forecast bias—The term used to describe when a prediction is consistently high (a positive forecast bias) or low (a negative bias)
Inference—The process of making statistical conclusions about the data
Magnitude—Essentially, the size of the effect
Margin of error—A way to measure statistical uncertainty
Mean—What most people think of when you say “average” (to get the mean, you add up all the values, then divide by the number of data points)
Median—The middle value in a data set that has been rank ordered
Misrepresentation—When data is portrayed in an inaccurate or misleading manner
Mode—The data point (or points) most frequently found in your data
Observation—Looking at one unit, such as a person, a price, or a day
Odds—In statistics, the odds of something happening is the ratio of the probability of an outcome to the probability that it doesn’t occur (e.g., a horse’s statistical odds of winning a race might be ½, or 1 to 2, which means the horse is expected to win one out of every three races; in betting jargon, the odds are typically the reverse, so this same horse would have 2–1 odds against, which means it has a ⅔ chance of losing)
Omitted variable—A variable that plays a role in a relationship, but may be overlooked or otherwise not included; omitted variables are one of the primary reasons why correlation doesn’t equal causation
Outlier—A particular observation that doesn’t fit; it may be much higher (or lower) than all the other data, or perhaps it just doesn’t fall into the pattern of everything else that you’re seeing
P-hacking—Named after p-values, p-hacking is a term for the practice of repeatedly analyzing data, trying to find ways to make nonsignificant results significant
P-value—A way to measure statistical significance; the lower your p-value is, the less likely it is that the results you’re seeing are due to chance
Population—The entire set of data or observations that you want to study and draw inferences about; statisticians rarely have the ability to look at the entire population in a study, although it could be possible with a small, well-defined group (e.g., the voting habits of all 100 U.S. senators)
Prediction—See forecast
Prediction error—A way to measure uncertainty in the future, essentially by comparing the predicted results to the actual outcomes, once they occur
Prediction interval—The range in which we expect to see the next data point
Probabilistic forecast—A forecast where you determine the probability of an outcome (e.g., there is a 30 percent chance of thunderstorms tomorrow)
Probability—The likelihood (typically expressed as a percentage, fraction, or decimal) that an outcome will occur
Proxy—A factor that you believe is closely related (but not identical) to another difficult-to-measure factor (e.g., IQ is a proxy for innate ability)
Random—When an observed pattern is due to chance, rather than some observable process or event
Risk—A term that can mean different things to different people; in general, risk takes into account not only the probability of an event, but also the consequences
Sample—Part of the full population (e.g., the set of Challenger launches with O-ring failures)
Sample selection—A potential statistical problem that arises when the way a sample has been chosen is directly related to the outcomes one is studying; also, sometimes used to describe the process of determining a sample from a population
Sampling error—The uncertainty of not knowing if a sample represents the true value in the population or not
Selection bias—A potential concern when a sample is comprised of those who chose to participate, a factor which may bias the results
Spurious correlation—A statistical relationship between two factors that has no practical or economic meaning, or one that is driven by an omitted variable (e.g., the relationship between murder rates and ice cream consumption)
Statistic—A numeric measure that describes an aspect of the data (e.g., a mean, a median, a mode)
Statistical impact—Having a statistically significant effect of some undetermined size
Statistical significance—A probability-based method to determine whether an observed effect is truly present in the data, or just due to random chance
Summary statistic—Metric that provides information about one or more aspects of the data; averages and aggregated data are two examples of summary statistics
Weighted average—An average calculated by assigning each value a weight (based on the value’s relative importance)

pages: 50 words: 13,399

The Elements of Data Analytic Style
by Jeff Leek
Published 1 Mar 2015

A few matters of form

Report estimates followed by parentheses: "The increase is 5.3 units (95% CI: 4.3, 6.3 units)." When reporting P-values, do not report numbers below machine precision. P-values less than 2 × 10^-16 are generally below machine precision and inaccurate. Reporting a P-value of 1.35 × 10^-25 is effectively reporting a P-value of 0, and caution should be urged. A common approach is to report censored P-values such as P < 1 × 10^-8. When reporting permutation P-values, avoid reporting a value of zero. P-values should be calculated as (K + 1)/(B + 1), where B is the number of permutations and K is the number of permuted statistics at least as extreme as the observed statistic.
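The (K + 1)/(B + 1) rule can be sketched as follows. This is an illustrative implementation, not the book's code: `permutation_p_value` and `mean_diff` are assumed names, and a two-sample difference-in-means statistic is assumed as the test statistic:

```python
import random

def permutation_p_value(x, y, stat, permutations=1000, seed=0):
    """Permutation p-value with the (K + 1)/(B + 1) correction, so the
    reported value can never be exactly zero."""
    rng = random.Random(seed)
    observed = stat(x, y)
    pooled = list(x) + list(y)
    k = 0
    for _ in range(permutations):
        rng.shuffle(pooled)
        null_stat = stat(pooled[:len(x)], pooled[len(x):])
        if abs(null_stat) >= abs(observed):   # null statistic at least as extreme
            k += 1
    return (k + 1) / (permutations + 1)

def mean_diff(a, b):
    """Difference in group means, an assumed test statistic."""
    return sum(a) / len(a) - sum(b) / len(b)
```

Even when no permuted statistic reaches the observed value, the smallest reportable p-value is 1/(B + 1) rather than zero.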

Before performing inference each variable should be plotted versus time to detect dependencies, and similarly for space. Similarly, identifying potential confounders should occur before model fitting. 6.12.2 Focusing on p-values over confidence intervals P-values can be a useful measure of statistical significance if used properly. However, a p-value alone is not sufficient for any convincing analysis. A measure of inference on a scientific scale (such as confidence intervals or credible intervals) should be reported and interpreted with every p-value. 6.12.3 Inference without exploration A very common mistake is to move directly to model fitting and calculation of statistical significance.
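One way to follow this advice is to report the estimate together with an interval rather than a bare p-value. A minimal sketch using a normal approximation; the helper name and sample data are illustrative assumptions, not from the book:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def estimate_with_ci(sample, confidence=0.95):
    """Return (estimate, lower, upper): the sample mean with a
    normal-approximation confidence interval around it."""
    m = mean(sample)
    se = stdev(sample) / sqrt(len(sample))        # standard error of the mean
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # ~1.96 for 95%
    return m, m - z * se, m + z * se
```

In a real analysis this interval would be reported alongside the p-value of whatever test was run, following the "estimates followed by parentheses" convention above.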

Analysis of Financial Time Series
by Ruey S. Tsay
Published 14 Oct 2001

The sample ACFs are all within their two standard-error limits, indicating that they are not significant at the 5% level. In addition, for the simple returns, the Ljung–Box statistics give Q(5) = 5.4 and Q(10) = 14.1, which correspond to p values of 0.37 and 0.17, respectively, based on chi-squared distributions with 5 and 10 degrees of freedom. For the log returns, we have Q(5) = 5.8 and Q(10) = 13.7 with p values of 0.33 and 0.19, respectively. The joint tests confirm that monthly IBM stock returns have no significant serial correlations. Figure 2.2 shows the same for the monthly returns of the value-weighted index from the Center for Research in Security Prices (CRSP), University of Chicago.
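The quoted Q statistics and p values can be reproduced from the sample autocorrelations. A sketch, with illustrative helper names; the chi-squared survival function is written in the closed form that holds for even degrees of freedom:

```python
from math import exp

def ljung_box_q(acf, n):
    """Ljung-Box statistic Q(m) = n(n + 2) * sum(rho_k^2 / (n - k))
    for sample autocorrelations rho_1..rho_m of a series of length n."""
    return n * (n + 2) * sum(r * r / (n - k) for k, r in enumerate(acf, start=1))

def chi2_sf(x, df):
    """P(X > x) for a chi-squared variable; closed form valid for even df."""
    assert df % 2 == 0, "closed form shown only for even degrees of freedom"
    term, total = 1.0, 1.0
    for i in range(1, df // 2):
        term *= (x / 2) / i          # builds (x/2)^i / i!
        total += term
    return exp(-x / 2) * total

# Checking against the quoted values:
# chi2_sf(14.1, 10) ~ 0.17 (simple returns), chi2_sf(13.7, 10) ~ 0.19 (log returns)
```

Under the null of no serial correlation, Q(m) is asymptotically chi-squared with m degrees of freedom, so the p value is the survival function evaluated at the observed Q.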

Consider the residual series of the fitted AR(3) model for the monthly value-weighted simple returns. We have Q(10) = 15.8 with p value 0.027 based on its asymptotic chi-squared distribution with 7 degrees of freedom (the 10 lags minus the 3 estimated AR coefficients). Thus, the null hypothesis of no residual serial correlation in the first 10 lags is rejected at the 5% level, but not at the 1% level. If the model is refined to an AR(5) model, then we have

    r_t = 0.0092 + 0.107 r_{t-1} - 0.001 r_{t-2} - 0.123 r_{t-3} + 0.028 r_{t-4} + 0.069 r_{t-5} + a_t,

with σ̂_a = 0.054. The AR coefficients at lags 1, 3, and 5 are significant at the 5% level. The Ljung–Box statistics give Q(10) = 11.2 with p value 0.048. This model shows some improvements and appears to be marginally adequate at the 5% significance level.

The Ljung–Box statistics of the residuals give Q(10) = 11.4 with p value 0.122, which is based on an asymptotic chi-squared distribution with 7 degrees of freedom. The model appears to be adequate except for a few marginal residual ACFs at lags 14, 17, and 20. The exact maximum likelihood method produces the fitted model

    r_t = 0.0132 + a_t + 0.1806 a_{t-1} - 0.1315 a_{t-3} + 0.1379 a_{t-9},   σ̂_a = 0.0727,   (2.21)

where standard errors of the estimates are 0.0029, 0.0329, 0.0330, and 0.0328, respectively. The Ljung–Box statistics of the residuals give Q(10) = 11.6 with p value 0.116. This fitted model is also adequate. Comparing models (2.20) and (2.21), we see that for this particular instance, the difference between the conditional and exact likelihood methods is negligible.

2.5.4 Forecasting Using MA Models

Forecasts of an MA model can easily be obtained.
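A sketch of how the one-step forecast of a fitted MA model such as (2.21) could be computed; the helper and variable names are illustrative, not from the book. Because the future shock has mean zero, the forecast is the constant plus the theta-weighted recent shocks:

```python
def ma_one_step_forecast(c, theta, shocks):
    """One-step-ahead forecast of an MA(q) model
    r_{t+1} = c + a_{t+1} + sum_j theta_j * a_{t+1-j}:
    replace the unknown future shock a_{t+1} by its mean of zero and
    plug in the estimated past shocks (most recent last in `shocks`)."""
    q = len(theta)                    # theta[j-1] multiplies a_{t+1-j}
    recent = shocks[-q:][::-1]        # a_t, a_{t-1}, ..., a_{t-q+1}
    return c + sum(t_j * a for t_j, a in zip(theta, recent))

# Coefficients of fitted model (2.21); zeros fill the lags the model omits.
theta_221 = [0.1806, 0, -0.1315, 0, 0, 0, 0, 0, 0.1379]
```

With all recent shocks at zero the forecast reduces to the constant 0.0132, and each nonzero recent shock shifts it by the corresponding theta coefficient.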

Statistics in a Nutshell
by Sarah Boslaugh
Published 10 Nov 2012

Index excerpt, matching entry: p-values — about; of Z value, The Z-Statistic.

In this case, the probability of getting 8, 9, or 10 heads in 10 flips of a fair coin is 0.0439 + 0.0098 + 0.0010, or 0.0547. This is the p-value for the result of at least 8 heads in 10 trials, using a coin where P(heads) = 0.5. p-values are commonly reported for most research results involving statistical calculations, in part because intuition is a poor guide to how unusual a particular result is. For instance, many people might think it is unusual to get 8 or more heads on 10 trials using a fair coin. There is no statistical definition of what constitutes “unusual” results, so we will use the common standard that the p-value for our results must be less than 0.05 for us to reject the null hypothesis (which is, in this case, that the coin is fair).
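The 0.0547 figure is just a binomial tail sum, and can be checked with the standard library alone (binom_tail is our helper name, not from the source):

```python
from math import comb

def binom_tail(n, k_min, p=0.5):
    """P(X >= k_min) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

# P(at least 8 heads in 10 flips of a fair coin):
print(round(binom_tail(10, 8), 4))  # 0.0547
```

The exact value is 56/1024 = 0.0546875, which rounds to the 0.0547 quoted above.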

Suppose we are testing a two-tailed hypothesis with an alpha level of 0.05. In this case, we would also want the p-values for each result, which are: Sample 1: p = 0.2713 Sample 2: p = 0.1211 Sample 3: p = 0.0455 Only the third sample gives us significant results; that is, only the p-value from the third sample is less than our alpha level of 0.05 and thus allows us to reject the null hypothesis. This underlines the importance of having adequate sample size when conducting a study. You can find the p-value for a given Z value in several ways: by using statistical software, by using one of the many online calculators, or by using probability tables.
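The excerpt does not show the Z values behind those three p-values; back-calculating, they are roughly 1.10, 1.55, and 2.00. A stdlib sketch of the two-tailed conversion (our helper, under that assumption):

```python
from math import erfc, sqrt

def two_tailed_p(z):
    """Two-tailed p-value for a standard normal test statistic: 2 * (1 - Phi(|z|))."""
    return erfc(abs(z) / sqrt(2))

# Z values back-calculated to match the three quoted p-values (not given in the text):
for z in (1.10, 1.55, 2.00):
    print(round(two_tailed_p(z), 4))  # 0.2713, 0.1211, 0.0455
```

Only the last value falls below 0.05, matching the conclusion that only the third sample is significant.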

pages: 408 words: 85,118

Python for Finance
by Yuxing Yan
Published 24 Apr 2014

Then, we conduct two tests: test whether the mean is 5.0, and test whether the mean is zero: >>>from scipy import stats >>>np.random.seed(1235) >>>x = stats.norm.rvs(size=10000) >>>print("T-value P-value (two-tail)") >>>print(stats.ttest_1samp(x,5.0)) >>>print(stats.ttest_1samp(x,0)) T-value P-value (two-tail) (array(-495.266783341032), 0.0) (array(-0.26310321925083124), 0.79247644375164772) >>> For the first test, in which we test whether the time series has a mean of 5.0, we reject the null hypothesis since the T-value is -495.3 and the P-value is 0. For the second test, we fail to reject the null hypothesis since the T-value is close to -0.26 and the P-value is 0.79. In the following program, we test whether the mean daily return from IBM in 2013 is zero: from scipy import stats from matplotlib.finance import quotes_historical_yahoo ticker='ibm' begdate=(2013,1,1) enddate=(2013,11,9) p=quotes_historical_yahoo(ticker,begdate,enddate,asobject=True, adjusted=True) ret=(p.aclose[1:] - p.aclose[:-1])/p.aclose[:-1] print(' Mean T-value P-value ' ) print(round(mean(ret),5), stats.ttest_1samp(ret,0)) Mean T-value P-value (-0.00024, (array(-0.296271094280657), 0.76730904089713181)) From the previous results, we know that the average daily return for IBM is -0.024 percent.
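Under the hood, ttest_1samp computes the one-sample t statistic t = (x̄ − μ0) / (s / √n). A stdlib sketch of just the statistic (our helper, on made-up data rather than the book's series):

```python
from math import sqrt
from statistics import mean, stdev

def t_statistic(sample, mu0):
    """One-sample t statistic: distance of the sample mean from mu0 in standard errors."""
    return (mean(sample) - mu0) / (stdev(sample) / sqrt(len(sample)))

# Illustrative data (not the book's IBM returns):
print(t_statistic([1.0, 2.0, 3.0], 0.0))  # 3.464... (= 2 * sqrt(3))
print(t_statistic([1.0, 2.0, 3.0], 2.0))  # 0.0
```

A hypothesized mean far from the data produces a large |t| (and a tiny p-value), while a hypothesized mean equal to the sample mean gives t = 0, mirroring the two SciPy tests above.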

Eventually, the preceding equation could be rewritten as follows: y = α + β ∗ x (12) The following lines of code are an example of this: >>>from scipy import stats >>>stock_ret = [0.065, 0.0265, -0.0593, -0.001, 0.0346] >>>mkt_ret = [0.055, -0.09, -0.041, 0.045, 0.022] >>>beta, alpha, r_value, p_value, std_err = stats.linregress(stock_ret,mkt_ret) >>>print beta, alpha 0.507743187877 -0.00848190035246 >>>print "R-squared=", r_value**2 R-squared = 0.147885662966 >>>print "p-value =", p_value 0.522715523909 Retrieving data from an external text file When retrieving data from an external data file, the variable generated will be a list. >>>f=open("c:\\data\\ibm.csv","r") >>>data=f.readlines() >>>type(data) <class 'list'> The first few lines of the input file are shown in the following lines of code.
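The slope, intercept, and R-squared that linregress reports can be reproduced without SciPy. A stdlib sketch (linregress_simple is our helper name; it regresses mkt_ret on stock_ret exactly as the book's call does):

```python
from math import sqrt

def linregress_simple(x, y):
    """Ordinary least squares of y on x: returns (slope, intercept, r)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / sqrt(sxx * syy)
    return slope, intercept, r

stock_ret = [0.065, 0.0265, -0.0593, -0.001, 0.0346]
mkt_ret = [0.055, -0.09, -0.041, 0.045, 0.022]
slope, intercept, r = linregress_simple(stock_ret, mkt_ret)
print(round(slope, 4), round(intercept, 5), round(r * r, 4))  # 0.5077 -0.00848 0.1479
```

These match the 0.507743, -0.008482, and 0.147886 values printed by the SciPy session above.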

In the following program, we test whether the mean daily return from IBM in 2013 is zero: from scipy import stats from matplotlib.finance import quotes_historical_yahoo ticker='ibm' begdate=(2013,1,1) enddate=(2013,11,9) p=quotes_historical_yahoo(ticker,begdate,enddate,asobject=True, adjusted=True) ret=(p.aclose[1:] - p.aclose[:-1])/p.aclose[:-1] print(' Mean T-value P-value ' ) print(round(mean(ret),5), stats.ttest_1samp(ret,0)) Mean T-value P-value (-0.00024, (array(-0.296271094280657), 0.76730904089713181)) From the previous results, we know that the average daily return for IBM is -0.024 percent. The T-value is -0.29 while the P-value is 0.77. Thus, the mean is statistically not different from zero. Tests of equal means and equal variances Next, we test whether the variances for IBM and DELL in 2013 are equal.

Commodity Trading Advisors: Risk, Performance Analysis, and Selection
by Greg N. Gregoriou , Vassilios Karavas , François-Serge Lhabitant and Fabrice Douglas Rouah
Published 23 Sep 2004

[Tables 21.4 and 21.5 are flattened here. Table 21.4, "ADF Tests / CTA Excess Returns: ARMA Models," reports ADF statistics and estimated ARMA coefficients with their p-values and R² for CTA excess-return series Exc1–Exc10; Table 21.5, "CTA Returns, 2000 to 2003: ARMA Models," reports AR and MA coefficient estimates, their p-values, and R² for CTAs #3, #4, and #8. The numeric columns did not survive extraction.] …there is a significant improvement for CTA #3 and #8 (evidenced by the increased R²).

[Table 21.3, "ADF Tests / CTA Returns: ARMA Models," is flattened here; it reports ADF statistics, estimated ARMA coefficients with their p-values, R², and Chow F-statistics with p-values for CTA1–CTA10. The numeric columns did not survive extraction.] All ADF tests are at the 99 percent confidence level.

The Spearman correlation coefficients show some ability to detect persistence when large differences are found in CTA data. [Tables 3.4 and 3.5, "EGR Performance Persistence Results from Monte Carlo Generated Data Sets" (Table 3.4: no persistence present, by restricting a = 1; Table 3.5: persistence present, by allowing a to vary), are flattened here. They report mean returns by subgroup (top/middle/bottom thirds, top 3, bottom 3) and reject/fail-to-reject rates for the z test and the test of two means under the listed data-generation methods, whose parameters (a, b, s) are given in the table footnotes. The numeric columns did not survive extraction.]

pages: 681 words: 64,159

Numpy Beginner's Guide - Third Edition
by Ivan Idris
Published 23 Jun 2015

It is instructive to see what happens if we generate more points, because if we generate more points, we should have a more normal distribution. For 900,000 points, we get a p-value of 0.16. For 20 generated values, the p-value is 0.50. 4. Kurtosis tells us how curved a probability distribution is. Perform a kurtosis test. This test is set up similarly to the skewness test, but, of course, applies to kurtosis: print("Kurtosistest", "pvalue", stats.kurtosistest(generated)) The result of the kurtosis test appears as follows: Kurtosistest pvalue (1.3065381019536981, 0.19136963054975586) The p-value for 900,000 values is 0.028. For 20 generated values, the p-value is 0.88. 5. A normality test tells us how likely it is that a dataset complies with the normal distribution.

Skewness tells us how skewed (asymmetric) a probability distribution is (see http://en.wikipedia.org/wiki/Skewness). Perform a skewness test. This test returns two values. The second value is the p-value—the probability that the skewness of the dataset does not correspond to a normal distribution. Generally speaking, the p-value is the probability of an outcome different than what was expected given the null hypothesis—in this case, the probability of getting a skewness different from that of a normal distribution (which is 0 because of symmetry). P-values range from 0 to 1: print("Skewtest", "pvalue", stats.skewtest(generated)) The result of the skewness test appears as follows: Skewtest pvalue (-0.62120640688766893, 0.5344638245033837) So, there is a 53 percent chance we are not dealing with a normal distribution.

A normality test tells us how likely it is that a dataset complies with the normal distribution. Perform a normality test. This test also returns two values, of which the second is a p-value: print("Normaltest", "pvalue", stats.normaltest(generated)) The result of the normality test appears as follows: Normaltest pvalue (2.09293921181506, 0.35117535059841687) The p-value for 900,000 generated values is 0.035. For 20 generated values, the p-value is 0.79. 6. We can find the value at a certain percentile easily with SciPy: print("95 percentile", stats.scoreatpercentile(generated, 95)) The value at the 95th percentile appears as follows: 95 percentile 1.54048860252 7.
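scoreatpercentile's default method is linear interpolation between the sorted order statistics. A stdlib sketch of that convention (our helper; SciPy's handling of edge cases may differ):

```python
def score_at_percentile(data, per):
    """Value at the given percentile via linear interpolation between sorted points."""
    s = sorted(data)
    idx = (len(s) - 1) * per / 100.0
    lo = int(idx)
    if lo + 1 >= len(s):
        return float(s[-1])
    frac = idx - lo
    return s[lo] * (1 - frac) + s[lo + 1] * frac

print(score_at_percentile([1, 2, 3, 4, 5], 95))  # ~4.8
```

For five sorted points, the 95th percentile lands 80 percent of the way between the fourth and fifth values, giving 4.8.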

pages: 446 words: 102,421

Network Security Through Data Analysis: Building Situational Awareness
by Michael S Collins
Published 23 Feb 2014

In statistical testing, this is done by using a p-value. The p-value is the probability that if the null hypothesis is true, you will get a result at least as extreme as the observed results. The lower the p-value, the lower the probability that the observed result could have occurred under the null hypothesis. Conventionally, a null hypothesis is rejected when the p-value is below 0.05. To understand the concept of extremity here, consider a binomial test with no successes and four coin flips. In R: > binom.test(0,4,p=0.5) Exact binomial test data: 0 and 4 number of successes = 0, number of trials = 4, p-value = 0.125 alternative hypothesis: true probability of success is not equal to 0.5 95 percent confidence interval: 0.0000000 0.6023646 sample estimates: probability of success 0 That p-value of 0.125 is the sum of the probabilities that a coin flip was four heads (0.0625) AND four tails (also 0.0625).

In R: > binom.test(0,4,p=0.5) Exact binomial test data: 0 and 4 number of successes = 0, number of trials = 4, p-value = 0.125 alternative hypothesis: true probability of success is not equal to 0.5 95 percent confidence interval: 0.0000000 0.6023646 sample estimates: probability of success 0 That p-value of 0.125 is the sum of the probabilities of four heads (0.0625) and of four tails (also 0.0625). The p-value is, in this context, “two-tailed,” meaning that it accounts for both extremes. Similarly, if we observe one head: > binom.test(1,4,p=0.5) Exact binomial test data: 1 and 4 number of successes = 1, number of trials = 4, p-value = 0.625 alternative hypothesis: true probability of success is not equal to 0.5 95 percent confidence interval: 0.006309463 0.805879550 sample estimates: probability of success 0.25 The p-value is 0.625, the sum of 0.0625 + 0.25 + 0.25 + 0.0625 (everything but the probability of 2 heads and 2 tails).
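R's binom.test computes the two-sided p-value by summing every outcome whose probability is no greater than the observed outcome's. A stdlib sketch of that convention (binom_test_two_sided is our helper name):

```python
from math import comb

def binom_test_two_sided(successes, n, p=0.5):
    """Exact two-sided binomial p-value: sum the probabilities of all outcomes
    no more likely than the observed one (R's binom.test convention)."""
    pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
    observed = pmf[successes]
    return sum(q for q in pmf if q <= observed * (1 + 1e-7))

print(binom_test_two_sided(0, 4))  # 0.125
print(binom_test_two_sided(1, 4))  # 0.625
```

For four fair flips the outcome probabilities are 1/16, 4/16, 6/16, 4/16, 1/16, so observing zero heads sums the two 1/16 extremes (0.125), and observing one head sums everything except the 6/16 middle (0.625), exactly as in the R sessions above.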

. > # For the uniform, I can use min and max, like I'd use mean and sd for > # the normal > ks.test(a.set,punif,min=min(a.set),max=max(a.set)) One-sample Kolmogorov-Smirnov test data: a.set D = 0.0829, p-value = 0.4984 alternative hypothesis: two-sided > # Now one where I reject the null; I'll treat the data as if it > # were normally distributed and estimate again > ks.test(a.set,pnorm,mean=mean(a.set),sd=sd(a.set)) One-sample Kolmogorov-Smirnov test data: a.set D = 0.0909, p-value = 0.3806 alternative hypothesis: two-sided > #Hmm, p-value's high... Because I'm not using enough samples, let's > # do this again with 400 samples each. > a.set<-runif(400,min=10,max=20) > b.set<-runif(400,min=10,max=20) > # Compare against each other > ks.test(a.set,b.set)$p.value [1] 0.6993742 > # Compare against the distribution > ks.test(a.set,punif,min=min(a.set),max=max(a.set))$p.value [1] 0.5499412 > # Compare against a different distribution > ks.test(a.set,pnorm, mean = mean(a.set),sd=sd(a.set))$p.value [1] 0.001640407 The KS test has weak power.

pages: 284 words: 79,265

The Half-Life of Facts: Why Everything We Know Has an Expiration Date
by Samuel Arbesman
Published 31 Aug 2012

On the other hand, imagine if we had gathered a much larger group and still had the same fractions: Out of 500 left-handers, 300 carried L, while out of 500 right-handers, only 220 were carriers for L. If we ran the exact same test, we get a much lower p-value. Now it’s less than 0.0001. This means that there is less than one hundredth of 1 percent chance that the differences are due to chance alone. The larger the sample we get, the better we can test our questions. The smaller the p-value, the more robust our findings. But to publish a result in a scientific journal, you don’t need a minuscule p-value. In general, you need a p-value less than 0.05 or, sometimes, 0.01. For 0.05, this means that there is a one in twenty probability that the result being reported is in fact not real!
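The 300-of-500 versus 220-of-500 comparison can be checked with a Pearson chi-square test on the 2×2 table. A stdlib sketch (our helper; no continuity correction, and the excerpt doesn't say which exact test the author used):

```python
from math import erfc, sqrt

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic and p-value for the 2x2 table [[a, b], [c, d]].
    With 1 degree of freedom, the survival function is erfc(sqrt(x / 2))."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return stat, erfc(sqrt(stat / 2))

# 300 of 500 left-handers vs. 220 of 500 right-handers carrying L:
stat, p = chi2_2x2(300, 200, 220, 280)
print(round(stat, 2), p < 0.0001)  # 25.64 True
```

The p-value comes out around 4 × 10⁻⁷, consistent with the "less than one hundredth of 1 percent" claim.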

We know that when flipping a coin ten times, we don’t necessarily get exactly five heads and five tails. The same is true in the null hypothesis scenario for our L experiment. Enter p-values. Using sophisticated statistical analyses, we can reduce this complicated question to a single number: the p-value. This provides us with the probability that our result, which appears to support our hypothesis, is simply due to chance. For example, using certain assumptions, we can calculate what the p-value is for the above results: 0.16, or 16 percent. What this means is that there is about a one in six chance that this result is simply due to sampling variation (getting a few more L left-handers and a few less L right-handed carriers than we expected, if they are of equal frequency).

Measurement (and its sibling, error) are important factors in the scientific process in general, whenever we are trying to test whether a hypothesis is true. Scientific knowledge is dependent on measurement. . . . IF you ever delve a bit below the surface when reading about a scientific result, you will often bump into the term p-value. P-values are an integral part of determining how new knowledge is created. More important, they give us a way of estimating the possibility of error. Anytime a scientist tries to discover something new or validate an exciting and novel hypothesis, she tests it against something else. Specifically, our scientist tests it against a version of the world where the hypothesis would not be true.

pages: 286 words: 92,521

How Medicine Works and When It Doesn't: Learning Who to Trust to Get and Stay Healthy
by F. Perry Wilson
Published 24 Jan 2023

The measure of that oddness, of how weird the data is assuming the drug doesn’t work, is quantified in the p-value. Specifically, the p-value is a measure of how unusual the data you observe is, under the assumption that the treatment of interest has no true underlying effect. The lower the p-value, the weirder the data is. If it’s low enough (and the commonly but arbitrarily agreed-upon threshold for this is 0.05), we reject the hypothesis that the drug has no effect, de facto embracing the idea that the drug does have an effect. We call this “statistical significance.” A p-value of 0.05 means that the data you see (or even weirder data) would arise only 5 percent of the time if the drug didn’t work at all.

There just weren’t enough people in my study to rule out the vicissitudes of chance. And if I calculated a p-value, it would confirm this. The math works out to a p-value of 0.53, suggesting that results this weird would happen more than half the time, assuming dodgeball and tetherball have no different effect on boredom. Now, if I did the same trial with ten thousand bored folks and cured two thousand in the dodgeball group but three thousand in the tetherball group, I would feel much more confident recommending tetherball over dodgeball to my bored patients in the future. (The p-value in that trial would be less than 0.0001.) Note that both of these hypothetical studies had the same proportion of response in each group but vastly different interpretation.
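The excerpt doesn't give the small trial's group sizes; assuming five people per group with the same 40 percent versus 60 percent cure rates, a pooled two-proportion z-test (our choice of test, since the book doesn't name one) reproduces both quoted p-values:

```python
from math import erfc, sqrt

def two_proportion_p(x1, n1, x2, n2):
    """Two-tailed p-value for H0: both groups share one success probability
    (pooled two-proportion z-test)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return erfc(abs(z) / sqrt(2))

# Same 40% vs. 60% cure rates, very different evidence:
print(round(two_proportion_p(2, 5, 3, 5), 2))             # 0.53
print(two_proportion_p(2000, 5000, 3000, 5000) < 0.0001)  # True
```

Identical proportions, a thousandfold difference in sample size, and the conclusion flips from "could easily be chance" to "almost certainly not chance."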

Is the observed effect of the drug fairly weird assuming the drug doesn’t work? Yes? Well, then the drug probably works. That arbitrary threshold of 0.05 has created something of an obsession in scientists, and it’s easy to see why. If I do a trial of my Ebola treatment and arrive at a p-value of 0.06, what I am saying is that data as weird as this would happen only 6 percent of the time, assuming my drug doesn’t work. If I arrive at a p-value of 0.04, I am saying that data as weird as this would happen only 4 percent of the time, assuming my drug doesn’t work. And yet I can describe only the latter as significant. The latter would lead to headlines that read NEW DRUG SIGNIFICANTLY REDUCES THE RISK OF DEATH FROM EBOLA, while the former (the 6 percent example) would lead to headlines that read NEW DRUG HAS NO SIGNIFICANT EFFECT ON THE RISK OF DEATH FROM EBOLA.

pages: 205 words: 20,452

Data Mining in Time Series Databases
by Mark Last , Abraham Kandel and Horst Bunke
Published 24 Jun 2004

[A table, "Results of the CD hypothesis testing on the ‘Manufacturing’ database," is flattened here; for months 1–6 it reports the validation error rates eMK−1,K and eMK−1,K−1, their difference d, the H(95%) threshold, and the CD and XP estimators as 1 − p-value. Fig. 2 summarizes the implementation of the change detection methodology on the ‘Manufacturing’ database (1 − p-value), and Table 10 reports the XP confidence level (1 − p-value) of all independent and dependent variables (CAT GRP, MRKT Code, Duration, Time to operate, Quantity, Customer GRP). The numeric columns did not survive extraction.] According to the change detection methodology, during all six consecutive months there was no significant change in the rules describing the relationships between the candidate and the target variables (which is our main interest).

.
• K is the cumulative number of periods in a data stream.
• α is the desired significance level for the change detection procedure (the probability of a false alarm when no actual change is present).

Outputs:
• CD(α) is the error-based change detection estimator (1 − p-value).
• XP(α) is the Pearson’s chi-square estimator of distribution change (1 − p-value).

2.5. Change Detection Procedure

Stage 1: For period K − 1, build the model MK−1 using the DM algorithm G. Define the data set DK−1(val). Count the number of records nK−1 = |DK−1(val)|. Calculate the validation error rate êMK−1,K−1 according to the validation method V.

These results support the assumptions that a change in the target variable does not necessarily affect the classification “rules” of the database and that a change can mainly be detected in the first period after its occurrence. [Table 7, "Results of the CD hypothesis testing on an artificially generated time series database," is flattened here; for periods 1–7 it reports whether a change was introduced, the validation error rates eMK−1,K and eMK−1,K−1, their difference d, the H(95%) threshold, and the CD and XP estimators as 1 − p-value. The numeric columns did not survive extraction.]

pages: 982 words: 221,145

Ajax: The Definitive Guide
by Anthony T. Holdener
Published 25 Jan 2008

php /** * This function, quote_smart, tries to ensure that a SQL injection attack * cannot occur. * * @param {string} $p_value The string to quote correctly. * @return string The properly quoted string. */ function quote_smart($p_value) { /* Are magic quotes on? */ if (get_magic_quotes_gpc( )) $p_value = stripslashes($p_value); /* Is the value a string to quote? */ if (!is_numeric($p_value) || $p_value[0] == '0') $p_value = "'".mysql_real_escape_string($p_value)."'"; return ($p_value); } ?> The quote_smart( ) function I am using is one of many variants available on the Web from which you can choose.

_options.id + '_img').style.backgroundPosition = (-1 * ((2 * this._options.width) + ((this.checked)) ? this._options.width : 0)) + 'px 0'; }, /** * This method, _toggleValue, * * @member customRadioCheckControl * @param {Boolean} p_value The optional value to set the control to. * @see #_positionImage * @see #onChange */ _toggleValue: function(p_value) { /* Was a /p_value/ passed to the method? */ if (p_value) this.checked = p_value; else this.checked = !this.checked; this._positionImage( ); this.onChange( ); }, /** * This method, _createEvents, sets an /onclick/ event on the custom control. * * @member customRadioCheckControl * @see Event#observe */ _createEvents: function( ) { /* Was an id passed?

This involves testing whether the values are equal, as in the following code: /** * This function, testValue, checks to see if the passed /p_id/ has a value that * is equal to the passed /p_value/ in both value and type. * * @param {String} p_id The name of the input field to get the value from. * @param {Number | String | Boolean | Object | Array | null | etc.} p_value The * value to test against. * @return Returns a value indicating whether the passed inputs are equal to one * another. * @type Boolean */ function testValue(p_id, p_value) { try { /* Check both value and type */ return($F(p_id) === p_value); Validation with JavaScript | 537 } catch (ex) { return (false); } } You will notice in this example that I am using the inclusive === operator to test for equality.

pages: 295 words: 66,912

Walled Culture: How Big Content Uses Technology and the Law to Lock Down Culture and Keep Creators Poor
by Glyn Moody
Published 26 Sep 2022

An apparently minor copyright squabble between researchers regarding p-values has had an outsized negative effect on research culture. Wikipedia explains the concept of a p-value: “In null-hypothesis significance testing, the p-value is the probability of obtaining test results at least as extreme as the results actually observed, under the assumption that the null hypothesis is correct. A very small p-value means that such an extreme observed outcome would be very unlikely under the null hypothesis. Reporting p-values of statistical tests is common practice in academic publications of many quantitative fields.”657 These p-values are typically used to evaluate whether the results of an experiment could have happened by chance, or whether there is an alternative explanation such as the hypothesis being investigated.

To work around this barrier, Fisher created a method of inference based on only two values: p-values of 0.05 and 0.01.”658 Fisher himself later admitted that using a range of p-values was better than his approach based on just two figures, 0.05 and 0.01. But by then, the use of the 0.05 figure had become part of the established scientific method adopted by researchers. As a result of copyright, and a statistician’s refusal to share some basic calculations, the academic world has been producing research using a suboptimal analytical approach for decades. The story of how academia arrived at the p-value is an example of how an apparently minor issue has major impact.

Reporting p-values of statistical tests is common practice in academic publications of many quantitative fields.”657 These p-values are typically used to evaluate whether the results of an experiment could have happened by chance, or whether there is an alternative explanation such as the hypothesis being investigated. Wikipedia notes that different p-values can be used but ‘by convention’ a p-value cut-off of 0.05 is generally picked. The reason for the convention is because of a spat over copyright, as scholars Brent Goldfarb (Robert H. Smith School of Business, University of Maryland) and Andrew King (Questrom School of Business, Boston University) explained in a paper in 2014: “We were surprised to learn, in the course of writing this paper, that the p < 0.05 cutoff was established as a competitive response to a disagreement over book royalties between two foundational statisticians.

pages: 204 words: 58,565

Keeping Up With the Quants: Your Guide to Understanding and Using Analytics
by Thomas H. Davenport and Jinho Kim
Published 10 Jun 2013

For example, if you wish to predict the quality of a vintage wine using various predictors (average growing season temperature, harvest rainfall, winter rainfall, and the age of the vintage), the various predictors would serve as independent variables. Alternative names are explanatory variable, predictor variable, and regressor. p-value: When performing a hypothesis test, the p-value gives the probability of data occurrence under the assumption that H0 is true. Small p-values are an indication of rare or unusual data from H0, which in turn provides support that H0 is actually false (and thus support of the alternative hypothesis). In hypothesis testing, we “reject the null hypothesis” when the p-value is less than the significance level α (Greek alpha), which is often 0.05 or 0.01. When the null hypothesis is rejected, the result is said to be statistically significant.

A value of 5 percent signifies that we need data that occurs less than 5 percent of the time from H0 (if H0 were indeed true) for us to doubt H0 and reject it as being true. In practice, this is often assessed by calculating a p-value; p-values less than alpha are an indication that H0 is rejected and the alternative supported. t-test or Student’s t-test: A test statistic that tests whether the means of two groups are equal, or whether the mean of one group has a specified value. Type I error or α error: This error occurs when the null hypothesis is true, but it is rejected. In traditional hypothesis testing, one rejects the null hypothesis if the p-value is smaller than the significance level α. So, the probability of incorrectly rejecting a true null hypothesis equals α and thus this error is also called α error.

In statistical hypothesis testing, the probability of 0.003 calculated above is called the p-value—the probability of obtaining a test statistic (e.g., Z-value of 2.75 in this case) at least as extreme as the one that was actually observed (a pregnancy that would last at least ten months and five days), assuming that the null hypothesis is true. In this example the null hypothesis (H0) is “This baby is my husband’s.” In traditional hypothesis testing, one rejects the null hypothesis if the p-value is smaller than the significance level. In this case a p-value of 0.003 would result in the rejection of the null hypothesis even at the 1 percent significance level—typically the lowest level anyone uses.
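The 0.003 figure is the upper-tail standard normal probability for a Z-value of 2.75; a stdlib sketch:

```python
from math import erfc, sqrt

def upper_tail_p(z):
    """One-tailed p-value P(Z >= z) for a standard normal Z."""
    return 0.5 * erfc(z / sqrt(2))

print(round(upper_tail_p(2.75), 3))  # 0.003
```

Since 0.003 is below even the stringent 1 percent significance level, the null hypothesis is rejected.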

pages: 451 words: 103,606

Machine Learning for Hackers
by Drew Conway and John Myles White
Published 10 Feb 2012

Finally, the last piece of information you’ll see is the “F-statistic.” This is a measure of the improvement of your model over using just the mean to make predictions. It’s an alternative to “R-squared” that allows one to calculate a “p-value.” Because we think that a “p-value” is usually deceptive, we encourage you to not put too much faith in the F-statistic. “p-values” have their uses if you completely understand the mechanism used to calculate them, but otherwise they can provide a false sense of security that will make you forget that the gold standard of model performance is predictive power on data that wasn’t used to fit your model, rather than the performance of your model on the data that it was fit to.

The traditional cutoff for being confident that an input is related to your output is to find a coefficient that’s at least two standard errors away from zero. The next piece of information that summary spits out is the significance codes for the coefficients. These are asterisks shown along the side that are meant to indicate how large the “t value” is or how small the p-value is. Specifically, the asterisks tell you whether you’ve passed a series of arbitrary cutoffs at which the p-value is less than 0.1, less than 0.05, less than 0.01, or less than 0.001. Please don’t worry about these values; they’re disturbingly popular in academia, but are really holdovers from a time when statistical analysis was done by hand rather than on a computer.

#                     Estimate Std. Error t value Pr(>|t|)
#(Intercept)         -2.83441    0.75201  -3.769 0.000173 ***
#log(UniqueVisitors)  1.33628    0.04568  29.251  < 2e-16 ***
#---
#Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
#Residual standard error: 1.084 on 998 degrees of freedom
#Multiple R-squared: 0.4616, Adjusted R-squared: 0.4611
#F-statistic: 855.6 on 1 and 998 DF, p-value: < 2.2e-16

The first thing that summary tells us is the call we made to lm. This isn’t very useful when you’re working at the console, but it can be helpful when you’re working in larger scripts that make multiple calls to lm. When this is the case, this information helps you keep all of the models organized so you have a clear understanding of what data and variables went into each model.

pages: 764 words: 261,694

The Elements of Statistical Learning (Springer Series in Statistics)
by Trevor Hastie , Robert Tibshirani and Jerome Friedman
Published 25 Aug 2009

Fix the false discovery rate α and let p(1) ≤ p(2) ≤ · · · ≤ p(M) denote the ordered p-values. 2. Define

L = max{ j : p(j) < α · j/M }.   (18.44)

3. Reject all hypotheses H0j for which pj ≤ p(L), the BH rejection threshold. FIGURE 18.19. Microarray example continued. Shown is a plot of the ordered p-values p(j) and the line 0.15 · (j/12,625), for the Benjamini–Hochberg method. The largest j for which the p-value p(j) falls below the line gives the BH threshold.

Instead we take a random sample of the possible permutations; here we took a random sample of K = 1000 permutations. To exploit the fact that the genes are similar (e.g., measured on the same scale), we can instead pool the results for all genes in computing the p-values:

p_j = \frac{1}{MK} \sum_{j'=1}^{M} \sum_{k=1}^{K} I(|t_{j'}^{k}| > |t_j|).   (18.41)

This also gives more granular p-values than does (18.40), since there are many more values in the pooled null distribution than there are in each individual null distribution. Using this set of p-values, we would like to test the hypotheses: H0j: treatment has no effect on gene j versus H1j: treatment has an effect on gene j (18.42) for all j = 1, 2, . . . , M.
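The single-gene version of a permutation p-value can be sketched in a few lines of Python; the toy data, group sizes, and function name below are ours for illustration, not from the book:

```python
import random
from statistics import mean

random.seed(0)

def perm_p_value(treated, control, k=1000):
    """Two-sided permutation p-value for a difference in means.

    Under H0 the group labels are exchangeable, so we shuffle them
    k times and count how often the permuted |difference| is at
    least as extreme as the observed one.
    """
    observed = abs(mean(treated) - mean(control))
    pooled = treated + control
    n = len(treated)
    count = 0
    for _ in range(k):
        random.shuffle(pooled)
        if abs(mean(pooled[:n]) - mean(pooled[n:])) >= observed:
            count += 1
    return count / k

# Toy data in which the treatment clearly shifts the mean upward
treated = [2.1, 2.4, 1.9, 2.6, 2.2, 2.5]
control = [1.0, 1.3, 0.9, 1.1, 1.2, 0.8]
p = perm_p_value(treated, control)
```

With every treated value above every control value, only the original labeling (or its mirror image) reproduces the observed gap, so the permutation p-value comes out very small.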

The Benjamini–Hochberg (BH) procedure is based on p-values; these can be obtained from an asymptotic approximation to the test statistic (e.g., Gaussian), or a permutation distribution, as is done here. If the hypotheses are independent, Benjamini and Hochberg (1995) show that regardless of how many null hypotheses are true and regardless of the distribution of the p-values when the null hypothesis is false, this procedure has the property

FDR ≤ (M0/M) · α ≤ α.   (18.45)

For illustration we chose α = 0.15. Figure 18.19 shows a plot of the ordered p-values p(j), and the line with slope 0.15/12,625.
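The BH step-up rule translates almost directly into code. A minimal sketch, with made-up p-values and α = 0.05 rather than the chapter's 0.15:

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Return indices of hypotheses rejected by the BH step-up procedure.

    Sort the p-values, find the largest rank j (1-based) with
    p_(j) < alpha * j / M, and reject every hypothesis whose p-value
    is at or below that threshold p_(L).
    """
    M = len(p_values)
    order = sorted(range(M), key=lambda i: p_values[i])
    L = 0  # largest rank passing under the BH line
    for rank, i in enumerate(order, start=1):
        if p_values[i] < alpha * rank / M:
            L = rank
    if L == 0:
        return []
    threshold = p_values[order[L - 1]]
    return sorted(i for i in range(M) if p_values[i] <= threshold)

# Illustrative p-values: three sit just under 0.05 individually,
# but only the first two survive the BH correction.
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
rejected = benjamini_hochberg(pvals, alpha=0.05)
```

Note that a p-value below α is not enough on its own: 0.039 clears the naive 0.05 cutoff but falls above the BH line 0.05 · 3/10 = 0.015 for its rank.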

Super Thinking: The Big Book of Mental Models
by Gabriel Weinberg and Lauren McCann
Published 17 Jun 2019

The final measure commonly used to declare whether a result is statistically significant is called the p-value, which is formally defined as the probability of obtaining a result equal to or more extreme than what was observed, assuming the null hypothesis was true. Essentially, if the p-value is smaller than the selected false positive rate (5 percent), then you would say that the result is statistically significant. P-values are commonly used in study reports to communicate such significance. For example, a p-value of 0.01 would mean that a difference equal to or larger than the one observed would happen only 1 percent of the time if the app had no effect.

As a result, ideally any experiment should be designed to detect the smallest meaningful difference. One final note on p-values and statistical significance: Most statisticians caution against overreliance on p-values in interpreting the results of a study. Failing to find a significant result (a sufficiently small p-value) is not the same as having confidence that there is no effect. The absence of evidence is not the evidence of absence. Similarly, even though the study may have achieved a low p-value, it might not be a replicable result, which we will explore in the final section. Statistical significance should not be confused with scientific, human, or economic significance.

The developers might even want to increase the sample size in order to be able to guarantee a certain margin of error in their estimates. Further, the American Statistical Association stressed in The American Statistician in 2016 that “scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.” Focusing too much on the p-value encourages black-and-white thinking and compresses the wealth of information that comes out of a study into just one number. Such a singular focus can make you overlook possible suboptimal choices in a study’s design (e.g., sample size) or biases that could have crept in (e.g., selection bias).

Scikit-Learn Cookbook
by Trent Hauck
Published 3 Nov 2014

We can then compare these features and, based on this comparison, we can cull features. p is also the p value associated with that f value. In statistics, the p value is the probability of a value more extreme than the current value of the test statistic. Here, the f value is the test statistic:

>>> f[:5]
array([ 1.06271357e-03, 2.91136869e+00, 1.01886922e+00,
        2.22483130e+00, 4.67624756e-01])
>>> p[:5]
array([ 0.97400066, 0.08826831, 0.31303204, 0.1361235, 0.49424067])

As we can see, many of the p values are quite large. We would rather the p values be quite small. So, we can grab NumPy out of our toolbox and choose all the p values less than .05. These will be the features we'll use for the analysis:

>>> import numpy as np
>>> idx = np.arange(0, X.shape[1])
>>> features_to_keep = idx[p < .05]
>>> len(features_to_keep)
501

As you can see, we're actually keeping a relatively large number of features.

We'll use the same scoring function from the first example, but just 20 features:

>>> X, y = datasets.make_regression(10000, 20)
>>> f, p = feature_selection.f_regression(X, y)

Now let's plot the p values of the features, so we can see which features will be eliminated and which will be kept:

>>> from matplotlib import pyplot as plt
>>> f, ax = plt.subplots(figsize=(7, 5))
>>> ax.bar(np.arange(20), p, color='k')
>>> ax.set_title("Feature p values")

The output will be as follows: As we can see, many of the features won't be kept, but several will be. Feature selection on L1 norms We're going to work with some ideas similar to those we saw in the recipe on Lasso Regression.

These will be the features we'll use for the analysis:

>>> import numpy as np
>>> idx = np.arange(0, X.shape[1])
>>> features_to_keep = idx[p < .05]
>>> len(features_to_keep)
501

As you can see, we're actually keeping a relatively large number of features. Depending on the context of the model, we can tighten this p value. This will lessen the number of features kept. Another option is using the VarianceThreshold object. We've learned a bit about it, but it's important to understand that our ability to fit models is largely based on the variance created by features. If there is no variance, then our features cannot describe the variation in the dependent variable.
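Both filtering ideas from this recipe can be shown in pure Python, with no scikit-learn required to see the mechanics. The per-feature p-values and columns below are made up for the sketch:

```python
from statistics import pvariance

# Hypothetical per-feature p-values, as f_regression would return them:
# keep only the indices that clear the 0.05 cutoff.
p = [0.974, 0.088, 0.313, 0.136, 0.494, 0.003, 0.410, 0.020]
features_to_keep = [i for i, pv in enumerate(p) if pv < 0.05]

# The VarianceThreshold idea: a feature with (near-)zero variance
# cannot explain variation in the dependent variable, so drop it.
columns = [
    [1.0, 1.0, 1.0, 1.0],   # constant feature: no variance
    [0.2, 1.7, 0.9, 2.4],   # varying feature
]
varying = [i for i, col in enumerate(columns) if pvariance(col) > 1e-8]
```

Tightening the p-value cutoff (say, to 0.01) shrinks `features_to_keep` in exactly the way the text describes.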

pages: 561 words: 120,899

The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant From Two Centuries of Controversy
by Sharon Bertsch McGrayne
Published 16 May 2011

As the statistician Dennis Lindley wrote, Jeffreys “would admit a probability for the existence of the greenhouse effect, whereas most [frequentist] statisticians would not and would confine their probabilities to the data on CO2, ozone, heights of the oceans, etc.”49 Jeffreys was particularly annoyed by Fisher’s measures of uncertainty, his “p-values” and significance levels. The p-value was a probability statement about data, given the hypothesis under consideration. Fisher had developed them for dealing with masses of agricultural data; he needed some way to determine which should be trashed, which filed away, and which followed up on immediately. Comparing two hypotheses, he could reject the chaff and save the wheat. Technically, p-values let laboratory workers state that their experimental outcome offered statistically significant evidence against a hypothesis if the outcome (or a more extreme outcome) had only a small probability (under the hypothesis) of having occurred by chance alone.

Newton, as Jeffreys pointed out, derived his law of gravity 100 years before Laplace proved it by discovering Jupiter’s and Saturn’s 877-year cycle: “There has not been a single date in the history of the law of gravitation when a modern significance test would not have rejected all laws [about gravitation] and left us with no law.”50 Bayes, on the other hand, “makes it possible to modify a law that has stood criticism for centuries without the need to suppose that its originator and his followers were useless blunderers.”51 Jeffreys concluded that p-values fundamentally distorted science. Frequentists, he complained, “appear to regard observations as a basis for possibly rejecting hypotheses, but in no case for supporting them.”52 But odds are that at least some of the hypotheses Fisher rejected were worth investigating or were actually true. A frequentist who tests a precise hypothesis and obtains a p-value of .04, for example, can consider that significant evidence against the hypothesis. But Bayesians say that even with a .01 p-value (which many frequentists would see as extremely strong evidence against a hypothesis) the odds in its favor are still 1 to 9 or 10—“not earth-shaking,” says Jim Berger, a Bayesian theorist at Duke University.

But Bayesians say that even with a .01 p-value (which many frequentists would see as extremely strong evidence against a hypothesis) the odds in its favor are still 1 to 9 or 10—“not earth-shaking,” says Jim Berger, a Bayesian theorist at Duke University. P-values still irritate Bayesians. Steven N. Goodman, a distinguished Bayesian biostatistician at Johns Hopkins Medical School, complained in 1999, “The p-value is almost nothing sensible you can think of. I tell students to give up trying.”53 Jeffreys was making Laplace’s probability of causes useful for practicing scientists, even as Fisher was doing the same for Laplace’s frequency-based methods. The difference was that Fisher used the word “Bayes” as an insult, while Jeffreys called it the Pythagorean theorem of probability theory.

pages: 523 words: 112,185

Doing Data Science: Straight Talk From the Frontline
by Cathy O'Neil and Rachel Schutt
Published 8 Oct 2013

Note that mean squared error is in there getting divided by total error, which is the proportion of variance unexplained by our model, and we calculate 1 minus that. p-values Looking at the output, the estimated βs are in the column marked Estimate. To see the p-values, look at Pr(>|t|). We can interpret the values in this column as follows: We are making a null hypothesis that the βs are zero. For any given β, the p-value captures the probability of observing the data that we observed, and obtaining the test-statistic that we obtained under the null hypothesis. This means that if we have a low p-value, it is highly unlikely to observe such a test-statistic under the null hypothesis, and the coefficient is highly likely to be nonzero and therefore significant.

Different selection criteria might produce wildly different models, and it’s part of your job to decide what to optimize for and why: R-squared Given by the formula 1 − (RSS/TSS), it can be interpreted as the proportion of variance explained by your model. p-values In the context of regression where you’re trying to estimate coefficients (the βs), to think in terms of p-values, you make an assumption of there being a null hypothesis that the βs are zero. For any given β, the p-value captures the probability of observing the data that you observed, and obtaining the test-statistic (in this case the estimated β) that you got under the null hypothesis. Specifically, if you have a low p-value, it is highly unlikely that you would observe such a test-statistic if the null hypothesis actually held.

You have a couple values in the output of the R function that help you get at the issue of how confident you can be in the estimates: p-values and R-squared. Going back to our model in R, if we type in summary(model), which is the name we gave to this model, the output would be:

summary(model)

Call:
lm(formula = y ~ x)

Residuals:
    Min      1Q  Median      3Q     Max
-121.17  -52.63   -9.72   41.54  356.27

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  -32.083     16.623   -1.93   0.0565 .
x             45.918      2.141   21.45   <2e-16 ***
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 77.47 on 98 degrees of freedom
Multiple R-squared: 0.8244, Adjusted R-squared: 0.8226
F-statistic: 460 on 1 and 98 DF, p-value: < 2.2e-16

R-squared .
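The R-squared reported near the bottom of that summary is just 1 minus the ratio of residual to total variation. A minimal sketch with toy numbers (not the book's data):

```python
from statistics import mean

def r_squared(y, y_hat):
    """R^2 = 1 - SS_res / SS_tot: the proportion of variance explained."""
    y_bar = mean(y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))
    ss_tot = sum((yi - y_bar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

y = [1.0, 2.0, 3.0, 4.0, 5.0]
y_hat = [1.1, 1.9, 3.2, 3.9, 4.9]  # predictions from some fitted model
r2 = r_squared(y, y_hat)  # about 0.992: the fit explains ~99% of the variance
```

A model that only ever predicted the mean of y would have SS_res = SS_tot and hence R² = 0.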

pages: 923 words: 163,556

Advanced Stochastic Models, Risk Assessment, and Portfolio Optimization: The Ideal Risk, Uncertainty, and Performance Measures
by Frank J. Fabozzi
Published 25 Feb 2008

Furthermore, let δ(X) be any test with test statistic t(X) such that the test statistic evaluated at x, t(x), is the value of the acceptance region ΔA closest to the rejection region ΔC. The p-value determines the probability, under the null hypothesis, that in any trial X the test statistic t(X) assumes a value in the rejection region ΔC; that is,

p = P_{θ0}(t(X) ∈ ΔC) = P_{θ0}(δ(X) = d1)

We can interpret the p-value as follows. Suppose we obtained a sample outcome x such that the test statistic assumed the corresponding value t(x). Now, we want to know what is the probability, given that the null hypothesis holds, that the test statistic might become even more extreme than t(x). This probability is equal to the p-value. If t(x) is a value pretty close to the median of the distribution of t(X), then the chance of obtaining a more extreme value, which refutes the null hypothesis more strongly, might be fairly feasible.

If t(x) is a value pretty close to the median of the distribution of t(X), then the chance of obtaining a more extreme value, which refutes the null hypothesis more strongly, might be fairly feasible. Then, the p-value will be large. However, if the value t(x) is instead so extreme that the chances are minimal, under the null hypothesis, that some other test run yields a value t(X) even more in favor of the alternative hypothesis, this will correspond to a very low p-value. If p is less than some given significance level α, we reject the null hypothesis and we say that the test result is significant. We demonstrate the meaning of the p-value in Figure 19.4. The horizontal axis provides the state space of possible values for the statistic t(X).

Note that in the second and third lines of equation (19.1), we indicate that the null hypothesis holds (i.e., the parameter is in Θ0), by using the subscript θ0 with the probability measure. The p-Value Suppose we had drawn some sample x and computed the value t(x) of the statistic from it. It might be of interest to find out how significant this test result is or, in other words, at which significance level this value t(x) would still lead to decision d0 (i.e., no rejection of the null hypothesis), while any value greater than t(x) would result in its rejection (i.e., d1). This concept brings us to the next definition. p-value: Suppose we have a sample realization given by x = (x1, x2, …, xn). Furthermore, let δ(X) be any test with test statistic t(X) such that the test statistic evaluated at x, t(x), is the value of the acceptance region ΔA closest to the rejection region ΔC.

pages: 321 words: 97,661

How to Read a Paper: The Basics of Evidence-Based Medicine
by Trisha Greenhalgh
Published 18 Nov 2010

Box 5.1 gives some criteria, originally developed by Sir Austin Bradford Hill [14], which should be met before assuming causality. Probability and confidence Have ‘p-values’ been calculated and interpreted appropriately? One of the first values a student of statistics learns to calculate is the p-value—that is the probability that any particular outcome would have arisen by chance. Standard scientific practice, which is essentially arbitrary, usually deems a p-value of less than one in twenty (expressed as p < 0.05, and equivalent to a betting odds of twenty to one) as ‘statistically significant’, and a p-value of less than one in a hundred (p < 0.01) as ‘statistically highly significant’.

A result in the statistically significant range (p < 0.05 or p < 0.01 depending on what you have chosen as the cutoff) suggests that the authors should reject the null hypothesis (i.e. the hypothesis that there is no real difference between two groups). But as I have argued earlier (see section ‘Were preliminary statistical questions addressed?’), a p-value in the non-significant range tells you that either there is no difference between the groups or there were too few participants to demonstrate such a difference if it existed. It does not tell you which. The p-value has a further limitation. Guyatt and colleagues conclude thus, in the first article of their ‘Basic Statistics for Clinicians’ series on hypothesis testing using p-values. Why use a single cut-off point [for statistical significance] when the choice of such a point is arbitrary?

Only a single pair of measurements should be made on each participant, as the measurements made on successive participants need to be statistically independent of each other if we are to end up with unbiased estimates of the population parameters of interest. 4. Every r-value should be accompanied by a p-value, which expresses how likely an association of this strength would be to have arisen by chance (see section ‘Have ‘p-values’ been calculated and interpreted appropriately?’), or a confidence interval, which expresses the range within which the ‘true’ R-value is likely to lie (see section ‘Have confidence intervals been calculated, and do the authors' conclusions reflect them?’).

pages: 172 words: 51,837

How to Read Numbers: A Guide to Statistics in the News (And Knowing When to Trust Them)
by Tom Chivers and David Chivers
Published 18 Mar 2021

There is no single point at which we can unequivocally say that the null hypothesis is false; in theory, even the most dramatic results could be a total fluke. But the bigger the difference, the more unlikely that fluke is. Scientists measure the chances of coincidence with something called the probability value, or ‘p-value’. The more unlikely something is to happen by random chance, the lower the p-value is: so if there’s only a one-in-100 chance that you’d see a result at least that extreme if there was no effect, that would be written as p=0.01, or one divided by 100. (What that doesn’t mean – and this is EXTREMELY IMPORTANT, so important that we will write EXTREMELY IMPORTANT in capitals, twice – is that there is only a one-in-100 chance that the result is wrong.

So let’s say that when we look at our results, we see that the average score for people who’ve read our book is indeed higher than the average score for people who haven’t. If the p-value of that result is less than 0.05, then we would say that we’ve achieved statistical significance, and we would reject the null hypothesis (that the book doesn’t do anything) in favour of the alternative hypothesis (that the book makes you better at stats). What the p-value is telling us is that, if the null hypothesis were true and we were to run the test 100 times, we would expect to see our book-readers do as well as they have, compared to the non-readers, fewer than five times.

At his prompting, the PhD student reanalysed the data in dozens of different ways, and – it won’t surprise you to learn – found lots of correlations, in exactly the same way as, in our imaginary book-reading study above, we could chop up our data as much as we liked until we found a p<0.05 result. She and Wansink published five different papers from that dataset, including the ‘men eat to impress women’ study. In it, they found a p-value of 0.02 for men eating more pizza around women, and 0.04 for salad. But that blog post raised red flags with scientists. Behaviour like this is known as ‘p-hacking’, massaging the data to get your p-value to a publishable below-0.05 figure. Methodologically savvy researchers started to go through all Wansink’s old work, and a source leaked his emails to Stephanie M. Lee, an investigative science journalist at BuzzFeed News.
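The "chop up the data until p < 0.05" problem falls out of one basic fact: under a true null hypothesis, a correctly computed p-value is uniformly distributed on [0, 1], so a 0.05 cutoff fires on roughly 5 percent of null comparisons by chance alone. A seeded simulation sketch (names and trial count ours):

```python
import random

random.seed(42)

# Under the null, p-values are Uniform(0, 1), so drawing a uniform
# number stands in for running a null comparison. Count how often the
# 0.05 significance cutoff fires anyway.
trials = 10_000
false_positives = sum(random.random() < 0.05 for _ in range(trials))
rate = false_positives / trials  # close to 0.05
```

Run enough comparisons on one dataset and some of them will clear the cutoff, whether or not any real effect exists; that is exactly what p-hacking exploits.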

pages: 339 words: 112,979

Unweaving the Rainbow
by Richard Dawkins
Published 7 Aug 2011

When we say that an effect is statistically significant, we must always specify a so-called p-value. This is the probability that a purely random process would have generated a result at least as impressive as the actual result. A p-value of 2 in 10,000 is pretty impressive, but it is still possible that there is no genuine pattern there. The beauty of doing a proper statistical test is that we know how probable it is that there is no genuine pattern there. Conventionally, scientists allow themselves to be swayed by p-values of 1 in 100, or even as high as 1 in 20: far less impressive than 2 in 10,000. What p-value you accept depends upon how important the result is, and upon what decisions might follow from it.

What p-value you accept depends upon how important the result is, and upon what decisions might follow from it. If all you are trying to decide is whether it is worth repeating the experiment with a larger sample, a p-value of 0.05, or 1 in 20, is quite acceptable. Even though there is a 1 in 20 chance that your interesting result would have happened anyway by chance, not much is at stake: the error is not a costly one. If the decision is a life and death matter, as in some medical research, a much lower p-value than 1 in 20 should be sought. The same is true of experiments that purport to show highly controversial results, such as telepathy or 'paranormal' effects.

Whether they learn or not, successfully hunting animals must usually behave as if they are good statisticians. (I hope it is not necessary, by the way, to plod through the usual disclaimer: No, no, the birds aren't consciously working it out with calculator and probability tables. They are behaving as if they were calculating p-values. They are no more aware of what a p-value means than you are aware of the equation for a parabolic trajectory when you catch a cricket ball or baseball in the outfield.) Angler fish take advantage of the gullibility of little fish such as gobies. But that is an unfairly value-laden way of putting it. It would be better not to speak of gullibility and say that they exploit the inevitable difficulty the little fish have in steering between type 1 and type 2 errors.

pages: 571 words: 105,054

Advances in Financial Machine Learning
by Marcos Lopez de Prado
Published 2 Feb 2018

Third, determine the minimum d such that the p-value of the ADF statistic on FFD(d) falls below 5%. Fourth, use the FFD(d) series as your predictive feature. Exercises Generate a time series from an IID Gaussian random process. This is a memory-less, stationary series: Compute the ADF statistic on this series. What is the p-value? Compute the cumulative sum of the observations. This is a non-stationary series without memory. What is the order of integration of this cumulative series? Compute the ADF statistic on this series. What is the p-value? Differentiate the series twice. What is the p-value of this over-differentiated series?

Generate a time series that follows a sinusoidal function. This is a stationary series with memory. Compute the ADF statistic on this series. What is the p-value? Shift every observation by the same positive value. Compute the cumulative sum of the observations. This is a non-stationary series with memory. Compute the ADF statistic on this series. What is the p-value? Apply an expanding window fracdiff, with τ = 1E − 2. For what minimum d value do you get a p-value below 5%? Apply FFD, with τ = 1E − 5. For what minimum d value do you get a p-value below 5%? Take the series from exercise 2.b: Fit the series to a sine function. What is the R-squared?

If the features were entirely random, the PCA ranking would have no correspondence with the feature importance ranking. Figure 8.1 displays the scatter plot of eigenvalues associated with an eigenvector (x-axis) paired with MDI of the feature associated with an eigenvector (y-axis). The Pearson correlation is 0.8491 (p-value below 1E-150), evidencing that PCA identified informative features and ranked them correctly without overfitting. Figure 8.1 Scatter plot of eigenvalues (x-axis) and MDI levels (y-axis) in log-log scale I find it useful to compute the weighted Kendall's tau between the feature importances and their associated eigenvalues (or equivalently, their inverse PCA rank).

pages: 227 words: 62,177

Numbers Rule Your World: The Hidden Influence of Probability and Statistics on Everything You Do
by Kaiser Fung
Published 25 Jan 2010

For example, Jeffrey Rosenthal demonstrated that it was impossible for store insiders to win the Encore lotteries with such frequency if one were to assume they had the same chance of winning as everyone else. The minute probability he computed, one in a quindecillion, is technically known as the p-value and signifies how unlikely the situation was. The smaller the p-value, the more impossible the situation, and the greater its power to refute the no-fraud scenario. Then, statisticians say, the result has statistical significance. Note that this is a matter of magnitude, rather than direction. If the p-value were 20 percent, then there would be a one-in-five chance of seeing at least 200 insider wins in seven years despite absence of fraud, and then Rosenthal would not have sufficient evidence to overturn the fair-lottery hypothesis.

If the p-value were 20 percent, then there would be a one-in-five chance of seeing at least 200 insider wins in seven years despite absence of fraud, and then Rosenthal would not have sufficient evidence to overturn the fair-lottery hypothesis. Statisticians set a minimum acceptable standard of evidence, which is a p-value of 1 percent or 5 percent. This practice originated with Sir Ronald Fisher, one of the giants of statistical thinking. For a more formal treatment of p-values and statistical significance, look up the topics of hypothesis testing and confidence intervals in a statistics textbook. The statistical testing framework demands a disbelief in miracles. If we were not daunted by odds of one in a quindecillion, then we could believe that Phyllis LaPlante was just an incredibly, incredibly lucky woman.

From the viewpoint of statistical testing, the doubters led by Senator Day wanted to know, if ramp metering was useless, what was the likelihood that the average trip time would rise by 22 percent (the improvement claimed by engineers who run the program) after the meters were shut off? Because this likelihood, or p-value, was small, the consultants who analyzed the experiment concluded that the favorite tool of the traffic engineers was indeed effective at reducing congestion. Since statisticians do not believe in miracles, they avoided the alternative path, which would assert that a rare event—rather than the shutting off of ramp meters—could have produced the deterioration in travel time during the experiment.

pages: 322 words: 107,576

Bad Science
by Ben Goldacre
Published 1 Jan 2008

He combined individual statistical tests by multiplying p-values, the mathematical description of chance, or statistical significance. This bit’s for the hardcore science nerds, and will be edited out by the publisher, but I intend to write it anyway: you do not just multiply p-values together, you weave them with a clever tool, like maybe ‘Fisher’s method for combination of independent p-values’. If you multiply p-values together, then harmless and probable incidents rapidly appear vanishingly unlikely. Let’s say you worked in twenty hospitals, each with a harmless incident pattern: say p=0.5. If you multiply those harmless p-values, of entirely chance findings, you end up with a final p-value of 0.5 to the power of twenty, which is p < 0.000001, which is extremely, very, highly statistically significant.
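The difference between multiplying p-values and combining them properly is easy to see in code. A minimal sketch of Fisher's method (X = −2 Σ log pᵢ follows a chi-square distribution with 2k degrees of freedom under the null; the closed-form survival function used below is valid for even degrees of freedom, so no statistics library is needed):

```python
from math import exp, log

def fisher_combined_p(p_values):
    """Combine k independent p-values with Fisher's method.

    X = -2 * sum(log p_i) is chi-square with 2k degrees of freedom
    under the null; for df = 2k the survival function has the closed
    form exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!.
    """
    k = len(p_values)
    x = -2.0 * sum(log(p) for p in p_values)
    half = x / 2.0
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= half / i
        total += term
    return exp(-half) * total

# Twenty harmless, chance-level results (p = 0.5 each):
p_naive = 0.5 ** 20                        # naive product: "significant"
p_fisher = fisher_combined_p([0.5] * 20)   # Fisher: nothing unusual
```

Naive multiplication turns twenty entirely unremarkable findings into a product below one in a million, while Fisher's method correctly returns a combined p-value well above 0.5.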

I did the maths, and the answer is yes, it is, in that you get a p-value of less than 0.05. What does ‘statistically significant’ mean? It’s just a way of expressing the likelihood that the result you got was attributable merely to chance. Sometimes you might throw ‘heads’ five times in a row, with a completely normal coin, especially if you kept tossing it for long enough. Imagine a jar of 980 blue marbles, and twenty red ones, all mixed up: every now and then—albeit rarely—picking blindfolded, you might pull out three red ones in a row, just by chance. The standard cut-off point for statistical significance is a p-value of 0.05, which is just another way of saying, ‘If I did this experiment a hundred times, I’d expect a spurious positive result on five occasions, just by chance.’

Because there is a final problem with this data: there is so much of it to choose from. There are dozens of data points in the report: on solvents, cigarettes, ketamine, cannabis, and so on. It is standard practice in research that we only accept a finding as significant if it has a p-value of 0.05 or less. But as we said, a p-value of 0.05 means that for every hundred comparisons you do, five will be positive by chance alone. From this report you could have done dozens of comparisons, and some of them would indeed have shown increases in usage—but by chance alone, and the cocaine figure could be one of those.

Learn Algorithmic Trading
by Sebastien Donadio
Published 7 Nov 2019

The last argument will mask the p-values higher than 0.98:

seaborn.heatmap(pvalues, xticklabels=symbolsIds,
                yticklabels=symbolsIds, cmap='RdYlGn_r',
                mask=(pvalues >= 0.98))

This code will return the following map as an output. This map shows the p-values returned by the cointegration test: If a p-value is lower than 0.02, this means the null hypothesis is rejected. This means that the two series of prices corresponding to two different symbols can be co-integrated. This means that the two symbols will keep the same spread on average. On the heatmap, we observe that the following symbols have p-values lower than 0.02: This screenshot represents the heatmap measuring the cointegration between a pair of symbols.

If we fail to reject the null hypothesis, we can say that the time series is non-stationary:

def test_stationarity(timeseries):
    print('Results of Dickey-Fuller Test:')
    dftest = adfuller(timeseries[1:], autolag='AIC')
    dfoutput = pd.Series(dftest[0:4],
                         index=['Test Statistic', 'p-value',
                                '#Lags Used', 'Number of Observations Used'])
    print(dfoutput)

test_stationarity(goog_data['Adj Close'])

This test returns a p-value of 0.99. Therefore, the time series is not stationary. Let's have a look at the test:

test_stationarity(goog_monthly_return[1:])

This test returns a p-value of less than 0.05. Therefore, we cannot say that the time series is not stationary. We recommend using daily returns when studying financial products.

On the heatmap, we observe that the following symbols have p-values lower than 0.02: This screenshot represents the heatmap measuring the cointegration between a pair of symbols. If it is red, this means that the p-value is 1, which means that the null hypothesis is not rejected. Therefore, there is no significant evidence that the pair of symbols is co-integrated. After selecting the pairs we will use for trading, let's focus on how to trade these pairs of symbols. First, let's create a pair of symbols artificially to get an idea of how to trade. We will use the following libraries:

import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import coint
import matplotlib.pyplot as plt

As shown in the code, let's create a symbol return that we will call Symbol1.
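As a rough, self-contained illustration of the pair-trading idea described here (not the book's statsmodels code; the names symbol1 and symbol2 and all parameters are fabricated for the sketch), one can build an artificially co-integrated pair and standardize its spread:

```python
import random
import statistics

random.seed(42)

# symbol1: a random-walk price series (hypothetical data)
symbol1 = []
level = 100.0
for _ in range(500):
    level += random.gauss(0, 1)
    symbol1.append(level)

# symbol2 tracks symbol1 plus a constant offset and stationary noise,
# so the spread symbol2 - symbol1 is mean-reverting by construction
symbol2 = [p + 5 + random.gauss(0, 1) for p in symbol1]

# Standardize the spread and derive a naive mean-reversion signal
spread = [b - a for a, b in zip(symbol1, symbol2)]
mu = statistics.mean(spread)
sigma = statistics.stdev(spread)
zscores = [(x - mu) / sigma for x in spread]

# Short the spread when it is stretched high, long when stretched low
signals = ["short" if z > 1 else "long" if z < -1 else "flat" for z in zscores]
```

The cointegration test in the book plays the role of certifying that a real pair behaves like this constructed one, i.e. that the spread is stationary around its mean.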

The Ethical Algorithm: The Science of Socially Aware Algorithm Design
by Michael Kearns and Aaron Roth
Published 3 Oct 2019

Your null hypothesis is that this fellow is no better at predicting stock movements than the flip of a coin—on any particular day, the probability that he correctly guesses the directional movement of LYFT is 50 percent. You go on to compute the p-value corresponding to your null hypothesis—the probability that if the null hypothesis were true, you would have observed something as extreme as you did: ten correct predictions in a row. Well, if the sender had only a 50 percent chance of getting the answer right on any given day, then the chance that he would get it right ten days in a row—the p-value—would be only about .0009, the probability of flipping a coin ten times in a row and getting heads each time. This is very small—well below the .05 threshold for p-values that is often taken as the standard for statistical significance in the scientific literature.
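The ".0009" in this passage is just the chance of ten correct 50/50 calls in a row:

```python
# p-value for ten correct predictions in a row under the null hypothesis
# that each day's call is a fair coin flip
p_value = 0.5 ** 10   # = 1/1024, about 0.001
```

This is the exact probability (1/1024 ≈ 0.00098), which the passage rounds to about .0009.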

The white leaves have received perfect predictions so far. And these pitfalls are not just limited to email scams, hedge funds, and other for-profit ventures. In fact, as we shall see in this chapter, the problems pervade much of modern scientific research as well. Power Poses, Priming, and Pinot Noir If p-values and hedge funds are foreign to you, you have probably at least received an email forwarded from a gullible friend, or seen a post on your social media feeds, proclaiming the newest scientific finding that will change your life forever. Do you want to live longer? Drink more red wine (or maybe less).

Repeatedly performing the same experiment, or repeatedly running different statistical tests on the same dataset, but then only reporting the most interesting results is known as p-hacking. It is a technique that scientists can use (deliberately or unconsciously) to try to get their results to appear more significant (remember from the beginning of the chapter that p-values are a commonly used measure of statistical significance). It isn’t a statistically valid practice, but it is incentivized by the structure of modern scientific publishing. This is because not all scientific journals are created equal: like most other things in life, some are viewed as conferring a higher degree of status than others, and researchers want to publish in these better journals.
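A tiny simulation illustrates the raw material that p-hacking exploits: when the null hypothesis is true, p-values are uniformly distributed, so roughly one test in twenty clears the 0.05 bar by chance alone. (A sketch; the uniform draw stands in for running a full statistical test on null data.)

```python
import random

random.seed(0)

# Under a true null hypothesis, a p-value is uniform on [0, 1],
# so about 5% of tests come out "significant" by chance.
n_tests = 10_000
significant = sum(1 for _ in range(n_tests) if random.random() < 0.05)
rate = significant / n_tests   # close to 0.05
```

Reporting only those chance "hits" is exactly the selective disclosure the passage describes.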

pages: 62 words: 14,996

SciPy and NumPy
by Eli Bressert
Published 14 Oct 2012

import numpy as np
from scipy import stats

# Generating a normal distribution sample
# with 100 elements
sample = np.random.randn(100)

# normaltest tests the null hypothesis.
out = stats.normaltest(sample)
print('normaltest output')
print('Z-score = ' + str(out[0]))
print('P-value = ' + str(out[1]))

# kstest is the Kolmogorov-Smirnov test for goodness of fit.
# Here its sample is being tested against the normal distribution.
# D is the KS statistic and the closer it is to 0 the better.
out = stats.kstest(sample, 'norm')
print('\nkstest output for the Normal distribution')
print('D = ' + str(out[0]))
print('P-value = ' + str(out[1]))

# Similarly, this can be easily tested against other distributions,
# like the Wald distribution.
out = stats.kstest(sample, 'wald')
print('\nkstest output for the Wald distribution')
print('D = ' + str(out[0]))
print('P-value = ' + str(out[1]))

Researchers commonly use descriptive functions for statistics.

pages: 119 words: 10,356

Topics in Market Microstructure
by Ilija I. Zovko
Published 1 Nov 2008

Significant slope coefficients show that if two institutions' strategies were correlated in one month, they are likely to be correlated in the next one as well. The table does not contain the off-book market because we cannot reconstruct institution codes for the off-book market in the same way as we can for the on-book market. The ± values are the standard error of the coefficient estimate and the values in the parenthesis are the standard p-values.

On-book market
Stock   Intercept                Slope                R2
AAL     -0.010 ± 0.004 (0.02)    0.25 ± 0.04 (0.00)   0.061
AZN     -0.01 ± 0.003 (0.00)     0.14 ± 0.03 (0.00)   0.019
LLOY    0.003 ± 0.003 (0.28)     0.23 ± 0.02 (0.00)   0.053
VOD     0.008 ± 0.001 (0.00)     0.17 ± 0.01 (0.00)   0.029

does not work for institutions that do not trade frequently.5

This may lead to instabilities in coefficient estimates for those variables and we need to keep this in mind when interpreting results. The results for the on- and off-book markets, as well as for the daily and hourly returns are collected in table II. Apart from the value of the coefficient, its error and p-value, we list also Rs2 and Rp2. Rs2 is the value of R-square of a regression with only the selected variable, and no others, included. It is equal to the square root of the absolute value of the correlation between the variable and the

[Table residue: daily and hourly regressions of returns on δV (signed volume), δE (entropy), δN (no. firms), and δT (no. signed trades) for the on-book and off-book markets, reporting coefficients, errors with p-values, Rs2, Rp2, and overall R2.]

CHAPTER 5. MARKET IMBALANCES AND STOCK RET.: HETEROGENEITY OF ORDER SIZES AT THE LSE
Table 5.2: Regression results showing the significance of the market imbalance variables on price returns. Columns from left to right are estimated coefficient, its error and in the parenthesis the p-value of the test that the coefficient is zero assuming normal statistics; Rs2 is the value of R2 in a regression where only the selected variable is present in the regression. It expresses how much the variable on its own (solo) explains price returns. Final column Rp2 is the partial R2 of the selected variable.

The Intelligent Asset Allocator: How to Build Your Portfolio to Maximize Returns and Minimize Risk
by William J. Bernstein
Published 12 Oct 2000

An only slightly more complex formulation is used to evaluate money managers. One has to be extremely careful to distinguish out-of-sample from in-sample performance. One should not be surprised if one picks out the best-performing manager out of 500 and finds that his p value is .001. However, if one identifies him ahead of time, and then his performance p value is .001 after the fact, then he probably is skilled.

Table 6-1. Subsequent Performance of Top Performing Funds, 1970–1998

                          Return 1970–1974   Return 1975–1998
Top 30 funds 1970–1974         0.78%             16.05%
All funds                     −6.12%             16.38%
S&P 500                       −2.35%             17.04%

                          Return 1975–1979   Return 1980–1998
Top 30 funds 1975–1979        35.70%             15.78%
All funds                     20.44%             15.28%
S&P 500                       14.76%             17.67%

                          Return 1980–1984   Return 1985–1998
Top 30 funds 1980–1984        22.51%             16.01%
All funds                     14.83%             15.59%
S&P 500                       14.76%             18.76%

                          Return 1985–1989   Return 1990–1998
Top 30 funds 1985–1989        22.08%             16.24%
All funds                     16.40%             15.28%
S&P 500                       20.41%             17.81%

                          Return 1990–1994   Return 1995–1998
Top 30 funds 1990–1994        18.94%             21.28%
All funds                      9.39%             24.60%
S&P 500                        8.69%             32.18%

SOURCE: DFA/Micropal/Standard and Poor’s.

The difference between the batter’s performance and the mean is .020, and dividing that by the SE of .0063 gives a “z value” of 3.17. Since we are considering 10 years, performance, there are 9 “degrees of freedom.” The z value and degrees of freedom are fed into a “t distribution function” on our spreadsheet, and out pops a p value of .011. In other words, in a “random batting” world, there is a 1.1% chance of a given batter averaging .280 over 10 seasons. Whether or not we consider such a batter skilled also depends on whether we are observing him “in sample” or “out of sample.” In sample means that we picked him out of a large number of batters—say, all of his teammates—after the fact.
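The arithmetic here is simple to reproduce (the .260 league mean is implied by the passage, since the batter averaged .280 and the difference is .020; converting the resulting z value with 9 degrees of freedom into the quoted p value of .011 additionally requires a t-distribution function, as from a spreadsheet or a statistics package):

```python
# The passage's numbers: a .280 average against an implied .260 mean,
# with a standard error of .0063 over ten seasons
diff = 0.280 - 0.260
se = 0.0063
z_value = diff / se   # the "z value" fed into the t distribution function
```

Feeding z ≈ 3.17 and 9 degrees of freedom into a two-tailed t-distribution lookup gives the p value of .011 cited in the text.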

For example, if a manager has an alpha of −4% per year this means that the manager has underperformed the regression-determined benchmark by 4% annually. Oakmark’s alpha for the first 29 months is truly spectacular, and quite statistically significant, with a p value of .0004. This means that there was less than a 1-in-2000 possibility that the fund’s superb performance in the first 29 months could have been due to chance. Unfortunately, its performance in the last 29-month period was equally impressive, but in the wrong direction. My interpretation of the above data is that Mr.

pages: 836 words: 158,284

The 4-Hour Body: An Uncommon Guide to Rapid Fat-Loss, Incredible Sex, and Becoming Superhuman
by Timothy Ferriss
Published 1 Dec 2010

In fact, it’s one in two. Fifty percent. To become better at spotting randomness for what it is, it’s important to understand the concept of “p-value,” which you’ll see in all good research studies. It answers the question: how confident are we that this result wasn’t due to random chance? To demonstrate (or imply) cause-and-effect, the gold standard for studies is a p-value of less than 0.05 (p < 0.05), which means a less than 5% likelihood that the result can be attributed to chance. A p-value of less than 0.05 is also what most scientists mean when they say something is “statistically significant.” An example makes this easy to understand.

In this case, 10 extra flips out of 100 doesn’t prove cause-and-effect at all. Three points to remember about p-values and “statistical significance”:

• Just because something seems miraculous doesn’t mean it is. People are fooled by randomness all the time, as in the birthday example.
• The larger the difference between groups, the smaller the groups can be. Critics of small trials or self-experimentation often miss this. If something appears to produce a 300% change, you don’t need that many people to show significance, assuming you’re controlling variables.
• It is not kosher to combine p-values from multiple experiments to make something more or less believable.

LIST OF ILLUSTRATIONS GROUND ZERO—GETTING STARTED AND SWARAJ Comparison of Methods for Estimating % Bodyfat Male Examples—Bodyfat Female Examples—Bodyfat Ramit Sethi’s Betting Chart Weight Glide Path SUBTRACTING FAT Comparison of Dietary Fats and Oils Air Squats Wall Presses Chest Pulls Ray Cornise’s Fat-Loss Spreadsheet Continuous Glucose Monitor Glucose Trend: Ferriss, Tim Modal Day: Ferriss, Tim Glucose Trend, September 25 Glucose Trend, September 26 Testosterone and Nandrolone ADDING MUSCLE The Kettlebell Swing Touch-and-Go Deadlifts Two-Legged Glute Activation Raises Flying Dog The Myotatic Crunch Abdominal Muscles Cat Vomit Exercise Front Plank Side Plank Hip Flexor Stretch Alpha-Actinin 3 (ACTN3) Time Ferriss, Before-and-After Shots Pull-down Machine Shoulder Press The Locked Position Slight Incline/Decline Bench Press Leg Press Barbell Overhead Press Squat Sample Workouts Calendars The “Yates” Bent Row The Reverse Drag Curl Sacroplasmic Hypertrophy and Myofibrillar Hyertrophy IMPROVING SEX Conventional Missionary and Improved-Angle Missionary Improved-Pressure Missionary Conventional Cowgirl and Improved-Pressure Cowgirl The Clitoris The 15-Minute Female Orgasm The Hypothalamus-Pituitary-Testosterone Axis (HPTA) The Menstrual Cycle PERFECTING SLEEP FitBit Sleep Analysis WakeMate Sleep Analysis Zeo—Good Sleep Example Zeo—Bad Sleep Example Monophasic Sleep and Polyphasic Sleep REVERSING INJURIES Barefoot Walker’s Feet and Modern Man’s Feet Static Back Static Extension Position on Elbows Shoulder Bridge with Pillow Active Bridges with Pillow Supine Groin Progressive in Tower Alternative: Supine Groin on Chair Air Bench ART, Before and After Thoraco-dorsal Fascia The Chop and Lift Full and Half-Kneeling Ideal Placement on One Line Tricep Rope Attachment Single-Leg Flexibility Assessment Down-Left Chop Ideal Placement Turkish Get-Up Start and Finish of Two-Arm Single-Leg Deadlift RUNNING FASTER AND FASTER Hip Flexors Stretch Reverse Lunge Demonstration Untrained 
and Trained Start Positions Reverse Hyper(extension) on a Bench and Swiss Ball Enzyme Activity Graph Super Quad Stretch Pelvic Symmetry and Glute Flexibility Stretches Repositioning the Pelvis Pre-Workout Glute Activation Running by the Numbers Video Snapshots Diagram of Energetic Systems Taper Schedule 12-Weeks to 50k Schedules GETTING STRONGER How to Perform the Conventional Deadlift Brench-Press Plyometrics The Torture Twist The Sumo Deadlift The Sharapova Sit-Up: Janda Bench Pressing 854 Pounds: Set up Bench Pressing 854 Pounds: Technique FROM SWIMMING TO SWINGING Full Stroke The Cushion The Slot Impact Position Historical CSRs Area of Impact (AOI) Angle L Practicing Your Angles APPENDICES AND EXTRAS Weight (Food) Conversions Body Weight Conversions Volume (Food) Conversions Muscles of the Body (Partial) Today’s Random Medical News P-Value Grid Number of Respondents by Weight Loss Average Weight Lost by Number of Meals Per Day CONTENTS LIST OF ILLUSTRATIONS START HERE Thinner, Bigger, Faster, Stronger? How to Use This Book FUNDAMENTALS—FIRST AND FOREMOST The Minimum Effective Dose: From Microwaves to Fat-Loss Rules That Change the Rules: Everything Popular Is Wrong GROUND ZERO—GETTING STARTED AND SWARAJ The Harajuku Moment: The Decision to Become a Complete Human Elusive Bodyfat: Where Are You Really?

pages: 301 words: 85,263

New Dark Age: Technology and the End of the Future
by James Bridle
Published 18 Jun 2018

P stands for probability, denoting the value at which an experimental result can be considered statistically significant. The ability to calculate a p-value in many different situations has made it a common marker for scientific rigour in experiments. A value of p less than 0.05 – meaning that there is a less than 5 per cent chance of a correlation being the result of chance, or a false positive – is widely agreed across many disciplines to be the benchmark for a successful hypothesis. But the result of this agreement is that a p-value less than 0.05 becomes a target, rather than a measure. Researchers, given a particular goal to aim for, can selectively cull from great fields of data in order to prove any particular hypothesis.

Take ten green dice and roll each of them one hundred times. Of those 1,000 rolls, 183 turn up a six. If the dice were absolutely fair, the number of sixes should be 1,000/6, which is 167. Something’s up. In order to determine the validity of the experiment, we need to calculate the p-value of our experiment. But the p-value has nothing to do with the actual hypothesis: it is simply the probability that random rolls would turn up 183 or more sixes. For 1,000 dice rolls, that probability is only 4 per cent, or p = 0.04 – and just like that, we have an experimental result that is deemed sufficient by many scientific communities to warrant publication.15 Why should such a ridiculous process be regarded as anything other than a gross simplification?

Data dredging has become particularly notorious in the social sciences, where social media and other sources of big behavioural data have suddenly and vastly increased the amount of information available to researchers. But the pervasiveness of p-hacking isn’t limited to the social sciences. A comprehensive analysis of 100,000 open access papers in 2015 found evidence of p-hacking across multiple disciplines.16 The researchers mined the papers for every p-value they could find, and they discovered that the vast majority just scraped under the 0.05 boundary – evidence, they said, that many scientists were adjusting their experimental designs, data sets, or statistical methods in order to get a result that crossed the significance threshold. It was results such as these that led the editor of PLOS ONE, a leading medical journal, to publish an editorial attacking statistical methods in research entitled ‘Why most published research findings are false.’17 It’s worth emphasising at this point that data dredging is not the same as fraud.

pages: 290 words: 82,871

The Hidden Half: How the World Conceals Its Secrets
by Michael Blastland
Published 3 Apr 2019

Once the data is in and we think we do see evidence, say, that drinking more tea seems to be associated with having more babies, we ask: ‘How likely is it that we would see these results if our null hypothesis was true, that is, if there was in fact no relationship?’ If the chance of our observation is less than 5% (or p < 0.05 as it is usually written, known as a p-value), then this is considered an acceptable level at which to reject the null hypothesis. It is not a proof, or a test of the ‘truth’ of an experimental result, it is a probabilistic test of there being nothing, given what we’ve observed. A p-value of less than 0.05 has become many researchers’ heart’s desire. Statistical significance sounds cumbersome, but it is a workhorse of statistical inquiry. To those outside statistics, it’s amazing to discover there’s a war between its proponents and critics.

Critics call it ‘statistical alchemy’ and would like to do away with it.33 The essence of their complaint is that results that could arise simply by chance are too easily turned into ‘findings’ when given the stamp of approval by a test of statistical significance. Of course, all methods can be misused; whether the misuse discredits the test, I’ll leave to others.34 I confess to retaining a cautious interest in p-values, but agree – with modest statistical understanding – that dependence on them as one-off, binary tests often seems to have been simplistic. A single path to knowledge is often not enough. That sounds like the utterance of a mountain-top mystic, but in the context of research it has become a pragmatic necessity to follow more than one path (where possible), to make sure they lead to the same destination.

The case of medicine is at first sight rather more intractable than astrology, for it is hard to disprove astrology.’ 28 John Ioannidis has given many talks on this subject. They are good viewing. This one was the first BIH (Berlin Institute of Health) annual special lecture. It can be found on YouTube. His published papers are also surprisingly readable. 29 The test of statistical significance with p-values <0.05, whenever statistical significance is reported. 30 John P. A. Ioannidis, T. D. Stanley and Hristos Doucouliagos, ‘The Power of Bias in Economics Research’, Economic Journal, vol. 127, 2017, F236–F265. 31 Colin F. Camerer et al., ‘Evaluating Replicability of Laboratory Experiments in Economics’, Science, 25 March 2016. 32 Monya Baker, ‘1,500 Scientists Lift the Lid on Reproducibility: Survey Sheds Light on the “Crisis” Rocking Research’, Nature (News and Comment), vol. 534, 25 May 2016. 33 See, for example, Blakeley B.

pages: 197 words: 35,256

NumPy Cookbook
by Ivan Idris
Published 30 Sep 2012

Download price data
# 2011 to 2012
start = datetime.datetime(2011, 01, 01)
end = datetime.datetime(2012, 01, 01)
print "Retrieving data for", sys.argv[1]
quotes = finance.quotes_historical_yahoo(sys.argv[1], start, end, asobject=True)
close = numpy.array(quotes.close).astype(numpy.float)
print close.shape
print normal_ad(numpy.diff(numpy.log(close)))

The following shows the output of the script with p-value of 0.13:

Retrieving data for AAPL
(252,)
(0.57103805516803163, 0.13725944999430437)

How it works... This recipe demonstrated the Anderson Darling statistical test for normality, as found in scikits-statsmodels. We used the stock price data, which does not have a normal distribution, as input. For the data, we got a p-value of 0.13. Since probabilities range between zero and one, this confirms our hypothesis.

Installing scikits-image
scikits image is a toolkit for image processing, which requires PIL, SciPy, Cython, and NumPy.

How to do it... We will download price data as in the previous recipe; but this time for a single stock. Again, we will calculate the log returns of the close price of this stock, and use that as an input for the normality test function. This function returns a tuple containing a second element—a p-value between zero and one. The complete code for this tutorial is as follows:

import datetime
import numpy
from matplotlib import finance
from statsmodels.stats.adnorm import normal_ad
import sys

#1. Download price data
# 2011 to 2012
start = datetime.datetime(2011, 01, 01)
end = datetime.datetime(2012, 01, 01)
print "Retrieving data for", sys.argv[1]
quotes = finance.quotes_historical_yahoo(sys.argv[1], start, end, asobject=True)
close = numpy.array(quotes.close).astype(numpy.float)
print close.shape
print normal_ad(numpy.diff(numpy.log(close)))

The following shows the output of the script with p-value of 0.13:

Retrieving data for AAPL
(252,)
(0.57103805516803163, 0.13725944999430437)

How it works...

pages: 755 words: 121,290

Statistics hacks
by Bruce Frey
Published 9 May 2006

This process is called a test of significance. Tests of significance produce a p-value (probability value), which is the probability that the sample value could have been drawn from a particular population of interest. The lower the p-value, the more confident we are in our beliefs that we have achieved statistical significance and that our data reveals a relationship that exists not only in our sample but also in the whole population represented by that sample. Usually, a predetermined level of significance is chosen as a standard for what counts. If the eventual p-value is equal to or lower than that predetermined level of significance, then the researcher has achieved a level of significance.

There must be a relationship in the population to find; otherwise, power has no meaning. Power is not the chance of finding a significant result; it is the chance of finding that relationship if it is there to find. The formula for power contains three components:

Sample size
The predetermined level of significance (p-value) to beat (be less than)
The effect size (the size of the relationship in the population)

Conducting a Power Analysis
Let's say we want to compare two different sample groups and see whether they are different enough that there is likely a real difference in the populations they represent.

magic number, lotteries and MANOVA (multivariate analysis of variance) MCAT (Medical College Admission Test) mean [See also standard error of the mean] ACT calculating Central Limit Theorem central tendency and cut score and 2nd defined 2nd effect size and linear regression and normal curve and 2nd normal distribution precision of predicting test performance 2nd regression toward 2nd 3rd T scores z score 2nd 3rd measurement [See also standard error of measurement] <Emphasis>t</> tests asking questions categorical converting raw scores defined effect of increasing sample size Gott's Principle graphs and improving test scores levels of 2nd normal distribution percentile ranks precise predicting with normal curve probability characteristics reliability of standardized scores 2nd testing fairly validity of 2nd 3rd measures of central tendency median central tendency and 2nd 3rd defined normal curve and medical decisions Michie, Donald Microsoft Excel DATAS software histograms predicting football games Milgram, Stanley 2nd 3rd 4th mind control Minnesota Multiphase Personality Inventory-II test mnemonic devices mode central tendency and 2nd defined normal curve and models building 2nd defined goodness-of-fit statistic and money casinos and 2nd infinite doubling of Monopoly Monty Hall problem multiple choice questions analysis of answer options writing good 2nd 3rd multiple regression criterion variables and defined multiple predictor variables predicting football games multiple regression) multiplicative rule 2nd multivariate analysis of variance (MANOVA) mutually exclusive outcomes Index [SYMBOL] [A] [B] [C] [D] [E] [F] [G] [H] [I] [J] [K] [L] [M] [N] [O] [P] [Q] [R] [S] [T] [V] [W] [X] [Y] [Z] negative correlation 2nd negative numbers negative wording Newcomb, Simon 2nd Nigrini, Mark 2nd 3rd 4th 5th 6th nominal level of measurement 2nd 3rd non-experimental designs norm-referenced scoring defined 2nd percentile ranks simplicity of normal curve Central Limit Theorem and 
overview precision of predicting with z score and 2nd normal distribution applying characteristics iTunes shuffle and overview shape of traffic patterns null hypothesis defined errors in testing Law of Large Numbers and possible outcomes purpose 2nd 3rd research hypothesis and statistical significance and nuts 2nd Index [SYMBOL] [A] [B] [C] [D] [E] [F] [G] [H] [I] [J] [K] [L] [M] [N] [O] [P] [Q] [R] [S] [T] [V] [W] [X] [Y] [Z] O'Reilly Media 2nd observed score 2nd 3rd odds [See also odds: (see also gambling\\] [See also odds: (see also gambling\\] figuring out 2nd pot odds 2nd Powerball lottery one-way chi-square test ordering scores ordinal level of measurement outcomes blackjack 2nd coin toss comparing number of possible 2nd dice rolls 2nd gambler's fallacy about identifying unexpected likelihood of 2nd mutually exclusive occurrence of specific 2nd predicting 2nd predicting baseball games shuffled deck of cards spotting random trial-and-error learning two-point conversion chart and outs Index [SYMBOL] [A] [B] [C] [D] [E] [F] [G] [H] [I] [J] [K] [L] [M] [N] [O] [P] [Q] [R] [S] [T] [V] [W] [X] [Y] [Z] p-values pairs of cards, counting by parallel forms reliability partial correlations Party Shuffle (iTunes) 2nd 3rd 4th Pascal's Triangle Pascal, Blaise passing epochs payoffs expected 2nd magic number for lotteries Powerball lottery Pearson correlation coefficient 2nd Pedrotti, J.T. percentages ratio level of measurement sample estimates of scores percentile ranks 2nd performance criterion-based arguments ranking players permutations 2nd 3rd Petersen, S.E.

Risk Management in Trading
by Davis Edwards
Published 10 Jul 2014

[FIGURE 7.5 Regression Output: a spreadsheet ANOVA/regression table (df, SS, MS, F, Significance F; coefficient, standard error, t Stat, P-Value, and 95% bounds for the intercept and x Variable 1, with an estimated slope of 0.8002) annotated with the retrospective tests: Test 1, slope must be between 0.80 and 1.25; Test 4, p-value of t-statistic must be less than 0.05; Test 5, p-value of F-statistic must be less than 0.05; and the number of observations must be greater than 30.]

hedge documentation memo must pass the retrospective tests. In this example, five tests have been identified, and the hedge-effectiveness test would fail because the adjusted R2 of 0.7986 is less than the 0.80 required for a passing result.

An R2 test indicates how well observations fit a line or curve. A common test is to ensure an R2 greater than 0.80. Test 3 (Slope Significance). It is common to test that the slope is statistically significant. This can be done by checking that the p-value of the F-statistic is less than 0.05. Test 4 (R2 Significance). It is common to test that the R2 test is significant. This can be done by checking that the p-value of the t-statistic is less than 0.05. Test 5 (Number of Samples). A sufficient number of samples needs to be taken for a valid test. For most situations, this means 30 or more samples. Generally, a statistical package in a spreadsheet is used to calculate effectiveness.
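The slope and R2 checks can be sketched with ordinary least squares on synthetic data (a hypothetical hedge series invented for illustration, not the book's example; the p-values of the F- and t-statistics would additionally require a statistics package or spreadsheet function):

```python
import random

random.seed(1)

# Hypothetical daily changes of an exposure (x) and its hedge (y);
# the true slope is 0.9, inside the 0.80-1.25 band of Test 1.
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.9 * xi + random.gauss(0, 0.3) for xi in x]

# Ordinary least squares by hand
mx = sum(x) / n
my = sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
slope = sxy / sxx
intercept = my - slope * mx

# R-squared from residual and total sums of squares
ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
ss_tot = sum((yi - my) ** 2 for yi in y)
r_squared = 1 - ss_res / ss_tot

passes_slope = 0.80 <= slope <= 1.25   # Test 1: slope band
passes_r2 = r_squared >= 0.80          # Test 2: R2 threshold
passes_samples = n >= 30               # sample-count check
```

Each boolean corresponds to one of the pass/fail tests a hedge-effectiveness memo would record.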

No Slack: The Financial Lives of Low-Income Americans
by Michael S. Barr
Published 20 Mar 2012

Withholding Preference, by Portfolio Allocation Group (a)

Percent preferring to overwithhold paycheck rather than underwithhold or exactly withhold:

Group                    Percent          Sample size
All                      0.685 (0.027)    650
Mostly illiquid assets   0.761 (0.037)    220
Mostly liquid assets     0.623 (0.048)    145
One illiquid asset       0.599 (0.073)    38
One liquid asset         0.713 (0.047)    117
No assets                0.618 (0.045)    130

Summary statistic: F statistic = 3.43, p value = 0.013

Source: Detroit Area Household Financial Services study.
a. Standard errors are in parentheses. Sample includes respondents living in low- and moderate-income census tracts who filed a tax return in 2003 or 2004. The F statistic and p value correspond to a test of equality of the percentages. The F statistic is distributed with 4 numerator and 70 denominator degrees of freedom. Standard errors are clustered at the segment level.

**Statistically significant at the 5 percent level, two-tailed test. ***Statistically significant at the 1 percent level, two-tailed test.

[Table 10-4. Relationship between Withholding Preference and Asset Allocation (a): residue of a regression table in which the dependent variable is “wants to overwithhold,” the regressors are the asset-allocation groups (mostly illiquid assets, mostly liquid assets, one illiquid asset, one liquid asset), the controls are demographics, employment and financial variables, income volatility, household income, risk tolerance, time preference, ease of borrowing $500, and gets refund, and the summary statistics are an F statistic and p value (Michael S. Barr and Jane K. Dokko, p. 232).]

and it is more so for those facing greater income volatility.

**Statistically significant at the 5 percent level, two-tailed test. ***Statistically significant at the 1 percent level, two-tailed test.

[Table 10-6. Relationship between Spending All of Tax Refund and Asset Allocation (a): residue of a regression table in which the dependent variable is “spends all,” estimated across specifications (1)-(8) that progressively add controls (demographics, employment and financial variables, income volatility, household income, risk tolerance, time preference, ease of borrowing $500, gets refund); coefficients with standard errors are reported for mostly illiquid assets, mostly liquid assets, has one illiquid asset, and has one liquid asset, along with F statistics and p values.]

Source: Detroit Area Household Financial Services study. a.

pages: 420 words: 130,714

Science in the Soul: Selected Writings of a Passionate Rationalist
by Richard Dawkins
Published 15 Mar 2017

Statistical tests exist to compute the probability that, if the drug really were doing nothing, you could have got the result you did get (or an even ‘better’ result) by luck. The ‘P value’ is that probability, and the lower it is, the less likely the result is to have been a matter of luck. Results with P values of 1 per cent or less are customarily taken as evidence, but that cut-off point is arbitrary. P values of 5 per cent may be taken as suggestive. For results that seem very surprising, for example an apparent demonstration of telepathic communication, a P value much lower than 1 per cent would be demanded.
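Dawkins's "result you did get, or an even better one" definition is just a tail sum. The numbers below are hypothetical (16 of 20 patients improving, against a 50:50 chance baseline), chosen only to make the arithmetic concrete:

```python
from math import comb

def binomial_p_value(successes, n, chance):
    """P(at least `successes` hits in n trials) if only luck (probability `chance`) is at work."""
    return sum(comb(n, k) * chance**k * (1 - chance)**(n - k)
               for k in range(successes, n + 1))

# Hypothetical trial: 16 of 20 patients improve; under the null the drug does nothing (chance = 0.5).
p = binomial_p_value(16, 20, 0.5)
print(round(p, 4))  # 0.0059 -- below the customary 1 per cent cut-off
```

The same function with a lower cut-off would be the "telepathy" standard Dawkins describes: a surprising claim demands a far smaller tail probability before luck is ruled out.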

The odds against such a threefold coincidence are not particularly great. No special explanation is called for. Compare how a statistician decides what P value*3 to accept as evidence for an effect in an experiment. It is a matter of judgement and dispute, almost of taste, exactly when a coincidence becomes too great to stomach. But, no matter whether you are a cautious statistician or a daring statistician, there are some complex adaptations whose ‘P value’, whose coincidence rating, is so impressive that nobody would hesitate to diagnose life (or an artefact designed by a living thing). My definition of living complexity is, in effect, ‘that complexity which is too great to have come about through coincidence’.

pages: 276 words: 81,153

Outnumbered: From Facebook and Google to Fake News and Filter-Bubbles – the Algorithms That Control Our Lives
by David Sumpter
Published 18 Jun 2018

Using the training set, I found that the logistic model that best predicted recidivism was based on age (b_age = -0.047; P-value < 2e-16) and number of priors (b_priors = 0.172; P-value < 2e-16), combined with a constant (b_const = 0.885; P-value < 2e-16). This implies that older defendants are less likely to be arrested for further crimes, while those with more priors are more likely to be arrested again. Race was not a statistically significant predictor of recidivism (in a multivariate model including race, an African American factor had P-value = 0.427). 12 The most comprehensive of these is Flores, A. W., Bechtel, K. and Lowenkamp, C.
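Plugging the reported coefficients into the logistic function turns them into predicted probabilities. This sketch assumes the standard logistic form P = 1/(1 + e^(-z)) with z = b_const + b_age·age + b_priors·priors; the example ages and prior counts are made up for illustration:

```python
from math import exp

# Coefficients as reported in the text (logistic regression fit on the training set).
B_CONST, B_AGE, B_PRIORS = 0.885, -0.047, 0.172

def p_recidivism(age, priors):
    """Predicted probability of rearrest under the fitted logistic model."""
    z = B_CONST + B_AGE * age + B_PRIORS * priors
    return 1.0 / (1.0 + exp(-z))

# Older defendants score lower; more priors score higher, matching the text.
print(round(p_recidivism(25, 3), 3))   # ~0.556
print(round(p_recidivism(50, 3), 3))   # lower, since age enters negatively
```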

pages: 306 words: 82,765

Skin in the Game: Hidden Asymmetries in Daily Life
by Nassim Nicholas Taleb
Published 20 Feb 2018

fn4 I am usually allergic to some public personalities, but not others. It took me a while to figure out how to draw the line explicitly. The difference is risk-taking and whether the person worries about his or her reputation. fn5 In a technical note called “Meta-distribution of p-values” around the stochasticity of “p-values” and their hacking by researchers, I show that the statistical significance of these papers is at least one order of magnitude smaller than claimed. fn6 Segnius homines bona quam mala sentiunt. fn7 Nimium boni est, cui hinil est mali. fn8 Non scabat caput praeter unges tuo, Ma biikkak illa ifrak.

The IYI has been wrong, historically, about Stalinism, Maoism, GMOs, Iraq, Libya, Syria, lobotomies, urban planning, low carbohydrate diets, gym machines, behaviorism, trans-fats, Freudianism, portfolio theory, linear regression, HFCS (High-Fructose Corn Syrup), Gaussianism, Salafism, dynamic stochastic equilibrium modeling, housing projects, marathon running, selfish genes, election-forecasting models, Bernie Madoff (pre-blowup), and p-values. But he is still convinced that his current position is right.fn1 NEVER GOTTEN DRUNK WITH RUSSIANS The IYI joins a club to get travel privileges; if he is a social scientist, he uses statistics without knowing how they are derived (like Steven Pinker and psycholophasters in general); when in the United Kingdom, he goes to literary festivals and eats cucumber sandwiches, taking small bites at a time; he drinks red wine with steak (never white); he used to believe that dietary fat was harmful and has now completely reversed himself (information in both cases is derived from the same source); he takes statins because his doctor told him to do so; he fails to understand ergodicity, and, when explained to him, he forgets about it soon after; he doesn’t use Yiddish words even when talking business; he studies grammar before speaking a language; he has a cousin who worked with someone who knows the Queen; he has never read Frédéric Dard, Libanius Antiochus, Michael Oakeshott, John Gray, Ammianus Marcellinus, Ibn Battuta, Saadia Gaon, or Joseph de Maistre; he has never gotten drunk with Russians; he never drinks to the point where he starts breaking glasses (or, preferably, chairs); he doesn’t even know the difference between Hecate and Hecuba (which in Brooklynese is “can’t tell sh**t from shinola”); he doesn’t know that there is no difference between “pseudointellectual” and “intellectual” in the absence of skin in the game; he has mentioned quantum mechanics at least twice in the past five years in conversations that had nothing 
to do with physics.

Debtor Nation: The History of America in Red Ink (Politics and Society in Modern America)
by Louis Hyman
Published 3 Jan 2011

Walter Greene, “Before National Urban League, Cleveland, Ohio,” September 2, 1952, folder “Minority Group Housing – Printed Material, Speeches, Field Letters, Etc., 1940–1950,” box 4, Commissioner’s Correspondence and subject file, 1938–1958, RG 31 Records of the Federal Housing Administration, NARA, 6. 46. Kenneth Wells to Guy T.O. Holladay, June 24, 1953, folder “Minority Group Housing – Printed Material, Speeches, Field Letters, Etc., 1940–1950,” box 4, Commissioner’s Correspondence and subject file, 1938–1958, RG 31 Records of the Federal Housing Administration, NARA. 47. P-value = 0.0001. 48. Once again, race had a p-value of > 0.586. The racial co-efficient, moreover, dropped to only a little over $500. 49. Linear regression with mortgage-having subpopulation for mortgage amount, race (P > 0.586) was not significant, and location (P > 0.006) was. NOTES TO CHAPTER 5 335 50. Pearson test for suburban dummy variable was (P > 0.42). 51.

In terms of questions, this chapter pays far greater attention to the intersections of race, class, and location than the original published survey, which was mostly a collection of bar graphs and averages. For the less technically inclined reader, explanations of NOTES TO CHAPTER 5 331 some of the statistical methods will be in the notes. For the more technically inclined reader, p-values of relevant tests and regressions have generally been put in the notes. 3. William H. Whyte, “Budgetism: Opiate of the Middle Class,” Fortune (May 1956), 133, 136–37. 4. John Lebor, “Requirements for Profitable Credit Selling,” Credit Management Year Book 1959–1960 (New York: National Retail Dry Goods Association, 1959), 12. 5.

As discussed later in the chapter, at the same income levels, African Americans always borrowed more frequently than whites and had lower wealth levels. 22. This was determined by running a series of regressions on debt and liquid assets, while controlling for location, mortgage status, marital status, and income. P-values for liquid assets in all models (P > 0.00). For whites, the model had R2 = 0.12 and for whites R2 = 0.41. 23. Odds ratio 5.42 with (P > 0.01) [1.44, 20.41]. 24. A linear regression with a suburban debtor subpopulation shows race (P > 0.248) and liquid assets (P > 0.241) to have no relationship to the amount borrowed unlike mortgage status (P > 0.000) and income (P > 0.013). 25.

pages: 543 words: 153,550

Model Thinker: What You Need to Know to Make Data Work for You
by Scott E. Page
Published 27 Nov 2018

Sign, Significance, and Magnitude Linear regression tells us the following about coefficients of independent variables: Sign: The correlation, positive or negative, between the independent variable and the dependent variable. Significance (p-value): The probability that the sign on the coefficient is nonzero. Magnitude: The best estimate of the coefficient of the independent variable. In a single-variable regression, the closer fit to the line and the more data, the more confidence we can place in the sign and magnitude of the coefficient. Statisticians characterize the significance of a coefficient using its p-value, which equals the probability, based on the regression, that the coefficient is not zero. A p-value of 5% means a one-in-twenty chance that the data were generated by a process where the coefficient equals zero.
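One way to see what a coefficient's significance measures, without distribution tables, is a permutation test: shuffle the dependent variable so the true coefficient is zero by construction, and ask how often a slope as large as the observed one appears. The data below are synthetic and the test is a sketch of the idea, not the t-test a regression package would actually report:

```python
import random

def slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

random.seed(0)
xs = [float(i) for i in range(30)]
ys = [2.0 * x + random.gauss(0, 3) for x in xs]   # strong positive relationship

observed = slope(xs, ys)
shuffled = ys[:]
extreme = 0
for _ in range(2000):                              # null world: coefficient equals zero
    random.shuffle(shuffled)
    if abs(slope(xs, shuffled)) >= abs(observed):
        extreme += 1
p_value = extreme / 2000
print(observed > 0, p_value)   # positive sign, p-value near zero
```

With data this strongly related, essentially no shuffled dataset produces a slope as large as the observed one, so the p-value comes out near zero — the analogue of Page's "one-in-twenty chance" reading for a 5% p-value.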

payoffs in, 261 probabilities contact, 135 diffusion, 135 sharing, 135 transition, 190, 191 product competition, hybrid model of, 238–240 production function, 101 program trading, 225 property rights, 104 proposer effects, 236 (fig.) prospect theory, defining, 52 psychological biases, in rational-actor model, 51–53 public goods, 272–275 public projects decision problems, 292 mechanisms for, 292–294 pure coordination games, 174 pure exchange economies, 186–187 p-value, 85–86 quality and degree network formation, 123 quantity, 30 quantum computing, 80 Race to the Bottom, 2, 181, 182 radial symmetry, 233 random, 147 random friends, 125, 126 random mixing, 135 random networks, 122 (fig.) Monte Carlo method for, 121 random walk models, 155–158 efficient markets and, 159–161 network size and, 158–159, 158 (fig.)

Exploring Everyday Things with R and Ruby
by Sau Sheong Chang
Published 27 Jun 2012

Without going in depth into the mathematics of this test (which would probably fill up a whole section, if not an entire chapter, on its own), let’s examine the initial population by assuming that the population is normally distributed and running the Shapiro-Wilk test on it: > data <- read.table("money.csv", header=F, sep=",") > row <- as.vector(as.matrix(data[1,])) > row [1] 56 79 66 74 96 54 91 59 70 95 65 82 64 80 63 68 69 69 72 89 64 53 87 49 [47] 68 66 80 89 57 73 72 82 76 58 57 78 94 73 83 52 75 71 52 57 76 59 63 ... > shapiro.test(row) Shapiro-Wilk normality test data: row W = 0.9755, p-value = 0.3806 > As you can see, the p-value is 0.3806, which (on a scale of 0.0 to 1.0) is not small, and therefore the null hypothesis is not rejected. The null hypothesis is that of no change (i.e., the assumption that the distribution is normal). Strictly speaking, this doesn’t really prove that the distribution is normal, but a visual inspection of the first histogram chart in Figure 8-3 tells us that the likelihood of a normal distribution is high.
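R's shapiro.test has no pure-stdlib Python counterpart, but the same question — "does this sample look normal?" — can be asked with the Jarque–Bera statistic, which needs only skewness and kurtosis. This is a different (and cruder) test than Shapiro–Wilk, shown here purely as a self-contained sketch; for its two degrees of freedom the chi-squared upper tail happens to be exactly exp(-JB/2):

```python
import random
from math import exp

def jarque_bera(xs):
    """Return (JB statistic, approximate p-value) for a normality test."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2 - 3.0                  # excess kurtosis (0 for a normal)
    jb = n / 6.0 * (skew ** 2 + kurt ** 2 / 4.0)
    return jb, exp(-jb / 2.0)                  # chi-squared(2) upper tail

random.seed(42)
normal_sample = [random.gauss(70, 12) for _ in range(2000)]
skewed_sample = [random.expovariate(1.0) for _ in range(2000)]

print(jarque_bera(normal_sample)[1])   # typically a large p: no evidence against normality
print(jarque_bera(skewed_sample)[1])   # tiny p: normality clearly rejected
```

As with shapiro.test, a large p-value fails to reject normality rather than proving it; the skewed exponential sample shows what rejection looks like.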

pages: 1,065 words: 229,099

Real World Haskell
by Bryan O'Sullivan , John Goerzen , Donald Stewart and Donald Bruce Stewart
Published 2 Dec 2008

With this p_series function, parsing an array is simple: -- file: ch16/JSONParsec.hs p_array :: CharParser () (JAry JValue) p_array = JAry <$> p_series '[' p_value ']' Dealing with a JSON object is hardly more complicated, requiring just a little additional effort to produce a name/value pair for each of the object’s fields: -- file: ch16/JSONParsec.hs p_object :: CharParser () (JObj JValue) p_object = JObj <$> p_series '{' p_field '}' where p_field = (,) <$> (p_string <* char ':' <* spaces) <*> p_value Parsing an individual value is a matter of calling an existing parser, and then wrapping its result with the appropriate JValue constructor: -- file: ch16/JSONParsec.hs p_value :: CharParser () JValue p_value = value <* spaces where value = JString <$> p_string <|> JNumber <$> p_number <|> JObject <$> p_object <|> JArray <$> p_array <|> JBool <$> p_bool <|> JNull <$ string "null" <?> "JSON value" p_bool :: CharParser () Bool p_bool = True <$ string "true" <|> False <$ string "false" The choice combinator allows us to represent this kind of ladder-of-alternatives as a list. It returns the result of the first parser to succeed: -- file: ch16/JSONParsec.hs p_value_choice = value <* spaces where value = choice [ JString <$> p_string , JNumber <$> p_number , JObject <$> p_object , JArray <$> p_array , JBool <$> p_bool , JNull <$ string "null" ] <?> "JSON value" This leads us to the two most interesting parsers, for numbers and strings. We’ll deal with numbers first, since they’re simpler: -- file: ch16/JSONParsec.hs p_number :: CharParser () Double p_number = do s <- getInput case readSigned readFloat s of [(n, s')] -> n <$ setInput s' _ -> empty Our trick here is to take advantage of Haskell’s standard number parsing library functions, which are defined in the Numeric module.
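The ladder-of-alternatives idea behind Parsec's choice combinator is easy to mimic in other languages. Below is a minimal Python sketch (names chosen to echo the Haskell; a "parser" here is a function taking (text, pos) and returning (value, new_pos) on success or None on failure). It covers only the literal true/false/null branches, not full JSON:

```python
def literal(token, result):
    """Parser that matches an exact token and yields `result`."""
    def parse(text, pos):
        if text.startswith(token, pos):
            return result, pos + len(token)
        return None
    return parse

def choice(parsers):
    """Try each parser in order; return the first success (Parsec's `choice`)."""
    def parse(text, pos):
        for p in parsers:
            r = p(text, pos)
            if r is not None:
                return r
        return None
    return parse

p_bool = choice([literal("true", True), literal("false", False)])
p_value = choice([p_bool, literal("null", None)])

print(p_value("true", 0))    # (True, 4)
print(p_value("null", 0))    # (None, 4) -- JSON null parses to Python None
print(p_value("nope", 0))    # None -- every alternative failed
```

Returning a (value, position) pair on success keeps JSON null (Python None inside the pair) distinct from outright failure (a bare None), mirroring how Parsec separates results from parse errors.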

pages: 273 words: 72,024

Bitcoin for the Befuddled
by Conrad Barski
Published 13 Nov 2014

Figure 7-7: A Koblitz curve We then choose a prime modulo p so that the elliptic curve satisfies this equation: y^2 = x^3 + ax + b (mod p) NOTE In this type of math notation, the modulo operation is performed after the additions, so first you calculate x^3 + ax + b and then you perform mod p on the result. Bitcoin uses a very large p value (specifically p = 2^256 − 2^32 − 2^9 − 2^8 − 2^7 − 2^6 − 2^4 − 1), which is important for cryptographic strength, but we can use a smaller number to illustrate how “driving around on integer-valued points on a Koblitz curve” works. Let’s choose p = 67. In fact, many curves satisfy the modular equation (namely, every curve where p is added to or subtracted from the b parameter any number of times; see the left-hand chart in Figure 7-8), and from those curves we can use all of the points that have integer-valued coordinates (shown in Figure 7-8 as dots).

Q = d × G = 13 × (5,47) = (7,22) (see Figure 7-10) * A clever way to generate a seemingly random but memorable private key is by coming up with a passphrase (i.e., Crowley and Satoshi sitting in a tree) and feeding it into a cryptographic hash function, which outputs an integer. This is called using a brainwallet. Because there are just slightly fewer than 2^256 points on the curve Bitcoin uses (because the p value is much higher than the one we are using), brainwallets can use the SHA256 hash function (due to its 256-bit output). Figure 7-10: Here are the 13 points we “drive through” as we point multiply to create a digital signature. Now let’s look at how we sign messages with our private and public keys (or Bitcoin transactions): The receiver of our message will need all the values we have calculated so far except the private key, namely p, a, b, G, n, and Q, in order to verify that the signature is valid.
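The small-number walk-through can be checked in a few lines. Using the toy parameters from the text (p = 67, G = (5, 47), private key d = 13) and assuming the Koblitz form y^2 = x^3 + 7 as on Bitcoin's own curve, double-and-add point multiplication reproduces Q = (7, 22). This is an illustrative sketch with the standard textbook group-law formulas, not production Bitcoin code:

```python
def inv_mod(x, p):
    """Modular inverse via Fermat's little theorem (p prime)."""
    return pow(x, p - 2, p)

def ec_add(P, Q, a, p):
    """Add two points on y^2 = x^3 + a*x + b (mod p); None is the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                        # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv_mod(2 * y1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * inv_mod(x2 - x1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(d, P, a, p):
    """Double-and-add scalar multiplication d*P."""
    R = None
    while d:
        if d & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        d >>= 1
    return R

# Toy curve from the text: p = 67, a = 0, b = 7 (assumed), G = (5, 47), d = 13.
p, a = 67, 0
G, d = (5, 47), 13
print(ec_mul(d, G, a, p))   # (7, 22), matching Q = 13 × (5, 47) in the text
```

Swapping in the secp256k1 constants (the huge p above, and the real G and n) turns the same thirty lines into a working, if slow, model of Bitcoin's public-key derivation.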

pages: 416 words: 39,022

Asset and Risk Management: Risk Oriented Finance
by Louis Esch , Robert Kieffer and Thierry Lopez
Published 28 Nov 2005

The values of z_q are found in the normal distribution tables.7 A few examples of these values are given in Table 6.2.

Table 6.2 Normal distribution quantiles

  q      z_q
  0.500  0.0000
  0.600  0.2533
  0.700  0.5244
  0.800  0.8416
  0.850  1.0364
  0.900  1.2816
  0.950  1.6449
  0.960  1.7507
  0.970  1.8808
  0.975  1.9600
  0.980  2.0537
  0.985  2.1701
  0.990  2.3263
  0.995  2.5758

6. Jorion P., Value at Risk, McGraw-Hill, 2001. 7. Pearson E. S. and Hartley H. O., Biometrika Tables for Statisticians, Biometrika Trust, 1976, p. 118.

Example If a security gives an average profit of 100 over the reference period with a standard deviation of 80, we have E(p_t) = 100 and σ(p_t) = 80, which allows us to write: VaR_0.95 = 100 − (1.6449 × 80) = −31.6 VaR_0.975 = 100 − (1.9600 × 80) = −56.8 VaR_0.99 = 100 − (2.3263 × 80) = −86.1 The loss incurred by this security will only therefore exceed 31.6 (56.8 and 86.1 respectively) five times (2.5 times and once respectively) in 100 times.
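The quantiles in Table 6.2 and the worked VaR figures can be reproduced with the standard library's NormalDist, applying the formula from the text, VaR_q = E(p_t) − z_q · σ(p_t):

```python
from statistics import NormalDist

def var_normal(mean, sd, q):
    """Normal VaR at confidence level q: mean - z_q * sd (a negative result is a loss)."""
    z_q = NormalDist().inv_cdf(q)   # the z_q quantile of Table 6.2
    return mean - z_q * sd

# The security from the example: E(p_t) = 100, sigma(p_t) = 80.
for q in (0.95, 0.975, 0.99):
    print(q, round(var_normal(100, 80, q), 1))   # -31.6, -56.8, -86.1 as in the text
```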

[Figure 11.5 Independent allocation: the portfolio's systematic-risk vector plotted against factors 1–3, with variables A–D.]

11.4.2 Joint allocation: ‘value’ and ‘growth’ example

As the systematic risk of the portfolio is expressed by its APT factor-sensitivity vector, it can be broken down into the explicative variables ‘growth’ and ‘value’, representing the S&P Value and the S&P Growth (Figure 11.6). [Figure 11.6 Joint allocation: the portfolio's systematic risk decomposed along the APT factors into ‘growth’, ‘value’, and a ‘not explained’ component.] One cannot, however, be content with projecting the portfolio risk vector onto each of the variables. In fact, the ‘growth’ and ‘value’ variables are not necessarily independent statistically. They cannot therefore be represented by geometrically orthogonal variables.

CHAPTER 6 Blattberg R. and Gonedes N., A comparison of stable and Student descriptions as statistical models for stock prices, Journal of Business, Vol. 47, 1974, pp. 244–80. Fama E., Behaviour of stock market prices, Journal of Business, Vol. 38, 1965, pp. 34–105. Johnson N. L. and Kotz S., Continuous Univariate Distribution, John Wiley & Sons, Inc, 1970. Jorion P., Value at Risk, McGraw-Hill, 2001. Pearson E. S. and Hartley H. O., Biometrika Tables for Statisticians, Biometrika Trust, 1976. CHAPTER 7 Abramowitz M. and Stegun A., Handbook of Mathematical Functions, Dover, 1972. Chase Manhattan Bank NA, The Management of Financial Price Risk, Chase Manhattan Bank NA, 1995. 386 Bibliography Chase Manhattan Bank NA, Value at Risk, its Measurement and Uses, Chase Manhattan Bank NA, undated.

pages: 360 words: 85,321

The Perfect Bet: How Science and Math Are Taking the Luck Out of Gambling
by Adam Kucharski
Published 23 Feb 2016

Runs of two or three of the same color were scarcer than they should have been. And runs of a single color—say, a black sandwiched between two reds—were far too common. Pearson calculated the probability of observing an outcome at least as extreme as this one, assuming that the roulette wheel was truly random. This probability, which he dubbed the p value, was tiny. So small, in fact, that Pearson said that even if he’d been watching the Monte Carlo tables since the start of Earth’s history, he would not have expected to see a result that extreme. He believed it was conclusive evidence that roulette was not a game of chance. The discovery infuriated him.

As the ball traveled around the rim a dozen or so times, he gathered enough information to make predictions about where it would land. He only had time to run the experiment twenty-two times before he had to leave the office. Out of these attempts, he predicted the correct number three times. Had he just been making random guesses, the probability he would have got at least this many right (the p value) was less than 2 percent. This persuaded him that the Eudaemons’ strategy worked. It seemed that roulette really could be beaten with physics. Having tested the method by hand, Small and Tse set up a high-speed camera to collect more precise measurements about the ball’s position. The camera took photos of the wheel at a rate of about ninety frames per second.

pages: 288 words: 81,253

Thinking in Bets
by Annie Duke
Published 6 Feb 2018

When scientists publish results of experiments, they share with the rest of their community their methods of gathering and analyzing the data, the data itself, and their confidence in that data. That makes it possible for others to assess the quality of the information being presented, systematized through peer review before publication. Confidence in the results is expressed through both p-values, the probability one would expect to get the result that was actually observed (akin to declaring your confidence on a scale of zero to ten), and confidence intervals (akin to declaring ranges of plausible alternatives). Scientists, by institutionalizing the expression of uncertainty, invite their community to share relevant information and to test and challenge the results and explanations.

ConAgra Foods, 228–29 jobs, 41–46 Johnson, Hollyn, 55 Journal of Experimental Psychology, 55 Journal of Law, Economics, and Organization, 144 Journal of the American Medical Association, 55 judges, 141–44, 147, 148 Jussim, Lee, 146 Kable, Joe, 250n Kahan, Dan, 58, 62–64, 181n Kahn, Herman, 243n Kahneman, Daniel, 12, 14, 36, 52, 61, 181n Katyal, Neal, 140 Kazmaier, Dick, 56–57 Kissinger, Henry, 243n Klein, Gary, 219 Kluge: The Haphazard Evolution of the Human Mind (Marcus), 12–13, 52 Kubrick, Stanley, 19 Kurosawa, Akira, 157 language, 52, 197 Late Show with David Letterman, 119–21, 123, 125, 161, 171, 175, 205 lawyers, 28–29, 93, 110, 167, 202, 221, 222 learning, 2–3, 67, 77–78, 80, 82, 105, 108, 110, 113, 115, 116, 169, 173, 231 from experience, 78–80, 82, 88, 89, 91, 93–95 loop in, 80, 84, 120 poker and, 78 by watching, 96–97, 102 Lederer, Howard, 1–2, 101–2, 106, 123–24, 133–34, 161–62, 244n Lederer, Richard, 90n Lerner, Jennifer, 128–29, 132 Lester, Jason, 244n, 248n Letterman, David, 119–21, 123, 125, 161, 171, 175, 205, 248n Life of Lucullus (Plutarch), 160 Lombardi, Vince, 159 loss aversion, 36 low-fat diet, 54–55, 62, 85–86, 164–65 luck, 4, 7, 10, 11, 21, 22, 34, 35, 46, 79–80, 82, 86–92, 94–98, 101, 102, 110, 111, 113, 121, 123, 124, 129–31, 194, 205 skill vs., 82–85 Ludwig, David, 54–55 Lynch, Marshawn, 5, 7, 217n Lyubomirsky, Sonja, 104 MacCoun, Robert, 90, 166, 168 Madden, John, 159 Maddon, Joe, 100 Magriel, Paul, 244n Marcus, Gary, 12–13, 52 Marshmallow Test, 181n–82n math skills, 64, 181n Matrix, The, 122–23, 175–76 Mauboussin, Michael, 83n Maxwell, James Clerk, 27 Medical Daily, 49 mental contrasting, 223 Merrill Edge, 185 Merton, Robert C., 153 Merton, Robert K., 151, 153–55 Meserve, Russell, 62 Mickelson, Phil, 109, 247n Microsoft, 150 Mill, John Stuart, 137, 140, 163, 169 Mischel, Walter, 181n–82n misconceptions, common, 49 Mitchell, Deborah, 219 Monday Morning Quarterback, 7, 8, 229 Montag, Heidi, 119–20 Morgenstern, Oskar, 19, 23 Morris, 
Benjamin, 6 motivated reasoning, 59–61, 63–64, 94, 102, 108, 115, 132, 136, 181n, 206 MTV, 119–21 Müller-Lyer illusion, 14–15 Myerson, Roger, 19–20 Nabisco, 85, 86 nails, 197 narratives, 60–62, 95–96, 105, 107–9, 157, 160 Nash, John, 19 National Medal of Science, 154 National Science Foundation, 1 natural selection, 91n–92n, 103 Nature, 166 negotiated settlements, 40, 202 New England Journal of Medicine (NEJM), 164, 165 New England Patriots, 5–7, 48, 216–18 New York, 218–20 New Yorker, 6, 218–19 New York Times, 140, 143, 153 Nick the Greek, 75–78, 84, 87, 90, 116 Nietzsche, Friedrich, 186, 187, 189 Night Jerry and Morning Jerry, 180–87 Nobel Prize, 12, 19–20, 36, 153, 166, 243n–44n Normandy landings, 208 Obama, Barack, 140, 146 obesity and weight gain, 55, 85–86, 164 Odysseus, 200–201 Oettingen, Gabriele, 223–24 Olmsted, Frederick Law, 220 On Liberty (Mill), 137 Operation Overlord, 208 optimism, 226 outcomes, 78–82, 86, 88, 95, 108, 113–14, 134, 166–68, 175, 226, 231 blindness to, 166–67 fielding, 82–85, 87, 89–91, 95, 103, 105, 111–15, 121, 194, 195, 205 negative, preparing for, 189, 226 see also future Pariser, Eli, 61 past, 178, 181, 183, 186 and moving regret in front of decisions, 186–89 see also time travel, mental Pavlov, Ivan, 107–8, 134 peer review, 72, 147–50 Pennington, Nancy, 219 Perlmutter, Saul, 166, 168 perspective, 227 Pfizer, 150 physics, 166 pinball, 198 Pleasure of Finding Things Out, The (Feynman), 72n Plutarch, 160 poker, 1–4, 7, 15–18, 28, 30–31, 33, 35, 37–38, 43, 47, 66–67, 75, 81–82, 90–91, 101–3, 105–6, 111, 115, 116, 123–24, 129, 167, 219, 231 belief formation and, 53 chess vs., 20–23, 80, 244n decisions in, 116, 167, 179, 180, 188, 196–98 diversity of opinions and, 139 learning and, 78 long hours of playing, 188–89 loss limits in, 136–37, 187 napkin list of hands in, 101–2, 161–62 possible futures and, 211 scoreboard in, 196 seminars on, 167 six and seven of diamonds in, 53, 59–60, 121 strategic plans and long view in, 179, 180, 200 
strategy group for, 124, 126–27, 131, 133–34, 136–37, 155, 167, 174 suited connectors in, 53–54 Texas Hold’em, 53 tilt in, 197–98 time constraints in, 179 tournaments, 241n watching in, 97 workshopping in, 158–59 political beliefs, 63–64, 141–45, 162–63, 205 social psychologists and, 145–47 Pollan, Michael, 85 pollsters, 32, 230–31, 245n Poundstone, William, 19, 246n Powell, Justice, 143 Power of Habit, The (Duhigg), 106–7 Pratt, Spencer, 119–20 precommitments (Ulysses contracts), 200–203, 212, 221 decision swear jar, 204–7 Predictably Irrational (Ariely), 89n prediction markets, 149–50 premortems, 221–26 president-firing decision, 8–11, 33, 43, 48, 158, 229–30 presidential election of 2016, 32–33, 61n, 230–31, 245n Princess Bride, The, 23–26, 244n Princeton Alumni Weekly, 57 Princeton-Dartmouth football game, 56–59 Prisoner’s Dilemma (Poundstone), 19, 246n privacy, 157 Prospect Theory, 36 Prudential Retirement, 185 psychology, 145–47, 149 Pulitzer, Joseph, 60 p-values, 72 Rashomon, 157 Rashomon Effect, 157–58 rationality and irrationality, 11, 43, 51, 64, 181n, 183, 204 Ulysses contracts and, 201, 203 words, phrases, and thoughts that signal irrationality, 204–7 rats, 87 reconnaissance, 207–12, 218 red teams, 140, 170–71 Reese, Chip, 244n reflexive mind, 12–14, 16, 181n regret, 186–89, 212, 225, 230 Rehnquist, Justice, 143 Reiner, Rob, 244n relationships, 195, 196, 199, 223 relocating, 38–43, 45, 46 Reproducibility Project: Psychology, 149–50 resulting, 7–11, 26, 166 Rethinking Positive Thinking: Inside the New Science of Motivation (Oettingen), 223 retirement, 182, 184–86, 203 Righteous Mind, The: Why Good People Are Divided by Politics and Religion (Haidt), 129–30 risk, 20, 34, 39, 42–44, 46–47, 66, 111, 179 Roberts, Justice, 143 Russo, J.

pages: 506 words: 152,049

The Extended Phenotype: The Long Reach of the Gene
by Richard Dawkins
Published 1 Jan 1982

There is a whole family of ‘mixed strategies’ of the form ‘Dig with probability p, enter with probability 1 – p’, and only one of these is the ESS. I said that the two extremes were joined by a continuum. I meant that the stable population frequency of digging, p* (70 per cent or whatever it is), could be achieved by any of a large number of combinations of pure and mixed individual strategies. There might be a wide distribution of p values in individual nervous systems in the population, including some pure diggers and pure enterers. But, provided the total frequency of digging in the population is equal to the critical value p*, it would still be true that digging and entering were equally successful, and natural selection would not act to change the relative frequency of the two subroutines in the next generation.
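The logic of the critical frequency p* can be made numerical. The payoff functions below are entirely hypothetical (digging yields a fixed payoff; entering pays in proportion to how many burrows the diggers create) and are chosen so that the two subroutines do equally well exactly when 70 per cent of acts are digs, matching the figure quoted in the text:

```python
# Hypothetical frequency-dependent payoffs (illustrative, not from the book's data):
# diggers get a fixed return; enterers do better the more dug burrows exist to occupy.
def payoff_dig(p_dig):
    return 1.4

def payoff_enter(p_dig):
    return 2.0 * p_dig

p_star = 0.7   # solves payoff_dig(p) == payoff_enter(p)

assert payoff_dig(p_star) == payoff_enter(p_star)    # equally successful at p*
assert payoff_enter(0.9) > payoff_dig(0.9)           # too much digging: entering is favoured
assert payoff_enter(0.5) < payoff_dig(0.5)           # too little digging: digging is favoured
```

Any mixture of pure diggers, pure enterers, and probabilistic individuals that brings the population frequency of digging to 0.7 leaves both subroutines with identical payoffs, which is exactly why selection cannot distinguish among those mixtures.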

Classify all individuals into those that entered with a probability less than 0.1, those that entered with a probability between 0.1 and 0.2, those with a probability between 0.2 and 0.3, between 0.3 and 0.4, 0.4 and 0.5, etc. Then compare the lifetime reproductive successes of wasps in the different classes. But supposing we did this, exactly what would the ESS theory predict? A hasty first thought is that those wasps with a p value close to the equilibrium p* should enjoy a higher success score than wasps with some other value of p: the graph of success against p should peak at an ‘optimum’ at p*. But p* is not really an optimum value, it is an evolutionarily stable value. The theory expects that, when p* is achieved in the population as a whole, digging and entering should be equally successful.

Indeed, the analogy with sex ratio theory just mentioned gives positive grounds for expecting that wasps should not vary in digging probability. In accordance with this, a statistical test on the actual data revealed no evidence of inter-individual variation in digging tendency. Even if there were some individual variation, the method of comparing the success of individuals with different p values would have been a crude and insensitive one for comparing the success rates of digging and entering. This can be seen by an analogy. An agriculturalist wishes to compare the efficacy of two fertilizers, A and B. He takes ten fields and divides each of them into a large number of small plots. Each plot is treated, at random, with either A or B, and wheat is sown in all the plots of all the fields.

pages: 447 words: 104,258

Mathematics of the Financial Markets: Financial Instruments and Derivatives Modelling, Valuation and Risk Issues
by Alain Ruttiens
Published 24 Apr 2013

For a given investor, characterized by some utility function U, representing his well-being, assuming his wealth as a portfolio P, if the portfolio return were certain (i.e., deterministic), we would have but, more realistically (even if simplified, in the spirit of this theory), if the portfolio P value is normally distributed in returns, with some rP and σP, where f is some function, often considered as a quadratic curve.4 So that, given the property of the CML (i.e., tangent to the efficient frontier), and some U = f(P) curve, the optimal portfolio must be located at the tangent of U to CML, determining the adequate proportion between B and risk-free instrument.

Moreover, if the data present irregularities in their succession (changes of trends, mean reversion, etc.), the AR process is unable to incorporate such phenomena and works poorly. The generalized form of the previous case, in order to forecast r_t as a function of more than just its previous observed value, can be represented as follows:

r_t = a_1·r_{t−1} + a_2·r_{t−2} + … + a_p·r_{t−p} + ε_t

This is called an AR(p) process, involving the previous p values of the series. There is no rule for determining p, provided it is not excessive (by application of the “parsimony principle”). The above relationship looks like a linear regression, but instead of regressing according to a series of independent variables, this regression uses previous values of the dependent variable itself, hence the “autoregression” name.

9.2 THE MOVING AVERAGE (MA) PROCESS

Let us consider a series of returns consisting in pure so-called “random numbers” {ε_t}, i.i.d., generally distributed following a normal distribution.
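The autoregression described above can be illustrated with synthetic data. The sketch simulates an AR(2) series, r_t = a1·r_{t-1} + a2·r_{t-2} + ε_t, and recovers the coefficients by regressing the series on its own past values (the two-lag least-squares system is solved by hand, so nothing beyond the standard library is needed):

```python
import random

random.seed(7)
a1, a2 = 0.5, -0.3                       # true AR(2) coefficients
r = [0.0, 0.0]
for _ in range(5000):
    r.append(a1 * r[-1] + a2 * r[-2] + random.gauss(0, 1))

# Regress r_t on (r_{t-1}, r_{t-2}): solve the 2x2 normal equations directly.
y  = r[2:]
x1 = r[1:-1]
x2 = r[:-2]
s11 = sum(u * u for u in x1)
s12 = sum(u * v for u, v in zip(x1, x2))
s22 = sum(v * v for v in x2)
b1  = sum(u * w for u, w in zip(x1, y))
b2  = sum(v * w for v, w in zip(x2, y))
det = s11 * s22 - s12 * s12
a1_hat = (b1 * s22 - b2 * s12) / det
a2_hat = (s11 * b2 - s12 * b1) / det
print(round(a1_hat, 2), round(a2_hat, 2))   # close to the true 0.5 and -0.3
```

The "autoregression" name is visible in the code: the regressors x1 and x2 are simply lagged copies of the dependent series itself.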

Social Capital and Civil Society
by Francis Fukuyama
Published 1 Mar 2000

To take the earlier example of the religious sect that encourages honesty and reliability, if these traits are demanded of its members not just in their dealings with other members of the sect but generally in their dealings with other people, then there will be a positive spillover effect into the larger society. Again, Weber argued in effect that sectarian Puritans had an r p value greater than 1. The final factor affecting a society’s supply of social capital concerns not the internal cohesiveness of groups, but rather the way in which they relate to outsiders. Strong moral bonds within a group in some cases may actually serve to decrease the degree to which members of that group are able to trust outsiders and work effectively with them.

pages: 407 words: 104,622

The Man Who Solved the Market: How Jim Simons Launched the Quant Revolution
by Gregory Zuckerman
Published 5 Nov 2019

The team uncovered predictive effects related to volatility, as well as a series of combination effects, such as the propensity of pairs of investments—such as gold and silver, or heating oil and crude oil—to move in the same direction at certain times in the trading day compared with others. It wasn’t immediately obvious why some of the new trading signals worked, but as long as they had p-values, or probability values, under 0.01—meaning they appeared statistically significant, with a low probability of being statistical mirages—they were added to the system. Wielding an array of profitable investing ideas wasn’t nearly enough, Simons soon realized. “How do we pull the trigger?” he asked Laufer and the rest of the team.

pages: 936 words: 85,745

Programming Ruby 1.9: The Pragmatic Programmer's Guide
by Dave Thomas , Chad Fowler and Andy Hunt
Published 15 Dec 2000

When VALUE is used as a pointer to a specific Ruby structure, it is guaranteed always to have an LSB of zero; the other immediate values also have LSBs of zero. Thus, a simple bit test can tell you whether you have a Fixnum. This test is wrapped in a macro, FIXNUM_P. Similar tests let you check for other immediate values.

FIXNUM_P(value) → nonzero if value is a Fixnum
SYMBOL_P(value) → nonzero if value is a Symbol
NIL_P(value)    → nonzero if value is nil
RTEST(value)    → nonzero if value is neither nil nor false

Several useful conversion macros for numbers as well as other standard data types are shown in Table 29.1 on the next page. The other immediate values (true, false, and nil) are represented in C as the constants Qtrue, Qfalse, and Qnil, respectively.

If not, it tries to invoke to_str on the object, throwing a TypeError exception if it can’t. So, if you want to write some code that iterates over all the characters in a String object, you may write the following:

Download samples/extruby_5.rb

static VALUE iterate_over(VALUE original_str) {
  int i;
  char *p;
  VALUE str = StringValue(original_str);

  p = RSTRING_PTR(str);  // may be null
  for (i = 0; i < RSTRING_LEN(str); i++, p++) {
    // process *p
  }
  return str;
}

If you want to bypass the length and just access the underlying string pointer, you can use the convenience method StringValuePtr, which both resolves the string reference and then returns the C pointer to the contents.

pages: 436 words: 123,488

Overdosed America: The Broken Promise of American Medicine
by John Abramson
Published 20 Sep 2004

The comment that a statistically significant finding “may reflect the play of chance” struck me as very odd. Surely the experts who wrote the review article knew that the whole purpose of doing statistics is to determine the degree of probability and the role of chance. Anyone who has taken Statistics 101 knows that p values of .05 or less (p < .05) are considered statistically significant. In this case it means that if the VIGOR study were repeated 100 times, more than 95 of those trials would show that the people who took Vioxx had at least twice as many heart attacks, strokes, and death from any cardiovascular event than the people who took naproxen.

The conventional cutoff for determining statistical significance is a probability (p) of the observed difference between the groups occurring purely by chance less than 5 times out of 100 trials, or p < .05. This translates to: “the probability that this difference will occur at random is less than 5 chances in 100 trials.” The smaller the p value, the less likely it is that the difference between the groups happened by chance, and therefore the stronger—i.e., the more statistically significant—the finding. *The blood levels of all three kinds of cholesterol (total, LDL, and HDL) are expressed as “mg/dL,” meaning the number of milligrams of cholesterol present in one-tenth of a liter of serum (the clear liquid that remains after the cells have been removed from a blood sample).
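The definition in the excerpt can be made concrete by brute force. The sketch below (illustrative, not from the book) treats the p-value as the fraction of null-hypothesis experiments that produce a result at least as extreme as the one observed: here, 60 heads in 100 tosses of a supposedly fair coin.

```python
import random

random.seed(0)

# One-sided p-value by simulation under the null hypothesis (fair coin).
observed = 60
trials = 20000
extreme = sum(
    1
    for _ in range(trials)
    if sum(random.random() < 0.5 for _ in range(100)) >= observed
)
p_value = extreme / trials
print(p_value)  # Monte Carlo estimate; the exact binomial tail is about 0.0284
```

Since p < .05, the convention described above would call the result statistically significant.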

pages: 303 words: 67,891

Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms: Proceedings of the Agi Workshop 2006
by Ben Goertzel and Pei Wang
Published 1 Jan 2007

How can we estimate the statistical significance of cooccurrence of the same words in top portions of two lists in each row of Table 2? Here is one easy way to estimate p-values from above. Given the size of the English core, and assuming that each French-to-English translation is a “blind shot” into the English core (null-hypothesis), we can estimate the probability to find one and the same word in top-twelve portions of both lists: p ~ 2*12*12 / 8,236 = 0.035 (we included the factor 2, because there are two possible ways of aligning the lists with respect to each other4). Therefore, the p-value of the case of word repetition that we see in Table 2 is smaller than 0.035, at least.
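The authors' back-of-the-envelope bound is easy to verify; a one-line check using their counts:

```python
# Probability that a "blind" French-to-English translation lands the same
# word in the top-12 of both lists, given an English core of 8,236 words.
core_size = 8236
top = 12
p_upper = 2 * top * top / core_size  # factor 2 for the two alignments
print(round(p_upper, 3))  # → 0.035
```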

Marxian Economic Theory
by Meghnad Desai
Published 20 May 2013

L    Labour; labour power as sold by the labourer and labour as expended during production.
MP   Materials of production. L and MP together comprise C, which is the same as P.
P    Productive capital.
c    The difference between C' and C.
C    Constant capital.
V    Variable capital.
S    Surplus value.
r = S/V      Rate of surplus value.
g = C/(C+V)  Organic composition of capital.
P    (Value) rate of profit.
P    Rate of profit (ambiguous as to whether money or value).
p    (Money) rate of profit.
Y1   The value of output of Department I.
Y2   The value of output of Department II.
Y    Total value of output.
P1   Price of the commodity produced by Department I.
P2   Price of the commodity produced by Department II.

Quantitative Trading: How to Build Your Own Algorithmic Trading Business
by Ernie Chan
Published 17 Nov 2008

The following code fragment, however, tests for correlation between the two time series:

% A test for correlation.
dailyReturns=(adjcls-lag1(adjcls))./lag1(adjcls);
[R,P]=corrcoef(dailyReturns(2:end,:));
% R =
%    1.0000    0.4849
%    0.4849    1.0000
% P =
%    1    0
%    0    1
% The P value of 0 indicates that the two time series
% are significantly correlated.

Stationarity is not limited to the spread between stocks: it can also be found in certain currency rates. For example, the Canadian dollar/Australian dollar (CAD/AUD) cross-currency rate is quite stationary, both being commodities currencies.
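A standard-library Python analogue of corrcoef's R and P outputs can be sketched as follows. Note the assumptions: it approximates the p-value with a Fisher z-transform and a normal tail, whereas MATLAB's corrcoef uses an exact t-test, so small samples will differ slightly; the data are toy values, not the book's returns.

```python
import math

def pearson_r_and_p(x, y):
    """Pearson correlation plus a two-sided p-value via the Fisher
    z-transform (a normal approximation, not MATLAB's exact t-test)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r = sxy / math.sqrt(sxx * syy)
    z = math.atanh(r) * math.sqrt(n - 3)  # undefined at r = ±1
    p = math.erfc(abs(z) / math.sqrt(2))
    return r, p

# Noisy but strongly related toy series.
x = [1.0, 2.0, 1.5, 3.0, 2.5, 4.0, 3.5, 5.0]
y = [1.1, 1.9, 1.8, 2.7, 2.9, 3.6, 3.9, 4.8]
r, p = pearson_r_and_p(x, y)
print(round(r, 3), p < 0.05)
```

As in the MATLAB output, a p-value near zero indicates significant correlation.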

pages: 1,331 words: 163,200

Hands-On Machine Learning With Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems
by Aurélien Géron
Published 13 Mar 2017

A node whose children are all leaf nodes is considered unnecessary if the purity improvement it provides is not statistically significant. Standard statistical tests, such as the χ2 test, are used to estimate the probability that the improvement is purely the result of chance (which is called the null hypothesis). If this probability, called the p-value, is higher than a given threshold (typically 5%, controlled by a hyperparameter), then the node is considered unnecessary and its children are deleted. The pruning continues until all unnecessary nodes have been pruned. Figure 6-3 shows two Decision Trees trained on the moons dataset (introduced in Chapter 5).
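The χ² test behind this pruning rule can be sketched for the 2×2 (one degree of freedom) case, where the p-value has a closed form, P(X > x) = erfc(√(x/2)). The class counts below are hypothetical; larger tables need the incomplete gamma function (e.g. scipy.stats.chi2_contingency in practice).

```python
import math

def chi2_2x2(table):
    """Pearson chi-squared test of independence for a 2x2 table.
    Returns the statistic and its p-value (df = 1 closed form)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row = [a + b, c + d]
    col = [a + c, b + d]
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            exp = row[i] * col[j] / n  # expected count under independence
            stat += (obs - exp) ** 2 / exp
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Hypothetical split: left child holds (30 class A, 10 class B),
# right child holds (12 class A, 18 class B).
stat, p = chi2_2x2([(30, 10), (12, 18)])
print(round(stat, 2), p < 0.05)  # → 8.75 True
```

Under the rule described above, p below the threshold means the split's purity improvement is unlikely to be pure chance, so the node would be kept rather than pruned.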

pages: 242 words: 68,019

Why Information Grows: The Evolution of Order, From Atoms to Economies
by Cesar Hidalgo
Published 1 Jun 2015

Here we consider a country to be an exporter of a product if its percapita exports of that product are at least 25 percent of the world’s average per capita exports of that product. This allows us to control for the size of the product’s global market and the size of the country’s population. 5. In the case of Honduras and Argentina the probability of the observed overlap (what is known academically as its p-value) is 4.4 × 10–4. The same probability is 2 × 10–2 for the overlap observed between Honduras and the Netherlands and 4 × 10–3 for the overlap observed between Argentina and the Netherlands. 6. César A. Hidalgo and Ricardo Hausmann, “The Building Blocks of Economic Complexity,” Proceedings of the National Academy of Sciences 106, no. 26 (2009): 10570–10575. 7.

pages: 245 words: 12,162

In Pursuit of the Traveling Salesman: Mathematics at the Limits of Computation
by William J. Cook
Published 1 Jan 2011

To compute this value, we find the minimum of the four sums

trip({2, 3, 4, 5}, 2) + cost(2, 6)
trip({2, 3, 4, 5}, 3) + cost(3, 6)
trip({2, 3, 4, 5}, 4) + cost(4, 6)
trip({2, 3, 4, 5}, 5) + cost(5, 6)

corresponding to the possible choices for the next-to-last city in the subpath from 1 to 6, that is, we optimally travel to the next-to-last city then travel over to city 6. This construction of a five-city trip value from several four-city values is the heart of the Held-Karp method. The algorithm proceeds as follows. We first compute all one-city values: these are easy; for example, trip({2}, 2) is just cost(1, 2). Next, we use the one-city values to compute all two-city values. Then we use the two-city values to compute all three-city values, and on up the line.
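The procedure Cook describes translates almost directly into code. The sketch below (with a made-up four-city cost matrix, cities indexed from 0 rather than 1) builds trip values from smaller subsets exactly as in the text, then closes the tour by returning to the start city.

```python
from itertools import combinations

def held_karp(cost):
    """Held-Karp dynamic program: trip[(S, j)] is the cheapest path from
    city 0 through every city in S, ending at city j."""
    n = len(cost)
    trip = {(frozenset([j]), j): cost[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            s = frozenset(subset)
            for j in subset:
                # Minimize over the choice of next-to-last city k.
                trip[(s, j)] = min(
                    trip[(s - {j}, k)] + cost[k][j] for k in subset if k != j
                )
    full = frozenset(range(1, n))
    # Close the tour by returning to city 0.
    return min(trip[(full, j)] + cost[j][0] for j in range(1, n))

# A small symmetric cost matrix (made up for illustration).
cost = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]
print(held_karp(cost))  # → 23
```

The table of subset values grows as n·2ⁿ, which is exactly why the method, while far better than trying all (n−1)! tours, is still limited to modest n.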

Spite: The Upside of Your Dark Side
by Simon McCarthy-Jones
Published 12 Apr 2021

Rotemberg, “Populism and the Return of the ‘Paranoid Style’: Some Evidence and a Simple Model of Demand for Incompetence as Insurance Against Elite Betrayal,” Journal of Comparative Economics 46, no. 4 (2018): 988–1005. 12. I. Bohnet and R. Zeckhauser, “Trust, Risk and Betrayal,” Journal of Economic Behavior and Organization 55, no. 4 (2004): 467–484. 13. For the statistically minded of you, the p-value of this effect was p=0.07, and the authors made no correction to alpha for performing multiple statistical tests. 14. J. Graham, J. Haidt, and B. A. Nosek, “Liberals and Conservatives Rely on Different Sets of Moral Foundations,” Journal of Personality and Social Psychology 96, no. 5 (2009): 1029–1046. 15.

pages: 263 words: 75,455

Quantitative Value: A Practitioner's Guide to Automating Intelligent Investment and Eliminating Behavioral Errors
by Wesley R. Gray and Tobias E. Carlisle
Published 29 Nov 2012

On a rolling 5-year basis there are only a few short instances where the strategy's performance does not add value after controlling for risk. The 10-year rolling chart tells the story vividly: over the long-term, Quantitative Value has consistently created value for investors. Table 12.5 shows the full sample coefficient estimates for the four asset-pricing models. We set out P-values below each estimate and represent the probability of seeing the estimate given the null hypothesis is zero. MKT-RF represents the excess return on the market-weight returns of all New York Stock Exchange (NYSE)/American Stock Exchange (AMEX)/Nasdaq stocks. SMB is a long/short factor portfolio that captures exposures to small capitalization stocks.

pages: 366 words: 76,476

Dataclysm: Who We Are (When We Think No One's Looking)
by Christian Rudder
Published 8 Sep 2014

For issues that have to do with sex only indirectly, such as ratings from one race to another, gays and straights also show similar patterns. Male-female relationships allowed for the least repetition and widest resonance per unit of space, so I made the choice to focus on them. My second decision, to leave out statistical esoterica, was made with much less regret. I don’t mention confidence intervals, sample sizes, p values, and similar devices in Dataclysm because the book is above all a popularization of data and data science. Mathematical wonkiness wasn’t what I wanted to get across. But like the spars and crossbeams of a house, the rigor is no less present for being unseen. Many of the findings in the book are drawn from academic, peer-reviewed sources.

Hands-On Machine Learning With Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems
by Aurelien Geron
Published 14 Aug 2019

A node whose children are all leaf nodes is considered unnecessary if the purity improvement it provides is not statistically significant. Standard statistical tests, such as the χ2 test, are used to estimate the probability that the improvement is purely the result of chance (which is called the null hypothesis). If this probability, called the p-value, is higher than a given threshold (typically 5%, controlled by a hyperparameter), then the node is considered unnecessary and its children are deleted. The pruning continues until all unnecessary nodes have been pruned. Figure 6-3 shows two Decision Trees trained on the moons dataset (introduced in Chapter 5).

pages: 741 words: 199,502

Human Diversity: The Biology of Gender, Race, and Class
by Charles Murray
Published 28 Jan 2020

A more precise description is given in the note.[79] The Johnson study presented the results for all 42 tests, but calculated effect sizes only for those that met a stricter than normal standard of statistical significance (p < .01 instead of p < .05) because of the large number of tests involved. Results for the residual effects on 21 of the subtests that met that statistical standard are shown in the following table. I omit the p values. All but two of the p values for the residual effects were at the .001 level.[80] The effect sizes stripped of g are ordered from the largest for females (positive) to the largest for males (negative).

COGNITIVE SEX DIFFERENCES IN THE MISTRA SAMPLE

Assessment activity                                              Overall effect size   Stripped of g
Coding (ID of symbol-number pairings)                            +0.56                 +0.83
Perceptual speed (evaluation of symbol pairs)                    +0.37                 +0.68
Spelling (multiple choice)                                       ns                    +0.66
Word fluency (production of anagrams)                            ns                    +0.64
ID of familial relationships within a family tree                ns                    +0.63
Rote memorization of meaningful pairings                         +0.33                 +0.60
Production of words beginning and ending with specified letters  ns                    +0.57
Vocabulary (multiple choice)                                     ns                    +0.50
Rote memorization of meaningless pairings                        ns                    +0.42
Chronological sequencing of pictures                             –0.28                 –0.30
Information (recall of factual knowledge)                        –0.29                 –0.39
Trace of a path through a grid of dots                           –0.42                 –0.40
Matching of rotated alternatives to probe                        ns                    –0.45
Reproduction of 2-D designs of 3-D blocks                        –0.34                 –0.48
Outline of cutting instructions to form the target figure        –0.39                 –0.48
Arithmetic (mental calculation of problems presented verbally)   –0.36                 –0.53
ID of unfolded version of a folded probe                         –0.44                 –0.59
ID of matched figures after rotation                             –0.55                 –0.75
ID of parts missing in pictures of common objects                –0.60                 –0.81
ID of rotated versions of 2-D representation of 3-D objects      –0.92                 –1.04
ID of mechanical principles and tools                            –1.18                 –1.43

Source: Adapted from Johnson and Bouchard (2007): Table 4.

pages: 271 words: 83,944

The Sellout: A Novel
by Paul Beatty
Published 2 Mar 2016

Back then he was an assistant professor in urban studies, at UC Brentwood, living in Larchmont with the rest of the L.A. intellectual class, and hanging out in Dickens doing field research for his first book, Blacktopolis: The Intransigence of African-American Urban Poverty and Baggy Clothes. “I think an examination of the confluence of independent variables on income could result in some interesting r coefficients. Frankly, I wouldn’t be surprised by p values in the .75 range.” Despite the smug attitude, Pops took a liking to Foy right away. Though Foy was born and raised in Michigan, it wasn’t often Dad found somebody in Dickens who knew the difference between a t-test and an analysis of variance. After debriefing over a box of donut holes, everyone—locals and Foy included—agreed to meet on a regular basis, and the Dum Dum Donut Intellectuals were born.

pages: 301 words: 85,126

AIQ: How People and Machines Are Smarter Together
by Nick Polson and James Scott
Published 14 May 2018

*   We also simulated the 24 games prior to game 1 of the 2007 season, so that the 25-game average was well-defined at the beginning of the 176-game stretch in question. This implies that the rolling 25-game winning percentage starting from that first game in 2007 actually went back to mid-2005. †   If you’ve taken a statistics class, you may recognize this number as the p-value (p = 0.23) under the null hypothesis of no cheating. ‡   Two synonyms for anomalies that you may have encountered are “signals in the noise” or “violations of the null hypothesis.” §   This inscription, Decus et Tutamen, remained on English coins into 2017, when it was sadly removed from the latest version of the £1 coin

The Impact of Early Life Trauma on Health and Disease
by Lanius, Ruth A.; Vermetten, Eric; Pain, Clare
Published 11 Jan 2011

[Fig. 8.1, panels a–d, plotted against ACE Score: (a) lifetime history of depression (%), women and men; (b) suicide attempts; (c) psychiatric disorders; (d) abused alcohol / ever hallucinated (%).] The relationship between ACE Score and self-acknowledged chronic depression is illustrated in Fig. 8.1a [5]. Should one doubt the reliability of self-acknowledged chronic depression, there is a similar but stronger relationship between ACE Score and later suicide attempts, as shown in the exponential progression of Fig. 8.1b [6]. The p value of all graphic depictions herein is 0.001 or lower. One continues to see a proportionate relationship between ACE Score and depression by analysis of prescription rates for antidepressant medications after a 10-year prospective follow-up, now approximately 50 to 60 years after the ACEs occurred (Fig. 8.1c) [7].

The first was that child sexual abuse (CSA) (or absence of CSA) during one period would predict CSA (or absence of CSA) during the subsequent period. The second component examined the association between density of CSA during each stage and all morphometric measures. Numerical values represent standardized beta weights and their associated p values. The dotted lines were evaluated in the model but were not significantly predictive of any relationship between the variables. (From Andersen et al. [23] with permission.) individuals had significantly reduced occipital GMV. However, it appeared that loss of GMV was a consequence of exposure to childhood abuse and not a result of intimate-partner violence or development of PTSD [44].

Monte Carlo Simulation and Finance
by Don L. McLeish
Published 1 Apr 2005

Evaluate the Chi-squared statistic χ2obs for a test that these points are independent uniform on the cube, where we divide the cube into 8 subcubes, each having sides of length 1/2. Carry out the test by finding P[χ2 > χ2obs], where χ2 is a random chi-squared variate with the appropriate number of degrees of freedom. This quantity P[χ2 > χ2obs] is usually referred to as the “significance probability” or “p-value” for the test. If we suspected too much uniformity to be consistent with the assumption of independent uniform, we might use the other tail of the test, i.e. evaluate P[χ2 < χ2obs]. Do so and comment on your results.
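A sketch of this exercise in Python (my own code, not from the book): classify each point into one of the 8 half-side subcubes, count, and form the chi-squared statistic. For brevity it compares the statistic to the 5% critical value for 7 degrees of freedom (about 14.07) rather than computing the tail probability P[χ2 > χ2obs] the exercise asks for.

```python
# Sketch: chi-squared uniformity test on the unit cube, 8 subcubes of side 1/2.
import random

def chi_squared_cube(points):
    # Each coordinate < 1/2 or >= 1/2 selects one of 2**3 = 8 subcubes.
    counts = [0] * 8
    for x, y, z in points:
        idx = (x >= 0.5) * 4 + (y >= 0.5) * 2 + (z >= 0.5)
        counts[idx] += 1
    expected = len(points) / 8
    return sum((c - expected) ** 2 / expected for c in counts)

random.seed(1)
pts = [(random.random(), random.random(), random.random()) for _ in range(8000)]
stat = chi_squared_cube(pts)
print(f"chi-squared statistic = {stat:.2f} (df = 7, 5% critical value ~ 14.07)")
```

With a proper generator the statistic should usually fall below the critical value; a value far above it is evidence against independent uniformity.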

pages: 305 words: 89,103

Scarcity: The True Cost of Not Having Enough
by Sendhil Mullainathan
Published 3 Sep 2014

Flynn, “Massive IQ Gains in 14 Nations: What IQ Tests Really Measure,” Psychological Bulletin 101 (1987): 171–91. A forceful case for environmental and cultural influences on IQ is Richard Nisbett’s Intelligence and How to Get It: Why Schools and Cultures Count (New York: W. W. Norton, 2010). people in a New Jersey mall: These experiments are summarized along with details on sample sizes and p-values in Anandi Mani, Sendhil Mullainathan, Eldar Shafir, and Jiaying Zhao, “Poverty Impedes Cognitive Function” (working paper, 2012). unable to come up with $2,000 in thirty days: A. Lusardi, D. J. Schneider, and P. Tufano, Financially Fragile Households: Evidence and Implications (National Bureau of Economic Research, Working Paper No. 17072, May 2011).

pages: 340 words: 94,464

Randomistas: How Radical Researchers Changed Our World
by Andrew Leigh
Published 14 Sep 2018

A meta-analytic review of choice overload’, Journal of Consumer Research, vol. 37, no. 3, 2010, pp. 409–25. 45 Alan Gerber & Neil Malhotra, ‘Publication bias in empirical sociological research’, Sociological Methods & Research, vol. 37, no. 1, 2008, pp. 3–30; Alan Gerber & Neil Malhotra, ‘Do statistical reporting standards affect what is published? Publication bias in two leading political science journals’, Quarterly Journal of Political Science, vol. 3, no. 3, 2008, pp. 313–26; E.J. Masicampo & Daniel R. Lalande, ‘A peculiar prevalence of p values just below .05’, Quarterly Journal of Experimental Psychology, vol. 65, no. 11, 2012, pp. 2271–9; Kewei Hou, Chen Xue & Lu Zhang, ‘Replicating anomalies’, NBER Working Paper 23394, Cambridge, MA: National Bureau of Economic Research, 2017. 46 Alexander A. Aarts, Joanna E. Anderson, Christopher J.

The Unknowers: How Strategic Ignorance Rules the World
by Linsey McGoey
Published 14 Sep 2019

Typically, however, staff members at the Office for New Drugs, the office responsible for drug approvals, tend to dismiss epidemiological studies as less reliable than RCTs, and so considerable money and effort is spent on studies that are systemically ignored in practice. Graham told me that many of his colleagues ‘only believe – this has been said to me more than once by people from the Office of New Drugs, very high level people – they will only believe that an adverse effect is real when a controlled clinical trial has been done that shows an effect with a p value of less than 0.05.’26 Unless adverse effects are visible on RCT evidence, FDA reviewers have difficulty accepting that a drug’s risks may be severe – even though staff members know that many RCTs are too short and have too few participants to reveal those very risks. Persistent faith in how regulation should work in theory – the hope that, ideally, regulators will pick up problems before a drug is licensed – leads to a sort of institutionally sanctioned strategic ignorance of problems that emerge after a drug is on the market.

pages: 315 words: 87,035

May Contain Lies: How Stories, Statistics, and Studies Exploit Our Biases—And What We Can Do About It
by Alex Edmans
Published 13 May 2024

Lunar phases and stock returns’, Journal of Empirical Finance 13, 1–23. 9 Carroll, Douglas et al. (2002): ‘Admissions for myocardial infarction and World Cup football: database survey’, British Medical Journal 325, 1439–42. 10 Trovato, Frank (1998): ‘The Stanley Cup of hockey and suicide in Quebec, 1951–1992’, Social Forces 77, 105–26. 11 White, Garland F. (1989): ‘Media and violence: the case of professional football championship games’, Aggressive Behavior 15, 423–33. 12 Edmans, Alex, Diego García and Øyvind Norli (2007): ‘Sports sentiment and stock returns’, Journal of Finance 62, 1967–98. 13 Chanavat, André and Katharine Ramsden (2013): ‘Mining the metrics of board diversity’, Thomson Reuters. 14 ‘The effect of the 2014 World Cup on stock markets – Alex Edmans and CNN’s Richard Quest’. Available at https://bit.ly/soccercnn 6. Data is Not Evidence: Causation * The ‘p-value’ corresponds to the significance level, which needs to be 0.05 or lower for a result to be deemed significant. † More technical terms for ‘common causes’ are ‘omitted factors’, ‘omitted variables’ or ‘confounding variables’. ‡ Ideally, we’d have read all the required papers and books before the birth.

pages: 1,088 words: 228,743

Expected Returns: An Investor's Guide to Harvesting Market Rewards
by Antti Ilmanen
Published 4 Apr 2011

Similar considerations suggest that we might reduce the CPI and D/P components for equities. The fourth column shows that using 2.3% CPI (consensus forecast for long-term inflation) and 2.0% D/P, a forward-looking measure predicts only 5.6% nominal equity returns for the long term. Admittedly the D/P value could be raised if we use a broader carry measure including net share buybacks, so I add 0.75% to the estimate (and call it “D/P+”). Even more bullish return forecasts than 6.4% would have to rely on growth optimism (beyond the historical 1.3% rate of real earnings-per-share growth) or expected further P/E expansion in the coming decades (my analysis assumes none).

One crucial question is whether persistent industry sector biases should be allowed or whether sector neutrality should be pursued. Sector neutrality. Practitioner studies highlight the empirical benefits of sector-neutral approaches. Yet, academic studies and many popular investment products (FF and LSV, MSCI-Barra and S&P value/growth indices, and the RAFI fundamental index) do nothing to impose sector neutrality. Without any such adjustments, persistent industry concentrations are possible in the long–short portfolio. For example, in early 2008, the long (value) portfolio heavily overweighted finance stocks while the short (growth) portfolio overweighted energy stocks.

pages: 628 words: 107,927

Node.js in Action
by Mike Cantelon , Marc Harter , Tj Holowaychuk and Nathan Rajlich
Published 27 Jul 2013

</p> Jade also supports a non-JavaScript form of iteration: the each statement. each statements allow you to cycle through arrays and object properties with ease. The following is equivalent to the previous example, but using each instead:

each message in messages
  p= message

You can cycle through object properties using a slight variation, like this:

each value, key in post
  div
    strong #{key}
    p value

Conditionally rendering template code
Sometimes templates need to make decisions about how data is displayed depending on the value of the data. The next example illustrates a conditional in which, roughly half the time, the script tag is outputted as HTML:

- var n = Math.round(Math.random() * 1) + 1
- if (n == 1) {
  script alert('You win!')

pages: 354 words: 26,550

High-Frequency Trading: A Practical Guide to Algorithmic Strategies and Trading Systems
by Irene Aldridge
Published 1 Dec 2009

Table 4.5 reports summary statistics for EUR/USD order flows observed by Citibank and sampled at the weekly frequency between January 1993 and July 1999: A) statistics for weekly EUR/USD order flow aggregated across Citibank’s corporate, trading, and investing customers; and B) order flows from end-user segments cumulated over a week. The last four columns on the right report autocorrelations ρi at lag i and p-values for the null that ρi = 0. The summary statistics on the order flow data are from Evans and Lyons (2007), who define order flow as the total value of EUR/USD purchases (in USD millions) initiated against Citibank’s quotes.

Table 4.4 Daily Dollar Volume in Most Active Foreign Exchange Products on CME Electronic Trading (Globex) on 6/12/2009, Computed as Average Price Times Total Contract Volume Reported by CME

Currency | Futures daily volume (USD thousands) | Mini-futures daily volume (USD thousands)
Australian Dollar | 5,389.8 | N/A
British Pound | 17,575.6 | N/A
Canadian Dollar | 6,988.1 | N/A
Euro | 32,037.9 | 525.3
Japanese Yen | 8,371.5 | 396.2
New Zealand Dollar | 426.5 | N/A
Swiss Franc | 4,180.6 | N/A

[Table 4.5 entries omitted: maximum and minimum weekly order flows, skewness and kurtosis, and autocorrelations at lags 1, 2, 4, and 8 (p-values in parentheses) for each customer segment.]

*Skewness of order flows measures whether the flows skew toward either the positive or the negative side of their mean, and kurtosis indicates the likelihood of extremely large or small order flows.

pages: 430 words: 107,765

The Quantum Magician
by Derek Künsken
Published 1 Oct 2018

“It has penetrated habitat and communications, but not fortifications,” the major said. “Yes, and its infection of habitat and comms is very selective. The distribution suggests to me that it has infected support systems.” “That’s not random,” Cassandra said. “No.” Cassandra had a brief urge to recalculate the p-value to verify the non-randomness, but Iekanjika wouldn’t care and Bel would already have calculated it. “The infection pattern doesn’t follow the systems architecture, but this pattern could have been made by selectively shielding critical systems prior to infection,” Bel said. “So the Puppets know something is up,” Cassandra said.

pages: 370 words: 107,983

Rage Inside the Machine: The Prejudice of Algorithms, and How to Stop the Internet Making Bigots of Us All
by Robert Elliott Smith
Published 26 Jun 2019

If you have a look at the test instructions, it will report (in some form, probably a table) four IF/THEN rules, with uncertainty factors:

If + then pregnant, with P(pregnant|+)
If − then pregnant, with P(pregnant|−)
If + then not pregnant, with P(not pregnant|+)
If − then not pregnant, with P(not pregnant|−)

where the four P values are probabilities. This situation reflects what Lane and Maxfield call truth uncertainty, where there is a clear true or false outcome in the case of a well-understood question or a well-posed problem. In this case, it is entirely appropriate to use the statistic heuristic, and that is in fact precisely what the probabilities in the rules above reflect.
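To make the four conditional probabilities concrete, here is a small Bayes'-rule sketch; the sensitivity, specificity, and prior are invented numbers for illustration, not figures from the book.

```python
# Illustrative only: sensitivity, specificity, and prior are assumed values,
# used to show how the four P(...) quantities in the rules follow from Bayes' rule.
sensitivity = 0.99   # P(+ | pregnant)       (assumed)
specificity = 0.98   # P(- | not pregnant)   (assumed)
prior = 0.10         # P(pregnant)           (assumed)

p_pos = sensitivity * prior + (1 - specificity) * (1 - prior)   # P(+)
p_neg = 1 - p_pos

p_pregnant_given_pos = sensitivity * prior / p_pos
p_pregnant_given_neg = (1 - sensitivity) * prior / p_neg

print(f"P(pregnant|+)     = {p_pregnant_given_pos:.3f}")
print(f"P(not pregnant|+) = {1 - p_pregnant_given_pos:.3f}")
print(f"P(pregnant|-)     = {p_pregnant_given_neg:.4f}")
print(f"P(not pregnant|-) = {1 - p_pregnant_given_neg:.4f}")
```

Each pair of complementary rules sums to 1, which is a quick sanity check on any such table of uncertainty factors.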

pages: 385 words: 118,901

Black Edge: Inside Information, Dirty Money, and the Quest to Bring Down the Most Wanted Man on Wall Street
by Sheelah Kolhatkar
Published 7 Feb 2017

He told Martoma that in spite of the negative results, he was still hopeful that bapi might work, because he had observed some improvements in his own patients who were taking it. “I don’t know how you can say that when the statistical evidence shows otherwise,” Martoma said. He cited the exact p-values, a number that indicated whether a result was statistically significant or not, and a handful of other specific figures that had just been included in the presentation to the investigators. The results still hadn’t been publicly released. Ross was flabbergasted. How could Martoma possibly know about those details?

pages: 384 words: 112,971

What’s Your Type?
by Merve Emre
Published 16 Aug 2018

“Neither of these authors has had formal training in psychology, and consequently little of the very extensive evidence they have developed on the instrument is in a form for immediate assimilation by psychologists generally,” Chauncey warned his staff as he prepared them to start work on the indicator. “Indeed, many of the ideas employed are so different from what psychologists are accustomed to that it has sometimes been difficult to keep from rejecting the whole approach without first examining it closely enough.” Those who worshipped at the altar of facts and figures, of t-tests and p-values, had little patience for Isabel’s kitchen table experiments, the imprecise, if enthusiastic, attempts at validation that had accompanied Forms A, B, C, and D. (“I sometimes kind of shook in my shoes with the old [versions] because the scores would be coming out on the basis of so few questions,” she later recalled.)

pages: 320 words: 33,385

Market Risk Analysis, Quantitative Methods in Finance
by Carol Alexander
Published 2 Jan 2007

The probability value of the t statistics is also given for convenience, and this shows that whilst the constant term is not significant the log return on the S&P 500 is a very highly significant determinant of the Amex log returns.

Table I.4.7 Coefficient estimates for the Amex and S&P 500 model

            | Coefficients | Standard error | t stat  | p value
Intercept   | −0.0002      | 0.0003         | −0.6665 | 0.5053
S&P 500 rtn | 1.2885       | 0.0427         | 30.1698 | 0.0000

Following the results in Table I.4.7, we may write the estimated model, with t ratios in parentheses, as

Ŷ = −0.0002 + 1.2885 X
    (−0.6665)  (30.1698)

where X and Y are the daily log returns on the S&P 500 and on Amex, respectively. The Excel output automatically tests whether the explanatory variable should be included in the model, and with a t ratio of 30.1698 this is certainly the case.
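The entries in such a table can be reproduced by hand. Below is a pure-Python sketch on synthetic data (not the Amex/S&P 500 series): estimate the slope by OLS, compute its standard error from the residual variance, and form the t ratio as coefficient divided by standard error.

```python
# Pure-Python OLS for a single regressor: coefficient, standard error, t ratio.
import math
import random

def ols_slope_t(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    beta = sxy / sxx                                # slope estimate
    alpha = my - beta * mx                          # intercept estimate
    resid = [yi - (alpha + beta * xi) for xi, yi in zip(x, y)]
    s2 = sum(e * e for e in resid) / (n - 2)        # residual variance
    se_beta = math.sqrt(s2 / sxx)                   # standard error of slope
    return beta, se_beta, beta / se_beta            # t stat = coef / std error

random.seed(0)
x = [random.gauss(0, 1) for _ in range(200)]
y = [1.3 * xi + random.gauss(0, 0.5) for xi in x]   # true slope 1.3
beta, se, t = ols_slope_t(x, y)
print(f"coefficient {beta:.4f}, std error {se:.4f}, t stat {t:.2f}")
```

The p value then comes from the t distribution with n − 2 degrees of freedom; a t ratio of 30 is significant at any conventional level.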

pages: 351 words: 123,876

Beautiful Testing: Leading Professionals Reveal How They Improve Software (Theory in Practice)
by Adam Goucher and Tim Riley
Published 13 Oct 2009

The recommendation is to start with the simplest tests and work up to more advanced tests. The simplest tests, besides being easiest to implement, are also the easiest to understand. A software developer is more likely to respond well to being told, “Looks like the average of your generator is 7 when it should be 8,” than to being told, “I’m getting a small p-value from my Kolmogorov-Smirnov test.” Range Tests If a probability distribution has a limited range, the simplest thing to test is whether the output values fall in that range. For example, an exponential distribution produces only positive values. If your test detects a single negative value, you’ve found a bug.
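A minimal range test of the kind described above, in Python; the generator under test is the standard library's expovariate, and the function name is illustrative, not from the book.

```python
# Range test plus a simple mean check for an exponential generator:
# every sample must be positive, and the sample mean should sit near
# the theoretical mean.
import random

def range_test_exponential(samples, mean, tolerance=0.1):
    failures = [s for s in samples if s <= 0]            # range violations
    sample_mean = sum(samples) / len(samples)
    mean_ok = abs(sample_mean - mean) / mean < tolerance
    return len(failures) == 0, mean_ok

random.seed(42)
xs = [random.expovariate(1 / 8) for _ in range(10_000)]  # mean should be 8
in_range, mean_ok = range_test_exponential(xs, mean=8)
print(in_range, mean_ok)
```

A failure message like "average is 7 when it should be 8" is exactly what this kind of check produces, which is why it is a good first test to hand a developer.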

Braiding Sweetgrass
by Robin Wall Kimmerer

We measure and record and analyze in ways that might seem lifeless but to us are the conduits to understanding the inscrutable lives of species not our own. Doing science with awe and humility is a powerful act of reciprocity with the more-than-human world. I’ve never met an ecologist who came to the field for the love of data or for the wonder of a p-value. These are just ways we have of crossing the species boundary, of slipping off our human skin and wearing fins or feathers or foliage, trying to know others as fully as we can. Science can be a way of forming intimacy and respect with other species that is rivaled only by the observations of traditional knowledge holders.

pages: 1,829 words: 135,521

Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython
by Wes McKinney
Published 25 Sep 2017

Analysis of variance (ANOVA)
Time series analysis: AR, ARMA, ARIMA, VAR, and other models
Nonparametric methods: Kernel density estimation, kernel regression
Visualization of statistical model results

statsmodels is more focused on statistical inference, providing uncertainty estimates and p-values for parameters. scikit-learn, by contrast, is more prediction-focused. As with scikit-learn, I will give a brief introduction to statsmodels and how to use it with NumPy and pandas.

1.4 Installation and Setup

Since everyone uses Python for different applications, there is no single solution for setting up Python and required add-on packages.

Advanced Software Testing—Vol. 3, 2nd Edition
by Jamie L. Mitchell and Rex Black
Published 15 Feb 2015

This edge is drawn differently than the other edges—as a dashed line rather than a solid line, showing that while he is using it theoretically in the math proof, it is not actually there in the code. This extra edge is counted when it is shown, and it must be added when not shown (as in our example). The P value stands for the number of connected components.12 In our examples, we do not have any connected components, so by definition, P = 1. To avoid confusion, we abstract out the P and simply use 2 (P equal to 1 plus the missing theoretical line, connecting the exit to the entrance node). The mathematics of this is beyond the scope of this book; suffice it to say, unless an example directed graph contains an edge from the exit node to the enter node, the formula that we used is correct.

pages: 336 words: 163,867

How to Diagnose and Fix Everything Electronic
by Michael Geier
Published 6 Jan 2011

Chapter 11 A-Hunting We Will Go: Signal Tracing and Diagnosis 201 If you see voltage there (typically 5 volts, but possibly less and very occasionally more) but no oscillation, the crystal may be dead. Without a clock to drive it, the micro will sit there like a rock. If you do see oscillation, check that its peak-to-peak (p-p) value is fairly close to the total power supply voltage running the micro. If it’s a 5-volt micro and the oscillation is 1 volt p-p, the micro won’t get clocked. If you have power and a running micro, you should see some life someplace. Lots of products include small backup batteries on their boards. See Figure 11-1.

pages: 660 words: 141,595

Data Science for Business: What You Need to Know About Data Mining and Data-Analytic Thinking
by Foster Provost and Tom Fawcett
Published 30 Jun 2013

However, researchers have developed techniques to decide the stopping point statistically. Statistics provides the notion of a “hypothesis test,” which you might recall from a basic statistics class. Roughly, a hypothesis test tries to assess whether a difference in some statistic is not due simply to chance. In most cases, the hypothesis test is based on a “p-value,” which gives a limit on the probability that the difference in statistic is due to chance. If this value is below a threshold (often 5%, but problem specific), then the hypothesis test concludes that the difference is likely not due to chance. So, for stopping tree growth, an alternative to setting a fixed size for the leaves is to conduct a hypothesis test at every leaf to determine whether the observed difference in (say) information gain could have been due to chance.
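As an illustration of the idea (not necessarily the exact test the authors have in mind), a chi-squared statistic on the 2x2 table of split branch versus class can serve as the leaf-level hypothesis test: split only if the statistic clears the critical value for the chosen p-value threshold.

```python
# Chi-squared statistic for a 2x2 contingency table [[a, b], [c, d]]:
# rows = branches of the candidate split, columns = class labels.
def chi2_2x2(a, b, c, d):
    n = a + b + c + d
    stat = 0.0
    for obs, row, col in ((a, a + b, a + c), (b, a + b, b + d),
                          (c, c + d, a + c), (d, c + d, b + d)):
        exp = row * col / n            # expected count under independence
        stat += (obs - exp) ** 2 / exp
    return stat

CRITICAL_5PCT_1DF = 3.84  # 5% critical value, 1 degree of freedom

# Candidate split that separates the classes well: keep growing.
print(chi2_2x2(40, 10, 10, 40))   # well above 3.84, so split
# Near-uniform split: the difference could easily be chance, so stop.
print(chi2_2x2(26, 24, 24, 26))   # below 3.84, so don't split
```

Tree implementations that stop this way are doing exactly the "is this difference due to chance?" check at every leaf, with the p-value threshold playing the role of the stopping knob.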

pages: 513 words: 152,381

The Precipice: Existential Risk and the Future of Humanity
by Toby Ord
Published 24 Mar 2020

For example, if the risk were above 0.34 percent per century there would have been a 99.9 percent chance of going extinct before now.60 We thus say that risk above 0.34 percent per century is ruled out at the 99.9 percent confidence level—a conclusion that is highly significant by the usual scientific standards (equivalent to a p-value of 0.001).61 So our 2,000 centuries of Homo sapiens suggests a “best-guess” risk estimate between 0 percent and 0.05 percent, with an upper bound of 0.34 percent. But what if Homo sapiens is not the relevant category? We are interested in the survival of humanity, and we may well see this as something broader than our species.
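The 0.34 percent figure can be reproduced directly: it is the per-century risk r solving (1 − r)^2000 = 0.001, i.e. the risk level at which 2,000 centuries of survival would have had only a 0.1 percent chance.

```python
# Survival-analysis arithmetic from the text: with constant per-century
# extinction risk r, surviving 2,000 centuries has probability (1 - r)**2000.
# Setting that equal to 0.001 gives the risk ruled out at 99.9% confidence.
r = 1 - 0.001 ** (1 / 2000)
print(f"risk ruled out at 99.9% confidence: {r:.4%}")   # about 0.34% per century
survival = (1 - r) ** 2000
print(f"check: survival probability = {survival:.4f}")  # 0.0010
```

The same one-liner with 0.5 in place of 0.001 gives the "best-guess" upper end of roughly 0.035 percent mentioned in such calculations.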

God Created the Integers: The Mathematical Breakthroughs That Changed History
by Stephen Hawking
Published 28 Mar 2007

If the total number of letters is not exhausted, we will take a third index such that a_{m,n,0}, a_{m,n,1}, a_{m,n,2}, a_{m,n,3}, …, a_{m,n,P−1} are, in general, a system of conjoined letters; and in this way we will reach the conclusion that N = P^α, α being a certain number equal to that of the different indices about which we are concerned. The general form of the letters will be […], being the indices which can take each of the P values 0, 1, 2, 3, …, P−1. Through the way in which we have proceeded we can also see that all the substitutions in the group H will be of the form […], because each index corresponds to a system of conjoined letters. If P is not a prime number, we will reason about the group of permutations of any one of the systems of conjoined letters, as we reasoned about the group G, when we replaced each index by a certain number of new indices, and we will find P = R^α, and so forth; whence, finally, N = p^ν, p being a prime number.

Now I say that unless this group of linear substitutions does not always belong [appartienne], as we will see, to the equations which are solvable by radicals, it will always enjoy this property: that if in any one of its substitutions there are n letters which are fixed, then n will divide the number of the letters. And, in fact, whatever the number of letters is which remains fixed, we will be able to express this circumstance by linear equations which give all the indices of one of the fixed letters, by means of a certain number of those among them. Giving p values to each of these indices as they remain arbitrary, we will have p^m systems of values, m being a certain number. In the case with which we are concerned, m is necessarily < 2, and, consequently, is found to be between 0 and 1. Therefore the number of substitutions is known to be no greater than p²(p² − 1)(p² − p).

There will be found herewith[1] the proof of the following theorems: 1°. In order that a primitive equation be solvable by radicals its degree must be p^ν, p being a prime. 2°. All the permutations of such an equation have the form x_{k,l,m,…} | x_{ak+bl+cm+…+h, a′k+b′l+c′m+…+h′, a″k+…}, k, l, m, … being ν indices, which, taking p values each, denote all the roots. The indices are taken with respect to a modulus p; that is to say, the root will be the same if we add a multiple of p to one of the indices. The group which is obtained on applying all the substitutions of this linear form contains in all p^ν(p^ν − 1)(p^ν − p) ··· (p^ν − p^{ν−1}) permutations.

pages: 1,606 words: 168,061

Python Cookbook
by David Beazley and Brian K. Jones
Published 9 May 2013

Solution The ctypes module can be used to create Python callables that wrap around arbitrary memory addresses. The following example shows how to obtain the raw, low-level address of a C function and how to turn it back into a callable object: >>> import ctypes >>> lib = ctypes.cdll.LoadLibrary(None) >>> # Get the address of sin() from the C math library >>> addr = ctypes.cast(lib.sin, ctypes.c_void_p).value >>> addr 140735505915760 >>> # Turn the address into a callable function >>> functype = ctypes.CFUNCTYPE(ctypes.c_double, ctypes.c_double) >>> func = functype(addr) >>> func <CFunctionType object at 0x1006816d0> >>> # Call the resulting function >>> func(2) 0.9092974268256817 >>> func(0) 0.0 >>> Discussion To make a callable, you must first create a CFUNCTYPE instance.

pages: 566 words: 160,453

Not Working: Where Have All the Good Jobs Gone?
by David G. Blanchflower
Published 12 Apr 2021

He says, colorfully, the IYI has been wrong, historically, on “Stalinism, Maoism, GMOs, Iraq, Libya, Syria, lobotomies, urban planning, low carbohydrate diets, gym machines, behaviorism, transfats, Freudianism, portfolio theory, linear regression, Gaussianism, Salafism, dynamic stochastic equilibrium modeling, housing projects, selfish gene, election forecasting models, Bernie Madoff (pre-blowup) and p-values. But he is convinced that his current position is right.” He doesn’t mean me of course! 24. Letter to the Queen from the British Academy signed by 33 economists including 9 members, ex- and future, of the MPC and civil servants: http://www.feed-charity.org/user/image/besley-hennessy2009a.pdf. 25.

pages: 673 words: 164,804

Peer-to-Peer
by Andy Oram
Published 26 Feb 2001

A rewiring of a regular graph Surprisingly, what they found was that with larger p, clustering remains high but pathlength drops precipitously, as shown in Figure 14.7. Rewiring with p as low as 0.001 (that is, rewiring only about 0.1% of the edges) cuts the pathlength in half while leaving clustering virtually unchanged. At a p value of 0.01, the graph has taken on hybrid characteristics. Locally, its clustering coefficient still looks essentially like that of the regular graph. Globally, however, its pathlength has nearly dropped to the random-graph level. Watts and Strogatz dubbed graphs with this combination of high local clustering and short global pathlengths small-world graphs .
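The rewiring experiment described here can be sketched in a few dozen lines of Python; the parameters (n = 200, k = 4 neighbors per side, p = 0.01) are illustrative, not the ones Watts and Strogatz used.

```python
# Watts-Strogatz sketch: ring lattice, random rewiring with probability p,
# then measure average pathlength (BFS) and the clustering coefficient.
import random
from collections import deque

def ring_lattice(n, k):
    # Each node connects to its k nearest neighbors on each side.
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for i in range(1, k + 1):
            adj[v].add((v + i) % n)
            adj[(v + i) % n].add(v)
    return adj

def rewire(adj, p, rng):
    n = len(adj)
    for u, v in list({(u, v) for u in adj for v in adj[u] if u < v}):
        if rng.random() < p:
            w = rng.randrange(n)
            if w != u and w not in adj[u]:
                adj[u].discard(v)
                adj[v].discard(u)
                adj[u].add(w)
                adj[w].add(u)
    return adj

def avg_pathlength(adj):
    total = count = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(dist.values())
        count += len(dist) - 1
    return total / count

def clustering(adj):
    cs = []
    for v, nbrs in adj.items():
        nbrs = list(nbrs)
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for i in range(k) for j in range(i + 1, k)
                    if nbrs[j] in adj[nbrs[i]])
        cs.append(2 * links / (k * (k - 1)))
    return sum(cs) / len(cs)

rng = random.Random(7)
regular = ring_lattice(200, 4)
print(f"regular: L={avg_pathlength(regular):.2f} C={clustering(regular):.2f}")
rewired = rewire(ring_lattice(200, 4), 0.01, rng)
print(f"p=0.01:  L={avg_pathlength(rewired):.2f} C={clustering(rewired):.2f}")
```

Even at p = 0.01 the handful of shortcuts typically collapses the pathlength while leaving clustering nearly untouched, which is the small-world signature described above.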

pages: 733 words: 179,391

Adaptive Markets: Financial Evolution at the Speed of Thought
by Andrew W. Lo
Published 3 Apr 2017

See Lo and MacKinlay (1988) for details. 4. Of course, the expected payoff of most investments also increases with the investment horizon, enough to entice many to be long-term investors. We’ll come back to this issue later in chapter 8 when we explore the strange world of hedge funds, but for now let’s focus on the variance. 5. The p-value of a z-score of 7.51 is 2.9564×10−14. This result was based on an equally weighted index of all stocks traded on the New York, American, and NASDAQ stock exchanges during our sample. When we applied our test to a value-weighted version of that stock index—one where larger stocks received proportionally greater weight—the rejection was less dramatic but still compelling: the odds of the Random Walk Hypothesis in this case were slightly less than 1 out of 100. 6.
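The footnote's number can be checked with the standard library: the p-value of a z-score of 7.51 is the normal tail probability, computable from the complementary error function.

```python
# Tail probability P(Z > z) for a standard normal, via math.erfc.
import math

z = 7.51
p = 0.5 * math.erfc(z / math.sqrt(2))   # one-sided tail probability
print(f"p-value = {p:.4e}")             # approximately 2.9564e-14
```

At these magnitudes erfc is far more accurate than computing 1 minus the CDF, which would underflow to zero in double precision.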

pages: 968 words: 224,513

The Art of Assembly Language
by Randall Hyde
Published 8 Sep 2003

For example: static p:procedure( i:int32; c:char ) := &SomeProcedure; Note that SomeProcedure must be a procedure whose parameter list exactly matches p's parameter list (i.e., two value parameters, the first is an int32 parameter and the second is a char parameter). To indirectly call this procedure, you could use either of the following sequences: push( Value_for_i ); push( Value_for_c ); call( p ); or p( Value_for_i, Value_for_c ); The high-level language syntax has the same features and restrictions as the high-level syntax for a direct procedure call. The only difference is the actual call instruction HLA emits at the end of the calling sequence. Although all the examples in this section use static variable declarations, don't get the idea that you can declare simple procedure pointers only in the static or other variable declaration sections.

pages: 848 words: 227,015

On the Edge: The Art of Risking Everything
by Nate Silver
Published 12 Aug 2024

Although co-opted by right-wing groups in the mid-2010s, the creator of Pepe the Frog has disavowed that connotation and the meme is typically apolitical today. p-hacking: Any of a number of dubious methods to obtain an ostensibly statistically significant result to increase the chances for publication in an academic journal. The term is derived from the p-value, a measure of statistical significance in classical statistics. Pip: The spots on a die or a playing card. Pit boss: A senior casino employee in charge of a section of gaming tables. Pits: The parts of a casino with -EV table games like blackjack; where a degen goes when he busts out of the poker tournament but still wants more action.

pages: 892 words: 91,000

Valuation: Measuring and Managing the Value of Companies
by Tim Koller , McKinsey , Company Inc. , Marc Goedhart , David Wessels , Barbara Schwimmer and Franziska Manoury
Published 16 Aug 2015

At the end of the research phase, there are three possible outcomes: success combined with an increase in the value of a marketable drug to $5,594 million, success combined with a decrease in the value of a marketable drug to $3,327 million, and failure leading to a drug value of $0.

[27] The formula for estimating the upward probability is: q* = ((1 + k)^T − d)/(u − d) = (1.073 − 0.77)/(1.30 − 0.77) = 0.86, where k is the expected return on the asset.

[Exhibit 35.18 Decision Tree: R&D Option with Technological and Commercial Risk ($ million). Note: NPV = net present value of project; q* = binomial (risk-neutral) probability of an increase in marketable drug value; p = probability of technological success.]

The Art of Computer Programming: Sorting and Searching
by Donald Ervin Knuth
Published 15 Jan 1998

In fact, it is easy to see from (1) that

(3)
e_n = a_{n−1},
d_n = a_{n−1} + e_{n−1} = a_{n−1} + a_{n−2},
c_n = a_{n−1} + d_{n−1} = a_{n−1} + a_{n−2} + a_{n−3},
b_n = a_{n−1} + c_{n−1} = a_{n−1} + a_{n−2} + a_{n−3} + a_{n−4},
a_n = a_{n−1} + b_{n−1} = a_{n−1} + a_{n−2} + a_{n−3} + a_{n−4} + a_{n−5},

where a_0 = 1 and where we let a_n = 0 for n = −1, −2, −3, −4. The pth-order Fibonacci numbers F_n^(p) are defined by the rules

F_n^(p) = F_{n−1}^(p) + F_{n−2}^(p) + ··· + F_{n−p}^(p), for n ≥ p;
F_n^(p) = 0, for 0 ≤ n ≤ p − 2;
F_{p−1}^(p) = 1.

In other words, we start with p − 1 0s, then 1, and then each number is the sum of the preceding p values. When p = 2, this is the usual Fibonacci sequence; for larger values of p the sequence was apparently first studied by V. Schlegel in El Progreso Matemático 4 (1894), 173–174. Schlegel derived the generating function

∑_{n≥0} F_n^(p) z^n = z^(p−1)/(1 − z − z² − ··· − z^p) = (z^(p−1) − z^p)/(1 − 2z + z^(p+1)).

The last equation of (3) shows that the number of runs on T1 during a six-tape polyphase merge is a fifth-order Fibonacci number: a_n = F_{n+4}^(5). In general, if we set P = T − 1, the polyphase merge distributions for T tapes will correspond to Pth-order Fibonacci numbers in the same way.
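The recurrence above ("start with p − 1 0s, then 1, then each term is the sum of the preceding p terms") translates directly into code. A short sketch (function name is my own):

```python
def pth_order_fibonacci(p, count):
    """First `count` values F_0, F_1, ... of the pth-order Fibonacci numbers:
    p - 1 zeros, then a one, then each term is the sum of the preceding p terms."""
    vals = [0] * (p - 1) + [1]
    while len(vals) < count:
        vals.append(sum(vals[-p:]))  # F_n = F_{n-1} + ... + F_{n-p}
    return vals[:count]
```

With p = 2 this reproduces the usual Fibonacci sequence 0, 1, 1, 2, 3, 5, …; with p = 5 it gives the fifth-order numbers that govern six-tape polyphase distributions.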

UNIX® Network Programming, Volume 1: The Sockets Networking API, 3rd Edition
by W. Richard Stevens, Bill Fenner, Andrew M. Rudoff
Published 8 Jun 2013

As we mentioned earlier, group addresses are recognized and handled specially by receiving interfaces. [Figure 21.2 Format of IPv6 multicast addresses. Both formats begin with the byte ff, followed by a 4-bit flags field and a 4-bit scope field; the first format (flags 0 0 P T) continues with 80 bits of zero and a 32-bit group ID, while the unicast-based format (flags 0 0 1 1) carries a plen field, a 64-bit prefix, and a 32-bit group ID.] Two formats are defined for IPv6 multicast addresses, as shown in Figure 21.2. When the P flag is 0, the T flag differentiates between a well-known multicast group (a value of 0) and a transient multicast group (a value of 1). A P value of 1 designates a multicast address that is assigned based on a unicast prefix (defined in RFC 3306 [Haberman and Thaler 2002]). If the P flag is 1, the T flag also must be 1 (i.e., unicast-based multicast addresses are always transient), and the plen and prefix fields are set to the prefix length and value of the unicast prefix, respectively.
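The flags and scope nibbles described above are the second byte of the address, so they can be decoded with a couple of shifts. A minimal sketch using Python's standard ipaddress module (the helper name is my own):

```python
import ipaddress

def multicast_flags(addr):
    """Decode an IPv6 multicast address's flags and scope nibbles.
    Returns (P, T, scope): P=1 means unicast-prefix-based, T=1 means transient."""
    packed = ipaddress.IPv6Address(addr).packed
    if packed[0] != 0xFF:
        raise ValueError("not an IPv6 multicast address")
    flags = packed[1] >> 4      # high nibble of second byte: 0 0 P T
    scope = packed[1] & 0x0F    # low nibble: 4-bit scope
    return (flags >> 1) & 1, flags & 1, scope
```

For the well-known all-nodes address ff02::1 this yields P = 0, T = 0, scope = 2 (link-local); an address beginning ff3e:: has P = 1 and T = 1, consistent with the rule that unicast-based multicast addresses are always transient.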

pages: 1,993 words: 478,072

The Boundless Sea: A Human History of the Oceans
by David Abulafia
Published 2 Oct 2019

Crawford, Dilmun and Its Gulf Neighbours (Cambridge, 1998), p. 8. 8. D. T. Potts, The Arabian Gulf in Antiquity , vol. 1: From Prehistory to the Fall of the Achaemenid Empire (Oxford, 1990), p. 41; Crawford, Dilmun , p. 14. 9. Potts, Arabian Gulf in Antiquity , vol. 1, pp. 56, 59–61. 10. M. Roaf and J. Galbraith, ‘Pottery and P-Values: “Seafaring Merchants of Ur” Re-Examined’, Antiquity , vol. 68 (1994), no. 261, pp. 770–83; Crawford, Dilmun , pp. 24, 27. 11. D. K. Chakrabarti, The External Trade of the Indus Civilization (New Delhi, 1990), pp. 31–7, 141. 12. J. Connan, R. Carter, H. Crawford, et al., ‘A Comparative Geochemical Study of Bituminous Boat Remains from H3, As-Sabiyah (Kuwait), and RJ-2, Ra’s al-Jinz (Oman)’, Arabian Archaeology and Epigraphy , vol. 16 (2005), pp. 21–66. 13.