Term | Definition |
---|---|
Alpha Risk | The risk of a false positive in a statistical hypothesis test. Alpha risk (a.k.a. Type I error or false positive) represents the amount of false-positive risk you're willing to accept for any statistical test you run. In most situations, 5% is the conventional level statisticians allow: they accept a 5% chance that their data will yield a false positive result. In high-risk situations (e.g., weapons manufacturing, healthcare) where precision and accuracy in the results are critical, a lower level of risk is preferred; in those cases, it's not uncommon for statisticians to set the alpha risk at 1% or lower. Subtracting the alpha risk from 1.0 gives your confidence level, so a 95% confidence level simply means you're 95% confident in the statistical result, which likewise means there's a 5% chance you're wrong. A judicial example of alpha risk is the risk of convicting an innocent person. A statistical example is the risk of concluding a factor causes a difference when it really doesn't. A practical example is the risk of fixing something that isn't broken. Compare to Beta Risk. |
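
The relationship between alpha risk, confidence level, and the reject/don't-reject decision described above can be sketched in a few lines of Python (the function names and sample p-values are illustrative, not from any particular library):

```python
def confidence_level(alpha: float) -> float:
    """Confidence level is the complement of the alpha risk."""
    return 1.0 - alpha

def reject_null(p_value: float, alpha: float = 0.05) -> bool:
    """Reject the null hypothesis when the p-value falls below alpha."""
    return p_value < alpha

# A conventional 5% alpha risk corresponds to 95% confidence.
print(confidence_level(0.05))   # 0.95
# High-stakes settings might use alpha = 0.01 (99% confidence).
print(confidence_level(0.01))   # 0.99
# With the default alpha of 0.05, a p-value of 0.03 rejects the null...
print(reject_null(0.03))        # True
# ...while a p-value of 0.08 does not.
print(reject_null(0.08))        # False
```

Lowering alpha (say, from 0.05 to 0.01) makes a false positive less likely but, all else equal, raises the beta risk of missing a real effect.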