Term | Definition |
---|---|

ABC Model | The Analysis of Behavior & Cognition (ABC) model is a method for illustrating how we think (absorb and assess information internally) so we can understand the risks and evidence behind our decisions, how those decisions influence others, and how they shape our external behavior. |

Affinity Diagram | A method of grouping ideas (e.g., during a brainstorming session) into similar categories to help team members easily visualize their contributions and guide them in making final selections for next steps. |

Alpha Risk | A formal measurement of the risk of a false positive defined for statistical tests, typically during hypothesis testing. Alpha risk (a.k.a. a Type I Error or false positive) represents the amount of false-positive risk you're willing to allow for any statistical test you run. In typical situations, 5% is a common amount of risk statisticians allow for false positive errors in their analyzed data, meaning they're willing to accept a 5% chance that their data will yield a false positive result. In high-risk situations (e.g., building weapons, healthcare, etc.) where precision and accuracy in the results are critical, a lower amount of risk is usually preferred; in those cases, it's not uncommon for statisticians to set the alpha risk level at 1% or lower. Alpha risk is also subtracted from 1.0 to calculate your confidence level, so a confidence level of 95% simply means you're 95% confident in the statistical results, which likewise means there's a 5% chance (or risk) that you're wrong. A judicial example of alpha risk is the risk of convicting an innocent person. A statistical example is the risk of saying a factor causes a difference when it really doesn't. A practical example is the risk of fixing something that isn't broken. Compare to Beta Risk. |
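The relationship between alpha risk, confidence level, and a test decision can be sketched in a few lines. This is a minimal illustration with made-up p-values, not output from any particular statistical package:

```python
# Hedged sketch: alpha risk vs. confidence level, with hypothetical p-values.

def confidence_level(alpha: float) -> float:
    """Confidence level is 1.0 minus the alpha risk."""
    return 1.0 - alpha

def reject_null(p_value: float, alpha: float = 0.05) -> bool:
    """Reject the null hypothesis when the p-value falls below alpha."""
    return p_value < alpha

alpha = 0.05                           # common default: 5% false-positive risk
print(confidence_level(alpha))        # 0.95 -> 95% confidence
print(reject_null(0.03, alpha))       # True: evidence of a difference
print(reject_null(0.20, alpha))       # False: fail to reject

# High-risk domains may prefer a stricter alpha:
print(reject_null(0.03, alpha=0.01))  # False under a 1% alpha level
```

Note how the same hypothetical p-value of 0.03 leads to rejection at a 5% alpha but not at the stricter 1% level.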

Analysis Results Compilation | A method of compiling the results of multiple statistical tests into a single spreadsheet in order to have an easy reference of the critical results and conclusions from those tests. Very often during hypothesis testing there can be dozens of tests run on different types of data and from different perspectives. For the statistician, it can get complex to track the results of all those tests. This method allows for quickly and easily documenting those results that can also benefit the team when reviewing the hypothesis testing results and for appending as project documentation. |

ANOVA (Analysis of Variance) Test | A test used to measure the statistical difference in the mean (a.k.a. average) of a continuous (a.k.a. variable or numerical) value across two or more groups defined by discrete (a.k.a. attribute or categorical) factors. It is generally used when the distribution of the continuous value is normal, hence the use of the mean. When the distribution is non-normal, the mean may not be valid, so the median would be a better measure of central tendency; in those cases, try the Mood's Median Test or the Kruskal-Wallis Test. |
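The one-way ANOVA F statistic behind this test can be computed by hand: it compares the variation *between* group means to the variation *within* groups. A sketch with illustrative measurements (the sample data below is hypothetical):

```python
# Hedged sketch of the one-way ANOVA F statistic (illustrative data only).
from statistics import mean

def anova_f(groups):
    """Return the F statistic for a one-way ANOVA across the given groups."""
    k = len(groups)                     # number of groups
    n = sum(len(g) for g in groups)     # total observations
    grand = mean(x for g in groups for x in g)
    # Between-group sum of squares: how far each group mean sits from the grand mean.
    ssb = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations around their own group mean.
    ssw = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ssb / (k - 1)) / (ssw / (n - k))

# Hypothetical example: the same part measured from three machines.
f_stat = anova_f([[5.1, 4.9, 5.0], [5.6, 5.8, 5.7], [5.0, 5.2, 5.1]])
print(round(f_stat, 2))
```

A large F value (the p-value is then read from the F distribution) suggests at least one group mean differs from the others.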

Association Test | One of the Chi-Square tests, used for comparing two or more observed discrete (a.k.a. attribute or categorical) values that don't have a fixed or evenly proportionate set of outcomes. For example, comparing gender vs. political affiliation for a group of surveyed people; although each factor may itself have a fixed set of outcomes (e.g., gender recorded as male or female), the factors aren't being compared to themselves but to a different factor (political affiliation). |
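The Chi-Square statistic for an association test compares each observed count against the count expected if the two factors were independent. A minimal sketch, using a hypothetical 2x2 gender-vs-affiliation table (the counts are made up):

```python
# Hedged sketch of a Chi-Square association statistic (hypothetical survey counts).

def chi_square_stat(table):
    """Chi-square statistic for a contingency table of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count assumes the row and column factors are independent.
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: gender (male, female); columns: affiliation (A, B) -- illustrative only.
observed = [[30, 20],
            [20, 30]]
print(round(chi_square_stat(observed), 2))
```

The statistic is then compared against a chi-square distribution (degrees of freedom = (rows - 1) x (columns - 1)) to judge whether the factors are associated.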

Attribute ARR Test (MSA) | A test used in a Measurement System Analysis (MSA) that evaluates the trustworthiness of discrete (a.k.a. attribute or categorical) data according to three perspectives: 1) Accuracy, 2) Repeatability, and 3) Reproducibility. The common test is for single attributes that are discrete, but a Multiple Attribute ARR test can be run that also allows for testing continuous (a.k.a. variable or numerical) data. |
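Two of the three perspectives can be sketched as simple agreement percentages: repeatability (does an appraiser agree with themselves across trials?) and accuracy (do their ratings match a known standard?). This is a simplified illustration with hypothetical pass/fail ratings, not the full scoring a statistical package would produce:

```python
# Hedged sketch of attribute agreement scoring (hypothetical pass/fail ratings).

def repeatability(trials):
    """Fraction of parts where an appraiser agreed with themselves across all trials."""
    agree = sum(1 for ratings in zip(*trials) if len(set(ratings)) == 1)
    return agree / len(trials[0])

def accuracy(trials, standard):
    """Fraction of parts where all of an appraiser's ratings match the known standard."""
    agree = sum(1 for ratings, truth in zip(zip(*trials), standard)
                if set(ratings) == {truth})
    return agree / len(standard)

standard = ["pass", "pass", "fail", "pass", "fail"]  # expert reference values
trial_1  = ["pass", "pass", "fail", "fail", "fail"]  # appraiser A, first look
trial_2  = ["pass", "pass", "fail", "pass", "fail"]  # appraiser A, second look

print(repeatability([trial_1, trial_2]))             # self-agreed on 4 of 5 parts
print(accuracy([trial_1, trial_2], standard))        # matched the standard on 4 of 5
```

Reproducibility would extend the same idea by comparing ratings *across* appraisers rather than within one appraiser's trials.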

Average (Mean) | A measure of central tendency calculated by adding up a set of continuous (a.k.a. variable or numerical) values and dividing the total by the count of those values. Also known as the Mean, it is a reliable measurement when the continuous values being measured have a normal distribution. If the distribution is non-normal (positively or negatively skewed), the Mean is no longer a trustworthy representation of central tendency; in those cases, the Median is more reliable. |
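The effect of skew on the mean is easy to demonstrate: a single outlier drags the mean toward the tail while the median stays put. A quick sketch with made-up values:

```python
# Hedged sketch: mean vs. median on symmetric vs. skewed data (made-up values).
from statistics import mean, median

symmetric = [4, 5, 5, 6, 5, 4, 6, 5]
skewed    = [4, 5, 5, 6, 5, 4, 6, 50]  # one outlier creates a right (positive) skew

print(mean(symmetric), median(symmetric))  # both sit at 5: mean is trustworthy here
print(mean(skewed), median(skewed))        # mean jumps toward 50; median stays at 5
```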